
ECE 361: Digital Communication

Lecture 8: Capacity of the AWGN Channel

Introduction
In the last two lectures we have seen that it is possible to communicate rate efficiently and
reliably. In this lecture we will see what the fundamental limit to the largest rate of such a
reliable communication strategy is. This fundamental limit is called the capacity.

Examples
We can see what the fundamental limits to reliable communication are in the context of the
scenarios in the last two lectures:

1. AWGN channel: With binary modulation and random linear coding at the transmit-
ter and ML decoding at the receiver, we have seen that the largest rate of reliable
communication is

    R∗ = 1 − log₂(1 + e^(−SNR/2)).    (1)
This is the capacity of the AWGN channel when the transmitter is restricted to do
linear coding and binary modulation.

2. Erasure channel: We developed this model in Lecture 7 in the context of simplifying
the receiver structure. But it is a very useful abstract model in its own right, and it is
widely used to model large packet networks (such as the Internet). The basic model is
the following: the transmitter transmits one bit at a time (you could replace the word
"bit" by "packet"). The receiver either receives the bit correctly or it is told that the
bit got erased. There is only a single parameter in this channel: the erasure probability
p (the chance that any single transmitted bit will get erased before reaching the
receiver).
What is the largest data rate at which we can hope to communicate reliably? Well,
since only a single bit is sent at any time, the data rate cannot be more than 1 bit
per unit time. This is rather trivial and we can tighten our argument as follows: the
receiver receives only a fraction 1 − p of the total bits sent (the remaining p fraction of
the total bits sent got erased). So, the data rate for reliable communication could not
have been any more than the fraction of bits that the receiver got without erasures.
We can thus conclude that the data rate is no more than 1 − p bits per unit time. We
can say something stronger: if the data rate is more than 1 − p bits per unit time then
the reliability of communication is arbitrarily poor (the chance of not getting all the
bits correctly at the receiver is very close to 100%).
What should the transmitter do to ensure that we really can communicate at rates close
to this upper limit? If the transmitter knew in advance where the erasures are going
to be, then it could simply ignore these time instants and communicate the bits only
over the remaining time instants. This, of course, would let reliable communication
occur at data rates very close to 1 − p. But the positions of the erasures are unlikely
to be known in advance. We saw linear coding strategies in Lecture 7 that can still
achieve reliable communication at rates very close to 1 − p even in the absence of
knowledge of the erasure locations at the transmitter.
So we can conclude:

• Communication at data rate larger than 1 − p bits per unit time entails arbitrarily
poor reliability.
• Reliable communication can be achieved by an appropriate transmit-receive strat-
egy as long as the data rate is less than 1 − p bits per unit time.

The quantity 1 − p represents a sharp and fundamental threshold for the data rate of
reliable communication: no strategy exists when the data rate is larger than 1 − p and
there do exist strategies when the data rate is less than 1 − p. These two different
aspects are summarized by the single sentence:

The capacity of the erasure channel is 1 − p bits per unit time.
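The concentration argument behind this threshold is easy to check numerically: over many transmissions, the fraction of bits that survive clusters tightly around 1 − p. A minimal simulation sketch (the helper name is ours, not from the lecture):

```python
import random

def erasure_sim(p, n=100_000, seed=1):
    """Simulate n bit transmissions over an erasure channel with
    erasure probability p; return the fraction delivered unerased."""
    rng = random.Random(seed)
    received = sum(1 for _ in range(n) if rng.random() >= p)
    return received / n

p = 0.3
frac = erasure_sim(p)
# The delivered fraction concentrates around 1 - p, the capacity.
print(f"p = {p}: delivered fraction = {frac:.4f}, capacity = {1 - p:.4f}")
```

For n = 100,000 trials the observed fraction is within a fraction of a percent of 1 − p, which is exactly why rates above 1 − p cannot be sustained reliably.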

In the rest of this lecture we will see what the capacity of the AWGN channel is. Before
we do this, we set the stage by highlighting a subtle difference between energy and power
constraints on the transmitter.

Power Constraint
In our discussion so far, we have considered the energy constraint on the transmit voltages:

    |x[m]|² ≤ E,  ∀ m.    (2)

This constraint is also called a peak power constraint. An alternative and weaker constraint
is on the average power:
    Σ_{m=1}^{N} |x[m]|² ≤ N E.    (3)

The peak power constraint in Equation (2) implies the average power constraint in Equa-
tion (3), but not vice versa. In this lecture we will consider the weaker average transmit
power constraint. Our focus is the usual AWGN channel

    y[m] = x[m] + w[m],   m = 1, 2, . . .    (4)

where w[m] is i.i.d. (independent and identically distributed) with statistics at any time
being Gaussian (zero mean and variance σ²). In the last two lectures we had restricted the
transmit voltage to be one of only two possible voltages (±√E); now we allow any real
voltage as long as the average power constraint in Equation (3) is met. We will denote the
ratio
    SNR := E/σ²    (5)
as the signal to noise ratio of the channel.
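To make the average power constraint concrete: a codeword may now use arbitrary real voltages, as long as its per-symbol power averages to at most E. Below is a small sketch (helper name ours) that draws a Gaussian codeword, a natural choice since Shannon's argument uses Gaussian-like inputs, and rescales it to meet Equation (3) with equality:

```python
import math, random

def gaussian_codeword(N, E, seed=0):
    """Draw a length-N codeword i.i.d. N(0, E), then rescale so the
    average power (1/N) * sum x[m]^2 equals E exactly, satisfying
    the constraint of Equation (3)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, math.sqrt(E)) for _ in range(N)]
    power = sum(v * v for v in x) / N
    scale = math.sqrt(E / power)
    return [scale * v for v in x]

x = gaussian_codeword(1000, E=4.0)
avg_power = sum(v * v for v in x) / len(x)
print(f"average power = {avg_power:.6f} (constraint: <= 4.0)")
```

Note that individual symbols in such a codeword can exceed √E in magnitude; only the average is constrained, which is what makes this constraint weaker than the peak constraint of Equation (2).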

Capacity
It turns out that the largest rate of arbitrarily reliable communication is

    Cawgn := (1/2) log₂(1 + SNR)  bits/channel use.    (6)
This is the most important formula in communications: a sort of equivalent of the more
famous formula from physics:

    E = mc².    (7)

That formula was derived, as is very well known, by Albert Einstein. The communication
equivalent (cf. Equation (6)) was derived by Claude Shannon in 1948. Again, the operational
meaning of the capacity Cawgn is as before: for every rate below Cawgn there exist transmitter-
receiver strategies that ensure arbitrarily reliable communication. Furthermore, for any rate
larger than Cawgn communication is hopelessly unreliable.
We won’t quite go into how Equation (6) was derived, but we will work to see how it is
useful to communication engineers. We do this next.
As a starting point, it is instructive to see how the capacity performs at low and high
SNRs.
At high SNR, we can approximate 1 + SNR by SNR and then
    Cawgn ≈ (1/2) log₂ SNR  bits/channel use.    (8)
We see that for every quadrupling of SNR the capacity increases by one bit. This is exactly
the same behavior we have seen very early in this course, indeed way back in Lecture 1.
At low SNR, we have
    Cawgn ≈ (1/2)(log₂ e) SNR  bits/channel use.    (9)
In this situation a quadrupling of SNR also quadruples the capacity due to the linear relation
between capacity and SNR.
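Both regimes are easy to verify numerically against the exact formula of Equation (6); a quick sanity-check sketch:

```python
import math

def capacity(snr):
    """Exact AWGN capacity, Equation (6): 0.5 * log2(1 + SNR)."""
    return 0.5 * math.log2(1.0 + snr)

# High SNR: quadrupling the SNR adds about one bit of capacity.
print(capacity(400) - capacity(100))   # close to 1.0

# Low SNR: capacity is nearly linear in SNR, per Equation (9).
snr = 1e-3
approx = 0.5 * math.log2(math.e) * snr
print(capacity(snr), approx)           # nearly equal
```

The high-SNR gap is not exactly 1 bit because 1 + SNR differs slightly from SNR; the approximation sharpens as SNR grows.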

Transmitter and Receiver Designs


What do the transmitter and receiver strategies that hope to operate close to this fundamental
limit look like?

• Transmitter: In our attempts to understand reliable communication at non-zero rates
(in the last two lectures) we divided the transmitter strategy into two parts:

– coding: mapping the information bits into coded bits; this is done at the block
level. We focused specifically on linear coding.
– modulation: mapping the coded bits into transmit voltages; this is done sequen-
tially.

It turns out that essentially the same steps continue to work even in attaining the
fundamental reliable rate of communication in Equation (6). At low SNRs, binary
modulation suffices. At high SNR, the modulation involves larger alphabets and is
also done in a block manner, albeit the modulation block size is usually smaller than
the coding block size.
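The need for larger alphabets at high SNR is visible directly from the formulas: the binary-modulation rate of Equation (1) saturates at 1 bit per channel use, while the capacity of Equation (6) keeps growing with SNR. A comparison sketch (helper names ours):

```python
import math

def binary_rate(snr):
    """Rate with binary modulation and linear coding, Equation (1)."""
    return 1.0 - math.log2(1.0 + math.exp(-snr / 2.0))

def capacity(snr):
    """AWGN capacity, Equation (6)."""
    return 0.5 * math.log2(1.0 + snr)

# Binary modulation saturates at 1 bit/channel use, while capacity
# keeps growing with SNR -- hence larger alphabets at high SNR.
for snr_db in (0, 10, 20, 30):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:2d} dB: R* = {binary_rate(snr):.3f}, "
          f"Cawgn = {capacity(snr):.3f}")
```

At 30 dB, for example, binary modulation leaves most of the capacity on the table, which is why high-SNR systems modulate over larger alphabets.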

• Receiver: In our study of the erasure channel in the previous lecture, we saw a fairly
simple receiver structure. In this general setting, the receiver is more involved: the ML
receiver is computationally hopeless to implement. Harnessing the understanding
gleaned from the erasure channel codes, a class of suboptimal (compared to ML) re-
ceiver techniques that are simple to implement has been developed in the last decade.
Specifically, these receivers iterate by alternating between demodulation and linear
decoding, eventually converging to the true information bits transmitted. This study is
somewhat out of the scope of this course. We will provide some reading material for
those interested in this literature at a later point.

Looking Ahead
So far we have focused on the discrete time additive noise channel (cf. Equation (4)). We
arrived at this model in Lecture 1 by using the engineering blocks of DAC (digital to analog
conversion) and ADC (analog to digital conversion) at the transmitter and receiver, respec-
tively. In the next lecture, we take a closer look at the DAC and ADC blocks in terms of how
their design impacts the end-to-end communication process. Of specific interest will be what
constrains the rate of discretization. Clearly, the larger this rate, the larger the capacity of
the end-to-end communication. We will see that the bandwidth of the transmit signal plays
an important role in constraining the number of channel uses per second. Further we will be
able to discuss the relation between the largest rate of reliable communication and the key
physical resources available to the engineer: power and bandwidth.
