

EE 561
Communication Theory
Spring 2003

Instructor: Matthew Valenti


Date: Jan. 24, 2003
Lecture #5

Review/Preview
Last time:
Source coding (discrete sources):
• Entropy.
• Data compaction.
• Source Rate.
• Source Coding Theorem.
• Huffman coding (and the Lempel-Ziv algorithm).
This time:
Coding for continuous sources
Analog-to-Digital Conversion
• Sampling.
• Quantization.
Section 3.4 of the textbook.


Block Diagram of a
Digital Communications System
[Figure: transmitter chain: analog input signal → Sample → Quantize → Source Encoder → Encryption → Channel Encoder → Modulator → Channel; a direct digital input enters at the source encoder. Receiver chain: Demodulator → Equalizer → Channel Decoder → Decryption → Source Decoder → D/A Conversion → analog output signal; a digital output is taken at the source decoder.]

Impulse Sampling
Impulse sampling or ideal sampling is the
process of multiplying a signal x(t) by a train
of impulses:
\delta_{T_s}(t) = \sum_{n=-\infty}^{\infty} \delta(t - nT_s)
The resulting sampled waveform x_s(t) is:
x_s(t) = x(t) \sum_{n=-\infty}^{\infty} \delta(t - nT_s) = \sum_{n=-\infty}^{\infty} x(nT_s)\,\delta(t - nT_s)
[Figure: analog input signal x(t) and the sampled signal x_s(t), a discrete-time, continuous-amplitude impulse train.]


F.T. of Sampled Signal


The Fourier Transform of a signal x(t) impulse
sampled at a rate of f_s is:
X_s(f) = f_s \sum_{n=-\infty}^{\infty} X(f - n f_s)
where f_s is the sample frequency (rate):
f_s = \frac{1}{T_s}
The F.T. of the sampled waveform consists of a train
of spectral copies of the original waveform's Fourier
Transform.
The spectral copies are centered at integer multiples of
the sample frequency (harmonics).
The copies are weighted by a factor of f_s.

F.T. of Sampled Signal
If the F.T. of the original signal looks like:
[Figure: X(f), a lowpass spectrum occupying -B ≤ f ≤ B.]
Then the F.T. of the sampled signal is:
[Figure: X_s(f), copies of X(f) of height f_s centered at 0, ±f_s, ±2f_s, ...]
Here we have assumed that f_s ≥ 2B.


Recovering the Analog Signal
The original analog signal is recovered
from the sampled signal by using an ideal
lowpass filter or Digital-to-Analog
converter:
[Figure: the spectral copies at 0, ±f_s passed through an ideal lowpass filter with passband -f_s/2 ≤ f ≤ f_s/2.]
If f_s ≥ 2B then we recover x(t) exactly:
[Figure: the recovered spectrum X(f), again occupying -B ≤ f ≤ B.]

Undersampling and Aliasing
If the waveform is undersampled (i.e. f_s < 2B),
then there will be spectral overlap in the
sampled signal:
[Figure: X_s(f) with adjacent spectral copies at 0, ±f_s overlapping.]
The signal at the output of the DAC will be
different from the original analog signal:
[Figure: the distorted recovered spectrum X'(f).]
Aliasing has occurred!


The Sampling Theorem
Nyquist Sampling Theorem:
If x(t) is bandlimited to B Hz, then it can be uniquely
represented by samples taken every T_s seconds, where:
\frac{1}{T_s} = f_s \ge 2B
The value 2B is the Nyquist rate.
If we sample at less than 2B, then aliasing will occur.
Problems with the Sampling Theorem:
Assumes that the signal is bandlimited.
• Actual signals are never completely bandlimited.
• An anti-aliasing filter can be used to force the signal to be
bandlimited.
Assumes that we perform ideal (impulse) sampling.
• Practical sampling methods (natural, flat-top) have slightly
different Fourier transforms.
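To see aliasing numerically, here is a minimal sketch (not from the lecture; the sample rate, tone frequency, and FFT peak detection are illustrative choices): a 7 kHz tone sampled at f_s = 10 kHz violates f_s ≥ 2B, so it shows up at 10 − 7 = 3 kHz.

```python
import numpy as np

fs = 10_000.0                      # sample rate (Hz); below the Nyquist rate for f0
f0 = 7_000.0                       # tone frequency above fs/2, so it will alias
n = np.arange(1024)
x = np.sin(2 * np.pi * f0 * n / fs)   # the sampled sinusoid

# The strongest spectral line in [0, fs/2) is the apparent (aliased) frequency.
spectrum = np.abs(np.fft.rfft(x))
f_apparent = np.argmax(spectrum) * fs / len(n)
print(f_apparent)                  # ~3000 Hz, i.e. fs - f0, not the original 7000 Hz
```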

Quantization
Earlier, we considered source coding for a discrete
source.
i.e. finite source alphabet.
Symbols can be represented by a finite number of bits.
What if the source is continuous-valued?
e.g. samples of an analog input.
Then it would require an infinite number of bits to represent
the samples with perfect precision.
Quantization is the process of approximating
continuous-valued samples with a finite number of
bits.
Quantization always introduces some distortion.
• We will investigate how to design a minimal-distortion
quantizer.
Huffman coding may be performed after quantization.
• This is called entropy coding.


Quantization Notation
Let X be a random variable representing a sample of
data.
Then \tilde{X} = f_Q(X) is the quantized value of X.
The quantizer has L quantization levels:
\tilde{X} \in \{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_L\}
Every quantization level has a quantization region
associated with it.
The endpoints of the quantization regions are specified by
L+1 values:
\{x_0, x_1, \ldots, x_L\}
x_0 = -\infty and x_L = +\infty
Given a sample x of the random variable X:
If x_{k-1} \le x < x_k, then \tilde{x} = f_Q(x) = \tilde{x}_k
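The mapping f_Q can be implemented directly as a table lookup over the region endpoints. A minimal sketch (not from the lecture; the names boundaries, levels, and f_Q are illustrative, and the values are those of the 3-bit example on the next slide):

```python
import numpy as np

# Interior endpoints x_1, ..., x_{L-1}; x_0 = -inf and x_L = +inf are implicit.
boundaries = np.array([-6, -4, -2, 0, 2, 4, 6])
# Quantization levels x~_1, ..., x~_L.
levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7])

def f_Q(x):
    # searchsorted finds k such that x_{k-1} <= x < x_k, 0-indexed into levels.
    return levels[np.searchsorted(boundaries, x, side="right")]

print(f_Q(np.array([-9.0, -0.3, 0.3, 2.7, 7.9])))   # [-7 -1  1  3  7]
```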

Example Quantizer
[Figure: staircase transfer characteristic of a uniform 3-bit quantizer; input signal x on the horizontal axis (-8 to 8), output signal x~ on the vertical axis.]
Most commercial ADCs use uniform quantizers.
The quantization levels of a uniform quantizer
are equally spaced apart.
Uniform quantizers are optimal when the input
distribution is uniform.
i.e. when all values within the range are equally
likely.


Example Quantizer
Consider the 3-bit (L=8) quantizer from the previous
slide. We can specify it with a table. The endpoints and
levels follow from the staircase above; the natural-binary
output labels are one common assignment:

k   x_{k-1}   x_k    x~_k   Output bits
1   -∞        -6     -7     000
2   -6        -4     -5     001
3   -4        -2     -3     010
4   -2         0     -1     011
5    0         2     +1     100
6    2         4     +3     101
7    4         6     +5     110
8    6        +∞     +7     111
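Because the levels are equally spaced, a uniform quantizer also admits a closed form, with no table needed. A minimal sketch (the function name, midrise convention, and clipping behavior are illustrative assumptions):

```python
import numpy as np

def uniform_quantizer(x, num_bits=3, x_max=8.0):
    """Midrise uniform quantizer with L = 2**num_bits levels of spacing
    Delta = 2*x_max/L; the outermost regions extend to +/- infinity."""
    L = 2 ** num_bits
    delta = 2 * x_max / L                     # Delta = 2.0 for the example above
    k = np.floor(x / delta)                   # region index
    k = np.clip(k, -L // 2, L // 2 - 1)       # clip overload inputs
    return (k + 0.5) * delta                  # reconstruct at the region midpoint

print(uniform_quantizer(np.array([-9.0, -0.3, 2.7])))   # [-7. -1.  3.]
```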

Uniform vs. Nonuniform Quantizers
A quantizer can be completely specified by a
list of quantization levels:
\tilde{X} \in \{\tilde{x}_1, \tilde{x}_2, \ldots, \tilde{x}_L\}
This means the endpoints of the quantization
regions do not need to be separately specified.
Why?
A quantizer can be uniform or nonuniform:
Uniform: \tilde{x}_k - \tilde{x}_{k-1} = \Delta, \quad \forall k \in \{2, \ldots, L\}
• Optimal if X has a uniform pdf.
Otherwise, it is nonuniform.
• Optimal if X has a pdf other than uniform.


Nonuniform Quantization
[Figure: staircase transfer characteristic of a nonuniform 3-bit quantizer; input signal x from -8 to 8, output signal x~ with unequally spaced levels.]
Many signals, such as speech, have a
nonuniform distribution.
The amplitude is more likely to be close to zero
than to be at a high level.
Nonuniform quantizers have unequally spaced
levels.
The spacing can be chosen to optimize the SNR
for a particular type of signal, as in the
companding sketch below.
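One classical way to realize a nonuniform quantizer for speech is companding: compress amplitudes with a logarithmic curve, quantize uniformly, then expand at the receiver. The slide does not name a specific law, so treat this μ-law sketch (inputs normalized to [-1, 1], μ = 255 as in North American telephony) as a supplementary example:

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Compress: small amplitudes get finer effective quantization."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of mu_law_compress."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

# Compress, quantize uniformly, expand: a nonuniform quantizer overall.
x = np.array([0.01, 0.1, 0.9])
y = np.round(mu_law_compress(x) * 127) / 127    # 8-bit-style uniform step
print(mu_law_expand(y))                          # close to x, finer near zero
```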

Rate
The rate of a quantizer is:
R = \log_2 L
L is the number of quantization levels.
Usually R is an integer (L is a power of 2).
However, even if R is not an integer, we can still
achieve it by using Huffman coding:
• Entropy coding --- first quantize, then compact using a
Huffman code.
Question: What is the rate of the example
uniform quantizer? The example nonuniform
quantizer?


Distortion
Every quantizer introduces some amount of
distortion into a signal.
“Round-off” error.
The distortion function d(x, x~) between x and x~
must be a nondecreasing function of |x - x~|,
i.e. as |x - x~| increases, d(x, x~) increases.
This is satisfied for:
d(x, \tilde{x}) = |x - \tilde{x}|^p     Equation (3.4-2) in Proakis
where p is a positive integer.
Usually we use p = 2.
• Then d( ) is the squared error, whose average is the
Mean Square Error.

Average Distortion
The average distortion D is defined by:
D = E\{d(X, \tilde{X})\}
  = \int_{-\infty}^{\infty} d(x, \tilde{x}) \, p(x) \, dx
  = \sum_{k=1}^{L} \int_{x_{k-1}}^{x_k} d(x, \tilde{x}_k) \, p(x) \, dx

The goal of our quantizer design is:


Minimize D for a given number of quantization
levels L.
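The sum-of-integrals form lends itself to direct numerical evaluation. A minimal sketch (not from the lecture; the function name and the use of scipy.integrate.quad are illustrative), applied to the 3-bit uniform example, where the pdf support lets us truncate the outer regions at ±8:

```python
import numpy as np
from scipy.integrate import quad

def average_distortion(levels, boundaries, pdf, p=2):
    """D = sum over k of the integral of |x - x~_k|**p * p(x) over [x_{k-1}, x_k]."""
    return sum(
        quad(lambda x, c=c: abs(x - c) ** p * pdf(x), a, b)[0]
        for c, a, b in zip(levels, boundaries[:-1], boundaries[1:])
    )

levels = [-7, -5, -3, -1, 1, 3, 5, 7]
boundaries = [-8, -6, -4, -2, 0, 2, 4, 6, 8]    # pdf is zero outside [-8, 8]
pdf = lambda x: 1 / 16                           # uniform on [-8, 8]
print(average_distortion(levels, boundaries, pdf))   # ~0.3333 = Delta**2 / 12
```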


Mean Square Error (MSE)
Consider the distortion function:
d(x, \tilde{x}) = (x - \tilde{x})^2
i.e. the squared error; its average is the Mean Square Error (MSE).
MSE penalizes big errors more than small
errors.
MSE is the power of the quantization noise.
Quantization noise: \tilde{n} = \tilde{x} - x
The average signal-to-quantization-noise ratio
is then:
\left(\frac{S}{N}\right)_{\text{avg}} = \frac{\text{Signal Power}}{\text{Noise Power}} = \frac{E\{X^2\}}{D}

Example:
SNR Calculation
Consider a quantizer with levels:
\{-7, -5, -3, -1, +1, +3, +5, +7\}
The pdf of the input is:
p(x) = \begin{cases} \frac{1}{16} & \text{for } -8 \le x \le +8 \\ 0 & \text{elsewhere} \end{cases}
Determine the SNR.
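As a check, a worked sketch (assuming squared-error distortion, p = 2):

E\{X^2\} = \int_{-8}^{8} \frac{x^2}{16}\,dx = \frac{64}{3}

Each quantization region has width \Delta = 2, and within it the error is uniform over \pm 1, so

D = \sum_{k=1}^{8} \int_{x_{k-1}}^{x_k} (x - \tilde{x}_k)^2 \, \frac{1}{16}\,dx = \frac{\Delta^2}{12} = \frac{1}{3}

giving \left(\frac{S}{N}\right)_{\text{avg}} = \frac{64/3}{1/3} = 64 \approx 18.1 dB, consistent with the 6 dB-per-bit rule derived later (R = 3 bits).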


Rate-Distortion Function
We are interested in the following questions:
For a given (average) distortion D, what is the minimum
rate R that is required?
For a given rate R, what is the minimum distortion D that
can be produced?
There is a functional relationship between the best values
of D and R that can be achieved.
Called the rate-distortion function R(D).
• The theoretical minimum rate R for a given D.
Tradeoff:
• In order to decrease D, R must be increased.
For an arbitrary pdf it is difficult to derive the rate-distortion
function.
• Rate-distortion theory is the topic of entire chapters in
information theory books, and some books are devoted
entirely to it.

R(D) for Gaussian Sources
Consider a source that produces i.i.d.
(independent and identically distributed)
Gaussian random variables.
Let the variance be \sigma_x^2.
Then:
R(D) = \begin{cases} \frac{1}{2}\log_2\left(\frac{\sigma_x^2}{D}\right) & 0 \le D \le \sigma_x^2 \\ 0 & D > \sigma_x^2 \end{cases}     Equation (3.4-6) in Proakis
where D is the average mean-square distortion.


R(D) for Gaussian Sources
R(D) = \begin{cases} \frac{1}{2}\log_2\left(\frac{\sigma_x^2}{D}\right) & 0 \le D \le \sigma_x^2 \\ 0 & D > \sigma_x^2 \end{cases}
[Figure: R(D) plotted versus D/\sigma_x^2 from 0 to 1. Quantizers can be designed that operate on or above the curve; no quantizer can operate below it.]
Some observations:
The worst theoretical distortion is \sigma_x^2.
• Achieved by just sending all zeros (which has R = 0).
D = 0 cannot be achieved with finite R.

Upper bound on R(D)
Theorem: The Gaussian source requires the
maximum rate among all sources for a
specified level of mean square distortion.
Therefore, for any arbitrary distribution:
R(D) \le \frac{1}{2}\log_2\left(\frac{\sigma_x^2}{D}\right), \quad 0 \le D \le \sigma_x^2     Equation (3.4-9) in Proakis
This means that the Gaussian is a "worst-case
scenario".
Non-Gaussian distributions require no more bits to
quantize than the equivalent Gaussian distribution.
• Equivalent: same variance.


Distortion-Rate Function
Take the rate-distortion function for the
Gaussian pdf and solve for D:
D = \sigma_x^2 \, 2^{-2R}     Equation (3.4-10) in Proakis
This is the distortion-rate function.
In decibels, this becomes:
D_{\text{dB}} = 10 \log_{10}\left(\sigma_x^2 \, 2^{-2R}\right)
    = 10 \log_{10}\left(\sigma_x^2\right) + 10 \log_{10}\left(2^{-2R}\right)
    = 10 \log_{10}\left(\sigma_x^2\right) - 20 R \log_{10}(2)
    \approx 10 \log_{10}\left(\sigma_x^2\right) - 6R \text{ dB}
The distortion decreases by about 6 dB per bit of quantization.
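A quick numerical check of the 6 dB-per-bit rule (a sketch; the unit variance is an assumed normalization):

```python
import numpy as np

sigma2 = 1.0                                  # assume a unit-variance source
for R in range(1, 5):
    D = sigma2 * 2 ** (-2 * R)                # distortion-rate function (3.4-10)
    print(R, round(10 * np.log10(D), 2))      # -6.02, -12.04, -18.06, -24.08 dB
```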

Next Time
Next time, we will look at how to design
quantizers that minimize distortion
(maximize SNR).

