EE 561
Communication Theory
Spring 2003
Review/Preview
Last time:
Source coding (discrete sources):
• Entropy.
• Data compaction.
• Source Rate.
• Source Coding Theorem.
• Huffman coding (and Lempel-Ziv algorithm)
This time:
Coding for continuous sources
Analog-to-Digital Conversion
• Sampling.
• Quantization.
Section 3.4 of textbook.
Spring 2001 1
Communication Theory 1/24/03
Block Diagram of a
Digital Communications System
[Figure: block diagram. Transmit path: analog input signal → Sample → Quantize → Source Encode (a direct digital input enters here) → Encryption → Channel Encoder → channel. Receive path: Equalizer → Channel Decoder → Decryption → Source Decoder → D/A Conversion → analog output signal (a digital output can also be taken directly).]
Impulse Sampling
Impulse sampling or ideal sampling is the
process of multiplying a signal x(t) by a train
of impulses,

 δ_Ts(t) = Σ_{n=−∞}^{+∞} δ(t − n·Ts)

[Figure: impulse train in the time domain.]
[Figure: spectrum of x(t), band-limited to |f| ≤ B (top); spectrum of the impulse-sampled signal, with replicas of the spectrum centered at multiples of fs (bottom). Here the replicas do not overlap.]
[Figure: spectrum of x(t), band-limited to |f| ≤ B (top); spectrum of the sampled signal with fs too low, so adjacent replicas overlap (middle); the resulting reconstructed spectrum (bottom). Aliasing has occurred!]
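A quick way to see where a tone lands after sampling: the sampled spectrum repeats every fs, so a tone at frequency f folds into [0, fs/2]. A minimal sketch (Python; the function name and the 8 kHz rate are our own illustrative choices, not from the slides):

```python
def aliased_frequency(f, fs):
    """Apparent frequency of a tone at f Hz after sampling at fs Hz.

    Sampling replicates the spectrum at every multiple of fs, so any
    tone above fs/2 folds back into the band [0, fs/2].
    """
    f_mod = f % fs                 # shift into one spectral period
    return min(f_mod, fs - f_mod)  # fold into [0, fs/2]

fs = 8000.0                           # assumed sample rate (Hz)
print(aliased_frequency(3000.0, fs))  # below fs/2: unchanged -> 3000.0
print(aliased_frequency(5000.0, fs))  # above fs/2: aliases to 3000.0
```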
Quantization
Earlier, we considered source coding for a discrete
source,
 i.e., one with a finite source alphabet.
 Symbols can be represented by a finite number of bits.
What if the source is continuous-valued?
 e.g., samples of an analog input.
 It would then require an infinite number of bits to represent
 the samples with perfect precision.
Quantization is the process of approximating
continuous-valued samples with a finite number of
bits.
Quantization always introduces some distortion.
 • We will investigate how to design a minimum-distortion
 quantizer.
Huffman coding may be performed after quantization.
 • This is called entropy coding.
Quantization Notation
Let X be a random variable representing a sample of
data.
Then X~ = f_Q(X) is the quantized value of X.
The quantizer has L quantization levels:
 X~ ∈ { x~_1, x~_2, ..., x~_L }
Every quantization level has a quantization region
associated with it.
 The endpoints of the quantization regions are specified by
 L+1 values:
 { x_0, x_1, ..., x_L }
 where x_0 = −∞ and x_L = +∞.
Given a sample x of the random variable X:
 if x_{k−1} ≤ x ≤ x_k, then x~ = f_Q(x) = x~_k.
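The mapping f_Q can be sketched directly from this notation (Python; the helper name and the midpoint boundaries in the demo are illustrative assumptions, not from the slides):

```python
import bisect

def quantize(x, boundaries, levels):
    """Return the quantization level x~_k for the region containing x.

    boundaries: the finite endpoints x_1 .. x_{L-1}, in increasing order
    (x_0 = -inf and x_L = +inf are implicit).
    levels: the L output values x~_1 .. x~_L.
    """
    k = bisect.bisect_right(boundaries, x)  # index of region containing x
    return levels[k]

# Illustrative 3-bit quantizer: levels -7,-5,...,+7, midpoint boundaries.
levels = [-7, -5, -3, -1, 1, 3, 5, 7]
boundaries = [-6, -4, -2, 0, 2, 4, 6]
print(quantize(2.4, boundaries, levels))   # -> 3
print(quantize(-7.5, boundaries, levels))  # -> -7
```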
Example Quantizer
Most commercial ADCs use uniform quantizers.
 The quantization levels of a uniform quantizer are equally
 spaced apart.
Uniform quantizers are optimal when the input distribution
is uniform,
 i.e., when all values within the range are equally likely.
[Figure: input-output staircase characteristic of an example
uniform 3-bit quantizer; input axis x from −8 to 8.]
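A uniform quantizer needs no lookup table; the level index follows from a single division. A minimal mid-rise sketch (Python; the function name and the saturation behavior at the edges are our assumptions, and the slide's quantizer may use a different edge convention):

```python
def uniform_quantize(x, step, n_bits):
    """Uniform mid-rise quantizer with 2**n_bits equally spaced levels.

    Levels sit at odd multiples of step/2; inputs outside the range
    saturate to the outermost level, as a real ADC would.
    """
    L = 2 ** n_bits
    k = int(x // step)                     # raw region index
    k = max(-L // 2, min(L // 2 - 1, k))   # clip to the L regions
    return (k + 0.5) * step

# 3-bit quantizer, step 2, covering [-8, 8]: levels -7, -5, ..., +7
print(uniform_quantize(2.4, 2.0, 3))  # -> 3.0
print(uniform_quantize(9.1, 2.0, 3))  # saturates -> 7.0
```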
Example Quantizer
Consider the 3-bit (L=8) quantizer from the previous
slide. We can specify it with a table (to be filled in):

 k | x_{k-1} | x_k | x~_k | Output bits
---+---------+-----+------+------------
 1 |         |     |      |
 2 |         |     |      |
 3 |         |     |      |
 4 |         |     |      |
 5 |         |     |      |
 6 |         |     |      |
 7 |         |     |      |
 8 |         |     |      |
Nonuniform Quantization
Many signals, such as speech, have a nonuniform
distribution.
 The amplitude is more likely to be close to zero
 than to be at a high level.
Nonuniform quantizers have unequally spaced levels.
 The spacing can be chosen to optimize the SNR for a
 particular type of signal.
[Figure: input-output characteristic of an example
nonuniform 3-bit quantizer; input axis x from −8 to 8.]
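One classic way to realize a nonuniform quantizer for speech (not detailed on the slide, so this is a hedged aside) is companding: compress the amplitude with a logarithmic curve such as μ-law, quantize uniformly, then expand at the receiver, which gives small amplitudes finer effective resolution:

```python
import math

MU = 255.0  # mu-law parameter commonly used in telephony

def mu_compress(x):
    """Compress x in [-1, 1] with the mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse of mu_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Compressing, quantizing uniformly, then expanding acts as a
# nonuniform quantizer with fine steps near zero.
x = 0.02
print(round(mu_expand(mu_compress(x)), 6))  # round trip -> 0.02
```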
Rate
The rate of a quantizer is:
 R = log2 L
where L is the number of quantization levels.
Usually R is an integer (L is a power of 2).
However, even if R is not an integer, we can still
achieve it by using Huffman coding:
 • Entropy coding: first quantize, then compact using a
 Huffman code.
Question: What is the rate of the example
uniform quantizer? The example nonuniform
quantizer?
Distortion
Every quantizer introduces some amount of
distortion into a signal.
“Round-off” error.
The distortion function d(x, x~) between x and x~
must be a nondecreasing function of |x − x~|,
 i.e., as |x − x~| increases, so does d(x, x~).
This is satisfied by:
 d(x, x~) = |x − x~|^p    Equation (3.4-2) in Proakis
Average Distortion
The average distortion D is defined by:
 D = E[ d(X, X~) ]
   = ∫_{−∞}^{+∞} d(x, x~) p(x) dx
   = Σ_{k=1}^{L} ∫_{x_{k−1}}^{x_k} d(x, x~_k) p(x) dx
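This sum of per-region integrals can be checked numerically. A sketch using a simple midpoint rule (Python; the function names and the uniform-input example are our own choices):

```python
import numpy as np

def average_distortion(edges, levels, pdf, p=2, n=4000):
    """Numerically evaluate D = sum_k int over [x_{k-1}, x_k] of
    |x - x~_k|**p * p(x) dx, using a midpoint rule per region.

    edges: the L+1 endpoints x_0 .. x_L (finite here because the example
    pdf has bounded support); levels: the L values x~_1 .. x~_L.
    """
    D = 0.0
    for k, level in enumerate(levels):
        a, b = edges[k], edges[k + 1]
        dx = (b - a) / n
        x = a + (np.arange(n) + 0.5) * dx      # midpoints of subintervals
        D += np.sum(np.abs(x - level) ** p * pdf(x)) * dx
    return D

# Uniform 3-bit quantizer on [-8, 8] with a uniform input, p(x) = 1/16:
levels = [-7, -5, -3, -1, 1, 3, 5, 7]
edges = [-8, -6, -4, -2, 0, 2, 4, 6, 8]
pdf = lambda x: np.full_like(x, 1 / 16)
print(round(average_distortion(edges, levels, pdf), 4))  # -> 0.3333
```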
Example:
SNR Calculation
Consider a quantizer with levels:
 { −7, −5, −3, −1, +1, +3, +5, +7 }
The pdf of the input is:
 p(x) = 1/16  for −8 ≤ x ≤ +8
      = 0     elsewhere
Determine the SNR.
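A numerical check of this example (the midpoint decision boundaries ±2, ±4, ±6 are an assumption; the slide lists only the levels): the uniform input has power 16²/12 = 64/3, the quantization error is uniform on [−1, 1] with power 1/3, so SNR = 64, about 18.1 dB.

```python
import numpy as np

levels = np.array([-7, -5, -3, -1, 1, 3, 5, 7], dtype=float)
edges = np.array([-8, -6, -4, -2, 0, 2, 4, 6, 8], dtype=float)

# Monte Carlo under the given pdf: X uniform on [-8, 8], p(x) = 1/16.
rng = np.random.default_rng(0)
x = rng.uniform(-8, 8, 1_000_000)
xq = levels[np.clip(np.searchsorted(edges, x) - 1, 0, 7)]

signal_power = np.mean(x ** 2)        # exact value: 16**2 / 12 = 64/3
noise_power = np.mean((x - xq) ** 2)  # exact value: 2**2 / 12 = 1/3
snr_db = 10 * np.log10(signal_power / noise_power)
print(round(snr_db, 1))               # close to 10*log10(64), about 18.1 dB
```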
Rate-Distortion Function
We are interested in the following questions:
For a given (average) distortion D, what is the minimum
rate R that is required?
For a given rate R, what is the minimum distortion D that
can be produced?
There is a functional dependence on the best values
of D and R that can be achieved.
Called the rate-distortion function R(D).
• The theoretical minimum rate R for a given D.
Tradeoff:
• In order to decrease D, R must be increased.
For an arbitrary pdf it is difficult to derive the rate-distortion
function.
• Rate-distortion theory is the topic of entire chapters in
information theory books and there are even some books that
are devoted to rate-distortion theory.
For the memoryless Gaussian source (from Proakis):

 R(D) = (1/2) log2( σ_x² / D )   for 0 ≤ D ≤ σ_x²
      = 0                        for D > σ_x²
[Figure: plot of R(D) versus D/σ_x². We can design quantizers that operate on or above this curve; no quantizer can operate below it.]
Some observations:
Worst theoretical distortion is σ_x².
• Achieved by just sending all zeros (which has R=0).
D = 0 cannot be achieved with finite R.
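The Gaussian rate-distortion function above is easy to evaluate (Python; the function name is ours):

```python
import math

def rate_distortion_gaussian(D, sigma2):
    """R(D) for a memoryless Gaussian source with variance sigma2:
    0.5 * log2(sigma2 / D) for 0 < D <= sigma2, and 0 for D > sigma2.
    """
    if D <= 0:
        raise ValueError("D = 0 would require infinite rate")
    return 0.5 * math.log2(sigma2 / D) if D < sigma2 else 0.0

print(rate_distortion_gaussian(0.25, 1.0))  # -> 1.0 bit per sample
print(rate_distortion_gaussian(2.0, 1.0))   # D > sigma2 -> 0.0
```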
Distortion-Rate function
Take the rate-distortion function for the
Gaussian pdf and solve for D:
 D = σ_x² · 2^(−2R)    Equation (3.4-10) in Proakis

In dB:
 10 log10(D) = 10 log10(σ_x²) − 6R  dB
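The dB form makes the familiar "6 dB per bit" rule explicit (Python; the function name is ours, and the exact per-bit gain is 20·log10(2) ≈ 6.02 dB, which the slide rounds to 6):

```python
import math

def distortion_rate_db(R, sigma2):
    """10*log10(D) with D = sigma2 * 2**(-2R) (Eq. 3.4-10 rearranged)."""
    return 10 * math.log10(sigma2) - 20 * R * math.log10(2)

# Each extra bit of rate lowers the distortion floor by ~6.02 dB.
gain = distortion_rate_db(3, 1.0) - distortion_rate_db(4, 1.0)
print(round(gain, 2))  # -> 6.02
```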
Next Time
Next time, we will look at how to design
quantizers that minimize distortion
(maximize SNR).