3.2 Sampling Theorem

Let $T_s$ denote the sampling period and $f_s = 1/T_s$ the sampling rate. The ideally sampled signal is

$$g_\delta(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s).$$

Its Fourier transform is

$$G_\delta(f) = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\,\delta(t - nT_s)\, e^{-j2\pi f t}\, dt = \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi n T_s f}.$$

Claim: $G_\delta(f) = f_s \displaystyle\sum_{m=-\infty}^{\infty} G(f - mf_s)$.
3.2 Spectrum of Sampled Signal

Let $L(f) = f_s \sum_{m=-\infty}^{\infty} G(f - mf_s)$, and notice that it is periodic with period $f_s$. Its Fourier series coefficients are

$$c_n = \int_{-f_s/2}^{f_s/2} \sum_{m=-\infty}^{\infty} G(f - mf_s)\, \exp\!\Big({-j\frac{2\pi n}{f_s} f}\Big)\, df, \qquad \text{substituting } s = f - mf_s,$$

$$= \sum_{m=-\infty}^{\infty} \int_{-f_s/2 - mf_s}^{f_s/2 - mf_s} G(s)\, \exp\!\Big({-j\frac{2\pi n}{f_s}(s + mf_s)}\Big)\, ds$$

$$= \sum_{m=-\infty}^{\infty} \int_{-f_s/2 - mf_s}^{f_s/2 - mf_s} G(s)\, \exp\!\Big({-j\frac{2\pi n}{f_s} s}\Big)\, ds \qquad (\text{since } e^{-j2\pi n m} = 1)$$

$$= \int_{-\infty}^{\infty} G(s)\, \exp\!\Big({-j\frac{2\pi n}{f_s} s}\Big)\, ds = g(-nT_s).$$

Hence

$$L(f) = \sum_{n=-\infty}^{\infty} g(-nT_s)\, \exp\!\Big(j\frac{2\pi n}{f_s} f\Big) = \sum_{m=-\infty}^{\infty} g(mT_s)\, \exp(-j2\pi m T_s f), \quad \text{where } m = -n,$$

which equals $G_\delta(f)$, proving the claim.
3.2 First Important Conclusion for Sampling

If $g(t)$ is bandlimited to $W$ and is sampled at the Nyquist rate $f_s = 2W$, then

$$G(f) = \frac{1}{2W}\, G_\delta(f) \qquad \text{for } |f| \le W.$$
3.2 Aliasing due to Sampling

(Figure: when $f_s < 2W$, the spectral replicas $G(f - mf_s)$ overlap and the original spectrum cannot be recovered; the axis marks $2W$ and $f_s$ indicate the signal bandwidth and the replica spacing.)
With $f_s = 2W$,

$$g(t) = \int_{-W}^{W} G(f)\, e^{j2\pi f t}\, df = \frac{1}{f_s}\int_{-W}^{W} \sum_{n=-\infty}^{\infty} g(nT_s)\, e^{-j2\pi n T_s f}\, e^{j2\pi f t}\, df$$

$$= \frac{1}{f_s} \sum_{n=-\infty}^{\infty} g(nT_s) \int_{-W}^{W} e^{j2\pi (t - nT_s) f}\, df$$

$$= \frac{1}{f_s} \sum_{n=-\infty}^{\infty} g(nT_s)\, \frac{\sin[2\pi W(t - nT_s)]}{\pi (t - nT_s)}$$

$$= \sum_{n=-\infty}^{\infty} g(nT_s)\, \big(2WT_s\, \mathrm{sinc}[2W(t - nT_s)]\big).$$
3.2 Interpolation in terms of filtering

Observe that (with $f_s = 2W$, so $2WT_s = 1$)

$$g(t) = \sum_{n=-\infty}^{\infty} g(nT_s)\, \mathrm{sinc}\!\Big(\frac{t}{T_s} - n\Big)$$

is indeed a convolution between $g_\delta(t)$ and $\mathrm{sinc}(t/T_s)$:

$$g_\delta(t) * \mathrm{sinc}\!\Big(\frac{t}{T_s}\Big) = \int_{-\infty}^{\infty} g_\delta(\tau)\, \mathrm{sinc}\!\Big(\frac{t-\tau}{T_s}\Big)\, d\tau = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} g(nT_s)\, \delta(\tau - nT_s)\, \mathrm{sinc}\!\Big(\frac{t-\tau}{T_s}\Big)\, d\tau$$

$$= \sum_{n=-\infty}^{\infty} g(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\, \mathrm{sinc}\!\Big(\frac{t-\tau}{T_s}\Big)\, d\tau = \sum_{n=-\infty}^{\infty} g(nT_s)\, \mathrm{sinc}\!\Big(\frac{t}{T_s} - n\Big).$$

Po-Ning Chen@cm.nctu Chapter 3-13

The reconstruction filter (interpolation filter) is therefore $h(t) = \mathrm{sinc}(t/T_s)$, with

$$H(f) = T_s\, \mathrm{rect}(T_s f).$$

(Block diagram: $g_\delta(t) \to H(f) \to g(t)$, an ideal lowpass filter with cutoff frequencies $\pm f_s/2$.)
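The interpolation formula above can be checked numerically. The sketch below (function names are illustrative, not from the slides) reconstructs a bandlimited signal from its samples by truncating the ideally infinite sinc sum:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi x)/(pi x)."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / math.pi / x

def reconstruct(samples, Ts, t):
    """Truncated interpolation sum g(t) ~= sum_n g(n Ts) sinc(t/Ts - n)."""
    return sum(g_n * sinc(t / Ts - n) for n, g_n in enumerate(samples))

# A 1 Hz cosine sampled at fs = 8 Hz, well above the Nyquist rate of 2 Hz.
Ts = 1.0 / 8.0
samples = [math.cos(2 * math.pi * n * Ts) for n in range(200)]

# Evaluate at an off-grid instant near the middle of the record, where
# truncating the sum causes only a small error.
t = 12.3
approx = reconstruct(samples, Ts, t)
exact = math.cos(2 * math.pi * t)
print(abs(approx - exact))  # small truncation error
```

Because the sinc tails decay only like $1/|t|$, the truncation error is smallest far from the edges of the sample record.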
3.2 Physical Realization of Reconstruction Filter

An ideal lowpass filter is not physically realizable. Instead, we can use an anti-aliasing filter of bandwidth W and a sampling rate fs > 2W. Then the spectrum of the reconstruction filter can be shaped with a gradual transition band between W and fs - W, as shown in the figure.
3.3 Pulse-Amplitude Modulation (PAM)

PAM

The amplitude of regularly spaced pulses is varied in proportion to the corresponding sample values of a continuous message signal. Notably, the top of each pulse is maintained flat. So this is PAM, not natural sampling, for which the message signal is directly multiplied by a periodic train of rectangular pulses.
The PAM signal is $s(t) = m_\delta(t) * h(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s)$, where

$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad m_\delta(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, \delta(t - nT_s).$$
3.3 Pulse-Amplitude Modulation (PAM)

From the filtering standpoint, the spectrum S(f) can be derived as:

$$S(f) = M_\delta(f)\, H(f) = \Big(f_s \sum_{k=-\infty}^{\infty} M(f - kf_s)\Big) H(f) = f_s \sum_{k=-\infty}^{\infty} M(f - kf_s)\, H(f)$$

$$= f_s\, M(f)\, H(f) + f_s \sum_{|k| \ge 1} M(f - kf_s)\, H(f).$$

The reconstruction filter removes the $|k| \ge 1$ images; the equalizer then restores $M(f)H(f)$ to $M(f)$.
3.3 Feasibility of Equalizer Filter

The distortion of $M(f)$ is due to $M(f)H(f)$, where

$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{or} \qquad H(f) = T\,\mathrm{sinc}(fT)\, e^{-j\pi f T}.$$

The equalizer therefore needs

$$E(f) = \frac{1}{H(f)} = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}\, e^{j\pi f T}, & |f| \le W \\ 0, & \text{otherwise,} \end{cases}$$

which is well defined, since $\dfrac{1}{T} \ge \dfrac{1}{T_s} = f_s > 2W$ guarantees $\mathrm{sinc}(fT) \ne 0$ over $|f| \le W$.

Write $E(f) = \tilde{E}(f)\, e^{j\pi f T}$, where

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise.} \end{cases}$$

(Figure: $\tilde{E}(f)$ for $T = 1$ and $W = 1/8$; it stays close to 1 over $|f| \le W$.)

The remaining factor $e^{j\pi f T}$ corresponds to a time advance of $T/2$, which is not realizable; the realizable filter $\tilde{E}(f)$ alone produces $o_1(t) \approx o(t + T/2)$ up to this delay, i.e., a delayed version of the desired output $o(t)$.
3.3 Feasibility of Equalizer Filter

Causal system: $i(t) \to h(t) \to o(t)$.

Simplified proof that causality is equivalent to $h(t) = 0$ for $t < 0$:

If $h(t) = 0$ for $t < 0$, then $o(t) = \int_{-\infty}^{\infty} h(\tau)\, i(t - \tau)\, d\tau = \int_{0}^{\infty} h(\tau)\, i(t - \tau)\, d\tau$, so $i(t) = 0$ for $t < 0$ implies $o(t) = 0$ for $t < 0$.

Conversely, suppose $\int_{-\infty}^{-a} h(\tau)\, d\tau \ne 0$ for some $a > 0$, and take

$$i(t) = \begin{cases} 0, & t < 0 \\ 1, & t \ge 0. \end{cases}$$

Then $o(-a) = \int_{-\infty}^{-a} h(\tau)\, d\tau \ne 0$, which means that there will be a nonzero output at time $t = -a < 0$ due to a (so far) completely zero input! Therefore a causal system must satisfy $\int_{-\infty}^{-a} h(\tau)\, d\tau = 0$ for every $a > 0$.

Finally, $\varphi(a) \triangleq \int_{-\infty}^{-a} h(\tau)\, d\tau = 0$ for every $a > 0$ implies

$$\frac{d}{da}\int_{-\infty}^{-a} h(\tau)\, d\tau = -h(-a) = 0 \quad \text{for } a > 0,$$

i.e., $h(t) = 0$ for $t < 0$.
3.3 Aperture Effect

The distortion of $M(f)$ due to $M(f)H(f)$, where

$$h(t) = \begin{cases} 1, & 0 < t < T \\ 1/2, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad \text{or} \qquad H(f) = T\,\mathrm{sinc}(fT)\, e^{-j\pi f T},$$

is the aperture effect. With

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{T\,\mathrm{sinc}(fT)}, & |f| \le W \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad \frac{1}{T} \ge \frac{1}{T_s} = f_s > 2W,$$

consider $T = 1$, $T_s = 10$, $W = 0.04$:

$$\tilde{E}(f) = \begin{cases} \dfrac{1}{\mathrm{sinc}(f)}, & |f| \le 0.04 \\ 0, & \text{otherwise.} \end{cases}$$

(Figure: $\tilde{E}(f)$ varies only between 1 and about 1.00264 over $|f| \le 0.04$, so the aperture effect is negligible when $T/T_s$ is small.)
3.3 Pulse-Amplitude Modulation

Final notes on PAM

PAM is rather stringent in its system requirements, such as the short duration of pulses.

Also, the noise performance of PAM may not be sufficient for long-distance transmission.

Accordingly, PAM is often used as a means of message processing for time-division multiplexing, from which conversion to some other form of pulse modulation is subsequently made. Details will be discussed in Section 3.9.
Pulse trains

(Figure: PDM (pulse-duration modulation) and PPM (pulse-position modulation) pulse trains for the same message.)
See slide 2-162:

$$\text{figure-of-merit} = \frac{D^2}{2} = \frac{1}{2}\Big(\frac{B_{T,\text{Carson}}}{2W} - 1\Big)^2 = \frac{1}{2}\Big(\frac{B_{n,\text{Carson}}}{2} - 1\Big)^2.$$
3.6 Quantization Process

We may drop the time instant nTs for convenience when the quantization process is memoryless and instantaneous (hence, the quantization at time nTs is not affected by earlier or later samples of the message signal).

Types of quantization:

Uniform: quantization step sizes are of equal length.

Non-uniform: quantization step sizes are not of equal length.

(Figure: midtread and midrise quantizer characteristics.)
3.6 Quantization Noise

(Figure: input-output characteristic of a uniform midtread quantizer.)
3.6 Quantization Noise

Assume $g(\cdot)$ assigns the midpoint of each step interval as the representation level. Then

$$\Pr\{Q \le q\} = \Pr\Big\{(M \bmod \Delta) - \frac{\Delta}{2} \le q\Big\} = \begin{cases} 0, & q < -\Delta/2 \\[0.5ex] \dfrac{q}{\Delta} + \dfrac{1}{2}, & -\Delta/2 \le q < \Delta/2 \\[0.5ex] 1, & q \ge \Delta/2. \end{cases}$$

Or, in terms of the pdf, $f_Q(q) = \dfrac{1}{\Delta}$ for $-\Delta/2 \le q < \Delta/2$.

$$\mathrm{SNR}_O = \frac{P}{\sigma_Q^2} = \frac{P}{\dfrac{1}{\Delta}\displaystyle\int_{-\Delta/2}^{\Delta/2} q^2\, dq} = \frac{P}{\Delta^2/12} = \frac{3P}{m_{\max}^2}\, L^2, \qquad \text{since } \Delta = \frac{2 m_{\max}}{L}.$$
Example 3.1 Sinusoidal Modulating Signal

Let $m(t) = A_m \cos(2\pi f_c t)$. Then

$$P = \frac{A_m^2}{2} \quad \text{and} \quad m_{\max} = A_m \ \Rightarrow\ \mathrm{SNR}_O = \frac{3(A_m^2/2)}{A_m^2}\, L^2 = \frac{3}{2}\, 4^R = (1.8 + 6R)\ \mathrm{dB},$$

where $L = 2^R$.

L      R     SNR_O (dB)
32     5     31.8
64     6     37.8
128    7     43.8
256    8     49.8
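The 1.8 + 6R dB rule can be checked with a small simulation. This is a sketch assuming a midrise uniform quantizer with midpoint representation levels (names are illustrative):

```python
import math, random

def uniform_quantize(x, m_max, L):
    """Uniform midrise quantizer: L levels of width delta = 2*m_max/L,
    each input mapped to the midpoint of its interval."""
    delta = 2.0 * m_max / L
    k = math.floor(x / delta)
    k = max(-L // 2, min(L // 2 - 1, k))   # clip to the available levels
    return (k + 0.5) * delta

random.seed(0)
Am, R = 1.0, 8
L = 2 ** R                                  # 256 representation levels
# Sinusoid samples with uniformly random phase (power Am^2/2, m_max = Am).
m = [Am * math.cos(2 * math.pi * random.random()) for _ in range(100000)]
P = sum(s * s for s in m) / len(m)
Nq = sum((uniform_quantize(s, Am, L) - s) ** 2 for s in m) / len(m)
snr_db = 10 * math.log10(P / Nq)
print(snr_db)   # close to 1.8 + 6*8 = 49.8 dB
```

The measured value lands near 49.9 dB, since the exact constant is 10 log10(3/2) = 1.76 dB rather than 1.8.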
3.6 Optimality of Scalar Quantizers

(I) For fixed representation levels $\{v_k\}$, the optimal partition is the nearest-neighbor rule:

$$I_k = \{m \in [-A, A) : d(m, v_k) \le d(m, v_j) \text{ for all } 1 \le j \le L\}.$$
(II) For fixed $\{I_k\}$, determine the optimal $\{v_k\}$:

$$\min_{\{v_k\}} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\, f_M(m)\, dm.$$

Since

$$\frac{\partial}{\partial v_j} \sum_{k=1}^{L} \int_{I_k} d(m, v_k)\, f_M(m)\, dm = \frac{\partial}{\partial v_j} \int_{I_j} d(m, v_j)\, f_M(m)\, dm = \int_{I_j} \frac{\partial\, d(m, v_j)}{\partial v_j}\, f_M(m)\, dm,$$

setting this derivative to zero for each $j$ yields the optimal levels.

(Figure: partitions $I_1, I_2, \ldots, I_{L-1}, I_L$ of $[-A, A)$.)
Example: Mean-Square Distortion

(II) A necessary condition for the optimal $v_j$ is:

$$\frac{\partial}{\partial v_j} \int_{m_j}^{m_{j+1}} (m - v_j)^2\, f_M(m)\, dm = -2\int_{m_j}^{m_{j+1}} (m - v_j)\, f_M(m)\, dm = 0.$$

$$v_{j,\text{optimal}} = \frac{\displaystyle\int_{m_j}^{m_{j+1}} m\, f_M(m)\, dm}{\displaystyle\int_{m_j}^{m_{j+1}} f_M(m)\, dm} = E[M \mid m_j \le M < m_{j+1}].$$
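The two necessary conditions suggest alternating between them until the levels stop moving; this is Lloyd's algorithm. A minimal sketch on an empirical sample set, assuming mean-square distortion (function names are illustrative):

```python
import random

def lloyd_max(samples, L, iterations=30):
    """Alternate the two optimality conditions on an empirical sample set:
    (I) nearest-neighbor partition for fixed levels;
    (II) centroid (conditional mean) level for each cell."""
    levels = sorted(random.sample(samples, L))
    for _ in range(iterations):
        cells = [[] for _ in range(L)]
        for m in samples:                        # (I) nearest-neighbor rule
            k = min(range(L), key=lambda j: (m - levels[j]) ** 2)
            cells[k].append(m)
        levels = sorted(sum(c) / len(c) if c else levels[j]   # (II) centroids
                        for j, c in enumerate(cells))
    return levels

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(3000)]
levels = lloyd_max(data, L=4)
mse = sum(min((m - v) ** 2 for v in levels) for m in data) / len(data)
print(levels, mse)   # near the optimal 4-level Gaussian quantizer
```

For the Gaussian source the known optimal 4-level quantizer has levels near ±0.45 and ±1.51 with MSE about 0.12; the empirical result should be close, though Lloyd's algorithm only guarantees a local optimum.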
3.7 Pulse-Code Modulation

Non-uniform quantizers used for telecommunication (ITU-T G.711)

ITU-T G.711: Pulse Code Modulation (PCM) of Voice Frequencies (1972)

It consists of two laws: the A-law (mainly used in Europe) and the mu-law (mainly used in the US and Japan).

This design helps to protect weak signals, which occur more frequently in, say, human voice.

3.7 Quantization Laws

A-law
  13-bit uniformly quantized
  Conversion to an 8-bit code

mu-law
  14-bit uniformly quantized
  Conversion to an 8-bit code

These two are referred to as compression laws, since they use 8 bits to (lossily) represent 13- (or 14-)bit information.
3.7 A-law in G.711

A-law (A = 87.6):

$$F_{A\text{-law}}(m) = \begin{cases} \dfrac{A}{1 + \log(A)}\, m, & |m| \le \dfrac{1}{A} \\[2ex] \mathrm{sgn}(m)\, \dfrac{1 + \log(A|m|)}{1 + \log(A)}, & \dfrac{1}{A} \le |m| \le 1. \end{cases}$$

The first branch is a linear mapping; the second is a logarithmic mapping.

(Figure: the compressor characteristic $F_{A\text{-law}}(m)$, output versus input over $[-1, 1]$.)
The 8-bit PCM code is a piecewise linear approximation to the A-law.

(Figure: horizontal axis, 13-bit uniform quantization of the input, from -4096 to 4096; vertical axis, 8-bit output code, from -128 to 128; the smooth curve $F_{A\text{-law}}(m)$ is overlaid on the staircase approximation.)
3.7 mu-law in G.711

mu-law compressor (mu = 255):

$$F_{\mu\text{-law}}(m) = \mathrm{sgn}(m)\, \frac{\log(1 + \mu|m|)}{\log(1 + \mu)} \qquad \text{for } |m| \le 1.$$
(Figure: the compressor characteristic $F_{\mu\text{-law}}(m)$, output versus input over $[-1, 1]$; weak inputs are boosted toward larger output values.)
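The mu-law compressor and its inverse (the expander) can be sketched directly from the formula; here log denotes the natural logarithm and mu = 255 as in G.711 (function names are illustrative):

```python
import math

MU = 255.0   # G.711 mu-law parameter

def mu_compress(m):
    """Compressor F(m) = sgn(m) * log(1 + mu|m|) / log(1 + mu), |m| <= 1."""
    return math.copysign(math.log1p(MU * abs(m)) / math.log1p(MU), m)

def mu_expand(v):
    """Expander: the exact inverse of the compressor."""
    return math.copysign(math.expm1(abs(v) * math.log1p(MU)) / MU, v)

m = 0.01                                  # a weak input
v = mu_compress(m)
print(v)                                  # ~0.23: weak inputs are boosted
print(abs(mu_expand(v) - m))              # round-trip error ~ 0
```

The boost for weak inputs is exactly what protects low-level voice segments before uniform 8-bit quantization; G.711 itself uses the piecewise linear chord/step approximation of this curve rather than the analytic formula.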
Piecewise linear approximation of the mu-law compressor (assume nonnegative m):

Raised Input Values (Bits 12..0)        Compressed Code Word (Bits 6..0: Chord, Step)
0 0 0 0 0 0 0 1 a b c d x               0 0 0 a b c d
0 0 0 0 0 0 1 a b c d x x               0 0 1 a b c d
0 0 0 0 0 1 a b c d x x x               0 1 0 a b c d
0 0 0 0 1 a b c d x x x x               0 1 1 a b c d
0 0 0 1 a b c d x x x x x               1 0 0 a b c d
0 0 1 a b c d x x x x x x               1 0 1 a b c d
0 1 a b c d x x x x x x x               1 1 0 a b c d
1 a b c d x x x x x x x x               1 1 1 a b c d

Raised Input = Input + 33 = Input + 21H.
(For negative m, the raised input becomes Input - 33.)

An additional sign bit is used to indicate whether the input signal is positive (1) or negative (0).
Comparison of the A-law and mu-law as specified in G.711.

(Figure: the two compressor characteristics plotted over $[-1, 1]$; the A-law and mu-law curves nearly coincide.)
3.7 Coding
After the quantizer provides a symbol representing one of
256 possible levels (8 bits of information) at each sampled
time, the encoder will transform the symbol (or several
symbols) into a code character (or code word) that is
suitable for transmission over a noisy channel.
Example. Binary code: 1 1 1 0 0 1 0 0.

(Figure: the corresponding waveform, annotated with 0 = change and 1 = no change.)
3.7 Coding

Example. Ternary code (pseudo-binary code) with symbols A, B, C:

0 0 0 1 1 0 1 1  ->  A C A B B C B B

With the help of coding, the receiver may be able to detect (or even correct) transmission errors due to noise. For example, it is impossible to receive A B A B B A B B, since this is not a legitimate code word (character).
3.7 Coding

Example of an error-correcting code: the three-times repetition code (used to protect the Bluetooth packet header).

0 0 0 1 1 0 1 1  ->  000,000,000,111,111,000,111,111

Then the majority law can be applied at the receiver to correct a one-bit error in each group.

Channel (error-correcting) codes are designed to compensate for channel noise, while line codes are simply used as the electrical representation of a binary data stream over the electrical line.
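The repetition code and its majority-law decoder can be sketched in a few lines (function names are illustrative):

```python
def repeat3_encode(bits):
    """Three-times repetition code: each bit is sent three times."""
    return [b for b in bits for _ in range(3)]

def repeat3_decode(received):
    """Majority law over each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

data = [0, 0, 0, 1, 1, 0, 1, 1]
tx = repeat3_encode(data)
tx[4] ^= 1                          # a single channel error
print(repeat3_decode(tx) == data)   # True: the one-bit error is corrected
```

One flipped bit per group is always corrected; two flips in the same group would defeat the majority vote, which is why the code only guarantees single-error correction.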
3.7 Line codes

(a) Unipolar nonreturn-to-zero (NRZ) signaling
(b) Polar nonreturn-to-zero (NRZ) signaling
(c) Unipolar return-to-zero (RZ) signaling
(d) Bipolar return-to-zero (BRZ) signaling
(e) Split-phase (Manchester code)

Model the line-coded signal as $s(t) = \sum_n a_n\, g(t - nT_b)$; hence

$$S(f) = G(f) \sum_{n=-\infty}^{\infty} a_n\, e^{-j2\pi f n T_b} \qquad \text{and} \qquad S_{2NT_b}(f) = G(f) \sum_{n=-N}^{N-1} a_n\, e^{-j2\pi f n T_b},$$

$$\mathrm{PSD} = \lim_{N\to\infty} \frac{1}{2NT_b}\, |G(f)|^2 \sum_{n=-N}^{N-1}\sum_{m=-N}^{N-1} E[a_n a_m^*]\, e^{-j2\pi f (n-m) T_b}.$$
$$\mathrm{PSD} = \lim_{N\to\infty} \frac{1}{2NT_b}\, |G(f)|^2 \sum_{n=-N}^{N-1}\sum_{m=-N}^{N-1} E[a_n a_m^*]\, e^{-j2\pi f (n-m) T_b}$$

$$= |G(f)|^2 \lim_{N\to\infty} \frac{1}{2NT_b} \sum_{m=-N}^{N-1} \sum_{n=-\infty}^{\infty} \phi_a(n - m)\, e^{-j2\pi f (n-m) T_b}$$

$$= |G(f)|^2 \lim_{N\to\infty} \frac{1}{2NT_b} \sum_{m=-N}^{N-1} \sum_{k=-\infty}^{\infty} \phi_a(k)\, e^{-j2\pi f k T_b} \qquad (k = n - m)$$

$$= |G(f)|^2\, \frac{1}{T_b} \sum_{k=-\infty}^{\infty} \phi_a(k)\, e^{-j2\pi f k T_b},$$

where $\phi_a(k) = E[a_n a_{n+k}^*]$. For i.i.d. $\{a_n\}$ with mean $\mu_a$ and variance $\sigma_a^2$,

$$\frac{1}{T_b} \sum_{k=-\infty}^{\infty} \phi_a(k)\, e^{-j2\pi f k T_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b} \sum_{k=-\infty}^{\infty} e^{-j2\pi f k T_b} = \frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta\Big(f - \frac{k}{T_b}\Big).$$
3.7 Power spectra of line codes

PSD of unipolar NRZ (i.i.d. equiprobable $a_n \in \{0, A\}$, so $\mu_a = A/2$ and $\sigma_a^2 = A^2/4$, with $G(f) = T_b\,\mathrm{sinc}(fT_b)$):

$$\mathrm{PSD}_{\text{U-NRZ}} = |G(f)|^2 \Big[\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2} \sum_{k=-\infty}^{\infty} \delta(f - k/T_b)\Big]$$

$$= A^2 T_b^2\, \mathrm{sinc}^2(fT_b)\Big[\frac{1}{4T_b} + \frac{1}{4T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\Big]$$

$$= \frac{A^2 T_b}{4}\, \mathrm{sinc}^2(fT_b)\Big[1 + \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\Big] = \frac{A^2 T_b}{4}\, \mathrm{sinc}^2(fT_b) + \frac{A^2}{4}\,\delta(f),$$

where the last step uses $\mathrm{sinc}(k) = 0$ for every integer $k \ne 0$ and $\mathrm{sinc}(0) = 1$.
3.7 Power spectra of line codes

Unipolar return-to-zero (RZ) signaling

An attractive feature of this line code is the presence of delta functions at f = -1/Tb, 0, 1/Tb in the PSD, which can be used for bit-timing recovery at the receiver.

Disadvantage: it requires 3 dB more power than polar return-to-zero signaling.
3.7 Power spectra of line codes

Bipolar return-to-zero (BRZ) signaling

Also named alternate mark inversion (AMI) signaling.

No DC component and relatively insignificant low-frequency components in the PSD.

$$s(t) = \sum_{n=-\infty}^{\infty} a_n\, g(t - nT_b), \qquad g(t) = \begin{cases} A, & 0 \le t < T_b/2 \\ 0, & \text{otherwise,} \end{cases}$$

where successive 1s take alternating $\pm 1$ amplitudes and 0s take amplitude 0, so that $\phi_a(0) = \tfrac12$, $\phi_a(\pm 1) = -\tfrac14$, and $\phi_a(k) = 0$ otherwise. Hence

$$\mathrm{PSD}_{\text{BRZ}} = |G(f)|^2\, \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\phi_a(k)\, e^{-j2\pi f k T_b} = \frac{A^2 T_b}{4}\, \mathrm{sinc}^2\!\Big(\frac{fT_b}{2}\Big)\cdot\frac{1}{2}\big(1 - \cos(2\pi f T_b)\big)$$

$$= \frac{A^2 T_b}{8}\, \mathrm{sinc}^2\!\Big(\frac{fT_b}{2}\Big)\big(1 - \cos(2\pi f T_b)\big).$$
3.7 Power spectra of line codes

PSD of the Manchester code (polar i.i.d. $a_n \in \{\pm 1\}$, so $\mu_a = 0$ and $\sigma_a^2 = 1$; the split-phase pulse has $|G(f)|^2 = A^2 T_b^2\, \mathrm{sinc}^2(fT_b/2)\, \sin^2(\pi f T_b/2)$):

$$\mathrm{PSD}_{\text{Manchester}} = |G(f)|^2 \Big[\frac{\sigma_a^2}{T_b} + \frac{\mu_a^2}{T_b^2}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\Big] = A^2 T_b\, \mathrm{sinc}^2\!\Big(\frac{fT_b}{2}\Big)\sin^2\!\Big(\frac{\pi f T_b}{2}\Big).$$

Adjust $A$ in each line code so that $\int_{-\infty}^{\infty} \mathrm{PSD}(f)\, df = 1$ (normalize the transmission power). Then

$$\mathrm{PSD}_{\text{U-NRZ,Normalized}} = \frac{T_b}{2}\,\mathrm{sinc}^2(fT_b) + \frac{1}{2}\,\delta(f)$$

$$\mathrm{PSD}_{\text{P-NRZ,Normalized}} = T_b\,\mathrm{sinc}^2(fT_b)$$

$$\mathrm{PSD}_{\text{U-RZ,Normalized}} = \frac{T_b}{4}\,\mathrm{sinc}^2\!\Big(\frac{fT_b}{2}\Big)\Big[1 + \frac{1}{T_b}\sum_{k=-\infty}^{\infty}\delta(f - k/T_b)\Big]$$

$$\mathrm{PSD}_{\text{BRZ,Normalized}} = T_b\,\mathrm{sinc}^2\!\Big(\frac{fT_b}{2}\Big)\sin^2(\pi f T_b)$$

$$\mathrm{PSD}_{\text{Manchester,Normalized}} = T_b\,\mathrm{sinc}^2\!\Big(\frac{fT_b}{2}\Big)\sin^2\!\Big(\frac{\pi f T_b}{2}\Big)$$
From the integration standpoint, $\mathrm{PSD}(f)\, df = \mathrm{PSD}\Big(\dfrac{f'}{T_b}\Big)\dfrac{df'}{T_b}$ for $f' = fT_b$, but $T_b\,\delta(f)\, df = \delta(f')\, df'$. In terms of $f'$, the normalized PSDs become

$$\mathrm{PSD}_{\text{U-NRZ,Normalized}} = \frac{1}{2}\,\mathrm{sinc}^2(f') + \frac{1}{2}\,\delta(f')$$

$$\mathrm{PSD}_{\text{P-NRZ,Normalized}} = \mathrm{sinc}^2(f')$$

$$\mathrm{PSD}_{\text{U-RZ,Normalized}} = \frac{1}{4}\,\mathrm{sinc}^2\!\Big(\frac{f'}{2}\Big) + \frac{1}{4}\sum_{k=-\infty}^{\infty}\mathrm{sinc}^2\!\Big(\frac{k}{2}\Big)\,\delta(f' - k)$$

$$\mathrm{PSD}_{\text{BRZ,Normalized}} = \mathrm{sinc}^2\!\Big(\frac{f'}{2}\Big)\sin^2(\pi f')$$

$$\mathrm{PSD}_{\text{Manchester,Normalized}} = \mathrm{sinc}^2\!\Big(\frac{f'}{2}\Big)\sin^2\!\Big(\frac{\pi f'}{2}\Big)$$

(Figure: the five normalized PSDs plotted against $f'$ over $[0, 2]$; the U-NRZ and U-RZ spectra carry delta functions, with the U-NRZ delta of weight 1/2 at $f' = 0$.)
3.7 Differential encoding with unipolar NRZ line coding

Convention: 1 = no change and 0 = change; with input $o_n$ and encoded output $d_n$,

$$d_n = d_{n-1} \oplus \bar{o}_n = \overline{d_{n-1} \oplus o_n}.$$
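The convention (1 = no change, 0 = change) can be sketched as an encoder/decoder pair; the initial reference bit d0 = 0 is an illustrative assumption:

```python
def diff_encode(bits, d0=0):
    """Encoder for the convention 1 = no change, 0 = change:
    d[n] = XNOR(d[n-1], o[n])."""
    out, prev = [], d0
    for o in bits:
        prev = 1 - (prev ^ o)       # XNOR
        out.append(prev)
    return out

def diff_decode(coded, d0=0):
    """Decoder: output 1 if the level is unchanged, 0 if it changed."""
    out, prev = [], d0
    for d in coded:
        out.append(1 - (prev ^ d))
        prev = d
    return out

msg = [1, 1, 1, 0, 0, 1, 0, 0]
enc = diff_encode(msg)
print(diff_decode(enc) == msg)               # True
inverted = [1 - d for d in enc]              # line polarity flipped
print(diff_decode(inverted)[1:] == msg[1:])  # True: only the first bit is lost
```

This illustrates the main virtue of differential encoding: since the decoder only compares adjacent levels, an accidental polarity inversion on the line corrupts at most the first decoded bit.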
3.7 Regeneration

Regenerative repeater for PCM systems

It can completely remove the distortion, provided the decision-making device makes the right decision (on 1 or 0).
3.7 Decoding & Filtering

After regenerating the received pulse for the last time, the receiver decodes and regenerates the original message signal (with acceptable quantization error).

Finally, a lowpass reconstruction filter whose cutoff frequency equals the message bandwidth W is applied at the end (to remove the unnecessary high-frequency components due to quantization).
3.8 Noise Consideration in PCM Systems

The main effect of channel noise is to introduce bit errors.

Notably, the symbol error rate is quite different from the bit error rate.

A symbol error may be caused by a one-bit error, a two-bit error, a three-bit error, and so on; in general, one cannot derive the symbol error rate from the bit error rate (or vice versa) unless some special assumption is made.

Considering the reconstruction of the original analog signal, a bit error in the most significant bit is more harmful than a bit error in the least significant bit.
3.8 Error Threshold

Influence of Eb/N0 on the BER at a transmission rate of 10^5 b/s:

Eb/N0 (dB)    BER        About one error in every
4.3           10^-2      10^-3 second
8.4           10^-4      10^-1 second
10.6          10^-6      10 seconds
12.0          10^-8      20 minutes
13.0          10^-10     1 day
14.0          10^-12     3 months

The output signal-to-noise ratio of an analog FM receiver without pre/de-emphasis is typically 40-50 dB. Pre/de-emphasis may reduce the requirement by 13 dB.
3.9 Time-division multiplexing

An important feature of the sampling process is the conservation of time:

In principle, the communication link is used only at the sampling time instants.

Hence, it may be feasible to put samples of other messages between adjacent samples of this message, on a time-shared basis.

This forms the time-division multiplexing (TDM) system:

A joint utilization of a common communication link by a plurality of independent message sources.
3.9 Time-division multiplexing
Example 3.2 The T1 system

T1 system

Carries 24 voice channels (64 kb/s each) with regenerative repeaters spaced at approximately 2-km intervals.

Each voice signal is essentially limited to a band from 300 to 3100 Hz.
  Anti-aliasing filter with W = 3.1 kHz
  Sampling rate = 8 kHz (> 2W = 6.2 kHz)

ITU-T G.711 mu-law is used with mu = 255.

Each frame consists of 24 x 8 + 1 = 193 bits, where a single bit is added at the end of the frame for the purpose of synchronization.
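The T1 frame arithmetic above can be checked directly:

```python
bits_per_frame = 24 * 8 + 1     # 24 channels x 8 bits + 1 synchronization bit
frame_rate = 8000               # frames per second = sampling rate (8 kHz)
line_rate = bits_per_frame * frame_rate
print(bits_per_frame, line_rate)   # 193 bits/frame, 1544000 b/s (1.544 Mb/s)

per_channel = 8 * frame_rate
print(per_channel)                 # 64000 b/s per voice channel (DS0)
```

The resulting 1.544 Mb/s is exactly the DS1 rate that reappears in the digital-multiplexer examples below.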
3.10 Digital multiplexers
3.10 Digital multiplexers
Digital multiplexers are categorized into two major groups:

1. 1st group: multiplexes digital computer data for TDM transmission over the public switched telephone network.
   Requires the use of modem technology.

2. 2nd group: multiplexes low-bit-rate digital voice data into a high-bit-rate voice stream.
   Fits into a hierarchy that varies from one country to another.
   Usually, the hierarchy starts at 64 kb/s, named digital signal zero (DS0).
3.10 North American digital TDM hierarchy
The combined bit rate is higher than the sum of the incoming bit rates, because of the addition of bit stuffing and control signals.
3.10 Digital multiplexers
Synchronization and rate variation problems may be
resolved by bit stuffing.
Example 3.3. AT&T M12 (second-level multiplexer)
24 control bits are stuffed, and separated by sequences
of 48 data bits (12 from each DS1 input).
Example 3.3 AT&T M12 multiplexer

The control bits are labeled F, M, and C.

Frame markers: in the sequence F0F1F0F1F0F1F0F1, where F0 = 0 and F1 = 1.

Subframe markers: in the sequence M0M1M1M1, where M0 = 0 and M1 = 1.

Stuffing indicators: in the sequence CI CI CI CII CII CII CIII CIII CIII CIV CIV CIV. Three 1s in CjCjCj indicate that a stuffing bit is added in the position of the first information bit associated with the first DS1 bit stream that follows the F1 control bit in the same subframe; three 0s in CjCjCj imply no stuffing.

The receiver should use the majority law to check whether a stuffing bit is added.
Example 3.3 AT&T M12 multiplexer

For M12 framing,

f_in = 1.544 Mb/s (each DS1 input), f_out = 6.312 Mb/s,
M = 288 x 4 + 24 = 1176 bits per frame, L = 288 bits per DS1 input per frame.

$$\text{Duration of a frame} = \frac{M}{f_{\text{out}}} = S\,\frac{L-1}{f_{\text{in}}} + (1 - S)\,\frac{L}{f_{\text{in}}},$$

where $S$ is the average rate at which one information bit is replaced by a stuffed bit. Solving,

$$S = L - M\,\frac{f_{\text{in}}}{f_{\text{out}}} = 288 - 1176 \times \frac{1.544}{6.312} = 0.334601.$$

Since $0 \le S \le 1$,

$$\frac{288}{1176} \times 6.312 = 1.5458 \ \ge\ f_{\text{in}}\ \ge\ \frac{287}{1176} \times 6.312 = 1.54043 \quad (\text{Mb/s}).$$
Example 3.3 AT&T M12 multiplexer

This results in an allowable tolerance range of

$$1.5458 - 1.54043 = 6.312/1176 = 5.36735 \text{ kb/s}.$$
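The stuffing-rate arithmetic can be checked directly (variable names are illustrative):

```python
f_in, f_out = 1.544, 6.312      # DS1 input rate and M12 output rate (Mb/s)
M, L = 1176, 288                # frame bits and per-DS1 bits per frame
S = L - M * f_in / f_out        # average stuffed bits per DS1 per frame
print(S)                        # ~0.3346

f_max = (L / M) * f_out         # S = 0: no slot left for stuffing
f_min = ((L - 1) / M) * f_out   # S = 1: a stuff in every frame
print(f_max, f_min)             # ~1.5458 and ~1.54043 Mb/s
print((f_max - f_min) * 1000)   # tolerance range, ~5.367 kb/s
```

The tolerance range is what lets the M12 absorb clock-rate variation among its four DS1 tributaries.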
3.11 Virtues, limitations, and modifications of PCM

Two limitations of PCM systems (in the past):

Complexity

Bandwidth

Nowadays, with the advance of VLSI technology, and with the availability of wideband communication channels (such as fiber) and compression techniques (to reduce the bandwidth demand), the above two limitations are greatly relaxed.
$$m_q[n] = m_q[n-1] + e_q[n] = \sum_{j=-\infty}^{n} e_q[j].$$
3.12 Delta modulation

Slope overload distortion

To eliminate slope overload distortion, it is required that

$$\frac{\Delta}{T_s} \ \ge\ \max\Big|\frac{dm(t)}{dt}\Big| \qquad \text{(otherwise slope overload occurs).}$$

So increasing the step size $\Delta$ can reduce slope-overload distortion.

An alternative solution is to use a dynamic $\Delta$. (Often, a delta modulator with fixed step size is referred to as a linear delta modulator, due to its fixed slope, a basic property of linearity.)
3.12 Delta-sigma modulation
Delta-sigma modulation
In fact, the delta modulation distortion can be reduced
by increasing the correlation between samples.
This can be achieved by integrating the message signal
m(t) prior to delta modulation.
The integration process is equivalent to a pre-
emphasis of the low-frequency content of the input
signal.
3.12 Delta-sigma modulation

A straightforward structure: integrate m(t) first, then delta-modulate.

Since integration is a linear operation, the two integrators before the comparator can be combined into one placed after the comparator.

Since

$$\Delta_q[n] = i_q[n] - i_q[n-1] \approx i[n] - i[n-1] = \int_{(n-1)T_s}^{nT_s} m(t)\, dt \approx m(nT_s)\, T_s,$$

the decoded differences directly track samples of the message.
3.12 Delta modulation

Final notes

Delta modulation trades channel bandwidth (e.g., a much higher sampling rate) for reduced system complexity (e.g., the receiver only demands a lowpass filter).

Can we trade increased system complexity for reduced channel bandwidth? Yes, by means of prediction.

In Section 3.13, we will introduce the basics of prediction. Its applications will be addressed in subsequent sections. The predictor output is a linear combination of past samples:

$$\hat{x}[n] = \sum_{k=1}^{p} w_k\, x[n-k].$$
3.13 Linear prediction

Design objective

Find the filter coefficients $w_1, w_2, \ldots, w_p$ that minimize the index of performance

$$J = E[e^2[n]], \qquad \text{where } e[n] = x[n] - \hat{x}[n].$$

$$J = E\Big[\Big(x[n] - \sum_{k=1}^{p} w_k\, x[n-k]\Big)^2\Big]$$

$$= E[x^2[n]] - 2\sum_{k=1}^{p} w_k\, E[x[n]\, x[n-k]] + \sum_{k=1}^{p}\sum_{j=1}^{p} w_k w_j\, E[x[n-k]\, x[n-j]]$$

$$= R_X[0] - 2\sum_{k=1}^{p} w_k\, R_X[k] + \sum_{k=1}^{p}\sum_{j=1}^{p} w_k w_j\, R_X[k - j].$$

Setting the partial derivatives to zero,

$$\frac{\partial J}{\partial w_i} = -2 R_X[i] + 2\sum_{j=1}^{p} w_j\, R_X[i - j] = 0.$$
$$\sum_{j=1}^{p} w_j\, R_X[i - j] = R_X[i] \qquad \text{for } 1 \le i \le p.$$

The coefficient matrix $[R_X[i-j]]_{i,j}$ is constant along each diagonal; such a matrix is said to be Toeplitz. By the symmetry of $R_X$, this Toeplitz matrix can be uniquely determined by $p$ elements, $[a_0, a_1, \ldots, a_{p-1}]$.
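The normal equations above can be solved numerically. The sketch below uses plain Gaussian elimination and assumes an AR(1) autocorrelation $R[k] = a^{|k|}$, for which the optimal one-step predictor is known to be $w = (a, 0, \ldots, 0)$:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (fine for small systems)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][j] * w[j] for j in range(r + 1, n))) / M[r][r]
    return w

# Wiener-Hopf equations sum_j w_j R[i-j] = R[i] for an AR(1) process.
a, p = 0.8, 3
R = [a ** k for k in range(p + 1)]
A = [[R[abs(i - j)] for j in range(p)] for i in range(p)]   # symmetric Toeplitz
b = [R[i + 1] for i in range(p)]
w = solve(A, b)
print(w)   # ~[0.8, 0.0, 0.0]
```

In practice the Toeplitz structure would be exploited with the Levinson-Durbin recursion, which solves the same system in O(p^2) rather than O(p^3) operations.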
3.13 Linear adaptive predictor

The optimal weight vector can only be obtained with knowledge of the autocorrelation function.

Question: What if the autocorrelation function is unknown?

Answer: Use a linear adaptive predictor.
The gradient components $g_i[n]$ can be approximated by replacing expectations with instantaneous values:

$$g_i[n] = \frac{\partial J}{\partial w_i} = -2 R_X[i] + 2\sum_{j=1}^{p} w_j\, R_X[i - j]$$

$$\approx -2\, x[n]\, x[n-i] + 2\sum_{j=1}^{p} w_j[n]\, x[n-j]\, x[n-i] = -2\, x[n-i]\Big(x[n] - \sum_{j=1}^{p} w_j[n]\, x[n-j]\Big).$$

The steepest-descent update then becomes (with adaptation step size $\mu$)

$$w_i[n+1] = w_i[n] + \mu\, x[n-i]\Big(x[n] - \sum_{j=1}^{p} w_j[n]\, x[n-j]\Big) = w_i[n] + \mu\, x[n-i]\, e[n].$$
3.13 Least mean square

The pair below constitutes the popular least-mean-square (LMS) algorithm for linear adaptive prediction:

$$e[n] = x[n] - \sum_{j=1}^{p} w_j[n]\, x[n-j]$$

$$w_j[n+1] = w_j[n] + \mu\, x[n-j]\, e[n].$$
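The LMS pair can be sketched end to end; here it identifies the coefficients of a synthetic AR(2) process (the step size mu and the process parameters are illustrative choices):

```python
import random

def lms_predict(x, p=2, mu=0.05):
    """LMS adaptive linear prediction:
    e[n] = x[n] - sum_j w_j[n] x[n-j];  w_j[n+1] = w_j[n] + mu x[n-j] e[n]."""
    w = [0.0] * p
    for n in range(p, len(x)):
        e = x[n] - sum(w[j] * x[n - 1 - j] for j in range(p))
        for j in range(p):
            w[j] += mu * x[n - 1 - j] * e
    return w

random.seed(2)
x = [0.0, 0.0]
for _ in range(20000):    # a stable AR(2) process the predictor can learn
    x.append(1.2 * x[-1] - 0.5 * x[-2] + random.gauss(0.0, 0.1))
w = lms_predict(x)
print(w)   # approaches the AR coefficients (1.2, -0.5)
```

No autocorrelation function is ever computed: the instantaneous-gradient updates converge toward the Wiener solution on their own, at a rate governed by mu and the input power.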
$$\text{Quantization Noise Power} = \frac{\Delta^2}{12} = \frac{1}{12}\Big(\frac{2 m_{\max}}{L}\Big)^2 = \frac{m_{\max}^2}{3L^2}.$$
3.14 DPCM

For DPCM, the quantization error is on e[n], rather than on m[n] as for PCM. So the quantization error q[n] is supposed to be smaller.
3.14 DPCM

Derivation:

$$e_q[n] = e[n] + q[n]$$

$$m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n] = m[n] + q[n], \qquad \text{since } e[n] = m[n] - \hat{m}[n].$$
3.14 DPCM

Notes

A DM system can be treated as a special case of DPCM.

3.14 DPCM

Distortions due to DPCM:

Slope overload distortion: the input signal changes too rapidly for the prediction filter to track it.

Granular noise: the quantizer step size is too coarse for slowly varying segments of the input.
3.14 Processing Gain

The DPCM system can be described by

$$m_q[n] = m[n] + q[n],$$

so the output signal-to-noise ratio is

$$\mathrm{SNR}_O = \frac{E[m^2[n]]}{E[q^2[n]]}.$$

We can rewrite SNR_O as

$$\mathrm{SNR}_O = \frac{E[m^2[n]]}{E[e^2[n]]}\cdot\frac{E[e^2[n]]}{E[q^2[n]]} = G_p\,\mathrm{SNR}_Q,$$

where $e[n] = m[n] - \hat{m}[n]$ is the prediction error,

$$G_p = \frac{E[m^2[n]]}{E[e^2[n]]} \ \text{(processing gain)}, \qquad \mathrm{SNR}_Q = \frac{E[e^2[n]]}{E[q^2[n]]} \ \text{(signal-to-quantization-noise ratio)}.$$

Notably, SNR_Q can be treated as the SNR for the subsystem $e_q[n] = e[n] + q[n]$.
3.14 Processing Gain

Usually, the contribution of SNR_Q to SNR_O is fixed and limited.

One additional bit in quantization results in a 6 dB improvement.

Gp is the processing gain due to prediction: the better the prediction, the larger Gp.

3.14 DPCM

Final notes on DPCM

Comparing DPCM with PCM for voice signals, the improvement is around 4-11 dB, depending on the prediction order.

The greatest improvement occurs in going from no prediction to first-order prediction, with some additional gain resulting from increasing the prediction order up to 4 or 5, after which little additional gain is obtained.

For the same sampling rate (8 kHz) and signal quality, DPCM may provide a saving of about 8-16 kb/s compared to standard PCM (64 kb/s).
3.14 DPCM

(Figure: speech quality versus bit rate for standard codecs; source: IEEE Communications Magazine, September 1997. PCM (G.711) and ADPCM (G.726, G.727) rate "good" to "excellent" at 32-64 kb/s; lower-rate codecs such as G.728, G.729, G.723.1, IS-641, IS-54, IS-96, GSM, JDC, FS-1016, FS-1015, and MELP 2.4 trade bit rate (2-16 kb/s) for reduced speech quality.)
3.15 Adaptive quantization

Adaptive quantization refers to a quantizer that operates with a time-varying step size Δ[n].

Δ[n] is adjusted according to the power of the input sample m[n]:

Power = variance, if m[n] is zero-mean, so

$$\Delta[n] \propto \sqrt{E[m^2[n]]}.$$

In practice, we can only obtain an estimate of E[m^2[n]].
3.15 AQF

AQF (adaptive quantization with forward estimation, based on unquantized samples) is in principle a more accurate estimator. However, it requires
an additional buffer to store unquantized samples for the
learning period.
explicit transmission of level information to the receiver
(the receiver, even without noise, only has the quantized
samples).
a processing delay (around 16 ms for speech) due to
buffering and other operations from the use of AQF.
The above requirements can be relaxed by using AQB.
3.15 AQB
3.15 APF and APB
Likewise, the prediction approach used in ADPCM can be
classified into:
Adaptive prediction with forward estimation (APF)
Prediction based on unquantized samples of the input
signals.
Adaptive prediction with backward estimation (APB)
Prediction based on quantized samples of the input
signals.
The pros and cons of APF versus APB parallel those of AQF versus AQB. APB and AQB are the preferred combination in practical applications.
3.15 ADPCM
Adaptive prediction
with backward
estimation (APB).
3.16 Computer experiment: Adaptive delta modulation

(Figure: ADM block diagram with signals e[n], e_q[n], and e_q[n-1]; the original slide notes that this figure may be incorrect.)

In this section, the simplest form of ADPCM modulation with AQB is simulated, namely, ADM with AQB. Comparison with LDM (linear DM), where the step size is fixed, will also be performed.
74
3.16 Computer experiment: Adaptive delta
modulation
1 eq [n 1]
[n 1] 1 + , if [n 1] min
[n ] = 2 e [n ]
q
if [n 1] < min
min ,
f 1
m(t ) = 10 sin 2 s t , LDM = 1 and min =
100 8
LDM ADM
75
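The ADM rule above can be sketched against LDM; details such as the initial step, the initial reference bit, and the test signal below are illustrative assumptions:

```python
import math

def delta_modulate(m, step, step_min=None, adaptive=False):
    """One-bit DM. In the adaptive variant the step grows by 50% when the
    last two output bits agree and halves when they disagree (floored at
    step_min), mirroring delta[n] = delta[n-1](1 + e_q[n-1]/(2 e_q[n]))."""
    approx, e_prev, track = 0.0, 1.0, []
    for s in m:
        e = 1.0 if s >= approx else -1.0     # binary quantizer output
        if adaptive:
            step = max(step_min, step * (1.0 + 0.5 * e_prev / e))
        approx += step * e
        track.append(approx)
        e_prev = e
    return track

# A steep sinusoid: its maximum slope per sample (~1.26) exceeds the fixed
# LDM step of 1, so LDM slope-overloads while ADM can grow its step.
m = [10.0 * math.sin(2 * math.pi * n / 50) for n in range(400)]
ldm = delta_modulate(m, step=1.0)
adm = delta_modulate(m, step=1.0, step_min=0.125, adaptive=True)
mse = lambda y: sum((a - b) ** 2 for a, b in zip(m, y)) / len(m)
print(mse(ldm), mse(adm))   # the adaptive step typically tracks better
```

The adaptive rule addresses both DM distortions at once: the step expands during steep segments (against slope overload) and shrinks toward the floor during flat segments (against granular noise).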
3.17 MPEG audio coding standard

The ADPCM and various voice coding techniques introduced above did not take human auditory perception into account.

In practice, accounting for human auditory perception can further improve the system performance (from the human standpoint).

The MPEG-1 standard is capable of achieving transparent, perceptually lossless compression of stereophonic audio signals at high sampling rates.

Human subjective tests show that a 6-to-1 compression ratio is perceptually indistinguishable from the original.
3.17 Characteristics of human auditory system
Auditory masking
When a low-level signal (the maskee) and a high-
level signal (the masker) occur simultaneously (in
the same critical band), and are close to each other in
frequency, the low-level signal will be made
inaudible (i.e., masked) by the high-level signal, if
the low-level one lies below a masking threshold.
3.17 MPEG audio coding standard
3.18 Summary and discussion

Sampling: transforms an analog waveform into a discrete-time, continuous-amplitude wave.
  Nyquist rate
Quantization: transforms a discrete-time continuous wave into discrete data.
  Humans can only detect finite intensity differences.
PAM, PDM and PPM
TDM (time-division multiplexing)
PCM, DM, DPCM, ADPCM
Additional considerations in MPEG audio coding