DIGITAL COMMUNICATION
UNIT I SAMPLING & QUANTIZATION
SAMPLING:
The sampling theorem states:
1. A band-limited signal of finite energy with no frequency components higher than W Hz is completely described by its sample values g(n/2W).
2. The signal can be completely recovered from its samples g(n/2W).
Nyquist rate: 2W samples per second.
Nyquist interval: 1/(2W) seconds.
When the signal is not band-limited (undersampling), aliasing occurs. To avoid aliasing, we may limit the signal bandwidth or use a higher sampling rate.
If G(f) = 0 for |f| >= W and T_s = 1/(2W), then

$$G_\delta(f) = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi nf}{W}\right) \qquad (3.4)$$

With
1. G(f) = 0 for |f| >= W
2. f_s = 2W
we find from Equation (3.5) that

$$G(f) = \frac{1}{2W}\, G_\delta(f), \qquad -W < f < W \qquad (3.6)$$
Substituting (3.4) into (3.6), we may rewrite G(f) as

$$G(f) = \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi nf}{W}\right), \qquad -W < f < W \qquad (3.7)$$

Hence g(t) is uniquely determined by g(n/2W) for $-\infty < n < \infty$; that is, g(n/2W) contains all the information of g(t).
To reconstruct g(t) from the samples g(n/2W), we may write
$$g(t) = \int_{-\infty}^{\infty} G(f) \exp(j2\pi ft)\, df = \int_{-W}^{W} \frac{1}{2W} \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \exp\!\left(-\frac{j\pi nf}{W}\right) \exp(j2\pi ft)\, df$$

$$= \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{1}{2W} \int_{-W}^{W} \exp\!\left[j2\pi f\left(t - \frac{n}{2W}\right)\right] df \qquad (3.8)$$

$$= \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \frac{\sin(2\pi Wt - n\pi)}{2\pi Wt - n\pi} = \sum_{n=-\infty}^{\infty} g\!\left(\frac{n}{2W}\right) \mathrm{sinc}(2Wt - n), \qquad -\infty < t < \infty \qquad (3.9)$$

(3.9) is the interpolation formula for reconstructing g(t) from its samples.
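As a quick numerical check of the interpolation formula (3.9), the sketch below reconstructs a signal from its Nyquist-rate samples. The test signal, the bandwidth W, and the finite sample window are arbitrary choices for illustration; the infinite sum is necessarily truncated.

```python
import numpy as np

W = 4.0                     # assumed bandwidth (Hz); test signal below stays under W
fs = 2 * W                  # Nyquist rate 2W
n = np.arange(-40, 41)      # finite window of sample indices (truncates the sum)
t_s = n / fs                # sampling instants n/2W

g = lambda t: np.cos(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)

t = np.linspace(-2, 2, 1001)
# Eq. (3.9): g(t) = sum_n g(n/2W) sinc(2W t - n); np.sinc(x) = sin(pi x)/(pi x)
g_hat = np.sum(g(t_s)[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

print(np.max(np.abs(g_hat - g(t))))   # small; shrinks as the sample window grows
```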
Figure 3.3 (a) Spectrum of a signal. (b) Spectrum of an
undersampled version of the signal exhibiting the aliasing
phenomenon.
Figure 3.4 (a) Anti-alias filtered spectrum of an information-bearing signal. (b) Spectrum
of instantaneously sampled version of the signal, assuming the use of a sampling rate
greater than the Nyquist rate. (c) Magnitude response of reconstruction filter.
Pulse-Amplitude Modulation :
Let s(t) denote the sequence of flat-top pulses:

$$s(t) = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) \qquad (3.10)$$

$$h(t) = \begin{cases} 1, & 0 < t < T \\ \tfrac{1}{2}, & t = 0,\ t = T \\ 0, & \text{otherwise} \end{cases} \qquad (3.11)$$

The instantaneously sampled version of m(t) is $m_\delta(t) = \sum_n m(nT_s)\,\delta(t - nT_s)$, and

$$m_\delta(t) \star h(t) = \int_{-\infty}^{\infty} m_\delta(\tau)\, h(t - \tau)\, d\tau = \sum_{n=-\infty}^{\infty} m(nT_s) \int_{-\infty}^{\infty} \delta(\tau - nT_s)\, h(t - \tau)\, d\tau = \sum_{n=-\infty}^{\infty} m(nT_s)\, h(t - nT_s) \qquad (3.13)$$
$$S(f) = M_\delta(f)\, H(f) \qquad (3.16)$$

$$M_\delta(f) = f_s \sum_{k=-\infty}^{\infty} M(f - kf_s) \qquad (3.17)$$

$$S(f) = f_s \sum_{k=-\infty}^{\infty} M(f - kf_s)\, H(f) \qquad (3.18)$$
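A minimal sketch of flat-top (sample-and-hold) PAM per (3.10) and (3.11); the message, the sampling period T_s, and the pulse width T are assumed values. The last line prints T|sinc(fT)|, the aperture-effect magnitude response |H(f)| that multiplies the message spectrum in (3.18):

```python
import numpy as np

Ts = 1e-3                 # sampling period (assumed)
T = 0.5e-3                # flat-top pulse width, T <= Ts (assumed)
f_dense = 1e6             # dense grid approximating continuous time
t = np.arange(0, 0.2, 1 / f_dense)
m = np.sin(2 * np.pi * 50 * t)        # example message m(t)

# Flat-top PAM, Eq. (3.10): hold each sample m(nTs) for T seconds
s = np.zeros_like(t)
for t0 in np.arange(0, t[-1], Ts):
    mask = (t >= t0) & (t < t0 + T)
    s[mask] = m[int(round(t0 * f_dense))]

# Aperture effect: |H(f)| = T |sinc(fT)| attenuates the in-band spectrum
f = np.array([50.0, 300.0, 900.0])
print(T * np.abs(np.sinc(f * T)))
```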
The most common technique for sampling voice in PCM systems is to use a sample-and-hold circuit.
The sample-and-hold operation introduces error, resulting in an inability to recover exactly the original analog signal. The amount of error depends on how much the analog signal changes during the holding time, called the aperture time.
Pulse-code modulation (PCM) is produced by an analog-to-digital conversion process. The sampling rate must satisfy f_s > 2f_A(max), where f_A(max) is the highest frequency component of the analog signal.
Quantization Process:
Quantization Noise:
Let the quantization error be denoted by the random variable Q of sample value q:

$$q = m - v \qquad (3.23)$$

$$Q = M - V, \qquad E[M] = 0 \qquad (3.24)$$

Assuming a uniform quantizer of the midrise type, the step size is

$$\Delta = \frac{2m_{\max}}{L} \qquad (3.25)$$

where $-m_{\max} \le m \le m_{\max}$ and L is the total number of levels.

$$f_Q(q) = \begin{cases} \dfrac{1}{\Delta}, & -\dfrac{\Delta}{2} < q \le \dfrac{\Delta}{2} \\ 0, & \text{otherwise} \end{cases} \qquad (3.26)$$

$$\sigma_Q^2 = E[Q^2] = \int_{-\Delta/2}^{\Delta/2} q^2 f_Q(q)\, dq = \frac{1}{\Delta} \int_{-\Delta/2}^{\Delta/2} q^2\, dq = \frac{\Delta^2}{12} \qquad (3.28)$$
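The Δ²/12 result in (3.28) is easy to verify numerically. A minimal sketch, assuming a full-scale uniform input and assumed values of m_max and L:

```python
import numpy as np

m_max, L = 1.0, 256                # overload level and number of levels (assumed)
delta = 2 * m_max / L              # step size, Eq. (3.25)

rng = np.random.default_rng(0)
m = rng.uniform(-m_max, m_max, 1_000_000)

# Mid-rise uniform quantizer: output levels at odd multiples of delta/2
v = delta * (np.floor(m / delta) + 0.5)
q = m - v                          # quantization error, Eq. (3.23)

print(np.var(q), delta**2 / 12)    # the two agree closely, Eq. (3.28)
```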
Figure 3.13 The basic elements of a PCM system
Compression laws. (a) μ-law. (b) A-law.
μ-law:

$$|v| = \frac{\log(1 + \mu|m|)}{\log(1 + \mu)} \qquad (3.48)$$

$$\frac{d|m|}{d|v|} = \frac{\log(1 + \mu)}{\mu}\,(1 + \mu|m|) \qquad (3.49)$$

A-law:

$$|v| = \begin{cases} \dfrac{A|m|}{1 + \log A}, & 0 \le |m| \le \dfrac{1}{A} \\[2mm] \dfrac{1 + \log(A|m|)}{1 + \log A}, & \dfrac{1}{A} \le |m| \le 1 \end{cases} \qquad (3.50)$$

$$\frac{d|m|}{d|v|} = \begin{cases} \dfrac{1 + \log A}{A}, & 0 \le |m| \le \dfrac{1}{A} \\[2mm] (1 + \log A)\,|m|, & \dfrac{1}{A} \le |m| \le 1 \end{cases} \qquad (3.51)$$
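A small sketch of μ-law companding per (3.48), with the conventional μ = 255 (that value is an assumption here; the notes do not fix it). The expander is the analytical inverse used at the receiver:

```python
import numpy as np

def mu_law_compress(m, mu=255.0):
    """|v| = log(1 + mu|m|) / log(1 + mu), Eq. (3.48); m normalized to [-1, 1]."""
    return np.sign(m) * np.log1p(mu * np.abs(m)) / np.log1p(mu)

def mu_law_expand(v, mu=255.0):
    """Inverse of the compressor: |m| = ((1 + mu)^|v| - 1) / mu."""
    return np.sign(v) * ((1 + mu) ** np.abs(v) - 1) / mu

m = np.array([-0.5, -0.01, 0.0, 0.01, 0.5])
print(mu_law_compress(m))                   # small amplitudes are boosted before quantizing
print(mu_law_expand(mu_law_compress(m)))    # recovers m
```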
Figure 3.15 Line codes for the electrical representations of binary
data.
(a) Unipolar NRZ signaling. (b) Polar NRZ signaling.
(c) Unipolar RZ signaling. (d) Bipolar RZ signaling.
(e) Split-phase or Manchester code.
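The line codes of Figure 3.15 can be generated as sampled waveforms. A minimal sketch; the ±1 amplitude levels and the Manchester polarity convention (1 = high-to-low) are assumptions, since conventions differ between references:

```python
import numpy as np

def line_code(bits, scheme, spb=8):
    """Return one of the Figure 3.15 line codes as a waveform (spb samples per bit)."""
    out, last_one = [], -1            # last_one tracks polarity for bipolar RZ (AMI)
    h = spb // 2                      # half-bit duration for the RZ and Manchester codes
    for b in bits:
        if scheme == "unipolar_nrz":
            out += [b] * spb
        elif scheme == "polar_nrz":
            out += [1 if b else -1] * spb
        elif scheme == "unipolar_rz":
            out += [b] * h + [0] * (spb - h)
        elif scheme == "bipolar_rz":  # alternate mark inversion
            if b:
                last_one = -last_one
                out += [last_one] * h + [0] * (spb - h)
            else:
                out += [0] * spb
        elif scheme == "manchester":  # assumed convention: 1 = high-to-low
            out += ([1] * h + [-1] * h) if b else ([-1] * h + [1] * h)
    return np.array(out)

print(line_code([1, 0, 1, 1, 0], "manchester")[:16])
```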
Noise consideration in PCM systems:
(Channel noise, quantization noise)
Time-Division Multiplexing (TDM):
Digital Multiplexers:
UNIT II WAVEFORM CODING
$$e_q[n] = \Delta\,\mathrm{sgn}(e[n]) \qquad (3.53)$$

$$m_q[n] = m_q[n-1] + e_q[n] \qquad (3.54)$$

where m_q[n] is the quantizer output and e_q[n] is the quantized version of the prediction error e[n].
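A minimal delta-modulation encoder following (3.53) and (3.54); the input signal, step size Δ, and sampling rate are assumed values:

```python
import numpy as np

def delta_modulate(m, delta):
    """One-bit DM encoder: e_q[n] = delta*sgn(e[n]) (3.53),
    m_q[n] = m_q[n-1] + e_q[n] (3.54). Returns bits and the staircase."""
    mq, bits, stair = 0.0, [], []
    for sample in m:
        e = sample - mq                          # error vs. previous staircase value
        eq = delta if e >= 0 else -delta         # one-bit quantizer
        bits.append(1 if eq > 0 else 0)
        mq += eq                                 # accumulator = staircase approximation
        stair.append(mq)
    return np.array(bits), np.array(stair)

t = np.arange(0, 1, 1e-3)
bits, stair = delta_modulate(np.sin(2 * np.pi * 2 * t), delta=0.05)
print(bits[:12])
```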
Slope Overload Distortion and Granular Noise:
To avoid slope-overload distortion, the step size must satisfy

$$\frac{\Delta}{T_s} \ge \max\left|\frac{dm(t)}{dt}\right| \qquad (3.56)$$

Otherwise the staircase cannot follow steep segments of m(t); conversely, on nearly flat segments a large Δ produces granular noise.
Linear Prediction (to reduce the sampling rate):
A linear predictor forms

$$\hat{x}[n] = \sum_{k=1}^{p} w_k\, x[n-k] \qquad (3.59)$$

with prediction error

$$e[n] = x[n] - \hat{x}[n] \qquad (3.60)$$

Let the index of performance be

$$J = E\big[e^2[n]\big] \qquad (3.61)$$

Find w_1, w_2, ..., w_p to minimize J. From (3.59), (3.60) and (3.61) we have

$$J = E\big[x^2[n]\big] - 2\sum_{k=1}^{p} w_k\, E\big[x[n]x[n-k]\big] + \sum_{j=1}^{p}\sum_{k=1}^{p} w_j w_k\, E\big[x[n-j]x[n-k]\big] \qquad (3.62)$$
Assume X(t) is a stationary process with zero mean, E[x[n]] = 0. Then

$$\sigma_X^2 = E\big[x^2[n]\big] - \big(E[x[n]]\big)^2 = E\big[x^2[n]\big]$$

The autocorrelation is

$$R_X(kT_s) = R_X[k] = E\big[x[n]x[n-k]\big]$$

We may simplify J as

$$J = \sigma_X^2 - 2\sum_{k=1}^{p} w_k R_X[k] + \sum_{j=1}^{p}\sum_{k=1}^{p} w_j w_k R_X[k-j] \qquad (3.63)$$

Setting the derivatives to zero,

$$\frac{\partial J}{\partial w_k} = -2R_X[k] + 2\sum_{j=1}^{p} w_j R_X[k-j] = 0$$

$$\sum_{j=1}^{p} w_j R_X[k-j] = R_X[k], \qquad k = 1, 2, \ldots, p \qquad (3.64)$$

Solving these equations requires knowledge of R_X[0], R_X[1], ..., R_X[p].
Substituting (3.64) into (3.63) yields

$$J_{\min} = \sigma_X^2 - 2\sum_{k=1}^{p} w_k R_X[k] + \sum_{k=1}^{p} w_k R_X[k] = \sigma_X^2 - \sum_{k=1}^{p} w_k R_X[k]$$

In matrix form, with $\mathbf{r}_X = [R_X[1], \ldots, R_X[p]]^T$ and $\mathbf{R}_X$ the p-by-p autocorrelation matrix,

$$J_{\min} = \sigma_X^2 - \mathbf{r}_X^T \mathbf{w}_0 = \sigma_X^2 - \mathbf{r}_X^T \mathbf{R}_X^{-1} \mathbf{r}_X \qquad (3.67)$$

Since $\mathbf{r}_X^T \mathbf{R}_X^{-1} \mathbf{r}_X \ge 0$, J_min is always less than $\sigma_X^2$.
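A numerical sketch of (3.64) and (3.67): estimate R_X[k] from data, solve the normal equations, and evaluate J_min. The AR(2) test process is an assumption, chosen so that the optimum order-2 predictor is known in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
# Assumed zero-mean stationary test process: AR(2) driven by unit-variance noise
w = rng.standard_normal(200_000)
x = np.zeros_like(w)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + w[n]

p, N = 2, len(x)
r = np.array([x[: N - k] @ x[k:] / N for k in range(p + 1)])   # estimates of R_X[0..p]

# Eq. (3.64): sum_j w_j R_X[k-j] = R_X[k], k = 1..p
R = np.array([[r[abs(k - j)] for j in range(1, p + 1)] for k in range(1, p + 1)])
w_opt = np.linalg.solve(R, r[1:])
print(w_opt)                       # close to the AR coefficients [0.75, -0.5]

J_min = r[0] - r[1:] @ w_opt       # Eq. (3.67): sigma_X^2 - r^T R^{-1} r
print(J_min)                       # close to the driving-noise variance 1
```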
Linear adaptive prediction:
The tap weights are adapted by steepest descent,

$$w_k[n+1] = w_k[n] + \frac{1}{2}\mu\,(-g_k), \qquad k = 1, 2, \ldots, p \qquad (3.69)$$

where μ is a step-size parameter and the factor 1/2 is for convenience of presentation. The gradient is

$$g_k = \frac{\partial J}{\partial w_k} = -2R_X[k] + 2\sum_{j=1}^{p} w_j R_X[k-j] = -2E\big[x[n]x[n-k]\big] + 2\sum_{j=1}^{p} w_j E\big[x[n-j]x[n-k]\big], \qquad k = 1, 2, \ldots, p \qquad (3.70)$$

Replacing the expectations by their instantaneous estimates gives the adaptive update

$$\hat{w}_k[n+1] = \hat{w}_k[n] + \mu\, x[n-k]\left(x[n] - \sum_{j=1}^{p}\hat{w}_j[n]\, x[n-j]\right) = \hat{w}_k[n] + \mu\, e[n]\, x[n-k], \qquad k = 1, 2, \ldots, p \qquad (3.72)$$

where, by (3.59) and (3.60),

$$e[n] = x[n] - \sum_{j=1}^{p}\hat{w}_j[n]\, x[n-j] \qquad (3.73)$$
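The adaptive update (3.72) in code, run on the same assumed AR(2) test process as before; μ and p are illustrative choices:

```python
import numpy as np

def lms_predictor(x, p=2, mu=1e-3):
    """Adaptive linear prediction, Eq. (3.72): w_k <- w_k + mu * e[n] * x[n-k]."""
    w = np.zeros(p)
    e = np.zeros(len(x))
    for n in range(p, len(x)):
        past = x[n - p:n][::-1]          # x[n-1], ..., x[n-p]
        e[n] = x[n] - w @ past           # prediction error, Eqs. (3.59)-(3.60)
        w = w + mu * e[n] * past         # tap-weight update
    return w, e

rng = np.random.default_rng(2)
noise = rng.standard_normal(100_000)
x = np.zeros_like(noise)
for n in range(2, len(x)):
    x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + noise[n]

w, e = lms_predictor(x)
print(w)    # converges toward the Wiener solution [0.75, -0.5]
```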
Figure 3.27
Block diagram illustrating the linear adaptive prediction process
Usually PCM has a sampling rate higher than the Nyquist rate. The encoded signal then contains redundant information. DPCM can efficiently remove this redundancy.
Figure 3.28 DPCM system. (a) Transmitter. (b) Receiver.
$$e[n] = m[n] - \hat{m}[n] \qquad (3.74)$$

where $\hat{m}[n]$ is a prediction value. The quantizer output is

$$e_q[n] = e[n] + q[n] \qquad (3.75)$$

where q[n] is the quantization error. The prediction-filter input is

$$m_q[n] = \hat{m}[n] + e_q[n] = \hat{m}[n] + e[n] + q[n] \qquad (3.77)$$

From (3.74), $\hat{m}[n] + e[n] = m[n]$, so

$$m_q[n] = m[n] + q[n] \qquad (3.78)$$
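A minimal DPCM transmitter loop implementing (3.74)-(3.78). The first-order predictor m̂[n] = a·m_q[n-1] and the rounding quantizer are assumptions for illustration; the relations above hold for any predictor/quantizer pair:

```python
import numpy as np

def dpcm_transmit(m, delta=0.05, a=0.95):
    """DPCM with an assumed first-order predictor and uniform quantizer."""
    mq_prev, eq_out, mq_out = 0.0, [], []
    for sample in m:
        m_hat = a * mq_prev                   # prediction m_hat[n]
        e = sample - m_hat                    # Eq. (3.74)
        eq = delta * np.round(e / delta)      # e_q[n] = e[n] + q[n], Eq. (3.75)
        mq_prev = m_hat + eq                  # predictor input, Eq. (3.77)
        eq_out.append(eq)
        mq_out.append(mq_prev)                # equals m[n] + q[n] by Eq. (3.78)
    return np.array(eq_out), np.array(mq_out)

t = np.arange(0, 1, 1e-3)
m = np.sin(2 * np.pi * 3 * t)
eq, mq = dpcm_transmit(m)
print(np.max(np.abs(mq - m)))   # bounded by delta/2, as (3.78) predicts
```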
Processing Gain:
2. Assign the available bits in a perceptually efficient
manner.
UNIT III BASEBAND TRANSMISSION
Duo-binary Signaling:
The binary sequence {b_k} is first converted into polar levels

$$a_k = \begin{cases} +1, & \text{if symbol } b_k \text{ is } 1 \\ -1, & \text{if symbol } b_k \text{ is } 0 \end{cases}$$

and the duobinary coder output is c_k = a_k + a_{k-1}. The overall frequency response is

$$H_I(f) = H_{\text{Nyquist}}(f)\big[1 + \exp(-j2\pi fT_b)\big] = H_{\text{Nyquist}}(f)\big[\exp(j\pi fT_b) + \exp(-j\pi fT_b)\big]\exp(-j\pi fT_b) = 2H_{\text{Nyquist}}(f)\cos(\pi fT_b)\exp(-j\pi fT_b)$$

$$H_{\text{Nyquist}}(f) = \begin{cases} 1, & |f| \le 1/2T_b \\ 0, & \text{otherwise} \end{cases}$$

$$h_I(t) = \frac{\sin(\pi t/T_b)}{\pi t/T_b} + \frac{\sin[\pi(t - T_b)/T_b]}{\pi(t - T_b)/T_b} = \frac{T_b^2\,\sin(\pi t/T_b)}{\pi t\,(T_b - t)}$$

Precoding (to prevent error propagation):

$$d_k = b_k \oplus d_{k-1}$$

followed by the same duobinary coding c_k = a_k + a_{k-1}.
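A short end-to-end sketch of precoded duobinary signaling (an ideal, noise-free channel is assumed). With the precoder, each b_k can be decided from c_k alone: c_k = 0 means b_k = 1, and c_k = ±2 means b_k = 0:

```python
import numpy as np

rng = np.random.default_rng(3)
b = rng.integers(0, 2, 20)             # message bits b_k

# Precoding d_k = b_k XOR d_{k-1}; d[0] is the reference bit, assumed 0
d = np.zeros(len(b) + 1, dtype=int)
for k in range(len(b)):
    d[k + 1] = b[k] ^ d[k]

a = 2 * d - 1                          # polar levels: 1 -> +1, 0 -> -1
c = a[1:] + a[:-1]                     # duobinary coder output c_k = a_k + a_{k-1}

# Decision rule for precoded duobinary: |c_k| < 1 -> b_k = 1, |c_k| > 1 -> b_k = 0
b_hat = (np.abs(c) < 1).astype(int)
print(np.array_equal(b, b_hat))        # True
```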
Modified Duo-binary Signaling:
Here the coder output is c_k = a_k - a_{k-2}, with overall response

$$H_{IV}(f) = H_{\text{Nyquist}}(f)\big[1 - \exp(-j4\pi fT_b)\big] = 2j\,H_{\text{Nyquist}}(f)\sin(2\pi fT_b)\exp(-j2\pi fT_b)$$

Precoding:

$$d_k = b_k \oplus d_{k-2}$$

that is, d_k is symbol 1 if either b_k or d_{k-2} (but not both) is 1, and symbol 0 otherwise.
The decision rule at the receiver is:
If |c_k| > 1, say symbol b_k is 1.
If |c_k| < 1, say symbol b_k is 0.
If |c_k| = 1, make a random guess in favor of symbol 1 or 0.
Generalized form of correlative-level coding:
$$h(t) = \sum_{n=0}^{N-1} w_n\,\mathrm{sinc}\!\left(\frac{t}{T_b} - n\right)$$
Tapped-delay-line equalization:
$$h(t) = \sum_{k=-N}^{N} w_k\,\delta(t - kT)$$

The condition on the equalized pulse p(t) is

$$p(nT) = \begin{cases} 1, & n = 0 \\ 0, & n = \pm 1, \pm 2, \ldots, \pm N \end{cases}$$
Zero-forcing equalizer:
Optimum in the sense that it minimizes the peak distortion (worst-case ISI)
Simple to implement
The longer the equalizer, the closer it approaches the ideal condition for distortionless transmission
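A small zero-forcing design sketch: given an assumed sampled channel pulse p(nT) with ISI, solve for the 2N+1 tap weights that force the equalized pulse to 1 at n = 0 and 0 at n = ±1, ..., ±N. Residual ISI remains outside the equalizer span, illustrating the peak-distortion (worst-case) nature of the criterion:

```python
import numpy as np

p = {-1: 0.1, 0: 1.0, 1: -0.2, 2: 0.1}      # assumed channel samples p(nT)

N = 2
taps = range(-N, N + 1)
# Equalized pulse q(nT) = sum_k w_k p((n-k)T); force q = 1 at n=0, 0 at 0<|n|<=N
A = np.array([[p.get(n - k, 0.0) for k in taps] for n in taps])
target = np.array([1.0 if n == 0 else 0.0 for n in taps])
w = np.linalg.solve(A, target)

q = {n: sum(w[k + N] * p.get(n - k, 0.0) for k in taps) for n in range(-4, 7)}
print([round(q[n], 3) for n in sorted(q)])  # zeros for 0 < |n| <= N; residual ISI outside
```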
Adaptive Equalizer:
Least-Mean-Square Algorithm:
The mean-square error is $\varepsilon = E[e_n^2]$. Differentiating with respect to tap weight w_k,

$$\frac{\partial \varepsilon}{\partial w_k} = 2E\!\left[e_n \frac{\partial e_n}{\partial w_k}\right] = -2E\big[e_n x_{n-k}\big] = -2R_{ex}(k)$$

where the ensemble-averaged cross-correlation is

$$R_{ex}(k) = E\big[e_n x_{n-k}\big]$$
Optimality condition for minimum mean-square error:

$$\frac{\partial \varepsilon}{\partial w_k} = 0 \qquad \text{for } k = 0, \pm 1, \ldots, \pm N$$

The mean-square error is a second-order (parabolic) function of the tap weights, i.e., a multidimensional bowl-shaped surface. The adaptive process makes successive adjustments to the tap weights, seeking the bottom of the bowl (the minimum value).
Steepest-descent algorithm:
The successive adjustments to each tap weight are made in the direction opposite to the gradient vector. The recursive formula (μ: step-size parameter) is

$$w_k(n+1) = w_k(n) + \frac{1}{2}\mu\left(-\frac{\partial \varepsilon}{\partial w_k}\right) = w_k(n) + \mu R_{ex}(k), \qquad k = 0, \pm 1, \ldots, \pm N$$
Least-Mean-Square Algorithm:
The steepest-descent algorithm is not available in an unknown environment, since R_ex(k) is not known. The LMS algorithm approximates it using instantaneous estimates:

$$\hat{R}_{ex}(k) = e_n x_{n-k}$$

$$\hat{w}_k(n+1) = \hat{w}_k(n) + \mu\, e_n x_{n-k}$$
For small μ, the LMS algorithm behaves roughly like the steepest-descent algorithm.
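A minimal LMS equalizer in training mode; the channel h, tap count, step size μ, and decision delay are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
h = np.array([0.1, 1.0, -0.2])            # assumed channel with ISI
a = rng.choice([-1.0, 1.0], 20_000)       # known training symbols
x = np.convolve(a, h)[: len(a)]           # received samples

N, mu = 5, 0.01
w = np.zeros(2 * N + 1)                   # taps w_{-N}, ..., w_N
delay = N + 1                             # assumed overall decision delay
for n in range(2 * N, len(a)):
    xvec = x[n - 2 * N : n + 1][::-1]     # x[n], x[n-1], ..., x[n-2N]
    e = a[n - delay] - w @ xvec           # error against the training symbol
    w = w + mu * e * xvec                 # LMS update: w_k <- w_k + mu * e_n * x_{n-k}

print(np.round(w, 3))                     # approximates a delayed inverse of the channel
```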
Implementation Approaches:
Analog
CCD-based: tap weights are stored in digital memory; sampling and multiplication are analog
Useful when the symbol rate is too high for digital processing
Digital
Samples are quantized and stored in a shift register
Tap weights are stored in a shift register; multiplication is digital
Programmable digital
Microprocessor-based
Flexibility
The same hardware may be time-shared
With decision feedback, the received sample is

$$y_n = \sum_{k} h_k x_{n-k} = h_0 x_n + \sum_{k<0} h_k x_{n-k} + \sum_{k>0} h_k x_{n-k}$$

where the k < 0 terms are the precursors and the k > 0 terms the postcursors. Data decisions made on the basis of the precursors are used to take care of (cancel) the postcursors; the decisions would obviously have to be correct.
In the case of an M-ary system, the eye pattern contains (M - 1) eye openings, where M is the number of discrete amplitude levels.
Interpretation of Eye Diagram:
UNIT IV DIGITAL MODULATION SCHEMES
ASK, OOK, MASK:
One amplitude encodes a 0 while another amplitude encodes
a 1 (a form of amplitude modulation)
Implementation of binary ASK:
$$s(t) = \begin{cases} A\cos(2\pi f_1 t), & \text{binary } 1 \\ A\cos(2\pi f_2 t), & \text{binary } 0 \end{cases}$$
FSK Bandwidth:
Applications:
On voice-grade lines, used up to 1200 bps
Used for high-frequency (3 to 30 MHz) radio transmission
Used at higher frequencies on LANs that use coaxial cable
DBPSK:
Differential BPSK
0 = same phase as the previous signal element
1 = 180° phase shift from the previous signal element
$$s(t) = \begin{cases}
A\cos\!\left(2\pi f_c t + \dfrac{\pi}{4}\right), & 11 \\[1mm]
A\cos\!\left(2\pi f_c t + \dfrac{3\pi}{4}\right), & 01 \\[1mm]
A\cos\!\left(2\pi f_c t - \dfrac{3\pi}{4}\right), & 00 \\[1mm]
A\cos\!\left(2\pi f_c t - \dfrac{\pi}{4}\right), & 10
\end{cases}$$
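A small sketch generating a QPSK waveform from a bit stream using the phase table above (the dibit-to-phase mapping is as reconstructed here; references differ). Carrier frequency, bit duration, and sampling rate are arbitrary illustrative values:

```python
import numpy as np

PHASE = {(1, 1): np.pi / 4, (0, 1): 3 * np.pi / 4,
         (0, 0): -3 * np.pi / 4, (1, 0): -np.pi / 4}

def qpsk(bits, fc=2.0, Tb=1.0, fs=100.0, A=1.0):
    """s(t) = A cos(2*pi*fc*t + phase(dibit)); one dibit per 2*Tb seconds."""
    t_sym = np.arange(0, 2 * Tb, 1 / fs)
    out = []
    for i in range(0, len(bits) - 1, 2):
        ph = PHASE[(bits[i], bits[i + 1])]
        out.append(A * np.cos(2 * np.pi * fc * t_sym + ph))
    return np.concatenate(out)

s = qpsk([1, 1, 0, 0, 1, 0])
print(len(s), np.round(s[:3], 3))
```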
Concept of a constellation :
M-ary PSK:
Using multiple phase angles, each possibly with more than one amplitude, multiple signal elements can be achieved. The modulation rate is

$$D = \frac{R}{L} = \frac{R}{\log_2 M}$$

where D is the modulation rate in baud, R the data rate in bps, L the number of bits per signal element, and M the number of different signal elements.
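For example, with R = 9600 bps and M = 16 signal elements, L = log₂ 16 = 4 bits per element, so D = 9600/4 = 2400 baud.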
QAM:
As an example of QAM, 12 different phases are combined with two different amplitudes. Since only 4 of the phase angles have 2 different amplitudes, there are a total of 16 combinations. With 16 signal combinations, each baud carries 4 bits of information (2⁴ = 16).
QAM combines ASK and PSK so that each signal element corresponds to multiple bits:
More phases than amplitudes
Minimum bandwidth requirement is the same as for ASK or PSK
QAM and QPR:
Offset quadrature phase-shift keying (OQPSK):
Generation and Detection of Coherent BPSK:
Figure 6.26 Block diagrams for (a) binary FSK transmitter and
(b) coherent binary FSK receiver.
Fig. 6.28
Figure 6.29 Signal-space diagram for MSK system.
Generation and Detection of MSK Signals:
Figure 6.31 Block diagrams for (a) MSK transmitter and (b)
coherent MSK receiver.
UNIT V ERROR CONTROL CODING
Block Codes:
The maximum number of detectable errors is

$$d_{\min} - 1$$

and the maximum number of correctable errors is given by

$$t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor$$

where d_min is the minimum Hamming distance between any 2 codewords and ⌊·⌋ denotes the largest integer not greater than its argument.
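For example, a code with d_min = 3 can detect up to d_min - 1 = 2 errors and correct t = ⌊(3 - 1)/2⌋ = 1 error.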
The generator matrix G has rows a_1, ..., a_k, where a_i = [a_i1  a_i2  ...  a_in]. Thus a codeword is

$$\mathbf{c} = \sum_{i=1}^{k} d_i \mathbf{a}_i$$

The a_i must be linearly independent: since codewords are given by sums of the a_i vectors, to avoid 2 datawords mapping to the same codeword the a_i vectors must be linearly independent.
The sum (mod 2) of any 2 codewords is also a codeword: for datawords d_1 and d_2 with

$$\mathbf{d}_3 = \mathbf{d}_1 \oplus \mathbf{d}_2$$

we have

$$\mathbf{c}_3 = \sum_{i=1}^{k} d_{3i}\mathbf{a}_i = \sum_{i=1}^{k} (d_{1i} \oplus d_{2i})\mathbf{a}_i = \sum_{i=1}^{k} d_{1i}\mathbf{a}_i \oplus \sum_{i=1}^{k} d_{2i}\mathbf{a}_i = \mathbf{c}_1 \oplus \mathbf{c}_2$$
Error Correcting Power of LBC:
$$G = \begin{bmatrix} 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 \end{bmatrix}$$

a_1 = [1 0 1 1]
a_2 = [0 1 0 1]
For d = [1 1], the codeword is

$$\mathbf{c} = \mathbf{a}_1 \oplus \mathbf{a}_2 = [1\ 1\ 1\ 0]$$
Systematic Codes:
$$G = [\,I \mid P\,] = \begin{bmatrix}
1 & 0 & \cdots & 0 & p_{11} & p_{12} & \cdots & p_{1R} \\
0 & 1 & \cdots & 0 & p_{21} & p_{22} & \cdots & p_{2R} \\
\vdots & & \ddots & & \vdots & & & \vdots \\
0 & 0 & \cdots & 1 & p_{k1} & p_{k2} & \cdots & p_{kR}
\end{bmatrix}, \qquad R = n - k$$
Another possibility is algebraic decoding, i.e., the error flag
is computed from the received codeword (as in the case of
simple parity codes)
How can this method be extended to more complex error
detection and correction codes?
Parity Check Matrix:
This is so since

$$\mathbf{c} = \sum_{i=1}^{k} d_i\mathbf{a}_i$$

and so

$$\mathbf{b}_j \cdot \mathbf{c} = \mathbf{b}_j \cdot \sum_{i=1}^{k} d_i\mathbf{a}_i = \sum_{i=1}^{k} d_i(\mathbf{a}_i \cdot \mathbf{b}_j) = 0$$
In this example the H matrix has only one row, namely b1.
This vector is orthogonal to the plane containing the rows of
the G matrix, i.e., a1 and a2
Any received codeword which is not in the plane containing a_1 and a_2 (i.e., an invalid codeword) will thus have a component in the direction of b_1, yielding a non-zero dot product between itself and b_1.
Error Syndrome:
For systematic linear block codes, H is constructed as follows:
G = [I | P] and so H = [-P^T | I] (over GF(2), -P^T = P^T),
where I is the k-by-k identity in G and the R-by-R identity in H.
Example: (7,4) code with d_min = 3,

$$G = [\,I \mid P\,] = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 1 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{bmatrix}$$

$$H = [\,P^T \mid I\,] = \begin{bmatrix}
0 & 1 & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 1 & 0 & 0 & 1
\end{bmatrix}$$

The syndrome of a received word r is s = r H^T. For example, for r = [1 1 0 1 0 0 1],

$$\mathbf{s} = \mathbf{r}H^T = [1\ 1\ 0\ 1\ 0\ 0\ 1]
\begin{bmatrix}
0 & 1 & 1 \\
1 & 0 & 1 \\
1 & 1 & 0 \\
1 & 1 & 1 \\
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix} = [0\ 0\ 0]$$

so r is a valid codeword.
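The (7,4) code above is small enough to exercise directly. A sketch that encodes a dataword, injects a single error, and shows that the syndrome equals the corresponding column of H, which is what makes single-error correction possible:

```python
import numpy as np

P = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])        # G = [I | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])      # H = [P^T | I]

d = np.array([1, 1, 0, 1])
c = d @ G % 2                                   # encode
e = np.zeros(7, dtype=int)
e[4] = 1                                        # single error in bit position 5
r = (c + e) % 2                                 # received word

s = r @ H.T % 2                                 # syndrome s = r H^T
print(s)                                        # [1 0 0]
print(np.array_equal(s, H[:, 4]))               # True: syndrome = column 5 of H
```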
Standard Array:
c1 (all zero)   c2        ...   cM        s0
e1              c2+e1     ...   cM+e1     s1
e2              c2+e2     ...   cM+e2     s2
e3              c2+e3     ...   cM+e3     s3
...             ...             ...       ...
eN              c2+eN     ...   cM+eN     sN
Hamming Codes:
Constraint length C = n(L+1) is defined as the number of encoded bits that a single message bit can influence.
$$x'_j = m_{j-3} \oplus m_{j-2} \oplus m_j$$
$$x''_j = m_{j-3} \oplus m_{j-1} \oplus m_j$$
$$x'''_j = m_{j-2} \oplus m_j$$

Here each message bit influences a span of C = n(L+1) = 3(1+1) = 6 successive output bits.
Convolution point of view in encoding and generator matrix:
$$\mathbf{g}^{(1)} = [1\ 0\ 1\ 1], \qquad \mathbf{g}^{(2)} = [1\ 1\ 1\ 1]$$

$$x'_j = m_j \oplus m_{j-2} \oplus m_{j-3}, \qquad x''_j = m_j \oplus m_{j-1} \oplus m_{j-2} \oplus m_{j-3}$$

$$\mathbf{x}_{\text{out}} = x'_1 x''_1\ x'_2 x''_2\ x'_3 x''_3 \ldots$$
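The convolution point of view can be coded directly: modulo-2 convolution of the message with each generator sequence, followed by interleaving of the two output streams. A minimal sketch for g^(1) = [1 0 1 1] and g^(2) = [1 1 1 1]:

```python
import numpy as np

g1 = np.array([1, 0, 1, 1])    # g(1): taps on m_j, m_{j-2}, m_{j-3}
g2 = np.array([1, 1, 1, 1])    # g(2): taps on m_j, m_{j-1}, m_{j-2}, m_{j-3}

def conv_encode(m):
    """Rate-1/2 encoder: modulo-2 convolution with each generator,
    then interleave x'_1 x''_1 x'_2 x''_2 ..."""
    x1 = np.convolve(m, g1) % 2     # length len(m) + 3, flush bits included
    x2 = np.convolve(m, g2) % 2
    out = np.empty(2 * len(x1), dtype=int)
    out[0::2], out[1::2] = x1, x2
    return out

print(conv_encode(np.array([1, 0, 1, 1, 0])))
```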
Representing convolutional codes compactly: code trellis and
state diagram:
State diagram
Assuming the encoder starts in the all-zero state, the encoded word for any input of k bits can thus be obtained. For instance, for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced:
THE VITERBI ALGORITHM:
The log-likelihood of a path x^(m) decomposes as

$$\ln p(\mathbf{y}, \mathbf{x}^{(m)}) = \sum_{j} \ln p(y_j \mid x_j^{(m)})$$

and is maximized by the correct path. An exhaustive maximum-likelihood method must search all the paths in the trellis (2^k paths emerging from / entering each of 2^(L+1) states for an (n,k,L) code). The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis.
THE SURVIVOR PATH:
For this reason the non-surviving paths can be discarded: not all path alternatives need to be considered.
Note that in principle the whole transmitted sequence must be received before a decision can be made. In practice, however, storing the states for an input length of 5L is quite adequate.
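A compact hard-decision Viterbi decoder illustrating the survivor-path idea: at each trellis step, of the paths merging into a state only the one with the smaller Hamming metric is kept. The rate-1/2, constraint-length-3 code used here (generator taps 111 and 101) is an assumed example, not necessarily the encoder of the figures:

```python
G = [(1, 1, 1), (1, 0, 1)]    # assumed generators; taps on (m_j, m_{j-1}, m_{j-2})

def encode(m):
    m = list(m) + [0, 0]                       # flush the register back to state 00
    s, out = [0, 0], []                        # s = [m_{j-1}, m_{j-2}]
    for bit in m:
        window = [bit] + s
        out += [sum(b * g for b, g in zip(window, gen)) % 2 for gen in G]
        s = [bit, s[0]]
    return out

def viterbi(rx):
    INF = 10 ** 9
    metric, paths = {0: 0}, {0: []}            # start in the all-zero state
    for i in range(0, len(rx), 2):
        r = rx[i:i + 2]
        new_metric, new_paths = {}, {}
        for s, ms in metric.items():
            for bit in (0, 1):
                window = [bit, s >> 1, s & 1]  # state s = 2*m_{j-1} + m_{j-2}
                out = [sum(b * g for b, g in zip(window, gen)) % 2 for gen in G]
                ns = (bit << 1) | (s >> 1)     # next state
                bm = ms + (out[0] != r[0]) + (out[1] != r[1])  # Hamming metric
                if bm < new_metric.get(ns, INF):               # keep only the survivor
                    new_metric[ns], new_paths[ns] = bm, paths[s] + [bit]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best][:-2], metric[best]      # drop the two flush bits

msg = [1, 0, 1, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                     # inject a single channel error
decoded, dist = viterbi(rx)
print(decoded == msg, dist)                    # True 1
```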
The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum-distance path.
(Black circles denote the deleted branches; dashed lines indicate that a '1' was applied.)
The figure on the right shows a common point at a memory depth J. J is a random variable; the applicable magnitude shown in the figure (5L) has been experimentally tested to give a negligible error-rate increase. Note that this also introduces a decoding delay of 5L!
H(7,4) Hamming code:
Generator matrix G: the first 4-by-4 block is the identity matrix
Message information vector p
Transmission vector x
Received vector r
and error vector e
Parity check matrix H
Error Correction:
Example of CRC:
Example: Using generator matrix:
$$\mathbf{g}^{(1)} = [1\ 0\ 1\ 1], \qquad \mathbf{g}^{(2)} = [1\ 1\ 1\ 1]$$

[Trellis diagram with branch labels 00, 01, 10, 11 omitted.]
correct path: 1+1+2+2+2 = 8; 8 × 0.11 = 0.88
false path: 1+1+0+0+0 = 2; 2 × 2.30 = 4.6
total path metric: 0.88 + 4.6 = 5.48
Turbo Codes:
Background
Turbo codes were proposed by Berrou and Glavieux at the 1993 International Conference on Communications. Performance within 0.5 dB of the channel capacity limit for BPSK was demonstrated.
Features of turbo codes
Parallel concatenated coding
Recursive convolutional encoders
Pseudo-random interleaving
Iterative decoding
Motivation: Performance of Turbo Codes
Comparison:
Rate 1/2 Codes.
K=5 turbo code.
K=14 convolutional code.
Plot is from: L. Perez, "Turbo Codes", chapter 8 of Trellis Coding by C. Schlegel, IEEE Press, 1997.
Pseudo-random Interleaving:
Solution:
Make the code appear random, while maintaining
enough structure to permit decoding.
This is the purpose of the pseudo-random interleaver.
Turbo codes possess random-like properties.
However, since the interleaving pattern is known,
decoding is possible.
In coded systems:
Performance is dominated by low weight code words.
A good code:
will produce low weight outputs with very low
probability.
An RSC code:
Produces low weight outputs with fairly low
probability.
However, some inputs still cause low weight outputs.
Because of the interleaver:
The probability that both encoders have inputs that
cause low weight outputs is very low.
Therefore the parallel concatenation of both encoders
will produce a good code.
Iterative Decoding:
There is one decoder for each elementary encoder.
Each decoder estimates the a posteriori probability (APP) of
each data bit.
The APPs are used as a priori information by the other
decoder.
Decoding continues for a set number of iterations.