
DIGITAL COMMUNICATIONS

Part I: Source Encoding

Why digital?

Ease of signal generation


Regenerative repeating capability
Increased noise immunity
Lower hardware cost
Ease of computer/communication integration
© 2000 Bijan Mobasseri

Basic block diagram


[Block diagram] Info source → Source encoder → Channel encoder → Digital modulator → Channel → Digital demodulator → Channel decoder → Source decoder → Output transducer

Some definitions
Information source
Raw data: voice, audio
Source encoder: converts analog info to a binary bitstream
Channel encoder: maps the bitstream to a pulse pattern
Digital modulator: RF carrier modulation of bits or bauds

A bit of history
The foundation of digital communication is the work of Nyquist (1924)
Problem: how to telegraph fastest on a channel of bandwidth W?
Ironically, the original model for communications was digital! (Morse code)
The first telegraph link was established between Baltimore and Washington in 1844

Nyquist theorem
The Nyquist theorem, still standing today, says that over a channel of bandwidth W we can signal, with no interference, at a rate of no more than 2W pulses per second
Any faster and we will get intersymbol interference
He further proved that the pulse shape that achieves this rate is a sinc
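A minimal numerical sketch (not from the slides, with an assumed bandwidth W = 4 kHz) of why sinc pulses spaced 1/(2W) apart cause no intersymbol interference: every pulse is zero at the sampling instants of all the other pulses.

```python
import numpy as np

W = 4000.0                 # assumed channel bandwidth in Hz
Rs = 2 * W                 # Nyquist signaling rate: 2W pulses per second
Ts = 1 / Rs                # pulse (symbol) spacing
bits = np.array([1, -1, 1, 1, -1, 1, -1, -1])   # example bipolar symbols

# Build the transmitted waveform as a sum of sinc pulses, one per symbol
t = np.arange(-2 * Ts, (len(bits) + 2) * Ts, Ts / 50)
x = sum(b * np.sinc((t - k * Ts) / Ts) for k, b in enumerate(bits))

# Sample at the symbol instants t = k*Ts: all the other pulses pass through
# zero there, so the original symbols come back exactly.
samples = [x[np.argmin(np.abs(t - k * Ts))] for k in range(len(bits))]
print(np.round(samples, 3))   # ~ [ 1. -1.  1.  1. -1.  1. -1. -1.]
```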

Signaling too fast


Here is what might happen when signaling exceeds Nyquist's rate
[Figure: transmitted bitstream vs. received bitstream]
Pulse smearing could have been avoided if the pulses had more separation, i.e. if the bit rate were reduced

Shannon channel capacity


Claude Shannon, a Bell Labs mathematician, proved in 1948 that a communication channel is fundamentally speed-limited. This limit is given by
C = W log2(1 + P/(N0·W)) bits/sec
where W is the channel's bandwidth, P the signal power and N0 the noise power spectral density
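A small sketch of evaluating the capacity formula. The numbers are assumptions chosen for illustration (a 4 kHz channel at 30 dB SNR), not values from the slides.

```python
import math

def capacity_bps(W_hz, P_watts, N0_w_per_hz):
    """Shannon capacity C = W*log2(1 + P/(N0*W)) for a bandlimited AWGN channel."""
    return W_hz * math.log2(1 + P_watts / (N0_w_per_hz * W_hz))

W = 4000.0           # assumed bandwidth
snr = 1000.0         # assumed P/(N0*W) = 1000, i.e. 30 dB
N0 = 1e-9
print(capacity_bps(W, snr * N0 * W, N0))   # ~ 39869 bits/sec
```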

Implications of channel capacity


If the data rate is kept below channel capacity, R < C, then it is theoretically possible to achieve error-free transmission
If the data rate exceeds channel capacity, error-free transmission is no longer possible

First step toward digital comm: the sampling theorem

Main question: can a finite number of samples of a continuous wave be enough to represent the information? OR
Can you tell what the original signal was below?

How to fill in the blanks?


Could you have guessed this? Is there a
unique signal connecting the samples?


Sampling schemes
There are at least 3 sampling schemes
Ideal
Flat-top
Sample and hold


Ideal sampling
Ideal sampling refers to the type of samples taken. Here, we are talking about impulse-like (zero-width) samples.
[Figure: impulse train with spacing Ts]

Ideal sampler
Multiply the continuous signal g(t) with a train of impulses δ(t − nTs), spaced Ts apart:
gδ(t) = Σn g(nTs) δ(t − nTs)

Key question
What is the proper sampling rate to allow for a perfect reconstruction of the signal from its samples?
To answer this question, we need to know how g(t) and gδ(t) are related.

Spectrum of gδ(t)
gδ(t) is given by the following product
gδ(t) = g(t) Σn δ(t − nTs)
Taking the Fourier transform,
Gδ(f) = G(f) * [fs Σn δ(f − nfs)]
A graphical rendition of this convolution follows next

Expanding the convolution


We can exchange convolution and summation:
Gδ(f) = G(f) * [fs Σn δ(f − nfs)] = fs Σn {G(f) * δ(f − nfs)}
Each convolution term shifts G(f) to f = nfs

Gδ(f): final result
The spectrum of the sampled signal is then given by
Gδ(f) = fs Σn G(f − nfs)
This is simply the spectrum of the original continuous signal replicated at multiples of the sampling rate

Showing the spectrum of gδ(t)

Each term of the convolution is the original spectrum shifted to a multiple of the sampling frequency
[Figure: G(f) and Gδ(f); Gδ(f) contains copies of G(f) centered at 0, ±fs, ±2fs, ...]

Recovering the original signal


It is possible to recover the original spectrum by lowpass filtering the sampled signal
[Figure: Gδ(f) with an LPF of bandwidth W selecting the baseband copy of G(f) and rejecting the copies at fs, 2fs, ...]

Nyquist sampling rate


In order to cleanly extract the baseband (original) spectrum, we need sufficient separation from the adjacent sidebands
The minimum separation is found by requiring that the copy at fs not overlap the baseband copy:
fs − W > W  ⇒  fs > 2W

Sampling below Nyquist: aliasing
If a signal is sampled below its Nyquist rate (fs < 2W), spectral folding, or aliasing, occurs.
Lowpass filtering will not recover the baseband spectrum intact, as a result of the spectral folding
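A small numerical sketch of aliasing (assumed numbers, not from the slides): a 3 kHz tone sampled at only 5 kHz produces exactly the same samples as a 2 kHz tone, so the two are indistinguishable after sampling.

```python
import numpy as np

fs = 5000.0          # sampling rate (Hz): below Nyquist for the tone below
f_tone = 3000.0      # tone frequency: 2*f_tone = 6000 Hz > fs, so it aliases
n = np.arange(8)     # a few sample indices

samples = np.cos(2 * np.pi * f_tone * n / fs)
alias = np.cos(2 * np.pi * (fs - f_tone) * n / fs)   # a 2000 Hz tone

# The two sample sequences are identical: a lowpass reconstruction would
# return the 2 kHz tone, not the original 3 kHz one.
print(np.allclose(samples, alias))   # True
```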

Sample-and-hold
A practical way of sampling a signal is the sample-and-hold operation. Here is the idea: the signal is sampled and its value is held until the next sample

Issues
Here are the questions we need to answer:
What is the sampling rate now?
Can the message be recovered?
What price do we pay for going with a practical
approach?


Modeling sample-and-hold
The result of sample-and-hold can be modeled by writing the sampled signal as
s(t) = Σn m(nTs) h(t − nTs)
where h(t) is a basic square pulse and m(t) is the baseband message
Each term is the square pulse h(t) scaled by the signal sample at that point, i.e. m(nTs) h(t − nTs)

A systems view
It is possible to come up with a system that does sample-and-hold.
[Figure: an ideal sampler followed by a filter with impulse response h(t), a square pulse]
Each impulse generates a square pulse h(t) at the output. The outputs are also spaced by Ts; thus we have a sample-and-hold signal

Message reconstruction
Key question: can we go back to the original signal after sample-and-hold?
This question can be answered in the frequency domain

Spectrum of the sample-and-hold signal

The sample-and-hold signal is generated by passing an ideally sampled signal, mδ(t), through a filter h(t). Therefore, we can write
s(t) = mδ(t) * h(t)
or
S(f) = Mδ(f) H(f)
S(f) is what we have available; Mδ(f) contains the message M(f); H(f) is known (it is a sinc)

Is message recoverable?
Let's look at the individual components of S(f). From the ideal sampling results,
Mδ(f) = fs Σk M(f − kfs)

Problems with message recovery


The problem here is that we don't have access to Mδ(f). If we did, it would be like ideal sampling
What we do have access to is S(f):
S(f) = Mδ(f) H(f)
We therefore have a distorted version of an ideally sampled signal

Example message
Let's show what is happening. Assume a message spectrum that is flat, as follows
[Figure: flat M(f) of width W, and Mδ(f) with copies of M(f) at 0, fs, 2fs, ...]

Sample-and-hold spectrum
We don't see Mδ(f). We see Mδ(f)H(f). Since h(t) was a square pulse of width Ts, H(f) is a sinc(fTs).
[Figure: Mδ(f) multiplied by the sinc-shaped H(f), whose first null is at 1/Ts = fs]

Distortion potential
The original analog message is in the lowpass term of Mδ(f)
H(f), through the product Mδ(f)H(f), causes a distortion of this term.
Lowpass filtering of the sample-and-hold signal will therefore only recover a distorted message
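A short sketch of how large this aperture distortion is. The numbers (fs = 8 kHz, W = 4 kHz) are assumptions; np.sinc is the normalized sinc, matching H(f) ~ sinc(f·τ) for a hold pulse of width τ.

```python
import numpy as np

def droop_db(W_hz, tau_s):
    """Attenuation of the band edge f = W relative to f = 0 due to the hold pulse."""
    return -20 * np.log10(np.abs(np.sinc(W_hz * tau_s)))

fs = 8000.0
W = 4000.0
print(droop_db(W, 1 / fs))      # full-width hold (tau = Ts): ~3.9 dB droop at W
print(droop_db(W, 0.25 / fs))   # narrower pulse (tau = Ts/4): ~0.2 dB droop
```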

Illustrating distortion
[Figure: Mδ(f)H(f). The baseband copy of M(f), |f| < W, is what we want to recover, but it is shaped by H(f), whose first null is at 1/Ts = fs. If the sample-and-hold signal is lowpass filtered, the original message is not recovered; what is actually recovered is a distorted version.]

How to control distortion?


In order to minimize the effect of H(f) on reconstruction, we must make H(f) as flat as possible over the message bandwidth (−W, W)
What does that mean? It means moving the first zero crossing of H(f) to the right, by increasing the sampling rate or decreasing the pulse width

Does it make sense?

The narrower the pulse, and hence the higher the sampling rate, the more accurately you can capture the signal variations

Variation on sample-and-hold
Contrast the following two arrangements
[Figure: hold pulses of width τ spaced Ts apart; the sample period and the pulse width are not the same]

How does this affect reconstruction?

The only thing that will change is h(t), and hence H(f)
[Figure: same picture as before, but H(f) now has its first zero crossing at 1/τ rather than 1/Ts; lowpass filtering still recovers a distorted version of the message]

How to improve reconstruction?


Again, we need to flatten out H(f) within (−W, W), and the way to do it is to use narrower pulses (smaller τ)

Sample-and-hold converges to
ideal sampling
If reducing the pulse width of h(t) is a good idea, why not take it to the limit and make it zero?
We can do that, in which case sample-and-hold collapses to ideal sampling (impulses are zero-width pulses)

Pulse Code Modulation


Filtering, Sampling, Quantization and
Encoding

Elements of PCM Transmitter


The encoder consists of 5 pieces
[Block diagram] Continuous message → LPF → Sampler → Quantizer → Encoder → transmission path with regenerative repeaters

Quantization
Quantization is the process of taking
continuous samples and converting them to
a finite set of discrete levels
[Figure: continuous sample values such as 1.52, 1.2, 0.86, 0.41 mapped onto the nearest discrete levels]

Defining a quantizer
A quantizer is defined by its input/output characteristic: continuous values in, discrete values out
[Figure: staircase input/output characteristics of the midtread and midrise quantizer types; the output remains constant even as the input varies over a range]

Quantization noise/error
The quantizer clearly discards some information. The question is: how much error is committed?
Message (m) → quantizer q(·) → quantized message (v)
Error: q = m − v

Illustrating quantization error


[Figure: a sampled waveform and its quantized version; output levels v1, v2, v3 are separated by the quantizer step size Δ, and the quantization error is the gap between the sampled and quantized values]

More on Δ
Δ controls how finely samples are quantized. Equivalently, Δ controls the quantization error.
To determine Δ we need to know two parameters
Number of quantization levels
Dynamic range of the signal

Δ for a uniform quantizer

Let the sample values lie in the range (−mmax, +mmax). We also want exactly L levels at the output of the quantizer. Simple math tells us
Δ = 2·mmax / L
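A minimal sketch of a uniform (midrise) quantizer built from this step size; the test values and mmax = 2, L = 8 are assumptions for illustration.

```python
import numpy as np

def uniform_quantize(x, m_max, L):
    """Midrise uniform quantizer with L levels over (-m_max, +m_max)."""
    delta = 2 * m_max / L                      # step size, as on the slide
    idx = np.floor(x / delta)                  # which cell the sample falls in
    idx = np.clip(idx, -L // 2, L // 2 - 1)    # keep overloads inside the range
    return (idx + 0.5) * delta                 # reconstruction level = cell center

samples = np.array([0.41, 0.86, -1.2, 1.52])
print(uniform_quantize(samples, m_max=2.0, L=8))   # step 0.5 -> [0.25 0.75 -1.25 1.75]
```

Note that every output differs from its input by at most Δ/2 = 0.25, consistent with the error bound on the next slide.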

Quantization error bounds


The quantization error is bounded by half the step size:
|q| ≤ Δ/2

Statistics of q
Quantization error is random. It can be positive or negative with equal probability. This is an example of a uniformly distributed random variable.
[Figure: density function f(q) = 1/Δ over the interval (−Δ/2, +Δ/2)]

Quantization noise power


Any uniformly distributed random variable in the range (−a/2, a/2) has an average power (variance) given by a²/12.
Here, the quantization noise range is Δ, therefore
σq² = Δ²/12

Signal-to-quantization noise
Leaving aside random noise, there is always a finite quantization noise.
Let the original continuous signal have power P = ⟨m²(t)⟩ and the quantization noise variance (power) be σq²
(SNR)q = P / σq² = 12P / Δ²

Substituting for Δ
We have related the step size to the signal dynamic range and the number of quantization levels
Δ = 2·mmax / L
Therefore, the signal-to-quantization-noise ratio (sqnr) is
sqnr = (SNR)q = [3P / m²max] · L²

Example
Let m(t) = cos(2πfm·t). What is the signal-to-quantization-noise ratio (sqnr) for a 256-level quantizer?
The average message power P is 0.5, therefore
sqnr = (3 × 0.5 / 1) × 256² = 98304 ≈ 50 dB
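A quick numerical check of this example (a sketch, using the same midrise-quantizer idea as above with assumed fm and fs):

```python
import numpy as np

def uniform_quantize(x, m_max, L):
    delta = 2 * m_max / L
    idx = np.clip(np.floor(x / delta), -L // 2, L // 2 - 1)
    return (idx + 0.5) * delta

fm, fs, L, m_max = 100.0, 80000.0, 256, 1.0
t = np.arange(0, 1, 1 / fs)
m = np.cos(2 * np.pi * fm * t)
q_noise = m - uniform_quantize(m, m_max, L)

P = np.mean(m ** 2)                          # ~0.5
sqnr_theory = 3 * P / m_max ** 2 * L ** 2    # ~98304
sqnr_sim = P / np.mean(q_noise ** 2)
print(10 * np.log10(sqnr_theory), 10 * np.log10(sqnr_sim))   # both ~50 dB
```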

Nonuniform quantizer
Uniform quantization is, in practice, a fantasy. The reason is that the signal amplitude is not equally spread out; it occupies mostly the low amplitude levels

Solution: nonuniform intervals
Quantize finely where the amplitudes spend most of their time

Implementing nonuniform quantization: companding
The signal is first processed through a nonlinear device that stretches low amplitudes and compresses large amplitudes
[Figure: compressor input/output curve; low amplitudes are stretched, large amplitudes are compressed]

A-law and μ-law

There are two companding curves, A-law and μ-law. Both are very similar
Each has an adjustment parameter that controls the degree of companding (the slope of the curve)
Following companding, a uniform quantization is used
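A sketch of the standard μ-law compressor curve (μ = 255 is the common North American setting); the input values are arbitrary examples.

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Standard mu-law compressor, for inputs normalized to |x| <= 1."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

x = np.array([0.01, 0.1, 0.5, 1.0])
print(np.round(mu_law_compress(x), 3))
# -> [0.228 0.591 0.876 1.   ]: a 0.01 input is stretched to ~0.23 of full scale
```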

Encoder
Quantizer outputs are merely levels. We
need to convert them to a bitstream to finish
the A/D operation
There are many ways of doing this
Natural coding
Gray coding


Natural coding
How many bits does it take to represent L levels? The answer is
n = log2 L bits/sample
Natural coding is a simple decimal-to-binary conversion
Quantizer levels (8) → encoder output (3 bits per sample):
0 → 000, 1 → 001, 2 → 010, 3 → 011, ..., 7 → 111

Gray coding
Here is the problem with natural coding: if levels 2 (010) and 1 (001) are mistaken for one another, we suffer two bit errors
We want an encoding scheme that assigns adjacent levels code words that differ in at most one bit location

Gray coding example


Take a 4-bit quantizer (16 levels). Adjacent levels differ by just one bit
0 → 0001
1 → 0000
2 → 0100
3 → 0101
4 → 1101
...
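The assignment above is one valid choice; the standard binary-reflected Gray code is another with the same one-bit-per-step property, and it is trivial to generate (a sketch):

```python
def gray(level):
    """Binary-reflected Gray code of an integer quantizer level."""
    return level ^ (level >> 1)

for level in range(6):
    print(level, format(gray(level), '04b'))
# adjacent levels differ in exactly one bit: 0000, 0001, 0011, 0010, 0110, 0111
```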

Quantizer word size


Knowing n, we can refer to n-bit quantizers
For example, L = 256 corresponds to n = 8 bits/sample
We are then looking at an 8-bit quantizer

Interaction between sqnr and bits/sample

Converting sqnr to dB provides a different insight. Take 10·log10(sqnr):
sqnr = k·L², where k = 3P/m²max
In dB, with α = 10·log10 k,
(sqnr)dB = α + 20·log10 L = α + 20·log10 2^n
(sqnr)dB = α + 6n dB

sqnr varies linearly with bits/sample

What we just saw says higher sqnr is achieved by increasing n (bits/sample).
The question then is: what keeps us from doing that forever, thus getting an arbitrarily large sqnr?

Cost factor
We can increase the number of bits/sample, and hence the number of quantization levels, but at a cost. But why?
One clue is that as we go to finer quantization, the levels become tightly packed and difficult to discern at the receiver, hence higher error rates.
There is also a bandwidth cost

Basis for finding PCM bandwidth

Nyquist said that in a channel with transmission bandwidth BT, we can transmit at most 2BT pulses per second:
R (pulses/second) ≤ 2·BT (Hz)
or
BT (Hz) ≥ R/2

Transmission over phone lines


Analog phone lines are limited to 4 kHz of bandwidth; what is the fastest pulse rate possible?
R ≤ 2·BT = 2 × 4000 = 8000 pulses/sec
That's it? Modems do a bit better than this!
One way to raise the bit rate is to stuff each pulse with multiple bits. More on that later

Accommodating a digital source

A source is generating a million bits/sec. What is the minimum required transmission bandwidth?
BT ≥ R/2 = 10⁶/2 = 500 kHz

PCM bit rate


The bit rate at the output of the encoder is simply the following product
R (bits/sec) = n (bits/sample) × fs (samples/sec)
R = n·fs bits/sec
[Figure: each quantized sample is encoded into an n-bit codeword]

PCM bandwidth
But we know the sampling frequency is (at least) 2W. Substituting fs = 2W in R = n·fs,
R = 2nW (bits/sec)
We also had BT ≥ R/2. Replacing R we get
BT ≥ nW

Comments on PCM bandwidth


We have established a lower bound (minimum) on the required bandwidth.
The cost of doing PCM is the large required bandwidth. The way we can measure it is the bandwidth expansion, quantified by
BT/W ≥ n (bits/sample)

Bandwidth expansion factor


Similar to FM, there is a bandwidth expansion factor relative to baseband, i.e.
β = BT/W ≥ n
Let's say we have 8 bits/sample, meaning it takes, at a minimum, 8 times the baseband bandwidth to do PCM

PCM bandwidth example


Want to transmit voice (~4 kHz) using 8-bit PCM. How much bandwidth is needed?
We know W = 4 kHz, fs = 8 kHz and n = 8.
BT ≥ nW = 8 × 4000 = 32 kHz
This is the minimum PCM bandwidth under ideal conditions. "Ideal" has to do with the pulse shape used
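A minimal helper that chains the relations used above (Nyquist sampling, R = n·fs, BT ≥ nW); a sketch rather than a standard API.

```python
def pcm_parameters(W_hz, n_bits):
    """Nyquist sampling rate, PCM bit rate, and minimum transmission bandwidth."""
    fs = 2 * W_hz            # Nyquist sampling rate
    R = n_bits * fs          # bit rate R = n * fs = 2nW
    BT_min = R / 2           # minimum transmission bandwidth = nW
    return fs, R, BT_min

print(pcm_parameters(4000, 8))   # (8000, 64000, 32000.0): 8 kHz, 64 kb/s, 32 kHz
```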

Bandwidth-power exchange
We said that using finer quantization (more bits/sample) enhances the sqnr because
(sqnr)dB = α + 6n dB
At the same time, we showed that bandwidth increases linearly with n. So we have a trade-off

sqnr improvement
Let's say we increase n by 1, from 8 to 9 bits/sample. As a result, the sqnr increases by 6 dB:
sqnr = α + 6×8 = α + 48 dB
sqnr = α + 6×9 = α + 54 dB (a 6 dB improvement)

Bandwidth increase
Going from n = 8 bits/sample to 9 bits/sample, the minimum bandwidth rises from 8W to 9W.
If the message bandwidth is 4 kHz, then
BT = 32 kHz for n = 8
BT = 36 kHz for n = 9 (+4 kHz, or a 12.5% increase)

Is it worth it?
Let's look at the trade-off:
Cost in increased bandwidth: 12.5%
Benefit in increased sqnr: 6 dB
Every 3 dB means a doubling of the sqnr. So we have quadrupled the sqnr by paying 12.5% more in bandwidth

Another way to look at the exchange

We provided 12.5% more bandwidth and ended up with 6 dB more sqnr.
If we are satisfied with the sqnr we had, we can dial back the transmitted power by 6 dB and suffer no loss
In other words, we have exchanged bandwidth for lower power

Similarity with FM
PCM and FM are examples of wideband modulation. All such modulations provide a bandwidth-power exchange, but at different rates. Recall β = BT/W
FM:  SNR ~ β²
PCM: SNR ~ 2^(2β)  (much more sensitive to β, so a better exchange)

Complete PCM system design


Want to transmit voice with an average power of 1/2 watt and peak amplitude 1 volt, using a 256-level quantizer. Find:
sqnr
Bit rate
PCM bandwidth

Signal to quantization noise


We had
sqnr = [3P/m²max]·L²
With L = 256, P = 1/2 and mmax = 1,
sqnr = 98304 ≈ 50 dB

PCM bitrate
The bit rate is given by
R = 2nW (bits/sec) = 2 × 8 × 4000 = 64 kb/sec
This rate is the standard PCM voice channel
This is why we can have 56K transmission over the digital portion of the telephone network, which can accommodate 64 kb/sec.

PCM bandwidth
We can really only talk about the minimum bandwidth, given by
BT|min = nW = 8 × 4000 = 32 kHz
In other words, we need a minimum of 32 kHz of bandwidth to transmit 64 kb/sec of data.

Realistic PCM bandwidth


A rule of thumb for the bandwidth required by digital data is bandwidth = bit rate:
BT = R
So for 64 kb/sec we need 64 kHz of bandwidth
One hertz per bit

Differential PCM
Concept of differential encoding is of great
importance in communications
The underlying idea is not to look at
samples individually but to look at past
values as well.
Often, samples change very little, so a substantial compression can be achieved

Why differential?
Let's say we have a DC signal and blindly go about PCM-encoding it. Is it smart?
Clearly not. What we have failed to realize is that the samples don't change. We can send the first sample and tell the receiver that the rest are the same

Definition of differential
encoding
We can therefore say that in differential encoding, what is recorded and ultimately transmitted is the change in sample amplitudes, not their absolute values
We should send only what is NEW.

Where is the saving?


Consider the following two situations
[Figure: on the left, the absolute sample values (around 1.6 to 2); on the right, the differences between adjacent samples (around 0.4 to 0.8)]
The right-hand samples are adjacent-sample differences with a much smaller dynamic range, requiring fewer quantization levels

Implementation of DPCM: prediction
At the heart of DPCM is the idea of prediction
Based on the n−1 previous samples, the encoder generates an estimate of the nth sample. Since the nth sample is known, the prediction error can be found. This error is then transmitted

Illustrating prediction
Here is what is happening at the transmitter
[Figure: past samples (already sent), the prediction of the current sample, and the prediction error, which is what is to be transmitted. Only the prediction error is sent.]

What does the receiver do?


The receiver has the identical prediction algorithm available to it. It has also received all previous samples, so it can make a prediction of its own
The transmitter helps out by supplying the prediction error, which is then used by the receiver to update the predicted value

Interesting speculation
What if our power of prediction were perfect? In other words, what if we could predict the next sample with no error?
What kind of communication system would we be looking at?

Prediction error
Let m(t) be the message and Ts the sample interval; the prediction error is then given by
e(nTs) = m(nTs) − m̂(nTs)
where m̂(nTs) is the predicted value of the nth sample

Prediction filter
Prediction is normally done using a
weighted sum of N previous samples
m̂(nTs) = Σ (i = 1 to N) wi · m((n − i)Ts)
The quality of the prediction depends on a good choice of the weights wi

Finding the optimum filter


How do you find the best weights? Obviously, we need to minimize the prediction error. This is done statistically:
minimize ⟨e²(nTs)⟩ over the weights w
Choose the set of weights that gives the lowest (on average) squared prediction error

Prediction gain
Prediction provides an SNR improvement by a factor called the prediction gain
Gp = σ²M / σ²e = (message power) / (prediction-error power)
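A small sketch of these two ideas together: choose the weights by minimizing the mean-squared prediction error (normal equations) and measure the resulting prediction gain. The AR(1) test signal and its parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed test signal: strongly correlated samples, x[k] = 0.95*x[k-1] + noise
x = np.zeros(20000)
for k in range(1, len(x)):
    x[k] = 0.95 * x[k - 1] + rng.normal(scale=0.1)

# 2-tap predictor: pick w to minimize <e^2> via the normal equations
N = 2
X = np.column_stack([x[N - 1 - i:len(x) - 1 - i] for i in range(N)])  # past samples
target = x[N:]                                                        # sample to predict
w = np.linalg.solve(X.T @ X, X.T @ target)

e = target - X @ w
Gp = np.var(target) / np.var(e)     # prediction gain = message power / error power
print(w, 10 * np.log10(Gp))         # roughly 10 dB of gain for this correlated signal
```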

How much gain?


On average, this gain is about 4-11 dB. Recall that 6 dB of SNR gain can be exchanged for 1 bit per sample
At 8000 samples/sec (for speech) we can therefore save 1 to 2 bits per sample, thus saving 8-16 kb/sec.

DPCM encoder
[Block diagram] Input sample → (minus prediction) → prediction error → quantizer → encoder → encoded prediction error out. An N-tap prediction filter forms the prediction; the quantized prediction error is added back to it to produce the updated prediction.
The prediction error is used to correct the estimate in time for the next round of prediction

Delta modulation (DM)


DM is actually a very simplified form of DPCM
In DM, the prediction of the next sample is simply the previous sample

DM encoder diagram
[Block diagram] Input sample → (minus prediction) → prediction error → 1-bit quantizer → prediction error (±Δ) out. The ±Δ output is also added back to the prediction, and a delay of Ts feeds this updated prediction back for the next sample.

DM encoder operation
The prediction error generates ±Δ at the output of the quantizer
If the error is positive, it means the prediction is below the sample value, in which case the estimate is updated by +Δ for the next step
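A minimal sketch of this loop (the test tone, sampling rate and Δ = 0.15 are assumptions chosen so the staircase can keep up with the signal slope):

```python
import numpy as np

def delta_modulate(m, delta):
    """1-bit DM encoder: prediction = previous staircase value +/- delta."""
    bits, pred = [], 0.0
    for sample in m:
        bit = 1 if sample >= pred else 0    # sign of the prediction error
        pred += delta if bit else -delta    # update the staircase estimate
        bits.append(bit)
    return np.array(bits)

t = np.arange(0, 1, 1 / 100)
m = np.sin(2 * np.pi * 2 * t)
bits = delta_modulate(m, delta=0.15)

# The decoder rebuilds the same staircase: a running sum of +/-delta steps
recon = np.cumsum(np.where(bits == 1, 0.15, -0.15))
print(np.max(np.abs(m - recon)))   # stays within a couple of step sizes here
```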

Slope overload effect


The signal rises faster than the prediction can track: Δ is too small
[Figure: samples taken every Ts, the staircase predictions, and the initial estimate; the staircase lags behind the rapidly rising signal]

Steady state: granular noise


The prediction can track the signal; the prediction error is small
[Figure: around a nearly constant signal the staircase hunts up and down, e.g. two Δ drops to reach the signal]

Shortcomings of DM
It is clearly the prediction stage that is lacking
Samples must be taken close together to ensure that the previous-sample prediction is reasonably accurate
This means higher sampling rates

Multiplexing
Concurrent communications calls for some
form of multiplexing. There are 3 categories
FDMA(frequency division multiple access)
TDMA(time division multiple access)
CDMA(code division multiple access)

All 3 enjoy a healthy presence in the


communications market

FDMA
In FDM, multiple users can be on at the same time by placing them in orthogonal frequency bands
[Figure: the total bandwidth divided among user 1 ... user N, with guard bands between adjacent channels]

FDMA example: AMPS
AMPS, the analog wireless standard, is a good example:
Reverse link (mobile-to-base): 824-849 MHz
Forward link: 869-894 MHz
Channel bandwidth: 30 kHz
Total # channels: 833
Modulation: FM, peak deviation 12.5 kHz

TDMA
Whereas FDMA is primarily an analog standard, TDMA and CDMA are for digital communication
In TDMA, each user is assigned a time slot, as opposed to a frequency slot in FDMA

Basic idea behind TDMA


Take the following 3 digital lines
[Figure: bits from the three lines are interleaved in time into repeating frames]

TDM-PCM

[Block diagram] TDM-PAM signal → quantizer and encoder → TDM-PCM (bits) → channel → decoder → lowpass filters (LPF), one per recovered message

Parameters of TDM-PCM
A TDM-PCM line multiplexing M users is characterized by the following parameters
data rate (bit or pulse rate)
bandwidth

TDM-PCM Data rate


Here is what we have
M users
Each sampled at the Nyquist rate
Each sample PCM'd into n-bit words
The total bit rate is then
R = M (users) × fs (samples/sec/user) × n (bits/sample)
R = n·M·fs bits/sec

TDM-PCM bandwidth
Recall the Nyquist bandwidth: given R pulses per second, we need at least R/2 Hz.
In reality we need more (depending on the pulse shape), so
BT = R = n·M·fs Hz

T1 line
The best known of all TDM schemes is AT&T's T1 line
The T1 line multiplexes 24 voice channels (4 kHz each) into one single bitstream running at the rate of 1.544 Mb/sec. Let's see how

T1 line facts
Each of the 24 voice lines is sampled at 8 kHz
Each sample is then encoded into 8 bits
A frame consists of 24 samples, one from each line
Some data bits are preempted for control and supervisory signaling

T1 line structure:
all frames except 1,7,13,19...
[Figure: one frame carries the 8 information bits of channel 1, then channel 2, ..., then channel 24 (8 bits per sample); the frame then repeats]

Inserting non-data bits


In addition to data, we need slots for signaling bits (on-hook/off-hook, charging)
Every 6th frame (frames 1, 7, 13, 19, ...) is selected, and the least significant bit of each channel is replaced by a signaling bit
[Figure: in these frames each channel carries only 7 information bits; the 8th bit slot holds the signaling bit]

Framing bit
Timing is of utmost significance in T1. We
MUST be able to know where the
beginning of each frame is
At the end of each frame a single bit is
added to help with frame identification
[Figure: the 24 channels of 8 information bits each, followed by a single framing bit F at the end of the frame]

T1 frame length
How long is one frame? One revolution of the commutator generates one frame. With each line sampled at 8 kHz, the commutator rotates at 8000 revolutions/sec, so
frame length = 1/8000 = 125 microseconds

T1 bit rate per frame


Data bits: 8 × 24 = 192 bits per frame
Framing: 1 bit per frame
Total: 193 bits per frame

Total T1 bit rate


We know there are 8000 frames per second and 193 bits per frame. Therefore
T1 rate = 193 × 8000 = 1.544 Mb/sec

Signaling rate component


Not all of the 1.544 Mb/sec is data. In every 6th frame, we replace 24 data bits by signaling bits. Therefore
signaling rate = (8000 frames/sec) × (1/6) × (24 bits) = 32 kbits/sec
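The T1 bookkeeping from the last few slides, collected in one short sketch:

```python
# T1 arithmetic: frame length, total bit rate, and the signaling-rate component
channels, bits_per_sample, fs = 24, 8, 8000   # 24 voice lines, 8-bit PCM, 8 kHz sampling

frame_bits = channels * bits_per_sample + 1   # 24*8 data bits + 1 framing bit = 193
frame_length_s = 1 / fs                       # one frame per revolution: 125 us
t1_rate = frame_bits * fs                     # 193 * 8000 = 1,544,000 b/s
signaling_rate = fs * (1 / 6) * channels      # LSBs stolen in every 6th frame

print(frame_length_s, t1_rate, signaling_rate)   # 0.000125  1544000  32000.0
```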

TDM hierarchy
It is possible to build upon T1 as follows
[Figure: 24 DS0 channels (64 kb/sec each) → 1st-level multiplexer → DS1 at 1.544 Mb/sec → 2nd-level multiplexer → DS2 at 6.312 Mb/sec; 7 DS2 lines → 3rd-level multiplexer → DS3 at 44.736 Mb/sec]

Recommended problems
6.2
6.15
6.17
