
Advanced Digital Communication Waveform Coding Techniques

Waveform Coding Techniques


- Pulse-code modulation
- Channel noise and error probability
- Quantization noise and signal-to-noise ratio
- Robust quantization
- Differential pulse code modulation
- Delta modulation
- Coding speech at low bit rates
- Applications
M Theerthagiri 2

Signal Encoding - 4 Types


Data and signal can each be analog or digital, giving four combinations:

- Digital data, digital signal: two different voltage levels represent binary 0 and 1; more complex encoding schemes are used to improve performance by altering the spectrum of the signal and providing synchronization capability.
- Digital data, analog signal: a modem converts digital data into an analog signal so that it can be transmitted over an analog line (ASK, FSK, PSK, QAM).
- Analog data, digital signal: voice and video; PCM, DM.
- Analog data, analog signal: AM, FM, PM.

M Theerthagiri

Taxonomy of Speech Coders


Speech coders fall into three broad classes:

- Waveform coders: attempt to preserve the signal waveform; not speech-specific (i.e. general A-to-D conversion). Time domain: PCM (64 kbps), ADPCM (32 kbps), CVSDM (32 kbps). Frequency domain: e.g. sub-band coder, adaptive transform coder.
- Source coders (vocoders): analyse speech, extract and transmit model parameters, and use the model parameters to synthesize speech at the receiver; e.g. the linear predictive coder, LPC-10 at 2.4 kbps.
- Hybrids: combine the best of both, e.g. CELP (used in GSM).
M Theerthagiri 4

From analog signal to digital code (PCM)

M Theerthagiri

Digital representation of Analog signals




Advantages:
- ruggedness to transmission noise and interference
- efficient regeneration of the coded signal along the transmission path
- the potential for communication privacy and security through encryption
- the possibility of a uniform format for different kinds of baseband signals

Disadvantages:
- increased transmission bandwidth requirement
- increased system complexity

PCM


PCM belongs to a class of signal coders known as waveform coders, in which an analog signal is approximated by mimicking its amplitude-versus-time waveform; hence the name.

M Theerthagiri

What is meant by PCM?




Pulse code modulation (PCM) is a method of signal coding in which the message signal is sampled and the amplitude of each sample is rounded off to the nearest one of a finite set of discrete levels and encoded, so that both time and amplitude are represented in discrete form. This allows the message to be transmitted by means of a digital waveform.

M Theerthagiri

PCM system : basic elements


A/D

D/A
M Theerthagiri 9

Basic Signal Processing Operations in PCM

- Sampling
- Quantizing
- Encoding
- Regeneration
- Decoding
- Reconstruction
- Multiplexing
- Synchronization


M Theerthagiri 10

Sampling

- The incoming message wave is sampled with a train of narrow rectangular pulses so as to closely approximate the ideal (instantaneous) sampling process.
- To ensure perfect reconstruction of the message, the sampling rate must be greater than twice the highest frequency component W of the message wave.
- In practice, a low-pass anti-aliasing (pre-alias) filter is used at the front of the sampler to exclude frequencies greater than W before sampling.
- Sampling reduces the continuously varying message wave to a limited number of discrete values per second.
M Theerthagiri 11
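A minimal numerical sketch of this idea (not from the slides; the values W = 3.4 kHz and fs = 8 kHz are assumed telephone-style figures), showing a sampling rate chosen above the Nyquist rate:

```python
# Minimal sketch (values assumed): sample a tone that lies inside the
# message band W at a rate fs chosen above the Nyquist rate 2W.
import numpy as np

W = 3.4e3                        # highest message frequency (Hz), assumed
fs = 8.0e3                       # sampling rate (Hz), assumed; fs > 2*W
Ts = 1.0 / fs                    # sampling period

t = np.arange(0, 2e-3, Ts)                  # sample instants over 2 ms
samples = np.sin(2 * np.pi * 1e3 * t)       # a 1 kHz tone inside the band

print(f"Nyquist rate = {2 * W / 1e3:.1f} kHz, fs = {fs / 1e3:.1f} kHz")
print(f"{len(samples)} samples taken in 2 ms")
```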

Sampling an analogue signal

Prior to digitisation, signals must be sampled




 

- The ADC converts the height of each pulse into a binary representation.
- Sampling involves multiplying the signal by a train of sampling pulses with frequency fs = 2B = 1/T.

M Theerthagiri

12

Sampling as multiplication by a sampling waveform:

- The sampling pulse is short enough that it can normally be considered to have zero duration; the DAC, however, produces pulses of length T.
- Multiplication by the pulse train is a form of amplitude modulation, and amplitude modulation produces sidebands.

13

Quantizing


 

- The conversion of an analog (continuous) sample of the signal into digital (discrete) form is called the quantizing process.
- Because the human ear / eye can detect only finite intensity differences, it is not necessary to transmit the exact amplitude of the samples; the original analog signal may be approximated by a signal constructed of discrete amplitudes.
M Theerthagiri 14

Quantizing


- Quantization reduces the number of distinct output values to a much smaller set; it is the main source of the "loss" in lossy compression.
- Three different forms of quantization:
  - uniform: midrise and midtread quantizers
  - nonuniform: companded quantizer
  - vector quantization
M Theerthagiri 15

Quantized signal

Each value is translated to its 7-bit binary equivalent; the 8th bit indicates the sign.
M Theerthagiri 16

Quantized signal

first three sample values

M Theerthagiri

17

Basic signal processing operations in PCM: Quantization

M Theerthagiri

18

Quantizing


- The peak-to-peak range of input sample values is subdivided into a finite set of decision levels or decision thresholds that are aligned with the risers of the staircase.
- The output is assigned a discrete value selected from a finite set of representation levels or reconstruction values that are aligned with the treads of the staircase.
M Theerthagiri 19


Two types of quantization


[Figure: transfer characteristics of the midtread and midrise quantizers, showing the representation levels, the decision thresholds (at multiples of the step size) and the overload level.]


- M even: zero is not one of the output levels; zero is a decision boundary (midrise quantizer).
- M odd: zero is one of the output levels; zero is a reconstruction level (midtread quantizer).


M Theerthagiri 24

Symmetric uniform quantization midtread




 

- The peak-to-peak range of input sample values is subdivided into a finite set of decision levels or decision thresholds.
- The thresholds are aligned with the risers of the staircase and are located at ±Δ/2, ±3Δ/2, ±5Δ/2, ...
- The output is assigned a discrete value aligned with a tread of the staircase; the representation levels are at 0, ±Δ, ±2Δ, ...
M Theerthagiri 25

Symmetric uniform quantization : midrise




- decision thresholds are located at 0, ±Δ, ±2Δ, ...
- representation levels are at ±Δ/2, ±3Δ/2, ±5Δ/2, ...

M Theerthagiri

26

Symmetric uniform quantization




- Overload level: its absolute value is 0.5 times the peak-to-peak range of input sample values.
- Quantization error: the difference between the output and the input of the quantizer; its maximum instantaneous value is 0.5 × step size, so the total range of variation is from −(0.5 × step) to +(0.5 × step); see the sketch below.
M Theerthagiri 27
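The bound on the quantization error can be checked numerically. The sketch below is an illustration written for this purpose (a plain midrise quantizer, not code from the slides):

```python
# Minimal sketch (assumption, not from the slides): a symmetric uniform
# midrise quantizer, showing that the quantization error stays within
# +/- step/2 as long as the input does not exceed the overload level.
import numpy as np

def midrise_quantize(x, n_bits, x_max):
    """Quantize x onto 2**n_bits levels spanning [-x_max, +x_max]."""
    L = 2 ** n_bits                      # number of representation levels
    step = 2 * x_max / L                 # step size (Delta)
    # decision thresholds at 0, +/-step, ...; levels at +/-step/2, +/-3*step/2, ...
    k = np.floor(x / step)
    k = np.clip(k, -L // 2, L // 2 - 1)  # stay inside the overload level
    return (k + 0.5) * step

x = np.random.uniform(-1, 1, 10_000)     # test samples within the range
y = midrise_quantize(x, n_bits=8, x_max=1.0)
err = y - x
print("max |error|:", np.abs(err).max(), "<= step/2 =", (2 * 1.0 / 2**8) / 2)
```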


Encoding


- Encoding translates the discrete set of sample values into a form of signal better suited for transmission over a line, radio path or optical fibre.
- Each of the discrete events in a code is called a code element or symbol.
- A particular arrangement of symbols used in a code to represent a single value of the discrete set is called a code word or character.
- In a binary code, each symbol may be either of two distinct kinds, such as the presence or absence of a pulse.
M Theerthagiri 29

Regeneration

Regenerative Repeater

To control the effect of noise and distortion while passing through a channel Three functions of the regenerative repeater


equalizing, timing and decision making


M Theerthagiri 30

Regeneration


- Equalizer: shapes the received pulses to compensate for the amplitude and phase impairments introduced by the channel.
- Timing circuit: provides periodic clock pulses for sampling the received and equalized pulses.
- Decision-making device: at each bit interval, decides whether a pulse is present (i.e. exceeds a predetermined voltage level) or not, and accordingly transmits a new 1 or 0.





Regeneration


The regenerated signal departs from the original signal for two main reasons:
- Channel noise and interference occasionally cause the repeater to make wrong decisions; a wrong decision is a bit error.
- The spacing between pulses deviates from its assigned value, causing jitter in the regenerated pulse position and thereby distortion.
M Theerthagiri 33

Decoding
The receiver reshapes and cleans up the received pulses  These clean pulses are regrouped into code words and decoded or mapped back into a PAM signal


M Theerthagiri

34

Reconstruction


Decoder output is passed through a low-pass reconstruction filter whose cut off frequency = message bandwidth

M Theerthagiri

35

Multiplexing


Different message sources are multiplexed by time division

M Theerthagiri

36

Synchronization


- Timing operations at the receiver must closely follow the corresponding operations at the transmitter.
- A local clock at the receiver keeps the same time as the distant transmitter clock.
- A synchronization pulse or frame is transmitted along with the code elements.

M Theerthagiri

37

Channel Noise and Error probability




The performance of a PCM system is influenced by two noise sources:
- channel noise, which may be introduced anywhere along the channel path
- quantizing noise, which is introduced in the transmitter and is carried along to the receiver output

M Theerthagiri

38

Channel Noise


- The effect of transmission noise is to introduce transmission errors: symbol 0 is occasionally mistaken for 1, and vice versa.
- The fidelity (reliability) of information transmission by PCM in the presence of channel noise is measured in terms of the error rate, or probability of error.
M Theerthagiri 39

Additive White Gaussian Noise


 

 

A basic and generally accepted model for thermal noise in communication channels rests on the following assumptions:
- the noise is additive: the received signal equals the transmitted signal plus noise that is statistically independent of the signal
- the noise is white: its power spectral density is flat, so the autocorrelation of the noise in the time domain is zero for any nonzero time offset
- the noise samples have a Gaussian distribution
- usually the channel is also assumed to be linear and time-invariant, and the most basic results further assume it is frequency non-selective

M Theerthagiri

40


The Basic SNR Parameter for Digital Communication Systems


In digital communications, we more often use Eb/N0, a normalized version of SNR, as a figure of merit.
Eb/N0 = (S·Tb)/N0 = (S/Rb)/(N/W) = (S/N)·(W/R)

where Eb = bit energy, S = signal power, Tb = bit time, Rb = R = bit rate, N0 = noise power spectral density, N = noise power, and W = bandwidth.
M Theerthagiri 42
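As a quick, illustrative evaluation of this figure of merit (all numbers below are assumed, not taken from the slides):

```python
# Minimal sketch (illustrative numbers): evaluate Eb/N0 = (S/N)*(W/R)
# both as a ratio and in dB.
import math

S_over_N_dB = 10.0          # assumed received SNR in dB
W = 4.0e3                   # assumed bandwidth in Hz
R = 8.0e3                   # assumed bit rate in bit/s

S_over_N = 10 ** (S_over_N_dB / 10)
EbN0 = S_over_N * (W / R)                # Eb/N0 = (S/N) * (W/R)
EbN0_dB = 10 * math.log10(EbN0)
print(f"Eb/N0 = {EbN0:.2f} ({EbN0_dB:.2f} dB)")
```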

Nonuniform quantization Robust quantization




As in speech transmission, the same quantizer has to accommodate input signals with widely varying power levels. A nonuniform quantizer for which the SNR remains constant over a wide range of input power levels is called robust
M Theerthagiri 43

What is meant by non-uniform quantization?

- The step size is not uniform.
- A non-uniform quantizer is characterized by a step size that increases as the separation from the origin of the transfer characteristic increases.
- Non-uniform quantization is also referred to as robust quantization.

M Theerthagiri

44

Nonuniform quantization


In the case of uniform quantization levels, the quantization noise power depends only on the spacing between the levels, and is independent of the actual signal level at any instant. The SNR decreases with a decrease in the input power level relative to the maximum range of the quantizer, which is undesirable in many applications. For example, in a speech system a fixed quantization noise power will be more objectionable when a quiet speaker is speaking than when a loud one is.

M Theerthagiri

45

Nonuniform quantization


A remedy is to use nonuniform quantization levels, which can be achieved by using a nonuniform quantizer.

[Figure: a nonuniform quantizer with output levels 0 through 7.]
M Theerthagiri 46


Nonuniform quantization Probability density function




A uniform quantizer makes sense when the probability distribution of the signal in the range -Vmax to Vmax is uniform. If we have reason to believe that the distribution is nonuniform, and we know what the actual distribution is, then we can place nonuniform quantization levels in an optimal manner
M Theerthagiri 48

Nonuniform quantization Probability density function




Recall from the discussion on information theory that the entropy is maximized if the probability of occurrence of each level is equal. Therefore choose the quantization levels such that the probabilities of occurrence in each level are equal.

[Figure: probability density function p(x), partitioned at points a, b, c, d so that each interval has equal probability.]
49 M Theerthagiri

Nonuniform quantization Companding




More often, nonuniform quantization is achieved by first distorting the original signal with a nonlinear compressor characteristic, and then using a uniform quantizer on the result.

[Figure: compressor characteristic mapping uniform output levels a, 2a, 3a, 4a onto nonuniform input levels over the range −4a to +4a.]
50

Nonuniform quantization Companding




A given signal change at small magnitudes will then carry the uniform quantizer through more steps than the same change at large magnitudes. At the receiver, an inverse compression characteristic (or expansion) is applied, so that the overall transmission is not distorted. The processing pair (compression and expansion) is usually referred to as companding.
M Theerthagiri 51

Nonuniform quantization: μ-law compander

- The μ-law compander is characterized by Vout = log(1 + μ·Vin) / log(1 + μ), with Vin and Vout normalized to the range 0 to 1.
- μ-law companding is used for PCM telephone systems in the USA, Canada and Japan, with the standard value μ = 255.

[Figure: Vout versus Vin for μ = 1, 10, 100, 255 and 1000.]

M Theerthagiri

52

Nonuniform quantization: A-law compander

- The A-law compander is characterized by
  Vout = A·Vin / (1 + ln A)            for 0 ≤ Vin ≤ 1/A
  Vout = (1 + ln(A·Vin)) / (1 + ln A)  for 1/A ≤ Vin ≤ 1
- A-law companding is used for PCM telephone systems in Europe, with A = 87.6.

[Figure: Vout versus Vin for A = 1, 10, 87.6, 100 and 1000.]

M Theerthagiri

53

Non-uniform quantization


For a non-uniform quantizer, the quantization error power is related to the quantizer s input distribution, since it has smaller quantization step for small input and larger quantization step for large input. In most cases the quantizer input has a distribution similar to Normal distribution, which means using a non-uniform quantizer will lead to smaller quantization error power.
M Theerthagiri 54

UNIFORM QUANTIZER

Variance: σQ² = Δ²/12

Features:
- the variance expression is valid only if the input signal does not overload the quantizer
- SNR decreases with a decrease in the input power level

ROBUST QUANTIZER
A quantizer whose SNR remains essentially constant over a wide range of input power levels, i.e. a non-uniform quantizer.
56

Non Uniform Quantizer


 

Variable Step-Size. Smaller amplitude - Smaller Step Size. Larger amplitude - Large Step size

57

Non- Uniform Quantizer MODEL


Input → Compressor → Uniform Quantizer → Expander → Output

Compander = Compressor + Expander


58

[Figures: compressor characteristic (compressor output versus compressor input) and expander characteristic (expander output versus expander input).]

Quantization Error-1
Transfer characteristics: compressor c(x), expander c⁻¹(x); the expander undoes the compressor, c⁻¹(c(x)) = x.

Quantization Error-2
Compressor characteristic (for large L):
dc(x)/dx = 2·xmax / (L·Δk),  k = 0, 1, ..., L−1
where Δk is the width of the interval Ik.

Quantization Error-3
Let fX(x) = PDF of X . Assumptions:  fX(x) is Symmetric  fX(x) is approximately constant in each interval. ie.. fX(x) = fX(yk)

63

Quantization Error-4
fX(x) ≈ fX(yk) within each interval
Δk = x(k+1) − xk,  for k = 0, 1, ..., L−1
pk = P(xk < X ≤ x(k+1)) = ∫ fX(x) dx over Ik ≈ fX(yk)·Δk
Σ pk = 1  (sum over k = 0, ..., L−1)

Quantization Error-5
Q = X − yk  for xk < X ≤ x(k+1)

Variance: σQ² = E(Q²) = E[(X − yk)²]
σQ² = Σk ∫ over Ik (x − yk)² fX(x) dx ≈ Σk (pk/Δk) ∫ over Ik (x − yk)² dx

Carrying out the integration with respect to x:
σQ² = (1/12) Σk pk·Δk²
where Δk²/12 is the variance of the error in the interval Ik.

For a uniform quantizer Δk = Δ for all k, giving σQ² = Δ²/12.

Types of Companding
1. μ-law (US, Canada and Japan)
2. A-law (Europe)

67

μ-law
μ = 255 reduces the noise power in speech by about 20 dB.

68

μ-law companding

|c(x)| / xmax = ln(1 + μ|x|/xmax) / ln(1 + μ),   0 ≤ |x|/xmax ≤ 1

μ = 255 is the practical value.

69
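A small sketch of the compression and its inverse (standard formulas; the helper names mu_compress / mu_expand are ours, not a library API):

```python
# Minimal sketch: mu-law compression c(x) and its inverse (expansion),
# applied to normalized samples in [-1, 1] with mu = 255.
import numpy as np

MU = 255.0

def mu_compress(x):
    """c(x) = sign(x) * ln(1 + mu*|x|) / ln(1 + mu), for |x| <= 1."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """Inverse of mu_compress."""
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU

x = np.linspace(-1, 1, 5)
y = mu_compress(x)
print(np.allclose(mu_expand(y), x))   # True: expansion undoes compression
```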

A-law
[Figure: A-law compressor characteristic, compressor output versus normalized input.]

A-law companding:
|c(x)| / xmax = A(|x|/xmax) / (1 + ln A),            0 ≤ |x|/xmax ≤ 1/A
|c(x)| / xmax = (1 + ln(A|x|/xmax)) / (1 + ln A),    1/A ≤ |x|/xmax ≤ 1

The practical value is A = 87.6.
71

Companding Gain Gc

The companding gain is defined as Gc = dc(x)/dx evaluated as x → 0.

For the μ-law: Gc = μ / ln(1 + μ).
72
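Plugging in the practical value gives the following worked check (not on the slide):

```latex
G_c \;=\; \frac{\mu}{\ln(1+\mu)} \;=\; \frac{255}{\ln 256} \;\approx\; \frac{255}{5.545} \;\approx\; 46,
\qquad 20\log_{10} G_c \;\approx\; 33\ \text{dB (small-signal gain)}.
```

The roughly 20 dB noise-power reduction quoted on the earlier slide is an average figure for speech-like signals, which is smaller than this small-signal limit.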

Advantages of Non Uniform Quantizer


 

Reduced Quantization noise High average SNR

73


DPCM - Transmitter

76

DPCM - Receiver

77


Voice Compression Technologies


[Chart: perceived voice quality versus bandwidth (kbps). PCM (G.711) at 64 kbps gives toll quality; ADPCM (G.726) at 32, 24 and 16 kbps; LD-CELP (G.728) at 16 kbps; CS-ACELP (G.729) at 8 kbps; LPC at 4.8 kbps and cellular codecs trade bandwidth for quality, ranging from business quality down toward unacceptable.]
M Theerthagiri 79

Bandwidth Requirements
Voice-band traffic: encoding/compression schemes and resulting bit rates

G.711 PCM (A-law / μ-law)   64 kbps (DS0)
G.726 ADPCM                 16, 24, 32, 40 kbps
G.729 CS-ACELP              8 kbps
G.728 LD-CELP               16 kbps
G.723.1 CELP                6.3 / 5.3 kbps
M Theerthagiri 80

Voice Compression ADPCM




Adaptive Differential Pulse Code Modulation

- a waveform coding scheme
- adaptive: automatic companding
- differential: encodes only the changes between samples
- rates and bits per sample (8,000 samples/s):
  32 kbps = 8 k samples/s × 4 bits/sample
  24 kbps = 8 k samples/s × 3 bits/sample
  16 kbps = 8 k samples/s × 2 bits/sample

M Theerthagiri

81

Speech Coding Schemes Speech Coding Schemes [1],[2]

M Theerthagiri

82

Main Attributes of Speech Coders

- Bit rate: the number of bits per second (bps) required to encode the speech into a data stream.
- Subjective quality: the perceived quality of the reconstructed speech at the receiver. It may not necessarily correlate with objective measures such as the signal-to-noise ratio. Subjective quality may be further subdivided into intelligibility (the ability of the spoken word to be understood) and naturalness (the "human-like" rather than "robotic" or "metallic" character of many current low-rate coders).
- Complexity: computational complexity is still an issue despite the availability of ever-increasing processing power. Invariably, coders that reduce the bit rate require greater algorithmic complexity, often by several orders of magnitude.
- Memory: memory storage requirements are also related to the algorithmic complexity. Template-based coders require large amounts of fast memory to store algorithm coefficients and waveform prototypes.

M Theerthagiri

83

Main Attributes of Speech Coders

- Delay: some processing delay is inevitable in a speech coder, due not only to the algorithmic complexity (and hence computation time) but also to the buffering requirements of the algorithm. For real-time speech coders, the coding delay must be minimized to achieve acceptable performance.
- Error sensitivity: high-complexity coders, which leverage more complex algorithms to achieve lower bit rates, often produce bit streams that are more susceptible to channel or storage errors. This may manifest itself as noise bursts or other artifacts.
- Bandwidth: the frequency range which the coder is able to faithfully reproduce. Telephony applications are usually able to accept a lower bandwidth, with the possibility of compromising speech intelligibility.

M Theerthagiri

84


Differential PCM


- A PCM technique that codes the difference between sample points to compress the digital data.
- It is more efficient because audio waveforms vary in predictable patterns: DPCM predicts the next sample and codes the difference between the prediction and the actual value.
- Since the differences between samples are expected to be smaller than the actual sampled amplitudes, fewer bits are required to represent them.
M Theerthagiri 86

DPCM


For example, if X(k) extends over the interval VH − VL and, using PCM, X(k) is encoded with 2⁸ = 256 levels, then the step size is S = (VH − VL)/2⁸, i.e. VH − VL = 256·S. If, however, the difference signal X(k) − X(k−1) extends only over ±2S, the quantized levels needed are at ±0.5S and ±1.5S; there are only 4 levels, so two bits are adequate.
M Theerthagiri 87

Differential PCM


- DPCM takes advantage of the high correlation between samples by encoding the difference between samples rather than the absolute sample value.
- It can reduce the bit rate (by about 25%) by using prediction based on previous samples, sending only the difference between the predicted and actual values, e.g. 4 bits per sample.
- Over time, the error between the decoded signal and the differentially encoded signal increases, so periodically a full sample is sent rather than the difference.
M Theerthagiri 88

Differential PCM
 

An extension of pulse code modulation which differentially encodes the data to increase transmission efficiency Differential PCM (DPCM) is used in many image and video compression algorithms, including JPEG. The principle behind differential pulse code modulation is that the source data is likely to be an analogue signal, which is likely to change in amplitude quite gradually; there are unlikely to be any large jumps in amplitude over a short time. Therefore, the signal can be efficiently represented by an initial value, and incremental deltas against this value thereafter. Since these differences are likely to be small, fewer bits may be used to encode such a signal, and therefore throughput may be increased.
M Theerthagiri 89


Differential PCM
For the given input signal the sampled values are 1, 2, 4, 5, 6, 9, 7, 4, 3, 0, 2, 3, 5, 6. Encoded using standard pulse code modulation, this data set would require ceil(log2(9)) = 4 bits per sample. Notice, however, that the delta between two samples is never less than −3 or greater than +3. This gives a range of 7 values, which can be encoded in ceil(log2(7)) = 3 bits per sample. If the encoding scheme used were differential pulse code modulation, the output would be the sequence of sample-to-sample differences (reproduced in the sketch below).

M Theerthagiri

91
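The slide's figures can be reproduced with a few lines of code (the sample values are the slide's own; the bit counts are a direct transcription of the ceil(log2(·)) reasoning above):

```python
# Minimal sketch: encode the example data set as successive differences
# and count the bits needed per value for PCM versus DPCM.
import math

samples = [1, 2, 4, 5, 6, 9, 7, 4, 3, 0, 2, 3, 5, 6]

deltas = [b - a for a, b in zip(samples, samples[1:])]
print("differences:", deltas)                        # all within -3 .. +3

pcm_bits = math.ceil(math.log2(max(samples) + 1))    # plain PCM: 4 bits/sample
dpcm_bits = math.ceil(math.log2(max(deltas) - min(deltas) + 1))   # 3 bits
print(f"PCM needs {pcm_bits} bits/sample, DPCM needs {dpcm_bits} bits/difference")
```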

Differential PCM


At the time of the PCM process, the differences between input sample signals are minimal. Differential PCM (DPCM) is designed to calculate this difference and then transmit this small difference signal instead of the entire input sample signal. Since the difference between input samples is less than an entire input sample, the number of bits required for transmission is reduced. This allows for a reduction in the throughput required to transmit voice signals. Using DPCM can reduce the bit rate of voice transmission down to 48 kbps.

M Theerthagiri

92

Differential PCM : process




 

1. The input signal is sampled at a constant sampling frequency (twice the input frequency).
2. The samples are modulated using the PAM process; at this point, the DPCM process takes over.
3. The sampled input signal is stored in what is called a predictor.
4. The predictor takes the stored sample signal and sends it through a differentiator.
M Theerthagiri 93

Differential PCM : process




5. The differentiator compares the previous sample signal with the current sample signal and sends this difference to the quantizing and coding phase of PCM.
6. After quantizing and coding, the difference signal is transmitted to its final destination.
7. At the receiving end of the network, everything is reversed.
M Theerthagiri 94

Differential PCM : process




8. First the difference signal is dequantized.
9. Then this difference signal is added to a sample signal stored in a predictor.
10. The resulting signal is sent to a low-pass filter that reconstructs the original input signal.
M Theerthagiri 95

Differential PCM System


[Block diagram: DPCM transmitter and receiver. In the transmitter, the prediction error e(nTs) = x(nTs) − xp(nTs) is quantized by Q(.) to give v(nTs), which is encoded into b(nTs); the quantizer output is added to the predicted value xp(nTs) (formed from the previous sample) to give the predictor input u(nTs) = x(nTs) + q(nTs). The receiver decodes b(nTs) and uses the same predictor loop for reconstruction.]
M Theerthagiri 96

Differential PCM System




- The baseband signal x(t) is sampled at fs = 1/Ts to produce a sequence of correlated samples Ts seconds apart, denoted {x(nTs)}.
- Quantizer input: e(nTs) = x(nTs) − xp(nTs), where x(nTs) is the unquantized sample and xp(nTs) is its predicted value produced by a predictor.
- e(nTs) is called the prediction error: the amount by which the predictor fails to predict the input exactly.

Differential PCM


- Let the quantizer input-output characteristic be defined by the nonlinear function Q(.).
- Quantizer output: v(nTs) = Q{e(nTs)} = e(nTs) + q(nTs), where q(nTs) is the quantization error.
- The quantizer output v(nTs) is added to the predicted value xp(nTs) to produce the predictor input u(nTs) = xp(nTs) + v(nTs).


M Theerthagiri

98

Differential PCM


u(nTs) = xp(nTs) + e(nTs) + q(nTs) = xi(nTs) + q(nTs) Irrespective of the properties of the predictor, the quantized signal u(nTs) differs from the original input signal by the quantization error Output at the receiver, differs from the original input only by the quantization error incurred as a result of quantizing the prediction error
M Theerthagiri 99
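A minimal sketch of this feedback structure, simplified to a first-order "previous reconstructed sample" predictor and a uniform error quantizer (an illustrative assumption, not the exact predictor of the block diagram):

```python
# Minimal sketch: DPCM with the simplest predictor (previous reconstructed
# sample) and a uniform quantizer applied to the prediction error.
import numpy as np

def dpcm_encode(x, step):
    codes, pred = [], 0.0
    for sample in x:
        e = sample - pred                    # prediction error e(nTs)
        code = int(np.round(e / step))       # quantize the error
        codes.append(code)
        pred = pred + code * step            # u(nTs) = x_p + quantized error
    return codes

def dpcm_decode(codes, step):
    out, pred = [], 0.0
    for code in codes:
        pred = pred + code * step            # same accumulator as the encoder
        out.append(pred)
    return np.array(out)

x = np.sin(2 * np.pi * np.arange(64) / 32)
step = 0.05
x_hat = dpcm_decode(dpcm_encode(x, step), step)
print("max reconstruction error:", np.abs(x_hat - x).max())   # about step/2
```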


Differential PCM

- The output signal-to-quantization-noise ratio is defined as (SNR)O = σx² / σQ², where σx² is the variance of the original input signal and σQ² is the variance of the quantization error.
- We can rewrite this as (SNR)O = (σx²/σE²)·(σE²/σQ²) = GP · (SNR)P, where σE² is the variance of the prediction error, (SNR)P = σE²/σQ² is the prediction-error-to-quantization-noise ratio, and GP = σx²/σE² is the prediction gain produced by the differential quantization scheme.

M Theerthagiri 103

Delta Modulation is the one bit ( or two level) version of (DPCM) differential pulse code modulation.

M Theerthagiri

104

Delta Modulation


- The analog signal is approximated by a series of segments (a staircase).
- Each segment of the approximation is compared with the original analog wave to determine whether the relative amplitude should increase or decrease.
- The decision process for establishing the state of successive bits is determined by this comparison.
M Theerthagiri 105

Delta Modulation


- Only the change of information is sent, i.e. only an increase or decrease of the signal amplitude relative to the previous sample; a no-change condition causes the modulated signal to remain at the same 0 or 1 state as the previous sample.
- Unique features: a one-bit codeword for the output eliminates the need for word framing, and the transmitter and receiver designs are simple.
M Theerthagiri 106

Delta Modulation

[Figure: input signal x(t) and its staircase approximation u(t), built up one step per sampling period.]

M Theerthagiri

107


Delta Modulation


- The difference between the input and the approximation is quantized into only two levels, ±δ.
- If the approximation falls below (above) the signal at the beginning of a sampling period, it is increased (decreased) by δ.
- Provided the signal does not vary too rapidly between successive samples, the staircase approximation remains within ±δ of the input signal.
M Theerthagiri 110

Delta Modulation
  

- The step size Δ of the quantizer is Δ = 2δ.
- Prediction error: e(nTs) = x(nTs) − xp(nTs) = x(nTs) − u(nTs − Ts).
- Binary quantity: b(nTs) = δ·sgn[e(nTs)]; except for the scaling factor δ, b(nTs) is the algebraic sign of the error.
- b(nTs) is the one-bit word transmitted by the DM system (a code sketch follows below).

M Theerthagiri

111
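A minimal sketch of a linear delta modulator and its accumulator-style demodulator (parameter values are assumed; a real receiver would follow the accumulator with a low-pass filter):

```python
# Minimal sketch (assumed parameters): linear delta modulation. Each sample
# is coded with one bit; the staircase rises or falls by delta each period.
import numpy as np

def dm_encode(x, delta):
    bits, approx = [], 0.0
    for sample in x:
        bit = 1 if sample >= approx else 0   # sign of the prediction error
        approx += delta if bit else -delta   # staircase approximation u(nTs)
        bits.append(bit)
    return bits

def dm_decode(bits, delta):
    out, approx = [], 0.0
    for bit in bits:
        approx += delta if bit else -delta
        out.append(approx)
    return np.array(out)                     # then low-pass filter in practice

fs, f = 8000, 100                            # sampling rate and tone frequency
t = np.arange(0, 0.02, 1 / fs)
x = np.sin(2 * np.pi * f * t)
y = dm_decode(dm_encode(x, delta=0.1), delta=0.1)
print("mean |error|:", np.abs(y - x).mean())
```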

Delta Modulation

[Block diagrams: DM transmitter, in which e(nTs) = x(nTs) − xp(nTs) is hard-limited to b(nTs) and accumulated through a one-sample (Ts) delay to form u(nTs); and DM receiver, in which the received bits are accumulated through a Ts delay and passed through a low-pass filter that removes out-of-band quantization noise.]
M Theerthagiri 112

Delta Modulation


- Quantization noise: DM systems are subject to two types of quantization error, slope overload distortion and granular noise.
- Slope overload distortion: arises when the step size δ is too small to follow portions of the waveform that have a steep slope; it can be reduced by increasing the step size.
- Granular noise: results from a step size that is too large in parts of the waveform having a small slope; it can be reduced by decreasing the step size.

M Theerthagiri

113
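A standard way to quantify the slope-overload trade-off (a textbook condition, not stated explicitly on these slides): the staircase must be able to climb at least as fast as the input, i.e.

```latex
\frac{\delta}{T_s} \;\ge\; \max_t \left|\frac{dm(t)}{dt}\right|
\qquad\Longrightarrow\qquad
\delta \;\ge\; \frac{2\pi f A}{f_s}\quad\text{for } m(t) = A\sin(2\pi f t).
```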


Delta Modulation - example

M Theerthagiri

117


Define adaptive delta modulation




- The performance of a delta modulator can be improved significantly by making the step size of the modulator assume a time-varying form.
- In particular, during a steep segment of the input signal the step size is increased; conversely, when the input signal is varying slowly, the step size is reduced.
- In this way, the step size adapts to the level of the signal. The resulting method is called adaptive delta modulation (ADM).
M Theerthagiri 120


Adaptive Delta Modulation


  

- improved performance over DM
- the step size of the modulator is varied, adapting it to the input signal level
- during a steep segment of the input signal, the step size is increased
- when the input signal is varying slowly, the step size is reduced
M Theerthagiri 122
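One common adaptation rule, sketched below, multiplies the step size by a factor K when two successive output bits agree (a steep segment) and divides by K when they differ; the rule and its constants are assumptions for illustration, since the slides do not specify a particular algorithm:

```python
# Minimal sketch of adaptive delta modulation with a simple step-size rule.
import numpy as np

def adm_encode(x, d_min=0.01, d_max=1.0, K=1.5):
    bits, approx, delta, prev_bit = [], 0.0, d_min, 1
    for sample in x:
        bit = 1 if sample >= approx else 0
        delta = delta * K if bit == prev_bit else delta / K   # adapt step size
        delta = float(np.clip(delta, d_min, d_max))
        approx += delta if bit else -delta
        bits.append(bit)
        prev_bit = bit
    return bits

t = np.arange(0, 0.02, 1 / 8000)
x = 0.8 * np.sin(2 * np.pi * 300 * t)    # steeper tone than plain DM tracks well
bits = adm_encode(x)
print(len(bits), "one-bit samples produced")
```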

Adaptive Delta Modulation

[Block diagram: a DM transmitter with an added logic block for step-size control acting on x(nTs), with a one-sample (Ts) delay in the feedback path.]

M Theerthagiri

123

Coding speech at low bit rates


 

- Standard PCM operates at 64 kbps.
- Conservation of bandwidth and low bit rates are needed, e.g. to facilitate secure transmission over low-capacity radio channels.
- Speech can be coded at low bit rates, possibly as low as 2 kbps, without compromising acceptable fidelity.
- However, increased processing complexity and processing delays are associated with this.
M Theerthagiri 124

Coding speech at low bit rates




Design philosophy for a waveform coder for speech at low bit rates :


To remove redundancies from the speech signal as far as possible To assign the available bits to code the non-redundant parts in an efficient way

Algorithms for redundancy removal and bit assignment become increasingly complex as bit rate is reduced
M Theerthagiri 125

Coding speech at low bit rates


 

Rule of thumb: computational complexity (measured in terms of multiply-add operations) increases by an order of magnitude for every halving of the bit rate in the 64 to 8 kbps range.
M Theerthagiri 126

Define ADPCM.


It means adaptive differential pulse code modulation, a combination of adaptive quantization and adaptive prediction. Adaptive quantization refers to a quantizer that operates with a time varying step size. The autocorrelation function and power spectral density of speech signals are time varying functions of the respective variables. Predictors for such input should be time varying. So adaptive predictors are used.
M Theerthagiri 127

Coding speech at low bit rates by ADPCM




- Adaptive differential PCM (which achieves 32 kbps) is a widely used variation of PCM.
- Like differential PCM (DPCM), it codes the difference between sample points, but it also dynamically switches the coding scale to compensate for variations in amplitude and frequency.
- It uses an adaptive predictor for the differences between samples.
- How does ADPCM adapt these quantization levels?
M Theerthagiri

128

Coding speech at low bit rates by ADPCM




 

- ADPCM adapts the quantization step size to the magnitude of the difference signal: the step size is decreased when the difference signal is small and increased when it is large.
- This generates an SNR that is roughly uniform throughout the dynamic range of the difference signal.
M Theerthagiri 129

Coding speech at low bit rates by ADPCM




- ADPCM is a digital coding scheme that uses both adaptive quantization and adaptive prediction.
- Adaptive quantization: continuously estimating the variance of the input signal and adjusting the step size accordingly.
- Adaptive prediction: estimating the input signal from the quantized difference signal.


Coding speech at low bit rates:Adaptive quantization


 

- The quantizer operates with a time-varying step size Δ(nTs), where Ts is the sampling period.
- The step size Δ(nTs) is varied to match the variance σx² of the input signal x(nTs).
- σx(nTs) is the standard deviation, which varies with time; an estimate of it, denoted σ̂x(nTs), is what the coder actually works with.
- Adaptive quantization estimates σx(nTs) continuously.

M Theerthagiri

131

Coding speech at low bit rates:Adaptive quantization


 

Two methods:
- AQF (adaptive quantization with forward estimation): derive forward estimates of σx(nTs) using the unquantized samples of x(nTs).
- AQB (adaptive quantization with backward estimation): derive backward estimates of σx(nTs) using the quantized samples.

M Theerthagiri

132

Coding speech at low bit rates:Adaptive quantization


 

- AQF: the unquantized samples of the speech signal are buffered and released after the estimate σ̂x(nTs) has been obtained.
- Since the estimate is made on unquantized samples, the step size Δ(nTs) is independent of quantizing noise, which makes it more reliable than the quantized (backward) case.
M Theerthagiri 133

Coding speech at low bit rates:Adaptive quantization




- AQF requires transmission of level information (typically 5 to 6 bits per step-size sample) to the remote decoder of the receiver, which adds overhead and processing delay.
- AQB avoids the problems of level transmission, buffering and delay, and is therefore more popular in practice than AQF.

M Theerthagiri

134

Coding speech at low bit rates AQB

[Block diagram: AQB uses the recent history of the quantizer output to extract the information needed to compute the step size Δ(nTs).]
M Theerthagiri 135

What is meant by forward and backward estimation?




- AQF: adaptive quantization with forward estimation; unquantized samples of the input signal are used to derive the forward estimates.
- AQB: adaptive quantization with backward estimation; samples of the quantizer output are used to derive the backward estimates.
- APF: adaptive prediction with forward estimation; unquantized samples of the input signal are used to derive the forward estimates of the predictor coefficients.
- APB: adaptive prediction with backward estimation; samples of the quantizer output and the prediction error are used to derive estimates of the predictor coefficients.
M Theerthagiri 136

Coding speech at low bit rates:Adaptive prediction


 

Two methods:
- APF (adaptive prediction with forward estimation): derive forward estimates of the predictor coefficients using the unquantized samples of x(nTs).
- APB (adaptive prediction with backward estimation): derive backward estimates of the predictor coefficients using the quantized samples.

M Theerthagiri

137

Coding speech at low bit rates: APF

[Block diagram: the input samples are buffered, the predictor coefficients are calculated from the buffer contents, and the coefficients are also transmitted over the channel.]

M Theerthagiri

138

Coding speech at low bit rates: APB

[Block diagram: APB computes the predictor coefficients from the quantizer output u(nTs) and the reconstructed estimate, i.e. from signals available at both transmitter and receiver.]

M Theerthagiri

139

Subband Coding


In sub-band coding (SBC), the speech signal is filtered into a number of subbands and each subband is adaptively encoded. The number of bits used in the encoding process differs for each subband signal with bits assigned to quantizers according to a perceptual criteria. By encoding each subband individually, the quantization noise is confined within its subband. The output bit streams from each encoder are multiplexed and transmitted. At the receiver demultiplexing is performed, followed by decoding of each subband data signal. The sampled subband signals are then combined to yield the recovered speech.
M Theerthagiri 140

Subband Coding


Note that down sampling of subband signals must occur at the output of the subband filters to avoid over sampling. The down sampling ratio is given by the ratio of original speech bandwidth to subband bandwidth. Conventional filters cannot be used for the production of subband signals because of the finite width of the band-pass transition bands. If the bandpass filters overlap in the frequency domain, subsampling causes aliasing which destroys the harmonic structure of voiced sounds and results in unpleasant perceptual effects. If the bandpass filters don't overlap, the speech signal cannot be perfectly reconstructed because the gaps between the channels introduce an audible echo. Quadrature mirror filter (QMF) banks [32] overcome this problem and enable perfect reconstruction of the speech signal.

M Theerthagiri

141

Adaptive Subband Coding




It is a frequency-domain coder in which the speech signal is divided into a number of sub-bands and each one is coded separately. It uses the noise-masking phenomenon of human perception to obtain better speech quality. Noise shaping is done by the adaptive bit assignment.

M Theerthagiri

142

Coding speech at low bit rates


 

 

 

- Adaptive sub-band coding (ASBC)
- PCM and ADPCM operate in the time domain; ASBC is a frequency-domain coder
- the speech signal is divided into a number of sub-bands, and each sub-band is encoded separately
- capable of achieving 16 kbps with quality comparable to 64 kbps PCM
M Theerthagiri 143

Coding speech at low bit rates: (ASBC)




ASBC uses the following characteristics of speech and the hearing mechanism to advantage:
- the quasi-periodic nature of voiced speech
- the noise-masking property of the hearing mechanism

Quasi-periodic nature: people speak with a characteristic pitch frequency. This permits reliable prediction of the pitch, a reduction in the prediction error, and a reduction in the number of bits per sample to be transmitted.
M Theerthagiri 144

Coding speech at low bit rates: (ASBC)




Noise-masking phenomenon: the human ear does not perceive noise in a frequency band if the noise is about 15 dB below the signal level in that band.
- A relatively large coding error can therefore be tolerated near formants, so the coding rate can be reduced.
- A formant is a peak in an acoustic frequency spectrum resulting from the resonant frequencies of an acoustical system; the term is most commonly used in phonetics or acoustics for the resonant frequencies of the vocal tract.
M Theerthagiri 145

Coding speech at low bit rates: (ASBC)




The number of bits used to encode each sub-band is varied dynamically, called adaptive bit assignment The no. of bits is shared with other bands, as necessary, depending on the encoding accuracy to be achieved for each sub-band
M Theerthagiri 146

Coding speech at low bit rates: (ASBC)




Examples:
- a low-frequency-dominated signal may use the bit assignment 5, 2, 1, 0
- a high-frequency-dominated signal may use the bit assignment 1, 1, 3,
- sub-bands with little or no energy content may not have to be encoded at all

Quantizing noise within any sub-band is limited to that sub-band, so low-level speech in one sub-band cannot be hidden by quantizing noise from another sub-band.
M Theerthagiri 147

Coding speech at low bit rates: (ASBC)


 

Steps:
1. The speech band is divided into a number of contiguous bands using a filter bank of band-pass filters (BPFs), typically 4 to 8.
2. The output of each BPF is translated in frequency to a low-pass form by a modulation process.
3. The sub-band signals are sampled at a rate slightly higher than the relevant Nyquist rate.

M Theerthagiri

148

Coding speech at low bit rates: (ASBC)


 

Steps (continued):
4. The samples are digitally encoded using ADPCM; each sub-band is encoded based on its spectral content.
5. The encoded samples are multiplexed and transmitted.
6. Bit-assignment information is also transmitted to enable the receiver to decode the sub-bands individually.

M Theerthagiri

149

Coding speech at low bit rates: (ASBC)


 

Steps (continued):
7. The decoded sub-bands are converted at the receiver back to their original locations in the frequency band.
8. The frequency-retranslated sub-bands are summed to produce a close replica of the original signal.

M Theerthagiri

150

Coding speech at low bit rates: (ASBC)




   

- fs = sampling rate of the original (full-band) signal
- N = average number of bits used to encode a sample of the signal
- M = number of sub-bands
- Bit rate = N × fs bits per second; equivalently N·fs = (M·N) × (fs / M), i.e. (total number of bits per sample across the sub-bands) × (sampling rate per sub-band)

M Theerthagiri

151

Coding speech at low bit rates: (ASBC)


  

 

Example:
- number of sub-bands M = 4
- sampling rate of the original signal fs = 8 kHz
- average number of bits per sample = 2
- sampling rate for each sub-band = 2 kHz
- total number of bits per sample across the sub-bands = 8

M Theerthagiri

152
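A quick arithmetic check of this example, using the numbers on the slide:

```python
# Verify that both forms of the bit-rate expression give the same answer.
M = 4                  # number of sub-bands
fs = 8_000             # sampling rate of the original full-band signal (Hz)
N = 2                  # average number of bits per sample

bit_rate = N * fs                        # 16_000 bit/s = 16 kbps
print(bit_rate)
print((M * N) * (fs // M) == bit_rate)   # True: (M*N) * (fs/M) is the same rate
```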

Coding speech at low bit rates


 

Subjective quality: Mean Opinion Score (MOS)
- In multimedia (audio, voice telephony or video), especially when compression techniques are used, the MOS (more realistic than SNR) provides a numerical indication of the perceived quality of the received media after compression and/or transmission.
- An MOS is obtained by conducting formal tests on human subjects.
M Theerthagiri 153

Coding speech at low bit rates


 

Subjective quality : Mean Opinion Score (MOS) the MOS is generated by averaging the results of a set of standard subjective tests a number of listeners rate the heard audio quality of test sentences read aloud by both male and female speakers over the communications medium being tested the MOS is the arithmetic mean of all the individual scores can range from 1 (worst) to 5 (best)

M Theerthagiri

154

Coding speech at low bit rates


Subjective quality: Mean Opinion Score (MOS)

MOS   Quality                 Impairment
5     Excellent / Perfect     Imperceptible
4     Good / High             Perceptible, but not annoying
3     Fair / Communication    Slightly annoying
2     Poor                    Annoying
1     Bad                     Very annoying

Coding speech at low bit rates


 

Subjective quality: Mean Opinion Score (MOS), practical issues
- Using MOS ratings, the 16 kbps ASBC method approaches a rating of 4, very close to the 64 kbps PCM and 32 kbps ADPCM methods.
- Using SNR ratings, 16 kbps ASBC compares poorly with higher-bit-rate PCM: it falls short of 64 kbps PCM and 32 kbps ADPCM, and its quality drops sharply with tandem codings. This is, however, not significant in an all-digital link.
M Theerthagiri 156

Measuring Performance of Speech Coders




The quality of speech output of a speech coder is a function of bit-rate, complexity, delay and bandwidth.

M Theerthagiri

157

Waveform Coding Techniques Applications




Digital Multiplexers


Hierarchy of digital multiplexers, whereby digitized voice, data, video signals are combined into one final data stream That is well suited for use in long-haul telecommunication network
M Theerthagiri 158

Light wave transmission link




Waveform Coding Techniques Applications: Digital Multiplexers

[Conceptual diagram of multiplexing-demultiplexing: computer outputs, digitized voice, digitized fax and TV signals at different rates are combined into a single data stream that operates at a considerably higher bit rate than any of the inputs.]
M Theerthagiri 159

Waveform coding techniques Applications-Digital Multiplexers




Accomplish multiplexing of digital signals by bit-by-bit interleaving procedure




a selector switch that sequentially selects a bit from each incoming line and then applies it to the high speed common line at the receiver, the output from the common line is separated into low-speed individual components and delivered to respective destinations
M Theerthagiri 160

Waveform coding techniques Applications-Digital Multiplexers


 

Two major groups of digital multiplexers are used in practice:
- Low-speed multiplexers: designed to combine relatively low-speed digital signals, up to a maximum of 4800 bps each, into a higher-speed multiplexed signal with a rate of up to 9600 bps; used primarily to transmit data over voice-grade channels, using modems to convert the digital format to analog form.
- High-speed multiplexers: designed to operate at much higher bit rates; they form part of the data transmission service generally provided by communication carrier companies. An example is the T1 carrier system, developed by the Bell System in the United States in the early 1960s for digital voice communication over short-haul distances of 10-50 miles.


Transmission Rates

- North American standard: 64 kbit/s (×24) → 1544 kbit/s (×4) → 6312 kbit/s (×7) → 44736 kbit/s (×6) → 274176 kbit/s
- European standard: 64 kbit/s (×30) → 2048 kbit/s (×4) → 8448 kbit/s (×4) → 34368 kbit/s (×4) → 139264 kbit/s (×4) → 564992 kbit/s
- Japanese standard: 64 kbit/s (×24) → 1544 kbit/s (×4) → 6312 kbit/s (×5) → 32064 kbit/s (×3) → 97728 kbit/s (×4) → 397200 kbit/s

Digital Hierarchy

Multiplexing level (DS)   Voice channels (N. America / Europe / Japan)   Bit rate, Mbit/s (N. America / Europe / Japan)
0                         1 / 1 / 1                                      0.064 / 0.064 / 0.064
1                         24 / 30 / 24                                   1.544 / 2.048 / 1.544
1C                        48 / - / 48                                    3.152 / - / 3.152
2 (4 × DS1)               96 / 120 / 96                                  6.312 / 8.448 / 6.312
3 (7 × DS2)               672 / 480 / 480                                44.736 / 34.368 / 32.064
3C                        1344 / - / -                                   91.053 / - / -
4 (6 × DS3)               4032 / 1920 / 1440                             274.176 / 139.264 / 97.728
5                         - / 7680 / 5760                                - / 565.148 / 397.200

M Theerthagiri

164

Waveform coding techniques Applications-Digital Multiplexers


[Diagram: digital hierarchy of the Bell system. 24 voice signals enter a channel bank (first level) producing DS1/T1 at 1.544 Mbps; four DS1 streams plus digital data are multiplexed (second level) into DS2/T2 at 6.312 Mbps; seven DS2 streams plus DPCM Picturephone are multiplexed (third level) into DS3/T3 at 44.736 Mbps; six DS3 streams plus PCM television are multiplexed (fourth level) into DS4/T4 at 274.176 Mbps.]
M Theerthagiri 165

Digital Trunk

[Diagram: North American hierarchy. 24 DS0 → T1 mux (channel bank) → DS1; 48 DS0 → 1C mux → DS1C; 4 DS1 → T2 mux (M1-2) → DS2; 7 DS2 → T3 mux (M2-3) → DS3 (or 28 DS1 → M1-3 mux → DS3); 6 DS3 → T4 mux (M3-4) → DS4.]

Level   Voice channels   Bit rate
DS0     1                64 kbps
DS1     24               1.544 Mbps
DS1C    48               3.152 Mbps
DS2     96               6.312 Mbps
DS3     672              44.736 Mbps
DS4     4032             274.176 Mbps
M Theerthagiri 166


Waveform Coding Techniques Applications


Digital hierarchy - T1 carrier - Bell system

Level    Type          Input                          Output
First    Channel bank  24 voice signals               T1 (1.544 Mbps)
Second   Multiplexer   4 × T1 + digital data          T2 (6.312 Mbps)
Third    Multiplexer   7 × T2 + DPCM (Picturephone)   T3 (44.736 Mbps)
Fourth   Multiplexer   6 × T3 + PCM (television)      T4 (274.176 Mbps)
169


T1 Carrier System


- The T-carrier system is a hierarchy of digital transmission formats used in North America; the T stands for "Trunk".
- The basic unit of the T-carrier system is the DS-0, which is multiplexed to form transmission formats with higher speeds. There are four of them: T1, T2, T3 and T4.
  - T1 is composed of 24 DS-0s
  - T2 = 4 × T1
  - T3 = 7 × T2
  - T4 = 6 × T3
- Each of the T* units can also be referred to as a DS* unit, i.e. T1 = DS1, T2 = DS2, etc.
- The T-carrier system is quite similar to, and compatible with, the E-carrier system used in Europe, but it has lower capacity since it uses in-band signalling (bit robbing).

M Theerthagiri

172

T1 Carrier System


 

- The T1 carrier system was developed in the United States in the early 1960s for digital voice communication over short-haul distances of 10-50 miles.
- Each channel (user) is first sampled at a rate of 8000 samples per second and quantized using 8-bit companding. 24 voice channels are then combined into a composite signal denoted DS1, giving a total of 192 bits. One bit is added to this total for synchronization purposes; a 1010... sequence, in odd-numbered frames, is used for this purpose.
- There is a total of 193 bits in a frame of duration 1/8000 s = 125 µs. The trunk rate is therefore 193 bits / 125 µs = 1.544 Mbit/s (checked in the sketch below).

M Theerthagiri

173
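The frame arithmetic above can be checked directly:

```python
# Quick arithmetic check of the T1 numbers quoted above.
channels = 24
bits_per_sample = 8
frame_bits = channels * bits_per_sample + 1    # +1 framing bit = 193
frame_rate = 8000                              # one frame every 125 microseconds
print(frame_bits)                              # 193
print(frame_bits * frame_rate)                 # 1_544_000 bit/s = 1.544 Mbps
```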

Waveform coding techniques Applications-Digital Multiplexers




Basic problems involved in the design of digital multiplexers, irrespective of their grouping:
- Synchronization: demultiplexing requires that the bit rates of the incoming signals be locked to a common clock, so synchronization of the incoming signals is necessary.
- Framing: the multiplexed signal needs to be framed so that its individual components can be identified at the receiver.
- Handling of small variations / drift in the input bit rates.
M Theerthagiri 174

Waveform coding techniques Applications-Digital Multiplexers




- Bit stuffing is used to meet the requirements of synchronization and rate adjustment, accommodating small variations in the input data rates.
- The outgoing bit rate of the multiplexer is kept slightly higher than the sum of the maximum expected bit rates of the input channels. This is done by stuffing bits: additional, non-information-carrying bits.
- Each incoming signal is stuffed with as many bits as necessary to raise its bit rate to that of a locally generated clock.
- At the demultiplexer, the corresponding destuffing is carried out by removing the identified stuffed bits.

Bit Stuffing
- It was noted earlier that provision must be made to handle small transmission-rate variations from users. To handle small rate variations, we can employ a bit-stuffing technique (Figure 18.9: elastic buffer for bit stuffing).
- The data sequence from each user is fed into an elastic buffer at the rate of R1 bits per second. The contents of this buffer are then fed to the input of the multiplexer at a higher rate, and the multiplexer also monitors the buffer contents.
- If the input rate R1 begins to drop relative to the clock rate R'1, the buffer contents decrease. When the number of bits in the buffer drops below a predefined threshold level, the multiplexer disables readout of this buffer by the stuff signal, and a bit is stuffed. When the buffer contents rise above the threshold level, sampling of the buffer contents is resumed.
- An example of the bit-stuffing process is shown in Figure 18.10: bits are stuffed into the multiplexed data stream at time t = 3, when the input rate of user 1 drops below the threshold level, and at time t = 6, when the input rate of user 2 drops below the threshold level. A toy model of this mechanism is sketched below.

M Theerthagiri

176
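The elastic-buffer idea can be modelled very roughly as follows (an illustrative sketch only; the threshold value and arrival pattern are made up, and a real multiplexer also signals the stuff positions to the demultiplexer):

```python
# Minimal sketch of an elastic buffer with bit stuffing: whenever the buffer
# fill level drops below a threshold, a non-information stuff bit is sent
# instead of a buffered bit, so the output rate stays constant.
from collections import deque

def multiplex_with_stuffing(incoming_bits, threshold):
    buffer = deque()
    output, stuff_flags = [], []
    for arrivals in incoming_bits:          # bits arriving during one output slot
        buffer.extend(arrivals)
        if len(buffer) < threshold:         # input running slow: stuff a bit
            output.append(0)
            stuff_flags.append(True)
        else:
            output.append(buffer.popleft())
            stuff_flags.append(False)
    return output, stuff_flags

# the user occasionally delivers no bit in a slot (slightly slow input clock)
arrivals = [[1], [0], [], [1], [1], [], [0], [1]]
out, stuffed = multiplex_with_stuffing(arrivals, threshold=2)
print(out)
print(stuffed)    # True marks the positions where a stuff bit was inserted
```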


T1 Carrier System


- Designed to accommodate 24 voice channels, primarily for short distances.
- The human voice signal occupies roughly 300 Hz to 3400 Hz and is passed through a low-pass filter with a cut-off frequency of 3.4 kHz before sampling.
- With W = 3.4 kHz the Nyquist rate is 6.8 kHz; the standard sampling rate in telephone systems is 8 kHz, so each frame occupies 125 µs.
- Each frame comprises 24 × 8-bit words plus a synchronizing bit added at the end of the frame, for a total of 193 bits.

M Theerthagiri

178

T1 carrier (1.544 Mb/s)


- The digital part of the phone system is based on the T1 carrier: a 193-bit frame every 125 µs (8000 frames/s), with 8 bits per sample per channel.

[Frame diagram: bit 1 is a framing code, followed by 8 data bits for each of channels 1 through 24.]

- Each channel therefore has a data rate of 8000 samples/s × 8 bits = 64 kb/s.

M Theerthagiri

179


Waveform coding techniques Applications-Digital Multiplexers




Digital multiplexers: T1 carrier system
- frame size = 193 bits; frame duration = 125 µs; duration of each bit ≈ 0.647 µs; bit rate = 1.544 Mbps
- supervisory or signalling information also needs to be transmitted: telephone off-hook, dialled number, telephone on-hook
- in every sixth frame, the LSB of each voice channel is deleted and the signalling bit is inserted in its place
M Theerthagiri 181

Super Frame


For two reasons, the assignment of the 8th digit in every 6th frame to signalling and the need for two signalling paths in some switching systems, it is necessary to identify a superframe of 12 frames in which the 6th and 12th frames carry the two signalling paths. To achieve this, and still allow rapid synchronization of the receiver framing circuitry, the frames are divided into odd and even frames.

M Theerthagiri

182

T1 System Framing Structure

M Theerthagiri

183


Applications
Digital multiplexers: Bell System M12 multiplexer

[Signal format diagram: each frame of the M12 multiplexer is subdivided into four subframes I, II, III and IV, which are transmitted in that order.]

186


Three types of control bits
- Control bits are needed to provide synchronization and frame indication, and to identify which of the four input signals has been stuffed. They are labelled F, M and C.
- F-control bits: two per subframe; they constitute the main framing pulse. The main framing sequence is F0F1F0F1F0F1F0F1, i.e. 01 01 01 01.
- M-control bits: one per subframe; they form the secondary framing pulse, 0111.
- C-control bits: three per subframe; they are stuffing indicators. CI refers to input channel I, CII to channel II, CIII to channel III and CIV to channel IV. 000 in the three Cs indicates no stuffing; 111 indicates stuffing.

M Theerthagiri

188


Digital Hierarchy


The output of the M12 multiplexer operates 136 kbps faster than the aggregate rate of the four DS1 inputs (6.312 Mbps versus 4 × 1.544 = 6.176 Mbps). An M12 frame has 1176 bits, i.e. four 294-bit subframes; each subframe is made up of 49-bit blocks, and each block starts with a control bit followed by 4 × 12 information bits taken from the four DS1 channels.
M Theerthagiri 190

Makeup of a DS2 Frame


M1 01 02 03 04 C1 01 02 03 04 F0 01 02 03 04 C2 01 02 03 04 C3 01 02 03 04 F1 01 02 03 04

- 4 M bits per frame: 0 1 1 X, where X = 0 indicates an alarm
- C = 000 / 111: bit stuffing absent / present
- when present, the stuffed bit occupies the first information-bit position following the F1 control bit in the same subframe
- nominal stuffing rate 1796 bps, maximum 5367 bps
M Theerthagiri 191


M12 Multiplexer


 

- 12 bits from each of the four T1 inputs are interleaved to accumulate a total of 48 bits.
- Control bits are inserted by the multiplexer: 1 control bit between each sequence of 48 data bits.
- Each frame contains 24 control bits, of 3 types: F, M and C.
M Theerthagiri 195

M12 Multiplexer
Type   Bits per subframe   Description
F      2                   main framing pulses
M      1                   secondary framing pulses, to identify the four subframes
C      3                   stuffing indicators (CI refers to input channel I, CII to channel II, ...)

M12 Multiplexer


- All three C-control bits set to 1 indicate that a stuffed bit has been inserted into that T1 signal; all three set to 0 indicate no stuffing.
- The stuffed bit is inserted in the position of the first information bit of the T1 signal that follows the F1 control bit in the same subframe.
- A single error in any of the 3 C-control bits can be corrected at the receiver by using majority logic (see the sketch below).
M Theerthagiri 197
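The majority-logic decision is a one-liner; the sketch below simply majority-votes the three received C bits:

```python
# Minimal sketch: majority-logic decoding of the three C-control bits,
# which tolerates a single bit error as described above.
def stuffing_present(c_bits):
    """c_bits is a tuple of the three received C bits (0/1)."""
    return sum(c_bits) >= 2      # 111 -> stuffing, 000 -> none, majority vote

print(stuffing_present((1, 1, 1)))   # True
print(stuffing_present((1, 0, 1)))   # True  (single error corrected)
print(stuffing_present((0, 0, 1)))   # False (single error corrected)
```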

M12 Multiplexer
 

- Demultiplexing: the receiver first searches for the main framing sequence F0F1F0F1F0F1F0F1; this establishes the identity of the four input T1 signals and of the M- and C-control bits.
- Correct framing of the C-control bits is verified from the M0M1M1M1 sequence.
- Finally, the four T1 signals are properly demultiplexed and destuffed.
M Theerthagiri 198

Waveform coding techniques Applications-Light Wave Transmission


 

Optical Fibre Cable links Advantages : low transmission loss, high bandwidths, small size, light weight, immunity to EMI Applications :long-haul, high-speed communications

M Theerthagiri

199

Waveform coding techniques Applications-Light Wave Transmission




Optical fibre cable links
- Main blocks: transmitter (driver + light source), optical fibre waveguide, and receiver.
- The input is binary data fed from the output of a device such as a digital multiplexer.
- The driver for the light source is a low-voltage, high-current device that turns the light source on or off.


Waveform coding techniques Applications-Light Wave Transmission




- Light source: a laser injection device (injection laser diode) or a semiconductor LED; the on-off light pulses are launched into the optical fibre cable.
- Optical fibre waveguide: the main impairments are source-to-fibre coupling loss, fibre loss (attenuation) and dispersion.


  

Waveform coding techniques Applications-Light Wave Transmission




Receiver: regeneration of the original data
- Detection: the light pulses are converted back to electrical current pulses; a photodiode converts optical power to current.
- Pulse shaping and timing: amplification, filtering and equalization of the electrical pulses, and extraction of timing information.
- Decision making: deciding whether each received pulse is on or off.
M Theerthagiri 202


Optical Link Loss Budget Analysis

M Theerthagiri

204



Speech Quality of Various Coders

M Theerthagiri

208

How does DPCM calculate the difference between the current sample signal and a previous sample?


The first part of DPCM works exactly like PCM (that is why it is called differential PCM). The input signal is sampled at a constant sampling frequency (twice the input frequency). Then these samples are modulated using the PAM process. At this point, the DPCM process takes over. The sampled input signal is stored in what is called a predictor. The predictor takes the stored sample signal and sends it through a differentiator. The differentiator compares the previous sample signal with the current sample signal and sends this difference to the quantizing and coding phase of PCM (this phase can be uniform quantizing or companding with A law or u law). After quantizing and coding, the difference signal is transmitted to its final destination. At the receiving end of the network, everything is reversed. First the difference signal is dequantized. Then this difference signal is added to a sample signal stored in a predictor and sent to a low pass filter that reconstructs the original input signal.
M Theerthagiri 209

Linear Predictive Coding (LPC)




- In DPCM, the value of the current sample is predicted from the previous sample. Can a better prediction be made? Yes: for example, we can use the previous two samples to predict the current one.
- LPC is more general than DPCM: it exploits the correlation between multiple consecutive samples (a sketch follows below).
M Theerthagiri 210
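A minimal sketch of a two-tap linear predictor fitted by least squares (a generic illustration of the idea, not the LPC-10 or CELP procedure):

```python
# Minimal sketch: predict each sample from its two predecessors,
# x_hat[n] = a1*x[n-1] + a2*x[n-2], with a1, a2 chosen by least squares.
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.05 * np.arange(200)) + 0.05 * rng.standard_normal(200)

X = np.column_stack([x[1:-1], x[:-2]])       # regressors: [x[n-1], x[n-2]]
target = x[2:]                               # value to predict: x[n]
a1, a2 = np.linalg.lstsq(X, target, rcond=None)[0]

pred = a1 * x[1:-1] + a2 * x[:-2]
gain = np.var(target) / np.var(target - pred)    # prediction gain G_P
print(f"a1 = {a1:.3f}, a2 = {a2:.3f}, prediction gain ~ {gain:.1f}")
```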

