
Digital Signal Processing

Module 7: Stochastic Signal Processing and Quantization

Paolo Prandoni and Martin Vetterli, 2013

Module Overview:

Module 7.1: Stochastic signals
Module 7.2: Quantization
Module 7.3: A/D and D/A conversion

Module 7.1: Stochastic signal processing

Overview:

A simple random signal
Power spectral density
Filtering a stochastic signal
Noise


Deterministic vs. stochastic

deterministic signals are known in advance: x[n] = sin(0.2 n)

interesting signals are not known in advance: s[n] = what I'm going to say next (e.g., a speech signal)

we usually know something, though: stochastic signals can be described probabilistically

can we do signal processing with random signals? Yes!

we will not develop stochastic signal processing rigorously, but give enough intuition to deal with things such as noise


A simple discrete-time random signal generator

For each new sample, toss a fair coin:

    x[n] = +1 if the outcome of the n-th toss is head
           −1 if the outcome of the n-th toss is tail

each sample is independent from all others

each sample value has a 50% probability
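The generator above is easy to sketch in code; here is a minimal NumPy version (the function name and interface are my own):

```python
import numpy as np

def coin_toss_signal(N, rng=None):
    """N samples of the coin-toss signal: +1 for head, -1 for tail,
    each outcome fair and independent of all the others."""
    rng = np.random.default_rng() if rng is None else rng
    tosses = rng.integers(0, 2, N)   # fair coin: 0 or 1, each with probability 1/2
    return 2 * tosses - 1            # map {0, 1} to {-1, +1}

x = coin_toss_signal(32)
```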


A simple discrete-time random signal generator

every time we turn on the generator we obtain a different realization of the signal

we know the mechanism behind each instance

but how can we analyze a random signal?

[plots of several different realizations of x[n] over 30 samples]

Spectral properties?

let's try with the DFT of a finite set of random samples

every time it's different; maybe with more data?

no clear pattern... we need a new strategy
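A quick numerical check of this point (variable names are mine): two independent realizations of the same generator produce DFT square magnitudes that disagree wildly.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 32
# two independent realizations of the coin-toss signal
x1 = 2 * rng.integers(0, 2, N) - 1
x2 = 2 * rng.integers(0, 2, N) - 1
# their DFT square magnitudes share no stable pattern
S1 = np.abs(np.fft.fft(x1)) ** 2
S2 = np.abs(np.fft.fft(x2)) ** 2
```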

[plots of |X[k]|² for different realizations and increasing data lengths: no stable pattern emerges]

Averaging

when faced with random data, an intuitive response is to take averages

in probability theory the average is across realizations and it is called expectation

for the coin-toss signal:

    E[x[n]] = (+1) · P[n-th toss is head] + (−1) · P[n-th toss is tail] = 0

so the average value for each sample is zero...


Averaging the DFT

by linearity, the expectation of each DFT coefficient is zero:

    E[X[k]] = 0

however the signal moves, so its energy or power must be nonzero

... as a consequence, averaging the DFT will not work


Energy and power

the coin-toss signal has infinite energy (see Module 2.1):

    E_x = lim_{N→∞} Σ_{n=−N}^{N} |x[n]|² = lim_{N→∞} (2N + 1) = ∞

however it has finite power over any interval:

    P_x = lim_{N→∞} 1/(2N + 1) Σ_{n=−N}^{N} |x[n]|² = 1

Averaging

let's try to average the DFT's square magnitude, normalized:

pick an interval length N
pick a number of iterations M
run the signal generator M times and obtain M N-point realizations
compute the DFT of each realization
average their square magnitude divided by N
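The steps above can be sketched as follows (the function name is mine):

```python
import numpy as np

def averaged_psd(N, M, rng=None):
    """Average the normalized DFT square magnitude |X_N[k]|^2 / N over
    M independent N-point realizations of the coin-toss signal."""
    rng = np.random.default_rng() if rng is None else rng
    acc = np.zeros(N)
    for _ in range(M):
        x = 2 * rng.integers(0, 2, N) - 1        # one N-point realization
        acc += np.abs(np.fft.fft(x)) ** 2 / N    # normalized square magnitude
    return acc / M

# as M grows, the average flattens out around 1
P = averaged_psd(N=32, M=5000, rng=np.random.default_rng(7))
```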


Averaged DFT square magnitude

[plots of the averaged, normalized DFT square magnitude for M = 1, 10, 1000, 5000: as M grows, the average flattens out around 1]

Power spectral density

    P[k] = E[|X_N[k]|²] / N

it looks very much as if P[k] = 1

if |X_N[k]|² tends to the energy distribution in frequency...

... |X_N[k]|²/N tends to the power distribution (aka density) in frequency

the frequency-domain representation for stochastic processes is the power spectral density


Power spectral density: intuition

P[k] = 1 means that the power is equally distributed over all frequencies

i.e., we cannot predict if the signal moves slowly or super-fast

this is because each sample is independent of each other: we could have a realization of all ones, or a realization in which the sign changes every other sample, or anything in between


Filtering a random process

let's filter the random process with a 2-point Moving Average filter:

    y[n] = (x[n] + x[n − 1]) / 2

what is the power spectral density?
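A numerical sketch of this experiment (variable names are mine; I use a circular shift for the delay so that the N-point DFT convolution relation holds exactly):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 32, 5000
acc = np.zeros(N)
for _ in range(M):
    x = 2 * rng.integers(0, 2, N) - 1
    y = (x + np.roll(x, 1)) / 2               # 2-point Moving Average (circular version)
    acc += np.abs(np.fft.fft(y)) ** 2 / N     # normalized DFT square magnitude
P_y = acc / M

# the average approaches |H[k]|^2 = |(1 + e^{-j(2 pi / N) k}) / 2|^2
H2 = np.abs((1 + np.exp(-2j * np.pi * np.arange(N) / N)) / 2) ** 2
```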


Averaged DFT magnitude of filtered process

[plots of the averaged, normalized DFT square magnitude of the filtered process for M = 1, 10, 5000: as M grows, the average converges to |(1 + e^{−j(2π/N)k})/2|²]

Filtering a random process

it looks like P_y[k] = P_x[k] · |H[k]|², where H[k] = DFT{h[n]}

can we generalize these results beyond a finite set of samples?


Stochastic signal processing

a stochastic process is characterized by its power spectral density (PSD)

it can be shown (see the textbook) that the PSD is

    P_x(e^{jω}) = DTFT{r_x[n]}

where r_x[n] = E[x[k] x[n + k]] is the autocorrelation of the process.

for a filtered stochastic process y[n] = H{x[n]}, it is:

    P_y(e^{jω}) = |H(e^{jω})|² · P_x(e^{jω})
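The autocorrelation of the coin-toss signal can be estimated by averaging across realizations; it comes out close to δ[n], consistent with the flat power spectral density found above (variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, lags = 64, 2000, 5
r = np.zeros(lags)
for _ in range(M):
    x = 2 * rng.integers(0, 2, N) - 1
    for n in range(lags):
        # sample estimate of E[x[k] x[n + k]] at lag n
        r[n] += np.mean(x[:N - n] * x[n:])
r /= M
# r[0] = 1 exactly (since x[n]^2 = 1); r[n] is close to 0 for n > 0
```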


Stochastic signal processing

key points:

filters designed for deterministic signals still work (in magnitude) in the stochastic case

we lose the concept of phase, since we don't know the shape of a realization in advance


Noise

noise is everywhere:

thermal noise
sum of extraneous interferences
quantization and numerical errors
...

we can model noise as a stochastic signal

the most important noise is white noise


White noise

white indicates uncorrelated samples:

    r_w[n] = σ² δ[n]

    P_w(e^{jω}) = σ²


White noise

[plot: the power spectral density P_w(e^{jω}) is constant at σ² over all frequencies]

White noise

the PSD is independent of the probability distribution of the single samples (it depends only on the variance)

the distribution is important to estimate bounds for the signal

very often a Gaussian distribution models the experimental data the best

AWGN: additive white Gaussian noise
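A quick check of the first point, comparing Gaussian and uniform white noise with the same unit variance (the helper name is mine):

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 32, 5000

def avg_psd(draw):
    """Averaged normalized DFT square magnitude over M realizations."""
    acc = np.zeros(N)
    for _ in range(M):
        acc += np.abs(np.fft.fft(draw())) ** 2 / N
    return acc / M

# both generators have sigma^2 = 1: uniform on [-a, a] has variance a^2 / 3
P_gauss = avg_psd(lambda: rng.standard_normal(N))
P_unif = avg_psd(lambda: rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), N))
# both averaged spectra are flat around sigma^2 = 1
```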


END OF MODULE 7.1

Module 7.2: Quantization

Overview:

Quantization
Uniform quantization and error analysis
Clipping, saturation, companding


Quantization

digital devices can only deal with integers (b bits per sample)

we need to map the range of a signal onto a finite set of values

irreversible loss of information: quantization noise


Quantization schemes

    x[n] → Q{·} → x̂[n]

Several factors at play:

storage budget (bits per sample)
storage scheme (fixed point, floating point)
properties of the input: range, probability distribution


Scalar quantization

    x[n] → Q{·} → x̂[n]

The simplest quantizer:

each sample is encoded individually (hence scalar)
each sample is quantized independently (memoryless quantization)
each sample is encoded using R bits


Scalar quantization

Assume the input signal is bounded: A ≤ x[n] ≤ B for all n:

each sample is quantized over 2^R possible values, i.e. 2^R intervals

each interval is associated to a quantization value x̂_k


Scalar quantization

Example for R = 2:

[the range from A = i_0 to B = i_4 is split by boundaries i_1, i_2, i_3 into intervals I_0, I_1, I_2, I_3, labeled k = 00, 01, 10, 11, each with its quantization value x̂_0, x̂_1, x̂_2, x̂_3]

what are the optimal interval boundaries i_k?

what are the optimal quantization values x̂_k?


Quantization Error

    e[n] = Q{x[n]} − x[n] = x̂[n] − x[n]

model x[n] as a stochastic process: we need statistics of the input to study the error

model the error as a white noise sequence:

error samples are uncorrelated
all error samples have the same distribution


Uniform quantization

simple but very general case

the range is split into 2^R equal intervals of width Δ = (B − A) · 2^−R
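A uniform quantizer is naturally implemented as an encode/decode pair: the encoder maps a sample to an R-bit interval index, the decoder maps the index back to a representative value. A minimal sketch (names are illustrative, not from the slides):

```python
# R-bit uniform quantizer over [A, B] as an encode/decode pair.
A, B, R = -1.0, 1.0, 3
delta = (B - A) / 2 ** R          # interval width Δ = (B − A)·2^−R

def encode(x):
    """Map a sample in [A, B] to an integer code in {0, ..., 2^R − 1}."""
    k = int((x - A) / delta)
    return min(max(k, 0), 2 ** R - 1)   # clamp the edge case x = B

def decode(k):
    """Map a code back to the midpoint of its interval."""
    return A + k * delta + delta / 2

x = 0.3
xhat = decode(encode(x))
# for in-range inputs, |x − x̂| never exceeds Δ/2
```

The clamp in `encode` handles the boundary sample x = B, which would otherwise fall one index past the last interval.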


Uniform quantization

The mean square error is the variance of the error signal:

σ_e² = E[ |Q{x[n]} − x[n]|² ]
     = ∫_A^B f_x(τ) (Q{τ} − τ)² dτ
     = Σ_{k=0}^{2^R − 1} ∫_{I_k} f_x(τ) (x̂_k − τ)² dτ

The error depends on the probability distribution of the input.


Uniform quantization of uniform input

Under the uniform-input hypothesis:

f_x(τ) = 1 / (B − A)

σ_e² = (1 / (B − A)) Σ_{k=0}^{2^R − 1} ∫_{I_k} (x̂_k − τ)² dτ

Uniform quantization of uniform input

Let's find the optimal quantization point by minimizing the error:

∂σ_e²/∂x̂_m = ∂/∂x̂_m [ (1 / (B − A)) Σ_{k=0}^{2^R − 1} ∫_{I_k} (x̂_k − τ)² dτ ]
            = ∂/∂x̂_m [ (1 / (B − A)) ∫_{A + mΔ}^{A + (m+1)Δ} (x̂_m − τ)² dτ ]
            = (2 / (B − A)) ∫_{I_m} (x̂_m − τ) dτ


Uniform quantization of uniform input

Setting the derivative to zero:

∂σ_e²/∂x̂_m = 0  for  x̂_m = A + mΔ + Δ/2

i.e. the interval's midpoint is the optimal quantization point, for all intervals.

Uniform 3-bit quantization function

[Staircase plot of x̂[n] versus x[n] over [−1, 1]: eight uniform steps with outputs at ±0.25, ±0.50, ±0.75, ±1.00, labeled with the codes 000 through 111.]

Uniform quantization of uniform input

The quantizer's mean square error:

σ_e² = (1 / (B − A)) Σ_{k=0}^{2^R − 1} ∫_{A + kΔ}^{A + (k+1)Δ} (A + kΔ + Δ/2 − τ)² dτ
     = (2^R / (B − A)) ∫_0^Δ (Δ/2 − τ)² dτ
     = Δ² / 12


Error analysis

error energy: σ_e² = Δ²/12, with Δ = (B − A)/2^R

signal energy: σ_x² = (B − A)²/12

signal-to-noise ratio: SNR = σ_x²/σ_e² = 2^{2R}

in dB: SNR_dB = 10 log₁₀ 2^{2R} ≈ 6R dB


The 6 dB/bit rule of thumb

a compact disc has 16 bits/sample: max SNR = 96 dB

a DVD has 24 bits/sample: max SNR = 144 dB


Rate/distortion curve

[Plot of the distortion (σ_e²) as a decreasing function of the rate (R).]

Other quantization errors

If the input is not bounded to [A, B]:

clip samples to [A, B]: nonlinear distortion (can be put to good use in guitar effects!)

smoothly saturate the input: this simulates the saturation curves of analog electronics


Clipping vs. saturation

[Plot contrasting the transfer characteristic of hard clipping with that of smooth saturation over the range [−2, 2].]

Other quantization errors

If the input is not uniform:

use a uniform quantizer and accept the increased error; for instance, if the input is Gaussian with variance σ²:

σ_e² ≈ (√3 π / 2) σ² 2^{−2R}

design the optimal quantizer for the input distribution, if known (Lloyd-Max algorithm)

use companders


μ-law compander

[Plot of the compander curve C{x}.]

C{x[n]} = sgn(x[n]) · ln(1 + μ|x[n]|) / ln(1 + μ)
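The compander formula and its inverse are straightforward to implement. A sketch, assuming μ = 255 (the value used in North American telephony) and inputs normalized to [−1, 1]:

```python
import math

MU = 255.0  # assumed companding constant (G.711-style μ-law)

def compress(x):
    """C{x} = sgn(x) · ln(1 + μ|x|) / ln(1 + μ), for x in [−1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse of compress: sgn(y) · ((1 + μ)^|y| − 1) / μ."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

y = compress(0.01)      # small amplitudes get boosted before quantization
x_back = expand(y)      # round trip recovers the original sample
```

Compressing before a uniform quantizer and expanding after it effectively spends more quantization levels on small amplitudes, where signals such as speech concentrate their probability mass.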

END OF MODULE 7.2

Digital Signal Processing
Module 7.3: A/D and D/A Conversion

Overview:

Analog-to-digital (A/D) conversion

Digital-to-analog (D/A) conversion


From analog to digital

sampling discretizes time

quantization discretizes amplitude

how is it done in practice?



A tiny bit of electronics: the op-amp

[Op-amp symbol with non-inverting input v_p, inverting input v_n, and output v_o.]

v_o = G (v_p − v_n)


The two key properties

infinite input gain (G → ∞)

zero input current


Inside the box

[The op-amp is powered by the supply rails +V_cc and −V_cc; the output v_o is necessarily bounded between them.]

The op-amp in open loop: comparator

[The input x drives the non-inverting input; a threshold voltage V_T sits on the inverting input.]

y = +V_cc  if x > V_T
y = −V_cc  if x < V_T


The op-amp in closed loop: buffer

[The output is fed back directly to the inverting input; the input x drives the non-inverting input.]

y = x


The op-amp in closed loop: inverting amplifier

[The input x reaches the inverting input through R_1; R_2 feeds the output back to the same node; the non-inverting input is grounded.]

y = −(R_2 / R_1) x


A/D converter: sample & hold

[A switch driven by the sampling clock F_s connects the input x(t) to a hold capacitor C_1; a buffer presents the held value as the sample x[n].]

A/D converter: 2-bit quantizer

[A resistor ladder between +V_0 and −V_0 generates the thresholds +0.5V_0, 0, and −0.5V_0; three comparators compare the input x[n] against them, and a small logic stage turns the comparator outputs into the two-bit code (MSB, LSB) 00, 01, 10, 11.]
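Behaviorally, this flash-style converter counts how many ladder thresholds the input exceeds (a thermometer code) and reads that count out as the binary code. A sketch; the exact threshold placement is an assumption based on the ladder shown:

```python
# Behavioral model of the 2-bit flash converter: three comparator
# thresholds derived from a reference V0 (assumed placement), then a
# thermometer-to-binary step.
V0 = 1.0
thresholds = [-0.5 * V0, 0.0, 0.5 * V0]

def flash_2bit(x):
    # thermometer code: count how many thresholds the input exceeds
    count = sum(1 for t in thresholds if x > t)
    return count  # 0..3, i.e. the 2-bit output code

codes = [flash_2bit(v) for v in (-0.8, -0.2, 0.2, 0.8)]
# → [0, 1, 2, 3]
```

All comparisons happen in parallel, which is why flash converters are fast but need 2^R − 1 comparators, limiting them to small R.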

D/A converter

[An R–2R resistor ladder: each bit of the input code, from LSB to MSB, switches a 2R branch between the reference V_0 and ground; an op-amp sums the binary-weighted contributions into the output x(t).]
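Whatever the circuit details, the ladder computes a binary-weighted sum of the input bits scaled by the reference voltage. A sketch of that relationship; the sign and scaling conventions here are illustrative assumptions, not taken from the circuit:

```python
# What the R–2R ladder computes: each bit contributes a binary-weighted
# fraction of the reference voltage V0 (scaling convention assumed).
V0 = 1.0

def dac(bits):
    """bits[0] is the MSB; returns V0 · Σ b_i · 2^−(i+1)."""
    return V0 * sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

out = dac([1, 0, 1])   # V0 · (1/2 + 1/8) = 0.625
```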

END OF MODULE 7.3


END OF MODULE 7
Das könnte Ihnen auch gefallen