Lecture outline
• Basic concepts
• Statistical averages
• Autocorrelation function
• Wide sense stationary (WSS)
• Multiple random processes
A property of MVG_OMALLOOR 11/25/2008
Random processes
• A random process (RP) is an extension of a RV
• Applied to random time varying signals
• Example: “thermal noise” in circuits caused by
the random movement of electrons
• RP is a natural way to model info sources
• RP is a set of possible realizations of signal
waveforms governed by probabilistic laws
• RP instance is a signal (and not just one number
like the case of RV)
Example 1
• A signal generator generates six possible
sinusoids with amplitude one and phase zero.
• We throw a die, corresponding to the value F,
the sinusoid frequency = 100F
• Thus, each of the possible six signals would be
realized with equal probability
• The random process is X(t)=cos(2π × 100F t)
Example 2
• Randomly choose a phase Θ ~ U[0,2π]
• Generate a sinusoid with fixed amplitude (A)
and fixed freq (f0) but a random phase Θ
• The RP is X(t) = A cos(2πf0t + Θ)
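The claim that this process has zero ensemble mean can be checked numerically. A minimal sketch in Python, with amplitude, frequency, and the time instant chosen arbitrarily for illustration:

```python
import math
import random

random.seed(0)

A, f0 = 2.0, 100.0          # illustrative amplitude and frequency (assumptions)
t = 0.0042                  # an arbitrary fixed time instant

# Draw many realizations of Theta ~ U[0, 2*pi] and average X(t) over the ensemble.
N = 200_000
total = 0.0
for _ in range(N):
    theta = random.uniform(0.0, 2.0 * math.pi)
    total += A * math.cos(2.0 * math.pi * f0 * t + theta)

ensemble_mean = total / N
print(ensemble_mean)        # close to the theoretical mean E[X(t)] = 0
```

The same loop run at any other t gives the same result, since the uniform phase washes out the time dependence.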
Example 3
• X(t)=X
• Random variable
X~U[-1,1]
Random processes
• Corresponding to each ωi in the sample space
Ω, there is a signal x(t; ωi) called a sample
function or a realization of the RP
• For the different ωi’s at a fixed time t0, the
numbers x(t0; ωi) constitute a RV X(t0)
• In other words, at any time instant, the value
of a random process is a random variable
Example 4
• We throw a die, corresponding to the value F,
the sinusoid frequency = 100F
• Thus, each of the possible six signals would be
realized with equal probability
• The random process is X(t)=cos(2π × 100F t)
• Determine the values of the RV X(0.001)
• The possible values are cos(0.2π), cos(0.4π),
…, cos(1.2π) each with probability 1/6
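The PMF of X(0.001) can be tabulated directly. A small Python sketch; note that the six listed values are not all distinct (cos(0.8π) = cos(1.2π)), so as a random variable X(0.001) has five distinct values, one of them with probability 1/3:

```python
import math
from collections import Counter

# Each die face F in {1,...,6} selects frequency 100*F, so
# X(0.001) = cos(2*pi*100*F*0.001) = cos(0.2*pi*F), each face with probability 1/6.
pmf = Counter()
for F in range(1, 7):
    value = round(math.cos(0.2 * math.pi * F), 9)  # round to merge equal values
    pmf[value] += 1 / 6

for value, prob in sorted(pmf.items()):
    print(f"{value:+.6f}  prob {prob:.4f}")
```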
Example 5
Example 6
• Example of a discrete-time random process
• Let ωi denote the outcome of a random
experiment of independent drawings from
N(0,1)
• The discrete-time RP is {Xn}, n ≥ 0, with X0 = 0
and Xn = Xn−1 + ωn for all n ≥ 1
Statistical averages
• mX(t) is the mean, of the random process X(t)
• At each t=t0, it is the mean of the RV X(t0)
• Thus, mX(t)=E[X(t)] for all t
• The PDF of X(t0) is denoted by fX(t0)(x)
  E[X(t0)] = mX(t0) = ∫_{−∞}^{∞} x f_{X(t0)}(x) dx
Example 7
• Randomly choose a phase Θ ~ U[0,2π]
• Generate a sinusoid with fixed amplitude (A)
and fixed freq (f0) but a random phase Θ
• The RP is X(t)= A cos(2πf0t + Θ)
• We can compute the mean
• For θ ∈ [0, 2π], fΘ(θ) = 1/2π, and zero otherwise
• E[X(t)] = ∫_0^{2π} A cos(2πf0t + θ) (1/2π) dθ = 0
Autocorrelation function
• The autocorrelation function of the RP X(t) is
denoted by RX(t1,t2)=E[X(t1)X(t2)]
• RX(t1,t2) is a deterministic function of t1 and t2
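For the random-phase sinusoid of Example 7, the autocorrelation works out to RX(t1,t2) = (A²/2) cos(2πf0(t1 − t2)). A minimal Monte-Carlo check in Python (A, f0, t1, t2 are arbitrary illustrative values):

```python
import math
import random

random.seed(1)

A, f0 = 1.0, 100.0
t1, t2 = 0.003, 0.007

# Estimate R_X(t1, t2) = E[X(t1) X(t2)] for X(t) = A cos(2*pi*f0*t + Theta),
# with Theta ~ U[0, 2*pi]; theory gives (A**2 / 2) * cos(2*pi*f0*(t1 - t2)).
N = 200_000
acc = 0.0
for _ in range(N):
    theta = random.uniform(0.0, 2.0 * math.pi)
    acc += (A * math.cos(2 * math.pi * f0 * t1 + theta)) * \
           (A * math.cos(2 * math.pi * f0 * t2 + theta))

R_est = acc / N
R_theory = (A ** 2 / 2) * math.cos(2 * math.pi * f0 * (t1 - t2))
print(R_est, R_theory)
```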
Example 8
• The autocorrelation of the RP in ex. 7 is RX(t1,t2) = (A²/2) cos(2πf0(t1 − t2))
• We have used cos a cos b = ½[cos(a − b) + cos(a + b)]
Example 9
• X(t)=X
• Random variable X~U[-1,1]
• Find the autocorrelation function
Example 10
• Randomly choose a phase Θ ~ U[0,π]
• Generate a sinusoid with fixed amplitude (A) and
fixed freq (f0) but a random phase Θ
• The new RP is Y(t) = A cos(2πf0t + Θ)
• We can compute the mean
• For θ ∈ [0, π], fΘ(θ) = 1/π, and zero otherwise
• mY(t) = E[Y(t)] = ∫_0^{π} A cos(2πf0t + θ) (1/π) dθ
  = −(2A/π) sin(2πf0t)
• Since mY(t) is not independent of t, Y(t) is a
nonstationary RP
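The time-dependent mean can be verified numerically. A minimal sketch in Python (A and f0 are illustrative assumptions), comparing the ensemble average at two time instants with −(2A/π) sin(2πf0t):

```python
import math
import random

random.seed(2)

A, f0 = 1.0, 100.0
N = 200_000

def mean_at(t):
    # Ensemble average of Y(t) = A cos(2*pi*f0*t + Theta), Theta ~ U[0, pi]
    acc = 0.0
    for _ in range(N):
        theta = random.uniform(0.0, math.pi)
        acc += A * math.cos(2 * math.pi * f0 * t + theta)
    return acc / N

for t in (0.001, 0.0025):
    est = mean_at(t)
    theory = -(2 * A / math.pi) * math.sin(2 * math.pi * f0 * t)
    print(t, est, theory)
```

The two time instants give visibly different means, which is exactly what makes the process nonstationary.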
Multiple RPs
• Two RPs X(t) and Y(t) are independent if for all t1
and t2, the RVs X(t1) and Y(t2) are independent
• Similarly, X(t) and Y(t) are uncorrelated if for
all t1 and t2, the RVs X(t1) and Y(t2) are
uncorrelated
• Recall that independence implies uncorrelatedness, but
the reverse relationship is not generally true
• The only exception is Gaussian processes
(TBD next time), where the two are equivalent
Lecture #4: Stochastic Process (7/9/2003)
Anan Phonphoem, Ph.D.
anan@cpe.ku.ac.th
http://www.cpe.ku.ac.th/~anan
Computer Engineering Department
Kasetsart University, Bangkok, Thailand
Outline
• Stochastic Process
• Counting Process
• Poisson Process
• Brownian Motion Process
• Autocovariance and Autocorrelation
• Random Sequence
• Stationary Process
• Wide-sense Stationary Process
Example 1
• Sample functions x(t,e1), …, x(t,en): engine temperature measured on each launch (e1 = 1st launch measurement)
• At time t = 2500.10 s: x(2500.10, e1) = 1200 C
• The average engine temperature at t = 2500.10 s is 1320 C → E[X(2500.10)], an ensemble average
Example 2
• Measure the rainfall in a day at Songkhla province, every day
• Let F(t) = the random process
• f(t,y) = a sample function: the measurement at day t of year y
• f(t,y1): sample function of rainfall in y1 = year 1990 (1 ≤ t ≤ 365)
• f(t,y2): sample function of rainfall in y2 = year 1991 (1 ≤ t ≤ 365)
• …
• f(t,yn): sample function of rainfall in yn = year 2001 (1 ≤ t ≤ 365)
Stochastic Process Examples
• W(t): record temperature as a continuous-time process
• X(t): record round(temperature) as a continuous-time process
• Y(n): record temperature every T seconds
• Z(n): record round(temperature) every T seconds
IID Random Sequence
• Independent, identically distributed random sequence
• Independent trials of an experiment at a constant rate
• Discrete or continuous
• Theorem: P_{Xn1,…,Xnk}(x1,…,xk) = PX(x1) ⋯ PX(xk) = ∏_{i=1}^{k} PX(xi)
Poisson Process
• A Poisson process is a counting process N(t) in which the number of arrivals during any interval is a Poisson RV
• The arrival process is memoryless
• Sn is the nth arrival time; Xn is the nth interarrival time
[Figure: staircase N(t) vs t, with arrival times S1–S4 and interarrival times X1–X4]
• For an interval (ti−1, ti]: αi = λ(ti − ti−1)
Example
• Let Nk = # of packets transmitted in the kth hour
• The # of packets in each hour is IID
• With λ = 12 packets/s and T = 3600 s, α = λT = 12 × 3600 = 43200, so
  PNi(n) = (43200)^n e^{−43200} / n!  for n = 0,1,2,…; 0 otherwise
Example
• Joint PMF of the # of packets transmitted in the kth hour and the zth hour:
  PNk,Nz(nk,nz) = (αk^{nk} e^{−αk} / nk!) (αz^{nz} e^{−αz} / nz!)  for nk, nz = 0,1,…
  = α^{nk+nz} e^{−2α} / (nk! nz!),  where α = αk = αz = λT = 43200
The interarrival times X are exponential:
  fX(x) = λ e^{−λx}  for x ≥ 0; 0 otherwise
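The link between the exponential interarrival law and the Poisson counts can be checked by simulation. A minimal sketch in Python (the rate and window below are illustrative assumptions, smaller than the packet example so the loop stays fast):

```python
import random

random.seed(3)

lam, T = 12.0, 5.0      # assumed rate (arrivals/s) and observation window

# Generate exponential interarrival times and count arrivals per window.
trials = 20_000
counts = []
inter_sum, inter_n = 0.0, 0
for _ in range(trials):
    t, n = 0.0, 0
    while True:
        x = random.expovariate(lam)   # interarrival time ~ Exponential(lam)
        inter_sum += x
        inter_n += 1
        t += x
        if t > T:
            break
        n += 1
    counts.append(n)

print(inter_sum / inter_n)            # ~ 1/lam: mean interarrival time
print(sum(counts) / trials)           # ~ lam*T: mean number of arrivals in [0, T]
```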
Joint PDF
• Theorem: For the Brownian motion process X(t), the joint PDF of X(t1),…,X(tk) is
  f_{X(t1),…,X(tk)}(x1,…,xk) = ∏_{n=1}^{k} [1/√(2πα(tn − tn−1))] e^{−(xn − xn−1)² / (2α(tn − tn−1))}
Expected Value
• X(t1) → f_{X(t1)}(x) → E[X(t1)]
• Definition: The expected value of a stochastic process X(t) is the deterministic function µX(t) = E[X(t)]
Autocovariance
• Note: for τ = 0, CX(t,t) = Var[X(t)]
Autocovariance & Autocorrelation
• Note: the autocovariance tells how well X(t) predicts a future value X(t+τ)
• The autocorrelation describes the power of a random signal
Random Sequence
• For a discrete-time process, the sample function is described by the ordered sequence of random variables Xn = X(nT)
• Definition: A random sequence Xn is an ordered sequence of random variables X0, X1, …
• Definition: The autocovariance of a random sequence Xn is CX[m,k] = Cov[Xm, Xm+k]
• Definition: The autocorrelation of a random sequence Xn is RX[m,k] = E[Xm Xm+k]
[Figure: samples X0 = X(0T), X1 = X(1T), X2 = X(2T), … taken every T seconds]
Stationary Process
• For a random process X(t), normally, at t1: X(t1) has pdf fX(t1)(x) [depends on t1]
• For a stationary random process X(t), at t1: X(t1) has pdf fX(x) [does not depend on t1]
• Definition: A stochastic process X(t) is stationary iff for all sets of times t1,…,tm and any time shift τ,
  f_{X(t1),…,X(tm)}(x1,…,xm) = f_{X(t1+τ),…,X(tm+τ)}(x1,…,xm)
Theorem: For a stationary random sequence Xn, for all m:
  E[Xm] = µX
  RX[m,k] = RX[0,k] = RX[k]
  CX[m,k] = RX[k] − µX² = CX[k]
Example
• Telegraph signal: X(t) takes values ±1
• X(0) = ±1, each with probability 0.5
• X(t) toggles its polarity with each occurrence of an event in a Poisson process of rate α
Example
• Find the PMF of X(t), PX(t)(x)
• P[X(t) = 1] = P[X(t) = 1 | X(0) = 1] P[X(0) = 1] + P[X(t) = 1 | X(0) = −1] P[X(0) = −1]
• P[X(t) = 1 | X(0) = 1] = P[N(t) = even]
  = Σ_{j=0}^{∞} e^{−αt} (αt)^{2j} / (2j)!
  = e^{−αt} (1/2)(e^{αt} + e^{−αt})
  = (1/2)(1 + e^{−2αt})
[Figure: a sample path of X(t) switching between +1 and −1 at the Poisson event times X1–X5]
• P[X(t) = 1 | X(0) = −1] = P[N(t) = odd]
  = Σ_{j=0}^{∞} e^{−αt} (αt)^{2j+1} / (2j+1)!
  = e^{−αt} (1/2)(e^{αt} − e^{−αt})
  = (1/2)(1 − e^{−2αt})
• P[X(t) = 1] = P[X(t) = 1 | X(0) = 1] P[X(0) = 1] + P[X(t) = 1 | X(0) = −1] P[X(0) = −1]
  = (1/2)(1 + e^{−2αt})(1/2) + (1/2)(1 − e^{−2αt})(1/2) = 1/2
• P[X(t) = −1] = 1 − P[X(t) = 1] = 1/2
  PX(t)(x) = 1/2 for x = −1, 1; 0 otherwise
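The result P[X(t) = 1] = 1/2, independent of α and t, can be confirmed by simulating the telegraph signal directly. A minimal sketch in Python (α and t are illustrative assumptions):

```python
import random

random.seed(4)

alpha, t = 2.0, 0.7     # assumed Poisson rate and observation time

# X(0) = +/-1 equally likely; X toggles at each Poisson(alpha) event,
# so X(t) = X(0) * (-1)**N(t) where N(t) counts events in [0, t].
trials = 100_000
plus = 0
for _ in range(trials):
    x0 = random.choice((1, -1))
    # Draw N(t): count exponential gaps until their sum exceeds t.
    n, s = 0, random.expovariate(alpha)
    while s <= t:
        n += 1
        s += random.expovariate(alpha)
    if x0 * (-1) ** n == 1:
        plus += 1

print(plus / trials)    # ~ 1/2 regardless of alpha and t
```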
Wide Sense Stationary
• Every stationary process or sequence is also wide-sense stationary
• However, a wide-sense stationary process may or may not be stationary
Example
• Let Xn = ±1 with probability ½ each for n even
• For n odd: Xn = −1/3 with probability 9/10, and Xn = 3 with probability 1/10
• Stationary? No: the first-order PMF differs between even and odd n
• Wide-sense stationary?
  – Mean = 0 for all n
  – CX[n,k] = 0 for k ≠ 0, and CX[n,0] = 1 for all n
  – Yes, it's wide-sense stationary
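That the even and odd indices share the same first two moments, despite having different PMFs, is easy to verify empirically. A minimal sketch in Python:

```python
import random

random.seed(5)

def draw(n):
    # Xn = +/-1 w.p. 1/2 each for even n; Xn = -1/3 w.p. 9/10, 3 w.p. 1/10 for odd n
    if n % 2 == 0:
        return random.choice((1.0, -1.0))
    return -1 / 3 if random.random() < 0.9 else 3.0

trials = 200_000
means, variances = {}, {}
for n in (0, 1):                      # one even index, one odd index
    samples = [draw(n) for _ in range(trials)]
    m = sum(samples) / trials
    means[n] = m
    variances[n] = sum(s * s for s in samples) / trials - m ** 2
    print(n, means[n], variances[n])
# Both indices give mean ~ 0 and variance ~ 1 (wide-sense stationary),
# yet the two PMFs are obviously different (not stationary).
```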
ECE 244/444
DIGITAL COMMUNICATIONS
Alireza Seyedi
Fall 2008
Introduction (Section 1.1)
Digital Communications
Elements of Digital Communications
Random Processes (A Quick Review) (Sections 2.7, 2.7-1)
Random processes
Example: X1, where X1(t) = Y, and Y is an RV with uniform pdf over [0,1]
Example: X2, where X2(t) = cos(ωt), and ω is an RV with uniform pdf over [0,2π]
Example: X3, where the X3(t) are independent and have pdf N(t,1)
Example: X4, where the X4(t) are iid with pdf N(0,1)
Example: X5, where X5(t) = X4(t) + X4(t−1)
Joint density and autocorrelation
f_{X(t1),…,X(tN)}(x1,…,xN)
E[g(X(t1),…,X(tN))] = ∫ g(x1,…,xN) f_{X(t1),…,X(tN)}(x1,…,xN) dx1 … dxN
The autocorrelation function of an RP X(t) is defined as
RX(t1,t2) = E[X(t1) X*(t2)]
Autocorrelation
Example: Autocorrelation of X2
Example: Autocorrelation of X4
Example: Autocorrelation of X5
A stationary process X(t) is one whose statistics are unchanged by a shift of the time origin
Example: Is X2 WSS? Why?
Are X4 and X5 WSS? Why?
Filtering an RP
Power spectral density (PSD)
The PSD of a WSS RP is the Fourier transform of RX(τ), that is, SX(f) = F{RX(τ)}
PSD tells us "how much power the RP has at each frequency".
Example: PSD of X4
Note: An RP with flat PSD is called a "white" RP. This is due to the fact that white light has "all the frequencies" in the visible light range.
Example: PSD of X5
Filtering an RP
國立台灣海洋大學 National Taiwan Ocean University
P(R > 0 | S = −a) = ∫_0^∞ [1/√(2πσn²)] e^{−(x+a)²/(2σn²)} dx = Q(a/σn)
Q(x) ≈ [1/(x√(2π))] e^{−x²/2}, for x ≥ 3
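The large-argument approximation of Q(x) can be checked against the exact Gaussian tail, which Python exposes via the complementary error function (Q(x) = ½ erfc(x/√2)):

```python
import math

def Q(x):
    # Exact Gaussian tail via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_approx(x):
    # The slide's approximation, stated for x >= 3.
    return math.exp(-x * x / 2.0) / (x * math.sqrt(2.0 * math.pi))

for x in (3.0, 4.0, 5.0):
    print(x, Q(x), Q_approx(x), Q_approx(x) / Q(x))
```

The approximation slightly overestimates Q(x), and the ratio approaches 1 as x grows.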
[Figure: Q(x) on a log scale from 10⁰ down to 10⁻¹⁰, for 0 ≤ x ≤ 6]
Let R = √(X1² + X2²), where X1 and X2 are Gaussian with mean 0 and variance σ². Then R is a Rayleigh random variable with pdf:
pR(r) = (r/σ²) e^{−r²/(2σ²)}, r ≥ 0
Rayleigh pdfs are frequently used to model fading when no line-of-sight signal is present.
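The Rayleigh model can be checked by building R from two Gaussians and comparing the sample mean with the known Rayleigh mean σ√(π/2). A minimal sketch in Python (σ is an illustrative assumption):

```python
import math
import random

random.seed(7)

sigma = 1.5
N = 200_000

# R = sqrt(X1^2 + X2^2) with X1, X2 ~ N(0, sigma^2) is Rayleigh(sigma);
# its theoretical mean is sigma * sqrt(pi/2).
r_sum = 0.0
for _ in range(N):
    x1 = random.gauss(0.0, sigma)
    x2 = random.gauss(0.0, sigma)
    r_sum += math.hypot(x1, x2)

print(r_sum / N, sigma * math.sqrt(math.pi / 2.0))
```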
Randomness in Introductory Computing
Richard Anderson
• Understanding randomness requires mathematical precision
  – How to, and how not to, generate random permutations
Mystery 1 Mystery 2
Jims faced around that was all. I was. He cold camp in the first Hwt he fescean hine, Hrot ic, monigongsele brothgars gestrles his
time whether time and settle druther saw his headingscow, or two r dyde, hres siomia num bite le inna nga e ofer mg inga t nall
plantation, Aint you, too bluff bank every well follow it little place. onned, searon monda Getan. Of The and lestre gende berse cw h
All right between up the hugged me stands after him. They was hine hwere, eorcum and on ban Hrla dre urh freodon gladufond
blooded these two thing one, and pick up to his plan, I wasnt have geheall runces fde nfrendel lenga, sylf wisthle onne gehws geatolc
confidence of itand went aloft In the said not to it was to steal that one orfan can mdes forbearn sprc, eard gd hinder. Hle hie riht
come yet aint, Mars Tom, and the raft did youd killed. Youd see geflmes, and fyreferas nicel hlw u eaf, on nefng giddan, fre, his
them betwixt them says these grave, and so he tribute before she faran scealdan, t his lefulofter rst wereard ges weallenrce, t ymb
was for from Ohioa mulatter the doors shoot But Well, Mars Sid, A wealsgen, grdig, gum, git eorhten, holde scold. Sige byrge ingum
dog. Why, what was making come afternoon, and carriages again fen hrurond Hroterda hogoe t gewrc, him dyrne t den u in beora
bundle of raft and, but coffin, and red flapping along begun in fren grtten he Joy Attas, rce gewealh is ford forlscaa ror, Hrmanna
slopped a few minute and jumped it out a whacks till cyst rnste, Gren eaxe genele fingene, byrhten Heaoscip oferhe
Erra. Hoard bodon he hsa ge
Mystery 3 Mystery 4
The day, Into hele swete hem here As she hires thus it wist, I Morning all years dismember. Their reclined or dear what is or
lengthe of myn hertes, I woful Troilus, And what is here. Thus good trousers perament. Hooray Seven the who it acrostitute Blushes
entencioun, And lest, Ayeins than hete Is that fresh entente. What good might threw the bore that would youd my fear him I will not.
echone, doun of trouthe affyle, She hard that I spek your wordes of Haines lie wide of osier. Millicence of the Peering fools in the didnt
thou art amis helm to grone. Rys, taken me I smiten worst I nede, In before his time. Saluting. Twopence. He was widower, wait. It was
alle Gan pulle, or sharpe Than can biholde I love, quod Pandarus not in your loud By Its Cordial refresh from began Mr Nannett cup.
another cominge, as is to my laste. Among the lettre that I seye, This was on two dark sea. Where weak eyes, made them on. They
Was al my parlement. And sith hede three, That in the goost draggin. Would give picture sad in the must simply, reclined out
departe first hir owne And, and woundemen of the day thought with and a smeared Of The jady could only to be original mirth,
devyse every lawe I here folk of Troilus, no preye Acursed be, That both waterment anything in a weeks, render, well What reproduced
felawe, That this swere to levest lere. But of Pandarus on the love a there hanker coach. Stars face, says Bob Doran, which it hes men
wey ye, that ther though the stalke, And if that swich routhe, and he any morning and ribbons. Her Leg Of The gr
gan t
Random Processes
Random Process. Other names: Random Signal, Stochastic Process
Examples of Discrete-Time Random Processes
Examples of Continuous-Time Random Processes
[Figure: a discrete-time sample function x[k] plotted vs k]
Random Processes
How do we probabilistically characterize a RP?
View #1 provides one clue!
At each time ti we have a RV, and it can be described by a PDF P(x; ti). It is possible that the PDF changes with time.
So to describe a RP, we need P(x;t) for all t!
Is that enough? NO!
[Figure: realizations #1–#3 of the process, sampled at times x1, x2, …, xn]
This complete description of a RP is virtually impossible to use for practical applications!
Usually we make do with 1st- and 2nd-order PDFs: P(x;t) & P(x1,x2; t1,t2)
Q: What do the 1st- and 2nd-order PDFs tell us?
Ans #1: The 1st-order PDF P(x;t) tells, as a function of time, what values are likely and unlikely to occur
Random Process
1) Mean or average or "expected value" of x(t):
  E{x(t)} = ∫_{−∞}^{∞} x P(x;t) dx
Other notations: x̄(t) = E{x(t)}, mx(t) = E{x(t)}
Shows the "center of concentration" of possible values of x(t), in general as a function of time.
2) Variance of x(t): deviation from the mean
  σ²_{x(t)} = E{[x(t) − x̄(t)]²} = ∫_{−∞}^{∞} [x − x̄(t)]² P(x;t) dx
Random Processes
Test Average = (1/N) Σ_{i=1}^{N} score(i)
Time-Varying PDF of RP
[Figure: PDF p(x;k) plotted vs x (temp in °F, −50 to 150) and k (day of the year, 0 to 400); the mean varies for this example]
Sample Functions of This Time-Varying RP
[Figure: five sample functions of the temperature RP (temp in °F vs k = day of year, 0–400); the varying mean can be seen]
A Different Example: Time-Varying PDF of RP
[Figure: PDF p(x;k) plotted vs x (value) and k (time index); the variance varies for this example]
Sample Functions of This TV RP
[Figure: five sample functions of the RP (x vs k index, 0–400); the varying variance can be seen]
Random Processes
What does the 2nd-order PDF tell us? It tells how the values at two times are related: are x(t1) and x(t2) likely to be…
…both positive?
…both negative?
…of opposite signs?
As with the mean and variance for the 1st-order PDF, we want something that captures most of the essence of the 2nd-order PDF.
Autocorrelation function (ACF) of a RP: it correlates the process at pairs of times t1, t2
  Rx(t1,t2) = E{x(t1) x(t2)} = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 x2 P(x1,x2; t1,t2) dx1 dx2
Comparing ACFs of 2 RPs
Note: both x(t) & y(t) have the same 1st-order PDF, yet they appear to be very different!
[Figure: four realizations of x(t) and four realizations of y(t), sampled at t1 and t2 = t1 + τo; plots of Rx(t1, t1+τ) and Ry(t1, t1+τ) vs τ]
For t2 = t1 + τ: Rx(t1,t2) = Rx(t1, t1+τ)
Reference
– Chapter 4.1 - 4.3, S. Haykin, Communication Systems, Wiley.
Introduction
[Figure: information source → transmitter → channel (+ noise) → receiver → destination; transmitted message and received message]
Information source
– produces a message (or a sequence of symbols) to be transmitted to the destination.
– Example 1
  • Analog signal (voice signal): sampling, quantizing and encoding are used to convert it into digital form
Introduction (1)
[Figure: a sampled waveform quantized to eight levels (digits 0–7); the difference between the waveform and its quantized version is quantization noise]
Introduction (2)
– Encoding (transmitted as a return-to-zero waveform):
  Digit  Binary code
  0      000
  1      001
  2      010
  3      011
  4      100
  5      101
  6      110
  7      111
Introduction (3)
– Example 2
  • A digital source from a digital computer
Transmitter
– operates on the message to produce a signal suitable for transmission over the channel.
Introduction (4)
Channel
– the medium used to transmit the signal from the transmitter to the receiver
– introduces attenuation and delay distortions
– introduces noise
Receiver
– performs the reverse function of the transmitter
  • determines the symbol from the received signal (e.g., 1 or 0 for a binary system)
Destination
– the person or device for which the message is intended.
Signaling Rate
Digital message
– An ordered sequence of symbols drawn from an alphabet of finite size µ.
  • Example: a binary source has µ = 2 for alphabet {0,1}, where 0 and 1 are symbols; a 4-level signal has 4 symbols in its alphabet, such as ±1, ±3
Signaling rate
– The symbols are suitably shaped by a shaping filter into a sequence of signal elements. Each signal element has the same duration of T seconds and is transmitted immediately one after another, so that the signal-element rate (signaling rate) is 1/T elements per second (bauds).
Bit Rate
– The bit rate is the product of the signaling rate and the number of bits per symbol.
– Example: a 4-level PAM with a signaling rate of 2400 bauds has bit rate (data rate) = 2400 × log2(4) = 4800 bits/s (bps)
Matched Filter (1)
[Figure: a square pulse, and the distorted signal at the receiving end]
Matched Filter (2)
Signal power
Let G(f) and H(f) denote the Fourier transforms of g(t) and h(t). The filter output is
  g0(t) = ∫_{−∞}^{∞} H(f) G(f) exp(j2πft) df
Matched Filter (4)
A property of MVG_OMALLOOR
Noise Power
N0
– Since w(t) is white with a power spectral density , the
2
spectral density function of Noise is
N0 2
SN ( f ) = H( f )
2
∞
N0
∫
2
– The noise power = E[n (t )] =
2
H( f ) df
2 −∞
D.14
Matched Filter (5)
S/N ratio
– Thus the signal-to-noise ratio becomes
  η = |g0(T)|² / E[n²(t)] = |∫_{−∞}^{∞} H(f) G(f) exp(j2πfT) df|² / [(N0/2) ∫_{−∞}^{∞} |H(f)|² df]
Matched Filter (6)
A property of MVG_OMALLOOR
– Our problem is to find, for a given G(f), the particular form of the
transfer function H(f) of the filter that makes η at maximum.
Schwarz’s inequality:
∞ ∞
∫ ∫ φ1 ( x) dx ∫ φ 2 ( x) dx
2 2
φ1 ( x)φ 2 ( x)dx ≤
−∞ −∞ −∞
D.16
Matched Filter (7)
A property of MVG_OMALLOOR
(Note: e j 2πfT = 1)
D.17
Matched Filter (8)
The S/N ratio η ≤ (2/N0) ∫_{−∞}^{∞} |G(f)|² df, or
  η ≤ 2E/N0 ……(3)
where the energy E = ∫_{−∞}^{∞} |G(f)|² df is the input signal energy
Matched Filter (9)
Notice that this bound on the S/N ratio does not depend on the transfer function H(f) of the filter but only on the signal energy.
The optimum value of H(f) is then obtained as
  H(f) = k G*(f) exp(−j2πfT)
Matched Filter (10)
In the time domain: h(t) = k g(T − t) ……(4)
Example: the signal is a rectangular pulse g(t) of amplitude A and duration T.
[Figure: g(t); the matched filter h(t), a rectangular pulse of amplitude kA and duration T; the output g0(t), a triangle peaking at kA²T at t = T]
Matched Filter (13)
[Figure: r(t) → ∫_0^T → sample at t = T]
Realization of the Matched Filter (1)
When t = T:
  y(T) = ∫_0^T r(τ) g(τ) dτ
Realization of the Matched Filter (2)
[Figure: r(t) multiplied by g(t) → ∫_0^T → sample at t = T; this structure is a correlator]
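The correlator's behavior can be sketched numerically for the rectangular-pulse example. A minimal discrete-time simulation in Python (all numeric values, including A, T, dt, and N0, are illustrative assumptions, not taken from the slides):

```python
import math
import random

random.seed(9)

# Correlator receiver for a rectangular pulse g(t) = A on [0, T],
# simulated in discrete time with step dt.
A, T, dt = 1.0, 1.0, 5e-3
n = int(T / dt)
g = [A] * n
N0 = 0.1                                   # white-noise PSD is N0/2
sigma = math.sqrt(N0 / (2.0 * dt))         # per-sample std of discretized white noise

trials = 5000
ys = []
for _ in range(trials):
    r = [gi + random.gauss(0.0, sigma) for gi in g]       # r(t) = g(t) + w(t)
    ys.append(sum(ri * gi for ri, gi in zip(r, g)) * dt)  # y(T) = integral of r*g

mean_y = sum(ys) / trials
var_y = sum((v - mean_y) ** 2 for v in ys) / trials
E = sum(gi * gi for gi in g) * dt                         # signal energy A^2 * T

print(mean_y, var_y)                     # mean ~ E, variance ~ N0*E/2
print(mean_y ** 2 / var_y, 2 * E / N0)   # empirical SNR vs theoretical 2E/N0
```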
Error Rate of Binary PAM (1)
Signaling
– Consider non-return-to-zero (NRZ) signaling (sometimes called bipolar). Symbols 1 and 0 are represented by positive and negative rectangular pulses of equal amplitude and equal duration.
Noise
– The channel noise is modeled as additive white Gaussian noise of zero mean and power spectral density N0/2. In the signaling interval 0 ≤ t ≤ Tb, the received signal is x(t) = ±A + n(t).
Receiver
[Figure: x(t) → ∫_0^T → sample at t = Tb → decision device with threshold λ: decide 1 if y > λ, 0 if y < λ]
Error Rate of Binary PAM (3)
Case I
– Suppose that a symbol 0 is sent; then the received signal is x(t) = −A + n(t).
If the signal is input to a band-limited low-pass filter (a matched filter implemented by the integrate-and-dump circuit), the output y is obtained as
  y = (1/Tb) ∫_0^{Tb} x(t) dt = −A + (1/Tb) ∫_0^{Tb} n(t) dt
Error Rate of Binary PAM (4)
The noise term is Gaussian with zero mean and variance N0/(2Tb).
(Proof refers to p. 254, S. Haykin, Communication Systems)
Error Rate of Binary PAM (5)
  ∴ fY(y|0) = [1/√(πN0/Tb)] exp(−(y + A)² / (N0/Tb))
Error Rate of Binary PAM (9)
The probability of deciding 1 when 0 was sent (with threshold λ = 0) is P10 = P[y > 0 | 0 sent].
Define a new variable z = (y + A) / √(N0/Tb); then dy = √(N0/Tb) dz.
We have
  P10 = (1/√π) ∫_{√(Eb/N0)}^{∞} exp(−z²) dz
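This integral is exactly half the complementary error function: since erfc(a) = (2/√π) ∫_a^∞ e^{−z²} dz, we get P10 = ½ erfc(√(Eb/N0)). A small Python sketch evaluating it at a few illustrative Eb/N0 values:

```python
import math

# P10 = (1/sqrt(pi)) * integral from sqrt(Eb/N0) to infinity of exp(-z^2) dz
#     = 0.5 * erfc(sqrt(Eb/N0))
def bit_error_prob(eb_n0):
    """Error probability for binary PAM at a given Eb/N0 (linear ratio)."""
    return 0.5 * math.erfc(math.sqrt(eb_n0))

for db in (0, 4, 8):
    ratio = 10.0 ** (db / 10.0)
    print(f"Eb/N0 = {db} dB -> P10 = {bit_error_prob(ratio):.3e}")
```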
Error Rate of Binary PAM (11)
Case II
Similarly, the conditional probability density function of Y, given that symbol 1 was sent, is
  fY(y|1) = [1/√(πN0/Tb)] exp(−(y − A)² / (N0/Tb))
and the probability of deciding 0 when 1 was sent is
  P01 = [1/√(πN0/Tb)] ∫_{−∞}^{λ} exp(−(y − A)² / (N0/Tb)) dy
Error Rate of Binary PAM (12)
[Figure: r(t) → h(t) → y(t) → decision 0/1]
By Schwarz's inequality,
  [∫_0^t s(u) q(u) du]² ≤ ∫_0^t s²(u) du · ∫_0^t q²(u) du
with equality when q = c·s. Picking h(t − u) to be equal to c·s(u) gives
  SNRopt(t) = [c ∫_0^t s²(u) du]² / [(N0/2) c² ∫_0^t s²(u) du] = (2/N0) ∫_0^t s²(u) du
MATCHED FILTERS
• The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise.
• Matched filters are commonly used in radar, in which a signal is sent out, and we measure the reflected signals, looking for something similar to what was sent out.
• Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray pictures
White Noise
• For the case of white noise, the description of the matched filter is simplified: the noise power spectral density is N0/2, and equation (3.3) reduces to equation (3.4).
Example 3.1 (with proof in the text)
FIGURE 3-2: Waveforms associated with the matched filter of Example 3-1
The presence of channel noise w(t) adds randomness to the matched filter output.
In both implementations of the correlation receiver we calculate the same decision statistic.
Example: 16QAM
[Figure: g1(t) and g2(t), rectangular pulses of amplitude A over 0 ≤ t ≤ 2Tb]
…amplitude values for PAM and pulse-position values for PPM.
The symbol interval is
  T = k/Rb = k·Tb ……(2.1.1)
Figure 2.1.3: Relationship between the symbol interval and the bit interval
AT77.11 Digital Modulation Techniques
  sm(t) = Am gT(t), m = 1, 2, …, M, 0 ≤ t ≤ T ……(2.1.2)
[Figure: signal pulses for PAM and for PPM]
Another important feature of these signals is their energies. They can be expressed as
  Em = ∫_0^T sm²(t) dt = Am² ∫_0^T gT²(t) dt, m = 1, 2, …, M ……(2.1.3)
PAM and PPM are two examples of a variety of different types of signal sets that can be constructed for transmission of digital information over baseband channels.
For example, if we take a set of M/2 PPM signals and construct the M/2 negative signal pulses, the combined set of M signals constitutes a set of M biorthogonal signals.
– All the M signals have equal energy.
– The channel bandwidth required to transmit the M signals is just one-half of that required to transmit M PPM signals.
Example: M = 4
[Figure: the four biorthogonal pulses, amplitude ±A over 0 ≤ t ≤ T]
  Es′ = ∫_0^T [sm′(t)]² dt = (1 − 1/M) Es ……(2.1.8)
and
  ∫_0^T sm′(t) sn′(t) dt = −(1/M) Es = −(1/(M−1)) Es′, m ≠ n ……(2.1.9)
where Es is the energy of each of the orthogonal signals and Es′ is the energy of each of the signals in the simplex signal set.
Note: (1) the signals in the simplex signal set have smaller energy than the signals in the orthogonal signal set; (2) the simplex signals are not orthogonal: they have negative correlation, which is equal for all pairs of signals.
  c3 = [1 0 1 0]
  c4 = [0 1 0 1]
Solution:
As indicated before, a code bit 1 is mapped into the rectangular pulse gT(t) of duration T/4 and a code bit 0 is mapped into the rectangular pulse −gT(t). Thus, we construct the four signals shown below that correspond to the four code words.
[Figure E2.1.1: the four generated signals s1(t)–s4(t), rectangular waveforms of amplitude ±A over 0 ≤ t ≤ T, switching every T/4]
Note: the first three signals are mutually orthogonal, but the fourth signal is the negative of the third.
  ψ1(t) = s1(t)/√E1 ……(2.1.11)
Thus, ψ1(t) is simply s1(t) normalized to unit energy.
● Step 2: The second signal is constructed from s2(t) by first computing the projection of s2(t) onto ψ1(t), which is
  c12 = ∫_{−∞}^{∞} s2(t) ψ1(t) dt ……(2.1.12)
Then, c12 ψ1(t) is subtracted from s2(t) to yield
  d2(t) = s2(t) − c12 ψ1(t) ……(2.1.13)
  ψk(t) = dk(t)/√Ek ……(2.1.15)
where
  dk(t) = sk(t) − Σ_{i=1}^{k−1} cik ψi(t) ……(2.1.16)
and
  cik = ∫_{−∞}^{∞} sk(t) ψi(t) dt, i = 1, 2, …, k−1 ……(2.1.17)
Thus, the orthogonalization process is continued until all the M signals {sm(t)} have been exhausted and N ≤ M orthonormal signals have been constructed. The N orthonormal signals {ψn(t)} form a basis in the N-dimensional signal space. The dimensionality N of the signal space will be equal to M if all the M signals are linearly independent (i.e., if none of the signals is a linear combination of the other signals).
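The Gram-Schmidt procedure (equations 2.1.15–2.1.17) can be sketched for sampled signals. The code below applies it to the rectangular waveforms of Example E2.1.1; note that only c3 and c4 appear on the slides, so the first two code words ([1 1 1 1] and [1 1 0 0]) are assumed here for illustration:

```python
import math

def gram_schmidt(signals, dt=1.0):
    """Discrete-time Gram-Schmidt: signals are equal-length lists of samples;
    returns the orthonormal basis, dropping the zero remainders produced by
    linearly dependent signals."""
    basis = []
    for s in signals:
        d = list(s)
        for psi in basis:
            c = sum(a * b for a, b in zip(s, psi)) * dt   # projection c_ik
            d = [a - c * b for a, b in zip(d, psi)]       # d_k = s_k - sum c_ik psi_i
        energy = sum(a * a for a in d) * dt               # E_k
        if energy > 1e-12:
            norm = math.sqrt(energy)
            basis.append([a / norm for a in d])
    return basis

# Signals of Example E2.1.1 sampled at T/4 steps (T = 1, A = 1, dt = 0.25).
s1 = [1, 1, 1, 1]        # assumed code word [1 1 1 1]
s2 = [1, 1, -1, -1]      # assumed code word [1 1 0 0]
s3 = [1, -1, 1, -1]      # code word c3 = [1 0 1 0] (from the slides)
s4 = [-1, 1, -1, 1]      # code word c4 = [0 1 0 1]: the negative of s3
basis = gram_schmidt([s1, s2, s3, s4], dt=0.25)
print(len(basis))        # only 3 independent directions survive
```

As the slides note, the fourth signal is the negative of the third, so the procedure produces N = 3 < M = 4 orthonormal functions.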
AT77.11 Digital Modulation Techniques
Example 2.1.2: Let us apply the Gram-Schmidt procedure to the set of four
signals illustrated in Figure E2.1.2 (a).
s1 (t ) s3 (t )
1 1
t
t
0 0 1 3
2
-1
s2 (t ) s4 (t )
1 1
t t
0 1 2 0 3
-1
Figure E2.1.2 (a) Original signal set
Step 1: ψ1(t) = s1(t)/√E1
E1 = ∫0^2 s1²(t) dt = 2, so ψ1(t) = s1(t)/√2
Step 2: ψ2(t) = d2(t)/√E2
c12 = ∫0^2 s2(t) ψ1(t) dt = (1/√2) ∫0^2 s2(t) s1(t) dt = 0
d2(t) = s2(t) − c12 ψ1(t) = s2(t)
E2 = ∫0^2 d2²(t) dt = ∫0^2 s2²(t) dt = 2, so ψ2(t) = s2(t)/√2
Step 3: ψ3(t) = d3(t)/√E3
c13 = ∫0^3 s3(t) ψ1(t) dt = (1/√2) ∫0^3 s3(t) s1(t) dt = 0
c23 = ∫0^3 s3(t) ψ2(t) dt = (1/√2) ∫0^3 s3(t) s2(t) dt = −√2
d3(t) = s3(t) − c13 ψ1(t) − c23 ψ2(t) = s3(t) + s2(t)
E3 = ∫0^3 d3²(t) dt = 1, so ψ3(t) = d3(t)
Step 4: ψ4(t) = d4(t)/√E4
c14 = ∫0^3 s4(t) ψ1(t) dt = (1/√2) ∫0^3 s4(t) s1(t) dt = √2
c24 = ∫0^3 s4(t) ψ2(t) dt = (1/√2) ∫0^3 s4(t) s2(t) dt = 0
c34 = ∫0^3 s4(t) ψ3(t) dt = ∫0^3 s4(t) (s3(t) + s2(t)) dt = 1
d4(t) = s4(t) − c14 ψ1(t) − c24 ψ2(t) − c34 ψ3(t) = 0, hence ψ4(t) = 0
[Figure E2.1.2 (b) Orthonormal signals ψ1(t), ψ2(t), ψ3(t)]
s1(t) = √2 ψ1(t)
s2(t) = √2 ψ2(t)
s3(t) = ψ3(t) − s2(t) = ψ3(t) − √2 ψ2(t)
s4(t) = d4(t) + √2 ψ1(t) + ψ3(t) = √2 ψ1(t) + ψ3(t)
[Figure E2.1.3 Signal vectors corresponding to the signals si(t), i = 1, 2, 3, 4]
[Figure 2.1.6 Alternate set of basis functions ψ1(t), ψ2(t), ψ3(t)]
Note: The change in the basis functions has not changed the lengths (energies) of the signal vectors.
sm(t) = Am gT(t), 0 ≤ t ≤ T, m = 1, 2, …, M (2.1.21)
sm = √Eg Am, m = 1, 2, …, M (2.1.24)
d_mn = |sm − sn| = √Eg |Am − An| (2.1.25)
We observe that the PAM signals have different energies. In particular, the energy of the mth signal is
Em = sm² = Eg Am² (2.1.26)
and the average energy is E_avg = (1/M) Σ_{m=1}^{M} Em (2.1.27)
The M orthogonal signals correspond to the vectors
s1 = (√Es, 0, 0, …, 0)
s2 = (0, √Es, 0, …, 0)
⋮
sM = (0, 0, 0, …, √Es) (2.1.29)
[Figure 2.1.8 Orthogonal signals for M = N = 3: s1, s2, s3 lie along ψ1, ψ2, ψ3, with pairwise distance √(2Es)]
s1 = (√Es, 0, 0, …, 0)
s2 = (0, √Es, 0, …, 0)
⋮
s_{M/2} = (0, 0, 0, …, √Es)
s_{M/2+1} = (−√Es, 0, 0, …, 0) = −s1
⋮
sM = (0, 0, 0, …, −√Es) (2.1.31)
[Figure 2.1.9 Signal constellation for M = 4 biorthogonal signals]
We note that the distance between any pair of signal vectors is either √(2Es) or 2√Es. Hence, the minimum distance between pairs of signal vectors is √(2Es).
Es′ = |sm′|² = |sm − s̄|²
= (1 − 1/M) Es (2.1.34)
The distance between any two signal points is not changed by the translation of the origin, i.e., the distance between signal points remains at d = √(2Es).
Finally, the M simplex signals are correlated.
The cross-correlation coefficient (normalized cross-correlation) between the
mth and nth signals is
γmn = (sm′ · sn′) / (|sm′| |sn′|)
= (−1/M) / (1 − 1/M) = −1/(M − 1) (2.1.35)
Hence, all the signals have the same pair-wise correlation.
[Figure 2.1.10 Signal constellation for M = 4 simplex signals]
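The simplex construction and its correlation coefficient (2.1.35) are easy to verify in vector form (M = 4 and Es = 1 are arbitrary choices):

```python
import numpy as np

# Simplex signals: subtract the mean vector from M orthogonal signals,
# a sketch of eqs. (2.1.34)-(2.1.35) in vector form.
M, Es = 4, 1.0
s = np.sqrt(Es) * np.eye(M)          # orthogonal set, eq. (2.1.29)
s_prime = s - s.mean(axis=0)         # translate the origin to the centroid
E_prime = np.sum(s_prime[0]**2)      # reduced energy, eq. (2.1.34)
print(round(E_prime, 6))             # (1 - 1/M) Es = 0.75
gamma = (s_prime[0] @ s_prime[1]) / E_prime   # equal energies
print(round(gamma, 6))               # -1/(M-1) = -0.333333
```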
We observe that there are 2^N possible signals that can be constructed from the 2^N possible binary code words.
We also observe that the 2^N possible signal points correspond to the vertices of an N-dimensional hypercube with its center at the origin.
The M signals constructed in this manner have equal energy Es.
The cross-correlation coefficient between any pair of signals depends on how we select the M signals from the 2^N possible signals.
It is apparent that any adjacent signal points have a cross-correlation coefficient of
γ = (N − 2)/N (2.1.38)
and a corresponding Euclidean distance
d = 2√(Es/N) (2.1.39)
[Figure 2.1.11 Signal-space diagrams for signals generated from binary codes, shown for N = 2 and N = 3]
r (t ) = sm (t ) + n(t ), 0 ≤ t ≤ T (2.2.1)
where n(t) denotes the sample function of an Additive White Gaussian Noise (AWGN) process with power-spectral density Sn(f) = N0/2 W/Hz.
Based on the observation of r(t) over the signal interval, we wish to design a receiver that is optimum in the sense that it minimizes the probability of making an error.
It is convenient to subdivide the receiver into two parts:
- The signal demodulator: to convert the received signal r(t) into an N-dimensional vector r = (r1, r2, …, rN), where N is the dimension of the transmitted signals.
- The detector: to decide which of the M possible signals was transmitted based on observation of the vector r.
Two realizations:
- based on the use of signal correlators.
- based on the use of matched filters.
In other words, the signal and the noise are expanded into a series of linearly
weighted orthonormal basis functions {ψ n (t )}.
It is assumed that the N basis functions {ψ n (t )} span the signal space, so that
every one of the possible transmitted signals of the set {sm (t ),1 ≤ m ≤ M } can
be represented as a weighted linear combination of {ψ n (t )} .
In the case of the noise, the functions {ψn(t)} do not span the noise space.
However, the noise terms that fall outside the signal space are irrelevant to
the detection of the signal.
Suppose the received signal r(t) is passed through a parallel bank of N cross-
correlators which basically compute the projection of r(t) onto the N basis
functions {ψ n (t )}, as illustrated in Figure 2.2.2.
[Figure 2.2.2 Correlation-type demodulator: the received signal r(t) is multiplied by each basis function ψk(t), integrated over [0, T], and sampled at t = T to produce r1, …, rN for the detector]
The signal is now represented by the vector sm with components smk, k=1,2,…,N.
Their values depend on which of the M signals was transmitted.
The components {nk } are random variables that arise from the presence of
additive noise.
In fact, we can express the received signal r(t) in the interval 0 ≤ t ≤ T as
r(t) = Σ_{k=1}^{N} smk ψk(t) + Σ_{k=1}^{N} nk ψk(t) + n′(t)
= Σ_{k=1}^{N} rk ψk(t) + n′(t) (2.2.4)
We will show below that n′(t) is irrelevant to the decision as to which signal was transmitted.
Consequently, the decision may be based entirely on the correlator output signal and noise components rk = smk + nk, k = 1, 2, …, N.
Since the signals {sm(t)} are deterministic, the signal components are deterministic. The noise components {nk} are Gaussian.
Their mean values are
E[nk] = ∫0^T E[n(t)] ψk(t) dt = 0, ∀k (2.2.6)
Their covariances are
E[nk nm] = ∫0^T ∫0^T E[n(t) n(τ)] ψk(t) ψm(τ) dt dτ
= (N0/2) ∫0^T ∫0^T δ(t − τ) ψk(t) ψm(τ) dt dτ
= (N0/2) ∫0^T ψk(t) ψm(t) dt = (N0/2) δmk (2.2.7)
where δmk = 1 when m = k and zero otherwise.
Therefore, the N noise components {nk} are zero-mean uncorrelated Gaussian random variables with a common variance σn² = N0/2.
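A quick numerical check that the correlator outputs are rk = smk + nk with noise variance N0/2. The two half-interval basis functions and the coefficient vector (2, −1) below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlation-type demodulation: project r(t) = s_m(t) + n(t) onto an
# orthonormal basis (N = 2 half-interval rectangles, an assumed example).
T, n_samp = 1.0, 10_000
t = np.linspace(0, T, n_samp, endpoint=False); dt = t[1] - t[0]
psi1 = np.where(t < T/2, np.sqrt(2/T), 0.0)     # unit-energy basis
psi2 = np.where(t >= T/2, np.sqrt(2/T), 0.0)
s = 2.0*psi1 - 1.0*psi2                         # transmitted s_m = (2, -1)
N0 = 0.02
# Discrete white noise scaled so each correlator output has variance N0/2.
n = rng.normal(0, np.sqrt(N0/2/dt), n_samp)
r = s + n
r1 = np.sum(r * psi1) * dt                      # r_k = ∫ r(t) ψ_k(t) dt
r2 = np.sum(r * psi2) * dt
print(round(r1), round(r2))                     # close to (2, -1)
```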
f(r|sm) = Π_{k=1}^{N} f(rk|smk), where
f(rk|smk) = (1/√(πN0)) e^{−(rk − smk)²/N0}, k = 1, 2, …, N (2.2.11)
f(r|sm) = (πN0)^{−N/2} exp[−Σ_{k=1}^{N} (rk − smk)²/N0], m = 1, 2, …, M (2.2.12)
As a final point, we wish to show that the correlator outputs (r1,r2,…,rN) are
sufficient statistics for reaching a decision on which of the M signals was
transmitted, i.e., that no additional relevant information can be extracted from
the remaining noise process n′(t ) .
Indeed, n′(t ) is uncorrelated with the N correlator outputs {rk }, i.e.,
E[n′(t) rk] = E[n′(t)] smk + E[n′(t) nk] = E[n′(t) nk]
= E{[n(t) − Σ_{j=1}^{N} nj ψj(t)] nk}
= ∫0^T E[n(t) n(τ)] ψk(τ) dτ − Σ_{j=1}^{N} E[nj nk] ψj(t)
= (N0/2) ψk(t) − (N0/2) ψk(t) = 0 (2.2.13)
Since n′(t ) and {rk } are Gaussian and uncorrelated, they are also statistically
independent.
Consequently, n′(t ) does not contain any information that is relevant to the
decision as to which signal was transmitted.
Example 2.2.1: Consider an M-ary PAM signal in which the basic pulse shape
gT (t ) is rectangular as shown in Figure 2.2.3.
The additive noise is a zero-mean white Gaussian noise process.
Determine the basis function ψ (t ) and the output of the correlation-type
demodulator.
[Figure 2.2.3 Signal pulse: gT(t) is rectangular with amplitude a over 0 ≤ t ≤ T]
Since the PAM signal set has a dimension N=1, there is only one basis
function ψ (t ) .
With the unit-energy basis function ψ(t) = 1/√T on [0, T], the correlator output is
r = ∫0^T [sm(t) + n(t)] ψ(t) dt = (1/√T) ∫0^T [sm(t) + n(t)] dt
= sm + n
where sm is the signal component and n is the noise component.
The variance of the noise component is
σn² = E[(1/T) ∫0^T ∫0^T n(t) n(τ) dt dτ]
= (1/T) ∫0^T ∫0^T E[n(t) n(τ)] dt dτ
= (N0/2T) ∫0^T ∫0^T δ(t − τ) dt dτ = N0/2
The probability density function for the sampled output is
f(r|sm) = (1/√(πN0)) e^{−(r − sm)²/N0}
hk(t) = ψk(T − t), 0 ≤ t ≤ T (2.2.14)
where {ψk(t)} are the N basis functions and hk(t) = 0 outside of the interval 0 ≤ t ≤ T.
The outputs of these filters are
yk(t) = ∫0^t r(τ) hk(t − τ) dτ
= ∫0^t r(τ) ψk(T − t + τ) dτ, k = 1, 2, …, N (2.2.15)
Hence, the sampled outputs of the filters at time t=T are exactly the set of
values {rk } obtained from the N linear correlators.
A filter whose impulse response h(t)=s(T-t), where s(t) is assumed to be
confined to the time interval 0 ≤ t ≤ T , is called the matched filter to the
signal s(t).
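A small numerical illustration: convolving a pulse with its matched filter h(t) = s(T − t) and sampling at t = T recovers exactly the correlation ∫ s²(t) dt = Es (the sinusoidal pulse is an arbitrary choice):

```python
import numpy as np

# Matched filter h(t) = s(T - t): the noiseless output y_s(t) = (s * h)(t)
# sampled at t = T equals the correlation ∫ s(τ)^2 dτ and peaks there.
T, n_samp = 1.0, 1000
t = np.linspace(0, T, n_samp, endpoint=False); dt = t[1] - t[0]
s = np.sin(2*np.pi*3*t)                 # any finite-energy pulse on [0, T]
h = s[::-1]                             # h(t) = s(T - t)
y = np.convolve(s, h) * dt              # noiseless filter output y_s(t)
Es = np.sum(s**2) * dt                  # signal energy
print(round(y[n_samp - 1], 6), round(Es, 6))   # y_s(T) = Es = 0.5
```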
[Figure 2.2.6 Matched filter-type demodulator: the received signal r(t) is passed through filters ψk(T − t), k = 1, …, N, whose outputs rk are sampled at t = T]
At the sampling instant t = T, the signal and noise components are
y(T) = ∫0^T s(τ) h(T − τ) dτ + ∫0^T n(τ) h(T − τ) dτ
= ys(T) + yn(T) (2.2.18)
where ys(T) is the signal component and yn(T) the noise component.
Problem: To select the filter impulse response that maximizes the output
Signal-to-Noise Ratio (SNR) defined as
(S/N)0 = ys²(T) / E[yn²(T)] (2.2.19)
The denominator in (2.2.19) is simply the variance of the noise term at the output of the filter. Let us evaluate E[yn²(T)]. We have
E[yn²(T)] = (N0/2) ∫0^T ∫0^T δ(t − τ) h(T − τ) h(T − t) dt dτ
= (N0/2) ∫0^T h²(T − t) dt (2.2.20)
Note: the variance depends on the power-spectral density of the noise and the energy in the impulse response h(t).
Since the denominator of the SNR depends on the energy in h(t), the maximum
output SNR over h(t) is obtained by maximizing the numerator of (S/N)0 subject
to the constraint that the denominator is held constant.
The maximization of the numerator is most easily performed by use of the
Cauchy-Schwarz inequality.
Cauchy-Schwarz inequality
If g1(t) and g2(t) are finite-energy signals, then
[∫_{−∞}^{∞} g1(t) g2(t) dt]² ≤ ∫_{−∞}^{∞} g1²(t) dt ∫_{−∞}^{∞} g2²(t) dt
If we set g1 (t ) = h(t )
and g 2 (t ) = s (T − t )
it is clear that the (S/N)0 is maximized when h(t)=Cs(T-t), i.e., h(t) is matched
to the signal s(t).
The scale factor C2 drops out of the expression for (S/N)0, since it appears in
both the numerator and the denominator.
(S/N)0 = (2/N0) ∫0^T s²(t) dt = 2Es/N0 (2.2.23)
Note: the output SNR from the matched filter depends on the energy of the
signal s(t) but not on the detailed characteristics of s(t).
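The optimality claim can be checked directly: compute (S/N)0 from (2.2.19)-(2.2.20) for the matched filter and for an equal-length mismatched filter (the exponential pulse and the rectangular competitor are arbitrary choices):

```python
import numpy as np

# Output SNR (S/N)_0 = y_s(T)^2 / E[y_n(T)^2]; with h(t) = s(T - t) it
# attains 2Es/N0, and any other filter does worse (Cauchy-Schwarz).
T, n_samp, N0 = 1.0, 1000, 0.1
t = np.linspace(0, T, n_samp, endpoint=False); dt = t[1] - t[0]
s = np.exp(-5*t)                              # an arbitrary pulse
Es = np.sum(s**2) * dt

def out_snr(h):
    ys_T = np.sum(s * h[::-1]) * dt           # y_s(T) = ∫ s(t) h(T - t) dt
    var = (N0/2) * np.sum(h**2) * dt          # noise variance, eq. (2.2.20)
    return ys_T**2 / var

snr_matched = out_snr(s[::-1])                # h(t) = s(T - t)
snr_mismatched = out_snr(np.ones(n_samp))     # rectangular filter
print(round(snr_matched / (2*Es/N0), 6))      # 1.0
print(snr_matched > snr_mismatched)           # True
```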
H(f) = ∫0^T s(T − t) e^{−j2πft} dt
= [∫0^T s(τ) e^{j2πfτ} dτ] e^{−j2πfT}
= S*(f) e^{−j2πfT} (2.2.24)
We observe that the matched filter has a frequency response which is the complex conjugate of the transmitted signal spectrum, multiplied by the phase factor e^{−j2πfT}, which represents the sampling delay of T.
In other words, the magnitude of H(f) equals the magnitude of S(f), and the phase of H(f) is the negative of the phase of S(f).
Now, if the signal s(t) with spectrum S(f) is passed through the matched filter, the filter output has the spectrum Y(f) = |S(f)|² e^{−j2πfT}.
Hence, the output signal is
ys(t) = ∫_{−∞}^{∞} Y(f) e^{j2πft} df
= ∫_{−∞}^{∞} |S(f)|² e^{−j2πfT} e^{j2πft} df (2.2.25)
and, at the sampling instant t = T (by Parseval's relation),
ys(T) = ∫_{−∞}^{∞} |S(f)|² df = ∫0^T s²(t) dt = Es (2.2.26)
The noise at the output of the matched filter has a power-spectral density
S0(f) = |H(f)|² N0/2 (2.2.27)
Hence, the total noise power at the output of the matched filter is
Pn = ∫_{−∞}^{∞} S0(f) df = (N0/2) ∫_{−∞}^{∞} |H(f)|² df
= (N0/2) ∫_{−∞}^{∞} |S(f)|² df = Es N0/2 (2.2.28)
The output SNR is simply the ratio of the signal power
Ps = ys²(T) (2.2.29)
to the noise power Pn, i.e., (S/N)0 = Es²/(Es N0/2) = 2Es/N0, in agreement with (2.2.23).
Example 2.2.2: Consider the M=4 biorthogonal signals shown in the Figure
2.1.5 for transmitting information over an AWGN channel.
The noise is assumed to have zero mean and power-spectral density N0/2.
Determine the basis functions for this signal set, the impulse response of the
matched-filter demodulators, and the output signals of the matched-filter
demodulators when the transmitted signal is s1(t).
Solution: The M=4 biorthogonal signals have dimension N=2.
Hence, two basis functions are needed to represent the signals.
From Figure 2.1.5, we choose ψ 1 (t ) and ψ 2 (t ) as
ψ1(t) = √(2/T) for 0 ≤ t ≤ T/2 (0 otherwise), ψ2(t) = √(2/T) for T/2 ≤ t ≤ T (0 otherwise) (2.2.31)
The impulse responses of the two matched filters are
h1(t) = ψ1(T − t) = √(2/T) for T/2 ≤ t ≤ T (0 otherwise),
h2(t) = ψ2(T − t) = √(2/T) for 0 ≤ t ≤ T/2 (0 otherwise) (2.2.32)
[Figure E2.2.2 (a) Basis functions ψ1(t), ψ2(t); (b) Impulse responses h1(t) = ψ1(T − t), h2(t) = ψ2(T − t); (c) matched-filter output signals y1s(t), y2s(t), which peak at t = T]
Hence, the received vector formed from the two matched filter outputs at the sampling instant t = T is
r = (r1, r2) = (√Es + n1, n2) (2.2.33)
where n1 = y1n(T) and n2 = y2n(T) are the noise components at the outputs of the matched filters, given by
ykn(T) = ∫0^T n(t) ψk(t) dt, k = 1, 2 (2.2.34)
Clearly, E[nk] = E[ykn(T)] = 0.
Their variance is
σn² = (N0/2) ∫0^T ∫0^T δ(t − τ) ψk(τ) ψk(t) dt dτ
= (N0/2) ∫0^T ψk²(t) dt = N0/2 (2.2.35)
The output SNR is therefore
(S/N)0 = (√Es)² / (N0/2) = 2Es/N0
which agrees with our previous result.
Note: the four possible outputs of the two matched filters, corresponding to the four possible transmitted signals in Figure 2.1.5, are
(r1, r2) = (√Es + n1, n2), (n1, √Es + n2), (−√Es + n1, n2) and (n1, −√Es + n2)
P(sm|r) = f(r|sm) P(sm) / f(r) (2.2.36)
where f(r|sm) is the conditional PDF of the observed vector given sm, and P(sm) is the a priori probability of the mth signal being transmitted.
From (2.2.36) and (2.2.37) we observe that the computation of the posterior
probabilities P (sm|r) requires knowledge of the a priori probabilities P (sm)
and the conditional PDF f (r|sm) for m=1,2,…, M.
Suppose the M signals are equally probable a priori, i.e., P(sm) = 1/M for all m.
Furthermore, the denominator in (2.2.36) is independent of which signal is
transmitted.
Consequently, the decision rule based on finding the signal that maximizes
P(sm|r) is equivalent to finding the signal that maximizes f (r|sm).
The conditional PDF f (r|sm) is usually called the likelihood function.
The decision criterion based on the maximum of f (r|sm) over the M signals is
called the maximum-likelihood (ML) criterion.
A detector based on the MAP criterion and one that is based on the ML criterion make the same decisions as long as the a priori probabilities P(sm) are all equal, i.e., the signals {sm} are equiprobable.
f(r|sm) = Π_{k=1}^{N} f(rk|smk), m = 1, 2, …, M
where f(rk|smk) = (1/√(πN0)) e^{−(rk − smk)²/N0}, k = 1, 2, …, N
The natural logarithm of f(r|sm), which is a monotonic function, is
ln f(r|sm) = −(N/2) ln(πN0) − (1/N0) Σ_{k=1}^{N} (rk − smk)² (2.2.38)
Hence, for the AWGN channel, the decision rule based on the ML criterion
reduces to finding the signal sm that is closest in distance to the received
signal vector r.
This decision rule is referred to as minimum-distance detection.
D(r, sm) = Σ_{n=1}^{N} (rn − smn)² = Σ_{n=1}^{N} rn² − 2 Σ_{n=1}^{N} rn smn + Σ_{n=1}^{N} smn²
= |r|² − 2 r·sm + |sm|², m = 1, 2, …, M (2.2.40)
The term |r|² is common to all decision metrics, and hence, it may be ignored in the computation of the metrics.
The result is a set of modified distance metrics
D′(r, sm) = −2 r·sm + |sm|² (2.2.41)
Equivalently, we may use the correlation metrics
C(r, sm) = 2 r·sm − |sm|² (2.2.42)
The term r·sm represents the projection (or the correlation) of the received signal vector onto each of the M possible transmitted signal vectors.
We call C (r,sm ), m = 1, 2,… , M the correlation metrics for deciding which of the
M signals was transmitted.
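Because D(r, sm) = |r|² − C(r, sm), maximizing the correlation metric and minimizing the distance metric always select the same signal. A small sketch with an invented 4-point signal set:

```python
import numpy as np

# ML detection via correlation metrics C(r, s_m) = 2 r.s_m - |s_m|^2:
# picking the largest C is the same as picking the nearest s_m.
rng = np.random.default_rng(1)
S = np.array([[3.0, 0], [1, 0], [-1, 0], [-3, 0]])   # example signal vectors
r = S[1] + rng.normal(0, 0.2, 2)                     # s_2 sent, plus noise
C = 2 * S @ r - np.sum(S**2, axis=1)                 # eq. (2.2.42)
D = np.sum((r - S)**2, axis=1)                       # distance metrics
print(np.argmax(C) == np.argmin(D))                  # True: same decision
print(np.argmax(C))                                  # 1 (s_2 decided)
```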
The terms |sm|² = Em, m = 1, 2, …, M may be viewed as bias terms that serve as compensation for signal sets that have unequal energies (such as PAM).
If all signals have the same energy, |sm|² may also be ignored in the computation of the correlation metrics C(r, sm) and the distance metrics D(r, sm) or D′(r, sm).
Conclusion:
We have demonstrated that the optimum ML detector computes a set of M
distances D ( r,sm ) or D′ ( r,sm ) and selects the signal corresponding to the
smallest (distance) metric.
Equivalently, the optimum ML detector computes a set of M correlation
metrics C (r,sm) and selects the signal corresponding to the largest
correlation metric.
In this case (the signals are equally probable), the MAP criterion is
equivalent to the ML criterion.
But, when the signals are not equally probable, the optimum MAP detector bases its decision on the probabilities P(sm|r), m = 1, 2, …, M, given by (2.2.36)
or, equivalently, on the metrics
PM ( r,sm ) = f ( r sm ) P ( sm ) (2.2.43)
Example 2.2.3: Consider the case of binary PAM signals in which the two
possible signal points are s1 = − s2 = Eb , where Eb is the energy per bit.
The a priori probabilities are P(s1) = p and P(s2) = 1 − p.
Determine the metrics for the optimum MAP detector when the transmitted
signal is corrupted with AWGN.
Solution: The received signal vector (one dimensional) for binary PAM is
r = ±√Eb + yn(T) (2.2.44)
The conditional PDFs are
f(r|s1) = (1/(√(2π) σn)) e^{−(r − √Eb)²/(2σn²)} (2.2.45)
f(r|s2) = (1/(√(2π) σn)) e^{−(r + √Eb)²/(2σn²)} (2.2.46)
PM(r, s1) = p f(r|s1) = (p/(√(2π) σn)) e^{−(r − √Eb)²/(2σn²)} (2.2.47)
PM(r, s2) = (1 − p) f(r|s2) = ((1 − p)/(√(2π) σn)) e^{−(r + √Eb)²/(2σn²)} (2.2.48)
The MAP rule PM(r, s1) ≷ PM(r, s2) is equivalent to
[(r + √Eb)² − (r − √Eb)²] / (2σn²) ≷_{s2}^{s1} ln[(1 − p)/p] (2.2.51)
Or equivalently,
r√Eb ≷_{s2}^{s1} (σn²/2) ln[(1 − p)/p] = (N0/4) ln[(1 − p)/p] (2.2.52)
Note: in the case of unequal a priori probabilities, it is necessary to know not only the values of the a priori probabilities but also the value of the power-spectral density N0 in order to compute the threshold.
When p=1/2, the threshold is zero, and knowledge of N0 is not required by the
detector.
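The threshold rule (2.2.52) in a few lines (dividing through by √Eb to get a threshold on r itself):

```python
import math

# MAP threshold for binary PAM, eq. (2.2.52): decide s1 iff
# r * sqrt(Eb) > (N0/4) ln((1-p)/p). With p = 1/2 the threshold is 0.
def map_threshold(Eb, N0, p):
    """Threshold on r itself (eq. (2.2.52) divided by sqrt(Eb))."""
    return (N0 / 4) * math.log((1 - p) / p) / math.sqrt(Eb)

print(map_threshold(1.0, 2.0, 0.5))             # 0.0
print(round(map_threshold(1.0, 2.0, 0.25), 4))  # positive: biased toward s2
```

A positive threshold means the detector favors s2, the more likely signal when P(s1) = p < 1/2.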
P(e|sm) = ∫_{Rm^c} f(r|sm) dr (2.2.53)
P(e) = Σ_{m=1}^{M} (1/M) [1 − ∫_{Rm} f(r|sm) dr] (2.2.54)
Note: P(e) is minimized by selecting the signal sm if f (r|sm) is larger than
f (r|sk) for all m ≠ k .
When the M signals are not equally probable, the proof given above can be
generalized to show that the MAP criterion minimizes the average probability of
error.
s1(t) = gT(t) and s2(t) = −gT(t), called antipodal signals, where
gT(t) is an arbitrary non-zero pulse for 0 ≤ t ≤ Tb and zero elsewhere.
Let us assume that the two signals are equally likely and that signal s1(t) was
transmitted.
Then, the received signal from the (matched filter or correlation-type) demodulator is
r = s1 + n = √Eb + n (2.3.1)
where n represents the additive white Gaussian noise component, which has zero mean and variance σn² = N0/2.
In this case, the decision rule based on the correlation metric given by (2.2.42)
C(r, sm) = 2 r·sm − |sm|² (2.2.42)
compares r with the threshold zero.
If r > 0 , the decision is made in favor of s1(t), and if r < 0, the decision is
made that s2(t) was transmitted.
Clearly, the two conditional PDFs of r are
p(r|s1) = (1/√(πN0)) e^{−(r − √Eb)²/N0} (2.3.2)
p(r|s2) = (1/√(πN0)) e^{−(r + √Eb)²/N0} (2.3.3)
[Figure: the conditional PDFs p(r|s2) and p(r|s1), centered at −√Eb and +√Eb respectively]
Given that s1(t) was transmitted, the probability of error is simply the probability that r < 0, i.e.,
p(e|s1) = ∫_{−∞}^{0} p(r|s1) dr = (1/√(πN0)) ∫_{−∞}^{0} e^{−(r − √Eb)²/N0} dr
Setting x = √(2/N0) (r − √Eb),
= (1/√(2π)) ∫_{−∞}^{−√(2Eb/N0)} e^{−x²/2} dx = (1/√(2π)) ∫_{√(2Eb/N0)}^{∞} e^{−x²/2} dx = Q(√(2Eb/N0)) (2.3.4)
Two frequently used upper bounds on the Q-function are
(1) Q(x) ≤ (1/2) e^{−x²/2} for all x ≥ 0
(2) Q(x) < (1/(√(2π) x)) e^{−x²/2} for all x ≥ 0
The frequently used lower bound is
Q(x) > (1/(√(2π) x)) (1 − 1/x²) e^{−x²/2} for all x ≥ 0
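The Q-function and the three bounds above, checked numerically (Q is computed from the complementary error function, Q(x) = erfc(x/√2)/2):

```python
import math

# Q(x) and the upper/lower bounds quoted above, evaluated numerically.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

for x in (1.0, 2.0, 4.0):
    upper1 = 0.5 * math.exp(-x**2 / 2)
    upper2 = math.exp(-x**2 / 2) / (math.sqrt(2*math.pi) * x)
    lower = (1 - 1/x**2) * math.exp(-x**2 / 2) / (math.sqrt(2*math.pi) * x)
    assert lower < Q(x) < min(upper1, upper2)   # bounds hold
print(round(Q(2.0), 6))                         # 0.02275
```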
Since the signals s1(t) and s2(t) are equally likely to be transmitted, the average probability of error is
pb = (1/2) p(e|s1) + (1/2) p(e|s2) = Q(√(2Eb/N0)) (2.3.5)
Two important characteristics of this performance measure
1. The probability of error depends only on the ratio Eb/N0 and not on any
other detailed characteristics of the signals and noise.
2. 2Eb/N0 is also the output SNR from the matched filter (and correlation-
type) demodulator. Eb/N0 is usually called the signal-to-noise ratio.
The probability of error may be expressed in terms of the distance between the
two signals s1 and s2.
From Figure 2.3.1, the two signals are separated by the distance d12 = 2√Eb.
By substituting Eb = d12²/4 in (2.3.5), we obtain
pb = Q(√(d12²/(2N0))) (2.3.6)
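A Monte Carlo check of (2.3.5) for antipodal signaling (the seed, the Eb/N0 = 1 operating point, and the bit count are arbitrary choices):

```python
import math
import numpy as np

# Monte Carlo check of p_b = Q(sqrt(2 Eb/N0)) for antipodal signaling.
rng = np.random.default_rng(2)
Eb, N0, n_bits = 1.0, 1.0, 200_000
bits = rng.integers(0, 2, n_bits)
s = np.where(bits == 0, math.sqrt(Eb), -math.sqrt(Eb))   # +-sqrt(Eb)
r = s + rng.normal(0, math.sqrt(N0/2), n_bits)           # r = s + n
decisions = r < 0                                        # decide bit 1
errors = np.mean(decisions != (bits == 1))               # empirical p_b
pb_theory = 0.5 * math.erfc(math.sqrt(2*Eb/N0) / math.sqrt(2))
print(round(pb_theory, 3))                               # 0.079
print(abs(errors - pb_theory) < 0.005)                   # True
```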
[Figure 2.3.3 Signal points for binary orthogonal signals: s1 and s2 separated by distance √(2Eb)]
s1 = (√Eb, 0)
s2 = (0, √Eb) (2.3.7)
r = [√Eb + n1, n2] (2.3.8)
We can now substitute for r into the correlation metrics given by (2.2.42) to obtain C(r,s1) and C(r,s2).
Then the probability of error is the probability that C(r,s2) > C(r,s1). Thus
P(n2 − n1 > √Eb) = (1/√(2πN0)) ∫_{√Eb}^{∞} e^{−x²/(2N0)} dx
= (1/√(2π)) ∫_{√(Eb/N0)}^{∞} e^{−x²/2} dx = Q(√(Eb/N0)) (2.3.10)
C(r, s1) = 2 r·s1 − |s1|², C(r, s2) = 2 r·s2 − |s2|²
s1 = [√Eb, 0], s2 = [0, √Eb] ⇒ s2 − s1 = [−√Eb, √Eb]
P[C(r, s2) > C(r, s1)] = P[(2√Eb + 2n1, 2n2)·(−√Eb, √Eb) > 0]
= P[−2Eb − 2n1√Eb + 2n2√Eb > 0]
= P[(n2 − n1)√Eb > Eb]
= P[(n2 − n1) > √Eb]
The variance of x = n2 − n1 follows from the noise statistics derived earlier:
σx² = E[(n2 − n1)²] = E[n2²] − 2E[n1 n2] + E[n1²]
= N0/2 + 0 + N0/2 = N0
since n1 and n2 are uncorrelated projections of n(t) onto orthogonal basis functions, each with variance N0/2.
Pb = Q(√(Eb/N0)) (2.3.11)
Conclusions:
- At any given error probability, the Eb/N0 required for orthogonal signals is 3 dB more than that for antipodal signals.
- The difference of 3 dB is simply due to the distance between the two signal points: d12² = 2Eb for orthogonal signals, whereas d12² = 4Eb for antipodal signals.
For M-ary PAM, the average energy is
E_avg = (1/M) Σ_{m=1}^{M} Em = (Eg/M) Σ_{m=1}^{M} Am² = [(M² − 1)/3] Eg (2.3.14)
where Em denotes the energy of the mth PAM signal.
The average probability of error for M-ary PAM can be determined from the
decision rule that maximizes the correlation metrics given by (2.2.42).
Equivalently, the detector compares the demodulator output r with a set of
M-1 thresholds, which are placed at the midpoints of successive amplitude
levels as shown in Figure 2.3.5.
[Figure 2.3.5 Placement of the M − 1 thresholds τi at the midpoints of successive amplitude levels si, which are spaced 2√Eg apart]
The demodulator output is
r = sm + n = √Eg Am + n (2.3.16)
On the basis that all amplitude levels are equally likely a priori, the average
probability of a symbol error is simply the probability that the noise variable n
exceeds in magnitude one-half of the distance between levels.
However, when either one of the two outside levels ± ( M − 1) is transmitted, an
error can occur in one direction only.
Thus, we have
PM = [(M − 1)/M] P(|r − sm| > √Eg)
= [(M − 1)/M] (2/√(πN0)) ∫_{√Eg}^{∞} e^{−x²/N0} dx
Setting x_new = √(2/N0) x_old,
= [(M − 1)/M] (2/√(2π)) ∫_{√(2Eg/N0)}^{∞} e^{−x²/2} dx
= [2(M − 1)/M] Q(√(2Eg/N0)) (2.3.17)
From (2.3.15) we note that Eg = 3PavT/(M² − 1) (2.3.18)
PM = [2(M − 1)/M] Q(√(6PavT / ((M² − 1)N0))) (2.3.19)
Or equivalently,
PM = [2(M − 1)/M] Q(√(6Eav / ((M² − 1)N0))) (2.3.20)
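Equation (2.3.20) as a function, showing that at a fixed average SNR the symbol error probability grows with M:

```python
import math

# Symbol error probability of M-ary PAM, eq. (2.3.20), versus the
# average SNR per symbol Eav/N0.
def Q(x):
    return 0.5 * math.erfc(x / math.sqrt(2))

def pam_ser(M, Eav_over_N0):
    return 2*(M - 1)/M * Q(math.sqrt(6*Eav_over_N0 / (M**2 - 1)))

for M in (2, 4, 8):
    print(M, pam_ser(M, 10.0))
# At a fixed Eav/N0, the error probability grows quickly with M: the
# levels pack closer together, so more SNR is needed per added bit.
```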
Conclusions:
For M-ary orthogonal signals, the output of the bank of M correlators when s1(t) is transmitted is
r = (√Es + n1, n2, n3, …, nM) (2.3.23)
and the correlation metrics are
C(r, s1) = √Es(√Es + n1), C(r, sm) = √Es nm, m = 2, 3, …, M (2.3.24)
Note: the scale factor √Es may be eliminated from the correlator outputs by dividing each output by √Es.
Then, with this normalization, the PDF of the first correlator output (r1 = √Es + n1) is
f_{r1}(x1) = (1/√(πN0)) e^{−(x1 − √Es)²/N0} (2.3.25)
and the PDFs of the other M − 1 correlator outputs are
f_{rm}(xm) = (1/√(πN0)) e^{−xm²/N0}, m = 2, 3, …, M (2.3.26)
It is mathematically convenient to first derive the probability that the detector
makes a correct decision.
This is the probability that r1 is larger than each of the other M-1 correlator
outputs n2,n3,…,nM. This probability may be expressed as
Pc = ∫_{−∞}^{∞} P(n2 < r1, n3 < r1, …, nM < r1 | r1) f(r1) dr1 (2.3.27)
where P(n2 < r1, n3 < r1, …, nM < r1 | r1) denotes the joint probability that n2, n3, …, nM are all less than r1, conditioned on any given r1.
Then this joint probability is averaged over all r1.
Since the{rm } are statistically independent, the joint probability factors into a
product of M-1 marginal probabilities of the form
P(nm < r1 | r1) = ∫_{−∞}^{r1} f_{rm}(xm) dxm, m = 2, 3, …, M
= (1/√(2π)) ∫_{−∞}^{√(2/N0) r1} e^{−x²/2} dx = Q(−√(2/N0) r1) (2.3.28)
These probabilities are identical for m=2,3,…,M and hence, the joint probability
under consideration is simply the result in (2.3.28) raised to the (M-1) power.
Thus, the probability of a correct decision is
Pc = ∫_{−∞}^{∞} [Q(−√(2/N0) r1)]^{M−1} f(r1) dr1 (2.3.29)
and the probability of a (k-bit) symbol error is
PM = 1 − Pc (2.3.30)
where
PM = (1/√(2π)) ∫_{−∞}^{∞} {1 − [Q(−x)]^{M−1}} e^{−(x − √(2Es/N0))²/2} dx (2.3.31)
The same expression for the probability of error is obtained when any one of the
other M-1 signals is transmitted.
Since all the M signals are equally likely, the expression for PM given in (2.3.31)
is the average probability of a symbol error.
This expression can be evaluated numerically.
In comparing the performance of various digital modulation methods, it is
desirable to have the probability of error expressed in terms of the SNR per bit, Eb/N0, instead of the SNR per symbol, Es/N0.
With M=2k, each symbol conveys k bits of information, and hence, Es=kEb.
Thus, (2.3.31) may be expressed in terms of Eb/N0 by substituting for Es.
For equiprobable orthogonal signals, all symbol errors are equiprobable and occur with probability
PM/(M − 1) = PM/(2^k − 1) (2.3.32)
Furthermore, there are (k choose n) ways in which n bits out of k may be in error. Hence, the average number of bit errors per k-bit symbol is
Σ_{n=1}^{k} n (k choose n) PM/(2^k − 1) = k [2^{k−1}/(2^k − 1)] PM (2.3.33)
And the average bit error probability is just the result in (2.3.33) divided by k, the number of bits per symbol. Thus
Pb = [2^{k−1}/(2^k − 1)] PM ≈ PM/2 for k ≫ 1 (2.3.34)
Conclusions:
This figure illustrates that by increasing
the number M of signals, one can
reduce the SNR/bit required to achieve
a given probability of a bit error.
PM = P(∪_{i=1}^{M−1} Ei) ≤ Σ_{i=1}^{M−1} P(Ei)
Hence,
PM ≤ (M − 1) P2 = (M − 1) Q(√(Es/N0)) < M Q(√(Es/N0)) (2.3.35)
Using the upper bound Q(x) < e^{−x²/2} (2.3.36) together with M = 2^k and Es = kEb, this gives
PM < e^{−k(Eb/N0 − 2 ln 2)/2} (2.3.37)
As k → ∞, or equivalently, as M → ∞, the probability of error approaches zero exponentially, provided that Eb/N0 is greater than 2 ln 2, i.e.,
Eb/N0 > 2 ln 2 = 1.39 (1.42 dB) (2.3.38)
The simple upper bound on the probability of error given by (2.3.37) implies that, as long as the SNR per bit exceeds 1.42 dB, we can achieve an arbitrarily low PM.
However, this union bound is not a very tight upper bound at sufficiently low SNR, due to the fact that the upper bound for the Q-function in (2.3.36) is loose.
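The union bound (2.3.37) evaluated for a few k, illustrating the 2 ln 2 threshold:

```python
import math

# Union bound (2.3.37) for M = 2^k orthogonal signals:
# PM < exp(-k (Eb/N0 - 2 ln 2)/2), which -> 0 as k grows iff Eb/N0 > 2 ln 2.
def union_bound(k, EbN0):
    return math.exp(-k * (EbN0 - 2*math.log(2)) / 2)

EbN0 = 2.0                      # above the threshold 2 ln 2 = 1.386
for k in (5, 10, 20):
    print(k, union_bound(k, EbN0))
# The bound shrinks with k above the threshold, and diverges below it:
print(union_bound(20, 1.0) > 1)   # True: the bound is useless there
```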
A tighter bound yields
PM < 2 e^{−k(√(Eb/N0) − √(ln 2))²} (2.3.39)
which approaches zero provided that
Eb/N0 > ln 2 = 0.693 (−1.6 dB) (2.3.40)
This minimum SNR/bit (-1.6 dB) is called the Shannon limit for an Additive
White Gaussian Noise channel.
According to this decision rule, the probability of a correct decision is equal to the probability that r1 = √Es + n1 > 0 and r1 exceeds |rm| = |nm| for m = 2, 3, …, M/2.
But,
P(|nm| < r1 | r1 > 0) = (1/√(πN0)) ∫_{−r1}^{r1} e^{−x²/N0} dx = (1/√(2π)) ∫_{−√(2/N0) r1}^{√(2/N0) r1} e^{−x²/2} dx (2.3.43)
Hence, the probability of a correct decision is
Pc = ∫0^∞ [(1/√(2π)) ∫_{−√(2/N0) r1}^{√(2/N0) r1} e^{−x²/2} dx]^{M/2−1} f(r1) dr1
Conclusions:
(1) On the basis of the SNR required to achieve a specified probability of error.
- This would not be very meaningful, unless it were made on the basis of some
constraint, such as fixed data rate of transmission.
Suppose that the bit rate Rb is fixed and let us consider the channel bandwidth
required to transmit the various signals.
M-ary PAM:
W = Rb/(2k) = Rb/(2 log2 M) Hz (2.3.50)
PPM:
The symbol interval T is subdivided into M subintervals of duration T/M, and pulses of width T/M are transmitted in the corresponding subintervals.
The spectrum of each pulse is approximately M/(2T) wide.
The channel bandwidth required to transmit the PPM signals is
W = M/(2T) = M/(2(k/Rb)) = MRb/(2 log2 M) Hz (2.3.51)
Biorthogonal and Simplex Signals
These signals result in similar relationships as PPM (orthogonal).
In the case of biorthogonal signals, the required bandwidth is one-half of that
for orthogonal signals.
(2) Based on the normalized data rate Rb/W (bits/second per hertz of
bandwidth) versus the SNR per bit (Eb/N0) required to achieve a given error
probability.
For PAM and orthogonal signals, we have

PAM:              R_b/W = 2 \log_2 M          (2.3.52)
PPM (orthogonal): R_b/W = (2 \log_2 M)/M      (2.3.53)
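The two bandwidth-efficiency formulas (2.3.52) and (2.3.53) can be tabulated with a short sketch (the function names are mine):

```python
import math

def pam_efficiency(M):
    """Bandwidth efficiency Rb/W for M-ary PAM, eq. (2.3.52)."""
    return 2 * math.log2(M)

def ppm_efficiency(M):
    """Bandwidth efficiency Rb/W for M-ary PPM (orthogonal), eq. (2.3.53)."""
    return 2 * math.log2(M) / M

for M in (2, 4, 8, 16, 32):
    print(f"M={M:2d}  PAM: {pam_efficiency(M):5.2f}  PPM: {ppm_efficiency(M):6.3f}")
```

The table makes the trade-off concrete: PAM's efficiency grows with M, while orthogonal signaling's shrinks.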
Conclusions:
In the case of PAM, increasing the number of amplitudes M results in a higher
bit-rate-to-bandwidth ratio R_b/W; the cost of the higher data rate is an
increase in the required SNR/bit.
For orthogonal signals, by contrast, the SNR/bit required to achieve a given
error probability (here P_M = 10^{-5}) decreases as M increases.
Conclusions:
Consequently, M-ary orthogonal signals are appropriate for power-limited
channels that have sufficiently large bandwidth to accommodate a large
number of signals.
Modulation, Demodulation and Coding
Period 3 - 2005
Sorour Falahati
Lecture 1
Course information
Scope of the course
Course material
Schedule
Staff
Grading
2005-01-21 Lecture 1 2
Equalization
Synchronization
Design goals
Trade-offs between various parameters
Course material
Course text book:
“Digital communications: Fundamentals and Applications”
by Bernard Sklar, Prentice Hall, 2001, ISBN: 0-13-084788-7
Additional recommended books:
“Communication systems engineering”, by John G. Proakis
and Masoud Salehi, Prentice Hall, 2002, 2nd edition, ISBN:
0-13-095007-6
“Communication Systems, Analysis and Design”, by H. P. E. Stern and
S. A. Mahmoud, Prentice Hall, 2004, ISBN: 0-13-121929-4
Material accessible from course homepage:
Lecture slides
Laboratory syllabus (Lab. PM)
Set of exercises and formulae
Home assignments and solutions
Schedule
12 lectures (from week 3 to week 8)
10 tutorials (from week 4 to week 8)
4 mandatory graded home assignments
1 mandatory lab. work (weeks 8-9)
Final written exam 14th March 2005
Staff
Course responsible and lecturer: Sorour
Falahati.
Email: sorour.falahati@signal.uu.se
Office: Magistern 2112A
Tel.: 018-471 1077
Tutorial and laboratory assistant: Daniel
Aronsson.
Email: daniel.anorsson@signal.uu.se
Office: Magistern 2140B
Tel.: 018-471 3071
Grading
To obtain grade 3, a student has to:
1. Complete the laboratory work
2. Pass all the home assignments (HA)
3. Pass the written final exam
The final grade (3, 4, 5) is calculated as:
Final grade = 0.8 × (grade on final exam) + 0.2 × (average grade on HAs)
The exam and the home assignments are each worth 60 points:
Points: 0-29   30-39    40-49    50-60
Grade:  Fail   Grade 3  Grade 4  Grade 5
Transmitter:
Formatter → Source encoder → Channel encoder → Modulator
Receiver:
Demodulator → Channel decoder → Source decoder → Formatter
Propagation distance
Classification of signals
Deterministic and random signals
Deterministic signal: No uncertainty with
respect to the signal value at any time.
Random signal: Some degree of uncertainty
in signal values before they actually occur.
Thermal noise in electronic circuits due to the
random movement of electrons
Reflection of radio waves from different layers of
ionosphere
Classification of signals …
Periodic and non-periodic signals
Analog and discrete signals
Classification of signals …
Energy and power signals
A signal is an energy signal if, and only if, it has
nonzero but finite energy for all time:
0 < E_x = \int_{-\infty}^{\infty} x^2(t)\,dt < \infty
Random process
A random process is a collection of time functions, or
signals, corresponding to various outcomes of a
random experiment. For each outcome, there exists a
deterministic function, which is called a sample
function or a realization.
[Figure: sample functions (realizations) of a random process versus time t; at
any fixed time, the real-valued samples across realizations form a random variable]
Random process …
Strictly stationary: none of the statistics of the random process
are affected by a shift in the time origin.
Autocorrelation
Autocorrelation of an energy signal:
R_x(\tau) = \int_{-\infty}^{\infty} x(t)\,x(t+\tau)\,dt
Spectral density
Energy signals: energy spectral density \psi_x(f) = |X(f)|^2
Power signals: power spectral density G_x(f)
Random process: power spectral density (PSD), the Fourier
transform of the autocorrelation function
Properties of an autocorrelation function
For real-valued signals (WSS in the case of random signals):
1. Autocorrelation and spectral density form
a Fourier transform pair.
2. Autocorrelation is symmetric around zero.
3. Its maximum value occurs at the origin.
4. Its value at the origin is equal to the
average power or energy.
[Figure: power spectral density (W/Hz), autocorrelation function, and
probability density function of a random signal]
Input → Linear system → Output
Deterministic signals: y(t) = x(t) ∗ h(t)
Random signals: G_Y(f) = |H(f)|^2 G_X(f)
Ideal filters (non-causal!):
Low-pass, band-pass, high-pass
Realizable filters:
RC filters, Butterworth filter
Bandwidth of signal
Baseband versus bandpass: a baseband signal is shifted up to
bandpass by mixing with a local oscillator.
Bandwidth dilemma:
Bandlimited signals are not realizable!
Realizable signals have infinite bandwidth!
[Figure: bandwidth definitions (a)–(e) of a power spectral density,
e.g. (e) the 50 dB bounded bandwidth]
Phil Karn
KA9Q
March 1995
Copyright 1995 Phil Karn
KA9Q FEC - 1
Overview of Seminar
Disclaimer: I'm not an expert either. But teaching is a good way to learn a
subject...
Limits to Communication
C = B \log_2\!\left(1 + \frac{S}{N}\right)

where:
C = channel capacity, bits/sec
B = channel bandwidth, Hz
S = signal power, W
N = noise power, W
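As an illustration (a sketch; the 3 kHz / 30 dB example is my own choice, not from the slides), Shannon's formula is easy to evaluate:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Channel capacity C = B * log2(1 + S/N) in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 3 kHz channel at 30 dB SNR (a typical voice-grade line):
snr = 10 ** (30 / 10)                   # 30 dB -> a factor of 1000
c = shannon_capacity(3000, snr)
print(f"{c:.0f} bits/sec")              # roughly 29.9 kbit/s
```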
Comments on Shannon
Random data bits are assumed. If not, compress them. This is the subject
of another branch of information theory called source coding (not the
subject of this seminar).
The formula applies only to the additive white Gaussian noise (AWGN)
channel. Fading, interference, distortion, etc., are not considered
Channel capacity is a theoretical limit only; it describes the best that can
possibly be done with any code and modulation method. Beating this limit
is like building a perpetual motion machine
You can send data faster than capacity, but it won't be error free. If you add
an error-correcting code to deal with the errors, the corrected data rate will
always be less than the channel capacity, but you may still be ahead of
where you started
Shannon does not actually show how to build a practical system that
reaches the limit. This is what everybody has been working on since.
N = N_0 B
where B is the bandwidth and N_0 is the noise spectral density in watts/Hz.
Doubling the bandwidth doesn't quite double the capacity because the
noise also doubles. So what's the net effect?
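The net effect can be seen numerically with a small sketch (the signal power and noise density values here are arbitrary illustrative choices):

```python
import math

def capacity(bandwidth_hz, signal_w, n0_w_per_hz):
    """C = B log2(1 + S / (N0 * B)): the noise power grows with bandwidth."""
    return bandwidth_hz * math.log2(1 + signal_w / (n0_w_per_hz * bandwidth_hz))

S, N0 = 1.0, 1e-4          # fixed signal power and noise density (arbitrary units)
for B in (1e3, 2e3, 4e3, 8e3):
    print(f"B = {B:6.0f} Hz  ->  C = {capacity(B, S, N0):8.0f} bits/sec")
# Capacity keeps growing, but by less than 2x per doubling, and it
# saturates toward (S/N0) * log2(e) as B -> infinity.
```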
Capacity Restated
Let's define the spectral efficiency R = C/B in bits/sec per Hz of
bandwidth, and the energy per bit E_b = S/C.
Then, after some (omitted) algebra, we find that for a system to obey the
Shannon limit it must satisfy the inequality

\frac{E_b}{N_0} \geq \frac{2^R - 1}{R}

The ratio E_b/N_0, pronounced "ebb-know", describes the overall power
efficiency of the system. It is usually expressed in dB.
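A short sketch (the function name is mine) evaluates this bound across spectral efficiencies; the low-rate end recovers the -1.6 dB Shannon limit:

```python
import math

def ebn0_min_db(R):
    """Minimum Eb/N0 (dB) permitted by Shannon at spectral efficiency
    R bits/sec/Hz: Eb/N0 >= (2^R - 1) / R."""
    return 10 * math.log10((2 ** R - 1) / R)

for R in (0.01, 0.5, 1, 2, 4, 8):
    print(f"R = {R:5.2f}  ->  Eb/N0 >= {ebn0_min_db(R):6.2f} dB")
# As R -> 0 the bound approaches ln 2, i.e. about -1.59 dB;
# at large R (bandwidth-limited operation) it climbs steeply.
```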
[Figure: minimum E_b/N_0 (dB) versus spectral efficiency R (0.01 to 10);
R < 1 is the "power limited" region, R > 1 the "bandwidth limited" region]
[Figure: the same E_b/N_0 versus R chart with uncoded QPSK plotted at
10.8 dB, well above the Shannon bound]
When used for error correction, this is called Forward Error Correction (FEC)
Power-Limited Coding
Bandwidth-Limited Coding
[Figure: power spectral densities of an uncoded signal (power P, spectral
density S, bandwidth B) and of the same power transmitted as an r=1/2 K=7
coded signal spread over bandwidth 2B at lower spectral density]
ECC Types
Block codes
CRC
Orthogonal (Reed-Muller, etc)
Hamming
Golay
BCH
Reed-Solomon
etc
Convolutional codes
Sequentially decoded
Viterbi decoded
Block Coding
k input bits → (n,k) block encoder → n output symbols
The lowly CRC is a block code used for error detection only. I.e., the
decoder produces the k output bits only if there are no channel errors. If
even a single error occurs, the decoder signals an erasure, and a higher-level
ARQ (Automatic Repeat Request) mechanism retransmits the block
More sophisticated codes can correct some number of errors. Only if the
number of errors exceeds the code's correction ability will it signal an
erasure
Systematic Codes
Most block codes are systematic. That is, the k input bits appear as-is in the
n output symbols. The additional n-k symbols are parity symbols
In a nonsystematic code, every output symbol is a function of more than
one input data bit, i.e., the input does not appear embedded in the output
Nonsystematic linear block codes have no particular advantage over
systematic codes, so systematic block codes are generally used in
practice. This lets decoders cut corners (at a price in lost performance)
Hamming Distance
The Hamming distance (or just "distance") between two equal-length bit
strings is the number of places in which they disagree.
For example, these strings have distance 2:
001101001
100101001
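A distance computation is a one-liner; this sketch (function name mine) reproduces the example above:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length strings differ."""
    if len(a) != len(b):
        raise ValueError("strings must have equal length")
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance("001101001", "100101001"))  # -> 2
```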
Distance Properties
Distance Properties - 2
Distance Properties - 3
All other things being equal, bigger code blocks provide better performance for
the same relative overhead - at the expense of added delay
Tradeoffs exist between the erasure rate and the undetected error rate. For
example, the designer might specify that if a received codeword is "too far"
from a valid codeword, an erasure is declared even though the error could
have been corrected. This reduces the undetected error rate but increases the
erasure rate
In most block codes, the distribution of errors in a block is irrelevant; only the
total number of errors in a block matters. This makes large block codes good
for burst errors, e.g., against pulsed interference
Resistance to error bursts can be further increased by interleaving (more later)
Orthogonal Signalling
Performance
Hamming
Reed-Solomon Codes
Reed-Solomon codes are a subclass of non-binary BCH codes. That is,
instead of individual binary digits (bits), R-S codes operate on symbol
alphabets of more than two values. These alphabet sizes are usually powers
of 2, written GF(2^m)
An R-S code's block size is one less than the alphabet size. E.g., an R-S code
over GF(256) has a block size of 255 symbols, or 2040 bits. There is an
"extended" R-S code with a block size equal to the alphabet size (256 symbols,
or 2048 bits, for our example)
The number of data symbols within an R-S codeword can be chosen according
to the desired error-correcting ability. To correct up to E errors in a block, you
need 2E parity symbols, with the rest being data. E.g., to correct up to 10 errors
in an R-S code over GF(256), you need 20 parity symbols. This leaves
255 − 20 = 235 data symbols per block, i.e., a (255,235) code over GF(256)
R-S codes can correct as many erasures (known errors) as parity symbols, i.e.,
twice as many erasures as errors
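The parameter arithmetic above can be captured in a tiny sketch (the function name is mine):

```python
def rs_parameters(m: int, errors: int):
    """Parameters (n, k) of a Reed-Solomon code over GF(2^m) correcting
    up to `errors` symbol errors: n = 2^m - 1, k = n - 2*errors."""
    n = 2 ** m - 1            # block size in symbols
    parity = 2 * errors       # 2E parity symbols per block
    k = n - parity            # remaining symbols carry data
    return n, k

n, k = rs_parameters(m=8, errors=10)
print(f"({n},{k}) code, {n * 8} bits per block")  # -> (255,235) code, 2040 bits per block
```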
R-S Codes - 2
A Reed-Solomon code achieves the largest possible minimum distance of any
linear code with the same block size and rate (it is "maximum distance
separable"). R-S codes are also relatively easy to implement at large block
sizes, where performance is better
This has made them extremely popular; the R-S code is now arguably the
single most important block code used for error correction
Applications of R-S codes include digital audio (Compact Disc, DAT, etc),
deep space communications (Voyager and Galileo, both in combination with
convolutional coding), and the HAL Clover II HF modem
Interleaving
Interleaving is a technique that allows a designer to construct a big block
code out of a smaller one. I.e., given an (n,k) block code, we can construct an
(L·n, L·k) block code
A common way to do this is to allocate a block of memory as a
two-dimensional matrix. The encoded symbols of each block are written into
memory in rows, then read out and transmitted in columns. The
receiver reverses the procedure, decoding each row in turn.
transmit order: down the columns

code block 1: S11 S12 S13 S14 S15
code block 2: S21 S22 S23 S24 S25
code block 3: S31 S32 S33 S34 S35
code block 4: S41 S42 S43 S44 S45
code block 5: S51 S52 S53 S54 S55

A burst must now be more than 5 symbols long to affect more than
one symbol in each code block.
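The row-write/column-read procedure can be sketched directly (function names are mine):

```python
def interleave(blocks):
    """Write code blocks as rows, then transmit column by column."""
    return [row[i] for i in range(len(blocks[0])) for row in blocks]

def deinterleave(symbols, n_blocks):
    """Inverse: rebuild the rows from the column-ordered stream."""
    n_cols = len(symbols) // n_blocks
    return [[symbols[c * n_blocks + r] for c in range(n_cols)]
            for r in range(n_blocks)]

blocks = [[f"S{r}{c}" for c in range(1, 6)] for r in range(1, 6)]
tx = interleave(blocks)
print(tx[:6])   # column-major order: S11, S21, S31, S41, S51, S12
assert deinterleave(tx, 5) == blocks
# A 5-symbol burst in tx hits at most one symbol of each original block.
```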
data → (28,24) RS coder → interleaver → (32,28) RS coder → disc
CIRC (complete)
The CD - 2
By itself, the inner (32,28) RS code isn't very strong. More than
(32−28)/2 = 2 symbol errors at the decoder causes it to fail and flag an
erasure on all 28 of its output symbols.
But the deinterleaver distributes these 28 erasures across 28 different
blocks of the outer (28,24) RS code, one erasure per block. And because
these are erasures, not errors, the outer code can correct up to 28-24=4 of
them in a block. So the outer code easily fills in the missing symbols.
The concatenated system is so strong that it can completely correct an
error burst of up to 3,874 bits - assuming no more bursts occur until the
interleavers have flushed!
The two CD codes are examples of "shortened" R-S codes. An 8-bit
symbol size would ordinarily imply a 255-byte block, but the encoder sets
the unused symbols to zero and the decoder takes this into account.
Convolutional Coding
Convolutional codes (also known as tree codes) operate on continuous
streams of data, rather than on fixed size blocks
A convolutional encoder is extremely simple: a shift register, some XOR
gates and a multiplexer:
[Figure: the r=1/2, K=7 "NASA standard" encoder: input data bits shift
through a register, two XOR (+) gates compute the coded symbols, and a
multiplexer interleaves the two outputs]
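The encoder really is this simple. The following sketch assumes the standard generator polynomials 171 and 133 (octal) used by the NASA K=7, r=1/2 code; bit ordering and output-symbol conventions vary between implementations, so treat it as illustrative:

```python
# Generator polynomials of the K=7, r=1/2 NASA standard code (octal).
G1, G2 = 0o171, 0o133

def conv_encode(bits):
    """Shift each input bit into a 7-bit register and emit two
    XOR-tap parity symbols per input bit (rate 1/2)."""
    state = 0
    out = []
    for b in bits:
        state = ((state >> 1) | (b << 6)) & 0x7F   # newest bit enters at the top
        out.append(bin(state & G1).count("1") & 1)  # parity of taps selected by G1
        out.append(bin(state & G2).count("1") & 1)  # parity of taps selected by G2
    return out

print(conv_encode([1, 0, 1, 1]))   # 8 coded symbols for 4 input bits
```

Note the rate: every input bit produces exactly two output symbols.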
Code Parameters
Convolutional Decoding
Code Tree
Sequential Decoding
Sequential decoding looks for the path through the code tree that
most closely matches the received sequence of symbols
Two sequential decoding algorithms are commonly used: Fano and
stack. Though they look quite different, they accomplish the same
thing
The decoder measures the “goodness” of the match. This is called
the metric. As long as it continues to increase, it moves forward
If the metric decreases, the decoder assumes it may have made a
mistake earlier due to noise. It backs up and sequentially examines
alternate paths until it finds one whose metric increases again
The decoder may look at the “quality” of each received symbol and
give it an appropriate weight in the total metric. This is called soft
decision decoding and is a significant advantage of convolutional
codes over block codes (2 dB on AWGN channels). Just a few bits
of precision give almost all of this gain; 3 bit A/D sampling is
common
Example Performance
Viterbi Decoding
Trellis Diagram
Decoding Example
Performance vs k
Sequential vs Viterbi
Block vs Convolutional
Block
Practical at very large block sizes for improved performance
against burst errors
Inherently difficult to adapt to soft-decision decoding (the Chase
method is a partial workaround)
Requires external synchronization
Convolutional
Outperforms block codes at comparable implementation
complexity
Constraint lengths have practical limits; interleaving is needed to
protect against burst errors
Easily adapted to soft-decision decoding
Viterbi decoders can self-synchronize
Concatenated Coding
Galileo Coding
Galileo-2
Diversity vs Coding
Type I ARQ-FEC
Type II ARQ-FEC
In a Type II hybrid ARQ-FEC system, the two layers are more closely
coupled.
A low rate FEC code that can be punctured or inverted is chosen.
That is, the decoder is capable of decoding a message with a large
number of missing symbols
The transmitter first sends only enough of the coded symbols to
allow successful decoding on a “clean” channel. If this is
successful, an ACK signals the transmitter to proceed to the next
data block; the remaining symbols from the acknowledged block
are discarded unsent
If the receiver cannot decode the packet, it sets it aside without
discarding it. The transmitter then sends additional symbols from
the encoded packet. The receiver combines these with the previous
transmission and tries again to decode the result
The process repeats until decoding is successful. If all of the coded
symbols have been sent and the receiver still can’t decode the
packet, the symbols are sent again
Type II - 2
As more of the symbols of a packet are sent, two things happen: the
total received Eb/N0 increases, and the effective code rate
decreases, increasing the coding gain.
Eventually the packet can be decoded successfully, assuming that
the synchronization mechanism still works
Type II hybrid ARQ-FEC schemes are more complex, so they aren’t
as popular as Type I schemes. But they are highly efficient and
adapt quickly to changing channel conditions, which makes them
interesting for radio
Kantronics’ G-TOR is a Type II hybrid. The FEC code in G-TOR is the
Golay block code
Pactor resembles a Type II hybrid, but the “FEC” scheme (simple
repetition of the same uncoded packet) has no coding gain. Pactor
relies solely on Eb/N0 accumulation for its performance
Summary
Error control coding has been around a long time. Most of the
important codes and algorithms were discovered over 25-30 years
ago. But until relatively recently, strong FEC was used mainly in
deep space communications where the benefits outweighed the
considerable computational costs
But the computer revolution has changed all this. It is now possible
to implement strong FEC at reasonable speeds and at reasonable
prices, even in software on general purpose computers
Inexpensive consumer products (CD players, V.32 modems) now
incorporate strong FEC that brings these systems remarkably close
to the Shannon limit
It’s time we brought FEC to ham radio as well!
Adam Margetts
Electrical Engineering Dept.
The Ohio State University
• What is 3G?
– Next generation cellular standards
– based on code division multiple access (CDMA)
(~2 Mbps)
– W-CDMA (ETSI)—Upgrade from GSM (sort-of)
– cdma2000 (TIA)—Upgrade from IS-95
• Where does my research fit in?
– Improving the 3G Downlink (base station to
mobile unit)
– Mobile unit’s receiver combats time-varying
channel impairments
[Figure: downlink transmitter: K bitstreams are spread (spreading factors
N1 … NK), summed with the pilot, and multiplied by the scrambling code to
produce the multiuser chip stream]
• Chip-rate is constant.
• Orthogonal Variable Spreading Factor (OVSF) codes.
• Pilot is all 1’s code.
• Scrambling whitens transmitted signal.
Two-Stage Receiver
[Figure: a linear first stage feeds the despreader; a second stage refines
the chip decisions]
Second Stage: Decision “Feedback” Equalization
[Figure: decision-feedback structure with feed-forward FIR, feedback FIR,
and delay z⁻¹]
• Decisions are fed forward from the 1st stage.
• Post-cursor ICI cancellation.
• Decision-directed adaptation.
• Improvement over linear MMSE.
Second Stage: Inter-chip Interference (ICI) Cancellation
[Figure: chip decisions are re-channeled (sans cursor) and subtracted from
the received signal]
• Re-channel multiuser chip-rate signal.
• Pre- and post-cursor ICI cancellation.
• Decision-directed adaptation.
• Maximal-ratio combining (MRC) of many ICI cancellation branches for diversity gain
Simulation Results
[Figure: BER performance of the adaptive algorithms]
• DD enhances adaptive performance.
• The second stage improves BER.
• IC out-performs DFE.
• Max-SINR + IC fails to reach the MF bound due to errors fed forward.
Publications
• Future Research:
– On the MF bound of scrambled DS-SS in Rayleigh Fading
– Detection of the Active OVSF codes in the Downlink
– MIMO Equalization for the CDMA downlink
My Background