
Digital Communication

EcE 4034
B.Tech. Second Year for EcE
Date: 14.3.08

Dr. Kyawt Khin


Professor and Head
Department of Electronic Engineering
and Information Technology
Yangon Technological University

Chapter 12
Digital Communication Concepts
12.1 Digital Information

• Bit
• Coding
• Coding Efficiency
One bit can define 2 objects
2 bits can define 2 · 2 = 2² = 4 objects
3 bits can define 2 · 2 · 2 = 2³ = 8 objects
4 bits can define 2 · 2 · 2 · 2 = 2⁴ = 16 objects

2^n = M
the number of required bits = n
different things or levels = M

n = log 2 M

Coding Efficiency = Exact number of digits required / Actual number of digits used

e.g. Coding Eff. = 6.46 bits / 7 bits = 0.923 = 92.3%
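As a quick check, the two formulas above can be computed directly (a minimal Python sketch; the alphabet size M = 88 is an assumed example, chosen because log2 88 ≈ 6.46, matching the figure in the slide):

```python
import math

def bits_required(M):
    """Exact number of digits required: n = log2(M)."""
    return math.log2(M)

def coding_efficiency(M, digits_used):
    """Coding efficiency = exact digits required / actual digits used."""
    return bits_required(M) / digits_used

# 2^n = M: 4 bits can define 16 different things
assert 2 ** 4 == 16
# M = 88 (an assumed example): log2(88) ≈ 6.46 bits, so a 7-bit code
# gives an efficiency of about 92.3%
eff = coding_efficiency(88, 7)
```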

12.2 Information Transfer rate (fi)

Unit bit/ sec or bps


e.g Serial digital word 101001 (6 bits)
Time taken = 6 ms

fi = 6 bits / 6 ms = 1,000 bits/sec (1 kbps)

12.3 Signaling (BAUD) Rate (fb)
[Binary waveform: signal level (V) vs t (ms)]

Tb = 1 ms
fb = 1/Tb = 1 k baud
Note In a purely binary system
the bit rate = the baud rate
Fig 12.1 Binary transmission
e.g.
Binary message: 10 10 01 11
Quaternary transmission: 2V 2V 1V 3V

[Fig 12.2 Four-level transmission of a binary message: volts (0–4) vs t (ms)]

fi (transfer rate) = 8 bits / 4 ms = 2 kbps
fb (baud rate) = 4 symbols / 4 ms = 1 kbaud
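The distinction between fi and fb can be reproduced numerically (a small Python sketch of the four-level example above):

```python
import math

bits, symbols, time_ms = 8, 4, 4   # 8 bits sent as 4 quaternary symbols in 4 ms

fi = bits / (time_ms / 1000)       # information transfer rate, bits/sec
fb = symbols / (time_ms / 1000)    # signaling (baud) rate, symbols/sec

# Each M-level symbol carries log2(M) bits, so here fi = 2 * fb
bits_per_symbol = math.log2(4)
```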
12.4 System Capacity (OR) Information Capacity (C)

C = information × (1/Tm) = (1/Tb) log2 M
C = 2 fc(min) log2 M
where Tm is the message time
1/Tb is the signaling rate
log2 M is the number of bits

Hartley's Law: C ∝ B × T
where C = information capacity
B = bandwidth
T = transmission time

Shannon: C = B log2(1 + S/N) bps
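The Shannon formula is easy to evaluate (a minimal Python sketch; the 3 kHz / 30 dB numbers are a hypothetical example, not from the slides):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley: C = B log2(1 + S/N), in bits/sec."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Hypothetical example: a 3 kHz channel at 30 dB SNR (S/N = 1000)
C = shannon_capacity(3000, 1000)   # just under 30 kbps
```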
12.5 Bandwidth Considerations
•the minimum possible bandwidth required for a given
pulse rate
• how pulses can be shaped to minimize the bandwidth and
distortion of the data pulses
• fc(min) cutoff ≥ 1/(2Tb) = ½ fb
E.g. If 1000 bits/s are transmitted NRZ,
fc(min) cutoff = ½ fb = ½ × 1000 = 500 Hz

[Fig 11.17 Squarewave (0 1 0 1 0) and its fundamental frequency; bit period Tb, square-wave period T]
Continued

Tb = 1/ fb

f = 1/T = 1/ 2Tb = ½ fb

BWmin = ½ fb

fb = the transmission line bit-rate (baud rate)
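The relations above can be sketched directly (Python, using the slide's 1000 bit/s NRZ example):

```python
fb = 1000                  # transmission line bit rate (baud rate), NRZ
Tb = 1 / fb                # bit period
# Worst-case data (1 0 1 0 ...) is a square wave of period T = 2*Tb,
# whose fundamental frequency is f = 1/(2*Tb) = fb/2 = BWmin
bw_min = 1 / (2 * Tb)
```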

v(t) = Aτ/T + (2A/π)[ sin(πτ/T) cos(2π(1/T)t) + ½ sin(2πτ/T) cos(2π(2/T)t) + … ]

The pulse repetition rate is f = 1/T (symbols/sec).

[Figure 12-5 Time and frequency description of a rectangular pulse train.
Time domain: pulses of amplitude A, width τ, period T.
Frequency domain: spectral lines at f = 1/T, 2/T, … with amplitudes
(2Aτ/T) · sin(nπτ/T) / (nπτ/T) volts.]
[Figure 12-6 Return-to-zero (RZ) data stream for the bits 1 1 0 1 0]
12.6 Power in Digital Signal

Compare the power of an NRZ square wave to an NRZ-bipolar signal.

[Fig 12.2 Comparison of NRZ (levels 0 and V) and NRZ-bipolar (levels +V/2 and −V/2)]

Power in an NRZ signal:

PNRZ = V²/2R

PNRZ-B = (V/2)²/R = V²/4R

It is seen that the on/off NRZ signal has twice the average power of the
NRZ-bipolar signal. Also, the instantaneous (peak) power for NRZ is V²/R and
for NRZ-B is V²/4R, a 4:1 difference in peak power.
Digital Transmission Formats

1. NRZ : Non-return to zero


2. NRZ-B : NRZ-Bipolar
3. RZ : Return to zero (~ 50% duty cycle)
4. Biphase (Bi- φ ), also called “Manchester” code
5. AMI : Alternate mark inversion

Digital sequence: 1 0 1 0 0 1 1 1

A. NRZ: Nonreturn to zero
B. NRZ-B: NRZ-bipolar
C. RZ: Return to zero (~50% duty cycle)
D. Biphase (Bi-φ), also called "Manchester" code
E. AMI: Alternate mark inversion

Figure 12-10 A few digital transmission formats
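Three of the listed formats can be sketched as level sequences (a Python illustration with two half-bit samples per bit; the Manchester polarity convention used here, 1 → high-then-low, is an assumption, since both polarities appear in practice):

```python
def nrz(bits):
    """NRZ: the level is held for the whole bit interval."""
    return [lvl for b in bits for lvl in (b, b)]

def rz(bits):
    """RZ (~50% duty cycle): a 1 returns to zero in the second half-bit."""
    return [lvl for b in bits for lvl in (b, 0)]

def manchester(bits):
    """Biphase (Manchester): a transition in the middle of every bit."""
    return [lvl for b in bits for lvl in ((1, 0) if b else (0, 1))]

wave = manchester([1, 0, 1])   # [1, 0, 0, 1, 1, 0]
```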


Continued

TTL (Transistor-Transistor Logic) Level Signal Format


• 0~1.3 volts for a logic 0
• 3.6~5 volts for a logic 1
• current level less than 16 mA

12.7 PCM System Analysis

• Sampling: fs > 2 fA(max)
fs = sampling frequency
fA(max) = maximum input frequency
• Quantization
• Encoding

Quantization is the process of approximating sample levels
to their closest fixed values.

[Figure 11.14: A 3-bit PCM system showing analog-to-3-bit-digital conversion.
Analog input A(t) → Sampler → Encoder → serial PCM output. A digital clock
drives a pulse generator that supplies sampling pulses at interval Ts; each
sample level (0–7) is encoded as a 3-bit word (000–111).]
Dynamic Range and Resolution

Dynamic range is the ratio of the largest to the smallest analog signal.
Resolution is the smallest analog input voltage change
that can be distinguished by the A/D converter.

q = VFS / 2^n

where q = resolution
n = number of bits in the digital code word
VFS = full-scale voltage range for the analog signal

Dynamic Range (DR)

DR = VFS / q = 2^n = M
DR = Vmax / Vmin = 2^n
DR (dB) = 20 log (Vmax / Vmin)
= 20 log 2^n = 20n log 2 = 6.02n
or DR (dB) ≈ 6n
For a linearly encoded PCM system,
DR ≈ 6 dB per bit
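The two ADC formulas can be checked numerically (a Python sketch; the 8-bit, 10 V converter is a hypothetical example):

```python
import math

def resolution(v_fs, n_bits):
    """q = VFS / 2^n."""
    return v_fs / 2 ** n_bits

def dynamic_range_db(n_bits):
    """DR(dB) = 20 log10(2^n) = 6.02 n."""
    return 20 * math.log10(2 ** n_bits)

# Hypothetical 8-bit converter with a 10 V full-scale range
q = resolution(10.0, 8)        # ≈ 39 mV per step
dr = dynamic_range_db(8)       # ≈ 48.2 dB, i.e. about 6 dB per bit
```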
Signal to Quantization Noise Ratio
(SQR)
For an input signal of minimum amplitude:
SQR = minimum voltage / quantization noise
For an input signal of maximum amplitude:
SQR = maximum voltage / quantization noise

Companding
Linear quantizing in PCM systems has two major drawbacks:
(i) The uniform step size means that weak analog signals will have
a much poorer S/Nq than the strong signals.
(ii) Systems of wide dynamic range require many encoding bits and
consequently wide system bandwidth.
Companding
Companding is the process of compressing, then expanding:
nonlinear encoding/decoding, called companding.
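A concrete companding law makes this tangible. The slides name no specific law, so the standard μ-law (μ = 255, as used in North American PCM telephony) is assumed here:

```python
import math

MU = 255  # mu-law parameter (assumed; the slide names no particular law)

def compress(x):
    """mu-law compressor for x in [-1, 1]: boosts weak signals."""
    return math.copysign(math.log(1 + MU * abs(x)) / math.log(1 + MU), x)

def expand(y):
    """mu-law expander: the exact inverse of compress()."""
    return math.copysign(((1 + MU) ** abs(y) - 1) / MU, y)

weak = compress(0.01)              # a weak input uses far more of the code range
round_trip = expand(compress(0.3)) # compress/expand restores the original value
```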

[Fig 12.15 Linear ADC characteristic and quantization noise.
A. Linear analog-to-digital converter transfer characteristic: digital output
code (000–111) vs. sample voltage input (volts), step size q, up to full
scale VFS (Vmax).
B. Quantum uncertainty or quantization noise, ±q/2.]
Introduction to digital
communications

B. Macq
(macq@tele.ucl.ac.be)
1) Point-to-point
communications

Source Destination

Source: -discrete events (from an alphabet)


-waveforms (speech, sound, images, video)

Transmission media: -radio frequency ”on the air”


-satellite channel
-copper wire
-optical fiber
Telecommunications

[Diagram: Signal → coding → modulation → transmission media → demodulation → decoding]

Signal: bandwidth or rate (letters per second, bauds)
Transmission media: limited efficient frequency band, distortion, noise;
confidentiality.
The aim of modulation: to be in the right (shared) band.
The aim of coding: compress, mark, cipher, sign, resist errors.
All implemented in processors (VLSI) able to process bit streams
and to interface to the electrical signal (I/O) and the electrical media.
Modulation

❚ A channel is a linear system: a frequency


at its input gives the same (attenuated)
frequency at the output
❚ Modulation is an operation which embeds a
signal into a given parameter of a carrier
frequency:
❙ amplitude
❙ phase
❙ frequency
Digital modulations: how to
transmit streams of 0 and 1

Capacity = B log2(1+S/N)

B: bandwidth in Hz
S: signal power
N: noise power
Capacity in bits/sec

Result: available rate (bits/sec)


with a given bit error rate (probability of bit error)
Example of modern
modulations

❚ QPSK: about 40 Mbits/sec in a 27 MHz


satellite channel (12 GHz)
❚ QAM: about 40 Mbits/sec in a 8 MHz cable
TV channel (30-600 MHz)
❚ WDM: up to 10 Gbits/sec on an optical
fiber
❚ ADSL: up to 6 Mbits/sec downstream, 300
kbits/sec upstream on a telephone pair
Coding sources into 0s and
1s

❚ Pulse Code Modulation (PCM): how to


transform a waveform into a bit
stream
Coding sources into 0s and
1s

❚ ASCII: 8 bits, 256 letters


Bit streams

❚ Digital telephone:
❙ 8000 * 8 bits/sample = 64 kbits/s
❚ Digital sound:
❙ 44000 * 16 bits/sample * 2 = 1.4 Mbits/s
❚ Digital TV
❙ 576*720*25*(8 bits/lum + 8bits/chrom) = 166
Mbits/sec
❚ Multimedia: multiplexed programs, data,
audio, pictures, video, ...
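The bit-stream rates above follow from samples-per-second times bits-per-sample (a Python check of the slide's arithmetic):

```python
# Digital telephone: 8000 samples/s * 8 bits/sample
phone_bps = 8000 * 8                      # 64 kbits/s

# Digital sound (stereo): 44000 samples/s * 16 bits/sample * 2 channels
sound_bps = 44000 * 16 * 2                # ~1.4 Mbits/s

# Digital TV: 576 * 720 pixels, 25 frames/s, 8 bits luma + 8 bits chroma
tv_bps = 576 * 720 * 25 * (8 + 8)         # ~166 Mbits/s
```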
Codings for bit streams

[Diagram: Source coding → Cryptographic coding → Channel coding]
Source coding: compress, watermark, label
Cryptographic coding: cipher, sign
Channel coding: error correcting code, checksum
2) Multipoint-to-multipoint
communications

Alice Cathy Brute force

Benoit Diana

Cathy
Alice
Local Area Network:
Media access control
Diana Ethernet or Token ring
Benoit
Multipoint-to-multipoint
communications (cont.)

Alice Cathy

Switch or router
Diana
Benoit

Wide Area Networks and modern LANs (Switched Ethernet)

Switch: connection oriented network


Router: connectionless (datagram) oriented network
Dr. Uri Mahlab 36
Communication system

[Diagram: Information source → Transmitter → Channel → Receiver → Decision]



Block diagram of a binary/M-ary signaling scheme

[Diagram: Information source → Pulse generator → X(t) → Transmitting filter
HT(f) → XT(t) → Channel Hc(f) → + (channel noise n(t)) → Y(t) → Receiver
filter HR(f) → A/D → Output; timing supplied by a clock recovery network]


Block diagram description

{dk} = {1,1,1,1,0,0,1,1,0,0,0,1,1,1}

[Diagram: Information source → Pulse generator → Transmitting filter HT(f)]

pg(t) is the basic pulse of duration Tb, and

ak = +a if dk = "1"
ak = −a if dk = "0"


Block diagram description (continued - 1)

[For dk = 1 the pulse +pg(t) is applied to the transmitter filter; for
dk = 0, −pg(t); each pulse occupies one bit interval Tb]


Block diagram description (continued - 2)

X(t) = ∑_{k=−∞}^{∞} ak pg(t − kTb)

[Waveform of X(t) over Tb, 2Tb, …, 6Tb for the bit sequence 1 0 0 1 1 0]
Block diagram description (continued - 3)

X(t) = ∑_{k=−∞}^{∞} ak pg(t − kTb)

[Waveforms of X(t) before and after the transmitting filter for the bit
sequence 1 0 0 1 1 0]


Block diagram description (continued - 4)

[Diagram: Information source → Pulse generator → Transmitting filter HT(f) →
channel (+ noise n(t)) → Receiver filter HR(f); waveforms shown at the
transmitter and receiver]


Block diagram description (continued - 5)

[Waveforms at the receiver filter input and output]

Y(t) = ∑k Ak pr(t − td − kTb) + n0(t)


Block diagram of a binary/M-ary signaling scheme

[Diagram as above: source → pulse generator → HT(f) → channel Hc(f) with
additive noise n(t) → HR(f) → Y(t) → A/D → output]

Y(t) = ∑k Ak pr(t − td − kTb) + n0(t)
Block diagram description

[Received waveform Y(t) sampled once per Tb. Transmitted bits: 1 0 0 1 1 0;
detected bits: 1 0 0 0 1 0 — one bit error]
Block diagram of a binary/M-ary signaling scheme

[Diagram repeated: source → pulse generator → HT(f) → channel Hc(f) with
noise n(t) → HR(f) → A/D → output; clock recovery network supplies timing]


Explanation of pr(t)

[Diagram: Pg(t) → Transmitting filter HT(f) → channel Hc(f) → Receiver filter
HR(f) → Pr(t)]

Y(t) = ∑k Ak pr(t − td − kTb) + n0(t)

Pr(f) = Pg(f) HT(f) Hc(f) HR(f),  pr(0) = 1
The output of the pulse generator X(t) is given by

X(t) = ∑_{k=−∞}^{∞} ak pg(t − kTb)

pg(t) is the basic pulse, whose amplitude ak depends on the k-th input bit.


The input to the A/D converter is

Y(t) = ∑k Ak pr(t − td − kTb) + n0(t)

For tm = mTb + td, where td is the total time delay in the system, we get:

[Waveform Y(t) with sampling instants t1, t2, t3, …, tm]
The output of the A/D converter at the sampling time tm = mTb + td:

Y(t) = ∑k Ak pr(t − td − kTb) + n0(t)

Y(tm) = Am + ∑_{k≠m} Ak pr[(m − k)Tb] + n0(tm)

[Sampled waveform Y(tm) at t1, t2, t3, …, tm]
Y(tm) = Am + ∑_{k≠m} Ak pr[(m − k)Tb] + n0(tm)

ISI - Inter-Symbol Interference

[Sampled waveform Y(tm) at t1, t2, t3, …, tm]
Explanation of ISI

[Diagram: Pg(t) → Transmitting filter HT(f) → channel Hc(f) → Receiver filter
HR(f) → Pr(t). In the frequency domain the filter cascade acts as a bandpass
filter on the Fourier transform of the pulse: Pg(f) → HT(f) Hc(f) HR(f) → Pr(f)]
Explanation of ISI - continued

[Bandlimiting by the filter cascade spreads each received pulse over the
neighboring symbol intervals Tb, 2Tb, …, 6Tb]


- The pulse generator output is a pulse waveform

X(t) = ∑_{k=−∞}^{∞} ak pg(t − kTb),  pg(0) = 1

ak = +a if the k-th input bit is 1
ak = −a if the k-th input bit is 0

- The A/D converter input is

Y(t) = ∑k Ak pr(t − td − kTb) + n0(t)
Important parameters of a PAM system: data rate, error rate, transmitted
power, and noise power spectral density.


5.2 BASEBAND BINARY PAM SYSTEMS

Design of a baseband binary PAM system: choose the pulse shapes pg(t), pr(t)
and the filters HT(f), HR(f) to

- minimize the combined effects of intersymbol interference and noise in
order to achieve minimum probability of error for a given data rate.
5.2.1 Baseband pulse shaping
The ISI can be eliminated by proper choice of the received pulse shape pr(t):

pr(nTb) = 1 for n = 0
          0 for n ≠ 0

This does not uniquely specify pr(t) for all values of t.


To meet the constraint, the Fourier transform Pr(f) of pr(t) should satisfy a
simple condition given by the following theorem.

Theorem

If  ∑_{k=−∞}^{∞} Pr(f + k/Tb) = Tb  for |f| < 1/2Tb

then  pr(nTb) = 1 for n = 0
               0 for n ≠ 0

Proof

pr(t) = ∫_{−∞}^{∞} Pr(f) exp(j2πft) df

pr(t) = ∑_{k=−∞}^{∞} ∫_{(2k−1)/2Tb}^{(2k+1)/2Tb} Pr(f) exp(j2πft) df
pr(nTb) = ∑k ∫_{(2k−1)/2Tb}^{(2k+1)/2Tb} Pr(f) exp(j2πfnTb) df

pr(nTb) = ∑k ∫_{−1/2Tb}^{1/2Tb} Pr(f′ + k/Tb) exp(j2πf′nTb) df′

pr(nTb) = ∫_{−1/2Tb}^{1/2Tb} ( ∑k Pr(f + k/Tb) ) exp(j2πfnTb) df

pr(nTb) = ∫_{−1/2Tb}^{1/2Tb} Tb exp(j2πfnTb) df = sin(nπ)/(nπ)

which verifies that a pr(t) whose transform Pr(f) satisfies the condition
yields ZERO ISI.


The condition for removal of ISI given in the theorem is called the
Nyquist (pulse shaping) criterion.

Y(tm) = Am + ∑_{k≠m} Ak Pr((m − k)Tb) + n0(tm)

pr(nTb) = 1 for n = 0
          0 for n ≠ 0

pr(nTb) = sin(nπ)/(nπ)

[Plot: pr(t) with pr(0) = 1 and zero crossings at ±Tb, ±2Tb, …]


The theorem gives a condition for the removal of ISI using a Pr(f) with a
bandwidth larger than rb/2.
ISI cannot be removed if the bandwidth of Pr(f) is less than rb/2.

[Diagram: Pg(f) → HT(f) → Hc(f) → HR(f) → Pr(f), and the resulting pulse
train over Tb, …, 6Tb]


Particular choice of pr(t) for a given application

Two properties matter: the rate of decay of pr(t), and the shape of Pr(f).
The smaller the values of pr(t) near Tb, 2Tb, …, the less a timing error
(jitter) will cause large ISI. The shape of Pr(f) determines the ease with
which the shaping filters can be realized.
A Pr(f) with smooth roll-off characteristics is preferable
over one with arbitrarily sharp cut-off characteristics.

[Plots: two candidate Pr(f) shapes, smooth roll-off vs. sharp cut-off]


In practical systems, where the bandwidth available for transmitting data at
a rate of rb bits/sec is between rb/2 and rb Hz, a class of pr(t) with a
raised cosine frequency characteristic is most commonly used.
A raised cosine frequency spectrum consists of a flat amplitude portion and a
roll-off portion that has a sinusoidal form:

Pr(f) = Tb,                                    |f| ≤ rb/2 − β
        Tb cos²( (π/4β)(|f| − rb/2 + β) ),     rb/2 − β < |f| ≤ rb/2 + β
        0,                                     |f| > rb/2 + β

pr(t) = FT⁻¹{Pr(f)} = [ cos 2πβt / (1 − (4βt)²) ] · [ sin πrb t / (πrb t) ]
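The zero-ISI property of the raised cosine pulse can be checked numerically (a Python sketch; rb = 3600 and β = 600 are taken from the later design example):

```python
import math

def pr(t, rb, beta):
    """Raised cosine pulse: sinc(rb t) * cos(2 pi beta t) / (1 - (4 beta t)^2)."""
    sinc = 1.0 if t == 0 else math.sin(math.pi * rb * t) / (math.pi * rb * t)
    denom = 1 - (4 * beta * t) ** 2
    if abs(denom) < 1e-12:            # removable singularity at t = 1/(4 beta)
        return sinc * math.pi / 4
    return sinc * math.cos(2 * math.pi * beta * t) / denom

rb, beta = 3600.0, 600.0
Tb = 1 / rb
# Zero-ISI property: pr(0) = 1 and pr(n*Tb) = 0 for all n != 0
samples = [pr(n * Tb, rb, beta) for n in range(6)]
```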
[Plot: raised cosine frequency characteristic]


Summary
The BW occupied by the pulse spectrum is B = rb/2 + β.
The minimum value of B is rb/2 and the maximum value is rb.

Larger values of β imply that more bandwidth is required for a given bit
rate; however, they lead to faster-decaying pulses, which means that
synchronization will be less critical and timing errors will not cause
large ISI.

β = rb/2 leads to a pulse shape with two convenient properties:
the half-amplitude pulse width is equal to Tb, and there are zero crossings
at t = 3Tb/2, 5Tb/2, … in addition to the zero crossings at Tb, 2Tb, 3Tb, …
5.2.2 Optimum transmitting and receiving filters

The transmitting and receiving filters HT, HR are chosen to provide proper
pulse shaping and noise immunity.


- One of the design constraints we have for selecting the filters is the
relationship between the Fourier transforms of pr(t) and pg(t):

Pg(f) HT(f) HR(f) = Kc Pr(f) exp(−j2πf td)

where td is the time delay and Kc is a normalizing constant.

In order to design the optimum filters HT(f) and HR(f), we will assume that
Pr(f), Hc(f), and Pg(f) are known.

[Portion of a baseband PAM system]


[Diagram: Pg(f) → HT(f) → Hc(f) → HR(f) → Pr(f)]

If we choose pr(t) {Pr(f)} to produce zero ISI, we are left only to be
concerned with noise immunity; that is, we will choose

HT(f) and HR(f) ⇒ minimum noise effects


Noise Immunity

Problem definition:
For a given :
•Data Rate - rb
•Transmission power - ST
•Noise power Spectral Density - Gn(f)
•Channel transfer function - Hc(f)
•Raised cosine pulse - Pr(f)

Choose

{ H T (f )} and { H R (f )} ⇒ minimum of noise effects


Error probability calculations

At the m-th sampling time the input to the A/D is:

Y(tm) = Am + ∑_{k≠m} Ak Pr((m − k)Tb) + n0(tm)

We decide: "1" if Y(tm) > 0
           "0" if Y(tm) ≤ 0

Perror = Prob[Y(tm) > 0 | "0" was sent] · Prob("0" sent) +
         Prob[Y(tm) < 0 | "1" was sent] · Prob("1" sent)


Y(tm) = +A + n0(tm) if "1"
Y(tm) = −A + n0(tm) if "0"
A = a Kc

Prob("0" sent) = Prob("1" sent) = 0.5

Perror = ½ { Prob[n0(tm) < −A] + Prob[n0(tm) > A] }

The noise is assumed to be zero-mean Gaussian at the receiver input; the
output is then also zero-mean Gaussian, with variance N0 given by:

N0 = ∫_{−∞}^{∞} Gn(f) |HR(f)|² df
[Gaussian densities (1/√(2πN0)) exp(−n²/2N0) and (1/√(2πN0)) exp(−(n−A)²/2N0),
centered at 0 and A]

Perror = Prob[n > b] = ∫_b^∞ (1/√(2πN0)) exp(−z²/2N0) dz
[Conditional densities of y(tm), (1/√(2πN0)) exp(−(y(tm)+A)²/2N0) and
(1/√(2πN0)) exp(−(y(tm)−A)²/2N0), centered at −A and +A; the decision
threshold is at y(tm) = 0]

Perror = ½ Prob[Y(tm) > 0 | "0"] + ½ Prob[Y(tm) < 0 | "1"]


[The same two conditional densities, centered at −A and +A, shown for the
transmitted level Vtransmit and the received level Vreceived]



Pe = ½ ∫_{|x|>A} (1/√(2πN0)) exp(−x²/2N0) dx

   = ∫_A^∞ (1/√(2πN0)) exp(−x²/2N0) dx        (substitute z = x/√N0)

   = ∫_{A/√N0}^∞ (1/√(2π)) exp(−z²/2) dz = Q(A/√N0)

Q(u) = ∫_u^∞ (1/√(2π)) exp(−z²/2) dz
Q(u)

Q(u) = ∫_u^∞ (1/√(2π)) exp(−z²/2) dz

[Plot: standard Gaussian density; Q(u) is the tail area beyond u]

Pe = ∫_{A/√N0}^∞ (1/√(2π)) exp(−z²/2) dz = Q(A/√N0)
A/√N0 = signal-to-noise ratio (at the A/D input)
Perror decreases as A/√N0 increases.

Hence we need to maximize the signal-to-noise ratio.

Thus, for maximum noise immunity, the filter transfer functions HT(f)
and HR(f) must be chosen to maximize the SNR.
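The Q function is available in closed form through the complementary error function, which gives a quick numeric check of Pe = Q(A/√N0) (a small Python sketch):

```python
import math

def Q(u):
    """Gaussian tail probability: Q(u) = 0.5 * erfc(u / sqrt(2))."""
    return 0.5 * math.erfc(u / math.sqrt(2))

pe = Q(3.75)   # the operating point used in the later design example
# Pe falls rapidly as A/sqrt(N0) grows, which is why we maximize the SNR
```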


Optimum filter design calculations

We will express the SNR in terms of HT(f) and HR(f).

We start with the signal:

X(t) = ∑_{k=−∞}^{∞} ak pg(t − kTb)

GX(f) = (E{ak²}/Tb) |Pg(f)|² = (a²/Tb) |Pg(f)|²

The psd of the transmitted signal is given by:

GXT(f) = |HT(f)|² · GX(f)


And the average transmitted power ST is

ST = (a²/Tb) ∫_{−∞}^{∞} |Pg(f)|² |HT(f)|² df

With Ak = Kc ak (A = Kc a):

ST = (A²/(Kc² Tb)) ∫_{−∞}^{∞} |Pg(f)|² |HT(f)|² df

A² = ST Kc² Tb / ∫_{−∞}^{∞} |Pg(f)|² |HT(f)|² df

The average output noise power of n0(t) is given by:

N0 = ∫_{−∞}^{∞} Gn(f) |HR(f)|² df
The SNR we need to maximize is

A²/N0 = ST Tb / [ ∫_{−∞}^{∞} Gn(f) |HR(f)|² df · ∫_{−∞}^{∞} |Pr(f)|² / (|Hc(f)|² |HR(f)|²) df ]

where Pr(f) = Hc(f) HR(f) HT(f)

Or, equivalently, we need to minimize

γ² = ∫_{−∞}^{∞} Gn(f) |HR(f)|² df · ∫_{−∞}^{∞} |Pr(f)|² / (|Hc(f)|² |HR(f)|²) df
Using Schwartz’s inequality

∞ ∞ ∞ 2

∫ V(f ) df ⋅ ∫ W (f ) df ≥ ∫ V(f ) W (f )df


2 2

−∞ −∞ −∞

The minimum of the left side equaity is reached when


V(f)=const*W(f)

If we choose :
1/ 2
V (f ) = H R (f ) G n (f )
Pr (f )
W (f ) =
H R ( f ) H c (f )

Dr. Uri Mahlab 83


γ² is minimized when

|HR(f)|² = K |Pr(f)| / ( |Hc(f)| Gn(f)^{1/2} )

|HT(f)|² = Kc² |Pr(f)| Gn(f)^{1/2} / ( K |Hc(f)| |Pg(f)|² )

K - an arbitrary positive constant

The filters should have a linear phase response with a total time delay of td.


Finally we obtain the maximum value of the SNR to be:

(A²/N0)max = ST Tb / [ ∫_{−∞}^{∞} |Pr(f)| Gn(f)^{1/2} / |Hc(f)| df ]²

Perror = Q( √( (A²/N0)max ) )


For AWGN with Gn(f) = η/2, and Pg(f) chosen such that it does not change much
over the bandwidth of interest, we get:

|HR(f)|² = K1 |Pr(f)| / |Hc(f)|
|HT(f)|² = K2 |Pr(f)| / |Hc(f)|

A rectangular pulse can be used at the input of HT(f):

pg(t) = 1 for |t| < τ/2,  τ << Tb
        0 elsewhere
5.2.3 Design procedure and example

The steps involved in the design procedure.

Example: Design a binary baseband PAM system to transmit data at a bit rate
of 3600 bits/sec with a bit error probability less than 10⁻⁴.
The channel response is given by:

Hc(f) = 10⁻² for |f| ≤ 2400
        0 elsewhere

The noise spectral density is Gn(f) = 10⁻¹⁴ watt/Hz.
Solution:
rb = 3600 bits/sec
pe ≤ 10⁻⁴
B = 2400 Hz
Gn(f) = 10⁻¹⁴ watt/Hz

If we choose a raised cosine pulse spectrum with β = rb/6 = 600:

Pr(f) = 1/3600,                                   |f| ≤ 1200
        (1/3600) cos²( (π/2400)(|f| − 1200) ),    1200 < |f| ≤ 2400
        0,                                        |f| > 2400
We choose pg(t):

pg(t) = 1 for |t| < τ/2,  τ = Tb/10 = 0.28 × 10⁻⁴ sec
        0 elsewhere

Pg(f) = τ ( sin πfτ / πfτ )
Pg(0) = τ,  Pg(2400) = 0.973τ ≈ τ

|HT(f)| = K1 |Pr(f)|^{1/2}
|HR(f)| = |Pr(f)|^{1/2}

We choose K1 = (3600)(10³)
Pg(f) HT(f) Hc(f) HR(f) = Pr(f)

[Plots of Pg(f), Hc(f), HT(f), HR(f), and Pr(f)]


To maintain Pe ≤ 10⁻⁴:

Q( √( (A²/N0)max ) ) ≤ 10⁻⁴
√( (A²/N0)max ) ≥ 3.75
(A²/N0)max ≥ 14.06

ST = (1/Tb)(A²/N0)max [ ∫_{−∞}^{∞} |Pr(f)| Gn(f)^{1/2} / |Hc(f)| df ]²
   = (3600)(14.06)(10⁻¹⁴/10⁻⁴) [ ∫_{−∞}^{∞} Pr(f) df ]²

For Pr(f) with a raised cosine shape, ∫_{−∞}^{∞} Pr(f) df = 1

and hence ST = (14.06)(3600)(10⁻¹⁰) W ≈ −23 dBm
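The final arithmetic can be verified directly (a Python check of the numbers above):

```python
import math

rb = 3600.0                 # bits/sec
A2_N0_max = 14.06           # required (A^2/N0)max for Pe <= 1e-4
Gn = 1e-14                  # noise psd, watt/Hz
Hc = 1e-2                   # |Hc(f)| over the band
pr_integral = 1.0           # integral of a raised cosine Pr(f) df

ST = rb * A2_N0_max * (math.sqrt(Gn) / Hc * pr_integral) ** 2   # watts
ST_dbm = 10 * math.log10(ST / 1e-3)                             # ≈ -23 dBm
```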
Which completes the design.
EE 551/451, Fall, 2007

Communication Systems

Zhu Han
Department of Electrical and Computer Engineering

Class 1

Aug. 28, 2007


Motivations
 Recent Development
– Satellite Communications
– Telecommunication: Internet boom at the end of last decade
– Wireless Communication: next boom? iPhone
 Job Market
– Probably one of the easiest and highest-paid majors recently
– Intel changes to wireless
– Qualcomm, Broadcom, TI, Marvell, Cypress
 Research Potential
– One to one communication has less room to go, but
multiuser communication is still an open issue.
– Wimax, 3G, next generation WLAN
EE 541/451 Fall 2007
Communication System

A B
Engineering System

Social System

Genetic System

History and fact of communication



Communication System Components

[Diagram — transmitter: input → Source coder → Channel coder → Modulation →
D/A → channel (distortion and noise) → receiver: A/D → demodulation →
Channel decoder → Source decoder → reconstructed signal output]


Communication Process
 Message Signal
 Symbol
 Encoding
 Transmission
 Decoding
 Re-creation

 Broadcast
 Point to Point



Telecommunication
 Telegraph
 Fixed line telephone
 Cable
 Wired networks
 Internet
 Fiber communications
 Communication bus inside computers to communicate
between CPU and memory



Wireless Communications
 Satellite
 TV
 Cordless phone
 Cellular phone
 Wireless LAN, WIFI
 Wireless MAN, WIMAX
 Bluetooth
 Ultra Wide Band
 Wireless Laser
 Microwave
 GPS
 Ad hoc/Sensor Networks



Analog or Digital
 Common Misunderstanding: Any transmitted signals are
ANALOG. NO DIGITAL SIGNAL CAN BE TRANSMITTED
 Analog Message: continuous in amplitude and over time
– AM, FM for voice sound
– Traditional TV for analog video
– First generation cellular phone (analog mode)
– Record player
 Digital message: 0 or 1, or discrete value
– VCD, DVD
– 2G/3G cellular phone
– Data on your disk
– Your grade
 Digital age: why digital communication will prevail
Source Coder
 Examples
– Digital camera: encoder;
TV/computer: decoder
– Camcorder
– Phone
– Read the book
 Theorem
– How much information is
measured by Entropy
– More randomness, high
entropy and more information



Channel, Bandwidth, Spectrum
 Bandwidth: the number of bits per second is proportional to B
http://www.ntia.doc.gov/osmhome/allochrt.pdf



Power, Channel, Noise
 Transmit power
– Constrained by device, battery, health issue, etc.
 Channel responses to different frequency and different time
– Satellite: almost flat over frequency, change slightly over time
– Cable or line: response very different over frequency, change
slightly over time.
– Fiber: perfect
– Wireless: worst. Multipath reflection causes fluctuation in
frequency response. Doppler shift causes fluctuation over time
 Noise and interference
– AWGN: Additive White Gaussian noise
– Interferences: power line, microwave, other users (CDMA phone)



Shannon Capacity
 Shannon Theory
– It establishes that given a noisy channel with information capacity C and
information transmitted at a rate R, then if R<C, there exists a coding
technique which allows the probability of error at the receiver to be made
arbitrarily small. This means that theoretically, it is possible to transmit
information without error up to a limit, C.
– The converse is also important. If R>C, the probability of error at the
receiver increases without bound as the rate is increased. So no useful
information can be transmitted beyond the channel capacity. The theorem
does not address the rare situation in which rate and capacity are equal.
 Shannon Capacity

C = B log 2 (1 + SNR ) bit / s



Modulation
 Process of varying a carrier signal
in order to use that signal to
convey information
– Carrier signal can transmit far
away, but information cannot
– Modem: amplitude, phase, and
frequency
– Analog: AM, amplitude, FM,
frequency, Vestigial sideband
modulation, TV
– Digital: mapping digital
information to different
constellation: Frequency-shift
key (FSK)



Example
 Figure 10
 Modulation over carrier fc
s(t) = Ac cos(2π fc t) for symbol 1; −Ac cos(2π fc t) for symbol 0
 Transmission through the channel
x(t) = s(t) + w(t)
 Correlator

yT = ∫_0^T x(t) cos(2π fc t) dt = +0.5 Ac T + wT for symbol 1
                                  −0.5 Ac T + wT for symbol 0

 Decoding
– If the correlator output yT is greater than 0, the receiver outputs
symbol 1; otherwise it outputs symbol 0.

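The correlator arithmetic can be simulated numerically (a Python sketch with hypothetical values Ac = 1, fc = 10 Hz, T = 1 s; noise is omitted to expose the ±0.5·Ac·T means):

```python
import math

def correlate(x, fc, T, n=1000):
    """Riemann-sum approximation of yT = integral_0^T x(t) cos(2 pi fc t) dt."""
    dt = T / n
    return sum(x(i * dt) * math.cos(2 * math.pi * fc * i * dt)
               for i in range(n)) * dt

Ac, fc, T = 1.0, 10.0, 1.0
s1 = lambda t: Ac * math.cos(2 * math.pi * fc * t)     # symbol 1
s0 = lambda t: -Ac * math.cos(2 * math.pi * fc * t)    # symbol 0

y1 = correlate(s1, fc, T)      # ≈ +0.5 * Ac * T
y0 = correlate(s0, fc, T)      # ≈ -0.5 * Ac * T
decision = 1 if y1 > 0 else 0
```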


Channel Coding
 Purpose
– Deliberately add redundancy to the transmitted information, so
that if the error occurs, the receiver can either detect or correct it.
 Source-channel separation theorem
– If the delay is not an issue, the source coder and channel coder can
be designed separately, i.e. the source coder tries to pack the
information as hard as possible and the channel coder tries to
protect the packet information.
 Popular coder
– Linear block code
– Cyclic codes (CRC)
– Convolutional code (Viterbi, Qualcom)
– LDPC codes, Turbo code, 0.1 dB to Channel Capacity
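The detect-or-correct idea can be shown with the simplest possible channel code, a single even-parity check bit (an illustration only; none of the listed codes is this weak, but the redundancy principle is the same):

```python
def add_even_parity(bits):
    """Append one redundant bit so the codeword has an even number of 1s."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """Receiver check: any single flipped bit makes the weight odd."""
    return sum(codeword) % 2 == 0

tx = add_even_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
rx = tx[:]
rx[2] ^= 1                           # one bit error in the channel
error_detected = not parity_ok(rx)
```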



Quality of a Link (service, QoS)
 Mean Square Error

MSE = (1/N) ∑_{i=1}^{N} | X̂i − Xi |²

 Signal to noise ratio (SNR)

Γ = Prec / σ² = Ptx G / σ²

– Bit error rate
– Frame error rate
– Packet drop rate
– Peak SNR (PSNR)
– SINR/SNIR: signal to noise plus interference ratio
 Human factor
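The first two metrics are one-liners (a Python sketch with made-up sample values):

```python
import math

def mse(x_hat, x):
    """MSE = (1/N) * sum |x_hat_i - x_i|^2."""
    return sum((a - b) ** 2 for a, b in zip(x_hat, x)) / len(x)

def snr_db(p_signal, p_noise):
    """SNR in dB from signal and noise powers."""
    return 10 * math.log10(p_signal / p_noise)

err = mse([1.0, 2.1, 2.9], [1.0, 2.0, 3.0])
gamma = snr_db(1.0, 0.01)      # 20 dB
```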



Communication Networks
 Connection of 2 or more distinct (possibly dissimilar) networks.
 Requires some kind of network device to facilitate the
connection.
 Internet

Net A Net B



Broadband Communication



OSI Model
Open Systems Interconnections; Course offered next semester



TCP/IP Architecture
• TCP/IP is the de facto
global data
communications standard.
• It has a lean 3-layer
protocol stack that can be
mapped to five of the
seven in the OSI model.
• TCP/IP can be used with
any type of network, even
different types of networks
within a single session.



Summary
 Course Descriptions
 Communication System Structure
– Basic Block Diagram
– Typical Communication systems
– Analog or Digital
– Entropy to Measure the Quantity of Information
– Channels
– Shannon Capacity
– Spectrum Allocation
– Modulation
– Communication Networks



Digital
Communication
Vector Space
concept

114 Digital communication - vector approach


Dr. Uri Mahlab
Signal space
■ Signal Space
■ Inner Product
■ Norm
■ Orthogonality
■ Equal Energy Signals
■ Distance
■ Orthonormal Basis
■ Vector Representation
■ Signal Space Summary
Signal Space

S(t) S=(s1,s2,…)

•Inner Product (Correlation)


•Norm (Energy)
•Orthogonality
•Distance (Euclidean Distance)
•Orthogonal Basis

ONLY CONSIDER SIGNALS s(t) WITH

s(t) = 0 for t < 0 and t > T

Energy = Es = ∫_0^T s²(t) dt < ∞
Inner Product - (x(t), y(t))

( x(t), y(t) ) ≡ ∫_0^T x(t) y(t) dt

Similar to the vector dot product: x · y = |x||y| cos θ
Example

[x(t) = A on (0, T/2) and −A on (T/2, T); y(t) = A/2 on (0, T/2) and 2A on
(T/2, T)]

( x(t), y(t) ) = (A)(A/2)(T/2) + (−A)(2A)(T/2) = −(3/4) A² T
Norm - ||x(t)||

||x(t)||² ≡ ( x(t), x(t) ) = ∫_0^T x²(t) dt = Energy = Ex

||x(t)|| = √Ex

Similar to the norm of a vector: ||x||² = x · x

E.g. for x(t) = A cos(2πt/T) on (0, T):

||x(t)||² = ∫_0^T ( A cos(2πt/T) )² dt = A² T/2 = Ex
Orthogonality

( x(t), y(t) ) = 0:  ∫_0^T x(t) y(t) dt = 0

[Example: x(t) = A on the first half of (0, T) and −A on the second half;
y(t) = B constant on (0, T); the two are orthogonal]

Similar to orthogonal vectors: x · y = 0
ORTHONORMAL FUNCTIONS

( x(t), y(t) ) = 0  and  ||x(t)|| = ||y(t)|| = 1

[Example: x(t) and y(t) of amplitude √(2/T) on (0, T), with
∫_0^T x(t) y(t) dt = 0 and ∫_0^T x²(t) dt = ∫_0^T y²(t) dt = 1]
Correlation Coefficient

ρ ≡ ( x(t), y(t) ) / ( ||x(t)|| ||y(t)|| ) = ∫_0^T x(t) y(t) dt / √(Ex Ey)

1 ≥ ρ ≥ −1;  ρ = ±1 when x(t) = ±k y(t) (k > 0)

In vector presentation: ρ = cos θ = x · y / (|x||y|)
Example

[x(t) and y(t) as sketched on the slide: x(t) of amplitude 10A,
y(t) switching between A and −A at 7T/8]

( x(t), y(t) ) = ∫_0^T x(t) y(t) dt = (5/4) A² T

ρ = ( x(t), y(t) ) / √(Ex Ey) = (5/4)A²T / ( (10A√T)(√(7/8) A√T) ) ≈ 0.14

ρ shows the "real" correlation.
Distance, d

d² = ||x(t) − y(t)||² = ∫_0^T [x(t) − y(t)]² dt

d² = Ex + Ey − 2ρ √(Ex Ey)

• For equal-energy signals:  d² = 2E(1 − ρ)
• ρ = −1 (antipodal):  d = 2√E
• ρ = 0 (orthogonal):  d = √(2E)
• Antipodal signals are 3 dB "better" than orthogonal signals.
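These vector-space quantities can be checked on sampled signals (a Python sketch using x(t) = cos 2πt on (0, 1) and its antipodal partner):

```python
import math

def inner(x, y, dt):
    """(x(t), y(t)) approximated as sum x_i * y_i * dt."""
    return sum(a * b for a, b in zip(x, y)) * dt

n, T = 1000, 1.0
dt = T / n
x = [math.cos(2 * math.pi * i * dt) for i in range(n)]   # one full period
y = [-v for v in x]                                      # antipodal: y = -x

Ex, Ey = inner(x, x, dt), inner(y, y, dt)                # both equal E = 0.5
rho = inner(x, y, dt) / math.sqrt(Ex * Ey)               # -1 for antipodal
d = math.sqrt(Ex + Ey - 2 * rho * math.sqrt(Ex * Ey))    # = 2 * sqrt(E)
```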
Equal Energy Signals

d = √(2E(1 − ρ))

• To maximize d: ρ = −1 (antipodal signals), x(t) = −y(t), giving d = 2√E

• PSK (phase shift keying):
x(t) = A cos 2πf0 t
y(t) = −A cos 2πf0 t      (0 ≤ t ≤ T)
• Equal-Energy Orthogonal Signals (ρ = 0)

d = √(2E)

FSK (Frequency Shift Keying):

x(t) = A cos 2πf₁t
y(t) = A cos 2πf₀t      (0 ≤ t ≤ T)

(orthogonal if (f₁ − f₀)·T = 1/2, 1, 3/2, …)
Signal Space Summary

• Inner product: (x(t), y(t)) ≡ ∫₀ᵀ x(t) y(t) dt

• Norm: ||x(t)||² = (x(t), x(t)) = ∫₀ᵀ x²(t) dt = Energy

• Orthogonality: (x(t), y(t)) = 0; if in addition
  ||x(t)|| = ||y(t)|| = 1, the signals are orthonormal.
• Correlation coefficient:
  ρ = (x(t), y(t)) / (||x(t)|| ||y(t)||) = ∫₀ᵀ x(t) y(t) dt / √(Ex Ey)

• Distance:
  d² = ||x(t) − y(t)||² = ∫₀ᵀ [x(t) − y(t)]² dt = Ex + Ey − 2ρ√(Ex Ey)
Modulation

Modulation
 BPSK
 QPSK
 MPSK
 QAM
 Orthogonal FSK
 Orthogonal MFSK
 Noise
 Probability of Error
Binary Phase Shift Keying (BPSK)

x₀(t) = √(2E/T) cos 2πf₀t
x₁(t) = −√(2E/T) cos 2πf₀t      0 ≤ t ≤ T

We define the basis function φ₁(t) = √(2/T) cos 2πf₀t, so that

x₀(t) = √E φ₁(t),   x₁(t) = −√E φ₁(t)

R_bit = 1/T bits/sec, and the two signal points sit at ±√E on the φ₁
axis, at distance d = 2√E.
Binary Antipodal Signals: Vector Representation

Consider the two signals:

s₁(t) = −s₂(t) = √(2E/T) cos 2πf_c t,   0 ≤ t ≤ T

The equivalent lowpass waveforms are:

u₁(t) = −u₂(t) = √(2E/T),   0 ≤ t ≤ T
The vector representation is the signal constellation: two points at
+√E and −√E.
The cross-correlation coefficient is:

Re(ρ₁₂) = s₁·s₂ / (|s₁| |s₂|) = −1

The Euclidean distance is:

d₁₂ = {2E[1 − Re(ρ₁₂)]}^{1/2} = 2√E

Two signals with cross-correlation coefficient −1 are called antipodal.
Multiphase Signals

Consider the M-ary PSK signals:

s_m(t) = √(2E/T) cos[2πf_c t + (2π/M)(m − 1)],   m = 1, 2, …, M,  0 ≤ t ≤ T
       = √(2E/T) cos[(2π/M)(m − 1)] cos 2πf_c t
         − √(2E/T) sin[(2π/M)(m − 1)] sin 2πf_c t

The equivalent lowpass waveforms are:

u_m(t) = √(2E/T) e^{j2π(m−1)/M},   m = 1, 2, …, M,  0 ≤ t ≤ T
The vector representation is:

s_m = ( √E cos[(2π/M)(m − 1)], √E sin[(2π/M)(m − 1)] ),   m = 1, 2, …, M

or, in complex-valued form:

u_m = √(2E) e^{j2π(m−1)/M}

The points lie on a circle of radius √E (constellations shown for M = 4
and M = 8).
Their complex-valued correlation coefficients are:

ρ_km = (1/2E) ∫₀ᵀ u_k*(t) u_m(t) dt = e^{j2π(m−k)/M},   k, m = 1, 2, …, M

and the real-valued cross-correlation coefficients are:

Re(ρ_km) = cos[(2π/M)(m − k)]

The Euclidean distance between pairs of signals is:

d_km = {2E[1 − Re(ρ_km)]}^{1/2} = {2E[1 − cos((2π/M)(m − k))]}^{1/2}
The minimum distance d_min corresponds to |m − k| = 1:

d_min = √(2E[1 − cos(2π/M)])
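This expression is equivalent to 2√E sin(π/M) by the half-angle identity, as a short numeric check confirms:

```python
import numpy as np

# Sketch: the MPSK minimum distance √(2E[1 − cos(2π/M)]) equals
# 2√E·sin(π/M); it shrinks as M grows, the price paid for packing more
# bits into each symbol.
E = 1.0
for M in (2, 4, 8, 16):
    d_cos = np.sqrt(2 * E * (1 - np.cos(2 * np.pi / M)))
    d_sin = 2 * np.sqrt(E) * np.sin(np.pi / M)
    print(M, round(d_cos, 4), round(d_sin, 4))
```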
Quaternary PSK (QPSK)

Four signal points x₁(t)…x₄(t) at radius √Es from the origin, labeled
with the bit pairs (00), (01), (11), (10); d_min = √(2Es).
Define the orthonormal basis functions

φ₁(t) = √(2/T) cos 2πf₀t,   φ₂(t) = √(2/T) sin 2πf₀t

A signal x(t) = √(2E/T) cos(2πf₀t + θ), with amplitude A = √(2E/T), has
coordinates set by the angle θ:

a₁ = √E cos θ,   a₂ = −√E sin θ
x(t) = A cos(2πf₀t + θ),   A ≥ 0,   0 ≤ t ≤ T

x(t) = √(2E/T) cos θ · cos 2πf₀t − √(2E/T) sin θ · sin 2πf₀t
     = a₁φ₁(t) + a₂φ₂(t)

QPSK constellation: x₁ (00), x₂ (01), x₃ (11), x₄ (10), each at radius
√Es, with d_min = √(2Es).
e.g.  x₄(t) = √(2E/T) cos(2πf₀t + 3π/4)

d_min = √(2Es)

R_bit = 1/T_b bits/sec,   R_symbol = R_bit / 2

E_b = Es / log₂M = Es / log₂4 = Es / 2
MPSK

Signal points on a circle in the (φ₁, φ₂) plane, with

d_min = 2√E sin(π/M)
R_bit = (1/T) log₂M bits/sec

e.g. for M = 8:  E_b = Es / log₂8 = Es / 3
Multi-Amplitude Signals

Consider the M-ary PAM signals

s_m(t) = A_m √(2ε/T) cos 2πf_c t = A_m Re[u(t) e^{j2πf_c t}],   m = 1, 2, …, M

where the signal amplitude A_m takes the discrete values (levels)

A_m = 2m − 1 − M,   m = 1, 2, …, M

The signal pulse u(t), as defined, is rectangular,

u(t) = √(2ε/T),   0 ≤ t ≤ T

but other pulse shapes may be used to obtain a narrower signal spectrum.
Clearly, these signals are one-dimensional (N = 1) and hence are
represented by the scalar components

s_m1 = A_m √ε,   m = 1, 2, …, M

The distance between any pair of signals is

d_mk = √((s_m1 − s_k1)²) = √ε |A_m − A_k|

M = 2: points s₁, s₂ separated by 2√ε.
M = 4: points s₁…s₄, adjacent points separated by 2√ε.

Signal-space diagram for M-ary PAM signals.
The minimum distance between a pair of signals is

d_min = 2√ε
Multi-Amplitude, Multi-Phase Signals: QAM

A quadrature amplitude-modulated (QAM) signal, also called
quadrature-amplitude-shift-keying (QASK), is represented as

s_m(t) = A_mc √(2ε/T) cos 2πf_c t − A_ms √(2ε/T) sin 2πf_c t
       = Re[(A_mc + jA_ms) u(t) e^{j2πf_c t}]

where A_mc and A_ms are the information-bearing signal amplitudes of the
quadrature carriers, and u(t) = √(2ε/T), 0 ≤ t ≤ T.
QAM signals are two-dimensional and hence are represented by the vectors

s_m = (√ε A_mc, √ε A_ms)

The distance between a pair of signal vectors is

d_mk = |s_m − s_k| = √( ε[(A_mc − A_kc)² + (A_ms − A_ks)²] ),   k, m = 1, 2, …, M

When the signal amplitudes take the discrete values
{2m − 1 − M, m = 1, 2, …, M}, the minimum distance is d_min = 2√ε.
QAM (Quadrature Amplitude Modulation): signal points on a rectangular
grid in the (φ₁, φ₂) plane.
QAM = QASK = AM-PM

16-QAM example (square constellation, adjacent-point spacing d, point
coordinates ±d/2 and ±3d/2 on each axis). Per axis, the average of
(±d/2)² and (±3d/2)² is (d²/4 + 9d²/4)/2 = 5d²/4, so over both axes

E_AVG = 2 · (5d²/4) = (5/2) d²,   i.e.  d = √(2 E_AVG / 5)

T_symbol = T_bit · log₂M
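The 16-QAM average can be brute-forced over all 16 points to confirm the closed form:

```python
import numpy as np

# Sketch: brute-force check of the 16-QAM average symbol energy. With
# adjacent-point spacing d, coordinates are ±d/2 and ±3d/2 on each axis,
# and averaging |s|² over the 16 points gives E_AVG = (5/2) d².
d = 1.0
levels = np.array([-3.0, -1.0, 1.0, 3.0]) * d / 2
points = np.array([(i, q) for i in levels for q in levels])
E_avg = np.mean(np.sum(points**2, axis=1))
print(E_avg)   # 2.5 for d = 1
```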
Nested square QAM constellations: M = 4, 16, 32, 64, 128, 256.
For an M-ary QAM square constellation:

E_S,AVG = (M − 1) d² / 6,   M = 2ⁿ (n bits/symbol)

d² = 6 E_S / (2ⁿ − 1)

For a one-dimensional signal (M-ary PAM):

E_S,AVG = (M² − 1) d² / 12

In general, for large M, adding one bit requires 6 dB more energy to
maintain the same d.
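The square-constellation formula can be validated for several values of M against a direct average over the points:

```python
import numpy as np

# Sketch: numeric check of E_S,AVG = (M − 1) d² / 6 for square M-QAM
# constellations with adjacent-point spacing d, against a brute-force
# average over the constellation points.
d = 1.0
for n in (2, 4, 6):                              # M = 4, 16, 64
    M = 2**n
    k = int(round(np.sqrt(M)))
    levels = (2 * np.arange(k) - k + 1) * d / 2  # ±d/2, ±3d/2, ...
    pts = np.array([(i, q) for i in levels for q in levels])
    brute = np.mean(np.sum(pts**2, axis=1))
    formula = (M - 1) * d**2 / 6
    print(M, brute, formula)
```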
Binary Orthogonal Signals

Consider the two signals

s₁(t) = √(2E/T) cos 2πf_c t,   0 ≤ t ≤ T
s₂(t) = √(2E/T) sin 2πf_c t,   0 ≤ t ≤ T

where either f_c = 1/T or f_c >> 1/T, so that

Re(ρ₁₂) = (1/E) ∫₀ᵀ s₁(t) s₂(t) dt = 0

Since Re(ρ₁₂) = 0, the two signals are orthogonal.
The equivalent lowpass waveforms are:

u₁(t) = √(2E/T),   0 ≤ t ≤ T
u₂(t) = −j √(2E/T),   0 ≤ t ≤ T

The vector presentations are

s₁ = (√E, 0),   s₂ = (0, −√E)

which correspond to the signal-space diagram shown. Note that

d₁₂ = √(2E)
We observe that the vector representation for the equivalent lowpass
signals is u₁ = [u₁₁], u₂ = [u₂₁], where

u₁₁ = √(2E) + j0
u₂₁ = 0 − j√(2E)
M-ary Orthogonal Signals

Let us consider the set of M FSK signals

s_m(t) = √(2ε/T) cos[2πf_c t + 2πmΔf t] = Re[u_m(t) e^{j2πf_c t}],
                                  m = 1, 2, …, M,   0 ≤ t ≤ T

These waveforms are characterized as having equal energy and
cross-correlation coefficients

ρ_km = (1/T) ∫₀ᵀ e^{j2π(m−k)Δf t} dt
     = [sin πT(m−k)Δf / πT(m−k)Δf] · e^{jπT(m−k)Δf}
The real part of ρ_km is

ρ_r = Re(ρ_km) = [sin πT(m−k)Δf / πT(m−k)Δf] · cos πT(m−k)Δf
    = sin 2πT(m−k)Δf / 2πT(m−k)Δf

which is zero at Δf = 1/2T, 1/T, 3/2T, 2/T, … (plot of ρ_r versus Δf).
First, we observe that Re(ρ_km) = 0 when Δf = 1/2T and m ≠ k. Since
|m − k| = 1 corresponds to adjacent frequency slots, Δf = 1/2T represents
the minimum frequency separation between adjacent signals for
orthogonality of the M signals.
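The orthogonality at multiples of 1/(2T) can be checked by direct integration of two tones; the small residual comes from the double-frequency term, negligible when f_c >> 1/T. Frequencies here are illustrative:

```python
import numpy as np

# Sketch: unit-energy FSK tones with frequency separation Δf = k/(2T)
# are (nearly) orthogonal; the minimum separation is 1/(2T).
T = 1e-3
fc = 1e6
N = 200_000
t = np.linspace(0.0, T, N, endpoint=False)
dt = T / N

for k in (1, 2, 3):
    df = k / (2 * T)
    x = np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t)
    y = np.sqrt(2 / T) * np.cos(2 * np.pi * (fc + df) * t)
    rho = np.sum(x * y) * dt        # ≈ 0 at every multiple of 1/(2T)
    print(k, rho)
```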
For the case in which Δf = 1/2T, the FSK signals are equivalent to the
N-dimensional vectors

s₁ = (√ε, 0, 0, …, 0)
s₂ = (0, √ε, 0, …, 0)
⋮
s_N = (0, 0, …, 0, √ε)

where N = M (the signal-space diagram shows orthogonal signals for
M = N = 3). The distance between pairs of signals is

d_km = √(2ε)   for all m, k

which is also the minimum distance.
Biorthogonal Signals

A set of M biorthogonal signals can be constructed from M/2 orthogonal
signals by simply including the negatives of the orthogonal signals.
Thus, we require N = M/2 dimensions for the construction of M
biorthogonal signals (signal-space diagrams shown for M = 4 and M = 6).

We note that the correlation between any pair of waveforms is either
ρ_r = −1 or 0. The corresponding distances are d = 2√ε or √(2ε), with
the latter being the minimum distance.
Orthogonal FSK (Orthogonal Frequency Shift Keying)

x₀(t) = √(2E/T) cos 2πf₀t
x₁(t) = √(2E/T) cos 2πf₁t      0 ≤ t ≤ T

with (f₁ − f₀)T = 1/2, 1, 3/2, …, so that

(√(2/T) cos 2πf₀t, √(2/T) cos 2πf₁t) = ∫₀ᵀ (2/T) cos 2πf₀t · cos 2πf₁t dt = 0
Basis functions:

φ₁(t) = √(2/T) cos 2πf₀t,   φ₂(t) = √(2/T) cos 2πf₁t

"1" is sent as √E φ₁(t) and "0" as √E φ₂(t); the two signal points are
at distance d = √(2E).

R_bit = 1/T bits/sec
Orthogonal MFSK

x₁(t) = √(2E/T) cos 2πf₁t
x₂(t) = √(2E/T) cos 2πf₂t
x₃(t) = √(2E/T) cos 2πf₃t
⋮
All signals are orthogonal to each other: each x_m(t) lies at √E along
its own axis φ_m(t), and every pair is separated by d = √(2E).

R_bit = (1/T) log₂M bits/sec
How to Generate Signals
s_m(t) = A_mc √(2ε/T) cos 2πf_c t − A_ms √(2ε/T) sin 2πf_c t

(carrier waveforms √(2E_b) cos 2πf₀t and −√(2E_b) sin 2πf₀t, shown over
the symbol intervals 0, T, 2T, …, 6T)
s_m(t) = I(t)·cos 2πf_c t − Q(t)·sin 2πf_c t

The in-phase stream I(t) multiplies √(2E_b) cos 2πf₀t, the quadrature
stream Q(t) multiplies −√(2E_b) sin 2πf₀t, and the two products are
summed to form s_m(t).
IQ Modulator

I(t) multiplies the carrier √(2E_b) cos 2πf₀t, Q(t) multiplies
−√(2E_b) sin 2πf₀t, and the two branches are summed to produce s_m(t).
Pulse-Shaping Filter + IQ Modulator

The I(t) and Q(t) streams first pass through pulse-shaping filters, then
multiply the quadrature carriers and are summed to form s_m(t).
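The IQ modulator above can be sketched in a few lines. The ±1 symbol levels, 10 cycles per symbol, and rectangular (unshaped) pulses are illustrative choices, not a specific standard:

```python
import numpy as np

# Sketch of the IQ modulator: s(t) = I(t)·cos(2πf₀t) − Q(t)·sin(2πf₀t),
# with I and Q held constant over each symbol interval (rectangular pulses).
f0 = 10.0                                    # cycles per symbol interval
sps = 1000                                   # samples per symbol interval
I_sym = np.array([1.0, -1.0, 1.0])           # illustrative I levels
Q_sym = np.array([1.0, 1.0, -1.0])           # illustrative Q levels

t = np.arange(len(I_sym) * sps) / sps        # time in units of T
I = np.repeat(I_sym, sps)
Q = np.repeat(Q_sym, sps)
s = I * np.cos(2 * np.pi * f0 * t) - Q * np.sin(2 * np.pi * f0 * t)

print(s.shape, np.max(np.abs(s)))            # envelope is √2 for ±1 levels
```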
Noise
What About Noise?

• White Gaussian noise realizations n₁(t), n₂(t) on [0, T] can be
  expanded on the basis functions:

n₁(t) = Σ_{i=1}^∞ aᵢ φᵢ(t),   n₂(t) = Σ_{i=1}^∞ bᵢ φᵢ(t)

• The coefficients are random variables!
White Gaussian Noise (WGN)

p_n(f) = N₀/2   Watts/Hz

We write

n(t) = Σ_{i=1}^∞ nᵢ φᵢ(t)

• All nᵢ are Gaussian random variables.
• All nᵢ are independent:

f(n) = f(n₁, n₂, …) = f(n₁) f(n₂) ⋯ = Π_{i=1}^∞ f(nᵢ)
• All nᵢ have the same probability distribution:

E{nᵢ} = 0,   E{nᵢ²} = N₀/2

f(nᵢ) = (1/√(πN₀)) e^{−nᵢ²/N₀}
(noise vector with components n₁, n₂, n₃ representing n(t))

• White Gaussian noise has energy in every dimension:

f(n) = Π_{i=1}^∞ f(nᵢ) = Π_{i=1}^∞ (1/√(πN₀)) e^{−nᵢ²/N₀}
Probability of Error for Binary Signaling

The two signal waveforms are given as

s_m(t) = Re[u_m(t) e^{j2πf_c t}],   m = 1, 2,   0 ≤ t ≤ T

These waveforms are assumed to have equal energy E, and their equivalent
lowpass waveforms u_m(t), m = 1, 2, are characterized by the
complex-valued correlation coefficient ρ₁₂.
The optimum demodulator forms the decision variables

U_m = Re[ ∫₀ᵀ r(t) u_m*(t) dt ],   m = 1, 2

or, equivalently,

μ(u_m) = Re[ e^{jφ} r · u_m* ],   m = 1, 2

and decides in favor of the signal corresponding to the larger decision
variable.
Let us see that the two expressions yield the same probability of error.
Suppose the signal s₁(t) is transmitted in the interval 0 ≤ t ≤ T. The
equivalent lowpass received signal is

r(t) = α e^{−jφ} u₁(t) + z(t),   0 ≤ t ≤ T

Substituting it into the U_m expression, we obtain

U₁ = Re(2αE + N₁) = 2αE + N₁ᵣ
U₂ = Re(2αEρ + N₂) = 2αEρᵣ + N₂ᵣ

where N_m, m = 1, 2, represent the noise components in the decision
variables, given by

N_m = e^{jφ} ∫₀ᵀ z(t) u_m*(t) dt
and N_mr = Re(N_m).

The probability of error is just the probability that the decision
variable U₂ exceeds the decision variable U₁. But

P(U₂ > U₁) = P(U₂ − U₁ > 0) = P(U₁ − U₂ < 0)

Let us define the variable V as

V = U₁ − U₂ = 2αE(1 − ρᵣ) + N₁ᵣ − N₂ᵣ

N₁ᵣ and N₂ᵣ are Gaussian, so N₁ᵣ − N₂ᵣ is also Gaussian-distributed and,
hence, V is Gaussian-distributed with mean value

m_v = E(V) = 2αE(1 − ρᵣ)
and variance

σ_v² = E[(N₁ᵣ − N₂ᵣ)²] = E(N₁ᵣ²) − 2E(N₁ᵣN₂ᵣ) + E(N₂ᵣ²) = 4EN₀(1 − ρᵣ)

where N₀ is the power spectral density of z(t). The probability of error
is now

P(V < 0) = ∫_{−∞}^0 p(v) dv
         = (1/(√(2π) σ_v)) ∫_{−∞}^0 e^{−(v − m_v)²/2σ_v²} dv
         = (1/2) erfc( √[ (α²E/2N₀)(1 − ρᵣ) ] )
where erfc(x) is the complementary error function, defined as

erfc(x) = (2/√π) ∫ₓ^∞ e^{−t²} dt

It can be easily shown that

P₂ = (1/2) erfc( √[ (α²E/2N₀)(1 − ρᵣ) ] )
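The P₂ expression can be evaluated directly with the standard library's erfc. A sketch with α = 1 and E/N₀ given in dB, comparing antipodal (ρᵣ = −1) and orthogonal (ρᵣ = 0) signaling:

```python
import math

# Sketch: P₂ = (1/2) erfc(√[(α²E/2N₀)(1 − ρᵣ)]). The antipodal curve needs
# 3 dB less E/N₀ than the orthogonal curve for the same error probability.
def p2(EN0_dB, rho_r, alpha=1.0):
    EN0 = 10.0 ** (EN0_dB / 10.0)
    arg = (alpha**2) * EN0 / 2.0 * (1.0 - rho_r)
    return 0.5 * math.erfc(math.sqrt(arg))

for snr_dB in (0, 5, 10):
    print(snr_dB, p2(snr_dB, -1.0), p2(snr_dB, 0.0))
```

Shifting the orthogonal curve by 10·log₁₀2 ≈ 3.01 dB reproduces the antipodal curve exactly.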
Recall the distance results: d² = Ex + Ey − 2ρ√(Ex Ey); for equal-energy
signals d = √(2E(1 − ρ)), so antipodal signals (ρ = −1) have d = 2√E and
orthogonal signals (ρ = 0) have d = √(2E), a 3 dB difference.
It is interesting to note that the probability of error P₂ can be
expressed as

P₂ = (1/2) erfc( √( α² d₁₂² / 4N₀ ) )

where d₁₂ is the distance between the two signals (d₁₂² = 2E(1 − ρᵣ)).
Hence, we observe that an increase in the distance between the two
signals reduces the probability of error.
(Recap collage: the P₂ expression, the MPSK minimum distance
d_min = 2√E sin(π/M), the nested QAM constellations up to M = 256, and
the orthogonal FSK signal vectors.)
Signal-Space Analysis

ENSC 428 – Spring 2008


Reference: Lecture 10 of Gallager
Digital Communication System

Representation of Bandpass Signals

x(t) = s(t) cos(2πf_c t)

A bandpass real signal x(t) can be written as

x(t) = √2 Re[ x̃(t) e^{j2πf_c t} ],   where x̃(t) is the complex envelope.

Note that x̃(t) = x̃_I(t) + j·x̃_Q(t)   (in-phase and quadrature parts).
Representation of Bandpass Signals (cont'd)

(1)  x(t) = √2 Re[ x̃(t) e^{j2πf_c t} ]
          = √2 Re[ (x̃_I(t) + j x̃_Q(t)) (cos 2πf_c t + j sin 2πf_c t) ]
          = x̃_I(t) √2 cos 2πf_c t + x̃_Q(t) (−√2 sin 2πf_c t)

(2)  Writing x̃(t) = |x̃(t)| e^{jθ(t)}:

     x(t) = √2 Re[ |x̃(t)| e^{jθ(t)} e^{j2πf_c t} ]
          = |x̃(t)| √2 cos(2πf_c t + θ(t))
Relation between x(t) and x̃(t): multiply x(t) by √2 e^{−j2πf_c t} and
lowpass-filter to recover x̃(t). In the frequency domain,

X(f) = (1/√2) [ X̃(f − f_c) + X̃*(−(f + f_c)) ]

X₊(f) = { X(f), f > 0;  0, f < 0 },   X̃(f) = √2 X₊(f + f_c)
Energy of s(t)

E = ∫_{−∞}^∞ s²(t) dt
  = ∫_{−∞}^∞ |S(f)|² df      (Rayleigh's energy theorem)
  = 2 ∫₀^∞ |S(f)|² df        (conjugate symmetry of real s(t))
  = ∫_{−∞}^∞ |S̃(f)|² df
Representation of Bandpass LTI Systems

s(t) → h(t) → r(t)   corresponds to   s̃(t) → h̃(t) → r̃(t)

r̃(t) = s̃(t) ∗ h̃(t)

R̃(f) = S̃(f) H̃(f) = S̃(f) H(f + f_c),   because s(t) is band-limited

H(f) = H̃(f − f_c) + H̃*(−(f + f_c))

H₊(f) = { H(f), f > 0;  0, f < 0 },   H̃(f) = H₊(f + f_c)
Key Ideas
Examples (1): BPSK
Examples (2): QPSK
Examples (3): QAM
Geometric Interpretation
(I)
Geometric Interpretation
(II)
 I/Q representation is very convenient for some modulation types.
 We will examine an even more general way of looking at modulations,
  using the signal-space concept, which facilitates
   Designing a modulation scheme with certain desired properties
   Constructing optimal receivers for a given modulation
   Analyzing the performance of a modulation
 View the set of signals as a vector space!
Basic Algebra: Group
 A group is defined as a set of elements G and a binary operation,
  denoted by ·, for which the following properties are satisfied:
   For any elements a, b in the set, a·b is in the set.
   The associative law is satisfied: for a, b, c in the set,
    (a·b)·c = a·(b·c).
   There is an identity element e in the set such that a·e = e·a = a
    for all a in the set.
   For each element a in the set, there is an inverse element a⁻¹ in
    the set satisfying a·a⁻¹ = a⁻¹·a = e.
Group: Example
 The set of non-singular n×n matrices of real numbers, with matrix
  multiplication.
 Note: the operation does not have to be commutative to be a group.
 Example of a non-group: the set of non-negative integers with +
  (no additive inverses).

Unique identity? Unique inverse for each element?
 If a·x = a, then a⁻¹·a·x = a⁻¹·a = e, so x = e (similarly for x·a = a).
 If a·x = e, then a⁻¹·a·x = a⁻¹·e = a⁻¹, so x = a⁻¹.
Abelian Group
 If the operation is commutative, the group is an Abelian group.
   The set of m×n real matrices, with +.
   The set of integers, with +.
 Application? Later in channel coding (for error correction or error
  detection).
Algebra: Field
 A field is a set of two or more elements F = {α, β, …} closed under
  two operations, + (addition) and ∗ (multiplication), with the
  following properties:
   F is an Abelian group under addition.
   The set F − {0} is an Abelian group under multiplication, where 0
    denotes the identity under addition.
   The distributive law is satisfied: (α + β)∗γ = α∗γ + β∗γ
Immediately Following Properties
 α∗β = 0 implies α = 0 or β = 0.
 For any α, α∗0 = 0:
  α∗0 + α = α∗0 + α∗1 = α∗(0 + 1) = α∗1 = α; therefore α∗0 = 0.
 0∗0 = 0: for a non-zero α, its additive inverse is non-zero, and
  0∗0 = (α + (−α))∗0 = α∗0 + (−α)∗0 = 0 + 0 = 0.
Examples:
 The set of real numbers
 The set of complex numbers
 Later, finite fields (Galois fields) will be studied for channel coding
   E.g., {0, 1} with + (exclusive OR) and ∗ (AND)
Vector Space
 A vector space V over a given field F is a set of elements (called
  vectors) closed under an operation + called vector addition. There is
  also an operation ∗ called scalar multiplication, which operates on an
  element of F (called a scalar) and an element of V to produce an
  element of V. The following properties are satisfied:
   V is an Abelian group under +. Let 0 denote the additive identity.
   For every v, w in V and every α, β in F:
     (α∗β)∗v = α∗(β∗v)
     (α + β)∗v = α∗v + β∗v
     α∗(v + w) = α∗v + α∗w
     1∗v = v
Examples of Vector Spaces
 Rⁿ over R
 Cⁿ over C
 L² over C
Subspace

Let V be a vector space and S ⊂ V. If S is also a vector space with the
same operations as V, then S is called a subspace of V.

S is a subspace iff v, w ∈ S ⇒ av + bw ∈ S for all scalars a, b.
Linear Independence of Vectors

Def) A set of vectors v₁, v₂, …, vₙ ∈ V is linearly independent iff
a₁v₁ + a₂v₂ + … + aₙvₙ = 0 implies a₁ = a₂ = … = aₙ = 0.
Basis

Consider a vector space V over F (a field). We say that a set (finite or
infinite) B ⊂ V is a basis if
 • every finite subset B₀ ⊂ B of vectors is linearly independent, and
 • for every x ∈ V, it is possible to choose a₁, …, aₙ ∈ F and
   v₁, …, vₙ ∈ B such that x = a₁v₁ + … + aₙvₙ.

The sums in the above definition are all finite because, without
additional structure, the axioms of a vector space do not permit us to
meaningfully speak about an infinite sum of vectors.
Finite-Dimensional Vector Spaces

A set of vectors v₁, v₂, …, vₙ ∈ V is said to span V if every vector
u ∈ V is a linear combination of v₁, v₂, …, vₙ. Example: Rⁿ.

A vector space V is finite-dimensional if there is a finite set of
vectors u₁, u₂, …, uₙ that span V.
Let V be a finite-dimensional vector space. Then:

• If v₁, v₂, …, vₘ are linearly independent but do not span V, then V
  has a basis with n vectors (n > m) that includes v₁, v₂, …, vₘ.
• If v₁, v₂, …, vₘ span V but are linearly dependent, then a subset of
  v₁, v₂, …, vₘ is a basis for V with n vectors (n < m).
• Every basis of V contains the same number of vectors; this number is
  the dimension of the finite-dimensional vector space.
Example: Rⁿ and its basis vectors.

Inner Product Space (for length and angle). Example: Rⁿ.
Orthonormal Sets and the Projection Theorem

Def) A non-empty subset S of an inner product space is said to be
orthonormal iff
 1) ∀x ∈ S, ⟨x, x⟩ = 1, and
 2) if x, y ∈ S and x ≠ y, then ⟨x, y⟩ = 0.
Projection onto a Finite-Dimensional Subspace
 Gallager Thm 5.1
 Corollary: norm bound
 Corollary: Bessel's inequality
Gram-Schmidt Orthonormalization

Consider linearly independent s₁, …, sₙ ∈ V, an inner product space. We
can construct an orthonormal set {φ₁, …, φₙ} ⊂ V so that

span{s₁, …, sₙ} = span{φ₁, …, φₙ}

Procedure: Step 1 starts with s₁(t); each step k subtracts from s_k(t)
its projections onto φ₁, …, φ_{k−1} and normalizes the remainder.
(Key facts; worked Gram-Schmidt examples, steps 1-4; example application
of the projection theorem: linear estimation.)
L²([0, T]) is an inner product space. Consider the orthonormal set

φ_k(t) = (1/√T) exp(j2πkt/T),   k = 0, ±1, ±2, …

Any function u(t) in L²([0, T]) is u = Σ_{k=−∞}^∞ ⟨u, φ_k⟩ φ_k: the
Fourier series. For this reason, this orthonormal set is called
complete.

Thm: Every orthonormal set in L² is contained in some complete
orthonormal set.

Note that the complete orthonormal set above is not unique.
Significance? IQ modulation and the received signal in L²:

r(t, ξ) = s(t) + N(t, ξ) ∈ L²([0, T])
s(t) ∈ span{ √(2/T) cos 2πf_c t, −√(2/T) sin 2πf_c t }

Any signal in L² can be represented as Σᵢ rᵢφᵢ(t), and there exists a
complete orthonormal set

{ √(2/T) cos 2πf_c t, −√(2/T) sin 2πf_c t, φ₃(t), φ₄(t), … }
On Hilbert Space over C
(For special folks, e.g., mathematicians, only.)

L² is a separable Hilbert space. We have very useful results on
1) isomorphism and 2) countable complete orthonormal sets.

Thm: If H is separable and infinite-dimensional, then it is isomorphic
to l² (the set of square-summable sequences of complex numbers). If H is
n-dimensional, then it is isomorphic to Cⁿ. The same story holds for
Hilbert spaces over R; in some sense there is only one real and one
complex infinite-dimensional separable Hilbert space.

L. Debnath and P. Mikusinski, Hilbert Spaces with Applications, 3rd ed., Elsevier, 2005.
Hilbert Space

Def) A complete inner product space.

Def) A space is complete if every Cauchy sequence converges to a point
in the space.

Example: L²
An orthonormal set S in a Hilbert space H is complete iff (equivalent
definitions):
 1) There is no other orthonormal set strictly containing S (maximality).
 2) ∀x ∈ H, x = Σ ⟨x, eᵢ⟩ eᵢ
 3) ⟨x, e⟩ = 0 for all e ∈ S implies x = 0.
 4) ∀x ∈ H, ||x||² = Σ |⟨x, eᵢ⟩|²

Here we do not need to assume H is separable. The summations in 2) and
4) make sense because we can prove the following:
Only for Mathematicians (we don't need separability)

Let O be an orthonormal set in a Hilbert space H. For each vector
x ∈ H, the set S = {e ∈ O : ⟨x, e⟩ ≠ 0} is either empty or countable.

Proof: Let Sₙ = {e ∈ O : |⟨x, e⟩|² > ||x||²/n}. Then |Sₙ| < n (finite).
Also, any element e of S (however small ⟨x, e⟩ is) lies in Sₙ for some
sufficiently large n. Therefore S = ∪_{n=1}^∞ Sₙ, which is countable.
Theorem
 Every orthonormal set in a Hilbert space is contained in some complete
  orthonormal set.
 Every non-zero Hilbert space contains a complete orthonormal set
  (trivially follows from the above).

("Non-zero" Hilbert space means that the space has a non-zero element.
We do not have to assume a separable Hilbert space.)

Reference: D. Somasundaram, A First Course in Functional Analysis, Oxford, U.K.: Alpha Science, 2006.
Only for Mathematicians (separability is nice)

Equivalent definitions:
Def) H is separable iff there exists a countable subset D which is
dense in H, that is, D̄ = H.
Def) H is separable iff there exists a countable subset D such that
∀x ∈ H there exists a sequence in D converging to x.

Thm: If H has a countable complete orthonormal set, then H is separable.
Proof sketch: the set of linear combinations (loosely speaking) with
rational real and imaginary parts is dense.

Thm: If H is separable, then every orthogonal set is countable.
Proof sketch: normalize it; the distance between two orthonormal
elements is √2, so the elements are isolated from one another.
Signal Spaces: L² of Complex Functions

Use of an orthonormal set: for M-ary modulation {s₁(t), s₂(t), …, s_M(t)},
find orthonormal functions f₁(t), f₂(t), …, f_K(t) so that

{s₁(t), s₂(t), …, s_M(t)} ⊂ span{f₁(t), f₂(t), …, f_K(t)}
(Worked examples: orthonormal sets, bases, and signal constellations for
QPSK and for two square functions; geometric interpretation (III); key
observations.)
Vector XTMR/RCVR Model

Expand the signal and the noise on an orthonormal basis:

s(t) = Σ_{i=1}^N sᵢ φᵢ(t),   ⟨φᵢ, φⱼ⟩ = δᵢⱼ
n(t) = Σ_{i=1}^∞ nᵢ φᵢ(t)
r(t) = s(t) + n(t)

Correlation receiver: for each basis function,

rᵢ = ∫₀ᵀ r(t) φᵢ(t) dt = sᵢ + nᵢ,   i = 1, …, N

The vector transmitter maps (s₁, …, s_N) onto the waveforms φᵢ(t); the
waveform channel adds n(t); and the vector receiver recovers
(r₁, …, r_N) = (s₁ + n₁, …, s_N + n_N).
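The waveform-to-vector conversion can be sketched numerically. The basis pair, coordinates, and noise level below are illustrative:

```python
import numpy as np

# Sketch of the correlation receiver: project r(t) = s(t) + n(t) onto each
# orthonormal basis function to recover r_i = s_i + n_i.
rng = np.random.default_rng(1)
T, N = 1.0, 8192
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)

phi = np.array([np.sqrt(2 / T) * np.cos(2 * np.pi * t / T),
                np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)])  # orthonormal pair

s_vec = np.array([0.7, -1.2])                  # transmitted coordinates s_i
s = s_vec @ phi                                # s(t) = Σ s_i φ_i(t)
n = rng.normal(0.0, 0.05 / np.sqrt(dt), size=N)  # discrete white-noise stand-in
r = s + n

r_vec = np.array([np.sum(r * p) * dt for p in phi])  # r_i = ∫ r(t) φ_i(t) dt
print(r_vec)
```

The recovered coordinates equal the transmitted ones plus small independent Gaussian perturbations, exactly the vector-channel picture.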
"Communication Systems" (通信系统) course

Ch.5 Signal-Space Analysis


5.1 Introduction
5.2 Geometric Representation of Signals
5.3 Conversion of the Continuous AWGN Channel
into a Vector Channel
5.4 Likelihood Functions
5.5 Coherent Detection of Signals in Noise:
Maximum Likelihood Decoding
5.6 Correlation Receiver
5.7 Probability of Error
5.8 Summary and Discussion

256
National Mobile Communications Research Laboratory, Southeast University (东南大学移动通信国家重点实验室)

5.1 Introduction

Eᵢ = ∫₀ᵀ sᵢ²(t) dt,   i = 1, 2, …, M

pᵢ = P(mᵢ) = 1/M,   i = 1, 2, …, M

p_e = Σ_{i=1}^M pᵢ P(m̂ ≠ mᵢ | mᵢ)

x(t) = sᵢ(t) + w(t),   0 ≤ t ≤ T,   i = 1, 2, …, M

Minimizing p_e ⇒ the optimum receiver in the minimum-probability-of-error
sense.

Fig. 5.1 Block diagram of a generic digital communication system.


5.2 Geometric Representation of Signals

• Geometric representation: to represent any set of M energy signals as
  linear combinations of N orthonormal basis functions, where N ≤ M.
• Gram-Schmidt orthogonalization procedure: how to choose the N
  orthonormal basis functions for M energy signals.


Basic Representations

sᵢ(t) = Σ_{j=1}^N sᵢⱼ φⱼ(t),   0 ≤ t ≤ T,   i = 1, 2, …, M

sᵢⱼ = ∫₀ᵀ sᵢ(t) φⱼ(t) dt,   i = 1, 2, …, M,   j = 1, 2, …, N

∫₀ᵀ φᵢ(t) φⱼ(t) dt = δᵢⱼ = { 1 if i = j; 0 if i ≠ j }   (orthonormal)


Illustration of Concepts

Figure 5.4 Illustrating the geometric representation of signals for the
case when N = 2 and M = 3.


Orthonormal Basis Functions

sᵢ(t) = Σ_{j=1}^N sᵢⱼ φⱼ(t),   0 ≤ t ≤ T,   i = 1, 2, …, M

sᵢⱼ = ∫₀ᵀ sᵢ(t) φⱼ(t) dt,   i = 1, 2, …, M,   j = 1, 2, …, N

Fig. 5.3 Geometric representation of signals.



Orthonormal Basis Functions (Cont'd)

• The signal vector sᵢ = (sᵢ₁, sᵢ₂, …, s_iN)ᵀ
  - sᵢ(t) is completely determined by sᵢ
  - ||sᵢ|| is the length (or "absolute value", "norm") of sᵢ:

    ||sᵢ||² = sᵢᵀ sᵢ = Σ_{j=1}^N sᵢⱼ²

Orthonormal Basis Functions (Cont'd)

Eᵢ = ∫₀ᵀ sᵢ²(t) dt = sᵢᵀ sᵢ = ||sᵢ||²

∫₀ᵀ sᵢ(t) s_k(t) dt = sᵢᵀ s_k

||sᵢ − s_k|| is the Euclidean distance between sᵢ and s_k:

||sᵢ − s_k||² = Σ_{j=1}^N (sᵢⱼ − s_kⱼ)² = ∫₀ᵀ (sᵢ(t) − s_k(t))² dt

cos θᵢₖ = sᵢᵀ s_k / (||sᵢ|| · ||s_k||) is the angle between sᵢ and s_k.


Example: Schwarz Inequality

For real-valued signals:

[∫_{−∞}^∞ s₁(t) s₂(t) dt]² ≤ [∫_{−∞}^∞ s₁²(t) dt] [∫_{−∞}^∞ s₂²(t) dt]

For complex-valued signals:

|∫_{−∞}^∞ s₁(t) s₂*(t) dt|² ≤ [∫_{−∞}^∞ |s₁(t)|² dt] [∫_{−∞}^∞ |s₂(t)|² dt]

For either case, the equality holds if and only if s₂(t) = c s₁(t),
where c is any constant.

Example: Schwarz Inequality (Cont'd)

s₁(t) = s₁₁φ₁(t) + s₁₂φ₂(t),   s₁ = (s₁₁, s₁₂)ᵀ
s₂(t) = s₂₁φ₁(t) + s₂₂φ₂(t),   s₂ = (s₂₁, s₂₂)ᵀ

cos θ = s₁ᵀ s₂ / (||s₁|| ||s₂||)
      = ∫_{−∞}^∞ s₁(t) s₂(t) dt / [ (∫_{−∞}^∞ s₁²(t) dt)^{1/2} (∫_{−∞}^∞ s₂²(t) dt)^{1/2} ] ≤ 1

which gives

[∫_{−∞}^∞ s₁(t) s₂(t) dt]² ≤ [∫_{−∞}^∞ s₁²(t) dt] [∫_{−∞}^∞ s₂²(t) dt]


Gram-Schmidt Orthogonalization Procedure

φ₁(t) = s₁(t)/√E₁,   so that s₁(t) = √E₁ φ₁(t) = s₁₁φ₁(t)

s₂₁ = ∫₀ᵀ s₂(t) φ₁(t) dt
g₂(t) = s₂(t) − s₂₁φ₁(t)
φ₂(t) = g₂(t) / √(∫₀ᵀ g₂²(t) dt)

so that s₂(t) = √(∫₀ᵀ g₂²(t) dt) φ₂(t) + s₂₁φ₁(t) = s₂₂φ₂(t) + s₂₁φ₁(t)

In general, with sᵢⱼ = ∫₀ᵀ sᵢ(t) φⱼ(t) dt,

gᵢ(t) = sᵢ(t) − Σ_{j=1}^{i−1} sᵢⱼφⱼ(t),   φᵢ(t) = gᵢ(t) / √(∫₀ᵀ gᵢ²(t) dt)
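The procedure above can be sketched on sampled signals, with inner products approximated by Riemann sums; the two example signals are illustrative:

```python
import numpy as np

# Sketch of Gram-Schmidt on sampled signals: subtract the projection
# s_ij φ_j(t) for each basis function found so far, then normalize the
# remainder g_i(t).
def gram_schmidt(signals, dt):
    basis = []
    for s in signals:
        g = np.array(s, dtype=float)
        for phi in basis:
            g = g - (np.sum(s * phi) * dt) * phi   # g_i = s_i − Σ s_ij φ_j
        energy = np.sum(g**2) * dt
        if energy > 1e-12:                         # skip dependent signals
            basis.append(g / np.sqrt(energy))      # φ_i = g_i / √(∫ g_i²)
    return basis

T, N = 1.0, 10_000
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)
s1 = np.where(t < T / 2, 1.0, 0.0)
s2 = np.ones(N)
phis = gram_schmidt([s1, s2], dt)
print(np.sum(phis[0]**2) * dt, np.sum(phis[0] * phis[1]) * dt)
```

The output confirms that each φᵢ has unit energy and the pair is orthogonal.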

Example: 2B1Q Code

M = 4 and N = 1 φ1 (t ) = si (t ) / si , i = 1, 2,3, 4

Fig. 5.5 Signal-space representation of the 2B1Q code.


5.3 Conversion of the Continuous AWGN Channel into a Vector Channel

• In this section, we show that in an AWGN channel, only the projections
  of the noise onto the basis functions of the signal set affect the
  sufficient statistics of the signal detection; the remainder of the
  noise is irrelevant.


Signal Analysis with an AWGN Channel

x(t) = sᵢ(t) + w(t),   0 ≤ t ≤ T,   i = 1, 2, …, M

sᵢⱼ = ∫₀ᵀ sᵢ(t) φⱼ(t) dt

wⱼ = ∫₀ᵀ w(t) φⱼ(t) dt      (noise element affecting signal detection)

xⱼ = ∫₀ᵀ x(t) φⱼ(t) dt = sᵢⱼ + wⱼ

x′(t) = x(t) − Σ_{j=1}^N xⱼφⱼ(t)
      = sᵢ(t) + w(t) − Σ_{j=1}^N (sᵢⱼ + wⱼ)φⱼ(t)
      = w(t) − Σ_{j=1}^N wⱼφⱼ(t) = w′(t)      (irrelevant noise element)

Fig. 5.2 The AWGN channel.


Statistical Characterization

Since {Xⱼ} are jointly Gaussian and uncorrelated, they are statistically
independent.

μ_Xⱼ = E[Xⱼ] = E[sᵢⱼ + Wⱼ] = sᵢⱼ + E[Wⱼ] = sᵢⱼ

σ²_Xⱼ = var[Xⱼ] = E[(Xⱼ − sᵢⱼ)²] = E[Wⱼ²] = N₀/2

cov[Xⱼ, X_k] = E[(Xⱼ − μ_Xⱼ)(X_k − μ_X_k)] = E[WⱼW_k] = 0   (j ≠ k)

∫₀ᵀ φᵢ(t)φⱼ(t) dt = δᵢⱼ = { 1 if i = j; 0 if i ≠ j }

X(t), W(t): random processes;  x(t), w(t): sample functions;
Xⱼ, Wⱼ: random variables;  xⱼ, wⱼ: sample values.

Statistical Characterization (Cont'd)

Observation vector: X = [X₁, X₂, …, X_N]ᵀ

f_X(x|m_i) = ∏_{j=1}^{N} f_{X_j}(x_j|m_i)   (memoryless channel)

           = ∏_{j=1}^{N} (1/√(πN₀)) exp[−(x_j − s_ij)²/N₀]

           = (πN₀)^{−N/2} exp[−(1/N₀) Σ_{j=1}^{N} (x_j − s_ij)²],  i = 1, 2, …, M
Theorem of Irrelevance

Insofar as signal detection in AWGN is concerned, only the projections of the noise onto the basis functions of the signal set {s_i(t)}_{i=1}^{M} affect the sufficient statistics of the detection problem; the remainder of the noise is irrelevant.

The AWGN channel is therefore equivalent to the vector channel

x = s_i + w,  i = 1, 2, …, M
5.4 Likelihood Functions

Given the observation vector x, which message symbol m_i was transmitted?

Likelihood function: L(m_i) = f_X(x|m_i),  i = 1, 2, …, M

Log-likelihood function: l(m_i) = log L(m_i),  i = 1, 2, …, M

AWGN channel (constant terms dropped): l(m_i) = −(1/N₀) Σ_{j=1}^{N} (x_j − s_ij)²,  i = 1, 2, …, M
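As a one-line sketch (a helper of my own, not from the slides), the AWGN log-likelihood with constants dropped is just a scaled negative squared distance:

```python
import numpy as np

def log_likelihood(x, s_i, N0):
    """AWGN log-likelihood with constant terms dropped:
    l(m_i) = -(1/N0) * sum_j (x_j - s_ij)^2."""
    x = np.asarray(x, dtype=float)
    s_i = np.asarray(s_i, dtype=float)
    return -np.sum((x - s_i) ** 2) / N0
```

Choosing the message m_i that maximizes this quantity is therefore the same as choosing the signal point nearest to x.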
5.5 Coherent Detection of Signals in Noise: Maximum Likelihood Decoding

• The signal constellation
• The signal detection problem
• The optimum decision rules
The Signal Constellation

A set of N orthonormal basis functions defines a Euclidean space of dimension N.

The set of message points s_i in this space corresponding to the set of transmitted signals s_i(t) is called a signal constellation.

The observation vector x is represented by a received signal vector in the same Euclidean space.
The Signal Constellation (Cont'd)

Fig. 5.7 Illustrating the effect of noise perturbation, depicted in (a), on the location of the received signal point, depicted in (b).
The Signal Detection Problem

Given the observation vector x, perform a mapping from x to an estimate m̂ of the transmitted symbol, m_i, in a way that minimizes the probability of error in the decision-making process.

Probability of error when making the decision m̂ = m_i:

P_e(m_i|x) = P(m_i not sent|x) = 1 − P(m_i sent|x)
The Optimum Decision Rules

Set m̂ = m_i if
P(m_i sent|x) ≥ P(m_k sent|x) for all k ≠ i

The maximum a posteriori probability (MAP) rule: Set m̂ = m_i if
p_k f_X(x|m_k)/f_X(x) is maximum for k = i

When p_k = p_i for all k, this reduces to the maximum likelihood rule: Set m̂ = m_i if
l(m_k) = log f_X(x|m_k) is maximum for k = i
The Optimum Decision Rules (Cont'd)

• Graphical interpretation of the maximum likelihood decision rule
– Divide the observation space Z into M decision regions Z₁, Z₂, …, Z_M
– The rule is: observation vector x lies in region Z_i if l(m_k) is maximum for k = i
The Optimum Decision Rules (Cont'd)

With the AWGN channel, maximizing l(m_k)

equals minimizing:

Σ_{j=1}^{N} (x_j − s_kj)² = ‖x − s_k‖²

equals maximizing:

Σ_{j=1}^{N} x_j s_kj − (1/2) E_k

where E_k = Σ_{j=1}^{N} s_kj² is the energy of signal s_k(t).

Fig. 5.8 An illustration of the decision rule with the AWGN channel.
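The equivalence of the minimum-distance and correlation metrics can be checked numerically; an illustrative sketch (the QPSK-style constellation is made up for the example):

```python
import numpy as np

def ml_detect_distance(x, constellation):
    """ML detection in AWGN: choose the message point s_k nearest to the
    observation vector x in Euclidean distance."""
    return int(np.argmin(np.sum((constellation - x) ** 2, axis=1)))

def ml_detect_correlation(x, constellation):
    """Equivalent form: maximize the correlation metric x . s_k - E_k / 2,
    where E_k is the energy of signal s_k."""
    Ek = np.sum(constellation ** 2, axis=1)        # signal energies E_k
    return int(np.argmax(constellation @ x - Ek / 2))
```

The correlation form is what the correlator receiver of the next section computes: the inner product x·s_k, corrected by half the signal energy.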
5.6 Correlation Receiver

Fig. 5.9 The optimal receiver using correlators (detector/demodulator followed by signal transmission decoder).
Correlation Receiver (Cont'd)

h_j(t) = φ_j(T − t)

y_j(t) = ∫_{-∞}^{∞} x(τ) h_j(t − τ) dτ = ∫_{-∞}^{∞} x(τ) φ_j(T − t + τ) dτ

y_j(T) = ∫_{-∞}^{∞} x(τ) φ_j(τ) dτ = ∫₀ᵀ x(τ) φ_j(τ) dτ

Therefore, the correlator is equivalent to a matched filter sampled at time T.
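This equivalence is easy to verify numerically; a sketch with an assumed sinusoidal basis function (the signal amplitude and noise level are made up):

```python
import numpy as np

t = np.linspace(0, 1, 500, endpoint=False)
dt = t[1] - t[0]
phi = np.sqrt(2) * np.sin(2 * np.pi * t)        # a unit-energy basis function
rng = np.random.default_rng(2)
x = 1.7 * phi + rng.normal(0.0, 0.5, t.size)    # received waveform on [0, T]

# Correlator: y = integral over [0, T] of x(t) phi(t) dt
y_corr = np.sum(x * phi) * dt

# Matched filter h(t) = phi(T - t); its output sampled at t = T is the same
h = phi[::-1]
y_mf = np.convolve(x, h)[t.size - 1] * dt       # convolution sample at t = T

print(y_corr, y_mf)   # equal up to floating-point rounding
```

Sampling the full convolution at index `t.size - 1` corresponds to the instant t = T, where the time-reversed impulse response exactly overlays φ_j(t).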
Correlation Receiver (Cont'd)

Fig. 5.10 The optimal receiver using matched filters (detector/demodulator followed by signal transmission decoder).
5.7 Probability of Error

• Average probability of symbol error
• Invariance of the probability of error to rotation and translation
• Minimum energy signals
• Union bound on the probability of error
• Bit versus symbol error probabilities
Average Probability of Symbol Error

P_e = Σ_{i=1}^{M} p_i P(x does not lie in Z_i | m_i sent)

With equiprobable symbols (p_i = 1/M):

P_e = (1/M) Σ_{i=1}^{M} P(x does not lie in Z_i | m_i sent)
    = 1 − (1/M) Σ_{i=1}^{M} P(x lies in Z_i | m_i sent)
    = 1 − (1/M) Σ_{i=1}^{M} ∫_{Z_i} f_X(x|m_i) dx

Z_i: region in the observation space corresponding to decision m_i.
Invariance of the Probability of Error to Rotation and Translation

Rotation: s_{i,rotate} = Q s_i,  i = 1, 2, …, M  (Q Qᵀ = I)

Translation: s_{i,translate} = s_i − a,  i = 1, 2, …, M

Distance invariance:

‖x_rotate − s_{i,rotate}‖ = ‖x − s_i‖
‖x_translate − s_{i,translate}‖ = ‖x − s_i‖

With maximum likelihood detection on the AWGN channel, this distance invariance implies invariance of the error probability.
Rotational Invariance

Fig. 5.11 A pair of signal constellations illustrating the principle of rotational invariance.
Minimum Energy Signals

Energy of a signal constellation: ε = Σ_{i=1}^{M} ‖s_i‖² p_i

Translating the signal constellation by a vector amount a:

ε_translate = Σ_{i=1}^{M} ‖s_i − a‖² p_i = ε − 2aᵀE[s] + ‖a‖²

where E[s] = Σ_{i=1}^{M} s_i p_i

To minimize ε_translate: a = a_min = E[s], and ε_translate,min = ε − ‖a_min‖²
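A small numerical sketch of this minimum-energy translation (the helper name and the example constellation are made up for illustration):

```python
import numpy as np

def min_energy_translate(constellation, probs):
    """Translate a constellation by a_min = E[s] to minimize its average
    energy; returns the translated points and the energy saved, which
    equals ||a_min||^2."""
    s = np.asarray(constellation, dtype=float)
    p = np.asarray(probs, dtype=float)
    a_min = p @ s                          # E[s] = sum_i p_i s_i
    saved = np.sum(a_min ** 2)             # energy reduction ||a_min||^2
    return s - a_min, saved

# Example: an offset binary constellation {0, 2} with equal priors
pts, saved = min_energy_translate([[0.0], [2.0]], [0.5, 0.5])
print(pts.ravel(), saved)   # [-1.  1.] 1.0
```

The average energy drops from 2 to 1: centering the constellation on its mean removes the DC component without changing any pairwise distance, so the error probability is unaffected.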
Minimum Energy Signals (Cont'd)

Fig. 5.12 A pair of signal constellations illustrating the principle of translational invariance.
Union Bound on the Probability of Error

For the AWGN channel, the symbol error probability is:

P_e = 1 − (1/M) Σ_{i=1}^{M} ∫_{Z_i} f_X(x|m_i) dx

where f_X(x|m_i) = (πN₀)^{−N/2} exp[−(1/N₀) Σ_{j=1}^{N} (x_j − s_ij)²],  i = 1, 2, …, M

These formulations are impractical to calculate directly, so bounds are used instead.

Union bound: one such bound, obtained by simplifying the region of integration in the expression above.
Union Bound (Cont'd)

The probability that x is closer to s_k than to s_i, when s_i is sent, is the pairwise error probability:

P₂(s_i, s_k) = ∫_{d_ik/2}^{∞} (1/√(πN₀)) exp(−v²/N₀) dv = (1/2) erfc(d_ik / (2√N₀))

where d_ik = ‖s_i − s_k‖  and  erfc(u) = (2/√π) ∫_u^{∞} exp(−z²) dz

Fig. 5.13 Illustrating the union bound. (a) Constellation of four message points. (b) Three constellations with a common message point and one other message point retained from the original constellation.
Union Bound (Cont'd)

P_e(m_i) ≤ Σ_{k≠i} P₂(s_i, s_k) = (1/2) Σ_{k≠i} erfc(d_ik / (2√N₀)),  i = 1, 2, …, M

P_e = Σ_{i=1}^{M} p_i P_e(m_i) ≤ (1/2) Σ_{i=1}^{M} Σ_{k≠i} p_i erfc(d_ik / (2√N₀))

If the signal constellation is circularly symmetric about the origin:

P_e ≤ (1/2) Σ_{k≠i} erfc(d_ik / (2√N₀))  for all i

Using the minimum distance d_min:

P_e ≤ ((M − 1)/2) erfc(d_min / (2√N₀)) ≤ ((M − 1)/(2√π)) exp(−d_min² / (4N₀))
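The equiprobable-symbol form of the union bound is straightforward to evaluate with the standard-library `math.erfc`; an illustrative sketch (the QPSK-style constellation used below is made up):

```python
import math
import numpy as np

def union_bound(constellation, N0):
    """Union bound on the average symbol error probability for ML
    detection in AWGN with equiprobable messages:
    Pe <= (1/M) * sum_i (1/2) * sum_{k != i} erfc(d_ik / (2*sqrt(N0)))."""
    s = np.asarray(constellation, dtype=float)
    M = len(s)
    total = 0.0
    for i in range(M):
        for k in range(M):
            if k != i:
                d_ik = np.linalg.norm(s[i] - s[k])     # pairwise distance
                total += 0.5 * math.erfc(d_ik / (2.0 * math.sqrt(N0)))
    return total / M
```

For the four points (±1, ±1) the bound comes out slightly above the exact QPSK error probability, as a valid but loose upper bound should.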
Bit Versus Symbol Error Probabilities

• In general, there is no unique relationship between symbol error probability and bit error rate (BER).
• In the following two special cases, relationships exist.

Case 1: Gray coding

P_e ≈ P(∪_{i=1}^{log₂M} {ith bit is in error}) ≤ Σ_{i=1}^{log₂M} P(ith bit is in error) = log₂M · (BER)

Case 2: M = 2^K, and all symbol errors equally probable

BER = (M/2)/(M − 1) · P_e
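The two special-case relations as small helpers (a sketch; the function names are my own, not from the slides):

```python
import math

def ber_from_ser_gray(ser, M):
    """Case 1 (Gray coding): each symbol error typically flips one of the
    log2(M) bits of the symbol, so BER is approximately SER / log2(M)."""
    return ser / math.log2(M)

def ber_from_ser_equiprobable(ser, M):
    """Case 2 (M = 2^K with all symbol errors equally likely):
    BER = (M/2) / (M - 1) * SER."""
    return (M / 2) / (M - 1) * ser
```

For example, with M = 4 and a symbol error rate of 4%, Gray coding gives a BER of about 2%; with equally likely symbol errors, a 3% symbol error rate also maps to a 2% BER.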
5.8 Summary and Discussion

• Signal space analysis: representing each signal by an N-dimensional vector over an orthonormal basis.
• Maximum likelihood detection: deducing the most likely transmitted symbol from the channel output.
• Symbol error probability
– The union bound
– The relationship to BER