Sie sind auf Seite 1von 143

EE120 - Fall’15 - Lecture 1 Notes1 1

Licensed under a Creative Commons


Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
26 August 2015

Linear Time-Invariant (LTI) Systems

x(t) y(t)

Linearity: Two conditions must be satisfied:


1. Scaling:
ax (t) → ay(t) for any number a; (1)
2. Superposition:
x1 ( t ) + x2 ( t ) → y1 ( t ) + y2 ( t ). (2)

Corollary: If the input to a linear system is 0, the output must be 0.


Proof. Choose a = 0 in the scaling property.
Time-Invariance:A time shift in the input results is an identical time
shift in the output:
x ( t − T ) → y ( t − T ). (3)

Example: Moving average filter:


1
y[n] = ( x [n − 1] + x [n] + x [n + 1]) → LTI. (4)
3
Example: Median Filter:
y[n] = med{ x [n − 1], x [n], x [n + 1]} → TI, but nonlinear. (5)

5
4 x1 [n] 4 4 y1 [n]
3 3

n n

x2 [n] y2 [n]
2

n n
-1 -1 -1 -1
ee120 - fall’15 - lecture 1 notes 2

5 x1 [n] + x2 [n] ̸= y1 [n] + y2 [n]


4 4
3 3 3

n n

Discrete-Time (DT) LTI Systems: Convolution Sum


Section 2.1 in Oppenheim & Willsky
Let h[n] denote the response of an LTI system to the unit impulse:

1 δ[n]

Then, for any input x [n], the output is:


y[n] = ∑ x [k ]h[n − k] ”convolution sum” (6)
k=−∞

Proof. Rewrite x [n] as

x [n] = ... + x [−1]δ[n + 1] + x [0]δ[n] + x [1]δ[n − 1] + ... (7)



= ∑ x [k]δ[n − k] (8)
k =−∞

Since δ[n] → h[n], by time-invariance: δ[n − k] → h[n − k].


Then, by linearity: ∑k x [k]δ[n − k] → ∑k x [k ]h[n − k].
Example: For the moving average system above,

1 δ[n] h[n]
1 1 1
3 3 3

n n
h[n − k]
1 1 1
3 3 3

n
n−1 n+1
k
ee120 - fall’15 - lecture 1 notes 3


y[n] = ∑ x [k]h[n − k]
k =−∞
n +1
1 1
= ∑ 3
x [k] = ( x [n − 1] + x [n] + x [n + 1])
3
(9)
k = n −1

Example: For the median filter:

δ[n] h[n]

n n

Since the system is nonlinear, we can’t use convolution to predict the


output.

Continuous-Time (CT) LTI Systems: Convolution Integral


Section 2.2 in Oppenheim & Willsky
Unit impulse:
δ(t) , lim δ∆ (t) (10)
∆ →0 1
∆ δ∆ (t)
where δ∆ (t) is as in Figure 1.
Let h(t) denote the response of a LTI system to δ(t). ∆ t
Figure 1: δ∆ (t)
Then, for any input x (t), the output is :
Z ∞
y(t) = x (τ )h(t − τ )dτ ”convolution integral” (11)
−∞

Proof. First, note that the staircase approximation in Figure 2 recovers


x (t) as ∆ → 0:

x (t) = lim
∆ →0
∑ x (k∆)∆δ∆ (t − k∆). (12)
k =−∞

Next, let h∆ (t) denote the response of the system to δ∆ (t) and note
from the LTI property that the response to each term in the sum
above is x (k∆)∆h∆ (t − k∆). Thus, the response to x (t) is
∞ Z ∞
y(t) = lim
∆ →0
∑ x (k∆)h∆ (t − k∆)∆ =
−∞
x (τ )h(t − τ )dτ. (13)
k =−∞
ee120 - fall’15 - lecture 1 notes 4

Figure 2: Staircase approximation of


x (k∆)∆δ∆ (t − k∆) x ( t ).

x (t)

t
∆ 2∆ ··· k∆

Properties of LTI Systems


Section 2.3 in Oppenheim & Willsky
We will denote the convolution operation by 00 ∗00 .
1. Commutative Property:

x [n] ∗ h[n] = h[n] ∗ x [n] (14)

Proof.
∞ ∞
∑ x [k]h[n − k] = ∑ x [ n − r ] h [r ], (15)
k =−∞ r =−∞

with the change of variables (n − k) , r.

2. Distributive Property:

x [n] ∗ (h1 [n] + h2 [n]) = x [n] ∗ h1 [n] + x [n] ∗ h2 [n] (16)

h1 [n]
x[n] y[n] ≡ x[n] h1 [n] + h2 [n] y[n]
h2 [n]

3. Associative Property:

x [n] ∗ (h1 [n] ∗ h2 [n]) = ( x [n] ∗ h1 [n]) ∗ h2 [n] (17)

x[n] h1 [n] h2 [n] y[n] ≡ x[n] h1 [n] ∗ h2 [n] y[n]

Combine this with the commutative property


h1 [n] h2 [n] ≡ h2 [n] h1 [n]

Properties 1,2,3 above also hold for CT systems.


ee120 - fall’15 - lecture 1 notes 5

Determining Causality from the Impulse Response

For a DT LTI system, causality means:

h[n] = 0, ∀n < 0. (18)


For a CT LTI system, causality means:

h(t) = 0, ∀t < 0. (19)


Proof. Since y[n] = ∑∞
k =−∞ h [ k ] x [ n − k ], if h [ k ] 6 = 0 for some k < 0, then
y[n] depends on x [n − k ], where n − k > n.
Example: Moving average system above: h[−1] 6= 0 → noncausal.

Determining Stability from the Impulse Response

Stability criterion for a DT LTI system:



∑ |h[k]| < ∞. (20)
k =−∞

Stability criterion for a CT LTI system:


Z ∞
|h(τ )|dτ < ∞. (21)
−∞

Proof.
Sufficiency: Suppose ∑∞ k =−∞ | h [ k ]| < ∞ and show that bounded
inputs give bounded outputs:
| x [n]| ≤ B for all n, for some B > 0.
|y[n]| = | ∑k x [n − k]h[k]| ≤ ∑k | x [n − k]| · |h[k]| ≤ B ∑k |h[k]| < ∞.
Necessity: To prove ”stable ⇒ ∑k |h[k ]| < ∞” prove the contraposi-
tive:
00
∑ |h[k]| = ∞ ⇒ unstable.00 (22)
k
Let x [n] = sgn{ h[−n]}. Then, since y[n] = ∑∞
k =−∞ h [ k ] x [ n − k ]:

y [0] = ∑ h[k] x [−k] = ∑ h[k]sign{h[k]} = ∑ |h[k]| = ∞. (23)
k =−∞ k k

Examples:
1. Moving average system above:
1 1 1
∑ |h[k]| = 3 + 3 + 3 = 1 → stable. (24)
k
Rt 1
2. Integrator: y(t) = −∞ x (τ )dτ. h(t) is the unit step (see Figure 3),
and Z ∞ t
|h(τ )|dτ = ∞. (25) Figure 3: UnitStep
−∞
EE120 - Fall’15 - Lecture 2 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
31 August 2015

LTI Systems Described by Differential and Difference Equations

Linear Constant Coefficient Differential Equations:

N M
dk y(t) dk x (t)
x (t)→ ∑ ak dtk
= ∑ b k
dtk
→ y(t) (1)
k =0 k =0

dy dy N −1
If the initial conditions are zero: y(0) = dt (0) = ... = dt N −1 (0) = 0,
then this is an LTI system. Nonzero initial conditions destroy linear-
ity: if y(0) 6= 0, then x (t) ≡ 0 does not imply y(t) ≡ 0.
Example:

+
+ dy −y + x
x(t)
− C y(t) C = (2)
− dt R

With x (t) ≡ 0, solution is y(t) = y(0)e−t/RC (≡ 0 only if y(0) = 0).

Linear Constant Coefficient Difference Equations:

N M
x [n]→ ∑ a k y [ n − k ] = ∑ bk x [ n − k ] → y [ n ] (3)
k =0 k =0

LTI if y[−1] = y[−2] = ... = y[− N ] = 0.


Example: Accumulator
n
y[n] = ∑ x [ k ] → y [ n ] − y [ n − 1] = x [ n ] (4)
k =−∞

With x [n] ≡ 0, the solution is y[n] = y[−1] for n ≥ 0. 


Rewrite the difference equation (3) as:

a0 y[n] + a1 y[n − 1] + ... + a N y[n − N ] = b0 x [n] + ...b M x [n − M] (5)

and note that it defines a causal system if a0 6= 0.


Henceforth assume a0 = 1 (if a0 6= 1, we can divide the other coeffi-
cients by a0 ).
ee120 - fall’15 - lecture 2 notes 2

FIR vs. IIR Systems

The special case N = 0 above defines a finite impulse response (FIR)


system:
y[n] = b0 x [n] + ... + b M x [n − M] (6)
Impulse response:

h[n] = b0 δ[n] + b1 δ[n − 1] + ... + b M δ[n − M ] (7)

b1 b2 bM
b0 ...

1 2 M n
finite duration

Note that a FIR system is always stable, because the sum ∑n |h[n]| is
over a finite duration and, thus, finite.

An infinite impulse response (IIR) example:


y[n] − y[n − 1] = x [n], y[−1] = 0 (accumulator)
Impulse response:
h [ n ] − h [ n − 1] = δ [ n ]
h[0] = h[−1] + δ[0] = 0+1 = 1
h [1] = h [0] = 1
h [2] = h [1] = 1
..
.

h[n] 1
...
n
infinite duration
ee120 - fall’15 - lecture 2 notes 3

Block Diagram Representation of DT LTI Systems:

N M
y[n] = − ∑ ak y[n − k ] + ∑ bk x [ n − k ] (8)
k =1 k =0

b0
x [n] + + y[n]

D D
b1 − a1
+ +
D D
b2 − a2
+ +

This requires N + M delay elements (memory registers). For an


implementation with fewer memory registers, recall that changing
the order of two LTI systems in series does not change the result:
h1 [n] h2 [n] ≡ h2 [n] h1 [n]

x [n] b0 y[n] x [n] b0 y[n]


+ + + +

D D D
− a1 b1 =⇒ − a1 b1
+ + + +

D D D
− a2 b2 − a2 b2
+ + + +

Note that a FIR system can be implemented with the blue block only;
no feedback loops are required.
ee120 - fall’15 - lecture 2 notes 4

Response of LTI Systems to Complex Exponentials


Section 3.2 in Oppenheim & Willsky

Complex Exponentials

Continuous-time:
s=σ + jω
x (t) = est , s ∈ C −−−−−→ x (t) = |{z}
eσt e jωt
|{z} (9)
envelope periodic

Discrete-time:
z=re jω
x [n] = zn , z ∈ C −−−−→ x [n] = r n e jωn (10)

Figures 1 and 2 on page 7 plot est and zn for various values of s and z
in the complex plane.

The response of a LTI system to a complex exponential is the


same complex exponential scaled by a constant.

Z ∞ Z ∞

s(t−τ ) −sτ
x (t) → h(t) → y(t) = h(τ )e dτ = h(τ )e dτ est
−∞ −∞
| {z }
, H (s)
(11)

∞ ∞
!
x [n] → h[n] → y[n] = ∑ h[k]z n−k
= ∑ h[k]z −k
zn (12)
k =−∞ −∞
| {z }
, H (z)
H (s) and H (z) are called ”transfer functions” or ”system functions.”
Example: Find the transfer function H (s) for y(t) = x (t − 3).
If x (t) = est then
y(t) = x (t − 3) = es(t−3) = |{z}
e−3s est . (13)
= H (s)

Alternatively, use the impulse response h(t) = δ(t − 3):


Z ∞
H (s) = δ(τ − 3)e−sτ dτ = e−3s . (14)
−∞

Frequency Response of a LTI System

x (t) = e jωt (s = jω ) → y(t) = H ( jω )e jωt


(15)
x [n] = e jωn (z = e jω ) → y[n] = H (e jω )e jωn

R∞
H ( jω ) = −∞ h(τ )e− jωτ dτ
(16)
H (e jω ) = ∑ h[k]e− jωk
ee120 - fall’15 - lecture 2 notes 5

Filtering
Section 3.9 in Oppenheim & Willsky
LTI system designed such that H ( jω ) (H (e jω ) in DT) is zero or close
to zero for frequencies to be eliminated.
Example: Why is the moving average system a low-pass filter?

M
1
y[n] = ∑
2M + 1 k=− M
x [n − k] (17)

h[n]
1
2M+1

−M M

M
1 e jωM  
H (e jω ) = ∑ 2M + 1
e− jωk =
2M + 1 |
1 + e− jω + ... + e− jω2M
k =− M {z }
1−e− jω (2M+1) if w6=02
1−e− jω

. 2
since this is a geometric series
− jω ( M+ 12 ) jω ( M+ 21 ) − jω ( M+ 21 )
1 e e −e
= e jωM −
2M + 1
| e
{z
jω/2
}| e jω/2 − e− jω/2
{z }
=1 sin(ω ( M+1/2))
sin(ω/2)

(

1 if ω = 0
H (e ) = 1 sin(ω ( M+1/2)) (18)
2M +1 sin(ω/2)
ω 6= 0

2π π 2π
2M +1

Low frequencies pass through:


ω=0 x[n] y[n]
... 1 ... ... 1 ...
n n

High frequencies are attenuated:


1
ω=π x[n] = ejπn = (−1)n y[n] = (−1)n+M 2M+1
... ... ... ...
n n
ee120 - fall’15 - lecture 2 notes 6

Example: Is y[n] = 12 ( x [n] − x [n − 1]) low-pass or high-pass?

ω=π x[n] = (−1)n y[n] = (−1)n


... ... ... ...
n n

ω=0 x[n] ≡ 1 y[n] ≡ 0


... 1 ... ... 1 ...
n n

To find H (e jω ), note that the impulse response is:

h[n] 1/2

n
−1/2

1 1 − jω 1 1
H (e jω ) = ∑ h[n]e− jωn = − e
2 2
= (1 − e− jω ) = e− jω/2 2jsin(ω/2)
2 2
n=−∞
(19)

|H(ejω )| = sin(ω/2)
1

π 2π ω

Example: CT low-pass filter

dy
R RC + y = x y (0) = 0
dt
+
x(t) + C y(t) x = e jωt → y = H ( jω )e jωt


d n o
RC H ( jω )e jωt + H ( jω )e jωt = e jωt
dt
jωRCH ( jω ) + H ( jω ) = 1
Therefore,
1
H ( jω ) = (20)
1 + jωRC

1 |H(jω)| = √ 1
1+(RCω)2

ω
ee120 - fall’15 - lecture 2 notes 7

Im{s} Figure 1: The real part of est for various


3

2
values of s in the complex plane.
1.5

1
Note that est is oscillatory when s has
1

0.8

0.6
0.8

0.6
1
1

0
an imaginary component. It grows
0.5

unbounded when Re{s} > 0, decays to


0.4 0.4

0.2 0.2

0 0
0 -1

zero when Re{s} < 0, and has constant


-0.2 -0.2

-0.4 -0.4
-0.5
-0.6 -0.6
-2

-0.8

amplitude when Re{s} = 0.


-0.8
-1
-1
0 1 2 3 4 5 6 7 8 9 10 -1
0 1 2 3 4 5 6 7 8 9 10
-3
0 1 2 3 4 5 6 7 8 9 10
-1.5
0 1 2 3 4 5 6 7 8 9 10

1.5

1 2
1
1
0.8
0.8

0.6
0.6
1
0.4 0.5
0.4

0.2
0.2

0
0
0 0
-0.2
-0.2

-0.4
-0.4
-0.5
-1
-0.6
-0.6

-0.8 -0.8

-1 -1
-1
0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10 -2

-1.5
0 1 2 3 4 5 6 7 8 9 10
-3
0 1 2 3 4 5 6 7 8 9 10

2
2.8

1.8
2.6
1.6
2.4
1.4

2.2
1.2

2
1
1 1

0.9
0.9
1.8
0.8 0.8
0.8
0.7
1.6
0.6 0.7 0.6

0.5
0.6
0.4 1.4
0.4
0.5
0.3
0.2 1.2
0.4
0.2

0.1
0 1 2 3 4 5 6 7 8 9 10
0.3
0 1 2 3 4 5 6 7 8 9 10 0 1
0 1 2 3 4 5 6 7 8 9 10 0 1 2 3 4 5 6 7 8 9 10

Re{s}

Figure 2: The real part of zn for various


3

2
values of z in the complex plane. It
grows unbounded when |z| > 1, decays
0.8

0.6

1
0.4

0.2

0
0
to zero when |z| < 1, and has constant
-0.2

-0.4 -1
amplitude when z is on the unit circle
-0.6

-0.8
0 5 10 15 20 25
-2
(|z| = 1).
4 -3
0 5 10 15 20 25

3
Im{z} 3.5

2 3

2.5
1

2
0

1.5
-1

1
-2

0.5
-3

0
0 5 10 15 20 25
-4
0 5 10 15 20 25

Re{z}
2

1.5
1.5

1
0.5

0
0.5

-0.5

-1 0

-1.5

-0.5
0 5 10 15 20 25
-2
0 5 10 15 20 25

2
1

0.9

0.8

1.5 0.7

0.6

0.5

1 0.4

0.3

0.5
0.2

0.1

0
0 5 10 15 20 25

-0.5

-1

-1.5

-2
0 5 10 15 20 25
EE120 - Fall’15 - Lecture 3 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
2 September 2015

Fourier Series for Continuous-Time Periodic Signals


Section 3.3 in Oppenheim & Willsky
Recall:
)
1
e jθ + e− jθ

e jθ = cos θ + j sin θ cos θ = 2
(e jθ )∗ = e− jθ = cos θ − j sin θ 1
e jθ − e− jθ

sin θ = 2j

and e jω0 t = cos ω0 t + j sin ω0 t is a periodic signal with period T = ω0 .

Fourier Series represents a periodic signal x (t + T ) = x (t) ∀t as a


weighted sum of sinusoidals e jkω0 t k = 0, ∓1, ∓2, ...

x (t) = ∑ ak e jkω0 t ω0 = 2π
T (synthesis equation) (1)
k =−∞

k = 0: ak e jkω0 t ≡ a0 (”dc component”)


k = ∓1: fundamental frequency (”first harmonic”)
k = ∓2: ”second harmonic”
1 2
Example: x (t) = 1 + cos(2πt) + sin(4πt) + cos(6πt) (2)
2
| {z } | {z } |3 {z }
1 j4πt
= 2j e
= 14 e j2πt = 13 e j6πt
1 − j4πt
+ 14 e− j2πt −2e + 13 e− j6πt

then a0 = 1, a1 = a−1 = 14 , a2 = − a−2 = 1


2j , a3 = a−3 = 13 .

Property: For a real signal x (t) = x ∗ (t), ak = a∗−k .


Proof: Follows from the "conjugate symmetry" property: If x (t) has
Fourier series coefficient ak , then x ∗ (t) has Fourier series coefficients
bk = a∗−k . If x (t) is real, then x (t) = x ∗ (t); therefore, ak = bk = a∗−k .

How to find the Fourier Series coefficients ak ?

Multiply both sides of the synthesis equation (1) with e− jnω0 t and
integrate from 0 to T = 2π
ω0 :

Z T ∞ Z T


0
x (t)e− jnω0 t dt = ∑ ak
0
e j(k−n)ω0 t dt (3)
k =−∞ |  {z }
 T if k = n
=
 0 if k 6 = n
| {z }
= Tan
ee120 - fall’15 - lecture 3 notes 2

Therefore:
Z T
1
an = x (t)e− jnω0 t dt (analysis equation) (4)
T 0

1
RT
In particular, a0 = T 0 x (t)dt (average of x (t) over one period).

Example: Periodic Square Wave

x(t)
1
... ...
−T1 T1 T t

For k = 0, Z T
1 1 2T1
a0 = dt = (5)
T − T1 T
For k 6= 0,
Z T T1
1 1 1 −1
e− jkω0 t dt = e− jkω0 t

ak = (6)
T − T1 T jkω0 − T1
| {z }
=e jkω0 T1 −e− jkω0 T1
=−2jsin(kω0 T1 )
2 1
= sin(kω0 T1 ) = sin(2πk TT1 ) (7)
kω0 T kπ

Discrete-Time Periodic Signals

A discrete-time signal x [n] is periodic if there exists integer N 6= 0 s.t.

x [n + N ] = x [n] for all n. (8)

Question: Is x [n] = cos(ω0 n) periodic for any ω0 ?


Answer: No. It is periodic only when ω0 /π is rational. To find the
fundamental period N, find the smallest integers M, N such that

ω0 N = 2πM (9)

Examples:
1. cos(n) is not periodic;
2. cos( 5π7 n ), N = 14;
3. cos( π5 n), N = 10;
4. cos( 5π7 n ) + cos ( 5 n ), N = s.c.m.{14, 10} = 70.
π
ee120 - fall’15 - lecture 3 notes 3


Question: Which one is a higher frequency, ω0 = π or ω0 = 2 ?
Answer: ω0 = π

cos(πn) cos( 3π π
2 n) = cos( 2 n)
1 1

n n

N =2 N =4

In discrete time ω = π is the highest frequency, as depicted below.

1.5

0.5

-0.5

-1

-1.5

higher frequency -2
0 5 10 15 20 25

ω= π
2
1.5

1
1.5

1
0.5

0.5

0
ω=π ω=0 0

-0.5

-0.5
0 5 10 15 20 25
-1

-1.5

-2
0 5 10 15 20 25


ω= 2

lower frequency 2

1.5

0.5

-0.5

-1

-1.5

-2
0 5 10 15 20 25

Discrete-Time Fourier Series

The complex exponential signal

e jω0 n = cos(ω0 n) + jsin(ω0 n)

is periodic if ω0 N = 2πM for some integers M, N:

e jω0 (n+ N ) = e jω0 n e|jω 0N


{z } = e
jω0 n
(10)
=e j2πM =1

The Fourier Series expresses the periodic sequence x [n + N ] = x [n] as


ee120 - fall’15 - lecture 3 notes 4

a linear combination of

Φk [n] , e jkω0 n , k = 0, ∓1, ∓2, ..., ω0 = 2π


N (11)

Key difference between CT and DT:

e j(k+ N )ω0 n = e jkω0 n (12)

because e jNω0 n = e j2πMn = 1. Therefore,

Φk [n] = Φk+ N [n] = Φk+2N [n] = ... (13)

and N independent functions Φk [n] (e.g., Φ0 [n], Φ1 [n], ..., Φ N −1 [n]) are
enough for the Fourier Series. We use the finite series

x [n] = ∑ ak Φk [n] (Synthesis Equation) (14)


k =h N i

where k = h N i means any set of N successive integers: k = 0, 1, ..., N −


1, or k = 1, 2, ..., N, or other choices.

Example: For N = 6, Φk [n] = e jk 6 n

k=1 Im Im k=2
Φ1 [2] Φ1 [1] = Φ1 [7] Φ2 [1]

Φ1 [0] = Φ1 [6] Φ2 [0] = Φ2 [3]


Φ1 [3] Re Re

Φ1 [4] Φ1 [5] Φ2 [2]

k=3 Im Im k=6

Φ6 [0] =
Φ3 [0] = Φ3 [2] = Φ6 [1] = ...
Φ3 [1] Re Re
(or Φ0 [n] ≡ 1)

Properties of Φk [n]:

1. Periodicity in n: Φk [n + N ] = Φk [n];
2. Periodicity in k: Φk+ N [n] = Φk [n];
3. (
N if k = 0, ∓ N, ∓2N, ...
∑ Φk [n] = 0 otherwise (15)
n=h N i

4. Φk [n] · Φm [n] = Φk+m [n] (follows from definition Φk [n] = e jk N n)
ee120 - fall’15 - lecture 3 notes 5

Finding the Fourier Series coefficients ak :

Multiply both sides of (14) by Φ−m [n] and sum over n = h N i:

∑ x [ n ] Φ−m [ n ] = ∑ ∑ ak Φk−m [ n ] (16)


n=h N i n=h N i k =h N i

= ∑ ak ∑ Φk−m [ n ] (17)
k=h N i n=h N i

| {z }
 N if k = m(modN )
=
 0 otherwise
= Nam (18)

Replace m → k:

1 2π
ak =
N ∑ x [n]e− j N kn (Analysis Equation) (19)
n=h N i

Summary:

CT DT
∞ 2π 2π
Synthesis x (t) = ∑ ak e jk T t x [n] = ∑ ak e jk Nn
k =−∞ k =h N i

1 2π 1 2π
Z
Analysis ak =
T T
x (t)e− jk T t ak =
N ∑ x [n]e− jk Nn
n=h N i

Example:
   
2π 4π π
x [n] = 1 + sin n + cos n+ N = 10
10 10 4
| {z } | {z }
2π π 4π
1 j 10 n
= 12 e 4 e 10
j j n
= 2j e
2π π 4π
+ 21 e 4 e 10
1 − j 10 n −j −j n
− 2j e

1 1 1 π 1 π
= 1+ Φ 1 [ n ] − Φ −1 [ n ] + e j 4 Φ 2 [ n ] + e − j 4 Φ −2 [ n ]
2j 2j 2 2

If we choose h N i to be {0, 1, 2, ..., 9}, then


π
a0 = 1, a1 = 1
2j , a2 = 21 e j 4 , a3 = a4 = a5 = a6 = a7 = 0, (20)
π
a8 = 21 e− j 4 , a9 = − 2j1 (21)

Note: As in CT, x [n] real implies that a−k = a∗k . Combined with the
periodicity of coefficients in DT (a N −k = a−k ): a N −k = a∗k .
ee120 - fall’15 - lecture 3 notes 6

Example: Rectangular pulse train


x[n]
1
... ...
−N1 N1 N n

For the special case N1 = 0 ("impulse train"):


1 2π 1 2π 1
ak =
N ∑ x [n]e− jk Nn =
N
x [0]e− jk N 0 =
N
∀k.
n=h N i

Derive the following for N1 6= 0:


( 2N +1
1
N k=0
ak = 1 sin(kπ (2N1 +1)/N ) (22)
N sin(kπ/N )
k 6= 0.

The figure below shows how the partial sum


M
∑ ak Φk [n] (23)
k =− M

progressively reconstructs x [n] as more harmonics are included.

1.2
Figure 1: The partial sum (23) with
1 Fourier coefficients (22), for N = 9
0.8 and N1 = 2. When M = 4, (23) is the
M=1
x[n]

0.6 complete Fourier series; thus we fully


0.4 recover the rectangular pulse.
0.2
0
-20 -15 -10 -5 0 5 10 15 20
n
1.2
1
0.8
M=2
x[n]

0.6
0.4
0.2
0
-20 -15 -10 -5 0 5 10 15 20
n
1.2
1
0.8
M=3
x[n]

0.6
0.4
0.2
0
-20 -15 -10 -5 0 5 10 15 20
n
1.2
1
0.8
x[n]

M=4 0.6
0.4
0.2
0
-20 -15 -10 -5 0 5 10 15 20
n
ee120 - fall’15 - lecture 3 notes 7

Fourier Series as a "Change of Basis"

Consider the period-two signal:


(
2 if n even
x [n] =
3 if n odd.

We have N = 2 and the Fourier series is:

x [ n ] = a0 Φ0 [ n ] + a1 Φ1 [ n ]

where Φ0 [n] ≡ 1, Φ1 [n] = (−1)n . Applying the analysis equation,


you can show that:
5 1
a0 = a1 = − .
2 2
Now view x [n] as a vector:
" #
2
x=
3

whose entries are the values x [n] takes at n = 0, 1 and the dimension
is two because the period is N = 2.
Then, " # " #
1 1
Φ0 = Φ1 =
1 −1
can be viewed as new basis vectors and the Fourier series can be
interpreted as a change of basis:
" # " # " #
5 1 1 1 2
x = a0 Φ0 + a1 Φ1 = − = .
2 1 2 −1 3

The advantage of the new basis is that, instead of the values in time,
the signal is represented with coefficients of its frequency compo-
nents. This allows, for example, compression algorithms that allocate
more bits to accurately store the coefficients of frequency components
that matter more to the quality of sound than other frequencies.
EE120 - Fall’15 - Lecture 4 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
9 September 2015

Continuous Time Fourier Transform


Chapter 4 in Oppenheim & Willsky
Applicable to aperiodic signals (unlike Fourier series which is appli-
cable only to periodic signals).
Main idea: Treat aperiodic signal x (t) as the limit of a periodic signal
x̃ (t) as period T → ∞ (see figure below). As T increases, the funda-
mental frequency ω0 = 2π T decreases and the harmonic components
become closer in frequency, forming a continuum in the limit T → ∞.
Example:
x(t) x̃(t)
1
... ...
−T1 T1 t −T −T1 T1 T t
Aperiodic Periodic

Recall from Lecture 3 that the periodic signal on the right has Fourier
series coefficients:
2sin(kω0 T1 ) 2π
ak = , ω0 = T . (1)
kω0 T
This expression is not defined for k = 0,
Then but we interpret a0 to be the limit as
k → 0, i.e., a0 = 2T1 /T.

2sin(ωT1 )
Tak = (2)
ω
ω =kω0

a0 T
a1 T
envelope for T ak
a2 T

ω0 ω

This envelope is the “Fourier transform" of x (t) above. In general:


1
Z
ak = x̃ (t)e− jkω0 t dt (3)
T T
Z ∞


Z 
Tak = x (t)e− jkω0 t dt = x (t)e− jωt dt (4)


−∞ ∞
| {z } ω = kω 0

, X ( jω ) (Fourier Transform)
ee120 - fall’15 - lecture 4 notes 2

R∞
X ( jω ) = −∞ x (t)e− jωt dt (Analysis Equation)
R∞ (5)
1 jωt dω
x (t) = 2π −∞ X ( jω ) e (Synthesis Equation)

Derivation of the synthesis equation:


∞ ∞
1 2π h i
x̃ (t) = ∑ ak e jkω0 t = ∑ X ( jω )e jωt 2 (6)

k =−∞
2π T
k =−∞ |{z}
ω =kω0
= ω0
2
The summation term on
the right can be pictured as:
Take the limit as T → ∞:
Z ∞ X(jω)ejωt
1
x (t) = X ( jω )e jωt dω (7)
2π −∞

Convergence: If the "Dirichlet conditions" below hold, then kω0


ω
(k+1)ω0
Z W
1
2π X ( jω )e jωt dω
−W

converges to x (t) as W → ∞ for all t, except at discontinuities where


it converges to the average:
R∞
D1) x (t) is absolutely integrable: ∞ | x (t)|dt < ∞;
D2) x (t) has finite # of minima and maxima within any finite inter-
val;
D3) x (t) has finite # of discontinuities within any finite interval and
the discontinuities are finite.
Examples:

1) x (t) = e−at u(t), a > 0.


Z ∞ Z ∞ ∞
− at − jωt 1
X ( jω ) = e e dt = e−(a+ jω )t dt = e−(a+ jω )t

0 0 a + jω | {z 0}
=1
1 p
X ( jω ) = , | a + jω | = a2 + ω 2 , ]a + jω = tan−1 (ω/a)
a + jω
1
| X ( jω )| = √ , ]X ( jω ) = − tan−1 (ω/a)
a2 + ω 2

|X(jω)| π/2 ∡X(jω)


1/a

ω
ω −π/2
ee120 - fall’15 - lecture 4 notes 3

2) Dirac delta:

x (t) = δ(t)
Z ∞
X ( jω ) = δ(t)e− jωt dt = 1 for all ω
−∞

3) Rectangular pulse:
(
1 |t| < T1 x(t)
x (t) =
0 |t| ≥ T1

Z T
1
X ( jω ) = e− jωt dt −T1 T1 t
− T1
R T1
For ω = 0, X ( j0) = − T1 dt = 2T1 . For ω 6= 0,

e jωT1 − e− jωT1
Z T
1 1 − jωt T1 2sin(ωT1 )
e− jωt dt = − e = = .
− T1 jω T1 jω ω
sinπθ

Combining, θ 6= 0
sinc(θ ) , πθ
1 θ = 0.
(
ω=0
 
2T1 T1 1
X ( jω ) = 2sin(ωT1 ) = 2T1 sinc ω
ω ω 6= 0 π
−3 −2 −1 1 2 3
θ
4) (
1 |ω | < W
X ( jω ) =
0 |ω | ≥ W

A derivation similar to Example 3 gives:


Z W  
1 W W
x (t) = e jωt dω = sinc t
2π −W π π

Note the duality in Examples 3 and 4.

FT
rectangular pulse ↔ sinc
FT
sinc ↔ rectangular pulse

5) X ( jω )= 2πδ(ω − ω0 )
Z ∞
1
x (t) = 2πδ(ω − ω0 )e jωt dω = e jω0 t
2π −∞
FT
If ω0 = 0, then 1 ↔ 2πδ(ω ). Note the duality with Example 2.
ee120 - fall’15 - lecture 4 notes 4

Fourier Transform of Periodic Signals


Section 4.2 in Oppenheim & Willsky
From Example 5 above:
FT
e jω0 t ↔ 2πδ(ω − ω0 ) (8)

By linearity:
∞ ∞
FT
∑ ak e jkω0 t ↔ ∑ 2πak δ(ω − kω0 ) (9)
k =−∞ k =−∞

Example:
x (t) = cos(ω0 t) = 21 e jω0 t + 21 e− jω0 t
X ( jω ) = πδ(ω − ω0 ) + πδ(ω + ω0 )

X(jω)
π π

−ω0 ω0 ω

Example: Impulse Train



x (t) = ∑ δ(t − kT )
k =−∞
1 T/2 1
Z
ak = δ(t)e− jkω0 t dt = for all k
T − T/2 T


X ( jω ) =
T ∑ δ(ω − kω0 )
k =−∞

x(t) X(jω)

... ... FT
... ...
−T T 2T t ω0 2ω0 3ω0 ω

Properties of the Fourier Transform


Section 4.3 in Oppenheim & Willsky
FT FT
Consider x (t) ↔ X ( jω ) and y(t) ↔ Y ( jω ).
Linearity:

FT
ax (t) + by(t) ↔ aX ( jω ) + bY ( jω ), a, b ∈ R (10)

Time-Shift:
FT
x (t − t0 ) ↔ e− jωt0 X ( jω ) (11)
ee120 - fall’15 - lecture 4 notes 5

Proof:
Z ∞ Z ∞
x (t − t0 )e− jωt dt = x (τ )e− jω (t0 +τ ) dτ
−∞ | {z } −∞

Z ∞
= e− jωt0 x (τ )e− jωτ dτ
−∞
| {z }
= X ( jω )

Conjugation and Conjugate Symmetry

FT
x ∗ (t) ↔ X ∗ (− jω ) (12)

If x (t) is real: X ( jω ) = X ∗ (− jω ) (because x (t) = x ∗ (t))

⇒ | X ( jω )| = | X (− jω )| (even symmetry) (13)


]X ( jω ) = −]X (− jω ) (odd symmetry) (14)

Example 1 above:

|X(jω)| π/2 ∡X(jω)


1/a

ω
ω −π/2

Differentiation and Integration

dx (t) FT
↔ jωX ( jω ) (15)
dt

Z t
FT 1
x (τ )dτ ↔ X ( jω ) + πX (0)δ(ω ) (16)
−∞ jω

Example: x (t) = u(t) (unit step)


Rt FT
Note x (t) = −∞ δ(τ )dτ and δ(t) ↔ ∆( jω ) ≡ 1.

jω ∆ ( jω ) + π∆ (0) δ ( ω )
1 1
Thus, X ( jω ) = = jω + πδ(ω )

Time and Frequency Scaling


 
FT1 jω
x ( at) ↔ X , a 6= 0 (17)
| a| a
ee120 - fall’15 - lecture 4 notes 6

Proof:
Z ∞ Z ∞

at )e− jωt dt
x (|{z} = x (τ )e− jωτ/a , if a > 0
−∞ −∞ a

Z −∞

= x (τ )e− jωτ/a , if a < 0
∞ a
Z ∞
1 1  ω
= x (τ )e− jωτ/a dτ = X j
| a | −∞ | a| a

Example:
x(t)
sinWt FT
x (t) = ←→
πt
−T1 T1 t
We can interpret this as a scaling of:
X0 (jω)
sint FT
x0 ( t ) = ←→
πt
−1 1 ω
sinWt FT
x (t) = W = Wx0 (Wt) ←→ X0 ( jω/W ) = X ( jω )
πWt
Corollary (a=-1): x (−t) ↔ X (− jω ) (18)
)
If x (−t) = x (t) then X (− jω ) = X ( jω ) X ( jω ) = X ∗ ( jω ), i.e.,
If x (t) is also real: X (− jω ) = X ∗ ( jω ) X ( jω )is real.

Parseval’s Relation:
Z ∞ Z ∞
1
| x (t)|2 dt = | X ( jω )|2 dω (19)
−∞ 2π −∞

1
Example: x (t) = e− at u(t) a > 0 ↔ X ( jω ) =
a + jω
Z ∞ Z ∞
1 −2at 0 1
| x (t)|2 dt = e−2at dt = e =
−∞ 0 2a ∞ 2a
Z ∞ Z ∞  ω  ∞
1 1 π 1
| X ( jω )|2 dω = dω = tan−1 = = 2π

−∞ −∞ 2
a +ω 2 a a −∞

a 2a

Initial Value:
Z ∞
1
x (0) = X ( jω )dω (synthesis eq’n with t = 0) (20)
2π −∞

DC Component:
Z ∞
X (0) = x (t)dt (analysis equation with ω = 0) (21)
−∞
EE120 - Fall’15 - Lecture 5 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
14 September 2015

Continuous Time Fourier Transform Continued

LTI Systems and the Convolution Property:

Recall from Lecture 2:

x (t) = e jωt → h(t) → y(t) = H ( jω )e jωt (1)

where
Z ∞
H ( jω ) = h(t)e− jωt dt ”frequency response” (2)
−∞

Thus, H ( jω ) is the Fourier Transform of the impulse response h(t).


Note that the first Dirichlet condition for the existence of H ( jω ):
Z ∞
|h(t)|dt < ∞ (3)
−∞

is equivalent to the stability of the system.


Question: Why is the ideal low pass filter ”ideal”?

1 H(jω)

−ωc ωc ω

Answer: Synthesis equation applied to H ( jω ) gives (see Lecture 4):

sinωc t
h(t) = (4)
πt

not causal!
t
ee120 - fall’15 - lecture 5 notes 2

The Convolution Property of the Fourier Transform


Section 4.4 in Oppenheim & Willsky
FT
h(t) ∗ x (t) ←→ H ( jω ) X ( jω ) (5)
Proof: Z ∞
y(t) = x (τ )h(t − τ )dτ
−∞
Z ∞ Z ∞ 
Y ( jω ) = x (τ )h(t − τ )dτ e− jωt dt
−∞ −∞
Z ∞ Z ∞
= x (τ ) h(t − τ )e− jωt dtdτ
−∞ −∞
| {z }
= e− jωτ H ( jω )2
2
follows from the time-shift property
Z ∞
= H ( jω ) x (τ )e− jωτ dτ
−∞
= H ( jω ) X ( jω )
1
Example: h(t) = e− at u(t) a > 0 ↔ H ( jω ) =
a + jω
1
x (t) = e−bt u(t) b > 0 ↔ X ( jω ) =
b + jω

1
Y ( jω ) = H ( jω ) X ( jω ) = (6)
( a + jω )(b + jω )
Partial Fraction Expansion (if a 6= b):
=1 =0 3
z }| { z }| {
A B ( Ab + Ba) + j( A + B)ω
Y ( jω ) = + =
a + jω b + jω ( a + jω )(b + jω )
3
From here: B = − A and Ab − Aa = 1,
  which implies that
1 1 1
Y ( jω ) = − 1
b − a a + jω b + jω A = −B =
b−a
1  − at 
y(t) = e − e−bt u(t)
b−a

If a = b:  
1 d 1
Y ( jω ) = =j
( a + jω )2 dω a + jω

FT dX ( jω ) dx (t)
Use the property4 : − jtx (t) ↔ 4
dual of dt ↔ jωX ( jω )

Then y(t) = − j2 te−at u(t) = te− at u(t).
HW problem: Show that

tr−1 − at
e−at u(t) ∗ . . . ∗ e− at u(t) = e u(t)
| {z } (r − 1) !
r times
ee120 - fall’15 - lecture 5 notes 3

The Frequency Shifting Property


Section 4.3 in Oppenheim & Willsky
FT
e jω0 t x (t) ←→ X ( j(ω − ω0 )) (7) dual of x (t − t0 ) ↔ e− jω0 t X ( jω )

Proof:
Z ∞ Z ∞
e jω0 t x (t)e− jωt dt = x (t)e− j(ω −ω0 )t dt = X ( j(ω − ω0 ))
−∞ −∞

Example: Amplitude Modulation (AM)

r(t)
s(t)
(signal to be
transmitted)
cosω0 t
(carrier signal)
 
1 jω0 t 1 − jω0 t 1 1
r (t) = s(t) e + e ←→ R( jω ) = S( j(ω − ω0 )) + S( j(ω + ω0 ))
2 2 2 2
S(jω) R(jω)
A
A/2 A/2

-ω1 ω1 ω -ω0 ω0 ω
(ω0 >> ω1 in practice)

Demodulation:

r(t)
s(t) LPF

cosω0 t cosω0 t

1 1
G ( jω ) = R( j(ω − ω0 )) + R( j(ω + ω0 ))
2 2
LPF
A/2
A/4 A/4

-2ω0 ω0 2ω0 ω

The Multiplication Property


Section 4.5 in Oppenheim & Willsky
Z ∞
1
s(t) p(t) ←→ S( jθ ) P( j(ω − θ ))dθ (8) dual of the convolution property
2π −∞
ee120 - fall’15 - lecture 5 notes 4

Proof: Apply the synthesis equation to the right-hand side above:


Z ∞  Z ∞ 
1 1
S( jθ ) P( j(ω − θ ))dθ e jωt dω
2π −∞ 2π −∞
Z ∞  Z ∞ 
1 1
= S( jθ ) P( j(ω − θ ))e jωt dω dθ
2π −∞ 2π −∞
| {z }
=e jθt p(t) 5
Z ∞ 5
from the frequency shift property
1 jθt
= p(t) S( jθ )e dθ
2π −∞
| {z }
=s(t)

Example: Truncating the impulse response of the ideal low-pass filter


sinωc t
1 W (t) h(t) = πt

−T1 T1 t
Z ∞
FT 1
ĥ(t) , w(t) · h(t) ↔ H ( jθ )W ( j(ω − θ ))dθ
2π −∞

ĥ(t) W (j(ω − θ))


H(jθ)

−T1 T1 t -ωc ω ωc θ

Ĥ ( jω ) approximates the ideal filter H ( jω ):

H(jω)

Ĥ(jω)

−ωc ωc ω

but it is still non-causal. Causal version:


FT
ĥ(t − T1 ) ↔ e− jωT1 Ĥ ( jω )

magnitude

ĥ(t − T1 )

T1 2T1 t ω

Same magnitude as Ĥ ( jω ):

− jωT1
Ĥ ( jω ) = | Ĥ ( jω )|

e

but e− jωT1 Ĥ ( jω ) has an additional phase of −ωT1 due to the delay.


ee120 - fall’15 - lecture 5 notes 5

Frequency Response of Continuous Time LTI Systems


Sections 4.7 and 6.5 in Oppenheim &
N M Willsky
dk y(t) dk x (t)
∑ ak dtk
= ∑ bk dtk
(9)
k =0 k =0

Take Fourier transforms of both sides and apply the differentiation


property:
N M
∑ ak ( jω )k Y ( jω ) = ∑ bk ( jω )k X ( jω ) (10)
k =0 k =0

Then,
Y ( jω ) ∑ M b ( jω )k
H ( jω ) = = kN=0 k . (11)
X ( jω ) ∑k=0 ak ( jω )k

Example: First-order system:

dy
τ + y(t) = x (t) (12)
dt

where τ > 0: ”time constant” (e.g., τ = RC in the RC circuit)

R
1
H ( jω ) = +
1 + jωτ x(t) +
− C y(t)

1
1 |H(jω)| = √1+ω2 τ 2 t
step resp=(1 − e− τ )u(t)
√1 1
2
1−e−1
1
ω τ
τ t
Example: Second-order system:

d2 y dy
2
+ 2ξωn + ωn2 y(t) = ωn2 x (t) (13)
dt dt

where ξ is called the damping ratio, and ωn the natural frequency.

ωn2 1
H ( jω ) = =
( jω )2 + 2ξωn ( jω ) + ωn2 ( jω/ωn )2 + 2ξ ( jω/ωn ) + 1

The figure below shows the frequency, impulse, and step responses
for various values of ξ. Note that increasing ωn stretches the fre-
quency response along the ω axis and compresses the impulse and
step responses along the t axis. Therefore, a large natural frequency
means faster response.
ee120 - fall’15 - lecture 5 notes 6

Figure 1: The frequency, impulse, and


Frequency response (magnitude) step responses for the second order
20
system (13). Note from the frequency
10 20 log10 | H ( jω )| 1 = 0.1
1 = 0.2 response (top) that a resonance peak
0 1 = 0.4 occurs when ζ < 0.7.
1 = 0.7
-10 1 = 1.0
1 = 1.5
-20 .
bigger
-30
ζ
-40
-50
-1 0 1
10 ωn 10 ωn 10 ωn
!
Impulse response
1
h(t)/ωn
0.5

-0.5

-1
0 5 10 15
ωn t ωn ωn

Step response
2
s(t)
1.5

0.5

0
0 5 10 15
ωn t ωn ωn

When does resonance occur?


1 1
| H ( jω )|2 =  2 =  4  2
ω2 2 ω2
1− ωn2
+ 4ξ ωn2
ω
ωn + (4ξ 2 − 2) ω
ωn +1

Note that the denominator is strictly increasing in ω if 4ξ 2 − 2 ≥


0 and has a minimum at some ω > 0 otherwise. Thus, if 4ξ 2 −

2 < 0 (i.e., ξ < 1/ 2 ≈ 0.7), then | H ( jω )| has a resonance peak
as confirmed with the frequency response shown in the top figure
above.
ee120 - fall’15 - lecture 5 notes 7

Example:

k
m x(t): force
b
y(t): displacement

d2 y dy
m 2
+ b + ky = x
dt dt
2
d y
 
b dy
 
k 1
+ + y= x
dt2 m dt m m
| {z } | {z }
2ξωn ωn2
q
√b k b2
with ξ = and ωn = m. Ressonance occurs if ξ 2 = 4km < 12 ,
2 km
i.e., if b2 < 2km.
EE120 - Fall’15 - Lecture 6 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
16 September 2015

Discrete Time Fourier Transform (DTFT)


Chapter 5 in Oppenheim & Willsky
Given aperiodic signal x [n] of finite duration, construct periodic
sequence x̃ [n] for which x [n] is one period:
x[n] x̃[n]

... ...
n −N N n

1 2π
ak =
N ∑ x̃ [n]e− jk N n (1)
n=h N i
∞ ∞
!

− jk 2π
Nak = ∑ x [n]e N n = ∑ x [n]e − jωn
(2)



n=−∞ n=−∞
ω=k
| {z N}
| {z }
,X (e jω )
(DTFT) more closely spaced
samples as N →∞
1 2π
ak = X (e jkω0 ), ω0 = (3)
N N

Furthermore,
2π 1
x̃ [n] = ∑ ak e jk N n = ∑ N
X (e jkω0 )e jkω0 n (4)
k =h N i k =h N i
1 2π
=
2π ∑ N
X (e jkω0 )e jkω0 n (5)
k =h N i |{z}
= ω0
| {z }
X(e )ejωn

kω0
ω
(k+1)ω0

1
Z
As N → ∞ : → X (e jω )e jωn dω (6)
2π 2π

1
X (e jω )e jωn dω
R
x [n] = 2π 2π (Synthesis Equation)
(7)
X (e jω ) = ∑∞
n=−∞ x [n]e− jωn (Analysis Equation)
ee120 - fall’15 - lecture 6 notes 2

Note that in CT:


Z ∞ Z ∞
1
x (t) = X ( jω )e jωt
dω, X ( jω ) = x (t)e− jωt dt (8)
2π −∞ −∞

Main difference: Synthesis equation in DT over a period of 2π since


X (e jω ) is periodic.
Convergence guaranteed if

∑ | x [n]| < ∞ (similar to Dirichlet 1 in CT). (9)
n=−∞

Examples:
1) x [n] = an u[n], | a| < 1

∞ ∞
1
X (e jω ) = ∑ an e− jωn = ∑ (ae− jω )n = 1 − ae− jω (10)
n =0 n =0

2) x [n] = a|n| , | a| < 1

∞ −1
X (e jω ) = ∑ an e− jωn + ∑ a−n e− jωn (11)
n =0 n=−∞
| {z }
∑∞ n jωn = ∑∞ an e− j(−ω )n = ∑∞ an e− j(−ω )n −1
n =1 a e n =1 n =0

1 1
= − jω
+ −1 (12)
1 − ae 1 − ae jω
2 − a(e jω + e− jω )
= −1 (13)
1 − a(e jω + e− jω ) + a2
2 − 2acosω 1 − a2
= 2
−1 = (14)
1 − 2acosω + a 1 + a2 − 2acosω
3) (
1 |n| ≤ N1
x [n] = (15)
0 |n| > N1
sin(ω ( N1 +1/2))
(
N1
ω 6= 0,
X (e jω ) = ∑ e− jωn =
2N1 + 1
sin(ω/2)
ω = 0.
(16)
n=− N1
Derive this as an exercise.

X(e )
x[n] 2N1 + 1

−N1 N1 n 2π 2π
2N1 +1
ee120 - fall’15 - lecture 6 notes 3

4) X (e jω ) = δ(ω ) − π ≤ ω ≤ π, (17)
or, equivalently:

X (e jω ) = ∑ δ(ω − 2πl ) (18)
l =−∞

1 1
Z π
x [n] = δ(ω )e jωn dω = (19)
2π −π 2π

x [n] = 1 ↔ X (e jω ) = 2π ∑ δ(ω − 2πl ) (20)
l =−∞

x [n] = e jω0 n ↔ X (e jω ) = 2π ∑ δ(ω − ω0 − 2πl ) 2 (21)
l =−∞
2
from the frequency shift property
below

Fourier Transform of Periodic Signals


Section 5.2 in Oppenheim & Willsky

jk 2π 2π
x [n] = ∑ ak e N n ↔ X (e jω ) = ∑ 2πak δ(ω −
N
k) (22)
k =h N i k =−∞

X(ejω )
2πa0 2πa0 = 2πaN
2πa1 2πa1 = 2πaN+1
2πa2

ω0 2ω0 2π ω

Properties of DTFT
Section 5.3 in Oppenheim & Willsky
Time Shift:
x [n − n0 ] ←→ e− jωn0 X (e jω ) (23)

Frequency Shift:
e jω0 n x [n] ←→ X (e j(ω −ω0 ) ) (24)

As a special case let ω0 = π and note that e jπn = (−1)n . Thus, if

x2 [n] = (−1)n x1 [n]

then
X2 (e jω ) = X1 (e j(ω −π ) ).
The figure below illustrates this with an example where x1 [n] and
X1 (e jω ) are shown at the top, and x2 [n] = (−1)n x1 [n] and X2 (e jω )
are at the bottom. Note that X1 (e jω ) is concentrated around ω =
0, ± 2π, · · · and X2 (e jω ) is concentrated around ω = ±π, ±3π, · · · .
ee120 - fall’15 - lecture 6 notes 4

x1 [ n ] X1 (e jω )
0.04 0.2

0.18
0.035

0.16
0.03
0.14

0.025
0.12

0.02 0.1

0.08
0.015

0.06
0.01
0.04

0.005
0.02

0 0

−2π
-20 -15 -10 -5 0 5 10 15 20
0 2π
n ω
x2 [n] = (−1)n x1 [n] X2 (e jω )
0.04 0.2

0.18
0.03

0.16
0.02
0.14

0.01
0.12

0 0.1

0.08
-0.01

0.06
-0.02
0.04

-0.03
0.02

-0.04 0
-20 -15 -10 -5

n
0 5 10 15 20
−3π −π π 3π
ω

Figure 1: (a) Discrete time signal x1 [n]. (b) Fourier transform of x1 [n]. Note
that X1 (ejω ) is concentrated around 0, ±2π, ±4π, . . . . (c) Discrete time signal
Example: Suppose a low-pass filter H (e jω ) has been designed with
x2 [n]. (d) Fourier transform of x2 [n]. Note LP
that X2 (ejω ) is concentrated around
±π,impulse
±3π, . . response
.. h LP [n]. To obtain a high-pass filter, let:
HHP (e jω ) = HLP (e j(ω −π ) )
h HP [n] = (−1)n h LP [n].

Time Reversal:
x [−n] ←→ X (e− jω ) (25)
Example:
1
an u[n] ←→
1 − ae− jω
1
a−n u[−n] ←→
1
1 − ae jω
Conjugation and Conjugate Symmetry:

x ∗ [n] ←→ X ∗ (e− jω ) (26)


Thus, if x [n] is real, then:
X (e jω ) = X ∗ (e− jω ) (27)
or using the periodicity of X (e jω ):
X (e jω ) = X ∗ (e j(2π −ω ) ) ⇒ | X (e jω )| = | X (e j(2π −ω ) )| (28)
jω j(2π −ω )
]X (e ) = −]X (e ) (29)
ee120 - fall’15 - lecture 6 notes 5

Example 3 above: |X(ejω )|

π 2π ω

Combining time reversal and conjugate symmetry:

x [−n] ←→ X (e− jω ) = X ∗ (e jω ) (if x [n] is real). (30)

Thus, if x [n] = x [−n], then: X (e jω ) = X ∗ (e jω ), i.e., X (e jω ) is real.


(See Examples 2 and 3 above.)
If x [n] = − x [−n], then: X (e jω ) = − X ∗ (e jω ), i.e., X (e jω ) is purely
imaginary.
Differencing:

x [n] − x [n − 1] ←→ (1 − e− jω ) X (e jω ) (31)
dx (t)
Compare to CT: dt ↔ jωX ( jω )
Accummulation:
n ∞
1
∑ x [m] ←→
1 − e− jω
X ( e jω
) + πX ( e j0
) ∑ δ(ω − 2πk) (32)
m=−∞ k =−∞

Rt 1
Compare to CT: −∞ x (τ )dτ ←→ jω X ( jω ) + πX (0) δ ( ω )

Example: δ[n](unit impulse) ←→ ∆(e jω ) ≡ 1 (33)


n ∞
1
u[n] = ∑ δ[m] ←→ ∆ ( e
1 − e− jω | {z }

) + π∆ ( e j0
) ∑ δ(ω − 2πk)
m=−∞ | {z } k=−∞
=1 =1

1
=
1 − e− jω
+ π ∑ δ(ω − 2πk) 3
k=−∞
3
Compare to Example 1; now a = 1.

Time Expansion:
Define
(
x [n/M ] if n = 0, ∓ M, ∓2M, ...
x( M) [n] = (34)
0 otherwise.

x[n] x(2) [n]

n n
ee120 - fall’15 - lecture 6 notes 6

Then
x( M) [n] ←→ X (e jωM ). (35)

Proof:
∞ ∞
∑ x( M) [n]e− jωn = ∑ x( M) [kM]e− jωkM
n=−∞ k =−∞ | {z }
= x [k]

= ∑ x [k ]e− j(ωM)k = X (e jωM ).
k =−∞

See figure below for an illustration on a rectangular pulse (top) ex-


panded with M = 2 (middle) and M = 3 (bottom). The Fourier
transforms shown on the right obey the property (35).

x [n] X (e jω )
1
5

−2π −π
-10 -8 -6 -4 -2 0 2 4 6 8 10

n 0 π 2π
x (2) [ n ] X (e j2ω )
1
5

-10 -8 -6 -4 -2

n
0 2 4 6 8 10
−2π −π 0 π 2π
x (3) [ n ] X (e j3ω )
1
5

−2π −π
-10 -8 -6 -4 -2 0 2 4 6 8 10

n 0 π 2π
Figure 1: Inverse relationship between time and frequency domains: As k in-
The xtime
creases, expansion property derived above is analogous to the con-
k [n] spreads out while its transform is compressed.
tinuous time property x ( at) ←→ X ( jω/a) with M playing the role of
the scaling factor 1/a.

1
ee120 - fall’15 - lecture 6 notes 7

Differentiation in Frequency:

dX (e jω )
nx [n] ←→ j (36)

Proof:

X (e jω ) = ∑ x[n]e− jωn
n
dX (e jω )
= − j ∑ nx [n]e− jωn
dω n

Multiply both sides by j and substitute − j2 = 1.


dX ( jω )
CT analog: tx (t) ←→ j dω

Parseval’s Relation:


1
Z
∑ | x [n]|2 =
2π 2π
| X (e jω )|2 dω (37)
n=−∞
EE120 - Fall’15 - Lecture 7 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
21 September 2015

Discrete Time Fourier Transform (DTFT) Continued

Finding the Frequency Response from a Difference Equation


Section 5.8 in Oppenheim & Willsky
N M
∑ a k y [ n − k ] = ∑ bk x [ n − k ] (1)
k =0 k =0

Substitute x [n] = δ[n], and y[n] = h[n]:


N M
∑ ak h[n − k ] = ∑ bk δ [ n − k ] (2)
k =0 k =0

Take the Fourier Transform of both sides (recall that δ[n] ↔ 1):
!
N M
∑ ak e− jωk H (e jω ) = ∑ bk e− jωk (3)
k =0 k =0

∑kM=0 bk e− jωk
H (e jω ) = (4)
∑kN=0 ak e− jωk

We can find the impulse response h[n] from the inverse Fourier trans-
form of H (e jω ):
Example:
3 1
y[n] − y[n − 1] + y[n − 2] = 2x [n] (5)
4 8
Frequency response:
2 2
H (e jω ) = =
1 − 34 e− jω + 18 e−2jω (1 − 12 e− jω )(1 − 14 e− jω )
Partial fraction expansion:
4 2
H (e jω ) = 1 − jω
− 1 − jω
1− 2e 1− 4e
Thus, the impulse response is:
 n  n
1 1
h[n] = 4 u[n] − 2 u[n]
2 4

Example: Describe the LTI system with impulse response h[n] =


αn u[n], |α| < 1, with a difference equation.
1
H (e jω ) = (6)
1 − αe− jω
ee120 - fall’15 - lecture 7 notes 2

which is (4) with b0 = 1, α0 = 1, and a1 = −α. Thus,

y[n] − αy[n − 1] = x [n].

Example: Find the difference equation describing a LTI system whose


impulse response is:
 n
1 1 n
 
1
h[n] = u[n] + u [ n ].
2 2 4

Convolution Property of DTFT


Section 5.4 in Oppenheim & Willsky

y[n] = h[n] ∗ x [n] ←→ Y (e jω ) = H (e jω ) X (e jω ) (7)

Example:

1
h[n] = αn u[n] |α| < 1 ↔ H (e jω ) =
1 − αe− jω
1
x [n] = βn u[n] | β| < 1 ↔ X (e jω ) =
1 − βe− jω

1
Y (e jω ) =
(1 − αe− jω )(1 − βe− jω )

If α 6= β, employ partial fraction expansion:

A B
Y (e jω ) = −
+ ,
1 − αe jω 1 − βe− jω
−β
with A + B = 1 and Aβ + Bα = 0. Then, A = α
α− β and B = α− β .
Then,
α β 1  n +1 
y[n] = αn u[n] − βn u[n] = α − β n +1 u [ n ]
α−β α−β α−β

If α = β, then:

time-shift
z}|{
e jω
 
1 d 1
Y (e jω ) = =
(1 − αe− jω )2 −αj dω 1 − αe− jω
| {z }
↔ 1j nαn u[n] 2
2
by the differentiation property:
dX (e jω )
nx [n] ↔ j dω
Then,
1
y[n] = ( n + 1 ) α n +1 u [ n + 1 ] = ( n + 1 ) α n u [ n + 1 ] = ( n + 1 ) α n u [ n ]
α
where we replaced u[n + 1] with u[n] since (n + 1)αn = 0 for n = −1.
ee120 - fall’15 - lecture 7 notes 3

Example: Determine the function performed by the block diagram


below where HLP is a low-pass filter with cutoff frequency ωc < π/2.

(−1)n (−1)n

x HLP (e jω ) x
w1 [ n ] w2 [ n ] w3 [ n ]
x [n] + y[n]
w4 [ n ]
HLP (e jω )

W1 (e jω ) = X ( e j(ω −π ) )
W2 (e jω ) = HLP (e jω ) X (e j(ω −π ) )
W3 (e jω ) = HLP (e j(ω −π ) ) X (e jω )
W4 (e jω ) = HLP (e jω ) X (e jω )

Adding W3 (e jω ) and W4 (e jω ):

Y (e jω ) = ( HLP (e jω ) + HLP (e j(ω −π ) )) X (e jω ) (8)


| {z }

bandstop filter

−π ωc π
Multiplication Property
Section 5.5 in Oppenheim & Willsky
1
Z
jθ j(ω −θ )
x1 [n] x2 [n] ←→ X1 ( e ) X2 ( e )dθ 3 (9)
2π 2π

Proof: Apply synthesis equation to the right-hand side: 3


”periodic convolution”

1 1
Z Z
X (e jθ ) X2 (e j(ω −θ ) )dθe jωn dω
2π 2π 2π 2π 1
1 1
Z Z
= X1 (e jθ ) X2 (e j(ω −θ ) )e jωn dω dθ
2π 2π 2π 2π
| {z }
=e jθn x2 [n] 4
1
Z
= x2 [ n ] X1 (e jθ )e jθn dθ = x1 [n] x2 [n].
2π 2π
| {z }
= x1 [ n ]
4
from the frequency shift property
ee120 - fall’15 - lecture 7 notes 4

Example: sin( 3π
4 n ) sin ( 2 n )
π
Interpret the value at n = 0 as
x [n] = ·
x [0] = x1 [0] x2 [0] = 34 12
| πn
{z } | πn {z }
, x1 [ n ] , x2 [ n ]

Use the multiplication property to calculate X (e jω ). First note that:


1
sin(ωc n) ... ...
←→
πn ωc
−2π 2π
Easy to show by applying the synthesis equation:

1 1 jωn ωc

1 1 1 jωc n
Z ωc
jωn
e dω = e = (e − e− jωc n )
2π −ωc 2π jn − ωc πn 2j
| {z }
=sinωc n

By the multiplication property,

1
Z
X (e jω ) = X1 (e jθ ) X2 (e j(ω −θ ) )dθ (10)
2π 2π

i.e. periodic convolution of rectangular pulses in frequency domain:

X1 (e jω )
X2 (e jω )
−3π −π π 3π ω
4 2 2 4
ω=0
− X1 (e jθ )
π − X2 ( e j ( ω − θ ) )
θ
ω = π/4

π
θ
ω = π/2

3π/4
θ
ω = 3π/4

π/2
θ
ω=π
π π
4 4
−π π θ
integration interval
X (e jω )
1/2
1/4
π π 3π
0 4 2 4 π 2π ω
ee120 - fall’15 - lecture 7 notes 5

FIR Filter Design by Windowing

Ideal low-pass filter:


1
... ...
sin(ωc n) jω
h[n] = ←→ H (e ) =
πn −2π ωc 2π
To obtain a FIR filter truncate the ideal impulse response:
(
1 |n| ≤ N1
ĥ[n] = h[n]w[n], where w[n] =
0 otherwise.

What is the effect of truncation on the frequency response? From the


last lecture:
sin (ω ( N1 + 1/2))
W (e jω ) =
sin(ω/2)
main lobe
2N 1 + 1
side lobes

2π 2π
2N 1 + 1

1
Z
Thus, Ĥ (e jω ) = H (e jθ )W (e j(ω −θ ) )dθ
2π 2π
See the animation on the last page.

H (e jθ )
W (e j(ω −θ ))
%

Ĥ (e jω )
H (e jω ) %

ω
ee120 - fall’15 - lecture 7 notes 6

Gibbs Phenomenon and Tapered Windows

Note from the figure below that Ĥ (e jω ) exhibits oscillations near the
discontinuities of H (e jω ) and their amplitudes do not decrease as N1
is increased. This is known as the Gibbs Phenomenon.

N1 = 1 Ĥ (e jω ) N1 = 2
H (e jω )

ω ω
N1 = 3 N1 = 9

ω ω
These oscillations are caused by the sizable side lobes of W (e jω ) (high
frequency components) which are due to the abrupt change from 0 to
1 in w[n].
”Tapered” windows mitigate this problem, e.g., the triangular
(Bartlett) window:
|n|
(
1 − N if |n| ≤ N1
w[n] = 1
0 otherwise.

Other tapered windows exist (Hanning, Hamming, Blackman, etc.)


and are depicted in Figure 1. Although the differences between these
windows may not be appreciable in time domain, their Fourier Trans-
forms have significant differences as shown in Figure 2. Note the
tradeoff between main
| lobe
{z width} & side lobe amplitude in Figure 2.
| {z }
must be small for must be small to
sharp transition reduce ripples
from passband to
stopband
ee120 - fall’15 - lecture 7 notes 7

Figure 1: Tapered windows of type


1.2 Rectangular
Bartlett Bartlett, Hanning, Hamming, and
Hanning
Hamming Blackman for N1 = 25 superimposed.
Blackman

0.8

w[n]
0.6

0.4

0.2

0
-25 -20 -15 -10 -5 0 5 10 15 20 25
n

Summary

To obtain a FIR filter truncate the ideal filter’s impulse response h[n]
with one of the window functions w[n]:

ĥ[n] = h[n]w[n].

The new impulse response is zero outside of n ∈ {− N1 , · · · , N1 } but


not yet causal. To make it causal, shift to the right by N1 :

ĥ[n − N1 ] ←→ e− jωN1 Ĥ (e jω )

which does not change the magnitude of the frequency response,


only the phase. Finally, ĥ[n] must be scaled by a constant to obtain
∑n ĥ[n] = 1, so the dc gain is Ĥ (e j0 ) = 1.
FIR implementation:

y[n] = b0 x [n] + b1 x [n − 1] + ... + b M x [n − M ]

where b0 , ..., b M are the impulse response coefficients: bn = ĥ[n − N1 ].


MATLAB command for design:
fir1( M,ωc ,w) returns a vector of the coefficients b0 , ..., b M above
M: filter order (2N1 above)
ωc : desired cutoff frequency divided by π (so the frequency range
[0, π ] is normalized to [0, 1])
w: vector of length M + 1 for the values of the window function w[n];
enter boxcar( M + 1) for a rectangular window, and bartlett( M +
1), hamming( M + 1), hanning( M + 1), blackman( M + 1) for others.
ee120 - fall’15 - lecture 7 notes 8

0
Figure 2: The magnitude of the Fourier
Transform W (e jω ) (in dB) for the rectan-
-10
20 log10 |W (e jω )|, rectangular gular, Bartlett, Hanning, Hamming, and
-20

-30
Blackman windows. Here each window
function in Figure 1 is scaled such that
-40
∑n w[n] = 1, so W (e j0 ) = 1 = 0 dB.
-50
dB Note that the tapered windows progres-
-60
sively reduce the side lobe amplitudes
-70
in the order they are presented. This
-80
has the desired effect of reducing rip-
-90
ples in the frequency response of the
-100
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
truncated filter. However, the main lobe
0
width increases which has a negative
-10
effect: the transition from passband to
Bartlett stopband will be slower for the filter
-20
truncated with the respective window.
-30

-40

dB -50

-60

-70

-80

-90

-100
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

-10
Hanning
-20

-30

-40

dB -50

-60

-70

-80

-90

-100
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

-10
Hamming
-20

-30

-40

dB -50

-60

-70

-80

-90

-100
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

-10

-20
Blackman
-30

-40

dB -50

-60

-70

-80

-90

-100
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1

normalized frequency (×π radians)


ee120 - fall’15 - lecture 7 notes 9

Example: B=fir1(50,0.2,hamming(51)) returns the coefficients of


a FIR low pass filter of order M = 50 with cutoff frequency 0.2π,
truncated with a Hamming window.
stem(0:50,B) plots the impulse response:

0.2

0.15

0.1

0.05

-0.05
0 5 10 15 20 25 30 35 40 45 50

freqz(B) plots the frequency response:

-20
Magnitude (dB)

-40

-60

-80

-100

-120

-140
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Normalized Frequency (#: rad/sample)

-200
Phase (degrees)

-400

-600

-800

-1000

-1200

-1400
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Normalized Frequency (#: rad/sample)
ee120 - fall’15 - lecture 7 notes 10

Play/Pause
EE120 - Fall’15 - Lecture 8 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
23 September 2015

Phase Distortion in LTI Systems


Section 6.2 in Oppenheim & Willsky
The phase of the Fourier Transform ]X (e jω ) is as important as the
magnitude | X (e jω )| in describing the features of the signal x [n].
Example: Recall x [−n] ←→ X (e− jω ) = X ∗ (e jω ) when x [n] is real.
Since | X (e jω )| = | X ∗ (e jω )|, the DTFT of x [n] and x [−n] differ only by
their phase. This shows that phase difference alone can distinguish
two signals significantly.
Consider an LTI system whose frequency response can be written as:

H (e jω ) = A(e jω )e− jαω

where A(e jω ) is real and nonnegative. Such a system is called “linear


phase" because the phase ] H (e jω ) = −αω is a linear function of ω.
Linear phase filters are desirable because each frequency component
of a signal passing through them is delayed by the same duration, α:

e jωn → H (e jω ) → H (e jω )e jωn = A(e jω )e jω (n−α)

By contrast, an LTI system whose phase ] H (e jω ) depends nonlin-


early on ω delays each frequency component differently and can
cause severe distortion. See page 3 for an example of such phase
distortion.
The linear phase property is very restrictive in practice. The re-
laxed version below maintains the essential benefits and is easy to
achieve in FIR filter design:

Generalized Linear Phase Systems


An LTI system with frequency response H (e jω ) is called “general-
ized linear phase” if we can find a real-valued function A(e jω ) and
constants α, β, such that:

H (e jω ) = A(e jω ) e− jαω + jβ (1)


| {z }
real, but sign
change allowed

Note that ] H (e jω ) = β − αω for each ω such that A(e jω ) > 0.


If the sign of A(e jω ) changes at a frequency ω, then ] H (e jω ) changes
discontinuously by π.
ee120 - fall’15 - lecture 8 notes 2

Example: If h[−n] = h[n] (even symmetric), then H (e jω ) is real. We


can take A(e jω ) = H (e jω ), α = β = 0.
H (e jω ) = A(e jω )

h[n] DTFT
←→

n ω
− N1 N1 2π
] H (e jω )
ω

−π
slope= α = 0 except
at discontinuities

Example: If h[n] = h0 [n − N1 ] where h0 [n] is even symmetric, then

H (e jω ) = H0 (e jω )e− jωN1 . (2)


Since H0 (e jω ) is real, take A(e jω ) = H0 (e jω ), α = N1 , β = 0.
The windowed FIR filters in the last lecture have this form, there-
fore they are generalized linear phase:
sin ωc n
h0 [ n ] = · w[n] (even symmetric). (3)
| πn
{z } |{z}
window
impulse resp. (rectangular,
of ideal LPF Hamming, etc.)

Frequency response of a Hamming windowed filter from last lecture:

-20
Magnitude (dB)

-40

-60

-80

-100

-120

-140
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Normalized Frequency (#: rad/sample)

-200
Phase (degrees)

-400

-600

-800

-1000

-1200

-1400
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
Normalized Frequency (#: rad/sample)

Example: If h[−n] = −h[n] (odd symmetric) then H (e jω ) is purely


imaginary. Let A(e jω ) = − jH (e jω ) = H (e jω )e− jπ/2 (real).
H (e jω ) = A(e jω )e jπ/2 → α = 0, β = π/2. (4)
ee120 - fall’15 - lecture 8 notes 3

Group delay
200

150 d
− ] H (e jω )
grd[H(ejω)]


100

50

-50
-π -0.8π -0.6π -0.4π -0.2π 0 0.2π 0.4π 0.6π 0.8π π
ω
Magnitude of frequency response
2.5

2
| H (e jω )|
|H(e jω)|

1.5

0.5

0
-π -0.8π -0.6π -0.4π -0.2π 0 0.2π 0.4π 0.6π 0.8π π
ω
Input in the time domain
1

0.5
x [n]
x[n]

-0.5

-1
0 50 100 150 200 250 300
sample number (n)
Magnitude of the DFT of input
20

15 | X (e jω )|
|

10
|X

0
-π -0.8π -0.6π -0.4π -0.2π 0 0.2π 0.4π 0.6π 0.8π π
ω
Output in the time domain
2
y[n]
1

-1

-2
0 50 100 150 200 250 300
sample number (n)

Figure 1: Phase distortion illustrated on a signal x [n] (middle plot) with three dominant frequency components (shown
in the plot of | X (e jω )| underneath). This signal is applied to a LTI system with highly nonlinear phase, as seen from
d
the plot for - dω ] H (e jω ) (top). In the output y[n] (bottom), the order of the the low and middle frequency components
are swapped because the low frequency component incurred a large delay. The high frequency component (ω = 0.8π)
is filtered out because H (e j0.8π ) = 0 (the plot second from the top). See Section 5.1.2 in Oppenheim & Schafer, Discrete-
Time Signal Processing, 3rd ed., Prentice Hall, for the construction of the LTI system in this example.
ee120 - fall’15 - lecture 8 notes 4

Two Dimensional (2D) Fourier Transform

2D CTFT Analysis Equation:


Z ∞ Z ∞
X ( jω1 , jω2 ) = x (t1 , t2 )e− jω1 t1 e− jω2 t2 dt1 dt2 (5)
−∞ −∞

2D CTFT Synthesis Equation:


Z ∞ Z ∞
1
x ( t1 , t2 ) = X ( jω1 , jω2 )e jω1 t1 e jω2 t2 dω1 dω2 (6)
(2π )2 −∞ −∞

2D DTFT Analysis Equation:

∞ ∞
X (e jω1 , e jω2 ) = ∑ ∑ x [n1 , n2 ]e− jω1 n1 e− jω2 n2 (7)
n1 =−∞ n2 =−∞

Note that this is periodic with period (2π, 2π ):

X (e jω1 , e jω2 ) = X (e j(ω1 +2π ) , e jω2 ) = X (e jω1 , e j(ω2 +2π ) ).

2D DTFT Synthesis Equation:

1
Z Z
x [ n1 , n2 ] = X (e jω1 , e jω2 )e jω1 n1 e jω2 n2 dω1 dω2 (8)
(2π )2 2π 2π

Absolute integrability/summability conditions for convergence:


Z ∞ Z ∞
| x (t1 , t2 )|dt1 dt2 < ∞ (continuous time) (9)
−∞ −∞
∞ ∞
∑ ∑ | x [n1 , n2 ]| < ∞ (discrete time). (10)
n1 =−∞ n2 =−∞

Example: x [ n1 , n2 ] = δ [ n1 , n2 ] : = δ [ n1 ] δ [ n2 ].

∞ ∞
X (e jω1 , e jω2 ) = ∑ ∑ δ[n1 , n2 ]e− jω1 n1 e− jω2 n2 = e− jω1 0 e− jω2 0 = 1
n1 =−∞ n2 =−∞

Example: x [ n 1 , n 2 ] = a n1 b n2 u [ n 1 , n 2 ], | a| < 1, |b| < 1.


∞ ∞
X (e jω1 , e jω2 ) = ∑ ∑ an1 bn2 e− jω1 n1 e− jω2 n2
n1 =0 n2 =0
∞ ∞
= ∑ n1 − jω1 n1
a e ∑ bn2 e− jω2 n2
n1 =0 n2 =0
1 1
=
1 − ae 1 1 − be− jω2
− jω
ee120 - fall’15 - lecture 8 notes 5

Separability Property of the 2D DTFT:


If x [n1 , n2 ] = x1 [n1 ] x2 [n2 ] then X (e jω1 , e jω2 ) = X1 (e jω1 ) X2 (e jω2 ) as in
the examples above. A similar property holds for the 2D CTFT.
Proof:
∞ ∞
X (e jω1 , e jω2 ) = ∑ ∑ x1 [n1 ] x2 [n2 ]e− jω1 n1 e− jω2 n2
n1 =−∞ n2 =−∞
∞ ∞
= ∑ x1 [n1 ]e− jω1 n1 ∑ x2 [n2 ]e− jω2 n2
n1 =−∞ n2 =−∞
| {z }| {z }
= X1 (e jω1 ) = X2 (e jω2 )

2D Systems

x [n1 , n2 ]→ → y [ n1 , n2 ]

When the input is δ[n1 , n2 ] the output is called the impulse response
and denoted h[n1 , n2 ] as in 1D systems.
Example: 2D moving average filter

1 1 1
y [ n1 , n2 ] = ∑ ∑
9 k =−1 k =−1
x [ n1 − k 1 , n2 − k 2 ]
1 2

(n1 ,n2 ) −−−→ 3 × 3 sliding window

(
1
9 −1 ≤ n1 ≤ 1 and − 1 ≤ n2 ≤ 1
h [ n1 , n2 ] =
0 otherwise.

2D Convolution:
If the system is linear shift-invariant, then:

y [ n1 , n2 ] = h [ n1 , n2 ] ∗ x [ n1 , n2 ]
∞ ∞
= ∑ ∑ h [ m1 , m2 ] x [ n1 − m1 , n2 − m2 ]
m1 =−∞ m2 =−∞
∞ ∞
= ∑ ∑ x [ m1 , m2 ] h [ n1 − m1 , n2 − m2 ].
m1 =−∞ m2 =−∞
ee120 - fall’15 - lecture 8 notes 6

Convolution Property of the 2D DTFT

h[n1 , n2 ] ∗ x [n1 , n2 ] ←→ H (e jω1 , e jω2 ) X (e jω1 , e jω2 ) (11)


Example: 2D separable ideal low pass filter
H (e jω1 , e jω2 ) = 1 in the shaded regions of the (ω1 , ω2 )-plane below
and = 0 otherwise:
ω2

(0,2π ) (2π,2π )
• •

ω c2

(2π,0)
• ω1
ω c1

We can write this frequency response as:

H (e jω1 , e jω2 ) = H1 (e jω1 ) H2 (e jω2 )

where (
jωi 1 | ωi | ≤ ω ci
Hi (e )= i = 1, 2.
0 ω ci < | ωi | ≤ π
Then, from the separability property,

sin ωc1 n1 sin ωc2 n2


h [ n1 , n2 ] =
πn1 πn2

which is depicted below for ωc1 = ωc2 = 0.2π.

0.04
0.035
0.03 h [ n1 , n2 ]
0.025
0.02
0.015
0.01
0.005
0
-0.005
-0.01
30
20
30
10 20
0 10
-10 0
n2 -20
-10

-30 -30
-20
n1
ee120 - fall’15 - lecture 8 notes 7

Example: 2D circularly symmetric ideal low pass filter


H (e jω1 , e jω2 ) = 1 in the shaded regions of the (ω1 , ω2 )-plane below
and = 0 otherwise:
ω2

(0,2π ) (2π,2π )
• •

ωc

ωc (2π,0)
• ω1

In the region [−π, π ] × [−π, π ], this can be expressed as:


 q
 1 ω 2 + ω22 ≤ ωc
H (e jω1 , e jω2 ) = q 1
 0 ω 2 + ω 2 > ωc . 1 2

The 2D DTFT Synthesis Equation yields:


 q 
ωc 2 2
h [ n1 , n2 ] = q J1 ωc n1 + n2
2π n21 + n22

where J1 (·) is the Bessel function of the first kind and first order.2 2
See mathworld.wolfram.com for a
description of Bessel functions of the
Note that h[n1 , n2 ] is not separable. However, like the frequency first kind. The Matlab command to
response H (e jω1 , e jω2 ), it exhibits circular symmetry. See the figure evaluate J1 (·) is besselj(1,·) where the
first argument specifies the order.
below for a depiction of h[n1 , n2 ] for ωc = 0.2π.

0.035

0.03
h [ n1 , n2 ]
0.025

0.02

0.015

0.01

0.005

-0.005
30
20
30
10 20
0 10
-10 0
n2 -20
-10
n1
-20
-30 -30
ee120 - fall’15 - lecture 8 notes 8

Projection-Slice Theorem
Consider the following "projection" of the 2D function x (t1 , t2 ) along
the t2 axis:
Z ∞
x0 ( t1 ) , x (t1 , t2 )dt2 .
−∞

Then, the 1D CTFT of x0 (t1 ) is related to the 2D CTFT of x (t1 , t2 ) by:

X0 ( jω1 ) = X ( jω1 , jω2 )|ω2 =0 .

This is because:
Z ∞ Z ∞
X ( jω1 , jω2 )|ω2 =0 = x (t1 , t2 )e− jω1 t1 e− j0t2 dt1 dt2
−∞ −∞
Z ∞ Z ∞ 
= x (t1 , t2 )dt2 e− jω1 t1 dt1
−∞ −∞
Z ∞
= x0 (t1 )e− jω1 t1 dt1 = X0 ( jω1 ).
−∞

A generalization of this property to projections along any direction


is known as the projection-slice theorem and is illustrated in the figure
below. (Projection along the t2 axis above is the special case θ = 0.)

t2 ω2

t
jec
pro slic
e

θ t1 θ ω1
.
F.T
1D

This theorem is crucial in tomography where one collects projections


at many angles about a 2D object. The 1D Fourier Transform of each
such "shadow" corresponds to a slice of the 2D Fourier Transform.
One can thus obtain the 2D Fourier Transform by combining these
slices and then reconstruct x (t1 , t2 ) from the synthesis equation.
EE120 - Fall’15 - Lecture 9 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
28 September 2015

Discrete Fourier Transform (DFT)

DFT is applicable to a finite duration sequence x [n] such that x [n] = 0


when n < 0 and n ≥ N.

 N −1

∑ x [n]e− j N kn k=0,1,. . . ,N-1


X [k] = n =0 (1)

0 for other k.

Connection to Discrete Fourier Series

If we define a period-N sequence x̃ [n] by adding x [n] end-to-end:


x̃ [n] := x [(n mod N )] (2)
then X [k] = Nak for k = 0, 1, . . . , N − 1:
Nak := X [(k mod N )]. (3)

Synthesis Equation for DFT:

N −1

1 2π
∑ X [k ]e j N kn 0 ≤ n ≤ N−1

x [n] = N k =0 (4)

0 otherwise.

Example:

x [n]
1
n
0 1 2 3 4 5

Take N = 5 (5-point DFT):


4 2π
X [k] = ∑ e− j N kn , k = 0, 1, 2, 3, 4 (5)
n =0

5 if k = 0
= (6)
0 otherwise.
ee120 - fall’15 - lecture 9 notes 2

To see why X [k] = 0 for, say k = 1, note that (5) is the sum of the
following complex numbers:

Im

n=1
n=2

Re
n=0
n=3
n=4

What if we take N = 10 (10-point DFT)?


X [k ] = 10ak , k = 0, 1, . . . , 9 where ak are FS coefficients of:

x̃ [n]
... 1 ... n
1 2 3 4 5 9 101112131415

From Lecture 3, (FS of rectangular pulse train):



X[k] = N a_k = e^{−j(4π/10)k} · sin(πk/2) / sin(πk/10),   k = 0, 1, . . . , 9.   (7)
X [0] = 5, X [2] = X [4] = X [6] = X [8] = 0, X [1] = 1 − j3.0777,
X [3] = 1 − j0.7265, X [5] = 1, X [7] = 1 + j0.7265, X [9] = 1 + j3.0777.

Conjugate Symmetry Property of the DFT:

If x [n] is real, then


X ∗ [ N − k] = X [k] k = 1, 2, . . . , N − 1. (8)
Example: In the 10-point DFT above, X [1] = X ∗ [9], X [3] = X ∗ [7], . . . .

Connection between DFT and DTFT

Recall:

X(e^{jω}) = ∑_{n=−∞}^{∞} x[n] e^{−jωn} = ∑_{n=0}^{N−1} x[n] e^{−jωn}     (9)

X[k] = ∑_{n=0}^{N−1} x[n] e^{−j(2π/N)kn},   k = 0, 1, . . . , N−1        (10)

Thus the DFT consists of N samples of the DTFT at ω = (2π/N)k:

X[k] = X(e^{jω})|_{ω=(2π/N)k},   k = 0, 1, . . . , N−1.                  (11)
ee120 - fall’15 - lecture 9 notes 3

Back to the example:

X(e^{jω}) = ∑_{n=0}^{4} e^{−jωn} = e^{−j2ω} sin(5ω/2)/sin(ω/2)           (12)

[Figure: |X(e^{jω})| over 0 ≤ ω ≤ 2π (peak value 5 at ω = 0), with the 5-point DFT samples at ω = 2πk/5 and the 10-point DFT samples at ω = 2πk/10 marked.]

MATLAB:
fft([1 1 1 1 1]) for 5-point,

fft([1 1 1 1 1 0 0 0 0 0]) for 10-point DFT.
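These commands can be checked against the values computed above; the snippet below is a minimal sketch.

X5  = fft([1 1 1 1 1])                      % returns [5 0 0 0 0], as in (5)-(6)
X10 = fft([1 1 1 1 1 0 0 0 0 0]);           % the ten values listed below (7)

% the 10-point DFT samples the DTFT (12) at w = 2*pi*k/10:
w = 2*pi*(0:9)/10;
Xdtft = exp(-2j*w) .* sin(5*w/2) ./ sin(w/2);
Xdtft(1) = 5;                               % limiting value at w = 0
max(abs(X10 - Xdtft))                       % ~ 0 up to round-off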

The derivation of X[k] = X(e^{jω})|_{ω=(2π/N)k} above assumed N ≥ duration
of x[n]. What do we recover from N samples of the DTFT otherwise?

x[n]  ←DTFT→  X(e^{jω})                                                  (13)
  ?   ←DFT→   X(e^{jω})|_{ω=(2π/N)k}                                     (14)

Applying the DFT synthesis equation to these samples gives, for n = 0, 1, . . . , N−1:

  (1/N) ∑_{k=0}^{N−1} X(e^{jω})|_{ω=(2π/N)k} e^{j(2π/N)kn}               (16)

= (1/N) ∑_{k=0}^{N−1} ∑_{m=−∞}^{∞} x[m] e^{−j(2π/N)km} e^{j(2π/N)kn}     (17)

= ∑_{m=−∞}^{∞} x[m] ( (1/N) ∑_{k=0}^{N−1} e^{j(2π/N)k(n−m)} )            (18)

where the inner sum equals 1 if m = n mod N and 0 otherwise. Therefore the samples synthesize

  . . . + x[n − N] + x[n] + x[n + N] + . . .                             (19)

= ∑_{r=−∞}^{∞} x[n − rN],   0 ≤ n ≤ N − 1,                               (21)

which equals x[n] if N ≥ duration of x[n]; otherwise, aliasing in time!


ee120 - fall’15 - lecture 9 notes 4

Example:

| X (e jω )|
x [n]
1 n ω
123456 π 2π

N ≥ 5 samples of X (e jω ) correspond to N-point DFT of x [n].


N = 4 samples of X (e jω ) correspond to 4-point DFT of:

∑ x [n − 4r ] 0 ≤ n ≤ 3. (22)
r =−∞

2
1
n
123456

DFT Makes Convolution Easy

y[n] = h[n] ∗ x[n]                                                       (23)

where h[n] is FIR with duration P and the input sequence x[n] has duration L.

Set N ≥ L + P − 1 (the duration of y[n]). Pad zeros in h[n] and x[n] to
make them duration N. Take their N-point DFT² to find H[k] and X[k].
Take the inverse DFT³ of H[k] · X[k] to obtain y[n].
² e.g., using fft in MATLAB
³ ifft in MATLAB
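A minimal sketch of this procedure (the sequences h and x are illustrative, not from the notes):

% Fast convolution via the DFT
h = [1 2 3];                            % FIR, duration P = 3
x = [1 1 1 1 1];                        % input, duration L = 5
N = length(h) + length(x) - 1;          % N >= L + P - 1 = 7
y = real(ifft(fft(h, N) .* fft(x, N))); % zero-padded N-point DFTs
y_direct = conv(h, x);                  % direct convolution; same result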

2D DFT
Analysis Equation: For 0 ≤ k1 ≤ N1 − 1 and 0 ≤ k2 ≤ N2 − 1:

X[k1, k2] = ∑_{n1=0}^{N1−1} ∑_{n2=0}^{N2−1} x[n1, n2] e^{−j(2π/N1)k1 n1} e^{−j(2π/N2)k2 n2}            (24)

Synthesis Equation: For 0 ≤ n1 ≤ N1 − 1 and 0 ≤ n2 ≤ N2 − 1:

x[n1, n2] = (1/(N1 N2)) ∑_{k1=0}^{N1−1} ∑_{k2=0}^{N2−1} X[k1, k2] e^{j(2π/N1)k1 n1} e^{j(2π/N2)k2 n2}    (25)

Reading for Interested Students


Chapter 27: Data Compression in The Scientist and Engineer’s Guide to
Digital Signal Processing (www.dspguide.com). See in particular Figures
27-9, 27-10, 27-11, 27-12, 27-15.
EE120 - Fall’15 - Lecture 10 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
5 October 2015

Sampling
Chapter 7 in Oppenheim & Willsky
Discrete-time sequence obtained from continuous-time signal x (t):
xd [n] = x (nT ), T : sampling period

x [4]
x [3]
x [0] x [1] x [2]

t
T 2T 3T 4T 5T · · ·

Can x (t) be recovered from its samples?

Shannon-Nyquist Sampling Theorem


If x (t) is bandlimited with X ( jω ) = 0 for |ω | > ω M and
ωS > 2ω M (1)

where ωs = 2π/T, then x(t) is uniquely determined by its samples
x(nT), n = 0, ∓1, ∓2, . . .

[Figure: two example spectra, one bandlimited by ωs/2 and one whose bandwidth exceeds ωs/2.]

To see where (1) comes from, we define “impulse train sampling”:

x_p(t) = x(t) · p(t)                                                     (2)

where  p(t) = ∑_{n=−∞}^{∞} δ(t − nT).                                    (3)

[Figure: the impulse train p(t), with impulses at t = nT, and the sampled signal x_p(t). Margin photos: Claude Shannon (1916-2001) and Harry Nyquist (1889-1976).]


ee120 - fall’15 - lecture 10 notes 2

How is X p ( jω ) related to X ( jω )?
Recall: the Fourier series coefficients of the impulse train are a_k = 1/T for all k.

P(jω) = ∑_{k=−∞}^{∞} 2π a_k δ(ω − kωs) = (2π/T) ∑_{k=−∞}^{∞} δ(ω − kωs)        (4)

X_p(jω) = (1/2π) ∫_{−∞}^{∞} X(jθ) P(j(ω − θ)) dθ = (1/T) ∑_{k=−∞}^{∞} X(j(ω − kωs))   (5)

X_p(jω) = (1/T) ∑_{k=−∞}^{∞} X(j(ω − kωs))                               (6)

[Figure: X(jω), bandlimited to |ω| ≤ ω_M with height 1, and X_p(jω), consisting of replicas of height 1/T centered at multiples of ωs. The replicas do not overlap if ωs > 2ω_M; they overlap ("aliasing") if ωs < 2ω_M.]

Thus, if (1) holds (no aliasing), then x(t) can be recovered from x_p(t)
with a lowpass filter of gain T and cutoff frequency ωs/2:

[Block diagram: X_p(jω) → reconstruction filter H_r(jω) (gain T, cutoff ωs/2) → X_r(jω) = X(jω).]

The reconstruction filter is an interpolator in the time domain:


h_r(t) = T · sin( (ωs/2) t ) / (π t) = sin(π t/T) / (π t/T),   since ωs/2 = π/T.

[Figure: h_r(t) is a sinc equal to 1 at t = 0 and zero at t = ∓T, ∓2T, . . .]
ee120 - fall’15 - lecture 10 notes 3

x_r(t) = h_r(t) ∗ x_p(t) = h_r(t) ∗ ( ∑_n x(nT) δ(t − nT) )              (7)
       = ∑_n x(nT) ( h_r(t) ∗ δ(t − nT) )                                (8)
       = ∑_n x(nT) h_r(t − nT)                                           (9)

[Figure: the shifted sincs x(0)h_r(t), x(T)h_r(t − T), x(2T)h_r(t − 2T), . . . overlaid on the time axis.]

The sum of these sinc functions gives the reconstructed signal xr (t).
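A minimal MATLAB sketch of this sinc interpolation (the signal, rate, and truncation are illustrative choices, and sinc assumes the Signal Processing Toolbox):

T  = 0.1;  n = -50:50;
xn = cos(2*pi*2*n*T);                 % samples of a 2 Hz cosine (below ws/2 = 5 Hz)
t  = -2:0.001:2;                      % dense grid for the reconstruction
xr = zeros(size(t));
for k = 1:length(n)                   % xr(t) = sum_n x(nT) hr(t - nT)
    xr = xr + xn(k)*sinc((t - n(k)*T)/T);     % hr(t) = sinc(t/T)
end
plot(t, xr, t, cos(2*pi*2*t), '--')   % reconstruction vs. original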

Zero-order Hold Approximate Reconstruction

x (t)
x0 ( t )

x0 ( t ) = x p ( t ) ∗ h0 ( t ) (10)

h0 ( t )
1
t
T

H_0(jω) = e^{−jωT/2} · sin(ωT/2)/(ω/2)                                   (11)

X_0(jω) = H_0(jω) X_p(jω)                                                (12)

[Figure: |H_0(jω)| compared with the ideal reconstruction filter H_r(jω) of gain T and cutoff ωs/2.]
ee120 - fall’15 - lecture 10 notes 4

Linear Interpolation (First-order Hold)


= x p ( t ) ∗ h1 ( t )

h1 ( t )
1
t
t −T T

 2
1 sin(ωT/2)
H1 ( jω ) = (13)
T ω/2

T H0 ( jω )
H1 ( jω )
Hr ( jω )
ω
ωs ωs
2

Examples of Aliasing
Section 7.3 in Oppenheim & Willsky
Example 1: x (t) = cos(ω0 t), ωs = 3ω0

X ( jω )
π π

ω
− ω0 ω0

X p ( jω )
T π π reconstruction filter
T T

ω
− ω0 ω0 ω s ωs
2

T 2T t

ω0 =
3T
ee120 - fall’15 - lecture 10 notes 5

3ω0
Example 2: x (t) = cos(ω0 t), ωs = 2

X p ( jω )
T

ω
− ω s ω0 ω s ω0
2
ωs

x_r(t) = cos( ω0 t / 2 ) ≠ x(t)                                          (14)

xr ( t )
x (t)
t
2π 4π
w0 w0

Example 3: (phase reversal)

1 jφ jω0 t 1 3ω0
x (t) = cos(ω0 t + φ) = e e + e− jφ e− jω0 t , ωs =
2 l 2 2
2πδ(ω −ω0 )
(15)

X p ( jω )
T
π − jφ π jφ π − jφ π jφ
Te Te Te Te

ω
ω0 ω0 ω s
2

 ω0   ω 
Xr ( jω ) = πe jφ δ ω + + πe− jφ δ ω − 0 (16)
2 2
l l
ω ω
1 − j 20 t 1 j 20 t
2π e 2π e

1  j ( ω0 t − φ ) ω0 
xr ( t ) = e 2 + e− j( 2 t−φ) (17)
2 
ω0 
= cos t − φ → phase reversal (18)
 2ω 
= cos − 0 t + φ (19)
2
Wagon wheel effect in movies: Wheel appears to rotate more slowly
and in the opposite direction when actual speed exceeds half of the
sampling rate (18-24 frames/second).
ee120 - fall’15 - lecture 10 notes 6

Example 4: (critical frequency)

x (t) = cos(ω0 t + φ) ωs = 2ω0 (20)

X p ( jω ) π jφ
+ e− jφ )
T (e
= 2π
T cos( φ )
ω
ω0 ωs

Hr ( jω )
T
T
2 ω
ωs
2

Xr ( jω )
π cos(φ) π cos(φ)

ω
ω0

x_r(t) = cos(φ) cos(ω0 t) ≠ x(t) unless φ = 0.                           (21)

φ=0 0 < φ < 90◦ φ = 90◦


t t t

x (t) x (t)
xr ( t ) = x ( t ) xr ( t ) xr ( t )

Example 5: x (t) = cos(ω0 t) ws = 3ω0 , zero-order hold reconstruction

T 2T ω0 4T t

X p ( jω ) H0 ( jω )

ω
− ω0 ω0 ω s ωs
2
EE120 - Fall’15 - Lecture 11 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
7 October 2015

Sampling Continued
Chapter 7 in Oppenheim & Willsky
x (t)

t
T 2T 3T · · ·

xd [n] = x (nT ) (1)

How are the Fourier Transforms of x (t) and xd [n] related?


Use impulse train sampling: x p (t) = ∑n xd [n]δ(t − nT ) to relate the
two:


With Ω = ωT (ω in radians/sec, Ω in radians):

X_p(jω) = ∑_{n=−∞}^{∞} x_d[n] e^{−jωTn}

X_d(e^{jΩ}) = ∑_{n=−∞}^{∞} x_d[n] e^{−jΩn}

X_d(e^{jΩ}) = X_p(jω)|_{ω=Ω/T}                                           (2)

Last lecture:  X_p(jω) = (1/T) ∑_{k=−∞}^{∞} X(j(ω − kωs))                (3)

Combine the two:


[Figure: X(jω) bandlimited to ω_M with height 1; X_p(jω) with replicas of height 1/T at multiples of ωs = 2π/T; and X_d(e^{jΩ}), the same picture on the Ω = ωT axis, with replicas at multiples of 2π and the band edge at Ω = ω_M T.]
ee120 - fall’15 - lecture 11 notes 2

DT Processing of CT Signals
Section 7.4 in Oppenheim & Willsky

xd [n] DT yd [n]
x (t) C/D D/C y(t)
System

T T

This is not a LTI system (why not?), therefore it does not possess a
well-defined frequency response H ( jω ).
However, if x (t) is bandlimited by ws /2 = π/T, an “effective” H ( jω )
can be calculated:

xd [n] yd [n] y p (t) T



Hd e jΩ Ω → ωT

x (t) × ω→ T
y(t)
ωs /2

p(t) Hr ( jω )

Y_d(e^{jΩ}) = H_d(e^{jΩ}) X_d(e^{jΩ}) = H_d(e^{jΩ}) X_p(jΩ/T)            (4)

Y_p(jω) = Y_d(e^{jωT}) = H_d(e^{jωT}) X_p(jω)                            (5)

Y(jω) = T H_d(e^{jωT}) X_p(jω)  for |ω| < ωs/2,  and 0 for |ω| > ωs/2.   (6)

X_p(jω) = (1/T) ∑_{k=−∞}^{∞} X(j(ω − kωs))                               (7)

Combining (6) and (7):

Y(jω) = H_d(e^{jωT}) ∑_{k=−∞}^{∞} X(j(ω − kωs))  for |ω| < ωs/2,  and 0 for |ω| > ωs/2.   (8)

If x(t) is bandlimited by ωs/2, there is no aliasing:

∑_{k=−∞}^{∞} X(j(ω − kωs)) = X(jω),   |ω| < ωs/2                         (9)

Y(jω) = H_d(e^{jωT}) X(jω)  for |ω| < ωs/2,  and 0 for |ω| > ωs/2.       (10)

H_eff(jω) = Y(jω)/X(jω) = H_d(e^{jωT})  for |ω| < ωs/2,  and 0 for |ω| > ωs/2   (11)

(effective frequency response, valid for inputs with bandwidth < ωs/2).
ee120 - fall’15 - lecture 11 notes 3

Example: Digital differentiator



We want  H_eff(jω) = jω  for |ω| < ωs/2,  and 0 for |ω| > ωs/2;

therefore,  H_d(e^{jΩ}) = jΩ/T  for |Ω| < π.

[Figure: |H_d(e^{jΩ})| grows linearly to π/T at Ω = π and repeats with period 2π.]

From the inverse Fourier transform:

h_d[n] = cos(πn)/(nT) = (−1)^n/(nT)  for n ≠ 0,   and h_d[0] = 0.        (12)

yd [n] = hd [n] ∗ xd [n] = ∑ xd [n − k ] hd [k ] (13)


k
= . . . hd [−2] xd [n + 2] + hd [−1] xd [n + 1]
+ h d [1] x d [ n − 1] + h d [2] x d [ n − 2] + . . . (14)
| {z } | {z }
−hd [−1] −hd [−2]

= hd [−1] ( xd [n + 1] − xd [n − 1])
+ hd [−2] ( xd [n + 2] − xd [n − 2]) + . . . (15)

Note that this is a form of numerical differentiation.


We can truncate hd [n] with an appropriate window and implement
the resulting FIR filter.
Matlab commands:
• fir1 for lowpass, highpass, bandpass, etc.

• fir2 for arbitrary shapes:

fir2(M, F,A ,window(M+1))


|{z}
plot (F, A) defines filter shape

F
1

Example above: F = [0, 1] A = [0, jπ/T ]


ee120 - fall’15 - lecture 11 notes 4

Try a simpler Euler approximation instead:

y_d[n] = ( x_d[n] − x_d[n − 1] ) / T = h_d[n] ∗ x_d[n]                   (16)

where h_d[0] = 1/T,  h_d[1] = −1/T,  and h_d[n] = 0 for n < 0 and n > 1.

H_d(e^{jΩ}) = ∑_n h_d[n] e^{−jΩn} = (1/T)(1 − e^{−jΩ})                   (17)

H_eff(jω) = (1/T)(1 − e^{−jωT})  for |ω| < ωs/2,  and 0 for |ω| > ωs/2   (18)

|H_eff(jω)| = (1/T) sqrt( (1 − cos ωT)² + (sin ωT)² )                    (19)
            = (1/T) sqrt( 2(1 − cos ωT) ),    |ω| < ωs/2                 (20)

[Figure: |H_eff(jω)| compared with the ideal |ω|; the two agree for small ω, and the Euler approximation reaches only 2/T at ωs/2 = π/T.]
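The two approximations can be compared with a few lines of MATLAB (a minimal sketch; T, M, and the window are arbitrary choices, and freqz/hamming assume the Signal Processing Toolbox):

T = 1;  M = 20;  n = -M:M;
hd = (-1).^n ./ (n*T);  hd(n == 0) = 0;     % ideal differentiator samples, eq. (12)
hd = hd .* hamming(2*M+1)';                 % truncate with a window
he = [1 -1]/T;                              % Euler difference, eq. (16)
[H1, w] = freqz(hd, 1, 512);
[H2, ~] = freqz(he, 1, 512);
plot(w, abs(H1), w, abs(H2), w, w/T, '--')  % both vs. the ideal |omega|/T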

Example:
Digital implementation of a delay: y(t) = x (t − ∆)
How should we design the DT filter?
If ∆ is an integer multiple of T, then

y_d[n] = x_d[n − ∆/T]   (∆/T integer)   →   h_d[n] = δ[n − ∆/T].

What if ∆/T is not an integer?
We want y(t) = x(t − ∆), i.e. Y(jω) = e^{−jω∆} X(jω).
Therefore, the desired H_eff(jω) is:

H_eff(jω) = e^{−jω∆}  for |ω| < ωs/2,  and 0 for |ω| > ωs/2              (21)

and H_d(e^{jΩ}) = e^{−jΩ∆/T} for |Ω| < π.
Then the inverse Fourier transform gives:

h_d[n] = sin( (n − ∆/T)π ) / ( (n − ∆/T)π ) = sinc( n − ∆/T )            (22)
ee120 - fall’15 - lecture 11 notes 5

Examples:

1) ∆ = T:

h_d[n] = sin((n − 1)π) / ((n − 1)π) = 1 if n = 1,  0 otherwise           (23)

2) ∆ = T/2:

h_d[n] = sin( (n − 1/2)π ) / ( (n − 1/2)π )                              (24)

y_d[n] = h_d[n] ∗ x_d[n] = ∑_k x_d[k] h_d[n − k]                         (25)
       = ∑_k x_d[k] sinc( n − ∆/T − k )                                  (26)

Visualize (26) as if n were a continuous variable: it is the bandlimited
interpolation ∑_k x_d[k] sinc(n − k) of the samples x_d[k], shifted by ∆/T.

[Figure: the samples x_d[k] and the outputs y_d[n], which lie on the shifted bandlimited interpolation.]
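A half-sample delay built this way can be tried directly in MATLAB (a minimal sketch; the input and the truncation length M are illustrative, and sinc assumes the Signal Processing Toolbox):

M  = 30;  n = -M:M;
hd = sinc(n - 0.5);                 % eq. (24): Delta = T/2
xd = cos(0.2*pi*(0:99));            % a slowly varying input sequence
yd = conv(xd, hd, 'same');          % yd[n] ~ x((n - 1/2)T)
plot(0:99, xd, 'o', 0:99, yd, 'x')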
ee120 - fall’15 - lecture 11 notes 6

2D Sampling

Given x (t1 , t2 ) and sampling periods T1 , T2 :

xd [n1 , n2 ] , x (n1 T1 , n2 T2 ).

Impulse train sampling:

x p ( t1 , t2 ) = x ( t1 , t2 ) p ( t1 , t2 )

where
p(t1 , t2 ) , ∑ ∑ δ(t1 − n1 T1 , t2 − n2 T2 ).
n1 n2

2D CTFT gives:

1
X p ( jω1 , jω2 ) =
T1 T2 ∑ ∑ X ( j (ω1 − k1 ωs1 ) , j (ω2 − k2 ωs2 ))
k1 k2

where
2π 2π
ω s1 = and ω s2 = .
T1 T2
Therefore, if x (t1 , t2 ) is bandlimited:

X ( jω1 , jω2 ) = 0 when |ω1 | > ωc1 or |ω2 | > ωc2

and
ωs1 > 2ωc1 , ωs2 > 2ωc2 ,

then there is no aliasing upon sampling:

ω2

(0,ωs2 ) (ωs1 ,ωs2 )


• •

ω c2
(ωs1 ,0)
• ω1
ω c1

Thus, x (t1 , t2 ) can be reconstructed from its samples with a low pass
filter.
EE120 - Fall’15 - Lecture 12 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
12 October 2015

Sampling of Discrete-Time Signals


Section 7.5 in Oppenheim & Willsky
Impulse Train Sampling


p[n] = ∑ δ[n − kN ] (1)
k =−∞

x p [n] = x [n] p[n] = ∑ x [kN ]δ[n − kN ] (2)
k=−∞

N=2 x [n]

× × ×
× ×
n

x p [n]

1
Fourier series coefficients of p[n]: a_k = 1/N for all k. Therefore,

P(e^{jω}) = ∑_{k=−∞}^{∞} 2π a_k δ(ω − k·2π/N) = (2π/N) ∑_{k=−∞}^{∞} δ(ω − kωs),   ωs = 2π/N.

Since x_p[n] = x[n] p[n], the multiplication property of the DTFT implies:

X_p(e^{jω}) = (1/2π) ∫_{2π} P(e^{jθ}) X(e^{j(ω−θ)}) dθ                   (3)

where the integrand consists of shifted copies (2π/N) X(e^{j(ω−kωs)}), k = 0, . . . , N−1, over one 2π period of θ.

ee120 - fall’15 - lecture 12 notes 2

Thus,

X_p(e^{jω}) = (1/N) ∑_{k=0}^{N−1} X(e^{j(ω−kωs)}),   ωs = 2π/N           (4)

No aliasing if ωs > 2ωm where ωm is the bandwidth of X (e jω ).


Below is an example of X (e jω ) and X p (e jω ) for N = 2:

X (e jω )
1

ω
ωm 2π

1
X p (e jω )
N

ω
ωm 2π 2π
N

Compare (4) to impulse train sampling of CT signals:



1 2π
X p ( jω ) =
T ∑ X ( j(ω − kωs )) ωs =
T
. (5)
k =−∞

Ideal Reconstruction Filter

[Figure: the ideal reconstruction filter H_r(e^{jω}) has gain N and cutoff ωs/2.]

Impulse response: h_r[n] = sin(πn/N)/(πn/N) for n ≠ 0, and h_r[0] = 1, so that
h_r[kN] = 0 for k = 1, 2, 3, . . .

Convolution with h_r[n] results in a “bandlimited interpolation":

x_r[n] = x_p[n] ∗ h_r[n] = ( ∑_{k=−∞}^{∞} x[kN] δ[n − kN] ) ∗ h_r[n]     (6)
       = ∑_{k=−∞}^{∞} x[kN] h_r[n − kN]                                  (7)
ee120 - fall’15 - lecture 12 notes 3

Bandlimited interpolation is illustrated below on a signal sampled


with period N = 10.
[Figure: x_p[n]; the individual terms x[0]h_r[n], x[N]h_r[n − N], x[2N]h_r[n − 2N]; and their sum x_r[n] = ∑_k x[kN] h_r[n − kN], shown for −5 ≤ n ≤ 25.]

Zero-Order Hold Reconstruction

x [n] × H0 (e jω ) x0 [ n ]
x p [n]

p[n]

h0 [ n ]

···
n
N−1N

Taking the DTFT of this impulse response we get the frequency re-
sponse:
N −1 sin(ωN/2)
H0 (e jω ) = e− jω 2 ω 6= 0, and H0 (e j0 ) = N (8)
sin(ω/2)

N | H0 (e jω )|
Hr (e jω )

ω
−2π −π 2π 2π
N N
ee120 - fall’15 - lecture 12 notes 4

Downsampling
Select every Nth sample, discard the rest:
xb [n] = x [ Nn] = x p [ Nn]. (9)

[Figure: x_p[n] and the downsampled sequence x_b[n], which retains x[0], x[N], . . .]

X_p(e^{jω}) = . . . + x[0] + x[N] e^{−jωN} + x[2N] e^{−jω2N} + . . .     (10)
X_b(e^{jω}) = . . . + x[0] + x[N] e^{−jω}  + x[2N] e^{−j2ω}  + . . .     (11)

=⇒  X_b(e^{jω}) = X_p(e^{jω/N})                                          (12)

[Figure: X(e^{jω}) with bandwidth ω_m; X_p(e^{jω}) with replicas of height 1/N spaced 2π/N apart; X_b(e^{jω}) is X_p stretched by N, so the replicas are 2π apart and the band edge moves to ω_m · N.]

Note that sampling of a CT signal followed by downsampling is


equivalent to sampling of the CT signal at a slower rate:

x (t) C/D ↓N ≡ x (t) C/D

T NT
ee120 - fall’15 - lecture 12 notes 5

Upsampling
Inverse of downsampling:

xb [n] ↑N x [n]

 x [n/N ]
b n = 0, ∓ N, ∓2N, . . .
• Expand signal by N: x p [n] =
0 otherwise

• Obtain x [n] from x p [n] by interpolation (reconstruction filter)

[Figure: x_b[n] with spectrum X_b(e^{jω}) of bandwidth ω_m; the expanded sequence x_p[n], whose spectrum X_p(e^{jω}) has compressed replicas (band edge ω_m/N) that are removed by the gain-N reconstruction filter H_r(e^{jω}); and the interpolated x[n] with spectrum X(e^{jω}), whose missing samples are recovered by interpolation.]

Is sampling a CT signal followed by upsampling equivalent to sampling the


CT signal faster in the first place?
Not if the initial, slower sampling introduced aliasing. Yes, if the
original sampling period T was small enough to avoid aliasing:

xd [n]
xc (t) C/D ↑N ≡ xc (t) C/D

T T/N
If xc (t) is
bandlimited
by ω M < π/T
ee120 - fall’15 - lecture 12 notes 6

3
Example: xc (t) = cos ω0 t ωs = ω0
2

xc (t)

T 2T 3T t
2π 4π
w0 w0
Interpolated (red) sam-
↑ ples don’t match the
upsampling with N = 2 results of ×2 faster
sampling.
Sampling with T/2:
xc (t)

T 3T 5T
2 T 2 2T 2 3T t

Downsampling by a Noninteger (but Rational) N


Write N = M/L where M and L are integers. Upsample by L, then
downsample by M.
Example: The signal with spectrum below can be downsampled by
N = 4.5 without aliasing: first upsample by 2, then downsample by 9.

X (e jω )
1

ω
2π π 2π
upsamping ×2 9
2
Xu (e jω )

ω
π
9
π 2π

downsampling ×9 Xub (e jω )
2/9
ω
π 2π
ee120 - fall’15 - lecture 12 notes 7

What happens if we downsample first, upsample next?

X (e jω )
1

ω
2π π 2π
downsampling ×9 9
Xb (e jω )
1
9

ω
upsamping ×2: π 2π
Hr (e jω ): gain= 2
cutoff: π2
2 Xbu (e jω )
9
6= Xub (e jω )
ω
π
2
π 2π
EE120 - Fall’15 - Lecture 13 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
14 October 2015

The Laplace Transform


Chapter 9 in Oppenheim & Willsky
The Laplace transform of x (t) is
Z ∞
X (s) = x (t)e−st dt (1)
−∞

where s is a complex variable. X (s)|s= jω is the Fourier transform.


Recall from Lecture 2:
Z ∞
est → h(t) → H (s)est where H (s) , h(t)e−st dt.
−∞

Thus the transfer function H (s) is the Laplace transform of h(t).


Example 1:

x(t) = e^{−at} u(t)  ↔  X(jω) = 1/(jω + a)   if a > 0   (Lecture 4)

Find the Laplace transform:

X(s) = ∫_{0}^{∞} e^{−at} e^{−st} dt.

Let σ denote the real part of s (s = σ + jω):

X(s) = ∫_{0}^{∞} e^{−(a+σ)t} e^{−jωt} dt = Fourier transform of e^{−(a+σ)t} u(t)   (converges if a + σ > 0)
     = 1/( jω + (a + σ) ) = 1/(s + a).

Therefore,

X(s) = 1/(s + a)   if σ = Re{s} > −a.
If a = 0 (unit step), Fourier transform doesn’t converge, but the
Laplace transform does for Re{s} > 0.

Example 2: x (t) = −e− at u(−t)


Z 0
X (s) = − e− at e−st dt ( let τ = −t)
−∞
Z 0 Z ∞
1 ( a+s)τ ∞
= − e(a+s)τ (−dτ ) = − e(a+s)τ dτ = − e |0
∞ 0 a+s

1
If Re{ a + s} < 0, then e(a+s)τ → 0 as τ → ∞: = if Re{s} < − a
s+a
ee120 - fall’15 - lecture 13 notes 2

Region of Convergence (ROC)

Im Im
Ex. 1: Ex. 2:

−a Re −a Re

If the ROC includes the imaginary axis, then the Fourier transform
converges.
Example 3: x (t) = 3e−2t u(t) − 2e−t u(t)

3 2
X (s) = − ROC = {s| Re{s} > −2} ∩ {s| Re{s} > −1}
s+2 s+1
s−1 = {s| Re{s} > −1}
=
(s + 1)(s + 2)
Example 4: x (t) = e−at u(t), a: complex.

1
X (s) = if Re{s + a} > 0, i.e., if Re{s} > − Re{ a}.
s+a

Example 5: cos(ω0 t)u(t) = (1/2) e^{jω0 t} u(t) + (1/2) e^{−jω0 t} u(t)

From Example 4 the Laplace transform is:

(1/2)·1/(s − jω0) + (1/2)·1/(s + jω0) = (1/2)·2s/(s² + ω0²) = s/(s² + ω0²),   Re{s} > 0.

cos(ω0 t)u(t)  ←L→  s/(s² + ω0²),   Re{s} > 0                            (2)

A similar derivation shows:

sin(ω0 t)u(t)  ←L→  ω0/(s² + ω0²),   Re{s} > 0                           (3)

Example 6: e− at cos(ω0 t)u(t) = 12 e−(a− jω0 )t u(t) + 12 e−(a+ jω0 )t u(t)


From Example 4:

1 1 1 1 (s + a)
+ =
2 s + a − jω0 2 s + a + jω0 (s + a)2 + ω02
Re{s}>− a Re{s}>− a

L (s + a)
e− at cos(ω0 t)u(t) ←→ Re{s} > − a (4)
(s + a)2 + ω02
ee120 - fall’15 - lecture 13 notes 3

Poles and Zeros of Laplace Transforms

N (s)
X (s) = (5)
D (s)
Zeros: roots of N (s) (N (s) = 0), poles: roots of D (s) (D (s) = 0).

Im
Ex. 3: Ex. 5:
jω0

−2 −1 1 Re
ROC −jω0 ROC
Zeros marked with 00 o00 , and poles with 00 x00 .

Example 7:

x(t) = δ(t) − (4/3) e^{−t} u(t) + (1/3) e^{2t} u(t)

Laplace transform of δ(t): ∫_{−∞}^{∞} δ(t) e^{−st} dt = 1 for all s ∈ C.

X(s) = 1 − (4/3)·1/(s + 1) + (1/3)·1/(s − 2)
     = ( 3(s + 1)(s − 2) − 4(s − 2) + (s + 1) ) / ( 3(s + 1)(s − 2) )
     = (3s² − 6s + 3) / ( 3(s + 1)(s − 2) )
     = (s² − 2s + 1) / ( (s + 1)(s − 2) ) = (s − 1)² / ( (s + 1)(s − 2) ),   if Re{s} > 2.
Im

−1 1 2 Re
ROC

Note: In each example, the ROC excludes poles (Property 2 below).

Properties of the ROC


Section 9.2 in Oppenheim & Willsky
Note that the Laplace and Fourier transforms are related by

L{ x (t)} = F { x (t)e−σt } where σ = Re{s}.

The properties below characterize the ROC for the Laplace transform
by examining the absolute integrability condition2 for the conver- 2
The other Dirichlet conditions (Lecture
gence of the Fourier transform: 4) are assumed to be satisfied.
Z ∞
| x (t)|e−σt dt < ∞. (6)
−∞
ee120 - fall’15 - lecture 13 notes 4

1) ROC consists of strips parallel to the imaginary axis.

Justification: Since (6) depends only on σ = Re{s}, if a particular


point is in the ROC then the entire vertical line with the same real
part must be in the ROC.
2) For rational Laplace transforms (X(s)=N(s)/D(s)), ROC does not
contain poles.

3) If x (t) is of finite duration and absolutely integrable, then the ROC


is the entire complex plane.
Justification: For a finite duration signal like the one below, condition
(6) becomes:
Z T2
| x (t)|e−σt dt < ∞.
T1

To see that this indeed holds for all σ, note that


Z T2 Z T2
| x (t)|e−σt dt ≤ max e−σt · | x (t)|dt
T1 t∈[ T1 ,T2 ] T1

R T2
where e−σt is bounded in the bounded interval [ T1 , T2 ] and T1 | x (t)|dt
is bounded from the absolute integrability of x (t).

x(t)

T1 T2 t

Definition: x (t) is right-sided if x (t) = 0 prior to some finite time


T1 , left-sided if x (t) = 0 after some finite time T2 , and two-sided if
neither is the case.

x(t) x(t)

T1 t T2 t

4) For a right-sided x (t), if ROC is not empty then it extends to +∞


along the real axis. (Examples 1,3,4,5,6,7)
Justification: Suppose
Z ∞
| x (t)|e−σ0 t dt < ∞
T1
ee120 - fall’15 - lecture 13 notes 5

for some σ0 , so that the vertical line Re{s} = σ0 is in the ROC.


R∞
Then, for any σ ≥ σ0 , T | x (t)|e−σt dt < ∞ because:
1
Z ∞ Z ∞
| x (t)|e−σt dt = | x (t)|e−σ0 t e|−(σ{z
−σ0 )t
}dt
T1 T1
≤e−(σ−σ0 )T1 t≥ T1
Z ∞
≤ e−(σ−σ0 )T1 | x (t)|e−σ0 t dt
−∞
| {z }
<∞

5) For a left-sided x(t), if ROC is not empty then it extends to −∞ along


the real axis. (Example 2)

6) For a two-sided x (t), if ROC is not empty then it is a vertical strip.


Example 8:
(
−b|t| e−bt t≥0
x (t) = e =
ebt t<0

1L
e−bt u(t) ←→ if Re{s} > −b (Example 1) (7)
s+b
L −1
ebt u(−t) ←→ if Re{s} < b (Example 2) (8)
s−b

If b ≤ 0, ROC = ∅. If b > 0, ROC = {s| − b < Re{s} < b}.

Im

−b b Re

7) If X (s) is rational, then ROC is bounded by poles or extends to ∓∞.

8) If X (s) is rational and x (t) is right-sided, ROC is the half-plane to the


right of the rightmost pole. (Examples 1,3,4,5,6,7)
If x (t) is left-sided, ROC is the half-plane to the left of the leftmost pole.
(Example 2)

Inverse Laplace Transform by Partial Fraction Expansion


Section 9.3 in Oppenheim & Willsky
Example 9:

X(s) = 1/((s + 1)(s + 2)) = A/(s + 1) + B/(s + 2) = ( (A + B)s + (2A + B) ) / ( (s + 1)(s + 2) )

A + B = 0,   2A + B = 1   =⇒   A = 1,   B = −1

Note that:

1/(s + 1) → e^{−t} u(t)   if Re{s} > −1,   or   −e^{−t} u(−t)   if Re{s} < −1
1/(s + 2) → e^{−2t} u(t)  if Re{s} > −2,   or   −e^{−2t} u(−t)  if Re{s} < −2
Thus, x (t) can’t be determined uniquely unless the ROC is specified.

Possibilities:

1) x(t) = e^{−t} u(t) − e^{−2t} u(t),   if Re{s} > −1.

[Pole-zero plot: the ROC is the half-plane to the right of Re{s} = −1.]

2) x(t) = e^{−t} u(t) + e^{−2t} u(−t),   ROC = ∅,
   since Re{s} > −1 and Re{s} < −2 do not intersect.

3) x(t) = −e^{−t} u(−t) − e^{−2t} u(t),   if −2 < Re{s} < −1.

[Pole-zero plot: the ROC is the vertical strip −2 < Re{s} < −1.]

4) x(t) = −e^{−t} u(−t) + e^{−2t} u(−t),   if Re{s} < −2.

[Pole-zero plot: the ROC is the half-plane to the left of Re{s} = −2.]
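The expansion coefficients (but not the ROC, which must still be chosen) can be checked numerically with MATLAB's residue command; a minimal sketch:

[r, p] = residue(1, conv([1 1], [1 2]))
% the poles p are -1 and -2 with residues 1 and -1, i.e.
% X(s) = 1/(s+1) - 1/(s+2); the ROC, and hence which of the four
% signals above is x(t), is not determined by the coefficients alone.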
ee120 - fall’15 - lecture 13 notes 7

Table 1: Laplace transforms of several functions.

Signal                            Transform               ROC
δ(t)                              1                       all s
u(t)                              1/s                     Re{s} > 0
−u(−t)                            1/s                     Re{s} < 0
(t^{n−1}/(n−1)!) u(t)             1/s^n                   Re{s} > 0
−(t^{n−1}/(n−1)!) u(−t)           1/s^n                   Re{s} < 0
e^{−at} u(t)                      1/(s+a)                 Re{s} > −a
−e^{−at} u(−t)                    1/(s+a)                 Re{s} < −a
(t^{n−1}/(n−1)!) e^{−at} u(t)     1/(s+a)^n               Re{s} > −a
−(t^{n−1}/(n−1)!) e^{−at} u(−t)   1/(s+a)^n               Re{s} < −a
δ(t − T)                          e^{−sT}                 all s
cos(ω0 t) u(t)                    s/(s²+ω0²)              Re{s} > 0
sin(ω0 t) u(t)                    ω0/(s²+ω0²)             Re{s} > 0
e^{−at} cos(ω0 t) u(t)            (s+a)/((s+a)²+ω0²)      Re{s} > −a
e^{−at} sin(ω0 t) u(t)            ω0/((s+a)²+ω0²)         Re{s} > −a
EE120 - Fall’15 - Lecture 14 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
19 October 2015

The Laplace Transform continued


Z ∞
X (s) = x (t)e−st dt
−∞

If the ROC contains the imaginary axis, then the Fourier transform is
obtained by setting s = jω in the Laplace transform:
Z ∞
X ( jω ) = x (t)e− jωt dt.
−∞

Properties of the Laplace Transform


Section 9.5 in Oppenheim & Willsky
L
Assume that x (t) ↔ X (s) with ROC = R.
Linearity:
L
ax1 (t) + bx2 (t) ←→ aX1 (s) + bX2 (s) (1)
ROC contains R1 ∩ R2 , but can be larger: e.g., if x1 (t) = x2 (t) and
a =−b, then ax1 (t) + bx2 (t) ≡ 0 and ROC is the entire complex plane.
Time-Shift:
x (t − t0 ) ↔ e−st0 X (s) (2)
ROC unchanged because:
Z ∞ Z ∞ Z ∞
x (t − t0 )e−st dt = x (τ )e−sτ e−st0 dτ = e|{z}
−st0
x (τ )e−sτ dτ
−∞ | {z } −∞ −∞
,τ this factor | {z }
doesn’t change X (s)
convergence

Shifting in the s-Domain:

L
es0 t x (t) ←→ X (s − s0 ) ROC = R + Re{s0 } (3)

F
Compare to: e jω0 t x (t) ←→ X ( j(ω − ω0 )) in Fourier transforms.
Time-Scaling:

L 1 s
x ( at) ←→ X ROC = a · R (4)
| a| a

L
In particular, x (−t) ←→ X (−s), with ROC = − R.
ee120 - fall’15 - lecture 14 notes 2

Example:

s L
cos(ω0 t)u(t) ←→ Re{s} > 0
+ ω02 s2
L −s
cos(−ω0 t)u(−t) = cos(ω0 t)u(−t) ←→ 2 Re{s} < 0
s + ω02

Conjugation:

L
x ∗ (t) ←→ X ∗ (s∗ ) ROC unchanged (5)

Therefore, if x (t) is real: X (s) = X ∗ (s∗ )


F
Equivalent property in Fourier transforms: x ∗ (t) ←→ X ∗ (− jω )
Convolution:

L
x1 (t) ∗ x2 (t) ←→ X1 (s) X2 (s) ROC contains R1 ∩ R2 (6)

Differentiation in Time Domain:

dx (t) L
←→ sX (s) ROC contains R but can be larger (7)
dt

Example:

L 1
x (t) = u(t) ←→ X (s) = R = {s : Re{s} > 0}
s
dx (t) L
= δ(t) ←→ 1 for all s ROC : entire complex plane
dt

Example:

L ω0
x (t) = sin(ω0 t)u(t) ←→ Re{s} > 0
s2 + ω02
dx (t) L ω s
= ω0 cos(ω0 t)u(t) ←→ 2 0 2 Re{s} > 0
dt s + ω0

Differentiation in the s-Domain:

L dX (s)
−tx (t) ←→ ROC unchanged for exponential signals (8)
ds
R∞ dX (s) R∞
Proof: X (s) = −∞ x (t)e−st dt then ds = −∞ −tx (t)e−st dt.
ee120 - fall’15 - lecture 14 notes 3

Example:

e^{−at} u(t)      ←L→   1/(s + a)
t e^{−at} u(t)    ←L→   −d/ds[ 1/(s + a) ]  = 1/(s + a)²
t² e^{−at} u(t)   ←L→   −d/ds[ 1/(s + a)² ] = 2/(s + a)³
  ...
t^n e^{−at} u(t)  ←L→   n!/(s + a)^{n+1}

with Re{s} > −a for all cases.

Special case a = 0:  u(t) ↔ 1/s,   t u(t) ↔ 1/s²,   . . . ,   t^n u(t) ↔ n!/s^{n+1}

Example: Partial fraction expansion for repeated poles


Given ROC = {s : Re{s} > −1}, find the inverse Laplace transform
for:
1
X (s) = .
(s + 1)(s + 2)2

1 A1 A A22
X (s) = 2
= + 21 +
(s + 1)(s + 2) s + 1 s + 2 ( s + 2)2
A1 (s + 2)2 + A21 (s + 1)(s + 2) + A22 (s + 1)
=
(s + 1)(s + 2)2

( A1 + A21 )s2 + (4A1 + 3A21 + A22 )s + (4A1 + 2A21 + A22 ) = 1


| {z } | {z } | {z }
=0 =0 =1
=⇒ A1 = 1, A21 = A22 = −1
1 1 1
X (s) = − − ↔ x (t) = (e−t − e−2t − te−2t )u(t)
s + 1 s + 2 ( s + 2)2

Integration in Time:

Z t
L 1
x (τ )dτ ←→ X (s) ROC contains R ∩ {s : Re{s} > 0} (9)
−∞ s
Rt
Follows from the convolution property: −∞ x (τ )dτ = x (t) ∗ u(t).
Rt
Example: x (t) = δ(t) ↔ 1 ∀s, −∞ x (τ )dτ = u(t) ↔ 1s Re{s} > 0.

Initial Value Theorem:


If x (t) = 0 for all t < 0 and contains no impulses or singularities at
t = 0,
x (0+ ) = lim sX (s) (10)
s→∞
ee120 - fall’15 - lecture 14 notes 4

Example:

1
e− at u(t) ↔ 1 e−at
s+a
s
lim = 1 = e−at u(t)|t=0+
s→∞ s+a t
Final Value Theorem:
If x (t) = 0 for all t < 0 and x (t) has a finite limit as t → ∞, then

lim x (t) = lim sX (s) (11)


t→∞ s →0

Transfer Functions of LTI Systems


Section 9.7 in Oppenheim & Willsky

x (t) → h(t) → y(t)

From the convolution property:

Y (s) = H (s) X (s)


R∞
where H (s) = −∞ h(t)e−st dt is called the “transfer function” or
“system function."

Finding the transfer function from differential equations

∑_{k=0}^{N} a_k d^k y(t)/dt^k = ∑_{k=0}^{M} b_k d^k x(t)/dt^k

Take the Laplace transform of both sides and use the differentiation property:

∑_{k=0}^{N} a_k s^k Y(s) = ∑_{k=0}^{M} b_k s^k X(s)

H(s) = Y(s)/X(s) = ( ∑_{k=0}^{M} b_k s^k ) / ( ∑_{k=0}^{N} a_k s^k )
     = ( b_M s^M + b_{M−1} s^{M−1} + ... + b_0 ) / ( a_N s^N + a_{N−1} s^{N−1} + ... + a_0 )

Poles of the system: roots of a_N s^N + a_{N−1} s^{N−1} + ... + a_0
Zeros of the system: roots of b_M s^M + b_{M−1} s^{M−1} + ... + b_0

Figure 1 (margin note): Oliver Heaviside (1850-1925), a self-taught electrical engineer, invented the "operational calculus" where the differential operator d/dt is treated as a symbol ('s' in our case) and a linear differential equation is manipulated algebraically. Dynamic circuit elements could now be represented with simple algebraic expressions similar to Ohm's Law (e.g. Ls for inductance). Heaviside's method had found widespread use by the time others established the full mathematical justification with the help of a transform used by Laplace a century earlier. Heaviside had many other contributions, including condensing Maxwell's theory of electromagnetism into the four vector equations known today. He further coined the terms "inductance" and "impedance," and the unit step u(t) is sometimes referred to as the Heaviside function.
ee120 - fall’15 - lecture 14 notes 5

Example:
R L

+
+ di
x(t)
− C y(t) y(t) + L + Ri = x (t)
− dt

dy
Substitute i = C dt :

d2 y dy
LC + RC + y(t) = x (t).
dt2 dt
Therefore,
1
H (s) = .
LCs2 + RCs + 1

Poles: − RC ∓ R2 C2 −4LC . No zeros.
2LC

How do poles affect the system response?

If there are no repeated poles, partial fraction expansion gives:

H(s) = ∑_{i=1}^{N} A_i / (s − α_i)                                       (12)

where α_i, i = 1, ..., N, are the poles. Then, assuming causality:

h(t) = ∑_{i=1}^{N} A_i e^{α_i t} u(t)                                    (13)

Each pole α_i contributes an exponential term e^{α_i t} to the response.²
² See Figure 2 on the next page, which we discussed in Lecture 2.

If α_i is repeated m times, then the system response includes:

t^{m−1} e^{α_i t}, ..., t e^{α_i t}, e^{α_i t}

Example: In the RLC circuit above, we expect oscillatory response if

R2 C2 < 4LC.

How do zeros affect the system response?

Suppose s = β is a zero of H (s), i.e., H ( β) = 0. Then:

e βt → h(t) → y(t) = H ( β)e βt = 0


ee120 - fall’15 - lecture 14 notes 6

Thus, the zero s = β blocks inputs of the form e βt from appearing at


the output.

Example: Consider the RLC circuit above and redefine the output to
be the current instead of the capacitor voltage: y(t) , i (t). Then,
 
d dy
C x (t) − Ry(t) − L = y ( t ).
dt dt
| {z }
voltage across capacitor

Rearrange terms:

d2 y dy dx
LC 2
+ RC + y = C .
dt dt dt
Then, the transfer function is:
Cs
H (s) = .
LCs2 + RCs + 1
Zero at s = 0 blocks constant inputs: When x (t) = e0t ≡ 1, y(t) ≡ 0.

Figure 2: The real part of e^{st} for various values of s in the complex plane. Note that e^{st} is oscillatory when s has an imaginary component. It grows unbounded when Re{s} > 0, decays to zero when Re{s} < 0, and has constant amplitude when Re{s} = 0.
[Grid of plots of Re{e^{st}} arranged over the s-plane, with Re{s} increasing to the right and Im{s} increasing upward.]
EE120 - Fall’15 - Lecture 15 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
21 October 2015

Analysis of LTI Systems using the Laplace Transform


Section 9.7 in Oppenheim & Willsky
x (t) → h(t) → y(t) Y (s) = H (s) X (s)

Causality: h(t) = 0 ∀t < 0


R∞
Stability: −∞ |h(t)|dt < ∞

Determining Causality and Stability from H (s)

Causality: If H (s) is rational, causality is equivalent to the ROC being


the half plane to the right of the rightmost pole.
Example: h(t) = e−t u(t)

1
H (s) = Re{s} > −1
s+1

Example (why rationality of H(s) matters):

h(t) = e^{−(t+1)} u(t + 1)   (right-sided but not causal; it starts at t = −1)

H(s) = e^s/(s + 1),   Re{s} > −1   →   not rational. If you don't check for
rationality first, you can falsely conclude causality from the ROC.

Stability: An LTI system is stable if and only if the ROC of H (s)


includes the imaginary axis.
Example:
H(s) = (s − 1)/((s + 1)(s − 2)) = (2/3)/(s + 1) + (1/3)/(s − 2)

Possibilities for ROC:

[Pole-zero plots: three possible ROCs: Re{s} < −1 (unstable), −1 < Re{s} < 2 (stable, since it includes the imaginary axis), and Re{s} > 2 (unstable).]

Note that the same conclusion can be reached by applying the absolute
integrability test to h(t):

1. h(t) = ( (2/3)e^{−t} + (1/3)e^{2t} ) u(t)            not absolutely integrable
2. h(t) = (2/3)e^{−t} u(t) − (1/3)e^{2t} u(−t)          absolutely integrable
3. h(t) = −( (2/3)e^{−t} + (1/3)e^{2t} ) u(−t)          not absolutely integrable

Simpler stability test with additional causality assumption:

A causal LTI system with rational H (s) is stable if and only if all
poles of H (s) are in the open left half-plane, i.e., all poles have
negative real parts.
Note: ”Open” left half-plane means that the imaginary axis is ex-
cluded.
Example (poles on the imaginary axis cause instability):

1
H (s) = (integrator)
s

If the input is x (t) = u(t), then X (s) = 1s and Y (s) = H (s) X (s) = s12 .
Then, y(t) = tu(t) which is unbounded although the input x (t) is
bounded.
Example (Butterworth filters):

|H(jω)|² = 1/( 1 + (ω/ωc)^{2N} ),   ωc: cutoff frequency,  N: filter order

[Figure: |H(jω)| equals 1 at ω = 0 and 1/√2 at ω = ωc, then rolls off; larger N gives a sharper transition.]

Derive the transfer function of a causal and stable LTI system with
real-valued h(t) that gives this frequency response.

ee120 - fall’15 - lecture 15 notes 3

|H(jω)|² = H(jω) H*(jω) = H(jω) H(−jω)   (H*(jω) = H(−jω) since h(t) is real)
         = 1 / ( 1 + (jω/(jωc))^{2N} )

=⇒  H(s) H(−s) = 1 / ( 1 + (s/(jωc))^{2N} )

Thus, the roots of 1 + (s/(jωc))^{2N} = 0 are the poles of H(s) combined
with the poles of H(−s):

s/(jωc) = e^{j(π/(2N) + k·2π/(2N))},   k = 0, 1, ..., 2N − 1

s = j ωc e^{j(π/(2N) + kπ/N)} = ωc e^{j(π/2 + π/(2N) + kπ/N)}

[Figure: for N = 3, the 2N = 6 poles are equally spaced (60° apart) on the circle of radius ωc, offset 30° from the imaginary axis; the three in the left half-plane (k = 0, 1, 2) belong to H(s), the three in the right half-plane (k = 3, 4, 5) to H(−s).]

Since the filter is to be causal and stable, H(s) must contain the N
poles in the left-half plane (k = 0, 1, ..., N − 1) and H(−s) must
contain the rest, k = N, ..., 2N − 1.

Denominator of H(s) for N = 3:

(s + ωc)(s + ωc e^{jπ/3})(s + ωc e^{−jπ/3}) = (s + ωc)( s² + 2cos(π/3) ωc s + ωc² )
                                            = (s + ωc)( s² + ωc s + ωc² )
                                            = s³ + 2ωc s² + 2ωc² s + ωc³

Therefore, H(s) = ωc³ / ( s³ + 2ωc s² + 2ωc² s + ωc³ )   (so that H(0) = dc-gain = 1).

Normalized transfer function for the N = 3 example above:

H'(s) = 1/( s³ + 2s² + 2s + 1 ),    H(s) = H'(s/ωc)  for any desired ωc.
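The same pole construction can be reproduced numerically (a minimal sketch; butter with the 's' option is the Signal Processing Toolbox routine for the analog prototype, and wc = 1 matches the normalized case above):

N = 3;  wc = 1;
k = 0:N-1;                                    % left-half-plane poles only
p = wc*exp(1j*(pi/2 + pi/(2*N) + k*pi/N));    % poles of H(s)
den = real(poly(p))                           % [1 2 2 1] for wc = 1
[b, a] = butter(N, wc, 's')                   % a matches den; numerator gives dc gain 1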
ee120 - fall’15 - lecture 15 notes 4

Evaluating the Frequency Response from the Pole-Zero Plot


Section 9.4 in Oppenheim & Willsky
Example: H(s) = 1/(s + 1),   |H(jω)| = 1/|jω + 1|

The vector from the pole at −1 to the point jω on the imaginary axis has length |jω + 1|;
|H(jω)| is its reciprocal, and ∡H(jω) = −∡(jω + 1).

[Figure: |H(jω)| falls from 1 toward 0; ∡H(jω) falls from 0 toward −90°.]

Example: H(s) = (s + 1)/(s + 10)

[Figure: pole at −10, zero at −1; |H(jω)| rises from 1/10 at ω = 0 toward 1 for ω ≫ 10, with the phase ∡H(jω) peaking between ω = 1 and ω = 10.]

Compare to Bode plots:  H(jω) = (1/10) · (1 + jω)/(1 + jω/10)

[Bode plots: 20log10|H(jω)| rises from −20 dB to 0 dB between ω = 1 and ω = 10; the phase rises toward π/4 and then returns toward 0.]
ee120 - fall’15 - lecture 15 notes 5

Example (second order system):

H(s) = ωn² / ( s² + 2ζωn s + ωn² )                                       (1)

H(jω) = ωn² / ( (jω)² + 2ζωn (jω) + ωn² )

ζ: damping ratio,   ωn: natural frequency

Recall: resonance occurs if ζ < 1/√2 ≈ 0.7.

Poles of H(s): s² + 2ζωn s + ωn² = 0, or (s/ωn)² + 2ζ(s/ωn) + 1 = 0.
Then s/ωn = −ζ ∓ sqrt(ζ² − 1).

Therefore, complex conjugate poles if ζ < 1:

s_{1,2} = ωn( −cos(θ) ∓ j sin(θ) )   where θ is defined by cos θ = ζ.

[Figure: the poles lie on the circle of radius ωn at angle θ from the negative real axis, with real part −ωn cos θ and imaginary parts ∓jωn sin θ.]

The resonance condition ζ < 1/√2 means θ > 45°.

[Figure: the pole's imaginary part is jωn sqrt(1 − ζ²); |H(jω)| peaks at ω = ωn sqrt(1 − 2ζ²).]

See Figure 1 below, which we discussed in Lecture 5.


ee120 - fall’15 - lecture 15 notes 6

Figure 1: The frequency, impulse, and step responses for the second order system (1). Note from the frequency response (top) that a resonance peak occurs when ζ < 0.7.
[Three panels: 20log10|H(jω)| vs. ω (in units of ωn) for ζ = 0.1, 0.2, 0.4, 0.7, 1.0, 1.5; the impulse response h(t)/ωn vs. ωn t; and the step response s(t) vs. ωn t.]
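The top panel of Figure 1 can be regenerated with a short loop (a minimal sketch; freqs assumes the Signal Processing Toolbox):

wn = 1;  w = logspace(-1, 1, 500);
hold on
for zeta = [0.1 0.2 0.4 0.7 1.0 1.5]
    H = freqs(wn^2, [1 2*zeta*wn wn^2], w);   % H(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2)
    semilogx(w, 20*log10(abs(H)))             % resonance peak visible for zeta < 0.7
end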
EE120 - Fall’15 - Lecture 16 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
26 October 2015

All-Pass Systems
Section 9.4.3 in Oppenheim & Willsky
What is the frequency response of an LTI system with transfer func-
tion:
a−s
H (s) = , a > 0?
s+a
[Figure: pole at −a, zero at +a. For every ω the vectors from the pole and from the zero to the point jω have equal length, so |H(jω)| ≡ 1, while ∡H(jω) = −2φ = −2 tan^{−1}(ω/a) decreases from 0 to −π.]

General all-pass system:


∏in=1 ( ai − s)
Hap (s) = , ai > 0 i = 1, ..., n. (1)
∏in=1 (s + ai )

For a stable and causal all-pass system, all zeros are in the right half-
plane because they are mirror images of the poles.

Although | Hap ( jω )| ≡ 1, an all-pass system introduces delay:

e jωt → Hap ( jω ) → Hap ( jω )e jωt = e j] Hap ( jω ) e jωt = e jω (t−τ (ω ))

where:
] Hap ( jω )
τ (ω ) , − > 0.
ω
Moreover, the system is not linear phase (i.e. τ (ω ) is not constant);
therefore it causes phase distortion. (Recall Lecture 8.)

Minimum Phase Systems

A stable and causal LTI system is called minimum phase if all of its
zeros are in the open left half-plane.
ee120 - fall’15 - lecture 16 notes 2

Any non-minimum phase transfer function H (s) can be decomposed


as:
H (s) = Hmin (s) Hap (s) (2)
| {z } | {z }
min-phase all-pass

This decomposition explains the genesis of the term minimum phase:

| H ( jω )| = | Hmin ( jω )| since | Hap ( jω )| ≡ 1,

but the all-pass component adds more delay. Therefore, H (s) and
Hmin (s) have identical frequency responses in magnitude, but Hmin (s)
has the minimum phase delay.

Example:  H(s) = (10 − s)/(10(s + 1)) = [ (s + 10)/(10(s + 1)) ] · [ (10 − s)/(s + 10) ]
                                       =       H_min(s)         ·       H_ap(s)

[Pole-zero plots: H_min(s) has a pole at −1 and a zero at −10; H_ap(s) has a pole at −10 and a zero at +10.]

[Figure: |H(jω)| = |H_min(jω)| (0 dB at low frequencies, −20 dB above ω ≈ 10), but ∡H(jω) falls toward −180° while ∡H_min(jω) stays above −90° and returns toward 0°.]
Example:
Hd (s) Hc (s)
distorting system, compensating
e.g.,comm.channel system

1
If Hd (s) is minimum phase, we can simply choose Hc (s) = Hd (s)
.
If Hd (s) is nonminimum phase, Hc (s) = H 1(s) is unstable. To avoid
d
instability, decompose: Hd (s) = Hd,min (s) Hd,ap (s) and select:

1
Hc (s) = =⇒ Hd (s) Hc (s) = Hd,ap (s)
Hd,min (s) | {z }
magnitude distortion eliminated
ee120 - fall’15 - lecture 16 notes 3

Transfer Functions of Interconnected LTI Systems


Section 9.8 in Oppenheim & Willsky

h1 (t)
h ( t ) = h1 ( t ) + h2 ( t )
x(t) y(t)
h2 (t) H (s) = H1 (s) + H2 (s)

h ( t ) = h1 ( t ) ∗ h2 ( t )
x(t) h1 (t) h2 (t) y(t)
H (s) = H1 (s) H2 (s)

e(t) E(s) = X (s) − H2 (s)Y (s)


x(t) h1 (t) y(t)

Y (s) = H1 (s) E(s)
h2 (t) = H1 (s) X (s) − H1 (s) H2 (s)Y (s)

(1 + H1 (s) H2 (s))Y (s) = H1 (s) X (s)


Y (s) H1 (s)
= H (s) =
X (s) 1 + H1 (s) H2 (s)
Example: Feedback Control

e(t) x(t)
r(t) Hc (s) Hp (s) y(t)

r (t): reference signal to be tracked by y(t)


Hc (s): controller, H p (s): system to be controlled - ”plant”

Hc (s) H p (s)
H (s) =
1 + Hc (s) H p (s)

Example:

x(t): force y(t): speed dy


M M = x (t) −→ MsY (s) = X (s)
dt
1
H p (s) =
Ms
Take constant gain controller: Hc (s) = K

r(t)

K
Ms 1 M y(t)
H (s) = K
= τ=
1+ Ms
τs + 1 K
y(t) for smaller τ t
(larger K)
ee120 - fall’15 - lecture 16 notes 4

The Unilateral Laplace Transform


Section 9.9 in Oppenheim & Willsky
Z ∞
X (s) = x (t)e−st dt (3)
0−

Identical to the bilateral Laplace transform if x (t) = 0 for t < 0.

1
Example: x ( t ) = e − a ( t +1) u ( t + 1 )
e−a
−1 t
es
X (s) = Re{s} > − a
s+a
e− a
X (s) = Re{s} > − a
s+a

Properties of the unilateral Laplace transform

Most properties of the bilateral Laplace transform also hold for the
unilateral Laplace transform.
Exceptions:

Convolution:

x1 (t) ∗ x2 (t) ←→ X1 (s)X2 (s) if x1 (t) = x2 (t) = 0 for all t < 0

This follows from the convolution property of the bilateral Laplace


transform which coincides with the unilateral transform because
x1 (t) = x2 (t) = 0, t < 0.
Differentiation in Time:

dx (t)
←→ sX (s) − x (0− )
dt

Repeated application gives:

d2 x ( t ) d dx (t)
 
 dx
2
= ←→ s sX (s) − x (0− ) − (0− )
dt dt dt dt
dx
= s2 X (s) − sx (0− ) − (0− )
dt
d3 x ( t ) d d2 x ( t ) d2 x −
   
2 − dx −
= ←→ s s X ( s ) − sx ( 0 ) − ( 0 ) − (0 )
dt3 dt dt2 dt dt2
dx − d2 x
= s 3 X ( s ) − s 2 x (0− ) − s (0 ) − 2 (0− )
dt dt
ee120 - fall’15 - lecture 16 notes 5

Solving differential equations with the unilateral Laplace transform

Example:
d²y(t)/dt² + 3 dy/dt + 2y(t) = e^t,   t ≥ 0                              (4)

Initial conditions: y(0⁻) = a,   dy/dt(0⁻) = b.

( s²Y(s) − as − b ) + 3( sY(s) − a ) + 2Y(s) = 1/(s − 1)

(s² + 3s + 2) Y(s) = as + b + 3a + 1/(s − 1) = ( as² + (b + 2a)s + (1 − b − 3a) ) / (s − 1)

Y(s) = ( as² + (b + 2a)s + (1 − b − 3a) ) / ( (s + 1)(s + 2)(s − 1) )

Partial fraction expansion:

Y(s) = A1/(s + 1) + A2/(s + 2) + B/(s − 1)
     = ( (A1 + A2 + B)s² + (A1 + 3B)s + (2B − 2A1 − A2) ) / ( (s + 1)(s + 2)(s − 1) )

Match coefficients:

A1 + A2 + B = a,   A1 + 3B = b + 2a,   2B − 2A1 − A2 = 1 − b − 3a
=⇒  B = 1/6,   A1 = −1/2 + 2a + b,   A2 = 1/3 − a − b

Then,

y(t) = (1/6) e^t + ( −1/2 + 2a + b ) e^{−t} + ( 1/3 − a − b ) e^{−2t},   t ≥ 0.

Compare this to the standard method for solving linear constant


coefficient differential equations:
The first term in y(t) above is the particular solution. If we substitute
y p (t) = 16 et in (4):

d2 y p ( t ) dy p
2
+3 + 2y p (t) = et .
dt dt
The second and third terms constitute the homogeneous solution. If
we substitute yh (t) = A1 e−t + A2 e−2t :

d2 y h ( t ) dy
+ 3 h + 2yh (t) = 0.
dt2 dt
Thus, y(t) = y p (t) + yh (t) and A1 and A2 are selected to satisfy the
initial conditions.
EE120 - Fall’15 - Lecture 17 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
28 October 2015

The z-Transform
Chapter 10 in Oppenheim & Willsky

X (z) , ∑ x [ n ] z−n (1)
n=−∞


X(z)|_{z=e^{jω}} = X(e^{jω}) :  the DTFT                                 (2)

The DTFT converges if the ROC for the z-transform includes the unit circle z = e^{jω}.

[Figure: the unit circle z = e^{jω} in the z-plane.]

Example 1: x[n] = a^n u[n]

X(z) = ∑_{n=0}^{∞} a^n z^{−n} = ∑_{n=0}^{∞} (az^{−1})^n = 1/(1 − az^{−1})   if |az^{−1}| < 1,  i.e.  ROC: |z| > |a|

DTFT: X(e^{jω}) = 1/(1 − a e^{−jω})   if |a| < 1

[Figure: pole-zero plot with the unit circle and the ROC |z| > |a|.]

Poles and zeros: X(z) = z/(z − a) → pole at z = a, zero at z = 0.

[Margin material, "z-Transform developed in the 1950s": Yakov Tsypkin (Moscow), John Ragazzini and Lotfi Zadeh (Columbia), Eli Jury. Figure 1: Berkeley EECS emeritus professor Lotfi Zadeh (above) and former professor Eliahu Jury (below) were among those who developed the theory of z transforms in the 1950s. Research in sampled systems was in part motivated by radar, which came to prominence during World War II.

Email from Prof. Zadeh: "Dear Murat, [...] I am pleased to learn that you are talking about z-transformation in your lecture on Monday. [...] My paper with Ragazzini was published in 1952. Ragazzini was my Ph.D. supervisor. [...] In the period before the paper was published there was a growing recognition that sampling a signal was an important part of signal processing. Sampling played a key role in Shannon's work. At that time, it occurred to me that it was natural to look at sampling as a convolution of a signal with a sequence of delta-functions, transforming the signal into what was called the z-transform. I used the letter z not because of my name but because in the mathematical literature on difference equations, z was the symbol that was used. [...] Sincerely, Lotfi"]
ee120 - fall’15 - lecture 17 notes 2

Example 2: x[n] = −a^n u[−n − 1]

X(z) = ∑_{n=−∞}^{−1} −a^n z^{−n} = ∑_{n=1}^{∞} −a^{−n} z^n = 1 − ∑_{n=0}^{∞} (a^{−1}z)^n
     = 1 − 1/(1 − a^{−1}z)   if |a^{−1}z| < 1
     = −a^{−1}z/(1 − a^{−1}z) = 1/(1 − az^{−1}),   ROC: |z| < |a|.

[Figure: pole-zero plot with ROC the disk |z| < |a|.]

The DTFT converges if |a| > 1.

Example 3: x[n] = −(1/2)^n u[−n − 1] + (−1/3)^n u[n]

X(z) = 1/(1 − (1/2)z^{−1}) + 1/(1 + (1/3)z^{−1}) = z/(z − 1/2) + z/(z + 1/3)
     = 2z(z − 1/12) / ( (z − 1/2)(z + 1/3) )

with the first term valid for |z| < 1/2 and the second for |z| > 1/3, so ROC: 1/3 < |z| < 1/2.

[Figure: poles at −1/3 and 1/2; the ROC is the ring between them.]

Example 4: x[n] = (1/2)^n u[n] − (−1/3)^n u[−n − 1]

ROC = {z : |z| > 1/2} ∩ {z : |z| < 1/3} = ∅

Example 5: x[n] = a^n, a ≠ 0.
x[n] = a^n u[n] + a^n u[−n − 1]
ROC = {z : |z| > |a|} ∩ {z : |z| < |a|} = ∅

Properties of the ROC


Section 10.2 in Oppenheim & Willsky
1) A ring or disk in the z-plane, centered at the origin.
2) ROC does not contain any poles.
3) For finite duration sequences, ROC is the entire z-plane, except
possibly z = 0 or z = ∞.
ee120 - fall’15 - lecture 17 notes 3

Example 6:
a)
z2 + z + 1
1 X ( z ) = 1 + z −1 + z −2 =
z2
ROC excludes z = 0 because of the
n pole at z = 0.
If x [n] 6= 0 for some n > 0, ROC excludes z = 0.
b)
1 X ( z ) = z2 + z + 1

n ROC excludes z = ∞.

If x [n] 6= 0 for some n < 0, ROC excludes z = ∞.


c) x [n] = δ[n] → X (z) = 1 and ROC is the entire z-plane, including
z = 0 and z = ∞.

4) If x [n] is right-sided (x [n] ≡ 0 ∀n < N1 , for some N1 ) and if |z| = r0


is in the ROC, then all finite values of z for which |z| > r0 are also in
the ROC. (Ex 1)
z = ∞ included if N1 = 0. (Ex 6)

5) If x [n] is left-sided (x [n] ≡ 0 ∀n > N2 , for some N2 ) and if |z| = r0


is in the ROC, then 0 < |z| < r0 is also in the ROC. (Ex 2)
z = 0 included if N2 = 0. (Ex 6)

6) If x [n] is two-sided and if |z| = r0 is in the ROC, then the ROC is a


ring that includes |z| = r0 . (Ex 3)

The following hold when X (z) is rational:


7) ROC is bounded by poles or extends to infinity.

8) If x [n] is right-sided, ROC extends from the outermost pole to ∞.


z = ∞ included if N1 = 0.
If x [n] is left-sided, ROC extends from the innermost nonzero pole to 0.
z = 0 included if N2 = 0.
For a two-sided sequence, the ROC is a ring:
Inner bound: Pole with largest magnitude that contributes to the
right side.
Outer bound: Pole with smallest magnitude that contributes to the
left side.
(Ex 3)
ee120 - fall’15 - lecture 17 notes 4

Inverse z-Transform by Partial Fraction Expansion

Example:
X(z) = 1/( (1 − (1/4)z^{−1})(1 − (1/2)z^{−1}) ) = A1/(1 − (1/4)z^{−1}) + A2/(1 − (1/2)z^{−1})

A1 = (1 − (1/4)z^{−1}) X(z) |_{z=1/4} = −1
A2 = (1 − (1/2)z^{−1}) X(z) |_{z=1/2} = 2

Three possible ROCs (poles at 1/4 and 1/2):

1) |z| > 1/2:          x[n] = ( 2(1/2)^n − (1/4)^n ) u[n]
2) |z| < 1/4:          x[n] = −( 2(1/2)^n − (1/4)^n ) u[−n − 1]
3) 1/4 < |z| < 1/2:    x[n] = −2(1/2)^n u[−n − 1] − (1/4)^n u[n]

How to perform a PFE in general?

b0 + b1 z−1 + . . . + b M z− M
X (z) = , a0 6 = 0
a 0 + a 1 z −1 + . . . + a N z − N
Suppose unrepeated poles: d1 , d2 , ..., d N .
If M < N,
N
Ak
X (z) = ∑ 1 − d k z −1
k =1
If M ≥ N,
M− N N
Ak
X (z) = ∑ Br z−r + ∑ 1 − d k z −1
r =0 k =1

M− N N
x [n] = ∑ Br δ[n − r ] + ∑ Ak dnk u[n]
r =0 k =1
| {z }
if right-sided

Example:

1 + 2z−1 + z−2
X (z) = M=N=2
1 − 32 z−1 + 12 z−2
A1 A2
= B0 + +
1 − 12 z−1 1 − z −1
ee120 - fall’15 - lecture 17 notes 5

Matching coefficients: A1 = −9, A2 = 8, B0 = 2.


 n
1
x [n] = 2δ[n] − 9 u[n] + 8u[n].
2

Differentiation (in z-domain) Property:


Z
←→ X (z)
x [n] ROC = R (3)
Z dX (z)
nx [n] ←→ −z ROC = R (4)
dz

Proof: X (z) = ∑ x [ z ] z−n
n=−∞
∞ ∞
dX (z)
dz
= ∑ −nx [n]z−(n+1) = −z−1 ∑ nx [n]z−n
n=−∞ n=−∞

dX (z)
∑ nx [n]z−n = −z
dz
n=−∞

Example:

Z 1
an u[n] ←→
1 − az−1
az−2 az−1
 
Z d 1
nan u[n] ←→ −z −
=z −
=
dz 1 − az 1 (1 − az )1 2 (1 − az−1 )2

Back to Partial Fraction Expansions: If dk is a pole of multiplicity


two, include two terms:
1 d k z −1
Ak1 + A k2
1 − d k z −1 (1 − d k z −1 )2
l
( Ak1 + Ak2 n) dnk u[n]

− 12 + z−1 M=1 1
Example: X (z) = |z| >
(1 − 21 z−1 )2 N=2 2
1 −1
1 2z
= A11 1 −1
+ A12  2
1− 2z 1 − 12 z−1
A11 + 21 ( A12 − A11 ) z−1
=  2
1 − 12 z−1
)
A11 = − 12 3
1 A12 =
2 ( A 12 − A 11 ) = 1 2
   n
1 3 1
x [n] = − + n u[n]
2 2 2
ee120 - fall’15 - lecture 17 notes 6

Table 1: z transforms of several functions.

Signal                  Transform                                                   ROC
δ[n]                    1                                                           all z
δ[n − m]                z^{−m}                                                      all z except z = 0 if m > 0, z = ∞ if m < 0
u[n]                    1/(1 − z^{−1})                                              |z| > 1
−u[−n − 1]              1/(1 − z^{−1})                                              |z| < 1
a^n u[n]                1/(1 − az^{−1})                                             |z| > |a|
−a^n u[−n − 1]          1/(1 − az^{−1})                                             |z| < |a|
n a^n u[n]              az^{−1}/(1 − az^{−1})²                                      |z| > |a|
−n a^n u[−n − 1]        az^{−1}/(1 − az^{−1})²                                      |z| < |a|
cos(ω0 n) u[n]          (1 − cos(ω0)z^{−1}) / (1 − 2cos(ω0)z^{−1} + z^{−2})         |z| > 1
sin(ω0 n) u[n]          sin(ω0)z^{−1} / (1 − 2cos(ω0)z^{−1} + z^{−2})               |z| > 1
r^n cos(ω0 n) u[n]      (1 − r cos(ω0)z^{−1}) / (1 − 2r cos(ω0)z^{−1} + r²z^{−2})   |z| > r
r^n sin(ω0 n) u[n]      r sin(ω0)z^{−1} / (1 − 2r cos(ω0)z^{−1} + r²z^{−2})         |z| > r
EE120 - Fall’15 - Lecture 18 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
2 November 2015

Properties of the z-Transform

1) Linearity: ax1 [n] + bx2 [n] ←→ aX1 (z) + bX2 (z)


ROC contains R1 ∩ R2 where Ri is the ROC of xi [n], i = 1, 2.
2) Time Shifting: x [n − n0 ] ←→ z−n0 X (z)
ROC unchanged, except for possible addition/deletion of 0 and ∞.

x[n]
1 X ( z ) = z + 1 + z −1

ROC excludes 0 and ∞.
−1 1 n
x[n − 1]
X ( z ) = 1 + z −1 + z −2 =
1
↔ z −1 X ( z )
ROC now includes |z| = ∞.
0 1 2 n
Example: Find the inverse z-Transform (right-sided) of

1 1
X (z) =  =z
1 − 12 z−1

1 −1
z −1
1− 2z
  n +1
1
x [n] = u [ n + 1]
2

3) Scaling in the z-domain:


 
Z z
z0n x [n] ←→ X ROC = |z0 | · R
z0

where R is the ROC of x [n]. Compare to:


 
DTFT
e jω0 n x [n] ←→ X e j(ω −ω0 )
L
e s0 t x ( t ) ←→ X (s − s0 ) ROC = R + Re{s0 }

4) Time Reversal:
 
Z 1
x [−n] ←→ X ROC = 1/R
z
ee120 - fall’15 - lecture 18 notes 2

Example:
 n
1 1 1
x [n] = u[n] ↔ X (z) = |z| >
2 1 − 12 z−1 2
−2z−1
 
1 1
x [−n] = 2n u[−n] ↔ X = = |z| < 2
z 1
1 − 2z 1 − 2z−1

5) Convolution Property:

Z
x1 [n] ∗ x2 [n] ←→ X1 (z) X2 (z) ROC contains R1 ∩ R2

6) Differentiation in z-domain:

Z dX (z)
nx [n] ←→ −z ROC unchanged
dz
Example:
Z az−1
nan u[n] ←→ |z| > | a|
(1 − az−1 )2

7) Initial Value Theorem: If x [n] = 0 for n < 0, then

x [0] = lim X (z) (1)


z→∞

Proof: X (z) = x [0] + x [1]z−1 + x [2]z−2 + ...


| {z }
→0 as z→∞

DT LTI Systems
Section 10.7 in Oppenheim & Willsky

x [n] → h[n] → y[n] = h[n] ∗ x [n]

Y (z) = H (z) X (z)




| {z }
transfer function → H (z) = h [ n ] z−n
n=−∞

Causality: h[n] = 0 ∀n < 0. Stability: ∑∞


n=−∞ | h [ n ]| < ∞

Determining causality from H (z)

Recall: For a right-sided x [n] with rational X (z), ROC extends from
the outermost pole to infinity (infinity included if x [n] = 0 ∀n < 0).

A DT LTI system with rational transfer function H (z) is causal if


and only if the ROC extends from the outermost pole and
includes |z| = ∞.
ee120 - fall’15 - lecture 18 notes 3

Examples:

H(z) = (z³ − 2z² + z) / (z² + (1/4)z + 1/8)

ROC can't include |z| = ∞ because the numerator has higher order than
the denominator: 3 > 2  =⇒  not causal.

H(z) = 1/( (1 − (1/2)z^{−1})(1 − 2z^{−1}) ),   ROC: |z| > 2  =⇒  causal.

Stability


∑ |h[n]| < ∞ means that the ROC includes the unit circle.
n=−∞

An LTI system is stable if and only if the ROC of transfer function


H (z) includes the unit circle.
Sharper condition if the system is causal and H (z) rational:

A causal LTI system with rational transfer function H (z) is stable


if and only if all poles of H (z) lie inside the unit circle.

s-plane z-plane
Im Im

Re 1 Re

stability region

From Difference Equations to Transfer Functions:

N M
∑ ak y[n − k ] = ∑ bk x [ n − k ]
k =0 k =0
N M
∑ ak z−k Y (z ) = ∑ bk z − k X ( z )
k =0 k =0
Y (z) ∑kM=0 bk z−k
H (z) = =
X (z) ∑kN=0 ak z−k

Example:

1 1 1 + 13 z−1
y[n] − y[n − 1] = x [n] + x [n − 1] =⇒ H (z) =
2 3 1 − 12 z−1
ee120 - fall’15 - lecture 18 notes 4

Geometric Evaluation of the Frequency Response H(e^{jω}):

H(z) = (b0 + b1 z^{−1} + . . . + b_M z^{−M}) / (a0 + a1 z^{−1} + . . . + a_N z^{−N})
     = (b0/a0) · ∏_{k=1}^{M} (1 − c_k z^{−1}) / ∏_{k=1}^{N} (1 − d_k z^{−1}),   c_k: zeros,  d_k: poles

|H(e^{jω})| = |b0/a0| · ∏_{k=1}^{M} |1 − c_k e^{−jω}| / ∏_{k=1}^{N} |1 − d_k e^{−jω}|

and |1 − d_k e^{−jω}| = |e^{jω} − d_k| is the length of the vector from the pole d_k
to the point e^{jω} on the unit circle (similarly for the zeros).

Example:

H(z) = 0.05 (1 + z^{−1}) / (1 − 0.9z^{−1})   (the factor 0.05 makes H(1) = 1, the dc gain)

|H(e^{jω})| = 0.05 |V2|/|V1|, where V2 is the vector from the zero at −1 to e^{jω}
and V1 the vector from the pole at 0.9 to e^{jω}.

[Figure: |H(e^{jω})| equals 1 at ω = 0 and falls toward 0 at ω = π (lowpass).]

Simple Filters:

Low-Pass:

H(z) = ((1 − α)/2) · (1 + z^{−1})/(1 − αz^{−1}),   |α| < 1 for stability

[Figure: |H(e^{jω})| falls from 1 at ω = 0 to 1/√2 at the 3 dB cutoff ωc.]

The 3 dB cutoff frequency ωc is related to α by:

α = (1 − sin ωc)/cos ωc
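A minimal sketch of this design for a chosen cutoff (freqz assumes the Signal Processing Toolbox; the value of ωc is arbitrary):

wc = 0.2*pi;
alpha = (1 - sin(wc))/cos(wc);
b = (1 - alpha)/2*[1 1];             % numerator  (1-alpha)/2 * (1 + z^-1)
a = [1 -alpha];                      % denominator 1 - alpha*z^-1
[H, w] = freqz(b, a, 512);
plot(w, abs(H))                      % |H| = 1 at w = 0 and 1/sqrt(2) at w = wc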
ee120 - fall’15 - lecture 18 notes 5

High-Pass:

1 + α 1 − z −1
H (z) = , H (1) = 0 and H (−1) = H (e jπ ) = 1
2 1 − αz−1

|H(jω)|
11
√ 1 − sin(ωc )
2 α=
cos(ωc )
ωc π ω
Band-Stop (Notch):

1+α 1 − 2βz−1 + z−2


H (z) = | β| < 1 |α| < 1
2 1 − β(1 + α)z−1 + αz−2
Note: 1 − 2βz−1 + z−2 = (1 − e jω0 z−1 )(1 − e− jω0 z−1 ) where cosω0 = β

Im
zeros on the unit circle: e∓ jω0
poles approach zeros as α → 1.
ω0
Re 1+α 2 ± 2β
H (∓1) = =1
2 (1 + α)(1 ± β)

|H(jω)| larger α
1

ω0 = π ω
= cos−1 (β)

Band-Pass:
1−α 1 − z −2
H (z) = , |α| < 1 | β| < 1
2 1 − β(1 + α)z−1 + αz−2

Im |H(jω)|
1
larger α
Re
ω0 = π ω
−1
= cos (β)
EE120 - Fall’15 - Lecture 19 Notes1 1
Licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike
Murat Arcak 4.0 International License.
4 November 2015

Geometric Evaluation of the Frequency Response (Continued)

Example: ( M + 1)-point moving average system

y[n] = (1/(M+1)) ( x[n] + x[n − 1] + . . . + x[n − M] )

[Figure: h[n] = 1/(M+1) for 0 ≤ n ≤ M.]

H(z) = (1/(M+1)) ( 1 + z^{−1} + . . . + z^{−M} ) = (1/(M+1)) ( z^M + z^{M−1} + . . . + 1 ) / z^M

All poles at z = 0. (Note this is true for any FIR system.)

Zeros: roots of z^M + z^{M−1} + . . . + 1.
From the identity z^{M+1} − 1 = (z − 1)(z^M + z^{M−1} + . . . + 1), the roots
of z^M + z^{M−1} + . . . + 1 are the roots of z^{M+1} − 1 except for z = 1:

z^{M+1} = 1   =⇒   z = e^{j 2πk/(M+1)},   k = 1, 2, . . . , M   (k = 0, i.e. z = 1, excluded)

[Figure: for M = 7, the M zeros are spaced 2π/(M+1) apart on the unit circle, with M poles at the origin; |H(e^{jω})| equals 1 at ω = 0 and has its first null at ω = 2π/(M+1).]
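The pole-zero pattern and response are easy to check (a minimal sketch; zplane/freqz assume the Signal Processing Toolbox):

M = 7;
h = ones(1, M+1)/(M+1);
zplane(h, 1)                    % M zeros at exp(j*2*pi*k/(M+1)), k = 1..M
[H, w] = freqz(h, 1, 1024);
plot(w, abs(H))                 % nulls at multiples of 2*pi/(M+1)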
ee120 - fall’15 - lecture 19 notes 2

Example: Finding the phase


Im

ejω 1 − a 1 + z −1 1−az+1
H (z) = =
V2 2 1 − az−1 2 z−a
V1 jω
ϕ1 1−ae +1 1 − a V2
ϕ2 H (e jω ) = =
−1 a Re 2 e jω − a 2 V1
1 − a |V2 |
| H (e jω )| = ] H (e jω ) = φ2 − φ1
2 |V1 |

|H(ejω )|
1 ∡H(ejω )

−90o (ϕ2 → 90o , ϕ1 → 180o )


π ω

All Pass Systems

Continuous-time: Discrete-time:
a−s z −1 − a z − 1/a
H (s) = H (z) = = −a , a real
s+a 1 − az − 1 z−a
Im
Im

ejω
V1 V2
V1 V2
ω
Re a 1 Re
a
|V1 | = |V2 | ∀ω

|V1 |2 = (cosω − a)2 + sin2 ω = 1 − 2acosω + a2


 2
1 1 2
|V2 |2 = − cosω + sin2 ω = 2 − cosω + 1
a a a
|V |
then, a2 |V2 |2 = |V1 |2 ⇒ | a| |V2 | = 1.
1

|e jω − 1/a| |V |
| H (e jω )| = | a| = | a| 2 = 1 ∀ω
|e jω − a| |V1 |
ee120 - fall’15 - lecture 19 notes 3

General form of a DT all pass system:

N
z −1 − a k
 
H (z) = ∏ 1 − a k z −1
k =1

Each pole ak accompanied by a zero at 1/ak .


For a stable and causal all pass system, all zeros are outside the unit
circle.

Minimum Phase Systems

Definition: A stable and causal DT LTI system with transfer function


H (z) whose zeros are also within the unit circle.
As in continuous-time, a nonminimum phase transfer function H (z)
can be decomposed as:

H (z) = Hmin (z) Hap (z)

where Hmin (z) is minimum phase and Hap (z) is all pass.
Construct Hap (z) such that it encompasses all zeros of H (z) outside
the unit circle. Obtain Hmin (z) from H (z)/Hap (z).
Example:

1 − 3z−1
H (z) = zero @ 3 outside the unit circle
1 − 21 z−1

z−1 − 1/3 H (z) 1 − 3z−1 1 − 31 z−1 1 − 13 z−1


Hap (z) = Hmin (z) = = = − 3
1 − 13 z−1 Hap (z) 1 − 12 z−1 z−1 − 1/3 1 − 12 z−1

Im
Hmin

1/2 3
Re

Hap

Pole/zero pair added at 1/3.


Pole at 1/3 paired with existing zero at 3 to form all pass system.
Zero at 1/3 paired with existing pole at 1/2 to form minimum phase
system.
ee120 - fall’15 - lecture 19 notes 4

Example:

s[n] Hd (z) Hc (z) sc [n]


sd [n]
distorting system, compensating
e.g.,comm.channel system

1
If Hd (z) is minimum phase, we can design Hc (z) = Hd (z)
(stable).

If not, decompose: Hd (z) = Hd,min (z) Hd,ap (z)


1
and design: Hc (z) = =⇒ Hd (z) Hc (z) = Hd,ap (z)
Hd,min (z)

Magnitude distortion corrected.

The Unilateral z-Transform


Section 10.9 in Oppenheim & Willsky

X (z) = ∑ x [ n ] z − n = x [ 0 ] + x [ 1 ] z −1 + x [ 2 ] z −2 + . . . (1)
n=0

Properties of the Unilateral z-Transform

Most properties of the bilateral z-transform hold for the unilateral


transform.
Exceptions:

Convolution:
UZ
x1 [n] ∗ x2 [n] ←→ X1 (z)X2 (z) if x1 [n] = x2 [n] = 0 ∀n < 0.

Time Delay:
UZ
x [n − 1] ←→ z−1 X (z) + x [−1]
Z
Contrast to: x [n − 1] ←→ z−1 X (z)
Proof:

∑ x[n − 1]z−n = x[−1] + |x[0]z−1 + x[1]z−{z2 + x[2]z−3 + . .}.
n =0
−1
= z −1 ( x [ 0 ] + x [ 1 ] z + x [ 2 ] z −2 + . . . )
| {z }
=X (z)
ee120 - fall’15 - lecture 19 notes 5

Applying repeatedly:

UZ
x [ n − 2] ←→ z−1 (z−1 X (z) + x [−1]) + x [−2]
= z−2 X (z) + x [−1]z−1 + x [−2]
UZ
x [ n − 3] ←→ z−1 (z−2 X (z) + x [−1]z−1 + x [−2]) + x [−3]
= z−3 X (z) + x [−1]z−2 + x [−2]z−1 + x [−3]

Solving Difference Equations using the Unilateral z-Transform

Example: y[n] − 0.6y[n − 1] = (0.5)^n u[n]

Take unilateral z-transforms on both sides:

Y(z) − 0.6( z^{−1}Y(z) + y[−1] ) = 1/(1 − 0.5z^{−1})

(1 − 0.6z^{−1}) Y(z) = 0.6y[−1] + 1/(1 − 0.5z^{−1}) = ( 1 + 0.6y[−1] − 0.3y[−1]z^{−1} ) / (1 − 0.5z^{−1})

Y(z) = ( (1 + 0.6y[−1]) − 0.3y[−1]z^{−1} ) / ( (1 − 0.6z^{−1})(1 − 0.5z^{−1}) )
     = A/(1 − 0.6z^{−1}) + B/(1 − 0.5z^{−1})

A + B = 1 + 0.6y[−1],   0.5A + 0.6B = 0.3y[−1]   =⇒   B = −5,   A = 6 + 0.6y[−1]

y[n] = ( 6 + 0.6y[−1] )(0.6)^n u[n] − 5(0.5)^n u[n]

Compare to the time domain method:


1) Homogenous solution: A(0.6)n
2) Particular solution: y p [n] = B(0.5)n
Substitute in difference equation to find B:

B(0.5)n − 0.6B(0.5)n−1 = (0.5)n


B − 0.6(0.5)−1 B = 1 =⇒ −0.2B = 1 =⇒ B = −5

3) The complete solution is y[n] = A(0.6)n − 5(0.5)n and


A is determined from the initial condition:

y[−1] = A(0.6)−1 − 5(0.5)−1


A = (0.6)(y[−1] + 10) = 6 + 0.6y[−1]
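The closed-form solution can be checked against a recursive simulation (a minimal sketch; filter is built in, filtic is the Signal Processing Toolbox helper that converts the initial condition y[−1] into filter's internal state, and ym1 is an arbitrary example value):

ym1 = 2;                                   % y[-1]
b = 1;  a = [1 -0.6];
n = 0:20;  x = 0.5.^n;
zi = filtic(b, a, ym1);                    % initial state from y[-1]
y  = filter(b, a, x, zi);
y_formula = (6 + 0.6*ym1)*0.6.^n - 5*0.5.^n;
max(abs(y - y_formula))                    % ~ 0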
ee120 - fall’15 - lecture 19 notes 6

Interconnections of DT LTI Systems

x[n] H1 (z) H2 (z) y[n] Y (z) = H2 (z) H1 (z) X (z)


| {z }
H (z)

H1 (z)
x[n] y[n] Y (z) = ( H1 (z) + H2 (z)) X (z)
H2 (z)

x[n] H1 (z) y[n] Y (z) H1 (z)


− =
X (z) 1 + H1 (z) H2 (z)
H2 (z)

Example:
b0 b0
+ +
x [n] y[n] x [n] y[n]

z −1 z −1
b1 b1
+ ⇒ +
= b1 + b2 z−1
z −1 z −1
b2 b2

⇒ H (z) = b0 + z−1 (b1 + b2 z−1 ) = b0 + b1 z−1 + b2 z−2

Example:

x [n]
+ y[n] x [n]
+ y[n]

z −1 z −1
− a1 − a1
+ ⇒ +
z −1 = − a 1 − a 2 z −1
z −1
(parallel)
− a2 − a2

Then, from the feedback interconnection formula:


1 1
H (z) = =
1 − z−1 (− a 1 − a2 z −1 ) 1 + a1 z −1 + a 2 z −2
ee120 - fall’15 - lecture 19 notes 7

Recall the block diagram in Lecture 2 for:


y[n] = − ∑_{k=1}^{N} a_k y[n − k] + ∑_{k=0}^{M} b_k x[n − k]

H(z) = ( b0 + b1 z^{-1} + . . . + b_M z^{-M} ) / ( 1 + a1 z^{-1} + . . . + a_N z^{-N} )

[Block diagram: a feedforward section with delay elements D and taps b0, b1, b2, . . . (blue) in series with a feedback section with delay elements D and taps −a1, −a2, . . . (orange).]
The delay element D corresponds to z−1 . The blue block implements


the numerator:

H1 (z) = b0 + b1 z−1 + . . . + b M z− M

and the orange block implements the denominator:


H2(z) = 1 / ( 1 + a1 z^{-1} + . . . + a_N z^{-N} )
as in the examples above. The series interconnection of the two gives:

H (z) = H1 (z) H2 (z).

Changing the order of H1 (z) and H2 (z) does not change the product
and allows us to use fewer delay elements, as was done in Lecture 2.
(See figure on next page.)

Obtaining Transfer Functions from Block Diagrams

In the examples above a repeated application of the series, parallel,


and feedback interconnection formulas gave the transfer function.
However, this is not an efficient approach in general and may not be
possible for every interconnection.

We outline an alternative procedure and illustrate it on the block


diagram below.
Step 1: Label the output of each delay element as a function of time:
w1 [n], w2 [n], ... These are called "state variables."

[Block diagram: two delay elements D whose outputs are labeled w1[n] and w2[n]; the input to the first delay is w1[n + 1]; feedback taps −a1, −a2 and feedforward taps b0, b1, b2 form y[n].]

Step 2: Note that the input signal to the ith delay element is wi [n + 1]
so that its output is wi [n]. By inspecting the interconnection, ex-
press these inputs in terms of the input x [n] and state variables
w1 [n], w2 [n], ... In the block diagram above:

w1 [ n + 1 ] = − a 1 w1 [ n ] − a 2 w2 [ n ] + x [ n ]
w2 [ n + 1 ] = w1 [ n ]

Step 3: Take the z transform of these equations and solve for W1 (z),
W2 (z), ... in terms of X (z).

zW1 (z) = − a1 W1 (z) − a2 W2 (z) + X (z)


zW2 (z) = W1 (z)

Substituting W2(z) = W1(z)/z in the first equation and rearranging:

( z + a1 + a2/z ) W1(z) = X(z)   ⇒   W1(z) = z / ( z² + a1 z + a2 ) · X(z)

W2(z) = 1 / ( z² + a1 z + a2 ) · X(z)

Step 4: Express Y (z) in terms of X (z) and W1 (z), W2 (z), ..., again
using the interconnection. Then substitute W1 (z), W2 (z),... from the
previous step so Y (z) depends on X (z) only.

Y(z) = b0 z W1(z) + b1 W1(z) + b2 W2(z) = ( b0 z² + b1 z + b2 ) / ( z² + a1 z + a2 ) · X(z)

H(z) = ( b0 z² + b1 z + b2 ) / ( z² + a1 z + a2 ) = ( b0 + b1 z^{-1} + b2 z^{-2} ) / ( 1 + a1 z^{-1} + a2 z^{-2} ).
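The four steps can also be verified numerically: simulating the state equations and forming y[n] = b0 w1[n + 1] + b1 w1[n] + b2 w2[n] should reproduce scipy.signal.lfilter applied to the same coefficients. A sketch with arbitrary example coefficients:

import numpy as np
from scipy.signal import lfilter

b0, b1, b2 = 1.0, 0.5, 0.25      # example numerator coefficients (arbitrary)
a1, a2 = -0.9, 0.5               # example denominator coefficients (arbitrary, stable)

rng = np.random.default_rng(0)
x = rng.standard_normal(100)     # arbitrary test input

w1 = w2 = 0.0                    # both delay elements initially at rest
y = np.zeros_like(x)
for n in range(len(x)):
    w1_next = -a1 * w1 - a2 * w2 + x[n]     # input to the first delay element
    y[n] = b0 * w1_next + b1 * w1 + b2 * w2
    w1, w2 = w1_next, w1                    # w2[n+1] = w1[n]

print(np.allclose(y, lfilter([b0, b1, b2], [1, a1, a2], x)))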

Exercise: Show that the block diagram below yields the same trans-
fer function
H(z) = ( b0 + b1 z^{-1} + b2 z^{-2} ) / ( 1 + a1 z^{-1} + a2 z^{-2} ).

[Block diagram: an alternative realization with the same taps b0, b1, b2 and −a1, −a2 sharing delay elements.]
EE120 - Fall'15 - Lecture 20 Notes
Murat Arcak
9 November 2015

Feedback Control
Chapter 11 in Oppenheim & Willsky

[Block diagram: r(t) → (+/−) → e(t) → Hc(s) → x(t) → Hp(s) → y(t), with y(t) fed back to the summing junction.]

r (t) : reference signal to be tracked by y(t)


Hc (s) : controller; H p (s) : system to be controlled (”plant”)
Closed-loop transfer function:

H(s) = Y(s)/R(s) = Hc(s) Hp(s) / ( 1 + Hc(s) Hp(s) )

Constant-gain control: Hc (s) = K

H(s) = K Hp(s) / ( 1 + K Hp(s) )

Closed-loop poles: roots of 1 + KH p (s) = 0

Example 1 (Speed Control)

[Mass M driven by force x(t); output y(t): speed.]

Hp(s) = 1/(Ms)   −→   open-loop pole: s = 0

Closed-loop pole: 1 + K/(Ms) = 0   =⇒   s = −K/M

[Root locus: a single branch starts at s = 0 (K = 0) and moves left along the negative real axis as K → ∞. Step response: y(t) approaches the reference r(t) faster for larger K.]

Example 2 (Position Control)    y(t): position

M d²y/dt² + b dy/dt = x(t)        Hp(s) = 1/( Ms² + bs ) = 1/( s(Ms + b) )

Open-loop poles: s = 0, −b/M

Closed-loop poles:

1 + K/( s(Ms + b) ) = 0   =⇒   Ms² + bs + K = 0

s = ( −b ∓ √(b² − 4KM) ) / (2M)

[Root locus: two branches start at s = 0 and s = −b/M, meet at s = −b/(2M) when K = b²/(4M), and break away into the complex plane for K > b²/(4M). Step response: overdamped for K < b²/(4M), oscillatory for K > b²/(4M).]

Root-Locus Analysis
Section 11.3 in Oppenheim & Willsky
How do the roots of
1 + KH (s) = 0

move as K is increased from K = 0 to K = +∞?


If a point s0 ∈ C is on the root locus, then H(s0) = −1/K for some K > 0, and therefore ∠H(s0) = π. The rules for sketching the root locus below are derived from this property.
Rules for sketching the root locus:
Let

H(s) = ( s^m + b_{m−1} s^{m−1} + ... + b0 ) / ( s^n + a_{n−1} s^{n−1} + ... + a0 ),    m ≤ n
     = ∏_{k=1}^{m} (s − β_k) / ∏_{k=1}^{n} (s − α_k)        β_k: zeros, k = 1, ..., m;   α_k: poles, k = 1, ..., n

1) As K → 0, the roots converge to the poles of H (s):

H(s) = −1/K → ∞ as K → 0
Since there are n poles, the root locus has n branches, each starting at
a pole of H (s).

2) As K → ∞, m branches approach the zeros of H (s). If m < n, then


n − m branches approach infinity following asymptotes centered at:

( ∑_{k=1}^{n} α_k − ∑_{k=1}^{m} β_k ) / ( n − m )

with angles:

( 180° + (i − 1) 360° ) / ( n − m ),    i = 1, 2, ..., n − m.

Example 2 above: n − m = 2, poles: 0, −b/M, with center = −b/(2M) and angles = 90°, −90°.
3) Parts of the real line that lie to the left of an odd number of real
poles and zeros of H (s) are on the root locus.
[Sketches: the real-axis segments of the root locus for Example 1 and Example 2 above.]

Proof of Property 3:

∠H(s0) = ∑_{k=1}^{m} ∠(s0 − β_k) − ∑_{k=1}^{n} ∠(s0 − α_k)

If s0 is on the real line:

∠(s0 − a) = π if s0 < a,   0 if s0 > a.

Therefore,

∠H(s0) = rπ,    r: total number of real poles and zeros to the right of s0,
        = π if r is odd.

4) Branches between two real poles must break away into the com-
plex plane for some K > 0. The break-away and break-in points can
be determined by solving for the roots of

dH(s)/ds = 0

that lie on the real line.

Example 2 above:    H(s) = 1/( Ms² + bs )

dH/ds = ( −2Ms − b ) / ( Ms² + bs )² = 0   ⇒   s = −b/(2M)

Example 3:

H(s) = (s − 1) / ( (s + 1)(s + 2) )

n = 2, m = 1, zero: s = 1, poles: s = −1, −2.    n − m = 1: one asymptote with angle 180°.

[Root locus sketch: one branch moves from the pole at −1 toward the zero at s = 1, crossing into the right-half plane; the other moves from −2 toward −∞ along the 180° asymptote.]

Example 4:

H(s) = (s + 2) / ( s(s + 1) )        n − m = 1: one asymptote with angle 180°

Break-away / break-in points:

dH/ds = ( s² + s − (2s + 1)(s + 2) ) / ( s²(s + 1)² ) = 0

s² + s − (2s² + 5s + 2) = 0   ⇒   s² + 4s + 2 = 0   ⇒   s = ( −4 ∓ √8 )/2 = −2 ∓ √2

[Root locus sketch: branches leave the real axis between the poles at 0 and −1 and re-enter to the left of the zero at −2; the complex part of the locus is a circle of radius √2 centered at −2.]
Example 5:

H(s) = (s + 2) / ( s(s + 1)(s + a) ),    a > 2    (pole at −a added to the previous example)

n − m = 2, therefore two asymptotes with angles ∓90°

center of the asymptotes: ( (0 − 1 − a) − (−2) ) / 2 = (1 − a)/2

[Root locus sketch for a = 4: poles at 0, −1, −a and zero at −2; asymptote center at (1 − a)/2; break points determined from dH(s)/ds = 0.]

For large enough a, dH/ds = 0 has three real, negative roots:

[Root locus sketch: all three break points lie on the negative real axis.]

MATLAB command: rltool
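Without rltool, the locus can also be traced by brute force: for each gain K, compute the roots of the closed-loop polynomial. A sketch (Python/NumPy) for Example 4, where 1 + K H(s) = 0 becomes s(s + 1) + K(s + 2) = 0; the two special gains below are the values at which the discriminant (1 + K)² − 8K vanishes:

import numpy as np

num = np.array([1.0, 2.0])        # numerator of H(s):  s + 2
den = np.array([1.0, 1.0, 0.0])   # denominator of H(s): s(s + 1)

# closed-loop poles: roots of den(s) + K num(s) = 0
for K in [0.1, 3 - 2*np.sqrt(2), 1.0, 3 + 2*np.sqrt(2), 10.0]:
    poles = np.roots(np.polyadd(den, K * num))
    print(f"K = {K:6.3f}   poles = {poles}")
# at K = 3 -/+ 2*sqrt(2) the two poles coincide at the break-away/break-in
# points s = -2 +/- sqrt(2) computed above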


High-Gain Instability:
Large feedback gain causes instability if:
1) H (s) has zeros in the right-half plane (nonminimum phase)
2) n − m ≥ 3
[Asymptote patterns: for n − m = 2 the asymptotes are vertical (stable but poorly damped as K increases); for n − m = 3 they are at ±60° and 180° and cross into the right-half plane; for n − m = 4 at ±45° and ±135°; for n − m = 5 at ±36°, ±108° and 180°.]

n − m = 1: faster response without losing damping or stability as K increases.

Example: root locus of a system that cannot be stabilized with constant-gain feedback: [sketch not reproduced here].
EE120 - Fall'15 - Lecture 21 Notes
Murat Arcak
18 November 2015

Step Response of Second Order Systems

H(s) = ωn² / ( s² + 2ζωn s + ωn² )

ζ: damping ratio,   ωn: natural frequency

Poles: s1,2 = −ωn cos θ ∓ jωn sin θ = −σ ∓ jωd,    where cos θ = ζ.

[Pole diagram: the poles lie at distance ωn from the origin, at angle θ from the negative real axis, with real part −σ and imaginary parts ±ωd.]

Below are the step responses for various values of ζ. Note that ωn changes only the time scale, not the shape of the response.

[Plot: step responses s(t) versus ωn t for ζ = 0.1, 0.2, 0.4, 0.7, 1.0; smaller ζ gives larger overshoot and more oscillation.]

Important Features of the Step Response:

1) Rise time (tr): time to go from 10% to 90% of steady-state value


2) Peak overshoot (M p ): (peak value - steady state)/steady state
3) Peaking time (t p ): time to peak overshoot
4) Settling time (ts ): time after which the step response stays within
1% of the steady-state value

[Sketch: a step response annotated with the rise time tr (10% to 90%), peak overshoot Mp, peaking time tp, settling time ts, and the ±1% band around the steady-state value 1.]

How do these parameters depend on ζ and ωn ?

u(t): unit step   ←L→   1/s

Step response:

Y(s) = H(s) · (1/s) = ωn² / ( s( s² + 2ζωn s + ωn² ) )        (1)
     = A/s + B/( s + σ + jωd ) + B*/( s + σ − jωd )

A = 1,    B = −(1/2)( 1 + j σ/ωd )

y(t) = ( 1 + B e^{−σt} e^{−jωd t} + B* e^{−σt} e^{jωd t} ) u(t)
     = ( 1 + ( B e^{−jωd t} + B* e^{jωd t} ) e^{−σt} ) u(t)

where B e^{−jωd t} + B* e^{jωd t} = 2 Re{ B e^{−jωd t} } = 2 Re{ −(1/2)( 1 + j σ/ωd )( cos ωd t − j sin ωd t ) } = −( cos ωd t + (σ/ωd) sin ωd t ), so that

y(t) = [ 1 − ( cos ωd t + (σ/ωd) sin ωd t ) e^{−σt} ] u(t)

Peaking time:

d y(t)/dt = σ e^{−σt} ( cos ωd t + (σ/ωd) sin ωd t ) − e^{−σt} ( −ωd sin ωd t + σ cos ωd t )
          = e^{−σt} ( σ²/ωd + ωd ) sin ωd t

d y(t)/dt = 0   =⇒   sin ωd t = 0   ⇒   tp = π/ωd

Peak overshoot: Mp = y(tp) − 1

y(tp) = 1 − ( cos ωd tp ) e^{−σ tp} = 1 + e^{−σ tp} = 1 + e^{−σπ/ωd}        (since cos ωd tp = cos π = −1)

Mp = e^{−π σ/ωd} = e^{−π ζ/√(1 − ζ²)}

Mp decreases as ζ increases, and Mp → 0 as ζ → 1;   Mp ≈ 0.05 for ζ = 0.7,   Mp ≈ 0.16 for ζ = 0.5.
Approximate expressions for rise time and settling time:

ts ≈ 4.6/σ        (obtained from e^{−σ ts} = 0.01)
tr ≈ 1.8/ωn       for ζ = 0.5 (changes little with ζ)

Note that tp, ts, tr are inversely proportional to ωn:

tp = π/ωd = π/( ωn √(1 − ζ²) ),    ts ≈ 4.6/σ = 4.6/(ζ ωn),    tr ≈ 1.8/ωn.

This is consistent with the observation above that ωn changes only the time scale, not the shape of the response. We make this property explicit in the following statement:
If ζ is kept constant and ωn is scaled by a factor of α > 0 (ωn → αωn ) then
the step response is scaled in time by α: y(t) → y(αt).
[Sketch: scaling ωn → αωn along a constant-ζ ray in the s-plane compresses the step response in time from y(t) to y(αt).]

Proof: If we replace ωn with αωn in (1), we get

(αωn)² / ( s( s² + 2ζ(αωn)s + (αωn)² ) ) = (1/α) · ωn² / ( (s/α)( (s/α)² + 2ζωn (s/α) + ωn² ) ) = (1/α) Y(s/α).

The statement above then follows from the scaling property of the Laplace transform:

y(αt)   ←L→   (1/α) Y(s/α).
Summary: increasing ωn increases the speed of the response; increasing ζ reduces the overshoot.
Although the formulas above are for second order systems, they can
be applied as approximate expressions to higher order systems with
two dominant poles:

[Pole diagram: a dominant complex pole pair close to the imaginary axis and additional poles far to the left; the response due to the far-away poles dies out quickly and can therefore be ignored.]
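The Mp, tp, and ts expressions above can be checked against a simulated second-order step response (a sketch in Python/SciPy; ζ = 0.5 and ωn = 2 are arbitrary choices):

import numpy as np
from scipy.signal import TransferFunction, step

zeta, wn = 0.5, 2.0
sys = TransferFunction([wn**2], [1, 2*zeta*wn, wn**2])
t = np.linspace(0, 10, 20001)
t, y = step(sys, T=t)

sigma, wd = zeta * wn, wn * np.sqrt(1 - zeta**2)
Mp_meas, tp_meas = y.max() - 1, t[np.argmax(y)]
ts_meas = t[np.where(np.abs(y - 1) > 0.01)[0][-1]]   # last time outside the 1% band

print(Mp_meas, np.exp(-np.pi * zeta / np.sqrt(1 - zeta**2)))   # ~0.16 for zeta = 0.5
print(tp_meas, np.pi / wd)                                     # peaking time
print(ts_meas, 4.6 / sigma)                                    # settling time (approximate)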

Control Design by Root Locus

Root locus examples from last lecture: [three sketches, labeled 1), 2), 3), reproduced from Lecture 20].

Example: mass M driven by force x(t); output y(t): position.

M d²y/dt² + b dy/dt = x(t)   →   Hp(s) = 1/( Ms² + bs )

Suppose a damping ratio of ζ = 0.7 is desired:

[Root locus sketch: select the gain K that corresponds to the point where the constant-ζ = 0.7 ray (θ = 45°) intersects the root locus.]

Suppose, in addition to ζ, a lower bound on ωn is specified:


[Sketch: the desired region for the closed-loop poles (shaded) is bounded by a constant-ζ ray and a constant-ωn arc.]

The root locus doesn't go through the desired region, therefore constant gain control won't work. Try the controller:

Hc(s) = K (s − β)/(s − α),    α < β < 0    (pole to the left of the zero)

Closed-loop poles:

1 + K · (s − β)/(s − α) · 1/( s(Ms + b) ) = 0,        where H(s) = (s − β) / ( (s − α) s (Ms + b) )

Select α, β such that the root locus passes through the desired region.

[Root locus sketch: with the controller pole at α and zero at β, the locus passes through the desired region; select the gain K from that segment.]

A controller of the form

Hc(s) = K (s − β)/(s − α),    α < β < 0

is called a "lead controller".


Example: s+1
Hc (s) =
s + 10
1 1 + jω
Hc ( jω ) =
10 1 + jω/10

20log10 | Hc ( jω )| = −20 − 20log10 |1 + jω/10| + 20log10 |1 + jω |

|Hc (jω)|

0dB 1 10 100
ω

−20dB

”phase-lead”
o
90

1 10
100 ω

−90o
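A quick numerical check of this sketch (Python/SciPy): the magnitude rises from about −20 dB to 0 dB, and the phase lead peaks between the corner frequencies, near ω = √10 ≈ 3.2 rad/s:

import numpy as np
from scipy.signal import TransferFunction, bode

Hc = TransferFunction([1, 1], [1, 10])        # Hc(s) = (s + 1)/(s + 10)
w = np.logspace(-2, 3, 2000)
w, mag_db, phase_deg = bode(Hc, w=w)

print(mag_db[0], mag_db[-1])                  # about -20 dB at low w, ~0 dB at high w
k = int(np.argmax(phase_deg))
print(w[k], phase_deg[k])                     # peak phase lead near w = sqrt(10), ~55 degrees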

Some History: Black’s Feedback Amplifier

The use of feedback is not limited to designing controllers that shape


the dynamic response of a system. Another important advantage is to
guarantee robustness to variations and disturbances.
In its early days Bell Labs developed amplifiers that enabled long
distance telephone communication. However, the amplifiers had
significant variations in their gains and their nonlinearity caused in-
terference between the channels. Addressing these problems Harold
Black introduced a negative feedback around the amplifier that both
reduced the variations in the gains and extended the linear range.
We illustrate these benefits on a static model of the amplifier in the figure below. Suppose the amplifier has gain µ in its linear range and the output saturates at ±1. When a negative feedback with gain

β ≫ µ^{-1}

is applied, the relationship between the new input x̃ and the output y is again a saturation nonlinearity (show this), but the new gain is

µ/(1 + βµ) ≈ β^{-1}

which is robust to variations in µ. In addition, the response is linear when |x̃| ≤ (1 + βµ)/µ ≈ β, a significantly wider range than |x| ≤ µ^{-1}.

[Sketch: the open-loop amplifier is a saturation with slope µ and linear input range |x| ≤ µ^{-1}; with feedback β ≫ µ^{-1}, the map from x̃ to y is again a saturation, now with slope µ/(1 + βµ) ≈ β^{-1} and linear input range |x̃| ≈ β.]

The drawback is that the gain of the amplifier is now significantly


reduced. As Black explains in his 1934 paper in the Bell System Tech-
nical Journal:
"... by building an amplifier whose gain is deliberately made say 40
decibels higher than necessary and then feeding the output back on
the input in such a way as to throw away excess gain, it has been
found possible to effect extraordinary improvement in constancy of
amplification and freedom from nonlinearity."
EE120 - Fall'15 - Lecture 22 Notes
Murat Arcak
23 November 2015

Steady State Accuracy

[Block diagram: r(t) → (+/−) → e(t) → Hc(s) → Hp(s) → y(t), with y(t) fed back to the summing junction.]

e(t) = r(t) − y(t)

E(s) = R(s) − Y(s) = R(s) − [ Hc(s)Hp(s)/( 1 + Hc(s)Hp(s) ) ] R(s) = [ 1/( 1 + Hc(s)Hp(s) ) ] R(s)

Suppose r (t) is a unit step. How do we guarantee e(t) converges to


zero instead of a different constant?

R(s) = 1/s   =⇒   E(s) = 1/( 1 + Hc(s)Hp(s) ) · 1/s

Final Value Theorem:

ess ≜ lim_{t→∞} e(t) = lim_{s→0} s E(s) = 1/( 1 + Hc(0) Hp(0) )

To ensure ess = 0, we need lims→0 Hc (s) H p (s) = ∞, i.e.,

Hc (s) H p (s) must have at least one pole at s = 0.

Example: Position control

Hp(s) = 1/( Ms² + bs ),    Hc(s) = K        (mass M, input x(t): force, output y(t): position)

Hc(s)Hp(s) = K/( s(Ms + b) )   →   ess = 0

[Sketch: step responses y(t) converge to 1 for both small and large K; larger K gives a faster response.]

Example: Speed control

Hp(s) = 1/( Ms + b ),    Hc(s) = K        (mass M, input x(t): force, output y(t): speed)

ess = 1/( 1 + Hc(0) Hp(0) ) = 1/( 1 + K/b ) ≠ 0

yss = 1 − ess = 1 − 1/( 1 + K/b ) = (K/b)/( 1 + K/b )

[Sketch: step responses y(t) settle below 1 with steady-state error ess = 1/(1 + K/b); larger K gives a smaller error.]

Steady-state error decreases with increasing K, but increasing K is


not always a viable approach (poor damping if #poles − #zeros = 2,
instability if #poles − #zeros ≥ 3).
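A small sketch (Python/SciPy, with arbitrary M and b) confirming the speed-control error formula: the closed loop K Hp/(1 + K Hp) reduces to K/(Ms + b + K), so the unit-step response settles at (K/b)/(1 + K/b):

import numpy as np
from scipy.signal import TransferFunction, step

M, b = 1.0, 0.5                         # example plant parameters (arbitrary)
t = np.linspace(0, 20, 2000)
for K in [1.0, 5.0, 50.0]:
    cl = TransferFunction([K], [M, b + K])      # K Hp / (1 + K Hp) = K/(M s + b + K)
    _, y = step(cl, T=t)
    print(K, y[-1], (K / b) / (1 + K / b))      # simulated yss vs the formula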

Integral Control

If H p (s) does not contain a pole at s = 0, introduce one in Hc (s).


Drawback: pole at s = 0 makes it harder to meet damping and natu-
ral frequency specifications.
Example: Speed control of a DC motor

[Circuit: the input voltage x(t) drives the armature with resistance R and inductance L; the back-emf is kω; the motor shaft with inertia J rotates at angular velocity ω.]
Suppose we want to control y(t) = ω (angular velocity).


First, find the transfer function H p (s):

J dω(t)/dt = k i(t)
L di(t)/dt = −k ω(t) − R i(t) + x(t)

Take the Laplace transform and substitute y = ω:

Js Y(s) = k I(s)
Ls I(s) = −k Y(s) − R I(s) + X(s)

Substitute I(s) = ( X(s) − k Y(s) )/( Ls + R ) from the second equation into the first:

Js Y(s) = k ( X(s) − k Y(s) )/( Ls + R )

[ Js(Ls + R) + k² ] Y(s) = k X(s)

Hp(s) = Y(s)/X(s) = k / ( JLs² + JRs + k² )

[Pole diagram of Hp(s) (sketch).]

Constant gain control Hc(s) = K gives nonzero steady-state error:

ess = 1/( 1 + K Hp(0) ) = 1/( 1 + K/k ) ≠ 0
Increasing the gain reduces ess , but leads to a poorly damped system:

[Sketch: step responses under constant-gain control; larger K reduces the offset but gives a more poorly damped response.]

Integral Control:   Hc(s) = K/s

[Root locus sketch: with the integrator, the achievable ωn is smaller than with constant-gain control.]

ess = 0 is achieved at the cost of a slower response (smaller ωn):

[Sketch: the step response with integral control rises slowly toward 1.]

Solution: Augment integral control with lead control:

Hc(s) = (K/s) · (s − β)/(s − α),    α < β < 0.

The main features of this controller are similar to PID (proportional-integral-derivative) control, which is very popular in industry.
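As a numerical illustration (a sketch in Python/SciPy; the motor parameters J, L, R, k and the gain K below are arbitrary placeholders), closing the loop around Hp(s) = k/(JLs² + JRs + k²) with Hc(s) = K leaves a steady-state error, while Hc(s) = K/s drives it to zero:

import numpy as np
from scipy.signal import TransferFunction, step

J, L, R, k = 1.0, 0.5, 1.0, 2.0          # example motor parameters (arbitrary)
K = 2.0                                  # gain chosen so that both loops are stable
t = np.linspace(0, 40, 4000)
den_p = [J*L, J*R, k**2]                 # plant denominator: J L s^2 + J R s + k^2

# Hc(s) = K:    closed loop = K k / (J L s^2 + J R s + k^2 + K k)
cl_gain = TransferFunction([K*k], np.polyadd(den_p, [K*k]))
# Hc(s) = K/s:  closed loop = K k / (s (J L s^2 + J R s + k^2) + K k)
cl_int = TransferFunction([K*k], np.polyadd(np.polymul(den_p, [1, 0]), [K*k]))

_, y1 = step(cl_gain, T=t)
_, y2 = step(cl_int, T=t)
print(y1[-1], (K/k) / (1 + K/k))         # constant gain: nonzero steady-state error
print(y2[-1])                            # integral control: output settles at 1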

Disturbance Rejection with Integral Control

[Block diagram: r → (+/−) → Hc(s) → (+) → Hp(s) → y, with the disturbance d (wind, current, load, etc.) added at the plant input and y fed back.]

Y(s) = Hp(s) ( Hc(s)( R(s) − Y(s) ) + D(s) )

( 1 + Hc(s)Hp(s) ) Y(s) = Hc(s)Hp(s) R(s) + Hp(s) D(s)

Y(s) = [ Hc(s)Hp(s)/( 1 + Hc(s)Hp(s) ) ] R(s) + [ Hp(s)/( 1 + Hc(s)Hp(s) ) ] D(s)   ≜   Ynominal(s) + Δ(s)

Suppose d(t) = u(t): unit step. How do we guarantee y(t) recovers from this disturbance; that is, δ(t) ≜ y(t) − ynominal(t) → 0?

[Sketch: a step disturbance d(t) causes a transient deviation δ(t) in y(t) that should decay back to zero.]

Δ(s) = Hp(s)/( 1 + Hc(s)Hp(s) ) · 1/s

lim_{t→∞} δ(t) = lim_{s→0} s Δ(s) = lim_{s→0} Hp(s)/( 1 + Hc(s)Hp(s) )
               = 0 if Hc(s) has a pole at s = 0.

Example: Consider again the position control example earlier in this lecture. Although the plant Hp(s) = 1/( s(Ms + b) ) has a pole at s = 0, a constant
has a pole at s = 0, a constant

gain controller Hc (s) = K cannot eliminate the steady state offset


caused by a disturbance entering the system before the plant (as in
the block diagram above). Instead we get the offset:
lim_{t→∞} δ(t) = lim_{s→0} Hp(s)/( 1 + Hc(s)Hp(s) ) = 1/K.

To remove the offset the control Hc (s) itself must have a pole at s = 0.
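A sketch (Python/SciPy, with arbitrary M, b and gains) of this comparison for the position plant: with Hc(s) = K, the disturbance response Δ(s) = Hp/(1 + HcHp)·(1/s) settles at 1/K, while a PI controller Hc(s) = K + Ki/s (one simple way to give Hc a pole at s = 0) drives it to zero:

import numpy as np
from scipy.signal import TransferFunction, step

M, b = 1.0, 1.0                  # example plant parameters (arbitrary)
K, Ki = 4.0, 1.0                 # proportional and integral gains (arbitrary, stable loop)
t = np.linspace(0, 60, 6000)

# Hc = K:        delta(t) is the unit-step response of 1/(M s^2 + b s + K)
_, d1 = step(TransferFunction([1], [M, b, K]), T=t)
# Hc = K + Ki/s: delta(t) is the unit-step response of s/(M s^3 + b s^2 + K s + Ki)
_, d2 = step(TransferFunction([1, 0], [M, b, K, Ki]), T=t)

print(d1[-1], 1 / K)             # constant gain: the offset 1/K remains
print(d2[-1])                    # pole at s = 0 in Hc: the offset decays to zero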

What Happens to Canceled Poles?

They get decoupled from the input-output relation but continue to


exist internally, creating dynamic modes that are invisible from the
output or can’t be influenced by the input.
As an illustration consider the series interconnection below where a
pole-zero cancelation occurs at s = λ.

e → [ s − λ ] → x → [ 1/(s − λ) ] → y

In the time domain the first and second blocks satisfy, respectively,

x(t) = de(t)/dt − λ e(t)    and    dy(t)/dt = λ y(t) + x(t).

Combining the two, we get:

d/dt ( y(t) − e(t) ) = λ ( y(t) − e(t) )   ⇒   y(t) − e(t) = ( y(0) − e(0) ) e^{λt}.
Thus, instead of y(t) = e(t), we have

y(t) = e(t) + (y(0) − e(0))eλt

where the second term is a result of the canceled pole at s = λ. Since


transfer functions don’t account for initial conditions, this term does
not appear in the transfer function of the series interconnection:
Y(s)/E(s) = (s − λ) · 1/(s − λ) = 1.
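A sketch (Python/SciPy) that makes the hidden mode visible: pick e(t) = sin t, drive the second block with x = e′ − λe, and start y from an initial condition different from e(0); the difference then follows (y(0) − e(0))e^{λt} exactly, even though the cascade's transfer function is 1. (λ = 0.25 and y(0) = 2 are arbitrary choices.)

import numpy as np
from scipy.integrate import solve_ivp

lam = 0.25
e = lambda t: np.sin(t)                       # chosen input to the first block
x = lambda t: np.cos(t) - lam * np.sin(t)     # x = de/dt - lam * e

y0 = 2.0                                      # y(0) != e(0) = 0
sol = solve_ivp(lambda t, y: lam * y + x(t), (0.0, 10.0), [y0],
                dense_output=True, rtol=1e-9, atol=1e-12)

t = np.linspace(0.0, 10.0, 200)
y = sol.sol(t)[0]
print(np.allclose(y - e(t), (y0 - 0.0) * np.exp(lam * t), atol=1e-6))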

Example: Consider the circuit below where x is the voltage across


the parallel interconnection of a current source with two inductors
and a resistor. The currents r, e, y, and i are as labeled.
The orange block with input x and output y is governed by:
L1 dy(t)/dt = x(t)   ⇒   Y(s)/X(s) = 1/(L1 s).

[Circuit: current source r in parallel with L1, L2, and R2, all sharing the voltage x; y is the current through L1, i the current through R2, and e = r − y the current entering the L2–R2 pair.]

The blue block with input e and output x is governed by:

L2 d/dt ( e(t) − x(t)/R2 ) = x(t),        where x(t)/R2 = i(t)

⇒   L2 s ( E(s) − X(s)/R2 ) = X(s)   ⇒   X(s)/E(s) = L2 s / ( (L2/R2) s + 1 ).

Noting from Kirchhoff's law that e = r − y, we view this circuit as a feedback interconnection of the two blocks with reference input r:

[Block diagram: r → (+/−) → e → L2 s/( (L2/R2)s + 1 ) → x → 1/(L1 s) → y, with y fed back.]

With the pole-zero cancelation at s = 0, the closed-loop transfer function has a single pole at s = −R2( 1/L1 + 1/L2 ). Thus, when r(t) ≡ 0, one might expect the current y(t) to decay to zero. However, this is not necessarily true: depending on the initial conditions, a constant current can remain in the loop formed by the two inductors as a result of the canceled pole at s = 0:

[Circuit sketch: with r = 0, a constant current circulates in the loop formed by L1 and L2 while no current flows through R2.]