
Chapter 2 Discrete-Time Signals and Systems
—Introduction
• Signal processing (system analysis and design)
  - Analog
  - Digital
• Examples of (digital) signals and systems (O&S, Chap. 1)
  - 1-D: Speech and audio
  - 2-D: Images and video
• History (O&S, Chap. 1)
  - Before 1950s: analog signals/systems
  - 1950s: Digital computer
  - 1960s: Fast Fourier Transform (FFT)
  - 1980s: Real-time VLSI digital signal processors
• A typical digital signal processing system

[Block diagram: x(t) → H1(s) → A/D → x[n] → digital filter → y[n] → D/A → H2(s) → y(t); the overall chain acts as an equivalent analog filter from x(t) to y(t).]

— Discrete-time Signals: Sequences


• Continuous-time signal – Defined along a continuum of times, x(t).
  Continuous-time system – Operates on and produces continuous-time signals.
  Discrete-time signal – Defined at discrete times, x[n]; a sequence of numbers.
  Discrete-time system – Operates on and produces discrete-time signals.
Ex., (O&S, p.10)

Remarks: Digital signals usually refer to the quantized discrete-time signals. In this course, we are mostly dealing with discrete-time signals with continuous (unquantized) values.
• Sampling: Very often, x[n] is obtained by sampling x(t).
  That is, x[n] = x(nT), where T is the sampling period. But T is often not important in discrete-time signal analysis. (We will discuss the sampling process in Notes Chap. 3.)

• Basic Sequences:
  - Unit sample sequence
    δ[n] = 1 for n = 0,  δ[n] = 0 for n ≠ 0

Remark: It is often called the discrete-time impulse or simply
impulse. (Some books call it unit pulse sequence.)
  - Unit step sequence
    u[n] = 1 for n ≥ 0,  u[n] = 0 for n < 0
Note 1: u[0]=1, well-defined.
Note 2: u[n] = Σ_{m=−∞}^{n} δ[m] (running sum); conversely, δ[n] = u[n] − u[n−1].


  - Exponential sequences
    x[n] = A α^n
    Combining basic sequences:
    x[n] = A α^n for n ≥ 0 and x[n] = 0 for n < 0, i.e., x[n] = A α^n u[n].

• Sinusoidal sequences
  x[n] = A cos(ω0 n + φ) for all n
  A: amplitude,  ω0 = 2π f0: frequency,  φ: phase
  - It can be viewed as a sampled continuous-time sinusoid. However, it is not always periodic! Why?
    Condition for being periodic with period N: x[n] = x[n + N].
    That is, A cos(ω0 n + φ) = A cos(ω0 (n + N) + φ).
    Or, ω0 (n + N) = ω0 n + 2πk, where k and n are integers (k a fixed number; n a running index, −∞ < n < ∞).
    ⇒ ω0 N = 2πk ⇒ ω0 = 2πk/N. Hence, f0 = k/N must be a rational number.
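A quick numerical check of this condition (a minimal sketch in NumPy; the helper name and the sample frequencies are illustrative, not from O&S):

```python
import numpy as np

# Check whether x[n] = cos(w0*n) satisfies x[n] = x[n + N] for a candidate period N.
def is_periodic(w0, N, n_max=1000, tol=1e-9):
    n = np.arange(n_max)
    return np.allclose(np.cos(w0 * n), np.cos(w0 * (n + N)), atol=tol)

print(is_periodic(2 * np.pi / 8, 8))   # True:  f0 = 1/8 is rational, period N = 8
print(is_periodic(1.0, 6))             # False: w0 = 1 rad gives f0 = 1/(2*pi), irrational, never periodic
```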

  - One discrete-time sinusoid corresponds to multiple continuous-time sinusoids of different frequencies:
    x[n] = A cos(ω0 n + φ) = A cos((ω0 + 2πr) n + φ) for all n,
    where r is any integer.
    Typically, we pick the lowest frequency (r = 0) under the assumption that the original continuous-time sinusoid has a limited frequency value, 0 ≤ ω0 < 2π or −π ≤ ω0 < π. This is the unambiguous frequency interval.
  - Interpretation of frequency (O&S, p.15): For a continuous-time sinusoid x(t) = A cos(Ω0 t + φ), as Ω0 increases, x(t) oscillates more and more rapidly. For a discrete-time sinusoid x[n] = A cos(ω0 n + φ), as ω0 increases from ω0 = 0 toward ω0 = π, x[n] oscillates more and more rapidly. However, as ω0 increases from ω0 = π to ω0 = 2π, the oscillations become slower. Why?
    Because of the periodicity in ω0 of sinusoidal (and complex exponential) sequences, frequencies around ω0 = 0 are indistinguishable from frequencies around ω0 = 2π.
    ⇒ Frequencies in the vicinity of ω0 = 2kπ for any integer value of k are typically referred to as low frequencies, while frequencies in the vicinity of ω0 = (2k + 1)π are typically referred to as high frequencies.
  - The above discussion also applies to the complex exponential sequences defined below.

• Complex Exponential Sequences
  x[n] = A α^n, with A = |A| e^{jφ} and α = |α| e^{jω0}.
  Hence,
  x[n] = |A| |α|^n e^{j(ω0 n + φ)} = |A| |α|^n cos(ω0 n + φ) + j |A| |α|^n sin(ω0 n + φ)

— Discrete-time Systems
• A discrete-time system is defined mathematically as a transformation or operator T{·} that maps an input sequence with values x[n] into an output sequence with values y[n]:
  y[n] = T{x[n]}
  - Ideal Delay
    y[n] = x[n − nd], −∞ < n < ∞,
    where nd is a fixed positive integer called the delay of the system.
  - Moving Average
    y[n] = (1 / (M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k]
• Memoryless: If the output y[n] at every value of n depends only on the input x[n] at the same value of n.
• Linear: If it satisfies the principle of superposition.
  (a) Additivity: T{x1[n] + x2[n]} = T{x1[n]} + T{x2[n]}
  (b) Homogeneity or scaling: T{a x[n]} = a T{x[n]}
• Time-invariant (shift-invariant): A time shift or delay of the input sequence causes a corresponding shift in the output sequence.

  [Diagram: x[n] → T → y[n] → delay by n0 → y[n − n0];  x[n] → delay by n0 → x[n − n0] → T → y_{n0}[n]. Time invariance requires y_{n0}[n] = y[n − n0].]
  Ex.: y[n] = x[αn] is not time-invariant.
• Causality: For any n0, the output sequence value at the index n = n0 depends only on the input sequence values for n ≤ n0.
• Stability in the bounded-input, bounded-output (BIBO) sense: If and only if every bounded input sequence produces a bounded output sequence.

— Linear Time-invariant (LTI) Systems


• An LTI system is completely characterized by its impulse response. (Fig. 2.8, O&S, p.24)
  (1) Sequence as a sum of delayed impulses: x[n] = Σ_{m=−∞}^{∞} x[m] δ[n − m]
  (2) An LTI system with input δ[n]:
      x[n] = δ[n] yields y[n] = h[n] (impulse response)
  (3) x[n] = Σ_{m=−∞}^{∞} x[m] δ[n − m] yields y[n] = Σ_{m=−∞}^{∞} x[m] h[n − m]

  Example: h[n] has samples {1, 1/2, 1/3} at n = 0, 1, 2, and x[n] = δ[n] + 2δ[n − 1].
  By linearity and time invariance, y[n] = h[n] + 2h[n − 1], which has samples {1, 5/2, 4/3, 2/3} at n = 0, 1, 2, 3.

• Convolution sum: f3[n] = Σ_{m=−∞}^{∞} f1[m] f2[n − m] = f1[n] ∗ f2[n]
  Note: Here, n is the independent (outside) variable, and m is the dummy (summation) variable.

  - Procedure of convolution (a numerical check is sketched after the example below)
    1. Time-reverse: h[m] → h[−m]
    2. Choose an n value
    3. Shift h[−m] by n: h[n − m]
    4. Multiplication: x[m] · h[n − m]
    5. Summation over m: y[n] = Σ_{m=−∞}^{∞} x[m] h[n − m]
    6. Choose another n value and go to Step 3.
    Example (same h[n] and x[n] as above; x[m] has samples {1, 2} at m = 0, 1):
    y[0] = Σ_m x[m] h[−m]    = 1·1                = 1
    y[1] = Σ_m x[m] h[1 − m] = 1·(1/2) + 2·1      = 5/2
    y[2] = Σ_m x[m] h[2 − m] = 1·(1/3) + 2·(1/2)  = 4/3
    y[3] = Σ_m x[m] h[3 − m] = 2·(1/3)            = 2/3
    [Stem plots of h[−m], h[1 − m], h[2 − m], h[3 − m] omitted.]
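The same result can be reproduced numerically by implementing the flip-shift-multiply-sum procedure above directly; a minimal sketch (NumPy assumed, sample values taken from the example):

```python
import numpy as np

h = np.array([1.0, 1/2, 1/3])    # h[n] at n = 0, 1, 2
x = np.array([1.0, 2.0])         # x[n] = delta[n] + 2*delta[n-1]

# Direct convolution sum: y[n] = sum_m x[m] * h[n - m]
y = np.zeros(len(x) + len(h) - 1)
for n in range(len(y)):
    for m in range(len(x)):
        if 0 <= n - m < len(h):
            y[n] += x[m] * h[n - m]

print(y)                  # [1.  2.5  1.333...  0.666...] = {1, 5/2, 4/3, 2/3}
print(np.convolve(x, h))  # library routine gives the same sequence
```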

— Properties of LTI Systems


• The properties of an LTI system can be observed from its impulse response.
• Cascade connection:
  h[n] = h1[n] ∗ h2[n]
  [h1[n] followed by h2[n] is equivalent to a single system with impulse response h1[n] ∗ h2[n].]
• Parallel connection:
  h[n] = h1[n] + h2[n]
  [h1[n] and h2[n] driven by the same input, outputs added, is equivalent to a single system with impulse response h1[n] + h2[n].]

• BIBO stability: If and only if h[n] is absolutely summable, i.e.,
  S = Σ_{k=−∞}^{∞} |h[k]| < ∞

• Causal sequence → causal system: h[n] = 0, n < 0


• Memoryless LTI: h[n] = k δ[n]
• Some frequently used systems:
  -- Ideal Delay
     y[n] = x[n − nd],   h[n] = δ[n − nd]
  -- Moving Average
     y[n] = (1 / (M1 + M2 + 1)) Σ_{k=−M1}^{M2} x[n − k],
     h[n] = 1 / (M1 + M2 + 1) for −M1 ≤ n ≤ M2, and 0 otherwise
  -- Accumulator
     y[n] = Σ_{k=−∞}^{n} x[k],   h[n] = u[n] (unit step)
  -- Forward Difference
     y[n] = x[n + 1] − x[n],   h[n] = δ[n + 1] − δ[n]
  -- Backward Difference
     y[n] = x[n] − x[n − 1],   h[n] = δ[n] − δ[n − 1]

• Finite-duration Impulse Response (FIR):
  Its impulse response has only a finite number of nonzero samples.
  -- FIR systems are always stable.
• Infinite-duration Impulse Response (IIR):
  Its impulse response is infinite in duration.
• Inverse System:
  x[n] → h[n] → y[n] → g[n] → x[n]
  System g[n] is the inverse of h[n] if
  h[n] ∗ g[n] = δ[n]
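As an illustration of this relation (a sketch; the accumulator/backward-difference pairing is one standard example consistent with the systems listed above):

```python
import numpy as np

def accumulator(x):
    # y[n] = sum_{k <= n} x[k]
    return np.cumsum(x)

def backward_difference(y):
    # w[n] = y[n] - y[n-1], taking y[-1] = 0
    return y - np.concatenate(([0.0], y[:-1]))

delta = np.array([1.0, 0, 0, 0, 0, 0])            # delta[n] on a short window
print(backward_difference(accumulator(delta)))    # [1. 0. 0. 0. 0. 0.]  ->  h[n] * g[n] = delta[n]
```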

—Linear Constant-Coefficient Difference Equations


  - An important class of LTI systems is described by a linear constant-coefficient difference equation.
• Difference equation (general form):
  Σ_{k=0}^{N} a_k y[n − k] = Σ_{m=0}^{M} b_m x[n − m]
  First-order system: y[n] = a y[n − 1] + b x[n]
  Solution:
  y[n] = y_p[n] + y_h[n] = particular solution + homogeneous solution
  Homogeneous solution: Σ_{k=0}^{N} a_k y[n − k] = 0 (i.e., x[n] = 0)
  Particular solution: (experience!)
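A minimal sketch of evaluating the first-order recursion directly (initial rest, y[−1] = 0, is assumed; the coefficient values are illustrative):

```python
import numpy as np

def first_order(x, a, b, y_prev=0.0):
    # y[n] = a*y[n-1] + b*x[n], computed recursively (initial rest: y[-1] = 0)
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        y_prev = a * y_prev + b * xn
        y[n] = y_prev
    return y

x = np.ones(12)                        # step input on n = 0..11
print(first_order(x, a=0.5, b=1.0))    # tends to b/(1-a) = 2: the particular (forced) solution,
                                       # plus a homogeneous term proportional to a**n that decays
```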

—Frequency-Domain Representation
• Eigenfunction and eigenvalue
  What is an eigenfunction of a system T{·}?
  f[n] is an eigenfunction if T{f[n]} = C f[n], where the complex constant C is the eigenvalue.
  The output waveform has the same shape as the input waveform.
  The complex exponential sequence is an eigenfunction of any LTI system.

  x[n] = e^{jωn} → [LTI, h[n]] → y[n] = H(e^{jω}) e^{jωn}
  H(e^{jω}) = Σ_{k=−∞}^{∞} h[k] e^{−jωk}
  Magnitude: |H(e^{jω})|    Phase: ∠H(e^{jω})
• H(e^{jω}) is periodic. Why?
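H(e^{jω}) can be evaluated straight from its defining sum; a small sketch (the 3-point moving-average h[n] is only an illustrative choice) that also confirms the 2π periodicity:

```python
import numpy as np

def freq_response(h, w):
    # H(e^{jw}) = sum_k h[k] e^{-jwk} for h[k] nonzero on k = 0..len(h)-1
    k = np.arange(len(h))
    return np.array([np.sum(h * np.exp(-1j * wi * k)) for wi in w])

h = np.ones(3) / 3                                       # 3-point moving average (illustrative)
w = np.linspace(-np.pi, np.pi, 9)
H = freq_response(h, w)
print(np.abs(H))                                         # |H(e^{jw})| at a few frequencies
print(np.allclose(H, freq_response(h, w + 2 * np.pi)))   # True: H is periodic with period 2*pi
```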
• The above eigenfunction analysis is valid when the input is applied to the system at n = −∞.
• What happens if the input is applied to a causal LTI system at n = 0?
  x[n] = e^{jωn} u[n]  ⇒  y[n] = 0 for n < 0, and y[n] = (Σ_{k=0}^{n} h[k] e^{−jωk}) e^{jωn} for n ≥ 0.
  Considering only n ≥ 0,
  y[n] = (Σ_{k=0}^{∞} h[k] e^{−jωk}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}
       = H(e^{jω}) e^{jωn} − (Σ_{k=n+1}^{∞} h[k] e^{−jωk}) e^{jωn}
  y[n] = yss[n] + yt[n],
  where yss[n] is called the steady-state response and yt[n] is called the transient response.
  - If the system is stable, yt[n] → 0 as n → ∞.
  - H(e^{jω}) exists if the system is stable. In this case, y[n] → yss[n] as n → ∞.
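A numerical illustration of the steady-state/transient split (a sketch; the stable impulse response h[n] = a^n u[n] and the frequency ω0 are assumed purely for illustration):

```python
import numpy as np

a, w0 = 0.5, 0.4 * np.pi
n = np.arange(200)
h = a ** n                                 # h[n] = a^n u[n], a stable causal system
x = np.exp(1j * w0 * n)                    # e^{j w0 n} switched on at n = 0

y = np.convolve(x, h)[:len(n)]             # actual output for n = 0..199
H = 1.0 / (1.0 - a * np.exp(-1j * w0))     # H(e^{j w0}), closed-form geometric sum for this h
y_ss = H * np.exp(1j * w0 * n)             # steady-state response

print(abs(y[0] - y_ss[0]))                 # noticeable transient right after switch-on
print(abs(y[-1] - y_ss[-1]))               # ~0: the transient has decayed, y[n] -> yss[n]
```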

— Fourier Transform of Sequences


  - Interpretation: Decompose an “arbitrary” sequence into “sinusoidal components” of different frequencies.
• DTFT: Discrete-time Fourier Transform
  Analysis:  X(e^{jω}) = Σ_{n=−∞}^{∞} x[n] e^{−jωn} ≡ F{x[n]},   −π < ω ≤ π
  Synthesis: x[n] = (1/2π) ∫_{−π}^{π} X(e^{jω}) e^{jωn} dω ≡ F^{−1}{X(e^{jω})}
  x[n] ↔ X(e^{jω})   Discrete-Time Fourier Transform pair
  Remarks: The Fourier transform is also called the Fourier spectrum.
  Magnitude spectrum: |X(e^{jω})|
  Phase spectrum: ∠X(e^{jω})
  Remarks: (1) X(e^{jω}) is continuous in frequency, ω.
           (2) X(e^{jω}) is “periodic” with period 2π. (Why?)

• Does every x[n] have a DTFT?
  Convergence conditions: “error” → 0 as N (samples) → ∞
  (A) Absolutely summable:
      Σ_{n=−∞}^{∞} |x[n]| < ∞   (uniform convergence, O&S, p.50)
  (B) Finite energy (square-summable) ⇒ mean-square error → 0:
      Σ_{n=−∞}^{∞} |x[n]|² < ∞   (mean-square convergence, O&S, p.51)
  Remark: “Absolutely summable” is a stronger requirement.
  Gibbs phenomenon (square-summable but not absolutely summable) (Fig. 2.21, O&S, p.52):
  Finite-term approximation of an ideal low-pass filter: the mean-square approximation error decreases as the number of terms (N) increases. However, the maximum error does not decrease until N → ∞.
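The Gibbs behavior can be seen numerically by truncating the ideal low-pass impulse response and evaluating the resulting frequency response; a sketch (cutoff and truncation lengths are illustrative):

```python
import numpy as np

wc = 0.5 * np.pi                       # illustrative cutoff of the ideal low-pass filter
w = np.linspace(-np.pi, np.pi, 4001)   # dense frequency grid

def truncated_response(N):
    # Keep only the terms |n| <= N of h[n] = sin(wc*n)/(pi*n), with h[0] = wc/pi.
    n = np.arange(-N, N + 1)
    h = np.where(n == 0, wc / np.pi, np.sin(wc * n) / (np.pi * np.where(n == 0, 1, n)))
    return np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])

for N in (8, 32, 128):
    print(N, np.abs(truncated_response(N)).max())
# The mean-square error shrinks as N grows, but the peak stays near 1.09 (roughly 9% overshoot).
```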

• DTFT of Special Functions
  -- Impulse
     δ[n] ↔ 1
     δ[n − n0] ↔ e^{−jωn0}
  -- Constant
     1 ↔ Σ_{r=−∞}^{∞} 2π δ(ω + 2πr); a periodic impulse train.
     Note: δ(·) here is the analog impulse (delta) function.
  -- Complex exponential
     e^{jω0 n} ↔ Σ_{r=−∞}^{∞} 2π δ(ω − ω0 − 2πr)
  -- Cosine sequence
     cos(ω0 n + θ) ↔ Σ_{k=−∞}^{∞} π [e^{jθ} δ(ω − ω0 + 2πk) + e^{−jθ} δ(ω + ω0 + 2πk)]
  -- Unit step
     u[n] ↔ 1 / (1 − e^{−jω}) + π Σ_{r=−∞}^{∞} δ(ω + 2πr)

— Symmetry Properties of the Fourier Transform
  Any (complex) x[n] can be decomposed into
  x[n] = xe[n] + xo[n],
  where
  Conjugate-symmetric part:     xe[n] = (x[n] + x*[−n]) / 2
  Conjugate-antisymmetric part: xo[n] = (x[n] − x*[−n]) / 2
  Remark: x[n] is conjugate-symmetric if x[n] = x*[−n];
          x[n] is conjugate-antisymmetric if x[n] = −x*[−n].
  On the other hand, X(e^{jω}) = Re[X(e^{jω})] + j Im[X(e^{jω})].
  Key 1: xe[n] ↔ Re[X(e^{jω})],  xo[n] ↔ j Im[X(e^{jω})]
  Similarly, X(e^{jω}) can be decomposed into
  X(e^{jω}) = Xe(e^{jω}) + Xo(e^{jω}),
  where Xe(e^{jω}) is the conjugate-symmetric part and Xo(e^{jω}) is the conjugate-antisymmetric part.
  Key 2: Re[x[n]] ↔ Xe(e^{jω}),  j Im[x[n]] ↔ Xo(e^{jω})
  Special case 1: If x[n] is real, X(e^{jω}) is conjugate-symmetric (magnitude: even, phase: odd).
  Special case 2: If x[n] is conjugate-symmetric, X(e^{jω}) is real.
  (O&S, Table 2.1, p.56)

—Fourier Transform Theorems


  -- Linearity
     If x[n] ↔ X(e^{jω}) and y[n] ↔ Y(e^{jω}),
     then a x[n] + b y[n] ↔ a X(e^{jω}) + b Y(e^{jω}).
  -- Time Shift
     If x[n] ↔ X(e^{jω}),
     then x[n − nd] ↔ e^{−jωnd} X(e^{jω}).
  -- Frequency Modulation
     If x[n] ↔ X(e^{jω}),
     then e^{jω0 n} x[n] ↔ X(e^{j(ω − ω0)}).
  -- Time Reversal
     If x[n] ↔ X(e^{jω}),
     then x[−n] ↔ X(e^{−jω}).
  -- Complex Conjugation
     If x[n] ↔ X(e^{jω}),
     then x*[n] ↔ X*(e^{−jω}).
  -- Differentiation in Frequency
     If x[n] ↔ X(e^{jω}),
     then n x[n] ↔ j dX(e^{jω})/dω.
  -- Convolution
     If x[n] ↔ X(e^{jω}) and h[n] ↔ H(e^{jω}),
     then x[n] ∗ h[n] ↔ X(e^{jω}) H(e^{jω}).
  -- Multiplication
     If x[n] ↔ X(e^{jω}) and w[n] ↔ W(e^{jω}),
     then x[n] w[n] ↔ (1/2π) ∫_{−π}^{π} X(e^{jθ}) W(e^{j(ω − θ)}) dθ.

  -- Parseval’s Theorem
     If x[n] ↔ X(e^{jω}),
     then E = Σ_{n=−∞}^{∞} |x[n]|² = (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω.
  -- Signal Energy:
     E = lim_{M→∞} Σ_{n=−M}^{M} |x[n]|², if the limit exists.
  -- Signal Power:
     P = lim_{M→∞} (1/(2M+1)) Σ_{n=−M}^{M} |x[n]|², if the limit exists.
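Parseval’s theorem can be checked numerically for a short sequence by approximating the frequency-domain integral on a dense grid; a minimal sketch (the sample values are arbitrary):

```python
import numpy as np

x = np.array([1.0, -0.5, 0.25, 2.0])       # arbitrary finite-length sequence
E_time = np.sum(np.abs(x) ** 2)            # sum_n |x[n]|^2

w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)   # one period of X(e^{jw})
n = np.arange(len(x))
X = np.array([np.sum(x * np.exp(-1j * wi * n)) for wi in w])
E_freq = np.mean(np.abs(X) ** 2)           # (1/2pi) * integral of |X|^2 over one period

print(E_time, E_freq)                      # both give 5.3125 (up to floating-point error)
```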

Example 2.25

—Discrete-Time Random Signals
• Input signal x[n]: a real-valued, wide-sense stationary (WSS) discrete-time random process.
  System h[n]: a stable LTI system with real impulse response.
  Output y[n]: y[n] = Σ_{k=−∞}^{∞} h[n − k] x[k] = Σ_{k=−∞}^{∞} h[k] x[n − k]
• Means of input and output: mx = E[x[n]], my = E[y[n]].
  my = E[y[n]] = Σ_{k=−∞}^{∞} h[k] E[x[n − k]] = mx Σ_{k=−∞}^{∞} h[k] = mx H(e^{j0})

• Autocorrelation function of the output process:
  φyy[n, n + m] = E{y[n] y[n + m]}
               = E{ Σ_{k=−∞}^{∞} Σ_{r=−∞}^{∞} h[k] h[r] x[n − k] x[n + m − r] }
               = Σ_{k=−∞}^{∞} h[k] Σ_{r=−∞}^{∞} h[r] E{x[n − k] x[n + m − r]}
               = Σ_{k=−∞}^{∞} h[k] Σ_{r=−∞}^{∞} h[r] φxx[m + k − r]
               = φyy[m]   (it depends only on the lag m, so y[n] is also WSS)
  Let l = r − k:
  φyy[m] = Σ_{l=−∞}^{∞} φxx[m − l] Σ_{k=−∞}^{∞} h[k] h[l + k]
         = Σ_{l=−∞}^{∞} φxx[m − l] chh[l],                        (2.192)
  where we have defined chh[l] = Σ_{k=−∞}^{∞} h[k] h[l + k].

  - chh[l] is called a deterministic autocorrelation sequence or, simply, the autocorrelation sequence of h[n].
  - Let Φxx(e^{jω}), Φyy(e^{jω}) and Chh(e^{jω}) denote the Fourier transforms of φxx[m], φyy[m] and chh[l], respectively.
  - From (2.192), Φyy(e^{jω}) = Chh(e^{jω}) Φxx(e^{jω}), and
    Chh(e^{jω}) = H(e^{jω}) H*(e^{jω}) = |H(e^{jω})|².
  - Φyy(e^{jω}) = |H(e^{jω})|² Φxx(e^{jω})  ---- power density spectrum.

• E[y²[n]] = φyy[0] = (1/2π) ∫_{−π}^{π} Φyy(e^{jω}) dω = total average power in the output
  E[y²[n]] = φyy[0] = (1/2π) ∫_{−π}^{π} |H(e^{jω})|² Φxx(e^{jω}) dω
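A simulation sketch of this output-power relation for a white WSS input (the time average over one long realization stands in for the ensemble average, and the FIR h[n] is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.5, 1.0, 0.5])                      # illustrative real, stable FIR impulse response
sigma_x2 = 1.0
x = rng.normal(0.0, np.sqrt(sigma_x2), 200_000)    # white WSS input: Phi_xx(e^{jw}) = sigma_x^2

y = np.convolve(x, h, mode="valid")
print(np.mean(y ** 2))                # estimate of phi_yy[0] from one long realization
print(sigma_x2 * np.sum(h ** 2))      # (1/2pi) * int |H|^2 Phi_xx dw = sigma_x^2 * sum_k h[k]^2 = 1.5
```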

  - Suppose that H(e^{jω}) is an ideal bandpass filter with passband ωa ≤ |ω| ≤ ωb.
  - φxx[m] is an even function, so Φxx(e^{jω}) = Φxx(e^{−jω}).
    φyy[0] = (1/2π) ∫_{ωa}^{ωb} Φxx(e^{jω}) dω + (1/2π) ∫_{−ωb}^{−ωa} Φxx(e^{jω}) dω
           = average power in the output
  - Since lim_{(ωb − ωa)→0} φyy[0] ≥ 0, it follows that Φxx(e^{jω}) ≥ 0.

• Cross-correlation between input and output:
  φxy[m] = E[x[n] y[n + m]] = E[ x[n] Σ_{k=−∞}^{∞} h[k] x[n + m − k] ]
         = Σ_{k=−∞}^{∞} h[k] φxx[m − k]
  - If φxx[m] = σx² δ[m] (white input), then φxy[m] = σx² h[m].
    1. Φxx(e^{jω}) = σx², −π ≤ ω ≤ π.
    2. Φxy(e^{jω}) = σx² H(e^{jω}).
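This is the basis of a simple system-identification idea: drive the unknown system with white noise and estimate the cross-correlation. A sketch (the "unknown" h[n] is made up for illustration, and a time average replaces the expectation):

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.4, -0.2, 0.1])        # the "unknown" impulse response (illustrative)
x = rng.normal(0.0, 1.0, 500_000)          # white input with sigma_x^2 = 1
y = np.convolve(x, h)[:len(x)]             # system output

# phi_xy[m] = E[x[n] y[n+m]], estimated by a time average; should be ~ sigma_x^2 * h[m]
for m in range(len(h)):
    print(m, np.mean(x[:len(x) - m] * y[m:]))   # close to 1.0, 0.4, -0.2, 0.1
```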

