
Brief Introduction to Advanced Signal Processing
1. Advanced Transforms
2. Wold Representation and Linear Predictive Coding
3. Wiener Filters
4. Adaptive Filters
5. Power Spectrum Estimation
6. Direction Estimation
Short Time Fourier Transform (STFT)

https://www.cse.unr.edu/~bebis/CS474/Lectures/ShortTimeFourierTransform.ppt
Fourier Transform

The Fourier Transform reveals which frequency components are present in a function:

f(t) = ∫ F(u) e^{j2πut} du    (inverse FT)

where F(u) = ∫ f(t) e^{−j2πut} dt    (forward FT)


Examples

f1(t) = cos(2π · 5t)

f2(t) = cos(2π · 25t)

f3(t) = cos(2π · 50t)
Examples (contd)

[Figure: spectra F1(u), F2(u), F3(u), each showing a single peak at 5 Hz, 25 Hz and 50 Hz respectively.]
Fourier Analysis Examples (contd)

f4(t) = cos(2π · 5t) + cos(2π · 25t) + cos(2π · 50t)

[Figure: spectrum F4(u), with peaks at 5, 25 and 50 Hz.]
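As a small numerical illustration (my own sketch, not part of the original slides; the 200 Hz sampling rate and 1 s duration are arbitrary choices), the spectrum of f4(t) can be computed with NumPy and shows exactly the three expected peaks:

    import numpy as np

    fs = 200                              # sampling rate in Hz (assumed; comfortably above 2 x 50 Hz)
    t = np.arange(0, 1.0, 1.0 / fs)       # 1 second of samples
    f4 = (np.cos(2 * np.pi * 5 * t)
          + np.cos(2 * np.pi * 25 * t)
          + np.cos(2 * np.pi * 50 * t))

    F4 = np.fft.rfft(f4)                  # one-sided spectrum
    freqs = np.fft.rfftfreq(len(f4), d=1.0 / fs)

    # The three largest-magnitude bins fall at 5, 25 and 50 Hz.
    print(sorted(freqs[np.argsort(np.abs(F4))[-3:]]))   # -> [5.0, 25.0, 50.0]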
Limitations of Fourier Transform

1. Cannot provide simultaneous time and frequency localization.
Fourier Analysis Examples (contd)

f4(t) = cos(2π · 5t) + cos(2π · 25t) + cos(2π · 50t)

[Figure: spectrum F4(u).]

Provides excellent localization in the frequency domain but poor localization in the time domain.
Limitations of Fourier Transform (contd)

1. Cannot provide simultaneous time and frequency localization.

2. Not very useful for analyzing time-variant, non-stationary signals.
Stationary vs non-stationary signals
Stationary signals: time-invariant spectra, e.g. f4(t)

Non-stationary signals: time-varying spectra, e.g. f5(t)
Stationary vs non-stationary signals (contd)
Stationary signal: f4(t)

Three frequency components, present at all times!

[Figure: F4(u)]
Stationary vs non-stationary signals (contd)

Non-stationary signal: f5(t)

Three frequency components, NOT present at all times!

[Figure: F5(u)]
Stationary vs non-stationary signals (contd)

Non-stationary signal: f5(t)

Perfect knowledge of what frequencies exist, but no information about where these frequencies are located in time!

[Figure: F5(u)]
Short Time Fourier Transform (STFT)
Segment the signal into narrow time intervals (i.e., narrow
enough to be considered stationary) and take the FT of each
segment.
Each FT provides the spectral information of a separate
time-slice of the signal, providing simultaneous time and
frequency information.
STFT - Steps
(1) Choose a window function of finite length
(2) Place the window on top of the signal at t=0
(3) Truncate the signal using this window
(4) Compute the FT of the truncated signal, save results.
(5) Incrementally slide the window to the right
(6) Go to step 3, until window reaches the end of the signal
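A minimal sketch of these steps in NumPy (not from the original slides; the Gaussian window shape, window length and hop size are arbitrary choices):

    import numpy as np

    def stft(f, fs, win_len=64, hop=16):
        """Naive STFT: slide a window over f and take the FT of each truncated segment."""
        n = np.arange(win_len)
        window = np.exp(-0.5 * ((n - win_len / 2) / (win_len / 8)) ** 2)   # step (1): Gaussian window
        frames = []
        for start in range(0, len(f) - win_len + 1, hop):                  # steps (2), (5), (6)
            segment = f[start:start + win_len] * window                    # step (3): truncate with window
            frames.append(np.fft.rfft(segment))                            # step (4): FT of the segment
        times = (np.arange(len(frames)) * hop + win_len / 2) / fs          # window centres in seconds
        freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
        return np.array(frames).T, times, freqs                            # rows: frequency, columns: time

Each column of the returned array is the spectrum of one time-slice, which is exactly the simultaneous time-frequency picture described above.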
STFT - Definition

STFT_f(τ, u) = ∫_t [ f(t) · W(t − τ) ] e^{−j2πut} dt

where τ is the time parameter, u is the frequency parameter, f(t) is the signal to be analyzed, and W(t − τ) is the windowing function centered at t = τ.

The STFT of f(t) is a 2D function, computed for each window position t = τ.
Example

f(t):
[0, 300] ms: 75 Hz sinusoid
[300, 600] ms: 50 Hz sinusoid
[600, 800] ms: 25 Hz sinusoid
[800, 1000] ms: 10 Hz sinusoid
Example
[Figure: STFT_f(τ, u) of the above f(t), computed with the window W(t); time axis scaled: t/20.]
Choosing Window W(t)

What shape should it have?


Rectangular, Gaussian, Elliptic

How wide should it be?


Window should be narrow enough to ensure that the portion
of the signal falling within the window is stationary.
But very narrow windows do not offer good localization
in the frequency domain.
STFT Window Size
STFT_f(τ, u) = ∫_t [ f(t) · W(t − τ) ] e^{−j2πut} dt

W(t) infinitely long: W(t) = 1 — the STFT turns into the FT, providing excellent frequency localization but no time localization.

W(t) infinitely short: W(t) = δ(t) — the STFT reduces to the time signal (with a phase factor), providing excellent time localization but no frequency localization:

STFT_f(τ, u) = ∫_t [ f(t) · δ(t − τ) ] e^{−j2πut} dt = f(τ) e^{−j2πuτ}
STFT Window Size (contd)

Wide window: good frequency resolution, poor time resolution.

Narrow window: good time resolution, poor frequency resolution.

Wavelets: use multiple window sizes.
Example
different size windows

(four frequencies, non-stationary)


Example (contd)

STFT fu (t
, u)

STFT fu (t
, u)

scaled: t/20
Example (contd)

[Figure: STFT_f(τ, u) computed with two further window sizes; time axis scaled: t/20.]
Heisenberg (or Uncertainty) Principle

Δt · Δf ≥ 1 / (4π)

Time resolution: how well two spikes in time can be separated from each other in the frequency domain.
Frequency resolution: how well two spectral components can be separated from each other in the time domain.

Δt and Δf cannot be made arbitrarily small!
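A small numerical check (my own addition, not on the slide; the window parameters are arbitrary): with Δt and Δf measured as RMS widths of |w(t)|² and |W(f)|², a Gaussian window attains the bound.

    import numpy as np

    dt = 1e-3
    t = np.arange(-5.0, 5.0, dt)
    sigma = 0.5
    w = np.exp(-t**2 / (2 * sigma**2))            # Gaussian window

    p_t = np.abs(w)**2 / np.sum(np.abs(w)**2)     # normalized energy density in time
    delta_t = np.sqrt(np.sum(p_t * t**2))

    W = np.fft.fft(w)
    f = np.fft.fftfreq(len(w), d=dt)
    p_f = np.abs(W)**2 / np.sum(np.abs(W)**2)     # normalized energy density in frequency
    delta_f = np.sqrt(np.sum(p_f * f**2))

    print(delta_t * delta_f, 1 / (4 * np.pi))     # both approximately 0.0796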


Heisenberg (or Uncertainty) Principle

We cannot know the exact time-frequency


representation of a signal.
We can only know what interval of frequencies are
present in which time intervals.
Wold Representation and its Applications

The speech samples constitute a random process


Random Processes:
Strictly stationary process
Wide-sense stationary process (the Wiener-Khinchine theorem relates its autocorrelation to its power spectrum)
Wold representation

The sequence v(m) is called the cepstrum of x(n).

This Laurent series expansion is possible only for those functions whose region of analyticity (where derivatives of all orders exist) includes the unit circle of the z-plane.

The causal part of the sequence v(m) defines H(z) and the non-causal part defines H(z⁻¹); a Taylor series expansion is applicable in this case.

These Fourier series coefficients v(m) are known as the cepstral coefficients, and the sequence v(m) is called the cepstrum of x(n).
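As an illustrative sketch (my own, not from the slides): the real cepstrum, the inverse FT of the log-magnitude spectrum, can be computed directly; the FFT size and the synthetic test signal below are arbitrary choices.

    import numpy as np

    def real_cepstrum(x, nfft=512):
        """Real cepstrum: inverse FT of the log-magnitude spectrum; its coefficients
        play the role of the cepstral coefficients v(m) above (magnitude part only)."""
        X = np.fft.fft(x, nfft)
        log_mag = np.log(np.abs(X) + 1e-12)       # small floor avoids log(0)
        return np.real(np.fft.ifft(log_mag))

    # Example: cepstrum of a short windowed segment of a 100 Hz tone at fs = 8 kHz
    fs = 8000
    n = np.arange(400)
    x = np.sin(2 * np.pi * 100 * n / fs) * np.hamming(400)
    v = real_cepstrum(x)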
Autoregressive [AR(p)]: all-pole process

Moving Average [MA(q)]: all-zero process

ARMA: pole-zero process (system)

For the Autoregressive Moving Average (ARMA) process (system), as well as for the MA system, the equations are non-linear because of the b_k·b_{k+m} term.

For the Autoregressive (AR) system the equations are linear; they are called the Yule-Walker equations and can be represented in matrix form (a small sketch of their solution follows below).

The optimum value of p is desired to satisfy the contradicting requirements of estimation error and computational complexity.
Finding the coefficients of a pth-order system involves the autocorrelation of x(n) over lags −p to p and the variance of the innovations process w(n) produced by 1/H(z).
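A sketch of estimating AR(p) coefficients from the Yule-Walker equations (my own illustration; the sign convention x(n) = a1·x(n−1) + ... + ap·x(n−p) + w(n) and the biased autocorrelation estimate are assumptions):

    import numpy as np

    def yule_walker(x, p):
        """Solve R a = r for the AR(p) coefficients and the innovation variance."""
        x = np.asarray(x, dtype=float) - np.mean(x)
        N = len(x)
        r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(p + 1)])   # r(0)..r(p)
        R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz autocorrelation matrix
        a = np.linalg.solve(R, r[1:])                                        # Yule-Walker solution
        sigma2 = r[0] - np.dot(a, r[1:])                                     # variance of the innovations w(n)
        return a, sigma2

    # Example: recover the coefficients of x(n) = 0.75 x(n-1) - 0.5 x(n-2) + w(n)
    rng = np.random.default_rng(0)
    w = rng.standard_normal(20000)
    x = np.zeros_like(w)
    for n in range(2, len(w)):
        x[n] = 0.75 * x[n - 1] - 0.5 * x[n - 2] + w[n]
    print(yule_walker(x, p=2))    # roughly ([0.75, -0.5], 1.0)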
Linear Predictive Coders (LPC)
(An application of Wold representation and adaptive filters)

[Block diagram at the Rx: the received parameters drive a vocal tract filter excited either by a noise source (for unvoiced sounds) or by a periodic pulse source (for voiced sounds).]

Voiced sounds: periodic air pulses pass through the vibrating vocal cords (the frequency of the periodic pulse train should equal the pitch).
Unvoiced sounds: air is forced through a constriction in the vocal tract, producing turbulence.

Two methods for the voiced/unvoiced decision (a small sketch follows below):
(i) Zero crossings: unvoiced signals have a much higher zero-crossing rate than voiced signals.
(ii) Power: an unvoiced signal has very low power compared to a voiced one.
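A minimal sketch of the two decision rules (my own illustration; the thresholds are placeholders that would have to be tuned on real speech frames):

    import numpy as np

    def voiced_unvoiced(frame, zcr_threshold=0.25, power_threshold=1e-3):
        """Classify a speech frame from its zero-crossing rate and power."""
        frame = np.asarray(frame, dtype=float)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0   # zero crossings per sample
        power = np.mean(frame ** 2)
        # Unvoiced frames: high zero-crossing rate and low power; voiced frames: the opposite.
        if zcr > zcr_threshold and power < power_threshold:
            return "unvoiced"
        return "voiced"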
Variants of LPC:

Single-pulse excited (a single pulse excitation is used at the Rx; produces audible distortion)

Multipulse excited (8 pulses per period are used; the position and amplitude of each pulse are adjusted to reduce the distortion)

Code excited (a codebook of Gaussian excitation signals is maintained by both Tx and Rx; besides the other parameters, the transmitter transmits the index of the codebook entry where it finds the best match)

Residual excited (single-pulse LPC is used for decoding at the Tx itself and a synthesized voice is generated; the difference (residue) of the above two signals is coded and sent with the original LPC codes)
Wiener Filter (for stationary signals)

Depending upon the reference signal s(n), the Wiener filter can perform 3 operations, viz.:
s(n) = d(n): filtering
s(n) = d(n+D): prediction
s(n) = d(n−D): smoothing

This set of M linear equations is called the Wiener-Hopf equations, or the normal equations of the optimum filter, i.e. the Wiener filter.
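A sketch of an M-tap FIR Wiener filter obtained by solving the normal equations with sample correlation estimates (the estimators and the filtering example below are my own assumptions, not from the slides):

    import numpy as np

    def wiener_fir(x, s, M):
        """Solve the Wiener-Hopf equations R h = p for an M-tap FIR filter,
        where R is the autocorrelation matrix of x(n) and p the cross-correlation with s(n)."""
        x = np.asarray(x, dtype=float)
        s = np.asarray(s, dtype=float)
        N = len(x)
        r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(M)])
        p = np.array([np.dot(s[k:], x[:N - k]) / N for k in range(M)])   # E[s(n) x(n-k)]
        R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
        return np.linalg.solve(R, p)

    # Filtering example, s(n) = d(n): estimate a clean sinusoid d(n) from x(n) = d(n) + noise
    rng = np.random.default_rng(1)
    d = np.sin(2 * np.pi * 0.05 * np.arange(5000))
    x = d + 0.5 * rng.standard_normal(5000)
    h = wiener_fir(x, d, M=16)
    d_hat = np.convolve(x, h)[:5000]    # Wiener estimate of d(n)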
Example Problem

H(f) = S(f) / V(f)

Recall that F[a^|m|] = (1 − a²) / (1 + a² − 2a·cos(2πf))

Using F[a^|τ|] = (1 − a²) / (1 + a² − 2a·cos(2πf))
Adaptive Filter (for non-stationary signals)
Applications
Adaptive antenna system
Digital Comm. Receivers
Adaptive Noise Cancellation
Modeling of unknown system

The Transversal Adaptive Filter (linear): it is basically a Wiener filter with an additional facility for weight updating, and it operates in training and tracking phases.
As per the LMS algorithm:
New [W] = previous [W] + (constant) × (previous error = last desired O/P − previous actual O/P) × (current input vector)
The constant (step size) is adjusted to minimize the alteration between successive iterations.
Types of Adaptive Filters (Equalizers), for training and tracking

The time span over which the equalizer converges (is properly trained) depends upon the equalizer type, the structure, the algorithm, and the rate of change of the channel.
Blind algorithms: no training sequence is needed; e.g. the spectral coherence restoral (SCORE) technique exploits the spectral redundancy or cyclostationarity property of the transmitted signals to acquire equalization.
http://cwww.ee.nctu.edu.tw/course/asp/ASP04.pdf

LMS Algorithm

New [W] = previous [W] + (constant) × (previous error = last desired O/P − previous actual O/P) × (current input vector)

The mean square of the error is minimized, which maximizes the signal-to-distortion ratio for a given equalizer length, but the convergence rate is slow. It won't work if the time dispersion in the channel exceeds the delay through the equalizer.
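The update rule above, written out as a sketch for a transversal adaptive filter (step size, filter length and the training-signal interface are illustrative assumptions):

    import numpy as np

    def lms(x, d, M=8, mu=0.01):
        """LMS: new W = previous W + mu * (error) * (current input vector)."""
        w = np.zeros(M)                      # equalizer weights
        y = np.zeros(len(x))                 # actual output
        e = np.zeros(len(x))                 # error = desired output - actual output
        for n in range(M, len(x)):
            u = x[n - M:n][::-1]             # current input vector, most recent sample first
            y[n] = np.dot(w, u)
            e[n] = d[n] - y[n]
            w = w + mu * e[n] * u            # weight update
        return w, y, e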
RLS Algorithm

RLS is fast but computationally expensive, as time averages computed with a recursive update are used instead of ensemble averages. It becomes unstable if an echo is more powerful than the present symbol.

P_M(n) is the inverse of the correlation matrix, K_M(n) is the Kalman gain vector, and e_M(n) is the error.
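For comparison, a minimal RLS sketch (my own; the forgetting factor and the initialization of the inverse correlation matrix are arbitrary choices); P and k below correspond to the P_M(n) and K_M(n) mentioned above:

    import numpy as np

    def rls(x, d, M=8, lam=0.99, delta=100.0):
        """RLS: recursively update the inverse correlation matrix P(n) and gain vector k(n)."""
        w = np.zeros(M)
        P = delta * np.eye(M)                        # initial inverse correlation matrix
        for n in range(M, len(x)):
            u = x[n - M:n][::-1]
            k = P @ u / (lam + u @ P @ u)            # gain (Kalman-gain) vector
            e = d[n] - w @ u                         # a priori error
            w = w + k * e
            P = (P - np.outer(k, u @ P)) / lam       # recursive update, replaces ensemble averaging
        return w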
Decision Feedback Adaptive Filter (nonlinear)

[Block diagram: the forward filter is a tapped delay line on the input samples r(t+nT), r(t+[n−1]T), ..., r(t) with coefficients C_n, C_{n−1}, ..., C_0; the feedback filter is a tapped delay line on the previously decided symbols X̂_{k−1}, ..., X̂_{k−m} with coefficients b_1, ..., b_m; their outputs are combined, the decision device produces the output Z_k, and the error e_k is formed against the training sequence.]

Using the last decided symbols, the ISI affecting the current symbol is estimated by the feedback filter.

The forward filter estimates the current symbol from the ISI-affected received symbol.

The error signal is the difference between the two estimates above; it is used for tuning the coefficients of both the forward and the feedback filters. Their joint estimation may also be done.

Performance is better than that of linear filters, but an error made in deciding a symbol propagates to the next few symbols.
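A minimal decision-feedback equalizer sketch with joint LMS adaptation of both filters (BPSK decisions, filter lengths and step size are assumptions for illustration only):

    import numpy as np

    def dfe(r, training, Nf=6, Nb=3, mu=0.01):
        """Forward FIR filter on received samples plus feedback FIR filter on past decisions."""
        c = np.zeros(Nf)                     # forward filter taps
        b = np.zeros(Nb)                     # feedback filter taps
        past = np.zeros(Nb)                  # previously decided symbols
        out = np.zeros(len(r))
        for n in range(Nf, len(r)):
            u = r[n - Nf:n][::-1]
            z = c @ u - b @ past             # forward estimate minus estimated ISI
            decided = training[n] if n < len(training) else np.sign(z)   # training, then decision-directed
            e = decided - z                  # error signal
            c = c + mu * e * u               # tune forward filter
            b = b - mu * e * past            # tune feedback filter
            past = np.concatenate(([decided], past[:-1]))
            out[n] = z
        return out, c, b

As noted above, a wrong decision enters the feedback register and corrupts the feedback path for the next Nb symbols.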
Zero Forcing Equalizer: the zero-forcing equalizer reproduces the inverse of the channel transfer function; to compensate for strong attenuation in certain frequency bands, it is compelled to generate strong gains in those same bands. This means that not only the signal but also any noise present in those frequency bands is amplified. The minimum mean square error (MMSE) equalizer avoids this problem of noise amplification.

Maximum Likelihood Sequence Estimator (MLSE): it is known to be the optimum receiver, but its complexity increases exponentially with the channel memory length (i.e. higher ISI). The Decision Feedback Equalizer (DFE) is a good suboptimal filter which can be easily implemented using simple FIR filters.
www.nxp.com/files/dsp/doc/app_note/AN2072.pdf
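A small numerical illustration of the noise-amplification point (my own sketch; the two-tap channel and the 20 dB SNR are arbitrary, and the MMSE expression used is the standard linear-equalizer formula C(f) = H*(f) / (|H(f)|² + 1/SNR)):

    import numpy as np

    h = np.array([1.0, 0.9])                 # channel with a deep spectral notch near f = 0.5
    H = np.fft.fft(h, 256)
    snr = 100.0                              # assumed signal-to-noise power ratio (20 dB)

    C_zf = 1.0 / H                                      # zero forcing: exact channel inverse
    C_mmse = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)    # MMSE linear equalizer

    # Where |H(f)| is small, |C_zf| blows up (noise amplification) while |C_mmse| stays bounded.
    print(np.abs(C_zf).max(), np.abs(C_mmse).max())     # ~10 vs ~5 for this channel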
