
Instructor:

Dr. Gleb V. Tcheslavski

Contact:

gleb@ee.lamar.edu

Office Hours:

Room 2030

Class web site:

http://www.ee.lamar.edu/gleb/adsp/Index.htm

ELEN 5301 Adv. DSP and Modeling Summer II 2008

The problem of estimating (or extracting) one signal from another arises quite often. In many applications, the desired signal (speech, a radar signal, an EEG, an image, etc.) is not available or observed directly. Instead, the observed signal may be noisy or otherwise degraded. In some simple situations, it may be possible to design a classical filter (LPF, HPF, BPF) to resolve the desired signal from the data. However, these filters are rarely optimum in the sense of producing the best estimate of the signal. Therefore, optimum digital filters, including Wiener and Kalman filters, are of interest.

The discrete Wiener filter is designed to recover a desired signal d_n from noisy observations

x_n = d_n + v_n    (6.2.1)

Assuming that both d_n and x_n are wss random processes, Wiener considered the problem of designing the filter W(z) that produces the minimum mean-square (MMS) error estimate of d_n.


7/20/2008

Therefore:

\xi = E\{|e_n|^2\}    (6.3.1)

Thus, the problem is to find a filter that minimizes ξ. We begin by considering the general problem of Wiener filtering, where an LTI filter W(z) minimizing the mean-square error (6.3.1) needs to be designed. Depending upon the relationship between x_n and d_n, a number of different problems may be solved with Wiener filters. Some of them are:

1. Filtering: given x_n = d_n + v_n, estimate d_n with a causal filter, i.e., from the current and past values of x_n;
2. Smoothing: the same as filtering, except that the filter is allowed to be noncausal;
3. Prediction: if d_n = x_{n+1} and W(z) is a causal filter, the Wiener filter becomes a linear predictor: the filter produces a prediction (estimate) of the future value of the signal in terms of a linear combination of its previous values;
4. Deconvolution: when x_n = d_n * g_n + v_n, with g_n the unit-pulse response of an LTI filter, the Wiener filter becomes a deconvolution filter.

We need to design an FIR Wiener filter producing the MMS error estimate of a desired process d_n by filtering a set of observations of a statistically related process x_n.

Assuming that x_n and d_n are jointly wss with known autocorrelations r_{x,k} and r_{d,k} and known cross-correlation r_{dx,k}, and denoting the unit-pulse response of the Wiener filter by w_n, a filter of order p-1 has the transfer function

W(z) = \sum_{n=0}^{p-1} w_n z^{-n}    (6.4.1)

Therefore, for the input x_n, the filter output will be

\hat{d}_n = \sum_{l=0}^{p-1} w_l x_{n-l}    (6.4.2)

To minimize the mean-square error (which does not depend on n)

\xi = E\{|e_n|^2\} = E\{|d_n - \hat{d}_n|^2\}    (6.4.3)

we set the derivative with respect to the complex conjugate of each coefficient to zero:

\frac{\partial \xi}{\partial w_k^*} = \frac{\partial}{\partial w_k^*} E\{e_n e_n^*\} = E\left\{e_n \frac{\partial e_n^*}{\partial w_k^*}\right\} = 0, \quad k = 0, 1, \ldots, p-1    (6.5.1)

With

e_n = d_n - \sum_{l=0}^{p-1} w_l x_{n-l}    (6.5.2)

it follows that

\frac{\partial e_n^*}{\partial w_k^*} = -x_{n-k}^*    (6.5.3)

and, therefore:

E\{e_n x_{n-k}^*\} = 0, \quad k = 0, 1, \ldots, p-1    (6.5.4)

E\{d_n x_{n-k}^*\} - \sum_{l=0}^{p-1} w_l E\{x_{n-l} x_{n-k}^*\} = 0, \quad k = 0, 1, \ldots, p-1    (6.6.1)

Since

E\{x_{n-l} x_{n-k}^*\} = r_x(k-l)    (6.6.2)

(6.6.1) becomes

\sum_{l=0}^{p-1} w_l r_x(k-l) = r_{dx}(k), \quad k = 0, 1, \ldots, p-1    (6.6.4)

These are the Wiener-Hopf equations.


In matrix form,

\begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p-1) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_x(p-1) & r_x(p-2) & \cdots & r_x(0) \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{p-1} \end{bmatrix} = \begin{bmatrix} r_{dx}(0) \\ r_{dx}(1) \\ \vdots \\ r_{dx}(p-1) \end{bmatrix}    (6.7.2)

R_x w = r_{dx}    (6.7.3)

where R_x is the autocorrelation matrix of x_n, w is the vector of filter coefficients, and r_{dx} is the vector of cross-correlations between d_n and x_n.


The minimum error follows by expanding ξ:

\xi = E\{|e_n|^2\} = E\left\{e_n \left[d_n - \sum_{l=0}^{p-1} w_l x_{n-l}\right]^*\right\} = E\{e_n d_n^*\} - \sum_{l=0}^{p-1} w_l^* E\{e_n x_{n-l}^*\}    (6.8.1)

and since w_k is the solution to the Wiener-Hopf equations, E\{e_n x_{n-l}^*\} = 0, so that

\xi_{\min} = E\{e_n d_n^*\} = E\left\{\left(d_n - \sum_{l=0}^{p-1} w_l x_{n-l}\right) d_n^*\right\}    (6.8.3)

Evaluating the expected values,

\xi_{\min} = r_d(0) - \sum_{l=0}^{p-1} w_l r_{dx}^*(l)    (6.8.4)

Finally, the filter coefficients can be computed as

w = R_x^{-1} r_{dx}    (6.9.2)
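As a numerical sketch, the Wiener-Hopf system (6.7.3)/(6.9.2) can be solved directly. A minimal Python example, assuming the AR(1)-plus-white-noise correlations used in the filtering example that follows (α = 0.8, σ_v² = 1); the helper name `solve_2x2` is ours, not from the lecture:

```python
# Solve the Wiener-Hopf equations R_x w = r_dx for a small FIR filter.
# Illustrative 2x2 case with r_x(k) = alpha^|k| + sigma_v^2 * delta(k)
# and r_dx(k) = alpha^|k| (AR(1) signal in unit-variance white noise).

def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] [w0, w1]^T = [e, f]^T by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

alpha, sigma_v2 = 0.8, 1.0
r_x0 = 1.0 + sigma_v2        # r_x(0) = r_d(0) + sigma_v^2
r_x1 = alpha                 # r_x(1) = r_d(1)
r_dx0, r_dx1 = 1.0, alpha    # r_dx(k) = r_d(k)

w0, w1 = solve_2x2(r_x0, r_x1, r_x1, r_x0, r_dx0, r_dx1)
print(w0, w1)  # 1.36/3.36 and 0.8/3.36, matching (6.12.5)
```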

Consider an observation corrupted by noise:

x_n = d_n + v_n    (6.10.1)

Assume that the noise is a zero-mean process uncorrelated with d_n. Then

E\{d_n v_{n-k}^*\} = 0    (6.10.2)

Since

r_x(k) = E\{x_{n+k} x_n^*\} = E\{[d_{n+k} + v_{n+k}][d_n + v_n]^*\}    (6.10.4)

the autocorrelation of the observations is

r_x(k) = r_d(k) + r_v(k)    (6.10.5)


Therefore, with Rd the autocorrelation matrix for dn, Rv the autocorrelation matrix

for vn, and rdx = rd = [rd(0),…rd(p – 1)]T, the Wiener-Hopf equations become

[R_d + R_v] w = r_d    (6.11.1)

Further simplifications are possible when more information about the statistics of the signal is available.

Example: let d_n be an AR(1) process with an autocorrelation sequence

r_d(k) = \alpha^{|k|}    (6.11.2)

where 0 < α < 1, and the corrupting noise v_n is uncorrelated white noise with variance σ_v² and

x_n = d_n + v_n    (6.11.3)

We need to design a 1st-order FIR Wiener filter to reduce the noise:

W(z) = w_0 + w_1 z^{-1}    (6.11.4)

The Wiener-Hopf equations are

\begin{bmatrix} r_x(0) & r_x(1) \\ r_x(1) & r_x(0) \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \begin{bmatrix} r_{dx}(0) \\ r_{dx}(1) \end{bmatrix}    (6.12.1)

Since the noise is uncorrelated with the signal,

r_{dx}(k) = r_d(k) = \alpha^{|k|}    (6.12.2)

and

r_x(k) = r_d(k) + r_v(k) = \alpha^{|k|} + \sigma_v^2 \delta_k    (6.12.3)

The equations become

\begin{bmatrix} 1+\sigma_v^2 & \alpha \\ \alpha & 1+\sigma_v^2 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \begin{bmatrix} 1 \\ \alpha \end{bmatrix}    (6.12.4)

The Wiener filter coefficients are

\begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \frac{1}{(1+\sigma_v^2)^2 - \alpha^2} \begin{bmatrix} 1+\sigma_v^2-\alpha^2 \\ \alpha\sigma_v^2 \end{bmatrix}    (6.12.5)


W(z) = \frac{(1+\sigma_v^2-\alpha^2) + \alpha\sigma_v^2 z^{-1}}{(1+\sigma_v^2)^2 - \alpha^2}    (6.13.1)

For the particular case of α = 0.8 and σ_v² = 1, the Wiener filter becomes W(z) ≈ 0.405 + 0.238 z^{-1}.

[Figure: magnitude response of the Wiener filter]

The power spectrum of the desired AR(1) signal is

P_d(e^{j\omega}) = \frac{1-\alpha^2}{(1+\alpha^2) - 2\alpha\cos\omega}    (6.14.1)

which, for α = 0.8, is

P_d(e^{j\omega}) = \frac{0.36}{1.64 - 1.6\cos\omega}    (6.14.2)

The power spectrum of the desired signal decreases with frequency, while the spectrum of the noise is constant; therefore, a low-pass filter should increase the SNR.


\xi_{\min} = E\{|e_n|^2\} = r_d(0) - w_0 r_{dx}^*(0) - w_1 r_{dx}^*(1) = \sigma_v^2\, \frac{1 + \sigma_v^2 - \alpha^2}{(1 + \sigma_v^2)^2 - \alpha^2}    (6.15.1)

Prior to filtering, since r_d(0) = σ_d² = 1 and σ_v² = 1, the power in d_n equals the power in v_n, so SNR = 1 = 0 dB. After filtering, the power in the signal d'_n = w_n * d_n is

E\{|d'_n|^2\} = w^T R_d w = [w_0\ w_1] \begin{bmatrix} 1 & \alpha \\ \alpha & 1 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = 0.3748    (6.15.2)

and the power in the filtered noise v'_n = w_n * v_n (with R_v = I) is

E\{|v'_n|^2\} = w^T R_v w = [w_0\ w_1] \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = 0.2206    (6.15.3)

Therefore,

SNR = 10 \lg \frac{0.3748}{0.2206} = 2.302\ \text{dB}    (6.15.4)
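These numbers can be reproduced directly from (6.12.5); a small check (the printed values 0.3748, 0.2206, and 2.302 dB match to within rounding):

```python
import math

# Filter coefficients from (6.12.5) with alpha = 0.8, sigma_v^2 = 1
alpha = 0.8
D = (1 + 1.0) ** 2 - alpha ** 2          # = 3.36
w0 = (1 + 1.0 - alpha ** 2) / D          # = 1.36 / 3.36
w1 = alpha * 1.0 / D                     # = 0.8 / 3.36

# Signal power w^T R_d w with R_d = [[1, alpha], [alpha, 1]]  -> (6.15.2)
p_signal = w0 * w0 + 2 * alpha * w0 * w1 + w1 * w1
# Noise power w^T R_v w with R_v = I (unit-variance white noise) -> (6.15.3)
p_noise = w0 ** 2 + w1 ** 2
# Output SNR -> (6.15.4)
snr_db = 10 * math.log10(p_signal / p_noise)

print(p_signal, p_noise, snr_db)
```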


With noise-free observations, linear prediction is the problem of finding the MMS

estimate (prediction) of xn+1 using a linear combination of the current and p-1

previous values of xn.

Therefore, an FIR linear predictor of order p-1 has the form

\hat{x}_{n+1} = \sum_{k=0}^{p-1} w_k x_{n-k}    (6.16.1)

The linear predictor may be implemented by the Wiener filter by setting d_n = x_{n+1}, since

r_{dx}(k) = E\{d_n x_{n-k}^*\} = E\{x_{n+1} x_{n-k}^*\} = r_x(k+1)    (6.16.2)


The Wiener-Hopf equations become

\begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p-1) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_x(p-1) & r_x(p-2) & \cdots & r_x(0) \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{p-1} \end{bmatrix} = \begin{bmatrix} r_x(1) \\ r_x(2) \\ \vdots \\ r_x(p) \end{bmatrix}    (6.17.1)

with minimum error

\xi_{\min} = r_d(0) - \sum_{k=0}^{p-1} w_k r_x^*(k+1)    (6.17.2)

FIR Wiener filter: linear prediction (Example)

For the same AR(1) process with autocorrelation sequence

r_d(k) = \alpha^{|k|}    (6.18.1)

the first-order predictor is

\hat{x}_{n+1} = w_0 x_n + w_1 x_{n-1}    (6.18.2)

and the Wiener-Hopf equations are

\begin{bmatrix} 1 & \alpha \\ \alpha & 1 \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \begin{bmatrix} \alpha \\ \alpha^2 \end{bmatrix}    (6.18.3)

The predictor coefficients are

\begin{bmatrix} w_0 \\ w_1 \end{bmatrix} = \frac{1}{1-\alpha^2} \begin{bmatrix} 1 & -\alpha \\ -\alpha & 1 \end{bmatrix} \begin{bmatrix} \alpha \\ \alpha^2 \end{bmatrix} = \begin{bmatrix} \alpha \\ 0 \end{bmatrix}    (6.18.4)
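A quick check of (6.18.3)-(6.18.4), solving the 2×2 system by Cramer's rule for α = 0.8 (any 0 < α < 1 gives the same structure):

```python
alpha = 0.8

# Solve [[1, alpha], [alpha, 1]] [w0, w1]^T = [alpha, alpha^2]^T
det = 1 - alpha ** 2
w0 = (alpha - alpha * alpha ** 2) / det   # = alpha (1 - alpha^2) / (1 - alpha^2)
w1 = (alpha ** 2 - alpha * alpha) / det   # = 0

print(w0, w1)  # w0 = alpha (up to float rounding), w1 = 0.0
```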


Therefore, the predictor is

\hat{x}_{n+1} = \alpha x_n    (6.19.1)

with minimum error \xi_{\min} = r_x(0) - w_0 r_x(1) = 1 - \alpha^2; as α increases toward one, the samples become more correlated and the prediction error decreases. For uncorrelated samples, α = 0 and ξ_min = 1, which is equal to the variance of x_n. The optimum predictor in this case will be

\hat{x}_{n+1} = 0    (6.19.3)

Now suppose that the signal is contaminated by noise, and the linear predictor needs to estimate (predict) the signal in the presence of noise:

y_n = x_n + v_n    (6.20.1)

\hat{x}_{n+1} = \sum_{k=0}^{p-1} w_k y_{n-k} = \sum_{k=0}^{p-1} w_k (x_{n-k} + v_{n-k})    (6.20.2)

The Wiener-Hopf equations become

R_y w = r_{dy}    (6.20.3)


If the noise v_n is uncorrelated with the signal x_n, then R_y, the autocorrelation matrix for y_n, is built from

r_y(k) = E\{y_n y_{n-k}^*\} = r_x(k) + r_v(k)    (6.21.1)

Therefore, the only difference between linear prediction with and without noise is in the autocorrelation matrix for the input signal. When the noise is uncorrelated with the signal, R_x is replaced with R_y = R_x + R_v.

The one-step predictor may be generalized to the problem of multistep prediction, where x_{n+α} is predicted as a linear combination of the p values x_n, x_{n-1}, …, x_{n-p+1}:

\hat{x}_{n+\alpha} = \sum_{k=0}^{p-1} w_k x_{n-k}    (6.22.1)

The only change compared to one-step prediction is in the cross-correlation vector:


\begin{bmatrix} r_x(0) & r_x^*(1) & \cdots & r_x^*(p-1) \\ r_x(1) & r_x(0) & \cdots & r_x^*(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ r_x(p-1) & r_x(p-2) & \cdots & r_x(0) \end{bmatrix} \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_{p-1} \end{bmatrix} = \begin{bmatrix} r_x(\alpha) \\ r_x(\alpha+1) \\ \vdots \\ r_x(\alpha+p-1) \end{bmatrix}    (6.23.1)

R_x w = r_{x,\alpha}    (6.23.2)

where r_{x,α} is the autocorrelation vector beginning with r_x(α). The MMS error is

\xi_{\min} = r_x(0) - \sum_{k=0}^{p-1} w_k r_x^*(k+\alpha) = r_x(0) - r_{x,\alpha}^H w    (6.23.3)

Alternatively, multistep prediction may be implemented as a one-step predictor using a linear combination of the values of x_n over the interval from n-α-p+2 to n-α+1:

\hat{x}_{n+1} = \sum_{k=0}^{p-1} w_k x_{n-k-\alpha+1}    (6.24.1)

Assuming that the delay α is a free parameter, the prediction problem can be viewed as finding the filter coefficients AND the delay α that minimize the MS error.


The problem of noise cancellation is similar to the filtering problem, since the goal is to recover a signal degraded by noise (here, the signal is assumed to be recorded by a primary sensor). However, unlike filtering, where the noise autocorrelation is known, the noise parameters need to be estimated from a secondary sensor placed within the noise field. Although the noise measured by the secondary sensor is correlated with the noise entering the primary sensor, the two processes are not equal. Since v_{1,n} ≠ v_{2,n}, it is not possible to estimate d_n by simply subtracting v_{2,n} from x_n. Instead, the noise canceller contains a Wiener filter estimating the noise \hat{v}_{1,n} from the sequence received from the secondary sensor. This estimate is then subtracted from the primary signal to form the estimate

\hat{d}_n = x_n - \hat{v}_{1,n}    (6.25.1)


The Wiener-Hopf equations are

R_{v_2} w = r_{v_1 v_2}    (6.26.1)

where R_{v_2} is the autocorrelation matrix of v_{2,n} and r_{v_1 v_2} is the cross-correlation between the needed noise signal v_{1,n} and the Wiener filter input v_{2,n}. The cross-correlations are

r_{v_1 v_2}(k) = E\{v_{1,n} v_{2,n-k}^*\} = E\{(x_n - d_n) v_{2,n-k}^*\} = E\{x_n v_{2,n-k}^*\} - E\{d_n v_{2,n-k}^*\}    (6.26.2)

Assuming that v_{2,n} and d_n are uncorrelated, the second term vanishes, and

R_{v_2} w = r_{x v_2}    (6.26.4)

FIR Wiener filter: noise cancellation (Example)

Assume that the desired signal is a sinusoid

d_n = \sin(n\omega_0 + \phi)    (6.27.1)

and the noise sequences are

v_{1,n} = 0.8\, v_{1,n-1} + g_n
v_{2,n} = -0.6\, v_{2,n-1} + g_n    (6.27.2)

where g_n is zero-mean, unit-variance white noise uncorrelated with d_n, and

x_n = d_n + v_{1,n}    (6.27.3)


The sample (estimated) correlations are

\hat{r}_{v_2}(k) = \frac{1}{N} \sum_{n=0}^{N-1} v_{2,n} v_{2,n-k}    (6.28.1)

\hat{r}_{x v_2}(k) = \frac{1}{N} \sum_{n=0}^{N-1} x_n v_{2,n-k}    (6.28.2)

Therefore, the use of an LTI Wiener filter is not optimum. However, an adaptive Wiener filter may provide effective noise cancellation in nonstationary environments.
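The noise canceller (6.25.1)-(6.28.2) can be sketched end to end. A Python simulation under stated assumptions: ω₀, the filter order p = 6, the sample size N, and the generic Gaussian-elimination helper are our illustrative choices, not fixed by the lecture:

```python
# Wiener noise canceller sketch: estimate v1 from the secondary sensor v2
# using sample correlations (6.28.1)-(6.28.2), solve R_v2 w = r_xv2 (6.26.4),
# then subtract the noise estimate (6.25.1).
import math
import random

random.seed(0)
N, p, w0_freq = 10000, 6, 0.05 * math.pi   # assumed parameters

d = [math.sin(n * w0_freq) for n in range(N)]
v1, v2 = [0.0], [0.0]
for n in range(1, N):
    g = random.gauss(0.0, 1.0)             # common white-noise source g_n
    v1.append(0.8 * v1[-1] + g)            # primary-sensor noise (6.27.2)
    v2.append(-0.6 * v2[-1] + g)           # secondary-sensor noise (6.27.2)
x = [d[n] + v1[n] for n in range(N)]       # primary observation (6.27.3)

def corr(a, b, k, n_samp):
    """Sample correlation (1/N) sum a[n] b[n-k], as in (6.28.1)-(6.28.2)."""
    return sum(a[n] * b[n - k] for n in range(k, n_samp)) / n_samp

R = [[corr(v2, v2, abs(i - j), N) for j in range(p)] for i in range(p)]
r = [corr(x, v2, k, N) for k in range(p)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda i: abs(M[i][c]))
        M[c], M[piv] = M[piv], M[c]
        for i in range(c + 1, n):
            f = M[i][c] / M[c][c]
            for k in range(c, n + 1):
                M[i][k] -= f * M[c][k]
    w = [0.0] * n
    for c in range(n - 1, -1, -1):
        w[c] = (M[c][n] - sum(M[c][k] * w[k] for k in range(c + 1, n))) / M[c][c]
    return w

w = solve(R, r)
v1_hat = [sum(w[k] * v2[n - k] for k in range(p)) if n >= p else 0.0
          for n in range(N)]
d_hat = [x[n] - v1_hat[n] for n in range(N)]      # (6.25.1)

mse_before = sum((x[n] - d[n]) ** 2 for n in range(p, N)) / (N - p)
mse_after = sum((d_hat[n] - d[n]) ** 2 for n in range(p, N)) / (N - p)
print(mse_before > mse_after)  # cancellation should reduce the error
```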

[Figures: outputs of a 6th-order and a 12th-order Wiener filter noise canceller]

The IIR Wiener filter processes the input sequence x_n to produce an output y_n = x_n * h_n that is as close as possible, in the mean-square sense, to the desired process d_n. We notice that for the IIR Wiener filter, there are an infinite number of unknown coefficients to be found: h_n for all n.


For a noncausal (unconstrained) IIR Wiener filter, the problem is to find the unit sample response of the filter

H(z) = \sum_{n=-\infty}^{\infty} h_n z^{-n}    (6.31.1)

that minimizes

\xi = E\{|e_n|^2\}    (6.31.2)

where

e_n = d_n - \hat{d}_n = d_n - \sum_{l=-\infty}^{\infty} h_l x_{n-l}    (6.31.3)

This problem can be solved similarly to the FIR Wiener filter problem: by equating the derivative of the mean-square error with respect to h_k^* to zero for each k:

\frac{\partial \xi}{\partial h_k^*} = -E\{e_n x_{n-k}^*\} = 0, \quad -\infty < k < \infty    (6.32.1)

which is equivalent to the orthogonality condition E\{e_n x_{n-k}^*\} = 0 for all k (6.32.2). The last equation is called the orthogonality principle, and it is identical to the orthogonality principle for an FIR filter except that here the equality must hold for all k. Therefore:

\sum_{l=-\infty}^{\infty} h_l E\{x_{n-l} x_{n-k}^*\} = E\{d_n x_{n-k}^*\}, \quad -\infty < k < \infty    (6.32.3)

We note that the expectation on the lhs is the autocorrelation of x_n, and on the rhs, the cross-correlation between d_n and x_n.


\sum_{l=-\infty}^{\infty} h_l r_x(k-l) = r_{dx}(k), \quad -\infty < k < \infty    (6.33.1)

which are the Wiener-Hopf equations of the noncausal IIR Wiener filter. We observe that the only difference compared to the FIR case is in the summation limits and the range of values for k. We can also notice that the lhs of (6.33.1) is a convolution:

h_k * r_x(k) = r_{dx}(k)    (6.33.2)

H(e^{j\omega}) P_x(e^{j\omega}) = P_{dx}(e^{j\omega})    (6.33.3)

H(e^{j\omega}) = \frac{P_{dx}(e^{j\omega})}{P_x(e^{j\omega})}    (6.33.4)

In the z-domain,

H(z) = \frac{P_{dx}(z)}{P_x(z)}    (6.34.1)

We observe that the denominator is an auto-spectrum, while the numerator is a cross-spectrum. The mean-square error is

\xi_{\min} = r_d(0) - \sum_{l=-\infty}^{\infty} h_l r_{dx}^*(l)    (6.34.2)

Using Parseval's theorem,

\xi_{\min} = r_d(0) - \frac{1}{2\pi} \int_{-\pi}^{\pi} H(e^{j\omega}) P_{dx}^*(e^{j\omega})\, d\omega    (6.34.3)


Since

r_d(0) = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_d(e^{j\omega})\, d\omega    (6.35.1)

it follows that

\xi_{\min} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ P_d(e^{j\omega}) - H(e^{j\omega}) P_{dx}^*(e^{j\omega}) \right] d\omega    (6.35.2)

or, as a contour integral around the unit circle C,

\xi_{\min} = \frac{1}{2\pi j} \oint_C \left[ P_d(z) - H(z) P_{dx}^*(1/z^*) \right] z^{-1}\, dz

For the Wiener smoothing problem, the observations are

x_n = d_n + v_n    (6.36.1)

We need to find the auto- and cross-spectra. Assuming that d_n and v_n are uncorrelated zero-mean random processes, the autocorrelation is

r_x(k) = r_d(k) + r_v(k)    (6.36.2)

P_x(e^{j\omega}) = P_d(e^{j\omega}) + P_v(e^{j\omega})    (6.36.3)

The cross-correlation is

r_{dx}(k) = E\{d_n x_{n-k}^*\} = E\{d_n d_{n-k}^*\} + E\{d_n v_{n-k}^*\} = r_d(k)    (6.36.4)

P_{dx}(e^{j\omega}) = P_d(e^{j\omega})    (6.36.5)


H(e^{j\omega}) = \frac{P_d(e^{j\omega})}{P_d(e^{j\omega}) + P_v(e^{j\omega})}    (6.37.1)

The filter magnitude is close to one in bands where the signal dominates, close to zero in noise-dominated bands, and in between, a little attenuation takes place. Since

P_{dx}(e^{j\omega}) = P_d(e^{j\omega})    (6.37.2)

the minimum error becomes

\xi_{\min} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ P_d(e^{j\omega}) - H(e^{j\omega}) P_d(e^{j\omega}) \right] d\omega

Finally, combining with the filter frequency response equation, the MS error of the noncausal IIR Wiener smoothing filter is

\xi_{\min} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{P_d(e^{j\omega}) P_v(e^{j\omega})}{P_d(e^{j\omega}) + P_v(e^{j\omega})}\, d\omega = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_v(e^{j\omega}) H(e^{j\omega})\, d\omega    (6.38.1)

\xi_{\min} = \frac{1}{2\pi j} \oint_C P_v(z) H(z) z^{-1}\, dz    (6.38.2)


For the causal filter, its unit pulse response h_n is zero for n < 0. Therefore:

\hat{d}_n = x_n * h_n = \sum_{k=0}^{\infty} h_k x_{n-k}    (6.39.1)

Using the same procedure as before, we can find the Wiener-Hopf equations for the causal IIR Wiener filter:

\sum_{l=0}^{\infty} h_l r_x(k-l) = r_{dx}(k), \quad 0 \le k < \infty    (6.39.2)

The important difference between this result and the one for the noncausal IIR filter is the summation limit. The restriction that k be non-negative implies that the cross-correlation r_{dx}(k) cannot be expressed as the convolution of h_k and r_x(k).

We start the filter design with the special case where the input to the filter is unit-variance white noise ε_n. Denoting the Wiener filter coefficients by g_n, the Wiener-Hopf equations are

\sum_{l=0}^{\infty} g_l r_\varepsilon(k-l) = r_{d\varepsilon}(k), \quad 0 \le k < \infty    (6.40.1)

Since r_ε(k) = δ(k), the lhs reduces to g_k. Therefore, the causal Wiener filter for white noise is

g_n = r_{d\varepsilon}(n)\, u_n    (6.40.2)

and the z-domain solution is

G(z) = [P_{d\varepsilon}(z)]_+    (6.40.3)

Here "+" indicates the "positive-time part" of the sequence whose z-transform is contained within the brackets.


In general, however, the input to the Wiener filter will not be white noise. Assuming that the input x_n is a random process with a rational power spectrum having no poles or zeros on the unit circle, we may perform a spectral factorization and write P_x as

P_x(z) = \sigma_0^2 Q(z) Q^*(1/z^*)    (6.41.1)

Q(z) = 1 + q_1 z^{-1} + q_2 z^{-2} + \ldots = \frac{N(z)}{D(z)}    (6.41.2)

where N(z) and D(z) are minimum-phase monic polynomials. If x_n is filtered with a filter having a transfer function of the form

F(z) = \frac{1}{\sigma_0 Q(z)}    (6.41.3)


P_\varepsilon(z) = P_x(z) F(z) F^*(1/z^*) = 1    (6.42.1)

Therefore, the output process εn is white noise and F(z) is called a whitening filter.

We notice that since Q(z) is minimum phase, then F(z) is stable and causal and

has a stable and causal inverse F-1(z). As a result, xn may be recovered from εn by

filtering with the inverse filter F-1(z). In other words, there is no loss of information

in the linear transformation producing white noise from xn.

Let H(z) be the causal Wiener filter with an input xn having a rational spectrum and

producing the MMS estimate of dn. Suppose that the input is filtered with a cascade

of three filters F(z), F-1(z), and H(z) where F(z) is the causal whitening filter for xn

and F-1(z) is its causal

inverse.


The cascade G(z) = F^{-1}(z) H(z) is the causal IIR Wiener filter producing the MMS estimate of d_n from the white noise ε_n. The causality of G(z) follows from the fact that both F^{-1}(z) and H(z) are causal.

The cross-correlation between d_n and ε_n is

r_{d\varepsilon}(k) = E\{d_n \varepsilon_{n-k}^*\} = E\left\{ d_n \left[ \sum_{l=-\infty}^{\infty} f_l x_{n-k-l} \right]^* \right\} = \sum_{l=-\infty}^{\infty} f_l^* r_{dx}(k+l)    (6.43.1)

Therefore, the cross-power spectral density is

P_{d\varepsilon}(z) = P_{dx}(z) F^*(1/z^*) = \frac{P_{dx}(z)}{\sigma_0 Q^*(1/z^*)}    (6.43.2)

and

G(z) = \frac{1}{\sigma_0} \left[ \frac{P_{dx}(z)}{Q^*(1/z^*)} \right]_+    (6.43.3)


Since

H(z) = F(z) G(z)    (6.44.1)

then

H(z) = \frac{1}{\sigma_0^2 Q(z)} \left[ \frac{P_{dx}(z)}{Q^*(1/z^*)} \right]_+    (6.44.2)

In the case of real processes, h_n is real, and the causal IIR Wiener filter takes the form

H(z) = \frac{1}{\sigma_0^2 Q(z)} \left[ \frac{P_{dx}(z)}{Q(1/z)} \right]_+    (6.44.3)

Finally, the MS error for the causal IIR Wiener filter is

\xi_{\min} = r_d(0) - \sum_{l=0}^{\infty} h_l r_{dx}^*(l)    (6.44.4)


\xi_{\min} = \frac{1}{2\pi} \int_{-\pi}^{\pi} \left[ P_d(e^{j\omega}) - H(e^{j\omega}) P_{dx}^*(e^{j\omega}) \right] d\omega    (6.45.1)

\xi_{\min} = \frac{1}{2\pi j} \oint_C \left[ P_d(z) - H(z) P_{dx}^*(1/z^*) \right] z^{-1}\, dz    (6.45.2)

We observe that the expressions for the causal IIR Wiener filter error in the frequency and z-domains are exactly the same as the corresponding expressions for the noncausal IIR Wiener filter. In the time-domain description, the difference arises in the summation limits.

Using the spectral factorization, the noncausal Wiener filter can also be written as

H_{nc}(z) = \frac{P_{dx}(z)}{\sigma_0^2 Q(z) Q^*(1/z^*)}    (6.46.1)

H_{nc}(z) = \frac{1}{\sigma_0^2 Q(z)} \cdot \frac{P_{dx}(z)}{Q^*(1/z^*)}    (6.46.2)

The noncausal Wiener filter can be implemented by the structure shown below. We note that the first filter is the causal whitening filter generating the white noise ε_n, while the second is a noncausal filter producing the MMS estimate of d_n.

[Figure: cascade implementation of the noncausal Wiener filter]


A causal IIR Wiener filter is formed by taking the causal part of [Pdx(z)/Q*(1/z*)] as

shown below.

For the causal Wiener filtering problem

x_n = d_n + v_n    (6.48.1)

with d_n and v_n uncorrelated,

P_{dx}(z) = P_d(z)    (6.48.2)

H(z) = \frac{1}{\sigma_0^2 Q(z)} \left[ \frac{P_d(z)}{Q^*(1/z^*)} \right]_+    (6.48.3)

where

P_x(z) = P_d(z) + P_v(z) = \sigma_0^2 Q(z) Q^*(1/z^*)    (6.48.4)

However, to evaluate (build) the actual Wiener filter, expressions for the power spectral densities P_d(z) and P_v(z) are required.

Causal IIR Wiener filter: filtering (Example)

We need to estimate a signal d_n generated by

d_n = 0.8\, d_{n-1} + w_n    (6.49.1)

x_n = d_n + v_n    (6.49.2)

where v_n is unit-variance white noise uncorrelated with d_n, and w_n is white noise with variance σ_w² = 0.36. Therefore, r_d(k) = 0.8^{|k|}. To find the optimum causal IIR Wiener filter, we begin from the observation that

P_{dx}(z) = P_d(z)    (6.49.3)

P_x(z) = P_d(z) + P_v(z) = P_d(z) + 1    (6.49.4)

with

P_d(z) = \frac{0.36}{(1 - 0.8 z^{-1})(1 - 0.8 z)}    (6.49.5)


The power spectrum of x_n is

P_x(z) = 1 + \frac{0.36}{(1 - 0.8 z^{-1})(1 - 0.8 z)} = 1.6\, \frac{(1 - 0.5 z^{-1})(1 - 0.5 z)}{(1 - 0.8 z^{-1})(1 - 0.8 z)}    (6.50.1)

P_x(z) = \sigma_0^2 Q(z) Q(z^{-1})    (6.50.2)

with

\sigma_0^2 = 1.6, \qquad Q(z) = \frac{1 - 0.5 z^{-1}}{1 - 0.8 z^{-1}}    (6.50.3)

Since the causal IIR Wiener filter is

H(z) = \frac{1}{\sigma_0^2 Q(z)} \left[ \frac{P_{dx}(z)}{Q(z^{-1})} \right]_+    (6.50.4)


we can express

\frac{P_{dx}(z)}{Q(z^{-1})} = \frac{0.36}{(1 - 0.8 z^{-1})(1 - 0.8 z)} \cdot \frac{1 - 0.8 z}{1 - 0.5 z} = \frac{0.36 z^{-1}}{(1 - 0.8 z^{-1})(z^{-1} - 0.5)} = \frac{0.6}{1 - 0.8 z^{-1}} + \frac{0.3}{z^{-1} - 0.5}    (6.51.1)

Therefore, the positive-time part is

\left[ \frac{P_{dx}(z)}{Q(z^{-1})} \right]_+ = \frac{0.6}{1 - 0.8 z^{-1}}    (6.51.2)

and

H(z) = \frac{1}{1.6} \cdot \frac{1 - 0.8 z^{-1}}{1 - 0.5 z^{-1}} \cdot \frac{0.6}{1 - 0.8 z^{-1}} = \frac{0.375}{1 - 0.5 z^{-1}}    (6.51.3)

or

h_n = 0.375\, (1/2)^n\, u_n    (6.51.4)


Since

\hat{D}(z) = H(z) X(z)    (6.52.1)

the estimate of d_n may be computed recursively as

\hat{d}_n = 0.5\, \hat{d}_{n-1} + 0.375\, x_n    (6.52.2)

The minimum error is

\xi_{\min} = E\{|d_n - \hat{d}_n|^2\} = r_d(0) - \sum_{l=0}^{\infty} h_l r_{dx}(l) = 1 - \frac{3}{8} \sum_{l=0}^{\infty} \left(\frac{1}{2}\right)^l (0.8)^l = 0.3750    (6.52.3)

For comparison, the MS error of the 2nd-order FIR Wiener filter was 0.4048. We conclude that using all previous observations of x_n only slightly improves the performance of the Wiener filter.
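A numerical check of this example, assuming the first-order recursion implied by H(z) = 0.375/(1 - 0.5 z⁻¹):

```python
# h_l = 0.375 * (1/2)^l for l >= 0, and r_dx(l) = r_d(l) = 0.8^l
h = [0.375 * 0.5 ** l for l in range(200)]   # truncated impulse response
xi_min = 1.0 - sum(h[l] * 0.8 ** l for l in range(200))

# The recursion d^_n = 0.5 d^_{n-1} + 0.375 x_n has the same impulse response
dhat, prev = [], 0.0
for n in range(10):
    prev = 0.5 * prev + 0.375 * (1.0 if n == 0 else 0.0)
    dhat.append(prev)
assert all(abs(dhat[n] - h[n]) < 1e-12 for n in range(10))

print(xi_min)  # -> 0.375 (up to geometric-series truncation)
```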


For another comparison, we compute the noncausal Wiener filter:

H(z) = \frac{P_{dx}(z)}{P_x(z)} = \frac{P_d(z)}{P_x(z)} = \frac{0.36/1.6}{(1 - 0.5 z^{-1})(1 - 0.5 z)}    (6.53.1)

h_n = \frac{3}{10} \left(\frac{1}{2}\right)^{|n|}    (6.53.2)

The MS error can be computed as

\xi_{\min} = r_d(0) - \sum_{l=-\infty}^{\infty} h_l r_{dx}(l) = 1 - 2\sum_{k=0}^{\infty} \frac{3}{10}\left(\frac{1}{2}\right)^k (0.8)^k + \frac{3}{10} = 0.3    (6.53.3)

which is lower than for the causal filter, as should be expected.


An interesting observation for this particular causal IIR filter is that the recursive estimator can be rewritten in the form

\hat{d}_n = 0.8\, \hat{d}_{n-1} + 0.375\,(x_n - 0.8\, \hat{d}_{n-1})    (6.54.1)

The MMS estimate of d_{n-1} is based on all observations of x_n up to time n-1. If we have an estimate of d_{n-1}, we may "predict" the estimate of d_n, and hence "predict" the next measurement of x_n. When the next actual measurement of x_n arrives, we may compare it to the predicted value. The prediction error is

\alpha_n = x_n - \hat{x}_n    (6.54.2)

This error is called an innovation process and represents the "new information", i.e., the part that cannot be predicted. Therefore, the estimate of d_n can be corrected by the new information. This approach is related to Kalman filtering.

Causal IIR Wiener filter: linear prediction

We need to derive an optimum linear predictor of the form

\hat{x}_{n+1} = \sum_{k=0}^{\infty} h_k x_{n-k}    (6.55.1)

that produces the best estimate of x_{n+1} based on x_k for all k ≤ n. Since an infinite number of past signal values is used, we expect better prediction than an FIR predictor provides.

For the linear prediction problem,

d_n = x_{n+1}    (6.55.2)

Therefore:

P_{dx}(z) = z P_x(z)    (6.55.4)


The Wiener predictor is then

H(z) = \frac{1}{\sigma_0^2 Q(z)} \left[ \frac{z P_x(z)}{Q^*(1/z^*)} \right]_+    (6.56.1)

However, since

P_x(z) = \sigma_0^2 Q(z) Q^*(1/z^*)    (6.56.2)

this reduces to

H(z) = \frac{1}{Q(z)} \left[ z Q(z) \right]_+    (6.56.3)

where

Q(z) = 1 + q_1 z^{-1} + q_2 z^{-2} + \ldots    (6.56.4)


we observe that the positive-time part of zQ(z) is

[zQ(z)]_+ = [z + q_1 + q_2 z^{-1} + q_3 z^{-2} + \ldots]_+ = q_1 + q_2 z^{-1} + q_3 z^{-2} + \ldots = z\,[Q(z) - 1]    (6.57.1)

so that

H(z) = \frac{1}{Q(z)}\, z\,[Q(z) - 1] = z \left[ 1 - \frac{1}{Q(z)} \right]    (6.57.2)

The MMS error is

\xi_{\min} = \frac{1}{2\pi j} \oint_C \left[ P_d(z) - H(z) P_{dx}^*(1/z^*) \right] z^{-1}\, dz    (6.57.3)


Since

P_d(z) = P_x(z) \quad \text{and} \quad P_{dx}(z) = z P_x(z)    (6.58.1)

the error will be

\xi_{\min} = \frac{1}{2\pi j} \oint_C \left[ P_x(z) - z^{-1} H(z) P_x^*(1/z^*) \right] z^{-1}\, dz    (6.58.2)

Using the symmetry

P_x(z) = P_x^*(1/z^*)    (6.58.3)

this becomes

\xi_{\min} = \frac{1}{2\pi j} \oint_C P_x(z) \left[ 1 - z^{-1} H(z) \right] z^{-1}\, dz    (6.58.4)


Substituting the transfer function of the causal IIR Wiener predictor leads to

\xi_{\min} = \frac{1}{2\pi j} \oint_C P_x(z) \left[ 1 - \left( 1 - \frac{1}{Q(z)} \right) \right] z^{-1}\, dz = \frac{1}{2\pi j} \oint_C \frac{P_x(z)}{Q(z)}\, z^{-1}\, dz = \frac{1}{2\pi j} \oint_C \sigma_0^2 Q^*(1/z^*)\, z^{-1}\, dz = \sigma_0^2 q_0    (6.59.1)

Finally, since Q(z) is monic, q_0 = 1 and, therefore, the MMS error is

\xi_{\min} = \sigma_0^2    (6.59.2)


The spectral factorization suggests that for a wss random process whose power spectrum is a real-valued, positive, and periodic function of frequency, the following factorization holds:

P_x(z) = \sigma_0^2 Q(z) Q^*(1/z^*)    (6.60.1)

where

\sigma_0^2 = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln P_x(e^{j\omega})\, d\omega \right\}    (6.60.2)

Therefore:

\xi_{\min} = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln P_x(e^{j\omega})\, d\omega \right\}    (6.60.3)

Causal IIR Wiener filter: linear prediction of an AR process

Assuming that x_n is an autoregressive AR(p) process with a power spectrum

P_x(z) = \frac{\sigma_0^2}{A(z) A^*(1/z^*)}    (6.61.1)

where

A(z) = 1 + \sum_{k=1}^{p} a_k z^{-k}    (6.61.2)

is a minimum-phase polynomial having all its roots inside the unit circle, we have Q(z) = 1/A(z), and the optimum linear predictor H(z) = z[1 - 1/Q(z)] = z[1 - A(z)] gives

\hat{x}_{n+1} = -a_1 x_n - a_2 x_{n-1} - \ldots - a_p x_{n-p+1}

which happens to be an FIR filter! Therefore, only the last p values out of an infinite number of past signal samples are used to predict x_{n+1}.


A random autoregressive AR(p) process satisfies a difference equation of the form

x_n = -a_1 x_{n-1} - a_2 x_{n-2} - \ldots - a_p x_{n-p} + w_n    (6.62.1)

where w_n is white noise. Since w_{n+1} cannot be predicted from x_n or its previous values (the noise is uncorrelated with the past samples), the best we can do in predicting x_{n+1} is to use the AR model and ignore the noise term.

Causal IIR Wiener filter: linear prediction of an AR process (Example)

Consider a real-valued AR(2) process

x_n = 0.9\, x_{n-1} - 0.2\, x_{n-2} + w_n    (6.63.1)

P_x(z) = \frac{1}{A(z) A(z^{-1})}    (6.63.2)

with A(z) = 1 - 0.9 z^{-1} + 0.2 z^{-2}. The optimum linear predictor is

\hat{x}_{n+1} = 0.9\, x_n - 0.2\, x_{n-1}


A specific realization of x_n (solid line) and its optimal prediction (dotted line) are shown below. The sample mean-square prediction error is

\xi = \frac{1}{N} \sum_{n=0}^{N-1} \left[ x_{n+1} - \hat{x}_{n+1} \right]^2 = 0.0324    (6.64.1)

[Figure: realization of x_n and its one-step prediction]


However, in practice, the statistics of x_n are never known. Therefore, a more realistic approach is to first estimate the AR parameters* from the given data:

\hat{r}_x(k) = \frac{1}{N} \sum_{n=0}^{N-1} x_n x_{n-k}    (6.65.1)

\hat{r}_x(0) = 2.1904, \quad \hat{r}_x(1) = 1.5462, \quad \hat{r}_x(2) = 0.8670    (6.65.2)

\begin{bmatrix} 2.1904 & 1.5462 \\ 1.5462 & 2.1904 \end{bmatrix} \begin{bmatrix} \hat{a}_1 \\ \hat{a}_2 \end{bmatrix} = -\begin{bmatrix} 1.5462 \\ 0.8670 \end{bmatrix}    (6.65.3)

which is solved to find the estimates of a_1 and a_2:

\begin{bmatrix} \hat{a}_1 \\ \hat{a}_2 \end{bmatrix} = \begin{bmatrix} -0.8500 \\ 0.2042 \end{bmatrix}    (6.65.4)

* AR parameter estimation will be explained in detail later.
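Solving (6.65.3) directly reproduces (6.65.4) to within the slide's rounding:

```python
r0, r1, r2 = 2.1904, 1.5462, 0.8670  # estimated correlations (6.65.2)

# Cramer's rule on [[r0, r1], [r1, r0]] [a1, a2]^T = -[r1, r2]^T
det = r0 * r0 - r1 * r1
a1 = -(r0 * r1 - r1 * r2) / det
a2 = -(r0 * r2 - r1 * r1) / det
print(a1, a2)  # close to the slide's -0.8500 and 0.2042
```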


We observe that the estimated AR parameters are not equal to the true ones, and, therefore, the predictor becomes

\hat{x}_{n+1} = 0.85\, x_n - 0.2042\, x_{n-1}

Next, instead of using the predictor on the data that were used to estimate the AR parameters, we apply the predictor to the next 200 data values. The result is shown below:

[Figure: prediction over the next 200 data values]


The deconvolution problem is concerned with the recovery of a signal d_n that has been convolved with a filter g_n that may not be precisely known:

x_n = d_n * g_n    (6.67.1)

This situation arises whenever a measurement device distorts the data: slightly out-of-focus cameras, band-limited communication channels, and so on. For instance, a camera moving during the exposure would introduce distortion. If the "blurring" function g_n is perfectly known and has an inverse g_n^{-1} such that

g_n * g_n^{-1} = \delta_n    (6.67.2)

then

d_n = x_n * g_n^{-1} \quad \Leftrightarrow \quad D(e^{j\omega}) = \frac{X(e^{j\omega})}{G(e^{j\omega})}    (6.67.3)

Another problem is that the frequency response G(e^{jω}) may be zero (or very small) at some frequencies; then G(e^{jω}) is either noninvertible or ill-conditioned. In addition, noise may be introduced in the measurement process, so a more accurate model for the observed process is

x_n = d_n * g_n + w_n    (6.68.1)

where w_n is additive noise, often assumed to be uncorrelated with d_n. In this situation, even if the inverse filter exists and is well-behaved, applying it to x_n yields the restored signal

\hat{D}(e^{j\omega}) = D(e^{j\omega}) + \frac{W(e^{j\omega})}{G(e^{j\omega})} = D(e^{j\omega}) + V(e^{j\omega})    (6.68.2)

i.e., the desired signal plus filtered noise:

\hat{d}_n = d_n + v_n    (6.68.3)


An alternative approach to the deconvolution problem is to design a Wiener filter producing the MMS estimate of d_n from x_n. Let h_n be a noncausal IIR LTI filter producing the estimate

\hat{d}_n = x_n * h_n = \sum_{l=-\infty}^{\infty} h_l x_{n-l}    (6.69.1)

Then

H(e^{j\omega}) = \frac{P_{dx}(e^{j\omega})}{P_x(e^{j\omega})}    (6.69.2)

where

P_x(e^{j\omega}) = P_d(e^{j\omega}) \left| G(e^{j\omega}) \right|^2 + P_w(e^{j\omega})    (6.69.3)

Also, the cross-psd is

P_{dx}(e^{j\omega}) = P_d(e^{j\omega}) G^*(e^{j\omega})    (6.70.1)

so that

H(e^{j\omega}) = \frac{P_d(e^{j\omega}) G^*(e^{j\omega})}{P_d(e^{j\omega}) |G(e^{j\omega})|^2 + P_w(e^{j\omega})}    (6.70.2)

Moreover, assuming that G(e^{jω}) is non-zero for all ω and that its inverse exists,

H(e^{j\omega}) = \frac{1}{G(e^{j\omega})} \left[ \frac{P_d(e^{j\omega})}{P_d(e^{j\omega}) + P_w(e^{j\omega}) / |G(e^{j\omega})|^2} \right]    (6.70.3)

Since the power spectrum of the filtered noise is

P_v(e^{j\omega}) = \frac{P_w(e^{j\omega})}{|G(e^{j\omega})|^2}    (6.70.4)


Defining

F(e^{j\omega}) = \frac{P_d(e^{j\omega})}{P_d(e^{j\omega}) + P_v(e^{j\omega})}    (6.71.1)

which is the noncausal IIR Wiener smoothing filter for estimating d_n from

y_n = d_n + v_n    (6.71.2)

the deconvolution Wiener filter may be viewed as a cascade of an "inverse degradation" filter 1/G(e^{jω}) followed by a noncausal Wiener smoothing filter that reduces the filtered noise.

72

The primary limitation of Wiener filters is that both the signal and noise processes must be jointly wss. Most practical signals are nonstationary, which limits the applications of Wiener filters.

Recall that for recovering an AR(1) process of the form

xn = a1 xn−1 + wn (6.72.1)
yn = xn + vn (6.72.2)

where vn and wn are uncorrelated white noise processes, the optimum linear estimate of xn using all of the measurements yk, k ≤ n, could be computed with a recursion:

x̂n = a1 x̂n−1 + K [yn − a1 x̂n−1] (6.72.3)

where the gain K is chosen to minimize the mean-square error

ξ = E{|xn − x̂n|²} (6.72.4)
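This recursion can be sketched in a few lines. The simulation below uses assumed values (a1 = 0.8, process-noise variance 0.36, unit-variance measurement noise, and an illustrative fixed gain K; all of these are my choices, not from the lecture) and checks that the recursive estimate beats using the raw measurements directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed model parameters (illustrative, not from the lecture)
a1, K, N = 0.8, 0.375, 5000

# Simulate the AR(1) process x and the noisy observations y
w = rng.standard_normal(N) * 0.6   # process noise, variance 0.36
v = rng.standard_normal(N)         # measurement noise, variance 1
x = np.zeros(N)
for n in range(1, N):
    x[n] = a1 * x[n - 1] + w[n]
y = x + v

# Recursion (6.72.3): predict with the model, correct with the innovation
x_hat = np.zeros(N)
for n in range(1, N):
    pred = a1 * x_hat[n - 1]
    x_hat[n] = pred + K * (y[n] - pred)

mse_filtered = np.mean((x - x_hat) ** 2)
mse_raw = np.mean((x - y) ** 2)    # error of using y directly
```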


However, the above again relies on the wss assumption. For a time-varying situation, the optimum estimate may be found with a recursion of the same form but with a time-varying gain,

x̂n = a1 x̂n−1 + Kn [yn − a1 x̂n−1] (6.73.1)

The above result can be extended to the estimation of AR(p) processes of the form

xn = Σ_{k=1}^{p} ak xn−k + wn (6.73.2)
yn = xn + vn (6.73.3)


To do so, define the state vector

xn = [xn  xn−1  ⋯  xn−p+1]T (6.74.1)

Then equations (6.73.2) and (6.73.3) may be expressed in the form:

     ⎡ a1   a2   ⋯  ap−1  ap ⎤           ⎡ 1 ⎤
     ⎢ 1    0    ⋯  0     0  ⎥           ⎢ 0 ⎥
xn = ⎢ 0    1    ⋯  0     0  ⎥ xn−1   +  ⎢ 0 ⎥ wn (6.74.2)
     ⎢ ⋮    ⋮        ⋮     ⋮  ⎥           ⎢ ⋮ ⎥
     ⎣ 0    0    ⋯  1     0  ⎦           ⎣ 0 ⎦


and

yn = [1  0  ⋯  0] xn + vn (6.75.1)

In compact matrix form,

xn = A xn−1 + wn (6.75.2)
yn = cT xn + vn (6.75.3)

where

wn = [wn  0  ⋯  0]T (6.75.4)

We notice that, as for the AR(1) process, the optimum estimate of the state vector xn using all the previous measurements may be found with a recursion.
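The companion-matrix construction above can be verified directly. The sketch below builds A and c for an assumed AR(2) example (the coefficients are illustrative choices) and checks that the first component of the state recursion reproduces the scalar AR recursion (6.73.2):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed AR(2) coefficients (illustrative)
a = np.array([0.75, -0.5])
p = len(a)

# Companion-form state matrix A of eq. (6.74.2) and vector c of eq. (6.75.3)
A = np.zeros((p, p))
A[0, :] = a                 # first row carries the AR coefficients
A[1:, :-1] = np.eye(p - 1)  # subdiagonal shifts the state down
c = np.zeros(p)
c[0] = 1.0

# Run both the scalar AR recursion and the state recursion x_n = A x_{n-1} + w_n
N = 200
w = rng.standard_normal(N)
x_scalar = np.zeros(N)
for n in range(N):
    for k in range(1, p + 1):
        if n - k >= 0:
            x_scalar[n] += a[k - 1] * x_scalar[n - k]
    x_scalar[n] += w[n]

state = np.zeros(p)
x_state = np.zeros(N)
for n in range(N):
    state = A @ state + np.concatenate(([w[n]], np.zeros(p - 1)))
    x_state[n] = c @ state  # c^T x_n, i.e. y_n without the noise term

match = np.allclose(x_scalar, x_state)
```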


The equation (6.75.2) is applicable to stationary AR(p) processes only, but it is easily generalized to nonstationary processes as follows:

xn = An−1 xn−1 + wn (6.76.1)

where wn is a zero-mean white noise process with

E{wn wkH} = Qw(n) for k = n, and 0 for k ≠ n (6.76.2)

In addition, let yn be a vector of observations of length q formed as

yn = Cn xn + vn (6.76.3)

where vn is a zero-mean white noise statistically independent of wn with


E{vn vkH} = Qv(n) for k = n, and 0 for k ≠ n (6.77.1)

The optimum linear estimate for the time-varying case would be the recursion

x̂n = An−1 x̂n−1 + Kn [yn − Cn An−1 x̂n−1] (6.77.2)

With the appropriate Kalman gain matrix Kn, this recursion corresponds to the discrete Kalman filter.

Assuming that An, Cn, Qv(n), and Qw(n) are known, denote by x̂n|n the best linear estimate of xn at time n given the observations yi for i = 1, 2, …, n, and by x̂n|n−1 the best linear estimate of xn at time n given the observations yi for i = 1, 2, …, n−1. The corresponding state estimation errors are

en|n = xn − x̂n|n
en|n−1 = xn − x̂n|n−1 (6.77.3)


with the corresponding error covariance matrices

Pn|n = E{en|n en|nH}
Pn|n−1 = E{en|n−1 en|n−1H} (6.78.1)

Suppose that we are given an estimate x̂0|0 of the state x0 and that the error covariance matrix P0|0 of this estimate is known. When the measurement y1 becomes available, we need to update x̂0|0 and find the estimate x̂1|1 of the state at time n = 1 that minimizes the mean-square error

ξ1 = E{‖e1|1‖²} = tr{P1|1} = Σ_{i=0}^{p−1} E{|e_{i,1|1}|²} (6.78.2)

Once such an estimate is found and the error covariance P1|1 is evaluated, the estimation is repeated for the next observation y2.


The solution to this problem is derived in two steps. First, given x̂n−1|n−1, we find x̂n|n−1, which is the best estimate of xn without the observation yn. Next, given yn and x̂n|n−1, we estimate xn.

In the first step, since no new measurements are used to estimate xn, all we know is that

xn = An−1 xn−1 + wn (6.79.1)

Since wn is a zero-mean white noise and its values are unknown, we may predict xn as follows:

x̂n|n−1 = An−1 x̂n−1|n−1 (6.79.2)


Since wn has zero mean, if x̂n−1|n−1 is an unbiased estimate of xn−1, then x̂n|n−1 is an unbiased estimate of xn. Since the estimation error en−1|n−1 is uncorrelated with the white noise wn, the covariance of the prediction error is

Pn|n−1 = An−1 Pn−1|n−1 An−1H + Qw(n) (6.80.1)

where Qw(n) is the covariance matrix for the noise process wn. This completes the first step of the Kalman filter.
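In code, this prediction step is just two lines: propagate the estimate through the state matrix, and propagate the covariance as An−1 Pn−1|n−1 An−1H plus the process-noise covariance. The sketch below applies it to an assumed two-state example (all numeric values are illustrative):

```python
import numpy as np

def kalman_predict(x_hat, P, A, Qw):
    """First (prediction) step of the Kalman filter: propagate the state
    estimate (eq. 6.79.2) and its error covariance
    P_pred = A P A^H + Qw through the state model."""
    x_pred = A @ x_hat
    P_pred = A @ P @ A.conj().T + Qw
    return x_pred, P_pred

# Assumed 2-state example (illustrative values)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
Qw = 0.2 * np.eye(2)     # process-noise covariance
x_hat = np.array([1.0, -1.0])
P = np.eye(2)            # prior error covariance

x_pred, P_pred = kalman_predict(x_hat, P, A, Qw)
```

Notice that the predicted covariance exceeds the propagated prior by exactly Qw: without new measurements, uncertainty can only grow.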


In the second step, we incorporate the new measurement yn into the estimate x̂n|n−1. A new linear estimate is formed as

x̂n|n = K′n x̂n|n−1 + Kn yn (6.81.1)

where Kn and K′n are matrices that need to be specified. The error can be found as

en|n = xn − K′n x̂n|n−1 − Kn yn = xn − K′n [xn − en|n−1] − Kn [Cn xn + vn]
     = [I − K′n − Kn Cn] xn + K′n en|n−1 − Kn vn (6.81.2)

Since

E{vn} = 0 and E{en|n−1} = 0 (6.81.3)

the estimate x̂n|n is unbiased for any xn only if I − K′n − Kn Cn = 0, that is, K′n = I − Kn Cn. Therefore,


x̂n|n = [I − Kn Cn] x̂n|n−1 + Kn yn (6.82.1)

or

x̂n|n = x̂n|n−1 + Kn [yn − Cn x̂n|n−1] (6.82.2)

The corresponding error is

en|n = K′n en|n−1 − Kn vn = [I − Kn Cn] en|n−1 − Kn vn (6.82.3)

Since the measurement noise vn is uncorrelated with en|n−1:

E{en|n−1 vnH} = 0 (6.82.4)

the error covariance matrix becomes

Pn|n = E{en|n en|nH} = [I − Kn Cn] Pn|n−1 [I − Kn Cn]H + Kn Qv(n) KnH (6.82.5)


Next, we need to find the value of the Kalman gain Kn that minimizes the mean-square error

ξn = tr{Pn|n} (6.83.1)

where the trace of an n×n matrix is

tr(A) = Σ_{i=1}^{n} aii (6.83.2)

Therefore, we need to differentiate ξn with respect to Kn, set the derivative to zero, and solve for Kn. Using the matrix differentiation formulas

d/dK tr(KA) = AH (6.83.3)

and

d/dK tr(KAKH) = 2KA (6.83.4)


we obtain

d/dKn tr(Pn|n) = −2 [I − Kn Cn] Pn|n−1 CnH + 2 Kn Qv(n) = 0 (6.84.1)

Solving for Kn gives the expression for the Kalman gain

Kn = Pn|n−1 CnH [Cn Pn|n−1 CnH + Qv(n)]⁻¹ (6.84.2)

Substituting this gain back into (6.82.5), the error covariance simplifies to

Pn|n = [I − Kn Cn] Pn|n−1 (6.84.3)

The recursion is initialized with

x̂0|0 = E{x0} (6.84.4)
P0|0 = E{x0 x0H} (6.84.5)
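A quick numerical check of the gain formula: with the optimal Kn, the general covariance expression [I − KnCn] Pn|n−1 [I − KnCn]H + Kn Qv(n) KnH collapses to the simpler form [I − KnCn] Pn|n−1. The sketch below verifies this for assumed, randomly generated matrices (dimensions and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed predicted covariance and measurement model (illustrative)
p, q = 3, 2
M = rng.standard_normal((p, p))
P_pred = M @ M.T + np.eye(p)     # symmetric positive-definite P_{n|n-1}
C = rng.standard_normal((q, p))  # measurement matrix C_n
Qv = np.diag([0.5, 1.0])         # measurement-noise covariance Q_v(n)

# Kalman gain, eq. (6.84.2)
S = C @ P_pred @ C.T + Qv        # innovation covariance
K = P_pred @ C.T @ np.linalg.inv(S)

# Two expressions for the error covariance P_{n|n}
I = np.eye(p)
P_long = (I - K @ C) @ P_pred @ (I - K @ C).T + K @ Qv @ K.T
P_short = (I - K @ C) @ P_pred

forms_agree = np.allclose(P_long, P_short)
```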


In summary, the discrete Kalman filter computes, for n = 1, 2, …:

x̂n|n−1 = An−1 x̂n−1|n−1 (prediction)
Pn|n−1 = An−1 Pn−1|n−1 An−1H + Qw(n) (prediction error covariance)
Kn = Pn|n−1 CnH [Cn Pn|n−1 CnH + Qv(n)]⁻¹ (Kalman gain)
x̂n|n = x̂n|n−1 + Kn [yn − Cn x̂n|n−1] (correction)
Pn|n = [I − Kn Cn] Pn|n−1 (error covariance)

initialized with x̂0|0 = E{x0} and P0|0 = E{x0 x0H}.
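The summary maps directly onto code. Below is a hypothetical, textbook-style implementation of the recursion (time-invariant model matrices for brevity; the time-varying case simply indexes An, Cn, Qw(n), Qv(n) by n), exercised on an assumed scalar model:

```python
import numpy as np

def kalman_filter(y, A, C, Qw, Qv, x0, P0):
    """Run the discrete Kalman filter over the observations y.

    For each y_n: predict with the state model, then update with the
    gain (6.84.2), the estimate (6.82.2), and the covariance (6.84.3).
    Returns the filtered estimates x_hat_{n|n}."""
    x_hat, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    estimates = []
    for yn in y:
        # Prediction step
        x_hat = A @ x_hat
        P = A @ P @ A.T + Qw
        # Update step
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + Qv)
        x_hat = x_hat + K @ (yn - C @ x_hat)
        P = (I - K @ C) @ P
        estimates.append(x_hat.copy())
    return np.array(estimates)

# Smoke run on an assumed scalar AR(1) model (illustrative values)
rng = np.random.default_rng(3)
A = np.array([[0.8]])
C = np.array([[1.0]])
Qw = np.array([[0.36]])
Qv = np.array([[1.0]])
N = 500
x = np.zeros(N)
for n in range(1, N):
    x[n] = 0.8 * x[n - 1] + 0.6 * rng.standard_normal()
y = (x + rng.standard_normal(N)).reshape(-1, 1)

est = kalman_filter(y, A, C, Qw, Qv, x0=np.zeros(1), P0=np.eye(1))
mse_filtered = np.mean((est[:, 0] - x) ** 2)
mse_raw = np.mean((y[:, 0] - x) ** 2)
```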


Example: consider estimating the AR(1) process

xn = 0.8 xn−1 + wn (6.86.1)

from the noisy measurements

yn = xn + vn (6.86.2)

where wn and vn are uncorrelated white noise processes with variances σw² = 0.36 and σv² = 1. Thus, with An = 0.8 and Cn = 1, the Kalman filter state equation is

x̂n = 0.8 x̂n−1 + Kn [yn − 0.8 x̂n−1] (6.86.3)

Here the state vector is a scalar. Therefore, the Kalman gain can be computed with the scalar equations

Pn|n−1 = 0.64 Pn−1|n−1 + 0.36 (6.86.4)
Kn = Pn|n−1 [Pn|n−1 + 1]⁻¹ (6.86.5)


With the initial conditions

x̂0|0 = E{x0} = 0 (6.87.1)
P0|0 = E{|x0|²} = 1 (6.87.2)

the recursion produces the values of Pn|n−1, Kn, and Pn|n for the first few values of n (tabulated on the slide). After only a few iterations, the Kalman filter settles into its steady-state solution, found by setting Pn|n−1 = Pn−1|n−2 = P in (6.86.4)-(6.86.5): P² = 0.36, so Pn|n−1 → 0.6, Kn → 0.375, and Pn|n → 0.375.
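The gain sequence is easy to reproduce: iterating (6.86.4) and (6.86.5) from P0|0 = 1, with Pn|n = (1 − Kn) Pn|n−1 from (6.84.3), gives K1 = 0.5 and rapid convergence to the steady-state gain:

```python
# Scalar covariance/gain recursion of the AR(1) example,
# starting from P_{0|0} = 1
P_post = 1.0
gains = []
for n in range(20):
    P_prior = 0.64 * P_post + 0.36   # eq. (6.86.4)
    K = P_prior / (P_prior + 1.0)    # eq. (6.86.5)
    P_post = (1.0 - K) * P_prior     # P_{n|n} = (1 - K_n) P_{n|n-1}
    gains.append(K)
```

The first gain is 0.5, and within a handful of iterations the sequence is indistinguishable from the steady-state value 0.375.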


The Kalman filter uses the measurements yn to estimate the state xn of a dynamic system. It is a recursive algorithm that has found applications in various areas, including radar tracking, estimation and prediction of target trajectories, adaptive equalization of telephone channels and fading dispersive channels, adaptive antenna arrays…

