
Chapter 3 Wiener Filters

1. Introduction

• Wiener filters are a class of optimum linear filters that involve linear estimation of a desired signal sequence from another, related sequence.

• In the statistical approach to the solution of the linear filtering problem, we assume the availability of certain statistical parameters (e.g. mean and correlation functions) of the useful signal and the unwanted additive noise. The problem is to design a linear filter with the noisy data as input, with the requirement of minimizing the effect of the noise at the filter output according to some statistical criterion.

• A useful approach to this filter-optimization problem is to minimize the mean-square value of the error signal, defined as the difference between some desired response and the actual filter output. For stationary inputs, the resulting solution is commonly known as the Wiener filter.

• The Wiener filter is inadequate for dealing with situations in which nonstationarity of the signal and/or noise is intrinsic to the problem. In such situations, the optimum filter has to assume a time-varying form. A highly successful solution to this more difficult problem is found in the Kalman filter.
2. Linear Estimation with Mean-Square Error Criterion

• Fig. 3.1 shows the block schematic of a linear discrete-time filter W(z) for estimating a desired signal d(n) based on an excitation x(n).

[Fig. 3.1: The input x(n) is filtered by W(z) to give the output y(n), which is subtracted from the desired signal d(n) to form the estimation error e(n).]

• We assume that both x(n) and d(n) are random processes (discrete-time random signals). The filter output is y(n) and e(n) is the estimation error.

• To find the optimum filter parameters, a cost function or performance function must be selected. In choosing a performance function the following points have to be considered:

(1) The performance function must be mathematically tractable.

(2) The performance function should preferably have a single minimum, so that the optimum set of filter parameters can be selected unambiguously.

• The number of minimum points of a performance function is closely related to the filter structure. Recursive (IIR) filters, in general, result in performance functions that may have many minima, whereas non-recursive (FIR) filters are guaranteed to have a single global minimum point if a proper performance function is used.

• In the Wiener filter, the performance function is chosen to be

ξ = E[|e(n)|²]

This is also called the "mean-square error criterion".


3. Wiener Filter - the Transversal, Real-Valued Case

• Fig. 3.2 shows a transversal filter with tap weights w_0, w_1, ..., w_{N-1}.

[Fig. 3.2: Transversal (FIR) filter: the tap inputs x(n), x(n-1), ..., x(n-N+1) are obtained from a chain of unit delays z^{-1}, weighted by w_0, w_1, ..., w_{N-1} and summed to give y(n); e(n) = d(n) - y(n).]

Let W = [w_0  w_1  ...  w_{N-1}]^T
and X(n) = [x(n)  x(n-1)  ...  x(n-N+1)]^T    ...(1)

The output is
y(n) = Σ_{i=0}^{N-1} w_i x(n-i)
     = W^T X(n) = X^T(n) W    ...(2)

Thus we may write
e(n) = d(n) - y(n) = d(n) - X^T(n) W    ...(3)

The performance function, or cost function, is then given by
ξ = E[e²(n)]
  = E[(d(n) - W^T X(n))(d(n) - X^T(n) W)]
  = E[d²(n)] - W^T E[X(n) d(n)] - E[X^T(n) d(n)] W + W^T E[X(n) X^T(n)] W    ...(4)

Now we define the N×1 cross-correlation vector
P ≡ E[X(n) d(n)] = [p_0  p_1  ...  p_{N-1}]^T    ...(5)
and the N×N autocorrelation matrix
R ≡ E[X(n) X^T(n)]
  = [ r_{00}       r_{01}       ...  r_{0,N-1}   ]
    [ r_{10}       r_{11}       ...  r_{1,N-1}   ]    ...(6)
    [ ...          ...               ...         ]
    [ r_{N-1,0}    r_{N-1,1}    ...  r_{N-1,N-1} ]

Also we note that
E[d(n) X^T(n)] = P^T  and  W^T P = P^T W
Then we obtain
ξ = E[d²(n)] - 2 W^T P + W^T R W    ...(7)

Equation (7) is a quadratic function of the tap-weight vector W with a single global minimum. Note that R has to be a positive definite matrix in order to have a unique minimum point in the w-space.

• Minimization of the performance function

To obtain the set of tap weights that minimize the performance function, we set
∂ξ/∂w_i = 0  for i = 0, 1, ..., N-1
or ∇ξ = 0    ...(8)
where ∇ is the gradient operator defined as the column vector
∇ = [∂/∂w_0  ∂/∂w_1  ...  ∂/∂w_{N-1}]^T
and the zero vector 0 is defined as the N-component vector
0 = [0  0  ...  0]^T
Equation (7) can be expanded as
ξ = E[d²(n)] - 2 Σ_{l=0}^{N-1} p_l w_l + Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} w_l w_m r_{lm}    ...(9)
and the double sum Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} w_l w_m r_{lm} can be expanded as
Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} w_l w_m r_{lm}
  = Σ_{l≠i} Σ_{m≠i} w_l w_m r_{lm} + w_i Σ_{m≠i} w_m r_{im} + w_i Σ_{l≠i} w_l r_{li} + w_i² r_{ii}    ...(10)
Then we obtain
∂ξ/∂w_i = -2 p_i + Σ_{l=0}^{N-1} w_l (r_{li} + r_{il})    ...(11)
for i = 0, 1, 2, ..., N-1.

By setting ∂ξ/∂w_i = 0, we obtain
Σ_{l=0}^{N-1} w_l (r_{li} + r_{il}) = 2 p_i    ...(12)

Note that
r_{li} = E[x(n-l) x(n-i)] = φ_xx(i-l)
By the symmetry property of the autocorrelation function of a real-valued signal, we have the relation r_{li} = r_{il}.

Equation (12) then becomes
Σ_{l=0}^{N-1} r_{il} w_l = p_i  for i = 0, 1, 2, ..., N-1

In matrix notation, we then obtain

R W_op = P    ...(13)

where W_op is the optimum tap-weight vector.

Equation (13) is also known as the Wiener-Hopf equation, which has the solution

W_op = R^{-1} P    ...(14)

assuming that R is invertible.


The minimum value of ξ is

ξ_min = E[d²(n)] - W_op^T P
      = E[d²(n)] - W_op^T R W_op    ...(15)

Equation (15) can also be expressed as

ξ_min = E[d²(n)] - P^T R^{-1} P    ...(16)
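As a minimal numerical sketch of equations (13)-(16), the snippet below (assuming NumPy; the 3-tap R, P and E[d²(n)] values are illustrative, not taken from the text) solves the Wiener-Hopf equation and evaluates ξ_min:

    import numpy as np

    # illustrative (assumed) second-order statistics for a 3-tap filter
    R = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])   # N x N autocorrelation matrix, eq. (6)
    P = np.array([1.0, 0.7, 0.3])      # N x 1 cross-correlation vector, eq. (5)
    Ed2 = 2.0                          # assumed value of E[d^2(n)]

    W_op = np.linalg.solve(R, P)       # Wiener-Hopf solution, eq. (14): W_op = R^-1 P
    xi_min = Ed2 - P @ W_op            # eq. (16): xi_min = E[d^2(n)] - P^T R^-1 P
    print(W_op, xi_min)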
• Example 3.1: A modelling problem

[Figure: the plant is driven by x(n) and its output, plus the noise v(n), forms d(n); the Wiener filter W(z) is driven by the same x(n) and produces y(n); e(n) = d(n) - y(n).]

Assume that
d(n) = 2x(n) + 3x(n-1) + v(n)
where v(n) is a stationary white process with σ_v² = 0.1, and x(n) is also a white process, with unit variance σ_x² = 1.

Note: under this assumption, the actual system function of the plant is 2 + 3z^{-1}.

Design an optimum filter (Wiener filter) to model the plant.


Question: how do we choose the number of taps (N)?

Once N is determined, the procedure to solve the problem is straightforward.

If N = 2 is chosen (p.55 of the textbook), then
R = [1  0; 0  1],  P = [2  3]^T
and
ξ = 13.1 - 4w_0 - 6w_1 + w_0² + w_1²
thus
W_op = [w_{0,0}  w_{0,1}]^T = [2  3]^T
ξ_min = 0.1
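As a sanity check, Example 3.1 can be reproduced numerically. The sketch below (a NumPy simulation; the seed and sample size are arbitrary choices) estimates R and P from generated data and solves the Wiener-Hopf equation:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    x = rng.standard_normal(n)                        # white input, unit variance
    v = np.sqrt(0.1) * rng.standard_normal(n)         # white noise, variance 0.1
    x1 = np.concatenate(([0.0], x[:-1]))              # x(n-1)
    d = 2 * x + 3 * x1 + v                            # d(n) = 2x(n) + 3x(n-1) + v(n)

    X = np.stack([x, x1])                             # tap-input vectors X(n) = [x(n), x(n-1)]^T
    R = (X @ X.T) / n                                 # sample estimate of E[X(n) X^T(n)]
    P = X @ d / n                                     # sample estimate of E[X(n) d(n)]

    W_op = np.linalg.solve(R, P)                      # should be close to [2, 3]
    xi_min = np.mean(d ** 2) - P @ W_op               # should be close to 0.1
    print(W_op, xi_min)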
4. Principle of Orthogonality - An Alternative Approach to the Design of the Wiener Filter

• The cost function (or performance function) is given by
ξ = E[e²(n)]    ...(17)
∂ξ/∂w_i = E[2 e(n) ∂e(n)/∂w_i]    ...(18)
for i = 0, 1, 2, ..., N-1, where e(n) = d(n) - y(n).

Since d(n) is independent of w_i, we get
∂e(n)/∂w_i = -∂y(n)/∂w_i = -x(n-i)    ...(19)

Then we obtain
∂ξ/∂w_i = -2 E[e(n) x(n-i)]    ...(20)
for i = 0, 1, 2, ..., N-1

• When the Wiener filter tap weights are set to their optimal values, ∂ξ/∂w_i = 0. Hence, if e_0(n) is the estimation error when the w_i are set to their optimal values, equation (20) becomes
E[e_0(n) x(n-i)] = 0  for i = 0, 1, ..., N-1    ...(21)
That is, the estimation error is uncorrelated with the filter tap inputs x(n-i). This is known as "the principle of orthogonality".

• We can also show that the optimal filter output is uncorrelated with the estimation error. That is,
E[e_0(n) y_0(n)] = 0    ...(22)
This result indicates that the optimized Wiener filter output and the estimation error are "orthogonal".
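The orthogonality relations (21) and (22) are easy to verify numerically. Continuing the simulation of Example 3.1 (a sketch assuming NumPy; the optimum weights [2, 3] are taken from that example), the sample correlations below all come out close to zero:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    x = rng.standard_normal(n)
    x1 = np.concatenate(([0.0], x[:-1]))                   # x(n-1)
    d = 2 * x + 3 * x1 + np.sqrt(0.1) * rng.standard_normal(n)

    w_opt = np.array([2.0, 3.0])                           # optimum tap weights from Example 3.1
    y0 = w_opt[0] * x + w_opt[1] * x1                      # optimum filter output
    e0 = d - y0                                            # optimum estimation error

    # sample estimates of E[e0(n)x(n)], E[e0(n)x(n-1)], E[e0(n)y0(n)]; all near zero
    print(np.mean(e0 * x), np.mean(e0 * x1), np.mean(e0 * y0))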
5. Normalized Performance Function

• If the optimal filter tap weights are denoted by w_{0,l}, l = 0, 1, 2, ..., N-1, the estimation error is then given by
e_0(n) = d(n) - Σ_{l=0}^{N-1} w_{0,l} x(n-l)    ...(23)
and then
d(n) = e_0(n) + y_0(n)    ...(24)
E[d²(n)] = E[e_0²(n)] + E[y_0²(n)] + 2 E[e_0(n) y_0(n)]    ...(25)

• We may note that
E[e_0²(n)] = ξ_min    ...(26)
and, since E[e_0(n) y_0(n)] = 0, we obtain
ξ_min = E[d²(n)] - E[y_0²(n)]    ...(27)

• Define ρ as the normalized performance function
ρ = ξ / E[d²(n)]    ...(28)

• ρ = 1 when y(n) = 0.

• ρ reaches its minimum value, ρ_min, when the filter tap weights are chosen to achieve the minimum mean-squared error. This gives
ρ_min = 1 - E[y_0²(n)] / E[d²(n)]    ...(29)
We have 0 < ρ_min < 1.
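For instance, applying this to Example 3.1: ξ_min = 0.1 and E[d²(n)] = 13.1 (the constant term in the expression for ξ), so ρ_min = 0.1/13.1 ≈ 0.0076; the optimum filter removes all but about 0.8% of the mean-square value of d(n).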
6. Wiener Filter - Complex-Valued Case

• In many practical applications, the random signals are complex-valued; for example, the baseband signals of QPSK and QAM in data transmission systems.

• In the Wiener filter for processing complex-valued random signals, the tap weights are assumed to be complex variables.

• The estimation error, e(n), is also complex-valued. We may write
ξ = E[|e(n)|²] = E[e(n) e*(n)]    ...(30)

• The tap weight w_i is expressed as
w_i = w_{i,R} + j w_{i,I}    ...(31)
The gradient of a function with respect to a complex variable w = w_R + j w_I is defined as
∇^c_w ≡ ∂/∂w_R + j ∂/∂w_I    ...(32)

• The optimum tap weights of the complex-valued Wiener filter are obtained from the criterion
∇^c_{w_i} ξ = 0  for i = 0, 1, 2, ..., N-1
That is, ∂ξ/∂w_{i,R} = 0 and ∂ξ/∂w_{i,I} = 0    ...(33)

• Since ξ = E[e(n) e*(n)], we have
∇^c_{w_i} ξ = E[e(n) ∇^c_{w_i} e*(n) + e*(n) ∇^c_{w_i} e(n)]    ...(34)
Noting that
e(n) = d(n) - Σ_{k=0}^{N-1} w_k x(n-k)    ...(35)
we have
∇^c_{w_i} e(n) = -x(n-i) ∇^c_{w_i} w_i    ...(36)
∇^c_{w_i} e*(n) = -x*(n-i) ∇^c_{w_i} w_i*    ...(37)

Applying the definition (32), we obtain
∇^c_{w_i} w_i = ∂w_i/∂w_{i,R} + j ∂w_i/∂w_{i,I} = 1 + j(j) = 0
and
∇^c_{w_i} w_i* = ∂w_i*/∂w_{i,R} + j ∂w_i*/∂w_{i,I} = 1 + j(-j) = 2

Thus, equation (34) becomes
∇^c_{w_i} ξ = -2 E[e(n) x*(n-i)]    ...(38)
The optimum filter tap weights are obtained when ∇^c_{w_i} ξ = 0. This gives
E[e_0(n) x*(n-i)] = 0  for i = 0, 1, 2, ..., N-1    ...(39)
where e_0(n) is the optimum estimation error.

• Equation (39) is the "principle of orthogonality" for the case of complex-valued signals in the Wiener filter.

• The Wiener-Hopf equation can be derived as follows.
Define x(n) ≡ [x(n)  x(n-1)  ...  x(n-N+1)]^T    ...(40)
and w ≡ [w_0*  w_1*  ...  w_{N-1}*]^T    ...(41)
We can also write x(n) = [x*(n)  x*(n-1)  ...  x*(n-N+1)]^H    ...(42)
and w = [w_0  w_1  ...  w_{N-1}]^H    ...(43)
where H denotes complex-conjugate transpose (Hermitian).

Noting that
e_0(n) = d(n) - w_0^H x(n)    ...(44)
and, from equation (39), E[x(n) e_0*(n)] = 0    ...(45)
we have
E[x(n)(d*(n) - x^H(n) w_0)] = 0    ...(46)
and then
R w_0 = p    ...(47)
where R = E[x(n) x^H(n)] and p = E[x(n) d*(n)].

• Equation (47) is the Wiener-Hopf equation for the case of complex-valued signals.

• The minimum performance function is then expressed as
ξ_min = E[|d(n)|²] - w_0^H R w_0    ...(48)
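A minimal complex-valued sketch (assuming NumPy; the QPSK-like input and the 2-tap "plant" [1+1j, 0.5-0.3j] are hypothetical choices, not from the text) estimates R = E[x(n)x^H(n)] and p = E[x(n)d*(n)] from data and solves eq. (47):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 50_000
    # QPSK-like unit-power complex input
    x = (rng.choice([-1.0, 1.0], n) + 1j * rng.choice([-1.0, 1.0], n)) / np.sqrt(2)
    x1 = np.concatenate(([0.0 + 0.0j], x[:-1]))                  # x(n-1)
    d = (1 + 1j) * x + (0.5 - 0.3j) * x1 \
        + 0.1 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

    X = np.stack([x, x1])                   # tap-input vectors x(n) = [x(n), x(n-1)]^T
    R = (X @ X.conj().T) / n                # sample estimate of R = E[x(n) x^H(n)]
    p = X @ d.conj() / n                    # sample estimate of p = E[x(n) d*(n)]

    w0 = np.linalg.solve(R, p)              # Wiener-Hopf, eq. (47): R w0 = p
    # since y(n) = w0^H x(n), w0 converges to the conjugates of the plant taps:
    print(w0)                               # approx. [1-1j, 0.5+0.3j]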

• Remarks: in the derivation of the above Wiener filter, we have assumed that the filter is causal and has a finite impulse response, for both real-valued and complex-valued signals.
7. Unconstrained Wiener Filters

• The block diagram of a Wiener filter is as shown in Fig. 3.1 (the input x(n) is filtered by W(z) to give y(n), and e(n) = d(n) - y(n)). Here we assume that the filter W(z) may be non-causal and/or IIR.

• We consider only the case where the signals and the system parameters are real-valued. Moreover, we assume that the complex variable z remains on the unit circle, i.e. |z| = 1, so that z* = z^{-1}.

7.1 Performance Function

• The performance function is defined as
ξ = E[|e(n)|²]
where e(n) = d(n) - y(n).
In terms of autocorrelation and cross-correlation functions, we have
ξ = E[d²(n)] + E[y²(n)] - 2 E[y(n) d(n)]
  = φ_dd(0) + φ_yy(0) - 2 φ_yd(0)    ...(49)

Using the inverse Z-transform relations, we have
ξ = φ_dd(0) + (1/2πj) ∮_c Φ_yy(z) dz/z - (2/2πj) ∮_c Φ_yd(z) dz/z    ...(50)

Since Y(z) = X(z) W(z), we have Φ_yd(z) = W(z) Φ_xd(z).

If z is selected to be on the unit circle in the z-plane, then
Φ_yy(z) = |W(z)|² Φ_xx(z),
|W(z)|² = W(z) W*(z)  and  W*(z) = W(z^{-1})
Using these relations in equation (50), we obtain
ξ = φ_dd(0) + (1/2πj) ∮_c |W(z)|² Φ_xx(z) dz/z - (2/2πj) ∮_c W(z) Φ_xd(z) dz/z
  = φ_dd(0) + (1/2πj) ∮_c [W*(z) Φ_xx(z) - 2 Φ_xd(z)] W(z) dz/z    ...(51)
where the contour of integration, c, is the unit circle.

• The performance function given in equation (51) covers both IIR and FIR filters. It is the most general form of the Wiener filter performance function.

Example 3.2
Starting from the general expression for ξ in eq. (51), we demonstrate the special case of a real-valued FIR system.

Consider the case where the Wiener filter is an N-tap FIR filter with system function
W(z) = Σ_{l=0}^{N-1} w_l z^{-l}    ...(52)
Using eq. (52) in the expression for ξ given by eq. (51), we get


ξ = φ_dd(0) + (1/2πj) ∮_c (Σ_{l=0}^{N-1} w_l z^{-l})(Σ_{m=0}^{N-1} w_m z^{m}) Φ_xx(z) dz/z - (2/2πj) ∮_c (Σ_{l=0}^{N-1} w_l z^{-l}) Φ_xd(z) dz/z
  = φ_dd(0) + Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} w_l w_m · (1/2πj) ∮_c Φ_xx(z) z^{m-l-1} dz - 2 Σ_{l=0}^{N-1} w_l · (1/2πj) ∮_c Φ_xd(z) z^{-l-1} dz

Using the inverse z-transform relation, this gives

ξ = φ_dd(0) + Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} w_l w_m φ_xx(m-l) - 2 Σ_{l=0}^{N-1} w_l φ_xd(-l)

Now using the notations
φ_dd(0) = E[d²(n)]
φ_xd(-l) = p_l
φ_xx(m-l) = φ_xx(l-m) = r_{lm}
we obtain the expression

ξ = E[d²(n)] - 2 Σ_{l=0}^{N-1} p_l w_l + Σ_{l=0}^{N-1} Σ_{m=0}^{N-1} w_l w_m r_{lm}

Note that
P = E[X(n) d(n)], thus p_l = E[x(n-l) d(n)]
By definition, φ_xy(k) = E[x(n) y*(n-k)]
∴ φ_xd(-l) = E[x(n) d(n+l)] = E[x(n-l) d(n)]

Example 3.3
Modelling with an IIR filter

[Figure: the plant G(z) is driven by x(n) and its output, plus the noise v(n), forms d(n); the model W(z) = (1 - w_0 z^{-1})/(1 - w_1 z^{-1}) produces y(n); e(n) = d(n) - y(n).]

The plant is modeled by an IIR filter with
W(z) = (1 - w_0 z^{-1}) / (1 - w_1 z^{-1})
We assume that all involved signals and system parameters are real-valued.

x(n): input sequence, a white process with zero mean and σ_x² = 1.

v(n): additive noise.

x(n) and v(n) are uncorrelated. Thus, Φ_xx(z) = 1 and Φ_vx(z) = 0.

G(z): system function of the unknown plant, with Φ_xd(z) = G(z^{-1}) Φ_xx(z).

Using equation (51), the performance function ξ is obtained as

ξ = φ_dd(0) + [(w_1 - w_0)/(1 - w_1²)] · [(1 - w_0 w_1)/w_1] + w_0/w_1 - 2[((w_1 - w_0)/w_1) G(w_1^{-1}) + (w_0/w_1) G(∞)]

• We may find that there can be many local minima, and searching for the global minimum of ξ may not be a trivial task.
7.2 Optimum transfer function

• From the principle of orthogonality, we have
E[e_0(n) x(n-i)] = 0  for all integers i    ...(52)
where
e_0(n) = d(n) - Σ_{l=-∞}^{∞} w_{0,l} x(n-l)    ...(53)
Here we assume that all involved signals are real-valued.

• Combining eq. (52) and eq. (53), we obtain
Σ_{l=-∞}^{∞} w_{0,l} E[x(n-l) x(n-i)] = E[d(n) x(n-i)]    ...(54)
With
E[x(n-l) x(n-i)] = φ_xx(i-l)
and E[d(n) x(n-i)] = φ_dx(i),
taking the Z-transform of both sides of equation (54), we get
Φ_xx(z) W_0(z) = Φ_dx(z)    ...(55)
This is referred to as the "Wiener-Hopf" equation of unconstrained Wiener filtering.

• The optimum unconstrained Wiener filter is given by
W_0(z) = Φ_dx(z) / Φ_xx(z)    ...(56)
and
W_0(e^{jw}) = Φ_dx(e^{jw}) / Φ_xx(e^{jw})    ...(57)
This is the frequency response of the Wiener filter, where

Φ_dx(e^{jw}): cross-power spectral density of d(n) and x(n)

Φ_xx(e^{jw}): power spectral density of x(n)
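In practice the spectra in eq. (57) can be estimated from data. The sketch below (assuming NumPy and SciPy; it reuses the plant 2 + 3z^{-1} of Example 3.1, and the sample size, seed and segment length are arbitrary choices) estimates Φ_xx and Φ_dx with Welch's method and forms W_0(e^{jw}):

    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    n = 200_000
    x = rng.standard_normal(n)                                   # white input
    d = signal.lfilter([2.0, 3.0], [1.0], x) \
        + np.sqrt(0.1) * rng.standard_normal(n)                  # plant output plus noise

    f, Pxx = signal.welch(x, nperseg=1024)                       # estimate of the PSD of x(n)
    _, Pxd = signal.csd(x, d, nperseg=1024)                      # cross-spectral estimate
    W0 = Pxd / Pxx                                               # eq. (57)

    G = 2.0 + 3.0 * np.exp(-1j * 2 * np.pi * f)                  # true plant response
    print(np.max(np.abs(W0 - G)))                                # small estimation error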


8. Application of Wiener Filters Ⅰ: Modelling

• Consider the modelling problem depicted in Fig. 3.8.

[Fig. 3.8: The plant G(z) is driven by μ(n) and its output, plus the noise ν_0(n), forms d(n); the Wiener filter W(z) is driven by x(n) = μ(n) + ν_i(n) and produces y(n); e(n) = d(n) - y(n).]

• μ(n), ν_0(n), and ν_i(n) are assumed to be stationary, zero-mean, and uncorrelated with one another.

• The input to the Wiener filter is given by
x(n) = μ(n) + ν_i(n)    ...(58)
and the desired output is given by
d(n) = g_n * μ(n) + ν_0(n)    ...(59)
where g_n is the impulse response of the plant and * denotes convolution.

• The optimum unconstrained Wiener filter transfer function is
W_0(z) = Φ_dx(z) / Φ_xx(z)    ...(60)
Note that
φ_xx(k) = E[x(n) x(n-k)]
        = E[(μ(n) + ν_i(n))(μ(n-k) + ν_i(n-k))]
        = E[μ(n)μ(n-k)] + E[μ(n)ν_i(n-k)] + E[ν_i(n)μ(n-k)] + E[ν_i(n)ν_i(n-k)]
        = φ_μμ(k) + φ_{ν_iν_i}(k)    ...(61)

Taking the Z-transform of both sides of eq. (61), we get
Φ_xx(z) = Φ_μμ(z) + Φ_{ν_iν_i}(z)    ...(62)
• To calculate W_0(z) = Φ_dx(z)/Φ_xx(z), we must first find an expression for Φ_dx(z). We can show that
Φ_dx(z) = Φ_{d'μ}(z)
where d'(n) is the plant output when the additive noise ν_0(n) is excluded. Moreover, we have
Φ_{d'μ}(z) = G(z) Φ_μμ(z)
Thus Φ_dx(z) = G(z) Φ_μμ(z).

And we obtain
W_0(z) = G(z) · Φ_μμ(z) / (Φ_μμ(z) + Φ_{ν_iν_i}(z))    ...(63)

• We note that W_0(z) is equal to G(z) only when Φ_{ν_iν_i}(z) is equal to zero, that is, when ν_i(n) is zero for all values of n. The noise sequence ν_i(n) may be thought of as being introduced by a transducer that is used to obtain samples of the plant input.

• Replacing z by e^{jw} in equation (63), we obtain
W_0(e^{jw}) = G(e^{jw}) · Φ_μμ(e^{jw}) / (Φ_μμ(e^{jw}) + Φ_{ν_iν_i}(e^{jw}))    ...(64)

• Define
K(e^{jw}) ≡ Φ_μμ(e^{jw}) / (Φ_μμ(e^{jw}) + Φ_{ν_iν_i}(e^{jw}))    ...(65)
We obtain
W_0(e^{jw}) = K(e^{jw}) G(e^{jw})
With some mathematical manipulation, we can find the minimum mean-square error, ξ_min, expressed as
ξ_min = φ_{ν_0ν_0}(0) + (1/2π) ∫_{-π}^{π} (1 - K(e^{jw})) Φ_μμ(e^{jw}) |G(e^{jw})|² dw    ...(66)

• The best performance that one can expect from the unconstrained Wiener filter is
ξ_min = φ_{ν_0ν_0}(0)
and this happens when ν_i(n) = 0.

• The Wiener filter attempts to estimate the part of the target signal d(n) that is correlated with its own input x(n) and leaves the remaining part of d(n) (i.e. ν_0(n)) unaffected. This is known as "the principle of correlation cancellation".
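As a small numerical illustration of eqs. (64) and (65) (assuming NumPy; the flat spectra Φ_μμ = 1, Φ_{ν_iν_i} = 0.1 and the plant G(z) = 2 + 3z^{-1} are hypothetical choices for this sketch):

    import numpy as np

    w = np.linspace(-np.pi, np.pi, 512)
    Phi_mumu = np.ones_like(w)            # assumed flat PSD of mu(n)
    Phi_vivi = 0.1 * np.ones_like(w)      # assumed flat PSD of the input noise nu_i(n)
    G = 2.0 + 3.0 * np.exp(-1j * w)       # assumed plant frequency response

    K = Phi_mumu / (Phi_mumu + Phi_vivi)  # eq. (65)
    W0 = K * G                            # eq. (64): W0(e^jw) = K(e^jw) G(e^jw)

    # with flat spectra, K is the constant 1/1.1, so W0 is simply a scaled-down plant response
    print(K[0], np.allclose(W0, G / 1.1))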
9. Application of Wiener Filters Ⅱ: Inverse Modelling

• Fig. 3.9 depicts a channel equalization scenario.

[Fig. 3.9: The data s(n) pass through the channel H(z); additive noise ν(n) is added to form the received signal x(n); the equalizer W(z) produces y(n), which is compared with the desired response d(n) = s(n) to form e(n).]

s(n): data samples

H(z): system function of the communication channel

ν(n): additive channel noise, uncorrelated with s(n)

W(z): filter used to process the received noisy signal samples to recover the original data samples

• When the additive noise at the channel output is absent, the equalizer has the following trivial solution:
W_0(z) = 1 / H(z)    ...(67)
This implies that y(n) = s(n) and thus e(n) = 0 for all n.
(Demonstration: Fig. 3.10)

• When the channel noise, ν(n), is non-zero, the solution provided by equation (67) may not be optimal. In this case
x(n) = h_n * s(n) + ν(n)    ...(68)
and d(n) = s(n)    ...(69)
where h_n is the impulse response of the channel H(z).

From equation (68), we obtain
Φ_xx(z) = Φ_ss(z) |H(z)|² + Φ_νν(z)    ...(70)

Also,
Φ_dx(z) = Φ_sx(z) = H(z^{-1}) Φ_ss(z)    ...(71)
With |z| = 1, we may also write
Φ_dx(z) = H*(z) Φ_ss(z)    ...(72)
and then
W_0(z) = H*(z) Φ_ss(z) / (Φ_ss(z) |H(z)|² + Φ_νν(z))    ...(73)

This is the general solution to the equalization problem when there is no constraint on the equalizer length and the equalizer is also allowed to be non-causal.

• Equation (73) can be rewritten as
W_0(z) = 1 / (1 + Φ_νν(z)/(Φ_ss(z) |H(z)|²)) · 1/H(z)    ...(74)

• Let z = e^{jw} and define the parameter
ρ(e^{jw}) ≡ Φ_ss(e^{jw}) |H(e^{jw})|² / Φ_νν(e^{jw})    ...(75)
where Φ_ss(e^{jw}) |H(e^{jw})|² and Φ_νν(e^{jw}) are the signal power spectral density and the noise power spectral density, respectively, at the channel output.
We obtain
W_0(e^{jw}) = ρ(e^{jw}) / (1 + ρ(e^{jw})) · 1/H(e^{jw})    ...(76)

Note that ρ(e^{jw}) is a non-negative quantity, since it is the signal-to-noise power spectral density ratio at the equalizer input.
Also,
0 ≤ ρ(e^{jw}) / (1 + ρ(e^{jw})) ≤ 1    ...(77)

• Cancellation of ISI and noise enhancement

Consider the optimized equalizer with frequency response
W_0(e^{jw}) = ρ(e^{jw}) / (1 + ρ(e^{jw})) · 1/H(e^{jw})
In the frequency regions where the noise is almost absent, the value of ρ(e^{jw}) is very large and hence
W_0(e^{jw}) ≈ 1 / H(e^{jw})
The ISI will be eliminated without any significant enhancement of the noise. On the other hand, in the frequency regions where the noise level is high, the value of ρ(e^{jw}) is not large and hence the equalizer does not approximate the channel inverse well. This, of course, prevents noise enhancement.
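A brief sketch of eqs. (75) and (76) (assuming NumPy; the channel H(z) = 1 + 0.5z^{-1} and the flat spectra Φ_ss = 1, Φ_νν = 0.01 are hypothetical choices):

    import numpy as np

    w = np.linspace(-np.pi, np.pi, 512)
    H = 1.0 + 0.5 * np.exp(-1j * w)         # assumed channel frequency response
    Phi_ss = np.ones_like(w)                # assumed flat signal PSD
    Phi_vv = 0.01 * np.ones_like(w)         # assumed flat noise PSD

    rho = Phi_ss * np.abs(H) ** 2 / Phi_vv  # eq. (75): SNR density at the channel output
    W0 = rho / (1.0 + rho) / H              # eq. (76)

    # where rho is large the equalizer approximates 1/H; where rho is small it backs off
    print(np.max(np.abs(W0 - 1.0 / H)))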
10. Noise Cancellation

• Fig. 3.12 shows a typical noise canceller block diagram.

[Fig. 3.12: The primary input d(n) is formed from the signal s(n) plus the noise ν(n) filtered by G(z); the reference input x(n) is formed from the noise ν(n) plus the signal s(n) filtered by H(z); the Wiener filter W(z) processes x(n) and its output is subtracted from d(n) to form e(n).]

s(n): signal source
ν(n): noise source
s(n) and ν(n) are uncorrelated.
H(z) and G(z) are two system functions used to mix the two sources to form d(n) and x(n).
d(n): primary input
x(n): reference input

x(n) = ν(n) + h_n * s(n)    ...(78)
d(n) = s(n) + g_n * ν(n)    ...(79)

• Since s(n) and ν(n) are uncorrelated with each other, we obtain
Φ_dx(z) = Φ^s_dx(z) + Φ^ν_dx(z)    ...(81)
where Φ^s_dx(z) is Φ_dx(z) when ν(n) = 0 for all n, and Φ^ν_dx(z) is Φ_dx(z) when s(n) = 0 for all n.
Because s(n) and ν(n) are uncorrelated with each other, their contributions to Φ_dx(z) can be considered separately.

Thus, we obtain
Φ^s_dx(z) = H*(z) Φ_ss(z)    ...(82)
and Φ^ν_dx(z) = G(z) Φ_νν(z)    ...(83)

Recall that |z| = 1. Finally, we get
Φ_dx(z) = H*(z) Φ_ss(z) + G(z) Φ_νν(z)    ...(84)
and
W_0(z) = (H*(z) Φ_ss(z) + G(z) Φ_νν(z)) / (Φ_νν(z) + Φ_ss(z) |H(z)|²)    ...(85)

• Equation (85) for noise cancellation may be thought of as a generalization of the results we obtained for the modelling and inverse modelling scenarios, given by equations (63) and (73).

• To minimize the mean-square value of the output error, we must strike a balance between noise cancellation and signal cancellation at the output of the noise canceller. Cancellation of the noise ν(n) occurs when the Wiener filter W(z) is chosen to be close to G(z), and cancellation of the signal s(n) occurs when W(z) is close to the inverse of H(z). In this sense, we may note that the noise canceller treats s(n) and ν(n) without making any distinction between them.

• Define
ρ_pri(e^{jw}): signal-to-noise PSD ratio at the primary input
ρ_ref(e^{jw}): signal-to-noise PSD ratio at the reference input
ρ_out(e^{jw}): signal-to-noise PSD ratio at the output

ρ_pri(e^{jw}) = Φ_ss(e^{jw}) / (|G(e^{jw})|² Φ_νν(e^{jw}))    ...(86)

ρ_ref(e^{jw}) = |H(e^{jw})|² Φ_ss(e^{jw}) / Φ_νν(e^{jw})    ...(87)

To calculate ρ_out(e^{jw}), we note that s(n) reaches the canceller output through two routes: one direct and one through the cascade of H(z) and W(z):
Φ^s_ee(e^{jw}) = |1 - H(e^{jw}) W(e^{jw})|² Φ_ss(e^{jw})    ...(88)
Similarly, ν(n) reaches the output through the routes G(z) and W(z):
Φ^ν_ee(e^{jw}) = |G(e^{jw}) - W(e^{jw})|² Φ_νν(e^{jw})    ...(89)

Replacing W(e^{jw}) by W_0(e^{jw}), we obtain
Φ^s_ee(e^{jw}) = |1 - G(e^{jw}) H(e^{jw})|² Φ²_νν(e^{jw}) Φ_ss(e^{jw}) / (Φ_νν(e^{jw}) + |H(e^{jw})|² Φ_ss(e^{jw}))²
and
Φ^ν_ee(e^{jw}) = |H(e^{jw})|² |1 - G(e^{jw}) H(e^{jw})|² Φ²_ss(e^{jw}) Φ_νν(e^{jw}) / (Φ_νν(e^{jw}) + |H(e^{jw})|² Φ_ss(e^{jw}))²

Hence, ρ_out(e^{jw}) can now be obtained as
ρ_out(e^{jw}) ≡ Φ^s_ee(e^{jw}) / Φ^ν_ee(e^{jw})
             = Φ_νν(e^{jw}) / (|H(e^{jw})|² Φ_ss(e^{jw}))
             = 1 / ρ_ref(e^{jw})    ...(90)

This is known as "power inversion" (Widrow et al., 1975).

** This result shows that the noise canceller works better when
ρ ref ( e jw ) is low.
Example 3.6
Two omni-directional antennas in a receiver

[Figure: antennas A and B are separated by a distance l. The desired signal s(n) arrives broadside and the interferer ν(n) arrives at an angle θ_0. Antenna A provides the primary input d(n); antenna B provides x(n), which is also passed through a 90° phase shifter to give x̃(n). The combination y(n) = w_0 x(n) + w_1 x̃(n) is subtracted from d(n) to form e(n).]

desired signal: s(n) = α(n) cos(nw_0)
interferer (jammer): ν(n) = β(n) cos(nw_0)

• s(n) arrives at the receiver (antennas) in the direction perpendicular to the line connecting A and B. ν(n) arrives at an angle θ_0 with respect to the direction of s(n).

• α(n) and β(n) are narrowband baseband signals. Thus s(n) and ν(n) can be treated as narrowband signals concentrated around w = w_0.

• Since s(n) and ν(n) may be treated as single tones, a filter with two degrees of freedom is sufficient for optimal filtering.

• The Wiener filter coefficients can be determined as follows.
s(n) arrives at the same time at both omni-antennas. ν(n) arrives at B first and arrives at A with a delay
δ_0 = l sin θ_0 / c
When the delay is normalized by the time step T, which corresponds to one increment of the time index n, this gives
δ_0 = l sin θ_0 / (cT)
Thus
d(n) = α(n) cos(nw_0) + β(n) cos[(n - δ_0) w_0]
x(n) = α(n) cos(nw_0) + β(n) cos(nw_0)
x̃(n) = α(n) sin(nw_0) + β(n) sin(nw_0)

R = [ E[x²(n)]       E[x(n) x̃(n)] ]
    [ E[x̃(n) x(n)]   E[x̃²(n)]     ]

With some mathematical manipulations, we obtain
R = (σ_α² + σ_β²)/2 · [1  0; 0  1]
where σ_α² and σ_β² are the variances of α(n) and β(n), respectively.

Also,
P = [E[d(n) x(n)]  E[d(n) x̃(n)]]^T = (1/2) [σ_α² + σ_β² cos(δ_0 w_0)   σ_β² sin(δ_0 w_0)]^T

The Wiener-Hopf equation R W_0 = P gives
W_0 = 1/(σ_α² + σ_β²) · [σ_α² + σ_β² cos(δ_0 w_0)   σ_β² sin(δ_0 w_0)]^T

The signal-to-noise ratio at the output is equal to σ_β²/σ_α², while the signal-to-noise ratio at the reference input is equal to σ_α²/σ_β².
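A quick check of the closed-form solution above (assuming NumPy; the values σ_α² = 1, σ_β² = 4 and δ_0 w_0 = π/3 are arbitrary choices for illustration):

    import numpy as np

    sa2, sb2, d0w0 = 1.0, 4.0, np.pi / 3        # assumed variances and normalized delay-phase

    R = (sa2 + sb2) / 2 * np.eye(2)             # autocorrelation matrix from the example
    P = 0.5 * np.array([sa2 + sb2 * np.cos(d0w0),
                        sb2 * np.sin(d0w0)])    # cross-correlation vector

    W0 = np.linalg.solve(R, P)                  # Wiener-Hopf: R W0 = P
    W0_closed = np.array([sa2 + sb2 * np.cos(d0w0),
                          sb2 * np.sin(d0w0)]) / (sa2 + sb2)
    print(np.allclose(W0, W0_closed))           # True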
