1. Introduction
• Wiener filters are a class of optimum linear filters which involve linear
estimation of a desired signal sequence from another, related sequence.
• Fig. 3.1 shows the block schematic of a linear discrete-time filter W(z)
for estimating a desired signal d(n) based on an excitation x(n).

[Fig. 3.1: x(n) → W(z) → y(n); the error e(n) = d(n) − y(n) is formed by subtracting y(n) from d(n).]
• We assume that both x(n) and d(n) are random processes (discrete-time
random signals). The filter output is y(n), and e(n) is the estimation error.
• Fig. 3.2 shows a transversal filter with tap weights w_0, w_1, …, w_{N−1}.
[Fig. 3.2: transversal (FIR) filter structure; the delayed tap inputs are weighted and summed to give y(n), and e(n) = d(n) − y(n).]
Let W = [w_0 w_1 … w_{N−1}]^T
and X(n) = [x(n) x(n−1) … x(n−N+1)]^T …(1)
The output is
y(n) = Σ_{i=0}^{N−1} w_i x(n−i) = W^T X(n) = X^T(n) W …(2)
Thus we may write
e(n) = d(n) − y(n) = d(n) − X^T(n) W …(3)
The performance function, or cost function, is then given by
ξ = E[e²(n)]
 = E[(d(n) − W^T X(n))(d(n) − X^T(n) W)]
 = E[d²(n)] − W^T E[X(n)d(n)] − E[X^T(n)d(n)] W + W^T E[X(n)X^T(n)] W …(4)
To obtain the set of tap weights that minimizes the performance function,
we set
∂ξ/∂w_i = 0 for i = 0, 1, …, N−1,
or ∇ξ = 0 …(8)
To evaluate ∂ξ/∂w_i, separate out the terms of the quadratic form that involve w_i:
Σ_{l=0}^{N−1} Σ_{m=0}^{N−1} w_l w_m r_lm = Σ_{l≠i} Σ_{m≠i} w_l w_m r_lm + w_i Σ_{m≠i} w_m r_im + w_i Σ_{l≠i} w_l r_li + w_i² r_ii …(10)
Then we obtain
∂ξ/∂w_i = −2p_i + Σ_{l=0}^{N−1} w_l (r_li + r_il) …(11)
and setting this to zero gives
Σ_{l=0}^{N−1} w_l (r_li + r_il) = 2p_i …(12)
Note that
r_li = E[x(n−l)x(n−i)] = φ_xx(i−l)
By the symmetry property of the autocorrelation function of a real-valued
signal, we have the relation r_li = r_il, so eq. (12) reduces to
Σ_{l=0}^{N−1} r_il w_l = p_i for i = 0, 1, 2, …, N−1
The minimum mean-square error is then
ξ_min = E[d²(n)] − W_o^T p
where W_o is the optimum tap-weight vector and p = [p_0 … p_{N−1}]^T.
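The equations above are a linear system R W_o = p. As a minimal numerical sketch (the 2-tap values of R, p, and E[d²(n)] below are made up for illustration), the optimal weights and ξ_min can be computed directly:

```python
import numpy as np

# Hypothetical 2-tap example: autocorrelation matrix R with entries
# r_il = phi_xx(i - l), cross-correlation vector p, and E[d^2(n)].
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.7, 0.5])
Ed2 = 1.0

# Solve the Wiener-Hopf equations R W_o = p.
W_o = np.linalg.solve(R, p)

# Minimum mean-square error: xi_min = E[d^2(n)] - W_o^T p.
xi_min = Ed2 - W_o @ p

print(W_o)     # close to [0.6, 0.2]
print(xi_min)  # close to 0.48
```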
[Figure: modelling. x(n) drives both the plant, whose output plus the disturbance v(n) forms d(n), and the filter W(z), whose output y(n) is subtracted from d(n) to give e(n).]

Assuming that
d(n) = 2x(n) + 3x(n−1) + v(n)
with v(n) uncorrelated with x(n), we find
w_{0,0} = 2, w_{0,1} = 3
and ξ_min = 0.1
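This result can be checked numerically. The sketch below assumes (for illustration) that x(n) and v(n) are independent white Gaussian sequences with unit variance and variance 0.1 respectively, so that ξ_min = E[v²(n)] = 0.1; it estimates R and p from data and solves for the tap weights:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
x = rng.standard_normal(N)                 # white excitation, unit variance
v = np.sqrt(0.1) * rng.standard_normal(N)  # plant noise, variance 0.1

# Plant: d(n) = 2 x(n) + 3 x(n-1) + v(n)
d = 2.0 * x + 3.0 * np.concatenate(([0.0], x[:-1])) + v

# Tap-input vector X(n) = [x(n), x(n-1)]^T; estimate R and p by averaging.
X = np.stack([x[1:], x[:-1]])              # shape (2, N-1)
R = X @ X.T / X.shape[1]
p = X @ d[1:] / X.shape[1]

w = np.linalg.solve(R, p)
xi_min = np.mean(d[1:] ** 2) - w @ p

print(w)       # close to [2, 3]
print(xi_min)  # close to 0.1
```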
4. Principle of Orthogonality - An Alternative Approach to the Design of the Wiener Filter
• We can show that the optimal filter output is uncorrelated with the
estimation error. That is,
E[e_0(n) y_0(n)] = 0 …(22)
This result indicates that the optimized Wiener filter output and the
estimation error are "orthogonal".
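A quick numerical check of eq. (22), using the plant-modelling example above (assuming, for illustration, white unit-variance x(n) and disturbance variance 0.1, so the optimal weights are 2 and 3):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = rng.standard_normal(N)
v = np.sqrt(0.1) * rng.standard_normal(N)
x1 = np.concatenate(([0.0], x[:-1]))    # x(n-1)

d = 2.0 * x + 3.0 * x1 + v              # desired signal
y0 = 2.0 * x + 3.0 * x1                 # optimal filter output
e0 = d - y0                             # estimation error (= v(n) here)

# Principle of orthogonality: E[e0(n) y0(n)] should vanish.
print(np.mean(e0 * y0))  # close to 0
```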
5. Normalized Performance Function
• If the optimal filter tap weights are denoted by w_{0,l}, l = 0, 1, 2, …, N−1,
the estimation error is then given by
e_0(n) = d(n) − Σ_{l=0}^{N−1} w_{0,l} x(n−l) …(23)
and then
d(n) = e_0(n) + y_0(n) …(24)
so that
E[d²(n)] = E[e_0²(n)] + E[y_0²(n)] + 2E[e_0(n)y_0(n)]
Since E[e_0(n)y_0(n)] = 0, we obtain
ξ_min = E[d²(n)] − E[y_0²(n)] …(27)
• ξ reaches its minimum value, ξ_min, when the filter tap weights are
chosen to achieve the minimum mean-squared error. Normalizing by E[d²(n)]
gives
ρ_min = 1 − E[y_0²(n)] / E[d²(n)] …(29)
We have 0 ≤ ρ_min ≤ 1.
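For the plant-modelling example above, eq. (29) can be evaluated in closed form; this is a sketch under the assumed statistics (white unit-variance x(n), disturbance power 0.1):

```python
# Plant-modelling example: d(n) = 2 x(n) + 3 x(n-1) + v(n), assuming
# (as an illustration) white unit-variance x(n) and E[v^2(n)] = 0.1.
E_y0_sq = 2.0**2 + 3.0**2         # E[y0^2] = 4 + 9 = 13
E_d_sq = E_y0_sq + 0.1            # E[d^2] = 13.1

rho_min = 1.0 - E_y0_sq / E_d_sq  # eq. (29)
print(rho_min)                    # 0.1/13.1, roughly 0.0076
```

A small ρ_min indicates that the filter explains almost all of the power of d(n).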
6. Wiener Filter - Complex-Valued Case
That is,
∂ξ/∂w_{i,R} = 0 and ∂ξ/∂w_{i,I} = 0 …(33)
where w_{i,R} and w_{i,I} denote the real and imaginary parts of w_i.
Noting that
e(n) = d(n) − Σ_{k=0}^{N−1} w_k x(n−k) …(35)
and that, for the complex gradient operator applied to w_i*,
∇_i^c w_i* = ∂w_i*/∂w_{i,R} + j ∂w_i*/∂w_{i,I} = 1 + j(−j) = 2
Noting that
e(n) = d(n) − w_0^H x(n) …(44)
we have
E[x(n)(d*(n) − x^H(n) w_0)] = 0 …(46)
and then
R w_0 = p …(47)
where R = E[x(n) x^H(n)]
and p = E[x(n) d*(n)]
• Remarks:
In deriving the above Wiener filters we have assumed that the filter is
causal and has a finite impulse response, for both the real-valued and the
complex-valued cases.
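The complex-valued solution R w_0 = p of eq. (47) can also be sketched numerically. Here the 2-tap plant coefficients c and the circular white complex excitation are made-up assumptions; with y(n) = w^H x(n) and d(n) = c^H x(n) + v(n), one has p = R c, so the optimum recovers w_0 = c:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000

def cnoise(n):
    # circular complex white Gaussian noise, unit variance
    return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

x = cnoise(N)
v = 0.1 * cnoise(N)
X = np.stack([x[1:], x[:-1]])           # tap-input vectors x(n) = [x(n), x(n-1)]^T

c = np.array([1.0 + 1.0j, 0.5 - 0.5j])  # assumed plant coefficients
d = np.conj(c) @ X + v[1:]              # d(n) = c^H x(n) + v(n)

# R = E[x x^H], p = E[x d*], estimated by sample averages.
R = X @ X.conj().T / X.shape[1]
p = X @ np.conj(d) / X.shape[1]

w0 = np.linalg.solve(R, p)
print(w0)  # close to c = [1+1j, 0.5-0.5j]
```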
7. Unconstrained Wiener Filters

[Fig. 3.1 (repeated): x(n) → W(z) → y(n); e(n) = d(n) − y(n).]
• We consider only the case where the signals and the system parameters are
real-valued. Moreover, we assume that the complex variable z remains on the
unit circle, i.e. |z| = 1, so that z* = z^{−1}.
The performance function is ξ = E[e²(n)], where e(n) = d(n) − y(n).
In terms of autocorrelation and cross-correlation functions, we have
ξ = E[d²(n)] + E[y²(n)] − 2E[y(n)d(n)]
 = φ_dd(0) + φ_yy(0) − 2φ_yd(0) …(49)
Since Y(z) = X(z)W(z), we have Φ_yd(z) = W(z)Φ_xd(z) and Φ_yy(z) = |W(z)|²Φ_xx(z),
with |W(z)|² = W(z)W*(z) and W*(z) = W(z^{−1}) on the unit circle.
Using these relations in equation (50), we obtain
ξ = φ_dd(0) + (1/2πj) ∮_C |W(z)|² Φ_xx(z) dz/z − 2 · (1/2πj) ∮_C W(z) Φ_xd(z) dz/z
 = φ_dd(0) + (1/2πj) ∮_C [W*(z)Φ_xx(z) − 2Φ_xd(z)] W(z) dz/z …(51)
Example 3.2
We use the general expression for ξ, eq. (51), to recover the special case
where the Wiener filter is an N-tap FIR filter with system function
W(z) = Σ_{l=0}^{N−1} w_l z^{−l} …(52)
Note that
p = E[X(n)d(n)]
Thus p_l = E[x(n−l)d(n)]
By definition, φ_xy(k) = E[x(n)y*(n−k)]
∴ φ_xd(−l) = E[x(n)d(n+l)] = E[x(n−l)d(n)] = p_l
Example 3.3
Modelling with an IIR filter

[Figure: x(n) drives the plant G(z), whose output plus the noise v(n) forms d(n); x(n) also drives W(z), whose output y(n) is subtracted from d(n) to give e(n).]

The plant is modelled by an IIR filter with
W(z) = (1 − w_0 z^{−1}) / (1 − w_1 z^{−1})
We assume that all involved signals and system parameters are real-valued.
Then
Φ_xd(z) = G(z^{−1})Φ_xx(z)
• We may find that there can be many local minima, and searching for the
global minimum of ξ may not be a trivial task.
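To illustrate, the sketch below evaluates ξ on a grid of (w_0, w_1) for an assumed plant G(z) = (1 − 0.5z^{−1})/(1 − 0.3z^{−1}) (so the true parameters are w_0 = 0.5, w_1 = 0.3), with white input and no noise; the error surface is not quadratic in (w_0, w_1), so we locate the minimum by brute-force search rather than by solving a linear system:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4_000
x = rng.standard_normal(N)

def iir(w0, w1, x):
    # y(n) = x(n) - w0*x(n-1) + w1*y(n-1):  W(z) = (1 - w0 z^-1)/(1 - w1 z^-1)
    y = np.zeros_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = x[n] - w0 * x[n - 1] + w1 * y[n - 1]
    return y

# Assumed plant: G(z) = (1 - 0.5 z^-1)/(1 - 0.3 z^-1), noise-free output.
d = iir(0.5, 0.3, x)

# Brute-force search of the error surface xi(w0, w1) over a grid.
grid = np.round(np.arange(-0.8, 0.85, 0.1), 2)
best = min((np.mean((d - iir(w0, w1, x)) ** 2), w0, w1)
           for w0 in grid for w1 in grid)
xi_hat, w0_hat, w1_hat = best
print(w0_hat, w1_hat)  # the minimizing grid point, near (0.5, 0.3)
```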
7.2 Optimum transfer function
For the unconstrained (two-sided, infinitely long) filter, the Wiener-Hopf
equations become
Σ_{l=−∞}^{∞} w_{0,l} E[x(n−l)x(n−i)] = E[d(n)x(n−i)] …(54)
with E[x(n−l)x(n−i)] = φ_xx(i−l)
[Figure: μ(n) drives the plant G(z); the plant output plus the noise ν_0(n) forms d(n). The filter input is x(n) = μ(n) + ν_i(n); W(z) produces y(n), and e(n) = d(n) − y(n).]
Thus Φ_dx(z) = G(z)Φ_μμ(z)
And we obtain
W_0(z) = G(z) Φ_μμ(z) / (Φ_μμ(z) + Φ_νiνi(z)) …(63)
• Define
K(e^{jω}) ≡ Φ_μμ(e^{jω}) / (Φ_μμ(e^{jω}) + Φ_νiνi(e^{jω})) …(65)
We obtain
W_0(e^{jω}) = K(e^{jω}) G(e^{jω})
With some mathematical manipulation, we can find the minimum mean-square
error, ξ_min, expressed by
ξ_min = φ_ν0ν0(0) + (1/2π) ∫_{−π}^{π} (1 − K(e^{jω})) Φ_μμ(e^{jω}) |G(e^{jω})|² dω …(66)
• The best performance that one can expect from the unconstrained Wiener
filter is
ξ_min = φ_ν0ν0(0)
• The Wiener filter attempts to estimate the part of the target signal d(n)
that is correlated with its own input x(n), and leaves the remaining part
of d(n) (i.e. ν_0(n)) unaffected. This is known as "the principle of
correlation cancellation".
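A small frequency-domain sketch of eqs. (63)-(65), with assumed PSDs (low-pass-shaped Φ_μμ, flat Φ_νiνi) and an assumed first-order G(e^{jω}); since 0 < K(e^{jω}) < 1 whenever both PSDs are positive, the unconstrained optimum W_0 = K·G never amplifies G:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 512)

# Assumed PSDs: low-pass signal spectrum, flat noise spectrum.
Phi_mm = 1.0 / (1.0 + (w / 0.5) ** 2)
Phi_nn = 0.1 * np.ones_like(w)

# Assumed plant frequency response G(e^jw) (first-order FIR, illustrative).
G = 1.0 - 0.5 * np.exp(-1j * w)

K = Phi_mm / (Phi_mm + Phi_nn)          # eq. (65)
W0 = K * G                              # optimal unconstrained filter

print(K.min(), K.max())  # both strictly inside (0, 1)
```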
9. Application of Wiener Filters II: Inverse Modelling

[Figure: channel equalization. s(n) → channel H(z), plus additive channel noise ν(n), gives x(n); x(n) → equalizer W(z) → y(n); e(n) = d(n) − y(n), with the desired signal d(n) = s(n).]

• When the additive noise at the channel output is absent, the equalizer
has the following trivial solution:
W_0(z) = 1/H(z) …(67)
This implies that y(n) = s(n) and thus e(n) = 0 for all n.
Demonstration: Fig. 3.10
and d(n) = s(n) …(69)
Also,
Φ_dx(z) = Φ_sx(z) = H(z^{−1})Φ_ss(z) …(71)
On the unit circle this may be written as
Φ_dx(z) = H*(z)Φ_ss(z) …(72)
and then
W_0(z) = H*(z)Φ_ss(z) / (Φ_ss(z)|H(z)|² + Φ_νν(z)) …(73)
where Φ_ss(e^{jω})|H(e^{jω})|² and Φ_νν(e^{jω}) are the signal power spectral
density and the noise power spectral density, respectively, at the channel
output.
Defining ρ(e^{jω}) ≡ Φ_ss(e^{jω})|H(e^{jω})|² / Φ_νν(e^{jω}) as the
signal-to-noise PSD ratio at the channel output, we obtain
W_0(e^{jω}) = [ρ(e^{jω}) / (1 + ρ(e^{jω}))] · 1/H(e^{jω}) …(76)
In the frequency regions where the noise is almost absent, the value of
ρ(e^{jω}) is very large and hence
W_0(e^{jω}) ≈ 1/H(e^{jω})
The ISI will then be eliminated without any significant enhancement of the
noise. On the other hand, in the frequency regions where the noise level is
high, the value of ρ(e^{jω}) is not large and hence the equalizer does not
approximate the channel inverse well. This is, of course, to prevent noise
enhancement.
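The behaviour of eq. (76) can be sketched on a frequency grid, assuming (for illustration) a channel H(e^{jω}) = 1 − 0.5e^{−jω}, a flat signal PSD, and two flat noise levels; at low noise W_0 approaches the channel inverse, at high noise it backs off:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 512)
H = 1.0 - 0.5 * np.exp(-1j * w)         # assumed channel
Phi_ss = np.ones_like(w)                # flat signal PSD

def equalizer(Phi_vv):
    rho = Phi_ss * np.abs(H) ** 2 / Phi_vv  # SNR PSD ratio at channel output
    return rho / (1.0 + rho) / H            # eq. (76)

W0_low_noise = equalizer(1e-6)
W0_high_noise = equalizer(10.0)

# Low noise: W0 ~ 1/H, so W0*H ~ 1 (ISI removed).
print(np.max(np.abs(W0_low_noise * H - 1.0)))   # tiny
# High noise: the equalizer does not invert the channel.
print(np.max(np.abs(W0_high_noise * H)))        # well below 1
```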
10. Noise Cancellation

[Figure: noise canceller. The primary input is d(n) = s(n) + g_n * ν(n), where the noise ν(n) reaches the primary input through G(z); the reference input x(n) = ν(n) + h_n * s(n) feeds the canceller W(z).]

x(n) = ν(n) + h_n * s(n) …(78)
d(n) = s(n) + g_n * ν(n) …(79)
• Since s(n) and ν(n) are uncorrelated with each other, we obtain
Φ_dx(z) = Φ_dx^s(z) + Φ_dx^ν(z) …(81)
Because s(n) and ν(n) are uncorrelated, their contributions to Φ_dx(z)
can be considered separately.
Thus, we obtain
Φ_dx^s(z) = H*(z)Φ_ss(z) …(82)
and Φ_dx^ν(z) = G(z)Φ_νν(z) …(83)
and
W_0(z) = [H*(z)Φ_ss(z) + G(z)Φ_νν(z)] / [Φ_νν(z) + Φ_ss(z)|H(z)|²] …(85)
• Define
ρ_pri(e^{jω}): signal-to-noise PSD ratio at the primary input,
ρ_pri(e^{jω}) = Φ_ss(e^{jω}) / (|G(e^{jω})|² Φ_νν(e^{jω})) …(86)
and ρ_ref(e^{jω}): signal-to-noise PSD ratio at the reference input,
ρ_ref(e^{jω}) = |H(e^{jω})|² Φ_ss(e^{jω}) / Φ_νν(e^{jω}) …(87)
The signal s(n) reaches the output e(n) directly (through d(n)) and through
the route H(z)W(z):
Φ_ee^s(e^{jω}) = |1 − H(e^{jω})W(e^{jω})|² Φ_ss(e^{jω}) …(88)
Similarly, ν(n) reaches the output through the routes G(z) and W(z):
Φ_ee^ν(e^{jω}) = |G(e^{jω}) − W(e^{jω})|² Φ_νν(e^{jω}) …(89)
Substituting the optimal solution (85) for W(e^{jω}) gives
Φ_ee^ν(e^{jω}) = |H(e^{jω})|² |1 − G(e^{jω})H(e^{jω})|² Φ_ss²(e^{jω}) Φ_νν(e^{jω}) / [Φ_νν(e^{jω}) + |H(e^{jω})|² Φ_ss(e^{jω})]²
The output signal-to-noise PSD ratio is therefore
ρ_out(e^{jω}) ≡ Φ_ee^s(e^{jω}) / Φ_ee^ν(e^{jω})
 = Φ_νν(e^{jω}) / (|H(e^{jω})|² Φ_ss(e^{jω}))
 = 1 / ρ_ref(e^{jω}) …(90)
• This result shows that the noise canceller works better when
ρ_ref(e^{jω}) is low.
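The identity (90) can be verified on a frequency grid by substituting the optimal W_0 of eq. (85) into (88) and (89); the transfer functions H, G and the PSDs below are illustrative assumptions:

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 256)

# Assumed transfer functions and PSDs (all choices illustrative).
H = 0.2 + 0.8 * np.exp(-1j * w)
G = 1.0 / (1.0 - 0.4 * np.exp(-1j * w))
Phi_ss = 1.0 / (1.0 + w ** 2)
Phi_vv = 0.3 * np.ones_like(w)

# Optimal canceller, eq. (85), evaluated on the unit circle.
W0 = (np.conj(H) * Phi_ss + G * Phi_vv) / (Phi_vv + Phi_ss * np.abs(H) ** 2)

Phi_ee_s = np.abs(1.0 - H * W0) ** 2 * Phi_ss   # eq. (88)
Phi_ee_v = np.abs(G - W0) ** 2 * Phi_vv         # eq. (89)

rho_out = Phi_ee_s / Phi_ee_v
rho_ref = np.abs(H) ** 2 * Phi_ss / Phi_vv      # eq. (87)

print(np.max(np.abs(rho_out * rho_ref - 1.0)))  # ~ 0, confirming eq. (90)
```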
Example 3.6

[Figure: single-tone noise cancellation. The reference tone x(n) is split into an in-phase branch x(n), weighted by w_0, and a 90°-phase-shifted branch x̃(n), weighted by w_1; the weighted sum y(n) is subtracted from the primary input d(n) to give e(n).]

• Since s(n) and ν(n) may be treated as single tones, a filter with
two degrees of freedom is sufficient for optimal filtering.
The cross-correlation vector is
P = [E[d(n)x(n)]; E[d(n)x̃(n)]] = (1/2) [σ_α² + σ_β² cos δ_0ω_0; σ_β² sin δ_0ω_0]
which gives
W_0 = [σ_α² + σ_β² cos δ_0ω_0; σ_β² sin δ_0ω_0] · 1/(σ_α² + σ_β²)