
Orthogonalization Techniques for Two-Dimensional Adaptive Filters

Jeffrey C. Strait and W. Kenneth Jenkins
Department of Electrical and Computer Engineering and The Coordinated Science Laboratory
University of Illinois, Urbana, IL 61801

Abstract
Image and video signal processing applications often require filters with unknown or time-varying characteristics. Two-dimensional adaptive filters have been examined recently as a proposed solution to these problems. Two system considerations have driven research on cost-effective acceleration algorithms for 2-D adaptive filters: first, the high data rates in digital video processing demand computational efficiency, and second, the nonstationary signal properties of images require optimized convergence speed. We present an overview of structures and algorithms developed to achieve an improved rate of convergence with reduced computational complexity. These include 2-D Newton-type adaptive filters and 2-D transform domain adaptive filters. The results are benchmarked against simple 2-D LMS and RLS adaptive filters.

1. Introduction
Adaptive filtering solutions naturally apply to classical signal processing problems such as data compression and noise removal [1][2]. The emerging widespread use of digital video in consumer electronics, communications, and multimedia applications presents similar technical obstacles which are amenable to two-dimensional adaptive processing. The computational and performance requirements of such methods demand optimization of the adaptive filter's complexity, convergence speed, and modeling flexibility. This paper offers a survey of two-dimensional filter structures and algorithms [3][4], with an emphasis on orthogonalization techniques designed to maximize the rate at which the filters converge.

An adaptive filter is defined by its structure and the algorithm used to iteratively modify its parameters or coefficients. The algorithm minimizes some predetermined error criterion, usually the mean-squared error. Common filter structures examined for 2-D adaptive filtering include various FIR and IIR implementations, combined with algorithms ranging in complexity from simple LMS-type to accelerated Newton-type updates.

2. Acceleration techniques
Consider a simple 2-D direct form FIR filter with coefficients h(n1,n2), output y(n1,n2), and input x(n1,n2):

y(n1,n2) = Σ_{m1=0}^{N-1} Σ_{m2=0}^{N-1} h(m1,m2) x(n1-m1, n2-m2)    (1)

Assuming some desired signal, d(n1,n2), is available, straightforward application of the 2-D LMS algorithm [3] to the filter minimizes the mean-square error, E[e²(n1,n2)], where e(n1,n2) = d(n1,n2) - y(n1,n2). Column-ordering the input samples beneath the N×N filter support produces the N²×1 vector

x(n1,n2) = [x(n1,n2) ... x(n1,n2-N+1) | ... | x(n1-N+1,n2) ... x(n1-N+1,n2-N+1)]ᵀ
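As a concrete illustration of (1) and the LMS coefficient update (a minimal sketch of our own, not code from the paper; the step size mu and the patch-based indexing are assumptions):

```python
import numpy as np

def lms_2d_step(h, x_patch, d, mu):
    """One 2-D LMS iteration at a single index point.

    h       -- (N, N) filter coefficients h(m1, m2)
    x_patch -- (N, N) input samples under the filter support at (n1, n2)
    d       -- desired sample d(n1, n2)
    mu      -- LMS step size
    """
    y = np.sum(h * x_patch)         # 2-D FIR output, equation (1)
    e = d - y                       # error e(n1, n2) = d(n1, n2) - y(n1, n2)
    h = h + 2.0 * mu * e * x_patch  # steepest-descent coefficient update
    return h, y, e
```

With a white input this update converges quickly; it is the correlated inputs discussed next that degrade it.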

1058-6393/96 $5.00 © 1996 IEEE — Proceedings of ASILOMAR-29


The 2-D LMS adaptive filter defined by (1) and (2) is plagued by poor performance when exposed to a correlated input signal. To compensate for the effects of eigenvalue disparity, orthogonalization is introduced by any one of several methods.

The recursive least squares (RLS) algorithm is a member of the Newton-type class of algorithms. By column-ordering the filter response and the input signal beneath the filter support as N²×1 vectors, the 2-D RLS algorithm follows from solving the least squares problem recursively with the matrix inversion lemma: the sum of squared errors is optimized over some region of support R, a recursive form is derived from it, and the recursion generates the algorithm. The filter will converge in approximately N² iterations, but the method of RLS requires a budget of O[N⁴] multiplications per iteration. It also has well-known numerical problems, which are exhibited in the presence of highly nonstationary signals. Regardless, it offers an attractive benchmark for comparison with other orthogonal methods.

By incorporating a Newton-type search algorithm instead of the steepest-descent method used by the LMS approach, the rate of convergence is no longer dependent on the statistical properties of the input. Hence, the following Newton-type algorithm effects orthogonalization of the input upon the adaptive algorithm:

h ← h + μ e(n1,n2) u(n1,n2),  where  R_xx u(n1,n2) = x(n1,n2)    (8)
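As one concrete Newton-type instance, the RLS recursion on column-ordered data can be sketched as follows (our illustration of the standard matrix-inversion-lemma form; the forgetting factor lam and the initialization of P are assumptions, not values from the paper):

```python
import numpy as np

def rls_2d_step(h, P, x_vec, d, lam=0.99):
    """One RLS iteration on column-ordered (N*N,) data.

    h     -- (N*N,) column-ordered filter coefficients
    P     -- (N*N, N*N) running estimate of the inverse autocorrelation matrix
    x_vec -- (N*N,) column-ordered input under the filter support
    d     -- desired sample
    lam   -- exponential forgetting factor
    """
    Px = P @ x_vec
    k = Px / (lam + x_vec @ Px)      # gain vector via the matrix inversion lemma
    e = d - h @ x_vec                # a priori error
    h = h + k * e                    # Newton-type coefficient update
    P = (P - np.outer(k, Px)) / lam  # O[N^4] update of the inverse
    return h, P, e
```

The O[N⁴] cost of maintaining P at every index point is exactly the burden the fast quasi-Newton approach is designed to avoid.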

The update incorporates an autocorrelation matrix estimate which must be computed iteratively. A simple biased, exponentially weighted 2-D lag coefficient estimator usually provides a consistent positive semidefinite autocorrelation matrix estimate once the exponential weighting factor is included and δI is added to R. The weighting factor allows the filter to perform reasonably well through statistical transition regions: a lower value of α allows the lag estimates to forget old data and adapt to new conditions.

Equation (8) is computationally intensive due to the required system solution R_xx u(n1,n2) = x(n1,n2), but a fast algorithm based on the block Levinson algorithm [5] reduces the complexity to O[N³]. We assume that over a contiguous block of N² pixels the input signal is nearly stationary. Therefore, we can impose Toeplitz-block Toeplitz structure on the 2-D autocorrelation matrix, since a 2-D stationary autocorrelation matrix is known to be Toeplitz-block Toeplitz. By column-ordering the filter coefficients and the input signal, the required Toeplitz-block Toeplitz system solution can be bundled within a block-processing fast quasi-Newton (FQN) algorithm to achieve the necessary savings. The approach exploits the relationship between the two systems R_xx u(n1,n2) = x(n1,n2) and R_xx u(n1+1,n2) = x(n1+1,n2): the two vectors x(n1,n2) and x(n1+1,n2) are related by a vector shift of length N, and this property allows efficient computation of the required system solution at the latter index point. The filter is indexed as in a multichannel configuration, but it can also be indexed in the perpendicular direction


using a transformation of the autocorrelation matrix based on a suitable permutation. This allows flexibility in selecting the shape of the block of index points which can be grouped together based on statistical similarity; the block can be square, rectangular, or linear. An exchange matrix E of dimension N²×N² is constructed from 1×N row vectors e_j, each with a 1 in the jth position and 0 elsewhere. The transformation ERE^T = R̃, when applied to the original system, generates a closely related system, R̃ ũ(n1,n2) = x̃(n1,n2), corresponding to perpendicular indexing.

The fast algorithm is summarized in tables 1 and 2 [5], and it can be applied to both the original system and the transformed system. Table 2 lists an efficient algorithm that successively computes system solutions at consecutive index points with O[N³] multiplications. Applied N²-1 times in succession, it keeps the overall complexity of the algorithm at O[N³], since the algorithm in table 1, which consumes O[N⁵] multiplications, is only executed once per block of N² pixels. The performance limitation is subsequently the quality of the autocorrelation matrix estimate.

Transform domain adaptive filter (TDAF) algorithms are also well suited to 2-D signal processing. Orthogonal transforms with power normalization can be used to accelerate the convergence of a gradient-based adaptive filter with a colored input signal. The following (possibly complex) LMS algorithm can be applied to the 2-D TDAF

where u(n1,n2) is the column-ordered vector formed by premultiplying the input column-ordered vector x(n1,n2) by the 2-D unitary transform T.

Channel normalization results from including the matrix Λu² = diag[σu²(0,0), σu²(1,0), ..., σu²(N-1,N-1)], where each diagonal component is σu²(m1,m2) = E[|u(m1,m2)|²].
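The normalized transform-domain update can be pictured as follows (our sketch; the DCT stands in for T, and the running-power smoothing factor beta, step size mu, and regularizer eps are assumptions):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal N-point DCT-II matrix; rows are the transform basis vectors."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
    C[0, :] /= np.sqrt(2.0)
    return C

def tdaf_step(w, p, x_patch, d, C, mu=0.2, beta=0.95, eps=1e-6):
    """Transform-domain LMS with per-channel power normalization.

    w -- (N*N,) transform-domain coefficients
    p -- (N*N,) running channel power estimates sigma_u^2(m1, m2)
    """
    u = (C @ x_patch @ C.T).ravel()      # u = T x: separable 2-D transform, column-ordered
    e = d - w @ u                        # a priori error
    p = beta * p + (1.0 - beta) * u * u  # update per-channel power estimates
    w = w + mu * e * u / (p + eps)       # channel-normalized LMS update
    return w, p, e
```

The time-domain filter can be recovered as C.T @ w.reshape(N, N) @ C; since the sum of the normalized per-channel steps is roughly N²·mu, keeping N²·mu below 2 keeps the mean update stable.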

Ideally, the Karhunen-Loeve Transform (KLT) is used to achieve optimal convergence, but this requires a priori knowledge of the input statistical properties. The KLT corresponding to the input autocorrelation matrix R_xx is constructed using as rows of T the orthonormal eigenvectors of R_xx. Therefore, with unitary Qx^H = [q1 ... qN²] and Λx = diag[λ1 ... λN²], the unitary similarity transformation is R_xx = Qx⁻¹ Λx Qx, and the KLT is given by T = Qx. However, since the statistical properties of the input process are usually unknown and time varying, the KLT cannot be implemented in practice. Researchers have found that many fixed transforms do provide good orthogonalization for a wide class of input signals. These include the Discrete Fourier Transform (DFT or FFT), the Discrete Cosine Transform (DCT), and the Walsh-Hadamard Transform (WHT). The DFT, for example, provides only approximate channel decorrelation, since it is well known that a "sliding" DFT implements a parallel bank of overlapping band-pass filters with center frequencies evenly distributed over the interval [0,2π]. Furthermore, the DFT (or FFT) is hampered by the fact that it requires complex arithmetic. It is still a very effective method of orthogonalization, which we compare here to the 2-D FQN algorithm.

System identification simulation results demonstrate the convergence performance of the methods discussed so far. In figure 1 we show a plot of the MSE vs. iteration for a 3x3, 2-D FIR recursive least squares adaptive filter with a low-pass input and model. As expected, the filter converges to the artificial noise floor in approximately N² = 9 iterations. The statistical properties of the input do not affect this rate, as long as the input is stationary. In figure 2 we present a similar experiment, again with 2-D second-order FIR adaptive filters, incorporating three different adaptive algorithms. The curve with the slowest convergence corresponds to the filter with the 2-D LMS algorithm.
The other two curves are for the 2-D TDAF (DFT) and the 2-D FQN algorithms. In all three cases the input signal is low-pass colored noise. Clearly the LMS filter is adversely affected by the coloring present in the input. The transform domain adaptive filter successfully compensates for much of the input correlation, but it does not do so completely, since the DFT is not the optimal transform. The FQN algorithm converges faster still, being suboptimal only in that it incorporates an autocorrelation matrix estimate instead of the actual unknown matrix. The performance limit for the FQN algorithm is the rate at which the identical filter with the 2-D LMS algorithm converges in the presence of a white input signal, because the FQN algorithm, like LMS, is still gradient based. Though not shown here, that limit is very near to the FQN convergence plot shown in figure 2. In figure 3 we show a similar comparison between the LMS, transform domain, and FQN adaptive filters with a fourth-order FIR filter structure.


Table 1 The block Levinson algorithm for solving Toeplitz-block Toeplitz systems of equations.
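For intuition about the savings, here is the scalar analogue of the recursion (our sketch of the classical Levinson algorithm, not the paper's block form; the block version in Table 1 replaces these scalars with N×N matrix quantities):

```python
import numpy as np

def levinson_solve(r, b):
    """Solve T y = b where T is symmetric positive definite Toeplitz with
    first column r, in O(n^2) operations instead of the O(n^3) of a
    general-purpose solver."""
    n = len(b)
    t0 = float(r[0])
    r = np.asarray(r, float)[1:] / t0   # normalized off-diagonal lags
    b = np.asarray(b, float) / t0
    y = np.array([b[0]])
    if n == 1:
        return y
    a = np.array([-r[0]])               # backward predictor coefficients
    alpha, beta = -r[0], 1.0
    for k in range(1, n):
        beta = (1.0 - alpha * alpha) * beta         # prediction error energy
        mu = (b[k] - r[:k] @ y[::-1]) / beta        # innovation for the new equation
        y = np.concatenate([y + mu * a[::-1], [mu]])
        if k < n - 1:
            alpha = -(r[k] + r[:k] @ a[::-1]) / beta  # reflection coefficient
            a = np.concatenate([a + alpha * a[::-1], [alpha]])
    return y
```

Table 2's next-point update then reuses the predictor quantities from this solution rather than starting over.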

Table 2 The block Levinson algorithm for efficiently solving the next Toeplitz-block Toeplitz system of equations, given parameters from the original Levinson solution.


Again, the FQN filter demonstrates the fastest convergence, followed by the TDAF and the LMS filter. The FQN results presented so far are for stationary input signals. If the FQN filter finds itself in a nonstationary statistical environment, the autocorrelation matrix estimate must train independently. This adversely affects the rate of convergence until the autocorrelation estimate is a reasonably close approximation to the actual statistical properties of the input. In the experiments discussed above, the filter was allowed to run without filter coefficient adaptation until the lag coefficient estimates settled. In figure 4, the autocorrelation matrix is initialized as the identity matrix, and the filter is then suddenly allowed to begin adapting the lag coefficients and the filter coefficients concurrently. Again we use a 3x3 FIR adaptive filter with the same low-pass colored input. The plots show discrepancies in the rate of convergence for different values of α. Since the initial estimate of the matrix is wrong, the algorithm has not decorrelated the adaptive modes. With a lower value of α the lag coefficients approach their ideal values more quickly than for higher values.

3. Conclusions
We have presented an overview of acceleration techniques for two-dimensional adaptive filters based on methods to compensate for the effects of eigenvalue disparity of the input signal autocorrelation matrix. Newton-type algorithms are known to improve upon the performance of simple steepest-descent algorithms. We emphasized a computationally efficient method of applying a Newton-type algorithm, the 2-D fast quasi-Newton filter, alongside classical transform-based methods. These two fundamental approaches can be applied to other structures as well. In reference [4] the 2-D McClellan transformation filter and 2-D IIR filters are successfully examined as candidates for the orthogonalization techniques of this paper. Two-dimensional adaptive filters are applicable to many image, video, and multichannel processing problems, ranging from noise suppression and cancellation to predictive coding. Data rate and performance requirements demand research into computationally efficient and rapidly converging filters.

4. Acknowledgment
This work is supported in part by the National Science Foundation under grant NSF MIP 91-00212 and in part by the Joint Services Electronics Program under grant N00014-90-J-1270. The results and opinions expressed in this paper do not necessarily represent those of the sponsoring agencies.


Figure 1 Convergence plot for a second-order 2-D FIR RLS adaptive filter.

Figure 2 Convergence plot for a 2-D, 3x3, FIR, FQN adaptive filter in system identification with low-pass colored input. Corresponding convergence plots are shown for the 2-D TDAF (DFT) and the 2-D LMS.

Figure 3 Convergence plot for a 2-D, 5x5, FIR, FQN adaptive filter in system identification with low-pass colored input. Corresponding convergence plots are shown for the 2-D TDAF (DFT) and the 2-D LMS.

Figure 4 Nonstationary convergence plots for the same 2-D, 3x3, FIR, FQN adaptive filter with the same low-pass colored input signal.

5. References
[1] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1991.
[2] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[3] M. M. Hadhoud and D. W. Thomas, "The two-dimensional adaptive LMS (TDLMS) algorithm," IEEE Trans. Circuits Syst., vol. 35, pp. 485-494, 1988.
[4] J. C. Strait, "Structures and algorithms for two-dimensional adaptive signal processing," Ph.D. dissertation, Univ. of Illinois at Urbana-Champaign, Urbana, IL, 1995.
[5] D. G. Manolakis, N. Kalouptsidis, and G. Carayannis, "Fast algorithms for discrete-time Wiener filters with optimum lag," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-31, no. 1, Feb. 1983.

