
Kalman Recursive Least Squares Algorithm

1 Introduction

The recursive least squares (RLS) adaptive filter is an algorithm which recursively
finds the filter coefficients that minimize a weighted linear least squares cost function
relating to the input signals. This is in contrast to other algorithms, such as least mean
squares (LMS), that aim to reduce the mean square error. In the derivation of the RLS,
the input signals are considered deterministic, while for the LMS and similar
algorithms they are considered stochastic.

Figure 1: Transversal filter with time-varying tap weights

Compared to most of its competitors, the RLS exhibits extremely fast convergence.
However, this benefit comes at the cost of high computational complexity, and
potentially poor tracking performance when the filter to be estimated (the "true
system") changes.

Figure 2: Representation of the RLS algorithm

The convergence rate of the gradient-based LMS algorithm is very slow, especially
when the eigenvalues of the input covariance matrix R_{NN} have a very large spread,
i.e., \lambda_{max}/\lambda_{min} \gg 1. In order to achieve faster convergence, complex algorithms which
involve additional parameters are used. Faster converging algorithms are based on a
least squares approach, as opposed to the statistical approach used in the LMS
algorithm. That is, rapid convergence relies on error measures expressed in terms of a
time average of the actual received signal instead of a statistical average. This leads to
the family of powerful, albeit complex, adaptive signal processing techniques known
as recursive least squares (RLS), which significantly improve the convergence of
adaptive equalizers.
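
To make the eigenvalue-spread condition concrete, the short MATLAB sketch below
estimates \lambda_{max}/\lambda_{min} for a strongly correlated AR(1) input; the signal model and all
parameter values here are illustrative choices, not taken from the text.

% Estimate the eigenvalue spread of the input correlation matrix
% for an AR(1) input y(n) = a*y(n-1) + w(n); the closer a is to 1,
% the more correlated the input and the larger the spread.
a = 0.95; L = 5000; Nt = 8;           % AR coefficient, samples, taps
yin = filter(1, [1 -a], randn(1,L));  % AR(1) input sequence
Y = zeros(L-Nt+1, Nt);
for m = 1:Nt                          % build the tap-input data matrix
    Y(:,m) = yin(Nt-m+1 : L-m+1)';
end
Rnn = (Y'*Y)/size(Y,1);               % sample correlation matrix
ev = eig(Rnn);
spread = max(ev)/min(ev)              % eigenvalue spread

As a approaches 1 the spread grows rapidly, which is exactly the regime in which the
LMS converges slowly and the RLS pays off.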

The least square error based on the time average is defined as

J(n) = \sum_{i=1}^{n} \lambda^{n-i} e^*(i,n)\, e(i,n)    (1)

where λ is the weighting factor, close to but smaller than 1, e^*(i,n) is the complex
conjugate of e(i,n), and the error e(i,n) is

e(i,n) = x(i) - y_N^T(i)\, w_N(n)    (2)

and

y_N(i) = [y(i), y(i-1), \ldots, y(i-N+1)]^T    (3)

where y_N(i) is the data input vector at time i, and w_N(n) is the new tap gain vector at
time n. Therefore, e(i,n) is the error obtained by using the new tap gains at time n to
test the old data at time i, and J(n) is the cumulative squared error of the new tap gains
on all the old data.
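
As a sanity check on definitions (1)-(3), the MATLAB sketch below evaluates J(n) by
brute force for an arbitrary candidate tap vector; all signals and sizes are illustrative
placeholders, not data from the text.

% Brute-force evaluation of the cost J(n) in eq. (1) for a given
% tap vector wN, using the error of eq. (2) and the input vector
% of eq. (3). Signals are random placeholders.
Nt = 4; n = 50; lambda = 0.95;
yin = randn(1, n+Nt);                % received samples
x   = randn(1, n);                   % desired samples
wN  = randn(Nt, 1);                  % candidate tap gain vector
J = 0;
for i = 1:n
    yN = yin(i+Nt-1 : -1 : i).';     % yN(i) = [y(i) y(i-1) ... y(i-Nt+1)]^T
    e  = x(i) - yN.'*wN;             % e(i,n), eq. (2)
    J  = J + lambda^(n-i)*abs(e)^2;  % accumulate lambda^{n-i} e*(i,n) e(i,n)
end
J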

The RLS solution requires finding the tap gain vector of the equalizer w_N(n) such that
the cumulative squared error J(n) is minimized. It uses all the previous data to test the
new tap gains. The parameter λ is a data weighting factor that weights recent data
more heavily in the computations, so that J(n) tends to forget the old data in a
nonstationary environment. If the channel is stationary, λ may be set to 1.
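
For example, with λ = 0.99 the weight λ^{n-i} applied to a sample 100 updates old is
0.99^{100} ≈ 0.37, so the effective memory of J(n) is roughly 1/(1-λ) = 100 samples;
lowering λ to 0.9 shortens that memory to about 10 samples.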

To obtain the minimum of the least square error J(n), the gradient of J(n) in equation (1)
is set to zero,

\partial J(n) / \partial w_N(n) = 0    (4)

Using equations (2)-(4), it can be shown that

\hat{w}_N(n) = R_{NN}^{-1}(n)\, p_N(n)    (5)

where \hat{w}_N(n) is the optimal tap gain vector of the RLS equalizer,

R_{NN}(n) = \sum_{i=1}^{n} \lambda^{n-i}\, y_N^*(i)\, y_N^T(i)    (6)
p_N(n) = \sum_{i=1}^{n} \lambda^{n-i}\, x(i)\, y_N^*(i)    (7)

The matrix R_{NN}(n) in equation (6) is the deterministic correlation matrix of the
equalizer input data y_N(i), and p_N(n) in equation (7) is the deterministic cross-correlation
vector between the inputs of the equalizer y_N(i) and the desired output d(i), where
d(i) = x(i). To compute the equalizer weight vector using equation (5), it is required to
compute R_{NN}^{-1}(n).
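
Equation (5) can be checked directly before introducing the recursion: the MATLAB
sketch below accumulates R_{NN}(n) and p_N(n) from definitions (6)-(7) and solves the
normal equations with a single linear solve (all signals are illustrative placeholders).
Recomputing R_{NN}^{-1}(n) from scratch like this at every time step is what the
recursion derived next avoids.

% Direct (batch) solution of eq. (5): accumulate RNN(n) and pN(n)
% from eqs. (6)-(7), then solve RNN(n)*wN = pN(n).
Nt = 4; n = 200; lambda = 0.99;
yin = randn(1, n+Nt);                        % received samples (placeholder)
x   = randn(1, n);                           % desired samples (placeholder)
Rnn = zeros(Nt); pN = zeros(Nt,1);
for i = 1:n
    yN  = yin(i+Nt-1 : -1 : i).';            % yN(i), eq. (3)
    Rnn = Rnn + lambda^(n-i)*conj(yN)*yN.';  % eq. (6)
    pN  = pN  + lambda^(n-i)*x(i)*conj(yN);  % eq. (7)
end
wN_hat = Rnn \ pN;                           % eq. (5): optimal tap gains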

From the definition of R_{NN}(n) in equation (6), it is possible to obtain a recursive
equation expressing R_{NN}(n) in terms of R_{NN}(n-1),

R_{NN}(n) = \lambda R_{NN}(n-1) + y_N^*(n)\, y_N^T(n)    (8)

Since the three terms in equation (8) are all N by N matrices, a matrix inverse lemma
[Bie77] can be used to derive a recursive update for R_{NN}^{-1}(n) in terms of the previous
inverse, R_{NN}^{-1}(n-1):

R_{NN}^{-1}(n) = \frac{1}{\lambda} \left[ R_{NN}^{-1}(n-1) - \frac{R_{NN}^{-1}(n-1)\, y_N^*(n)\, y_N^T(n)\, R_{NN}^{-1}(n-1)}{\lambda + \mu(n)} \right]    (9)

where

\mu(n) = y_N^T(n)\, R_{NN}^{-1}(n-1)\, y_N^*(n)    (10)
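
A quick numerical check of equations (9) and (10): one rank-one update of the inverse
via the lemma should match a direct inversion of equation (8). The MATLAB sketch
below does this for random data; every value is illustrative.

% Check eqs. (9)-(10): the recursively updated inverse must agree
% with inv() applied to eq. (8), up to machine precision.
Nt = 4; lambda = 0.98;
A = randn(Nt); Rold = A*A' + eye(Nt);     % some SPD RNN(n-1)
Pold = inv(Rold);                         % RNN^{-1}(n-1)
yN = randn(Nt,1);                         % new input vector yN(n)
Rnew = lambda*Rold + conj(yN)*yN.';       % eq. (8)
mu = yN.'*Pold*conj(yN);                  % eq. (10)
Pnew = (Pold - (Pold*conj(yN))*(yN.'*Pold)/(lambda + mu))/lambda;  % eq. (9)
norm(Pnew - inv(Rnew))                    % should be near machine precision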

Based on these recursive equations, the RLS minimization leads to the following
weight update equations:

w_N(n) = w_N(n-1) + k_N(n)\, e(n, n-1)    (11)

where

k_N(n) = R_{NN}^{-1}(n)\, y_N^*(n)    (12)

and e(n, n-1) = x(n) - y_N^T(n)\, w_N(n-1) is the error obtained by testing the old tap
gains on the new data.

The RLS algorithm may be summarized as follows:

1. Initialize w_N(0) = k_N(0) = x(0) = 0 and R_{NN}^{-1}(0) = \delta I_{NN}, where I_{NN} is an
N × N identity matrix, and δ is a large positive constant.

2. Recursively compute the following:

e(n, n-1) = x(n) - y_N^T(n)\, w_N(n-1)    (13)

\mu(n) = y_N^T(n)\, R_{NN}^{-1}(n-1)\, y_N^*(n)    (14)
k_N(n) = \frac{R_{NN}^{-1}(n-1)\, y_N^*(n)}{\lambda + \mu(n)}    (15)

R_{NN}^{-1}(n) = \frac{1}{\lambda} \left[ R_{NN}^{-1}(n-1) - k_N(n)\, y_N^T(n)\, R_{NN}^{-1}(n-1) \right]    (16)

w_N(n) = w_N(n-1) + k_N(n)\, e(n, n-1)    (17)

In equation (16), λ is the weighting coefficient that can change the performance of the
equalizer. If a channel is time-invariant, λ can be set to 1. Usually 0.8 < λ < 1 is used.
The value of λ has no influence on the rate of convergence, but it does determine the
tracking ability of the RLS equalizer. The smaller the λ, the better the tracking ability
of the equalizer. However, if λ is too small, the equalizer will be unstable [Lin84].
The RLS algorithm described above, called the Kalman RLS algorithm, requires on
the order of N^2 arithmetic operations per iteration.
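
The five steps (13)-(17) translate almost line for line into code. The MATLAB function
below is a minimal sketch of one Kalman RLS iteration; the function name and
interface are our own illustration, and the full simulation script in the Matlab Code
section carries out the same recursion with different variable names.

function [wN, Pinv] = rls_step(wN, Pinv, yN, xn, lambda)
% One Kalman RLS iteration, eqs. (13)-(17).
% wN:   tap gain vector w_N(n-1)            (N x 1)
% Pinv: inverse correlation RNN^{-1}(n-1)   (N x N)
% yN:   input vector y_N(n)                 (N x 1)
% xn:   desired sample x(n)
e    = xn - yN.'*wN;                   % eq. (13): a priori error
mu   = yN.'*Pinv*conj(yN);             % eq. (14)
kN   = Pinv*conj(yN)/(lambda + mu);    % eq. (15): gain vector
Pinv = (Pinv - kN*(yN.'*Pinv))/lambda; % eq. (16): update RNN^{-1}
wN   = wN + kN*e;                      % eq. (17): weight update
end

Each call costs on the order of N^2 multiplications, dominated by the two matrix-vector
products with Pinv, which is where the per-iteration complexity quoted above comes from.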

Summary of Algorithms

There are number of variations of the LMS and RLS algorithms that exist for adapting
an equalizer. Table 1 shows the computational requirements of different algorithms,
and lists some advantages and disadvantages of each algorithm. Note that the RLS
algorithms have similar convergence and tracking performances, which are much
better than the LMS algorithm. However, these RLS algorithms usually have high
computational requirement and complex program structures. Also, some RLS
algorithms tend to be unstable. The fast transversal filter (FTF) algorithm requires the
least computation among the RLS algorithms, and it can use a rescue variable to avoid
instability. However, rescue techniques tend to be a bit tricky for widely varying
mobile radio channels, and the FTF is not widely used.
Table 1: Comparison of Various Algorithms for adaptive equalization.

Matlab Code
N = 80;                      % number of samples
r = 0.998;                   % forgetting factor (lambda in the text)
w = 0.6*randn(1,N);          % process noise driving the desired signal
v = randn(1,N);              % additive measurement noise
x = zeros(1,N);              % desired (clean) signal
y = zeros(1,N);              % noisy observation, the equalizer input
x(1) = 1;
% Generate the desired signal as an AR(1) process
for n1 = 1:(N-1)
    x(n1+1) = 0.8*x(n1) + w(n1+1);
end
% Noisy observations y(n) = x(n) + v(n)
for n2 = 1:N
    y(n2) = x(n2) + v(n2);
end

M = 40;                      % number of filter taps
p = (1/0.01)*eye(M);         % RNN^{-1}(0) = delta*I with delta = 100
W = zeros(1,M);              % tap gain vector (row)
u = zeros(1,M);              % input (regressor) vector (row)

% Kalman RLS adaptation loop, eqs. (13)-(17)
for n = 1:N
    q = min(M,n);
    for i = 1:q              % fill u = [y(n) y(n-1) ... ]
        u(i) = y(n-i+1);
    end
    e = x(n) - conj(W)*conj(u)';              % a priori error, eq. (13)
    k = p*conj(u)'/(r + conj(u)*p*conj(u)');  % gain vector, eqs. (14)-(15)
    p = (p - k*conj(u)*p)/r;                  % inverse correlation update, eq. (16)
    W = W + conj(k)'*conj(e);                 % tap weight update, eq. (17)
end

% Re-filter the observations with the converged tap gains
u = zeros(1,M);
xe = zeros(1,N);             % estimate of the desired signal
for n = 1:N
    q = min(M,n);
    for i = 1:q
        u(i) = y(n-i+1);
    end
    xe(n) = u*W';
end
plot(1:N, x, 'r-o'), hold on, plot(1:N, xe, 'b-*')
legend('desired x(n)', 'RLS estimate xe(n)')

Matlab Graph

Figure 3: Kalman RLS

References
1. T. S. Rappaport, Wireless Communications: Principles and Practice.
2. www.mathworks.com
