
International Journal of Scientific Research Engineering & Technology (IJSRET), ISSN 2278-0882

Volume 4, Issue 6, June 2015


A New Least Mean Squares Adaptive Algorithm over Distributed Networks Based on Incremental Strategy

Karen Der Avanesian^1, Mohammad Shams Esfand Abadi^2
^{1,2} Shahid Rajaee Teacher Training University, Faculty of Electrical and Computer Engineering,
Tehran, Iran

Abstract
This paper applies a new least mean squares (LMS) adaptive algorithm, the circulantly weighted LMS (CLMS), to distributed networks based on the incremental strategy. The distributed CLMS (dCLMS) algorithm is optimized with respect to approximate a priori knowledge of the autocorrelation of the input signals from all nodes in the network. In comparison with dLMS, the dCLMS adaptive algorithm has a faster convergence speed, especially for highly colored input signals. We demonstrate the good performance of dCLMS through several simulation results in distributed networks.
Keywords: Distributed networks, incremental strategy, adaptive algorithm, circulantly weighted LMS.

I. Introduction
The least mean squares (LMS) algorithm is used in many adaptive filtering applications. Unfortunately, for highly colored inputs, its convergence speed may become extremely slow. The normalized LMS (NLMS) algorithm only partially alleviates this problem. The affine projection algorithm (APA) and recursive least squares (RLS) converge quickly for highly colored input data, but in comparison with the LMS and NLMS algorithms, RLS and APA have high computational complexity. The main problem of ordinary adaptive algorithms is therefore their sensitivity to the kind of input signal. We seek an algorithm that has the following features at the same time [1]:
- High convergence speed for different input signals.
- Low steady-state mean square error (MSE).
- Low computational complexity.
The circulantly weighted LMS (CLMS) adaptive algorithm presented in [1] has these three features [2]. The CLMS algorithm modifies the update term of the LMS algorithm with a circulant matrix optimized with respect to some approximate a priori knowledge of the autocorrelation properties of the input to the adaptive filter. The convergence speed of CLMS is faster than that of the LMS adaptive algorithm, especially for highly colored input signals, while the computational complexity of CLMS is only slightly larger than that of LMS.

Distributed processing deals with the extraction of information from data collected at nodes that are distributed over a geographic area. For example, each node in a network could collect noisy observations related to a certain parameter of interest. The nodes would then interact with each other in a manner dictated by the network topology in order to arrive at an estimate of the parameter. The objective is to arrive at an estimate that is as accurate as the one that would be obtained if each node had access to the information across the entire network. Obviously, the effectiveness of any distributed implementation depends on the modes of cooperation that are allowed among the nodes [3].
In an incremental mode of cooperation, information
flows in a sequential manner from one node to the
adjacent node. This mode of operation requires a cyclic
pattern of collaboration among the nodes, and it tends to
require the least amount of communications and power
[4].
Consensus implementations employ two time scales and function as follows. Assume the network is interested in estimating a certain parameter. Each node collects observations over a period of time and reaches an individual decision about the parameter. During this time there is limited interaction among the nodes; they act more like individual agents. Following this initial stage, the nodes combine their estimates through several consensus iterations; under suitable conditions, the estimates generally converge asymptotically to the desired (global) estimate of the parameter.
The distributed LMS (dLMS) algorithm, presented in [4], was the first adaptive distributed network of this kind. When the input signals of the nodes are highly colored, the convergence speed of dLMS is slow. Other adaptive distributed networks, such as the distributed APA (dAPA) and distributed RLS (dRLS), have been proposed [1], but again these algorithms have a high computational load. In this paper we apply the CLMS algorithm to distributed networks based on the incremental strategy to establish the distributed CLMS (dCLMS) algorithm. In dCLMS, the circulant matrix is designed based on the autocorrelation matrices of the input signals of the nodes. Therefore dCLMS performs better than the dLMS algorithm.


This paper is organized as follows. In Section II, the CLMS adaptive algorithm is presented. In Section III, the dCLMS algorithm is established. Finally, before concluding the paper, we present several simulation results in distributed networks.

II. CLMS Algorithm


As is well known, the convergence speed of an adaptive filtering algorithm depends on the eigenvalue spread of the autocorrelation matrix of its input signal: a large eigenvalue spread leads to slow convergence. If we know that the input signal has an autocorrelation matrix given by, or close to, $\mathbf{R}$, it makes sense to select a matrix $\mathbf{C}$ as the inverse of, or an approximation to, $\mathbf{R}$. This gives a low eigenvalue spread for $\mathbf{C}\mathbf{R}$, and consequently rapid convergence. To keep the computational complexity low, we restrict $\mathbf{C}$ to be circulant and symmetric [1]. The weight update equation of CLMS with $M$ coefficients is therefore

$$\mathbf{w}(n) = \mathbf{w}(n-1) + \mu\,\mathbf{C}\,\mathbf{x}(n)\,e(n). \qquad (1)$$

In (1), $\mathbf{x}(n)$ is the input signal vector, defined as

$$\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-M+1)]^T, \qquad (2)$$

$\mu$ is the step size, and $e(n)$ is the error signal, obtained as

$$e(n) = d(n) - \mathbf{x}^T(n)\,\mathbf{w}(n-1), \qquad (3)$$

where $d(n)$ is the desired signal.
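For concreteness, the following is a minimal NumPy sketch of one CLMS iteration implementing (1)-(3); the function and variable names are illustrative, and the matrix $\mathbf{C}$ is assumed to have been designed beforehand:

import numpy as np

def clms_update(w, C, x, d, mu):
    """One CLMS iteration, Eqs. (1)-(3).

    w  : current coefficient vector, shape (M,)
    C  : circulant preconditioning matrix, shape (M, M)
    x  : regressor [x(n), ..., x(n-M+1)], shape (M,)
    d  : desired sample d(n)
    mu : step size
    """
    e = d - x @ w                     # error signal, Eq. (3)
    w = w + mu * (C @ x) * e          # preconditioned update, Eq. (1)
    return w, e

With C equal to the identity matrix this reduces to the ordinary LMS update, which is a convenient sanity check.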


Given an assumed set of autocorrelation matrices $\mathbf{R}_0, \mathbf{R}_1, \ldots, \mathbf{R}_{K-1}$ for the input signal, we seek the circulant matrix that best approximates the inverse of these autocorrelation matrices. We can define the following optimization problem:

$$\mathbf{C} = \arg\min_{\mathbf{C} \in \mathcal{C}} \sum_{k=0}^{K-1} \left\| \mathbf{C}\,\mathbf{R}_k - \mathbf{I} \right\|_F^2, \qquad (4)$$

where $\|\cdot\|_F$ denotes the Frobenius norm, $\mathbf{I}$ is the identity matrix, and $\mathcal{C}$ denotes the set of circulant matrices [1]. Writing the circulant matrix explicitly as
$$\mathbf{C} = \begin{bmatrix} c_0 & c_1 & \cdots & c_{M-1} \\ c_{M-1} & c_0 & \cdots & c_{M-2} \\ \vdots & \vdots & \ddots & \vdots \\ c_1 & c_2 & \cdots & c_0 \end{bmatrix}, \qquad (5)$$

all the rows of $\mathbf{C}$ are $\mathbf{c}_m^T = \mathbf{c}_0^T \mathbf{S}^m$ for $m = 0, 1, \ldots, M-1$, where $\mathbf{S}$ is the cyclic shift matrix defined by [1]:

$$\mathbf{S} = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & \cdots & 0 \end{bmatrix}. \qquad (6)$$

Solving the optimization problem in (4) leads to the following normal equation for the first row $\mathbf{c}_0$ of $\mathbf{C}$:

$$\left( \sum_{k=0}^{K-1} \sum_{m=0}^{M-1} \mathbf{S}^m \mathbf{R}_k^2 (\mathbf{S}^m)^T \right) \mathbf{c}_0 = \left( \sum_{k=0}^{K-1} \sum_{m=0}^{M-1} \mathbf{S}^m \mathbf{R}_k (\mathbf{S}^m)^T \right) \boldsymbol{\delta}_0, \qquad (7)$$

where $\boldsymbol{\delta}_0 = [1, 0, \ldots, 0]^T$. We propose to use a fixed matrix $\mathbf{C}$ that is a good approximate inverse to the whole set of input autocorrelation matrices [1].
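The following sketch solves (7) numerically for $\mathbf{c}_0$ and assembles $\mathbf{C}$ row by row via (5); it assumes the normal-equation form of (7) given above, and the function name is illustrative:

import numpy as np

def design_circulant_C(R_list):
    """Design the circulant preconditioner of Eqs. (4)-(7).

    R_list : list of M x M autocorrelation matrices R_k.
    """
    M = R_list[0].shape[0]
    S = np.roll(np.eye(M), 1, axis=1)     # cyclic shift matrix, Eq. (6)
    A = np.zeros((M, M))
    b = np.zeros(M)
    delta0 = np.eye(M)[0]                 # [1, 0, ..., 0]^T
    Sm = np.eye(M)                        # S^m, starting with m = 0
    for m in range(M):
        for R in R_list:
            A += Sm @ (R @ R) @ Sm.T      # left-hand side of Eq. (7)
            b += Sm @ R @ Sm.T @ delta0   # right-hand side of Eq. (7)
        Sm = S @ Sm
    c0 = np.linalg.solve(A, b)            # first row of C
    # Stack the rows c_m^T = c_0^T S^m into the circulant matrix of Eq. (5).
    return np.stack([c0 @ np.linalg.matrix_power(S, m) for m in range(M)])

Since the input statistics are assumed known (or estimated offline), this design is computed once and the resulting fixed C is then used in every update.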

III. dCLMS Algorithm


The learning procedure for distributed networks based on the incremental strategy is shown in Fig. 1. Based on the local data $\{d_k(i), \mathbf{x}_k(i)\}$ and the estimate $\boldsymbol{\psi}_{k-1}(i)$ received from the previous node in the cycle, the local estimate $\boldsymbol{\psi}_k(i)$ is updated at each node $k$ and time instant $i$. Finally, the local estimate of the last node is used as the global estimate $\mathbf{w}(i)$ of the filter coefficients, and this value serves as the initial local estimate for the next time instant $i + 1$. In effect, each node is an adaptive filter with its own local data $\{d_k(i), \mathbf{x}_k(i)\}$, where $k$ denotes the node index. We assume a linear data model that generates the desired signal at each node:

$$d_k(i) = \mathbf{x}_k^T(i)\,\mathbf{w}^o + v_k(i), \qquad (8)$$

where $v_k(i)$ is the additive white Gaussian noise of node $k$ and $\mathbf{w}^o$ is the unknown system that we would like to estimate.
The local update equation of the distributed CLMS algorithm at node $k$ and time instant $i$ in the incremental network is given by

$$\boldsymbol{\psi}_k(i) = \boldsymbol{\psi}_{k-1}(i) + \mu_k\,\mathbf{C}\,\mathbf{x}_k(i)\,e_k(i), \qquad (9)$$

where

$$e_k(i) = d_k(i) - \mathbf{x}_k^T(i)\,\boldsymbol{\psi}_{k-1}(i) \qquad (10)$$

is the error signal at node $k$.
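A minimal sketch of one incremental dCLMS cycle over all nodes, following (8)-(10), might look as follows (the names are illustrative):

import numpy as np

def dclms_cycle(w_prev, C, X, d, mu):
    """One incremental dCLMS cycle over N nodes, Eqs. (9)-(10).

    w_prev : global estimate from the previous time instant, shape (M,)
    C      : circulant preconditioning matrix, shape (M, M)
    X      : regressors of all nodes at time i, shape (N, M)
    d      : desired samples of all nodes at time i, shape (N,)
    mu     : per-node step sizes, shape (N,)
    """
    psi = w_prev.copy()                        # psi_0(i) = w(i-1)
    for k in range(X.shape[0]):                # visit nodes along the cycle
        e_k = d[k] - X[k] @ psi                # local error, Eq. (10)
        psi = psi + mu[k] * (C @ X[k]) * e_k   # local update, Eq. (9)
    return psi                                 # global estimate w(i)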
1. Designing the Circulant Matrix Based on True/Close Autocorrelation Matrices
The first strategy for designing the matrix $\mathbf{C}$ is based on the autocorrelation of the input signals of the nodes. We can also select autocorrelation matrices close to the true ones. Based on these autocorrelation matrices, the matrix $\mathbf{C}$ of (5) is designed [1].
2. Designing the C Matrix Based on Estimates of the Autocorrelation Matrices
The true autocorrelation matrix at each node is given by

$$\mathbf{R}_k = E\left[\mathbf{x}_k(i)\,\mathbf{x}_k^T(i)\right]. \qquad (11)$$

We can estimate these matrices from (12) and (13) as:


$$\hat{\mathbf{R}}_k(i) = \frac{1}{T} \sum_{j=i-T+1}^{i} \mathbf{x}_k(j)\,\mathbf{x}_k^T(j), \qquad (12)$$

$$\hat{\mathbf{R}}_k(i) = \frac{1}{i+1} \sum_{j=0}^{i} \mathbf{x}_k(j)\,\mathbf{x}_k^T(j), \qquad (13)$$

where $T$ denotes the number of recent regressors used at each node. The $\mathbf{C}$ matrix can then be designed from these estimated autocorrelation matrices.
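A sketch of the windowed estimate (12), with (13) as the special case that uses all samples, could be (names again illustrative):

import numpy as np

def estimate_autocorrelation(x_hist, T=None):
    """Estimate R_k from recent regressors, Eqs. (12)-(13).

    x_hist : regressors up to time i, shape (i+1, M)
    T      : window length; None uses all samples, as in Eq. (13)
    """
    window = x_hist if T is None else x_hist[-T:]
    # Average of the outer products x(j) x(j)^T over the window.
    return (window.T @ window) / window.shape[0]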

Fig. 1. Distributed processing over an adaptive incremental network using the CLMS algorithm.

IV. Simulation Results
We simulated the performance of the dCLMS algorithm in a distributed network based on the incremental strategy with 20 nodes in a system identification setup. The impulse response of the random unknown system has $M = 20$ taps. The correlated input signal $\mathbf{x}_k(i)$ at each node is generated by passing a white Gaussian noise process with unit variance through a first-order autoregressive (AR(1)) filter whose transfer function is

$$G_k(z) = \frac{\sqrt{1 - a_k^2}}{1 - a_k z^{-1}}, \qquad (14)$$

with $a_k \in [0.9, 0.99]$, which yields highly colored input data.
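A sketch of this per-node input generation, assuming SciPy is available (the function name is illustrative):

import numpy as np
from scipy.signal import lfilter

def generate_colored_input(n_samples, a_k, rng=None):
    """Generate an AR(1) input with correlation index a_k, Eq. (14)."""
    if rng is None:
        rng = np.random.default_rng()
    white = rng.standard_normal(n_samples)   # unit-variance white noise
    # G_k(z) = sqrt(1 - a_k^2) / (1 - a_k z^-1)
    return lfilter([np.sqrt(1.0 - a_k**2)], [1.0, -a_k], white)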
The noise sequence at each node is white Gaussian with variance $\sigma_{v,k}^2 \in (0, 0.1]$. Fig. 2 shows the correlation index and the noise power profile of each node. All curves in the simulation results are obtained by ensemble averaging over 100 independent trials.

Figs. 3 and 4 compare the simulated MSE and MSD learning curves of dLMS and dCLMS at node 1 for different step sizes, with true and estimated autocorrelation matrices obtained from (12) and (13). As can be seen, the dCLMS algorithm converges considerably faster than dLMS. Figs. 5-8 present the simulated steady-state MSE and EMSE of dLMS with $\mu = 0.0002$ and dCLMS with $\mu = 0.02$ for each node; note that for this colored input, choosing a step size larger than 0.0002 for dLMS may lead to divergence. The preconditioning matrix was designed based on the estimated autocorrelation matrices of each node, and also for AR(1) inputs with correlation indices close to those of the nodes; in all simulations the estimated correlation indices were chosen as $a \in \{0.9, 0.95, 0.99\}$. The results show the simulated steady-state MSE of dCLMS for both designs of the preconditioning matrix. Figs. 5-6 show that after 10000 iterations the steady-state performance of dLMS and dCLMS is the same, but dCLMS converges much faster. Figs. 7-8 show the steady-state values after 1000 iterations, by which point dCLMS has already reached steady state while dLMS has not yet converged. Fig. 9 presents the simulated MSE learning curves of dCLMS in which the preconditioning matrix was designed from the autocorrelations of all nodes, for three different autocorrelation estimates: using all samples, and using T = 500 and T = 50 recent regressors of the input. Fig. 10 repeats the experiment of Fig. 9, except that instead of using the correlation indices of all nodes, the preconditioning matrix was designed with $a \in \{0.9, 0.95, 0.99\}$, values close to the correlation indices of the nodes.

V. Conclusion
In this paper we applied the circulantly weighted LMS algorithm to adaptive distributed networks based on the incremental strategy. Because the circulant matrix approximates the inverse of the input autocorrelation matrices, it decreases the eigenvalue spread of the effective input autocorrelation, which increases the convergence speed of the adaptive algorithm. Simulation results show the good performance of this algorithm in distributed networks.

Fig. 2. Node profiles: noise power $\sigma_{v,k}^2$ and correlation index $a_k$ for the Gaussian data network.



[Figures 3-8 share the legend: (a) dLMS, $\mu = 0.0002$; (b) dCLMS, $\mu = 0.02$, C designed with the correlation indices of all nodes; (c) dCLMS, $\mu = 0.02$, C designed with $a \in \{0.9, 0.95, 0.99\}$.]
Fig. 3. MSE learning curves of dLMS and dCLMS for different step sizes and correlation indices.

Fig. 6. Steady-state EMSE for dLMS and dCLMS, using a C matrix designed from the true and estimated autocorrelation of each node; simulated for 10000 samples.

Fig. 4. MSD learning curves of dLMS and dCLMS for different step sizes and correlation indices.


Fig. 7. Steady-state MSE for dLMS and dCLMS, using a C matrix designed from the true and estimated autocorrelation of each node; simulated for 1000 samples.

Fig. 5. Steady-state MSE for dLMS and dCLMS, using a C matrix designed from the true and estimated autocorrelation of each node; simulated for 10000 samples.


Fig. 8. Steady-state EMSE for dLMS and dCLMS, using a C matrix designed from the true and estimated autocorrelation of each node; simulated for 1000 samples.



[Fig. 9 legend: (a) dCLMS, circulant C matrix designed with the true autocorrelation; (b) dCLMS, C designed with the estimated autocorrelation, T = 500 regressors per node; (c) dCLMS, C designed with the estimated autocorrelation, T = 50 regressors per node.]

Fig. 9. MSE learning curves of dCLMS for different estimates of the autocorrelation of each highly colored node.
[Fig. 10 legend: (a) dCLMS, circulant C matrix designed with the true autocorrelation; (b) dCLMS, C designed with the estimated autocorrelation, T = 500 regressors per node; (c) dCLMS, C designed with the estimated autocorrelation, T = 100 regressors per node.]

Fig. 10. MSE learning curves of dCLMS for different estimates of the autocorrelation, using a set of correlation indices close to those of the nodes.

References
[1] J. H. Husoy and M. S. Esfand Abadi, "A novel LMS-type adaptive filter optimized for operation in multiple signal environments."
[2] J. H. Husoy and M. S. Esfand Abadi, "A family of flexible NLMS-type adaptive filter algorithms," IEEE, 2007.
[3] C. G. Lopes and A. H. Sayed, "Incremental adaptive strategies over distributed networks," IEEE Trans. Signal Process., vol. 55, no. 8, pp. 4064-4077, Aug. 2007.
[4] A. H. Sayed and C. G. Lopes, "Adaptive processing over distributed networks," IEICE Trans. Fundam. Electron. Commun. Comput. Sci., vol. E90-A, no. 8, pp. 1504-1510, 2007.
[5] S. Haykin, Adaptive Filter Theory, 2nd ed. Englewood Cliffs, NJ: Prentice Hall, 1996.
[6] A. H. Sayed, Adaptive Filters. New York: Wiley, 2008.
[7] J. H. Husoy, "A preconditioned LMS adaptive filter," ECTI Trans. on Electrical Eng., Electronics, and Communications, vol. 5, pp. 3-9, Feb. 2007.
