
Performance Analysis of Proportionate-Type Normalized Least Mean Square Algorithms

Kaio Douglas Teófilo Rocha
Miloš Doroslovački

1. Introduction

This work gathers various types of Least Mean Square (LMS) algorithms and analyzes their performance when applied to adaptive filtering. Special attention is given to the Proportionate-Type Normalized LMS (PtNLMS) algorithms, of which seven different examples are presented. In addition, each of these algorithms is given a version based on the affine projection format, giving rise to seven new algorithms. An extensive set of simulations is performed in order to compare, in many ways, the performance of these algorithms when working as the engine of a system identification process. Different impulse responses are assigned to the unknown system to be identified, but the system is mainly chosen to have a sparse impulse response, since a specific application, echo cancellation, is addressed in this work. The simulations also vary in terms of input signal and measurement noise signal. The software MATLAB from MathWorks is used to carry out these simulations.
In Section 2, a mathematical description of the algorithms to be tested by simulation is given. A base framework for both the proportionate-type NLMS and affine projection algorithms is presented.
In Section 3, the simulation results are presented. The simulations are divided into six groups. In the first three groups, three different unknown system impulse responses are tested. For each one, both a white Gaussian noise signal and a colored noise signal are used as input. In simulation 4, the behavior of the algorithms is analyzed in the case in which a shift occurs in the original impulse response. In simulation 5, the input signal is speech, and five experiments are performed, each of them with a different type of measurement noise. Finally, simulation 6 addresses the affine projection versions of the algorithms, as the previous five sets only contemplate the proportionate-type NLMS format.
Section 4 concludes this work with remarks on the results and suggestions for possible further subjects to be addressed.

2. Description of Proportionate-Type NLMS Algorithms

In this section, applications of the PtNLMS algorithms are shown in order to justify their study in this work. After the motivation is given, a unified framework is presented, so that the description of the different types of algorithms can be constructed on top of it. The description is separated into two parts: the first concerns the regular PtNLMS algorithms, and the second addresses the same algorithms modified to follow a different framework based on the affine projection algorithm.

2.1. Application of PtNLMS Algorithms

One of the main motivations for the study of PtNLMS algorithms is their use in network echo cancellation, in order to reduce the presence of delayed copies of the original signal [1]. In modern telephone networks, greater delays increase the need for echo cancellation techniques, requiring faster echo cancellation algorithms, especially when the echo path is sparse. A sparse echo path has a large percentage of its energy concentrated in only a few coefficients, whereas a dispersive echo path has its energy distributed more evenly among its coefficients [1]. In this work, greater attention is given to sparse echo paths, as the PtNLMS algorithms can show better performance on sparse echo paths than standard algorithms such as the least mean square (LMS) and the normalized least mean square (NLMS) [1]. An example of a sparse echo path impulse response is shown in Figure 2.1.
Figure 2.1. Sparse impulse response example.

2.2. Unified Framework for PtNLMS Algorithms


As this work is based on [1], the notation introduced by the authors will also be used here, as
well as the base model for the PtNLMS algorithms. Only real signals are contemplated in this
study.
Regarding notation, the following conventions are used throughout this work. Vectors are denoted by bold lowercase letters, such as $\mathbf{x}$, and all vectors are considered to be column vectors unless stated otherwise. Scalars are denoted by Roman or Greek letters, such as $i$. The $i$th component of a vector $\mathbf{x}$ is represented by $x_i$. Matrices are denoted by bold uppercase letters, such as $\mathbf{G}$. When dealing with time-varying vectors, the vector at time $k$ is represented as $\mathbf{x}(k)$.

Referring to the standard system identification block diagram in Figure 2.2, we introduce our framework. Consider an input signal $x(k)$ at time $k$ that enters an unknown system with impulse response $\mathbf{w}$. Let the input vector be $\mathbf{x}(k) = [x(k), x(k-1), \ldots, x(k-L+1)]^T$, where $L$ is the length of the filter, and let the output of the system be $y(k) = \mathbf{w}^T \mathbf{x}(k)$. Then, measurement noise $v(k)$ is added to the output, giving the measured output of the system, $d(k)$. The impulse response of the system is estimated with the adaptive filter coefficient vector (or weight vector) $\hat{\mathbf{w}}(k)$, which has the same length $L$. The output of the adaptive filter is given by $\hat{y}(k) = \hat{\mathbf{w}}^T(k)\,\mathbf{x}(k)$. The error signal $e(k)$ is the difference between $d(k)$ and $\hat{y}(k)$, and is used to drive the adaptive algorithm.

Figure 2.2. Adaptive filtering system identification block diagram.

The base PtNLMS algorithm is presented in Table 2.1. The term $F[|\hat{w}_l(k)|, k]$, where $l \in \{1, 2, \ldots, L\}$, is the control law that governs how each coefficient is updated. The quantity $\gamma_{\min}(k)$ is important when $F[|\hat{w}_l(k)|, k]$ is zero in some cases; it is used to set the minimum gain a coefficient can receive. The constant $\delta_p$, with $\delta_p \geq 0$, along with $\rho \geq 0$, prevents the very small coefficients from stalling, especially at the beginning, when the coefficients are zero. $\gamma_l(k)$ selects the maximum value between the previously calculated $\gamma_{\min}(k)$ and the actual coefficient gain, so that the gain matrix can be constructed. $\mathbf{G}(k) = \mathrm{Diag}\{g_1(k), \ldots, g_L(k)\}$ is the time-varying stepsize control diagonal matrix, with the $g_l(k)$ values on the principal diagonal and all other entries zero. Finally, the new estimated coefficient vector is calculated, where $\beta$ is the fixed stepsize parameter and $\delta$ is typically a small positive number added to the denominator in order to avoid division by zero when the input is zero, $\mathbf{x}(k) = \mathbf{0}$.
Table 2.1. Base PtNLMS Algorithm with Time-Varying Stepsize Matrix

$\mathbf{x}(k) = [x(k), x(k-1), \ldots, x(k-L+1)]^T$
$\hat{y}(k) = \mathbf{x}^T(k)\,\hat{\mathbf{w}}(k)$
$e(k) = d(k) - \hat{y}(k)$
$F[|\hat{w}_l(k)|, k]$: depends on the specific algorithm
$\gamma_{\min}(k) = \rho \max\{\delta_p, F[|\hat{w}_1(k)|, k], \ldots, F[|\hat{w}_L(k)|, k]\}$
$\gamma_l(k) = \max\{\gamma_{\min}(k), F[|\hat{w}_l(k)|, k]\}$
$g_l(k) = \dfrac{\gamma_l(k)}{\frac{1}{L}\sum_{i=1}^{L} \gamma_i(k)}$
$\mathbf{G}(k) = \mathrm{Diag}\{g_1(k), \ldots, g_L(k)\}$
$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \dfrac{\beta\,\mathbf{G}(k)\,\mathbf{x}(k)\,e(k)}{\mathbf{x}^T(k)\,\mathbf{G}(k)\,\mathbf{x}(k) + \delta}$
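To make the update concrete, the following is a minimal MATLAB sketch of one iteration of the base PtNLMS algorithm in Table 2.1 (the variable names are ours, not from [1]; the control law is passed in as a function handle, so the laws of the following subsections can be plugged in directly):

function w_hat = ptnlms_update(x, w_hat, d, F, beta, delta, rho, delta_p)
% One iteration of the base PtNLMS update from Table 2.1 (illustrative sketch).
% x: L-by-1 input vector [x(k); ...; x(k-L+1)]; w_hat: current estimate;
% d: measured output d(k); F: control law handle acting on |w_l(k)|,
% e.g. @(w_abs) ones(size(w_abs)) for NLMS or @(w_abs) w_abs for PNLMS.
    e = d - x' * w_hat;                   % error signal e(k)
    Fw = F(abs(w_hat));                   % control law F[|w_l(k)|, k]
    gamma_min = rho * max([delta_p; Fw]); % minimum allowed gain
    gamma = max(gamma_min, Fw);           % per-coefficient gains gamma_l(k)
    g = gamma / mean(gamma);              % normalize: gains average to one
    Gx = g .* x;                          % G(k)*x(k) without forming the matrix
    w_hat = w_hat + beta * Gx * e / (x' * Gx + delta);  % coefficient update
end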

As commented in the previous subsection, we are going to work with systems with a sparse impulse response, so we can use $\hat{\mathbf{w}}(0) = \mathbf{0}$ as a starting point because, as stated in [1], the PtNLMS algorithms rely on the fact that, since the system impulse response is sparse, most of the coefficients are zero, so most of the estimated weights start out correct. The weights corresponding to the active coefficients should then be updated towards their correct values as fast as possible, in order to speed up convergence.
The objective of the PtNLMS algorithms is, then, to determine which coefficients are zero and which are non-zero, as well as to assign gains to the estimated coefficients so that they reach their true values with as much of an increase in convergence speed as possible. These tasks are performed by the control law, which manages the way the gains are assigned to the estimated coefficients and applies criteria to distinguish which coefficients are closer to or farther from their true values. Most of the PtNLMS control laws assign the minimum possible gains to the coefficients they detect to be close to their true values. The main difference between them is how they assign gains to the coefficients that are not near their true values, and this is the subject of the next subsection.

2.3. Proportionate-Type Normalized Least Mean Square Algorithms

Now that the motivation and a unified framework for the study of the PtNLMS algorithms have been given, the algorithms are presented individually and in further detail in this subsection.

2.3.1. Normalized Least Mean Square Algorithm (NLMS)

We begin with the NLMS algorithm, which, strictly speaking, is not a proportionate-type algorithm, as its main characteristic is to assign the same gain to all the estimated coefficients, regardless of how far they are from the true values.
Referring to our base algorithm in Table 2.1, to obtain the NLMS algorithm we choose the control law $F[|\hat{w}_l(k)|, k] = 1$, which leads to a gain matrix $\mathbf{G}(k) = \mathbf{I}$ for all $k$.

2.3.2. Proportionate Normalized Least Mean Square Algorithm (PNLMS)

As the name suggests, the PNLMS is an actual proportionate-type algorithm, the simplest of them. The gain assigned by its control law is proportional to the magnitude of the estimated coefficients, i.e.,
$F[|\hat{w}_l(k)|, k] = |\hat{w}_l(k)|, \quad 1 \leq l \leq L.$
It is based on the assumption that the system impulse response is sparse, so that coefficients with larger magnitudes should adapt faster than those that are closer to zero [1].

2.3.3. Improved Proportionate Normalized Least Mean Square Algorithm (IPNLMS)

The IPNLMS algorithm is a hybrid between the NLMS and PNLMS algorithms, in which the user can control how closely it behaves like either of the two base algorithms. Its control law is given by
$F[|\hat{w}_l(k)|, k] = (1 - \alpha_{\mathrm{IPNLMS}})\,\dfrac{\lVert \hat{\mathbf{w}}(k) \rVert_1}{L} + (1 + \alpha_{\mathrm{IPNLMS}})\,|\hat{w}_l(k)|,$
where $\lVert \hat{\mathbf{w}}(k) \rVert_1 = \sum_{j=1}^{L} |\hat{w}_j(k)|$ is the $L_1$ norm of the estimated coefficient vector. The parameter $\alpha_{\mathrm{IPNLMS}}$, with $-1 \leq \alpha_{\mathrm{IPNLMS}} \leq 1$, is used to define whether the algorithm behaves more like the NLMS or the PNLMS algorithm, or somewhere in between the two. Note that if $\alpha_{\mathrm{IPNLMS}} = -1$, the algorithm behaves exactly as the NLMS algorithm, and if $\alpha_{\mathrm{IPNLMS}} = 1$, it behaves exactly as the PNLMS algorithm instead.
This algorithm adopts $\rho = 0$, so that the $\gamma_{\min}$ logic presented in Table 2.1 is not needed. The gain vector components are then given by
$g_l(k) = \dfrac{F[|\hat{w}_l(k)|, k]}{\sum_{j=1}^{L} F[|\hat{w}_j(k)|, k]} = \dfrac{1 - \alpha_{\mathrm{IPNLMS}}}{2L} + (1 + \alpha_{\mathrm{IPNLMS}})\,\dfrac{|\hat{w}_l(k)|}{2 \lVert \hat{\mathbf{w}}(k) \rVert_1}.$
To avoid division by zero, which may happen at the beginning of adaptation when the estimated coefficients are close to zero, a modified expression for the gain vector elements is used in practice:
$g_l(k) = \dfrac{1 - \alpha_{\mathrm{IPNLMS}}}{2L} + (1 + \alpha_{\mathrm{IPNLMS}})\,\dfrac{|\hat{w}_l(k)|}{2 \lVert \hat{\mathbf{w}}(k) \rVert_1 + \epsilon_{\mathrm{IPNLMS}}},$
where $\epsilon_{\mathrm{IPNLMS}}$ is a small positive number [1], similar to the $\delta$ parameter in our $\hat{\mathbf{w}}(k+1)$ expression in Table 2.1.
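As a small illustrative sketch (our variable names: alpha for $\alpha_{\mathrm{IPNLMS}}$, eps_ip for $\epsilon_{\mathrm{IPNLMS}}$), the practical IPNLMS gain vector can be computed in MATLAB as:

alpha  = 0;                    % -1 -> NLMS behavior, +1 -> PNLMS behavior
eps_ip = 1e-4;                 % small positive regularizer
L = length(w_hat);             % w_hat is the current L-by-1 coefficient estimate
g = (1 - alpha) / (2 * L) ...
    + (1 + alpha) * abs(w_hat) / (2 * norm(w_hat, 1) + eps_ip);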

2.3.4. μ-Law Proportionate Normalized Least Mean Square Algorithm (MPNLMS)

The MPNLMS algorithm considers a parameter $\epsilon$ used to define a region around the true coefficients such that, when all the estimated coefficients are within it, the algorithm is considered to have converged. This region is called the $\epsilon$-vicinity.
The algorithm assigns a gain proportional to the logarithm of the magnitude of the estimated coefficients, and its control law is given by
$F[|\hat{w}_l(k)|, k] = \ln(1 + \mu |\hat{w}_l(k)|), \quad 1 \leq l \leq L,$
where $\mu = 1/\epsilon$ [1].
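As a one-line sketch (assuming, for illustration, $\epsilon = 0.001$ and hence $\mu = 1000$, consistent with the MPNLMS parameters in Table 3.1), this control law can be written as a MATLAB function handle and passed to the base update:

mu = 1000;                                % mu = 1/epsilon, here epsilon = 0.001
F_mpnlms = @(w_abs) log(1 + mu * w_abs);  % mu-law control law on |w_l(k)|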

2.3.5. Individual Activation Factor Proportionate Normalized LMS Algorithm (IAF-PNLMS)

The IAF-PNLMS algorithm has the following control law:
$F[|\hat{w}_l(k)|, k] = |\hat{w}_l(k)|,$
$\phi_l(k) = \begin{cases} \frac{1}{2} F[|\hat{w}_l(k)|, k] + \frac{1}{2}\,\phi_l(k-1), & k = mL,\ m = 1, 2, 3, \ldots \\ \phi_l(k-1), & \text{otherwise,} \end{cases}$
$\gamma_l(k) = \max\{\phi_l(k), F[|\hat{w}_l(k)|, k]\}.$
In the initialization, $\phi_l(0)$ is typically set to a small positive constant for all the coefficients, such as $10^{-2}/L$.
What distinguishes the IAF-PNLMS algorithm is that it transfers part of the gain of the inactive coefficients to the active ones by means of $\phi_l(k)$. It therefore offers a better gain distribution among the coefficients than the PNLMS and IPNLMS algorithms; however, it slows down the convergence rate of the small coefficients [1]. These two qualities combined make this algorithm more appropriate for system impulse responses with high sparseness, i.e., with only a very few active coefficients.
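A minimal sketch of the IAF-PNLMS gain computation (our variable names; phi holds the individual activation factors $\phi_l$ and would be initialized, e.g., as phi = (1e-2/L)*ones(L,1)):

Fw = abs(w_hat);                 % control law F[|w_l(k)|, k] = |w_l(k)|
if k > 0 && mod(k, L) == 0       % k = mL, m = 1, 2, 3, ...
    phi = 0.5 * Fw + 0.5 * phi;  % blend current magnitudes into the factors
end                              % otherwise phi keeps its previous value
gamma = max(phi, Fw);            % gamma_l(k) = max{phi_l(k), F[...]}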
2.3.6. Adaptive μ-Law Proportionate Normalized LMS Algorithm (AMPNLMS)

By modifying the MPNLMS algorithm, we obtain a new algorithm called the Adaptive MPNLMS (AMPNLMS). The main modification is in the $\epsilon$ parameter, which is constant in the MPNLMS algorithm and is now allowed to vary in time in the AMPNLMS algorithm, providing more flexibility in the minimization of the mean square error (MSE).
Table 2.2 shows the steps to obtain the modified control law for the AMPNLMS algorithm. It begins with $\hat{\sigma}_e^2(k+1)$, an estimate of the mean square error obtained by time averaging, where $0 < \lambda < 1$ is a constant. Then, we scale the estimated MSE by a factor $\zeta$, giving $\tilde{\sigma}_L^2(k)$, the distance to the steady-state MSE, the value that indicates convergence has been reached when the MSE attains it. After this, we calculate $c(k)$, the distance each weight deviation should be from zero so that convergence can be confirmed. The time-varying $\mu(k)$ is then calculated, so that finally the control law can be generated [1].


Table 2.2. Obtaining the control law for the AMPNLMS algorithm

$\hat{\sigma}_e^2(k+1) = \lambda\,\hat{\sigma}_e^2(k) + (1 - \lambda)\,e^2(k)$
$\tilde{\sigma}_L^2(k) = \zeta\,\hat{\sigma}_e^2(k+1)$
$c(k) = \sqrt{\dfrac{\tilde{\sigma}_L^2(k)}{L\,\sigma_x^2}}$
$\mu(k) = \dfrac{1}{c(k)}$
$F[|\hat{w}_l(k)|, k] = \ln(1 + \mu(k)\,|\hat{w}_l(k)|)$
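A minimal MATLAB sketch of these steps (our variable names: sigma2_e for the running MSE estimate, sigma2_x for the input power, zeta for the scaling factor) might be:

sigma2_e = lambda * sigma2_e + (1 - lambda) * e^2;  % time-averaged MSE estimate
sigma2_L = zeta * sigma2_e;                         % scaled steady-state MSE target
c = sqrt(sigma2_L / (L * sigma2_x));                % target weight-deviation distance c(k)
mu_k = 1 / c;                                       % time-varying mu(k)
Fw = log(1 + mu_k * abs(w_hat));                    % mu-law control law with mu(k)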

2.3.7. Adaptive Segmented Proportionate Normalized LMS Algorithm (ASPNLMS)

In order to avoid the calculation of the logarithm term in the MPNLMS algorithm, the Segmented PNLMS (SPNLMS) algorithm was introduced. Its idea is based on dividing the logarithm function into two segments: the first with a certain slope, and the second with zero slope. Tuning the slope of the first segment and the transition point between the two is an important factor when making this linear approximation of the logarithm curve.
Here, the Adaptive Segmented PNLMS (ASPNLMS) algorithm is proposed, a slight modification of the SPNLMS algorithm, in order to simplify the AMPNLMS algorithm. Table 2.3 shows how this new algorithm works, concluding with the control law [1].
The parameter $\theta$ is the scaling factor, which defines the slope of the first segment as well as the transition point between the two segments; therefore, it plays an important role in the performance of the algorithm.

Table 2.3. Obtaining the control law for the ASPNLMS algorithm

$\hat{\sigma}_e^2(k+1) = \lambda\,\hat{\sigma}_e^2(k) + (1 - \lambda)\,e^2(k)$
$\tilde{\sigma}_L^2(k) = \zeta\,\hat{\sigma}_e^2(k+1)$
$c(k) = \sqrt{\dfrac{\tilde{\sigma}_L^2(k)}{L\,\sigma_x^2}}$
$F[|\hat{w}_l(k)|, k] = \begin{cases} \dfrac{|\hat{w}_l(k)|}{\theta\,c(k)}, & \text{if } |\hat{w}_l(k)| < \theta\,c(k) \\ 1, & \text{otherwise} \end{cases}$
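Following the same reconstruction, a sketch of the segmented control law in MATLAB (theta is the scaling factor; c is computed as in the AMPNLMS sketch above) could be:

Fw = min(abs(w_hat) / (theta * c), 1);  % linear segment below theta*c, then flat at 1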

2.4. The Affine Projection Algorithm

After presenting the proportionate-type NLMS algorithms, we now present the affine projection algorithm. It was motivated by the desire to improve the convergence rate of the NLMS algorithm [2]. Its principle is to recycle old data to improve convergence speed; however, reusing data increases the final misadjustment of the algorithm. The trade-off between final misadjustment and convergence rate is controlled by the introduction of a convergence factor [3].
As presented in [3], to derive the affine projection algorithm we first define a matrix $\mathbf{X}_{ap}(k)$ composed of the last $N$ input vectors, $\mathbf{x}(k), \mathbf{x}(k-1), \ldots, \mathbf{x}(k-N+1)$, where $\mathbf{x}(k) = [x(k), x(k-1), \ldots, x(k-L+1)]^T$:
$\mathbf{X}_{ap}(k) = \begin{pmatrix} x(k) & x(k-1) & \cdots & x(k-N+1) \\ x(k-1) & x(k-2) & \cdots & x(k-N) \\ \vdots & \vdots & \ddots & \vdots \\ x(k-L+1) & x(k-L) & \cdots & x(k-N-L+2) \end{pmatrix} = [\mathbf{x}(k)\ \ \mathbf{x}(k-1)\ \cdots\ \mathbf{x}(k-N+1)].$
Then, referring to the block diagram in Figure 2.2, we define the output of the adaptive filter as
$\hat{\mathbf{y}}_{ap}(k) = \mathbf{X}_{ap}^T(k)\,\hat{\mathbf{w}}(k) = [\hat{y}_{ap,0}(k), \hat{y}_{ap,1}(k), \ldots, \hat{y}_{ap,N-1}(k)]^T.$
The measured output of the unknown system is represented as
$\mathbf{d}_{ap}(k) = [d(k), d(k-1), \ldots, d(k-N+1)]^T.$
The difference between $\mathbf{d}_{ap}(k)$ and $\hat{\mathbf{y}}_{ap}(k)$ is the error that drives the adaptive algorithm, and it is represented as
$\mathbf{e}_{ap}(k) = \mathbf{d}_{ap}(k) - \hat{\mathbf{y}}_{ap}(k) = \begin{pmatrix} d(k) - \hat{y}_{ap,0}(k) \\ d(k-1) - \hat{y}_{ap,1}(k) \\ \vdots \\ d(k-N+1) - \hat{y}_{ap,N-1}(k) \end{pmatrix}.$
To summarize the affine projection algorithm process, a table similar to the one used for the proportionate-type algorithms is presented with all the steps needed to obtain the updated estimated coefficients (Table 2.4). Note that the same $\gamma_{\min}$ logic is used here, so that the gain matrix can be generated and plugged into the final expression.
Table 2.4. Base Affine Projection Algorithm

$\mathbf{X}_{ap}(k) = [\mathbf{x}(k), \mathbf{x}(k-1), \ldots, \mathbf{x}(k-N+1)]$
$\hat{\mathbf{y}}_{ap}(k) = \mathbf{X}_{ap}^T(k)\,\hat{\mathbf{w}}(k)$
$\mathbf{e}_{ap}(k) = \mathbf{d}_{ap}(k) - \hat{\mathbf{y}}_{ap}(k)$
$F[|\hat{w}_l(k)|, k]$: depends on the specific algorithm
$\gamma_{\min}(k) = \rho \max\{\delta_p, F[|\hat{w}_1(k)|, k], \ldots, F[|\hat{w}_L(k)|, k]\}$
$\gamma_l(k) = \max\{\gamma_{\min}(k), F[|\hat{w}_l(k)|, k]\}$
$g_l(k) = \dfrac{\gamma_l(k)}{\frac{1}{L}\sum_{i=1}^{L} \gamma_i(k)}$
$\mathbf{G}(k) = \mathrm{Diag}\{g_1(k), \ldots, g_L(k)\}$
$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \beta\,\mathbf{G}(k)\,\mathbf{X}_{ap}(k)\left(\mathbf{X}_{ap}^T(k)\,\mathbf{X}_{ap}(k) + \delta \mathbf{I}\right)^{-1}\mathbf{e}_{ap}(k)$
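A sketch of the proportionate affine projection update in MATLAB (our variable names; X_ap is the L-by-N input matrix and g the L-by-1 gain vector obtained from the control-law steps above) might look like:

e_ap = d_ap - X_ap' * w_hat;              % N-by-1 error vector e_ap(k)
GX = g .* X_ap;                           % G(k)*X_ap(k) via implicit expansion
w_hat = w_hat + beta * GX * ...
    ((X_ap' * X_ap + delta * eye(N)) \ e_ap);  % solve rather than invert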

We can obtain the affine projection version of all the algorithms discussed in the previous subsection by simply plugging their control laws $F[|\hat{w}_l(k)|, k]$ into this base algorithm.

3. Simulation Results
In order to analyze and compare the behavior and performance of the algorithms described in the last section, several simulations were performed. This section presents the results of these simulations, as well as some explanation of how they were prepared. Different input signals were used, such as white Gaussian noise, colored noise and speech. Most of the simulations used the system impulse response in Figure 2.1, although different impulse responses were also experimented with. As measurement noise, in most cases white Gaussian noise was added to the unknown system output, sometimes with different signal-to-noise ratio (SNR) values. Both the proportionate-type NLMS and affine projection algorithms were contemplated in the simulations. The software MATLAB was used to carry out the simulations, because it makes it relatively easy to implement and experiment with the algorithms in different situations, proving to be a powerful tool when dealing with digital signal processing.

3.1. Simulations with the Proportionate-Type NLMS Algorithms

We begin the experiments with the proportionate-type NLMS algorithms: NLMS, PNLMS, IPNLMS, MPNLMS, IAF-PNLMS, AMPNLMS and ASPNLMS. Each simulation is presented in its own subsection, varying the input type, the unknown system impulse response, the measurement noise, etc. When needed, the parameters used in each algorithm are explicitly cited.

3.1.1. Simulation 1: Impulse Response Type 1

For the first simulation, we use the system impulse response shown in Figure 2.1, and the measurement noise is white Gaussian noise with SNR = 40 dB. The parameters used in each algorithm are presented in Table 3.1.
Table 3.1. Parameters for Simulation 1

NLMS: $\beta = 0.3$, $\delta = 0.0001$
PNLMS: $\beta = 0.3$, $\rho = 0.01$, $\delta_p = 0.01$, $\delta = 0.0001$
IPNLMS: $\beta = 0.3$, $\alpha_{\mathrm{IPNLMS}} = 0$, $\epsilon_{\mathrm{IPNLMS}} = 0.0001$, $\delta = 0.0001$
MPNLMS: $\beta = 0.3$, $\mu = 1000$, $\rho = 0.01$, $\delta_p = 0.01$, $\delta = 0.0001$
IAF-PNLMS: $\beta = 0.3$, $\delta = 0.0001$
AMPNLMS: $\beta = 0.3$, $\lambda = 0.99$, $\zeta = 0.01$, $\mu = 1000$, $\rho = 0.01$, $\sigma_x^2 = 1$, $\delta = 0.0001$
ASPNLMS: $\beta = 0.3$, $\lambda = 0.99$, $\zeta = 0.01$, $\theta = 10$, $\mu = 1000$, $\rho = 0.01$, $\sigma_x^2 = 1$, $\delta = 0.0001$

We have two simulations in this subsection: one with white Gaussian noise input and the other with colored noise input. The coloring filter is a low pass filter with a pole at $z = 0.9$. For each simulation, 100 Monte Carlo runs were executed. The learning curves are shown in Figures 3.1 and 3.2.
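For reference, the colored input can be generated in MATLAB by filtering white Gaussian noise through this one-pole filter (a sketch; the exact signal length and scaling used in the simulations are our assumptions):

n_samples = 8e4;                       % e.g., 80,000 iterations
white = randn(n_samples, 1);           % white Gaussian noise input
colored = filter(1, [1 -0.9], white);  % low pass coloring filter, pole at z = 0.9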
Figure 3.1. Learning curve for simulation 1, white Gaussian noise input.
Figure 3.2. Learning curve for simulation 1, colored noise input.

In the first set of curves (Figure 3.1), we can first note that one advantage of the proportionate-type algorithms over the simple NLMS is their fast initial convergence, which may or may not persist throughout the process, depending on the algorithm. Examples of algorithms that seem to maintain a fast convergence rate during the whole process are the MPNLMS, AMPNLMS and ASPNLMS algorithms; the first two are based on direct calculation of a logarithm term, and the third uses a linearization of the logarithm curve. Although it is a simplification, the ASPNLMS algorithm performs very well, while demanding less computational effort. Curiously, the IAF-PNLMS algorithm reaches a convergence plateau above the other algorithms, perhaps because the impulse response is not sufficiently sparse; in [4] it appears to perform better when the impulse response is strongly sparse.
When the input signal is colored noise (Figure 3.2), it is noticeable that the convergence speed is reduced for all the algorithms. The best performances come from the MPNLMS and AMPNLMS algorithms, which maintain a fast convergence rate during the whole process. The NLMS did not reach convergence within this number of iterations.

3.1.2. Simulation 2: Impulse Response Type 2

In the second simulation, another impulse response is used, as shown in Figure 3.3. It is also
sparse, but it has a smaller number of coefficients, 256, whereas the previous one had 512.


Figure 3.3. Sparse impulse response type 2.

The simulation parameters are the same as in the previous simulation, as presented in Table 3.1. Also, the measurement noise is white Gaussian noise with SNR = 40 dB. We again have two types of input, white Gaussian noise and colored noise, the same as in the previous subsection. The learning curves are shown in Figures 3.4 and 3.5.

Figure 3.4. Learning curve for simulation 2, white Gaussian noise input.

Figure 3.5. Learning curve for simulation 2, colored noise input.

Comparing the results of this simulation with the previous one, we can see that most of the algorithms took slightly more iterations to converge. The biggest difference is the performance of the NLMS algorithm, much better here than in the previous simulation. In the white noise input case, the best performances come from the ASPNLMS and AMPNLMS algorithms, followed by the MPNLMS. In the colored input case, the MPNLMS algorithm is the best, followed by the AMPNLMS and ASPNLMS algorithms. In both cases, the IAF-PNLMS algorithm showed an even worse performance than in the previous simulation.

3.1.3. Simulation 3: Impulse Response Type 3

This simulation is similar to the previous two; the only difference is the impulse response being used. Now, instead of a sparse impulse response, we use a dispersive one. It is generated by multiplying a white Gaussian noise signal of length 512 by a decaying exponential $e^{-0.1t}$, producing an exponentially damped signal, as shown in Figure 3.6.
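Such a response can be generated in MATLAB along these lines (a sketch, assuming the index t starts at 0):

t = (0:511)';                        % coefficient index
h = randn(512, 1) .* exp(-0.1 * t);  % dispersive, exponentially damped response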


Figure 3.6. Impulse response type 3.

The same parameters are used here. Figures 3.7 and 3.8 show the learning curves for white
noise input and colored noise input, respectively.
Figure 3.7. Learning curve for simulation 3, white Gaussian noise input.



Figure 3.8. Learning curve for simulation 3, colored noise input.

This simulation with a dispersive impulse response was carried out in order to check the robustness of the algorithms. As can be seen, the convergence rate is slower than in the two previous cases. As the PNLMS algorithm is intended to work with sparse impulse responses, its performance degraded here. The IAF-PNLMS also performed badly in this case, as expected, given that it does better with highly sparse impulse responses. However, in the white noise input case, the other algorithms converged in a similar fashion, with the NLMS being the slowest of them. In the colored noise input simulation, none of the algorithms reached convergence.

3.1.4. Simulation 4: Shift Tracking Analysis

The purpose of this simulation is to analyze the capability of the algorithms to cope with a
shift in the coefficients of the unknown system impulse response while the adaptation process is
happening. From now on, only the sparse impulse response in Figure 2.1 is going to be used in
the simulations. The parameters of the algorithms used in this simulation are the same as the ones
presented in Table 3.1. As measurement noise, white Gaussian noise with SNR=40 dB is used
again.

In the two simulations performed in this section, the original impulse response is shifted by 50 coefficients halfway through the total number of iterations. Again, the system is excited with two types of input: white Gaussian noise and colored noise, with the coloring filter being a low pass filter with a pole at $z = 0.9$. Figures 3.9 and 3.10 show the learning curves obtained from the simulations.

Figure 3.9. Learning curve for simulation 4, white Gaussian noise input.
Figure 3.10. Learning curve for simulation 4, colored noise input.


As we can see, in the white input case, all of the algorithms track the shift in the impulse response very quickly, showing little or no delay relative to the first part. In the colored input scenario, the changes are more apparent: for most of the algorithms, half the iterations are not sufficient to reach convergence, but they appear to continue the process normally.

3.1.5. Simulation 5: Speech Input

In this subsection, we address the performance of the algorithms when the input is a speech
signal. The simulations use the impulse response in Figure 2.1. The speech input signal used in
all the simulations is shown in Figure 3.11.

Figure 3.11. Plot of the speech signal to be used in the simulations.

This subsection comprises five simulations, all of them using the same speech input signal and the system impulse response cited above. In the first one, no measurement noise is added to the output of the unknown system. In the second one, white Gaussian noise with SNR = 40 dB is used as measurement noise, whereas in the third one, SNR = 20 dB is used. In the fourth one, we apply a coloring filter (a low pass filter with a pole at $z = 0.9$) to the white Gaussian noise and add it as measurement noise with SNR = 40 dB. Finally, in the last simulation, a second speech signal (Figure 3.12) is added as measurement noise with 1/10 of the power of the system output.
In each simulation, 100 Monte Carlo runs are executed. To generate 100 different inputs
from a single speech signal, circular shifts are applied to the base signal. The parameters used in
the algorithms are the ones shown in Table 3.1.
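A sketch of this input-randomization step (the choice of shifts is our assumption; the actual shifts used are not specified):

% m-th Monte Carlo input: circularly shifted copy of the base speech signal
shift_m = randi(length(speech));   % e.g., a random shift per run
x_m = circshift(speech, shift_m);  % shifted copy used as the input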


Figure 3.12. Plot of the speech signal to be used as measurement noise in the 5th simulation.

The resultant learning curves are shown in Figures 3.13 to 3.17.



Figure 3.13. Learning curve for simulation 5, no measurement noise.



Figure 3.14. Learning curve for simulation 5, 40 dB white measurement noise.


Figure 3.15. Learning curve for simulation 5, 20 dB white measurement noise.



Figure 3.16. Learning curve for simulation 5, 40 dB colored measurement noise.



Figure 3.17. Learning curve for simulation 5, second speech measurement noise.

Beginning with the first simulation, where no measurement noise is added, the result is rather chaotic and hard to draw conclusions from, except that none of the algorithms seem to have reached convergence.
In the second and third simulations, in which 40 dB and 20 dB white Gaussian noise were added as measurement noise, respectively, it is easier to see that the algorithms reach a convergence plateau; but unlike the previous simulations with noise input signals, they settle at different levels, possibly because speech is a non-stationary signal. In both cases, the IPNLMS algorithm converges deepest, reaching the level closest to the measurement noise level. A noticeable difference is that in the 40 dB case the levels are closer to each other, whereas in the 20 dB case they are more scattered along the vertical axis.
In the last simulation, with a second speech signal as measurement noise, the result is even more chaotic. There is no convergence at all, and the mean square error remains greater than the output power throughout the process for all the algorithms except the IPNLMS.

3.2. Simulations with the Affine Projection Algorithms

To finalize the series of simulations, the PtNLMS algorithms are modified to follow the affine projection algorithm format. All the algorithms use $N = 4$, i.e., 3 past input vectors are recycled, in addition to the current one, when calculating a new estimated coefficient vector. Two simulations are performed in this subsection. They both use the impulse response in Figure 2.1, as well as the parameters in Table 3.1, except that now $\beta = 0.01$ is used. As measurement noise, white Gaussian noise with SNR = 40 dB is used. As in the previous simulations, white Gaussian noise is used as input in one of the simulations, and the other uses colored noise, with the same coloring filter. The learning curves are shown in Figures 3.18 and 3.19.


Figure 3.18. Learning curve for simulation 6, white Gaussian noise input.
Figure 3.19. Learning curve for simulation 6, colored noise input.


Comparing the results of the affine projection algorithms to simulation 1, the proportionate-type algorithms seem to outperform their affine projection counterparts, converging faster. In the white noise input case, the AP-MPNLMS algorithm has the best performance, followed by the AP-AMPNLMS and the AP-ASPNLMS. The AP-IAF-PNLMS shows the same strange behavior, converging at a level above the others. The AP-NLMS does not reach convergence. In the colored noise input case, the convergence speed is slightly reduced, and the AP-AMPNLMS algorithm outperforms the AP-MPNLMS.

4. Conclusion
In conclusion, it can be said that most of the algorithms presented can successfully identify an unknown system even if it does not have a sparse impulse response, which is the type of impulse response the proportionate-type NLMS algorithms were designed for. Some of them present relatively better performance, such as the MPNLMS, the AMPNLMS and the ASPNLMS. The first two have a disadvantage in computational complexity, as they depend on the calculation of a logarithmic term. The ASPNLMS therefore seems to be a very good alternative, as it performs as well as the other two, sometimes even better, and has a simpler implementation, due to the linearization of the logarithmic term. An interesting point is that the algorithms can keep track of a changing impulse response, as shown in the simulation where the impulse response is shifted in the middle of the adaptation process. However, when dealing with speech input, the results are not very conclusive, and the simulation process requires more computational power, being slower than the other ones. A more conclusive way to analyze the results of the speech input simulation is to listen to the error signal generated in one run of the code. Using this technique, in most of the situations, the IPNLMS, MPNLMS, AMPNLMS and ASPNLMS algorithms presented better performance. Finally, the affine projection format was tested, and the conclusion was that, in the scenarios tested, the performance of the proportionate-type NLMS algorithms was still better. However, only one type of simulation was performed with this format, so a possible expansion of this work is to test these algorithms under different conditions and detect which of the formats is better in each situation. Also, there is a need to find a better way to test the behavior of the algorithms when the input is a speech signal.

5. References
[1] K. Wagner and M. Doroslovački, Proportionate-Type Normalized Least Mean Square Algorithms, pp. 1–12, 119–132, Mar. 2013.
[2] Affine Projection 1, pp. 334–341.
[3] Affine Projection 2, pp. 156–161.
[4] F. C. de Souza, R. Seara and D. R. Morgan, "A PNLMS Algorithm With Individual Activation Factors," IEEE Trans. Signal Process., vol. 58, no. 4, pp. 2036–2047, Apr. 2010.
