
Lecture 8
Block & Frequency-Domain Adaptive Filtering

Contents
- Block Adaptive Filters
- Fast LMS Algorithm (F-LMS)
- Numerical Example
- Summary


Why Frequency-Domain?

Time-domain adaptive filtering with a long FIR adaptive filter suffers from:
- high computational complexity
- slow convergence speed

Possible remedies:
- IIR adaptive filter
- RLS
- frequency-domain adaptive filtering (FDAF)

input signal vector:
$\mathbf{x}(n) = [\,x(n),\; x(n-1),\; \ldots,\; x(n-M+1)\,]^T$   ... (1)
adaptive filter length: $M$

Block Adaptive Filtering

Fig.1 Block Adaptive Filter: the input $x(n)$ enters a serial-to-parallel converter (the mechanism for sectioning), a block FIR filter $\mathbf{h}$ produces the output block, and a parallel-to-serial converter delivers $y(n)$; the error $e(n) = d(n) - y(n)$ is sectioned by another serial-to-parallel converter and fed to the mechanism that performs the weight update.

The input signal is sectioned into $L$-point blocks.

Block Adaptive Filtering (II)

Let $k$ be the block number. The adaptive filter for the $k$th block is given by
$\mathbf{h}(k) = [\,h_0(k),\; h_1(k),\; \ldots,\; h_{M-1}(k)\,]^T$   ... (2)
and is kept fixed over each block of data. Thus, the sample time $n$ can be written as
$n = kL + i, \quad i = 0, 1, \ldots, L-1, \quad k = 0, 1, \ldots$   ... (3)

The output of the adaptive filter is
$y(n) = \mathbf{h}^T(k)\,\mathbf{x}(n)$   ... (4)

Block Adaptive Filtering (III)

which is also equal to
$y(kL+i) = \mathbf{h}^T(k)\,\mathbf{x}(kL+i) = \sum_{l=0}^{M-1} h_l(k)\, x(kL+i-l), \quad i = 0, 1, \ldots, L-1$   ... (5)

Thus, the error signal becomes
$e(n) = d(n) - y(n)$   ... (6)
$e(kL+i) = d(kL+i) - y(kL+i)$   ... (7)
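As a concrete illustration of Eqs. (3)-(7), the NumPy sketch below computes the output and error of one block while the weights are held fixed. The function name and the zero initial condition $x(n) = 0$ for $n < 0$ are our own assumptions, not part of the lecture.

```python
import numpy as np

def block_filter_output(h, x, d, k, L):
    """Output and error of the k-th block, Eqs. (5)-(7).

    h : (M,) filter weights h(k), held fixed over the block
    x : input sequence (x(n) taken as 0 for n < 0)
    d : desired sequence
    """
    M = len(h)
    y = np.zeros(L)
    for i in range(L):
        n = k * L + i                      # sample time n = kL + i, Eq. (3)
        # regressor x(n) = [x(n), x(n-1), ..., x(n-M+1)]^T, Eq. (1)
        xn = np.array([x[n - l] if n - l >= 0 else 0.0 for l in range(M)])
        y[i] = h @ xn                      # y(n) = h^T(k) x(n), Eq. (4)
    e = d[k * L : k * L + L] - y           # e(kL+i) = d(kL+i) - y(kL+i), Eq. (7)
    return y, e
```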

Example

For the kth block, Eq.(5) becomes (when the filter length M = 3 and the block size L = 3):
$\begin{bmatrix} y(3k) \\ y(3k+1) \\ y(3k+2) \end{bmatrix} = \begin{bmatrix} x(3k) & x(3k-1) & x(3k-2) \\ x(3k+1) & x(3k) & x(3k-1) \\ x(3k+2) & x(3k+1) & x(3k) \end{bmatrix} \begin{bmatrix} h_0(k) \\ h_1(k) \\ h_2(k) \end{bmatrix}$

For the (k+1)th block, the same structure holds with time indices $3k+3, \ldots, 3k+5$ and weights $\mathbf{h}(k+1)$.

Block Adaptive Filtering (IV)

The update equation of the adaptive filter is given by
$\mathbf{h}(k+1) = \mathbf{h}(k) + \mu\,\boldsymbol{\phi}(k)$   ... (8)
where $0 < \mu < \frac{2}{\lambda_{\max}}$ is the step-size parameter, and $\lambda_{\max}$ is the largest eigenvalue of the autocorrelation matrix of the input signal vector. (When the eigenvalue spread is large, $\mu$ has to be small to ensure stability of the system.)
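A quick numerical check of the M = L = 3 example, using hypothetical data chosen only for illustration (x(-1) and x(-2) are taken as 0 for the first block):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])          # x(3k), x(3k+1), x(3k+2) for k = 0
h = np.array([0.5, -0.25, 0.1])        # h0(k), h1(k), h2(k)

# Data matrix of Eq.(5) for M = L = 3 (Toeplitz; zeros before the block).
X = np.array([[x[0], 0.0,  0.0 ],
              [x[1], x[0], 0.0 ],
              [x[2], x[1], x[0]]])

y = X @ h                               # block output [y(3k), y(3k+1), y(3k+2)]
print(y)                                # [0.5  0.75 1.1 ]
```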

Block Adaptive Filtering (V)

The quantity $\boldsymbol{\phi}(k)$ is defined as an $M \times 1$ cross-correlation vector:
$\boldsymbol{\phi}(k) = \sum_{i=0}^{L-1} \mathbf{x}(kL+i)\, e(kL+i)$   ... (9)
where each element of $\boldsymbol{\phi}(k)$ can be seen as a linear cross-correlation:
$\phi_j(k) = \sum_{i=0}^{L-1} x(kL+i-j)\, e(kL+i), \quad j = 0, 1, \ldots, M-1$   ... (10)

Block Adaptive Filtering (VI)

Properties of the block LMS (BLMS) algorithm:
- BLMS and LMS minimize the same cost function.
- BLMS gives a more accurate estimate of the gradient vector (due to time averaging) than the conventional LMS algorithm. Estimation accuracy increases with the block size:
LMS : $\nabla(n) = -2\,\mathbf{x}(n)\, e(n)$   ... (11)
BLMS : $\nabla(k) = -\frac{2}{L} \sum_{i=0}^{L-1} \mathbf{x}(kL+i)\, e(kL+i)$   ... (12)
- The $1/L$ factor makes it an unbiased time average.
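Putting Eqs. (8) and (9) together, a minimal BLMS sketch in NumPy (identifier names and the zero prehistory of the input are our own assumptions):

```python
import numpy as np

def blms(x, d, M, L, mu, n_blocks):
    """Block LMS: one weight update per L-sample block, Eqs. (8)-(9).

    Minimal sketch; assumes len(x), len(d) >= n_blocks*L and x(n) = 0 for n < 0.
    """
    h = np.zeros(M)
    x_pad = np.concatenate([np.zeros(M - 1), x])    # zero prehistory
    e_hist = np.zeros(n_blocks * L)
    for k in range(n_blocks):
        phi = np.zeros(M)                           # cross-correlation vector, Eq. (9)
        for i in range(L):
            n = k * L + i
            xn = x_pad[n : n + M][::-1]             # [x(n), ..., x(n-M+1)], Eq. (1)
            e = d[n] - h @ xn                       # Eqs. (4), (7)
            phi += xn * e
            e_hist[n] = e
        h = h + mu * phi                            # Eq. (8): h(k+1) = h(k) + mu*phi(k)
    return h, e_hist
```

Note that the weights are only read inside the block and written once at the block boundary, which is exactly what allows the fast (FFT-based) implementation later in this lecture.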

Block Adaptive Filtering (VII)

But this does not imply a faster rate of convergence than LMS, since the eigenvalue spread is independent of the block size.

The effective step-size of BLMS is
$\mu_B = L\,\mu$   ... (13)
From Eq.(8), the update equation becomes
$\mathbf{h}(k+1) = \mathbf{h}(k) - \frac{\mu_B}{2}\,\nabla(k)$
BLMS therefore has a tighter bound on the step-size, e.g. (L = 10):
$\mu_{LMS} = 0.01 \;\Rightarrow\; \mu_B = 0.1$
$\mu_{LMS} = 0.5 \;\Rightarrow\; \mu_B = 5$, which is impossible!

Block Adaptive Filtering (VIII)

- The time constant and misadjustment of BLMS and LMS are identical.
- BLMS introduces signal path delay in the adaptation loop, depending on the block length $L$.

Block size

In general, the block size and the adaptive filter length can be chosen differently, i.e. $L \neq M$. The recommended choice of the block size is
$L = M$
- $L > M$ : redundant data in the operations of the adaptive filter
- $L < M$ : insignificant data for the estimation of the gradient vector

Contents
- Block Adaptive Filters
- Fast LMS Algorithm (F-LMS)
- Numerical Example
- Summary

Fast LMS (F-LMS) Algorithm

- It was developed by Clark and Ferrara in 1980.
- Instead of performing the usual time-domain adaptation, F-LMS employs the FFT to obtain computational efficiency.
- The block length is now chosen to be $L = M$.
- Linear convolution is replaced by fast convolution (by employing the overlap-save method).

F-LMS (II)

Fig.2 Fast LMS Algorithm: two consecutive input blocks (old and new) are concatenated and transformed by an FFT to give $\mathbf{X}(k)$, which multiplies $\mathbf{H}(k)$; the last block of the IFFT of $\mathbf{Y}(k)$ is saved as the output $y(n)$ and the first block is discarded. The error $e(n) = d(n) - y(n)$ is prepended with a zero block and transformed by an FFT to give $\mathbf{E}(k)$, which is multiplied by the conjugate $\mathbf{X}^H(k)$; after an IFFT, the last block is deleted and a zero block is appended (the gradient constraint) before the final FFT that, through a delay, yields the update $\mathbf{H}(k+1)$.

F-LMS (III)

The input signal matrix is given by
$\mathbf{X}(k) = \mathrm{diag}\big(\mathrm{FFT}\big[\,x(kM-M), \ldots, x(kM-1),\; x(kM), \ldots, x(kM+M-1)\,\big]\big)$   ... (14)
of size $N \times N$ with $N = 2M$; the first $M$ samples come from the $(k-1)$th block and the last $M$ samples from the $k$th block.

The output signal vector for the block time $k$ is obtained as
$\mathbf{Y}(k) = \mathbf{X}(k)\,\mathbf{H}(k)$   ... (15)
which results in
$\mathbf{y}(k) = [\,y(kM),\; y(kM+1),\; \ldots,\; y(kM+M-1)\,]^T$   ... (16)
from the relationship
$\mathbf{y}(k) = \text{last } M \text{ elements of } \mathrm{IFFT}[\mathbf{Y}(k)]$   ... (17)
(the first $M$ elements are the "don't care" data of the overlap-save method).

F-LMS (IV)

The frequency-domain error signal is therefore given by
$\mathbf{E}(k) = \mathrm{FFT}\begin{bmatrix} \mathbf{0}_{M \times 1} \\ \mathbf{e}(k) \end{bmatrix}$   ... (18)

For the block time $k$, the error signal vector is equal to
$\mathbf{e}(k) = [\,e(kM),\; e(kM+1),\; \ldots,\; e(kM+M-1)\,]^T = \mathbf{d}(k) - \mathbf{y}(k)$   ... (19)

F-LMS (V)

and the desired signal vector is
$\mathbf{d}(k) = [\,d(kM),\; d(kM+1),\; \ldots,\; d(kM+M-1)\,]^T$   ... (20)

The gradient vector is obtained from the first $M$ elements of the fast correlation:
$\boldsymbol{\phi}(k) = \text{first } M \text{ elements of } \mathrm{IFFT}\big[\mathbf{X}^H(k)\,\mathbf{E}(k)\big]$   ... (21)

The $N$-point FFT of the zero-padded gradient vector is used in the update equation as
$\mathbf{H}(k+1) = \mathbf{H}(k) + \mu\, \mathrm{FFT}\begin{bmatrix} \boldsymbol{\phi}(k) \\ \mathbf{0} \end{bmatrix}$   ... (22)
where $\mu$ is the step-size.

Convergence Rate Improvement

- F-LMS represents a precise frequency-domain implementation of BLMS.
- A large eigenvalue spread of the input autocorrelation matrix leads to a slow convergence rate, due to the upper bound on the step-size that guarantees stability.
- Each tap-weight of the adaptive filter is therefore suggested to be adapted independently from the others: an individual step-size is applied to each tap-weight (frequency bin) of the adaptive filter.
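The whole F-LMS recursion of Fig.2 and Eqs. (14)-(22) fits in a few NumPy lines. This is a minimal sketch assuming real-valued signals; since Eq. (14) is diagonal, the matrix products reduce to element-wise multiplications of length-N vectors.

```python
import numpy as np

def flms(x, d, M, mu, n_blocks):
    """Fast LMS via overlap-save, Eqs. (14)-(22); L = M, N = 2M.

    Minimal sketch; assumes len(x), len(d) >= n_blocks*M and x(n) = 0 for n < 0.
    """
    H = np.zeros(2 * M, dtype=complex)      # frequency-domain weights H(k)
    x_old = np.zeros(M)                     # previous input block
    e_hist = np.zeros(n_blocks * M)
    for k in range(n_blocks):
        x_new = x[k * M : (k + 1) * M]
        X = np.fft.fft(np.concatenate([x_old, x_new]))      # diagonal of Eq. (14)
        y = np.real(np.fft.ifft(X * H))[M:]                 # Eqs. (15)-(17): keep last M
        e = d[k * M : (k + 1) * M] - y                      # Eq. (19)
        E = np.fft.fft(np.concatenate([np.zeros(M), e]))    # Eq. (18)
        phi = np.real(np.fft.ifft(np.conj(X) * E))[:M]      # Eq. (21): keep first M
        H = H + mu * np.fft.fft(np.concatenate([phi, np.zeros(M)]))  # Eq. (22)
        x_old = x_new
        e_hist[k * M : (k + 1) * M] = e
    return H, e_hist
```

If the time-domain weights are needed, they are the first $M$ elements of IFFT[$\mathbf{H}(k)$].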

Convergence Rate (II)

The estimate of the average energy of the input signal in the $i$th frequency bin, $P_i(k)$, is given by
$P_i(k) = \gamma_F\, P_i(k-1) + (1 - \gamma_F)\,\big|X_{ii}(k)\big|^2, \quad i = 0, 1, \ldots, N-1$   ... (23)
where $X_{ii}(k)$ is the $i$th diagonal element of the input signal matrix at block time $k$, and the constant $0 < \gamma_F < 1$ is a forgetting factor ($\gamma_F \to 1$ for a stationary input; smaller for a non-stationary input).

The step-size is now given by the diagonal matrix
$\mathbf{D}(k) = \mathrm{diag}\big(P_0^{-1}(k),\; P_1^{-1}(k),\; \ldots,\; P_{N-1}^{-1}(k)\big)$   ... (24)

Convergence Rate (III)

Hence, the cross-correlation vector $\boldsymbol{\phi}'(k)$ now becomes
$\boldsymbol{\phi}'(k) = \text{first } M \text{ elements of } \mathrm{IFFT}\big[\mathbf{D}(k)\,\mathbf{X}^H(k)\,\mathbf{E}(k)\big]$   ... (25)

This modified version will be referred to as F-NLMS in this course. Its update equation is given by
$\mathbf{H}(k+1) = \mathbf{H}(k) + \mu'\, \mathrm{FFT}\begin{bmatrix} \boldsymbol{\phi}'(k) \\ \mathbf{0} \end{bmatrix}$   ... (26)
where $0 < \mu' < 2$ is a constant.
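The F-NLMS modification only changes two lines of the F-LMS sketch above: the per-bin power recursion of Eq. (23) and the normalization of Eq. (25). A sketch, with a small regularizer delta added by us to avoid division by zero (not part of the lecture):

```python
import numpy as np

def fnlms(x, d, M, mu_p, gamma_f, n_blocks, delta=1e-6):
    """F-NLMS: F-LMS with per-bin normalized step-sizes, Eqs. (23)-(26).

    mu_p is mu' (0 < mu' < 2); gamma_f is the forgetting factor (0 < gamma_f < 1).
    """
    H = np.zeros(2 * M, dtype=complex)
    P = np.full(2 * M, delta)               # per-bin power estimates P_i(k)
    x_old = np.zeros(M)
    e_hist = np.zeros(n_blocks * M)
    for k in range(n_blocks):
        x_new = x[k * M : (k + 1) * M]
        X = np.fft.fft(np.concatenate([x_old, x_new]))
        y = np.real(np.fft.ifft(X * H))[M:]
        e = d[k * M : (k + 1) * M] - y
        E = np.fft.fft(np.concatenate([np.zeros(M), e]))
        P = gamma_f * P + (1 - gamma_f) * np.abs(X) ** 2            # Eq. (23)
        phi = np.real(np.fft.ifft(np.conj(X) * E / (P + delta)))[:M]  # Eqs. (24)-(25)
        H = H + mu_p * np.fft.fft(np.concatenate([phi, np.zeros(M)]))  # Eq. (26)
        x_old = x_new
        e_hist[k * M : (k + 1) * M] = e
    return H, e_hist
```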

Contents
- Block Adaptive Filters
- Fast LMS Algorithm (F-LMS)
- Numerical Example
- Summary

AEC Example

An example of audio conferencing in a mobile situation [http://iec.org].

AEC Example (II)

In any telecommunication system that involves hands-free or speaker-mode operation, acoustic echo cancellation (AEC) is necessary in order to prevent the far-end speaker from hearing his/her own voice back as echo.

Q : How does the AEC system work?

AEC Example (III)

Problem:
- There are many echo paths in the room (reflections off many objects).
- The paths can vary over time, e.g. objects change their locations (echo path change).

AEC Example (IV)

The far-end signal $x(n)$ is played by the loudspeaker of the mobile unit, travels through the acoustic echo paths of the near-end room (the acoustic environment of the near-end speaker), and is picked up by the microphone of the mobile unit as $d(n)$. An adaptive filter driven by $x(n)$ produces an echo estimate $\hat{d}(n)$, leaving the error $e(n) = d(n) - \hat{d}(n)$.

AEC Example (V)

Fig.3 Generic form of an adaptive filter: the adaptive filter $\mathbf{w}$ filters the input $x(n)$ to produce $\hat{d}(n)$, which is subtracted from the desired signal $d(n)$; the error $e(n)$ drives the adaptive filtering algorithm.
- x(n) : input signal
- d(n) : desired signal
- e(n) : error signal

AEC Example (VII)

Fig.4 Block diagram of an acoustic echo canceller: the unknown system (plant) maps the input $x(n)$ to the output $d(n)$; the adaptive filter produces $y(n)$, and the error is $e(n) = d(n) - y(n)$.

AEC Example (VIII)

Two main adaptive filtering algorithms:
- Least Mean Square (LMS)
- Recursive Least Squares (RLS)

Considering the LMS algorithm:
$e(n) = d(n) - \mathbf{w}^H(n)\,\mathbf{x}(n)$   ... (27)
$\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\,\mathbf{x}(n)\,e^*(n)$   ... (28)

Normalized LMS (NLMS) algorithm:
$\mathbf{w}(n+1) = \mathbf{w}(n) + \frac{\tilde{\mu}}{\varepsilon + \|\mathbf{x}(n)\|_2^2}\,\mathbf{x}(n)\,e^*(n)$   ... (29)
where $\varepsilon$ is a small positive constant.
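A minimal NLMS sketch implementing Eqs. (27)-(29), assuming real-valued signals (so $\mathbf{w}^H = \mathbf{w}^T$ and $e^* = e$); the identifier names and zero prehistory are ours:

```python
import numpy as np

def nlms(x, d, M, mu_t, eps=1e-6):
    """NLMS, Eqs. (27)-(29), for real-valued signals.

    mu_t plays the role of mu-tilde, eps of the small positive constant.
    Assumes x(n) = 0 for n < 0.
    """
    w = np.zeros(M)
    x_pad = np.concatenate([np.zeros(M - 1), x])
    e_hist = np.zeros(len(d))
    for n in range(len(d)):
        xn = x_pad[n : n + M][::-1]                 # [x(n), ..., x(n-M+1)]
        e = d[n] - w @ xn                           # Eq. (27)
        w = w + (mu_t / (eps + xn @ xn)) * xn * e   # Eq. (29)
        e_hist[n] = e
    return w, e_hist
```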

AEC Example (IX)

The echo path (length) is determined by the size of the room where the phone (e.g. for conference calls) is used:

echo path delay = echo path length / wave propagation speed

i.e. the larger the room is, the longer the echo path delay is.

AEC Example (X)

The echo that is heard by people in the far-end room is delayed by the sum of
(1) the acoustic echo path delay in the room, and
(2) the round-trip delay in the network between the phones.

AEC Example (XI)

The echo canceller is designed to deal with an echo tail capacity (a maximum echo tail length). The echo tail length is determined by
- the echo reverberation time (in ms), and
- the dispersion time (a function decaying in amplitude with time),
both of which depend on the acoustics of the operating environment.

AEC Example (XII)

The chosen length of the adaptive filter is given by

filter length = echo tail length x sampling frequency

e.g. when the sampling frequency is 8 kHz:

Tail length (ms)   Echo path length (m)   Filter length (taps)
8                  2.5                    64
16                 5                      128
32                 10                     256
64                 20                     512

[N. Jain, Fundamentals of integration and evaluation of speech algorithms on an embedded system]
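A quick check of the table above. The tap counts follow exactly from filter length = tail length x sampling frequency; the path lengths in the table are approximate, and the speed of sound used below is our own assumption, not a figure from the lecture:

```python
FS = 8000          # sampling frequency (Hz)
C = 340.0          # speed of sound (m/s), assumed

for tail_ms in (8, 16, 32, 64):
    taps = int(tail_ms * 1e-3 * FS)        # filter length in taps
    path_m = tail_ms * 1e-3 * C            # corresponding echo path length
    print(f"{tail_ms:3d} ms -> {taps:4d} taps, ~{path_m:.1f} m echo path")
```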

Numerical Example

- Consider the AEC system. The input sequences are white noise and a speech signal.
- The acoustic echo path is modeled with a length of 256 taps.
- The Weight Error Vector Norm (WEVN) curves are shown:
$\mathrm{WEVN}(n) = 10 \log_{10} \dfrac{\|\mathbf{h}(n) - \hat{\mathbf{h}}(n)\|_2^2}{\|\mathbf{h}(n)\|_2^2}$   ... (30)

Numerical Example (II)

Fig.5 The Near-End Room Impulse Response

Numerical Example (III)

Fig.6 The WEVN performance of NLMS vs. F-NLMS, when the input signal is white noise (L = 256).

Numerical Example (IV)

Fig.7 The WEVN performance of NLMS vs. F-NLMS, with a speech input signal (L = 256).
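Eq. (30) is straightforward to evaluate; a small helper (the function name is ours):

```python
import numpy as np

def wevn_db(h_true, h_est):
    """Weight Error Vector Norm of Eq. (30), in dB."""
    err = np.linalg.norm(h_true - h_est) ** 2
    return 10.0 * np.log10(err / np.linalg.norm(h_true) ** 2)

# Sanity check: a zero estimate gives 0 dB (no convergence yet).
h = np.random.randn(256)                # stand-in for the 256-tap echo path
print(wevn_db(h, np.zeros(256)))        # 0.0
```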

Computational Complexity

NLMS : $(2M + 3)$ RMPs per input sample (consistent with the table below)
F-NLMS : … RMPs per 1 block of M output samples

where M is the adaptive filter length, and the counts include the division operations.

Computational Complexity (II)

Complexity Ratio = F-NLMS RMPs / NLMS RMPs, per block of M samples:

M      NLMS       F-NLMS    Complexity Ratio
256    131840     31744     0.24
512    525824     68608     0.13
1024   2100224    141756    0.07

F-NLMS gives a considerable complexity reduction as compared to NLMS.
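The ratios in the table can be reproduced directly. The NLMS count per block is $(2M+3)M$, which matches all three NLMS entries exactly; the F-NLMS counts are taken from the table as given, since the lecture's per-block F-NLMS formula is not reproduced here:

```python
fnlms_rmps = {256: 31744, 512: 68608, 1024: 141756}   # from the table

for M, fn in fnlms_rmps.items():
    nlms = (2 * M + 3) * M                 # NLMS RMPs per block of M samples
    print(f"M={M:4d}: NLMS={nlms:8d}, F-NLMS={fn:6d}, ratio={fn / nlms:.2f}")
```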

Summary

- For a long adaptive filter length, F-NLMS gives a complexity reduction as compared to the time-domain method.
- F-LMS gives the same convergence speed as LMS (the convergence speed is independent of the block size).
- F-LMS has a tighter bound for stability as compared to LMS.
- F-LMS provides a more accurate gradient estimate (estimation accuracy depends on the block size) as compared to LMS.

Next week : Lecture 9 : Multi-rate Systems and Filter Banks
