
Integration of Linear Prediction Techniques with Kalman Filter Estimation

Faheem Khan 1, Sung Ho Cho 1 and Naeem Khan 2
1 Department of Electronics and Computer Engineering, Hanyang University, South Korea
[{faheemkhan, dragon}@hanyang.ac.kr]
2 Department of Electrical Engineering, University of Engineering & Technology Peshawar, Pakistan

Abstract
In this paper, missing data samples in the state estimation process are restored by employing linear prediction theory. Three different linear prediction techniques, namely the Normal Equation, the Levinson-Durbin Algorithm (LDA) and the Leroux-Gueguen Algorithm (LGA), are implemented. The Normal Equation method has high computational complexity. On the other hand, LDA is less computationally expensive because it does not require matrix inversion to compute the linear prediction coefficients (LPCs). However, LDA yields LPC values with a larger dynamic range. Alternatively, LGA avoids the dynamic range problem by exploiting the Schwartz inequality when computing the LPC values. It is concluded that LDA and LGA have lower computational complexity than the Normal Equation when integrated with the Kalman filter. The major contribution of this paper is to reduce the computational time required by the Normal Equation method by employing the LDA and LGA techniques.
Keywords: Kalman filter, Observation Loss, Open-Loop Estimation, Compensated Linear
Prediction Techniques, Optimization

1. Introduction

State estimation has been an interesting and active research area over the past decades due
to its important role in systems where direct access to the measured state of system is either
impossible or very difficult [1]. Kalman filter (KF) is an online recursive algorithm used to
estimate the system state with noise contaminated observations [2]. KF is a set of equations
that is used to estimate the state of a process in an efficient way to minimize the mean square
error [2]. Broadly speaking, the KF consists of two major steps: the time update step (also called the a priori estimate or prediction step) and the measurement update step (also called the a posteriori estimate or observation update). In simple words, a constant-gain steady-state Kalman filter is a recursive method for estimating the state variables of a linear time-invariant (LTI) system subjected to stochastic noises [3]. The Kalman filter accurately estimates the system state on the basis of noisy measurements; hence precise knowledge of the system dynamics is required. It relies on information about unmeasured stochastic inputs and noise-contaminated measurement data. The sensor readings are used to compute the minimum mean square error estimate of the system state [4][5][6].
In networked and distributed control systems, random communication packet losses may occur for several reasons, such as the limited spectrum of the channel, channel fading, interference, congestion of buffer registers, collisions and many other deficits [7]. The Kalman filter algorithm depends on the system dynamics, the information in the received signal and the unknown noise signal. The Kalman filter predicts the system state and then updates it on the basis of the measured observations [7]. In case of loss of observations, the conventional Kalman filter (CKF) fails to offer accurate state estimation. Alternatively, Open-Loop Estimation (OLE) is used to overcome this disadvantage. In the Open-Loop Kalman filtering scheme, only the prediction step is performed when observations are lost; OLE does not perform the update step for tuning the predicted signal because measurement data is not available [8]. Since OLE performs only the prediction step, it results in unbounded estimation error if the observation data is missing for a long period of time [7]. Estimation techniques are therefore required to produce acceptable state estimates with bounded estimation error in case of loss of observations. Some researchers have replaced the OLE technique with other methods such as the Zero Order Hold (ZOH) or First Order Hold technique. The ZOH method is described in [9][14]. In ZOH, only the last sample is stored and reused throughout the estimation process. This method also has certain disadvantages: if a longer data loss is encountered, employing only one sample at the update step will result in unbounded error, and ZOH also requires strict correlation among the data samples.
In this work, a novel estimation technique based on the Kalman filter is presented for the case where observations for the measurement update step are not available. The missing observation data is reproduced through linear prediction techniques. The basic notion of linear prediction is to estimate future data samples on the basis of past values of the input signal within a signal frame; the weights used to compute the linear combination are calculated by minimizing the mean square prediction error [10]. In internal prediction, the Linear Prediction Coefficients (LPCs) are computed from the selected data frame by applying the autocorrelation concept to the data window. In external prediction, the LPCs used to predict the lost data samples are computed from past samples of the signal [11]. The conventional Normal Equation method has been found to be computationally expensive [12]. Alternatively, the Levinson-Durbin Algorithm (LDA) considerably reduces this computational time by avoiding the large matrix inversions involved in the computation of the LPCs. However, LDA has the drawback of a larger dynamic range in the LPC values [12]. Another alternative is the Leroux-Gueguen Algorithm (LGA), which eliminates the dynamic range problem in a fixed-point environment by taking advantage of the Schwartz inequality in the computation of the LPCs [10]. Both LDA and LGA exploit the properties of the autocorrelation matrix, thereby decreasing the computational time compared to the Normal Equation method [10].
The rest of the paper is organized as follows. In Section 2 the effect of measurement data loss is discussed. Section 3 outlines existing solutions for compensating data loss in state estimation. Section 4 briefly introduces linear prediction theory. Section 5 describes the selection of the linear prediction filter order. The modified LP techniques, i.e. the Normal Equation, LDA and LGA, are discussed in Section 6. A numerical example is presented in Section 7 to show the effectiveness of the proposed method, followed by the conclusion.

2. Problem Statement
The Kalman filter depends on sensor readings to compute the minimum mean square error estimate of the system state. However, when output data is unavailable due to channel congestion, buffer overflow and/or sensor faults, the performance of the Kalman filter is considerably degraded [9]. Observation loss is a major issue in control and communication systems, and the study of measurement loss has been an active research topic in recent years [5]. The Kalman filter may face situations where measurement data is not available for the update step. The authors in [7] performed open-loop Kalman estimation in case of loss of observation (LOOB): when observation data is lost, the predicted samples are passed to the next iteration without performing the update step. Consider the following discrete-time LTI system [2]:

$$x_k = A x_{k-1} + B u_{k-1} + \omega_{k-1}$$
$$z_k = C x_k + \nu_k$$
$$k \in \mathbb{N} = \{0, 1, 2, 3, \dots\}$$

In the above equations, $x \in \mathbb{R}^n$, $u \in \mathbb{R}^l$ and $z \in \mathbb{R}^m$; $A \in \mathbb{R}^{n \times n}$ is the state transition matrix, $B \in \mathbb{R}^{n \times l}$ is the input matrix and $C \in \mathbb{R}^{m \times n}$ is the output matrix. $(x_0, \omega_k, \nu_k)$ are Gaussian, uncorrelated white noise sequences with means $(\bar{x}_0, 0, 0)$ and covariances $(P_0, Q_k, R_k)$ respectively. The estimation through the Kalman filter is summarized as follows [13].
Algorithm 1: Discrete-Time Kalman Estimation
1. Initialize: $\hat{x}_{0|0}, u_0, \omega_0, \nu_0, P_{0|0}$; set $k = 1$.
2. Prediction cycle:
$\hat{x}_{k+1|k} = A \hat{x}_{k|k} + B u_k$ ; state estimation
$P_{k+1|k} = A P_{k|k} A^T + Q_k$ ; error covariance
3. Time-step update: $k \leftarrow k + 1$
4. Sense measurements: $z_{k+1} = H x_{k+1} + \nu_{k+1}$ (here $H$ denotes the output matrix)
5. Innovation vector calculation: $r_{k+1} = z_{k+1} - H \hat{x}_{k+1|k}$
6. Then calculate the innovation covariance matrix: $S_{k+1} = H P_{k+1|k} H^T + R_{k+1}$
7. Now calculate the gain matrix: $K_{k+1} = P_{k+1|k} H^T S_{k+1}^{-1}$
8. Perform the update cycle:
$\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} r_{k+1}$ ; state estimation
$P_{k+1|k+1} = (I - K_{k+1} H) P_{k+1|k}$ ; error covariance
9. Go back to step 2.
From the above algorithm it is clear that the update step depends on the measurements. In case of unavailability of the output data $z_k$, the KF may not produce an optimal estimate. For this reason we use three different linear prediction methods to predict the missing output observations for the update step.
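To make the recursion concrete, the following is a minimal Python sketch of one cycle of Algorithm 1. The function name and the use of NumPy are our own illustrative choices, not part of the original formulation; any conformable matrices with positive-definite $Q$ and $R$ may be supplied.

```python
import numpy as np

def kalman_step(x, P, z, A, B, u, H, Q, R):
    """One prediction/update cycle of Algorithm 1 (steps 2-8)."""
    # Prediction cycle: a priori state estimate and error covariance
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Innovation vector and innovation covariance
    r = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    # Kalman gain matrix
    K = P_pred @ H.T @ np.linalg.inv(S)
    # Measurement update: a posteriori state estimate and error covariance
    x_upd = x_pred + K @ r
    P_upd = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_upd, P_upd
```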

3. Existing Solutions
3.1 Open-Loop Kalman Filtering
In the literature, the most prevalent method used in case of output data loss is Open-Loop Kalman filtering. In Open-Loop Kalman estimation the measurement data is ignored and the Kalman filter gain matrix $K_k$ is set to the zero matrix, which means that no update step is performed. As no measurement data arrives, no Kalman gain matrix is calculated and no state or covariance update is performed; only the prediction step is carried out [14]. OLE is a fast estimation technique with a very simple structure. Despite these advantages, OLE suffers from certain drawbacks, which are briefly listed below.
- Open-Loop Kalman estimation may diverge if a sufficient number of data samples is lost.
- When observation data becomes available again after a data loss period, oscillations and/or sharp spikes can be observed in the estimated parameters [14].
- The steady-state values of the state and covariance are not regained immediately after recovery from data loss; it takes too long to approach the steady state.
OLE is summarized in the following algorithm.
Algorithm 2: Open-Loop Kalman Filtering
1. Initialize: $\hat{x}_{0|0}, u_0, \omega_0, \nu_0, P_{0|0}$; set $k = 1$.
2. Prediction cycle:
$\hat{x}_{k+1|k} = A \hat{x}_{k|k} + B u_k$ ; state estimation
$P_{k+1|k} = A P_{k|k} A^T + Q_k$ ; error covariance
3. Time-step update: $k \leftarrow k + 1$
4. Sense measurements: $z_{k+1}$ is not available.
5. There is no residual innovation and hence the Kalman gain is not calculated:
$\hat{x}_{k+1|k+1} \leftarrow \hat{x}_{k+1|k}$ ; state estimation
$P_{k+1|k+1} \leftarrow P_{k+1|k}$ ; error covariance
6. Return to step 2.
In case of loss of observations for a long period of time, a sub-optimal estimation technique is required to provide robust estimation performance with bounded estimation error covariance.
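A sketch of the corresponding open-loop branch is given below, under the same assumptions as the previous snippet: when $z_{k+1}$ is unavailable, only the prediction cycle runs and the a priori quantities are carried forward unchanged.

```python
def open_loop_step(x, P, A, B, u, Q):
    """One iteration of Algorithm 2: no innovation, no gain;
    the a priori estimate serves as the a posteriori estimate."""
    x_pred = A @ x + B @ u          # state prediction
    P_pred = A @ P @ A.T + Q        # covariance prediction
    return x_pred, P_pred           # reused as x_{k+1|k+1}, P_{k+1|k+1}
```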
3.2 Zero Order Hold Technique

Fig. 1 Open-Loop Kalman filtering

Due to the various disadvantages associated with Open-Loop estimation, many researchers have tried to replace OLE with other techniques [14]. The Zero Order Hold technique is explained in Fig. 1. In this method only the most recent sample of the signal is stored and reused during the whole estimation process. Since a single data sample is employed at the measurement update stage, the solution may not be optimal for longer data losses. ZOH also requires strict correlation among the data samples of the signal. Due to these limitations, the technique effectively amounts to random sampling of the signal. Given the above mentioned disadvantages of the existing methods, we propose a new scheme in which linear prediction techniques are integrated into Kalman estimation to compensate for the loss of observations during the update cycle of the Kalman filter. In the following sections, linear prediction techniques are discussed briefly.
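For comparison, a hypothetical sketch of the ZOH compensation is shown below: the last received observation is simply held and fed to the standard update step. The `kalman_step` helper from Section 2 is reused; this is our own illustration, not code from the paper.

```python
def zoh_step(x, P, z_last, A, B, u, H, Q, R):
    """Zero Order Hold compensation: reuse the most recently received
    observation z_last in place of the missing measurement."""
    return kalman_step(x, P, z_last, A, B, u, H, Q, R)
```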

4. Linear Prediction Theory

Fig. 2 Linear Prediction Techniques

4.1 Linear Prediction


The basic idea of linear prediction is that a signal is modeled as a linear combination of its present and past samples. A major portion of work on system modeling has been done in the field of control systems under the subjects of system identification and estimation. The weights used to compute the linear combination are calculated by minimizing the mean-square prediction error [11]. Linear prediction is basically an identification method in which the AR (autoregressive) parameters are found from the observed signal [10]; it is assumed that the signal to be predicted is an AR signal. The predicted signal is given by:
$$\hat{z}[n] = -\sum_{i=1}^{p} \alpha_i z[n-i] \qquad (1)$$

where $\hat{z}$ and $z$ represent the predicted and input signals respectively, and $\alpha_i$ is the $i$th prediction coefficient. Linear prediction theory works on the principle of minimizing the mean square error; based on this minimization, a mathematical expression is derived for computing the linear prediction coefficients. The prediction error is the difference between the original and predicted signals:

$$e[n] = z[n] - \hat{z}[n] \qquad (2)$$
The two major classes of linear prediction are external and internal linear prediction as
shown in Fig. 2. The detailed explanation of internal and external linear prediction is given in
the following sub-sections.
4.1.1 Internal Linear Prediction

Fig. 3 Internal Linear Prediction Concept

In this type of prediction, the prediction coefficients for a certain data frame are computed from the data inside that frame. The LPCs in internal prediction capture the statistics of the data frame accurately. The data frame may be static or dynamic. The advantage of a longer frame size is its low computational complexity, because the LPCs are calculated, and hence transmitted, less often. However, the coding delay for longer frames may grow larger, as the system needs to wait for a longer period of time to collect the required samples [12]. Additionally, the LPCs of a long frame may fail to achieve good prediction gain when the statistics of a non-stationary system are changing. Alternatively, a shorter data frame requires more frequent LPC updates, which results in a more accurate portrayal of the input signal statistics compared to a longer data frame. Mostly, internal prediction techniques rely on non-recursive autocorrelation methods for estimation, which use a window of finite length to obtain the signal samples. Internal linear prediction does not really predict the signal; rather, it computes the coefficients of the input signal to be transmitted. Transmitting the LPCs of a signal requires less bandwidth and storage than the original signal, thus saving useful bandwidth and memory space [10].
As depicted in Fig. 3, the sliding window concept is used in this work. The stationary window approach differs from the sliding window in its update process: in the sliding window approach the window is updated in the backward direction, whereas in the stationary window concept the same window is employed for the computation of all sample values of the signal.
4.1.2 External Prediction

Fig. 4 External Linear Prediction Concept

The LPCs derived in external prediction are used in a future data frame; i.e., the coefficients related to a frame are not computed from the data samples located inside that frame; instead, the LPCs are derived from past samples of the signal [12]. External prediction is most useful where the statistical properties of the signal change slowly with time. The frame size needs to be long enough that, in case of data loss, the signal can be recovered. Fig. 4 illustrates the sliding window concept for external linear prediction.
4.2 Prediction Gain
Prediction gain (PG) is yet another important parameter in LP theory. PG can be calculated from the equation given below [10]:

$$PG = 10 \log_{10} \left( \frac{\sigma_s^2}{\sigma_e^2} \right) = 10 \log_{10} \left( \frac{E\{s^2[n]\}}{E\{e^2[n]\}} \right) \qquad (3)$$

Equation (3) defines PG as the ratio of the input signal variance to the variance of the prediction error, in units of decibels. PG is used to measure the performance of any predictor: a higher prediction gain implies a lower prediction error, so a predictor with a higher PG value is preferred to one with a lower PG value. An optimum frame size of the LP filter for maximum PG may be found by plotting PG as a function of the LP filter order; a point of saturation is reached where a further increase in frame size does not affect the prediction gain significantly. If the expectations in Equation (3) are replaced by summations, the prediction gain can be defined by the following equation:
$$PG[m] = 10 \log_{10} \left( \frac{\sum_{n=m-N+1}^{m} s^2[n]}{\sum_{n=m-N+1}^{m} e^2[n]} \right) \qquad (4)$$

In the above equation the prediction error is defined as

$$e[n] = s[n] - \hat{s}[n] = s[n] + \sum_{i=1}^{p} a_i[m]\, s[n-i]; \quad n = m-N+1, \dots, m \qquad (5)$$

In the above equations, the LPCs for internal prediction are calculated inside the interval $[m-N+1, m]$, while for external linear prediction they are calculated from samples with $n < m-N+1$. In Equation (4) the prediction gain is defined as a function of the time variable $m$.
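A one-function sketch of Equations (3) and (4) follows; this is our illustration, with the averaging window left to the caller.

```python
import numpy as np

def prediction_gain(s, e):
    """Prediction gain in dB, Eq. (3): signal power over error power,
    estimated by replacing the expectations with sample averages, Eq. (4)."""
    return 10.0 * np.log10(np.mean(np.square(s)) / np.mean(np.square(e)))
```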

5. Linear Prediction Filter Order


Optimization is a key concern in signal processing algorithms such as the Normal Equation, LDA and LGA. Since the prediction error does not always decrease with increasing window size in linear prediction, it is important to find a sub-optimal value of the frame size for the LPC computation [14]. A constraint-based method is given in Algorithm 3 to decide sub-optimal values of the LP filter order (LPFO) for the Normal Equation, LDA and LGA techniques.

Algorithm 3: LP Filter Order Selection
1. Compute $e_m(k-1) = \max(e_i)$, where $e_i = x_i - \hat{x}_i$, $i = \{1, 2, \dots, k-1\}$.
2. Initialize $j = 1$, then calculate $R_z$ and $r_z$.
3. Recursion: for $j = 2, \dots, p$, obtain $\hat{z}$ from Equation (1).
4. On the basis of these compensated observations, calculate the measurement-update state estimate ${}^{c}\hat{x}_k$ and compute $e_j(k) = x_k - {}^{c}\hat{x}_k$.
Check: if $e_j(k) \le e_m(k-1)$:
Yes: $n \leftarrow j$ is the order of the LP filter.
Else: $j \leftarrow j + 1$; go back to step 3.
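A hypothetical Python sketch of this selection loop is given below. The callable `compensated_estimate(j)` stands for the whole chain of computing order-j LPCs, reconstructing the missing observations and running the KF update; it is assumed here, not defined in the paper.

```python
import numpy as np

def select_lp_order(x_true, compensated_estimate, e_max, p_max=20):
    """Algorithm 3 sketch: raise the LP filter order j until the
    compensated estimation error falls below the pre-loss maximum e_max."""
    for j in range(2, p_max + 1):
        x_hat = compensated_estimate(j)        # KF update with order-j LPCs
        e_j = np.max(np.abs(x_true - x_hat))   # e_j(k) = x_k - c_x_hat_k
        if e_j <= e_max:
            return j                           # sub-optimal LP filter order
    return p_max
```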

6. Linear Prediction Techniques


6.1 Normal Equation
In a system identification application, the FIR filter output provides an estimate of the present output sample of an unknown system as a sum of linearly weighted past and/or future samples of the input to the unknown system. The derivation of the Normal Equation is based on minimization of the mean square error. Conventionally, several methods have been used to solve the Normal Equations; the most common are the covariance method, which is used for non-stationary processes, and the autocorrelation method, which is appropriate for stationary processes. In the Normal Equation method the predicted signal is given by [15]:
$$\hat{z}[n] = -\sum_{i=1}^{p} \alpha_i z[n-i] \qquad (6)$$

The mathematical steps for minimizing the mean square error between the actual and predicted signals are discussed here. The cost function is defined as [8]:

$$J = E\{e^2[n]\} = E\left\{ \left( z[n] + \sum_{i=1}^{p} \alpha_i z[n-i] \right)^2 \right\} \qquad (7)$$

$J$ is the cost function, which is precisely a second-order function of the LPCs. To obtain the optimal values of the LPCs, the cost function $J$ is differentiated with respect to $\alpha_k$ and equated to zero [8]:

$$\frac{\partial J}{\partial \alpha_k} = 2 E\left\{ \left( z[n] + \sum_{i=1}^{p} \alpha_i z[n-i] \right) z[n-k] \right\} = 0 \qquad (8)$$

Rearranging Equation (8) gives:


$$E\{z[n] z[n-k]\} + \sum_{i=1}^{p} \alpha_i E\{z[n-i] z[n-k]\} = 0 \qquad (9)$$

Now define

$$R_z[i-k] = E\{z[n-i] z[n-k]\}, \quad k = 1, 2, 3, \dots, p \qquad (10)$$

This leads to Equation (11):

$$\sum_{i=1}^{p} \alpha_i R_z[i-k] = -R_z[k] \qquad (11)$$

The autocorrelation matrix $R_z$ is given by:

$$R_z = \begin{bmatrix} R_z[0] & R_z[1] & R_z[2] & \cdots & R_z[p-1] \\ R_z[1] & R_z[0] & R_z[1] & \cdots & R_z[p-2] \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ R_z[p-1] & R_z[p-2] & R_z[p-3] & \cdots & R_z[0] \end{bmatrix}$$

From Equations (10) and (11):

$$R_z \boldsymbol{\alpha} = -\mathbf{r}_z \qquad (12)$$

The vector $\mathbf{r}_z$ represents the transpose of the array formed by the elements $R_z[0]$ to $R_z[p-1]$ of the autocorrelation array. The matrix $R_z$ in Equation (12) is a Toeplitz matrix (a matrix in which all elements along each diagonal are the same [11]). This property allows the linear equations to be solved by the Levinson-Durbin algorithm or the Schur algorithm [14].
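As an illustration, Equation (12) can be solved directly, as sketched below; the explicit solve costs $O(p^3)$, which is precisely the overhead that LDA and LGA avoid. The biased autocorrelation estimator used here is a common choice and is our assumption.

```python
import numpy as np
from scipy.linalg import toeplitz

def lpc_normal_equation(z, p):
    """Solve the Normal Equation (12), R_z * alpha = -r_z, directly."""
    N = len(z)
    # Biased autocorrelation estimates R[0..p]
    R = np.array([np.dot(z[:N - k], z[k:]) / N for k in range(p + 1)])
    Rz = toeplitz(R[:p])               # p x p Toeplitz autocorrelation matrix
    rz = R[1:p + 1]                    # right-hand-side vector
    return np.linalg.solve(Rz, -rz)    # LPCs alpha_1..alpha_p
```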
6.2 Levinson-Durbin Algorithm (Modified)
The Levinson-Durbin Algorithm (LDA) is a recursive prediction technique in which the parameter coefficients of an autoregressive process are computed without high computational cost [14]. LDA takes advantage of the Toeplitz symmetry of the autocorrelation matrix and thereby decreases the computational time. The autocorrelation array in Equation (12) is used as the starting input of the LDA. The coefficients computed in the LDA technique are, in fact, Reflection Coefficients (RCs) [10]. The RCs obtained from this algorithm have a one-to-one correspondence with the LPCs because the input signal is Wide Sense Stationary (WSS). Let W and Q be the number of iterations and the frame size respectively. The LDA can then be described as follows:
Algorithm 4: Constrained Levinson-Durbin Algorithm
1. Set the loop for varying the window size: for $W = 10, \dots, 90$.
2. Error threshold: set a threshold limit $e_{th}$ for the error, to keep the prediction error bounded.
3. Initialization: at the first iteration ($l = 0$), set $J_0 = R[0]$. The mean square prediction error is initialized to the first element of the autocorrelation window.
4. Recursion: for $l = 1, 2, 3, \dots, Q$.
5. Compute the value of the $l$th RC as follows [12]:

$$k_l = -\frac{1}{J_{l-1}} \left( R[l] + \sum_{i=1}^{l-1} \alpha_i^{(l-1)} R[l-i] \right)$$

Stop if $l = Q$.
6. Now calculate the LPCs for the $l$th order predictor as in [12]:

$$\alpha_l^{(l)} = k_l, \qquad \alpha_i^{(l)} = \alpha_i^{(l-1)} + k_l\, \alpha_{l-i}^{(l-1)}, \quad i = 1, 2, 3, \dots, l-1$$

Stop when $l = Q$.
7. Calculate the value of the minimum mean square prediction error as in [1]:

$$J_l = J_{l-1} \left( 1 - k_l^2 \right)$$

8. Compare with the threshold error: if $J_l \le e_{th}$, stop the loop; otherwise set $l = l + 1$ and go to step 5.
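A plain (unconstrained) sketch of the Levinson-Durbin recursion in Python follows; the thresholding and window loop of Algorithm 4 are omitted for brevity, and the sign conventions match Equations (1) and (11).

```python
import numpy as np

def levinson_durbin(R, p):
    """LPCs and reflection coefficients from autocorrelations R[0..p],
    with no matrix inversion."""
    a = np.zeros(p + 1)                # a[1..l] hold the current LPCs
    k = np.zeros(p + 1)                # reflection coefficients
    J = R[0]                           # J_0: initial MS prediction error
    for l in range(1, p + 1):
        # k_l = -(R[l] + sum_i a_i R[l-i]) / J_{l-1}
        k[l] = -(R[l] + np.dot(a[1:l], R[l - 1:0:-1])) / J
        a_prev = a.copy()
        a[l] = k[l]
        for i in range(1, l):
            a[i] = a_prev[i] + k[l] * a_prev[l - i]
        J *= (1.0 - k[l] ** 2)         # J_l = J_{l-1}(1 - k_l^2)
    return a[1:], k[1:], J
```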
6.3 Leroux-Gueguen Algorithm (Modified)
The main disadvantage of the LDA technique is that it results in a larger dynamic range of the LPC values. Alternatively, another linear prediction method, the Leroux-Gueguen Algorithm (LGA), overcomes the dynamic range problem in a fixed-point environment with the help of the Schwartz inequality [16]. Due to its simple structure and straightforward computations, LGA is integrated with the Kalman filtering process in order to restore the missing observations required in the measurement update step of the Kalman filter. It computes Reflection Coefficients (RCs) from the autocorrelation matrix without dealing with the LPCs directly. The computational time required by LGA is also smaller than that of the Normal Equation. Let W be the number of iterations and Q the frame size; the LGA technique can then be described as follows:
Algorithm 5: Compensated Leroux-Gueguen Algorithm
1. Set the loop for varying the frame size: for $W = 10, \dots, 90$.
2. Threshold error: set the value of the threshold error $e_{th}$.
3. Initialization: for $l = 0$, set

$$\epsilon^{(0)}[k] = R[k], \quad k = -Q+1, \dots, 0, \dots, Q$$

4. Recursion: for $l = 1, 2, 3, \dots, Q$, find the $l$th Reflection Coefficient [16]:

$$k_l = \frac{\epsilon^{(l-1)}[l]}{\epsilon^{(l-1)}[0]}$$

Stop when $l = Q$.
5. Compute the values of the epsilon parameters [8]:

$$\epsilon^{(l)}[k] = \epsilon^{(l-1)}[k] - k_l\, \epsilon^{(l-1)}[l-k], \quad k = -Q+l+1, \dots, 0, \ l+1, \dots, Q$$

Set $l = l + 1$; return to step 4.
6. Compute the error $e$ as follows:

$$e[n] = z[n] - \hat{z}[n]$$

7. Check if $e \le e_{th}$. No: increment $W = W + 1$. Yes: stop the loop and save all parameters.
The values of the epsilon parameters are used to calculate the LPC coefficients. The LGA technique shows better performance in a fixed-point environment, as its intermediate variables have bounded values. A limitation of LGA is that it returns only RCs, which is not a major concern if the filter is in lattice form [10]. The LGA technique followed by conversion of RCs to LPCs does not result in substantial computational savings compared to LDA. Due to these factors the LDA technique is more prevalent than the Normal Equation and LGA techniques. These prediction schemes, i.e. the Normal Equation, LDA and LGA, are implemented in Kalman estimation, and the simulation results are presented in the following section.
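For completeness, a sketch of the core Leroux-Gueguen recursion (steps 3-5 of Algorithm 5, without the outer frame-size loop) is given below; the array-offset bookkeeping is our own, and the sign convention for $k_l$ follows the text above.

```python
import numpy as np

def leroux_gueguen(R, Q):
    """Reflection coefficients from autocorrelations R[0..Q]. All epsilon
    variables are bounded by R[0] (Schwartz inequality), which makes the
    recursion well suited to fixed-point arithmetic."""
    off = Q - 1                              # eps[off + k] stores eps[k]
    eps = np.array([R[abs(k)] for k in range(-Q + 1, Q + 1)], dtype=float)
    k_refl = np.zeros(Q)
    for l in range(1, Q + 1):
        k_refl[l - 1] = eps[off + l] / eps[off]      # k_l
        prev = eps.copy()
        for k in range(-Q + l + 1, Q + 1):
            eps[off + k] = prev[off + k] - k_refl[l - 1] * prev[off + l - k]
    return k_refl
```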

7. Numerical Simulation Results and Analysis


7.1 Mass Spring Damper (MSD) System Model
The example employed for the evaluation of the above analysis is the mass-spring-damper (MSD) system. The dynamics of the MSD system are described by the following equations:

$$\dot{x}(t) = A x(t) + B u(t) + L w(t)$$
$$z_k = \gamma_k \left( C x_k + v_k \right)$$

The state vector

$$x^T(t) = [\, x_1(t) \;\; x_2(t) \;\; \dot{x}_1(t) \;\; \dot{x}_2(t) \,]$$

consists of the displacements and velocities of the two bodies in the proposed system.

$$A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\dfrac{k_1}{m_1} & \dfrac{k_1}{m_1} & -\dfrac{b_1}{m_1} & \dfrac{b_1}{m_1} \\ \dfrac{k_1}{m_2} & -\dfrac{k_1 + k_2}{m_2} & \dfrac{b_1}{m_2} & -\dfrac{b_1 + b_2}{m_2} \end{bmatrix}$$

$$B^T = \left[\, 0 \;\; 0 \;\; \dfrac{1}{m_1} \;\; 0 \,\right], \quad C = [\, 0 \;\; 1 \;\; 0 \;\; 0 \,], \quad L = [\, 0 \;\; 0 \;\; 0 \;\; 3 \,]$$

The values of the parameters are $m_1 = m_2 = 1$, $k_1 = 1$, $k_2 = 0.15$, $b_1 = b_2 = 0.1$, and the sampling time is $T_s = 1\ \mathrm{ms}$. The MSD plant disturbance and sensor noise dynamics are given as

$$E\{w(t)\} = 0, \quad E\{v(t)\} = 0$$

Substituting the values of the given parameters, the above matrices become:

$$A = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -1 & 1 & -0.1 & 0.1 \\ 1 & -1.15 & 0.1 & -0.2 \end{bmatrix}, \qquad B^T = [\, 0 \;\; 0 \;\; 1 \;\; 0 \,]$$
In the next section, the proposed Compensated Closed-Loop KF (CCLKF) algorithms are implemented on the MSD system and the results are presented. In the Closed-Loop KF, the lost observation samples are predicted using linear prediction schemes; hence the estimation error is reduced compared to the existing Open-Loop method.
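A sketch of how the discrete-time model used by the filter can be obtained from the continuous-time MSD matrices is given below. The paper does not state the discretization method, so the zero-order-hold discretization via SciPy is our assumption.

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time MSD matrices for m1 = m2 = 1, k1 = 1, k2 = 0.15, b1 = b2 = 0.1
A = np.array([[ 0.0,   0.0,   1.0,  0.0],
              [ 0.0,   0.0,   0.0,  1.0],
              [-1.0,   1.0,  -0.1,  0.1],
              [ 1.0,  -1.15,  0.1, -0.2]])
B = np.array([[0.0], [0.0], [1.0], [0.0]])
C = np.array([[0.0, 1.0, 0.0, 0.0]])
D = np.zeros((1, 1))
Ts = 0.001                                   # sampling time, 1 ms

# Zero-order-hold discretization of the plant for the discrete-time KF
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts)
```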
7.2 Simulation Results
In the simulation we used a sampling period of $T_s = 0.001\ \mathrm{s}$. In this section, the simulation results obtained for OLKF and CCLKF under data loss are compared. In case of unavailability of output data, OLKF does not perform the measurement update step, so the filtering is reduced to the prediction step only. In CCLKF, however, compensated data is obtained using the Normal Equation, LDA and LGA methods. The error is significantly reduced by employing the proposed techniques, as shown in Table 1. In CCLKF the missing observation samples are predicted and then used in the update stage of the Kalman filter, which reduces the error in the estimated signal. The computational time required by CCLKF is larger than that of OLKF because of the additional processing steps required for predicting the lost samples by LGA in the proposed method. The computational time of CCLKF also increases as the number of lost samples increases, again due to the additional processing required to predict the lost observation samples. The graph in Fig. 5 consists of three curves: the solid line, the dotted-dash line and the dashed line represent the actual signal, the signal predicted by the Compensated CLKF, and the conventional OLKF signal respectively. Data loss occurs from sample 2220 to sample 2650. Fig. 5 shows that the signal predicted by OLKF deviates more from the original signal than that of the CLKF; the error of our proposed method has a smaller magnitude than the error signal of the conventional OLKF method. It should be noted that Fig. 5 represents only one Compensated CLKF technique, i.e. LGA; the results for the other CLKF techniques, i.e. the Normal Equation and LDA, are presented in detail in Table 1. The comparison of computational time, estimation error and prediction gain is shown in Table 1. It can be easily seen that the Open-Loop KF requires less computational time because during the loss period it avoids the measurement update step. However, the absolute error generated by OLKF is considerably larger than that of the other schemes. On the other hand, all LPC schemes (Normal Equation, Levinson-Durbin algorithm and Leroux-Gueguen algorithm) are computationally more expensive but outperform OLKF: the errors generated by these schemes are significantly lower than the error produced by OLKF.

Fig. 5 Comparison of OLKF with CCLKF (four panels: first, second, third and fourth estimated states over samples 0-4000; each panel plots the actual signal, the estimation by CLKF and the estimation by OLKF)

The results for estimation error, computational time and prediction gain are summarized in Table 1. The table shows the estimation error, time and gain for the second state of the system during the observation loss period, i.e. samples 2220-2650. Since the prediction gain concerns only the linear prediction techniques, it is not applicable to the conventional Open-Loop method.
Table 1. Comparison of various parameters of different estimation techniques for the second state of the mass-spring-damper system

                               Open-Loop KF   KF with Normal   KF with Levinson-   KF with Leroux-
                                              Equation         Durbin Algorithm    Gueguen Algorithm
Estimation Error               1.2130         0.0621           0.05327             0.04751
Computational Time (seconds)   0.03721        0.1834           0.0612              0.0702
Prediction Gain (dB)           N/A            14.2328          17.5401             18.8125

From Table 1 it is clear that the estimation error of the proposed algorithms is considerably lower than that of OLKF, while the time taken by the KF integrated with the Normal Equation is the highest (0.1834 s). The difference becomes more prominent as the matrix size is increased. On the other hand, the KF integrated with the LDA and LGA techniques requires less computational time than the Normal Equation because these algorithms do not involve any matrix inversion in the calculation of the LPC values. The prediction gain of LGA is the highest among all the methods, so it is preferred the most.

8. Conclusion
In this paper the loss of observations in Kalman filtering is studied and new techniques for recovery of the lost observations are discussed. The linear prediction techniques implemented in state estimation outperform the existing methods in reducing the estimation error in case of loss of observations. The KF integrated with the Normal Equation method takes considerable time to compute the coefficients, so alternative techniques, i.e. the Levinson-Durbin Algorithm and the Leroux-Gueguen Algorithm, are presented. LDA and LGA have lower computational complexity than the Normal Equation method because they avoid matrix inversion in the computation of the LPCs. The results show that LGA is better than the Normal Equation and LDA techniques due to its lower computational complexity as well as its bounded LPC coefficients.

References
[1] Dochain, D. (2003). "State and parameter estimation in chemical and biochemical processes: a tutorial." Journal of Process Control 13(8): 801-818.
[2] Welch, G. and G. Bishop (1995). An Introduction to the Kalman Filter.
[3] Yan, R., et al. (2010). "Combining Adaptive Filtering and IF Flows to Detect DDoS Attacks within a Router." KSII Transactions on Internet & Information Systems 4(3).
[4] Kieu-Xuan, T. and I. Koo (2012). "Cooperative Spectrum Sensing using Kalman Filter based Adaptive Fuzzy System for Cognitive Radio Networks." KSII Transactions on Internet & Information Systems 6(1).
[5] Khan, N., et al. (2013). Implementation of linear prediction techniques in state estimation. Applied Sciences and Technology (IBCAST), 2013 10th International Bhurban Conference on, IEEE.
[6] Zhao, N. and H. Sun (2011). "Robust Power Control for Cognitive Radio in Spectrum Underlay Networks." KSII Transactions on Internet & Information Systems 5(7).
[7] Shi, Y. and H. Fang (2010). "Kalman filter-based identification for systems with randomly missing measurements in a network environment." International Journal of Control 83(3): 538-551.
[8] Sinopoli, B., et al. (2004). "Kalman filtering with intermittent observations." Automatic Control, IEEE Transactions on 49(9): 1453-1464.
[9] Fang, H., et al. (2010). "Genetic adaptive state estimation with missing input/output data." Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 224(5): 611-617.
[10] Chu, W. C. (2004). Speech Coding Algorithms: Foundation and Evolution of Standardized Coders, John Wiley & Sons.
[11] Makhoul, J. (1975). "Linear prediction: A tutorial review." Proceedings of the IEEE 63(4): 561-580.
[12] Khan, F., et al. (2013). On the optimal frame size of linear prediction techniques. Circuits, Power and Computing Technologies (ICCPCT), 2013 International Conference on, IEEE.
[13] Micheli, M. (2001). Random sampling of a continuous-time stochastic dynamical system: Analysis, state estimation, and applications, Citeseer.
[14] Khan, N. (2011). Linear Prediction Approaches for Compensation of Missing Measurements in Kalman Filtering. PhD thesis, University of Leicester, December 2011.
[15] Marple Jr, S. L. (1982). "Fast algorithms for linear prediction and system identification filters with linear phase." Acoustics, Speech and Signal Processing, IEEE Transactions on 30(6): 942-953.
[16] Le Roux, J. and C. Gueguen (1977). "A fixed point computation of partial correlation coefficients." IEEE Transactions on Acoustics, Speech and Signal Processing 25(3): 257-259.

Faheem Khan received his B.Sc. degree in Electrical Engineering from the University of Engineering & Technology, Peshawar, Pakistan in 2010. He worked as a lecturer from September 2011 to August 2013 in the Department of Electrical Engineering, University of Engineering & Technology Peshawar, Pakistan. He has published various reputed conference papers. He is currently pursuing his Ph.D. degree in the Department of Electronics and Computer Engineering, Hanyang University, South Korea. His research areas include state estimation, wireless communication and UWB radar technologies.

Sung Ho Cho graduated from the Department of Electronics Engineering, Hanyang University, Seoul, Korea, in 1982 and completed his Ph.D. degree in the Department of Electrical & Computer Engineering at the University of Utah, USA, in 1989. He worked at the Electronics & Telecommunications Research Institute (ETRI), Korea, as a senior researcher for three years. Professor Cho is currently pursuing pragmatic research on design methodologies. His research areas include wireless technologies, digital signal processing, embedded systems, and networking protocol technologies. He has more than 200 publications.

Naeem Khan is working as an assistant professor in the Electrical Department, University of Engineering & Technology Peshawar, Pakistan. He received his B.E. degree from the Department of Electrical and Electronics Engineering, University of Engineering & Technology Peshawar, Pakistan in 2003 and completed his Ph.D. degree in control and communication systems from Leicester University, U.K. in 2012. He has published several journal and conference papers. His areas of interest are robust control, state estimation under intermittent observations, and OFDM channel estimation.
