
Guidance Filter Fundamentals

Neil F. Palumbo, Gregg A. Harrison, Ross A. Blauwkamp,


and Jeffrey K. Marquart

When designing missile guidance laws, all of the states necessary to mechanize the implementation
are assumed to be directly available for feedback to the
guidance law and uncorrupted by noise. In practice, however, this is not the case.
The separation theorem states that the solution to this problem separates into the
optimal deterministic controller driven by the output of an optimal state estimator.
Thus, this article serves as a companion to our other article in this issue, “Modern
Homing Missile Guidance Theory and Techniques,” wherein optimal guidance laws
are discussed and the aforementioned assumptions hold. Here, we briefly discuss the
general nonlinear filtering problem and then turn our focus to the linear and extended
Kalman filtering approaches; both are popular filtering methodologies for homing
guidance applications.

INTRODUCTION
Our companion article in this issue, “Modern Homing Missile Guidance Theory and Techniques,” discusses linear-quadratic optimal control theory as it is applied to the derivation of a number of different homing guidance laws. Regardless of the specific structure of the guidance law, e.g., proportional navigation (PN) versus the “optimal guidance law,” all of the states necessary to mechanize the implementation are assumed to be (directly) available for feedback and uncorrupted by noise. We refer to this case as the “perfect state information problem,” and the resulting linear-quadratic optimal controller is deterministic. For example, consider the Cartesian version of PN, derived in the abovementioned companion article and repeated below for convenience:

u_PN(t) = (3 / t_go²) [x₁(t) + x₂(t) t_go] .   (1)

Examining Eq. 1, and referring to Fig. 1, the states of the PN guidance law are x₁(t) ≜ r_Ty − r_My,
60 JOHNS HOPKINS APL TECHNICAL DIGEST, VOLUME 29, NUMBER 1 (2010)


x₂(t) ≜ v_Ty − v_My (that is, components of relative position and relative velocity perpendicular to the reference x axis shown in Fig. 1). Recall from the companion article that the relative position “measurement” is really a pseudo-measurement composed of noisy line-of-sight (LOS) angle and relative range measurements. In addition to relative position, however, this (Cartesian) form of the deterministic PN controller also requires relative velocity and time-to-go, both of which are not (usually) directly available quantities. Hence, in words, the relative position pseudo-measurement must be filtered to mitigate noise effects, and a relative velocity state must be derived (estimated) from the noisy relative position pseudo-measurement (we dealt with how to obtain time-to-go in the companion article mentioned above). Consequently, a critical question to ask is: Will deterministic linear optimal control laws (such as those derived by using the techniques discussed in our companion article in this issue) still produce optimal results given estimated quantities derived from noisy measurements? Fortunately, the separation theorem states that the solution to this problem separates into the optimal deterministic controller driven by the output of an optimal state estimator.1–6 In this article, we will introduce the optimal filtering concepts necessary to meet these needs.

Figure 1.   Planar engagement geometry. The planar intercept problem is illustrated along with most of the angular and Cartesian quantities necessary to derive modern guidance laws. The x axis represents downrange while the y/z axis can represent either crossrange or altitude. A flat-Earth model is assumed with an inertial coordinate system that is fixed to the surface of the Earth. The positions of the missile (M) and target (T) are shown with respect to the origin (O) of the coordinate system. Differentiation of the target–missile relative position vector yields relative velocity; double differentiation yields relative acceleration. (Symbols: γ_M, γ_T = missile and target flight path angles; λ = LOS angle; L = lead angle; R = range to target; r_M, r_T = missile and target inertial position vectors; v_M, v_T = missile and target velocity vectors; a_M = missile acceleration, normal to the LOS; a_T = target acceleration, normal to v_T; r_x = relative position x (r_Tx − r_Mx); r_y = relative position y (r_Ty − r_My).)

In general terms, the purpose of filtering is to develop estimates of certain states of the system, given a set of noisy measurements that contain information about the states to be estimated. In many instances, one wants to perform this estimation process in some kind of “optimal” fashion. Depending on the assumptions made about the dynamic behavior of the states to be estimated, the statistics of the (noisy) measurements that are taken, and how optimality is defined, different types of filter structures (sometimes referred to as “observers”) and equations can be developed. In this article, Kalman filtering is emphasized, but we first provide some brief general comments about optimal filtering and the more general (and implementationally complex) Bayesian filter.

BAYESIAN FILTERING

Bayesian filtering is a formulation of the estimation problem that makes no assumptions about the nature (e.g., linear versus nonlinear) of the dynamic evolution of the states to be estimated, the structure of the uncertainties involved in the state evolution, or the statistics of the noisy measurements used to derive the state estimates. It does assume, however, that models of the state evolution (including uncertainty) and of the measurement-noise distribution are available.

In the subsequent discussions on filtering, discrete-time models of the process and measurements will be the preferred representation. Discrete-time processes may arise in one of two ways: (i) the sequence of events takes place in discrete steps or (ii) the continuous-time process of interest is sampled at discrete times. For our purposes, both options come into play. For example, a radar system may provide measurements at discrete (perhaps unequally spaced) intervals. In addition, the filtering algorithm is implemented in a digital computer, thus imposing the need to sample a continuous-time process. Thus, we will begin by assuming very general discrete-time models of the following form:

x_k = f_{k−1}(x_{k−1}, w_{k−1})
y_k = c_k(x_k, ν_k) .   (2)

In Eq. 2, x_k is the state vector at (discrete) time k, the process noise w_{k−1} is a functional representation of the (assumed) uncertainty in the knowledge of the state evolution from time k − 1 to time k, y_k is the vector of measurements made at time k, and the


vector k is a statistical representation of the noise Given a measurement y k, the likelihood function (see
that corrupts the measurement taken at time k. We Eq. 4) characterizes the probability of obtaining that
will revisit continuous-to-discrete model conversion value of the measurement, given a state x k . The likeli-
later. hood function is derived from the sensor-measurement
Eq. 2 models how the states of the system are assumed model. Equations 3 and 4, when applied recursively,
to evolve with time. The function fk – 1 is not assumed to constitute the Bayesian nonlinear filter. The posterior
have a specific structure other than being of closed form. density encapsulates all current knowledge of the system
In general, it will be nonlinear. Moreover, no assump- state and its associated uncertainty. Given the posterior
tion is made regarding the statistical structure of the density, optimal estimators of the state can be defined.
uncertainty involved in the state evolution; we assume Generally, use of a Bayesian recursive filter para-
only that a reasonably accurate model of it is available. digm requires a methodology for estimating the prob-
The second statement in Eq. 2 models how the measure- ability densities involved that often is nontrivial.
ments are related to the states. Again, no assumptions Recent research has focused on the use of particle fil-
are made regarding the structure of c k or the statistics of tering techniques as a way to accomplish this. Particle
the measurement noise. filtering has been applied to a range of tracking prob-
Suppose that at time k – 1 one has a probability den- lems and, in some instances, has been shown to yield
sity that describes the knowledge of the system state at superior performance as compared with other filtering
that time, based on all measurements up to and includ- techniques. For a more detailed discussion of particle
ing that at time k – 1. This density is referred to as the filtering techniques, the interested reader is referred
prior density of the state expressed as p (x k – 1 | Yk – 1) to Ref. 7.
where Yk – 1 represents all measurements taken up to
and including that at time k – 1. Then, suppose a new
measurement becomes available at time k. The problem KALMAN FILTERING
is to update the probability density of the state, given For the purpose of missile guidance filtering, the more
all measurements up to and including that at time k. familiar Kalman filter is widely used.3, 6, 8, 9 The Kalman
The update is accomplished in a propagation step and a filter is, in fact, a special case of the Bayesian filter. Like
measurement-update step. the Bayesian filter, the Kalman filter (i) requires models
The propagation step predicts forward the prob- of the state evolution and the relationship between states
ability density from time k – 1 to k via the Chapman– and measurements and (ii) is a two-step recursive pro-
Kolmogorov equation (Eq. 3).7 cess (i.e., first predict the state evolution forward in time,
then update the estimate with the measurements). How-
p (x | Y )
1 4 4k 2 4k –413 ever, Kalman revealed that a closed-form recursion for
Prediction solution of the filtering problem could be obtained if the

= # p (x k | x k – 1) p ( x k – 1 | Yk – 1) d x k – 1 (3) following two assumptions were made: (i) the dynam-
1 44 2 44 3 1 4 44 2 4 44 3 ics and measurement equations are linear and (ii) the
Transitional density Prior
process and measurement-noise sequences are additive,
Eq. 3 propagates the state probability density function white, and Gaussian-distributed. Gaussian distributions
from the prior time to the current time. The integral is are described rather simply with only two parameters:
taken of the product of the probabilistic model of the their mean value and their covariance matrix. The
state evolution (sometimes called the transitional den- Kalman filter produces a mean value of the state esti-
sity) and the prior state density. This integration is over mate and the covariance matrix of the state estimation
the multidimensional state vector, which can render it error. The mean value provides the optimal estimate of
quite challenging. Moreover, in general, no closed-form the states.
solution will exist. As mentioned above, discrete-time models of the pro-
The measurement-update step is accomplished by cess and measurements will be the preferred representa-
applying Bayes’ theorem to the prediction shown above; tion when one considers Kalman filtering applications.
the step is expressed in Eq. 4: In many instances, this preference will necessitate the
representation of an available continuous-time model of
Likelihood
64474
Prediction
4 8 6 44 7 44 8 the dynamic system by a discrete-time equivalent. For
p (y k | x k) p (x k | Yk – 1) that important reason, in Box 1 we review how this pro-
p (x k | Y ) = . (4) cess is applied. In Box 2, we provide a specific example of
1 44 2 44k3
Posterior # p (yk | xk) p (xk | Yk ) d xk
–1 how one can discretize a constant-velocity continuous-
Density 1 4 4 4 4 4 44 2 4 4 4 4 4 44 3
Normalizing Constant time model based on the results of Box 1.
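Although the Eqs. 3 and 4 recursion rarely admits a closed-form solution, it can be evaluated by brute force when the state is low-dimensional. The sketch below is illustrative only: it assumes a scalar random-walk model, a Gaussian likelihood, and arbitrary noise values (none of which come from the article), tabulates the transitional density on a fixed grid, and runs one predict/update cycle.

```python
import numpy as np

# Illustrative 1-D setup: scalar state on a grid, random-walk dynamics
# x_k = x_{k-1} + w (w ~ N(0, q)), measurement y_k = x_k + v (v ~ N(0, r)).
grid = np.linspace(-10.0, 10.0, 401)
dx = grid[1] - grid[0]
q, r = 0.5**2, 1.0**2

def gauss(z, var):
    return np.exp(-0.5 * z * z / var) / np.sqrt(2.0 * np.pi * var)

# Transitional density p(x_k | x_{k-1}) tabulated on the grid
trans = gauss(grid[:, None] - grid[None, :], q)

def bayes_step(prior, y):
    # Prediction (Eq. 3): Chapman-Kolmogorov integral as a quadrature sum
    pred = trans @ prior * dx
    # Update (Eq. 4): likelihood times prediction, then renormalize
    post = gauss(y - grid, r) * pred
    return post / (post.sum() * dx)

prior = gauss(grid, 2.0**2)          # broad prior centered at zero
post = bayes_step(prior, 2.0)        # absorb a measurement y = 2.0
x_map = grid[np.argmax(post)]        # a point estimate from the posterior
```

With Gaussian densities throughout, the posterior mode lands between the prior mean and the measurement, which is exactly the behavior the Kalman filter reproduces in closed form.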


BOX 1.  DEVELOPMENT OF A DISCRETE-TIME EQUIVALENT MODEL


We start with a linear continuous-time representation of a stochastic dynamic system, as shown in Eq. 5:

ẋ(t) = A(t) x(t) + B(t) u(t) + w(t)
y(t) = C x(t) + ν(t) .   (5)

In this model, x ∈ Rⁿ is the state vector, u ∈ Rᵐ is the (deterministic) control vector (e.g., guidance command applied to the missile control system at time t), y ∈ Rᵖ is the measurement vector, w ∈ Rⁿ and ν ∈ Rᵖ are vector white-noise processes (with assumed zero cross-correlation), and t represents time. Matrices A, B, and C are compatibly dimensioned so as to support the vector-matrix operations in Eq. 5. The white-noise processes are assumed to have covariance matrices as given in Eq. 6:

E[w(t) wᵀ(τ)] = Q δ(t − τ)
E[ν(t) νᵀ(τ)] = R δ(t − τ) .   (6)

In Eq. 6, E(·) represents the expectation operator, defined as E(x) = ∫_{−∞}^{∞} x p(x) dx, where p(x) is the probability density of x. Above, note that the continuous Dirac delta function has the property that ∫_{−∞}^{∞} f(τ) δ(t − τ) dτ = f(t), for any f(·) continuous at t. Next, we consider samples of the continuous-time dynamic process described by Eq. 5 at the discrete times t₀, t₁, . . . , t_k, and we use state-space methods to write the solution at time t_{k+1}:4, 6, 10

x(t_{k+1}) = Φ(t_{k+1}, t_k) x(t_k) + ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ) B(τ) u(τ) dτ + ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ) w(τ) dτ .   (7)

In Eq. 7, the second term defines Γ_k u_k and the third term defines w_k, and Φ(t_{k+1}, t_k) = e^{A(t_{k+1} − t_k)} represents the system state transition matrix from time t_k to t_{k+1}, where the matrix exponential can be expressed as e^{A(t_{k+1} − t_k)} = Σ_{k=0}^{∞} Aᵏ (t_{k+1} − t_k)ᵏ / k! .1, 3–5, 10 Note that if the dynamic system is linear and time-invariant, then the state transition matrix may be calculated as Φ(t_{k+1}, t_k) = L⁻¹{[sI − A]⁻¹}, where L⁻¹{·} represents the inverse Laplace transform and I is a compatibly partitioned identity matrix. Thus, using Eq. 7, we write the (shorthand) discrete-time representation of Eq. 5 as given below:

x_{k+1} = Φ_k x_k + Γ_k u_k + w_k
y_k = C_k x_k + ν_k .   (8)

In Eq. 8, the representation of the measurement equation is written directly as a sampled version of the continuous-time counterpart in Eq. 5. In addition, the discrete-time process and measurement-noise covariance matrices, Q_k and R_k, respectively, are defined as shown below:

E[w_k w_iᵀ] = Q_k δ_{k−i}
E[ν_k ν_iᵀ] = R_k δ_{k−i}   (9)
E[w_k ν_iᵀ] = 0 ∀ i, k .

Here we have used the discrete Dirac delta function, defined as δ₀ = 1, δ_n = 0 for n ≠ 0. Thus, as part of the discretization process, we also seek the relationships between the continuous- and discrete-time pairs {Q, Q_k} and {R, R_k}. It can be shown that, given the continuous-time process disturbance covariance matrix Q and state transition matrix Φ, and referring to Eqs. 7 and 9, the discrete-time process disturbance covariance matrix Q_k can be approximated as given in Eq. 10:4

Q_k ≈ ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, t) Q(t) Φᵀ(t_{k+1}, t) dt .   (10)

To obtain an approximation of the measurement-noise covariance, we take the average of the continuous-time measurement over the time interval Δt = t_k − t_{k−1}, as shown below:4

ȳ_k = (1/Δt) ∫_{t_{k−1}}^{t_k} [C x(t) + ν(t)] dt ≈ C x_k + (1/Δt) ∫_{t_{k−1}}^{t_k} ν(t) dt .   (11)

From Eqs. 6, 9, and 11, we obtain the desired relationship between the continuous-time measurement covariance R and its discrete-time equivalent R_k:

R_k = E[ν_k ν_kᵀ] = (1/Δt²) ∫_{t_{k−1}}^{t_k} ∫_{t_{k−1}}^{t_k} E[ν(σ) νᵀ(τ)] dσ dτ = R/Δt .   (12)
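The Box 1 recipe is easy to check numerically. The sketch below is an illustration, not part of the article: it assumes a scalar first-order-lag system with arbitrary time constant and noise level, forms Φ from the matrix-exponential power series quoted after Eq. 7, evaluates the Eq. 10 integral by midpoint quadrature, and compares both against the scalar closed forms.

```python
import numpy as np

# Illustrative scalar system: dx/dt = -x/tau + w(t), E[w(t)w(s)] = Q d(t-s).
# tau, T, and Q are assumed values for this sketch.
tau, T, Q = 2.0, 0.1, 3.0
A = np.array([[-1.0 / tau]])

def expm_series(M, terms=30):
    # Matrix exponential via its power series; adequate for small ||M||
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

Phi = expm_series(A * T)        # state transition matrix over one step

# Eq. 10: Q_k ~= integral over [0, T] of Phi(T - t) Q Phi(T - t)^T dt,
# evaluated here with a midpoint quadrature
n = 2000
dt = T / n
Qk = np.zeros((1, 1))
for i in range(n):
    Ph = expm_series(A * (T - (i + 0.5) * dt))
    Qk += Ph * Q @ Ph.T * dt

# Scalar closed forms for comparison
Phi_exact = np.exp(-T / tau)
Qk_exact = Q * tau / 2.0 * (1.0 - np.exp(-2.0 * T / tau))
```

For a general (non-scalar) A the same quadrature applies entrywise; only the closed-form comparison is special to the scalar case.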


BOX 2.  DISCRETIZATION EXAMPLE: CONSTANT-VELOCITY MODEL


Here, we illustrate (with an example) how one can derive a discrete-time model from the continuous-time representation. For this illustrative example, we consider the state-space equations associated with a constant-velocity model driven by Gaussian white noise ω(t) (i.e., the velocity state is modeled as a Wiener process4):

d/dt [r(t); v(t)] = [0 1; 0 0] [r(t); v(t)] + [0; 1] ω(t)
y(t) = [1 0] [r(t); v(t)] + ν(t) ,   (13)

where A ≜ [0 1; 0 0], x(t) ≜ [r(t); v(t)], D ≜ [0; 1], and C ≜ [1 0]. (Compare the specific structure above to the general expression in Eq. 5.) In Eq. 13, the process and measurement-noise statistics are given by the following: E[ω(t)] = 0, E[ω(t)ω(τ)] = Q δ(t − τ), E[ν(t)] = 0, E[ν(t)ν(τ)] = R δ(t − τ), and E[ω(t)ν(τ)] = 0. We compute the state transition matrix Φ_k as Φ_k = L⁻¹{[sI − A]⁻¹}|_{t=T}, where we have defined the sample time as T = t_{k+1} − t_k. Then, from Eq. 7, we obtain the following discrete-time representation for this system:

[r_{k+1}; v_{k+1}] = [1 T; 0 1] [r_k; v_k] + ω_k
y_k = [1 0] [r_k; v_k] + ν_k .   (14)

(Compare the discrete-time representation in this example to the more general one shown in Eq. 8.) Based on the results of the previous subsection, the discrete-time process and measurement-noise components in Eq. 14 are given by ω_k = ∫_{t_k}^{t_{k+1}} Φ(t_{k+1}, τ) D ω(τ) dτ = ∫₀ᵀ [T − τ; 1] ω(τ) dτ and ν_k = (1/T) ∫_{t_k}^{t_{k+1}} ν(t) dt, respectively. Consequently, by using Eqs. 10 and 12, the discrete-time process and measurement-covariance matrices are computed as

Q_k = E[ω_k ω_kᵀ] = E[ (∫₀ᵀ [T − τ; 1] ω(τ) dτ)(∫₀ᵀ [T − σ; 1] ω(σ) dσ)ᵀ ]
    = ∫₀ᵀ [T − τ; 1] [T − τ  1] Q dτ = [T³/3  T²/2; T²/2  T] Q   (15)
R_k = E[ν_k ν_kᵀ] = (1/T²) ∫₀ᵀ ∫₀ᵀ E[ν(u) νᵀ(τ)] du dτ = (1/T²) ∫₀ᵀ R dτ = R/T .
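The closed-form Q_k and R_k in Eq. 15 can be sanity-checked numerically. The sketch below (T and Q are illustrative values, not from the article) evaluates the Eq. 15 integral for Q_k by midpoint quadrature and compares it with the [T³/3 T²/2; T²/2 T] Q closed form.

```python
import numpy as np

# Check Q_k = integral over [0, T] of [T - tau; 1][T - tau, 1] Q d tau
# (Eq. 15) for the constant-velocity model. T and Q are assumed values.
T, Q = 0.5, 2.0
n = 20000
dtau = T / n
tau = (np.arange(n) + 0.5) * dtau   # midpoint abscissas
g = T - tau                         # first entry of Phi(T - tau) D

Qk = Q * dtau * np.array([[np.sum(g * g), np.sum(g)],
                          [np.sum(g), float(n)]])

Qk_closed = Q * np.array([[T**3 / 3.0, T**2 / 2.0],
                          [T**2 / 2.0, T]])
```

The companion result R_k = R/T has a simple reading: averaging white measurement noise over a sample interval shrinks its variance in proportion to the interval length, so the discrete-time measurement variance grows as the sample interval T shrinks.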

The Discrete-Time Kalman Filter

The theory says that the Kalman filter provides state estimates that have minimum mean square error.4, 11, 12 An exhaustive treatment and derivation of the discrete-time Kalman filter is beyond the scope of this article. Instead, we shall introduce the design problem and directly present the derivation results. To this end, we note that if x_k is the state vector at time k, and x̂_k is an estimate of the state vector at time k, then the design problem may be stated as given below:

min: J = trace{E([x_k − x̂_k][x_k − x̂_k]ᵀ | Y_k)}
subject to: x_k = Φ_{k−1} x_{k−1} + Γ_{k−1} u_{k−1} + w_{k−1}
            y_k = C_k x_k + ν_k   (16)
where: E[w_k w_iᵀ] = Q_k δ_{k−i}, E[ν_k ν_iᵀ] = R_k δ_{k−i}, E[w_k ν_iᵀ] = 0 ∀ i, k .

The expressions in Eq. 16 embody the optimal design problem, which is to minimize the mean square estimation error trace{E([x_k − x̂_k][x_k − x̂_k]ᵀ | Y_k)} subject to the assumed plant dynamics and given a sequence of measurements up to time k represented by Y_k = {y₁, y₂, . . . , y_k}.

As previously discussed, the discrete-time Kalman filter (algorithm) is mechanized by employing two distinct steps: (i) a prediction step (taken prior to receiving a new measurement) and (ii) a measurement-update step. As such, we will distinguish a state estimate that exists prior to a measurement at time k, x̂_k^(−) (the a priori estimate), from one constructed after a measurement at time k, x̂_k^(+) (the a posteriori estimate). Moreover, we use the term P_k to denote the covariance of the estimation error, where P_k^(−) = E[x_k − x̂_k^(−)][x_k − x̂_k^(−)]ᵀ and P_k^(+) = E[x_k − x̂_k^(+)][x_k − x̂_k^(+)]ᵀ. In what follows, we denote x̂₀^(+) as our initial estimate, where x̂₀^(+) = E[x(0)].


Table 1.  Discrete-time Kalman filter algorithm.

Step            Description                           Expression
Initialization  (a) Initial conditions                x̂₀^(+) = E[x(0)],  P₀^(+) = E[x₀ − x̂₀^(+)][x₀ − x̂₀^(+)]ᵀ
Prediction      (b) State extrapolation               x̂_k^(−) = Φ_{k−1} x̂_{k−1}^(+) + Γ_{k−1} u_{k−1}
                (c) Error-covariance extrapolation    P_k^(−) = Φ_{k−1} P_{k−1}^(+) Φ_{k−1}ᵀ + Q_{k−1}
Correction      (d) Kalman gain update                K_k = P_k^(−) C_kᵀ [C_k P_k^(−) C_kᵀ + R_k]⁻¹
                (e) Measurement update                x̂_k^(+) = x̂_k^(−) + K_k (y_k − C_k x̂_k^(−))
                (f) Error-covariance update           P_k^(+) = [I − K_k C_k] P_k^(−)
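Steps a–f of Table 1 translate directly into code. The sketch below implements one predict-plus-update cycle and exercises it on an illustrative constant-velocity tracker; the model matrices, noise levels, and 200-step scenario are assumptions for the demo, not values from the article.

```python
import numpy as np

def kf_step(x_prev, P_prev, u_prev, y, Phi, Gam, C, Qk, Rk):
    # (b) state extrapolation, (c) error-covariance extrapolation
    x_m = Phi @ x_prev + Gam @ u_prev
    P_m = Phi @ P_prev @ Phi.T + Qk
    # (d) Kalman gain update
    K = P_m @ C.T @ np.linalg.inv(C @ P_m @ C.T + Rk)
    # (e) measurement update, (f) error-covariance update
    x_p = x_m + K @ (y - C @ x_m)
    P_p = (np.eye(x_prev.size) - K @ C) @ P_m
    return x_p, P_p

# Illustrative demo: position measured, velocity inferred
T = 0.1
Phi = np.array([[1.0, T], [0.0, 1.0]])
Gam = np.zeros((2, 1))
C = np.array([[1.0, 0.0]])
Qk = 0.01 * np.array([[T**3 / 3.0, T**2 / 2.0], [T**2 / 2.0, T]])
Rk = np.array([[0.25]])

rng = np.random.default_rng(1)
x_true = np.array([0.0, 1.0])      # true velocity: 1 unit/s
x_hat = np.array([0.0, 0.0])       # step (a): initial estimate
P = np.diag([1.0, 1.0])
for _ in range(200):
    x_true = Phi @ x_true
    y = C @ x_true + rng.normal(0.0, 0.5, size=1)
    x_hat, P = kf_step(x_hat, P, np.zeros(1), y, Phi, Gam, C, Qk, Rk)
```

After a few seconds of data the velocity estimate converges toward the true value even though only position is measured, and P settles to a steady state fixed by Φ, C, Q_k, and R_k.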

Based on this description, the discrete-time Kalman filter algorithm is encapsulated as shown in Table 1. In Table 1, the filter operational sequence is shown in the order of occurrence. The filter is initialized as given in step a. Steps b and c are the two prediction (or extrapolation) steps; they are executed at each sample instant. Steps d, e, and f are the correction (or measurement-update) steps; they are brought into the execution path when a new measurement y_k becomes available to the filter. Figure 2 illustrates the basic structure of the linear Kalman filter, based on the equations and sequence laid out in Table 1.

Figure 2.  The block diagram of the discrete-time, linear Kalman filter.

Example: Missile Guidance State Estimation via Linear Discrete-Time Kalman Filter

In our companion article in this issue, “Modern Homing Missile Guidance Theory and Techniques,” the planar version of the augmented proportional navigation (APN) guidance law, repeated below for convenience (Eq. 17), requires estimates of relative position x₁(t) ≜ r(t), relative velocity x₂(t) ≜ v(t), and target acceleration x₃(t) ≜ a_T(t) perpendicular to the target–missile LOS in order to develop missile acceleration commands (see Fig. 1):

u_APN(t) = (3 / t_go²) [x₁(t) + x₂(t) t_go + ½ x₃(t) t_go²] .   (17)

Referring back to the planar engagement geometry shown in Fig. 1, consider the following stochastic linear continuous-time model representing the assumed engagement kinematics in the y (or z) axis:

[ṙ(t); v̇(t); ȧ_T(t)] = [0 1 0; 0 0 1; 0 0 0] [r(t); v(t); a_T(t)] + [0; −1; 0] u(t) + [0; 0; 1] ω(t)
y(t) = [1 0 0] [r(t); v(t); a_T(t)] + ν(t) .   (18)

In this example, the target acceleration state is driven by white noise; it is modeled as a Wiener process.4 It can be shown that this model is statistically equivalent to a target maneuver of constant amplitude and random maneuver start time.13 As in the previous discretization example, we assume that the process and measurement-noise statistics are given by the following relations: E[ω(t)] = 0, E[ω(t)ω(τ)] = Q δ(t − τ), E[ν(t)] = 0, E[ν(t)ν(τ)] = R δ(t − τ), and E[ω(t)ν(τ)] = 0. If we discretize the continuous-time system considered in Eq. 18, we obtain the following discrete-time dynamics and associated discrete-time process and measurement-noise covariance matrices:
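The discretization of Eq. 18 can be verified with the Box 1 machinery. In the sketch below (T and Q are illustrative values, not from the article), the kinematics matrix is nilpotent, so the matrix exponential truncates exactly, and the Γ_k and Q_k integrals of Eqs. 7 and 10 are evaluated by quadrature and compared with the closed forms quoted in Eq. 19.

```python
import numpy as np

# Eq. 18 kinematics: A is nilpotent (A @ A @ A = 0), so
# exp(A t) = I + A t + (A t)^2 / 2 exactly. T and Q are assumed values.
T, Q = 0.1, 1.0
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
B = np.array([[0.0], [-1.0], [0.0]])   # control enters the velocity state
G = np.array([[0.0], [0.0], [1.0]])    # process noise drives a_T

def phi(t):
    return np.eye(3) + A * t + A @ A * (t * t / 2.0)

Phi = phi(T)

# Gamma_k (Eq. 7) and Q_k (Eq. 10) as integrals over one sample interval
n = 2000
dtau = T / n
Gam = np.zeros((3, 1))
Qk = np.zeros((3, 3))
for i in range(n):
    Ph = phi(T - (i + 0.5) * dtau)     # midpoint quadrature
    Gam += Ph @ B * dtau
    Qk += (Ph @ G) * Q @ (Ph @ G).T * dtau

# Closed forms quoted in Eq. 19
Gam_closed = np.array([[-T**2 / 2.0], [-T], [0.0]])
Qk_closed = Q * np.array([[T**5 / 20.0, T**4 / 8.0, T**3 / 6.0],
                          [T**4 / 8.0,  T**3 / 3.0, T**2 / 2.0],
                          [T**3 / 6.0,  T**2 / 2.0, T]])
```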


[r_{k+1}; v_{k+1}; a_{T,k+1}] = [1 T T²/2; 0 1 T; 0 0 1] [r_k; v_k; a_{T,k}] + Γ_k u_k + ω_k
y_k = [1 0 0] [r_k; v_k; a_{T,k}] + ν_k ,   (19)

where

Q_k = [T⁵/20  T⁴/8  T³/6; T⁴/8  T³/3  T²/2; T³/6  T²/2  T] Q,  R_k = R/T,  Γ_k = [−T²/2; −T; 0] .

To illustrate the structure of the linear Kalman filter for the APN estimation problem, we will develop a block diagram of the filter based on the discrete-time model presented above. To help facilitate this, we assume that T is the rate at which measurements are available and that the filter runs at this rate (in the general case, this assumption is not necessary). For this case, the equations presented in Table 1 can be used to express the estimation equation in the alternate (and intuitively appealing) form given below:

x̂_k^(+) = [I − K C_k] [Φ_{k−1} x̂_{k−1}^(+) + Γ_{k−1} u_{k−1}] + K y_k .   (20)

Recall that, for APN, components of the state vector x are defined to be relative position, x₁ ≜ r, relative velocity, x₂ ≜ v, and target acceleration, x₃ ≜ a_T, leading to x ≜ [x₁ x₂ x₃]ᵀ. Therefore, using the system described by Eq. 19 in the alternate Kalman filter form shown in Eq. 20, we obtain the APN estimation equations below:

r̂_k^(+) = (1 − K₁)(r̂_{k−1}^(+) + T v̂_{k−1}^(+) + ½T² â_{T,k−1}^(+) − ½T² u_{k−1}) + K₁ y_k
v̂_k^(+) = −K₂(r̂_{k−1}^(+) + T v̂_{k−1}^(+) + ½T² â_{T,k−1}^(+) − ½T² u_{k−1} − y_k) + v̂_{k−1}^(+) + T â_{T,k−1}^(+) − T u_{k−1}   (21)
â_{T,k}^(+) = −K₃(r̂_{k−1}^(+) + T v̂_{k−1}^(+) + ½T² â_{T,k−1}^(+) − ½T² u_{k−1} − y_k) + â_{T,k−1}^(+) .

Figure 3 depicts the structure dictated by the filtering equations above. Referring to Fig. 3, the makeup of the Kalman gain matrix, K, is shown below (see Eq. d in Table 1):

K = [K₁; K₂; K₃] = [p₁₁T/(p₁₁T + σ_r²); p₁₂T/(p₁₁T + σ_r²); p₁₃T/(p₁₁T + σ_r²)] .   (22)

In this expression, T is the filter and measurement sample time (in seconds), σ_r² ≜ R is the (continuous-time) relative position measurement variance (in practice, an estimate of the actual variance), and p_ij represents the {i, j}th entry of the (symmetric) a priori error-covariance matrix P_k^(−) as given below:

P_k^(−) = [p₁₁ p₁₂ p₁₃; p₁₂ p₂₂ p₂₃; p₁₃ p₂₃ p₃₃] .   (23)

P_k^(−) is recursively computed using Eq. c in Table 1 and Eq. 19 to compute the Kalman gain. The a posteriori error-covariance matrix, P_k^(+), is recursively computed by using Eq. f in Table 1 and leads to the following structure:

P_k^(+) = [(1 − K₁)p₁₁  (1 − K₁)p₁₂  (1 − K₁)p₁₃; (p₁₂ − p₁₁K₂)  (p₂₂ − p₁₂K₂)  (p₂₃ − p₁₃K₂); (p₁₃ − p₁₁K₃)  (p₂₃ − p₁₂K₃)  (p₃₃ − p₁₃K₃)] .   (24)

To mechanize this filter, an estimate of the measurement-noise variance, σ_r², is required. This parameter is important because it directly affects the filter bandwidth. There are a number of ways in which one may set or select this parameter. The conceptually simple thing to do is to set the parameter based on knowledge of the sensor characteristics, which may or may not be easy in practice because the noise variance may change as the engagement unfolds (depending on the type of targeting sensor being employed). Another, more effective, approach is to adaptively adjust or estimate the measurement variance in real time. See Ref. 14 for a more in-depth discussion on this topic.

Example: Discrete-Time APN Kalman Filter Performance

As a simple example, the Kalman filter shown in Fig. 3 was implemented to estimate the lateral motion of a weaving target. The total target simulation time was 5 s, and the filter time step (T) was 0.1 s. Figure 4 illustrates the results for this example problem. The target lateral acceleration (shown as aT in Fig. 3) was modeled as a sinusoidal function in the x1 (lateral) direction with a 10-g amplitude and 5-s period. Motion in the x2 direction is a constant velocity of Mach 1 (1116.4 ft/s). The target initial conditions are x = [0, 10,000]ᵀ (ft) and v = [0, 1116]ᵀ (ft/s) for position and velocity, respectively. The pseudo-measurement is the lateral position (x1), which was modeled as true x1 plus Gaussian white noise


Figure 3.  APN guidance filter. A discrete-time three-state Kalman filter is illustrated here, as is its place within the guidance loop. The filter state estimates are relative position, relative velocity, and target acceleration. These estimates are passed to the APN guidance law, which generates the acceleration commands necessary to achieve intercept. Notice that a perfect interceptor response to the acceleration command is assumed in this simplified feedback loop.

with statistics N(0, σ = 1 ft) for the “low-noise” case and N(0, σ = 10 ft) for the “high-noise” case. The estimated states of the filter comprise target lateral position, velocity, and acceleration. The filter was initialized by first collecting four lateral position measurement samples {x_M1(1), x_M1(2), x_M1(3), x_M1(4)} and assigning the initial state values as shown below:

x̂₀ = (1/4) Σ_{i=1}^{4} x_M1(i),  v̂₀ = (x_M1(4) − x_M1(3))/Δt,  â_{T,0} = v̂₀/(2Δt) − (x_M1(2) − x_M1(1))/(2Δt²) .   (25)

As mentioned, two cases are shown in Fig. 4: (left) a low-noise case with a measurement standard deviation σ = 1 ft and (right) a high-noise case with a measurement standard deviation of σ = 10 ft. The error-covariance matrix was initialized as P₀|σ=1 ft = diag{1, 225, 2000} for the low-noise case and P₀|σ=10 ft = diag{10, 2000, 50,000} for the high-noise case. For each case, the top plot shows the target position as x1 versus x2 (time is implicit). True, measured, and estimated positions are shown along with the 3σ bounds. For the low-noise case, it is difficult to distinguish truth from measurement or estimate (given the resolution of the plot). For the high-noise case, the position estimation error is more obvious. The second-row plots show the estimated and measured lateral position error for each case. The third-row plots illustrate lateral velocity, and the bottom plots show lateral acceleration. It is clear that with the high-noise measurement, the estimates deviate from truth much more as compared to the low-noise case.

NONLINEAR FILTERING VIA THE EXTENDED KALMAN FILTER

The conventional linear Kalman filter produces an optimal state estimate when the system and measurement equations are linear (see Eq. 5). In many filtering problems, however, one or both of these equations are nonlinear, as previously illustrated in Eq. 2. In particular, this nonlinearity can be the case for the missile guidance filtering problem. The standard way in which this issue of nonlinearity is treated is via the extended Kalman filter (EKF). In the EKF framework, the system and measurement equations are linearized about the current state estimates of the filter. The linearized system of equations then is used to compute the (instantaneous) Kalman gain sequence (including the a priori and a posteriori error covariances). However, state propagation is carried out by using the nonlinear equations. This “on-the-fly” linearization approach implies that the EKF gain sequence will depend on the particular series of (noisy) measurements as the engagement unfolds rather


than be predetermined by the process and measurement […] topic.) Instead, we introduce the concept and present the results as a modification to the linear Kalman filter computations illustrated in Table 1. To start, consider the nonlinear dynamics and measurement equations given below, where the (deterministic) control and the process and measurement disturbances are all assumed to be input-affine:

x_k = f_{k−1}(x_{k−1}) + b_{k−1}(x_{k−1}) u_{k−1} + w_{k−1}
y_k = c_k(x_k) + ν_k .   (26)

As before, we assume that the system disturbances are zero-mean Gaussian white-noise sequences with the following properties: E[w_k w_iᵀ] = Q_k δ_{k−i}, E[ν_k ν_iᵀ] = R_k δ_{k−i}, and E[w_k ν_iᵀ] = 0 ∀ i, k. In Eq. 26, f_k, b_k, and c_k are nonlinear vector-valued functions of the state. We note that, given the n-dimensional state vector x_k* = [x*₁_k, . . . , x*ₙ_k]ᵀ and any vector-valued function of the state m_k(x_k*) = [m₁(x_k*), . . . , mₙ(x_k*)]ᵀ, we will denote the Jacobian matrix M_k as shown:

M_k ≜ ∂m_k(x_k*)/∂x_k = [∂m_i(x_k*)/∂x_j_k]_{i, j = 1, . . . , n} .   (27)

Consequently, we can modify the Table 1 linear Kalman filter calculations to implement the sequence of EKF equations (Table 2).

Notice that the step sequence is identical to the linear Kalman filter. However, unlike the linear Kalman filter, the EKF is not an optimal estimator. Moreover, because the filter uses its (instantaneous) state estimates to linearize the state equations on the fly, the filter may quickly diverge if the estimation error becomes too great or if the process is modeled incorrectly. Nevertheless, the EKF is the standard in many navigation and GPS applications. The interested reader is referred to Refs. 4 and 8 for some additional discussion on this topic.

Figure 4.  APN Kalman filter results. A planar linear Kalman filter is applied to estimate the position, velocity, and acceleration of a target that is maneuvering (accelerating) perpendicular to the x1 coordinate. The filter takes a position measurement in the x1 direction. The (sensor) noise on the lateral position measurement was modeled as true x1 plus zero-mean Gaussian white noise with standard deviation σ. The target maneuver is modeled as a sinusoid with a 10-g magnitude and a period of 5 s. Target motion in the x2 direction is constant, with a sea-level velocity of Mach 1 (~1116.4 ft/s). Two cases are shown: (left) a low-noise measurement case (σ = 1 ft) and (right) a high-noise case (σ = 10 ft). The plots illustrate the true and estimated position, velocity, and acceleration of the target, along with the 3σ bounds for the respective estimate. For each case, the second-row plot shows the errors in the measured and estimated position compared to truth vs. time.

CLOSING REMARKS

In our companion article in this issue, “Modern Homing Missile Guidance Theory and Techniques,” a number of optimal guidance laws were derived and discussed. In each case, it was assumed that all of the states
model assumptions (linear Kalman filter). Hence, the necessary to mechanize the implementation (e.g., rela-
EKF may be more prone to filter divergence given a par- tive position, relative velocity, target acceleration) were
ticularly poor sequence of measurements. Nevertheless, directly available for feedback and uncorrupted by noise
in many instances, the EKF can operate very well and, (referred to as the perfect state information problem). In
therefore, is worth consideration. practice, this generally is not the case. In this article, we
A complete derivation of the EKF is beyond the scope pointed to the separation theorem that states that an
of this article. (See Refs. 3, 4, 11, and 12 for more on this optimal solution to this problem separates into the opti-

68 JOHNS HOPKINS APL TECHNICAL DIGEST, VOLUME 29, NUMBER 1 (2010)
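Hand-derived Jacobians of the form in Eq. 27 are easy to get wrong, so checking them against a finite-difference approximation is good practice. The sketch below is not from the article; the range/bearing measurement function and the numerical values are illustrative assumptions:

```python
import numpy as np

def c(x):
    # Hypothetical nonlinear measurement: range and bearing to a target
    # located at relative Cartesian position (x[0], x[1])
    return np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])

def c_jacobian(x):
    # Analytic Jacobian in the pattern of Eq. 27: row i, column j holds
    # the partial of measurement i with respect to state j
    r2 = x[0]**2 + x[1]**2
    r = np.sqrt(r2)
    return np.array([[ x[0] / r,  x[1] / r ],
                     [-x[1] / r2, x[0] / r2]])

def numerical_jacobian(func, x, eps=1e-4):
    # Central-difference approximation, built one column at a time
    cols = []
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        cols.append((func(x + dx) - func(x - dx)) / (2.0 * eps))
    return np.column_stack(cols)

x0 = np.array([3000.0, 4000.0])   # illustrative relative position (ft)
J_analytic = c_jacobian(x0)
J_numeric = numerical_jacobian(c, x0)
```

Agreement between the two matrices (to within the finite-difference step size) gives confidence that the analytic Jacobian used in the EKF is correct.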



Table 2.  Discrete-time EKF algorithm.

(a) Initial conditions:
    x̂(0+) = E[x(0)],   P_0(+) = E{[x_0 - x̂(0+)] [x_0 - x̂(0+)]^T}

(b) State extrapolation:
    x̂_k(-) = f_{k-1}(x̂_{k-1}(+)) + b_{k-1}(x̂_{k-1}(+)) u_{k-1}

(c) Error-covariance extrapolation:
    P_k(-) = [∂f_{k-1}(x̂_{k-1}(+))/∂x_{k-1}] P_{k-1}(+) [∂f_{k-1}(x̂_{k-1}(+))/∂x_{k-1}]^T + Q_{k-1}

(d) Kalman gain update:
    K_k = P_k(-) [∂c_k(x̂_k(-))/∂x_k]^T ( [∂c_k(x̂_k(-))/∂x_k] P_k(-) [∂c_k(x̂_k(-))/∂x_k]^T + R_k )^{-1}

(e) Measurement update:
    x̂_k(+) = x̂_k(-) + K_k [y_k - c_k(x̂_k(-))]

(f) Error-covariance update:
    P_k(+) = ( I - K_k [∂c_k(x̂_k(-))/∂x_k] ) P_k(-)
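The Table 2 recursion can be made concrete with a short sketch. The example below is not from the article: it invents a toy planar problem, a constant-velocity target observed through a nonlinear range measurement with a fixed offset h, and runs steps (b) through (f) in a loop. All function names and numerical values are illustrative assumptions.

```python
import numpy as np

dt, h = 0.1, 300.0   # sample period (s) and sensor offset (ft); illustrative values

def f(x):            # state transition: constant-velocity kinematics
    return np.array([x[0] + x[1] * dt, x[1]])

def F_jac(x):        # Jacobian of f per Eq. 27 (constant here because f is linear)
    return np.array([[1.0, dt], [0.0, 1.0]])

def c(x):            # nonlinear measurement: range to the target
    return np.array([np.hypot(x[0], h)])

def C_jac(x):        # Jacobian of c, evaluated at the current estimate
    r = np.hypot(x[0], h)
    return np.array([[x[0] / r, 0.0]])

def ekf_step(x_hat, P, y, Q, R):
    # Steps (b) and (c): state and error-covariance extrapolation
    F = F_jac(x_hat)
    x_pred = f(x_hat)
    P_pred = F @ P @ F.T + Q
    # Steps (d), (e), (f): gain, measurement update, covariance update
    C = C_jac(x_pred)
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    x_upd = x_pred + K @ (y - c(x_pred))
    P_upd = (np.eye(2) - K @ C) @ P_pred
    return x_upd, P_upd

# Step (a) plus a run against a noise-free simulated trajectory
Q = np.diag([0.01, 0.01])            # process-noise covariance (illustrative)
R = np.array([[25.0]])               # measurement-noise covariance (illustrative)
x_true = np.array([1000.0, -50.0])   # true position (ft) and velocity (ft/s)
x_hat = np.array([900.0, -40.0])     # deliberately biased initial estimate
P = np.diag([200.0**2, 20.0**2])
for _ in range(50):
    x_true = f(x_true)
    x_hat, P = ekf_step(x_hat, P, c(x_true), Q, R)
```

Because f is linear in this toy problem, only the measurement update actually exercises the on-the-fly linearization; swapping in a nonlinear f and its Jacobian changes nothing else in the loop.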
mal deterministic controller driven by the output of an optimal state estimator. Thus, we focused here on a discussion of optimal filtering techniques relevant for application to missile guidance; this is the process of taking raw (targeting, inertial, and possibly other) sensor data as inputs and estimating the necessary signals (estimates of relative position, relative velocity, target acceleration, etc.) upon which the guidance law operates. Moreover, we focused primarily on (by far) the most popular of these, the discrete-time Kalman filter.

We emphasized the fact that the Kalman filter shares two salient characteristics with the more general Bayesian filter, namely, (i) models of the state dynamics and the relationship between states and measurements are needed to develop the filter and (ii) a two-step recursive process (prediction and measurement update) is followed to estimate the states of the system. However, one big advantage of the Kalman filter (as compared with general nonlinear filtering concepts) is that a closed-form recursion for the solution of the filtering problem is obtained if two conditions are met: (i) the dynamics and measurement equations are linear and (ii) the process and measurement-noise sequences are additive, white, and Gaussian-distributed. Moreover, because discrete-time models of the process and measurements are the preferred representation when one considers Kalman filtering applications, we also discussed (and illustrated) how one can discretize a continuous-time system for digital implementation. As part of the discretization process, we pointed out the necessity of determining the relationships between the continuous- and discrete-time versions of the process-covariance matrix {Q, Qk} and the measurement-covariance matrix {R, Rk}. Reasonable approximations of these relationships were given that are appropriate for many applications.

Finally, we recognize that most real-world dynamic systems are nonlinear. As such, the application of linear Kalman filtering methods first requires the designer to linearize (i.e., approximate) the nonlinear system such that the Kalman filter is applicable. The EKF is an intuitively appealing heuristic approach to tackling the nonlinear filtering problem, one that often works well in practice when tuned properly. However, unlike its linear counterpart, the EKF is not an optimal estimator. Moreover, care must be taken when using an EKF because the approach is based on linearizing the state dynamics and output functions about the current state estimate and then propagating an approximation of the conditional expectation and covariance forward. Thus, if the initial estimate of the state is wrong, or if the process is modeled incorrectly, the EKF may quickly diverge.

REFERENCES
 1. Athans, M., and Falb, P. L., Optimal Control: An Introduction to the Theory and Its Applications, McGraw-Hill, New York (1966).
 2. Basar, T., and Bernhard, P., H-Infinity Optimal Control and Related Minimax Design Problems, Birkhäuser, Boston (1995).
 3. Bar-Shalom, Y., Li, X. R., and Kirubarajan, T., Estimation with Applications to Tracking and Navigation, John Wiley and Sons, New York (2001).
 4. Brown, R. G., and Hwang, P. Y. C., Introduction to Random Signals and Applied Kalman Filtering, 2nd Ed., John Wiley and Sons, New York (1992).
 5. Bryson, A. E., and Ho, Y.-C., Applied Optimal Control, Hemisphere Publishing Corp., Washington, DC (1975).
 6. Lewis, F. L., and Syrmos, V. L., Optimal Control, 2nd Ed., John Wiley and Sons, New York (1995).
 7. Ristic, B., Arulampalam, S., and Gordon, N., Beyond the Kalman Filter: Particle Filters for Tracking Applications, Artech House, Norwood, MA (2004).


 8. Siouris, G. M., An Engineering Approach to Optimal Control and Estimation Theory, John Wiley and Sons, New York (1996).
 9. Zarchan, P., and Musoff, H., Fundamentals of Kalman Filtering: A Practical Approach, American Institute of Aeronautics and Astronautics, Reston, VA (2000).
10. Franklin, G. F., Powell, J. D., and Workman, M. L., Digital Control of Dynamic Systems, Chap. 2, Addison-Wesley, Reading, MA (1990).
11. Chui, C. K., and Chen, G., Kalman Filtering with Real-Time Applications, 3rd Ed., Springer, New York (1999).
12. Grewal, M. S., and Andrews, A. P., Kalman Filtering Theory and Practice, Prentice Hall, Englewood Cliffs, NJ (1993).
13. Zames, G., "Feedback and Optimal Sensitivity: Model Reference Transformations, Multiplicative Seminorms, and Approximate Inverses," IEEE Trans. Autom. Control 26, 301–320 (1981).
14. Osborne, R. W., and Bar-Shalom, Y., "Radar Measurement Noise Variance Estimation with Targets of Opportunity," in Proc. 2006 IEEE/AIAA Aerospace Conf., Big Sky, MT (Mar 2006).

The Authors
Neil F. Palumbo is a member of APL’s Principal Professional Staff and is the Group Supervisor of the Guidance, Naviga-
tion, and Control Group within the Air and Missile Defense Department (AMDD). He joined APL in 1993 after having
received a Ph.D. in electrical engineering from Temple University that same year. His interests include control and esti-
mation theory, fault-tolerant restructurable control systems, and neuro-fuzzy inference systems. Dr. Palumbo also is a lec-
turer for the JHU Whiting School’s Engineering for Professionals program. He is a member of the Institute of Electrical
and Electronics Engineers and the American Institute of Aeronautics and Astronautics. Gregg A. Harrison is a Senior
Professional Staff engineer in the Guidance, Navigation, and Control Group of AMDD at APL. He holds B.S. and M.S.
degrees in mathematics from the University of California, Riverside, an M.S.E.E. (aerospace controls emphasis) from the
University of Southern California, and an M.S.E.E. (controls and signal processing emphases) from The Johns Hopkins
University. He has more than 25 years of experience working in the aerospace industry, primarily on missile system and
spacecraft programs. Mr. Harrison has extensive expertise in missile guidance, navigation, and control; satellite attitude
control; advanced filtering techniques; and resource optimization algorithms. He is a senior member of the American
Institute of Aeronautics and Astronautics. Ross A. Blauwkamp received a B.S.E. degree from Calvin College in 1991 and
an M.S.E. degree from the University of Illinois in 1996; both degrees are in electrical engineering. He is pursuing a Ph.D.
from the University of Illinois. Mr. Blauwkamp joined APL in May 2000 and currently is the supervisor of the Advanced
Concepts and Simulation Techniques Section in the Guidance, Navigation, and Control Group of AMDD. His interests
include dynamic games, nonlinear control, and numerical methods for control. He is a member of the Institute of Elec-
trical and Electronics Engineers and the American Institute of Aeronautics and Astronautics. Jeffrey K. Marquart is
a member of APL’s Associate Professional Staff in AMDD. He joined the Guidance, Navigation, and Control Group in
January 2008 after receiving both his B.S. and M.S. degrees in aerospace engineering from the University of Maryland at
College Park. He currently is working on autopilot analysis, simulation validation, and guidance law design for the Standard Missile. Mr. Marquart is a member of the American Institute of Aeronautics and Astronautics. For further information on the work reported here, contact Neil Palumbo. His e-mail address is neil.palumbo@jhuapl.edu.

The Johns Hopkins APL Technical Digest can be accessed electronically at www.jhuapl.edu/techdigest.
