Abstract

In [Simon and Chia, 2002], an analytic method was developed to incorporate linear state equality constraints into the Kalman filter. When the state constraint is nonlinear, linearization was employed to obtain an approximately linear constraint around the current state estimate. This linearized constrained Kalman filter is subject to approximation errors and may suffer from a lack of convergence. In this paper, we present a method that allows exact use of second-order nonlinear state constraints. It is based on a computational algorithm that iteratively finds the Lagrangian multiplier for the nonlinear constraints. The method therefore provides better approximation when higher-order nonlinearities are encountered. Computer simulation results are presented to illustrate the algorithm.

Keywords: Kalman filtering, nonlinear state constraints, Lagrangian multiplier, iterative solution, tracking.

1 Introduction

In a recent paper [Simon and Chia, 2002], a rigorous analytic method was set forth to incorporate linear state equality constraints into the Kalman filtering process. Such constraints (e.g., known model and signal information) are often ignored or dealt with heuristically. The resulting estimates, even when obtained with the Kalman filter, cannot be optimal because they do not take advantage of this additional information about state constraints.

One example that benefits from state constraints is ground target tracking. When a vehicle travels off-road or on an unknown road, the state estimation problem is unconstrained. However, when the vehicle is traveling on a known road, be it straight or curved, the state estimation problem can be cast as constrained, with the road network information available from, say, digital terrain maps [Yang, Bakich, and Blasch, 2005].

To make use of state constraints, previous attempts range from reducing the system model parameterization to treating state constraints as perfect measurements. The constrained Kalman filter proposed in [Simon and Chia, 2002] consists of first obtaining an unconstrained Kalman filter solution and then projecting the unconstrained state estimate onto the constrained surface. Although the main results are restricted to linear systems and linear state equality constraints, the authors outlined steps to extend the method to inequality constraints, nonlinear dynamic systems, and nonlinear state constraints.

According to [Simon and Chia, 2002], the inequality constraints can be checked at each time step of the filter. If the inequality constraints are satisfied at a given time step, no action is taken since the inequality-constrained problem is solved. If the inequality constraints are not satisfied at a given time step, then the constrained solution is applied to enforce the constraints. Furthermore, to apply the constrained Kalman filter to nonlinear systems and nonlinear state constraints, it is suggested in [Simon and Chia, 2002] to linearize both the system and constraint equations about the current state estimate. The former is equivalent to the use of an extended Kalman filter (EKF).

However, the projection of the unconstrained state estimate onto a linearized state constraint is subject to constraint approximation errors, which are a function of the nonlinearity and, more importantly, of the point around which the linearization takes place. This may result in convergence problems. It was suggested in [Simon and Chia, 2002] to take extra measures to guarantee convergence in the presence of nonlinear constraints.

There are a host of constrained nonlinear optimization techniques [Luenberger, 1989]. Primal methods search through the feasible region determined by the constraints. Penalty and barrier methods approximate constrained optimization problems by unconstrained problems through modifying the objective function (e.g., adding a term that exacts a higher price if a constraint is violated). Instead of the original constrained problem, dual methods attempt to solve an alternate problem (the dual problem) whose unknowns are the Lagrangian multipliers of the first problem. Cutting plane algorithms work on a series of ever-improving approximating linear programs whose solutions converge to that of the original problem. Lagrangian relaxation methods are widely used in discrete constrained optimization problems.

In this paper, we present a method that allows for the exact use of second-order nonlinear state constraints. The method can provide better approximation to higher-order nonlinearities. The new method is based on a computational algorithm that iteratively finds the Lagrangian multiplier for the nonlinear constraints.
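As a concrete illustration of the projection step used by the constrained Kalman filter of [Simon and Chia, 2002], the following sketch (Python with NumPy; the function and variable names are ours, not from the paper) projects an unconstrained estimate onto a linear equality constraint Dx = d, using the inverse covariance as the weighting matrix:

```python
import numpy as np

def project_onto_linear_constraint(x_hat, P, D, d):
    """Project an unconstrained estimate x_hat onto the linear constraint
    D x = d, i.e., solve min (x - x_hat)^T P^-1 (x - x_hat) s.t. D x = d."""
    S = D @ P @ D.T                      # constraint-space covariance
    K = P @ D.T @ np.linalg.inv(S)       # gain mapping residual to state
    return x_hat - K @ (D @ x_hat - d)   # constrained estimate

# Example: enforce x1 = x2 (e.g., equal velocity components along a road)
x_hat = np.array([3.0, 1.0])
P = np.eye(2)
D = np.array([[1.0, -1.0]])
d = np.array([0.0])
x_c = project_onto_linear_constraint(x_hat, P, D, d)  # -> [2., 2.]
```

Note that the constrained estimate satisfies D x_c = d exactly; the difficulty addressed in this paper arises only when the constraint itself is nonlinear.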
conditional mean square error subject to the state constraints, that is,

min_x̂ E{ ||x - x̂||_2^2 | Y }   such that   D x̂ = d        (8)

where ||·||_2 denotes the vector two-norm. Furthermore, when W = P^-1, i.e., the inverse of the unconstrained state estimation error covariance, the solution in Eq. (7) reduces to the result given by the maximum conditional probability method presented in the next section.

where g(x) = [g_i(x)]^T, d = [d_i]^T, and g′(x) = [g′_i(x)]. An approximate linear constraint is therefore formed by replacing D and d in Eq. (3) with g′(x̃)^T and d - g(x̃) + g′(x̃)^T x̃, respectively.

Figure 1 illustrates this linearization process, which identifies possible errors associated with linear approximation of a nonlinear state constraint. As shown, the previous constrained state estimate x̃ lies somewhere on the constrained surface but is away from the true state. The projection of the unconstrained state estimate x̂ onto the approximate linear state constraint produces the current constrained state estimate x̂⁺, which is, however, subject to the constraint approximation error. Clearly, the further away x̃ is from x, the larger the approximation-introduced error. More critically, such an approximately linear constrained estimate may not satisfy the original nonlinear constraint specified in Eq. (10). It is therefore desired to reduce this approximation-introduced error by including higher-order terms while keeping the problem computationally tractable. One possible approach is

which may represent a road segment in a digital terrain map.

Following the constrained Kalman filtering of [Simon and Chia, 2002], we can formulate the projection of an unconstrained state estimate onto a nonlinear constraint surface as the constrained least-squares optimization problem

x̃ = arg min_x (z - Hx)^T (z - Hx)        (15)
subject to f(x) = 0
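The linearization step described above, replacing D and d with the gradient g′(x̃)^T and d - g(x̃) + g′(x̃)^T x̃, can be sketched as follows (Python with NumPy; the circular constraint and all names are illustrative, not from the paper):

```python
import numpy as np

def linearize_constraint(g, grad_g, x_tilde, d):
    """First-order approximation of the nonlinear constraint g(x) = d
    about x_tilde:  grad_g(x_tilde)^T x = d - g(x_tilde)
                    + grad_g(x_tilde)^T x_tilde."""
    D = grad_g(x_tilde).reshape(1, -1)        # plays the role of D
    d_lin = d - g(x_tilde) + float(D @ x_tilde)  # plays the role of d
    return D, d_lin

# Circular road constraint g(x) = x^T x with d = r^2
r = 100.0
g = lambda x: float(x @ x)
grad_g = lambda x: 2.0 * x

x_tilde = np.array([r, 0.0])   # linearization point on the circle
D, d_lin = linearize_constraint(g, grad_g, x_tilde, r**2)
# At (r, 0) the linearized constraint 2r*x1 = 2r^2 is the tangent line x1 = r
```

The linearized constraint is exact only at the tangency point; estimates projected onto it generally drift off the circle, which is the approximation error discussed above.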
If we let W = H^T H and z = H x̂, the formulation in Eq. (15) becomes the same as in Eq. (4). In a sense, Eq. (15) is a more general formulation because it can also be interpreted as a nonlinear constrained measurement update or a projection in the predicted measurement domain.

Construct the Lagrangian with the Lagrangian multiplier λ as

J(x, λ) = (z - Hx)^T (z - Hx) + λ f(x)        (17)

Taking the partial derivatives of J(x, λ) with respect to x and λ, respectively, and setting them to zero leads to the necessary conditions:

-H^T z + λm + (H^T H + λM) x = 0        (18a)
x^T M x + m^T x + x^T m + m0 = 0        (18b)

e(λ)^T (I + λT)^-1 T (I + λT)^-1 e(λ) = Σ_i e_i^2(λ) γ_i^2 / (1 + λγ_i^2)^2        (23a)
m^T x = t^T (I + λT)^-1 e(λ) = Σ_i e_i(λ) t_i / (1 + λγ_i^2)        (23b)
x^T m = e(λ)^T (I + λT)^-1 t = Σ_i e_i(λ) t_i / (1 + λγ_i^2)        (23c)

Bringing these terms into the constraint equation in Eq. (18b) gives rise to the constraint equation, now expressed in terms of the unknown Lagrangian multiplier λ, as

f(λ) = (z^T H - λm^T)(H^T H + λM)^-1 M (H^T H + λM)^-1 (H^T z - λm)
     + m^T (H^T H + λM)^-1 (H^T z - λm)
     + (z^T H - λm^T)(H^T H + λM)^-1 m + m0
     = e(λ)^T (I + λT)^-1 T (I + λT)^-1 e(λ) + t^T (I + λT)^-1 e(λ)
     + e(λ)^T (I + λT)^-1 t + m0
     = Σ_i e_i^2(λ) γ_i^2 / (1 + λγ_i^2)^2 + 2 Σ_i e_i(λ) t_i / (1 + λγ_i^2) + m0        (24)
f(λ) = Σ_i e_i^2 γ_i^2 / (1 + λγ_i^2)^2 + m0        (27b)

ḟ(λ) = -2 Σ_i e_i^2 γ_i^4 / (1 + λγ_i^2)^3        (27c)

The solution of Eq. (27) is also called the constrained least squares [Moon and Stirling, 2000, pp. 765-766], which was previously applied to joint estimation and calibration [Yang and Lin, 2004]. When M = 0, the constraint in Eq. (13) degenerates to a linear one. The constrained solution in Eq. (19) is still valid. However, the iterative solution for finding λ is no longer applicable, and a closed-form solution is available as given in Eq. (7).

Furthermore, if the error covariance matrix is not diagonal, the correlation direction will also affect the statistical properties. Ruling out such variability in conditions makes the analysis of results easier without losing generality.

To apply the linear constrained Kalman filter of [Simon and Chia, 2002], the nonlinear constraint is linearized about a previous constrained state denoted by x̃ and can be written as

[x̃  ỹ] [x  y]^T = r^2        (30)
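A minimal sketch of the Newton iteration implied by Eqs. (27b) and (27c) is given below (Python with NumPy; the circular example with W = I, M = I, m = 0, m0 = -r^2 is ours, in which case γ_i = 1 and e reduces to the unconstrained estimate):

```python
import numpy as np

def solve_lambda(e, gamma, m0, lam=0.0, tol=1e-12, max_iter=50):
    """Newton iteration for the Lagrangian multiplier, using the scalar
    constraint function f (Eq. (27b)) and its derivative (Eq. (27c))."""
    for _ in range(max_iter):
        w = 1.0 + lam * gamma**2
        f = np.sum(e**2 * gamma**2 / w**2) + m0        # Eq. (27b)
        fdot = -2.0 * np.sum(e**2 * gamma**4 / w**3)   # Eq. (27c)
        step = f / fdot
        lam = lam - step                               # Newton update
        if abs(step) < tol:
            break
    return lam

# Circular constraint with W = I2, M = I2, m = 0, m0 = -r^2
r = 100.0
x_hat = np.array([90.0, 60.0])     # off-circle unconstrained estimate
lam = solve_lambda(x_hat, np.ones(2), -r**2)
x_con = x_hat / (1.0 + lam)        # constrained estimate, cf. Eq. (34)
# x_con lies on the circle: ||x_con|| = r
```

For this special case the iteration converges in a few steps to λ = ||x̂||/r - 1, matching the closed form derived later for the circular constraint.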
to a separation error by a factor of 1.74 m/deg. This corresponds to about 26 m for θ = -15°. Although the linearized method using Eq. (7) effectively projects all the unconstrained state estimates onto the linearized constraint, the linearized constraint itself is tilted off, introducing larger errors due to the constraint approximation. In contrast, the quadratic constrained method, being independent of the prior knowledge about the state estimate, easily satisfies the constraint.

[Figure 3 — Projection of Unconstrained Estimates onto Constraints (x̃ = x): x-y view (m) of the road segment with unconstrained, nonlinear constrained, and linearized constrained estimates, the linearizing point, and the true state.]

[Figure 4 — Projection of Unconstrained Estimates onto Constraints (x̃ ≠ x): unconstrained estimates projected onto the nonlinear constraint at θ = -15 deg; same layout as Figure 3.]

In the second simulation, we draw 5000 unconstrained state estimates for each linearizing point θ, which varies from -20° to 20° in steps of 5°. At each linearizing point, we calculate the root mean squared (RMS) errors between the true state and the linearized and nonlinear constrained estimates.

[Figure 5 — RMS Values of State Estimates vs. Angular Offset: Monte Carlo simulation with 5000 runs per angular deviation; horizontal axis: θ, angular offset from true state (deg); vertical axis: RMS error (m).]

As expected, the RMS values for the linearized constrained estimates (circle o) grow large as the angular offset increases. The RMS values for the quadratic constrained estimates (dot ·) remain constant and are slightly smaller than those of the unconstrained estimates, because the scattering along a constraint curve is smaller than in a two-dimensional (2D) plane. For the same reason, the RMS values for the linearized constrained estimates are smaller than those of the quadratic constrained estimates for |θ| < 7°. However, the use of RMS values for comparison is less meaningful in this case. As shown in Figure 6, both the unconstrained estimates (circle o) and the linearized constrained estimates (dot ·) do not always satisfy the nonlinear constraint, whereas the quadratic constrained estimates (plus sign +) do all the time (i.e., the constraint is satisfied if the distance to the origin is 100 m).

[Figure 6 — Constraint Satisfaction: distance to center (m) vs. random sample number for the unconstrained, linearized constrained, and nonlinear constrained state estimates at θ = -5 deg.]

In a 2D setting, the circular path constraint of Eq. (30) allows for a simple geometry solution to the problem. The desired on-circle point x̃ is the intersection between the circle and the line extending from the origin to the off-circle point x̂. The geometry solution can be written as

x̃ = r cos[tan^-1(ŷ/x̂)]        (32a)
ỹ = r sin[tan^-1(ŷ/x̂)]        (32b)
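The geometry solution of Eq. (32) can be sketched as follows (Python with NumPy; the names and sample values are ours; arctan2 is used so the quadrant of the off-circle point is preserved):

```python
import numpy as np

def on_circle_point(x_hat, y_hat, r):
    """Geometry solution of Eq. (32): intersect the circle of radius r
    with the ray from the origin through the off-circle point."""
    theta = np.arctan2(y_hat, x_hat)             # quadrant-aware tan^-1(y/x)
    return r * np.cos(theta), r * np.sin(theta)  # Eqs. (32a), (32b)

xt, yt = on_circle_point(90.0, 60.0, 100.0)
# (xt, yt) lies on the circle and is collinear with the origin and (90, 60)
```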
Since it is an exact solution for this particular simulation example, we can use it to verify the iterative solution obtained by the quadratic constrained method. Figure 7 shows the quadratic constrained estimates (cross x) obtained by the iterative algorithm of Eq. (27) and the exact geometry solutions (circle o), which indeed match each other perfectly.

[Figure 7 — Iterative Quadratic Constrained Solution vs. Geometry Solution: unconstrained estimates projected onto the nonlinear constraint; x-y view (m) of the road segment with unconstrained, nonlinear constrained, and geometric solutions.]

In fact, for this simple example of a target traveling along a circle used in the simulation, a closed-form solution can be derived. Assume that W = I2, M = I2, m = 0, and m0 = -r^2. The nonlinear constraint can be equivalently written as:

x^T x = r^2        (33)

The quadratic constrained estimate given in Eq. (27a) is repeated below for easy reference:

x̃ = (W + λM)^-1 W x̂ = (1 + λ)^-1 x̂        (34)

where λ is the Lagrangian multiplier. Bringing Eq. (34) back to Eq. (33) gives:

x̃^T x̃ = x̂^T x̂ / (1 + λ)^2 = r^2        (35)

λ = √(x̂^T x̂) / r - 1 = ||x̂||_2 / r - 1        (36)

where ||·||_2 stands for the 2-norm, or length, of a vector.

coordinate translation and rotation in order to apply this circular normalization. Reverse operations are then used to transform back. For applications of high dimensionality, the scalar iterative solution of Eq. (26) may be more efficient.

algorithm to converge in the example presented.

[Figure 8 — Convergence in Iterative Lagrangian Multiplier: normalized |λ| vs. iteration for the first 5 data points of the iterative solution.]
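For this circular case, the closed form of Eq. (36) can be checked directly against the constraint of Eq. (33) (Python with NumPy; the numerical values are illustrative):

```python
import numpy as np

# Closed-form Lagrangian multiplier for the circular constraint, Eq. (36),
# and the resulting constrained estimate, Eq. (34), with W = M = I.
r = 100.0
x_hat = np.array([90.0, 60.0])
lam = np.linalg.norm(x_hat) / r - 1.0   # Eq. (36)
x_con = x_hat / (1.0 + lam)             # Eq. (34)
# x_con satisfies Eq. (33): x_con^T x_con = r^2
```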
quadratic cost function, it is of interest to further extend the iterative method to explore other types of nonlinear constraints of practical significance.
Acknowledgements

This research was supported in part under Contracts No. FA8650-04-M-1624 and FA8650-05-C-1808, which is gratefully acknowledged. Thanks also go to Dr. Rob Williams, Dr. Devert Wicker, and Dr. Jeff Layne for their valuable discussions and to Dr. Mike Bakich for his assistance.
References
Anonym, Comments from Reviewer #1 of Fusion 2006 Technical Review Committee, April 2006.
B. Anderson and J. Moore, Optimal Filtering, Prentice
Hall, Englewood Cliffs, NJ, 1979.
D.G. Luenberger, Linear and Nonlinear Programming
(2nd Ed.), Addison-Wesley, 1989.
T.K. Moon and W.C. Stirling, Mathematical Methods and
Algorithms for Signal Processing, Prentice Hall,
Upper Saddle River, NJ, 2000.
D. Simon and T.L. Chia, Kalman Filtering with State Equality Constraints, IEEE Trans. on Aerospace and Electronic Systems, 38, 1 (Jan. 2002), 128-136.
C. Yang, M. Bakich, and E. Blasch, Nonlinear
Constrained Tracking of Targets on Roads, Proc. of
the 8th International Conf. on Information Fusion,
Philadelphia, PA, July 2005.
C. Yang and D.M. Lin, Constrained Optimization for
Joint Estimation of Channel Biases and Angles of
Arrival for Small GPS Antenna Arrays, Proc. of the
60th Annual Meeting of the Institute of Navigation,
Dayton, OH, 2004.