
Unit V Optimal Estimation

Class Notes

Estimation deals with finding an approximate value (called an estimate) of a quantity from observations or measurements that contain information about the quantity to be estimated. All measurements are prone to errors, so an estimate can only approach the correct value to a degree determined by the methods used for the estimation. Estimation is generally done for the purpose of implementing control of a physical system.

The figure shows the general problem of estimation. The physical system is subjected to two types of inputs: a control input, which can be easily manipulated, and a disturbance input, which accounts for the presence of internal or external phenomena that cannot be easily determined. The disturbance may be inherent in the system or may be due to the environment, such as noise in electronic circuits, signal interference due to radiation, turbulence acting on aircraft, etc., which occur in an unpredictable or random manner. The system variables, which are the outputs of the system, are then measured by a measurement process or system. The measuring system, not being perfect, introduces its own errors, which may again be unpredictable or random. Some of the errors in the measurement may be systematic and can be corrected for, but many estimation problems treat the measurement errors as random as well.

Block diagram of optimal estimation

The estimation problem here is to correctly calculate the true values of the system variables in the
presence of the disturbances and the errors introduced due to error in measurement. Often the time history
of the measurements is available and the estimates are to be obtained from these. Usually, a performance
measure is defined to assess the quality of the approximation or the estimate and the estimate is derived
that will maximize or minimize this performance measure. In such a case, the estimate is called an
optimal estimate. What is being done in an optimal estimation problem is to determine an algorithm by
which the estimate is obtained from the measurements based on the knowledge of the dynamics of the
physical system and whatever information one has on the disturbance process and the measurement
process or the errors caused by these. In an associated control problem, the estimates obtained are used to
control the system in a desired manner.

A typical procedure for this can be summarized as follows.

1. Development of the models: This involves the specification of the models for (a) the physical
system (b) the disturbance process (c) the measurement system and (d) the measurement error
process. The models developed should be sufficiently accurate or at least be adequate for the
purpose.
2. Specification of the performance measure: The aim or the purpose of the whole procedure is
defined here. Obviously the performance measure must be realistic in terms of the physical
problem being studied and mathematically solvable.
3. Problem formulation: Here the information available from the two steps above is combined
along with a set of constraints that are to be imposed to define the problem.
4. Development of estimation and control algorithm: This is the implementation stage of the
problem in the physical system to achieve the purpose for which the whole exercise is carried out.
The usefulness of the results also will have to be examined.

Application of Optimal Estimation

1. Communication systems: To extract a message from a received signal. The transmitted or received
signal contains the message to be extracted and is contaminated by unwanted disturbances
and measurement errors at different stages of transmission and reception.
2. Navigation: In a typical navigation problem, the position and velocity of a vehicle have to be
correctly estimated from the available measurements of the two. Aircraft, spacecraft, surface
ships and submarines use these estimates for their movement.
3. Post-experimental data analysis: Here the recorded data from an experiment is analyzed in detail
to assess its success. For example, tracking and telemetry data available from the launch of a
spacecraft provide valuable information for subsequent missions.
4. Process control: Successful operation of large chemical processes requires regulation and control for the
purpose of maintaining efficiency of operation, quality of products and attainment of other specified
goals. Specific examples are machine tool control, aircraft and spacecraft flight control and multistage
chemical processes.

Least Square Estimation

A linear static process is described by the equation

y = Hx + v    (1)

where
y is the (m × 1) vector of measurements of the output,
x is the (n × 1) input vector to be estimated,
H is the (m × n) measurement matrix, and
v is the (m × 1) vector of measurement noise.

It is required to find the best estimate x̂ of x from the measurements y. It is quite probable that
there are more measurements than the minimum required. Then the problem is to use as many
measurements as possible to get a better estimate of x. The best way is to minimize the sum of the
squared errors between the measurements y and the values H x̂ predicted by the estimate, hence the name Least Squares Estimate.
The estimation error can be written as

e = y − H x̂    (2)

The sum of squared errors can be written as

J = ½ eᵀe = ½ (y − H x̂)ᵀ (y − H x̂)    (3)

To find the best estimate that minimizes this sum of squared errors, we differentiate with respect to the estimate x̂ and set the result to zero:

∂J/∂x̂ = −Hᵀ (y − H x̂) = 0    (4)

This results in HᵀH x̂ = Hᵀy, or

x̂ = [HᵀH]⁻¹ Hᵀ y = H⁺ y    (5)

where H⁺ = [HᵀH]⁻¹ Hᵀ is called the pseudo-inverse of the matrix H. The matrix to be inverted is
(HᵀH), an (n × n) matrix, and H must be of full rank for the inverse to exist. If there are fewer measurements than
the number of unknowns, i.e. m < n, the number of measurements is not sufficient for the estimation of a
unique set of values for x.

The estimation error is x̂ − x = [HᵀH]⁻¹ Hᵀ (Hx + v) − x = [HᵀH]⁻¹ Hᵀ v.

If the noise corrupting the measurements has zero mean, the expected value of the estimate equals the true
value. Such an estimate is called an unbiased estimate.

The covariance of the estimation error can be evaluated as

P = E{(x̂ − x)(x̂ − x)ᵀ}

  = E{([HᵀH]⁻¹ Hᵀ v)([HᵀH]⁻¹ Hᵀ v)ᵀ}    (6)

  = [HᵀH]⁻¹ Hᵀ E[vvᵀ] H [HᵀH]⁻¹

If the elements of the noise vector v are uncorrelated with one another, E[vvᵀ] is a diagonal matrix R.
Further, if all the elements have the same uncertainty, all the diagonal elements are identical and

E[vvᵀ] = R = σ² I    (7)

where σ is the root mean square (rms) value of each element in v. In this case, the covariance in
equation (6) becomes

P = (HᵀH)⁻¹ σ²    (8)

and is a measure of how well the estimate can be made.
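The least-squares estimate and its error covariance can be checked numerically. The sketch below uses made-up dimensions, values and noise level (none of them from the notes): it forms x̂ = (HᵀH)⁻¹Hᵀy and the covariance of equation (8).

```python
import numpy as np

# Hypothetical setup: estimate n = 2 unknowns from m = 100 noisy measurements
# y = H x + v, where v is zero-mean white noise with rms value sigma.
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
m, n = 100, 2
H = rng.standard_normal((m, n))
sigma = 0.1
v = sigma * rng.standard_normal(m)
y = H @ x_true + v

# Least-squares estimate: x_hat = (H^T H)^{-1} H^T y, i.e. the pseudo-inverse applied to y.
x_hat = np.linalg.solve(H.T @ H, H.T @ y)

# Error covariance from equation (8): P = (H^T H)^{-1} sigma^2.
P = np.linalg.inv(H.T @ H) * sigma**2

print(x_hat)                 # close to x_true
print(np.sqrt(np.diag(P)))   # predicted rms error per component
```

With many more measurements than unknowns (m ≫ n), the estimate lands very close to the true value, as the covariance in equation (8) predicts.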

Kalman Filter

The Kalman filter is essentially a set of mathematical equations that implement a predictor-corrector
type estimator that is optimal in the sense that it minimizes the estimated error covariance when some
presumed conditions are met.

Since the time of its introduction, the Kalman filter has been the subject of extensive research and
application, particularly in the area of autonomous or assisted navigation. This is likely due in large part
to advances in digital computing that made the use of the filter practical, but also to the relative simplicity
and robust nature of the filter itself. Rarely do the conditions necessary for optimality actually exist, and
yet the filter apparently works well for many applications in spite of this situation.

Applications of Kalman filter

The applications of the Kalman filter are numerous:

- Tracking objects
- Fitting Bezier patches to (noisy, moving, ...) point data
- Economics
- Navigation
- Many computer vision applications
  - Stabilizing depth measurements
  - Feature tracking
  - Cluster tracking
  - Fusing data from radar, laser scanner and stereo cameras for depth and velocity measurement
- Many more

Advantages

1. Progressive method - no large matrices have to be inverted
2. Proper handling of system noise
3. Track finding and track fitting
4. Detection of outliers
5. Merging tracks from different segments

Assumptions

1. Linear system

System parameters are linear functions of the parameters at some previous time, and measurements are linear functions of the parameters.

2. White Gaussian noise

White: uncorrelated in time
Gaussian: the noise amplitude follows a Gaussian distribution

Discrete Kalman Filter Algorithm - Derivation

The Kalman filter estimates a process by using a form of feedback control: the filter estimates
the process state at some time and then obtains feedback in the form of (noisy) measurements. As such,

the equations for the Kalman filter fall into two groups: time update equations and measurement update
equations. The time update equations are responsible for projecting forward (in time) the current state and
error covariance estimates to obtain the a priori estimates for the next time step. The measurement update
equations are responsible for the feedback, i.e. for incorporating a new measurement into the a priori
estimate to obtain an improved a posteriori estimate.

The time update equations can also be thought of as predictor equations, while the measurement
update equations can be thought of as corrector equations. Indeed the final estimation algorithm
resembles that of a predictor-corrector algorithm for solving numerical problems as shown in Figure 1.

Figure 1. Discrete Kalman filter cycle.

The time update projects the current state estimate ahead in time. The measurement update adjusts
the projected estimate by an actual measurement at that time. The specific equations for the time and
measurement updates are presented in table 1 and table 2.

Table 1. Discrete Kalman filter time update (predict) equations:

x̂_k⁻ = A x̂_{k−1} + B u_{k−1}
P_k⁻ = A P_{k−1} Aᵀ + Q

Table 2. Discrete Kalman filter measurement update (correct) equations:

K_k = P_k⁻ Hᵀ (H P_k⁻ Hᵀ + R)⁻¹
x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻)
P_k = (I − K_k H) P_k⁻

The time update equations project the state and covariance estimates forward from time step k − 1
to step k. A and B are from the process equation

x_k = A x_{k−1} + B u_{k−1} + w_{k−1}    (6)

while Q is formed from

p(w) ~ N(0, Q)    (7)

The first task during the measurement update is to compute the Kalman gain, K_k. The next step is
to actually measure the process to obtain z_k, and then to generate an a posteriori state estimate by
incorporating the measurement as in equation (8), repeated here for completeness:

x̂_k = x̂_k⁻ + K_k (z_k − H x̂_k⁻)    (8)

The final step is to obtain an a posteriori error covariance estimate via equation (5). After each
time and measurement update pair, the process is repeated with the previous a posteriori estimates used to
project or predict the new a priori estimates. This recursive nature is one of the very appealing features of
the Kalman filter: it makes practical implementations much more feasible than (for example) an
implementation of a Wiener filter, which is designed to operate on all of the data directly for each
estimate. The Kalman filter instead recursively conditions the current estimate on all of the past
measurements. Figure 2 offers a complete picture of the operation of the filter, combining the high-level
diagram of Figure 1 with the equations from table 1 and table 2.

Figure 2. A complete picture of the operation of the Kalman filter, combining the high-level
diagram of Figure 1 with the equations from table 1 and table 2.
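The predict-correct cycle can be sketched in a few lines of code. The example below is a minimal scalar filter (A = 1, B = 0, H = 1) estimating a constant from noisy measurements; the values of Q, R, the true value and the data are illustrative, not taken from the notes.

```python
import numpy as np

# Minimal scalar Kalman filter: estimate a constant from noisy measurements.
# Q and R are illustrative tuning values (see "Filter Parameters and Tuning").
def kalman_1d(measurements, Q=1e-5, R=0.01, x0=0.0, P0=1.0):
    x_hat, P = x0, P0
    estimates = []
    for z in measurements:
        # Time update (predict): project state and covariance forward (A = 1).
        x_minus = x_hat
        P_minus = P + Q
        # Measurement update (correct): gain, state correction, covariance update.
        K = P_minus / (P_minus + R)
        x_hat = x_minus + K * (z - x_minus)
        P = (1 - K) * P_minus
        estimates.append(x_hat)
    return estimates

rng = np.random.default_rng(1)
true_value = -0.377
zs = true_value + 0.1 * rng.standard_normal(200)
est = kalman_1d(zs)
print(est[-1])  # converges toward true_value
```

Note how each iteration uses only the previous a posteriori estimate and the new measurement, never the whole measurement history, which is exactly the recursive conditioning described above.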

Filter Parameters and Tuning

In the actual implementation of the filter, the measurement noise covariance R is usually measured
prior to operation of the filter. Measuring the measurement error covariance R is generally practical
(possible) because we need to be able to measure the process anyway (while operating the filter), so we
should generally be able to take some off-line sample measurements in order to determine the variance of
the measurement noise. The determination of the process noise covariance Q is generally more difficult,
as we typically do not have the ability to directly observe the process we are estimating.

Sometimes a relatively simple (poor) process model can produce acceptable results if one
injects enough uncertainty into the process via the selection of Q. Certainly in this case one would hope
that the process measurements are reliable. In either case, whether or not we have a rational basis for
choosing the parameters, superior filter performance (statistically speaking) can often be obtained
by tuning the filter parameters Q and R. The tuning is usually performed off-line, frequently with the help
of another (distinct) Kalman filter, in a process generally referred to as system identification.

Block diagram of Kalman Filter

Under conditions where Q and R are in fact constant, both the estimation error covariance P_k and
the Kalman gain K_k will stabilize quickly and then remain constant. If this is the case, these parameters
can be pre-computed by either running the filter off-line, or for example by determining the steady-state
value of P_k.

It is frequently the case, however, that the measurement error (in particular) does not remain
constant. For example, when sighting beacons in our optoelectronic tracker ceiling panels, the noise in
measurements of nearby beacons will be smaller than that in far-away beacons. Also, the process noise Q is
sometimes changed dynamically during filter operation, becoming Q_k, in order to adjust to different
dynamics. For example, in the case of tracking the head of a user of a 3D virtual environment we might
reduce the magnitude of Q_k if the user seems to be moving slowly, and increase the magnitude if the
dynamics start changing rapidly. In such cases Q_k might be chosen to account for both uncertainty
about the user's intentions and uncertainty in the model.

Linear Quadratic Gaussian (LQG)

Linear Quadratic Gaussian (LQG) control is a modern state-space technique for designing optimal
dynamic regulators. It enables you to trade off regulation performance against control effort, and to take into
account process and measurement noise. Like pole placement, LQG design requires a state-space model
of the plant. The LQG controller design methodology is based on the Kalman filter, named after R. E. Kalman, who in 1960
published his famous paper describing a recursive solution to the discrete-data linear filtering problem.

The LQG controller is simply the combination of a Kalman filter i.e. a linear-quadratic estimator
(LQE) with a linear-quadratic regulator (LQR). When we use the combination of an optimal estimator
(Kalman filter) and an optimal regulator (LQR) to design the controller, the compensator is called Linear
Quadratic Gaussian (LQG).

This regulator has the state-space equations

dx̂/dt = [A − LC − (B − LD)K] x̂ + L yv
u = −K x̂    (1)
Block diagram of LQG controller

The goal is to regulate the output y around zero. The plant is subject to disturbances w and is driven
by the controls u. The regulator relies on the noisy measurements yv = y + v to generate these controls. The
plant state and measurement equations are of the form

ẋ = Ax + Bu + Gw

yv = Cx + Du + Hw + v    (2)

and both w and v are modeled as white noise. The LQG regulator consists of an optimal state-feedback
gain and a Kalman state estimator. These two components can be designed independently.

1. Optimal State-Feedback Gain

In LQG control, the regulation performance is measured by a quadratic performance criterion of the form

J(u) = ∫₀^∞ { xᵀQx + 2xᵀNu + uᵀRu } dt    (3)

The weighting matrices Q, N and R are user specified and define the trade-off between regulation
performance (how fast x goes to zero) and control effort. The first design step seeks a state-feedback law
u = −Kx that minimizes this cost function. The gain K is called the LQ-optimal gain.
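As an illustration (the plant, Q and R below are hypothetical, with N = 0 assumed), the LQ-optimal gain can be computed by solving the continuous-time algebraic Riccati equation and forming K = R⁻¹BᵀP:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double-integrator plant: x_dot = A x + B u.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting
R = np.array([[1.0]])      # control weighting (cross term N = 0 assumed)

# Solve the Riccati equation, then form the LQ-optimal gain K = R^{-1} B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed loop A - B K should have all eigenvalues in the left half plane.
eigs = np.linalg.eigvals(A - B @ K)
print(K, eigs.real)
```

The stability of A − BK is the basic sanity check on the design: the feedback u = −Kx must drive the state to zero for the cost (3) to be finite.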

2. Kalman State Estimator

As for pole placement, the LQ-optimal state feedback u = −Kx is not implementable without full state
measurement. However, we can derive a state estimate x̂ such that u = −Kx̂ remains optimal for the
output-feedback problem. This state estimate is generated by the Kalman filter

dx̂/dt = Ax̂ + Bu + L(yv − Cx̂ − Du)    (4)

with inputs u (controls) and yv (measurements). The noise covariance data,

E(wwᵀ) = Qn,  E(vvᵀ) = Rn,  E(wvᵀ) = Nn    (5)

determine the Kalman gain L through an algebraic Riccati equation. The Kalman filter is an
optimal estimator when dealing with Gaussian white noise. Specifically, it minimizes the asymptotic
covariance of the estimation error x − x̂:

lim_{t→∞} E((x − x̂)(x − x̂)ᵀ)    (6)

The goal is to regulate the plant output y around zero. The input disturbance d is low frequency,
with power spectral density (PSD) concentrated below 10 rad/sec. For LQG design purposes, it is
modeled as white noise driving a low-pass filter with a cutoff at 10 rad/sec.

There is some measurement noise n, with noise intensity given by

E(n²) = 0.01    (7)

Use the cost function

J(u) = ∫₀^∞ (10 y² + u²) dt    (8)

to specify the trade-off between regulation performance and cost of control.

Drawbacks of LQG

LQG robustness is not guaranteed. It is thus important to look at the stability radius and
spectral value sets of the closed-loop design. This is related to the interaction of the transients of the true
state, the controller action and the observer.
