\dot{x} = Ax + Bu, \qquad y = Cx + Du
4.2

\frac{d}{dt}\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} =
\underbrace{\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix}}_{A}
\begin{bmatrix} z_1 \\ z_2 \\ z_3 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} u
© Perry Y. Li
What is the relationship between the coefficients a_i, i = 0, \ldots, n-1, and the eigenvalues of A?
Consider the characteristic equation of A (shown for n = 3):

\Delta(s) = \det(sI - A) = \det\begin{bmatrix} s & -1 & 0 \\ 0 & s & -1 \\ a_0 & a_1 & s + a_2 \end{bmatrix}
= \prod_{i=1}^{n}(s - \lambda_i) = s^n + a_{n-1}s^{n-1} + \ldots + a_1 s + a_0
Some properties of the characteristic polynomial that are useful for design:
- If the \lambda_i occur in conjugate pairs (i.e. complex poles appear as \sigma \pm j\omega), then a_0, a_1, \ldots, a_{n-1} are real numbers; and vice versa.
- Sum of eigenvalues: -a_{n-1} = \sum_{i=1}^{n} \lambda_i
- Product of eigenvalues: a_0 = (-1)^n \prod_{i=1}^{n} \lambda_i
- If \lambda_1, \ldots, \lambda_n all have negative real parts, then a_i > 0 for all i = 0, \ldots, n-1.
- If any of the polynomial coefficients is non-positive (negative or zero), then one or more of the roots have nonnegative real parts.
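These coefficient/eigenvalue relationships are easy to spot-check numerically. A minimal sketch for n = 3 (the coefficients are illustrative, not from the notes):

```python
import numpy as np

# Illustrative coefficients of s^3 + a2 s^2 + a1 s + a0 = (s+1)(s+2)(s+3)
a0, a1, a2 = 6.0, 11.0, 6.0
A = np.array([[0.0,  1.0,  0.0],
              [0.0,  0.0,  1.0],
              [-a0, -a1, -a2]])        # controller canonical form

lam = np.linalg.eigvals(A)

# Sum of eigenvalues: -a_{n-1} = sum(lambda_i)
assert np.isclose(lam.sum().real, -a2)
# Product of eigenvalues: a0 = (-1)^n prod(lambda_i)
assert np.isclose(((-1.0) ** 3) * np.prod(lam).real, a0)
# All eigenvalues in the open LHP, and indeed all coefficients positive
assert (lam.real < 0).all() and min(a0, a1, a2) > 0
```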
with

A_c = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -(a_0 + k_0) & -(a_1 + k_1) & -(a_2 + k_2) \end{bmatrix}
4.3
If we can convert a system into controller canonical form via an invertible transformation T \in \mathbb{R}^{n \times n}:

z = T^{-1}x; \qquad A_z = T^{-1}AT, \qquad B_z = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} = T^{-1}B
where \dot{z} = A_z z + B_z u is in controller canonical form:

A_z = \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & \cdots & & 0 & 1 \\
-a_0 & -a_1 & \cdots & -a_{n-2} & -a_{n-1}
\end{bmatrix}, \qquad
B_z = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}
The control law:

u = -K_z T^{-1} x + v = -Kx + v

where K = K_z T^{-1} places the poles at the desired locations.
Theorem For the single input LTI system \dot{x} = Ax + Bu, there is an invertible transformation
T that converts the system into controller canonical form if and only if the system is controllable.
Proof:
Only if: If the system is not controllable, then using Kalman decomposition, there are modes
that are not affected by control. Thus, eigenvalues associated with those modes cannot be
changed. This means that we cannot transform the system into controller canonical form,
since otherwise, we can arbitrarily place the eigenvalues.
If: Let us construct T. Take n = 3 as an example, and let T be:

T = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}
A = T\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix}T^{-1}; \qquad
B = T\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}

i.e.,

AT = T\begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -a_0 & -a_1 & -a_2 \end{bmatrix} \qquad (4.1)

and B = T\,[0\ 0\ 1]^T gives v_3 = B.
Now consider each column of (4.1) at a time, starting from the last. The third column says:

A v_3 = v_2 - a_2 v_3 \implies v_2 = Av_3 + a_2 v_3 = AB + a_2 B

Having found v_2, we can find v_1 from the 2nd column of (4.1). This says:

A v_2 = v_1 - a_1 v_3 \implies v_1 = Av_2 + a_1 v_3 = A^2B + a_2 AB + a_1 B

Now we check that the first column of (4.1) is consistent with the v_1, v_2 and v_3 we have found. It says:

A v_1 + a_0 v_3 = 0.

Is this true? The LHS is:

A v_1 + a_0 v_3 = A^3B + a_2 A^2B + a_1 AB + a_0 B = (A^3 + a_2 A^2 + a_1 A + a_0 I)B

Since \Delta(s) = s^3 + a_2 s^2 + a_1 s + a_0 is the characteristic polynomial of A, by the Cayley-Hamilton Theorem \Delta(A) = 0, so A^3 + a_2 A^2 + a_1 A + a_0 I = 0. Hence, A v_1 + a_0 v_3 = 0.
Thus,

T = \begin{bmatrix} v_1 & v_2 & v_3 \end{bmatrix}
= \begin{bmatrix} B & AB & A^2B \end{bmatrix}
\begin{bmatrix} a_1 & a_2 & 1 \\ a_2 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}

K = K_z T^{-1}.
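The construction of T can be sketched numerically for n = 3. The A and B below are illustrative (chosen controllable), not from the notes; the a_i are read off from A's characteristic polynomial:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])
B = np.array([1.0, 0.0, 0.0])

a2, a1, a0 = np.poly(A)[1:]            # np.poly(A) = [1, a2, a1, a0]
M = np.array([[a1, a2, 1.0],
              [a2, 1.0, 0.0],
              [1.0, 0.0, 0.0]])
T = np.column_stack([B, A @ B, A @ A @ B]) @ M   # T = [v1 v2 v3]

Az = np.linalg.solve(T, A @ T)         # T^{-1} A T
Bz = np.linalg.solve(T, B)             # T^{-1} B
# Az should be in controller canonical form, Bz = [0 0 1]^T
assert np.allclose(Az[0], [0.0, 1.0, 0.0], atol=1e-8)
assert np.allclose(Az[1], [0.0, 0.0, 1.0], atol=1e-8)
assert np.allclose(Az[2], [-a0, -a1, -a2], atol=1e-8)
assert np.allclose(Bz, [0.0, 0.0, 1.0], atol=1e-8)
```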
Example:

\dot{x} = \begin{bmatrix} 1 & 2 & 2 \\ 1 & 2 & 4 \\ 5 & -1 & -3 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u, \qquad
y = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} x, \qquad
x(0) = \begin{bmatrix} 5 \\ 2 \\ 1 \end{bmatrix}
The open loop system is unstable, as A has eigenvalues 4.4641, -2.4641 and -2. The system response to a unit step input is shown in Fig. 4.1.

Characteristic equation of A:

\Delta_A(s) = s^3 - 15s - 22 = 0

Desired closed loop pole locations: (-2, -3, -4). Closed loop characteristic equation:

\Delta_{A_c}(s) = (s+2)(s+3)(s+4) = s^3 + 9s^2 + 26s + 24
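This example can be checked numerically. The sign pattern of A here is reconstructed from the printed data (treat it as an assumption), and Ackermann's formula is used in place of the explicit T computation:

```python
import numpy as np

A = np.array([[1.0,  2.0,  2.0],
              [1.0,  2.0,  4.0],
              [5.0, -1.0, -3.0]])
B = np.array([1.0, 0.0, 0.0])

# Open loop characteristic polynomial s^3 - 15s - 22
assert np.allclose(np.poly(A), [1.0, 0.0, -15.0, -22.0])

# Ackermann: K = [0 0 1] Ctrb^{-1} Delta_c(A), Delta_c(s) = (s+2)(s+3)(s+4)
Ctrb = np.column_stack([B, A @ B, A @ A @ B])
I = np.eye(3)
Dc = (A + 2*I) @ (A + 3*I) @ (A + 4*I)
K = np.linalg.solve(Ctrb.T, np.array([0.0, 0.0, 1.0])) @ Dc

poles = np.linalg.eigvals(A - np.outer(B, K))
assert np.allclose(np.sort(poles.real), [-4.0, -3.0, -2.0], atol=1e-6)
assert np.allclose(poles.imag, 0.0, atol=1e-6)
```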
Figure 4.1: States x_1, x_2, x_3 (left) and output (right) of the open loop system for a unit step input.
The transformation matrix is

T = \begin{bmatrix} -2 & 1 & 1 \\ 23 & 1 & 0 \\ -11 & 5 & 0 \end{bmatrix}, \qquad
v_1 = \begin{bmatrix} -2 \\ 23 \\ -11 \end{bmatrix}
Feedback control:

u = -Kx + v

where v is an exogenous input. In our example, v = 1 is the reference input, and the feedback law is implemented as:

u = -Kx + gv

where g is a gain selected for reference tracking; here it is selected as 11.98 to compensate for the DC gain. As seen from Fig. 4.2, the output y reaches the target value of 1.
Figure 4.2: States and output of the closed loop system, together with the corresponding control input.
4.4

We now consider the multiple input case, with B \in \mathbb{R}^{n \times m}, m > 1.
Figure: Two-loop feedback structure, with an inner loop (state feedback gain F) and an outer loop (gain K) around the plant.
\dot{x} = (A - BF)x + Bgv
For example, with

A = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix}; \qquad
B = \begin{bmatrix} 1 & 3 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}

we cannot find a g \in \mathbb{R}^m such that (A, Bg) is controllable. We can see this by applying the PBH test with \lambda = 2. The reason for this is that A has repeated eigenvalues at \lambda = 2 with more than one independent eigenvector. The same situation applies if A is semi-simple with repeated eigenvalues.
Figure 4.4: States of the nominal system without control, and states and output with open loop step inputs u_1 = 1 and u_2 = 1.
What the Hautus-Keymann Theorem says is that it is possible after a preliminary state feedback using a matrix F. In fact, generally, most F matrices will make \bar{A} = A - BF have distinct eigenvalues. This makes it possible to avoid the parasitic situation mentioned above, so that one can find g \in \mathbb{R}^m such that (\bar{A}, Bg) is controllable.
Generally, eigenvalue assignment for a multiple input system is not unique; there is also some freedom in choosing the eigenvectors. However, for the purpose of this class, we shall use the optimal control technique to resolve the issue of choosing an appropriate feedback gain K in u = -Kx + v. The idea is that K will be picked based on some performance criteria, not just to place the eigenvalues exactly at some a-priori determined locations.
Remark: The LQ method can be used for approximate pole assignment (see later chapter).
Example:

A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 3 & 1 \\ 0 & 1 & 1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 0 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix}
We see from Fig. 4.4 that the nominal system is unstable, and that it cannot be stabilized by open loop step inputs. Let B = [b_1\ b_2]; then we see that (A, b_1) is not controllable. We choose
Figure: States, output, and control inputs u_1, u_2 after implementing the two-loop state feedback.
g = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad
b_1 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = Bg
For the inner loop,

u = -Fx + gv

and for the outer loop,

v = -Kx + g_1 v_1

where g_1 is a parameter selected to account for the DC gain. The target closed loop pole locations are (-3, -2, -1). As usual, \bar{A} = A - BF is transformed into controller canonical form, the T matrix is computed, and the gain vector K_z is computed as

K_z = \begin{bmatrix} 7 & 16 & 9 \end{bmatrix}

which is transformed into the original system coordinates:

K = K_z T^{-1} = \begin{bmatrix} 0 & 9 & 2 \end{bmatrix}
4.5
The pole placement technique is appropriate only for linear time invariant systems. How about linear time varying systems, such as those obtained by linearizing a nonlinear system about a trajectory?
- Use least norm control
- Make use of uniform controllability
Consider the modified controllability (to zero) grammian function:

H_\alpha(t_0, t_1) := \int_{t_0}^{t_1} \Phi(t_0,\tau)B(\tau)B^T(\tau)\Phi^T(t_0,\tau)\, e^{-4\alpha(\tau - t_0)}\, d\tau.

Note that for \alpha = 0, H_\alpha(t_0, t_1) = W_{c,[t_0,t_1]}, the controllability (to zero) grammian.
Theorem If A(\cdot) and B(\cdot) are piecewise continuous and if there exist T > 0, h_M \geq h_m > 0 s.t. for all t \geq 0,

0 < h_m I \leq H_0(t, t+T) \leq h_M I

then for any \alpha > 0, the linear state feedback

u(t) = -F(t)x(t) = -B^T(t)H_\alpha(t, t+T)^{-1}x(t)

will result in a closed loop system

\dot{x} = (A(t) - B(t)F(t))x(t)

such that for all x(t_0), and all t_0,

\|x(t)e^{\alpha t}\| \to 0
Example: Consider a subsystem having vibrational modes, which need to be damped out:

\dot{x} = A(t)x + B(t)u, \qquad y = Cx

A(t) = \begin{bmatrix} 0 & \Omega(t) \\ -\Omega(t) & 0 \end{bmatrix}, \qquad
C = \begin{bmatrix} 1 & 0 \end{bmatrix}, \qquad
B(t) = \begin{bmatrix} 0 \\ 2 + 0.5\sin(0.5t) \end{bmatrix}

The state transition matrix is a rotation through the angle \theta(t) = \int_0^t \Omega(\tau)\,d\tau. The feedback uses

H_\alpha(t, t+4) = \int_t^{t+4} \Phi(t,\tau)B(\tau)B^T(\tau)\Phi^T(t,\tau)\, e^{-4\alpha(\tau - t)}\, d\tau

where T = 4 has been chosen so that it is a multiple of the fundamental period of the sinusoidal component of B(t).
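A minimal numerical sketch of this control law, simplified by taking \Omega constant and B constant (so \Phi is a fixed rotation and H_\alpha(t, t+T) is the same matrix for every t); all numbers are illustrative, not the ones used in the notes:

```python
import numpy as np

Omega, alpha, T = 2.0, 0.2, 4.0
A = np.array([[0.0, Omega], [-Omega, 0.0]])
B = np.array([0.0, 1.0])

def Phi(h):
    # state transition matrix exp(A h) of the rotation system
    c, s = np.cos(Omega * h), np.sin(Omega * h)
    return np.array([[c, s], [-s, c]])

# H_alpha(t, t+T) by a simple Riemann sum over h = tau - t
hs = np.linspace(0.0, T, 2001)
dh = hs[1] - hs[0]
H = sum(np.outer(Phi(-h) @ B, Phi(-h) @ B) * np.exp(-4 * alpha * h)
        for h in hs) * dh

F = np.linalg.solve(H, B)          # F^T = B^T H^{-1} (H is symmetric)

# Euler simulation of xdot = (A - B F^T) x over 20 seconds
x = np.array([1.0, 0.0])
dt = 1e-3
for _ in range(20000):
    u = -F @ x
    x = x + dt * (A @ x + B * u)

# the state decays at least as fast as e^{-alpha t}, per the theorem
assert np.linalg.norm(x) < np.exp(-alpha * 20.0)
```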
Figure 4.7: Nominal system state (left) and output (right) with open loop control
Figure 4.8: Closed loop system state (left) and output (right) for \alpha = 0.2
Figure 4.9: Closed loop system state (left) and output (right) for \alpha = 1.8
Figure 4.10: Control input for \alpha = 0.2 (left) and \alpha = 1.8 (right)
Figure: States x_1, x_2 (left) and the corresponding exponentially weighted terms (right).
The least norm solution that steers a state x(t_0) to x(t_1) = 0 with respect to the cost function

J_{[t_0,t_1]} = \int_{t_0}^{t_1} u^T(\tau)u(\tau)\exp(4\alpha(\tau - t_0))\, d\tau

is:

u(\tau) = -e^{-4\alpha(\tau - t_0)}B^T(\tau)\Phi^T(t_0,\tau)H_\alpha(t_0, t_1)^{-1}x(t_0),

and when evaluated at \tau = t_0,

u(t_0) = -B^T(t_0)H_\alpha(t_0, t_1)^{-1}x(t_0).

Thus, the proposed control law is the least norm control evaluated at \tau = t_0. By relating this to a moving horizon [t_0, t_1] = [t, t+T], where t continuously increases, the proposed control law is the moving horizon version of the least norm control. This avoids the difficulty of receding horizon control, where the control gain can become infinite as t \to t_f.
The proof is based on a Lyapunov analysis typical of nonlinear control, and can be found in [Desoer and Callier, 1990, p. 231].
4.6 Observer Design
\dot{x} = Ax + Bu, \qquad y = Cx

The observer problem is: given y(t) and u(t), can we determine the state x(t)?

Open loop observer

Suppose that A has all its eigenvalues in the LHP (stable). Then an open loop observer is a simulation:

\dot{\hat{x}} = A\hat{x} + Bu

The observer error dynamics for e = x - \hat{x} are:

\dot{e} = Ae

Since A is stable, e \to 0 exponentially.
The problem with open loop observers is that they do not make use of the output y(t); moreover, they will not work in the presence of disturbances or if A is unstable.
4.7 Luenberger Observer

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})

This looks like the open loop observer except for the last term. Notice that y - C\hat{x} is the output prediction error, also known as the innovation. L is the observer gain.
Let us analyse the error dynamics e = x - \hat{x}. Subtracting the observer dynamics from the plant dynamics, and using the fact that y = Cx,

\dot{e} = Ae - LC(x - \hat{x}) = (A - LC)e.

If A - LC is stable (has all its eigenvalues in the LHP), then e \to 0 exponentially.
Design of the observer gain L:
We use the eigenvalue assignment technique to choose L, i.e. choose L so that the eigenvalues of A - LC are at the desired locations p_1, p_2, \ldots, p_n.
Fact: Let F \in \mathbb{R}^{n \times n}. Then \det(F) = \det(F^T). Therefore,

\det(\lambda I - F) = \det(\lambda I - F^T).

Hence, F and F^T have the same eigenvalues. So choosing L to assign the eigenvalues of A - LC is the same as choosing L to assign the eigenvalues of

(A - LC)^T = A^T - C^T L^T

We know how to do this, since this is the state feedback problem for:

\dot{x} = A^T x + C^T u, \qquad u = v - L^T x.
The condition under which the eigenvalues can be placed arbitrarily is that (A^T, C^T) is controllable. However, from the PBH test, it is clear that:

\text{rank}\begin{bmatrix} \lambda I - A^T & C^T \end{bmatrix}
= \text{rank}\begin{bmatrix} \lambda I - A \\ C \end{bmatrix}

The LHS is the controllability test for (A^T, C^T), and the RHS is the observability test for (A, C). Thus, the observer eigenvalues can be placed arbitrarily iff (A, C) is observable.
Example: Consider a second-order system

\ddot{x} + 2\zeta\omega_n\dot{x} + \omega_n^2 x = u

where u = 3 + 0.5\sin(0.75t) is the control input, and \zeta = 1, \omega_n = 1 rad/s. The state space representation of the system is

\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \qquad
y = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}

The observer is represented as:

\frac{d}{dt}\hat{x}
= \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}\hat{x}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix}u + L(y - C\hat{x})

The observer gain L is computed by placing the observer poles at (-0.25, -0.5):

L = \begin{bmatrix} 0.875 \\ -1.25 \end{bmatrix}

See Fig. 4.13 for the plots of the observer states following and catching up with the actual system states.
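The duality-based computation of L can be checked for this example by applying Ackermann's formula to the pair (A^T, C^T) (the sign of the second entry of L is reconstructed; the check below confirms it):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
C = np.array([0.0, 1.0])

At, Bt = A.T, C                       # dual system: its "B" is C^T
Ctrb = np.column_stack([Bt, At @ Bt])
I = np.eye(2)
Dc = (At + 0.25 * I) @ (At + 0.5 * I)  # desired (s + 0.25)(s + 0.5)
# Ackermann on the dual system gives L^T
L = np.linalg.solve(Ctrb.T, np.array([0.0, 1.0])) @ Dc

assert np.allclose(L, [0.875, -1.25])
obs_poles = np.linalg.eigvals(A - np.outer(L, C))
assert np.allclose(np.sort(obs_poles.real), [-0.5, -0.25], atol=1e-9)
```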
Figure 4.13: Actual system and observer position (left) and velocity (right).
4.8 Properties of observer
Any control law of the form

U(s) = -K\hat{X}(s) - Q(s)\nu(s)

where A - BK is stable, and Q(s) is any stable transfer function (i.e. Q(s) itself does not have any unstable poles), is necessarily stabilizing. In fact, any stabilizing controller is of this form! (see Goodwin et al. Section 18.6 for a proof of necessity)
Transfer function of the observer:

\hat{X}(s) = T_1(s)U(s) + T_2(s)Y(s)

where

T_1(s) := (sI - A + LC)^{-1}B
T_2(s) := (sI - A + LC)^{-1}L

Both T_1(s) and T_2(s) have the same denominator, which is the characteristic polynomial of the observer dynamics, \Delta_{obs}(s) = \det(sI - A + LC).
With Y(s) given by G_0(s)U(s), where G_0(s) = C(sI - A)^{-1}B is the open loop plant model,

\hat{X}(s) = [T_1(s) + T_2(s)G_0(s)]U(s) = (sI - A)^{-1}BU(s)

i.e. the same as the open loop transfer function from u(t) to x(t). In this sense, the observer is unbiased.
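The unbiasedness identity T_1(s) + T_2(s)G_0(s) = (sI - A)^{-1}B holds for any observer gain; a numerical spot-check at one frequency (the L below is an arbitrary illustrative gain):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])
L = np.array([3.0, 2.0])              # illustrative observer gain

s = 1.0 + 2.0j                        # one test frequency
I = np.eye(2)
Mobs = s * I - A + np.outer(L, C)     # sI - A + LC
T1 = np.linalg.solve(Mobs, B)
T2 = np.linalg.solve(Mobs, L)
G0 = C @ np.linalg.solve(s * I - A, B)

# T1 + T2 G0 equals the open loop transfer function (sI - A)^{-1} B
assert np.allclose(T1 + T2 * G0, np.linalg.solve(s * I - A, B))
```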
Where should the observer poles be?
Theoretically, the observer error will decrease faster if the eigenvalues of A - LC are further to the left (more negative). However, effects of measurement noise can be filtered out if the eigenvalues are slower. A rule of thumb is that if the noise bandwidth is B rad/s, the fastest eigenvalue should be greater than -B (i.e. slower than the noise band) (Fig. 4.14). This way, the observer acts as a filter. If the observer states are used for state feedback, then the slowest eigenvalues of A - LC should be faster than the eigenvalues of the state feedback system A - BK.
4.9

For a system

\dot{x} = Ax + Bu \qquad (4.2)
y = Cx

the observer structure is

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x}) \qquad (4.3)

where y(t) - C\hat{x}(t) =: \nu(t) is called the innovation, which will be discussed later. Define the observer error

e(t) = \hat{x}(t) - x(t)
Stacking up (4.2) and (4.3) in matrix form,

\frac{d}{dt}\begin{bmatrix} x \\ \hat{x} \end{bmatrix}
= \begin{bmatrix} A & 0 \\ LC & A - LC \end{bmatrix}\begin{bmatrix} x \\ \hat{x} \end{bmatrix}
+ \begin{bmatrix} B \\ B \end{bmatrix}u

Transforming the coordinates from \begin{bmatrix} x \\ \hat{x} \end{bmatrix} to \begin{bmatrix} e \\ \hat{x} \end{bmatrix},

\frac{d}{dt}\begin{bmatrix} e \\ \hat{x} \end{bmatrix}
= \begin{bmatrix} A - LC & 0 \\ -LC & A \end{bmatrix}\begin{bmatrix} e \\ \hat{x} \end{bmatrix}
+ \begin{bmatrix} 0 \\ B \end{bmatrix}u

As seen from Fig. 4.15, there is no effect of the control u on the error e, and if the gain vector L is designed (by the separation principle, see below) such that the roots of the characteristic polynomial

\Delta_{A-LC}(s) = \det(sI - A + LC)

are in the LHP, then the error tends asymptotically to zero. This means the observer has estimated the system states to some acceptable level of accuracy. Now, the gain vector K can be designed to implement feedback using the observer state:

u = v - K\hat{x}
Figure 4.15: Block diagram of the transformed system: the plant \dot{x} = Ax + Bu, the decoupled error dynamics \dot{e} = (A - LC)e, and the observer \dot{\hat{x}} = A\hat{x} + Bu - LCe.
4.9.1 Innovation Feedback
As seen earlier, the innovation is the error in output estimation, defined by

\nu(t) := y - C\hat{x} \qquad (4.4)

To see how the innovation process is used in observer state feedback, let us first look at the transfer function form of the state feedback law. From (4.3), we can write

\dot{\hat{x}} = (A - LC)\hat{x} + Bu + Ly

Taking the Laplace transform, we get

\hat{X}(s) = (sI - A + LC)^{-1}BU(s) + (sI - A + LC)^{-1}LY(s)
= \frac{\text{adj}(sI - A + LC)B}{\det(sI - A + LC)}U(s) + \frac{\text{adj}(sI - A + LC)L}{\det(sI - A + LC)}Y(s)
= T_1(s)U(s) + T_2(s)Y(s)

The state feedback law is

u(t) = -K\hat{x}(t) + v(t) \qquad (4.5)
Taking the Laplace transform,

U(s) = -K\hat{X}(s) + V(s) = -K(T_1(s)U(s) + T_2(s)Y(s)) + V(s)

(1 + KT_1(s))U(s) = -KT_2(s)Y(s) + V(s)

\frac{L(s)}{E(s)}U(s) = -\frac{P(s)}{E(s)}Y(s) + V(s)

where

E(s) = \det(sI - A + LC)
L(s) = \det(sI - A + LC + BK)
P(s) = K\,\text{adj}(sI - A)L

The expression for P(s) has been simplified using a matrix inversion lemma; see Goodwin, p. 522.
The nominal plant transfer function is

G_0(s) = \frac{B_0(s)}{A_0(s)} = \frac{C\,\text{adj}(sI - A)B}{\det(sI - A)}

Using 1 - CT_2(s) = A_0(s)/E(s) and CT_1(s) = B_0(s)/E(s), the innovation is therefore

\nu(s) = \frac{A_0(s)}{E(s)}Y(s) - \frac{B_0(s)}{E(s)}U(s)
Figure 4.16: Observer state feedback with the innovation \nu fed back through Q(s).
Suppose now the innovation is also fed back through a stable filter Q_u(s), i.e. u = -K\hat{x} - Q_u(s)\nu + v. Then

\frac{L(s)}{E(s)}U(s) = V(s) - \frac{P(s)}{E(s)}Y(s) - Q_u(s)\nu(s), \qquad
\nu(s) = \frac{A_0(s)}{E(s)}Y(s) - \frac{B_0(s)}{E(s)}U(s)
The nominal sensitivity functions, which define the robustness and performance criteria, are modified affinely by Q_u(s):

S_0(s) = \frac{A_0(s)L(s)}{E(s)F(s)} - \frac{B_0(s)A_0(s)}{E(s)F(s)}Q_u(s)

T_0(s) = \frac{B_0(s)P(s)}{E(s)F(s)} + \frac{B_0(s)A_0(s)}{E(s)F(s)}Q_u(s)

where F(s) = \det(sI - A + BK).
S0 (s) =
For plants the are open loop stable with tolerable pole locations, we can set K 0 so that
F (s) = A0 (s)
L(s) = E(s)
P (s) = 0
so that
B0 (s)
E(s)
B0 (s)
T0 (s) = Qu (s)
E(s)
S0 (s) = 1 Qu (s)
0 (s)
In this case, it is common to use Q(s) := Qu (s) AE(s)
. Then
Thus the design of Q_u(s) (or Q(s)) can be used to directly influence the sensitivity functions.
Example: Consider a previous example:

\ddot{x} + 2\zeta\omega_n\dot{x} + \omega_n^2 x = u

where u = 3 + 0.5\sin(0.75t) is the control input, and \zeta = 1, \omega_n = 1 rad/s. The state space representation of the system is

\frac{d}{dt}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \qquad
y = \begin{bmatrix} 0 & 1 \end{bmatrix}\begin{bmatrix} x \\ \dot{x} \end{bmatrix}

The observer is represented as:

\frac{d}{dt}\hat{x}
= \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}\hat{x}
+ \begin{bmatrix} 0 \\ 1 \end{bmatrix}u + L(y - C\hat{x})

with the observer gain L computed by placing the observer poles at (-0.25, -0.5):

L = \begin{bmatrix} 0.875 \\ -1.25 \end{bmatrix}

Then

G_0(s) = \frac{B_0(s)}{A_0(s)} = \frac{1}{s^2 + 2s + 1}, \qquad
E(s) = \det(sI - A + LC) = s^2 + 0.75s + 0.125
T_0(s) = \frac{0.375s - 0.375}{s^4 + 4.375s^3 + 2.594s^2 + 0.2656s - 0.03125}
+ \frac{s^2 + 2s + 1}{s^4 + 4.375s^3 + 2.594s^2 + 0.2656s - 0.03125}\,Q_u(s)

S_0(s) = 1 - T_0(s)
Now, Qu (s) can be selected to reflect the robustness and performance requirements of the system.
4.10

Suppose the disturbance d(t) is generated by an exogenous linear system \dot{x}_d = A_d x_d, d = C_d x_d, so that

X_d(s) = \frac{\text{Adj}(sI - A_d)}{\det(sI - A_d)}x_d(0)

where x_d(0) is the initial value of x_d at t = 0. Thus, the disturbance generating polynomial is nothing but the characteristic polynomial of A_d,

\Delta_d(s) = \det(sI - A_d)
For example, if d(t) is a sinusoidal signal with frequency \Omega,

\begin{bmatrix} \dot{x}_{d1} \\ \dot{x}_{d2} \end{bmatrix}
= \begin{bmatrix} 0 & 1 \\ -\Omega^2 & 0 \end{bmatrix}\begin{bmatrix} x_{d1} \\ x_{d2} \end{bmatrix}, \qquad
d = x_{d1}

The characteristic polynomial, as expected, is:

\Delta_d(s) = \det(sI - A_d) = s^2 + \Omega^2
If we knew d(t), then an obvious control is:

u = -d + v - Kx

where K is the state feedback gain. However, d(t) is generally unknown. Thus, we estimate it using an observer. First, augment the plant model:

\begin{bmatrix} \dot{x} \\ \dot{x}_d \end{bmatrix}
= \begin{bmatrix} A & BC_d \\ 0 & A_d \end{bmatrix}\begin{bmatrix} x \\ x_d \end{bmatrix}
+ \begin{bmatrix} B \\ 0 \end{bmatrix}u, \qquad
y = \begin{bmatrix} C & 0 \end{bmatrix}\begin{bmatrix} x \\ x_d \end{bmatrix}

Notice that the augmented system is not controllable from u. Nevertheless, if d has an effect on y, it is observable from y.
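Both claims can be checked numerically. A sketch with an illustrative plant and a sinusoidal disturbance model at \Omega = 0.5:

```python
import numpy as np

Omega = 0.5
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([0.0, 1.0])
C = np.array([0.0, 1.0])
Ad = np.array([[0.0, 1.0], [-Omega**2, 0.0]])
Cd = np.array([1.0, 0.0])             # d = x_d1

Aa = np.block([[A, np.outer(B, Cd)],
               [np.zeros((2, 2)), Ad]])
Ba = np.concatenate([B, np.zeros(2)])
Ca = np.concatenate([C, np.zeros(2)])

ctrb = np.column_stack([np.linalg.matrix_power(Aa, k) @ Ba for k in range(4)])
obsv = np.vstack([Ca @ np.linalg.matrix_power(Aa, k) for k in range(4)])
assert np.linalg.matrix_rank(ctrb) < 4    # not controllable from u
assert np.linalg.matrix_rank(obsv) == 4   # but observable from y
```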
Thus, we can design an observer for the augmented system, and use the observer state for feedback:

\frac{d}{dt}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix}
= \begin{bmatrix} A & BC_d \\ 0 & A_d \end{bmatrix}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix}
+ \begin{bmatrix} B \\ 0 \end{bmatrix}u
+ \begin{bmatrix} L_1 \\ L_2 \end{bmatrix}\left(y - \begin{bmatrix} C & 0 \end{bmatrix}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix}\right)

u = -C_d\hat{x}_d + v - K\hat{x}
= v - \begin{bmatrix} K & C_d \end{bmatrix}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix}
where L = [L_1^T, L_2^T]^T is the observer gain. The controller can be simplified to:

\frac{d}{dt}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix}
= \begin{bmatrix} A - BK - L_1C & 0 \\ -L_2C & A_d \end{bmatrix}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix}
+ \begin{bmatrix} B & L_1 \\ 0 & L_2 \end{bmatrix}\begin{bmatrix} v \\ y \end{bmatrix}

u = -\begin{bmatrix} K & C_d \end{bmatrix}\begin{bmatrix} \hat{x} \\ \hat{x}_d \end{bmatrix} + v

The y(t) \to u(t) controller transfer function G_{yu}(s) has, as poles, the eigenvalues of A - BK - L_1C and of A_d. Since \Delta_d(s) = \det(sI - A_d) is the disturbance generating polynomial, the controller has \Delta_d(s) in its denominator.
This is exactly the Internal Model Principle.
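A quick check that the controller's dynamics matrix indeed contains the disturbance poles. The gains are taken from the example in these notes with signs as printed (an assumption, but the block-triangular structure, and hence the check, does not depend on them):

```python
import numpy as np

Omega = 0.5
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([0.0, 1.0]); C = np.array([0.0, 1.0])
Ad = np.array([[0.0, 1.0], [-Omega**2, 0.0]])
K = np.array([5.0, 3.0])
L1 = np.array([2.2, 0.44]); L2 = np.array([2.498, 0.746])

Actrl = np.block([[A - np.outer(B, K) - np.outer(L1, C), np.zeros((2, 2))],
                  [-np.outer(L2, C), Ad]])
eigs = np.linalg.eigvals(Actrl)
# block triangular: the controller poles include eig(Ad) = +/- j Omega
assert any(np.isclose(e, 1j * Omega) for e in eigs)
assert any(np.isclose(e, -1j * Omega) for e in eigs)
```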
Example: For the system

\dot{x} = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix}x + \begin{bmatrix} 0 \\ 1 \end{bmatrix}u, \qquad
y = \begin{bmatrix} 0 & 1 \end{bmatrix}x

the disturbance is a sinusoidal signal d(t) = \sin(0.5t). The reference input is a constant step v(t) = 3. The gains were designed to be:

K = \begin{bmatrix} 5 & 3 \end{bmatrix}, \qquad
\begin{bmatrix} L_1^T & L_2^T \end{bmatrix}^T = \begin{bmatrix} 2.2 & 0.44 & 2.498 & 0.746 \end{bmatrix}^T

with observer poles at (-0.6, -0.5, -1.7, -1.4). Figures 4.17 - 4.19 show the simulation results.
For a reference signal v(t) = 3 + 0.5\sin 0.75t, the results are shown in Figures 4.20 - 4.22. In this case,

K = \begin{bmatrix} 0.75 & 2 \end{bmatrix}
Figure 4.17: States of the open loop system (left) and with observer state feedback (right)
Figure 4.18: Output of the open loop system (left) and with observer state feedback (right)
Figure 4.19: Control input (left) and disturbance signals (right): the closed loop control cancels out the oscillations of the disturbance, and the disturbance estimate tracks the actual disturbance signal.
Figure 4.20: States of the open loop system (left) and with observer state feedback (right)
Figure 4.21: Output of the open loop system (left) and with observer state feedback (right)
Figure 4.22: Control input (left) and disturbance signals (right).
An alternative is to augment the controller itself with a filter state x_a driven by the output, and use the feedback

u = -\begin{bmatrix} K_o & K_a \end{bmatrix}\begin{bmatrix} \hat{x} \\ x_a \end{bmatrix}

where \hat{x} is the observer estimate of the original plant state:

\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x}).

Notice that x_a need not be estimated, since it is generated by the controller itself!
The transfer function of the controller is:

C(s) = \begin{bmatrix} K_o & K_a \end{bmatrix}
\begin{bmatrix} sI - A + BK_o + LC & BK_a \\ 0 & sI - A_d \end{bmatrix}^{-1}
\begin{bmatrix} L \\ C_d^T \end{bmatrix}

from which it is clear that its denominator has \Delta_d(s) = \det(sI - A_d) in it, i.e. the Internal Model Principle.
An intuitive way of understanding this approach:
For concreteness, assume that the disturbance d(t) is a sinusoid with frequency \Omega.
Suppose that the closed loop system is stable. This means that for any bounded input, all internal signals will also be bounded.
For the sake of contradiction, suppose some residual sinusoidal response in y(t) still remains:

Y(s) = \frac{\beta(s)}{s^2 + \Omega^2} + (\text{terms with stable poles})

Then the filter state x_a, which is driven by y through dynamics with poles at \pm j\Omega, would contain a repeated pole at s = \pm j\Omega and would grow without bound, contradicting stability. Hence y(t) cannot contain a residual sinusoid at frequency \Omega.
Figure 4.23: States of the open loop system (left) and with augmented state feedback (right)
The most usual case is to combat constant disturbances using integral control. In this case, the augmented state is:

x_a(t) = \int_0^t y(\tau)\, d\tau.

It is clear that if the output converges to some steady value, y(t) \to y_\infty, then y_\infty must be 0; otherwise x_a(t) would be unbounded.
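The integral-control case can be sketched end to end. Everything below is illustrative: a second-order plant with position output, an appended integrator state, pole locations chosen arbitrarily, and full state feedback (no observer) for brevity:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([0.0, 1.0])
C = np.array([1.0, 0.0])              # position output

# Augment with the integrator state xa: d/dt xa = y = C x
Aa = np.block([[A, np.zeros((2, 1))],
               [C.reshape(1, 2), np.zeros((1, 1))]])
Ba = np.concatenate([B, [0.0]])

# Place the three closed loop poles at -1, -1.5, -2 via Ackermann
Ctrb = np.column_stack([Ba, Aa @ Ba, Aa @ Aa @ Ba])
I = np.eye(3)
Dc = (Aa + 1.0 * I) @ (Aa + 1.5 * I) @ (Aa + 2.0 * I)
Kaug = np.linalg.solve(Ctrb.T, np.array([0.0, 0.0, 1.0])) @ Dc
Ko, Ka = Kaug[:2], Kaug[2]

# Euler simulation with an unknown constant input disturbance d = 1
x = np.zeros(2); xa = 0.0; dt = 1e-3
for _ in range(30000):                # 30 seconds
    u = -Ko @ x - Ka * xa
    x = x + dt * (A @ x + B * (u + 1.0))
    xa = xa + dt * (C @ x)

assert abs(C @ x) < 1e-3              # output regulated to zero
```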
Example: Consider the same example as before, but this time we do not use an observer to estimate the disturbance; instead, the dynamics are augmented to include a filter. The gains were selected as:

L = \begin{bmatrix} 5.7 & 2.3 \end{bmatrix}^T

with observer poles at (-3.5, -4.2), and

K_o = \begin{bmatrix} 3.02 & 2.18 \end{bmatrix}, \qquad
K_a = \begin{bmatrix} 0.5 & 0.76 \end{bmatrix}

with the closed loop poles at (-2.1, -1.2, -2.3, -0.6). The results are shown in Figures 4.23 - 4.25 for a constant reference input v(t) = 3. The filter state contains the frequency of the disturbance signal, and hence the feedback law contains that frequency; the control signal therefore cancels out the disturbance. This is seen from the output, which does not contain any sinusoidal component.
Figure 4.24: Output of the open loop system (left) and with augmented state feedback (right)
Figure 4.25: Control input with and without the augmented state feedback (left), and the disturbance signal together with the filter state (right).