Faculty EE-Math-CS
Department of Electrical Engineering
Control Engineering

Intelligent Control
Part 1 - MRAS
Author: prof.dr.ir. Job van Amerongen
102CE2004
March 2004
Model Reference Adaptive Control Systems

Contents

1  Model Reference Adaptive Control Systems
   Introduction
   The adaptation mechanism
   Sensitivity models
   The stability approach: the method of Liapunov
   The hyperstability method
   Identification and state estimation
   Practical problems, noise, non-linearities
   Noise
   Discrete MRAS
   Conclusions
   References
   Appendix

2  Indirect Adaptive Control, Self-tuning regulators

3  Learning control, Fuzzy control
INTRODUCTION
In these lecture notes various forms of model reference adaptive systems are discussed. We start with an intuitive approach, showing that basic feedback ideas help to find algorithms for parameter adjustment. We notice that two questions arise. The first is how to find proper signals that adjust the right parameter at the right moment. The second is how to guarantee stability for an adaptive system that is inherently non-linear due to the multipliers present in the system. More insight into the first question is obtained by considering the sensitivity model approach. Stability can be guaranteed by using Liapunov's stability theory for the design of the adaptive system.
LEARNING GOALS
Definition of
adaptive control
Introduction
There are various structures that may give a control system the possibility to
react to variations in its parameters or to changing characteristics of the
disturbances. A normal feedback system also has the objective of decreasing the sensitivity to these types of variations. However, when the variations are large,
even a well-designed constant-gain feedback system will not operate
satisfactorily. Then a more complex controller structure is required and certain
adaptive properties have to be introduced. An adaptive system may be defined
as follows:
An adaptive system is one in which in addition to the basic (feedback)
structure, explicit measures are taken to automatically compensate for
variations in the operating conditions, for variations in the process dynamics
or for variations in the disturbances, in order to maintain an optimal
performance of the system.
Many other definitions have been given in the literature; most of them only
describe a typical class of adaptive systems.
The definition given here assumes as a base an ordinary feedback structure for
the primary reaction to disturbances and parameter variations. On a secondary
level an adaptation mechanism tunes the gains of the primary controller, changes
its structure, and generates additional signals and so on. In such an adaptive
system the settings which are adjustable by the user are at the secondary level.
Gain scheduling
Mode switching
Direct and
indirect
adaptive control
C = \int_0^T e^2 \, dt   (1)

where

e = y_m - y_p   (2)
(2)
Instead of minimizing only the error between the output signals of the process
and the reference model, all the state variables of the process and the reference
model can be taken into account. When the state variables of the process are
denoted as (xp) and those of the reference model as (xm), the error vector (e) can
be defined:
e = x_m - x_p   (3)
(3)
In that case the optimization problem can be translated into minimization of the
criterion:
C = \int_0^T e^T P e \, dt   (4)
FIGURE 1a and FIGURE 1b: Two MRAS configurations, each consisting of a controller, the process, a reference model and an adaptive controller: adaptation of the controller parameters and signal adaptation.
The following considerations may play a role in the choice between adaptation
of the parameters and signal adaptation. An important property of systems with
parameter adaptation is that such systems have a memory. As soon as the
parameters of the process have been adjusted to their correct values, and there
are no new changes, the adaptive loop is in fact not necessary anymore: process
and reference model show the same behavior. In general memory is not present
in systems with signal adaptation. Therefore, the adaptive loop remains
necessary in all cases, in order to generate the right input signal. Consequently,
signal adaptive systems must react faster to changing process dynamics than
systems with parameter adaptation because no knowledge from the past can be
used. In systems where the parameters constantly vary over a wide range this is
advantageous. However, in a stochastic environment, i.e. in systems with a lot of
noise, this may be a disadvantage. High gains in the adaptive loop may introduce
a lot of noise in the input signal of the process as well.
When the parameters of the process vary slowly or only now and then, systems
with parameter adaptation will give a better performance because of their
memory. There are also adaptive algorithms which combine the advantages of
both methods. In the following attention will mainly be focused on parameter
adaptive systems, although the combination of parameter and signal adaptation
will also be discussed.
Another way of looking at the system is the following. The standard feedback
control loop is considered as a fast reacting primary control system that has to
reject ordinary disturbances. Large variations in the parameters or large
disturbances are dealt with by the slower reacting secondary (adaptive) control
system (Figure 2).
FIGURE 2: Primary control system (controller and process, output y) and secondary control system (adaptive controller and reference model).
In the literature several methods have been described for designing adaptive
systems. But you can get more insight into a method by thinking about how to
find the algorithms yourself. This helps to really understand what is going on.
Therefore, for the time being we will postpone the mathematics and examine the
basic ideas of MRAS with a simple example. When we try to design an adaptive
controller for this simple system we will automatically encounter the problems
which require more theoretical background. Common properties of the various
design methods as well as their differences will become clear. In figure 3 a block
diagram is given of the system which will be used as an example throughout this
text.
FIGURE 3: Block diagram of the process with adjustable gains Ka and Kb and the reference model.

The reference model is described by

\frac{b_m}{s^2 + a_m s + 1} \quad\text{or}\quad \frac{K \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}   (5)

and the process with adjustable gains by

\frac{K_b + b_p}{s^2 + (a_p + K_a) s + 1}   (6)
The (linear) reference model has the same order as the process. The following
numerical values are chosen:
(7)
In this case only the DC-gain of the process and the reference model differ by a
factor of two. This can be seen in the step responses of this system (figure 4).
FIGURE 4: Step responses of reference model and process (Y_model, Y_process and Error; time 0-100 s).

Of course the controller with parameters Ka and Kb is not a real controller. In fact we assume at this stage that the process parameters can be adjusted directly.
K_b(t) = K_b(0) + \gamma \int_0^t e \, d\tau   (8)

With the adaptive gain \gamma the speed of adjustment can be set. The desired memory function is realized by means of the integration, which also guarantees that a constant difference between (K_b + b_p) and b_m converges to zero. This adaptive law with \gamma = 0.5 gives the results shown in figure 5.
FIGURE 5: Responses with adaptation of Kb (Y_model, Y_process, Error and the adaptation of Kb; time 0-100 s).

MIT rule
Although the result is impressive, it quickly becomes clear that there are still a
few problems left. When the input signal u is inverted the adjustment of Kb is
going in the wrong direction, because of the negative sign of e. An unstable
system is the result. However, the solution to this problem is simple. When the
sign of the input signal is taken into account, for instance by multiplying e and u,
the result of the parameter adjustment conforms again to figure 5. This yields the
adjustment law known as the MIT rule:
K_b(t) = K_b(0) + \gamma \int_0^t e u \, d\tau   (9)

or

\frac{dK_b}{dt} = \gamma e u   (10)

For the adjustment of K_a a similar law can be used:

K_a(t) = K_a(0) + \gamma \int_0^t e x_2 \, d\tau   (11)
FIGURE 6: Responses with adaptation of both Kb and Ka (Y_model, Y_process, Error; Kb and Ka; time 0-100 s).
FIGURE 7: Responses with higher adaptive gains: the adaptation of Kb and Ka becomes unstable (time 0-100 s).
The adaptive system, which was stable with low adaptive gains, becomes
unstable with higher adaptive gains. When the block diagram of this system is
examined (figure 8) it is clear that this stability problem cannot easily be solved,
due to the non-linearities which have been introduced into the system.
FIGURE 8: Block diagram of the adaptive system with adaptation of Ka (gain alpha) and Kb (gain beta) and reference model K\omega_n^2/(s^2 + 2\zeta\omega_n s + \omega_n^2).
1. Proper signals have to be found that adjust the right parameter at the right moment.
2. A stability problem exists when the adaptive gains are increased as a result of the desire to speed up the adaptation. This stability problem cannot easily be solved by linear analysis methods because adaptation makes the system non-linear.
These two problems clarify the origin of various methods for designing MRAS.
Two methods will be discussed in more detail in the following:
- the sensitivity approach. This method emphasizes the determination of the
dynamic speed of adaptation with the aid of sensitivity coefficients.
- the stability approach. This method emphasizes the stability problem. Due to
the non-linear character of an adaptive system it is essential that stability
theory of non-linear systems be used. It will be shown that, together with a
proof of stability, useful adaptive laws can be found.
Sensitivity models
In the sensitivity approach the first step is to translate the adaptation problem
into an optimization problem by introducing the criterion:
C = \frac{1}{2} \int_0^t e^2 \, d\tau   (12)
\dot{K}_i = -\lambda_i \frac{\partial C}{\partial K_i}   (13)

with

\frac{\partial C}{\partial K_i} = \int_0^t e \frac{\partial e}{\partial K_i} \, d\tau   (14)

\frac{dK_i}{dt} = -\lambda_i \int_0^t e \frac{\partial e}{\partial K_i} \, d\tau   (15)

or

\frac{dK_i}{dt} = -\lambda_i e \frac{\partial e}{\partial K_i}   (16)

With e = y_m - y_p it follows that

\frac{\partial e}{\partial K_i} = -\frac{\partial y_p}{\partial K_i}   (17)

and thus

\frac{dK_i}{dt} = \lambda_i e \frac{\partial y_p}{\partial K_i}   (18)
This algorithm is more or less similar to the algorithms of (10) and (11). The direction of adjustment and the amount of adjustment relative to other parameters is now determined by the error e and the sensitivity coefficient \partial y_p / \partial K_i. The latter can be determined by means of a sensitivity model. The sensitivity coefficient explicitly sees to it that adjustment of K_i only takes place when the error between process and reference model is sensitive to variations in this particular parameter.
Example

The process is described by

\ddot{y}_p + (a_p + K_a) \dot{y}_p + y_p = (K_b + b_p) u   (19)

With

K_v = a_p + K_a   (20)

this can be written as

\ddot{y}_p + K_v \dot{y}_p + y_p = (K_b + b_p) u   (21)
Differentiation of eq. (21) with respect to K_v yields

\frac{\partial \ddot{y}_p}{\partial K_v} + K_v \frac{\partial \dot{y}_p}{\partial K_v} + \frac{\partial y_p}{\partial K_v} = -\dot{y}_p   (22)

The differential equation (22) is identical to eq. (21), except for the input signal. Eq. (22) is called a sensitivity model. When an estimate is made for the value of K_v, e.g. by selecting it equal to the desired value a_m, the sensitivity coefficient \partial y_p / \partial K_v can be measured. From eq. (18) it follows that

Sensitivity model

\frac{dK_v}{dt} = \lambda e \frac{\partial y_p}{\partial K_v}   (23)
Assuming that the process parameter a_p varies slowly compared with the adjustable parameter K_a due to the adaptation, it follows from eq. (20) that

\frac{\partial y_p}{\partial K_v} = \frac{\partial y_p}{\partial K_a} \quad\text{and}\quad \frac{dK_v}{dt} \approx \frac{dK_a}{dt}   (24)
FIGURE 9: The sensitivity model approach: process with adjustable gains Ka and Kb, and a sensitivity model that generates the sensitivity coefficient \partial y_p / \partial K_a.
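The scheme of figure 9 can be sketched in simulation. Below is a minimal sketch, assuming a_p = 0.4 (so that K_a should converge to a_m - a_p = 1.0) and unit input gains; K_v in the sensitivity model is fixed at the desired value a_m = 1.4, as suggested above.

```python
import numpy as np

# Sketch of sensitivity-model adaptation of Ka (eqs. (22)-(24)).
# Assumed values: process y'' + (ap + Ka) y' + y = u with ap = 0.4,
# model y'' + 1.4 y' + y = u, so Ka should approach 1.4 - ap = 1.0.
lam, dt, T = 0.2, 0.01, 200.0
ap, Ka = 0.4, 0.0
xp = np.zeros(2); xm = np.zeros(2)
xs = np.zeros(2)   # sensitivity model state, xs[0] = dy_p/dKv
for k in range(int(T / dt)):
    t = k * dt
    u = 1.0 if (t % 20.0) < 10.0 else -1.0
    e = xm[0] - xp[0]
    Ka += lam * e * xs[0] * dt        # eq. (23): dKv/dt = lambda e dy_p/dKv
    xp += dt * np.array([xp[1], -xp[0] - (ap + Ka) * xp[1] + u])
    xm += dt * np.array([xm[1], -xm[0] - 1.4 * xm[1] + u])
    # sensitivity model, eq. (22), with Kv fixed at am = 1.4,
    # driven by -dy_p/dt:
    xs += dt * np.array([xs[1], -xs[0] - 1.4 * xs[1] - xp[1]])
print(round(Ka, 1))
```

Note that adaptation only takes place during transients, when the sensitivity coefficient is non-zero.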
EXERCISE

The stability approach: the method of Liapunov

The adaptive laws can also be found from a stability analysis when process and reference model are described in state space form. The process can be written as:
\dot{x}_p = \bar{A}_p x_p + \bar{B}_p u   (25)

with

\bar{A}_p = A_p + K_a   (26)

and

\bar{B}_p = B_p + K_b   (27)

The reference model is described by

\dot{x}_m = A_m x_m + B_m u   (28)

With the error vector

e = x_m - x_p   (29)

it follows that

\dot{e} = A_m e + \Delta A \, x_p + \Delta B \, u   (30)

with

\Delta A = A_m - \bar{A}_p   (31)

\Delta B = B_m - \bar{B}_p   (32)
Adjustment of \Delta A and \Delta B necessitates non-linear adaptive laws like eqs. (10) and (11). Consequently, the resulting differential equation (30) is non-linear. To ensure that e → 0 for t → ∞, it is necessary and sufficient to prove that e = 0 is a stable equilibrium solution. According to Liapunov's stability theory this can be done when we can find a (scalar) Liapunov function V(e) with the following properties:
- V(e) is positive definite (this means that V > 0 for e ≠ 0 and V = 0 only for e = 0);
- \dot{V}(e) is negative definite (this means that \dot{V} < 0 for e ≠ 0 and \dot{V} = 0 only for e = 0);
- V(e) → ∞ if ||e|| → ∞.
When the Liapunov function V(e) is correctly chosen, the adaptive laws follow directly from the conditions under which \dot{V}(e) is negative definite. The main step is the choice of the Liapunov function:

V = e^T P e + a^T \alpha \, a + b^T \beta \, b   (33)
where
- P is an arbitrary positive definite symmetrical matrix,
- a and b are vectors which contain the non-zero elements of the \Delta A and \Delta B matrices,
- \alpha and \beta are diagonal matrices with positive elements which determine the speed of adaptation.
The choice of the Liapunov function as given in eq. (33) is quite a straightforward one. The Liapunov function represents a kind of energy that is present in the system; when this energy content goes to zero, the system reaches a stable equilibrium. In dynamic systems the energy is present in the integrators, which can also be considered as the state variables of the system. The components of e, a and b are the state variables of the system described by eq. (30). The components of a and b are the parameter errors and can be seen as wrong initial conditions of the adaptive controller parameters. It is thus desirable that all state variables e, a and b go to zero.
\dot{V} = \dot{e}^T P e + e^T P \dot{e} + 2 a^T \alpha \, \dot{a} + 2 b^T \beta \, \dot{b}   (34)

\dot{V} = e^T (A_m^T P + P A_m) e + 2 e^T P \Delta A \, x_p + 2 a^T \alpha \, \dot{a} + 2 e^T P \Delta B \, u + 2 b^T \beta \, \dot{b}   (35)

Let:

A_m^T P + P A_m = -Q   (36)

with Q positive definite; then

e^T (A_m^T P + P A_m) e = -e^T Q e   (37)
is negative definite. Stability of the system can be guaranteed if the last part of eq. (35) is set equal to zero. Let:
e^T P \Delta A \, x_p + a^T \alpha \, \dot{a} = 0   (38)

e^T P \Delta B \, u + b^T \beta \, \dot{b} = 0   (39)

This yields, after some mathematical manipulations, the general form of the adjustment laws:

\dot{a}_{ni} = -\frac{1}{\alpha_{ni}} \sum_{k=1}^{n} p_{nk} e_k x_i   (40)

\dot{b}_{i} = -\frac{1}{\beta_{i}} \sum_{k=1}^{n} p_{nk} e_k u_i   (41)
Write the Liapunov equation (36) as

A_m^T P + P A_m + Q = 0

and consider this expression as the equilibrium solution of the differential equation

\frac{dP}{dt} = A_m^T P + P A_m + Q

This equation can easily be solved in e.g. 20-sim. The speed of convergence can be increased by introducing a speed-up factor \lambda > 1, e.g. \lambda = 10:

\frac{dP}{dt} = \lambda \left( A_m^T P + P A_m + Q \right)
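This procedure can be sketched in a few lines of code (plain Euler integration rather than 20-sim; A_m and Q are the numerical values of the example treated below):

```python
import numpy as np

# Sketch: solve Am^T P + P Am + Q = 0 by integrating
# dP/dt = lambda (Am^T P + P Am + Q) with speed-up factor lambda = 10.
Am = np.array([[0.0, 1.0], [-1.0, -1.4]])
Q = np.array([[4.0, 0.8], [0.8, 1.6]])
lam, dt = 10.0, 0.001
P = np.zeros((2, 2))
for _ in range(int(20.0 / dt)):          # integrate over 20 s
    P += dt * lam * (Am.T @ P + P @ Am + Q)
print(P)    # converges to [[4, 2], [2, 2]]
```

Because the equilibrium of the differential equation is the exact solution of the Liapunov equation, the integration converges to P to within numerical precision.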
The following steps are thus necessary to design an adaptive controller with the method of Liapunov:
1. Describe process and reference model in state space form and determine the non-zero elements of \Delta A and \Delta B.
2. Choose a Liapunov function of the form of eq. (33).
3. Derive the adjustment laws from eqs. (40) and (41).
4. Select a positive definite matrix Q and solve the Liapunov equation (36) for P.

Step 1
\dot{x}_1 = x_2   (42)

\dot{x}_2 = -x_1 - (a_p + K_a) x_2 + (b_p + K_b) u   (43)

This yields:

\bar{A}_p = \begin{bmatrix} 0 & 1 \\ -1 & -(a_p + K_a) \end{bmatrix}, \qquad \bar{b}_p = \begin{bmatrix} 0 \\ b_p + K_b \end{bmatrix}   (44)

The reference model, with a_m = 1.4 and b_m = 1, is described by

A_m = \begin{bmatrix} 0 & 1 \\ -1 & -1.4 \end{bmatrix}, \qquad b_m = \begin{bmatrix} 0 \\ 1 \end{bmatrix}   (45)

so that

\dot{e} = A_m e + \Delta A \, x_p + \Delta b \, u   (46)

with

\Delta A = \begin{bmatrix} 0 & 0 \\ 0 & -1.4 + (a_p + K_a) \end{bmatrix}, \qquad \Delta b = \begin{bmatrix} 0 \\ 1 - (K_b + b_p) \end{bmatrix}   (47)
Only one element of the \Delta A-matrix, a_{22}, and one element of the \Delta b-vector, b_2, are not equal to zero.
Step 2

Step 3

\dot{a}_{22} = -\frac{1}{\alpha_{22}} (p_{21} e_1 + p_{22} e_2) x_2   (48)

\dot{b}_{2} = -\frac{1}{\beta_{2}} (p_{21} e_1 + p_{22} e_2) u   (49)

and

a_{22} = -1.4 + (a_p + K_a)   (50)

b_2 = 1 - (K_b + b_p)   (51)
With the assumption that the speed of adaptation is fast compared with the speed of variations in the process parameters, it follows from eqs. (48) and (49) that:

\dot{K}_a = -\frac{1}{\alpha_{22}} (p_{21} e_1 + p_{22} e_2) x_2   (52)

\dot{K}_b = \frac{1}{\beta_{2}} (p_{21} e_1 + p_{22} e_2) u   (53)
Step 4

Any positive definite matrix Q leads to a positive definite matrix P. The opposite is not true.

This fourth step starts with the choice of an arbitrary matrix Q. It is apparent that, although the choice of Q is arbitrary, the performance of the adaptive system is influenced by the choice of Q. The first choice of Q is not necessarily the best one. But whatever the choice of Q is, as long as Q is positive definite, the stability of the resulting adaptive system can be guaranteed. For simple systems P can be found manually; for higher order systems the computer can be used. As an example select:

Q = \begin{bmatrix} 4 & 0.8 \\ 0.8 & 1.6 \end{bmatrix}   (54)
The Liapunov equation (36) then reads

\begin{bmatrix} 0 & -1 \\ 1 & -1.4 \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -1 & -1.4 \end{bmatrix} = -\begin{bmatrix} 4 & 0.8 \\ 0.8 & 1.6 \end{bmatrix}   (55)

which yields

P = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}   (56, 57)
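The hand computation of eqs. (55)-(57) can be checked by writing the Liapunov equation as a linear system in the elements of P; a sketch using the row-major identity vec(A X B) = (A ⊗ B^T) vec(X):

```python
import numpy as np

# Solve Am^T P + P Am = -Q as a linear system in the elements of P,
# using vec(A X B) = (A kron B^T) vec(X) with row-major flattening.
Am = np.array([[0.0, 1.0], [-1.0, -1.4]])
Q = np.array([[4.0, 0.8], [0.8, 1.6]])
I = np.eye(2)
M = np.kron(Am.T, I) + np.kron(I, Am.T)   # Am^T P I  +  I P Am
P = np.linalg.solve(M, -Q.flatten()).reshape(2, 2)
print(P)    # [[4. 2.], [2. 2.]]
```

M is invertible because A_m is stable (no two eigenvalues of A_m sum to zero), so the solution P is unique.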
Both factors p_{21} and p_{22} are thus 2 in this case. The complete adaptive laws in integral form are:

K_a = -\frac{1}{\alpha_{22}} \int (p_{21} e_1 + p_{22} e_2) x_2 \, dt + K_a(0)   (58)

K_b = \frac{1}{\beta_{2}} \int (p_{21} e_1 + p_{22} e_2) u \, dt + K_b(0)   (59)
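Putting eqs. (52)-(53) together with the example process gives a complete closed-loop sketch. The process values a_p = 1.4 and b_p = 0.5 and the square-wave input are assumptions, since the numerical values of eq. (7) are not repeated here:

```python
import numpy as np

# Closed-loop sketch of the Liapunov-designed MRAS with laws (52)-(53),
# p21 = p22 = 2. Assumed process values: ap = 1.4, bp = 0.5; reference
# model: x'' + 1.4 x' + x = u (am = 1.4, bm = 1).
ap, bp, am, bm = 1.4, 0.5, 1.4, 1.0
p21, p22 = 2.0, 2.0
alpha22, beta2 = 2.0, 2.0          # adaptation-speed parameters
Ka, Kb = 0.0, 0.0
xp = np.zeros(2); xm = np.zeros(2)
dt = 0.005
for k in range(int(200.0 / dt)):
    t = k * dt
    u = 1.0 if (t % 40.0) < 20.0 else -1.0
    e = xm - xp                                 # state error x_m - x_p
    s = p21 * e[0] + p22 * e[1]                 # the signal "Pe"
    Ka += dt * (-s * xp[1] / alpha22)           # eq. (52)
    Kb += dt * ( s * u     / beta2)             # eq. (53)
    xp += dt * np.array([xp[1], -xp[0] - (ap + Ka) * xp[1] + (bp + Kb) * u])
    xm += dt * np.array([xm[1], -xm[0] - am * xm[1] + bm * u])
# (Kb + bp) should approach bm; (ap + Ka) tends toward am
print(round(Kb + bp, 2), round(ap + Ka, 2))
```

The signal s = p21 e1 + p22 e2 is the same combination of error and error derivative that appears in figure 10a.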
FIGURE 10a: Block diagram of the adaptive system designed with the method of Liapunov (adaptation driven by p21 e1 + p22 e2; reference model K\omega_n^2/(s^2 + 2\zeta\omega_n s + \omega_n^2)).
FIGURE 10b: Responses of the Liapunov-designed adaptive system (process, reference model, error; adaptation of Ka and Kb; time 0-100 s).
Remarks
- The stability conditions found with the method of Liapunov are sufficient conditions; they are not necessary. This implies, for instance, that there is still some freedom in varying the relative values of the elements of the P-matrix, without directly affecting the stability of the system.
- A proportional term can be added to the integral adaptive law:

a_{ni} = a_{ni,I} + a_{ni,P} = -\frac{1}{\alpha_{ni,I}} \int_0^t \Big( \sum_{k=1}^{n} p_{nk} e_k \Big) x_i \, d\tau - \frac{1}{\alpha_{ni,P}} \Big( \sum_{k=1}^{n} p_{nk} e_k \Big) x_i   (60)
The proportional term in eq. (60) can also be considered as a kind of signal
adaptation. It speeds up the adaptation when the process parameters are
rapidly changing and may improve the transient behavior of the parameters.
Instead of the proportional term a relay function may also be used. This
results in a relay plus integral adaptive law.
Figure 10c gives the responses with a proportional plus integral adaptive law.
It is clear that the effect of adding the proportional term is more damping of the transients of the adjusted parameters, while the error signal also converges faster than without the proportional term.
FIGURE 10c: Responses with a proportional plus integral adaptive law (process, reference model, error; Ka and Kb; time 0-25 s and 0-100 s).
- On the other hand, one should be careful when applying complex adaptive
laws. In order to be able to apply a more complex algorithm, generally meant
to speed up the adaptation, a good resemblance of the structures of process
and reference model is essential. If there is such a resemblance the
performance of the system may be improved. If there is more uncertainty, the simpler algorithms, like those of eqs. (40) and (41), may give good results. Complex laws also require that more parameters be chosen, while automatic tuning was just one of the aims of using adaptive control.
- A mathematical proof of stability with other adaptive laws requires the availability of other Liapunov functions. Several Liapunov functions can be found in the literature. However, in principle the test of whether a certain candidate adaptive law yields a stable system can be done more easily with the hyperstability method, as we will see later on.
EXERCISE

FIGURE 11: Process K/(s+a) with gain b_p, controlled by a controller with gains K_p, K_i and K_d.

(61)

The reference model is

\frac{\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}, \qquad \omega_n = 1 \text{ and } \zeta = 0.7   (62)
Hint

\dot{x} = \begin{bmatrix} 0 & 1 \\ -K K_p & -(a + K K_d) \end{bmatrix} x + \begin{bmatrix} 0 \\ K (K_w + K_v) \end{bmatrix} u   (63)

For the design of this adaptive system it is easier to describe the system with the aid of the state variables \chi and x_2, where \chi = u - x_i:

\begin{bmatrix} \dot{\chi} \\ \dot{x}_2 \end{bmatrix} = \begin{bmatrix} 0 & -K K_p \\ 1 & -(a + K K_d) \end{bmatrix} \begin{bmatrix} \chi \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & K (K_w + K_v) \end{bmatrix} \begin{bmatrix} \dot{u} \\ u \end{bmatrix}   (64)

The reference model in the same form is

\begin{bmatrix} \dot{\chi}_m \\ \dot{x}_{2,m} \end{bmatrix} = \begin{bmatrix} 0 & -\omega_n^2 \\ 1 & -2\zeta\omega_n \end{bmatrix} \begin{bmatrix} \chi_m \\ x_{2,m} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \dot{u} \\ u \end{bmatrix}   (65)

Derive the adaptive law for this system, using the procedure treated before.
The hyperstability method

Based on the work of Popov, Landau introduced the hyperstability concept for the design of adaptive control systems. The hyperstability method is, just as the second method of Liapunov, suitable for proving the stability of a non-linear system. The concept of hyperstability is closely related to the concept of passivity introduced by Willems. Passive elements such as capacitors, inductors and resistors will never store more energy than the energy that is stored in the element at t = 0 plus the energy supplied to it by the environment. The same holds for systems that consist of a network of such passive elements. If all the components in such a system are linear, the transfer function between the two energy-conjugated variables at the port of such a passive system is positive real. This means that the phase shift of such a system is always between plus and minus 90 degrees.
Hyperstability
and passivity
Example
Se
passive system
FIGURE 12a
1
passive system
We compute the transfer function between the effort and flow, H(s) = f(s)/e(s) (or between the voltage and current, H(s) = I(s)/U(s)) at the port of this passive system and draw the Nyquist and Bode plots of this 4th-order transfer function. When we observe the phase shift \varphi, we see that -90° < \varphi < 90° (see figure 12b). The transfer function itself contains a number of zeros that is equal to the number of poles minus one.
FIGURE 12b: Nyquist plot and Bode amplitude and phase plots of the linear passive system; the phase remains between -90 and +90 degrees (frequency 0.001-100 Hz).
The following statement holds for a scalar linear passive (or hyperstable) system:

Passivity of a linear system

Passivity or hyperstability implies that the real part of the polar plot (Nyquist plot) is always positive:

\mathrm{Re}\big( H(j\omega) \big) \geq 0
Any transfer function that does not have this property can be made positive real by adding a number of properly chosen zeros. We can use this property for the design of an adaptive controller. We reconsider the error equation (30):

\dot{e} = A_m e + \Delta A \, x_p + \Delta B \, u   (30)

with

-w = \Delta A \, x_p + \Delta B \, u   (66)
The last two non-linear and time-varying terms of (30) can be considered as an input w to this linear system:

\dot{e} = A_m e - w   (67)
We have seen before that the components of w, among others the adjustable parameters K_i, depend on e. This implies that w can be seen as a (non-linear, time-varying) feedback of the system given by eq. (66). We have also seen that the adjustable parameters do not necessarily depend on the state vector e itself. They depended on a combination of one or more components of e. Therefore, we extend the system of eq. (67) with the output equation that defines the signal v:

\dot{e} = A_m e - w, \qquad v = C e   (68)
where C is the output matrix of the linear system. The adaptive system, seen as a
linear system with a non-linear feedback can thus be drawn as figure 13.
FIGURE 13: A non-linear system split into a linear part and a non-linear part, connected by the signals v and -w.
If both parts of figure 13 are hyperstable, the complete feedback system will
be a hyperstable (passive) system. For the linear part this implies that its
transfer function should have a number of zeros that is equal to the number
of poles minus one, such that the system effectively behaves as a first-order
system. Such zeros can be introduced by a proper choice of the output matrix
C. It can be demonstrated that such a positive real transfer function can be
guaranteed if C is chosen equal to P, where P is the solution of the Liapunov
equation:
A_m^T P + P A_m = -Q   (69)

v = P e   (70)
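For the second-order example the output matrix C = P gives v = p21 e1 + p22 e2 with p21 = p22 = 2, and the input w only enters the second state equation, so the relevant scalar transfer function of the linear part is H(s) = (p22 s + p21)/(s^2 + 1.4 s + 1). A quick numerical sketch confirms that this H(s) is positive real:

```python
import numpy as np

# Numerical positive-real check for the second-order example:
# H(s) = (p22 s + p21) / (s^2 + 1.4 s + 1) with p21 = p22 = 2.
w = np.logspace(-3, 3, 2000)       # frequency grid in rad/s
s = 1j * w
H = (2.0 * s + 2.0) / (s**2 + 1.4 * s + 1.0)
print(H.real.min() > 0.0)                             # real part positive
print(np.abs(np.degrees(np.angle(H))).max() < 90.0)   # phase within +/- 90 deg
```

The zero introduced by the p22-term is exactly what keeps the phase lag of this second-order system below 90 degrees.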
For a passive system with port variables e (effort) and f (flow), the energy supplied to the system plus the initially stored energy E(0) can never become negative:

\int_0^t e f \, d\tau + E(0) \geq 0, \quad\text{or}\quad \int_0^t e f \, d\tau \geq -E(0)   (71)

For a capacitor with capacitance C, for instance, the supplied energy is

\int_0^t e(\tau) i(\tau) \, d\tau   (72)

and the passivity condition becomes

\frac{1}{2} C e^2(0) + \int_0^t e(\tau) i(\tau) \, d\tau \geq 0, \quad\text{or}\quad \int_0^t e(\tau) i(\tau) \, d\tau \geq -\frac{1}{2} C e^2(0)   (73)
In terms of our adaptive system: when we require that the non-linear part of figure 13, with the input and output variables v and w, is passive, we have to choose w such that the following condition holds:
\eta(t) = \int_0^t v^T w \, d\tau \geq -\eta_0^2   (74)

where \eta_0^2 < \infty is a function of the initial conditions \Delta A(0) and \Delta B(0). Because w contains the adaptive laws, this makes it possible to check candidate adaptive laws for their stability properties.
When we split the system of eq. (30) into a linear and a non-linear part and we apply the adaptive laws (58) and (59), we find the block diagram of figure 14. In order to prove that the non-linear part is passive we must demonstrate that with these adaptive laws condition (74) is satisfied.
FIGURE 14: The adaptive system split into a linear part and a non-linear part; the non-linear part contains the adaptive laws with initial conditions \Delta A(0) and \Delta B(0).

Proof of passivity of the non-linear part
\eta(t) = \int_0^t v^T w \, d\tau   (75)

with

-w = \Delta A \, x_p + \Delta B \, u   (76)

\Delta A = A_m - \bar{A}_p   (77)

\Delta B = B_m - \bar{B}_p   (78)
so that

\eta(t) = -\int_0^t v^T (\Delta A \, x_p + \Delta B \, u) \, d\tau   (79)
When the matrices \Delta A and \Delta B are adjusted by means of the adaptive laws \Phi(t) and \Psi(t), respectively, it follows that

\Delta A = \int_0^t \Phi(\tau) \, d\tau + \Delta A_0   (80)

\Delta B = \int_0^t \Psi(\tau) \, d\tau + \Delta B_0   (81)

\eta(t) = -\int_0^t v^T \Big[ \Big( \int_0^\tau \Phi \, d\sigma + \Delta A_0 \Big) x_p + \Big( \int_0^\tau \Psi \, d\sigma + \Delta B_0 \Big) u \Big] \, d\tau   (82)
Any candidate adaptive law can now be substituted in this equation. When we try, for instance, the adaptive laws (40) and (41) used earlier, such that:

\Phi(t) = -\alpha^{-1} v \, x_p^T   (83)

\Psi(t) = -\beta^{-1} v \, u^T   (84)
we find

\eta(t) = \int_0^t v^T \Big[ \Big( \int_0^\tau \alpha^{-1} v x_p^T \, d\sigma - \Delta A_0 \Big) x_p + \Big( \int_0^\tau \beta^{-1} v u^T \, d\sigma - \Delta B_0 \Big) u \Big] \, d\tau   (85)

This can be split into two parts and be rewritten in a quadratic form as:

\eta_1 = \frac{\alpha}{2} \Big( \int_0^t \alpha^{-1} v x_p^T \, d\tau - \Delta A_0 \Big)^2 - \frac{\alpha}{2} \Delta A_0^2   (86)

\eta_2 = \frac{\beta}{2} \Big( \int_0^t \beta^{-1} v u^T \, d\tau - \Delta B_0 \Big)^2 - \frac{\beta}{2} \Delta B_0^2   (87)
The first terms of both equations are quadratic and therefore positive. The conditions necessary to satisfy eq. (74) are thus:

\eta_1 \geq -\tfrac{1}{2} \alpha \Delta A_0^2   (88)

\eta_2 \geq -\tfrac{1}{2} \beta \Delta B_0^2   (89)
When \Delta A_0^2 and \Delta B_0^2 are finite, condition (74) is satisfied. When P is selected according to eq. (69), the complete non-linear feedback system is hyperstable.
For the system of figure 10a this leads with the aid of eqs. (50) and (51) to the
adaptive laws (52) and (53).
Remarks
The difference between the hyperstability method and the method of Liapunov is
not that different adaptive laws are found. Both methods differ in the way these
laws are found. Instead of searching for Liapunov functions which belong to a
certain candidate algorithm, the hyperstability method requires the proof that eq.
(74) is satisfied. In principle, this is easier than finding a proper Liapunov
function.
When we compare the adaptive laws which are found from the stability approach with those found from the sensitivity approach, the following differences become clear. Instead of using only the error signal e, the difference between the outputs of the process and the reference model, the stability approach uses the signal Pe. In the cases we considered until now, Pe corresponded to the signal p21 e1 + p22 e2. This expression also contains the derivative of the error e, with a positive effect on the system's stability. By introducing the hyperstability method it has been demonstrated that due to this derivative, and in higher-order systems due to higher derivatives as well, the maximum phase lag never exceeds 90 degrees. In addition, the stability approach uses the states of the process or the input signal of the process, instead of the sensitivity coefficients.
When we compare the signals e with p21 e1 + p22 e2 and \partial y_p / \partial K_v with x_1, it appears that the corresponding signals of the stability approach and the sensitivity approach have a similar shape, but that the signals of the sensitivity approach (e and \partial y_p / \partial K_v) have a phase lag. One could consider the sensitivity coefficients as a kind of estimated states. The sensitivity model is a state estimator from this point of view. It will be clear that the observed phase lag has a deteriorating effect on the system's stability. This explains the superiority of the designs following the stability approach.
EXERCISE
Show that a proportional and a proportional plus integral adaptive law as used
before, also lead to a hyperstable adaptive system.
Identification and state estimation

In the previous sections it has been indicated how MRAS can be used for direct adaptation of the parameters of a controller. In this case the process must follow the response of the reference model. However, when the process and the reference model change places, the reference model, in this case referred to as the adjustable model, will follow the response of the process (figure 15). This is realized by adjusting the parameters of the adjustable model. In doing so two things are achieved:
1. Identification. Adjustment of the parameters of the adjustable model leads, after some time, to a situation where the parameters of the process and the model are identical.
2. State estimation. When the adjustment of the adjustable model is successful, the states of the adjustable model will, after some time, be identical to the states of the process. The model states can be considered as estimates of the process states.
FIGURE 15: MRAS used for identification and state estimation: the adjustable model follows the process x_p; the adaptive mechanism adjusts the model parameters.
The adaptive laws for identification and for (direct) adaptation are identical, except that the states x_{p,i} are replaced by the states x_{m,i}. It should be noted that the proof of stability in this case involves that the following Liapunov equation be solved:

A_p^T P + P A_p = -Q   (90)
where instead of the A-matrix of the reference model Am the matrix Ap is used.
The process has to be stable in order to get a stable adaptive system. Ap is also
unknown. An estimate of Ap has to be made in order to be able to solve the
Liapunov equation.
Another issue relevant in the case of identification is the following. The design procedure according to the method of Liapunov yields a proof of asymptotic stability of e. This means that e → 0 when t → ∞. For the parameter errors (\Delta A and \Delta B) only stability in the sense of Liapunov can be guaranteed. That means that it may happen that \Delta A and \Delta B do not go to zero for t → ∞. It can easily be seen that such a situation may occur. When, for instance, u = 0 for t → ∞, x_p and x_m will both approach zero, and thus e as well. However, no parameter adjustment will take place anymore. In the case of adaptation this is hardly a problem, because the primary goal is then to ensure that e → 0. The values of \Delta A and \Delta B do not really matter. But in the case of identification \Delta A and \Delta B must approach zero. However, it can be demonstrated that when the input signal is "sufficiently rich", that means that the input signal has sufficient power at all frequencies within the bandwidth of the system, it can be guaranteed that \Delta A and \Delta B both approach zero.
When the states of the process are corrupted with noise, the structure of figure 15 can be used to get filtered estimates of the process states. When the input signal itself is not very noisy, the model states will also be almost free of noise. By selecting adaptive gains which are not too large, noise on e can be prevented from affecting the model states too much. It is important to notice that in this case the filtering is realized without phase lag. The MRAS structure acts as an adaptive observer. This will be illustrated by means of an example.
Example

Figure 16 illustrates this principle used in an autopilot for ships. The ship's parameters are identified in a closed-loop system. The transfer function of the ship, from the rudder angle \delta to the rate of turn \dot{\psi}, can be described by the first-order transfer function:

\frac{\dot{\psi}}{\delta} = \frac{K_s}{\tau_s s + 1}   (91)
The influence of waves on the motions of the ship can be modeled as an extra input to this model. However, it is not desirable to react on these waves with steering signals. One way to achieve this is to consider these disturbances as measurement noise, rather than process noise. Therefore, the influence of waves is simulated by adding colored noise to the signal \dot{\psi}. After integration of this signal the heading \psi is obtained.

This process is controlled by an autopilot which consists of a proportional control action with gain K_p and a rate feedback with gain K_d. In order to prevent the rudder from reacting to each wave, the estimated rate of turn is used in the controller instead of the measured \dot{\psi}. This estimate is obtained with the above-mentioned state estimator. The rudder angle \delta is measured and is used as an input signal for a model with the transfer function

\frac{\hat{\dot{\psi}}}{\delta} = \frac{K_m}{\tau_m s + 1}   (92)
FIGURE 16: Identification of the ship's dynamics in a closed loop: autopilot (gains Kp and Kd), ship, wave disturbance through a coloring filter k/(s+1), and the adjustable model with gains Ka and Kb.
Because in general K_s and \tau_s will be unknown and may vary when, for instance, the speed of the ship changes, it is necessary to adjust K_m and \tau_m with the aid of an adaptation mechanism.
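A minimal sketch of such an identification (without the autopilot loop and the wave disturbance; the ship values K_s = 0.2 and τ_s = 10 s and the gradient gain are assumptions): the model parameters are adjusted with gradient laws in which, as noted above, the model state is used.

```python
import numpy as np

# Sketch: identification of the first-order ship dynamics (91) with an
# adjustable model x_m' = -a_m x_m + b_m delta (a = 1/tau, b = K/tau).
# Assumed ship values: Ks = 0.2, tau_s = 10 s; gradient adaptive laws
# driven by the model state x_m instead of the process state.
Ks, tau_s = 0.2, 10.0
a_s, b_s = 1.0 / tau_s, Ks / tau_s     # true parameters: 0.1 and 0.02
a_m, b_m = 0.2, 0.05                   # wrong initial model parameters
gamma, dt = 0.2, 0.01
x, x_m = 0.0, 0.0
for k in range(int(2000.0 / dt)):
    t = k * dt
    delta = 1.0 if (t % 100.0) < 50.0 else -1.0   # rudder test signal
    e = x - x_m                                    # identification error
    a_m += dt * (-gamma * e * x_m)                 # adjust model parameters
    b_m += dt * ( gamma * e * delta)
    x   += dt * (-a_s * x + b_s * delta)           # ship (rate of turn)
    x_m += dt * (-a_m * x_m + b_m * delta)         # adjustable model
print(round(a_m, 2), round(b_m, 3))    # approach a_s = 0.1, b_s = 0.02
```

The square-wave rudder signal is sufficiently rich for the two parameters, so both converge; with u constant only a combination of them could be found.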
In figure 17 responses are given of a simulation of this system, where

K_m(0) = \tfrac{1}{2} K_s   (93)

\tau_m(0) = 2 \tau_s   (94)
FIGURE 17: Responses of the identification system: reference, heading Psi and rudder angle; identification error and estimated rate of turn; parameters Ka and Kb (time 0-400 s).
FIGURE 18: Effect of decreasing adaptive gains on the parameters Ka and Kb (time 0-1200 s).
(95)
[FIGURE 19  Autopilot with adjustable model (parameters Ka and Kb) and an optimal-controller-design block that computes the controller gains Kp and Kd; wave disturbance through a coloring filter]
[FIGURE 20  Responses of the system of figure 19: reference, heading (Psi) and model output, rudder angle, and the controller parameters Kp and Kd as functions of time]
Until now the practical problems have been put aside. In this section a few
practical problems and their solutions will be discussed:
- not completely matching process and reference model structures
- non-linearities in the process or the reference model
- noise (in general, disturbances in the process)
Structural differences
When the structures of the process and the reference model do not completely
match, the formal proof of stability no longer holds. If this were immediately
disastrous, practical application of MRAS would hardly be possible. It has
already been mentioned that in this case simple algorithms have advantages over
more complex ones. It remains important, however, that the dominating dynamics
of the process are present in the reference model as well. In general, it
appears to be possible to compensate for small remaining differences between
the structure of the process and that of the reference model by relatively fast
parameter variations.
Example
[FIGURE 21  Liapunov-based MRAS applied to a motor: setpoint generator, controller with gains Kp, Ki and Kd, process, reference model ωn²/(s² + 2ζωn s + ωn²), and adaptation based on p21 and p22; measured signals omega_motor and phi_motor]
[FIGURE 22  Responses of the motor system: phi_motor {rad} and reference-model output, error, and the controller parameters Kp, Kd and Ki as functions of time]

Non-linearities
Although the design methods which have been applied, such as the method of
Liapunov, are especially suited to non-linear systems, in the adaptive systems
described it is essential that the non-linearities be restricted to the
adaptation mechanism. Process and reference model must in principle be linear.
Most non-linearities can, however, be translated into variations of the other
parameters of the system. Such variations may be compensated by the adaptation
mechanism. It is possible to give a proof of stable adaptive laws for non-linear
systems as well, but those laws are in general more complex than the algorithms
given before. It appears that these types of algorithms fail when there are
only small structural differences between the process and the reference model.
[FIGURE 23  Saturation characteristic (output versus input)]
This latter method is preferred, because the adaptive system can then remain
unchanged. This approach is not only suited to eliminating saturation-type
non-linearities from the process, but also to dealing with saturation in the
reference model. The proof of stability is not affected by this approach,
because the non-linearities are in principle removed from the control loop. The
paper in Appendix A gives an example of this solution.
Noise
(96)

with

    x̃p = xp + νp      (97)

This yields, for instance, for the components ei and xi,p:

    ẽi x̃i,p = ei xi,p + ei νi,p + xi,p νi,e + νi,e νi,p      (98)

Assuming the mean values of ei and xi,p to be zero, integration of eq. (98),
as in the adaptive laws, leaves in the presence of noise one term with a
non-zero mean:

    ∫ νi,e νi,p dt      (99)

and thus

    νi,e νi,p = ν²i,p      (100)
Equation (99) indicates that the adjusted parameters will drift away when the
signals in the adaptive system are too small to compensate for this term with a
non-zero mean. This is only a problem in the case of adaptation and not for
identification. When MRAS is applied for identification instead of x p , the
Ka and Kb
8
6
Ka
Kb
4
2
0
-2
20
40
60
(101)
80
100
time {s}
(102)
where T denotes the time interval after the last set-point change. The principle
of decreasing adaptive gains was already illustrated in figure 18.
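The drift term of eq. (99) can be illustrated numerically. In the sketch below (the noise level is an assumed value), the same noise component appears in the error and in the measured state, so the running integral of their product grows linearly with time at a rate equal to the noise variance:

```python
import random

# Illustration of eq. (99): the integral of nu_e * nu_p grows linearly in
# time, at a rate equal to the variance of the measurement noise.
random.seed(0)
sigma = 0.1          # assumed standard deviation of the measurement noise
N = 50000
drift = 0.0
for k in range(N):
    nu_p = random.gauss(0.0, sigma)   # noise on the measured state
    nu_e = nu_p                       # same noise component in the error signal
    drift += nu_e * nu_p              # integrand of eq. (99)
mean_rate = drift / N                 # approaches var(nu_p) = sigma**2
```

Since this rate is strictly positive, the integrated adjustment signal never averages out, which is exactly the parameter drift discussed above.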
Several measures can limit this drift:
- instead of using the signal xp, in the case of adaptation the signal xm may
  also be used; although this is theoretically not correct, such a system may
  give good results in practice
- it is also possible to use the estimated states of the process, obtained by
  means of the earlier-mentioned adaptive state estimator
- it is possible to measure the terms νe and νp on-line; the product of both
  terms can be used to compensate for the drift; the usefulness of this method
  is determined by the accuracy with which both terms can be measured
- low-pass filters as well as a dead band may be used in the adaptive loops;
  the use of filters is limited because of the detrimental phase lag which they
  introduce; application of a dead band is simple and effective; the only
  requirement for its effective application is that it must be possible to
  determine an upper bound of the signal which causes the drift
Depending upon the type of application, a choice can be made from one or more
of these measures.
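The dead-band measure can be sketched as follows (scalar example, all numbers assumed): with no set-point changes the adaptive loop is driven by measurement noise alone, and the adjusted parameter drifts away from its correct value; with a dead band chosen above an upper bound of the noise-driven error, adaptation simply stops and the parameter stays put.

```python
import random

def drift_run(dead_band):
    """Noise-only adaptation with an optional dead band on the error."""
    random.seed(4)
    sigma = 0.3               # assumed noise level
    gamma = 0.5               # adaptive gain
    a_m = 0.8                 # parameter, assumed already correctly adapted
    nu_prev = random.gauss(0.0, sigma)
    tail = []
    for k in range(2000):
        nu = random.gauss(0.0, sigma)
        # no set-point changes: the true state is zero, only noise is measured
        e = nu - a_m * nu_prev        # error driven purely by noise
        if abs(e) > dead_band:        # adapt only outside the dead band
            a_m += gamma * e * nu_prev
        nu_prev = nu
        if k >= 1500:
            tail.append(a_m)
    return sum(tail) / len(tail)      # average over the final part of the run

a_free = drift_run(dead_band=0.0)     # drifts away from 0.8
a_db = drift_run(dead_band=2.5)       # bound above the noise level: no drift
```

The only design question is the dead-band width: it must exceed an upper bound of the noise-driven error, which is exactly the requirement stated above.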
Influence of discretization
    xm(k+1) = Am xp(k) + Bm u(k)      (103)

    xp(k+1) = Ap xp(k) + Bp u(k)      (104)
The matrices Am, Ap, Bm and Bp are found from a transformation of the
continuous-time equations. Their elements are functions of the sampling rate T.
Note that the structure of the system has been changed in such a way that in eq.
(103) xp is used instead of xm. This is illustrated in figure 25.
[FIGURE 25  Discrete series-parallel structure: process (Ap, Bp, delay z⁻¹) and reference model (Am, Bm, delay z⁻¹), both driven by u; the reference model is driven by the process state, and e is the difference of the outputs]
    e(k) = xm(k) − xp(k)      (105)

    e(k+1) = ΔA(k) xp(k) + ΔB(k) u(k)      (106)

with

    ΔA(k) = Am(k) − Ap(k)      (107)

    ΔB(k) = Bm(k) − Bp(k)      (108)
At this stage a Liapunov function V(k) is selected and ΔV(k) is determined from:

    ΔV(k) = V(k+1) − V(k)      (109)

with

    V(k) = eT(k) P e(k) + ΔaT(k) α Δa(k) + ΔbT(k) β Δb(k)      (110)
    ΔV(k) = −eT(k) P e(k)
            + eT(k+1) P ΔA(k) xp(k) + {Δa(k+1) + Δa(k)}T α {Δa(k+1) − Δa(k)}
            + eT(k+1) P ΔB(k) u(k) + {Δb(k+1) + Δb(k)}T β {Δb(k+1) − Δb(k)}      (111)

The first term of eq. (111) is negative definite when P is a positive definite
matrix. Let the last four terms of eq. (111) be equal to zero. This yields the
adaptive laws:
    Δai(k+1) = (1/αi) Σℓ=1..n pnℓ eℓ(k+1) xi,p(k)      (112)

    Δbi(k+1) = (1/βi) Σℓ=1..n pnℓ eℓ(k+1) ui(k)      (113)
(114)
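A scalar-gain sketch of these discrete adaptive laws (a hypothetical second-order process in companion form, P = I so that only the last error component enters; signs chosen so that the parameter error decreases, and all numbers assumed):

```python
import random

# Second-order process in companion form; series-parallel model driven by
# the process state, as in eq. (103). All numbers are assumed.
random.seed(2)
a1, a2, b = 0.2, -0.5, 1.0      # true parameters (last row of Ap, and Bp)
am1, am2, bm = 0.0, 0.0, 0.0    # adjustable-model parameters
gamma = 0.05                    # adaptive gain (plays the role of 1/alpha_i)

x1, x2 = 0.0, 0.0
for k in range(5000):
    u = random.choice([-1.0, 1.0])
    x1n = x2                            # companion form: first row is a shift
    x2n = a1 * x1 + a2 * x2 + b * u     # process
    xm2n = am1 * x1 + am2 * x2 + bm * u # model, driven by the process state
    e2 = xm2n - x2n                     # last (only non-trivial) error component
    am1 -= gamma * e2 * x1              # updates in the spirit of eqs. (112)-(113)
    am2 -= gamma * e2 * x2
    bm -= gamma * e2 * u
    x1, x2 = x1n, x2n
```

With P = I the sum over ℓ reduces to the last error component, so only e2 appears in the updates; with a full P matrix each error component would contribute with weight pnℓ.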
[FIGURE 26  Computer implementation: process (ap, bp), reference model H(z), and adjustable parameters Ka and Kb adapted in the computer with Pe(k), using e(k), xp(k) and xm(k)]
[FIGURE 27  Process output, reference-model output xm[1] and error; parameters Ka and Kb as functions of time]
Because at each sample the reference-model states are made equal to the process
states, the error between process and reference model remains small. When we
zoom in, we clearly see the discrete nature of the sampled process state x1,p, the
reference model state x1,m, and the error between these signals (figure 28).
[FIGURE 28  Zoomed-in responses: process output, reference-model state xm[1], error, and parameters Ka and Kb as functions of time]
10
Conclusions
11
References
Classical book on
model reference
adaptive control;
includes several
examples of real
applications
Landau, I.D., Adaptive Control: The Model Reference Approach, Control and
System Theory Series, 8, Marcel Dekker Inc., New York and Basel, 1979