
University of Twente

Faculty EE-Math-CS
Department of Electrical Engineering
Control Engineering

Intelligent Control
part 1 - MRAS
Author:
prof.dr.ir. Job van Amerongen


102CE2004
March 2004

Part
1
Intelligent Control

Model Reference Adaptive Control

University of Twente
Control Engineering

Author:
Prof.dr.ir. Job van Amerongen


Model Reference Adaptive Control Systems

© 2004 J. van Amerongen


Save exceptions stated by the law, no part of this publication may be reproduced in any form, by print, photoprint, microfilm or other means, including a complete or partial transcription, without the prior written permission of the publisher.


1  Model Reference Adaptive Control Systems
   1   Introduction
   2   The adaptation mechanism
   3   Sensitivity models
   4   The stability approach: the method of Liapunov
   5   The hyperstability method
   6   Identification and state estimation
   7   Practical problems, noise, non-linearities
   8   Noise
   9   Discrete MRAS
   10  Conclusions
   11  References
       Appendix

2  Indirect Adaptive Control, Self-tuning regulators

3  Learning control, Fuzzy control

Model Reference Adaptive Control Systems (MRAS)

INTRODUCTION

In these lecture notes various forms of model reference adaptive systems are discussed. We start with an intuitive approach, showing that basic feedback ideas help to find algorithms for parameter adjustment. We notice that two questions arise. The first is how to find proper signals that adjust the right parameter at the right moment. The second is how to guarantee stability for an adaptive system that is inherently non-linear due to the multipliers present in the system. More insight into the first question is obtained by considering the sensitivity-model approach. Stability can be guaranteed by using Liapunov's stability theory for the design of the adaptive system.
LEARNING GOALS

After completing these notes you are expected to know:
- which signals play a role in an adaptive system
- how an adaptive system can be designed based on a sensitivity approach
- how an adaptive system can be designed based on a Liapunov (stability) approach
CORE

Definition of
adaptive control

Introduction

There are various structures that may give a control system the possibility to
react to variations in its parameters or to changing characteristics of the
disturbances. A normal feedback system also has the objective of decreasing the
sensitivity to these types of variations. However, when the variations are large,
even a well-designed constant-gain feedback system will not operate
satisfactorily. Then a more complex controller structure is required and certain
adaptive properties have to be introduced. An adaptive system may be defined
as follows:
An adaptive system is one in which in addition to the basic (feedback)
structure, explicit measures are taken to automatically compensate for
variations in the operating conditions, for variations in the process dynamics
or for variations in the disturbances, in order to maintain an optimal
performance of the system.
Many other definitions have been given in the literature; most of them only
describe a typical class of adaptive systems.
The definition given here assumes as a base an ordinary feedback structure for
the primary reaction to disturbances and parameter variations. On a secondary
level an adaptation mechanism tunes the gains of the primary controller, changes
its structure, generates additional signals, and so on. In such an adaptive
system the settings which are adjustable by the user are at the secondary level.


Gain scheduling
Mode switching


According to the definition, automatically changing from one mode of operation to a different one is considered an adaptive feature. Use of knowledge about the influence of an external variable on the behaviour of the system is an adaptive feature as well. This type of adaptation can be realized in two different ways: either by measuring particular disturbances and generating signals to compensate for them (feedforward control), or by adjusting the feedback controller gains according to a schedule based on knowledge about the influence of the variables on the system's parameters (gain scheduling). Another possibility is using a bank of controllers and selecting the best controller in a similar way as gain scheduling. This is called mode switching. A variation on this idea is a multi-model approach. The outputs of all models in a bank of models are compared with the output of the process to be controlled. A controller can be designed and implemented based on the model whose output best resembles the process output. In practice it is impossible to apply feedforward control or gain scheduling to a large number of different variables.
Several types of adaptive systems, in a more narrow sense, have been developed
which allow a system to be optimized without any knowledge of the causes of
changing process dynamics. Often the term adaptive control is restricted to these
types of adaptive systems. There is no clear distinction between adaptive control
and learning control. The term learning control is often used for more complex
systems where a lot of memory is involved and for problems that cannot be
solved by means of standard controllers, based on transfer functions, because
they require another form of knowledge representation, e.g. in neural-networklike structures. These lecture notes deal with a particular kind of adaptive
control, known as Model Reference Adaptive Control.
Adaptive control systems can be classified in various ways. One possibility is to
make a distinction between:

Direct and
indirect
adaptive control

- systems with direct adjustment of the controller parameters, without explicit identification of the parameters of the process (direct adaptive control)
- systems with indirect adjustment of the controller parameters, with explicit identification of the parameters of the process (indirect adaptive control)
Model Reference Adaptive Control Systems, mostly referred to as MRAC or
MRAS, are mainly applied for direct adaptive control. However, in the
following the application of MRAS to system identification will also be
demonstrated.
The basic philosophy behind the application of MRAS is that the desired
performance of the system is given by a mathematical model, the reference
model. When the behavior of the process differs from the ideal behavior,
which is determined by the reference model, the process is modified, either by
adjusting the parameters of a controller (figure 1a) or by generating an
additional input signal for the process (figure 1b). This can be translated into an
optimization problem, i.e. minimization of the criterion:
C = \int_0^T e^2 \, dt      (1)

where


e = y_m - y_p      (2)

Instead of minimizing only the error between the output signals of the process
and the reference model, all the state variables of the process and the reference
model can be taken into account. When the state variables of the process are
denoted as (xp) and those of the reference model as (xm), the error vector (e) can
be defined:

e = x_m - x_p      (3)

In that case the optimization problem can be translated into minimization of the
criterion:
C = \int_0^T e^T P e \, dt      (4)

where P is a positive definite matrix.

FIGURE 1a   Parameter adaptive system

As we will see later on, the multiplications in the adaptive controller always lead to a non-linear system. It could be argued whether adaptive control is more than non-linear feedback.

FIGURE 1b   Signal adaptive system

The following considerations may play a role in the choice between adaptation
of the parameters and signal adaptation. An important property of systems with
parameter adaptation is that such systems have a memory. As soon as the
parameters of the process have been adjusted to their correct values, and there
are no new changes, the adaptive loop is in fact not necessary anymore: process
and reference model show the same behavior. In general memory is not present
in systems with signal adaptation. Therefore, the adaptive loop remains
necessary in all cases, in order to generate the right input signal. Consequently,
signal adaptive systems must react faster to changing process dynamics than
systems with parameter adaptation because no knowledge from the past can be
used. In systems where the parameters constantly vary over a wide range this is
advantageous. However, in a stochastic environment, i.e. in systems with a lot of
noise, this may be a disadvantage. High gains in the adaptive loop may introduce
a lot of noise in the input signal of the process as well.
When the parameters of the process vary slowly or only now and then, systems
with parameter adaptation will give a better performance because of their
memory. There are also adaptive algorithms which combine the advantages of
both methods. In the following attention will mainly be focused on parameter
adaptive systems, although the combination of parameter and signal adaptation
will also be discussed.


Another way of looking at the system is the following. The standard feedback
control loop is considered as a fast reacting primary control system that has to
reject ordinary disturbances. Large variations in the parameters or large
disturbances are dealt with by the slower reacting secondary (adaptive) control
system (Figure 2).
FIGURE 2   Primary and secondary control

The adaptation mechanism

In the literature several methods have been described for designing adaptive
systems. But you can get more insight into a method by thinking about how to
find the algorithms yourself. This helps to really understand what is going on.
Therefore, for the time being we will postpone the mathematics and examine the
basic ideas of MRAS with a simple example. When we try to design an adaptive
controller for this simple system we will automatically encounter the problems
which require more theoretical background. Common properties of the various
design methods as well as their differences will become clear. In figure 3 a block
diagram is given of the system which will be used as an example throughout this
text.

FIGURE 3   Process and reference model


In this example the (linear) process is described by the transfer function:


\frac{b_p}{s^2 + a_p s + 1} \quad\text{and the model by}\quad \frac{b_m}{s^2 + a_m s + 1} \;\text{ or }\; \frac{K\omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}      (5)

Variations in the parameter ap can be compensated for by adjusting Ka, and variations in bp by adjusting Kb. This follows directly from the transfer function of process plus controller in figure 3:

\frac{K_b + b_p}{s^2 + (a_p + K_a)s + 1}      (6)

The (linear) reference model has the same order as the process. The following
numerical values are chosen:

\omega_n = 1,\ \zeta = 0.7,\ a_p = 1.4,\ b_p = 0.5      (7)

In this case only the DC-gain of the process and the reference model differ by a
factor of two. This can be seen in the step responses of this system (figure 4).
Of course the controller with parameters Ka and Kb is not a real controller. In fact we assume at this stage that the process parameters can be adjusted directly.

FIGURE 4   Responses of process, reference model and error

Because e = ym - yp and yp = ½ym, in this case the error e is equal to yp. In order to get two identical responses the parameter Kb has to be adjusted. It is obvious that Kb should be increased. A reasonable choice for adjustment of Kb seems to be:

K_b(t) = K_b(0) + \gamma \int e \, dt      (8)

With the adaptive gain γ the speed of adjustment can be set. The desired memory function is realized by means of the integration, which also guarantees that a constant difference between (Kb + bp) and bm converges to zero. This adaptive law with γ = 0.5 gives the results shown in figure 5.


FIGURE 5   Result of adaptation of Kb. At the left the parameter, at the right the responses of process, reference model and error

MIT rule

Although the result is impressive, it quickly becomes clear that there are still a few problems left. When the input signal u is inverted, the adjustment of Kb goes in the wrong direction, because of the negative sign of e. An unstable system is the result. However, the solution to this problem is simple. When the sign of the input signal is taken into account, for instance by multiplying e and u, the result of the parameter adjustment conforms again to figure 5. This yields the adjustment law known as the MIT rule:

K_b(t) = K_b(0) + \gamma \int (e\,u) \, dt      (9)
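As an illustration (not part of the original notes), the following Python sketch applies the adjustment law of eq. (9) to the example of figure 3 with the numerical values of eq. (7). The square-wave set-point, the step size and the Euler integration are assumptions made only for this sketch:

import numpy as np

dt, T = 0.01, 100.0
a_m, b_m = 1.4, 1.0          # reference model: y'' + 1.4*y' + y = u
a_p, b_p = 1.4, 0.5          # process:         y'' + 1.4*y' + y = (Kb + 0.5)*u
gamma, Kb = 0.5, 0.0         # adaptive gain and initial adjustable gain
xm = np.zeros(2)             # [y_m, dy_m/dt]
xp = np.zeros(2)             # [y_p, dy_p/dt]

for k in range(int(T / dt)):
    u = 1.0 if (k * dt) % 20 < 10 else -1.0          # square-wave input (assumed)
    e = xm[0] - xp[0]                                # e = y_m - y_p, eq. (2)
    xm += dt * np.array([xm[1], -xm[0] - a_m * xm[1] + b_m * u])
    xp += dt * np.array([xp[1], -xp[0] - a_p * xp[1] + (b_p + Kb) * u])
    Kb += dt * gamma * e * u                         # MIT-rule-like law, eq. (9)

print("Kb after %.0f s: %.2f (it should approach 0.5)" % (T, Kb))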

A second problem is encountered when not only variations of bp have to be


compensated for, but also variations of the parameter ap. A similar reasoning as
in the case of adjustment of Kb could lead to an adjustment law for the parameter
Ka, based on e and on the sign of u. But this would lead to the same adjustment
law for each parameter. Apparently not only the direction of adjustment of the
parameter has to play a role, but also the amount of adjustment of each
parameter, relative to the others. Such a dynamic speed of adjustment may be
realized by adjusting each parameter, depending upon the effect this adjustment
has on decreasing the error. This reasoning leads to the candidate adjustment
laws:
K_b(t) = K_b(0) + \gamma \int (e\,u) \, dt      (10)

K_a(t) = K_a(0) + \alpha \int (e\,x_2) \, dt      (11)

The parameter Kb is adjusted when u, the signal which is directly influenced by Kb, is large, and the parameter Ka is adjusted when x2, the signal which is directly influenced by Ka, is large. In figure 6 simulation results are given. It appears that our intuitive reasoning yields a system where a reasonably fast adaptation takes place. In the simulation of figure 6 the values of bp and ap are taken as zero, which is compensated by appropriate initial values of Ka and Kb (0.5 and 0.7, respectively). The parameters converge to the correct values of 1 and 1.4 and as a result the responses of the process and the reference model become equal. The speeds of adaptation are chosen as α = 12 and γ = 2.


FIGURE 6   Result of adaptation of Ka and Kb

In figure 6 the speed of adaptation, determined by the adaptive gains α and γ, is still small. In order to speed up the system, the adaptive gains are increased to α = 60 and γ = 10. This yields the disappointing results of figure 7.
FIGURE 7   Adjustment of Ka and Kb with a higher speed of adaptation

The adaptive system, which was stable with low adaptive gains, becomes
unstable with higher adaptive gains. When the block diagram of this system is
examined (figure 8) it is clear that this stability problem cannot easily be solved,
due to the non-linearities which have been introduced into the system.


FIGURE 8   Non-linearities in an adaptive control system

Until now we have thus faced two main problems:

1. A kind of dynamic speed of adaptation is needed in order to realize that each parameter is only adjusted when the resulting error is sensitive to variation of that parameter.

2. A stability problem exists when the adaptive gains are increased as a result of the desire to speed up the adaptation. This stability problem cannot easily be solved by linear analysis methods because adaptation makes the system non-linear.

These two problems clarify the origin of various methods for designing MRAS.
Two methods will be discussed in more detail in the following:
- the sensitivity approach. This method emphasizes the determination of the
dynamic speed of adaptation with the aid of sensitivity coefficients.
- the stability approach. This method emphasizes the stability problem. Due to
the non-linear character of an adaptive system it is essential that stability
theory of non-linear systems be used. It will be shown that, together with a
proof of stability, useful adaptive laws can be found.

Sensitivity models

In the sensitivity approach the first step is to translate the adaptation problem
into an optimization problem by introducing the criterion:
C = \tfrac{1}{2} \int_0^t e^2 \, d\tau      (12)

In order to minimize C, the adjustable parameters Ki are varied. The direction of these variations is determined by the gradient ∂C/∂Ki, thus:


\Delta K_i = -\lambda_i \frac{\partial C}{\partial K_i}      (13)

By differentiating eq. (13) with respect to t, a continuous adaptive law can be found:

\frac{dK_i}{dt} = -\lambda_i \frac{d}{dt}\left(\frac{\partial C}{\partial K_i}\right)      (14)

From eqs. (12) and (14) it follows that

\frac{dK_i}{dt} = -\lambda_i \frac{\partial}{\partial K_i}\left(\tfrac{1}{2}e^2\right)      (15)

or

\frac{dK_i}{dt} = -\lambda_i\, e \frac{\partial e}{\partial K_i}      (16)

In eq. (2) the error has been defined as:

e = y_m - y_p      (17)

Because ∂ym/∂Ki = 0, it follows from eqs. (16) and (17) that

\frac{dK_i}{dt} = \lambda_i\, e \frac{\partial y_p}{\partial K_i}      (18)

This algorithm is more or less similar to the algorithms of (10) and (11). The direction of adjustment and the amount of adjustment relative to other parameters is now determined by the error e and the sensitivity coefficient ∂yp/∂Ki. The latter can be determined by means of a sensitivity model. The sensitivity coefficient explicitly sees to it that adjustment of Ki only takes place when the error between process and reference model is sensitive to variations in this particular parameter.
Example

The process in figure 3 can be described by the differential equation:

\ddot y_p + (a_p + K_a)\dot y_p + y_p = (K_b + b_p)\,u      (19)

where \dot y denotes dy/dt and \ddot y = d^2y/dt^2. After the parameter Kv is introduced,

K_v = a_p + K_a      (20)

eq. (19) can be rewritten as

\ddot y_p + K_v \dot y_p + y_p = (K_b + b_p)\,u      (21)

Differentiation of this equation with respect to Kv yields:



\frac{\partial \ddot y_p}{\partial K_v} + K_v \frac{\partial \dot y_p}{\partial K_v} + \frac{\partial y_p}{\partial K_v} = -\dot y_p      (22)

The differential equation (22) is identical to eq. (21), except for the input signal. Eq. (22) is called a sensitivity model. When an estimate is made of the value of Kv, e.g. by selecting it equal to the desired value am, the sensitivity coefficient ∂yp/∂Kv can be measured. From eq. (18) it follows that

Sensitivity model

\frac{dK_v}{dt} = \lambda\, e \frac{\partial y_p}{\partial K_v}      (23)

Assuming that the process parameter ap varies slowly compared with the adjustable parameter Ka due to the adaptation, it follows from eq. (20) that

\frac{dK_v}{dt} \approx \frac{dK_a}{dt} \qquad\text{and}\qquad \frac{\partial y_p}{\partial K_v} \approx \frac{\partial y_p}{\partial K_a}      (24)

In figure 9 the resulting adaptive system has been given.

FIGURE 9   Example of an adaptive control system based on a sensitivity model. Ka is adjusted to compensate for variations in ap.
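To make the construction concrete, the following Python fragment (a sketch, not part of the original notes) generates the sensitivity coefficient of eq. (22) by feeding -dy_p/dt into a copy of the nominal dynamics, with Kv inside the sensitivity model taken equal to the desired value am = 1.4. The step input, the numerical values and the Euler integration are assumptions:

import numpy as np

dt, T = 0.01, 30.0
Kv, b = 0.5, 1.0             # actual (unknown) process damping and gain
am = 1.4                     # value of Kv assumed inside the sensitivity model

xp = np.zeros(2)             # process state      [y_p, dy_p/dt]
s = np.zeros(2)              # sensitivity state  [dy_p/dKv, d/dt(dy_p/dKv)]
sens = np.zeros(int(T / dt))

for k in range(int(T / dt)):
    u = 1.0                                                  # step input (assumed)
    xp += dt * np.array([xp[1], -xp[0] - Kv * xp[1] + b * u])
    s += dt * np.array([s[1], -s[0] - am * s[1] - xp[1]])    # eq. (22), driven by -dy_p/dt
    sens[k] = s[0]

# eq. (23) would now combine sens[k] with the error e to adjust Kv
print("largest (negative) sensitivity during the transient: %.2f" % sens.min())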

EXERCISE

Derive the adaptive law for adjustment of the parameter Kb


The sensitivity method is simple and straightforward. The major disadvantage is
that stability can only be demonstrated by simulation or tests in practice. An
analytical proof of stability cannot be given.

The stability approach: the method of Liapunov

The design of adaptive systems based on stability theory originated because of the problems with stability in designs such as those based on sensitivity models. The second method of Liapunov is the most popular approach. A related approach is based on hyperstability theory. Both approaches may produce the same results, so that there is no direct preference for one of the two with respect to the resulting algorithms.
The use of Liapunov's stability theory for the design of adaptive systems was introduced by Parks in 1966. Derivation of the adaptive laws is done most easily


when process and reference model are described in state-space form. The process can be written as:

\dot x_p = A_p x_p + B_p u      (25)

with

A_p = A_p' + K_a      (26)

and

B_p = B_p' + K_b      (27)

where Ap' and Bp' contain the varying process parameters, which can be compensated for by the controller parameters Ka and Kb. The reference model can be written as:

\dot x_m = A_m x_m + B_m u      (28)

Subtracting eq. (25) from eq. (28) yields, after defining e as

e = x_m - x_p      (29)

\dot e = A_m e + \Delta A\, x_p + \Delta B\, u      (30)

with

\Delta A = A_m - A_p      (31)

\Delta B = B_m - B_p      (32)

Adjustment of ΔA and ΔB necessitates non-linear adaptive laws like eqs. (10) and (11). Consequently, the resulting differential equation (30) is non-linear. To ensure that e → 0 for t → ∞ it is necessary and sufficient to prove that e = 0 is a stable equilibrium solution. According to Liapunov's stability theory this can be done when we can find a (scalar) Liapunov function V(e) with the following properties:
- V(e) is positive definite (this means that V > 0 for e ≠ 0, and V = 0 for e = 0);
- \dot V(e) is negative definite (this means that \dot V < 0 for e ≠ 0, and \dot V = 0 for e = 0);
- V(e) → ∞ if ||e|| → ∞.
When the Liapunov function V(e) is correctly chosen, the adaptive laws follow directly from the conditions under which \dot V(e) is negative definite.


The main (theoretical) problem is the choice of a suitable V(e). Many suitable Liapunov functions can be found, and different Liapunov functions lead to different adaptive laws. Searching for the Liapunov function which belongs to a certain candidate algorithm is a difficult procedure. However, in the literature several standard Liapunov functions have been given which yield useful adaptive laws. Simple and generally applicable adaptive laws are found when we use the Liapunov function:

V(e) = e^T P e + a^T \alpha\, a + b^T \beta\, b      (33)

where
P is an arbitrary positive definite symmetrical matrix,
a and b are vectors which contain the non-zero elements of the ΔA and ΔB matrices,
α and β are diagonal matrices with positive elements which determine the speed of adaptation.

The choice of the Liapunov function as given in eq. (33) is a quite straightforward one. The Liapunov function represents a kind of energy that is present in the system, and when this energy content goes to zero, the system reaches a stable equilibrium. In dynamic systems the energy is present in the integrators, which can also be considered as the state variables of the system. The components of e, a and b are the state variables of the system described by eq. (30). The components of a and b are the parameter errors and can be seen as wrong initial conditions of the adaptive controller parameters. It is thus desirable that all state variables e, a and b go to zero.

With this choice of P, α and β, V(e) is a positive definite function. Differentiation of V(e) yields:

\dot V = \dot e^T P e + e^T P \dot e + 2 a^T \alpha \dot a + 2 b^T \beta \dot b      (34)

Together with eq. (30) this yields:

\dot V = (A_m e)^T P e + e^T P (A_m e) + 2 e^T P \Delta A\, x_p + 2 a^T \alpha \dot a + 2 e^T P \Delta B\, u + 2 b^T \beta \dot b      (35)

Let:

A_m^T P + P A_m = -Q      (36)

Because the matrix Am belongs to a stable system (the reference model), it follows from the theorem of Malkin that Q is a positive definite matrix. This implies that the first part of eq. (35):

e^T (A_m^T P + P A_m) e = -e^T Q e      (37)

is negative definite. Stability of the system can be guaranteed if the last part of eq. (35) is set equal to zero. Let:


e^T P \Delta A\, x_p + a^T \alpha\, \dot a = 0      (38)

e^T P \Delta B\, u + b^T \beta\, \dot b = 0      (39)

This yields, after some mathematical manipulations, the general form of the adjustment laws:

\dot a_{ni} = -\frac{1}{\alpha_{ni}} \sum_{k=1}^{n} p_{nk}\, e_k\, x_i      (40)

\dot b_i = -\frac{1}{\beta_i} \sum_{k=1}^{n} p_{nk}\, e_k\, u_i      (41)

where n is the order of the system.


These algorithms have the same basic structure as the algorithms which were found earlier in a heuristic way. The main difference is that all elements of the error vector e (with weighting factors pnk) are used in the adaptive law, instead of only the error signal e. The factors pnk are the elements of the n-th row and the k-th column of the P-matrix. These elements can be found with the aid of eq. (36): an arbitrary positive definite matrix Q is chosen, after which P can be solved from eq. (36).
In a simulation P can easily be solved from eq. (36) by writing eq. (36) as

A_m^T P + P A_m + Q = 0

and considering this expression as the equilibrium solution of the differential equation

\frac{dP}{dt} = A_m^T P + P A_m + Q

This equation can easily be solved in e.g. 20-sim. The speed of convergence can be increased by introducing a speed-up factor λ > 1, e.g. λ = 10:

\frac{1}{\lambda}\frac{dP}{dt} = A_m^T P + P A_m + Q

The following steps are thus necessary to design an adaptive controller with the method of Liapunov:
1. determine the differential equation for e
2. choose a Liapunov function V
3. determine the conditions under which \dot V is negative definite
4. solve P from A_m^T P + P A_m = -Q.

The whole procedure will be clarified with an example.


Example

In this example we will again use the system of figure 3.

Step 1

Determine the differential equation for e.


The process can be described by the following differential equations:


\dot x_1 = x_2      (42)

\dot x_2 = -x_1 - (a_p + K_a) x_2 + (b_p + K_b)\, u      (43)

This yields:

A_p = \begin{bmatrix} 0 & 1 \\ -1 & -(a_p + K_a) \end{bmatrix} \qquad b_p = \begin{bmatrix} 0 \\ b_p + K_b \end{bmatrix}      (44)

The reference model (with \omega_n = 1 and 2\zeta\omega_n = 1.4) is described by:

A_m = \begin{bmatrix} 0 & 1 \\ -1 & -1.4 \end{bmatrix} \qquad b_m = \begin{bmatrix} 0 \\ 1 \end{bmatrix}      (45)

The differential equation for e is:

\dot e = A_m e + \Delta A\, x_p + \Delta b\, u      (46)

with

\Delta A = \begin{bmatrix} 0 & 0 \\ 0 & -1.4 + (a_p + K_a) \end{bmatrix} \qquad \Delta b = \begin{bmatrix} 0 \\ 1 - (K_b + b_p) \end{bmatrix}      (47)

Only one element of the ΔA matrix, a22, and one element of the Δb vector, b2, are not equal to zero.
Step 2

Choose a Liapunov function V.


In this example eq. (33) will be used as a Liapunov function.

Step 3

Determine the conditions under which dV/dt is negative definite.

These are the adjustment laws (40) and (41), which in this case simplify to:

\dot a_{22} = -\frac{1}{\alpha_{22}} (p_{21} e_1 + p_{22} e_2)\, x_2      (48)

\dot b_2 = -\frac{1}{\beta_2} (p_{21} e_1 + p_{22} e_2)\, u      (49)

From eq. (47) it follows that:

a_{22} = -1.4 + (a_p + K_a)      (50)

and

b_2 = 1 - (K_b + b_p)      (51)

With the assumption that the speed of adaptation is fast compared with the speed of variations in the process parameters, it follows from eqs. (48) and (49) that:

\dot K_a = -\frac{1}{\alpha_{22}} (p_{21} e_1 + p_{22} e_2)\, x_2      (52)

\dot K_b = \frac{1}{\beta_2} (p_{21} e_1 + p_{22} e_2)\, u      (53)

Step 4

Solve P from A_m^T P + P A_m = -Q

Any positive
definite matrix Q
leads to a positive
definite matrix P.
The opposite is not
true.

This fourth step starts with the choice of an arbitrary matrix Q. It is apparent
that, although the choice of Q is arbitrary, the performance of the adaptive
system is influenced by the choice of Q. The first choice of Q is not necessarily
the best one. But whatever the choice of Q is, as long as Q is positive definite,
the stability of the resulting adaptive system can be guaranteed. For simple
systems P can be found manually, for higher order systems, the computer can be
used. As an example select:
Q = \begin{bmatrix} 4 & 0.8 \\ 0.8 & 1.6 \end{bmatrix}      (54)

which yields the following matrix equation:

\begin{bmatrix} 0 & -1 \\ 1 & -1.4 \end{bmatrix} \begin{bmatrix} p_{11} & p_{21} \\ p_{21} & p_{22} \end{bmatrix} + \begin{bmatrix} p_{11} & p_{21} \\ p_{21} & p_{22} \end{bmatrix} \begin{bmatrix} 0 & 1 \\ -1 & -1.4 \end{bmatrix} = -\begin{bmatrix} 4 & 0.8 \\ 0.8 & 1.6 \end{bmatrix}      (55)

This can be rewritten as:

\begin{bmatrix} -2 p_{21} & p_{11} - 1.4 p_{21} - p_{22} \\ p_{11} - 1.4 p_{21} - p_{22} & 2 p_{21} - 2.8 p_{22} \end{bmatrix} = \begin{bmatrix} -4 & -0.8 \\ -0.8 & -1.6 \end{bmatrix}      (56)

from which it follows that:

P = \begin{bmatrix} 4 & 2 \\ 2 & 2 \end{bmatrix}      (57)
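This result is easy to verify numerically. The sketch below (not part of the original notes; SciPy is assumed to be available) solves the Liapunov equation of eq. (36) directly, instead of integrating dP/dt as suggested earlier:

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

Am = np.array([[0.0, 1.0], [-1.0, -1.4]])          # eq. (45)
Q = np.array([[4.0, 0.8], [0.8, 1.6]])             # eq. (54)
# solve_continuous_lyapunov(a, q) solves a*X + X*a^T = q, so taking
# a = Am^T and q = -Q gives Am^T*P + P*Am = -Q, cf. eq. (36)
P = solve_continuous_lyapunov(Am.T, -Q)
print(P)                                           # [[4. 2.], [2. 2.]], cf. eq. (57)
print(np.all(np.linalg.eigvals(P) > 0))            # True: P is positive definite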

Both factors p21 and p22 are thus 2 in this case. The complete adaptive laws in integral form are:

K_a = -\frac{1}{\alpha_{22}} \int (p_{21} e_1 + p_{22} e_2)\, x_2\, dt + K_a(0)      (58)

K_b = \frac{1}{\beta_2} \int (p_{21} e_1 + p_{22} e_2)\, u\, dt + K_b(0)      (59)


From a (theoretical) stability point of view α22 and β2 may be freely chosen; stability is always guaranteed. In our earlier experiments it appeared that a choice of 1/α22 = 60 and 1/β2 = 10 yielded an unstable system with the adaptive laws (10) and (11). Repeating the experiment with the adaptive laws (58) and (59), as in figure 10a, gives the results depicted in figure 10b. The system now remains stable and the error between process and reference model decreases quickly. The not-connected block "Liapunov" solves the Liapunov equation (36). The 20-sim code is as follows:
The 20-sim code is as follows:
0.1 * ddt(P,Po) = transpose (Am)*P + P*Am + Q;
p21=P[2,1];
p22=P[2,2];
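For readers without 20-sim, the following Python sketch reproduces the same experiment in outline (an approximation, not the original model): adaptive laws (58) and (59) with p21 = p22 = 2, 1/α22 = 60 and 1/β2 = 10. The square-wave set-point, the initial parameter values and the Euler integration are assumptions of this sketch:

import numpy as np

dt, T = 0.001, 100.0
p21, p22 = 2.0, 2.0          # from eq. (57)
ga, gb = 60.0, 10.0          # adaptive gains 1/alpha22 and 1/beta2
a_p, b_p = 0.0, 0.0          # process parameters, as in the earlier experiment
Ka, Kb = 0.5, 0.7            # initial controller parameters (assumed)
xm = np.zeros(2)             # reference model state [x1m, x2m]
xp = np.zeros(2)             # process state         [x1p, x2p]

for k in range(int(T / dt)):
    u = 1.0 if (k * dt) % 20 < 10 else -1.0
    e1, e2 = xm[0] - xp[0], xm[1] - xp[1]
    v = p21 * e1 + p22 * e2                        # weighted error signal, cf. eq. (70)
    xm += dt * np.array([xm[1], -xm[0] - 1.4 * xm[1] + u])
    xp += dt * np.array([xp[1], -xp[0] - (a_p + Ka) * xp[1] + (b_p + Kb) * u])
    Ka += dt * (-ga * v * xp[1])                   # eq. (58)
    Kb += dt * ( gb * v * u)                       # eq. (59)

print("Ka = %.2f (ideal 1.4), Kb = %.2f (ideal 1.0)" % (Ka, Kb))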

FIGURE 10a   Adaptive system designed with Liapunov


FIGURE 10b   Responses (process, reference model, error, and the adjusted parameters Ka and Kb)

Remarks
- The stability conditions found with the method of Liapunov are sufficient conditions; they are not necessary. This implies, for instance, that there is still some freedom in varying the relative values of the elements of the P-matrix, without directly affecting the stability of the system.


- The speed of adaptation, which can be varied by the adaptive gains 1/α22 and 1/β2, may in principle be chosen freely. In a practical system the adaptive gains are limited. There will always be structural differences between the process and the reference model, which implies that the proof of stability does not hold anymore.
- In some situations the adaptation can be sped up by selecting other adaptive laws. Instead of the integral adaptive laws (40) and (41), proportional plus integral adaptive laws are also generally used. In the latter case, eq. (40), for instance, changes into:

a_{ni} = a_{ni}(0) - \frac{1}{\alpha_{ni,I}} \int_0^t \Big(\sum_{k} p_{nk} e_k\Big) x_i \, d\tau - \frac{1}{\alpha_{ni,P}} \Big(\sum_{k} p_{nk} e_k\Big) x_i      (60)

The proportional term in eq. (60) can also be considered as a kind of signal
adaptation. It speeds up the adaptation when the process parameters are
rapidly changing and may improve the transient behavior of the parameters.
Instead of the proportional term a relay function may also be used. This
results in a relay plus integral adaptive law.
Figure 10c gives the responses with a proportional plus integral adaptive law.
It is clear that the effect of adding the proportional term is more damping of
the transients of the adjusted parameters, while also the error signal
converges faster than without the proportional term.
FIGURE 10c   Results with a proportional plus integral adaptive law (integral gains 60 and 10, proportional gains 120 and 20)

- On the other hand, one should be careful when applying complex adaptive laws. In order to be able to apply a more complex algorithm, generally meant to speed up the adaptation, a good resemblance between the structures of process and reference model is essential. If there is such a resemblance, the performance of the system may be improved. If there is more uncertainty, the simpler algorithms, like those of eqs. (40) and (41), may give good results. Complex laws also require that more parameters be chosen, while automatic tuning was just one of the aims of using adaptive control.
- A mathematical proof of stability with other adaptive laws requires the availability of other Liapunov functions. Several Liapunov functions can be found in the literature. However, in principle the test of whether a certain candidate adaptive law yields a stable system can be done more easily with the hyperstability method, as we will see later on.


EXERCISE

Design an adaptive controller for the system of figure 11.

Figure 11   A process and a controller


A second-order process with transfer function
K
s (s + a)

(61)

is controlled with the aid of a PD-controller (the differentiating action is


realized by means of state feedback). The parameters of this controller are Kp
and Kd .Variations in the process parameters K and a can be compensated for
by variations in Kp and Kd . Low-frequency disturbances of the input of the
process (with amplitude Kw) can be compensated by an additional input with
gain Ki, which may be considered as a kind of adaptive integrating action. In the
stationary state Kw = Ki should hold.
The desired performance of the complete feedback system is described by the
transfer function:
X 1, m
U

n2
s 2 + 2 zn s + n2

n = 1 en z = 0.7

(62)

Hint

Describe the system in state variables:

\dot x = \begin{bmatrix} 0 & 1 \\ -K K_p & -(a + K_d K) \end{bmatrix} x + \begin{bmatrix} 0 & 0 \\ K K_p & K(K_w - K_i) \end{bmatrix} \begin{bmatrix} u \\ 1 \end{bmatrix}      (63)

For the design of this adaptive system it is easier to describe the system with the aid of the state variables ξ and x2, where ξ = u - x1:

\begin{bmatrix} \dot\xi \\ \dot x_2 \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ K K_p & -(a + K_d K) \end{bmatrix} \begin{bmatrix} \xi \\ x_2 \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & K(K_w - K_i) \end{bmatrix} \begin{bmatrix} \dot u \\ 1 \end{bmatrix}      (64)

The description of the reference model is:

\begin{bmatrix} \dot\xi_m \\ \dot x_{2,m} \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ \omega_n^2 & -2\zeta\omega_n \end{bmatrix} \begin{bmatrix} \xi_m \\ x_{2,m} \end{bmatrix} + \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \dot u \\ 1 \end{bmatrix}      (65)


Derive the adaptive law for this system, using the procedure treated before.

The hyperstability method

Based on the work of Popov, Landau introduced the hyperstability concept for the design of adaptive control systems. The hyperstability method is, just as the second method of Liapunov, suitable for proving the stability of a non-linear system. The concept of hyperstability is closely related to the concept of passivity introduced by Willems. Passive elements such as capacitors, inductors and resistors never store more energy than the energy that is stored in the element at t = 0 plus the energy supplied to it by the environment. The same holds for systems that consist of a network of such passive elements. If all the components in such a system are linear, the transfer function between the two energy-conjugated variables at the port of such a passive system is positive real. This means that the phase shift of such a system is always between plus and minus 90 degrees.

Hyperstability
and passivity

Example

Consider, as an example, the arbitrary electric network and bond graph of figure 12a.

FIGURE 12a   Electric network and corresponding bond graph of a passive system

We compute the transfer function between the effort and flow, H(s) = f(s)/e(s) (or between the voltage and current, H(s) = I(s)/U(s)), at the port of this passive system and draw the Nyquist and Bode plots of this 4th-order transfer function. When we observe the phase shift φ, we see that -90° < φ < 90° (see figure 12b). The transfer function itself contains a number of zeros that is equal to the number of poles minus one.

FIGURE 12b   Nyquist plot and Bode plot of a 4th-order passive system

The following statement holds for a scalar linear passive (or hyperstable) system:

Passivity of a
linear system

Passivity or hyperstability implies that the real part of the polar plot (Nyquist plot) is always positive:

\operatorname{Re}\big(H(j\omega)\big) \geq 0

Any transfer function that does not have this property can be made positive real by adding a number of properly chosen zeros. We can use this property for the design of an adaptive controller. We reconsider the error equation (30):

\dot e = A_m e + \Delta A\, x_p + \Delta B\, u      (30)

The linear part of this system is given by:

\dot e = A_m e      (66)

The last two non-linear and time-varying terms of (30) can be considered as an input to this linear system, denoted -w:

\dot e = A_m e - w      (67)

We have seen before that the components of w, which contain the adjustable parameters Ki, depend on e. This implies that w can be seen as a (non-linear, time-varying) feedback of the system given by eq. (66). We have also seen that the adjustable parameters do not necessarily depend on the state vector e itself: they depend on a combination of one or more components of e. Therefore, we extend the system of eq. (67) with the output equation that defines the signal v:

\dot e = A_m e - w, \qquad v = C e      (68)

where C is the output matrix of the linear system. The adaptive system, seen as a linear system with a non-linear feedback, can thus be drawn as in figure 13.


FIGURE 13   A non-linear system split into two parts

If both parts of figure 13 are hyperstable, the complete feedback system will be a hyperstable (passive) system. For the linear part this implies that its transfer function should have a number of zeros that is equal to the number of poles minus one, such that the system effectively behaves as a first-order system. Such zeros can be introduced by a proper choice of the output matrix C. It can be demonstrated that such a positive real transfer function can be guaranteed if C is chosen equal to P, where P is the solution of the Liapunov equation:

A_m^T P + P A_m = -Q      (69)

This implies that:

v = P e      (70)
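For the second-order example this choice of C = P can be checked numerically. The sketch below (an illustration with assumed details: the matrices of eqs. (45) and (57), and the disturbance entering at the second state equation) evaluates the real part of the resulting transfer function on a frequency grid:

import numpy as np

Am = np.array([[0.0, 1.0], [-1.0, -1.4]])     # eq. (45)
P = np.array([[4.0, 2.0], [2.0, 2.0]])        # eq. (57)
b = np.array([0.0, 1.0])                      # input enters at the second state
c = P[1, :]                                   # row of P forming p21*e1 + p22*e2

w = np.logspace(-2, 2, 400)
H = np.array([c @ np.linalg.solve(1j * wk * np.eye(2) - Am, b) for wk in w])
print("minimum of Re H(jw) on the grid: %.4f" % H.real.min())   # non-negative

For these numbers H(s) = 2(s+1)/(s^2 + 1.4s + 1), whose real part on the imaginary axis equals 2(1 + 0.4ω²)/|(jω)² + 1.4jω + 1|² and is therefore never negative.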

To prove the passivity of the non-linear, time-varying part we cannot use an approach based on transfer functions. We have to consider the basic concept of a passive system, i.e. a system that never delivers more energy to the outside world than the (finite) amount of energy that is stored in the system at t = 0 plus the amount of energy delivered to the system for t > 0. Energy flow towards the passive element is considered a positive energy flow. The demand that an element is passive and does not supply more net energy to the environment than the positive and finite amount of energy E(0), present in the element at t = 0, can be mathematically expressed as:

\int_0^t e f \, d\tau + E(0) \geq 0, \qquad\text{or}\qquad \int_0^t e f \, d\tau \geq -E(0)      (71)

where e and f are power-conjugated variables. In other words, such a passive element can only store or dissipate energy.
Example

As an example we consider a (possibly non-linear) capacitor C. The energy which goes into this capacitor is described by:

\int_0^t e(\tau)\, i(\tau)\, d\tau      (72)

This expression is positive if the environment delivers energy to the capacitor. For a passive system it is therefore necessary that not more energy is delivered to the environment than the energy present at t = 0 plus the net stored energy, given by eq. (72):

\tfrac{1}{2} C e^2(0) + \int_0^t e(\tau)\, i(\tau)\, d\tau \geq 0 \qquad\text{or}\qquad \int_0^t e(\tau)\, i(\tau)\, d\tau \geq -\tfrac{1}{2} C e^2(0)      (73)

In terms of our adaptive system: when we require that the non-linear part of figure 13 with the input and output variables v and w is passive, we have to choose w such that the following condition holds:


\eta(t) = \int_0^t v^T w \, d\tau \geq -\eta_0^2      (74)

where η0² < ∞ is a function of the initial conditions ΔA(0) and ΔB(0). Because w contains the adaptive laws, this enables candidate adaptive laws to be checked for their stability properties.
When we split the system of eq. (30) into a linear and a non-linear part and we apply the adaptive laws (58) and (59), we find the block diagram of figure 14. In order to prove that the non-linear part is passive, we must demonstrate that with these adaptive laws condition (74) is satisfied.

Proof of passivity
of the non-linear
part

FIGURE 14   Adaptive control system split into a linear and a non-linear part

We want to check condition (74):

\eta = \int_0^t v^T w \, d\tau      (75)

with

w = -(\Delta A\, x_p + \Delta B\, u)      (76)

\Delta A = A_m - A_p      (77)

\Delta B = B_m - B_p      (78)


Without loss of generality, we assume that

\Delta A(0) = \Delta A_0 \quad\text{and}\quad \Delta B(0) = \Delta B_0      (79)

When the matrices ΔA and ΔB are adjusted by means of the adaptive laws Φ(t) and Ψ(t), respectively, it follows that

\Delta A = \int_0^t \Phi(\tau)\, d\tau + \Delta A_0      (80)

\Delta B = \int_0^t \Psi(\tau)\, d\tau + \Delta B_0      (81)
Together with eq. (75) and (76) it follows that


t

= v T x p ( ) d + A0 + u ( ) d + B0 d

(82)

Any candidate adaptive law can now be substituted in this equation. When we
try, for instance, the adaptive laws (40) and (41) used earlier, such that:

( t ) = vx Tp

(83)

( t ) = vuT

(84)

then it follows that:


t

= v T x p vx Tp d + A0 + u vuT d + B0 d

(85)

This can be split into two parts and be rewritten in a quadratic form as:

1 = vx d + A0 A02
2 0

2 =

T
p

2
T

+
vu
d
B

0 B0
2 0

(86)

(87)

The first terms of both equations are quadratic terms and are therefore positive. The conditions necessary to satisfy eq. (74) are thus:

\eta_1 \geq -\tfrac{1}{2}\alpha\, \Delta A_0^2      (88)

\eta_2 \geq -\tfrac{1}{2}\beta\, \Delta B_0^2      (89)


When ΔA0² and ΔB0² are finite (smaller than ∞), condition (74) is satisfied. When P is selected according to eq. (69), the complete non-linear feedback system is hyperstable.
For the system of figure 10a this leads, with the aid of eqs. (50) and (51), to the adaptive laws (52) and (53).
Remarks
The difference between the hyperstability method and the method of Liapunov is
not that different adaptive laws are found. Both methods differ in the way these
laws are found. Instead of searching for Liapunov functions which belong to a
certain candidate algorithm, the hyperstability method requires the proof that eq.
(74) is satisfied. In principle, this is easier than finding a proper Liapunov
function.

When we compare the adaptive laws which are found from the stability approach with those found from the sensitivity approach, the following differences become clear. Instead of using only the error signal e, the difference between the outputs of the process and the reference model, the stability approach uses the signal Pe. In the cases we considered until now, Pe corresponded to the signal p21e1 + p22e2. This expression also contains the derivative of the error e, with a positive effect on the system's stability. By introducing the hyperstability method it has been demonstrated that due to this derivative, and in higher-order systems due to higher derivatives as well, the maximum phase lag never exceeds 90 degrees. In addition, the stability approach uses the states of the process or the input signal of the process, instead of the sensitivity coefficients.
When we compare the signals e with p21e1 + p22e2 and ∂yp/∂Kv with x1, it appears that the corresponding signals of the stability approach and the sensitivity approach have a similar shape, but that the signals of the sensitivity approach (e and ∂yp/∂Kv) have a phase lag. One could consider the sensitivity coefficients as a kind of estimated states; the sensitivity model is a state estimator from this point of view. It will be clear that the observed phase lag has a deteriorating effect on the system's stability. This explains the superiority of the designs following the stability approach.
EXERCISE

Show that a proportional and a proportional plus integral adaptive law as used
before, also lead to a hyperstable adaptive system.

Identification and state estimation

In the previous sections it has been indicated how MRAS can be used for direct adaptation of the parameters of a controller. In this case the process must follow the response of the reference model. However, when the process and the reference model change places, the reference model, in this case referred to as the adjustable model, will follow the response of the process (figure 15). This is realized by adjusting the parameters of the adjustable model. In doing so two things are achieved:

1. Identification of the process. The adjustment of the model parameters, primarily intended to get identical responses from the process and the reference model, leads, after some time, to a situation where the parameters of the process and the reference model are identical.

2. State estimation. When the adjustment of the adjustable model is successful, the states of the adjustable model will, after some time, be identical to the states of the process. The model states can be considered as estimates of the process states.

FIGURE 15   MRAS applied for identification

The adaptive laws for identification and for (direct) adaptation are identical, except that the states xp,i are replaced by the states xm,i. It should be noted that the proof of stability in this case requires that the following Liapunov equation be solved:

A_p^T P + P A_p = -Q      (90)

where instead of the A-matrix of the reference model Am the matrix Ap is used. The process has to be stable in order to get a stable adaptive system. Ap is also unknown: an estimate of Ap has to be made in order to be able to solve the Liapunov equation.
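As a minimal illustration of this identification structure (a sketch with assumed numerical values, not the ship example given later), the fragment below adjusts a first-order model so that it follows a stable first-order process; in the adjustment laws the model state takes the place of the process state:

import numpy as np

dt, T = 0.01, 300.0
a, b = -0.5, 1.0             # "unknown" process x_p' = a*x_p + b*u (stable: a < 0)
a_hat, b_hat = -1.0, 0.5     # initial parameters of the adjustable model
g1, g2 = 2.0, 2.0            # adaptive gains (assumed)
xp, xm = 0.0, 0.0

for k in range(int(T / dt)):
    u = 1.0 if (k * dt) % 40 < 20 else -1.0       # square wave: a reasonably rich input
    e = xp - xm                                   # identification error
    xp += dt * (a * xp + b * u)
    xm += dt * (a_hat * xm + b_hat * u)
    a_hat += dt * g1 * e * xm                     # model state replaces process state
    b_hat += dt * g2 * e * u

print("a_hat = %.2f (true %.2f), b_hat = %.2f (true %.2f)" % (a_hat, a, b_hat, b))

With noise added to the process state the same structure still yields unbiased estimates, which is the property exploited in the autopilot example below.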
Another issue relevant in the case of identification is the following. The design procedure according to the method of Liapunov yields a proof of asymptotic stability of e. This means that e → 0 when t → ∞. For the parameter errors (ΔA and ΔB) only stability in the sense of Liapunov can be guaranteed. That means that it may happen that ΔA ≠ 0 and ΔB ≠ 0 for t → ∞. It can easily be seen that such a situation may occur. When, for instance, u = 0 for t → ∞, xp and xm will both approach zero and thus e as well; however, no parameter adjustment will take place anymore. In the case of adaptation this is hardly a problem, because the primary goal is then to ensure that e → 0; the values of ΔA and ΔB do not really matter. But in the case of identification ΔA and ΔB must approach zero. It can be demonstrated that when the input signal is "sufficiently rich", which means that the input signal has sufficient power at all frequencies within the bandwidth of the system, it is guaranteed that ΔA and ΔB both approach zero.
When the states of the process are corrupted with noise, the structure of figure 15 can be used to get filtered estimates of the process states. When the input signal itself is not very noisy, the model states will also be almost free of noise. By selecting adaptive gains which are not too large, noise on e can be prevented

from affecting the model states too much. It is important to notice that in this
case the filtering is realized without phase lag. The MRAS structure acts as an
adaptive observer. This will be illustrated by means of an example.
Example

Figure 16 illustrates this principle used in an autopilot for ships. The ships
parameters are identified in a closed loop system. The transfer of the ship, from
the rudder angle to the rate of turn  , can be described by the first order
transfer function:
Ks

=
s s + 1

(91)

The influence of waves on the motions of the ship can be modeled as an extra input to this model. However, it is not desirable to react to these waves with steering signals. One way to achieve this is to consider these disturbances as measurement noise, rather than process noise. Therefore, the influence of waves is simulated by adding coloured noise to the rate-of-turn signal ψ̇; this yields the measured rate-of-turn signal. After integration of this signal the heading ψ is obtained.
This process is controlled by an autopilot which consists of a proportional control action with gain Kp and a rate feedback with gain Kd. In order to prevent the rudder from reacting to each wave, the estimated rate of turn is used in the controller instead of the measured one. This estimate is obtained with the above-mentioned state estimator: the rudder angle is measured and is used as the input signal of a model with the transfer function

\frac{\hat{\dot\psi}}{\delta} = \frac{K_m}{\tau_m s + 1}      (92)

FIGURE 16   MRAS applied to suppression of noise in an autopilot for ships


Because in general Ks and τs will be unknown and may vary when, for instance, the speed of the ship changes, it is necessary to adjust Km and τm with the aid of an adaptation mechanism.
In figure 17 responses are given of a simulation of this system, where

K_m(0) = \tfrac{1}{2} K_s      (93)

\tau_m(0) = 2 \tau_s      (94)

The following signals are given in this figure:
- the rate-of-turn signal with noise due to the waves
- the rate-of-turn signal without this noise
- the estimated rate-of-turn signal
- the model gain Km
- the inverse of the time constant of the model, 1/τm
- the heading ψ
- the rudder angle based on the measured rate of turn
- the rudder angle based on the estimated rate of turn
FIGURE 17   Identification and state estimation with the aid of MRAS


FIGURE 18   Effect of decreasing adaptive gains

The choice of the speed of adaptation is a compromise between fast parameter adjustment and suppression of fluctuations in the parameters due to the noise. By introducing decreasing adaptive gains, identification can be fast in the beginning and slow down later on (see figure 18, where the speed of adaptation is slowly decreased for t > 400). This results in parameters that do not fluctuate even when noise is present, and in smooth estimates of the rate-of-turn signal. In the literature (Landau, 1979) variations on this principle have been described. By adding a second adjustable model the speed of the parameter estimation can be increased without affecting the suppression of fluctuations of the parameters (Hirsch and Peltie, 1973). It is also possible to improve the state estimation by adding a second adjustable model (Van Amerongen, 1982). The latter idea has given good results in an autopilot for ships. A summary of this application has been given by Van Amerongen (1984). This publication can be found in appendix A.
The result of the identification can be used to adjust the controller parameters, e.g. by minimizing, with the aid of the Riccati equation, the criterion

J = \int (\varepsilon^2 + \delta^2)\, dt      (95)

where ε is the course error and δ the rudder angle.


The resulting structure is shown in figure 19 and the responses in figure 20.

FIGURE 19   Indirect adaptive control: the identification result is used to compute an optimal controller


FIGURE 20   Simulation results. At the left: results of the on-line controller parameter optimization

Practical problems, noise, non-linearities

Until now the practical problems have been put aside. In this section a few
practical problems and their solutions will be discussed:
- not completely matching process and reference model structures
- non-linearities in the process or the reference model
- noise (in general, disturbances in the process)
Structural differences

When the structures of the process and the reference model do not completely match, a formal proof of stability does, in theory, not hold anymore. If this were immediately disastrous, a practical application of MRAS would hardly be possible. It has already been mentioned that in this case simple algorithms have advantages over more complex algorithms. However, it is still important that the dominating dynamics of the process are present in the reference model as well. In general, it appears to be possible to compensate for small remaining differences between the structure of the process and the reference model by relatively fast parameter variations.

Example

This is illustrated with the following example. A DC-motor with a flexible transmission is controlled by means of an adaptive controller, based on a second-order reference model. Because feedback of the motor angle and motor angular velocity is used, the badly damped poles due to the flexible transmission do not hurt the stability: they are compensated by two complex zeros (collocated control). As a result the adaptive gains vary during each transient, but overall a good control performance is achieved. Figure 21 gives the adaptive control system and some detail of the model of the motor and load. Figure 22 gives some relevant simulation results.


FIGURE 21   Adaptive control system and a more detailed model of the process

FIGURE 22   Responses of the process, reference model and error, and the adjusted parameters

Non-linearities

Although the design methods which have been applied, such as the method of Liapunov, are especially suited to non-linear systems, in the adaptive systems described it is essential that the non-linearities be restricted to the adaptation mechanism. Process and reference model must in principle be linear. Most non-linearities can, however, be translated into variations of the other parameters of the system. Such variations may be compensated by the adaptation mechanism. It is possible to give a proof for stable adaptive laws for non-linear systems as well, but those laws are in general more complex than the algorithms given before. It appears that these types of algorithms fail when there are only small structural differences between the process and the reference model.
FIGURE 23   Saturation

Saturation-type non-linearities have a more detrimental effect (figure 23). This type of non-linearity is often found in various types of actuators, such as hydraulic valves which are completely open, electronic amplifiers, etc. In this category a distinction can be made between non-linearities which limit the amplitude of the input signal of the process and those limiting the rate of change of this input signal. The first category can be dealt with simply. In principle there are two possibilities:

1. Switch off the adaptation mechanism as long as the process is in saturation.
2. Modify the input signal of the process and the reference model such that the saturation is never reached.

The latter method is preferred, because in that case the adaptive system can remain unchanged. This approach is not only suited to eliminating saturation-type non-linearities from the process, but to dealing with saturations in the reference model as well. The proof of stability is not affected by this approach, because the non-linearities are in principle removed from the control loop. The paper in Appendix A gives an example of this solution.

Noise

In practical applications of MRAS an important problem is the presence of noise on the states of the process. These difficulties arise due to the multiplication of the error signal

e = \bar e + \xi_e      (96)

with

x_p = \bar x_p + \xi_p      (97)

This yields, for instance, for the components e_i and x_{i,p}:

e_i\, x_{i,p} = \bar e_i\, \bar x_{i,p} + \bar e_i\, \xi_{i,p} + \bar x_{i,p}\, \xi_{i,e} + \xi_{i,e}\, \xi_{i,p}      (98)

Assuming the noise contributions ξi,e and ξi,p to have zero mean and to be uncorrelated with the noise-free signals, integration of eq. (98), as in the adaptive laws, yields in the presence of noise one term with a non-zero mean:

\int_0^t \xi_{i,e}\, \xi_{i,p}\, dt      (99)

Because e = xm - xp and xm is noise-free, it follows that

\xi_e = -\xi_p      (100)

and thus

\xi_{i,e}\, \xi_{i,p} = -\xi_{i,p}^2      (101)

Equation (99) indicates that the adjusted parameters will drift away when the
signals in the adaptive system are too small to compensate for this term with a
non-zero mean (figure 24). This is only a problem in the case of adaptation and
not of identification: when MRAS is applied for identification, the noise-free
signal $x_m$ is used instead of $\tilde{x}_p$. This implies that identification
with MRAS yields unbiased estimates of the process parameters.

FIGURE 24   Parameter Ka drifts away because of noise on the signal xp

However, in an adaptive system it is essential that measures be taken to prevent
the parameters from drifting away. A choice can be made from one or more of the
possibilities mentioned below:
- The adaptation may be switched off when there are no set-point changes.
- The same can be realized more smoothly by multiplying the adaptive gains with

  $\frac{1}{1+T}$    (102)

  where T denotes the time interval after the last set-point change. The
  principle of decreasing adaptive gains was already illustrated in figure 18.
- Instead of the measured signal $\tilde{x}_p$, the reference-model signal $x_m$
  may also be used in the adaptation. Although this is theoretically not correct,
  such a system may give good results in practice.
- It is also possible to use the estimated states of the process, obtained by
  means of the earlier-mentioned adaptive state estimator.
- It is possible to measure the terms $\xi_e$ and $\xi_p$ on-line. The product of
  both terms can then be used to compensate for the drift. The usefulness of this
  method is determined by the accuracy with which both terms can be measured.
- Low-pass filters as well as a dead band may be used in the adaptive loops. The
  use of filters is limited because of the detrimental phase lag which they
  introduce. Application of a dead band is simple and effective; the only
  requirement for its effective application is that it must be possible to
  determine an upper bound of the signal which causes the drift.

Depending upon the type of application, a choice can be made from one or more
of these measures.
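
As a small numerical illustration of the drift of eq. (101) and of the dead-band
remedy, the scalar sketch below integrates the noisy product of measured error
and measured process state while process and reference model are at rest. The
adaptive gain gamma, the noise level sigma and the 3-sigma dead band are
illustrative choices, not values taken from the text.

import numpy as np

# With noise xi on x_p the product of measured error and measured state has
# mean -sigma**2 (eq. 101), so a plain integrating adaptive law drifts even
# when process and model are at rest; a dead band on the error suppresses
# most of this drift.
rng = np.random.default_rng(0)
dt, gamma, sigma = 0.01, 1.0, 0.1
dead_band = 3.0 * sigma

K_plain = K_db = 0.0
for _ in range(100_000):            # 1000 s at rest: e = x_p = 0, only noise
    xi = rng.normal(0.0, sigma)     # measurement noise xi_p
    x_p_meas = 0.0 + xi             # x_p + xi_p           (eq. 97)
    e_meas = 0.0 - xi               # xi_e = -xi_p         (eq. 100)
    update = gamma * e_meas * x_p_meas * dt   # mean value -gamma*sigma**2*dt
    K_plain += update
    if abs(e_meas) > dead_band:     # adapt only when the error is significant
        K_db += update

print(K_plain)   # drifts to roughly -gamma*sigma**2*1000 = -10
print(K_db)      # drift strongly reduced (not exactly zero for Gaussian noise)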

Influence of discretization

In the foregoing, process and reference model have been considered as
continuous-time systems. Of course, the continuous algorithms may be used without
any modification for those processes where the sampling interval can be chosen
small enough for the sampled system to be described by the continuous-time
equations. For discrete-time systems, however, discrete MRAS-based adaptation
laws can be derived as well. With the hyperstability approach it is possible to
derive discrete adaptive laws which have a shape similar to the continuous ones.
The method of Liapunov may be applied when a small change is made in the
structure of the adaptive system. We start with the following equations:

$x_m(k+1) = A_m\,x_p(k) + B_m\,u(k)$    (103)

$x_p(k+1) = A_p\,x_p(k) + B_p\,u(k)$    (104)

The matrices Am, Ap, Bm and Bp are found from a transformation of the
continuous-time equations. Their elements are functions of the sampling interval T.
Note that the structure of the system has been changed in such a way that in eq.
(103) xp is used instead of xm. This is illustrated in figure 25.
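
As an aside, the transformation mentioned here is the usual zero-order-hold
discretization; a brief sketch with illustrative second-order matrices (not the
parameter values of the standard example) is given below.

import numpy as np
from scipy.signal import cont2discrete

# Continuous-time description xdot = A x + B u with illustrative values
A = np.array([[0.0, 1.0],
              [0.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
T = 0.1   # sampling interval

# Zero-order hold: Ad = exp(A*T), Bd = (integral of exp(A*s) ds over [0, T]) B
Ad, Bd, *_ = cont2discrete((A, B, np.eye(2), np.zeros((2, 1))), T, method='zoh')

The same transformation, applied to the reference model and to the process,
yields the matrices Am, Bm, Ap and Bp of eqs. (103) and (104).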
FIGURE 25   Series-parallel structure for discrete MRAS

The structure of figure 25 is referred to as series-parallel MRAS, because the
reference model is not a normal parallel reference model, but is placed partly in
series (via Am) and partly in parallel (via Bm) with the process. For this
structure we can find adaptive laws as follows:
$e(k) = x_m(k) - x_p(k)$    (105)

$e(k+1) = \Delta A(k)\,x_p(k) + \Delta B(k)\,u(k)$    (106)

with

$\Delta A(k) = A_m(k) - A_p(k)$    (107)

$\Delta B(k) = B_m(k) - B_p(k)$    (108)

At this stage a Liapunov function V(k) is selected and $\Delta V(k)$ is
determined from

$\Delta V(k) = V(k+1) - V(k)$    (109)

with

$V(k) = e^T(k)\,P\,e(k) + a^T(k)\,\alpha\,a(k) + b^T(k)\,\beta\,b(k)$    (110)

With a, b, α and β defined in conformity with eq. (33), this yields:

$\Delta V(k) = -e^T(k)\,P\,e(k)$
$\qquad +\, e^T(k+1)\,P\,\Delta A(k)\,x_p(k) + \{a(k+1)+a(k)\}^T\,\alpha\,\{a(k+1)-a(k)\}$    (111)
$\qquad +\, e^T(k+1)\,P\,\Delta B(k)\,u(k) + \{b(k+1)+b(k)\}^T\,\beta\,\{b(k+1)-b(k)\}$
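
To see how eq. (111) follows from eqs. (106), (109) and (110), note that for a
symmetric weighting matrix α (and analogously β) the parameter terms of
$\Delta V(k)$ can be factored, while the error term is expanded with eq. (106):

$a^T(k+1)\,\alpha\,a(k+1) - a^T(k)\,\alpha\,a(k) = \{a(k+1)+a(k)\}^T\,\alpha\,\{a(k+1)-a(k)\}$

$e^T(k+1)\,P\,e(k+1) = e^T(k+1)\,P\,\{\Delta A(k)\,x_p(k) + \Delta B(k)\,u(k)\}$

Subtracting $e^T(k)\,P\,e(k)$ and adding the corresponding terms for b and β then
gives eq. (111).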

The first term of eq. (111) is negative definite when P is a positive definite
matrix. Let the last four terms of eq. (111) be equal to zero. This yields the
adaptive laws:
$\Delta a_i(k+1) = -\frac{1}{\alpha_i}\sum_{\ell=1}^{n} p_{n\ell}\,e_\ell(k+1)\,x_{i,p}(k)$    (112)

$\Delta b_i(k+1) = -\frac{1}{\beta_i}\sum_{\ell=1}^{n} p_{n\ell}\,e_\ell(k+1)\,u_i(k)$    (113)

where it has been assumed that

$a(k+1) + a(k) \approx 2\,a(k+1)$    (114)

When the structure of figure 25 is applied for identification, the advantage of
unbiased parameter estimates disappears. By replacing xm by xp, the term which
causes drift in the parameters is introduced. This is a well-known problem in
system identification in general, where such a series-parallel structure is
often used.
A complete adaptive system for the standard example used throughout these
notes is given in figure 26. A continuous-time process is controlled by means of
a discrete adaptive controller. Simulation results are given in figure 27.

FIGURE 26   Discrete adaptive controller for the standard process (process with
            parameters ap and bp, adjustable gains Ka and Kb, reference model
            H(z); the adaptation is computed in the computer)
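
A minimal sketch of how eqs. (103)-(113) translate into one adaptation step is
given below. It assumes, purely as an illustration, that the adjustable
parameters sit in the last row of Ap and Bp and that P, alpha and beta have been
chosen beforehand; the sign with which the increments are applied to the gains
Ka and Kb of figure 26 follows the conventions of eqs. (33) and (107)-(108),
which are not repeated here.

import numpy as np

def error_next(x_p, u, Am, Bm, Ap, Bp):
    """e(k+1) of the series-parallel structure (eqs. 103-106)."""
    x_m_next = Am @ x_p + Bm * u     # eq. (103): model driven by x_p(k)
    x_p_next = Ap @ x_p + Bp * u     # eq. (104)
    return x_m_next - x_p_next       # eqs. (105)/(106)

def parameter_increments(e_next, x_p, u, P, alpha, beta):
    """Increments of a_i and b_i according to eqs. (112) and (113)."""
    n = len(e_next)
    w = P[n - 1, :] @ e_next         # sum over l of p_nl * e_l(k+1)
    da = -(1.0 / alpha) * w * x_p    # one increment per state x_i,p(k)
    db = -(1.0 / beta) * w * u
    return da, db

At every sample the increments are added to the adjusted parameters, after which
the loop of figure 26 continues with the next measurement.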


FIGURE 27   Results of the discrete adaptive controller: process, reference model
            and error (left) and the parameters Ka and Kb (right), versus time {s}

Because at each sample the reference-model states are made equal to the process
states, the error between process and reference model remains small. When we
zoom in, we clearly see the discrete nature of the sampled process state x1,p, the
reference model state x1,m, and the error between these signals (figure 28).
FIGURE 28   First 25 seconds of the responses of figure 27


10  Conclusions

MRAS-based adaptation was a popular research topic in the period 1960-1990. It
results in relatively simple adaptive laws that can be applied in practice. An
example of such a practical application is described in Appendix A, where direct
and indirect MRAS are used for parameter adaptation as well as for state
estimation. These algorithms also form the basis of the rudder-roll stabilization
system that is used on fast container ships and on the M-frigates of the Royal
Netherlands Navy. In the book of Narendra and Annaswamy (1989) a series of
practical applications is described. In addition to the application to ship
steering, applications in process control (distillation columns), power systems,
robot manipulators, blood pressure control and so on are treated.
Apart from the practical applicability of these methods, the concepts that play a
role in the design of MRAS are similar to those used in system identification
and, to a certain extent, in learning control.

11  References

Classical book on
model reference
adaptive control

Landau, I.D., Adaptive Control - the model reference approach, Control and
System Theory series, 8, Marcel Dekker Inc., New York and Basel, 1979

Includes several
examples of real
applications

Narendra, K.S. and A.M. Annaswamy, Stable Adaptive Systems, Prentice Hall,
Englewood Cliffs, New Jersey, 1989

Applications of
MRAS to ship
steering

Amerongen, J. van, Adaptive steering of ships: a model-reference approach to
improved manoeuvring and economical course keeping, Ph.D. thesis, Delft
University of Technology, 1982
Amerongen, J. van, Adaptive steering of ships - a model reference approach,
Automatica, vol. 20, no. 1, Pergamon Press, pp. 3-14, 1984 (available in
Appendix A)
Amerongen, J. van, P.G.M. van der Klugt and H.R. van Nauta Lemke, Rudder roll
stabilization for ships, Automatica, vol. 26, no. 4, pp. 679-690, 1990
Amerongen, J. van, Ships: Rudder Roll Stabilization, Systems and Control
Encyclopedia, vol. 1, Pergamon Press, 1990

Paper on mode
switching

Hilhorst, R.A., J. van Amerongen, P. Löhnberg and H.J.A.F. Tulleken,
Supervisory control of mode-switch processes, Automatica, vol. 30,
pp. 1319-1331, 1994

Overview of
adaptive control
systems, emphasis
on identification-based systems

Åström, K.J. and B. Wittenmark, Adaptive Control (2nd ed.), Addison-Wesley,
1995
