Analysis and Design of
Hybrid Control Systems

Jörgen Malmborg

Department of Automatic Control, Lund Institute of Technology
Lund 1998
To Annika
Published by
Department of Automatic Control
Lund Institute of Technology
Box 118
S-221 00 LUND
Sweden
ISSN 0280-5316
ISRN LUTFD2/TFRT--1050--SE
© 1998 by Jörgen Malmborg
All rights reserved
Printed in Sweden by Printing Malmö AB
Malmö 1998
Contents
Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . vii
1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 What is a hybrid system? . . . . . . . . . . . . . . . . . . 2
1.2 Why use hybrid control systems . . . . . . . . . . . . . . 5
1.3 A block model of a general hybrid control system . . . . 8
1.4 Hybrid control system design . . . . . . . . . . . . . . . . 11
1.5 Mathematical models of hybrid systems . . . . . . . . . 16
1.6 Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.7 Verification of behavior . . . . . . . . . . . . . . . . . . . 22
1.8 This thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2. Fast mode changes . . . . . . . . . . . . . . . . . . . . . . . . 27
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 Dynamics inheritance . . . . . . . . . . . . . . . . . . . . 30
2.3 Non-transversal sliding of degree two . . . . . . . . . . . 35
2.4 Two-relay systems . . . . . . . . . . . . . . . . . . . . . . 42
2.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3. Simulation of hybrid systems . . . . . . . . . . . . . . . . . 58
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.2 Definitions of solutions to differential equations . . . . . 61
3.3 The simulation problem . . . . . . . . . . . . . . . . . . . 62
3.4 Structural detection of fast mode changes . . . . . . . . 65
3.5 New modes - new dynamics . . . . . . . . . . . . . . . . . 70
3.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4. Hybrid control system design . . . . . . . . . . . . . . . . . 73
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.3 A Lyapunov theory based design method . . . . . . . . . 78
4.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5. Experiments with hybrid controllers . . . . . . . . . . . . 89
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 89
5.2 A hybrid tank controller . . . . . . . . . . . . . . . . . . . 89
5.3 A heating/ventilation problem . . . . . . . . . . . . . . . 102
6. Concluding remarks . . . . . . . . . . . . . . . . . . . . . . . 110
A. Proof of Theorem 2.1 . . . . . . . . . . . . . . . . . . . . . . . 113
B. The chatterbox theorem . . . . . . . . . . . . . . . . . . . . . 118
C. PAL code for the double-tank experiment . . . . . . . . . . 122
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
Acknowledgments
It would not have been possible to produce this thesis without a lot of help
from many people and it is a great pleasure to be given this opportunity
to express my gratitude.
First of all I would like to thank the three persons that have given me invaluable help with my research. Bo Bernhardsson, who has been a great support both during the research and during the writing of the thesis. I truly admire his great analytical skills and his excellence in finding errors, small as well as large, in my manuscripts. Karl Johan Åström, who with his deep knowledge of automatic control and his great enthusiasm has been a source of inspiration throughout my years at the department. Johan Eker, a dear friend and research colleague, who retains his sense of humor even after spending endless hours over the computer code for our experiments.
It is truly a great privilege to work in such an intelligent and stimulating environment as the Automatic Control Department at Lund Institute of Technology. I cannot mention all of you, but thanks to Henrik Olsson, Johan Nilsson and the rest of you for good times at, and after, work.
The work has been partly supported by the Swedish National Board for
Industrial and Technical Development (NUTEK), Project Heterogeneous
Control of HVAC Systems, Contract 4806. Diana Control AB provided us
with controller hardware and the opportunity to test the hybrid controller
on a real process.
Thanks also to friends and family who have rooted for me even without
actually knowing what I was doing. The last and most important thanks
go to my beloved Annika for her love and support during times of hard
work.
J.M.
1
Introduction
In control practice it is quite common to use several different controllers and to switch between them with some type of logical device. One example is systems with selectors, see Åström and Hägglund (1995), which have been used for constraint control for a long time. Systems with gain scheduling, see Åström and Wittenmark (1995), is another example. Both selectors and gain scheduling are commonly used for control of chemical processes, power stations, and in flight control. Other examples of systems with mode switching are found in robotics. Some examples are the systems described in Brockett (1990), Brooks (1990) and Brockett (1993). In this case many different controllers are used and the coordination of the different controllers is dealt with by constructing a special language. The expert controller discussed in Åström and Årzén (1993) represents another type of system with a hierarchical structure where a collection of controllers is juggled by an expert system. The autonomous controller systems described in Antsaklis et al. (1991) and Åström (1992) have a similar structure.
Lately there has been a considerable effort in trying to unify some of
the approaches and get a more general theory for hybrid systems. The
fundamental problem with hybrid systems is their complex mixture of
discrete and continuous variables. Such systems are very general and
they have appeared in many different domains. They have, for example,
attracted much interest in control as well as in computer science. In au-
tomatic control the focus has been on the continuous behavior, while com-
puter science has emphasized the discrete aspects, see Alur et al. (1993),
Alur and Dill (1994) and Henzinger et al. (1995a).
It is generally difficult to get analytical solutions to mixed difference/differential equations. For some problems it is possible to do qualitative analysis for aggregated models. Because of the lack of good analysis methods, many investigations of hybrid systems have relied heavily on simulation. Unfortunately the general purpose simulation tools available
today are not so well suited for hybrid systems.
This chapter gives a brief overview of some aspects of hybrid control systems: modeling, analysis, design and verification. The main part of the rest of the thesis, chapters two to five, treats specific problems in hybrid control systems. The topics are fast mode changes, simulation, a design method that guarantees stability and finally some experiments with hybrid control systems.
1.1 What is a hybrid system?
Hybrid systems research can be characterized in many ways. In broad
terms, approaches differ with respect to the emphasis on continuous or
discrete dynamics, and on whether they emphasize simulation, verification, analysis or synthesis.
A young multi-disciplinary field
Hybrid systems appear both in automatic control and in computer science.
This creates some difficulties because researchers from different areas use different terminology. It is also an advantage because it generates a wide spectrum of problems. Depending on the researcher's background and the actual problem, they can use timed automata, dynamical systems theory, automata theory, discrete event systems, program verification methods, logic programming, etc. Whatever basic method is used, the questions asked are often the same. What mix of continuous and discrete properties is rich enough to capture the properties of the systems that are modeled? How can it be verified that the hybrid model satisfies the demands on performance and stability? How can a controller be derived that meets discrete and continuous specifications?
A typical trend is that the computer scientists focus on the logic and the discrete aspects and treat the continuous aspects quite cavalierly, while the control engineers do the opposite. A good approach should perhaps take a more balanced view.
Hybrid Control Systems
A hybrid control system is a control system where the plant or the con-
troller contains discrete modes that together with continuous equations
govern the behavior of the system. This general definition covers basically
every existing control system.
When the discrete parts are within the controller it is often in the
form of a scheduler or a supervisor. A limiter or a selector can also be
viewed as a discrete part of the controller. The process itself can also
have discrete modes. Many mechanical systems have physical limitations,
e.g. stops and barriers. Today's more complex control systems normally
contain discrete parts both in the controller and in the plant. More often
than not, the supervisory functions on higher hierarchical layers contain
discrete modes.
Since systems with mode switches appear in so many contexts there
is no unified approach. Different ways to deal with them are scattered in
many application areas. They also appear under many different names:
heterogeneous systems, multi-modal systems, multi-controller systems,
logic-based switching control systems, discrete event systems are some of
the names that appear in the literature. The name hybrid systems has
been used as a label for a large variety of engineering problems. In this
thesis it is used for control systems consisting of both continuous and
discrete parts.
EXAMPLE 1.1: TEMPERATURE CONTROL
The first example of a hybrid control system is temperature control with two actuators, one for heating and one for cooling. The system thus has two discrete modes. It is assumed that the continuous behavior of the system can be described by one state, the temperature T. The behavior in the heating and the cooling mode is different and a hybrid model of the system is
\[
\frac{dT}{dt} =
\begin{cases}
f_c(T, t, u)\\
f_h(T, t, u),
\end{cases}
\tag{1.1}
\]
where $f_c(T, t, u)$ is the dynamics using the cooler and $f_h(T, t, u)$ is the dynamics using the heater. Imagine that only one of the controllers, heater or cooler, can be active at a time. To keep a certain temperature it is therefore necessary to switch between them. The relative time a controller is used decides the temperature of the object. The control problem is to determine a switching strategy.
The hybrid model described by Equation 1.1 is in many ways too simplistic
to capture all the features of a general hybrid system. Sometimes there
is a need for a more advanced description with additional properties like
state jumps, state space structure changes, infinitely fast mode changes,
more than two controllers, etc.
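To make this concrete, the following sketch simulates a two-mode model of the form (1.1) with a hysteresis-based switching strategy. The first-order dynamics, the thresholds and all parameter values are illustrative assumptions, not taken from the thesis.

import numpy as np

# Illustrative two-mode thermal model (assumed dynamics):
#   heating mode: dT/dt = -a*(T - T_heat),  cooling mode: dT/dt = -a*(T - T_cool)
a, T_heat, T_cool = 0.1, 80.0, 10.0
T_ref, hyst = 45.0, 1.0                     # set-point and hysteresis half-width

def f_h(T): return -a * (T - T_heat)        # dynamics with the heater active
def f_c(T): return -a * (T - T_cool)        # dynamics with the cooler active

T, mode, dt = 20.0, 'heat', 0.01
history = []
for k in range(20000):
    # discrete part: switch mode with hysteresis around the set-point
    if mode == 'heat' and T > T_ref + hyst:
        mode = 'cool'
    elif mode == 'cool' and T < T_ref - hyst:
        mode = 'heat'
    # continuous part: Euler step with the dynamics of the active mode
    T += dt * (f_h(T) if mode == 'heat' else f_c(T))
    history.append((k * dt, T, mode))

The relative time spent in each mode settles the stationary temperature, and shrinking the hysteresis band gives faster and faster switching, which is exactly the kind of fast mode change studied in Chapter 2.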
Hybrid control systems are really nothing new
Even if hybrid systems have been used for a long time, the name hybrid systems is relatively new. Many older control paradigms fit nicely into the hybrid systems framework. Examples of these are: Selector-based Control,
Gain Scheduling, Fuzzy Control and Logic Control.
EXAMPLE 1.2: FLIGHT CONTROL
Control of airplanes has from the very beginning been done with hybrid controllers with several modes. The airplane has several operating modes depending on speed, load, air pressure, take-off and landing.
Present status of HCS
In spite of their common practical use there is very little general purpose
theory available for systems with mode switching. For this reason both
analysis and design are often done intuitively. Variable structure systems, Emelyanov (1967), Itkis (1976) and Utkin (1977), is one approach that is well established. Discrete event dynamical systems, Ramadge and Wonham (1989) and Cassandras (1993), is another approach with a theoretical foundation.
Different efforts have different focus. There are today methods that solve very specific problems of low complexity. These methods are already in the implementation phase. There are also very abstract methods that deal with large complex systems on a high theoretical level. The way to actual implementation seems to be longer for these methods.
Today, most investigations of hybrid systems are done by simulation.
Even if much research effort is put into the field of hybrid systems this situation is not likely to change in the near future. It is therefore crucial to have good simulation tools for hybrid systems. There are several simulation packages that allow for the mixture of continuous variables and discrete variables, but the simulation performance is often poor. One important problem is simulation of models where so called sliding mode behavior arises, i.e. where there are infinitely many mode switches in a finite time interval. The simulation results for such systems are not to be
trusted. There will be more about this later on.
The control systems currently in use are more or less of hybrid nature.
Many controllers are linear controllers with fixes. One example is turning on and off integration in a PI-controller. Maybe the most common hybrid controller is the one that the operator implements together with the normal controller for the process. For start-up the controller is switched into manual. When close enough to the operating point the controller is switched to automatic. One of the design methods in this thesis solves, i.e. automates, this problem as a special case.
What's next
Over the last few years there has been a considerable research effort in the
area of hybrid systems. Numerous control systems with different models are presented at control conferences all over the world. The merging of the practical implementation of hybrid controllers with theoretical results
is beginning to show up. This will lead to much better performing control
systems together with a more thorough understanding of why some of the
good old ones work.
It is the hope that the study of hybrid systems will evolve into a tool
capable of analyzing complex systems with discrete and continuous dy-
namics. Lately there has been a lot of work in the area of computer aided
verification of hybrid systems, see Henzinger et al. (1995b). The computer aided verification and analysis is getting better. It will be absolutely nec-
essary to be able to do automatic analysis of systems as they grow more
complex. A probable scenario is that simulators and automatic analyzers
will grow into one tool, maybe also with some theorem proving capabili-
ties.
1.2 Why use hybrid control systems
Many physical systems are hybrid in the sense that they have barriers
or limitations. Inside the limitations they are modeled with differential
equations. A natural way to model these systems is to use a mixture of
differential equations and inequalities. Other systems have switches and
relays that can be naturally modeled as hybrid systems. These hybrid
models show up in many areas. Typical examples are flight control, air traffic control, missile guidance, process control, robotics etc. Even if the
modes are strictly speaking not discrete it can be advantageous to model
systems that way. An example of this is when a nonlinear system is mod-
eled with a set of linear models each one covering a part of the state
space.
EXAMPLE 1.3: HELICOPTERS
Helicopters are dynamical systems with several distinct modes. To model
the behavior, at least three sub-models are needed. One for hovering, one
for slow motions and one for fast motions where the helicopter dynamics
approach the dynamics of an airplane.
EXAMPLE 1.4: AUTOMOTIVE SYSTEMS
A good example of a complex hybrid system is a car. Discrete signals are
gear, load and road characteristics, driver inputs, ABS control signal and
warnings. The continuous parts are often nonlinear e.g. motor character-
istics, sensor signals. To make a good model suitable for simulation and
controller design it is essential to use hybrid system modeling.
The major reason for using hybrid control systems is that they can out-
perform single controller systems and that they can solve problems that
cannot be dealt with by conventional control. Another reason is that it is easier to profit from the richer structure of a hybrid plant model when using a hybrid control system adapted to this model.
Hybrid controllers are a special class of nonlinear controllers. They
are not restricted by some of the limitations always present for linear
systems. Sometimes hybrid control methods can be used to achieve better performance than what the limitations in the next section say is possible for linear systems.
Fundamental limitations for linear systems
For some problems it can be necessary to use a nonlinear controller to
guarantee stability. A popular example is the nonholonomic integrator.
EXAMPLE 1.5: NONHOLONOMIC INTEGRATOR
The system described by
\[
\dot x = u, \qquad \dot y = v, \qquad \dot z = x v - y u, \tag{1.2}
\]
where u and v are control signals, cannot be locally asymptotically stabilized with any time-invariant, smooth controller, see Brockett (1983). On the other hand there are several hybrid design methods that can be used to derive a stabilizing controller.
Research going back to Bode (1945) shows that there are fundamental limitations to what can be achieved by linear feedback. Feuer et al. (1997) shows some of these limitations and how hybrid control systems can be used to improve performance. The famous result that the sensitivity function S(s) satisfies
\[
\frac{1}{\pi}\int_0^\infty \log |S(j\omega)|\, d\omega = \sum_{i=1}^{n_c} p_i, \tag{1.3}
\]
where $p_i$ are the open loop right half plane poles, is one such limitation. In Middleton (1991) a similar time domain result is shown. If q is a right half plane zero of the plant and y(t) is the closed loop system step response, then using error feedback
\[
\int_0^\infty e^{-qt}\, y(t)\, dt = 0. \tag{1.4}
\]
From this follows the result that the step response for a non-minimum phase system starts in the wrong direction. Another limitation of this kind is that if the open linear system contains two integrators, then the error e(t) of any stable closed loop system after a step input satisfies
\[
\int_0^\infty e(t)\, dt = 0 \tag{1.5}
\]
if error feedback is used. The overshoot is thus unavoidable. A hybrid control solution of the kind introduced in the next section and further analyzed in Chapter 4 can eliminate this problem. The solution there is to use two controllers, for instance one time-optimal controller and one fixed linear controller.
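The way (1.4) and (1.5) imply these step-response properties can be spelled out with a short sign argument, added here for completeness and using the error definition $e(t) = y_{sp} - y(t)$:
\[
\int_0^\infty e(t)\,dt = 0 \quad\text{and}\quad e(t) > 0 \text{ just after the step}
\;\Longrightarrow\; e(t) < 0 \text{ somewhere, i.e. } y(t) > y_{sp}.
\]
The integral of a signal that is positive right after the step can only vanish if the signal also goes negative, so the output must exceed the set-point at some time, which is the unavoidable overshoot. Applying the same argument to (1.4), where the weight $e^{-qt}$ is strictly positive, shows that $y(t)$ must take negative values when its final value is positive, i.e. the non-minimum phase step response exhibits an initial undershoot.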
Starting point for this thesis
The next example illustrates the design problem that was the starting
point for this thesis. The question to answer was this: what can be said about mixing control signals from several controllers, and when is it worthwhile doing so?
EXAMPLE 1.6: FAST STEP RESPONSE
In the design of a PID control system there is often a choice between a fast controller giving a large overshoot or a slow controller without an overshoot, see Figure 1.1 (left). (For some processes the overshoot can be reduced by set-point weighting.) For the process in Figure 1.1, a first order system with a time delay, it is quite easy to manually control the system during the set-point change without getting any overshoot. The manually applied control signal imitates the control signal from a time-optimal controller, see Figure 1.1 (right). A good hybrid controller for this problem
Figure 1.1 Two examples of PID control (left). Combined manual and PID control (right). The set-point $y_{sp}$ and the measured variable y (top). Control signal u (bottom).
would consist of a PID controller, a time-optimal controller, and a selector.
The results of this example are promising. Very fast step responses could
be combined with good steady state regulation.
The example above illustrates the advantages of combining a strategy for
fast set-point responses with a good regulation property. Expanding on
this idea it is clear that there are many control problems that could profit
from a hybrid design method. In fact, as soon as there are multiple design
goals that cannot be met by a single controller it can be advantageous to
use several controllers.
There are today many good controller design methods. Only a few of
these methods allow the designer to take several different design objec-
tives into account. A sample of possible design goals are:
• stability of the closed loop system,
• fast load disturbance rejection,
• fast set-point response,
• low sensitivity to measurement noise,
• low sensitivity to process variations.
Chapter 4 introduces a method that combines control signals from differ-
ent controllers while guaranteeing stability. The hybrid controller design
method is based on Lyapunov stability theory.
Why avoid hybrid control systems?
Some advantages of hybrid systems have been mentioned. What about
their disadvantages? The major drawbacks with hybrid control systems
are:
• higher complexity,
• incomplete system understanding,
• lack of analysis methods,
• lack of appropriate design methods.
While it is fairly easy to come up with a hybrid controller that seems to
perform well for a given system it is often very difficult to prove that it
works under all conditions.
1.3 A block model of a general hybrid control system
This section presents a conceptual model of a hybrid system. Some pro-
posed mathematical models of hybrid systems are reviewed in a following
section. The conceptual model is a graphical description of the different
parts of a hybrid control system. Figure 1.2 shows a general hybrid con-
trol system with seven main blocks or parts. Many existing hybrid control
Figure 1.2 A block model of a general hybrid control system. (The blocks are a reference generator, a selector, performance evaluators, controllers, a switch, the process and estimators.)
systems fit into this picture. The complexity of the blocks can vary from
rudimentary to very complex. In the actual implementation several blocks
can of course be coded together. The choice can for example depend on the
requirements on message passing between the blocks. The signals and
messages between the blocks are a mixture of discrete and continuous
signals.
The major tasks and features of the building blocks are presented in
the following sections.
Reference generator block
The basic task for this block is to generate a reference signal for the con-
trollers. In many cases other parts of the hybrid control systemwill benet
from a mode information signal. The mode signal could carry information
of the current task of the control system. Examples of such mode signal
information are
keep level, change operating point, perform automatic tuning. (1.6)
Information about the control objective could be used both by the selector
block and the performance evaluation block. The control signal is thus
a vector comprised of the continuous reference values and discrete mode
information signals, |y
re f
, q|.
Controller block
In a hybrid control system there is a set of controllers to choose from. The
set could contain anything from two to an infinite number of controllers.
It is not necessary that the controllers belong to the same class or that they share the same state space information. However, to be able to handle wind-up and bump-less transfer problems, the control signal value of the active controller has to be available to all the other controllers.
In a gain scheduled system the controllers have the same structure but different parameters. Here it is preferable to let the controllers share state information.
Other design methods do not use controllers of the same type. For the fast set-point response problem it was for instance useful to combine a time-optimal controller with a PID controller.
Process block
The process can be time-varying. The variation could be due to load dis-
turbance, different operating points etc. Exceeding certain levels, e.g. fluid overflows, can also cause the process dynamics to change drastically. This type of discrete mode change is useful information for the performance evaluation and selector blocks.
Estimator block
The estimator block has basically two tasks. The first one is to continuously update the parameters of a changing process. Secondly, if not all states are measurable then the non-measurable states need to be estimated with some kind of filters. Discrete mode changes could cause re-
sets in the estimates or a change in the estimated model structure. An
additional task could be to estimate mode information from the output
signals.
Performance evaluation block
The performance evaluation block will for many hybrid systems be quite
elaborate. The task is to assess the quality of the control and the esti-
mations. The evaluation signal sent to the selector block is discrete and
qualitative, examples are
sensor one is out of order, critical value for output $y_2$ is reached, high noise level, parameter estimates are good, mode = 2.   (1.7)
Robust controller It is possible to use a tighter, more aggressive, con-
trol if the correspondence between the model and system is better. One
possibility is to let the performance evaluation estimate the model accu-
racy. Based on this, a controller with matching robustness properties is
selected.
One criterion for controller selection could be the noise level in the
system. If the noise level is high some controllers could be unusable. If
on the other hand the noise level is low those controllers could perform
better.
Selector block
Based on the information from the reference generator and performance
evaluation block this block selects which controller to use.
EXAMPLE 1.7
If the information from the performance evaluation block is that the knowledge of the model is excellent and the reference generator block demands a set-point change, a time-optimal controller could be selected.
The selector is in practical implementations often a static logic function. A more advanced version has memory in the selector device. A simple version of this is a hysteresis function. More complex structures could be implemented with finite state machines.
If not all the controllers in the controller set are implemented, for
example when using gain-scheduling, then a recalculation of controller
parameters could be done by this block.
Switch block
This block takes care of the switching from the controller in use, $u_1$, to the new controller, $u_2$, chosen by the selector. There are several methods in use for switching:
• Smooth transition. The control signal is calculated as a weighted mean $u = (1-\alpha)u_1 + \alpha u_2$, where $\alpha$ gradually goes from 0 to 1.
• Abrupt transition from $u_1$ to $u_2$ and then low-pass filtering of the control signal sent to the process.
If bump-less transfer can be accomplished it is an advantage for many applications. It will be shown later that the smooth transition, even for stabilizing controllers $u_1$ and $u_2$, may lead to unstable systems.
The switching mechanism has a large influence on the properties of the closed loop. Despite this, it is sometimes neglected in the analysis of hybrid control systems.
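As a concrete illustration of the smooth-transition scheme, the sketch below blends the outputs of two controllers over a fixed transition interval. The double-integrator plant, the state-feedback gains and the ramp length are illustrative assumptions, not taken from the thesis.

import numpy as np

# Two assumed stabilizing state-feedback controllers for a double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K1 = np.array([[1.0, 1.4]])      # "slow" controller
K2 = np.array([[4.0, 2.8]])      # "fast" controller

def u1(x): return float(-K1 @ x)
def u2(x): return float(-K2 @ x)

x = np.array([1.0, 0.0])
dt, T_switch, T_ramp = 0.01, 2.0, 0.5     # switch at t = 2 s, ramp over 0.5 s
for k in range(1000):
    t = k * dt
    # alpha ramps from 0 to 1 during the transition interval
    alpha = np.clip((t - T_switch) / T_ramp, 0.0, 1.0)
    u = (1 - alpha) * u1(x) + alpha * u2(x)   # smooth transition u = (1-alpha)u1 + alpha*u2
    x = x + dt * (A @ x + B.flatten() * u)

Even with two individually stabilizing controllers, a blend of this kind is not automatically stable; that is one reason the switching mechanism deserves explicit analysis, as done in Chapter 4.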
1.4 Hybrid control system design
In this section some hybrid control schemes are reviewed. All schemes
fit into the general picture in Figure 1.2, but not all blocks are explicitly
present in every design method.
Multi-control
Morse (1995b) uses a somewhat simpler architecture, see Figure 1.3. Based on that architecture he introduces the notion of a multi-controller for several different controller designs. The basic idea is that the process outputs y are fed back to a set of controllers. Then each controller generates a candidate control signal. Which control signal is finally used is decided by the supervisor. At a certain point in time only one control signal is used, $u = u_\sigma$, where $\sigma$ takes values in the index set of the controller set.
Figure 1.3 Multi-controller architecture, see Morse (1995b). (Controllers $C_1, \ldots, C_m$ produce candidate signals $u_1, \ldots, u_m$; a supervisor selects the signal u applied to the process, whose output y is fed back.)
State sharing gain scheduling
The actual implementation of the controllers is not necessarily done as
the structure in Figure 1.3 indicates. If the controllers have the same
internal structure then the full controller set can be implemented with
only one control algorithm and some parameters. An example of this is a
gain scheduled controller where all the controllers, in the abstract multi-
controller, share the state information. Figure 1.3 could then be further
reduced to only one controller and a parameter passing method from the
supervisor.
It is not necessary to limit the number of controllers. In this set-up it
is possible to use an infinite range of controllers.
Manual control
Simple controllers of the PID type have undergone an interesting devel-
opment over the past 15 years with the introduction of automatic tuning
Figure 1.4 Split range control. The measured variable determines which controller to use.
and adaptation. This work has also led to a significantly improved understanding of PID controllers and their tuning. A common engineering
practice is to tune the controllers for good load disturbance rejection and
small sensitivity to process variations. The set-point response is then han-
dled by set-point weighting. Such controllers will perform quite well for
moderate set-point changes. Their response to large set-point changes can,
however, be improved.
Knowing that the controller is not optimized for large set-point changes
a process operator that has good knowledge of the process can put the
controller in manual mode and give the control signal himself. The control
signal thus implemented by the operator is often an approximation of a
time-optimal control signal. A typical control sequence is
full speed forward, maximum brake.   (1.8)
Split range control
Split range control is sometimes used for systems with several controllers
and only one measured signal. The measured variable determines which
controller to use, see Figure 1.4. The method is commonly used in systems
for heating and ventilation and it could have been applied to the temper-
ature control in Example 1.1. If the measured temperature is high, only
the cooler is active, and if the measured temperature is low, the heater is
on.
The heterogeneous controller in Kuipers and Åström (1991) has a similar idea but with overlapping working regions.
Max and Min selectors
Selector control is the counter-part of split range control. In selector con-
trol there are many measured signals and only one actuator. The selector
is a static device with many inputs and one output. Common selector
types are: maximum and minimum.
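As a small illustration of selector control, a minimum selector lets the most conservative demand win. The PI controller and the signals below are illustrative assumptions, not taken from the thesis.

# Minimum selector: the applied control is the smallest of the candidate
# signals, so the constraint controller takes over when it demands less.
def pi_controller(error, state, kp=1.0, ki=0.2, dt=0.1):
    """Hypothetical textbook PI controller used only for this sketch."""
    state += ki * error * dt
    return kp * error + state, state

def min_selector(u_normal, u_constraint):
    # e.g. u_normal keeps a temperature, u_constraint limits a pressure
    return min(u_normal, u_constraint)

u_temp, i_temp = pi_controller(error=2.0, state=0.0)
u_pres, i_pres = pi_controller(error=-0.5, state=0.0)
u = min_selector(u_temp, u_pres)   # the more conservative demand wins

In practice the controller that is not selected must also be protected against wind-up, which connects back to the bump-less transfer issue discussed for the switch block.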
Figure 1.5 The expert module STREGX in Firstloop. Picture taken from Åström and Wittenmark (1995) p. 513.
Adaptive control
Adaptive controllers are hybrid controllers with several modes. One ex-
ample is the adaptive controller from the company First Control. The
expert module STREGX in Firstloop is shown in Figure 1.5. The mode switches ON, AUTO and ADAPT control whether the adaptive controller is on/off, in automatic/manual mode, and adapting or not.
Variable structure systems
Another configuration where several controllers are used is in variable structure systems. The basic idea in variable structure systems is to define a switching function S = S(x). The applied control signal then depends on the sign of the switching function. In the multi-variable situation there can be several switching functions $s_i(x)$ and one control signal for each:
\[
\dot x = f(x, u, t), \qquad
u_i =
\begin{cases}
u_i^{+}(x, t), & \text{if } s_i(x) \geq 0\\
u_i^{-}(x, t), & \text{if } s_i(x) < 0.
\end{cases}
\tag{1.9}
\]
The applied control is such that the trajectories will approach the subspace
S = 0. This example illustrates the importance of being able to detect
fast switches and being able to analyze the dynamics for a sliding mode
behavior. This problem is further addressed in chapters 2 and 3.
Knowledge-based control
The structure of a knowledge-based controller, Figure 1.6, see Åström and Årzén (1993), fits into Figure 1.2. The expert controller represents a type of system with a hierarchical structure, where a collection of controllers is juggled by an expert system.

Figure 1.6 The expert system decides which controller to use. (The knowledge-based system coordinates supervision, identification and control algorithms acting on the process, and interacts with the operator.)
Hierarchical control
Reducing complexity is an important reason for dealing with hybrid sys-
tems. Using hierarchical models is one method of modeling dynamic pro-
cesses with different levels of abstraction. In fact, all hybrid controllers
fitting the general conceptual model are more or less hierarchical.
One example of successful hierarchical control is the hybrid control of air traffic, see Antsaklis (1997) pp 378-404. The control structure for an airplane is layered as: Strategic Planner, Tactical Planner, Trajectory Planner and Regulation Layer.
• Strategic Planner: designs a coarse trajectory for the aircraft.
• Tactical Planner: refines the strategic plan and predicts conflicts with other airplanes.
• Trajectory Planner: designs full state and input trajectories for the aircraft together with the sequence of flight modes necessary.
• Regulation Layer: tracks the feasible dynamic trajectory.
Each layer uses a finer and more detailed model of the airplane.
Reconfigurable controllers
In airplane control there are often more control surfaces than actually needed to stabilize the plane. It is in some cases possible to fly with one engine out of order, with a partially broken wing etc. It is possible to build a hybrid controller based on several configurations of the airplane. Imagine that controller $c_1$ is used when all systems are ok, controller $c_2$ is used when the left engine is out, controller $c_3$ is used when the right rear wing is blown away, etc. Here the performance evaluation block should detect the failure of parts of the system and schedule the use of a new controller.
Real-time demands
In real-time systems with several control loops it is possible to let the
loop with highest demands on sample time get the most cpu time. Doing
this leaves other control loops with less time. It might be necessary to
use less cpu-time demanding controllers for those loops. In this case the selector block needs information about available cpu time to be able to choose a controller. The area is quite new but it is imaginable to distribute cpu time
as a function of the tasks that the controllers are performing at a certain
instant. Then given a certain amount of cpu time, suitable controllers are
selected.
Fuzzy control
The idea of mixing or switching between control signals for hybrid systems that are represented as local models with local controllers is very much in line with what is done in fuzzy control. Some hybrid control schemes could be viewed as a fuzzy controller, Sugeno and Takagi (1983).
1.5 Mathematical models of hybrid systems
There is a number of mathematical models describing hybrid systems. A common feature is that the state space S has both discrete and continuous variables, e.g. $S \subseteq \mathbb{R}^n \times \mathbb{Z}^m$. The equations can be linear or nonlinear and in general the discrete parts cannot be separated from the continuous parts. The models proposed by various researchers differ in the definition of and restrictions on the dynamic behavior. The differences between the models concern aspects such as generality, allowance of state jumps, dynamic restrictions etc. Many models do not allow fast switching or sliding.
The modeling problem
The modeling problem consists of creating models with a sufficient com-
plexity to capture the rich behavior of hybrid control systems. Yet they
should be easy enough to analyze, and formulated in a way that allows
simulation.
In the following sections there are short presentations of some of the
proposed models. Different branches of control science have their favorite
traditional model structures. There is no unified approach and not yet
any agreement on what constitutes the most fruitful compromise between
model generality and expressibility. For a review of different approaches
see Branicky (1995) or Morse (1995a).
Tavernini
Most of the work in this thesis will be on systems that are on the Differential Automata form described in Tavernini (1987):
\[
\dot x = f(x(t), q(t)), \quad x \in \mathbb{R}^n, \qquad
q(t) = \nu(x(t), q(t^-)), \quad q \in \mathbb{Z}^m,
\tag{1.10}
\]
where x denotes the continuous and q the discrete variables. This model does not allow for autonomous or controlled state jumps. Hybrid models of this type are often represented with a graph, see Figure 1.7. Here each of the nodes represents a mode of the system. Associated with each mode is a dynamic equation $\dot x = f_q$ and mode jump conditions $\nu_{qr}$.

Figure 1.7 Graph of a Tavernini-type hybrid system. (Each node carries its own dynamics $\dot x = f_1, \ldots, \dot x = f_6$; the edges, such as $\nu_{23}$, are the mode transitions.)
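A minimal sketch of how a model on the form (1.10) can be stepped forward in time, using a naive fixed-step integrator; the two modes, their dynamics and the guard on the second state are illustrative assumptions, not taken from the thesis.

import numpy as np

# Mode dynamics f(x, q) and transition map nu(x, q) for a Tavernini-type model.
def f(x, q):
    # illustrative planar dynamics, one vector field per discrete mode
    return np.array([1.0, -0.5]) if q == 1 else np.array([1.0, 0.5])

def nu(x, q):
    # mode jump conditions: switch when the guard variable x[1] changes sign
    if q == 1 and x[1] <= 0.0:
        return 2
    if q == 2 and x[1] >= 0.0:
        return 1
    return q

x, q, dt = np.array([0.0, 1.0]), 1, 1e-3
for _ in range(5000):
    q = nu(x, q)             # discrete part: evaluate the transition conditions
    x = x + dt * f(x, q)     # continuous part: integrate the active dynamics

Note that a naive loop like this starts toggling between the modes at every step once the trajectory reaches x[1] = 0, since both transitions are enabled there; this is precisely the fast-switching situation analyzed in Chapter 2 and the simulation difficulty addressed in Chapter 3.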
In their article in Antsaklis (1997) pp 31-56, Branicky and Mattsson dis-
cuss modeling and simulation of hybrid systems. The mathematical mod-
els below of Branicky, Brockett and Artstein are taken from that article.
Branicky
Branicky (1995) makes a formal definition of a controlled hybrid dynamical system (CHDS), $H_c = [Q, \Sigma, A, G, V, C, F]$, where V is the discrete controls, C the collection of controlled jump sets and F the collection of controlled jump destination maps. Under some conditions this is a hybrid dynamical system (HDS) $H = [Q, \Sigma, A, G]$ where Q is the discrete states, $\Sigma$ the continuous dynamics, A the autonomous jump sets and G the autonomous jump transition map. In an equation form comparable with the other models in this section this is
\[
\begin{aligned}
\dot x(t) &= f(x(t), q(t)), && x(t) \notin A_{q(t)},\\
q(t^+) &= G_q(x(t), q(t)),\\
x(t^+) &= G_x(x(t), q(t)), && x(t) \in A_{q(t)}.
\end{aligned}
\tag{1.11}
\]
Brockett's model
Brockett (1993) uses the description
\[
\begin{aligned}
\dot x(t) &= f(x, p, z),\\
\dot p &= r(x, p, z),\\
z_{\lceil p \rceil} &= \nu(x, p, z_{\lfloor p \rfloor}),
\end{aligned}
\tag{1.12}
\]
where $x \in X \subset \mathbb{R}^n$, $p(t) \in \mathbb{R}$, and the rate equation r is nonnegative for all arguments. Brockett has a mix of continuous and discrete variables. The variable z is changed whenever p passes through integer values. The notation $\lfloor p \rfloor$ denotes the largest integer less than or equal to p. The notation $\lceil p \rceil$ denotes the smallest integer greater than p.
Artstein's model
Artstein uses a number N of sub-models. Each sub-model has its own dynamics, $\dot x = f_q(x)$. A specific model is run for a certain time $T_q$. After time $T_q$ there is possibly a switch to another sub-model depending on the value of a test function $\gamma_q(x)$. In equations this is
\[
\begin{aligned}
[\,\dot x(t),\ \dot\tau(t)\,]^T &= [\, f_{q(t)}(x(t)),\ 1\,]^T, && 0 \leq \tau \leq T_{q(t)},\\
q(t^+) &= \begin{cases} G_p(q(t)), & \gamma_{q(t)}(x(t)) \geq 0,\\ G_n(q(t)), & \gamma_{q(t)}(x(t)) < 0, \end{cases} && \tau = T_{q(t)},\\
\tau(t^+) &= 0, && \tau = T_{q(t)}.
\end{aligned}
\tag{1.13}
\]
Stiver-Antsaklis
Stiver and Antsaklis (1992) use a hybrid description with the plant modeled with continuous equations
\[
\dot x = f(x(t), r(t)), \qquad z(t) = q(x, t)
\]
and a discrete regulator
\[
\hat t_i = \delta(\hat t_{i-1}, \hat z_i), \qquad \hat r_i = \phi(\hat t_i), \tag{1.14}
\]
where $\hat z_i$ are discrete events generated by the plant and $\hat r_i$ are discrete events generated by the controller. The translation from discrete to continuous and vice versa is done with the interface equations
\[
r(t) = \gamma(\hat r_i), \qquad \hat z_i = \alpha(x(t)). \tag{1.15}
\]
The function $\alpha(x(t))$ partitions the state space into various regions. A plant event is generated each time a region is entered. These events can then be used by the controller. The partitioning must be detailed enough to give the controller sufficient information whether or not the current state is in an acceptable region. It must also be detailed enough for the controller to determine the next control signal value. The overall system can be viewed as two interacting discrete event systems and analyzed as such.
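The interface in (1.15) is essentially a quantizer that turns continuous state information into plant events. A minimal sketch of such an event generator, with an illustrative one-dimensional partition (the thresholds and the helper names are assumptions for the example only):

import bisect

# Event generator alpha(x): partition the (here scalar) state space into
# regions and emit a plant event whenever the region index changes.
thresholds = [0.0, 1.0, 2.5]              # region boundaries (illustrative)

def region(x):
    return bisect.bisect(thresholds, x)   # index of the region containing x

def event_generator(x_trajectory):
    events = []
    current = region(x_trajectory[0])
    for k, x in enumerate(x_trajectory[1:], start=1):
        new = region(x)
        if new != current:                # boundary crossed: plant event
            events.append((k, current, new))
            current = new
    return events

print(event_generator([-0.5, 0.3, 0.8, 1.2, 2.6, 2.4]))
# -> [(1, 0, 1), (3, 1, 2), (4, 2, 3), (5, 3, 2)]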
Nerode-Kohn
Nerode and Kohn (1992) have developed a formal model for hybrid systems. The model provides a framework to represent the interaction between the continuous and discrete portions of the system and proposes stability definitions. They study the topological issues of hybrid systems. Small topologies are introduced for the design of the analog-to-digital map.
Michael Tittus
Tittus (1995) models batch processes. The batch processes consist of continuous flows of material and energy with discrete actuators and sensors. The modeling is done with integrator processes, $\dot x = k_q$. The class of hybrid systems is very limited but these models are important for control of batch processes. Using such simple models he is able to derive results on stability and controllability of the systems.
Petri nets and timed automata
Petri nets have been used extensively in modeling and analysis of dis-
crete event systems. One example is Peleties and DeCarlo (1989) where
the continuous plant is approximated with a Petri net and a supervisor
consisting of two interacting Petri nets controls the plant.
1.6 Analysis
The way to analyze hybrid systems has been to convert the systems into
either discrete or continuous systems. The reason for this is of course
that the analytical framework for a control engineer deals only with pure
discrete or pure continuous systems. There is a lack of theory dealing
directly with mixed systems.
The analysis methods used for hybrid systems are often very customized to a specific problem or, in the best case, to a small class of problems.
The goal is to generalize all notions from pure control theory to hybrid
control systems but there is a long way to go. Some analysis methods are
reviewed in this section.
Perhaps the most fundamental feature to verify for a control system
is the stability of the closed loop.
Stability for special classes of hybrid systems
For some classes of hybrid systems there are methods to prove stability,
for example, piecewise linear systems and fuzzy systems.
Piecewise linear systems   Stability of piecewise linear systems described by
\[
\dot x = A_i x + a_i, \qquad x \in X_i, \tag{1.16}
\]
where the state space X is divided into regions $X_i$ such that $X = \cup_i X_i$ and i belongs to an index set I, has been studied by several authors, see e.g. Ferron (1996), and for discrete systems, Sontag (1981). The basic idea is to try to find a matrix $P = P^T > 0$ such that
\[
A_i^T P + P A_i < 0, \qquad i \in I. \tag{1.17}
\]
If this is possible then $V = x^T P x$ is a Lyapunov function for the piecewise linear system in Equation 1.16. Many systems do not admit such quadratic Lyapunov functions. There are three ways to extend the applicability of this method:
1. Modify the definition of $X_i$.
2. Allow piecewise quadratic functions $P_i$.
3. Use other functions than quadratic as Lyapunov functions.
The first method is rather limited and will always be very problem specific; thus it is difficult to define a general algorithm for it.
The second method has recently been enhanced in Johansson and Rantzer (1997) and Johansson and Rantzer (1998). The search for piecewise quadratic Lyapunov functions is formulated as a convex optimization problem in terms of linear matrix inequalities. The idea, using piecewise Lyapunov functions, is to reduce conservatism while retaining efficient computation using convex optimization.
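For the basic condition (1.17), the search for a common quadratic Lyapunov function can be posed directly as a linear matrix inequality feasibility problem. The sketch below uses the cvxpy package (an assumption of this example, not a tool used in the thesis) on two illustrative stable modes:

import numpy as np
import cvxpy as cp

# Two modes of a piecewise linear system (illustrative matrices).
A1 = np.array([[-1.0, 0.5], [0.0, -1.0]])
A2 = np.array([[-1.0, 0.0], [0.5, -1.0]])

n = 2
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n)]                        # P = P^T > 0
for A in (A1, A2):
    constraints += [A.T @ P + P @ A << -eps * np.eye(n)]    # A_i^T P + P A_i < 0

problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve()
print(problem.status)   # 'optimal' here: these two modes share V = x^T P x

When the problem is infeasible no common quadratic Lyapunov function exists, which is the situation that motivates the piecewise quadratic extensions of Johansson and Rantzer.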
Chapter 4 introduces a design method for hybrid systems based on
local controllers and local Lyapunov functions. The local Lyapunov func-
tions can be of any kind and play an active part in designing the switching
strategy. Some controller design methods such as LQG come with a Lya-
punov function. In such a case it can be advantageous to be able to use
that Lyapunov function.
Fuzzy systems   A method similar to piecewise linear systems is used in Kiendl and Rüger (1995). Takagi-Sugeno fuzzy systems are described by fuzzy IF-THEN rules that locally represent linear input-output relations of a system. The rules are of the following form:
\[
\text{Rule } i:\ \text{IF } x_1(k) \text{ is } M_{i1}\ \ldots\ \text{and } x_n(k) \text{ is } M_{in}
\ \text{THEN } x(k+1) = A_i x(k) + B_i u(k). \tag{1.18}
\]
The control laws of fuzzy controllers are approximated by means of affine facet functions and stability is analyzed.
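To make the Takagi-Sugeno structure in (1.18) concrete, the sketch below blends two local linear models with membership-weighted averaging; the membership functions and matrices are illustrative assumptions, not taken from the thesis.

import numpy as np

# Two local discrete-time models x(k+1) = A_i x(k) + B_i u(k) (illustrative).
A = [np.array([[0.9, 0.1], [0.0, 0.8]]),
     np.array([[0.6, 0.2], [0.1, 0.7]])]
B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.2]])]

def memberships(x1):
    # simple overlapping memberships of x1 in the fuzzy sets M_11 and M_21
    w1 = np.clip(1.0 - abs(x1) / 2.0, 0.0, 1.0)
    return np.array([w1, 1.0 - w1])

def ts_step(x, u):
    w = memberships(x[0])
    w = w / w.sum()                     # normalized rule weights
    # weighted average of the local model updates (standard TS inference)
    return sum(wi * (Ai @ x + (Bi * u).flatten())
               for wi, Ai, Bi in zip(w, A, B))

x = np.array([1.0, 0.0])
for k in range(10):
    x = ts_step(x, u=0.5)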
Caines and Ortega (1994) shows that a certain class of fuzzy con-
trollers can be viewed as piecewise constant controls. This is then used
to prove stability for nonlinear systems by using multiple Lyapunov func-
tions.
Controllability and observability of hybrid systems
Understanding the controllability property of a dynamical system is another important objective of control design and analysis. Many practical systems are nonlinear and operate in multiple state-space regions where the dynamics vary significantly from one region to another.
Hierarchical hybrid control systems   Caines and Wei (1995) study the controllability problem for hybrid systems by dividing the state space into cells. The controllability problem is then divided into two layers: between-block and in-block controllability. The controllability definitions are then combined so that global controllability is achieved.
Periodic systems   Ezzine and Haddad (1989) construct observability and controllability matrices for periodic systems of the type
\[
\dot x = A(r(t)) x(t) + B(r(t)) u(t), \qquad
y(t) = C(r(t)) x(t), \tag{1.19}
\]
where r(t) is a deterministic scalar sequence taking values in the index set $N = \{1, 2, \ldots, n\}$. Even if the subsystems $[A(i), B(i), C(i)]$ are controllable and observable this is not necessarily true for the hybrid system in Equation 1.19.
Aggregation
The analysis problem of a large hybrid system is immense. One method of
breaking it down is to impose a hierarchical model structure where models
residing on high levels only have a crude knowledge of the behavior. In
Antsaklis (1997), pp 329-341, Pappas and Sastry make abstractions of
hybrid systems. These abstractions should then generate the abstracted
behaviors of the original systems. The method's advantage lies in that the abstracted systems are smaller and easier to analyze.
1.7 Verification of behavior
This research line has its roots in computer science. Computer programs are modeled as automata, see Figure 1.8. There is a finite number of states and the transitions between the states are governed by logical expressions. These automata can be analyzed for properties such as deadlock, liveness, safety etc. In this basic automaton model there are no dynamics, just static logic switching.
The method has been developed gradually to encompass more com-
plex dynamical systems. Dynamics have been added to the nodes and the
transitions are allowed to depend on the values of the states in the nodes.
The development has gone from automata to Timed Automata, Alur and
Dill (1994), and Hybrid Automata, Alur et al. (1993).
Currently it is allowed to have dynamics of the kind $\dot x \in [k_1, k_2]$, i.e. integrator processes where the integration rate lies in an interval $[k_1, k_2]$. Both the controller and the plant are modeled and it can then be verified that all nodes can be reached or that a certain reference trajectory can be followed with a certain resolution.
Hybrid Automata
The paper Henzinger et al. (1995b) introduces a framework of hybrid au-
tomata as a model and specification language for hybrid systems. They
Figure 1.8 An automaton without dynamics.
present two semi-decision procedures for verifying safety properties of
piecewise-linear hybrid automata, in which all variables change at con-
stant rates. The two procedures are based, respectively, on minimizing
and computing fixed points on generally infinite state spaces. They show
that if the procedures terminate, then they give correct answers. The
procedures provide an automatic way for verifying the properties of the
hybrid systems.
The discrete states of the controller are modeled by the vertices of a
graph (control modes), and the discrete dynamics of the controller is mod-
eled by the edges of the graph (control switches). The continuous states
of the plant are modeled by points in $\mathbb{R}^n$, and the continuous dynamics of the plants are modeled by flow conditions such as differential equations.
The verification problem
The verification problem can be formulated as: Given a collection of automata defining the system and a set of formulas of temporal logic defining the requirements, derive conditions under which the system meets the requirements. There are several software packages handling systems of different complexity levels. For timed automata there are COSPAN, KRONOS, Daws et al. (1996) and UPPAAL. One of the most advanced tools dealing with hybrid automata is HYTECH.
The next example shows how formal verification methods can be used to prove safety properties.
EXAMPLE 1.8: RAILROAD CROSSING
The railroad crossing problem is a typical example, see Halbwachs (1993).
The problem is to investigate if a specific logic for opening and closing a gate is safe under all possible conditions. The condition is simply that the gate is down when the train passes the crossing. The problem is that the speed of the train can vary and that the gate opening and lowering rates are limited. The specification consists of three models.

Figure 1.9 The train enters the dangerous zone at Enter and leaves at Exit (distances D = 1000 m and D = 1500 m in the figure). While the train is in the dangerous zone the gate should be down.
The train model
• Continuous variable x measuring distance from the entry point.
• Maximum speed: $\dot x \leq 60$ m/s.
• enter is sent when entering the region.
• exit is sent when leaving the region.
The gate model
• Continuous variable $\theta$ measuring the gate angle.
• Speed: $\dot\theta = \pm 10$ degrees/s.
The controller model
• Reads enter & exit signals from the train.
• Emits raise & lower to the gate.
The three subsystems are modeled with one automaton each, see Fig-
ure 1.10.
The three subsystems are then combined into one, see Figure 1.11.
Formal verification can then be applied to the combined model to see which nodes are visited. An analysis shows that failure is possible if
a train is approaching while the gate is going up. There is no support in
the gate model for changing mode from going up to going down.
Figure 1.10 Three automata, one for each subsystem.
Figure 1.11 The combined automaton for Example 1.8.
1.8 This thesis
This introductory chapter has defined hybrid systems and described some advantages and problems with using them for modeling and control. There is not yet a unified approach to analysis and design of hybrid systems. It was seen that there are many different models and that several old controller design methods could be viewed as hybrid systems.
The rest of this thesis consists of four chapters. They treat different aspects of hybrid control systems: analysis of fast mode changes, simulation of hybrid systems, a design method and experiments with hybrid control systems.
Fast mode changes
Chapter 2 is a rather theoretical chapter dealing with the problem of
defining solutions to differential equations with discontinuous control. A special interest will be the study of what happens in the case of infinitely fast mode changes. The analysis has its roots in Filippov's theory for differential equations with discontinuous right hand sides, see Filippov (1988).
The chapter is based on the paper Malmborg and Bernhardsson (1997).
Simulation of hybrid systems
The analysis tools available for hybrid systems are not nearly enough. Therefore researchers must rely on simulation to assess the performance of hybrid systems. Today's simulation tools are not good enough to handle hybrid systems. Chapter 3 addresses some problems concerning simula-
tion of hybrid systems. Further, there will be suggestions of some features
to be added to simulation tools to improve their reliability. The ideas of
this chapter are found in Malmborg and Bernhardsson (1997).
A design method
In Chapter 4 it is shown how hybrid controllers can be used to improve
the performance of a control system. A hybrid controller design method
that guarantees stability will be presented. In this chapter there will also be a warning and an explanation of why some of the stability results in the literature on hybrid control systems are false. The chapter is an extended
version of Malmborg et al. (1996).
Experiments with two hybrid control systems
The last chapter is a presentation of experimental results. The design
method from Chapter 4 is applied to two processes: Level control of a
double-tank process and air temperature control in a school. The result in
both cases is much faster responses to set-point changes. The double-tank
system control was presented in Malmborg and Eker (1997).
2
Fast mode changes
2.1 Introduction
As discussed in the previous chapter there are many ways to model hybrid
control systems. The common feature that the state space S has both
discrete and continuous variables leads to new possible behaviors of the
overall control system. This chapter analyzes higher order sliding and
sliding on multiple surfaces. The two-relay case in $\mathbb{R}^2 \times \mathbb{R}^{n-2}$ is treated in more detail. The term sliding will also be used for the case where the
system experiences a rapid cyclic switching.
For many of the proposed hybrid models, restrictions are introduced to
prevent infinitely fast switching between the discrete modes. For instance in Tavernini (1987) the distance between any two sets with different discrete transitions is bounded away from zero and the next set from which another discrete transition takes place is at least a fixed distance away. After a discrete transition the new starting point is in an open set on which the dynamics are well defined.
If no such restrictions are imposed on the control system design then it is quite common to get sliding modes, i.e. cycles of infinitely fast mode changes. The resulting dynamics are not always well defined and that is an indication that the modeling must be refined. Infinitely fast switches and under-modeling are always difficult to deal with for simulation tools.
An example
To give a flavor of the difficulties that can exist, a simple example with two states and one relay is examined.
EXAMPLE 2.1: A SIMPLIFIED IVHS
The Equations 2.1 describe a simple model of a vehicle with a control system. The control objective is to drive along the $x_2 = 0$ axis. The equations for $x_2 \neq 0$ are
\[
\begin{aligned}
\dot x_1 &= \cos(u)\\
\dot x_2 &= \sin(u)\\
y &= x_2\\
u &= -\,\mathrm{sgn}(y),
\end{aligned}
\tag{2.1}
\]
where sgn(y) is defined as
\[
\mathrm{sgn}(y) =
\begin{cases}
1 & y > 0\\
\text{undefined} & y = 0\\
-1 & y < 0.
\end{cases}
\tag{2.2}
\]
The applied control is such that the car always points towards the $x_2 = 0$ subspace, see Figure 2.1. The direction u of the car is controlled with the relay $u = -\mathrm{sgn}(y)$. The system description can be aggregated into the hybrid automaton in Figure 2.2. The equation $y \geq 0$ indicates that the transition from mode 2 to mode 1 is enabled if $y \geq 0$.

Figure 2.1 Transversal sliding in Example 2.1.
The dynamics in the two modes are
\[
\begin{aligned}
f_1 &: \quad \dot x_1 = \cos(-1), \quad \dot x_2 = \sin(-1)\\
f_2 &: \quad \dot x_1 = \cos(1), \quad \dot x_2 = \sin(1).
\end{aligned}
\tag{2.3}
\]
Note that both the transition from mode 1 to mode 2 and the transition from mode 2 to mode 1 are enabled for y = 0 and that fast switching is possible on the subspace $y = x_2 = 0$. On that subspace the dynamics are in fact not well defined by Equation 2.1.
Figure 2.2 The hybrid automaton for Example 2.1.
It would be advantageous if a simulation or analysis tool automatically could detect the structural possibility of fast switching in large systems with several relays, extend the system with so called induced modes so that the new system has only well-defined transitions without fast switching, and describe how these modes should inherit dynamics from the basic system. The new automaton should have one induced mode and only well-defined transitions, see Figure 2.3. For Example 2.1 the new mode represents the part of the state space where $x_2 = 0$.

Figure 2.3 System with one new mode. Only one induced transition is indicated, a: ($y = 0$ and $\dot y < 0$).
The introduction of the new modes is nontrivial. To determine what
the dynamics should be often requires more information about the salient
features of the relay. There are several possible denitions of the inherited
dynamics. To determine which is physically motivated more modeling is
typically needed.
DEFINITION 2.1SLIDING SET
Denote the transition functions
ij
(x, x, x, . . . ). A transition from state i
29
Chapter 2. Fast mode changes
to state j is enabled if
ij
(x, x, x, . . . ) 0. Let the set M
I
be dened as
M
I
= x :
(ij)
1
0 . . .
(ij)
k
0
I = (ij)
1
, . . . , (ij)
k
. (2.4)
If for an index set I such that the
ij
form a loop (i
1
= j
k
) the set M
I
is
nonempty then M
I
is called a sliding set.
In the rest of this chapter
ij
will be input functions to relays.
EXAMPLE 2.2SLIDING SET FOR ONE RELAY
A relay u = sgn(x) is governing the transition between two modes. The
mode changes from mode
1
to mode
2
when the relay input goes from a
positive value to a negative value and vice versa. If
12
0 when x 0
and
21
0 when x 0, then the sliding set is x = 0.
Whether there is any sliding or not depends on the vectorelds in the
neighborhood of the sliding set.
EXAMPLE 2.3EXAMPLE 2.1 CONTINUED
There are at least two natural candidate denitions for the dynamics in
the new mode, x
2
= 0. Either choose u = 0 in order to stay on x
2
= 0, this
is called equivalent control and gives the equation
x
1
= cos 0 = 1
x
2
= 0, (2.5)
or else alternate between control signals u = 1 and u = 1, which gives
the convex solution
x
1
=
1
2
(cos(1) cos(1)) = cos(1)
x
2
= 0. (2.6)
Which solution to choose depends on the physical system that the relay
models. This is further discussed in the next section.
2.2 Dynamics inheritance
The new modes, such as the mode for x
2
= 0 in Example 2.1 will have
a set of dynamics that sometimes can be derived from the dynamics of
the adjacent modes. This procedure is denoted dynamics inheritance. Al-
ready the small Example 2.1 revealed that several denitions resulting in
30
2.2 Dynamics inheritance
different dynamics were possible. For many reasons, e.g. simulation and
stability analysis, the dynamics on the sliding surface have to be calcu-
lated and known in detail. The rest of this chapter treats the question of
how do dene the dynamics for some different cases of sliding and related
motions. An important issue is to decide if the inherited dynamics can
be dened in a unique way. Several different sliding-type motions will be
investigated: First order sliding, second order sliding, nth order sliding
and sliding on multiple sliding surfaces.
Transversal vectorelds rst order sliding
This section summarizes the results on sliding motion where the con-
trollers give rise to vectorelds that have a nonzero component perpen-
dicular to the sliding surface. This is called transversal sliding or rst
order sliding. The system class under consideration here is
x = f (x, u), x IR
n
u = | u
1
u
2
. . . u
m
|
T
u
i
= sgn(y
i
), i = 1, . . . , m. (2.7)
How to dene the sliding motion along a surface, such as x
2
= 0 in Ex-
ample 2.1, has been studied by several researchers, see Filippov (1988),
Utkin (1977) and Clarke (1990). For a smooth surface described by the
intersection of m smooth surfaces y
i
(x) = 0, i = 1, . . . m the dynamics
along the intersection can be dened in at least two possible ways. Which
one that is more appropriate depends on how the switches are modeled in
detail. Relays can be used for implementing the switches and in practice
the relays are not perfect. Two approximations are shown in Figure 2.4.
For the continuous approximation the relay output u is changed gradually
y y
u u

Figure 2.4 Approximations of the relay switch u = sgn(y). Hysteresis (left) or


continuous function (right)
as the relay input y changes. For the approximation with a hysteresis the
31
Chapter 2. Fast mode changes
output is changed innitely fast but only as the input exceeds a threshold
level. The two denitions used in this thesis are the denitions introduced
by Filippov and Utkin.
DEFINITION 2.2FILIPPOV CONVEX DEFINITION
The function x(t) is called a solution of Equation 2.7 if x(t) is differentiable
almost everywhere and
x(t) cof (x, u), where u
i
=
_
sgn(y
i
), y
i
,= 0
1, 1, y
i
= 0
(2.8)
In 2.8, co denotes the convex hull and the differential inclusion can be
written as
x =

u1,1
m

u
(t) f (x, u),
where

u
(t) = 1 and
u
(t) 0. The sum is taken over all combinations
of the u-states. If sliding on a surface y
i
= 0 occurs, then
u
is derived
from the equations
y
i
x
x = 0.
This denition has its roots in optimal control under the name of ex-
tended control and from generalized derivatives in non-smooth analysis,
see Clarke (1990). The denition can be motivated by a limiting process
where the relay equations are replaced by u
i
(t) = sgn(y
i
(t
i
)), where

i
0. This can for example be a good approximation if the relays are
implemented on a digital computer. This denition can also be motivated
by hysteresis in space, see Figure 2.4 (left).
An alternative denition of the sliding mode dynamics is the following.
DEFINITION 2.3UTKIN EQUIVALENT CONTROL
The function x(t) is a solution to Equation 2.7 if x(t) is differentiable
almost everywhere and the if m discrete variables in u can be relaxed to
continuous variables in |1, 1| and an equivalent control u
eq
(t) |1, 1|
m
can be found such that
x = f (x, u
eq
). (2.9)
If sliding occurs the equivalent control is derived from the equations y
i
= 0
and y
i
= 0.
This denition can be motivated by a limiting process where each relay
is approximated by a continuous function sgn

(x) that tends to sgn(x)


pointwise as 0

. This can be a good approximation if the relays are


32
2.2 Dynamics inheritance
implemented with analog components, see Figure 2.4 (right). The Filippov
convex denition and the Utkin equivalent control denition coincide if
f (x, u) is afne in u.
For Example 2.1 the convex denition gives the dynamics x
1
= cos(1)
along the switching line x
2
= 0. The equivalent control denition gives
x
1
= cos(0) = 1. Both denitions can be natural candidates for the physi-
cal behavior of the car. If the turning of the wheel can be done very fast,
but it takes a while to observe that the x
2
= 0 line has been crossed,
then the convex denition is appropriate. If the turning is slow, then it is
perhaps better modeled with the continuous approximation of the relay
i.e by equivalent control.
Multiple switching surfaces
If there are more than one relay, i.e. if m 1 then equations 2.8 or 2.9
are not sufcient to dene the sliding motion uniquely. The sliding takes
place on an (nm)-dimensional manifold. The number of unknowns,
u
,
is 2
m
1 and there are only m equations. The case with several relays is in
fact not very well understood. The physical sliding motion will depend on
the salient features of the different relays, e.g. which relay is the fastest.
The problem with two relays is investigated in Section 2.4.
Non-transversal sliding higher order sliding
In the literature it is common to assume transversal intersection of vec-
torelds and switching surfaces, i.e. f
T
i

j
,= 0, where
j
= 0 dene
the sliding sets and f
i
the vectorelds. Non-transversal intersections can
however arise quite naturally and should not be considered degenerate or
non-generic. To see this, Example 2.1 is extended with a third equation
describing sensor dynamics.
EXAMPLE 2.4IVHS EXAMPLE WITH SENSOR DYNAMICS
The new equation set is
x
1
= cos u
x
2
= sinu
x
3
= x
3
x
2
y = x
3
u = sgn(y). (2.10)
The switching surface is now given by x
3
= 0 and there is non-transversal
sliding on the subspace x
2
= x
3
= 0. Figure 2.5 shows a typical trajectory
close to the x
1
-axis. Note that
y
x
f (x, u) = 0 on this line for all u. Thus
33
Chapter 2. Fast mode changes
this equation cannot be used for nding the equivalent control u
eq
or the

u
in the convex denition.
To dene the sliding dynamics it can be motivated to extend the sliding
denitions for transversal vectorelds in the following way: differentiate
y(x) with respect to t until it is possible to solve for u.
0
2
4
6
1
0
1
0
1
x
1
x
2
x
3
Figure 2.5 IVHS example with stable sensor dynamics. There is sliding on the
subspace x
2
= x
3
= 0.
For this example the convex denition is derived from equations y = 0 and
y = 0. The equivalent control is given by solving y = 0, y = 0 and y = 0 for
u. (For both cases this gives the same sliding behavior in the x
1
direction
as without sensor dynamics.) There is still a difference depending on if
the convex or equivalent control denition is used since ( x
1
)
eq
= 1 and
( x
1
)
convex
= cos(1).
Note that initial conditions close to the subspace x
2
= x
3
= 0 give
approximately the convex solution. Hence this might be the best denition
of the dynamics on the subspace x
2
= x
3
= 0.
Necessary and sufcient conditions for existence of higher order sliding
for systems with one relay are unknown. A necessary condition is that

u
y
(k)
< 0 where k is the smallest integer such that

u
y
(k)
,= 0. The
number k is called the nonlinear relative degree, see Nijmeijer and van der
Schaft (1990). If f (x, u) = a(x) b(x)u and y = h(x) the condition can
be written L
b
L
k1
a
h(x) < 0. Here L
a
denotes the Lie-derivative in the
direction of a(x), i.e L
a
h =

h
i
x
a
i
. If the relative degree k is greater
than one the intersection is non-transversal. Transversal intersections
are hence generic only in the same weak sense as linear systems are
generically of relative degree one.
Sufcient conditions for non-transversal sliding are hard, since stabil-
ity of the switching set must also be studied. If for instance the sensor
dynamics equation above is changed to x
3
= x
3
x
2
the switching line
34
2.3 Non-transversal sliding of degree two
x
2
= x
3
= 0 is better described as being unstable and there will not be
any sliding motion along]around the line lasting a long time. The example
motivates the following investigation.
2.3 Non-transversal sliding of degree two
A system with two discrete modes is modeled with the equations
x = f
u
(x)
u = sgn( (x))
(x) =

x
f
u
(x). (2.11)
Switching between the modes takes place when = 0. To begin with, the
switching surface is transformed to (x) = y = 0. Any smooth disconti-
nuity surface can be transformed to this with a local change of variables
that can be found from the implicit function theorem. The following dis-
cussion assumes that there has been a coordinate transformation to arrive
at y = (x) = 0. Non-transversal sliding takes place on a subspace where
y = y = 0. Assume further that another smooth transformation can be
found as to have the sliding subspace at x : x
1
= 0, x
2
= 0. In order to
investigate the dynamics on such a subspace the following denitions are
done. The subspace S
12
is dened as
S
12
:= x
1
= 0, x
2
= 0, (2.12)
and the -environment of the subspace S
12
is
S
12
() := [x
1
[ < , [x
2
[ < . (2.13)
DEFINITION 2.4LOCALLY STABLE SUBSPACE
A subspace S
12
is said to be locally stable if for all 0 there exists 0
so that x(0) S
12
( ) = x(t) S
12
(), t 0.
Stable second order sliding
The stability issue is now solved for the case k = 2. The analysis was
inspired by Filippov (1988) pp. 234-238 where he does a similar analysis
for a two dimensional case. Let x(t), y(t), z(t) IR IR IR
n2
be given
by the following equations if y 0
x = P

(x, y; z)
y = Q

(x, y; z)
z = R

(x, y; z),
35
Chapter 2. Fast mode changes
and if y < 0 by
x = P

(x, y; z)
y = Q

(x, y; z)
z = R

(x, y; z),
where P

, Q

, R

C
2
. The plane y = 0 is hence a discontinuity plane
for the dynamics, see Figure 2.6.
x
z
y
x, y IR
z IR
n2
x = P

(x, y; z)
y = Q

(x, y; z)
z = R

(x, y; z)
x = P

(x, y; z)
y = Q

(x, y; z)
z = R

(x, y; z)
Figure 2.6 The discontinuity plane is y = 0.
Assume that Q

(0, 0; z) = 0, z, which is the case when there can be non-


transversal sliding along the x = y = 0 subspace. With the sign conditions
xQ

(x, 0; z) < 0, xQ

(x, 0; z) < 0, x ,= 0, (2.14)


there is no sliding in the plane y = 0 unless possibly along the subspace
x = y = 0. Further sign conditions assumed are
P

(0, 0; z) 0, P

(0, 0; z) < 0. (2.15)


From Equation 2.14 follows that Q

x
(0, 0; z) 0. If this condition is sharp-
ened to Q

x
(0, 0; z) < 0 the following theorem can be proved.
THEOREM 2.1SECOND ORDER STABLE SLIDING
For the system described above, the subspace S
yx
is locally stable around
the point (0, 0; z) if a
2
(z) = A

< 0, where
A

=
_
P
x
Q
y
P

Q
xx
2Q
x

(Q
x
P
z
PQ
xz
)R
Q
x
P
2
_

, (2.16)
36
2.3 Non-transversal sliding of degree two
y
x
(
k1
, 0; z
k1
)
(
k
, 0; z
k
)
y
x
(
k
, 0; z
k
)
(
k1
, 0; z
k1
)
Figure 2.7 The yx-plane. The z-directions, z IR
n2
, are omitted for simplicity.
The intersections with the y = 0 plane for x 0 are denoted
k
.
where all functions are evaluated at (0, 0; z). The sliding dynamics on
the subspace S
yx
are dened by:
z =

(0, 0; z)

(0, 0; z), (2.17)


where

and

are uniquely dened by

= 1

. (2.18)
Thus the convex denition is the natural solution if the dynamics on the
subspace S
yx
should be the limit of dynamics just off it The series of
intersections with the y = 0 plane for x 0 is denoted |0,
k
, z
k
|. The
sequence
k
is monotonously decreasing with

k1
=
k
a
2
(z)
2
k
O(
3
k
). (2.19)
Proof See Appendix A.
Remark Note that the rotational direction and the corresponding sign
conditions, equations 2.14 and 2.15 can be changed. The rotation in Fig-
ure 2.7 (left) is used in the proof of Theorem 2.1. The rotation in Fig-
ure 2.7 (right) is the natural choice for afne systems on normal form,
see Section 2.5, with y = x
1
. If the rotational direction is changed then
the stability condition is also reversed, a
2
(z) = A

< 0 becomes
a
2
(z) = A

0.
37
Chapter 2. Fast mode changes
EXAMPLE 2.5IVHS, WITH FILTER DYNAMICS
Introduce the lter parameter a in x
3
= ax
3
x
2
and check the stability
conditions for
x
1
= cos(u) = R
x
2
= sin(u) = P
x
3
= ax
3
x
2
= Q
y = x
3
u = sgn(y). (2.20)
Apply Equation 2.16 to derive A

and A

=
a
sin(1)
, A

=
a
sin(1)
. (2.21)
The rotation direction is as in Figure 2.7 (right) and the stability condition
is thus a
2
(x
1
) 0,
a
2
=
2
3
(A

) =
4a
3 sin(1)
0. (2.22)
It is now easy to see that using a stable lter, i.e a 0, leads to a stable
sliding motion. An unstable lter gives an unstable sliding manifold.
Simulation of Example 2.5 Figure 2.8 (left) shows a simulation of
Example 2.5. It is a phase portrait in the x
2
and x
3
coordinates and the
x
1
coordinate is omitted. The velocity in the x
1
-direction is cos(1). Fig-
ure 2.8 (right) shows an estimate of a
2
(x
1
) based on the formula derived
in Appendix A.

k1
=
k

2
k
a
2
(x
1
), (2.23)
where
k
is dened as the x
2
-coordinate for the intersections with the
(x
3
= 0)-plane for x
2
0. The limiting value for a
2
(x
1
), as k tend to
innity, is
4a
3sin(1)
= 1.58, for a = 1.
What about convergence rates? The time between two consecutive
crossings with (x
3
= 0, x
2
0) is approximately proportional to the x
2
coordinate of the starting point | x
1
(0), x
2
(0), 0) |
T
= | x
1
(0),
0
, 0 |
T
. Denote
the time for lap k as t
k
. The recursive equation for lap times hence becomes
t
k1
= t
k
t
2
k

2
3
(A

) O(t
3
k
). (2.24)
How long time does it take to reach the subspace x
2
= x
3
= 0?
38
2.3 Non-transversal sliding of degree two
-0.4 0 0.4
-0.2
-0.1
0
0.1
0 5 10 15
0.6
0.8
1
1.2
1.4
1.6
x
2 k
x
3 a
2
(x
1
)

Figure 2.8 The x


2
-x
3
phase plane (left) and the estimate, a
2
(x
1
), of the stability
function (right). The theoretical limiting value is 1.58.
LEMMA 2.1TIME TO REACH THE SUBSPACE x
2
= x
3
= 0
Given Equation 2.24 for the lap times, t
k
then the sum
T =

k=1
t
k
(2.25)
diverges.
Proof Re-scale the time variable to
k
=
2
3
(A

)t
k
. The difference
equations is now
k1
=
k
(1
k
), with
1
small. For any large value of
k
0
, a constant C can be chosen such that 0 <
1
Ck
0
<
k
0
<
C
k
0
. Then using
Equation 2.24 it is easily seen that 0 <
1
Ck
<
k
<
C
k
is true for all k k
0
.
The sum is bounded by two positive diverging sums and thus diverging
itself.
This will cause severe problems to simulation tools. The trajectory will
get closer and closer to the subspace x
2
= x
3
= 0 and the time between
two consecutive switches will become shorter and shorter. An advanced
simulation tool could notice this and introduce a new mode, modeling the
dynamics on the subspace x
2
= x
3
= 0. When the trajectory comes close
to this subspace the simulation could turn to this new mode.
Afne systems
If the system is afne in u then the expression for the A

-function can be
simplied. Start with an afne system
x = f (x) q(x) u, x IR
n
, u IR
y = h(x), y IR, (2.26)
39
Chapter 2. Fast mode changes
of nonlinear relative degree two, i.e. with L
q
h = 0 in Equation 2.26 and
apply a nonsingular coordinate transformation
z = S(x) = | h(x) L
f
h(x) s
3
(x) |
T
.
The vectoreld pattern in Figure 2.7 (right) applies with y = z
1
, x = z
2
.
After transformation the new state equations are
z
1
=

h(x) = (L
f
u L
q
)h(x) = L
f
h = z
2
= Q
z
2
= (L
f
uL
q
)L
f
h(x) = (L
2
f
uL
q
L
f
)h(x) = P
z
3
= f
3
(x) q
3
(x)u =
s
3
x
( f (x) q(x)u) = R. (2.27)
The sign and relative degree conditions in equations 2.14 and 2.15 be-
come
Q

z
2
= 1 0
P

= (L
2
f
L
q
L
f
)h(x) < 0
P

= (L
2
f
L
q
L
f
)h(x) 0. (2.28)
The equation for A

is reduced to
A

=
1
P
(P
z
2

P
z
3
R
P
), (2.29)
where again all functions are evaluated at (0

, 0; z
3
) and the stability
conditions is A

0.
Linear dynamics Non-transversal sliding for linear dynamics and one
relay was studied in Johansson and Rantzer (1996). They worked with
the system
x = Ax Bu
y = Cx
u = sgn(y)
x IR
n
, u, y IR, (2.30)
and proved that there can only be non-transversal sliding if CB = 0
and CAB 0. Their results are consistent with the results for nonlinear
systems derived in this section.
40
2.3 Non-transversal sliding of degree two
What about higher order stable sliding?
For simplicity only afne systems are treated here and the result is that
there can be no nth order sliding where n 3. Without loss of generality
the systems can be written as
x
1
= x
2
x
2
= x
3
x
3
= a(x) b(x)u
x
4
= f
r
(x)
y = x
1
u = sgn(y) (2.31)
DEFINITION 2.5THIRD ORDER SLIDING
The third order sliding manifold of Equation 2.31 is dened as the set
S
123
:= x
1
= 0, x
2
= 0, x
3
= 0.
The following denition is needed to check if innitely many switches take
place in nite time.
DEFINITION 2.6FAST SWITCHING POINTS
A point x S
123
is denoted a fast switching point if 0 and M there
0 such that if x(0) B

( x) \ S
123
then there x(t) that satises
Equation 2.31 at almost all t |0, | such that
there will be at least 2M switches in the time interval |0, |,
|x(t
2M
) S
123
| |x(0) S
123
|,
where t
2M
is the time of the 2Mth switch.
LEMMA 2.2A NECESSARY CONDITION FOR FAST SWITCHING IS [b( x)[ [a( x)[
Assume that [b( x)[ < [a( x)[. If u switches from 1 to 1 or vice versa,
then given an arbitrary short time interval , x will be close to x and
the sign of a(x) b(x)u will be constant, hence x
3
changes sign at most
once, x
2
changes sign at most two times and x
1
changes sign at most three
times in the same time interval.
THEOREM 2.2THERE ARE NO FAST SWITCHING POINTS FOR AFFINE SYSTEMS
OF ORDER 3.
Assume that b( x) [a( x)[ 0 and that a, b, f
r
are Lipschitz continuous.
Then x is not a fast switching point.
41
Chapter 2. Fast mode changes
Proof Partial integration gives for any C
2
_
t
2
t
1
(t) dt =
t
2
t
1
2
((t
2
) (t
1
))
_
t
2
t
1
(t t
1
)(t
2
t)
2

//
(t) dt.
Now apply this to (t) = x
2
(t) in Equation 2.31 with t
k
as the times where
x
1
(t) = 0. The expression for the state x
2
at the next switching time is
x
2
(t
2
) = x
2
(t
1
)
_
t
2
t
1
(t
2
t)(t t
1
)
t
2
t
1
(a bu) dt.
Continuing to next switch,
x
2
(t
2
) = x
2
(t
0
)

_
t
2
t
1
(t
2
t)(t t
1
)
t
2
t
1
(a b) dt
_
t
1
t
0
(t
1
t)(t t
0
)
t
1
t
0
(a b) dt.
(2.32)
Since is small and the interval between switches is small continuity
gives that a(x) b(x)u are almost constant and hence do not change
sign in the intervals t
1
t
0
and t
2
t
1
respectively. Hence x
2
(t
2
) can be
approximated with
x
2
(t
2
) = x
2
(t
0
)
(t
2
t
1
)
2
6
(b a)
(t
1
t
0
)
2
6
(b a).
Hence, the sequence x
2
(t
2k
) is growing and therefore the second condition
in Denition 2.6 is not fullled.
2.4 Two-relay systems
If there are several relays and thus several switching surfaces the situ-
ation gets more complicated as there can be sliding on several switching
surfaces simultaneously. In this situation it is not enough to choose def-
inition, Filippov or Utkin, for the sliding dynamics because there is not
enough equations to dene unique dynamics. The sliding dynamics will
in general depend on the salient features of the relays. In this section
it is shown that if the relays work on different time scales, i.e. the dif-
ference in switching times are large then it is possible to derive unique
sliding dynamics for simultaneous sliding on two switching surfaces. The
dynamics are again
x = f (x, u), x IR
n
, u |0, 1|
2
, (2.33)
42
2.4 Two-relay systems
and in the analysis that follows it is assumed that the relays are dened
as u
i
= sgn(x
i
) and that the vectorelds f (x, u
1
, u
2
) are constant. That
the inputs of the relays are the coordinates x
i
is not a severe restriction
since this situation can often be achieved by a smooth transformation of
the original inputs. In the following discussion it is assumed that such
a transformation has been done. The relays are not perfect and the hys-
teresis approximation in Figure 2.4 is used.
For two-relay systems there are four different relay states and four
vectorelds. Dene the possible switching sets as
S
1
:= x : x
1
= 0 S
2
:= x : x
2
= 0 S
12
:= x : x
1
= 0, x
2
= 0
S

1
:= x : x
1
= 0, x
2
< 0 S

2
:= x : x
2
= 0, x
1
< 0
S

1
:= x : x
1
= 0, x
2
0 S

2
:= x : x
2
= 0, x
1
0. (2.34)
Stable congurations
For stable vectoreld patterns in the neighborhood of S
12
, see Figure 2.9 to
Figure 2.11. The notation r and s indicates that the sizes and directions
of the vectorelds are such that the resulting motion is either rotational or
sliding. The automata indicate the different modes. There are four basic
f
1
f
2
f
3
f
4
x
1
x
2
1 2
3 4
5
Figure 2.9 Type rrrr and type rrrs
Figure 2.10 Type rsrs and type rrss
modes, one for each quadrant. If more than one vectoreld is active, such,
as in the sliding mode case, then new modes are added to the automaton.
There are six vectoreld patterns for which S
12
can be a locally stable
subspace. All six cases thus contain a mode in the middle corresponding
to the subspace S
12
. On S
12
all four vectorelds are active and a linear
combination of them denes the dynamics there.
43
Chapter 2. Fast mode changes
Figure 2.11 Type rsss and type ssss
Classication matrices
To facilitate the analysis some matrices describing the problem will be
used, see Seidman (1992). Introduce the matrix V containing the follow-
ing four vectorelds
V = | f
1
f
2
f
3
f
4
| =
_
_
f
11
f
21
f
31
f
41
f
12
f
22
f
32
f
42
f
13
f
23
f
33
f
43
_
_
, V IR
n4
. (2.35)
The dynamics can be written x = V, where = |
1
,
2
,
3
,
4
|
T
. Out-
side the sliding sets one of the
i
= 1 and the other
i
= 0. Note that
f
i1
, f
i2
IR and that f
i3
IR
n2
contains the rest of the dynamics. The
coefcient matrix C is used for checking the sliding conditions.
C =
_
_
1 1 1 1
f
11
f
21
f
31
f
41
f
12
f
22
f
32
f
42
_
_
, C IR
34
. (2.36)
Note that C = |1, x
1
, x
2
|
T
and that the equation C = |1, 0, 0|
T
holds on
the subspace S
12
.
Non-unique sliding dynamics
Sliding on the subspace S
12
requires
C = | 1 0 0|
T
. (2.37)
Typically C has rank 3 and the null space
N
is dened by any nonzero
solution
N
to
C
N
= | 0 0 0|
T
. (2.38)
The solutions, , to Equation 2.37 dening, x = V, lie on a line segment
in IR
4
and can be written
=
0

N
p,
i
0, (2.39)
44
2.4 Two-relay systems
where
0
is any solution to Equation 2.37 and p is a scalar. The dynamics
on the subspace S
12
can depend on the choice of p and are generally not
unique. Linear programming can be used to nd specic properties such
as
max
p
V(
0

N
p)
i
,

N
p 0, (2.40)
i.e. the maximum velocity in the x
i
direction.
Unique sliding dynamics
There are three situations where the dynamics can be uniquely dened
on a sliding set: if only two vectorelds are involved, if the dynamics are
afne in the relay output signals or if the relays work on different time
scales.
Only two vectorelds involved In the case when the sliding motion
involves only two modes the new dynamics can be uniquely determined.
For example, sliding on S

2
involves only vectorelds f
1
and f
4
and the
sliding conditions are
C = | 1 x
1
0 |
T
= |
1
0 0
4
|
T
. (2.41)
These equations give unique sliding dynamics in x = V. This is the
Filippov convex denition as relays with hysteresis are used.
Dynamics afne in relay outputs For special cases of V the mini-
mum and the maximum coincide. This happens e.g. when f (x, u) is afne
in u.
LEMMA 2.3UNIQUE SLIDING VELOCITY
A unique sliding velocity is obtained if the vectorelds f
1
, f
2
, f
3
and f
4
can be written as f
i
= f
0
f
1
u
1
f
2
u
2
, where u
1
and u
2
take the value
1 or 1.
Proof Any solution will have dynamics x that can be written as
x =
i=4

i=1

i
f
i
=
1
f
1

2
f
2

3
f
3

4
f
4
= f
0
f
1
((
1

2

3

4
) f
2
(
1

2

3

4
)
= f
0
| f
1
f
2
|
_
1 1 1 1
1 1 1 1
_
= | 0 0 x
3
|
T
. (2.42)
45
Chapter 2. Fast mode changes
For stable systems the sign conditions on the vectorelds give that the
rank of the left hand side is 2 and the dynamics will thus be unique. The
different solutions can written as
=
0

_
1 1
1 1
1 1
1 1
_

_
_
p
1
p
2
_
. (2.43)
Relays with hysteresis The third possibility to dene unique sliding
dynamics is to use relays with hysteresis. The relays can have hysteresis
in either space or time. Space hysteresis mean that the input has to ex-
ceed a certain threshold level before the relay switches. Time hysteresis
means that the relay switches a certain time after passing x
i
= 0. If the
relays work on different time scales i.e. one relay is much faster than the
other, then it is possible to derive unique solutions to all the six cases in
gures 2.9 to 2.11. That relay one is much faster than relay two means
that
1
,
2
0 in such a way that
lim

1
,
2
0

2
= 0, (2.44)
and where relay one is the relay controlling the switch over x
1
= 0.
For all six cases it holds that the subspace S
12
is reached in nite
time. This is proven for the case rrrr where it is not so obvious. That the
constant vectorelds requirement can be relaxed to nonlinear vectorelds
is proved in the ssss case. In the following sections the limiting sliding
dynamics are calculated for all six cases. The rst case, rrrr, is analyzed
both with and without hysteresis.
The case rrrr
Let the four constant vectorelds in V,
V =
_
_
f
11
f
21
f
31
f
41
f
12
f
23
f
32
f
42
f
13
f
23
f
33
f
43
_
_
, (2.45)
be oriented as in Figure 2.12. If 0 <
i
< ]2, i = 1 . . . 4, then there is a
cyclic motion in the x
1
-x
2
-plane. Outside the set
S
12
:= x : x
1
= 0, x
2
= 0, (2.46)
the vectorelds are used in the xed sequence, f
1
f
2
f
3
f
4
and the solu-
tion to x = V is well dened and depends only on the initial conditions.
46
2.4 Two-relay systems

4
x
1
x
2
x
1
x
2

4
Figure 2.12 Trajectory of a stable cyclic system. The stability condition is

i
tan(
i
) < 1. Without hysteresis (left) and with hysteresis (right).
Relays without hysteresis To begin with, a system where the switch-
ing is implemented with perfect, innitely fast, relays is studied. The
starting point is x(0) = |x
1
(0), 0; x
3
(0)|
T
. After one loop the new coordi-
nates are x(T
1
) = |x
1
(T
1
), 0; x
3
(T
1
)|
T
, where
x
1
(T
1
) = tan
4
tan
3
tan
2
tan
1
x
1
(0), (2.47)
hence the following lemma.
LEMMA 2.4LOCAL STABILITY OF S
12
The subspace S
12
is locally stable for perfect relay system of the form in
Figure 2.12 if
tan
1
tan
2
tan
3
tan
4
< 1. (2.48)
Proof Sample at each time that the positive x
1
-axis is crossed and
denote the samples x
1
(k). Then
x
1
(k 1) = x
1
(k)
= tan
1
tan
2
tan
3
tan
4
(2.49)
with the solution
x
1
(k) =
k
x
1
(0)
lim
k
x
1
(k) = 0, [[ < 1. (2.50)
47
Chapter 2. Fast mode changes
Approaching the set S
12
, switching will be faster and faster. Finally, after
a nite time t

the trajectory will reach S


12
. The time to reach the set S
12
is given by
t

= lim
k
t
k
=

j=1

j1
T
1
=
1
1
T
1
, (2.51)
where T
1
is the rst lap time. After t = t

the solution is no well-dened


and todays analysis and simulation tools fail.
The limiting average dynamics depends on the initial conditions and
are not unique. The limiting average dynamics could be calculated from
the relative time spent in each quadrant.
LEMMA 2.5AVERAGE DYNAMICS
The relative time spent in each quadrant is constant for every lap. From
this follows that the average velocity is constant.
Proof With constant vectorelds the following equations relate the
points where new quadrant is entered each loop and the time spent in
each quadrant, x ] S
12
,
x
i
(k 1) = x
i
(k), i = 1 . . . 4
t
i
(k 1) = t
i
(k), i = 1 . . . 4. (2.52)
The
i
are constant

i
(k 1) =
t
i
(k 1)

4
i=1
t
i
(k 1)
=
t
i
(k)

4
i=1
t
i
(k)
=
i
(k), (2.53)
and the average dynamics are
x(t) =
4

i=1

i
f
i
=
_
_
0
0

4
i=1
(
i
f
i
)
3
_
_
. (2.54)
However, the values of the
i
depend on the initial conditions. This means
that this approach does not lead to a meaningful denition of the dynamics
on the subspace S
12
.
48
2.4 Two-relay systems
Relays with hysteresis If hysteresis is introduced, see Fig. 2.12 (right),
the solution will tend to a unique limit cycle. One natural attempt to de-
ne the dynamics on S
12
would be to introduce hysteresis, compute the
average dynamics in the limit cycle and nally let the hysteresis tend to
zero. Assume now that the relays do not switch until the next quadrant
is penetrated a certain distance, see Figure 2.12 (right).
LEMMA 2.6HYSTERESIS DOES NOT AFFECT STABILITY
The set S
12
is locally stable for the system with hysteresis if S
12
is lo-
cally stable for the system without hysteresis. Furthermore the trajectory
converges to a limit cycle.
Proof Sample when the
4
line is crossed and dene x
1
(k) as the x
1
co-
ordinate for the crossing. The coordinates for the crossings can be written
as iterative functions, e.g. x
1
(k 1) = f (x
1
(k)), where f (x
1
) =
((((x
1

1
) tan
1

4

2
) tan
2

1
) tan
3

2

4
) tan
4
= x
1
, (2.55)
hence the iteration is stable and
lim
k
x
1
(k) = x
1
=

1
=

. (2.56)
One of the switching points is | x
1
,
4
; x
3
(k)|. From this the other switching
points can be calculated and also how much time that is spent in each
domain. This gives the
i
and the average dynamics for x
3
,
x =
4

i=1

i
(, f
1
, f
2
, f
3
, f
4
) f
i
. (2.57)
However, the limit
lim
0
V(), (2.58)
is not unique and will in general depend on the relative size of the
i
.
Unique solutions for and x
3
In the previous section it was shown
that in general and x
3
depend on the relative sizes of the
i
. Only the
cases where
1
=
3
and
2
=
4
are considered in the rest of this section.
If relay one is much faster than relay two then the limit cycle has one of
49
Chapter 2. Fast mode changes
its vertex at (0,
2
), see Figure 2.13 (left). Taking one lap in the limit
cycle gives as the solution to

2
= ((1 )
2
tan
2
tan
3
2
2
) tan
4
tan
1
=
tan
1
tan
2
tan
3
tan
4
2 tan
1
tan
4
1 tan
1
tan
2
tan
3
tan
4
. (2.59)
Knowing one vertex it is easy to calculate the others and the time spent
in each quadrant. Dene t
i
as the time interval vectoreld i is used. The
total lap time is

4
i=1
t
i
and the
i
are
t
i

4
i=1
t
i
. The dynamics in x = V
are now unique. Making relay two faster just results in a permutation of
the indices, see Figure 2.13 (right).
x
1
x
2
x
1
x
2

2

1
Figure 2.13 Motion close to the subspace S
12
for the case rrrr. Faster relay one
(left) and faster relay two (right).
The case ssss
In this case there can be sliding on all four half-lines S

1
and S

2
. Three
simulation sets illustrate the importance of knowing the relative speeds
of the relays.
Simulation with different relay speeds To get a feeling for the im-
pact of relative speeds of the relays a numerical example is studied. Three
simulation set are done. One with a faster relay over the x
1
= 0 axis, one
with a faster relay over the x
2
= 0 axis and one with two relays with
equal speed. The vectoreld matrix
V =
_
_
2 1 0.3 0.1
1 1 0.7 10
1 2 3 4
_
_
, (2.60)
50
2.4 Two-relay systems
is used in the rst two simulation sets. Space hysteresis is used and the
hysteresis constants are
1
and
2
for relay one and relay two respectively.
Simulation set 1, small
2
In this rst simulation set,
1
is kept
constant and
2
is decreased. For each value of
2
the average velocity x
3
is measured, see Table 2.1 for observed velocities.

1
1 1 1 1

2
1 0.5 0.1 0.05
x
3
2.34444 2.29643 2.26792 2.26784
Table 2.1 Results of simulation set 1, decreasing
2
.
Simulation set 2, small
1
Same principle as in simulation set 1 but
this time
1
is decreased. Table 2.2 shows the observed velocities. Note

1
1 0.5 0.1 0.05

2
1 1 1 1
x
3
2.34444 2.26537 1.98814 1.92444
Table 2.2 Results of simulation set 2, decreasing
1
.
that the limiting velocities are not the same in the two simulation sets.
Simulation set 3, small
1
and
2
If both
1
and
2
are decreased
at the same rate then the velocity in the x
3
direction depends only on the
initial conditions. The matrix V is in this simulation set was
V =
_
_
1 1 1 1
1 1 1 1
1 1 1 1
_
_
. (2.61)
Starting at the point x = | , , x
3
(0)| results in the sliding dynamics
x(3) = 1. Starting at the point x = | , , x
3
(0)| gives the sliding dy-
namics x(3) = 1. Other initial conditions give something in between.
Interpretation of the simulation results The simulation results can
be motivated by using Filippov convex solutions for the dynamics in the
x
3
direction.
51
Chapter 2. Fast mode changes
Simulation set 1 In the rst simulation set
1
was xed and the size of

2
was decreased. The velocity seems to converge to something between
2.26 and 2.27. This velocity is the one obtained if the Filippov convex
denition is used to calculate the sliding velocity on the manifold x
2
= 0.
Then the Filippov convex denition is again used to calculate the velocity
on the manifold x
1
= 0. In equations this looks like
| 0 1 0| ((1
14
) f
1

14
f
4
) = 0
| 0 1 0| ((1
23
) f
2

23
f
3
) = 0
(1
14
) f
1

14
f
4
= f
14
(1
23
) f
2

23
f
3
= f
23
| 1 0 0 | ((1
1423
) f
14

1423
f
23
) = 0

1423
f
14
(1
1423
) f
23
= F
C
1423
.
(2.62)
Calculations result in the sliding velocity F
C
1423,3
= x
3
= 2.2679, which is
consistent with the simulations.
Simulation set 2 In this simulation set
2
is xed and the size of
1
is decreased. The velocity seems to converge to somewhere between 1.90
and 1.95. This is the velocity obtained if the Filippov convex denitions
is used but in the other order, rst over x
1
= 0 and then over x
2
= 0. The
corresponding equations are
| 0 1 0| ((1
12
) f
1

12
f
2
) = 0
| 0 1 0| ((1
34
) f
3

34
f
4
) = 0
(1
12
) f
1

12
f
2
= f
12
(1
34
) f
3

34
f
4
= f
34
| 1 0 0 | ((1
1234
) f
12

1234
f
34
) = 0

1234
f
12
(1
1234
) f
34
= F
C
1234
.
(2.63)
Solving for F
C
1234
, gives F
C
1234,3
= x
3
= 1.9068. Again this is consistent with
the simulations.
Conclusions from simulations sets 13 It was seen that the slid-
ing velocity becomes well dened if the relative size
i
]
j
is small. As
for simulation set 3, the resulting sliding velocity depends on the initial
conditions.
The results of this example suggest the following theorem where the
requirement that the vectorelds should be constant is relaxed.
52
2.4 Two-relay systems
THEOREM 2.3THE CHATTERBOX THEOREM
Assume that the vectoreld pattern is as in the case ssss, see Figure 2.11.
If relay two is faster than relay one i.e.
lim

1
,
2
0

1
= 0, (2.64)
then the limiting dynamics as
1
,
2
0 are x = F
C
1423
. This is the dy-
namics given by taking Filippov convex solutions rst over x
2
= 0 and
then over x
1
= 0.
Proof See Appendix B.
x
1
x
2
x
1
x
2

1
Figure 2.14 Motion close to the subspace S
12
for the case ssss. Faster relay one
(left) and faster relay two (right).
To be consistent with rest of the six cases equations 2.62 and 2.63 are
rewritten here using another notation.
Relay one faster: lim

1
,
2
0

1

2
= 0 Two balance equations for the x
2
direction are used, one on S

2
and one on S

2
,
f
12

1
f
22

2
= 0
f
32

3
f
42

4
= 0. (2.65)
Those two equations replace one single conditions on x
2
in the C = 0
constraint.
Relay two faster: lim

1
,
2
0

2

1
= 0 Now there are two equations for
the x
1
direction instead, one on S

1
and one on S

1
,
f
11

1
f
41

4
= 0
f
21

2
f
31

3
= 0. (2.66)
53
Chapter 2. Fast mode changes
The case rrrs
x
1
x
2
x
1
x
2

1
Figure 2.15 Motion close to the subspace S
12
for the case rrrs. Faster relay one
(left) and faster relay two (right).
For motion close to the subspace S
12
, see Figure 2.15. A sliding motion
can possibly occur on S

2
. Depending on which relay that is the fastest
two different situations occur.
Relay one faster: lim

1
,
2
0

1

2
= 0 If relay one is the fastest then the
same solution as for the rrrr case is obtained. Find one vertex at (0,
2
).
This gives
i
and x = V on S
12
. See Figure 2.15 (left).
Relay two faster: lim

1
,
2
0

2

1
= 0 Now the vectoreld f
2
is used less
and less as
2
becomes smaller and smaller. The is found by adding the
equation
2
= 0 to the conditions on C.
C = | 1 0 0 |
T
| 0 1 0 0| = 0. (2.67)
The sliding motion is indicated with a thicker line, Figure 2.15 (right).
The case rsrs
With this vectoreld pattern there is a possibility for sliding both on S

2
and S

2
.
Relay one faster: lim

1
,
2
0

1

2
= 0 A faster relay one again leads back
to the rrrr solution with one vertex on (0,
2
). See Figure 2.16 (left).
54
2.4 Two-relay systems
x
1
x
2
x
1
x
2

1
Figure 2.16 Motion close to the subspace S
12
for the case rsrs. Faster relay one
(left) and faster relay two (right).
Relay two faster: lim

1
,
2
0

2

1
= 0 This lead to sliding on S

2
. The
balance equation in the x
2
direction, third row in C = 0, is now replaced
with two equations. One for S

2
and one for S

2
,
f
12

1
f
42

4
= 0
f
22

2
f
32

3
= 0, (2.68)
where the f
i2
are the components in the x
2
direction, see Figure 2.16
(right). Equation 2.68 together with the remaining equations in C = 0
now give a unique solution for and thus for x = V.
x
1
x
2
x
1
x
2

1
Figure 2.17 Motion close to the subspace S
12
for the case rrss. Faster relay one
(left) and faster relay two (right).
55
Chapter 2. Fast mode changes
The case rrss
Now there is possible sliding on S

1
and S

2
. The relative speeds of the
relays decide which vectoreld that is not used.
Relay one faster: lim

1
,
2
0

1

2
= 0 Vectoreld f
1
never comes into use.
The equation
1
= 0 is an additional constraint, which together with the
C = 0 condition give a unique velocity x = V. See Figure 2.17 (left).
Relay two faster: lim

1
,
2
0

2

1
= 0 Now it is vectoreld f
2
that is not
used. The additional constraint needed to get unique sliding dynamics is
thus
2
= 0. See Figure 2.17 (right).
The case rsss
x
1
x
2
x
1
x
2

1
Figure 2.18 Motion close to the subspace S
12
for the case rsss. Faster relay one
(left). Faster relay two (right).
In the last of the six cases the possible sliding sets are S

2
, S

2
and S

1
.
As usual whether there is sliding or not depends on the relative speeds
of the relays.
Relay one faster: lim

1
,
2
0

1

2
= 0 Vectoreld f
1
is never used and
thus
1
= 0. Sliding occurs on S

1
. See Figure 2.18 (left).
Relay two faster: lim

1
,
2
0

2

1
= 0 Sliding is possible both on S

2
and
S

2
. Now there are two equations for the x
2
direction. One for sliding on
S

2
and another for sliding on S

2
,
f
12

1
f
42

4
= 0
f
22

2
f
32

3
= 0.
In total there are four equations and thus unique sliding dynamics.
56
2.5 Summary
2.5 Summary
This chapter treats the problem of sliding or fast mode changes. The the-
ory for rst order sliding on one sliding surface is well-known but higher
order sliding and sliding on multiple sliding surfaces are not so often
treated in the control literature.
It is shown that higher order sliding is not an unusual phenomena
and a theorem for stability and the dynamics of second orders sliding is
presented. Is is further shown that for a specic system class there can
be no sliding of degree three or higher.
Sliding on multiple sliding surfaces leads in general to ambiguous
dynamics. If the difference in relative switching speeds for two switching
surfaces is large then it is possible to dene unique dynamics.
57
3
Simulation of hybrid
systems
3.1 Introduction
Today, most investigations of hybrid systems are done by simulation. Even
though much research effort is put into the the eld of hybrid systems
theory, this situation is not likely to change in the near future. To support
theory and help the designer of control systems the single most important
tool is, and will be, a good simulation tool. There has always been a strong
link between the control and simulation communities. On one hand sim-
ulation is an extremely important tool for almost every control engineer.
On the other hand, simulation of control systems often pushes the limits
of what simulation packages can do. This is because of the nature of con-
trol systems. The controlled plant could be virtually anything described
by a set of equations, inequalities, difference equations and differential
equations.
In addition to the normal requirements, a simulation tool for hybrid
systems must be able to detect and initialize discrete events. At discrete
instances the continuous states can be reset or the dynamics denitions
can be changed. There are several simulation packages claiming that they
allow for the mixture of continuous variables and discrete variables and
that they thus are suitable for simulating hybrid systems, see Otter and
Cellier (1995). Few of them are, however, strong general purpose simula-
tion tools. Typically, they are customized to a specic problem domain and
have difculties modeling systems from other domains. This is a serious
drawback since systems are becoming more complex and are often com-
posed of parts from many different domains. A manufacturing company
using several subsystems might require that the components come with
58
3.1 Introduction
a simulation model. To be able to interconnect the models it is then re-
quired that they are encoded using the same simulation language or that
the simulation tool is capable of handling several descriptions.
Other hybrid simulators consist of xes applied to continuous time
simulation packages that were not originally built to simulate hybrid sys-
tems. Currently, simulation performance for general hybrid systems is
poor and unpredictable.
Model deciencies
It is not always the case that a mathematical model can be simulated
as it stands. Even just entering a model into a simulation tool is yet one
more layer of approximation. This should be added to the approximations
already done when doing a mathematical model of a physical system and
later to the approximations done during implementation of the control
system. Especially important for hybrid systems is the approximation of
the transitions between the discrete modes. The modeling of switches
can drastically change the result, both in simulation and for the nal
implementation. Most users just plug in their models and study the result.
This method does not work for hybrid systems. There is often no way for a
user to know if the simulation results are to be trusted. Many simulation
results are today wrong by denition. If there is no well-dened solution
how could the result be right?
An important issue for hybrid systems is simulation of so called slid-
ing mode behavior, i.e. when there might be innite many mode switches
in a nite time interval. In this chapter the models are hybrid systems
consisting of continuous time differential equations and relays. The ideal
relay yields ambiguous solutions to discontinuous differential equations
and simulation results are hence difcult or impossible to evaluate. The
relay is a natural way to model discrete switches between otherwise con-
tinuous systems. It can, for example, model switches between different
vectorelds, parameter sets, or controllers.
Fast mode changes
It is a non-trivial problem to detect a possibility for fast mode changes
in certain variables for certain parameter values and certain initial con-
ditions by just looking at the equations. Thus it is difcult for the user
to know that the model is not good enough. On the other hand it is non-
trivial for the simulation program to resolve the problem when it has
detected fast mode changes. Today, the users are warned about algebraic
loops and inconsistencies in the model. It is not considerable overhead to
also check for possible sliding modes.
The problem could be solved by a semi-automatic procedure. The sim-
ulation package detects the problem and prompts the user for additional
59
Chapter 3. Simulation of hybrid systems
modeling information. The most important feature is of course that the
simulation program notices that there is a problem. This can be done
during compilation of the models or, if it is parameter dependent, during
simulation. To further aid the user, the tool should tell what variables
are involved in such fast cycles and suggest new induced modes. The con-
struction of a suitable model for simulations will typically be an iterative
process with interaction between the user and the simulation tool.
Some ideas of how under-modeling of sliding modes can be detected is
presented in Section 3.4.
Implementation
Most of the discussion in this chapter is about simulation, but of course
similar problems are present when it comes to implementation of the
hybrid control systems. In real-time implementation, as in simulation,
there is no ideal switching between the controllers. If the discrete transi-
tions are implemented with relays they can often be approximated with
hysteresis functions in the analysis. If the implementation is done with
transistors a better approximation might be a to approximate the relay
with a smooth function. Also note that nonzero sampling times, quantiza-
tion effects and unmodeled dynamics in actuators and measuring devices
introduce hysteresis-like behavior. It is important to have detailed knowl-
edge about the actual implementation to be able to calculate the sliding
dynamics. Understanding these concepts is of course very important for
the actual implementor of hybrid systems.
EXAMPLE 3.1QUANTIZATION EFFECTS CAN MAKE SMOOTH FUNCTIONS LOOK
LIKE RELAYS
Consider the following system, which is a variant of the system in Exam-
ple 2.1 discussed in Chapter 2.
x = (cos(u) 0.8)x
y = sin(u)
u =
2

arctan((y
re f
y)]). (3.1)
With limited resolution in the measurement of y
re f
y this system is
unstable if the gain
1

is low. With a higher gain the system becomes


stable. This corresponds to choosing the Filippov convex denition or the
Utkin equivalent control denition for the sliding dynamics.
60
3.2 Denitions of solutions to differential equations
3.2 Denitions of solutions to differential equations
The users of simulation programs often have little control over how the
discrete states are changed. It is thus difcult to know if the result is
well-dened by the local models f
1
, . . . , f
n
and the transitions
ij
. Some
simulation tools have sophisticated event handlers but this is not good
enough to detect model deciencies. Important issues are if the model is
detailed enough in terms of
number of local models f
i
well-dened transitions
ij
between the models
These problems are related to the denition of solutions to differential
equations. Depending on the denition of a solution the simulation re-
sult could be incorrect or correct. Possible denitions of solutions to the
differential equation x = f (x, t) could be
1 A function x(t) such that
dx
dt
= f (x, t), almost everywhere. This in-
cludes continuous and discontinuous vectorelds but not chattering.
2 A more general denition such as the one used in Filippov (1988),
Clarke (1990) and Paden and Sastry (1987). There, x(t) is called a
solution to x = f (x, t) if x(t) K| f |(x), a.e., where
K| f |(x)

=
0

N=0
cof (B(x, ) N, t) (3.2)
and
N=0
denotes intersection over all sets N of Lebesgue measure
zero. B(x, ) is a ball in IR
n
with radius , x IR
n
.
EXAMPLE 3.2SLIDING MODES TRANSITION EQUATIONS HAVE TO BE REFINED
Looking back on the examples in the previous chapter it was seen that in
order to get a well-dened solution, additional modes had to be introduced.
To dene the dynamics of these modes, the transition equations were
rened. Solution denition 1 above does not capture sliding behavior. The
solution denition 2 nds the sliding mode as a solution, however, it is
not unique, c.f. Filippov solutions in Chapter 2.
EXAMPLE 3.3A WRONG DEFINITION MIGHT MISS A SOLUTION
Mattsson (1996) has an interesting example that illustrates the problem
with hybrid system modeling. It is a second order example where the
61
Chapter 3. Simulation of hybrid systems
control signal u is a function of the coordinate x.
dx
dt
= u
2
dy
dt
= 1 u
2
, u =
_

_
1 x 0
0 x = 0
1 x < 0
. (3.3)
The given model is not enough to uniquely dene a solution. If the tran-
sition is modeled with a continuous function u

= u

then the system


gets stuck on the line x = 0. If the transition is modeled with a hysteresis
function the trajectories go right through the x = 0 axis.
y
x
Figure 3.1 Phase plane for Example 3.3. The vectoreld is vertical for x = 0 and
horizontal for x ,= 0. The solution x = 0, y = 1 is not considered a solution according
to denition 2.
The subspace x = 0 has Lebesgue measure zero and is hence disregarded
in the denition 2 above, thus the dynamics x = 0, y = 1 is not a solution
according to that denition.
The examples should make it clear that there really is no correct result
to some simulations unless more careful modeling of salient features is
done.
3.3 The simulation problem
What can a state-of-the-art simulation tool do and what more is needed?
The simulation problem can be divided into three main parts: modeling,
compilation and simulation.
62
3.3 The simulation problem
Modeling
The modeling part consists of entering dynamic equations, the static equa-
tions, and connecting them. For a control engineer the natural unit to
work with is the sub-system blocks of a block diagram. Good simulation
tools suited for hybrid systems are object oriented and have support for
building hierarchical models and graphically connecting model blocks.
Many simulation packages only allowentering of differential equations
on explicit form x = f (x, t). Rewriting the physical models this way needs
a modeling expert and is often hard or impossible. Further, this has to
be done globally, i.e. encompass all sub-models, and does not allow for
easy interconnection and reuse of models, see Andersson (1994). A basic
demand from the industry seems to be that it should be possible to use
models of the form E(x, t)
dx
dt
= f (x, t).
Compilation
Before actual simulation, connections between building blocks are made
into equations, and the collection of equations and differential equations
are put together in a differential-algebraic equation, a DAE, on the form
F( x, x, t) = 0. A modern simulation tool does a lot of analysis of this
DAE. The models are checked for mathematical completeness and consis-
tency and structural analysis is done to see if the equations can be split
into smaller sub-problems. The equations are sorted into a sequence of
sub-problems and block lower triangular partition (BLT) decomposes the
problem in computational order see Tarjan (1972).
In the compilation phase it is further decided if the initial conditions
together with the equation F( x, x, t) = 0 have a solution.
Simulation
If the differential-algebraic equation, F( x, x, t) = 0, can be transformed
back to explicit form x = f (x, t) methods like explicit Runge-Kutta can be
used. If not, implicit methods like DASSL, Brenan et al. (1989) or implicit
Runge-Kutta are used.
Hardware in the loop When testing control systems it is very useful
to be able to replace parts of the simulation model with the actual hard-
ware to be used in the real system. This requires that the simulation can
be done in real-time. Another simulation problem in real-time is when
humans are included in the loop, e.g. ight simulators.
Simulation of hybrid systems
Special care has to be taken simulating hybrid systems because of the mix
of continuous and discrete equations. Whatever mathematical description
63
Chapter 3. Simulation of hybrid systems
used in the modeling of a problem this will only be approximated by the
simulation model. One example of a modeling problem is in building a
simulation model from a description such as the Tavernini type hybrid
systems
x = f (x(t), q(t)), x IR
n
q(t) =(x(t), q(t

)), qZ
m
. (3.4)
Such a hybrid system is depicted as an automaton in Figure 3.2. The
f
1
f
2
f
3
f
4
f
5
f
6

23

24
Figure 3.2 Automaton for the model described with Equation 3.4.
transitionfrom node f
2
to f
3
can be a model of several discrete phenomena:
e.g. the vectoreld f
i
can change or there could be a state space jump. The
simulation result could differ a lot if these changes are done continuously
or abruptly i.e. how good is the simulation model approximation of the
mathematical description. Another problem is if both transitions
23
and

24
are enabled simultaneously.
On the other hand it is possible that a hybrid model is derived from
deliberately neglecting fast smooth transitions. In such a case the modeler
does not want to explicitly specify the transitions if their denition do not
alter the simulation result.
Simulation tools
There is not a lot to choose from in terms of good general-purpose simulation packages for hybrid systems. For a good review of simulation tools, see Otter and Cellier (1995). A frequently available simulation tool is SIMULINK. Two well-known object-oriented tools are Omola/OmSim and SHIFT. Recently, several simulation research groups have formed an international cooperation to develop the next-generation simulation tool, Modelica. The project is still in its early stages but looks very promising.

SIMULINK, see MATLAB (1995), has an intuitive, easy-to-use interface. It is available for a broad range of computing platforms and operating systems. A drawback is that SIMULINK does not offer appropriate event-handling mechanisms. This is a problem for control systems in general but really critical for hybrid systems simulation. The SIMULINK providers are aware of the problem and the event handling is getting better with every new version.

Omola/OmSim, see Andersson (1994), is one of the general-purpose tools that can be used for simulating hybrid systems. The simulation environment has features such as hierarchical models, object-oriented modeling, continuous-time and discrete-event models. It has extensive model analysis and error checking. It uses a sophisticated mix of symbolic equation manipulation, solution approximation, zero-finding, and ODE/DAE solvers to accurately perform such mixed simulation. OmSim can handle continuous dynamics given by general DAEs, i.e. differential equations ẋ = f(x, y, t) together with constraints 0 = g(x, y, t).

SHIFT, see Antsaklis (1997), pp. 113-133, is a programming language for describing and simulating dynamic networks of hybrid automata. The functionality of SHIFT is similar to that of Omola/OmSim. The main difference is that Omola/OmSim has statically interconnected objects whereas SHIFT supports evolution of the interconnections. In SHIFT it is thus possible to model dynamically reconfigurable hybrid systems. The primary application of SHIFT has been automatic control of vehicles and highway systems.

Modelica, see Mattsson and Elmqvist (1997), is an international effort to standardize simulation tools. Modelica has a standardized functionality and standardized integration routines. The work started in the continuous-time domain since there is a common mathematical framework in the form of differential-algebraic equations. The first goal was to collect knowledge and construct a unified modeling language. The short-term goal is to be able to simulate DAEs with some discrete-event features. The simulation environment is independent of the physical domain and should be able to model systems from different engineering fields.
3.4 Structural detection of fast mode changes
In a simulation environment such as OmSim, see Andersson (1994), it is possible to do structural analysis of the equations before simulation. Such a structural analysis can be achieved by efficient graph methods. A distinction is then made only between zero and nonzero coefficients. An example of such structural analysis is the block-lower-triangular (BLT) partition of the problem with respect to the variables. This has been used to detect algebraic loops of minimal dimension, see Tarjan (1972) and Duff et al. (1990). Another example is Pantelides' algorithm, see Pantelides (1988) and Mattsson and Söderlind (1993), which is used to determine suitable forms of integration of high-index DAE problems. A BLT partitioning can determine whether the output of a relay structurally influences the input to the same relay. Hence there might exist a fast sliding mode involving the dynamics between the output and the input of the relay. This can be nontrivial to see if the system is part of a much larger problem. Since efficient graph methods that analyze dependencies exist, it is negligible overhead to check for the structural possibility of fast switching.
Before simulation it can be checked whether sliding might occur, and (minimal) sliding loops can be determined. Since structural analysis only gives necessary conditions, it is not guaranteed that switching actually will occur during simulation. The structure analysis is done to determine if fast switching may occur for problems with one or multiple relays.
To get some insight and to illustrate the method a small example with
two states and two relays is analyzed.
EXAMPLE 3.4  STRUCTURAL DETECTION OF FAST MODE CHANGES
Consider the following hybrid system on explicit form

    ẋ_1 = f_1(x_1, u_1)
    ẋ_2 = f_2(u_2)
    y_1 = h_1(x_2)
    y_2 = h_2(x_1)
    u_1 = sgn(y_1)
    u_2 = sgn(y_2).

The actual form of the functions f_i and h_i does not matter in this discussion since only the structural dependencies are analyzed.

Structure Jacobian  The structure Jacobian is built in the following way. The horizontal lines are the equations and differential equations defining relations between the variables. Each variable has a column and if a variable enters into a relation it is marked with a star on that line. After adjoining the equations ẋ_i = d/dt x_i, the corresponding structure Jacobian is given by

             ẋ1   x1   ẋ2   x2   u1   y1   u2   y2
    f1        *    *              *
    f2                  *                   *
    h1                       *         *
    h2             *                             *
    sgn1                           *    *
    sgn2                                     *    *
    x1        *    *
    x2                  *    *

Fast mode switching is possible if there is a loop from the output of a relay to the input of the same relay. The possible sliding motion is confined to the subspace defined by the input equations of the relays active in the loop.
A directed graph description  A directed graph, see Jantzen (1994), of the system is built in the following way: Define one node for each variable x_i, each derivative ẋ_i, each relay input y_i and each relay output u_i. Each equation (row in the structure Jacobian) defines directed arcs between the nodes. For example, the f_1 function defines arcs from u_1 and x_1 to ẋ_1, the output relation h_1 defines an arc from x_2 to y_1, and the sgn_1 function connects the relay input y_1 to the output u_1. The last equations connect ẋ_i to x_i. If higher derivatives are present then there is one more node for each order of differentiation.

Loop detection  In this example there is a loop from the output of relay one, u_1, to the input of the same relay, y_1, of the form u_1 → ẋ_1 → x_1 → y_2 → u_2 → ẋ_2 → x_2 → y_1 → u_1, which is easy to find using a straightforward graph algorithm. There are several graph methods for detecting loops, see Jantzen (1994). The conclusion drawn is that there is a possibility for sliding on the subspace {x : y_1(x) = y_2(x) = 0}.
Relation to Chapter 2
The structural detection method is now applied to some familiar examples. The examples illustrate how first order sliding, higher order sliding and multiple-surface sliding can be detected.

EXAMPLE 3.5  IVHS: STRUCTURAL DETECTION OF THE SLIDING MODE
In Chapter 2 it was seen that in this example there was a first order sliding mode.

    ẋ_1 = cos(u) = f_1(u)
    ẋ_2 = sin(u) = f_2(u)
    y = x_2 = h(x)
    u = sgn(y).

The corresponding structure Jacobian is given by

             ẋ1   x1   ẋ2   x2   u    y
    f1        *                   *
    f2                  *         *
    h                        *         *
    sgn                           *    *
    x1        *    *
    x2                  *    *

There is a loop from the relay output, u, to the relay input, y, of the form u → ẋ_2 → x_2 → y → u, and thus possibly sliding on the set where the relay input function is zero, here x_2 = 0. There is only one differentiated variable in the loop and the possible sliding is thus of order one, i.e. transversal sliding.
EXAMPLE 3.6  IVHS WITH FILTER DYNAMICS
A third equation, modeling filter dynamics, was added and second order sliding was observed for this example.

    ẋ_1 = cos(u) = f_1(x, u)
    ẋ_2 = sin(u) = f_2(x, u)
    ẋ_3 = −a x_3 + x_2 = f_3(x, u)
    y = h(x_3)
    u = sgn(y).

The structure Jacobian is somewhat larger but it is still possible to do the structure analysis by hand.

             ẋ1   x1   ẋ2   x2   ẋ3   x3   u    y
    f1        *                             *
    f2                  *                   *
    f3                       *    *    *
    h                                  *         *
    sgn                                     *    *
    x1        *    *
    x2                  *    *
    x3                            *    *

There is obviously no loop including x_1 and u. The possible sliding loop from the relay output, u, to the relay input, y, is of the form u → ẋ_2 → x_2 → ẋ_3 → x_3 → y → u. The loop involves only two differentiated variables and the possible sliding is here of at least second order. The sliding set is given by {x : x_2 = 0, x_3 = 0}, i.e. y = 0 and ẏ = 0.

The next example illustrates that simultaneous sliding on multiple sliding surfaces can also be detected. If the sliding loops from several relays are interconnected, i.e. share variables, then there can be simultaneous sliding on the intersections of the relay input sets.

EXAMPLE 3.7  TWO-RELAY SYSTEMS
This is the two-relay case from the previous chapter. In this example it was seen that there was possible sliding on two subspaces and also simultaneous sliding on their intersection. Without explicitly specifying the functions, the dynamics were of the form

    ẋ_1 = f_1(u_1, u_2)
    ẋ_2 = f_2(u_1, u_2)
    ẋ_3 = f_3(u_1, u_2)
    y_1 = h_1(x_1)
    y_2 = h_2(x_2)
    u_1 = sgn_1(y_1)
    u_2 = sgn_2(y_2).
The structure Jacobian for this example is

             ẋ1   x1   ẋ2   x2   ẋ3   x3   y1   y2   u1   u2
    f1        *                                       *    *
    f2                  *                             *    *
    f3                            *                   *    *
    h1             *                        *
    h2                       *                   *
    sgn1                                    *         *
    sgn2                                         *         *
    x1        *    *
    x2                  *    *
    x3                            *    *

There is one loop for relay one: u_1 → ẋ_1 → x_1 → y_1 → u_1. A second loop, u_2 → ẋ_2 → x_2 → y_2 → u_2, is detected for relay two. Furthermore, both u_1 and u_2 appear in the f_1 and f_2 equations and hence there are cross-couplings. From this structure analysis it then follows that up to five new modes are possibly needed. They correspond to the sets S_1, S_2 and S_12.
3.5 New modes - new dynamics
The addition of new modes can be done semi-automatically as the simulation tool detects a loop. If the user supplies more detailed equations for the transitions, new modes can be added and the dynamics of the new modes can be calculated. The generation of the necessary new modes is illustrated with an algorithm valid for the two-relay case.

ALGORITHM 3.1  MODE INDUCEMENT ALGORITHM
1. Calculate the sliding sets associated with the relays, S_1 and S_2.
2. Check for intersections, S_12.
3. Generate induced modes for each sliding set and each intersection of sliding sets. For the two-relay case five induced modes are generated.
4. Remove modes that are not attractive, i.e. have vector fields pointing away from the sliding set associated with the mode.

In general, it is difficult to do point 4 in the algorithm above before simulation. Further, the investigation is not necessary at all if the initial conditions are such that the trajectory never enters a sliding set. Perhaps a better way is to generate new modes as they are needed during simulation, as illustrated in the next example.

EXAMPLE 3.8
To illustrate this method, consider the two-relay problem with a vector field pattern as in Figure 2.9. On the set S_2 = {x : x_2 = 0, x_1 ≥ 0} there is a sliding mode not originally modeled. On that set the solution is not well-defined. Figure 3.3 shows the mode addition procedure step by step.

Figure 3.3  New modes are added as they are needed.

To begin with there are four modes in the model, one for each quadrant. Hitting the sliding set S_2 the system starts to change mode very fast. Evaluating the transition equations γ_14 and γ_41 it is seen that both are enabled. A new mode, denoted 14, for the set S_2, together with new transition equations γ_1,14, γ_4,14 and γ_14,2, is defined. This new extended model is now possible to simulate until S_12 is reached. At S_12 there are again fast mode changes and a new mode with new dynamics has to be introduced.

The dynamics in the new modes can be derived from the Filippov and Utkin definitions. Which one to choose depends on the additional modeling information supplied by the user. The Utkin equivalent control dynamics can be determined by substituting the relay equations, for the relays in the detected loop, with the equations y_1(x) = y_2(x) = ẏ_1(x) = ẏ_2(x) = 0 and solving for u (if this is possible, otherwise higher derivatives of y_1(x) and/or y_2(x) must be computed). The resulting u might be non-unique or non-existent. The Filippov convex definition is in general harder to compute. There are 2^k vector fields in the equation

    ẋ = Σ_{u ∈ {−1, 1}^m} α_u f(x, u)                                 (3.5)

and possibly non-unique solutions. The non-uniqueness problem can sometimes be solved by specifying relative speeds for the relays involved in the fast mode change loop.
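For an affine system the Utkin construction above amounts to solving a linear system of equations for the equivalent control. The following numeric sketch (an illustration only; the matrices a, B and H are made-up numbers, not taken from the thesis examples) performs the computation ẏ = H(a + Bu) = 0 on the intersection of two switching surfaces.

import numpy as np

def equivalent_control(a, B, H):
    """Solve H @ (a + B u) = 0 for u (assumes H @ B is invertible)."""
    return np.linalg.solve(H @ B, -(H @ a))

def sliding_dynamics(a, B, H):
    """Vector field on the sliding set with the equivalent control inserted."""
    u_eq = equivalent_control(a, B, H)
    return a + B @ u_eq, u_eq

a = np.array([0.5, -0.3, 0.2])            # drift term at the current point (assumed)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, -0.5]])               # relay input matrix (assumed)
H = np.array([[1.0, 0.0, 0.0],            # y1 = x1
              [0.0, 1.0, 0.0]])           # y2 = x2

xdot, u_eq = sliding_dynamics(a, B, H)
print("u_eq =", u_eq)                     # only a valid sliding mode if each |u_eq_i| <= 1
print("sliding vector field =", xdot)     # x1 and x2 stay on the surfaces, x3 keeps evolving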
If the simulation problem has an implicit DAE formulation then more
work is required to derive a solution to the sliding dynamics.
3.6 Summary
This chapter illustrates some of the problems present when simulating hybrid systems. The definition of what constitutes a solution to a differential equation has to be considered in the analysis of the simulation models.
A method based on structural analysis for determining the possibility of fast mode changes was sketched. The structural detection method was applied to some examples from Chapter 2.
New modes could be added to the simulation models semi-automatically, and the dynamics in the new modes can be defined using additional modeling information.
The construction of suitable models for simulation will typically be an iterative process with interaction between the user and the simulation tool.
4
Hybrid control system design
4.1 Introduction
As discussed in earlier chapters there can be several reasons to use a hybrid controller. Many hybrid controllers give very good performance but they are difficult to analyze. This chapter begins with a discussion about how Lyapunov theory can be extended to cover hybrid systems as well. An example of a natural extension that fails is presented. The main part of the chapter presents a design method for hybrid control systems that guarantees stability. The method is based on local controllers and non-smooth Lyapunov functions. Some advantages with this method are that

- It is possible to enlarge the stability region and a larger system class can be stabilized.
- Complicated control problems can be divided into local tasks.
- Local modeling and local controllers simplify design.
4.2 Stability
The step from one-controller systems to systems where a set of controllers can act on the process is a large one. It is a lot more complicated to prove stability for the general hybrid control system. The standard Lyapunov theory can be extended and in some cases be used to prove stability. This section contains some modifications to Lyapunov theory. Some intuitive modifications are not correct for fast mode switching systems.
Lyapunov functions
Lyapunov's theorem for one-controller systems is restated here for convenience. The notation follows Khalil (1992). Consider the system

    ẋ = f(x, u, t).                                                   (4.1)

It is of fundamental interest to know if the system described by Equation 4.1 is stable when a specific control law u(x, t) is applied. Lyapunov theory is a powerful tool for this. The main idea is the following: find a candidate Lyapunov function V(x, t) and prove that the time derivative V̇ is negative definite.

THEOREM 4.1  LYAPUNOV STABILITY
The differential equation

    ẋ(t) = f(t, x(t)),    x ∈ R^n,  t ∈ R,                            (4.2)

where f is Lipschitz continuous, is uniformly exponentially stable if there exists a continuously differentiable function V(t, x) and positive numbers α_1, α_2, α_3 such that

    α_1 |x|^2 < V(t, x) < α_2 |x|^2
    ∂V/∂t + (∂V/∂x) f(t, x) ≤ −α_3 |x|^2                              (4.3)

for all x and t.

This powerful theorem works fine when only one controller acts on the process. Theorem 4.1 is not easily generalized to the hybrid control system with several control laws. A fairly intuitive idea, to gradually go from one stabilizing controller to another, does not work. Equation 4.3 does not hold for linear combinations of control laws, as shown in Example 4.1, even though there is a Lyapunov function for each separate controller.

EXAMPLE 4.1  LINEAR COMBINATION OF CONTROL LAWS
The triple integrator system

    d/dt [x_1; x_2; x_3] = [0 1 0; 0 0 1; 0 0 0] [x_1; x_2; x_3] + [0; 0; 1] u
    y = x_1
    u_1 = −3x_1 − 2x_2 − 2x_3
    u_2 = −0.37x_1 − 0.67x_2 − 0.67x_3,                               (4.4)

is stable if controlled by either controller u_1 or controller u_2. A linear combination u = (u_1 + 3u_2)/4 will however lead to an unstable system.
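The claim is easy to check numerically. The sketch below (not thesis code) computes the closed-loop eigenvalues for the two individual feedbacks and for the fixed combination u = (u_1 + 3u_2)/4; the control laws are read as u = −Kx with the gain vectors above, which matches the sign reconstruction used in the example.

import numpy as np

A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
B = np.array([[0.], [0.], [1.]])

K1 = np.array([[3.0, 2.0, 2.0]])
K2 = np.array([[0.37, 0.67, 0.67]])
K_mix = (K1 + 3.0 * K2) / 4.0

for name, K in [("u1", K1), ("u2", K2), ("(u1+3*u2)/4", K_mix)]:
    eigs = np.linalg.eigvals(A - B @ K)
    print(f"{name:>12}: max Re(lambda) = {max(eigs.real):+.3f}")
# u1 and u2 give strictly negative real parts; the combination gives a
# positive one, i.e. an unstable closed loop.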
Non-smooth Lyapunov functions
The Lyapunov theory will now be extended to cater for multi-controller systems. To begin with, the differentiability condition is relaxed.
A non-smooth Lyapunov function is a Lyapunov function where the constraint V̇(x, t) < 0 is replaced with

    V(x(t_2), t_2) < V(x(t_1), t_1),   if t_2 > t_1.

Differentiability of V(x, t) is not required. All that is needed is that V(x(t), t) is decreasing in t.
In Shevitz and Paden (1994) the basic Lyapunov stability theorems are extended to the non-smooth case using Filippov solutions and Clarke's generalized gradient. Similar ideas in Paden and Sastry (1987) allow the authors to use a slightly modified Lyapunov theory to prove stability for variable structure systems in a nice way.

Further generalizations
Lately there have been several attempts to generalize Lyapunov's stability results to multi-controller systems. A common way of doing it is to define a non-smooth Lyapunov function and then have a switching method that makes the non-smooth Lyapunov function decrease. Certain natural conjectures for a Lyapunov stability theorem are not valid in the presence of sliding modes. In a later section it will be shown how additional requirements on the hybrid controller can guarantee stability also in the presence of infinitely fast switching.

Figure 4.1  The state space Ω is divided into regions Ω_i. The regions can be overlapping and Ω = ∪_i Ω_i.
CONJECTURE 4.1  HYBRID LYAPUNOV STABILITY (WRONG)
Several control laws u_i = c_i(x, t) are used to control a plant ẋ = f(x, u, t). Each controller c_i can be used in a region Ω_i. The whole state space is the union of the regions Ω_i, i.e. Ω = ∪_i Ω_i, where the regions Ω_i are allowed to overlap. Associated with each controller-region pair (c_i, Ω_i) is a function D_i(x, t) such that

    ∂D_i/∂t + (∂D_i/∂x) f(x, u_i, t) ≤ 0

if controller c_i is used. A Lyapunov function candidate D is now built by patching together local D_i functions. The conjecture is that if switching from controller c_i to controller c_j is done only when D_j ≤ D_i, then the global function D is non-increasing and the closed loop system is stable.

As shown in the next example, this conjecture is not true if infinitely fast switching between the controllers is allowed.

EXAMPLE 4.2  A COUNTER EXAMPLE
Consider a simple switching system of the form

    ẋ(t) = A_i x(t),      i(x) = 1 if x_1 ≥ 0,  i(x) = 2 if x_1 < 0,

where the continuous dynamics are given by

    A_1 = [−5  1; −10  1],      A_2 = [−5  −1; 10  1].

Both linear subsystems are exponentially stable and it is straightforward to construct a piecewise quadratic Lyapunov function candidate from locally decreasing functions D_i, where

    D_i(x) = x^T P_i x

and

    P_1 = [2.65  −1.275; −1.275  0.775],      P_2 = [2.65  1.275; 1.275  0.775].

This candidate Lyapunov function fulfills the conditions of Conjecture 4.1 in both regions. Since the candidate function is continuous across the
switching manifold, Conjecture 4.1 suggests that the system should be stable. A simulation of the system, shown in Figure 4.2, shows that the system is unstable. The system reaches the switching manifold x_1 = 0 in finite time, after which the state is forced to an unstable motion on that switching manifold.

Figure 4.2  Simulated system with unstable sliding mode (full line), and level curves of a continuous piecewise quadratic Lyapunov function (dashed line).

The problem leading to instability in this example is fast switching between ẋ = A_1 x and ẋ = A_2 x on the manifold x_1 = 0.

Sliding mode dynamics for Counter example 4.2  In Chapter 2 it was seen that the different definitions of sliding dynamics coincide for affine systems. Either definition can thus be used for calculating the sliding dynamics in this example. The switching manifold is given by C^T x = 0, with C^T = [1  0], and the conditions for sliding give

    ẏ = C^T (αA_1 + (1 − α)A_2) x = [−5   2α − 1] x = 0.

Since this equation has a solution on x_1 = 0 for α = 1/2, sliding can occur. The equivalent dynamics along the sliding manifold are thus

    [ẋ_1; ẋ_2] = ½(A_1 + A_2) [x_1; x_2] = [−5  0; 0  1] [x_1; x_2] = [0; x_2],      (4.5)

which are clearly unstable.
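The instability is easy to reproduce numerically. The following sketch (not from the thesis) checks that both A_1 and A_2 have eigenvalues in the left half plane, while the Filippov average ½(A_1 + A_2), which governs the motion on x_1 = 0, has the unstable eigenvalue +1.

import numpy as np

A1 = np.array([[-5.0,  1.0],
               [-10.0, 1.0]])    # used for x1 >= 0
A2 = np.array([[-5.0, -1.0],
               [ 10.0, 1.0]])    # used for x1 <  0

for name, A in [("A1", A1), ("A2", A2)]:
    print(name, "eigenvalues:", np.linalg.eigvals(A))   # both in the left half plane

# Filippov sliding dynamics on x1 = 0 with alpha = 1/2:
A_slide = 0.5 * (A1 + A2)
print("sliding dynamics matrix:\n", A_slide)            # [[-5, 0], [0, 1]], so x2dot = x2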
Modified stability conditions  What additional conditions are needed in order to guarantee stability in the case of infinitely fast switches? As in the false Conjecture 4.1, associate each controller c_i with a decreasing function D_i. Choose any switching scheme such that a switch from c_i to c_j occurs only when D_j ≤ D_i. Define W as a global non-smooth Lyapunov function by patching together the local D_i functions, W = Σ_i δ_i D_i, where δ_i = 1 if controller c_i is used and 0 otherwise. Using controller c_i gives the equations W = D_i and ẋ = f_i. As long as no switching occurs it is easily seen that

    W(x(t_2)) < W(x(t_1)),   when t_2 > t_1.

The critical part is to study the case when the solution trajectory is on a manifold where D_i = D_j. On this manifold the non-smooth Lyapunov function is W = D_i = D_j and fast mode changes can occur. Further, the dynamics can be written ẋ = α_i f_i + α_j f_j, where α_i and α_j are chosen in order to stay on the manifold. It is necessary to show that W(x) is decreasing, i.e.

    ∇D_i · ẋ = ∇D_i · (α_i f_i + α_j f_j) < 0.

If all scalar products ∇D_i · f_j < 0, then W(x) is decreasing for any combination of α_i and α_j. This is a sufficient but probably not a necessary condition. The condition can be extended to cater for fast switching between more than two controllers.
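For the counter example above, these extra cross conditions fail, which is consistent with the unstable sliding mode that was observed. The following small numerical check (a sketch, not thesis code) evaluates ∇D_i · f_j at a point on the switching manifold x_1 = 0.

import numpy as np

A1 = np.array([[-5.0, 1.0], [-10.0, 1.0]])
A2 = np.array([[-5.0, -1.0], [10.0, 1.0]])
P1 = np.array([[2.65, -1.275], [-1.275, 0.775]])
P2 = np.array([[2.65, 1.275], [1.275, 0.775]])

x = np.array([0.0, -1.0])                 # a point on the manifold x1 = 0

for (i, P), (j, A) in [((1, P1), (2, A2)), ((2, P2), (1, A1))]:
    val = 2.0 * x @ P @ A @ x             # time derivative of D_i along f_j
    print(f"grad(D{i}) . f{j} = {val:+.2f}")   # both positive: the condition is violated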
4.3 A Lyapunov theory based design method
This section describes a method for designing stable hybrid control systems. The method works with a set of controllers coupled to a set of Lyapunov functions. The emphasis of the design method is to guarantee stability. There will be some indication of what to do to improve performance.

Problem description
The hybrid systems in this section are modeled as systems composed of a collection of continuous-time systems f_i, i = 1, ..., n, described by ordinary differential equations, and a static logic system that switches between the different continuous-time systems. A common practical case is to have one continuous-time process and several controllers

    ẋ = f(x, t, u)
    u = c_i(x, t),    i(t) ∈ {1, ..., n}.                             (4.6)

A system f_i is then the composition of the process and the i:th controller,

    ẋ = f_i(x, t) = f(x, t, c_i(x, t)).
Only the case of complete state information is considered here. If the systems f_i are linear then this is a piecewise linear system. A discussion of such systems of low dimension is found e.g. in Åström (1968). Discrete piecewise linear systems are analyzed in e.g. Sontag (1981).
If it is required that the control signal passed on to the process is smooth, then it can be pre-filtered by a low-pass filter. This filter can be seen as part of the process. Lyapunov functions then have to be constructed for the combined filter-process system.

Key idea
The key idea is to associate each subsystem f_i with a separate Lyapunov function and to construct the switching logic device in such a way that the composite system is stable. This is done by constructing a global Lyapunov function for the composite system. There are several difficulties because the Lyapunov-like functions dealt with do not have smooth derivatives.
Three small examples will illustrate the method. The first example shows the importance of choosing the switch functions properly. Different choices can result in stable or unstable closed loop systems. The second example shows what happens in the case of sliding modes. The third example is the standard experiment of swinging up an inverted pendulum. In Åström and Furuta (1996) it is shown that a nice strategy for doing this is obtained by combining energy control with a stabilizing linear strategy. The results in this section can be used to derive a switching strategy that guarantees global stability.

Combinations of Lyapunov functions
The Lyapunov theorem is first extended and a similar result is proved regarding combinations of several Lyapunov functions. The process is described by the general nonlinear Equation 4.6.
Assume that the System 4.6 and the control law u = c(x, t) are given together with a set of Lyapunov functions V_1, ..., V_n. An interesting question is to find out what kind of combinations of V_1, ..., V_n are Lyapunov functions.

LEMMA 4.1  ONE CONTROLLER, SEVERAL LYAPUNOV FUNCTIONS
Let V_i(x, t) be a set of given non-smooth Lyapunov functions for the system (4.6) when using the controller u = c(x, t). Then the functions

    W_1(x, t) = min_i {V_i(x, t)}
    W_2(x, t) = max_i {V_i(x, t)}                                     (4.7)

are also non-smooth Lyapunov functions.

Proof  The proof follows the ordinary proof of Lyapunov's stability theorem, see e.g. Khalil (1992), except that the time derivative Ẇ is not used. In fact, all that is needed is that W_1(x(t), t) decreases monotonously with t. This is true since

    W_1(x(t_2), t_2) = min_i {V_i(x(t_2), t_2)} < min_i {V_i(x(t_1), t_1)} = W_1(x(t_1), t_1).

The proof for W_2 is similar.

Remark  More involved structures can be used, for example

    W = min(max(V_1, V_2), max(V_3, min(V_4, V_5))).

The function W is a non-smooth Lyapunov function if V_1, ..., V_5 are.

Combining several control laws
Lemma 4.1 is now extended to the case where a set of controllers c_1, ..., c_n act on the system. As mentioned in the introduction there are many different reasons for using several controllers. Some additional reasons are

- The controller c_i stabilizes the system only in a certain domain Ω_i.
- The performance obtained by c_i is only good inside the domain Ω_i.

A new method guaranteeing (global) stability when switching between the controllers will be developed.
The following four assumptions are made:

A1  The Ω_i:s are open sets.
A2  For each pair (c_i, Ω_i) there is a (C^1) Lyapunov function V_i.
A3  The separate Ω_i:s cover the whole state space Ω, i.e. Ω = ∪_{i=1}^{n} Ω_i.
A4  At any forced transition from c_i to c_j the corresponding Lyapunov functions are equal, i.e. V_i(x, t) = V_j(x, t).

A forced transition is a switch from controller c_i to controller c_j because controller c_i is no longer admissible as the system leaves region Ω_i.
Switching strategy
The idea is to use the controller corresponding to the smallest Lyapunov function. A problem that was not present in the one-controller case is the possibility of a sliding mode behavior. This can happen if more than one controller has the same Lyapunov function value and that value is the smallest.

DEFINITION 4.1  THE MIN-SWITCHING STRATEGY
Let f_i(x, t) be the right-hand side of Equation 4.6 when control law c_i is used. Use a control signal u such that

    ẋ = Σ_{i=1}^{n} α_i f_i(x, t),                                    (4.8)

where α_i ≥ 0 satisfies Σ_i α_i = 1 and α_i = 0 if either x ∉ Ω_i or if V_i(x, t) > min_j {V_j(x, t)}. If only one controller achieves the minimum, then α = 1 for that controller and all the other α_i are zero.
It is assumed that the sliding dynamics satisfy Equation 4.8.

Remark  The α_i are in general not unique. The α_i depend on how many controllers achieve the minimum at the same time and also on the definition of sliding mode dynamics used.

Introduce S_ij = V_i − V_j. The set S_ij(x, t) = 0 is called the switching surface associated with the controller pair c_i and c_j. To maintain a sliding motion on the surface S_ij(x, t) = 0 using the min-switch strategy it is necessary that

    (V_i)_t + ∇V_i · f_j < (V_j)_t + ∇V_j · f_j
    (V_j)_t + ∇V_j · f_i < (V_i)_t + ∇V_i · f_i.                      (4.9)

Otherwise the system will leave the surface S_ij(x, t) = 0. Consult Figure 4.3 for the directions of the vector fields. Note that only the case with transversal vector fields is treated here.
A Lyapunov theorem for hybrid control systems
THEOREM 4.2  MINIMUM OF LYAPUNOV FUNCTIONS
Let the system be given by Equation 4.6 and assume that A1-A4 hold. Introduce W as

    W = min(V_1, V_2, ..., V_n).

The closed loop system is stable with W as a non-smooth Lyapunov function if the min-switch strategy in Definition 4.1 is used.
Figure 4.3  The figure shows a phase plane close to the switching surface S_ij(x, t) = V_i(x, t) − V_j(x, t) = 0, with the vector fields f_i and f_j on either side. The control law corresponding to the smallest Lyapunov function is used. Under certain conditions, see Equation 4.9, there will be a sliding motion on the switching surface.

Proof  Assume that V_1(x, t) = min_j {V_j(x, t)} for an open interval in t, and that α_1 > 0. Then in this interval

    d/dt W(x, t) = (V_1)_t + ∇V_1 · ẋ
                 = (V_1)_t + Σ_{i=1}^{n} α_i ∇V_1 · f_i
                 < Σ_{i=1}^{n} α_i ((V_i)_t + ∇V_i · f_i)
                 < 0.

The first inequality follows from the fact that if α_i > 0 then

    (V_1)_t + ∇V_1 · f_i < (V_i)_t + ∇V_i · f_i.

This is a sliding condition associated with the switching surface S_1i = 0. The last inequality follows from the fact that V_i is a Lyapunov function for controller c_i, i = 1, ..., n. If there is no sliding, only one controller is in use; then α_j = 0 for j ≠ 1.

The idea to use the minimum of Lyapunov functions is also used in Ferron (1996) and in Caines and Ortega (1994), where stability for a certain class of fuzzy controllers is analyzed.
Controller design
The design of the hybrid control system is now reduced to designing sub-controllers c_i with Lyapunov functions V_i. The Lyapunov functions as well as the sub-controllers can be of different types. Some design methods, such as LQG, provide a Lyapunov function with the control law. Special care has to be taken when a controller's validity region Ω_i ≠ Ω.

Lyapunov function transformations  The switching method gives stability only. To achieve better performance it can be necessary to change the location of the switching surfaces. This can to some degree be achieved by different transformations. One example is transformations of the form

    Ṽ_i = g_i(V_i),                                                   (4.10)

where the g_i(·) are monotonously increasing functions.
For some controller design methods it is possible to add constraints guiding the switching surfaces to certain regions. This is easier to accomplish if the sub-controllers are stabilizing in all of Ω.

Examples
This section contains three examples where the design method described above is applied.

EXAMPLE 4.3  SWITCHED CONTROL
The first example illustrates the well-known fact that two controllers that stabilize a system cannot be combined arbitrarily to give a stable controller, and it illustrates how the switching strategy defined above works. Assume that two controllers, c_1 and c_2, that give stable closed loop systems are found. The closed loop systems are

    ẋ = [−1  5; 0  −1] x = f_1(x),      ẋ = [−1  0; −5  −1] x = f_2(x).

Ad-hoc switching strategies  Switching between the controllers c_1 and c_2 can result in a stable or an unstable system depending on how the switching is done. For simplicity, define the new coordinates

    z = [cos θ  sin θ; −sin θ  cos θ] x.

Now use controller c_1 when z_1 z_2 ≥ 0 and else controller c_2. Figure 4.4 shows the trajectories for three different ad-hoc switch strategies corresponding to θ = 45°, 30°, 32.8°. The different choices result in a stable system, an unstable system and a system with a limit cycle.
Figure 4.4  Simulation of Example 4.3 with θ-strategies. The stability of the ad-hoc switching strategy depends on θ. The switching strategy defined by the Lyapunov functions in Equation 4.11 corresponds to θ = 45°, which gives a stable system.

Stabilizing switching strategy  To illustrate how the stability theorem works, pick two functions V_1 and V_2 that are Lyapunov functions when applying c_1 and c_2 respectively, for example

    V_1 = x_1^2 + 10x_2^2,      V_2 = 10x_1^2 + x_2^2.                (4.11)

Define

    W(x) = min(V_1(x), V_2(x))                                        (4.12)

and use the min-switching strategy above, i.e. control with c_1 if V_1 < V_2 and else with c_2. Now W(x) is a non-smooth Lyapunov function.
With this choice of Lyapunov functions the switch surface is given by the equation

    S(x) = V_1(x) − V_2(x) = 9x_2^2 − 9x_1^2 = 0.

Hence S consists of the two lines, S_1: x_2 = x_1 and S_2: x_2 = −x_1. An inspection of the vector fields on these lines gives that all trajectories go straight through them and no sliding motion occurs.
The switching strategy given by the Lyapunov functions in Equation 4.11 and Theorem 4.2 is equivalent to the case θ = 45° above. The resulting system is stable, as can be seen in Figure 4.4.
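A direct way to exercise the min-switching strategy is a simple forward-Euler simulation that, at every step, evaluates both Lyapunov functions and applies the closed-loop dynamics corresponding to the smaller one. The sketch below (not thesis code) uses the sign reconstruction of the closed-loop matrices given above, which should be regarded as an assumption.

import numpy as np

A = {1: np.array([[-1.0, 5.0], [0.0, -1.0]]),
     2: np.array([[-1.0, 0.0], [-5.0, -1.0]])}
P = {1: np.diag([1.0, 10.0]),        # V1 = x1^2 + 10*x2^2
     2: np.diag([10.0, 1.0])}        # V2 = 10*x1^2 + x2^2

def simulate(x0, h=1e-3, T=10.0):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / h)):
        i = min((1, 2), key=lambda k: x @ P[k] @ x)   # min-switch rule
        x = x + h * A[i] @ x                          # forward Euler step
    return x

print(simulate([3.0, -2.0]))   # decays towards the origin, as Theorem 4.2 predicts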
EXAMPLE 4.4  SLIDING MOTION
This second example illustrates the case where more than one Lyapunov function achieves the minimum.
Two different control laws u_1 and u_2 are applied to the same system. Each control law gives a stable closed loop system

    ẋ = [−1  1; −1  −1] x = f(x, u_1),      ẋ = [−1  −1; 1  −1] x = f(x, u_2).

Two functions V_1 and V_2 that are Lyapunov functions when applying u_1 and u_2, respectively, are

    V_1 = 2x_1^2 + x_2^2,      V_2 = x_1^2 + 2x_2^2.                  (4.13)

Define

    W(x) = min(V_1(x), V_2(x)),                                       (4.14)

and use the switching strategy from Theorem 4.2, i.e. control with u_2 if V_1 ≥ V_2 and else with u_1. Now W(x) is a non-smooth Lyapunov function. In this example the switching surface consists of the two lines S_1: x_1 = x_2 and S_2: x_1 = −x_2.

Figure 4.5  Simulation of Example 4.4 for several values of initial conditions.

As usual the vector fields f_1 and f_2 are inspected on the surface S(x) = V_1(x) − V_2(x) = x_1^2 − x_2^2 = 0:

    ∇S^T = [2x_1   −2x_2],      f_1 = [−x_1 + x_2; −x_1 − x_2],      f_2 = [−x_1 − x_2; x_1 − x_2].

Along S_1 we have ∇S^T f_1 = 4x_2^2 and ∇S^T f_2 = −4x_2^2. Along S_2 it holds that ∇S^T f_1 = −4x_2^2 and ∇S^T f_2 = 4x_2^2. Thus S_1 is attractive and S_2 is not. After S_1 is reached the rest of the motion will be on S_1. For sample trajectories for different initial conditions, see Figure 4.5.
EXAMPLE 4.5  INVERTED PENDULUM
In this section a hybrid controller for the inverted pendulum is designed with the method above. The inverted pendulum experiment is depicted in Figure 4.6.

Figure 4.6  The inverted pendulum in Example 4.5.

Equations of motion  The dynamics of the inverted pendulum are

    J_p θ̈ − mgl sin θ + mul cos θ = 0,                                (4.15)

where m is the mass, l is the distance from the pivot to the center of mass, g is the gravitational constant and u is the acceleration of the pivot. The angle θ is 0 when the pendulum is in the upright position.
Two different controllers are used. The first controller is an energy controller that will swing up the pendulum. The second controller is a state feedback controller that will hold it in the upright position. The energy controller is stabilizing for all initial conditions, Ω_E = Ω, whereas the feedback controller is stabilizing only within a certain region, Ω_FB. The performance of the feedback controller is better than that of the energy controller in the upright position.
Energy Control  The pendulum energy is

    E = ½ J_p θ̇^2 + mgl(cos θ − 1),                                   (4.16)

where the potential energy has been chosen to give zero energy when the pendulum is stationary in the upright position. The energy controller

    u_E = sat_ng(kE sign(θ̇ cos θ)),                                   (4.17)

where ng is the maximum amplitude of the controller, is described in the paper by Åström and Furuta (1996). The function V_E = γE^2 (where γ is a positive constant) can be used as a Lyapunov function since

    dV_E/dt = 2γE dE/dt = 2γE(J_p θ̇ θ̈ − mgl θ̇ sin θ)
            = −2γE(mul θ̇ cos θ)
            = −2γE(ml θ̇ cos θ) sat_ng(kE sign(θ̇ cos θ)) ≤ 0.           (4.18)

V̇_E is zero only when θ̇ = 0 or θ = ±π/2, and the only such point that is also an equilibrium point of the closed loop system is θ = θ̇ = 0. Hence the closed loop system is globally stable.
One drawback with this energy controller is that it is not very efficient close to the equilibrium point. When the energy is close to 0 the system can still be far from the point (θ = 0, θ̇ = 0). Therefore it is preferred to use this controller only in a domain Ω_1 where its Lyapunov function is large.

Linear Feedback Control  The second controller is a linear feedback controller given by

    u_FB = l_1 θ + l_2 θ̇.

A function of the form

    V_FB(x) = θ^2 + ρθ̇^2

is a Lyapunov function for some values of ρ. Not all values of ρ give Lyapunov functions but there is still some design freedom. Note that V_FB is a Lyapunov function within a specific domain only, Ω_2, in the (θ, θ̇)-plane. The size and shape of the domain Ω_2 depend on the choice of ρ.
Min-control  The control law u_FB is used whenever V_FB < V_E + ε and (θ, θ̇) ∈ Ω_2, otherwise u_E is used. The constant ε is added to assure that the energy controller is not switched back in at the upright position. The constant ε should be in the interval (0, V_FB(∂Ω_FB)), where V_FB(∂Ω_FB) is the value of V_FB on the stability boundary for the linear feedback controller. Figure 4.7 shows a swing-up and catch of the pendulum. It is captured by the linear controller at the point t = 7.34, θ = 6.1, θ̇ = 0.18.

Figure 4.7  Simulation of the pendulum using the switching strategy based on min(V_E, V_FB). Only one switch occurs, at time t = 7.34, from the control law u_E to u_FB. The system is caught and stabilized in the upright position.
4.4 Summary
This chapter introduced a design method for hybrid control systems that gives closed loop stability. The design method is based on local controllers and local Lyapunov functions. A global Lyapunov function W is built from the local Lyapunov functions. A switching strategy is then chosen so as to guarantee that the non-smooth Lyapunov function W is decreasing. The method was applied to three examples and some modifications to avoid chattering were discussed.
Lyapunov theory is not easily extended to hybrid systems. A fairly natural stability conjecture was proven to be wrong.
5
Experiments with hybrid controllers
5.1 Introduction
This chapter describes two experiments with hybrid control systems. To illustrate the theoretical results and to get some experience of practical issues, the simple hybrid controller of the last chapter was implemented. The experiments were performed in two different software environments but with similar basic control algorithms. Some general points about programming languages for control systems will also be discussed.
5.2 A hybrid tank controller
In process control it is common practice to use PI control for steady-state regulation and to use manual control for large set-point changes. In this experiment the aim is to combine the steady-state regulation with a minimum-time controller for the set-point changes.
First the process and the process model are introduced. Then the hybrid controller is motivated, and the design of the sub-controllers is presented. It is shown that the use of this hybrid controller improves performance compared with a pure PID controller. Both good response to set-point changes and good disturbance rejection are achieved.
The Process
The process to be controlled consists of two water tanks in series, see Figure 5.1. The goal is to control the level (x_2) of the lower tank and, indirectly, the level (x_1) of the upper tank.

Figure 5.1  The double-tank process.

The two tank levels are both measurable. Choosing the level of tank i as state x_i, the following state-space description is derived:

    ẋ = f(x, u) = [−α_1 √x_1 + βu;  α_1 √x_1 − α_2 √x_2],             (5.1)

where the inflow u is the control variable. The inflow can never be negative and the maximum flow is u = 27·10^-6 m^3/s. Furthermore, in this experimental setting the tank areas and the outflow areas are the same for both tanks, giving α_1 = α_2. The square root expression is derived from Bernoulli's energy equation.
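For reference, the nonlinear model (5.1) and its linearization (5.2) fit in a few lines of simulation code. The sketch below is not the thesis model code; the numerical values of alpha and beta are assumptions chosen only so that the script runs.

import numpy as np

alpha, beta = 0.02, 0.003          # assumed tank parameters (alpha1 = alpha2 = alpha)

def f(x, u):
    """Nonlinear double-tank dynamics; levels x = (x1, x2), scaled inflow u in [0, 1]."""
    x1, x2 = max(x[0], 0.0), max(x[1], 0.0)
    return np.array([-alpha * np.sqrt(x1) + beta * u,
                      alpha * np.sqrt(x1) - alpha * np.sqrt(x2)])

def linearize(x_ref):
    """Matrices of the linearized model (5.2) around a stationary level x_ref."""
    a = alpha / (2.0 * np.sqrt(x_ref))
    A = np.array([[-a, 0.0], [a, -a]])
    B = np.array([beta, 0.0])
    return A, B

# forward Euler simulation of a constant-inflow response
x, h = np.array([0.05, 0.05]), 0.1
for _ in range(3000):
    x = x + h * f(x, 0.6)
print("levels after 300 s:", x)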
The Controller
As mentioned above, a controller structure with two sub-controllers and a supervisory switching scheme will be used. The time-optimal controller is used when the states are far away from the reference point. Coming closer, the PID controller will automatically be switched in to replace the time-optimal controller. At each different set-point the controller is redesigned, keeping the same structure but using reference-point-dependent parameters. Figure 5.2 describes the algorithm with a Grafcet diagram, see David and Alla (1992). The Grafcet diagram for the tank controller consists of four states. Initially no controller is in use; this is the Init state. Opt is the state where the time-optimal controller is active and PID is the state for the PID controller. The Ref state is an intermediate state used for calculating new controller parameters before switching to a new time-optimal controller.

Figure 5.2  A Grafcet diagram describing the control algorithm.

The sub-controller designs are based on a linearized version of Equation 5.1,

    ẋ = [−a  0; a  −a] x + [b; 0] u,                                  (5.2)

where the parameter b has been scaled so that the new control variable u is in [0, 1]. The parameters a and b are functions of α, β and the reference level around which the linearization is done. It is later shown how the neglected nonlinearities affect the performance. To be able to switch in the PID controller, a fairly accurate knowledge of the parameters is needed.

PID controller design  A standard PID controller of the form

    G_PID = K(1 + 1/(sT_I) + sT_d)

is used. The design of the PID controller parameters K, T_d and T_I is based on the linear second order transfer function

    G(s) = ab/(s + a)^2,

derived from Equation 5.2. Let the desired closed loop characteristic equation be

    (s + αω)(s^2 + 2ζωs + ω^2).

The parameters (α, ω, ζ) = (1.0, 0.06, 0.7) are chosen for a reasonable behavior, both in case of set-point changes and under load disturbances.
Figure 5.3  Pure PID (left) and pure time-optimal control (right). The PID controller has a smooth control signal but gives a slow response with an overshoot. The time-optimal controller gives a fast response but the control signal is unusable in practice.

For some systems it is possible to get a smaller overshoot by set-point weighting. Figure 5.3 (left) shows the set-point and load disturbance responses for the PID controller. When implementing the real-time version of the PID algorithm, a low-pass filter is used on the derivative part.
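The pole-placement calculation behind the PID parameters is a matter of matching polynomial coefficients: the closed loop characteristic polynomial s(s + a)^2 + abK(T_d s^2 + s + 1/T_I) is matched against (s + αω)(s^2 + 2ζωs + ω^2). The sketch below (not thesis code) does exactly that; the plant values a and b are assumptions, while (α, ω, ζ) = (1.0, 0.06, 0.7) are the values quoted above.

def pid_pole_placement(a, b, alpha=1.0, omega=0.06, zeta=0.7):
    c2 = alpha * omega + 2.0 * zeta * omega           # desired s^2 coefficient
    c1 = omega**2 + 2.0 * zeta * alpha * omega**2     # desired s^1 coefficient
    c0 = alpha * omega**3                             # desired s^0 coefficient
    K = (c1 - a**2) / (a * b)
    Td = (c2 - 2.0 * a) / (a * b * K)
    Ti = a * b * K / c0
    return K, Ti, Td

K, Ti, Td = pid_pole_placement(a=0.02, b=0.01)        # a, b are assumed plant values
print(f"K = {K:.2f}, Ti = {Ti:.1f}, Td = {Td:.1f}")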
Time-optimal controller design  The theory of optimal control is well established, see Lewis (1986) and Leitmann (1981). This theory is applied to derive minimum-time strategies to bring the system as fast as possible from one set-point to another. The Pontryagin maximum principle is used to show that the time-optimal control strategy for the System 5.1 is of bang-bang nature. The time-optimal control is the solution to the following optimization problem:

    max J = max ∫_0^T (−1) dt                                         (5.3)

under the constraints

    x(0) = [x_1^0  x_2^0]^T,   x(T) = [x_1^R  x_2^R]^T,   u ∈ [0, 1],

together with the dynamics in Equation 5.1. The Hamiltonian, H(x, u, λ), for this problem is

    H = 1 + λ_1(−a√x_1 + bu) + λ_2(a√x_1 − a√x_2),

with the adjoint equations λ̇ = −∂H/∂x, i.e.

    [λ̇_1; λ̇_2] = [a/(2√x_1)   −a/(2√x_1);  0   a/(2√x_2)] [λ_1; λ_2].    (5.4)

To derive the optimal control signal the complete solution to these equations is not needed. It is sufficient to note that the solutions to the adjoint equations are monotonous. This, together with the switching function from Equation 5.4, i.e. the coefficient of u in H, σ = λ_1 b, gives the optimal control signal sequence that minimizes H(u). The possible optimal control sequences are

    {0},  {1},  {1, 0},  {0, 1}.

For linear systems of order two there can be at most one switch between the maximum and minimum control signal values (it is assumed that the tanks are never empty, x_i > 0).
The switching times are determined by the new and the old set-points. In practice it is preferable to have a feedback loop instead of pre-calculated switching times. Hence an analytical solution for the switching curves is
Chapter 5. Experiments with hybrid controllers
needed. For the linearized equation it is possible to derive the switching
curve
x
2
(x
1
) =
1
a
|(ax
1
b u)(1 ln(
ax
R
1
b u
ax
1
b u
)) b u|,
where u takes values in 0, 1. The time-optimal control signal is u = 0
above the switching curve and u = 1 below, for switching curves see
Figure 5.4.
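The switching curve translates directly into a feedback rule. The sketch below (illustrative only; a and b are assumed values) evaluates the curve through the target equilibrium and applies u = 0 above it and u = 1 below it; which branch (ū = 0 or ū = 1) is the relevant one depends on the direction of the set-point change.

import numpy as np

a, b = 0.02, 0.01                   # assumed linearized parameters

def switching_curve(x1, x1_ref, u_bar):
    """x2 on the curve that reaches the equilibrium (x1_ref, x1_ref) under constant u_bar."""
    s, s_ref = a * x1 - b * u_bar, a * x1_ref - b * u_bar
    return (s * (1.0 + np.log(s_ref / s)) + b * u_bar) / a

def bang_bang(x1, x2, x1_ref, u_bar):
    """u = 0 above the switching curve, u = 1 below it."""
    return 0.0 if x2 > switching_curve(x1, x1_ref, u_bar) else 1.0

# upward set-point change: the last arc uses u_bar = 0, so the u_bar = 0 branch is checked
print(bang_bang(x1=0.06, x2=0.05, x1_ref=0.12, u_bar=0.0))   # prints 1.0: still filling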
The fact that the nonlinear system has the same optimal control sequence as the linearized system makes it possible to simulate the nonlinear switching curves and to compare them with the linear switching curves. Simulation is done in the following way: initialize the state to the value of a desired set-point and simulate backwards in time.
Note that the linear and the nonlinear switching curves are quite close for the double-tank model, see Figure 5.4. The diagonal line is the set of equilibrium points, x_1^R = x_2^R. Figure 5.4 shows that the linear switching curves are always below the nonlinear switching curves. This will cause the time-optimal controller to switch either too late or too soon.

Figure 5.4  Linear (full) and nonlinear (dashed) switching curves for different set-points. Above the switching lines the minimum control signal is applied. Below the lines the maximum control signal is used.

It is not necessary to use the exact nonlinear switching curves, since the time-optimal controller is only used to bring the system close to the new set-point. When sufficiently close, the PID controller takes over.
Stabilizing switching schemes
As seen in Chapter 4, switching between stabilizing controllers may lead to an unstable closed loop system. It is therefore necessary to have a switching scheme that guarantees stability. Consider the system

    ẋ = f(x, t, u_i)
    u_i = c_i(x, t),                                                  (5.5)

where the c_i(x, t) represent different controllers. In a hybrid control system, different controllers are switched in for different regions of the state space or in different operating modes. There exist some switching schemes that guarantee stability. One of these is the min-switch strategy described in Chapter 4. In this application only two controllers are used.

Lyapunov function modifications  From a control designer's point of view, the design of a hybrid control scheme using the min-switching strategy can be reduced to separate designs of n different control laws and their corresponding Lyapunov functions. To improve performance it is often convenient to change the location of the switching surfaces. This can, to some degree, be achieved by different transformations of the Lyapunov functions. One example is transformations of the form

    Ṽ_i = g_i(V_i),

where the g_i(·) are monotonously increasing functions.
In some cases there can be very fast switching, chattering, between two or more controllers having the same value of their respective Lyapunov functions. The hybrid controller is still stabilizing, but this is not desired behavior in a practical implementation. One way to avoid this chattering is to add a constant ε to the Lyapunov functions that are switched out and subtract ε from the Lyapunov functions that are switched in. This works as a hysteresis. For two controllers with Lyapunov functions V_1 and V_2 the equations are Ṽ_1 = V_1 + ε and Ṽ_2 = V_2 − ε if controller two is in use, and Ṽ_1 = V_1 − ε and Ṽ_2 = V_2 + ε if controller one is controlling the process. This guarantees that a controller is used for a time period t > 0 before it is switched out. It is easily shown that the hybrid controller is globally stabilizing with this addition.
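The ε-hysteresis can be written as a few lines of selection logic. The following sketch (not thesis code; V1 and V2 stand for the Lyapunov function values of the two controllers) lowers the value of the active controller by ε and raises the inactive one, so a switch only occurs when the other function is smaller by a clear margin.

def select_controller(V1, V2, active, eps=0.1):
    """Return 1 or 2 given Lyapunov values V1, V2 and the currently active index."""
    v1 = V1 - eps if active == 1 else V1 + eps
    v2 = V2 - eps if active == 2 else V2 + eps
    return 1 if v1 <= v2 else 2

# near a switching surface V1 and V2 are almost equal; without hysteresis the
# selection could chatter, with it the active controller is kept
active = 1
for V1, V2 in [(1.00, 1.01), (1.01, 1.00), (0.98, 1.02), (0.95, 0.70)]:
    active = select_controller(V1, V2, active)
    print(active)        # stays at 1 until V2 < V1 - 2*eps, then switches to 2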
Simulations
In this section some different switching methods are evaluated. In all simulations a switching surface for the time-optimal controller based on the linearized equations is used.
All simulations have been done in the Omola/OmSim environment, see Andersson (1994), which supports the use of hybrid systems.
Pure time-optimal and pure PID control  The first simulation, in Figure 5.3, shows control of a linearized system using either a time-optimal controller or a PID controller. The PID controller is tuned aggressively to give the same rise time as the minimum-time controller. In practice, more conservative tuning must be made, otherwise measurement noise will generate large control actions. Note that PID control gives a large overshoot. The time-optimal controller works fine until the level of the lower tank reaches its new set-point. Then the control signal starts to chatter between its minimum and maximum value.
A simple switching strategy A natural switching strategy would be
to pick the best parts from both PID control and time-optimal control.
One way to accomplish this is to use the time-optimal controller when
far away from the equilibrium point and the PID controller when coming
closer.
Figure 5.5  Simulation of the simple switching strategy. The lower left figure shows the control signal. The time-optimal controller makes one extra min-max switch because of the nonlinearity. Catching regions are shown in the lower right sub-figure.

As a measure of closeness the function V_close is used,

    V_close = [x_1^R − x_1;  x_2^R − x_2]^T P(θ, γ) [x_1^R − x_1;  x_2^R − x_2],

where

    P(θ, γ) = [cos^2 θ + γ sin^2 θ        (1 − γ) sin θ cos θ;
               (1 − γ) sin θ cos θ        sin^2 θ + γ cos^2 θ].

The switching strategy here is to start with the time-optimal controller and then switch to the PID controller when V_close < ε. The size and shape of the catching region may be changed with the θ and γ parameters. The P matrix above gives ellipsoidal catching regions. In this simulation, switching back to the time-optimal controller is not allowed until there is a new reference value. See Figure 5.2 for a graphical description of the algorithm. The simulation results in Figure 5.5 show how the best parts of the sub-controllers are used to give very good performance.
Lyapunov theory based switching  In this third simulation set, the min-switching strategy that guarantees stability for the linearized system is used. The two Lyapunov functions are defined as

    V_PID = [x_1^R − x_1;  x_2^R − x_2;  x_3^R − x_3]^T P(θ, γ) [x_1^R − x_1;  x_2^R − x_2;  x_3^R − x_3]
    V_TO  = time left to reach the new set-point

    P(θ, γ) = [cos^2 θ + γ_1 sin^2 θ       (1 − γ_1) sin θ cos θ      p_13;
               (1 − γ_1) sin θ cos θ       sin^2 θ + γ_2 cos^2 θ      p_23;
               p_13                        p_23                       γ_2].

The state x_3 is the integrator state in the PID controller and x_3^R is its steady-state value. As in the previous simulation set, the parameters θ and γ are used to shape the catching region. The new state x_3 is preset to its value at the new equilibrium point, i.e. x_3^R, any time there is a set-point change. This state is then not updated until after the first switch to PID control. Using this method a similar two-dimensional catching region as in the previous simulation set is constructed. The simulation results are presented in Figure 5.6.
This supervisory scheme may lead to two types of chattering behavior. One is related only to the time-optimal controller and is due to the nonlinearities. The nonlinear switching curve lies above the linear one, see Figure 5.4. That causes the trajectory of the nonlinear system to cross the linear switching curve, and the control signal goes from 0 to 1 somewhat too late or too early.
Figure 5.6  Lyapunov based switching. Similar result as in Figure 5.5.

One way to remove this problem is to introduce a hysteresis when going from minimum to maximum control signal in the time-optimal controller. There can also be chattering between the PID and the time-optimal controller if their corresponding Lyapunov functions have the same value. One solution to this problem is to add and remove the constant ε as discussed in the section on Lyapunov function modifications.
Experiments
The theory and the simulations have also been verified by experiments. For simplicity, only the simple switching strategy of Section 5.2 is implemented. Figure 5.7 shows the results of that experiment with the double tanks.
The measurements from the lab process have a high noise level, as can be seen in Figure 5.7. A first order low-pass filter

    G_f(s) = 1/(s + 1)

is used to eliminate some of it.
Figure 5.7  Lab experiment. The simple switching strategy is used. The noise level is rather high.

To further reduce the impact of the noise, a filter is added to the derivative part of the PID controller in a standard way.
The parameters in the simulation model were chosen to match the parameters of the lab process. It is thus possible to compare the experimental results directly with the simulations. Comparison of Figure 5.5 and Figure 5.7 shows the close correspondence between simulation and experimental results.

Model mismatch  During the experiments it was found that the difference between the linear and the nonlinear switching curves was not so important. It resulted in a few more {min, max} switches before reaching the target area where the PID controller took over. However, a good model of the static gain in the system is needed. If there is a large deviation, it cannot be guaranteed that the equilibrium points are within the catching regions. The catching region is defined as an ellipsoid around the theoretical equilibrium points.

Adaptation  It is not difficult to incorporate some process parameter estimation functions in the code. This would give a good estimate of the static gain.
Points on the switching curves could be stored in a table instead of
using the analytical expressions. If the switch is too early or too late
because of the nonlinearities then this information could be used to update
the switching curve table.
Implementation issues
The traditional way of implementing real-time systems using languages such as C or ADA gives very poor support for algorithms expressed in a state-machine fashion. The need for a convenient way to implement hybrid systems is evident. The programming language must allow the designer to code the controller as a state machine, as a periodic process, or as a combination of both. The latter alternative is particularly useful in the case where the controller is divided into several controllers sharing some common code. A typical example of this is a state feedback controller for a plant where it is not possible to measure all states. To get information about the non-measurable states an observer or a filter can be used. Typically this information is needed by the whole set of controllers, and thus the controller itself can be implemented as a hybrid system, consisting of one global periodic task that handles the filtering of process data, and a set of controllers that can be either active or inactive. Many of the hybrid controllers from Chapter 4 place the same sorts of demands on the programming language.
Implementing complex control algorithms puts high demands on the software environment. Automatic code generation and verification are needed together with advanced debugging and testing facilities.

PAL and Pålsjö  PAL, see Blomdell (1997), is a dedicated control language which supports a mixture of periodic and sequential algorithms. Furthermore, the language supports data types such as polynomials and matrices, which are extensively used in control theory. The implementation of the hybrid controller for the double-tank system in Section 5.2 is written in PAL. PAL has a run-time environment, Pålsjö, see Eker and Blomdell (1996), that is well suited for experiments with hybrid control. Pålsjö was developed to meet the need for a software environment for dynamically configurable embedded control systems. Pålsjö features rapid prototyping, code re-usability, expandability, on-line configurability, portability, and efficiency.
For a more exhaustive description of PAL and Pålsjö, see the licentiate thesis by Eker (1997).
PAL Code  This section presents some of the special features of PAL. The full PAL code can be found in Appendix C. PAL has all the usual constructs of a programming language as well as some special features for implementing control systems. External functions can be imported or defined in PAL.

function ln(r : input real) : real; external "ln";

function Switch(z1 : input real; z3 : input real; ubar : input real) : real;
begin
  result := 1.0/a*((a*z1 - b*ubar)*(1.0 + ln((a*z3 - b*ubar)/(a*z1 - b*ubar))) + b*ubar);
end Switch;

The code in the calculate and update blocks is common for all controller modes. In these blocks, typically, the inputs are read and the input signals are filtered.

calculate
begin
  x1 := (1.0 - c*h)*x1 + c*h*y1;
  x2 := (1.0 - c*h)*x2 + c*h*y2;
  xref := yref*Kc;
end calculate;

update
begin
  Vpid := gamma1*((x1 - xref)*(x1 - xref) + gamma2*(x2 - xref)*(x2 - xref));
  yold := x2;
end update;

The structure seen in the Grafcet diagram is built from steps and transitions:

step Opt;
  activate OptController;
end Opt;

step PID;
  activate PIDController;
end PID;

transition from Opt to PID when OnTarget or Off;

For each step there is an action defining what is actually to be done in that step. The PID controller action is

action PIDController;
  usat : real;
begin
  OnTarget := false;
  e := xref - x2;
  P := K*e;
  D := d1*D + d2*(yold - x2);
  u := P + I + D;
  usat := Sat(umin, umax, u);
  v := u;
  I := I + i1*e + w1*(usat - u);
end PIDController;

Notice the transparency in the description of the continuous and discrete parts. The switching between modes is explicitly given as transitions.
Summary
A hybrid controller for a double-tank system has been designed and implemented. Both simulations and real experiments were presented.
It was shown that a hybrid controller, consisting of a time-optimal controller together with a PID controller, gives very good performance.
The controller is easy to implement. It gives, in one of its forms, guaranteed closed loop stability.
It solves a practical and frequent problem. Many operators go to manual control when handling start-up and set-point changes.
It is fairly easy to combine this method with a tuning experiment that gives a second order model of the system. From the tuning experiments it is easy to automatically generate a PID controller and an approximate time-optimal controller.
The model mismatch was not serious. The nonlinearity leads to some additional {min, max} switches.
5.3 A heating/ventilation problem
The ultimate goal was to implement the fast set-point control algorithms
on an existing control system and solve a real problem. Several indus-
tries were contacted to find suitable test cases that were relevant and not
too complicated. It was found that there were good examples in heating,
ventilation and air conditioning.
The company Diana Control AB was interested in a cooperation and
fortunately they could supply both an interesting problem and a control
system. In a school in Klippan, see Figure 5.8, they use a combined heating
and ventilation system where pre-heated air is blown into the classrooms.
Specifically, the control problem is that they want to have two different
settings for the ventilation air temperature, one during the day and one
during the night. Another related problem is that when the system has
been shut down for some reason it is necessary to start from consider-
ably lower temperatures. The air is heated in a heat-exchanger and the
incoming air has outdoor temperature, which can be very low in Sweden
during the winter.

Figure 5.8  The school in Klippan.
The present PID controller was tuned very conservatively to be able to
handle these cold starts without too large overshoots. This was exactly the
type of problem that the fast set-point hybrid controller was designed for.
A time-optimal controller handling set-point changes and system starts
could be added to the existing PID controller.
The ventilation air temperature control loop Figure 5.9 shows the
ventilation process as it is graphically represented in the Diana control
system. The control signal, u, is the set-point for the opening of the valve.
Hot water flows through the valve to a heat-exchanger and the air is
heated. The output of the system is the air temperature, T, after the
fan. This heated air then goes to the classrooms in the school and the
final temperatures in the classrooms are controlled with radiators on a
separate control system.
Figure 5.9  The air temperature control loop. The water flow to the heat-exchanger
is controlled by a valve via the control signal u. The output is the air temperature
T after the fan.
Multiple modeling and design steps
It was a requirement not to disturb the school kids too much with heavy
oscillations in the temperature. This was not a problem as only the system
identification and the final tests actually affected the kids. The modeling
and controller design problem was solved in the following steps:
identification of a model,
building a simulation model,
design of the hybrid controller,
implementation on a Diana ADP 2000 and testing against a real-
time simulation model,
final testing at the school.
System identification
The identification data was logged on the Diana ADP 2000 running in
manual control mode and then uploaded over the modem connection. Sys-
tem identification data were collected by exciting the system with the
input profile sketched in Figure 5.10.

Figure 5.10  Input profile for the system identification. Valve opening in % as a
function of time, switching between 20% and 30%.

The only measurable state of the
system is the outlet air temperature. During normal use the reference
temperature varies between 17 and 22 degrees centigrade. The goal of
the experiment was also to be able to handle the start-up phase from
considerably lower temperatures. Therefore the transfer function was es-
timated at a much lower temperature as well.
The actual modeling was done by fitting step responses of several pre-
defined low order models to the process data. Then the parameters in the
predefined models were optimized to match the input-output data.
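The fitting procedure is not detailed in the thesis; as a rough illustration, a least-squares fit of a two-pole model of the form later used in (5.6) to a logged step response could look as follows. The data arrays and the initial guess below are placeholders.

# Illustrative sketch (not the tool used in the thesis): least-squares fit of a
# two-pole model b/((s + a1)(s + a2)) to a logged step response. The data
# vectors and the initial parameter guess are placeholders.
import numpy as np
from scipy import signal, optimize

t_data = np.linspace(0.0, 400.0, 200)            # placeholder time vector [s]
y_data = 0.6 * (1.0 - np.exp(-0.01 * t_data))    # placeholder logged response

def step_response(params, t):
    b, a1, a2 = params
    sys = signal.TransferFunction([b], [1.0, a1 + a2, a1 * a2])
    _, y = signal.step(sys, T=t)
    return y

def residuals(params):
    return step_response(params, t_data) - y_data

fit = optimize.least_squares(residuals, x0=[0.03, 0.01, 0.05])
print("estimated (b, a1, a2):", fit.x)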
The identification data was separated into several sets, where each set
covered one set-point. In fact there were two sets for each set-point,
one for raising the temperature and one for lowering the temperature.
There was no significant difference between the models estimated from
raising or lowering the temperature or at different set-points. Basically the only
difference was in the static gain. The identification experiments gave that

$$G(s) = \frac{b}{(s+a_1)(s+a_2)} = \frac{0.03}{(s+0.01)(s+0.05)} \qquad(5.6)$$

was a reasonable model of the system. During the experiments the pa-
rameter b had a variation of 20%. At least part of the variation in this
parameter is due to the outdoor temperature changes. To raise the tem-
perature, more energy needs to be added to the outdoor air if it is colder.
The chosen model is complicated enough to capture the dynamics of
the system and still simple enough so that it is possible to get an analytical
solution to the time-optimal control problem.
A state space representation   There is some freedom in choosing a
state space representation for the transfer function in Equation 5.6. On
the real process only one state, the temperature, is measurable, and to
simplify the filtering to be used later on in the real-time implementation,
the second state was chosen as the derivative of the temperature. A
state-space representation suitable for simulation and implementation is
thus

$$\begin{bmatrix}\dot{x}_1\\ \dot{x}_2\end{bmatrix} = \begin{bmatrix}0 & 1\\ -a_1 a_2 & -(a_1+a_2)\end{bmatrix} x + \begin{bmatrix}0\\ b\end{bmatrix} u, \qquad y = \begin{bmatrix}1 & 0\end{bmatrix} x. \qquad(5.7)$$

The parameters $a_1$, $a_2$ are almost constant and only the gain $b$ contributes
to the process variations as a function of the working conditions.
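For reference, the matrices of (5.7) with the identified parameter values can be written down directly; the short sketch below (an illustration, not thesis code) also converts back to a transfer function as a consistency check.

# Sketch: the state-space form (5.7) with x2 chosen as the derivative of the
# temperature, using the parameter values from the identified model (5.6).
import numpy as np
from scipy import signal

a1, a2, b = 0.01, 0.05, 0.03
A = np.array([[0.0, 1.0],
              [-a1 * a2, -(a1 + a2)]])
B = np.array([[0.0], [b]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Converting back to a transfer function recovers b/((s + a1)(s + a2)).
num, den = signal.ss2tf(A, B, C, D)
print(num, den)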
Controller design
The sub-controllers are again a PID controller and a time-optimal con-
troller. Most of the work done for the double-tank controller could be
re-used.
PID control   The PID controller parameters were derived with a pole
placement design method. The closed loop dynamics were chosen to

$$(s + \alpha\omega)(s^2 + 2\zeta\omega s + \omega^2). \qquad(5.8)$$

The parameters in the closed loop characteristic equation and their cor-
responding PID parameters were

$$[\,\omega\;\; \zeta\;\; \alpha\,] = [\,0.025\;\; 1.0\;\; 1.0\,], \qquad [\,K\;\; T_i\;\; T_d\,] = [\,0.0375\;\; 72\;\; 4.44\,]. \qquad(5.9)$$

The controller parameters were chosen to give a well damped closed loop
system, but with approximately the same speed as the open system. The
controller could be tuned more aggressively than the currently used PID
controller, as the error and the control signal are never large.
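The pole-placement calculation is straightforward for an ideal PID; the sketch below matches the coefficients of the closed-loop characteristic polynomial (5.8) against the model (5.6). It is only an illustration: the actual design very likely includes derivative filtering and similar practical details, so it need not reproduce the exact numbers in (5.9).

# Illustrative sketch: ideal-PID pole placement for model (5.6) against the
# desired characteristic polynomial (5.8). The real design probably includes
# derivative filtering, so the values need not match (5.9) exactly.
a1, a2, b = 0.01, 0.05, 0.03
omega, zeta, alpha = 0.025, 1.0, 1.0

# Desired: s^3 + c2*s^2 + c1*s + c0 = (s + alpha*omega)(s^2 + 2*zeta*omega*s + omega^2)
c2 = omega * (alpha + 2.0 * zeta)
c1 = omega ** 2 * (1.0 + 2.0 * alpha * zeta)
c0 = alpha * omega ** 3

# Closed loop with ideal PID: s^3 + (a1 + a2 + b*K*Td)s^2 + (a1*a2 + b*K)s + b*K/Ti
K = (c1 - a1 * a2) / b
Td = (c2 - a1 - a2) / (b * K)
Ti = b * K / c0
print("K =", K, " Ti =", Ti, " Td =", Td)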
Time-optimal control   The problem of set-point response can be viewed
as a purely deterministic problem: to change the process from one state to
another in the shortest possible time, possibly without any overshoot, subject
to constraints on the control signal. The constraints are typically bounds
on the control signal or its rate of change. This is an optimal control
problem. For a wide class of systems the solution is of bang-bang character,
where the optimal control signal switches between its extreme values.
For problems with a two-dimensional state space the strategy can be
expressed in terms of a switching curve that separates the state space into
the subsets where the control signal assumes either its high or its low value.
This changes the control principle from feed-forward to feedback.
Equilibrium points and the switching curves around them are easily cal-
culated from the state space representation in Equation 5.7.
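One simple way to generate such a switching curve numerically is to integrate the model backwards in time from the target state with the control held at one of its extreme values; the sketch below (illustrative only, with example target and control limit) does that for (5.7).

# Illustrative sketch: points on a time-optimal switching curve for model (5.7),
# obtained by integrating backwards in time from the target with the control
# held at its maximum. Target state and control limit are example values.
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, b = 0.01, 0.05, 0.03
u_max = 1.0
x_target = np.array([20.0, 0.0])     # example target: temperature 20, zero derivative

def backwards(t, x, u):
    dx1 = x[1]
    dx2 = -a1 * a2 * x[0] - (a1 + a2) * x[1] + b * u
    return [-dx1, -dx2]              # reversed time

sol = solve_ivp(backwards, (0.0, 200.0), x_target, args=(u_max,), max_step=1.0)
curve = sol.y                        # columns are (x1, x2) points on the curve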
Simulations
In practical applications it can be an advantage to approximate the switch-
ing curves with simpler functions. To investigate the behavior of such
approximations, the switching curves for System 5.7 were approximated
with five straight lines. As can be seen in Figure 5.11 the overall per-
formance did not deteriorate very much. The main difference from using
the theoretically correct switching strategy is that a few extra min/max
switches are needed. Simulations to test the method's robustness to process
variations were also done. There was no significant decrease in performance
for process parameter variations of 50%.
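The thesis does not list the line segments used; a generic way to build such an approximation from sampled curve points is sketched below. The breakpoints and the placeholder curve shape are for illustration only.

# Illustrative sketch: approximating a sampled switching curve with five
# straight line segments, evaluated by linear interpolation in the controller.
import numpy as np

x1_exact = np.linspace(14.0, 23.0, 200)               # sampled curve, e.g. from
x2_exact = 0.05 * (x1_exact - 20.0) ** 2 - 0.3        # a backward integration (placeholder shape)

breakpoints = np.linspace(14.0, 23.0, 6)              # six breakpoints -> five segments
x2_breaks = np.interp(breakpoints, x1_exact, x2_exact)

def switch_curve_approx(x1):
    """Piecewise-linear approximation of the switching curve."""
    return np.interp(x1, breakpoints, x2_breaks)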
Figure 5.11  Simulation of the ventilation model for the school in Klippan. The
sub-plots show the air temperature T with its reference T_ref and the control signal u.
The affine approximation of the ideal switching curves is seen together with two set-point
change trajectories in the last sub-plot.
Implementation
The hybrid controller was implemented in FORTH on a Diana ADP 2000
and tested against a process simulation using real-time SIMNON, see
Elmqvist et al. (1990).
Diana ADP 2000   The control system, Diana ADP 2000, manufactured
by Diana Control AB, is mainly used in heating-ventilation applications.
The system is flexible, modular, and well suited for testing of new software.
It is possible to install new controller code without having to rebuild the
whole system for input/output handling, logging etc.
Control system hardware   The controller hardware Diana ADP 2000
is modular. One unit can itself have several control loops, and in large
installations several Diana ADP 2000 units can be interconnected in a network.
In the network configuration one of the control units is a master and
the others are slaves. The master unit can be reached over a modem
connection, and from the master it is then possible to communicate with
all the others. Over the telephone line and via a user interface it is
possible for every controller to log data, change parameters, and download
new code.
Control system software   The programming, both of the real-time oper-
ating system and of the application programs, is done in FORTH. Amongst
its good properties are that it is both interactive and compiled. Structure
can be built with the notion of words. Words are executable functions or
procedures that can be arranged in a hierarchical structure. The Diana
ADP 2000 comes with some predefined objects and functions such as
Mathematical functions
Logical functions
Event-handling functions
Network communication functions
I/O-handling functions
Predefined objects: controllers, timers etc.
Alarm functions
Field test
Finally the controller was tested at the school in Klippan. The results are
shown in Figure 5.12. As was expected, there were a few extra min/max
switches due to the approximation of the switching curve, and probably also
due to unmodeled dynamics and nonlinearities. The b parameter was
estimated from steady state values of the air temperature and the control
valve position.
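Since the static gain of (5.6) is b/(a1*a2), such a steady-state estimate amounts to a one-line calculation; the sketch below uses made-up temperature and valve values purely as an illustration.

# Illustrative sketch: estimating b from steady-state data, using that the
# static gain of model (5.6) is b/(a1*a2). The numbers below are examples,
# not measurements from the school.
a1, a2 = 0.01, 0.05
delta_T = 25.0 - 20.0      # change in steady-state air temperature
delta_u = 0.45 - 0.30      # corresponding change in valve position
b_est = a1 * a2 * delta_T / delta_u
print("estimated b:", b_est)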
A similar set-point change for the controller currently in use at the
school showed that it took approximately twice as long to reach the
new set point. However, the main advantage of the proposed method
is that it will not give a large overshoot even if started from a very low
temperature.
Figure 5.12  A 5 degree set-point change using the hybrid controller. Air temper-
ature (left) and control signal (right).
Summary
The simple hybrid controller was tested on a fast set-point problem at
a school. Simulations indicated that the theoretically derived switching
curves could be approximated with simple functions without too much per-
formance deterioration.
Measurements at the school showed that the process variations were
not significant and that the variation in steady state gain could be estimated
from input-output data before or during a set-point change.
A fairly crude model of the system together with approximate switch-
ing curves gave significant improvements for set-point changes. The
speed was doubled without large overshoots.
At this first test, no attempts were made to smooth the control signal.
The extra min/max switches could have been avoided with a hysteresis
function.
6
Concluding remarks
There is currently a large interest in hybrid systems. This is motivated
both by applications and by intellectual curiosity. The problems are chal-
lenging because they lead to models that are a mixture of continuous and
discrete phenomena. Traditionally, the analysis of mixed systems has been
either to discretize the continuous dynamics or continualize the discrete
dynamics. The development of hybrid systems offers hope that complex
systems of mixed discrete and continuous dynamics can be dealt with as
they are. This is very attractive since then more detailed and accurate
models of the real world can be analyzed directly without ad hoc approx-
imations.
Mixing discrete and continuous dynamics can lead to very complex be-
havior. Often the complexity of the problem forces researchers to focus on
one aspect at a time, e.g. modeling, analysis, simulation, implementation
or design. For hybrid systems it may be impossible to achieve complete
deterministic control. The control system designer would perhaps be able
to develop a controller that ensures that all dynamics are driven to some
local region, but he may not be able to specify how the system will ex-
ecute the control action. It may be that the control, except for the final
local control, must be done qualitatively. Some of the present hierarchical
design methods use this philosophy.
Hybrid system problems require a multidisciplinary approach. Map-
ping control problems to automata and using graph theoretical methods to
construct control algorithms can be mixed with more traditional methods
for analysis of control systems. Hybrid system models soon grow out of
hand and must be analyzed with computer tools. A big problem in hy-
brid control is that the analysis and verification tools do not yet match
the complexity of the theory. Simulation is used heavily but it is not reli-
able, because most simulation packages are not designed to handle hybrid
systems.
Thus, the price to pay for using hybrid systems is complexity and
difficult analysis. Is there then something useful in this for a control
engineer that makes it worth the effort? Based on the work I have done
in this thesis I believe that much can be gained. Even though my investigations
are restricted to problems of limited complexity, they give some interesting
views on hybrid control. One example, the simple controller that combines
a fast set-point response with good regulation, has been taken all the way
from conceptual design via analysis to practical implementation.
Design: A systematic methodology for design of hybrid systems is highly
desirable. In this thesis I have developed a methodology where conven-
tional control designs are used in separate regions. The controllers and
the design methodologies used in the different regions can be different as
long as each controller is associated with a Lyapunov function. The global
objectives can then be met by switching between the controllers.
Fast mode changes: Dynamics are not well defined if the control de-
sign methods lead to fast mode changes. The dynamics depend on the
salient features of the implementation of the mode switch. More detailed
information could be needed to get a unique solution. The switching prob-
lem is not well understood. In this thesis I derive a theorem for the
stability of second order switching together with the resulting dynamics.
Another switching problem that I solve is switching with two relays that
work on different time scales.
Simulation: The current simulation packages have problems modeling
and simulating hybrid systems. One of the problems is fast mode changes.
In Chapter 3, I show how possible fast mode changes can be found be-
fore or during simulation. The necessary analysis work is a very small
overhead for a modern simulation tool. The problem of choosing suitable
dynamics and extending the simulation model for such cases can be solved
semi-automatically.
Experiments: To get some experience from practical problems with hy-
brid control the switching strategy is implemented in two different soft-
ware environments. The attempt to add a time-optimal controller to an
existing PID controller on a commercial controller was successful.
To further test the method, approximations of the theoretically derived
controller strategy were used to check if a simpler implementation of the
switching strategy could be used while maintaining good performance.
Approximations of the switching curve with a few straight lines proved
sufficient even in the case of model uncertainties.
Future work
The field of hybrid control is vast and still in its infancy, and researchers
have only investigated small parts of it so far. For hybrid control to be
really useful in practice it is absolutely necessary that simulation and
analysis tools improve enough to support the control engineer.
The main issue as I see it is to find modeling methods that scale up
to large systems while avoiding a combinatorial increase of complexity.
A
Proof of Theorem 2.1
The basic idea is that $S_{xy} = \{x = 0, y = 0\}$ is a locally stable subspace if
the series $\xi_k$ is decreasing, where the $\xi_k$ are the intersections with the $y = 0$
plane for $x \geq 0$, see Figure A.1.
The analysis was inspired by Filippov (1988), pp. 234-238, where a
similar analysis for a two-dimensional case is done. The following analysis
allows additional dynamics through the variables in $z$. The additional
directions $z \in \mathbb{R}^{n-2}$ are not shown in Figure A.1.
Figure A.1  The x-y-plane. The z-directions, $z \in \mathbb{R}^{n-2}$, are omitted for simplicity.
The intersections with the $y = 0$ plane for $x \geq 0$ are denoted $\xi_k$.
Straightforward series expansions of $x(t)$, $y(t)$, $z(t)$, $P(x, y; z)$, $Q(x, y; z)$ and
$R(x, y; z)$ around $(0, 0; z)$ show that a trajectory that starts in $(-\xi, 0; z)$
hits the plane $y = 0$ in $(\xi + A\xi^2 + O(\xi^3),\, 0,\, z + O(\xi))$. Following the
trajectory backwards in time from $(\xi, 0; z)$ shows that the previous in-
tersection with the $y = 0$ plane is at $(-\xi + A\xi^2 + O(\xi^3),\, 0,\, z + O(\xi))$.
In the calculation below it is assumed that a state transformation has
been done that achieves $Q(0, 0; z) = 0$, $\forall z$, so also $Q_z(0, 0; z) = 0$ and
$Q_{zz}(0, 0; z) = 0$. In the following equations all higher order terms are
omitted as soon as they are no longer needed. The dynamics on each side
of the plane $y = 0$ are

$$\dot{x} = P^{\pm}(x, y; z)$$
$$\dot{y} = Q^{\pm}(x, y; z)$$
$$\dot{z} = R^{\pm}(x, y; z). \qquad(\mathrm{A.1})$$
First, series expansions of $x(t, \xi)$, $y(t, \xi)$, $z(t, \xi)$ and $\hat{t}(\xi)$ are made, where
$\hat{t}(\xi)$ is the time to the next intersection with $y = 0$. The indices of the
constants $x_{ij}$ are chosen such that the index $i$ indicates order in $t$ and the
index $j$ indicates order in $\xi$.
$$x = -\xi + t(x_{10} + x_{11}\xi + O(\xi^2)) + t^2(x_{20} + O(\xi)) + O(t^3)$$
$$y = t(y_{11}\xi + y_{12}\xi^2 + O(\xi^3)) + t^2(y_{20} + y_{21}\xi + O(\xi^2)) + t^3(y_{30} + O(\xi))$$
$$z = t(z_{10} + z_{11}\xi + O(\xi^2)) + t^2(z_{20} + O(\xi)) + O(t^3)$$
$$\hat{t} = t_1\xi + t_2\xi^2 + O(\xi^3). \qquad(\mathrm{A.2})$$
Now differentiate with respect to time and identify the coefficients in
terms of partial derivatives. Terms up to second order have been used
for $x$ and $z$. Third order terms are needed for the expansion of $y$. It is
assumed that $t_1$ above is nonzero and thus that $\xi$ and $t$ are of the same
order of magnitude.
$$\dot{x} = x_{10} + x_{11}\xi + 2tx_{20} = P + P_x(-\xi + tx_{10}) + P_y \cdot 0 + P_z\, tz_{10}$$
$$\dot{z} = z_{10} + z_{11}\xi + 2tz_{20} = R + R_x(-\xi + tx_{10}) + R_y \cdot 0 + R_z\, tz_{10}$$
$$\dot{y} = y_{11}\xi + y_{12}\xi^2 + 2ty_{20} + 2ty_{21}\xi + 3t^2y_{30}$$
$$\quad = Q + Q_x\big(-\xi + t(x_{10} + x_{11}\xi + O(\xi^2)) + t^2x_{20}\big) + Q_y(ty_{11}\xi + t^2y_{20})$$
$$\qquad + \tfrac{1}{2}(-\xi + tx_{10})Q_{xx}(-\xi + tx_{10}) + (-\xi + tx_{10})Q_{xy}\cdot 0$$
$$\qquad + (-\xi + tx_{10})Q_{xz}(tz_{10}) + \tfrac{1}{2}\,0\cdot Q_{yy}\cdot 0 + 0\cdot Q_{yz}(tz_{10}). \qquad(\mathrm{A.3})$$
This gives the following equations, where the right hand sides should be
evaluated at $(0, 0^{\pm}; z)$. The $x_{ij}$ constants are

$$1:\;\; x_{10} = P$$
$$\xi:\;\; x_{11} = -P_x$$
$$t:\;\; 2x_{20} = P_x x_{10} + P_z z_{10} = P_x P + P_z R. \qquad(\mathrm{A.4})$$

The $z_{ij}$ constants are

$$1:\;\; z_{10} = R$$
$$\xi:\;\; z_{11} = -R_x$$
$$t:\;\; 2z_{20} = R_x x_{10} + R_z z_{10} = R_x P + R_z R, \qquad(\mathrm{A.5})$$
and finally the $y_{ij}$ constants are

$$1:\;\; 0 = Q$$
$$\xi:\;\; y_{11} = -Q_x$$
$$\xi^2:\;\; y_{12} = \tfrac{1}{2}Q_{xx}$$
$$t:\;\; 2y_{20} = Q_x x_{10} = Q_x P$$
$$t\xi:\;\; 2y_{21} = Q_x x_{11} + Q_y y_{11} - Q_{xx}x_{10} - Q_{xz}z_{10} = -(Q_x P_x + Q_y Q_x + Q_{xx}P + Q_{xz}R)$$
$$t^2:\;\; 3y_{30} = Q_x x_{20} + Q_y y_{20} + \tfrac{1}{2}x_{10}Q_{xx}x_{10} + x_{10}Q_{xz}z_{10}$$
$$\qquad\;\;\; = \tfrac{1}{2}Q_x(P_x P + P_z R) + \tfrac{1}{2}Q_y Q_x P + \tfrac{1}{2}PQ_{xx}P + PQ_{xz}R. \qquad(\mathrm{A.6})$$
Now look for the $\hat{t}$ that solves $y(\hat{t}) = 0$:

$$y(\hat{t}) = \big[t_1\xi + t_2\xi^2 + O(\xi^3)\big]\big[y_{11}\xi + y_{12}\xi^2 + O(\xi^3)\big]$$
$$\quad + \big[t_1\xi + t_2\xi^2 + O(\xi^3)\big]^2\big[y_{20} + y_{21}\xi + O(\xi^2)\big]$$
$$\quad + \big[t_1\xi + t_2\xi^2 + O(\xi^3)\big]^3\big[y_{30} + O(\xi)\big] = 0. \qquad(\mathrm{A.7})$$
This gives a solution for $t_1$ and $t_2$:

$$\xi^2:\;\; t_1 y_{11} + t_1^2 y_{20} = 0$$
$$\xi^3:\;\; t_2 y_{11} + t_1 y_{12} + t_1^2 y_{21} + t_1^3 y_{30} + 2t_1 t_2 y_{20} = 0. \qquad(\mathrm{A.8})$$
Now $t_1$ and $t_2$ can be calculated in terms of $P$, $Q$, $R$ and their partial derivatives:

$$t_1 = -\frac{y_{11}}{y_{20}} = \frac{2Q_x}{Q_x P} = \frac{2}{P}$$

$$t_2 = -\frac{t_1 y_{12} + t_1^2 y_{21} + t_1^3 y_{30}}{y_{11} + 2t_1 y_{20}}$$
$$\;\; = -\frac{1}{Q_x}\Big[\frac{2}{P}\cdot\frac{1}{2}Q_{xx} - \Big(\frac{2}{P}\Big)^2\frac{1}{2}\big(Q_x P_x + Q_y Q_x + Q_{xx}P + Q_{xz}R\big)$$
$$\qquad\quad + \Big(\frac{2}{P}\Big)^3\frac{1}{6}\big(Q_x(P_x P + P_z R) + Q_y Q_x P + PQ_{xx}P + 2PQ_{xz}R\big)\Big]$$
$$\;\; = -\frac{1}{Q_x}\Big[\frac{Q_{xx}}{P} - \frac{2}{P^2}\big(Q_x P_x + Q_y Q_x + Q_{xx}P + Q_{xz}R\big)$$
$$\qquad\quad + \frac{4}{3P^3}\big(Q_x(P_x P + P_z R) + Q_y Q_x P + PQ_{xx}P + 2PQ_{xz}R\big)\Big]. \qquad(\mathrm{A.9})$$
Evaluate $x$ at $\hat{t}$ to find the next intersection:

$$x(\hat{t}) = -\xi + \hat{t}(x_{10} + x_{11}\xi) + \hat{t}^{\,2}x_{20}$$
$$\quad = -\xi + \Big(\frac{2\xi}{P} + t_2\xi^2\Big)(x_{10} + x_{11}\xi) + \Big(\frac{2\xi}{P} + t_2\xi^2\Big)^2 x_{20}$$
$$\quad = -\xi + \Big(\frac{2\xi}{P} + t_2\xi^2\Big)(P - P_x\xi) + \Big(\frac{2\xi}{P} + t_2\xi^2\Big)^2\frac{1}{2}(P_x P + P_z R)$$
$$\quad = \xi + \xi^2\Big(t_2 P + \frac{2P_z R}{P^2}\Big). \qquad(\mathrm{A.10})$$

With the expression for $t_2$ above, this finally results in

$$x(\hat{t}) = \xi + \frac{2\xi^2}{3}\Big[\frac{P_x + Q_y}{P} - \frac{Q_{xx}}{2Q_x} + \frac{(Q_x P_z - PQ_{xz})R}{Q_x P^2}\Big] = \xi + \frac{2\xi^2}{3}A,$$

where the functions in $A$ are evaluated at $(0, 0; z)$ on the side of $y = 0$ that the
trajectory passes through. This can be compared
with Filippov's result in Filippov (1988) for the two-dimensional case, i.e.
without the $z$ dynamics:

$$A = \frac{P_x + Q_y}{P} - \frac{Q_{xx}}{2Q_x}.$$
If the above calculations are repeated with $(\xi, 0; z)$ as a starting point,
then it is seen that the next passage through $y = 0$ is made at the point
$(-\xi + \tfrac{2}{3}A\xi^2,\, 0;\, z)$. This calculation is equivalent to following a trajectory
backwards in time from $(-\xi, 0; z)$ to $(\xi - \tfrac{2}{3}A\xi^2,\, 0;\, z)$. Introduce $A^{+}$ and
$A^{-}$ for the value of $A$ when $A$ is evaluated on each side of $y = 0$,

$$A^{+} = A(0, 0^{+}; z)$$
$$A^{-} = A(0, 0^{-}; z).$$
Finally, to investigate stability, consider

$$\xi \;\overset{f}{\longmapsto}\; \xi + \tfrac{2}{3}A^{-}\xi^2 + O(\xi^3)$$
$$\xi \;\overset{g}{\longmapsto}\; \xi + \tfrac{2}{3}A^{+}\xi^2 + O(\xi^3).$$

The iterative equation for the intersections with $y = 0$, $x \geq 0$, is thus

$$\xi_{k+1} = (g \circ f)(\xi_k) = \xi_k + \tfrac{2}{3}\xi_k^2\,(A^{+} + A^{-}) + O(\xi_k^3).$$

The sliding dynamics on the subspace $S_{xy}$ are defined by

$$\dot{z} = \alpha^{+}R^{+}(0, 0; z) + \alpha^{-}R^{-}(0, 0; z), \qquad(\mathrm{A.11})$$

where $\alpha^{+}$ and $\alpha^{-}$ are uniquely defined by

$$\alpha^{+} + \alpha^{-} = 1. \qquad(\mathrm{A.12})$$

Equation A.11 is a necessary condition for staying on the subspace $S_{xy}$.
B
The chatterbox theorem
The term chatterbox was introduced in Seidman (1992), where he
proves a similar theorem but uses constant vector fields and relays with
time hysteresis. The notation from Chapter 2 is used, where the vector
fields $f_1(x), \ldots, f_4(x)$ used in quadrants $1, \ldots, 4$ are nonlinear functions of $x$.
Relays with hysteresis are used and switching from one vector field to an-
other is done at the boundaries A, B, C and D. For simplicity only the
$x_1$-$x_2$-plane is shown in Figure B.1 and, as before, the rest of the dynam-
ics are collected in $x_3$. To begin with, relay two is the fastest and thus
$\lim_{\varepsilon_1 \to 0} \varepsilon_2/\varepsilon_1 = 0$. The distance traveled using any vector field $f_i$ is propor-
tional to time, which gives $O(t) = O(\varepsilon_2)$.

Figure B.1  The chatterbox in the $x_1$-$x_2$-plane with $\varepsilon_2$ smaller than $\varepsilon_1$. The bound-
aries of the chatterbox are denoted A, B, C and D.
The Filippov convex definition of a solution obtained by combining $f_1$ and $f_4$ is
denoted $F^{C}_{14}$, and a corresponding notation is used for the sliding dynamics
defined by the other pairs of adjacent vector fields. With those prerequisites the
following theorem can be proved.

THEOREM B.1 (THE CHATTERBOX THEOREM)
If relays with different hystereses $\varepsilon_1$ and $\varepsilon_2$ are used and $\varepsilon_1 \to 0$ and
$\varepsilon_2/\varepsilon_1 \to 0$, then the limiting dynamics on $S_{12}$ can be uniquely defined by
calculating the Filippov convex solutions $F^{C}_{14}$, $F^{C}_{23}$ on the plane $x_2 = 0$ and then
taking a convex combination of $F^{C}_{14}$ and $F^{C}_{23}$ such that the dynamics are
$\dot{x} = [0, 0; \dot{x}_3]^T$.
Proof.   Starting at $x_0 = [\varepsilon_1, \varepsilon_2]$ (the $z$ coordinates will be omitted in the
equations) and using Taylor expansions gives the next intersection with
the chatterbox boundary at $x_1$, where

$$x_1 = x_0 + t f_1(x_0) + O(\varepsilon_2^2). \qquad(\mathrm{B.1})$$

The intersection with the boundary C occurs at the time

$$t_1 = -\frac{2\varepsilon_2}{f_{12}} + O(\varepsilon_2^2),$$

where $f_{12} < 0$. On this boundary the relay switches to the vector field $f_4$.
That vector field is used until the boundary A is hit, which occurs after the
additional time

$$t_4 = \frac{2\varepsilon_2}{f_{42}} + O(\varepsilon_2^2),$$

where $f_{42} > 0$. The time between two intersections with the boundary A,
denoted $t_{14}$, is hence

$$t_{14} = t_1 + t_4 = 2\varepsilon_2\Big(-\frac{1}{f_{12}} + \frac{1}{f_{42}}\Big) + O(\varepsilon_2^2). \qquad(\mathrm{B.2})$$
The distance traveled in the $x_1$ direction during $t_{14}$ is

$$t_1 f_{11} + t_4 f_{41} = 2\varepsilon_2\Big(-\frac{f_{11}}{f_{12}} + \frac{f_{41}}{f_{42}}\Big) + O(\varepsilon_2^2).$$

The boundary B is hit after a number of $f_1$-$f_4$ cycles. The number of such
cycles, $n_{14}$, satisfies

$$\frac{2\varepsilon_1}{2\varepsilon_2\Big(\frac{f_{11}}{f_{12}} - \frac{f_{41}}{f_{42}}\Big) + O(\varepsilon_2^2)} - 1 \;\leq\; n_{14} \;\leq\; \frac{2\varepsilon_1}{2\varepsilon_2\Big(\frac{f_{11}}{f_{12}} - \frac{f_{41}}{f_{42}}\Big) + O(\varepsilon_2^2)} + 1. \qquad(\mathrm{B.3})$$
The velocity $\dot{x}_1$ in the $x_1$ direction is hence within the interval

$$\frac{-2\varepsilon_1}{(n_{14} - 1)\,t_{14}} \;\leq\; \dot{x}_1 \;\leq\; \frac{-2\varepsilon_1}{(n_{14} + 1)\,t_{14}}. \qquad(\mathrm{B.4})$$

From Equations B.2-B.4 the upper and lower bounds on $\dot{x}_1$ become

$$\frac{-2\varepsilon_1}{(n_{14} \mp 1)\,t_{14}} = \frac{f_{11}f_{42} - f_{12}f_{41}}{f_{42} - f_{12}}\,\big(1 + O(\varepsilon_2)\big) + O(\varepsilon_2/\varepsilon_1).$$
Compare this with the Filippov convex combination $F^{C}_{14}$, where $F^{C}_{14,1}$ de-
notes the velocity in the $x_1$ direction:

$$F^{C}_{14} = \alpha f_1 + (1 - \alpha)f_4, \qquad \alpha = \frac{f_{42}}{f_{42} - f_{12}},$$
$$F^{C}_{14,1} = \frac{f_{11}f_{42} - f_{12}f_{41}}{f_{42} - f_{12}}. \qquad(\mathrm{B.5})$$

This shows that the velocity $\dot{x}_1$ in the negative direction can be written as

$$\dot{x}_1^{-} = F^{C}_{14,1} + O(\varepsilon_2) + O(\varepsilon_2/\varepsilon_1). \qquad(\mathrm{B.6})$$

A similar calculation using $f_2$ and $f_3$ for $x_1 < 0$ gives

$$\dot{x}_1^{+} = F^{C}_{23,1} + O(\varepsilon_2) + O(\varepsilon_2/\varepsilon_1). \qquad(\mathrm{B.7})$$
Now, moving back and forth in the $x_1$ direction leads to the equations

$$t_{14} = \frac{2\varepsilon_1}{F^{C}_{14,1}}, \qquad t_{23} = \frac{2\varepsilon_1}{F^{C}_{23,1}}. \qquad(\mathrm{B.8})$$

The resulting dynamics in the $x_3$ directions can be written as

$$\dot{x}_3 = \frac{t_{14}}{t_{14} + t_{23}}\,\dot{x}_3^{-} + \frac{t_{23}}{t_{14} + t_{23}}\,\dot{x}_3^{+} + O(\varepsilon_1) \qquad(\mathrm{B.9})$$
$$\dot{x}_3 = \dot{x}^{C}_{1423,3} + O(\varepsilon_2) + O(\varepsilon_2/\varepsilon_1) + O(\varepsilon_1). \qquad(\mathrm{B.10})$$
Letting $\varepsilon_1$ and $\varepsilon_2/\varepsilon_1$ go to zero gives

$$\lim_{\varepsilon_1 \to 0}\;\lim_{\varepsilon_2/\varepsilon_1 \to 0}\; \dot{x}_3 = \dot{x}^{C}_{1423,3}, \qquad(\mathrm{B.11})$$

i.e. the solution obtained by taking the Filippov convex combination solu-
tion first over the $(x_2 = 0)$-plane and then over the $(x_1 = 0)$-plane.
If the order is reversed, i.e. letting $\varepsilon_2 \to 0$ and $\varepsilon_1/\varepsilon_2 \to 0$, then the
resulting dynamics are $\dot{x}^{C}_{1234,3}$. In general $\dot{x}^{C}_{1234,3} \neq \dot{x}^{C}_{1423,3}$.
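As a purely numerical illustration of this order dependence, the sketch below picks four example constant vector fields (my own choice, not from the thesis) and computes the two iterated Filippov combinations; the resulting x3 velocities differ.

# Illustrative sketch: the iterated Filippov combinations of four example
# vector fields (chosen here for illustration), showing that the two limit
# orders in Theorem B.1 give different x3 dynamics.
import numpy as np

f1 = np.array([-1.0, -2.0, 1.0])   # quadrant 1: x1 > 0, x2 > 0
f2 = np.array([ 1.0, -1.0, 2.0])   # quadrant 2: x1 < 0, x2 > 0
f3 = np.array([ 2.0,  1.0, 3.0])   # quadrant 3: x1 < 0, x2 < 0
f4 = np.array([-2.0,  1.0, 4.0])   # quadrant 4: x1 > 0, x2 < 0

def filippov(fa, fb, comp):
    """Convex combination of fa and fb that zeroes the given component."""
    alpha = fb[comp] / (fb[comp] - fa[comp])
    return alpha * fa + (1.0 - alpha) * fb

# epsilon_2/epsilon_1 -> 0: first slide on x2 = 0, then on x1 = 0.
x_1423 = filippov(filippov(f1, f4, 1), filippov(f2, f3, 1), 0)
# epsilon_1/epsilon_2 -> 0: first slide on x1 = 0, then on x2 = 0.
x_1234 = filippov(filippov(f1, f2, 0), filippov(f3, f4, 0), 1)

print(x_1423[2], x_1234[2])        # approximately 2.74 versus 2.70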
C
Pal code for the double-tank
experiment
The full PAL code for the hybrid controller used for level control in the
double-tank experiment is presented in this section. The controller is writ-
ten as a Grafcet with four states, see Figure 5.2.
module regul;

function ln(r : input real) : real; external "ln";
function sqrt(r : input real) : real; external "sqrt";

block Tank
  y1, y2 : input real;
  OnTarget := false, NewRef := false, Off := false : boolean;
  h : sampling interval;
  a := 1.0, b := 1.0, yref := 0.50 : real;
  K := 5.0, Ti := 60.0, Tr := 60.0, Td := 10.0 : real;
  gamma1 := 1.0, gamma2 := 1.0, Vpid := 100.0 : real;
  e := 0.0, yold := 0.0, P := 0.0, I := 0.0, D := 0.0 : real;
  d1 := 1.0, d2 := 1.0, i1 := 1.0, w1 := 1.0 : real;
  umin := 0.0, umax := 1.0 : real;
  x1 := 5.0, x2 := 5.0, xref := 5.0, u := 0.0 : real;
  Ku := 0.000027, Kc := 5.0, region := 0.1 : parameter real;
  omega := 0.04, zeta := 0.7, alpha := 1.0, N := 10.0 : parameter real;
  c := 2.0, aa := 0.00000707, AA := 0.00273 : parameter real;
  v := 0.0 : output real;

  function Switch(z1 : input real; z3 : input real; ubar : input real) : real;
  begin
    result := 1.0/a*((a*z1 - b*ubar)*(1.0 + ln((a*z3 - b*ubar)/(a*z1 - b*ubar))) + b*ubar);
  end Switch;

  function Sat(min : input real; max : input real; x : input real) : real;
  begin
    if x < min then
      result := min;
    elsif x > max then
      result := max;
    else
      result := x;
    end if;
  end Sat;

  calculate
  begin
    x1 := (1.0 - c*h)*x1 + c*h*y1;
    x2 := (1.0 - c*h)*x2 + c*h*y2;
    xref := yref*Kc;
  end calculate;

  update
  begin
    Vpid := gamma1*((x1 - xref)*(x1 - xref)) + gamma2*((x2 - xref)*(x2 - xref));
    yold := x2;
  end update;

  initial step Init;
    activate OffController;
  end Init;

  step Ref;
    activate NewControllers;
  end Ref;

  step Opt;
    activate OptController;
  end Opt;

  step PID;
    activate PIDController;
  end PID;

  transition from Init to Ref when NewRef;
  transition from Ref to Opt when not NewRef;
  transition from PID to Init when Off;
  transition from Opt to PID when OnTarget or Off;
  transition from PID to Ref when NewRef;
  transition from Opt to Ref when NewRef;

  action OffController;
  begin
    v := 0.0;
    Vpid := 100.0;
    Off := false;
  end OffController;

  action NewControllers;
  begin
    a := aa/AA*sqrt(9.81/2.0/xref);
    b := Ku/AA*Kc;
    K := (omega*omega*(1.0 + 2.0*alpha*zeta) - a*a)/b/a;
    Ti := b*K*a/alpha/omega/omega/omega;
    Td := 1.0/a/b/K*(omega*(alpha + 2.0*zeta) - 2.0*a);
    d1 := Td/(Td + N*h);
    d2 := K*N*d1;
    i1 := h*K/Ti;
    w1 := h/Tr;
    NewRef := false;
  end NewControllers;

  action OptController;
  begin
    if Vpid < region then
      OnTarget := true;
      e := xref - x2;
      P := K*e;
      D := d1*D + d2*(yold - x2);
      I := aa/AA*sqrt(2.0*9.81*xref*Kc)/b;
    end if;
    u := 0.0;
    if x1 >= xref and x2 < Switch(x1, xref, umin) then
      u := umax;
    end if;
    if x1 < xref and x2 < Switch(x1, xref, umax) then
      u := umax;
    end if;
    v := u;
  end OptController;

  action PIDController;
    usat : real;
  begin
    OnTarget := false;
    e := xref - x2;
    P := K*e;
    D := d1*D + d2*(yold - x2);
    u := P + I + D;
    usat := Sat(umin, umax, u);
    v := u;
    I := I + i1*e + w1*(usat - u);
  end PIDController;

  procedure Ref05();
  begin
    yref := 0.05;
    NewRef := true;
  end Ref05;

  procedure Ref15();
  begin
    yref := 0.11;
    NewRef := true;
  end Ref15;

  procedure Stop();
  begin
    Off := true;
  end Stop;

end Tank;
end regul.
Bibliography
Alur, R., C. Courcoubetis, T. A. Henzinger, and P.-H. Ho (1993): Hybrid
automata: An algorithmic approach to the specication and verication
of hybrid systems. Technical Report TR93-1343. Cornell University,
Computer Science Department.
Alur, R. and D. L. Dill (1994): A theory of timed automata. Theoretical
Computer Science, 126:2, pp. 183-235.
Andersson, M. (1994): Object-Oriented Modeling and Simulation of Hy-
brid Systems. PhD thesis ISRN LUTFD2/TFRT--1043--SE.
Antsaklis, P. J. (1997): Hybrid systems IV: Papers related to the fourth
international conference on hybrid systems, held in Ithaca, N.Y.,
October 12-14, 1996.
Antsaklis, P. J., K. M. Passino, and S. J. Wang (1991): An introduction to
autonomous control systems. IEEE Control Systems Magazine, 11:4,
pp. 513.
Åström, K. (1968): Lectures on Nonlinear Systems, chapter 3. In Swedish.
Åström, K. and K. Furuta (1996): Swinging up a pendulum by energy
control. In IFAC World Congress. San Francisco.
Åström, K. J. (1992): Autonomous control. In Bensoussan and Verjus,
Eds., Future Tendencies in Computer Science, Control and Applied
Mathematics, vol. 653 of Lecture Notes in Computer Science, pp. 267-
278. Springer-Verlag.
Åström, K. J. and K.-E. Årzén (1993): Expert control. In Antsaklis and
Passino, Eds., An Introduction to Intelligent and Autonomous Control,
pp. 163-189. Kluwer Academic Publishers.
Åström, K. J. and T. Hägglund (1995): PID Controllers: Theory, Design,
and Tuning, second edition. Instrument Society of America, Research
Triangle Park, NC.
Åström, K. J. and B. Wittenmark (1995): Adaptive Control, second edition.
Addison-Wesley, Reading, Massachusetts.
Blomdell, A. (1997): The Pålsjö algorithm language. Master's thesis,
Department of Automatic Control, Lund Institute of Technology.
Bode, H. (1945): Network Analysis and Feedback Amplier Design. D.
van Nostrand, New York.
Branicky, M. (1995): Studies in Hybrid Systems: Modeling, Analysis, and
Control. PhD thesis, LIDS, Massachusetts Institute of Technology.
Brenan, K. E., S. L. Campbell, and L. R. Petzold (1989): Numerical
Solution of Initial-Value Problems in Differential-Algebraic Equations.
North-Holland, New York.
Brockett, R. W. (1983): Asymptotic stability and feedback stabiliza-
tion. In Brockett et al., Eds., Differential Geometric Control Theory,
pp. 181191.
Brockett, R. W. (1990): Formal languages for motion description and map
making. In Brocket, Ed., Robotics. American Math. Soc., Providence,
Rhode Island.
Brockett, R. W. (1993): Hybrid models for motion control systems. In
Trentelman and Willems, Eds., Essays on Control: Perspectives in
the Theory and its Application. Birkhäuser.
Brooks, R. A. (1990): Elephants dont play chess. In Maes, Ed., Designing
Autonomous Agents. MIT Press, Cambridge, Massachusetts.
Caines, P. and R. Ortega (1994): The semi-lattice of piecewise constant
controls for non-linear systems: A possible foundation for fuzzy con-
trol. In NOLCOS 95. Tahoe, CA.
Caines, P. and Y.-J. Wei (1995): Hierarchical hybrid control systems. In
Block Island Workshop. Rhode Island.
Cassandras, C. G. (1993): Discrete Event Systems: Modeling and Perfor-
mance Systems. Irwin.
Clarke, F. H. (1990): Optimization and Nonsmooth Analysis. SIAM.
David, R. and H. Alla (1992): Petri Nets and Grafcet: Tools for modelling
discrete events systems. Prentice-Hall.
Daws, C., A. Olivero, S. Tripakis, and S. Yovine (1996): The tool KRONOS.
In Alur et al., Eds., Hybrid Systems III, vol. 1066 of Lecture Notes in
Computer Science, pp. 208219. Springer-Verlag.
Duff, I. S., A. M. Erisman, and J. K. Reid (1990): Direct methods for
sparse matrices. Clarendon Press, Oxford.
Eker, J. (1997): A Framework for Dynamically Configurable Embedded
Controllers. Lic Tech thesis ISRN LUTFD2/TFRT--3218--SE.
Eker, J. and A. Blomdell (1996): A structured interactive approach to
embedded control. In 4th Intelligent Robotic Systems, pp. 191-197.
Lisbon, Portugal.
Elmqvist, H., K. J. Åström, T. Schönthal, and B. Wittenmark (1990):
Simnon User's Guide. SSPA, Göteborg, Sweden.
Emelyanov, S. V. (1967): Variable Structure Control Systems. Oldenburger
Verlag, Munich, FRG.
Ezzine, J. and A. H. Haddad (1989): Controllability and observability of
hybrid systems. Int. J. Control, 49:6, pp. 2045-2055.
Ferron, E. (1996): Quadratic stabilizability of switched systems via
state and output feedback. Technical Report CICS-P-468. Center for
intelligent control systems, MIT, Cambridge, Massachusetts, 02139,
U.S.A.
Feuer, A., G. C. Goodwin, and M. Salgado (1997): Potential benefits of
hybrid control for linear time invariant plants. In Proceedings of the 1997
American Control Conference. Albuquerque, New Mexico, USA.
Filippov, A. F. (1988): Differential Equations with Discontinuous Right-
hand Sides. Kluwer, Dordrecht.
Halbwachs, N. (1993): Synchronous programming of reactive systems.
Kluwer Academic Pub.
Henzinger, T. A., P. W. Kopke, A. Puri, and P. Varaiya (1995a): Whats de-
cidable about hybrid automata? Technical Report TR95-1541. Cornell
University, Computer Science.
Henzinger, T. H., P.-H. Ho, and H. Wong-Toi (1995b): A user guide to
HyTech. Lecture Notes in Computer Science, 1019.
Itkis, U. (1976): Control Systems of Variable Structure. Halsted Press,
Wiley, New York.
Jantzen, J. (1994): Digraph Approach to Multivariable Control. Electrical
Power Engineering department, Technical University of Denmark,
Lyngby, Denmark.
Johansson, K. H. and A. Rantzer (1996): Global analysis of third-order
relay feedback systems. Report ISRN LUTFD2]TFRT--7542--SE.
Johansson, M., J. Malmborg, A. Rantzer, B. Bernhardsson, and K.-E. Årzén
(1997): Modeling and control of fuzzy, heterogeneous and hybrid
systems. In Proceedings of the SiCiCa 1997.
Johansson, M. and A. Rantzer (1997): On the computation of piecewise
quadratic Lyapunov functions. In Proceedings of the 36th IEEE
Conference on Decision and Control. San Diego, USA.
Johansson, M. and A. Rantzer (1998): Computation of piecewise
quadratic Lyapunov functions for hybrid systems. IEEE Transactions
on Automatic Control, April. Special issue on Hybrid Systems. To Ap-
pear.
Khalil, H. K. (1992): Nonlinear Systems. Macmillan, New York.
Kiendl, H. and J. J. Rüger (1995): Stability analysis of fuzzy control
systems using facet functions. Fuzzy Sets and Systems, 70, pp. 275-
285.
Kuipers, B. and K. J. Åström (1991): The composition of heterogeneous
control laws. In Proceedings of the 1991 American Control Conference,
pp. 630-636. Boston, Massachusetts.
Lafferiere, G. (1994): Discontinuous stabilizing feedback using partially
dened Lyapunov functions. In Conference on Decision and Control.
Lake Buena Vista.
Lafferiere, G. and S. E. (1993): Remarks on control Lyapunov functions
for discontinuous stabilizing feedback. In Conference on Decision and
Control. Texas.
Leitman, G. (1981): The Calculus of Variations and Optimal Control.
Plenum Press, New York.
Lewis, F. L. (1986): Optimal Control. Wiley.
Malmborg, J., B. Bernhardsson, and K. J. Åström (1996): A stabilizing
switching scheme for multi-controller systems. In Proceedings of the
1996 Triennial IFAC World Congress, IFAC'96, vol. F, pp. 229-234.
Elsevier Science, San Francisco, California, USA.
Malmborg, J. and B. Bernhardsson (1997): Control and simulation of hy-
brid systems. Nonlinear Analysis, Theory, Methods and Applications,
30:1, pp. 337347.
Malmborg, J. and J. Eker (1997): Hybrid control of a double tank system.
In IEEE Conference on Control Applications. Hartford, Connecticut.
Malmborg, J. and M. Johansson (1994): Fuzzy heterogeneous control. In
FALCON Meeting/EUFIT Conference. Aachen.
MATLAB (1995): MATLAB SIMULINK, toolboxes. The Mathworks, Co-
chituate Place, 24 Prime Park Way, Natick, MA, USA, version 4.2,
volume vi edition. 1 computer laser optical disk, graphics quick refer-
ence 1 MATLAB quick reference.
Mattsson, S. and H. Elmqvist (1997): Modelica - an international effort
to design the next generation modeling language. In IFAC Symp. on
Computer Aided Control Systems Design, CACSD'97. Gent, Belgium.
Mattsson, S. and G. Söderlind (1993): Index reduction in differential-
algebraic equations. SIAM Journal of Scientific and Statistical Com-
puting, 14:3, pp. 677-692.
Mattsson, S. E. (1996): On object-oriented modeling of relays and sliding
mode behaviour. In IFAC96, Preprints 13th World Congress of IFAC,
vol. F, pp. 259264. San Francisco, California.
Middleton, R. H. (1991): Trade-offs in linear control systems design.
Automatica., 27:2, pp. 281292.
Morse, A. (1995a): Control using logic-based switching. In Block Island
Workshop. Rhode Island.
Morse, A. S. (1995b): Control using logic-based switching. In Isidori, Ed.,
Trends in Control. A European Perspective, pp. 69113. Springer.
Nerode, A. and W. Kohn (1992): Models for hybrid systems: automata,
topologies, stability. Technical Report. Mathematical Sciences Insti-
tute, Cornell University.
Nijmeijer, H. and A. van der Schaft (1990): Nonlinear Dynamical Control
Systems. Springer-Verlag.
Otter, M. and F. E. Cellier (1995): Software for modeling and simulating
control systems.
Paden, B. and S. Sastry (1987): A calculus for computing Filippovs
differential inclusion with application to the variable structure control
of robot manipulators. IEEE Trans. on Circuits and Systems, 34:1,
pp. 7382.
Pantelides, C. C. (1988): The consistent initialization of differential-
algebraic systems. Siam J. of Scientic and Statistical Computing,
9:2, pp. 213231.
Peleties, P. and R. DeCarlo (1989): A modeling strategy with event
structures for hybrid systems. In Proc. 28th CDC, pp. 13081313.
Tampa, FL.
Ramadge, P. J. G. and W. M. Wonham (1989): The control of discrete
event systems. Proceedings of the IEEE; Special issue on Dynamics
of Discrete Event Systems, 77, 1, pp. 8198.
Seidman, T. I. (1992): Some limit results for relays. In WCNA 92. Tampa,
USA.
Shevitz, D. and B. Paden (1994): Lyapunov stability theory of nonsmooth
systems. IEEE Trans. on Automat. Contr., 39:9, pp. 1910-1914.
Sontag, E. (1981): Nonlinear regulation: The piecewise linear approach.
IEEE Transactions on Automatic Control.
Stiver, J. and P. Antsaklis (1992): Modeling and analysis of hybrid control
systems. In Conference on Decision and Control. Tucson, Arizona,
USA.
Sugeno, M. and T. Takagi (1983): Multi-dimensional fuzzy reasoning.
Fuzzy Sets and Systems, 9:3, pp. 313325.
Tarjan, R. H. (1972): Depth rst search and linear graph algorithms.
Siam J. of Comp., 1, pp. 146160.
Tavernini, L. (1987): Differential automata and their discrete simula-
tors. Nonlinear Analysis, Theory, Methods and Applications, 11:6,
pp. 665683.
Tittus, M. (1995): Control Synthesis for batch Processes. PhD thesis,
Chalmers University of Technology.
Utkin, V. (1978): Sliding Mode and Their Application in Variable Structure
Systems. MIR Publishers, Moscow.
Utkin, V. I. (1977): Variable structure systems with sliding modes. IEEE
Transactions on Automatic Control, AC-22, pp. 212222.