λ1,2 = (a + d)/2 ± √((a + d)² − 4(ad − bc)) / 2
This yields the following possible cases:
λ1, λ2 real and negative → Stable node
λ1, λ2 real and positive → Unstable node
λ1, λ2 real and of opposite signs → Saddle point
λ1, λ2 complex with negative real parts → Stable focus
λ1, λ2 complex with positive real parts → Unstable focus
λ1, λ2 complex with zero real parts → Center
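These cases can be checked numerically from the eigenvalues of A; a minimal sketch (the function name and test matrices are illustrative, assuming NumPy is available):

```python
import numpy as np

def classify_equilibrium(A, tol=1e-12):
    """Classify the equilibrium of x' = A x from the eigenvalues of A."""
    l1, l2 = np.linalg.eigvals(A)
    if abs(l1.imag) < tol and abs(l2.imag) < tol:   # both eigenvalues real
        r1, r2 = l1.real, l2.real
        if r1 < 0 and r2 < 0:
            return "stable node"
        if r1 > 0 and r2 > 0:
            return "unstable node"
        return "saddle point"
    # complex-conjugate pair: the shared real part decides
    if l1.real < -tol:
        return "stable focus"
    if l1.real > tol:
        return "unstable focus"
    return "center"

print(classify_equilibrium(np.array([[-1.0, 0.0], [0.0, -2.0]])))  # stable node
print(classify_equilibrium(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # center
```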
Control of Nonlinear Dynamic Systems: Theory and Applications
J. K. Hedrick and A. Girard 2010
30
In Class Problem:
Graph the phase portraits for the linear system
ẋ = Ax,  where  A = [a 0; 0 −1]
Solution: the system can be written as:
ẋ = ax
ẏ = −y
The equations are uncoupled. In this simple case, each equation may be solved
separately. The solution is:
x(t) = x0 e^(at)
y(t) = y0 e^(−t)
The phase portraits for different values of a are shown below. In each case, y decays
exponentially. Name the different cases.
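The closed-form solution can be sanity-checked against a crude numerical integration; a minimal sketch with arbitrarily chosen values (assuming NumPy):

```python
import numpy as np

# Closed-form solution of the uncoupled system x' = a*x, y' = -y,
# checked against forward-Euler integration of the same equations.
a, x0, y0, T, n = -0.5, 1.0, 2.0, 2.0, 20000
dt = T / n
x, y = x0, y0
for _ in range(n):
    x += dt * a * x     # Euler step for x' = a*x
    y += dt * (-y)      # Euler step for y' = -y

print(abs(x - x0 * np.exp(a * T)))  # small integration error
print(abs(y - y0 * np.exp(-T)))     # small integration error
```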
A complete phase space analysis: Lotka-Volterra Model
We consider here the classic Lotka-Volterra model of competition between two species,
here imagined to be rabbits and sheep. Suppose both species are competing for the same
food supply (grass) and the amount available is limited. Also, let's ignore all other
complications, like predators, seasonal effects, and other sources of food. There are two
main effects that we wish to consider:
1. Either species would grow to its carrying capacity in the absence of the other.
This can be modeled by assuming logistic growth for each species. Rabbits have a
legendary ability to reproduce, so perhaps we should assign them a higher
intrinsic growth rate.
2. When rabbits and sheep encounter each other, the trouble starts. Sometimes the
rabbit gets to eat, but more usually the sheep nudges the rabbit aside and starts
nibbling (on the grass). We'll assume that these conflicts occur at a rate
proportional to the size of each population. (If there are twice as many sheep, the
odds of a rabbit encountering a sheep are twice as great). Also, assume that the
conflicts reduce the growth rate for each species, but the effect is more severe for
the rabbits.
A specific model that incorporates these assumptions is:
ẋ = x(3 − x − 2y)
ẏ = y(2 − x − y)
where x(t) is the population of rabbits and y(t) is the population of sheep. Of course, x
and y are positive. The coefficients have been chosen to reflect the described scenario,
but are otherwise arbitrary.
There are four fixed points for this system: (0,0), (0,2), (3,0) and (1,1). To classify them,
we start by computing the Jacobian:
A = [3 − 2x − 2y,  −2x;  −y,  2 − x − 2y]
To do the analysis, we have to consider the four points in turn.
(0,0): Then
A = [3 0; 0 2]
The eigenvalues are both positive, at 3 and 2, so this is an unstable node. Trajectories leave the origin parallel to the eigenvector for λ = 2, that is, tangential to v = (0, 1), which spans the y-axis. (General rule: at a node, the trajectories are tangential to the slow eigendirection, which is the eigendirection with the smallest |λ|.)
(0,2): Then
A = [−1 0; −2 −2]
The matrix has eigenvalues −1 and −2. The point is a stable node. Trajectories approach along the eigendirection associated with −1. You can check that this direction is spanned by (1, −2).
(3,0): Then
A = [−3 −6; 0 −1]
The matrix has eigenvalues -1, -3. The point is a stable node. Trajectories approach along
the slow eigendirection. You can check that this direction is spanned by (3, -1).
(1,1): Then
A = [−1 −2; −1 −1]
The matrix has eigenvalues −1 ± √2. This is a saddle point. The phase portrait is as shown below:
Assembling the figures, we get:
Also, the x and y axes remain straight-line trajectories, since ẋ = 0 when x = 0 and similarly ẏ = 0 when y = 0.
We can assemble the entire phase portrait:
This phase portrait has an interesting biological interpretation. It shows that one species
generally drives the other to extinction. Trajectories starting below the stable manifold
lead to the eventual extinction of the sheep, while those starting above lead to the
eventual extinction of the rabbits. This dichotomy occurs in other models of competition
and has led biologists to formulate the principle of competitive exclusion, which states that
two species competing for the same limited resource cannot typically co-exist.
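The four classifications can be verified numerically from the Jacobian's eigenvalues; a minimal sketch (assuming NumPy):

```python
import numpy as np

def jacobian(x, y):
    """Jacobian of the rabbit-sheep model x' = x(3-x-2y), y' = y(2-x-y)."""
    return np.array([[3 - 2*x - 2*y, -2*x],
                     [-y,            2 - x - 2*y]])

for pt in [(0, 0), (0, 2), (3, 0), (1, 1)]:
    eigs = np.sort(np.linalg.eigvals(jacobian(*pt)).real)
    print(pt, eigs)
# (0, 0): eigenvalues 2, 3                  -> unstable node
# (0, 2): eigenvalues -2, -1                -> stable node
# (3, 0): eigenvalues -3, -1                -> stable node
# (1, 1): eigenvalues -1-sqrt(2), -1+sqrt(2) -> saddle point
```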
Stability (Lyapunov's First Method)
Consider the system described by the equation:
Write x as :
Then
Lyapunov proved that the eigenvalues of A indicate local stability of the nonlinear
system about the equilibrium point if:
a) (The linear terms dominate)
b) There are no eigenvalues with zero real part.
Example:
Consider the equation:
If x is small enough, then
Thought question: What if a = 0?
Example: Simplified satellite control problem
Built in the 1960s.
After about one month, it would run out of gas.
How was the controller designed?
Let's pick .
It is cold in space: the valves would freeze open. If  and  are small, there is not enough torque to break the ice, so the valves get frozen open and all the gas escapes. One solution is relay control and/or bang-bang control. (These methods are inelegant.)
Pick , and .
Case 1: Pick u = 0. The satellite just floats.
In the thick black line interval, all trajectories point towards the switching line.
Bad idea!
On the line, . (a>0).
On average:
On the average, the trajectory goes to the origin.
Introduction to Sliding Mode Control (also called Variable Structure Control)
Consider the system governed by the equation:
Inspired by the previous example, we select a control law of the form:
where . How should we pick the function s?
Case 1:
This does not yield the performance we want.
Case 2:
This does not yield the performance we want.
Case 3:
When is ?
Let s>0. Then
That is, if s>0, iff
Example
Consider the system governed by the equation:
where d(t) is an unknown disturbance. The disturbance d is bounded, that is,
The goal of the controller is to guarantee the type of response shown below.
1) Is it possible to design a controller that guarantees this response assuming no
bounds on u?
2) If your answer on question (1) is yes, design the controller.
The desired behavior is a first-order response. Define
If s=0, we have the desired system response. Hence our goal is to drive s to zero.
If u appears in the equation for s, set s=0 and solve for u. Unfortunately, this is not the
case. Keep differentiating the equation for s until u appears.
Look for the condition for .
We therefore select u to be:
The first term dictates that one always approaches zero. The second term is called the switching term. The parameter λ is a tuning parameter that governs how fast one goes to zero.
Once the trajectory crosses the s=0 line, the goals are met, and the system slides
along the line. Hence the name sliding mode control.
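The plant equations of this example did not survive extraction, so purely as an illustration, here is a sliding-mode sketch on an assumed double integrator with a bounded disturbance; the surface s = ẋ + λx and the values of λ, η, and the disturbance shape are all made-up choices:

```python
import numpy as np

# Assumed plant: x'' = u + d(t), with |d| <= D (unknown but bounded).
# Surface s = xd + lam*x; control u = -lam*xd - eta*sign(s), with eta > D
# so that s*sdot < 0 off the surface (sdot = -eta*sign(s) + d).
lam, eta, D = 1.0, 2.0, 0.5
dt, x, xd = 1e-3, 1.0, 0.0
for k in range(20000):
    t = k * dt
    d = D * np.sin(3.0 * t)              # disturbance within its bound
    s = xd + lam * x
    u = -lam * xd - eta * np.sign(s)     # switching control
    xd += dt * (u + d)                   # Euler step for the velocity
    x += dt * xd                         # Euler step for the position
print(abs(x), abs(xd))                   # both driven near zero
```

Once s reaches zero the state chatters along the surface and x decays roughly as e^(−λt), which is the sliding behavior described above.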
Does the switching surface s have to be a line?
No, but it keeps the problem analyzable.
Example of a nonlinear switching surface
Consider the system governed by the equation:
For a mechanical system, an analogy would be making a cart reach a given position at
zero velocity in minimal time.
The request for a minimal time solution suggests a bang-bang type of approach.
This can be obtained, for example, with the following expression for s:
The shape of the sliding surface is as shown below.
This corresponds to the following block diagram:
Logic is missing for the case when s is exactly equal to zero. In practice, for a continuous system such as that shown above, this case is never reached.
Classical Phase-Plane Analysis Examples
Reference: GM Chapter 7
Example: Position control servo (rotational)
Case 1: Effect of dry friction
The governing equation is as follows:
For simplicity and without loss of generality, assume that I = 1. Then:
That yields:
The friction function is given by:
There are an infinite number of singular points, as shown below:
When , we have , that is, we have an undamped linear oscillation (a center). Similarly, when , we have  (another center).
From a controls perspective, dry friction results in an offset, that is, a loss of static
accuracy.
To get the accuracy back, it is possible to introduce dither into the system. Dither is a high-frequency, low-amplitude disturbance (an analogy would be tapping an offset scale with one's finger to make it return to the correct value).
On average, the effect of dither pulls you in. Dither is a linearizing agent that transforms Coulomb friction into viscous friction.
Example: Servo with saturation
There are three different zones created by the saturation function:
The effects of saturation do not look destabilizing. However, saturation affects the
performance by slowing it down.
The effect of saturation is to slow down the system.
Note that we are assuming here that the system was stable to start with before we applied
saturation.
Problems appear if one is not operating in the linear region, which indicates that the gain
should be reduced in the saturated region.
If you increase the gain of a linear system, it oftentimes eventually winds up unstable, unless the root locus looks like:
Root locus for a conditionally stable system (for example an inverted pendulum).
So there are systems for which saturation will make you unstable.
SUMMARY: Second-Order Systems and Phase-Plane Analysis
Graphical Study of Second-Order Autonomous Systems
x1 and x2 are states of the system
p and q are nonlinear functions of the states
phase plane = plane having x1 and x2 as coordinates
→ get rid of time
As t goes from 0 to +∞, and given some initial conditions, the solution x(t) can be
represented geometrically as a curve (a trajectory) in the phase plane. The family of
phase-plane trajectories corresponding to all possible initial conditions is called the phase
portrait.
Due to Henri Poincaré, French mathematician (1854-1912).
Main contributions:
- Algebraic topology
- Differential Equations
- Theory of complex variables
- Orbits and Gravitation
- http://www-history.mcs.st-andrews.ac.uk/history/Mathematicians/Poincare.html
Poincaré conjecture
In 1904 Poincaré conjectured that any closed 3-dimensional manifold which is homotopy equivalent to the 3-sphere must be the 3-sphere. Higher-dimensional analogues of the conjecture were proved first; the original conjecture was finally proved by Grigori Perelman in the early 2000s.
Equilibrium (singular point)
Singular point = equilibrium point in the phase plane
Slope of the phase trajectory
At an equilibrium point, the value of the slope is indeterminate (0/0) → singular point.
Investigate the linear behaviour about a singular point
Set
Then
Which is the general form of a second-order linear system.
Obtain the characteristic equation
This equation admits the roots:
λ1,2 = (a + d)/2 ± √((a + d)² − 4(ad − bc)) / 2
Possible cases
Pictures are from H. Khalil, Nonlinear Systems, Second Edition.
λ1 and λ2 are real and negative → STABLE NODE
λ1 and λ2 are real and positive → UNSTABLE NODE
λ1 and λ2 are real and of opposite sign → SADDLE POINT (UNSTABLE)
λ1 and λ2 are complex with negative real parts → STABLE FOCUS
λ1 and λ2 are complex with positive real parts → UNSTABLE FOCUS
λ1 and λ2 are complex with zero real parts → CENTER
Which direction do circles and spirals spin, and what does this mean?
Consider the system:
Let and .
With half a page of straightforward algebra, one can show that (see homework 1 for details):
and
The r equation says that in a Jordan block, the diagonal element, σ, determines whether the equilibrium is stable. Since r is always non-negative, σ greater than zero gives a growing radius (unstable), while σ less than zero gives a shrinking radius. ω gives the rate and direction of rotation, but has no effect on stability. For a given physical system, simply re-assigning the states can get either positive or negative ω.
In summary:
If σ > 0, the phase plot spirals outwards.
If σ < 0, the phase plot spirals inwards.
If ω > 0, the arrows on the phase plot are clockwise.
If ω < 0, the arrows on the phase plot are counter-clockwise.
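The radius behavior can be checked by simulating the Jordan-block form directly, where σ and ω denote the real and imaginary parts of the complex eigenvalue pair; a minimal sketch (σ and ω values are arbitrary, and the rotation sense for a given ω depends on how the states are assigned):

```python
import numpy as np

# x' = [[sig, -om], [om, sig]] x : sig sets radial growth, om the rotation rate.
def simulate(sig, om, T=1.0, n=10000):
    A = np.array([[sig, -om], [om, sig]])
    x = np.array([1.0, 0.0])
    dt = T / n
    for _ in range(n):
        x = x + dt * A @ x          # forward-Euler step
    return x

r_in = np.linalg.norm(simulate(-0.5, 2.0))   # sig < 0: radius shrinks
r_out = np.linalg.norm(simulate(+0.5, 2.0))  # sig > 0: radius grows
print(r_in < 1.0 < r_out)
```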
Stability
x = xe + δx
Lyapunov proved that the eigenvalues of A indicate local stability if:
(a) the linear terms dominate, that is:
(b) there are no eigenvalues with zero real part.
4. Equilibrium Finding
We consider systems that can be written in the following general form, where x is the
state of the system, u is the control input, and f is a nonlinear function.
Let u = ue = constant.
At an equilibrium point, .
Key points
- Nonlinear systems may have a number of equilibrium points (from zero to infinity). These are obtained from the solution of n algebraic equations in n unknowns.
- The global implicit function theorem states conditions for the uniqueness of an equilibrium point.
- Numerical solutions for the equilibrium points can be obtained using several methods, including (but not limited to) the method of Newton-Raphson and steepest descent techniques.
To obtain the equilibrium points, one has to solve n algebraic equations in n unknowns.
How can we find out if an equilibrium point is unique? See next section.
Global Implicit Function Theorem
Define the Jacobian of f.
The solution xe of  for a fixed ue is unique provided:
1. det[J(x)] ≠ 0 for all x
2.
Note: in general these two conditions are hard to evaluate (particularly condition 1).
For peace of mind, check this with linear system theory. Suppose we had a linear system:
. Is xe unique? J = A, which is different from 0 for all x, and f = Ax, so the limit condition is true as well (good!).
How does one generate numerical solutions to ? (for a fixed ue)
There are many methods to find numerical solutions to this equation, including, but not
limited to:
- Random search methods
- Methods that require analytical gradients (best)
- Methods that compute numerical gradients (easiest)
Two popular ways of computing numerical gradients include:
- The method of Newton-Raphson
- The steepest descent method
Usually both methods are combined.
The method of Newton-Raphson
We want to find solutions to the equation . We have a value, xi, at the i-th iteration and an error, ei, such that ei = f(xi).
We want an iteration algorithm so that:
Expand in a first order Taylor series expansion.
We have: .
Suppose that we ask for: (ask for, not get)
Then:
That is, we get an expression for the Newton-Raphson iteration:
Note: One needs to evaluate (OK) and invert (not so good) the Jacobian.
Note: Leads to good convergence properties close to xe but causes extreme starting errors.
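The iteration can be sketched as follows; the damped-pendulum system used as the example here is an illustrative assumption, not the text's example:

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: x_{i+1} = x_i - J(x_i)^{-1} f(x_i), to find f(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        e = f(x)
        if np.linalg.norm(e) < tol:
            break
        x = x - np.linalg.solve(jac(x), e)   # solve rather than invert J
    return x

# Example: equilibria of x1' = x2, x2' = -sin(x1) - x2 (assumed pendulum model)
f = lambda x: np.array([x[1], -np.sin(x[0]) - x[1]])
jac = lambda x: np.array([[0.0, 1.0], [-np.cos(x[0]), -1.0]])
print(newton_raphson(f, jac, [0.3, 0.1]))    # converges to the origin (0, 0)
```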
Steepest Descent Technique (Hill Climbing)
Define a scalar function of the error, then choose to guarantee a reduction in this
scalar at each step.
Define: which is a guaranteed positive scalar. We attempt to minimize L.
We expand L in a first-order Taylor series expansion.
and
We want to impose the condition: L(i+1) < L(i).
This implies:
where α is a scalar.
This yields:
and
That is, the steepest descent iteration is given by:
Note: Need to evaluate J but not invert it (good).
Note: this has good starting properties but poor convergence properties.
Note: Usually, the method of Newton-Raphson and the steepest descent method are
combined:
where α1 and α2 are variable weights.
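A minimal sketch of the steepest-descent iteration on the scalar L = ½ fᵀf; the example system and the step size α are illustrative choices, not from the text:

```python
import numpy as np

def steepest_descent(f, jac, x0, alpha=0.05, tol=1e-8, max_iter=50000):
    """Minimize L = 0.5*f(x)^T f(x) by stepping along -grad L = -J^T f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        e = f(x)
        if np.linalg.norm(e) < tol:
            break
        x = x - alpha * jac(x).T @ e   # J is evaluated but never inverted
    return x

# Illustrative system: x + y = 3, x*y = 2 (roots at (1, 2) and (2, 1))
f = lambda x: np.array([x[0] + x[1] - 3.0, x[0] * x[1] - 2.0])
jac = lambda x: np.array([[1.0, 1.0], [x[1], x[0]]])
x = steepest_descent(f, jac, [0.5, 1.0])
print(np.round(x, 6))   # one of the roots: (1, 2) or (2, 1)
```

Note the trade-off stated above: no Jacobian inversion (good starting behavior), but many more iterations than Newton-Raphson near the solution.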
6. Controllability and Observability of Nonlinear Systems
Controllability for Nonlinear Systems
The Use of Lie Brackets: Definition
We shall call a vector function f : ℝⁿ → ℝⁿ a vector field in ℝⁿ, to be consistent with
terminology used in differential geometry. The intuitive reason for this term is that to
every vector function f corresponds a field of vectors in an n-dimensional space (one can
think of a vector f(x) emanating from every point x). In the following we shall only be
interested in smooth vector fields. By smoothness of a vector field, we mean that the
function f(x) has continuous partial derivatives of any required order.
Key points
Nonlinear observability is intimately tied to the Lie derivative. The Lie
derivative is the derivative of a scalar function along a vector field.
Nonlinear controllability is intimately tied to the Lie bracket. The Lie bracket
can be thought of as the derivative of a vector field with respect to another.
References
o Slotine and Li, section 6.2 (easiest)
o Sastry, chapter 11 pages 510-516, section 3.9 and chapter 8
o Isidori, chapter 1 and appendix A (hard)
Consider two vector fields f(x) and g(x) on ℝⁿ. Then the Lie bracket operation generates
a new vector field defined by:
The Lie bracket [f,g] is commonly written ad_f g (where ad stands for adjoint).
Also, higher order Lie brackets can be defined recursively:
ad_f^0 g ≜ g
ad_f^1 g ≜ [f, g]
ad_f^2 g ≜ [f, [f, g]]
ad_f^k g ≜ [f, ad_f^(k−1) g],  for k = 1, 2, 3, ...
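The recursion can be written directly in symbolic form; a minimal sketch (assuming SymPy; the matrix A and vector b are arbitrary illustrations, chosen to match the linear recap that follows):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def lie_bracket(f, g):
    """[f, g] = (dg/dx) f - (df/dx) g."""
    return g.jacobian(X) * f - f.jacobian(X) * g

def ad(f, g, k):
    """ad_f^k g, recursively: ad_f^0 g = g, ad_f^k g = [f, ad_f^{k-1} g]."""
    return g if k == 0 else lie_bracket(f, ad(f, g, k - 1))

# For linear f = A x and a constant vector b: [f, b] = -A b, ad_f^2 b = A^2 b
A = sp.Matrix([[0, 1], [-2, -3]])
b = sp.Matrix([0, 1])
print(sp.simplify(ad(A * X, b, 1) + A * b))        # zero vector
print(sp.simplify(ad(A * X, b, 2) - A * A * b))    # zero vector
```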
Recap Controllability for Linear Systems
C = [B | AB | ... | A^(n−1) B]
Local conditions (linear systems)
Let u = constant (otherwise no problem, but you get additional terms, etc.)
For linear systems, you get nothing new after the nth derivative because of the Cayley-
Hamilton theorem.
Re-writing controllability conditions for linear systems using this notation:
ẍ = Aẋ = A²x + Σ_{i=1..m} A Bi ui = A²x − Σ_{i=1..m} [f, Bi] ui
How this came about
,
So for example:
If we keep going:
d³x/dt³ = A³x + Σ_{i=1..m} A² Bi ui = A³x + Σ_{i=1..m} (ad_f² Bi) ui
Notice how this time the minus signs cancel out.
x^(n) = dⁿx/dtⁿ = Aⁿx + Σ_{i=1..m} A^(n−1) Bi ui = Aⁿx + (−1)^(n−1) Σ_{i=1..m} (ad_f^(n−1) Bi) ui
Re-writing the controllability condition:
C = [B1, ..., Bm, ad_f B1, ..., ad_f Bm, ..., ad_f^(n−1) B1, ..., ad_f^(n−1) Bm]
The condition has not changed, just the notation.
The terms B1 through Bm correspond to the B term in the original matrix, the terms with ad_f correspond to the AB terms, and the terms with ad_f^(n−1) correspond to the A^(n−1)B terms.
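For a concrete linear system, the bracket-based matrix spans the same space as the classical one, since ad_f^k Bi = (−A)^k Bi differs from A^k Bi only in sign; a minimal numerical check (the A and B below are arbitrary illustrations):

```python
import numpy as np

# Controllable companion-form example: both tests must report full rank.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0], [0.0], [1.0]])

# Classical test: [B, AB, A^2 B]; bracket test: [B, ad_f B, ad_f^2 B] = [B, -AB, A^2 B]
classical = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(3)])
bracketed = np.hstack([np.linalg.matrix_power(-A, k) @ B for k in range(3)])
print(np.linalg.matrix_rank(classical), np.linalg.matrix_rank(bracketed))  # 3 3
```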
Nonlinear Systems
Assume we have an affine system:
The general case is much more involved and is given in Hermann and Krener.
If we don't have an affine system, we can sometimes use a ruse:
Let
Select a new state z = [x, u]^T; v is my control → the system is affine in (z, v), and pick # to be OK.
Theorem
The system defined by:
is locally accessible about x0 if the accessibility distribution C spans n space, where n is the dimension of x and C is defined by:
C = [g1, g2, ..., gm, [gi, gj], ..., ad_gi^k gj, ..., [f, gi], ..., ad_f^k gi, ...]
The gi terms are analogous to the B terms, the [gi, gj] terms are new from having a nonlinear system, the [f, gi] terms correspond to the AB terms, etc.
Note: if f(x) = 0 and, in this case, C has rank n, then the system is controllable.
Example: Kinematics of an Axle
Basically, θ is the yaw angle of the vehicle, and x1 and x2 are the Cartesian locations of the wheels. u1 is the velocity of the front wheels, in the direction that they are pointing, and u2 is the steering velocity.
We define our state vector to be:
Our dynamics are:
The system is of the form:
f(x) = 0, and
Note:
If I linearize a nonlinear system about x0 and the linearization is controllable, then the nonlinear system is accessible at x0 (not true the other way: if the linearization is uncontrollable, the nonlinear system may still be locally accessible).
Back to the example:
where and in our case,
So
C has rank 3 everywhere, so the system is locally accessible everywhere, and f(x)=0 (free
dynamics system) so the system is controllable!
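The rank computation can be carried out symbolically; a sketch assuming the standard kinematics implied by the description above (ẋ1 = u1 cos θ, ẋ2 = u1 sin θ, θ̇ = u2, with f = 0):

```python
import sympy as sp

x1, x2, th = sp.symbols('x1 x2 theta')
X = sp.Matrix([x1, x2, th])

g1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])   # drive input direction
g2 = sp.Matrix([0, 0, 1])                     # steering input direction

# Lie bracket [g1, g2] = (dg2/dx) g1 - (dg1/dx) g2: the "wriggle" direction
bracket = g2.jacobian(X) * g1 - g1.jacobian(X) * g2
C = sp.Matrix.hstack(g1, g2, bracket)
print(bracket.T)      # (sin(theta), -cos(theta), 0)
print(C.rank())       # 3: locally accessible everywhere
```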
Example 2:
Note: if I had the linear system:
, , "
and the linear system is controllable.
Back to the example 2:
Is the nonlinear system controllable? The answer is NO, because x1 can only increase. But let's show it.
In standard form:
,
So
Accessible everywhere except where x2 = 0.
If we tried [f, [f, g]], would we pick up new directions? It turns out they will also be dependent on x2, and the rank will drop at x2 = 0.
Example 3:
where
The system is of the form:
where
, and
We have:
If C has rank 4, then the system is locally accessible. Have fun
Observability for Nonlinear Systems
Intuition for observability:
From observing the sensor(s) for a finite period of time, can I find the state at previous
times?
Review of Linear Systems
where
where and p<n
(linear system)
z(t), 0 ≤ t ≤ T
Can I determine x0?
where M is p×n, e^(At) is n×n, and x0 is n×1, so z(t) is p×1
Using the Cayley-Hamilton theorem:
Note: the Cayley-Hamilton theorem applies to time-varying matrices as well.
So, I have:
z(t) = {α0(t) M + α1(t) M A + ... + α_(n−1)(t) M A^(n−1)} x0
So I can solve for x0 iff the matrix O spans n space, where:
This does not carry over to nonlinear systems, so we take a local approach.
Local Approach to Observability (Linear Systems)
v(t) is the measurement noise, can cause problems.
z^(n−1) = M A^(n−1) x
→ O must have rank n
Lie Derivatives:
The gradient of a smooth scalar function h(x) of the state x is denoted by:
∇h = ∂h/∂x
The gradient is represented by a row vector of elements: (∇h)_j = ∂h/∂x_j.
Similarly, given the vector field f(x), the Jacobian of f is:
∇f = ∂f/∂x
It is represented by an n×n matrix of elements: (∇f)_ij = ∂f_i/∂x_j.
Definition
Let f: ℝⁿ → ℝⁿ be a vector field in ℝⁿ.
Let h: ℝⁿ → ℝ be a smooth scalar function.
Then the Lie derivative of h with respect to f is a new scalar defined by:
Dimensions
f looks like:
h looks like: h(x) with x ∈ ℝⁿ → associates a scalar to each point in ℝⁿ
The Lie derivative looks like:
→ L_f h is a scalar.
Conventions:
By definition,
We can also define higher-order Lie derivatives:
etc
One can easily see the relevance of Lie derivatives to dynamic systems by considering
the following single-output system:
ẋ = f(x)
y = h(x)
Then
And
Etc., so:
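The chain ẏ = L_f h, ÿ = L_f² h, ... can be computed symbolically; a minimal sketch (the pendulum-like f and the output h are illustrative assumptions, assuming SymPy):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = sp.Matrix([x1, x2])

def lie_derivative(h, f):
    """L_f h = (dh/dx) f : a new scalar function of the state."""
    return (sp.Matrix([h]).jacobian(X) * f)[0, 0]

# Example system (chosen for illustration): x1' = x2, x2' = -sin(x1); y = x1
f = sp.Matrix([x2, -sp.sin(x1)])
h = x1
y_dot = lie_derivative(h, f)        # ydot  = L_f h
y_ddot = lie_derivative(y_dot, f)   # yddot = L_f^2 h
print(y_dot, y_ddot)                # x2, -sin(x1)
```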
Use of Lie Derivative Notation for Linear Systems
so f(x)=Ax
, where Mi is 1×n
→
Define
G = [ L_f^0(h1)      ...  L_f^0(hp)
      ...            ...  ...
      L_f^(n−1)(h1)  ...  L_f^(n−1)(hp) ]

  = [ M1 x           ...  Mp x
      ...            ...  ...
      M1 A^(n−1) x   ...  Mp A^(n−1) x ]
Now, define a gradient operator:
O must have rank n for the system to be observable.
Nonlinear Systems
Theorem:
Let G denote the set of all finite linear combinations of the Lie derivatives of h1, ..., hp with respect to f for various values of u = constant. Let dG denote the set of all their gradients. If we can find n linearly independent vectors within dG, then the system is locally observable.
The system is locally observable, that is, distinguishable at a point x0, if there exists a neighborhood of x0 such that in this neighborhood,
if the states are different, the sensor readings are different
Case of a single measurement:
Look at the derivatives of z:
Let:
Expand in a first-order series about x0 for u = u0.
Then  must have rank n.
Example:
The question we are trying to answer is: from observation, does z contain enough information on x1 and x2?
Since x1 = z, (by substitution in the first line)
If z = x2, then the system would not be distinguishable, since
But if you take one more derivative you can get a single-valued expression, since  has only one solution.
ż = 2 x1 ẋ1 = 2 x1 [x1²/2 + exp(x2) + x2]
Rank test for z = h = x1:  has rank 2 everywhere
→ the system is locally observable everywhere.
SUMMARY: Controllability and Observability for Nonlinear Systems
Controllability
The system is locally accessible about a point x0 if and only if
C = [g1, ..., gm, [gi, gj], ..., ad_gi^k gj, ..., [f, gi], ..., ad_f^k gi, ...]
has rank n, where n is the dimension of x. C is the accessibility distribution.
If the system has the form: that is, f(x) = 0, and C has rank n, then the
system is controllable.
Observability
z=h(x)
Two states x0 and x1 are distinguishable if there exists an input function u* such that:
z(x0) ≠ z(x1)
The system is locally observable at x0 if there exists a neighbourhood of x0 such that every x in that neighbourhood other than x0 is distinguishable from x0.
A test for local observability is that:
must have rank n, where n is the dimension of x, and
for a p×1 vector, z = [h1, ..., hp]^T,
LINEAR SYSTEMS vs. NONLINEAR SYSTEMS

CONTROLLABILITY AND ACCESSIBILITY
Intuition: the system is controllable → you can get anywhere you want in a finite amount of time.

LINEAR TIME-INVARIANT SYSTEMS: CONTROLLABILITY
The system is controllable if:
C = [B  AB  ...  A^(n−1)B]
has rank n, where n is the dimension of x.

AFFINE SYSTEMS: ACCESSIBILITY
The system is locally accessible about a point x0 if and only if
C = [g1, ..., gm, [gi, gj], ..., ad_gi^k gj, ..., [f, gi], ..., ad_f^k gi, ...]
has rank n, where n is the dimension of x. C is the accessibility distribution.

CONTROLLABILITY
If f(x) = 0 and C has rank n, then the system is controllable.
Control of Nonlinear Dynamic Systems: Theory and Applications
J. K. Hedrick and A. Girard 2010
94
OBSERVABILITY AND DISTINGUISHABILITY
Intuition: the system is observable → from observing the sensor measurements for a finite period of time, I can obtain the state at previous times.

LINEAR TIME-INVARIANT SYSTEMS: OBSERVABILITY
z = Mx, where x is n×1, z is p×1, and p < n.
The system is observable if the matrix O has rank n, where n is the dimension of x.

NONLINEAR SYSTEMS: DISTINGUISHABILITY
z = h(x)
Two states x0 and x1 are distinguishable if there exists an input function u* such that:
z(x0) ≠ z(x1)

LOCAL OBSERVABILITY
The system is locally observable at x0 if there exists a neighbourhood of x0 such that every x in that neighbourhood other than x0 is distinguishable from x0.
A test for local observability is that:
must have rank n, where n is the dimension of x, and
for a p×1 vector, z = [h1, ..., hp]^T,
Remarks
In general the conditions for nonlinear systems are weaker than those for linear
systems. Properties for nonlinear systems tend to be local.
What to do for nonlinear controllability if the system is not in affine form?
Let z = [x, u]^T; v is my control → the system is now affine in (z, v), and pick # to be OK.
Marius Sophus Lie
Born: 17 Dec 1842 in Nordfjordeide, Norway
Died: 18 Feb 1899 in Kristiania (now Oslo), Norway
The Lie Derivative and Observability
Definition
Let f: ℝⁿ → ℝⁿ be a vector field in ℝⁿ.
Let h: ℝⁿ → ℝ be a smooth scalar function.
Then the Lie derivative of h with respect to f is:
Dimensions
f looks like:
h looks like: h(x) with x ∈ ℝⁿ → associates a scalar to each point in ℝⁿ
The Lie derivative looks like:
→ L_f h is a scalar.
Physically (time for pictures!)
Picture of f
f associates an n-dimensional vector to each point in ℝⁿ.
In ℝ²:
For example, let
f(x) = [−1 0; 0 −2] [x1; x2]
φ_f^t(x0) = flow along the vector field for time t, starting at x0
→ tangent to the phase plane plot at every single point
Picture of h
For example, in ℝ², pick h to be the distance to the origin:
Lie derivative picture
Using this example:
L_f h = [∂h/∂x1  ∂h/∂x2] [−1 0; 0 −2] [x1; x2]
So, the Lie derivative gives the rate of change in a scalar function h as one flows
along the vector field f.
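This interpretation can be verified numerically: stepping a short time dt along the flow and differencing h reproduces L_f h. A sketch using the example f above, with h taken as the squared distance to the origin (an assumption for concreteness):

```python
import numpy as np

# f(x) = [-x1, -2*x2] (i.e., the diagonal example above) and h(x) = x1^2 + x2^2.
f = lambda x: np.array([-x[0], -2.0 * x[1]])
h = lambda x: x[0]**2 + x[1]**2
Lfh = lambda x: -2.0 * x[0]**2 - 4.0 * x[1]**2   # grad(h) . f, by hand

x0 = np.array([1.0, 0.5])
dt = 1e-6
x1 = x0 + dt * f(x0)                  # one tiny step along the flow
numeric = (h(x1) - h(x0)) / dt        # rate of change of h along the flow
print(numeric, Lfh(x0))               # nearly equal
```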
In a control systems context:
x ∈ ℝⁿ,  f: ℝⁿ → ℝⁿ
y = h(x),  y ∈ ℝ,  h: ℝⁿ → ℝ
along the flow of f
How does this tie into observability?
Imagine:
ẋ = Ax
y = Cx
and we can only see y, a scalar, and we wish to find x ∈ ℝⁿ
y = Cx
ẏ = Cẋ = CAx
...
y^(n−1) = CA^(n−1) x
and solve for x (n equations)
→ if [C; CA; ...; CA^(n−1)] has rank n, we have n independent equations in n variables → OK
Using the Lie derivative
f(x) = Ax, h(x) = Cx
and by convention,
The Lie Bracket and Controllability
Definition
Let f: ℝⁿ → ℝⁿ be a smooth vector field in ℝⁿ.
Let g: ℝⁿ → ℝⁿ be a smooth vector field in ℝⁿ.
Then the Lie bracket of f and g is a third vector field given by:
[f, g] = (∂g/∂x)·f − (∂f/∂x)·g
Dimensions
f looks like an $n \times 1$ vector field; g also looks like an $n \times 1$ vector field, and the Jacobians $\partial g/\partial x$ and $\partial f/\partial x$ are $n \times n$.
So [f,g] is a vector field.
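A minimal numerical sketch of the bracket (vector fields chosen for illustration), using finite-difference Jacobians in the formula $[f,g] = (\partial g/\partial x)f - (\partial f/\partial x)g$:

```python
import numpy as np

def jacobian(F, x, eps=1e-6):
    """Central-difference Jacobian dF/dx of a vector field F at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (F(x + dx) - F(x - dx)) / (2 * eps)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = (dg/dx) f(x) - (df/dx) g(x)."""
    return jacobian(g, x) @ f(x) - jacobian(f, x) @ g(x)

# Illustration: f(x) = A x linear, g constant => [f, g] = -A g
A = np.array([[0.0, 1.0], [0.0, 0.0]])
f = lambda x: A @ x
g = lambda x: np.array([0.0, 1.0])
print(lie_bracket(f, g, np.array([1.0, 2.0])))   # approximately [-1, 0] (= -A g)
```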
How does this tie into controllability?
Consider a drift-free two-input system (the standard form for this discussion):
$\dot{x} = g_1(x)u_1 + g_2(x)u_2$
where $u_1$, $u_2$ are scalar inputs and $x \in \mathbb{R}^3$.
What directions can we steer x in if we start at some point $x_0$?
Clearly, we can move anywhere in the span of $\{g_1(x_0), g_2(x_0)\}$.
Let's say that:
Can we move in the $x_3$ direction?
The directions that we are allowed to move in by infinitesimally small changes are given by $[g_1, g_2]$.
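A classic instance of this idea is a unicycle-like system (heading angle as third state), which also appears later in these notes. The sketch below, with the vector fields written out as an assumption, shows that the bracket $[g_1, g_2]$ supplies the missing third direction:

```python
import numpy as np

def jacobian(F, x, eps=1e-6):
    J = np.zeros((len(F(x)), len(x)))
    for j in range(len(x)):
        dx = np.zeros(len(x)); dx[j] = eps
        J[:, j] = (F(x + dx) - F(x - dx)) / (2 * eps)
    return J

def bracket(f, g, x):
    return jacobian(g, x) @ f(x) - jacobian(f, x) @ g(x)

# Unicycle-like fields: x3 is a heading angle; u1 drives, u2 steers
g1 = lambda x: np.array([np.cos(x[2]), np.sin(x[2]), 0.0])
g2 = lambda x: np.array([0.0, 0.0, 1.0])

x0 = np.zeros(3)
b = bracket(g1, g2, x0)             # new direction, approximately [0, -1, 0]
M = np.column_stack([g1(x0), g2(x0), b])
print(np.linalg.matrix_rank(M))     # 3: the bracket unlocks the sideways direction
```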
8 Feedback Linearization
Key points
Feedback linearization = ways of transforming original system models into
equivalent models of a simpler form.
Completely different from conventional (Jacobian) linearization, because
feedback linearization is achieved by exact state transformation and feedback,
rather than by linear approximations of the dynamics.
Input-Output, Input-State
Internal dynamics, zero dynamics, linearized zero dynamics
Jacobi's identity, the theorem of Frobenius
MIMO feedback linearization is also possible.
Feedback linearization is an approach to nonlinear control design that has attracted a great deal of research in recent years. The central idea is to algebraically transform nonlinear system dynamics into (fully or partly) linear ones, so that linear control techniques can be applied.
This differs entirely from conventional (Jacobian) linearization, because feedback
linearization is achieved by exact state transformation and feedback, rather than by linear
approximations of the dynamics.
The basic idea of simplifying the form of a system by choosing a different state
representation is not completely unfamiliar; rather it is similar to the choice of reference
frames or coordinate systems in mechanics.
Feedback linearization = ways of transforming original system models into
equivalent models of a simpler form.
Applications: helicopters, high-performance aircraft, industrial robots, biomedical
devices, vehicle control.
Warning: there are a number of shortcomings and limitations associated with the
feedback linearization approach. These problems are very much topics of current
research.
References: Sastry, Slotine and Li, Isidori, Nijmeijer and van der Schaft
Terminology
Feedback Linearization
A catch-all term which refers to control techniques where the input is used to linearize all or part of the system's differential equations.
Input/Output Linearization
A control technique where the output y of the dynamic system is differentiated until the physical input u appears in the r-th derivative of y. Then u is chosen to yield a transfer function from the synthetic input, v, to the output y which is a chain of r integrators, $1/s^r$:
If r, the relative degree, is less than n, the order of the system, then there will be internal
dynamics. If r = n, then I/O and I/S linearizations are the same.
Input/State Linearization
A control technique where some new output $y_{new} = h_{new}(x)$ is chosen so that, with respect to $y_{new}$, the relative degree of the system is n. Then the design procedure using this new output $y_{new}$ is the same as for I/O linearization.
SISO Systems
Consider a SISO nonlinear system in the standard control-affine form:
$\dot{x} = f(x) + g(x)u$, $\quad y = h(x)$
Here, u and y are scalars.
$\dot{y} = \dfrac{\partial h}{\partial x}\dot{x} = L_f^1 h + L_g(h)u = L_f^1 h$ if $L_g(h) = 0$
If $L_g(h) = 0$, we keep taking derivatives of y until the input u appears. If the input never appears, then u does not affect the output! (Big difficulties ahead.)
If $L_g(L_f h) = 0$, we keep going.
We end up with the following set of equalities:
$\dot{y} = L_f h$ with $L_g h = 0$
$\ddot{y} = L_f^2 h$ with $L_g L_f h = 0$
$\vdots$
$y^{(r)} = L_f^r h + (L_g L_f^{r-1} h)\,u$ with $L_g L_f^{r-1} h \neq 0$
The letter r designates the relative degree of $y = h(x)$ iff:
$L_g L_f^{r-1} h \neq 0$ and $L_g L_f^k h = 0$ for $k < r-1$
That is, r is the smallest integer for which the coefficient of u is non-zero over the space where we want to control the system.
Let's set:
$u = \dfrac{1}{L_g L_f^{r-1} h}\left(-L_f^r h + v\right)$
Then $y^{(r)} = v$, where v(x) is called the synthetic input or synthetic control.
We have an r-integrator linear system, of the form $y^{(r)} = v$.
We can now design a controller for this system, using any linear controller design method. The controller that is implemented is obtained by substituting v into the expression for u above. Any linear method can be used to design v, for example pole placement on the r-integrator chain.
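As an illustration (a hypothetical plant, not one from the notes), the sketch below I/O-linearizes $\ddot{y} = -y^3 + u$, written in state form with r = 2, and closes the loop with a simple pole-placement law for v:

```python
import numpy as np

# Hypothetical plant: x1' = x2, x2' = -x1**3 + u, output y = x1 (r = 2)
def plant(x, u):
    return np.array([x[1], -x[0]**3 + u])

k1, k2 = 4.0, 4.0          # place both poles of y'' = v at s = -2
dt, x = 1e-3, np.array([1.0, 0.0])
for _ in range(10000):     # Euler-simulate for 10 s
    v = -k1 * x[0] - k2 * x[1]      # linear law for the 2-integrator chain
    u = x[0]**3 + v                 # cancel the nonlinearity: y'' = v
    x = x + dt * plant(x, u)
print(abs(x[0]) < 1e-2)    # True: output driven to zero
```

Here r = n = 2, so there are no internal dynamics to check; with r < n the step below (check internal dynamics) would still be required.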
Problems with this approach:
1. Requires a perfect model, with perfect derivatives (one can anticipate robustness problems).
2. If the goal is to control y, only r of the n states are accounted for. If n = 20 and r = 2, there are 18 states for which we don't know what is happening. That is, if r < n, we have internal dynamics.
Note: There is an ad-hoc approach to the robustness problem, by adding a second term to the control. Here the first term in the expression is the standard feedback linearization term, and the second term is tuned online for robustness.
Internal Dynamics
Assume r < n $\Rightarrow$ there are some internal dynamics.
The first r transformed states are the output and its derivatives, $y, \dot{y}, \dots, y^{(r-1)}$.
So we can write:
$\dot{z} = Az + Bv$
where A and B are in controllable canonical form, that is, A is the $r \times r$ integrator-chain matrix and B is the last unit column.
We define:
$x \mapsto (z, \eta)$
where z is $r \times 1$ and $\eta$ is $(n-r) \times 1$ ($r + (n-r) = n$).
The normal forms theorem tells us that there exists an $\eta$ such that:
$\dot{\eta} = w(z, \eta)$
Note that the internal dynamics are not a function of u.
So we have:
$\dot{z} = Az + Bv$, $\quad \dot{\eta} = w(z, \eta)$
The $\eta$ equation represents internal dynamics; these are not observable because z does not depend on $\eta$ at all $\Rightarrow$ internal, and hard to analyze!
We want to analyze the internal dynamics, but the full system is difficult to analyze. Oftentimes, to make our lives easier, we analyze the so-called zero dynamics:
$\dot{\eta} = w(0, \eta)$
and in most cases we even look at the linearized zero dynamics.
We form the Jacobian J of the linearized zero dynamics and look at the eigenvalues of J.
If these are well behaved, perhaps the nonlinear dynamics might be well behaved. If these are not well behaved, the control may not be acceptable!
For linear systems:
We have:
The eigenvalues of the zero dynamics are the zeroes of H(s). Therefore, if H(s) has non-minimum phase zeroes (zeroes in the right-half plane), then the zero dynamics are unstable.
By analogy, for nonlinear systems: if the zero dynamics are unstable, then the system is called a non-minimum phase nonlinear system.
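For the linear case this is easy to check numerically. A sketch with an illustrative transfer function (not one from the notes) whose single zero sits in the right-half plane:

```python
import numpy as np

# H(s) = (s - 1) / (s^2 + 3 s + 2): relative degree r = 1, so n - r = 1
num = [1.0, -1.0]           # s - 1
den = [1.0, 3.0, 2.0]       # (s + 1)(s + 2)
zeros = np.roots(num)
poles = np.roots(den)
print(zeros)                # [1.]  -> a right-half-plane zero
print(poles)                # stable poles, yet the zero-dynamics eigenvalue
                            # (the zero, at +1) is unstable: non-minimum phase
```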
Input/Output Linearization
o Procedure
a) Differentiate y until u appears in one of the equations for the derivatives of y; after r steps, u appears.
b) Choose u to give $y^{(r)} = v$, where v is the synthetic input.
c) Then the system has the form $y^{(r)} = v$. Design a linear control law for this r-integrator linear system.
d) Check internal dynamics.
o Example
Oral exam question: design an I/O linearizing controller so that $y \to 0$ for the plant:
Follow the steps:
a) u appears $\Rightarrow$ r = 1
b) Choose u so that $\dot{y} = v$.
In our case:
c) Choose a control law for the r-integrator system, for example proportional control.
Goal: to send y to zero exponentially $\Rightarrow$ choose v proportional to $-y$, since $y_{des} = 0$.
d) Check internal dynamics:
Closed loop system:
If $x_1 \to 0$ as desired, $x_2$ is governed by the remaining state equation
$\Rightarrow$ Unstable internal dynamics!
There are two possible approaches when faced with this problem:
$\Rightarrow$ Try and redefine the output: $y = h(x_1, x_2)$
$\Rightarrow$ Try to linearize the entire system/space $\Rightarrow$ Input/State Linearization
Input/State Linearization (SISO Systems)
Question: does there exist a transformation $\Phi(x)$ such that the transformed system is linear?
Define the transformed states: $z = \Phi(x)$.
I want to find $\Phi(x)$ such that $\dot{z} = Az + Bv$, with:
$\Rightarrow$ $v = v(x,u)$ is the synthetic control
$\Rightarrow$ the system is in Brunovsky (controllable) form
and A is $n \times n$ and B is $n \times 1$.
We want a 1-to-1 correspondence between z and x.
Question: does there exist an output $y = z_1(x)$ such that y has relative degree n?
Let $z_{k+1} = L_f^k(z_1)$. Then each $\dot{z}_k = z_{k+1}$, and the form I need is:
$\Rightarrow$ does there exist a scalar $z_1(x)$ such that:
$L_g L_f^k(z_1) = 0$ for $k = 0, 1, \dots, n-2$
and $L_g L_f^{n-1}(z_1) \neq 0$?
$z \equiv \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_n \end{bmatrix} = \begin{bmatrix} L_f^0(z_1) \\ L_f^1(z_1) \\ \vdots \\ L_f^{n-1}(z_1) \end{bmatrix}$
$\Rightarrow$ is there a test?
The conditions are partial differential equations in $z_1$, so the test should depend on f and g.
Jacobi's identity
Carl Gustav Jacob Jacobi
Born: 10 Dec 1804 in Potsdam, Prussia (now Germany)
Died: 18 Feb 1851 in Berlin, Germany
Famous for his work on:
- Orbits and gravitation
- General relativity
- Matrices and determinants
Jacobi's Identity
A convenient relationship (S+L) is called Jacobi's identity.
Remember:
$L_f^0 h = h$, $\quad L_f^i h = L_f(L_f^{i-1}h) = \nabla(L_f^{i-1}h) \cdot f$
$ad_f^0 g = g$, $\quad ad_f g = [f,g] = \nabla g \cdot f - \nabla f \cdot g$, $\quad ad_f^i g = [f, ad_f^{i-1} g]$
This identity allows us to keep the conditions first order in $z_1$.
$\Rightarrow$ Trod through messy algebra:
$\Rightarrow$ For k = 0: (first order)
$\Rightarrow$ For k = 1:
$\Rightarrow$ 2nd order (gradient)
Things get messy, but by repeated use of Jacobi's identity, we have:
$\nabla z_1 \cdot ad_f^k g = 0$ for $k = 0, \dots, n-2$, and $\nabla z_1 \cdot ad_f^{n-1} g \neq 0$ (*)
The two conditions above are equivalent. Evaluating the second half:
This leads to conditions of the type:
The Theorem of Frobenius
Ferdinand Georg Frobenius:
Born: 26 Oct 1849 in Berlin-Charlottenburg, Prussia (now Germany)
Died: 3 Aug 1917 in Berlin, Germany
Famous for his work on:
- Group theory
- Fundamental theorem of algebra
- Matrices and determinants
Theorem of Frobenius:
A solution to the set of partial differential equations for $z_1$ exists if and only if:
a) $\left[g, \ ad_f g, \ \dots, \ ad_f^{n-1} g\right]$ has rank n
b) $\left[g, \ ad_f g, \ \dots, \ ad_f^{n-2} g\right]$ is involutive
Definition of involutive:
A linearly independent set of vector fields $(f_1, \dots, f_m)$ is involutive if every Lie bracket of its members stays in the span of the set:
$[f_i, f_j] = \sum_k \alpha_{ijk}(x) f_k$
i.e. when you take Lie brackets you don't generate new vectors.
Note: this is VERY hard to do.
Reference: George Myers at NASA Ames, in the context of helicopter control.
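Condition (a) can be checked numerically once the fields are in hand. A sketch for a hypothetical 2-state system (fields chosen for illustration), where condition (b) is automatic because a single vector field {g} is always involutive:

```python
import numpy as np

def jacobian(F, x, eps=1e-6):
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        dx = np.zeros(len(x)); dx[j] = eps
        J[:, j] = (F(x + dx) - F(x - dx)) / (2 * eps)
    return J

def ad(f, g, x):                       # ad_f g = [f, g]
    return jacobian(g, x) @ f(x) - jacobian(f, x) @ g(x)

# Hypothetical 2-state system: f = (x2, -x1), g = (0, 1)
f = lambda x: np.array([x[1], -x[0]])
g = lambda x: np.array([0.0, 1.0])

x0 = np.array([0.5, 0.5])
M = np.column_stack([g(x0), ad(f, g, x0)])
# (a) rank n = 2?  (b) {g} alone is trivially involutive.
print(np.linalg.matrix_rank(M))        # 2 => an output z1(x) of relative degree 2 exists
```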
Example: (same as above)
Question: does there exist a scalar $z_1(x_1, x_2)$ such that the relative degree is 2?
This will be true if:
a) $(g, [f,g])$ has rank 2
b) g is involutive (any Lie bracket of g with itself is zero $\Rightarrow$ OK)
Setting stuff up to look at (a):
Note: this looks dangerous.
Question: how do we find $z_1$?
We get a list of conditions:
$\Rightarrow$ (simplest)
$\Rightarrow$ (always true for constant, independent vectors). In our case, OK if $x_1 \neq \sqrt[3]{3}$.
So let's trod through and check:
(good that u doesn't appear, or r = 1!)
(u appears! (good))
Define:
Hope the problem is far away from the singular point.
Let
$\Rightarrow$ $z_1 \to z_{1d}$
Question: How to pick $z_{1d}$?
We want:
Feedback Linearization for MIMO Nonlinear Systems
Consider a square system (where the number of inputs is equal to the number of outputs, m).
Let $r_k$, the relative degree, be defined as the relative degree of each output, i.e. for some i, the input $u_i$ appears in $y_k^{(r_k)}$.
Let J(x) be an $m \times m$ matrix such that:
J(x) is called the invertibility or decoupling matrix.
We will assume that J(x) is non-singular.
Let:
where $y_r$ is an $m \times 1$ vector.
Then we have:
where v is the synthetic input (v is $m \times 1$).
We obtain a decoupled set of equations:
so design v any way you want using linear techniques.
Problems:
$\Rightarrow$ Need confidence in the model
$\Rightarrow$ Internal dynamics
Internal Dynamics
The linear subspace has dimension $r_T$ (the total relative degree) for the whole system
$\Rightarrow$ we have internal dynamics of order $n - r_T$.
The superscript notation denotes which output we are considering. We have:
where $z_T$ is $r_T \times 1$ and $\eta_T$ is $(n - r_T) \times 1$.
The representation for x may not be unique!
Can we get an $\eta$ that isn't directly a function of the controls (like for the SISO case)? NO!
and
Internal dynamics $\Rightarrow$ what is u?
$\Rightarrow$ design v, then solve for u using the decoupling matrix.
The zero dynamics are defined by $z \equiv 0$.
The output is identically equal to zero if we set the control equal to zero (at all times). Thus the zero dynamics are given by:
Dynamic Extension - Example
References: Slotine and Li; Hauser, PhD Dissertation, UCB, 1989, from which this example is taken.
Basically, $\theta$ is the yaw angle of the vehicle, and $x_1$ and $x_2$ are the Cartesian locations of the wheels. $u_1$ is the velocity of the front wheels, in the direction that they are pointing, and $u_2$ is the steering velocity.
We define our state vector to be $(x_1, x_2, \theta)$.
Our dynamics are:
$\dot{x}_1 = (\cos\theta)u_1$, $\quad \dot{x}_2 = (\sin\theta)u_1$, $\quad \dot{\theta} = u_2$
We determined in a previous lecture that the system is controllable (f = 0).
$y_1 = x_1$ and $y_2 = x_2$ are defined as outputs.
$\begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \end{bmatrix} = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \end{bmatrix}$
$J(x) = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \end{bmatrix}$
is clearly singular (has rank 1).
Let $\dot{u}_1 = u_3$, where $u_3$ is the acceleration of the axle
$\Rightarrow$ the state has been extended.
$\dot{x}_1 = (\cos\theta)x_3$, $\quad \dot{x}_2 = (\sin\theta)x_3$, $\quad \dot{x}_3 = \dot{u}_1 = u_3$, $\quad \dot{\theta} = u_2$
where $x_3 = u_1$ in the extended state space.
Take $\ddot{y}_1$ and $\ddot{y}_2$, and the new J(x) matrix is non-singular for $u_1 \neq 0$ (as long as the axle is moving).
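A sketch of this check (assuming, as above, outputs $y_1 = x_1$, $y_2 = x_2$, and taking the extended inputs in the order $(u_3, u_2)$; differentiating the outputs twice gives the matrix below):

```python
import numpy as np

def J(theta, x3):
    """Decoupling matrix of the extended system for inputs (u3, u2):
       y1'' = (cos th) u3 - (sin th) x3 u2
       y2'' = (sin th) u3 + (cos th) x3 u2"""
    return np.array([[np.cos(theta), -np.sin(theta) * x3],
                     [np.sin(theta),  np.cos(theta) * x3]])

print(np.linalg.det(J(0.3, 1.0)))   # det = x3: invertible while the axle moves
print(np.linalg.det(J(0.3, 0.0)))   # singular when x3 (= u1) is zero
```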
How does one go about designing a controller for this example?
Given $y_{1d}(t)$, $y_{2d}(t)$:
Let:
To obtain the control, u: solve the decoupling relation for $u_3$ and $u_2$, then integrate $\dot{u}_1 = u_3$
$\Rightarrow$ we have a dynamic feedback controller (the controller has dynamics, not just gains, in it).
Pictures for SISO cases:
o Picture of I/O system (r = 1)
o In general terms
n-th order system, r = relative degree < n
a) Differentiate:
and if r > 1:
and = 0 if r < 2
where
b) Choose u in terms of v
Let:
For now, to simplify the pictures, let:
c) Choose control law
d) Check internal dynamics
Feedback Linearization and State Transformation
We have an n-th order system where y is the natural output, with relative degree r.
Previously, we skimmed over the state-transformation interpretation of feedback linearization.
Why do we transform the states?
The differential equations governing the new states have some convenient properties.
Example:
Consider a linear system $\dot{x} = Ax$.
The points in 2-space are usually expressed in the natural basis.
So when we write x, we mean a point in 2-space that is reached from the origin by doing:
where $(x_1, x_2)$ are the coordinates of a point in the natural basis.
To diagonalize the system, we do a change of coordinates, so we express points like:
$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} x = \begin{bmatrix} t_1 & t_2 \end{bmatrix} x'$
where $t_1$ and $t_2$ are the eigenvectors of A and $x'$ represents the coordinates in the new basis.
So we get a nice equation in the new coordinates:
$\dot{x}' = \Lambda x'$
where $\Lambda$ is diagonal.
For I/O linearization, we do the same kind of thing:
We seek some nonlinear transformation so that the new state, $x'$, is governed by differential equations such that the first r-1 states are a string of integrators (derivatives of each other), the differential equation for the r-th state has the form
$\dot{x}'_r$ = nonlinear function(x) + u
and the n-r internal dynamics states will be decoupled from u (this is a matter of convenience).
So we have $x' = T(x)$, where T is nonlinear.
Let's enforce the above properties:
We know how to choose $T_1(x)$ through $T_r(x)$. They are just $y, \dot{y}, \dots, y^{(r-1)}$, etc.
How do we choose $T_{r+1}(x)$ through $T_n(x)$?
These transformations need to be chosen so that:
1. The transformation T(x) is a diffeomorphism:
o One-to-one transformation
o T(x) is continuous
o $T^{-1}(x)$ is continuous
o Also $\partial T / \partial x$ and $\partial T^{-1} / \partial x$ must exist and be continuous
2. The $\eta$ states should have no direct dependence on the input u.
Example (from HW)
We know that $T_1(x_1, x_2) = y = x_2$.
What about $T_2(x_1, x_2)$?
Choose $T_2(x_1, x_2)$ to satisfy the above conditions. Let's start with condition 2: u does not appear in the equation for $\dot{T}_2$.
We are only concerned about the second term. To eliminate the dependence on u, we must have:
($T_2$ should not depend on $x_2$).
An obvious answer is $T_2(x_1, x_2) = x_1$. Then, we would have:
Is this a diffeomorphism? Obviously yes.
Note that works also.
What about ? (NO: it violates the one-to-one-transformation part of the conditions for a proper diffeomorphism.)
9 Sliding Mode Control of Nonlinear Systems
Historically:
- Other terms have been used, most predominantly Variable Structure Systems
(VSS)
- Began in the 1960s in the USSR (Filippov, Utkin)
- Used in Japan in the 1970s for power systems
- Adopted in the US in the late 1970s and the 1980s, principally for robotics.
Brought over by Utkin, a professor at Ohio State.
Key points
Applicable to nonlinear dynamic systems.
Directly considers robustness issues as part of the design process.
Reference: Slotine and Li chapter 7
Attributes:
- Applicable to nonlinear dynamic systems
- Directly considers robustness issues as part of the design process.
Second-Order Example
Consider the system governed by the following equation:
$\ddot{x} = f(x, \dot{x}) + d(t) + u$
where f is a nonlinear function, d represents a time-varying disturbance, and u is the control input.
We can separate f and d into known parts (denoted by an m subscript, for model) and unknown parts (denoted by $\Delta$).
We write the state of the system as $(x, \dot{x})$.
Our goal is to design a controller such that x tracks $x_d$ perfectly (asymptotically), and the error must go to zero exponentially. (Note: this is an ambitious goal.)
Define the error as $e = x - x_d$.
Then we can define a sliding surface, S, as:
$S = \dot{e} + \lambda e$
If S = 0, then the error goes to zero exponentially with a time constant $1/\lambda$. This is consistent with the controller goals. (Also, if we can make S = 0 in finite time $t_1$, then we can write out the equation for the error as a function of time: $e(t) = e(t_1)e^{-\lambda(t - t_1)}$ for $t \geq t_1$.)
Computing the first derivative of S:
$\dot{S} = \ddot{e} + \lambda\dot{e} = f + d + u - \ddot{x}_d + \lambda\dot{e}$
The u term appears in the equation for the first derivative of S. We say that S has relative degree 1. (This will always be a desirable property for S.)
We select u to cancel out some of the known terms:
Here the last term will deal with the uncertainties.
This results in:
Question: How can I make $S \to 0$ in finite time?
I need bounds on the uncertainties, $\Delta f$ and $\Delta d$. For example:
The next thing to do is to select a Lyapunov function candidate. For example, we may select $V = \frac{1}{2}S^2$, which leads to $\dot{V} = S\dot{S}$. That is, to make $\dot{V}$ negative definite, we need to pick u so that $S\dot{S} < 0$.
As before, let
Then:
We now consider the worst-case uncertainty:
and we need this quantity to be < 0,
where $\eta$ is a tuning parameter.
This choice of control input guarantees that:
$S\dot{S} \leq -\eta|S|$
The full expression for the control input, u, is:
Suppose we encounter the worst-case scenario. Then:
Let S(0) = 0. Then,
In general,
So,
This last equation is referred to as the sliding condition.
It indicates that S(t) will reach zero in a finite time $t_1 \leq |S(0)|/\eta$.
After $t_1$, we enter the sliding mode, and the system chatters with zero amplitude and infinite frequency on the average.
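The whole design can be sketched in a few lines of simulation. Everything numerical below (the plant, the model error, the disturbance, the gains) is a hypothetical example, not from the notes; the structure, cancel the model terms and add a switching term sized by the uncertainty bounds, is the design above:

```python
import numpy as np

lam, eta = 2.0, 0.5
dt = 1e-3
x, xdot, t = 1.0, 0.0, 0.0
for _ in range(15000):                      # Euler-simulate 15 s
    s = xdot + lam * x                      # sliding variable (target x_d = 0)
    f_m = -x                                # model part of the dynamics
    K = 0.5 * abs(x) + 0.5 + eta            # bound on |Delta f| + |d|, plus margin eta
    u = -f_m - lam * xdot - K * np.sign(s)  # cancel knowns + switching term
    f_true = -1.2 * x                       # true dynamics (so |Delta f| = 0.2|x|)
    d = 0.3 * np.sin(5 * t)                 # disturbance, |d| <= 0.5
    xdot += dt * (f_true + d + u)
    x += dt * xdot
    t += dt
print(abs(x) < 0.05)                        # True: x regulated despite Delta f and d
```

Here $\dot{s} = \Delta f + d - K\,\mathrm{sgn}(s)$, so $s\dot{s} \leq -\eta|s|$ holds and s reaches zero in finite time; the residual error is just the discrete-time chattering layer.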
When in the sliding mode, S = 0, so the error obeys $\dot{e} = -\lambda e$.
Notes:
(a) We get perfect disturbance rejection and robustness to model error
(b) This is a Lyapunov-based controller design. The equilibrium point is guaranteed to be stable.
Basic idea of sliding mode:
First-order systems are much easier to control than higher-order ones.
Simple example (single integrator, hard to screw up!): $\dot{x} = u$
Goal is: $y \to 0$
For example, this is applicable to velocity control via force.
Control law: $u = -kx$
We can use any gain we please for k without instability.
Specifically, we can use $u = -k\,\mathrm{sgn}(x)$, which looks like infinite gain close to x = 0.
Root locus: poles of the following system as a function of the gain k:
In our case:
Note that at infinite gain (the sgn(x) case), the closed loop system is stable.
Double integrator example
This corresponds to, for example, position control via force.
Goal is: $y \to 0$
Control law: we must be more careful while choosing u.
For example, say we pick $u = -ky$. The root locus will look like:
It has extremely oscillatory poles.
$\Rightarrow$ The simplistic control law does not work.
Triple integrator example
Goal is: $y \to 0$
Again, let's try $u = -ky$.
Relation to sliding mode control
Moral of the story:
First-order systems are trivial to control: if the output is too high, push down; if the output is too low, push up.
Idea of sliding mode control:
Reduce every system to a first-order system that we are trying to force to zero. Then we can use the intuitive control law described above, and furthermore, we can use ANY gain we want, including infinity (perfect tracking, perfect disturbance rejection).
How do we do the reduction?
We define a new output, s, which looks like a first-order system.
What properties should s have?
(a) The relative degree of the output s should be 1, that is, u should appear explicitly in the expression for $\dot{s}$.
(b) $s \to 0$ is the control goal; that is, s needs to be designed so that good things happen to the physical output when $s \to 0$.
Sliding mode control for the single integrator
Goal is: $y \to 0$
Trivial: s = x. Check the two above-mentioned conditions:
(i) Relative degree: OK
(ii) Is $s \to 0$ a good thing? Yes!
Sliding mode control for the double integrator
Goal is: $y \to 0$
Finding s is a little harder. Let's use the conditions!
(i) $s(x_1, x_2)$ $\Rightarrow$ s must have $x_2$ in it
(ii) $s \to 0$ must be a good thing
Is $s = x_2$ a good thing? NO!
Let's try $s = x_2 + \lambda x_1$, where $\lambda > 0$. This is good!
If s = 0, then $x_1 \to 0$ exponentially ($\lambda > 0$).
So we use $s = x_2 + \lambda x_1$, with $\lambda > 0$.
Look at the first order control problem:
$\dot{s} = \dot{x}_2 + \lambda\dot{x}_1 = u + \lambda x_2$
Choose:
$u = -\lambda x_2 + v$
The $-\lambda x_2$ term cancels out that term in the expression for $\dot{s}$, and v is similar to a synthetic input.
We are now faced once again with a familiar first-order control problem.
We let $v = -k\,\mathrm{sgn}(s)$ $\Rightarrow$
Picture (for the case of $\lambda = 1$):
s = 0 is the line $x_2 = -x_1$
General trick: if the system is of order n, in general the sliding surface has degree n-1.
In our case, so we go to the line at one level set per step.
Sliding mode control for the triple integrator
Goal is: $y \to 0$
Once again, we develop an expression for s using the two conditions mentioned above.
(i) $s(x_1, x_2, x_3)$ $\Rightarrow$ s must have $x_3$ in it
(ii) $s \to 0$ must be a good thing
$s(x_1, x_2, x_3) = 0$
We need to involve $x_1$ and $x_2$ or we won't go anywhere.
A good thing to do is to design s so that s = 0 $\Rightarrow$ $x_1 \to 0$ exponentially.
For example, let's consider:
$S = \ddot{x}_1 + 2\lambda\dot{x}_1 + \lambda^2 x_1$
Shorthand notation:
$S \equiv (s + \lambda)^2 x_1$
where s is a Laplace differentiating operator.
This is identical to the general form $(s + \lambda)^n x_1$ with n = 2.
Dynamics of the first-order system in s:
$\dot{S}$ is the first-order system to control.
For example, we use:
In this expression, the CE(x) term cancels the x terms that represent system dynamics.
For example, in our case, one can pick:
These results can also be obtained using Lyapunov functions and arguments:
We want to include a sgn(x) term to make $S\dot{S} < 0$ $\Rightarrow$
For More General Systems
Appendix A: Mathematical Background