Bernard Brogliato
Rogelio Lozano
Bernhard Maschke
Olav Egeland
Dissipative Systems
Analysis and Control
Theory and Applications
Third Edition
Communications and Control Engineering
Series Editors
Alberto Isidori, Roma, Italy
Jan H. van Schuppen, Amsterdam, The Netherlands
Eduardo D. Sontag, Boston, USA
Miroslav Krstic, La Jolla, USA
Communications and Control Engineering is a high-level academic monograph
series publishing research in control and systems theory, control engineering and
communications. It has worldwide distribution to engineers, researchers, educators
(several of the titles in this series find use as advanced textbooks although that is not
their primary purpose), and libraries.
The series reflects the major technological and mathematical advances that have
a great impact in the fields of communication and control. The range of areas to
which control and systems theory is applied is broadening rapidly with particular
growth being noticeable in the fields of finance and biologically-inspired control.
Books in this series generally pull together many related research threads in more
mature areas of the subject than the highly-specialised volumes of Lecture Notes in
Control and Information Sciences. This series’s mathematical and control-theoretic
emphasis is complemented by Advances in Industrial Control which provides a
much more applied, engineering-oriented outlook.
Indexed by SCOPUS and Engineering Index.
Publishing Ethics: Researchers should conduct their research from research
proposal to publication in line with best practices and codes of conduct of relevant
professional bodies and/or national and international regulatory bodies. For more
details on individual ethics matters please see:
https://www.springer.com/gp/authors-editors/journal-author/journal-author-helpdesk/publishing-ethics/14214
Bernard Brogliato
INRIA Grenoble Rhône-Alpes
Université Grenoble Alpes
Grenoble, France

Rogelio Lozano
Centre de Recherche de Royalieu
Heuristique et Diagnostic des Systèmes, UMR-CNRS 6599
Université de Technologie de Compiègne
Compiègne, France

Bernhard Maschke
LAGEP
Université Lyon 1
Villeurbanne, France

Olav Egeland
Department of Production and Quality Engineering
Norwegian University of Science and Technology
Trondheim, Norway
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
Thank you for opening the third edition of this monograph, dedicated to dissipative
linear or nonlinear, autonomous or time-varying, smooth or nonsmooth,
single-valued or set-valued, and finite-dimensional dynamical systems with inputs
and outputs (very little will be said on infinite-dimensional systems, while
stochastic systems are not treated). Linear time-invariant systems occupy a large
part of the monograph, with the notion of positive real transfer function, and its
many variants. Positive real systems are indeed quite popular in the Automatic
Control and the Circuits scientific communities. Their definition and analysis
originate from Networks and Circuits, and were first introduced in Wilhelm Cauer’s
and Otto Brune’s Ph.D. theses, in 1926 and 1931 [168, 198]. Later, fundamental
contributions in the broader class of dissipative systems were made by Lur’e,
Kalman, Yakubovich, Popov, Anderson, Willems, Hill and Moylan, and Byrnes
(this short list does not pretend to be exhaustive, and we apologize for the forgotten
names).
One should expect to see neither all results about dissipative and positive real
systems in this book nor all proofs of the presented results. However, on the one
hand, the extensive bibliography points the reader to further articles; on the other
hand, many results are presented together with their proofs. In particular, a long chapter
is dedicated to the celebrated Kalman–Yakubovich–Popov (KYP) Lemma and to
the absolute stability problem, with many different versions of the KYP Lemma.
A particular emphasis is put on the KYP Lemma for non-minimal systems, and on
the absolute stability problem for Lur’e dynamical systems with a set-valued
feedback nonlinearity (a specific form of differential inclusions).
We would like to thank Oliver Jackson from Springer London, for his support in
the launching of this third edition.
Preface to the Second Edition
Thank you for your interest in the second edition of our book on dissipative
systems. The first version of this book has been improved and augmented in several
directions (mainly by the first author supported by the second and third authors
of the second version). The link between dissipativity and optimal control is now
treated in more detail, and many proofs which were not provided in the first edition
are now given in their entirety, making the book more self-contained. One difficulty
one encounters when facing the literature on dissipative systems is that there are
many different definitions of dissipativity and positive real transfer functions (one
could say a proliferation), many different versions of the same fundamental
mathematical object (like the Kalman–Yakubovich–Popov Lemma), and it is not
always an easy task to discover the links between them all. One objective of this
book is to present those notions in a single volume and to try, if possible, to present
their relationships in a clear way. Novel sections on descriptor (or singular) sys-
tems, discrete-time linear and nonlinear systems, some types of nonsmooth systems,
viscosity solutions of the KYP Lemma set of equations, time-varying systems,
unbounded differential inclusions, evolution variational inequalities, hyperstability,
nonlinear H∞, input-to-state stability, have been added. Conditions under which the
Kalman–Yakubovich–Popov Lemma can be stated without assuming the mini-
mality of the realization are provided in a specific section. Some general results
(like well-posedness results for various types of evolution problems encountered in
the book, definitions, matrix algebra tools, etc.) are presented in the Appendix, and
many others are presented in the main text when they are needed for the first time.
We thank J. Collado and S. Hadd for their remarks, and we remain of course open
to any comments that may help us continue to improve our book.
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Example 1: System with Mass, Spring, and Damper . . . . . . . . . . 2
1.2 Example 2: RLC Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Example 3: A Mass with a PD Controller . . . . . . . . . . . . . . . . . 5
1.4 Example 4: Adaptive Control . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2 Positive Real Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Dynamical System State Space Representation . . . . . . . . . . . . . . 10
2.2 Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Interconnections of Passive Systems . . . . . . . . . . . . . . . . . . . . . 14
2.4 Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Passivity of the PID Controllers . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.6 Stability of a Passive Feedback Interconnection . . . . . . . . . . . . . 25
2.7 Mechanical Analogs for PD Controllers . . . . . . . . . . . . . . . . . . . 25
2.8 Multivariable Linear Systems . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.9 The Scattering Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.10 Feedback Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.11 Bounded Real and Positive Real Transfer Functions . . . . . . . . . . 33
2.12 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.12.1 Mechanical Resonances . . . . . . . . . . . . . . . . . . . . . . . . 47
2.12.2 Systems with Several Resonances . . . . . . . . . . . . . . . . . 50
2.12.3 Two Motors Driving an Elastic Load . . . . . . . . . . . . . . 51
2.13 Strictly Positive Real (SPR) Systems . . . . . . . . . . . . . . . . . . . . . 53
2.13.1 Frequency-Domain Conditions for a Transfer
Function to be SPR . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
2.13.2 Necessary Conditions for H(s) to be PR (SPR) . . . . . . . 58
2.13.3 Tests for SPRness . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.13.4 Interconnection of Positive Real Systems . . . . . . . . . . . . 60
2.13.5 Special Cases of Positive Real Systems . . . . . . . . . . . . . 61
2.14 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
2.14.1 SPR and Adaptive Control . . . . . . . . . . . . . . . . . . . . . . 66
2.14.2 Adaptive Output Feedback . . . . . . . . . . . . . . . . . . . . . . 67
2.14.3 Design of SPR Systems . . . . . . . . . . . . . . . . . . . . . . . . 69
2.15 Negative Imaginary Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
3 Kalman–Yakubovich–Popov Lemma . . . . . . . . . . . . . . . . . . . . . . . . 81
3.1 The Positive Real Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.1.1 PR Transfer Functions . . . . . . . . . . . . . . . . . . . . . . . . . 82
3.1.2 Lossless PR Transfer Functions . . . . . . . . . . . . . . . . . . . 88
3.1.3 Positive Real Balanced Transfer Functions . . . . . . . . . . 89
3.1.4 A Digression to Optimal Control . . . . . . . . . . . . . . . . . . 89
3.1.5 Duality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
3.1.6 Positive Real Lemma for SPR Systems . . . . . . . . . . . . . 92
3.1.7 Descriptor Variable Systems . . . . . . . . . . . . . . . . . . . . . 104
3.2 Weakly SPR Systems and the KYP Lemma . . . . . . . . . . . . . . . . 108
3.3 KYP Lemma for Non-minimal Systems . . . . . . . . . . . . . . . . . . . 113
3.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.3.2 Spectral Factors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3.3.3 Sign Controllability . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.3.4 State Space Decomposition . . . . . . . . . . . . . . . . . . . . . . 123
3.3.5 A Relaxed KYP Lemma for SPR Functions
with Stabilizable Realization . . . . . . . . . . . . . . . . . . . . . 125
3.3.6 Positive Real Pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
3.3.7 Sufficient Conditions for PR and Generalized
PR Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.4 Recapitulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
3.5 SPR Problem with Observers . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.6 The Negative Imaginary Lemma . . . . . . . . . . . . . . . . . . . . . . . . 136
3.7 The Feedback KYP Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
3.8 Structural Properties of Passive LTI Systems . . . . . . . . . . . . . . . 140
3.9 Time-Varying Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
3.10 Interconnection of PR Systems . . . . . . . . . . . . . . . . . . . . . . . . . 143
3.11 Positive Realness and Optimal Control . . . . . . . . . . . . . . . . . . . . 145
3.11.1 General Considerations . . . . . . . . . . . . . . . . . . . . . . . . . 146
3.11.2 Least Squares Optimal Control . . . . . . . . . . . . . . . . . . . 147
3.11.3 The Popov Function and the KYP Lemma LMI . . . . . . . 153
3.11.4 A Recapitulating Theorem . . . . . . . . . . . . . . . . . . . . . . 157
3.12 The Generalized KYP Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . 158
3.12.1 On the Design of Passive LQG Controllers . . . . . . . . . . 159
3.12.2 SSPR Transfer Functions: Recapitulation . . . . . . . . . . . . 161
3.12.3 A Digression on Semi-definite Programming
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
• R: the set of real numbers; C: the set of complex numbers; N: the set of nonnegative integers; Z: the set of integers
• R^n (C^n): the set of n-dimensional vectors with real (complex) entries. R^n_+ (R^n_−): the set of n-dimensional vectors with nonnegative (nonpositive) real entries
• A^T: transpose of the matrix A ∈ R^{n×m} or ∈ C^{n×m}
• Ā: conjugate of the matrix A ∈ C^{n×m}
• A^H: conjugate transpose of the matrix A ∈ C^{n×m} (if A ∈ C^{n×m}, then A = B + jC with B, C ∈ R^{n×m}, and A^H = B^T − jC^T)
• A ≻ 0 (⪰ 0): positive-definite (positive-semidefinite) matrix, i.e., x^T A x > 0 (≥ 0) for all x ≠ 0. A ≺ 0 (⪯ 0): negative-definite (negative-semidefinite) matrix, i.e., x^T A x < 0 (≤ 0) for all x ≠ 0. Such matrices are not necessarily symmetric
• The matrix A is Hermitian if A = A^H
• λ(A): an eigenvalue of A ∈ R^{n×n}
• σ(A): the set of eigenvalues of A ∈ R^{n×n} (i.e., the spectrum of A)
• λ_max(A), λ_min(A): the largest and smallest eigenvalues of the matrix A, respectively
• σ_max(A) (σ_min(A)): the largest (smallest) singular value of A
• A matrix A ∈ R^{n×n} is said to be a Hurwitz matrix if all its eigenvalues have negative real parts (Re λ_i(A) < 0 for i = 1, …, n). One also says that A is asymptotically stable
• ρ(A): the spectral radius of A, i.e., max{|λ| : λ ∈ σ(A)}
• tr(A): the trace of the matrix A
• A^†: the Moore–Penrose inverse of the matrix A
• Let A ∈ R^{n×m} be a matrix. A_{ij} is the (i, j)th element of A. For J ⊆ {1, …, n}, K ⊆ {1, …, m}, A_{JK} is the submatrix {A_{jk}}_{j∈J,k∈K}. If J = {1, …, n} (resp. K = {1, …, m}), we write A_{•K} (resp. A_{J•})
• ODE: Ordinary Differential Equation; PDE: Partial Differential Equation
• BV, LBV, RCLBV: Bounded Variation, Local BV, Right Continuous LBV
• AC: Absolutely Continuous
• I_n: the n × n identity matrix; 0_n: the n × n zero matrix
• ∂f/∂x(x) ∈ R^{m×n}: the Jacobian of the function f : R^n → R^m at x
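Several of these conventions are easy to exercise numerically. The sketch below (with illustrative matrices, using NumPy) checks positive definiteness of a possibly nonsymmetric matrix through its symmetric part, and the Hurwitz property through the real parts of the eigenvalues:

```python
import numpy as np

# Positive definiteness in the sense x^T A x > 0 for all x != 0: since
# x^T A x = x^T ((A + A^T)/2) x, only the symmetric part matters, so we
# test the eigenvalues of (A + A^T)/2.
def is_positive_definite(A):
    S = (A + A.T) / 2.0
    return bool(np.all(np.linalg.eigvalsh(S) > 0))

# Hurwitz test: all eigenvalues of A have negative real parts.
def is_hurwitz(A):
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A = np.array([[2.0, 1.0],
              [-1.0, 3.0]])   # nonsymmetric, yet x^T A x = 2 x1^2 + 3 x2^2 > 0
B = np.array([[-1.0, 5.0],
              [0.0, -2.0]])   # upper triangular, eigenvalues -1 and -2
```

Note that `A` above is positive definite in the sense of the list (its symmetric part is), while its own eigenvalues have positive real parts, so it is not Hurwitz.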
Chapter 1
Introduction
Dissipativity theory gives a framework for the design and analysis of control systems,
using an input–output description based on energy-related considerations. Dissipa-
tivity is a notion which can be used in many areas of science, and it allows the control
engineer to relate a set of efficient mathematical tools to well-known physical phe-
nomena. The insight gained in this way is very useful for a wide range of control
problems. In particular, the input–output description allows for a modular approach
to control systems design and analysis.
The main idea behind this is that many important physical systems have cer-
tain input–output properties related to the conservation, dissipation, and transport
of energy. Before introducing precise mathematical definitions, we will somewhat
loosely refer to such input–output properties as dissipative properties, and systems
with dissipative properties will be termed dissipative systems. When modeling dissi-
pative systems, it may be useful to develop the state space or input–output models, so
that they reflect the dissipativity of the system, and thereby ensure that the dissipativ-
ity of the model is invariant with respect to model parameters, and to the mathematical
representation used in the model. The aim of this book is to give a comprehensive
presentation of how the energy-based notion of dissipativity can be used to establish
the input–output properties of models for dissipative systems. Also, it will be shown
how these results can be used in controller design. Moreover, it will appear clearly
how these results can be generalized to a dissipativity theory where conservation of
other physical properties, and even abstract quantities, can be handled.
Models for use in controller design and analysis are usually derived from the basic
laws of physics (electrical systems, dynamics, thermodynamics). Then, a controller
can be designed based on this model. An important problem in controller design is the
issue of robustness which relates to how the closed-loop system will perform when
the physical system differs either in structure or in parameters from the design model.
For a system where the basic laws of physics imply dissipative properties, it may
make sense to define the model so that it possesses the same dissipative properties
regardless of the numerical values of the physical parameters. Then if a controller
© Springer Nature Switzerland AG 2020
B. Brogliato et al., Dissipative Systems Analysis and Control, Communications
and Control Engineering, https://doi.org/10.1007/978-3-030-19420-8_1
is designed so that stability relies on the dissipative properties only, the closed-loop
system will be stable whatever the values of the physical parameters. Even a change
of the system order will be tolerated provided it does not destroy the dissipativity.
Parallel interconnections and feedback interconnections of dissipative systems
inherit the dissipative properties of the connected subsystems, and this simplifies
analysis by allowing for manipulation of block diagrams, and provides guidelines on
how to design control systems. A further indication of the usefulness of dissipativity
theory is the fact that the PID controller is a dissipative system, and a fundamental
result that will be presented is the fact that the stability of a dissipative system
with a PID controller can be established using dissipativity arguments. Note that
such arguments rely on the structural properties of the physical system, and are not
sensitive to the numerical values used in the design model. The technique of controller
design using dissipativity theory can therefore be seen as a powerful generalization
of PID controller design.
There is another aspect of dissipativity which is very useful in practical appli-
cations. It turns out that dissipativity considerations are helpful as a guide for the
choice of a suitable variable for output feedback. This is helpful for selecting where
to place sensors for feedback control.
Throughout the book, we will treat dissipativity for state space and input–output
models, but first we will investigate simple examples which illustrate some of the
main ideas to be developed more deeply later.
1.1 Example 1: System with Mass, Spring, and Damper

Consider a mass, spring, and damper system driven by an external force. The equation of motion is

m ẍ(t) + D ẋ(t) + K x(t) = F(t),

where m is the mass, D is the damper constant, K is the spring stiffness, x is the
position of the mass, and F is the force acting on the mass. The energy of the system
is
V(x, ẋ) = ½ m ẋ² + ½ K x².
The time derivative of the energy when the system moves is

d/dt V(x(t), ẋ(t)) = m ẍ(t) ẋ(t) + K x(t) ẋ(t).

Inserting the equation of motion, we get

d/dt V(x(t), ẋ(t)) = F(t) ẋ(t) − D ẋ²(t).
Integration of this equation from t = 0 to t = T gives the equality

V[x(T), ẋ(T)] = V[x(0), ẋ(0)] + ∫₀ᵀ F(t) ẋ(t) dt − ∫₀ᵀ D ẋ²(t) dt.
This means that the energy at time t = T is the initial energy, plus the energy supplied
to the system by the control force F minus the energy dissipated by the damper. Note
that if the input force F is zero, and if there is no damping, then the energy V(·) of
the system is constant. Here D ≥ 0 and V[x(0), ẋ(0)] ≥ 0, and it follows that the
integral of the product of the force F and the velocity v = ẋ satisfies
∫₀ᵀ F(t) v(t) dt ≥ −V[x(0), ẋ(0)]. (1.1)
The physical interpretation of this inequality is seen from the equivalent inequality
−∫₀ᵀ F(t) v(t) dt ≤ V[x(0), ẋ(0)], (1.2)
which shows that the energy −∫₀ᵀ F(t) ẋ(t) dt that can be extracted from the
system is less than or equal to the initial energy stored in the system. We will show
later that (1.1) implies that the system with input F and output v is passive. The
Laplace transform of the equation of motion is
(m s² + D s + K) x(s) = F(s),

so that, with v(s) = s x(s), the transfer function from the force F to the velocity v is

(v/F)(s) = s / (m s² + D s + K). (1.3)
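The energy balance and the passivity inequality (1.1) can be checked numerically. The following sketch (parameter values and the forcing signal are illustrative choices, not taken from the text) integrates the mass–spring–damper with a semi-implicit Euler scheme and accumulates the supplied and dissipated energies:

```python
import numpy as np

# Semi-implicit Euler simulation of m*xddot + D*xdot + K*x = F.
# Checks the energy balance V(T) = V(0) + int F*v dt - int D*v^2 dt
# and the passivity inequality (1.1): int F*v dt >= -V(0).
m, D, K = 1.0, 0.5, 2.0                # illustrative parameters
dt, T = 1e-4, 10.0
x, v = 1.0, 0.0                        # initial position and velocity

def V(x, v):                           # stored energy
    return 0.5 * m * v**2 + 0.5 * K * x**2

V0, supplied, dissipated = V(x, v), 0.0, 0.0
for k in range(int(T / dt)):
    F = np.sin(0.3 * k * dt)           # some input force
    v += dt * (F - D * v - K * x) / m  # velocity update
    x += dt * v                        # position update (semi-implicit)
    supplied += F * v * dt             # int F(t) v(t) dt
    dissipated += D * v**2 * dt        # int D v(t)^2 dt

balance_error = abs(V(x, v) - (V0 + supplied - dissipated))
passivity_margin = supplied + V0       # (1.1) says this is nonnegative
```

Up to discretization error, the balance closes and the margin stays nonnegative for any input signal one tries.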
1.2 Example 2: RLC Circuit

Consider an RLC circuit described by

L di/dt(t) + R i(t) + C x(t) = u(t),
where x(t) = ∫₀ᵗ i(τ) dτ. The energy stored in the system is
V(x, i) = ½ L i² + ½ C x².
The time derivative of the energy when the system evolves is given by

d/dt V(x(t), i(t)) = L di/dt(t) i(t) + C x(t) i(t).

Inserting the differential equation of the circuit, we get

d/dt V(x(t), i(t)) = u(t) i(t) − R i²(t).
Integration of this equation from t = 0 to t = T gives the equality
V[x(T), i(T)] = V[x(0), i(0)] + ∫₀ᵀ u(t) i(t) dt − ∫₀ᵀ R i²(t) dt. (1.4)
Similarly to the previous example, this means that the energy at time t = T is the
initial energy plus the energy supplied to the system by the voltage u minus the energy
dissipated by the resistor. Later we shall call (1.4) a dissipation equality. Note that if
the input voltage u is zero, and if there is no resistance, then the energy V(·) of the
system is constant: the system is said to be lossless. Here R ≥ 0 and V[x(0), i(0)] ≥ 0,
and it follows that the integral of the product of the voltage u and the current i satisfies
∫₀ᵗ u(s) i(s) ds ≥ −V[x(0), i(0)]. (1.5)
The physical interpretation of this inequality is seen from the equivalent inequality
−∫₀ᵗ u(s) i(s) ds ≤ V[x(0), i(0)], (1.6)
which shows that the energy −∫₀ᵗ u(s) i(s) ds that can be extracted from the system
is less than or equal to the initial energy stored in the system. We will show later
that (1.5) implies that the system with input u and output i is passive. The Laplace
transform of the differential equation of the circuit is
(L s² + R s + C) x(s) = u(s),

so that, with i(s) = s x(s), the transfer function from the voltage u to the current i is (i/u)(s) = s / (L s² + R s + C), which satisfies

|∠(i/u)(jω)| ≤ 90° ⇒ Re[(i/u)(jω)] ≥ 0, (1.7)
for all ω ∈ [−∞, +∞]. We see that in both examples we arrive at transfer functions
that are stable and that have positive real parts on the jω-axis. This motivates further
investigation of whether there is some fundamental connection between the conditions
on the energy flow in (1.1) and (1.5) and the conditions on the transfer functions
in (1.3) and (1.7). This will be made clear in Chap. 2.
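The phase and real-part conditions in (1.7) can be verified on a frequency grid; the sketch below uses illustrative component values and follows the convention of the text, where C multiplies x in the circuit equation:

```python
import numpy as np

# Check numerically that H(jw) = jw / (L (jw)^2 + R jw + C) has a
# nonnegative real part on the imaginary axis, as stated in (1.7).
L_, R_, C_ = 1.0, 0.5, 2.0                 # illustrative values
w = np.logspace(-3, 3, 2000)               # frequency grid
s = 1j * w
H = s / (L_ * s**2 + R_ * s + C_)          # transfer function from u to i

min_real_part = float(np.real(H).min())
max_phase_deg = float(np.max(np.abs(np.degrees(np.angle(H)))))
```

On the grid the real part stays nonnegative and the phase stays within ±90 degrees, consistent with (1.7).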
1.3 Example 3: A Mass with a PD Controller

Consider the mass m with the external control force u. The equation of motion is
m ẍ(t) = u(t).
A PD (proportional–derivative) controller for this system is

u = −K_P x − K_D ẋ.
A purely mechanical system with the same dynamics is called a mechanical analog.
The mechanical analog for this system is a mass m with a spring of stiffness K_P
and a damper with damping constant K_D. We see that the proportional action
corresponds to the spring force, and that the derivative action corresponds to the
damper force. Similarly, as in Example 1, we can define an energy function
V(x, ẋ) = ½ m ẋ² + ½ K_P x²,
which is the total energy of the mechanical analog. In the same way as in Example
1, the derivative action will dissipate the virtual energy that is initially stored in
the system, and intuitively, we may accept that the system will converge to the
equilibrium x = 0, ẋ = 0. This can also be seen from the Laplace transform
(m s² + K_D s + K_P) x(s) = 0,
which implies that the poles of the system have negative real parts. The point we are
trying to make is that, for this system, the stability of the closed-loop system with a
PD controller can be established using energy arguments. Moreover, it is seen that
stability is ensured for any positive gains K_P and K_D independently of the physical
parameter m. There are many important results derived from energy considerations
in connection with PID control, and this will be investigated in Chap. 2.
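The claim that stability holds for any positive gains, independently of the mass, can be spot-checked by computing the roots of m s² + K_D s + K_P (the gain and mass values below are illustrative):

```python
import numpy as np

# Closed-loop characteristic polynomial m s^2 + K_D s + K_P = 0: for a
# second-order polynomial with all-positive coefficients the roots have
# negative real parts, so the closed loop is stable for any positive gains.
def closed_loop_poles(m, KD, KP):
    return np.roots([m, KD, KP])

for m in (0.1, 1.0, 10.0):
    for KD, KP in ((0.5, 1.0), (2.0, 0.1), (10.0, 50.0)):
        poles = closed_loop_poles(m, KD, KP)
        assert np.all(poles.real < 0)   # stable for all tested gains
```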
1.4 Example 4: Adaptive Control

Consider the scalar plant ẋ = a x + u, where the constant parameter a is unknown,
and let x_d(·) be a desired trajectory to be tracked. With â an estimate of a, the
parameter error ã = â − a, and the control law

u = −K e − â x + ẋ_d,  e = x − x_d,

the tracking error obeys

de/dt(t) + K e(t) = ψ(t),  ψ(t) = −ã(t) x(t).
dt
Let us define a function V_e(·) which plays the role of an abstract energy function
related to the tracking error e:

V_e(e) = ½ e².
The time derivative of V_e(·) along the solutions of the system is given by

V̇_e(e(t)) = e(t) ė(t) = −K e²(t) + e(t) ψ(t).

Note that this time derivative has a similar structure to that seen in Examples 1 and
2. In particular, the −K e² term is a dissipation term, and if we think of ψ as the
input and e as the output, then the eψ term is the rate of (abstract) energy supplied
from the input (we shall call it later the supply rate). We note that this implies that
the following inequality holds for the dynamics of the tracking error:
∫₀ᵀ e(t) ψ(t) dt ≥ −V_e[e(0)].
To proceed, we define one more energy-like function. Suppose that we are able to
select an adaptation law so that there exists an energy-like function V_a(ã) ≥ 0 with
a time derivative

V̇_a(ã(t)) = −e(t) ψ(t). (1.8)
We note that this implies that the following inequality holds for the adaptation law:
∫₀ᵀ [−ψ(t)] e(t) dt ≥ −V_a[ã(0)].
Then the sum of the energy functions V(e, ã) = V_e(e) + V_a(ã) has a time derivative
along the solutions of the system given by

V̇(e(t), ã(t)) = −K e²(t).

This means that the energy function V(e, ã) is decreasing as long as e(·) is nonzero,
and by invoking additional arguments from Barbalat’s Lemma (see Lemmas A.40
and A.41), we can show that this implies that e(t) tends to zero as t → +∞. The
required adaptation law for (1.8) to hold can be selected as the simple gradient update
dâ/dt(t) = x(t) e(t),

and the associated energy-like function is V_a(ã) = ½ ã². Note that the convergence
of the adaptive tracking controller was established using energy-like arguments, and
that other adaptation laws can be used as long as they satisfy the energy-related
requirement (1.8).
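A quick simulation illustrates the convergence argument. The sketch assumes the scalar plant ẋ = a x + u with unknown constant a, consistent with the error dynamics above; the numerical values and the sinusoidal reference are illustrative:

```python
import numpy as np

# Simulation sketch of the adaptive tracking loop for the assumed scalar
# plant xdot = a*x + u with unknown parameter a, control
# u = -K*e - a_hat*x + xd_dot, and gradient law da_hat/dt = x*e.
a_true, K, dt, T = 1.5, 2.0, 1e-3, 30.0   # illustrative values
x, a_hat = 0.5, 0.0                       # plant state, parameter estimate
e_hist = []

for k in range(int(T / dt)):
    t = k * dt
    xd, xd_dot = np.sin(t), np.cos(t)      # desired trajectory and derivative
    e = x - xd                             # tracking error
    u = -K * e - a_hat * x + xd_dot        # control law of the example
    x += dt * (a_true * x + u)             # plant update (explicit Euler)
    a_hat += dt * (x * e)                  # gradient adaptation law
    e_hist.append(e)

final_error = abs(e_hist[-1])              # tracking error at t = T
```

As the energy argument predicts, the tracking error decays even though a is never measured directly.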
Chapter 2
Positive Real Systems
Positive real systems were first discovered and studied in the Networks and Circuits
scientific community, by the German scientist Wilhelm Cauer in his 1926 Ph.D. thesis
[1–4]. However, the term positive real was coined by Otto Brune in his 1931
Ph.D. thesis [5, 6], building upon the results of Ronald M. Foster [7] (himself inspired
by the work in [8], and we stop the genealogy here). O. Brune was in fact the first to
provide a precise definition and characterization of a positive real transfer function
(see [6, Theorems II, III, IV, V]). Positive realness may be seen as a generalization
of the positive definiteness of a matrix to the case of a dynamical system with inputs
and outputs. When the input–output relation (or mapping, or operator) is a constant
symmetric matrix, testing its positive definiteness can be done by simply calculating
the eigenvalues and checking that they are positive. When the input–output operator
is more complex, testing positive realness becomes much more involved. This is the
object of this chapter which is mainly devoted to positive real linear time-invariant
systems. They are known as PR transfer functions. The definition of Positive Real
(PR) systems has been motivated by the study of linear electric circuits composed
of resistors, inductors, and capacitors. The driving point impedance from any point
to any other point of such electric circuits is always PR. Conversely, any PR transfer
function can be realized with an electric circuit using only resistors, inductors, and
capacitors. The same result holds for any analogous
mechanical or hydraulic systems. This idea can be extended to study analogous
electric circuits with nonlinear passive components and even magnetic couplings,
as done by Arimoto [9] to study dissipative nonlinear systems. This leads us to the
second interpretation of PR systems: they are systems which dissipate energy. As
we shall see later in the book, the notion of dissipative systems, which applies to
nonlinear systems, is closely linked to PR transfer functions.
This chapter reviews the main results available for PR linear systems. It starts with
a short introduction to so-called passive systems.

2.1 Dynamical System State Space Representation
In this book, various kinds of evolution or dynamical systems will be analyzed: lin-
ear, time-invariant, nonlinear, finite-dimensional, infinite-dimensional, discrete-time,
nonsmooth, “standard” differential inclusions, “unbounded”, or “maximal mono-
tone” differential inclusions, etc. Whatever the system we shall be dealing with, it is
of utmost importance to clearly define some basic ingredients:
• A state vector x(·) and a state space X ,
• A set of admissible inputs U ,
• A set of outputs Y ,
• An input/output mapping (or operator) H : u → y,
• A state space representation which relates the derivative of x(·) to x(·) and u(·),
and
• An output function which relates the output y(·) to the state x(·) and the input
u(·).
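The ingredients above can be collected into a minimal sketch (the class and its names are illustrative, not a construction from the text): a state equation f, an output function h, and a transition obtained here by a crude Euler step.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

# Minimal sketch of the listed ingredients for a finite-dimensional
# continuous-time system xdot = f(x, u), y = h(x, u).
@dataclass
class StateSpaceSystem:
    f: Callable[[Sequence[float], Sequence[float]], list]   # state equation
    h: Callable[[Sequence[float], Sequence[float]], list]   # output function

    def step(self, x, u, dt):
        """One explicit Euler step of the state; returns (x_next, y)."""
        y = self.h(x, u)
        x_next = [xi + dt * fi for xi, fi in zip(x, self.f(x, u))]
        return x_next, y

# Example: the integrator xdot = u, y = x.
sys = StateSpaceSystem(f=lambda x, u: list(u), h=lambda x, u: list(x))
x_next, y = sys.step([0.0], [1.0], 0.1)
```

The state space X, the admissible input set U, and the output set Y are implicit here (lists of floats); richer classes of systems discussed later in the book require more structure than this sketch provides.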
Such tools (or some of them) are necessary to write down the model, or system,
that is under examination. When one works with pure input/output models, one does
not need to define a state space X ; however, U and Y are crucial. In this book,
we will essentially deal with systems for which a state space representation has
been defined. Then the notion of a (state) solution is central. Given some state space
model under the form of an evolution problem (a differential equation or something
looking like this), the first step is to provide information on such solutions: the nature
of the solutions (as time functions, for instance), their uniqueness, their continuity
with respect to the initial data and parameters, etc. This in turn is related to the
set of admissible inputs U . For instance, if the model takes the form of an ordinary
differential equation (ODE) ẋ(t) = f (x(t), u(t)), the usual Carathéodory conditions
will be in force to define U as a set of measurable functions, and x(·) will usually be
an absolutely continuous function of time. In certain cases, one may want to extend
Clearly, item (vii) will not apply to some classes of time-varying systems, and an
extension is needed [11, Sect. 6]. There may be some items which do not apply well
to differential inclusions where the solution may be replaced by a solution set (for
instance, the semigroup property may fail). The basic fact that X is a metric space
will also require much care when dealing with some classes of systems whose state
spaces are not spaces of functions (like descriptor variable systems that may involve
Schwarz’ distributions). In the infinite-dimensional case, X may be a Hilbert space
(i.e., a space of functions) and one may need other definitions, see, e.g., [13, 14]. An
additional item in the above list could be the continuity of the transition map ψ(·)
with respect to the initial data x0 . Some nonsmooth systems do not possess such a
property, which may be quite useful in some stability results. A general exposition of
the notion of a system can be found in [15, Chap. 2]. We now stop our investigations
of what a system is since, as we said above, we shall give well-posedness results
each time they are needed, all through the book.
2.2 Definitions
In this section and the next one, we introduce input–output properties of a system,
or operator H : u ↦ H(u) = y. The system is assumed to be well-posed as an
input–output system, i.e., we may assume that H : L_{2,e} → L_{2,e}.
Definition 2.1 A system with input u(·) and output y(·), where u(t), y(t) ∈ R^m, is
passive if there is a constant β such that

∫₀ᵗ y^T(τ) u(τ) dτ ≥ β (2.1)
for all functions u(·) and all t ≥ 0. If, in addition, there are constants δ ≥ 0 and ε ≥ 0
such that

∫₀ᵗ y^T(τ) u(τ) dτ ≥ β + δ ∫₀ᵗ u^T(τ) u(τ) dτ + ε ∫₀ᵗ y^T(τ) y(τ) dτ (2.2)
for all functions u(·), and all t ≥ 0, then the system is input strictly passive (ISP) if
δ > 0, output strictly passive (OSP) if ε > 0, and very strictly passive (VSP) if
δ > 0 and ε > 0.
Obviously β ≤ 0, as the inequality (2.1) is to be valid for all functions u(·), and in
particular for the control u(t) = 0 for all t ≥ 0, which gives 0 = ∫₀ᵗ y^T(s) u(s) ds ≥ β.
Thus, the definition could equivalently be stated with β ≤ 0. The importance of the
form of β in (2.1) will be illustrated in Examples 4.66 and 4.67; see also Sect. 4.3.2.
Notice that ∫₀ᵗ y^T(s) u(s) ds ≤ ½ ∫₀ᵗ [y^T(s) y(s) + u^T(s) u(s)] ds is well defined since
both u(·) and y(·) are in L_{2,e} by assumption. The constants δ and ε in (2.2) are
sometimes called the input and output passivity indices, respectively.
Remark 2.2 OSP implies that the system has a finite L₂-gain; see the proof of
Lemma 2.82. Let us anticipate a little and suppose that the system we deal with is
L_{2,e}-stable (see Sect. 4.1 and Definition 4.17). This means that there exists a finite
gain γ₂ > 0 such that ||y||_{2,e} ≤ γ₂ ||u||_{2,e}. Simple calculations show that, in this
case, input strict passivity implies output strict passivity, with constant δ/γ₂².
Remark 2.3 The above definition is well suited to autonomous (and causal²) systems. In the case of time-varying systems (which we shall encounter in the book), a better definition is ∫_{t0}^{t1} y^T(τ)u(τ)dτ ≥ β for any t0 and t1 ≥ t0.
Theorem 2.4 Assume that there is a continuous function V(·) ≥ 0 such that

V(t) − V(0) ≤ ∫_0^t y^T(s)u(s)ds (2.3)
for all functions u(·), for all t ≥ 0 and all V (0). Then the system with input u(·) and
output y(·) is passive. Assume, in addition, that there are constants δ ≥ 0 and ε ≥ 0
such that
V(t) − V(0) ≤ ∫_0^t y^T(s)u(s)ds − δ∫_0^t u^T(s)u(s)ds − ε∫_0^t y^T(s)y(s)ds (2.4)
for all functions u(·), for all t ≥ 0 and all V (0). Then the system is input strictly
passive if there is a δ > 0, it is output strictly passive if there is an ε > 0, and very
strictly passive if there is a δ > 0 and an ε > 0 such that the inequality holds.
Proof Since V(t) ≥ 0, inequality (2.3) gives ∫_0^t y^T(s)u(s)ds ≥ V(t) − V(0) ≥ −V(0) for all functions u(·) and all t ≥ 0, so that (2.1) is satisfied with β ≜ −V(0) ≤ 0. Input strict passivity, output strict passivity, and very strict passivity are shown in the same way.
This indicates that the constant β is related to the initial conditions of the system;
see also Example 4.66 for more information on the role played by β. It is also worth
looking at Corollary 3.3 to get more information on the real nature of the function
V (·): V (·) will usually be a function of the state of the system. The reader may have
guessed such a fact by looking at the examples of Chap. 1.
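A minimal simulation sketch of this point (not from the book): for a hypothetical mass-damper m v̇ = −cv + u with output y = v, the total energy V = ½mv² is such a function of the state, and (2.3) holds along trajectories with β = −V(0); all numerical values are arbitrary:

```python
import numpy as np

# Mass-damper m*vdot = -c*v + u, output y = v, storage V = (1/2) m v^2.
# Along trajectories: Vdot = v*u - c*v^2 <= y*u, so (2.3) holds even
# with a nonzero initial condition (and hence beta = -V(0)).
m, c, dt, N = 2.0, 0.8, 1e-3, 20000
t = np.arange(N) * dt
u = np.cos(2 * t) + 0.3 * np.sin(5 * t)
v = np.zeros(N)
v[0] = 1.5                                 # nonzero initial state
for k in range(N - 1):
    v[k + 1] = v[k] + dt * (-c * v[k] + u[k]) / m
V0 = 0.5 * m * v[0] ** 2
V_end = 0.5 * m * v[-1] ** 2
supplied = np.sum(v * u) * dt              # int_0^t y(s)u(s) ds
```

The dissipation term c∫v² ds gives the inequality (2.3) comfortable slack, so the check is robust to the discretization error.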
Corollary 2.5 Assume that there is a continuously differentiable function V(·) ≥ 0 and a function d(·) such that ∫_0^t d(s)ds ≥ 0 for all t ≥ 0.
1. If
V̇(t) ≤ y^T(t)u(t) − d(t) (2.5)
for all t ≥ 0 and all functions u(·), the system is passive.
2. If there exists a δ > 0 such that
V̇(t) ≤ y^T(t)u(t) − δu^T(t)u(t) − d(t) (2.6)
for all t ≥ 0 and all functions u(·), the system is input strictly passive (ISP).
3. If there exists an ε > 0 such that
V̇(t) ≤ y^T(t)u(t) − εy^T(t)y(t) − d(t) (2.7)
for all t ≥ 0 and all functions u(·), the system is output strictly passive (OSP).
4. If there exist a δ > 0 and an ε > 0 such that
V̇(t) ≤ y^T(t)u(t) − δu^T(t)u(t) − εy^T(t)y(t) − d(t) (2.8)
for all t ≥ 0 and all functions u(·), the system is very strictly passive (VSP).
If V(·) is the total energy of the system, then ⟨u, y⟩_t ≜ ∫_0^t y^T(s)u(s)ds can be seen as the energy supplied to the system by the control, while d(t) can be seen as the power dissipated by the system. The condition ∫_0^t d(s)ds ≥ 0 for all t ≥ 0 then expresses that the system is dissipating energy. The term w(u, y) = u^T y is called the supply rate of the system. We will see later in this book that this is the supply rate for passive systems, and that a more general definition exists that defines dissipative systems.
Remark 2.6 All these notions will be examined in much more detail in Chap. 4,
see especially Sect. 4.4.2. Actually, the notion of passivity (or dissipativity) has
been introduced in various ways in the literature. It is sometimes introduced as a
pure input/output property of an operator (i.e., the constant β in (2.1) is not related
to the state of the system) [16–18], and serves as a tool to prove some bounded
input/bounded output stability results. Willems has, on the contrary, introduced dis-
sipativity as a notion which involves the state space representation of a system,
through so-called storage functions [12, 19]. We will come back to this subject in
Chap. 4. Hill and Moylan started from an intermediate definition, where the con-
stant β is assumed to depend on some initial state x0 [20–23]. Then, under some
controllability assumptions, the link with Willems’ definition is made. Theorem 2.4
will be generalized in Theorem 4.34, which proves that indeed such functions V (·)
exist and are functions of the system’s state. In this chapter and the next one, we will
essentially concentrate on linear time-invariant dissipative systems, whose transfer
functions are named positive real (PR). This is a very important side of passivity
theory in Systems and Control theory.
A useful result for passive systems is that parallel and feedback interconnections of
passive systems are passive, and that certain strict passivity properties are inherited.
To explore this we consider two passive systems with scalar inputs and outputs
(Fig. 2.1). Similar results are found for multivariable systems. System 1 has input
u 1 and output y1 , and system 2 has input u 2 and output y2 . We make the following
assumptions:
1. There are continuously differentiable functions V1(t) ≥ 0 and V2(t) ≥ 0.
2. There are functions d1(·) and d2(·) such that ∫_0^t d1(s)ds ≥ 0 and ∫_0^t d2(s)ds ≥ 0 for all t ≥ 0.
3. There are constants δ1 ≥ 0, δ2 ≥ 0, ε1 ≥ 0 and ε2 ≥ 0 such that
V̇1(t) ≤ y1(t)u1(t) − δ1u1²(t) − ε1y1²(t) − d1(t), (2.9)
V̇2(t) ≤ y2(t)u2(t) − δ2u2²(t) − ε2y2²(t) − d2(t). (2.10)
Assumption 3 implies that both systems are passive, and that system i is strictly
passive in some sense if any of the constants δi or εi are greater than zero. For the
parallel interconnection, we have u 1 = u 2 = u, y = y1 + y2 , and
yu = (y1 + y2 )u = y1 u + y2 u = y1 u 1 + y2 u 2 . (2.11)
For the feedback interconnection, we have u1 = u − y2, u2 = y1 = y, and
yu = y1(u1 + y2) = y1u1 + y1y2 = y1u1 + u2y2. (2.13)
Δ
Again, by adding (2.9)–(2.11) we find that there is a V(·) ≜ V1(·) + V2(·) ≥ 0 and a d_fb = d1 + d2 + δ1u1² such that ∫_0^t d_fb(s)ds ≥ 0 for all t ≥ 0 and V̇(t) ≤ y(t)u(t) − d_fb(t), so that the feedback interconnection is also passive.
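For linear time-invariant systems these interconnection properties can be spot-checked in the frequency domain (a sketch, not from the book; the two transfer functions below are arbitrary stable positive real examples):

```python
import numpy as np

# Two (hypothetical) passive transfer functions h1(s) = 1/(s+1) and
# h2(s) = (s+2)/(s+3).  Their parallel sum h1 + h2 and the negative-feedback
# map u -> y = h1/(1 + h1*h2) should again have nonnegative real part
# on the imaginary axis (see Theorem 2.8 below).
w = np.linspace(-100, 100, 20001)
s = 1j * w
h1 = 1.0 / (s + 1)
h2 = (s + 2) / (s + 3)
par = h1 + h2                # parallel interconnection
fb = h1 / (1 + h1 * h2)      # feedback interconnection, u -> y
min_re_par = par.real.min()
min_re_fb = fb.real.min()
```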
2.4 Linear Systems

Let us now deal with linear time-invariant systems, whose input–output relationship takes the form of a rational transfer function H(s) (also sometimes denoted h(s) in the single-input–single-output case), s ∈ C, and y(s) = H(s)u(s), where u(s) and y(s) are the Laplace transforms of the time functions u(·) and y(·). Parseval's theorem is very useful in the study of passive linear systems, as will be shown next. It is now recalled for the sake of completeness.
Theorem 2.7 (Parseval's theorem) Provided that the integrals exist, the following relation holds:

∫_{−∞}^{+∞} x(t)ȳ(t)dt = (1/2π)∫_{−∞}^{+∞} x(jω)ȳ(jω)dω, (2.15)

where ȳ denotes the complex conjugate of y and x(jω) is the Fourier transform of x(t), where x(t) is a complex, Lebesgue-integrable function of t.
Proof The result is established as follows: the Fourier transform of the time function x(t) is

x(jω) = ∫_{−∞}^{+∞} x(t)e^{−jωt}dt. (2.16)

Here ∫_{−∞}^{+∞} ȳ(t)e^{jωt}dt is the complex conjugate of ∫_{−∞}^{+∞} y(t)e^{−jωt}dt, that is,

∫_{−∞}^{+∞} ȳ(t)e^{jωt}dt = ȳ(jω), (2.20)
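A discrete sketch of Parseval's theorem (2.15), not from the book: for a sampled real signal, the time-domain energy equals the frequency-domain energy computed with the DFT, Σ|x_k|² = (1/N)Σ|X_m|²:

```python
import numpy as np

# Discrete Parseval identity: sum |x_k|^2 = (1/N) * sum |X_m|^2,
# where X = DFT(x).  This is the sampled analog of (2.15) with x = y.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
X = np.fft.fft(x)
energy_time = np.sum(x ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)
```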
the case of LTI and nonlinear systems. Their usefulness will be illustrated through
examples of stabilization (Fig. 2.2).
Theorem 2.8 Given a linear time-invariant system with rational transfer function h(s), i.e.,

y(s) = h(s)u(s). (2.21)

Let us assume that all the poles of h(s) have real parts less than zero. Then the following assertions hold:
1. The system is passive ⇔ Re[h(jω)] ≥ 0 for all ω ∈ [−∞, +∞].
2. The system is input strictly passive ⇔ there exists a δ > 0 such that Re[h(jω)] ≥ δ > 0 for all ω ∈ [−∞, +∞].
3. The system is output strictly passive ⇔ there exists an ε > 0 such that Re[h(jω)] ≥ ε|h(jω)|² for all ω ∈ [−∞, +∞].
See Theorem 2.25 and Lemma 2.27 for extensions to the MIMO case.
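These frequency-domain conditions are easy to check on a grid; a sketch (not from the book) for the arbitrary stable transfer function h(s) = (s+2)/((s+1)(s+3)):

```python
import numpy as np

# For h(s) = (s+2)/((s+1)(s+3)): Re[h(jw)] = (6 + 2w^2)/|den|^2 > 0 (passive),
# Re[h(jw)] -> 0 as w -> inf (so NOT input strictly passive), and
# Re[h(jw)] - 1.5*|h(jw)|^2 = 0.5*w^2/|den|^2 >= 0 (OSP with eps = 1.5).
w = np.linspace(-200, 200, 40001)
s = 1j * w
h = (s + 2) / ((s + 1) * (s + 3))
re_h = h.real
osp_gap = re_h - 1.5 * np.abs(h) ** 2
min_re = re_h.min()
re_at_high_freq = re_h[-1]       # Re[h] at w = 200, already tiny
min_osp_gap = osp_gap.min()
```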
Remark 2.9 A crucial assumption in Theorem 2.8 is that all the poles have neg-
ative real parts. This assures that in Parseval’s Theorem as stated in Theorem
2.7, the “integrals exist”. Let us recall that if h(s) = a(s)/b(s) for two polynomials a(s) and b(s), then Re(h(jω)) = [a(jω)b(−jω) + b(jω)a(−jω)]/(2|b(jω)|²). The polynomial m(jω) ≜ a(jω)b(−jω) + b(jω)a(−jω) has real coefficients, with even powers of ω, and is always nonnegative if the operator in question is positive (i.e., Re(h(s)) ≥ 0 for all s ∈ C such that Re(s) ≥ 0).
Proof The proof is based on the use of Parseval’s theorem. In this Theorem, the time
integration is over t ∈ [0, ∞). In the definition of passivity there is an integration
over τ ∈ [0, t]. To be able to use Parseval’s theorem in this proof, we introduce the
truncated function
u_t(τ) = { u(τ) when τ ≤ t; 0 when τ > t, (2.22)
which is equal to u(τ ) for all τ less than or equal to t, and zero for all τ greater than t.
The Fourier transform of u t (τ ), which is denoted u t ( jω), will be used in Parseval’s
theorem. Without loss of generality, we will assume that y(t) and u(t) are equal to
zero for all t ≤ 0. Then according to Parseval’s theorem
∫_0^t y(τ)u(τ)dτ = ∫_{−∞}^{+∞} y(τ)u_t(τ)dτ = (1/2π)∫_{−∞}^{+∞} y(jω)ū_t(jω)dω, (2.23)
where y(jω) = h(jω)u_t(jω), which yields

∫_0^t y(τ)u(τ)dτ = (1/2π)∫_{−∞}^{+∞} h(jω)|u_t(jω)|²dω. (2.24)

The left-hand side of (2.24) is real, and it follows that the imaginary part on the right-hand side is zero. This implies that
∫_0^t u(τ)y(τ)dτ = (1/2π)∫_{−∞}^{+∞} Re[h(jω)]|u_t(jω)|²dω. (2.26)
The equality is implied by Parseval's theorem. If Re[h(jω)] ≥ δ ≥ 0 for all ω, it follows that the system is passive, and in addition input strictly passive if δ > 0. Conversely, assume that the system is input strictly passive. Thus, there exists a δ ≥ 0 so that

∫_0^t y(s)u(s)ds ≥ δ∫_0^t u²(s)ds = (δ/2π)∫_{−∞}^{+∞}|u_t(jω)|²dω, (2.28)
for all u(·), where the initial conditions have been selected so that β = 0. Here δ = 0
for a passive system, while δ > 0 for a strictly passive system. Then
(1/2π)∫_{−∞}^{+∞} Re[h(jω)]|u_t(jω)|²dω ≥ (δ/2π)∫_{−∞}^{+∞}|u_t(jω)|²dω, (2.29)
and

(1/2π)∫_{−∞}^{+∞}(Re[h(jω)] − δ)|u_t(jω)|²dω ≥ 0. (2.30)
If there exists an ω0 such that Re[h(jω0)] < δ, this inequality will not hold for all u, because the integral on the left-hand side can be made negative if the control
signal is selected to be u(t) = U cos ω0 t. The results 1 and 2 follow. To show result
3, we first assume that the system is output strictly passive, that is, there is an ε > 0
such that
∫_0^t y(s)u(s)ds ≥ ε∫_0^t y²(s)ds = (ε/2π)∫_{−∞}^{+∞}|h(jω)|²|u_t(jω)|²dω, (2.31)

which is equivalent to

(1/2π)∫_{−∞}^{+∞}(Re[h(jω)] − ε|h(jω)|²)|u_t(jω)|²dω ≥ 0, (2.32)
and the second inequality follows by straightforward algebra. The converse result is
shown similarly as the result for input strict passivity.
Remark 2.10 It follows from (2.24) that the bias β in (2.1) can be taken equal to
zero for zero initial conditions, when linear time-invariant systems are considered.
Note that according to the theorem, a passive system will have a transfer function
which satisfies
|∠h( jω)| ≤ 90◦ for all ω ∈ [−∞, +∞]. (2.34)
In a Nyquist diagram, the theorem states that h( jω) is in the closed half plane
Re [s] ≥ 0 for passive systems, h( jω) is in Re [s] ≥ δ > 0 for input strictly passive
systems, and for output strictly passive systems, h( jω) is inside the circle with center
in s = 1/ (2ε) and radius 1/ (2ε) . This is a circle that crosses the real axis in s = 0
and s = 1/ε.
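A numerical sketch of this circle criterion (not from the book): the first-order lag h1(s) = 1/(1+Ts) satisfies Re[h1(jω)] = |h1(jω)|², i.e., it is OSP with ε = 1, so its Nyquist plot must lie in the disc of center 1/(2ε) = 0.5 and radius 0.5 (in fact it traces the boundary of that disc):

```python
import numpy as np

# OSP disc check for h1(s) = 1/(1+Ts): since Re[h1] = |h1|^2 (eps = 1),
# the distance |h1(jw) - 0.5| must never exceed the radius 0.5.
T = 0.7
w = np.linspace(-500, 500, 10001)
h1 = 1.0 / (1 + 1j * w * T)
dist = np.abs(h1 - 0.5)          # distance to the circle center 1/(2*eps)
max_dist = dist.max()
```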
Remark 2.11 A transfer function h(s) is rational if it is the fraction of two polyno-
mials in the complex variable s, that is if it can be written in the form
h(s) = Q(s)/R(s), (2.35)
where Q(s) and R(s) are polynomials in s. An example of a transfer function that
is not rational is h(s) = tanh s, which appears in connection with systems described
by partial differential equations (it is of infinite dimension, see Example 2.41).
Example 2.12 Note the difference between the condition Re[h(jω)] > 0 for all ω and the condition for input strict passivity, namely that there exists a δ > 0 so that Re[h(jω)] ≥ δ > 0 for all ω. An example of this is
h1(s) = 1/(1 + Ts). (2.36)

Its frequency response is

h1(jω) = 1/(1 + jωT) = 1/(1 + (ωT)²) − j ωT/(1 + (ωT)²). (2.37)
However, there is no δ > 0 that ensures Re[h1(jω)] ≥ δ > 0 for all ω ∈ [−∞, +∞].
This is seen from the fact that for any δ > 0 we have
Re[h1(jω)] = 1/(1 + (ωT)²) < δ for all ω > (1/T)√((1 − δ)/δ). (2.38)
This implies that h 1 (s) is not input strictly passive. We note that for this system
|h1(jω)|² = 1/(1 + (ωT)²) = Re[h1(jω)], (2.39)
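The threshold in (2.38) can be verified directly; a sketch (not from the book) with the arbitrary choices T = 1 and δ = 0.1:

```python
import numpy as np

# For h1(s) = 1/(1+Ts): Re[h1(jw)] = 1/(1+(wT)^2) equals delta exactly at
# w* = (1/T)*sqrt((1-delta)/delta) and drops below delta beyond it,
# so no uniform lower bound delta > 0 exists (h1 is not ISP).
T, delta = 1.0, 0.1
w_star = np.sqrt((1 - delta) / delta) / T        # threshold frequency (= 3 here)
re_at_star = 1.0 / (1 + (w_star * T) ** 2)       # should equal delta
re_beyond = 1.0 / (1 + (1.01 * w_star * T) ** 2)
```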
h2(s) = (s + c)/((s + a)(s + b)), (2.40)
Moreover, |h4(jω)|² ≤ 1, so that Re[h4(jω)] ≥ T1/T2 ≥ (T1/T2)|h4(jω)|², which shows that the system is output strictly passive with ε = T1/T2. The reader may verify from a direct calculation of |h4(jω)|² and some algebra that it is possible to have Re[h4(jω)] ≥ |h4(jω)|², that is, ε = 1. This agrees with the Nyquist plot of h4(jω).
(Fig. 2.3 shows the electrical circuits 2–4, each driven by a voltage u with current i and built from the elements R, R1, L, and C.)
is input strictly passive as Re[h 2 ( jω)] = 1/R > 0 for all ω (and we notice it has a
relative degree r = −1, like system 1). For system 4, we find that
Re[h4(jω)] = (1/R1) · [(1 − ω²LC)² + ω²L²/(R1(R1+R))] / [(1 − ω²LC)² + ω²L²/(R1+R)²] ≥ 1/(R1+R), (2.42)
So far, we have only considered systems where the transfer functions h(s) have
poles with negative real parts. There are however passive systems that have transfer
functions with poles on the imaginary axis. This is demonstrated in the following
example.
Example 2.16 Consider the system ẏ(t) = u(t) which is represented in transfer
function description by y(s) = h(s)u(s) where h(s) = 1s . This means that the trans-
fer function has a pole at the origin, which is on the imaginary axis. For this system
Re[h( jω)] = 0 for all ω. However, we cannot establish passivity using Theorem 2.8
as this theorem only applies to systems where all the poles have negative real parts.
Instead, consider

∫_0^t y(s)u(s)ds = ∫_0^t y(s)ẏ(s)ds = (1/2)[y²(t) − y²(0)] ≥ −(1/2)y²(0), (2.43)

which shows that the system is passive with β = −(1/2)y²(0).
Theorem 2.17 Consider a linear time-invariant system with a rational transfer func-
tion h(s). The system is passive if and only if:
1. h(s) has no poles in Re [s] > 0.
2. Re[h( jω)] ≥ 0 for all ω ∈ [−∞, +∞] such that jω is not a pole of h(s).
3. If jω0 is a pole of h(s), then it is a simple pole, and the residue at s = jω0 is real and greater than zero, that is, Res_{s=jω0} h(s) = lim_{s→jω0}(s − jω0)h(s) > 0.
The above result is established in Sect. 2.11. Contrary to Theorem 2.8, poles on the
imaginary axis are considered.
Corollary 2.18 If a system with transfer function h(s) is passive, then h(s) has no
poles in Re [s] > 0.
Corollary 2.19 Consider a transfer function of the form

h(s) = (s + z1)(s + z2)···/(s(s + p1)(s + p2)···), (2.45)

where Re[pi] > 0 and Re[zi] > 0, which means that h(s) has one pole at the origin and the remaining poles in Re[s] < 0, while all the zeros are in Re[s] < 0. Then the system with transfer function h(s) is passive, if and only if Re[h(jω)] ≥ 0 for all ω ∈ [−∞, +∞].
Proof The residue of the pole on the imaginary axis is

Res_{s=0} h(s) = (z1 z2 ···)/(p1 p2 ···). (2.46)

Here the constants zi and pi are either real and positive, or they appear in complex conjugated pairs where the products zi z̄i = |zi|² and pi p̄i = |pi|² are real and positive. It is seen that the residue at the pole on the imaginary axis is real and positive. As h(s) has no poles in Re[s] > 0 by assumption, it follows that the system is passive, if and only if Re[h(jω)] ≥ 0 for all ω ∈ [−∞, +∞].
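A small numerical sketch of (2.46) (not from the book), with an arbitrary complex-conjugate choice of zeros and poles:

```python
import numpy as np

# For h(s) = (s+z1)(s+z2)/(s(s+p1)(s+p2)) with zeros z = 1 +/- 2j and
# poles p = 2 +/- 1j, the residue at s = 0 is z1*z2/(p1*p2) = 5/5 = 1,
# which is real and positive, as the proof requires.
z1, z2 = 1 + 2j, 1 - 2j
p1, p2 = 2 + 1j, 2 - 1j

def s_times_h(s):
    # s*h(s); its limit as s -> 0 is the residue of h at the origin
    return (s + z1) * (s + z2) / ((s + p1) * (s + p2))

res0 = s_times_h(1e-9)
```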
Example 2.20 Consider two systems with transfer functions

h1(s) = (s² + a²)/(s(s² + ω0²)), a ≠ 0, ω0 ≠ 0, (2.47)

h2(s) = s/(s² + ω0²), ω0 ≠ 0, (2.48)
where all the poles are on the imaginary axis. Thus, condition 1 in Theorem 2.17 is
satisfied. Moreover,

h1(jω) = −j(a² − ω²)/(ω(ω0² − ω²)), (2.49)

h2(jω) = jω/(ω0² − ω²), (2.50)

so that condition 2 also holds in view of Re[h1(jω)] = Re[h2(jω)] = 0 for all ω such that jω is not a pole of h(s). We now calculate the residues, and find that Res_{s=0} h1(s) = a²/ω0², Res_{s=±jω0} h1(s) = (ω0² − a²)/(2ω0²), and Res_{s=±jω0} h2(s) = 1/2. We see that, according to Theorem 2.17, the system with transfer function h2(s) is passive, while h1(s) is passive whenever a < ω0.
Example 2.21 Consider a system with transfer function
h(s) = −1/s. (2.51)
The transfer function has no poles in Re[s] > 0, and Re[h(jω)] ≥ 0 for all ω ≠ 0. However, Res_{s=0} h(s) = −1, and Theorem 2.17 shows that the system is not passive. This result agrees with the observation that

∫_0^t y(s)u(s)ds = −∫_{y(0)}^{y(t)} y dy = (1/2)[y(0)² − y(t)²], (2.52)

where the right-hand side has no lower bound, as y(t) can be arbitrarily large.
PID controllers are among the most popular feedback controls, if not the most popular
ones. This may be due, in part, to their passivity property, as will be shown next.
Proposition 2.22 Assume that 0 ≤ Td < Ti and 0 ≤ α ≤ 1. Then the PID controller

h_r(s) = K_p (1 + Ti s)/(Ti s) · (1 + Td s)/(1 + αTd s) (2.53)

is passive.
Proposition 2.23 The PID controller with limited integral action

h_r(s) = K_p β (1 + Ti s)/(1 + βTi s) · (1 + Td s)/(1 + αTd s), (2.54)

with β ≥ 1, is strictly passive, with gain bounded by

|h_r(jω)| ≤ K_p β · 1 · (1/α) = K_p β/α. (2.55)

The result on Re[h_r(jω)] can be established using a Nyquist diagram, or by direct calculation of Re[h_r(jω)].
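The direct calculation of Re[h_r(jω)] for the PID controller (2.53) is easy to spot-check on a frequency grid; a sketch (not from the book), with arbitrary parameter values satisfying 0 ≤ Td < Ti and 0 ≤ α ≤ 1:

```python
import numpy as np

# Passivity check for the PID controller (2.53):
# hr(jw) = Kp*(1 + Ti*jw)/(Ti*jw) * (1 + Td*jw)/(1 + alpha*Td*jw)
# should satisfy Re[hr(jw)] >= 0 for all w (w = 0 is the integrator pole,
# so the grid starts just above it).
Kp, Ti, Td, alpha = 2.0, 1.0, 0.1, 0.5
w = np.linspace(1e-3, 1e3, 20001)
s = 1j * w
hr = Kp * (1 + Ti * s) / (Ti * s) * (1 + Td * s) / (1 + alpha * Td * s)
min_re = hr.real.min()
```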
Remark 2.24 The above results encompass PI controllers, since it is allowed that
Td = 0.
2.6 Stability of a Passive Feedback Interconnection
Consider a feedback loop with loop transfer function h0(s) = h1(s)h2(s) as shown in Fig. 2.4. If h1 is passive and h2 is strictly passive, then the phases of the transfer functions satisfy |∠h1(jω)| ≤ 90° and |∠h2(jω)| < 90°. It follows that the phase of the loop transfer function h0(s) is bounded by |∠h0(jω)| < 180°.
As h1 and h2 are passive, it is clear that h0(s) has no poles in Re[s] > 0. Then according to standard Bode–Nyquist stability theory the system is asymptotically stable and BIBO stable.³ The same result is obtained if instead h1 is strictly passive and h2 is passive. We note that, in view of Proposition 2.23, a PID controller with limited integral action is strictly passive. This implies that
• A passive linear system with a PID controller with limited integral action is BIBO
stable.
For an important class of systems, passivity or strict passivity is a structural property
which is not dependent on the numerical values of the parameters of the system.
Then passivity considerations may be used to establish stability even if there are large
uncertainties or large variations in the system parameters. This is often referred to as
robust stability. When it comes to performance, it is possible to use any linear design
technique to obtain high performance for the nominal parameters of the system.
The resulting system will have high performance under nominal conditions, and in
addition robust stability under large parameter variations.
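A minimal sketch of this robustness property (not from the book; the plant, controller, and parameter ranges are arbitrary): a passive plant h(s) = 1/(s+a) in feedback with a passive PI controller h_r(s) = K_p(1+Ti s)/(Ti s) is stable for every positive choice of a, K_p, and Ti:

```python
import numpy as np

# Closed loop of h(s) = 1/(s+a) and hr(s) = Kp*(1+Ti*s)/(Ti*s):
# 1 + h*hr = 0  <=>  Ti*s^2 + Ti*(a+Kp)*s + Kp = 0,
# which is Hurwitz for ALL positive a, Kp, Ti (all coefficients positive,
# second order) - stability does not depend on the parameter values.
rng = np.random.default_rng(1)
max_real_part = -np.inf
for _ in range(200):
    a, Kp, Ti = rng.uniform(0.01, 100.0, size=3)   # large random variations
    roots = np.roots([Ti, Ti * (a + Kp), Kp])      # closed-loop poles
    max_real_part = max(max_real_part, roots.real.max())
```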
2.7 Mechanical Analogs for PD Controllers

In this section, we will study how PD controllers for position control can be represented by mechanical analogs when the input to the system is force and the output
is position. Note that when force is input and position is output, then the physical
system is not passive. We have a passive physical system if the force is the input and
the velocity is the output, and then a PD controller from position corresponds to PI
controller from velocity. For this reason, we might have referred to the controllers in
this section as PI controllers for velocity control.
We consider a mass m with position x(·) and velocity v(·) = ẋ(·). The dynam-
ics is given by m ẍ(t) = u(t) where the force u is the input. The desired posi-
tion is xd (·), while the desired velocity is vd (·) = ẋd (·). A PD controller u =
K_p(1 + Td s)[xd(s) − x(s)] is used. The control law can be written as

u(s) = K_p[xd(s) − x(s)] + D[vd(s) − v(s)], (2.58)

where D = K_p Td. The mechanical analog appears from the observation that this
control force is the force that results if the mass m with position x is connected to the
position xd with a parallel interconnection of a spring with stiffness K p and a damper
with coefficient D as shown in Fig. 2.5. If the desired velocity is not available, and
the desired position is not smooth, a PD controller of the type

u(s) = K_p[xd(s) − x(s)] − Dv(s) (2.59)

can be used. This is the force that results if the mass m is connected to the position xd with a
spring of stiffness K p and a damper with coefficient D as shown in Fig. 2.6. If the
velocity is not measured, the following PD controller can be used:

u(s) = K_p (1 + Td s)/(1 + αTd s) (xd(s) − x(s)), (2.60)
where 0 ≤ α ≤ 1 is the filter parameter. We will now demonstrate that this transfer
function appears by connecting the mass m with position x to a spring with stiffness
K 1 , in series with a parallel interconnection of a spring with stiffness K and a
damper with coefficient D, as shown in Fig. 2.7. To find the expression for K 1 and
K , we let x1 be the position of the connection point between the spring K 1 and
the parallel interconnection. Then the force is u = K 1 (x1 − x), which implies that
x1 (s) = x(s) + u(s)/K 1 . As there is no mass in the point x1 there must be a force of
equal magnitude in the opposite direction from the parallel interconnection, so that
u(s) = K (xd (s) − x1 (s)) + D(vd (s) − v1 (s)) = (K + Ds)(xd (s) − x1 (s)).
(2.61)
Insertion of x1 (s) gives
u(s) = (K + Ds)(xd(s) − x(s) − u(s)/K1). (2.62)
Solving for u(s) gives

u(s) = K1(K + Ds)/(K1 + K + Ds) · (xd(s) − x(s)) = (K1K/(K1+K)) · [1 + (D/K)s]/[1 + (D/(K1+K))s] · (xd(s) − x(s)).

Comparison with (2.60) gives K_p = K1K/(K1+K), Td = D/K, and α = K/(K1+K).
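This algebra can be verified numerically; a sketch (not from the book), with arbitrary positive values of K1, K, and D, and with the gains K_p = K1K/(K1+K), Td = D/K, α = K/(K1+K) derived above:

```python
import numpy as np

# The spring-damper network force K1*(K+D*s)/(K1+K+D*s)*(xd-x) coincides
# with the filtered PD law (2.60) when Kp = K1*K/(K1+K), Td = D/K and
# alpha = K/(K1+K).  Checked pointwise on the jw-axis.
K1, K, D = 50.0, 10.0, 4.0
Kp = K1 * K / (K1 + K)
Td = D / K
alpha = K / (K1 + K)
w = np.linspace(0.0, 100.0, 1001)
s = 1j * w
network = K1 * (K + D * s) / (K1 + K + D * s)
pd_filtered = Kp * (1 + Td * s) / (1 + alpha * Td * s)
max_err = np.abs(network - pd_filtered).max()
```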
Let us now state the MIMO counterpart of items 1 and 2 of Theorem 2.8.
Theorem 2.25 Consider a linear time-invariant system
with a rational transfer function matrix H(s) ∈ C^{m×m}, input u(t) ∈ R^m and output y(t) ∈ R^m. Assume that all the poles of H(s) are in Re[s] < 0. Then,
1. The system is passive ⇔ λmin[H(jω) + H*(jω)] ≥ 0 for all ω ∈ [−∞, +∞].
2. The system is input strictly passive ⇔ there is a δ > 0 so that λmin[H(jω) + H*(jω)] ≥ δ > 0 for all ω ∈ [−∞, +∞],
where H*(jω) denotes the conjugate transpose of H(jω).
Remark 2.26 Similar to Theorem 2.8, a crucial assumption in Theorem 2.25 is that
the poles have negative real parts, i.e., there is no pole on the imaginary axis.
Proof Let A ∈ C^{m×m} be some Hermitian matrix with eigenvalues λi(A). Let x ∈ C^m be an arbitrary vector with complex entries. It is well known from linear algebra that x*Ax is real, and that x*Ax ≥ λmin(A)|x|². From Parseval's theorem, we have
∫_0^∞ y^T(s)u_t(s)ds = Σ_{i=1}^m ∫_0^∞ y_i(s)(u_i)_t(s)ds = Σ_{i=1}^m (1/2π)∫_{−∞}^{+∞} ȳ_i(jω)(u_i)_t(jω)dω = (1/2π)∫_{−∞}^{+∞} y*(jω)u_t(jω)dω,

where we recall that u_t(·) is a truncated function and that s in the integrand is a dummy integration variable (not to be confused with the Laplace transform variable!). This leads to

∫_0^t y^T(s)u(s)ds = ∫_0^∞ y^T(s)u_t(s)ds = (1/2π)∫_{−∞}^{+∞} y*(jω)u_t(jω)dω
= (1/4π)∫_{−∞}^{+∞} [u_t*(jω)y(jω) + y*(jω)u_t(jω)]dω
= (1/4π)∫_{−∞}^{+∞} u_t*(jω)[H(jω) + H*(jω)]u_t(jω)dω.

Since u_t*(jω)[H(jω) + H*(jω)]u_t(jω) ≥ λmin[H(jω) + H*(jω)]·|u_t(jω)|², the two assertions follow as in the proof of Theorem 2.8.
Lemma 2.27 ([24, Lemma 1]) Let H(s) ∈ C^{m×m} be an asymptotically stable rational transfer matrix. Assume that H(s) + H^T(−s) has full normal rank m. Then there exists a scalar δ > 0 such that H(jω) + H*(jω) ⪰ δH*(jω)H(jω) for all ω ∈ [−∞, +∞], if and only if H(s) is OSP.
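The MIMO eigenvalue condition of Theorem 2.25 is also easy to check on a grid; a sketch (not from the book), with an arbitrary symmetric positive definite matrix M:

```python
import numpy as np

# For H(s) = M/(s+1) with M symmetric positive definite:
# H(jw) + H(jw)* = 2*M/(1+w^2), so lambda_min[H(jw)+H(jw)*]
# = 2*lambda_min(M)/(1+w^2) > 0 for all w, and the system is passive.
M = np.array([[1.0, 0.5],
              [0.5, 1.0]])
w = np.linspace(-50, 50, 2001)
min_eigs = []
for wk in w:
    H = M / (1j * wk + 1)
    min_eigs.append(np.linalg.eigvalsh(H + H.conj().T).min())
min_eig = min(min_eigs)
```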
2.9 The Scattering Formulation
Consider an electrical circuit with port voltage e and port current i, and let z0 > 0 be a reference impedance. The wave variables are defined by

a = e + z0 i and b = e − z0 i, (2.66)

and if the circuit has impedance z(s), i.e., e(s) = z(s)i(s), then

g(s) = (z(s) − z0)/(z0 + z(s)) = (z(s)/z0 − 1)/(1 + z(s)/z0) (2.67)
is the scattering function of the system. The terms wave variable and scattering
function originate from the description of transmission lines where a can be seen as
the incident wave and b can be seen as the reflected wave. If the electrical circuit
has only passive elements, that is, if the circuit is an interconnection of resistors,
capacitors, and inductors, the passivity inequality satisfies
∫_0^t e(τ)i(τ)dτ ≥ 0, (2.68)
where it is assumed that the initial energy stored in the circuit is zero. We note that

b²(t) = [e(t) − z0 i(t)]² = [e(t) + z0 i(t)]² − 4z0 e(t)i(t) = a²(t) − 4z0 e(t)i(t), (2.69)

which implies
∫_0^t b²(τ)dτ = ∫_0^t a²(τ)dτ − 4z0∫_0^t e(τ)i(τ)dτ. (2.70)
From this, it is seen that passivity of the system with input i and output e corresponds
to small gain for the system with input a and output b in the sense that
∫_0^t b²(τ)dτ ≤ ∫_0^t a²(τ)dτ. (2.71)
This small gain condition can be interpreted loosely in the sense that the energy
content b2 of the reflected wave is smaller than the energy a 2 of the incident wave.
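A simulation sketch of this small gain property (not from the book): for a hypothetical passive one-port, a series R–C branch driven by a current i, the identity (2.70) forces the reflected-wave energy below the incident-wave energy; all numerical values are arbitrary:

```python
import numpy as np

# Series R-C one-port driven by a current i: e = R*i + q/C with qdot = i.
# With zero initial charge, int e*i dt = R*int i^2 dt + q^2/(2C) >= 0,
# so by (2.70) the reflected wave b = e - z0*i carries no more energy
# than the incident wave a = e + z0*i.
R, C, z0, dt, N = 2.0, 0.5, 1.0, 1e-3, 20000
t = np.arange(N) * dt
i = np.sin(2 * t) + 0.4 * np.cos(9 * t)    # arbitrary current waveform
q = np.cumsum(i) * dt                      # accumulated charge
e = R * i + q / C
a = e + z0 * i
b = e - z0 * i
energy_a = np.sum(a ** 2) * dt
energy_b = np.sum(b ** 2) * dt
```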
For the general linear time-invariant system y(s) = h(s)u(s), introduce the wave
variables
a = y + u and b = y − u, (2.72)
where, as above, a is the incident wave and b is the reflected wave. As for electrical
circuits, it will usually be necessary to include a constant z 0 so that a = y + z 0 u,
b = y − z 0 u so that the physical units agree. We tacitly suppose that this is done by
letting z 0 = 1 with the appropriate physical unit. The scattering function is defined
by

g(s) ≜ (b/a)(s) = ((y − u)/(y + u))(s) = (h(s) − 1)/(1 + h(s)). (2.73)
Theorem 2.28 Consider a system with rational transfer function h(s) with no poles in Re[s] ≥ 0, and scattering function g(s) given by (2.73). Then
1. The system is passive if and only if |g(jω)| ≤ 1 for all ω ∈ [−∞, +∞].
2. The system is input strictly passive, and there is a γ so that |h(jω)| ≤ γ for all ω ∈ [−∞, +∞], if and only if there is a γ′ ∈ (0, 1) so that |g(jω)|² ≤ 1 − γ′.
The condition |g(jω)|² ≤ 1 − γ′ is equivalent to γ′|h(jω)|² − (4 − 2γ′)Re[h(jω)] + γ′ ≤ 0, and strict passivity follows from Re[h(jω)] ≥ γ′/(4 − 2γ′) > 0. Finite gain of h(jω) follows from

|h(jω)|² − ((4 − 2γ′)/γ′)Re[h(jω)] + 1 ≤ 0, (2.77)

which in view of the general result |h(jω)| ≥ Re[h(jω)] gives the inequality
|h(jω)|² − ((4 − 2γ′)/γ′)|h(jω)| + 1 ≤ 0. (2.78)

It follows that

|h(jω)| ≤ (4 − 2γ′)/γ′. (2.79)
We shall come back to the relationships between passivity and bounded realness in the framework of dissipative systems and H∞ theory; see Sect. 5.10. A comment on the input–output change in (2.72): the association of the new system with transfer function g(s) merely corresponds to writing down uy = (1/4)(a + b)(a − b) = (1/4)(a² − b²). Thus, if ∫_0^t u(s)y(s)ds ≥ 0 one gets ∫_0^t a²(s)ds ≥ ∫_0^t b²(s)ds: the L2-norm of the new output b(t) is bounded by the L2-norm of the new input a(t).
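The pointwise correspondence between the half-plane condition on h and the disc condition on g can be checked directly (a sketch, not from the book; the sample points are random):

```python
import numpy as np

# With g = (h-1)/(h+1): |h-1|^2 <= |h+1|^2 exactly when Re[h] >= 0,
# so points with Re[h] >= 0 map into the closed unit disc, and points
# with Re[h] < 0 map strictly outside it.
rng = np.random.default_rng(2)
h_rhp = rng.uniform(0, 5, 1000) + 1j * rng.uniform(-5, 5, 1000)     # Re h >= 0
h_lhp = -rng.uniform(0.1, 5, 1000) + 1j * rng.uniform(-5, 5, 1000)  # Re h < 0
g_rhp = (h_rhp - 1) / (1 + h_rhp)
g_lhp = (h_lhp - 1) / (1 + h_lhp)
max_in = np.abs(g_rhp).max()
min_out = np.abs(g_lhp).min()
```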
2.10 Feedback Loop

We can think of h(s) as describing the plant to be controlled, and h_r(s) as describing
the feedback controller. Here u t is the feedback control and u f is the feedforward
control. We assume that the plant h(s) and the feedback controller h_r(s) are strictly passive with finite gain. Then, as shown in Sect. 2.6, we have |∠h0(jω)| < 180°, where h0(s) ≜ h(s)h_r(s) is the loop transfer function, and the system is BIBO stable. A change of variables is now introduced to bring the system into a scattering formulation. The new variables are a ≜ y + u and b ≜ y − u for the plant, and
(Figs. 2.8 and 2.9 show the feedback loop with controller h_r(s), plant h(s), and loop transfer function h0(s), together with its scattering counterpart with gr(s), g(s), and loop transfer function g0(s); the signals are y0, e, ut, uf, u, y, and the wave variables a0 = y0 + uf, b0 = y0 − uf, ar, br, a, b.)
ar ≜ ut + e and br ≜ ut − e for the feedback controller. In addition, the input variables a0 ≜ y0 + uf and b0 ≜ y0 − uf are defined. We find that
ar = u t + y0 − y = u − u f + y0 − y = b0 − b (2.82)
and
br = u t − y0 + y = u − u f − y0 + y = a − a0 . (2.83)
g(s) ≜ (h(s) − 1)/(1 + h(s)) and gr(s) ≜ (h_r(s) − 1)/(1 + h_r(s)).
Now, h(s) and h_r(s) are passive by assumption, and as a consequence, they cannot have poles in Re[s] > 0. Then it follows that g(s) and gr(s) cannot have poles in Re[s] > 0, because 1 + h(s) = 0 is the characteristic equation of h(s) in a unity negative feedback loop, which obviously is a stable system. Similar arguments apply to 1 + h_r(s). The system can then be represented as in Fig. 2.9 where
In the passivity setting, stability was ensured when two passive systems were inter-
connected in a feedback structure, because the loop transfer function h 0 ( jω) had a
phase limitation so that ∠h 0 ( jω) > −180◦ . We would now like to check if there is
an interpretation for the scattering formulation that is equally simple. This indeed
turns out to be the case. We introduce the loop transfer function
Δ
g0 (s) = g(s)gr (s) (2.86)
of the scattering formulation. The function g0 (s) cannot have poles in Re [s] > 0
as g(s) and gr (s) have no poles in Re [s] > 0 by assumption. Then we have from
Theorem 2.28:
As a consequence of this,
|g0 ( jω)| < 1 (2.87)
for all ω ∈ [−∞, +∞], and according to the Nyquist stability criterion, the system
is BIBO stable.
2.11 Bounded Real and Positive Real Transfer Functions

Bounded real and positive real are two important properties of transfer functions related to passive systems that are linear and time-invariant. We will in this section show that a linear time-invariant system is passive, if and only if the transfer function of the system is positive real. To do this we first show that a linear time-invariant system is passive if and only if the scattering function, which is the transfer function of the wave variables, is bounded real. Then we show that the scattering function is bounded real if and only if the transfer function of the system is positive real. We will also discuss different aspects of these results for rational and irrational transfer functions.
We consider a linear time-invariant system y(s) = h(s)u(s) with input u and output y. The incident wave is denoted a ≜ y + u, and the reflected wave is denoted b ≜ y − u. The scattering function g(s) is given by

g(s) = (h(s) − 1)/(1 + h(s)), (2.88)

and the instantaneous power satisfies

u(t)y(t) = (1/4)[a²(t) − b²(t)]. (2.89)
Once again, we assume that the initial conditions are selected so that the energy
function V (t) is zero for initial time, that is V (0) = 0. In fact, the mere writing
y(s) = h(s)u(s) means that initial conditions on the output and input’s derivatives
have been chosen null. The passivity inequality is then
0 ≤ V(t) = ∫_0^t u(s)y(s)ds = (1/4)∫_0^t [a²(s) − b²(s)]ds, (2.90)
which is (2.1) with β = 0, i.e., with zero bias. It is a fact that nonzero initial conditions can, in certain cases where there exist pole/zero cancelations on the imaginary axis (corresponding in a state-space representation to uncontrollable or unobservable oscillatory modes), result in a system that satisfies the passivity inequality only for zero initial conditions (for otherwise β = −∞ [25, Example 4]). The properties
bounded real and positive real will be defined for functions that are analytic in the
open right half plane Re[s] > 0. We recall that a function f(s) is analytic in a domain if it is defined and differentiable at every point of the domain. A point where f(s) ceases to be analytic is called a singular point, and we say that f(s) has a singularity at this point. If f(s) is rational, then f(s) has a finite number of singularities, and the singularities are called poles. The poles are the roots of the denominator polynomial R(s) if f(s) = Q(s)/R(s), and a pole is said to be simple if it is not a multiple root of R(s).
In the literature, the words scattering, or Schur, or contractive are sometimes used instead of bounded real. A function g(s) is said to be bounded real if (1) g(s) is analytic in Re[s] > 0, (2) g(s) is real for real positive s, and (3) |g(s)| ≤ 1 for all s with Re[s] > 0. The following holds.
Theorem 2.30 Consider a linear time-invariant system described by y(s) = h(s)u(s), and the associated scattering description a = y + u, b = y − u, and b(s) = g(s)a(s), where

g(s) = (h(s) − 1)/(1 + h(s)). (2.91)

Then the system is passive if and only if g(s) is bounded real.
Proof Assume that y(s) = h(s)u(s) is passive. Then (2.90) implies that
∫_0^t a²(τ)dτ ≥ ∫_0^t b²(τ)dτ (2.92)
for all t ≥ 0. It follows that g(s) cannot have any singularities in Re[s] > 0 as this
would result in exponential growth in b(t) for any small input a(t). Thus, g(s) must
satisfy condition 1 in the definition of bounded real. Let σ0 be an arbitrary real and positive constant, and let a(t) = e^{σ0 t}1(t), where 1(t) is the unit step function. Then the Laplace transform of a(t) is a(s) = 1/(s − σ0), while b(s) = g(s)/(s − σ0). Suppose that the system is not initially excited, so that the inverse Laplace transform for rational g(s) gives
b(t) = Σ_{i=1}^n Res_{s=si}[g(s)/(s − σ0)]e^{si t} + Res_{s=σ0}[g(s)/(s − σ0)]e^{σ0 t},

where the si are the poles of g(s), which satisfy Re[si] < 0, and Res_{s=σ0}[g(s)/(s − σ0)] = g(σ0). When t → +∞, the term including e^{σ0 t} will dominate the terms including e^{si t}, and b(t) will tend to g(σ0)e^{σ0 t}. The same limit for b(t) will also be found for irrational g(s). As a(t) is real, it follows that g(σ0) is real, and hence that g(s) must satisfy condition 2 in the definition of bounded realness.
Let s0 = σ0 + jω0 be an arbitrary point in Re[s] > 0, and let the input be a(t) =
Re[es0 t 1(t)]. Then b(t) → Re[g(s0 )es0 t ] as t → +∞ and the power
P(t) ≜ (1/4)[a²(t) − b²(t)] (2.93)
will tend to
P(t) = (1/4)[e^{2σ0 t}cos²(ω0 t) − |g(s0)|²e^{2σ0 t}cos²(ω0 t + φ)],

where φ = arg[g(s0)]. This can be rewritten using cos²α = (1/2)(1 + cos 2α), and the result is

P(t) = (1/8)[1 − |g(s0)|²]e^{2σ0 t} + (1/8)Re{[1 − g(s0)²]e^{2s0 t}}.

In this expression s0 and σ0 are constants, and we can integrate P(t) to get the energy function V(t):

V(t) = ∫_{−∞}^t P(s)ds = (1/(16σ0))[1 − |g(s0)|²]e^{2σ0 t} + (1/16)Re{(1/s0)[1 − g(s0)²]e^{2s0 t}}.
First, it is assumed that ω0 ≠ 0. Then Re{(1/s0)[1 − g(s0)²]e^{2s0 t}} will be a sinusoidal function which becomes zero for certain values of t. For such values of t, the condition V(t) ≥ 0 implies that

(1/(16σ0))[1 − |g(s0)|²]e^{2σ0 t} ≥ 0,
which implies that 1 − |g(s0 )|2 ≥ 0. Next it is assumed that ω0 = 0 such that s0 = σ0
is real. Then g(s0 ) will be real, and the two terms in V (t) become equal. This gives
0 ≤ V(t) = (1/(8σ0))[1 − g²(s0)]e^{2σ0 t},
and with this it is established that for all s0 in Re[s] > 0 we have 1 − |g(s0 )|2 ≥ 0 ⇒
|g(s0)| ≤ 1. To show the converse, we assume that g(s) is bounded real and consider the limit g(jω) = lim_{σ→0+} g(σ + jω).
Because g(s) is bounded and analytic for all Re [s] > 0, it follows that this limit
exists for all ω, and moreover
|g( jω)| ≤ 1.
Then it follows from Parseval's theorem that, with a_t being the truncated version of a, we have

0 ≤ (1/8π)∫_{−∞}^{+∞}|a_t(jω)|²(1 − |g(jω)|²)dω = (1/4)∫_0^t [a²(s) − b²(s)]ds = ∫_0^t u(s)y(s)ds,

which shows that the system is passive.
Remark 2.31 It is important to notice that we have used, as shown in the proof, the definition of passivity in Definition 2.1 with β = 0, i.e., with zero initial data. Actually, it is possible to show that some LTI systems satisfy ∫_0^t u(s)^T y(s)ds ≥ 0 for all t ≥ 0 when x(0) = 0, whereas for x(0) ≠ 0 the integral ∫_0^t u(s)^T y(s)ds is not lower bounded [25, Example 4]; hence the system is not passive in the sense of Definition 2.1 for all initializations. A minimality assumption (in an I/O setting, no pole/zero cancelations) guarantees that such cases do not occur, however. Actually, as shown later in the book, a controllability assumption is sufficient to guarantee the equivalence between passivity with β = 0, and the existence of a function V(·) as in Theorem 2.4 with V(x(0)) = 0. See Theorems 4.35 and 4.46, and notice that β in Definition 2.1 is quite close to what will be called later the required supply.
Define the contour C which encloses the right half plane, as shown in Fig. 2.10. The
maximum modulus theorem is as follows. Let f (s) be a function that is analytic
inside the contour C. Let M be the upper bound on | f (s)| on C. Then | f (s)| ≤ M
inside the contour, and equality is achieved at some point inside C if and only if f (s)
is a constant. This means that if g(s) is bounded real, and |g(s)| = 1 for some point
in Re[s] > 0, then |g(s)| achieves its maximum inside the contour C, and it follows
that g(s) is a constant in Re[s] ≥ 0. Because g(s) is real for real s > 0, this means
that g(s) = 1 for all s in Re[s] ≥ 0. In view of this, [1 − g(s)]−1 has singularities in
Re[s] > 0 if and only if g(s) = 1 for all s in Re[s] ≥ 0.
If g(s) is assumed to be a rational function, the maximum modulus theorem can be used to reformulate the condition on |g(s)| as a condition on |g(jω)|. The reason for this is that a rational transfer function satisfying |g(jω)| ≤ 1 for all ω will also satisfy

lim_{ω→∞} |g(jω)| = lim_{|s|→∞} |g(s)|. (2.95)
Therefore, for a sufficiently large contour C, we have that |g( jω)| ≤ 1 implies
|g(s)| ≤ 1 for all Re[s] > 0 whenever g(s) is rational. This leads to the following
result.
2.11 Bounded Real and Positive Real Transfer Functions 37
Theorem 2.32 A real rational function g(s) is bounded real if and only if: (1) g(s) is analytic in Re[s] ≥ 0, and (2) |g(jω)| ≤ 1 for all ω ∈ R.
A bounded real transfer function is necessarily proper (i.e., the degree of its numerator is less than or equal to the degree of its denominator, or, it has a relative degree ≥ 0). Indeed, let g(s) = a(s)/b(s) for two polynomials a(s) and b(s) with arbitrary degrees; then it follows from partial fraction decomposition that

g(s) = f(s) + Σ_{i=1}^{m} Σ_{r=1}^{j_i} a_{ir}/(s − a_i)^r + Σ_{i=1}^{n} Σ_{r=1}^{k_i} (b_{ir}s + c_{ir})/(s² + b_i s + c_i)^r,

for some polynomial f(s). How to calculate the coefficients is unimportant to us. It is clear that unless f(s) is a constant, the second condition in Theorem 2.32 cannot be satisfied.
Let us now state a new definition (see also Remark 2.42).
Definition 2.33 A transfer function h(s) is said to be positive real (PR) if:
1. h(s) is analytic in Re[s] > 0.
2. h(s) is real for positive real s.
3. Re[h(s)] ≥ 0 for all Re[s] > 0.
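Condition 3 of Definition 2.33 lends itself to a quick numerical spot-check. The sketch below is our own illustration (the sample functions 1/(s + 1) and 1/(s − 1), the helper name, and the grid are arbitrary choices); it samples Re[h(s)] on a grid in the open right half plane.

```python
import numpy as np

def is_pr_on_grid(h, sigmas, omegas):
    """Sample Re[h(s)] on a grid in Re[s] > 0 and report whether it
    stays nonnegative (condition 3 of the definition, checked numerically)."""
    S = sigmas[:, None] + 1j * omegas[None, :]
    return bool(np.all(np.real(h(S)) >= 0))

sigmas = np.linspace(0.01, 10.0, 60)
omegas = np.linspace(-50.0, 50.0, 201)

h_pr = lambda s: 1.0 / (s + 1.0)   # Re[h(s)] = (sigma+1)/|s+1|^2 > 0: PR
h_bad = lambda s: 1.0 / (s - 1.0)  # pole at s = 1 in Re[s] > 0: not PR

print(is_pr_on_grid(h_pr, sigmas, omegas))   # True
print(is_pr_on_grid(h_bad, sigmas, omegas))  # False
```

Such a grid test can of course only refute positive realness, never prove it; it is merely a sanity check before applying the frequency-domain criteria discussed in this section.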
The last condition above is illustrated in Fig. 2.11 where the Nyquist plot of a PR
transfer function H (s) is shown. The notion of positive realness extends to multi-
variable systems:
Definition 2.34 The transfer matrix H (s) ∈ Cm×m is positive real if:
• H (s) has no pole in Re[s] > 0.
• H (s) is real for all positive real s.
• H(s) + H⋆(s) ⪰ 0 for all Re[s] > 0.
Theorem 2.35 Let the transfer matrix H(s) = C(sIn − A)^{−1}B + D ∈ Cm×m, where the matrices A, B, C, and D are real, and every eigenvalue of A has a negative real part. Then H(s) is positive real if and only if y⋆[H(jω) + H⋆(jω)]y = y⋆Π(jω)y ≥ 0 for all ω ∈ R and all y ∈ Cm.
This result was proved in [26, p. 53]. The introduction of the spectral function Π(s) allows us to state a result to which we shall come back in Sect. 3.3.
for all u ∈ L2,e , if and only if its associated spectral function Π (s) is nonnegative.
Proof We assume that u(t) = 0 for all t < 0 and that the system is causal. Let the
output y(·) be given as
y(t) = Du(t) + ∫_0^t Ce^{A(t−τ)}Bu(τ)dτ. (2.96)
Let U (s) and Y (s) denote the Laplace transforms of u(·) and y(·), respectively. Let
us assume that Π (s) has no pole on the imaginary axis. From Parseval’s theorem,
one has
∫_{−∞}^{+∞} [y^T(t)u(t) + u^T(t)y(t)]dt = (1/2π) ∫_{−∞}^{+∞} [Y⋆(jω)U(jω) + U⋆(jω)Y(jω)]dω. (2.97)
One also has Y(s) = [D + C(sIn − A)^{−1}B]U(s). Therefore

∫_{−∞}^{+∞} [y^T(t)u(t) + u^T(t)y(t)]dt = (1/2π) ∫_{−∞}^{+∞} U⋆(jω)Π(jω)U(jω)dω. (2.98)
It follows that:
• Π(jω) ⪰ 0 for all ω ∈ R implies that ∫_{−∞}^{+∞} [y^T(t)u(t) + u^T(t)y(t)]dt ≥ 0 for all admissible u(·).
• Reciprocally, given a couple (ω0, U0) that satisfies U0⋆Π(jω0)U0 < 0, there exists by continuity an interval Ω0 such that U0⋆Π(jω)U0 < 0 for all ω ∈ Ω0. Consequently, the inverse Fourier transform v0(·) of the function

U(jω) = U0 if ω ∈ Ω0, 0 if ω ∉ Ω0, (2.99)

makes the quadratic form (1/2π) ∫_{Ω0} U0⋆Π(jω)U0 dω < 0. Therefore, positivity of Λ(·) and of its spectral function are equivalent properties.
If Π(s) has poles on the imaginary axis, then Parseval's theorem can be used under the form

∫_{−∞}^{+∞} e^{−2at}[y^T(t)u(t) + u^T(t)y(t)]dt = (1/2π) ∫_{−∞}^{+∞} U⋆(a + jω)Π(a + jω)U(a + jω)dω, (2.100)

which is satisfied for all real a, provided the line a + jω does not contain any pole of Π(s).
Remark 2.37 We see that nonnegativity means passivity in the sense of Definition
2.1, with β = 0. Thus, it is implicit in the proof of Proposition 2.36 that the ini-
tial data on y(·) and u(·) and their derivatives, up to the required orders, are zero.
Consequently, the positivity of the operator Λ(·), when associated with a state space
representation (A, B, C, D), is characterized with the initial state x(0) = 0. Later on
in Chap. 4, we shall give a definition of dissipativity, which generalizes that of posi-
tivity for a rational operator such as Λ(·), and which precisely applies with x(0) = 0;
see Definition 4.23.
Theorem 2.38 Consider

g(s) = (h(s) − 1)/(1 + h(s)). (2.101)

Assume that g(s) ≠ 1 for all Re[s] > 0. Then h(s) is positive real if and only if g(s) is bounded real.
Proof Assume that g(s) is bounded real and that g(s) ≠ 1 for all Re[s] > 0. Then [1 − g(s)]^{−1} exists for all s in Re[s] > 0. From (2.101) we find that

h(s) = (1 + g(s))/(1 − g(s)), (2.102)

where h(s) is analytic in Re[s] > 0 as g(s) is analytic in Re[s] > 0, and [1 − g(s)]^{−1} is nonsingular by assumption in Re[s] > 0. To show that Re[h(s)] ≥ 0 for all Re[s] > 0 the following computation is used:

2Re[h(s)] = h⋆(s) + h(s) = (1 + g⋆(s))/(1 − g⋆(s)) + (1 + g(s))/(1 − g(s)) = 2[1 − g⋆(s)g(s)]/([1 − g⋆(s)][1 − g(s)]) = 2(1 − |g(s)|²)/|1 − g(s)|². (2.103)
We see that Re[h(s)] ≥ 0 for all Re[s] > 0 whenever g(s) is bounded real. Next
assume that h(s) is positive real. Then h(s) is analytic in Re[s] > 0, and [1 + h(s)]
is nonsingular in Re[s] > 0 as Re[h(s)] ≥ 0 in Re[s] > 0. It follows that g(s) is
analytic in Re[s] > 0. From (2.103) it is seen that |g(s)| ≤ 1 in Re[s] > 0; it follows
that g(s) is bounded real.
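The correspondence of Theorem 2.38 can be illustrated numerically. In the sketch below, h(s) = (s + 2)/(s + 1) is a hypothetical PR example of our own choosing; its Moebius image g(s) = (h(s) − 1)/(1 + h(s)) simplifies to 1/(2s + 3) and should satisfy |g(s)| ≤ 1 throughout Re[s] > 0.

```python
import numpy as np

# Hypothetical PR function and its Moebius (Cayley) image, per (2.101).
h = lambda s: (s + 2.0) / (s + 1.0)
g = lambda s: (h(s) - 1.0) / (1.0 + h(s))   # simplifies to 1/(2s + 3)

# Random sample points in the open right half plane.
rng = np.random.default_rng(0)
s = rng.uniform(0.01, 20.0, 2000) + 1j * rng.uniform(-100.0, 100.0, 2000)

print(bool(np.all(np.real(h(s)) > 0)))   # True: h is PR at the sampled points
print(float(np.abs(g(s)).max()) <= 1.0)  # True: g is bounded real there
```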
In fact, the transfer function g(s) in Theorem 2.38 is strictly bounded real, since |g(s)| ≤ 1 and g(s) ≠ 1 imply |g(s)| < 1. From Theorems 2.30 and 2.38 it follows that:
Notice that the proof has been led in a pure input/output framework, without any mention of a state space realization, except that once again we implicitly assume that initial conditions are zero. This result was proved in [19, 28] with explicit mention of an associated state space realization and x(0) = 0.
Example 2.40 The condition in the corollary means that h(s) ≢ ∞ (which corresponds to g(s) ≡ 1). The transformation from h(s) to g(s) is called a Moebius (or, in this case, Cayley) transformation.
A fundamental result in electrical circuit theory is that if the transfer function h(s) is
rational and positive real, then there exists an electrical one-port built from resistors,
capacitors, and inductors so that h(s) is the impedance of the one-port [29, p. 815].
If e is the voltage over the one-port and i is the current entering the one-port, then
e(s) = h(s)i(s). The system with input i and output e must be passive, because the total stored energy E(t) of the circuit must satisfy Ė(t) ≤ e(t)i(t).
Example 2.41 Consider h(s) = cosh(s)/sinh(s). Here |e^s| ≥ 1 for Re[s] > 0, while e^s(1 − e^{−2s}) = 0 ⇒ e^{−2s} = 1. Therefore, the singularities are found to be s_k = jkπ, k ∈ {0, ±1, ±2, . . .}, which are on the imaginary axis. This means that h(s) is analytic in Re[s] > 0. Obviously, h(s) is real for
real s > 0. Finally, we check whether Re[h(s)] is positive in Re[s] > 0. Let s = σ + jω. Then

cosh s = (1/2)[e^σ(cos ω + j sin ω) + e^{−σ}(cos ω − j sin ω)] = cosh σ cos ω + j sinh σ sin ω,
sinh s = (1/2)[e^σ(cos ω + j sin ω) − e^{−σ}(cos ω − j sin ω)] = sinh σ cos ω + j cosh σ sin ω,

and

Re[h(s)] = cosh σ sinh σ / |sinh s|² > 0 for Re[s] > 0, (2.105)

where it is used that σ = Re[s], and the positive realness of h(s) has been established.
One sees that h(s) has infinitely many simple poles on the imaginary axis (hence it represents the input/output operator of an infinite-dimensional system). A quite similar analysis can be led for h(s) = tanh(s), which also has infinitely many simple poles, located at j(k + 1/2)π, k ∈ Z, with Re[h(s)] = (1 − e^{−4Re[s]})/|1 + e^{−2s}|² for all s ∈ C which are not poles of h(s) [30]. Another example of an irrational (infinite-dimensional) transfer function is given in [30, Example 3.2]: h(s) = Σ_{k=0}^{∞} C_k/(s − jk²), which is positive real.
Remark 2.42 Consider the definition of positive real transfer functions in Definition 2.33. Let us define the set L(U, Y) as the Banach space of all linear bounded operators U → Y, with U and Y complex Hilbert spaces, and L(U, U) = L(U). The set Hα(L(U)) is the set of all L(U, Y)-valued functions which are holomorphic on the set Cα = {s ∈ C | Re[s] > α}, except at isolated points like poles and essential singularities.4 Let us also define the set Σh as the set of poles and essential singularities of h(s). Then an alternative definition of positive real transfer functions is as follows [30, Definition 3.1]:
H(s) = (b_m s^m + · · · + b_0)/(s^n + a_{n−1}s^{n−1} + · · · + a_0), (2.106)
4 For a holomorphic function f(s), one defines an essential singularity as a point a where neither lim_{s→a} f(s) nor lim_{s→a} 1/f(s) exists. The function e^{1/s} has an essential singularity at s = 0. Rational functions do not have essential singularities; they have only poles.
where a_i, b_i ∈ R are the system parameters, n is the order of the system, and r = n − m is the relative degree. For rational transfer functions, it is possible to find conditions on the frequency response h(jω) for the transfer function to be positive real. The result is presented in the following theorem.
Theorem 2.45 A rational function h(s) is positive real if and only if:
1. h(s) has no poles in Re[s] > 0.
2. Re[h(jω)] ≥ 0 for all ω ∈ R such that jω is not a pole of h(s).
3. If jω0, finite or infinite, is a pole of h(s), then it is a simple pole, and the associated residue Res_{s=jω0} h(s) ≜ lim_{s→jω0}(s − jω0)h(s) is real and positive. If ω0 is infinite, then the limit R∞ ≜ lim_{ω→∞} h(jω)/(jω) is real and positive.
Proof The proof can be established by showing that conditions 2 and 3 in this The-
orem are equivalent to the condition
Re[h(s)] ≥ 0 (2.107)
for all Re[s] > 0 for h(s) with no poles in Re[s] > 0. First assume that conditions
2 and 3 hold. We use a contour C as shown in Fig. 2.12 which goes from − jΩ to
jΩ along the jω axis, with small semicircular indentations into the right half plane
around points jω0 that are poles of h(s). The contour C is closed with a semicircle
into the right half plane. On the part of C that is on the imaginary axis Re[h(s)] ≥ 0
by assumption. On the small indentations, condition 3 gives h(s) ≈ Res_{s=jω0}h(s)·(s − jω0)^{−1}. As Re[(s − jω0)^{−1}] ≥ 0 on the small semicircles, and Res_{s=jω0}h(s) is real and positive according to condition 3, it follows that Re[h(s)] ≥ 0 on these semicircles. On the large semicircle into the right half plane with radius Ω, we also have Re[h(s)] ≥ 0, and the value is a constant equal to lim_{ω→∞} Re[h(jω)], unless h(s) has a pole at infinity on the jω axis, in which case h(s) ≈ sR∞ on the large semicircle. Thus, we may conclude that Re[h(s)] ≥ 0 on C. Define the function

f(s) = e^{−h(s)}, so that |f(s)| = e^{−Re[h(s)]}.
exists for all ω such that jω is not a pole of h(s). To show condition 3, we assume that jω0 is a pole of multiplicity m of h(s). On the small indentation with radius r into the right half plane, we have s − jω0 = re^{jθ} where −π/2 ≤ θ ≤ π/2. Then
The system ẏ(t) − y(t) = u̇(t) − u(t), u(0) = u0, y(0) = y0, is PR. Indeed, h(s) = 1 and satisfies all the requirements for PRness. Whether or not it represents a passive operator in the sense of Definition 2.1 is another story. In view of Corollary 2.39, equivalence holds under the zero bias condition, i.e., u0 = 0 and y0 = 0. Let us check it here. We have y(t) − u(t) = (y0 − u0)e^t, from which it follows that if y0 = u0, then y(t) = u(t) for all t ≥ 0, and ∫_0^t u(s)y(s)ds = ∫_0^t u²(s)ds ≥ 0: thus the system is passive with zero bias (i.e., β = 0). Let us now take u0 = 1, u(t) = e^t, so that y(t) = y0 e^t = y0 u(t) for all t ≥ 0. It follows that ∫_0^t u(s)y(s)ds = ∫_0^t y0 e^{2s}ds = (y0/2)(e^{2t} − 1). If y0 < 0, one obtains ∫_0^t u(s)y(s)ds → −∞ as t → +∞, and there does not exist any β such that (2.1) holds true: the system is not passive in the sense of Definition 2.1, since there exist inputs, times, and initial data such that it does not satisfy (2.1). This allows us to guess that uncontrollable/unobservable unstable modes may create trouble.
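The divergence of the supply integral in this example is easy to reproduce numerically; the step size and the time horizon in the sketch below are arbitrary choices of ours.

```python
import numpy as np

# With u(t) = e^t (u0 = 1) and y(t) = y0 e^t, the supply integral
# int_0^t u(s)y(s)ds equals (y0/2)(e^{2t} - 1), which diverges to
# -infinity whenever y0 < 0: no beta as in (2.1) can exist.
def supply_integral(t, y0, n=200001):
    s = np.linspace(0.0, t, n)
    w = np.exp(s) * (y0 * np.exp(s))                           # u(s) * y(s)
    return float(((w[:-1] + w[1:]) / 2.0 * np.diff(s)).sum())  # trapezoid rule

y0 = -1.0
approx = supply_integral(10.0, y0)
exact = (y0 / 2.0) * (np.exp(20.0) - 1.0)
print(np.isclose(approx, exact, rtol=1e-4))  # True
print(approx < -1e8)                         # True: unbounded below as t grows
```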
Theorem 2.45 extends to multivariable systems:
Theorem 2.48 The rational function H(s) ∈ Cm×m is positive real if and only if:
• H(s) has no poles in Re[s] > 0.
• H(jω) + H⋆(jω) ⪰ 0 for all positive real ω such that jω is not a pole of H(·).
• If jω0, finite or infinite, is a pole of H(·), it is a simple pole, and the corresponding residue K0 = lim_{s→jω0}(s − jω0)H(s) if ω0 < +∞, or K∞ = lim_{ω→∞} H(jω)/(jω) if ω0 = ∞, is a positive semi-definite Hermitian matrix.
By “H(s) has no poles”, we mean that “no element of H(s) has a pole”. Or, we say that H(s) has a pole at s0 if some element of H(s) has a pole at s = s0. Notice that jω is a pole of H(·) if denominators contain terms like s² + a, a ≥ 0, whence −ω² + a = 0 for ω = ±√a. We refer the reader to [26, Theorem 2.7.2] for the proof.
Remark 2.51 Theorem 2.45 has an infinite-dimensional counterpart, see [30, Theo-
rem 3.7].
Extensions of the above bounded realness results towards the MIMO case are worth
stating. Let us start with Definition 2.29 which extends to matrix functions G(s) as
follows:
Theorem 2.53 ([26, Theorem 8.4.7]) Let G(s) ∈ Cm×m be a bounded real transfer matrix, with Im − G(s) invertible almost everywhere. Then H(s) ≜ (Im + G(s))(Im − G(s))^{−1} is a positive real transfer matrix. Conversely, if H(s) ∈ Cm×m is a positive real transfer matrix, then G(s) = (H(s) + Im)^{−1}(H(s) − Im) always exists and it is bounded real.
Theorem 2.54 ([32, Theorem 2.8]) (i) Let H (s) = C(s In − A)−1 B + D ∈ Cm×m
be a square transfer function, and let H (s) be positive real with Im + D full
6 In the case of square matrices, the poles and their multiplicities can be determined from the fact that det(H(s)) = c·n(s)/d(s) for some polynomials n(s) and d(s), after possible cancelation of the common factors. The roots of n(s) are the zeroes of H(s), the roots of d(s) are the poles of H(s) [31, Corollary 2.1].
7 Also called in this particular case the Cayley transformation.
2.12 Examples
The study of PR transfer functions was first motivated by circuits [1–3, 6, 7], and
Brune proved that every PR transfer function with finitely many poles and zeroes, can
be realized by a network [6, Theorem VIII, p. 68]. Let us describe several mechanical
systems which illustrate the above developments.
An interesting and important type of system is a motor that is connected to a load with an elastic transmission. The motor has moment of inertia Jm, the load has moment of inertia JL, while the transmission has spring constant K and damper coefficient D. The dynamics of the motor is given by

Jm θ̈m(t) = Tm(t) − TL(t), (2.110)
where θm (·) is the motor angle, Tm (·) is the motor torque, which is considered to be
the control variable, and TL (·) is the torque from the transmission. The dynamics of
the load is
JL θ̈L(t) = TL(t). (2.111)

The transmission torque is TL(t) = D[θ̇m(t) − θ̇L(t)] + K[θm(t) − θL(t)], which gives

(θL/θm)(s) = (1 + 2Z s/Ω1)/(1 + 2Z s/Ω1 + s²/Ω1²), (2.114)

where Ω1² = K/JL and 2Z/Ω1 = D/K.
. By adding the dynamics of the motor and the load we
get
Jm θ̈m (t) + JL θ̈ L (t) = Tm (t), (2.115)
which leads to
Jm s² θm(s) + JL s² [(1 + 2Z s/Ω1)/(1 + 2Z s/Ω1 + s²/Ω1²)] θm(s) = Tm(s), (2.116)

where J = Jm + JL is the total inertia of motor and load, the resonant frequency ω1 is given by ω1² = Ω1²/(1 − JL/J) = (J/Jm)Ω1², and the relative damping is given by ζ = Z ω1/Ω1. The parameters ω1 and ζ depend on both motor and load parameters, while the parameters Ω1 and Z depend only on the load. The main
observation in this development is the fact that Ω1 < ω1. This means that the transfer function θm(s)/Tm(s) has a complex conjugate pair of zeros with resonant frequency Ω1, and a pair of poles at the somewhat higher resonant frequency ω1. The frequency response is shown in Fig. 2.13 when K = 20, Jm = 20, JL = 15 and D = 0.5. Note that the elasticity does not give any negative phase contribution. By multiplying the transfer functions θL(s)/θm(s) and θm(s)/Tm(s), the transfer function

(θL/Tm)(s) = (1 + 2Z s/Ω1)/(J s²(1 + 2ζ s/ω1 + s²/ω1²)) (2.118)
is found from the motor torque to the load angle. The resulting frequency response is
shown in Fig. 2.14. In this case the elasticity results in a negative phase contribution
for frequencies above ω1 .
[Fig. 2.13: amplitude (dB) and phase of θm(jω)/Tm(jω) versus ω (rad/s). Fig. 2.14: amplitude (dB) and phase of θL(jω)/Tm(jω) versus ω (rad/s).]
The total energy of the system is

V(ωm, ωL, θL, θm) = (1/2)Jm ωm² + (1/2)JL ωL² + (1/2)K[θL − θm]², (2.119)
where ωm(t) = θ̇m(t) and ωL(t) = θ̇L(t). The rate of change of the total energy is equal to the power supplied from the control torque Tm(t) minus the power dissipated in the system. This is written

V̇(t) = ωm(t)Tm(t) − D[ωL(t) − ωm(t)]². (2.120)
We see that the power dissipated in the system is D[ω L (t) − ωm (t)]2 which is the
power loss in the damper. Clearly, the energy function V (t) ≥ 0 and the power loss
satisfy D[Δω(t)]2 ≥ 0. It follows that
∫_0^t ωm(s)Tm(s)ds = V(t) − V(0) + ∫_0^t D[Δω(s)]²ds ≥ −V(0), (2.121)
which implies that the system with input Tm (·) and output ωm (·) is passive. It follows
that Re[h m ( jω)] ≥ 0 for all ω ∈ [−∞, +∞]. From energy arguments we have been
able to show that
−180° ≤ ∠(θm/Tm)(jω) ≤ 0°. (2.122)
2.12.2.1 Passivity
Consider a motor driving n inertias in a serial connection with springs and dampers.
Denote the motor torque by Tm and the angular velocity of the motor shaft by ωm .
The energy in the system is
V(ωm, θm, θLi) = (1/2)Jm ωm² + (1/2)K01(θm − θL1)² + (1/2)JL1 ωL1² + (1/2)K12(θL1 − θL2)² + · · ·
Clearly, V (·) ≥ 0. Here Jm is the motor inertia, ω Li is the velocity of inertia JLi ,
while K i−1,i is the spring connecting inertia i − 1 and i and Di−1,i is the coefficient
of the damper in parallel with Ki−1,i. The index runs over i = 1, 2, . . . , n. The system therefore satisfies the equation V̇(t) = Tm(t)ωm(t) − d(t), where

d(t) = Σ_{i=1}^{n} Di−1,i[ωL,i−1(t) − ωLi(t)]², with ωL0 ≜ ωm,

represents the power that is dissipated in the dampers: it follows that the system with input Tm and output ωm is passive. If the system is linear, then the passivity implies that the transfer function hm(s) = ωm(s)/Tm(s) has the phase constraint |∠hm(jω)| ≤ 90°,
for all ω ∈ [−∞, +∞]. It is quite interesting to note that the only information that
is used to find this phase constraint on the transfer function, is that the system is
linear, and that the load is made up from passive mechanical components. It is not
even necessary to know the order of the system dynamics, as the result holds for an
arbitrary n.
In this section, we will see how passivity considerations can be used as a guideline
for how to control two motors that actuate on the same load, through elastic inter-
connections consisting of inertias, springs, and dampers as shown in Fig. 2.15. The
motors have inertias Jmi , angle qmi , and motor torque Tmi where i ∈ {1, 2}. Motor
1 is connected to the inertia JL1 with a spring with stiffness K 11 and a damper D11 .
Motor 2 is connected to the inertia JL2 with a spring with stiffness K 22 and a damper
D22 . Inertia JLi has angle q Li . The two inertias are connected with a spring with
stiffness K12 and a damper D12. The total energy of the system is

V = (1/2)[Jm1 q̇m1² + Jm2 q̇m2² + JL1 q̇L1² + JL2 q̇L2² + K11(qm1 − qL1)² + K22(qm2 − qL2)² + K12(qL1 − qL2)²],
and the time derivative of the energy when the system evolves is

V̇(t) = Tm1 q̇m1(t) + Tm2 q̇m2(t) − D11(q̇m1(t) − q̇L1(t))² − D22(q̇m2(t) − q̇L2(t))² − D12(q̇L1(t) − q̇L2(t))².
It is seen that the system is passive from (Tm1 , Tm2 )T to (q̇m1 , q̇m2 )T . The system
is multivariable, with controls Tm1 and Tm2 and outputs qm1 and qm2 . A controller
can be designed using multivariable control theory, and passivity might be a useful
tool in this connection. However, here we will close one control loop at a time
to demonstrate that independent control loops can be constructed using passivity
arguments. The desired outputs are assumed to be qm1 = qm2 = 0. Consider the PD
controller
Tm2 = −K p2 qm2 − K v2 q̇m2 (2.124)
for motor 2 which is passive from q̇m2 to −Tm2 . The mechanical analog of this
controller is a spring with stiffness K p2 and a damper K v2 which is connected between
the inertia Jm2 and a fixed point. The total energy of the system with this mechanical
analog is

V = (1/2)[Jm1 q̇m1² + Jm2 q̇m2² + JL1 q̇L1² + JL2 q̇L2² + K11(qm1 − qL1)² + K22(qm2 − qL2)² + K12(qL1 − qL2)² + Kp2 qm2²],

and its time derivative along the system's evolution is

V̇(t) = Tm1(t)q̇m1(t) − D11(q̇m1(t) − q̇L1(t))² − D22(q̇m2(t) − q̇L2(t))² − D12(q̇L1(t) − q̇L2(t))² − Kv2 q̇m2²(t).
It follows that the system with input Tm1 and output q̇m1 is passive when the PD
controller is used to generate the control Tm2 . The following controller can then be
used:
T1(s) = Kv1 β(1 + Ti s)/(1 + βTi s) q̇1(s) = Kv1[1 + (β − 1)/(1 + βTi s)] s q1(s). (2.125)
This is a PI controller with limited integral action if q̇1 is considered as the output
of the system. The resulting closed-loop system will be BIBO stable independently
from system and controller parameters, although in practice, unmodeled dynamics
and motor torque saturation dictate some limitations on the controller parameters.
As the system is linear, stability is still ensured even if the phase of the loop transfer function becomes less than −180° for certain frequency ranges. Integral effect from
the position can therefore be included for one of the motors, say motor 1. The resulting
controller is

T1(s) = Kp1 (1 + Ti s)/(Ti s) q1(s) + Kv1 s q1(s). (2.126)
In this case, the integral time constant Ti must be selected, e.g., by Bode diagram
techniques so that stability is ensured.
Consider again the definition of Positive Real transfer function in Definition 2.33. The
following is a standard definition of Strictly Positive Real (SPR) transfer functions,
as given, for instance, in [35].8
H(s) = 1/((σ + λ) + jω) = (σ + λ − jω)/((σ + λ)² + ω²). (2.128)

Note that for all Re[s] = σ > 0 we have Re[H(s)] ≥ 0. Therefore, H(s) is PR. Furthermore, H(s − ε) for ε = λ/2 is also PR, and thus H(s) is also SPR.
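The shift argument above is easy to check on a grid (our own sketch; the grid and the value λ = 2 are arbitrary choices): H(s − λ/2) stays positive real, while the same kind of shift applied to the integrator 1/s moves its pole into Re[s] > 0 and destroys positive realness.

```python
import numpy as np

# For H(s) = 1/(s + lam) with lam = 2, the shift eps = lam/2 gives
# H(s - eps) = 1/(s + 1), still PR; for the integrator H(s) = 1/s any
# shift eps > 0 places the pole at s = eps in the right half plane.
lam = 2.0
eps = lam / 2.0

sigma = np.linspace(0.0, 10.0, 50)
omega = np.linspace(-100.0, 100.0, 401)
S = sigma[:, None] + 1j * omega[None, :]

shifted = 1.0 / ((S - eps) + lam)          # H(s - eps) for H(s) = 1/(s + lam)
print(bool(np.all(np.real(shifted) > 0)))  # True: H(s - eps) is PR, so H is SPR

shifted_int = 1.0 / (S - 0.01)             # H(s - eps) for H(s) = 1/s, eps = 0.01
print(bool(np.any(np.real(shifted_int) < 0)))  # True: 1/s is PR but not SPR
```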
Example 2.60 Consider now a simple integrator (i.e., take λ = 0 in the previous example):

H(s) = 1/s = 1/(σ + jω) = (σ − jω)/(σ² + ω²). (2.129)
In view of Theorem 2.8, one may wonder whether an SPR transfer function is ISP,
OSP. See Examples 4.69, 4.71, 4.72.
8 As we shall see later, such a definition may not be entirely satisfactory, because some non-regular
transfer matrices can be SPR according to it, while they should not, see Example 2.67, see also the
paragraph after Definition 2.77.
The definition of SPR transfer functions given above is in terms of conditions in the
s complex plane. Such conditions become relatively difficult to be verified as the
order of the system increases. The following theorem establishes conditions in the
frequency-domain ω for a transfer function to be SPR.
Theorem 2.61 (Strictly Positive Real [36]) A rational transfer function h(s) ∈ C is SPR if:
1. h(s) is analytic in Re[s] ≥ 0, i.e., the system is asymptotically stable.
2. Re[h( jω)] > 0, for all ω ∈ (−∞, ∞) and
3. a. lim_{ω²→∞} ω² Re[h(jω)] > 0 when r = 1,
   b. lim_{ω²→∞} Re[h(jω)] > 0 and lim_{|ω|→∞} h(jω)/(jω) > 0 when r = −1.
Proof Necessity: If h(s) is SPR, then from Definition 2.58, h(s − ε) is PR for some ε > 0. Hence, there exists an ε* > 0 such that for each ε ∈ [0, ε*), h(s − ε) is analytic in Re[s] ≥ 0. Therefore, there exists a real rational function W(s) such that [26]

h(s − ε) + h(−s + ε) = W(s − ε)W(−s + ε), (2.130)

where W(s) is analytic and nonzero for all s in Re[s] > −ε. Let s = ε + jω; then
from (2.130) we have
2Re [h( jω)] = |W ( jω)|2 > 0, for all ω ∈ (−∞, ∞). (2.131)
h(s) = (bm s^m + bm−1 s^{m−1} + · · · + b1 s + b0)/(s^n + an−1 s^{n−1} + · · · + a1 s + a0). (2.132)
If m = n − 1, i.e., r = 1, bn−1 ≠ 0, then from (2.132) it follows that bn−1 > 0 and an−1bn−1 − bn−2 − εbn−1 > 0 for h(s − ε) to be PR, so that 3.a. follows. If m = n + 1, i.e., r = −1, then since

lim_{|ω|→∞} h(jω − ε)/(jω) = bn+1 ≥ 0,

then bn+1 > 0, bn − bn+1an−1 ≥ εbn+1 > 0, and therefore 3.b. follows directly.
Sufficiency: Let (A, b, c, d, f) be a minimal state representation of h(s), i.e.,
Re [h( jω − ε)] = Re [h( jω)] + εRe [g( jω − ε)] > k2 − εk1 > 0 (2.138)
for all ω ∈ (−∞, ∞) and 0 < ε < min {ε∗ , k2 /k1 } . Hence, h(s − ε) is PR and there-
fore h(s) is SPR.
If r = 1, then Re[h(jω)] > k3 > 0 for all |ω| < ω0 and ω² Re[h(jω)] > k4 > 0 for all |ω| ≥ ω0, where ω0, k3, k4 are finite positive constants. Similarly, one has ω² Re[g(jω − ε)] < k5 and |Re[g(jω − ε)]| < k6 for all ω ∈ (−∞, ∞) and some finite positive constants k5, k6. Therefore, Re[h(jω − ε)] > k3 − εk6 for all |ω| < ω0 and ω² Re[h(jω − ε)] > k4 − εk5 for all |ω| ≥ ω0. Consequently, it follows that for 0 < ε < min{k3/k6, ε*, k4/k5} and for all ω ∈ (−∞, ∞), Re[h(jω − ε)] > 0. Hence, h(s − ε) is PR and therefore h(s) is SPR.
If r = −1, then d > 0 and therefore
Hence, for each ε in the interval [0, min{ε*, d/k1}), Re[h(jω − ε)] > 0 for all ω ∈ (−∞, ∞). Since lim_{ω→∞} h(jω)/(jω) = f > 0, then lim_{ω→∞} h(jω − ε)/(jω) = f > 0, and therefore all the conditions of Definition 2.33 and Theorem 2.45 are satisfied by h(s − ε); hence h(s − ε) is PR, i.e., h(s) is SPR, and the sufficiency proof is complete.
Remark 2.62 It should be noted that when r = 0, conditions 1 and 2 of the Theo-
rem, or 1 and Re[h( jω)] > δ > 0 for all ω ∈ [−∞, +∞], are both necessary and
sufficient for h(s) to be SPR.
Notice that H (s) in (2.127) satisfies condition 3.a., but H (s) in (2.129) does not.
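The conditions of Theorem 2.61 can be checked numerically on a concrete relative-degree-one function. The example h(s) = (s + 2)/((s + 1)(s + 3)) below is our own; for it, Re[h(jω)] = (6 + 2ω²)/((3 − ω²)² + 16ω²) > 0 and ω² Re[h(jω)] → 2 > 0, so conditions 2 and 3.a hold.

```python
import numpy as np

# Frequency-domain check of Theorem 2.61 for a hypothetical r = 1 example.
h = lambda s: (s + 2.0) / ((s + 1.0) * (s + 3.0))

w = np.logspace(-3, 6, 2000)
re = np.real(h(1j * w))

print(bool(np.all(re > 0)))              # condition 2: Re[h(jw)] > 0 for all w
tail = (w**2 * re)[-1]                   # evaluate w^2 Re[h(jw)] at large w
print(np.isclose(tail, 2.0, rtol=1e-3))  # condition 3.a: the limit is 2 > 0
```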
Let us now give a multivariable version of Theorem 2.61. Though there seems to be
a consensus about the definition of an SPR transfer function in the SISO case m = 1
in the literature, such is not quite the case for the MIMO case m ≥ 2, where several
definitions and several characterization results have been published since early works
in the 1970s. The following has been published in [37].
Definition 2.63 ([37, Definition 1]) A transfer function H(s) ∈ Cm×m is SPR if there exists a scalar ε > 0 such that H(s) is analytic in a region for which Re[s] ≥ −ε and

H(jω − ε) + H⋆(jω − ε) ⪰ 0 for all ω ∈ R. (2.140)

One says that H(s) is regular (non-singular) if det(H(jω) + H⋆(jω)) is not identically zero on R.
Apart from the regularity condition, Definitions 2.63 and 2.58 are the same (in fact, Definition 2.63 is sometimes stated as a lemma which is a consequence of Definition 2.58 with the normal rank condition [38, Lemma 2]). The regularity as stated in Definition 2.63 is needed in the frequency-domain characterizations of both next results. As noted in [38], without the regularity condition, the matrix transfer function H(s) = (1/(s + 1)) [[1, 1], [1, 1]] would be SPR [39, Remark 2.1]. The following is true.
Lemma 2.64 ([37, Lemma 1]) The transfer function H(s) ∈ Cm×m is SPR and regular if and only if the following conditions hold:
1. There exists β > 0 such that H(s) is analytic in the region {s ∈ C | Re[s] > −β}.
2. H(jω) + H⋆(jω) ≻ 0 for all ω ∈ R.
3. lim_{|ω|→+∞} ω^{2ρ} det(H(jω) + H⋆(jω)) ≠ 0, (2.141)
where ρ is the dimension of Ker(H(∞) + H⋆(∞)). In either case, the limit is positive.
Theorem 2.65 Let H(s) ∈ Cm×m be a proper rational transfer matrix, and suppose that det(H(s) + H^T(s)) is not identically zero. Then H(s) is SPR if and only if:
• H(s) has all its poles with negative real parts,
• H(jω) + H^T(−jω) ≻ 0 for all ω ∈ R, and one of the following three conditions is satisfied9:
– H(∞) + H^T(∞) ≻ 0,
– H(∞) + H^T(∞) = 0 and lim_{ω→∞} ω²[H(jω) + H^T(−jω)] ≻ 0,
– H(∞) + H^T(∞) ⪰ 0 (but neither zero nor nonsingular), and there exist positive constants σ and δ such that
The determinant condition means that H(s) has full normal rank, i.e., it is regular. The side condition (2.142) is used in [38], where it is argued that it allows one to establish a counterpart for negative imaginary systems, due to its conceptual simplicity. However, both side conditions in Theorem 2.65 and in Lemma 2.64 are equivalent to each other (a direct proof of this fact may be found in [42]). The side condition can be interpreted as a condition on the spectral density ω²[F(jω) + F^T(−jω)], which should be bounded away from zero for sufficiently large |ω|. Another formulation of the side condition has been presented in the literature [43], which reads as
It is pointed out in [37] that the limit in (2.143) exists only if D + D^T = 0, i.e., if ρ = m (if D = 0, such systems are strictly proper, with H(∞) = 0). It is therefore a bit annoying to apply the side condition in (2.143) when 0 < ρ < m (notice that Wen's seminal result in Lemma 3.16 deals with the cases D = 0 and D ≻ 0 only, hence avoids the controversies raised in [37]).
Example 2.66 The system in Example 2.50 cannot be SPR, because one element
has a pole at s = 0.
Example 2.67 Let H(s) = [[1/(s + a), 1/(s + b)], [1/(s + c), 1/(s + d)]]. Let us assume that a, b, c, and d are all different from each other, and are all positive. Calculating det(H(s)), one finds that the system has four simple poles, at −a, −b, −c, and −d. Calculations yield
9 As noted in [38], the third condition encompasses the other two, so that the first and the second conditions are presented only for the sake of clarity.
H(jω) + H^T(−jω) = [[2a/(a² + ω²), (cb² + bc² + (b + c)ω²)/((b² + ω²)(c² + ω²))], [(cb² + bc² + (b + c)ω²)/((b² + ω²)(c² + ω²)), 2d/(d² + ω²)]] (symmetric part)
+ [[0, jω(b² − c²)/((b² + ω²)(c² + ω²))], [jω(c² − b²)/((b² + ω²)(c² + ω²)), 0]] (skew-symmetric part). (2.144)

Thus, H(jω) + H^T(−jω) ≻ 0 if and only if

4ad/((a² + ω²)(d² + ω²)) − [(cb² + bc² + (b + c)ω²)/((b² + ω²)(c² + ω²))]² > 0.

One has H(∞) + H^T(∞) = 0, hence ρ = 2, and

lim_{ω→∞} ω²[H(jω) + H^T(−jω)] = [[2a, b + c], [b + c, 2d]],

which is ≻ 0 if and only if 4ad − (b + c)² > 0, a > 0, d > 0. Under all these conditions, H(s) is SPR, since it complies with the conditions of Theorem 2.65. Recall that if we content ourselves with Definition 2.58, then the case with a = b = c = d = 1 is SPR, because Definition 2.58 does not require regularity.
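The limit computed in Example 2.67 can be confirmed numerically. The parameter values below (a = d = 2, b = c = 1) are our own choice and satisfy 4ad − (b + c)² = 12 > 0.

```python
import numpy as np

# Numerical check of the limit matrix in Example 2.67.
a, b, c, d = 2.0, 1.0, 1.0, 2.0

def Hsum(w):
    """H(jw) + H(-jw)^T for H(s) = [[1/(s+a), 1/(s+b)], [1/(s+c), 1/(s+d)]]."""
    H = lambda s: np.array([[1/(s+a), 1/(s+b)], [1/(s+c), 1/(s+d)]])
    return H(1j*w) + H(-1j*w).T

w = 1e5
limit = (w**2) * Hsum(w)                   # approximates the w -> infinity limit
target = np.array([[2*a, b + c], [b + c, 2*d]])

print(np.allclose(limit.real, target, rtol=1e-3))     # True: matches the text
print(bool(np.all(np.linalg.eigvalsh(target) > 0)))   # True: 4ad > (b+c)^2
```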
Example 2.68 ([37, Example 1]) The transfer matrix H(s) = [[1/(s + 1), 1/(s + 1)], [−1/(s + 1), 1/(s + 1)]] is not SPR. Indeed, det(H(jω − ε) + H^T(−jω − ε)) = 4[(1 − ε)² − ω²]/[(1 − ε)² + ω²]², which is negative for large enough ω.
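A rough numerical companion to Example 2.68, taking ε = 0 and assuming the reading H(s) = [[1/(s+1), 1/(s+1)], [−1/(s+1), 1/(s+1)]] of the example's matrix: the determinant of H(jω) + H^T(−jω) works out to 4(1 − ω²)/(1 + ω²)², which changes sign at ω = 1.

```python
import numpy as np

def det_sum(w):
    """det(H(jw) + H(-jw)^T) for the (assumed) matrix of Example 2.68."""
    H = lambda s: np.array([[1/(s+1), 1/(s+1)], [-1/(s+1), 1/(s+1)]])
    M = H(1j*w) + H(-1j*w).T
    return np.linalg.det(M).real

print(det_sum(0.5) > 0)   # True: still positive below w = 1
print(det_sum(2.0) < 0)   # True: negative for large enough w, so not SPR
```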
In general, before checking all the conditions for a specific transfer function to be
PR or SPR, it is useful to check first that it satisfies a set of necessary conditions.
The following are necessary conditions for a (single-input/single-output) system to
be PR (SPR):
Remark 2.69 In view of the above necessary conditions, it is clear that unstable systems or nonminimum-phase systems are not positive real. Furthermore, proper transfer functions can be PR only if their relative degree is 0 or 1. This means, for instance, that a double integrator, i.e., H(s) = 1/s², is not PR. This remark will turn out
2.13 Strictly Positive Real (SPR) Systems 59
H(s) = Ls + C/s + H0(s) + Σ_i Ki/(s − jωi), (2.145)
Theorem 2.72 ([55]) Consider H(s) = C(sIₙ − A)⁻¹B ∈ C. H(s) is SPR if and only if (1) CAB < 0, (2) CA⁻¹B < 0, (3) A is asymptotically stable, and (4) $A\left(I_n - \frac{BC}{CAB}\right)A$ has no eigenvalue on the open negative real axis (−∞, 0). Consider now H(s) = C(sIₙ − A)⁻¹B + D ∈ C, D > 0. H(s) is SPR if and only if (1) A is asymptotically stable, and (2) the matrix $\left(A - \frac{BC}{D}\right)A$ has no eigenvalue on the closed negative real axis (−∞, 0].

Stability means here that all the eigenvalues are in the open left half of the complex plane Re[s] < 0, and may be called strict stability. An interpretation of SPRness is that (A, B, C, D) with D > 0 is SPR if and only if the matrix pencil $A^{-1} + \lambda\left(A - \frac{BC}{D}\right)$ is nonsingular for all λ > 0 [55]. See also [58, Theorem 6.2] for a similar result.
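A sketch of how the first (strictly proper) test of Theorem 2.72 might be coded; the function name, tolerances, and the numerical test for eigenvalues on the open negative real axis are our own choices:

```python
import numpy as np

def spr_spectral_test(A, B, C):
    """Spectral SPR test of Theorem 2.72 for strictly proper SISO
    H(s) = C(sI - A)^{-1}B: CAB < 0, CA^{-1}B < 0, A Hurwitz, and
    A(I - BC/(CAB))A with no eigenvalue on (-inf, 0)."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.asarray(B, dtype=float).reshape(-1, 1)
    C = np.asarray(C, dtype=float).reshape(1, -1)
    n = A.shape[0]
    cab = float(C @ A @ B)
    cainvb = float(C @ np.linalg.solve(A, B))
    hurwitz = bool(np.all(np.linalg.eigvals(A).real < 0))
    if not (cab < 0 and cainvb < 0 and hurwitz):
        return False
    M = A @ (np.eye(n) - (B @ C) / cab) @ A
    eig = np.linalg.eigvals(M)
    on_open_neg_axis = np.any((np.abs(eig.imag) < 1e-9) & (eig.real < -1e-9))
    return not bool(on_open_neg_axis)

spr = spr_spectral_test([[-1.0]], [1.0], [1.0])            # 1/(s+1): SPR
wspr_only = spr_spectral_test([[0.0, 1.0], [-2.0, -3.0]],  # (s+3)/((s+1)(s+2)):
                              [0.0, 1.0], [3.0, 1.0])      # CAB = 0, so not SPR
```

The second example, (s+3)/((s+1)(s+2)), is the boundary case α = 3 of Example 2.86 below: it is only weakly SPR, and the test rejects it because CAB = 0.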
So-called state space symmetric systems, satisfying A = Aᵀ, B = Cᵀ, D = Dᵀ, have the following property.

Theorem 2.73 ([59]) Let the system (A, B, C, D) be symmetric, with minimal realization. Then it is positive real if and only if A ⪯ 0 and D ⪰ 0. Assume further that it has no poles at the origin. Then it is positive real if and only if A ≺ 0 and D ⪰ 0.
One of the important properties of positive real systems is that the inverse of a PR
system is also PR. In addition, the interconnection of PR systems in parallel or in
negative feedback (see Fig. 2.16) inherits the PR property. More specifically, we have
the following properties (see [36]):
• H(s) is PR (SPR) ⇔ 1/H(s) is PR (SPR).
• If H₁(s) and H₂(s) are SPR, so is H(s) = α₁H₁(s) + α₂H₂(s) for α₁ ≥ 0, α₂ ≥ 0, α₁ + α₂ > 0.
• If H₁(s) and H₂(s) are SPR, so is the transfer function of their negative feedback interconnection $H(s) = \frac{H_1(s)}{1 + H_1(s)H_2(s)}$.
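These closure properties can be illustrated numerically for scalar examples. The two SPR functions below are our own choices, and positivity of the real part is only sampled on a frequency grid (a sanity check, not a proof):

```python
import numpy as np

w = np.logspace(-3, 3, 4000)
s = 1j * w

H1 = 1.0 / (s + 1.0)          # SPR
H2 = (s + 2.0) / (s + 3.0)    # SPR

inverse = 1.0 / H1                   # 1/H1(s) = s + 1
combo = 0.3 * H1 + 0.7 * H2          # alpha1*H1 + alpha2*H2 with alpha_i >= 0
feedback = H1 / (1.0 + H1 * H2)      # negative feedback interconnection

all_have_positive_real_part = bool(
    all(np.all(X.real > 0.0) for X in (inverse, combo, feedback))
)
```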
Remark 2.74 Note that a transfer function H (s) need not be proper to be PR or SPR
(for, if it is PR or SPR and proper, its inverse is also PR or SPR). For instance, the non-
proper transfer function s is PR. See also Example 4.70 with H (s) = s + a, a > 0.
[Fig. 2.16: negative feedback interconnection of H₁(s) and H₂(s)]
¹⁰ This is also proved in [26, Problem 5.2.4], which uses the fact that if (A, B, C, D) is a minimal realization of H(s) ∈ C^{m×m}, then (A − BD⁻¹C, BD⁻¹, −D⁻¹C, D⁻¹) is a minimal realization of H⁻¹(s). Then use the KYP Lemma (next chapter) to show that H⁻¹(s) is positive real.
In the multivariable case, one replaces the second condition by H(jω) + Hᵀ(−jω) ≻ 0 for all ω ∈ R. It is noteworthy that a transfer function may be WSPR but not SPR; see an example below. In case H(s) is regular, WSPRness may be seen as an intermediate notion between PR and SPR. However, if regularity is lacking, H(s) may be SPR while not WSPR, as $H(s) = \frac{1}{s+1}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}$ proves (this is SPR according to Definition 2.58, but it is not WSPR according to Definition 2.77, due to the fact that Definition 2.77 imposes regularity¹¹). See Sect. 5.4 for more analysis of WSPR systems, which shows in particular, in view of Examples 4.69 and 4.71, that WSPR does not imply SPR.
Definition 2.78 (Strong SPR) A real rational function H (s) ∈ C is strongly SPR
(SSPR) if
1. H (s) is analytic in Re[s] ≥ 0.
2. Re[H ( jω)] ≥ δ > 0, for all ω ∈ [−∞, ∞] and some δ ∈ R.
Notice that SSPR implies SPR (see Theorem 2.65), while, as noted above, WSPR does not. In fact, when (A, B, C, D) is a realization of H(s), i.e., H(s) = C(sIₙ − A)⁻¹B + D, with D + Dᵀ ≻ 0, then SPR and SSPR are equivalent notions (one sometimes defines SSPR functions as SPR functions such that condition 2 in Definition 2.78 holds). In the multivariable case (rational matrices in C^{m×m}), the second condition for SSPRness becomes H(jω) + Hᵀ(−jω) ≻ 0 for all ω ∈ R and H(∞) + Hᵀ(∞) ≻ 0, or H(jω) + Hᵀ(−jω) ⪰ δIₘ for all ω ∈ [−∞, ∞] and for some δ > 0. From Theorem 2.8, it can be seen that an SSPR transfer function is ISP, and from Theorem 2.25 the same holds for transfer matrices. If the system is proper and has a minimal state space realization (A, B, C, D), then H(s) + Hᵀ(−s) = C(sIₙ − A)⁻¹B − Bᵀ(sIₙ + Aᵀ)⁻¹Cᵀ + D + Dᵀ, so that the second condition implies D + Dᵀ ≻ 0, hence D ≠ 0. This may also be deduced from the fact that $C(sI_n - A)^{-1}B + D = \sum_{i=1}^{+\infty} CA^{i-1}Bs^{-i} + D$ (→ D as s → ∞). The next result may be useful to characterize SSPR matrix functions.
Lemma 2.79 ([56]) A proper rational matrix H(s) ∈ C^{m×m} is SSPR if and only if its leading principal submatrices Hᵢ(s) ∈ C^{i×i} are proper rational SSPR matrices, for i = 1, ..., m − 1, and det(H(jω) + Hᵀ(−jω)) > 0 for all ω ∈ R.
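Numerically, the SSPR condition can be probed by estimating δ = inf_ω λ_min(H(jω) + Hᵀ(−jω)), including the ω → ∞ limit D + Dᵀ. The helper below is a sketch with our own sampling grid and naming:

```python
import numpy as np

def sspr_margin(A, B, C, D, w=np.logspace(-3, 4, 400)):
    """Estimate inf_w lambda_min(H(jw) + H(-jw)^T) for the real rational
    H(s) = C(sI - A)^{-1}B + D; SSPR needs this infimum, including the
    w -> infinity limit D + D^T, to be positive."""
    A, B, C, D = (np.atleast_2d(np.asarray(M, dtype=float)) for M in (A, B, C, D))
    n = A.shape[0]
    vals = []
    for wi in w:
        H = C @ np.linalg.solve(1j * wi * np.eye(n) - A, B) + D
        S = H + H.conj().T          # equals H(jw) + H(-jw)^T for real coefficients
        vals.append(float(np.linalg.eigvalsh(S).min()))
    vals.append(float(np.linalg.eigvalsh(D + D.T).min()))   # the w = infinity limit
    return min(vals)

m_with_D = sspr_margin([[-1.0]], [[1.0]], [[1.0]], [[1.0]])  # 1/(s+1) + 1: margin 2
m_no_D = sspr_margin([[-1.0]], [[1.0]], [[1.0]], [[0.0]])    # 1/(s+1): margin 0
```

With the feedthrough D = 1 the margin equals λ_min(D + Dᵀ) = 2 > 0; without feedthrough the margin collapses to 0, illustrating why D + Dᵀ ≻ 0 is needed.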
The next lemma is close to Theorems 2.53 and 2.54.
Lemma 2.80 ([56]) Let G(s) ∈ C^{m×m} be a proper rational matrix satisfying det(Iₘ + G(s)) ≠ 0 for Re[s] ≥ 0. Then the proper rational matrix H(s) = (Iₘ + G(s))⁻¹(Iₘ − G(s)) ∈ C^{m×m} is SSPR if and only if G(s) is strictly bounded real.
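Lemma 2.80 can be illustrated in the scalar case, where (Iₘ + G)⁻¹(Iₘ − G) reduces to (1 − G)/(1 + G). The particular strictly bounded real G below (sup|G(jω)| = 1/2 < 1) is our own example, checked only on a sampled grid:

```python
import numpy as np

w = np.logspace(-4, 4, 4000)
s = 1j * w

G = 0.5 / (s + 1.0)            # strictly bounded real: sup_w |G(jw)| = 0.5 < 1
H = (1.0 - G) / (1.0 + G)      # scalar version of (I + G)^{-1}(I - G)

strict_small_gain = bool(np.abs(G).max() < 1.0)
delta = float(H.real.min())    # Re[H] = (1 - |G|^2)/|1 + G|^2 >= 1/3 > 0 here
```

Here the uniform lower bound δ ≈ 1/3 > 0 on Re[H(jω)] is exactly the SSPR-type condition of Definition 2.78.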
A quite similar result is stated in [33, Corollary 6.1], where the notions of (strongly) positive real balanced and (strictly) bounded real balanced systems are used. We have a further characterization of SSPR transfer matrices as follows [63, Theorem 9]:
¹¹ Thus, it would certainly be more rigorous either to augment Definition 2.58 with regularity or to modify Definition 2.77. This was pointed out to us by Augusto Ferrante.
Theorem 2.81 Let (A, B, C, D) be minimal. Then the rational matrix H(s) = C(sI − A)⁻¹B + D ∈ C^{m×m} is SSPR if and only if it is VSP.
For the proof we need the following lemma, which follows from item 2 in Theorem
2.25:
Lemma 2.82 Let (A, B, C, D) be minimal. Then the rational matrix H(s) = C(sI − A)⁻¹B + D ∈ C^{m×m} is SSPR if and only if it is ISP with A Hurwitz.
Proof of Theorem 2.81: (i) ⟹ Being SSPR, the system is ISP, and since A is Hurwitz it is L₂ BIBO stable (see Theorem 4.18). We now follow the proof of [63, Theorem 9] (see also [64]) to show that the system is VSP. Being ISP, there exist ν > 0 and β₁ such that¹² $\langle u_t, y_t\rangle \geq \nu \langle u_t, u_t\rangle + \beta_1$. Since the system is L₂ stable, there exist γ₂ > 0 and β₂ such that $\langle y_t, y_t\rangle \leq \gamma_2 \langle u_t, u_t\rangle + \beta_2$. Thus, there exist ε₁ > 0, ε₂ > 0 small enough such that ν − ε₁ − ε₂γ₂ ≥ 0, such that $\langle u_t, y_t\rangle - \varepsilon_1\langle u_t, u_t\rangle - \varepsilon_2\langle y_t, y_t\rangle = \langle u_t, y_t\rangle - \nu\langle u_t, u_t\rangle + (\nu-\varepsilon_1)\langle u_t,u_t\rangle - \varepsilon_2\langle y_t,y_t\rangle \geq \beta_1 + (\nu-\varepsilon_1)\langle u_t,u_t\rangle - \varepsilon_2(\gamma_2\langle u_t,u_t\rangle + \beta_2) = \beta_1 - \varepsilon_2\beta_2 + (\nu-\varepsilon_1-\varepsilon_2\gamma_2)\langle u_t,u_t\rangle \geq \beta_1 - \varepsilon_2\beta_2$. Therefore, $\langle u_t,y_t\rangle - \varepsilon_1\langle u_t,u_t\rangle - \varepsilon_2\langle y_t,y_t\rangle \geq \beta$ with β = β₁ − ε₂β₂: the system is VSP.

(ii) ⟸ Clearly, VSP implies ISP and OSP. In turn, OSP implies L₂ stability. Indeed, OSP means that $\int_0^t y^T(s)y(s)ds \leq \frac{1}{\varepsilon}\int_0^t u^T(s)y(s)ds - \frac{\beta}{\varepsilon}$ for ε > 0 and some β (see (2.1)). Now let us use the fact that $u^T y = (\lambda u)^T(\frac{1}{\lambda}y) \leq \frac{1}{2}\lambda^2 u^T u + \frac{1}{2\lambda^2}y^T y$ for any λ ≠ 0. We obtain $\int_0^t y^T(s)y(s)ds \leq \frac{\lambda^2}{2\varepsilon}\int_0^t u^T(s)u(s)ds + \frac{1}{2\varepsilon\lambda^2}\int_0^t y^T(s)y(s)ds - \frac{\beta}{\varepsilon}$, from which we infer $\big(1 - \frac{1}{2\varepsilon\lambda^2}\big)\int_0^t y^T(s)y(s)ds \leq \frac{\lambda^2}{2\varepsilon}\int_0^t u^T(s)u(s)ds - \frac{\beta}{\varepsilon}$. It suffices now to choose λ such that $1 - \frac{1}{2\varepsilon\lambda^2} > 0 \Leftrightarrow \lambda > \frac{1}{\sqrt{2\varepsilon}}$. Thus, VSP implies ISP and L₂ BIBO stability. The L₂ BIBO stability cannot hold if A has unstable eigenvalues (for there exist exponentially diverging outputs for zero input), hence A must be Hurwitz. Therefore, the system is SSPR by Lemma 2.82. ∎
Let us now illustrate the various definitions of PR, SPR, and WSPR functions on
examples.
$$H(s) = \frac{1}{s+\lambda}, \quad \text{with } \lambda > 0. \qquad (2.146)$$

$$H(j\omega) = \frac{1}{\lambda + j\omega} = \frac{\lambda - j\omega}{\lambda^2 + \omega^2}. \qquad (2.147)$$

¹² We use the notation $\langle f_t, g_t\rangle$ for $\int_0^t f^T(s)g(s)ds$.
Therefore,
$$\mathrm{Re}[H(j\omega)] = \frac{\lambda}{\lambda^2+\omega^2} > 0 \quad \text{for all } \omega \in (-\infty, \infty). \qquad (2.148)$$
Moreover, $\lim_{\omega^2\to\infty} \omega^2\,\mathrm{Re}[H(j\omega)] = \lim_{\omega^2\to\infty} \frac{\omega^2\lambda}{\lambda^2+\omega^2} = \lambda > 0$. Consequently, $\frac{1}{s+\lambda}$ is SPR. However, $\frac{1}{s+\lambda}$ is not SSPR because there does not exist a δ > 0 such that Re[H(jω)] > δ for all ω ∈ [−∞, ∞], since $\lim_{\omega^2\to\infty} \frac{\lambda}{\lambda^2+\omega^2} = 0$.
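The SPR-but-not-SSPR behaviour of 1/(s+λ) is easy to see numerically; λ = 2 is our arbitrary choice, and Re[H(jω)] is evaluated from the closed form (2.148):

```python
import numpy as np

lam = 2.0
w = np.logspace(-2, 5, 2000)
re_H = lam / (lam**2 + w**2)     # Re[H(jw)], cf. (2.148)

positive_everywhere = bool(np.all(re_H > 0.0))     # SPR-side condition holds
high_freq_limit = float((w**2 * re_H)[-1])         # ~ lam > 0: SPR limit test
uniform_bound_fails = bool(re_H.min() < 1e-6)      # no delta > 0 works: not SSPR
```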
$$H(s) = \frac{s+\alpha+\beta}{(s+\alpha)(s+\beta)}, \quad \alpha, \beta > 0. \qquad (2.149)$$
One computes
$$\mathrm{Re}[H(j\omega)] = \frac{\alpha\beta(\alpha+\beta)}{(\omega^2+\alpha^2)(\omega^2+\beta^2)} > 0, \quad \text{for all } \omega \in (-\infty, \infty), \qquad (2.150)$$
so H(s) is weakly SPR. However, H(s) is not SPR since
$$\lim_{\omega^2\to\infty} \frac{\omega^2\alpha\beta(\alpha+\beta)}{(\omega^2+\alpha^2)(\omega^2+\beta^2)} = 0. \qquad (2.151)$$
Example 2.86 ([65]) The transfer function $\frac{s+\alpha}{(s+1)(s+2)}$ is
• PR if 0 ≤ α ≤ 3,
• WSPR if 0 < α ≤ 3, and
• SPR if 0 < α < 3.
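The three regimes of Example 2.86 can be checked from the closed-form real part Re[H(jω)] = (2α + (3−α)ω²)/((1+ω²)(4+ω²)) (a short computation from the example; the grid and thresholds below are our own):

```python
import numpy as np

def re_H(alpha, w):
    # Re[H(jw)] for H(s) = (s + alpha)/((s + 1)(s + 2))
    return (2.0 * alpha + (3.0 - alpha) * w**2) / ((1.0 + w**2) * (4.0 + w**2))

w = np.logspace(-3, 4, 3000)

wspr_at_3 = bool(np.all(re_H(3.0, w) > 0.0))            # alpha = 3: still WSPR
not_spr_at_3 = float((w**2 * re_H(3.0, w))[-1]) < 1e-3  # w^2 Re[H] -> 0: not SPR
spr_at_2 = float((w**2 * re_H(2.0, w))[-1]) > 0.5       # alpha = 2: SPR (limit = 1)
```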
Let us point out that other definitions exist for positive real transfer functions, like
the following one:
Definition 2.87 ([66] (γ -PR)) Let 0 < γ < 1. The transfer function H (s) ∈ Cm×m
is said to be γ -positive real if it is analytic in Re[s] ≥ 0 and satisfies
Other classes of PR systems exist which may slightly differ from the above ones, see, e.g., [67, 68]. In particular, a system is said to be extended SPR if it is SPR and if H(j∞) + Hᵀ(−j∞) ≻ 0. Noting that H(j∞) = D, this is found to be equivalent (at least for proper systems with a realization of the form C(sIₙ − A)⁻¹B + D) to the second condition in Definition 2.78, since it implies the existence of some δ > 0 such that H(jω) + Hᵀ(−jω) ⪰ δI ≻ 0 for all ω ∈ R ∪ {±∞}. Hence extended SPR is the same as SSPR for proper systems, though both names are used in the literature. If the system is non-proper (or improper), then it has a transfer function of the form Es + C(sIₙ − A)⁻¹B + D for some matrix E, with E = Eᵀ ⪰ 0 by PRness. Then H(jω) + Hᵀ(−jω) = C(jωIₙ − A)⁻¹B + D + (C(−jωIₙ − A)⁻¹B + D)ᵀ for all ω, since Ejω + Eᵀ(−jω) = 0. Thus, again, H(j∞) + Hᵀ(−j∞) = D + Dᵀ: both extended and strong SPR are the same. From the series expansion of a rational transfer matrix, one deduces that $H(j\omega) = \sum_{i=1}^{+\infty} CA^{i-1}B(j\omega)^{-i} + D$, which implies that D + Dᵀ ≻ 0. The definition of SSPRness in [68, Definition 3] and Definition 2.78 are not the same, as the former imposes that H(∞) + Hᵀ(∞) ⪰ 0 only, with $\lim_{\omega\to\infty} \omega^2[H(j\omega) + H^T(-j\omega)] > 0$ if H(∞) + Hᵀ(∞) is singular. The notion of marginally SPR (MSPR) transfer functions is introduced in [68]. MSPR functions satisfy inequality 2 of Definition 2.77; however, they are allowed to possess poles on the imaginary axis.
Definition 2.90 (Marginally SPR) The transfer matrix H(s) ∈ C^{m×m} is marginally SPR if it is PR and H(jω) + Hᵀ(−jω) ≻ 0 for all ω ∈ (−∞, +∞).
It happens that MSPR functions can be written as H₁(s) + H₂(s), where $H_1(s) = \frac{\alpha_0}{s} + \sum_{i=1}^{p} \frac{\alpha_i s + \beta_i}{s^2 + \omega_i^2}$, while H₂(s) has poles only in Re[s] < 0, αᵢ ∈ R^{m×m}, βᵢ ∈ R^{m×m}, ωᵢ > 0, i = 1, …, p, ωᵢ ≠ ωⱼ for i ≠ j. The relationships between WSPR and MSPR transfer functions are as follows.
Lemma 2.91 ([68, Lemma 1]) Let H(s) ∈ C^{m×m}. Then H(s) is MSPR if and only if: (i) H₂(s) is WSPR, (ii) αᵢ = αᵢᵀ ⪰ 0, i = 0, 1, …, p, and (iii) βᵢ = −βᵢᵀ, i = 1, …, p.
2.14 Applications
The concept of SPR transfer functions is very useful in the design of some types of adaptive control schemes. This will be shown next for the control of an unknown plant in a state space representation; the scheme is due to Parks [69] (see also [70]). Consider a linear time-invariant system in the following state space representation:
$$\dot{x}(t) = Ax(t) + Bu(t), \quad y(t) = Cx(t), \qquad (2.154)$$
with state x(t) ∈ IR n , input u(t) ∈ IR and output y(t) ∈ IR. Let us assume that there
exists a control input
u = −L T x + r (t), (2.155)
where r (t) is a reference input and L ∈ IR n , such that the closed-loop system behaves
as the reference model
ẋr (t) = (A − B L T )xr (t) + Br (t)
(2.156)
yr (t) = C xr (t).
We also assume that the above reference model has an SPR transfer function. From the Kalman–Yakubovich–Popov Lemma, which will be presented in detail in the next chapter, this means that there exist a matrix P ≻ 0, a matrix L̄, and a positive constant ε such that
$$A_{cl}^T P + P A_{cl} = -\bar{L}\bar{L}^T - \varepsilon P, \qquad PB = C^T, \qquad (2.157)$$
where $A_{cl} = A - BL^T$. Since the system parameters are unknown, let us consider the following adaptive control law:
$$u(t) = -\hat{L}^T(t)x(t) + r(t), \qquad (2.158)$$
where L̂ is the estimate of L and L̃ is the parametric error L̃(t) = L̂(t) − L. Introducing the above control law into the system (2.154), we obtain
$$\dot{x}(t) = A_{cl}x(t) - B\tilde{L}^T(t)x(t) + Br(t). \qquad (2.159)$$
Define the state error x̃ = x − x_r and the output error e = y − y_r. From the above we obtain
$$\frac{d\tilde{x}}{dt}(t) = A_{cl}\tilde{x}(t) - B\tilde{L}^T(t)x(t), \quad e(t) = C\tilde{x}(t). \qquad (2.160)$$
Consider the Lyapunov function candidate $V(\tilde{x}, \tilde{L}) = \tilde{x}^T P \tilde{x} + \tilde{L}^T P_L \tilde{L}$, with P_L ≻ 0. Its time derivative along the trajectories of (2.160) is
$$\dot{V}(\tilde{x}, \tilde{L}) = \tilde{x}^T(A_{cl}^T P + P A_{cl})\tilde{x} - 2\tilde{x}^T P B \tilde{L}^T x + 2\tilde{L}^T P_L \frac{d\tilde{L}}{dt}.$$
Choosing the following parameter adaptation law:
$$\frac{d\hat{L}}{dt}(t) = P_L^{-1}x(t)e(t) = P_L^{-1}x(t)C\tilde{x}(t), \qquad (2.161)$$
we obtain
$$\dot{V}(\tilde{x}, \tilde{L}) = \tilde{x}^T(A_{cl}^T P + P A_{cl})\tilde{x} - 2\tilde{x}^T(PB - C^T)\tilde{L}^T x.$$
Using (2.157), this reduces to
$$\dot{V}(\tilde{x}) = -\tilde{x}^T(\bar{L}\bar{L}^T + \varepsilon P)\tilde{x} \leq 0. \qquad (2.162)$$
It follows that x̃, x, and L̃ are bounded. Integrating the above we get
$$\int_0^t \tilde{x}^T(s)(\bar{L}\bar{L}^T + \varepsilon P)\tilde{x}(s)ds \leq V(\tilde{x}(0), \tilde{L}(0)).$$
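A minimal scalar simulation of the scheme (2.154)–(2.162) can make the mechanism concrete. Everything below (the plant value a = 1, the gains, the step size, the sinusoidal reference) is an assumed toy setup, with P_L⁻¹ written as the scalar gain `gamma`:

```python
import numpy as np

# Toy scalar plant xdot = a x + u with unknown a = 1; the reference model
# xr_dot = -2 xr + r corresponds to the ideal gain L = 3 (since a - L = -2),
# and 1/(s + 2) is SPR. Forward-Euler integration of (2.154)-(2.161).
a_true, L_ideal = 1.0, 3.0
gamma = 5.0                       # adaptation gain (plays the role of P_L^{-1})
dt, n_steps = 1e-3, 40000
x, xr, L_hat = 0.5, 0.0, 0.0
for k in range(n_steps):
    r = np.sin(0.5 * k * dt)      # persistently exciting reference
    e = x - xr                    # output error (C = 1)
    u = -L_hat * x + r            # adaptive control law (2.158)
    x += dt * (a_true * x + u)
    xr += dt * (-2.0 * xr + r)
    L_hat += dt * gamma * x * e   # adaptation law (2.161)
tracking_error = abs(x - xr)
gain_error = abs(L_hat - L_ideal)
```

The Lyapunov argument above only guarantees e → 0; convergence of L̂ to the ideal gain additionally relies on the persistency of excitation of the reference.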
Consider now the same problem when only the output is measured, with state x(t) ∈ IRⁿ, input u(t) ∈ IRᵐ, and output y(t) ∈ IRᵖ. Assume that there exists a constant output feedback control law
$$u(t) = -Ky(t) + r(t), \qquad (2.163)$$
such that the closed-loop system $\dot{x}(t) = (A - BKC)x(t) + Br(t) = \bar{A}x(t) + Br(t)$, $y(t) = Cx(t)$ (2.164), has an SPR transfer function, i.e.,¹³ there exist matrices P ≻ 0, L̄ and a constant ε > 0 such that
$$\bar{A}^T P + P\bar{A} = -\bar{L}\bar{L}^T - \varepsilon P, \qquad PB = C^T. \qquad (2.165)$$
Since the plant parameters are unknown, consider the following adaptive controller
for r (t) = 0:
u(t) = − K̂ (t)y(t),
where K̂ (t) is the estimate of K at time t. The closed-loop system can be written as
ẋ(t) = Āx(t) − B( K̂ (t) − K )y(t)
y(t) = C x(t).
Define K̃(t) = K̂(t) − K, and consider the following Lyapunov function candidate:
$$V(x, \tilde{K}) = x^T P x + \mathrm{tr}\big(\tilde{K}^T \Gamma^{-1} \tilde{K}\big),$$
where Γ ≻ 0 is an arbitrary matrix. The time derivative of V(·) along the system trajectories is given by
$$\dot{V}(x, \tilde{K}) = x^T(\bar{A}^T P + P\bar{A})x - 2x^T P B \tilde{K} y + 2\,\mathrm{tr}\Big(\tilde{K}^T \Gamma^{-1} \frac{d\tilde{K}}{dt}\Big).$$
¹³ Similarly as in the foregoing section, this is a consequence of the Kalman–Yakubovich–Popov Lemma for SPR systems.
Choosing the parameter adaptation law
$$\frac{d\hat{K}}{dt}(t) = \Gamma y(t)y^T(t),$$
and using PB = Cᵀ, the cross terms cancel and one obtains V̇(x, K̃) = xᵀ(ĀᵀP + PĀ)x ≤ 0.
The adaptive control scheme presented in the previous section motivates the study
of constant output feedback control designs such that the resulting closed-loop is
SPR. The positive real synthesis problem (i.e., how to render a system PR by output
feedback) is important in its own right and has been investigated by [73–76], see also
[77, Theorem 4.1] [78, Proposition 8.1] and Theorem 3.61 in Chap. 3. This problem
is quite close to the so-called passification or passivation by output feedback [79–
81]. Necessary and sufficient conditions have been obtained in [71] for a linear
system to become SPR under constant output feedback. Furthermore, they show that
if no constant feedback can lead to an SPR closed-loop system, then no dynamic
feedback with a proper feedback transfer matrix can do it either. Hence, there exists
an output feedback such that the closed-loop system is SPR if and only if there exists
a constant output feedback rendering the closed-loop system SPR. Consider again
the system (2.154) in the MIMO case, i.e., with state x(t) ∈ IR n , input u(t) ∈ IR m
and output y(t) ∈ IR p and the constant output feedback in (2.163). The closed-loop
is represented in Fig. 2.17, where G(s) is the transfer function of the system (2.154).
The equation of the closed loop T(s) of Fig. 2.17 is given in (2.164). It has the transfer function
$$T(s) = (I_m + G(s)K)^{-1}G(s). \qquad (2.166)$$
Theorem 2.92 ([82]) Any strictly proper, strictly minimum-phase system (A, B, C) with the m × m transfer function G(s) = C(sIₙ − A)⁻¹B and with CB = (CB)ᵀ ≻ 0 can be made SPR via constant output feedback.
The fact that the zeroes of the system have to satisfy Re[s] < 0 is crucial. Consider $G(s) = \frac{s^2+1}{(s+1)(s+2)(s+5)}$. There does not exist any static output feedback u = ky + w which renders the closed-loop transfer function PR. Indeed, if $\omega^2 = \frac{9-k}{8-k}$, then Re[T(jω)] < 0 for all k < 0. Therefore, the strict minimum-phase assumption is necessary. Recall that a static state feedback does not change the zeroes of a linear time-invariant system. We now state the following result, where we assume that B and C are full rank.
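The imaginary-axis zeros of G(s) = (s²+1)/((s+1)(s+2)(s+5)) force Re[T(jω)] to change sign near ω = 1 regardless of the static gain, which can be seen by sampling (the gain range and frequency grid below are our own choices, a numerical illustration rather than a proof):

```python
import numpy as np

w = np.logspace(-2, 2, 20001)
s = 1j * w
G = (s**2 + 1.0) / ((s + 1.0) * (s + 2.0) * (s + 5.0))   # zeros at +/- j

def min_re_closed_loop(k):
    T = G / (1.0 + k * G)        # closed loop under static output feedback
    return float(T.real.min())

# for every sampled gain, the real part goes negative somewhere: never PR
worst_case = max(min_re_closed_loop(k) for k in np.linspace(-20.0, 20.0, 81))
```

Since G(j·1) = 0, the loop gain cannot reshape T near ω = 1, and Re[T(jω)] inherits the sign change of Re[G(jω)] there whatever k is.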
Theorem 2.93 (SPR synthesis [71]) There exists a constant matrix K such that the closed-loop transfer function matrix T(s) in (2.166) is SPR, if and only if CB = (CB)ᵀ ≻ 0 and there exists a positive definite matrix X such that $C_\perp\,\mathrm{herm}\{B_\perp X B_\perp^T A\}\,C_\perp^T \prec 0$. When the above conditions hold, an explicit expression for K is given in [71].
The problems analyzed so far correspond to studying the system with realization (A − BKC, B, C). An extension concerns the new system with output z = Fy = FCx for some matrix F. One may then study the static state feedback, with realization (A − BK, B, FC), and the static output feedback (A − BKC, B, FC). That is, do there exist F and K that render the closed-loop system between the new output z and the new input r, with u = Kx + r or u = Ky + r, SPR?
Theorem 2.95 ([84, Theorem 2]) The static output feedback problem (with closed-
loop realization (A − B K C, B, FC)), has a solution if and only if the static state
feedback problem (with closed-loop realization (A − B K , B, FC)) has a solution.
Algorithms to solve these problems are proposed in [84].
It is noteworthy that the above results apply to systems with no feedthrough term, i.e., D = 0. What happens when a feedthrough matrix is present in the output? An answer is provided in [75, Theorem 4.1], where this time one considers a dynamic output feedback. The system (A, B, C, D) is partitioned as B = (B₁, B₂), $C = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}$, $D = \begin{pmatrix} D_{11} & D_{12} \\ D_{21} & 0 \end{pmatrix}$. It is assumed that (A, B₂) is stabilizable and that (A, C₂) is detectable. The closed-loop system is said to be internally stable if the matrix $\begin{pmatrix} A + B_2 D_K C_2 & B_2 C_K \\ B_K C_2 & A_K \end{pmatrix}$ is asymptotically stable (it has eigenvalues with negative real parts), where (A_K, B_K, C_K, D_K) is the dynamic feedback controller.
Theorem 2.96 ([75]) There exists a strictly proper dynamic output feedback such that the closed-loop system is internally stable and SSPR if and only if there exist two matrices F and L such that:
• D₁₁ + D₁₁ᵀ ≻ 0.
• The algebraic Riccati inequality
$$(A + B_2 F)^T P + P(A + B_2 F) + (C_1 + D_{12}F - B_1^T P)^T (D_{11} + D_{11}^T)^{-1}(C_1 + D_{12}F - B_1^T P) \prec 0$$
admits a solution P ≻ 0.
The proof of this result relies on material that will be presented in the next chapter
(the KYP Lemma for SSPR systems).
Remark 2.97 We shall see later in the book that the stabilization of some classes of nonsmooth dynamical systems with state constraints requires more than the above problems (see Sect. 5.5.3).
The conditions such that a system can be rendered SPR via static state feedback are
relaxed when an observer is used in the control loop. However, this creates additional
difficulty in the analysis because the closed-loop system loses its controllability. See
Sect. 3.5 for more information. Other results related to the material exposed in this
section may be found in [47, 48, 85–93]. Despite there being no close relationship
with the material of this section, let us mention [94] where model reduction which
preserves passivity is considered. Spectral conditions for a single-input–single-output
system to be SPR are provided in [55]. The SPRness is also used in identification
of LTI systems [95]. Robust stabilization when a PR uncertainty is considered is
studied in [96]. Conditions such that there exists an output feedback that renders a
closed-loop system generalized PR (the definition is given in the next chapter), or
PR, are given in [78, Proposition 8.1] and Theorem 3.61 in Chap. 3.
$$H(s) = \sum_{i=1}^{+\infty} \frac{1}{s^2 + \kappa_i s + \omega_i^2}\,\psi_i\psi_i^T, \qquad (2.169)$$
and one computes
$$-\frac{1}{2}\,j\big(H(j\omega) - H(j\omega)^*\big) = -\omega \sum_{i=1}^{+\infty} \frac{\kappa_i}{(\omega_i^2 - \omega^2)^2 + \omega^2\kappa_i^2}\,\psi_i\psi_i^T \preceq 0, \quad \text{for all } \omega \geq 0. \qquad (2.170)$$
In words, H(jω) has a negative semi-definite Hermitian imaginary part for all ω ≥ 0. Then one refers to H(s) as negative imaginary. It happens that any flexible structure with collocated force actuators and position sensors has a negative imaginary transfer function matrix [99]. Therefore, one introduces the following definition.
Definition 2.99 ([38, Definition 4]) The transfer matrix H(s) ∈ C^{m×m} is negative imaginary if:
1. H(s) is analytic in Re[s] > 0,
2. j(H(s) − H(s)*) ⪰ 0 for all Re[s] > 0 and Im[s] > 0,
3. j(H(s) − H(s)*) = 0 for all Re[s] > 0 and Im[s] = 0, and
4. j(H(s) − H(s)*) ⪯ 0 for all Re[s] > 0 and Im[s] < 0.
It follows from [38, Lemma 3] that this is equivalent to the above definition with items (1), (2'), (3), (4), (5), with the additional property: (6) if s = ∞ is a pole of H(s), then it has at most multiplicity two (double pole). Moreover, both coefficients in the expansion at infinity of H(s) are negative semi-definite Hermitian matrices. The systems as in Definition 2.98 (ii) are called weakly strictly NI in [38, Definition 6], and the following holds, which may be viewed as the counterpart of Lemma 2.64 for NI systems.
Theorem 2.100 ([38, Theorem 3]) The transfer function matrix H(s) ∈ R^{m×m} is strongly strictly negative imaginary in the sense of Definition 2.98 (iii), if and only if:
1. H(s) has all its poles in Re[s] < 0,
2. j(H(jω) − H(jω)*) ≻ 0 for all ω ∈ (0, +∞),
3. there exist σ₀ > 0 and δ > 0 such that σ_min[ω³ j(H(jω) − H(jω)*)] > σ₀ for all ω ≥ δ, and
4. $Q \triangleq \lim_{\omega\to 0^+} \frac{1}{\omega}\,j\big(H(j\omega) - H(j\omega)^*\big) \succ 0$.
Let us end this introduction to NI systems by stating some relationships between positive real and negative imaginary transfer functions.

Theorem 2.101 ([46, Theorem 3.1]) Let H(s) ∈ C^{m×m} be negative imaginary. Then $G(s) \triangleq s[H(s) - H(\infty)]$ is positive real. Conversely, let G(s) ∈ R^{m×m} be positive real. Then $H(s) \triangleq \frac{1}{s}G(s) + D$ is negative imaginary for any D = Dᵀ.

One sees that NI systems have to be stable, just as PR systems are, and SNI systems are asymptotically stable, just as SPR ones.
• H(s) = 1/s is NI (and is also PR), H(s) = 1/s² is NI (but it is not PR), H(s) = −1/s² is not NI.
• The phase of NI systems satisfies ∠H(s) ∈ [−π, 0] rad. This is why some transfer functions like 1/s can be both NI and PR: its phase belongs to [−π/2, 0].
• $H(s) = \frac{2s^2+s+1}{(s^2+2s+5)(s+1)(2s+1)}$ is NI, but not strictly NI. $H(s) = \frac{1}{s+1}$ is strictly NI (and it is SPR) [99].
2.15 Negative Imaginary Systems 75
• $H(s) = \begin{pmatrix} \frac{-s}{s+5} & \frac{-5}{s+5} \\[4pt] \frac{-(4s+5)}{s^2+6s+5} & \frac{-s^2+s+15}{s^2+6s+5} \end{pmatrix}$ is SNI [101, Example 1].
• The positive feedback interconnection of NI systems has been studied in [99–101, 104]. Roughly speaking, the positive feedback interconnection of an NI and a strictly NI transfer function is itself strictly NI.
• Discrete-time NI systems have been analyzed in [105–107]. Similar to the case of positive real transfer functions, NI continuous-time transfer functions transform into NI discrete-time transfer functions, and vice versa, via a Cayley transformation $s = \frac{z-1}{z+1}$ (see Sect. 3.15.4 for more details on the PR case).
• As a consequence of Theorem 2.101, the relative degree of a strictly proper NI
real rational transfer function (m = 1) is at most r = 2, and all its finite zeroes are
in Re[s] ≤ 0 [38, Lemma 5].
• The counterpart of Theorem 2.73 is as follows [59]: The state space symmetric
system (A, B, C, D) is NI if and only if A ≺ 0.
• Applications: as said above, the main motivation for NI systems is the positive
position feedback control of flexible structures. It has been applied to the control
of various systems: three-mirror optical cavity [108], cantilever beams in nanopo-
sitioning [109], large vehicle platoons [110], coupled fuselage-rotor mode of a
rotary wing unmanned aerial vehicle [111], active vibration control system for the
mitigation of human-induced vibrations in lightweight civil engineering structures
[112].
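The NI/PR classifications of 1/s, 1/s², and −1/s² listed above can be checked numerically from Definition 2.99 by sampling condition 2 in the open first quadrant (the sample line Re[s] = 0.7 and the tolerance are our own choices):

```python
import numpy as np

mu = np.logspace(-2, 2, 500)
s = 0.7 + 1j * mu                 # points with Re[s] > 0 and Im[s] > 0

def ni_condition_2(H):
    """Scalar version of Definition 2.99, condition 2:
    j(H(s) - H(s)*) >= 0 on the sampled quadrant points."""
    return bool(np.all((1j * (H - np.conj(H))).real >= -1e-12))

ni_integrator = ni_condition_2(1.0 / s)        # 1/s   : NI (and PR)
ni_double_int = ni_condition_2(1.0 / s**2)     # 1/s^2 : NI (but not PR)
ni_neg_double = ni_condition_2(-1.0 / s**2)    # -1/s^2: fails, not NI
not_pr_double = bool(np.all((1.0 / (1j * mu)**2).real < 0))  # Re = -1/w^2 < 0
```

Note that sampling only the imaginary axis would not separate 1/s² from −1/s² (both have zero imaginary part there); evaluating in the open quadrant, as Definition 2.99 does, resolves the ambiguity.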
References
1. Cauer E, Mathis W, Pauli R (2000) Life and work of Wilhelm Cauer. In: El Jaï A, Fliess M (eds) Proceedings 14th international symposium of mathematical theory of networks and systems MTNS2000. Perpignan, France
2. Cauer W (1926) Die verwirklichung der wechselstromwiderstände vorgeschriebener frequen-
zabhängigkeit. Archiv für Elecktrotechnik 17:355–388
3. Cauer W (1929) Vierpole. Elektrische Nachrichtentechnik 6:272–282
4. Pauli R (2002) The scientific work of Wilhelm Cauer and its key position at the transition from
electrical telegraph techniques to linear systems theory. In: Proceedings of 16th European
meeting on cybernetics and systems research (EMCSR), vol 2. Vienna, Austria, pp. 934–939
5. Brune O (1931) The synthesis of a finite two-terminal network whose driving-point impedance is a prescribed function of frequency. J Math Phys 10:191–236
6. Brune O (1931) Synthesis of a finite two-terminal network whose driving-point impedance
is a prescribed function of frequency. PhD thesis, Massachusetts Institute of Technology,
Department of Electrical Engineering, USA (1931). http://hdl.handle.net/1721.1/10661
7. Foster R (1924) A reactance theorem. Bell Syst Tech J 3(2):259–267
8. Campbell GA (1922) Physical theory of the electric wave filter. Bell Syst Tech J 1(2):1–32
9. Arimoto S (1996) Control theory of nonlinear mechanical systems: a passivity-based and
circuit-theoretic approach. Oxford University Press, Oxford, UK
10. Moylan PJ, Hill DJ (1978) Stability criteria for large-scale systems. IEEE Trans Autom
Control 23(2):143–149
11. Willems JC (1971) The generation of Lyapunov functions for input-output stable systems.
SIAM J Control 9:105–133
12. Willems JC (1972) Dissipative dynamical systems, Part I: general theory. Arch Rat Mech
Anal 45:321–351
13. Barb FD, Ionescu V, de Koning W (1994) A Popov theory based approach to digital H∞ control with measurement feedback for Pritchard-Salamon systems. IMA J Math Control Inf 11:277–309
14. Weiss M (1997) Riccati equation theory for Pritchard-Salamon systems: a Popov function
approach. IMA J Math Control Inf 14:45–83
15. Sontag ED (1998) Mathematical control theory: deterministic finite dimensional systems, vol
6, 2nd edn. Texts in applied mathematics. Springer, New York, USA
16. Desoer CA, Vidyasagar M (1975) Feedback systems: input-output properties. Academic Press, New York
17. Vidyasagar M (1981) Input-output analysis of large-scale interconnected systems. Decompo-
sition, well-posedness and stability, vol 29. Lecture notes in control and information sciences.
Springer, London
18. Vidyasagar M (1993) Nonlinear systems analysis, 2nd edn. Prentice Hall, Upper Saddle River
19. Willems JC (1972) Dissipative dynamical systems, part II: linear systems with quadratic
supply rates. Arch Rat Mech Anal 45:352–393
20. Hill DJ, Moylan PJ (1976) The stability of nonlinear dissipative systems. IEEE Trans Autom
Control 21(5):708–711
21. Hill DJ, Moylan, PJ (1975) Cyclo-dissipativeness, dissipativeness and losslessness for nonlin-
ear dynamical systems, Technical Report EE7526, November, The university of Newcastle.
Department of Electrical Engineering, New South Wales, Australia
22. Hill DJ, Moylan PJ (1980) Connections between finite-gain and asymptotic stability. IEEE
Trans Autom Control 25(5):931–936
23. Hill DJ, Moylan PJ (1980) Dissipative dynamical systems: Basic input-output and state prop-
erties. J Frankl Inst 30(5):327–357
24. Bhowmick P, Patra S (2017) On LTI output strictly negative-imaginary systems. Syst Control
Lett 100:32–42
25. Hughes TH (2017) A theory of passive linear systems with no assumptions. Automatica
86:87–97
26. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice Hall, Englewood Cliffs, New Jersey, USA
27. Faurre P, Clerget M, Germain F (1979) Opérateurs Rationnels Positifs. Application à
l’Hyperstabilité et aux Processus Aléatoires. Méthodes Mathématiques de l’Informatique.
Dunod, Paris. In French
28. Willems JC (1976) Realization of systems with internal passivity and symmetry constraints.
J Frankl Inst 301(6):605–621
29. Desoer CA, Kuh ES (1969) Basic circuit theory. McGraw Hill International, New York
30. Guiver C, Logemann H, Opmeer MR (2017) Transfer functions of infinite-dimensional sys-
tems: positive realness and stabilization. Math Control Signals Syst 29(2)
31. Maciejowski J (1989) Multivariable feedback design. Electronic systems engineering. Addi-
son Wesley, Boston
32. Reis T, Stykel T (2010) Positive real and bounded real balancing for model reduction of
descriptor systems. Int J Control 83(1):74–88
33. Ober R (1991) Balanced parametrization of classes of linear systems. SIAM J Control Optim
29(6):1251–1287
34. Bernstein DS (2005) Matrix mathematics. Theory, facts, and formulas with application to
linear systems theory. Princeton University Press, Princeton
35. Narendra KS, Taylor JH (1973) Frequency domain criteria for absolute stability. Academic
Press, New York, USA
36. Ioannou P, Tao G (1987) Frequency domain conditions for SPR functions. IEEE Trans Autom
Control 32:53–54
37. Corless M, Shorten R (2010) On the characterization of strict positive realness for general
matrix transfer functions. IEEE Trans Autom Control 55(8):1899–1904
38. Ferrante A, Lanzon A, Ntogramatzidis L (2016) Foundations of not necessarily rational neg-
ative imaginary systems theory: relations between classes of negative imaginary and positive
real systems. IEEE Trans Autom Control 61(10):3052–3057
39. Tao G, Ioannou PA (1990) Necessary and sufficient conditions for strictly positive real matrices. Proc Inst Elect Eng 137:360–366
40. Khalil HK (1992) Nonlinear systems. MacMillan, New York, USA. 2nd edn. published in
1996, 3rd edn. published in 2002
41. Wen JT (1988) Time domain and frequency domain conditions for strict positive realness.
IEEE Trans Autom Control 33:988–992
42. Ferrante A, Lanzon A, Brogliato B (2019) A direct proof of the equivalence of side conditions
for strictly positive real matrix transfer functions. IEEE Trans Autom Control. https://hal.inria.
fr/hal-01947938
43. Tao G, Ioannou PA (1988) Strictly positive real matrices and the Lefschetz-Kalman-
Yakubovich Lemma. IEEE Trans Autom Control 33(12):1183–1185
44. Heemels WPMH, Camlibel MK, Schumacher JM, Brogliato B (2011) Observer-based control
of linear complementarity systems. Int J Robust Nonlinear Control 21(10):1193–1218
45. Wolovich WA (1974) Linear multivariable systems, vol 11. Applied mathematical sciences.
Springer, Berlin
46. Ferrante A, Ntogramatzidis L (2013) Some new results in the theory of negative imaginary
systems with symmetric transfer matrix function. Automatica 49(7):2138–2144
47. Gregor J (1996) On the design of positive real functions. IEEE Trans Circuits Syst I Fundam
Theory Appl 43(11):945–947
48. Henrion D (2002) Linear matrix inequalities for robust strictly positive real design. IEEE
Trans Circuits Syst I Fundam Theory Appl 49(7):1017–1020
49. Dumitrescu B (2002) Parametrization of positive-real transfer functions with fixed poles.
IEEE Trans Circuits Syst I Fundam Theory Appl 49(4):523–526
50. Yu W, Li X (2001) Some stability properties of dynamic neural networks. IEEE Trans Circuits
Syst I Fundam Theory Appl 48(2):256–259
51. Patel VV, Datta KB (2001) Comments on “Hurwitz stable polynomials and strictly positive
real transfer functions”. IEEE Trans Circuits Syst I Fundam Theory Appl 48(1):128–129
52. Wang L, Yu W (2001) On Hurwitz stable polynomials and strictly positive real transfer
functions. IEEE Trans Circuits Syst I Fundam Theory Appl 48(1):127–128
53. Marquez HJ, Agathokis P (1998) On the existence of robust strictly positive real rational functions. IEEE Trans Circuits Syst I 45:962–967
54. Zeheb E, Shorten R (2006) A note on spectral conditions for positive realness of single-input-
single-output systems with strictly proper transfer functions. IEEE Trans Autom Control
51(5):897–900
55. Shorten R, King C (2004) Spectral conditions for positive realness of single-input single-
output systems. IEEE Trans Autom Control 49(10):1875–1879
56. Fernandez-Anaya G, Martinez-Garcia JC, Kucera V (2006) Characterizing families of pos-
itive real matrices by matrix substitutions on scalar rational functions. Syst Control Lett
55(11):871–878
57. Bai Z, Freund RW (2000) Eigenvalue based characterization and test for positive realness of
scalar transfer functions. IEEE Trans Autom Control 45(12):2396–2402
58. Shorten R, Wirth F, Mason O, Wulff K, King C (2007) Stability criteria for switched and
hybrid systems. SIAM Rev 49(4):545–592
59. Liu M, Lam J, Zhu B, Kwok KW (2019) On positive realness, negative imaginariness, and
H∞ control of state-space symmetric systems. Automatica 101:190–196
60. Hakimi-Moghaddam M, Khaloozadeh H (2015) Characterization of strictly positive multi-
variable systems. IMA J Math Control Inf 32:277–289
61. Skogestad S, Postlethwaite I (2005) Multivariable feedback control, 2nd edn. Wiley, New
York
62. Kailath T (1980) Linear systems. Prentice-Hall, Upper Saddle River
78 2 Positive Real Systems
89. Betser A, Zeheb E (1993) Design of robust strictly positive real transfer functions. IEEE Trans
Circuits Syst I Fundam Theory Appl 40(9):573–580
90. Bianchini G, Tesi A, Vicino A (2001) Synthesis of robust strictly positive real systems with
l2 parametric uncertainty. IEEE Trans Circuits Syst I Fundam Theory Appl 48(4):438–450
91. Turan L, Safonov MG, Huang CH (1997) Synthesis of positive real feedback systems: a simple
derivation via Parrott's theorem. IEEE Trans Autom Control 42(8):1154–1157
92. Son YI, Shim H, Jo NH, Seo JH (2003) Further results on passification of non-square linear
systems using an input-dimensional compensator. IEICE Trans Fundam E86–A(8):2139–
2143
93. Bernussou J, Geromel JC, de Oliveira MC (1999) On strict positive real systems design:
guaranteed cost and robustness issues. Syst Control Lett 36:135–141
94. Antoulas AC (2005) A new result on passivity preserving model reduction. Syst Control Lett
54(4):361–374
95. Anderson BDO, Landau ID (1994) Least squares identification and the robust strict positive
real property. IEEE Trans Circuits Syst I 41(9):601–607
96. Haddad WM, Bernstein DS (1991) Robust stabilization with positive real uncertainty: Beyond
the small gain theorem. Syst Control Lett 17:191–208
97. Lanzon A, Petersen IR (2007) A modified positive-real type stability condition. In: Proceed-
ings of European control conference. Kos, Greece, pp 3912–3918
98. Lanzon A, Petersen IR (2008) Stability robustness of a feedback interconnection of systems
with negative imaginary frequency response. IEEE Trans Autom Control 53(4):1042–1046
99. Petersen IR, Lanzon A (2010) Feedback control of negative-imaginary systems. IEEE Control
Syst Mag 30(5):54–72
100. Petersen IR (2016) Negative imaginary systems theory and applications. Annu Rev Control
42:309–318
101. Lanzon A, Chen HJ (2017) Feedback stability of negative imaginary systems. IEEE Trans
Autom Control 62(11):5620–5633
102. Xiong J, Petersen IR, Lanzon A (2012) On lossless negative imaginary systems. Automatica
48:1213–1217
103. Song Z, Lanzon A, Patra S, Petersen IR (2012) A negative-imaginary lemma without min-
imality assumptions and robust state-feedback synthesis for uncertain negative-imaginary
systems. Syst Control Lett 61:1269–1276
104. Xiong J, Petersen IR, Lanzon A (2010) A negative imaginary lemma and the stability of inter-
connections of linear negative imaginary systems. IEEE Trans Autom Control 55(10):2342–
2347
105. Liu M, Xiong J (2017) Properties and stability analysis of discrete-time negative imaginary
systems. Automatica 83:58–64
106. Ferrante A, Lanzon A, Ntogramatzidis L (2017) Discrete-time negative imaginary systems.
Automatica 79:1–10
107. Liu M, Xiong J (2018) Bilinear transformation for discrete-time positive real and negative
imaginary systems. IEEE Trans Autom Control. https://doi.org/10.1109/TAC.2018.2797180
108. Mabrok MA, Kallapur AG, Petersen IR, Lanzon A (2014) Locking a three-mirror optical
cavity using negative imaginary systems approach. Quantum Inf Rev 1:1–8
109. Mabrok MA, Kallapur AG, Petersen IR, Lanzon A (2014) Spectral conditions for nega-
tive imaginary systems with applications to nanopositioning. IEEE/ASME Trans Mechatron
19(3):895–903
110. Cai C, Hagen G (2010) Stability analysis for a string of coupled stable subsystems with
negative imaginary frequency response. IEEE Trans Autom Control 55:1958–1963
111. Ahmed B, Pota H (2011) Dynamic compensation for control of a rotary wing UAV using
positive position feedback. J Intell Robot Syst 61:43–56
112. Diaz M, Pereira E, Reynolds P (2012) Integral resonant control scheme for cancelling human-induced vibrations in light-weight pedestrian structures. Struct Control Health Monit 19:55–69
Chapter 3
Kalman–Yakubovich–Popov Lemma
with x(0) = x₀, and where x(t) ∈ IRⁿ, u(t), y(t) ∈ IRᵐ with n ≥ m. The Positive Real
Lemma can be stated as follows.

Lemma 3.1 (Positive Real Lemma or KYP Lemma) Let the system in (3.1) be
controllable and observable. The transfer function H(s) = C(sIₙ − A)⁻¹B + D,
with A ∈ IRⁿˣⁿ, B ∈ IRⁿˣᵐ, C ∈ IRᵐˣⁿ, D ∈ IRᵐˣᵐ, is PR (H(s) ∈ Cᵐˣᵐ,
s ∈ C), if and only if there exist matrices P = Pᵀ ≻ 0, P ∈ IRⁿˣⁿ, L ∈ IRⁿˣᵐ and
W ∈ IRᵐˣᵐ such that

PA + AᵀP = −LLᵀ
PB − Cᵀ = −LW      (3.2)
D + Dᵀ = WᵀW.
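As a numerical sketch (not from the book), the Lur'e equations (3.2) can be solved in closed form for the scalar PR transfer function H(s) = (s + 2)/(s + 1), with minimal realization A = −1, B = C = D = 1; eliminating P reduces (3.2) to a scalar quadratic in L.

```python
import math

# Minimal realization of H(s) = (s + 2)/(s + 1): A = -1, B = C = D = 1.
A, B, C, D = -1.0, 1.0, 1.0, 1.0

W = math.sqrt(D + D)                 # third equation: W^T W = D + D^T = 2

# Eliminating P between P*A + A*P = -L^2 and P*B - C = -L*W gives the
# scalar quadratic L^2 + 2*sqrt(2)*L - 2 = 0; take its positive root.
L = (-2.0 * W + math.sqrt(4.0 * W * W + 8.0)) / 2.0   # L = 2 - sqrt(2)
P = C - L * W                        # second equation: P*B - C^T = -L*W

assert abs(P * A + A * P + L * L) < 1e-12   # PA + A^T P = -L L^T
assert abs(P * B - C + L * W) < 1e-12       # PB - C^T = -L W
assert abs(D + D - W * W) < 1e-12           # D + D^T = W^T W
assert P > 0                                # P = 3 - 2*sqrt(2) > 0
```

The existence of a solution with P > 0 is exactly what the lemma predicts for this minimal, PR realization.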
The proof will be given below. It immediately follows from Lemma A.69 that if
D + Dᵀ = 0 (i.e., D is skew-symmetric, see Example 3.152), then PB = Cᵀ, which
we may call a passivity input/output constraint. One immediate consequence is that
in this case, the matrix CB = BᵀPB is symmetric positive semi-definite; hence, symmetry
and positive semi-definiteness of CB is a necessary condition for an observable
and controllable system to be PR. The set of matrix equations in (3.2) is often called
the Lur'e equations, since they were first introduced by A. I. Lur'e in [3]. In the
sequel, we will use the names Lur'e equations and KYP Lemma equations interchangeably.
Using Theorem 2.35, one sees that the KYP Lemma establishes the equivalence between
an infinite-dimensional problem (checking the nonnegativity of the spectral function
Π(s) at all frequencies) and a finite-dimensional problem (solving a
matrix inequality), both of which characterize positive real transfer functions.
Example 3.2 Let us point out an important fact. It is assumed in Lemma 3.1 that
the representation (A, B, C, D) is minimal. Then PRness of the transfer function
C(sIₙ − A)⁻¹B + D is equivalent to the solvability of the set of equations (3.2) with
P = Pᵀ ≻ 0. Consider now the following scalar example, where (A, B, C, D) =
(−α, 0, 0, 1), with α > 0. The transfer function is H(s) = 0, which is PR. The set
of equations (3.2), however, has no solution: the third equation gives WᵀW = 2, the
second then forces L = 0 (since PB − Cᵀ = 0 = −LW and W ≠ 0), and the first
becomes −2αP = 0, which contradicts P ≻ 0. Minimality therefore cannot be
dispensed with.
The first equation above is known as the Lyapunov equation. Note that LLᵀ is not
positive definite, but necessarily positive semi-definite, as long as m < n. The third
equation above can be interpreted as a factorization of D + Dᵀ. For the case D = 0,
the above set of equations reduces to the first two equations with W = 0. If one comes
back to the frequency domain (Definitions 2.33 and 2.34), one sees that stability
is taken care of by the first equation in (3.2), while the other equations rather deal
with positivity. As recalled in the introduction of this chapter, the first published
version of the KYP Lemma was by [4, 5] in 1962, with D = 0. The set of equations
(3.2) can also be written as
⎛ −PA − AᵀP   Cᵀ − PB ⎞   ⎛ L  ⎞
⎝ C − BᵀP     D + Dᵀ  ⎠ = ⎝ Wᵀ ⎠ (Lᵀ  W) ⪰ 0.      (3.3)
More details on the matrices L and W, and how they are related to spectral factorizations
of the transfer matrices, are given in Theorem 3.179. Notice that (3.3) can
be written equivalently as

⎛ −P  0 ⎞ ⎛ A  B ⎞   ⎛ Aᵀ  Cᵀ ⎞ ⎛ −P  0 ⎞
⎝  0  I ⎠ ⎝ C  D ⎠ + ⎝ Bᵀ  Dᵀ ⎠ ⎝  0  I ⎠ ⪰ 0.      (3.4)
Corollary 3.3 Let the system in (3.1) be controllable and observable, and let D = 0.
Assume that C(sIₙ − A)⁻¹B is PR. Then

∫₀ᵗ uᵀ(s)y(s) ds = V(x(t)) − V(x₀) − ½ ∫₀ᵗ xᵀ(s)(AᵀP + PA)x(s) ds,      (3.5)

for all t ≥ 0, with V(x) = ½ xᵀPx, where P satisfies the LMI in (3.3), and the equality is
computed along state trajectories starting at x(0) = x₀ and driven by u(·) on [0, t].
Proof Positive realness and minimality imply that (3.2) is satisfied. By direct calculation
of the integral ∫₀ᵗ uᵀ(s)y(s) ds, using the KYP Lemma conditions and premultiplying
ẋ(t) by P, (3.5) follows.
The same holds if D ≠ 0, as the reader may check. We shall see in the next chapter
that V(x) is a storage function for the system (A, B, C), and that the equality in (3.5)
is a dissipation equality. One may rewrite it as follows, with an obvious "physical"
interpretation:

V(x(t)) = V(x₀) + ½ ∫₀ᵗ xᵀ(s)(AᵀP + PA)x(s) ds + ∫₀ᵗ uᵀ(s)y(s) ds,      (3.6)

where V(x(t)) is the energy at time t, V(x₀) is the initial energy, the second term is
the (nonpositive) dissipated energy, and the last term is the externally supplied energy.
³ Let A ∈ Rⁿˣⁿ be the transition matrix. Minimality of n is equivalent to having (A, B) controllable
and (A, C) observable.
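The energy balance (3.6) can also be verified by simulation; the sketch below (illustrative only, with an arbitrarily chosen sinusoidal input) integrates the scalar PR system H(s) = 1/(s + 1), for which P = 1 satisfies PB = Cᵀ, and checks that the stored-energy increment equals supplied minus dissipated energy.

```python
import numpy as np

A, B, C, P = -1.0, 1.0, 1.0, 1.0      # H(s) = 1/(s+1), D = 0, P*B = C^T
dt, T = 1e-4, 5.0
t = np.arange(0.0, T + dt, dt)

def f(x, tk):                          # state equation: xdot = A x + B u(t)
    return A * x + B * np.sin(2.0 * tk)

x = np.empty_like(t); x[0] = 0.7
for k in range(len(t) - 1):            # classical RK4 integration
    k1 = f(x[k], t[k])
    k2 = f(x[k] + 0.5 * dt * k1, t[k] + 0.5 * dt)
    k3 = f(x[k] + 0.5 * dt * k2, t[k] + 0.5 * dt)
    k4 = f(x[k] + dt * k3, t[k] + dt)
    x[k + 1] = x[k] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u, y = np.sin(2.0 * t), C * x
trap = lambda g: dt * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule
supplied = trap(u * y)                       # externally supplied energy
dissipated = trap(x * (A * P) * x)           # ½ ∫ x (A^T P + P A) x ds  (≤ 0)
V = lambda xi: 0.5 * P * xi * xi
assert abs((V(x[-1]) - V(x[0])) - (supplied + dissipated)) < 1e-6
assert dissipated < 0.0
```

Up to integration error, the stored energy at time T is exactly the initial energy plus supplied minus dissipated energy, as (3.6) asserts.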
Suppose, conversely, that the system (3.1) with D = 0 satisfies the dissipation
equality (3.5) for some positive-definite quadratic function V(x). Does it then satisfy
the KYP Lemma conditions? The answer is yes. Indeed, notice first that the dissipation
equality (3.5) is equivalent to its infinitesimal form
uᵀ(t)y(t) = xᵀ(t)Pẋ(t) − ½ xᵀ(t)(AᵀP + PA)x(t),      (3.7)
since it holds for all t ≥ 0. Continuing the calculations, we get

uᵀ(t)Cx(t) = xᵀ(t)P(Ax(t) + Bu(t)) − ½ xᵀ(t)(AᵀP + PA)x(t),      (3.8)
so that uT (t)Cx(t) = xT (t)PBu(t). Since this equality holds for any u(·), one must
have C T = PB. This shows that the second KYP Lemma condition is true. Now
suppose that more generally the system satisfies a dissipation equality as
∫₀ᵗ uᵀ(s)y(s) ds = V(x(t)) − V(x₀) − ½ ∫₀ᵗ xᵀ(s)Qx(s) ds,      (3.9)

which gives, after differentiation with respect to time,

uᵀ(t)y(t) = ½ xᵀ(t)(PA + AᵀP)x(t) + xᵀ(t)PBu(t) − ½ xᵀ(t)Qx(t).
This must hold for any admissible input. Rewriting this equality with u(·) ≡ 0 we
obtain that necessarily PA + AT P = Q = −LLT for some matrix L. Thus we have
proved the following.
Corollary 3.5 Let (3.9) hold along the system's trajectories with Q ⪯ 0, V(x) =
½ xᵀPx, P = Pᵀ ≻ 0. Then the KYP Lemma set of equations (3.2) also holds.
Remark 3.6 In the case D ≠ 0, assuming that the dissipation equality (3.9) holds
yields, after differentiation with respect to time,

uᵀ(C − BᵀP)x + uᵀDu − ½ xᵀ(AᵀP + PA)x = −½ xᵀQx ≥ 0,      (3.10)

since Q ⪯ 0. In matrix form this leads to

(xᵀ  uᵀ) ⎛ −AᵀP − PA   Cᵀ − PB ⎞ ⎛ x ⎞
          ⎝ C − BᵀP     D + Dᵀ ⎠ ⎝ u ⎠ ≥ 0.      (3.11)
((A, B, C, D) minimal)   PR transfer function ⇔ ∫₀ᵗ uᵀ(τ)y(τ) dτ ≥ 0 for all t ≥ 0, with x(0) = 0.
These developments and results shed new light on the relationships
between PR transfer functions, passivity, dissipation, and the KYP Lemma set of equations.
However, we have not yet proved the KYP Lemma, i.e., the fact that the frequency-domain
conditions for positive realness are equivalent to the solvability of (3.2) when
(A, B, C, D) is minimal. Several proofs of the KYP Lemma appeared in the book
[15].
Proof of the KYP Lemma: The proof reproduced now is taken from Anderson's work [16].
Sufficiency: This is the easy part of the proof. Let the set of equations in (3.2) be
satisfied. Then
Necessity: Suppose that rank(H(s) + Hᵀ(−s)) = r almost everywhere. From the
PRness it follows that there exists an r × m matrix W₀(s) such that

H(s) + Hᵀ(−s) = W₀ᵀ(−s)W₀(s),      (3.13)

and
• (i) W₀(·) has elements which are analytic in Re[s] > 0, and in Re[s] ≥ 0 if H(s)
has elements which are analytic in Re[s] ≥ 0;
• (ii) rank(W₀(s)) = r in Re[s] > 0;
• (iii) W₀(s) is unique save for multiplication on the left by an arbitrary orthogonal
matrix.
This is a Youla factorization. Suppose that all poles of H(s) are in Re[s] < 0 (the
case when poles may be purely imaginary will be treated later). Equivalently, all
the eigenvalues of A have negative real parts, i.e., A is asymptotically stable. From
Lemmas A.78 and A.80 (with a slight adaptation to allow for the direct feedthrough
term), it follows that there exist matrices L and W = W₀(∞), such that W₀(s) has
a minimal realization (A, B, Lᵀ, W) (i.e., W₀(s) = W + Lᵀ(sIₙ − A)⁻¹B), with two
minimal realizations of H(s) + Hᵀ(−s) = W₀ᵀ(−s)W₀(s) being given by
(A₁, B₁, C₁, WᵀW) = ( ⎛ A    0 ⎞ , ⎛ B  ⎞ , (C   −Bᵀ) , WᵀW )      (3.14)
                      ⎝ 0  −Aᵀ ⎠   ⎝ Cᵀ ⎠

and

(A₃, B₃, C₃, WᵀW) = ( ⎛ A    0 ⎞ , ⎛ B        ⎞ , ((PB + LW)ᵀ   −Bᵀ) , WᵀW ),      (3.15)
                      ⎝ 0  −Aᵀ ⎠   ⎝ PB + LW ⎠
(B, AB, . . .) = (T1 B, AT1 B, . . .) = (T1 B, T1 AB, . . .) = T1 (B, AB, . . .). (3.16)
The controllability Kalman matrix [B, AB, . . . , An−1 B] has rank n because of the
minimality of the realization. Thus T₁ = Iₙ, and therefore PB + LW = Cᵀ. The third
equation in (3.2) follows by setting s = ∞ in H(s) + Hᵀ(−s) = W₀ᵀ(−s)W₀(s).
In the second step, let us relax the restriction on the poles of H(s). In this case
H(s) = H₁(s) + H₂(s), where H₁(s) has purely imaginary poles, H₂(s) has
all its poles in Re[s] < 0, and both H₁(s) and H₂(s) are positive real. Now from
Lemma A.83, it follows that there exists P₁ = P₁ᵀ ≻ 0 such that P₁A₁ + A₁ᵀP₁ = 0
and P₁B₁ = C₁ᵀ, where (A₁, B₁, C₁) is a minimal realization of H₁(s). For H₂(s) we
may select a minimal realization (A₂, B₂, C₂, D₂), and using the material just proved
above we may write

⎧ P₂A₂ + A₂ᵀP₂ = −L₂L₂ᵀ
⎨ P₂B₂ = C₂ᵀ − L₂W      (3.17)
⎩ WᵀW = D₂ + D₂ᵀ.
It can be verified that the KYP Lemma set of equations (3.2) is satisfied by taking
P = diag(P₁, P₂), A = diag(A₁, A₂), Bᵀ = (B₁ᵀ B₂ᵀ), C = (C₁ C₂), Lᵀ = (0 L₂ᵀ). Moreover,
with (A₁, B₁, C₁) and (A₂, B₂, C₂, D₂) minimal realizations of H₁(s) and H₂(s),
(A, B, C, D₂) is a minimal realization of H(s). Indeed, the degree of H(s) is the sum
of the degrees of H₁(s) and H₂(s), which have no common poles. It just remains to
verify that the equations (3.2) thus constructed remain valid under any (full-rank)
coordinate transformation, since they have been established for the particular
block-diagonal form of A.
The KYP Lemma has been derived in the so-called behavioral framework in [17].
The formula in (3.13) is a factorization [18], which provides another path to find P, L,
and W in (3.2).
A lossless system is a passive system for which the inequality in (2.1) is replaced by
the equality ∫₀ᵗ yᵀ(τ)u(τ) dτ = 0. A lossless transfer function H(s) ∈ Cᵐˣᵐ satisfies
the following [16, 19].
Theorem 3.7 Let H(s) ∈ Cᵐˣᵐ be a rational transfer matrix. Then it is lossless
positive real if and only if it is positive real and H(jω) + H*(jω) = 0 for all ω ∈ R
such that jω is not a pole of any element of H(s).
Example 3.8 H(s) = s(s² + 2)/(s² + 1) is lossless (notice that it is not proper, hence it has no
realization (A, B, C, D) such that H(s) = C(sI₂ − A)⁻¹B + D). See also [19, Sect. V]
for a MIMO example.
The KYP Lemma equations for a lossless PR transfer function are given, for a minimal
realization (A, B, C, D), as

⎧ AᵀP + PA = 0
⎨ PB − Cᵀ = 0      (3.18)
⎩ D + Dᵀ = 0,
for some P = Pᵀ ≻ 0. Lossless proper PR functions have poles only on the imaginary
axis and are of the form

H(s) = Σᵢ₌₁ⁿ (Aᵢs + Bᵢ)/(s² + ωᵢ²) + A₀/s + B₀,

for some matrices Aᵢ = Aᵢᵀ ⪰ 0, Bᵢ = −Bᵢᵀ, and the ωᵢ's are real and all different
(non-proper systems are allowed if one adds a term Ls, L = Lᵀ ⪰ 0). Clearly, SPR
(and consequently SSPR), as well as WSPR systems, cannot be lossless. A lossless
system does not dissipate energy,
as can be inferred, for instance, from the dissipation equality (3.6): the storage
function is constant along the system's trajectories. A proof of the KYP Lemma for
lossless systems is given in Theorem A.83 in Sect. A.6.8.
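Losslessness, and its failure for strictly passive functions, is easy to check on the imaginary axis; a small numerical sketch (not from the book):

```python
H = lambda s: s * (s * s + 2.0) / (s * s + 1.0)   # Example 3.8, lossless

for w in (0.3, 0.5, 2.0, 10.0):                    # avoid the poles at omega = ±1
    v = H(1j * w)
    assert abs(v + v.conjugate()) < 1e-12          # H(j w) + H*(j w) is identically 0

# By contrast, H2(s) = (s+2)/(s+1) dissipates: H2(j w) + H2*(j w) >= 2 > 0.
H2 = lambda s: (s + 2.0) / (s + 1.0)
for w in (0.0, 1.0, 100.0):
    v = H2(1j * w)
    assert (v + v.conjugate()).real >= 2.0 - 1e-12
```

The first loop confirms that H(jω) is purely imaginary for every ω in its domain, which is precisely the condition of Theorem 3.7.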
In other words, with PR balanced transfer matrices, one can associate a KYP Lemma
LMI which admits P = In as a solution.
We will deal at several places in the book with optimal control and its link with
dissipativity. Let us nevertheless point out a first relationship. Provided D + Dᵀ has
full rank (i.e., D + Dᵀ ≻ 0 in view of (3.2) and Theorem A.65), the matrix inequality
in (3.3) is equivalent to the following algebraic Riccati inequality:

−PA − AᵀP − (Cᵀ − PB)(D + Dᵀ)⁻¹(C − BᵀP) ⪰ 0.      (3.19)
Equivalence means that the LMI and the Riccati inequality possess the same set of
solutions P. The KYP Lemma says that if the transfer function D + C(sIn − A)−1 B
is PR and (A, B, C, D) is minimal, then they both possess at least one solution P =
Pᵀ ≻ 0. Let us recall that the optimal control problem
min_{u∈U} J(x₀, u) = ∫₀^{+∞} (xᵀ(t)Qx(t) + uᵀ(t)Ru(t)) dt,      (3.20)

under the constraints (3.1) and with R ≻ 0, Q ⪰ 0, has the solution u*(x) = −R⁻¹BᵀPx,
where P is a solution of the Riccati equation −PA − AᵀP + PBR⁻¹BᵀP = Q.
When the cost function contains cross terms 2xᵀSu, then P is the solution
of the Riccati equation −PA − AᵀP − (S − PB)R⁻¹(Sᵀ − BᵀP) = Q, and
the optimal control is u*(x) = −R⁻¹(Sᵀ − BᵀP)x. The Bellman function for these
problems is the quadratic function V(x) = xᵀPx, and V(x₀) = min_{u∈U} J(x₀, u).
If Q ≻ 0 then P ≻ 0 and V(x) is a Lyapunov function for the closed-loop system
ẋ(t) = Ax(t) + Bu*(x(t)), as can be checked by direct calculation of (d/dt)V(x(t)) along
the closed-loop trajectories.
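These LQ facts can be reproduced numerically; the sketch below (assuming SciPy is available, with arbitrarily chosen A, B, Q, R) solves the Riccati equation recalled above and checks that the resulting feedback is stabilizing.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                       # state weight, Q > 0
R = np.array([[1.0]])               # input weight, R > 0

# Solves A^T P + P A - P B R^{-1} B^T P + Q = 0 (the Riccati equation of (3.20)).
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal feedback u*(x) = -R^{-1} B^T P x

res = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
assert np.max(np.abs(res)) < 1e-8             # Riccati residual is ~0
assert np.all(np.linalg.eigvalsh(P) > 0)      # P = P^T > 0: a Lyapunov function
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)   # closed loop is Hurwitz
```

Note that SciPy's solver uses the sign convention AᵀP + PA − PBR⁻¹BᵀP + Q = 0, which matches the book's −PA − AᵀP + PBR⁻¹BᵀP = Q.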
Therefore, the Riccati inequality in (3.19) corresponds to the Riccati inequality
of an infinite-horizon LQ problem whose cost matrix is given by

⎛ Q   Cᵀ     ⎞
⎝ C   D + Dᵀ ⎠,      (3.21)
the corresponding Riccati equation being (3.19) with = instead of ⪰. As we shall see further in the book, a Riccati
equation for a PR system corresponds in the nonlinear case to a partial differential
inequality (Hamilton–Jacobi inequalities), whose solutions serve as Lyapunov function
candidates. The set of solutions is convex and possesses two extremal solutions
(which will be called the available storage and the required supply) which satisfy the
algebraic Riccati equation, i.e., (3.19) with equality; see Sect. 4.3.3, Lemma 4.50, and
Proposition 4.51. More details on the links between the KYP Lemma and optimal control will
be given in Sect. 3.11. The cases when D + Dᵀ = 0 and when D + Dᵀ ⪰ 0 is singular will be treated
in Sect. 4.5. Such cases possess some importance: indeed, PR functions obviously
need not have a realization with a full-rank matrix D. Let us end this section by
recalling another equivalence: the system (A, B, C, D) with a minimal realization
and D + Dᵀ ≻ 0 is PR if and only if the Hamiltonian matrix
⎛ A − B(D + Dᵀ)⁻¹C        B(D + Dᵀ)⁻¹Bᵀ          ⎞
⎝ −Cᵀ(D + Dᵀ)⁻¹C         −Aᵀ + Cᵀ(D + Dᵀ)⁻¹Bᵀ   ⎠      (3.23)
has no purely imaginary eigenvalues; see Lemmas A.61 and A.62. This is a way to
characterize SSPR transfer matrices. Indeed, notice that about s = ∞ one has

H(s) = C(sIₙ − A)⁻¹B + D = Σᵢ₌₁^{+∞} CAⁱ⁻¹B s⁻ⁱ + D,

so that H(∞) = D. SSPRness thus implies, by Definition 2.78 (2), that D ≥ δ > 0
for some δ ∈ R (or D + Dᵀ ⪰ δIₘ ≻ 0 if m ≥ 2). It is noteworthy that D + Dᵀ
≻ 0 ⇔ D ≻ 0 (in the sense that xᵀDx > 0 for all x ≠ 0). However, D is not necessarily symmetric.
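The Hamiltonian test (3.23) can be run directly; a sketch (not the book's code) for the SSPR transfer function H(s) = (s + 2)/(s + 1), whose Hamiltonian matrix turns out to have the real eigenvalues ±√2:

```python
import numpy as np

A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])
Ri = np.linalg.inv(D + D.T)                   # (D + D^T)^{-1}

M = np.block([                                # Hamiltonian matrix (3.23)
    [A - B @ Ri @ C,      B @ Ri @ B.T],
    [-C.T @ Ri @ C,      -A.T + C.T @ Ri @ B.T],
])
eigs = np.linalg.eigvals(M)
assert np.all(np.abs(eigs.real) > 1e-9)       # no purely imaginary eigenvalues
assert np.allclose(np.sort(eigs.real), [-np.sqrt(2.0), np.sqrt(2.0)])
```

The absence of imaginary-axis eigenvalues confirms positive realness of this minimal realization, consistent with the equivalence just stated.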
Remark 3.11 When the cost function is defined as J(x₀, u) = ∫₀^{+∞} yᵀ(t)y(t) dt,
then a necessary condition for x₀ᵀPx₀ to be the optimal (minimum) cost is that P = Pᵀ
satisfies the LMI

⎛ AᵀP + PA + CᵀC   PB + CᵀD ⎞
⎝ BᵀP + DᵀC        DᵀD      ⎠ ⪰ 0   [22].

One can use Proposition A.68 to construct an equivalent Riccati inequality (which involves (DᵀD)⁻¹
in case DᵀD has full rank).
3.1.5 Duality
The linear matrix inequality (3.3) thus defines a set P of matrices P ≻ 0. Let us
investigate the relationships between the set of solutions of the Lur'e equations for
a system (A, B, C, D) and for its dual system.

Lemma 3.12 (Duality) Let (A, B, C, D) be such that the set P is not empty.
The inverse P⁻¹ ∈ P⁻¹ of any element of P is a solution of the dual problem
(Aᵀ, Cᵀ, Bᵀ, D).
are simultaneously negative definite. The set P̃ is the set that solves the KYP Lemma
linear matrix inequality for the dual system.
Consider the set of equations in (3.2) and Definition 2.58 of an SPR transfer function.
Assume that a realization of the input–output system is given by the quadruple
(A, B, C, D), i.e., C(sIₙ − A)⁻¹B + D = H(s), and (A, B, C, D) is minimal. Then
H(s − ε) = C(sIₙ − εIₙ − A)⁻¹B + D, and a realization of H(s − ε) is given by
(A + εIₙ, B, C, D). Saying that H(s − ε) is PR is therefore equivalent to stating
that (A + εIₙ, B, C, D) satisfies the KYP Lemma set of equations (3.2), provided
(A + εIₙ, B, C, D) is minimal. Therefore, (A, B, C, D) is SPR if and only if (A +
εIₙ)ᵀP + P(A + εIₙ) = −LLᵀ, and the last two equations in (3.2) hold, with P =
Pᵀ ≻ 0. The first equation can be rewritten as AᵀP + PA = −LLᵀ − 2εP. As is
well known, this implies that the matrix A is Hurwitz, i.e., all its eigenvalues have
negative real parts. Indeed, consider the Lyapunov function V(x) = xᵀPx. Then, along
the trajectories of the system ẋ(t) = Ax(t), one obtains V̇(x(t)) = xᵀ(t)(−LLᵀ −
2εP)x(t) ≤ −2εV(x(t)). Consequently, the system is exponentially stable. This, in
particular, shows that SPR transfer functions have poles with negative real parts and
confirms Theorem 2.61.
Lemma 3.13 (Multivariable LKY Lemma [23]) Consider the system in (3.1), with
m ≥ 2. Assume that the rational transfer matrix H(s) = C(sI − A)⁻¹B + D has poles
which lie in Re[s] < −γ, where γ > 0, and (A, B, C, D) is a minimal realization of
H(s). Then H(s − μ) for μ > 0 is PR, if and only if a matrix P = Pᵀ ≻ 0 and
matrices L and W exist such that

⎧ PA + AᵀP = −LLᵀ − 2μP
⎨ PB − Cᵀ = −LW      (3.25)
⎩ D + Dᵀ = WᵀW.
The conditions in (3.25) are more stringent than those in (3.3). Notice that the first
line in (3.25) can be rewritten as

P(μIₙ + A) + (μIₙ + A)ᵀP = −LLᵀ,      (3.26)

which allows one to recover (3.3) with A changed to μIₙ + A. The transfer function
of the triple (μIₙ + A, B, C) is precisely H(s − μ). Thus (3.25) exactly states that
(μIₙ + A, B, C) is PR and satisfies (3.3). It is assumed in Lemma 3.13 that the system
is multivariable, i.e., m ≥ 2. The LKY Lemma for monovariable systems (m = 1) is
as follows.
Lemma 3.14 (Monovariable LKY Lemma [24]) Consider the system in (3.1), with
m = 1. Suppose that A is such that det(sIₙ − A) has only zeroes in the open left-half
plane. Suppose (A, B) is controllable, and let μ > 0 and L = Lᵀ ≻ 0 be given. Then a
real vector q and a real matrix P = Pᵀ ≻ 0 satisfying

⎧ PA + AᵀP = −qqᵀ − μL
⎨ PB − Cᵀ = √(2D) q,      (3.27)
Lemma 3.13 is not a direct extension of Lemma 3.14, because the matrix L = Lᵀ ≻ 0
is arbitrary in Lemma 3.14. However, minimality is not assumed in Lemma 3.14,
since only controllability is supposed to hold. We now state a result that concerns
Definition 2.87.
Lemma 3.15 ([25]) Assume that the triple (A, B, C) is controllable and observable.
The system whose realization is (A, B, C, D) is γ-positive real if and only if there
exist matrices P = Pᵀ ≻ 0, L, and W such that

⎧ PA + AᵀP = −(1 − γ²)CᵀC − LᵀL
⎨ PB = (1 + γ²)Cᵀ − (1 − γ²)CᵀD − LᵀW      (3.28)
⎩ WᵀW = (γ² − 1)Iₘ + (γ² − 1)DᵀD + (γ² + 1)(D + Dᵀ).
The next seminal result is due to J. T. Wen [26], who established various relationships
between conditions in the frequency domain and in the time domain for SPR systems.
In the following lemma, μ_min(L) = λ_min(½(L + Lᵀ)).
Lemma 3.16 (KYP Lemma for SPR Systems) Consider the LTI, minimal (controllable
and observable) system (3.1), whose transfer matrix is given by

H(s) = C(sIₙ − A)⁻¹B + D,      (3.29)

where the minimum singular value σ_min(B) > 0.⁵ Assume that the system is exponentially
stable. Consider the following statements:

(1) There exist P ≻ 0, P, L ∈ Rⁿˣⁿ, μ_min(L) ≜ ε > 0, Q ∈ Rᵐˣⁿ, W ∈ Rᵐˣᵐ
that satisfy the Lur'e equations

AᵀP + PA = −QᵀQ − L      (3.30)
BᵀP − C = WᵀQ      (3.31)
WᵀW = D + Dᵀ.      (3.32)

(1′) The Lur'e equations (3.30)–(3.32) are satisfied with

L = 2μP      (3.33)

for some μ > 0.
⁵ Let m ≤ n. This is found to be equivalent to Ker(B) = {0}, and to rank(B) = m [27, Proposition
5.6.2].
and

lim_{ω→∞} ω²[H(jω) + H*(jω)] ≻ 0.      (3.37)
(5) The system can be realized as the driving-point impedance of a multiport
dissipative network.
(6) The Lur'e equations with L = 0 are satisfied by the internal parameter set
(A + μIₙ, B, C, D) corresponding to H(jω − μ), for some μ > 0.
(8) There exist a positive constant ρ and a constant ξ(x₀) ∈ R, ξ(0) = 0, such
that for all t ≥ 0,

∫₀ᵗ uᵀ(s)y(s) ds ≥ ξ(x₀) + ρ ∫₀ᵗ ‖u(s)‖² ds.      (3.39)

(9) There exist a positive constant γ and a constant ξ(x₀), ξ(0) = 0, such that
for all t ≥ 0,

∫₀ᵗ e^{γs} uᵀ(s)y(s) ds ≥ ξ(x₀).      (3.40)
(10) There exists a positive constant α such that the following kernel is positive
in L²(R₊; Rᵐˣᵐ):

where δ and 1(·) denote the Dirac measure and the step function, respectively.
(11) The following kernel is coercive in L²([0, T]; Rᵐˣᵐ), for all T:
J_f = ⟨Ru + r, u⟩ + k,

where the inner products are in the L² sense. A unique solution exists if R is a
coercive operator in L(L²) (the space of bounded operators on L²). Now,
By condition (2), if

η > ‖F(jωIₙ − A)⁻¹B‖²_H∞,
Since a unique solution of the optimal control problem exists, the necessary conditions
from the maximum principle must be satisfied. The Hamiltonian is given by

λ̇ = 2FᵀFx − 2Cᵀu − Aᵀλ.

(PA + AᵀP + FᵀF)x = (C − BᵀP)ᵀu = −(C − BᵀP)ᵀW⁻¹W⁻ᵀ(C − BᵀP)x.
Assume condition (2) is false. Then there exist {uₙ}, ‖uₙ‖ = 1, and {ωₙ} such that

0 ≤ ⟨[H(jωₙ) + H*(jωₙ)]uₙ, uₙ⟩ ≤ 1/n.

As n → ∞, if ωₙ → ∞, then

⟨[H(jωₙ) + H*(jωₙ)]uₙ, uₙ⟩ → ⟨(D + Dᵀ)uₙ, uₙ⟩ ≥ 2μ_min(D) > 0,

which is a contradiction since the left-hand side converges to zero. Hence, {uₙ} and {ωₙ}
are both bounded sequences and therefore contain convergent subsequences u_{n_k} and
ω_{n_k}. Let the limits be u₀ and ω₀. Then

⟨[H(jω₀) + H*(jω₀)]u₀, u₀⟩ = 0.
This implies

V̇(x(t)) ≤ −(ε/2)‖x(t)‖² + uᵀ(t)y(t) − ½ ‖Qx(t) − W u(t)‖²
        ≤ −(ε/2)‖x(t)‖² + uᵀ(t)y(t).
Identifying −V(x₀) with ξ(x₀) and ε with ρ in (3.39), condition (8) follows.
• (8) =⇒ (2)
Let t → ∞ in (3.39); then

∫₀^∞ uᵀ(s)y(s) ds ≥ ξ(x₀) + ρ ∫₀^∞ ‖u(t)‖² dt.

In particular, for x₀ = 0, ∫₀^∞ uᵀ(s)y(s) ds ≥ ρ ∫₀^∞ ‖u(t)‖² dt. By Plancherel's theorem,

∫₋∞^∞ û*(jω)ŷ(jω) dω ≥ ρ ∫₋∞^∞ ‖û(ω)‖² dω,

for all u ∈ L². Suppose that for each η > 0, there exist w ∈ Cᵐ and ω₀ ∈ R such that

and

∫₋∞^∞ ρ ‖û(ω)‖² dω = rρ ‖w‖².
If η < ρ, this is a contradiction. Hence, there exists η > 0 such that (3.34)
holds.
• (8) =⇒ (11)
Condition (11) follows directly from condition (8).
• (11) =⇒ (8)
The implication is obvious if x₀ = 0. In the proof of (8) =⇒ (2), x₀ is taken to be
zero. Therefore, for x₀ = 0, (11) =⇒ (8) =⇒ (2). It has already been shown that
(2) =⇒ (8). Hence, (11) =⇒ (2) =⇒ (8).
• (1′) =⇒ (1)
By definition.
• (1) =⇒ (1′) (if D = 0)
If D = 0, then W = 0. Rewrite (3.30) as AᵀP + PA = −QᵀQ − L + 2μP − 2μP.
For μ small enough, QᵀQ + L − 2μP ≻ 0. Hence, there exists Q₁ such that AᵀP +
PA = −Q₁ᵀQ₁ − 2μP. Since (3.31) is independent of Q₁ when D = 0, (1′) is proved.
• (1′) =⇒ (6)
By straightforward manipulation.
• (6) =⇒ (7)
Same as in (1) =⇒ (2), except that L is replaced by 2μP.
• (7) =⇒ (6)
This is the Positive Real (or KYP) Lemma.
• (4) =⇒ (7)
For μ > 0 sufficiently small, A + μIₙ remains strictly stable. Now, by direct substitution,

By (3.37), for all ω sufficiently large, there exists g > 0 such that

w*[H(jω) + H*(jω)]w ≥ (g/ω²) ‖w‖².      (3.48)

Hence, there exists ω₁ ∈ R large enough so that (3.47) and (3.48) hold with some g
and k dependent on ω₁. Then, for |ω| ≤ ω₁,

The terms in curly brackets in (3.49) and (3.50) are finite. Hence, there exists μ small
enough such that (3.49) and (3.50) are both nonnegative, proving condition (7).
• (7) =⇒ (4)
From (7) =⇒ (6), the minimal realization (A, B, C, D) associated with H(jω)
satisfies the Lur'e equations with L = 2μP. Following the same derivation as in
(1) =⇒ (2), for all w ∈ Cᵐ, we have

Since P ≻ 0 and, by assumption, σ_min(B) > 0, H(jω) is positive for all ω ∈ R. It
remains to show (3.37). Multiplying both sides of the inequality above by ω² yields

ω² w*[H(jω) + H*(jω)]w ≥ [2μ μ_min(P) σ_min²(B) ω² / (|ω| − ‖A‖)²] ‖w‖².
V̇(t, x(t)) = (γ/2) e^{γt} xᵀ(t)Px(t) + ½ e^{γt} xᵀ(t)(PA + AᵀP)x(t) + e^{γt} xᵀ(t)PBu(t)
≤ γ V(t, x(t)) − (ε/(2‖P‖)) V(t, x(t)) − ½ e^{γt} ‖Qx(t) − W u(t)‖² + e^{γt} uᵀ(t)y(t)
≤ −(ε/(2‖P‖) − γ) V(t, x(t)) + e^{γt} uᵀ(t)y(t).

Choose 0 < γ < ε/(2‖P‖). Then, by the comparison principle, for all T ≥ 0,

∫₀^T e^{γs} uᵀ(s)y(s) ds ≥ −x₀ᵀPx₀.
• (9) =⇒ (6)
Define

⎧ u₁(t) = e^{(γ/2)t} u(t)
⎨ y₁(t) = e^{(γ/2)t} y(t)      (3.52)
⎩ x₁(t) = e^{(γ/2)t} x(t).

Since this holds true for all û₁(jω) ∈ L², one has H₁(jω) + H₁*(jω) ≥ 0. Equivalently,
H(jω − γ/2) + H*(jω − γ/2) ⪰ 0, proving (7).
• (9) =⇒ (10)
Use the transformation in (3.52); then condition (10) follows directly from condition
(9) with α = γ/2.
• (10) =⇒ (9)
If x₀ = 0, (10) =⇒ (9) is obvious. Since in the proof of (9) =⇒ (6) only the x₀ =
0 case is considered, it follows, for the x₀ = 0 case, that (10) =⇒ (9) =⇒ (6). It has
already been shown that (6) =⇒ (9). Hence, (10) =⇒ (6) =⇒ (9).
• (2) =⇒ (4) =⇒ (3)
The implications are obvious.
Remark 3.17 Stating H(jω) + H*(jω) ⪰ δIₘ for all ω ∈ R = (−∞, +∞) is
equivalent to stating H(jω) + H*(jω) ≻ 0 for all ω ∈ R ∪ {−∞, +∞} = [−∞,
+∞]. This is different from H(jω) + H*(jω) ≻ 0 for all ω ∈ R, because such a
condition does not imply the existence of a δ > 0 such that H(jω) + H*(jω) ⪰ δIₘ
for all ω ∈ R.
Example 3.18 If H(s) = s/(s + 1), then H(jω) + H*(jω) = 2ω²/(1 + ω²), so H(s) is not SPR
despite H(∞) + H*(∞) = 2: indeed, H(0) + H*(0) = 0. If H(s) = (s + 2)/(s + 1), then H(jω) +
H*(jω) = (4 + 2ω²)/(1 + ω²) ≥ 2 for all ω ∈ [−∞, +∞]. This transfer function is SSPR. If
H(s) = 1/(s + 1), then H(jω) + H*(jω) = 2/(1 + ω²) > 0 for all ω ∈ (−∞, +∞). Moreover,
lim_{ω→+∞} ω²[H(jω) + H*(jω)] = lim_{ω→+∞} 2ω²/(1 + ω²) = 2 > 0, so H(s) is SPR.
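These three classifications can be confirmed by direct evaluation on the imaginary axis (a numerical sketch, not from the book):

```python
re2 = lambda H, w: (H(1j * w) + H(1j * w).conjugate()).real   # H(jw) + H*(jw)

H1 = lambda s: s / (s + 1.0)           # not SPR: vanishes at w = 0
H2 = lambda s: (s + 2.0) / (s + 1.0)   # SSPR: bounded below by 2, also at infinity
H3 = lambda s: 1.0 / (s + 1.0)         # SPR: positive for finite w, w^2-limit > 0

assert re2(H1, 0.0) == 0.0
assert all(re2(H2, w) >= 2.0 - 1e-9 for w in (0.0, 1.0, 10.0, 1e6))
assert all(re2(H3, w) > 0.0 for w in (0.0, 1.0, 100.0))
assert abs(1e6 ** 2 * re2(H3, 1e6) - 2.0) < 1e-3   # the w^2-limit (3.37) equals 2
```

The last assertion illustrates why the ω²-weighted limit in (3.37) is needed: H₃(jω) + H₃*(jω) itself tends to zero, yet the system is SPR.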
Further results on the characterization of PR or SPR transfer functions can be found
in [31–44].
Remark 3.19 (Positive real lemma for SSPR systems) Strong SPR systems are
defined in Definition 2.78. Let (A, B, C, D) be a minimal state-space representation.
It follows from Lemma 3.16, cases (2) and (1), that a system is SSPR if and only
if there exists P = Pᵀ ≻ 0 such that

⎛ AᵀP + PA   PB − Cᵀ    ⎞
⎝ BᵀP − C   −(D + Dᵀ)   ⎠ ≺ 0.      (3.54)
Assumption 1 The pair (E, A) is regular, i.e., det(sE − A) is not identically zero,
s ∈ C.
Let us recall some facts about (3.55). If the pair (E, A) is regular, then there exist
two square invertible matrices U and V such that the system can be transformed into
its Weierstrass canonical form

Ē ẋ(t) = Āx(t) + B̄u(t)
y(t) = C̄x(t) + Du(t),      (3.56)

with x(0⁻) = x₀,

Ā = UAV = ⎛ A₁   0       ⎞ ,   Ē = UEV = ⎛ I_q  0 ⎞ ,   B̄ = UB = ⎛ B₁ ⎞ ,   C̄ = CV = (C₁  C₂).
           ⎝ 0   I_{n−q} ⎠               ⎝ 0    N ⎠             ⎝ B₂ ⎠

The (n − q) × (n − q) matrix N is nilpotent, i.e., N^l = 0 for some integer l ≥ 1.
Generally speaking, solutions of (3.55) are not functions of time but distributions
(i.e., the general solution may contain Dirac measures and derivatives of Dirac
measures). The system is called impulse-free if N = 0. To better
visualize this, let us notice that the transformed system can be written as [45]
ẋ₁(t) = A₁x₁(t) + B₁u(t)
N ẋ₂(t) = x₂(t) + B₂u(t),      (3.57)

where ⋆ denotes the convolution product. The Dirac measure at t = 0 is δ₀, while δ₀⁽ⁱ⁾
is its ith derivative in the sense of Schwartz distributions. When N = 0, the variable
x₂(·) is just equal to −B₂u(t) at all times. Otherwise, an initial state jump may occur,
and this is the reason why we wrote the left limit x(0⁻) in (3.55). The exponential
modes of the regular pair (E, A) are the finite eigenvalues of the pencil sE − A, i.e.,
the finite s ∈ C such that det(sE − A) = 0.
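The exponential modes of a regular pair (E, A) can be computed as the finite generalized eigenvalues of the pencil; a sketch with SciPy on a toy pair (illustrative, not from the book):

```python
import numpy as np
from scipy.linalg import eig

E = np.diag([1.0, 0.0])                 # singular E: one algebraic constraint
A = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])             # det(sE - A) = -(s + 1): regular pencil

w = eig(A, E, right=False)              # generalized eigenvalues: A v = lam E v
finite = w[np.isfinite(w)]
assert finite.size == 1                  # one exponential mode ...
assert np.isclose(finite[0].real, -1.0) and np.isclose(finite[0].imag, 0.0)
assert np.all(finite.real < 0.0)         # ... and it is stable: no unstable modes
```

The singular part of E produces an infinite generalized eigenvalue, which is exactly the mode filtered out when checking admissibility.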
Definition 3.20 The descriptor system (3.55) is said to be admissible if the pair
(E, A) is regular, impulse-free, and has no unstable exponential modes.

Proposition 3.21 ([46]) The descriptor system (3.55) is admissible and SSPR
(strongly SPR) if and only if there exist matrices P ∈ Rⁿˣⁿ and W ∈ Rⁿˣᵐ satisfying

EᵀP = PᵀE ⪰ 0,   EᵀW = 0,

⎛ AᵀP + PᵀA              AᵀW + PᵀB − Cᵀ       ⎞
⎝ (AᵀW + PᵀB − Cᵀ)ᵀ      WᵀB + BᵀW − D − Dᵀ   ⎠ ≺ 0.      (3.59)
T
has a solution P ∈ Rn×n , then the transfer matrix H (s) is PR. Conversely, let H (s) =
p
i=−∞ Mi s be the expansion of H (s) about s = ∞, and assume that D + D
i T
M0 + M0 . Let also the realization of H (s) in (3.55) be minimal. Then if H (s) is PR,
T
Minimality means that the dimension n of E and A is as small as possible. The main
difference between Proposition 3.21 and Theorem 3.22 is that the latter does not assume
the system to be impulse-free. When the system is impulse-free, one gets
M₀ = H(∞) = D − C₂B₂, and the condition D + Dᵀ ⪰ M₀ + M₀ᵀ is not satisfied
unless C₂B₂ + (C₂B₂)ᵀ ⪰ 0.
Proof Let us prove the sufficiency part of Theorem 3.22. Let s with Re[s] > 0 be any
point such that s is not a pole of H(s). The matrix sE − A is nonsingular for such an s.
From Proposition A.67, it follows that we can equivalently write the LMI in (3.60)
as

⎧ AᵀP + PᵀA = −LLᵀ
⎪ PᵀB − Cᵀ = −LW
⎨ D + Dᵀ ⪰ WᵀW      (3.61)
⎩ EᵀP = PᵀE ⪰ 0,
for some matrices L and W. From the first and last equations of (3.61), it follows that

Notice that (sE − A)F(s) = B, where F(s) = (sE − A)⁻¹B. Thus, since H(s) =
C(sE − A)⁻¹B + D, the second relation in (3.61) yields

Using now (3.63) and (3.62) and the third relation in (3.61), we obtain

Recall here that s has been assumed to be any complex number with Re[s] > 0 which
is not a pole of H(s). Now suppose H(s) has a pole s₀ with Re[s₀] > 0.
Then there exists a pointed neighborhood of s₀ that is free of any pole of H(s), so
H(s) satisfies (3.65) in this domain. However, this is impossible if s₀ is a
pole of H(s). Therefore, H(s) does not have any pole in Re[s] > 0, and (3.65) is true
for any s ∈ C with Re[s] > 0. Thus H(s) is PR.
In the proof, we used the fact that the pair $(E, A)$ is regular (see Assumption 1), which equivalently means that the matrix $sE - A$ is singular for only finitely many $s \in \mathbb{C}$.
The SSPR version of Theorem 3.22 is as follows [48, Theorem 3.9].
$C = (1\;\, 1\;\, 1)$, $D = \frac{1}{2}$, where $b$ is a constant. The pair $(E, A)$ is regular, impulse-free, and stable. One has
$$H(s) = \frac{1}{s+1} + \frac{1}{s+2} - b + \frac{1}{2}, \tag{3.68}$$
and from $H(j\omega) + H(-j\omega) = \frac{2}{\omega^2+1} + \frac{4}{\omega^2+4} - 2b + 1$, it follows that $H(s)$ is SSPR when $b = 0$ and is not SSPR when $b = 1$, despite the fact that $D > 0$.
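The SSPR test above is easy to reproduce numerically. The following sketch (plain NumPy; the frequency grid is an illustrative choice) samples $H(j\omega) + H(-j\omega)$ for the two values of $b$ discussed above:

```python
import numpy as np

def popov_scalar(omega, b):
    # H(jw) + H(-jw) = 2/(w^2+1) + 4/(w^2+4) - 2b + 1 for the example above
    return 2.0 / (omega**2 + 1.0) + 4.0 / (omega**2 + 4.0) - 2.0 * b + 1.0

omega = np.logspace(-3, 4, 2000)     # wide frequency grid, approaching the limit w -> inf
vals_b0 = popov_scalar(omega, b=0.0)
vals_b1 = popov_scalar(omega, b=1.0)

print(vals_b0.min() > 0.0)           # True: SSPR for b = 0 (the limit value is 1 > 0)
print(vals_b1.min() < 0.0)           # True: not SSPR for b = 1 (the limit value is -1)
```

The high-frequency limit $1 - 2b$ is what distinguishes the two cases, which is why sampling up to large $\omega$ matters here.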
Another example is treated in Example 4.70. Theorem 3.22 is completed as follows.
Theorem 3.26 ([47]) If the LMI
$$E^T P E = E^T P^T E \succeq 0, \qquad \begin{pmatrix} A^T P E + E^T P^T A & E^T P^T B - C^T \\ (E^T P^T B - C^T)^T & -D - D^T \end{pmatrix} \preceq 0, \tag{3.69}$$
has a solution $P \in \mathbb{R}^{n \times n}$, then the transfer matrix $H(s)$ of (3.55) is PR.
It is noted in [50, Remark 4.3] that an impulse-free descriptor system (i.e., $N = 0$ in (3.57), (3.58)) can be recast into the standard form, because in this case $H(s) = C_1(sI_{n_1} - A_1)^{-1}B_1 + D - C_2 B_2$, where $C_1$ and $C_2$ are such that $y(t) = (C_1\;\, C_2)x(t) + Du(t)$ in the so-called Weierstrass canonical form (whose state equation is given by (3.57), (3.58)). Therefore, systems for which $N \neq 0$ are of greater interest. From the Weierstrass form, one constructs three matrices:
$$V = \begin{pmatrix} I & 0 & 0 & \dots & 0 & 0 \\ 0 & -B_2 & -NB_2 & \dots & -N^{k-1}B_2 & 0 \end{pmatrix}, \quad F = \begin{pmatrix} A_1 & B_1 & 0 & \dots & 0 & 0 \\ 0 & 0 & I & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 0 & I \\ 0 & 0 & 0 & \dots & 0 & 0 \end{pmatrix}, \quad U = (0\;\, I\;\, 0\; \dots\; 0\;\, 0). \tag{3.70}$$
One then defines $\mathcal{W} = \begin{pmatrix} VF \\ V \\ U \end{pmatrix}$, and one denotes $M \preceq 0$ if $x^T M x \leq 0$ for all $x \in \mathcal{W}$.
Then the next result holds.
Theorem 3.27 ([50, Theorem 5.1]) Consider the following statements:
It is important to note that impulsive behaviors (involving Dirac measures and their derivatives) are avoided in Theorem 3.27, because the subspace $\mathcal{W}$ captures the initial states for which a smooth solution exists. Passivity arguments are used in the context of the higher order Moreau sweeping process with distribution solutions in [51, 52]. It would be interesting to investigate whether such an approach could extend to descriptor variable systems.
Further reading: Results on positive realness of descriptor systems, KYP Lemma
extensions and applications to control synthesis, can be found in [46, 47, 49, 53–59].
The problem of rendering a descriptor system SPR by output feedback is treated in
[60]. The discrete-time case is analyzed in [61, 62].
Lemma 3.28 ([63] (Weakly SPR)) Consider the minimal (controllable and observable) LTI system (3.1), whose transfer matrix function is given by $H(s) = C(sI_n - A)^{-1}B + D$.
Assume that the system is exponentially stable and minimum phase. Under such
conditions the following statements are equivalent:
3.2 Weakly SPR Systems and the KYP Lemma 109
and such that the quadruplet $(A, B, L, W)$ is a minimal realization whose transfer function $\bar{H}(s) = W + L^T(sI_n - A)^{-1}B$ has no zeros on the $j\omega$-axis (i.e., rank $\bar{H}(j\omega) = m$ for all $\omega < \infty$).
2. $H(j\omega) + H^{\ast}(j\omega) \succ 0$ for all $\omega \in \mathbb{R}$.
3. The following input–output relationship holds:
$$\int_0^t u^T(s)y(s)\,ds + \beta \geq \int_0^t \bar{y}^T(s)\bar{y}(s)\,ds, \quad \text{for all } t > 0,$$
and so
since $\bar{H}(s)$ has no zeros on the $j\omega$-axis, $\bar{H}(j\omega)$ has full rank and, therefore, the right-hand side of (3.75) is strictly positive.
(2) ⇒ (1): In view of statement 2, there exists an asymptotically stable transfer function $\hat{H}(s)$ such that (see Sects. A.6.7, A.6.8, [64] or [65])
$$H(j\omega) + H^{\ast}(j\omega) = \hat{H}^{\ast}(j\omega)\hat{H}(j\omega) \succ 0. \tag{3.76}$$
Therefore the various matrices above can be related through a state space transformation, i.e.,
$$\begin{cases} TAT^{-1} = F \\ TB = G \\ CT^{-1} = W^T J + G^T \bar{P}, \end{cases} \tag{3.83}$$
which is the second equation of (3.72). The transfer function $\hat{H}(s)$ was defined by the quadruplet $(F, G, J, W)$ in (3.77), which is equivalent, through a state space transformation, to the quadruplet $(T^{-1}FT, T^{-1}G, JT, W)$. In view of (3.83), and since $L^T = JT$, $\hat{H}(s)$ can also be represented by the quadruplet $(A, B, L^T, W)$.
We finally note from (3.76) that $\hat{H}(s)$ has no zeros on the $j\omega$-axis.
(1) ⇒ (3): Consider the positive-definite function $V(x) = \frac{1}{2}x^T P x$. Then using (3.72) we obtain
where $\bar{y}$ is given by
$$\begin{cases} \dot{x}(t) = Ax(t) + Bu(t) \\ \bar{y}(t) = L^T x(t) + W u(t), \end{cases} \tag{3.87}$$
with $\beta = V(x(0))$.
(3) ⇒ (2): Without loss of generality, consider an input $u$ such that $\int_0^t u^T(s)u(s)\,ds < +\infty$ for all $t \geq 0$. Dividing (3.89) by $\int_0^t u^T(s)u(s)\,ds$, we obtain
$$\frac{\int_0^t u^T(s)y(s)\,ds + V(x(0))}{\int_0^t u^T(s)u(s)\,ds} \geq \frac{\int_0^t \bar{y}^T(s)\bar{y}(s)\,ds}{\int_0^t u^T(s)u(s)\,ds}. \tag{3.90}$$
Since $\bar{H}(s)$ has no zeros on the $j\omega$-axis, the right-hand side of the above equation is strictly positive, and so is the left-hand side for all nonzero $U(j\omega) \in \mathcal{L}_2$, and thus
In the same way, the KYP Lemma for MSPR transfer functions (see Definition 2.90) has been derived in [67].
Lemma 3.29 ([67, Lemma 2]) Let $H(s) \in \mathbb{C}^{m \times m}$ be MSPR, and let $(A, B, C, D)$ be a minimal realization of it. Then there exist real matrices $P = P^T \succ 0$, $\bar{L}$, and $W \in \mathbb{R}^{m \times m}$, such that
$$\begin{cases} A^T P + PA = -L^T L \\ C - B^T P = W^T L \\ W^T W = D + D^T \\ L = (0\;\; \bar{L}) \in \mathbb{R}^{m \times (n_1 + n_2)}, \end{cases} \tag{3.92}$$
3.3.1 Introduction
The KYP Lemma as stated above holds for minimal realizations $(A, B, C, D)$, i.e., when there is no pole/zero cancellation in the rational matrix $C(sI_n - A)^{-1}B$.
However, as Example 3.2 proves, non-minimal realizations may also yield a solvable set of equations (3.2). The KYP Lemma can indeed be stated for stabilizable systems, or more generally for uncontrollable and/or unobservable systems. This is done in [68–76] and is presented in this section. The motivation for such an extension stems from physics, as it is easy to construct systems (like electrical circuits) which are not controllable [77, 78] but just stabilizable or marginally stable, or not observable. There are also topics, like adaptive control, in which many pole/zero cancellations occur, so that controllability of the dynamical systems cannot be assumed. Let us provide an academic example. Consider the system with transfer function $h(s) = \frac{s+1}{s+1}$, which is SSPR. This system has several realizations:
$$\text{(a)}\ \begin{cases} \dot{x}(t) = -x(t) \\ y(t) = cx(t) + u(t) \\ c \in \mathbb{R}\setminus\{0\}, \end{cases} \qquad \text{(b)}\ \begin{cases} \dot{x}(t) = -x(t) + bu(t) \\ y(t) = u(t) \\ b \in \mathbb{R}\setminus\{0\}, \end{cases} \qquad \text{(c)}\ \begin{cases} \dot{x}(t) = -x(t) \\ y(t) = u(t). \end{cases} \tag{3.93}$$
The representation in (3.93)(a) is uncontrollable and observable, the one in (3.93)(b) is controllable and unobservable, and the one in (3.93)(c) is uncontrollable and unobservable. With each one of these three representations, we can associate a quadruple $(A, B, C, D)$: $(-1, 0, c, 1)$ for (3.93)(a), $(-1, b, 0, 1)$ for (3.93)(b), and $(-1, 0, 0, 1)$ for (3.93)(c). The Lur'e equations$^6$ have the unknown $p > 0$ and take the form
$$\begin{pmatrix} -2p & -c \\ -c & -2 \end{pmatrix} \preceq 0 \ \text{for (3.93)(a)}, \quad \begin{pmatrix} -2p & pb \\ pb & -2 \end{pmatrix} \preceq 0 \ \text{for (3.93)(b)}, \quad \begin{pmatrix} -2p & 0 \\ 0 & -2 \end{pmatrix} \preceq 0 \ \text{for (3.93)(c)}.$$
They all possess solutions. This shows that minimality is not at all necessary for the KYP Lemma equations to possess a positive-definite solution. As a further example, let us consider $h(s) = \frac{s-1}{s-1}$, which is also SSPR since $h(s) = 1$. It
has the realizations:
$$\text{(a)}\ \begin{cases} \dot{x}(t) = x(t) \\ y(t) = cx(t) + u(t) \\ c \in \mathbb{R}\setminus\{0\}, \end{cases} \qquad \text{(b)}\ \begin{cases} \dot{x}(t) = x(t) + bu(t) \\ y(t) = u(t) \\ b \in \mathbb{R}\setminus\{0\}, \end{cases} \qquad \text{(c)}\ \begin{cases} \dot{x}(t) = x(t) \\ y(t) = u(t). \end{cases} \tag{3.94}$$
The representation in (3.94)(a) is uncontrollable and observable, the one in (3.94)(b) is controllable and unobservable, and the one in (3.94)(c) is uncontrollable and unobservable. The uncontrollable/unobservable mode is unstable. With each one of these three representations we can associate a quadruple $(A, B, C, D)$: $(1, 0, c, 1)$ for (3.94)(a), $(1, b, 0, 1)$ for (3.94)(b), and $(1, 0, 0, 1)$ for (3.94)(c). The Lyapunov
$^6$ More precisely, what we name Lur'e equations should include the matrices $L$ and $W$ among the unknowns, so that we should rather speak of the KYP Lemma LMI here.
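The three $2 \times 2$ Lur'e LMIs written above for the realizations (3.93) can be checked numerically. A minimal sketch, choosing for illustration the hypothetical parameter values $p = c = b = 1$ (any $p$ in a suitable interval works):

```python
import numpy as np

def is_nsd(M, tol=1e-12):
    # negative semi-definiteness test via the eigenvalues of the symmetric matrix M
    return bool(np.all(np.linalg.eigvalsh(M) <= tol))

p, c, b = 1.0, 1.0, 1.0                        # p > 0 is the unknown; c, b parametrize (3.93)
lmi_a = np.array([[-2*p, -c], [-c, -2.0]])     # realization (3.93)(a)
lmi_b = np.array([[-2*p, p*b], [p*b, -2.0]])   # realization (3.93)(b)
lmi_c = np.array([[-2*p, 0.0], [0.0, -2.0]])   # realization (3.93)(c)

print(all(is_nsd(M) for M in (lmi_a, lmi_b, lmi_c)))  # True: p = 1 solves all three
```

For (3.93)(a) with these values the eigenvalues are $-1$ and $-3$, so the LMI indeed holds with a positive-definite $p$.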
…+ 1, which has a realization $(A, B, C, D)$ that is not controllable, due to the pole/zero cancellation at $s = \pm j$. It can be proved that there does not exist $P = P^T \succ 0$ that solves the Lur'e equations, and the system is not passive in the sense of Definition 2.1, because there does not exist a bounded $\beta$ such that (2.1) holds, except if $x(0) = 0$, in which case $\beta = 0$. An interesting example of a non-minimal state space representation of a power system, whose Lur'e equations possess a solution $P = P^T$, can be found in [79, Sect. 5].
Let us recall a fundamental result. Consider any matrices $A, B, C, D$ of appropriate dimensions. Then the solvability of the KYP Lemma set of equations (3.2) implies that
$$\Pi(j\omega) \succeq 0 \tag{3.95}$$
for all $\omega \in \mathbb{R}$, where the spectral density function $\Pi(\cdot)$ was introduced by Popov and is named Popov's function, as we already pointed out in Sect. 2.1, Theorem 2.35, and Proposition 2.36. There we saw that one can characterize a positive operator by the positivity of the associated spectral function. In a word, a necessary condition for the solvability of the KYP Lemma set of equations is that the Popov function satisfies (3.95). The proof of this result is not complex and relies on the following fact. Let $Q$, $C$, and $R$ be matrices of appropriate dimensions, and define the spectral function (also called the Popov function)
$$\Pi(s) = \begin{pmatrix} (-sI_n - A)^{-1}B \\ I_m \end{pmatrix}^{T} \begin{pmatrix} Q & C \\ C^T & R \end{pmatrix} \begin{pmatrix} (sI_n - A)^{-1}B \\ I_m \end{pmatrix}. \tag{3.96}$$
This is typically the kind of property that is used in Theorem 3.77, see also [80–82], and it is closely related to the existence of spectral factors for PR transfer matrices. Indeed, for any $P = P^T$ one has the identity
$$\Pi(s) = \begin{pmatrix} (-sI_n - A)^{-1}B \\ I_m \end{pmatrix}^{T} \begin{pmatrix} Q - A^T P - PA & C - PB \\ C^T - B^T P & R \end{pmatrix} \begin{pmatrix} (sI_n - A)^{-1}B \\ I_m \end{pmatrix}. \tag{3.97}$$
It follows from (3.97) that $\begin{pmatrix} Q - A^T P - PA & C - PB \\ C^T - B^T P & R \end{pmatrix} \succeq 0$ implies $\Pi(s) \succeq 0$. Consequently, the solvability of the Lur'e equations with some $P = P^T$ implies the nonnegativity of the Popov function.
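The standard Popov-function identity behind (3.97), namely that replacing the blocks $(Q, C, R)$ by $(Q - A^T P - PA,\ C - PB,\ R)$ leaves $\Pi(s)$ unchanged for any symmetric $P$, can be validated numerically. A sketch with randomly generated matrices (all names and sizes are illustrative; here $C$ denotes the $n \times m$ off-diagonal block of the middle matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # shift keeps sI - A well conditioned
B = rng.standard_normal((n, m))
C = rng.standard_normal((n, m))                     # n x m coupling block of (3.96)
Q = rng.standard_normal((n, n)); Q = Q + Q.T
R = rng.standard_normal((m, m)); R = R + R.T
P = rng.standard_normal((n, n)); P = P + P.T        # an arbitrary symmetric P

def popov(s, Q, C, R):
    Wl = np.vstack([np.linalg.solve(-s * np.eye(n) - A, B), np.eye(m)])  # left factor at -s
    Wr = np.vstack([np.linalg.solve(s * np.eye(n) - A, B), np.eye(m)])
    M = np.block([[Q, C], [C.T, R]])
    return Wl.T @ M @ Wr

s = 0.3 + 1.7j
Pi1 = popov(s, Q, C, R)
Pi2 = popov(s, Q - A.T @ P - P @ A, C - P @ B, R)
print(np.allclose(Pi1, Pi2))   # True: Pi(s) does not depend on the symmetric P
```

This is precisely why the Lur'e LMI middle matrix being positive semi-definite forces $\Pi(j\omega) \succeq 0$: at $s = j\omega$ the two outer factors are conjugate transposes of each other.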
The spectral function in (3.95) satisfies the equality $\Pi(s) = \Pi^T(-s)$, $s \in \mathbb{C}$. In addition, if the pair $(A, B)$ is controllable, then the inequality (3.95) implies$^7$ the solvability of the KYP Lemma set of equations, i.e., it is sufficient for (3.2) to possess a solution $(P = P^T, L, W)$. It is worth noting that, under minimality of $(A, B, C, D)$, the solvability of the KYP Lemma set of equations with $P = P^T \succ 0$ and the positive realness of $H(s) = C(sI_n - A)^{-1}B + D$ are equivalent. Let us notice that $\Pi(j\omega) = H(j\omega) + H^{\ast}(j\omega)$. Let us summarize some results about the relationships between the Lur'e equations solvability, PRness, and spectral function positivity:
[Diagram: implications and equivalences between $\Pi(j\omega) \succeq 0$, the solvability of the Lur'e equations, and the positive realness of $C(sI_n - A)^{-1}B + D$, with the conditions ($A$ Hurwitz), ($(A, B)$ controllable), ($D = 0$), and ($(A, B, C, D)$ minimal) labelling the arrows.]
The first equivalence is proved in [65, Theorem 9.5, p. 258], see Theorem 3.77 with $Q = 0$, $S = C^T$, and $R = D + D^T$. Notice that the second equivalence is stated under no other assumption than that all eigenvalues of $A$ have negative real parts (see Theorem 2.35 and Proposition 2.36). In particular, no minimality of $(A, B, C, D)$ is required.
The last implication shows that the KYP Lemma solvability is sufficient for PRness
of the transfer matrix, without minimality assumption [76], see Corollaries 3.40 and
3.41 in Sect. 3.3.4. In case of controllability, the equivalence is proved in [83, Lemma
3]. It is important to recall that “KYP Lemma equations solvability” does not mean
that P is positive definite, but only the existence of a solution (P = P T , L, W ). When
7 In fact, this can be given as the definition of a spectral function [65, Sect. 6.2].
Kalman also proved in [6] that the set $S_{\mathrm{unob}} = \{x \in \mathbb{R}^n \mid x^T P x = 0\}$ is the linear space of unobservable states of the pair $(C, A)$. Thus we see in passing that if $(C, A)$ is observable, then $S_{\mathrm{unob}} = \{0\}$, and using that $P$ is symmetric positive semi-definite we infer that $P$ has full rank, hence $P \succ 0$.
solution P is full-rank.
Let us give the proof of this result, quoted from [84, Proposition 1]. Let $z$ be such that $z^T P z = 0$. Since $P = P^T \succeq 0$, it follows that $Pz = 0$ (positive semi-definiteness is crucial for this to hold). It follows that $z^T(A^T P + PA)z = 0$. Since $-(A^T P + PA)$ is symmetric and, due to the KYP Lemma LMI, positive semi-definite, we obtain $(A^T P + PA)z = PAz = 0$. This means that $A\,\mathrm{Ker}(P) \subseteq \mathrm{Ker}(P)$, i.e., $\mathrm{Ker}(P)$ is an $A$-invariant subspace. Passivity implies that
$$(z^T\ \alpha w^T)\begin{pmatrix} A^T P + PA & PB - C^T \\ B^T P - C & -(D + D^T) \end{pmatrix}\begin{pmatrix} z \\ \alpha w \end{pmatrix} = -2\alpha w^T C z - \alpha^2 w^T(D + D^T)w \leq 0,$$
for all real $\alpha$ and all $w \in \mathbb{R}^m$. If $Cz \neq 0$, one can choose $\alpha$ and $w$ such that this inequality does not hold. Thus $Cz = 0$. This means that $\mathrm{Ker}(P) \subseteq \mathrm{Ker}(C)$. Since the unobservability subspace $\mathrm{Ker}(C) \cap \mathrm{Ker}(CA) \cap \dots \cap \mathrm{Ker}(CA^{n-1})$ is the largest $A$-invariant subspace contained in $\mathrm{Ker}(C)$, it follows that $\mathrm{Ker}(P)$ is contained in the unobservability subspace.
3.3 KYP Lemma for Non-minimal Systems 117
$$\int_0^t u^T(s)\Lambda(u(s))\,ds \geq 0,$$
where $\Lambda(t) = Ce^{At}B\,1(t) + B^T e^{-A^T t}C^T\,1(-t) + D\delta_t$ is the kernel (or extended impulse response) of the system $(A, B, C, D)$, $\delta_t$ is the Dirac measure with atom at $t$, and $1(\cdot)$ is the unit step function: $1(t) = 0$ if $t < 0$, $\frac{1}{2}$ if $t = 0$, and $1$ if $t > 0$. We remind that by "KYP Lemma equations", we mean the non-strict inequality in (3.3). The bottom inequality simply means that the system is passive in the sense of Definition 2.1, with $\beta = 0$ (this is named nonnegativity of the operator in [65]).
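The bottom inequality (passivity with $\beta = 0$) can be illustrated on the scalar realization $(A, B, C, D) = (-1, 1, 1, 1)$, whose transfer function $1/(s+1) + 1$ is PR. The sketch below (an exact zero-order-hold simulation; the system and the random input are illustrative choices, not from the text) checks that the cumulative supply $\int_0^t u y\,ds$ stays nonnegative from a zero initial state:

```python
import numpy as np

# Exact ZOH simulation of x' = -x + u, y = x + u, starting from x(0) = 0.
rng = np.random.default_rng(1)
dt, N = 0.01, 5000
u = rng.standard_normal(N)             # piecewise-constant random input
x, supply, cumulative = 0.0, 0.0, []
a = np.exp(-dt)
for k in range(N):
    uk = u[k]
    # closed-form integral of x over one step with constant input uk (A = -1)
    int_x = x * (1.0 - a) + uk * (dt - (1.0 - a))
    supply += uk * (int_x + uk * dt)   # int u y = u * (int x + u * dt) since y = x + u
    x = a * x + (1.0 - a) * uk
    cumulative.append(supply)

print(min(cumulative) >= -1e-10)       # True: the supplied energy never goes negative
```

Here the storage $V(x) = \frac{1}{2}x^2$ satisfies $uy - \dot{V} = x^2 + u^2 \geq 0$, so the supply is nonnegative pathwise, which the simulation confirms.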
Let us now state a result due to Meyer [91, Lemma 2], which requires neither controllability nor observability.
Lemma 3.30 (Meyer–Kalman–Yakubovich (MKY) Lemma) Given a scalar $D \geq 0$, vectors $B$ and $C$, an asymptotically stable matrix $A$, and a symmetric positive-definite matrix $L$, if
$$\mathrm{Re}[H(j\omega)] = \mathrm{Re}\left[\frac{D}{2} + C(j\omega I_n - A)^{-1}B\right] > 0 \quad \text{for all } \omega \in \mathbb{R}, \tag{3.98}$$
An application of the MKY Lemma is in Sect. 8.2.2. Let us finally recall that the monovariable LKY Lemma 3.14 for SPR transfer functions in Sect. 3.1.6.1 assumes only controllability (again with the constraint that $A$ is an exponentially stable matrix).
Let us now state and prove the above result about nonnegative operators. This was proved in [65, Theorem 3.1], and we reproduce it here (the proof uses an optimization problem, and was established in Faure's thesis [90]).
Theorem 3.31 Let $(A, B, C, D)$ be a realization of a system $\Lambda : u \mapsto \Lambda(u)$, with kernel $\Lambda(\cdot)$, with $A$ asymptotically stable and $(A, B)$ a controllable pair. Then $\int_{-\infty}^{t} u^T(s)\Lambda(u(s))\,ds \geq 0$ for all $t \geq 0$ and all admissible $u(\cdot)$, if and only if there exists $P = P^T \succeq 0$ such that
$$\begin{pmatrix} -A^T P - PA & -PB + C^T \\ -B^T P + C & D + D^T \end{pmatrix} \succeq 0.$$
Proof Let us recall that the kernel is given by $\Lambda(t, \tau) = Ce^{A(t-\tau)}B\,1(t-\tau) + B^T e^{A^T(t-\tau)}C^T\,1(t-\tau) + (D + D^T)\delta_{t-\tau}$, and that it satisfies $\Lambda(t, \tau) = \Lambda(t - \tau, 0)$.
We have $U^T\Lambda U = 2\int_{-\infty}^{t} u(\tau)^T\Lambda(u(\tau))\,d\tau$. Indeed, one computes that
$$\begin{aligned} U^T\Lambda U &= \int_{-\infty}^{+\infty} u(t)^T \int_{-\infty}^{t} \Lambda(t, \tau)u(\tau)\,d\tau\,dt + \int_{-\infty}^{+\infty} \int_{t}^{+\infty} u(\tau)^T\Lambda(\tau, t)u(t)\,d\tau\,dt \\ &= \int_{-\infty}^{t} u(s)^T\Lambda(u(s))\,ds + \int_{-\infty}^{+\infty} u(\tau)^T \int_{-\infty}^{\tau} \Lambda(\tau, t)u(t)\,dt\,d\tau. \end{aligned} \tag{3.101}$$
and $P^{\ast} = P^{\ast,T} \succeq 0$.$^9$ The proof consists in showing that $P^{\ast}$ satisfies the theorem's LMI. Let us associate with a $u(\cdot) \in \mathcal{E}(\xi)$ a controller $v(\cdot)$ defined as
$$v(\tau) = \begin{cases} u(\tau + \Delta t) & \text{if } \tau < -\Delta t \\ u_0 & \text{if } -\Delta t \leq \tau \leq 0, \end{cases} \tag{3.107}$$
9 Later in the book, we shall see that P ∗ defines the so-called required supply.
with arbitrary $u_0$. The controller $v(\cdot)$ belongs to $\mathcal{E}(\zeta)$, for any $\zeta$ to which the system is transferred at time $t = 0$ by $u(\cdot)$. Let $V$ be defined as in (3.103), and let $Q^{\ast}$, $S^{\ast}$ be associated with $P^{\ast}$ as above. Then
$$(V^T\Lambda V - \zeta^T P^{\ast}\zeta) - (U^T\Lambda U - \xi^T P^{\ast}\xi) = (\xi^T\ u_0^T)\begin{pmatrix} Q^{\ast} & S^{\ast} \\ S^{\ast,T} & R \end{pmatrix}\begin{pmatrix} \xi \\ u_0 \end{pmatrix}\Delta t + O(\Delta t^2). \tag{3.108}$$
Using the definition of $P^{\ast}$, we have $(V^T\Lambda V - \zeta^T P^{\ast}\zeta) \geq 0$, and $(U^T\Lambda U - \xi^T P^{\ast}\xi)$ can be made arbitrarily small with a suitable choice of $u(\cdot)$. Since the vectors $u_0$ and $\xi$ are arbitrary, one infers from (3.108) that $\begin{pmatrix} Q^{\ast} & S^{\ast} \\ S^{\ast,T} & R \end{pmatrix} \succeq 0$. Thus $P = P^{\ast}$ satisfies the requirements and the proof is finished.
Remark 3.32 Recall that $y(t) = \Lambda(u(t)) = \int_{-\infty}^{t}\Lambda(t, \tau)u(\tau)\,d\tau$. Notice further that $\int_{-\infty}^{t} u(s)^T y(s)\,ds = \int_0^t u(s)^T y(s)\,ds + \int_{-\infty}^0 u(s)^T y(s)\,ds$; hence, setting the constant $\beta \stackrel{\Delta}{=} -\int_{-\infty}^0 u(s)^T y(s)\,ds$, it follows that $\int_{-\infty}^{t} u(s)^T y(s)\,ds \geq 0$ is the same as $\int_0^t u(s)^T y(s)\,ds \geq \beta$. Hence $-\beta$ can be interpreted as the total amount of energy that has been injected into the system in past times (before $t = 0$), which makes up the initial system energy.
The first results that we present rely on the factorization of the Popov function and have been derived by Pandolfi and Ferrante [70, 71]. If $\Pi(s)$ is a rational matrix that is bounded on the imaginary axis and such that $\Pi(j\omega) \succeq 0$, then there exists a matrix $M(s)$ which is bounded in Re$[s] > 0$ and such that $\Pi(j\omega) = M^T(-j\omega)M(j\omega)$ (see Sect. A.6.7 for more details on factorizations of spectral functions). The matrix $M(s)$ of a spectral factorization has as many rows as the normal rank of $\Pi(s)$. The normal rank of a polynomial matrix $\Pi(s)$ is defined as the rank of $\Pi(s)$ considered as a rational matrix. If $\Pi(s) \in \mathbb{C}^{m \times m}$, and if $\det(\Pi(s))$ is not the zero function (for instance, if the determinant is equal to $s - 1$), then $\Pi(s)$ is said to have normal rank $m$. More generally, a polynomial matrix has normal rank $q$ if $q$ is the largest of the orders of its minors that are not identically zero [92, Sect. 6.3.1].
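Since the normal rank is attained at almost every $s \in \mathbb{C}$, it can be estimated numerically by evaluating the matrix at a few random complex points and taking the maximal rank. A small sketch (the two test matrices are illustrative):

```python
import numpy as np

def normal_rank(poly_matrix, trials=5):
    # The normal rank equals the rank at a generic point: evaluate at a few random
    # complex values of s and keep the maximum (a standard numerical proxy).
    rng = np.random.default_rng(2)
    r = 0
    for _ in range(trials):
        s = rng.standard_normal() + 1j * rng.standard_normal()
        r = max(r, int(np.linalg.matrix_rank(poly_matrix(s))))
    return r

# [[s - 1, 0], [0, 0]]: determinant identically zero, normal rank 1
print(normal_rank(lambda s: np.array([[s - 1.0, 0.0], [0.0, 0.0]])))  # 1
# [[s - 1, 0], [0, 1]]: determinant s - 1, not identically zero, normal rank 2
print(normal_rank(lambda s: np.array([[s - 1.0, 0.0], [0.0, 1.0]])))  # 2
```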
Let us consider an eigenvalue $s_0$ of $A$ and a Jordan chain of $s_0$, i.e., a finite sequence of vectors satisfying $Av_0 = s_0 v_0$, $Av_i = s_0 v_i + v_{i-1}$, $0 < i \leq r - 1$, where $r$ is the length of the Jordan chain. One has
$$e^{At}v_0 = e^{s_0 t}v_0, \qquad e^{At}v_k = e^{s_0 t}\sum_{i=0}^{k}\frac{t^i}{i!}\,v_{k-i}. \tag{3.109}$$
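Formula (3.109) can be checked on a single $2 \times 2$ Jordan block, for which $v_0$, $v_1$ are the canonical basis vectors (the values of $s_0$ and $t$ below are arbitrary illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

s0, t = -0.5, 0.7
A = np.array([[s0, 1.0], [0.0, s0]])   # one Jordan block: A v0 = s0 v0, A v1 = s0 v1 + v0
v0 = np.array([1.0, 0.0])
v1 = np.array([0.0, 1.0])

lhs = expm(A * t) @ v1                 # e^{At} v1
rhs = np.exp(s0 * t) * (v1 + t * v0)   # e^{s0 t}(v1 + t v0), i.e. (3.109) with k = 1
print(np.allclose(lhs, rhs))           # True
```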
One has
$$M_h = \frac{1}{h!}\left.\frac{d^h}{ds^h}M^T(-s)\right|_{s = s_0}. \tag{3.111}$$
In other words, $h!M_h$ is the $h$th derivative of the function $M^T(-s)$ calculated at $s = s_0$. All the matrices $M_{s_0,i}$, as well as the rational functions $\Pi(s)$ and $M(s)$, are computable from $A$, $B$, $C$, and $D$. The notation $\mathrm{col}[a_0, a_1, \dots, a_n]$ stands for the column matrix $[a_0\ a_1\ \dots\ a_n]^T$.
Theorem 3.33 ([70]) Let the matrices $M_{s_0,i}$ be constructed from any spectral factor of $\Pi(s)$, assume that $A$ is asymptotically stable, and let the transfer function $H(s)$ be positive real. Then there exist matrices $L$, $W$, and $P = P^T \succeq 0$ which solve the KYP Lemma set of equations (3.2), if and only if the following conditions hold for every Jordan chain $J_{s_0,i}$ of the matrix $A$:
For the proof (which is inspired by [93]), the reader is referred to the paper [70]. It is noteworthy that there is no minimality assumption in Theorem 3.33. However, $P$ is only positive semi-definite.
Example 3.34 ([70]) Let $C = 0$, $B = 0$, $D = 0$. Then $\Pi(s) = 0$, and the set of equations $A^T P + PA = -LL^T$, $PB = C^T - LW$ is solvable. One solution is $L = 0$, $P = 0$. This proves that Theorem 3.33 does not guarantee $P \succ 0$.
The second theorem relaxes the Hurwitz condition on $A$.
Theorem 3.35 ([71]) Let $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{m\times n}$, and $D \in \mathbb{R}^{m\times m}$. Assume that $\sigma(A) \cap \sigma(-A^T) = \emptyset$. If the KYP Lemma set of equations (3.2) is solvable, i.e., there exist matrices $P = P^T$, $L$, $W$ which solve it, then $\Pi(j\omega) \succeq 0$ for each $\omega$ and the condition (3.112) holds for every Jordan chain $J_{s_0,i}$ of the matrix $A$. Conversely, let $\Pi(j\omega)$ be nonnegative for each $\omega$ and let (3.112) hold for every Jordan chain of $A$. Then the set of equations (3.2) is solvable. Condition (3.112) does not depend on the specific spectral factor $M(s)$ of $\Pi(s)$.
A matrix $A$ satisfying $\sigma(A) \cap \sigma(-A^T) = \emptyset$ is said to be unmixed.
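Unmixedness is straightforward to test numerically: since $\sigma(-A^T) = -\sigma(A)$, it suffices to check that the spectrum of $A$ contains no pair $\{s, -s\}$ (and in particular no eigenvalue at $0$). A sketch (the helper name and tolerance are illustrative):

```python
import numpy as np

def is_unmixed(A, tol=1e-9):
    # sigma(-A^T) = -sigma(A), so A is unmixed iff no two eigenvalues sum to zero
    eigs = np.linalg.eigvals(A)
    gaps = np.abs(eigs[:, None] + eigs[None, :])   # |s_i + s_j| over all pairs (incl. i = j)
    return bool(gaps.min() > tol)

print(is_unmixed(np.diag([-1.0, -2.0, 3.0])))   # True: no {s, -s} pair in the spectrum
print(is_unmixed(np.diag([-1.0, 1.0])))          # False: eigenvalues -1 and 1
```

In particular, any Hurwitz matrix is unmixed, since all eigenvalue sums then have negative real part.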
Remark 3.36 Until now we have spoken only of controllability, and not of observability. Thus one might think that the unobservable part influences neither (3.95) nor the solvability of (3.2). Things are more subtle, as shown in the next section.
with $P_1 \succeq 0$ and $Q_1 Q_1^T = -2P_1$. However, the system of equations obtained by eliminating the unobservable subspace associated with $(A, C)$ has no solution, because the second equation for this reduced system takes the form $0 = I - 0$. This example shows that unobservability is not innocent in the KYP Lemma solvability (which is to be understood here as the existence of a triple $(P = P^T, L, W)$ that solves (3.2)).
with
$$A_1 = \begin{pmatrix} \bar{A}_1 & 0 \\ \tilde{A}_{21} & \tilde{A}_2 \end{pmatrix}, \qquad A_{21} = (\hat{A}_{21}\;\; 0), \qquad C_1 = (\tilde{C}_1\;\; 0).$$
One may check that $\sigma(A_2) \cap \sigma(-A_1^T) = \emptyset$. The matrix $B$ can be partitioned conformably with the partitioning of $A$ as $B = \begin{pmatrix} B_1 \\ B_2 \end{pmatrix}$. The image space of the matrix $(0\;\; I)^T$, where the identity matrix $I$ has the size of $A_2$, is unobservable for the pair $(A, C)$, and it is the largest unobservable subspace such that the corresponding dynamics does not intersect the backwards dynamics of the remaining part, i.e., $\sigma(A_2) \cap \sigma(-A_1^T) = \emptyset$. This space is named the unmixing unobservable subspace. The system $(A_1, B_1, C_1, D)$ obtained from $(A, B, C, D)$ by eliminating the part corresponding to the unmixing unobservable subspace is called the mixed+observable subsystem. When $A$ is unmixed, the mixed+observable subsystem is exactly the observable subsystem. In such a case, the unobservable part of the system plays no role in the solvability of the KYP Lemma set of equations (3.2).
Theorem 3.37 ([69]) Given a quadruple $(A, B, C, D)$, let $A$ be unmixed and let $(A_1, B_1, C_1, D)$ be the matrices associated with the observable subsystem. Then the KYP Lemma set of equations (3.2) possesses solutions $(P = P^T, L, W)$ if and only if the set of equations
$$\begin{cases} A_1^T P_1 + P_1 A_1 = -L_1 L_1^T \\ P_1 B_1 = C_1^T - L_1 W_1 \\ W_1^T W_1 = D + D^T \end{cases} \tag{3.113}$$
possesses solutions $(P_1 = P_1^T, L_1, W_1)$.
Once again, we insist on the fact that neither $P$ nor $P_1$ is required to be positive definite, or even positive semi-definite. The result of Theorem 3.37 relies on the unmixedness of $A$. However, the following holds true without this assumption.
Theorem 3.38 ([69]) The KYP Lemma set of equations (3.2) possesses solutions
(P = P T , L, W ), if and only if (3.113) possesses solutions.
Sign controllability has also been used in [81, 94] to analyze the existence of solutions to the Lur'e equations. It is shown in [94] that the sign controllability of $(A, B)$, plus the nonnegativity of the spectral Popov function ($\Pi(j\omega) \succeq 0$), is not sufficient to guarantee the solvability of the Lur'e equations.
The result presented in this subsection also relies on a decomposition of the state
space into uncontrollable and unobservable subspaces. It was proposed in [75, 95].
Let us start from a system with realization (A, B, C), A ∈ Rn×n , B ∈ Rn×m , C ∈ Rp×n .
The Kalman controllability and observability matrices are denoted as Kc and Ko ,
respectively. The state space of the linear invariant system (A, B, C) is given by the
direct sum
X = X1 ⊕ X2 ⊕ X3 ⊕ X4 ,
Theorem 3.39 ([75, 95]) Let $(A, B, C)$ be a realization of the rational matrix $H(s)$. Let $K \in \mathbb{R}^{n\times n}$ be any matrix satisfying
$$X_1 \oplus X_2 \subseteq \mathrm{sp}(K) \subseteq X_1 \oplus X_2 \oplus X_3. \tag{3.114}$$
Then $H(s)$ is positive real if and only if there exist real matrices $P = P^T \succeq 0$ and $L$ such that
$$\begin{cases} K^T(PA + A^T P + LL^T)K = 0 \\ K^T(PB - C^T) = 0. \end{cases} \tag{3.115}$$
If $B$ has full column rank, then $H(s)$ is positive real if and only if there exist real matrices $P = P^T$ and $L$, with $K^T P K \succeq 0$, such that
$$\begin{cases} K^T(PA + A^T P + LL^T)K = 0 \\ PB - C^T = 0. \end{cases} \tag{3.116}$$
Corollary 3.40 ([75]) Let $(A, B, C)$ be a realization of the rational matrix $H(s) \in \mathbb{C}^{m\times m}$. Then $H(s)$ is positive real if there exist matrices $P = P^T \succeq 0$ and $L$ such that the Lur'e equations in (3.2) hold.
Corollary 3.41 ([75]) Let $(A, B, C, D)$ be a realization of the rational matrix $H(s) \in \mathbb{C}^{m\times m}$. Let $K$ be any matrix satisfying (3.114). Then $H(s)$ is positive real if and only if there exist $P = P^T \succeq 0$, $L$, and $W$ such that
$$\begin{cases} K^T(PA + A^T P + LL^T)K = 0 \\ K^T(PB - C^T + LW) = 0 \\ D + D^T = W^T W. \end{cases} \tag{3.117}$$
We infer that
$$\begin{pmatrix} A^T P + PA & PB - C^T \\ B^T P - C & -(D + D^T) \end{pmatrix} \preceq 0 \ \Longrightarrow\ \begin{pmatrix} K^T(A^T P + PA)K & K^T(PB - C^T) \\ (B^T P - C)K & -(D + D^T) \end{pmatrix} \preceq 0, \tag{3.119}$$
with $P = P^T \succeq 0$ in both cases. We may name the right-hand LMI the $K$-Lur'e equations, or the $K$-KYP Lemma equations. If $K$ is invertible (which is the case if $(A, B)$ is controllable or if $D = 0$), the equivalence holds.
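The implication in (3.119) is a congruence argument: $x^T(T^T M T)x = (Tx)^T M (Tx) \leq 0$ whenever $M \preceq 0$, with $T = \mathrm{diag}(K, I)$. A quick numerical illustration with random matrices (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
X = rng.standard_normal((n + m, n + m))
M = -X @ X.T                              # a negative semi-definite "Lur'e" matrix
K = rng.standard_normal((n, n))           # any (possibly singular) K
T = np.block([[K, np.zeros((n, m))], [np.zeros((m, n)), np.eye(m)]])

MK = T.T @ M @ T                          # the K-Lur'e matrix of (3.119)
nsd = bool(np.all(np.linalg.eigvalsh((MK + MK.T) / 2) <= 1e-9))
print(nsd)                                # True: congruence preserves M <= 0
```

Note that the converse direction needs $T$ (hence $K$) invertible, exactly as stated above.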
It is noteworthy that an improved version of the above results has been published in [96]. Let $K_c$ still denote Kalman's controllability matrix.
Lemma 3.42 ([96, GCTPR Lemma]) Let $(A, B, C, D)$ be a realization (not necessarily minimal) of $H(s) \in \mathbb{C}^{m\times m}$. Then $H(s)$ is positive real if and only if there exist real matrices $L$, $W$, and $P = P^T$ with $K_c^T P K_c \succeq 0$ such that
$$\begin{cases} K_c^T(A^T P + PA + L^T L)K_c = 0 \\ K_c^T(PB - C^T + L^T W) = 0 \\ D + D^T - W^T W = 0. \end{cases} \tag{3.120}$$
The next result is taken from [68]. Let us consider the system in (3.1) and suppose
(A, B, C, D) is a minimal realization, m ≤ n. Suppose that H (s) + H T (−s) has rank
m almost everywhere in the complex plane, i.e., it has normal rank m (this avoids
redundant inputs and outputs). The following lemma gives us a general procedure
to generate uncontrollable equivalent realizations from two minimal realizations of
a given transfer matrix H (s). The uncontrollable modes should be similar and the
augmented matrices should be related by a change of coordinates as explained next.
where the dimensions of A01 and A02 are the same. Moreover, there exists a nonsin-
gular matrix T0 such that A01 = T0 A02 T0−1 and C01 = C02 T0−1 . Then (Āi , B̄i , C̄i , D̄i ),
i = 1, 2 are two equivalent realizations.
where the dimensions of $A_{01}$ and $A_{02}$ are the same. Moreover, there exists a nonsingular matrix $T_0$ such that $A_{01} = T_0 A_{02} T_0^{-1}$ and $B_{01} = T_0 B_{02}$. Then $\bar{\Sigma}_i(\bar{A}_i, \bar{B}_i, \bar{C}_i, \bar{D}_i)$ for $i = 1, 2$ are two equivalent realizations of $H(s)$.
which implies that $(A + \mu I_n)$ is Hurwitz, and thus $Z(s - \mu)$ is analytic in Re$[s] \geq 0$. Define now for simplicity $\Phi(s) \stackrel{\Delta}{=} (sI_n - A)^{-1}$. Therefore,
$$\begin{aligned} H(s-\mu) + H^T(-s-\mu) &= D + D^T + C\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)C^T \\ &= W^T W + \big(B^T P + W^T L\big)\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)\big(PB + L^T W\big) \\ &= W^T W + W^T L\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)L^T W \\ &\qquad + B^T P\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)PB \\ &= W^T W + W^T L\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)L^T W \\ &\qquad + B^T\Phi^T(-s-\mu)\big[\Phi^{-T}(-s-\mu)P + P\,\Phi^{-1}(s-\mu)\big]\Phi(s-\mu)B \\ &= W^T W + W^T L\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)L^T W \\ &\qquad + B^T\Phi^T(-s-\mu)\big[\big(-(s+\mu)I - A^T\big)P + P\big((s-\mu)I - A\big)\big]\Phi(s-\mu)B \\ &= W^T W + W^T L\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)L^T W \\ &\qquad + B^T\Phi^T(-s-\mu)\big[-2\mu P - A^T P - PA\big]\Phi(s-\mu)B \\ &= W^T W + W^T L\Phi(s-\mu)B + B^T\Phi^T(-s-\mu)L^T W \\ &\qquad + B^T\Phi^T(-s-\mu)\big[L^T L + (\varepsilon - 2\mu)P\big]\Phi(s-\mu)B \\ &= \big(W^T + B^T\Phi^T(-s-\mu)L^T\big)\big(W + L\Phi(s-\mu)B\big) \\ &\qquad + (\varepsilon - 2\mu)\,B^T\Phi^T(-s-\mu)P\,\Phi(s-\mu)B. \end{aligned} \tag{3.125}$$
From the above it follows that $H(j\omega - \mu) + H^T(-j\omega - \mu) \succeq 0$ for all $\omega \in [-\infty, +\infty]$, and $H(s)$ is SPR.
Necessity: Assume that $H(s)$ is SPR. Let $\bar{\Sigma}(\bar{A}, \bar{B}, \bar{C}, \bar{D})$ be a stabilizable and observable realization of $H(s)$, and $\Sigma(A, B, C, D)$ a minimal realization of $H(s)$. Given that the controllable and uncontrollable modes are different, we can consider that the matrix $\bar{A}$ is block diagonal, and therefore $H(s)$ can be written as
$$H(s) = (C\;\; C_0)\begin{pmatrix} sI_n - A & 0 \\ 0 & sI - A_0 \end{pmatrix}^{-1}\begin{pmatrix} B \\ 0 \end{pmatrix} + D = C(sI_n - A)^{-1}B + D. \tag{3.126}$$
$$\bar{A}_\varepsilon = \bar{A} + \tfrac{\varepsilon}{2}I \in \mathbb{R}^{(n+n_0)\times(n+n_0)}, \qquad A_\varepsilon = A + \tfrac{\varepsilon}{2}I \in \mathbb{R}^{n\times n}, \qquad A_{0\varepsilon} = A_0 + \tfrac{\varepsilon}{2}I \in \mathbb{R}^{n_0\times n_0}. \tag{3.127}$$
Note that $\bar{A}_\varepsilon$ is also block diagonal, with blocks $A_\varepsilon$ and $A_{0\varepsilon}$, and the eigenvalues of $A_\varepsilon$ and $A_{0\varepsilon}$ are different. Let $\Sigma_\varepsilon(A_\varepsilon, B, C, D)$ be a minimal realization of $U(s)$ and $\bar{\Sigma}_\varepsilon(\bar{A}_\varepsilon, \bar{B}, \bar{C}, \bar{D})$ an observable and stabilizable realization of $U(s)$. Therefore,
$$U(s) = C(sI_n - A_\varepsilon)^{-1}B + D = \bar{C}(sI_{n+n_0} - \bar{A}_\varepsilon)^{-1}\bar{B} + \bar{D}. \tag{3.128}$$
Remark 3.46 Here the assumption that $Z(s) + Z^T(-s)$ has normal rank $m$ is implicitly used; otherwise, the matrix $V(s)$ would be of dimension $r \times m$, where $r$ is the normal rank of $Z(s) + Z^T(-s)$.
Although we will not require the minimality of $\Sigma_{V^T(-s)V(s)}$ in the sequel, it can be proved to follow from the minimality of $\Sigma_V(F, G, H, J)$, see [16, 97]. Let us now define a non-minimal realization of $V(s)$, obtained from $\Sigma_V(F, G, H, J)$, as follows:
$$\bar{F} = \begin{pmatrix} F & 0 \\ 0 & F_0 \end{pmatrix}, \qquad \bar{G} = \begin{pmatrix} G \\ 0 \end{pmatrix}, \qquad \bar{H} = (H\;\; H_0), \qquad \bar{J} = J, \tag{3.131}$$
and such that $F_0$ is similar to $A_{0\varepsilon}$ and the pair $(H_0, F_0)$ is observable, i.e., there exists $T_0$ nonsingular such that $F_0 = T_0 A_{0\varepsilon} T_0^{-1}$. This constraint will be clarified later on. Since $\sigma(F_0) \cap \sigma(F) = \emptyset$, the pair
$$(\bar{H}, \bar{F}) = \left((H\;\; H_0),\ \begin{pmatrix} F & 0 \\ 0 & F_0 \end{pmatrix}\right) \tag{3.132}$$
is observable. Thus the non-minimal realization $\bar{\Sigma}_V(\bar{F}, \bar{G}, \bar{H}, \bar{J})$ of $V(s)$ is observable and stabilizable.
Now a non-minimal realization of $V^T(-s)V(s)$, based on $\bar{\Sigma}_V(\bar{F}, \bar{G}, \bar{H}, \bar{J})$, is
$$\Sigma_{V^T(-s)V(s)}\left(\begin{pmatrix} \bar{F} & 0 \\ \bar{H}^T\bar{H} & -\bar{F}^T \end{pmatrix},\ \begin{pmatrix} \bar{G} \\ \bar{H}^T\bar{J} \end{pmatrix},\ \big(\bar{J}^T\bar{H}\;\; -\bar{G}^T\big),\ \bar{J}^T\bar{J}\right). \tag{3.133}$$
From the diagonal structure of the above realization, it could be concluded that the eigenvalues of $F_0$ correspond to uncontrollable modes, and the eigenvalues of $-F_0^T$ correspond to unobservable modes. A constructive proof is given below. Since the pair $(\bar{H}, \bar{F})$ is observable and $\bar{F}$ is stable, there exists a positive-definite matrix
$$\bar{K} = \bar{K}^T = \begin{pmatrix} K & r \\ r^T & K_0 \end{pmatrix} \succ 0, \tag{3.135}$$
This explains why we imposed the constraint that $(H_0, F_0)$ should be observable. Otherwise, there would not exist a positive-definite solution of (3.136). Define $T \stackrel{\Delta}{=} \begin{pmatrix} I & 0 \\ \bar{K} & I \end{pmatrix}$, $T^{-1} = \begin{pmatrix} I & 0 \\ -\bar{K} & I \end{pmatrix}$, and use it as a change of coordinates for the non-minimal realization $\Sigma_{V^T(-s)V(s)}$ above, to obtain
$$\Sigma_{V^T(-s)V(s)} = \left(\begin{pmatrix} \bar{F} & 0 \\ 0 & -\bar{F}^T \end{pmatrix},\ \begin{pmatrix} \bar{G} \\ (\bar{J}^T\bar{H} + \bar{G}^T\bar{K})^T \end{pmatrix},\ \big(\bar{J}^T\bar{H} + \bar{G}^T\bar{K}\;\; -\bar{G}^T\big),\ \bar{J}^T\bar{J}\right), \tag{3.137}$$
where $\bar{F} = \mathrm{diag}(F, F_0)$ and $\bar{G} = (G^T\ 0)^T$ as in (3.131).
$$\Sigma_{U(s)+U^T(-s)}\left(\begin{pmatrix} \bar{A}_\varepsilon & 0 \\ 0 & -\bar{A}_\varepsilon^T \end{pmatrix},\ \begin{pmatrix} \bar{B} \\ \bar{C}^T \end{pmatrix},\ \big(\bar{C}\;\; -\bar{B}^T\big),\ \bar{D} + \bar{D}^T\right). \tag{3.138}$$
Using (3.129), we conclude that the stable (respectively unstable) parts of the realizations of $U(s) + U^T(-s)$ and $V^T(-s)V(s)$ are identical. Therefore, in view of the block diagonal structure of the system, and considering only the stable part, we have
$$\begin{cases} \bar{F} = \begin{pmatrix} F & 0 \\ 0 & F_0 \end{pmatrix} = R\bar{A}_\varepsilon R^{-1} = R\begin{pmatrix} A_\varepsilon & 0 \\ 0 & A_{0\varepsilon} \end{pmatrix}R^{-1} \\[2mm] \bar{G} = \begin{pmatrix} G \\ 0 \end{pmatrix} = R\bar{B} = R\begin{pmatrix} B \\ 0 \end{pmatrix} \\[2mm] \bar{J}^T\bar{H} + \bar{G}^T\bar{K} = \bar{C}R^{-1} = (C\;\; C_0)R^{-1} \\[1mm] \bar{J}^T\bar{J} = \bar{D} + \bar{D}^T. \end{cases} \tag{3.139}$$
The above relationships impose that the uncontrollable parts of the realizations of $U(s)$ and $V(s)$ be similar. This is why we imposed that $F_0$ be similar to $A_{0\varepsilon}$ in the construction of the non-minimal realization of $V(s)$. From the Lyapunov equation (3.136), and using $\bar{F} = R\bar{A}_\varepsilon R^{-1}$ in (3.139), we get
$$\begin{cases} \bar{K}\bar{F} + \bar{F}^T\bar{K} = -\bar{H}^T\bar{H} \\ \bar{K}R\bar{A}_\varepsilon R^{-1} + R^{-T}\bar{A}_\varepsilon^T R^T\bar{K} = -\bar{H}^T\bar{H} \\ R^T\bar{K}R\bar{A}_\varepsilon + \bar{A}_\varepsilon^T R^T\bar{K}R = -R^T\bar{H}^T\bar{H}R \\ P\bar{A}_\varepsilon + \bar{A}_\varepsilon^T P = -L^T L, \end{cases} \tag{3.140}$$
where we have used the definitions $P \stackrel{\Delta}{=} R^T\bar{K}R$ and $L \stackrel{\Delta}{=} \bar{H}R$. Introducing (3.127), we get the first equation of (3.123). From the second equation of (3.139), we have $\bar{G} = R\bar{B}$. From the third equation in (3.139), and using $W = \bar{J}$, we get
$$\begin{cases} \bar{J}^T\bar{H} + \bar{G}^T\bar{K} = \bar{C}R^{-1} \\ \bar{J}^T\bar{H}R + \bar{G}^T R^{-T}R^T\bar{K}R = \bar{C} \\ W^T L + \bar{B}^T P = \bar{C} \\ P\bar{B} = \bar{C}^T - L^T W, \end{cases} \tag{3.141}$$
which is the second equation of (3.123). Finally, from the last equation of (3.139), we get the last equation of (3.123), because $W = \bar{J}$.
Example 3.47 Consider $H(s) = \frac{s+a}{(s+a)(s+b)}$, for some $a > 0$, $b > 0$, $b \neq a$. Let a non-minimal realization of $H(s)$ be
$$\begin{cases} \dot{x}(t) = \begin{pmatrix} -a & 0 \\ 0 & -b \end{pmatrix}x(t) + \begin{pmatrix} 0 \\ \frac{1}{\alpha} \end{pmatrix}u(t) \\[2mm] y(t) = (\beta\;\; \alpha)\,x(t). \end{cases} \tag{3.142}$$
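Reading the garbled display (3.142) as $A = \mathrm{diag}(-a, -b)$, $B = (0,\ 1/\alpha)^T$, $C = (\beta\ \ \alpha)$ (an interpretation stated here as an assumption), the first mode is uncontrollable and cancels, so that $C(sI - A)^{-1}B = \frac{1}{s+b}$ for any $\alpha \neq 0$ and any $\beta$. A quick numerical check (parameter values are arbitrary illustrative choices):

```python
import numpy as np

a, b, alpha, beta = 1.0, 2.0, 3.0, 5.0    # arbitrary parameters with a, b > 0, a != b
A = np.array([[-a, 0.0], [0.0, -b]])
B = np.array([[0.0], [1.0 / alpha]])
C = np.array([[beta, alpha]])

s = 0.4 + 0.9j
H = (C @ np.linalg.solve(s * np.eye(2) - A, B))[0, 0]
print(np.isclose(H, 1.0 / (s + b)))       # True: the first mode cancels, H(s) = 1/(s+b)
```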
Let us continue this section on relaxed KYP Lemmas with a brief exposition of the results in [72, 73]. The notion of positive real pairs has been introduced in [73, Definition 7].
Definition 3.49 Let $P(\cdot)$ and $Q(\cdot)$ be $n \times n$ matrices whose elements are polynomial functions. The pair $(P, Q)$ is said to be a positive real pair if:
1. $P(s)Q(\bar{s})^T + Q(s)P(\bar{s})^T \succeq 0$ for all Re$[s] \geq 0$.
2. rank$[(P\ -Q)(s)] = n$ for all Re$[s] \geq 0$.
3. Let $p$ be an $n$-vector of polynomials, and $s \in \mathbb{C}$. If $p^T(PQ + QP) = 0$ and $p(s)^T(P\ -Q)(s) = 0$, then $p(s) = 0$.
Some comments are in order. In the following, passivity is understood as $\int_{t_0}^{t_1} u(s)^T y(s)\,ds \geq \beta$ for some $\beta$, all $t_1 \geq t_0$, and all admissible $u(\cdot)$. The passivity of the controllable part of the system implies item 1. The stability of the observable part of the system implies item 2; so does the stabilizability of the system. The condition in item 3 implies that if the transfer function is lossless positive real, then the system is controllable (see Remark 3.52). If $Q$ is invertible, then $H(s) = Q(s)^{-1}P(s)$ is PR. In this case, item 1 is equivalent to PRness. Thus item 1 extends PRness to the case where $Q$ is singular.
Theorem 3.50 ([73, Theorem 9]) Let the system be described in input/output form as the set of $(u, y) \in \mathcal{L}_{2,e}(\mathbb{R}; \mathbb{R}^n) \times \mathcal{L}_{2,e}(\mathbb{R}; \mathbb{R}^n)$ such that $P\!\left(\frac{d}{dt}\right)y = Q\!\left(\frac{d}{dt}\right)u$, for some $n \times n$ matrices $P$ and $Q$ whose elements are polynomial functions. Then the system is passive if and only if $(P, Q)$ is a positive real pair in the sense of Definition 3.49.
We know that PRness is not sufficient for the system to be passive: a system may be PR while the existence of oscillatory uncontrollable modes prevents the existence of a constant $\beta$ such that passivity holds. The conditions in items 2 and 3 guarantee that this cannot happen for positive real pairs. Now let us consider the
state space representation ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t), and let the
external behavior of the system be the set (u, y) as in Theorem 3.50. Let us denote
K0 = col(C, CA, . . . , CAn−1 ) the Kalman’s observability matrix.
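The observability matrix $K_0$ is easy to form numerically; a minimal sketch (the system matrices below are illustrative, not taken from the text):

```python
import numpy as np

# Illustrative pair (C, A): A in R^{2x2}, C in R^{1x2}.
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
C = np.array([[1.0, 1.0]])

def kalman_observability(A, C):
    """Stack C, CA, ..., CA^{n-1}, i.e. K_0 = col(C, CA, ..., CA^{n-1})."""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

K0 = kalman_observability(A, C)
print(np.linalg.matrix_rank(K0))  # full rank 2: the pair (C, A) is observable
```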
Theorem 3.51 ([72, Theorem 10]) The next statements are equivalent:

1. The external (input/output) behavior of the system takes the form $P(\frac{d}{dt})y = Q(\frac{d}{dt})u$, for some $n \times n$ matrices $P$ and $Q$ whose elements are polynomial functions, and $(P, Q)$ is a positive real pair.
2. There exists $P = P^{T} \succeq 0$ such that the Lur'e equations $\begin{pmatrix} -A^{T}P - PA & C^{T} - PB \\ C - B^{T}P & D + D^{T} \end{pmatrix} \succeq 0$ hold.
3. The storage function $V_a(x) = \sup_{x(t_0)=x,\, t_1 \geq t_0} -\int_{t_0}^{t_1} u(s)^{T} y(s)\, ds$ satisfies $V_a(x) = \frac{1}{2} x^{T} P_{-} x$, with: (a) $P_{-} = P_{-}^{T} \succeq 0$, (b) $\begin{pmatrix} -A^{T}P_{-} - P_{-}A & C^{T} - P_{-}B \\ C - B^{T}P_{-} & D + D^{T} \end{pmatrix} \succeq 0$, (c) $K_0 z = 0 \Longleftrightarrow P_{-} z = 0$, (d) all other solutions $P = P^{T} \succeq 0$ of the Lur'e equations satisfy $P \succeq P_{-}$.
4. $V_a(x) < +\infty$ for all $x \in \mathbb{R}^n$.
The storage function $V_a(\cdot)$ will later be named the available storage, see Definition 4.37, see also Theorem 4.43. One also sees that condition (c) of item 3 is close to item 7 in Proposition 3.62, which is itself close to a result shown by Kalman in [6]: they all relate observability to the rank of $P$. It is noteworthy that no controllability nor observability assumption has been made in Theorem 3.51.
Remark 3.52 It is important to insist here on the role played by uncontrollable oscillatory modes (which correspond to uncontrollable purely imaginary pole/zero cancellations). The sign-controllability assumption made in Sect. 3.3.3 allows one to avoid such modes (since sign controllability implies that purely imaginary modes are controllable), and the definition of positive real pairs does something quite similar. Actually, sign controllability implies the property in item 2 of Definition 3.49, but the reverse implication does not hold.
Let us end this section on relaxed KYP Lemmas for non-minimal realizations with a result which somewhat extends some of the above ones (like Corollaries 3.40 and 3.41). Let $H(s) \in \mathbb{C}^{m \times m}$, with a realization $(A, B, C, D)$, $A \in \mathbb{R}^{n \times n}$, which is not necessarily minimal. Let us recall that the transfer function $H(s)$ is said to be generalized PR if $H(j\omega) + H^{T}(-j\omega) \succeq 0$ for all $\omega \in \mathbb{R}$. Then the following holds.
Proposition 3.53 ([99, Proposition 4.2]) Suppose that the Lur'e equations for the (not necessarily minimal) realization $(A, B, C, D)$ hold for some matrix $P = P^{T}$, i.e., $\begin{pmatrix} -A^{T}P - PA & C^{T} - PB \\ C - B^{T}P & D + D^{T} \end{pmatrix} \succeq 0$. Then, (i) if the matrix $\mathrm{diag}(-P, I_m)$ has $\nu$ eigenvalues with negative real part, $\nu \in \{0, \ldots, n\}$, no eigenvalue with zero real part, and the remaining eigenvalues with positive real parts, the transfer matrix $H(s)$ is generalized PR with at most $\nu$ poles with negative real parts, and $n - \nu$ poles with positive real parts. (ii) If $P \succ 0$ ($\Rightarrow \nu = n$), then $H(s)$ is positive real.
The role played by diag(−P, Im ) is clear from (3.4). Generalized PR transfer func-
tions are sometimes called pseudo-positive. The KYP Lemma for generalized PR
transfer matrices has been studied in [83, 100], see Sect. 3.3.1.
3.4 Recapitulation
Let us make a short summary of the relationships between various properties (BR is
for bounded real, PR is for positive real). The next diagrams may guide the reader
throughout the numerous results and definitions which are given in the book. The
equivalences and implications can be understood with the indicated theorems or
propositions. Let us start the recapitulation with the following (SBR is for Strict
Bounded Real):
[Diagram] The chain of implications
$$\mathrm{VSP} \Longleftrightarrow \mathrm{SSPR} \Longrightarrow \mathrm{SPR} \Longrightarrow \mathrm{WSPR} \Longrightarrow \mathrm{PR},$$
together with further implications toward OSP, ISP, strict state passivity, and NI systems. The indicated results are Lemma 2.80, Theorem 2.81, Lemma 2.82, Lemma 2.91, Theorem 2.101, Remark 2.2, Example 4.71, Theorem 4.73, Lemma 4.75, and Sect. 5.4.
In the next diagram, we recall the links between bounded real, positive real, spectral functions, passivity, and Lur'e equations. Equivalences or implications hold under the conditions stated in the indicated theorems, lemmas, and corollaries, which the reader is invited to consult for more details.
Recall that in general (no minimality assumption on the realization, or no Hurwitz $A$), the positive real condition is only necessary for the Lur'e equations to hold with $P = P^{T} \succeq 0$ (though the arrows in the above-framed table could let the reader think that equivalence always holds). Equivalence holds if the definition of PRness is extended in a suitable way [72, 73], see Sect. 3.3.6.
3.4 Recapitulation 135
[Diagram] $G(s) \in$ BR; $H(s) \in$ PR $\Longleftrightarrow$ K-Lur'e equations (Corollaries 3.40 and 3.41, Theorem 2.35, Proposition 2.36); $\Pi(j\omega) \succeq 0 \Longleftrightarrow$ Lur'e equations (Theorem 3.77, with $(A, B)$ controllable or strict inequalities, see Sect. 3.1.1); Lur'e equations (time-varying, non-linear) $\Longleftrightarrow$ Passive in Definitions 4.23, 4.26, 4.31 (Theorems 4.27, 4.33, 4.34, 4.105, Lemmae 3.66, 4.106); Non-negative operator; Lur'e equations $\Longleftrightarrow$ Passive in Definition 2.1 (Theorem 3.31); Positive-real pairs (Definition 3.49, Theorem 3.50).
The KYP Lemma for noncontrollable systems is especially important for the design
of feedback controllers with state observers [101–104], where the closed-loop system
may not be controllable. This may be seen as the extension of the works described in
Sect. 2.14.3 in the case where an observer is added to guarantee that the closed-loop
is SPR.
Theorem 3.54 ([101, 102]) Consider a system with transfer function H (s) ∈ Cm×m ,
and its state space realization
$$
\begin{cases}
\dot{x}(t) = Ax(t) + Bu(t) \\
y(t) = Cx(t),
\end{cases}
\qquad (3.143)
$$
The modes associated with the matrix (A − LC) are noncontrollable. The case of
unstable matrix A is solved in [102].
The negative imaginary lemma is the counterpart of the KYP Lemma, for negative
imaginary systems introduced in Sect. 2.15.
Lemma 3.55 ([105, Lemma 7] [106, Lemma 1] [107, Lemma 8] [108, Corollary
5]) Let $(A, B, C, D)$ be a minimal realization of the transfer function $H(s) \in \mathbb{C}^{m \times m}$. Then $H(s)$ satisfies items (1)–(4) of Definition 2.98 (i), if and only if
1. $\det(A) \neq 0$, $D = D^{T}$,
3.6 The Negative Imaginary Lemma 137
The transfer function H (s) is lossless NI in the sense of Definition 2.98 (iv) if and
only if
1. $\det(A) \neq 0$, $D = D^{T}$,
2. there exists a matrix P = P T 0 such that
The transfer function H (s) is strictly NI in the sense of Definition 2.98 (ii), if and
only if
1. A is Hurwitz, D = DT , rank(B) = rank(C) = m,
2. there exists a matrix P = P T 0 such that
3. H (s) − H T (−s) has no transmission zeroes on the imaginary axis, except possibly
at s = 0.
Remark 3.56 Suppose that the system $(A, B, C, D)$ is lossless NI, with vector relative degree $r = (2, \ldots, 2)^{T} \in \mathbb{R}^m$. Then $D = 0$, $CB = 0$, and $CAB = -CA^{2}PC^{T} = CAPA^{T}C^{T} \succeq 0$ (and $\succ 0$ if $C$ has full row rank, since both $A$ and $P$ have full rank, which is thus necessary and sufficient for $r = (2, \ldots, 2)^{T}$).
The above assumes the minimality of the realization. Just as the KYP Lemma can
be extended without minimality as we saw in Sect. 3.3, one has the following.
The conditions in item (1) (without the regularity condition on A) are shown in [110,
Lemma 2] to be necessary and sufficient for H (s) to be NI in the sense of Definition
2.98 (2’) (3) (4) (5). Notice that the Lyapunov equation in the above conditions
Lemma 3.58 ([111, SSNI Lemma 1]) Let $H(s) \in \mathbb{C}^{m \times m}$, with realization $(A, B, C, D)$, $(C, A)$ observable, and $H(s) + H^{T}(-s)$ of full normal rank $m$. Then $A$ is Hurwitz and $H(s)$ satisfies

1. $H(\infty) = H(\infty)^{T}$,
2. $H(s)$ has no poles in $\mathrm{Re}[s] \geq 0$,
3. $j(H(j\omega) - H^{*}(j\omega)) \succ 0$ for all $\omega \in (0, +\infty)$,
4. $\lim_{\omega \to \infty} j\omega (H(j\omega) - H^{*}(j\omega)) \succ 0$,
5. $\lim_{\omega \to 0} j\frac{1}{\omega}(H(j\omega) - H^{*}(j\omega)) \succ 0$,

if and only if

1. $D = D^{T}$,
2. there exists $P = P^{T} \succ 0$ such that $AP + PA^{T} \prec 0$ and $B + APC^{T} = 0$.
This class of NI systems is used in a Lur’e problem framework for absolute stability
of positive feedback interconnections with slope-restricted static nonlinearities, in
[112, 113].
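The state space conditions of Lemma 3.58 and the NI frequency condition can be checked numerically on a toy example; the matrices below are our own illustrative choice, constructed so that $P = I$ works:

```python
import numpy as np

# Our illustrative data: A Hurwitz, D = 0 = D^T, and P = I satisfies
# A P + P A^T < 0 together with B + A P C^T = 0 (item 2 of Lemma 3.58).
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
C = np.array([[1.0, 1.0]])
P = np.eye(2)
B = -A @ P @ C.T                     # enforces B + A P C^T = 0
D = np.zeros((1, 1))

assert np.max(np.linalg.eigvalsh(A @ P + P @ A.T)) < 0   # A P + P A^T < 0
assert np.allclose(B + A @ P @ C.T, 0)

def H(s):
    """Transfer function C (sI - A)^{-1} B + D (scalar here)."""
    return (C @ np.linalg.solve(s * np.eye(2) - A, B) + D).item()

# Condition 3: j(H(jw) - H(jw)^*) > 0 on (0, inf); for a scalar H this
# quantity equals -2 Im H(jw) and is real.
for w in [0.1, 1.0, 10.0]:
    assert (1j * (H(1j * w) - np.conj(H(1j * w)))).real > 0
print("state space and frequency-domain NI conditions hold")
```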
Finally, let us take advantage of this presentation to state the next result, which makes the link between NI systems and dissipativity.
The feedback KYP Lemma is an extension of the KYP Lemma, when one considers a controller of the form $u(t) = Kx(t)$. This is quite related to the material of Sect. 2.14.3: what are the conditions under which a system can be made passive (or PR) in closed loop? Let us consider the system
$$
\begin{cases}
\dot{x}(t) = Ax(t) + Bu(t) \\
y(t) = Cx(t),
\end{cases}
\qquad (3.149)
$$
with the usual dimensions and where all matrices are real.
the zeros of the polynomial $\det \begin{pmatrix} A - \lambda I_n & B \\ C & 0 \end{pmatrix}$ are in the closed left half-plane, and all the pure imaginary eigenvalues of the matrix pencil $R(\lambda) = \begin{pmatrix} A & B \\ C & 0 \end{pmatrix} - \lambda \begin{pmatrix} I_n & 0 \\ 0 & 0 \end{pmatrix}$ have only linear elementary divisors $\lambda - j\omega$;
• (vii) the matrix $CB = (CB)^{T} \succ 0$, the pair $(A, B)$ is stabilizable, and the system (3.149) is weakly minimum phase.
Both matrix equations in (i) and (iii) are bilinear matrix inequalities (BMIs). The
feedback KYP Lemma extends to systems with a direct feedthrough term y = Cx +
Du. It is noteworthy that Theorem 3.61 holds for multivariable systems. If u(t) =
Kx(t) + v(t), then (i) means that the operator v → y is SPR. It is known that this
control problem is dual to the SPR observer design problem [120]. Related results are
in [121]. We recall that a system is said to be weakly minimum phase if its zero dynamics is Lyapunov stable. The zero dynamics can be explicitly written when the system is
written in a special coordinate basis as described in [122–124]. The particular choice
for K after item (iv) means that the system can be stabilized by output feedback.
More work may be found in [125]. The stability analysis of dynamic output feedback
systems with a special formulation of the KYP Lemma has been carried out in [103].
The problem of design of PR systems with an output feedback has been also tackled
in [126, Theorem 4.1] [99, Proposition 8.1].
Let us consider the Lur’e equations in (3.2). As we have seen in Sect. 3.1.1 (Corollar-
ies 3.3, 3.4, 3.5, comments in-between these corollaries and in Remark 3.6), a system
which satisfies the Lur’e equations also satisfies a dissipation inequality (there is
equivalence). There are also relationships between the Lur’e equations solvability
and the PRness of the transfer function (or matrix in the MIMO case). The material
that follows is taken from Camlibel and coauthors in [84, 127]. Let us first recall
that a system (or quadruple) (A, B, C, D) is passive in the sense that it satisfies a
dissipation inequality as (2.3),^{10} if and only if the LMIs
$$
\begin{pmatrix} A^{T}P + PA & PB - C^{T} \\ B^{T}P - C & -(D + D^{T}) \end{pmatrix} \preceq 0, \qquad P = P^{T} \succeq 0, \qquad (3.150)
$$
have a solution $P$. This can be shown along the lines in Sect. 3.1.1. There is no minimality requirement on $(A, B, C, D)$ for this equivalence to hold. It is also easy to see that there is no need to impose $P \succeq 0$, as only the symmetry of $P$ plays a role in writing the dissipation inequality. In what follows we therefore say that $(A, B, C, D)$ is
passive if along its trajectories, for all $t_0 \leq t_1$ and admissible inputs, there exists $V: \mathbb{R}^n \to \mathbb{R}_+$ such that $V(x(t_1)) - V(x(t_0)) \leq \int_{t_0}^{t_1} u^{T}(s) y(s)\, ds$.
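For a concrete example, the LMI characterization can be verified directly; the scalar system and the candidate $P$ below are illustrative:

```python
import numpy as np

# Toy passive system (our choice): xdot = -x + u, y = x + u.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])
P = np.array([[1.0]])                  # candidate solution of (3.150)

M = np.block([[A.T @ P + P @ A, P @ B - C.T],
              [B.T @ P - C,     -(D + D.T)]])
assert np.max(np.linalg.eigvalsh(M)) <= 1e-12    # M <= 0
assert np.min(np.linalg.eigvalsh(P)) >= 0.0      # P = P^T >= 0
print("the LMI (3.150) holds, so the realization is passive")
```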
Proposition 3.62 ([84, 127]) Suppose that the system (A, B, C, D) is passive, and
let P be a solution of the LMIs (3.150). Then the following statements are true:
1. $D \succeq 0$.
2. $w^{T}(D + D^{T})w = 0 \Rightarrow C^{T}w = PBw$.
3. $\mathrm{Ker}\begin{pmatrix} C^{T} \\ D + D^{T} \end{pmatrix} = \mathrm{Ker}\begin{pmatrix} PB \\ D + D^{T} \end{pmatrix}$.
4. $w^{T}(D + D^{T})w = 0 \Rightarrow w^{T}CBw = w^{T}B^{T}PBw \geq 0$.
5. $z^{T}(A^{T}P + PA)z = 0 \Rightarrow Cz = B^{T}Pz$.
6. $A\,\mathrm{Ker}(P) \subseteq \mathrm{Ker}(P)$.
7. $\mathrm{Ker}(P) \subseteq \mathrm{Ker}(C) \cap \mathrm{Ker}(CA) \cap \ldots \cap \mathrm{Ker}(CA^{n-1})$.
8. $w \in \mathrm{Ker}(PB) \Rightarrow H(s)w = Dw$, where $H(s) = C(sI_n - A)^{-1}B + D$.
9. $\mathrm{Ker}(H(s) + H^{T}(s)) = \mathrm{Ker}\begin{pmatrix} PB \\ D + D^{T} \end{pmatrix}$ for all $s \in \mathbb{R}$, $s > 0$, which are not an eigenvalue of $A$.
Proof
1. Follows from the KYP Lemma LMI and $P \succeq 0$, using Lemma A.70.
2. Let $w$ satisfy $w^{T}(D + D^{T})w = 0$.^{11} Notice that
$$
z^{T}(A^{T}P + PA)z + 2\alpha z^{T}(PB - C^{T})w = (z^{T} \;\; \alpha w^{T}) \begin{pmatrix} A^{T}P + PA & PB - C^{T} \\ B^{T}P - C & -(D + D^{T}) \end{pmatrix} \begin{pmatrix} z \\ \alpha w \end{pmatrix} \leq 0.
$$
10 Later in the book, we will embed this into Willems’ dissipativity, see Definition 4.21.
11 If this is satisfied for all w, then D + DT is skew-symmetric and we recover that PB = C T .
3.8 Structural Properties of Passive LTI Systems 141
Since $\alpha$ and $z$ are arbitrary, the left-hand side can be made positive unless $(PB - C^{T})w = 0$.
3. Follows from item 2.
4. Follows from item 2.
5. Let $z$ be such that $z^{T}(A^{T}P + PA)z = 0$. Notice that
$$
(z^{T} \;\; \alpha w^{T}) \begin{pmatrix} A^{T}P + PA & PB - C^{T} \\ B^{T}P - C & -(D + D^{T}) \end{pmatrix} \begin{pmatrix} z \\ \alpha w \end{pmatrix} = 2\alpha z^{T}(PB - C^{T})w - \alpha^{2} w^{T}(D + D^{T})w \leq 0.
$$
Since α and w are arbitrary, the right-hand side can be made positive unless
z T (PB − C T ) = 0.
6. Let $z$ be such that $z^{T}Pz = 0$. Since $P = P^{T} \succeq 0$, it follows that $Pz = 0$ (the positive semi-definiteness is crucial for this to hold). It follows that $z^{T}(A^{T}P + PA)z = 0$. Since $-(A^{T}P + PA)$ is symmetric, and due to the KYP Lemma LMI it is $\succeq 0$, we obtain $(A^{T}P + PA)z = PAz = 0$. This means that $A\,\mathrm{Ker}(P) \subseteq \mathrm{Ker}(P)$.
7. Continuing the foregoing item: passivity implies that
$$
(z^{T} \;\; \alpha w^{T}) \begin{pmatrix} A^{T}P + PA & PB - C^{T} \\ B^{T}P - C & -(D + D^{T}) \end{pmatrix} \begin{pmatrix} z \\ \alpha w \end{pmatrix} = -2\alpha w^{T}Cz - \alpha^{2} w^{T}(D + D^{T})w \leq 0,
$$
for all real $\alpha$ and all $w \in \mathbb{R}^m$. If $Cz \neq 0$, one can choose $\alpha$ and $w$ such that this inequality does not hold. Thus $Cz = 0$. This means that $\mathrm{Ker}(P) \subseteq \mathrm{Ker}(C)$. Since the unobservability subspace $\mathrm{Ker}(C) \cap \mathrm{Ker}(CA) \cap \ldots \cap \mathrm{Ker}(CA^{n-1})$ is the largest $A$-invariant subspace^{12} that is contained in $\mathrm{Ker}(C)$, we obtain $\mathrm{Ker}(P) \subseteq \mathrm{Ker}(C) \cap \mathrm{Ker}(CA) \cap \ldots \cap \mathrm{Ker}(CA^{n-1})$.
8. If $w \in \mathrm{Ker}(PB)$ then $Bw$ has to belong to $\mathrm{Ker}(P)$, which is contained in the unobservability subspace $\mathrm{Ker}(C) \cap \mathrm{Ker}(CA) \cap \ldots \cap \mathrm{Ker}(CA^{n-1})$. This means that $C(sI_n - A)^{-1}Bw = \sum_{k=0}^{+\infty} s^{-(k+1)} CA^{k}Bw = 0$, where we used the Cayley–Hamilton theorem.
9. Let $s \in \mathbb{R}$, $s > 0$, $s \notin \sigma(A)$. Let $w$ be such that $PBw = 0$ and $(D + D^{T})w = 0$. Due to the foregoing item, one has $w^{T}(H(s) + H^{T}(s))w = 0$. From passivity it follows that $H(s) + H^{T}(s) \succeq 0$, which implies that $(H(s) + H^{T}(s))w = 0$. This means that $\mathrm{Ker}(H(s) + H^{T}(s)) \supseteq \mathrm{Ker}\begin{pmatrix} PB \\ D + D^{T} \end{pmatrix}$. The reverse inclusion holds: let $w \in \mathrm{Ker}(H(s) + H^{T}(s))$ and define $z = (sI - A)^{-1}Bw$. Notice that $Az + Bw = sz$, and
$$
(z^{T} \;\; w^{T}) \begin{pmatrix} A^{T}P + PA & PB - C^{T} \\ B^{T}P - C & -(D + D^{T}) \end{pmatrix} \begin{pmatrix} z \\ w \end{pmatrix}
$$
5. There exists an $\alpha_J > 0$ such that $\mu(D_{JJ} + C_{J\bullet}B_{\bullet J}\sigma^{-1}) \geq \alpha_J \sigma^{-1}$ for all sufficiently large $\sigma$, where $\mu(A) = \lambda_{\min}(\frac{1}{2}(A + A^{T}))$ for any square matrix $A$.
6. $s^{-1}(D_{JJ} + C_{J\bullet}B_{\bullet J}s^{-1})^{-1}$ is proper.
Remark 3.64 Items 4 and 5 of Proposition 3.62, and items 3, 4, and 5 of Proposition 3.63, somewhat extend the fact that when $PB = C^{T}$ (from the KYP Lemma equations when $D = 0$), the Markov parameter satisfies $CB = B^{T}PB \succeq 0$. Under an additional rank condition, it is even $\succ 0$. Item 4 in Proposition 3.62 implies that $\mathrm{Ker}(D + D^{T}) \subseteq \mathrm{Ker}(PB - C^{T})$.
Remark 3.65 The passivity of the system $(kI_n, B, C, D)$ for some $k \in \mathbb{R}$ implies that $D \succeq 0$ and $\mathrm{Ker}(D + D^{T}) \subseteq \mathrm{Ker}(PB - C^{T})$ [128, 129]. The reverse implication (hence equivalence) is proved in [130].
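Items 6 and 7 of Proposition 3.62 can be illustrated numerically with a singular solution $P$; the system below is our own toy example:

```python
import numpy as np

# Toy passive system (our choice) with a singular LMI solution P = diag(1, 0).
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[1.0]])
P = np.diag([1.0, 0.0])

# First check that P solves the LMI (3.150).
M = np.block([[A.T @ P + P @ A, P @ B - C.T],
              [B.T @ P - C,     -(D + D.T)]])
assert np.max(np.linalg.eigvalsh(M)) <= 1e-12

z = np.array([0.0, 1.0])                     # spans Ker(P)
assert np.allclose(P @ (A @ z), 0.0)         # item 6: A Ker(P) inside Ker(P)
assert np.allclose(C @ z, 0.0)               # item 7: Ker(P) unobservable ...
assert np.allclose(C @ A @ z, 0.0)           # ... C z = 0 and C A z = 0
print("Ker(P) is A-invariant and unobservable, as Proposition 3.62 predicts")
```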
with $x(t_0) = x_0$, and where the functions $A(\cdot)$, $B(\cdot)$, $C(\cdot)$, and $D(\cdot)$ are supposed to be piecewise continuous, and $D(t) \succeq \varepsilon I_m$, $\varepsilon \geq 0$. It is assumed that all $(t, x)$ with $t > t_0$ are reachable from $(t_0, 0)$, and that the system is zero-state observable (such controllability and observability conditions may be checked via the controllability and observability grammians, see, e.g., [131]). It is further assumed that the required supply is continuously differentiable in both $t$ and $x$, whenever it exists (the required supply is a quantity that will be defined in Definition 4.38; the reader may just want to consider this as a regularity condition on the system (3.151)). The system (3.151) is supposed to be well-posed, see Theorem 3.90, and it defines an operator $\Lambda: u(t) \mapsto y(t)$. The kernel of $\Lambda(\cdot)$ is given by $\Lambda(t, r) = C(t)\Phi(t, r)B(r)\mathbf{1}(t - r) + B^{T}(t)\Phi^{T}(r, t)C^{T}(t)\mathbf{1}(r - t) + R(t)\delta_{t-r}$, where $\mathbf{1}(t) = 0$ if $t < 0$, $\mathbf{1}(t) = \frac{1}{2}$ if $t = 0$, and $\mathbf{1}(t) = 1$ if $t > 0$, $R(t) = D(t) + D^{T}(t)$, $\delta_t$ is the Dirac measure at $t$, and $\Phi(\cdot, \cdot)$ is the transition matrix of $A(t)$, i.e., $\Phi(t, r) = X(t)X^{-1}(r)$ for all $t$ and $r$, where $\frac{dX}{dt} = A(t)X(t)$. The kernel plays the role of the transfer function for time-varying systems. Then $\Lambda(u(t)) = \int_{-\infty}^{t} \Lambda(t, r)u(r)\, dr$. The next lemma is taken from [65, Theorem 7.6], where it is presented as a corollary of Lemma 4.106. This problem was solved in [132], see also [133, 134].
Lemma 3.66 Let the above assumptions hold (in particular $R(t) = D(t) + D^{T}(t) \succeq 0$ for all times). The operator $\Lambda(\cdot)$ is nonnegative if and only if there exists an almost everywhere continuously differentiable function $P(\cdot) = P^{T}(\cdot) \succeq 0$ such that on $(t_0, t)$
$$
\begin{pmatrix} Q(t) & S(t) \\ S^{T}(t) & R(t) \end{pmatrix} \succeq 0, \qquad (3.152)
$$
where
$$
\begin{cases}
\dot{P}(t) + A^{T}(t)P(t) + P(t)A(t) = -Q(t) \\
C^{T}(t) - P(t)B(t) = S(t).
\end{cases}
\qquad (3.153)
$$
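In the LTI case with constant $P(\cdot)$, one has $\dot{P} = 0$ and (3.153) gives $Q = -(A^{T}P + PA)$, $S = C^{T} - PB$, so that (3.152) reduces to the familiar KYP Lemma LMI. A quick numerical check (toy data, our choice):

```python
import numpy as np

# Constant (LTI) data: xdot = -x + u, y = x + u, with constant P = 1, Pdot = 0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])
P = np.array([[1.0]])

Q = -(A.T @ P + P @ A)      # first equation of (3.153) with Pdot = 0
S = C.T - P @ B             # second equation of (3.153)
R = D + D.T
M = np.block([[Q, S], [S.T, R]])
assert np.min(np.linalg.eigvalsh(M)) >= 0.0      # (3.152) holds
print("(3.152)-(3.153) reduce to the usual KYP LMI in the LTI case")
```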
We will now study the stability properties of positive real or strictly positive real
systems when they are connected in negative feedback. We will consider two PR
systems H1 : u1 → y1 and H2 : u2 → y2 . H1 is in the feedforward path and H2 is
in the feedback path (i.e., u1 = −y2 and u2 = y1 ). The stability of the closed-loop
system is concluded in the following lemma when H1 is PR and H2 is weakly SPR.
Proof Let us define the following state space representation for system $H_1$:
$$
\begin{cases}
\dot{x}_1(t) = A_1 x_1(t) + B_1 u_1(t) \\
y_1(t) = C_1 x_1(t) + D_1 u_1(t),
\end{cases}
\qquad (3.154)
$$
and
$$
\bar{H}_2(s) = W_2 + L_2^{T}(sI_n - A_2)^{-1}B_2, \qquad (3.158)
$$
has no zeros on the $j\omega$-axis. Consider the following positive definite function: $V_i(x_i) = x_i^{T}P_i x_i$, $i = 1, 2$. Then using (3.155) and (3.157),
3.10 Interconnection of PR Systems 145
where we have used the fact that $2u_i^{T}D_i u_i = u_i^{T}(D_i + D_i^{T})u_i = u_i^{T}W_i^{T}W_i u_i$. Define $\bar{y}_i = L_i^{T}x_i + W_i u_i$ and $V(x) = V_1(x_1) + V_2(x_2)$, then
Then
$$
\int_0^{t} \bar{y}_2^{T}(s)\bar{y}_2(s)\, ds \leq V(0). \qquad (3.161)
$$
The material of this section is taken from [135, 136]. As we have already pointed
out in Sect. 3.1.4, strong links exist between dissipativity and optimal control. In this
section, more details are provided. Close results were also obtained by Yakubovich
[9, 137, 138].
Let us start with some general considerations which involve some notions which
have not yet been introduced in this book, but will be introduced in the next chapter
(actually, the only missing definitions are those of a storage function and a supply
rate: the reader may thus skip this part and come back to it after having read Chap. 4).
The notions of dissipation inequality and of a storage function have been introduced
(without naming them) in (2.3), where the function V (·) is a so-called storage function
and is a function of the state x(·) (and is not an explicit function of time). Let us
consider the following minimization problem:
$$
V_f(x_0) \stackrel{\Delta}{=} \min_{u \in \mathcal{L}_{2,e}} \int_0^{+\infty} w(u(s), x(s))\, ds \qquad (3.162)
$$
with
$$
w(u, x) = u^{T}Ru + 2u^{T}Cx + x^{T}Qx \qquad (3.163)
$$
$$
\frac{\partial V_f}{\partial x}(x)(Ax + Bu) + w(u, x) \geq 0, \quad \text{for all } x \in \mathbb{R}^n,\ u \in \mathbb{R}^m. \qquad (3.165)
$$
One realizes immediately by rewriting (3.164) as the dissipation inequality
$$
-V_f(x(0)) \geq -V_f(x(t_1)) - \int_0^{t_1} w(u(t), x(t))\, dt \qquad (3.166)
$$
that −Vf (·) plays the role of a storage function with respect to the supply rate
$-w(u, x)$. Let us end this section by making a small digression on the following well-known fact: why is the optimal function in (3.162) a function of the initial state? To see this intuitively, let us consider the minimization problem
$$
\inf_{u \in \mathcal{U}} \int_0^{+\infty} (u^2(t) + x^2(t))\, dt \qquad (3.167)
$$
3.11 Positive Realness and Optimal Control 147
subject to $\dot{x}(t) = u(t)$, $x(0) = x_0$. Let $\mathcal{U}$ consist of smooth functions. Then finiteness of the integral in (3.167) implies that $\lim_{t \to +\infty} x(t) = 0$. Take any constant $a \in \mathbb{R}$. Then
$$
\int_0^{+\infty} 2a x(t)u(t)\, dt = \int_0^{+\infty} 2a x(t)\dot{x}(t)\, dt = \int_0^{+\infty} \frac{d}{dt}\big[a x^2(t)\big]\, dt = \big[a x^2(t)\big]_0^{+\infty} = -a x_0^2. \qquad (3.168)
$$
So indeed $\inf_{u \in \mathcal{U}} \int_0^{+\infty} (u^2(t) + x^2(t))\, dt$ is a function of the initial state.
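This digression can also be checked numerically: with $a = 1$, the identity $u^2 + x^2 = (u + x)^2 - 2xu$ together with (3.168) shows that every admissible $u$ has cost $\int_0^{+\infty}(u + x)^2\, dt + x_0^2$, so the infimum $x_0^2$ (a function of $x_0$ only) is attained by the feedback $u = -x$. A sketch of the check (the feedback and the value $x_0^2$ are derived here by completion of squares, not taken from the text):

```python
import numpy as np

def cost_of_feedback(x0, T=20.0, dt=1e-4):
    """Cost of the candidate optimal feedback u = -x for xdot = u, x(0) = x0."""
    t = np.arange(0.0, T, dt)
    x = x0 * np.exp(-t)              # closed-loop solution of xdot = -x
    u = -x
    f = u**2 + x**2                  # integrand of (3.167)
    return float(np.sum(f[1:] + f[:-1]) * dt / 2.0)   # trapezoidal rule

for x0 in [1.0, 2.0, -3.0]:
    assert abs(cost_of_feedback(x0) - x0**2) < 1e-3   # infimum x0^2 attained
print("cost of u = -x equals x0^2, a function of the initial state only")
```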
The above facts were proved by Molinari [139], who considered four types of optimal control problems, including (3.165) (3.166), in a slightly broader context (it is not assumed that $w(u, x) \geq 0$, but just that the integral over $[t_1, t_2]$ exists when $t_2 \to +\infty$). In the next lemma, one just considers $\int_{t_1}^{+\infty}$ instead of $\int_0^{+\infty}$ in (3.165), which does not change the problem since our system is time-invariant. Also $u_{[t_1, t_2]}$ means all controls that are piecewise continuous on $[t_1, t_2]$, for any $t_1$ and $t_2$.
Lemma 3.68 ([139, Lemma 1]) If the problem (3.165) (3.166) is well defined, then the resulting optimal cost function $V_f(\cdot)$ satisfies the (so-called) normalized dissipation inequality (NDI): $V_f(0) = 0$ and $V_f(x(t_1)) \leq \int_{t_1}^{t_2} w(u(s), x(s))\, ds + V_f(x(t_2))$. Furthermore, $V_f(x(t_1)) = \inf_{u_{[t_1, t_2]}} \left\{ \int_{t_1}^{t_2} w(u(s), x(s))\, ds + V_f(x(t_2)) \right\}$.
Proof (we reproduce the proof from [139]) If an admissible $u_{[0,+\infty)}$ gives a cost $\alpha < 0$ for the state $x(t_1) = 0$, then $ku_{[0,+\infty)}$ gives the cost $k^2\alpha$. Considering large $k$ shows that the cost has no lower bound, a contradiction. Considering $u_{[0,+\infty)} \equiv 0$ provides $V_f(0) = 0$. Now consider any $u_{[t_1, t_2]}$ and any admissible $u_{[t_2, +\infty)}$. The concatenated function $u_{[t_1, +\infty)}$ is admissible and by definition of $V_f(\cdot)$ we get $V_f(x(t_1)) \leq \int_{t_1}^{t_2} w(u(s), x(s))\, ds + \int_{t_2}^{+\infty} w(u(s), x(s))\, ds$. Ranging over all admissible $u_{[t_2, +\infty)}$ provides the NDI. The second part is proved as follows. Consider any admissible $u_{[t_1, +\infty)}$ and any interval $[t_1, t_2]$. Directly $\int_{t_1}^{t_2} w(u(s), x(s))\, ds + V_f(x(t_2)) \leq \int_{t_1}^{+\infty} w(u(s), x(s))\, ds$. The left-hand side is certainly not less than the infimum over all $u_{[t_1, t_2]}$, and combined with the NDI this provides the inequalities: $V_f(x(t_1)) \leq \inf_{u_{[t_1, t_2]}} \left\{ \int_{t_1}^{t_2} w(u(s), x(s))\, ds + V_f(x(t_2)) \right\} \leq \int_{t_1}^{+\infty} w(u(s), x(s))\, ds$. Ranging over all admissible inputs $u_{[t_1, +\infty)}$ provides the result.
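The scaling step at the start of the proof ($u \mapsto ku$ multiplies the cost by $k^2$ when the initial state is zero) is easy to reproduce numerically, since for $x(0) = 0$ the state depends linearly on the input; the system and quadratic form below are illustrative:

```python
import numpy as np

def quad_cost(k, T=10.0, dt=1e-3):
    """Cost int w(u,x) dt for xdot = -x + u, x(0) = 0, u(t) = k sin t,
    with the quadratic form w(u,x) = u^2 + 2*u*x + 3*x^2 (our choice)."""
    t = np.arange(0.0, T, dt)
    u = k * np.sin(t)
    x = np.zeros_like(t)
    for i in range(len(t) - 1):          # explicit Euler integration
        x[i + 1] = x[i] + dt * (-x[i] + u[i])
    w = u**2 + 2 * u * x + 3 * x**2
    return float(np.sum(w) * dt)

base = quad_cost(1.0)
for k in [2.0, 5.0]:
    # x is linear in u, so the quadratic cost scales exactly as k^2
    assert abs(quad_cost(k) - k**2 * base) < 1e-8 * k**2
print("cost scales as k^2, as used in the proof of Lemma 3.68")
```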
We have already pointed out the relationship which exists between the linear matrix inequality in the KYP Lemma (see Sect. 3.1.4) and optimal control, through the construction of a Riccati inequality that is equivalent to the linear matrix inequality (LMI) in (3.3). This section is devoted to deepening such relationships. First of all, let us introduce (or re-introduce) the following algebraic tools:
where $s \in \mathbb{C}$ and $\bar{s}$ is its complex conjugate. Notice that $H(\bar{s}, s)$ can be rewritten as
$$
H(\bar{s}, s) = \begin{pmatrix} (\bar{s}I_n - A)^{-1}B \\ I_m \end{pmatrix}^{T} \begin{pmatrix} Q & C^{T} \\ C & R \end{pmatrix} \begin{pmatrix} (sI_n - A)^{-1}B \\ I_m \end{pmatrix}. \qquad (3.173)
$$
where $x(j\omega, u)$ is defined from $j\omega x = Ax + Bu$, i.e., $x(j\omega, u) = (j\omega I_n - A)^{-1}Bu$. See, for instance, Theorem 3.77 for more information on the spectral function and its link with the KYP Lemma set of equations (some details have already been given in Sect. 3.3.1, see (3.96) (3.97)). One sometimes calls any triple of matrices $A$, $B$, and $\begin{pmatrix} Q & C^{T} \\ C & R \end{pmatrix}$ a Popov triple.
Remark 3.70 In the scalar case, the ARE (3.171) becomes a second-order equation $aG^2 + bG + c = 0$ with real coefficients. It is clear that without assumptions on $a$, $b$, and $c$ there may be no real solutions. Theorem A.60 in Appendix A.4 states conditions under which an ARE as in (3.171) possesses a real solution.
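In the scalar case the existence question is elementary: $aG^2 + bG + c = 0$ has a real solution iff the discriminant $b^2 - 4ac$ is nonnegative (when $a \neq 0$). A minimal sketch with illustrative coefficients:

```python
import math

def real_are_solutions(a, b, c):
    """Real solutions of the scalar ARE a*G^2 + b*G + c = 0 (a != 0 assumed)."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                         # no real solution exists
    r = math.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])

assert len(real_are_solutions(1.0, -6.0, 1.0)) == 2   # G^2 - 6G + 1 = 0: two roots
assert real_are_solutions(1.0, 0.0, 1.0) == []        # G^2 + 1 = 0: none
print("real solvability is decided by the discriminant")
```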
We will denote the inequality in (3.164) as the DIE (for dissipation inequality), keeping in mind that the real dissipation inequality is in (3.166). Let us introduce the following optimal control problems, with $w(x, u)$ in (3.166):
$$
V^{+}(x_0) \stackrel{\Delta}{=} \min_{u \in \mathcal{L}_{2,e}} \int_0^{+\infty} w(u(s), x(s))\, ds, \quad \lim_{t \to +\infty} x(t) = 0, \qquad (3.175)
$$
$$
V^{-}(x_0) \stackrel{\Delta}{=} -\min_{u \in \mathcal{L}_{2,e}} \int_0^{+\infty} w(u(s), x(s))\, ds, \quad \lim_{t \to +\infty} x(t) = 0, \qquad (3.176)
$$
$$
V_n(x_0) \stackrel{\Delta}{=} \min_{u \in \mathcal{L}_{2,e},\, t \geq 0} \int_0^{t} w(u(s), x(s))\, ds. \qquad (3.177)
$$
These four problems (i.e., (3.165), (3.175)–(3.177)) are subject to the dynamics
ẋ(t) = Ax(t) + Bu(t), with initial data x(0) = x0 .
Assumption 3 We assume that the pair (A, B) is controllable throughout Sect. 3.11.2.
Therefore, this assumption will not be repeated. One notes that the four functions in (3.165), (3.175), (3.176), and (3.177) are quadratic functions of the state $x_0$. Let us summarize a few facts:
• $V_n(\cdot) \leq 0$ (take $t = 0$ in (3.177) to deduce that the minimum cannot be positive).
• $V_n(\cdot) \leq V_f(\cdot) \leq V^{+}(\cdot)$: indeed, if the scalar $\int_0^{t} w(u(s), x(s))\, ds$ sweeps a certain domain in $\mathbb{R}$ while $t \geq 0$, then the scalar $\int_0^{+\infty} w(u(s), x(s))\, ds$ must belong to this domain. And similarly, if the scalar $\int_0^{+\infty} w(u(s), x(s))\, ds$ sweeps a certain domain while $u \in \mathcal{L}_{2,e}$, then the scalar $\int_0^{+\infty} w(u(s), x(s))\, ds$ subject to the limit condition must lie inside this domain.
• $V_n(\cdot) < +\infty$, $V_f(\cdot) < +\infty$, $V^{+}(\cdot) < +\infty$: by controllability the integral of $w(u, x)$ is bounded whatever the final (bounded) state, so the infimum is bounded.
• $V^{-}(\cdot) > -\infty$: note that
$$
-\min_{u \in \mathcal{L}_{2,e}} \int_0^{+\infty} w(u(s), x(s))\, ds = \max_{u \in \mathcal{L}_{2,e}} \left( -\int_0^{+\infty} w(u(s), x(s))\, ds \right).
$$
• If there exists a feedback controller $u(x)$ such that $w(u(x), x) \leq 0$ and such that $\dot{x}(t) = Ax(t) + Bu(x(t))$ has an asymptotically stable fixed point $x = 0$, then $V_n(\cdot) = V_f(\cdot) = V^{+}(\cdot)$.
• If $w(u, x) = u^{T}y$ and an output $y = Cx + Du$ is defined, then the optimal control problem corresponds to a problem where the dissipated energy is to be minimized.
• If $w(x, 0) \geq 0$ then the functions $V(\cdot)$ which satisfy the DIE in (3.164) define Lyapunov function candidates, since $-V(\cdot)$ is then nonincreasing along the (uncontrolled) system's trajectories, as (3.166) shows.
The second part of the last item is satisfied provided the system is asymptotically
stabilizable, which is the case if (A, B) is controllable. The first part may be satisfied
if R = 0, Q = 0, and the matrix A + BC is Hurwitz. The first part of the last-but-one
item is satisfied if R = 0, Q = 0 (take u = −Cx).
Lemma 3.71 Let $R \succeq 0$. For quadratic functions $V(x) = x^{T}Gx$, $G = G^{T}$, the DIE in (3.164) is equivalent to the LMI in (3.169).
The LMI follows from (3.178). Then the proof is as in Sect. 3.1.4.
Let us now present some theorems which show how the LMI, the ARI, the ARE,
and the FDI are related one to each other and to the boundedness properties of the
functions Vf (·), V + (·). The proofs are not provided entirely for the sake of brevity.
In what follows, the notation $V(\cdot) > -\infty$ and $V(\cdot) < +\infty$ means that the function $V: \mathbb{R}^n \to \mathbb{R}$ is bounded (from below and from above, respectively) for bounded argument. In other words, given $x_0$ bounded, $V(x_0)$ is bounded. The controllability of $(A, B)$ is sufficient for the optimum to be bounded [92, p. 229].
Proof Let us prove the last two items. If there exists a solution $G = G^{T}$ to the LMI, then
$$
\begin{pmatrix} -(I_n\bar{s} - A^{T})G - G(I_n s - A) & GB + C^{T} \\ B^{T}G + C & R \end{pmatrix} \succeq \begin{pmatrix} -2\sigma G & 0 \\ 0 & 0 \end{pmatrix} \qquad (3.179)
$$
with $s = \sigma + j\omega$, $\sigma \in \mathbb{R}$, $\omega \in \mathbb{R}$, and $\bar{s} = \sigma - j\omega$. Postmultiplying by $\begin{pmatrix} (I_n s - A)^{-1}B \\ I_m \end{pmatrix}$ and premultiplying by $(B^{T}(I_n\bar{s} - A^{T})^{-1} \;\; I_m)$, one obtains
From the first item, and since $\sigma \geq 0$, one sees that indeed (3.180) implies the FDI (as $G$ is negative semi-definite).
One recognizes that $A^{+}$ and $A^{-}$ are the closed-loop transition matrices, corresponding to a stabilizing optimal feedback in the case of $A^{+}$. $G^{+}$ is called the stabilizing solution of the ARE. $V^{+}(\cdot)$ and $V^{-}(\cdot)$ are in (3.175) and (3.176), respectively. It is noteworthy that, if in the first assertion of the theorem one looks for a negative semi-definite solution of the ARE, then the equivalence has to be replaced by "only if". In such a case, the positivity of the Popov function is only a necessary condition. One can already conclude from the above results that the set of solutions to the KYP Lemma conditions (3.2) possesses a minimum solution $P^{-} = -G^{+}$ and a maximum solution $P^{+} = -G^{-}$ when $D + D^{T} \succ 0$, and that all the other solutions $P \succeq 0$ of the ARE satisfy $-G^{+} \preceq P \preceq -G^{-}$. The last two items tell us that if the ARE has a solution $G^{-} \prec 0$, then the optimal controller asymptotically stabilizes the system. In this case $\lim_{t \to +\infty} x(t) = 0$, so that indeed $V_f(\cdot) = V^{+}(\cdot)$.
The function −Vf (·) corresponds to what we shall call the available storage (with
respect to the supply rate w(x, u)) in Chap. 4. The available storage will be shown
to be the minimum solution to the ARE, while the maximum solution will be called
the required supply. Also dissipativity will be characterized by the available storage
being finite for all x ∈ X and the required supply being lower bounded. The material
in this section brings some further light on the relationships that exist between optimal
control and dissipative systems theory. We had already pointed out a connection in
Sect. 3.1.4. Having in mind that what we call a dissipative linear invariant system is
a system which satisfies a dissipation equality as in (3.5), we can rewrite Theorem
3.73 as follows.
Theorem 3.75 ([142]) Suppose that the system $(A, B, C, D)$ in (3.1) is controllable and observable and that $D + D^{T}$ is full rank. Then, the ARE has a real symmetric nonnegative definite solution if and only if the system in (3.1) is dissipative with respect to the supply rate $w(u, y) = u^{T}y$. If this is the case, then there exists one and only one real symmetric solution $P^{-}$ such that $\mathrm{Re}[\lambda(A^{-})] \leq 0$, $A^{-} = A + B(D + D^{T})^{-1}(B^{T}P^{-} - C)$, and one and only one real symmetric solution $P^{+}$ such that $\mathrm{Re}[\lambda(A^{+})] \geq 0$, $A^{+} = A + B(D + D^{T})^{-1}(B^{T}P^{+} - C)$. Moreover, $0 \prec P^{-} \preceq P^{+}$ and every real symmetric solution satisfies $P^{-} \preceq P \preceq P^{+}$. Therefore, all real symmetric solutions are positive definite. The inequalities $H(j\omega) + H^{T}(-j\omega) \succ 0$ for all $\omega \in \mathbb{R}$, $\mathrm{Re}[\lambda(A^{-})] < 0$, $\mathrm{Re}[\lambda(A^{+})] > 0$, and $P^{-} \prec P^{+}$ hold simultaneously.
It will be seen later that the matrices P + and P − play a very particular role in the
energy properties of a dynamical system (Sect. 4.3.3, Remark 4.39). Theorem 3.75
will be given a more general form in Theorem 4.61. The matrix P − is the stabi-
lizing solution of the ARE. Algorithms exist that permit to calculate numerically
the extremal solutions P − and P + ; see [65, Annexe 5.A] where a Fortran routine
is proposed. See also Proposition 5.81 for related results, in the context of H∞ and
bounded real systems, where a different kind of Riccati equation appears.
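A scalar illustration of Theorem 3.75 (the system is our own choice, and we assume the ARE takes the usual form $A^{T}P + PA + (PB - C^{T})(D + D^{T})^{-1}(B^{T}P - C) = 0$, which is not displayed in this excerpt): with $A = -1$, $B = C = D = 1$, the ARE reads $-2p + (p-1)^2/2 = 0$, i.e., $p^2 - 6p + 1 = 0$, with roots $P^{-} = 3 - 2\sqrt{2}$ and $P^{+} = 3 + 2\sqrt{2}$.

```python
import math

# Scalar illustration (our choice): A = -1, B = C = D = 1, D + D^T = 2.
A, B, C, D = -1.0, 1.0, 1.0, 1.0
p_minus = 3.0 - 2.0 * math.sqrt(2.0)
p_plus = 3.0 + 2.0 * math.sqrt(2.0)

for p in (p_minus, p_plus):
    # Assumed ARE: 2*A*p + (p*B - C)^2 / (D + D^T) = 0
    assert abs(2 * A * p + (p * B - C) ** 2 / (D + D)) < 1e-12

# Closed-loop matrices as in Theorem 3.75:
A_minus = A + B * (1.0 / (D + D)) * (B * p_minus - C)
A_plus = A + B * (1.0 / (D + D)) * (B * p_plus - C)
assert p_minus > 0 and A_minus < 0 and A_plus > 0
print("0 < P- < P+, Re(A-) < 0, Re(A+) > 0, as the theorem states")
```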
Remark 3.76 Let us study the case when $C = 0$ and $Q = 0$, with $R = I_m$ without loss of generality. The ARE then becomes
$$
A^{T}G + GA - GBB^{T}G = 0, \qquad (3.181)
$$
and obviously $G = 0$ is a solution. It is the solution that corresponds to the free terminal time optimal control problem with cost $\int_0^{+\infty} u^{T}(t)u(t)\, dt$. If the matrix $A$ is Hurwitz, $G = 0$ is the maximum solution of (3.181). If $-A$ is Hurwitz, $G = 0$ is the minimum solution to the ARE.
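In the scalar case the remark is immediate: $2ag - b^2g^2 = 0$ gives $g \in \{0, 2a/b^2\}$; with $a < 0$ ($A$ Hurwitz) the nonzero solution is negative, so $G = 0$ is the maximum solution, and with $-A$ Hurwitz it is positive, so $G = 0$ is the minimum one. A quick check (values illustrative):

```python
# Scalar version of (3.181): 2*a*g - b^2*g^2 = 0, with A = a, B = b.
def scalar_are_solutions(a, b):
    return sorted([0.0, 2.0 * a / b**2])

a, b = -1.0, 1.0                      # A Hurwitz
sols = scalar_are_solutions(a, b)
for g in sols:
    assert abs(2 * a * g - b**2 * g**2) < 1e-12
assert max(sols) == 0.0               # G = 0 is the maximum solution

a = 1.0                               # now -A is Hurwitz
sols = scalar_are_solutions(a, b)
for g in sols:
    assert abs(2 * a * g - b**2 * g**2) < 1e-12
assert min(sols) == 0.0               # G = 0 is the minimum solution
print("G = 0 is extremal exactly as stated in Remark 3.76")
```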
Extensions toward the singular case ($R \succeq 0$) can be found in [126]; see also Remark 4.111.
We did not provide most of the proofs of the above results, in particular that of Theorem 3.73. Let us end this section with a result that links the positivity of the Popov function and a KYP Lemma LMI, together with its complete proof.
where the pair $(A, B)$ is controllable, is nonnegative if and only if there exists $P = P^{T}$ such that
$$
\begin{pmatrix} Q - A^{T}P - PA & S - PB \\ S^{T} - B^{T}P & R \end{pmatrix} \succeq 0. \qquad (3.183)
$$
Lemma 3.78 ([65, Proposition 9.4]) Let $\Pi(s)$ be the spectral function in (3.182), which we say is described by the five-tuple $(A, B, Q, S, R)$. Then
• (i) $\Pi(s)$ is also described by the five-tuple $(A_2, B_2, Q_2, S_2, R_2)$ where $A_2 = A$, $B_2 = B$, $Q_2 = Q - A^{T}P - PA$, $S_2 = S - PB$, $R_2 = R$, where $P = P^{T}$ is any matrix.
• (ii) For $H(s) = I_m - C(sI_n - A + BC)^{-1}B$, where $C$ is any $m \times n$ matrix, the spectral function $H^{T}(-s)\Pi(s)H(s)$ is described by the five-tuple $(A_1, B_1, Q_1, S_1, R_1)$ where $A_1 = A - BC$, $B_1 = B$, $Q_1 = Q + C^{T}RC - SC - C^{T}S^{T}$, $S_1 = S - C^{T}R$, $R_1 = R$.
Proof (i) Let $\Pi_2(s)$ be the Popov function described by the five-tuple $(A_2, B_2, Q_2, S_2, R_2)$. Then
(ii) Notice that $(sI_n - A)^{-1}BH(s) = (sI_n - A + BC)^{-1}B$. The Popov function $H^{T}(-s)\Pi(s)H(s)$ can be written as
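Property (i) can be verified numerically, assuming the spectral function described by $(A, B, Q, S, R)$ has the standard form $\Pi(s) = \begin{pmatrix} (-sI_n - A)^{-1}B \\ I_m \end{pmatrix}^{T} \begin{pmatrix} Q & S \\ S^{T} & R \end{pmatrix} \begin{pmatrix} (sI_n - A)^{-1}B \\ I_m \end{pmatrix}$; equation (3.182) is not reproduced in this excerpt, so this form is an assumption on our part:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A = rng.standard_normal((n, n)) - 5.0 * np.eye(n)   # spectrum away from s = 0.7
B = rng.standard_normal((n, m))
Q = rng.standard_normal((n, n)); Q = Q + Q.T
S = rng.standard_normal((n, m))
R = rng.standard_normal((m, m)); R = R + R.T
P = rng.standard_normal((n, n)); P = P + P.T         # arbitrary symmetric P

def Pi(s, Q, S, R):
    """Assumed form of the spectral function described by (A, B, Q, S, R)."""
    x = np.linalg.solve(s * np.eye(n) - A, B)        # (sI - A)^{-1} B
    y = np.linalg.solve(-s * np.eye(n) - A, B)       # (-sI - A)^{-1} B
    M = np.block([[Q, S], [S.T, R]])
    return np.hstack([y.T, np.eye(m)]) @ M @ np.vstack([x, np.eye(m)])

s = 0.7
assert np.allclose(Pi(s, Q, S, R),
                   Pi(s, Q - A.T @ P - P @ A, S - P @ B, R))
print("the P-shifted five-tuple describes the same spectral function")
```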
Lemma 3.79 Let A ∈ R^{r×r}, B ∈ R^{s×s}, C ∈ R^{r×s}. The solution of the equation AP + PB = C is unique if and only if the set of eigenvalues of A and the set of eigenvalues of −B have no common element.

Proof Let us first define the stacked vectors p ≜ (P•₁^T, …, P•ₛ^T)^T and c ≜ (C•₁^T, …, C•ₛ^T)^T, where P•ᵢ and C•ᵢ are the ith column vectors of P and C, respectively, and

  𝒜 ≜ blockdiag(A) + [ B₁₁I_r … B₁ₛI_r ; ⋮ ⋱ ⋮ ; Bₛ₁I_r … BₛₛI_r ] = blockdiag(A) + {Bᵢⱼ I_r},

so that AP + PB = C is equivalent to the linear system 𝒜p = c, whose solution is unique if and only if 𝒜 is invertible. Let U and V be the invertible matrices such that J(A) = UAU^{−1} and J(B) = VBV^{−1} are in Jordan form. Then the Jordan form of 𝒜 is J(𝒜) = blockdiag(J(A)) + {[J(B)]ᵢⱼ I_r}. The eigenvalues of 𝒜 are therefore the diagonal entries of J(𝒜). It is inferred that each of these eigenvalues is the sum of an eigenvalue of A and an eigenvalue of B, and vice versa. Hence 𝒜 is invertible if and only if no such sum vanishes, i.e., if and only if the spectra of A and −B are disjoint. The lemma is proved.
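The stacking argument can be checked numerically: the eigenvalues of the stacked operator are exactly the pairwise sums of eigenvalues of A and B, and for generic data the Sylvester equation has a unique solution. A sketch using SciPy (random data, purely illustrative):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(1)
r, s = 3, 2
A = rng.standard_normal((r, r))
B = rng.standard_normal((s, s))
C = rng.standard_normal((r, s))

# Column-stacked form: AP + PB = C reads (I_s (x) A + B^T (x) I_r) p = c
Acal = np.kron(np.eye(s), A) + np.kron(B.T, np.eye(r))

# Its eigenvalues are all sums lambda_i(A) + mu_j(B)
eigs = np.linalg.eigvals(Acal)
for la in np.linalg.eigvals(A):
    for mu in np.linalg.eigvals(B):
        assert np.min(np.abs(eigs - (la + mu))) < 1e-8

# Generic A, B: spectra of A and -B are disjoint, so the solution is unique
P = solve_sylvester(A, B, C)    # solves AP + PB = C
assert np.allclose(A @ P + P @ B, C)
```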
In the next proof, the notation (A, B, Q, S, R) →_{(C,P)} (A′, B′, Q′, S′, R′) means that one has applied the two transformations of Lemma 3.78 successively. The two Popov functions which correspond to each other through such a transformation are simultaneously nonnegative.
One checks that (A, B, Q, S, R) →_{(C,J)} (A − BC, B, 0, H^T, R) with H = S − JB − C^T R. Under these conditions, the positivity of Π(s) is equivalent to that of the transformed Popov function.
If the matrix M has the special form as in Theorem 3.77, and Q ⪰ 0, then A is a stable matrix and P ⪰ 0, from standard Lyapunov equation arguments. We will need those results when we deal with hyperstability. The generalization of this equivalence for a limited range of frequencies |ω| ≤ ϖ has been proposed in [143, 144], see Sect. 3.12. This has important practical consequences because of bandwidth limitations in actuators and sensors.
Remark 3.80 There is a strict version of this result when A is a Hurwitz matrix [89, Theorem 2]. It says that if A is a Hurwitz matrix, then the strict Popov frequency condition Π(λ) ⪰ δI_{n+m}, δ > 0, is satisfied if and only if the KYP Lemma LMI (3.3) holds strictly for some P = P^T ≻ 0.
Definition 3.81 Two Popov triples (A, B, Q, L, R) and (Ã, B̃, Q̃, L̃, R̃) are called (X, F)-equivalent if there exist matrices F ∈ R^{m×n} and X = X^T ∈ R^{n×n} such that

  Ã = A + BF,  B̃ = B,
  Q̃ = Q + LF + F^T L^T + F^T RF + Ã^T X + XÃ,   (3.190)
  L̃ = L + F^T R + XB,  R̃ = R.

One then writes (A, B, Q, L, R) ∼ (Ã, B̃, Q̃, L̃, R̃). Two Popov triples (A, B, 0, L, R) and (Ã, B̃, 0, L̃, R̃) are called dual if Ã = −A^T, B̃ = L, L̃ = −B, R̃ = R.
From the material which is presented above, it should be clear that a Popov triple can be seen as the representation of a controlled dynamical system ẋ(t) = Ax(t) + Bu(t) together with a quadratic cost functional as in (3.166). With a Popov triple Σ one can therefore naturally associate a Popov function Π_Σ as in (3.173), a Riccati equality, and an extended Hamiltonian pencil:

  λM_Σ − N_Σ = λ [ I_n 0 0 ; 0 I_n 0 ; 0 0 0 ] − [ A 0 B ; −Q −A^T −C^T ; C B^T R ].   (3.191)
Lemma 3.83 ([145, 146]) Let Σ = (A, B, Q, C, R) be a Popov triple; the following statements are equivalent:
• There exists an invertible 2 × 2 block matrix V with upper right block zero, such that R = V^T J V, where J = [ −I_{m₁} 0 ; 0 I_{m₂} ], and the Riccati equality A^T P + PA − (PB + C^T)R^{−1}(B^T P + C) + Q = 0 has a stabilizing solution P.
• Π_Σ(s) has a J-spectral factorization Π_Σ(s) = G^∼(s) J G(s), where G^∼(s) ≜ G^T(−s), with G(s), G(s)^{−1} rational m × m matrices with all their poles in the open left half-plane.
These tools and results are useful in the H∞ theory, see [146, Lemma 2, Theorem 3].
Let us state a theorem proved in [138] which holds for stabilizable systems (there is consequently also a link with the material of Sect. 3.3). This theorem summarizes several relationships between the solvability of the KYP Lemma set of equations and the regular optimal control problem, under a stabilizability assumption only.
Theorem 3.84 Let the pair (A, B) be stabilizable. Then the following assertions are
equivalent:
• (i) The optimal control problem: (3.162) and (3.163) subject to ẋ(t) = Ax(t) +
Bu(t), x(0) = x0 , is regular, i.e., it has a solution for any x0 ∈ Rn , and this solution
is unique.
• (ii) There exists a quadratic Lyapunov function V (x) = xT Px, P T = P, such that
the form V̇ + w(u, x) = 2xT P(Ax + Bu) + w(u, x) of the variables x ∈ Cn and
u ∈ Cm is positive definite.
• (iii) The condition w(u, x) ≥ δ(xT x + uT u) for any value of ω ∈ R, x ∈ Cn , u ∈ Cm
satisfying jωx = Ax + Bu, holds for some δ > 0.
• (iv) The matrix R = R^T in (3.163) is positive definite and the set of equations PA + A^T P + Q = kRk^T, PB + C^T = −kR possesses a solution in the form of real matrices P = P^T and k, such that the controller u = k^T x stabilizes the system ẋ(t) = Ax(t) + Bu(t).
• (v) R ≻ 0 and det(jωJ − K) ≠ 0 for all ω ∈ R, with

  J = [ 0 −I_n ; I_n 0 ],  K = [ C^T R^{−1}C − Q   A^T − C^T R^{−1}B^T ; A − BR^{−1}C   BR^{−1}B^T ].
• (vi) R ≻ 0 and there exist a quadratic form V = x^T Px, P = P^T, and a matrix k ∈ R^{n×m}, such that V̇ + w(u, x) = |R^{1/2}(u − k^T x)|² and the controller u = k^T x stabilizes the system ẋ(t) = Ax(t) + Bu(t).
• (vii) The functional V_f(·) in (3.162) is positive definite on the set M(0) of processes (x(·), u(·)) that satisfy ẋ(t) = Ax(t) + Bu(t) with x(0) = x₀ = 0, i.e., there exists δ > 0 such that

  ∫₀^{+∞} w(u(t), x(t))dt ≥ δ ∫₀^{+∞} (x^T(t)x(t) + u^T(t)u(t))dt

for all (x(·), u(·)) ∈ M(0), where M(x₀) is the set of admissible processes.
Let at least one of these assertions be valid (which implies that they are all valid). Then
there exists a unique pair of matrices (P, K) which conforms with the requirements of
item (iv). In the same way, there is a unique pair which complies with the requirements
of item (vi), and the pairs under consideration are the same. Finally, any of the items (i) through (vii) implies that for any initial state x₀ ∈ Rⁿ one has V(x₀) = x₀^T Px₀ = min_{M(x₀)} V_f(x(·), u(·)).
The set M(x₀) of admissible processes consists of the set of pairs (x(·), u(·)) which satisfy ẋ(t) = Ax(t) + Bu(t) with x(0) = x₀ and u ∈ L₂. If (A, B) is controllable, then M(x₀) ≠ ∅ for any x₀ ∈ Rⁿ.
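Items (iv) and (vi) can be illustrated numerically in the special case where the cross-term matrix C in w(u, x) vanishes, so that the equations of item (iv) reduce to the standard algebraic Riccati equation. A hedged sketch with assumed data (the matrices A, B, Q, R below are illustrative, not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative data: w(u, x) = x^T Q x + u^T R u, i.e., C = 0.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Stabilizing solution of A^T P + P A - P B R^{-1} B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Item (iv) with C = 0: P B = -k R  =>  k = -P B R^{-1},
# and then P A + A^T P + Q = k R k^T.
k = -P @ B @ np.linalg.inv(R)
assert np.allclose(P @ A + A.T @ P + Q, k @ R @ k.T)

# The feedback u = k^T x stabilizes the system
cl = A + B @ k.T
assert np.all(np.linalg.eigvals(cl).real < 0)
print("items (iv)/(vi) verified on the sample data")
```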
3.12 The Generalized KYP Lemma

Let us give in this section a brief account of the so-called Generalized KYP Lemma,
as introduced in [143, 144, 147]. The basic idea is to characterize positive realness,
in a finite (low) frequency range. The motivation is that in practice, actuators and sen-
sors possess a finite bandwidth, and one can control flexible structures by imposing
passivity only in a limited frequency range.
The material in Sect. 3.11.3 may be used as a starting point, in particular Theorem 3.77 and the framed summary at the end of that section. The Generalized KYP Lemma is stated as a modified version of the equivalence: Π(jω) ≻ 0 ⟺ there exists P = P^T such that M̄ ≺ 0. Let us start with the following definition.
Definition 3.85 ([147, Definition 4]) A transfer function H(s) ∈ C^{m×m} is said to be finite frequency positive real (FFPR) with bandwidth ϖ, if
1. H(s) is analytic in Re[s] > 0,
2. H(jω) + H^*(jω) ⪰ 0 for all ω ∈ [0, ϖ] such that jω is not a pole of H(·),
3. every pole of H(s) on j[0, ϖ], if any, is simple and the corresponding residue K₀ ⪰ 0 or K∞ ⪰ 0, and it is Hermitian.
Compare with the conditions in Theorem 2.48. Here we take those conditions as a definition, as often happens.
Theorem 3.86 ([147, Theorem 1, Theorem 4]) Let A ∈ R^{n×n}, B ∈ R^{n×m}, M = M^T ∈ R^{(n+m)×(n+m)}, and ϖ > 0 be given. Assume that the pair (A, B) is controllable. Then the following statements are equivalent:
1. The finite frequency inequality

  [ (jωI_n − A)^{−1}B ; I_m ]^* M [ (jωI_n − A)^{−1}B ; I_m ] ⪯ 0   (3.192)

holds for all ω ∈ Ω ≜ {ω ∈ R | det(jωI_n − A) ≠ 0, |ω| ≤ ϖ}.
2. There exist P = P^T ∈ R^{n×n}, Q = Q^T ∈ R^{n×n}, Q ⪰ 0, such that

  [ A^T I_n ; B^T 0 ] [ −Q P ; P ϖ²Q ] [ A B ; I_n 0 ] + M ⪯ 0.   (3.193)

If strict inequalities are considered in both (3.192) and (3.193), and A has no eigenvalues λ(A) = jω with |ω| ≤ ϖ, then the controllability assumption can be removed. Let us now consider transfer functions H(s) ∈ R^{m×m}, with real rational elements. Let x(·) be the state of any minimal realization of H(s), u(·) the input and y(·) the output. Then the following statements are equivalent:
1. H(s) is FFPR with bandwidth ϖ.
2. ∫_{−∞}^{+∞} u^T(t)y(t)dt ≥ 0 holds for all L₂-bounded inputs such that ∫_{−∞}^{+∞} ẋ(t)ẋ^T(t)dt ⪯ ϖ² ∫_{−∞}^{+∞} x(t)x^T(t)dt.
The proof of these results is technically involved and is therefore omitted in this brief introduction to FFPR transfer functions; readers are referred to [147] for complete developments. Notice that M̄ in the above-framed paragraph at the end of Sect. 3.11.3 can be rewritten equivalently as

  M̄ = M + [ A^T I_n ; B^T 0 ] [ 0 P ; P 0 ] [ A B ; I_n 0 ].

This is recovered if we let Q vanish in Theorem 3.86, which is the case when ϖ → +∞ [147]. Discrete-time counterparts of Definition 3.85 and Theorem 3.86 can be found in [148], see the end of Sect. 3.15.5 for a very brief presentation.
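Condition 2 of Definition 3.85 lends itself to a direct numerical check on a frequency grid. A minimal sketch, assuming an illustrative scalar transfer function and bandwidth ϖ (neither taken from the text):

```python
import numpy as np

# H(s) = (s + 2) / (s^2 + 0.2 s + 1): positive real at low frequency only,
# with a lightly damped resonance near w = 1.
def H(w):
    s = 1j * w
    return (s + 2.0) / (s**2 + 0.2 * s + 1.0)

varpi = 0.5   # assumed bandwidth, chosen below the resonance
ok = all(2.0 * H(w).real >= 0.0 for w in np.linspace(0.0, varpi, 200))
print("H(jw) + H(jw)^* >= 0 on [0, %.1f]:" % varpi, ok)

assert ok
assert 2.0 * H(2.0).real < 0.0   # positivity fails outside the band
```

This illustrates the point of the FFPR notion: positivity is only required where actuators and sensors actually operate, even though it fails at higher frequencies.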
where v(·) and w(·) are white, zero-mean Gaussian noises. Since the system is PR, we assume, without loss of generality (see Remark 3.88 at the end of this section), that the following equations hold for some matrix Q_A ⪰ 0:

  A + A^T = −Q_A ⪯ 0   (3.195)
  B = C^T.   (3.196)

where P_c = P_c^T ⪰ 0 and P_f = P_f^T ⪰ 0 are the LQ-regulator and the Kalman–Bucy filter Riccati matrices, which satisfy the algebraic Riccati equations

  P_c A + A^T P_c − P_c BR^{−1}B^T P_c + Q = 0   (3.200)
  P_f A^T + AP_f − P_f BR_w^{−1}B^T P_f + Q_v = 0,   (3.201)

where Q and R are the usual weighting matrices for the state and input, and Q_v and R_w are the covariance matrices of v and w. It is assumed that Q ⪰ 0 and that the pair (A, Q_v^{1/2}) is observable. The main result is stated as follows.
Theorem 3.87 ([151]) Consider the PR system in (3.194), (3.195), and (3.196) and the LQG-type controller in (3.197) through (3.201). If Q, R, Q_v and R_w are such that

  Q_v = Q_A + BR^{−1}B^T,  R_w = R   (3.202)

and

  Q − BR^{−1}B^T ≜ Q_B ⪰ 0,   (3.203)

then the controller in (3.198) through (3.199) (described by the transfer function from y to u) is SPR.

where Q_B is defined in (3.203). In view of (3.196) and the above, it follows that the controller in (3.198) and (3.199) is strictly positive real.
The above result states that, if the weighting matrices for the regulator and the
filters are chosen in a certain manner, the resulting LQG-type compensator is SPR.
However, it should be noted that this compensator would not be optimal with respect
to actual noise covariance matrices. The noise covariance matrices are used herein
merely as compensator design parameters and have no statistical meaning. Condition (3.203) is equivalent to introducing an additional term y^T R^{−1}y in the LQ performance index (since Q = Q_B + C^T R^{−1}C and y = Cx) and is not particularly restrictive. The resulting
feedback configuration is guaranteed to be stable despite unmodeled plant dynamics
and parameter inaccuracies, as long as the plant is positive real. One application of
such compensators would be for controlling elastic motion of large flexible space structures using collocated actuators and sensors. Further work on
passive LQG controllers has been carried out in [152–157]. In [155], the standard
LQG problem is solved over SPR transfer functions, see Sect. 5.10.4 for a similar
control design synthesis.
Multiplying on the left and on the right by P^{−1/2}, we obtain (3.195) with Q_A = P^{−1/2}LL^T P^{−1/2}. Multiplying (3.206) on the left by P^{−1/2}, we obtain (3.196).
Let us recapitulate some of the material in the previous subsections, where SSPR
transfer functions are used. We consider the two matrix polynomials
and the linear time-invariant system (Σ) with minimal realization ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t). Then all the following statements are equivalent to each other [158]:
• (1) The transfer function of (Σ) is SSPR, and A is asymptotically stable.
• (2) There exists P = P^T ≻ 0 such that

  [ A^T P + PA    C^T − PB    ]
  [ C − B^T P     −(D + D^T)  ]  ≺ 0.   (3.208)
The above equivalences make a nice transition to the relationships between semi-
definite programming problems (SDP) and the KYP Lemma. Let us consider a SDP
of the form
  minimize  q^T x + Σ_{k=1}^{L} Tr(Q_k P_k)
  subject to  [ A_k^T P_k + P_k A_k   P_k B_k ; B_k^T P_k   0 ] + Σ_{i=1}^{p} x_i M_{ki} ⪯ N_k,  k = 1, …, L,   (3.210)

where the variables (unknowns) are x ∈ R^p and P_k = P_k^T ∈ R^{n_k×n_k}, Tr denotes the trace of a matrix, and the problem data are q ∈ R^p, Q_k = Q_k^T ∈ R^{n_k×n_k}, A_k ∈ R^{n_k×n_k}, B_k ∈ R^{n_k×m_k}, M_{ki} = M_{ki}^T ∈ R^{(n_k+m_k)×(n_k+m_k)}, and N_k = N_k^T ∈ R^{(n_k+m_k)×(n_k+m_k)}.
Such a SDP is named a KYP-SDP [161] because of the following. As seen in Sect. 3.11.3, the KYP Lemma states a frequency-domain inequality of the form

  [ (jωI_n − A)^{−1}B ; I_m ]^* M [ (jωI_n − A)^{−1}B ; I_m ] ⪯ 0   (3.211)

for all ω ∈ [−∞, +∞], with M symmetric and A having no imaginary eigenvalue (equivalently, the transfer function C(sI_n − A)^{−1}B + D has no poles on the imaginary axis). And (3.211) is equivalent to the LMI M̄ ⪯ 0 (see the end of Sect. 3.11.3). The constraints in the KYP-SDP in (3.210) possess the same form as M̄ ⪯ 0, where M is replaced by an affine function of the variable x. Let us take Q_k = 0; then the KYP-SDP can equivalently be rewritten as
  minimize  q^T x
  subject to  [ (jωI_n − A_k)^{−1}B_k ; I_m ]^* (M_k(x) − N_k) [ (jωI_n − A_k)^{−1}B_k ; I_m ] ⪯ 0,  k = 1, …, L,   (3.212)

where the optimization variable is x and M_k(x) = Σ_{i=1}^{p} x_i M_{ki}. Applications of KYP-SDPs are found in optimization problems with frequency-domain inequalities, linear systems analysis and design, digital filter design, robust control analysis using integral quadratic constraints, linear quadratic regulators, the search for quadratic Lyapunov functions, etc. More details may be found in [161]. We do not dwell further on this topic, since it would take us too far from the main subject of this book.
3.13 The Lur’e Problem (Absolute Stability)

3.13.1 Introduction
In this section, we study the stability of an important class of control systems. The
Lur’e problem has been introduced in [2] and can be considered as the first step
towards the synthesis of controllers based on passivity. Consider the closed-loop
system shown in Fig. 3.3. We are interested in obtaining the conditions on the linear
system and on the static nonlinearity such that the closed-loop system is stable. This
is what is called the Lur’e problem. The celebrated Popov and Circle Criteria are
introduced, as well as techniques relying on multipliers and loop transformations.
The following is to be considered as an introduction to these fields, which constitute
by themselves an object of analysis in the Automatic Control literature.
The linear system is given by the following state space representation:

  (Σ)  ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t) + Du(t),  x(0) = x₀.   (3.213)

The linear system is assumed to be minimal, i.e., controllable and observable, which means that

  rank [ B  AB  …  A^{n−1}B ] = n,  and  rank [ C ; CA ; ⋮ ; CA^{n−1} ] = n.
Fig. 3.4 Sector nonlinearity in the scalar case, bounded by the lines ay and by
In the scalar case (m = 1), the static nonlinearity is shown in Fig. 3.4, with 0 < a < b < +∞.

Remark 3.89 Let m = 1. The sector condition is often written in an incremental way as a ≤ (φ(y₁) − φ(y₂))/(y₁ − y₂) ≤ b, for all y₁ ≠ y₂ ∈ R. It is not difficult to show that the functions y ↦ by − φ(y) and y ↦ φ(y) − ay are both nondecreasing under such a constraint. If φ(·) is differentiable, this implies that a ≤ φ′(y) ≤ b for all y. It follows also that φ(y) − ay and by − φ(y) have the same sign for all y. Then, if φ(0) = 0, (φ(y) − ay)(by − φ(y)) ≥ 0 and a ≤ (φ(y₁) − φ(y₂))/(y₁ − y₂) ≤ b are equivalent conditions. If m ≥ 2, the incremental sectoricity can be expressed as aᵢ ≤ (φᵢ(y₁) − φᵢ(y₂))/(y₁,ᵢ − y₂,ᵢ) ≤ bᵢ for all y₁, y₂ ∈ Rᵐ, 1 ≤ i ≤ m. Then bᵢyᵢ − φᵢ(y) and φᵢ(y) − aᵢyᵢ have the same sign, so that (φ(y) − ay)^T(by − φ(y)) ≥ 0. The sectoricity with m = 1 is also sometimes written as ay² ≤ φ(t, y)y ≤ by², which is equivalent, when φ(t, 0) = 0, to (by − φ(t, y))(φ(t, y) − ay) ≥ 0. Finally, notice that the above sector condition could also be formulated as (φ(t, y) − Ay)^T(By − φ(t, y)) ≥ 0 for some m × m matrices A and B.
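As a concrete illustration of these sector conditions, the scalar saturation nonlinearity lies in the sector [0, 1], and both the pointwise and the incremental conditions can be verified numerically (the example is ours, not from the text):

```python
import numpy as np

# Saturation lies in the sector [a, b] = [0, 1]: phi(0) = 0 and
# (phi(y) - a y)(b y - phi(y)) >= 0 for all y.
a, b = 0.0, 1.0
phi = lambda y: np.clip(y, -1.0, 1.0)

y = np.linspace(-5.0, 5.0, 1001)
assert np.all((phi(y) - a * y) * (b * y - phi(y)) >= -1e-12)

# Incremental form: slopes of all chords lie in [a, b]
y1, y2 = np.meshgrid(y, y)
mask = np.abs(y1 - y2) > 1e-9
slopes = (phi(y1) - phi(y2))[mask] / (y1 - y2)[mask]
assert slopes.min() >= a - 1e-12 and slopes.max() <= b + 1e-12
print("saturation verified in the sector [0, 1], pointwise and incrementally")
```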
The function φ(·, ·) must be such that the closed-loop system is well-posed. For an
ordinary differential equation ẋ(t) = f (x(t), t), the so-called Carathéodory condi-
tions are as follows.
and the output mapping makes sense only if Eq. (3.215) has a unique solution y = h(x) for all t ≥ 0 and all x ∈ Rⁿ. Let us recall that a single-valued mapping ρ(·) is monotone if ⟨x − x′, y − y′⟩ ≥ 0 whenever x = ρ(y) and x′ = ρ(y′), for all y ∈ dom(ρ) and y′ ∈ dom(ρ). It is strongly monotone if ⟨x − x′, y − y′⟩ ≥ α‖y − y′‖² for some α > 0.
Lemma 3.94 Let D ⪰ 0 and φ : Rᵐ → Rᵐ be monotone. Then the equation

  y = Cx − Dφ(y)   (3.216)

has a unique solution y = h(x) for each x ∈ Rⁿ.
The proof of the above fact applies to generalized equations of the form 0 ∈ F(y) + N_K(y), where N_K(·) is the normal cone to the closed convex set K ⊆ Rᵐ (we shall come back to convex analysis later in this chapter). It happens that N_{Rᵐ}(y) = {0} for all y ∈ Rᵐ. But it is worth keeping in mind that the result would still hold when restricting the variable y to some closed convex set. Coming back to the Lur’e problem, one sees that a direct feedthrough of the input in the output is allowed, provided some conditions are respected. Positive real systems with D ≻ 0 (which therefore have a uniform vector relative degree r = (0, …, 0)^T ∈ Rᵐ), or with D ⪰ 0, meet these conditions.
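The well-posedness asserted by Lemma 3.94 can be illustrated numerically by solving y = Cx − Dφ(y) for a monotone φ and a positive definite D. A sketch with assumed data (φ = tanh and random C, D are purely illustrative choices):

```python
import numpy as np
from scipy.optimize import fsolve

n, m = 3, 2
rng = np.random.default_rng(2)
C = rng.standard_normal((m, n))
D0 = rng.standard_normal((m, m))
D = D0 @ D0.T + np.eye(m)          # D symmetric positive definite
phi = np.tanh                      # componentwise monotone nonlinearity

x = rng.standard_normal(n)
residual = lambda y: y - C @ x + D @ phi(y)   # root of this is the output y

y_star = fsolve(residual, np.zeros(m))
assert np.allclose(residual(y_star), 0.0, atol=1e-8)
print("unique output solved: y =", y_star)
```

The Jacobian of the residual, I + D·diag(sech²), is nonsingular here, which is a numerical reflection of the uniqueness claim.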
The Lur’e problem in Fig. 3.3 can be stated as follows: find conditions on (A, B, C, D) such that the equilibrium point x = 0 of the closed-loop system is globally asymptotically stable for all nonlinearities φ(·) in the sector [a, b]. Then the system is said to be absolutely stable.
Fig. 3.5 LTI system with a constant output (left), and a sector nonlinearity (right), in negative
feedback
Aizerman’s conjecture states that if the closed-loop matrix associated with the vector field Ax + Bφ(y) is Hurwitz for all linear functions φ(·) with slopes in [a, b], then the fixed point x = 0 should be globally asymptotically stable for any nonlinear φ(·) in the sector [a, b].
Conjecture 3.96 (Kalman’s conjecture) Consider the system in Fig. 3.5 (right) with a nonlinearity such that φ(t, y) = φ(y) (i.e., a time-invariant and continuously differentiable nonlinearity), m = 1, φ(0) = 0 and a ≤ dφ/dy(y) ≤ b. Then the system in (3.213) with D = 0 is globally asymptotically stable, if it is globally asymptotically stable for all nonlinearities φ(y) = ky, k ∈ [a, b].
Thus Kalman’s conjecture says that if A − kBC is Hurwitz for all k ∈ [a, b], x = 0
should be a globally stable fixed point for (3.213) (3.214) with slope-restricted φ(·)
as described in Conjecture 3.96. However, it turns out that both conjectures are false
in general (a counterexample to Aizerman’s conjecture is constructed in [166], who
exhibited a periodic solution in case n = 4, for a particular case; these results are,
however, incomplete and have been improved later). In fact, the absolute stability
problem, and consequently Kalman conjecture, may be considered as a particular
case of a more general problem known in the Applied Mathematics literature as
the Markus–Yamabe conjecture (MYC in short). The MYC can be stated as follows
[167].
In other words, the MYC states that if the Jacobian of a system at any point of the
state space has eigenvalues with strictly negative real parts, then the fixed point of
the system should be globally stable as well. Although this conjecture seems very
sound from an intuitive point of view, it is false for n ≥ 3. Counterexamples have
been given for instance in [168]. It is, however, true in dimension 2, i.e., n = 2.
This has been proved in [169, 170]. The proof is highly technical and takes around
40 pages. Since it is, moreover, outside the scope of this monograph dedicated to
dissipative systems, it will not be reproduced nor summarized here. This is, however,
one nice example of a result that is apparently quite simple and whose proof is quite
complex. The Markus–Yamabe conjecture has been proved to be true for gradient
vector fields, i.e., systems of the form ẋ(t) = ∇f (x(t)) with f (·) of class C 2 [171].
It is clear that the conditions of Kalman’s conjecture with f (x) = Ax + bφ(y),
φ(0) = 0, make it a particular case of the MYC. In short, one could say that Kalman’s
conjecture (as well as Aizerman’s conjecture) is a version of MYC for control theory
applications. Since, as we shall see in the next subsections, there has been a major
interest in developing (sufficient) conditions for Lur’e problem and absolute stability
in the Systems and Control community, it is also of significant interest to know the
following result:
Since it has been shown in [169] that the MYC is true for n = 1, 2, it follows immedi-
ately that this is also the case for Kalman’s conjecture. Aizerman’s conjecture has
been shown to be true for n = 1, 2 in [174], proving in a different way that Kalman’s
conjecture holds for n = 1, 2. The following holds for the case n = 3.
with x(t) ∈ R³, y(t) ∈ R, b ∈ R³, c ∈ R³, min_y dφ/dy(y) = 0, max_y dφ/dy(y) = k ∈ (0, +∞), φ(0) = 0, is globally asymptotically stable if the matrices A + dφ/dy(y) bc^T ∈ R^{n×n} are Hurwitz for all y ∈ R.
In Sect. A.7, we provide a sketch of the proof of the fact that the result does not hold
for n ≥ 4, which consists of a counterexample.
Fig. 3.6 SPR transfer function H(s) in negative feedback with the nonlinearity φ(t, ·)
Let us come back to the Lur’e problem with single-valued nonlinearities in the feed-
back loop. Consider the observable and controllable system in (3.213). Its transfer
function H (s) is
H (s) = C (sIn − A)−1 B + D. (3.219)
Assume that the transfer function H (s) ∈ Cm×m is SPR and is connected in negative
feedback with a nonlinearity φ(·, ·) as illustrated in Fig. 3.6. The conditions for
stability of such a scheme are stated in the following theorem.
Theorem 3.100 Consider the system in Fig. 3.6. If H (s) in (3.219) is SPR, the
conditions of Lemma 3.94 are satisfied, and if φ(t, y) is in the sector [0, ∞), i.e.,
(i) φ (t, 0) = 0, for all t ≥ 0,
Proof Since H(s) = C(sI − A)^{−1}B + D is SPR, there exist P ≻ 0, Q and W, ε > 0 such that

  A^T P + PA = −εP − Q^T Q
  B^T P + W^T Q = C   (3.220)
  W^T W = D + D^T.
Note that B^T P = C − W^T Q. Hence, using the above, (3.213) and the control u = −φ(t, y), we get

  x^T(t)PBφ(t, y(t)) = φ^T(t, y(t))B^T Px(t) = φ^T(t, y(t))Cx(t) − φ^T(t, y(t))W^T Qx(t)
    = φ^T(t, y(t))(y(t) − Du(t)) − φ^T(t, y(t))W^T Qx(t)
    = φ^T(t, y(t))(y(t) + Dφ(t, y(t))) − φ^T(t, y(t))W^T Qx(t).
Using (3.220) and the fact that y^T φ(t, y) ≥ 0 for all y and t, and defining z̄(t) ≜ −(Qx(t) + Wφ(t, y(t)))^T(Qx(t) + Wφ(t, y(t))), the derivative of V can be rewritten as V̇(x(t)) = −εV(x(t)) + z̄(t). Thus

  V(x(t)) = e^{−εt}V(x(0)) + ∫₀ᵗ e^{−ε(t−τ)} z̄(τ)dτ ≤ e^{−εt}V(x(0)).
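The exponential bound just obtained can be illustrated by simulation. In the scalar example below (our own choice, not from the text), H(s) = 1/(s + 1) with A = −1, B = C = 1, D = 0 is SPR, and the Lur’e equations (3.220) are solved by P = 1, Q = 1, W = 0 with ε = 1; the sector [0, ∞) nonlinearity is φ(y) = y³:

```python
import numpy as np

# Check V(x(t)) <= e^{-eps t} V(x(0)) with V(x) = x^2 (P = 1), eps = 1,
# for xdot = A x - B phi(y), y = C x, phi(y) = y^3.
eps = 1.0
dt, T = 1e-3, 5.0
x = 2.0
V0 = x**2
ok = True
for k in range(int(T / dt)):
    t = k * dt
    ok &= (x**2 <= np.exp(-eps * t) * V0 + 1e-6)
    x += dt * (-x - x**3)        # forward Euler step
assert ok
print("exponential decay bound V(x(t)) <= e^{-t} V(x(0)) verified")
```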
Theorem 3.100 applies when φ(·, ·) belongs to the sector [0, ∞). In order to use the
above result when φ(·, ·) belongs to the sector [a, b] we have to make some loop
transformations which are given next. Loop transformations have been widely used
in the context of Lur’e systems, more generally feedback passive systems, because
they allow one to analyze the stability of equivalent systems, and may yield less
conservative stability conditions [97, 181, 182].
(1) If φ(·, ·) belongs to the sector [a, b], then φ₁(t, y) ≜ φ(t, y) − ay belongs to the sector [0, b − a]. This is illustrated in Fig. 3.7 (left).
(2) If φ₁(·, ·) belongs to the sector [0, c] with c = b − a, then we can make the transformation indicated in Fig. 3.7 (right), where ȳ = φ₂(t, ū) and the feedback gain is 1/(c − δ), δ > 0 being an arbitrarily small number. Therefore, as is shown next, φ₂(·, ·) belongs to the sector [0, ∞).
Note that if φ₁ has constant slope c̄, then ȳ = (c̄/(1 − c̄/(c − δ))) ū = (c̄(c − δ)/(c − c̄ − δ)) ū. Therefore,
1. if c̄ = c, lim_{δ→0} ȳ/ū = ∞;
2. if c̄ = 0, ȳ/ū = 0.
Using the two transformations described above, the system in Fig. 3.6 can be trans-
formed into the system in Figs. 3.8 and 3.9. We then have the following corollary of
Theorem 3.100.
Corollary 3.101 If H2 (s) in Figs. 3.8 and 3.9 is SPR and the nonlinearity φ(·, ·)
belongs to the sector [0, ∞), then the closed-loop system is globally exponentially
stable.
Figs. 3.8 and 3.9 Transformed closed-loop systems obtained from H(s) with the gains aI_m and I_m/(b − a − δ), the subsystems H₁(s), H₂(s), and the transformed nonlinearity φ₂(t, ·)
Now one has

  z/(1 + az) + 1/(b − a) = (x + jy)/(1 + a(x + jy)) + 1/(b − a) = (x + jy)(1 + ax − jay)/((1 + ax)² + a²y²) + 1/(b − a).

Therefore η = (x(1 + ax) + ay²)/((1 + ax)² + a²y²) + 1/(b − a) > 0, or equivalently

  0 < (b − a)(x(1 + ax) + ay²) + (1 + ax)² + a²y²
    = (b − a)(x + ax² + ay²) + 1 + 2ax + a²x² + a²y²   (3.222)
    = ab(x² + y²) + (b + a)x + 1,

which implies aby² + ab(x + (a + b)/(2ab))² + 1 − (a + b)²/(4ab) > 0. Note that 1 − (a + b)²/(4ab) = (4ab − a² − 2ab − b²)/(4ab) = −(a − b)²/(4ab). Introducing the above into (3.222), we get

  aby² + ab(x + (a + b)/(2ab))² > (a − b)²/(4ab).

If ab > 0 this can be written as y² + (x + (a + b)/(2ab))² > (a − b)²/(4a²b²), i.e., |z + (a + b)/(2ab)| > |a − b|/(2|ab|). If ab < 0, then |z + (a + b)/(2ab)| < |a − b|/(2|ab|). Let D(a, b) denote the closed disc in the complex plane centered at −(a + b)/(2ab) and with radius |a − b|/(2|ab|). Then Re[z/(1 + az) + 1/(b − a)] > 0 if and only if

  |z + (a + b)/(2ab)| > |a − b|/(2|ab|),  ab > 0.

In other words, the complex number z lies outside the disc D(a, b) in case ab > 0 and lies in the interior of the disc D(a, b) in case ab < 0. We therefore have the following important result.
Theorem 3.102 (Circle Criterion) Consider the SISO system (m = 1) in Figs. 3.8 and 3.9. The closed-loop system is globally exponentially stable if
(i) 0 < a < b: the plot of H(jω) lies outside and is bounded away from the disc D(a, b). Moreover, the plot encircles D(a, b) exactly ν times in the counterclockwise direction, where ν is the number of eigenvalues of A with positive real part.
(ii) 0 = a < b: A is a Hurwitz matrix and

  Re[H(jω)] + 1/b > 0.   (3.223)

(iii) a < 0 < b: A is a Hurwitz matrix; the plot of H(jω) lies in the interior of the disc D(a, b) and is bounded away from the circumference of D(a, b).
(iv) a < b ≤ 0: replace H(·) by −H(·), a by −b, b by −a and apply (i) or (ii).
The proof of this celebrated result can be found for instance in [181, pp. 227–228]. It uses the Nyquist criterion for the proof of case (i).
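Case (ii) reduces to a scalar frequency-domain test that is easy to evaluate numerically. A sketch with an assumed second-order example (the data and the sector bound b are illustrative):

```python
import numpy as np

# H(s) = 1/(s^2 + 3s + 2), realized by (A, B, C) below.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

assert np.all(np.linalg.eigvals(A).real < 0)   # A Hurwitz

def H(w):
    return (C @ np.linalg.solve(1j * w * np.eye(2) - A, B))[0, 0]

b = 5.0
ws = np.concatenate((np.linspace(0.0, 100.0, 5000), [1e4]))
assert all(H(w).real + 1.0 / b > 0.0 for w in ws)
print("case (ii) of the circle criterion holds for the sector [0, %g]" % b)
```

Here min_ω Re H(jω) ≈ −0.057, so the test holds with margin for 1/b = 0.2; for b large enough it would fail, illustrating the trade-off between the sector size and the frequency response.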
Remark 3.103 If b − a → 0, the “critical disc” D(a, b) in case (i) shrinks to the “critical point” −1/a of the Nyquist criterion. The circle criterion is applicable to time-varying and/or nonlinear systems, whereas the Nyquist criterion is only applicable to linear time-invariant systems. One sees that condition (ii) means that H(s) is SSPR.
Unlike the circle criterion, the Popov criterion [1, 7, 64] applies to autonomous single-input–single-output (SISO) systems:

  ẋ(t) = Ax(t) + bu(t)
  ξ̇(t) = u(t)
  y(t) = cx(t) + dξ(t)
  u(t) = −φ(y(t)),   (3.224)

Hence, the transfer function is h(s) = d/s + c(sI − A)^{−1}b, which has a pole at the origin. We can now state the following result:
Theorem 3.104 (SISO Popov criterion) Consider the system in (3.224). Assume
that
1. A is Hurwitz.
2. (A, b) is controllable, and (c, A) is observable.
3. d > 0.
4. φ(·) belongs to the sector (0, ∞).
Then, the system is globally asymptotically stable if there exists r > 0 such that Re[(1 + jωr)h(jω)] > 0 for all ω ∈ R.
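The Popov frequency condition can be tested by sampling ω for a candidate r. A minimal sketch with assumed scalar data (A, b, c, d and r below are illustrative, not from the text):

```python
import numpy as np

# Scalar data for (3.224): A = -1, b = 1, c = 1, d = 1,
# so h(s) = d/s + c (s - A)^{-1} b = 1/s + 1/(s + 1).
A, b, c, d = -1.0, 1.0, 1.0, 1.0

def popov_condition(r, w):
    h = d / (1j * w) + c * b / (1j * w - A)
    return ((1.0 + 1j * w * r) * h).real

r = 1.0
ws = np.logspace(-3, 3, 2000)
assert all(popov_condition(r, w) > 0.0 for w in ws)
print("Popov condition Re[(1 + jwr)h(jw)] > 0 holds with r =", r)
```

For this example a short computation gives Re[(1 + jωr)h(jω)] = r + (1 + rω²)/(1 + ω²), which is positive for every r > 0, so the graphical Popov-line test is trivially passed.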
Remark 3.105 Contrary to the Popov criterion, the circle criterion does not apply to systems with a pole at s = 0 and a nonlinearity φ(·) in the sector [0, ∞).
Proof of Popov criterion: Note that s(sI_n − A)^{−1} = (sI_n − A + A)(sI_n − A)^{−1} = I_n + A(sI_n − A)^{−1}. Hence

  (1 + rs)h(s) = (1 + rs)(d/s + c(sI_n − A)^{−1}b)
    = d/s + rd + c(sI_n − A)^{−1}b + rcb + rcA(sI_n − A)^{−1}b.

Note that d/(jω) is purely imaginary. From the above and by assumption we have

  Re[(1 + jωr)h(jω)] = Re[r(d + cb) + c(I_n + rA)(jωI_n − A)^{−1}b] > 0.
(these are the Lur’e equations from the KYP Lemma, where q and w play the role of L and W, respectively). Choose the Lyapunov function candidate

  V(x, ξ) = x^T Px + dξ² + 2r ∫₀^y φ(σ)dσ.   (3.226)

Given that φ(·) belongs to the sector [0, ∞), it then follows that ∫₀^y φ(σ)dσ ≥ 0. Hence, V(x, ξ) is positive definite and radially unbounded.
  V̇(x, ξ) = ẋ^T Px + x^T Pẋ + 2dξξ̇ + 2rφ(y)ẏ
    = (Ax − bφ)^T Px + x^T P(Ax − bφ) − 2dξφ + 2rφ(c(Ax − bφ) − dφ).
Since g(jω) → r(d + cb) as ω → ∞, it follows that r(d + cb) > 0. Hence,

for all x ∈ Rⁿ and for all ε > 0. We now show that V̇(x, ξ) < 0 if (x, ξ) ≠ (0, 0). If x ≠ 0 then V̇(x, ξ) < 0 since P ≻ 0. If x = 0 but ξ ≠ 0, then y = dξ ≠ 0, and φy > 0 since φ(·) belongs to the sector (0, ∞). Therefore the system (3.224) is globally asymptotically stable.
The next result is less restrictive (or conservative), in the sense that the nonlinearity
is allowed to belong to a larger class of sectors.
Corollary 3.106 Suppose the assumptions of Theorem 3.104 are satisfied, and that φ(·) belongs to the sector (0, k), k > 0. Then, the system is globally asymptotically stable if there exists r > 0 such that

  Re[(1 + jωr)h(jω)] + 1/k > 0.   (3.228)

Proof From the loop transformation in Fig. 3.10, one has φ₁ ≜ φ(1 − (1/k)φ)^{−1}, and g₁(jω) = g(jω) + 1/k = (1 + jωr)h(jω) + 1/k, where g(s) is in (3.225). Calculations show that φ₁ ∈ (0, +∞) and satisfies the assumptions of Theorem 3.104. Therefore, the condition Re[g₁(jω)] = Re[h(jω)] − rω Im[h(jω)] + 1/k > 0 guarantees the global asymptotic stability of the closed-loop system.
Fig. 3.10 Loop transformation with the feedthrough terms 1/k, the nonlinearity φ(·) and the transformed nonlinearity φ₁(·)
Remark 3.107 The circle and the Popov criteria owe their great success to the fact that they lend themselves to graphical interpretations, as pointed out above for the circle criterion. Consider, for instance, the inequality in (3.228). Consider the function m(jω) ≜ Re[h(jω)] + jω Im[h(jω)], ω > 0. Note that Re[(1 + jωr)h(jω)] = Re[h(jω)] − rω Im[h(jω)] = Re[m(jω)] − r Im[m(jω)]. Then, condition (3.228) means that there must exist a straight line with an arbitrary, fixed slope, passing through the point (−1/k, 0) in the complex plane, such that the plot of m(jω) lies to the right of this line (tangency points can exist [186, Sect. VII.3.A]), see Fig. 3.11. The slope of this line is equal to 1/r, and it is usually called the Popov line. It was soon after Popov published his result that Aizerman and Gantmacher noticed that the slope 1/r could be negative [187], see [188, Sect. 2.1.5, Sect. 2.1.6]. This is extended to the MIMO case in [189] (inspired by [190]), using the frequency condition M(jω) + M^*(jω) > 0, with M(jω) = K^{−1} + (I_m + jωN)H(jω), φ(y)^T(y − K^{−1}φ(y)) ≥ 0 for all y, and N an indefinite matrix. The proof relies on a Lyapunov–Postnikov function (i.e., a Lyapunov function containing an integral term as in (3.226)). A multivariable version of the Popov criterion is given in Theorem 5.102 in Sect. 5.11. In the multivariable case, the graphical interpretation becomes too complex to remain interesting, see [191].
Fig. 3.11 Graphical interpretation of the Popov criterion: plot of m(jω) in the (Re[h(jω)], ωIm[h(jω)]) plane for ω from 0 to +∞, with Popov lines (r > 0 and r < 0) through the point −1/k
We have seen in the foregoing sections the usefulness of loop transformations, which allow one to pass from nonlinearities in the sector [a, b] to the sector [0, +∞). For instance, the Popov criterion uses the condition Re[(1 + jωr)h(jω)] > 0, among other conditions, where the term 1 + jωr is a so-called multiplier. Larger classes of multipliers M(jω) were introduced in [192, 193], then in [194].¹⁶ Before defining
these multipliers, let us motivate their use. Let us consider the feedback intercon-
nection in Fig. 3.12, with φ : R → R, φ : L2,e → L2,e , being in the sector [0, k].
A multiplier M is understood as a biproper (i.e., with relative degree zero) transfer
function, with both poles and zeroes in the left half plane. Then, the feedback system
in Fig. 3.12 with H (s) asymptotically stable is stable if and only if the Lur’e sys-
tem without multipliers is stable (here stability is understood as Lyapunov stability,
or internal stability). If the operator S2 = φM −1 (·) is passive, then it is sufficient
that S1 = M (s)H (s) be strictly input passive (i.e., an SSPR transfer function, see
Sect. 2.13.5, or an SPR transfer function, see Example 4.71) [195] [182, Lemma
3.5.1].
The definition of the O’Shea–Zames–Falb multipliers is as follows [194, 195], where δ_t is the Dirac measure with atom at t:
16 Such multipliers are usually called Zames–Falb multipliers; however, as noted in [195], the
original idea is from O’Shea.
180 3 Kalman–Yakubovich–Popov Lemma
[Fig. 3.13: Feedback interconnection with multipliers. Forward operator S1: u1 = r1 − y2 enters H(s) followed by M(s), producing y1. Feedback operator S2: u2, summed with r̂2 = M(s)r2, enters M⁻¹(s) followed by φ(·), producing y2.]
for some γ ∈ R and for all ω ∈ R. Then, the feedback interconnection in Fig. 3.13
is L2 -stable, i.e., r1 , r2 ∈ L2 (R) implies that u1 , u2 , y1 , y2 ∈ L2 (R).
17 The bilateral or two-sided Laplace transform of m(t) is M(s) = ∫_{−∞}^{+∞} e^{−st} m(t) dt.
3.13 The Lur’e Problem (Absolute Stability) 181
Theorem 3.109 can also be stated for SISO systems and nonlinearities in the sector [a, b], b > a, in which case it is required that H̄(s) ≜ (1 + bH(s))/(1 + aH(s)) be stable and
Re[M(jω)H̄(jω)] ≥ 0 for all ω ∈ R [199].
In Fig. 3.13, the signals r1 and r2 may represent noise or reference signals. Notice
that (3.229) can be equivalently rewritten as Re[M(jω)(1/k + H(jω))] > 0. Imposing
further conditions on φ(·) allows one to obtain that lim_{t→+∞} y(t) = 0 [194,
Corollaries 1 and 2]. As noted in [200], given H(s), it is not easy to find a multiplier
M(s). Many articles have been published that aim at calculating such
multipliers, or at exhibiting their properties. In particular, the conservativeness of the
results is a central topic, for obvious practical reasons. An interesting survey on
various types of multipliers is made in [195], where it is pointed out that modern
methods for multiplier search are no longer graphical. One can find in the
Automatic Control literature results about the existence of classes of multipliers
[201], the relationships between different classes of multipliers [196, 202], and
on their calculation [200, 203, 204]. Lemma 3.129 in Sect. 3.14.2 is extended in
[205] using Zames–Falb multipliers. Other loop transformations than the one in
Fig. 3.12 and using O’Shea–Zames–Falb multipliers exist, where both the direct
and the feedback loops are pre- and post-multiplied by two different multipli-
ers [194, Theorem 2]. MIMO systems and nonlinearities are treated this way in
[206]. Set-valued nonlinearities, which we will analyze in detail in Sect. 3.14, are
treated in [207, 208] with the use of multipliers M(jω) = τ + jθω + κω² and loop
transformations. Multipliers of the form α + βs + Σᵢ₌₁ˡ (1 − γᵢ/(s + ηᵢ)) are used in
[209], for monotone increasing nonlinearities in the sector [0, +∞). Various structured
uncertainties in negative feedback are analyzed in [210], with multipliers
of the form M(jω) = m₀ − Σᵢ₌₁ⁿ⁻¹ ( aᵢ/(jω + α)ⁱ − bᵢ(−1)ⁱ⁻¹/(jω − α)ⁱ ), m₀ > 0, which approximate
O'Shea–Zames–Falb multipliers for n sufficiently large (the MIMO case is treated in
[211]). Numerical algorithms have been proposed to calculate O'Shea–Zames–Falb
multipliers after it was noticed in [199] that the problem of absolute stability can be
stated as an infinite-dimensional linear programming optimization problem:
λ = max_{z∈Z} min_{ω∈R} Re[ (1 − ∫_{−∞}^{+∞} e^{−jωt} z(t) dt) H(jω) ],
with Z = {z | z(t) ≥ 0 for all t ∈ R, ∫_{−∞}^{+∞} z(t) dt ≤ 1}, and one sees that the term
1 − ∫_{−∞}^{+∞} e^{−jωt} z(t) dt is the Fourier transform of δ0 − z(t). If λ > 0 then the feedback
system is absolutely stable. The authors in [212] and [213] proposed algorithms
that approximate the above program by a finite-dimensional convex program.
Given the step n, define Mn(jω) = 1 − Σᵢ₌₁ⁿ e^{−jωtᵢ} hᵢ,
h = (h1, h2, . . . , hn)ᵀ, and an associated constraint set Zn.
One has that z ∈ Z ⇔ h ∈ Zn. The algorithm is iterated until λN > 0 for some N
(see [213, Algorithm 1]). Since the algorithm in [212] may converge to a suboptimal
solution, it has been improved in [213], see [213, Theorem 2].
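The discretized search can be mimicked in a few lines. The sketch below is our own toy version (plant H(s) = 1/(s + 1), three fixed atoms tᵢ, a coarse grid over h, and a bounded frequency grid; it is not the algorithm of [212, 213], only the optimization problem they approximate):

```python
import cmath
from itertools import product

def H(w):
    # illustrative plant H(s) = 1/(s + 1), evaluated on the imaginary axis
    return 1.0 / (1j * w + 1.0)

def lam(h, t, omegas):
    # lambda(h) = min_w Re[(1 - sum_i h_i e^{-j w t_i}) H(jw)]
    return min(
        ((1.0 - sum(hi * cmath.exp(-1j * w * ti) for hi, ti in zip(h, t))) * H(w)).real
        for w in omegas
    )

t = (0.5, 1.0, 1.5)                    # fixed atoms of z(t)
omegas = [0.2 * k for k in range(51)]  # frequency grid on [0, 10]
grid = [0.25 * k for k in range(5)]    # candidate h_i values in [0, 1]
best = max(lam(h, t, omegas) for h in product(grid, repeat=3) if sum(h) <= 1.0)
print(best > 0.0)   # a positive value certifies stability on this grid
```

The coarse grid search plays the role of the finite-dimensional program; since h = 0 is feasible, the multiplier can only improve on the multiplier-free estimate min_ω Re[H(jω)].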
Notice that the static nonlinearities in Theorem 3.109 are monotonic. Non-monotonic
nonlinearities are considered in [214], where extensions of O'Shea–Zames–Falb
multipliers are defined. The problem tackled in Sect. 3.14.5 also deals with some
kind of non-monotonic nonlinearity (a set-valued class); however, both classes of
nonlinearities are quite different, and the stability results in [214] (i.e., L2 stability)
and in Sect. 3.14.5 (i.e., semi-global Lyapunov stability) differ as well. It is noteworthy
that the above results deal with the SISO (m = 1) case. The MIMO case is indeed
more involved [206]. The stability of the interconnection as in Fig. 3.13 is analyzed
further in Sect. 5.1.
The circle criterion has been introduced in [215–217] and generalized afterward.
Further results on the absolute stability problem, the Popov criterion, and Zames–Falb
multipliers can be found in [115, 186, 218–241]. These references constitute only a
few of all the articles that have been published on the topic. A list of articles analyzing
the absolute stability problem can be found in [242]. The reader is also referred to
Sect. 5.11 on hyperstability. It is also worth reading the European Journal of Control
special issue dedicated to V.M. Popov [243]. Generalizations of the Popov criterion
with Popov multipliers can be found in [190, 244, 245]. An interesting comparative
study between the circle criterion, the Popov criterion, and the small gain theorem has
been carried out in [246] on a fourth-order spring–mass system with uncertain stiffness. In
terms of conservativeness, the Popov criterion design outperforms the
circle criterion design, and the small gain design is the most conservative one.
An example in [195, Sect. 4.4] shows, however, that the Popov criterion does not
offer improvement over the circle one. The conservativeness of the obtained stability
It is of interest to extend the Lur'e problem to the case where the static nonlinearity
in the feedback loop is not differentiable, or even not a single-valued function
(say, a usual function), but is a multivalued, or set-valued, function (see Definition
3.113).18 This also allows one to consider nonlinearities that belong to the sector
[0, +∞], but not to the sector [0, +∞) (for planar curves, this corresponds to vertical
branches of piecewise linear mappings, as in the set-valued signum function).
The first results on the stability of set-valued Lur’e systems, or set-valued absolute
stability, were obtained in Russia by Yakubovich and others, see [207, 208, 248],
the monograph [188] and references therein. They were mainly related to the prob-
lem of systems with relay functions in the feedback loop, and the well-posedness
relied on Filippov's convexification of discontinuous systems (more precisely, so-called
piecewise single-valued nonlinearities are studied, which are monotone set-valued
functions ϕ(·), such that each component ϕi(·) is monotone, continuous, and
single-valued almost everywhere, the function being completed by the segment
[lim_{x→xd, x<xd} ϕi(x), lim_{x→xd, x>xd} ϕi(x)] if xd is a discontinuity point; hence, each ϕi(·)
is some kind of extended relay function, see [188, Chap. 3]). The set-valued nonlinearities
that we shall consider in this section are more general. It is also noteworthy to
recall that Zames, in his celebrated articles [216, 217], left the door open to considering
set-valued mappings (which he called relations) in the feedback loop, and provided
the definition of incrementally positive relations [217, Appendix A], which is nothing
but monotonicity as defined below. The material presented in this section
can be seen as a nontrivial step forward for the analysis of systems with relations
(in Zames' sense) in the feedback loop. Before stating the main results, we need to
introduce some basic mathematical notions from convex analysis. The reader who
18 One should not confuse systems with multiple (single-valued) nonlinearities in the feedback loop
(which are MIMO systems) and systems with multivalued nonlinearities in the feedback loop (which
can be SISO systems), as they form two quite different classes of dynamical systems.
wants to learn more on convex analysis and differential inclusions with maximal
monotone operators is invited to have a look at the textbooks [249–253].
Remark 3.110 The nonsmooth dynamical systems studied in this section, and else-
where in this book, can be recast into the framework of dynamical hybrid systems,
or of switching systems. The usefulness of such interpretations is unclear, however.
for all x ∈ Rn. Geometrically, (3.232) means that one can construct a set of affine
functions (straight lines) x → (x − y)ᵀγ + f(y), whose "slope" γ is a subgradient
of f(·) at y. The set ∂f(y) may be empty; however, if f(·) is convex and f(y) < +∞,
then ∂f(y) ≠ ∅ [251]. The simplest example is f : R → R₊, x → |x|. Then
∂f(x) = {−1} if x < 0,  [−1, 1] if x = 0,  {+1} if x > 0.  (3.233)
One realizes in passing that ∂|x| is the so-called relay characteristic, and that 0 ∈ ∂|0|:
the absolute value function has a minimum at x = 0. The subdifferential of the
indicator function of K (which is convex since K is convex in our case) is given by
∂ψK(x) = {0} if x ∈ Int(K),  NK(x) if x ∈ bd(K),  ∅ if x ∉ K,  (3.234)
3.14 Multivalued Nonlinearities: The Set-Valued Lur’e Problem 185
and
NK (x) = {z ∈ Rn | z T (ζ − x) ≤ 0, for all ζ ∈ K} (3.235)
is the outward normal cone to K at x.19 Notice that 0 ∈ NK (x) and that we have
drawn the sets x + NK (x) rather than NK (x) in Fig. 3.14. Also, NK (x) = {0} if x ∈
Int(K), where Int(K) = K \ bd(K). The set in (3.234) is the subdifferential from
convex analysis.
Example 3.111 If K = [a, b] then NK (a) = R− and NK (b) = R+ .
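The normal cone of Example 3.111 is exactly what characterizes Euclidean projection: z is the projection of x onto K if and only if x − z ∈ NK(z). A small self-contained check for K = [a, b] (the interval and the test points are arbitrary choices of ours):

```python
def proj_interval(x, a, b):
    # Euclidean projection of x onto K = [a, b]
    return min(max(x, a), b)

def in_normal_cone(v, z, a, b):
    # membership in N_K(z) for K = [a, b] (cf. (3.235) and Example 3.111):
    # N_K(a) = R_-, N_K(b) = R_+, N_K(z) = {0} for z in Int(K)
    if z == a and z == b:
        return True          # degenerate interval K = {a}: N_K(z) = R
    if z == a:
        return v <= 0.0
    if z == b:
        return v >= 0.0
    return v == 0.0

# the projection z of x onto [0, 1] satisfies x - z in N_K(z)
for x in (-2.0, 0.3, 5.0):
    z = proj_interval(x, 0.0, 1.0)
    assert in_normal_cone(x - z, z, 0.0, 1.0)
print("ok")
```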
Definition 3.112 Let K ⊂ Rn be a closed nonempty convex cone. Its polar cone (or negative cone) is K° = {z ∈ Rn | zᵀx ≤ 0 for all x ∈ K}.
with J(x) = {i ∈ {1, . . . , m} | hi(x) ≤ 0}; the cone in (3.237) is named the linearization cone. One notes that K need not be convex to define TK(x)
19 When K is polyhedral, the normal cone is generated by the outward normals to K at the considered
point of its boundary.
in (3.237). Both tangent cones are identical under some constraint qualification (there
are many such CQs, one of the most popular being the Mangasarian–Fromovitz CQ
[165, 255]). Some examples are depicted in Fig. 3.14.
In practice one often has X = Rn and Y = Rm for some integers n and m. The
mappings whose graphs are in Fig. 3.15c–f are multivalued.
Definition 3.115 A monotone mapping ρ(·) is maximal if for any x ∈ X and any
y ∈ Y such that ⟨y − y1, x − x1⟩ ≥ 0 for all x1 ∈ dom(ρ) and all y1 ∈ ρ(x1), one has
y ∈ ρ(x).
that (0, 0) ∈ gph(ρ). For any x ∈ dom(ρ) and any y ∈ ρ(x), we obtain ⟨x, y⟩ ≥ 0,
i.e., in case the mapping is single-valued, xᵀρ(x) ≥ 0: the multivalued monotone
mapping is in the sector [0, +∞].
Complete nondecreasing curves in R² are the graphs of maximal monotone mappings.
Examples of monotone mappings (n = m = 1) are depicted in Fig. 3.15. They
may represent various physical laws, like a dead zone (a), saturation or elastoplasticity
(b), a corner law (unilateral effects, ideal diode characteristic) (c), Coulomb friction
(d), the MOS transistor ideal characteristic (e), unilateral and adhesive effects (f). The
depicted examples all satisfy (0, 0) ∈ gph(ρ), but this is not necessary. One sees that
unbounded vertical lines are allowed. Consider a static system (a nonlinearity) with
the input/output relation y = H(u). If the operator H(·) is monotone in the above
sense, then for any u1 and u2, and corresponding y1 ∈ H(u1), y2 ∈ H(u2), one has
⟨y1 − y2, u1 − u2⟩ ≥ 0. If H(0) = 0, then for any u and any y ∈ H(u), ⟨u, y⟩ ≥ 0.
Thus monotonicity is, in this case, equivalent to passivity.
We end this section by recalling classical tools and definitions which we shall
need next.
Definition 3.117 Let −∞ < a < b < +∞. A function f : [a, b] → Rn is absolutely
continuous if for all ε > 0 there exists δ > 0 such that, for all n ∈ N and any family
of disjoint intervals (α1, β1), (α2, β2), …, (αn, βn) in R satisfying Σᵢ₌₁ⁿ (βi − αi) < δ,
one has Σᵢ₌₁ⁿ |f(βi) − f(αi)| < ε.
Some functions like the Cantor function are continuous but are not absolutely con-
tinuous. In fact, absolutely continuous (AC) functions are usually better known as
follows:
Theorem 3.118 can also be stated as: there exists a Lebesgue integrable function g(·)
such that f(t) = f(a) + ∫ₐᵗ g(τ) dτ (dτ being the Lebesgue measure). In a more sophisticated
For instance, the indicator function in (3.231) is in Γ0(K) when K is a closed convex
nonempty set of Rn, and its subdifferential (the normal cone to K at x) in (3.234) is
maximal monotone K → Rn. In addition, ∂ϕ(x) is a closed convex subset (possibly
empty) of Rn. See Sect. A.9.2 for related material. One has, for instance, ϕ(x) =
ψ_{R−}(x) in Fig. 3.15c, ϕ(x) = |x| + x²/2 for (d), and ϕ(x) = ψ_{(−∞,a]}(x) − ψ_{[−a,+∞)}(x) +
(a − b)/2 + { (x − b)²/2 if |x| ≥ b; 0 if |x| < b } for (e). If ϕ(x1, . . . , xm) = μ1|x1| + · · · + μm|xm| + ½ xᵀx,
then
∂ϕ(0) = ([−μ1, μ1], . . . , [−μm, μm])ᵀ.
Proposition 3.122 Assume that f : Y → (−∞, +∞] is convex and lower semicontinuous.
Let A : X → Y be a linear and continuous operator. Assume that there exists
a point y0 = Ax0 at which f(·) is finite and continuous. Then
∂(f ◦ A)(x) = Aᵀ∂f(Ax)
for all x ∈ X.
The chain rule also holds for affine mappings A : x → A0x + b: ∂(f ◦ A)(x) =
A0ᵀ∂f(Ax) [249, Theorem 4.2.1]. Further generalizations exist, see [257, Sect. 10.B]
[258, Proposition A.3]. Researchers in Automatic Control are usually more familiar
with another type of chain rule, which applies to the composition of Lipschitz
continuous Lyapunov functions with AC solutions of Filippov's differential inclusions,
V(x(t), t), for the computation of (d/dt)V(x(t), t) [259, Lemma 1]. Let us now
state a generalization of the existence and uniqueness results for ODEs (Theorems
3.90 to 3.92) to a class of differential inclusions. The next theorem is known as the
Hille–Yosida theorem if the operator A : x → A(x) is linear.
Theorem 3.123 ([252, Theorem 3.1]) Let A(·) be a maximal monotone operator
mapping Rn into Rn. Then for all x0 ∈ dom(A) there exists a unique Lipschitz continuous
function x(·) on [0, +∞) such that
ẋ(t) ∈ −A(x(t)),  (3.239)
with x(0) = x0, almost everywhere on (0, +∞). The function satisfies x(t) ∈
dom(A) for all t > 0, and it possesses a right derivative for all t ∈ [0, +∞). If
x1(·) and x2(·) are two solutions, then ||x1(t) − x2(t)|| ≤ ||x1(0) − x2(0)|| for all
t ∈ [0, +∞). In case the operator A(·) is linear, then x(·) ∈ C¹([0, +∞), Rn) ∩
C⁰([0, +∞), dom(A)). Moreover, ||x(t)|| ≤ ||x0|| and ||ẋ(t)|| ≤ ||A(x(t))|| ≤
||A(x0)|| for all t ≥ 0.
It is noteworthy that the notion of an operator in Theorem 3.123 goes much further
than the mere notion of a linear operator in finite dimension. It encompasses subdif-
ferentials of convex functions, as will be seen next. It also has important applications
in infinite-dimensional systems analysis. The differential inclusion in (3.239) is a
first example of a set-valued Lur’e dynamical system. Indeed, it can be rewritten as
ẋ(t) = −λ(t),  λ(t) ∈ A(x(t)),  x(0) = x0 ∈ dom(A),  (3.240)
that is, this is an integrator with maximal monotone static feedback loop, and λ(t) is
a selection of the set-valued term A(x(t)).
Example 3.124 Let A(x) = {+1} if x > 0, [0, 1] if x = 0, and {0} if x < 0. Then
the solution is given by x(t) = (x0 − t)⁺ if x0 ≥ 0, and x(t) = x0 if x0 < 0, where
x⁺ = max(0, x).
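A standard way to discretize (3.239) is the implicit Euler scheme x_{k+1} + hA(x_{k+1}) ∋ x_k, whose step is the resolvent of A. For the operator of Example 3.124 the resolvent can be written by hand; the sketch below (our own discretization, not from the text) recovers the closed-form solution x(t) = (x0 − t)⁺ for x0 ≥ 0:

```python
def resolvent_step(x, h):
    # solve x_next + h*A(x_next) ∋ x for the set-valued A of Example 3.124:
    # A(x) = {1} if x > 0, [0, 1] if x = 0, {0} if x < 0
    if x > h:        # remains positive: selection 1, so x_next = x - h
        return x - h
    if x >= 0.0:     # lands exactly on 0 with selection x/h in [0, 1]
        return 0.0
    return x         # negative: A = {0}, no motion

def simulate(x0, h, steps):
    x = x0
    for _ in range(steps):
        x = resolvent_step(x, h)
    return x

# compare with the closed-form solution max(x0 - t, 0) for x0 >= 0
x0, h, steps = 1.0, 0.01, 200   # final time t = 2.0
print(abs(simulate(x0, h, steps) - max(x0 - steps * h, 0.0)) < 1e-12)  # True
```

The implicit (resolvent) step is what lets the discrete trajectory reach and stay at 0 exactly, mirroring the finite-time behavior of the continuous-time inclusion.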
Assume that (0, 0) ∈ gph(A), and that the generalized equation 0 ∈ A(x*) possesses
the unique solution x* = 0 (thus, the origin is the unique equilibrium of the dynamics
(3.239)). Consider the Lyapunov function candidate V(x) = ½ xᵀx; then along
the trajectories of (3.239) we obtain V̇(x(t)) = −x(t)ᵀλ(t), where λ(t) is as in (3.240).
Using the monotonicity and the fact that (0, 0) ∈ gph(A), one infers that V̇(x(t)) ≤ 0:
monotonicity implies (under an additional assumption on the graph of the multifunction)
Lyapunov stability. It is possible to strengthen monotonicity by imposing
various forms of strong monotonicity (or co-coercivity) [260].
Let us end this section by noting that the well-posedness results for the differential
inclusion in (3.239) extend to more general cases, like ẋ(t) ∈ −A(x(t)) + f(t, x(t)),
where the single-valued function f(·, ·) has to satisfy some regularity conditions
(Lipschitz-like), see [261, 262].
ẋ(t) = Ax(t) − ByL(t) (a.e.),  yL(t) ∈ ∂ϕ(y(t)),  y(t) = Cx(t),  (3.242)
where y(t), yL(t) ∈ Rm, x(t) ∈ Rn, and a.e. means almost everywhere in the Lebesgue
measure;20 the function ϕ(·) will be defined next. The fixed points of (3.242) can be
characterized by the generalized equation
0 ∈ {Ax*} − B∂ϕ(Cx*).
One notices that the system in (3.242) is a differential inclusion, due to the multivalued
right-hand side. Indeed, the subdifferential ∂ϕ(y) is in general multivalued. What
is the difference between the differential inclusion in (3.242) and, say, Filippov’s
set-valued convexified systems, which readers from Systems and Control are more
familiar with? The main discrepancy between both is that the right-hand side of
(3.242) need not be a compact (bounded) subset of the state space X ⊆ Rn , for
all x ∈ X . It can, for instance, be a normal cone, which is usually not bounded
(the normal cone at a of the interval [a, b], a < b, is the half line R− , see Example
3.111). Of course there is a nonzero overlap between the two sets of inclusions: If the
feedback loop contains a static nonlinearity as in Fig. 3.15d, then the inclusion (3.242)
can be recast either into the “maximal monotone” formalism, or the “Filippov”
20 It is possible that we sometimes forget or neglect to recall that the inclusions have to be satisfied
almost everywhere. In fact, this is the case each time the solutions are AC, their derivative being
defined up to a set of zero (Lebesgue) measure.
formalism. Actually, Filippov’s systems are in turn a particular case of what one
can name “standard differential inclusions”, i.e., those inclusions whose right-hand
side is compact, convex, and possesses some linear growth property to guarantee
the global existence of solutions (see e.g., [263, Theorem 5.2] for more details).
Other criteria of existence of solutions exist for set-valued right-hand sides which
are Lipschitz continuous, or lower semicontinuous, or upper semicontinuous, or outer
semicontinuous, or continuous [264] (all definitions being understood for set-valued
functions, not for single-valued ones). These properties are not satisfied by our set-
valued Lur’e systems, in general, (for instance, the outer semi-continuity holds for
locally bounded mappings, which clearly excludes normal cones), or they have, like
the upper semi-continuity, to be adapted to cope with the fact that some set-valued
functions have a bounded domain of definition. To summarize, the basic assumptions
on the right-hand sides of both types of inclusions differ so much that their study
(mathematics, analysis for control) surely differ a lot as well.
Let us assume that
(a) G(s) = C(sI − A)⁻¹B, with (A, B, C) a minimal representation, is an SPR
transfer matrix. In particular, from the KYP Lemma this implies that there
exist matrices P = Pᵀ ≻ 0 and Q = Qᵀ ≻ 0 such that PA + AᵀP = −Q and
BᵀP = C.
(b) B has full column rank, equivalently Ker(B) = {0}. Thus CA⁻¹B + BᵀA⁻ᵀCᵀ is
negative definite.21
(c) ϕ : Rm → R ∪ {+∞} is convex lower semicontinuous, so that ∂ϕ(·) is a maximal
monotone multivalued mapping by Corollary 3.121.
Lemma 3.125 ([265]) Let assumptions (a)–(c) hold. If Cx(0) ∈ dom(∂ϕ), then the
system in (3.242) has a unique absolutely continuous (AC) solution on [0, +∞).
Proof Let R = P^{1/2} be the symmetric positive definite square root of P, and let
f(z) ≜ ϕ(CR⁻¹z), so that ∂f(z) = R⁻¹Cᵀ∂ϕ(CR⁻¹z) = RB∂ϕ(CR⁻¹z) by Proposition
3.122 and BᵀP = C. It suffices to show that the transformed system
ż(t) ∈ RAR⁻¹z(t) − ∂f(z(t)),  (3.243)
with z(0) = Rx(0), has a unique AC solution on [0, +∞).22 First, to say that
Cx(0) ∈ dom(∂ϕ) is to say that CR⁻¹z(0) ∈ dom(∂ϕ), and this just means that
z(0) ∈ dom(∂f). Second, it follows from the KYP Lemma that RAR⁻¹ + (RAR⁻¹)ᵀ
is negative definite. Therefore, the multivalued mapping x → −RAR⁻¹x + ∂f(x) is
maximal monotone [252, Lemma 2.4]. Consequently, the existence and uniqueness
result follows from Theorem 3.123. Now set x(t) = R⁻¹z(t). It is straightforward to
check that x(t) is a solution of the system in (3.242). Actually, the system in (3.243)
is deduced from (3.242) by the change of state vector z = Rx.
21 Indeed, A is full rank, and BᵀA⁻ᵀCᵀ + CA⁻¹B = −BᵀA⁻ᵀQA⁻¹B ≺ 0. Under the same rank
condition, one has BᵀAᵀCᵀ + CAB ≺ 0.
22 Let us recall that we should write {RAR⁻¹z(t)} to see it as a set, a notation we never employ to
The proof of the lemma (see also [128, 266]) shows in passing that the negative
feedback interconnection of a PR system with a maximal monotone nonlinearity
produces a differential inclusion with a maximal monotone set-valued right-hand side.
This will be generalized in the sequel and remains true in an infinite-dimensional
setting, see Sect. 4.8.2. This can be considered as a new result about operations which
preserve maximal monotonicity of operators.23
Consider now the linear complementarity system (LCS)
ẋ(t) = Ax(t) + Bλ(t),  0 ≤ y(t) = Cx(t) ⊥ λ(t) ≥ 0,  (3.244)
where (A, B, C) satisfies (a) and (b) above, y(t), λ(t) ∈ Rm, and Cx(0) ≥ 0. The
second line in (3.244) is a set of complementarity conditions between y and λ, stating
that both these terms have to remain nonnegative and orthogonal to each other.
The LCS in (3.244) can be equivalently rewritten as in (3.243) with ϕ(y) = ψ_{(R₊)ᵐ}(y),
noting that
0 ≤ y ⊥ λ ≥ 0 ⇐⇒ −λ ∈ ∂ψ_{Rᵐ₊}(y),  (3.245)
which is a basic result in convex analysis, where ψ(·) is the indicator function in
(3.231). One remarks that if (A, B, C) is passive, then the supply rate w(λ, y) = 0:
complementarity does not inject energy into the system (see (3.6)).
is extended in [258] to the case of nonautonomous systems with both locally AC
and locally BV inputs, both in the linear and nonlinear cases.24 The nonautonomous
case yields another, more complex, type of differential inclusion named first-order
Moreau’s sweeping process.
Remark 3.127 Let us note in passing that Lemma 3.125 applies to nonlinear systems
such as ẋ(t) = −Σₖ₌₀ⁿ x(t)²ᵏ⁺¹ − yL(t), y = x, yL ∈ ∂ϕ(y), x ∈ R. Indeed, the dynamics
−yL → y is strictly dissipative with storage function V(x) = x²/2, so that P = 1 and
z = x.
Remark 3.128 The change of state variable z = Rx, which is instrumental in Lemma
3.125, has been used afterward in [129, 258, 268–284], and extended in [128, 285] and
in [258, Sect. 4] for the nonlinear case.
Lemma 3.129 ([265]) Let assumptions (a)–(c) hold, let the initial data be such that
Cx(0) ∈ dom(∂ϕ), and assume that the graph of ∂ϕ contains (0, 0). Then, (i)
x = 0 is the unique solution of the generalized equation Ax ∈ B∂ϕ(Cx); (ii)
the fixed point x = 0 of the system in (3.242) is exponentially stable.
Proof The proof of part (i) is as follows. First of all, notice that x = 0 is indeed a fixed
point of the dynamics with no control, since 0 ∈ B∂ϕ(0). Now Ax ∈ B∂ϕ(Cx) ⇒
PAx ∈ PB∂ϕ(Cx) ⇒ xᵀPAx ∈ xᵀ∂g(x), where g(x) = ϕ(Cx) (use Proposition 3.122
to prove this); g(·) is convex as the composition of a convex function with a linear
mapping, and we used assumption (a). The multivalued mapping ∂g(x) is monotone
since g(·) is convex. Thus xᵀ∂g(x) ≥ 0 for all x ∈ Rn. Now there exists Q = Qᵀ ≻ 0
such that xᵀPAx = −½ xᵀQx < 0 for all x ≠ 0. Clearly then, x satisfies the generalized
equation only if x = 0.
Let us now prove part (ii). Consider the candidate Lyapunov function W(x) =
½ xᵀPx. From Lemma 3.125, it follows that the dynamics in (3.242) possesses on
[0, +∞) a solution x(t) which is AC, and whose derivative ẋ(t) exists a.e. The
same applies to W(·), which is AC [29, p.189]. Differentiating along the closed-loop
trajectories we get
d(W ◦ x)/dt (t) a.e.= xᵀ(t)Pẋ(t) = xᵀ(t)P(Ax(t) − ByL(t)) = −xᵀ(t)Qx(t) − xᵀ(t)PByL(t) = −xᵀ(t)Qx(t) − xᵀ(t)CᵀyL(t),  (3.246)
where yL is any vector that belongs to ∂ϕ(Cx). The equality in the first line means
that the density of the measure d(W ◦ x) with respect to the Lebesgue measure dt
(which exists since W(x(t)) is AC) is the function xᵀPẋ. Consequently, d(W ◦ x)/dt +
xᵀQx ∈ −xᵀCᵀ∂ϕ(Cx) = −xᵀ∂g(x) a.e., where d(W ◦ x)/dt is computed along the system's
trajectories. Let us consider any z ∈ ∂g(x). One gets d(W ◦ x)/dt a.e.= −xᵀQx −
xᵀz ≤ −xᵀQx from the property of monotone multivalued mappings and since
(0, 0) belongs to the graph of ∂g(·). The set of time instants at which
the inequality d(W ◦ x)/dt ≤ −xᵀQx is not satisfied is negligible in the Lebesgue measure.
It follows that the function of time t → W(x(t)), which is continuous, is nonincreasing.
Thus one has W(t) − W(0) = ∫₀ᵗ (−xᵀQx − xᵀz) dτ ≤ −∫₀ᵗ xᵀQx dτ. Consequently,
½ λmin(P) xᵀx ≤ W(0) − ∫₀ᵗ λmin(Q) xᵀx dτ, where λmin(·) is the smallest
eigenvalue. By Gronwall's Lemma 3.116, one gets that ½ λmin(P) xᵀ(t)x(t) ≤
W(0) exp(−(2λmin(Q)/λmin(P)) t), which concludes the proof.
where q(t) is the position of the system, μ is the friction coefficient, and the control
is given in the Laplace domain by u(s) = H(s)q(s). Defining x1 = q and x2 = q̇, and
taking u = αq + β q̇, we obtain
ẋ(t) = [0 1; α β] x(t) − [0; μ/m] yL(t),  yL(t) ∈ ∂|q̇(t)|,  y(t) = x2(t).  (3.248)
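A minimal simulation sketch of (3.248), with the relay ∂|x2| treated implicitly through its resolvent (a soft-threshold step). All numerical values (m = 1, μ = 0.5, α = −2, β = −1, the step size and the horizon) are illustrative choices of ours, not from the text:

```python
def step(x1, x2, h, alpha=-2.0, beta=-1.0, mu=0.5, m=1.0):
    # semi-implicit Euler for (3.248): x1 enters explicitly, and the scalar
    # inclusion (1 - h*beta)*x2n + (h*mu/m)*Sgn(x2n) ∋ x2 + h*alpha*x1
    # is solved exactly (resolvent of the relay = soft threshold)
    b = x2 + h * alpha * x1
    c = 1.0 - h * beta
    kappa = h * mu / m
    if b > kappa:
        x2n = (b - kappa) / c
    elif b < -kappa:
        x2n = (b + kappa) / c
    else:
        x2n = 0.0   # stick phase: a selection yL in [-1, 1] balances the force
    return x1 + h * x2n, x2n

x1, x2, h = 1.0, 0.0, 1e-3
for _ in range(20000):   # simulate up to t = 20
    x1, x2 = step(x1, x2, h)
print(abs(x2) < 1e-8)                 # the mass has stuck on x2 = 0
print(abs(-2.0 * x1) <= 0.5 + 1e-9)   # inside the set |alpha*x1| <= mu/m
```

The implicit treatment is what allows x2 to reach zero exactly and stay there, reproducing the set of equilibria {x2 = 0, |αx1| ≤ μ/m} instead of chattering around it.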
Remark 3.133 Extensions of the circle criterion to the case of set-valued feedback
mappings have been analyzed in the control literature [188, 207, 208, 293].
The results in [293, Corollary 9] are restricted to set-valued lower semicontinuous
nonlinearities φ : R → R in the sector [a, b], b < +∞, satisfying φ(0) = {0},
hence excluding relay (signum) multifunctions. The results in [207, 208] apply
to nonlinearities of relay type φ : Rᵐ → Rᵐ, where φi(·) is a function of yi only,
φi(0⁺) = −φi(0⁻) > 0, 1 ≤ i ≤ m, and dφi/dyi ≥ 0 for yi ≠ 0. Solutions are understood in
the sense of Filippov in [207, 208], though the framework of maximal monotone operators
could also be used, since the considered nonlinearities satisfy φ(y) = ∂ϕ(y),
ϕ(y) = Σᵢ₌₁ᵐ ai|yi| + ϕc(y), with ϕc(·) a differentiable and nondecreasing function, and ai > 0.
The so-called monostability is investigated in [207, 208], i.e., absolute stability with
respect to a stationary set, not with respect to an equilibrium point. Hence, the conditions
of the theorem in [208] differ from those presented in this chapter, in which
we focus on the stability of fixed points. One crucial assumption in [208] is that CB ≻ 0
and diagonal. The results can also be consulted in [188, Theorem 3.10], where the
word dichotomic is used (a system is dichotomic if any solution tends to the stationary
set asymptotically, hence this is the monostability property, and it is pointwise
dichotomic when each solution converges asymptotically to an equilibrium point).
See Sect. A.1.1.4 for stability of sets.
Therefore, W(·) is a storage function for (3.249) that is smooth in x, even though the
system is nonsmooth. We notice that if Bu(t) in (3.249) is replaced by Eu(t) for
some matrix E, with both (A, E, C) and (A, B, C) being PR, then the above
developments yield that W(·) is a storage function provided the two triples have a
set of KYP Lemma equations with the same solution P, so that BᵀP = C.
It is quite possible to incorporate (dissipative) state jumps in the above set-valued systems
(state jumps can occur initially, if y(0⁻) = Cx(0⁻) ∉ dom(∂ϕ)). This is the case
for linear complementarity systems as in (3.244). The state jumps are defined from the
storage function matrix P associated with the PR triple (A, B, C), see (3.250). Several
equivalent formulations of the state jump mapping are possible, see [294, Proposition
2.62] [127, 258] [295, p. 319]. Let us see how this works when the set-valued part is
given by the normal cone NK(y) to a closed, nonempty convex set K ⊆ Rm. In other
words, ϕ(y) = ψK(y), where ψK(·) is the indicator function. At a jump at time t, it
is possible to show that the dynamics becomes x(t⁺) − x(t⁻) ∈ −BNK(y(t⁺)).25 Complete
mathematical rigor would require us to state that, at a state jump time, the
overall dynamical system is no longer a differential inclusion, but rather a measure
differential inclusion (MDI). Indeed, the derivative ẋ(·) no longer exists in the usual
sense, but it does exist in the sense of Schwartz distributions, or measures. That is,
dx = (x(t⁺) − x(t⁻))δt, where δt is the Dirac measure. However, we do not want to
enter such mathematics here (see, for instance, [258] for details). Let us now make
the basic assumption that there exists P = Pᵀ ≻ 0 such that PB = Cᵀ (which is
implied by the PRness of the triple (A, B, C), but may hold without stability). Then,
25 Actually, the fact that the argument of the normal cone is y(t⁺) is a particular choice that yields
a particular state jump mapping.
Lemma 3.134 Let {0} ⊂ K. The state jump mapping in (3.251) is dissipative, i.e., if
t is a jump time, then V(x(t⁺)) ≤ V(x(t⁻)).
Proof We have V(x) = ½ xᵀPx, from which it follows that V(x(t⁺)) − V(x(t⁻)) =
½ ||projP[K̄; x(t⁻)]||²_P − ½ ||x(t⁻)||²_P, where ||x||²_P ≜ xᵀPx for any x ∈ Rn. Since {0} ⊂
K ⇒ {0} ⊂ K̄, it follows that the projection defined from P = Pᵀ ≻ 0 is nonexpansive,
and V(x(t⁺)) − V(x(t⁻)) ≤ 0.
In autonomous systems, there may exist a state jump at the initial time, in case the
initial output is outside dom(∂ϕ). After that, Lemma 3.125 secures the existence of
an absolutely continuous solution. However, in the nonautonomous case like (3.252)
below, things may be different, depending on the regularity of the external action
u(·). Let us investigate further how the post-jump state may be calculated. Let us
start with the inclusion P(x(t⁺) − x(t⁻)) ∈ −NK̄(x(t⁺)) (which is a generalized
equation with unknown x(t⁺)). Assume that the set K̄ is polyhedral, i.e., K̄ = {x ∈
Rn | Mx + N ≥ 0} for some matrix M and vector N. Then, the normal cone to K̄ at
z is NK̄(z) = {w ∈ Rn | w = −Mᵀα, 0 ≤ α ⊥ Mz + N ≥ 0} [249, Examples 5.2.6,
p.67]. We can rewrite the inclusion as P(x(t⁺) − x(t⁻)) = Mᵀα, 0 ≤ α ⊥ Mx(t⁺) +
N ≥ 0. A few manipulations yield 0 ≤ α ⊥ MP⁻¹Mᵀα + Mx(t⁻) + N ≥ 0: this is a
linear complementarity problem, which can be solved using a suitable algorithm [296,
297]. Once α has been computed, then x(t⁺) = P⁻¹Mᵀα + x(t⁻). This is therefore
a convenient way to calculate the projection in the polyhedral case (in the general
case, it may not be obvious to compute the projection onto a convex set).
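For small dimensions, the LCP above can be solved by brute-force enumeration of active sets, which is enough to illustrate the jump computation. The data below (M = I, N = 0, P = diag(2, 1), so K̄ = R²₊) are toy choices of ours, and the enumeration is not one of the algorithms of [296, 297]:

```python
def solve_linear(A, b):
    # tiny Gauss-Jordan elimination with partial pivoting (square systems)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lcp_enumerate(W, q):
    # solve 0 <= alpha ⊥ W alpha + q >= 0 by enumerating active sets
    m = len(q)
    for mask in range(2 ** m):
        S = [i for i in range(m) if mask >> i & 1]
        alpha = [0.0] * m
        if S:
            sol = solve_linear([[W[i][j] for j in S] for i in S], [-q[i] for i in S])
            for i, v in zip(S, sol):
                alpha[i] = v
        w = [q[i] + sum(W[i][j] * alpha[j] for j in range(m)) for i in range(m)]
        if all(a >= -1e-12 for a in alpha) and all(v >= -1e-12 for v in w):
            return alpha
    return None

# post-jump state for P = diag(2, 1) and Kbar = {x | x >= 0} (M = I, N = 0)
P_inv = [[0.5, 0.0], [0.0, 1.0]]
x_minus = [-1.0, 2.0]
W = P_inv        # M P^{-1} M^T with M = I
q = x_minus      # M x(t-) + N
alpha = lcp_enumerate(W, q)
x_plus = [x_minus[i] + sum(P_inv[i][j] * alpha[j] for j in range(2)) for i in range(2)]
print(x_plus)    # [0.0, 2.0]: the P-projection of x(t-) onto Kbar
```

The LCP route recovers exactly the P-weighted projection of x(t⁻) onto K̄, as predicted.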
The material above extends to the case of LCS with a feedthrough matrix D ≠ 0 and
external controls:
ẋ(t) = Ax(t) + Bλ(t) + Eu(t),  0 ≤ λ(t) ⊥ w(t) = Cx(t) + Dλ(t) + Fu(t) ≥ 0,  (3.252)
with a passivity constraint on (A, B, C, D) [127, 298], i.e.,
[ −AᵀP − PA   Cᵀ − PB ; C − BᵀP   D + Dᵀ ] ⪰ 0,
with P = Pᵀ ≻ 0. The definition of the set K has to be modified accordingly to
K = {z ∈ Rn | Cz + Fu(t⁺) ∈ Q⋆}, with Q = {z ∈ Rm | z ≥ 0, Dz ≥ 0, zᵀDz = 0}.
Here Q is a closed convex cone (also sometimes called the kernel of the set of
solutions of the LCP 0 ≤ z ⊥ Dz ≥ 0), and Q⋆ is its dual cone. Notice that if D ≻ 0,26
then Q = {0} and Q⋆ = Rm, hence K = Rn and x(t⁺) = x(t⁻): there are no state
jumps (as long as the jump mapping is defined as above). We recover the fact that if
D is a P-matrix, a bounded multiplier λ is sufficient to integrate the system, which
is then a particular piecewise continuous system (with Lipschitz continuous vector
field) [299]. This bounded multiplier is merely the unique solution of the linear
complementarity problem 0 ≤ λ(t) ⊥ w(t) = Cx(t) + Dλ(t) + Fu(t) ≥ 0.
In the general case, one has to assume that {0} ⊂ K to guarantee the dissipativity
of the state jump mapping, plus a constraint qualification Fu(t) ∈ Q⋆ + Im(C),
which secures that K is a convex cone. Several equivalent formulations of the state
jump mapping (including those in (3.251), as well as mixed linear complementarity
problems) exist [298, Lemma 2].
One may wonder why this particular state jump has been chosen. From a purely
mathematical viewpoint, there is no obstacle in trying something else, like x(t + ) −
x(t − ) ∈ −BNK (y(t + ) + Λy(t − )) for some matrix Λ. It is possible to justify the
above jump mapping (with Λ = 0) in the case of circuits, using the charge/flux
conservation principle [300]. The very first property of the state jump mapping is
that the generalized equations that define it have to be well-posed. For instance,
setting x(t + ) − x(t − ) ∈ −BNK (y(t − )) does not allow one to compute x(t + ) in a
unique way, contrary to (3.251).
From (3.252) and using (A.78) (with K = Rm+ and K ◦ = Rm− ), one can rewrite the LCS as the differential inclusion

ẋ(t) ∈ Ax(t) + Eu(t) + B(D + ∂ψK )−1 (−Cx(t) − Fu(t)).    (3.253)
The underlying Lur’e structure of the LCS in (3.252) clearly appears in (3.253). An
important step for the analysis of (3.253) is to characterize the domain of the operator
x → B(D + ∂ψK )−1 (−Cx − Fu(t)). This is the objective of the work in [279], see
also [278], for D ⪰ 0 and u(t) = 0 for all t ≥ t0 .
Remark 3.136 These state jump mappings and their analysis are quite similar to
the more familiar case of mechanical systems, when the framework of Moreau’s
26 If the passivity constraint holds, then it suffices to require that D be full rank, since the Lur’e
equations imply D + DT ⪰ 0, hence D ⪰ 0, and D ≻ 0 if it is invertible.
3.14 Multivalued Nonlinearities: The Set-Valued Lur’e Problem 199
sweeping process (of second order) is chosen, see Sects. 6.8.2 and 7.2.4. The physics
behind circuits and mechanical systems may, however, not be identical, motivating
the analysis and use of different state reinitialization rules. A very detailed analysis of
state jump mappings (called restitution mappings in the field of Mechanics) has been
carried out in [295].
The finite-time convergence has been alluded to in Example 3.131. Let us report,
in this section, a result from [301, 302] that applies to differential inclusions of the
form

ẋ(t) − [0, In ; 0, 0] x(t) + [0 ; ∇f (x1 (t))] ∈ −B∂ϕ(Cx(t)),    (3.254)

where x = (x1 , x2 ) ∈ Rn × Rn .
Proposition 3.137 ([302, Theorem 2.1, Proposition 2.6] [301, Theorem 24.8])
1. For every initial condition (x1 (0), x2 (0)) = (x1,0 , x2,0 ) ∈ Rn × Rn , there exists a
unique solution of (3.254) such that x1 ∈ C 1 ([0, +∞); Rn ) and x2 (·) is Lipschitz
continuous on [0, T ] for every T > 0.
2. limt→+∞ x2 (t) = 0.
3. limt→+∞ x1 (t) = x1,∞ , where x1,∞ satisfies −∇f (x1,∞ ) ∈ ∂ϕ(0).
4. If −∇f (x1,∞ ) ∈ Int(∂ϕ(0)), then there exists 0 ≤ t ∗ < +∞ such that x1 (t) =
x1,∞ for all t > t ∗ .
The proof shows that h(t) ≜ ||x2 (t)||2 satisfies ḣ(t) + 2ε√h(t) ≤ 0, a.e. in [0, +∞).
Hence (d/dt)√h(t) ≤ −ε whenever h(t) > 0, so that √h(t) ≤ √h(0) − εt: this is used
to prove that ||x2 (t)|| = 0 after a finite time (at most √h(0)/ε). From the fact that
x2 (t) = ẋ1 (t), the result follows. Notice that item 4) in the proposition means that
−∇f (x1,∞ ) ∈ / bd(∂ϕ(0)). Since the boundary of a convex set has an empty inte-
rior, it seems reasonable to conjecture that the cases where −∇f (x1,∞ ) ∈ bd(∂ϕ(0))
are exceptional ones. Such conclusions agree with the well-known property of
Coulomb’s friction [295]: if the contact force lies on the boundary of the friction
cone, then sliding motion is possible. If on the contrary the contact force lies inside
the friction cone, then tangential sticking is the only possible mode. This is why the
condition in item 4) is sometimes called the dry friction condition. It will be used
again in Theorem 7.39 (Sect. 7.5.1).
Remark 3.138 Finite-time passivity is defined in [303] for nonlinear systems ẋ(t) =
f (x(t), u), y(t) = h(x(t), u), f (0, 0) = 0, h(0, 0) = 0, with f (·, ·) and h(·, ·) continu-
ous in their arguments. The infinitesimal dissipation inequality reads as uT (t)y(t) ≥
V̇ (x(t)) + γ (V (x(t))) for some continuously differentiable positive-definite storage
function V (x), where the function γ (·) is of class K and satisfies the classical condition
for finite-time convergence: ∫0ε dz/γ (z) < +∞ for some ε > 0. Then the system with zero input has a
finite-time stable trivial solution x = 0. An example is given by ẋ(t) = −x3/5 (t) + u,
y(t) = x3 (t). One notices that the vector field is not Lipschitz continuous near the
equilibrium, which is indeed a necessary condition for finite-time convergence.
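This example can be checked numerically with a forward-Euler integration (the step size and horizon are our arbitrary choices): the non-Lipschitz field ẋ = −x3/5 reaches zero in finite time (at t = 5/2 from x(0) = 1), whereas the Lipschitz field ẋ = −x only converges exponentially.

```python
import math

def euler(f, x0, t_end, dt=1e-3):
    """Forward-Euler integration of the scalar ODE xdot = f(x)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(x)
        t += dt
    return x

# xdot = -x^{3/5} (sign-preserving for x < 0): non-Lipschitz at 0, finite-time convergent.
finite_time = lambda x: -math.copysign(abs(x) ** 0.6, x) if x != 0 else 0.0
# xdot = -x: Lipschitz, only exponentially convergent (x(t) = e^{-t} x(0)).
exponential = lambda x: -x
```

At t = 3 (past the exact extinction time 5/2) the first trajectory is numerically extinct, while the second is still of order e−3.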
Proposition 3.137 is less stringent than this definition of passivity, as it leaves some
freedom for the position equilibrium x1,∞ , a fact that is usual with Coulomb’s fric-
tion. The condition PB = C T is trivially satisfied in (3.254) since B = C T , so the
right-hand side is rewritten using the chain rule as −∂φ(x) with φ = ϕ ◦ C, a convex
function. The conditions such that V (x) = xT x is a Lyapunov function for the set of
equilibria of (3.254) can be deduced.
Let us consider the set-valued system ẋ(t) = Ax(t) + Bλ(t), λ(t) ∈ M (Cx(t)), where
the operator M : Rm ⇒ Rm is hypomonotone (see Definition 3.114), that is, there
exists k > 0 such that M̃ (·) ≜ (M + k)(·) is maximal monotone. One can use a
loop transformation as defined in Sect. 3.13.4.1, to analyze this system, as shown in
Fig. 3.16. The transformed system has the dynamics:
ẋ(t) = (A + kBC)x(t) + Bλ̃(t),
λ̃(t) ∈ −M̃ (Cx(t)),    (3.255)
which is equivalent to the original one. Thus, Lemmas 3.125 and 3.129 apply, where
the condition is now that (A + kBC, B, C) be PR or SPR. If (A, B, C) is PR, then there
exists P = P T ≻ 0 such that PB = C T , and the condition boils down to checking the
stability of A + kBC: in a sense an excess of passivity of the linear system should
compensate for a lack of passivity (here, monotonicity) of the feedback loop.
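A scalar sketch of this loop transformation may help. We pick illustrative data of our own (not from the text): A = −2, B = C = 1, and the hypomonotone map M (y) = tanh(y) − ky with k = 1, so that M̃ = M + k·Id = tanh is maximal monotone; under the negative-feedback convention λ = −M (y), the original and transformed realizations define the same vector field:

```python
import math

# Illustrative scalar data (our choice): A = -2, B = C = 1, k = 1.
A, B, C, k = -2.0, 1.0, 1.0, 1.0

M = lambda y: math.tanh(y) - k * y   # hypomonotone: M + k*Id = tanh is maximal monotone
M_tilde = math.tanh                  # transformed (monotone) nonlinearity

def rhs_original(x):
    """xdot = A x + B lambda, with lambda = -M(Cx) (negative feedback)."""
    return A * x + B * (-M(C * x))

def rhs_transformed(x):
    """xdot = (A + kBC) x + B lambda_tilde, lambda_tilde = -M_tilde(Cx), as in (3.255)."""
    return (A + k * B * C) * x + B * (-M_tilde(C * x))

def euler(f, x0, t_end, dt=1e-3):
    """Forward-Euler integration, enough to observe convergence."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(x)
        t += dt
    return x
```

Here A + kBC = −1 is Hurwitz, so the excess of passivity of the linear block compensates the lack of monotonicity of M (·), and trajectories converge to 0.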
3.14.3.1 Introduction
Let K ⊂ IRn be a nonempty closed convex set. Let F : IRn → IRn be a nonlinear
operator. For (t0 , x0 ) ∈ IR × K, we consider the problem P(t0 , x0 ): find a function t →
x(t) (t ≥ t0 ) with x ∈ C 0 ([t0 , +∞); IRn ), ẋ ∈ L∞,e ([t0 , +∞); IRn ) and such that
Fig. 3.16 Loop transformation: the linear block (A, B, C) with output y = Cx in negative feedback with M (·) is equivalent to (A + kBC, B, C) in feedback with M̃ (·), through −λ̃ = −λ + ky
x(t) ∈ K, t ≥ t0 ,
⟨ẋ(t) + F(x(t)), v − x(t)⟩ ≥ 0, for all v ∈ K, a.e. t ≥ t0 ,    (3.256)
with x(t0 ) = x0 . Here ⟨., .⟩ denotes the Euclidean scalar product in IRn . It follows from
standard convex analysis that (3.256) can be rewritten equivalently as the differential
inclusion
ẋ(t) + F(x(t)) ∈ −NK (x(t)),
x(t) ∈ K, t ≥ t0 ,    (3.257)
where the definition of the normal cone to a set K ⊆ Rn is in (3.235). One sees
that (3.257) fits within (3.242), with a particular choice of the multivalued part (i.e.,
of the function ϕ(·)). Hence, (3.256) can be recast into Lur’e set-valued dynamical
systems. If K = {x ∈ Rn | Cx ≥ 0} (a convex polyhedral cone), the reader may use
Proposition 3.122 together with (3.231), (3.234), and (3.245) to deduce that (3.257)
is the LCS

ẋ(t) + F(x(t)) = C T λ(t),
0 ≤ Cx(t) ⊥ λ(t) ≥ 0.    (3.258)
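The evolution variational inequality (3.256) can be simulated with the projected (catching-up) Euler scheme x_{k+1} = proj_K (x_k − h F(x_k )). The sketch below uses data of our own choosing: K = R²₊ and the affine map F(x) = x − (−1, 2)ᵀ, whose variational-inequality solution over K is x* = (0, 2)ᵀ.

```python
def proj_orthant(x):
    """Euclidean projection onto K = R^n_+."""
    return [max(0.0, xi) for xi in x]

def projected_euler(F, x0, h, steps):
    """Catching-up scheme x_{k+1} = proj_K(x_k - h F(x_k)) for (3.256)."""
    x = x0
    for _ in range(steps):
        g = F(x)
        x = proj_orthant([x[i] - h * g[i] for i in range(len(x))])
    return x

# F(x) = x - c with c = (-1, 2): the VI solution over R^2_+ is x* = (0, 2).
F = lambda x: [x[0] + 1.0, x[1] - 2.0]
```

Iterating from x(0) = (1, 1) drives the first coordinate onto the boundary of K (where the multiplier λ becomes active) and the second to its unconstrained equilibrium.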
One sees that in such a case, the input/output passivity condition PB = C T is trivially
satisfied, since B = C T . Still, another formulation for (3.257) is as follows (which is
known as a variational inequality of the second kind):
Theorem 3.139 ([281]) Let K be a nonempty closed convex subset of IRn and let
A ∈ IRn×n be constant. Suppose that F : IRn → IRn can be written as F = F1 + F2 , where F1 (·) is Lipschitz continuous and F2 (·) is monotone. Then, for each (t0 , x0 ) ∈ IR × K, Problem P(t0 , x0 ) has a unique solution x(·; t0 , x0 ).
Suppose that the assumptions of Theorem 3.139 are satisfied and denote by x(.; t0 , x0 )
the unique solution of Problem P(t0 , x0 ) in (3.256). Suppose now in addition that
0∈K (3.264)
and
− F(0) ∈ NK (0) (3.265)
so that ⟨F(0), h⟩ ≥ 0, for all h ∈ K. Then x(t; t0 , 0) = 0, t ≥ t0 , i.e., the trivial solu-
tion 0 is the unique solution of problem P(t0 , 0). Notice the important fact: if F(x)
is decomposed as above, and if k is the Lipschitz constant of F1 (·), then F(·) is
hypomonotone, i.e., F + κIn is monotone, for any constant κ ≥ k. Thus the function
x → Ax + F(x) is also hypomonotone, with κ ≥ sup||x||=1 ||Ax|| + k.
We now give two theorems inspired by [305] (see also [306, Sect. 5.2]) that guar-
antee that the fixed point of the system is Lyapunov stable.
Theorem 3.142 ([281]) Suppose that the assumptions of Theorem 3.139 together
with the condition (3.265) hold. Suppose that there exist σ > 0 and V ∈ C 1 (IRn ; IR)
such that
1. V (x) ≥ a(||x||), x ∈ K, ||x|| ≤ σ , with a : [0, σ ] → IR satisfying a(t) > 0, for
all t ∈ (0, σ ),
2. V (0) = 0,
3. x − ∇V (x) ∈ K, for all x ∈ bd(K), ||x|| ≤ σ ,
4. ⟨Ax + F(x), ∇V (x)⟩ ≥ 0, x ∈ K, ||x|| ≤ σ .
Then the trivial solution of (3.262) and (3.263) is stable.
Theorem 3.143 ([281]) Suppose that the assumptions of Theorem 3.139 together
with the condition (3.265) hold. Suppose that there exist λ > 0, σ > 0 and V ∈
C 1 (IRn ; IR) such that
Sketch of the proof of Theorems 3.142 and 3.143: Notice that the condition in
item 3 implies that −∇V (x) ∈ TK (x) (the tangent cone to K at x ∈ K), for all x ∈ K,
||x|| ≤ σ . Going back to (3.256), one sees that along the system’s trajectories: V̇ (x(t)) =
∇V (x(t))T ẋ(t), and ⟨ẋ(t) + Ax(t) + F(x(t)), v − x(t)⟩ ≥ 0 for all v ∈ K and x(t) ∈
K. If x(t) ∈ bd(K), let us choose v = x(t) − ∇V (x(t)) (which is in K by item 3,
Definition 3.145 ([281]) The matrix A ∈ IRn×n is Lyapunov positive strictly stable
on K if there exists a matrix P ∈ IRn×n such that
1. inf x∈K\{0} ⟨Px, x⟩/||x||2 > 0,
2. inf x∈K\{0} ⟨Ax, (P + P T )x⟩/||x||2 > 0,
3. x ∈ bd(K) ⇒ (In − (P + P T ))x ∈ K.
Remark 3.146 Condition (1) of Definitions 3.144 and 3.145 is equivalent to the
existence of a constant c > 0 such that ⟨Px, x⟩ ≥ c ||x||2 , for all x ∈ K. Indeed, set

c ≜ inf x∈K\{0} ⟨Px, x⟩/||x||2 .
The notion of copositivity is weaker than that of positive semi-definite (PSD) matrices [308, p.174]. Indeed, a PSD matrix is
necessarily copositive on any set K. However, it is easy to construct a matrix that is
copositive on a certain set K, which is not PSD.
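A concrete instance (ours, not from [308]): the symmetric matrix M = [1, 3; 3, 1] is strictly copositive on K = R²₊ yet has the negative eigenvalue −2, hence is not PSD. A quick numerical check:

```python
import math

M = [[1.0, 3.0], [3.0, 1.0]]   # symmetric, eigenvalues 4 and -2

def quad(M, x):
    """Quadratic form x^T M x."""
    return sum(M[i][j] * x[i] * x[j] for i in range(2) for j in range(2))

# Minimum of x^T M x over unit vectors of the nonnegative orthant
# (sampled angles t in [0, pi/2]); here the minimum value is 1 > 0.
min_on_orthant = min(quad(M, [math.cos(t), math.sin(t)])
                     for t in [i * (math.pi / 2) / 200 for i in range(201)])

# Smallest eigenvalue of a symmetric 2x2 matrix, in closed form.
a, b, d = M[0][0], M[0][1], M[1][1]
lam_min = (a + d) / 2.0 - math.sqrt(((a - d) / 2.0) ** 2 + b * b)
```
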
Let us here denote by PK (resp. PK+ ) the set of copositive (resp. strictly copos-
itive) matrices on K. Let us also denote by PK++ the set of matrices satisfying
condition (1) of Definition 3.144, that is,

PK++ = {B ∈ IRn×n | inf x∈K\{0} ⟨Bx, x⟩/||x||2 > 0},
and

LK++ = {A ∈ IRn×n | ∃P ∈ PK++ such that (In − (P + P T ))(bd(K)) ⊂ K and PA + AT P ∈ PK++ }.
Let us note that P need not be symmetric. In summary, the classical positive-
definite property of the solutions of the Lyapunov matrix inequality is replaced by
the copositive-definite property.
To see how evolution variational inequalities are related to the systems in the fore-
going section, let us come back to the system in (3.242):
ẋ(t) = Ax(t) − ByL (t)  (a.e.)
y(t) = Cx(t)    (3.267)
yL (t) ∈ ∂ϕ(y(t)), t ≥ 0,
and let us assume that the convex function ϕ(·) is the indicator of a closed convex
set K ⊂ Rn with 0 ∈ K. We therefore rewrite the problem as follows.
Find x ∈ C 0 ([0, +∞); IRn ) such that ẋ ∈ L∞,e (0, +∞; IRn ) and
and x(0) = x0 . Assume there exists a symmetric and invertible matrix R ∈ IRn×n such
that R−2 C T = B. Suppose also that

y0 ≜ CR−1 x0 ∈ Int(K).    (3.271)
we see that problem (3.268) to (3.270) is equivalent to the following one: find z ∈
C 0 ([0, ∞); IRn ) such that ż ∈ L∞,e ([0, ∞); IRn ) and
ẋ(t) ∈ Ax(t) − B∂ψK (Cx(t)) ⇔ Rẋ(t) ∈ RAR−1 Rx(t) − RB∂ψK (CR−1 Rx(t))
⇔ ż(t) ∈ RAR−1 z(t) − R−1 R2 B∂ψK (CR−1 z(t))
(3.275)
⇔ ż(t) ∈ RAR−1 z(t) − R−1 C T ∂ψK (CR−1 z(t))
⇔ ż(t) ∈ RAR−1 z(t) − ∂ψK̄ (z(t)).
Indeed, one has ψK̄ (z) = (ψK ◦ CR−1 )(z) and using (3.271) we obtain ∂ψK̄ (z) =
R−1 C T ∂ψK (CR−1 z). We remark also that the set K̄ is closed convex with 0 ∈ K̄. The
variable change z = Rx is exactly the same as the variable change used in Lemma
3.125. The following holds.
Theorem 3.147 ([281, Theorem 5]) Let K ⊂ IRn be a closed convex set containing
x = 0, and satisfying the condition (3.271). Define K̄ as in (3.272). Suppose that
there exists a symmetric and invertible matrix R ∈ IRn×n such that R−2 C T = B.
1. If −RAR−1 ∈ LK̄ then the trivial equilibrium point of (3.273) (3.274) is stable.
2. If −RAR−1 ∈ LK̄++ then the trivial equilibrium point of (3.273) (3.274) is asymp-
totically stable.
Both results hold for the trivial equilibrium of (3.268)–(3.270).
Proof (1) −RAR−1 ∈ LK̄ , so there exists a matrix G ∈ Rn×n such that

inf z∈K̄\{0} ⟨Gz, z⟩/||z||2 > 0,    (3.276)

⟨−RAR−1 z, (G + G T )z⟩ ≥ 0, for all z ∈ K̄,    (3.277)

and

z ∈ bd(K̄) ⇒ (In − (G + G T ))z ∈ K̄.    (3.278)

Let us define

V (z) = (1/2) z T (G + G T )z.    (3.279)

Then ∇V (z) = (G + G T )z, and one sees that all the assumptions made in Theorem
3.142 are satisfied. Indeed, (3.276) guarantees the existence of a constant k > 0
such that V (z) ≥ k||z||2 for all z ∈ K̄, see Remark 3.146. Clearly V (0) = 0. Finally,
using (3.277) and (3.278), one infers that ⟨−RAR−1 z, ∇V (z)⟩ ≥ 0 for all z ∈ K̄, and
z ∈ bd(K̄) ⇒ z − ∇V (z) ∈ K̄. Thus the conclusion follows from Theorem 3.142.
(2) −RAR−1 ∈ LK̄++ , thus there exists a matrix G ∈ Rn×n which satisfies (3.276), (3.278),
and

inf z∈K̄\{0} ⟨−RAR−1 z, (G + G T )z⟩/||z||2 > 0.    (3.280)

Let us define V ∈ C 1 (Rn ; R) as in (3.279), and verify as in part (1) that items 1, 2
and 3 in Theorem 3.143 hold. Moreover, using (3.280), one deduces the existence of
a constant c > 0 such that ⟨−RAR−1 z, (G + G T )z⟩ ≥ c||z||2 for all z ∈ K̄. It follows that

⟨−RAR−1 z, (G + G T )z⟩ ≥ (c/||G + G T ||) ⟨(G + G T )z, z⟩, for all z ∈ K̄,    (3.282)

which yields item 4 of Theorem 3.143, and the conclusion follows from Theorem 3.143.
The last assertion of Theorem 3.147 is true, because z and x are related through an
invertible one-to-one state transformation.
Theorem 3.147 extends to the case when a nonlinear perturbation acts on the dynam-
ics, i.e., one considers a single-valued vector field Ax + F(x), with F(·) as above,
and in addition lim||x||→0 ||F(x)||/||x|| = 0, see [281, Theorem 6]. Many examples of stable
and unstable matrices on convex cones are given in [281]. Several criteria to test the
stability (and the instability) have been derived. Let us provide them without proof.
Example 3.149 Let K̄ = R+ × R+ and RAR−1 = [1, 2; −1, 1]; then RAR−1 ∈ LK̄
(notice that this matrix has eigenvalues with real part equal to 1). Let K̄ = R+ × R+
and RAR−1 = [1, −10; 0, 2]; the matrix RAR−1 is a nonsingular M -matrix (it is also
exponentially unstable), K̄ is a cone, if x ∈ K̄ then xi ēi ∈ K̄, i = 1, 2, thus RAR−1 ∈
LK̄++ .
It is important to see that what renders the system stable, while RAR−1 may be
unstable, is that the vector field is modified when the trajectories attain the boundary
of K̄, due to the presence of the multiplier yL (t) which is a selection of the set-valued
right-hand side, see (3.270). A similar mechanism appears for the controllability
[309].
Example 3.150 (PR evolution variational inequalities) Assume that G(s) = C(sI −
A)−1 B, with (A, B, C) a minimal representation, is strictly positive real. From the
Kalman–Yakubovich–Popov Lemma there exist P = P T ≻ 0 and Q = QT ≻ 0,
such that PA + AT P = −Q and BT P = C. Choosing R as the symmetric square root
of P, i.e., R = RT , R ≻ 0, and R2 = P, we see that BT R2 = C and thus R−2 C T = B.
Moreover,

⟨PAx, x⟩ + ⟨AT Px, x⟩ = −⟨Qx, x⟩, for all x ∈ IRn .    (3.283)

Thus ⟨Ax, Px⟩ = −(1/2)⟨Qx, x⟩, for all x ∈ IRn . It results that ⟨−RAx, Rx⟩ > 0
for all x ∈ IRn \{0}. Setting z = Rx, we see that ⟨−RAR−1 z, z⟩ > 0 for all z ∈ IRn \{0}.
So −RAR−1 ∈ PIRn++ ⊂ PK̄++ ⊂ LK̄++ . All the conditions of Theorem 3.147 (part
(ii)) are satisfied and the trivial solution of (3.268)–(3.270) is asymptotically stable.
The results presented in the foregoing section are here recovered. In case G(s) is
positive real then Theorem 3.147 (part (i)) applies. As shown above (see Lemma
3.129) the equilibrium point is unique in this case.
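The symmetric square root R of P used in Example 3.150 can be computed from the eigendecomposition of P; a small 2 × 2 helper of our own sketches this:

```python
import math

def sqrtm_2x2_spd(P):
    """Symmetric square root R = R^T > 0 with R^2 = P, for a 2x2 SPD matrix P."""
    a, b, d = P[0][0], P[0][1], P[1][1]
    tr, det = a + d, a * d - b * b
    gap = math.sqrt(tr * tr / 4.0 - det)
    l1, l2 = tr / 2.0 + gap, tr / 2.0 - gap      # eigenvalues, l1 >= l2 > 0
    if abs(b) < 1e-14:                            # already diagonal
        return [[math.sqrt(a), 0.0], [0.0, math.sqrt(d)]]
    v = (b, l1 - a)                               # eigenvector for l1
    n = math.hypot(*v)
    c, s = v[0] / n, v[1] / n                     # rotation Q = [[c, -s], [s, c]]
    s1, s2 = math.sqrt(l1), math.sqrt(l2)
    # R = Q diag(sqrt(l1), sqrt(l2)) Q^T, written out entrywise:
    return [[s1 * c * c + s2 * s * s, (s1 - s2) * c * s],
            [(s1 - s2) * c * s, s1 * s * s + s2 * c * c]]
```

For general sizes one would diagonalize P = QΛQᵀ and take R = QΛ^{1/2}Qᵀ; the 2 × 2 case above just spells this out in closed form.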
Example 3.151 (PR electrical circuit) The following example is taken from [277].
Let us consider the circuit in Fig. 3.17 (R1 , R2 , R3 ≥ 0, L2 , L3 > 0). One has 0 ≤
−uD4 ⊥ x2 ≥ 0 and 0 ≤ −uD1 ⊥ −x3 + x2 ≥ 0, where uD4 and uD1 are the voltages
of the diodes. The dynamical equations are
ẋ1 (t) = x2 (t)
ẋ2 (t) = −(1/(L3 C4 )) x1 (t) − ((R1 + R3 )/L3 ) x2 (t) + (R1 /L3 ) x3 (t) + (1/L3 ) λ1 (t) + (1/L3 ) λ2 (t)    (3.285)
ẋ3 (t) = (R1 /L2 ) x2 (t) − ((R1 + R2 )/L2 ) x3 (t) − (1/L2 ) λ1 (t),
where x1 (·) is the time integral of the current across the capacitor, x2 (·) is the current
across the capacitor, and x3 (·) is the current across the inductor L2 and resistor
R2 , −λ1 is the voltage of the diode D1 , and −λ2 is the voltage of the diode D4 .
The system in (3.285) can be written compactly as the LCS: ẋ(t) = Ax(t) + Bλ(t),
0 ≤ λ(t) ⊥ y(t) = Cx(t) ≥ 0, with
A = [ 0, 1, 0 ; −1/(L3 C4 ), −(R1 + R3 )/L3 , R1 /L3 ; 0, R1 /L2 , −(R1 + R2 )/L2 ],
B = [ 0, 0 ; 1/L3 , 1/L3 ; −1/L2 , 0 ],
C = [ 0, 1, −1 ; 0, 1, 0 ].
Example 3.152 The circuit in Example 3.151 has a zero feedthrough matrix D.
Let us consider the diode bridge circuit depicted in Fig. 3.18, which is such that
D = −DT (hence D + DT = 0, and PB = C T from the Lur’e equations, despite D ≠ 0). Its
dynamics is given by [294, Sect. 5.2.4]28 :
A = [ 0, −1/c, 0 ; 1/L, 0, 0 ; 0, 0, −1/(R cf ) ],
B = [ 0, 0, −1/c, 1/c ; 0, 0, 0, 0 ; 1/cf , 0, 1/cf , 0 ],
C = [ 0, 0, 1 ; 0, 0, 0 ; −1, 0, 1 ; 1, 0, 0 ],    (3.287)
D = [ 0, −1, 0, 0 ; 1, 0, 1, −1 ; 0, −1, 0, 0 ; 0, 1, 0, 0 ],  x = (VL , iL , VR )T .
It follows from the above that an extension of the KYP Lemma matrix inequalities to
linear evolution variational inequalities is possible at the price of replacing positive
definiteness, by copositive definiteness of matrices. However, what remains unclear
is the link with frequency-domain conditions. In other words, we have shown that
if the triple (A, B, C) is PR (or SPR), then it satisfies the requirements for the evo-
lution variational inequality in (3.273) to possess a Lyapunov stable equilibrium.
Is the converse provable? The answer is certainly negative, as some of the above
examples show that the matrix A can be unstable (with eigenvalues with positive real
Fig. 3.18 Diode bridge circuit with source capacitor C (voltage VC ), inductor L (current iL ), filter capacitor CF and load resistor R (voltage VR ), diode currents iDF1 , iDF2 , iDR1 , iDR2
parts), while A ∈ LK++ (thus the corresponding evolution variational inequality has
an asymptotically stable fixed point). Extension of the Krasovskii–LaSalle’s invari-
ance principle to evolution variational inequalities has been considered in [277, 278],
and we present an invariance result in the next section. In Chap. 6, we shall exam-
ine second-order evolution variational inequalities, which arise in some problems of
mechanics with nonsmooth contact laws.
Let us establish an invariance result for the system in (3.256). For x0 ∈ K, we denote
by γ (x0 ) the orbit

γ (x0 ) = {x(τ ; t0 , x0 ); τ ≥ t0 }.

Theorem 3.153 ([281]) For each t ≥ t0 , the mapping K → K, x0 → x(t; t0 , x0 ),
is continuous.
Proof Let x0 ∈ K be given and let {x0,i } ⊂ K such that x0,i → x0 in IRn . Let us here
set x(t) ≜ x(t; t0 , x0 ) and xi (t) ≜ x(t; t0 , x0,i ). We know that

⟨ẋ(t) + F(x(t)), z − x(t)⟩ ≥ 0, for all z ∈ K, a.e. t ≥ t0 ,    (3.288)

and

⟨ẋi (t) + F(xi (t)), z − xi (t)⟩ ≥ 0, for all z ∈ K, a.e. t ≥ t0 .    (3.289)

Setting z = xi (t) in (3.288) yields

⟨ẋ(t) + F(x(t)), x(t) − xi (t)⟩ ≤ 0, a.e. t ≥ t0 ,    (3.290)

and setting z = x(t) in (3.289) yields

⟨ẋi (t) + F(xi (t)), xi (t) − x(t)⟩ ≤ 0, a.e. t ≥ t0 .    (3.291)
It results that

(1/2)(d/dt)||xi (t) − x(t)||2 ≤ ω||xi (t) − x(t)||2 − ⟨[F + ωIn ](xi (t)) − [F + ωIn ](x(t)), xi (t) − x(t)⟩,

and since F + ωIn is monotone,

(d/dt)||xi (t) − x(t)||2 ≤ 2ω||xi (t) − x(t)||2 , a.e. t ≥ t0 .    (3.292)

Using some Gronwall inequality, we get ||xi (t) − x(t)||2 ≤ exp(2ω(t − t0 )) ||x0,i − x0 ||2
for all t ≥ t0 , hence xi (t) → x(t) as i → +∞ and the claimed continuity follows.
and we denote by M the largest invariant subset of E. Then for each x0 ∈ K such
that γ (x0 ) ⊂ Ψ , we have
limτ →+∞ d (x(τ ; t0 , x0 ), M ) = 0.
Proof (1) Let us first remark that for x0 given in K, the set Λ(x0 ) is invariant.
Indeed, let z ∈ Λ(x0 ) be given. There exists {τi } ⊂ [t0 , +∞) such that τi → +∞ and
x(τi ; t0 , x0 ) → z. Let τ ≥ t0 be given. Using Theorem 3.153, we obtain x(τ ; t0 , z) =
limi→∞ x(τ ; t0 , x(τi ; t0 , x0 )). Then remarking from the uniqueness property of solu-
tions that x(τ ; t0 , x(τi ; t0 , x0 )) = x(τ − t0 + τi ; t0 , x0 ), we get x(τ ; t0 , z) = limi→∞
x(τ − t0 + τi ; t0 , x0 ). Thus, setting wi ≜ τ − t0 + τi , we see that wi ≥ t0 , wi → +∞
and x(wi ; t0 , x0 ) → x(τ ; t0 , z). It results that x(τ ; t0 , z) ∈ Λ(x0 ).
(2) Let x0 ∈ K such that γ (x0 ) ⊂ Ψ . We claim that there exists a constant k ∈ IR
such that
V (x) = k, for all x ∈ Λ(x0 ).
Indeed, let T > 0 be given. We define the mapping V ∗ : [t0 , +∞) → IR by the for-
mula V ∗ (t) ≜ V (x(t; t0 , x0 )). The function x(·) ≡ x(·; t0 , x0 ) is absolutely continuous
on [t0 , t0 + T ] and thus V ∗ (·) is a.e. strongly differentiable on [t0 , t0 + T ]. We have

(dV ∗ /dt)(t) = ⟨∇V (x(t)), ẋ(t)⟩, a.e. t ∈ [t0 , t0 + T ].
We know by assumption that x(t) ∈ K ∩ Ψ, for all t ≥ t0 , and

⟨ẋ(t) + F(x(t)), v − x(t)⟩ ≥ 0, for all v ∈ K, a.e. t ≥ t0 .    (3.294)

We claim that

⟨ẋ(t), ∇V (x(t))⟩ ≤ 0, a.e. t ≥ t0 .

If x(t) ∈ Int(K) then there exists ε > 0 such that x(t) − ε∇V (x(t)) ∈ K. Setting
v = x(t) − ε∇V (x(t)) in (3.294), we obtain
and thus V ∗ (·) is decreasing on [t0 , +∞). Moreover, Ψ is compact and thus V ∗ (·)
is bounded from below on [t0 , +∞). It results that
limτ →+∞ V (x(τ ; t0 , x0 )) = k,
for some k ∈ IR. Let y ∈ Λ(x0 ) be given. There exists {τi } ⊂ [t0 , +∞) such that
τi → +∞ and x(τi ; t0 , x0 ) → y. By continuity limi→+∞ V (x(τi ; t0 , x0 )) = V (y).
Therefore V (y) = k. Here y has been chosen arbitrarily in Λ(x0 ) and thus V (y) =
k, for all y ∈ Λ(x0 ).
(3) The set γ (x0 ) is bounded, thus Λ(x0 ) is nonempty and limτ →+∞ d (x(τ ; t0 , x0 ), Λ(x0 )) = 0.
Let us now check that Λ(x0 ) ⊂ E. We first note that Λ(x0 ) ⊂ γ (x0 ) ⊂ K ∩ Ψ =
K ∩ Ψ . We know from part (2) of this proof that there exists k ∈ IR such that V (x) =
k, for all x ∈ Λ(x0 ). Let z ∈ Λ(x0 ) be given. Using Part (1) of this proof, we see that
x(t; t0 , z) ∈ Λ(x0 ), for all t ≥ t0 , and thus V (x(t; t0 , z)) = k, for all t ≥ t0 .
It results that
(d/dt) V (x(t; t0 , z)) = 0, a.e. t ≥ t0 .    (3.297)
Setting x(·) ≡ x(·; t0 , z), we check as above that ⟨F(x(t; t0 , z)), ∇V (x(t; t0 , z))⟩ = 0,
a.e. t ≥ t0 . The mapping t → ⟨F(x(t; t0 , z)), ∇V (x(t; t0 , z))⟩ is continuous and thus,
taking the limit as t → t0 , we obtain ⟨F(z), ∇V (z)⟩ = 0. It results that z ∈ E. Finally Λ(x0 ) ⊂
M since Λ(x0 ) ⊂ E and Λ(x0 ) is invariant.
Some corollaries can be deduced from Theorem 3.154. We give them without proof.
Corollary 3.155 Suppose that K ⊆ Rn is a nonempty closed convex set, and that
F : Rn → Rn is continuous, such that F(·) + κIn is monotone for some κ ≥ 0. Let
also ⟨F(0), h⟩ ≥ 0 for all h ∈ K. Let V ∈ C 1 (IRn ; IR) be a function such that
1. x − ∇V (x) ∈ K, for all x ∈ bd(K),
2. ⟨F(x), ∇V (x)⟩ ≥ 0, for all x ∈ K,
3. V (x) → +∞ as ||x|| → +∞, x ∈ K.
Then limτ →+∞ d (x(τ ; t0 , x0 ), M ) = 0.
The condition ⟨F(0), h⟩ ≥ 0 for all h ∈ K guarantees that the solution x(t;
t0 , 0) = 0 for all t ≥ t0 , and that 0 ∈ {z ∈ K | ⟨F(z), v − z⟩ ≥ 0, for all v ∈ K},
which is the set of fixed points (stationary solutions) of (3.262)–(3.263).
Corollary 3.156 Suppose that K ⊆ Rn is a nonempty closed convex set, and that
F : Rn → Rn is continuous, such that F(·) + κIn is monotone for some κ ≥ 0. Let
also ⟨F(0), h⟩ ≥ 0 for all h ∈ K. Suppose that there exists V ∈ C 1 (IRn ; IR) such that
1. V (x) ≥ a(||x||), x ∈ K,
These results are an extension of Theorems 3.142 and 3.143, where one retrieves sim-
ilar ingredients. Let us end with a result that is useful for the material in Sect. 5.5.3,
about stabilization of linear evolution variational inequalities by static output feed-
back. We still consider K ⊂ IRn to be a closed convex set such that 0 ∈ K. Let
A ∈ IRn×n be a given matrix. We consider the above evolution variational inequality,
with F(·) ≡ A·, i.e., find x ∈ C 0 ([t0 , ∞); IRn ) such that ẋ ∈ L∞,e (t0 , +∞; IRn ) and
Corollary 3.157 Suppose that there exists a matrix G ∈ IRn×n such that
1. inf x∈K\{0} ⟨Gx, x⟩/||x||2 > 0,
2. ⟨Ax, (G + G T )x⟩ ≥ 0, for all x ∈ K,
3. x ∈ bd(K) ⇒ (In − (G + G T ))x ∈ K,
4. E(K, (G + G T )A) = {0}.
Then the trivial solution of (3.299)–(3.300) is (a) the unique stationary solution of
(3.299)–(3.300), (b) asymptotically stable, and (c) globally attractive.
with ϕ(·) a proper convex lower semicontinuous function, and we impose Cx(0) ∈
dom((D + ∂ϕ −1 )−1 ). The well-posedness of such differential inclusions is analyzed
in [128, 278, 284], and we take it for granted here that uniqueness of AC solutions
holds for all admissible initial data. This is a class of systems more general than
(3.256), in the sense that we allow for a nonzero feedthrough matrix D. It is shown
in [278, Sect. 5] that the invariance results hold when (A, B, C, D) is passive (i.e.,
the Lur’e equations are satisfied with negative semi-definiteness) and ϕ̄(x) ≥ ϕ̄(0),
with ϕ̄(·) defined such that (∂ϕ)−1 (−λ) = ∂ ϕ̄(λ) for all λ.
There are several ways to generalize Lemmas 3.125 and 3.129. Let us motivate one
of them, by considering the set-valued system:
ẋ(t) − Ax(t) − Eu(t) ∈ −BNK (y(t))
y(t) = Cx(t) + Fu(t)    (3.302)
y(t) ∈ K, t ≥ t0 .
Compared with (3.242), one sees that the system in (3.302) is acted upon by the
control signal u(·) at two places: in the single-valued part through E and in the set-
valued part through F. Assume that one wants to design a closed-loop system with
static output feedback, such that some exogenous reference r(t) is tracked. To this
aim, one first sets u = −Gy + r(t), for some matrix gain G ∈ Rm×m . First, one has
with obvious definitions of the matrices Â, B̂, Ĉ, F̂. Consequently, one faces a
new type of differential inclusion. Consider the indicator function ψK (Ĉx(t) +
F̂r(t)), and denote the affine mapping A : x → Ĉx + F̂r(t), so that ψK (Ĉx(t) +
F̂r(t)) = (ψK ◦ A )(x) = ψK(t) (x), with K(t) = {x ∈ Rn | Ĉx + F̂r(t) ∈ K}.
Using the chain rule of convex analysis in Proposition 3.122, it follows that
∂(ψK ◦ A )(x) = Ĉ T ∂ψK (Ĉx + F̂r(t)), hence ∂ψK(t) (x) = NK(t) (x) = Ĉ T NK (Ĉx + F̂r(t)). Assume
now that there exists P = P T ≻ 0 such that P B̂ = Ĉ T , with R2 = P, R = RT (which
is the same as assuming that the LMI BT P = (Im + FG)−1 C has a solution
P = P T ≻ 0 and G ∈ Rm×m ). Doing the state space change z = Rx, one can trans-
form (3.304) into
K(t). Roughly speaking, one calculates the variation var K (·) of K : I ⇒ Rn , for an
interval I , by replacing the Euclidean norm of vectors, by the Hausdorff distance
between sets. If var K (·) is locally AC (respectively locally RCBV), then K(·) is
locally AC (respectively locally RCBV). An intermediate step consists in linking
the properties of r(·) and of K(·). This is done in [258, Proposition 3.2], where a
constraint qualification Im(C) − Rm+ = Rm is assumed to hold. Then a result in [318]
is used, which allows one to state that the local absolute continuity of r(·) (respectively, its
local RCBV property) implies that of K(·). In the locally AC case, the existence and uniqueness
follow directly from [316, Theorem 1]. In the locally RCBV case, existence follows
from [317, Theorem 3.1], and the uniqueness is shown in [258, Theorem 3.5]. The
proof of uniqueness is based on a standard argument and Gronwall’s Lemma. One
should be aware of the fact that the RCBV case allows for state jumps, so that the
differential inclusion (3.305) has to be embedded into measure differential inclusions
(inclusions of measures, instead of functions).
Remark 3.159 Let us recall that we could have started from nonautonomous linear
complementarity systems as in (3.252), to get (3.305).
Let us briefly introduce a second extension of Lemmas 3.125 and 3.129. In (3.242),
one can replace the set-valued operator ∂ϕ(·), by a general maximal monotone opera-
tor. To that aim, let us consider a set-valued operator M : Rm ⇒ Rm which satisfies
a hypomonotonicity-like property: there exists a matrix K such that for all z1 , z2 ,
ζ1 ∈ M (z1 ), ζ2 ∈ M (z2 ), one has ⟨z1 − z2 , ζ1 − ζ2 ⟩ ≥ −⟨z1 − z2 , K(z1 − z2 )⟩. The
Lur’e set-valued system is given as

ẋ(t) = Ax(t) − Bλ(t), y(t) = Cx(t), λ(t) ∈ M (y(t)),

or, equivalently,

ẋ(t) = (A + BKC)x(t) − Bμ(t), y(t) = Cx(t), μ(t) = λ(t) + Ky(t) ∈ M̄ (y(t)) ≜ (M + K)(y(t)).    (3.306)
Then we have the following.
Theorem 3.160 Let us assume that (i) (A + BKC, B, C) is passive with stor-
age function V (x) = (1/2) x T Px, P = P T ≻ 0; (ii) M (·) is such that M̄ (·) is maxi-
mal monotone; (iii) Im(C) ∩ rint(Im(M̄ )) ≠ ∅. Then for each x(0) = x0 such that
Cx0 ∈ cl(Im(M̄ )), the Lur’e set-valued system in (3.306) possesses a unique locally
absolutely continuous solution.
The proof follows from [128, Theorem 3]. Condition (i) is less stringent than
SPRness; however, nothing has been said about the stability in Theorem 3.160. The
conditions which guarantee that a system can be made PR or SPR via a static output
feedback can be deduced from various results, see Sect. 2.14.3, see also Theorem
3.61 and [126, Theorem 4.1] [99, Proposition 8.1].
Apart from introducing external inputs/disturbances, or considering non-
monotone set-valued nonlinearities, various directions of extension have been inves-
tigated:
31 It is noteworthy that such kind of “implicit” feedback structures are common in some control areas
like antiwindup systems, though usually analyzing single-valued nonlinearities [319, Eqs. (3)–(4)].
In the context of circuits with ideal components, the implicit structure stems from modeling.
• Design and analyze velocity observers for nonlinear Lagrangian systems, yielding
a particular type of first-order Moreau’s sweeping process [285].
• Design and analyze state observers for passive linear complementarity systems
[325].
• Study the output feedback control of Lur’e systems with “relay”-type set-valued
nonlinearities [312, 326].
• Study the time-discretization of some of the above set-valued Lur’e systems [285,
320, 323, 327], see Sect. 7.5.2.
• Analyze the infinite-dimensional case [129, 266], see Sect. 4.8.2.
• Analyze the robustness (i.e., preservation of the Lyapunov stability of the equilib-
rium) under uncertainties in A and B [129, Sect. 5].
• Another kind of nonsmooth characteristic, which does not fit with the maximal
monotone static nonlinearities, can be found in [328] where the passivity of an
oscillator subject to a Preisach hysteresis is shown. The absolute stability of sys-
tems with various types of hysteresis nonlinearities is also treated in [329–333].
• The so-called Zames–Falb multipliers method is employed in the context of inte-
gral quadratic constraints in [205] to extend Lemmas 3.125 and 3.129 and obtain
less conservative stability criteria.
This list is by no means exhaustive; in particular, we have skipped many extensions
of the first-order sweeping process (which, however, would bring us too far away
from the Lur’e problem and dissipative systems). The books [306, 311] are also
worth reading.
where the set K(t) is assumed to be nonempty, closed, and r-prox-regular for each
t ≥ t0 .
Thus, it follows from the definition that K is an r-prox-regular set if and only if, for
each x ∈ K and each w ∈ NK (x), w ≠ 0, we have

⟨w/||w||, x − x′ ⟩ ≥ −(1/(2r)) ||x − x′ ||2 , for all x′ ∈ K.    (3.308)
One sees that prox-regular sets possess a hypomonotonicity property. In the above
inequality, if we let r → ∞, then the right-hand side becomes zero, and we recover
the normal cone at x ∈ K in the classical sense of convex analysis. For that reason,
the case r → ∞ corresponds to K(t) being convex for each t. In the next developments,
convex sets will be treated as a particular case of r-prox-regular sets by taking r → ∞.
We consider time-varying sets K(t); hence, the results relax most of the foregoing
results in two directions: convexity and time-invariance. In (3.307), the normal cone
has to be given a rigorous meaning; indeed, convex analysis is no longer sufficient.
Usually, one defines the normal cone in the sense of Clarke or Fréchet; however, this is
outside the scope of this section. We also take it for granted that a unique AC solution
of (3.307) exists for all initial conditions satisfying Cx(t0 ) ∈ K(t0 ) [283, Theorem
2.9]. Before stating the main result, we need to clarify the “speed” of variation of the
sets K(t). This is done as follows. Let us consider set-valued maps K : [t0 , ∞) ⇒ IRl ,
for some fixed t0 ∈ R. The variation of K(·) over an interval [t0 , t], denoted by vK (t),
is obtained by replacing |f (si ) − f (si−1 )| in Definition 6.63, with dH (K(si ), K(si−1 ))
in the definition of the variation of f (·), that is,
vK (t) ≜ sup { Σi=1,...,k dH (K(si ), K(si−1 )) | t0 = s0 < s1 < · · · < sk = t },
where the supremum is taken over the set of all partitions of [t0, t], and the Hausdorff distance between two sets K and K′ is defined as

dH(K, K′) ≜ max{ sup_{x∈K′} d(x, K), sup_{x∈K} d(x, K′) }. (3.309)
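For finite sets of points, the Hausdorff distance (3.309) can be computed directly; the following sketch (with illustrative sets of our own choosing, not data from the text) implements dH in numpy.

```python
import numpy as np

def hausdorff(K1, K2):
    """Hausdorff distance (3.309) between two finite sets of points.

    K1, K2: arrays of shape (n1, l) and (n2, l), rows are points.
    d(x, K) is the point-to-set distance min_{y in K} ||x - y||.
    """
    # pairwise Euclidean distances, shape (n1, n2)
    D = np.linalg.norm(K1[:, None, :] - K2[None, :, :], axis=2)
    # max{ sup_{x in K2} d(x, K1), sup_{x in K1} d(x, K2) }
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# two translated copies of the same square: dH equals the translation norm
K = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(hausdorff(K, K + np.array([0.3, 0.4])))  # ~ 0.5, the translation norm
```

Sampling the sets K(si) at partition points and summing such distances gives a numerical approximation of the variation vK(t).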
We shall also need the following technical lemmas about prox-regular sets.
Lemma 3.162 ([283, Lemma 2.7]) Consider a nonempty, closed, r-prox-regular set K ⊂ Rᵐ, r > 0, and a linear map C : Rⁿ → Rᵐ, such that K is in the range space of C. Then, the set K′ ≜ C⁻¹(K) is uniformly r′-prox-regular with r′ ≜ rσmin(C)/||C||².
and:
Lemma 3.163 ([283, Lemma 2.8]) For a multivalued function K : [t0, ∞) ⇒ Im(C), assume that vK(·) is locally absolutely continuous. Let K′(t) ≜ C⁻¹(K(t)); then vK′(·) is also locally absolutely continuous and, furthermore, v̇K′(t) ≤ (1/σmin(C)) v̇K(t) for Lebesgue-almost all t ∈ [t0, ∞).
222 3 Kalman–Yakubovich–Popov Lemma
Theorem 3.164 ([283, Theorem 3.2]) Consider the differential inclusion in (3.307), and assume that: (i) there exists a constant r > 0 such that, for each t ∈ [t0, ∞), K(t) is a nonempty, closed, and r-prox-regular set; (ii) the function vK(·) : [t0, ∞) → R+ is locally AC and |v̇K(t)| is bounded by v for all t outside a set of Lebesgue measure zero, i.e., ess sup_{t≥t0} |v̇K(t)| = v; (iii) Cx(t0) ∈ K(t0); (iv) K(t) is contained in the image space of C for all t ≥ t0; (v) let K′ ≜ C⁻¹(K), and let the matrix C satisfy: for each z ∈ K′ and w ∈ NK(Cz), Cᵀw = 0 only if w = 0, or equivalently ker(Cᵀ) ∩ NK(Cz) = {0} for all z ∈ K′. Suppose now that there exists P = Pᵀ ≻ 0 that satisfies the following for some θ > 0:

AᵀP + PA ≼ −θP (3.310)

PB = Cᵀ. (3.311)

Define

Rρ ≜ {x ∈ Rⁿ | xᵀPx ≤ ρ²},  ρ ≜ βθr/(b ||RAR⁻¹||), (3.312)

where R = Rᵀ ≻ 0, P = R², C̄ ≜ CR⁻¹, and b ≜ ||C̄||²/σmin(C̄). If θ is large enough that

(1 − β)θ > ε + (b/(rσmin(C̄))) v, (3.313)

and 0 ∈ K(t) for all t ≥ t0, then system (3.307) is asymptotically stable and the basin of attraction contains the set Rρ ∩ C⁻¹(K(t0)).
Before passing to the proof, let us make some comments about the various assumptions: (ii) restricts the "velocity" with which the sets K(t) move; (iii) prevents state re-initializations, so that the solutions are AC; without (iv) the problem would be void; (v) is a kind of constraint qualification which assures that the chain rule from convex analysis applies to the prox-regular case, i.e., for each z ∈ Rⁿ and v = Cz, it holds that NK′(z) ≜ {Cᵀw | w ∈ NK(v)} = CᵀNK(Cz) [283, Lemma 2.4]. Finally, (3.310) and (3.311) mean that the triplet (A, B, C) is strictly passive. Matrix norms are induced norms.
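Conditions (3.310) and (3.311) are straightforward to verify numerically for a given candidate P; the sketch below does so for an illustrative triplet (A, B, C) of our own choosing (not data from the text), testing the matrix inequality through the eigenvalues of the symmetric part.

```python
import numpy as np

def is_strictly_passive(A, B, C, P, theta, tol=1e-9):
    """Check (3.310)-(3.311): A^T P + P A <= -theta P and P B = C^T."""
    M = A.T @ P + P @ A + theta * P          # must be negative semidefinite
    lmi_ok = np.max(np.linalg.eigvalsh((M + M.T) / 2)) <= tol
    io_ok = np.allclose(P @ B, C.T)
    return lmi_ok and io_ok

# illustrative data (not from the text): a stable diagonal A with P = I
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
P = np.eye(2)
C = (P @ B).T                                # enforces P B = C^T by construction
print(is_strictly_passive(A, B, C, P, theta=1.0))  # True
```

For this A, the largest admissible decay rate is θ = 2, so the same test fails for θ = 3.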
where K′(t) ≜ {z ∈ Rⁿ | CR⁻¹z ∈ K(t)} is r′-prox-regular with r′ = rσmin(C̄)/||C̄||², due to Lemma 3.162. From the basic assumption that our system has unique AC solutions, it follows that (3.315) admits a unique locally absolutely continuous solution over [t0, ∞). It also follows from [283, Theorem 2.9] that ||ż(t) + RAR⁻¹z(t)|| ≤ ||RAR⁻¹z(t)|| + v̇K′(t) for almost all t ∈ [t0, +∞).
Step 2: Consider the Lyapunov function V : Rⁿ → R+ defined as V(z) = zᵀz; then V(·) is continuously differentiable and its derivative along the trajectories of (3.315) satisfies, for almost all t ∈ [t0, ∞):

V̇(z(t)) ≤ −θ z(t)ᵀz(t) + (1/r′)(||RAR⁻¹z(t)|| + v̇K′(t)) ||z(t)||²
 ≤ −θ ||z(t)||² + (b/r)(||RAR⁻¹|| · ||z(t)|| + (1/σmin(C̄)) |v̇K(t)|) ||z(t)||²
 ≤ −(θ − bv/(rσmin(C̄))) ||z(t)||² + (b/r) ||RAR⁻¹|| · ||z(t)||³
 ≤ −(ε + βθ) ||z(t)||² + (b/r) ||RAR⁻¹|| · ||z(t)||³, (3.317)
such time for a fixed δ. Then, for every t in a neighborhood of t̄, it holds that V(z(t)) ≤ ρ² + r²ε²/(4b²||RAR⁻¹||²), and hence ||z(t)|| ≤ ρ + rε/(2b||RAR⁻¹||), which in turn implies

V̇(z(t)) ≤ −(ε/2) ||z(t)||²

for almost all t in a neighborhood of t̄. It then follows that there exists t ∈ (t0, t̄) such that V(z(t)) > V(z(t̄)), which contradicts the minimality of t̄.
Step 4: For x(t0) ∈ C⁻¹(K(t0)) ∩ Rρ, it follows from the previous step that ||z(t)|| ≤ ρ for all t ≥ t0, and for almost all t ≥ t0, (3.317) yields V̇(z(t)) ≤ −ε V(z(t)). By the comparison lemma and integration, V(z(t)) ≤ e^{−ε(t−t0)} V(z(t0)) for t ≥ t0, where z(·) is the solution of system (3.315) with initial condition R⁻¹z(t0) ∈ C⁻¹(K(t0)) ∩ Rρ. The foregoing relation guarantees that (3.315) is stable in the sense of Lyapunov, and also that lim_{t→∞} z(t) = 0; hence, (3.315) is asymptotically stable. The matrix P being positive definite guarantees that R is invertible, so that asymptotic stability is preserved under the proposed change of coordinates, and the basin of attraction of system (3.307) contains the set Rρ as claimed in the theorem statement.
One sees that conditions (3.310) and (3.311) state the strict state passivity of the
triple (A, B, C) (take V (x) = xT Px, differentiate it along the system’s trajectories,
then integrate between 0 and t to obtain (4.44) with S (x) = θ xT Px), which by
Theorem 4.73 and under a minimality assumption is equivalent to the SPRness of
the associated transfer matrix.
Lemma 3.166 ([273, Lemma 4.1]) The following statements are equivalent for any closed set K ⊂ Rⁿ:
1. K is r′-prox-regular for any r′ < r.
2. There exists a maximal monotone operator M(·) such that NK(x) ∩ B₀(m) + (m/r)x ⊂ M(x) ⊂ NK(x) + (m/r)x for all x ∈ K.
Corollary 3.167 ([273, Corollary 4.1]) Let K be r′-prox-regular for any r′ < r. An AC function is a solution of the differential inclusion ẋ(t) ∈ f(x(t)) − NK(x(t)), x(0) ∈ K, if and only if it is the (unique) solution of the differential inclusion, for some m > 0: ẋ(t) ∈ f(x(t)) + (m/r)x(t) − M(x(t)), x(0) ∈ K.
Briefly, Corollary 3.167 says that one can "transform" the normal cone to a prox-regular set (which yields a non-monotone static feedback loop) into a maximal monotone set-valued mapping. The price to pay is the additional term (m/r)x in the vector field, which takes away some passivity from the single-valued subsystem. This places us in a loop transformation similar to the one in Fig. 3.16.
The input/output constraint PB = Cᵀ is used many times throughout the book and is shown to be quite useful for the analysis of set-valued Lur'e systems. As alluded to above, this is closely related to the relative degree r ∈ Rᵐ of the system. In the SISO case m = 1, the Lur'e equations with D = 0 imply PB = Cᵀ, so that the Markov parameter CB = BᵀPB > 0, and r = 1. In the MIMO case, D + Dᵀ = 0 implies CB ≽ 0 (CB ≻ 0 if B has full column rank m), and the associated transfer function has a total index equal to 1 (see Proposition 2.71). To reinforce this idea, let us report the following example from [334], where x(t) ∈ R³, λ(t) ∈ R:
ẋ(t) = [0 1 0; 0 0 1; 0 0 0] x(t) − [0; 0; 1] λ(t)
λ(t) ∈ sgn(y(t))        (3.318)
y(t) = (1 0 0) x(t).
Theorem 3.168 ([334, Theorems 1 and 3]) The Lur'e dynamical system in (3.318) has a unique analytic solution (x, λ) on [0, ε), for some ε > 0 and for any initial condition x(0) = x0. Let x0 = 0; then there exist infinitely many solutions in the sense of Filippov.
Solutions in the sense of Filippov are AC functions such that ẋ(t) satisfies the inclusion in (3.318) almost everywhere. The Filippov solutions constructed in [334] start from the origin with a right accumulation of switching times.
In this section, we investigate how the KYP Lemma may be extended to discrete-time systems of the form

x(k + 1) = Ax(k) + Bu(k)
y(k) = Cx(k) + Du(k).    (3.319)
It is noteworthy that the condition H(z) + Hᵀ(z̄) ≻ 0 in |z| > 1 implies that Hᵀ(e^{−jθ}) + H(e^{jθ}) ≻ 0 for all real θ such that no element of H(z) has a pole at z = e^{jθ}. The SPR condition is also written in the literature as H(e^{jθ}) + Hᵀ(e^{−jθ}) ≻ 0 for θ ∈ [0, 2π) [219], or as: H(βz) is PR for some 0 < β < 1 [336].
3.15 Discrete-Time Systems 227
Lemma 3.173 ([96, Lemma 6]) Let (A, B, C, D) be a realization (not necessarily minimal) of H(z) ∈ Cᵐˣᵐ, and let Kc = (B AB … A^{n−1}B) be Kalman's controllability matrix, where A ∈ Rⁿˣⁿ. Then H(z) is positive real if and only if there exist real matrices L and W, and P = Pᵀ with KcᵀPKc ≽ 0, such that

Kcᵀ(AᵀPA − P + LᵀL)Kc = 0
Kcᵀ(AᵀPB − Cᵀ + LᵀW) = 0      (3.321)
D + Dᵀ − BᵀPB = WᵀW.
32 We already met such full-rank (regularity) conditions in continuous time; see Sect. 2.13.
33 The proof of equivalence, taken from [62], is given in Sect. A.10.
The similarity with the Lur’e equations in (3.115) and (3.117), which apply to
a non-minimal version of the continuous-time KYP Lemma, is worth noting. The
unobservable and/or uncontrollable states can be stable or unstable, without affecting
the lemma.
Similar to their continuous-time counterpart, the KYP Lemma conditions can be written as an LMI, using, for instance, Proposition A.67. One immediately notices from (3.320) that necessarily D ≠ 0, for otherwise WᵀW = −BᵀPB (and obviously we assume that B ≠ 0). If B has full rank m, then D must have full rank m so that D + Dᵀ ≻ 0. Therefore, a positive real discrete-time system with full-rank input matrix has relative degree 0. Consequently, in the monovariable case, the relative degree is always zero. However, it is worth noting that this is true for passive systems only, i.e., systems which are dissipative with respect to the supply rate w(u, y) = uᵀy. If a more general supply rate is used, e.g., w(u, y) = uᵀRu + 2uᵀSy + yᵀQy, then the relative degree may not be zero.
When W = 0 and L = 0 in (3.320), the system is said to be lossless. Then

(1/2) xᵀ(k + 1)Px(k + 1) − (1/2) xᵀ(k)Px(k) = yᵀ(k)u(k) (3.322)

for all u(k) and k ≥ 0, which in turn is equivalent to

(1/2) xᵀ(k + 1)Px(k + 1) − (1/2) xᵀ(0)Px(0) = Σ_{i=0}^{k} yᵀ(i)u(i) (3.323)

for all x(0) and k ≥ 0. Let us now formulate a KYP Lemma for SPR functions.
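The lossless balance (3.322) is easy to verify numerically. In the sketch below (an illustrative construction of our own, not an example from the text), A is orthogonal and P = In, so AᵀPA = P; choosing C = BᵀPA and D such that D + Dᵀ = BᵀPB makes the discrete-time system lossless.

```python
import numpy as np

# illustrative lossless discrete-time system (not from the text):
# A orthogonal and P = I solve A^T P A = P; C, D follow with L = W = 0
th = 0.3
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
B = np.array([[1.0], [0.0]])
C = B.T @ A                     # C = B^T P A with P = I
D = 0.5 * (B.T @ B)             # so that D + D^T = B^T P B

rng = np.random.default_rng(0)
x = np.array([[1.0], [2.0]])
for _ in range(50):
    u = rng.standard_normal((1, 1))
    x_next = A @ x + B @ u
    y = C @ x + D @ u
    # storage increment equals the supplied energy y^T u, cf. (3.322)
    lhs = 0.5 * (x_next.T @ x_next - x.T @ x)
    assert np.allclose(lhs, y.T @ u)
    x = x_next
print("lossless balance (3.322) verified")
```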
Lemma 3.174 ([339, 341]) Let (A, B, C, D) be a minimal realization of H(z). The transfer matrix H(z) is SPR if and only if there exist matrices P = Pᵀ ≻ 0, L and W such that

P − AᵀPA = LᵀL
−BᵀPA + C = WᵀL      (3.324)
D + Dᵀ − BᵀPB = WᵀW

is satisfied, the pair (A, L) is observable, and rank(Ĥ(z)) = m for z = e^{jω}, ω ∈ R, where (A, B, L, W) is a minimal realization of Ĥ(z).
Similar to the continuous-time case, PR systems possess stable zeros. Let us assume that D has full rank. Then the zero dynamics is given by x(k + 1) = (A − BD⁻¹C)x(k) ≜ A₀x(k), which is exactly the dynamics on the subspace y(k) = 0. Let us recall that passivity means that the system satisfies V(x(k + 1)) − V(x(k)) ≤ uᵀ(k)y(k) along its trajectories.

Proposition 3.175 ([342]) Let the system (3.319) be passive. Then the zero dynamics exists and is passive.
Proof One has V(A₀x) − V(x) = xᵀMx, with M = (A − BD⁻¹C)ᵀP(A − BD⁻¹C) − P. If M ≼ 0, then the zero dynamics is stable. Using the second equality of the KYP Lemma conditions, one obtains an expression for M. Using the equality CᵀD⁻ᵀ(Dᵀ + D)D⁻¹C = Cᵀ[D⁻¹ + D⁻ᵀ]C and the third equality of the KYP Lemma conditions (3.320), one gets the result.
Positive real discrete-time transfer functions have proved to be quite useful for iden-
tification; see [343–345]. In particular, the so-called Landau’s scheme of recursive
identification [344] is based on PRness. Further works can be found in [219, 246,
342, 346–352]. Infinite-dimensional discrete-time systems and the KYP Lemma
extension have been studied in [353]. The time-varying case received attention in
[65, 354, 355]. In relation to the relative degree zero property pointed out above, let
us state the following result.
in [357, 358] for the design of repetitive controllers34 and in [362] for haptic inter-
faces. The discrete passivity inequality has also been used in the setting of time-
discretized differential inclusions where it proves to be a crucial property for the
behavior of the numerical algorithms [51] (see also [363] in the nonlinear frame-
work of Lagrangian systems).
The Tsypkin criterion may be considered as the extension of the Popov and circle criteria to discrete-time systems. It was introduced in [364–368]. For a discrete-time system of the form

x(k + 1) = Ax(k) − Bφ(Cx, k), (3.330)

Tsypkin proved absolute stability (i.e., global asymptotic stability for all φ(·, ·) in the sector (0, κ)) provided the poles of the transfer function H(z) = C(zIn − A)⁻¹B lie inside the unit disc and

Re[H(z)] + 1/κ ≥ 0 for |z| = 1. (3.331)

This is the discrete-time analog of the circle criterion. When φ(·) is time-invariant and monotone, absolute stability holds if there exists a constant δ ≥ 0 such that

Re[(1 + δ(1 − z⁻¹))H(z)] + 1/κ ≥ 0 for all |z| = 1. (3.332)

This is the discrete-time analog of the Popov criterion.
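Condition (3.331) can be checked by sampling the unit circle. The sketch below does this for a hypothetical first-order example H(z) = 1/(z − 0.5) (not an example from the text); on |z| = 1, Re[H] attains its minimum −2/3 at z = −1, which determines the admissible sector bound.

```python
import numpy as np

def tsypkin_circle_test(A, B, C, kappa, n_points=2000):
    """Check Re[H(z)] + 1/kappa >= 0 for z = e^{j theta} on the unit circle (3.331)."""
    n = A.shape[0]
    for th in np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False):
        z = np.exp(1j * th)
        H = (C @ np.linalg.solve(z * np.eye(n) - A, B)).item()
        if H.real + 1.0 / kappa < 0.0:
            return False
    return True

# hypothetical scalar example: H(z) = 1/(z - 0.5), stable pole at 0.5
A = np.array([[0.5]])
B = np.array([[1.0]])
C = np.array([[1.0]])
# min Re H on |z| = 1 is H(-1) = -2/3, so (3.331) needs 1/kappa >= 2/3
print(tsypkin_circle_test(A, B, C, kappa=1.4))  # True:  1/1.4 > 2/3
print(tsypkin_circle_test(A, B, C, kappa=1.6))  # False: 1/1.6 < 2/3
```

A grid test of this kind is only a necessary check, of course; a certificate for all |z| = 1 requires an LMI formulation via the KYP Lemma.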
34 It seems that the first proof of passivity for repetitive and learning controllers for robotic manipulators was given in [357], which analyzed the schemes proposed in [359–361].
0 < (φi(σ) − φi(σ̂))/(σ − σ̂), σ ∈ R, σ̂ ∈ R, σ ≠ σ̂, i = 1, …, m}.

When m = 1, we recover the usual sector condition 0 < φ(y)y < My². We also define the matrices

Aa = [A 0_{n×m}; C 0_m],  Ba = [B; 0_m],  Ca = (C  −Im),  S = (C  0_m),

where yi = Ci x and Ci denotes the ith row of C, is a Lyapunov function for the negative feedback interconnection of H(z) and the nonlinearity φ(·), whose fixed point is globally asymptotically stable for all φ(·) ∈ Φ.
Further reading: Further details on the Tsypkin criterion can be found in [369] and in the special issue [370]. As with its continuous-time counterpart, reducing the conservativeness of Tsypkin's criterion has been an active subject of research; see, e.g., the Jury–Lee criteria [371–373] (sometimes considered as the discrete-time counterpart of the Popov criterion), and the more recent results in [374, 375] and [376–378] using LMIs. Comparisons between several criteria are made in [377, 378] on particular examples, in terms of the maximum allowed nonlinearity sector that preserves stability. It is shown that the criteria proposed in [377, 378] significantly supersede the previous ones, though it is also noticed that they could certainly be improved further using Zames–Falb multipliers. Nonlinearities with sector and slope restrictions are considered in [379], where less conservative results are derived by a proper choice of Lyapunov functional, and comparisons are made with other criteria.
In this section, we are interested in a problem of high practical interest: given a PR or SPR continuous-time system, is PRness preserved after a ZOH time-discretization? The material is taken from De La Sen [380]. Let us start by recalling some facts and definitions.
Consider the transfer function H(s) = N(s)/M(s) = H₁(s) + d, where the relative degree of H(s) is 0, d ∈ R, and H₁(s) = N₁(s)/M(s) is strictly proper. The system is assumed to be stabilizable and detectable, i.e., N(s) = N₁(s) + dM(s) and M(s) may possess common factors in the complex half-plane Re[s] < 0. Let (A, B, C, D) be a state representation of H(s). One has M(s) = det(sIn − A) and N(s) = CAdj(sIn − A)B + D det(sIn − A), where Adj(·) is the adjoint matrix of the square matrix (·). If M(s) and N(s) are coprime, then (A, B, C, D) is minimal (controllable and observable); by assumption, if they are not coprime, the uncontrollable or unobservable modes are stable.
We assume that the system is sampled with a zero-order-hold device of sampling period Ts = h seconds, and we denote as usual tk = kh, xk = x(tk), and so on. When discretized, the continuous-time system (A, B, C, D) becomes the discrete-time system

xk+1 = Φxk + Γuk
yk+1 = Cxk+1 + Duk+1      (3.338)

for all k ≥ 0, k ∈ N, with Φ = exp(hA) and Γ = ∫₀ʰ exp(A(h − τ))dτ B. The discrete transfer function from u(z) to y(z), z ∈ C, is given by

G(z) = Nd(z)/Md(z) = Z[((1 − exp(−hs))/s) H(s)] = G₁(z) + D,  G₁(z) = N₁d(z)/Md(z), (3.339)
where Adj(zIn − Φ) = Σ_{i=0}^{n−1} Σ_{k=0}^{n−1−i} sk z^{n−1−i−k} Φⁱ, n is the dimension of the state vector x, Nd(z) = N₁d(z) + DMd(z), the degree of the polynomial N₁d is n − 1, and the degree of Nd and Md is n. It is well known that the poles of G(z) and of G₁(z) are equal to exp(λA h) for each eigenvalue λA of the matrix A, so that stability is preserved through discretization. However, the same does not hold for the zeros of G₁(z), which depend on the zeros and the poles of H₁(s) and on the sampling period Ts: it cannot be guaranteed that these zeros lie in |z| < 1. It is therefore clear that the preservation of PRness imposes further conditions.
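This migration of zeros can be observed on the classical triple-integrator example H(s) = 1/s³ (our illustration, not the text's): under ZOH sampling, the discrete zeros are the roots of the Euler–Frobenius polynomial z² + 4z + 1, one of which lies outside the unit circle.

```python
import numpy as np

h = 1.0
# triple integrator: A is nilpotent, so the matrix exponential series is finite
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])

Phi = np.eye(3) + h * A + (h * A) @ (h * A) / 2.0                  # exp(hA)
Gam = (h * np.eye(3) + h**2 * A / 2.0 + h**3 * (A @ A) / 6.0) @ B  # int_0^h e^{As} ds B

# numerator of G1(z) = C (zI - Phi)^{-1} Gam, using
# det(zI - Phi + Gam C) = det(zI - Phi) (1 + G1(z))
num = np.poly(Phi - Gam @ C) - np.poly(Phi)
num = np.trim_zeros(np.where(np.abs(num) < 1e-9, 0.0, num), 'f')
zeros = np.roots(num)
print(np.sort(np.abs(zeros)))  # one sampling zero has modulus 2 + sqrt(3) > 1
```

The determinant identity used here is a standard way to extract the SISO numerator from a state-space realization without symbolic computation.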
Let us denote by H₀ the set of stable transfer functions, possibly critically stable (i.e., with pairs of purely imaginary conjugate poles), and by G₁ the set of stable discrete transfer functions, possibly critically stable.
• (ii) If (i) holds, then there is a constant D̄ ≽ 0 such that, for all D ≽ D̄, G(z) is (discrete) positive real and Gz(w) is (continuous) positive real.
It is interesting to note that (ii) is directly related to the comment made right after the KYP Lemma 3.173. The homographic (Cayley) transformation w = (z − 1)/(z + 1) transforms the region |z| ≤ 1 into Re[w] ≤ 0; consequently, the stability of Gz(w) follows if all its poles are inside Re[w] ≤ 0.
In the next theorem, Hc(s) ∈ Cᵐˣᵐ denotes the transfer function of a continuous-time system, and Hd(z) ∈ Cᵐˣᵐ denotes the transfer function of a discrete-time system. These results have been obtained by various authors [8, 65, 80, 335]. We consider the transformation s = α(z − 1)/(z + 1), α > 0, equivalently z = (α + s)/(α − s), which is invertible and bilinear.
shows the close connection between the Cayley transform and the midpoint discretization (see F in item 2 in Theorem 3.179).
To start with, let us consider the following (θ, γ)-discretization of the passive LTI continuous-time system ẋ(t) = Ax(t) + Bu(t), y(t) = Cx(t) + Du(t):

(xk+1 − xk)/h = Axk+θ + Buk+γ
yk+γ = Cxk+γ + Duk+γ,      (3.341)

with a given initial condition x0, and where θ, γ ∈ [0, 1]; the subscript notation k + θ means xk+θ = θxk+1 + (1 − θ)xk, and similarly uk+γ = γuk+1 + (1 − γ)uk. As usual, xk denotes x(tk) or x(k), as in the foregoing sections.
Remark 3.182 Notice that if we define the output as yk = Cxk + Duk, and with θ = γ = ½, still using the forward shift operator xk = z⁻¹xk+1, the system in (3.341) has the transfer matrix H(z) = C((2/h)((z − 1)/(z + 1))In − A)⁻¹B + D. If we set θ = γ = ½ in (3.341) (thus with the output yk+γ), then we obtain H(z) = ((1 + z)/(2z))[(h/2)C(((z − 1)/(z + 1))In − (h/2)A)⁻¹B + D]. If we set θ = ½, γ = 1, then H(z) = (hz/(z + 1))C(((z − 1)/(z + 1))In − (h/2)A)⁻¹B + D.
Assuming that the inverse (In − hθA)⁻¹ is well defined (a sufficient condition is h < 1/(θ||A||), where ||·|| is a norm for which ||In|| = 1 [66, Theorem 1, Chap. 11], but in many cases In − hθA may be full rank for h > 0 not necessarily small), we define

Ã ≜ (In − hθA)⁻¹(In + h(1 − θ)A)
B̃ ≜ h(In − hθA)⁻¹B
C̃ ≜ γCÃ + (1 − γ)C        (3.342)
D̃ ≜ γCB̃ + D.
The various cases quickly analyzed in Remark 3.182 can be recast into (3.342). The (θ, γ)-discretization of the system is compactly written in the standard state-space form

xk+1 = Ãxk + B̃uk+γ
yk+γ = C̃xk + D̃uk+γ.       (3.343)
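The maps in (3.342) are immediate to implement. The sketch below (with a hypothetical Hurwitz pair (A, B), not data from the text) builds (Ã, B̃, C̃, D̃) and checks that, for θ = ½, Ã is the Cayley-type transform of (h/2)A and is Schur (spectral radius < 1) for every h > 0, in line with the midpoint case discussed above.

```python
import numpy as np

def theta_gamma_discretize(A, B, C, D, h, theta, gamma):
    """Matrices (3.342) of the (theta, gamma)-discretization (3.343)."""
    n = A.shape[0]
    M = np.linalg.inv(np.eye(n) - h * theta * A)
    At = M @ (np.eye(n) + h * (1.0 - theta) * A)
    Bt = h * (M @ B)
    Ct = gamma * (C @ At) + (1.0 - gamma) * C
    Dt = gamma * (C @ Bt) + D
    return At, Bt, Ct, Dt

# hypothetical Hurwitz example (eigenvalues -1 and -3)
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

for h in (0.1, 1.0, 10.0):
    At, Bt, Ct, Dt = theta_gamma_discretize(A, B, C, D, h, theta=0.5, gamma=0.5)
    # theta = 1/2: At = (I - (h/2)A)^{-1}(I + (h/2)A), the Cayley transform of (h/2)A
    Cay = np.linalg.solve(np.eye(2) - 0.5 * h * A, np.eye(2) + 0.5 * h * A)
    assert np.allclose(At, Cay)
    # the Cayley transform maps the open left half-plane into the open unit disc
    assert np.max(np.abs(np.linalg.eigvals(At))) < 1.0
print("midpoint (theta = 1/2) checks passed")
```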
The transformation used in this section does not exactly match the one used in Theorem 3.179, due to the particular choice of the discretization method and of the state-space form in (3.343). In fact, the Cayley transform corresponds to (xk+1 − xk)/h = Axk+½ + (1/h)Buk, with the choice α = 2/h. If passivity is wanted, then it follows from (3.342) that if D = 0, then γ > 0 (because, as seen in Sect. 3.15.1, a passive discrete-time system must have a nonzero feedthrough matrix). Motivated by many of the developments of this chapter, let us propose the following.
Definition 3.183 The quadruple (Ã, B̃, C̃, D̃) is said to be passive if there exist matrices L̃ ∈ Rⁿˣᵐ, W̃ ∈ Rᵐˣᵐ, and R ∈ Rⁿˣⁿ with R = Rᵀ ≻ 0, such that

(1) ÃᵀRÃ − R = −L̃L̃ᵀ
(2) B̃ᵀRÃ − C̃ = −W̃ᵀL̃ᵀ       (3.344)
(3) B̃ᵀRB̃ − D̃ − D̃ᵀ = −W̃ᵀW̃.
Here, we take this as the definition of a passive system, which makes sense in view of (3.326) and (3.327). In turn, we know that the passivity of the continuous-time system (A, B, C, D) is equivalent to the Lur'e equations in (3.2) being satisfied for some matrices P = Pᵀ ≻ 0, L and W. Here we understand passivity in the sense that the following dissipation equality, with storage function V(x) = ½ xᵀPx,

V(x(t)) − V(x(0)) = ∫₀ᵗ uᵀ(s)y(s)ds − ∫₀ᵗ ½ (xᵀ(s), uᵀ(s)) Q [x(s); u(s)] ds, (3.345)

holds for all t ≥ 0, where, from the Lur'e equations in (3.2) (or in (3.3)):

Q ≜ [LLᵀ LW; WᵀLᵀ WᵀW] ≽ 0. (3.346)
Using the same calculations as in (3.326) and (3.327), it follows from (3.344) that the following holds along the trajectories of the system (3.343). Let V(xk) = ½ xkᵀRxk denote the corresponding energy storage function. The dissipation equality

V(xk+1) − V(xk) = uᵀ_{k+γ}y_{k+γ} − ½ (xkᵀ, uᵀ_{k+γ}) Q̃ [xk; uk+γ], (3.347)

or equivalently

V(xk+1) − V(x0) = Σ_{i=0}^{k} uᵀ_{i+γ}y_{i+γ} − ½ Σ_{i=0}^{k} (xiᵀ, uᵀ_{i+γ}) Q̃ [xi; ui+γ], (3.348)

holds for all k ≥ 0, in terms of the matrix

Q̃ ≜ [L̃L̃ᵀ L̃W̃; W̃ᵀL̃ᵀ W̃ᵀW̃] ≽ 0,

which is the discrete-time counterpart of (3.346).
The control problem to be solved here is as follows: given that (A, B, C, D) is passive with associated storage functions, dissipation function, and supply rate, under which conditions on θ and γ is the discretized system (Ã, B̃, C̃, D̃) passive with the same storage functions, supply rate, and dissipation, for all h > 0? This is a rather tough issue. As we shall see next, it requires some clarification.
We know from the passivity of (A, B, C, D) that the Lyapunov equation Aᵀ(hR̃) + (hR̃)A = −L̃L̃ᵀ has a unique solution hR̃ for given L̃L̃ᵀ. Thus, provided that (1 − 2θ)AᵀR̃A = 0 (which is satisfied if θ = ½), the equality (3.349) (1), which is equivalent to (3.344) (1), has a solution R̃ such that R = (I − hθA)ᵀR̃(I − hθA), which defines the energy storage function of the discretized system. The state dissipation is given by L̃L̃ᵀ. Now, taking W̃ = 0, one may rewrite (3.349) (2) as hBᵀR̃ − C̃(In + h(1 − θ)A)⁻¹ = 0, which means that the second equality for passivity is satisfied with a new output matrix (1/h)C̃(In + h(1 − θ)A)⁻¹. Then (3.349) (3) boils down to BᵀR̃B = D̃ + D̃ᵀ. Clearly, one can always find D̃ such that this equality holds; however, it may not be equal to the matrix D̃ in (3.342), so we denote it D̄. Changing D̃ into D̄ once again modifies the "output" yk+γ in (3.343). Therefore, the discrete-time system does not possess the output yk+γ = C̃xk + D̃uk+γ in (3.343), but a new output equal to ȳk+γ ≜ (1/h)C̃(In + h(1 − θ)A)⁻¹xk+1 + D̄uk+γ. This corresponds to changing the supply rate of the system. Therefore, the discrete-time system is dissipative with storage function (h/2)xkᵀR̃xk, dissipation matrices L̃ and W̃ = 0, and supply rate ȳᵀ_{k+γ}uk+γ.
Equating the two triplets (supply rate, dissipation function, storage function set) requires a preliminary comment. First, we define the two cumulative dissipation functions: for the continuous-time system,

D(t) ≜ ∫₀ᵗ ½ (xᵀ(s), uᵀ(s)) Q [x(s); u(s)] ds,

and for the discrete-time system,

Dk ≜ Σ_{i=0}^{k} (h/2) (xiᵀ, uᵀ_{i+γ}) Q̃ [xi; ui+γ]. Thus we
have two options: seek conditions such that P = R and Q̃ = hQ, or seek conditions such that hR = P and Q̃ = Q. The second option yields an approximation of the infinitesimal dissipation equality, while the first option rather approximates its integral form. Let us choose the second option in the sequel. The next proposition states conditions under which passivity is preserved, using the Lur'e equation matrices (L, W) and (L̃, W̃) (for, if these matrix pairs are equal, then Q̃ = Q).
Proposition 3.185 ([298, Proposition 3]) Let hR = P, h > 0. Assume that both (A, B, C, D) and (Ã, B̃, C̃, D̃) are passive. Then,

LLᵀ = L̃L̃ᵀ and WᵀLᵀ = W̃ᵀL̃ᵀ  ⟺  θAᵀLLᵀ = 0, θBᵀLLᵀ = 0, (2θ − 1)AᵀRA = 0, (1 − θ − γ)BᵀRA = 0, θ(γ − θ)BᵀRA² = 0, BᵀLLᵀA = 0, γWᵀLᵀA = 0. (3.350)
Let us further assume that LLᵀ = L̃L̃ᵀ and WᵀLᵀ = W̃ᵀL̃ᵀ. Then, we have

WᵀW = W̃ᵀW̃  ⟺  (1 − 2γ)BᵀRB = 0 and γWᵀLᵀB = γBᵀLW. (3.351)
(WᵀLᵀ − W̃ᵀL̃ᵀ)(In − hθA) + hγWᵀLᵀA = 0. (3.358)
Let us focus for a while on the left-hand side of (3.360). First, using AT R + RA =
−LLT , we have (In − hθ A)T R = R(In + hθ A) + hθ LLT , and then using (3.353)
(θ AT LLT = 0 holds), we have
Let us consider the lossless case L = 0. It is easily deduced from Proposition 3.185 that the midpoint method with θ = γ = ½ preserves losslessness for any h > 0. From (3.351), one sees that input strict passivity is preserved if γ = ½ (which is necessary if R ≻ 0) and the matrix WᵀLᵀB is symmetric. It is clear that, in general, the conditions of Proposition 3.185 are rather stringent, and they tend to indicate that one should either use numerical schemes with more parameters (like Runge–Kutta), and/or let h take particular, or nonconstant, values. The case of passive linear complementarity systems is treated in [298], including the analysis of state jumps as in Sect. 3.14.2.2, and their numerical calculation. Examples of circuits with ideal diodes are provided in [298]. The explicit Euler method is analyzed in [148] (called therein the delta operator). The notion of low-frequency SPR is introduced in [148]: all elements of H(z) are analytic for |z| > 1/h, and H(z) + Hᵀ(z̄) ≻ 0 for all |θ| ≤ Θ and |z| > 1/h, where h > 0 is the sampling period and H(z) = C((e^{jθ} − 1)/h In − A)⁻¹B + D for z = (e^{jθ} − 1)/h, |θ| ≤ Θ. This is the discrete-time counterpart of Definition
Let us consider the differential inclusion in (3.239) (equivalently, its Lur'e system form in (3.240)), and let us discretize it with an implicit Euler method:

xk+1 − xk ∈ −hA(xk+1)  ⟺  xk+1 = xk − hλk+1, λk+1 ∈ A(xk+1), (3.368)

where xk = x(tk), h > 0 is the time step, 0 = t0 < t1 < … < tn−1 < tn = T, and the integration is performed on the interval [0, T]. The variable λk+1 is the discrete counterpart of λ(t) in (3.240). One sees that (3.368) is a generalized equation with unknown xk+1, which can be solved as

xk+1 = (In + hA)⁻¹(xk) ≜ J_A^h(xk). (3.369)

The operator J_A^h(·) is called the resolvent of the maximal monotone operator A(·). The resolvent of a maximal monotone mapping A(·) is non-expansive, that is, for any x and y in dom(A), one has ||J_A^h(x) − J_A^h(y)|| ≤ ||x − y||. Let us assume that J_A^h(0) = 0, equivalently (In + hA)(0) = hA(0) ∋ 0. Then ||J_A^h(x)|| ≤ ||x|| for any x ∈ dom(A). Let us consider the Lyapunov function candidate V(xk) = xkᵀxk. We have V(xk+1) − V(xk) = ||J_A^h(xk)||² − ||xk||² ≤ 0. Thus, the implicit Euler discretization preserves the Lyapunov stability properties of the Lur'e differential inclusion (3.240).
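As a one-dimensional illustration (our own, not from the text), take A(·) = sgn(·) = ∂|·|. The resolvent J_A^h is then the soft-threshold map, and the implicit Euler scheme (3.369) decreases |xk| monotonically, reaching 0 in finitely many steps and staying there, without the chattering that the explicit scheme exhibits.

```python
import numpy as np

def resolvent_sign(x, h):
    """J_A^h(x) = (I + h*sgn)^{-1}(x): solves the generalized equation
    x_{k+1} + h*lambda = x, lambda in sgn(x_{k+1}), in closed form
    (the soft-threshold map)."""
    return np.sign(x) * max(abs(x) - h, 0.0)

h = 0.3
x_imp = x_exp = 1.0
imp, exp_ = [x_imp], [x_exp]
for _ in range(10):
    x_imp = resolvent_sign(x_imp, h)       # implicit Euler (3.369)
    x_exp = x_exp - h * np.sign(x_exp)     # explicit Euler, for comparison
    imp.append(x_imp)
    exp_.append(x_exp)

print(imp)   # decreases by h each step until it hits 0 exactly, then stays
print(exp_)  # after crossing zero, chatters between about 0.1 and -0.2
```

The exact finite-time convergence of the implicit scheme is precisely the property exploited in the implicit discretization of sliding-mode controllers mentioned below.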
Let us end this section by noting that the control problem with robustness against an unknown, bounded disturbance d(x, t) requires more thought. Indeed, consider the perturbed plant ẋ(t) = u(t) + d(x(t), t), x(t) ∈ R, where the goal is to bring x(·) to zero in finite time. In continuous time, one can use the simplest sliding-mode controller u(t) ∈ −α sgn(x(t)), with α > |d(x, t)| for all x and t.
to some KYP Lemma equation conditions in [76, Theorem 4.1]. Still dealing with numerical analysis, [397] shows some nice dissipativity properties of the Moreau–Jean scheme for nonsmooth Lagrangian systems [296], which are set-valued Lur'e systems (see Fig. 6.7 in Chap. 6). Discretization issues for nonlinear systems have also been the object of several analyses. It is noteworthy that, except in very particular cases, in the nonlinear case one has to use an approximation of the plant's model to analyze the closed-loop system stabilization. One choice often made in the emulation method35 is to represent the plant with an explicit Euler method [323, 398]. The least that is expected, then, is to prove that the closed-loop discrete-time system has solutions that converge toward those of the continuous-time system, and/or that some consistency in the sense of [398, Definitions 2.4, 2.5] holds true. Both the integral and the infinitesimal forms of the dissipation inequality approximations are analyzed in [398]. Passivity preservation with the same supply rate and storage function is then stated in various results. The port-Hamiltonian system in (6.86) in Chap. 6 is discretized in [399] as follows:

[(xk+1 − xk)/h; yk] = [J(xk) − R(xk)  g(xk); g(xk)ᵀ  0] [∇̄H0(xk, xk+1); uk], (3.370)

where ∇̄H0(xk, xk+1) denotes a discrete gradient of H0, satisfying ∇̄H0(xk, xk+1)ᵀ(xk+1 − xk) = H0(xk+1) − H0(xk). Under some basic conditions, the method is shown to be convergent [399, Proposition 5]. Using the passive output from (3.370) is shown to provide better performance than a simply emulated controller using the explicit output. See also [400] for discrete-time port-Hamiltonian systems.
for discrete-time port Hamiltonian systems. The dissipativity of nonlinear discrete-
time systems is tackled in [401] using Taylor–Lie series discretization (involving
infinite sums), see also [402], and [403, 404] for results about feedback stabilization.
Passivity of repetitive processes and iterative learning control, which are represented
by discrete-time state spaces, is analyzed and used in [357, 405–407]. Dissipativity
has also proven to be a quite useful analytical tool, to study optimal control problems
and MPC (model predictive control) in a discrete-time setting [408–416]. Applica-
tions are in economic optimal control problems. The turnpike property formalizes
the phenomenon that an optimal trajectory (associated with an optimal controller)
stays “most of the time” close to an optimal steady-state (equilibrium) point. Briefly,
let us consider the optimal control problem:
min_{u∈U_K(x0)} Σ_{k=0}^{K−1} l(x(k), u(k))
subject to: x(k + 1) = f(x(k), u(k)), x(0) = x0,      (3.371)

where U_K(x0) is the space of admissible control sequences, l(·, ·) is a continuous stage cost, K ∈ N is the time horizon, and f(·, ·) is continuous.
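The turnpike behavior can be observed on a small instance of (3.371). The sketch below uses a scalar linear-quadratic problem with illustrative data of our own choosing: the finite-horizon optimal controller is computed exactly by a backward Riccati recursion, and the optimal trajectory stays close to the optimal steady state x = 0 over most of the horizon.

```python
import numpy as np

# illustrative scalar problem: x(k+1) = a x(k) + b u(k), stage cost q x^2 + r u^2
a, b, q, r = 1.2, 1.0, 1.0, 1.0
K, x0 = 40, 5.0

# backward Riccati recursion (zero terminal cost)
P = np.zeros(K + 1)
F = np.zeros(K)
for k in range(K - 1, -1, -1):
    F[k] = a * b * P[k + 1] / (r + b**2 * P[k + 1])
    P[k] = q + a**2 * P[k + 1] - a * b * P[k + 1] * F[k]

# optimal closed-loop trajectory
x = np.zeros(K + 1)
x[0] = x0
for k in range(K):
    x[k + 1] = (a - b * F[k]) * x[k]

# turnpike: the trajectory is near the steady state 0 over most of the horizon
print(max(abs(x[k]) for k in range(10, 35)))  # tiny compared with |x0| = 5
```

Here the "turnpike" is the origin; for economic MPC problems it is, more generally, the optimal steady state of the stage cost under the dynamics constraint.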
35 Roughly speaking, the emulation method consists of four steps: (i) design a suitable controller in the continuous-time framework; (ii) discretize the controller and implement it with a zero-order-hold method; (iii) choose a discrete-time approximation of the plant's model, and analyze the discrete-time closed-loop system stability; and (iv) check convergence.
References
1. Popov VM (1959) Critères suffisants de stabilité asymptotique globale pour les systèmes
automatiques non linéaires à plusieurs organes d’exécution. St Cerc Energ IX(4):647–680. In
Romanian
2. Lur’e A, Postnikov VN (1945) On the theory of stability of control systems. Appl Math Mech
8(3). (Prikl. Matem. i, Mekh., vol IX, no 5)
3. Lur’e A (1951) Certain nonlinear problems in the theory of automatic control. Gostekhiz-
dat, Moscow, Leningrad. Original: Nekotorye Nelineinye Zadachi Teorii Avtomaticheskogo
Regulirovaniya (Gos. Isdat. Tekh. Teor. Lit., 1951, U.S.S.R.), H.M. Stationery, Transl., 1957
4. Yakubovich VA (1962) La solution de quelques inégalités matricielles rencontrées dans la
théorie du réglage automatique. Doklady A.N. SSSR 143(6):1304–1307
5. Yakubovich VA (1962) The solution of certain matrix inequalities. Autom Control Theory
Sov Math AMS 3:620–623
6. Kalman RE (1963) Lyapunov functions for the problem of Lurie in automatic control. Proc
Natl Acad Sci USA 49(2):201–205
7. Popov VM (1959) Critéres de stabilité pour les systèmes non linéaires de réglage automatique,
basés sur l’utilisation de la transformée de laplace. St Cerc Energ IX(1):119–136. In Romanian
8. Popov VM (1964) Hyperstability and optimality of automatic systems with several control
functions. Rev Roum Sci Tech Sér Electrotech Energ 9(4):629–690
9. Yakubovich VA (1975) The frequency theorem for the case in which the state space and the
control space are Hilbert spaces, and its application in certain problems in the synthesis of
optimal control, II. Sib Math J 16:828–845
10. Brusin VA (1976) The Lurie equation in the Hilbert space and its solvability. Prikl Math Mekh
40(5):947–955. In Russian
11. Likhtarnikov AL, Yakubovich VA (1977) The frequency theorem for one-parameter semi-
groups. Math USSR Izv (Izv Akad Nauk SSSR, Ser Math) 11(4):849–864
12. Szegö G, Kalman RE (1963) Sur la stabilité absolue d’un système d’équations aux différences
finies. C R Acad Sci Paris 257(2):388–390
13. Barabanov NE, Gelig AK, Leonov GA, Likhtarnikov AL, Matveev AS, Smirnova VB, Fradkov
AL (1996) The frequency theorem (Kalman-Yakubovich Lemma) in control theory. Autom
Remote Control 57(10):1377–1407
14. Yao J, Feng J, Meng M (2016) On solutions of the matrix equation AX = B with respect to
semi-tensor product. J Frankl Inst 353:1109–1131
15. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice Hall, Englewood Cliffs
16. Anderson BDO (1967) A system theory criterion for positive real matrices. SIAM J Control
5(2):171–182
17. van der Geest R, Trentelman H (1997) The Kalman-Yakubovich-Popov lemma in a
behavioural framework. Syst Control Lett 32:283–290
18. Youla DC (1961) On the factorization of rational matrices. IEEE Trans Inf Theory IT-7:172–
189
19. Bitmead R, Anderson BDO (1977) Matrix fraction description of the lossless positive real
property. IEEE Trans Autom Control 24(10):546–550
20. Reis T, Willems JC (2011) A balancing approach to the realization of systems with internal
passivity and reciprocity. Syst Control Lett 60(1):69–74
21. Ober R (1991) Balanced parametrization of classes of linear systems. SIAM J Control and
Optim 29(6):1251–1287
22. Schumacher JM (1983) The role of the dissipation matrix in singular optimal control. Syst
Control Lett 2:262–266
23. Tao G, Ioannou PA (1988) Strictly positive real matrices and the Lefschetz-Kalman-
Yakubovich Lemma. IEEE Trans Autom Control 33(12):1183–1185
24. Taylor JH (1974) Strictly positive real functions and Lefschetz-Kalman-Yakubovich (LKY)
lemma. IEEE Trans Circuits Syst 21(2):310–311
25. Sakamoto N, Suzuki M (1996) γ-passive system and its phase property and synthesis. IEEE
Trans Autom Control 41(6):859–865
26. Wen JT (1988) Time domain and frequency domain conditions for strict positive realness.
IEEE Trans Autom Control 33:988–992
27. Bernstein DS (2005) Matrix mathematics. Theory, facts, and formulas with application to
linear systems theory. Princeton University Press, Princeton
28. Bryson AE, Ho YC (1975) Applied optimal control. Optimizaton, estimation and control.
Taylor and Francis, Abingdon
29. Rudin W (1987) Real and complex analysis, 3rd edn. Higher mathematics. McGraw Hill,
New York City
30. Naylor AW, Sell GR (1983) Linear operator theory in engineering and science. Springer, New
York
31. Shorten R, King C (2004) Spectral conditions for positive realness of single-input single-
output systems. IEEE Trans Autom Control 49(10):1875–1879
32. Wang L, Yu W (2001) On Hurwitz stable polynomials and strictly positive real transfer
functions. IEEE Trans Circuits Syst I- Fundam Theory Appl 48(1):127–128
33. Patel VV, Datta KB (2001) Comments on “Hurwitz stable polynomials and strictly positive
real transfer functions”. IEEE Trans Circuits Syst I- Fundam Theory Appl 48(1):128–129
34. Marquez HJ, Agathoklis P (2001) Comments on “Hurwitz polynomials and strictly positive
real transfer functions”. IEEE Trans Circuits Syst I- Fundam Theory Appl 48(1):129
35. Yu W, Wang L (2001) Anderson’s claim on fourth-order SPR synthesis is true. IEEE Trans
Circuits Syst I- Fundam Theory Appl 48(4):506–509
36. Stipanovic DM, Siljak DD (2001) SPR criteria for uncertain rational matrices via polynomial
positivity and Bernstein’s expansions. IEEE Trans Circuits Syst I- Fundam Theory Appl
48(11):1366–1369
37. Marquez HJ, Damaren CJ (1995) On the design of strictly positive real transfer functions.
IEEE Trans Circuits Syst I- Fundam Theory Appl 42(4):214–218
38. Anderson BDO, Mansour M, Kraus FJ (1995) A new test for strict positive realness. IEEE
Trans Circuits Syst I- Fundam Theory and Appl 42(4):226–229
39. Henrion D (2002) Linear matrix inequalities for robust strictly positive real design. IEEE
Trans Circuits Syst I- Fundam Theory Appl 49(7):1017–1020
40. Dumitrescu B (2002) Parametrization of positive-real transfer functions with fixed poles.
IEEE Trans Circuits Syst I- Fundam Theory Appl 49(4):523–526
41. Gregor J (1996) On the design of positive real functions. IEEE Trans Circuits Syst I- Fundam
Theory Appl 43(11):945–947
42. de la Sen M (1998) A method for general design of positive real functions. IEEE Trans Circuits
Syst I- Fundam Theory Appl 45(7):764–769
43. Betser A, Zeheb E (1993) Design of robust strictly positive real transfer functions. IEEE Trans
Circuits Syst I- Fundam Theory Appl 40(9):573–580
44. Bianchini G, Tesi A, Vicino A (2001) Synthesis of robust strictly positive real systems with
l2 parametric uncertainty. IEEE Trans Circuits Syst I- Fundam Theory Appl 48(4):438–450
45. Cobb D (1982) On the solution of linear differential equations with singular coefficients. J
Differ Equ 46:310–323
46. Masubuchi I (2006) Dissipativity inequalities for continuous-time descriptor systems with
applications to synthesis of control gains. Syst Control Lett 55:158–164
47. Freund RW, Jarre F (2004) An extension of the positive real lemma to descriptor systems.
Optim Methods Softw 19(1):69–87
48. Gillis N, Sharma P (2018) Finding the nearest positive-real system. SIAM J Numer Anal
56(2):1022–1047
49. Zhang L, Lam J, Xu S (2002) On positive realness of descriptor systems. IEEE Trans Circuits
Syst I- Fundam Theory Appl 49(3):401–407
50. Camlibel MK, Frasca R (2009) Extension of Kalman-Yakubovich-Popov lemma to descriptor
systems. Syst Control Lett 58:795–803
51. Acary V, Brogliato B, Goeleven D (2008) Higher order Moreau’s sweeping process: mathe-
matical formulation and numerical simulation. Math Program Ser A 113:133–217
52. Brogliato B (2018) Non-autonomous higher-order Moreau’s sweeping process: well-
posedness, stability and Zeno trajectories. Eur J Appl Math 29(5):941–968
53. Knockaert L (2005) A note on strict passivity. Syst Control Lett 54(9):865–869
54. Reis T, Voigt M (2015) The Kalman-Yakubovich-Popov inequality for differential-algebraic
systems: existence of nonpositive solutions. Syst Control Lett 86:1–8
55. Reis T, Rendel O, Voigt M (2015) The Kalman-Yakubovich-Popov inequality for differential-
algebraic systems. Linear Algebra Appl 485:153–193
56. Mahmoud MS (2009) Delay-dependent dissipativity of singular time-delay systems. IMA J
Math Control Inf 26:45–58
57. Masubuchi I (2007) Output feedback controller synthesis for descriptor systems satisfying
closed-loop dissipativity. Automatica 43:339–345
58. Chu D, Tan RCE (2008) Algebraic characterizations for positive realness of descriptor sys-
tems. SIAM J Matrix Anal Appl 30(1):197–222
59. Sajja S, Corless M, Zeheb E, Shorten R (2013) Comments and observations on the passivity
of descriptor systems in state space. Int J Control
60. Corless M, Zeheb E, Shorten R (2018) On the SPRification of linear descriptor systems via
output feedback. IEEE Trans Autom Control. https://doi.org/10.1109/TAC.2018.2849613
61. Xu S, Lam J (2004) New positive realness conditions for uncertain discrete descriptor systems:
analysis and synthesis. IEEE Trans Circuits Syst I- Fundam Theory Appl 51(9):1897–1905
62. Lee L, Chen JL (2003) Strictly positive real lemma and absolute stability for discrete time
descriptor systems. IEEE Trans Circuits Syst I- Fundam Theory Appl 50(6):788–794
63. Lozano R, Joshi SM (1990) Strictly positive real functions revisited. IEEE Trans Autom
Control 35:1243–1245
64. Popov VM (1973) Hyperstability of control systems. Springer, Berlin
65. Faurre P, Clerget M, Germain F (1979) Opérateurs Rationnels Positifs. Application à
l’Hyperstabilité et aux Processus Aléatoires. Méthodes Mathématiques de l’Informatique.
Dunod, Paris. In French
66. Lancaster P, Tismenetsky M (1985) The theory of matrices. Academic Press, New York
67. Joshi SM, Gupta S (1996) On a class of marginally stable positive-real systems. IEEE Trans
Autom Control 41(1):152–155
68. Collado J, Lozano R, Johansson R (2001) On Kalman-Yakubovich-Popov lemma for stabi-
lizable systems. IEEE Trans Autom Control 46(7):1089–1093
69. Ferrante A (2005) Positive real lemma: necessary and sufficient conditions for the existence
of solutions under virtually no assumptions. IEEE Trans Autom Control 50(5):720–724
70. Pandolfi L (2001) An observation on the positive real lemma. J Math Anal Appl 255:480–490
71. Ferrante A, Pandolfi L (2002) On the solvability of the Positive Real Lemma equations. Syst
Control Lett 47:211–219
72. Hughes TH (2018) On the optimal control of passive or non-expansive systems. IEEE Trans
Autom Control 63(12):4079–4093
73. Hughes TH (2017) A theory of passive linear systems with no assumptions. Automatica
86:87–97
74. Rantzer A (1996) On the Kalman-Yakubovich-Popov Lemma. Syst Control Lett 28:7–10
75. Scherer R, Wendler W (1994) A generalization of the positive real Lemma. IEEE Trans Autom
Control 39(4):882–886
76. Scherer R, Turke H (1989) Algebraic characterization of A-stable Runge-Kutta methods.
Appl Numer Math 5:133–144
77. Hughes TH, Smith MC (2017) Controllability of linear passive network behaviour. Syst
Control Lett 101:58–66
78. Hughes TH (2018) On the internal signature and minimal electric network realizations of
reciprocal behaviours. Syst Control Lett 119:16–22
79. Zhu L, Hill DJ (2018) Stability analysis of power systems: a network synchronization per-
spective. SIAM J Control Optim 56(3):1640–1664
80. Anderson BDO, Hitz KL, Diem ND (1974) Recursive algorithm for spectral factorization.
IEEE Trans Circuits Syst 21(6):742–750
81. Reis T (2011) Lur’e equations and even matrix pencils. Linear Algebra Appl 434:152–173
82. Massoudi A, Opmeer MR, Reis T (2017) The ADI method for bounded real and positive real
Lur’e equations. Numerische Mathematik 135:431–458
83. Anderson BDO, Moore JB (1968) Algebraic structure of generalized positive real matrices.
SIAM J Control 6(4):615–624
84. Camlibel MK, Iannelli L, Vasca F (2014) Passivity and complementarity. Math Program Ser
A 145:531–563
85. Yakubovich VA (1964) The method of matrix inequalities in the theory of stability of nonlinear
automatic control systems. Autom Remote Control 25(7):1017–1029
86. Yakubovich VA (1965) The method of matrix inequalities in the theory of stability of nonlinear
automatic control systems. Autom Remote Control 26(4):577–600
87. Yakubovich VA (1965) The method of matrix inequalities in the theory of stability of nonlinear
automatic control systems. Autom Remote Control 26(5):753–769
88. Barabanov NE (2007) Kalman-Yakubovich lemma in general finite dimensional case. Int J
Robust Nonlinear Control 17:369–386
89. Gusev SV, Likhtarnikov AL (2006) Kalman-Popov-Yakubovich lemma and the S-procedure:
a historical essay. Autom Remote Control 67(1):1768–1810
90. Faurre P (1973) Réalisations Markoviennes de processus stationnaires. PhD thesis, University
Paris 6
91. Meyer KR (1965) On the existence of Lyapunov functions for the problem of Lur’e. SIAM J
Control 3:373–383
92. Kailath T (1980) Linear systems. Prentice-Hall, Upper Saddle River
93. Balakrishnan AV (1995) On a generalization of the Kalman-Yakubovich lemma. Appl Math
Optim 31:177–187
94. Clements D, Anderson BDO, Laub AJ, Matson JB (1997) Spectral factorization with
imaginary-axis zeros. Linear Algebra App 250:225–252
95. Scherer R, Wendler W (1994) Complete algebraic characterization of A-stable Runge-Kutta
methods. SIAM J Numer Anal 31(2):540–551
96. Xiao C, Hill DJ (1999) Generalizations and new proof of the discrete-time positive real lemma
and bounded real lemma. IEEE Trans Circuits Syst- I: Fundam Theory Appl 46(6):740–743
97. Khalil HK (1992) Nonlinear systems. Macmillan, New York. 2nd edn published 1996; 3rd
edn published 2002
98. Kimura H (1997) Chain scattering approach to H∞ control. Birkhauser, Boston
99. Alpay D, Lewkowicz I (2011) The positive real lemma and construction of all realizations of
generalized positive rational functions. Syst Control Lett 60:985–993
100. Dickinson B, Delsarte P, Genin Y, Kamp Y (1985) Minimal realizations of pseudo-positive
and pseudo-bounded rational matrices. IEEE Trans Circuits Syst 32(6):603–605
101. Collado J, Lozano R, Johansson R (2005) Observer-based solution to the strictly positive real
problem. In: Astolfi A (ed) Nonlinear and adaptive control: tools and algorithms for the user.
Imperial College Press, London, pp 1–18
102. Collado J, Lozano R, Johansson R (2007) Using an observer to transform linear systems into
strictly positive real systems. IEEE Trans Autom Control 52(6):1082–1088
103. Johansson R, Robertsson A (2006) The Yakubovich-Kalman-Popov Lemma and stability
analysis of dynamic output feedback systems. Int J Robust Nonlinear Control 16(2):45–69
104. Johansson R, Robertsson A (2002) Observer-based strict positive real (SPR) feedback control
system design. Automatica 38(9):1557–1564
105. Xiong J, Petersen IR, Lanzon A (2012) On lossless negative imaginary systems. Automatica
48:1213–1217
106. Lanzon A, Petersen IR (2008) Stability robustness of a feedback interconnection of systems
with negative imaginary frequency response. IEEE Trans Autom Control 53(4):1042–1046
107. Xiong J, Petersen IR, Lanzon A (2010) A negative imaginary lemma and the stability of inter-
connections of linear negative imaginary systems. IEEE Trans Autom Control 55(10):2342–
2347
108. Petersen IR, Lanzon A (2010) Feedback control of negative-imaginary systems. IEEE Control
Syst Mag 30(5):54–72
109. Song Z, Lanzon A, Patra S, Petersen IR (2012) A negative-imaginary lemma without min-
imality assumptions and robust state-feedback synthesis for uncertain negative-imaginary
systems. Syst Control Lett 61:1269–1276
110. Mabrok M, Kallapur AG, Petersen IR, Lanzon A (2015) A generalized negative imaginary
lemma and Riccati-based static state-feedback negative imaginary synthesis. Syst Control
Lett 77:63–68
111. Lanzon A, Song Z, Patra S, Petersen IR (2011) A strongly strict negative-imaginary lemma
for non-minimal linear systems. Commun Inf Syst 11(2):139–142
112. Dey A, Patra S, Sen S (2016) Absolute stability analysis for negative-imaginary systems.
Automatica 67:107–113
113. Carrasco J, Heath WP (2017) Comment on “Absolute stability analysis for negative-imaginary
systems”. Automatica 85:486–488
114. Ferrante A, Ntogramatzidis L (2013) Some new results in the theory of negative imaginary
systems with symmetric transfer matrix function. Automatica 49(7):2138–2144
115. Bobtsov AA, Nikolaev NA (2005) Fradkov theorem-based design of the control of nonlinear
systems with functional and parametric uncertainties. Autom Remote Control 66(1):108–118
116. Fradkov AL (1974) Synthesis of an adaptive system of linear plant stabilization. Autom
Telemekh 12:1960–1966
117. Fradkov AL (1976) Quadratic Lyapunov functions in a problem of adaptive stabilization of a
linear dynamical plant. Sib Math J 2:341–348
118. Andrievsky BR, Churilov AN, Fradkov AL (1996) Feedback Kalman-Yakubovich Lemma
and its applications to adaptive control. In: Proceedings of the 35th IEEE conference on
decision and control, Kobe, Japan. pp 4537–4542
119. Fradkov AL, Hill DJ (1998) Exponential feedback passivity and stabilizability of nonlinear
systems. Automatica 34(6):697–703
120. Arcak M, Kokotovic PV (2001) Observer-based control of systems with slope-restricted non-
linearities. IEEE Trans Autom Control 46(7):1146–1150
121. Arcak M, Kokotovic PV (2001) Feasibility conditions for circle criterion designs. Syst Control
Lett 42(5):405–412
122. Sannuti P (1983) Direct singular perturbation analysis of high-gain and cheap control prob-
lems. Automatica 19(1):41–51
123. Sannuti P, Wason HS (1985) Multiple time-scale decomposition in cheap control problems:
singular control. IEEE Trans Autom Control 30(7):633–644
124. Sannuti P, Saberi A (1987) A special coordinate basis of multivariable linear systems, finite
and infinite zero structure, squaring down and decoupling. Int J Control 45(5):1655–1704
125. Fradkov AL (2003) Passification of non-square linear systems and feedback Yakubovich-
Kalman-Popov lemma. Eur J Control 6:573–582
126. Weiss H, Wang Q, Speyer JL (1994) System characterization of positive real conditions. IEEE
Trans Autom Control 39(3):540–544
127. Camlibel MK, Heemels WPMH, Schumacher H (2002) On linear passive complementarity
systems. Eur J Control 8(3):220–237
128. Camlibel MK, Schumacher JM (2016) Linear passive systems and maximal monotone map-
pings. Math Program Ser B 157:367–420
129. Adly S, Hantoute A, Le B (2016) Nonsmooth Lur’e dynamical systems in Hilbert spaces.
Set-Valued Var Anal 24:13–35
130. Le BK (2019) Lur’e dynamical systems with state-dependent set-valued feedback.
arXiv:1903.018007v1
131. Sontag ED (1998) Mathematical control theory: deterministic finite dimensional systems, vol
6, 2nd edn. Texts in applied mathematics. Springer, New York
132. Anderson BDO, Moylan PJ (1974) Synthesis of linear time-varying passive networks. IEEE
Trans Circuits Syst 21(5):678–687
133. Hill DJ, Moylan PJ (1980) Dissipative dynamical systems: basic input-output and state prop-
erties. J Frankl Inst 309(5):327–357
134. Forbes JR, Damaren CJ (2010) Passive linear time-varying systems: State-space realizations,
stability in feedback, and controller synthesis. In: Proceedings of American control confer-
ence, Baltimore, MD, USA, pp 1097–1104
135. Willems JC (1971) Least squares stationary optimal control and the algebraic Riccati equation.
IEEE Trans Autom Control 16(6):621–634
136. Willems JC (1974) On the existence of a nonpositive solution to the Riccati equation. IEEE
Trans Autom Control 19:592–593
137. Yakubovich VA (1966) Periodic and almost periodic limit modes of controlled systems with
several, in general discontinuous, nonlinearities. Soviet Math Dokl 7(6):1517–1521
138. Megretskii AV, Yakubovich VA (1990) A singular linear-quadratic optimization problem. Proc
Leningrad Math Soc 1:134–174
139. Molinari BP (1977) The time-invariant linear-quadratic optimal control problem. Automatica
13:347–357
140. Popov VM (1961) Absolute stability of nonlinear systems of automatic control. Avt i Telemekh
22:961–979. In Russian
141. Pandolfi L (2001) Factorization of the Popov function of a multivariable linear distributed
parameter system in the non-coercive case: a penalization approach. Int J Appl Math Comput
Sci 11(6):1249–1260
142. Willems JC (1972) Dissipative dynamical systems, Part II: linear systems with quadratic
supply rates. Arch Rat Mech An 45:352–393
143. Iwasaki T, Hara S (2005) Generalized KYP Lemma: unified frequency domain inequalities
with design applications. IEEE Trans Autom Control 50(1):41–59
144. Iwasaki T, Meinsma G, Fu M (2000) Generalized S-procedure and finite frequency KYP
lemma. Math Prob Eng 6:305–320
145. Ionescu V, Weiss M (1993) Continuous and discrete time Riccati theory: a Popov function
approach. Linear Algebra Appl 193:173–209
146. Ionescu V, Oara C (1996) The four block Nehari problem: a generalized Popov-Yakubovich
type approach. IMA J Math Control Inf 13:173–194
147. Iwasaki T, Hara S, Yamauchi H (2003) Dynamical system design from a control perspective:
finite frequency positive-realness approach. IEEE Trans Autom Control 48(8):1337–1354
148. Yang H, Xia Y (2012) Low frequency positive real control for delta operator systems. Auto-
matica 48:1791–1795
149. Kelkar A, Joshi S (1996) Control of nonlinear multibody flexible space structures, vol 221.
Lecture notes in control and information sciences. Springer, London
150. Anderson BDO, Moore JB (1971) Linear optimal control. Prentice-Hall, Englewood Cliffs
151. Lozano R, Joshi SM (1988) On the design of dissipative LQG type controllers. In: Proceedings
of the 27th IEEE international conference on decision and control, Austin, Texas, USA, pp
1645–1646
152. Haddad WM, Bernstein DS, Wang YW (1994) Dissipative H2/H∞ controller synthesis. IEEE
Trans Autom Control 39:827–831
153. Chellaboina V, Haddad WM (2003) Exponentially dissipative dynamical systems: a nonlinear
extension of strict positive realness. Math Prob Eng 1:25–45
154. Johannessen E, Egeland O (1995) Synthesis of positive real H∞ controller. In: Proceedings
of the American control conference, Seattle, Washington, USA, pp 2437–2438
155. Geromel JC, Gapski PB (1997) Synthesis of positive real H2 controllers. IEEE Trans Autom
Control 42(7):988–992
156. Johannessen EA (1997) Synthesis of dissipative output feedback controllers. PhD thesis,
NTNU, Trondheim
157. Garrido-Moctezuma R, Suarez D, Lozano R (1997) Adaptive LQG control of positive real
systems. In: Proceedings of European control conference, Brussels, Belgium, pp 144–149
158. Sun W, Khargonekar PP, Shim D (1994) Solution to the positive real control problem for
linear time-invariant systems. IEEE Trans Autom Control 39:2034–2046
159. Rodman L (1997) Non-Hermitian solutions of algebraic Riccati equations. Can J Math
49(4):840–854
160. Lancaster P, Rodman L (1995) Algebraic Riccati equations. Oxford University Press, Oxford
161. Vandenberghe L, Balakrishnan VR, Wallin R, Hansson A, Roh T (2005) Interior point algo-
rithms for semidefinite programming problems derived from the KYP lemma. In: Garulli
A, Henrion D (eds) Positive polynomials in control, vol 312. Lecture Notes in Control and
Information Sciences. Springer, Berlin, pp 195–238
162. Coddington EA, Levinson N (1982) Theory of ordinary differential equations, sixth reprint.
Tata McGraw-Hill, New Delhi
163. Arnold VI (1973) Ordinary differential equations. MIT Press, Cambridge
164. Cartan H (1967) Cours de Calcul Différentiel, 4th edn. Hermann, Paris, France
165. Facchinei F, Pang JS (2003) Finite-dimensional variational inequalities and complementarity
problems, vol I and II. Operations research. Springer, New-York
166. Fitts RE (1966) Two counterexamples to Aizerman’s conjecture. IEEE Trans Autom Control
11:553–556
167. Meisters GH (2001) A biography of the Markus-Yamabe conjecture. In: Mok N (ed.)
Aspects of mathematics: algebra, geometry and several complex variables. University of
Hong Kong, Department of Mathematics, HKU, Hong-Kong. https://www.math.unl.edu/
~gmeisters1/papers/HK1996.pdf
168. Cima A, Gasull A, Hubbers E, Manosas F (1997) A polynomial counterexample to the Markus-
Yamabe conjecture. Adv Math 131:453–457
169. Gutierrez C (1995) A solution to the bidimensional global asymptotic stability conjecture.
Ann Inst Henri Poincaré 12(6):627–671
170. Fessler R (1995) A solution of the two-dimensional global asymptotic Jacobian stability
conjecture. Ann Polon Math 62:45–75
171. Manosas F, Peralta-Salas D (2006) Note on the Markus-Yamabe conjecture for gradient
dynamical systems. J Math Anal Appl 322(2):580–586
172. Barabanov NE (1988) On the Kalman problem. Sib Matematischeskii Zhurnal 29:3–11.
Translated in Sib Math J, pp 333–341
200. Carrasco J, Maya-Gonzalez M, Lanzon A, Heath WP (2014) LMI search for anticausal and
noncausal rational Zames-Falb multipliers. Syst Control Lett 70:17–22
201. Turner MC, Kerr M, Postlethwaite I (2009) On the existence of stable, causal multipliers for
systems with slope-restricted nonlinearities. IEEE Trans Autom Control 54(11):2697–2702
202. Carrasco J, Heath WP, Lanzon A (2014) On multipliers for bounded and monotone nonlin-
earities. Automatica 66:65–71
203. Carrasco J, Heath WP, Li G, Lanzon A (2012) Comments on “On the existence of stable,
causal multipliers for systems with slope-restricted nonlinearities”. IEEE Trans Autom Control
57:2422–2428
204. Turner MC, Kerr M, Postlethwaite I (2012) Authors’ reply to “Comments on ‘On the existence
of stable, causal multipliers for systems with slope-restricted nonlinearities’”. IEEE Trans
Autom Control 57(9):2428–2430
205. Scherer CW, Holicki T (2018) An IQC theorem for relations: towards stability analysis of
data-integrated systems. In: 9th IFAC symposium on robust control design, Florianopolis,
Brazil
206. Safonov MG, Kulkarni VK (2000) Zames-Falb multipliers for MIMO nonlinearities. Int J
Robust Nonlinear Control 10:1025–1038
207. Leonov GA (1971) Stability of nonlinear controllable systems having a nonunique equilibrium
position. Autom Remote Control 10:23–28. Translated version. UDC 62–50:1547–1552
208. Gelig AK, Leonov GA (1973) Monostability of multicoupled systems with discontinu-
ous monotonic nonlinearities and non-unique equilibrium position. Autom Remote Control
6:158–161
209. Narendra KS, Neuman CP (1966) Stability of a class of differential equations with a single
monotone nonlinearity. J SIAM Control 4(2):295–308
210. Tugal H, Carrasco J, Falcon P, Barreiro A (2017) Stability analysis of bilateral teleoperation
with bounded and monotone environments via Zames-Falb multipliers. IEEE Trans Control
Syst Technol 25(4):1331–1344
211. Chen X, Wen JT (1996) Robustness analysis for linear time-invariant systems with structured
incrementally sector bounded feedback nonlinearities. Int J Appl Math Comput Sci 6(4):623–
648
212. Gapski PB, Geromel JC (1994) A convex approach to the absolute stability problem. IEEE
Trans Autom Control 39(9):1929–1932
213. Chang M, Mancera R, Safonov M (2012) Computation of Zames-Falb multipliers revisited.
IEEE Trans Autom Control 57(4):1024–1029
214. Materassi D, Salapaka MV (2011) A generalized Zames-Falb multiplier. IEEE Trans Autom
Control 56(6):1432–1436
215. Sandberg IW (1964) A frequency domain criterion for the stability of feedback systems
containing a single time varying nonlinear element. Bell Syst Tech J 43:1901–1908
216. Zames G (1966) On the input-output stability of nonlinear time-varying feedback systems-
part I: conditions derived using concepts of loop gain, conicity, and positivity. IEEE Trans
Autom Control 11(2):228–238
217. Zames G (1966) On the input-output stability of nonlinear time-varying feedback systems-
part II: conditions involving circles in the frequency plane and sector nonlinearities. IEEE
Trans Autom Control 11(3):465–477
218. Altshuller D (2013) Frequency domain criteria for absolute stability. A delay-integral-
quadratic constraints approach, vol 432. Lecture notes in control and information sciences.
Springer, London
219. Haddad WM, Bernstein DS (1994) Explicit construction of quadratic Lyapunov functions for
the small gain, positive, circle, and Popov theorems and their application to robust stability-
Part II: discrete-time theory. Int J Robust Nonlinear Control 4(2):229–265
220. Haddad WM, Bernstein DS (1993) Explicit construction of quadratic Lyapunov functions for
small gain, positivity, circle, and Popov theorems and their application to robust stability. Part
I: continuous-time theory. Int J Robust Nonlinear Control 3(4):313–339
221. Wang R (2002) Algebraic criteria for absolute stability. Syst Control Lett 47:401–416
222. Margaliot M, Gitizadeh R (2004) The problem of absolute stability: a dynamic programming
approach. Automatica 40:1240–1252
223. Margaliot M, Langholz G (2003) Necessary and sufficient conditions for absolute stability:
the case of second order systems. IEEE Trans Circuits Syst I 50(2):227–234
224. de Oliveira MC, Geromel JC, Hsu L (2002) A new absolute stability test for systems with
state dependent perturbations. Int J Robust Nonlinear Control 12:1209–1226
225. Kiyama T, Hara S, Iwasaki T (2005) Effectiveness and limitation of circle criterion for LTI
robust control systems with control input nonlinearities of sector type. Int J Robust Nonlinear
Control 15:873–901
226. Impram ST, Munro N (2004) Absolute stability of nonlinear systems with disc and norm-
bounded perturbations. Int J Robust Nonlinear Control 14:61–78
227. Impram ST, Munro N (2001) A note on absolute stability of uncertain systems. Automatica
37:605–610
228. Fabbri R, Impram ST (2003) On a criterion of Yakubovich type for the absolute stability of
non-autonomous control processes. Int J Math Math Sci 16:1027–1041
229. Zevin AA, Pinsky MA (2003) A new approach to the Lur’e problem in the theory of absolute
stability. SIAM J Control Optim 42(5):1895–1904
230. Ho MT, Lu JM (2005) H∞ PID controller design for Lur’e systems and its application to a
ball and wheel apparatus. Int J Control 78(1):53–64
231. Gil MI, Medina R (2005) Explicit stability conditions for time-discrete vector Lur’e type
systems. IMA J Math Control Inf 22(4):415–421
232. Thathachar MAL, Srinath MD (1967) Some aspects of the Lur’e problem. IEEE Trans Autom
Control 12(4):451–453
233. Hu T, Huang B, Lin Z (2004) Absolute stability with a generalized sector condition. IEEE
Trans Autom Control 49(4):535–548
234. Hu T, Li Z (2005) Absolute stability analysis of discrete-time systems with composite
quadratic Lyapunov functions. IEEE Trans Autom Control 50(6):781–797
235. Halanay A, Rasvan V (1991) Absolute stability of feedback systems with several differentiable
nonlinearities. Int J Syst Sci 23(10):1911–1927
236. Cheng Y, Wang L (1993) On the absolute stability of multi nonlinear control systems in the
critical cases. IMA J Math Control Inf 10:1–10
237. Krasnosel’skii AM, Rachinskii DI (2000) The Hamiltonian nature of Lur’e systems. Autom
Remote Control 61(8):1259–1262
238. Hagen G (2006) Absolute stability via boundary control of a semilinear parabolic PDE. IEEE
Trans Autom Control 51(3):489–493
239. Siljak D (1969) Parameter analysis of absolute stability. Automatica 5:385–387
240. Partovi S, Nahi NE (1969) Absolute stability of dynamic system containing non-linear func-
tions of several state variables. Automatica 5:465–473
241. Leonov GA (2005) Necessary and sufficient conditions for the absolute stability of two-
dimensional time-varying systems. Autom Remote Control 66(7):1059–1068
242. Liberzon MR (2006) Essays on the absolute stability theory. Autom Remote Control
67(10):1610–1644
243. (2002) Special issue: dissipativity of dynamical systems: application in control. Dedicated
to Vasile Mihai Popov. Eur J Control 8(3):181–300
244. Jonsson U (1997) Stability analysis with Popov multipliers and integral quadratic constraints.
Syst Control Lett 31:85–92
245. Haddad WM, Bernstein DS (1995) Parameter dependent Lyapunov functions and the Popov
criterion in robust analysis and synthesis. IEEE Trans Autom Control 40(3):536–543
246. Haddad WM, Collins EG, Bernstein DS (1993) Robust stability analysis using the small gain,
circle, positivity, and Popov theorems: a comparative study. IEEE Trans Control Syst Technol
1(4):290–293
247. Arcak M, Larsen M, Kokotovic P (2009) Circle and Popov criteria as tools for nonlinear
feedback design. Automatica 39:643–650
248. Yakubovich VA (1967) Frequency conditions for the absolute stability of control systems with
several nonlinear or linear nonstationary blocks. Avtomat i Telemekh 6:5–30
249. Hiriart-Urruty JB, Lemaréchal C (2001) Fundamentals of convex analysis. Grundlehren Text
Editions. Springer, Berlin
250. Goeleven D, Motreanu D, Dumont Y, Rochdi M (2003) Variational and hemivariational
inequalities: theory, methods and applications. Volume 1: unilateral analysis and unilateral
mechanics. Nonconvex Optimization and its Applications. Kluwer Academic Publishers, Dor-
drecht
251. Moreau JJ (2003) Fonctionnelles convexes. Istituto Poligrafico e Zecca dello Stato S.p.A.,
Roma, Italy. Originally: Séminaire sur les Équations aux Dérivées Partielles, Collège de
France, Paris, 1966–1967
252. Brézis H (1973) Opérateurs Maximaux Monotones. North Holland mathematics studies. Else-
vier, Amsterdam
253. Rockafellar RT (1970) Convex analysis. Princeton University Press, Princeton
254. Moreau JJ (1988) Unilateral contact and dry friction in finite freedom dynamics. In: Moreau
JJ, Panagiotopoulos PD (eds) Nonsmooth mechanics and applications, vol 302. CISM courses
and lectures. International centre for mechanical sciences. Springer, Berlin, pp 1–82
255. Brogliato B, Daniilidis A, Lemaréchal C, Acary V (2006) On the equivalence between comple-
mentarity systems, projected systems and differential inclusions. Syst Control Lett 55(1):45–
51
256. Dovgoshey O, Martio O, Ryazanov V, Vuorinen M (2006) The Cantor function. Expo Math
24:1–37
257. Rockafellar RT, Wets RJB (1998) Variational analysis, vol 317. Grundlehren der Mathema-
tischen Wissenschaften. Springer, Berlin
258. Brogliato B, Thibault L (2010) Existence and uniqueness of solutions for non-autonomous
complementarity dynamical systems. J Convex Anal 17(3–4):961–990
259. Fischer N, Kamalapurkar R, Dixon WE (2013) LaSalle-Yoshizawa corollaries for nonsmooth
systems. IEEE Trans Autom Control 58(9):2333–2338
260. Bauschke HH, Combettes PL (2011) Convex analysis and monotone operator theory in
hilbert spaces. Canadian mathematics society, Science Mathématique du Canada. Springer
Science+Business media, Berlin
261. Bastien J, Schatzman M, Lamarque CH (2002) Study of an elastoplastic model with an infinite
number of internal degrees of freedom. Eur J Mech A/Solids 21:199–222
262. Bastien J (2013) Convergence order of implicit Euler numerical scheme for maximal monotone
differential inclusions. Z Angew Math Phys 64:955–966
263. Deimling K (1992) Multivalued differential equations. Nonlinear analysis and applications.
De Gruyter, Berlin-New York
264. Smirnov GV (2001) Introduction to the theory of differential inclusions, vol 41. American
Mathematical Society, Providence
265. Brogliato B (2004) Absolute stability and the Lagrange-Dirichlet theorem with monotone
multivalued mappings. Syst Control Lett 51:343–353. Preliminary version proceedings of the
40th IEEE conference on decision and control, vol 1, pp 27-32. Accessed 4–7 Dec 2001
266. Adly S, Hantoute A, Le BK (2017) Maximal monotonicity and cyclic monotonicity arising
in nonsmooth Lur’e dynamical systems. J Math Anal Appl 448:691–706
267. Brogliato B (2003) Some perspectives on the analysis and control of complementarity systems.
IEEE Trans Autom Control 48(6):918–935
268. Addi K, Adly S, Brogliato B, Goeleven D (2007) A method using the appproach of Moreau
and Panagiotopoulos for the mathematical formulation of non-regular circuits in electronics.
Nonlinear Anal: Hybrid Syst 1(1):30–43
269. Addi K, Brogliato B, Goeleven D (2011) A qualitative mathematical analysis of a class of
linear variational inequalities via semi-complementarity problems: applications in electronics.
Math Program A 126(1):31–67
270. Addi K, Goeleven D (2017) Complementarity and variational inequalities in electronics. In:
Daras N, Rassia T (eds) Operations research, engineering, and cyber security, vol 113. Springer
optimization and its applications. Springer International Publishing, Berlin, pp 1–43
References 255
271. Adly S, Goeleven D (2004) A stability theory for second-order nonsmooth dynamical systems
with application to friction problems. J Math Pures Appl 83:17–51
272. Adly S, Le BK (2014) Stability and invariance results for a class of non-monotone set-valued
Lur’e dynamical systems. Appl Anal 5:1087–1105
273. Adly S, Hantoute A, Nguyen BT (2018) Lyapunov stability of differential inclusions involving
prox-regular sets via maximal monotone operators. J Optim Theory Appl. https://doi.org/10.
1007/s10957-018-1446-7
274. Adly S, Hantoute A, Nguyen BT (2018) Equivalence between differential inclusions involving
prox-regular sets and maximal monotone operators. submitted. arXiv:1704.04913v2
275. Adly S, Le BK (2018) On semicoercive sweeping process with velocity constraint. Optim
Lett 12(4):831–843
276. Adly S, Hantoute A, Nguyen BT (2018) Lyapunov stability of differential inclusions involving
prox-regular sets via maximal monotone operators. J Optim Theory Appl. https://doi.org/10.
1007/s10957-018-1446-7
277. Brogliato B, Goeleven D (2005) The Krakovskii-LaSalle invariance principle for a class of
unilateral dynamical systems. Math Control Signals Syst 17:57–76
278. Brogliato B, Goeleven D (2011) Well-posedness, stability and invariance results for a class
of multivalued Lur’e dynamical systems. Nonlinear Anal Theory Methods Appl 74:195–212
279. Brogliato B, Goeleven D (2013) Existence, uniqueness of solutions and stability of nonmsooth
multivalued Lur’e dynamical systems. J Convex Anal 20(3):881–900
280. Brogliato B, Heemels WPMH (2009) Observer design for Lur’e systems with multivalued
mappings: a passivity approach. IEEE Trans Autom Control 54(8):1996–2001
281. Goeleven D, Brogliato B (2004) Stability and instability matrices for linear evolution varia-
tional inequalities. IEEE Trans Autom Control 49(4):521–534
282. Leine RI, van de Wouw N (2008) Uniform convergence of monotone measure differential
inclusions: with application to the control of mechanical systems with unilateral constraints.
Int J Bifurc Chaos 15(5):1435–1457
283. Tanwani A, Brogliato B, Prieur C (2014) Stability and observer design for Lur’e systems with
multivalued, nonmonotone, time-varying nonlinearities and state jumps. SIAM J Control
Optim 52(6):3639–3672
284. Tanwani A, Brogliato B, Prieur C (2018) Well-posedness and output regulation for implicit
time-varying evolution variational inequalities. SIAM J Control Optim 56(2):751–781
285. Tanwani A, Brogliato B, Prieur C (2016) Observer-design for unilaterally constrained
Lagrangian systems: a passivity-based approach. IEEE Trans Autom Control 61(9):2386–
2401
286. Utkin VI (1992) Sliding modes in control and optimization. Communications and control
engineering. Springer, Berlin
287. Baji B, Cabot A (2006) An inertial proximal algorithm with dry friction: finite convergence
results. Set Valued Anal 14(1):1–23
288. Korovin SK, Utkin VI (1972) Use of the slip mode in problems of static optimization. Autom
Remote Control 33(4):570–579
289. Korovin SK, Utkin VI (1974) Sliding mode based solution of static optimization and math-
ematical programming problems. applied aspects. In: Preprints of IFAC-IFORS symposium,
Varnia, Bulgaria, pp 1–8
290. Korovin SK, Utkin VI (1974) Using sliding modes in static optimization and nonlinear pro-
gramming problems. Automatica 10(5):525–532
291. Korovin SK, Utkin VI (1976) Method of piecewise-smooth penalty functions. Autom Remote
Control 37(4):39–48
292. Attouch H, Peypouquet J, Redont P (2014) A dynamical approach to an inertial forward-
backward algorithm for convex minimization. SIAM J Optim 24(1):232–256
293. Jayawardhana B, Logemann H, Ryan EP (2011) The circle criterion and input-to-state stability.
IEEE Control Syst Mag 31(4):32–67
294. Acary V, Bonnefon O, Brogliato B (2011) Nonsmooth modeling and simulation for switched
circuits, vol 69. Lecture notes in electrical engineering. Springer Science+Business Media
BV, Dordrecht
256 3 Kalman–Yakubovich–Popov Lemma
295. Brogliato B (2016) Nonsmooth mechanics. Models, dynamics and control, 3rd edn. Com-
munications and control engineering. Springer International Publishing, Switzerland. Erra-
tum/Addendum at https://hal.inria.fr/hal-01331565
296. Acary V, Brogliato B (2008) Numerical methods for nonsmooth dynamical systems, vol 35.
Lecture notes in applied and computational mechanics. Springer, Berlin
297. Cottle RW, Pang JS, Stone RE (1992) The linear complementarity problem. Academic
Press,Cambridge
298. Greenhalg S, Acary V, Brogliato B (2013) On preserving dissipativity of linear complemen-
tarity dynamical systems with the θ-method. Numer Math 125(4):601–637
299. Georgescu C, Brogliato B, Acary V (2012) Switching, relay and complementarity systems:
a tutorial on their well-posedness and relationships. Phys D: Nonlinear Phenom 241:1985–
2002. Special issue on Nonsmooth systems
300. Frasca R, Camlibel MK, Goknar IC, Iannelli L, Vasca F (2010) Linear passive networks with
ideal switches: consistent initial conditions and state discontinuities. IEEE Trans Circuits Syst
I Regular Papers 57(12):3138–3151
301. Adly S, Attouch H, Cabot A (2003) Finite time stabilization of nonlinear oscillators subject
to dry friction. In: Alart P, Maisonneuve O, Rockafellar RT (eds) Nonsmooth mechanics and
analysis: theoretical and numerical advances. Springer advances in mechanics and mathemat-
ics. Springer, Berlin, pp 289–304
302. Cabot A (2008) Stabilization of oscillators subject to dry friction: finite time convergence
versus exponential decay results. Trans Am Math Soc 360:103–121
303. Hou M, Tan F, Duan G (2016) Finite-time passivity of dynamic systems. J Frankl Instit
353:4870–4884
304. Kato T (1970) Accretive operators and nonlinear evolution equations in banach spaces. Non-
linear Funct Anal 18(1):138–161. Proceedings of Symposium Pure Math, Chicago
305. Goeleven D, Motreanu M, Motreanu V (2003) On the stability of stationary solutions of
evolution variational inequalities. Adv Nonlinear Var Inequal 6:1–30
306. Goeleven D (2017) Complementarity and variational inequalities in electronics. Mathematical
analysis and its applications. Academic Press, Cambridge
307. Goeleven D, Brogliato B (2005) Necessary conditions of asymptotic stability for uinlateral
dynamical systems. Nonlinear Anal: Theory, Methods Appl 61:961–1004
308. Murty KG (1997) Linear complementarity, linear and nonlinear programming. http://www-
personal.engin.umich.edu/~murty/book/LCPbook/
309. Brogliato B (2005) Some results on the controllability of planar evolution variational inequal-
ities. Syst Control Lett 54(1):65–71
310. Brézis H (1983) Analyse Fonctionnelle. Théorie et applications. Masson, Paris, France
311. Adly S (2017) A variational approach to nonsmooth dynamics. Springer briefs in mathematics.
Springer, Berlin
312. Alvarez J, Orlov I, Acho L (2000) An invariance principle for discontinuous dynamic systems
with applications to a Coulomb friction oscillator. ASME Dyn Syst Meas Control 122:687–
690
313. Bisoffi A, Lio MD, Teel AR, Zaccarian L (2018) Global asymptotic stability of a PID control
system with Coulomb friction. IEEE Trans Autom Control 63(8):2654–2661
314. Shevitz D, Paden B (1994) Lyapunov stability theory of nonsmooth systems. IEEE Trans
Autom Control 39(9):1910–1914
315. Leine RI, van de Wouw N (2008) Stability and convergence of mechanical systems with
unilateral constraints, vol 36. Lecture notes in applied and computational mechanics. Springer,
Berlin
316. Edmond JF, Thibault L (2005) Relaxation of an optimal control problem invloving a perturbed
sweeping process. Math Program Ser B 104(2–3):347–373
317. Edmond JF, Thibault L (2006) BV solutions of nonconvex sweeping process differential
inclusion with perturbation. J Differ Equ 226:135–179
318. Robinson SM (1975) Stability theory for systems of inequalities, I: linear systems. SIAM J
Numer Anal 12:754–769
References 257
319. Romanchuk BG, Smith MC (1999) Incremental gain analysis of piecewise linear systems and
application to the antiwindup problem. Automatica 35(7):1275–1283
320. Miranda-Villatoro F, Brogliato B, Castanos F (2018) Set-valued sliding-mode control of
uncertain linear systems: Continuous and discrete-time analysis. SIAM J Control Optim
56(3):1756–1793
321. van der Schaft AJ, Schumacher JM (1998) Complementarity modeling of hybrid systems.
IEEE Trans Autom Control 43(4):190–483
322. Adly S, Brogliato B, Le BK (2013) Well-posedness, robustness, and stability analysis of a
set-valued controller for Lagrangian systems. SIAM J Optim Control 51(2):1592–1614
323. Miranda-Villatoro F, Brogliato B, Castanos F (2017) Multivalued robust tracking control
of Lagrange systems: continuous and discrete-time algorithms. IEEE Trans Autom Control
62(9):4436–4450
324. Miranda-Villatoro F, Castanos F (2017) Robust output regulation of strongly passive lin-
ear systems with multivalued maximally monotone controls. IEEE Trans Autom Control
62(1):238–249
325. Heemels WPMH, Camlibel MK, Schumacher JM, Brogliato B (2011) Observer-based control
of linear complementarity systems. Int J Robust Nonlinear Control 21(10):1193–1218
326. van de Wouw N, Doris A, de Bruin JCA, Heemels WPMH, Nijmeijer H (2008) Output-
feedback control of Lur’e-type systems with set-valued nonlinearities: a Popov-criterion
approach. In: American control conference, Seattle, USA, pp 2316–2321
327. Adly S, Brogliato B, Le B (2016) Implicit Euler time-discretization of a class of Lagrangian
systems with set-valued robust controller. J Convex Anal 23(1):23–52
328. Krasnosel’skii AM, Pokkrovskii AV (2006) Dissipativity of a nonresonant pendulum with
ferromagnetic friction. Autom Remote Control 67(2):221–232
329. Barabanov NE, Yakubovich VA (1979) Absolute stability of control systems with one hys-
teresis nonlinearity. Autom Remote Control 12:5–12
330. Yakubovich VA (1963) The conditions for absolute stability of a control system with a
hysteresis-type nonlinearity. Sov Phys Dokl 8(3):235–237
331. Jayawardhana B, Ouyang R, Andrieu V (2012) Stability of systems with the Duhem hysteresis:
the dissipativity approach. Automatica 48:2657–2662
332. Ouyang R, Jayawardhana B (2014) Absolute stability analysis of linear systems with Duhem
hysteresis operator. Automatica 50:1860–1866
333. Paré T, Hassibi A, How J (2001) A KYP lemma and invariance principle for systems with
multiple hysteresis non-linearities. Int J Control 74(11):1140–1157
334. Pogromsky AY, Heemels WPMH, Nijmeijer H (2003) On solution concepts and well-
posedness of linear relay systems. Automatica 39(12):2139–2147
335. Hitz L, Anderson BDO (1969) Discrete positive-real functions and their application to system
stability. Proc IEE 116:153–155
336. Tao G, Ioannou PA (1990) Necessary and sufficient conditionsfor strictly positive real matri-
ces. Proc Inst Elect Eng 137:360–366
337. Hagiwara T, Mugiuda T (2004) Positive-realness analysis of sampled-data systems and its
applications. Automatica 40:1043–1051
338. Premaratne K, Jury EI (1994) Discrete-time positive-real lemma revisited: the discrete-time
counterpart of the Kalman-Yakubovich lemma. IEEE Trans Circuits Syst I(41):747–750
339. Caines PE (1988) Linear stochastic systems. Probability and mathematical statistics. Wiley,
New York
340. Ekanayake MM, Premaratne K, Jury EI (1996) Some corrections on “Discrete-time positive-
real lemma revisited: the discrete-time counterpart of the Kalman-Yakubovitch lemma”. IEEE
Trans Circuits Syst I: Fundam Theory Appl 43(8):707–708
341. Kapila V, Haddad WM (1996) A multivariable extension of the Tsypkin criterion using a
Lyapunov function approach. IEEE Trans Autom Control 41(1):149–152
342. Lopez EMN (2005) Several dissipativity and passivity implications in the linear discrete-time
setting. Math Prob Eng 6:599–616
258 3 Kalman–Yakubovich–Popov Lemma
343. Ljung L (1977) On positive real transfer functions and the convergence of some recursive
schemes. IEEE Trans Autom Control 22(4):539–551
344. Landau ID (1976) Unbiaised recursive identification using model reference adaptive tech-
niques. IEEE Trans Autom Control 21:194–202
345. Landau ID (1974) An asymptotic unbiased recursive identifier for linear systems. In: IEEE
conference on proceedings of decision and control including the 13th symposium on adaptive
processes, Phoenix, Arizona, USA, pp 288–294
346. Mosquera C, Perez F (2001) On the strengthened robust SPR problem for discrete time
systems. Automatica 37(4):625–628
347. Byrnes CI, Lin W (1994) Losslessness, feedback equivalence, and the global stabilization of
discrete-time nonlinear systems. IEEE Trans Autom Control 39(1):83–98
348. Zhou S, Lam J, Feng G (2005) New characterization of positive realness and control of a class
of uncertain polytopic discrete-time systems. Syst Control Lett 54:417–427
349. Kaneko O, Rapisarda P, Takada K (2005) Totally dissipative systems. Syst Control Lett
54:705–711
350. Bianchini G (2002) Synthesis of robust strictly positive real discrete-time systems with l2
parametric perturbations. IEEE Trans Circuits Syst I- Fundam Theory Appl 49(8):1221–1225
351. Mahmoud MS, Xie L (2000) Positive real analysis and synthesis of uncertain discrete time
systems. IEEE Trans Circuits Syst I- Fundam Theory Appl 47(3):403–406
352. Lopez EMN, Fossas-Colet E (2004) Feedback passivity of nonlinear discrete-time systems
with direct input-output link. Automatica 40(8):1423–1428
353. Arov DZ, Kaashoek MA, Pik DR (2002) The Kalman-Yakubovich-Popov inequality and
infinite dimensional discrete time dissipative systems, Report no 26, 2002/2203, spring, ISSN
1103–467X, ISRN IML-R-26-02/03-SE+spring. Institut Mittag-Leffler, The Royal Swedish
Academy of Sciences
354. Farhood M, Dullerud GE (2005) Duality and eventually periodic systems. Int J Robust Non-
linear Control 15:575–599
355. Farhood M, Dullerud GE (2002) LMI tools for eventually periodic systems. Syst Control Lett
47:417–432
356. Ma CCH, Vidyasagar M (1986) Nonpassivity of linear discrete-time systems. Syst Control
Lett 7:51–53
357. Brogliato B, Landau ID, Lozano R (1991) Adaptive motion control of robot manipulators: a
unified approach based on passivity. Int J Robust Nonlinear Control 1(3):187–202
358. Costa-Castello R, Grino R (2006) A repetitive controller for discrete-time passive systems.
Automatica 42(9):1605–1610
359. Messner W, Horowitz R, Kao WW, Boals M (1991) A new adaptive learning rule. IEEE Trans
Autom Control 36(2):188–197
360. Sadegh N, Horowitz R, Kao WW, Tomizuka M (1990) A unified approach to design of adaptive
and repetitive controllers for robotic manipulators. ASME J Dyn Syst Meas 112(4):618–629
361. Horowitz R, Kao WW, Boals M, Sadegh N (1989) Digital implementation of repetitive con-
trollers for robotic manipulators. In: Proceedings of IEEE international conference on robotics
and automation, Phoenix, AZ, USA, pp 1497–1503
362. Colgate JE, Schenkel G (1997) Passivity of a class of sampled-data systems: application to
haptic interface. J Robot Syst 14(1):37–47
363. Monteiro-Marques MDP (1993) Differential inclusions in nonsmooth mechanical problems:
shocks and dry friction. Progress in nonlinear differential equations and their applications.
Birkhauser, Basel
364. Tsypkin YZ (1964) A criterion for absolute stability of automatic pulse systems with mono-
tonic characteristics of the nonlinear element. Sov Phys Dokl 9:263–366
365. Tsypkin YZ (1962) The absolute stability of large scale, nonlinear sampled data systems.
Dokl Akadem Nauk SSSR 145:52–55
366. Tsypkin YZ (1963) Fundamentals of the theory of nonlinear pulse control systems. In: Pro-
ceedings of the second international congress of IFAC on automatic control, Basel, CH, pp
172–180
References 259
367. Tsypkin YZ (1964) Absolute stability of equilibrium positions and of responses in nonlinear,
sampled data, automatic systems. Autom Remote Control 24(12):1457–1471
368. Tsypkin YZ (1964) Frequency criteria for the absolute stability of nonlinear sampled data
systems. Autom Remote Control 25(3):261–267
369. Larsen M, Kokotovic PV (2001) A brief look at the Tsypkin criterion: from analysis to design.
Int J Adapt Control Signal Process 15(2):121–128
370. Tsypkin, Y.Z.: Memorial issue. The International Journal of Adaptive Control and Signal
Processing (S. Bittanti, Ed.) 15(2) (2001)
371. Jury EI, Lee BW (1964) On the stability of a certain class of nonlinear sampled-data systems.
IEEE Trans Autom Control 9(1):51–61
372. Jury EI, Lee BW (1964) On the absolute stability of nonlinear sampled-data systems. IEEE
Trans Autom Control 9(4):551–554
373. Jury EI, Lee BW (1966) A stability theory on multinonlinear control systems. In: Proceedings
od IFAC world congress, London, UK, vol 28, pp A1–A11
374. Hagiwara T, Kuroda G, Araki M (1998) Popov-type criterion for stability of nonlinear
sampled-data systems. Automatica 34(6):671–682
375. Hagiwara T, Araki M (1996) Absolute stability of sampled-data systems with a sector non-
linearity. Syst Control Lett 27:293–304
376. Gonzaga C, Jungers M, Daafouz J (2012) Stability analysis of discrete-time Lur’e systems.
Automatica 48:2277–2283
377. Ahmad NS, Heath WP, Li G (2013) LMI-based stability criteria for discrete-time Lur’e sys-
tems with monotonic, sector and slope restricted nonlinearities. IEEE Trans Autom Control
58(2):459–465
378. Ahmad NS, Carrasco J, Heath WP (2015) A less conservative LMI condition for stabil-
ity of discrete-time systems with slope-restricted nonlinearities. IEEE Trans Autom Control
60(6):1692–1697
379. Park J, Lee SY, Park P (2019) A less conservative stability criterion for discrete-time Lur’e
systems with sector and slope restrictions. IEEE Trans Autom Control. https://doi.org/10.
1109/TAC.2019.2899079
380. de la Sen M (2002) Preserving positive realness through discretization. Positivity 6:31–45
381. Liu M, Xiong J (2018) Bilinear transformation for discrete-time positive real and negative
imaginary systems. IEEE Trans Automc Control. https://doi.org/10.1109/TAC.2018.2797180
382. Galias Z, Yu X (2007) Euler’s discretization of single input sliding-mode control systems.
IEEE Trans Autom Control 52(9):1726–1730
383. Acary V, Brogliato B (2010) Implicit Euler numerical scheme and chattering-free implemen-
tation of sliding mode systems. Syst Control Lett 59:284–293
384. Acary V, Brogliato B, Orlov Y (2012) Chattering-free digital sliding-mode control with state
observer and disturbance rejection. IEEE Trans Autom Control 57(5):1087–1101
385. Huber O, Acary V, Brogliato B (2016) Lyapunov stability and performance analysis of the
implicit discrete sliding mode control. IEEE Trans Autom Control 61(10):3016–3030
386. Beikzadeh H, Marquez HJ (2013) Dissipativity of nonlinear multirate sampled data systems
under emulation design. Automatica 49:308–312
387. Acary V, Brogliato B, Orlov Y (2016) Comments on “chattering-free digital sliding-mode con-
trol with state observer and disturbance rejection”. IEEE Trans Autom Control 61(11):3707
388. Huber O, Acary V, Brogliato B, Plestan F (2016) Implicit discrete-time twisting controller
without numerical chattering: analysis and experimental results. Control Eng Pract 46:129–
141
389. Efimov D, Polyakov A, Levant A, Perruquetti W (2017) Realization and discretization of
asymptotically stable homogeneous systems. IEEE Trans Autom Control 62(11):5962–5969
390. Polyakov A, Efimov D, Brogliato B (2019) Consistent discretization of finite-time and fixed-
time stable systems. SIAM J Control Optim 57(1):78–103
391. Astrom KJ, Hagander P, Sternby J (1984) Zeros of sampled systems. Automatica 20(1):31–38
392. Rohrer RA, Nosrati H (1981) Passivity considerations in stability studies of numerical inte-
gration algorithms. IEEE Trans Circuits Syst 28(9):857–866
260 3 Kalman–Yakubovich–Popov Lemma
393. Jiang J (1993) Preservations of positive realness under discretizations. J Frankl Inst
330(4):721–734
394. Fardad M, Bamieh B (2009) A necessary and sufficient frequency domain criterion for the
passivity of SISO sampled-data systems. IEEE Trans Autom Control 54(3):611–614
395. Costa-Castello R, Fossas E (2007) On preserving passivity in sampled-date linear systems.
Eur J Control 6:583–590
396. Antsaklis PJ, Goodwine B, Gupta V, McCourt MJ, Wang Y, Wu P, Xia M, Yu H, Zhu F
(2013) Control of cyberphysical systems using passivity and dissipativity based methods. Eur
J Control 19:379–388
397. Acary V (2015) Energy conservation and dissipation properties of time-integration methods
for nonsmooth elastodynamics with contact. ZAMM-J Appl Math Mech 96(5):585–603
398. Laila DS, Nesic D, Teel AR (2002) Open and closed-loop dissipation inequalities under
sampling and controller emulation. Eur J Control 8:109–125
399. Aoues S, Loreto MD, Eberard D, Marquis-Favre W (2017) Hamiltonian systems discrete-time
approximations; losslessness, passivity and composabiilty. Syst Control Lett 110:9–14
400. Stramigioli S, Secchi C, van der Schaft AJ, Fantuzzi C (2005) Sampled data passivity and
discrete port-Hamiltonian systems. IEEE Trans Robot 21(4):574–587
401. Lopezlena R, Scherpen JMA (2006) Energy functions for dissipativity-based balancing of
discrete-time nonlinear systems. Math Control Signals Syst 18:345–368
402. Monaco S, Normand-Cyrot D (2011) Nonlinear average passivity and stabilizing controllers
in discrete time. Syst Control Lett 60(6):431–439
403. Mizumoto I, Ohdaira S, Iwai Z (2010) Output feedback strict passivity of discrete-time non-
linear systems and adaptive control system design with a PFC. Automatica 46(9):1503–1509
404. Zhao Y, Gupta V (2016) Feedback passivation of discrete-time systems under communication
constraints. IEEE Trans Autom Control 61(11):3521–3526
405. Costa-Castello R, Wang D, Grino R (2009) A passive repetitive controller for discrete-time
finite-frequency positive-real systems. IEEE Trans Autom Control 54(4):800–804
406. Pakshin P, Emelianova J, Emelianov M, Galkowski K, Rogers E (2016) Dissipativity and
stabilization of nonlinear repetitive processes. Syst Control Lett 91:14–20
407. Paszke W, Rogers E, Galkowski K (2013) KYP lemma based stability and control law design
for differential linear repetitive processes with applications. Syst Control Lett 62:560–566
408. Gruene L (2013) Economic receding horizon control without terminal constraints. Automatica
43(3):725–734
409. Gruene L, Mueller MA (2016) On the relation between strict dissipativity and turnpike prop-
erties. Syst Control Lett 90:45–53
410. Berberich J, Koehler J, Allgoewer F, Mueller MA (2018) Indefinite linear quadratic optimal
control: strict dissipativity and turnpike properties. IEEE Control Syst Lett 2(3):399–404
411. Gaitsgory V, Gruene L, Hoeger M, Kellett CM (2018) Stabilization of strictly dissipative
discrete time systems with discounted optimal control. Automatica 93:311–320
412. Gruene L, Kellett CM, Weller SR (2017) On the relation between turnpike properties for finite
and infinite horizon control problems. J Optim Theory Appl 173:727–745
413. Gruene L, Guglielmi R (2018) Turnpike properties and strict dissipativity for discrete time
linear quadratic optimal control problems. SIAM J Control Optimi 56(2):1282–1302
414. Zanon M, Gruene L, Diehl M (2017) periodic optimal control, dissipativity and MPC. IEEE
Trans Autom Control 62(6):29432,949
415. Olanrewaju OI, Maciejowski JM (2017) Implications of dissipativity on stability of economic
model predictive control-The indefinite linear quadratic case. Syst Control Lett 100:43–50
416. Kohler J, Mueller MA, Allgoewer F (2018) On periodic dissipativity notions in economic
model predictive control. IEEE Control Syst Lett 2(3):501–506
417. Faulwasser T, Korda M, Jones CN, Bonvin D (2017) On turnpike and dissipativity properties
of continuous-time optimal control problems. Automatica 81:297–304
418. Trélat E, Zhang C (2018) Integral and measure-turnpike properties for infinite-dimensional
optimal control systems. Math Control, Signals Syst 30. Article 3
References 261
419. Barkin AI (2008) On absolute stability of discrete systems. Autom Remote Control
69(10):1647–1652
420. Alamo T, Cepeda A, Fiacchini M, Camacho EF (2009) Convex invariant sets for discrete-time
Lur’e systems. Automatica 45(4):1066–1071
Chapter 4
Dissipative Systems
In this chapter, we further study the concept of dissipative systems, which is a
very useful tool in the analysis and synthesis of control laws for linear and non-
linear dynamical systems. One of the key properties of a dissipative dynamical
system is that, in the absence of external supply, the total energy stored in the
system decreases with time. Dissipativeness can be considered as an extension of
positive real (PR) systems to the nonlinear case. Some relationships between
positive real and passive systems have been established in Chap. 2. There exist
several important subclasses of dissipative nonlinear systems, with slightly different
properties, which are important in the analysis. Dissipativity is
useful in stabilizing mechanical systems like fully actuated robot manipulators [1],
robots with flexible joints [2–6], underactuated robot manipulators, electric motors,
robotic manipulation [7], learning control of manipulators [8, 9], fully actuated and
underactuated satellites [10], combustion engines [11], power converters [12–17],
neural networks [18–21], smart actuators [22], piezoelectric structures [23], haptic
environments and interfaces [24–33], particulate processes [34], process and chem-
ical systems [35–39], missile guidance [40], model helicopters [41], magnetically
levitated shafts [42, 43], biological and physiological systems [44, 45], flat glass
manufacture [46], and visual feedback control [47] (see Sect. 9.4 for more refer-
ences). Some of these examples will be presented in the following chapters.
Dissipative systems theory is intimately linked to Lyapunov stability theory. There
exist tools from the dissipativity approach that can be used to generate Lyapunov
functions. A difference between the two approaches is that the state of the system and
the equilibrium point are notions that are required in the Lyapunov approach, while
the dissipative approach is rather based on the input–output behavior of the plant. The
input–output properties of a closed-loop system can be studied using L_p stability
analysis. The properties of L_p signals can then be used to analyze the stability of a
closed-loop control system. L_p stability analysis has been studied in depth by Desoer
and Vidyasagar [48]. A presentation of this notion will also be given in this book,
since these tools are very useful in the stability analysis of control systems and, in
particular, in the control of robot manipulators.
© Springer Nature Switzerland AG 2020
B. Brogliato et al., Dissipative Systems Analysis and Control, Communications
and Control Engineering, https://doi.org/10.1007/978-3-030-19420-8_4
4.1 Normed Spaces and L_p Norms

Let us briefly review next the notation and definitions of normed spaces, L_p norms,
and properties of L p signals. For a more complete presentation, the reader is referred
to [48] or any monograph on mathematical analysis [57–59]. Let E be a linear
space over the field K (typically K is R or the complex field C). The function
ρ(·) : E → R+ is a norm on E if and only if: (i) ρ(x) = 0 ⟺ x = 0; (ii) ρ(λx) =
|λ| ρ(x) for all λ ∈ K and all x ∈ E; (iii) ρ(x + y) ≤ ρ(x) + ρ(y) for all x, y ∈ E
(triangle inequality).
Let x : R → R be a function, and let |·| denote the absolute value. The most common
signal norms are the L_1, L_2, L_p, and L_∞ norms, which are, respectively, defined as

|| x ||_1 = ∫ |x(t)| dt,   || x ||_2 = ( ∫ |x(t)|^2 dt )^{1/2},

|| x ||_p = ( ∫ |x(t)|^p dt )^{1/p} for 2 ≤ p < +∞,

|| x ||_∞ = ess sup_{t} |x(t)| = inf{ a | |x(t)| ≤ a almost everywhere },

where the integrals are taken over R (i.e., ∫ = ∫_R) or, if the signals are
defined on R+, over [0, +∞) (i.e., ∫ = ∫_0^{+∞}). We say that a function f(·) belongs to L_p if and only if
f is locally Lebesgue integrable (i.e., | ∫_a^b f(t) dt | < +∞ for any a ≤ b in R) and
|| f ||_p < +∞. To recapitulate, L_p equipped with ||·||_p is a normed linear space; in
particular, for f_1, ..., f_N ∈ L_p, the triangle inequality gives || f_1 + ··· + f_N ||_p ≤
Σ_{i=1}^N || f_i ||_p.
Proposition 4.1 If f ∈ L_1 ∩ L_∞, then f ∈ L_p for all 1 ≤ p ≤ +∞.

Proof Since f ∈ L_1, the set A = {t | | f(t)| ≥ 1} has finite Lebesgue measure. There-
fore, since f ∈ L_∞,

∫_A | f(t)|^p dt < ∞, for all p ∈ [1, +∞).

Define the set B = {t | | f(t)| < 1}. Then, since | f(t)|^p ≤ | f(t)| on B, we have

∫_B | f(t)|^p dt ≤ ∫_B | f(t)| dt ≤ ∫ | f(t)| dt < ∞, for all p ∈ [1, +∞).

Finally, ∫ | f(t)|^p dt = ∫_A | f(t)|^p dt + ∫_B | f(t)|^p dt < +∞, so that f ∈ L_p.
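Proposition 4.1 can be illustrated numerically. The function f(t) = e^{−t} belongs to L_1 ∩ L_∞ on [0, +∞), so its p-norms should stay finite for every p; here they even admit the closed form (1/p)^{1/p}. The following sketch (the midpoint quadrature, horizon T, and tolerance are choices of this illustration, not part of the proposition) checks a few values of p:

```python
import math

def lp_norm_truncated(f, p, T=50.0, n=100_000):
    # Midpoint-rule approximation of ( integral_0^T |f(t)|^p dt )^(1/p).
    h = T / n
    s = sum(abs(f((k + 0.5) * h)) ** p for k in range(n)) * h
    return s ** (1.0 / p)

f = lambda t: math.exp(-t)  # f is in L1 and in Linfty on [0, +infty)

for p in (1, 2, 5, 10):
    approx = lp_norm_truncated(f, p)
    exact = (1.0 / p) ** (1.0 / p)  # closed form: integral of e^{-p t} over [0, oo) is 1/p
    assert abs(approx - exact) < 1e-3
```

The same routine applied to f(t) = 1 (which is in L_∞ but not in L_1) produces truncated norms T^{1/p} that grow without bound as the horizon T increases.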
Example: consider, for t > 0, the functions

f_1(t) = 1,  f_2(t) = 1/(1+t),  f_3(t) = 1/(t^{1/4}(1+t)^{1/2}),

f_4(t) = e^{−t},  f_5(t) = 1/(t^{1/4}(1+t)),  f_6(t) = 1/(t^{1/2}(1+t)).

It can be shown that (see Fig. 4.1): f_1 ∉ L_1, f_1 ∉ L_2 and f_1 ∈ L_∞; f_2 ∉ L_1, f_2 ∈ L_2
and f_2 ∈ L_∞; f_3 ∉ L_1, f_3 ∈ L_2 and f_3 ∉ L_∞; f_4 ∈ L_1, f_4 ∈ L_2 and f_4 ∈ L_∞;
f_5 ∈ L_1, f_5 ∈ L_2 and f_5 ∉ L_∞; f_6 ∈ L_1, f_6 ∉ L_2 and f_6 ∉ L_∞.
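Such membership claims can be probed by truncating the integrals at a horizon T. For f(t) = 1/(1+t), the truncated L_1 integral grows like ln(1+T), so f ∉ L_1, while the truncated L_2 integral saturates at 1, so f ∈ L_2. A sketch, assuming a simple midpoint quadrature:

```python
import math

def truncated_integrals(f, T, n=200_000):
    # Midpoint-rule approximations of integral_0^T |f| dt and integral_0^T |f|^2 dt.
    h = T / n
    l1 = sum(abs(f((k + 0.5) * h)) for k in range(n)) * h
    l2 = sum(f((k + 0.5) * h) ** 2 for k in range(n)) * h
    return l1, l2

f2 = lambda t: 1.0 / (1.0 + t)

for T in (10.0, 100.0, 1000.0):
    l1, l2 = truncated_integrals(f2, T)
    assert abs(l1 - math.log(1.0 + T)) < 1e-2          # L1 integral keeps growing with T
    assert abs(l2 - (1.0 - 1.0 / (1.0 + T))) < 1e-2    # L2 integral tends to 1
```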
The following facts are very useful to prove convergence of signals under different
conditions.
Fact 1 If V : R → R is a nondecreasing function (see Fig. 4.2) and if V(t) ≤ M for
some M ∈ R and all t ∈ R, then V(·) converges.

Proof Since V(·) is nondecreasing, it can only increase or remain constant. Assume
that V(·) does not converge to a finite limit. Since it cannot oscillate, V(·) then has
to diverge to infinity: there exists a strictly increasing sequence of time instants t_1,
t_2, t_3, ... and a δ > 0 such that V(t_i) + δ < V(t_{i+1}) for all i. This contradicts the
upper bound M. Therefore, V(t_i) has a limit for any sequence of time instants
{t_i}_{i≥1}, so that V(·) converges.
Examples:
• If $\int_0^t |s(\tau)|\,d\tau \le M < \infty$ for all $t$, then $\int_0^t |s(\tau)|\,d\tau$ converges as $t \to +\infty$.
• Let $V(\cdot)$ be differentiable. Then, $V(\cdot) \ge 0$ and $\dot V(\cdot) \le 0 \Longrightarrow V(\cdot)$ converges.
Fact 2 If $\int_0^t |f(t')|\,dt'$ converges, then $\int_0^t f(t')\,dt'$ converges.
Proof In view of the assumption, we have
$$\infty > \int_0^t |f(t')|\,dt' = \int_{\{t' \le t \,\mid\, f(t') > 0\}} |f(t')|\,dt' + \int_{\{t' \le t \,\mid\, f(t') \le 0\}} |f(t')|\,dt'.$$
Then, both integrals in the right-hand side above converge. We also have
$$\int_0^t f(t')\,dt' = \int_{\{t' \le t \,\mid\, f(t') > 0\}} |f(t')|\,dt' - \int_{\{t' \le t \,\mid\, f(t') \le 0\}} |f(t')|\,dt'.$$
Then, $\int_0^t f(\tau)\,d\tau$ converges too.
In particular, if $\dot f \in L_1$, then using Fact 1 it follows that $\int_0^t |\dot f(s)|\,ds$ converges. By Fact 2, this implies that $\int_0^t \dot f(s)\,ds = f(t) - f(0)$ converges, which in turn implies that $f(\cdot)$ converges too.
Definition 4.2 The function $(t, x) \mapsto f(t, x)$ is said to be globally Lipschitz (with respect to $x$) if there exists a bounded $k \in \mathbb{R}_+$ such that
$$\| f(t, x) - f(t, y)\| \le k \|x - y\| \quad \text{for all } x, y \in \mathbb{R}^n \text{ and all } t. \quad (4.2)$$
Definition 4.3 The function $(t, x) \mapsto f(t, x)$ is said to be locally Lipschitz (with respect to $x$) if (4.2) holds for all $x, y \in K$, where $K \subset \mathbb{R}^n$ is a compact set. Then, $k$ may depend on $K$.
Definition 4.5 The function $(t, x) \mapsto f(t, x)$ is said to be Lipschitz with respect to time if there exists a bounded $k$ such that
$$\| f(t, x) - f(t', x)\| \le k |t - t'| \quad \text{for all } t, t' \text{ and all } x.$$
Definition 4.6 The function $f(\cdot)$ is uniformly continuous in a set $A$ if for all $\varepsilon > 0$, there exists $\delta(\varepsilon) > 0$ such that $|x - y| < \delta(\varepsilon) \Rightarrow |f(x) - f(y)| < \varepsilon$ for all $x, y \in A$.
Remark 4.7 Uniform continuity and Lipschitz continuity are two different notions. Any Lipschitz function is uniformly continuous; however, the converse implication is not true. For instance, the function $x \mapsto \sqrt{x}$ is uniformly continuous on $[0, 1]$, but it is not Lipschitz on $[0, 1]$. This may be easily checked from the definitions. The criterion in Fact 6 is clearly a sufficient condition only ("very sufficient", one should say!) to assure uniform continuity of a function. Furthermore, uniform continuity has a meaning on a set: asking whether a function is uniformly continuous at a point is meaningless [58].
Fact 6 f˙ ∈ L∞ ⇒ f is uniformly continuous.
Proof $\dot f \in L_\infty$ implies that $f$ is Lipschitz with respect to time $t$, hence $f(\cdot)$ is uniformly continuous.
Fact 7 If f ∈ L2 and is Lipschitz with respect to time then limt→+∞ f (t) = 0.
Proof By assumption, $\int_0^t f^2(s)\,ds < \infty$ for all $t$, and $|f(t) - f(t')| \le k|t - t'|$ for all $t, t'$. Assume that $|f(t_1)| \ge \varepsilon$ for some $t_1$ and some $\varepsilon > 0$, and that $|f(t_2)| = 0$ for some $t_2 \ge t_1$. Then $\varepsilon \le |f(t_1) - f(t_2)| \le k |t_1 - t_2|$, i.e., $|t_1 - t_2| \ge \frac{\varepsilon}{k}$. We are now interested in computing a lower bound for $\int_{t_1}^{t_2} f^2(t)\,dt$. We therefore assume that on the interval $(t_1, t_2)$ the function $f(\cdot)$ decreases at the maximum rate, which is given by $k$. We therefore have (see Fig. 4.3)
$$\int_{t_1}^{t_2} f^2(s)\,ds \ \ge\ \int_0^{\varepsilon/k} (\varepsilon - k s)^2\,ds \ =\ \frac{\varepsilon^3}{3k}.$$
Since f ∈ L2 , it is clear that the number of times | f (t)| can go from 0 to ε is finite
on R. Since ε > 0 is arbitrary, we conclude that f (t) → 0 as t → ∞.
Fact 8 If f ∈ L p (1 ≤ p < ∞) and if f is uniformly continuous, then f (t) → 0
as t → +∞.
Proof This result can be proved by contradiction following the proof of Fact 7.
Fact 9 If f 1 ∈ L2 and f 2 ∈ L2 , then f 1 + f 2 ∈ L2 .
Proof The result follows from
$$\int (f_1(t) + f_2(t))^2\,dt = \int \left( f_1^2(t) + f_2^2(t) + 2 f_1(t) f_2(t) \right) dt \le 2 \int \left( f_1^2(t) + f_2^2(t) \right) dt < +\infty.$$
The following Lemma describes the behavior of an asymptotically stable linear sys-
tem when its input is L2 bounded.
Lemma 4.8 Consider the state space representation of a linear time-invariant system $\dot x(t) = A x(t) + B u(t)$, with $u(t) \in \mathbb{R}^m$, $x(t) \in \mathbb{R}^n$ and $A$ exponentially stable. If
u ∈ L2 then x ∈ L2 ∩ L∞ , ẋ ∈ L2 and limt→+∞ x(t) = 0.
Remark 4.9 The system above with u ∈ L2 does not necessarily have an equilibrium
point. Therefore, we cannot use the Lyapunov approach to study the stability of the
system.
Proof of Lemma 4.8 Since $A$ is exponentially stable, there exist $P = P^T \succ 0$ and $Q \succ 0$ such that $P A + A^T P = -Q$, which is the well-known Lyapunov equation. Consider the following positive definite function:
$$V(x, t) = x^T P x + k \int_t^{\infty} u^T(s) u(s)\,ds.$$
Note that
$$2 u^T B^T P x \le 2 |u^T B^T P x| \le 2 \|u\| \|B^T P\| \|x\| = 2 \|u\| \|B^T P\| \lambda_{\min}(Q)^{-\frac{1}{2}} \cdot \lambda_{\min}(Q)^{\frac{1}{2}} \|x\| \le \frac{2 \|u\|^2 \|B^T P\|^2}{\lambda_{\min}(Q)} + \frac{\lambda_{\min}(Q)}{2} \|x\|^2, \quad (4.4)$$
where we have used the inequality $2ab \le a^2 + b^2$, for all $a, b \in \mathbb{R}$. Choosing $k = \frac{2 \|B^T P\|^2}{\lambda_{\min}(Q)}$ we get
$$\dot V(x(t), t) \le -\frac{\lambda_{\min}(Q)}{2} \|x(t)\|^2.$$
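Lemma 4.8 is easy to illustrate numerically. The sketch below simulates an arbitrarily chosen stable scalar system with the $L_2$ (but not $L_1$) input $u(t) = 1/(1+t)$, using explicit Euler integration; the step size and horizon are arbitrary.

```python
import math

# Numerical illustration of Lemma 4.8 on the scalar system
#   xdot = a*x + b*u,  a = -1 (exponentially stable), b = 1,
# driven by the L2 input u(t) = 1/(1+t).
a, b = -1.0, 1.0
u = lambda t: 1.0 / (1.0 + t)

dt, T = 1e-3, 200.0
x, t = 1.0, 0.0            # nonzero initial state
x_l2 = 0.0                 # running value of the integral of x^2
while t < T:
    x_l2 += x * x * dt
    x += (a * x + b * u(t)) * dt   # explicit Euler step
    t += dt

print(abs(x))   # small: x(t) tends to 0
print(x_l2)     # bounded: x belongs to L2
```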
Let us first briefly review the gradient-type parameter estimation algorithm, which is widely used in adaptive control and in parameter estimation. Let $y(t) \in \mathbb{R}$, $\phi(t) \in \mathbb{R}^n$ be measurable functions¹ which satisfy the following linear relation:
$$y(t) = \theta^T \phi(t),$$
where $\theta \in \mathbb{R}^n$ is an unknown constant vector. Define $\hat y(t) = \phi(t)^T \hat\theta(t)$ and $e(t) = \hat y(t) - y(t)$. Then
1 Here measurable is to be taken in the physical sense, not in the mathematical one. In other words,
we assume that the process is well-equipped with suitable sensors.
$$e(t) = \tilde\theta^T(t) \phi(t),$$
where $\tilde\theta(t) = \hat\theta(t) - \theta$. Note that $\frac{d\tilde\theta}{dt} = \frac{d\hat\theta}{dt}$. Define the following positive function:
$$V(\tilde\theta, \phi) = \frac{1}{2} e^2. \quad (4.7)$$
Then
$$\dot V(\tilde\theta, \phi) = \frac{\partial V}{\partial \tilde\theta} \frac{d\tilde\theta}{dt} + \frac{\partial V}{\partial \phi} \frac{d\phi}{dt}. \quad (4.8)$$
From the last equation, it follows that for $v = 0$, $V(\cdot)$ is a nonincreasing function, and thus $V$, $x$, and $\tilde\theta \in L_\infty$. Integrating the last equation, it follows that $x \in L_2 \cap L_\infty$. Assume that $f(\cdot)$ has the required property so that $x \in L_\infty \Rightarrow f(x) \in L_\infty$. It follows that $u \in L_\infty$ and also $\dot x \in L_\infty$. $x \in L_2$ and $\dot x \in L_\infty$ imply $\lim_{t \to +\infty} x(t) = 0$. Let us note from the last line of (4.10) that the operator $H: v \mapsto x$ is output strictly passive (OSP), as will be defined later.
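The gradient update law itself is not reproduced in the excerpt above; a standard choice (an assumption here, not spelled out in the text) is $\dot{\hat\theta} = -\gamma \phi e$ with adaptation gain $\gamma > 0$, which makes the estimation error decrease along the error dynamics. A minimal discrete-time sketch:

```python
import math

# Gradient-type parameter estimation (sketch): y(t) = theta^T phi(t).
# The update law theta_hat_dot = -gamma * phi * e is a standard choice,
# assumed here; gain, regressor, and horizon are arbitrary.
theta = [2.0, -1.0]                   # unknown "true" parameters
gamma, dt = 2.0, 1e-3                 # adaptation gain and step size
theta_hat = [0.0, 0.0]

t = 0.0
while t < 50.0:
    phi = [math.sin(t), math.cos(2 * t)]        # persistently exciting regressor
    y = sum(p * q for p, q in zip(theta, phi))  # measured output
    y_hat = sum(p * q for p, q in zip(theta_hat, phi))
    e = y_hat - y                               # output prediction error
    # Euler discretization of theta_hat_dot = -gamma * phi * e
    theta_hat = [p - gamma * q * e * dt for p, q in zip(theta_hat, phi)]
    t += dt

print(theta_hat)   # close to the true parameters [2.0, -1.0]
```

With a persistently exciting regressor, the parameter error also converges, not just the output error.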
In order to present the Passivity Theorem and the Small Gain Theorem, we will require the notion of extended spaces, to which we next give a brief introduction. For a more detailed presentation, the reader is referred to [48].
Definition 4.12 (Linear maps) Let $E$ be a linear space over $K$ ($\mathbb{R}$ or $\mathbb{C}$). Let $\tilde{\mathcal{L}}(E, E)$ be the class of all linear maps from $E$ into $E$. $\tilde{\mathcal{L}}(E, E)$ is a linear space satisfying the following properties for all $x \in E$, for all $A, B \in \tilde{\mathcal{L}}(E, E)$, for all $\alpha \in K$:
$$\begin{cases} (A + B)x = Ax + Bx \\ (\alpha A)x = \alpha(Ax) \\ (AB)x = A(Bx). \end{cases} \quad (4.11)$$
Definition 4.13 (Induced Norms) Let $|\cdot|$ be a norm on $E$ and $A \in \tilde{\mathcal{L}}(E, E)$. The induced norm of the linear map $A$ is defined as
$$\|A\| \triangleq \sup_{x \ne 0} \frac{|Ax|}{|x|} = \sup_{|z| = 1} |Az|. \quad (4.12)$$
Induced norms possess the following properties. If $\|A\| < \infty$ and $\|B\| < \infty$, then the following properties hold for all $x \in E$, $\alpha \in K$:
1. $|Ax| \le \|A\|\,|x|$
2. $\|\alpha A\| = |\alpha|\,\|A\|$
3. $\|A + B\| \le \|A\| + \|B\|$
4. $\|AB\| \le \|A\|\,\|B\|$.
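The four properties can be verified numerically for the matrix norm induced by the Euclidean vector norm (the spectral norm). The sketch below works on hand-picked $2 \times 2$ matrices in pure Python; the matrices and scalar are arbitrary choices.

```python
import math

def spec_norm(A):
    """Induced 2-norm of a 2x2 matrix: sqrt of the largest eigenvalue of A^T A."""
    (a, b), (c, d) = A
    m11, m12, m22 = a*a + c*c, a*b + c*d, b*b + d*d   # entries of A^T A
    tr, det = m11 + m22, m11*m22 - m12*m12
    return math.sqrt((tr + math.sqrt(max(tr*tr - 4.0*det, 0.0))) / 2.0)

def mat_mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_vec(A, x):
    return [sum(A[i][k]*x[k] for k in range(2)) for i in range(2)]

vnorm = lambda v: math.sqrt(sum(vi*vi for vi in v))

A = [[1.0, 2.0], [-3.0, 0.5]]
B = [[0.0, -1.0], [4.0, 2.0]]
x = [0.7, -1.3]
alpha = -2.5

nA, nB = spec_norm(A), spec_norm(B)
assert vnorm(mat_vec(A, x)) <= nA * vnorm(x) + 1e-12                    # property 1
assert abs(spec_norm([[alpha*e for e in r] for r in A]) - abs(alpha)*nA) < 1e-9  # property 2
assert spec_norm([[A[i][j] + B[i][j] for j in range(2)]
                  for i in range(2)]) <= nA + nB + 1e-9                 # property 3
assert spec_norm(mat_mul(A, B)) <= nA * nB + 1e-9                       # property 4
print(nA, nB)
```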
Since $\|u\|_\infty = 1$ we have
$$\|H\|_\infty \le \sup_{t \ge 0} \int_0^t |h(t - \tau)|\,d\tau = \sup_{t \ge 0} \int_0^t |h(t')|\,dt' \le \int_0^{\infty} |h(t')|\,dt' = \|h\|_1.$$
For each fixed $t$, we can choose $u_t(\tau) = \mathrm{sgn}[h(t - \tau)]$. Thus, $(h \star u_t)(t) = \int_0^t |h(t - \tau)|\,d\tau \le \|h \star u_t\|_\infty$. Therefore,
$$\int_0^t |h(\tau)|\,d\tau = \int_0^t |h(t - \tau)|\,d\tau \le \|h \star u_t\|_\infty \le \|H\|_\infty \le \|h\|_1 = \int_0^{\infty} |h(t')|\,dt'. \quad (4.13)$$
Consider a function $f: \mathbb{R}_+ \to \mathbb{R}$ and let $0 \le T < +\infty$. Define the truncated function
$$f_T(t) = \begin{cases} f(t) & \text{if } t \le T \\ 0 & \text{if } t > T. \end{cases} \quad (4.14)$$
The function f T (·) is obtained by truncating f (·) at time T . Let us introduce the
following definitions:
$$L = \{ f: \mathcal{T} \to V \mid \|f\| < \infty \},$$
$$L_e = \{ f: \mathcal{T} \to V \mid \|f_T\| < \infty \ \text{for all } T \in \mathcal{T} \}.$$
In other words, the sets L p,e or simply Le consist of all Lebesgue measurable
functions f (·) such that every truncation of f (·) belongs to the set L p (but f may
not belong to L p itself, so that L p ⊂ L p,e ). The following properties hold for all
f ∈ L p,e :
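As an illustration of the extended space, consider $f(t) = e^t$: it belongs to $L_{2,e}$ but not to $L_2$, since every truncation has finite energy while the full signal does not. A small numerical check (pure Python midpoint rule; the horizon values are arbitrary):

```python
import math

def l2_norm_truncated(f, T, n=100000):
    """Midpoint-rule approximation of the L2 norm of the truncation f_T."""
    h = T / n
    return math.sqrt(sum(f((i + 0.5) * h) ** 2 * h for i in range(n)))

f = math.exp   # f(t) = e^t belongs to L2e but not to L2

# Every truncation has a finite L2 norm...
for T in (1.0, 5.0, 10.0):
    print(l2_norm_truncated(f, T))
# ...but the norm blows up as T grows, so f is not in L2.
```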
We can now introduce the notion of gain of an operator which will be used in the
Small Gain Theorem and the Passivity Theorem.
In the next definition, we consider a general operator with state, input, and output sig-
nal spaces. In particular, the input–output mapping is assumed to be causal, invariant
under time shifts, and it depends on the initial state x0 .
$$\|(Hu)_T\| \le k \|u_T\| + \beta(x_0)$$
for all admissible $u(\cdot)$ and all $x_0$. If $\beta(x_0) = 0$, we call $H$ finite-gain stable (FGS).
initial state x0 . One may be surprised that the notion of state intervenes in a definition
that concerns purely input–output operators (or systems). Some definitions, indeed,
do not mention such a dependence. This is related to the very basic definition of
what a system is, and well-posedness. Then, the notions of input, output, and state
can hardly be decoupled, in general. We call the gain of H the number k (or k(H ))
defined by
Theorem 4.18 Suppose that the matrix A has all its eigenvalues with negative real
parts (⇐⇒ ẋ(t) = Ax(t) is asymptotically stable). Then the system (4.15) is finite-
gain stable, where the norm can be any L p -norm, with 1 ≤ p ≤ +∞. In other words,
u ∈ L p =⇒ y ∈ L p and ||y|| p ≤ k p ||u|| p for some k p < +∞.
Conversely we have:
Theorem 4.19 Let D = 0, and suppose that the system is controllable and
detectable. If ||y||∞ ≤ k∞ ||u||∞ for some k∞ < +∞, then A is Hurwitz.
See, for instance, [62, Theorem 33]. A rather complete exposition of input–output
stability can be found in [63, Chap. 6].
This theorem gives sufficient conditions under which a bounded input produces a
bounded output (BIBO).
(i)
$$\|e_{1t}\| \le (1 - \gamma_1 \gamma_2)^{-1} \left( \|u_{1t}\| + \gamma_2 \|u_{2t}\| + \beta_2 + \gamma_2 \beta_1 \right)$$
$$\|e_{2t}\| \le (1 - \gamma_1 \gamma_2)^{-1} \left( \|u_{2t}\| + \gamma_1 \|u_{1t}\| + \beta_1 + \gamma_1 \beta_2 \right).$$
Then
$$\|e_{1t}\| \le \|u_{1t}\| + \|(H_2 e_2)_t\| \le \|u_{1t}\| + \gamma_2 \|e_{2t}\| + \beta_2,$$
so that
$$\|e_{1t}\| \le \|u_{1t}\| + \beta_2 + \gamma_2 \left( \|u_{2t}\| + \gamma_1 \|e_{1t}\| + \beta_1 \right)$$
$$\|e_{2t}\| \le \|u_{2t}\| + \beta_1 + \gamma_1 \left( \|u_{1t}\| + \gamma_2 \|e_{2t}\| + \beta_2 \right).$$
Finally,
$$\|e_{1t}\| \le (1 - \gamma_1 \gamma_2)^{-1} \left( \|u_{1t}\| + \gamma_2 \|u_{2t}\| + \beta_2 + \gamma_2 \beta_1 \right)$$
$$\|e_{2t}\| \le (1 - \gamma_1 \gamma_2)^{-1} \left( \|u_{2t}\| + \gamma_1 \|u_{1t}\| + \beta_1 + \gamma_1 \beta_2 \right).$$
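For static scalar gains, the fixed point of the loop equations can be computed in closed form and checked against these bounds. The sketch below takes the operators to be pure gains with zero biases ($\beta_i = 0$); the gain and input values are arbitrary, subject to $\gamma_1 \gamma_2 < 1$.

```python
# Feedback loop e1 = u1 + H2(e2), e2 = u2 + H1(e1) with static gains
# H1(e) = g1*e, H2(e) = g2*e (so gamma_i = |g_i|, beta_i = 0), g1*g2 < 1.
g1, g2 = 0.5, 0.8           # loop gain g1*g2 = 0.4 < 1
u1, u2 = 1.0, -2.0

# Solve the loop equations exactly:
# e1 = u1 + g2*(u2 + g1*e1)  =>  e1 = (u1 + g2*u2) / (1 - g1*g2)
e1 = (u1 + g2 * u2) / (1 - g1 * g2)
e2 = u2 + g1 * e1

# Small-gain bounds from the inequalities above (beta_i = 0)
bound1 = (abs(u1) + g2 * abs(u2)) / (1 - g1 * g2)
bound2 = (abs(u2) + g1 * abs(u1)) / (1 - g1 * g2)
assert abs(e1) <= bound1 + 1e-12
assert abs(e2) <= bound2 + 1e-12
print(e1, bound1)
```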
4.3.1 Definitions
We will now review the definitions and properties of dissipative systems. Most of
the mathematical foundations on this subject are due to J.C. Willems [64], D.J. Hill
and P. Moylan [54, 55]. One difficulty when looking at the literature on the subject is
that there are many different notions of dissipativity, which are introduced in many
articles published here and there. One of the goals of this chapter is to present all
of them in one shot, together with the existing relationships between them. Consider a causal nonlinear system $(\Sigma): u(t) \mapsto y(t)$; $u(t) \in L_{pe}$, $y(t) \in L_{pe}$, represented by the following state space representation, affine in the input:
$$(\Sigma) \quad \begin{cases} \dot x(t) = f(x(t)) + g(x(t)) u(t) \\ y(t) = h(x(t)) + j(x(t)) u(t) \end{cases} \quad (4.19)$$
with x(0) = x0 , and where x(t) ∈ IR n , u(t), y(t) ∈ IR m , f (·), g(·), h(·), and j (·)
possess sufficient regularity, so that the system with inputs in L2,e is well-posed (see
Sect. 3.13.2), and f (0) = h(0) = 0. In other words, the origin x = 0 is a fixed point
for the uncontrolled (free) system, and there is no output bias at x = 0. The state
space is denoted as $X \subseteq \mathbb{R}^n$. Let us call $w(t) = w(u(t), y(t))$ the supply rate, assumed to be such that for all admissible $u(\cdot)$ and $x(0)$, and for all $t \in \mathbb{R}_+$,
$$\int_0^t |w(u(s), y(s))|\,ds < +\infty, \quad (4.20)$$
along all possible trajectories of (Σ) starting at x(0), for all x(0), t ≥ 0.
For all trajectories, means for all admissible controllers u(·) that drive the state from
x(0) to x(t) on the interval [0, t]. It follows from Lemma 3.1 and Corollary 3.3 that
controllable and observable LTI systems with a positive real transfer functions are
dissipative with quadratic storage functions (see also [65] in the context of behavioral
approach to linear dynamical systems). Three comments immediately arise from
Definition 4.21: (i) first, storage functions are defined up to an additive constant; (ii) second, if the system is dissipative with respect to supply rates $w_i(u, y)$, $1 \le i \le m$, then the system is also dissipative with respect to any supply rate of the form $\sum_{i=1}^m \alpha_i w_i(u, y)$, with $\alpha_i \ge 0$ for all $1 \le i \le m$; (iii) third, consider the system
(4.19) and assume that it evolves on the submanifold {(x, u) | h(x) + j (x)u = 0},
which is the output-zeroing subspace; assume further that the supply rate is such
that w(u, 0) = 0 for all u; then it follows immediately from (4.21) that V (x(t)) ≤
V (x(0)), showing that the zero dynamics of a dissipative system possesses some
stability property (depending on the properties of the storage function): here we
recover the fact that, in the LTI case, a PR transfer function must have stable zeroes
(see Sect. 2.13.2 and Theorem 3.61); this will be studied more deeply in Sect. 5.6.
One notices that the Definition 4.21 (sometimes referred to as Willems’ dissipa-
tivity) does not require any regularity on the storage functions: it is a very general
definition. Actually, storage functions do possess some regularity properties under
suitable assumptions, see Sect. 4.3.5. When one imposes that the storage functions
be of class $C^r$ for some integer $r \ge 0$, then one speaks of $C^r$-dissipativity. A further comment is in order: Willems' definition postulates that dissipativity holds whenever a storage function exists. Some other authors, like Hill and Moylan, start from a definition that is closer to Definition 2.1, and then prove the existence of storage functions.
Example 4.22 Let us continue with Example 3.2. The input–output product satisfies $\int_0^t u(t') y(t')\,dt' = \int_0^t u^2(t')\,dt' \ge 0$ for any initial data $x(0)$. Now choose $V(x) = \frac{1}{2} x^2$. One has $V(x(t)) \le V(x(0))$ since solutions strictly decrease. Thus, $V(x(t)) - V(x(0)) \le 0$ and $V(x(t)) - V(x(0)) \le \int_0^t u(t') y(t')\,dt'$: the system is dissipative, though neither observable nor controllable (but, it is stable).
It is noteworthy that (4.21) is equivalent to the following: there exists $W(\cdot)$ such that $V(x_1) - V(x_0) \le W(x_1, x_0)$ with
$$W(x_1, x_0) = \inf_{u(\cdot) \in \mathcal{U}} \int_0^t w(u(s), y(s))\,ds \quad (4.22)$$
along admissible controls which drive the state from $x_0$ to $x_1$ on the time interval $[0, t]$. In the following, we shall use either $0$ or $t_0$ to denote the initial time for (4.19).
Dissipativity is also introduced by Hill and Moylan [55] as follows:
Definition 4.23 The system $(\Sigma)$ in (4.19) is dissipative with respect to the supply rate $w(u, y)$ if for all admissible $u(\cdot)$ and all $t_1 \ge t_0$ one has
$$\int_{t_0}^{t_1} w(u(t), y(t))\,dt \ge 0. \quad (4.23)$$
This corresponds to imposing that storage functions satisfy V (0) = 0. This is justified
by the fact that storage functions will often, if not always, be used as Lyapunov
functions for studying the stability of an equilibrium of (Σ) with zero input u(·).
In a slightly more general setting, one may assume that the controlled system has a fixed point $x^\star$ (corresponding to some input $u^\star$, with $f(x^\star) + g(x^\star) u^\star = 0$, $y^\star = h(x^\star) + j(x^\star) u^\star$, and $w(u^\star, y^\star) = 0$), and that $V(x^\star) < +\infty$. Then, changing $V(\cdot)$ to $V(\cdot) - V(x^\star)$, one obtains $V(x^\star) = 0$ (we could even have stated this as an assumption in Definition 4.21, as done, for instance, in [52]). In the sequel of this
chapter, we shall encounter some results in which dissipativity is indeed assumed
to hold with V (0) = 0. Such results were originally settled to produce Lyapunov
functions, precisely. Hill and Moylan start from (4.23) and then prove the existence
of storage functions, adding properties to the system. The motivation for introducing
Definition 4.23 is clear from Corollary 3.3, as it is always satisfied for linear invariant
positive real systems with minimal realizations.
Remark 4.24 Remember that for LTI systems, the constant (the bias) β in (2.1) is
zero for zero initial conditions (see Remark 2.10). When initial conditions (on the
state) are not zero, β measures the initial stored energy. This motivates the use of a
weak notion of passivity.
Another definition [54] states that the system is weakly dissipative with respect to the supply rate $w(u, y)$ if $\int_{t_0}^{t_1} w(u(t), y(t))\,dt \ge -\beta(x(t_0))$ for some $\beta(\cdot) \ge 0$ with $\beta(0) = 0$ [66] (we shall see later the relationship with Willems' definition; it is clear at this stage that weak dissipativity implies dissipativity in Definition 4.23, and that Willems' dissipativity implies weak dissipativity provided $V(0) = 0$). Still another definition is as follows [67].
Definition 4.25 The system $(\Sigma)$ is said to be dissipative with respect to the supply rate $w(u, y)$ if there exists a locally bounded nonnegative function $V: \mathbb{R}^n \to \mathbb{R}$ such that
$$V(x) \ge \sup_{t \ge 0,\; u \in \mathcal{U}} \left\{ V(x(t)) - \int_0^t w(u(s), y(s))\,ds \ :\ x(0) = x \right\}, \quad (4.24)$$
where the supremum is therefore computed with respect to all trajectories of the controlled system with initial condition $x$ and admissible inputs. This is to be compared with the statement in Lemma 3.68.
Definition 4.26 The system $(\Sigma)$ is said to be cyclo-dissipative with respect to the supply rate $w(u, y)$ if
$$\int_{t_0}^{t_1} w(u(s), y(s))\,ds \ge 0 \quad (4.25)$$
for all $t_1 \ge t_0$ and all admissible $u(\cdot)$ such that $x(t_0) = x(t_1) = 0$.
The difference with Definition 4.21 is that the state boundary conditions are forced to be the equilibrium of the uncontrolled system: trajectories start and end at $x = 0$.
A cyclo-dissipative system absorbs energy for any cyclic motion passing through the
origin. Cyclo-dissipativity and dissipativity are related as follows. Let us recall that
an operator H : u
→ y = H (u, t) is causal (or non-anticipative) if the following
holds: for all admissible inputs u(·) and v(·) with u(τ ) = v(τ ) for all τ ≤ t, then
H (u(·), t) = H (v(·), t). In other words, the output depends only on the past values
of the input, and not on future values. It is noteworthy here that causality may hold
for a class of inputs and not for another class.
Theorem 4.27 ([68]) Suppose that the system (Σ) defines a causal input–output
operator Hx(0) , and that the supply rate is of the form w(u, y) = y T Qy + 2y T Su +
u T Ru, with Q nonpositive definite. Suppose further that the system is zero-state
detectable (i.e., u(t) = 0, y(t) = 0, for all t ≥ 0 =⇒ limt→∞ x(t) = 0). Then, dis-
sipativity in the sense of Definition 4.23 and cyclo-dissipativity of (Σ) are equivalent
properties.
The proof of this theorem relies on the definition of another concept, known as
ultimate dissipativity, which we do not wish to introduce here for the sake of briefness
(roughly, this is dissipativity but only with t = +∞ in (4.21)). The reader is therefore
referred to [68] for the proof of Theorem 4.27 (which is a concatenation of Definitions
2, 3, 8 and Theorems 1 and 2 in [68]).
A local definition of dissipative systems is possible. Roughly, the dissipativity
inequality should be satisfied as long as the system’s state remains inside a closed
domain of the state space [69].
Definition 4.28 (Locally dissipative system) Let $X$ be the system's state space. Let $\mathcal{U}_e = \{u(\cdot) \mid u(t) \in U \ \text{for all } t \in \mathbb{R}\}$. The dynamical system is locally dissipative with respect to the supply rate $w(u, y)$ in a region $\Omega \subseteq X$ if
$$\int_0^t w(u(s), y(s))\,ds \ge 0 \quad (4.26)$$
for all $t \ge 0$ and all $u(\cdot) \in \mathcal{U}_e$ such that the state trajectory remains in $\Omega$ on $[0, t]$.
Definition 4.29 ([70, 71]) The system $(\Sigma)$ is said to be quasi-dissipative with respect to the supply rate $w(u, y)$ if there exists a constant $\alpha \ge 0$ such that it is dissipative with respect to the supply rate $w(u, y) + \alpha$.
The next result helps to understand the meaning of the constant β (apart from the
fact that, as we shall see later, one can exhibit examples which prove that the value
of β(0) when β is a function of the initial state has a strong influence on the stability
of the origin x = 0). We saw in Sect. 2.2 that in the case of LTI systems, β = 0
when initial conditions vanish. Typically, in the nonlinear case, an inequality like (4.27) with $\beta = 0$ can only be satisfied at a finite number of states, which may, under certain conditions, be equilibria. The supply rate that is considered is the general supply rate $w(u, y) = y^T Q y + 2 y^T S u + u^T R u$, where $Q = Q^T$ and $R = R^T$ (but
no other assumptions are made, so that Q and R may be zero). The definition of weak
dissipativity is as seen above, but in a local setting, i.e., an operator G : U → Y
which is denoted as G x0 as it may depend on the initial state. For a region Ω ⊂ Rn , we
denote G(Ω) the family of operators G x0 for all x0 ∈ Ω. Considering such domain
Ω may be useful for systems with multiple equilibria, see Example 4.36. Mimicking
the definition of weak finite gain (Definition 4.17):
Definition 4.31 ([54]) The operator $G(\Omega)$ is said to be weakly $w(u, y)$-dissipative if there exists a function $\beta: \Omega \to \mathbb{R}$ such that
$$\int_0^t w(u(s), y(s))\,ds \ge \beta(x_0) \quad (4.28)$$
for all admissible $u(\cdot)$, all $t \ge 0$, and all $x_0 \in \Omega$. If $\beta(x_0) = 0$ in $\Omega$, then the operator is called $w(u, y)$-dissipative.
This definition has some local taste, as it involves possibly several equilibria of the
system (the set Ω). Therefore, when time comes to settle some stability of these equi-
libria, it may be that only local stability can be deduced. We also need a reachability
definition. The distance of x to Ω is d(x, Ω) = inf x0 ∈Ω ||x − x0 ||.
Uniform reachability means that a state x1 can be driven from some other state x0
with an input that is small if the distance between the two states is small. It is local
in nature.
Proof Take any x1 ∈ X 1 and any t1 > t0 , any x0 ∈ Ω, any u(·) ∈ U such that the
controlled trajectory emanating from x0 at t0 ends at x1 at t1 . The value of u(t) for
t > t1 is arbitrary. The inequality in (4.28) can be rewritten as
$$\int_0^t w(u(s), y(s))\,ds \ge \beta_{\mathrm{new}}(x_1) \quad (4.29)$$
with $\beta_{\mathrm{new}}(x_1) = \beta(x_0) - \int_0^{t_1} w(u(s), G_{x_0}(u(s)))\,ds$, and we used the fact that the operator is invariant under time shifts. The value $\beta_{\mathrm{new}}(x_1)$ depends only on $x_1$ and not on $u(\cdot)$, because the control that intervenes in the definition of $\beta_{\mathrm{new}}(x_1)$ is the specific control which drives $x_0$ to $x_1$. Thus, $G_{x_1}$ is weakly dissipative.
If β(x0 ) = 0 then the system is dissipative with respect to one initial state (in
the sense of Definition 4.23 if x0 = 0). But it is weakly dissipative with respect to
other reachable initial states. Therefore, a way to interpret β is that it allows to take
into account nonzero initial states. In Example 4.67, we will see that weak finite-gain
stability is not enough to assure that the uncontrolled state space representation of the
system has a Lyapunov stable fixed point. It follows from this analysis that defining passivity as $\int_0^t u^T(s) y(s)\,ds \ge 0$ for any initial state makes little sense if the initial state is not included in the definition (or implicitly assumed to be zero).
The equivalence between Willems’ definition and weak dissipativity follows from
the following:
$$V(x_1) + \int_{t_0}^{t} w(u(s), y(s))\,ds \ \ge\ V(x_2) \quad (4.30)$$
for all $x_1 \in X_1$, all admissible $u(\cdot) \in \mathcal{U}$, all $t \ge t_0$, with $y(t) = G_{x_1}(u(t))$ and $x(t) = x_2$ the state starting at $x_1$ at $t_0$.
Proof Let us denote $V(u, y, t_0, t) \triangleq \int_{t_0}^t w(u(s), y(s))\,ds$. By the system's time-invariance, $V(u, y, t_0, t)$ depends only on $t - t_0$ but not on $t$ and $t_0$ separately. Let $V(x_1) = -\inf_{u(\cdot) \in \mathcal{U},\; t \ge t_1} V(u, G_{x_1} u, t_1, t)$. Because $t \ge t_1$, $t$ may be chosen as $t_1$, and consequently $V(x_1) \ge 0$. For any $t_2 \ge t_1$ and $t \ge t_2$, one has $V(x_1) \ge -V(u, G_{x_1} u, t_1, t_2) - V(u, G_{x_2} u, t_2, t)$, where $x(t_2) = x_2$ is the state which starts at $x_1$ at time $t_1$ with the control $u$ on $[t_1, t_2]$. Since this inequality holds for all $u$, it is true in particular that $V(x_1) \ge -V(u, G_{x_1} u, t_1, t_2) - \inf_{u(\cdot) \in \mathcal{U},\; t \ge t_2} V(u, G_{x_2} u, t_2, t)$, from which (4.30) follows. The inequality (4.28) implies that $V(x_1) \le -\beta(x_1)$, so that $0 \le V(x) < +\infty$ for all $x \in X_1$. Now, starting from (4.30), one sees that $V(x_1) + \int_{t_0}^t w(u(s), y(s))\,ds \ge 0$, which shows that the system is $w(u, y)$-dissipative.
Moreover:
cyclo-dissipativity (Definition 4.26)
    ⇕ (if ZSD and Q ⪯ 0)
Hill and Moylan's dissipativity (Definition 4.23)
    ⇑ (if reachability)
weak w(u, y)-dissipativity [w(u, y)-dissipativity + reachability] (Definition 4.31)
Willems' dissipativity [Willems' dissipativity + V(0) = 0] (Definition 4.21)
    ⇕ (if local boundedness of the storage function)
dissipativity in the sense of Definition 4.25
The link between w(u, y)-dissipativity and dissipativity in Definition 4.23 can also be
established from Theorem 4.35. The equivalence between weak $w(u, y)$-dissipativity and the other two notions supposes that the dynamical system under study is of the form (4.19), so in particular $0 \in \Omega$.
Example 4.36 ([54]) To illustrate Definition 4.31, consider the following system:
$$\begin{cases} \dot x(t) = -\alpha \sin(x(t)) + 2\gamma u(t) \\ y(t) = -\alpha \sin(x(t)) + \gamma u(t), \end{cases} \quad (4.31)$$
with $x(0) = x_0$ and $\alpha > 0$. Then, $V(x) = \alpha(1 - \cos(x))$, $V(x_0) = 0$ for all $x_0 = \pm 2n\pi$, $n \in \mathbb{N}$. Thus, $\Omega = \{x_0 \mid x_0 = \pm 2n\pi\}$. This system is finite-gain stable, and the equilibria are (locally) asymptotically stable.
Having in mind this preliminary material, the next natural question is, given a system,
how can we find V (x)? This question is closely related to the problem of finding a
suitable Lyapunov function in the Lyapunov second method. As will be seen next,
a storage function can be found by computing the maximum amount of energy that
can be extracted from the system.
Definition 4.37 (Available Storage) The available storage $V_a(\cdot)$ of the system $(\Sigma)$ is given by
$$0 \le V_a(x) = \sup_{x = x(0),\; u(\cdot),\; t \ge 0} \left\{ -\int_0^t w(u(s), y(s))\,ds \right\}, \quad (4.32)$$
where $V_a(x)$ is the maximum amount of energy which can be extracted from the system with initial state $x = x(0)$.
The supremum is taken over all admissible u(·), all t ≥ 0, all signals with initial
value x(0) = x, and the terminal boundary condition x(t) is left free. It is clear
that 0 ≤ Va (x) (just take t = 0 to notice that the supremum cannot be negative).
When the final state is not free but constrained to x(t) = 0 (the equilibrium of the
uncontrolled system), then one speaks of the virtual available storage, denoted as
$V_a(\cdot)$ [68]. Another function, called the required supply, plays an important role for dissipative systems. We recall that the state space of a system is said to be reachable from the state $x^\star$ if, given any $x$ and $t$, there exist a time $t_0 \le t$ and an admissible controller $u(\cdot)$ such that the state can be driven from $x(t_0) = x^\star$ to $x(t) = x$. The state space $X$ is connected provided every state is reachable from every other state.
Definition 4.38 (Required Supply) The required supply $V_r(\cdot)$ of the system $(\Sigma)$ is given by
$$V_r(x) = \inf_{u(\cdot),\; t \ge 0} \int_{-t}^0 w(u(s), y(s))\,ds \quad (4.33)$$
with $x(-t) = x^\star$, $x(0) = x$, and it is assumed that the system is reachable from $x^\star$. The function $V_r(x)$ is the required amount of energy to be injected in the system to go from $x(-t) = x^\star$ to $x(0) = x$.
The infimum is taken over all trajectories starting from $x^\star$ at $-t$ and ending at $x(0) = x$ at time $0$, and all $t \ge 0$ (or, said differently, over all admissible controllers $u(\cdot)$ which drive the system from $x^\star$ to $x$ on the interval $[-t, 0]$). If the system is not reachable from $x^\star$, one may define $V_r(x) = +\infty$. One notices that
Remark 4.39 The optimal "extraction" control policy which allows one to obtain the available storage in the case of an LTI system as in (3.1) is given by $u = (D + D^T)^{-1}(B^T P^- - C)x$, and the optimal "supply" control policy which allows one to obtain the required supply is given by $u = (D + D^T)^{-1}(B^T P^+ - C)x$, where $P^+$ and $P^-$ are as in Theorem 3.75.
Remark 4.40 Contrary to the available storage, the required supply is not necessarily
positive, see however Lemma 4.48. When the system is reversible, the required supply
and the available storage coincide [64]. It is interesting to define accurately what is
meant by reversibility of a dynamical system. This is done thanks to the definition
of a third energy function, the cycle energy:
$$V_c(x) = \inf_{u(\cdot),\; t_0 \le t_1,\; x(t_0) = 0} \int_{t_0}^{t_1} u(t)^T y(t)\,dt, \quad (4.35)$$
where the infimum is taken over all admissible u(·) which drive the system from
x(t0 ) = 0 to x. The cycle energy is thus the minimum energy it takes to cycle a system
between the equilibrium x = 0 and a given state x. One has Va (·) + Vc (·) = Vr (·)
(assuming that the system is reachable so that the required supply is not identically
+∞). Then, the following is in order:
A way to check reversibility for passive systems is given in [53, Theorem 8]. It uses
the notion of a signature, which we do not introduce here for the sake of briefness.
The following is taken from [68].
Example 4.42 Let us consider the one-dimensional system
$$\begin{cases} \dot x(t) = -x(t) + u(t) \\ y(t) = x(t) + \frac{1}{2} u(t), \end{cases} \quad (4.36)$$
with $x(0) = x_0$. This system is dissipative with respect to the supply rate $w(u, y) = uy$. Indeed,
$$\int_0^t u(s) y(s)\,ds = \int_0^t \left[ (\dot x(s) + x(s)) x(s) + \tfrac{1}{2} u^2(s) \right] ds = \frac{x^2(t)}{2} - \frac{x^2(0)}{2} + \int_0^t \left( x^2(s) + \tfrac{1}{2} u^2(s) \right) ds \ \ge\ -\frac{x^2(0)}{2}.$$
Then, $V_a(x) = \frac{2 - \sqrt{3}}{2} x^2$ and $V_r(x) = \frac{2 + \sqrt{3}}{2} x^2$: the available storage and required supply are the extremal solutions of the Riccati equation $A^T P + P A + (P B - C^T)(D + D^T)^{-1}(B^T P - C) = 0$, with $A = -1$, $B = 1$, $C = 1$, $D = \frac{1}{2}$, which in this case reads $p^2 - 4p + 1 = 0$. Moreover, the available storage and the virtual available storage (where the terminal state is forced to be $x = 0$) are the same. One sees that $V(x) = \frac{1}{2} x^2$ is a storage function.
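The claims of Example 4.42 are easy to check numerically: the sketch below verifies the Riccati roots and the dissipation inequality for $V(x) = \frac{1}{2} p x^2$ with $p = 2 - \sqrt{3}$, along an explicit Euler simulation with an arbitrarily chosen test input.

```python
import math

# Example 4.42: xdot = -x + u, y = x + u/2, supply rate w = u*y.
# The Riccati equation reduces to p^2 - 4p + 1 = 0; its roots give the
# available storage V_a = (2-sqrt(3))/2 x^2 and required supply
# V_r = (2+sqrt(3))/2 x^2.
p_minus = 2.0 - math.sqrt(3.0)
p_plus = 2.0 + math.sqrt(3.0)
assert abs(p_minus**2 - 4 * p_minus + 1) < 1e-12
assert abs(p_plus**2 - 4 * p_plus + 1) < 1e-12

# Dissipation inequality V(x(t)) - V(x(0)) <= int_0^t u*y ds for
# V(x) = 0.5 * p_minus * x^2, along an arbitrary input u = sin(3t).
x, dt = 1.0, 1e-4
V0 = 0.5 * p_minus * x * x
supply = 0.0
for i in range(int(10.0 / dt)):
    u = math.sin(3 * i * dt)     # arbitrary test input
    y = x + 0.5 * u
    supply += u * y * dt         # accumulated supplied "energy"
    x += (-x + u) * dt           # explicit Euler step
assert 0.5 * p_minus * x * x - V0 <= supply + 1e-6
```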
The following results link the boundedness of the functions introduced in Definitions
4.37 and 4.38 to the dissipativeness of the system. As an example, consider again
an electrical circuit. If there is an ideal battery in the circuit, the energy that can be
extracted is not finite. Such a circuit is not dissipative. The following results are due
to Willems [52, 53].
Theorem 4.43 ([52, 53]) The available storage Va (·) in (4.32) is finite for all
x ∈ X if and only if (Σ) in (4.19) is dissipative in the sense of Definition 4.21.
Moreover, 0 ≤ Va (x) ≤ V (x) for all x ∈ X for dissipative systems and Va (·)
is itself a possible storage function.
Proof (=⇒): In order to show that Va (x) < ∞ ⇒ the system (Σ) in (4.19) is dis-
sipative, it suffices to show that the available storage Va (·) in (4.32) is a storage
function, i.e., it satisfies the dissipation inequality
$$V_a(x(t)) \le V_a(x(0)) + \int_0^t w(s)\,ds.$$
But this is certainly the case because the available storage Va (x(t)) at time t is not
larger than the available storage Va (x(0)) at time 0, plus the energy introduced into
the system in the interval [0, t].
(⇐=): Let us now prove that if the system (Σ) is dissipative then Va (x) < ∞. If
(Σ) is dissipative then there exists V (x) ≥ 0 such that
$$V(x(0)) + \int_0^t w(s)\,ds \ge V(x(t)) \ge 0.$$
Since the initial storage function V (x(0)) is finite it follows that Va (x) < +∞. The
last part of the theorem follows from the definitions of Va (·) and V (·)
(see (4.24)).
Lemma 4.45 Suppose that $(\Sigma)$ is cyclo-dissipative. Then:
• (i) $V_r(x(0)) < +\infty$ for any reachable state $x(0)$ and with $x(-t) = 0$,
• (ii) Va (x(0)) > −∞ for any controllable state x(0),
• (iii) Va (0) = Vr (0) = 0 if x(−t) = 0,
• (iv) Vr (x) ≥ Va (x) for any state x ∈ X .
Controllability means, in this context, that there exists an admissible u(·) that drives
the state trajectory toward x = 0 at a time t ≥ 0.
Proof (i) and (ii) are a direct consequence of reachability and controllability, and the fact that $w(u(s), y(s))$ is integrable. Now let $x(0)$ be both reachable and controllable. Let us choose a state trajectory which passes through the points $x(-t) = 0$ and $x(t) = 0$, with $x(0) = x$. Then
$$\int_{-t}^0 w(u(s), y(s))\,ds + \int_0^t w(u(s), y(s))\,ds \ge 0,$$
from the definition of cyclo-dissipativity. From the definitions of $V_a(\cdot)$ (paragraph below Definition 4.37) and $V_r(\cdot)$, (iv) follows using that $\int_{-t}^0 w(u(s), y(s))\,ds \ge -\int_0^t w(u(s), y(s))\,ds$. (iv) remains true even in the case of uncontrollability and unreachability, as in such a case $V_r(x(0)) = +\infty$ and $V_a(x(0)) = -\infty$.
One infers from Lemma 4.45 (i) and Theorem 4.27 that a causal ZSD system is dissipative in the sense of (4.22) (equivalently, Definition 4.21 and (4.21)) only if $V_r(x) < +\infty$ for any reachable $x$. Similarly to the above results concerning the
available storage, we have the following:
Theorem 4.46 ([52, Theorem 2 (ii)]) Let $V(x^\star) = 0$ and let the system $(\Sigma)$ in (4.19) be dissipative in the sense of Definition 4.21. Then the required supply satisfies $V_r(x^\star) = 0$, and $0 \le V_a(x) \le V(x) \le V_r(x)$ for all $x \in X$. Moreover, if the state space is reachable from $x^\star$, then $V_r(x) < +\infty$ for all $x \in X$, and the required supply is a possible storage function.
Lemma 4.48 Let the system $(\Sigma)$ be dissipative in the sense of Definition 2.1 with respect to the supply rate $w(u, y)$, and locally $w$-uniformly reachable at $x^\star$. Let $V(\cdot)$ be a storage function. Then, the function $V_r(\cdot) + V(x(0))$ is a continuous storage function.
One sees that if the storage function satisfies V (0) = 0, and if x(0) = 0, then the
required supply is a storage function. The value V (x(0)) plays the role of the bias
−β in Definition 2.1. When V (0) = 0, the system has zero bias at the equilibrium
x = 0. In fact, a variant of Theorem 4.43 can be stated as follows, where dissipativity is checked through Va(·), provided the system (Σ) is reachable from some state x⋆.
Lemma 4.49 ([76]) Assume that the state space of (Σ) is reachable from x⋆ ∈ X. Then, (Σ) is dissipative in the sense of Definition 4.21, if and only if Va(x⋆) < +∞, if and only if there exists a constant K > −∞ such that Vr(x) ≥ −K for all x ∈ X.
The conditions of Theorem 4.43 are less stringent since reachability is not assumed.
However, in practice, systems of interest are often reachable, so that Lemma 4.49 is
important. The second equivalence follows from (4.34).
290 4 Dissipative Systems
Notice that given two storage functions V1(·) and V2(·) for the same supply rate, it is not difficult to see from the dissipation inequality that for any constant λ ∈ [0, 1], λV1(·) + (1 − λ)V2(·) is still a storage function. More formally:
Lemma 4.50 The set of all possible storage functions of a dissipative system is
convex. In particular, λVa (·) + (1 − λ)Vr (·), λ ∈ [0, 1] is a storage function provided
the required supply is itself a storage function.
Proof Let V1 (·) and V2 (·) be two storage functions. Let 0 ≤ λ ≤ 1 be a constant.
Then, it is an easy computation to check that λV1 (·) + (1 − λ)V2 (·) is also a storage
function. Since the available storage and the required supply are storage functions,
the last part follows.
The available storage and the required supply can be characterized as follows:
Proposition 4.51 Consider the system (Σ) in (4.19). Assume that it is zero-
state observable (u(t) = 0 and y(t) = 0 for all t ≥ 0 imply that x(t) = 0 for
all t ≥ 0), with a reachable state space X , and that it is dissipative with respect
to w(u, y) = 2u T y. Let j (x) + j T (x) have full rank for all x ∈ X . Then, Va (·)
and Vr (·) are solutions of the partial differential equality:
Proof The proof proceeds by calculating the integral of the right-hand side of (4.39). If the infimum exists, and since j(x(t)) + j^T(x(t)) is supposed to have full rank, it follows that its Schur complement satisfies $W_a(x) - S_a(x)(j(x) + j^T(x))^{-1} S_a^T(x) = 0$ (see Lemma A.66), which exactly means that Va(·) satisfies the partial differential inequality (4.38). A similar proof may be made for the required supply.
In the linear time-invariant case, and provided the system is observable and controllable, Va(x) = x^T Pa x and Vr(x) = x^T Pr x satisfy the above partial differential equality, which means that Pa and Pr are the extremal solutions of the Riccati equation $A^T P + P A + (P B - C^T)(D + D^T)^{-1}(B^T P - C) = 0$. Have a look at Theorems 3.73, 3.74, and 3.75, and Theorem 4.46. One deduces that the set of solutions P = P^T ≻ 0 of the KYP Lemma set of equations in (3.2) has a maximum element Pr and a minimum element Pa, and that all other solutions satisfy 0 ≺ Pa ⪯ P ⪯ Pr. What is called G⁺ in Theorem 3.74 is equal to −Pa, and what is called G⁻ is equal to −Pr (it is worth recalling that minimality of (A, B, C, D) is required for the KYP Lemma solvability with positive definite symmetric solutions, and that the relaxation of minimality requires some care, see Sect. 3.3). Similarly, P⁻ and P⁺ in Theorem 3.75 are equal to Pa and Pr, respectively.
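For a scalar illustration of these extremal solutions, one may take the hypothetical data A = −1, B = 1, C = 1, D = 1/2 (these numbers are our choice, not from the book); with the supply rate w = 2u^T y and quadratic storage V(x) = Px², the Riccati equation becomes a scalar quadratic whose two roots are Pa and Pr. A minimal sketch:

```python
import math

# Hypothetical scalar passive system (our choice, not from the book):
#   xdot = -x + u,  y = x + u/2,  i.e. A = -1, B = 1, C = 1, D = 1/2,
# with supply rate w = 2*u*y and quadratic storage V(x) = P*x^2.
A, B, C, D = -1.0, 1.0, 1.0, 0.5
R = D + D  # scalar version of D + D^T (must be invertible)

# A*P + P*A + (P*B - C) * R^(-1) * (B*P - C) = 0 is, in the scalar case,
# the quadratic (B^2/R) P^2 + (2A - 2BC/R) P + C^2/R = 0.
qa, qb, qc = B * B / R, 2 * A - 2 * B * C / R, C * C / R
disc = math.sqrt(qb * qb - 4 * qa * qc)
P_a = (-qb - disc) / (2 * qa)  # minimal root: available storage Va(x) = P_a x^2
P_r = (-qb + disc) / (2 * qa)  # maximal root: required supply Vr(x) = P_r x^2

def riccati_residual(P):
    return 2 * A * P + (P * B - C) ** 2 / R

# Any P between the extremal roots satisfies the dissipation inequality.
P_mid = 0.5 * (P_a + P_r)
lmi_ok = riccati_residual(P_mid) <= 0.0
```

Here P_a = 2 − √3 and P_r = 2 + √3, and every V(x) = Px² with Pa ⪯ P ⪯ Pr is a storage function, in agreement with 0 ≺ Pa ⪯ P ⪯ Pr.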
The following is a consequence of Theorem 2.4, and relates a notion introduced at the beginning of this book for input–output systems to the notion of dissipativity introduced for state space systems.
Theorem 4.53 (Passive systems) Suppose that the system (Σ) in (4.19) is dissipative with supply rate w(u, y) = u^T y and storage function V(·) with V(0) = 0, i.e., for all t ≥ 0:

$$V(x(t)) \le V(x(0)) + \int_0^t u^T(s) y(s)\,ds. \qquad (4.43)$$
Passivity is defined in Definition 2.1. Let us recall that a positive real (PR) system is
passive, see Corollary 2.39.
Definition 4.54 (Strictly state passive systems) A system (Σ) in (4.19) is said to be strictly state passive if it is dissipative with supply rate w = u^T y and a storage function V(·) with V(0) = 0, and there exists a positive definite function S(x) such that for all t ≥ 0:

$$V(x(t)) \le V(x(0)) + \int_0^t u^T(s) y(s)\,ds - \int_0^t S(x(s))\,ds. \qquad (4.44)$$

If the equality holds in the above and S(x) ≡ 0, then the system is said to be lossless.
Some authors [77] also introduce a notion of weak strict passivity that is more general
than the strict state passivity: the function S (x) is replaced by a dissipation function
D(x, u) ≥ 0, D(0, 0) = 0. One gets a notion that is close to (4.58) below. The notion
of weak strict passivity is meant to generalize WSPR functions to nonlinear systems.
Theorem 4.55 ([52]) Suppose that the system (Σ) in (4.19) is lossless with a minimum value at x = x⋆ such that V(x⋆) = 0. If the state space is reachable from x⋆ and controllable to x⋆, then Va(·) = Vr(·) and thus the storage function is unique and given by $V(x) = \int_{t_1}^{0} w(u(t), y(t))\,dt$ with any t₁ ≤ 0 and u ∈ U such that the state trajectory starting at x⋆ at t₁ is driven by u(·) to x at t = 0. Equivalently, $V(x) = -\int_{0}^{t_1} w(u(t), y(t))\,dt$ with any t₁ ≥ 0 and u ∈ U such that the state trajectory starting at x at t = 0 is driven by u(·) to x⋆ at t₁.
Remark 4.56 If the system (Σ) in (4.19) is dissipative with supply rate w = u T y
and the storage function V (·) satisfies V (0) = 0 with V (·) positive definite, then
the system and its zero dynamics are Lyapunov stable. This can be seen from the
dissipativity inequality (4.21), by taking u or y equal to zero.
A general supply rate has been introduced by [55], which is useful to distinguish
different types of strictly passive systems, and will be useful in the Passivity The-
orems presented in the next section. Let us reformulate some notions introduced in
Definition 2.1 in terms of supply rate, where we recall that β ≤ 0.
Note that Definitions 4.54 and 4.58 do not imply, in general, the asymptotic stability of the considered system. For instance, the transfer function $\frac{s+a}{s}$, a > 0, is ISP as stated in Definition 4.58; see also Theorem 2.8. Though this will be examined at several places of this book, let us
explain at once the relationship between the finite-gain property of an operator as in
Definition 4.17, and dissipativity with respect to a general supply rate. Assume that
the system (Σ) is dissipative with respect to the general supply rate, i.e.,
$$V(x(t)) - V(x(0)) \le \int_0^t \left[y^T(s) Q y(s) + u^T(s) R u(s) + 2 y^T(s) S u(s)\right]ds, \qquad (4.46)$$

so that the operator u ↦ y has a finite L₂-gain with a bias equal to V(x(0)). Dissipativity with supply rates w(u, y) = −δ y^T y + ε u^T u will be commonly met, and is sometimes called the H∞-behavior supply rate of the system. Therefore, dissipativity with Q = −δ I_m, R = ε I_m, and S = 0 implies finite-gain stability. What about the converse? The following is true:
Theorem 4.59 ([54]) The system is dissipative with respect to the general
supply rate in (4.45) with zero bias (β = 0) and with Q ≺ 0, if and only if it is
finite-gain stable.
We note that the constant k in Definition 4.17 may be zero, so that no condition on the matrix R is required in this theorem. The =⇒ implication has been shown just above. The ⇐= implication holds because of the zero bias: it can then be shown that $0 \le \int_0^t \left[y^T(s) Q y(s) + u^T(s) R u(s) + 2 y^T(s) S u(s)\right]ds$. Dissipativity is here understood in the sense of Hill and Moylan in Definition 4.23.
Remark 4.60 A dynamical system may be dissipative with respect to several sup-
ply rates, and with different storage functions corresponding to those supply rates.
Consider, for instance, a linear time-invariant system that is asymptotically stable: it
may be SPR (thus passive) and it has a finite gain and is thus dissipative with respect
to a H∞ supply rate.
Let us make an aside on linear time-invariant systems. A more general version of Theorem 3.75 is as follows. We consider a general supply rate with Q ⪯ 0. Let us define $\bar R = R + S^T D + D^T S + D^T Q D \succ 0$ and $\bar S = S^T + D^T Q$. Then:
Theorem 4.61 ([66, Theorem 3.8]) Consider the system (A, B, C, D) with A asymptotically stable. Suppose that for all t ≥ 0,

$$-\int_0^t w(u(s), y(s))\,ds \le -\frac{\varepsilon}{2}\int_0^t u^T(s) u(s)\,ds + \beta(x_0), \qquad (4.49)$$

for some ε > 0. Then, there exist P = P^T ⪰ 0 and P̄ = P̄^T ⪰ 0 such that

$$A^T P + P A + (P B - C^T \bar S^T)\,\bar R^{-1}\,(B^T P - \bar S C) - C^T Q C = 0, \qquad (4.50)$$

$$A^T \bar P + \bar P A + (\bar P B - C^T \bar S^T)\,\bar R^{-1}\,(B^T \bar P - \bar S C) - C^T Q C \prec 0. \qquad (4.51)$$

Conversely, suppose that there exists a solution P ⪰ 0 to the ARE (4.50) such that the matrix $\bar A = A + B \bar R^{-1}(B^T P - \bar S C)$ is asymptotically stable. Then, the matrix A is asymptotically stable and the system (A, B, C, D) satisfies (4.49) with the above supply rate.
This is directly related with H∞ theory and the Bounded Real Lemma, see Sect. 5.10.
Let us end this section, where several notions of a dissipative system have been
introduced, by another definition.
Definition 4.62 The square system (Σ) in (4.19) is said to be incrementally pas-
sive, if there exist real numbers δ ≥ 0 and ε ≥ 0, such that the auxiliary system
defined as ẋ1 (t) = f (x1 (t)) + g(x1 (t))u 1 , y1 = h(x1 (t)) + j (x1 (t))u 1 , and ẋ2 (t) =
f (x2 (t)) + g(x2 (t))u 2 , y2 = h(x2 (t)) + j (x2 (t))u 2 is dissipative with respect to the
supply rate
Let us examine the LTI case with an incrementally passive realization (A, B, C, D). By Proposition 4.63, (A, B, C, D) is passive, i.e., it is dissipative with the supply rate w(u, y) = u^T y. Thus, the “error” system ẋ₁ − ẋ₂ = A(x₁ − x₂) + B(u₁ − u₂), y₁ − y₂ = C(x₁ − x₂) + D(u₁ − u₂), is passive with the supply rate w(u₁, u₂, y₁, y₂) = (u₁ − u₂)^T(y₁ − y₂). Being an LTI system, its storage functions have the form V(x₁, x₂) = ½(x₁ − x₂)^T P(x₁ − x₂) for some P = P^T ⪰ 0 that solves the Lur'e equations. Provided some conditions are satisfied such that P ≻ 0, one sees that the storage functions are Lyapunov functions for the “error” system with zero input. Consequently, any two solutions x₁(·) and x₂(·) converge to one another (and are thus asymptotically stable provided the origin is an equilibrium). In a sense, solutions “forget” their initial conditions. Incremental passivity is therefore a notion very close to that of so-called convergent systems, introduced by B.P. Demidovich in [80, 81].
Definition 4.64 Consider the system ẋ(t) = f(x(t), t), with f(·, ·) continuous in t and continuously differentiable in x. It is said to be convergent if:
1. all solutions x(·) are well defined for all t ∈ [t0 , +∞), and all initial data t0 ∈ R,
x(t0 ) ∈ Rn ,
2. there exists a unique (limit) solution x̄(·) defined and bounded for all t ∈
(−∞, +∞),
3. the solution x̄(·) is globally asymptotically stable.
It is noteworthy that the limit solution property in item 2 is required to hold on the whole of (−∞, +∞). Consider ẋ(t) = Ax(t) with A Hurwitz: then x̄(t) ≡ 0 (because all other solutions diverge for negative times). Systems of the form ẋ(t) = Ax(t) + F(t) with bounded exogenous F(t) have a unique bounded limit solution.
Theorem 4.65 (Demidovich's Convergence Theorem) Consider the system ẋ(t) = f(x(t), t), with f(·, ·) continuous in t and continuously differentiable in x. Suppose that there exists P = P^T ≻ 0 such that $P \frac{\partial f}{\partial x}(x, t) + \frac{\partial f}{\partial x}(x, t)^T P \prec 0$ uniformly in (x, t) ∈ R^n × R, and ‖f(0, t)‖ ≤ c < +∞ for some constant c and all t ∈ R. Then the system is convergent.
The proof (in English) can be found in [82]. It can be shown [82] that $(x_1 - x_2)^T P (f(x_1, t) - f(x_2, t)) = \frac{1}{2}(x_1 - x_2)^T J(\xi, t)(x_1 - x_2)$, with $J(\xi, t) = P \frac{\partial f}{\partial x}(\xi, t) + \frac{\partial f}{\partial x}(\xi, t)^T P$, and ξ some point lying on the segment [x₁, x₂]. Thus, the link between convergence and incremental passivity is clear. For controlled systems ẋ(t) = f(x(t), t) + Bu, y = Cx + H(t), the condition PB = C^T plus Demidovich's condition with P = P^T ≻ 0 are shown to guarantee incremental passivity.
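A quick numerical sanity check of Demidovich's condition (the system below is our illustrative choice, not from the book): for ẋ = −x³ − x + sin t and P = 1, one has $P\frac{\partial f}{\partial x} + \frac{\partial f}{\partial x}^T P = -2(3x^2+1) \le -2 < 0$ uniformly and |f(0, t)| ≤ 1, so any two solutions should approach the same bounded limit solution:

```python
import math

# Illustrative convergent system (our choice): xdot = -x^3 - x + sin(t).
# With P = 1, Demidovich's condition holds uniformly, so solutions from
# different initial conditions must converge to one another.
def f(x, t):
    return -x ** 3 - x + math.sin(t)

def simulate(x0, t_end=20.0, dt=1e-3):
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f(x, t)   # explicit Euler; adequate for this contractive system
        t += dt
    return x

x1 = simulate(3.0)
x2 = simulate(-2.0)
gap = abs(x1 - x2)   # should be tiny: solutions "forget" initial conditions
```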
Similar notions have been introduced independently in [83–85]. Incremental passivity is used in [86] to design nonlinear output feedback controllers, in [87] for output regulation of switched systems, in [88] for model reduction of nonlinear systems, in [89] for the stabilization of nonlinear circuits with linear PI controllers, and in [90] for nonsmooth dynamical networks and Nash equilibrium seeking, using models close to complementarity systems (see Sect. 3.14 and (3.244)). The notion of equilibrium-independent dissipative (EID) system has been introduced in [91, 92]. It is more general than incremental dissipativity. Consider the system in (4.19). Assume that there
is a set S such that for any x̄ ∈ S , there is a unique ū such that f (x̄) + g(x̄)ū = 0.
The system is equilibrium-independent dissipative with supply rate w(u, y), if there
exists a continuously differentiable storage function V (x, x̄) ≥ 0, V (x̄, x̄) = 0, and
$\nabla_x V(x, \bar x)^T f(x, u) \le w(u - \bar u, y - \bar y)$. For a static (memoryless) nonlinearity, this notion coincides with monotonicity if w(u, y) = u^T y. If the nonlinearity is single-valued and SISO, this boils down to an increasing (rather, nondecreasing)
function. EID systems are investigated in [93], where it is proved that if a state space
system of the form (4.19), with g(x) = G and j (x) = J , is EID, then its fixed-
points I/O relation defines a monotone (in the sense of Definition 3.114) mapping.
Conditions are given that guarantee the maximality of this monotone mapping. The
fixed-points I/O mapping is defined as follows. First, one defines the set of equilibrium configurations for (4.19) as the triples (ū, x̄, ȳ) such that 0 = f(x̄) + Gū, ȳ = h(x̄) + Jū. Let G^⊥ be the left annihilator of G, i.e., G^⊥G = 0, with rank(G) = m and rank(G^⊥) = n − m. The set of realizable (or assignable) fixed points is E = R^n if m = n, and E = {x̄ ∈ R^n | G^⊥ f(x̄) = 0} if m < n. One associates with every x̄ ∈ E the unique equilibrium input and output ū = −(G^T G)^{-1} G^T f(x̄) = k_u(x̄) and ȳ = h(x̄) − J(G^T G)^{-1} G^T f(x̄) = k_y(x̄) (all these manipulations boil down to solving a linear system). One then considers the equilibrium I/O mapping K_{ūȳ}: ū ↦ ȳ, whose graph is given by {(ū, ȳ) | there exists x̄ solving ū = k_u(x̄), ȳ = k_y(x̄)}. The mapping K_{ūȳ}(·) could be set-valued, because for one ū there may exist several ȳ, given by all the x̄ such that ū = k_u(x̄). Conditions for the maximality of the monotone mapping K_{ūȳ}(·) are given in [93, Lemma A.1]. They rely mainly on either imposing continuity via cocoercivity, or upper hemicontinuity [94], both of which guarantee maximality (if the mapping is monotone).
4.3.4 Examples
Example 4.66 At several places, we have insisted on the essential role played by
the constant β in Definition 2.1, which may be thought of as the energy contained
initially in the system.3 Let us illustrate here how it may influence the Lyapunov
stability of dissipative systems. For instance, let us consider the following example,
brought to our attention by David J. Hill, where the open-loop system is unstable:
$$\begin{cases} \dot x(t) = x(t) + u(t) \\[2pt] y(t) = -\dfrac{\alpha x(t)}{1 + x^4(t)}, \end{cases} \qquad (4.54)$$
3 For instance, passivity is introduced in [95, Eq. (2.3.1)], with β = 0, and stating explicitly that it
is assumed that the network has zero initial stored energy.
$$\int_{t_0}^{t_1} u(t) y(t)\,dt = -\int_{t_0}^{t_1} (\dot x(t) - x(t))\,\frac{\alpha x(t)}{1 + x^4(t)}\,dt \ge -\frac{\alpha}{2}\left[\arctan(x^2(t_1)) - \arctan(x^2(t_0))\right]. \qquad (4.55)$$

Thus, the system is passive with respect to the storage function $V(x) = \frac{\alpha}{2}\left(\frac{\pi}{2} - \arctan(x^2)\right)$, and V(x) > 0 for all finite x. Hence, the system is dissipative,
despite the fact that the open-loop system is unstable. Note, however, that −V(0) = β(0) < 0 and that the system loses its observability at x = ∞. We shall come back later to conditions that ensure the stability of dissipative systems.
with the same V (x) as in the previous example. Thus, the system is weakly finite-gain
stable, but the unique equilibrium of the uncontrolled system, x = 0, is Lyapunov
unstable. We notice that the system in (4.56) is not passive. Therefore, weak finite-
gain stability is not sufficient to guarantee the Lyapunov stability.
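The dissipation inequality of Example 4.66 can also be checked numerically; the input u(t) = cos 2t, the horizon, and the step size below are arbitrary probes of our own choosing:

```python
import math

# Numerical check of V(x(t)) <= V(x(0)) + int_0^t u*y ds for (4.54),
# with alpha = 1 and the probe input u(t) = cos(2t) (our choices).
alpha = 1.0
V = lambda x: (alpha / 2.0) * (math.pi / 2.0 - math.atan(x * x))

x, t, dt, supply = 0.5, 0.0, 1e-4, 0.0
V0 = V(x)
while t < 2.0:
    u = math.cos(2.0 * t)
    y = -alpha * x / (1.0 + x ** 4)
    supply += dt * u * y          # accumulate int_0^t u(s) y(s) ds
    x += dt * (x + u)             # xdot = x + u (open loop unstable)
    t += dt
margin = V0 + supply - V(x)       # >= 0 up to integration error
```

The margin equals the accumulated dissipation $\int_0^t \alpha x^2/(1+x^4)\,ds > 0$, so passivity holds along the trajectory even though the state grows.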
In view of the above generalizations of dissipativity and of the supply rate, a dissipation equality that is more general than the one in Definition 4.54 can be introduced with a so-called dissipation function D(x, u, t) ≥ 0 for all x ∈ X, admissible u, and t ≥ 0, such that along trajectories of the system (Σ) one gets

$$V(x(t), t) = V(x(0), 0) + \int_0^t w(u(s), y(s))\,ds - D(x(0), u, t). \qquad (4.58)$$
Example 4.68 Let us continue with Example 4.42. Let us consider the storage functions V(x) = ½Cx², with 2 − √3 ≤ C ≤ 2 + √3 (this C is not the one in (4.36), but a new parameter). It is easily computed that the dissipation function is $D(x, u, t) = \int_0^t \left[C(x(s) - \gamma_c u(s))^2 + R_c u^2(s)\right]ds$, with $\gamma_c = \frac{C-1}{2C}$ and $R_c = \frac{1}{2} - C\gamma_c^2$. The choice for this notation stems from the electrical circuit interpretation, where C is a capacitor and R_c is a resistance. It is worth noting that for each value of the coefficient C, there is a different physical realization (different resistors, capacitors), but the state equations (4.36) remain the same. Comparing with Definition 4.54, one has S(x) = x² when C = 1. Comparing with the ISP Definition 4.58, one has ε = R_c, provided R_c > 0. An interesting interpretation is in terms of phase lag. Let us choose the two outputs $y_1 = \sqrt{R_c}\,u$ and $y_2 = \sqrt{C}(x - \gamma_c u)$. Then, the transfer function between y₂(s) and u(s) (the Laplace transforms of both signals) is equal to $\sqrt{C}\,\frac{1 - \gamma_c - \gamma_c s}{1 + s}$. As C varies
from 2 − √3 to 2 + √3, γ_c varies monotonically from −½(√3 + 1) to ½(√3 − 1).
Thus, the phase lag of y2 (s) with respect to u(s) increases monotonically with C. Let
us now study the variation of the dissipation function D(x, u, t) with C. For small C,
the low-dissipation trajectories are those for which ||x|| is decreasing. For large C,
the low-dissipation trajectories are those for which ‖x‖ is increasing. There are two extreme cases, as expected: when C = 2 − √3, then V(x) = Va(x) and it is possible to drive the state to the origin with an arbitrarily small amount of dissipation. In other words, the stored energy can be extracted from the system. Doing the converse (driving the state from the origin to some other state) produces a large amount of dissipation. The other extreme is C = 2 + √3, for which V(x) = Vr(x). In this case,
any state is reachable from the origin with an arbitrarily small amount of dissipation.
The converse (returning the state to the origin) however dissipates significantly. This
illustrates that for small C, the dissipation seems to be concentrated at the beginning
of a trajectory which leaves the origin x = 0, and returns back to the origin, and that
the opposite behavior occurs when C is large. This simple example therefore allows
one to exhibit the relationship between phase lag and dissipation delay.
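The algebra of this example can be spot-checked. Since (4.36) is not reproduced here, we assume the first-order realization ẋ = −x + u, y = x + u/2, a reading that is consistent with the storage bounds 2 − √3 ≤ C ≤ 2 + √3; under that assumption, the identity $uy - \frac{d}{dt}(\tfrac{1}{2}Cx^2) = C(x - \gamma_c u)^2 + R_c u^2$ holds pointwise:

```python
import math, random

# ASSUMPTION: realization xdot = -x + u, y = x + u/2 (our reading of (4.36),
# which is not reproduced in this excerpt).  Spot check of the dissipation
# function with gamma_c = (C-1)/(2C) and R_c = 1/2 - C*gamma_c^2.
random.seed(0)
max_err = 0.0
for _ in range(1000):
    C = random.uniform(2 - math.sqrt(3), 2 + math.sqrt(3))
    x = random.uniform(-5, 5)
    u = random.uniform(-5, 5)
    gamma_c = (C - 1.0) / (2.0 * C)
    R_c = 0.5 - C * gamma_c ** 2
    y = x + u / 2.0
    Vdot = C * x * (-x + u)            # d/dt (C x^2 / 2) along xdot = -x + u
    lhs = u * y - Vdot                 # instantaneous dissipation
    rhs = C * (x - gamma_c * u) ** 2 + R_c * u ** 2
    max_err = max(max_err, abs(lhs - rhs))

# At the endpoint C = 2 - sqrt(3), the "resistance" R_c vanishes:
C_lo = 2 - math.sqrt(3)
g_lo = (C_lo - 1) / (2 * C_lo)
Rc_lo = 0.5 - C_lo * g_lo ** 2
```

At both extremes C = 2 ∓ √3 one finds R_c = 0, i.e., the dissipation degenerates, consistent with V then coinciding with Va or Vr.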
Example 4.69 If a system (A, B, C, D) is SPR with vector relative degree r = (1 ... 1)^T ∈ R^m (i.e., D = 0), then the system is OSP. Indeed, from the KYP Lemma 3.16, defining V(x) = x^T P x one obtains V̇(x(t)) = −x^T(t)(QQ^T + L)x(t) + 2y^T(t)u(t) along the system's solutions. Integrating and taking into account that L = 2μP is full rank, the result follows. It is noteworthy that the converse is not true. Any transfer function of the form $\frac{s+\alpha}{s^2+as+b}$, b > 0, 0 < a < 2√b, is SPR if and only if 0 < α < a. However, $\frac{s}{s^2+s+1}$ is not SPR (obvious!) but it defines an OSP system. One realization is given by ẋ₁(t) = x₂(t), ẋ₂(t) = −x₁(t) − x₂(t) + u(t), y(t) = x₂(t). One checks that $\int_0^t u(s)y(s)\,ds \ge -\frac{1}{2}(x_1^2(0) + x_2^2(0)) + \int_0^t y^2(s)\,ds$. Thus, SPRness is only sufficient for OSPness, but it is not necessary.
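The OSP claim for this realization of s/(s² + s + 1) rests on the pointwise identity V̇ = −y² + uy with V(x) = ½(x₁² + x₂²), which a random spot check confirms:

```python
import random

# Spot check: for x1dot = x2, x2dot = -x1 - x2 + u, y = x2, and
# V(x) = (x1^2 + x2^2)/2, one has dV/dt = -y^2 + u*y pointwise,
# which integrates to  int u y ds >= -V(x(0)) + int y^2 ds  (OSP).
random.seed(1)
max_err = 0.0
for _ in range(1000):
    x1, x2, u = (random.uniform(-10, 10) for _ in range(3))
    y = x2
    Vdot = x1 * x2 + x2 * (-x1 - x2 + u)   # x1*x1dot + x2*x2dot
    max_err = max(max_err, abs(Vdot - (-y * y + u * y)))
```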
Example 4.70 Consider the non-proper system y(t) = u̇(t) + au(t), a > 0, with relative degree r = −1. This system is SSPR and ISP since Re[jω + a] = a and

$$\int_0^t u(s) y(s)\,ds = \frac{u^2(t) - u^2(0)}{2} + a \int_0^t u^2(s)\,ds.$$

This plant belongs to the descriptor-variable systems (see Sect. 3.1.7), with state space representation:

$$\begin{cases} \dot x_1(t) = x_2(t) \\ 0 = -x_1(t) + u(t) \\ y(t) = x_2(t) + a u(t). \end{cases}$$
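The ISP computation can be checked by quadrature for a concrete input; u(t) = sin t and a = 2 are arbitrary choices of ours:

```python
import math

# Quadrature check of  int_0^T u*(udot + a*u) dt
#                    = (u(T)^2 - u(0)^2)/2 + a * int_0^T u^2 dt
# for u(t) = sin(t), a = 2 (arbitrary illustrative choices).
a, T, N = 2.0, 3.0, 100000
dt = T / N
lhs = 0.0       # int_0^T u * y dt with y = udot + a*u
u2int = 0.0     # int_0^T u^2 dt
for k in range(N):
    t = (k + 0.5) * dt                 # midpoint rule
    u, du = math.sin(t), math.cos(t)
    lhs += dt * u * (du + a * u)
    u2int += dt * u * u
rhs = (math.sin(T) ** 2 - math.sin(0.0) ** 2) / 2.0 + a * u2int
err = abs(lhs - rhs)
```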
$$\int_{t_0}^{t_1} u^T(t) y(t)\,dt \ge -V(x(t_0)) + \delta \int_{t_0}^{t_1} u^T(s) u(s)\,ds + \alpha \int_{t_0}^{t_1} y^T(s) y(s)\,ds,$$

for some δ > 0 and α > 0 small enough.⁴ Note that the condition Q̄ ≻ 0 implies that the vector relative degree of (A, B, C, D) is equal to (0 ... 0)^T, which implies that the matrix D ≠ 0. Indeed, D + D^T = W^T W, and W = 0 would imply that Q̄ does not have full rank. In the monovariable case m = 1, then r = 0. In the multivariable case, Q̄ ≻ 0 implies that W has full rank m. Indeed, we can rewrite the quadratic form associated with Q̄ ≻ 0 as $x^T(Q + LL^T)x + u^T W^T W u - 2x^T L W u > 0$. If W has rank p < m, then we can find a u ≠ 0 such that Wu = 0. Therefore, for the couple x = 0 and such a u, one has x̄^T Q̄ x̄ = 0, which contradicts Q̄ ≻ 0. We deduce that r = (0 ... 0)^T ∈ R^m. VSP linear time-invariant systems possess a uniform relative degree 0.
⁴ Once again we see that the system has zero bias provided x(t₀) = 0. But in general β(x(t₀)) ≠ 0.
Proof The sufficiency (SPR ⇒ strictly passive) is proved just above. The converse is less easy, and is the contribution of [96]. It is based first on the fact that a strictly passive LTI system (i.e., an LTI system satisfying (4.44)) satisfies the following Lur'e equations [96, Theorem 2]:

$$\begin{cases} A^T P + P A = -L L^T - \mu R \\ C^T - P B = L W \\ D + D^T = W^T W. \end{cases} \qquad (4.60)$$
It is noteworthy that Examples 4.71 and 4.72, Theorems 4.73, 3.45, and 2.65, as well as Lemmas 2.64 and 3.16, provide us with a rather complete characterization of SPR transfer functions.
Example 4.74 A result close to Lemma 2.82 is as follows [97, Theorem 9], [76],
formulated for operators in a pure input/output setting.
Proof =⇒: VSP implies OSP and ISP; in turn, OSP implies L₂-stability (see the proof of Theorem 2.81). ⇐=: ISP implies that there exist δ > 0 and β₁ such that for all t ≥ 0, $\int_0^t u^T(s) y(s)\,ds \ge \delta \int_0^t u^T(s) u(s)\,ds + \beta_1$. Moreover, L₂-stability implies the existence of a gain γ > 0 and β₂ such that $\int_0^t y^T(s) y(s)\,ds \le \gamma \int_0^t u^T(s) u(s)\,ds + \beta_2$. Consequently, there exist ε₁ > 0, ε₂ > 0, with δ − ε₁ − ε₂γ ≥ 0, such that:

$$\begin{aligned} &\int_0^t u^T(s)y(s)\,ds - \varepsilon_1 \int_0^t u^T(s)u(s)\,ds - \varepsilon_2 \int_0^t y^T(s)y(s)\,ds \\ &= \int_0^t u^T(s)y(s)\,ds - \delta \int_0^t u^T(s)u(s)\,ds + (\delta - \varepsilon_1)\int_0^t u^T(s)u(s)\,ds - \varepsilon_2 \int_0^t y^T(s)y(s)\,ds \\ &\ge \beta_1 + (\delta - \varepsilon_1)\int_0^t u^T(s)u(s)\,ds - \varepsilon_2\left(\gamma \int_0^t u^T(s)u(s)\,ds + \beta_2\right) \\ &= \beta_1 - \varepsilon_2\beta_2 + (\delta - \varepsilon_1 - \varepsilon_2\gamma)\int_0^t u^T(s)u(s)\,ds \ \ge\ \beta_1 - \varepsilon_2\beta_2. \end{aligned} \qquad (4.61)$$

Let β ≜ β₁ − ε₂β₂; then one obtains using (4.61) $\int_0^t u^T(s)y(s)\,ds - \varepsilon_1 \int_0^t u^T(s)u(s)\,ds - \varepsilon_2 \int_0^t y^T(s)y(s)\,ds \ge \beta$, and hence the system is VSP.
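The degrees of freedom in this proof are easy to exercise numerically; δ and γ below are hypothetical values of ours, with ε₁ and ε₂ chosen so that δ − ε₁ − ε₂γ ≥ 0:

```python
# Constructive choice of eps1, eps2 in the VSP proof: given the ISP
# constant delta > 0 and the L2-gain gamma > 0 (hypothetical values),
# pick eps1 = delta/2 and eps2 = delta/(2*gamma), which makes
# delta - eps1 - eps2*gamma = 0, as the proof requires (>= 0).
delta, gamma = 0.5, 2.0
eps1 = delta / 2.0
eps2 = delta / (2.0 * gamma)
slack = delta - eps1 - eps2 * gamma
```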
On integrating we obtain

$$-V(y(0)) \le V(y(t)) - V(y(0)) = -a\int_0^t y^2(s)\,ds + \int_0^t u(s)y(s)\,ds$$
$$\Longrightarrow\quad \int_0^t u(s)y(s)\,ds + V(y(0)) \ge a\int_0^t y^2(s)\,ds.$$

Thus, the system is OSP. Taking a = 0, we can see that the system whose transfer function is 1/s defines a passive system (the transfer function being PR).
Remark 4.77 As we saw in Sect. 2.9 for linear systems, there exists a relationship between passive systems and the L₂-gain [48]. Let Σ: u ↦ y be a passive system as in Definition 2.1. Define the input–output transformation u = γw + z, y = γw − z (compare with (2.72)); then

$$\beta \le \int_0^t u^T(s)y(s)\,ds = \int_0^t \left(\gamma^2 w^T(s)w(s) - z^T(s)z(s)\right)ds,$$

which is equivalent to $\int_0^t z^T(s)z(s)\,ds \le \gamma^2 \int_0^t w^T(s)w(s)\,ds - \beta$, which means that the system Σ: w ↦ z has a finite L₂-gain.
Example 4.78 (L₂-gain) Let us consider the system ẋ(t) = −x(t) + u(t), y(t) = x(t). This system is dissipative with respect to the H∞ supply rate w(u, y) = γ²u² − y² if and only if there exists a storage function V(x) such that $\int_0^t (\gamma^2 u^2(\tau) - y^2(\tau))\,d\tau \ge V(x(t)) - V(x(0))$. Equivalently, the infinitesimal dissipation inequality holds, i.e., $\gamma^2 u^2(t) - y^2(t) - V'(x(t))(-x(t) + u(t)) \ge 0$. Consider V(x) = px². The infinitesimal dissipation inequality then becomes $\gamma^2 u^2(t) - x^2(t) - 2px(t)(-x(t) + u(t)) \ge 0$. In matrix form, this is equivalent to having the matrix $\begin{pmatrix} 2p - 1 & -p \\ -p & \gamma^2 \end{pmatrix} \succeq 0$. This holds if and only if

$$\gamma^2(2p - 1) - p^2 \ge 0. \qquad (4.62)$$
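Condition (4.62) is a quadratic feasibility test in p: it reads p² − 2γ²p + γ² ≤ 0, which admits a real solution if and only if γ ≥ 1 — consistent with the H∞ norm of 1/(s+1) being exactly 1. A small sketch:

```python
import math

# Feasibility of (4.62): gamma^2*(2p - 1) - p^2 >= 0 for some p, i.e.
# p^2 - 2*gamma^2*p + gamma^2 <= 0, solvable iff gamma^4 - gamma^2 >= 0,
# i.e. gamma >= 1 (the H-infinity norm of 1/(s+1)).
def feasible_p(gamma):
    disc = gamma ** 4 - gamma ** 2
    if disc < 0:
        return None                      # no quadratic storage certifies this gain
    r = math.sqrt(disc)
    return (gamma ** 2 - r, gamma ** 2 + r)  # interval of admissible p

gamma = 1.1
ok = feasible_p(gamma)                   # non-empty interval for gamma > 1
bad = feasible_p(0.9)                    # None: gain < 1 cannot be certified
p_lo, p_hi = ok
check = gamma ** 2 * (2 * p_lo - 1) - p_lo ** 2   # ~0 at the interval boundary
```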
Proposition 4.79 Consider the system represented in Fig. 4.5, where φ(·) is a static nonlinearity, q ≥ 0 and σφ(σ) ≥ 0 for all σ ∈ R. Then H: u ↦ y is passive.

Proof Let us adopt the classical notation $\langle u, y\rangle_t = \int_0^t u(s)y(s)\,ds$. Then

$$\begin{aligned} \langle y, u\rangle_t &= \langle \varphi(\sigma), u\rangle_t = \langle \varphi(\sigma), q\dot\sigma + \sigma\rangle_t \\ &= q\int_0^t \varphi(\sigma(s))\dot\sigma(s)\,ds + \int_0^t \sigma(s)\varphi(\sigma(s))\,ds \\ &= q\int_{\sigma(0)}^{\sigma(t)} \varphi(\sigma)\,d\sigma + \int_0^t \sigma(s)\varphi(\sigma(s))\,ds \\ &\ge q\int_0^{\sigma(t)} \varphi(\sigma)\,d\sigma - q\int_0^{\sigma(0)} \varphi(\sigma)\,d\sigma, \end{aligned} \qquad (4.63)$$

where we have used the fact that σ(t)φ(σ(t)) ≥ 0 for all t ≥ 0. Note that $V(\sigma) = q\int_0^{\sigma} \varphi(\xi)\,d\xi \ge 0$, and is therefore qualified as a storage function, σ(·) being the state of this system.
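To see the proposition at work numerically, take φ(σ) = σ³ and q = 1 (our choices), and note that the proof's manipulation ⟨φ(σ), qσ̇ + σ⟩ implicitly assumes the input enters as u = qσ̇ + σ, i.e., σ̇ = (u − σ)/q — our reading of Fig. 4.5, which is not reproduced here:

```python
import math

# ASSUMPTION (our reading of Fig. 4.5): u = q*sigmadot + sigma, so
# sigmadot = (u - sigma)/q.  With phi(s) = s^3, q = 1, the storage is
# V(sigma) = q * int_0^sigma phi = q*sigma^4/4, and passivity requires
# <y, u>_t >= V(sigma(t)) - V(sigma(0)).
q = 1.0
phi = lambda s: s ** 3
V = lambda s: q * s ** 4 / 4.0

sigma, t, dt, inner = 1.0, 0.0, 1e-4, 0.0
V0 = V(sigma)
while t < 5.0:
    u = math.sin(3.0 * t)             # arbitrary probe input
    y = phi(sigma)
    inner += dt * u * y               # accumulate <y, u>_t
    sigma += dt * (u - sigma) / q
    t += dt
margin = inner - (V(sigma) - V0)      # >= 0: passivity of u -> y
```

The margin equals the nonnegative term $\int_0^t \sigma\varphi(\sigma)\,ds$ dropped in the last line of (4.63).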
Proposition 4.80 If a system is output strictly passive, then it is also weakly finite-gain stable, i.e., OSP ⇒ WFGS.

Proof OSP gives $\int_0^t u(s)y(s)\,ds \ge \beta + \delta\int_0^t y^2(s)\,ds$, while Young's inequality yields $uy \le \frac{\lambda}{2}u^2 + \frac{1}{2\lambda}y^2$ for any λ > 0. Choosing λ = 1/δ one gets $\frac{\delta}{2}\int_0^t y^2(s)\,ds \le -\beta + \frac{1}{2\delta}\int_0^t u^2(s)\,ds$, which ends the proof.
Several results are given in [64] which concern the Lyapunov stability of systems
which are finite-gain stable. They are not presented in this section since they rather
belong to the kind of results presented in Sect. 5.1.
(4.66)

where k ≤ k₃ and k ≤ 1/(2k₁). So the system (Σ): u ↦ y is VSP.
Until now we have not said much about the properties of the storage functions: are they differentiable (in x)? Continuous? Discontinuous? We now state results which guarantee some regularity of storage functions. As we already pointed out, storage functions are potential Lyapunov function candidates. It is well known that Lyapunov functions need not be smooth, nor even differentiable.
Probably the first result in this direction is the following lemma, for which we first need a preliminary definition.

Definition 4.82 ([68]) A function V: X → R is called a virtual storage function if it satisfies V(0) = 0 and

$$V(x_0) + \int_{t_0}^{t_1} w(u(s), y(s))\,ds \ge V(x_1) \qquad (4.67)$$

for all t₁ ≥ t₀ and all admissible u(·), with x(t₀) = x₀ and x(t₁) = x₁.
Clearly, if in addition one imposes that V (x) ≥ 0, then one gets storage functions.
Lemma 4.83 ([68]) Let the system (Σ) be locally w-uniformly reachable in the sense
of Definition 4.47. Then, any virtual storage function which exists for all x ∈ X is
continuous.
Proof Consider an arbitrary state x₀ ∈ X, and let V(·) be a virtual storage function. Then, for any x₁ in a neighborhood Ω of x₀, it follows from (4.67) that

$$V(x_0) + \int_{t_0}^{t_1} w(u(s), y(s))\,ds \ge V(x_1), \qquad (4.68)$$

where the time t₁ corresponds to t in (4.37) and the controller u(·) is the one in Definition 4.47 (in other words, replace [0, t] in (4.37) by [t₀, t₁]). From (4.37) and (4.68), and considering transitions in each direction between x₀ and x₁, one deduces that |V(x₁) − V(x₀)| ≤ ρ(‖x₁ − x₀‖). Since x₁ is arbitrary in Ω and since ρ(·) is continuous, it follows that V(·) is continuous at x₀.
The next result concerns storage functions. Strong controllability means local w-uniform reachability in the sense of Definition 4.47, plus reachability and controllability. We recall that a system is controllable if every state x ∈ X is controllable, i.e., given x(t₀), there exist t₁ ≥ t₀ and an admissible u(·) on [t₀, t₁] such that the solution of the controlled system satisfies x(t₁) = 0 (sometimes this is named controllability to zero). Reachability is defined before Definition 4.38. Dissipativity in the next theorem is to be understood in Hill and Moylan's way, see (4.23). It shows that controllability properties are necessary for the storage functions to be regular enough.
Theorem 4.84 ([68, Theorem 14]) Let us assume that the system (Σ) in (4.19) is strongly controllable. Then, the system is cyclo-dissipative (respectively, dissipative in the sense of Definition 4.23) if and only if there exists a continuous function V: X → R satisfying V(0) = 0 (respectively, V(0) = 0 and V(x) ≥ 0 for all x ∈ X) and V̇(x(t)) ≤ w(u(t), y(t)) for almost all t ≥ 0 along the system's trajectories.
A relaxed version of Theorem 4.84 is as follows:
Theorem 4.85 ([75]) Let the system ẋ(t) = f(x(t), u(t)) be dissipative in the sense of Definition 2.1 with supply rate w(x, u), and locally w-uniformly reachable at the state x⋆. Assume that for every fixed u, the function f(·, u) is continuously differentiable, and that both f(x, u) and ∂f/∂x(x, u) are continuous in x and u. Then, the set R(x⋆) of states reachable from x⋆ is an open and connected subset of X, and there exists a continuous function V: R(x⋆) → R₊ such that for every x₀ ∈ R(x⋆) and every admissible u(·)

$$V(x(t)) - V(x_0) \le \int_0^t w(x(s), u(s))\,ds \qquad (4.69)$$

along the solution of the controlled system with x(0) = x₀. An example of such a function is Vr(x) + β, where β is a suitable constant and Vr(x) is the required supply as in Definition 4.38.
We have already stated the last part of the Theorem in Lemma 4.48. The proof
of Theorem 4.85 relies on an extended version of the continuous dependence of
solutions with respect to initial conditions, and we omit it here. Let us now state a
result that is more constructive, in the sense that it relies on verifiable properties of
the system. Before this, we need the following intermediate proposition.
Proposition 4.86 ([75]) If the linearization of the vector field f(x) + g(x)u around x = 0, given by ż(t) = Az(t) + Bu(t) with A = ∂f/∂x(0) and B = g(0), is controllable, then the system (Σ) in (4.19) is locally w-uniformly reachable at x = 0.

Of course, controllability of the tangent linearization is here equivalent to the Kalman matrix having rank n. This sufficient condition for local w-uniform reachability is easy to check, and one sees in passing that all time-invariant linear systems which are controllable are also locally w-uniformly reachable. Then, the following is true,
where dissipativity is understood in Hill and Moylan’s sense, see (4.23):
Corollary 4.87 ([75, Corollary 1]) Let the system (Σ) be dissipative at its equilibrium point x⋆, and suppose its tangent linearization at x⋆ = 0 is controllable. Then, there exists a continuous storage function defined on the reachable set R(x⋆).⁵
Refinements and generalizations can be found in [98]. In Sect. 4.4, generalizations of the Kalman–Yakubovich–Popov Lemma will be stated which hold under the restriction that the storage functions (seen then as the solutions of partial differential inequalities) are continuously differentiable (of class C¹ on the state space X). It is easy to exhibit systems for which no C¹ storage function exists. This will pose a difficulty in the extension of the KYP Lemma, which relies on some sort of infinitesimal version of the dissipation inequality. Indeed, the PDIs will then have to be interpreted in a more general sense. More will be said in Sect. 4.5. Results on dissipative systems depending on time-varying parameters, with continuous storage functions, may be found in [73].
Let us end this section on regularity with a result showing that in the one-dimensional case, the existence of locally Lipschitz storage functions implies the existence of continuous storage functions whose restriction to R^n \ {x = 0} is continuously differentiable. Such a set of functions is denoted C₀¹. We specialize here to systems which are dissipative with respect to the supply rate w(u, y) = γ²u^Tu − y^Ty. This is a particular choice of the general supply rate in (4.45). In the differentiable case, the dissipation inequality in its infinitesimal form is
valued (in other words: multivalued!). The viscosity subgradient is also sometimes
called a regular subgradient [99, Eq. 8(4)]. In case the function V (·) is proper convex,
then the viscosity subgradient is the same as the subgradient from convex analysis
defined in (3.232) [99, Proposition 8.12], and if V (·) is differentiable it is the same
as the usual Euclidean gradient. An introduction to viscosity solutions is given in
Sect. A.3 in the Appendix. With this machinery in mind, one may naturally rewrite
(4.70) as
for all x ∈ X \ {0} and all admissible u(·) (see Proposition A.59 in the Appendix). If the function V(·) is differentiable, then (4.72) becomes the usual infinitesimal dissipation inequality ∇V^T(x(t))[f(x(t)) + g(x(t))u(t)] ≤ γ^2 u^T(t)u(t) − y^T(t)y(t). As we saw in Sect. 3.14, it is customary in nonsmooth and convex analysis to replace the usual gradient by a set of subgradients, as done in (4.72). The set of all continuous functions V : R^n → R_+ that satisfy (4.72) is denoted as W(Σ, γ^2). The set of all functions in W(Σ, γ^2) which are proper (radially unbounded) and positive definite is denoted as W_∞(Σ, γ^2).
Theorem 4.88 ([100]) Let n = m = 1 in (4.79) and assume that the vector fields f(x) and g(x) are locally Lipschitz. Assume that for some γ > 0 there exists a locally Lipschitz V ∈ W_∞(Σ, γ^2). Then W_∞(Σ, γ^2) ∩ C_0^1 ≠ ∅.
This result means that for scalar systems, there is no gap between the locally Lipschitz and C_0^1 cases. When n ≥ 2 the result is no longer true, as the following examples from [100] show.
Let us consider V_2(x_1, x_2) = x_1^2 + x_2^{2/3}. This function is proper, positive definite, and continuous. Moreover, V_2 ∈ W_∞(Σ_2, 1). However, any locally Lipschitz function in W(Σ_2, 1) is neither positive definite nor proper.
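A quick numerical illustration (our own sketch, not from [100], and reading the garbled exponent as x_2^{2/3}) of why such a V_2 is continuous but not locally Lipschitz at the origin: the difference quotients of the x_2^{2/3} term blow up like h^{-1/3} as h → 0.

```python
# V2(x1, x2) = x1**2 + |x2|**(2/3): continuous and positive definite, but the
# x2-term is not locally Lipschitz at x2 = 0 -- its difference quotient at the
# origin behaves like h**(-1/3) and is therefore unbounded as h -> 0.

def V2(x1, x2):
    return x1**2 + abs(x2)**(2.0 / 3.0)

for h in [1e-1, 1e-3, 1e-6, 1e-9]:
    quotient = (V2(0.0, h) - V2(0.0, 0.0)) / h   # ~ h**(-1/3)
    print(f"h = {h:.0e}: difference quotient = {quotient:.3e}")
```

The quotient grows without bound, so no Lipschitz constant can work on any neighborhood of 0, while V2 itself stays continuous.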
308 4 Dissipative Systems
Theorem 4.91 ([100]) For any system (Σ) in (4.79) with locally Lipschitz vector fields f(x) and g(x), and for any γ̄ > γ: if W_∞(Σ, γ^2) ≠ ∅, then W_∞(Σ, γ̄^2) ∩ C_0^1 ≠ ∅.
In other words, Theorem 4.91 says that, given a γ, if one is able to exhibit at least one function in W_∞(Σ, γ^2), then increasing γ slightly allows one to get the existence of a function that is both in W_∞(Σ, γ̄^2) and in C_0^1. This is a sort of regularization of the storage function of a system that is dissipative with respect to the supply rate w(u, y) = γ^2 u^T u − y^T y.
Remark 4.92 The results hold for systems which are affine in the input, as in (4.19).
For more general systems, they may not remain true.
Example 4.93 Let us carry out some calculations for the system and the Lyapunov function of Example 4.89. We get
  ∂V_1(x) = (a_1, a_2)^T, with a_i = 2 if x_i > 0, a_i = −2 if x_i < 0, and a_i = [−2, 2] if x_i = 0, i = 1, 2.    (4.76)
Consequently, we may write the first line, taking (4.76) into account, as

  2(−x_1^2 + x_1|x_2| + x_1 u_1)   if x_1 > 0
  2(x_1^2 − x_1|x_2| − x_1 u_1)    if x_1 < 0      (4.78)
  [−2|x_1|(−x_1 + |x_2| + u_1), 2|x_1|(−x_1 + |x_2| + u_1)] = {0}   if x_1 = 0,
and similarly for the second line. It happens that V(·) is not differentiable at x = 0, and that f(0) + g(0)u = 0. Let y_1 = x_1, y_2 = x_2. Consider the case x_1 > 0, x_2 > 0. We obtain −2y^T y + 2y^T u ≤ −2y^T y + y^T y + u^T u = −y^T y + u^T u. For x_2 > 0 and x_1 = 0, we obtain −2y_2^2 + 2y_2 u_2 ≤ −2y_2^2 + y_2^2 + u^T u = −y_2^2 + u^T u.
4.4 Nonlinear KYP Lemma
The KYP Lemma for linear systems can be extended for nonlinear systems having
state space representations affine in the input. In this section, we will consider the
case when the plant output y is not a function of the input u. A more general case
will be studied in the next section. Consider the following nonlinear system:
  (Σ):  ẋ(t) = f(x(t)) + g(x(t))u(t),  y(t) = h(x(t)).    (4.79)
The system is strictly passive for S (x) > 0, passive for S (x) ≥ 0, and lossless
for S (x) = 0.
(2) There exists a C^1 nonnegative function V : X → R with V(0) = 0, such that

  L_f V(x) = −S(x),  L_g V(x) = h^T(x),    (4.81)

where L_g V(x) = (∂V/∂x)(x) g(x).
Remark 4.95 Note that if V (x) is a positive definite function (i.e., V (x) > 0), then
the system ẋ(t) = f (x(t)) has a stable equilibrium point at x = 0. If, in addition,
S (x) > 0 then x = 0 is an asymptotically stable equilibrium point.
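As a sanity check of the two conditions in (4.81), here is a small symbolic computation on a toy scalar system (ẋ = −x^3 + u, y = x, with candidate storage V(x) = x^2/2 — our own example, not one from the book):

```python
import sympy as sp

x, u = sp.symbols('x u', real=True)

# Toy scalar passive system (illustrative choice):
f = -x**3            # drift vector field
g = sp.Integer(1)    # input vector field
h = x                # output y = h(x)
V = x**2 / 2         # storage function candidate

LfV = sp.diff(V, x) * f      # L_f V(x) = -x**4
LgV = sp.diff(V, x) * g      # L_g V(x) = x

S = -LfV                     # dissipation rate from L_f V = -S
assert sp.simplify(LgV - h) == 0   # second condition of (4.81): L_g V = h^T
assert S == x**4                    # S(x) > 0 for x != 0: strict passivity

# Dissipation equality along trajectories: dV/dt = y*u - S(x)
Vdot = sp.diff(V, x) * (f + g * u)
assert sp.simplify(Vdot - (h * u - S)) == 0
print("Conditions (4.81) verified: L_f V = -x**4, L_g V = x = h(x)")
```

Note how x = 0 is then asymptotically stable for ẋ = f(x), as announced in Remark 4.95, since S(x) = x^4 is positive definite.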
Proof of Lemma 4.94
• (1) ⇒ (2). By assumption we have

  V(x(t)) − V(x(0)) = ∫_0^t y^T(s)u(s)ds − ∫_0^t S(x(s))ds.    (4.82)

Differentiating with respect to time yields

  y^T(t)u(t) − S(x(t)) = d(V ∘ x)/dt (t) ≜ L_f V(x(t)) + L_g V(x(t))u(t).    (4.83)

Taking the partial derivative with respect to u, we get L_g V(x) = h^T(x), and therefore L_f V(x) = −S(x).
• (2) ⇒ (1). From (4.79) and (4.81), we obtain

  d(V ∘ x)/dt (t) = L_f V(x(t)) + L_g V(x(t))u(t) = −S(x(t)) + h^T(x(t))u(t).

On integrating the above we obtain (4.80).
Remark 4.96 From these developments, the dissipativity equality in (4.80) is equivalent to its infinitesimal version V̇ = L_f V + L_g V u = h^T(x)u − S(x) = ⟨u, y⟩ − S(x). Obviously, this holds under the assumption that V(·) is sufficiently regular (differentiable). No differentiability is required in the general Willems' definition of dissipativity, however. Some authors [77] systematically define dissipativity with C^1 storage functions satisfying α(||x||) ≤ V(x) ≤ β(||x||) for some class-K_∞ functions, and infinitesimal dissipation equalities or inequalities. Such a definition of dissipativity is therefore much more stringent than the basic definitions of Sect. 4.3. Let us remark that the second equality in (4.81) defines the passive output associated with the storage function V(·) and the triplet (f, g, h). In other words, let us start with the system (4.79), and assume that the first equality in (4.81) is satisfied for some S(·) and storage function V(·) that is a Lyapunov function for the uncontrolled system ẋ(t) = f(x(t)). Then, the output y = g^T(x)(∂V/∂x)^T(x) is such that the dissipation equality (4.80) holds.
Example 4.97 Consider the mechanical linear chain in Fig. 4.6. Assume that the masses m_1 and m_4 are actuated with controls u_1 and u_4, respectively (this makes the system underactuated, as there are fewer inputs than degrees of freedom). With a suitable choice of the masses' coordinates, the dynamics is given by
P is full rank. In [101], it is shown that provided a certain mass ratio is small enough, non-collocated outputs can be used for feedback in systems of the form M q̈(t) + K q(t) = Bu(t), M = M^T ≻ 0, K = K^T ≻ 0, while preserving the PRness of the transfer function. The basic assumption in [101] is that the kinetic energy has a dominant term associated with the non-collocated outputs. One first derives the transfer matrix using the associated eigenproblem, and then one uses the property of PR transfer matrices as in (2.145), where the coefficients have to satisfy some positive definiteness properties.
Example 4.98 Consider the bilinear system ẋ(t) = Ax(t) + Bx(t)u(t) + Cu(t), A ∈ R^{n×n}, B ∈ R^{n×n}, C ∈ R^{n×1}, u(t) ∈ R [102]. Assume that A^T P + P A ⪯ 0, P = P^T ≻ 0. Then, the passive output is y = (C^T + x^T B^T)P x, with storage function V(x) = ½ x^T P x.
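A quick numerical check of Example 4.98 on a randomly generated instance (the matrices below are our own illustrative choices): with P = I and A = −I one has u y − dV/dt = −x^T P A x ≥ 0, so the dissipation inequality holds at every sampled point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

# Illustrative data (our own): A^T P + P A = -2 I <= 0 holds with P = I.
A = -np.eye(n)
B = rng.standard_normal((n, n))
C = rng.standard_normal(n)     # the column vector C, stored as a 1-D array
P = np.eye(n)

# Passive output y = (C^T + x^T B^T) P x, storage V(x) = 0.5 x^T P x.
# Along xdot = A x + B x u + C u one gets  u*y - dV/dt = -x^T P A x >= 0.
for _ in range(1000):
    x = rng.standard_normal(n)
    u = rng.standard_normal()
    y = (C + B @ x) @ (P @ x)                      # scalar output
    Vdot = x @ P @ (A @ x + (B @ x) * u + C * u)   # dV/dt along trajectories
    assert u * y - Vdot >= -1e-10                  # dissipation inequality
print("dissipation inequality u*y >= dV/dt holds at all sampled points")
```

The cancellation of all u-dependent terms in u y − V̇ is exactly what makes y the passive output for this storage function.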
We will now consider the more general case in which the system is described by the
following state space representation affine in the input:
  (Σ):  ẋ(t) = f(x(t)) + g(x(t))u(t),  y(t) = h(x(t)) + j(x(t))u(t),    (4.85)
Lemma 4.100 (NL KYP Lemma: general case) Let Assumptions 4 and 5 hold. The nonlinear system (4.85) is dissipative in the sense of Definition 4.23 with respect to the supply rate w(u, y) in (4.86) if and only if there exist functions V : R^n → R, L : R^n → R^q, W : R^n → R^{q×m} (for some integer q), with V(·) differentiable, such that:

  V(x) ≥ 0, V(0) = 0,
where

  Ŝ(x) ≜ Q j(x) + S,
  R̂(x) ≜ R + j^T(x)S + S^T j(x) + j^T(x)Q j(x).    (4.89)
  w(u, y) = y^T Q y + 2y^T S u + u^T R u
          = (h(x) + j(x)u)^T Q(h(x) + j(x)u) + 2(h(x) + j(x)u)^T S u + u^T R u
          = h^T(x)Qh(x) + 2u^T j^T(x)Qh(x) + u^T j^T(x)Q j(x)u + u^T R u + 2u^T j^T(x)S u + 2h^T(x)S u
          = h^T(x)Qh(x) + 2u^T j^T(x)Qh(x) + u^T R̂(x)u + 2h^T(x)S u,    (4.90)
so that
Necessity. We will show that the available storage function V_a(x) is a solution to the set of equations (4.88) for some L(·) and W(·). Since the system is reachable from the origin, there exists u(·) defined on [t_{-1}, 0] such that x(t_{-1}) = 0 and x(0) = x_0. Since the system (4.85) is dissipative, it satisfies (4.23): there exists V(x) ≥ 0, V(0) = 0 such that

  ∫_{t_{-1}}^t w(s)ds = ∫_{t_{-1}}^0 w(s)ds + ∫_0^t w(s)ds ≥ V(x(t)) − V(x(t_{-1})) ≥ 0.

Remember that ∫_{t_{-1}}^t w(s)ds is the energy introduced into the system. From the above we have

  ∫_0^t w(s)ds ≥ − ∫_{t_{-1}}^0 w(s)ds.
Dissipativeness in the sense of Definition 4.23 implies that V_a(0) = 0 and the available storage V_a(x) is itself a storage function, i.e.,

  V_a(x(t)) − V_a(x(0)) ≤ ∫_0^t w(s)ds for all t ≥ 0,

or 0 ≤ ∫_0^t (w(s) − (dV_a/dt)(s))ds for all t ≥ 0. Since the above inequality holds for all t ≥ 0, taking the derivative it follows that

  0 ≤ w(u, y) − d(V_a ∘ x)/dt ≜ d(x, u).
Introducing (4.85), d(x, u) can be factored as

  d(x, u) = (L(x) + W(x)u)^T (L(x) + W(x)u)

for some L(x) ∈ R^q, W(x) ∈ R^{q×m}, and some integer q. Therefore, from the two previous equations, the system (4.85), and the definitions in (4.89), we obtain

  ½ g^T(x)∇V_a(x) = Ŝ^T(x)h(x) − W^T(x)L(x).    (4.95)
Actually, the Lemma proves the ⇒ sense and the ⇐ sense is obvious. Using the
sufficiency part of the proof of the above theorem we have the following Corollary,
which holds under Assumptions 4 and 5:
Corollary 4.101 ([55]) If the system (4.85) is dissipative with respect to the supply
rate w(u, y) in (4.86), then there exists V (x) ≥ 0, V (0) = 0 and some L : X → Rq ,
W : X → Rq×m such that
  d(V ∘ x)/dt = −(L(x) + W(x)u)^T (L(x) + W(x)u) + w(u, y).
Under the conditions of Corollary 4.101, the dissipation function in (4.58) is equal to D(x(0), u, t) = ∫_0^t [L(x(s)) + W(x(s))u(s)]^T [L(x(s)) + W(x(s))u(s)] ds. What about generalizations of the KYP Lemma when storage functions may not be differentiable (even possibly discontinuous)? The extension passes through the fact that the conditions (4.88) and (4.89) can be rewritten as a partial differential inequality which is a generalization of a Riccati inequality (exactly as in Sect. 3.1.4 for the linear time-invariant case). Then, one relaxes the notion of solution to this PDI to admit continuous (or discontinuous) storage functions, see Sect. 4.5.
Remark 4.103 If j (x) ≡ 0, then the system in (4.85) cannot be ISP (that corresponds
to having R = −ε I in (4.86) for some ε > 0). Indeed if (4.85) is dissipative with
respect to (4.86) we obtain along the system’s trajectories:
  d(V ∘ x)/dt (t) + (L(x(t)) + W(x(t))u(t))^T (L(x(t)) + W(x(t))u(t))
  = w(u(t), y(t))
  = h^T(x(t))Qh(x(t)) + 2h^T(x(t))Ŝ(x(t))u(t) + u^T(t)R̂(x(t))u(t)
  = (y(t) − j(x(t))u(t))^T Q(y(t) − j(x(t))u(t)) + 2(y(t) − j(x(t))u(t))^T [Q j(x(t)) + S]u(t) + u^T(t)R̂(x(t))u(t)
  = y^T(t)Qy(t) − 2y^T(t)Q j(x(t))u(t) + u^T(t) j^T(x(t))Q j(x(t))u(t)
    + 2y^T(t)Q j(x(t))u(t) + 2y^T(t)Su(t) − 2u^T(t) j^T(x(t))Q j(x(t))u(t) − 2u^T(t) j^T(x(t))Su(t) + u^T(t)R̂(x(t))u(t)
  = y^T(t)Qy(t) + 2y^T(t)Su(t) − εu^T(t)u(t),    (4.97)

where the first two equalities use (4.88), and the last one uses u^T R̂(x)u = −εu^T u + 2u^T j^T(x)Su + u^T j^T(x)Q j(x)u when R = −εI.
If j(x) ≡ 0 we get −L^T(x)L(x) = −εu^T u, which obviously cannot be satisfied with x and u considered as independent variables (except if both sides are constant and identical). This result is consistent with the linear case (a PR or SPR function has to have relative degree 0 to be ISP).
To end this section, let us notice that the conditions in (4.88), (4.89) can be equivalently rewritten as

  [ −∇V^T(x)f(x) + h^T(x)Qh(x)          −½∇V^T(x)g(x) + h^T(x)Ŝ(x) ]
  [ (−½∇V^T(x)g(x) + h^T(x)Ŝ(x))^T      R̂(x)                       ]
  = [ L^T(x) ]
    [ W^T(x) ] [ L(x)  W(x) ]  ⪰ 0,    (4.98)
where we proceeded as in (3.3). Let us choose now the supply rate w(u, y) = γ^2 u^T u − y^T y, which corresponds to the choice Q = −I_m, S = 0, R = γ^2 I_m (this is the H_∞, or bounded real supply rate, see Sect. 5.10), so that Ŝ(x) = −j(x), R̂(x) = γ^2 I_m − j^T(x)j(x). Then (4.98) with strict inequality reduces to
  [ −∇V^T(x)f(x) − h^T(x)h(x)          −½∇V^T(x)g(x) − h^T(x)j(x) ]
  [ (−½∇V^T(x)g(x) − h^T(x)j(x))^T      γ^2 I_m − j^T(x)j(x)      ]  ≻ 0.    (4.99)
Applying Theorem A.65, one obtains the Hamilton–Jacobi inequality:

  −∇V^T(x)f(x) − h^T(x)h(x) − (½∇V^T(x)g(x) + h^T(x)j(x)) (γ^2 I_m − j^T(x)j(x))^{-1} (½g^T(x)∇V(x) + j^T(x)h(x)) > 0,    (4.100)
as well as ∇V^T(x)f(x) + h^T(x)h(x) < 0 and γ^2 I_m − j^T(x)j(x) ≻ 0: the first inequality is related to Lyapunov stability (which is obtained if suitable reachability assumptions are made, guaranteeing that the storage functions are positive definite), the second one is related to L_2 input/output stability. See [104] for a complete analysis.
Remark 4.104 In the case of LTI systems, QSR-dissipativity boils down to checking the existence of P = P^T ⪰ 0 such that

  [ A^T P + P A − C^T QC    P B − Ŝ ]
  [ (P B − Ŝ)^T             −R̂     ]  ⪯ 0,    (4.101)
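For a concrete check of (4.101), the sketch below tests the LMI eigenvalues for a scalar passive system with the passivity supply rate (Q = 0, S = ½, R = 0). The reductions Ŝ = C^T S and R̂ = R used here assume D = 0, by analogy with (4.89); the example system and the candidate P are our own choices.

```python
import numpy as np

# Scalar LTI example (illustrative): xdot = -2x + u, y = x, supply w = y*u.
A = np.array([[-2.0]]); B = np.array([[1.0]]); C = np.array([[1.0]])
Q = np.array([[0.0]]);  S = np.array([[0.5]]); R = np.array([[0.0]])

P = np.array([[0.5]])          # candidate P = P^T >= 0
S_hat = C.T @ S                # S_hat, R_hat for D = 0 (assumed reduction)
R_hat = R

# The QSR LMI (4.101): the block matrix M must be negative semidefinite.
M = np.block([[A.T @ P + P @ A - C.T @ Q @ C, P @ B - S_hat],
              [(P @ B - S_hat).T,             -R_hat]])
assert np.allclose(M, M.T)
assert np.linalg.eigvalsh(M).max() <= 1e-12   # M <= 0: dissipativity certified
print("LMI (4.101) satisfied: eigenvalues =", np.linalg.eigvalsh(M))
```

Here P = 0.5 makes the off-diagonal block vanish (P B = Ŝ), so M = diag(−2, 0) ⪯ 0, exactly the KYP structure of the passivity LMI.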
All the results presented until now deal with time-invariant systems. This is partly
due to the fact that dissipativity is a tool that is used to study and design stable
closed-loop systems, and the Krasovskii–LaSalle invariance principle is at the core
of stability proofs (this will be seen in Chap. 5). As far as only dissipativity is dealt
with, one can say that most of the tools we have presented in the foregoing sections
(see, for instance, Theorems 4.34, 4.43, 4.46, Lemma 4.49), extend to the case:
  (Σ_t):  ẋ(t) = f(x(t), t) + g(x(t), t)u(t),  y(t) = h(x(t), t) + j(x(t), t)u(t),    (4.102)
where the well-posedness conditions are assumed to be fulfilled (see Sect. 3.13.2),
and f (0, t) = 0 for all t ≥ t0 , x(t0 ) = x0 . The available storage and required supply
are now defined as

  V_a(t, x) = sup_{u: (t,x)→ , t_1 ≥ t} { − ∫_t^{t_1} w(u(s), y(s))ds },    (4.103)

where the notation means that we consider all trajectories starting from (t, x), and

  V_r(t, x) = inf_{u: (t_0,0)→(t,x) , t ≥ t_0} { ∫_{t_0}^{t} w(u(s), y(s))ds },    (4.104)

where the notation means that we consider all trajectories from (t_0, 0) to (t, x).
We choose the passivity supply rate w(u, y) = 2u T y, as the nonnegativity of an
operator is intimately related with passivity. Passivity is understood here in Willems’
sense (Definition 4.21), with storage functions V (t, x) ≥ 0 instead of V (x) ≥ 0, and
V(t, 0) = 0:

  V(t_2, x(t_2)) ≤ V(t_1, ξ) + ∫_{t_1}^{t_2} 2u(s)^T y(s)ds,    (4.105)
One sees that the nonnegativity used in this theorem is exactly the dissipativity of
Definition 4.23, with the passive supply rate (it is quite common in dissipative systems
literature that several names are given to the same notion, depending on authors and
time of writing). The difference with respect to the nonnegativity introduced in
Proposition 2.36 stems from the LTI nature of the systems dealt with in Proposition
2.36, which allows us to fix the initial time at t = 0. The following result holds true
[105, Theorem 7.4].
Lemma 4.106 Assume that (t, x) is accessible from (t_0, 0) for all t and x, and f(0, t) = 0 for all t. Suppose moreover that the required supply V_r(t, x) and the available storage V_a(t, x) are continuously differentiable on R × R^n. The operator Λ associated with the system in (4.102) is nonnegative, if and only if there exists a continuous, almost everywhere differentiable function V : R × R^n → R, V(t, x) ≥ 0 for all (t, x) ∈ R × R^n, V(t, 0) = 0 for all t ∈ R, such that

  [ −∇V^T(t, x)f(x, t) − ∂V/∂t          h^T(x, t) − ½∇V^T(t, x)g(x, t) ]
  [ h(x, t) − ½g^T(x, t)∇V(t, x)         j(x, t) + j^T(x, t)           ]  ⪰ 0.    (4.106)
Proof Sufficiency (⇐): let there exist a function V(·) as in the lemma, such that (4.106) is satisfied. It is possible to calculate that the dissipation equality

  ∫_{t_1}^{t_2} 2u(s)^T y(s)ds = [V(t, x(t))]_{t_1}^{t_2} + ∫_{t_1}^{t_2} (1  u^T) [ W(t, x)  S(t, x) ; S(t, x)^T  R(t, x) ] (1 ; u) dt    (4.107)
holds for any t_1, t_2, t_2 ≥ t_1, with W(t, x) = −∇V^T(t, x)f(x, t) − ∂V/∂t, S(t, x) = h^T(x, t) − ½∇V^T(t, x)g(x, t), R(t, x) = j(x, t) + j^T(x, t). Nonnegativity of Λ means that ∫_{t_0}^{t} u(s)^T Λ(u(s))ds ≥ 0 for all t and t_0, t ≥ t_0, with x(t_0) = 0. Choose t_1 = t_0 and t_2 = t to conclude that the right-hand side of (4.107) is nonnegative.
Necessity (⇒): It is possible to prove that V_r in (4.104) and V_a in (4.103) exist if and only if the system is passive with V(t, 0) = 0, which in turn is equivalent to the nonnegativity of Λ, see Theorem 4.105. Notice that V_a(t, 0) = 0. The objective is to prove that the matrices [ W_r  S_r ; S_r^T  R ] and [ W_a  S_a ; S_a^T  R ], associated with V_r and V_a above, respectively, are nonnegative definite. Let u(·) be a controller which transfers the system from (t_0, 0) to (t, ξ). Let us associate with it the controller v(s) = u(s) if s ≤ t, v(s) = u_0 if t < s ≤ t + Δt, u_0 an arbitrary control value. The controller v(·) brings the system to a state ζ at time t + Δt. Proceeding as in the sufficiency part to obtain (4.107), we get
  ∫_{t_0}^{t+Δt} 2v(s)^T y(s)ds − V_r(t + Δt, ζ) − [ ∫_{t_0}^{t} 2u(s)^T y(s)ds − V_r(t, ξ) ]
  = (1  u_0^T) [ W_r(t, ξ)  S_r(t, ξ) ; S_r^T(t, ξ)  R(t, ξ) ] (1 ; u_0) Δt + o(Δt).    (4.108)
Using the definition of V_r(·, ·), it follows that ∫_{t_0}^{t+Δt} 2v(s)^T y(s)ds − V_r(t + Δt, ζ) ≥ 0, and ∫_{t_0}^{t} 2u(s)^T y(s)ds − V_r(t, ξ) can be made arbitrarily small with a suitable choice of u(·). Since the vector u_0 and the phase (t, ξ) are arbitrary, we infer from (4.108) that [ W_r(t, ξ)  S_r(t, ξ) ; S_r^T(t, ξ)  R ] ⪰ 0 for all t ∈ (t_0, t_1) and all ξ ∈ R^n, and hence V_r(·, ·) is a suitable function V(·, ·) satisfying the conditions of the theorem (notice that V_r(t, 0) = 0). To prove that [ W_a  S_a ; S_a^T  R ] ⪰ 0, one can follow the same steps, defining this time v(s) = u(s) if t ≤ s, v(s) = u_0 if t − Δt ≤ s < t.
We can treat linear time-varying (LTV) systems as a particular case, see Lemma 3.66. In Sect. 3.1.1 and Corollary 3.5, it has been shown that a system which satisfies a dissipation equality with a quadratic storage function also satisfies the KYP Lemma equations. For LTV nonnegative systems, the KYP Lemma equations also hold, see Lemma 3.66. Let us therefore start from the dynamical LTV system ẋ(t) = A(t)x(t) + B(t)u(t), y(t) = C(t)x(t) + D(t)u(t), where it is assumed that the matrix functions possess enough regularity so that the system is well-posed for all admissible inputs. Let us assume that we are given P(t) = P^T(t) ⪰ 0 for all t, with P : R → R^{n×n} differentiable. Thus, V(t, x) = x^T P(t)x makes a suitable storage function candidate. Calculating its derivative along the system's trajectories gives
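The derivative referred to above can be verified symbolically. The sketch below (our own, with generic symbolic matrix functions of t) checks that d/dt (x^T P(t) x) = x^T (Ṗ(t) + A^T(t)P(t) + P(t)A(t))x + 2x^T P(t)B(t)u along trajectories — the quadratic form in (x, u) from which the matrix Q̄(t) below is assembled when it is compared with the supply rate.

```python
import sympy as sp

t = sp.symbols('t')
n, m = 2, 1
x = sp.Matrix(sp.symbols('x1 x2'))
u = sp.Matrix([sp.symbols('u1')])

# Generic symbolic LTV data (our own sketch):
A = sp.Matrix(n, n, lambda i, j: sp.Function(f'a{i}{j}')(t))
B = sp.Matrix(n, m, lambda i, j: sp.Function(f'b{i}{j}')(t))
p11, p12, p22 = [sp.Function(s)(t) for s in ('p11', 'p12', 'p22')]
P = sp.Matrix([[p11, p12], [p12, p22]])      # P(t) = P(t)^T

# d/dt (x^T P(t) x) along xdot = A(t)x + B(t)u:
xdot = A * x + B * u
Vdot = (xdot.T * P * x + x.T * P.diff(t) * x + x.T * P * xdot)[0, 0]
expected = (x.T * (P.diff(t) + A.T * P + P * A) * x + 2 * x.T * P * B * u)[0, 0]
assert sp.expand(Vdot - expected) == 0
print("dV/dt = x^T (Pdot + A^T P + P A) x + 2 x^T P B u")
```

Symmetry of P(t) is what makes the two cross terms u^T B^T P x and x^T P B u merge into the single 2 x^T P B u term.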
Clearly, the condition Q̄(t) ⪰ 0 for all t guarantees that the system is dissipative. Actually, cyclo-dissipativity is guaranteed also if P(t) = P^T(t) only, without positive semidefiniteness. The calculations we have carried out (which are similar to those in Sect. 3.1.1 for LTI systems), as well as Proposition A.67, can be used to prove the following.
Theorem 4.107 ([68, Theorem 16]) The system ẋ(t) = A(t)x(t) + B(t)u(t), y(t) = C(t)x(t) + D(t)u(t) is cyclo-dissipative (respectively, dissipative) with respect to the supply rate w(u, y) = y^T Qy + 2y^T Su + u^T Ru, with Q = Q^T, R = R^T, and S constant matrices, if and only if there exist matrices P(t), L(t), W(t), with P(t) = P^T(t) (respectively, P(t) = P^T(t) ⪰ 0), satisfying the following:
The set of equations in (4.111) may be named Lur’e equations for LTV systems.
So far, only nonlinear systems which are linear in the input have been considered in
this book. Let us now analyze nonlinear systems of the following form:
  ẋ(t) = f(x(t), u(t)),  y(t) = h(x(t), u(t)),    (4.112)
with x(0) = x0 , and f (0, 0) = 0 and h(0, 0) = 0. It is assumed that f (·, ·) and h(·, ·)
are smooth functions (infinitely differentiable).
As we have seen in Sect. 4.3.5, storage functions are continuous under some rea-
sonable controllability assumptions. However, it is a much stronger assumption to
suppose that they are differentiable, or of class C 1 . The versions of the KYP Lemma
that have been presented above rely on the property that V (·) is C 1 . Here we show
how to relax this property by considering the infinitesimal version of the dissipation
inequality: this is a partial differential inequality which represents the extension of
the KYP Lemma to the case of continuous, non-differentiable storage functions.
4.5 Dissipative Systems and Partial Differential Inequalities
First of all and before going on with the nonlinear affine-in-the-input case, let us
investigate a novel path to reach the conclusions of Sect. 3.1.4. We consider the
linear time-invariant system
  ẋ(t) = Ax(t) + Bu(t),  y(t) = Cx(t) + Du(t),    (4.115)
where the supply rate is chosen as w(u, y) = u T y. By rearranging terms one gets
4.5.1.1 D ≻ 0
and the matrix D + D^T arises from the derivation of u^T Du. Injecting this u into H(x, p) and rewriting u^T Du as ½u^T(D + D^T)u, one obtains

  H(x, p) = p^T Ax + ½(B^T p − Cx)^T (D + D^T)^{-1} (B^T p − Cx).    (4.119)

Let us now consider the quadratic function V(x) ≜ ½x^T P x, P = P^T, and H(x, P) ≜ H(x, ∂V/∂x). We obtain

  H(x, P) = x^T P Ax + ½(B^T P x − Cx)^T (D + D^T)^{-1} (B^T P x − Cx),    (4.120)
and requiring H(x, ∂V/∂x) ≤ 0 for all x ∈ R^n yields

  P A + A^T P + (P B − C^T)(D + D^T)^{-1}(B^T P − C) ⪯ 0,    (4.121)

which is the Riccati inequality in (3.19). We have therefore shown that under the condition D ≻ 0 the inequality H(x, ∂V/∂x) ≤ 0 is equivalent to the Riccati inequality in (4.121), thus to the matrix inequality in (3.3).
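The equivalence just shown can be tested numerically on a small PR example (our own: H(s) = (s+2)/(s+1) with candidate P = 1), checking both the Riccati inequality and the corresponding KYP-type LMI whose Schur complement it is.

```python
import numpy as np

# Scalar PR example (illustrative): A=-1, B=1, C=1, D=1, so D + D^T = 2 > 0.
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])
P = np.array([[1.0]])   # candidate P = P^T > 0

Dsym = D + D.T
# Riccati inequality: P A + A^T P + (P B - C^T)(D + D^T)^{-1}(B^T P - C) <= 0
ric = P @ A + A.T @ P + (P @ B - C.T) @ np.linalg.inv(Dsym) @ (B.T @ P - C)
# KYP-type LMI whose (1,1)-Schur complement is -ric:
kyp = np.block([[-A.T @ P - P @ A, C.T - P @ B],
                [C - B.T @ P,      Dsym]])
assert np.linalg.eigvalsh(ric).max() <= 1e-12    # Riccati inequality holds
assert np.linalg.eigvalsh(kyp).min() >= -1e-12   # KYP LMI holds
print("Riccati residual:", ric.ravel(), "; KYP eigenvalues:", np.linalg.eigvalsh(kyp))
```

Here P B = C^T, so the off-diagonal blocks vanish and both tests reduce to −2 ≤ 0 and diag(2, 2) ⪰ 0, as expected for a PR transfer function.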
4.5.1.2 D = 0
Let us now investigate what happens when D = 0. Following the same reasoning, one finds that the maximizing input does not exist (the function to maximize is linear in u: (p^T B − x^T C^T)u), so that for the supremum to have a meaning (to be different from +∞) it is necessary that p^T B − x^T C^T = 0 for all x ∈ R^n (with p = ∂V/∂x). Choosing the same storage function as above, it follows that H(x, ∂V/∂x) ≤ 0 yields P A + A^T P ⪯ 0 and P B = C^T: the system (A, B, C) is passive.
4.5.1.3 D ⪰ 0
The conjugate function of f(·) is defined as

  f*(z) ≜ sup_{u ∈ dom(f)} [z^T u − f(u)].    (4.122)

Drawing the analogy with (4.116), one finds f(u) = u^T Du, z = B^T p − Cx, and H(x, p) is the sum of the conjugate f*(z) and of p^T Ax. It is a basic result from convex analysis that if D + D^T ≻ 0, then

  f*(z) = ½z^T (D + D^T)^{-1} z,    (4.123)

from which one straightforwardly recovers the previous results and the Riccati inequality. We also saw what happens when D = 0. Let us now investigate the case D + D^T ⪰ 0, D + D^T singular. We get [109, Example 1.1.4]
  f*(z) = { +∞ if z ∉ Im(D + D^T) ;  ½z^T (D + D^T)† z if z ∈ Im(D + D^T) },    (4.124)

so that

  H(x, p) = p^T Ax + { +∞ if B^T p − Cx ∉ Im(D + D^T) ;  ½(B^T p − Cx)^T (D + D^T)† (B^T p − Cx) if B^T p − Cx ∈ Im(D + D^T) }.    (4.125)
whose proof can be deduced almost directly from Lemma A.70, noticing that W^T W ⪰ 0. One has

  P B − C^T = (P B − C^T)(D + D^T)(D + D^T)†.    (4.128)

It follows from (4.128) and standard matrix algebra [110, p. 78, p. 433] that Im(B^T P − C) = Im[(D + D^T)†(D + D^T)(B^T P − C)] ⊆ Im[(D + D^T)†(D + D^T)] ⊆ Im((D + D^T)†) = Im(D + D^T). Thus, (4.128) ⟺ (4.127)(ii) ⟺ (4.126)(i). Now obviously (4.127)(i) is nothing else but (4.126)(ii). We therefore conclude that the conditions of the KYP Lemma in (3.2) are equivalent to the degenerate Riccati inequality (4.126).
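The range condition (4.128) is easy to probe numerically with the Moore–Penrose pseudo-inverse. The matrices below are our own illustrative stand-ins for P B − C^T; they are not derived from a particular system.

```python
import numpy as np

# Illustration of (4.128): with a singular D + D^T, P B - C^T must satisfy
#   P B - C^T = (P B - C^T)(D + D^T)(D + D^T)^+,
# i.e. Im(B^T P - C) must lie inside Im(D + D^T).
D = np.diag([1.0, 0.0])
Dsym = D + D.T                      # rank-1, hence singular
pinv = np.linalg.pinv(Dsym)         # Moore-Penrose pseudo-inverse

ok = np.array([[1.0, 0.0], [0.0, 0.0]])    # stand-in for P B - C^T, compatible
bad = np.array([[0.0, 0.0], [0.0, 1.0]])   # component outside Im(D + D^T)

assert np.allclose(ok, ok @ Dsym @ pinv)         # (4.128) holds
assert not np.allclose(bad, bad @ Dsym @ pinv)   # (4.128) fails
print("the range condition discriminates the two cases")
```

Right-multiplying by (D + D^T)(D + D^T)† projects onto Im(D + D^T), which is why the second candidate is rejected.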
To summarize, starting from the Hamiltonian function in (4.116):

  — if D ≻ 0: Riccati inequality in (4.121);
  — if D = 0: LMI in (3.2) with W = 0;
  — if D ⪰ 0 (singular): DRI in (4.126).
Remark 4.111 (Singular optimal control) As we saw in Sects. 3.1.4 and 3.11, the link between passivity (the KYP Lemma) and optimal control exists when R = D + D^T ≻ 0. The optimal control problem is then regular. There must exist a link between the KYP Lemma conditions with D + D^T ⪰ 0 (singular) and singular optimal control problems. We consider the optimal control problem with cost function w(u, x) = u^T y = u^T(Cx + Du) = ½u^T Ru + x^T C^T u. Let rank(D + D^T) = r < m and let s = m − r be the dimension of the singular control. Let n ≤ s, and partition B and C as B = (B_1  B_2) and C = (C_1 ; C_2), with B_1 ∈ R^{n×r}, B_2 ∈ R^{n×s}, C_1 ∈ R^{r×n}, C_2 ∈ R^{s×n}. Then, (A, B, C, D) is PR if and only if D + D^T ⪰ 0 and there exists P = P^T ⪰ 0 satisfying P B_2 = C_2^T and
  [ −P A − A^T P        −P B_1 + C_1^T ]
  [ −B_1^T P + C_1       R_1           ]  ⪰ 0.    (4.130)
The proof can be found in [111]. It is based on the fact that when D + D^T does not have full rank, (3.3) can be rewritten as −P B_2 + C_2^T = 0 together with (4.130).
We consider in this section the system (Σ) in (4.79). Let us first state a theorem which shows what kind of partial differential inequality the storage functions of dissipative systems (i.e., systems satisfying (4.24)) are solutions of. Let us define the Hamiltonian function
Also, let V̲(x) = lim inf_{z→x} V(z) be the lower semi-continuous envelope of V(·). A locally bounded function V : X → R is a weak or a viscosity solution to the partial differential inequality H(x, ∇V) ≤ 0 for all x ∈ X, if for every C^1 function φ : X → R and every local minimum x_0 ∈ R^n of V̲ − φ, one has H(x_0, ∇φ(x_0)) ≤ 0. The PDI H(x, ∇V) ≤ 0 for all x ∈ X is also called a Hamilton–Jacobi inequality. The
set U plays an important role in the study of the HJI, and also for practical reasons
(for instance, if u is to be considered as a disturbance, then it may be assumed to take
values in some compact set, but not in the whole of Rm ). Let us present the following
theorem, whose proof is inspired by [113]. Only those readers familiar with partial
differential inequalities and viscosity solutions should read it. The others can safely
skip the proof. The next theorem concerns the system in (4.79), where f(·), g(·), and h(·) are supposed to be continuously differentiable, with f(0) = 0, h(0) = 0 (thus x = 0 is a fixed point of the uncontrolled system), and ∂f/∂x, ∂g/∂x, and ∂h/∂x are globally bounded.
Theorem 4.113 ([67]) (i) If the system (Σ) in (4.79) is dissipative in the sense of
Definition 4.25, with storage function V (·), then V (·) satisfies the partial differential
inequality
The suprema in (4.131) and (4.132) are computed over all admissible u(·). It is noteworthy that the PDI in (4.132) is to be understood in a weak sense (V(·) is a viscosity solution), which means that V(·) need not be continuously differentiable to be a solution. The derivative is understood as the viscosity derivative, see (4.71) and Appendix A.3.
In short, Theorem 4.113 says that a dissipative system such as (Σ) in (4.79) possesses a storage function that is at least lower semi-continuous.
Proof of Theorem 4.113 (i) Let φ(·) ∈ C 1 (Rn ) and suppose that V − φ attains a
local minimum at the point x0 ∈ Rn . Let us consider a constant input u (u(t) = u
for all t ≥ 0), and let x(t) be the corresponding trajectory with initial condition
x(0) = x0 . For sufficiently small t ≥ 0 we get
since V − φ attains a local minimum at the point x_0 ∈ R^n. Since the system (Σ) is dissipative in the sense of Definition 4.25 with storage function V(·), and since the lower semi-continuous envelope V̲(·) satisfies the dissipation inequality each time its associated storage V(·) does, it follows that

  V(x_0) − V(x(t)) ≥ − ∫_0^t w(u, y(s))ds.    (4.134)
  H(x_0, ∇φ(x_0)) = ∇φ^T(x_0)f(x_0) + sup_{u∈U} [∇φ^T(x_0)g(x_0)u − w(u, h(x_0))] ≤ 0    (4.137)
holds for all u ∈ U. We have therefore proved that V (·) is a viscosity solution of
(4.132).
(ii) Let us define U_R = {u ∈ U | ||u|| ≤ R}, R > 0. Let U_R denote the set of controllers with values in U_R. Since V(·) is lower semi-continuous, there exists a nondecreasing sequence {Ψ_i}_{i=1}^∞ of locally bounded functions such that Ψ_i ≤ Ψ_{i+1} ≤ V and Ψ_i → V as i → +∞. Let τ > 0 and define

  Z_i^R(x, s) = sup_{u∈U_R} { Ψ_i(x(τ)) − ∫_s^τ w(u(r), y(r))dr | x(s) = x }.    (4.138)
Setting s = 0 yields

  V(x) ≥ sup_{u∈U_R} { Ψ_i(x(τ)) − ∫_0^τ w(u(r), y(r))dr | x(0) = x }.    (4.141)

Letting i → +∞ we obtain

  V(x) ≥ sup_{u∈U_R} { V(x(τ)) − ∫_0^τ w(u(r), y(r))dr | x(0) = x }.    (4.142)

Letting R → +∞,

  V(x) ≥ sup_{u∈U} { V(x(τ)) − ∫_0^τ w(u(r), y(r))dr | x(0) = x },    (4.143)
where we recall that U is just the set of admissible inputs, i.e., locally square
Lebesgue integrable functions of time (locally L2 ) such that (4.20) is satisfied. This
last inequality holds for all τ ≥ 0, so that (4.24) holds. Consequently (Σ) is dissi-
pative and V (·) is a storage function.
When specializing to passive systems then the following holds:
Corollary 4.114 ([67]) The system (Σ) in (4.79) is passive, if and only if there exists
a locally bounded nonnegative function V (·) such that V (0) = 0 and
for all x ∈ Rn .
In (4.145), solutions are supposed to be weak, i.e.: if Ξ(·) ∈ C^1(R^n) and V − Ξ attains a local minimum at x_0 ∈ R^n, then

  ∇Ξ^T(x_0)f(x_0) ≤ 0,  ∇Ξ^T(x_0)g(x_0) = h^T(x_0).    (4.146)
One sees that the set of conditions in (4.146) is nothing else but (4.81) expressed in
a weak (or viscosity) sense.
If one supposes that V(0) = 0 and x(0) = 0, then it follows from (4.147) that

  0 ≤ V(x(t)) ≤ ∫_0^t [γ^2 u^T(s)u(s) − y^T(s)y(s)]ds,    (4.148)

from which one deduces that ∫_0^t y^T(s)y(s)ds ≤ γ^2 ∫_0^t u^T(s)u(s)ds, which simply means that the system defines an input–output operator H_x which has a finite L_2-gain at most γ (see Definition 4.17), and H_{x=0} has zero bias.
of local w-uniform reachability assures that storage functions are continuous. Let
us assume that V (·) is a smooth storage function. Then, the dissipation inequality
(4.147) is equivalent to its infinitesimal form
Theorem 4.116 ([114]) Assume that the system in (4.85) has finite gain at most γ
and is uniformly controllable, so that Va (·) and Vr (·) are both well-defined continuous
storage functions. Then
• Va (·) is a viscosity solution of −H (x, ∇V (x)) = 0 if Assumption 6 is satisfied.
• Vr (·) is a viscosity solution of H (x, ∇V (x)) = 0 if Assumption 7 is satisfied.
Remark 4.117 • Storage functions that satisfy (4.88) can also be shown to be solutions of the following partial differential inequality:

  ∇V^T(x)f(x) + (h^T(x) − ½∇V^T(x)g(x)) R̂^{-1}(x) (h(x) − ½g^T(x)∇V(x)) ≤ 0,    (4.154)

when R̂(x) = j(x) + j^T(x) is full rank, R = 0, Q = 0, S = ½I. The proof is exactly the same as in the linear time-invariant case (Sect. 3.1.4). The available storage and the required supply satisfy this formula (which is similar to a Riccati equation) as an equality (Proposition 4.51).
• In the linear invariant case, the equivalent to Hamilton–Jacobi inequalities are
Riccati equations, see Sect. 3.1.4. This also shows the link with optimal control.
Hamilton–Jacobi equalities also arise in the problem of inverse optimal control,
see Sect. 4.5.5.
• In the time-varying case (4.102), the PDI in (4.154) becomes

  ∂V/∂t(x, t) + ∇V^T(x, t)f(x, t) + (h^T(x, t) − ½∇V^T(x, t)g(x, t)) R̂^{-1}(x, t) (h(x, t) − ½g^T(x, t)∇V(x, t)) ≤ 0.    (4.155)
In order to illustrate the above developments let us present an example, taken from
[115].
  PH(r, u) = 2 (dV/d(r^2)) r[(r^2 − 1)(r^2 − 4) + r(r^2 − 4)u] − γ^2 u^T u + (r^2 − 1)^2,    (4.158)

and the maximizing controller is

  u = (1/γ^2) r^2 (r^2 − 4) (dV/d(r^2)).    (4.159)
minimal set condition V(1) = 0 solves this HJI. One such solution is given by

  V(r) = −(1/4)ln(r^2) − (3/4)ln(4 − r^2) + (3/4)ln(3).    (4.161)

This function V(r) is locally bounded on the set R, satisfies V(r) ≥ 0, grows unbounded as x → bd(R) (i.e., as the state approaches the boundary of R, in particular the origin), and V(r) = 0 on the circle S. Therefore, the system in (4.156) is dissipative, with storage function V(r).
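The claimed properties of (4.161) can be checked numerically (our own sketch; here R is read as the annulus 0 < r < 2 and S as the circle r = 1):

```python
import numpy as np

# V(r) = -(1/4) ln(r^2) - (3/4) ln(4 - r^2) + (3/4) ln(3) on 0 < r < 2.
def V(r):
    return -0.25 * np.log(r**2) - 0.75 * np.log(4 - r**2) + 0.75 * np.log(3)

assert abs(V(1.0)) < 1e-12                  # V vanishes on the circle r = 1
r = np.linspace(1e-6, 2 - 1e-6, 20001)
assert (V(r) >= -1e-9).all()                # V >= 0 on the whole annulus
assert V(1e-6) > 5 and V(2 - 1e-9) > 5      # grows (logarithmically) at bd(R)
print("V(1) = 0, V >= 0, and V increases toward both boundaries r = 0, r = 2")
```

The unique interior minimum sits at r = 1 (where dV/d(r^2) = −1/(4r^2) + 3/(4(4 − r^2)) vanishes), which is why V ≥ 0 with equality exactly on S.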
Let us summarize the developments in this section and the foregoing ones, on the
characterization of dissipative systems.
PR transfer functions
where the “implications” just mean that the problems are decreasing in mathematical
complexity.
4.5.4 Recapitulation
Let us take advantage of the presentation of this section, to recapitulate some tools
that have been introduced throughout the foregoing: Riccati inequalities, Hamiltonian
function, Popov’s functions, and Hermitian forms. A Hermitian form has the general
expression
x
H (x, y) = (x y )Σ
T T
, (4.162)
y
334 4 Dissipative Systems
Q YT
with x ∈ Rn , y ∈ Rn , Σ = , Q ∈ Rn×n , Y ∈ Rn×n , R ∈ Rn×n , Q = Q T ,
Y R
R = R T . Let y = P x for some P = P T ∈ Rn×n . Then
H(x, Px) = 0 for all x ∈ R^n if and only if the algebraic Riccati equation (ARE)

  Q + PY + Y^T P + P R P = 0  (P = P^T)

holds.
The proof is done by calculating explicitly H (x, P x). The analogy with (4.120)
and (4.121) is straightforward (with equalities instead of inequalities). A solution to
the ARE is stabilizing if the ODE ẋ(t) = (dH/dy)|_{y=Px} = 2(Y + RP)x(t) is globally
asymptotically stable. The results of Theorems 3.73, 3.74, 3.75, and 4.61 allow us
to assert that stabilizing solutions exist in important cases.
Linking this with the spectral (or Popov's) function Π(s) in Theorems 2.35 and 3.77, or (3.172), (3.173), we see that taking x = (jωI_n − A)^{-1}B and y = I_m in (4.162) (with appropriate dimensions of the matrices Y ∈ R^{m×n} and R ∈ R^{m×m}) yields that Π(jω) is a rational Hermitian-matrix-valued function defined on the imaginary axis. The positivity of Π(jω) is equivalent to the passivity of the system with realization (A, B, Y), which in turn can be characterized by an LMI (the KYP Lemma set of equations), which in turn is equivalent to an ARI.
A particular optimal control problem is to find the control input u(·) that minimizes the integral action ∫_0^∞ [q(x(t)) + u^T(t)u(t)] dt under the dynamics in (4.79), where q(x) is continuously differentiable and positive definite. From standard dynamic programming arguments, it is known that the optimal input is u*(x) = −(1/2) g^T(x) (∂V/∂x)^T(x), where V(·) is the solution of the partial differential equation called a Hamilton–Jacobi–Bellman equation:

(∂V/∂x)(x) f(x) − (1/4) (∂V/∂x)(x) g(x) g^T(x) (∂V/∂x)^T(x) + q(x) = 0.    (4.163)
Moreover, V(x(t)) = inf_{u(·)} ∫_t^∞ [q(x(τ)) + u^T(τ)u(τ)] dτ, V(0) = 0. One recognizes that u*(x) is nothing else but a static feedback of the passive output of the system (4.79) with storage function V(·). Applying some of the results in this section
and in Sect. 5.5, one may additionally study the stability of the closed-loop system
with the optimal input (see in particular Theorem 5.35). Let us consider the linear
time-invariant case with quadratic cost q(x) = x^T Q x. Then, one looks for a quadratic storage function V(x) = x^T P x, and the HJB equation (4.163) reduces to the algebraic Riccati equation

P A + A^T P − P B B^T P + Q = 0.    (4.164)
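As an illustration (with hypothetical A, B, Q), the stabilizing solution of (4.164) can be computed from the stable invariant subspace of the associated Hamiltonian matrix, a construction that reappears in (4.196):

```python
import numpy as np

# hypothetical LQ data: minimize the integral of x^T Q x + u^T u subject to xdot = A x + B u
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

# stabilizing solution of (4.164) from the stable invariant subspace of the
# Hamiltonian matrix M = [[A, -B B^T], [-Q, -A^T]]
M = np.block([[A, -B @ B.T], [-Q, -A.T]])
w, vecs = np.linalg.eig(M)
S = vecs[:, w.real < 0]                 # eigenvectors of the two stable eigenvalues
P = np.real(S[2:, :] @ np.linalg.inv(S[:2, :]))

res = P @ A + A.T @ P - P @ B @ B.T @ P + Q    # residual of (4.164)
assert np.allclose(res, 0, atol=1e-8)
assert max(np.linalg.eigvals(A - B @ B.T @ P).real) < 0   # u*(x) = -B^T P x stabilizes
```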
We now consider the inverse optimal control problem. The system is

ẋ(t) = f(x(t)) + B u(t),    (4.165)

where f(·) is smooth, f(0) = 0, and B is a constant matrix. We are also given a performance index

V = lim_{t→+∞} [ η(x(t)) + ∫_0^t ( L^T(x(s))L(x(s)) + u^T(s)u(s) ) ds ],    (4.166)

and a state feedback controller

u*(x) = −k(x).    (4.167)
Let us assume that u (x) is optimal with respect to the performance index (4.166),
and let us denote the minimum value of V as φ(x0 ). In general, there is not a unique
L(x) and η(x) for which the same controller is optimal. In other words, there may
exist many different L(x), to which correspond different φ(x), for which the same
controller is optimal. The inverse optimal control problem is as follows: given the
system (4.165) and the controller (4.167), a pair (φ(·), L(·)) is a solution of the
inverse optimal control problem if the performance index (4.166) is minimized by
(4.167), with minimum value φ(x_0). In other words, the inverse approach consists of designing a stabilizing feedback control law first, and then showing that it is optimal with respect to a meaningful and well-defined cost functional.
Lemma 4.119 ([116]) Suppose that the system in (4.165) and the controller in
(4.167) are given. Then, a pair (φ(·), L(·)) is a solution of the inverse optimal
control problem if and only if φ(x) and L(x) satisfy the equations
∇φ^T(x)[ f(x) − (1/2)B k(x) ] = −L^T(x)L(x)

(1/2) B^T ∇φ(x) = k(x)                              (4.168)

φ(0) = 0,  φ(x) ≥ 0 for all x ∈ X.
be passive. If this is the case, then there exist two solutions (φa(·), L a(·)) and
(φr (·), L r (·)) of (4.168) such that all other solutions satisfy φa (x) ≤ φ(x) ≤ φr (x)
for all x ∈ X .
Indeed, the equations in Lemma 4.119 are nothing else but the KYP Lemma conditions for the system (4.169). The interpretation of φ_a(x) and φ_r(x) as the available storage and the required supply, respectively, is obvious as well. One recovers the HJB equation (4.163) by replacing g(x) by B and q(x) by L^T(x)L(x).
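A scalar sketch of Lemma 4.119 (with hypothetical data f(x) = −x, B = 1, and candidate value function φ(x) = x^2) shows how a pair (φ, L) is recovered from (4.168):

```python
import numpy as np

# scalar sketch of Lemma 4.119 with hypothetical data: xdot = f(x) + B u,
# f(x) = -x, B = 1, candidate value function phi(x) = x^2
B = 1.0
f = lambda x: -x
grad_phi = lambda x: 2.0 * x
k = lambda x: 0.5 * B * grad_phi(x)              # second equation in (4.168)
# the first equation in (4.168) defines L(x) through -L^T(x)L(x):
L_sq = lambda x: -grad_phi(x) * (f(x) - 0.5 * B * k(x))
for x in np.linspace(-2.0, 2.0, 9):
    assert L_sq(x) >= -1e-12                     # a valid (phi, L) pair needs L^2 >= 0
# here L(x)^2 = 3 x^2: the controller u*(x) = -k(x) = -x is optimal for
# the cost with integrand 3 x^2 + u^2
assert np.isclose(L_sq(1.0), 3.0)
```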
Remark 4.121 The inverse optimal control problem was first solved by Kalman
[118] in the case of linear systems with linear state feedback. Other works can be
found in [119].
Let us end this section with a result that completes the above ones. We consider the
system
ẋ(t) = f (x(t)) + g(x(t))u(t)
(4.170)
y(t) = h(x(t)) + j (x(t))u(t),
with x(0) = x0 , and where all the mappings are continuously differentiable and
f(0) = 0, h(0) = 0. Let us define the set of stabilizing controllers, together with the performance index

∫_0^∞ [ L(x(t)) + u^T(t) R u(t) ] dt,    (4.171)

with L : R^n → R_+, 0 ≺ R ∈ R^{m×m}.
Theorem 4.122 ([117, 120]) Consider the system in (4.170) with the performance
index in (4.171). Let us assume that there exists a continuously differentiable and radially unbounded function V : R^n → R with V(0) = 0 and V(x) > 0 for all x ≠ 0, satisfying

L(x) + ∇^T V(x) f(x) − (1/4) ∇^T V(x) g(x) R^{-1} g^T(x) ∇V(x) = 0.    (4.172)
Moreover, let h(x) = L(x) and suppose that the new system in (4.170) is zero-state observable. Then, the origin x = 0 of the closed-loop system ẋ(t) = f(x(t)) + g(x(t))u(x(t)) is globally asymptotically stable with the feedback control

u(x) = −φ(x) = −(1/2) R^{-1} g^T(x) ∇V(x).    (4.174)
The action in (4.171) is minimized in the sense that its minimal value equals V(x_0).
The extension of Theorem 4.122 toward the output feedback case is given in [121,
Theorem 6.2]. The equation in (4.172) is a Hamilton–Jacobi–Bellman equation.
Consider the Hamiltonian function H(x, p, u) = L(x) + u^T R u + p^T( f(x) + g(x)u ). Its minimization with respect to u can be carried out using the strict convexity of the integrand in (4.171) (since R ≻ 0), so that the minimizing input is u(x) = −(1/2) R^{-1} g^T(x) p. Various application examples may be found in [120], like the stabilization of the controlled Lorenz equations, and the stabilization of the angular velocity with two actuators, and with one actuator.
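The convexity argument can be illustrated on a scalar instance (hypothetical values of R, g, p): the u-dependent part of the Hamiltonian is minimized exactly at u = −(1/2) R^{-1} g^T p:

```python
import numpy as np

# scalar illustration (hypothetical R, g, p): the u-dependent part of the
# Hamiltonian, u^T R u + p^T g u, is strictly convex since R > 0
R, g, p = 2.0, 3.0, -1.5
H_u = lambda u: u * R * u + p * g * u
u_star = -0.5 * (1.0 / R) * g * p                # the claimed minimizer
grid = np.linspace(-5.0, 5.0, 10001)
assert abs(grid[np.argmin(H_u(grid))] - u_star) < 1e-3
```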
4.6 Nonlinear Discrete-Time Systems

The material of this section is taken mainly from [122]. The following class of systems is considered:
x(k + 1) = f (x(k)) + g(x(k))u(k)
(4.177)
y(k) = h(x(k)) + j (x(k))u(k),
where x(k) ∈ Rn , u(k) ∈ Rm , y(k) ∈ Rm , and the functions f (·), g(·), h(·), and j (·)
are smooth mappings. It is assumed that f (0) = 0 and h(0) = 0.
Definition 4.123 The dynamical system in (4.177) is said to be dissipative with respect to the supply rate w(u, y) if there exists a nonnegative function V : R^n → R with V(0) = 0, called a storage function, such that for all u ∈ R^m and all k ∈ N one has

V(x(k + 1)) − V(x(k)) ≤ w(u(k), y(k)),    (4.178)

or equivalently
V(x(k + 1)) − V(x(0)) ≤ Σ_{i=0}^{k} w(u(i), y(i))    (4.179)
for all k, u(k), and x(0). The inequality (4.179) is called the dissipation inequality
in the discrete-time setting.
The system is said to be lossless with respect to w(u, y) = u^T y if (4.179) holds with equality, i.e., V(x(k + 1)) − V(x(0)) = Σ_{i=0}^{k} u^T(i)y(i) for all u(k) and all k.
It is of interest to present the extension of the KYP Lemma for such nonlinear
discrete-time systems, that is, the nonlinear counterpart to Lemma 3.172.
Lemma 4.125 (KYP Lemma [122]) The system (4.177) is lossless with a C 2 storage
function if and only if
V(f(x)) = V(x)

(∂V/∂z)(z)|_{z=f(x)} g(x) = h^T(x)
                                                            (4.180)
g^T(x) (∂^2 V/∂z^2)(z)|_{z=f(x)} g(x) = j^T(x) + j(x)

V(f(x) + g(x)u) is quadratic in u.
Proof Necessity: If the system is lossless, there exists a nonnegative storage function V(x) such that

V(f(x) + g(x)u) − V(x) = u^T h(x) + u^T j(x)u    (4.181)

for all u ∈ R^m and all x ∈ R^n. Differentiating (4.181) with respect to u yields

∂V(f(x) + g(x)u)/∂u = (∂V/∂z)|_{z=f(x)+g(x)u} g(x) = h^T(x) + u^T( j^T(x) + j(x) ),    (4.182)

and

∂^2 V(f(x) + g(x)u)/∂u^2 = g^T(x) (∂^2 V/∂z^2)|_{z=f(x)+g(x)u} g(x) = j(x) + j^T(x).    (4.183)
Equations (4.182) and (4.183) imply the second and third equations in (4.180). The
last condition in (4.180) follows easily from (4.181).
Sufficiency: Suppose that the last condition in (4.180) is satisfied. One deduces that

V(f(x) + g(x)u) = A(x) + B(x)u + u^T C(x)u    (4.184)

for all u ∈ R^m and some functions A(x), B(x), C(x). From the Taylor expansion of V(f(x) + g(x)u) at u = 0 we obtain
A(x) = V(f(x))

B(x) = ( ∂V(f(x) + g(x)u)/∂u )|_{u=0} = (∂V/∂z)|_{z=f(x)} g(x)    (4.185)

C(x) = (1/2) ( ∂^2 V(f(x) + g(x)u)/∂u^2 )|_{u=0} = (1/2) g^T(x) (∂^2 V/∂z^2)|_{z=f(x)} g(x).
A similar result is stated in [123, Lemma 2.5] for passivity instead of losslessness (basically, one replaces the equalities in the first and third lines of (4.180) by inequalities ≤). Further results on nonlinear dissipative discrete-time systems may be found in
[124–126].
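The discrete-time energy-balance conditions (4.180) can be illustrated on a linear example (hypothetical data): with an orthogonal A and storage V(x) = (1/2) x^T x, choosing C and D as dictated by (4.180) makes the system lossless:

```python
import numpy as np

# hypothetical lossless discrete-time data: x(k+1) = A x(k) + B u(k),
# y(k) = C x(k) + D u(k), storage V(x) = (1/2) x^T x with A orthogonal
th = 0.7
A = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
B = np.array([[1.0], [0.5]])
C = B.T @ A                  # second condition in (4.180) for this storage
D = 0.5 * (B.T @ B)          # third condition: D + D^T = B^T B

V = lambda z: 0.5 * z @ z
rng = np.random.default_rng(1)
x = rng.standard_normal(2)
for _ in range(20):
    u = rng.standard_normal(1)
    y = C @ x + D @ u
    xp = A @ x + B @ u
    assert np.isclose(V(xp) - V(x), u @ y)   # storage increment = supplied energy
    x = xp
```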
The topic of this section is the following: consider a nonlinear system with a sufficiently regular vector field, and its tangent linearization about some point (x*, u*). Suppose that the tangent linearization is positive real or strictly positive real. Then, is the nonlinear system locally dissipative? Or conversely? Let us consider the following
nonlinear system:
ẋ(t) = f (x(t)) + g(x(t))u(t)
(Σ) (4.187)
y(t) = h(x(t)),
with x(0) = x_0, where f(·), g(·), h(·) are continuously differentiable functions of x, f(0) = 0, h(0) = 0. Let us denote A = (∂f/∂x)(0), B = (∂(g(x)u)/∂u)(x = 0, u = 0) = g(0), C = (∂h/∂x)(0). The tangent linearization of the system in (4.187) is the linear time-invariant system
ż(t) = Az(t) + Bu(t)
(Σt ) (4.188)
ζ (t) = C z(t),
with z(0) = x0 . The problem is as follows: under which conditions are the following
equivalences true?
(Σt) ∈ PR ⇐⇒ (Σ) is locally passive?

(Σt) ∈ SPR ⇐⇒ (Σ) is locally strictly dissipative?
It also has to be said whether dissipativity is understood in Willems’ sense (exis-
tence of a storage function), or in Hill and Moylan’s sense. Clearly, one will also
be interested in knowing whether or not the quadratic storage functions for (Σt )
are local storage functions for (Σ). Important tools to study the above two equiva-
lences will be the local stability, the local controllability, and the local observability
properties of (Σ) when (A, B) is controllable, (A, C) is observable, and A has only
eigenvalues with nonpositive real parts. For instance, local w-uniform reachability
of (Σ) (Definition 4.47) is implied by the controllability of (Σt ) (Proposition 4.86).
One can thus already state that if A has eigenvalues with negative real parts, and if
(A, B) is controllable and (A, C) is observable, then (Σ) has properties that make it
a good candidate for local dissipativity with positive definite storage functions and a Lyapunov asymptotically stable fixed point of ẋ(t) = f(x(t)) (see Lemmas 5.29
and 5.31 in the next chapter).
Example 4.126 Let us consider the scalar system
ẋ(t) = (1/2)x^2(t) + (x(t) + 1)u(t)
y(t) = x(t),                                  (Σ)    (4.189)
with x(0) = x_0. The tangent system (Σt) is an integrator H(s) = 1/s. It is PR, though the uncontrolled (Σ) is unstable (it may even have finite escape times).
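A quick numerical sketch of Example 4.126 (uncontrolled, x(0) = 1) confirms the finite escape time:

```python
# Example 4.126, uncontrolled: xdot = (1/2) x^2, x(0) = 1, has the closed-form
# solution x(t) = x0 / (1 - x0 t / 2), which escapes to infinity at t = 2/x0 = 2;
# a forward-Euler run (illustrative) tracks the blow-up
x0 = 1.0
x, t, dt = x0, 0.0, 1e-4
while t < 1.9:
    x += dt * 0.5 * x * x
    t += dt
assert x > 15.0            # the exact value at t = 1.9 is 20
```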
Example 4.127 Let us consider the scalar system
ẋ(t) = x^2(t) − x(t) + (x^3(t) + x(t) + 1)u(t)
y(t) = x^2(t) + x(t),                          (Σ)    (4.191)
dissipative with this storage function and the supply rate uy, since y = g^T(x)(∂V/∂x)(x).
Consider now
ẋ(t) = x 2 (t) − x(t) + u(t)
(Σ) (4.193)
y(t) = x(t)
with x(0) = x_0, and whose tangent linearization is in (4.192). This system is locally stable with Lyapunov function V(x) = x^2/2, and y = g^T(x)(∂V/∂x)(x). Easy computation yields that ∫_0^t u(s)y(s)ds ≥ V(x(t)) − V(x(0)) for x ∈ (−1, 1). Hence, V(x) is a storage function for (4.193), which is locally dissipative in (−1, 1) ∋ x.
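The local dissipation inequality for (4.193) can be checked along a simulated trajectory (illustrative sketch; small random inputs, forward Euler):

```python
import numpy as np

# dissipation check for (4.193): along a trajectory staying in (-1, 1), the
# supplied energy dominates the storage increment with V(x) = x^2 / 2
rng = np.random.default_rng(2)
dt, x = 1e-3, 0.3
V0, supply = 0.5 * x * x, 0.0
for _ in range(2000):
    u = 0.2 * rng.standard_normal()
    supply += dt * u * x                 # y = x
    x += dt * (x * x - x + u)
    assert -1.0 < x < 1.0                # stays where the bound is valid
assert supply >= 0.5 * x * x - V0 - 1e-9
```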
Let us present a result which states under which conditions the tangent linearization
of a dissipative system is a SPR system. Consider the system
ẋ(t) = f (x(t)) + g(x(t))u(t)
(Σ) (4.194)
y(t) = h(x(t)) + j (x(t))u(t)
with x(0) = x0 , and the dimensions for signals used throughout this book, f (0) = 0
and h(0) = 0. The notion of dissipativity that is used is that of exponential dissipa-
tivity, i.e., dissipativity with respect to exp(εt)w(u(t), y(t)) for some ε > 0.
Assumption 8 There exists a function κ : R^m → R^m, κ(0) = 0, such that w(κ(y), y) < 0 for all y ≠ 0.
Assumption 9 The available storage function Va (·) is of class C 3 .
Assumption 10 The system is completely reachable: for all x_0 ∈ R^n there exist a finite t_0 ≤ 0 and an admissible input defined on [t_0, 0], which can drive the state x(·) from the origin x(t_0) = 0 to x(0) = x_0.
Theorem 4.128 ([121]) Let Q = Q T ∈ Rm×m , S = S T ∈ Rm×m , R = R T ∈ Rm×m ,
and assume that Assumptions 8, 9, and 10 hold, and that the system in (4.194) is
exponentially dissipative with respect to the general supply rate w(u, y) = y^T Qy + 2y^T Su + u^T Ru. Then, there exist matrices P ∈ R^{n×n}, L ∈ R^{p×n}, W ∈ R^{p×m}, P = P^T ≥ 0, and a scalar ε > 0 such that

A^T P + P A + εP − C^T QC + L^T L = 0
P B − C^T(QD + S) + L^T W = 0                    (4.195)
R + S^T D + D^T S + D^T QD − W^T W = 0,
Theorem 4.129 ([128, Corollary 8.3.3]) Consider the system in (4.194) and suppose that j(0) = 0. Suppose that the tangent linearization (A, B, C, D) is dissipative with respect to the supply rate w(u, y) = y^T Qy + 2y^T Su + u^T Ru, with R ≻ 0, and w(0, y) ≤ 0 for all y. Suppose that the Hamiltonian matrix

[ A − B R^{-1} S C        B R^{-1} B^T   ]
[ C^T Q C          −(A − B R^{-1} S C)^T ]        (4.196)
has no purely imaginary eigenvalues, and that A is asymptotically stable. Then, there exist a neighborhood N ⊂ R^n of x = 0 and V : N → R with V(0) = 0, (∂V/∂x)(0) = 0, such that (∂V/∂x)(x)[ f(x) + g(x)u ] ≤ w(u, h(x) + j(x)u) for all x ∈ N and all u ∈ U ⊂ R^m, and V(x) ≥ 0 for all x ∈ N. Consequently, the system in (4.194) is locally dissipative in N with respect to w(u, y).
One remarks that the matrix (4.196) corresponds to the transition matrix of the Hamil-
tonian system of the first-order necessary condition of the Pontryagin principle for
the Bolza problem, with a cost function equal to u T Ru + x T C T QC x, under the
constraint ẋ(t) = (A − B R −1 SC)x(t) + Bu(t). The two above examples do not fit
within the framework of Theorem 4.129, as the dissipativity of the tangent lineariza-
tions holds with respect to the supply rate w(u, y) = u T y, and thus R = 0. Further
results can be found in the third edition of [128], see [76, Sect. 11].
4.8 Infinite-Dimensional Systems

The first extensions of the KYP Lemma to the infinite-dimensional case have been achieved by Yakubovich et al. [131–134]. Let us briefly report in this section the contribution in [135]. We consider a system
ẋ(t) = Ax(t) + Bu(t)
(4.197)
y(t) = C x(t) + Du(t),
with x(0) = x0 ∈ X , and where X is a real Hilbert space. The operator A : dom(A) ⊂
X → X is the infinitesimal generator of a C0 -semigroup U (t). The operators
Lemma 4.131 ([135]) Let H : L2,e → L2,e be defined by y = H (u) and (4.197).
Suppose that the C0 -semigroup associated with H satisfies ||U (t)|| ≤ Me−σ t for
some M ≥ 1 and σ > 0. Then, for γ < 2σ and ξ < σ_min(D), H is (γ, ξ)-passive if and only if for each ξ_0 < ξ, there exist bounded linear operators 0 ≺ P = P^T : X → X, L ≻ 0 : X → X, Q : X → R^m, and a matrix W ∈ R^{m×m}, such that
(A^T P + P A + 2γ P + L + Q^T Q)x = 0 for all x ∈ dom(A)
B^T P = C − W^T Q                                  (4.200)
W^T W = D + D^T − 2ξ_0 I_m.
Here dom(A) is the domain of the operator A. A semigroup that satisfies the condition of the lemma is said to be exponentially stable. The notation L ≻ 0 means that L is a positive operator that is boundedly invertible (or coercive).
6 An operator may here be much more general than a linear operator represented by a constant matrix A ∈ R^{m×n} : x ↦ Ax ∈ R^m. For instance, the Laplacian Δ = Σ_{i=1}^{n} ∂^2/∂x_i^2, or the D'Alembertian ∂^2/∂t^2 − Δ, are operators.
interpreted as specific differential inclusions, see (3.253) and Sect. 3.14.4. Basically,
one considers the following Lur’e systems:
ẋ(t) = Ax(t) + Bλ(t), a.e. t ∈ R_+
y(t) = Cx(t) + Dλ(t)            ⟺   ẋ(t) ∈ −H x(t),  with  H x = −Ax + B(F^{-1} + D)^{-1}(Cx).    (4.201)
λ(t) ∈ −F(y(t)), t ≥ 0
One considers A : X → X', B : Y → X', C : X → Y', D : Y → Y', which are given linear bounded (single-valued) mappings, F : Y ⇒ Y a maximal monotone operator (multivalued), and X a real reflexive Banach space with dual X' (similarly for Y and Y'). The system (A, B, C, D) is said to be cyclically passive if for all finite sequences (x_i) ⊂ X and (y_i) ⊂ Y, one has
Σ_{i=1}^{n} ⟨−Ax_i + By_i, x_{i+1} − x_i⟩ ≤ Σ_{i=1}^{n} ⟨y_i, C(x_{i+1} − x_i) − D(y_{i+1} − y_i)⟩,

equivalently

Σ_{i=1}^{n} ⟨−Ax_i, x_{i+1} − x_i⟩ + Σ_{i=1}^{n} [ ⟨(B − C^T)y_i, x_{i+1} − x_i⟩ + ⟨y_i, D(y_{i+1} − y_i)⟩ ] ≤ 0.    (4.203)
Then, the following is true, which relates cyclically passive systems in negative feedback interconnection with maximal cyclically monotone operators (see Definition A.93).
Theorem 4.135 ([130, Theorems 2 and 3]) Let X and Y be two Hilbert spaces,
and let F : Y ⇒ Y be a maximal cyclically monotone operator. Assume that (A, B, C, D) is cyclically passive, with Im(C) ∩ Int(Im(F^{-1} + D)) ≠ ∅, or B is bijective. Then, the operator defined by H(x) = −Ax + B(F^{-1} + D)^{-1}(Cx) is
maximal cyclically monotone. Moreover, for each initial condition x(0) = x0 ∈
cl(C −1 (Im(F −1 + D))), the set-valued infinite-dimensional Lur’e system in (4.201)
has a unique strong solution defined on R+ .
The condition on x0 guarantees that the initial state lies inside the domain (possibly
on its boundary) of the operator H (·). Thus, it prevents the state from jumping. Sim-
ilarly to the finite-dimensional case, the meaning of the differential inclusion (4.201)
is that there always exists a bounded multiplier λ(t) that keeps the state in the admis-
sible domain. The maximal cyclic monotonicity is needed when the initial condition
is allowed to belong to the closure of the admissible domain. If one uses maximal
monotonicity instead, then only a weak solution exists. This is a difference with
respect to the finite-dimensional case, where maximal monotonicity is sufficient. We
remind that a strong solution on an interval [0, T ] means a continuous function on
[0, T ], absolutely continuous on any interval [a, b] ⊂ (0, T ), and satisfying the dif-
ferential inclusion dt-almost everywhere. Weak solutions are defined as the uniform
limits of sequences of strong solutions of approximating problems. Stability results
(Lyapunov stability, Krasovskii–LaSalle invariance) are presented in [129], in which
it is also shown that solutions depend continuously on the initial data [129, Theorem
6].
As alluded to after Lemma 3.125 and Theorem 3.160 in finite-dimensional set-
ting, Theorems 4.133 and 4.135 show that the negative feedback interconnection of
a (cyclically) passive operator with a maximal (cyclically) monotone operator, pre-
serves, under some basic consistency conditions, the maximal (cyclic) monotonicity
of the closed-loop operator H (·), in an infinite-dimensional setting.
This adds to the (narrow) set of operations that preserve maximal monotonicity.
for all t ≥ 0. One has ||(∂u/∂t)(t)||^2_{2,Ω} = ∫_Ω |(∂u/∂t)(x, t)|^2 dx and ||∇u(t)||^2_{2,Ω} = Σ_i ∫_Ω |(∂u/∂x_i)(x, t)|^2 dx. The equality in (4.205) means that the system is lossless (energy is conserved). Notice that the wave equation may be rewritten as a first-order system

∂u/∂t − v = 0 on Q
                                   (4.206)
∂v/∂t − Δu = 0 on Q.
4.8 Infinite-Dimensional Systems 347
If X = (u, v)^T, then (4.206) becomes dX/dt + AX = 0 with A = [[0_n, −I_n], [−Δ, 0_n]]. It happens that the operator A + I_{2n} is maximal monotone. We retrieve here this notion
that we used also in the case of finite-dimensional nonsmooth systems in Sect. 3.14.
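The lossless character of the wave equation can be observed on a one-dimensional semi-discretization of (4.206) (illustrative sketch; Dirichlet boundary, symplectic time stepping):

```python
import numpy as np

# 1-D semi-discretization of (4.206) with Dirichlet boundary; a symplectic Euler
# step keeps E = (1/2)||u_t||^2 + (1/2)||u_x||^2 nearly constant
N = 100
h = 1.0 / (N + 1)
xg = np.linspace(h, 1.0 - h, N)
u = np.sin(np.pi * xg)                   # initial displacement
v = np.zeros(N)                          # initial velocity

def laplacian(w):
    wp = np.concatenate(([0.0], w, [0.0]))
    return (wp[2:] - 2.0 * wp[1:-1] + wp[:-2]) / h**2

def energy(u, v):
    ux = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return 0.5 * h * (v @ v) + 0.5 * h * (ux @ ux)

E0 = energy(u, v)
dt = 0.2 * h                             # CFL-stable step
for _ in range(2000):
    v = v + dt * laplacian(u)            # symplectic (semi-implicit) Euler
    u = u + dt * v
assert abs(energy(u, v) - E0) / E0 < 1e-2    # energy conserved up to O(dt)
```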
The notation is kept from the foregoing section. The heat equation is given as
⎧ ∂u
⎨ ∂t − Δu = 0 on Q
u=0 on Σ (4.207)
⎩
u(x, 0) = u 0 (x) on Ω.
The variable u may be the temperature in the domain Ω. Under the assumption that u_0 ∈ L^2(Ω), there exists a unique solution u(x, t) of (4.207), which belongs to L^2(Ω) for each t and is of class C^1 in t on R_+. Moreover,

(1/2) (d/dt) ||u(t)||^2_{2,Ω} = −||∇u(t)||^2_{2,Ω} ≤ 0    (4.208)

for all t ≥ 0, where ||∇u(t)||^2_{2,Ω} = Σ_{i=1}^{n} ∫_Ω |(∂u/∂x_i)(x, t)|^2 dx.
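The dissipation of the heat equation can be observed on a simple one-dimensional finite-difference sketch (illustrative; explicit Euler under the stability limit):

```python
import numpy as np

# 1-D finite-difference sketch of the heat equation (4.207); since
# d/dt (1/2)||u||^2 = -||grad u||^2 <= 0, the L^2 norm decays monotonically
N = 100
h = 1.0 / (N + 1)
u = np.random.default_rng(3).standard_normal(N)
dt = 0.4 * h**2                          # explicit Euler stability: dt <= h^2 / 2
norms = []
for _ in range(500):
    norms.append(h * (u @ u))
    up = np.concatenate(([0.0], u, [0.0]))
    u = u + dt * (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2
assert all(a >= b - 1e-12 for a, b in zip(norms, norms[1:]))   # monotone decay
```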
Nonnegative systems: the theory of dissipative systems and the KYP Lemma have also
been applied to nonnegative systems [44, 45]. Nonnegative dynamical systems are
derived from mass and energy balance considerations that involve states whose values
are nonnegative. For instance, in ecological models, the quantity of fish in a lake cannot be negative (if the mathematical model allows for such negative values, then surely it is not a good model). A matrix A ∈ R^{n×m} is nonnegative if A_{ij} ≥ 0 for all 1 ≤ i ≤ n and all 1 ≤ j ≤ m. It is positive if the strict inequality > 0 holds. A matrix A ∈ R^{n×n} is called essentially nonnegative (positive) if −A is a Z-matrix, i.e., if A_{ij} ≥ 0 (A_{ij} > 0) for all i ≠ j.
Theorem 4.138 (KYP Lemma for nonnegative systems [44]) Let q ∈ Rl and r ∈
Rm . Consider the nonnegative dynamical system with realization (A, B, C, D) where
A is essentially nonnegative, B, C and D are nonnegative. Then the system is expo-
nentially dissipative with respect to the supply rate w(u, y) = q T y + r T u if and only
if there exist nonnegative vectors p ∈ Rn , l ∈ Rn , and w ∈ Rm , and a scalar ε ≥ 0
such that

A^T p + εp − C^T q + l = 0
                                   (4.209)
B^T p − D^T q − r + w = 0.
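A small numerical sketch (hypothetical data, with ε = 0, D = 0, w = 0) illustrates Theorem 4.138: with a linear storage V(x) = p^T x, the equations (4.209) certify dissipativity along nonnegative trajectories:

```python
import numpy as np

# sketch of Theorem 4.138 with hypothetical data (eps = 0, D = 0, w = 0):
# a linear storage V(x) = p^T x certifies dissipativity w.r.t. q^T y + r^T u
A = np.array([[-1.0, 0.5], [0.2, -2.0]])  # essentially nonnegative (A_ij >= 0, i != j)
B = np.array([[1.0], [0.0]])              # nonnegative
C = np.array([[0.0, 1.0]])                # nonnegative
p = np.array([1.0, 1.0])
q = np.array([1.0])
l = -(A.T @ p - C.T @ q)                  # first equation of (4.209) with eps = 0
r = B.T @ p                               # second equation with D = 0, w = 0
assert (l >= 0).all() and (r >= 0).all()  # nonnegative multipliers exist
rng = np.random.default_rng(4)
for _ in range(20):
    x, u = rng.random(2), rng.random(1)   # nonnegative state and input
    dV = p @ (A @ x + B @ u)
    assert dV <= q @ (C @ x) + r @ u + 1e-12   # dV/dt <= supply rate
```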
References
1. Brogliato B, Landau ID, Lozano R (1991) Adaptive motion control of robot manipulators: a
unified approach based on passivity. Int J Robust Nonlinear Control 1(3):187–202
2. Lozano R, Brogliato B (1992) Adaptive control of robot manipulators with flexible joints.
IEEE Trans Autom Control 37(2):174–181
3. Brogliato B, Ortega R, Lozano R (1995) Global tracking controllers for flexible-joint manip-
ulators: a comparative study. Automatica 31(7):941–956
4. Brogliato B, Rey D, Pastore A, Barnier J (1998) Experimental comparison of nonlinear con-
trollers for flexible joint manipulators. Int J Robot Res 17(3):260–281
5. Brogliato B, Lozano R (1996) Correction to “adaptive control of robot manipulators with
flexible joints”. IEEE Trans Autom Control 41(6):920–922
6. Albu-Schafer A, Ott C, Hirzinger G (2004) A passivity based Cartesian impedance controller
for flexible joint robots – part II: full state feedback, impedance design and experiments. In:
Proceedings of the IEEE international conference on robotics and automation. New Orleans,
LA, pp 2666–2672
7. Arimoto S (1999) Robotics research toward explication of everyday physics. Int J Robot Res
18(11):1056–1063
8. Arimoto S, Kawamura S, Miyazaki F (1984) Bettering operation of robots by learning. J
Robot Syst 2:123–140
9. Arimoto S (1990) Learning control theory for robotic motion. Int J Adapt Control Signal
Process 4:543–564
10. Egeland O, Godhavn JM (1994) Passivity-based adaptive attitude control of a rigid spacecraft.
IEEE Trans Autom Control 39:842–846
11. Gravdahl JT, Egeland O (1999) Compressor surge and rotating stall: modeling and control.
Advances in industrial control. Springer, London
12. Jeltsema DJ, Sherpen JMA (2004) Tuning of passivity-preserving controllers for switched-
mode power converters. IEEE Trans Autom Control 49(8):1333–1344
13. Jeltsema DJ, Ortega R, Sherpen JMA (2003) On passivity and power balance inequalities of
nonlinear RLC circuits. IEEE Trans Circuits Syst I-Fundam Theory Appl 50(9):1174–1179
14. Sira-Ramirez H, Ortega R, Garcia-Esteban M (1997) Adaptive passivity-based control of
average DC to DC power converters models. Int J Adapt Control Signal Process 11:489–499
15. Sira-Ramirez H, Moreno RAP, Ortega R, Esteban MG (1997) Passivity-based controllers for
the stabilization of DC to DC power converters. Automatica 33(4):499–513
16. Angulo-Nunez MI, Sira-Ramirez H (1998) Flatness in the passivity based control of DC to
DC power converters. In: Proceedings of the 37th IEEE conference on decision and control.
Tampa, FL, USA, pp 4115–4120
17. Escobar G, Sira-Ramirez H (1998) A passivity based sliding mode control approach for the
regulation of power factor precompensators. In: Proceedings of the 37th IEEE conference on
decision and control. Tampa, FL, USA, pp 2423–2424
18. Yu W, Li X (2001) Some stability properties of dynamic neural networks. IEEE Trans Circuits
Syst-I: Fundam Theory Appl 48(2):256–259
19. Yu W (2003) Passivity analysis for dynamic multilayer neuro identifier. IEEE Trans Circuits
Syst I-Fundam Theory Appl 50(1):173–178
20. Danciu D, Rasvan V (2000) On Popov-type stability criteria for neural networks. Electron J
Qual Theory Differ Equ (23):1–10. Proceedings of the 6th Coll. QTDE, no 23, August 10–14
1999, Szeged, Hungary
21. Hayakawa T, Haddad WM, Bailey JM, Hovakimyan N (2005) Passivity-based neural network
adaptive output feedback control for nonlinear nonnegative dynamical systems. IEEE Trans
Neural Netw 16(2):387–398
22. Gorbet RB, Morris KA, Wang DWL (2001) Passivity-based stability and control of hysteresis in smart actuators. IEEE Trans Control Syst Technol 9(1):5–16
23. Kugi A, Schlacher K (2002) Passivitätsbasierte Regelung piezoelektrischer Strukturen. Automatisierungstechnik 50(9):422–431
24. Diolaiti N, Niemeyer G, Barbagli F, Salisbury JK (2006) Stability of haptic rendering: dis-
cretization, quantization, time delay, and Coulomb effects. IEEE Trans Robot 22(2):256–268
25. Mahvash M, Hayward V (2005) High-fidelity passive force-reflecting virtual environments.
IEEE Trans Robot 21(1):38–46
26. Ryu JH, Preusche C, Hannaford B, Hirzinger G (2005) Time domain passivity control with
reference energy following. IEEE Trans Control Syst Technol 13(5):737–742
27. Ryu JH, Hannaford B, Kwon DS, Kim JH (2005) A simulation/experimental study of the
noisy behaviour of the time domain passivity controller. IEEE Trans Robot 21(4):733–741
28. Lee D, Spong MW (2006) Passive bilateral teleoperation with constant time delay. IEEE Trans
Robot 22(2):269–281
29. Lee D, Li PY (2005) Passive bilateral control and tool dynamics rendering for nonlinear
mechanical teleoperators. IEEE Trans Robot 21(5):936–950
30. Lozano R, Chopra N, Spong MW (2002) Passivation of force reflecting bilateral teleoperators
with time varying delay. Mechatronics 2002: proceedings of the 8th mechatronics forum
international conference. Drebbel Institute for Mechatronics, Enschede, NL, pp 24–26
31. Lee JH, Cho CH, Kim M, Song JB (2006) Haptic interface through wave transformation using
delayed reflection: application to a passive haptic device. Adv Robot 20(3):305–322
32. Colgate JE, Schenkel G (1997) Passivity of a class of sampled-data systems: application to
haptic interface. J Robot Syst 14(1):37–47
33. Shen X, Goldfarb M (2006) On the enhanced passivity of pneumatically actuated impedance-
type haptic interfaces. IEEE Trans Robot 22(3):470–480
34. Diez MD, Lie B, Ydstie BE (2002) Passivity-based control of particulate processes modeled
by population balance equations. In: proceedings of 4th world congress on particle technology.
Sydney, Australia
35. Duncan PC, Farschman CA, Ydstie BE (2000) Distillation stability using passivity and ther-
modynamics. Comput Chem Eng 24:317–322
36. Sira-Ramirez H, Angulo-Nunez MI (1997) Passivity based control of nonlinear chemical
processes. Int J Control 68(5):971–996
37. Sira-Ramirez H (2000) Passivity versus flatness in the regulation of an exothermic chemical
reactor. Eur J Control 6(3):1–17
38. Fossas E, Ros RM, Sira-Ramirez H (2004) Passivity-based control of a bioreactor system. J
Math Chem 36(4):347–360
39. Ydstie BE, Alonso AA (1997) Process systems and passivity via the Clausius-Planck inequal-
ity. Syst Control Lett 30:253–264
40. Léchevin N, Rabbath CA, Sicard P (2005) A passivity perspective for the synthesis of robust
terminal guidance. IEEE Trans Control Syst Technol 13(5):760–765
41. Mahony RE, Lozano R (1999) An energy based approach to the regulation of a model heli-
copter near to hover. In: Proceedings of the European control conference. Karlsruhe, Germany,
pp 2120–2125
42. Miras JD, Charara A (1998) A vector oriented control for a magnetically levitated shaft. IEEE
Trans Magn 34(4):2039–2041
43. Miras JD, Charara A (1999) Vector desired trajectories for high rotor speed magnetic bearing
stabilization. IFAC Proc Vol 32(2):49–54
44. Haddad WM, Chellaboina V (2005) Stability and dissipativity theory for nonnegative dynami-
cal systems: a unified analysis framework for biological and physiological systems. Nonlinear
Anal, R World Appl 6:35–65
45. Haddad WM, Chellaboina V, Rajpurobit T (2004) Dissipativity theory for nonnegative and
compartmental dynamical systems with time delay. IEEE Trans Autom Control 49(5):747–
751
46. Ydstie BE, Jiao Y (2004) Passivity based inventory and flow control in flat glass manufacture.
In: Proceedings of the 43rd IEEE international conference on decision and control. Nassau,
Bahamas, pp 4702–4707
47. Kawai H, Murao T, Fujita M (2005) Passivity based dynamic visual feedback control with
uncertainty of camera coordinate frame. In: Proceedings of the American control conference.
Portland, OR, USA, pp 3701–3706
48. Desoer CA, Vidyasagar M (1975) Feedback systems: input-output properties. Academic, New
York
49. Jonsson U (1997) Stability analysis with Popov multipliers and integral quadratic constraints.
Syst Control Lett 31:85–92
50. Kailath T (1980) Linear systems. Prentice-Hall, New Jersey
51. Fradkov AL (2003) Passification of non-square linear systems and feedback Yakubovich-
Kalman-Popov Lemma. Eur J Control 6:573–582
52. Willems JC (1972) Dissipative dynamical systems, part i: general theory. Arch Rat Mech An
45:321–351
53. Willems JC (1972) Dissipative dynamical systems, part ii: linear systems with quadratic
supply rates. Arch Rat Mech An 45:352–393
54. Hill DJ, Moylan PJ (1980) Connections between finite-gain and asymptotic stability. IEEE
Trans Autom Control 25(5):931–936
55. Hill DJ, Moylan PJ (1976) The stability of nonlinear dissipative systems. IEEE Trans Autom
Control 21(5):708–711
56. Byrnes CI, Isidori A, Willems JC (1991) Passivity, feedback equivalence, and the global
stabilization of minimum phase nonlinear systems. IEEE Trans Autom Control 36(11):1228–
1240
57. Rudin W (1998) Analyse Réelle et complexe. Dunod, Paris
58. Rudin W (1976) Principles of mathematical analysis, 3rd edn. McGraw Hill, New York
59. Rudin W (1987) Real and complex analysis, 3rd edn. Higher maths. McGraw Hill, New York
60. Datko R (1970) Extending a theorem of A.M. Lyapunov to Hilbert space. J Math Anal Appl
32:610–616
61. Pazy A (1972) On the applicability of Lyapunov’s theorem in Hilbert space. SIAM J Math
Anal 3:291–294
62. Sontag ED (1998) Mathematical control theory: deterministic finite dimensional systems.
Texts in applied mathematics, vol 6, 2nd edn. Springer, New York
63. Vidyasagar M (1993) Nonlinear systems analysis, 2nd edn. Prentice Hall, Upper Saddle River
64. Willems JC (1971) The generation of Lyapunov functions for input-output stable systems.
SIAM J Control 9:105–133
65. Trentelman HL, Willems JC (1997) Every storage function is a state function. Syst Control
Lett 32:249–259
66. Yuliar S, James MR, Helton JW (1998) Dissipative control systems synthesis with full state
feedback. Math Control Signals Syst 11:335–356
67. James MR (1993) A partial differential inequality for dissipative nonlinear systems. Syst
Control Lett 21:315–320
68. Hill DJ, Moylan PJ (1980) Dissipative dynamical systems: basic input-output and state prop-
erties. J Frankl Inst 309(5):327–357
69. Pota HR, Moylan PJ (1993) Stability of locally dissipative interconnected systems. IEEE
Trans Autom Control 38(2):308–312
70. Polushin IG, Marquez HJ (2004) Boundedness properties of nonlinear quasi-dissipative sys-
tems. IEEE Trans Autom Control 49(12):2257–2261
71. Fradkov AL, Polushin IG (1997) Quasidissipative systems with one or several supply rates.
In: Proceedings of the European control conference. Brussels, Belgium, pp 1237–1242
72. Pavel L, Fairman W (1997) Nonlinear H∞ control: a j−dissipative approach. IEEE Trans
Autom Control 42(12):1636–1653
73. Lim S, How JP (2002) Analysis of linear parameter-varying systems using a non-smooth
dissipative systems framework. Int J Robust Nonlinear Control 12:1067–1092
74. Hughes TH (2018) On the optimal control of passive or non-expansive systems. IEEE Trans
Autom Control 63(12):4079–4093
75. Polushin IG, Marquez HJ (2002) On the existence of a continuous storage function for dissi-
pative systems. Syst Control Lett 46:85–90
76. van der Schaft AJ (2017) L2-gain and passivity techniques in nonlinear control, 3rd edn.
Communications and control engineering. Springer Int. Publishing AG, Berlin
77. Isidori A (1999) Nonlinear control systems II. Communications and control engineering.
Springer, London
78. Romanchuk BG, Smith MC (1999) Incremental gain analysis of piecewise linear systems and
application to the antiwindup problem. Automatica 35(7):1275–1283
79. Monshizadeh N, Lestas I (2019) Secant and Popov-like conditions in power network stability.
Automatica 101:258–268
80. Demidovich BP (1961) On the dissipativity in the whole of a non-linear system of differential
equations. I. Vestnik Moscow State University, Ser Mat Mekh, vol 6, pp 19–27. In Russian
81. Demidovich BP (1962) On the dissipativity of a non-linear system of differential equations.
II. Vestnik Moscow State University, Ser Mat Mekh vol 1, pp 3–8. In Russian
82. Pavlov A, Pogromsky A, van de Wouw N, Nijmeijer H (2004) Convergent dynamics, a tribute
to Boris Pavlovich Demidovich. Syst Control Lett 52:257–261
83. Lohmiller W, Slotine JJE (1998) On contraction analysis for non-linear systems. Automatica
34:683–696
84. Yoshizawa T (1966) Stability theory by Liapunov’s second method. Mathematics Society,
Tokyo
85. Yoshizawa T (1975) Stability theory and the existence of periodic solutions and almost peri-
odic solutions. Springer, New York
86. Pavlov A, Marconi L (2008) Incremental passivity and output regulation. Syst Control Lett
57:400–409
87. Pang H, Zhao J (2018) Output regulation of switched nonlinear systems using incremental
passivity. Nonlinear Anal: Hybrid Syst 27:239–257
88. Besselink B, van de Wouw N, Nijmeijer H (2013) Model reduction for nonlinear systems
with incremental gain or passivity properties. Automatica 49:861–872
89. Jayawardhana B, Ortega R, Garcia-Canseco E, Castanos F (2007) Passivity of nonlinear
incremental systems: application to PI stabilization of nonlinear circuits. Syst Control Lett
56:618–622
90. Gadjov D, Pavel L (2018) A passivity-based approach to Nash equilibrium seeking over
networks. IEEE Trans Autom Control. https://doi.org/10.1109/TAC.2018.2833140
91. Hines G, Arcak M, Packard A (2011) Equilibrium-independent passivity: a new definition
and numerical certification. Automatica 47(9):1949–1956
92. Bürger M, Zelazo D, Allgöwer F (2014) Duality and network theory in passivity-based coop-
erative control. Automatica 50(8):2051–2061
93. Simpson-Porco JW (2018) Equilibrium-independent dissipativity with quadratic suppy rates.
IEEE Trans Autom Control. https://doi.org/10.1109/TAC.2018.2838664
94. Bauschke HH, Combettes PL (2011) Convex analysis and monotone operator theory in
Hilbert spaces. Canadian mathematics society, Science Mathématique du Canada. Springer
Science+Business Media, Berlin
95. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice Hall, Englewood Cliffs
96. Madeira DS, Adamy J (2016) On the equivalence between strict positive realness and strict
passivity of linear systems. IEEE Trans Autom Control 61(10):3091–3095
97. Kottenstette N, McCourt M, Xia M, Gupta V, Antsaklis P (2014) On relationships among
passivity, positive realness, and dissipativity in linear systems. Automatica 50:1003–1016
98. Polushin IG, Marquez HJ (2004) Conditions for the existence of continuous storage functions
for nonlinear dissipative systems. Syst Control Lett 54:73–81
99. Rockafellar RT, Wets RJB (1998) Variational analysis, Grundlehren der Mathematischen
Wissenschaften, vol 317. Springer, Berlin
100. Rosier L, Sontag ED (2000) Remarks regarding the gap between continuous, Lipschitz, and
differentiable storage functions for dissipation inequalities appearing in H∞ control. Syst
Control Lett 41:237–249
101. Damaren CJ (2000) Passivity and noncollocation in the control of flexible multibody systems.
ASME J Dyn Syst Meas Control 122:11–17
102. Bruni C, Pillo GD, Koch G (1974) Bilinear systems: an appealing class of “nearly linear”
systems in theory and applications. IEEE Trans Autom Control 19(4):334–348
103. Xie S, Xie L, de Souza CE (1998) Robust dissipative control for linear systems with dissipative
uncertainty. Int J Control 70(2):169–191
104. Imura JI, Sugie T, Yoshikawa T (1996) A Hamilton-Jacobi inequality approach to the strict
H∞ control problem of nonlinear systems. Automatica 32(4):645–650
105. Faurre P, Clerget M, Germain F (1979) Opérateurs Rationnels Positifs. Application à
l’Hyperstabilité et aux Processus Aléatoires. Méthodes Mathématiques de l’Informatique.
Dunod, Paris. In French
106. Lin W (1995) Feedback stabilization of general nonlinear control systems: a passive system
approach. Syst Control Lett 25:41–52
107. Chua LO (1971) Memristors - the missing circuit element. IEEE Trans Circuit Theory 18:507–
519
References 353
108. Chua LO, Kang SM (1976) Memristive devices and systems. Proc IEEE 64(2):209–223
109. Hiriart-Urruty JB, Lemaréchal C (2001) Fundamentals of convex analysis. Grundlehren text
editions. Springer, Berlin
110. Lancaster P, Tismenetsky M (1985) The theory of matrices. Academic, New York
111. Weiss H, Wang Q, Speyer JL (1994) System characterization of positive real conditions. IEEE
Trans Autom Control 39(3):540–544
112. Hodaka I, Sakamoto N, Suzuki M (2000) New results for strict positive realness and feedback
stability. IEEE Trans Autom Control 45(4):813–819
113. Lions PL, Souganidis PE (1985) Differential games, optimal control and directional deriva-
tives of viscosity solutions of Bellman’s and Isaac’s equations. SIAM J Control Optim 23:566–
583
114. Ball JA, Helton JW (1996) Viscosity solutions of Hamilton-Jacobi equations arising in non-
linear h ∞ -control. J Math Syst Estim Control 6(1):1–22
115. Cromme M (1988) On dissipative systems and set stability. MAT-Report no 1998-07, April,
Technical University of Denmark, Department of Mathematics
116. Moylan PJ (1974) Implications of passivity in a class of nonlinear systems. IEEE Trans Autom
Control 19(4):373–381
117. Moylan PJ, Anderson BDO (1973) Nonlinear regulator theory and an inverse optimal control
problem. IEEE Trans Autom Control 18:460–465
118. Kalman RE (1964) When is a linear control system optimal? Trans ASME (J Basic Eng) Ser
D 86:51–60 (1964)
119. El-Farra NH, Christofides PD (2003) Robust inverse optimal control laws for nonlinear sys-
tems. Int J Robust Nonlinear Control 13:1371–1388
120. Wan CJ, Bernstein DS (1995) Nonlinear feedback control with global stabilization. Dyn
Control 5(4):321–346
121. Chellaboina V, Haddad WM (2003) Exponentially dissipative dynamical systems: a nonlinear
extension of strict positive realness. Math Probl Eng 1:25–45
122. Byrnes CI, Lin W (1994) Losslessness, feedback equivalence, and the global stabilization of
discrete-time nonlinear systems. IEEE Trans Autom Control 39(1):83–98
123. Lin W, Byrnes CI (1995) Passivity and absolute stabilization of a class of discrete-time
nonlinear systems. Automatica 31(2):263–267
124. Lopez EMN, Fossas-Colet E (2004) Feedback passivity of nonlinear discrete-time systems
with direct input-output link. Automatica 40(8):1423–1428
125. Lopez EMN (2002) Dissipativity and passivity-related properties in nonlinear discrete-time
systems. PhD thesis, Universidad Politecnica de Cataluna, Instituto de Organizacion y Control
de Sistemas Industriales, Spain
126. Lopez EMN (2005) Several dissipativity and passivity implications in the linear discrete-time
setting. Math Probl Eng 6:599–616
127. Haddad WM, Chellaboina V (1998) Nonlinear fixed-order dynamic compensation for passive
systems. Int J Robust Nonlinear Control 8(4–5):349–365
128. van der Schaft AJ (2000) L2-gain and passivity techniques in nonlinear control, 2nd edn.
Communications and control engineering. Springer, London
129. Adly S, Hantoute A, Le B (2016) Nonsmooth Lur’e dynamical systems in Hilbert spaces.
Set-Valued Var Anal 24:13–35
130. Adly S, Hantoute A, Le BK (2017) Maximal monotonicity and cyclic monotonicity arising
in nonsmooth Lur’e dynamical systems. J Math Anal Appl 448:691–706
131. Yakubovich VA (1975) The frequency theorem for the case in which the state space and the
control space are Hilbert spaces, and its application incertain problems in the synthesis of
optimal control. II. Sib Math J 16:828–845
132. Yakubovich VA (1974) The frequency theorem for the case in which the state space and the
control space are Hilbert spaces, and its application incertain problems in the synthesis of
optimal control. I. Sib Math J 15:457–476
133. Likhtarnikov AL, Yakubovich VA (1977) The frequency theorem for one-parameter semi-
groups. Math USSR Izv (Izv Akad Nauk SSSR, Ser Mat) 11(4):849–864
354 4 Dissipative Systems
134. Likhtarnikov AL, Yakubovich VA (1977) Frequency theorem for evolution type equations.
Sib Mat Zh 17:1069–1085 In Russian
135. Wen JT (1989) Finite dimensional controller design for infinite dimensional systems: the
circle criterion approach. Syst Control Lett 13:445–454
136. Brogliato B, Goeleven D (2011) Well-posedness, stability and invariance results for a class
of multivalued Lur’e dynamical systems. Nonlinear Analysis. Nonlinear Analysis Theory,
Methods Appl 74:195–212
137. Brogliato B (2004) Absolute stability and the Lagrange-Dirichlet theorem with monotone
multivalued mappings. Syst Control Lett 51:343–353. Preliminary version proceedings of the
40th IEEE conference on decision and control, 4–7 December 2001, vol 1, pp 27–32
138. Brogliato B, Goeleven D (2013) Existence, uniqueness of solutions and stability of nonmsooth
multivalued Lur’e dynamical systems. J Convex Anal 20(3):881–900
139. Camlibel MK, Schumacher JM (2016) Linear passive systems and maximal monotone map-
pings. Math Program Ser B 157:367–420
140. Tanwani A, Brogliato B, Prieur C (2018) Well-posedness and output regulation for implicit
time-varying evolution variational inequalities. SIAM J Control Optim 56(2):751–781
141. Brogliato B, Thibault L (2010) Existence and uniqueness of solutions for non-autonomous
complementarity dynamical systems. J Convex Anal 17(3–4):961–990
142. Brézis H (1983) Analyse Fonctionnelle, Théorie et Applications. Masson, Paris
143. Curtain RF (1996) The Kalman-Yakubovich-Popov lemma for Pritchard-Salamon systems.
Syst Control Lett 27:67–72
144. Curtain RF, Oostveen JC (2001) The Popov criterion for strongly stable distributed parameter
systems. Int J Control 74:265–280
145. Curtain RF, Demetriou M, Ito K (2003) Adaptive compensators for perturbed positive real
infinite dimensional systems. Int J Appl Math Comput Sci 13(4):441–452
146. Pandolfi L (2001) Factorization of the Popov function of a multivariable linear distributed
parameter system in the non-coercive case: a penalization approach. Int J Appl Math Comput
Sci 11(6):1249–1260
147. Curtain RF (1996) Corrections to “The Kalman-Yakubovich-Popov Lemma for Pritchard-
Salamon systems”. Syst Control Lett 28:237–238
148. Bounit H, Hammouri H (1998) Stabilization of infinite-dimensional semilinear systems with
dissipative drift. Appl Math Optim 37:225–242
149. Bounit H, Hammouri H (2003) A separation principle for distributed dissipative bilinear
systems. IEEE Trans Autom Control 48(3):479–483
150. Weiss M (1997) Riccati equation theory for Pritchard-Salamon systems: a Popov function
approach. IMA J Math Control Inf 14:45–83
151. Barb FD, de Koning W (1995) A Popov theory based survey in digital control of infinite
dimensional systems with unboundedness. IMA J Math Control Inf 12:253–298
152. Barb FD, Ionescu V, de Koning W (1994) A Popov theory based approach to digital h ∞
control with measurement feedback for Pritchard-Salamon systems. IMA J Math Control Inf
11:277–309
153. Ionescu V, Oara C (1996) The four block Nehari problem: a generalized Popov-Yakubovich
type approach. IMA J Math Control Inf 13:173–194
154. Arov DZ, Staffans OJ (2005) The infinite-dimensional continuous time Kalman-Yakubovich-
Popov inequality. Oper Theory: Adv Appl 1:1–37
155. Bondarko VA, Fradkov AL (2003) Necessary and sufficient conditions for the passivicability
of linear distributed systems. Autom Remote Control 64(4):517–530
156. Grabowski P, Callier FM (2006) On the circle criterion for boundary control systems in factor
form: Lyapunov stability and Lur’e equations. ESAIM Control, Optim Calc Var 12:169–197
157. Hagen G (2006) Absolute stability via boundary control of a semilinear parabolic PDE. IEEE
Trans Autom Control 51(3):489–493
158. Logemann HL, Curtain RF (2000) Absolute stability results for well-posed infinite-
dimensional systems with applications to low-gain integral control. ESAIM: Control, Optim
Calc Var 5:395–424 (2000)
References 355
159. Arcak M, Meissen C, Packard A (2016) Networks of dissipative systems. Briefs in electrical
and computer engineering. Control, automation and robotics. Springer International Publish-
ing, Berlin
160. Cheban DN (1999) Relationship between different types of stability for linear almost periodic
systems in Banach spaces. Electron J Differ Equ 46:1–9. https://ejde.math.swt.edu
161. Hale JK, LaSalle JP, Slemrod M (1971) Theory of a class of dissipative processes. Division of
applied mathematics, Center for Dynamical Systems, Brown University, Providence Rhode
Island, USA, no 71-17408
Chapter 5
Stability of Dissipative Systems
In this chapter, various results concerning the stability of dissipative systems are
presented. First, the input/output properties of several feedback interconnections of
passive, negative imaginary, maximal monotone systems are reviewed. Large-scale
systems are briefly treated. Then the conditions under which storage functions are
Lyapunov functions are given in detail. Results on stabilization, equivalence to a
passive system, input-to-state stability, and passivity of linear delay systems are then
provided. The chapter ends with an introduction to H∞ theory for nonlinear systems
that is related to a specific dissipativity property, and with a section on Popov’s
hyperstability.
5.1 Passivity Theorems

In this section, we will study the stability of the negative feedback interconnection of different types of passive systems. We will first study closed-loop interconnections with one external input (one-channel results), and then interconnections with two external inputs (two-channel results). The implicit assumption in the passivity theorems is that the problem is well-posed, i.e., that all the signals belong to L2,e. The first versions of the passivity theorem were apparently proposed in [1–3].
Theorem 5.2 (Passivity, one-channel [4]) Consider the system in Fig. 5.1. Assume
that both H1 and H2 are pseudo-VSP, i.e.,
© Springer Nature Switzerland AG 2020 357
B. Brogliato et al., Dissipative Systems Analysis and Control, Communications
and Control Engineering, https://doi.org/10.1007/978-3-030-19420-8_5
$$\int_0^t y_1^T(s)u_1(s)\,ds + \beta_1 \ge \delta_1 \int_0^t y_1^T(s)y_1(s)\,ds + \varepsilon_1 \int_0^t u_1^T(s)u_1(s)\,ds$$
$$\int_0^t y_2^T(s)u_2(s)\,ds + \beta_2 \ge \delta_2 \int_0^t y_2^T(s)y_2(s)\,ds + \varepsilon_2 \int_0^t u_2^T(s)u_2(s)\,ds,$$
with δ1 + ε1 > 0, δ2 + ε2 > 0. The closed-loop feedback system is finite-gain stable if
δ2 ≥ 0, ε1 ≥ 0, ε2 + δ1 > 0.
Corollary 5.3 The feedback system in Fig. 5.1 is L2 −finite gain stable if
1. H1 is passive and H2 is ISP, i.e., ε1 ≥ 0, ε2 > 0, δ1 ≥ 0, δ2 ≥ 0,
2. H1 is OSP and H2 is passive, i.e., ε1 ≥ 0, ε2 ≥ 0, δ1 > 0, δ2 ≥ 0.
Proof Let ⟨r1, y⟩_t ≜ ∫₀ᵗ r1ᵀ(s)y(s)ds, and let ‖f‖²_t = ⟨f, f⟩_t for any function f(·) in L2,e. Using the Cauchy–Schwarz inequality we have
$$\langle r_1, y\rangle_t = \int_0^t r_1^T(s)y(s)\,ds \le \left(\int_0^t r_1^T(s)r_1(s)\,ds\right)^{\frac{1}{2}} \left(\int_0^t y^T(s)y(s)\,ds\right)^{\frac{1}{2}} = \|r_1\|_t \|y\|_t.$$
Then ‖r1‖_t ‖y‖_t ≥ ⟨r1, y⟩_t ≥ β1 + β2 + (δ1 + ε2)‖y‖²_t. For any λ > 0 the following holds:
$$\frac{1}{2\lambda}\|r_1\|_t^2 + \frac{\lambda}{2}\|y\|_t^2 = \frac{1}{2}\left(\frac{1}{\sqrt{\lambda}}\|r_1\|_t - \sqrt{\lambda}\,\|y\|_t\right)^2 + \|r_1\|_t\|y\|_t \ge \beta_1 + \beta_2 + (\delta_1 + \varepsilon_2)\|y\|_t^2. \tag{5.2}$$
Choosing λ = δ1 + ε2 > 0 we get
$$\|y\|_t^2 \le \frac{1}{(\delta_1 + \varepsilon_2)^2}\|r_1\|_t^2 - \frac{2(\beta_1 + \beta_2)}{\delta_1 + \varepsilon_2},$$
which is the desired finite-gain bound.
Example 5.4 (PI feedback control) Let us consider the system in Fig. 5.1, with H1 a VSP operator, and H2 a linear time-invariant PI controller, i.e., y2(t) = k1 u2(t) + k2 ∫₀ᵗ u2(s)ds, k1 > 0, k2 > 0. We obtain
$$\int_0^t y_2^T(s)u_2(s)\,ds = k_1 \int_0^t u_2^T(s)u_2(s)\,ds + \frac{k_2}{2}\left\|\int_0^t u_2(s)\,ds\right\|^2 \ge k_1 \int_0^t u_2^T(s)u_2(s)\,ds,$$
so that H2 is ISP with ε2 = k1, and finite-gain stability follows from Corollary 5.3.
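The ISP estimate in this example is easy to check numerically. The sketch below uses hypothetical gains k1, k2 and an arbitrary test input, approximates the integrals by the trapezoidal rule, and verifies that the supply integral of the PI block dominates k1 times the input energy:

```python
import numpy as np

def trap(f, t):
    """Trapezoidal approximation of Int f dt over the grid t."""
    return float(np.sum((f[1:] + f[:-1]) / 2 * np.diff(t)))

k1, k2 = 2.0, 0.5                       # hypothetical controller gains
t = np.linspace(0.0, 10.0, 2001)
u2 = np.sin(3.0 * t) + 0.3              # arbitrary test input

# running integral of u2 (the "I" part of the PI controller)
iu2 = np.concatenate(([0.0], np.cumsum((u2[1:] + u2[:-1]) / 2 * np.diff(t))))
y2 = k1 * u2 + k2 * iu2                 # PI controller output

supply = trap(y2 * u2, t)               # Int_0^t y2*u2 ds
isp_bound = k1 * trap(u2 * u2, t)       # k1 * Int_0^t u2^2 ds
# supply - isp_bound ~ (k2/2) * iu2(t)^2 >= 0, confirming ISP with eps2 = k1
```

The surplus supply − isp_bound equals (k2/2)(∫₀ᵗu2 ds)² up to discretization error, which is the extra storage in the integrator state.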
Consider now the system depicted in Fig. 5.2, where r1 , r2 can represent disturbances,
initial condition responses, or controls. Assume the well-posedness, which in the I/O
context means that L2,e inputs map to L2,e outputs. The next theorem has been
stated in [5] and in a more general “large-scale dissipative systems” form in [6], see
Sect. 5.1.8.
Theorem 5.5 (Passivity, two-channel) Assume H1 and H2 are both pseudo-VSP. The
feedback system is L2 -finite gain stable if ε1 + δ2 > 0, ε2 + δ1 > 0, where εi , δi ,
i = 1, 2, may be negative.
Proof The proof follows the same lines as that of Theorem 5.2: one substitutes the interconnection relations u1 = r1 − y2 and u2 = r2 + y1 into the two dissipation inequalities, adds them, and bounds the cross terms by Young's inequality, e.g.,
$$2\varepsilon_1 \langle r_1, y_2\rangle_t \le \frac{\varepsilon_1}{\lambda_1}\|r_1\|_t^2 + \varepsilon_1 \lambda_1 \|y_2\|_t^2$$
for suitably chosen weights λ1 > 0, λ2 > 0. The conditions ε1 + δ2 > 0 and ε2 + δ1 > 0 guarantee that the resulting coefficients of ‖y1‖²_t and ‖y2‖²_t are positive, which yields the finite-gain bound as in the proof of Theorem 5.2.
Lemma 5.7 Consider the feedback system in Fig. 5.2 with r1 ≡ 0 and r2 ≡ 0, and assume that H1 and H2 are dissipative as in Theorem 5.5, with V1(·) and V2(·) positive-definite storage functions. Then the origin is an asymptotically stable equilibrium point if ε1 + δ2 > 0, and ε2 + δ1 > 0, and both H1 and H2 are zero-state observable (i.e., u_i ≡ 0, y_i ≡ 0 ⇒ x_i = 0).
Proof Consider the positive-definite function which is the sum of the two storage functions for H1 and H2, i.e., V(x1, x2) = V1(x1) + V2(x2). Then, using the dissipativity inequalities in their infinitesimal form and the interconnection relations u1 = −y2, u2 = y1, we get along the trajectories of the system
$$\dot{V} \le y_1^T u_1 + y_2^T u_2 - \delta_1 y_1^T y_1 - \varepsilon_1 u_1^T u_1 - \delta_2 y_2^T y_2 - \varepsilon_2 u_2^T u_2 = -(\delta_1 + \varepsilon_2)\, y_1^T y_1 - (\delta_2 + \varepsilon_1)\, y_2^T y_2 \le 0.$$
The result follows from the Krasovskii–LaSalle theorem, and the assumption guaranteeing that y_i ≡ 0, u_i ≡ 0 ⇒ x_i = 0. If in addition V1(·) and V2(·) are radially unbounded, then one gets global asymptotic stability.
Roughly speaking, the foregoing lemma says that the feedback interconnection of
two dissipative systems is asymptotically stable provided an observability property
holds.
Remark 5.8 Theorems 5.2 and 5.5, as well as Lemma 5.7, allow one subsystem to possess an excess of passivity while the other subsystem has a lack of passivity, the overall feedback system nevertheless being stable in a certain sense. We already met such an excess/lack of passivity arrangement in the absolute stability problem with hypomonotone and prox-regular sets, see Sects. 3.14.2.4 and 3.14.5.
Let us now state a result which uses the quasi-dissipativity property as defined in
Definition 4.29. Each subsystem H1 and H2 of the interconnection is supposed to be
dissipative with respect to a general supply rate of the form wi (u i , yi ) = yiT Q i yi +
2yiT Si u i + u iT Ri u i , with Q iT = Q i and RiT = Ri . Before stating the next Proposition,
we need a preliminary definition.
Definition 5.9 A system ẋ(t) = f (x(t), u(t)), y(t) = h(x(t)) has uniform finite
power gain γ ≥ 0 if it is quasi-dissipative with supply rate w(u, y) = γ 2 u T u − y T y.
Proposition 5.10 ([7]) Suppose that the systems H1 and H2 are quasi-dissipative
with respect to supply rates w1 (u 1 , y1 ) and w2 (u 2 , y2 ), respectively. Suppose there
exists ρ > 0 such that the matrix
$$Q_\rho = \begin{pmatrix} Q_1 + \rho R_2 & -S_1 + \rho S_2^T \\ -S_1^T + \rho S_2 & R_1 + \rho Q_2 \end{pmatrix} \tag{5.10}$$
is negative definite. Then the feedback system in Fig. 5.2 has uniform finite power
gain.
for some matrices S_ρ and R_ρ. Since Q_ρ ≺ 0, it follows that there exist μ > 0 and η > 0 such that the total supply rate is upper bounded by −η(y1ᵀy1 + y2ᵀy2) + μ(r1ᵀr1 + r2ᵀr2). Integrating from t = 0 to t = τ ≥ 0 and using the fact that H1 and H2 are quasi-dissipative with constants α1 ≥ 0 and α2 ≥ 0, we obtain
$$\int_0^\tau \left[-\eta\big(y_1^T(t)y_1(t) + y_2^T(t)y_2(t)\big) + \mu\big(r_1^T(t)r_1(t) + r_2^T(t)r_2(t)\big)\right] dt + (\alpha_1 + \rho\alpha_2)\tau + \beta_1 + \rho\beta_2 \ge 0, \tag{5.13}$$
from which the uniform finite power gain follows.
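The condition of Proposition 5.10 is easy to test numerically. The sketch below uses hypothetical supply-rate matrices (H1 VSP-like, H2 passive) and checks that Q_ρ is negative definite for ρ = 1:

```python
import numpy as np

# Hypothetical QSR supply rates w_i = y'Q_i y + 2 y'S_i u + u'R_i u:
# H1 VSP-like: Q1 = -delta*I, S1 = 0.5*I, R1 = -eps*I;  H2 passive: Q2 = R2 = 0.
n = 2
I = np.eye(n)
delta, eps = 1.0, 0.5                    # hypothetical excess-of-passivity levels
Q1, S1, R1 = -delta * I, 0.5 * I, -eps * I
Q2, S2, R2 = 0.0 * I, 0.5 * I, 0.0 * I

rho = 1.0
Q_rho = np.block([[Q1 + rho * R2, -S1 + rho * S2.T],
                  [-S1.T + rho * S2, R1 + rho * Q2]])

# negative definiteness of Q_rho (all eigenvalues < 0)
neg_def = bool(np.all(np.linalg.eigvalsh(Q_rho) < 0))
```

Here the off-diagonal blocks vanish for ρ = 1 and Q_ρ = diag(−δI, −εI) ≺ 0, so Proposition 5.10 applies.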
This proof is really an input/output system stability result as it does not mention the state. Let us mention a result in [8] that contains a version of the passivity theorem, using the so-called secant condition for the stability of polynomials of the form p(s) = (s + a1)(s + a2)···(s + aₙ) + b1 b2 ··· bₙ, with all a_i > 0 and all b_i > 0. This p(s) is the characteristic polynomial of the matrix
$$A = \begin{pmatrix} -a_1 & 0 & \cdots & 0 & -b_1 \\ b_2 & -a_2 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & b_n & -a_n \end{pmatrix}.$$
The secant condition states that A is Hurwitz provided that
$$\frac{b_1 \cdots b_n}{a_1 \cdots a_n} < \sec\left(\frac{\pi}{n}\right)^n = \frac{1}{\left(\cos\left(\frac{\pi}{n}\right)\right)^n}.$$
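The secant condition can be illustrated numerically. The sketch below builds the cyclic matrix A for n = 3 with equal a_i (hypothetical values), for which the bound sec(π/3)³ = 8 is tight, and checks that the eigenvalues cross the imaginary axis as the gain product b1 b2 b3 crosses 8:

```python
import numpy as np

def cyclic_matrix(a, b):
    """Cyclic-feedback matrix whose characteristic polynomial is
    prod(s + a_i) + prod(b_i)."""
    n = len(a)
    A = np.diag(-np.asarray(a, float))
    for i in range(1, n):
        A[i, i - 1] = b[i]          # subdiagonal gains b_2, ..., b_n
    A[0, n - 1] = -b[0]             # feedback gain -b_1 in the corner
    return A

n = 3
a = [1.0, 1.0, 1.0]
sec_bound = (1.0 / np.cos(np.pi / n)) ** n      # = 8 for n = 3

# gain product 1.9^3 = 6.859 < 8  ->  Hurwitz
A_stab = cyclic_matrix(a, [1.9] * n)
hurwitz = bool(np.max(np.linalg.eigvals(A_stab).real) < 0)

# gain product 2.1^3 = 9.261 > 8  ->  unstable (bound tight for equal a_i)
A_unst = cyclic_matrix(a, [2.1] * n)
unstable = bool(np.max(np.linalg.eigvals(A_unst).real) > 0)
```

For equal a_i = 1 the eigenvalues are s = −1 + b·e^{iπ(2k+1)/3}, so the rightmost real part is b/2 − 1, which changes sign exactly at b³ = 8.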
It is also proved in [9, Theorem 5.2] that Hc (·) has finite L2 -gain. The interest of
this result is that it shows that a specific “switching” between VSP sub-controllers
guarantees some passivity of the controller, hence depending on the properties of
the plant H (·), the passivity theorem can be used. Further results on passivity in
gain-scheduled controllers may be found in [10–12], with experimental results on a
flexible-joint manipulator [11, 12].
We have seen in Theorems 2.28, 2.53, and 2.54 that there are close relationships between positive real systems and bounded real systems. Namely, an invertible transformation allows one to pass from PR to BR systems, and vice versa. This is depicted in Fig. 5.4: the transfer function of both interconnections is H(s) = (G(s) − I_m)(G(s) + I_m)⁻¹ (in the case of linear time-invariant systems). In Fig. 5.5, the transfer function of both interconnections is equal to G(s) = (I_m − H(s))(I_m + H(s))⁻¹. We recall that a system G: L2,e → L2,e is said to be contractive if ‖G‖ = sup_{0≠x∈L2} ‖Gx‖/‖x‖ ≤ 1, i.e., its gain is less than or equal to unity. In the case of a linear time-invariant system, we speak of a bounded real transfer function, see Definition 2.52.
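The PR ↔ BR correspondence can be illustrated on a scalar example (the transfer function below is an assumed illustration, not taken from the text): G(s) = (s + 2)/(s + 1) is PR, and its transform H = (G − 1)/(G + 1) = 1/(2s + 3) is contractive:

```python
import numpy as np

# Frequency sweep along the imaginary axis
w = np.logspace(-3, 3, 2000)
s = 1j * w
G = (s + 2) / (s + 1)                  # PR: Re G(jw) = (2 + w^2)/(1 + w^2) > 0
H = (G - 1) / (G + 1)                  # Moebius map; here H(s) = 1/(2s + 3)

pr_ok = bool(np.all(G.real > 0))       # positive realness on the sampled axis
br_ok = bool(np.max(np.abs(H)) <= 1.0) # bounded realness (contractive)
```

Indeed |H(jω)| = 1/√(4ω² + 9) ≤ 1/3 < 1, consistent with the strict part of Theorem 5.12 below.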
The following result holds, which extends Theorems 2.53 and 2.54 to nonlinear well-
posed operators. Passivity is understood as in Definition 2.1 with zero bias (β = 0).
Theorem 5.12 ([13]) Let G be a passive operator; then the operators E and F which map u to y in the top and the bottom figures in Fig. 5.4, respectively, are contractive. If G is bounded and G − εI_m is passive for some ε > 0, then ‖E‖ < 1 and ‖F‖ < 1. Conversely, let G be contractive in Fig. 5.5; then the operators C and D that map u to y in both subfigures are passive. Moreover, if ‖G‖ < 1, then both operators are bounded, and C − εI_m, D − εI_m are passive for some ε > 0.
Then we have the following result. Stability of a system is understood as bounded-input bounded-output stability with L2-gain and zero bias (see Definition 4.17), where the input is r = (r1, r2)ᵀ, the output is u = (u1, u2)ᵀ, so that the feedback system is described by (I_m + H)u = r, H = (H2, −H1). One considers the system in Fig. 5.2, where H1 and H2 map L2 to L2, H_i(0) = 0, H_i is continuous on L2, and H_i is causal, 1 ≤ i ≤ 2.
Theorem 5.13 ([13]) Consider the system in Fig. 5.2 and the three statements:
1. H1 is bounded (i.e., ||H1 || < +∞) and H2 is Lipschitz continuous, ||H2 H1 || < 1:
then the feedback system is stable.
2. Let ||H2 || ||H1 || < 1, then the feedback system is stable.
3. Let H1 − ε Im and H2 be passive for some ε > 0 and ||H1 || < +∞, or H2 − ε Im
and H1 are passive and ||H2 || < +∞ for some ε > 0: then the feedback system
is stable.
Then: 2) holds ⇒ 3) holds ⇒ 1) holds ⇒ 2) holds.
The proof of Theorem 5.13 uses Theorem 5.12.
Lemma 5.14 Assume that H1 in Fig. 5.2 is zero-state observable and lossless with a
radially unbounded positive-definite storage function V1 (x1 ), whereas H2 is WSPR.
Then the feedback interconnection of H1 and H2 is Lyapunov globally asymptotically
stable.
Proof Consider V(x1, x2) = x2ᵀP2x2 + 2V1(x1), where V1(·) is a radially unbounded positive-definite storage function for H1. In view of the assumptions and of the KYP Lemma, there exist matrices P2, L2, W2 such that Eqs. (3.2) are satisfied for H2. Computing V̇ along the closed-loop trajectories, one is led to study the set where ȳ2(t) ≜ W2u2(t) + L2x2(t) ≡ 0. Since H2 is WSPR, the associated transfer function H̄2(s) from u2 to ȳ2 has no zeros on the imaginary axis (see Lemma 3.28). Note that Ȳ2(s) = H̄2(s)U2(s).
Therefore, when ȳ2 (t) ≡ 0, u 2 (t) can only either exponentially diverge or expo-
nentially converge to zero. However, if u 2 (t) diverges, it follows from ȳ2 (t) =
W2 u 2 + L 2 x2 ≡ 0 that x2 should also diverge, which is a contradiction. It then fol-
lows that u 2 should converge to zero. Note that for u 2 = 0, the H2 system reduces to
ẋ2 (t) = A2 x2 (t) with A2 Hurwitz. Therefore if ȳ2 (t) ≡ 0, then x2 → 0. On the other
hand u 2 = y1 and so we also have y1 → 0. In view of the zero-state observability
of H1, we conclude that x1 → 0. Hence, from the Krasovskii–LaSalle invariance theorem, the largest invariant set S inside the set {ȳ2 ≡ 0} reduces to x = 0 together with all the trajectories such that x tends to the origin. Therefore, the origin x = 0 is
asymptotically stable. Moreover, when V1 (x1 ) is radially unbounded any trajectory
is bounded, and the equilibrium is globally asymptotically stable.
Another proof can be found in [15]. We have introduced the notion of marginally SPR (MSPR) transfer function in Definition 2.90. The stability of interconnections of MSPR transfer functions is also of interest.
Proposition 5.15 ([16, Theorem 1, Corollary 1.2]) The negative feedback inter-
connection of H1 and H2 in Fig. 5.1 is stable (i.e., the state space realization of the
interconnection with minimal realizations of both transfer matrices H1 (s) and H2 (s),
is globally asymptotically stable), if the following conditions are satisfied:
1. H1 (s) is MSPR,
2. H2 (s) is PR,
3. none of the purely imaginary poles of H1 (s) is a transmission zero of H2 (s).
or if
1. Both H1 (s) and H2 (s) are MSPR.
The proof uses Lemma 3.29 for the MSPR transfer function, the KYP Lemma for
the PR transfer function, and Krasovskii–LaSalle’s invariance theorem.
The interconnection of incrementally passive systems (see Definition 4.62) has been
studied in [17]. Let us consider two MIMO systems:
$$\text{(a)}\ \begin{cases} \dot{x} = f_x(x, u_x, t) \\ y_x = h_x(x, t), \end{cases} \qquad \text{(b)}\ \begin{cases} \dot{z} = f_z(z, u_z, t) \\ y_z = h_z(z, t). \end{cases} \tag{5.16}$$
Proposition 5.16 ([17]) Assume that both systems in (5.16) (a) and (b) are incrementally passive. Let Λ be a gain matrix. Then the interconnection through u_x = Λy_z + v_x and u_z = −Λᵀy_x + v_z is incrementally passive with respect to the input
$$v = \begin{pmatrix} v_x \\ v_z \end{pmatrix} \quad \text{and the output} \quad y = \begin{pmatrix} y_x \\ y_z \end{pmatrix}.$$
Proof Let Vx (t, x1 , x2 ) and Vz (t, z 1 , z 2 ) be the storage functions associated with the
systems in (5.16) (a) and (b), respectively (see the dissipation equality in (4.53)). A
common storage function is constructed as W = V_x + V_z. From the incremental passivity definition, we obtain Ẇ(t) ≤ (y_{x1} − y_{x2})ᵀ(u_{x1} − u_{x2}) + (y_{z1} − y_{z2})ᵀ(u_{z1} − u_{z2}) for all t. Substitution of u_{xi} and u_{zi}, i = 1, 2, yields
$$\dot{W}(t) \le (y_{x1} - y_{x2})^T (v_{x1} - v_{x2}) + (y_{z1} - y_{z2})^T (v_{z1} - v_{z2}),$$
since the cross terms involving Λ cancel each other, which proves the incremental passivity of the interconnection.
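The cancellation of the Λ cross terms in this interconnection is a purely algebraic fact, which can be sanity-checked with random vectors:

```python
import numpy as np

# Skew-symmetric coupling u_x = Lam @ y_z + v_x, u_z = -Lam.T @ y_x + v_z:
# the cross terms dyx'(Lam dyz) + dyz'(-Lam.T dyx) cancel for any increments.
rng = np.random.default_rng(0)
Lam = rng.standard_normal((4, 3))            # arbitrary gain matrix
dyx = rng.standard_normal(4)                 # arbitrary output increments
dyz = rng.standard_normal(3)

cross = dyx @ (Lam @ dyz) + dyz @ (-(Lam.T) @ dyx)
cancels = bool(abs(cross) < 1e-12)
```

The cancellation holds because dyxᵀΛdyz is a scalar, hence equal to its own transpose dyzᵀΛᵀdyx.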
We saw in the proof of Lemma 3.125 in Sect. 3.14.2 that the negative feedback inter-
connection of an SPR system, with a maximal monotone mapping, defines another
maximal monotone mapping (this can be seen as a new operation under which max-
imal monotonicity is preserved). Let us now study under which conditions such an
interconnection defines a passive dynamical system. Let us consider the system in Fig. 5.1. To be consistent with the notations introduced in Sect. 3.14, we let λ ≜ −y2,
so that u 1 = r1 + λ, where λ is a selection of the set-valued operator (in the context
of complementarity systems, λ can be considered as a Lagrange vector multiplier
associated with the unilateral constraint). Thus, H1 has the state space realization
ẋ(t) = Ax(t) + Bλ(t) + Br1 (t) a.e. t ≥ 0, y(t) = C x(t), while H2 is a set-valued
mapping y ↦ −λ. We will denote M ≜ H2. We assume that the system is well-
posed (this is where the maximality plays a role), with unique absolutely continuous
solutions x(·), so that −λ(t) ∈ M (y(t)) for all t ≥ 0. The well-posedness imposes
some restrictions on the initial data, see Lemma 3.125 or Theorem 3.160. We suppose
that these conditions are satisfied.
Proposition 5.17 Consider the feedback system in Fig. 5.1. Assume that (A, B, C)
is passive, M (·) is set-valued maximal monotone, and that (0, 0) ∈ gph(M ). Then
the feedback interconnection is passive in the sense of Definition 2.1.
Proof Let t ≥ 0. From the passivity it follows that P B = C T , using Proposition
3.62 item 2, or Propositions A.67, A.68. One has
$$\begin{aligned} \int_0^t r_1^T(s)y(s)\,ds &= \int_0^t r_1^T(s)Cx(s)\,ds = \int_0^t r_1^T(s)B^T P x(s)\,ds \\ &= \int_0^t \big(\dot{x}(s) - Ax(s) - B\lambda(s)\big)^T P x(s)\,ds \\ &= \tfrac{1}{2}\big[x^T(s)Px(s)\big]_0^t - \tfrac{1}{2}\int_0^t x^T(s)(A^T P + P A)x(s)\,ds - \int_0^t \lambda^T(s)Cx(s)\,ds, \end{aligned} \tag{5.18}$$
consequently,
$$\int_0^t r_1^T(s)y(s)\,ds = \tfrac{1}{2}\big[x^T(s)Px(s)\big]_0^t + \tfrac{1}{2}\int_0^t x^T(s)Qx(s)\,ds - \int_0^t \lambda^T(s)Cx(s)\,ds, \tag{5.19}$$
where Q = −AᵀP − PA ⪰ 0 by passivity. Now by monotonicity and the graph condition, we have ⟨y, λ⟩ ≤ 0 for all y ∈ dom(M) and −λ ∈ M(y) (and we know from the well-posedness conditions that the output will always stay in the domain). Thus ∫₀ᵗ λᵀ(s)y(s)ds ≤ 0. Therefore
$$\int_0^t r_1^T(s)y(s)\,ds \ge -\tfrac{1}{2}x^T(0)Px(0) + \tfrac{1}{2}\int_0^t x^T(s)Qx(s)\,ds + \int_0^t \big(-\lambda^T(s)\big)y(s)\,ds \ge -\tfrac{1}{2}x^T(0)Px(0), \tag{5.20}$$
which ends the proof.
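A minimal scalar sketch of Proposition 5.17 (assumed data, not from the text: A = −1, B = C = 1, so P = 1 solves the KYP conditions, and the monotone map M(y) = y through the origin): the simulated supply integral stays above −x(0)ᵀPx(0)/2, as (5.20) predicts.

```python
import numpy as np

# Scalar Lur'e loop: xdot = A x + B*lambda + B*r1, y = C x, with A = -1,
# B = C = 1 (passive, P = 1) and -lambda in M(y) = {y}, i.e. lambda = -y.
# Passivity of r1 -> y means  Int_0^t r1*y ds >= -x(0)^2 / 2  for all t.
dt, N = 1e-3, 20000
t = np.arange(N) * dt
r1 = np.sin(2.0 * t) - 0.5                 # arbitrary input
x = np.empty(N)
x[0] = 3.0                                 # nonzero initial state
for k in range(N - 1):                     # explicit Euler integration
    lam = -x[k]                            # selection of the monotone map
    x[k + 1] = x[k] + dt * (-x[k] + lam + r1[k])

supply = np.cumsum(r1 * x) * dt            # running Int_0^t r1*y ds
bound = -x[0] ** 2 / 2                     # -x(0)' P x(0) / 2 with P = 1
passive = bool(np.all(supply >= bound - 1e-6))
```

The closed loop reduces to ẋ = −2x + r1, and the running supply dips only mildly below zero, well above the bound −4.5.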
The cases with D ≠ 0, and with (y⋆, 0) ∈ gph(M), y⋆ ≠ 0 (for instance, a relay multifunction whose vertical part does not contain (0, 0)), remain to be analyzed. The input/output operator r1 → y for the above interconnected system, with a nonzero feedthrough matrix D (y = Cx + Dλ), reads as
$$\begin{cases} \dot{x}(t) \in \big(A - B(M^{-1} + D)^{-1}C\big)(x) + Br_1 \\ y(t) \in \big(C - D(M^{-1} + D)^{-1}C\big)(x). \end{cases} \tag{5.21}$$
Both right-hand sides in the state and in the output equations may be set-valued. Conditions on both M(·) and D have to be imposed so that they become single-valued. The dynamical system in (5.21) is a complex nonlinear system. See Sect. 3.14 for particular cases where well-posedness and stability hold for (5.21).
Remark 5.18 The condition that (0, 0) ∈ gph(M) is already present in Zames' seminal articles [1, 18], where incrementally positive relations (that is, input/output mappings in a very general setting, including possibly set-valued operators) were defined under this condition [18, Appendix A].
5.1.8 Large-Scale Systems

Consider N subsystems H_i, i = 1, ..., N, interconnected through
$$u_i = u_{e,i} - \sum_{j=1}^{N} H_{ij}\, y_j, \tag{5.22}$$
where u_i is the input of subsystem H_i, y_i is its output, u_{e,i} is an external input, and all the H_{ij} are constant matrices. Grouping the inputs, outputs, and external inputs into N-vectors u, y, and u_e, respectively, one may rewrite (5.22) as
Q̂ = S H + H T S T − H T R H − Q. (5.24)
Theorem 5.19 ([6, Theorem 1]) The overall system with input u_e(·) and output y(·), and the interconnection in (5.23), is L2-finite gain stable if Q̂ ≻ 0 in (5.24).
Clearly, one can always find such a scalar. Then one finds after some manipulation
$$\int_{t_0}^{t_1} \left(\hat{Q}^{\frac{1}{2}} y(t) - \hat{S} u_e(t)\right)^T \left(\hat{Q}^{\frac{1}{2}} y(t) - \hat{S} u_e(t)\right) dt \le \alpha^2 \int_{t_0}^{t_1} u_e^T(t) u_e(t)\,dt, \tag{5.28}$$
so that
$$\int_{t_0}^{t_1} y^T(t) y(t)\,dt \le k^2 \int_{t_0}^{t_1} u_e^T(t) u_e(t)\,dt \tag{5.29}$$
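Applied to the two-subsystem loop of Fig. 5.2, Theorem 5.19 recovers Theorem 5.5. The sketch below (hypothetical excess/shortage levels δ_i, ε_i) builds Q̂ from (5.24) and checks its positive definiteness:

```python
import numpy as np

# Two SISO pseudo-VSP blocks with supply rate y*u - delta_i*y^2 - eps_i*u^2,
# i.e. Q_i = -delta_i, S_i = 1/2, R_i = -eps_i (hypothetical numbers), in the
# standard loop u1 = ue1 - y2, u2 = ue2 + y1, i.e. H = [[0, 1], [-1, 0]].
d1, e1 = 0.3, 0.5           # H1: excess of passivity in both channels
d2, e2 = -0.2, -0.1         # H2: lack of passivity in both channels
H = np.array([[0.0, 1.0], [-1.0, 0.0]])
Q = np.diag([-d1, -d2])
S = np.diag([0.5, 0.5])
R = np.diag([-e1, -e2])

Q_hat = S @ H + H.T @ S.T - H.T @ R @ H - Q
stable = bool(np.all(np.linalg.eigvalsh(Q_hat) > 0))
# Here Q_hat = diag(d1 + e2, d2 + e1), so Q_hat > 0 is exactly the
# two-channel condition eps2 + delta1 > 0, eps1 + delta2 > 0 of Theorem 5.5.
```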
Let us recall that we assumed at the beginning of this section that all signals belong to the extended space L2,e (more rigorously: the inputs are in L2,e and we assume that the systems are well-posed in the sense that the outputs also belong to L2,e). Under such an assumption, one sees that stating (5.29) for all t1 ≥ t0 ≥ 0 is equivalent to stating ‖y‖_{2,t} ≤ k‖u_e‖_{2,t} for all t ≥ 0, where ‖·‖_{2,t} is the truncated L2 norm. One notes that Theorem 5.19 is constructive in the sense that the interconnections H_{ij} may be chosen or designed so that the Riccati inequality Q̂ ≻ 0 in (5.24) is satisfied. Let us end this section with a result which will allow us to make a link between the interconnection structure and so-called M-matrices.
Theorem 5.20 ([6, Theorem 5]) Let the subsystem H_i have an L2-finite gain γ_i and suppose that all subsystems are single-input single-output (SISO). Let Γ = diag(γ_i), and A = ΓH. Then, if there exists a diagonal positive-definite matrix P such that
$$P - A^T P A \succ 0, \tag{5.30}$$
the overall interconnected system is L2-finite gain stable.
A sufficient condition for the existence of a matrix P as in the theorem is that the matrix B made of the entries b_{ii} = 1 − |a_{ii}|, b_{ij} = −|a_{ij}| for i ≠ j, has all its leading principal minors positive. Such a matrix is called an M-matrix.
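A small numerical sketch of Theorem 5.20 with two SISO subsystems (hypothetical gains γ_i): both the Lyapunov test (5.30) with the candidate P = I and the M-matrix minor test are verified.

```python
import numpy as np

gamma = np.array([0.6, 0.8])                 # hypothetical L2-gains
H = np.array([[0.0, 1.0], [-1.0, 0.0]])      # interconnection matrix
A = np.diag(gamma) @ H                       # A = Gamma H

# Lyapunov test (5.30) with the diagonal candidate P = I
P = np.eye(2)
cond_ok = bool(np.all(np.linalg.eigvalsh(P - A.T @ P @ A) > 0))

# M-matrix test: b_ii = 1 - |a_ii|, b_ij = -|a_ij| (i != j), leading
# principal minors all positive
absA = np.abs(A)
B = np.diag(1.0 - np.diag(absA)) - (absA - np.diag(np.diag(absA)))
minors_pos = all(np.linalg.det(B[:k, :k]) > 0 for k in range(1, 3))
```

Here AᵀA = diag(0.64, 0.36), so P − AᵀPA = diag(0.36, 0.64) ≻ 0, and det B = 1 − 0.48 = 0.52 > 0, consistent with the small-gain flavor of the result (γ1γ2 < 1).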
Further reading: Interconnections may lead to implicit equations for the variables u_i(·), which must be transformed into explicit expressions (the large-scale system is then well-posed). Such well-posedness is studied in [19]. Another notion of well-posedness (L2,e BIBO stability, causality, Lipschitz dependence of solutions as functions of inputs) is analyzed in [20]. The stability and the control of large-scale systems are studied in depth in [21–29], using passivity. The notion of vector dissipativity is used in [22–24]. Vector dissipativity is defined as
$$V(x(t)) \le\le e^{W(t - t_0)} V(x(t_0)) + \int_{t_0}^{t} e^{W(t - s)} w(u(s), y(s))\,ds$$
for some nonnegative matrix W, where ≤≤ means ≤ componentwise. Large-scale discrete-time systems are tackled in [23]. It is noteworthy that graph theory is used in these works [20, 21, 30]. QSR-dissipativity is used
in [21, 25]. It is clear that the structure of the matrix H in (5.23) has a crucial effect
on the system’s behavior, and this is already present in Theorems 5.19 and 5.20. Star-
shaped and cyclic-symmetry interconnections are analyzed in [21], while diffusively
coupled and iterative feedback systems are studied in [25]. Each one corresponds to a
particular structure of H . Instability results are stated in [31, 32] (the first condition
in [31, Theorem 7] is redundant). Many of the above results have been stated in
[33] (condition (iii) of [33, Theorem 4] can be removed, and condition (ii) has to be stated with ψ(·) a nonnegative real-valued function). See also [34, Chap. 2] for
further generalizations of interconnected systems and more references therein. The
discrete-time analog of the passivity Theorem 5.5 is presented in [5, pp. 371–374].
It is noteworthy that more general versions should answer the question of how to
preserve the passivity (or dissipativity) of an interconnection when the feedback loop is
discretized. In other words, one should check whether or not the application of a discrete-time
controller preserves the closed-loop dissipativity through an extended version of the
passivity theorem. One answer is given in [35, Theorem 11], where L2 -stability of
the feedback interconnection of a ZOH discretized plant with a discrete-time con-
troller, and a static nonlinearity in a sector, is shown. Finally, interconnections of
infinite-dimensional systems (partial differential equations with inputs and outputs)
are analyzed in [36].
Let us consider the feedback interconnection in Fig. 5.6. Then the following is true.
Theorem 5.21 ([37, 38]) Assume that the SISO LTI system H1 is NI, while the
SISO LTI system H2 is strictly NI, and that H1 (∞)H2 (∞) = 0, H2 (∞) ≥ 0. Then
the positive feedback interconnection in Fig. 5.6 is internally stable, if and only if
λmax(H₁(0)H₂(0)) < 1. In such a case, the transfer function from (r₁, r₂) to (y₁, y₂)
is strictly NI.
Internal stability means that the closed-loop system has no algebraic loop, and all its
poles are in the closed left half complex plane. The second part of the theorem also
holds in the MIMO case, under the internal stability condition [37].
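The DC-gain condition of Theorem 5.21 can be illustrated on a toy SISO pair of our own choosing (an assumption, not an example from the book): H1(s) = 1/(s+1), which is NI, and H2(s) = k/(s+2), strictly NI for k > 0. Under positive feedback the characteristic polynomial is (s+1)(s+2) − k, and the condition H1(0)H2(0) = k/2 < 1 separates the stable and unstable cases.

```python
import numpy as np

# Positive feedback of H1(s) = 1/(s+1) (NI) and H2(s) = k/(s+2)
# (strictly NI for k > 0); closed-loop characteristic polynomial:
# (s+1)(s+2) - k = s^2 + 3s + (2 - k).
def closed_loop_poles(k):
    return np.roots([1.0, 3.0, 2.0 - k])

def dc_condition(k):
    # lambda_max(H1(0) H2(0)) < 1 reduces here to (k/2) < 1
    return (k / 2.0) < 1.0

for k in (1.0, 4.0):
    poles = closed_loop_poles(k)
    print(k, dc_condition(k), bool(np.all(poles.real < 0)))
```

For k = 1 the condition holds and both poles are in the open left half plane; for k = 4 the condition fails and one pole crosses into the right half plane.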
In this section, we will study the relationship between dissipativeness and stability
of dynamical systems. Let us first recall that in the case of linear systems, the plant
is required to be asymptotically stable to be WSPR, SPR, or SSPR. For a PR system,
it is required that its poles lie in the left half plane and that the poles on the jω-axis be
simple and have nonnegative associated residues. Consider a dissipative system as
in Definition 4.21. It can be seen that if u ≡ 0 or y ≡ 0, then V(x(t)) ≤ V(x(0)).
If in addition the storage function is positive definite, then we can conclude that the
system ẋ(t) = f (x(t)) has a Lyapunov stable fixed point x = 0, and the system’s
zero dynamics is stable. Furthermore, if the system is strictly passive (i.e., S (x) > 0
in (4.21)) then the system ẋ(t) = f (x(t)), and the system’s zero dynamics are both
asymptotically stable (see Theorem 4.10). Let us now consider passive systems as
given in Definition 2.1. The following two lemmas will be used to establish the
conditions under which a passive system is asymptotically stable.
Definition 5.22 (locally ZSD) A nonlinear system (4.85) is locally zero-state detectable
(ZSD) [resp. locally zero-state observable (ZSO)] if there exists a neighborhood N
of 0 such that, for all trajectories with x(t) ∈ N, u(t) ≡ 0 and y(t) ≡ 0 imply
limt→+∞ x(t) = 0 [resp. x(t) ≡ 0].
Roughly speaking, ZSO means that “large” states must create “large” outputs.
Lemma 5.23 ([39, Lemma 7]) Consider a dissipative system with a general
supply rate w(u, y). Assume that
is a (minimum) solution of the KYP-NL set of equations (4.88), see the necessity
part of the proof of Lemma 4.100 and Theorem 4.43. Recall that 0 ≤ Va (x) ≤ V (x).
If we choose u such that w(u, y) ≤ 0 on [t0 , ∞), with strict inequality on a subset of
positive measure, then Va(x) > 0 for all y ≠ 0. Note from the equation above that the
available storage Va (x) does not depend on u(t) for t ∈ [t0 , ∞). When y = 0 we can
choose u = 0 and therefore x = 0 in view of the zero-state observability assumption.
We conclude that Va (x) is positive definite and that V (x) is also positive definite
(see Definition A.12).
Remark 5.24 Lemma 5.23 is also stated in [4] in a slightly different way (with
stronger assumptions). Lemma 5.23 shows that observability-like conditions are
crucial to guarantee that storage functions are positive definite, a fact that is in turn
important for storage functions to be Lyapunov functions candidates. Consider the
linear time-invariant case. We recover the fact that observability guarantees that the
solutions of the KYP Lemma LMI are positive definite as proved by Kalman in [40],
see Sect. 3.3.
Lemma 5.25 Under the same conditions as in the previous lemma, the free system
ẋ = f(x) is (Lyapunov) stable if Q ⪯ 0 and asymptotically stable if Q ≺ 0, where
Q is the weighting matrix in the general supply rate (4.86).
Proof From Corollary 4.101 and Lemma 4.100 there exists V(x) > 0 for all x ≠ 0,
V (0) = 0, such that (using (4.88) and (4.89))
5.3 Positive Definiteness of Storage Functions 375
d(V∘x)/dt (t) = −[L(x(t)) + W(x(t))u(t)]ᵀ[L(x(t)) + W(x(t))u(t)]
                + yᵀ(t)Qy(t) + 2yᵀ(t)Su(t) + uᵀ(t)Ru(t)
              = −Lᵀ(x(t))L(x(t)) − 2Lᵀ(x(t))W(x(t))u(t) − uᵀ(t)Wᵀ(x(t))W(x(t))u(t)
                + (h(x(t)) + j(x(t))u(t))ᵀQ(h(x(t)) + j(x(t))u(t))
                + 2(h(x(t)) + j(x(t))u(t))ᵀSu(t) + uᵀ(t)Ru(t),  (5.31)

so that

d(V∘x)/dt (t) = −Lᵀ(x(t))L(x(t)) − uᵀ(t)Wᵀ(x(t))W(x(t))u(t)
                + uᵀ(t)[R + jᵀ(x(t))Qj(x(t)) + jᵀ(x(t))S + Sᵀj(x(t))]u(t)
                + 2[−Lᵀ(x(t))W(x(t)) + hᵀ(x(t))(Qj(x(t)) + S)]u(t) + hᵀ(x(t))Qh(x(t))
              = −Lᵀ(x(t))L(x(t)) − uᵀ(t)R̂(x(t))u(t) + uᵀ(t)R̂(x(t))u(t)
                + 2[−Lᵀ(x(t))W(x(t)) + hᵀ(x(t))Ŝ(x(t))]u(t) + hᵀ(x(t))Qh(x(t))
              = −Lᵀ(x(t))L(x(t)) + ∇Vᵀ(x(t))g(x(t))u(t) + hᵀ(x(t))Qh(x(t)).  (5.32)

For the free system ẋ(t) = f(x(t)) we have

d(V∘x)/dt (t) = −Lᵀ(x(t))L(x(t)) + hᵀ(x(t))Qh(x(t)) ≤ hᵀ(x(t))Qh(x(t)) ≤ 0.
Example 5.26 Let us come back to Example 4.66. The system in (4.54) is not zero-
state detectable, since u ≡ 0 and y ≡ 0 do not imply x(t) → 0 as t → +∞. Moreover,
the uncontrolled (or free) system ẋ(t) = x(t) is exponentially unstable. This shows the
necessity of the ZSD condition.
Corollary 5.27 ([4]) Consider a dissipative system with a general supply rate
w(u, y). Assume that
1. The system is zero-state observable (i.e., u(t) ≡ 0 and y(t) ≡ 0 ⇒ x(t) = 0).
2. For any y = 0 there exists some u such that w(u, y) < 0.
Then passive systems (i.e., Q = R = 0, 2S = I) and input strictly passive systems
(ISP) (i.e., Q = 0, 2S = I, R = −εI) are stable, while output strictly passive systems (OSP)
(i.e., Q = −δI, 2S = I, R = 0) and very strictly passive systems (VSP) (i.e., Q =
−δI, 2S = I, R = −εI) are asymptotically stable.
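These supply-rate classes can be checked numerically on a scalar example of our own (an assumed illustration, not from the text): ẋ = −x + u, y = x, with storage V = x²/2, satisfies V̇ = uy − y², i.e., it is OSP with δ = 1. A crude Euler integration verifies the dissipation inequality V(x(T)) − V(x(0)) ≤ ∫(uy − y²)dt up to discretization error.

```python
import numpy as np

# Scalar OSP illustration (our own): x' = -x + u, y = x, V = x^2/2,
# so V' = u*y - y^2 (supply rate with Q = -1, 2S = 1, R = 0).
def simulate_osp(u_fn, x0=1.0, T=5.0, dt=1e-3):
    x, supply = x0, 0.0
    for k in range(int(T / dt)):
        u = u_fn(k * dt)
        y = x
        supply += (u * y - y * y) * dt   # integral of the OSP supply rate
        x = x + dt * (-x + u)
    return 0.5 * x * x - 0.5 * x0 * x0, supply

gain, supply = simulate_osp(lambda t: np.sin(t))
print(gain <= supply + 1e-2)   # dissipation inequality holds up to O(dt)
```

The tolerance absorbs the first-order Euler error; in continuous time the two sides are equal for this example, since V̇ = uy − y² exactly.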
Before stating the next lemma, let us introduce another notion of zero-state detectabil-
ity.
for some t < +∞ such that 0 ≤ t ≤ τ . If in addition for any sequence {wn } ∈ Ω,
one has α(wn ) → +∞ as ||wn || → +∞, the system is said to be locally uniformly
zero-state detectable in Ωz with respect to Ω.
Clearly, a system that is ZSD according to this definition is also ZSD in the sense of
Definition 5.22. Sometimes, a system that satisfies the first part of Definition 5.28
is called uniformly observable. The local versions of Lemmas 5.23 and 5.25 are as
follows:
Suppose that Ωz ∩ Ωc ≠ ∅. Then, the dynamical system has all its storage func-
tions V : Ωz ∩ Ωc → R continuous, with V(0) = 0 and V(x) > 0 for all x ∈ Ωz ∩ Ωc, x ≠ 0.
Moreover, for any sequence {xn} ∈ Ωz ∩ Ωc, V(xn) → +∞ as ||xn|| → +∞.
We will also say that a system is locally reachable with respect to Ω in a
region Ωr ⊆ Ω, if every state x1 ∈ Ωr is locally reachable with respect to Ω from
the origin x = 0 and for all t0 ∈ R, with an input that keeps the state trajectory inside
Ω.
Now we are ready to state the main result which concerns the local stability deduced
from local dissipativity.
In this section, we prove that a system being WSPR (Weakly Strictly Positive Real)
does not necessarily imply that it is OSP (Output Strictly Passive). Consider the
transfer function

H(s) = (s + a + b) / ((s + a)(s + b)),  a > 0, b > 0.  (5.34)
It can be shown (taking a = 1, b = 2, and the input u(t) = sin(ωt)) that, in steady state,
y(t) = f₁(ω) cos(ωt) + f₂(ω) sin(ωt), with f₁(ω) = −(ω³ + 7ω)/((1 + ω²)(4 + ω²)) and
f₂(ω) = 6/((1 + ω²)(4 + ω²)). It can also be proved that

∫₀ᵗ u(τ)y(τ)dτ = −(f₁(ω)/(4ω))[cos(2ωt) − 1] + (f₂(ω)/2)[t − sin(2ωt)/(2ω)],  (5.36)

and that

∫₀ᵗ y²(τ)dτ = f₁²(ω)[t/2 + sin(2ωt)/(4ω)] + f₂²(ω)[t/2 − sin(2ωt)/(4ω)]
              − (f₁(ω)f₂(ω)/(2ω))[cos(2ωt) − 1].  (5.37)

Let us choose tₙ = 2nπ/ω for some integer n > 0. Then ∫₀^{tₙ} u(τ)y(τ)dτ = nπ f₂(ω)/ω,
whereas

∫₀^{tₙ} y²(τ)dτ = nπ(f₁²(ω) + f₂²(ω))/ω.

It follows that ∫₀^{tₙ} u(τ)y(τ)dτ ∼ α/ω⁵ while ∫₀^{tₙ} y²(τ)dτ ∼ γ/ω³ as ω → +∞, for some
positive reals α and γ. Therefore, we have found an input u(t) = sin(ωt) and times tₙ
such that the inequality ∫₀ᵗ u(τ)y(τ)dτ ≥ δ ∫₀ᵗ y²(τ)dτ cannot be satisfied for any
δ > 0, as ω → +∞.
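The counterexample can be checked numerically (assuming a = 1 and b = 2, so H(s) = (s+3)/((s+1)(s+2)), and using the steady-state response, which matches the large-t behavior of (5.36)–(5.37)): the ratio of the two integrals decays like 1/ω².

```python
import numpy as np

# H(s) = (s + 3)/((s + 1)(s + 2)) with input u(t) = sin(w t),
# integrated over n whole input periods.
def osp_ratio(w, n_periods=5, steps=200000):
    t = np.linspace(0.0, 2.0 * np.pi * n_periods / w, steps)
    dt = t[1] - t[0]
    H = (3 + 1j * w) / ((1 + 1j * w) * (2 + 1j * w))
    # steady-state output y = Re(H) sin(wt) + Im(H) cos(wt)
    y = H.real * np.sin(w * t) + H.imag * np.cos(w * t)
    u = np.sin(w * t)
    return np.sum(u * y) * dt / (np.sum(y * y) * dt)

print(osp_ratio(10.0), osp_ratio(100.0))  # the ratio shrinks as w grows
```

No fixed δ > 0 can sit below this ratio for all ω, which is exactly the failure of output strict passivity.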
5.5 Stabilization by Output Feedback 379
where x(t) ∈ Rⁿ, u(t), y(t) ∈ Rᵐ, f(·), g(·), h(·), and j(·) are smooth functions
of x, and f(0) = h(0) = 0. We can now state the following result.
Theorem 5.35 (Global asymptotic stabilization [44]) Suppose (5.38) is passive and
locally ZSD. Let φ(y) be any smooth function such that φ(0) = 0 and yᵀφ(y) > 0
for all y ≠ 0. Assume that the storage function V(x) > 0 is proper. Then, the control
law u = −φ(y) asymptotically stabilizes the equilibrium point x = 0. If in addition
(5.38) is ZSD, then x = 0 is globally asymptotically stable.
Proof By assumption, V(x) > 0 for all x ≠ 0. Replacing u = −φ(y) in (4.43), we
obtain

V(x(t)) − V(x(0)) ≤ −∫₀ᵗ yᵀ(s)φ(y(s))ds ≤ 0.
It follows that V(x(t)) ≤ V(x(0)) < ∞, which implies that x(t) remains bounded for all
t ≥ 0, and thus y(t) is bounded. Therefore, V(x(·)) is nonincreasing and thus converges,
so that ∫₀^{+∞} yᵀ(s)φ(y(s))ds < +∞. Hence y(t) → 0 as t → +∞ (by a Barbălat-type
argument, using the boundedness of the trajectories), and u also converges to 0. Since
the system is locally ZSD, then x(t) → 0 as t → +∞. If, in addition, the system is
globally ZSD, then x = 0 is globally asymptotically stable.
It is noteworthy that Theorem 5.35 is an absolute stability result, due to the sectoricity
imposed on the static feedback.
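A minimal Euler sketch of Theorem 5.35, on an illustrative passive plant of our own choosing (not from the text): ẋ₁ = x₂, ẋ₂ = −x₁ + u with y = x₂ has storage V = (x₁² + x₂²)/2, so V̇ = uy; it is ZSD, and φ(y) = y is an admissible sector nonlinearity.

```python
import numpy as np

# Lossless oscillator closed with u = -phi(y), phi(y) = y.
def simulate_fb(x0, T=60.0, dt=1e-3):
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        u = -x[1]                         # output feedback u = -y
        x = x + dt * np.array([x[1], -x[0] + u])
    return x

print(np.linalg.norm(simulate_fb([1.0, 1.0])))  # decays toward 0
```

The feedback injects damping through the passive output, turning mere Lyapunov stability of the open loop into asymptotic stability, as the theorem predicts.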
Lemma 5.36 Suppose the system (5.38) is passive and zero-state observable, with
feedback control law u = −φ(y), φ(0) = 0. Then the storage function of the closed-
loop system is positive definite, i.e., V(x) > 0 for all x ≠ 0.
Proof Recall that the available storage satisfies 0 ≤ Va(x) ≤ V(x) and

Va(x) = sup_{x=x(0), t≥0, u} { −∫₀ᵗ yᵀ(s)u(s)ds }
      = sup_{x=x(0), t≥0, u} { ∫₀ᵗ yᵀ(s)φ(y(s))ds }.  (5.39)
where f(·, ·), g(·, ·), and h(·, ·) are continuous functions R₊ × Rⁿ → Rⁿ, f(t, 0) =
0 and h(t, 0) = 0 for all t ≥ 0. It is further supposed that f(·, ·), g(·, ·), and h(·, ·)
are uniformly bounded functions. Since the system is not autonomous, it is no longer
possible to apply the arguments based on the Krasovskii–LaSalle’s invariance prin-
ciple. An extension is proposed in [48] which we summarize here. Before stating the
main result, some definitions are needed.
Definition 5.37 ([48]) Let g : R+ × X → Rm be a continuous function. An
unbounded sequence γ = {tn } in R+ is said to be an admissible sequence associ-
ated with g(·) if there exists a continuous function gγ : R+ × X → Rm such that
the associated sequence {gn | (t, x) → g(t + tn , x)} converges uniformly to gγ (·)
on every compact subset of R+ × X . The function gγ (·) is uniquely determined and
called the limiting function of g(·) associated with γ .
Definition 5.38 ([48]) Let g : R+ × X → Rm be a continuous function. It is said to
be an asymptotically almost periodic (AAP) function if, for any unbounded sequence
{tn } in R+ there exists a subsequence γ of {tn } so that γ is an admissible sequence
associated with g(·).
The set of all admissible sequences associated with an AAP function g(·) is denoted
as Γ (g). As an example, any continuous function g : X → Rm , x → g(x), has all its
limiting functions equal to itself. A function g : R+ × R p → Rm that is continuous
and such that g(·, x) is periodic for each fixed x, has limiting functions which can
be written as time-shifting functions gt0 : (t, x) → g(t + t0 , x) of g(·, ·) for some
t0 > 0.
Lemma 5.39 ([48]) Suppose that g : R+ × X → Rm is uniformly continuous and
bounded on R+ × κ for every compact κ ⊂ X . Then, g(·, ·) is an AAP function.
Let f (·, ·) and h(·, ·) be AAP functions. With the system in (5.40), one associates
its reduced limiting system
ż(t) = f γ (t, z(t))
(5.41)
ζ (t) = h γ (t, z(t)).
and its transformed form (3.273). We, however, consider now the controlled case,
i.e.,

⟨dz/dt (t) − RAR⁻¹z(t) − RFu(t), v − z(t)⟩ ≥ 0, for all v ∈ K̄ᵤ, a.e. t ≥ 0,
z(t) ∈ K̄ᵤ, for all t ≥ 0,  (5.43)
with an output y = Cx + Du = CR⁻¹z + Du, A ∈ Rⁿˣⁿ, B ∈ Rⁿˣᵐ, C ∈ Rᵐˣⁿ,
D ∈ Rᵐˣᵐ. The set K̄ᵤ = {h ∈ Rⁿ | CR⁻¹h + Du ∈ K} is the set in which the
output is constrained to stay for all times (including initially). Remember that R²B =
Cᵀ, where R² = P and P = Pᵀ ≻ 0 is the solution of PB = Cᵀ. The “input” matrix
B is hidden in this formulation, but we recall that the variational inequality (5.43) is
equivalent to the inclusion
ẋ(t) − Ax(t) − Fu(t) ∈ −B N_K(y(t)),
y(t) = Cx(t) + Du(t),  (5.44)
y(t) ∈ K, for all t ≥ 0,
via the state transformation z = Rx (recall that N_K(x) is the normal cone to the
nonempty closed convex set K). Let us apply an output feedback u = −Gy = −G(Cx +
Du). To avoid an algebraic loop, let us assume the following.
Assumption 13 The feedback gain matrix G is chosen such that Im + G D is full
rank.
Therefore, u = −(Im + G D)−1 GC x. The inclusion in (5.44) becomes ẋ(t) − Āx(t)
∈ −B N K (C̄), with Ā = A − F(Im + G D)−1 GC and C̄ = C − D(Im + G D)−1
GC. We define K̄ = {h ∈ Rn | C̄ R −1 h ∈ K }.
Proof The proof is a direct application of [49, Corollary 6], which itself stems
from more general results on invariance for evolution variational inequalities, see
Sect. 3.14.3.5.
The synthesis problem therefore boils down to finding G and P = Pᵀ ≻ 0 such that
(A − F(Im + GD)⁻¹GC)ᵀP + P(A − F(Im + GD)⁻¹GC) ⪯ 0, PB = (C − D
(Im + GD)⁻¹GC)ᵀ, and with the additional condition Ker[R(A − F(Im + GD)⁻¹
GC)R⁻¹ + R⁻¹(A − F(Im + GD)⁻¹GC)ᵀR] ∩ K̄ = {0}. If one imposes that
(Ā, B, C̄) be SPR, then this last condition is trivially satisfied, since SPRness implies
(A − F(Im + GD)⁻¹GC)ᵀP + P(A − F(Im + GD)⁻¹GC) ≺ 0, hence (A − F
(Im + GD)⁻¹GC)ᵀP + P(A − F(Im + GD)⁻¹GC) is invertible and so is R(A −
F(Im + GD)⁻¹GC)R⁻¹ + R⁻¹(A − F(Im + GD)⁻¹GC)ᵀR.
The above result is based on a feedback of the output y. In general, y is not
a measured output; it is a signal defined from modeling. Thus, it is of interest
to consider, in addition to y, a measured output w = Cw x + Dw u. This is a
major discrepancy with respect to the classical output feedback problem, as analyzed
in Sect. 2.14.3, where only one output is considered. Then the feedback controller
takes the form u = −Gw for some feedback gain matrix G. Proceeding
as above, it follows that u = −(Im + GDw)⁻¹GCw x, assuming that Im + GDw
is full rank, and y = Cx + Du = (C − D(Im + GDw)⁻¹GCw)x. Denoting Â =
A − F(Im + GDw)⁻¹GCw and Ĉ = C − D(Im + GDw)⁻¹GCw, the inclusion in (5.44)
becomes
ẋ(t) − Âx(t) ∈ −B N_K(Ĉx(t)).  (5.46)
The same steps may be followed to restate Lemma 5.41, where this time the triple
(Â, B, Ĉ) should be PR, R²B = Ĉᵀ, K̄ = {h ∈ Rⁿ | ĈR⁻¹h ∈ K}. It is noteworthy
that it is still y(t) ∈ K that is constrained, not w. The interplay between both outputs
is captured in Ĉ.
Byrnes, Isidori, and Willems [44] have found sufficient conditions for a nonlinear
system to be feedback equivalent to a passive system with a positive-definite storage
function. See Sect. A.2 in the appendix for a short review on differential geometry
tools for nonlinear systems. Consider a nonlinear system described by
ẋ(t) = f (x(t)) + g(x(t))u(t), x(0) = x0
(5.47)
y(t) = h(x(t)),
with f (·), g(·), and h(·) sufficiently smooth, f (0) = 0, h(0) = 0. Let us state a
general definition of the minimum-phase property.
Definition 5.42 The control system (5.47) is said to possess the minimum-phase
property with respect to the fixed point x = 0, if x is an asymptotically stable
equilibrium of the system under the constraint y ≡ 0. The dynamics of the system
(5.47) subjected to this constraint is called the zero dynamics.
Constraining the dynamics to the zero output obviously implies a particular choice
of the input (this is the output-zeroing feedback controller).
Definition 5.43 The system (5.47) is feedback equivalent to a passive system, if
there exists a feedback u(x, t) = α(x) + β(x)v(t) such that the closed-loop system
( f (x) + g(x)α(x), g(x)β(x), h(x)) is passive.
This is an extension to the nonlinear case of what is reported in Sects. 3.7 and 2.14.3.3.
This is often referred to as the problem of passification of nonlinear systems [50]. The
system has relative degree {1, . . . , 1} at x = 0 if L_g h(0) = (∂h/∂x)(x) g(x)|_{x=0} is a
nonsingular m × m matrix. If in addition the set of vector fields g₁(x), . . . , gₘ(x) is
involutive, then the system can be written in the normal form
ż(t) = q(z(t), y(t)),
ẏ(t) = b(z(t), y(t)) + a(z(t), y(t))u(t),  (5.48)
where b(z, y) = L f h(x) and a(z, y) = L g h(x). The normal form is globally defined
if and only if
H1: L_g h(x) is nonsingular for all x.
H2: The columns of g(x)[L_g h(x)]⁻¹ form complete vector fields.
H3: The vector fields formed by the columns of g(x)[L_g h(x)]⁻¹ commute.
The zero dynamics describes the internal dynamics of the system when y ≡ 0 and is
characterized by ż(t) = q(z(t), 0). Define the manifold Z∗ = {x : h(x) = 0} and the
vector field f̃(x) = f(x) + g(x)u∗(x), with u∗(x) the output-zeroing feedback.
Let f ∗ (x) be the restriction to Z ∗ of f˜(x). Then the zero dynamics is also described
by
The next definition particularizes Definition 5.42, when the normal form exists.
Definition 5.44 Assume that the matrix L g h(0) is nonsingular. Then the system
(5.47) is said to be
1. Minimum phase if z = 0 is an asymptotically stable equilibrium of (5.51),
2. Weakly minimum phase if there exists W ∗ (z) ∈ C r , r ≥ 2, with W ∗ (z) positive
definite, proper and such that L f ∗ W ∗ (z) ≤ 0 locally around z = 0.
These definitions become global, if they hold for all z, and H1–H3 above hold.
Remark 5.45 It is in general quite a difficult task to calculate the normal form of
a nonlinear system, and hence to characterize the stability of its zero dynamics. A
direct way consists in setting y(t) and its derivatives (along the system’s trajectories)
3 The problem of rendering the quadruple (A, B, C, D) passive by pole shifting is to find α ∈ R
such that (A + α In , B, C, D) is PR.
5.6 Zero Dynamics and Equivalence to a Passive System 385
to zero, and then in calculating the remaining dynamics, which is the zero dynamics.
This should a priori work even if the normal form does not exist. A way to check
the minimum-phase property has been proposed in [51]. Let us define (in the SISO
case) Hr (x, u) = (h(x), ḣ(x), ḧ(x), ..., h (r ) (x))T , the vector of output derivatives up
to the relative degree r .
Theorem 5.46 ([51, Theorem 1]) Consider the control system (5.47), and assume
that its normal form exists. The system is minimum phase if and only if there exists
a Lyapunov function V (·) and a function ρ(·), such that the dissipation inequality
∇V T (x)( f (x) + g(x)u) < Hr (x, u)T ρ(x, u) is satisfied for all admissible u and all
nonzero x in a neighborhood of x = 0.
Theorem 5.48 ([44]) Assume that the system (5.47) is passive with a C 2 positive-
definite storage function V (x). Suppose x = 0 is a regular point. Then L g h(0) is
nonsingular and the system has a relative degree {1, . . . , 1} at x = 0.
Proof If L_g h(0) is singular, there exists u(x) ≠ 0 for x in a neighborhood N(0) of
x = 0 such that L_g h(x)u(x) = 0. Since rank{dh(x)} = m for all x ∈ N(0), we have
γ(x) = g(x)u(x) ≠ 0 for all x ∈ N(0). Given that the system (5.47) is passive, it
follows that L_g V(x) = hᵀ(x), so that
where L_γ[uᵀh] = (∂(uᵀh)/∂x) γ, and

∂(uᵀh)/∂x (x) = (∂(uᵀh)/∂x₁, . . . , ∂(uᵀh)/∂xₙ)
              = hᵀ(x)[∂u/∂x₁, . . . , ∂u/∂xₙ] + uᵀ(x)[∂h/∂x₁, . . . , ∂h/∂xₙ].  (5.52)
Then

f(t) = f(0) + f⁽¹⁾(0)t + ½ f⁽²⁾(s)t²,

where 0 ≤ s ≤ t. Note that

f(t) = V(φ_t^γ(0)),
f⁽¹⁾(t) = (∂V(φ_t^γ(0))/∂ξ) ξ̇ = (∂V(φ_t^γ(0))/∂ξ) γ(ξ(t)) = L_γ V(φ_t^γ(0)),  (5.54)
f⁽²⁾(t) = (∂f⁽¹⁾(t)/∂ξ) ξ̇ = L_γ f⁽¹⁾(t) = L²_γ V(φ_t^γ(0)).

Therefore, V(φ_t^γ(0)) = V(0) + L_γ V(0)t + ½ L²_γ V(φ_s^γ(0)) t². Given that V(0) = 0
we have

L_γ V(0) = (∂V(x)/∂x) g(x)u(x)|_{x=0} = L_g V(x)u(x)|_{x=0} = hᵀ(0)u(0) = 0.  (5.55)

Thus

V(φ_t^γ(0)) = ½ uᵀ(φ_s^γ(0)) h(φ_s^γ(0)) t².
Recall that a necessary condition for a strictly proper transfer function to be PR is that it
has only zeros in the closed left half plane. The next theorem extends this fact to general
nonlinear systems. A function V : Rⁿ → R₊ is nondegenerate in a neighborhood of
x = 0 if its Hessian matrix ∂²V/∂x² (x) has full rank n in this neighborhood.
Theorem 5.49 ([44]) Assume that the system (5.47) is passive with a C 2 positive-
definite storage function V (·). Suppose that either x = 0 is a regular point or that
V (·) is nondegenerate. Then, the system zero dynamics locally exists at x = 0 and
the system is weakly minimum phase.
The two theorems above show essentially that any passive system with a positive-
definite storage function, under mild regularity assumptions, necessarily has relative
degree {1 . . . 1} at x = 0 and is weakly minimum phase. These two conditions are
shown to be sufficient for a system to be feedback equivalent to a passive system as
stated in the following theorem.
Theorem 5.50 ([44]) Suppose x = 0 is a regular point. Then the system (5.47)
is locally feedback equivalent to a passive system with C 2 storage function V (·)
which is positive definite, if and only if (5.47) has relative degree {1 . . . 1} at
x = 0 and is weakly minimum phase.
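As a toy illustration of feedback equivalence (a scalar example of our own, not from [44]): ẋ = x² + u, y = x has relative degree 1 and empty zero dynamics, so the conditions of Theorem 5.50 hold. The feedback u = α(x) + β(x)v with α(x) = −x², β(x) = 1 gives the closed loop ẋ = v, y = x, which is passive with storage V = x²/2. A quick Euler check of the passivity inequality:

```python
import math

# Closed loop after the passifying feedback u = -x^2 + v: x' = v, y = x.
def simulate_passified(x0, v_fn, T=5.0, dt=1e-3):
    x, supply = x0, 0.0
    for k in range(int(T / dt)):
        v = v_fn(k * dt)
        supply += v * x * dt       # accumulate the supply, integral of v*y
        x = x + dt * v
    return 0.5 * x * x - 0.5 * x0 * x0, supply

gain, supply = simulate_passified(0.3, lambda t: math.cos(t))
print(gain <= supply + 1e-2)   # storage gain bounded by supplied energy
```

Here the feedback simply cancels the non-passive drift; in general α(x) must also stabilize the zero dynamics, which is where the weak-minimum-phase hypothesis enters.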
The above results have been extended to the relative degree zero case in [52]. Specif-
ically one considers systems of the classical form
ẋ(t) = f (x(t)) + g(x(t))u(t), x(0) = x0
(5.57)
y(t) = h(x(t)) + j (x(t))u(t),
with x(t) ∈ Rn , u(t) ∈ Rm , y(t) ∈ Rm , f (·) and g(·) are smooth vector fields, f (0) =
0, h(0) = 0, rank(g(0)) = m. The notion of invertibility will play a role in the next
result, and is therefore introduced now.
Let D(x) be a p × m matrix, the rows of which are linearly independent for all
x ∈ N. Let

H(x) = [ j(x) ; L_g[D(x)h(x)] ]   (the two blocks stacked),
Proposition 5.52 ([52]) Suppose that the system is passive with a C² positive-definite
storage function V(·). Then there is a neighborhood N̂ ⊆ N such that the system is
invertible with relative order 1 for all x ∈ N̂.
Proposition 5.53 ([52]) Consider a system as in (5.57) and assume that it satisfies
the regularity hypotheses of Proposition 5.52. Then there exists a regular static state
feedback which locally transforms the system into a passive system having a C 2
positive-definite storage function, if and only if the system is invertible with relative
order 1 and is weakly minimum phase.
The notion of weak minimum phase for (5.57) is similar to that for systems as in
(5.47), except that the input u ∗ (x) is changed, since the output is changed. The zero
dynamics input is calculated as the unique solution of
H(x)u∗(x) + [ h(x) ; L_f[D(x)h(x)] ] = 0,
and is such that the vector field f ∗ (x) = f (x) + g(x)u ∗ (x) is tangent to the sub-
manifold Z ∗ = {x ∈ N | D(x)h(x) = 0}. The proof of Proposition 5.53 relies on
the cross-term cancellation procedure and a two-term Lyapunov function, so that the
results of Sect. 7.3.3 may be applied to interpret the obtained closed-loop as the neg-
ative feedback interconnection of two dissipative blocks. Further works on feedback
equivalence to a passive system can be found in [45, 53–59]. The adaptive feedback
passivity problem has been analyzed in [60].
Cascaded systems are important systems that appear in many different practical cases.
We will state here some results concerning this type of systems which will be used
later in the book. Consider a cascaded system of the following form:
ζ̇(t) = f₀(ζ(t)) + f₁(ζ(t), y(t))y(t),
ẋ(t) = f(x(t)) + g(x(t))u(t),  (5.58)
y(t) = h(x(t)).
The first dynamics in (5.58), with state ζ, is called the driven system, while the
controlled dynamics {f, g, h}, with state x, is called the driving system.
Theorem 5.55 ([44, Theorem 4.13]) Consider the cascaded system (5.58). Suppose
that the driven system ζ̇ (t) = f 0 (ζ (t)) is globally asymptotically stable, and the
driving system { f, g, h} is (strictly) passive with a C r , r ≥ 1, storage function V (·)
which is positive definite. The system (5.58) is feedback equivalent to a (strictly)
passive system with a C r storage function which is positive definite.
The cascaded system in (5.58) can also be globally asymptotically stabilized using
a smooth control law as is stated in the following theorem for which we need the
following definitions. Concerning the driving system in (5.58), we define the associate
distribution [63, 64]
D = span{ad_f^k g_i | 0 ≤ k ≤ n − 1, 1 ≤ i ≤ m}  (5.59)
Theorem 5.56 ([44, Theorem 5.1]) Consider the cascaded system (5.58). Suppose
that the driven system is globally asymptotically stable, and the driving system is
passive with a C r , r ≥ 1, storage function V (·) which is positive definite and proper.
Suppose that S = {0}. Then the system (5.58) is globally asymptotically stabilizable
by the smooth feedback
where U (·) is a Lyapunov function for the driven system part ζ̇ (t) = f 0 (ζ (t)) of the
cascaded system (5.58).
Some additional comments on the choice of u in (5.61) are given in Sect. 7.3.3, where
the role of cross-term cancellation is highlighted. The mechanism used to prove
that (5.58) in closed-loop with (5.61) can be interpreted as the negative feedback
interconnection of two passive systems was proposed in [65]. Further work on the
stabilization of cascaded systems using dissipativity may be found in [66].
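A minimal sketch in the spirit of Theorem 5.56, on an assumed cascade of our own: ζ̇ = −ζ + ζy (so f₀(ζ) = −ζ, f₁(ζ, y) = ζ, GAS with U(ζ) = ζ²/2), driving system ẋ = u, y = x (a passive integrator with V(x) = x²/2), closed with the cross-term-cancelling feedback u = −(∂U/∂ζ)f₁(ζ, y) − y = −ζ² − y, a standard choice of the kind discussed in Sect. 7.3.3 (not claimed to coincide with (5.61)). It gives Ẇ = −ζ² − y² for W = U + V.

```python
# Euler simulation of the cascade with the cross-term-cancelling feedback.
def simulate_cascade(z0, x0, T=40.0, dt=1e-3):
    z, x = z0, x0
    for _ in range(int(T / dt)):
        y = x
        u = -z * z - y              # u = -(dU/dzeta) * f1(zeta, y) - y
        z, x = z + dt * (-z + z * y), x + dt * u
    return z, x

z_end, x_end = simulate_cascade(1.0, 1.0)
print(abs(z_end) + abs(x_end))   # joint state decays toward zero
```

The cancellation removes the sign-indefinite cross term ζ²y from Ẇ, leaving a negative-definite derivative, which is the mechanism behind the theorem.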
Close links between passive systems and Lyapunov stability have been shown to
exist in the foregoing sections. This section demonstrates that more can be said. E.D.
Sontag has introduced the following notion of input-to-state stability (ISS): given a
system
The continuity in the second item of the definition means that for any sequence {x0,n }
such that limn→+∞ x0,n = x0 and any sequence {u n } such that limn→+∞ u n = u, then
the solution x(t; x0,n , u n ) → x(t; x0 , u) as n → +∞. Then the following holds.
5.8 Input-to-State Stability (ISS) and Dissipativity 391
Theorem 5.58 ([67, 68]) The system (5.62) is ISS if and only if there exist a class
KL function β(·, ·) and two class K functions γ₀(·), γ₁(·) such that

||x(t; x₀, u)|| ≤ β(||x₀||, t) + γ₀( ∫₀ᵗ e^{s−t} γ₁(||u(s)||)ds ).  (5.63)
Let us now define an ISS-Lyapunov function.
Definition 5.59 A differentiable storage function V (·) is an ISS-Lyapunov function
if there exist two functions α1 (·) and α2 (·) of class K∞ such that
for all x ∈ Rn , u ∈ Rm . Equivalently, a storage function with the property that there
exist two class-K functions α(·) and χ (·) such that
where the supply rate w(u, x) = α1 (||u||) − α2 (||x||). The dissipation inequality
(5.67) might be written even if V (·) is not differentiable, using the notion of viscosity
solutions. However, as far as ISS is concerned, the following holds.
Theorem 5.60 ([68]) The system in (5.62) is ISS, if and only if it admits a smooth
ISS-Lyapunov function.
This strong result shows that ISS is more stringent than dissipativity. We recall that
smooth means infinitely differentiable.
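An ISS estimate is easy to visualize on a toy system of our own (an assumption, not from the text): for ẋ = −x + u one has |x(t)| ≤ |x₀|e^{−t} + sup|u|, i.e., β(r, t) = re^{−t} and γ(r) = r, which a crude Euler run confirms.

```python
import numpy as np

# Trajectory of x' = -x + u under a bounded input.
def trajectory(x0, u_fn, T=20.0, dt=1e-3):
    n = int(T / dt)
    xs = np.empty(n + 1)
    xs[0] = x0
    for k in range(n):
        xs[k + 1] = xs[k] + dt * (-xs[k] + u_fn(k * dt))
    return xs

x0, sup_u = 5.0, 0.5
xs = trajectory(x0, lambda t: sup_u * np.sin(3 * t))
t = np.arange(len(xs)) * 1e-3
bound = abs(x0) * np.exp(-t) + sup_u         # beta(|x0|, t) + gamma(sup|u|)
print(bool(np.all(np.abs(xs) <= bound + 1e-2)))
```

The transient is dominated by the decaying β-term, while the input contributes only the constant γ(sup|u|) offset, exactly the shape of an ISS bound.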
Example 5.61 ([67]) Consider ẋ(t) = −x³(t) + x²(t)u₁(t) − x(t)u₂(t) + u₁(t)u₂(t).
When u₁ and u₂ are zero, the origin x = 0 is globally asymptotically stable. This can
be easily checked with the Lyapunov function V(x) = x²/2.
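The GAS claim for u₁ = u₂ = 0 can be checked numerically (an Euler sketch of our own; V(x) = x²/2 gives V̇ = −x⁴, so convergence is only polynomial in time).

```python
# With u1 = u2 = 0, Example 5.61 reduces to x' = -x^3, whose exact
# solution is x(t) = x0 / sqrt(1 + 2 x0^2 t).
def simulate_cubic(x0, T=100.0, dt=1e-3):
    x = x0
    for _ in range(int(T / dt)):
        x = x + dt * (-x ** 3)
    return x

x_end = simulate_cubic(2.0)
exact = 2.0 / (1.0 + 2.0 * 4.0 * 100.0) ** 0.5
print(abs(x_end - exact))   # small: Euler tracks the algebraic decay
```

The slow t^{-1/2} decay rate is typical of systems whose Lyapunov derivative vanishes faster than quadratically at the origin.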
Let us now introduce a slightly different property known as the integral ISS (in short
iISS).
Definition 5.62 The system in (5.62) is iISS provided that there exist functions α(·)
and γ (·) of class K∞ , and a function β(·, ·) of class K L such that
α(||x(t)||) ≤ β(||x₀||, t) + ∫₀ᵗ γ(||u(s)||)ds.  (5.68)
Notice that V̇(x(t), u(t)) = (∂V/∂x)(x(t)) f(x(t), u(t)). The fact that (5.69) is a dissipation
inequality (in its infinitesimal form) with supply rate w(x, u) = −α(||x||) +
γ(||u||) is obvious. Since every class K∞ function is also positive definite, an ISS-
Lyapunov function is also an iISS-Lyapunov function. But the converse is not true.
Similarly to Theorem 5.60, one has the following.
Theorem 5.64 ([68]) The system in (5.62) is iISS if and only if it admits a smooth
iISS-Lyapunov function.
Example 5.65 Let us present an example of a scalar system that is not ISS but is
iISS. Consider
This system is not ISS because the input u(t) ≡ π/2 gives unbounded trajectories. But
it is iISS. Indeed, choose V(x) = x tan⁻¹(x). Then

V̇(x(t), u(t)) ≤ −(tan⁻¹(|x(t)|))² + 2|u(t)|,  (5.71)
Theorem 5.66 The system in (5.62) is iISS if and only if the uncontrolled system
ẋ(t) = f (x(t), 0) has a globally asymptotically stable fixed point x = 0 and there
is a smooth storage function V (·) such that for some function σ (·) of class K∞
Let us now state a result on ISS in which zero-state detectability (Definition 5.22)
intervenes.
Theorem 5.67 ([68]) A system is iISS, if and only if there exists a continuous output
function y = h(x), h(0) = 0, which provides zero-state detectability and dissipativity
in the following sense: there exists a storage function V (·) and a function σ (·) of
class K∞ , a positive-definite function α(·) so that
The next results may be seen as a mixture of results between the stability of feedback
interconnections as in Fig. 5.2, the ISS property, and quasi-dissipative systems. Two
definitions are needed before stating the results.
Definition 5.68 A dynamical system ẋ(t) = f (x(t), u(t)), y(t) = h(x(t)), with
f (·, ·) and h(·) locally Lipschitz functions, is strongly finite-time detectable if there
exists a time t > 0 and a function κ(·) of class K∞ such that for any x0 ∈ Rn and
for any u ∈ U the following holds:
∫₀ᵗ (uᵀ(s)u(s) + yᵀ(s)y(s))ds ≥ κ(||x₀||).  (5.74)
This definition is closely related to the ISS with respect to a compact invariant set.
However, ISUB implies only boundedness, not stability, and is therefore a weaker
property. The next proposition is an intermediate result which we give without proof.
Proposition 5.70 ([7]) Suppose that the system ẋ(t) = f(x(t), u(t)), y(t) = h(x(t))
has uniform finite power gain, with a locally bounded radially unbounded storage
function, and is strongly finite-time detectable. Then it is ISUB.
The definition of a finite power gain is in Definition 5.9. Then we have the following.
Theorem 5.71 ([7]) Suppose that each of the subsystems H1 and H2 in Fig. 5.2 has
the dynamics ẋi (t) = f i (xi (t), u i (t)), yi (t) = h i (xi (t)), i = 1, 2, and is
• Quasi-dissipative with general supply rate wi (u i , yi ), with a locally bounded radi-
ally unbounded storage function,
• Strongly finite-time detectable.
Suppose that there exists ρ > 0 such that the matrix Q ρ in (5.10) is negative definite.
Then the feedback system is ISUB.
Proof From Proposition 5.10, one sees that the feedback system has uniform finite
power gain. Suppose that V1 (·) and V2 (·) are locally bounded radially unbounded stor-
age functions for H1 and H2 , respectively. Then V1 (·) + ρV2 (·) is a locally bounded
radially unbounded storage function of the feedback system. Let us now show that
the feedback system is strongly finite-time detectable. We have

∫₀^{t₁} [r₁ᵀ(s)r₁(s) + y₂ᵀ(s)y₂(s) + y₁ᵀ(s)y₁(s)]ds
  ≥ ∫₀^{t₁} [u₁ᵀ(s)u₁(s) + y₁ᵀ(s)y₁(s)]ds ≥ κ₁(||x₁(0)||),  (5.76)

and

∫₀^{t₂} [r₂ᵀ(s)r₂(s) + y₂ᵀ(s)y₂(s) + y₁ᵀ(s)y₁(s)]ds
  ≥ ∫₀^{t₂} [u₂ᵀ(s)u₂(s) + y₂ᵀ(s)y₂(s)]ds ≥ κ₂(||x₂(0)||),  (5.77)

for some t₁ > 0, t₂ > 0, and κ₁(·), κ₂(·) ∈ K∞. Combining (5.76) and (5.77) we
obtain

∫₀ᵗ [r₁ᵀ(s)r₁(s) + r₂ᵀ(s)r₂(s) + y₂ᵀ(s)y₂(s) + y₁ᵀ(s)y₁(s)]ds
  ≥ ½[κ₁(||x₁(0)||) + κ₂(||x₂(0)||)] ≥ κ(max{||x₁(0)||, ||x₂(0)||}),  (5.78)
The literature on ISS is abundant, and our objective in this section was just
to mention the connections with dissipativity. The interested reader should have a
look at [68] and the bibliography therein to realize the richness of this field.
Consider now the linear time-delay system

\dot{x}(t) = Ax(t) + A_1 x(t - \tau) + Bu(t), \qquad y(t) = Cx(t), \qquad (5.79)
where x(t) ∈ Rⁿ, y(t) ∈ Rᵖ, u(t) ∈ Rᵐ are the state, the output, and the input of the system, and τ > 0 denotes the delay. The matrices A ∈ R^{n×n}, A₁ ∈ R^{n×n}, and B ∈ R^{n×m}
are constant. Time-delay systems may be seen as infinite-dimensional systems. In
particular, the initial data for (5.79) is a continuous function φ : [−τ, 0] → Rⁿ, endowed with the uniform convergence topology (i.e., \|\phi\| = \sup_{-\tau \leq \theta \leq 0} \|\phi(\theta)\|). The initial
396 5 Stability of Dissipative Systems
condition is then denoted as x(t0 + θ ) = φ(θ ) for all θ ∈ [−τ, 0]. There exists a
unique continuous solution [72, Theorem 2.1] which depends continuously on the
initial data (x(0), φ) in the following sense: the solution of (5.79) is denoted as
x_t(\theta) = \begin{cases} x(t+\theta) & \text{if } t+\theta \geq 0 \\ \phi(t+\theta) & \text{if } -\tau \leq t+\theta \leq 0, \end{cases} \qquad (5.80)
with θ ∈ [−τ, 0]. Let {φn (·)}n≥0 be a sequence of functions that converge uniformly
toward φ(·). Then xn (0) → x(0), and xt,n (·) converges uniformly toward xt (·). The
transfer function of the system in (5.79) is given by G(\lambda) = C(\lambda I_n - A - A_1 e^{-\lambda\tau})^{-1}B, with \lambda \in \rho(A + A_1 e^{-\lambda\tau}) \subset \mathbb{C}, where \rho(M) = \{\lambda \in \mathbb{C} \mid \lambda I_n - M \text{ is full rank}\} for M ∈ R^{n×n} [73].
The main result of this section, Lemma 5.72, establishes a passivity-like inequality for (5.79), provided P and S satisfy the condition (5.81), in terms of the Lyapunov–Krasovskii functional
V(x(t), t) = x^T(t)Px(t) + \int_{t-\tau}^{t} x^T(s)Sx(s)\,ds. \qquad (5.83)
Remark 5.73 Note that the system (5.79) is passive only if γ = 0. Roughly speaking,
for γ > 0, we may say that the system (5.79) is less than output strictly passive. This gives us an extra degree of freedom for choosing P and S in (5.81), since the inequality in (5.81)
becomes more restrictive for γ = 0. We can expect to be able to stabilize the system
(5.79) using an appropriate passive controller as will be seen in the next section.
Note that for γ < 0 the system is output strictly passive but this imposes stronger
restrictions on the system (see (5.81)).
5.9 Passivity of Linear Delay Systems 397
2\int_0^t u^T(s)y(s)\,ds = 2\int_0^t u^T(s)Cx(s)\,ds = 2\int_0^t u^T(s)B^T Px(s)\,ds
= \int_0^t u^T(s)B^T Px(s)\,ds + \int_0^t x^T(s)PBu(s)\,ds
= \int_0^t \Big[\Big(\frac{dx}{ds} - Ax(s) - A_1 x(s-\tau)\Big)^T Px(s) + x^T(s)P\Big(\frac{dx}{ds} - Ax(s) - A_1 x(s-\tau)\Big)\Big]\,ds \qquad (5.84)
= \int_0^t \Big[\frac{d(x^T(s)Px(s))}{ds} - x^T(s)(A^T P + PA)x(s) - x^T(s-\tau)A_1^T Px(s) - x^T(s)PA_1 x(s-\tau)\Big]\,ds
= \int_0^t \Big[\frac{dV(s)}{ds} - x^T(s)\Gamma x(s) + I(x(s), x(s-\tau))\Big]\,ds,
Note that V (x, t) is a positive-definite function and I (x(t), x(t − τ )) ≥ 0 for all the
trajectories of the system. Thus from (5.82) and (5.84) it follows that
\int_0^t u^T(s)y(s)\,ds \;\geq\; \tfrac{1}{2}\big[V(x(t), t) - V(x(0), 0)\big] - \tfrac{1}{2}\int_0^t x^T(s)\Gamma x(s)\,ds
\;\geq\; \tfrac{1}{2}\big[V(x(t), t) - V(x(0), 0)\big] - \tfrac{1}{2}\gamma\int_0^t x^T(s)C^T Cx(s)\,ds
\;\geq\; -\tfrac{1}{2}V(x(0), 0) - \tfrac{1}{2}\gamma\int_0^t y^T(s)y(s)\,ds, \quad \text{for all } t > 0. \qquad (5.85)
Therefore, if γ = 0 then the system is passive.
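The passivity estimate (5.85) can be probed numerically. The sketch below simulates a scalar delay system by forward Euler and checks that the supplied energy ∫₀ᵀ u(s)y(s)ds stays above −½V(x(0),0). The coefficient values, the choice c = bP (the scalar analogue of C = BᵀP), and the Lyapunov–Krasovskii parameters P, S are illustrative assumptions, not taken from the text.

```python
import math

# Forward-Euler check of (5.85) for the scalar delay system
#   xdot(t) = a*x(t) + a1*x(t - tau) + b*u(t),  y(t) = c*x(t),
# under the scalar analogues of the assumed conditions: c = b*P, and
# Gamma = 2*a*P + S + (P*a1)**2/S = -2.75 <= 0 (so gamma = 0 here).
A_, A1_, B_, C_, TAU = -2.0, 0.5, 1.0, 1.0, 0.5
P_, S_ = 1.0, 1.0

def passivity_gap(u_func, phi=1.0, T=5.0, dt=1e-3):
    """Return (int_0^T u*y ds, -V(x(0),0)/2) for constant initial history phi."""
    d = int(round(TAU / dt))          # delay expressed in steps
    hist = [phi] * (d + 1)            # x on [-tau, 0]
    x, supplied = phi, 0.0
    for k in range(int(round(T / dt))):
        u, y = u_func(k * dt), C_ * x
        supplied += u * y * dt
        x += dt * (A_ * x + A1_ * hist[k] + B_ * u)   # hist[k] = x(t - tau)
        hist.append(x)
    v0 = P_ * phi**2 + S_ * phi**2 * TAU              # V(x(0),0) from (5.83)
    return supplied, -0.5 * v0
```

For instance, `passivity_gap(math.sin)` returns a supplied-energy value that remains above the lower bound −½V(x(0),0) = −0.75, as (5.85) predicts for γ = 0.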
Let us consider the block interconnection depicted in Fig. 5.1, where H1 represents
the system (5.79) and H2 is the controller which is input strictly passive as defined
above, i.e., for some ε > 0
\int_0^t u_2^T(s)y_2(s)\,ds \;\geq\; -\beta_2 + \varepsilon \int_0^t u_2^T(s)u_2(s)\,ds \qquad (5.86)

for some β₂ ∈ R and for all t ≥ 0. The system H₂ can be a finite-dimensional linear
system, for example. For the sake of simplicity, we will consider H2 to be an asymp-
totically stable linear system. We will show next that the controller satisfying the
above property will stabilize the system (5.79). From Lemma 5.72, the interconnec-
tion scheme, and (5.86), we have
u_1 = u, \quad y_1 = y, \qquad u_2 = y_1, \quad y_2 = -u_1. \qquad (5.87)
where \mathcal{A} denotes a distribution of order 0 with compact support [−τ, 0]. Let us choose

\mathcal{A}(\theta) = A\delta(\theta) + A_1\delta(\theta - \tau_1) + A_2(\theta), \qquad (5.89)
where δ(θ ) represents the Dirac delta functional and A2 (θ ) is a piecewise continuous
function. Due to the term A2 (θ ) the system has a distributed delay. For the sake of
simplicity, we shall consider A2 (θ ) constant. The system (5.88) becomes
\dot{x}(t) = Ax(t) + A_1 x(t - \tau_1) + \int_{-\tau}^{0} A_2 x(t+\theta)\,d\theta + Bu(t), \qquad y(t) = Cx(t). \qquad (5.90)
Some details on the well-posedness of such state delay systems are provided in
Sect. A.8 in the Appendix.
where
V(x(t), t) = x^T(t)Px(t) + \int_{t-\tau_1}^{t} x^T(s)S_1 x(s)\,ds + \int_{-\tau}^{0}\Big(\int_{t+\theta}^{t} x^T(s)S_2 x(s)\,ds\Big)d\theta. \qquad (5.93)
Proof We shall use the same steps as in the proof of Lemma 5.72. Thus from (5.90)
and the above conditions, we have
2\int_0^t u^T(s)y(s)\,ds = 2\int_0^t u^T(s)Cx(s)\,ds = 2\int_0^t u^T(s)B^T Px(s)\,ds = \int_0^t u^T(s)B^T Px(s)\,ds + \int_0^t x^T(s)PBu(s)\,ds
= \int_0^t \Big[\Big(\frac{dx}{ds} - Ax(s) - A_1x(s-\tau_1) - \int_{-\tau}^{0} A_2x(s+\theta)\,d\theta\Big)^T Px(s) + x^T(s)P\Big(\frac{dx}{ds} - Ax(s) - A_1x(s-\tau_1) - \int_{-\tau}^{0} A_2x(s+\theta)\,d\theta\Big)\Big]\,ds. \qquad (5.94)

We also have

2\int_0^t u^T(s)y(s)\,ds = \int_0^t \Big[\frac{d(x^T(s)Px(s))}{ds} - x^T(s)(A^T P + PA)x(s) - x^T(s-\tau_1)A_1^T Px(s) - x^T(s)PA_1x(s-\tau_1) - x^T(s)P\int_{-\tau}^{0} A_2x(s+\theta)\,d\theta - \Big(\int_{-\tau}^{0} x^T(s+\theta)A_2^T\,d\theta\Big)Px(s)\Big]\,ds \qquad (5.95)
= \int_0^t \Big[\frac{dV(s)}{ds} - x^T(s)\Gamma(\tau)x(s) + I_1(x(s), x(s-\tau_1)) + I_2(x(s), x(s+\theta))\Big]\,ds,
I_2(x(t), x(t+\theta)) = \int_{-\tau}^{0} \big(S_2^{-1}A_2^T Px(t) - x(t+\theta)\big)^T S_2 \big(S_2^{-1}A_2^T Px(t) - x(t+\theta)\big)\,d\theta. \qquad (5.97)
Note that V (·) is a positive-definite function and I1 (x(t), x(t − τ1 )) ≥ 0 and I2 (x(t),
x(t + θ)) ≥ 0 for all the trajectories of the system. Thus from (5.91) and (5.95), it
follows that
\int_0^t u^T(s)y(s)\,ds \;\geq\; \tfrac{1}{2}\big[V(x(t), t) - V(x(0), 0)\big] - \tfrac{1}{2}\int_0^t x^T(s)\Gamma(\tau)x(s)\,ds
\;\geq\; \tfrac{1}{2}\big[V(x(t), t) - V(x(0), 0)\big] - \tfrac{1}{2}\gamma\int_0^t x^T(s)C^T Cx(s)\,ds
\;\geq\; -\tfrac{1}{2}V(x(0), 0) - \tfrac{1}{2}\gamma\int_0^t y^T(s)y(s)\,ds \quad \text{for all } t > 0. \qquad (5.98)
Therefore if γ = 0, the system is passive.
Remark 5.75 The presence of a distributed delay term in the system (5.90) imposes extra constraints on the solution of inequality (5.91). Note that for τ = 0 we recover the previous case, with only a pointwise state delay. Extensions of the result presented
in this section can be found in [74]. Other work may be found in [75–80]. The
passification of time-delay systems with an observer-based dynamic output feedback
is considered in [81]. Results for systems with delay both in the state and the input
may be found in [82]. The stability and L 2 -gain of a class of switched systems with
delay with time-continuous solutions have been analyzed in [83].
Remark 5.76 Note also that, since the system (5.90) satisfies the inequality (5.98), it can be stabilized by an input strictly passive system as described in the previous section. Furthermore, due to the form of the Riccati equation, the upper bound for the (sufficient) distributed delay τ (seen as a parameter) may be improved by feedback interconnection for the same Lyapunov-based construction. Such a result does not contradict the theory, since the derived condition is only sufficient, and not necessary and sufficient.
Let us end this section on time-delay systems by noting that the absolute stability
problem for systems of the form
\begin{cases} \dot{x}(t) = Ax(t) + Bx(t-\tau) + Dw(t) \\ y(t) = Mx(t) + Nx(t-\tau) \\ w(t) = -\phi(t, y(t)) \end{cases} \qquad (5.99)
has been studied in [84], with initial data x(θ) = ψ(θ) for all θ ∈ [−τ, 0], where τ > 0 is the constant delay and φ : R₊ × Rᵐ → Rᵐ is a static nonlinearity, piecewise continuous in t and globally Lipschitz continuous in y. This nonlinearity satisfies the sector condition (φ(t, y(t)) − K₁y(t))ᵀ(φ(t, y(t)) − K₂y(t)) ≤ 0, where K₁ and K₂ are constant matrices of appropriate dimensions and K = K₁ − K₂ is symmetric positive definite. Thus the nonlinearity belongs to the sector [K₁, K₂]. The following result holds.
Proposition 5.77 ([84]) For a given scalar τ > 0, the system (5.99) is globally uniformly asymptotically stable for any nonlinear connection in the sector [0, K] if there exist a scalar ε ≥ 0 and real matrices P ≻ 0, Q ⪰ 0, R ≻ 0 such that

\begin{pmatrix} A^T P + PA + Q - R & PB + R & PD - \varepsilon M^T K^T & \tau A^T R \\ (PB + R)^T & -Q - R & -\varepsilon N^T K^T & \tau B^T R \\ (PD - \varepsilon M^T K^T)^T & (-\varepsilon N^T K^T)^T & -2\varepsilon I_m & \tau D^T R \\ (\tau A^T R)^T & (\tau B^T R)^T & (\tau D^T R)^T & -R \end{pmatrix} \prec 0. \qquad (5.100)
Theorem A.65 could be used to state an equivalent Riccati inequality. Other works
on absolute stability of time-delay systems can be found in [85–93].
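The sector condition above is easy to test numerically for a given nonlinearity. The sketch below does so in the scalar case with K₁ = 0 and K₂ = K > 0 (sector [0, K]); the saturation and the cubic are illustrative choices, not taken from the text.

```python
# Scalar sector test: phi lies in the sector [k1, k2] iff
# (phi(y) - k1*y) * (phi(y) - k2*y) <= 0 for all y (checked on a grid).
def sat(y, level=1.0):
    """Unit-slope saturation: phi(y) = clamp(y, -level, level)."""
    return max(-level, min(level, y))

def in_sector(phi, k1, k2, ys):
    """True if the sector product is nonpositive on the sample grid ys."""
    return all((phi(y) - k1 * y) * (phi(y) - k2 * y) <= 1e-12 for y in ys)
```

The saturation lies in the sector [0, 1] (the product vanishes on the linear part and is negative in saturation), while the cubic y³ escapes every linear sector bound for large |y|.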
In this section, we first briefly recall basic results on H∞ control of linear time-invariant systems; then we briefly review the nonlinear case, and we finish with an extension of the finite power gain notion. It has already been seen in the case of
linear time-invariant systems that there exists a close relationship between bounded
realness and positive realness, see e.g., Theorems 2.28, 2.53 and 2.54. Here, we
investigate similar properties, starting from the so-called Bounded Real Lemma.
5.10.1 Introduction
\|H\|_\infty = \sup_{u(s)\in\mathcal{H}_2} \frac{\|y(s)\|_2}{\|u(s)\|_2} = \sup_{\omega\in\mathbb{R}} \sigma_{\max}(H(j\omega)) = \sup_{u(t)\neq 0} \frac{\|y(t)\|_2}{\|u(t)\|_2}. \qquad (5.101)

Let us recall the following, known as the Bounded Real Lemma.
Lemma 5.78 (Bounded Real Lemma) Consider the system ẋ(t) = Ax(t) + Bu(t),
y(t) = C x(t). Let (A, B) be controllable and (A, C) be observable. The following
statements are equivalent:
• ||H||∞ ≤ 1.
• The Riccati equation AᵀP + PA + PBBᵀP + CᵀC = 0 has a solution P = Pᵀ ≻ 0.
The proof may be found in [94]. The following version of the so-called Strict Bounded
Real Lemma has been introduced in [95]; it is a strengthened version of the Bounded
Real Lemma.
Lemma 5.79 (Strict Bounded Real Lemma [95, Theorem 2.1]) Consider the system
ẋ(t) = Ax(t) + Bu(t), y(t) = C x(t). The following statements are equivalent:
1. A is asymptotically stable and ||H||∞ < 1.
2. There exists a matrix P̄ = P̄ᵀ ≻ 0 such that AᵀP̄ + P̄A + P̄BBᵀP̄ + CᵀC ≺ 0.
3. The Riccati equation AᵀP + PA + PBBᵀP + CᵀC = 0 has a solution P = Pᵀ ≻ 0 with A + BBᵀP asymptotically stable.
A similar result (for items 1 and 2) can be found in [96, Lemma 2.2] for the case
D = 0, who proves the equivalence between items 1 and 2, while the proof of [95,
Theorem 2.1] shows 1 ⇒ 2 ⇒ 3 ⇒ 1. Complete developments can also be found
in [97, Sect. 3.7] who prove the equivalence between 1 and 3. The Strict Bounded
Real Lemma therefore makes no controllability nor observability assumptions, but
it requires stability. In order to make the link with the bounded realness of rational
functions as introduced in Definition 2.29, let us recall that a transfer function H (s) ∈
Cm×m is bounded real if and only if all the elements of H (s) are analytic in Re(s) ≥ 0
and ||H||∞ ≤ γ, or equivalently γ²I_m − Hᵀ(−jω)H(jω) ⪰ 0 for all ω ∈ R, γ > 0
(this is a consequence of the || · ||∞ norm definition as the maximal singular value,
see also Definition 2.52, and [98, Lemma 8.4.1]4 ). Thus we replace the upperbound 1
in Definition 2.29 by γ . The transfer function H (s) is said to be strictly bounded real
if there exists ε > 0 such that H(s − ε) is bounded real. It is strongly bounded real if it is bounded real and γ²I_m − DᵀD ≻ 0, where D = H(∞). An extension of the
Strict Bounded Real Lemma toward time-varying linear systems is made in [99–101]
[102, Lemma 6]. Roughly speaking, Riccati equations are replaced by differential
Riccati equations. Extension to systems with exogenous state jumps is proposed in
[103].
⁴ Let A ∈ C^{n×n} be Hermitian; then A ⪯ γI_n if and only if λ_max(A) ≤ γ, and A ≺ γI_n if and only if λ_max(A) < γ.
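The Strict Bounded Real Lemma 5.79 can be illustrated on a first-order system, where both the H∞ norm and the Riccati solution are computable by hand. The sketch below uses illustrative values (a = 2, b = c = 1, so ||H||∞ = 1/2 < 1), not an example from the text.

```python
import math

# Scalar illustration of Lemma 5.79 for xdot = -a*x + b*u, y = c*x,
# i.e. H(s) = c*b/(s + a) with a > 0.  ||H||_inf = c*b/a (peak at w = 0),
# and the Riccati equation -2*a*P + b**2*P**2 + c**2 = 0 has a real root
# P > 0 with A + B*B^T*P = -a + b**2*P stable exactly when ||H||_inf < 1.
a, b, c = 2.0, 1.0, 1.0

def hinf_norm(n_points=20000, w_max=100.0):
    """Frequency sweep of |H(j w)| = c*b/sqrt(w^2 + a^2)."""
    return max(c * b / math.hypot(k * w_max / n_points, a)
               for k in range(n_points + 1))

def riccati_solution():
    """Smallest root of b^2*P^2 - 2*a*P + c^2 = 0 (item 3 of Lemma 5.79)."""
    disc = a * a - (b * c) ** 2          # real roots iff c*b <= a
    return (a - math.sqrt(disc)) / (b * b)
```

With these values the sweep returns 0.5 and the Riccati root is P = 2 − √3 > 0, with −a + b²P = −√3 < 0, matching items 1 and 3 of the lemma.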
5.10 Linear and Nonlinear H∞ , Bounded Real Lemmas 403
The above lemmas extend to the case where a direct feedthrough matrix D ≠ 0 is present. To start with, it is noteworthy that another classical form of the Bounded
Real Lemma exists, which makes use of an LMI that is the counterpart of the KYP
Lemma LMI [104, pp. 308–309]. We recall it here for convenience.
Lemma 5.80 (Bounded Real Lemma) Let G(s) ∈ C^{m×m} be a matrix transfer function with G(∞) = D bounded,⁵ and let (A, B, C, D) be a minimal realization of G(s). Then G(s) is bounded real if and only if there exist real matrices P = Pᵀ ≻ 0, L, W, such that

\begin{cases} A^T P + PA = -C^T C - LL^T \\ -PB = C^T D + LW \\ I_m - D^T D = W^T W. \end{cases} \qquad (5.103)
A factorization as in (3.3) is possible. The next proposition gathers results for both
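In the scalar case the factorization (5.103) can be carried out explicitly: eliminating L and W leaves a quadratic in P. The sketch below does this for illustrative values of (a, b, c, d) chosen so that ||H||∞ < 1; the numbers are assumptions, not an example from the text.

```python
import math

# Scalar instance of (5.103) for xdot = -a*x + b*u, y = c*x + d*u.
# Substituting W = sqrt(1 - d^2) and L = -(P*b + c*d)/W into the first
# equation gives b^2*P^2 + (2*b*c*d - 2*a*(1 - d^2))*P + c^2 = 0;
# any real root P > 0 yields an (L, W) pair solving all three equations.
a, b, c, d = 1.0, 1.0, 0.3, 0.5

def brl_factorization():
    """Return (P, L, W) solving the scalar version of (5.103)."""
    W = math.sqrt(1.0 - d * d)
    qb = 2.0 * b * c * d - 2.0 * a * (1.0 - d * d)
    disc = qb * qb - 4.0 * (b * b) * (c * c)
    P = (-qb - math.sqrt(disc)) / (2.0 * b * b)   # smaller root
    L = -(P * b + c * d) / W                      # from -P*b = c*d + L*W
    return P, L, W
```

The two real roots of the quadratic correspond to the minimal and maximal solutions mentioned below (Proposition 5.81).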
bounded real and strict bounded real matrix transfer functions.
Proposition 5.81 ([105, Proposition 5.1]) Let (A, B, C, D) be such that I_m − DᵀD ≻ 0, and G(s) = C(sI_n − A)⁻¹B + D. Then
1. I_m − Gᵀ(−jω)G(jω) ⪰ 0 for all ω ∈ R, if and only if there exists P = Pᵀ ⪰ 0 such that

A^T P + PA + (PB + C^T D)(I_m - D^T D)^{-1}(B^T P + D^T C) + C^T C \preceq 0.
The solutions P_min and P_max define the available storage and the required supply of the system, respectively [106], with the supply rate w(u, y) = uᵀu − yᵀy. In [104, Theorem 7.3.6], the equivalence between the LMI in Lemma 5.80 and Riccati inequalities is proved. It uses a nontrivial definition of the matrices L and W in (5.103). It is interesting to recall here Theorem 3.75, which deals with dissipative LTI systems with D + Dᵀ ≻ 0, and therefore with a different kind of Riccati equation, which also possesses minimum and maximum solutions, as well as stabilizing solutions.
The following holds.

Lemma 5.82 The transfer matrix H(s) = C(sI_n − A)⁻¹B + D of the system (A, B, C, D) is asymptotically stable and has an H∞-norm ||H||∞ < γ for some γ > 0, if and only if there exists a matrix P = Pᵀ ≻ 0 such that

\begin{pmatrix} A^T P + PA & PB & C^T \\ B^T P & -\gamma^2 I_m & D^T \\ C & D & -I_m \end{pmatrix} \prec 0. \qquad (5.105)
Let us provide a sketch of the proof, insisting on the manipulations which allow one
to navigate between Riccati inequalities, and LMIs.6 Using Theorem A.65, the LMI
(5.105) is found to be equivalent to
A^T P + PA - (PB + C^T D)(D^T D - \gamma^2 I_m)^{-1}(B^T P + D^T C) + C^T C \prec 0, \qquad (5.106)

and AᵀP + PA ≺ 0, from which we infer that if P = Pᵀ ≻ 0 then A is a Hurwitz matrix. The inverse in (5.106) is well defined, since the LMI implies that the matrix

\begin{pmatrix} -\gamma^2 I_m & D^T \\ D & -I_m \end{pmatrix} \prec 0

and thus is full rank, so that using again Theorem A.65 it is inferred that −γ²I_m + DᵀD ≺ 0 ⇔ DᵀD ≺ γ²I_m. This Riccati inequality tells us that the system is dissipative with respect to the H∞ supply rate w(u, y) = γ²uᵀu − yᵀy. This can be checked using, for instance, the KYP Lemma 4.100 with the right choice of the matrices Q, R, and S. Using Theorem A.65, one can further deduce that the Riccati inequality is equivalent to the LMI: find P = Pᵀ ≻ 0 such that

\begin{pmatrix} A^T P + PA + C^T C & PB + C^T D \\ B^T P + D^T C & D^T D - \gamma^2 I_m \end{pmatrix} \prec 0. \qquad (5.107)
The equivalence between the LMI in (5.105) and the LMI in (5.107) can be shown
using once again Theorem A.65, considering this time the Schur complement of the
matrix −Im in (5.105). We once again stress the fundamental role played by Theorem
A.65. The main result of this part is summarized as follows.
The existence of P = Pᵀ ≻ 0 such that

A^T P + PA + (B^T P + D^T C)^T(\gamma^2 I_m - D^T D)^{-1}(B^T P + D^T C) + C^T C \prec 0

implies that the system (A, B, C, D) is strictly dissipative with respect to the supply rate γ²uᵀu − yᵀy, which in turn implies that ||H||∞ < γ.
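The Schur-complement equivalence between the LMI (5.105) and the Riccati inequality (5.106) can be checked on a concrete scalar instance, where negative definiteness of a 3×3 matrix reduces to leading-minor tests. All numerical values below are illustrative assumptions.

```python
# Scalar check (n = m = 1): the 3x3 LMI (5.105) is negative definite,
# D^T D < gamma^2 holds, and the Riccati inequality (5.106) is satisfied,
# for the same P -- illustrating the Schur complement argument above.
A, B, C, D, GAMMA, P = -1.0, 1.0, 0.5, 0.2, 1.0, 0.2

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def lmi_neg_def():
    """Check M in (5.105) is negative definite: leading minors of -M > 0."""
    M = [[2 * A * P, P * B,       C   ],
         [P * B,     -GAMMA ** 2, D   ],
         [C,         D,           -1.0]]
    N = [[-e for e in row] for row in M]
    return (N[0][0] > 0
            and N[0][0] * N[1][1] - N[0][1] * N[1][0] > 0
            and det3(N) > 0)

def riccati_lhs():
    """Left-hand side of the Riccati inequality (5.106)."""
    return 2 * A * P - (P * B + C * D) * (B * P + D * C) / (D * D - GAMMA ** 2) + C * C
```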
Letting D → 0 and γ = 1, one recovers the Riccati equation for (A, B, C) in Lemma
5.79. The strict LMI in (5.105) is used in [107] for static output feedback H∞ control.
A generalization of the LMI in (5.105) for strict Q S R-dissipative systems as in
A^T P + PA + (B^T P + D^T C)^T(\gamma^2 I_m - D^T D)^{-1}(B^T P + D^T C) + R = 0. \qquad (5.110)

From Proposition A.67, the set of equations in (5.109) is equivalent to the LMI

\begin{pmatrix} A^T P + PA + \varepsilon P + C^T C & PB + C^T D \\ D^T C + B^T P & D^T D - \gamma^2 I_m \end{pmatrix} \preceq 0.
The Riccati equations in Theorems 5.83 and 5.84 can be deduced from Lemma
A.66, see also the results presented in Sect. 3.12.2. Notice that the Riccati equa-
tions are not identical from one theorem to the other, since the considered sup-
ply rates differ: the first one concerns the H∞ supply rate, while the second
one concerns the passivity supply rate. The exponential dissipativity can also be
expressed via the existence of a storaget function, and the dissipation inequality is
then exp(εt)V (x(t)) − V (x(0)) ≤ 0 exp(ετ )u T (τ )y(τ )dτ , for all t ≥ 0. If V (·) is
continuously differentiable, then the infinitesimal form of the dissipation inequality is
V̇ (x(t)) + εV (x(t)) ≤ u T (t)y(t) for all t ≥ 0. Another definition of exponential dis-
sipativity has been introduced in [53], which is strict passivity (Definition 4.54) with
the storage functions that satisfy α1 ||x||2 ≤ V (x) ≤ α2 ||x||2 and α3 ||x||2 ≤ S(x)
for some α1 > 0, α2 > 0, α3 > 0. Such a definition was motivated by a result of
Krasovskii [109]. If a system is exponentially dissipative in this sense, then the
uncontrolled system is exponentially stable. The definition in Theorem 5.84 is more
general since the exponential dissipativity implies the strict dissipativity: in case the
storage function satisfies α1 ||x||2 ≤ V (x) ≤ α2 ||x||2 then the second condition is
also satisfied with S(x) = V (x). The exponential finite gain property has been used
in [83, 110] to study the stability of switched systems with delay and time-continuous
solutions.
Notice that the material presented in Sect. 3.11.3 finds application in the H∞ prob-
lem, via the so-called four-block Nehari problem. This may be seen as an extension
of the Bounded Real Lemma; see [111, Lemma 2, Theorem 3]. Further results on
H∞ control in the so-called behavioral framework may be found in [112].
Remark 5.85 (Finite L p -gain) A system has a finite L p -gain if it is dissipative with
respect to a supply rate of the form
for some γ > 0, δ > 0. It is noteworthy that such supply rates satisfy the condition
2 in Lemma 5.23 in a strong sense since w(0, y) < 0 for all y = 0.
The paper [113] concerns the standard H∞ problem and relationships between LMI,
ARE, ARI, and is worth reading.
Let us make an aside on the discrete-time version of bounded real transfer matrices,
and the corresponding discrete-time Bounded Real Lemma.
Definition 5.86 ([114, 115]) Let H(z) be an n × m (n ≥ m) transfer matrix. Then H(z) is said to be discrete-time bounded real if
• all poles of each element of H(z) lie in |z| < 1,
• I_m − Hᵀ(z⁻¹)H(z) ⪰ 0 for all |z| = 1.
Then the following bounded real lemma holds.
Lemma 5.87 ([114, Lemma 8]) Let (A, B, C, D) be a realization (not necessarily
minimal) of the transfer matrix H (z) ∈ Cn×m . Let K c = (B, AB, . . . , An−1 B) be
Kalman’s controllability matrix. Then H (z) is bounded real if and only if there exist
real matrices L, W, and P = Pᵀ with K_cᵀPK_c ⪰ 0, such that

\begin{cases} K_c^T(A^T PA - P + C^T C + L^T L)K_c = 0 \\ K_c^T(A^T PB + C^T D + L^T W) = 0 \\ D^T D + B^T PB + W^T W = I_m. \end{cases} \qquad (5.113)
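Definition 5.86 is straightforward to test numerically for a first-order transfer function H(z) = c/(z − a), for which the unit-circle condition can be swept directly. The parameter values below are illustrative assumptions; for 0 < a < 1 the sweep amounts to |c| ≤ 1 − a.

```python
import math

# Scalar check of Definition 5.86 for H(z) = c/(z - a): the pole must
# satisfy |a| < 1, and 1 - |H(e^{jt})|^2 = 1 - c^2/(1 - 2*a*cos(t) + a^2)
# must be nonnegative everywhere on the unit circle.
def discrete_bounded_real(a, c, n_points=10000):
    """Test both conditions of Definition 5.86 by sweeping |z| = 1."""
    if abs(a) >= 1.0:                    # pole on or outside the unit circle
        return False
    for k in range(n_points + 1):
        t = 2.0 * math.pi * k / n_points
        mag2 = c * c / (1.0 - 2.0 * a * math.cos(t) + a * a)
        if 1.0 - mag2 < -1e-12:
            return False
    return True
```

For a = 0.5 the boundary is |c| = 0.5: c = 0.4 passes, while c = 0.6 violates the unit-circle condition at z = 1.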
Let us make an aside on the problem of designing a feedback u(t) = v(t) + K x(t)
applied to the linear time-invariant system
\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad (5.114)
with x(0) = x0 , so that the closed-loop system is dissipative with respect to the
supply rate w(v, y) = vᵀRv − yᵀJy. Such systems, when they possess a storage
function x T P x, are named (R, P, J )-dissipative [116]. The feedback gain K has to
be chosen in such a way that the closed-loop system (A + B K , B, C + D K , D) is
(R, P, J )-dissipative. This gives rise to the following set of matrix equations:
\begin{cases} A^T P + PA + C^T JC = K^T RK \\ PB + C^T JD = -K^T R \\ D^T JD = R. \end{cases} \qquad (5.115)
A suitable choice of the matrices P, R, and J allows one to obtain several standard
one-block or two-block results, to which Riccati equalities correspond. This is sum-
marized as follows, where the dimensions are not given explicitly but follow from
the context. The notation y = \begin{pmatrix} y \\ u \end{pmatrix} means that the signal y is split into two sub-signals: one still called the output y, the other one being the input u. The following
ingredients (LMI and Riccati equalities) have already been seen in this book, under
slightly different forms. This is once again the opportunity to realize how the supply
rate modifications influence the type of problem one is solving.
• Let y = \begin{pmatrix} y \\ u \end{pmatrix}, C = \begin{pmatrix} C \\ 0 \end{pmatrix}, D = \begin{pmatrix} 0 \\ I_m \end{pmatrix}, J = \begin{pmatrix} I_m & 0 \\ 0 & R \end{pmatrix}. The matrix R in J and R in (5.115) are the same matrix. With this choice of input and matrices, one obtains from (5.115) the standard LQR Riccati equation. Indeed one gets

A^T P + PA + C^T C = K^T RK, \qquad B^T P = -RK, \qquad (5.116)

and eliminating K = -R^{-1}B^T P yields

A^T P + PA + C^T C - PBR^{-1}B^T P = 0. \qquad (5.117)
• Let y = \begin{pmatrix} y \\ u \end{pmatrix}, C = \begin{pmatrix} C \\ 0 \end{pmatrix}, D = \begin{pmatrix} D \\ I_m \end{pmatrix}, J = \begin{pmatrix} I_m & 0 \\ 0 & I_m \end{pmatrix}. This time one gets the normalized coprime factorization problem, still with J ≻ 0, P ≻ 0, R ≻ 0. From (5.115) it follows that

\begin{cases} A^T P + PA + C^T C = K^T RK \\ PB + C^T D = -K^T R \\ D^T D + I_m = R, \end{cases} \qquad (5.118)

and eliminating K and R:

A^T P + PA + C^T C - (PB + C^T D)(I_m + D^T D)^{-1}(B^T P + D^T C) = 0. \qquad (5.119)
• Let y = \begin{pmatrix} y \\ u \end{pmatrix}, C = \begin{pmatrix} C \\ 0 \end{pmatrix}, D = \begin{pmatrix} D \\ I_m \end{pmatrix}, J = \begin{pmatrix} I_m & 0 \\ 0 & -\gamma^2 I_m \end{pmatrix}. We obtain the Bounded Real Lemma, and (5.115) becomes

\begin{cases} A^T P + PA + C^T C = K^T RK \\ C^T D + PB = -K^T R \\ R = D^T D - \gamma^2 I_m. \end{cases} \qquad (5.120)

If γ is such that R ≺ 0 and P ⪰ 0, one can eliminate R and K from the above, and obtain the Bounded Real Lemma Riccati equality

A^T P + PA + C^T C + (PB + C^T D)(\gamma^2 I_m - D^T D)^{-1}(B^T P + D^T C) = 0. \qquad (5.121)
• Let y = \begin{pmatrix} y \\ u \end{pmatrix}, C = \begin{pmatrix} C \\ 0 \end{pmatrix}, D = \begin{pmatrix} D \\ I_m \end{pmatrix}, J = -\begin{pmatrix} 0 & I_m \\ I_m & 0 \end{pmatrix}. We obtain the Positive Real Lemma, and (5.115) becomes the set of equations of the KYP Lemma, i.e.,

\begin{cases} A^T P + PA = K^T RK \\ C^T - PB = K^T R \\ R = -(D + D^T). \end{cases} \qquad (5.122)
A system that is dissipative with respect to this choice of the supply rate is called
J -dissipative. For more details on the J -dissipative approach and its application
in H∞ -control, one is referred to [117].
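The first item above can be walked through on a scalar example, where the LQR Riccati equation (5.117) is a quadratic. The numerical values A = B = C = R = 1 are illustrative assumptions, not taken from the text.

```python
import math

# Scalar instance of (5.116)-(5.117): with A = B = C = R = 1 the Riccati
# equation reads 2*P + 1 - P**2 = 0, the gain follows from B^T P = -R K
# (so K = -P here, and u = v + K*x), and A + B*K must be stable.
A, B, C, R = 1.0, 1.0, 1.0, 1.0

def lqr_scalar():
    """Return (P, K): positive Riccati root of (5.117) and gain from (5.116)."""
    # (B^2/R)*P^2 - 2*A*P - C^2 = 0
    qa, qb, qc = B * B / R, -2.0 * A, -C * C
    P = (-qb + math.sqrt(qb * qb - 4 * qa * qc)) / (2 * qa)
    K = -B * P / R
    return P, K
```

Here P = 1 + √2 and the closed loop A + BK = 1 − P = −√2 is asymptotically stable, as the LQR construction guarantees.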
The problem that is of interest here, and which is in a sense of the same nature as the
problem treated in Sect. 3.12.1, is about the design of a robust controller that is also
PR. Let us consider the dynamical system
\begin{cases} \dot{x}(t) = Ax(t) + B_1 w(t) + B_2 u(t) \\ z(t) = C_1 x(t) + D_{12}u(t) \\ y(t) = C_2 x(t) + D_{21}w(t). \end{cases} \qquad (5.125)
The signal u(·) is the controller, and w(·) is a disturbance. Let us denote Hi j (s) =
Ci (s In − A)−1 B j + Di j , s ∈ C. Since D11 = 0 and D22 = 0, the transfer matrices
H11 (s) and H22 (s) are strictly proper. In a compact notation one has
\begin{pmatrix} z(s) \\ y(s) \end{pmatrix} = H(s)\begin{pmatrix} w(s) \\ u(s) \end{pmatrix}, \qquad (5.126)

with H(s) = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}(sI_n - A)^{-1}\begin{pmatrix} B_1 & B_2 \end{pmatrix} + \begin{pmatrix} 0 & D_{12} \\ D_{21} & 0 \end{pmatrix}. The objective of the control task is to construct a positive real controller with transfer matrix K(s) such that
||Tzw (s)||∞ = ||H11 (s) + H12 (s)K (s)(Im − H22 (s)K (s))−1 H21 (s)||∞ < γ
(5.127)
for some γ > 0. Some assumptions are in order.
Assumptions (ii) and (iii) will guarantee that some Riccati equations in (5.130)
and (5.131) possess a solution, respectively. Assumptions (iv) and (v) concern the
exogenous signal w(·) and how it enters the transfer H(s): w(·) includes both plant
disturbances and sensor noise, which are orthogonal, and the sensor noise weighting
matrix is nonsingular. Assumption (iv) implies that C1 x and D12 u are orthogonal, so
that the penalty on z = C1 x + D12 u includes a nonsingular penalty on the control u.
Let us disregard for the moment that the controller be PR. We obtain the so-called
central controller
K (s) = −Fc (s In − Ac )−1 Z c L c , (5.128)
(i) A_c = A + \gamma^{-2}B_1B_1^T P_c + B_2F_c + Z_cL_cC_2
(ii) F_c = -R^{-1}B_2^T P_c \qquad (5.129)
(iii) L_c = -Y_cC_2^T N^{-1}
(iv) Z_c = (I_n - \gamma^{-2}Y_cP_c)^{-1}

with P_c = P_cᵀ ⪰ 0, Y_c = Y_cᵀ ⪰ 0, ρ(Y_cP_c) < γ², and these matrices are solutions
of the Riccati equations
and

AY_c + Y_cA^T + Y_c\big[\gamma^{-2}C_1^T C_1 - C_2^T N^{-1}C_2\big]Y_c + B_1B_1^T = 0. \qquad (5.131)
The next step is to guarantee that the controller is PR. To this end an additional
assumption is made.
Proof The proof consists of showing that, with the above choices of B₁ and of the matrix Q_r ⪰ 0, there exist P_r = P_rᵀ ≻ 0 and Q_c = Q_cᵀ ⪰ 0 such that

A_c^T P_r + P_r A_c + Q_c = 0 \qquad (5.133)
PYc Z cT Pr = Pc . (5.135)
The choice made for B1 B1T reduces (5.139) to the KYP Lemma Equation A T P +
P A + Q = 0. This shows that Yc = P −1 is a solution of equation (5.131). Now
inserting (5.129)(i), (5.132), and (5.136)(ii) into (5.133) reduces this equation to
(5.130). This proves that the above choices for A_c, P_r, Q_r guarantee that (5.133) is true with Q_c = Q_r. In other words, we have shown that with these choices for the matrices A_c, P_r, and Q_r, the first KYP Lemma equation (5.133) is satisfied, as it reduces to the KYP Lemma equation AᵀP + PA + Q = 0, which is assumed to hold. The second equation is also satisfied because B₂ᵀP = C₂ is assumed to hold.
Let us end these two sections by mentioning the work in [119, 120] in which the
H∞ problem is solved with a nonsmooth quadratic optimization problem, making
use of the same tools from nonsmooth analysis that we saw in various places of this
book (subderivatives, subgradients). The problem of minimizing the H∞ norm of a
transfer function, subject to a positive real constraint on another transfer function in
a MIMO system, is addressed in [121], see also [122]. The Bounded Real Lemma has
been extended to a class of nonlinear time-delayed systems in [78], see also [123,
124] for details on the H∞ control of delayed systems. Other, related results, may be
found in [46] using the γ −PRness property (see Definition 2.87). A discrete-time
version of the Bounded Real Lemma is presented in [125].
5.10.5 Nonlinear H∞
A nonlinear version of the Bounded Real Lemma is obtained from (4.88) (4.89), setting Q = −I_m, S = 0, R = γ²I_m. One obtains

\begin{cases} \hat{S}(x) = -j(x) \\ \hat{R}(x) = \gamma^2 I_m - j^T(x)j(x) = W^T(x)W(x) \\ \frac{1}{2}g^T(x)\nabla V(x) = -j^T(x)h(x) - W^T(x)L(x) \\ \nabla V^T(x)f(x) = -h^T(x)h(x) - L^T(x)L(x) \\ V(x) \geq 0, \; V(0) = 0, \end{cases} \qquad (5.140)
From (5.141) with strict inequality, one easily gets the Hamilton–Jacobi inequality
(5.100), using Theorem A.65. Let us now pass to the main subject of this section.
Given a plant of the form
\begin{cases} \dot{x}(t) = A(x(t)) + B_1(x(t))w(t) + B_2(x(t))u(t) \\ z(t) = C_1(x(t)) + D_{12}(x(t))u(t) \\ y(t) = C_2(x(t)) + D_{21}(x(t))w(t), \end{cases} \qquad (5.142)
with x(0) = x₀, A(0) = 0, C₁(0) = 0, C₂(0) = 0, where C₂(·) and D₂₁(·) are continuously differentiable, the nonlinear H∞ control problem is to construct a controller

\dot{\zeta}(t) = a(\zeta(t)) + b(\zeta(t))y(t), \qquad u(t) = c(\zeta(t)), \qquad (5.143)
with continuously differentiable a(·), b(·), c(·), a(0) = 0, c(0) = 0, dim(ζ (t)) = l,
such that there exists a storage function V : Rn × Rl → R+ such that
V(x(t_1), \zeta(t_1)) - V(x(t_0), \zeta(t_0)) \leq \int_{t_0}^{t_1} \{\gamma^2 w^T(t)w(t) - z^T(t)z(t)\}\,dt, \qquad (5.144)
for any t1 ≥ t0 , along the closed-loop trajectories. The controller u(·) may be static,
i.e., u = u(x). One may also formulate (5.144) as
\int_{t_0}^{t_1} z^T(t)z(t)\,dt \leq \gamma^2 \int_{t_0}^{t_1} w^T(t)w(t)\,dt + \beta(x(t_0)), \qquad (5.145)
for some nonnegative function β(·) with β(0) = 0. The next result was proved in
[126].
Theorem 5.89 Let B₁(·) and B₂(·) be bounded, all data in (5.142) have bounded first derivatives, D₁₂ᵀD₁₂ = I_m, D₂₁D₂₁ᵀ = I_q, and D₂₁ and D₁₂ be constant. Consider the state feedback u(x). If the closed-loop system satisfies (5.145), there exists a storage function V(x) ≥ 0, V(0) = 0, such that the Hamilton–Jacobi equality (5.146) holds. Conversely, if the Hamilton–Jacobi equality has a smooth solution V(x) > 0 for x ≠ 0, V(0) = 0, then the state-feedback controller u(x) = −(D₁₂ᵀC₁(x) + B₂ᵀ(x)∇Vᵀ(x)) makes the closed-loop system satisfy (5.145). The stability of the closed-loop system is guaranteed provided that the system
(5.145). The stability of the closed-loop system is guaranteed provided that the system
\begin{cases} \dot{x}(t) = A(x(t)) + B_2(x(t))u(x(t)) + B_1(x(t))w(t) \\ z(t) = C_1(x(t)) + D_{12}(x(t))u(x(t)), \end{cases} \qquad (5.147)
is zero-state detectable.
Much more material can be found in [117, 127–129] and the books [43, 130]. Exten-
sions of the strict Bounded Real Lemma 5.79 to the nonlinear affine in the input case,
where storage functions are allowed to be lower semi-continuous only, have been
proposed in [131] and in [132].
We have already introduced the notion of finite power gain in Definition 5.9. Here
we refine it a little bit, which gives rise to the characterization of a new quantity
(the power bias) with a partial differential equality involving a storage function.
The material is taken from [127]. In particular, an example will show that storage
functions are not always differentiable, and that tools based on viscosity solutions
may be needed. We consider systems of the form
\dot{x}(t) = f(x(t)) + g(x(t))u(t), \qquad y(t) = h(x(t)) + j(x(t))u(t), \qquad (5.148)
with the usual dimensions of vectors, and all vector fields are continuously differentiable on Rⁿ. It is further assumed that ||g(x)||∞ < +∞, ||j(x)||∞ < +∞, and that \frac{\partial f}{\partial x}(x), \frac{\partial g}{\partial x}(x), \frac{\partial h}{\partial x}(x), \frac{\partial j}{\partial x}(x) are (globally) bounded.
Definition 5.90 The system (5.148) has finite power gain ≤ γ if there exist finite nonnegative functions λ : Rⁿ → R (the power bias) and β : Rⁿ → R (the energy bias) such that

\int_0^t y^T(s)y(s)\,ds \leq \gamma^2 \int_0^t u^T(s)u(s)\,ds + \lambda(x)t + \beta(x) \qquad (5.149)
and dividing both sides of (5.149) by t and letting t → +∞, one obtains
It is noteworthy that (5.149) implies (5.151) but not the contrary. Moreover, the link
between (5.149) and dissipativity is not difficult to make, whereas it is not clear with
(5.151). Since (5.151) is obtained in the limit as t → +∞, possibly the concept of
ultimate dissipativity could be suitable. This is why finite power gain is defined as
in Definition 5.90.
Proposition 5.91 ([127]) Any system with finite power gain ≤ γ and zero power
bias has an L2 -gain ≤ γ . Conversely, any system with L2 -gain ≤ γ has a finite
power gain with zero power bias.
This represents the energy that can be extracted from the system on [0, t]. It is
nondecreasing in t and one has for all t ≥ 0 and all x ∈ Rn :
Definition 5.92 The available power λ_a(x) is the maximal average power that can be extracted from the system over an infinite time when initialized at x, i.e.,
\lambda_a(x) = \limsup_{t\to+\infty} \frac{\phi(t, x)}{t}. \qquad (5.154)
Proposition 5.93 ([127]) Suppose that the system has finite power gain ≤ γ with power and energy bias pair (λ, β). Then the available power is finite, with λ_a(x) ≤ λ(x) for all x ∈ Rⁿ.
One realizes that the framework of finite power gain systems tends to generalize that
of dissipative systems.
Example 5.94 ([127]) Consider the scalar system ẋ(t) = ax(t) + bu(t), y(t) =
c(x(t)), where c(·) is a saturation
c(x) = \begin{cases} -c\varepsilon & \text{if } x < -\varepsilon \\ cx & \text{if } |x| \leq \varepsilon \\ c\varepsilon & \text{if } x > \varepsilon. \end{cases} \qquad (5.155)
The power gain γ = inf{γ ≥ 0 | (5.149) holds} thus depends on the power bias:

\gamma = \begin{cases} \big|\frac{b}{a\varepsilon}\big|\sqrt{c^2\varepsilon^2 - \lambda} & \text{if } \lambda \in [0, c^2\varepsilon^2) \\ 0 & \text{if } \lambda \in [c^2\varepsilon^2, +\infty). \end{cases} \qquad (5.157)
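The formula (5.157) can be evaluated directly: the achievable power gain decreases as the admissible power bias λ grows, and vanishes once λ ≥ c²ε². The sketch below transcribes (5.157); the parameter values are illustrative assumptions.

```python
import math

# Direct transcription of (5.157) for the saturated scalar system of
# Example 5.94 (a < 0 for a stable linear part).  For lam = 0 the gain
# reduces to |b*c/a|, the L2-gain of the linear region, consistent with
# Proposition 5.91 (zero power bias <=> finite L2-gain).
a, b, c, eps = -1.0, 1.0, 1.0, 1.0

def power_gain(lam):
    """gamma(lam) as in (5.157)."""
    if lam >= c * c * eps * eps:
        return 0.0
    return abs(b / (a * eps)) * math.sqrt(c * c * eps * eps - lam)
```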
We are now going to characterize the property of finite power gain through a partial
differential equation, similarly to what has been developed in Sect. 4.5.
Suppose that the system has finite power gain ≤ γ . Then there exists a finite viscosity
solution pair (λ, V ) of the partial differential inequality
Theorem 5.96 ([127]) Suppose there exists a Lipschitz continuous solution pair
(λ, V ) of the partial differential equality
Example 5.97 Let us continue with the above example. The system is scalar, so that
the partial differential equality (5.161) reduces to a quadratic in ∇V (x). One may
compute that, for γ ≥ |bc/a|,

V(x) = \begin{cases} -\frac{\gamma^2 a x^2}{b^2}\big(1 - \sqrt{1 - \mu^2}\big) & \text{if } |x| < \varepsilon \\[4pt] -\frac{\gamma^2 a x^2}{b^2} + \frac{\gamma^2 a |x|}{b^2}\sqrt{x^2 - \mu^2\varepsilon^2} - \frac{\gamma^2 a \varepsilon^2}{b^2}\log\frac{|x| + \sqrt{x^2 - \mu^2\varepsilon^2}}{\varepsilon + \varepsilon\sqrt{1 - \mu^2}} & \text{if } |x| \geq \varepsilon, \end{cases} \qquad (5.162)

where μ = |bc/(γa)|, and for γ < |bc/a|:

V(x) = \begin{cases} -\frac{\gamma^2 a x^2}{b^2} - \frac{\gamma^2 a}{b^2}\sqrt{\mu^2 - 1}\,\Big(|x|\sqrt{\varepsilon^2 - x^2} + \varepsilon^2 \arcsin\frac{|x|}{\varepsilon}\Big) & \text{if } |x| < \varepsilon \\[4pt] -\frac{\gamma^2 a x^2}{b^2} + \frac{\gamma^2 a |x|}{b^2}\sqrt{x^2 - \varepsilon^2} - \frac{\gamma^2 a \varepsilon^2}{b^2}\log\frac{|x| + \sqrt{x^2 - \varepsilon^2}}{\varepsilon} - \frac{\gamma^2 a \varepsilon^2 \pi}{2b^2}\sqrt{\mu^2 - 1} & \text{if } |x| \geq \varepsilon. \end{cases} \qquad (5.163)
It is expected from these expressions that the function V (x) may not be differentiable
everywhere, so that viscosity solutions have to be considered.
The notion of hyperstable system has been introduced by Popov in 1964 [133, 134].
It grew out of the concept of absolute stability which was reviewed in Sect. 3.13. Let
us consider the system
\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t), \qquad (5.164)
Definition 5.99 The pair (5.164) (5.165) is hyperstable if for any constant γ ≥ 0,
δ ≥ 0, and for every input u(·) such that
Definition 5.100 The pair (5.164) and (5.165) has the minimal stability property if
for any initial condition x(0) there exists a control input u m (·) such that the trajectory
of (5.164) satisfies
• limt→+∞ ||x(t)|| = 0,
• η(0, t) ≤ 0, for all t ≥ 0.
The following theorem is taken from [135] and generalizes the results in [136–139].
Theorem 5.101 ([135]) Suppose that the pair (5.164) and (5.165) has the minimal
stability property. Then the pair (5.164) and (5.165) is
• Hyperstable if and only if the spectral function

$$\Pi(s)=\begin{pmatrix}B^{T}(-sI_{n}-A^{T})^{-1}&I_{m}\end{pmatrix}\begin{pmatrix}Q&S\\S^{T}&R\end{pmatrix}\begin{pmatrix}(sI_{n}-A)^{-1}B\\I_{m}\end{pmatrix}\qquad(5.168)$$

is nonnegative.
• Asymptotically hyperstable if this spectral function is nonnegative and Π(jω) ≻ 0
for all ω ∈ R.
It is worth recalling here Proposition 2.36, Theorem 3.77, as well as the equivalence at
the end of Sect. 3.12.2, between the spectral function positivity and the KYP Lemma
set of equations solvability.
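The positivity test on the spectral function can be sketched numerically for a scalar example (everything here is an assumed toy instance, not from the text): with A = −1, B = C = D = 1 and the supply rate uy, i.e., Q = 0, S = C/2, R = D, the quadratic form in (5.168) reduces to Re[H(jω)] with H(s) = 1/(s + 1) + 1, which must be nonnegative for all ω:

```python
# Scalar toy instance (assumed, not from the text): A=-1, B=C=D=1,
# supply rate u*y, so Q = 0, S = C/2, R = D in the quadratic form.
A, B, C, D = -1.0, 1.0, 1.0, 1.0
Q, S, R = 0.0, C / 2.0, D

def spectral(w):
    """Evaluate the scalar version of (5.168) at s = jw:
    Pi(jw) = [B(-s-A)^-1  1] [[Q,S],[S,R]] [(s-A)^-1 B; 1]."""
    s = 1j * w
    left = B / (-s - A)              # B^T(-sI - A^T)^{-1}, scalar case
    right = B / (s - A)              # (sI - A)^{-1}B, scalar case
    val = left * Q * right + left * S + S * right + R
    return val.real                  # Pi(jw) is real in this scalar case

# positive realness of H(s) = 1/(s+1) + 1: Re[H(jw)] >= 0 for all w
assert all(spectral(k / 10.0) >= 0 for k in range(-100, 101))
```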
Proof Let us prove the first item of the theorem. Hyperstability implies positivity:
let us consider the Hermitian matrix

$$\Sigma(s)=\begin{pmatrix}B^{T}(\bar{s}I_{n}-A^{T})^{-1}&I_{m}\end{pmatrix}\begin{pmatrix}Q+(s+\bar{s})P&S\\S^{T}&R\end{pmatrix}\begin{pmatrix}(sI_{n}-A)^{-1}B\\I_{m}\end{pmatrix},\qquad(5.169)$$
and let us prove that Σ(s) ⪰ 0 for all Re[s] > 0 is implied by the hyperstability.
Indeed, suppose that for some s₀ with Re[s₀] > 0, Σ(s₀) is not positive definite: then
there exists a nonzero vector u₀ ∈ Cᵐ such that u₀*Σ(s₀)u₀ ≤ 0. For the input u(t) =
u₀ exp(s₀t) with the initial data x(0) = (s₀Iₙ − A)⁻¹Bu₀, one has x(t) = (s₀Iₙ −
A)⁻¹Bu₀ exp(s₀t). Clearly, ||x(t)|| grows at the same rate as exp(Re[s₀]t), so it
cannot satisfy an inequality such as (5.167). On the other hand, the constraint (5.166)
is satisfied, since for all t ≥ 0 one has η(0, t) = u₀*Σ(s₀)u₀ ∫₀ᵗ exp(2Re[s₀]τ)dτ ≤ 0.
Consequently, Σ(s) is Hermitian positive definite for all s with Re[s] > 0. By continuity
one concludes that Π(jω) = Σ(jω) ⪰ 0 for all ω ∈ R.
Positivity implies hyperstability: take any symmetric matrix G, and notice, using
the same manipulations as the ones used in Sect. 3.1.1 (premultiply ẋ(t) in (5.164)
by xᵀ(t)G), that the functional in (5.165) can be rewritten as

$$\eta(0,t)=\left[x^{T}(P+G)x\right]_{0}^{t}+\int_{0}^{t}\begin{pmatrix}x^{T}&u^{T}\end{pmatrix}\begin{pmatrix}Q-A^{T}G-GA&S-GB\\S^{T}-B^{T}G&R\end{pmatrix}\begin{pmatrix}x\\u\end{pmatrix}d\tau.\qquad(5.170)$$
If one considers the matrix G = Pr that is the maximal solution of the KYP Lemma
set of equations (see, e.g., the arguments after Proposition 4.51), then
$$\eta(0,t)=\left[x^{T}(P+P_{r})x\right]_{0}^{t}+\int_{0}^{t}\left\|\lambda x(\tau)+\nu u(\tau)\right\|^{2}d\tau\qquad(5.171)$$

for some λ and ν that correspond to a spectral factor of the spectral function
Π̂(s) = C(sIₙ − A + BE)⁻¹B + Bᵀ(−sIₙ − Aᵀ + EᵀBᵀ)⁻¹Cᵀ + R, where E is such that
(A − BE) is Hurwitz, i.e., Π̂(s) = Z(−s)ᵀZ(s), with Z(s) = λ(sIₙ − A)⁻¹B + ν.
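A toy illustration of a scalar spectral factorization Π(s) = Z(−s)Z(s) (the transfer function H(s) = (s + 2)/(s + 1) and its factor Z are assumed for illustration, not taken from the text):

```python
import math

def H(s):
    """An assumed positive real transfer function (toy example)."""
    return (s + 2) / (s + 1)

def Pi(s):
    """Scalar spectral function Pi(s) = H(s) + H(-s) = (4 - 2s^2)/(1 - s^2)."""
    return H(s) + H(-s)

def Z(s):
    """Stable, minimum-phase spectral factor: Z(-s)*Z(s) = Pi(s)."""
    return math.sqrt(2) * (s + math.sqrt(2)) / (s + 1)

# the factorization identity holds at any test point away from the poles
for s in (0.5, 3j, 0.2 + 1j):
    assert abs(Z(-s) * Z(s) - Pi(s)) < 1e-9
```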
Let u_m(·) be an input which renders η(0, t) ≤ 0, introduced via the minimal stability
assumption. If x_m(·) is the corresponding state trajectory with initial condition
x_m(0) = x₀, then x₀ᵀ(P + P_r)x₀ ≥ xᵀ(t)(P + P_r)x(t) for all t ≥ 0, which implies,
since lim_{t→+∞} x(t) = 0 for u(·) = u_m(·), that x₀ᵀ(P + P_r)x₀ ≥ 0 for all x₀. Thus
the matrix P + P_r is positive semi-definite. Suppose that there exists x₀ ≠ 0 such that
x₀ᵀ(P + P_r)x₀ = 0. The condition that η(0, t) ≤ 0 for the input u_m(·) implies that
λx(τ) + νu_m(τ) = 0. In other words, the state trajectory x_m(·) of the dynamical
system

ẋ(t) = Ax(t) + Bu(t)
y(t) = λx(t) + νu(t),        (5.172)

with initial state x_m(0) = x₀ and the input u_m(·), results in an identically zero output
y(·). The inverse system of the system in (5.172), which is given by

ẋ(t) = (A − B(ν)⁻¹λ)x(t) + B(ν)⁻¹y(t)
u(t) = −(ν)⁻¹λx(t) + (ν)⁻¹y(t),        (5.173)

has an unstable transfer function. Hence one cannot have lim_{t→+∞} ||x_m(t)|| = 0 when
applying the identically zero input y(·) to (5.173). The assumption is contradicted.
Thus P + P_r is positive definite. There exist two scalars α > 0 and β > 0 such that
and

α²‖x(t)‖² ≤ δ sup_{0≤τ≤t} ‖x(τ)‖ + (β‖x(0)‖ + γ)²,        (5.176)
Proof Let us first prove that the system ẋ(t) = Ax(t) + Bu(t), associated with

$$\eta(0,t)=\frac{1}{2}\left[x^{T}C^{T}(\alpha_{1}M-\alpha_{2}N)Cx\right]_{0}^{t}+\int_{0}^{t}\left\{x^{T}C^{T}M^{T}NCx+u^{T}\left((N+M)C+(\alpha_{1}-\alpha_{2})CB\right)x+u^{T}\left(I_{m}+(\alpha_{1}-\alpha_{2})CB\right)u\right\}dt,$$

with α₁ ≥ 0, α₂ ≥ 0, satisfies a minimal stability condition in the sense of Definition 5.100. Let us consider the control u_m(t) = −Ey(t) + ρ(t), where

$$\begin{cases}\alpha\dot{\rho}(t)+(M-E)(N-M)(M-E)^{-1}\rho(t)=0,&\rho(0)=(E-M)y(0)\ \text{if }\alpha\geq 0\\-\alpha\dot{\rho}(t)+(N-E)(N-M)(N-E)^{-1}\rho(t)=0,&\rho(0)=(E-N)y(0)\ \text{if }\alpha<0.\end{cases}\qquad(5.178)$$
Then

$$\begin{pmatrix}\dot{x}(t)\\\dot{\rho}(t)\end{pmatrix}=\begin{pmatrix}A-BEC&B\\0&|\alpha|^{-1}(M-N)^{T}\end{pmatrix}\begin{pmatrix}x(t)\\\rho(t)\end{pmatrix}.\qquad(5.179)$$

By the definition of ρ(·), the second integral vanishes. Since ŷ(0) = 0 by construction, one
gets for all t ≥ 0:

$$\eta(0,t)=\int_{0}^{t}\hat{y}^{T}(M-E)^{T}(N-E)\hat{y}\,dt+\frac{\alpha}{2}\hat{y}^{T}(t)(M-E)^{T}\hat{y}(t),\qquad(5.180)$$
References
4. Hill DJ, Moylan PJ (1976) The stability of nonlinear dissipative systems. IEEE Trans Autom
Control 21(5):708–711
5. Vidyasagar M (1993) Nonlinear systems analysis, 2nd edn. Prentice Hall, Englewood Cliffs
6. Moylan PJ, Hill DJ (1978) Stability criteria for large-scale systems. IEEE Trans Autom
Control 23(2):143–149
7. Polushin IG, Marquez HJ (2004) Boundedness properties of nonlinear quasi-dissipative sys-
tems. IEEE Trans Autom Control 49(12):2257–2261
8. Sontag ED (2006) Passivity gains and the "secant condition" for stability. Syst Control Lett
55(3):177–183
9. Forbes JR, Damaren CJ (2010) Design of gain-scheduled strictly positive real controllers
using numerical optimization for flexible robotic systems. ASME J Dyn Syst Meas Control
132:034503
10. Walsh A, Forbes JR (2017) Analysis and synthesis of input strictly passive gain-scheduled
controllers. J Frankl Inst 354:1285–1301
11. Walsh A, Forbes JR (2016) A very strictly passive gain-scheduled controller: theory and
experiments. IEEE/ASME Trans Mechatron 21(6):2817–2826
12. Walsh A, Forbes JR (2018) Very strictly passive controller synthesis with affine parameter
dependence. IEEE Trans Autom Control 63(5):1531–1537
13. Anderson BDO (1972) The small-gain theorem, the passivity theorem and their equivalence.
J Frankl Inst 293(2):105–115
14. Lozano R, Joshi SM (1990) Strictly positive real functions revisited. IEEE Trans Autom
Control 35:1243–1245
15. Hodaka I, Sakamoto N, Suzuki M (2000) New results for strict positive realness and feedback
stability. IEEE Trans Autom Control 45(4):813–819
16. Joshi SM, Gupta S (1996) On a class of marginally stable positive-real systems. IEEE Trans
Autom Control 41(1):152–155
17. Pavlov A, Marconi L (2008) Incremental passivity and output regulation. Syst Control Lett
57:400–409
18. Zames G (1966) On the input-output stability of nonlinear time-varying feedback systems-
part II: conditions involving circles in the frequency plane and sector nonlinearities. IEEE
Trans Autom Control 11(3):465–477
19. Moylan PJ, Vannelli A, Vidyasagar M (1980) On the stability and well-posedness of inter-
connected nonlinear dynamical systems. IEEE Trans Circuits Syst 27(11):1097–1101
20. Vidyasagar M (1980) On the well-posedness of large-scale interconnected systems. IEEE
Trans Autom Control 25(3):413–421
21. Ghanbari V, Wu P, Antsaklis PJ (2016) Large-scale dissipative and passive control systems
and the role of star and cyclic-symmetries. IEEE Trans Autom Control 61(11):3676–3680
22. Haddad WM, Chellaboina V, Hui Q, Nersesov S (2004) Vector dissipativity theory for large-
scale impulsive dynamical systems. Math Probl Eng 3:225–262
23. Haddad WM, Hui Q, Chellaboina V, Nersesov S (2004) Vector dissipativity theory for discrete-
time large-scale nonlinear dynamical systems. Adv Differ Equ 1:37–66
24. Haddad WM, Nersesov SG (2011) Stability and control of large-scale dynamical systems.
Applied mathematics. Princeton University Press, Princeton
25. Inoue M, Urata K (2018) Dissipativity reinforcement in interconnected systems. Automatica
95:73–85
26. Vidyasagar M (1981) Input-output analysis of large-scale interconnected systems: decompo-
sition, well-posedness and stability. Lecture notes in control and information sciences, vol 29.
Springer, London
27. Vidyasagar M (1977) New passivity-type criteria for large-scale interconnected systems. IEEE
Trans Autom Control 24(4):575–579
28. Vidyasagar M (1977) L2 -stability of interconnected systems using a reformulation of the
passivity theorem. IEEE Trans Circuits Syst 24(11):637–645
29. Sundareshan MK, Vidyasagar M (1977) L2 -stability of large-scale dynamical systems: criteria
via positive operator theory. IEEE Trans Autom Control 22(3):396–399
30. Hatanaka T, Chopra N, Fujita M, Spong MW (2015) Passivity-based control and estimation
in networked robotics. Communications and Control Engineering. Springer International
Publishing, Berlin
31. Hill DJ, Moylan PJ (1983) General instability results for interconnected systems. SIAM J
Control Optim 21(2):256–279
32. Vidyasagar M (1977) L2 -instability criteria for interconnected systems. SIAM J Control
Optim 15(2):312–328
33. Hill DJ, Moylan PJ (1977) Stability results for nonlinear feedback systems. Automatica
13:377–382
34. Arcak M, Meissen C, Packard A (2016) Networks of dissipative systems. SpringerBriefs in
Electrical and Computer Engineering: Control, Automation and Robotics. Springer Interna-
tional Publishing, Berlin
35. Hagiwara T, Mugiuda T (2004) Positive-realness analysis of sampled-data systems and its
applications. Automatica 40:1043–1051
36. Ahmadi M, Valmorbida G, Papachristodoulou A (2016) Dissipation inequalities for the anal-
ysis of a class of PDEs. Automatica 66:163–171
37. Petersen IR, Lanzon A (2010) Feedback control of negative-imaginary systems. IEEE Control
Syst Mag 30(5):54–72
38. Xiong J, Petersen IR, Lanzon A (2010) A negative imaginary lemma and the stability of inter-
connections of linear negative imaginary systems. IEEE Trans Autom Control 55(10):2342–
2347
39. Hill DJ, Moylan PJ (1980) Dissipative dynamical systems: basic input-output and state prop-
erties. J Frankl Inst 309(5):327–357
40. Kalman RE (1963) Lyapunov functions for the problem of Lurie in automatic Control. Proc
Nat Acad Sci USA 49(2):201–205
41. Pota HR, Moylan PJ (1993) Stability of locally dissipative interconnected systems. IEEE
Trans Autom Control 38(2):308–312
42. Hill DJ, Moylan PJ (1980) Connections between finite-gain and asymptotic stability. IEEE
Trans Autom Control 25(5):931–936
43. van der Schaft AJ (2017) L2-gain and passivity techniques in nonlinear control, 3rd edn.
Communications and Control Engineering. Springer International Publishing AG, Berlin
44. Byrnes CI, Isidori A, Willems JC (1991) Passivity, feedback equivalence, and the global
stabilization of minimum phase nonlinear systems. IEEE Trans Autom Control 36(11):1228–
1240
45. Lin W (1995) Feedback stabilization of general nonlinear control systems: a passive system
approach. Syst Control Lett 25:41–52
46. Sakamoto N, Suzuki M (1996) γ -passive system and its phase property and synthesis. IEEE
Trans Autom Control 41(6):859–865
47. Lin W, Byrnes CI (1995) Passivity and absolute stabilization of a class of discrete-time
nonlinear systems. Automatica 31(2):263–267
48. Lee TC, Jiang ZP (2005) A generalization of Krasovskii-LaSalle theorem for nonlinear time-
varying systems: converse results and applications. IEEE Trans Autom Control 50(8):1147–
1163
49. Brogliato B, Goeleven D (2005) The Krakovskii-LaSalle invariance principle for a class of
unilateral dynamical systems. Math Control, Signals Syst 17:57–76
50. Polushin IG, Fradkov AL, Hill DJ (2000) Passivity and passification of nonlinear systems.
Autom Remote Control 61(3):355–388
51. Ebenbauer C, Allgöwer F (2008) A dissipation inequality for the minimum phase property.
IEEE Trans Autom Control 53(3):821–826
52. Santosuosso GL (1997) Passivity of nonlinear systems with input-output feedthrough. Auto-
matica 33(4):693–697
53. Fradkov AL, Hill DJ (1998) Exponential feedback passivity and stabilizability of nonlinear
systems. Automatica 34(6):697–703
80. Mahmoud MS (2006) Passivity and passification of jump time-delay systems. IMA J Math
Control Inf 26:193–209
81. Gui-Fang L, Hui-Ying L, Chen-Wu Y (2005) Observer-based passive control for uncertain
linear systems with delay in state and control input. Chin Phys 14(12):2379–2386
82. de la Sen M (2007) On positivity and stability of a class of time-delay systems. Nonlinear
Anal, R World Appl 8(3):749–768
83. Sun XM, Zhao J, Hill DJ (2006) Stability and L 2 -gain analysis for switched delay systems:
a delay-dependent method. Automatica 42(10):1769–1774
84. Han QL (2005) Absolute stability of time-delay systems with sector-bounded nonlinearity.
Automatica 41:2171–2176
85. Bliman PA (2001) Lyapunov-Krasovskii functionals and frequency domain: delay-
independent absolute stability criteria for delay systems. Int J Robust Nonlinear Control
11:771–788
86. Gan ZX, Ge WG (2001) Lyapunov functional for multiple delay general Lur’e control systems
with multiple nonlinearities. J Math Anal Appl 259:596–608
87. He Y, Wu M (2003) Absolute stability for multiple delay general Lur’e control systems with
multiple nonlinearities. J Comput Appl Math 159:241–248
88. Li XJ (1963) On the absolute stability of systems with time lags. Chin Math 4:609–626
89. Liao XX (1993) Absolute stability of nonlinear control systems. Science Press, Beijing
90. Popov VM, Halanay A (1962) About stability of non-linear controlled systems with delay.
Autom Remote Control 23:849–851
91. Somolines A (1977) Stability of Lurie type functional equations. J Differ Equ 26:191–199
92. Zevin AA, Pinsky MA (2005) Absolute stability criteria for a generalized Lur’e problem with
delay in the feedback. SIAM J Control Optim 43(6):2000–2008
93. Liao XX, Yu P (2006) Sufficient and necessary conditions for absolute stability of time-
delayed Lurie control systems. J Math Anal Appl 323(2):876–890
94. Willems JC (1971) Least squares stationary optimal control and the algebraic Riccati equation.
IEEE Trans Autom Control 16(6):621–634
95. Petersen IR, Anderson BDO, Jonckheere EA (1991) A first principles solution to the non-
singular H∞ control problem. Int J Robust Nonlinear Control 1:171–185
96. Zhou K, Khargonekar P (1988) An algebraic Riccati equation approach to H∞ optimization.
Syst Control Lett 11:85–91
97. Green M, Limebeer DJN (1995) Linear robust control. Prentice Hall, Englewood Cliffs
98. Bernstein DS (2005) Matrix mathematics: theory, facts, and formulas with application to
linear systems theory. Princeton University Press, Princeton
99. Chen W, Tu F (2000) The strict bounded real lemma for time-varying systems. J Math Anal
Appl 244:120–132
100. Ravi R, Nagpal K, Khargonekar P (1991) H∞ control of linear time-varying systems: a state
space approach. SIAM J Control Optim 29:1394–1413
101. Xie L, de Souza CE (1991) H∞ filtering for linear periodic systems with parameter uncertainty.
Syst Control Lett 17:343–350
102. Orlov YV, Aguilar LT (2014) Advanced H∞ control: towards nonsmooth theory and appli-
cations. Birkhäuser
103. Shi P, de Souza CE, Xie L (1997) Bounded real lemma for linear systems with finite discrete
jumps. Int J Control 66(1):145–160
104. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice Hall, Englewood Cliffs
105. Ober R (1991) Balanced parametrization of classes of linear systems. SIAM J Control Optim
29(6):1251–1287
106. Opdenacker PC, Jonckheere EA (1988) A contraction mapping preserving balanced reduction
scheme and its infinity norm error bounds. IEEE Trans Circuits Syst 35(2):184–489
107. Cao YY, Lam J, Sun YX (1998) Static output feedback stabilization: an ILMI approach.
Automatica 34(12):1641–1645
108. Xie S, Xie L, de Souza CE (1998) Robust dissipative control for linear systems with dissipative
uncertainty. Int J Control 70(2):169–191
109. Krasovskii NN (1963) Stability of motion. Stanford University Press, Stanford (translated
from “Nekotorye zadachi ustoichivosti dvzhenia”, Moskva, 1959)
110. Zhai GS, Hu B, Yasuda K, Michel A (2001) Disturbance attenuation properties of time-
controlled switched systems. J Frankl Inst 338:765–779
111. Ionescu V, Oara C (1996) The four block Nehari problem: a generalized Popov-Yakubovich
type approach. IMA J Math Control Inf 13:173–194
112. Trentelman HL, Willems JC (2000) Dissipative differential systems and the state space H∞
control problem. Int J Robust Nonlinear Control 10:1039–1057
113. Liu KZ, He R (2006) A simple derivation of ARE solutions to the standard H∞ control problem
based on LMI solution. Syst Control Lett 55:487–493
114. Xiao C, Hill DJ (1999) Generalizations and new proof of the discrete-time positive real lemma
and bounded real lemma. IEEE Trans Circuits Syst I Fundam Theory Appl 46(6):740–743
115. Vaidyanathan PP (1985) The discrete-time bounded-real lemma in digital filtering. IEEE
Trans Circuits Syst 32:918–924
116. Staffans OJ (2001) J-preserving well-posed linear systems. Int J Appl Math Comput Sci
11(6):1361–1378
117. Pavel L, Fairman W (1997) Nonlinear H∞ control: a J-dissipative approach. IEEE Trans
Autom Control 42(12):1636–1653
118. Johannessen E, Egeland O (1995) Synthesis of positive real H∞ controller. In: Proceedings
of the american control conference. Seattle, Washington, pp 2437–2438
119. Apkarian P, Noll D (2006) Nonsmooth H∞ synthesis. IEEE Trans Autom Control 51(1):71–86
120. Apkarian P, Noll D (2006) Erratum to “nonsmooth H∞ synthesis”. IEEE Trans Autom Control
51(2):382
121. Misgeld BJE, Hewing L, Liu L, Leonhardt S (2019) Closed-loop positive real optimal control
of variable stiffness actuators. Control Eng Pract 82:142–150
122. Geromel JC, Gapski PB (1997) Synthesis of positive real H2 controllers. IEEE Trans Autom
Control 42(7):988–992
123. Niculescu SI (1997) Systèmes à retard: aspects qualitatifs sur la stabilité et la stabilisation.
Diderot Editeur, Arts et Sciences, Paris
124. Niculescu SI (2001) Delay effects on stability: a robust control approach. Lecture notes in
control and information sciences, vol 269. Springer, London
125. de Souza CE, Xie L (1992) On the discrete-time bounded real Lemma with application in the
characterization of static state feedback H∞ controllers. Syst Control Lett 18(1):61–71
126. James MR, Smith MC, Vinnicombe G (2005) Gap metrics, representations, and nonlinear
robust stability. SIAM J Control Optim 43(5):1535–1582
127. Dower PM, James MR (1998) Dissipativity and nonlinear systems with finite power gain. Int
J Robust Nonlinear Control 8:699–724
128. Ball JA, Helton JW (1996) Viscosity solutions of Hamilton-Jacobi equations arising in non-
linear H∞-control. J Math Syst Estim Control 6(1):1–22
129. Lee PH, Kimura H, Soh YC (1996) On the lossless and J-lossless embedding theorems in
H∞. Syst Control Lett 29:1–7
130. Kimura H (1997) Chain scattering approach to H∞ control. Birkhauser, Boston
131. James MR, Petersen IR (2005) A nonsmooth strict bounded real lemma. Syst Control Lett
54:83–94
132. Imura JI, Sugie T, Yoshikawa T (1996) A Hamilton-Jacobi inequality approach to the strict
H∞ control problem of nonlinear systems. Automatica 32(4):645–650
133. Popov VM (1964) Hyperstability of automatic systems with several nonlinear elements. Revue
roumaine des sciences et techniques, série électrotech. et énerg 9(1):35–45
134. Popov VM (1964) Hyperstability and optimality of automatic systems with several control
functions. Rev Roum Sci Techn Sér Electrotechn et Energ 9(4):629–690
135. Faurre P, Clerget M, Germain F (1979) Opérateurs rationnels positifs: application à
l'hyperstabilité et aux processus aléatoires. Méthodes Mathématiques de l'Informatique.
Dunod, Paris (in French)
136. Anderson BDO (1968) A simplified viewpoint of hyperstability. IEEE Trans Autom Control
13(3):292–294
137. Landau ID (1972) A generalization of the hyperstability conditions for model reference adap-
tive systems. IEEE Trans Autom Control 17:246–247
138. Ionescu T (1970) Hyperstability of linear time varying discrete systems. IEEE Trans Autom
Control 15:645–647
139. Landau ID (1974) An asymptotic unbiased recursive identifier for linear systems. In: Pro-
ceedings of the IEEE Conference on Decision and Control including the 13th Symposium
on Adaptive Processes, Phoenix, Arizona, pp 288–294
140. de la Sen M (1997) A result on the hyperstability of a class of hybrid dynamic systems. IEEE
Trans Autom Control 42(9):1335–1339
141. Chang KM (2001) Hyperstability approach to the synthesis of adaptive controller for uncer-
tain time-varying delay systems with sector bounded nonlinear inputs. Proc Inst Mech Eng
Part I: J Syst Control Eng 215(5):505–510
142. Meyer-Base A (1999) Asymptotic hyperstability of a class of neural networks. Int J Neural
Syst 9(2):95–98
143. de la Sen M, Jugo J (1998) Absolute stability and hyperstability of a class of hereditary
systems. Informatica 9(2):195–208
144. Schmitt G (1999) Frequency domain evaluation of circle criterion, popov criterion and off-axis
circle criterion in the MIMO case. Int J Control 72(14):1299–1309
145. de la Sen M (2005) Some conceptual links between dynamic physical systems and operator
theory issues concerning energy balances and stability. Informatica 16(3):395–406
146. Kohlberg E, Mertens J (1986) On the strategic stability of equilibria. Econometrica 54:1003–
1039
Chapter 6
Dissipative Physical Systems
Lagrangian systems arose from variational calculus and gave a first general analytical
definition of physical dynamical systems in analytical mechanics [1–3]. They also
allow one to describe the dynamics of various engineering systems, such as electro-
mechanical systems or electrical circuits. They also gave rise to intensive work in
control, aimed at deriving control laws that take into account the structure of the
system's dynamics obtained from energy-based modeling [4–6]. In this section, we
shall present the definition of controlled Lagrangian systems; particular attention
will be given to the expression of the interaction of a system with its environment.
6.1 Lagrangian Control Systems

In this section, the definition of Lagrangian systems with external forces on Rⁿ, and
the definition of Lagrangian control systems derived from it, are briefly recalled.
Definition 6.1 (Lagrangian systems with external forces) Consider a configuration
manifold Q = Rn whose points are denoted by q ∈ Rn and are called generalized
coordinates. Denote by T Q = R2n its tangent space and its elements by (q, q̇) ∈ R2n ,
where q̇ is called generalized velocity. A Lagrangian system with external forces on
the configuration space Q = Rn is defined by a real function L(q, q̇), from the
tangent space T Q to R called Lagrangian function and the Lagrangian equations:
$$\frac{d}{dt}\frac{\partial L}{\partial\dot{q}}(q,\dot{q})-\frac{\partial L}{\partial q}(q,\dot{q})=F,\qquad(6.1)$$

where F ∈ Rⁿ is the vector of generalized forces acting on the system, and ∂F(x)/∂x
denotes the gradient of a function F(x) with respect to x.
Remark 6.2 In this definition, the configuration space is the real vector space Rn
to which we shall restrict ourselves hereafter, but in general one may consider a
differentiable manifold as configuration space [3]. Considering real vector spaces
as configuration spaces corresponds actually to considering a local definition of a
Lagrangian system.
If the vector of external forces F(·) is the vector of control inputs, then the Lagrangian
control system is fully actuated. Such models arise, for instance, for fully actuated
kinematic chains.
Example 6.3 (Harmonic oscillator with external force) Let us consider the very
simple example of the linear mass–spring system consisting of a mass attached to
a fixed frame through a spring and subject to a force F. The coordinate q of the
system is the position of the mass with respect to the fixed frame, and the Lagrangian
function is given by L(q, q̇) = K (q̇) − U (q), where K (q̇) = 21 m q̇ 2 is the kinetic
coenergy of the mass and U (q) = 21 kq 2 is the potential energy of the spring. Then,
the Lagrangian system with external force is given by m q̈(t) + kq(t) = F(t).
Lagrangian systems with external forces satisfy, by construction, a power balance
equation that leads to some passivity property.
Lemma 6.4 (Lossless Lagrangian systems with external forces) A Lagrangian sys-
tem with external forces (6.1) satisfies the following power balance equation:
Fᵀq̇ = dH/dt,        (6.2)

where the real function H(·) is obtained by the Legendre transformation of the
Lagrangian function L(q, q̇) with respect to the generalized velocity q̇, and is defined
by

H(q, p) = q̇ᵀp − L(q, q̇),        (6.3)

with the generalized momentum

p(q, q̇) = ∂L/∂q̇ (q, q̇),        (6.4)

and the Lagrangian function is assumed to be hyperregular [3], in such a way that the
map from the generalized velocities q̇ to the generalized momenta p is bijective. If,
moreover, the function H(·) is bounded from below, then the Lagrangian system with
external forces is lossless with respect to the supply rate Fᵀq̇, with storage function
H(·).
Proof Let us first derive the power balance equation by computing Fᵀq̇, using the
Lagrangian equation (6.1) and the definition of the generalized momentum (6.4). We
get

$$\dot{q}^{T}F=\dot{q}^{T}\left(\frac{d}{dt}\frac{\partial L}{\partial\dot{q}}(q,\dot{q})-\frac{\partial L}{\partial q}(q,\dot{q})\right)=\dot{q}^{T}\frac{dp}{dt}-\dot{q}^{T}\frac{\partial L}{\partial q}=\frac{d}{dt}\left(\dot{q}^{T}p\right)-\ddot{q}^{T}p+\ddot{q}^{T}\frac{\partial L}{\partial\dot{q}}-\frac{d}{dt}L(q,\dot{q})=\frac{d}{dt}\left(\dot{q}^{T}p-L(q,\dot{q})\right)=\frac{dH}{dt}.\qquad(6.5)$$
Then, using as outputs the generalized velocities and assuming that the function H (·)
is bounded from below, the Lagrangian system with external forces is passive and
lossless with storage function H (·).
Remark 6.5 The name power balance equation for (6.2) comes from the fact that for
physical systems, the supply rate is the power ingoing the system due to the external
force F, and that the function H (·) is equal to the total energy of the system.
Example 6.6 Consider again Example 6.3 of the harmonic oscillator. In this case,
the supply rate is the mechanical power ingoing the system, and the storage function
is H ( p, q) = K ( p) + U (q) and is the total energy of the system, i.e., the sum of the
elastic potential and kinetic energy.
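The power balance (6.2) can be checked numerically on this example (a sketch; the values m = 1, k = 1 and the force F(t) = cos t are assumed): integrating mq̈ + kq = F, the energy change H(t) − H(0) must equal the supplied energy ∫₀ᵗ Fq̇ dτ:

```python
import math

m, k = 1.0, 1.0                       # assumed mass and stiffness
F = lambda t: math.cos(t)             # assumed external force

def simulate(T=5.0, dt=1e-4):
    """Integrate m*qdd + k*q = F(t) with RK4 and accumulate the supplied
    energy int_0^T F*qdot dt; returns (H(T) - H(0), supplied energy)."""
    q, v, t, supplied = 0.1, 0.0, 0.0, 0.0
    H = lambda q, v: 0.5 * m * v**2 + 0.5 * k * q**2   # total energy
    H0 = H(q, v)
    f = lambda t, q, v: (v, (F(t) - k * q) / m)
    while t < T:
        k1 = f(t, q, v)
        k2 = f(t + dt/2, q + dt/2 * k1[0], v + dt/2 * k1[1])
        k3 = f(t + dt/2, q + dt/2 * k2[0], v + dt/2 * k2[1])
        k4 = f(t + dt, q + dt * k3[0], v + dt * k3[1])
        supplied += F(t) * v * dt     # supply rate F*qdot, cf. (6.2)
        q += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += dt
    return H(q, v) - H0, supplied

dH, W = simulate()
assert abs(dH - W) < 1e-2             # losslessness: stored = supplied
```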
Actually, the definition of Lagrangian systems with external forces may be too restric-
tive since, for instance, the external forces F may not correspond to actual inputs. For
example, they may be linear functions of the inputs u:

F = Jᵀ(q)u,        (6.6)

with associated outputs y = J(q)q̇. In order to cope with such situations, a more general
definition of Lagrangian systems with external controls is given, which consists of
considering that the input directly modifies the Lagrangian function [4, 5].
This definition includes the Lagrangian systems with external forces (6.1), by choos-
ing the Lagrangian function to be
It includes as well the case when the external forces are given by (6.6) as a
linear function of the inputs, where the matrix J(q) is the Jacobian of some geometric
function C(q) from Rⁿ to Rᵖ:

J(q) = ∂C/∂q (q).        (6.9)
However, it also encompasses Lagrangian systems where the inputs do not appear
as forces as may be seen on the following example.
Example 6.8 Consider the harmonic oscillator, and assume now that the spring is no
longer attached to a fixed basis but to a moving basis with its position u considered
as an input. Let us choose as coordinate q, the position of the mass with respect to
the fixed frame. The displacement of the spring then becomes q − u, the potential
energy becomes: U (q, u) = 21 k(q − u)2 , and the Lagrangian becomes
L(q, q̇, u) = ½mq̇² − ½k(q − u)².        (6.11)

The Lagrangian control system then becomes

mq̈(t) + k(q(t) − u(t)) = 0.        (6.12)
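A numerical sanity check of this moving-base oscillator (a sketch; m = 1, k = 4 and the base motion u(t) = sin t are assumed values): the Lagrangian (6.11) yields mq̈ + k(q − u) = 0, which implies d/dt[½mq̇² + ½k(q − u)²] = −k(q − u)u̇, verifiable along a simulated trajectory:

```python
import math

m, k = 1.0, 4.0                       # assumed parameters
u  = lambda t: math.sin(t)            # assumed base position
du = lambda t: math.cos(t)            # base velocity

def check(T=3.0, dt=1e-4):
    """Integrate m*qdd + k*(q - u(t)) = 0 with semi-implicit Euler and
    compare the change of V = m*qd^2/2 + k*(q - u)^2/2 with the integral
    of the supply rate -k*(q - u)*du/dt."""
    q, v, t, supplied = 0.5, 0.0, 0.0, 0.0
    V = lambda t, q, v: 0.5 * m * v**2 + 0.5 * k * (q - u(t))**2
    V0 = V(0.0, q, v)
    while t < T:
        supplied += -k * (q - u(t)) * du(t) * dt
        v += dt * (-k * (q - u(t)) / m)
        q += dt * v
        t += dt
    return V(t, q, v) - V0, supplied

dV, W = check()
assert abs(dV - W) < 1e-2
```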
The formalism of Lagrangian control systems also allows one to consider more inputs
than the number of generalized velocities, as may be seen in the next example.
Example 6.9 Consider again the harmonic oscillator, and assume that the basis of
the spring is moving with controlled position u₁, and that a force u₂ is exerted
on the mass. Consider again as generalized coordinate the position q ∈ R of the
mass with respect to an inertial frame. Then, considering the Lagrangian function

L(q, q̇, u) = ½mq̇² − ½k(q − u₁)² + qu₂,        (6.13)

one obtains the Lagrangian control system

mq̈(t) + k(q(t) − u₁(t)) − u₂(t) = 0.        (6.14)
where v_C denotes the voltage at the port of the capacitor and i_{L₂} denotes the current
in the inductor labeled L₂. The vector of generalized coordinates is hence obtained
by integration of the vector of generalized velocities:

q = (φ_C, Q_{L₂})ᵀ.        (6.18)
and the Kirchhoff's laws. The Lagrangian function is constructed as the sum of four
terms:

L(q, q̇, u) = Ê(q̇) − E(q) + C(q, q̇) + I(q, u).        (6.19)

The function Ê(q̇) is the sum of the electric coenergy of the capacitors in the tree
Γ₁ and the magnetic coenergy of the inductors in the cotree Λ₂, which is, in this
example, in the case of linear elements:

Ê(q̇) = ½Cv_C² + ½L₂i_{L₂}² = ½Cq̇₁² + ½L₂q̇₂².        (6.20)

The function E(q) is the sum of the magnetic energy of the inductors in the cotree
Λ₁ and the electric energy of the capacitors in the tree Γ₂, which is

E(q) = (1/(2L₁))φ_{L₁}² = (1/(2L₁))(q₁ + q₁⁰)²,        (6.21)
where the relation between the flux φ L 1 of the inductor L 1 was obtained by integrating
the Kirchhoff’s mesh law on the mesh consisting of the capacitor C and the inductor
L 1 , yielding φ L 1 = (q1 + q10 ), and q10 denotes some real constant which may be
chosen to be null. The function C (q, q̇) accounts for the coupling between the
capacitors in the tree Γ1 and inductors in the cotree Λ2 , depending on the topological
interconnection between them and is
Cq̈₁(t) − q̇₂(t) + (1/L₁)(q₁(t) + q₁⁰) = 0        (6.24)

L₂q̈₂(t) + q̇₁(t) − u(t) = 0.        (6.25)
Note that this system is of order 4 (it has two generalized coordinates), which does
not correspond to the order of the electrical circuit which, by topological inspection,
would be 3; indeed, one may choose a maximal tree containing the capacitor and
having a cotree containing the two inductors. We shall come back to this remark and
expand it in the sequel, when we treat the same example as a port-controlled
Hamiltonian system.
This example illustrates that, although the derivation of a Lagrangian system is based
on the determination of some energy functions and other physical properties of the
system, its structure may not agree with the physical insight. Indeed, the Lagrangian
control systems are defined on the state space TQ, the tangent space to the con-
figuration space. This state space has a very special structure; it is endowed with a
symplectic form, which is used to give an intrinsic definition of Lagrangian systems
[3]. A very simple property of this state space is that its dimension is even (there
are as many generalized coordinates as generalized velocities). Already this property
may be in contradiction with the physical structure of the system. Lagrangian control
systems, in the same way as the Lagrangian systems with external forces, satisfy by
construction a power balance equation and a losslessness (passivity) property [16].
$$u^{T}z=\frac{dE}{dt},\qquad(6.26)$$

where

$$z_{i}=\sum_{j=1}^{n}\left(-\frac{\partial^{2}H}{\partial q_{j}\partial u_{i}}\frac{\partial H}{\partial p_{j}}+\frac{\partial^{2}H}{\partial p_{j}\partial u_{i}}\frac{\partial H}{\partial q_{j}}\right),\qquad(6.27)$$
and the real function E is obtained by the Legendre transformation of the Lagrangian
function L(q, q̇, u) with respect to the generalized velocity q̇ and the inputs, and is
defined by

E(q, p, u) = H(q, p, u) − uᵀ(∂H/∂u),        (6.28)

with

H(q, p, u) = q̇ᵀp − L(q, q̇, u),        (6.29)

p(q, q̇, u) = ∂L/∂q̇ (q, q̇),        (6.30)
and the Lagrangian function is assumed to be hyperregular [3], in such a way that the
map from the generalized velocities q̇ to the generalized momenta p is bijective for
any u. If, moreover, the Hamiltonian (6.29) is affine in the inputs (hence the function
E is independent of the inputs), the controlled Lagrangian system will be called an
affine Lagrangian control system. Assuming in addition that E(q, p) is bounded from
below, the affine Lagrangian control system is lossless with respect to the supply rate
uᵀz, with storage function E(q, p).
As we have seen above, the affine Lagrangian control systems are lossless with
respect to the storage function E(q, p), which in physical systems may be chosen
to be equal to the internal energy of the system. However, in numerous systems,
dissipation has to be included. For instance, for a robotic manipulator, the dissipation will be due to the friction at the joints and in the actuators. This may be done by introducing a Rayleigh dissipation function R(q̇) satisfying
6.1 Lagrangian Control Systems 437
    q̇^T ∂R/∂q̇ (q̇) ≥ 0.    (6.31)
Example 6.13 Consider the example of the vertical motion of a magnetically levi-
tated ball. There are three types of energy involved: the magnetic energy, the kinetic
energy of the ball, and its potential energy. The vector of generalized coordinates
may be chosen as a vector in R2 , where q1 denotes a primitive of the current in
the inductor (according to the procedure described in Example 6.10); q2 = z is the
altitude of the sphere. The Lagrangian function may then be chosen as the sum of
three terms:
    L(q, q̇, u) = Ê_m(q, q̇) + Ê_k(q̇) − U(q) + I(q, u).    (6.33)
The function Ê_m(q, q̇) is the magnetic coenergy of the inductor and depends on the current in the coil as well as on the altitude of the sphere:
    Ê_m(q, q̇) = ½ L(q₂) q̇₁²,    (6.34)
where

    L(q₂) = L₀ + k/(q₂ − z₀).    (6.35)
The function Ê_k(q̇) is the kinetic coenergy of the ball, i.e., Ê_k(q̇) = ½ m q̇₂². The function U(q) denotes the potential energy due to gravity, U(q) = g q₂. The interaction potential is I(q, u) = q₁ u. In order to take into account the dissipation represented by the resistor R, one also defines the following Rayleigh potential function: R(q̇) = ½ R q̇₁². This leads to the following Lagrangian control system with
dissipation:
    L(q₂(t)) q̈₁(t) + ∂L/∂q₂ (q₂(t)) q̇₂(t) q̇₁(t) + R q̇₁(t) − u(t) = 0    (6.36)

    m q̈₂(t) − ½ ∂L/∂q₂ (q₂(t)) q̇₁²(t) + g = 0.    (6.37)
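A quick numerical experiment makes the dissipative power balance of this model concrete: integrating (6.36)–(6.37) and accumulating the supplied power u q̇₁ minus the dissipated power R q̇₁² should reproduce the variation of the total energy E = ½ L(q₂) q̇₁² + ½ m q̇₂² + g q₂. The sketch below checks this with a Runge–Kutta integration; all parameter values are hypothetical, chosen only to keep the trajectory away from the singularity q₂ = z₀:

```python
import numpy as np

# All numerical values are hypothetical, for illustration only
L0, k, z0 = 0.1, 0.01, -0.05      # inductance law L(q2) = L0 + k/(q2 - z0), Eq. (6.35)
m, g, R = 0.1, 9.81, 2.0

Lind  = lambda q2: L0 + k / (q2 - z0)
dLind = lambda q2: -k / (q2 - z0) ** 2           # dL/dq2
u     = lambda t: 5.0 + 0.5 * np.sin(10 * t)     # arbitrary voltage input

def f(t, x):
    q1, q2, dq1, dq2 = x
    ddq1 = (u(t) - dLind(q2) * dq2 * dq1 - R * dq1) / Lind(q2)   # Eq. (6.36)
    ddq2 = (0.5 * dLind(q2) * dq1 ** 2 - g) / m                  # Eq. (6.37)
    return np.array([dq1, dq2, ddq1, ddq2])

def energy(x):   # E = magnetic coenergy + kinetic energy + potential energy
    q1, q2, dq1, dq2 = x
    return 0.5 * Lind(q2) * dq1 ** 2 + 0.5 * m * dq2 ** 2 + g * q2

x = np.array([0.0, 0.01, 0.0, 0.0]); x0 = x.copy()
t, dt, work = 0.0, 1e-5, 0.0
for _ in range(1000):                 # 10 ms of motion (ball stays away from z0)
    k1 = f(t, x); k2 = f(t + dt/2, x + dt/2*k1)
    k3 = f(t + dt/2, x + dt/2*k2); k4 = f(t + dt, x + dt*k3)
    xn = x + dt/6 * (k1 + 2*k2 + 2*k3 + k4)       # RK4 step
    dq1m = 0.5 * (x[2] + xn[2])
    work += dt * (u(t + dt/2) * dq1m - R * dq1m**2)  # supplied minus dissipated power
    x, t = xn, t + dt

residual = abs(energy(x) - energy(x0) - work)
print(residual)   # at integration-error level: dE/dt = u q̇1 − R q̇1²
```

The residual is at the level of the integration error, illustrating that the Rayleigh term R q̇₁² is exactly the dissipated power.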
where U (q) is a real function from the configuration space Q on R and is called
potential energy and T (q, q̇) is a real function from T Q on R, called kinetic energy
and is defined by
    T(q, q̇) = ½ q̇^T M(q) q̇,    (6.39)

where the matrix M(q) ∈ R^{n×n}, M(q)^T = M(q) ≻ 0, is called the inertia matrix.
Considering the special form of the Lagrangian function, the Lagrangian equations
(6.1) may be written in some special form which is particularly useful for deriving
stabilizing controllers as will be presented in the subsequent chapters.
Remark 6.15 The inertia matrix may be just positive semidefinite in multibody appli-
cations, where bodies’ coordinates are chosen as the so-called natural coordinates
[17]. The case M(q) ⪰ 0 requires a careful treatment [18].
Lemma 6.16 (Lagrangian equations for simple mechanical systems) The Lagrangian equations (6.1) for a simple mechanical system may be written as

    M(q(t)) q̈(t) + C(q(t), q̇(t)) q̇(t) + g(q(t)) = τ(t),    (6.40)

where g(q) = dU/dq (q) ∈ R^n, the matrix C(q, q̇) has entries

    C_ij(q, q̇) = Σ_{k=1}^{n} Γ_ijk q̇_k,    (6.41)

and the Γ_ijk are called the Christoffel's symbols associated with the inertia matrix M(q) and are defined by
    Γ_ijk = ½ ( ∂M_ij/∂q_k + ∂M_ik/∂q_j − ∂M_kj/∂q_i ).    (6.42)
The matrix C(q, q̇) defined in this way satisfies

    q̇^T ( Ṁ(q) − 2C(q, q̇) ) q̇ = 0 for all q̇,    (6.43)

as can be seen from the power balance computation:

    τ^T q̇ = dH/dt (q, p) = q̇^T M(q) q̈ + ½ q̇^T Ṁ(q) q̇ + g(q)^T q̇
          = q̇^T ( −C(q, q̇) q̇ − g(q) + τ ) + ½ q̇^T Ṁ(q) q̇ + g(q)^T q̇    (6.44)
          = q̇^T τ + ½ q̇^T ( Ṁ(q) − 2C(q, q̇) ) q̇,
from which (6.43) follows. Such forces are sometimes called gyroscopic [19]. It
is noteworthy that (6.43) does not mean that the matrix Ṁ(q) − 2C(q, q̇) is skew-
symmetric. Skew-symmetry is true only for the particular definition of the matrix
C(q, q̇) using Christoffel’s symbols.
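This can be checked numerically. The sketch below builds the Christoffel symbols (6.42) by finite differences for a hypothetical two-link inertia matrix, assembles C_ij = Σ_k Γ_ijk q̇_k (index conventions for this assembly vary across references), and verifies that Ṁ − 2C is skew-symmetric at a random state:

```python
import numpy as np

# Hypothetical two-link arm inertia matrix (standard planar form)
a, b, c = 2.0, 0.5, 1.0
def M(q):
    return np.array([[a + 2*b*np.cos(q[1]), c + b*np.cos(q[1])],
                     [c + b*np.cos(q[1]),   c]])

def dM(q, h=1e-6):
    """dM[i, j, k] = ∂M_ij/∂q_k by central differences."""
    n = q.size
    out = np.zeros((n, n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        out[:, :, k] = (M(q + e) - M(q - e)) / (2*h)
    return out

def Coriolis(q, dq):
    """C_ij = sum_k Gamma_ijk dq_k, with Gamma_ijk as in Eq. (6.42)."""
    d = dM(q)
    Gamma = 0.5 * (d + d.transpose(0, 2, 1) - d.transpose(2, 1, 0))
    return np.einsum('ijk,k->ij', Gamma, dq)

rng = np.random.default_rng(1)
q, dq = rng.standard_normal(2), rng.standard_normal(2)
Mdot = np.einsum('ijk,k->ij', dM(q), dq)      # dM/dt = sum_k ∂M/∂q_k q̇_k
N = Mdot - 2 * Coriolis(q, dq)
skew_residual = np.max(np.abs(N + N.T))
print(skew_residual)   # ≈ 0: N is skew-symmetric
```

Note that the cancellation only uses the symmetry of M(q); with a different assembly of the same symbols, only the weaker quadratic-form identity (6.43) would survive, as the remark points out.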
Remark 6.19 The definition of a positive-definite symmetric inertia matrix for simple mechanical systems may be expressed in some coordinate-independent way by
using so-called Riemannian manifolds [1]. In [20, Chap. 4], the properties of the
Christoffel’s symbols, that shall be used in the sequel for the synthesis of stabilizing
controllers, may also be related to properties of Riemannian manifolds.
A class of systems which typically may be represented in this formulation is the
dynamics of multibody systems, for which systematic derivation procedures were
obtained (see [7] and the references therein).
The Hamiltonian formalism was derived from the Lagrangian one at the end of the nineteenth century, and has now
become the fundamental structure of the mathematical description of physical sys-
tems [1, 3]. In particular, it allowed one to deal with symmetry and reduction and
also to describe the extension of classical mechanics to quantum mechanics.
Assume that the map from generalized velocities to generalized momenta is invert-
ible, and consider the Legendre transformation with respect to q̇ of the Lagrangian
function called Hamiltonian function:
Then, the Lagrangian system with external forces is equivalent to the following
standard Hamiltonian system:
    q̇(t) = ∂H₀/∂p (q(t), p(t))
    ṗ(t) = − ∂H₀/∂q (q(t), p(t)) + F(t).    (6.47)
In the same way as the Lagrangian system with external forces may be expressed as a control Lagrangian system (for which the
inputs are an argument of the Lagrangian function), the standard Hamiltonian system
with external forces (6.48) may be expressed as Hamiltonian system, where the
Hamiltonian function depends on the inputs H (q, p, u) = H0 (q, p) − q T F, which
yields
    (q̇(t); ṗ(t)) = J_s (∂H₀/∂q; ∂H₀/∂p) + J_s (−I_n; 0_n) F(t) = J_s (∂H₀/∂q; ∂H₀/∂p) + (0_n; I_n) F(t).    (6.50)
As the simplest example let us consider the harmonic oscillator with an external
force.
Example 6.21 (Harmonic oscillator with external force) First, let us recall that in
its Lagrangian representation (see Example 6.3), the state space is given by the posi-
tion of the mass (with respect to the fixed frame) and its velocity. Its Lagrangian
is L(q, q̇, F) = ½ m q̇² − ½ k q² + q^T F. Hence, the (generalized) momentum is p = ∂L/∂q̇ = m q̇. The Hamiltonian function obtained through the Legendre transformation
is H(q, p, F) = H₀(q, p) − q^T F, where the Hamiltonian function H₀(q, p) represents the total internal energy H₀(q, p) = K(p) + U(q), the sum of the kinetic energy K(p) = p²/(2m) and the potential energy U(q). The Hamiltonian system
becomes
    (q̇(t); ṗ(t)) = [0 1; −1 0] (k q(t); p(t)/m) + (0; 1) F(t).    (6.51)

More generally, input–output Hamiltonian systems are defined by a Hamiltonian function of the form

    H(x, u) = H₀(x) − Σ_{i=1}^{m} H_i(x) u_i,    (6.52)
composed of the sum of the internal Hamiltonian H0 (x) and a linear combination of
m interaction Hamiltonian functions Hi (x) and the dynamic equations
    ẋ(t) = J_s ∇H₀(x(t)) + Σ_{i=1}^{m} J_s ∇H_i(x(t)) u_i(t)
    ỹ_i(t) = H_i(x(t)), i = 1, …, m.    (6.53)
One may note that an input–output Hamiltonian system (6.53) is a nonlinear system
affine in the inputs in the sense of [21, 22]. It is composed of a Hamiltonian drift vector
field Js ∇ H0 (q, p) and the input vector fields Js ∇ Hi (q, p) are also Hamiltonian and
generated by the interaction Hamiltonian functions. The outputs are the Hamiltonian
interaction functions and are called natural outputs [16]. We may already note here
that these outputs, although called “natural”, are not the outputs conjugated to the
inputs for which the system is passive as will be shown in the sequel.
Example 6.23 Consider again Example 6.9. The state space is given by the displace-
ment of the spring and its velocity. Its Lagrangian is
    L(q, q̇, F) = ½ m (q̇ + u₁)² − ½ k q² + q u₂.    (6.54)
If, moreover, the Hamiltonian function H₀(x) is bounded from below, then the input–output Hamiltonian system is lossless with respect to the supply rate u^T ỹ̇, with storage function H₀(q, p).
Let us comment on this power balance equation, using the example of the harmonic
oscillator with moving frame and continue Example 6.23.
6.2 Hamiltonian Control Systems 443
Example 6.25 The natural outputs are then the momentum of the system: ỹ1 =
H1 (q, p) = p which is conjugated to the input u 1 (the velocity of the basis of the
spring) and the displacement of the spring ỹ2 = H2 (q, p) = q which is conjugated
to the input u 2 (the external force exerted on the mass). The passive outputs defining
the supply rate are then
    ỹ̇₁ = ṗ = −k q + u₂,  and  ỹ̇₂ = q̇ = p/m − u₁.    (6.58)
Computing the supply rate, the terms in the inputs cancel each other and one obtains

    ỹ̇₁ u₁ + ỹ̇₂ u₂ = k q u₁ + (p/m) u₂.    (6.59)
This is precisely the sum of the mechanical power supplied to the mechanical system
by the source of displacement at the basis of the spring and the source of force at the
mass. This indeed is equal to the variation of the total energy of the mechanical sys-
tem. However, it may be noticed that the natural outputs as well as their derivatives
are not the variables which one uses in order to define the interconnection of this
system with some other mechanical system: the force at the basis of the spring which
should be used to write a force balance equation at that point, and the velocity of the
mass m which should be used in order to write the kinematic interconnection of the
mass (their dual variables are the input variables). In general, input–output Hamil-
tonian systems (or their Lagrangian counterpart) are not well suited for expressing
their interconnection.
Example 6.26 Consider the LC circuit of order 3 in Example 6.10. In the Lagrangian
formulation, the generalized velocities were q̇1 = VC , the voltage of the capacitor,
q̇2 = i L 2 , the current of the inductor L 2 , and the generalized coordinates were some
primitives denoted by q1 = φC and q2 = Q L 2 . The Lagrangian function was given
by L(q, q̇, u) = Ê(q̇) − E(q) + Ĉ(q, q̇) + I(q, u), where Ê(q̇) is the sum of the electric coenergy of the capacitor and of the inductor L₂, E(q) is the magnetic energy of the inductor L₁, Ĉ(q, q̇) is a coupling function between the capacitor and the inductor L₂, and I(q, u) is the interaction potential function. Let us now define the generalized momenta. The first momentum variable is
the generalized momenta. The first momentum variable is
    p₁ = ∂L/∂q̇₁ = ∂Ê/∂q̇₁ + ∂Ĉ/∂q̇₁ = ∂Ê/∂q̇₁ = C q̇₁ = Q_C,    (6.60)
and is the electrical charge of the capacitor, i.e., its energy variable. The second
momentum variable is
    p₂ = ∂L/∂q̇₂ = ∂Ê/∂q̇₂ + ∂Ĉ/∂q̇₂ = L₂ q̇₂ + q₁ = φ_L2 + φ_C,    (6.61)
and is the sum of the total magnetic flux of the inductor L₂ (its energy variable)
and of the fictitious flux at the capacitor φC . The Hamiltonian function is obtained
as the Legendre transformation of L(q, q̇, u) with respect to q̇:
where Hi = q2 and H0 is
    H₀(q, p) = (1/(2L₁)) q₁² + (1/(2C)) p₁² + (1/(2L₂)) (p₂ − q₁)².    (6.63)
Note that the function H0 ( p, q) is the total electromagnetic energy of the circuit, as the
state variables are equal to the energy variables of the capacitors and inductors. Indeed
using Kirchhoff’s law on the mesh containing the inductor L 1 and the capacitor C,
up to a constant q1 = φC = φ L 1 is the magnetic flux in the inductor, by definition
of the momenta p1 = Q C is the charge of the capacitor, and p2 − q1 = φ L 2 is the
magnetic flux of the inductor L 2 . This input–output Hamiltonian system again has
order 4 (and not the order of the circuit). But one may note that the Hamiltonian
function H0 does not depend on q2 . Hence, it has a symmetry and the drift dynamics
may be reduced to a third-order system (the order of the circuit) and in a second
step to a second-order system [3]. However, the interaction Hamiltonian depends on
the symmetry variable q2 , so the controlled system may not be reduced to a lower
order input–output Hamiltonian system. The power balance equation (6.57) becomes
dH₀/dt = u q̇₂ = i_L2 u, which is exactly the power delivered by the source, as the current i_L2 is also the current flowing in the voltage source.
The preceding input–output Hamiltonian systems may be extended, by considering
more general structure matrices than the symplectic structure matrix Js which appears
in the reduction of Hamiltonian systems with symmetries. Indeed, one may consider
so-called Poisson structure matrices, that are matrices J (x) depending on x(t) ∈ R2n ,
skew-symmetric and satisfying the Jacobi identities:
    Σ_{l=1}^{2n} ( J_lj(x) ∂J_ik/∂x_l (x) + J_li(x) ∂J_kj/∂x_l (x) + J_lk(x) ∂J_ji/∂x_l (x) ) = 0.    (6.64)
Remark 6.27 These structure matrices are the local definition of Poisson brackets
defining the geometrical structure of the state space [1, 3] of Hamiltonian systems
defined on differentiable manifold endowed with a Poisson bracket. Such systems
appear, for instance, in the Hamiltonian formulation of a rigid body spinning around
its center of mass (the Euler–Poinsot problem).
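For the rigid body mentioned in this remark, the structure matrix is linear in the state, J_ik(x) = −ε_ikm x_m, with ε the Levi-Civita symbol, so the Jacobi identities (6.64) can be verified directly. The following sketch (the numerical state is arbitrary) checks both the skew-symmetry and the identity (6.64):

```python
import numpy as np

# Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

def J(x):
    """Rigid-body (Lie-Poisson) structure matrix: J_ik(x) = -eps_ikm x_m."""
    return -np.einsum('ikm,m->ik', eps, x)

dJ = -eps          # ∂J_ik/∂x_l = -eps_ikl, since J is linear in x

x = np.array([0.3, -1.2, 0.7])      # arbitrary state (angular momentum)
Jx = J(x)
skew_ok = np.allclose(Jx, -Jx.T)

# Jacobi identity (6.64):
# sum_l [J_lj ∂J_ik/∂x_l + J_li ∂J_kj/∂x_l + J_lk ∂J_ji/∂x_l] = 0 for all i, j, k
S = (np.einsum('lj,ikl->ijk', Jx, dJ)
     + np.einsum('li,kjl->ijk', Jx, dJ)
     + np.einsum('lk,jil->ijk', Jx, dJ))
jacobi_residual = np.max(np.abs(S))
print(skew_ok, jacobi_residual)     # True, ≈ 0
```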
Remark 6.28 Poisson structure matrices may be related to symplectic structure
matrices as follows. Note first that, by its skew-symmetry, the rank of the struc-
ture matrix of a Poisson bracket at any point is even, say 2n (then one says
also that the Poisson bracket has the rank 2n). Suppose moreover that the struc-
ture matrix has constant rank 2n in a neighborhood of a point x₀ ∈ M. Then, there exist local canonical coordinates in which the structure matrix takes the block form diag(J_s, 0). One may hence see a symplectic matrix appear associated with the first 2n coordinates. The remaining coordinates correspond to so-called distinguished functions or
Casimir functions, which define an important class of dynamical invariants of the
Hamiltonian system [3].
With such structure matrices, the input–output Hamiltonian systems may be gener-
alized to Poisson control systems as follows [21].
    ẋ(t) = J(x(t)) ∇H₀(x(t)) − Σ_{i=1}^{m} J(x(t)) ∇H_i(x(t)) u_i(t).    (6.66)
As the examples of the LC circuit and of the levitated ball have shown, although
the input–output Hamiltonian systems represent the dynamics of physical systems
in a way that the conservation of energy is embedded in the model, they fail to rep-
resent accurately some other of their structural properties. Therefore, another type of Hamiltonian system, called port-controlled Hamiltonian systems, was introduced, which allows one to represent the energy conservation as well as some other structural properties of physical systems, mainly related to their internal interconnection
structure [20, 23].
One may note that port-controlled Hamiltonian systems, as the input–output Hamiltonian systems, are affine with respect to the inputs.
The systems (6.67) have been called port-controlled Hamiltonian systems in allusion
to the network concept of the interaction through ports [20, 23, 24]. In this case, the
Hamiltonian function corresponds to the internal energy of the system, the structure
matrix corresponds to the interconnection structure associated with the energy flows
in the system [15, 25, 26] and the interaction with the environment of the network is
defined through pairs of port variables [23, 24]. Moreover, the underlying modeling
formalism is a network formalism, which provides a practical framework to construct models of physical systems and is rooted in a firmly established tradition in engineering [27], which found its achievement in the bond graph formalism [23, 28, 29].
Port-controlled Hamiltonian systems differ from input–output Hamiltonian sys-
tems in three ways, which we shall illustrate below on some examples. First, the
structure matrix J (x) does not have to satisfy the Jacobi identities (6.64); such struc-
ture matrices indeed arise in the reduction of simple mechanical systems with nonholonomic constraints [30]. Second, the input vector fields are no longer necessarily Hamiltonian, that is, they may not derive from an interaction potential function. Third, the definition of the output is changed. The simplest examples of port-controlled
Hamiltonian system consist of elementary energy storing systems, corresponding,
for instance, to a linear spring or a capacitor.
Example 6.32 (Elementary energy storing systems) Consider the following first-
order port-controlled Hamiltonian system:
    ẋ(t) = u(t)
    y(t) = ∇H₀(x(t)),    (6.68)
where x(t) ∈ Rn is the state variable, H0 (x) is the Hamiltonian function, and the
structure matrix is equal to 0. In the scalar case, this system represents the integrator, which is obtained by choosing the Hamiltonian function to be H₀ = ½ x². This system also represents a linear spring, where the state variable x(·) is the displacement of the spring and the energy function is the elastic potential energy of the spring (for instance, H₀(x) = ½ k x², where k is the stiffness of the spring). In the same way,
(6.68) represents a capacitor with x being the charge and H0 the electrical energy
stored in the capacitor, or an inductance where x is the total magnetic flux, and H0
is the magnetic energy stored in the inductance.
In R3 such a system represents the point mass in the three-dimensional Euclidean
space with mass m, where the state variable x(t) ∈ R3 is the momentum vector, the
input u ∈ R3 is the vector of forces applied on the mass, the output vector y(t) ∈ R3
is the velocity vector, and the Hamiltonian function is the kinetic energy H₀(x) = (1/(2m)) x^T x. It may be noted that such elementary systems may take more involved forms
when the state variable belongs to some manifold different from Rn , as it is the case,
for instance, for spatial springs which deform according to rigid body displacements
[26, 31–33].
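The losslessness of these elementary storage elements is easy to verify numerically. The sketch below integrates the point mass ẋ = u, y = ∇H₀(x) under an arbitrary force input and checks that the supplied energy ∫ u^T y dt equals the increment of the kinetic energy; all numerical values are illustrative:

```python
import numpy as np

m = 2.0
H = lambda x: x @ x / (2*m)        # H0(x) = x^T x / (2m): kinetic energy of a point mass
y = lambda x: x / m                # output y = ∇H0(x): the velocity
u = lambda t: np.array([np.sin(t), np.cos(2*t), 1.0])   # arbitrary force input

x = np.array([1.0, 0.0, -1.0]); x0 = x.copy()
t, dt, supplied = 0.0, 1e-3, 0.0
for _ in range(5000):              # integrate ẋ = u on [0, 5] with the midpoint rule
    um = u(t + dt/2)
    xm = x + dt/2 * u(t)           # approximate midpoint state
    supplied += dt * um @ y(xm)    # accumulate ∫ u^T y dt
    x, t = x + dt*um, t + dt

residual = abs(H(x) - H(x0) - supplied)
print(residual)   # ≈ 0: the storage element is lossless, Eq. (6.69)
```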
Like affine Lagrangian control systems and input–output Hamiltonian systems, port-
controlled Hamiltonian systems satisfy a power balance equation and under some
assumption on the Hamiltonian function are lossless.
Lemma 6.33 (Losslessness of port-controlled Hamiltonian systems) A port-
controlled Hamiltonian system (according to Definition 6.30) satisfies the follow-
ing power balance equation:

    u^T y = dH₀/dt.    (6.69)
If moreover the Hamiltonian function H0 (x) is bounded from below, then the port-
controlled Hamiltonian system is lossless with respect to the supply rate u T y with
storage function H0 (x).
Again in the case when the Hamiltonian function is the energy, the balance equation
corresponds to a power balance expressing the conservation of energy. Let us now
consider a slightly more involved example, the LC circuit of order 3 treated here
above, in order to comment on the structure of port-controlled Hamiltonian systems
as well as to compare it to the structure of input–output and Poisson control systems.
Example 6.34 (LC circuit of order 3) Consider again the circuit of Example 6.10.
According to the partition of the interconnection graph into the spanning tree: Γ =
{C} ∪ {Su } and its cotree: Λ = {L 1 } ∪ {L 2 }, one may write Kirchhoff’s mesh law for
the meshes defined by the edges in Λ and the node law corresponding to the edges
in Γ as follows:

    (i_C; v_L1; v_L2; −i_S) = [0 −1 −1 0; 1 0 0 0; 1 0 0 −1; 0 0 1 0] (v_C; i_L1; i_L2; v_S).    (6.70)
Now, taking as state variables the energy variables (the charge Q_C of the capacitor and the total magnetic fluxes φ_L1 and φ_L2 in the two inductors), one identifies immediately the first three components of the left-hand side in (6.70) as the time derivative of the state vector x = (Q_C, φ_L1, φ_L2)^T. Denoting by H_C(Q_C), H_L1(φ_L1), and H_L2(φ_L2) the electric and magnetic energies stored in the elements, one may identify the coenergy variables as follows: v_C = ∂H_C/∂Q_C, i_L1 = ∂H_L1/∂φ_L1, and i_L2 = ∂H_L2/∂φ_L2. Hence, the first three components of the vector on the right-hand side of Eq. (6.70) may be interpreted as the components of the gradient of the total electromagnetic energy of the LC circuit: H₀(x) = H_C(Q_C) + H_L1(φ_L1) + H_L2(φ_L2). Hence, the dynamics of the LC circuit
may be written as the following port-controlled Hamiltonian system:
    ẋ(t) = J ∇H₀(x(t)) + g u(t)
    y(t) = g^T ∇H₀(x(t)),    (6.71)
where the structure matrix J and the input vector g are part of the matrix describing
Kirchhoff’s laws in (6.70) (i.e., part of the fundamental loop matrix associated with
the tree Γ):

    J = [0 −1 −1; 1 0 0; 1 0 0]  and  g = (0; 0; 1).    (6.72)
The input is u = v S and the output is the current with generator sign convention
y = −i S . In this example, the power balance Eq. (6.69) is simply interpreted as the
time derivative of the total electromagnetic energy being the power supplied by the
source. Actually this formulation is completely general to LC circuits and it may be
found in [15], as well as the comparison with the formulation in terms of Lagrangian
or input–output Hamiltonian systems [9, 15].
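The power balance (6.69) can be checked numerically on this model. The sketch below simulates (6.71) with the structure matrix and input vector of (6.72); the element values and the source signal are hypothetical, and the check relies only on the skew-symmetry of J:

```python
import numpy as np

# Structure matrix and input vector of Eq. (6.72)
J = np.array([[0., -1., -1.],
              [1.,  0.,  0.],
              [1.,  0.,  0.]])
g = np.array([0., 0., 1.])

Cc, L1, L2 = 1e-3, 0.5, 0.2            # hypothetical element values
gradH = lambda x: np.array([x[0]/Cc, x[1]/L1, x[2]/L2])    # coenergy variables
H = lambda x: 0.5*(x[0]**2/Cc + x[1]**2/L1 + x[2]**2/L2)   # total stored energy H0(x)
u = lambda t: np.sin(100*t)            # source voltage

f = lambda t, x: J @ gradH(x) + g*u(t)

x = np.array([1e-3, 0., 0.]); x0 = x.copy()   # initial charge on the capacitor
t, dt, supplied = 0.0, 1e-6, 0.0
for _ in range(20000):                 # 20 ms, RK4 integration
    k1 = f(t, x); k2 = f(t + dt/2, x + dt/2*k1)
    k3 = f(t + dt/2, x + dt/2*k2); k4 = f(t + dt, x + dt*k3)
    ym = g @ gradH(x + dt/2*k1)        # output y = g^T ∇H0 at the midpoint
    supplied += dt * u(t + dt/2) * ym  # ∫ u y dt
    x, t = x + dt/6*(k1 + 2*k2 + 2*k3 + k4), t + dt

residual = abs(H(x) - H(x0) - supplied)
print(residual)   # ≈ 0: power balance (6.69)
```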
The port-controlled Hamiltonian formulation of the dynamics of the LC circuit
may be compared with the input–output formulation derived in Example 6.26. First,
one may notice that in the port-controlled Hamiltonian formulation, the information
on the topology of the circuit and the information about the elements (i.e., the energy)
is represented in two different objects: the structure matrix and the input vector on
the one side, and the Hamiltonian function on the other side. In the input–output
Hamiltonian formulation, this information is captured solely in the Hamiltonian
function (with interaction potential), in the same way as in the Lagrangian formula-
tion in Example 6.10. Second, the port-controlled Hamiltonian system is defined with
respect to a non-symplectic structure matrix, and its order coincides with the order of
the circuit, whereas the input–output system is given (by definition) with respect to a
symplectic (even order) structure matrix of order larger than the order of the circuit.
Third, the definition of the state variables in the port-controlled system corresponds
simply to the energy variables of the different elements of the circuit, whereas in
the input–output Hamiltonian system, they are defined for the total circuit and, for
instance, the flux of the inductor L₂ does not appear as one of them. Finally, although the
two structure matrices of the port-controlled and the input–output Hamiltonian sys-
tems may be related by projection of the dynamics using the symmetry in q2 of the
input–output Hamiltonian system, the controlled systems remain distinct. Indeed,
consider the input vector g: it is clear that it is not in the image of the structure
matrix J . Hence, there exists no interaction potential function which generates this
vector and the port-controlled Hamiltonian formulation cannot be formulated as an
input–output Hamiltonian system, or a Poisson control system.
In order to illustrate a case where the energy function defines some interdomain
coupling, let us consider the example of the iron ball in magnetic levitation. This
example may be seen as the one-dimensional case of general electromechanical
coupling arising in electrical motors or actuated multibody systems.
Example 6.35 Consider again the example of the vertical motion of a magnetically
levitated ball as treated in Example 6.13. Following a bond graph modeling approach,
one defines the state space as being the variables defining the energy of the system.
Here, the state vector is then x = (φ, z, pb )T , where φ is the magnetic flux in the
coil, z is the altitude of the sphere, and pb is the kinetic momentum of the ball.
The total energy of the system is composed of three terms: H0 (x) = Hmg (φ, z) +
U (z) + Hkin ( pb ), where Hmg (φ, z) denotes the magnetic energy of the coil and is
    H_mg(φ, z) = ½ (1/L(z)) φ²,    (6.73)
where L(z) is given in (6.35), U(z) = g z is the gravitational potential energy, and H_kin(p_b) = (1/(2m)) p_b² is the kinetic energy of the ball. Hence, the gradient of the energy function H₀ is the vector of the coenergy variables ∂H₀/∂x = (v_L, f, v_b), where v_L is the
voltage at the coil:
    v_L = ∂H_mg/∂φ = φ/L(z).    (6.74)
The sum of the gravity force and the electromagnetic force is given by f = g − f mg :
    f_mg = ½ (φ²/L²(z)) ∂L/∂z (z),    (6.75)
and v_b = p_b/m is the velocity of the ball. Then, from Kirchhoff's laws and the kinematic and static relations in the system, it follows that the dynamics may be expressed as a port-controlled Hamiltonian system (6.67), where the structure matrix is the constant matrix J = [0 0 0; 0 0 1; 0 −1 0], and the input vector is constant, g = (1, 0, 0)^T. Note that
the structure matrix is already in canonical form. In order to take into account the
dissipation represented by the resistor R, one also defines the following dissipating
force v R = −Ri R = −Ri L , which may be expressed in a Hamiltonian-like format as
a Hamiltonian system with dissipation [34]. Let us compare now the port-controlled
Hamiltonian formulation with the Lagrangian or input–output Hamiltonian formu-
lation. Recall first the input–output Hamiltonian system obtained by the Legendre
transformation of the Lagrangian system of Example 6.13. The vector of the momenta
is
    p = ∂L/∂q̇ (q, q̇) = (φ; p_b),    (6.76)
Hence, the state space of the input–output representation is the state space of the port-
controlled system augmented with the variable q₁ (the primitive of the current in the
inductor). Hence the order of the input–output Hamiltonian system is 4, thus larger
than 3, the natural order of the system (a second-order mechanical system coupled
with a first-order electrical circuit), which is precisely the order of the port-controlled
Hamiltonian system. Moreover, the state variable “in excess” is q1 , and is precisely
the symmetry variable of the internal Hamiltonian function H0 (x) in H (q, p). In an
analogous way as in the LC circuit example above, this symmetry variable defines
the interaction Hamiltonian, hence the controlled input–output Hamiltonian system
may not be reduced. And again one may notice that the input vector g does not belong
to the image of the structure matrix J , hence cannot be generated by any interaction
potential function.
Now, we shall compare the definitions of the outputs for input–output Hamiltonian or Poisson control systems, and port-controlled Hamiltonian systems. Consider the
port-controlled system (6.67) and assume that the input vector fields are Hamiltonian,
i.e., there exist interaction Hamiltonian functions such that g_i(x) = J(x)∇H_i(x).
The port-conjugated outputs are then yi = ∇ H0T (x)gi (x) = ∇ H0T (x)J (x) ∇ Hi (x).
The natural outputs are ỹi = Hi (x). Using the drift dynamics in (6.67), their deriva-
tives are computed as
    ỹ̇_i = ∇H_i^T(x) ẋ = y_i + Σ_{j=1, j≠i}^{m} u_j ∇H_i^T(x) J(x) ∇H_j(x).    (6.78)
Hence, the passive outputs of both systems differ, in general, by some skew-
symmetric terms in the inputs. This is related to the two versions of the Kalman–Yakubovich–Popov Lemma, where the output may or may not include a skew-symmetric feedthrough term.
Example 6.36 (Mass–spring system with moving basis) Consider again the mass–
spring system with moving basis and its input–output model treated in Examples 6.23
and 6.25. The input vector fields are Hamiltonian, hence we may compare the defi-
nition of the passive outputs in the input–output Hamiltonian formalism and in the
port-controlled Hamiltonian formalism. The derivatives of the natural outputs derived
in Example 6.25 are ỹ̇₂ = q̇ = p/m − u₁ and ỹ̇₁ u₁ + ỹ̇₂ u₂ = u₁ (kq) + u₂ (p/m). The port-conjugated outputs are y₁ = (−1, 0) (kq; p/m) = −kq, and y₂ = (0, 1) (kq; p/m) = p/m.
These outputs, contrary to the natural outputs and their derivatives, are precisely
the interconnection variables needed to write the kinematic and static relation for
interconnecting this mass–spring system to some other mechanical systems.
The mass–spring example shows how the different definitions of the pairs of input–
output variables for input–output and port-controlled Hamiltonian systems, although
both defining a supply rate for the energy function as storage function, are funda-
mentally different with respect to the interconnection of the system with its environ-
ment. One may step further and investigate the interconnection of Hamiltonian and
Lagrangian systems, which preserve their structure. It was shown that port-controlled Hamiltonian systems may be interconnected in a structure-preserving way by so-called power continuous interconnections [34, 35]. Therefore, a generalization of port-controlled Hamiltonian systems to implicit port-controlled Hamiltonian
systems (encompassing constrained systems) was used in [20, 24, 34, 35]. However,
this topic is beyond the scope of this section, and we shall only discuss the inter-
connection of Lagrangian and Hamiltonian systems on the example of the ball in
magnetic levitation.
Example 6.37 (Levitated ball as the interconnection of two subsystems) We have
seen that the dynamics of the levitated ball may be formulated as a third-order port-
controlled Hamiltonian system, where the coupling between the potential and kinetic
energy is expressed in the structure matrix (the symplectic coupling) and the cou-
pling through the electromagnetic energy in the Hamiltonian function. However, it
also allows one to express this system as the coupling, through a passivity-preserving
interconnection, of two port-controlled Hamiltonian systems. Therefore, one may
conceptually split the physical properties of the iron ball into purely electric and
purely mechanical ones. Then, the electromechanical energy transduction is repre-
sented by a second-order port-controlled Hamiltonian system:
    (φ̇(t); ż(t)) = [0 1; −1 0] (∂H_mg/∂φ (φ(t), z(t)); ∂H_mg/∂z (φ(t), z(t))) + (1; 0) u(t) + (0; 1) u₁(t),    (6.79)
The second subsystem simply represents the dynamics of a ball in vertical translation
submitted to the action of an external force u 2 :
    (q̇(t); ṗ(t)) = [0 1; −1 0] (∂H₂/∂q (q(t), p(t)); ∂H₂/∂p (q(t), p(t))) + (0; 1) u₂(t),    (6.81)
where the Hamiltonian H₂ is the sum of the kinetic and the potential energy of the ball, H₂(q, p) = (1/(2m)) p² + g q, and the conjugated output is the velocity of the ball:

    y₂ = (0, 1) (∂H₂/∂q; ∂H₂/∂p) = ∂H₂/∂p.    (6.82)
Considering lines 2 and 3 of the structure matrix, one deduces that the variations of
z and q satisfy
ż − q̇ = 0. (6.84)
Of course such a system is no more lossless, but it still satisfies a power balance
equation and under some assumption on the Hamiltonian system, a passivity property.
The system (6.85) is rewritten compactly as
    (ẋ(t); y(t)) = [J(x(t)) − R(x(t)), g(x(t)); g(x(t))^T, 0] (∇H₀(x(t)); u(t)).    (6.86)
    u^T y = dH₀/dt + (∂H₀/∂x (x))^T R(x) ∂H₀/∂x (x).    (6.87)
If, moreover, the Hamiltonian function H0 (x) is bounded from below, then the port-
controlled Hamiltonian system with dissipation is dissipative with respect to the
supply rate u T y with storage function H0 (x).
Example 6.40 Consider first the magnetic part. Taking the losses in the coil into account amounts to adding to the skew-symmetric structure matrix defined in (6.79) the symmetric matrix

    R = [−R 0; 0 0].    (6.88)
Then, the total system also becomes a port-controlled Hamiltonian system with a
symmetric matrix Rtot = diag(−R, 03 ).
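The balance (6.87) can be illustrated on the levitated ball with coil resistance. The sketch below integrates ẋ = (J − R)∇H₀(x) + g u with the structure matrix of Example 6.35, writing the coil losses as a positive semidefinite dissipation matrix R = diag(R, 0, 0) entering through J − R; all parameter values are hypothetical. It checks that the supplied energy splits into stored plus dissipated energy:

```python
import numpy as np

# Hypothetical parameters, matching the inductance law of Eq. (6.35)
L0, k, z0 = 0.1, 0.01, -0.05
m, grav, Rcoil = 0.1, 9.81, 2.0

Lind  = lambda z: L0 + k/(z - z0)
dLind = lambda z: -k/(z - z0)**2

def gradH(x):                          # x = (phi, z, p): flux, altitude, momentum
    phi, z, p = x
    return np.array([phi/Lind(z),
                     grav - 0.5*phi**2*dLind(z)/Lind(z)**2,   # f = g - f_mg
                     p/m])
def H(x):
    phi, z, p = x
    return 0.5*phi**2/Lind(z) + grav*z + 0.5*p**2/m

J = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])     # structure matrix
Rm = np.diag([Rcoil, 0., 0.])          # dissipation matrix
g_in = np.array([1., 0., 0.])          # input vector g = (1, 0, 0)^T
u = lambda t: 5.0                      # constant source voltage

f = lambda t, x: (J - Rm) @ gradH(x) + g_in * u(t)

x = np.array([0., 0.01, 0.]); x0 = x.copy()
t, dt, supplied, dissipated = 0.0, 1e-5, 0.0, 0.0
for _ in range(1000):                  # 10 ms, RK4
    k1 = f(t, x); k2 = f(t + dt/2, x + dt/2*k1)
    k3 = f(t + dt/2, x + dt/2*k2); k4 = f(t + dt, x + dt*k3)
    e = gradH(x + dt/2*k1)             # midpoint coenergy vector
    supplied   += dt * u(t + dt/2) * (g_in @ e)   # ∫ u y dt, with y = g^T ∇H0
    dissipated += dt * (e @ Rm @ e)               # ∫ ∇H0^T R ∇H0 dt
    x, t = x + dt/6*(k1 + 2*k2 + 2*k3 + k4), t + dt

# Power balance (6.87): ∫ u y dt = H(T) - H(0) + ∫ ∇H0^T R ∇H0 dt
residual = abs(supplied - (H(x) - H(x0)) - dissipated)
print(residual)   # ≈ 0
```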
Let us end this section with a result stating some equivalence between minimal
passive LTI systems and a port Hamiltonian representation.
Proposition 6.41 Consider the system ẋ(t) = Ax(t) + Bu(t), y(t) = C x(t) +
Du(t), x(0) = x0 , with (A, B, C, D) a minimal representation. Assume that it
is a passive system, with storage function V(x) = ½ x^T P x, with P = P^T ≻ 0 a solution of the KYP Lemma (or Lur'e) equations in (3.1). Consider the matrices J = ½(A P⁻¹ − P⁻¹ A^T), R = −½(A P⁻¹ + P⁻¹ A^T), K = ½(P⁻¹ C^T − B), and G = ½(P⁻¹ C^T + B). Then, the system is equivalently rewritten in a port-controlled
Hamiltonian form as
    ẋ(t) = (J − R) P x(t) + (G − K) u(t)
    y(t) = (G + K)^T P x(t) + D u(t).    (6.89)
In this section and in the next ones, we shall recall simple models corresponding
to electromechanical systems, which motivated numerous results on passivity-based
control. We recall and derive their passivity properties, and we illustrate some con-
cepts introduced in the previous sections and chapters. Actually, the results in the
next sections of the present chapter will serve as a basis for introducing the control
problem in Chap. 7. Our aim now is to show how one can use the passivity properties
of the analyzed processes, to construct globally stable control laws. We shall insist
on the calculation of storage functions, and it will be shown at some places (see,
for instance, Sect. 7.3) that this can be quite useful to derive Lyapunov functions for
closed-loop systems.
The dynamics of the mechanism constituting the mechanical part of a robotic
manipulator is given by a simple mechanical system according to Definition 6.14
and Lemma 6.16:

M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q(t)) = τ(t).   (6.90)

From Lemma 6.11, it follows that they are lossless systems with respect to the supply rate τ^T q̇, with storage function E(q, q̇) = (1/2) q̇^T M(q) q̇ + U_g(q), where g(q) = ∂U_g/∂q is the gradient of the gravitational potential energy U_g(q).
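The losslessness claim can be checked numerically on a one-degree-of-freedom example. The following sketch (illustrative parameters and an arbitrary torque input, not from the book) integrates a pendulum with RK4 and verifies the energy balance E(T) − E(0) = ∫ τ q̇ dt:

```python
import numpy as np

# 1-DOF pendulum as a simple mechanical system:
# m l^2 qdd + m g l sin(q) = tau;  E(q, qd) = (1/2) m l^2 qd^2 + m g l (1 - cos q).
m, l, grav = 1.0, 0.5, 9.81

def energy(q, qd):
    return 0.5 * m * l**2 * qd**2 + m * grav * l * (1.0 - np.cos(q))

def f(t, s):
    q, qd, work = s
    tau = np.sin(2.0 * t)                 # arbitrary admissible input torque
    qdd = (tau - m * grav * l * np.sin(q)) / (m * l**2)
    return np.array([qd, qdd, tau * qd])  # also accumulate the supplied work

# RK4 integration of (q, qd, \int tau qd dt)
s, t, h = np.array([0.3, 0.0, 0.0]), 0.0, 1e-3
E0 = energy(s[0], s[1])
for _ in range(5000):
    k1 = f(t, s); k2 = f(t + h/2, s + h/2*k1)
    k3 = f(t + h/2, s + h/2*k2); k4 = f(t + h, s + h*k3)
    s, t = s + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h

# Losslessness w.r.t. the supply rate tau qd: E(T) - E(0) = \int_0^T tau qd dt
assert abs(energy(s[0], s[1]) - E0 - s[2]) < 1e-7
```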
We have seen that storage functions play an important role in the dissipativity theory.
In particular, the dissipativity of a system can be characterized by the available
storage Va (q, q̇) and the required supply Vr (q, q̇) functions. Let us focus now on the
calculation of the available storage function (see Definition 4.37), which represents
the maximum internal energy contained in the system that can be extracted from it.
More formally, recall that we have
Va(q0, q̇0) = − inf_{τ:(0,q0,q̇0)→} ∫_0^t τ^T(s) q̇(s) ds = sup_{τ:(0,q0,q̇0)→} − ∫_0^t τ^T(s) q̇(s) ds.   (6.91)

The notation inf_{τ:(0,q0,q̇0)→} means that the infimization is performed over all trajectories of the system on intervals [0, t], t ≥ 0, starting from the extended state (0, q0, q̇0), with (q0, q̇0) = (q(0), q̇(0)), and with admissible inputs (at least the closed-loop system must be shown to be well-posed). In other words, the infimization is done over all trajectories φ(t; 0, q0, q̇0, τ), t ≥ 0. From (6.91), one obtains
Va(q0, q̇0) = sup_{τ:(0,q0,q̇0)→} − [ (1/2) q̇^T M(q) q̇ + U_g(q) ]_0^t
           = (1/2) q̇(0)^T M(q(0)) q̇(0) + U_g(q(0)) = E(q0, q̇0),   (6.92)
where we have to assume that U_g(q) ≥ −K > −∞ for some K < +∞, so that we may assume that the potential energy has been normalized to secure that U_g(q) ≥ 0 for all q ∈ R^n. It is not surprising that the available storage is just the total initial mechanical energy of the system (but we shall see in a moment that for certain systems this is not so evident).
6.3 Rigid-Joint–Rigid-Link Manipulators 455
Remark 6.42 We might have deduced that the system is dissipative since Va (q, q̇) <
+∞ for any bounded state, see Theorem 4.43. On the other hand, Va (q, q̇) must be
bounded, since we already know that the system is dissipative with respect to the
chosen supply rate.
Remark 6.43 In Sect. 6.1, we saw that the addition of Rayleigh dissipation reinforces the dissipativity property of the system. Let us recalculate the available storage of a rigid-joint–rigid-link manipulator when the dynamics is given by

M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q(t)) + (∂R/∂q̇)(t) = τ(t).   (6.93)
One has

Va(q0, q̇0) = sup_{τ:(0,q0,q̇0)→} − ∫_0^t τ^T q̇ ds
           = sup_{τ:(0,q0,q̇0)→} − [ (1/2) q̇^T M(q) q̇ ]_0^t − [ U_g(q) ]_0^t − ∫_0^t q̇^T (∂R/∂q̇) ds = E(q0, q̇0),   (6.94)

since q̇^T (∂R/∂q̇) ≥ δ q̇^T q̇ for some δ > 0. One therefore concludes that the dissipation does not modify the available storage function, which is a logical feature from the intuitive physical point of view (the dissipation and the storage are defined independently of each other).
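A quick numerical check of the dissipation inequality under Rayleigh dissipation, on the same hedged pendulum example as above (hypothetical parameters; R(q̇) = (1/2) d q̇², so ∂R/∂q̇ = d q̇):

```python
import numpy as np

# Pendulum with Rayleigh dissipation R(qd) = (1/2) d qd^2:
# m l^2 qdd + d qd + m g l sin(q) = tau.
m, l, grav, d = 1.0, 0.5, 9.81, 0.4

def energy(q, qd):
    return 0.5 * m * l**2 * qd**2 + m * grav * l * (1.0 - np.cos(q))

def f(t, s):
    q, qd, work, dissip = s
    tau = np.cos(3.0 * t)
    qdd = (tau - d * qd - m * grav * l * np.sin(q)) / (m * l**2)
    return np.array([qd, qdd, tau * qd, d * qd**2])

s, t, h = np.array([0.5, 0.0, 0.0, 0.0]), 0.0, 1e-3
E0 = energy(s[0], s[1])
for _ in range(4000):
    k1 = f(t, s); k2 = f(t + h/2, s + h/2*k1)
    k3 = f(t + h/2, s + h/2*k2); k4 = f(t + h, s + h*k3)
    s, t = s + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h

ET, work, dissip = energy(s[0], s[1]), s[2], s[3]
assert ET - E0 <= work + 1e-9                    # dissipation inequality
assert abs((ET - E0) - (work - dissip)) < 1e-7   # exact balance with Rayleigh losses
```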
Let us now compute the required supply Vr(q, q̇) as in Definition 4.38, with the same assumption on U_g(q). Recall that it is given in a variational form by

Vr(q0, q̇0) = inf_{τ:(−t,q_t,q̇_t)→(0,q0,q̇0)} ∫_{−t}^0 τ^T(s) q̇(s) ds,   (6.95)

where (q_t, q̇_t) = (q(−t), q̇(−t)), (q0, q̇0) = (q(0), q̇(0)), t ≥ 0. Thus, this time, the minimization process is taken over all trajectories of the system joining the extended states (−t, q_t, q̇_t) and (0, q0, q̇0) (i.e., (q0, q̇0) = φ(0; −t, q_t, q̇_t, τ)). For the rigid manipulator case, one finds

Vr(q0, q̇0) = E(q0, q̇0) − E(q_t, q̇_t).   (6.96)

Note that Vr(·) hence defined is not necessarily positive. However, if we compute it from (−t, q_t, q̇_t) = (−t, 0, 0), then indeed Vr(·) ≥ 0 is a storage function. Here one trivially finds that Vr(q0, q̇0) = E(q0, q̇0) (= Va(q0, q̇0)).
Remark 6.44 The system is reachable from any state (q0 , q̇0 ) (actually, this system
is globally controllable). Similarly to the available storage function property, the
system is dissipative with respect to a supply rate, if and only if the required supply
Vr ≥ −K for some K > −∞, see Lemma 4.49. Here, we can take K = E(−t).
M(q1(t))q̈1(t) + C(q1(t), q̇1(t))q̇1(t) + g(q1(t)) + K(q1(t) − q2(t)) = 0
J q̈2(t) + K(q2(t) − q1(t)) = u(t),   (6.97)

where q1(t) ∈ R^n is the vector of rigid-link angles, q2(t) ∈ R^n is the vector of motor shaft angles, K ∈ R^{n×n} is the joint stiffness matrix, and J ∈ R^{n×n} is the motor shaft inertia matrix (both assumed here to be constant and diagonal). Basic assumptions are M(q1) = M(q1)^T ≻ 0, J = J^T ≻ 0, K = K^T ≻ 0. It is a simple mechanical system in Lagrangian form (6.40), with

M(q) = ( M(q1)  0 ; 0  J ),   C(q, q̇) = ( C(q1, q̇1)  0 ; 0  0 ),   τ = ( 0 ; u ),
g(q) = ( g(q1) ; 0 ) + ( K(q1 − q2) ; K(q2 − q1) ).

Actually, the potential energy is given by the sum of the gravity and the elasticity terms, U_g(q1) and U_e(q1, q2) = (1/2)(q2 − q1)^T K(q2 − q1), respectively.
The dynamics of flexible-joint–rigid-link manipulators can be seen as the intercon-
nection of the simple mechanical system representing the dynamics of the rigid-
joint–rigid-link manipulators, with a set of linear Lagrangian systems with external
forces representing the inertial dynamics of the rotor, interconnected by the rota-
tional spring representing the compliance of the joints. It may be seen as the power
continuous interconnection of the corresponding three port-controlled Hamiltonian
systems, in a way completely similar to the example of the levitated ball (Example
6.37). We shall not detail the procedure here, but summarize it in Fig. 6.2. As a result,
it follows that the system is passive, lossless with respect to the supply rate u T q̇2 with
storage function being the sum of the kinetic energies and potential energies of the
different elements. We shall see in Sect. 6.6 that including actuator dynamics pro-
duces similar interconnected systems, but with quite different interconnection terms.
These terms will be shown to play a crucial role in the stabilizability properties of
the overall system.
6.4 Flexible-Joint–Rigid-Link Manipulators 457
Remark 6.45 The model in (6.97) was proposed by M.W. Spong [36], and is based
on the assumption that the rotation of the motor shafts due to the link angular motion
does not play any role in the kinetic energy of the system, compared to the kinetic
energy of the rigid links. In other words, the angular part of the kinetic energy of each
motor shaft rotor is considered to be due to its own rotation only. This is why the
inertia matrix is diagonal. This assumption seems satisfied in practice for most manipulators. It is also satisfied (mathematically speaking) for those manipulators whose actuators are all mounted at the base, known as parallel-drive manipulators (the Capri robot presented in Chap. 9 is a parallel-drive manipulator). If this assumption is not satisfied, the inertia matrix takes the form

M(q) = ( M(q1)  M12(q1) ; M12^T(q1)  J )   [37].
The particular feature of the model in (6.97) is that it is static feedback linearizable,
and possesses a triangular structure that will be very useful when we deal with
control. One can use Theorem A.65, to characterize the off-diagonal term M12 (q1 ),
since M(q) 0.
Let us now prove in some other way that the system is passive (i.e., dissipative with respect to the supply rate τ^T q̇ = u^T q̇2). We get, for all t ≥ 0:

∫_0^t u^T(s) q̇2(s) ds = ∫_0^t [ J q̈2(s) + K(q2(s) − q1(s)) ]^T q̇2(s) ds ± ∫_0^t (q2(s) − q1(s))^T K q̇1(s) ds
 = [ (1/2) q̇2^T J q̇2 ]_0^t + [ (1/2)(q2 − q1)^T K (q2 − q1) ]_0^t + ∫_0^t (q2(s) − q1(s))^T K q̇1(s) ds.   (6.98)
Using the link dynamics K(q2(s) − q1(s)) = M(q1(s))q̈1(s) + C(q1(s), q̇1(s))q̇1(s) + g(q1(s)), the last integral equals [ (1/2) q̇1^T M(q1) q̇1 + U_g(q1) ]_0^t. The result is, therefore, true whenever U_g(q1) is bounded from below.
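The balance (6.98) can be verified numerically on a scalar flexible-joint link. The sketch below (illustrative parameters; a scalar version of the flexible-joint model is assumed) checks losslessness with respect to the supply rate u q̇2:

```python
import numpy as np

# 1-DOF flexible-joint link (scalar Spong model):
# I1 q1dd + m g l sin(q1) + K (q1 - q2) = 0,   J2 q2dd + K (q2 - q1) = u.
I1, J2, K, m, l, grav = 0.8, 0.2, 50.0, 1.0, 0.5, 9.81

def storage(q1, q2, qd1, qd2):
    return (0.5 * I1 * qd1**2 + 0.5 * J2 * qd2**2
            + 0.5 * K * (q2 - q1)**2 + m * grav * l * (1.0 - np.cos(q1)))

def f(t, s):
    q1, q2, qd1, qd2, work = s
    u = 0.5 * np.sin(4.0 * t)
    q1dd = (-m * grav * l * np.sin(q1) - K * (q1 - q2)) / I1
    q2dd = (u - K * (q2 - q1)) / J2
    return np.array([qd1, qd2, q1dd, q2dd, u * qd2])

s, t, h = np.array([0.1, 0.1, 0.0, 0.0, 0.0]), 0.0, 2e-4
E0 = storage(*s[:4])
for _ in range(20000):
    k1 = f(t, s); k2 = f(t + h/2, s + h/2*k1)
    k3 = f(t + h/2, s + h/2*k2); k4 = f(t + h, s + h*k3)
    s, t = s + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h

# Lossless w.r.t. the supply rate u qd2, as in (6.98):
assert abs(storage(*s[:4]) - E0 - s[4]) < 1e-6
```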
Remark 6.46 One could have thought of another decomposition of the system as
depicted in Fig. 6.3. In this case, the total system is broken down into two Lagrangian
control systems with input being the free end of the springs with respect to each sub-
model. The subsystem with generalized coordinate q1 (i.e., representing the dynamics
of the multibody system of the robot) is analogous to the harmonic oscillator of Exam-
ple 6.11 and with input q2 . The dynamics of the rotors (with generalized coordinates
q2 ) is again analogous to an additional external force u. But the interconnection of
these two subsystems is defined by u 1 = q2 and u 2 = q1 , involving the generalized
coordinates which are not passive outputs of the subsystems.
Remark 6.47 Let us point out that manipulators with prismatic joints cannot be
passive, except if those joints are horizontal. Hence, all those results on open-loop
dissipativity hold for revolute joint manipulators only. This will not at all preclude
the application of passivity tools for any sort of joints when we deal with feedback
control, for instance, it suffices to compensate for gravity to avoid this problem.
Va(q, q̇) = E(q, q̇) = (1/2) q̇1^T M(q1) q̇1 + (1/2) q̇2^T J q̇2 + (1/2)(q1 − q2)^T K(q1 − q2) + U_g(q1).   (6.102)
From Sect. 6.3.2, one finds that the energy required from an external source to transfer
the system from the extended state
(−t, q1 (−t), q2 (−t), q̇1 (−t), q̇2 (−t)) = (−t, q1t , q2t , q̇1t , q̇2t )
to
(0, q1 (0), q2 (0), q̇1 (0), q̇2 (0)) = (0, q10 , q20 , q̇10 , q̇20 )
is given by
Vr (q1 (0), q2 (0), q̇1 (0), q̇2 (0)) = E(q1 (0), q2 (0), q̇1 (0), q̇2 (0))
(6.103)
−E(q1 (−t), q2 (−t), q̇1 (−t), q̇2 (−t)).
Recall from the Positive Real (or Kalman–Yakubovich–Popov) Lemma 4.94 that a system of the form

ẋ(t) = f(x(t)) + g(x(t))u(t)
y(t) = h(x(t))   (6.104)

is passive (dissipative with respect to the supply rate u^T y) if and only if there exists at least one function V(x) ≥ 0 such that the following conditions are satisfied:

h^T(x) = (∂V/∂x)(x) g(x)
(∂V/∂x)(x) f(x) ≤ 0.   (6.105)
The "if" part of this Lemma tells us that an unforced system which is Lyapunov stable with Lyapunov function V(·) is passive when the output has the particular form in (6.105). The "only if" part tells us that, given an output function, passivity holds only if such a function V(·) exists.
Now, let us assume that the potential function U_g(q1) is finite for all q ∈ C. Then it follows that the available storage calculated in (6.102) is a storage function, hence it satisfies the conditions in (6.105) when y = J J^{-1} q̇2 = q̇2 and u is defined in (6.97). More explicitly, the function E(q, q̇) in (6.102) satisfies the partial differential equations (in (6.97) one has g^T(x) = (0, 0, 0, J^{-1}))
(∂E/∂q̇2)^T J^{-1} = q̇2^T
(∂E/∂q1)^T q̇1 + (∂E/∂q̇1)^T M(q1)^{-1} ( −C(q1, q̇1)q̇1 − g(q1) + K(q2 − q1) )
 + (∂E/∂q2)^T q̇2 + (∂E/∂q̇2)^T J^{-1} K(q1 − q2) = 0.   (6.106)
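One may verify these partial differential equations numerically by evaluating the gradient of E along the unforced vector field at random states. A sketch with illustrative scalar parameters (not the book's example):

```python
import numpy as np

# Check that the total energy E solves the KYP partial differential equations:
# the gradient of E annihilates the unforced vector field, and dE/dqd2 = J2*qd2.
I1, J2, K, m, l, grav = 0.8, 0.2, 5.0, 1.0, 0.5, 9.81

def E(x):
    q1, q2, qd1, qd2 = x
    return (0.5 * I1 * qd1**2 + 0.5 * J2 * qd2**2
            + 0.5 * K * (q2 - q1)**2 + m * grav * l * (1.0 - np.cos(q1)))

def f0(x):  # unforced (u = 0) flexible-joint vector field
    q1, q2, qd1, qd2 = x
    return np.array([qd1, qd2,
                     (-m * grav * l * np.sin(q1) - K * (q1 - q2)) / I1,
                     K * (q1 - q2) / J2])

def grad(fun, x, eps=1e-5):  # central finite-difference gradient
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (fun(x + e) - fun(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(0)
for _ in range(20):
    x = rng.uniform(-1.0, 1.0, size=4)
    gE = grad(E, x)
    assert abs(gE @ f0(x)) < 1e-5          # dE/dt = 0 along the unforced flow
    assert abs(gE[3] / J2 - x[3]) < 1e-5   # (dE/dqd2) J^{-1} = qd2, i.e. y = qd2
```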
We may conclude from the preceding examples that in general, for mechanical sys-
tems, the total mechanical energy is a storage function. However, the calculation of
the available storage may not always be so straightforward, as the following example
of a switched system shows.
Consider a one-degree-of-freedom system (6.107) which switches between the free-motion dynamics m q̈(t) = τ(t) when q(t) < 0, and the contact dynamics m q̈(t) + f q̇(t) + kq(t) = τ(t) when q(t) ≥ 0 (a mass in intermittent contact with a spring-dashpot environment). Let the input be

τ = −λ2 q̇ − λ1 (q − qd) + v,   (6.108)

with qd > 0 a constant desired position, λ1 > 0, λ2 > 0, and v a control signal. The input in (6.108) is a PD controller, but it can also be interpreted as an input transformation. Let us now consider the equivalent closed-loop system with input v and
output q̇, and supply rate w = vq̇. The available storage function is given by
Va(q0, q̇0) = sup_{τ:(0,q0,q̇0)→} − ∫_0^t v(s) q̇(s) ds.   (6.109)
Va(q0, q̇0) = sup_{τ:(0,q0,q̇0)→} { − Σ_{i≥0} ∫_{Ω_{2i}} ( m q̈(s) + λ2 q̇(s) + λ1 q(s) − λ1 qd ) q̇(s) ds
 − Σ_{i≥0} ∫_{Ω_{2i+1}} ( m q̈(s) + (λ2 + f) q̇(s) + (λ1 + k) q(s) − λ1 qd ) q̇(s) ds }
= sup_{τ:(0,q0,q̇0)→} { Σ_{i≥0} ( − [ (m/2) q̇² ]_{t_{2i}}^{t_{2i+1}} − [ (λ1/2)(q − qd)² ]_{t_{2i}}^{t_{2i+1}} − ∫_{Ω_{2i}} λ2 q̇²(t) dt )
 + Σ_{i≥0} ( − [ (m/2) q̇² ]_{t_{2i+1}}^{t_{2i+2}} − [ ((λ1 + k)/2)( q − λ1 qd/(λ1 + k) )² ]_{t_{2i+1}}^{t_{2i+2}} − ∫_{Ω_{2i+1}} (λ2 + f) q̇²(t) dt ) },   (6.110)

where Ω_{2i} = [t_{2i}, t_{2i+1}] denote the free-motion phases and Ω_{2i+1} = [t_{2i+1}, t_{2i+2}] the contact phases.
In order to maximize the terms between brackets, it is necessary that the integrals − ∫_{Ω_i} q̇²(t) dt be zero and that q̇(t_{2i+1}) = 0. In view of the system's controllability, there exists an impulsive input v that fulfills these requirements [49] (let us recall that this impulsive input is applied while the system evolves in a free-motion phase, hence has linear dynamics). In order to maximize the second term − [ (λ1/2)(q − qd)² ]_{t_0}^{t_1}, it is also necessary that q(t_1) = 0. Using similar arguments, it follows that q̇(t_{2i+2}) = 0 and that q(t_2) = λ1 qd/(λ1 + k). This reasoning can be iterated to obtain the optimal path, which is (q0, q̇0) → (0, 0) → (λ1 qd/(λ1 + k), 0), where all the transitions are instantaneous. This leads us to the following available storage function:

Va(q, q̇) = (1/2) m q̇² + (1/2) λ1 (q − qd)² − (1/2) λ1 k qd²/(λ1 + k)   for q ≤ 0,   (6.111)
Va(q, q̇) = (1/2) m q̇² + (1/2) λ1 (q − qd)² + (1/2) k q² − (1/2) λ1 k qd²/(λ1 + k)   for q ≥ 0.   (6.112)
Notice that the two functions in (6.111) and (6.112) are not equal. Their concatenation yields a positive-definite function of (q̃, q̇), where q̃ = q − λ1 qd/(λ1 + k) (the equilibrium point of (6.107)-(6.108) with v = 0 is q = λ1 qd/(λ1 + k) when qd ≥ 0), that is continuous and differentiable at q = 0 (indeed, ∇Va(q, q̇) = (−λ1 qd, m q̇)^T on both sides of the switching surface). Moreover, along trajectories of (6.107)-(6.108), one gets V̇a(q(t), q̇(t)) ≤ −λ2 q̇(t)² + q̇(t)v(t) for all q(t) ≠ 0, and at the switching surface the trajectories are transversal for all q̇(t) ≠ 0 (this can be checked as follows: the switching surface is s = {(q, q̇) | h(q, q̇) = q = 0}, so that ∇h(q, q̇) = (1, 0)^T, and on both sides of s the vector field f(q, q̇) ∈ R² satisfies ∇h(q, q̇) f(q, q̇) = q̇, showing that there is no sliding mode nor repulsive surface). Therefore, the available storage Va(q̃, q̇) satisfies all the requirements to be a Lyapunov function for the uncontrolled closed-loop system. The asymptotic stability analysis requires the use of the Krasovskii–LaSalle invariance principle.
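This Lyapunov property can be illustrated by simulation. The sketch below (illustrative parameters, v = 0, not from the book) propagates the switched closed loop and checks that a concatenated quadratic storage function (free mode: (1/2)m q̇² + (1/2)λ1(q − qd)²; contact mode adds (1/2)k q²) never increases, and that trajectories converge to q = λ1 qd/(λ1 + k):

```python
import numpy as np

# Unforced (v = 0) closed loop of the mass in intermittent contact with a
# spring-dashpot, under the PD input (6.108). Illustrative parameters.
m, lam1, lam2, k, fric, qd_des = 1.0, 2.0, 0.8, 10.0, 0.3, 1.0

def V(q, qd):  # concatenated storage (continuous across q = 0)
    Vv = 0.5 * m * qd**2 + 0.5 * lam1 * (q - qd_des)**2
    return Vv + (0.5 * k * q**2 if q >= 0 else 0.0)

def f(s):
    q, qd = s
    if q < 0:   # free motion
        qdd = (-lam2 * qd - lam1 * (q - qd_des)) / m
    else:       # contact with the spring-dashpot environment
        qdd = (-(lam2 + fric) * qd - lam1 * (q - qd_des) - k * q) / m
    return np.array([qd, qdd])

s, h = np.array([-1.5, 0.0]), 1e-3
Vprev = V(*s)
for _ in range(30000):
    k1 = f(s); k2 = f(s + h/2*k1); k3 = f(s + h/2*k2); k4 = f(s + h*k3)
    s = s + h/6*(k1 + 2*k2 + 2*k3 + k4)
    Vnow = V(*s)
    assert Vnow <= Vprev + 1e-6   # storage never increases (small numerical slack)
    Vprev = Vnow

# Trajectory converges to the equilibrium q = lam1*qd_des/(lam1 + k):
assert abs(s[0] - lam1 * qd_des / (lam1 + k)) < 1e-3 and abs(s[1]) < 1e-3
```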
Consider now the two "unswitched" systems

m q̈(t) + λ2 q̇(t) + λ1 (q(t) − qd) = v(t),   (6.113)

and

m q̈(t) + (λ2 + f) q̇(t) + λ1 (q(t) − qd) + kq(t) = v(t),   (6.114)

that represent the persistent free-motion and the persistent contact-motion dynamics, respectively. The available storage function for the system in (6.113) is given by (see Remark 6.43)

Va(q, q̇) = (1/2) m q̇² + (1/2) λ1 (q − qd)²,   (6.115)

whereas it is given for the system in (6.114) by

Va(q, q̇) = (1/2) m q̇² + (1/2) λ1 (q − qd)² + (1/2) k q².   (6.116)
It is clear that the functions in (6.111) and (6.115), (6.112) and (6.116), are, respec-
tively, not equal. Notice that this does not preclude that the concatenation of the
functions in (6.115) and (6.116) yields a storage function for the system (in which
6.5 Switched Systems 463
case it must be larger than the concatenation of the functions in (6.111) and (6.112) for all (q, q̇)). In fact, an easy inspection shows that the functions in (6.115) and (6.116) are obtained by adding (1/2) λ1 k qd²/(λ1 + k) to those in (6.111) and (6.112), respectively. Thus, their concatenation indeed yields a storage function for the system in (6.107) with input (6.108).
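The storage-function property of (6.115) and (6.116) for the two unswitched dynamics can be checked pointwise: along (6.113) one gets V̇ = vq̇ − λ2 q̇², and along (6.114), V̇ = vq̇ − (λ2 + f)q̇². A small numerical sketch (illustrative parameters):

```python
import numpy as np

# Pointwise check that (6.115)/(6.116) are storage functions for (6.113)/(6.114):
# Vdot <= v*qd at every state. Illustrative parameter values.
m, lam1, lam2, k, fric, qd_des = 1.0, 2.0, 0.5, 10.0, 0.3, 1.0

def check(mode, q, qd, v):
    if mode == "free":       # (6.113) with storage (6.115)
        qdd = (v - lam2 * qd - lam1 * (q - qd_des)) / m
        Vdot = m * qd * qdd + lam1 * (q - qd_des) * qd
    else:                    # (6.114) with storage (6.116)
        qdd = (v - (lam2 + fric) * qd - lam1 * (q - qd_des) - k * q) / m
        Vdot = m * qd * qdd + lam1 * (q - qd_des) * qd + k * q * qd
    return Vdot <= v * qd + 1e-12

rng = np.random.default_rng(1)
for _ in range(1000):
    q, qd, v = rng.uniform(-2, 2, size=3)
    assert check("free", q, qd, v) and check("contact", q, qd, v)
```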
Let x = (q̃, q̇)^T; then the Filippov convexification of the closed-loop system (6.107)-(6.108) is equivalent to the relay system [42, Sect. 11]:

ẋ(t) ∈ (1/2) ( 0  2 ; −(2λ1 + k)/m  −(2λ2 + f)/m ) x(t) − (1/2) ( 0  0 ; k/m  f/m ) x(t) sgn(x1(t))
 + (1/2) ( 0 ; kqd/m ) sgn(x1(t)) + (1/2) ( 0 ; −kqd/m ).   (6.117)
One can, in turn, replace the set-valued sgn(x1 ) by the variable λ with x1 = λ1 − λ2 ,
0 ≤ 1 + λ ⊥ λ1 ≥ 0, 0 ≤ 1 − λ ⊥ λ2 ≥ 0, and obtain an equivalent linear comple-
mentarity system.
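A minimal sketch of this complementarity rewriting. The sign pairing used below, 0 ≤ 1 − λ ⊥ λ⁺ ≥ 0 and 0 ≤ 1 + λ ⊥ λ⁻ ≥ 0 with x1 = λ⁺ − λ⁻, is one convention that recovers λ ∈ sgn(x1); the pairing printed in the text differs by a sign:

```python
# Recovering a selection of the set-valued signum from complementarity pairs.
# Convention assumed here: x1 = lp - lm, 0 <= 1 - lam ⊥ lp >= 0,
# 0 <= 1 + lam ⊥ lm >= 0 (hypothetical naming; lp/lm are the two multipliers).

def sgn_via_complementarity(x1):
    lp, lm = max(x1, 0.0), max(-x1, 0.0)   # x1 = lp - lm, with lp * lm = 0
    if lp > 0.0:
        lam = 1.0          # 1 - lam must vanish since lp > 0
    elif lm > 0.0:
        lam = -1.0         # 1 + lam must vanish since lm > 0
    else:
        lam = 0.0          # x1 = 0: any lam in [-1, 1] is feasible
    # verify the complementarity conditions
    assert lp >= 0.0 and lm >= 0.0 and abs(x1 - (lp - lm)) < 1e-15
    assert (1 - lam) >= 0.0 and (1 + lam) >= 0.0
    assert (1 - lam) * lp == 0.0 and (1 + lam) * lm == 0.0
    return lam

assert sgn_via_complementarity(2.0) == 1.0
assert sgn_via_complementarity(-0.5) == -1.0
assert sgn_via_complementarity(0.0) == 0.0
```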
The first step is to define precisely what is meant by switched system. This is often
introduced as follows in the Systems and Control literature [50–59]:
ẋ(t) = f σ (x(t), u σ (t))
(6.118)
y(t) = h σ (x(t), u σ (t))
with f_σ(0, 0) = 0 and h_σ(0, 0) = 0; σ(·) is a piecewise constant signal defining the indices of the vector fields and the time instants when switches occur, i.e., a switching sequence (tk, ik) is defined, where tk are the switching times and ik is the index of the activated mode: σ(t) = ik on t ∈ [tk, tk+1). One usually assumes some of the following, sometimes all of them: solutions are continuous functions of time; σ(·) is an exogenous signal with discontinuity times satisfying ti < ti+1 for all i ∈ N; there
exists a dwell time δ > 0 such that ti + δ < ti+1 for all i ∈ N, the sequence {ti } may
be finite or infinite, there is no Zeno phenomenon (only a finite number of times
ti in any bounded interval of time); σ(·) may be a function of the state (or of the output). If σ(·) is exogenous, the system is merely the concatenation of subsystems with continuous solutions. If σ(·) is state dependent, things get more complex since, without further assumptions, discontinuities may appear on switching surfaces and one has to embed (6.118) into another mathematical formalism, usually a differential inclusion (like Filippov's). Defining a dynamical system via the a priori nature of its solutions is a bit loose, as one should instead prove the existence of solutions belonging to some functional set, starting from the dynamical equations. Another way
to introduce switched systems is through a partitioning of the state space into cells Ξi, i = 1, …, m:

ẋ(t) = f_i(x(t), u(t)) when x(t) ∈ Ξi.   (6.119)
Remark 6.49 It is noteworthy that the class of systems considered in the above arti-
cles, does not encompass the nonsmooth systems which are examined elsewhere in
this book, like differential inclusions into normal cones, variational inequalities of
first or second kind, complementarity systems. They are different types of nonsmooth
dynamical systems. For instance, ẋ(t) = −1 if x(t) > 0 and ẋ(t) = 1 if x(t) < 0 has a state-dependent switching function σ(·), and it gives rise to the (Filippov, or maximal monotone) differential inclusion ẋ(t) ∈ −sgn(x(t)), which can equivalently be written as a linear complementarity system, or an evolution variational inequality. Clearly, f_σ(0, 0) = 0 is not satisfied, which does not prevent this differential inclusion from possessing the origin as its unique equilibrium.
In view of this short discussion, and in order to clarify the developments, we shall assume that the system (6.118) is such that σ: [0, +∞) → {1, 2, …, m} ⊂ N, with m ≤ +∞, ti+1 > ti + δ, δ > 0, and each vector field f_{σ(t)}(x, u) satisfies the conditions of Theorems 3.90, 3.91, 3.92 and 3.93 with admissible u(·). Let us introduce now a definition of dissipative switched systems:
Definition 6.50 ([50, Definition 3.3]) The system (6.118) is said to be dissipative under the switching law σ(·), if there exist positive-definite continuous storage functions V1(x), V2(x), …, Vm(x), locally integrable supply rate functions w_i^i(u_i, h_i), 1 ≤ i ≤ m, and locally integrable functions w_i^j(x, u_i, h_i, t), 1 ≤ i ≤ m, 1 ≤ j ≤ m, i ≠ j, called the cross-supply rates, such that:

1. V_{ik}(x(t)) − V_{ik}(x(s)) ≤ ∫_s^t w_{ik}^{ik}(u_{ik}(τ), h_{ik}(x(τ))) dτ, k = 0, 1, …, and tk ≤ s ≤ t < tk+1,
2. V_j(x(t)) − V_j(x(s)) ≤ ∫_s^t w_{ik}^j(x(τ), u_{ik}(τ), h_{ik}(x(τ))) dτ, j ≠ ik, k = 0, 1, …, and tk ≤ s ≤ t < tk+1,
3. for any i and j, there exist u_i(t) = α_i(x(t), t) and φ_i^j(t) ∈ L1(R+; R+), which may depend on u_i and on the switching sequence {(tk, ik)}_{k∈N}, such that one has f_i(0, α_i(0, t)) = 0 for all t ≥ t0, w_i^i(u_i(t), h_i(x(t))) ≤ 0 for all t ≥ t0, and w_i^j(x(t), u_i(t), h_i(x(t)), t) − φ_i^j(t) ≤ 0, for all j ≠ i, for all t ≥ t0.
• V_j(x) and w_j^j(u_j, h_j) are the usual storage function and supply rate for the subsystem j when it is activated.
• Even when not active, a subsystem may have its "energy" V_j(x) vary, because all subsystems share the same state x. Thus, an active subsystem ik may bring energy into the deactivated ones. In the definition, the subsystem ik is active, and the cross-supply rates take care of the couplings and of the energy transferred from subsystem ik to subsystem j.
• When a common storage function V(x) exists (i.e., V_i(x) = V(x) for all i), and a common supply rate w(u_i, h_i) with w(0, h_i) ≤ 0 exists also, then item 2 is satisfied with w_i^j(x, u_i, h_i, t) = w_i^i(u_i, h_i), and item 3 holds with u_i(t) = 0 and φ_i^j(t) = 0 for all t ≥ t0.
Stability results follow for a specific class of inputs:
Theorem 6.51 ([50, Theorem 3.7]) Let the switching function σ (·) satisfy the above
basic requirements, and let system (6.118) be dissipative with storage functions Vi (x),
Vi (0) = 0. Then, the origin is stable in the sense of Lyapunov for any control law
satisfying condition in item 3 in Definition 6.50.
In fact, the assumption made above on σ (·) may not be crucial as far as only the
definition of switched dissipativity is concerned, but becomes important when sta-
bility comes into play. Passivity is considered in [50], as well as a switched version
of the KYP Lemma and stabilization by output feedback under a ZSD condition on
each subsystem. An important feature for the stability of (6.118) is that the storage
functions of inactive subsystems may grow, but their total increase is upper bounded
by a function in L1 (R+ ; R+ ).
Further reading: Various results on stabilization of switched systems (6.118) can
be found in [51–59, 61]. The dissipativity of switched systems as in (6.119) is
analyzed in [62], where Filippov’s framework is used (hence it may encompass the
nonsmooth systems studied in Sect. 3.14, as long as those can be recast into Filippov’s
convexification, which is the case, for instance, of the signum set-valued function).
Dissipativity is defined with a unique smooth Lyapunov function (this is similar to
what we found in Sect. 6.5.1, and for the nonsmooth systems of Sect. 3.14), or a
continuous piecewise smooth storage function. It is pointed out that the fact that the
Lyapunov function decreases in the interior of the cells Ξi , may be insufficient for
stability, because sliding modes may be unstable. Conditions on the vector fields
have to be added [62, Proposition 13], see also [63, p. 64, p. 84]. The KYP Lemma is
extended to a class of switching systems in [61], where switches occur in the feedback
loop of Lur’e-like systems. The dissipativity of switched discrete-time systems has
been studied in [59, 64–66].
In all the foregoing examples, it has been assumed that the control is directly provided
by the generalized torque τ . In reality, the actuators possess their own dynamics, and
the torque is just the output of a dynamical system. In practice, the effect of neglecting
those dynamics may deteriorate the closed-loop performance [67]. In other words,
the dynamics in (6.40) are replaced by a more accurate armature-controlled DC motor
model as
M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q(t)) = τ(t) = K_t I(t)
R I(t) + L (dI/dt)(t) + K_t q̇(t) = u(t),   (6.120)
Remark 6.52 One may consider this system as the interconnection of two subsystems
as in Fig. 6.5. One notes at once a strong similarity between the model in (6.120) and
the example of the magnetic ball in Example 6.35. The difference is that there is no
coupling through the energy function (no state variable in common) but that the sim-
ple mechanical system, representing the dynamics of the mechanical part is nonlinear.
( q̇(t) ; ṗ(t) ) = ( 0n  In ; −In  0n ) ( (∂H/∂q)(t) ; (∂H/∂p)(t) ) + ( 0n ; In ) τ(t)
y_mech(t) = (0n, In) ( (∂H/∂q)(t) ; (∂H/∂p)(t) ) = q̇(t),   (6.121)
where the input τ represents the electromechanical forces. The dynamics of the
motors is described by the following port-controlled Hamiltonian system with dis-
sipation, with state variable being the total magnetic flux φ = L I and the magnetic energy being H_mg = (1/(2L)) φ²:

φ̇(t) = −R (∂H_mg/∂φ)(t) + u(t) + u_mg(t)
y_mg(t) = (∂H_mg/∂φ)(t) = I(t),   (6.122)
where u mg represents the electromotive forces. Note that the structure matrix con-
sists only of a negative-definite part, thus it is purely an energy dissipating system.
The interconnection between the two subsystems is defined by the following power
continuous interconnection:
Let us calculate directly the value of ⟨u, I⟩_t, where the choice of this supply rate is motivated by an (electrical) energy expression:

⟨u, I⟩_t = ∫_0^t I^T(s) [ R I(s) + L (dI/ds)(s) + K_t q̇(s) ] ds
 = ∫_0^t I(s)^T R I(s) ds + [ (1/2) I(s)^T L I(s) ]_0^t + [ (1/2) q̇(s)^T M(q(s)) q̇(s) ]_0^t + [ U_g(q(s)) ]_0^t
 ≥ ∫_0^t I(s)^T R I(s) ds − (1/2) I(0)^T L I(0) − (1/2) q̇(0)^T M(q(0)) q̇(0) − U_g(q(0)),   (6.126)

where we used the fact that R ≻ 0, L ≻ 0. One sees that the system in (6.120) is
even strictly output passive when the output is y = K t I . Indeed I T R I ≥ λmin (R)y T y
where λmin (R) denotes the minimum eigenvalue of R.
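The balance (6.126) can be reproduced numerically for a 1-DOF link driven through (6.120). The following sketch (illustrative scalar parameters, not the book's data) checks that the supplied electrical energy equals the stored-energy increment plus the resistive losses:

```python
import numpy as np

# Armature-controlled DC motor driving a 1-DOF link, scalar version of (6.120):
# m l^2 qdd + m g l sin q = Kt I,   L dI/dt + R I + Kt qd = u.
m, l, grav, Kt, R, L = 1.0, 0.5, 9.81, 0.8, 2.0, 0.05

def storage(q, qd, I):
    return 0.5 * L * I**2 + 0.5 * m * l**2 * qd**2 + m * grav * l * (1 - np.cos(q))

def f(t, s):
    q, qd, I, supplied, dissip = s
    u = np.sin(5.0 * t)
    qdd = (Kt * I - m * grav * l * np.sin(q)) / (m * l**2)
    Idot = (u - R * I - Kt * qd) / L
    return np.array([qd, qdd, Idot, u * I, R * I**2])

s, t, h = np.array([0.2, 0.0, 0.0, 0.0, 0.0]), 0.0, 1e-4
S0 = storage(*s[:3])
for _ in range(40000):
    k1 = f(t, s); k2 = f(t + h/2, s + h/2*k1)
    k3 = f(t + h/2, s + h/2*k2); k4 = f(t + h, s + h*k3)
    s, t = s + h/6*(k1 + 2*k2 + 2*k3 + k4), t + h

dS = storage(*s[:3]) - S0
# Supplied electrical energy = stored increment + energy dissipated in R:
assert abs(s[3] - dS - s[4]) < 1e-6
assert s[3] >= -S0 - 1e-9   # dissipativity w.r.t. the supply rate u*I
```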
The supply rate u T I has been chosen according to the definition of conjugated port
variables of port-controlled Hamiltonian systems. In the sequel, we shall prove that
no other form on the port variables may be chosen to define a supply rate for another
storage function. Therefore, let us introduce a more general supply rate of the form
u^T A^T B I, for some constant matrices A and B of suitable dimensions. Our goal is to show that if the system is dissipative with respect to this new supply rate, then necessarily (and sufficiently) A = (1/α) U^{-1} K_t^{-1} and B = α K_t U, where α ≠ 0 and U is a full-rank symmetric matrix. Let us compute the available storage associated with this supply rate, i.e.,
t
Va (q0 , q̇0 , I0 ) = sup − u T (s)A T B I (s)ds
u 2 :(0,q0 ,q̇0 ,I0 ) 0 t
1 T
= sup − [I L A T B I ]t0 + I T R A T B I ds (6.128)
u 2 :(0,q0 ,q̇0 ,I0 ) 2 0
t +
+ 0 q̇ T K t A T B K t−1 [M(q)q̈ + C(q, q̇)q̇ + g(q)] ds.
It follows that the necessary conditions for Va(q, q̇, I) to be bounded are that L A^T B ≻ 0 and R A^T B ≻ 0. Moreover, the last integral concerns the dissipativity of the rigid-joint–rigid-link manipulator dynamics. We know storage functions for this dynamics, from which it follows that an output of the form K_t^{-1} B^T A K_t q̇ does not satisfy the (necessary) Kalman–Yakubovich–Popov property, except if K_t^{-1} B^T A K_t = I_n. One concludes that the only supply rate with respect to which the system is dissipative must satisfy

K_t^{-1} B^T A K_t = I_n
L A^T B ≻ 0   (6.129)
R A^T B ≻ 0.
with k_ti > 0. The last equation is the Lagrangian control system representing the dynamics of the manipulator with n degrees of freedom defined in (6.90), where the diagonal matrix K_v ≻ 0 represents the mechanical losses in the manipulator.
In order to reveal the passive structure of the system, we shall again, like in the
preceding case, assemble it as the interconnection of two passive port-controlled
Hamiltonian systems. Therefore, let us split this system into two parts: the magnetic
part and the mechanical part, and interconnect them through a power continuous
interconnection. The first port-controlled Hamiltonian system with dissipation rep-
resents the magnetic energy storage and the electromechanical energy transduction.
The state variables are the total magnetic fluxes in the coils, φ = (φ1, φ2)^T, defining the magnetic energy H_mg = (1/2)( φ1²/L1 + φ2²/L2 ), and the system becomes

φ̇(t) = ( −R1  0n ; 0n  −R2 ) ( ∂H_mg/∂φ1 ; ∂H_mg/∂φ2 ) + ( 1 ; 0 ) u1 + ( 0 ; 1 ) u2 + ( 0 ; K_t φ1/L1 ) u_mg(t),   (6.132)
with the conjugated outputs associated with the voltages u1 and u2:

y1 = (1, 0) ( ∂H_mg/∂φ1 ; ∂H_mg/∂φ2 ) = I1   and   y2 = (0, 1) ( ∂H_mg/∂φ1 ; ∂H_mg/∂φ2 ) = I2,   (6.133)
( q̇(t) ; ṗ(t) ) = ( 0n  In ; −In  −K_v ) ( (∂H/∂q)(t) ; (∂H/∂p)(t) ) + ( 0n ; In ) u_mech(t)   (6.134)

y_mech = (0n, In) ( (∂H/∂q)(t) ; (∂H/∂p)(t) ) = q̇(t),   (6.135)
∂p
where one notes that the dissipation defined by the matrix K_v was included in the structure matrix. The interconnection of the two subsystems is defined as an elementary negative-feedback interconnection:
Hence, the complete system is passive with respect to the supply rate of the remaining
port variables u 1 y1 + y2 u 2 , and with storage function being the total energy Htot .
where J ∈ Rn×n is the rotor inertia matrix. It follows that the (disconnected) actuator
is passive with respect to the supply rate u 1T I1 + u 2T I2 . Actually, we could have started
by showing the passivity of the system in (6.140), and then proceeded to showing
the dissipativity properties of the overall system in (6.130), using a procedure analog
to the interconnection of subsystems. Similar conclusions hold for the armature-
controlled DC motor whose dynamics is given by
J q̈(t) = K t I (t)
(6.141)
R I (t) + L ddtI (t) + K t q̇(t) = u(t),
and which is dissipative with respect to u^T I. This dynamics is even output strictly passive (the output is y = I or y = (I1, I2)^T) due to the resistance.
The available storage function of the system in (6.130) with respect to the supply
rate u 1T I1 + u 2T I2 is found to be, after some calculations:
Va(I1, I2, q, q̇) = (1/2) I1^T L1 I1 + (1/2) I2^T L2 I2 + (1/2) q̇^T M(q) q̇ + U_g(q).   (6.142)
This is a storage function and a Lyapunov function of the unforced system in (6.130).
Notice that the actuator dynamics in (6.140) with input (u 1 , u 2 ) and output (I1 , I2 )
(which are the signals from which the supply rate is calculated, hence the storage
functions) is zero-state detectable: ((u 1 , u 2 ) ≡ (0, 0) and I1 = I2 = 0) =⇒ q̇ = 0
(but nothing can be concluded on q), and is strictly output passive. From Lemma 5.23
one may conclude, at once, that any function satisfying the Kalman–Yakubovich–Popov conditions is indeed positive definite.
In this section, we shall briefly treat systems which may be considered as models of
manipulators in contact with their environment through their end-effector or some
other body (for instance, in assembly tasks or in cooperation with other robots).
These systems are part of a more general class of constrained dynamical systems,
or implicit dynamical systems. More precisely, we shall consider simple mechanical
systems which are subject to two types of constraints. First, we shall consider ideal,
i.e., workless, constraints on the generalized coordinates or velocities, which again
may be split into integrable constraints which may be expressed on the generalized
coordinates, and non-holonomic constraints which may solely be expressed in terms
of the generalized velocities. Second, we shall consider the case when the environ-
ment itself is a simple mechanical system, and hence consider two simple mechanical
systems related by some constraints on their generalized coordinates.
φ(q) = 0.   (6.144)

At the velocity level, the constraints read

J(q)q̇ = 0.   (6.145)

The two sets of constraints (6.144) and (6.145) define a submanifold S of the state space T R^n = R^{2n} of the simple mechanical system (6.40):

S = { (q, q̇) ∈ R^{2n} | φ(q) = 0, J(q)q̇ = 0 }.   (6.146)
The dynamics of the constrained simple mechanical system is then described by the
following system:
M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q(t)) = τ (t) + J T (q(t))λ(t)
(6.147)
J (q(t))q̇(t) = 0,
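The multiplier λ in (6.147) can be computed explicitly by differentiating the constraint J(q)q̇ = 0. A numerical sketch (random data and a constant constraint Jacobian for simplicity; hypothetical, not from the book):

```python
import numpy as np

# Solving (6.147) for lambda: with J qd = 0 maintained at all times,
# differentiating gives J qdd + Jdot qd = 0, hence
# lambda = -(J M^{-1} J^T)^{-1} [ J M^{-1}(tau - C qd - g) + Jdot qd ].
rng = np.random.default_rng(2)
n, mcon = 3, 1
Msqrt = rng.standard_normal((n, n))
M = Msqrt @ Msqrt.T + n * np.eye(n)           # symmetric positive definite inertia
J = rng.standard_normal((mcon, n))            # constant (linear) constraint Jacobian
Jdot = np.zeros((mcon, n))
tau = rng.standard_normal(n)
g = rng.standard_normal(n)
qd = rng.standard_normal(n)
qd -= J.T @ np.linalg.solve(J @ J.T, J @ qd)  # project qd so that J qd = 0
Cqd = rng.standard_normal(n)                  # the term C(q, qd) qd, arbitrary here

Minv = np.linalg.inv(M)
rhs = J @ Minv @ (tau - Cqd - g) + Jdot @ qd
lam = -np.linalg.solve(J @ Minv @ J.T, rhs)
qdd = Minv @ (tau - Cqd - g + J.T @ lam)

assert np.allclose(J @ qd, 0)           # state lies on the submanifold S
assert np.allclose(J @ qdd, 0)          # acceleration keeps the constraint
assert abs((J.T @ lam) @ qd) < 1e-12    # ideal constraint force is workless
```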
Remark 6.55 Note that the constrained system (6.147) may be viewed as a port-
controlled Hamiltonian system with conjugated port-variables λ and y = J (q)q̇
interconnected to a power continuous constraint relation defined by y = 0 and
λ ∈ Rm . It may then be shown that this defines an implicit port-controlled Hamilto-
nian system [34, 35]. A more general definition of kinematic constraints was con-
sidered in [26, 69].
Remark 6.56 Constrained dynamical systems are the subject of numerous works which are impossible to present here in any detail; we refer the interested reader to [70] for a brief historical overview and a presentation of the related Hamiltonian and Lagrangian formulations, as well as to [71] for a Hamiltonian formulation in a more system-theoretic setting.
Remark 6.57 Note that the kinematic constraint of order zero (6.144) is not included
in the definition of the dynamics (6.147). Indeed it is not relevant to it, in the sense
that this dynamics is valid for any constraint φ(q) = c where c is a constant vector
and may be fixed to zero by the appropriate initial conditions.
One may reduce the constrained system to a simple mechanical system of order
2(n − m), by using an adapted set of coordinates as proposed by McClamroch and
6.7 Passive Environment 475
Wang [72]. Using the theorem of implicit functions, one may find, locally, a function
ρ from Rn−m to Rm such that
φ(ρ(q2 ), q2 ) = 0. (6.148)
M̃(z) = (∂Q/∂z)^T(z) M(Q(z)) (∂Q/∂z)(z),   (6.151)
and g̃(z) is the gradient of the potential function Ũ (Q(z)). The kinematic constraint
is now expressed in a canonical form in (6.150), or in its integral form z 1 = 0. The
equations in (6.150) may be interpreted as follows: the second equation corresponds
to the motion along the tangential direction to the constraints. It is not affected by
the interaction force, since the constraints are assumed to be frictionless. It is exactly
the reduced-order dynamics that one obtains after having eliminated m coordinates,
so that the n − m remaining coordinates z_2 are independent. Therefore, the first equation must be considered as an algebraic relationship that provides the value of the Lagrange multiplier as a function of the system's state and the external forces.
Taking into account the canonical expression of the kinematic constraints, the
constrained system may then be reduced to the simple mechanical system of order
2(n − m) with generalized coordinates z 2 , and inertia matrix (defining the kinetic
energy) being the submatrix M̃r (z 2 ) obtained by extracting the last n − m columns
and rows from M̃(z) and setting z 1 = 0. The input term is obtained by taking into
account the expression of Q and computing its Jacobian:
$$\frac{\partial Q}{\partial z}^{T} = \begin{pmatrix} I_m & 0_{m\times(n-m)} \\ -\frac{\partial \rho}{\partial q_2}(Q(z)) & I_{n-m} \end{pmatrix}. \qquad (6.152)$$
The reduced dynamics is then a simple mechanical system with inertia matrix M̃_r(z), and is expressed by

$$\tilde M_r(z(t))\ddot z_2(t) + \tilde C_r(z(t), \dot z(t))\dot z_2(t) + \tilde g_r(z(t)) = \begin{pmatrix} -\frac{\partial \rho}{\partial q_2}(z_2(t)) \\ I_{n-m} \end{pmatrix}^{T} \tau(t). \qquad (6.153)$$

The port-conjugated output to τ is given by $y_r(t) = \begin{pmatrix} -\frac{\partial \rho}{\partial q_2}(q_2(t)) \\ I_{n-m} \end{pmatrix} \dot z_2(t)$. Hence the restricted system is passive and lossless with respect to the supply rate τ^T y_r, with storage function the sum of the kinetic and potential energies of the constrained system.
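The congruence transform (6.151) and the extraction of the reduced inertia matrix can be checked numerically. The sketch below is a toy illustration, not taken from the text: the constraint φ(q) = q₁ − sin(q₂) (so that ρ(q₂) = sin(q₂)) and the inertia matrix are assumptions made for the example.

```python
import numpy as np

# Toy data: n = 2, m = 1, holonomic constraint phi(q) = q1 - sin(q2) = 0,
# so q1 = rho(q2) = sin(q2) on the constraint manifold (z1 = 0).
M = np.array([[2.0, 0.3],
              [0.3, 1.0]])          # constant inertia matrix (assumed)

def jac_Q(z2):
    """Jacobian dQ/dz of q = Q(z) = (z1 + sin(z2), z2), independent of z1."""
    return np.array([[1.0, np.cos(z2)],
                     [0.0, 1.0]])

z2 = 0.7
J = jac_Q(z2)
M_tilde = J.T @ M @ J               # congruence transform, cf. (6.151)

# Reduced inertia: last (n - m) rows and columns of M_tilde at z1 = 0
M_r = M_tilde[1:, 1:]

# The reduced inertia must remain symmetric positive definite, so that the
# reduced system keeps a well-defined kinetic energy.
assert np.allclose(M_tilde, M_tilde.T)
assert np.all(np.linalg.eigvalsh(M_r) > 0)
```

The point of the check is that a congruence transform with a full-rank Jacobian preserves symmetry and positive definiteness, which is what makes M̃_r a legitimate inertia matrix.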
Remark 6.58 We have considered the case of simple mechanical systems subject to holonomic kinematic constraints, that is, kinematic constraints of order 1 in (6.145) that fulfill integrability conditions guaranteeing the existence of kinematic constraints of order 0 (6.144). If this is not the case, the constraints are said to be nonholonomic. The system can then no longer be reduced to a lower order simple mechanical system. As we shall not treat such systems in the sequel, we only sketch the main results and indicate some references. These systems may still be reduced by choosing an adapted set of velocities (in a Lagrangian formulation) or momenta (in a Hamiltonian formulation), and then projecting the dynamics onto a subspace of velocities or momenta [30, 70, 73]. The resulting dynamics cannot be expressed as a controlled Lagrangian system; however, it has been proved that it may still be expressed as a port-controlled Hamiltonian system whose structure matrix does not satisfy the Jacobi identities (6.64) [30, 74].
In this section, the environment with which the considered system is in contact is
supposed to be compliant with linear elasticity.
becomes an internal force. The virtual work principle (for the moment, let us assume that all contacts are frictionless) tells us that for any virtual displacements δq and δx, one has δx^T F_x = −δq^T F_q. This can also be seen as a form of the principle of mutual actions. Let us further assume that rank(φ) = m and that K_e ≻ 0. Let us note that the relation x = φ(q) relates the generalized displacements of the controlled subsystem to those of the uncontrolled one, i.e., to the deflection of the environment. With this in mind, one can define, following McClamroch, the nonlinear transformation q = Q(z),

$$z = Q^{-1}(q) = \begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = \begin{pmatrix} K_e\, \phi(q_1, q_2) \\ q_2 \end{pmatrix}, \qquad \begin{pmatrix} q_1 \\ q_2 \end{pmatrix} = \begin{pmatrix} \Omega(z_1, z_2) \\ z_2 \end{pmatrix}, \qquad \dot q = T(z)\dot z,$$

with

$$T(z) = \begin{pmatrix} \frac{\partial \Omega}{\partial z_1}^{T} & \frac{\partial \Omega}{\partial z_2}^{T} \\ 0 & I_{n-m} \end{pmatrix},$$

where z_1(t) ∈ R^m, z_2(t) ∈ R^{n−m}, and φ(Ω(z_1, z_2), z_2) = z_1
for all z in the configuration space. Notice that from the rank assumption on φ(q),
and due to the procedure to split z into z 1 and z 2 (using the implicit function Theo-
rem), the Jacobian T (z) is full-rank. Moreover z 2 = q2 where q2 are the n − m last
components of q. In new coordinates z, one has z 1 = x and
$$\begin{cases} \bar M(z(t))\ddot z(t) + \bar C(z(t), \dot z(t))\dot z(t) + \bar g(z(t)) = \bar\tau(t) + \begin{pmatrix} \lambda_{z_1}(t) \\ 0 \end{pmatrix} \\[4pt] M_e(z_1(t))\ddot z_1(t) + C_e(z_1(t), \dot z_1(t))\dot z_1(t) + \frac{\partial R_e}{\partial \dot z_1}(t) + g_e(z_1(t)) + K_e z_1(t) = -\lambda_{z_1}(t), \end{cases} \qquad (6.155)$$
where λ_{z_1}(t) ∈ R^m, M̄(z) = T(z)^T M(q) T(z), and τ̄ = T(z)^T τ. In a sense, this coordinate change splits the generalized coordinates into a “normal” direction z_1 and a “tangential” direction z_2, similarly to Sect. 6.7.1. The virtual work principle tells us that δz^T F_z = −δz_1^T λ_{z_1} for all virtual displacements δz, hence the form of F_z in (6.155), where the principle of mutual actions clearly appears. The original system may appear to have n + m degrees of freedom. However, since the two subsystems are assumed to be bilaterally coupled, the number of degrees of freedom is n. This is clear once the coordinate change in (6.155) has been applied. The system in (6.154) once again has a cascade form, where the interconnection between the two subsystems is the contact interaction force.
Remark 6.59 An equivalent representation as two passive blocks is shown in Fig. 6.6.
As an exercise, one may consider the calculation of the storage functions associated
with each block.
Let us assume that the potential energy terms U_g(z) and U_{ge}(z_1) are bounded from below. This assumption is clearly justified by the foregoing developments on passivity properties of Euler–Lagrange systems. An evident choice of supply rate is (τ̄^T + F_z^T)ż − λ_{z_1}^T ż_1. Although one might be tempted to reduce this expression to τ̄^T ż, since F_z^T ż = λ_{z_1}^T ż_1, it is important to keep both terms, since they represent the outputs and inputs of different subsystems: one refers to the controlled system, while the other refers to the uncontrolled obstacle. Let us calculate the available storage of the total system in (6.155):
$$\begin{aligned} V_a(z, \dot z) &= \sup_{\bar\tau :\, (0, z(0), \dot z(0)) \to} \; -\int_0^t \left[ (\bar\tau^T + F_z^T)\dot z - \lambda_{z_1}^T \dot z_1 \right] \mathrm{d}s \qquad (6.156) \\ &= \tfrac12 \dot z^T(0) \bar M(z(0)) \dot z(0) + \tfrac12 \dot z_1^T(0) M_e(z_1(0)) \dot z_1(0) + \tfrac12 z_1^T(0) K_e z_1(0) + U_g(z(0)) + U_{ge}(z_1(0)). \end{aligned}$$
Hence, the system is dissipative since Va (·) is bounded for bounded state. Since we
introduced some Rayleigh dissipation in the environment dynamics, the system has
some strict passivity property.
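As a sanity check of this kind of energy bookkeeping, one may simulate a one-degree-of-freedom analogue of the coupled system (a mass in contact with a damped elastic environment, here lumped into a single mass–spring–damper; all numerical values below are illustrative assumptions) and verify the dissipation inequality numerically:

```python
import numpy as np

# 1-DOF analogue: m*qdd + c*qd + k*q = u, storage V = 0.5*m*qd^2 + 0.5*k*q^2.
# Passivity w.r.t. the supply rate u*qd requires V(T) - V(0) <= supplied energy.
m, c, k = 1.0, 0.5, 4.0             # illustrative parameters (c > 0: dissipation)
h, T = 1e-3, 10.0
q, qd = 0.2, 0.0
V0 = 0.5 * m * qd**2 + 0.5 * k * q**2
supplied = 0.0
for i in range(int(T / h)):
    u = np.sin(0.001 * i)           # arbitrary bounded input
    qdd = (u - c * qd - k * q) / m
    qd += h * qdd                   # semi-implicit Euler step
    q += h * qd
    supplied += h * u * qd          # quadrature of the supply rate integral
V_T = 0.5 * m * qd**2 + 0.5 * k * q**2

# Dissipation inequality: the storage increase cannot exceed the supplied energy;
# the gap is the energy dissipated in the damper, c * integral of qd^2.
assert V_T - V0 <= supplied
```

The margin between the supplied energy and the storage increase is strictly positive here because of the Rayleigh-type damping, mirroring the strict passivity property noted above.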
The material in this section may be seen as the continuation of what we exposed in Chap. 3, Sects. 3.14 and 3.14.3. The notation is the same. Consider the following differential inclusion:
M q̈(t) + C q̇(t) + K q(t) ∈ −H1 ∂φ(H2 q̇(t)), a.e. on [t0 , +∞). (6.158)
We recall that dom(∂φ) denotes the domain of the subdifferential of the convex func-
tion φ(·). The term H1 ∂φ(H2 ·) is supposed to model the unilaterality of the contact
induced by friction forces (for instance, the Coulomb friction model). Unilaterality
is not at the position level as it is in the next section, but at the velocity level. This is
important, because it means that the solutions possess more regularity. Notice that if
the system is considered as a first-order differential system, then as the section title
indicates, solutions (q(·), q̇(·)) are time continuous.
Theorem 6.61 ([75]) Suppose that
• (a) There exists a matrix R = R T ∈ Rn×n , nonsingular, such that
R −2 H2T = M −1 H1 . (6.159)
3 Recall that ∂φ(·) is a set-valued maximal monotone operator, see Corollary 3.121.
convex, and lower semicontinuous, and is defined as Φ(ż) = χ(ż), with χ(ż) = (φ ◦ H_2 R^{−1})(ż). The well-posedness of the system in (6.160) can be shown by relying on a theorem quite similar to Theorem 3.139, with a slight difference: the variational inequality associated with (6.160) is of the second kind:
a.e. in [t0 , +∞). This reduces to (3.263) if one chooses φ(·) as the indicator function
of a convex set K , and is in turn equivalent to an unbounded unilateral differential
inclusion.4 Indeed one has
for any proper, convex, lower semicontinuous function with closed domain, M ∈
Rn×n , q ∈ Rn (see also Proposition A.89). The equation in (6.162) is a general-
ized equation with unknown u. Its well-posedness depends on the properties of
both M and φ(·). The stability analysis of such mechanical systems will be carried out in Sect. 7.2.5. When Φ(·) is chosen as the absolute value function, Φ(x) = |x|, one recovers Coulomb-like frictional dynamics. Indeed, ∂|x| is nothing but the set-valued relay function.
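The set-valued nature of ∂|x| is precisely what produces an exact stick phase. A minimal numerical sketch (assuming a single degree of freedom and an implicit, backward Euler discretization; all values are illustrative):

```python
# Implicit (backward Euler) discretization of the relay inclusion
#   m * dv/dt ∈ -mu * ∂|v|      (1-DOF Coulomb-like friction, no other forces).
# One step solves v_new ∈ v_old - (h*mu/m) * ∂|v_new|, whose unique solution
# is the soft-threshold (proximal) map of the absolute value function:
def step(v, lam):
    """Resolvent of lam * ∂|.| applied to v (soft threshold)."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0                      # stick: the set-valued law absorbs v

m, mu, h = 1.0, 1.0, 0.1            # illustrative parameters
v = 1.0
history = []
for _ in range(50):
    v = step(v, h * mu / m)
    history.append(v)

# Finite-time stop: the velocity reaches exactly zero and stays there,
# with no chattering (unlike an explicit discretization of sign(v)).
assert history[-1] == 0.0
assert all(x == 0.0 for x in history[-10:])
```

An explicit scheme using a single-valued sign(v) would oscillate around zero forever; the implicit treatment of the set-valued relay is what reproduces the physical stick phase in finite time.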
4 Unbounded, because normal cones are not bounded, and unilateral, because normal cones to sets
embed unilaterality.
with q(0) = q_0 and q̇(0^−) = q̇_0. In (6.163), M(q) = M^T(q) ≻ 0 is the n × n inertia matrix, $F(q, \dot q) = C(q, \dot q)\dot q + \frac{\partial U}{\partial q}(q)$, where C(q, q̇)q̇ denotes centripetal and Coriolis generalized forces, whereas U(q) is a smooth potential energy from which conservative forces derive, and h(·) : R^n → R^m. We assume that h(q_0) ≥ 0. The
set V (q) is the tangent cone to the set Φ = {q ∈ Rn | h(q) ≥ 0}, see (3.237) and
Fig. 3.14 for examples: V (q) = TΦ (q). The impact times are generically denoted as
tk , the left-limit q̇(tk− ) ∈ −V (q(tk )) whereas the right-limit q̇(tk+ ) ∈ V (q(tk )). The
third line in (6.163) is a collision mapping that relates pre- and post-impact generalized velocities, and e ∈ [0, 1] is a restitution coefficient [76]. The notation prox_{M(q)} denotes proximation in the kinetic metric, i.e., the metric defined by x^T M(q)y for x, y ∈ R^n: the vector $\frac{\dot q(t_k^+) + e\,\dot q(t_k^-)}{1+e}$ is the closest vector to the pre-impact velocity inside V(q(t_k)) (it can therefore be computed through a quadratic program) [77].
In particular, the impact law in (6.163) implies that the kinetic energy loss at time tk
satisfies (see [76], [40, p. 199, p. 489], [78])
$$T_L(t_k) = -\frac{1}{2}\,\frac{1-e}{1+e}\, \big(\dot q(t_k^+) - \dot q(t_k^-)\big)^{T} M(q(t_k)) \big(\dot q(t_k^+) - \dot q(t_k^-)\big) \le 0. \qquad (6.164)$$
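For a single active constraint, the quadratic program mentioned above has a closed-form solution, which allows a direct numerical check of the impact law and of the loss formula (6.164). The inertia matrix, constraint normal, and pre-impact data below are illustrative assumptions:

```python
import numpy as np

# Toy data (illustrative): 2-DOF system, one active constraint h(q) >= 0.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # inertia matrix at the impact configuration
n = np.array([1.0, 0.0])            # gradient of h at q(t_k)
e = 0.5                             # restitution coefficient
v_minus = np.array([-3.0, 1.0])     # pre-impact velocity, n @ v_minus < 0

def proj_M(w):
    """Projection of w onto V = {v : n.v >= 0} in the metric <x,y>_M = x^T M y
    (closed form of the quadratic program for a single constraint)."""
    s = n @ w
    if s >= 0:
        return w
    Minv_n = np.linalg.solve(M, n)
    return w - (s / (n @ Minv_n)) * Minv_n

# Impact law: the proximation of the pre-impact velocity equals
# (v+ + e v-) / (1 + e), hence v+ = -e v- + (1 + e) proj.
v_plus = -e * v_minus + (1 + e) * proj_M(v_minus)

dv = v_plus - v_minus
T_loss = -0.5 * (1 - e) / (1 + e) * dv @ M @ dv     # formula (6.164)
dT = 0.5 * v_plus @ M @ v_plus - 0.5 * v_minus @ M @ v_minus

assert n @ v_plus >= -1e-12         # post-impact velocity is admissible
assert T_loss <= 0.0
assert np.isclose(dT, T_loss)       # (6.164) gives exactly the kinetic loss
```

The last assertion illustrates that, for this impact law, (6.164) is not merely an upper bound: it equals the actual kinetic energy variation across the impact.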
Remark 6.62 The formulation of the unilateral constraints in (6.163) does not
encompass all closed domains Φ = {q | h(q) ≥ 0}, as simple nonconvex cases with
so-called reentrant corners prove [79]. It can be used to describe admissible domains
Φ which are defined either by a single constraint (i.e., m = 1), or with m < +∞
where convexity holds at nondifferentiable points of the boundary bd(Φ) (such sets
are called regular [80]). It is easy to imagine physical examples that do not fit within
this framework, e.g., a ladder.
Let us note that the tangent cone V (q(t)) is assumed to have its origin at q(t) so
that 0 ∈ V (q(t)) to allow for post-impact velocities tangential to the admissible set
boundary bd(Φ). The second line in (6.163) is a set of complementarity conditions between h(q) and λ, stating that both terms must remain nonnegative and orthogonal to each other. Before passing to the well-posedness results for (6.163), let us define functions of bounded variation.
Definition 6.63 Let f : [a, b] → R be a function, and let the total variation of f (·)
be defined as
$$V(x) = \sup \sum_{i=1}^{N} \big| f(t_i) - f(t_{i-1}) \big|, \qquad (a \le x \le b), \qquad (6.165)$$
where the supremum is taken over all integers N and all possible choices of the sequence {t_i} such that a = t_0 < t_1 < · · · < t_N = x. The function f(·) is said to be of bounded variation on [a, b] if V(b) < +∞.
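The sum in (6.165) over any fixed partition is a lower bound on the total variation, which makes the definition easy to explore numerically (the grids and test functions below are illustrative):

```python
import numpy as np

def discrete_tv(f, grid):
    """Sum |f(t_i) - f(t_{i-1})| over one fixed partition: a lower bound on V."""
    y = f(grid)
    return float(np.sum(np.abs(np.diff(y))))

# For a monotone function, the supremum in (6.165) is attained by any
# partition, by telescoping: V(b) = |f(b) - f(a)|.
grid = np.linspace(0.0, 1.0, 1001)
assert np.isclose(discrete_tv(lambda x: x**2, grid), 1.0)

# f(x) = x sin(1/x) (with f(0) = 0) is continuous but not BV near 0:
# refining the partition resolves more oscillations, and the variation
# grows without bound as the grid approaches x = 0.
f = lambda x: x * np.sin(1.0 / x)
coarse = np.linspace(1e-4, 1.0, 10**3)
fine = np.linspace(1e-4, 1.0, 10**5)
tv_coarse, tv_fine = discrete_tv(f, coarse), discrete_tv(f, fine)
assert tv_fine > tv_coarse          # finer partition, larger variation sum
```

The second experiment is a numerical counterpart of the classical counterexample discussed below: continuity does not imply bounded variation.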
One should not confuse BV functions with piecewise continuous functions. We say
that a function f : I → J is piecewise continuous, if there exists a constant δ > 0
and a finite partition of I into intervals (ai , bi ), with I = ∪i [ai , bi ], and bi − ai ≥ δ
for all i, and f (·) is continuous on each (ai , bi ) with left-limit at ai and right-
limit at bi . Thus, piecewise continuous functions are a different class of functions.
There are well-known examples of continuous functions which are not BV, like f : x → x sin(1/x) defined on [0, 1]. Clearly f(0) = 0, but the infinite oscillations of f(·) as x approaches 0 prevent bounded variation. In addition, piecewise continuity precludes finite accumulations of discontinuities. BV functions are such that, given any t, there exists σ > 0 such that the function is continuous on (t, t + σ).5 But this σ may not be uniform with respect to t. Definition 6.63 holds whatever the function f(·), even if f(·) is not AC. One may consult [81] for more information
on BV functions. One speaks of local bounded variation (LBV) functions when
f : R → R and f (·) is BV on all compact intervals [a, b]. LBV functions possess
very interesting properties, some of which are recalled below.
Assumption 17 The gradients $\nabla h_i(q) = \frac{\partial h_i}{\partial q}^{T}(q)$ are not zero at the contact configurations h_i(q) = 0, and the vectors ∇h_i, 1 ≤ i ≤ m, are independent. Furthermore
the functions h(·), F(q, q̇), M(q), and the system’s configuration manifold are real
analytic, and ||F(q, q̇)||q ≤ d(q, q(0)) + ||q̇||q , where d(·, ·) is the Riemannian dis-
tance and || · ||q is the norm induced by the kinetic metric.
Then the following results hold, which are essentially a compilation of Proposition
32, Theorems 8 and 10, and Corollary 9 of [78] (see also [40, Theorem 5.3]):
1. Solutions of (6.163) exist on [0, +∞) such that q(·) is absolutely continuous
(AC), whereas q̇(·) is right-continuous of local bounded variation (RCLBV). In
particular the left- and right-limits of these functions exist everywhere.
2. The function q(·) cannot be supposed to be everywhere differentiable. One has $q(t) = q(0) + \int_0^t v(s)\,\mathrm{d}s$ for some function $v(\cdot) \stackrel{\text{a.e.}}{=} \dot q(\cdot)$. Moreover q̇(t^+) = v(t^+) and q̇(t^−) = v(t^−) [82].
3. Solutions are unique (however, in general they do not depend continuously on
the initial conditions [40]). The analyticity of the data is crucial for this property
to hold [78].
4. The acceleration q̈ is not a function, it is a measure denoted as dv. The measure
dv is the sum of three measures: an atomic measure dμa (which is the derivative
of a piecewise constant jump function s(·)), a nonatomic measure dμna which is singular with respect to the Lebesgue measure dt (it is associated with a singular function, continuous everywhere and differentiable dt-almost everywhere with zero derivative; the so-called Cantor function is one example), and an AC function with respect to dt, which we denote q̈(·), i.e., dv = dμa +
dμna + q̈(t)dt. The atoms of dμa are the times of discontinuity of s(·), and
correspond to the impact times [77]. As alluded to above, the measure dμna
is associated with a continuous LBV function ζq̇ (t) such that ζ̇q̇ (t) = 0 almost
5 Strictly speaking, this is true only if the function has no accumulation of discontinuities on the
right, which is the case for the velocity in mechanical systems with impacts and complementarity
constraints. In other words, the motion cannot “emerge” from a “reversed” accumulation of impacts,
under mild assumptions on the data.
everywhere on any interval [a, b]. Thus, the velocity can be written as v(·) =
g(·) + s(·), g(·) is the sum of an AC function and ζq̇ (·), and its derivative is equal
to dμna + q̈(t)dt.
5. The set of impact times is countable. In many applications, one has $d\mu_a = \sum_{k \ge 0} [\dot q(t_k^+) - \dot q(t_k^-)]\,\delta_{t_k}$, where δ_t is the Dirac measure and the sequence {t_k}_{k≥0} can be ordered, i.e., t_{k+1} > t_k. However, phenomena like accumulations of left-accumulations of impacts may exist (at least, bounded variation does not preclude them). In any case, the ordering may not be possible. This is a sort of complex Zeno behavior.6 In the case of elastic impacts (e = 1), it follows from [78, Prop. 4.11] that t_{k+1} − t_k ≥ δ > 0 for some δ > 0, which depends on the initial data. Hence, solutions are piecewise continuous in this case.
6. Assumption 17 implies that between impacts the position and the contact force
multiplier are analytic [78, 83], which is a desired property for most studies in
Control or Robotics: unless the right-hand side of the dynamics contains some
nonanalytic terms, the solution is quite gentle between the velocity jumps. The
right-velocity q̇ + (·) is, therefore, also analytic in intervals contained in (tk , tk+1 ),
where tk , k ≥ 0 is any impact time.7
7. Any quadratic function W (·) of q̇ is itself RCLBV, hence its derivative is a
measure dW [77]. Consequently dW ≤ 0 has a meaning and implies that the
function W (·) does not increase [84, p. 101].
These results enable one to lead a stability analysis safely. Let us now introduce a
new formulation of the dynamics in (3.256), which can be written as the following
Measure Differential Inclusion (MDI) [77]:
$$-M(q(t))\,dv - F(q(t), v(t^+))\,dt \in \partial \psi_{V(q(t))}(w(t)) \subseteq \partial \psi_{\Phi}(q(t)), \qquad (6.166)$$

where $w(t) = \frac{v(t^+) + e\,v(t^-)}{1+e} \in V(q(t))$ from (6.163). If e = 0, then w(t) = v(t^+), and if e = 1, then $w(t) = \frac{v(t^+) + v(t^-)}{2}$. Moreover, when v(·) is continuous, then w(t) = v(t).
The term MDI has been coined by J.J. Moreau, and (6.166) may also be called
Moreau’s second-order sweeping process. The inclusion in the right-hand side of
(6.166) is proved as follows: for convenience let us rewrite the following definitions
for a closed nonempty convex set Φ ⊆ Rn :
which precisely means that the normal cone is the polar cone of the tangent cone
(see Definition 3.112), and
6 That is, all phenomena involving an infinity of events in a finite time interval, and which occur in
various types of nonsmooth dynamical systems like Filippov’s differential inclusions, etc.
7 In Control or Robotics studies, it may be sufficient to assume that the velocity is of special bounded
variation, i.e., the measure dμna is zero. However, this measure does not hamper stability analysis
as we shall see in Sect. 7.2.4, though in all rigor one cannot dispense with its presence.
$$-M(q(t))\,\frac{dv}{d\mu}(t) - F(q(t), v(t^+))\,\frac{dt}{d\mu}(t) \in \partial \psi_{V(q(t))}(w(t)) \subseteq \partial \psi_{\Phi}(q(t)) \qquad (6.169)$$

holds dμ-almost everywhere. In a sense, densities replace derivatives for measures. When dealing with MDEs or MDIs, it is then natural to manipulate densities instead of derivatives. In general, one can choose dμ = |dv| + dt [81, p. 90], where |dv| is the absolute value of dv, or dμ = ||v(t)||dt + dμa, or dμ = dt + dμa. It is fundamental to recall at this stage that the solution of (6.169) does not depend on this choice. For instance, if dμ = ||v(t)||dt + dμa, then for all t ≠ t_k, $\frac{dt}{d\mu}(t) = \frac{1}{||v(t)||}$ and $\frac{dv}{d\mu}(t) = \frac{\ddot q(t)}{||v(t)||}$; whereas if dμ = dt + dμa, then for all t ≠ t_k, $\frac{dt}{d\mu}(t) = 1$ and $\frac{dv}{d\mu}(t) = \ddot q(t)$.
Remark 6.64 The above mathematical framework is more than just a mathematical
fuss. Indeed as noted in [77], introducing the velocity into the right-hand side of the
dynamics as done in (6.166), not only allows one to get a compact formulation of the
nonsmooth dynamics (see Fig. 6.7 in this respect), but it also paves the way toward
the consideration of friction in the model. In turn, it is clear that introducing friction
is likely to complicate the dynamics. In summary, the dynamics in (6.169) is rich
enough to encompass complex behaviors involving solutions which may be far from
merely piecewise continuous. This is a consequence of replacing functions by the
more general notion of measure, at the price of a more involved model. In fact, using
measures allows one to encompass somewhat complex Zeno behaviors occurring in
unilaterally constrained mechanical systems in a rigorous manner.
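To make this concrete, a Moreau-style time-stepping scheme for the bouncing ball (the prototypical example of an impact accumulation; all numerical data below are illustrative) can be sketched. Working at the velocity level, it needs no event detection and passes through the Zeno accumulation without difficulty:

```python
# Bouncing ball: q'' = -g subject to q >= 0, with Newton restitution e.
# Moreau-style time-stepping: advance the velocity first; when the contact is
# active with an approaching free velocity, apply the restitution law.
e, g, h = 0.8, 9.81, 1e-4           # illustrative data (unit mass)
q, v = 1.0, 0.0
E0 = 0.5 * v**2 + g * q             # total mechanical energy per unit mass
impacts = 0
for _ in range(200_000):            # simulate 20 s
    v_free = v - g * h              # free (unconstrained) velocity update
    if q <= 0.0 and v_free < 0.0:   # active constraint, approaching velocity
        v = -e * v_free             # restitution applied to the free velocity
        impacts += 1
    else:
        v = v_free
    q += h * v                      # position update with the new velocity

E_T = 0.5 * v**2 + g * max(q, 0.0)

# The rebounds accumulate in finite time (Zeno), after which the ball is at
# rest: the impacts have dissipated (almost) all the initial energy.
assert impacts > 10
assert E_T < 0.05 * E0
```

An event-driven integrator would have to locate each of the infinitely many impact times before the accumulation; the measure-theoretic formulation is what legitimizes simply stepping over it.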
References
13. Acary V, Bonnefon O, Brogliato B (2011) Nonsmooth modeling and simulation for switched
circuits. Lecture notes in electrical engineering, vol 69. Springer Science+Business Media BV,
Dordrecht
14. Szatkowski A (1979) Remark on explicit topological formulation of Lagrangian and Hamilto-
nian equations for nonlinear networks. IEEE Trans Circuits Syst 26(5):358–360
15. Maschke BM, van der Schaft AJ, Breedveld PC (1995) An intrinsic Hamiltonian formulation
of the dynamics of LC-circuits. IEEE Trans Circuits Syst I: Fundam Theory Appl 42(2):73–82
16. Brockett RW (1977) Control theory and analytical mechanics. In: Martin C, Herman R (eds)
Geometric control theory. Math Sci Press, Brookline, pp 1–46
17. de Jalon JG, Gutiérrez-López M (2013) Multibody dynamics with redundant constraints and singular mass matrix: existence, uniqueness, and determination of solutions for accelerations and constraint forces. Multibody Syst Dyn 30(3):311–341
18. Brogliato B, Goeleven D (2015) Singular mass matrix and redundant constraints in unilaterally
constrained Lagrangian and Hamiltonian systems. Multibody Syst Dyn 35(1):39–61
19. Merkin Y (1997) Introduction to the theory of stability, TAM, vol 24. Springer, Berlin
20. van der Schaft AJ (2017) L2-gain and passivity techniques in nonlinear control, 3rd edn.
Communications and control engineering. Springer International Publishing AG, Switzerland
21. Nijmeijer H, van der Schaft AJ (1990) Nonlinear dynamical control systems. Springer, New
York
22. Isidori A (1995) Nonlinear control systems. Communications and control engineering, 3rd edn.
Springer, London. 4th printing, 2002
23. Maschke BM, van der Schaft AJ (1992) Port controlled Hamiltonian systems: modeling origins
and system theoretic properties. In: Proceeding 2nd IFAC symposium on nonlinear control
systems design, NOLCOS’92. Bordeaux, France, pp 282–288
24. van der Schaft AJ, Maschke BM (1995) The Hamiltonian formulation of energy conserving
physical systems with ports. Archiv für Elektronik und Übertragungstechnik 49(5/6):362–371
25. Maschke BM, van der Schaft AJ, Breedveld PC (1992) An intrinsic Hamiltonian formulation of
network dynamics: nonstandard poisson structures and gyrators. J Frankl Inst 329(5):923–966
26. Maschke BM (1996) Elements on the modelling of multibody systems. In: Melchiorri C, Tor-
nambè A (eds) Modelling and control of mechanisms and robots. World Scientific Publishing
Ltd, Singapore
27. Branin FH (1977) The network concept as a unifying principle in engineering and the phys-
ical sciences. In: Branin FH, Huseyin K (eds) Problem analysis in science and engineering.
Academic Press, New York, pp 41–111
28. Paynter HM (1961) Analysis and design of engineering systems. M.I.T Press, Cambridge
29. Breedveld PC (1984) Physical systems theory in terms of bond graphs. PhD thesis, University
of Twente, Twente, Netherlands
30. van der Schaft AJ, Maschke BM (1994) On the Hamiltonian formulation of non-holonomic
mechanical systems. Rep Math Phys 34(2):225–233
31. Loncaric J (1987) Normal form of stiffness and compliant matrices. IEEE J Robot Autom
3(6):567–572
32. Fasse ED, Breedveld PC (1998) Modelling of elastically coupled bodies, part I: general theory
and geometric potential function method. ASME J Dyn Syst Meas Control 120:496–500
33. Fasse ED, Breedveld PC (1998) Modelling of elastically coupled bodies, part II: exponential-
and generalized-coordinate methods. J Dyn Syst Meas Control 120:501–506
34. Dalsmo M, van der Schaft AJ (1999) On representations and integrability of mathematical
structures in energy-conserving physical systems. SIAM J Control Optim 37(1):54–91
35. Maschke BM, van der Schaft AJ (1997) Interconnected mechanical systems, part I: geometry
of interconnection and implicit Hamiltonian systems. In: Astolfi A, Melchiorri C, Tornambè A
(eds) Modelling and control of mechanical systems. Imperial College Press, London, pp 1–16
36. Spong MW (1987) Modeling and control of elastic joint robots. ASME J Dyn Syst Meas
Control 109:310–319
37. Tomei P (1991) A simple PD controller for robots with elastic joints. IEEE Trans Autom
Control 36:1208–1213
38. Paoli L, Schatzman M (1993) Mouvement à un nombre fini de degrés de liberté avec contraintes
unilatérales: cas avec perte d’énergie. Math Model Numer Anal (Modèlisation Mathématique
Anal Numérique) 27(6):673–717
39. Acary V, de Jong H, Brogliato B (2014) Numerical simulation of piecewise-linear models
of gene regulatory networks using complementarity systems. Phys D: Nonlinear Phenom
269:103–119
40. Brogliato B (2016) Nonsmooth mechanics. Models, dynamics and control. Communications
and control engineering, 3rd edn. Springer International Publishing Switzerland, London. Erra-
tum/Addendum at https://hal.inria.fr/hal-01331565
41. Brogliato B (2003) Some perspectives on the analysis and control of complementarity systems.
IEEE Trans Autom Control 48(6):918–935
42. Georgescu C, Brogliato B, Acary V (2012) Switching, relay and complementarity systems: a
tutorial on their well-posedness and relationships. Phys D: Nonlinear Phenom 241:1985–2002.
Special issue on nonsmooth systems
43. Camlibel MK (2001) Complementarity methods in the analysis of piecewise linear dynamical
systems. PhD thesis, Tilburg University, Katholieke Universiteit Brabant, Center for Economic
Research, Netherlands
44. Imura J, van der Schaft AJ (2002) Characterization of well-posedness of piecewise-linear
systems. IEEE Trans Autom Control 45(9):1600–1619
45. Imura J (2003) Well-posedness analysis of switch-driven piecewise affine systems. IEEE Trans
Autom Control 48(11):1926–1935
46. Spraker JS, Biles DC (1996) A comparison of the Carathéodory and Filippov solution sets. J
Math Anal Appl 198:571–580
47. Thuan LQ, Camlibel MK (2014) On the existence, uniqueness and nature of Carathéodory and Filippov solutions for bimodal piecewise affine dynamical systems. Syst Control Lett 68:76–85
48. Zolezzi T (2002) Differential inclusions and sliding mode control. In: Perruquetti W, Barbot
JP (eds) Sliding mode control in engineering. Marcel Dekker, New York, pp 29–52
49. Kailath T (1980) Linear systems. Prentice-Hall, New Jersey
50. Zhao J, Hill DJ (2008) Dissipativity theory for switched systems. IEEE Trans Autom Control
53(4):941–953
51. Yang D, Zhao J (2018) Feedback passification for switched LPV systems via a state and
parameter triggered switching with dwell time constraints. Nonlinear Anal: Hybrid Syst 29:147–
164
52. Dong X, Zhao J (2012) Incremental passivity and output tracking of switched nonlinear systems.
Int J Control 85(10):1477–1485
53. Pang H, Zhao J (2018) Output regulation of switched nonlinear systems using incremental
passivity. Nonlinear Anal: Hybrid Syst 27:239–257
54. Pang H, Zhao J (2016) Incremental (Q, S, R)-dissipativity and incremental stability for switched nonlinear systems. J Frankl Inst 353:4542–4564
55. Pang H, Zhao J (2017) Adaptive passification and stabilization for switched nonlinearly param-
eterized systems. Int J Robust Nonlinear Control 27:1147–1170
56. Geromel JC, Colaneri P, Bolzern P (2012) Passivity of switched systems: analysis and control
design. Syst Control Lett 61:549–554
57. Zhao J, Hill DJ (2008) Passivity and stability of switched systems: a multiple storage function
method. Syst Control Lett 57:158–164
58. Aleksandrov AY, Platonov AV (2008) On absolute stability of one class of nonlinear switched
systems. Autom Remote Control 69(7):1101–1116
59. Antsaklis PJ, Goodwine B, Gupta V, McCourt MJ, Wang Y, Wu P, Xia M, Yu H, Zhu F (2013)
Control of cyberphysical systems using passivity and dissipativity based methods. Eur J Control
19:379–388
60. Acary V, Brogliato B (2008) Numerical methods for nonsmooth dynamical systems, vol 35.
Lecture notes in applied and computational mechanics. Springer, Berlin
61. King C, Shorten R (2013) An extension of the KYP-lemma for the design of state-dependent
switching systems with uncertainty. Syst Control Lett 62:626–631
62. Samadi B, Rodrigues L (2011) A unified dissipativity approach for stability analysis of piece-
wise smooth systems. Automatica 47(12):2735–2742
63. Johansson M (2003) Piecewise linear control systems: a computational approach, vol 284.
Lecture notes in control and information sciences. Springer, Berlin
64. Li J, Zhao J, Chen C (2016) Dissipativity and feedback passivation for switched discrete-time
nonlinear systems. Syst Control Lett
65. Bemporad A, Bianchini G, Brogi F (2008) Passivity analysis and passification of discrete-time
hybrid systems. IEEE Trans Autom Control 53(4):1004–1009
66. Li J, Zhao J (2013) Passivity and feedback passification of switched discrete-time linear systems.
Syst Control Lett 62:1073–1081
67. Brogliato B, Rey D (1998) Further experimental results on nonlinear control of flexible joint
manipulators. In: Proceedings of the American control conference, vol 4. Philadelphia, PA,
USA, pp 2209–2211
68. Ortega R, Espinosa G (1993) Torque regulation of induction motors. Automatica 29:621–633
69. Maschke BM, van der Schaft AJ (1997) Interconnected mechanical systems, part II: the dynam-
ics of spatial mechanical networks. In: Astolfi A, Melchiorri C, Tornambè A (eds) Modelling
and control of mechanical systems. Imperial College Press, London, pp 17–30
70. Marle CM (1998) Various approaches to conservative and nonconservative nonholonomic systems. Rep Math Phys 42(1–2):211–229
71. van der Schaft AJ (1987) Equations of motion for Hamiltonian systems with constraints. J Phys
A: Math Gen 20:3271–3277
72. McClamroch NH, Wang D (1988) Feedback stabilization and tracking of constrained robots.
IEEE Trans Autom Control 33(5):419–426
73. Campion G, d’Andréa Novel B, Bastin G (1990) Controllability and state-feedback stabiliz-
ability of non-holonomic mechanical systems. In: de Witt C (ed) Advanced robot control.
Springer, Berlin, pp 106–124
74. Koon WS, Marsden JE (1997) Poisson reduction for nonholonomic systems with symmetry.
In: Proceeding of the workshop on nonholonomic constraints in dynamics. Calgary, CA, pp
26–29
75. Adly S, Goeleven D (2004) A stability theory for second-order nonsmooth dynamical systems
with application to friction problems. J Mathématiques Pures Appliquées 83:17–51
76. Mabrouk M (1998) A unified variational model for the dynamics of perfect unilateral con-
straints. Eur J Mech A/Solids
77. Moreau JJ (1988) Unilateral contact and dry friction in finite freedom dynamics. In: Moreau JJ,
Panagiotopoulos PD (eds) Nonsmooth mechanics and applications, vol 302. CISM international
centre for mechanical sciences: courses and lectures. Springer, Berlin, pp 1–82
78. Ballard P (2001) Formulation and well-posedness of the dynamics of rigid-body systems with
perfect unilateral constraints. Phil Trans R Soc Math Phys Eng Sci, special issue on Nonsmooth
Mech, Ser A 359(1789):2327–2346
79. Brogliato B (2001) On the control of nonsmooth complementarity dynamical systems. Phil
Trans R Soc Math Phys Eng Sci Ser A 359(1789):2369–2383
80. Clarke FH (1983) Optimization and nonsmooth analysis. Wiley Interscience Publications,
Canadian Mathematical Society Series of Monographs and Advanced Texts, Canada
81. Monteiro-Marques MDP (1993) Differential inclusions in nonsmooth mechanical problems.
Shocks and dry friction. Progress in nonlinear differential equations and their applications.
Birkhauser, Basel
82. Kunze M, Monteiro-Marques MDP (2000) An introduction to Moreau’s sweeping process. In:
Brogliato B (ed) Impacts in mechanical systems. Analysis and modelling. Lecture notes in
physics, vol 551, pp 1–60. Springer, Berlin; Proceeding of the Euromech Colloquium, vol 397,
Grenoble, France, June-July 1999
83. Lötstedt P (1982) Mechanical systems of rigid bodies subject to unilateral constraints. SIAM
J Appl Math 42:281–296
84. Dieudonné J (1969) Eléments d’Analyse, vol 2. Gauthier-Villars
85. Hiriart-Urruty JB, Lemaréchal C (2001) Fundamentals of convex analysis. Grundlehren text
editions. Springer, Berlin
86. Moreau JJ, Valadier M (1986) A chain rule involving vector functions of bounded variation. J
Funct Anal 74:333–345
87. Rudin W (1998) Analyse Réelle et Complexe. Dunod, Paris
88. Cottle RW, Pang JS, Stone RE (1992) The linear complementarity problem. Academic Press,
Cambridge
Chapter 7
Passivity-Based Control
This chapter is devoted to investigate how the dissipativity properties of the various
systems examined in the foregoing chapter can be used to design stable and robust
feedback controllers (in continuous and discrete time). We start with a classical result
of mechanics, which actually is the basis of Lyapunov stability and Lyapunov func-
tions theory. The interest of this result is that its proof hinges on important stability
analysis tools, and allows one to make a clear connection between Lyapunov sta-
bility and dissipativity theory. The next section is a brief survey on passivity-based
control methods, a topic that has been the object of numerous publications. Then, we
go on with the Lagrange–Dirichlet Theorem, state-feedback and position-feedback
control for rigid-joint–rigid-link systems, set-valued robust control for rigid-joint–
rigid-link fully actuated Lagrangian systems, state and output feedback for flexible-
joint–rigid-link manipulators, with and without actuator dynamics, and constrained
Lagrangian systems. Regulation and trajectory tracking problems, for smooth and nonsmooth dynamical systems, are treated. The chapter ends with a presentation of state observer design for a class of differential inclusions represented by set-valued Lur'e
systems.
The fundamental works on dissipative systems and positive real transfer functions, which are exposed in the foregoing chapters, have been mainly motivated by the sta-
bility and stabilization of electrical networks. It is only at the beginning of the 1980s
that work on mechanical systems and the use of dissipativity in their feedback con-
trol started to appear, with the seminal paper by Takegaki and Arimoto [1]. Roughly
speaking, two classes of feedback controllers have emerged:
• Passivity-based controllers: the control input is such that the closed-loop system
can be interpreted as the negative interconnection of two dissipative subsystems.
The Lyapunov function of the total system is close to the process total energy,
in the sense that it is the sum of a quadratic function ½ ζ⊤M(q)ζ for some ζ depending on time, generalized positions q and velocities q̇, and a term looking
like a potential energy. Sometimes, additional terms come into play, like in adap-
tive control where the online estimation algorithm provides supplementary state
variables. Such algorithms have been motivated originally by trajectory tracking
and adaptive motion control of fully actuated robot manipulators. The machinery
behind this is dissipative systems and Lyapunov stability theory. This chapter will
describe some of these schemes in great detail, consequently we do not insist on
passivity-based controllers in this short introduction.
• Controlled Lagrangian (or Hamiltonian): the objective is not only to get a
two-block dissipative interconnection, but also to preserve a Lagrangian (or a
Hamiltonian) structure in closed loop. In other words, the closed-loop system is
itself a Lagrangian (or a Hamiltonian) system with a Lagrangian (or Hamiltonian)
function, and its dynamics can be derived from a variational principle such as
Hamilton’s principle. In essence, one introduces a feedback that changes the kinetic
energy tensor M(q). Differential geometry machinery is the underlying tool. The
same applies to port-Hamiltonian systems, which we saw in Chap. 6. Regulation
tasks for various kinds of systems (mechanical, electromechanical, underactuated)
have been the original motivations of such schemes. The method is described in
Sect. 7.10.
Related terms are potential energy shaping, energy shaping, damping injection or
assignment, energy balancing. The very starting point for all those methods is the
Lagrange–Dirichlet (or Lejeune Dirichlet1 ) Theorem, which is described in Sect. 7.2.
It is difficult to make a classification of the numerous schemes that have been developed along the above two main lines. Indeed, this would imply highlighting the discrepancies between the following:
• Trajectory tracking versus regulation.
• Full actuation versus underactuation.
• Fixed parameters versus adaptive control.
• Static feedback versus dynamic feedback.
• Smooth systems and controllers versus nonsmooth systems and/or controllers
(sliding-mode set-valued control).
• Constrained systems versus unconstrained systems.
• Rigid systems versus flexible systems, etc.
As alluded to before, the starting point may be situated in 1981 with the seminal
article by Takegaki and Arimoto [1]. The challenge then in the Automatic Control
and in the Robotics scientific communities, was about nonlinear control of fully
actuated manipulators for trajectory tracking purpose, and especially the design of
a scheme allowing for parameter adaptive control. The first robot adaptive control
algorithms were based on tangent linearization techniques [2]. Then, two classes
of schemes emerged: those requiring an inversion of the dynamics and acceleration
measurement or inversion of the inertia matrix M(q) [3–6], and those avoiding
such drawbacks [7–15]. Despite the fact that they were not originally designed with
dissipativity in mind, the schemes of the second class were all proved to belong to
passivity-based schemes in [16] (the schemes in [7, 8] were proved to be hyperstable
in [17], while the term passivity-based was introduced in [18]). Then, many schemes
have been designed, which more or less are extensions of the previous ones, but
adapted to constrained systems, systems in contact with a flexible environment, etc.
The next step, as advocated in [18], was to solve the trajectory tracking problem
in the adaptive control context, for flexible-joint robots. This was done in [19–22],
using what has been called afterward backstepping, together with a specific parame-
terization to guarantee the linearity in the unknown parameters, and a differentiable
parameter projection. The adaptive control of flexible-joint manipulators is a non-
trivial problem combining these three ingredients. See [23] for further comparisons
between this scheme and schemes designed with the backstepping approach, in the
fixed parameters case. Almost at the same time, the regulation problem with passivity-
based control of induction motors was considered in [24, 25], using a key idea of
[20, 21]. The control of induction motors then was a subject of scientific excitation
for several years.
Later came controlled Lagrangian and Hamiltonian methods as developed by
Bloch, Leonard, Marsden [26, 27] and in [28, 29], to cite a few.
7.2 The Lagrange–Dirichlet Theorem

In this section, we present a stability result that was first stated by Lagrange in
1788, and subsequently proved rigorously by Lejeune Dirichlet. It provides sufficient
conditions for a conservative mechanical system to possess a Lyapunov stable fixed
point. The case of Rayleigh dissipation is also presented. The developments are based
on the dissipativity results of Chap. 4.
Let us consider the Euler–Lagrange dynamics in (6.1), or that in (6.40). Let us further
make the following:
Assumption 18 The potential energy U(q) is such that (i) dU/dq(q) = 0 ⇔ q = q0, and (ii) d²U/dq²(q0) ≻ 0. Moreover, M(q) = M(q)⊤ ≻ 0.

Theorem 7.1 (Lagrange–Dirichlet) Let Assumption 18 hold. Then, the fixed point (q, q̇) = (q0, 0) of the unforced system in (6.1) is Lyapunov stable.
Proof First of all, notice that the local (strict) convexity of U(q) around q0 precludes the existence of other points q1 arbitrarily close to q0 and such that dU/dq(q1) = 0. This means that the point (q0, 0) is a strict local minimum for the total energy E(q, q̇). We have seen that E(q, q̇) is a storage function provided that U(q) remains positive. Now, it suffices to define a new potential energy as U(q) − U(q0) to fulfill this requirement, and at the same time to guarantee that the new E(q, q̇) satisfies E(0, 0) = 0, and is a positive definite function (at least locally) of (0, 0). Since this is a storage function, we deduce from the dissipation inequality (which is actually here an equality) that for τ ≡ 0 one gets

E(0) = E(t) − ∫₀ᵗ τ⊤(s)q̇(s) ds = E(t).    (7.1)
Therefore, the fixed point of the unforced system is locally Lyapunov stable. Actually,
we have just proved that the system evolves on a constant energy level, and that the
special form of the potential energy implies that the state remains close enough to the
fixed point when initialized close enough to it. Notice that (7.1) is of the type (4.80)
with S(x) = 0: the system is lossless. All in all, we have not made extraordinary progress. Before going ahead with asymptotic stability, let us give an illustration of
Theorem 7.1.
Example 7.2 Let us consider the dynamics of a planar two-link revolute-joint manipulator with generalized coordinates the link angles (q1, q2) (this notation is not to be
confused with that employed for the flexible-joint–rigid-link manipulators). We do
not need here to develop the whole dynamics. Only the potential energy is of interest
to us. It is given by
(dV/dx)⊤ f(x, τ) = τ⊤x2 − x2⊤ ∂R/∂x2,    (7.3)

where x = (x1, x2) = (q, q̇), f(x, u) denotes the system vector field in first-order state-space notation, and V(x) is any storage function. Let us take V(x) = E(q, q̇). We deduce that

Ė(t) = (dE/dx)⊤ f(x(t), 0) = −δ q̇⊤(t)q̇(t).    (7.4)
The only invariant set inside the set {(q, q̇)|q̇ ≡ 0} is the fixed point (q0 , 0). Resorting
to Krasovskii–LaSalle’s Invariance Theorem one deduces that the trajectories con-
verge asymptotically to this point, provided that the initial conditions are chosen in a
sufficiently small neighborhood of it. Notice that we could have used Corollary 5.27
to prove the asymptotic stability.
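As a numerical sanity check of this reasoning, one may simulate a damped one-degree-of-freedom pendulum (our hypothetical illustration, not the two-link manipulator of the example) and verify that the total energy decreases along trajectories at the rate −δq̇², and that the state approaches the minimum of the potential:

```python
import math

# Damped pendulum with unit mass and length: q'' = -g*sin(q) - delta*q'.
# The total energy E = 0.5*q'^2 + g*(1 - cos q) is a storage function and
# dE/dt = -delta*q'^2 <= 0, as in (7.4).
g, delta = 9.81, 0.5
h, steps = 1e-3, 40000

def rhs(q, qd):
    return qd, -g * math.sin(q) - delta * qd

def energy(q, qd):
    return 0.5 * qd**2 + g * (1.0 - math.cos(q))

q, qd = 1.0, 0.0          # initial condition close to the equilibrium (0, 0)
energies = [energy(q, qd)]
for _ in range(steps):
    # classical RK4 step
    k1 = rhs(q, qd)
    k2 = rhs(q + 0.5*h*k1[0], qd + 0.5*h*k1[1])
    k3 = rhs(q + 0.5*h*k2[0], qd + 0.5*h*k2[1])
    k4 = rhs(q + h*k3[0], qd + h*k3[1])
    q  += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    qd += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    energies.append(energy(q, qd))

# Energy is (numerically) nonincreasing, and the state converges to (0, 0),
# in agreement with the Krasovskii–LaSalle argument above.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
assert abs(q) < 1e-3 and abs(qd) < 1e-3
```

The gains g and δ are arbitrary; only the signs matter for the conclusion.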
Remark 7.4 • Convexity: Convexity properties are at the core of stability in mechanics: in statics, the equilibrium positions of a solid body lying on a horizontal plane, subjected to gravity, are characterized by the condition that the vertical line passing through its center of mass crosses the convex hull of the contact points of support. In dynamics, Assumption 18 shows that without a convexity property (maybe local or semi-global, as the prox-regular case treated in Sect. 3.14.5 shows), the stability of the fixed point is generically impossible to obtain.
• The Lagrange–Dirichlet Theorem also applies to constrained Euler–Lagrange sys-
tems as in (6.150). If Rayleigh dissipation is added, and if the potential energy
satisfies the required assumptions, then the (z 2 , ż 2 ) dynamics are asymptotically
stable. Thus, z̈ 2 (t) tends toward zero as well so that λz1 (t) = ḡ1 (z 2 (t)) as t → +∞.
• If Assumption 18 is strengthened to have a potential energy U (q) that is globally
convex, then its minimum point is globally Lyapunov stable.
• Other results generalizing the Lagrange–Dirichlet Theorem for systems with cyclic coordinates (i.e., coordinates qi such that ∂T/∂qi(q) = 0) were given by Routh and Lyapunov, see [30].
Remark 7.5 It is a general result that OSP together with ZSD yields under certain
conditions asymptotic stability, see Corollary 5.27. One basic idea for feedback con-
trol, may then be to find a control law that renders the closed-loop system OSP with
respect to some supply rate, and such that the closed-loop operator is ZSD with
respect to the considered output.
Example 7.6 Let us come back to the example in Sect. 6.5. As we noted, the concatenation of the two functions in (6.111) and (6.112) yields a positive definite function of (q̃, q̇) = (0, 0) with q̃ = q − λ1 qd/(λ1 + k), that is continuous at q̃ = 0. The only invariant set for the system in (6.107) with the input in (6.108) is (q, q̇) = (λ1 qd/(λ1 + k), 0). Using the Krasovskii–LaSalle invariance Theorem, one concludes that the point q̃ = 0, q̇ = 0 is globally uniformly asymptotically Lyapunov stable.
One question that comes to mind is the following: since the strong assumption on which the Lagrange–Dirichlet Theorem relies is the existence of a minimum point for the potential energy, what happens if U(q) does not possess a minimum point? Is the
equilibrium point of the dynamics unstable in this case? Lyapunov and Chetaev stated
the following:
Theorem 7.7 (a) If at a position of isolated equilibrium (q, q̇) = (q0 , 0) the poten-
tial energy does not have a minimum, and neglecting higher order terms, it can be
expressed as a second-order polynomial, then the equilibrium is unstable. (b) If at a
position of isolated equilibrium (q, q̇) = (q0 , 0) the potential energy has a maximum
with respect to the variables of smallest order that occurs in the expansion of this
function, then the equilibrium is unstable. (c) If at a position of isolated equilib-
rium (q, q̇) = (q0 , 0) the potential energy, which is an analytical function, has no
minimum, then this fixed point is unstable.
Since U(q) = U(q0) + dU/dq(q0)(q − q0) + ½(q − q0)⊤ d²U/dq²(q0)(q − q0) + o(‖q − q0‖²), and since q0 is a critical point of U(q), the first item tells us that the Hessian matrix d²U/dq²(q0) is not positive definite, otherwise the potential energy would be convex and hence the fixed point would be a minimum. Without going into the details of the proof, since we are interested in dissipative systems, not unstable systems, let us note that the trick consisting of redefining the potential energy as U(q) − U(q0) in order to get a positive storage function no longer works. Moreover, assume there is only one fixed point for the dynamical equations. It is clear, at least in the one-degree-of-freedom case, that if d²U/dq²(q0) ≺ 0, then U(q) → −∞ for some q. Hence the available storage function, which contains a term equal to supτ:(0,q(0),q̇(0))→, t≥0 [−U(q(t))], cannot be bounded, assuming that the state space is reachable. Thus, the system cannot be dissipative, see Theorem 4.43.
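A one-degree-of-freedom sketch of this obstruction (our illustration, with a hypothetical concave potential U(q) = −½kq²): the flow drifts away from the critical point, and −U(q(t)) grows without bound along trajectories, so no storage function can remain bounded.

```python
# m*q'' = -dU/dq with U(q) = -0.5*k*q**2 (concave potential, no minimum),
# hence q'' = (k/m)*q, whose solutions diverge like cosh(sqrt(k/m)*t).
m, k = 1.0, 4.0
h = 1e-3
q, qd = 1e-3, 0.0            # tiny perturbation of the fixed point (0, 0)
minus_U = [0.5 * k * q**2]   # -U(q) = 0.5*k*q^2
for _ in range(10000):       # 10 seconds, semi-implicit Euler
    qd += h * (k / m) * q
    q  += h * qd
    minus_U.append(0.5 * k * q**2)

# -U(q(t)) blows up along the trajectory: the available storage, which
# contains sup of -U(q(t)), cannot be bounded.
assert minus_U[-1] > 1e3 * minus_U[0]
assert abs(q) > 1.0
```

The constants m and k are arbitrary positive numbers; any concave quadratic potential gives the same conclusion.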
Let us consider the class of Lagrangian systems as in Sect. 6.8.2, i.e., fully actuated
Lagrangian systems with complementarity conditions and impacts. The constraints
are supposed to be frictionless. First, notice that since F(q, 0) = ∂U/∂q(q) and 0 ∈ V(q), the fixed points of (6.166) satisfy the generalized equation 0 ∈ ∂ψΦ(q∗) + ∂U/∂q(q∗).
Lemma 7.8 Consider a mechanical system as in (6.163). Assume that the potential
function U (q) is radially unbounded. Then if ψΦ (q) + U (q) has a strict global
minimum at q ∗ , the equilibrium point (q ∗ , 0) is globally Lyapunov stable.
Let us note that Φ need not be convex in general, for instance, the equilibrium
may exist in Int(Φ), or it may belong to bd(Φ) but be forced by the continuous
dynamics; see Fig. 7.1 for planar examples with both convex and non-convex Φ. It
is obvious that in the depicted non-convex case, all points (q ∗ , 0) with q ∗ ∈ bd(Φ)
are fixed points of the dynamics. In case the set Φ is not convex, then the indicator function ψΦ(·) is not convex either, and one has to resort to other mathematical
tools than convex analysis. Prox-regular sets are a good candidate to relax convexity,
see Sect. 3.14.5. Then ψΦ (·) is a prox-regular function and ∂ψΦ (·) is the normal
cone (in Clarke’s sense) to Φ. Under a suitable constraint qualification (CQ) like the
Mangasarian–Fromovitz one, NΦ (·) can be expressed from the normals to Φ at the
active constraints, i.e., ∇h i (q), where h i (q) = 0. Then a complementarity problem
can be constructed from the generalized equation of fixed points.
Proof The proof of Lemma 7.8 may be led as follows. Let us consider the nonsmooth
Lyapunov candidate function
W(q, q̇) = ½ q̇⊤M(q)q̇ + ψΦ(q) + U(q) − U(q∗).    (7.5)
Since the potential ψΦ (q) + U (q) has a strict global minimum at q ∗ equal to U (q ∗ )
and is radially unbounded, this function W (·) is positive definite on the whole state
space, and is radially unbounded. Also, W(q, q̇) ≤ β(‖q‖, ‖q̇‖) for some class K function β(·) on Φ (recall that q(t) ∈ Φ for all t ≥ 0). The potential function ψΦ(q) +
U (q) is continuous on Φ. Thus W (q, q̇) in (7.5) satisfies the requirements of a
Lyapunov function candidate on Φ, even though the indicator function has a discontinuity on bd(Φ) (but is continuous on the closed set Φ, see (3.231)). Moreover since (6.166)
secures that q(t) ∈ Φ for all t ≥ 0, it follows that ψΦ (q(t)) = 0 for all t ≥ 0. In
view of this, one can safely discard the indicator function in the subsequent stability
analysis. Let us examine the variation of W (q, q̇) along trajectories of (6.169). In
view of the above discussion, one can characterize the measure dW by its density with respect to dμ, and the function W decreases if its density satisfies dW/dμ(t) ≤ 0 for all t ≥ 0. We recall Moreau's rule for differentiation of quadratic functions of RCLBV functions [34, pp. 8–9]: let u(·) be RCLBV, then d(u²) = (u⁺ + u⁻)du, where u⁺ and u⁻ are the right-limit and left-limit functions of u(·). Let us now compute the
density of the measure dW with respect to dμ:
dW/dμ(t) = ½ (q̇(t⁺) + q̇(t⁻))⊤ M(q(t)) dv/dμ(t) + (∂U/∂q)⊤ dq/dμ(t) + ½ ∂/∂q [q̇(t⁺)⊤M(q(t))q̇(t⁺)] dq/dμ(t),    (7.6)
where dq = v(t)dt since the function v(·) is Lebesgue integrable. Let us now choose dμ = dt + dμa + dμna. Since dt/dμ(tk) = 0 and dq/dμ(tk) = 0, whereas dv/dμ(tk) = v(tk⁺) − v(tk⁻) = q̇(tk⁺) − q̇(tk⁻), it follows from (7.6) that at impact times one gets

dW/dμ(tk) = ½ (q̇(tk⁺) + q̇(tk⁻))⊤ M(q(tk)) (q̇(tk⁺) − q̇(tk⁻)) = TL(tk) ≤ 0,    (7.7)
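The inequality TL(tk) ≤ 0 can be checked numerically. The sketch below is our own construction, with an arbitrarily chosen positive definite inertia matrix and a single frictionless unilateral constraint h(q) = q1 ≥ 0: it applies Newton's restitution law and verifies that the kinetic-energy jump equals ½(q̇⁺ + q̇⁻)⊤M(q̇⁺ − q̇⁻) and is nonpositive for any restitution coefficient e ∈ [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-dof inertia matrix (symmetric positive definite) at the
# contact configuration, with a single frictionless constraint h(q) = q1 >= 0.
M = np.array([[2.0, 0.3],
              [0.3, 1.0]])
n = np.array([1.0, 0.0])                 # gradient of h
Minv_n = np.linalg.solve(M, n)

for _ in range(100):
    qd_minus = rng.normal(size=2)
    if n @ qd_minus > 0:                 # keep only approaching velocities
        qd_minus = -qd_minus
    elif n @ qd_minus == 0:
        qd_minus = qd_minus - n
    for e in (0.0, 0.3, 1.0):
        # Newton restitution: post-impact normal velocity = -e * (pre-impact),
        # percussion directed along M^{-1} grad h
        p = -(1.0 + e) * (n @ qd_minus) / (n @ Minv_n)
        qd_plus = qd_minus + p * Minv_n
        # kinetic-energy jump, two equivalent expressions
        TL = 0.5 * (qd_plus + qd_minus) @ M @ (qd_plus - qd_minus)
        dT = 0.5 * qd_plus @ M @ qd_plus - 0.5 * qd_minus @ M @ qd_minus
        assert abs(TL - dT) < 1e-10      # same quantity, by symmetry of M
        assert TL <= 1e-10               # TL(tk) <= 0 for e in [0, 1]
```

For e = 1 the loss TL vanishes (elastic impact), which is consistent with the non-Zeno remark below.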
where TL(tk) is in (6.164). Let the matrix function Ṁ(q, q̇) be defined by Ṁ(q(t), q̇(t)) = d/dt M(q(t)). Let us use the expression of F(q, q̇) given after (6.163), and let us assume that Christoffel's symbols of the first kind are used to express the vector C(q, q̇)q̇ = Ṁ(q, q̇)q̇ − ½ (∂/∂q [q̇⊤M(q)q̇])⊤. Then, the matrix Ṁ(q, q̇) − 2C(q, q̇) is skew-symmetric; see Lemma 6.17. Now if t ≠ tk, one gets dv/dμ(t) = v̇(t) = q̈(t) and dt/dμ(t) = 1 [34, p. 76], and one can calculate from (7.6), using the
where z1 ∈ −∂ψV(q(t))(w(t)) and W(·) is defined in (7.5). To simplify the notation we have dropped arguments in (7.8); however, q̇ is to be understood as q̇(t) = q̇(t⁺) since t ≠ tk. Now since for all t ≥ 0 one has q̇(t⁺) ∈ V(q) [35], which is polar to ∂ψΦ(q(t)), and from Moreau's inclusion in (6.166), it follows that z1⊤q̇(t⁺) ≥ 0. Therefore, the measure dW is nonpositive. Consequently, the function W(·) is nonincreasing [36, p. 101]. We finally notice that the velocity jump mapping in (6.163) is a projection and is, therefore, Lipschitz continuous as a mapping q̇(tk⁻) → q̇(tk⁺), for fixed q(tk). In particular, it is continuous at (q∗, 0), so that a small pre-impact velocity gives a small post-impact velocity. All the conditions for Lyapunov stability of (q∗, 0) are fulfilled and Lemma 7.8 is proved.
The main feature of the proof is that one works with densities (which are functions
of time) and not with the measures themselves, in order to characterize the
variations of the Lyapunov function.
In order to reinforce this statement, let us provide a little example, quoted from [37, Remark 4.9], which illustrates what kind of influence the singular measure dμna might have on the stability analysis between the impacts. To understand this, consider the functions fi : [0, 1] → [0, 1], i = 1, 2, such that f1(x) = −αx, where α ∈ (0, 1), and f2(·) is the Cantor function on the interval [0, 1]. Let f = f1 + f2; then f(·) is a continuous function of bounded variation, and f˙ = −α < 0 almost everywhere with respect to the Lebesgue measure. Yet the differential measure of f(·) satisfies df([0, 1]) = 1 − α > 0; that is, the differential measure shows that the function undergoes a net increase over the interval [0, 1]. It is of course another question to know why such a singular function could be present in the dynamics.
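This example is easy to reproduce numerically. The sketch below (our implementation of the standard ternary-expansion construction of the Cantor function) confirms both facts: f = f1 + f2 has slope −α on the gaps of the Cantor set, yet f(1) − f(0) = 1 − α > 0.

```python
def cantor(x, iters=48):
    """Cantor function on [0, 1], computed from the ternary expansion of x."""
    if x >= 1.0:
        return 1.0
    c, scale = 0.0, 0.5
    for _ in range(iters):
        d, x = divmod(3.0 * x, 1.0)   # next ternary digit d in {0, 1, 2}
        if d == 1.0:
            return c + scale          # x falls in a removed middle third
        c += scale * (d / 2.0)        # digit 2 contributes a binary digit 1
        scale /= 2.0
    return c

alpha = 0.25
f = lambda x: -alpha * x + cantor(x)

# slope -alpha < 0 on the gap (1/3, 2/3), where the Cantor part is constant...
assert cantor(0.4) == cantor(0.6) == 0.5
assert f(0.6) < f(0.4)
# ...yet a net increase over [0, 1]: df([0, 1]) = 1 - alpha > 0
assert abs((f(1.0) - f(0.0)) - (1.0 - alpha)) < 1e-12
```

The value α = 0.25 is arbitrary within (0, 1).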
inside Φ”. Since the indicator function has originally been introduced by Moreau
as a potential associated with unilateral constraints, it finds here its natural use.
In fact, we could have kept the indicator function in the stability analysis. This
would just add a null term q̇(t⁺)⊤z2 dt/dμ(t) in the right-hand side of (7.6), with z2 ∈ ∂ψΦ(q(t)).
• As alluded to above, taking e = 1 in (6.163) ensures that there is no accumulation of impacts, thus the sequence of impact times {tk}k≥0 can be ordered, dμa = Σk≥0 δtk, and velocities are piecewise continuous. Then, a much simpler formulation can be adopted by separating continuous motion phases occurring
on intervals (tk, tk+1) from impact times. The system is therefore non-Zeno for e = 1, provided Assumption 6.8.2 in Sect. 6.8.2 holds.
• One does not need to make further assumptions on the measure dμa to conclude,
and one sees that this conclusion is obtained directly, applying general differen-
tiation rules of RCLBV functions. The dynamics might even contain dense sets
of velocity discontinuities, (7.6) and (7.7) would continue to hold. This shows
that using the MDI formalism in (6.166) or (6.169) places the stability analysis
in a much more general perspective than, say, restricting q̇(·) to be piecewise
continuous.
• Other results on energy-based control of a class of nonsmooth systems, not encom-
passing the mechanical and electrical systems we deal with in this book, may be
found in [38, 39].
• Lemma 7.8 has been extended to the case with set-valued friction by Leine and
van de Wouw [40, Chap. 7]. The interested readers are referred to their book for a
nice exposition of stability issues for MDIs.
Let us now derive a dissipation inequality for the dynamical system (6.163). To that
end, let us take advantage of the compact formalism (6.169). We consider a Lebesgue
measurable input τ (·) so that (6.169) becomes
−M(q(t)) dv/dμ(t) − F(q(t), v(t⁺)) dt/dμ(t) − τ(t) dt/dμ(t) ∈ ∂ψV(q(t))(w(t)).    (7.9)
dμ dμ dμ
Following (6.170), let ξ denote a measure that belongs to the normal cone to the tangent cone, ∂ψV(q(t))(w(t)), and let us denote dR/dμ(·) its density with respect to μ. The system in (7.9) is dissipative with respect to the generalized supply rate

⟨½ (v(t⁺) + v(t⁻)), τ(t) dt/dμ(t) + dR/dμ(t)⟩.    (7.10)
Noting that ξ = ∇h(q)λ (where the variable ξ is the same as in (6.170)), for some
measure λ, we obtain
⟨½ (v(t⁺) + v(t⁻)), τ(t) dt/dμ(t) + ∇h(q) dλ/dμ(t)⟩,    (7.11)
where we recall that v(·) satisfies the properties in item (ii) in Sect. 6.8.2, and that at impacts (i.e., at the atoms of the measure dR) one has dt/dμ(tk) = 0 because the Lebesgue measure has no atom. It is noteworthy that (7.11) is a generalization of the Thomson–Tait's Formula of Mechanics [41, Sect. 4.2.12], which expresses the work performed by the contact forces during an impact. The supply rate in (7.11) may be split into two parts: a function part and a measure part. The function part describes what happens outside impacts, and one has ½(v(t⁺) + v(t⁻)) = v(t) = q̇(t).
The measure part describes what happens at impacts tk. Then one gets (7.12), where we recall that the dynamics at an impact time is algebraic: M(q(tk))(v(tk⁺) − v(tk⁻)) = ∇h(q(tk)) dλ/dμ(tk), with a suitable choice of the basis measure μ. The storage
(tk ), with a suitable choice of the basis measure μ. The storage
function of the system is nothing else but its total energy. It may be viewed as the
usual smooth energy 21 q̇ T M(q)q̇ + U (q), or as the unilateral energy 21 q̇ T M(q)q̇ +
U (q) + ψΦ (q), which is nonsmooth on the whole of Rn × Rn . It is worth remarking,
however, that the nonsmoothness of the storage function in its arguments, is not a
consequence of the impacts, but of the complementarity condition 0 ≤ h(q) ⊥ λ ≥ 0.
with x(0− ) = x0 , where some assumptions are made on the set of times tk , see for
instance [42–45]. Such assumptions always make the set of state jump times a very particular case of the set of discontinuities of an LBV function. It is noteworthy that the
systems in (7.13) and in (6.163) are different dynamical systems. Most importantly,
the complementarity conditions are absent from (7.13). Another class of impulsive
systems is that of measure differential equations (MDE), or impulsive ODEs. Let us
consider one example:
ẋ(t) = sin(x(t) + 5π/4) + cos(x(t) + 3π/4) u̇(t),  x(0⁻) = x0,  x(t) ∈ R,    (7.14)
where u(·) is of bounded variation. Applying [46, Theorem 2.1], this MDE has a
unique global generalized solution. See also [47] for the well posedness and stability
analysis of MDEs. Consider now

ẋ(t) = sin(x(t) + 5π/4) + cos(x(t) + 3π/4) λ(t),  x(0⁻) = x0,  x(t) ∈ R,
0 ≤ x(t) ⊥ λ(t) ≥ 0.    (7.15)
Let us now pass to the stability analysis of the systems presented in Sect. 6.8.1. The
set of stationary solutions of (6.157) and (6.158) is given by
Definition 7.10 A stationary solution q̄ ∈ W is stable provided that for any ε > 0
there exists η(ε) > 0 such that for any q0 ∈ Rn , q̇0 ∈ Rn , H2 q̇0 ∈ dom(∂φ), with
||q0 − q̄||2 + ||q̇0 ||2 ≤ η, the solution q(·, t0 , q0 , q̇0 ) of Problem 6.60, satisfies
||q(t, t0 , q0 , q̇0 ) − q̄||2 + ||q̇(t, t0 , q0 , q̇0 )||2 ≤ ε, for all t ≥ t0 . (7.17)
Theorem 7.11 ([49]) Let the assumptions of Theorem 6.61 hold, and 0 ∈ dom(∂φ).
Suppose in addition that
• RM⁻¹CR⁻¹ ⪰ 0,
• RM⁻¹KR⁻¹ ≻ 0 and is symmetric.
Then, the set W = ∅, and any stationary solution q̄ ∈ W of (6.157) and (6.158) is
stable.
A variant is as follows:
Theorem 7.12 ([49]) Let the assumptions of Theorem 6.61 hold, and 0 ∈ dom(∂φ).
Let q̄ ∈ W be a stationary solution of (6.157) and (6.158). Suppose that
The next theorem concerns the attractivity of the stationary solutions, and may be
seen as an extension of the Krasovskii–LaSalle’s invariance principle. Let d[s, M ] =
inf m∈M ||s − m|| be the distance from a point s ∈ Rn to a set M ⊂ Rn .
Theorem 7.13 ([49]) Let the assumptions of Theorem 6.61 hold, and 0 ∈ dom(∂Φ).
Suppose that:
• RM⁻¹KR⁻¹ ≻ 0 and is symmetric,
• ⟨RM⁻¹CR⁻¹z + RM⁻¹K q̄, z⟩ + φ(H2 R⁻¹z) − φ(0) > 0 for all z ∈ Rⁿ \ {0},
• dom(∂φ) is closed.
Then for any q0 ∈ Rn , q̇0 ∈ Rn , H2 q̇0 ∈ dom(∂Φ), the orbit
is bounded and
The proof is led with the help of the quadratic function V(x) = ½(q − q̄)⊤R²M⁻¹K(q − q̄) + ½ q̇⊤R²q̇. Notice that (q − q̄)⊤R²M⁻¹K(q − q̄) = (q − q̄)⊤R(RM⁻¹KR⁻¹)R(q − q̄). More on the attractivity properties of similar evolution problems
can be found in [50]. One should be careful with conclusions about Lyapunov stability
(which is different from attractivity) of sets, as the requirements on the Lyapunov
function are rather stringent [51, Lemma 1.6]. This is the reason why we wrote above,
that the theorem may be seen as an extension of the invariance principle (which is
usually a way to show the Lyapunov asymptotic stability).
controller of the form u(q) ∈ −sgn(q), which gives rise to a closed-loop system
m q̈(t) ∈ −q(t) − sgn(q(t)) − sgn(q̇(t)): this is not a maximal monotone right-hand
side, and is similar to the twisting algorithm of sliding-mode control [55]. The for-
malisms and results exposed in Sect. 3.14, thus do not apply to this system. One
has to resort to the extension of Krasovskii–LaSalle’s invariance principle, for other
types of differential inclusions (like, for instance, Filippov’s differential inclusions).
Another study can be found in [56], who analyze the stability of a PID controller
applied to a system with Coulomb friction. The closed-loop system in [56, Eq. (4)]
is of the form ż(t) ∈ Az(t) − f c C T sgn(C z(t)) with C = (0 0 1), f c > 0, z(t) ∈ R3 .
Since one has P B = C⊤ with P = fc I3, the results in Sect. 3.14 could apply if A + A⊤ ⪯ 0, which may not be the case. We notice, however, that the closed-loop
system can be written as ż(t) − Az(t) ∈ −∂f(z(t)), where f(z) = fc |Cz| is proper, convex, and lower semicontinuous (and consequently ∂f(·) is maximal
monotone). Hence, it fits within the class of differential inclusions in (3.241) studied
in [57, 58] [41, Theorem B.4], so that existence and uniqueness of solutions follows.
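A standard way to exploit this maximal monotonicity numerically is an implicit (proximal) time discretization of the sign term, which avoids the chattering of an explicit scheme. Below is a scalar sketch (our illustrative analogue ẋ ∈ −ax − sgn(x), not the third-order PID system of [56]): the implicit step reduces to a soft threshold, and the iterate reaches the origin exactly in finitely many steps.

```python
# Implicit Euler for the maximal monotone inclusion  x' in -a*x - sgn(x):
#   x_{k+1} - x_k in -h*a*x_{k+1} - h*sgn(x_{k+1}).
# The resolvent step is a soft threshold followed by a scaling.
a, h = 1.0, 1e-2

def step(x):
    # prox of h*|.| (soft threshold), then resolvent of the linear part
    if x > h:
        return (x - h) / (1.0 + h * a)
    if x < -h:
        return (x + h) / (1.0 + h * a)
    return 0.0                      # a selection inside sgn(0) absorbs x_k

x, traj = 1.0, [1.0]
while traj[-1] != 0.0 and len(traj) < 10000:
    x = step(x)
    traj.append(x)

assert traj[-1] == 0.0              # finite-time convergence, no chattering
assert all(abs(x1) <= abs(x0) for x0, x1 in zip(traj, traj[1:]))
```

The gain a and the step h are arbitrary positive numbers; the key point is that the set-valued term is evaluated implicitly, at the end of the step.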
7.2.6 Conclusion
More precisely, the variable change defined in (6.159) allows one to rewrite the
dynamics (6.158) as
and both subsystems are passive. This interpretation, together with the one at the end of Sect. 6.8.2, allows us to conclude that "maximal monotone" differential inclusions make it possible to nicely recast such nonsmooth systems into a sound and established framework, which extends the usual Passivity Theorem.
7.3 Rigid-Joint–Rigid-Link Systems: State Feedback

In this section, we shall present various feedback controllers that assure some sta-
bility properties for the rigid-joint–rigid-link model in (6.90). Let us start with the
regulation problem, and then generalize to the trajectory tracking case. In each case,
we emphasize how the dissipativity properties of the closed-loop systems constitute
the basis of the stability properties.
7.3.1 PD Control
where λ1 > 0 and λ2 > 0 are the constant feedback gains (for simplicity, we consider
them as being scalars instead of positive definite n × n matrices, this is not very
important for what follows), q̃ = q − qd , qd ∈ Rn is a constant desired position. The
closed-loop system is given by
Two paths are possible: we can search for the available storage function of the closed-
loop system in (7.25), which is likely to provide us with a Lyapunov function, or
we can try to interpret this dynamics as the negative interconnection of two pas-
sive blocks, and then use the Passivity Theorem (more exactly one of its numerous
versions) to conclude on stability. To fix the ideas, we develop both paths in detail.
First of all, notice that in order to calculate an available storage, we need a supply rate,
consequently, we need an input (that will be just an auxiliary signal with no meaning).
Let us therefore just add a term u in the right-hand side of (7.25) instead of zero. In
other words, we proceed as we did for the example in Sect. 6.5: we make an input
transformation and the new system is controllable. Let us compute the available
storage along the trajectories of this new input–output system, assuming that the
potential U(q) is bounded from below, i.e., U(q) ≥ Umin > −∞ for all q ∈ Q:

Va(q0, q̇0) = sup u:(0,q0,q̇0)→ [ −∫₀ᵗ u⊤(s)q̇(s) ds ]
           = sup u:(0,q0,q̇0)→ [ −∫₀ᵗ q̇⊤(s){M(q(s))q̈(s) + C(q(s), q̇(s))q̇(s) + g(q(s))
g(q) + λ2 q̃ = 0. (7.27)
Proof From the second part of Assumption 19, it follows that the available storage Va in (7.26) is a storage function for the closed-loop system with input u (fictitious) and output q̇. Next, this also allows us to state that Vpd(q − qi, q̇) ≜ Va(q, q̇) − U(qi) is a Lyapunov function for the unforced system in (7.25): indeed this is a storage function and the conditions of Lemma 5.23 are satisfied. Now, let us calculate the derivative of this function along the trajectories of (7.25):

V̇pd(q(t) − qi, q̇(t)) = −λ1 q̇⊤(t)q̇(t) + q̇⊤(t)[−g(q(t)) + dU/dq(q(t))]
                     = −λ1 q̇⊤(t)q̇(t).    (7.28)
3 Actually this equality is always true, even if the matrix Ṁ(q, q̇) − 2C(q, q̇) is not skew-symmetric.
Therefore, one just has to apply the Krasovskii–LaSalle lemma to deduce that the fixed points (qi, 0) are locally asymptotically Lyapunov stable. Lyapunov's second method guarantees that the basin of attraction Bri of each fixed point has a strictly positive measure.
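The skew-symmetry of Ṁ − 2C used in this computation (Lemma 6.17) is easy to verify numerically. The sketch below uses the standard planar two-link arm inertia matrix with hypothetical coefficients a, b, c and the Christoffel-based C(q, q̇), and checks that Ṁ − 2C is skew-symmetric, hence q̇⊤(Ṁ − 2C)q̇ = 0.

```python
import numpy as np

# Standard planar two-link arm with hypothetical inertia parameters:
#   M(q) = [[a + 2b cos q2, c + b cos q2], [c + b cos q2, c]]
a, b, c = 3.0, 0.5, 1.0

def M(q):
    c2 = np.cos(q[1])
    return np.array([[a + 2*b*c2, c + b*c2],
                     [c + b*c2,   c       ]])

def C(q, qd):
    # Coriolis matrix obtained from the Christoffel symbols of M(q)
    s2 = np.sin(q[1])
    return np.array([[-b*s2*qd[1], -b*s2*(qd[0] + qd[1])],
                     [ b*s2*qd[0],  0.0                 ]])

def Mdot(q, qd):
    # time derivative of M along q, i.e. (dM/dq2) * q2'
    s2 = np.sin(q[1])
    return np.array([[-2*b*s2*qd[1], -b*s2*qd[1]],
                     [-b*s2*qd[1],    0.0       ]])

rng = np.random.default_rng(1)
for _ in range(100):
    q, qd = rng.normal(size=2), rng.normal(size=2)
    S = Mdot(q, qd) - 2*C(q, qd)
    assert np.allclose(S + S.T, 0.0)          # skew-symmetry of Mdot - 2C
    assert abs(qd @ S @ qd) < 1e-12           # hence q'ᵀ(Mdot - 2C)q' = 0
```

With any other factorization of the Coriolis terms, the quadratic form q̇⊤(Ṁ − 2C)q̇ still vanishes, as footnote 3 points out, but the matrix itself need not be skew-symmetric.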
Remark 7.16 (Potential energy shaping) One remarks that asymptotic stability has
been obtained in part because the PD control injects some strict output passivity
inside the closed-loop system. This may be seen as a forced damping. On the other
hand, the position feedback may be interpreted as a modification of the potential
energy, so as to shape it adequately for control purposes. It seems that this technique
was first advocated by Takegaki and Arimoto in [1].
Remark 7.17 The PD control alone cannot compensate for gravity. Hence, the sys-
tem will converge to a configuration that is not the desired one. Clearly, increasing
λ2 reduces the steady-state error. But increasing gains is not always desirable in
practice, due to measurement noise in the sensors.
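This trade-off is easy to see on a one-degree-of-freedom sketch (our hypothetical example: a gravity pendulum under the PD law τ = −λ1q̇ − λ2q̃, with arbitrarily chosen gains): the closed loop settles where λ2(q∞ − qd) = −g sin q∞, and the steady-state error shrinks as λ2 grows but never vanishes.

```python
import math

g, l1 = 9.81, 8.0              # gravity and derivative gain lambda_1
qd_des = 1.0                   # desired angle (rad)

def steady_state_error(l2, T=30.0, h=1e-3):
    # unit-mass pendulum under PD control: q'' = -g sin q - l1 q' - l2 (q - qd)
    q, qd = 0.0, 0.0
    for _ in range(int(T / h)):      # semi-implicit Euler, run to steady state
        qd += h * (-g * math.sin(q) - l1 * qd - l2 * (q - qd_des))
        q  += h * qd
    return q - qd_des

e_low, e_high = steady_state_error(20.0), steady_state_error(200.0)
# PD alone cannot cancel gravity: a nonzero bias remains...
assert abs(e_low) > 1e-3 and abs(e_high) > 1e-5
# ...but increasing lambda_2 shrinks it
assert abs(e_high) < abs(e_low)
```

Adding gravity compensation to the PD law, as in the Takegaki–Arimoto scheme, removes this bias altogether.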
Since the closed-loop system possesses several equilibrium points, the underlying
passivity properties of the complete closed-loop system must be local in nature, i.e.,
they hold whenever the state remains inside the balls Bri [59]. It is however possible
that each block of the interconnection, when considered separately, possesses global
dissipativity properties. But the interconnection does not.
A First Interconnection: Looking at (7.25) one is tempted to interpret those
dynamics as the interconnection of two subsystems with respective inputs u 1 , u 2 and
outputs y1 and y2 , with y1 = u 2 and y2 = −u 1 , and
u1 = −λ1 q̇ − λ2 q̃,   y1 = q̇.    (7.29)
that q̇ ∈ L2(R+). Notice that this is a consequence of the fact that Hpd(s) defines an ISP operator, see Theorem 2.8 item 2. We cannot say much more if we do not pick up the storage functions of each subsystem. Now, the second subsystem has dynamics such that the associated operator u2 → y2 is ISP (hence necessarily of relative degree zero), with storage function (λ2/2) z1⊤z1. From the fact that ż1 = q̇ and due to the choice of the initial data, one has for all t ≥ 0: z1(t) = q̃(t). It is easy to see then that the first subsystem (the rigid-joint–rigid-link dynamics) has a storage function equal to ½ q̇⊤M(q)q̇ + U(q) − U(qi). The sum of both storage functions
yields the desired Lyapunov function for the whole system. The interconnection is
depicted in Fig. 7.2.
Remark 7.18 Looking at the dynamics of both subsystems, it seems that the total
system order has been augmented. But the interconnection equation u₂ = y₁ = q̇ may be
rewritten as ż₁ = q̇. This defines a dynamical invariant z₁ − q = q₀, where q₀ ∈ Rⁿ
is fixed by the initial condition z₁(0) = q(0) − qd. Hence, the system (7.30) and
(7.31) may be reduced to the subspace z₁ − q = −qd, and one recovers a system of
dimension 2n (in other words, the space (q, q̇, z₁) is foliated by the invariant manifolds
z₁ − q = q₀).
Remark 7.19 In connection with the remarks at the beginning of this section, let
us note that the fixed points of the first unforced (i.e., u₁ ≡ 0) subsystem are given
by {(q, q̇) | g(q) = 0, q̇ = 0}, while those of the unforced second subsystem are
given by {z₁ | ż₁ = 0 ⇒ q̃ = q̃(0)}. Thus, the first subsystem has Lyapunov stable
fixed points, which correspond to its static equilibria, while the fixed point of the
second subsystem corresponds to the desired static position qd. The fixed points of
the interconnected blocks are given by the roots of (7.27). If one looks at the system
from a pure input–output point of view, such a fixed-point problem does not appear.
However, if one looks at it from a dissipativity point of view, which necessarily
implies that the input–output properties are related to the state space properties, then
it becomes a necessary step.
Remark 7.20 H pd (s) provides us with an example of a passive system that is ISP,
but obviously not asymptotically stable, only stable (see Corollary 5.27).
from which one recognizes an OSP system, while the second one has the dynamics
ż₁(t) = u₂(t),   y₂(t) = λ₂z₁(t),   (7.34)

with z₁(0) = q(0) − qd. One can check that it is a passive lossless system since
⟨u₂, y₂⟩ₜ = ∫₀ᵗ λ₂q̃ᵀ(s)q̇(s)ds = (λ₂/2)[q̃ᵀ(t)q̃(t) − q̃ᵀ(0)q̃(0)], with storage function (λ₂/2) q̃ᵀq̃.
Therefore applying the Passivity Theorem (see Theorem 5.2 and Corollary 5.3), one
still concludes that q̇ ∈ L2 (R+ ; Rn ). We, however, may go a little further with
this decomposition. Indeed, consider the system with input u = u 1 + y2 and out-
put y = y1 . This defines an OSP operator u → y. Setting u ≡ y ≡ 0 one obtains
that (q − qi , q̇) = (0, 0). Hence this closed-loop system is ZSD. Since the storage
function (the sum of both storage functions) we have exhibited is positive definite
with respect to this error equation fixed point, and since it is proper, it follows that
the equilibrium point of the unforced system (i.e., u ≡ 0) is globally asymptotically
stable. This second interconnection is depicted in Fig. 7.3. In conclusion, it is not
very important whether we associate the strict passivity property with one block or
the other. What is important is that we can systematically associate with these dis-
sipative subsystems some Lyapunov functions that are systematically deduced from
their passivity property. This is a fundamental property of dissipative systems, that
one can calculate Lyapunov functions for them. It has even been originally the main
motivation for studying passivity, at least in the field of control and stabilization of
dynamic systems.
The PID control is also a feedback controller that is widely used in practice. Let us
investigate whether we can redo the above analysis done for the PD controller. If we
proceed in the same manner, we decompose the closed-loop dynamics
510 7 Passivity-Based Control
M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q(t)) + λ₁q̇(t) + λ₂q̃(t) + λ₃∫₀ᵗ q̃(s)ds = 0,   (7.35)
into two subsystems, one of which corresponds to the rigid-joint–rigid-link dynamics,
and the other one to the PID controller itself. The input and output signals of this
interconnection are this time chosen to be
u₁ = −λ₁q̇ − λ₂q̃ − λ₃∫₀ᵗ q̃(s)ds = −y₂,   y₁ = q̇ = u₂.   (7.36)
The transfer matrix of this linear operator is given by (compare with (2.53) and
(2.54))
Hpid(s) = (λ₁s² + λ₂s + λ₃)/s² · In.   (7.38)
Thus, it has a double pole with zero real part and it cannot be a PR transfer matrix,
see Theorem 2.45. This can also be checked by calculating ⟨u₂, y₂⟩ₜ, which contains a
term ∫₀ᵗ u₂ᵀ(s)z₁(s)ds that cannot be lower bounded.
If one chooses u 2 = q̃, then the PID block transfer matrix becomes
Hpid(s) = (λ₁s² + λ₂s + λ₃)/s · In,   (7.39)
which this time is a PR transfer function for a suitable choice of the gains, and one
can check that
⟨u₂, y₂⟩ₜ = (λ₁/2)[q̃ᵀ(s)q̃(s)]₀ᵗ + λ₂∫₀ᵗ q̃ᵀ(s)q̃(s)ds + (λ₃/2)(∫₀ᵗ q̃(s)ds)ᵀ(∫₀ᵗ q̃(s)ds),   (7.40)
which shows that the system is even ISP (but the transfer function is not SPR;
otherwise this system would be strictly passive in the state space sense, see Example 4.72,
which it is not from inspection of (7.40)). However, this change of input is suitable
for the PID block, but not for the rigid-joint–rigid-link block, which we know is not
passive with respect to the supply rate u₁ᵀq̃ because of the relative degree of this
output. As a consequence, the dynamics in (7.35) cannot be analyzed through the
Passivity Theorem.
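The positive-real argument around (7.38) and (7.39) can be probed numerically on the scalar case (the gain values below are illustrative): the real part of Hpid(jω) goes negative at low frequencies for the /s² version, while for the /s version it equals λ₂ ≥ 0 at every frequency.

```python
lam1, lam2, lam3 = 2.0, 3.0, 1.0

def H_double_pole(w):
    """(lam1*s^2 + lam2*s + lam3)/s^2 evaluated at s = j*w."""
    s = 1j * w
    return (lam1 * s**2 + lam2 * s + lam3) / s**2

def H_single_pole(w):
    """(lam1*s^2 + lam2*s + lam3)/s evaluated at s = j*w."""
    s = 1j * w
    return (lam1 * s**2 + lam2 * s + lam3) / s

freqs = [0.1 * k for k in range(1, 200)]
re_double = [H_double_pole(w).real for w in freqs]
re_single = [H_single_pole(w).real for w in freqs]
print(min(re_double), min(re_single))
```

Indeed Re H(jω) = λ₁ − λ₃/ω² for the double-pole case (negative for small ω, so PR fails), whereas for the single-pole case Re H(jω) = λ₂ for all ω, matching the ISP computation in (7.40).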
Remark 7.21 The system in (7.35) can be shown to be locally Lyapunov stable [60]
with a Lyapunov function V(z), where z(t) = [(∫₀ᵗ q̃(s)ds)ᵀ, q̃(t)ᵀ, q̇(t)ᵀ]ᵀ. Let
us add a fictitious input τ in the right-hand side of (7.35) instead of zero. From the
KYP Lemma, we know that there exists an output y (another fictitious signal) such
that this closed-loop system is passive with respect to the supply rate τᵀy. One has
y = (0, 0, 1) ∂V/∂z.
Before going on with controllers that assure tracking of arbitrary, smooth enough,
desired trajectories, let us investigate in more detail the relationships between Lya-
punov stable systems and the Passivity Theorem (which has numerous versions,
but is always based on the study of the negative interconnection of two dissipative
blocks). From the study, we made about the closed-loop dynamics of the PD and PID
controllers, it follows that if one has been able to transform a system (should it be
open or closed-loop) as in Fig. 3.2, and such that both blocks are dissipative, then the
sum of the respective storage functions of each block is a suitable Lyapunov function
candidate. Now, one might like to know whether a Lyapunov stable system possesses
some dissipativity properties. More precisely, we would like to know whether a sys-
tem that possesses a Lyapunov function, can be interpreted as the interconnection of
two dissipative subsystems. Let us state the following [61, 62]:
Lemma 7.22 Let L denote a set of Lyapunov stable systems with equilibrium point
(x₁, x₂) = (0, 0), where (x₁, x₂) generically denotes the state of systems in L. Suppose
the Lyapunov function V(x₁, x₂, t) satisfies
1.
V(x₁, x₂, t) = V₁(x₁, t) + V₂(x₂, t)   (7.41)
2.
V̇(x₁, x₂, t) ≤ −γ₁β₁(‖x₁‖) − γ₂β₂(‖x₂‖)   (7.42)
along trajectories of systems in L, where β₁(·) and β₂(·) are class K functions,
and γ₁ ≥ 0, γ₂ ≥ 0.
Suppose there exist functions F₁(·) and F₂(·) such that for all x₁, x₂ and t ≥ t₀

∂V₁/∂t + (∂V₁/∂x₁)ᵀ F₁(x₁, t) ≤ −γ₁β₁(‖x₁‖)   (7.43)

∂V₂/∂t + (∂V₂/∂x₂)ᵀ F₂(x₂, t) ≤ −γ₂β₂(‖x₂‖),   (7.44)

and Fᵢ(0, t) = 0, dim x̄ᵢ = dim xᵢ for i = 1, 2, for all t ≥ t₀. Then there exists a set P
of Lyapunov stable systems, with the same Lyapunov function V(x̄₁, x̄₂, t), that can
be represented as the feedback interconnection of two (strictly) passive subsystems
with states x̄₁ and x̄₂, respectively. These systems are defined as follows:

x̄̇₁(t) = F₁(x̄₁(t), t) + G₁(x̄₁(t), x̄₂(t), t)u₁,   y₁ = G₁ᵀ(x̄₁(t), x̄₂(t), t) ∂V₁/∂x̄₁(x̄₁, t)   (7.45)

x̄̇₂(t) = F₂(x̄₂(t), t) + G₂(x̄₁(t), x̄₂(t), t)y₁,   y₂ = G₂ᵀ(x̄₁(t), x̄₂(t), t) ∂V₂/∂x̄₂(x̄₂, t) = −u₁,   (7.46)
where G₁(·) and G₂(·) are arbitrary smooth nonzero functions, which can be shown
to define the inputs and the outputs of the interconnected systems.
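A concrete linear instance of the decomposition (7.45) and (7.46), chosen by us for illustration: a unit mass with damper c and spring k, with V₁ = x₁²/2 (kinetic), V₂ = kx₂²/2 (potential), F₁ = −cx₁, F₂ = 0 and G₁ = G₂ = 1. The sketch checks the dissipation inequality of each block along a simulation.

```python
c, k = 0.4, 2.0          # damper and spring constants (illustrative)
dt, T = 1e-3, 20.0
x1, x2 = 1.0, 0.0        # velocity and spring deflection
V0 = 0.5 * x1 * x1 + 0.5 * k * x2 * x2
V1_start = 0.5 * x1 * x1
supply1 = 0.0            # running integral of u1*y1 for block 1
for _ in range(int(T / dt)):
    y1 = x1              # y1 = G1^T dV1/dx1
    u1 = -k * x2         # u1 = -y2 = -G2^T dV2/dx2
    supply1 += dt * u1 * y1
    x1 += dt * (-c * x1 + u1)   # x1dot = F1 + G1*u1
    x2 += dt * y1               # x2dot = F2 + G2*y1
V_end = 0.5 * x1 * x1 + 0.5 * k * x2 * x2
V1_end = 0.5 * x1 * x1
print(V0, V_end)
```

Block 1 is strictly passive (V₁(t) − V₁(0) ≤ ⟨u₁, y₁⟩ₜ, with dissipation c∫x₁²), block 2 is lossless, and the total V = V₁ + V₂ decreases, exactly the structure the lemma asserts.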
The proof of Lemma 7.22 is straightforward from the KYP property of the outputs
of passive systems. Note that Lemma 7.22 does not imply any relationship between
the system in L and the system in P other than the fact that they both have the
same Lyapunov function structure. That is why we used different notations for their
states (x₁, x₂) and (x̄₁, x̄₂). We are now interested in establishing sufficient conditions
allowing us to transform a system in L into a system in P having the particular
form given in (7.45) and (7.46). These conditions are discussed next. Suppose (Σ L )
has the following form (notice that this is a closed-loop form):
ẋ₁(t) = F₁(x₁(t), t) + G₁(x₁(t), x₂(t), t)u₁,   y₁ = h₁(x₁(t), t) = u₂   (7.47)

ẋ₂(t) = F₂(x₂(t), t) + G₂(x₁(t), x₂(t), t)u₂,   y₂ = h₂(x₂(t), t) = u₁.   (7.48)
Then

V̇(x₁, x₂, t) = ∂V₁/∂t + (∂V₁/∂x₁)ᵀF₁(x₁, t) + ∂V₂/∂t + (∂V₂/∂x₂)ᵀF₂(x₂, t) + (∂V₁/∂x₁)ᵀG₁(x₁, x₂, t)h₂(x₂, t) + (∂V₂/∂x₂)ᵀG₂(x₁, x₂, t)h₁(x₁, t) ≤ −γ₁β₁(‖x₁‖) − γ₂β₂(‖x₂‖)   (7.49)
with inequalities (7.43) and (7.44) satisfied for both systems in (7.47) and (7.48).
Now let us rewrite (ΣL) in (7.47) and (7.48) as follows (we drop the arguments for
convenience; u₁ = h₂(x₂), u₂ = h₁(x₁)):

ẋ₁ = (F₁ + G₁u₁ − ḡ₁ū₁) + ḡ₁ū₁,   ȳ₁ = ḡ₁ᵀ ∂V₁/∂x₁ = ū₂   (7.50)

ẋ₂ = (F₂ + G₂u₂ − ḡ₂ū₂) + ḡ₂ū₂,   ȳ₂ = ḡ₂ᵀ ∂V₂/∂x₂ = −ū₁.   (7.51)
Notice that (Σ̃ L ) in (7.50) and (7.51) and (Σ L ) in (7.47) and (7.48) strictly represent
the same system. We have simply changed the definition of the inputs and of the
outputs of both subsystems in (7.47) and (7.48). Then, the following lemma is true:
Lemma 7.23 Consider the closed-loop Lyapunov stable system (Σ L ) in (7.47) and
(7.48), satisfying (7.49), with F1 (·) and F2 (·) satisfying (7.43) and (7.44). A sufficient
condition for (Σ L ) to be transformable into a system in P, is that the following two
inequalities are satisfied:
1.
(∂V₁/∂x₁)ᵀ G₁h₂ + (∂V₁/∂x₁)ᵀ ḡ₁ḡ₂ᵀ (∂V₂/∂x₂) ≤ 0,   (7.52)
2.
(∂V₂/∂x₂)ᵀ G₂h₁ − (∂V₂/∂x₂)ᵀ ḡ₂ḡ₁ᵀ (∂V₁/∂x₁) ≤ 0,   (7.53)
for some nonzero, smooth matrix functions ḡ1 (·), ḡ2 (·) of appropriate dimensions,
and with

F₁(0, t) + G₁(0, x₂, t)u₁(0, x₂, t) − ḡ₁(0, x₂, t)ū₁(0, x₂, t) = 0,
F₂(0, t) + G₂(x₁, 0, t)u₂(x₁, 0, t) − ḡ₂(x₁, 0, t)ū₂(x₁, 0, t) = 0,   (7.54)
with ∂Vᵢ/∂t + (∂Vᵢ/∂xᵢ)ᵀ fᵢ ≤ −γᵢβᵢ(‖xᵢ‖), γᵢ ≥ 0, fᵢ(0, t) = 0 for all t ≥ t₀, and V(·)
satisfies (7.41) and (7.42). Then, we get along trajectories of (7.55) and (7.56):
V̇(t) = V̇₁(t) + V̇₂(t) ≤ −γ₁β₁(‖x₁‖) − γ₂β₂(‖x₂‖). However, the subsystems
in (7.55) and (7.56) are not passive, as they do not verify the KYP property. The
conditions (7.52) and (7.53) reduce to

(∂V₁/∂x₁)ᵀ G₁ (∂V₂/∂x₂) + (∂V₁/∂x₁)ᵀ ḡ₁ḡ₂ᵀ (∂V₂/∂x₂) = 0,   (7.57)

as in this case (∂V₁/∂x₁)ᵀG₁h₂ = −(∂V₂/∂x₂)ᵀG₂h₁. Now choose ḡ₁ = −G₁, ḡ₂ = In, ū₁ =
−∂V₂/∂x₂, ū₂ = −G₁ᵀ∂V₁/∂x₁: (7.57) is verified.
In conclusion, the system in (7.55) and (7.56) is not convenient because its outputs
and inputs have not been properly chosen. By changing the definitions of the inputs
and outputs of the subsystems in (7.55) and (7.56), leaving the closed-loop system
unchanged, we transform the system such that it belongs to P. In most of the cases,
the functions gi (·), h i (·) and f i (·) are such that the only possibility for the equivalent
systems in (7.50) and (7.51) to be Lyapunov stable with Lyapunov functions V1 (·) and
V2 (·), respectively, is that ḡi ū i ≡ gi u i , i.e., we only have to rearrange the inputs and
the outputs to prove passivity. From Lemma 7.23 we can deduce the following result:
Corollary 7.25 Consider the system in (7.47) and (7.48). Assume (7.49) is satisfied,
and that (∂V₁/∂x₁)ᵀG₁h₂ = −(∂V₂/∂x₂)ᵀG₂h₁ (let us name this equality the cross-terms
cancelation). Then (i) If one of the subsystems in (7.47) or (7.48) is passive, the system
in (7.47) and (7.48) can be transformed into a system that belongs to P. (ii) If the
system in (7.47) and (7.48) is autonomous, it belongs to P.
Proof Using the cross-terms cancelation equality, one sees that the inequalities in (7.52)
and (7.53) reduce either to

(∂V₁/∂x₁)ᵀ G₁h₂ + (∂V₁/∂x₁)ᵀ ḡ₁ḡ₂ᵀ (∂V₂/∂x₂) = 0,   (7.58)

or to

−(∂V₂/∂x₂)ᵀ G₂h₁ + (∂V₂/∂x₂)ᵀ ḡ₂ḡ₁ᵀ (∂V₁/∂x₁) = 0.   (7.59)
Suppose that the system in (7.48) is passive. Then h₂ = G₂ᵀ∂V₂/∂x₂, thus it suffices to
choose ḡ₂ = G₂, ḡ₁ = −G₁. If the system in (7.47) is passive, then h₁ = G₁ᵀ∂V₁/∂x₁,
and we can take ḡ₂ = G₂, ḡ₁ = G₁. The second part of the corollary follows from
the fact that one has for all x₁ and x₂:
(∂V₁/∂x₁)ᵀ G₁(x₁)h₂(x₂) = −(∂V₂/∂x₂)ᵀ G₂(x₂)h₁(x₁).   (7.60)

Then, (7.47) and (7.48) can be transformed into a system that belongs to P. Necessarily,
h₂(x₂) = G₂ᵀ∂V₂/∂x₂ and h₁(x₁) = −G₁ᵀ∂V₁/∂x₁, or h₂(x₂) = −G₂ᵀ∂V₂/∂x₂ and h₁(x₁) = G₁ᵀ∂V₁/∂x₁,
which correspond to solutions of (7.58) or (7.59), respectively.
Example 7.26 Throughout this chapter and Chap. 8, we shall see several applications
of Lemmas 7.22 and 7.23. In particular, it happens that the cancelation of cross-terms
in Lyapunov function derivatives has been widely used for stabilization, and almost
systematically yields an interpretation via the Passivity Theorem. To illustrate those
results, let us reconsider the PD controller closed-loop dynamics in (7.25). Let us
start from the knowledge of the Lyapunov function deduced from the storage function
in (7.26). Letting x₁ = (q, q̇) be the state of the rigid-joint–rigid-link dynamics, and
x₂ = z₁ be the state of the second subsystem in (7.31), one sees that the sum of the
storage functions associated with each of these blocks forms a Lyapunov function that
satisfies the conditions of Lemma 7.22. Moreover, the conditions of Corollary 7.25
are satisfied as well, in particular the cross-terms cancelation equality. Indeed, from
(7.32) we get (but the same could be done with the interconnection in (7.29)):
(∂V₁/∂x₁)ᵀ G₁(x₁)h₂(x₂) = [gᵀ(q), q̇ᵀM(q)] [0; M⁻¹(q)] (−λ₂q̃) = −λ₂q̇ᵀq̃
(∂V₂/∂x₂)ᵀ G₂(x₂)h₁(x₁) = λ₂q̃ᵀq̇.   (7.61)
Hence, the dynamics in (7.25) can indeed be interpreted as the negative feedback
interconnection of two dissipative blocks. As another example, consider Theorem 5.56:
notice that choosing the controller u of the driving system as u = −(L_{f₁}U(ζ))ᵀ
exactly corresponds to a cross-terms cancelation equality. Hence the
closed-loop system thereby constructed can be analyzed through the Passivity Theorem.
This is the mechanism used in [61].
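The cross-terms cancelation in (7.61) is what makes the sum of the two storage functions decrease. A minimal numerical sketch (a one-link, gravity-free simplification of (7.25); parameters are our own): V = ½Mq̇² + ½λ₂q̃² should be nonincreasing along the closed loop Mq̈ + λ₁q̇ + λ₂q̃ = 0.

```python
M, lam1, lam2 = 1.0, 2.0, 9.0
dt, T = 1e-3, 10.0
q, dq = 1.0, 0.0          # q is the position error qtilde (desired = 0)
V = [0.5 * M * dq * dq + 0.5 * lam2 * q * q]
for _ in range(int(T / dt)):
    ddq = (-lam1 * dq - lam2 * q) / M
    dq += dt * ddq
    q += dt * dq
    V.append(0.5 * M * dq * dq + 0.5 * lam2 * q * q)
print(V[0], V[-1])
```

The decrease rate is exactly the injected damping −λ₁q̇² (the cross-terms ±λ₂q̃ᵀq̇ cancel), so V is flat where q̇ = 0 but never increases.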
The tracking problem for the model in (6.90) can be easily solved using a lineariz-
ing feedback that renders the closed-loop system equivalent to a double integrator.
Then, all the classical machinery for linear systems can be applied. However we are
not interested here in following this path. We would rather like to see how the PD
control may be extended to the tracking case, i.e., how we can preserve and use the
system dissipativity to derive a globally stable controller guaranteeing tracking of
any sufficiently differentiable desired trajectory.
The first idea, proposed in [63], is a direct extension of the PD structure, applying
the following control algorithm to the dynamics in (6.90):
τ(t) = M(q(t))q̈d(t) + C(q(t), q̇(t))q̇d(t) + g(q(t)) − λ₁q̃̇(t) − λ₂q̃(t),   (7.62)

with qd(·) ∈ C²(R⁺). Setting qd constant, one retrieves a PD controller with gravity
compensation. The closed-loop system is given by

M(q(t))q̃̈(t) + C(q(t), q̇(t))q̃̇(t) + λ₁q̃̇(t) + λ₂q̃(t) = 0.   (7.63)
This closed-loop dynamics resembles the one in (7.25). This motivates us to study
its stability properties by splitting it into two subsystems as

M(q(t))q̃̈(t) + C(q(t), q̇(t))q̃̇(t) = u₁(t) = −y₂(t),   y₁(t) = q̃̇(t) = u₂(t),   (7.64)

and

ż₁(t) = u₂(t),   y₂(t) = λ₁u₂(t) + λ₂z₁(t),   (7.65)
5 Some fundamental assumptions will be repeated several times in this chapter, to ease the reading.
One can check that

⟨u₁, y₁⟩ₜ = ∫₀ᵗ q̃̇ᵀ(τ)[M(q(τ))q̃̈(τ) + C(q(τ), q̇(τ))q̃̇(τ)]dτ = ½[q̃̇ᵀ(τ)M(q(τ))q̃̇(τ)]₀ᵗ ≥ −½ q̃̇ᵀ(0)M(q(0))q̃̇(0),   (7.66)

and that

⟨u₂, y₂⟩ₜ = λ₁∫₀ᵗ q̃̇ᵀ(τ)q̃̇(τ)dτ + (λ₂/2)[q̃ᵀ(s)q̃(s)]₀ᵗ ≥ −(λ₂/2) q̃ᵀ(0)q̃(0).   (7.67)
Notice that the second block is ISP. Similarly to the PD controller analysis, one
concludes that the dynamics in (7.63) can indeed be transformed into the interconnection
of two passive blocks. We could also have deduced from Lemma 7.23 that such an
interconnection exists, checking that V(q̃, q̃̇) = ½ q̃̇ᵀM(q)q̃̇ + ½ λ₂q̃ᵀq̃ is a Lyapunov
function for this system, whose derivative along the trajectories of (7.63) is negative
semidefinite, i.e., γ₁ = 0 in Lemma 7.22 (we let the reader do the calculations by
him/herself). However, one cannot apply the Krasovskii–LaSalle Theorem to this
system because it is not autonomous (the inertia and Coriolis matrices depend explicitly
on time when the state is considered to be (q̃, q̃̇)). One has to resort to Matrosov's
Theorem to prove the asymptotic stability (see Theorem A.42 and Lemma A.43 in
the Appendix) [63]. Equivalent representations (that are to be compared to the ones
constructed for the PD control in Sect. 7.3.1) are depicted in Figs. 7.4 and 7.5.
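A quick numerical sketch of the controller (7.62) on a one-link pendulum (so C = 0; mass, length, and gains are our illustrative choices): the closed loop reduces to Mq̃̈ + λ₁q̃̇ + λ₂q̃ = 0 and the tracking error converges to zero.

```python
import math

def paden_panja_track(T=10.0, dt=1e-3, lam1=8.0, lam2=16.0):
    """One-link pendulum m*l^2*qddot + m*g*l*sin(q) = tau tracking
    qd(t) = sin(t) with tau from (7.62); returns final |qtilde|, |qtildedot|."""
    m, l, grav = 1.0, 0.5, 9.81
    M = m * l * l
    q, dq, t = 0.8, 0.0, 0.0   # start away from the desired trajectory
    for _ in range(int(T / dt)):
        qd, dqd, ddqd = math.sin(t), math.cos(t), -math.sin(t)
        e, de = q - qd, dq - dqd
        tau = M * ddqd + m * grav * l * math.sin(q) - lam1 * de - lam2 * e
        ddq = (tau - m * grav * l * math.sin(q)) / M
        dq += dt * ddq
        q += dt * dq
        t += dt
    return abs(q - math.sin(t)), abs(dq - math.cos(t))

e_T, de_T = paden_panja_track()
print(e_T, de_T)
```

Note that the controller only compensates the measured gravity g(q) and feeds forward M(q)q̈d and C(q, q̇)q̇d; no inversion of the error dynamics is performed.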
The above scheme has the advantage of being quite simple. However, its extension to
the adaptive case (when the inertia parameters are supposed to be unknown, one needs
to introduce some online adaptation) is really not straightforward. One big challenge
in the Robotics and Automatic Control fields during the 1980s, was to propose a
feedback controller that guarantees tracking and which extends also to an adaptive
version (which will be presented in Sect. 8.1.1). Let us consider the following input
[7, 8]6 :
τ(q(t), q̇(t), t) = M(q(t))q̈r(t) + C(q(t), q̇(t))q̇r(t) + g(q(t)) − λ₁s(t),   (7.68)

where q̇r(t) = q̇d(t) − λq̃(t) and s(t) = q̇(t) − q̇r(t) = q̃̇(t) + λq̃(t).
Notice that contrary to the scheme in (7.62), setting qd constant in (7.68) does not
yield the PD controller. However, the controller in (7.68) can be seen as a PD action
(λ₁s) with additional nonlinear terms whose role is to assure some tracking properties.
Before going on, let us note that the whole closed-loop dynamics is not given by (7.69)
alone, since (7.69) is an nth-order system with state s, whereas the whole system is of
order 2n. To complete it, one needs to add to (7.69):

q̃̇(t) = −λq̃(t) + s(t).   (7.70)
6 It seems that what is now widely known as the Slotine and Li scheme was also designed in [8] at
the same time, so that it could be named the Slotine–Li–Sadegh–Horowitz scheme.
It should be clear from all the foregoing developments that the subsystem in (7.69)
defines a passive operator between u₁ = −λ₁s = −y₂ and y₁ = s = u₂, with storage
function V₁(s, t) = ½ sᵀM(q)s (a Lyapunov function for this ZSO subsystem). This
is strongly based on the skew-symmetry property in Assumption 20. The equivalent
feedback interconnection of the closed loop is shown in Fig. 7.6.
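The passive interconnection can be exercised numerically. Below, a one-link sketch of the Slotine & Li controller (7.68) (C = 0 for one link; parameters are illustrative), monitoring a candidate Lyapunov function V = ½Ms² + λλ₁q̃², built from the storage functions of the two blocks (the coefficient λλ₁ on q̃² is our reading of the required-supply computation below).

```python
import math

def slotine_li(T=10.0, dt=1e-3, lam=2.0, lam1=5.0):
    """One-link pendulum tracking qd(t) = sin(t) under (7.68), with
    s = qtildedot + lam*qtilde and qrdot = qddot - lam*qtilde."""
    m, l, grav = 1.0, 0.5, 9.81
    M = m * l * l
    q, dq, t = 1.0, 0.0, 0.0
    V = []
    for _ in range(int(T / dt)):
        qd, dqd, ddqd = math.sin(t), math.cos(t), -math.sin(t)
        e, de = q - qd, dq - dqd
        s = de + lam * e
        V.append(0.5 * M * s * s + lam * lam1 * e * e)
        ddqr = ddqd - lam * de            # time derivative of qrdot
        tau = M * ddqr + m * grav * l * math.sin(q) - lam1 * s
        ddq = (tau - m * grav * l * math.sin(q)) / M
        dq += dt * ddq
        q += dt * dq
        t += dt
    return abs(q - math.sin(t)), V

err, V = slotine_li()
print(err, V[0], V[-1])
```

In the simulation s obeys Mṡ + λ₁s = 0, then (7.70) drives q̃ to zero, and V decays accordingly.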
Therefore, it has a relative degree r₂ = 0, and the state is not observable from the
output y₂. However, it is ZSD since {y₂ = u₂ = 0} ⇒ lim_{t→+∞} z₁(t) = 0. We also notice
that this system is VSP since

⟨u₂, y₂⟩ₜ = λ₁∫₀ᵗ u₂ᵀ(s)u₂(s)ds = (1/λ₁)∫₀ᵗ y₂ᵀ(s)y₂(s)ds = (λ₁/2)∫₀ᵗ u₂ᵀ(s)u₂(s)ds + (1/(2λ₁))∫₀ᵗ y₂ᵀ(s)y₂(s)ds.   (7.72)
Let us compute storage functions for this system. Let us recall from (4.154) that for
systems of the form ẋ = f(x, t) + g(x, t)u, y = h(x, t) + j(x, t)u with j(x, t) +
jᵀ(x, t) = R full rank, the storage functions are solutions of the partial differential
inequality (that reduces to a Riccati inequality in the linear case)

(∂V/∂x)ᵀ f(x, t) + ∂V/∂t + (h − ½gᵀ∂V/∂x)ᵀ R⁻¹ (h − ½gᵀ∂V/∂x) ≤ 0,   (7.73)
and the available storage Va(·) and the required supply Vr(·) (with x(−t) = 0) satisfy
(7.73) as an equality. Thus, the storage functions V(z₁) for the system in (7.71) are
solutions of

−λ (dV/dz₁)ᵀ z₁ + (1/(4λ₁)) (dV/dz₁)ᵀ (dV/dz₁) ≤ 0,   (7.74)
which means that the best strategy to recover energy from this system through the
output y₂ is to leave it at rest, and

Vr(z₁(0)) = inf_{u₂:(−t,0)→(0,z₁(0))} ∫₋ₜ⁰ u₂ᵀ(s)y₂(s)ds = inf_{u₂:(−t,0)→(0,z₁(0))} ∫₋ₜ⁰ λ₁(ż₁ + λz₁)ᵀ(ż₁ + λz₁)ds = λ₁λ z₁ᵀ(0)z₁(0),   (7.77)

where the last step is performed by simple integration of the cross-term and dropping
the other two terms, which are always nonnegative, for any control strategy. Notice that
Va(z) ≤ Vr(z), which agrees with Theorem 4.46.
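The lower bound used in (7.77) can be probed numerically. We assume the realization u₂ = ż₁ + λz₁ with y₂ = λ₁u₂ (our reading of the block; values are illustrative): for any trajectory steered from z₁(−t) = 0 to z₁(0) = z₀, the supplied energy is at least λ₁λz₀².

```python
lam, lam1 = 2.0, 5.0     # lambda and lambda_1 (illustrative values)
z0 = 1.3                 # target terminal state z1(0)
dt = 1e-4

def supplied_energy(t_len, power):
    """Steer z1 along z1(tau) = z0*((tau + t_len)/t_len)**power on
    [-t_len, 0] (so z1(-t_len) = 0, z1(0) = z0) and integrate u2^T y2
    with u2 = z1dot + lam*z1 and y2 = lam1*u2."""
    n = int(t_len / dt)
    total = 0.0
    for k in range(n):
        frac = k * dt / t_len
        z = z0 * frac ** power
        zdot = z0 * power * frac ** (power - 1) / t_len
        u2 = zdot + lam * z
        total += dt * lam1 * u2 * u2
    return total

bound = lam1 * lam * z0 * z0
vals = [supplied_energy(t, p) for t in (1.0, 5.0, 20.0) for p in (1, 2, 3)]
print(min(vals), bound)
```

Every sampled steering policy supplies strictly more than the cross-term contribution λ₁λz₀², consistent with the bounding argument in the text.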
We conclude that a suitable Lyapunov function for the closed-loop system in (7.69)
and (7.70) is given by the sum

V(s, q̃, t) = ½ sᵀM(q̃ + qd(t))s + λλ₁ q̃ᵀq̃.   (7.78)
It is noteworthy that we have deduced a Lyapunov function from the knowledge of
some passivity properties of the equivalent interconnection form of the closed-loop
system. Historically, the closed-loop system in (7.69) and (7.70) has been studied
first using the storage function of the first subsystem in (7.69) only, and then using
additional arguments to prove the asymptotic convergence of the whole state toward
zero [7]. It is only afterward that the Lyapunov function for the whole closed-loop
system has been proposed [64]. We have shown here that it is possible to construct it
directly from passivity arguments. It must, therefore, be concluded from this example
that the dissipativity properties allow one to directly find the right Lyapunov
function for this system.
Remark 7.29 Lemmas 7.22 and 7.23 can in general be used if one starts from the
knowledge of the Lyapunov function. However, the cross-terms cancelation is not
satisfied since

(∂V₁/∂x₁)ᵀ G₁(x₁)h₂(x₂) = sᵀM(q)M⁻¹(q)λ₁s = λ₁sᵀs
(∂V₂/∂x₂)ᵀ G₂(x₂)h₁(x₁) = −λλ₁q̃ᵀs.   (7.79)

This comes from the fact that this time one has to add (∂V₁/∂x₁)ᵀG₁(x₁)h₂(x₂) +
(∂V₂/∂x₂)ᵀG₂(x₂)h₁(x₁) = −λ₁sᵀs + λλ₁q̃ᵀs to (∂V₂/∂x₂)ᵀF₂(x₂) = −2λ²λ₁q̃ᵀq̃ in order to
get the inequality in (7.49). One may also check that the inequalities in (7.52) and
(7.53) can hardly be satisfied by any ḡ₁ and ḡ₂. Actually, the conditions stated in
Lemma 7.23 and Corollary 7.25 are sufficient only. For instance, from (7.49), one can
change the inequalities in (7.52) and (7.53) to incorporate the terms (∂V₁/∂x₁)ᵀF₁(x₁, t)
and (∂V₂/∂x₂)ᵀF₂(x₂, t) in the conditions required for the matrices ḡ₁ and ḡ₂. Actually,
Lemmas 7.22 and 7.23 will be useful when we deal with adaptive control, see Chap. 8,
in which case the cross-terms cancelation equality is generally satisfied.
There are two ways to prove the stability for the closed-loop system in (7.69) and
(7.70). The first proof is based on the positive function V(s, q̃, t) = ½ sᵀM(q)s
(which we denoted as V₁(s, t) above), where one notices that q(t) = q̃(t) + qd(t);
hence the explicit time dependency in V(s, q̃, t). This proof makes use of Lemma 4.8.
It does not show Lyapunov stability, but merely shows the boundedness of all
signals as well as the asymptotic convergence of the tracking error and its derivative
toward zero. The second proof is based on the Lyapunov function (candidate) in
(7.78). Lyapunov stability of the error (closed-loop) system equilibrium point is
then concluded.
First Stability Proof: Let us consider

V(s, q̃, t) = ½ sᵀM(q)s.   (7.80)
Its derivative along the trajectories of (7.69) is V̇ = −λ₁sᵀ(t)s(t), so that λ₁∫₀ᵗ sᵀ(τ)s(τ)dτ ≤ V(s(0), q̃(0), 0) since V(·, ·) ≥ 0. Therefore, s(·) is in L₂. Let us now consider the system in (7.70).
This is an asymptotically stable system, whose state is q̃(·) and whose input is s(·).
Applying Lemma 4.8 we deduce that q̃ ∈ L₂ ∩ L∞, q̃̇ ∈ L₂, and lim_{t→+∞} q̃(t) = 0.
Furthermore, since V(s(t), q̃(t), t) ≤ V(s(0), q̃(0), 0), it follows that for bounded
initial data, ‖s(t)‖ < +∞, i.e., s ∈ L∞. Therefore, q̃̇ ∈ L∞ as well, and from Fact
6 (Sect. 4.1) the function q̃(·) is uniformly continuous. Using (7.69) it follows that
ṡ ∈ L∞, so using Fact 6 and then Fact 8, we conclude that s(t) → 0 as t → +∞.
Thus q̃̇(t) → 0 as t → +∞. All the closed-loop signals are bounded and the tracking
error (q̃, q̃̇) converges globally asymptotically to zero. However, we have not proved
the Lyapunov stability of the equilibrium point of the closed-loop error system (7.69)
and (7.70). This is the topic of the next paragraph.
Lyapunov Stability Proof: Let us now consider the positive definite function
in (7.78). Computing its derivative along the closed-loop system (7.69) and (7.70)
trajectories yields

V̇(q̃(t), q̃̇(t)) = −λ₁ q̃̇ᵀ(t)q̃̇(t) − λ²λ₁ q̃ᵀ(t)q̃(t) ≤ 0,   (7.84)

from which the global asymptotic Lyapunov stability of the fixed point (q̃, q̃̇) =
(0, 0) follows. The skew-symmetry property is used once again to compute the
derivative. It was further shown in [64] that when the system has only revolute joints,
then the stability is uniform. This comes from the fact that in such a case, the inertia
matrix M(q) contains only bounded (smooth) functions like cos(·) and sin(·) and is
thus bounded; consequently, the Lyapunov function is also upperbounded by some
class K function. It is interesting to see how the technology influences the stability.
In both stability analyses, one can conclude about exponential convergence.
Indeed, for the first proof one has V̇(s, q̃, t) ≤ −λ₁sᵀ(t)s(t) ≤ −(2λ₁/λ_max(M(q))) V(s, q̃, t).
Therefore s(·) converges to zero exponentially fast, and so do q̃(·) and q̃̇(·).
The use of the property in Assumption 20 is not mandatory. Let us describe now a
control scheme proposed in [65], that can be classified in the set of passivity-based
control schemes, as will become clear after the analysis. Let us consider the following
control input:
These two representations of the same closed-loop system are now analyzed from a
"Passivity Theorem" point of view. Let us consider the following negative feedback
interconnection:

u₁ = −y₂ = −½ Ṁ(q, q̇)s + C(q, q̇)s,   u₂ = y₁ = s,   (7.88)

where the first subsystem has the dynamics M(q(t))ṡ(t) + C(q(t), q̇(t))s(t) +
(λd + λ/λ₁)q̃̇(t) + λdλq̃(t) = u₁(t), while the second one is a static operator between
u₂ = s and y₂ given by y₂(t) = ½ Ṁ(q(t), q̇(t))s(t) − C(q(t), q̇(t))s(t). It is easily
checked that if Assumption 20 is satisfied then

⟨u₂, y₂⟩ₜ = ½ ∫₀ᵗ sᵀ(τ)[Ṁ(q(τ), q̇(τ)) − 2C(q(τ), q̇(τ))]s(τ)dτ = 0,   (7.89)
and that the available storage of the second block is the zero function as well. Concerning
the first subsystem, one has

⟨u₁, y₁⟩ₜ = ½[sᵀ(τ)M(q(τ))s(τ)]₀ᵗ + ½(2λλd + λ²/λ₁)[q̃ᵀ(τ)q̃(τ)]₀ᵗ + ∫₀ᵗ [(λd + λ/λ₁) q̃̇ᵀ(τ)q̃̇(τ) + λ²λd q̃ᵀ(τ)q̃(τ)]dτ ≥ −½ s(0)ᵀM(q(0))s(0) − ½(2λλd + λ²/λ₁) q̃(0)ᵀq̃(0),   (7.90)
which proves that it is passive with respect to the supply rate u₁ᵀy₁. It can also be
calculated that the available storage function of this subsystem is given by

Va(q̃(0), s(0)) = sup_{u₁:(q̃(0),s(0))→·} −∫₀ᵗ sᵀ(τ)[M(q(τ))ṡ(τ) + C(q(τ), q̇(τ))s(τ) + (λd + λ/λ₁)q̃̇(τ) + λλd q̃(τ)]dτ = ½ s(0)ᵀM(q(0))s(0) + (λλd + λ²/(2λ₁)) q̃ᵀ(0)q̃(0).   (7.91)
Since this subsystem is ZSD (u₁ ≡ s ≡ 0 ⇒ q̃ → 0 as t → +∞), one concludes that
the available storage in (7.91) is actually a Lyapunov function for the corresponding
unforced system, whose fixed point (q̃, s) = (0, 0) (or (q̃, q̃̇) = (0, 0)) is asymptotically
stable. This also holds for the complete closed-loop system since the second
block has storage functions equal to zero, and the dynamics in (7.86) is ZSD when
one considers the input to be u in the left-hand side of (7.86) and y = y₁ = s (set
u ≡ 0 and s ≡ 0 and it follows from (7.86) that q̃ → 0 exponentially). Actually, the
derivative of Va(q̃, s) in (7.91) along trajectories of the first subsystem is given by

V̇a(q̃(t), s(t)) = −(λd + λ/λ₁) q̃̇ᵀ(t)q̃̇(t) − λ²λd q̃ᵀ(t)q̃(t) ≤ 0.   (7.92)
It is noteworthy that the result in (7.92) can be obtained without using the skew-
symmetry property in Assumption 20. But skew symmetry was used to prove the
dissipativity of each block in (7.88).
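The skew symmetry underlying (7.89) can be checked numerically on the standard two-link planar arm (the lumped parameters a1, a2, a3 below are illustrative assumptions, not taken from the text): the matrix Ṁ − 2C built from Christoffel symbols satisfies N + Nᵀ = 0 at any state.

```python
import math, random

a1, a2, a3 = 3.0, 1.0, 0.5   # lumped inertial parameters (illustrative)

def M(q):
    c2 = math.cos(q[1])
    return [[a1 + 2.0 * a3 * c2, a2 + a3 * c2],
            [a2 + a3 * c2, a2]]

def C(q, dq):
    s2 = math.sin(q[1])
    return [[-a3 * s2 * dq[1], -a3 * s2 * (dq[0] + dq[1])],
            [a3 * s2 * dq[0], 0.0]]

def Mdot(q, dq):
    s2 = math.sin(q[1])
    return [[-2.0 * a3 * s2 * dq[1], -a3 * s2 * dq[1]],
            [-a3 * s2 * dq[1], 0.0]]

random.seed(0)
worst = 0.0
for _ in range(100):
    q = [random.uniform(-3, 3), random.uniform(-3, 3)]
    dq = [random.uniform(-3, 3), random.uniform(-3, 3)]
    N = [[Mdot(q, dq)[i][j] - 2.0 * C(q, dq)[i][j] for j in range(2)]
         for i in range(2)]
    worst = max(worst, max(abs(N[i][j] + N[j][i])
                           for i in range(2) for j in range(2)))

# sanity check: Mdot is the time derivative of M along (q, dq)
q, dq, eps = [0.3, -1.2], [0.7, 0.4], 1e-7
fd_err = max(abs((M([q[i] + eps * dq[i] for i in range(2)])[r][c]
                  - M(q)[r][c]) / eps - Mdot(q, dq)[r][c])
             for r in range(2) for c in range(2))
print(worst, fd_err)
```

Skew symmetry of N = Ṁ − 2C gives sᵀNs = 0 for every s, which is exactly what makes the second block in (7.88) lossless.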
Remark 7.30 Originally, the closed-loop system in (7.86) was proven to be
Lyapunov stable using the Lyapunov function

V(s, q̃) = ½ sᵀM(q)s + ½ λ₁ q̃ᵀq̃.   (7.94)

The derivative of V(·) in (7.93) or (7.94) along closed-loop trajectories is given by

V̇(q̃(t), q̃̇(t)) = −q̃̇ᵀ(t)(λd + λ₁/λ)q̃̇(t) − 2λdλ q̃̇ᵀ(t)q̃(t) − λ²λd q̃ᵀ(t)q̃(t).   (7.95)
Notice that Va(·) in (7.91) and V(·) in (7.94) are not equal to each other. One
concludes that the passivity analysis of the closed loop permits one to discover a
(simpler) Lyapunov function.
Remark 7.31 The foregoing stability analysis does not use the cross-terms cancelation
equality of Lemma 7.23. One concludes that the schemes that are not based on
the skew-symmetry property in Assumption 20 do not lend themselves very well to
an analysis through the Passivity Theorem. We may, however, consider the controller
in (7.85) to be passivity based, since it does not attempt to linearize the system,
similarly to the Slotine and Li scheme.
Most manipulators are equipped with position and velocity sensors,7
and are controlled point-to-point with a PD controller. The tracking case requires more, as we saw.
However, the controller's structure then becomes more complicated, hence less robust. It
is of some interest to try to extend the separation principle for linear systems (a stable
observer can be connected to a stabilizing controller without destroying the closed-loop
stability) toward some classes of nonlinear systems. The rigid-joint–rigid-link
manipulator case seems to constitute a good candidate, due to its nice properties.
At the same time, such systems are nonlinear enough that the extension is not
trivial. In the continuity of what has been done in the preceding sections, we shall
investigate how the dissipativity properties of the Slotine and Li and of the Paden and
Panja schemes can be used to derive (locally) stable controllers not using velocity
feedback.
In the sequel, we shall start by the regulation case in Sect. 7.4.1, and then analyze
the tracking of trajectories in Sect. 7.4.2.
In this section, we present the extension of the PD controller when the velocity is not
available as done in [66, 67]. Basically, the structure of output (position) feedback
controllers is that of the original input where the velocity q̇ is replaced by some
estimated value. Let us consider the dynamics in (6.90) with the controller:
τ = g(qd) − λ₁q̃ − (1/λ₂)(q̃ − z),   ż = λ₃(q̃ − z),   (7.96)
7 Or, just position sensors whose output signal is differentiated through "dirty" filters to recover the
velocity.
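Before the analysis, the law (7.96) can already be exercised numerically. A hedged one-link sketch (model, gains, and the reading of z as a filtered position error are our illustrative assumptions; note that λ₁ exceeds the Lipschitz constant m·g·l of the gravity torque, as the analysis below requires):

```python
import math

def regulate_without_velocity(T=40.0, dt=1e-3, lam1=20.0, lam2=0.2, lam3=10.0):
    """One-link pendulum m*l^2*qddot + m*g*l*sin(q) = tau regulated to qdes
    by the position-feedback law (7.96); the controller state z stands in
    for the unmeasured velocity."""
    m, l, grav = 1.0, 0.5, 9.81
    M = m * l * l
    qdes = 1.0
    q, dq, z = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        qt = q - qdes
        tau = m * grav * l * math.sin(qdes) - lam1 * qt - (qt - z) / lam2
        z += dt * lam3 * (qt - z)
        ddq = (tau - m * grav * l * math.sin(q)) / M
        dq += dt * ddq
        q += dt * dq
    return abs(q - qdes), abs(dq)

pos_err, vel_err = regulate_without_velocity()
print(pos_err, vel_err)
```

The term (q̃ − z)/λ₂, with z a first-order filtering of q̃, acts as an approximate-derivative damping injection, which is why no velocity measurement is needed.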
and

V₂(x₂) = ½ x₂ᵀx₂.   (7.100)
It can be shown that V₁(·) is positive definite and has a global minimum at (x₁₁, x₁₂) =
(0, 0), provided λ₁ ≥ γ where γ is a Lipschitz constant for g(·). Differentiating V(·)
along the trajectories of (7.97), or equivalently (7.98), one finds V̇ = −λ₃x₂ᵀx₂,
where the cross-terms cancelation equality is satisfied, since (∂V₁/∂x₁)ᵀG₁h₂ = −x₁₂ᵀx₂ =
−(∂V₂/∂x₂)ᵀG₂h₁. Since the system is autonomous, Corollary 7.25 (ii) applies. Now, it is
easy to see that the second subsystem with state vector x₂, input u₂ = h₁(x₁), and
output y₂ = −h₂(x₂) is passive:
⟨u₂, y₂⟩ₜ = ∫₀ᵗ (1/λ₂) x₂ᵀ(s)u₂(s)ds = ∫₀ᵗ (1/λ₂) x₂ᵀ(s)(ẋ₂(s) + λ₃x₂(s))ds = (1/(2λ₂))[x₂ᵀ(s)x₂(s)]₀ᵗ + (λ₃/λ₂)∫₀ᵗ x₂ᵀ(s)x₂(s)ds,   (7.101)
and one recognizes a storage function S₂(x₂) equal to (1/λ₂)V₂, with V₂ in (7.100).
Notice that the second subsystem (with state x₂) is strictly passive in the sense of
Lemma 4.94, but it is also OSP. The other subsystem is defined with input u₁ =
−y₂ = h₂(x₂) and output y₁ = u₂ = h₁(x₁) and is passive as one can check:
⟨u1, y1⟩t = ⟨x12, h2⟩t = ∫₀ᵗ x12ᵀ(s)[M(x11(s) + qd)ẋ12(s) + C(x11(s) + qd, x12(s))x12(s) + g(x11(s) + qd) − g(qd) + λ1 x11(s)]ds = S1(t) − S1(0),   (7.102)
where we used ẋ11 (t) = x12 (t) in the calculation.
7.4 Rigid-Joint–Rigid-Link Systems: Position Feedback 527
Remark 7.32 (a) In connection with Remark 7.18, let us note that this time the closed-loop scheme has an order strictly larger than the open-loop one. (b) One has V(x1, x2) = V1(x1) + V2(x2) = λ2 S1(x1) + λ2 S2(x2). This is due to the particular choice of h1(x1) and h2(x2). (c) The OSP plus ZSD properties of the second block are important because it is precisely these properties that allow one to use the Krasovskii–LaSalle Theorem to prove asymptotic stability.
The material that follows is mainly taken from [68]. In fact, it is to be expected that
the separation principle does not extend completely to the nonlinear systems we deal
with. Indeed, the presented schemes assure local stability only (more exactly they
assure semi-global stability, i.e., the region of attraction of the closed-loop fixed point
can be arbitrarily increased by increasing some feedback gains). In what follows we
shall not develop the whole stability proofs. We shall just focus on the passivity
interpretation of the obtained closed-loop system, and in particular on the local
stability, that results from the fact that the storage function satisfies the dissipation
inequality locally only. A similar result for the Slotine and Li + observer controller may be found in [68].
The foregoing section was devoted to an extension of PD controllers and concerns
global regulation around a fixed position only. It is of interest to consider the tracking
case which is, as one expects, much more involved due to the non-autonomy of the
closed-loop scheme. Let us consider the following fixed parameter scheme (compare
with the expression in (7.62)):
Controller:
τ = M(q)q̈d + C(q, q̇0)q̇d + g(q) − λ1 (q̇0 − q̇r)
q̇r(t) = q̇d(t) − λ2 e(t)
q̇0(t) = q̂˙(t) − λ3 q̃(t),   (7.103)
Observer:
q̂˙(t) = z(t) + λ4 q̃(t) = z(t) + (λ6 + λ3) q̃(t)
ż(t) = q̈d(t) + λ5 q̃(t) = q̈d(t) + λ6 λ3 q̃(t),
It can be shown, using the function
V(e, s1, q̃, s2) = (1/2)s1ᵀM(q)s1 + (1/2)eᵀK1(q, e)e + (1/2)s2ᵀM(q)s2 + (1/2)q̃ᵀK2(q, q̃)q̃,   (7.105)
that for a suitable choice of the initial data within a ball B ⊆ R4n , whose radius
is directly related to the control gains, the closed-loop fixed point (e, s1 , q̃, s2 ) =
(0, 0, 0, 0) is (locally) exponentially stable, see Proposition 7.33 below. The ball’s
radius can be varied by varying λ6 or λ1 , making the scheme semi-global. An intu-
itive decomposition of the closed-loop system in (7.104) is as follows, noting that
M(q)ë = M(q)ṡ1 − λ2 M(q)e:
M̄(q)ṡ + C̄(q, q̇)s = u1,  q̃˙ = −λ2 q̃ + s1,  ė = −λ3 e + s2,
y1 = s,  u2 = y1,  y2 = −T(q, q̇, s) = −u1,   (7.106)
where s = (s1ᵀ, s2ᵀ)ᵀ, and
T(q, q̇, s) = −[ λ1 s2 + λ2 C(q, q̇)ė − C(q, q̇d)s2 + λ2 M(q)ė ;  −λ1 s1 + C(q, s2 − q̇)ė ],   (7.107)
M̄(q) = diag(M(q), M(q)),  C̄(q, q̇) = diag(C(q, q̇), C(q, q̇)).   (7.108)
The first subsystem is clearly passive with respect to the supply rate u 1T y1 . The
second subsystem is a memoryless operator u2 ↦ −T(q, q̇, u2). If it can be shown that locally −u2ᵀT(q, q̇, u2) ≥ −δu2ᵀu2 for some δ > 0, then the system with input u = u1 + y2 and output y = y1 is OSP. In other words, the function in (7.105) satisfies the dissipation inequality along the closed-loop trajectories: (dV/dx)ᵀ(f(x) + g(x)u) ≤ uᵀh(x) − δhᵀ(x)h(x) for all u and x, locally only, where xᵀ = (eᵀ, s1ᵀ, q̃ᵀ, s2ᵀ) and y = h(x).
Then, under suitable ZSD properties, any storage function which is positive defi-
nite with respect to the closed-loop fixed point is a strict (local) Lyapunov function.
Notice that the total closed-loop system is ZSD since y1 = s ≡ 0 and u ≡ 0 implies
that y2 ≡ 0, hence u 1 ≡ 0 and e → 0 and q̃ → 0 as t → +∞.
Assumption 21 The next properties hold for the dynamics in (6.90):
1. 0 < Mmin ≤ ||M(q)|| ≤ Mmax for all q ∈ Rⁿ, ||C(q, x)|| ≤ Cmax ||x|| for all q and x ∈ Rⁿ, where the matrix norm is defined as ||A|| = √(λmax(AᵀA)).
2. The Christoffel symbols associated with the inertia matrix are used to write the Coriolis and centrifugal forces matrix C(q, q̇), so that the skew-symmetry property of Lemma 6.17 holds.
3. M(q) = M(q)ᵀ ≻ 0.
4. sup_{t∈R} ||q̇d(t)|| = ||q̇d||max < +∞.
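The bounds in Assumption 21 are easy to check numerically for a given model. The sketch below does so for a standard planar two-link arm (the parameter values θ1, θ2, θ3 and the constant Cmax = √3·θ2 are illustrative choices, not from the book), using the induced norm ||A|| = √(λmax(AᵀA)):

```python
import numpy as np

# standard planar two-link arm (illustrative parameters)
th1, th2, th3 = 3.5, 0.5, 1.0

def M(q):
    c2 = np.cos(q[1])
    return np.array([[th1 + 2 * th2 * c2, th3 + th2 * c2],
                     [th3 + th2 * c2, th3]])

def C(q, x):  # Coriolis matrix written with the Christoffel symbols
    s2 = np.sin(q[1])
    return th2 * s2 * np.array([[-x[1], -(x[0] + x[1])],
                                [x[0], 0.0]])

def nrm(A):   # induced 2-norm: sqrt of the largest eigenvalue of A^T A
    return np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

Cmax = np.sqrt(3) * th2   # Frobenius-norm bound: ||C(q,x)|| <= sqrt(3)*th2*||x||
rng = np.random.default_rng(0)
eigs = []
for _ in range(1000):
    q = rng.uniform(-np.pi, np.pi, 2)
    x = rng.normal(size=2)
    eigs += list(np.linalg.eigvalsh(M(q)))
    assert nrm(C(q, x)) <= Cmax * np.linalg.norm(x) + 1e-12

Mmin, Mmax = min(eigs), max(eigs)
print(0 < Mmin <= Mmax)  # M(q) is uniformly positive definite and bounded
```

Sampling only gives empirical values of Mmin, Mmax here; for this model they can also be computed analytically from the eigenvalues of M(q), which depend on q2 alone.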
Proposition 7.33 ([68, Proposition 3.2]) Let Assumption 21 hold. Consider the dynamics in (6.90), and the controller+observer in (7.103). Assume that λ1 > λ2 Mmax + (3 + √2)Cmax ||q̇d||max, and that λ6 > 2Mmax/λ1. Then the closed-loop system is locally exponentially stable. A region of attraction is given by
B = { x ∈ R⁴ⁿ | ||x|| < δ √(Pmin/Pmax) ( (λ1 − λ2 Mmax)/Cmax − (3 + √2)||q̇d||max ) },
where δ = 1/(3 + √2), Pmin = min( (1/3)Mmin, (2/3)λ2 Mmax/λ3 ), Pmax = max( 6λ1/λ2, 6λ1/λ3 ).
where the vector F(t, q, q̇) ∈ Rn accounts for unmodeled dynamics and external
disturbances (that were not present in the foregoing analyses, see the dynamics
in (6.90)), and τ ∈ Rn represents the control input forces. Before designing the
controller, let us state some fundamental assumptions (some of which have already
been done in the foregoing sections, and are recalled here for convenience):
Assumption 22 The following properties hold for the Lagrangian dynamics in (7.110):
1. The Christoffel symbols associated with the inertia matrix are used to write the Coriolis and centrifugal forces matrix C(q, q̇), so that the skew-symmetry property of Lemma 6.17 holds.
2. M(q) = M(q)ᵀ ≻ 0.
3. The matrices M(q), C(q, q̇) together with the vectors g(q) and F(t, q, q̇) satisfy the following inequalities for all (t, q, q̇) ∈ R⁺ × Rⁿ × Rⁿ and some known positive constants k1, k2, kC, kg and kF:
For any matrix M ∈ Rⁿˣⁿ, the norm ||M||m is the induced norm given by ||M||m = sup_{||x||=1} ||Mx||. If one uses the Euclidean norm, then the induced norm satisfies ||M||m = √(λmax(MᵀM)) [76, p. 365, Exercise 5]. Let us introduce the position error q̃ = q − qd and the sliding surface s = q̃˙ + Λq̃, which will be used in order to maintain the error signal at zero. Here, the matrix −Λ ∈ Rⁿˣⁿ is Hurwitz and satisfies KpΛ = ΛᵀKp ≻ 0 for a symmetric and positive definite matrix Kp ∈ Rⁿˣⁿ. The
proposed control law is as follows:
where q̇r = q̇d − Λq̃, Kp ∈ Rⁿˣⁿ, Kp = Kpᵀ ≻ 0. The term u accounts for the multivalued part of the controller and is specified below. The matrices M̂(q), Ĉ(q, q̇) and
ĝ(q) describe the nominal system and are assumed to fulfill Assumption 22 (although
with different bounds). In other words, we assume that all the uncertainties are in the
system parameters, and not in the structure of the matrices.
Assumption 23 The matrices M̂(q), Ĉ(q, q̇) together with the vector ĝ(q) satisfy the following inequalities for all (t, q, q̇) ∈ R⁺ × Rⁿ × Rⁿ and some known positive constants k̂1, k̂2, k̂C and k̂g:
0 < k̂1 ≤ ||M̂(q)||m ≤ k̂2,  ||Ĉ(q, q̇)||m ≤ k̂C ||q̇||,  ||ĝ(q)|| ≤ k̂g ||q||.
Remark 7.34 It is interesting to compare (7.111) and (7.68). Clearly the single-valued part of the controller is reminiscent of the Slotine and Li algorithm, where the exact model parameters are replaced by "estimates" (this is similar to the adaptive
control case, see the input τ (t) in (8.6)). The position feedback term is also present
in (7.68). However, we shall see next that the velocity feedback will be replaced by
a set-valued controller.
Since there are parameter uncertainties and a disturbance, a robust control strategy has to be employed. First, notice that the closed-loop system dynamics is
where ΔM(q) = M(q) − M̂(q), ΔC(q, q̇) = C(q, q̇) − Ĉ(q, q̇) and Δg(q) = g(q) − ĝ(q). Before going on, we need to upper bound the equivalent disturbance ξ(t, s, q̃), using Assumption 22.
532 7 Passivity-Based Control
Proposition 7.35 ([75, Proposition 2]) The equivalent disturbance ξ(t, s, q̃) satisfies
||ξ(t, s, q̃)|| ≤ β(||s||, ||q̃||),
where β(||s||, ||q̃||) = c1 + c2 ||s|| + c3 ||q̃|| + c4 ||q̃|| ||s|| + c5 ||q̃||², for known positive constants ci, i = 1, …, 5.
Let us now introduce the set-valued part of the controller:
Theorem 7.37 ([75, Theorem 1]) Let Assumptions 22 and 24 hold. Then, there exists
a solution s : [0, +∞) → Rn , q̃ : [0, +∞) → Rn of (7.112) and (7.114) for every
(s0 , q̃0 ) ∈ Rn × Rn , whenever:
(α/2) γ(s, q̃) ≥ β(s, q̃),   (7.115)
where β is specified in Proposition 7.35 and α is given in Proposition 7.36. The notion
of solution is taken in the following sense:
• s(·) is continuous and its derivative ṡ(·) is essentially bounded in bounded sets.
• q̃(·) is continuous with derivative q̃˙(·) continuous and bounded in bounded sets.
• Equations (7.112) and (7.114) are satisfied for almost all t ∈ [0, +∞).
• s(0) = s0 and q̃(0) = q̃0 .
Proof (sketch of sketch): It uses the approximated closed-loop dynamics that corresponds to replacing the set-valued term ∂Φ(s) by its Yosida approximation, which yields a well-posed single-valued dynamics. The total mechanical energy is used to prove the positive invariance of a bounded ball in the state space. Then a classical limit analysis allows one to conclude on the existence of solutions, using [73, Theorem 4.2].
Theorem 7.39 Consider the closed-loop system (7.112) and (7.114), with K p = 0.
Let the assumptions of Theorem 7.37 hold. Set γ (s, q̃) = (2β(s, q̃) + δ)/α, where
δ > 0 is constant and β is defined as in Proposition 7.35. Then, the sliding surface
s = 0 is reached in finite time.
where ζ ∈ ∂Φ(s) and the skew-symmetry in item (1) of Assumption 22 was used. From the definition of the subdifferential and from Proposition 7.36, it follows that ⟨−ζ, s⟩ ≤ −Φ(s) ≤ −α||s||, which yields
V̇ ≤ −(αγ(s, q̃) − β(s, q̃)) ||s||.
Hence, if αγ(s, q̃) = β(s, q̃) + δ where δ is a positive constant, we obtain V̇(t) ≤ −δ||s|| ≤ −δ√(2/k2) V^{1/2}(t). By applying the Comparison Lemma and integrating over the time interval [0, t], we obtain V^{1/2}(t) ≤ V^{1/2}(0) − (δ/√(2k2)) t. Consequently, V(·) reaches zero in a finite time t∗ bounded by t∗ ≤ (√(2k2)/δ) V^{1/2}(0).
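The comparison-lemma step can be checked numerically: integrating the worst case V̇ = −δ√(2/k2) V^{1/2} (the constants below are illustrative, not from the book), V(·) indeed hits zero no later than the bound t∗ = √(2k2) V^{1/2}(0)/δ:

```python
import numpy as np

k2, delta, V0 = 2.0, 0.5, 4.0
t_star = np.sqrt(2 * k2) * np.sqrt(V0) / delta   # finite-time bound t*

dt, t, V = 1e-4, 0.0, V0
while V > 0.0 and t < 2 * t_star:
    # worst case of the differential inequality: equality, forward Euler
    V = max(V - dt * delta * np.sqrt(2.0 / k2) * np.sqrt(V), 0.0)
    t += dt

print(t <= t_star + dt)   # True: zero is reached before the bound
```

This mirrors the proof: V^{1/2} decreases at least linearly at rate δ/√(2k2), so the hitting time cannot exceed t∗.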
γ > Rξ/α,   (7.116)
The equilibria of the closed-loop system (7.112), (7.114), with state vector (q̃, s), are the solutions of the generalized equation:
7.5 Rigid-Joint–Rigid-Link Systems: Set-Valued Robust Control 535
C(qd(t), q̇d(t))s∗ + Kp q̃∗ + ξ(t, s∗, q̃∗) ∈ −γ(s∗, q̃∗) ∂Φ(s∗)
s∗ = Λq̃∗.   (7.117)
The complete analysis of the time discretization of the above controllers is a rather long process, which is summarized in this section. Indeed, one is dealing here with a class of nonlinear and set-valued systems, and one cannot expect that the discrete-time counterpart is a simple matter. The basic idea is to use an implicit discretization, in the spirit of the method introduced in [79–83], and successfully experimentally validated in [82, 84–86].
The very first step is to choose a discretization for the plant dynamics in (7.110).
We will work with the Euler discretization:
M(qk) (q̇k+1 − q̇k)/h + C(qk, q̇k)q̇k+1 + g(qk) + F(tk, qk, q̇k) = τk
qk+1 = qk + h q̇k.   (7.118)
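One step of (7.118) amounts to solving a linear system for q̇k+1. A minimal sketch, written for a generic n-DOF model and exercised on a one-degree-of-freedom pendulum (illustrative data; F is taken equal to zero):

```python
import numpy as np

def euler_step(M, C, g, F, tau, q, dq, h):
    """One step of (7.118): M(qk)(dq_{k+1}-dq_k)/h + C(qk,dq_k) dq_{k+1}
    + g(qk) + F = tau_k, then q_{k+1} = q_k + h dq_k."""
    Mk, Ck = M(q), C(q, dq)
    # (Mk/h + Ck) dq_{k+1} = Mk dq_k / h + tau - g(qk) - F
    dq_next = np.linalg.solve(Mk / h + Ck, Mk @ dq / h + tau - g(q) - F)
    q_next = q + h * dq
    return q_next, dq_next

# illustrative 1-DOF pendulum written in matrix form
M_ = lambda q: np.array([[1.0]])
C_ = lambda q, dq: np.array([[0.0]])
g_ = lambda q: np.array([9.81 * np.sin(q[0])])

q, dq, h = np.array([0.1]), np.array([0.0]), 1e-3
for _ in range(1000):   # unforced swing for 1 s
    q, dq = euler_step(M_, C_, g_, np.zeros(1), np.zeros(1), q, dq, h)
print(q[0])
```

Note that only the velocity update is implicit (through the matrix solve); the position update uses the old velocity, exactly as in (7.118).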
This choice may appear arbitrary, see nevertheless Theorem 7.48 below. Contrary to the continuous-time case, where it was not needed, we assume that the estimated matrices satisfy the skew-symmetry property:
Property 7.41 The matrices M̂(q) and Ĉ(q, q̇) satisfy (d/dt)M̂(q(t)) = Ĉ(q(t), q̇(t)) + Ĉᵀ(q(t), q̇(t)).
Mimicking the continuous-time problem, let us introduce the position error q̃k =
qk − qkd as well as the sliding surface sk = q̃˙k + Λq̃k , where q̃k+1 = q̃k + h q̃˙k , −Λ ∈
Rn×n is a Hurwitz matrix as in the continuous-time case, and qkd refers to the sample
of the reference trajectory at time tk . We propose the control law τk as
τk = M̂k (q̇rk+1 − q̇rk)/h + Ĉk q̇rk+1 + ĝk + uk
qk+1 = qk + h q̇k,   (7.119)
where q̇kr = q̇kd − Λq̃k and u k refers to the multivalued part of the controller plus an
additional dissipation term specified below. After some simple algebraic manipula-
tions, the closed-loop system is obtained from (7.118) and (7.119) as
Mk sk+1 − Mk sk + hCk sk+1 = −hξk + huk
q̃k+1 = (In − hΛ) q̃k + hsk,   (7.120)
where sk+1 = sk + hṡk, q̃k+1 = q̃k + hq̃˙k, and the equivalent disturbance ξk ≜ ξ(tk, sk, q̃k) is given by
ξk = Fk + (Mk − M̂k)(q̈dk − Λ(sk − Λq̃k)) + gk − ĝk + (Ck − Ĉk)(q̇dk+1 − Λ((In − hΛ)q̃k + hsk)).   (7.121)
Let us specify the remaining term uk in a similar way as its counterpart in continuous time (7.114):
−uk ∈ Ks ŝk+1 + γ ∂Φ(ŝk+1),   (7.123)
where Ks = Ksᵀ ≻ 0. This time the gain γ > 0 is considered constant and ŝk+1 is defined by
Since the equivalent disturbance ξk is unknown, the controller will be calculated from the nominal unperturbed plant (7.124) with state ŝk (which may be thought of as a dummy variable, as well) and using (7.120) as follows:
The framed system (7.125) is crucial in the control system’s design. It is a generalized
equation with unknowns ŝk+1 and ζk+1 , that allows one to calculate the controller
at each step, since it is not affected by the unknown disturbance. If we are able to
solve (7.125) and get ŝk+1 and ζk+1 , then the set-valued controller in (7.123) can be
calculated. System (7.125a)–(7.125d) may be viewed as follows: Equations (7.125a)
and (7.125d) are the Euler discretization of the plant with a pre-feedback, (7.125c) is a
nominal unperturbed system and (7.125b) is the discretized set-valued controller to be
7.5 Rigid-Joint–Rigid-Link Systems: Set-Valued Robust Control 537
calculated from (7.125c). From (7.125) it becomes clear that, when all uncertainties and disturbances vanish, ŝk = sk whenever ŝ0 = s0.
Remark 7.42 Roughly speaking, the process is the same as the one in Sects. 3.15.6 and (3.368), (3.369); however, it is rendered more complex because of the nonlinearities on the one hand, and of the unknown disturbance on the other hand.
Let us prove the well-posedness of the general scheme (7.125), i.e., one can compute a selection of the multivalued controller (7.125b) in a unique way, using only the information available at time tk. Notice first that (7.125c) and (7.125b) imply
⟨Âk ŝk+1 − M̂k sk, η − ŝk+1⟩ + hγ Φ(η) − hγ Φ(ŝk+1) ≥ 0   (7.126)
for all η ∈ Rⁿ, where Âk ≜ M̂k + hĈk + hKs. The equivalence follows from the definition of the subdifferential, see (3.232). From Lemma A.96, it follows that ŝk+1
is uniquely determined if the operator Âk is strongly monotone. Additionally, note that ŝk+1 depends on Âk, M̂k, sk, h, γ and Φ only (all of them available at time step k). In order to obtain conditions for the strong monotonicity of Âk, we note that for any w ∈ Rⁿ,
⟨Âk w, w⟩ ≥ (k̂1 + hκ1 − ||ε̂k||m/2) ||w||²,   (7.127)
with lim_{h↓0} ||ε̂k||m/h = lim_{h↓0} ||εk||m/h = 0. Hence, Âk is strongly monotone for any h small enough such that
k̂1/2 + hκ1 − ||ε̂k||m/2 ≥ 0.   (7.129)
Applying Lemma A.96 we obtain the uniqueness of ŝk+1. It is noteworthy that the strong monotonicity conditions could be relaxed using the results in [78, Sect. 2.7]. Now we shall make use of Lemma A.98.
Remark 7.43 (Numerical Solvers) There are several ways to numerically solve problems of the form (A.81) or (7.126), like the semi-smooth Newton method [32, Sect. 7.5] advocated in [74, Sect. 6]. For control applications this method may be too time-consuming, since it involves the computation of inverse matrices and proximal maps of composite functions. In contrast, the simple method of successive approximations [87, Sect. 14] can quickly find the fixed point of (A.82). Details about the implementation are given in [75, Sect. VII].
Using Lemma A.98, the selection of the control value can be obtained from (7.125b), (7.125c) as
ζk+1 = −(1/(hγ)) (Âk ŝk+1 − M̂k sk)   (7.130a)
ŝk+1 = Prox_{μhγΦ}((I − μÂk)ŝk+1 + μM̂k sk),   (7.130b)
where μ > 0 is such that 0 ≺ Âk + Âkᵀ − μÂkᵀÂk. The solution of the implicit Eq. (7.130b) with unknown ŝk+1 is a function of sk and h, and it is clear from (7.130a) that the controller is nonanticipative.
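As a hedged illustration of (7.130), take Φ(·) = ||·||1, whose proximal map is the soft threshold, and run the method of successive approximations on the implicit equation (7.130b); all numerical values below are made up for the example:

```python
import numpy as np

def prox_l1(v, t):
    # proximal map of t*||.||_1 (soft threshold)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# made-up data standing in for Âk = M̂k + h Ĉk + h Ks and M̂k
h, gamma, mu = 1e-3, 5.0, 0.1
A = np.array([[2.0, 0.3], [0.1, 1.5]])
Mh = np.array([[1.2, 0.0], [0.0, 0.8]])
sk = np.array([0.4, -0.7])

# check the step-size condition 0 < A + A^T - mu*A^T A used in the text
assert np.all(np.linalg.eigvalsh(A + A.T - mu * A.T @ A) > 0)

s_hat = np.zeros(2)
for _ in range(500):     # successive approximations on (7.130b)
    s_new = prox_l1((np.eye(2) - mu * A) @ s_hat + mu * (Mh @ sk), mu * h * gamma)
    done = np.linalg.norm(s_new - s_hat) < 1e-12
    s_hat = s_new
    if done:
        break

# selection of the multiplier from (7.130a)
zeta = -(A @ s_hat - Mh @ sk) / (h * gamma)
print(s_hat, zeta)
```

At the fixed point, the recovered ζk+1 is indeed a selection of ∂||·||1(ŝk+1), i.e., the generalized equation defining the controller is solved using only data available at step k.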
Assumption 25 The step length h > 0 is small enough such that the spectrum of In − hΛ is contained in the interior of the unit circle of the complex plane.
Let us now proceed with the stability analysis of the discrete-time system (7.125).
This is a rather long and technical analysis that will be summarized briefly in this
section. Let us start with the following technical result:
Lemma 7.44 ([75, Lemma 5]) There exists δ∗ > 0 (depending on q̃0 and s0) such that for any h ∈ (0, δ∗] the following inequalities hold:
||ε̂k||m ≤ min{k̂1, 2hκ1},   (7.131a)
||εk||m ≤ min{k1, 2hκ1}.   (7.131b)
Theorem 7.45 ([75, Theorem 5]) Let Assumptions 22, 23 and 25 hold. Consider the discrete-time dynamical system (7.125). Then, there exist constants r̂s > 0 and h∗ > 0 such that, for all h ∈ (0, min{δ∗, h∗}] with δ∗ given by Lemma 7.44, the origin of (7.125a) is semi-globally practically stable whenever γ and α satisfy
γα > max{ β̄(1 + 2k̂2/k̂1) + β̄k̂2/(k̂1 r̂s),  2k̂2 r̂s/k̂1 + 2F/k̂1 },   (7.132)
for some constants β̄ and F. Moreover, ŝk reaches the origin in a finite number of steps k∗, and ŝk = 0 for all k ≥ k∗ + 1.
Proof (sketch of sketch): The proof uses the two positive definite functions V1,k = ŝkᵀM̂kŝk and V2,k = skᵀM̂ksk. Two cases are examined: ||ŝk+1|| ≥ hr̂s, and ||ŝk+1|| ≤ hr̂s. In the first case, one can prove that under some conditions, V2,k+1 − V2,k < 0. In the second case, it is possible that V2,k+1 − V2,k > 0, however it is shown that if V2,k increases, this is only in quantities small enough so that sk stays in the compact set W = {w ∈ Rⁿ | wᵀM̂0w ≤ R}, for some R > 0 such that s0 ∈ W. The constant β̄ is defined as follows. The dynamics in (7.125a) is rewritten equivalently as M̂ksk+1 − M̂ksk + hĈksk+1 + hKsŝk+1 + h(ξk + θk + ϑk) = −hγζk+1, with ξk in (7.121), θk = (Mk − M̂k)ṡk and ϑk = (Ck − Ĉk)sk+1, where sk+1 = sk + hṡk. Then ξ̂k = ξk + ϑk + θk, and ||ξ(tk, sk, q̃k)|| ≤ β(||sk||, ||q̃k||), where β(||sk||, ||q̃k||) = c1 + c2||sk|| + c3||q̃k|| + c4||q̃k|| ||sk|| + c5||q̃k||², and ci, i = 1, …, 5, are known positive constants. We set β̄ ≜ max_{(sk, q̃k) ∈ W × R̃Bⁿ} β(||sk||, ||q̃k||), an upper bound of β(||sk||, ||q̃k||); R̃ = R̃(s0, q̃0) is the radius of a closed ball such that q̃k ∈ R̃Bⁿ (such a radius can always be found, see Remark 7.46). Finally, the constant F is defined by ||ξ̂k|| ≤ F ≜ b0 + b1h + b2h², for some bi > 0.
Remark 7.46 Under the assumptions given in Theorem 7.45, it is clear that the sliding variable sk converges to a ball of radius rs = (k̂2/k̂1) r̂s + (2F/k̂1) h, which implies the boundedness of the state variable q̃k. Recalling that Λ and h are such that Assumption 25 holds, the solution at step k is given by
q̃k = (In − hΛ)ᵏ q̃0 + h Σ_{n=0}^{k−1} (In − hΛ)ⁿ sk−1−n,
and Σ_{n=0}^{∞} ||(In − hΛ)ⁿ|| ≤ ρ for some finite ρ > 0 [88, Theorem 22.11]. Therefore, q̃k is also bounded for all k ∈ N. In fact, it converges to a ball of radius h rs ρ.
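This closed form is just the unrolled recursion q̃k+1 = (In − hΛ)q̃k + hsk from (7.120); the index convention used below is the one consistent with that recursion, and can be verified numerically on random data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, k = 3, 0.01, 50
Lam = np.diag([1.0, 2.0, 3.0])       # -Lam Hurwitz; Assumption 25 holds for small h
A = np.eye(n) - h * Lam
q0 = rng.normal(size=n)
s = rng.normal(size=(k, n))          # s_0, ..., s_{k-1}

# recursion (7.120)
qt = q0.copy()
for j in range(k):
    qt = A @ qt + h * s[j]

# closed form: (I - h*Lam)^k q0 + h * sum_{m=0}^{k-1} (I - h*Lam)^m s_{k-1-m}
closed = np.linalg.matrix_power(A, k) @ q0 \
       + h * sum(np.linalg.matrix_power(A, m) @ s[k - 1 - m] for m in range(k))
print(np.allclose(qt, closed))   # True
```

Since the spectral radius of In − hΛ is below one, the geometric sum of its powers is bounded, which is precisely what makes q̃k bounded when sk is.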
Corollary 7.47 ([75, Corollary 3]) Let the assumptions of Theorem 7.45 hold. Then
in the case when there is no disturbance (ξ ≡ 0), the origin of (7.125) is globally
finite-time Lyapunov stable, while q̃k → 0 asymptotically.
We finish with a result that proves that the choice of the discretization for the plant,
made more or less arbitrarily as in (7.118), is in fact quite sound.
Theorem 7.48 (Convergence of the discrete-time solutions) [75, Theorem 6] Let
(sk , q̃k ) be a solution of the closed-loop discrete-time system (7.125) and let the
functions sh(t) = sk+1 + ((tk+1 − t)/h)(sk − sk+1), q̃h(t) = q̃k+1 + ((tk+1 − t)/h)(q̃k − q̃k+1), for all t ∈ [tk, tk+1), be the piecewise-linear approximations of sk and q̃k, respectively. Then, we can find a sequence of sampling times h converging to zero such that (sh, q̃h) converges to (s, q̃), where (s, q̃) is a solution of
M(q(t))ṡ(t) + C(q(t), q̇(t))s(t) + Ks s(t) + ξ(t, s(t), q̃(t)) = −γ ζ(t)
ζ(t) ∈ ∂Φ(s(t))
q̃˙(t) = s(t) − Λq̃(t),   (7.133)
with s(0) = s0 and q̃(0) = q̃0.
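The soundness of the implicit discretization is easy to visualize on the scalar prototype ṡ(t) ∈ −γ sgn(s(t)): the implicit step has the closed-form solution of a soft threshold, reaches zero exactly and stays there, whereas the explicit step chatters with amplitude of order hγ. A minimal sketch (illustrative values):

```python
import numpy as np

gamma, h, N, s0 = 1.0, 0.1, 100, 1.05

def implicit(s):
    # s_{k+1} = s_k - h*gamma*zeta, zeta in sgn(s_{k+1})  =>  soft threshold
    return np.sign(s) * max(abs(s) - h * gamma, 0.0)

def explicit(s):
    # s_{k+1} = s_k - h*gamma*sgn(s_k): oscillates once |s| < h*gamma
    return s - h * gamma * np.sign(s)

si, se = s0, s0
tail_i, tail_e = [], []
for k in range(N):
    si, se = implicit(si), explicit(se)
    if k >= N // 2:        # record the tail, well after the reaching phase
        tail_i.append(abs(si))
        tail_e.append(abs(se))

print(max(tail_i), max(tail_e))  # 0.0 versus chattering of size about h*gamma/2
```

The implicit scheme lands exactly on the sliding surface in finitely many steps, which is the discrete-time counterpart of the finite-time reaching proved above.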
We now turn our attention to another class of Lagrangian systems, with lumped flexibilities, using Spong's model in (6.97). The control problem of such systems was challenging at the end of the 1980s [18]. Especially, extending the trajectory tracking problem solved for the rigid-joint systems (in the fixed-parameter or the adaptive cases) was considered to be a difficult issue, mainly due to the underactuated feature of the system (2n degrees of freedom and n inputs). Its triangular structure is very helpful for feedback purposes, however.
9 The superiority of the implicit method over the explicit one, in terms of global versus local stability,
In Sect. 6.4, we saw how the dissipativity properties derived for the rigid-joint–
rigid-link manipulator case extend to the flexible-joint–rigid-link case, and we pre-
sented what we called passivity-based schemes. Considering the Lyapunov function
in (7.78), let us try the following [19–22]:
V(q̃1, q̃2, s1, s2) = (1/2)s1ᵀM(q1)s1 + (1/2)s2ᵀJs2 + λλ1 q̃1ᵀq̃1 + λλ1 q̃2ᵀq̃2 + (1/2)(q̃1 − q̃2)ᵀK(q̃1 − q̃2).   (7.134)
The various signals have the same definition as in the rigid case. One sees that
similarly to (7.78) this positive definite function mimics the total energy function
of the open-loop unforced system. In order to make it a Lyapunov function for the
closed-loop system, one can classically compute its derivative along the trajectories
of (6.97) and try to find out a u that makes its derivative negative definite. Since we
already have analyzed the rigid-joint–rigid-link case, we can intuitively guess that
one goal is to get a closed-loop system of the form
M(q1 (t))ṡ1 (t) + C(q1 (t), q̇1 (t))s1 (t) + λ1 s1 (t) = f 1 (s1 (t), s2 (t), q̃1 (t), q̃2 (t))
J ṡ2 (t) + λ1 s2 (t) = f 2 (s1 (t), s2 (t), q̃1 (t), q̃2 (t)).
(7.135)
For the moment, we do not fix the functions f 1 (·) and f 2 (·). Since the Lyapunov func-
tion candidate preserves the form of the system’s total energy, it is also to be strongly
expected that the potential energy terms appear in the closed-loop dynamics. More-
over, we desire that the closed-loop system consists of two passive blocks in negative
feedback. Obviously V(·) in (7.134) contains the ingredients for Lemmas 7.22 and 7.23 to apply. The first block may be chosen with state vector x1 = (q̃1ᵀ, s1ᵀ, q̃2ᵀ, s2ᵀ)ᵀ. We know it is passive with respect to the supply rate u1ᵀy1, with output y1 = (s1ᵀ, s2ᵀ)ᵀ and input u1 = −y2 = (−(K(q̃1 − q̃2))ᵀ, (K(q̃1 − q̃2))ᵀ)ᵀ. One storage function for this subsystem is
V1(x1, t) = (1/2)s1ᵀM(q1)s1 + (1/2)s2ᵀJs2 + λλ1 q̃1ᵀq̃1 + λλ1 q̃2ᵀq̃2.   (7.136)
However, notice that we have not fixed the input and output of this subsystem, since
we leave for the moment f 1 (·) and f 2 (·) free. Now, the second subsystem must have
a storage function equal to:
V2(x2, t) = (1/2)(q̃1 − q̃2)ᵀK(q̃1 − q̃2),   (7.137)
and we know it is passive with respect to the supply rate u2ᵀy2, with an input u2 = y1 and an output y2 = −u1, and from (7.137) with a state vector x2 = K(q̃1 − q̃2). Its dynamics is consequently given by
In order for Lemmas 7.22 and 7.23 to apply, we also require the cross-terms cancelation equality to be satisfied, i.e., (∂V1/∂x1)ᵀG1h2 = −(∂V2/∂x2)ᵀG2h1, where we get from (7.135)
s1ᵀ f1 + s2ᵀ f2 = −(q̃2 − q̃1)ᵀK(s2 − s1),   (7.139)
from which one deduces that f2(s1, s2, q̃1, q̃2) = K(q̃1 − q̃2) and f1(s1, s2, q̃1, q̃2) = K(q̃2 − q̃1). Thus since we have fixed the input and output of the second subsystem
so as to make it a passive block, we can deduce from Lemma 7.23 that the closed-
loop system that consists of the feedback interconnection of the dynamics in (7.135)
and (7.138) can be analyzed through the Passivity Theorem. Notice however that we
have not yet checked whether a state feedback exists that assures this closed-loop
form. This is what we develop now. Let us consider the following controller:
u = J q̈2r + K(q2d − q1d) − λ1 s2
q2d = K⁻¹ ur + q1d.   (7.140)
It is noteworthy that the controller is thus formed of two controllers similar to the
one in (7.68): one for the first “rigid link” subsystem and the other for the motor
shaft dynamics. The particular form of the interconnection between them makes it
possible to pass from the first dynamics to the second one easily. It should be noted that the form in (7.140) and (7.141) depends on the state (q̃1, s1, q̃2, s2) only, and not on any acceleration or jerk terms.
To recapitulate, the closed-loop error dynamics is given by
M(q1(t))ṡ1(t) + C(q1(t), q̇1(t))s1(t) + λ1 s1(t) = K(q̃2(t) − q̃1(t))
(∫₀ᵗ [s1 − s2]dτ)ᵀ K (∫₀ᵗ [s1 − s2]dτ).   (7.143)
This does not modify significantly the structure of the scheme, apart from the fact
that this introduces a dynamic state-feedback term in the control loop. Actually, as
shown in [22], the static state-feedback scheme has the advantage over the dynamic
one of not constraining the initial conditions on the open-loop state vector and on
q1d (0), q̇1d (0) and q̈1d (0). The stability of the scheme with the integral terms as in
(7.143) may be shown using the function
V(s1, s2, z) = (1/2)s1ᵀM(q1)s1 + (1/2)s2ᵀJs2 + (1/2)zᵀKz,   (7.144)
with
q2d = q1d − λx + K⁻¹(−s1 + M(q1)q̈1r + C(q1, q̇1)q̇1r + g(q1))
q̇1r(t) = q̇1d(t) − λq̃1(t)
ẋ(t) = q̃1(t) − q̃2(t)
z(t) = λx(t) + (q̃1(t) − q̃2(t))   (ż(t) = s1(t) − s2(t))
u = −s2 − J(−q̈2d + λq̃˙2) − K(q1d − q2d − λx).
Then, one gets along closed-loop trajectories V̇ (s1 , s2 , z) = −s1T s1 − s2T s2 . See [22]
for more details.
Remark 7.49 (From flexible to rigid joints) A strong property of the controller in (7.140) and (7.141) in closed loop with the dynamics in (6.97), with the Lyapunov function in (7.134), is that it converges toward the closed-loop system in (7.69) and (7.70) when K → +∞ (all the entries diverge). Indeed, one notices that K(q2d − q1d) = ur for all K and that q2d → q1d as K → ∞. Noting that all the closed-loop signals remain uniformly bounded for any K and introducing these results into u in (7.140), one sees that u = J q̈r + ur − λ1 s1, which is exactly the controller in (7.68) applied to the system in (6.97), letting q1 ≡ q2 and adding both subsystems. We have therefore constructed a family of controllers that share some fundamental features of the plant dynamics.
A close look at the above developments shows that the control scheme in (7.140)
and (7.141) is based on a two-step procedure:
• The control of the first equation in (6.97) using q2d as a fictitious input. Since q2d
is not the input, this results in an error term K (q̃2 − q̃1 ).
• A specific transformation of the second equation in (6.97) that makes the control
input u(·) explicitly appear. The controller is then designed in such a way that the
closed-loop dynamics possesses a Lyapunov function as in (7.134).
544 7 Passivity-Based Control
The stability proof for the fixed-parameter Lozano and Brogliato scheme mimics that of the Slotine and Li scheme for rigid systems. One may, for instance, choose the quadratic function
V(q̃1, q̃2, s1, s2) = (1/2)s1ᵀM(q1)s1 + (1/2)s2ᵀJs2 + (1/2)(q̃1 − q̃2)ᵀK(q̃1 − q̃2),   (7.145)
instead of the Lyapunov function candidate in (7.134). The function in (7.145) is the
counterpart for flexible-joint systems, of the function in (7.80). Let us compute the
derivative of (7.145) along the trajectories of the error system (7.142):
V̇(q̃1(t), q̃2(t), s1(t), s2(t)) = s1ᵀ(t)M(q1(t))ṡ1(t) + s2ᵀ(t)Jṡ2(t) + (1/2)s1ᵀ(t)Ṁ(q1(t))s1(t) + (q̃1(t) − q̃2(t))ᵀK(q̃˙1(t) − q̃˙2(t))
= s1ᵀ(t)[((1/2)Ṁ(q1(t)) − C(q1(t), q̇1(t)))s1(t) − λ1 s1(t) + K(q̃2(t) − q̃1(t))] + s2ᵀ(t)[−λ1 s2(t) + K(q̃1(t) − q̃2(t))] + (q̃1(t) − q̃2(t))ᵀK(−λ1 q̃1(t) + s1(t) + λ1 q̃2(t) − s2(t))
= −λ1 s1ᵀ(t)s1(t) − λ1 s2ᵀ(t)s2(t) − λ1 (q̃1(t) − q̃2(t))ᵀK(q̃1(t) − q̃2(t)) ≤ 0.   (7.146)
It follows from (7.146) that all closed-loop signals are bounded on [0, +∞), and that s1 ∈ L2, s2 ∈ L2. Using similar arguments as for the first stability proof of the Slotine and Li controller in Sect. 7.3.4.3, one concludes that q̃1(t), q̃2(t), q̃˙1(t) and q̃˙2(t) all tend toward zero as t → +∞. One may also conclude on the exponential convergence of these functions toward zero, noticing that V̇(q̃1, q̃2, s1, s2) ≤ −βV(q̃1, q̃2, s1, s2) for some β > 0.
It is also possible to lead a stability analysis using the Lyapunov function candidate in (7.134). We reiterate that the quadratic function in (7.145) cannot be named a Lyapunov function candidate for the closed-loop system (7.142), since it is neither a radially unbounded nor a positive definite function of the state (q̃1, q̃2, q̃˙1, q̃˙2).
7.6 Flexible-Joint–Rigid-Link: State Feedback 545
As pointed out, one may also view the passivity-based controller in (7.140) as the result of a backstepping procedure that consists of stabilizing first the rigid part of the dynamics, using the signal q2d(t) as a fictitious intermediate input, and then looking at the rest of the dynamics. However, instead of looking at the rest as a whole and considering it as a passive second-order subsystem, one may treat it step by step: this is the core of a popular method known under the name of backstepping. Let us develop it now for the flexible-joint–rigid-link manipulators.
• Step 1: Any type of globally stabilizing controller can be used. Let us still use u r
in (7.141), i.e., let us set
q2d = K −1 u r + q1 , (7.147)
so that we get
M(q1 (t))ṡ1 (t) + C(q1 (t), q̇1 (t))s1 (t) + λ1 s1 (t) = K q̃2 (t). (7.148)
The system in (7.148) with q̃2 ≡ 0 thus defines a globally uniformly asymptotically stable system with Lyapunov function V1(q̃1, s1) = (1/2)s1ᵀM(q1)s1 + λλ1 q̃1ᵀq̃1. The
interconnection term is therefore quite simple (as long as the stiffness matrix is
known!). Let us take its derivative to obtain
q̃˙2(t) = q̇2(t) − q̇2d(t) = q̇2(t) + f1(q1(t), q̇1(t), q2(t)),   (7.149)
where f 1 (·) can be computed using the dynamics (actually q̇2d is a function of
the acceleration q̈1 which can be expressed in terms of q1 , q̇1 and q2 by simply
inverting the first dynamical equation in (6.97)).
• Step 2: Now, if q̇2 were the input, we would set q̇2 = −f1(q1, q̇1, q2) − λ2 q̃2 − Ks1 so that the function V2 = V1 + (1/2)q̃2ᵀq̃2 has a negative definite derivative along the
partial closed-loop system in (7.148) and
However, q̇2 is not an input, so that we shall rather define a new error signal as e2 = q̇2 − e2d, with e2d = −f1(q1, q̇1, q2) − λ2 q̃2 − Ks1. One obtains
ė2 (t) = q̈2 (t) − ė2d (t) = q̈2 (t) + f 2 (q1 (t), q̇1 (t), q2 (t), q̇2 (t))
= J −1 (K (q1 (t) − q2 (t)) + u(t)) + f 2 (q1 (t), q̇1 (t), q2 (t), q̇2 (t)).
(7.151)
• Step 3: Since the real control input appears in (7.151), this is the last step. Let us choose
so that we get:
ė2 (t) = −λ3 e2 (t) − q̃2 (t), (7.153)
where the term −q̃2 has been chosen to satisfy the cross-terms cancelation equality
(see Lemma 7.23) when the function V2 is augmented to
V3(q̃1, s1, q̃2, e2) = V2 + (1/2)e2ᵀe2.   (7.154)
Then along the closed-loop trajectories of the system in (7.148) (7.135) (7.153),
one gets
V̇3(q̃1(t), s1(t), q̃2(t), e2(t)) = −λ1 q̃˙1ᵀ(t)q̃˙1(t) − λ2 λ1 q̃1ᵀ(t)q̃1(t) − q̃2ᵀ(t)q̃2(t) − e2ᵀ(t)e2(t),   (7.155)
which shows that this closed-loop system is globally uniformly exponentially stable.
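The step-by-step construction above can be reproduced on a toy strict-feedback chain (this is not the flexible-joint model; the nonlinearity and the gains k1, k2, k3 are purely illustrative), where each step cancels the cross term created by the previous one, exactly as in (7.153)–(7.155):

```python
import numpy as np

# toy strict-feedback system: x1' = x2, x2' = -sin(x1) + x3, x3' = u
k1, k2, k3 = 2.0, 2.0, 2.0

def controller(x1, x2, x3):
    z1 = x1
    a1 = -k1 * x1                               # step 1: virtual control for x2
    z2 = x2 - a1
    a2 = np.sin(x1) - k1 * x2 - z1 - k2 * z2    # step 2: virtual control for x3
    z3 = x3 - a2
    # step 3: real input; da2 is computed along the dynamics, and the -z2 term
    # cancels the cross term in the Lyapunov derivative (cf. (7.153))
    dx1, dx2 = x2, -np.sin(x1) + x3
    da2 = np.cos(x1) * dx1 - k1 * dx2 - dx1 - k2 * (dx2 + k1 * dx1)
    return da2 - z2 - k3 * z3

x = np.array([1.0, -0.5, 0.3])
dt = 1e-3
for _ in range(20000):   # 20 s of Euler integration
    u = controller(*x)
    dx = np.array([x[1], -np.sin(x[0]) + x[2], u])
    x = x + dt * dx

print(np.linalg.norm(x))   # close to zero
```

With V3 = (z1² + z2² + z3²)/2 one gets V̇3 = −k1z1² − k2z2² − k3z3², the toy analogue of (7.155); note also that, as stressed above, the error variable z3 is not the derivative of any configuration variable.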
It is noteworthy that e2 is not the time derivative of q2 . Therefore, the backstepping
method hinges upon a state variable transformation which actually depends on the
system dynamics in the preceding steps.
Remark 7.50 • The control law in (7.152) can be computed from the definition of
q2d in (7.147), and q̇2d as well as q̈2d are to be calculated using the dynamics
to express the acceleration q̈1 and the jerk q1(3) , as functions of positions and
velocities only (take the first dynamical equation in (6.97) and invert it to get the
acceleration. Differentiate it again and introduce the expression obtained for the
acceleration to express the jerk). Clearly, u(·) is a complicated nonlinear function
of the state, but it is a static state feedback. This apparent complexity is shared by
all the nonlinear controllers described in Sect. 7.6. Notice, however, that it is only
a matter of additions and multiplications, nothing else!
• We noticed in Remark 7.49 that the passivity-based controller tends toward the
Slotine and Li input, when the joint stiffness tends to infinity. This is no longer the
case with the backstepping controller derived here. Even more, after some manip-
ulations, it can be shown [23] that the controller in (7.152) can be equivalently
rewritten as
u = J [q̈2d − (λ2 + λ3 )q̃˙2 − (1 + λ2 λ3 )q̃2 − K (ṡ1 + s1 )]
(7.156)
q2d = K −1 u r + q1 ,
where it immediately appears that the term K (ṡ1 + s1 ) is not bounded as K grows
without bound. Here comes into play the design “flexibility” of the backstepping
method: let us modify the function V2 above to V2 = V1 + ½ q̃2ᵀ K q̃2. Then in step
2, it is sufficient to choose q̇2 = −f1(q1, q̇1, q2) − λ2 q̃2 − s1, so that the final input
becomes
u = J [q̈2d − (λ2 + λ3 )q̃˙2 − (1 + λ2 λ3 )q̃2 − (ṡ1 + s1 )]
(7.157)
q2d = K −1 u r + q1 .
Such a modification may appear at first sight quite innocent, easy to do, and very
slight: it is not! The experimental results presented in Chap. 9 demonstrate it.
Actually, the term K (ṡ1 + s1 ) introduces a high gain in the loop that may have
disastrous effects. This may be seen through simulations, see [23]. It is noteworthy
that even with quite flexible systems (some of the reported experiments were carried
out with a system whose stiffness is k = 3.5 Nm/rad) this term makes the control law
in (7.152) behave less satisfactorily than the one in (7.157). More details can be
found in Chap. 9.
• This recursive design method applies to all systems that possess a triangular struc-
ture [99]. See [100] for a survey of backstepping methods for flexible-joint manip-
ulators.
• Compare (7.156) and (7.157) to (7.140). Although these controllers have the same
degree of complexity and can be considered similar, they differ significantly, as
explained above. For instance, in (7.140), one has K (q2d − q1d ) =
u r while in (7.156) and (7.157), K (q2d − q1d ) = u r + q̃1 .
[Fig. 7.7: negative feedback interconnection of the blocks (H21) and (H22) through the gains −K, with signals u2, y2, u21, y21, u22, y22]
Then, the closed-loop system can be viewed as the negative feedback interconnec-
tion of the block (H 1) with u 1 = u 11 + y12 = K q̃2 , y1 = y11 , with the block (H 2)
with input −K −1 u 2 = s1 = y1 and output −K y2 = −K q̃2 = −u 1 . This is depicted
in Fig. 7.7.
Remark 7.51 The backstepping procedure also yields a closed-loop system that can
be analyzed through the Passivity Theorem. However, the major difference with the
passivity-based method is that the block (H 2) is not related to any physically relevant
energy term. In a sense, this is similar to what one would get by linearizing the
rigid-joint–rigid-link dynamics, applying a new linear feedback so as to impose some
second-order linear dynamics which may define an “artificial” passive system.
7.7 Flexible-Joint–Rigid-Link: Output Feedback

7.7.1 PD Control
We have seen in Sect. 7.3.1 that a PD controller globally asymptotically stabilizes
rigid-joint–rigid-link manipulators. It is a combination of passivity and detectability
properties that makes such a result hold: the former is a guide for the choice of
a Lyapunov function, while the latter allows the Krasovskii–LaSalle’s invariance
principle to apply. More precisely, the OSP property is crucial, because OSP together
with ZSD of a system, imply its asymptotic stability in the sense of Lyapunov (see
Corollary 5.27). Let us consider the dynamics in (6.97) and the following controller:

u(t) = −λ1 q̇2(t) − λ2 (q2(t) − qd). (7.160)
Let us proceed as for the rigid-joint–rigid-link case, i.e., let us first “guess” a Lyapunov
function candidate from the available storage function, and then show that the
Passivity Theorem applies equally well.
Similarly as for the rigid-joint–rigid-link case, one may guess that a PD controller
alone will not enable one to stabilize any fixed point. The closed-loop fixed point is
given by
g(q1 ) = K (q2 − q1 )
(7.161)
λ2 (q2 − qd ) = K (q1 − q2 ),
and we may assume for simplicity that this set of nonlinear equations (which are not
in general algebraic but transcendental) possesses a unique root (q1 , q2 ) = (q10 , q20 ).
We aim at showing the stability of this point. To compute the available storage of
the closed-loop system in (7.160), we consider a fictitious input u(·) in the second
dynamical equation, while the output is taken as q̃˙2 . Then we obtain the following:
Va(q̃1, q̇1, q̃2, q̇2) = sup_{u: (0, q̃1(0), q̇1(0), q̃2(0), q̇2(0)) →} − ∫₀ᵗ q̇2ᵀ(s) u(s) ds
= sup_{u: (0, q̃1(0), q̇1(0), q̃2(0), q̇2(0)) →} − ∫₀ᵗ q̇2ᵀ [J q̈2 + K(q2 − q1) + λ1 q̇2 + λ2 q̃2] ds
= ½ q̇1(0)ᵀ M(q1(0)) q̇1(0) + Ug(q1(0)) + ½ q̇2(0)ᵀ J q̇2(0)
+ ½ (q2(0) − q1(0))ᵀ K (q2(0) − q1(0)) + ½ λ2 q̃2(0)ᵀ q̃2(0), (7.162)
where q̃i = qi − qi0 , i = 1, 2. Now the supply rate satisfies w(0, q̇2 ) ≤ 0 for all q̇2 ,
and obviously (q̃1 , q̇1 , q̃2 , q̇2 ) = (0, 0, 0, 0) is a strict (global) minimum of Va (·)
in (7.162), provided Ug (q1 ) has a strict minimum at q10 . Notice that q̃2 = 0 ⇒
(q1 − q2 ) = 0 ⇒ g(q1 ) = 0 so that q1 = q10 is a critical point for Ug (q1 ) (that we
might assume to be strictly globally convex, but this is only sufficient). Hence from
Lemmae 5.23 and 4.8, one deduces that the closed-loop system in (7.160) is Lyapunov
stable. The closed-loop system may also be decomposed as the negative feedback
interconnection of two passive blocks,
where the first block has the dynamics J q̈2 (t) + λ1 q̇2 (t) + λ2 (q2 (t) − qd ) =
K (q1 (t) − q2 (t)), while the second one has the dynamics M(q1 (t))q̈1 (t) + C(q1 (t),
q̇1(t)) q̇1(t) + g(q1(t)) = K(q2(t) − q1(t)). Computing the inner product ⟨u, y⟩ along
each block's dynamics, one deduces that the first block is OSP (actually, if we added Rayleigh
dissipation in the first dynamics, the second block would not be OSP with the pro-
posed decomposition). Each block possesses its own storage functions which are
Lyapunov functions for them. The concatenation of these two Lyapunov functions
forms the available storage in (7.162). Let us now consider the overall system with
input u = u 1 + y2 and output y = y1 . Setting u ≡ y ≡ 0 implies q̃2 ≡ 0 and q̇1 → 0,
q̃1 → 0 asymptotically. The system is ZSD. Hence by Lemmae 5.23 and 4.8, its fixed
point is globally asymptotically Lyapunov stable.
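The discussion can be illustrated numerically on a single flexible joint with a motor-side PD law u = −λ1 q̇2 − λ2(q2 − qd) (scalar model and all parameter values are assumptions made for illustration). The closed loop converges to the root of the fixed-point equations (7.161), which differs from qd because gravity is not compensated, and the storage-like function, cf. (7.162), is nonincreasing:

```python
# PD control (motor-side feedback only) of a single flexible joint:
#   m*q1dd + a*sin(q1) = K*(q2 - q1),   J*q2dd = K*(q1 - q2) + u.
import numpy as np

m, J, K, a = 1.0, 0.5, 30.0, 9.8
l1, l2, qd = 5.0, 20.0, 0.5            # PD gains, desired position (assumed values)

# Fixed point, cf. (7.161): a*sin(q10) = K*(q20 - q10), l2*(q20 - qd) = K*(q10 - q20).
# Eliminating q20 gives a*sin(q1) - (K*l2/(K+l2))*(qd - q1) = 0; solve by bisection.
def F(q1):
    return a*np.sin(q1) - (K*l2/(K + l2))*(qd - q1)

lo, hi = 0.0, qd                       # F(0) < 0 < F(qd) for these values
for _ in range(80):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0 else (lo, mid)
q10 = 0.5*(lo + hi)
q20 = (l2*qd + K*q10)/(l2 + K)

def f(x):
    q1, dq1, q2, dq2 = x
    u = -l1*dq2 - l2*(q2 - qd)
    return np.array([dq1, (K*(q2 - q1) - a*np.sin(q1))/m,
                     dq2, (K*(q1 - q2) + u)/J])

def V(x):   # closed-loop storage (up to a constant); dV/dt = -l1*dq2**2 <= 0
    q1, dq1, q2, dq2 = x
    return (0.5*m*dq1**2 + a*(1 - np.cos(q1)) + 0.5*J*dq2**2
            + 0.5*K*(q2 - q1)**2 + 0.5*l2*(q2 - qd)**2)

x, dt = np.zeros(4), 1e-3
Vs = [V(x)]
for _ in range(60_000):                # 60 s of RK4
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    Vs.append(V(x))
```

The simulation confirms the Krasovskii–LaSalle argument: the trajectory settles at (q10, q20), not at qd.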
Remark 7.52 (Collocation) The collocation of the sensors and the actuators is an
important feature for closed-loop stability. It is clear here that if the PD control is
changed to
u(t) = −λ1 q̃˙1 (t) − λ2 q̃1 (t), (7.165)
then the above analysis no longer holds. It can even be shown that there are some
gains for which the closed-loop system is unstable [101]. One choice for the location
of the sensors may be guided by the passivity property between their output and the
actuator torque (in case the actuator dynamics is neglected in the design model).
A position feedback controller similar to the one in Sect. 7.4.1 can be derived for
flexible-joint–rigid-link manipulators [102]. It may be seen as a PD controller with
the velocity feedback replaced by an observer feedback. It is given by
u(t) = g(qd) − λ1 q̃2(t) − (1/λ2)(q̃2(t) − z(t))
ż(t) = λ3 (q̃2(t) − z(t)), (7.166)
with q̃2 = q2 − qd + K −1 g(qd ), and qd is the desired position for q1 . The analysis is
quite close to the one done for the rigid-joint–rigid-link case. Due to the autonomy of
the closed-loop (qd is constant) Corollary 7.25 is likely to apply. The stability proof
is based on the following global Lyapunov function:
V(q̃1, q̇1, q̃2, q̇2) = (λ2/2) q̇1ᵀ M(q1) q̇1 + ½ q̇2ᵀ J q̇2 + ½ q̃1ᵀ K q̃1
+ ½ q̃2ᵀ (K + λ1 In) q̃2 − (λ2/2) q̃1ᵀ K q̃2 + ½ (q̃2 − z)ᵀ (q̃2 − z). (7.167)
Compare with V(·) = V1(·) + V2(·) in (7.99) and (7.100): the structure of V(·) in
(7.167) is quite similar. It is a positive definite function provided that K + (dg/dq)(qd) ≻ 0
and λ1 In + K − K (K + (dg/dq)(qd))⁻¹ K ≻ 0, for all qd. This requires that K and λ1
are sufficiently large. The decomposition into two subsystems as in (7.98) can be
performed, choosing x2 = q̃2 − z and x1ᵀ = (q̃1ᵀ, q̇1ᵀ, q̃2ᵀ, q̇2ᵀ) = (x11ᵀ, x12ᵀ, x13ᵀ, x14ᵀ).
The closed-loop scheme is given by
ẋ11(t) = x12(t)
ẋ12(t) = −M(x11(t) + qd)⁻¹ (C(x11(t) + qd, x12(t)) x12(t) + K(x11(t) − x13(t))
+ g(x11(t) + qd) − g(qd))
ẋ13(t) = x14(t)
ẋ14(t) = J⁻¹ (K(x11(t) − x13(t)) − g(qd) − λ1 x13(t) − (1/λ2) x2(t))
ẋ2(t) = −λ3 x2(t) + x14(t). (7.168)
Define h2(x2) = (1/λ2) x2 and h1(x1) = x14. It follows that the cross-terms cancelation
equality is satisfied since (∂V/∂x1)ᵀ G1 h2 = −x14ᵀ x2 = −(∂V/∂x2)ᵀ G2 h1. Indeed one may
calculate that G 1T = (0, 0, 0, J −1 ) ∈ Rn×4n whereas G 2 = In . Hence once again
Corollary 7.25 applies and the closed-loop system can be interpreted via the Pas-
sivity Theorem.
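A sketch in the spirit of the controller (7.166) for a single flexible joint follows; the motor reference q2eq = qd + K⁻¹g(qd), the sign conventions and all gains are assumptions made for illustration. Note that no velocity measurement enters the feedback, only the filter state z:

```python
# Position + "dirty derivative" observer feedback for a single flexible joint.
import numpy as np

m, J, K, a = 1.0, 0.5, 50.0, 9.8
l1, l2, l3, qd = 20.0, 0.1, 20.0, 0.6    # gains of a (7.166)-type law (assumed)
q2eq = qd + a*np.sin(qd)/K               # motor equilibrium: g(qd) = K*(q2eq - qd)

def f(x):
    q1, dq1, q2, dq2, z = x
    qt2 = q2 - q2eq
    u = a*np.sin(qd) - l1*qt2 - (qt2 - z)/l2   # no velocity measurement used
    return np.array([dq1, (K*(q2 - q1) - a*np.sin(q1))/m,
                     dq2, (K*(q1 - q2) + u)/J,
                     l3*(qt2 - z)])

x, dt = np.zeros(5), 1e-3
for _ in range(150_000):                 # 150 s of RK4
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
```

The filter term (q̃2 − z) = s/(s + λ3) applied to q̃2 injects damping through an output strictly passive block, which is why the scheme stabilizes without any velocity sensor.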
Remark 7.53 • A result has been presented in [67], which allows one to recast the
dynamic position feedback controllers presented in this section and in Sect. 7.4,
into the same general framework. It is based on passifiability and detectability
properties. The interpretation of the P + observer schemes in Sects. 7.4.1 and 7.7.2
via Corollary 7.25 is however original.
7.8 Including Actuator Dynamics

We have seen in Sect. 6.6 that the available storage of the interconnection between
the rigid-joint–rigid-link manipulator model and the armature-controlled DC motor
is given by
Va(q, q̇, I) = ½ Iᵀ L I + ½ q̇ᵀ M(q) q̇ + Ug(q). (7.169)
Motivated by the method employed for the design of stable controllers for rigid-joint–
rigid-link and flexible-joint–rigid-link manipulators, let us consider the following
positive definite function:
V(q̃, s, Ĩ) = ½ Ĩᵀ L Ĩ + ½ sᵀ M(q) s + λ λ1 q̃ᵀ q̃, (7.170)
where s = q̃˙ + λq̃. Let us consider the dynamics in (6.120) which we recall here for
convenience:
M(q(t)) q̈(t) + C(q(t), q̇(t)) q̇(t) + g(q(t)) = τ(t) = Kt I(t)
R I(t) + L (dI/dt)(t) + Kt q̇(t) = u(t). (7.171)
Let us set
Id = Kt⁻¹ (M(q) q̈r + C(q, q̇) q̇r + g(q) − λ1 s), (7.172)
u = R I − kv q̇ + L −1 I˙d − L −1 K t s − I˜ (7.174)
Taking the derivative of V (q̃, s, I˜) in (7.170) along closed-loop trajectories in (7.173)
and (7.175) one gets:
V̇(q̃(t), s(t), Ĩ(t)) = −Ĩᵀ(t) L Ĩ(t) − λ1 q̃˙ᵀ(t) q̃˙(t) − λ² λ1 q̃ᵀ(t) q̃(t), (7.176)
which shows that the closed-loop fixed point (q̃, s, I˜) = (0, 0, 0) is globally asymp-
totically uniformly stable in the sense of Lyapunov.
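The construction can be sketched for a one-link arm driven by an armature-controlled DC motor. The scalar control below is an assumed simplified variant of (7.172)–(7.174), chosen so that a function of the form (7.170) decreases along the closed loop; all parameter values are invented:

```python
# One-link arm + armature-controlled DC motor, current-loop design sketch:
#   m*qdd + a*sin(q) = Kt*I,   L*dI/dt + R*I + Kt*dq = u.
import numpy as np

m, a = 1.0, 9.8                        # link inertia and gravity coefficient
R, L, Kt = 1.0, 0.5, 2.0               # armature resistance, inductance, torque const.
lam, l1, kI, qd = 2.0, 4.0, 5.0, 1.0   # design gains, constant reference (assumed)

def Id_fun(q, dq):                     # desired current, cf. (7.172) (scalar form)
    s = dq + lam*(q - qd)
    return (-m*lam*dq + a*np.sin(q) - l1*s)/Kt

def f(x):
    q, dq, I = x
    s   = dq + lam*(q - qd)
    ddq = (Kt*I - a*np.sin(q))/m
    dId = (-m*lam*ddq + a*np.cos(q)*dq - l1*(ddq + lam*dq))/Kt
    It  = I - Id_fun(q, dq)
    u   = R*I + Kt*dq + L*dId - Kt*s - kI*It   # current-loop input (assumed form)
    return np.array([dq, ddq, (u - R*I - Kt*dq)/L])

def V(x):                              # cf. (7.170), scalar version
    q, dq, I = x
    s, It = dq + lam*(q - qd), I - Id_fun(q, dq)
    return 0.5*L*It**2 + 0.5*m*s**2 + lam*l1*(q - qd)**2

x, dt = np.zeros(3), 1e-3
Vs = [V(x)]
for _ in range(30_000):                # 30 s of RK4
    k1 = f(x); k2 = f(x + dt/2*k1); k3 = f(x + dt/2*k2); k4 = f(x + dt*k3)
    x = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    Vs.append(V(x))
```

The cross term Kt s Ĩ produced by the mechanical part is cancelled by the −Kt s term in the current loop, exactly the nested cancelation pattern of Remark 7.55.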
Remark 7.54 (Regulation of cascade systems) Consider the system in (7.171) with
Rayleigh dissipation in the manipulator dynamics. Let us write the second subsystem
in (7.171) as
İ(t) = −L⁻¹ R I(t) − L⁻¹ Kt q̇(t) + L⁻¹ u(t). (7.177)
Thus, the unforced system (i.e., take u = 0) has a globally asymptotically stable fixed
point (in the sense of Lyapunov).
A similar analysis can be carried out for the field-controlled DC motor case. The dissipativity
properties of the driven and the driving subsystems allow the designer to
construct a globally stabilizing feedback law.
Remark 7.55 (Nested passive structure) The computation of V̇ (·) relies on a cross-
terms cancelation, as required in Lemma 7.23 and Corollary 7.25. Thus, if we had
started from the a priori knowledge of the function V (·), we could have deduced that
the closed-loop system can be analyzed as the negative feedback interconnection of
two passive blocks, one with input u 1 = K t I˜ and output y1 = s and dynamics in
7.9 Constrained Mechanical Systems

In real robotic tasks, the manipulators seldom evolve in a space free of obstacles.
A general task may be thought of as involving free motion as well as constrained
motion phases, and the transition between them (activation and/or deactivation of
constraints). In this section, we shall focus on the case when the system is assumed
to be in a permanent contact with some environment. In other words, the constraint
between the controlled system and the obstacle is supposed to be bilateral. In what
follows, we assume that the potential energies of the controlled system Ug(z) and of
the passive environment Uge(z1) each have a unique strict minimum, and to simplify
further that they are positive (i.e., they have been chosen so).
Before going on with particular environment dynamics, let us analyze the regulation
problem for the system in (6.155). To this end, let us define the PD control

τ(t) = −λ1 ż(t) − λ2 z̃(t), (7.181)
where z̃ = z(t) − z d , z d a constant signal. Since we have assumed that the constraints
are bilateral, we do not have to restrict z d to a particular domain of the state space (i.e.,
we do not care about the sign of the interaction force). Let us “invent” a Lyapunov
function candidate by mimicking the available storage in (6.156), i.e.,
V(z̃, ż, z1) = ½ żᵀ M(z) ż + ½ ż1ᵀ Me(z1) ż1 + ½ λ2 z̃ᵀ z̃ + Ug(z) + Uge(z1) + ½ z1ᵀ Ke z1. (7.182)
Instead of computing the derivative of this function along the closed-loop system
(6.155) and (7.181), let us decompose the overall system into two blocks. The
first block contains the controlled subsystem dynamics, and has input u1 = Fz = (λz, 0)ᵀ
and output y1 = ż. The second block has the dynamics of the environment, with input
u2 = −λz and output y2 = ż1. These two subsystems are passive since
⟨u1, y1⟩t = ∫₀ᵗ żᵀ [M̄(z) z̈ + C̄(z, ż) ż + ḡ(z) + λ2 z̃ + λ1 ż] ds
≥ −½ ż(0)ᵀ M̄(z(0)) ż(0) − Ug(z(0)) − ½ λ2 z̃(0)ᵀ z̃(0), (7.183)
and
⟨u2, y2⟩t = ∫₀ᵗ ż1ᵀ [Me(z1) z̈1 + Ce(z1, ż1) ż1 + ∂Re/∂ż1 + Ke z1 + ge(z1)] ds
≥ −½ ż1(0)ᵀ Me(z1(0)) ż1(0) − ½ z1(0)ᵀ Ke z1(0) − Uge(z1(0)). (7.184)
Now, the inputs and outputs have been properly chosen so that the two subsys-
tems are already in the required form for the application of the Passivity Theorem.
Notice that they are both controllable and ZSD from the chosen inputs and outputs.
Therefore, the storage functions that appear in the right-hand sides of (7.183) and
(7.184) are Lyapunov functions (see Lemmae 5.23 and 4.8) and their concatenation
We may assume that this equation has only one root z = z i , so that the fixed point
(z, ż) = (z i , 0) is globally asymptotically stable.
Remark 7.57 It is noteworthy that this interpretation works well because the inter-
connection between the two subsystems satisfies Newton’s principle of mutual
actions. The open-loop system is, therefore, “ready” for a decomposition through
the Passivity Theorem.
Remark 7.58 Let us note that there is no measurement of the environment state in
(7.181). The coordinate change presented in Sect. 6.7.2 just allows one to express
the generalized coordinates for the controlled subsystem in a frame that coincides
with a “natural” frame associated with the obstacle. It is clear however that the
transformation relies on the exact knowledge of the obstacle geometry.
The next step, which consists of designing a passivity-based nonlinear controller guaranteeing
some tracking properties in closed loop, has been performed in [105]. It has
been extended in [106] when the geometry of the obstacle surface is unknown (it
depends on some unknown parameters) and has to be identified (then an adaptive
version is needed). Further works using closed-loop passivity may be found in [107,
108].
Let us now analyze the case when Me (z 1 )z̈ 1 = 0, and the contact stiffness K e and
damping Re (ż 1 ) tend to infinity, in which case the controlled subsystem is subject to
a bilateral holonomic constraint φ(q) = 0.10 In the transformed coordinates (z 1 , z 2 )
the dynamics is given in (6.150), see Sect. 6.7.1. We saw that the open-loop properties
of the unforced system carry over from the free-motion to the reduced constrained-motion
systems. Similarly, it is clear that any feedback controller that applies to
the dynamics in (6.90) applies equally well to the reduced order dynamics (z 2 , ż 2 )
in (6.150). The real problem now (which has important practical consequences) is
10 Actually, the way these coefficients tend to infinity is important to pass from the compliant case to
the rigid body limit. This is analyzed for instance in [109] through a singular perturbation approach.
to design a controller such that the contact force tracks some desired signal. Let
us investigate the extension of the Slotine and Li scheme in this framework. The
controller in (7.68) is slightly transformed into
τ̄1 = M̄12 z̈2r + C̄12(z2, ż2) ż2r + ḡ1 − λ2 λd
τ̄2 = M̄22 z̈2r + C̄22(z2, ż2) ż2r + ḡ2 − λ1 s2, (7.186)
where all the terms keep the same definition as for (7.68), and λd is some desired value
for the contact force λz1. The closed-loop system is therefore given by
M̄12(z2(t)) ṡ2(t) + C̄12(z2(t), ż2(t)) s2(t) = λ2 (λz1(t) − λd)
M̄22(z2(t)) ṡ2(t) + C̄22(z2(t), ż2(t)) s2(t) + λ1 s2(t) = 0 (7.187)
z̃˙2(t) = −λ z̃2(t) + s2(t).
The dissipativity properties of the free-motion closed-loop system are similar to those
of (7.69) and (7.70). Notice that due to the asymptotic stability properties of the fixed
point (z̃2, s2) = (0, 0), one gets λz1(t) → λd(t) as t → +∞.
In practice, one often has to face unilateral or inequality constraints, where the
equality in (6.144) is replaced by the inequality φ(q) ≥ 0, φ(q) ∈ Rm , which mod-
els the fact that contact may be lost or established with obstacles (in (6.169) one
has ∇φ(q)λz1 ∈ ∂ψΦ (q), which just means that the contact force is normal to the
admissible domain Φ boundary, as long as we deal with so-called perfect constraints,
i.e., without any tangential effects).11 Associated with the inequality constraint, is
a Lagrange multiplier λz1 ∈ Rm , which represents the contact forces between the
bodies in the system. This yields nonsmooth mechanical systems containing impact
(or velocity reinitializations) and so-called complementarity relationships between
λz1 and z 1 , of the form
λz1 ≥ 0, z1 ≥ 0, λz1ᵀ z1 = 0. (7.188)
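Complementarity conditions of this form can be checked on small examples by solving a linear complementarity problem (LCP). The enumeration below is a naive illustrative solver (exponential in the number of contacts m), not a production method such as Lemke's algorithm; the matrices are invented data:

```python
# Solving a small LCP:  find z >= 0 with w = M z + q >= 0 and z^T w = 0,
# by enumerating active sets (fine for small m; illustrative only).
import itertools
import numpy as np

def solve_lcp(M, q, tol=1e-10):
    m = len(q)
    for active in itertools.chain.from_iterable(
            itertools.combinations(range(m), k) for k in range(m + 1)):
        z = np.zeros(m)
        idx = list(active)
        if idx:
            try:
                z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
        w = M @ z + q
        if (z >= -tol).all() and (w >= -tol).all():
            return z, w        # nonnegative pair; z^T w = 0 by construction
    return None

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # two-contact example (invented data)
q = np.array([-1.0, 1.0])
z, w = solve_lcp(M, q)
```

Here the first contact is active (z1 > 0, w1 = 0) while the second is inactive (z2 = 0, w2 > 0), which is exactly the dichotomy expressed by (7.188).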
The objective is to design τ in such a way that the closed-loop system becomes

Mc(q(t)) q̈(t) + Cc(q(t), q̇(t)) q̇(t) + gc(q(t)) = 0, (7.190)
where Mc (q) is a desired kinetic tensor, and gc (q) = ∇Uc (q), where Uc (q) is a
desired potential energy. Let us propose

τ = C(q, q̇) q̇ + g(q) − M(q) Mc⁻¹(q) [Cc(q, q̇) q̇ + gc(q)]. (7.192)
Since M(q) has full rank, one can rewrite (7.192) as (7.190). The fully actuated case
is therefore quite trivial, and the method owes its interest to the underactuated case.
Let us therefore consider

M(q(t)) q̈(t) + C(q(t), q̇(t)) q̇(t) + g(q(t)) = G(q) τ, (7.193)

for some n × m matrix G(q) with rank(G(q)) = m for all q ∈ Rn. There exists a
matrix G⊥(q) such that G⊥(q) G(q) = 0 for all q. Also, Im(G⊥(q)ᵀ) + Im(G(q)) =
Rn, and both subspaces are orthogonal. It is thus equivalent to rewrite (7.193) as
G ⊥ (q){M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q)} = 0
(7.194)
G T (q){M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q)} = G T (q)G(q)τ,
where one notices that Gᵀ(q) G(q) is an m × m invertible matrix. Obviously the same
operation may be applied to the objective system, i.e.,
G ⊥ (q){Mc (q(t))q̈(t) + Cc (q(t), q̇(t))q̇(t) + gc (q)} = 0
(7.195)
G T (q){Mc (q(t))q̈(t) + Cc (q(t), q̇(t))q̇(t) + gc (q)} = 0.
One says that the two systems (7.194) and (7.195) match if they possess the same
solutions for any initial data (q(0), q̇(0)). It is easy to see that by choosing
τ = (Gᵀ(q) G(q))⁻¹ Gᵀ(q) {M(q) Mc⁻¹(q) [−Cc(q, q̇) q̇ − gc(q)] + C(q, q̇) q̇ + g(q)}, (7.196)
one obtains

Gᵀ(q) M(q) Mc⁻¹(q) {Mc(q) q̈ + Cc(q, q̇) q̇ + gc(q)} = 0. (7.197)
It then remains to examine what happens with the rest of the closed-loop dynamics.
Matching between (7.190) and (7.193) occurs, if and only if G⊥(q){Mc(q(t)) q̈(t) +
Cc(q(t), q̇(t)) q̇(t) + gc(q(t))} = 0 holds along the solutions of the closed-loop system
(7.193) and (7.196). In other words, matching occurs if and only if

G⊥(q) {Mc(q) q̈ + Cc(q, q̇) q̇ + gc(q)} = 0 (7.198)

along the closed-loop solutions.
Note that if there is matching, then we can also express the acceleration as

q̈ = −Mc⁻¹(q) [Cc(q, q̇) q̇ + gc(q)], (7.199)

so that

G(q) τ = −M(q) Mc⁻¹(q) [Cc(q, q̇) q̇ + gc(q)] + C(q, q̇) q̇ + g(q), (7.200)

G⊥(q) {−M(q) Mc⁻¹(q) [Cc(q, q̇) q̇ + gc(q)] + C(q, q̇) q̇ + g(q)} = 0. (7.201)
Consequently matching between (7.190) and (7.193) occurs, if and only if (7.201)
holds and τ is as in (7.196).
Remark 7.59 All these developments may be carried out within a differential geometry
context [27]. This does not help in understanding the underlying simplicity of the
method (on the contrary it may obscure it for readers not familiar with geometrical
tools). However, it highlights the fact that the equality in (7.201) is in fact a partial
differential equation for Mc (q) and Uc (q). Consequently, the controlled Lagrangian
method boils down to solving a PDE.
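In the fully actuated case the shaping input can be verified numerically. The sketch below uses constant inertia matrices and invented data: it applies τ = g(q) − M Mc⁻¹ gc(q) (the constant-inertia specialization, so C = Cc = 0) and checks that the closed loop reproduces the target system and conserves the target energy Ec = ½ q̇ᵀ Mc q̇ + Uc(q):

```python
# Fully actuated kinetic/potential shaping: with constant M, Mc and
#   M qdd + g(q) = tau,   tau = g(q) - M Mc^{-1} gc(q),
# the closed loop behaves exactly as the target system Mc qdd + gc(q) = 0.
import numpy as np

M     = np.array([[2.0, 0.3], [0.3, 1.0]])   # plant inertia (constant, invented)
Minv  = np.linalg.inv(M)
Mc    = np.array([[1.0, 0.2], [0.2, 0.5]])   # desired inertia
Mcinv = np.linalg.inv(Mc)

def g(q):   return np.array([9.8*np.sin(q[0]), 4.0*np.sin(q[1])])  # plant gravity
def gc(q):  return np.array([3.0*q[0], 2.0*np.sin(q[1])])          # = grad Uc
def Uc(q):  return 1.5*q[0]**2 + 2.0*(1 - np.cos(q[1]))

def f(x):                      # closed-loop plant with the shaping input tau
    q, dq = x[:2], x[2:]
    tau = g(q) - M @ (Mcinv @ gc(q))
    return np.concatenate([dq, Minv @ (tau - g(q))])

def ft(x):                     # target system, integrated independently
    q, dq = x[:2], x[2:]
    return np.concatenate([dq, -(Mcinv @ gc(q))])

def Ec(x):
    q, dq = x[:2], x[2:]
    return 0.5*dq @ Mc @ dq + Uc(q)

x = xt = np.array([0.4, -0.3, 0.0, 0.2])
dt, E0 = 1e-3, Ec(x)
for _ in range(5000):          # 5 s of RK4 on both systems
    k1 = f(x);  k2 = f(x + dt/2*k1);  k3 = f(x + dt/2*k2);  k4 = f(x + dt*k3)
    x  = x + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    k1 = ft(xt); k2 = ft(xt + dt/2*k1); k3 = ft(xt + dt/2*k2); k4 = ft(xt + dt*k3)
    xt = xt + dt/6*(k1 + 2*k2 + 2*k3 + k4)
```

Matching is trivial here precisely because the system is fully actuated; in the underactuated case the analogous check requires (7.201) to hold, i.e., the PDE mentioned in Remark 7.59.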
7.11 Stable State Observers for Set-Valued Lur’e Systems

Let us consider an extension of the set-valued systems in (3.242), with inputs and
outputs, as follows:

ẋ(t) = A x(t) − B yL(t) + G u(t)
y(t) = C x(t) (7.202a)
z(t) = H x(t)

yL(t) ∈ ρ(y(t)). (7.202b)
Two structures for state observer are proposed. The first observer (“basic” observer
scheme) for the system (7.202a) (7.202b) has the following form
x̂˙(t) = (A − L H) x̂(t) − B ŷL(t) + L z(t) + G u(t)
ŷL(t) ∈ ρ(C x̂(t)), (7.203)
where L ∈ Rn×l is the observer gain, and C x̂(0) ∈ dom(ρ). The second observer
(“extended” observer scheme) has the following form:
x̂˙(t) = (A − L H) x̂(t) − B ŷL(t) + L z(t) + G u(t)
ŷL(t) ∈ ρ((C − K H) x̂(t) + K z(t)), (7.204)
where L ∈ Rn×l and K ∈ Rm×l are the observer gains and x̂(0) is such that (C −
K H )x̂(0) + K z(0) ∈ dom(ρ). The basic observer is a special case of the extended
observer with K = 0.
Problem 7.60 ([115]) The problem of observer design consists in finding the gain
L for the basic observer or the gains L and K for the extended observer, such that
• Observer well posedness: for each solution x(·) to the observed plant (7.202a)
(7.202b), there exists a unique solution x̂(·) to the observer dynamics on [0, ∞),
and
• Asymptotic state recovery: x̂(·) asymptotically recovers x(·), i.e., limt→∞ [x̂(t) −
x(t)] = 0.
Let us prove that if the gains L and K are chosen such that the triple (A − L H, B, C)
(respectively (A − L H, B, C − K H )) is strictly passive, then the obtained observer
(7.203) ((7.204), respectively) will satisfy the requirements mentioned in Problem
7.60. To compute the gains L and K such that (A − L H, B, C − K H ) is strictly
passive, one can solve the matrix (in)equalities:
(A − L H)ᵀ P + P (A − L H) = −Q ≺ 0
P = Pᵀ ≻ 0 (7.205)
Bᵀ P = C − K H.
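The conditions (7.205) can be checked numerically on toy data. The sketch below takes H = In for simplicity (so that the gain choice can be done by hand; all matrices are invented), solves the Lyapunov equation via Kronecker products, and recovers a gain K such that BᵀP = C − KH:

```python
# Checking the passivity conditions (7.205) on toy data (all matrices invented).
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
H = np.eye(2)                        # full-state measurement, for simplicity only
L = A + np.diag([1.0, 2.0])          # makes A - L H = diag(-1, -2), Hurwitz
Acl = A - L @ H
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 1.0]])

# Solve the Lyapunov equation Acl' P + P Acl = -Q for Q = I (row-major vec).
n = A.shape[0]
Q = np.eye(n)
lhs = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
P = np.linalg.solve(lhs, -Q.reshape(-1)).reshape(n, n)

K = C - B.T @ P                      # then B'P = C - K H holds by construction
```

In general (H not invertible, structural constraints on K) these conditions form an LMI feasibility problem, which is how the gains would be computed in practice.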
Lemma 7.62 ([115] Time independent ρ(·), basic observer) Consider the system
(7.202a) (7.202b) and the basic observer (7.203). We assume that the triple (A −
L H, B, C) is strictly passive, and Assumption 26 holds. Let x(·) be a locally AC
solution to (7.202a) (7.202b), with output trajectory z(·) for some x(0) ∈ dom(ρ ◦
C). Then, the corresponding observer dynamics (7.203) has a unique locally AC
solution on [0, ∞) for any initial state x̂(0) ∈ dom(ρ ◦ C).
Proof Since the triple (A − L H, B, C) is strictly passive and B has full column
rank, there exist matrices P = Pᵀ ≻ 0 and Q = Qᵀ ≻ 0 that satisfy the Lur’e equations
(7.205) with K = 0. Applying the change of variables:
ξ = R x̂, (7.206)

where R = Rᵀ = P^{1/2}, transforms (7.203) into:
ξ̇ (t) = R(A − L H )R −1 ξ(t) − R B ŷ L (t) + RGu(t) + R Lz(t)
(7.207)
ŷ L (t) ∈ ρ(C R −1 ξ(t)).
where ξ(0) ∈ dom(β). From the strict passivity condition (with K = 0) and the
full column rank property of B, it follows that C = B T P and C R −1 = B T R have
full row rank. Together with the fact that ρ(·) is maximal monotone, we have that
β(·) is maximal monotone as well, using Lemma A.94. From the strict passivity
condition (7.205), it follows that R −1 (A − L H )T R + R(A − L H )R −1 ≺ 0, which
means that the mapping ξ → −R(A − L H )R −1 ξ is monotone by definition. Max-
imality of the mapping ξ → −R(A − L H )R −1 ξ follows from linearity, see [116,
Proposition 2.3]. Hence, the mapping ξ → −R(A − L H )R −1 ξ + β(ξ ) is maximal
monotone, as the sum of maximal monotone mappings is maximal monotone again
[117, Corollary 12.44]. Since the signal u(·) is locally AC, and z(·) is locally AC due
to Assumption 26, existence and uniqueness of locally AC solutions to (7.208) and
(7.203) follow now from a slight extension of Theorem 3.123, including locally AC
exogenous terms.
In the following lemma, we address the question of well posedness of the extended
observer scheme. Since in this case the multivalued mapping in (7.204) is time
dependent, we will consider a particular class of mappings ρ(·), equal to normal
cones to a certain convex closed set. Actually, in this case, it will turn out that the
second and third condition in (7.205), i.e., the existence of a symmetric positive
definite matrix P such that B T P = C − K H suffices to prove well posedness.
Lemma 7.63 ([115] Time independent ρ(·) = N S (·), extended observer) Consider
the system (7.202a) (7.202b) and the extended observer (7.204) with ρ(·) = N S (·),
where the set S ⊂ Rm is assumed to be non-empty, closed and convex. Suppose that
Assumption 26 holds and assume that there exists a matrix P = Pᵀ ≻ 0, such that
B T P = C − K H , and B has full column rank. Let the signal u(·) be locally AC
and let x(·) be a corresponding locally AC solution to (7.202a) (7.202b), with output
trajectory z(·), for some x(0) with C x(0) ∈ S. Then, the corresponding observer
dynamics (7.204) has a unique locally AC solution on [0, ∞) for each x̂(0) with
(C − K H )x̂(0) + K z(0) ∈ S = dom(ρ).
Proof Let us introduce the change of variable (7.206) for (7.204), where as before,
R = Rᵀ = P^{1/2}. In the same way as in the proof of Lemma 7.62, (7.204) is transformed
into:
where ξ(0) = R x̂(0) ∈ S (0). The description (7.210) fits within the so-called per-
turbed sweeping process (see (3.305) for instance), with S (·) satisfying the condi-
tions of Theorem A.99. Since u(·) and z(·) are locally AC, the result follows now
from Theorem A.99.
Theorem 7.64 ([115]) Consider the observed system (7.202a) (7.202b), and either
the basic observer (7.203) or the extended observer (7.204), where (A − L H, B, C)
or (A − L H, B, C − K H ), respectively, is strictly passive with corresponding
matrices P = Pᵀ ≻ 0 and Q = Qᵀ ≻ 0, satisfying (7.205). Assume also that the
additional conditions of Lemma 7.62 or Lemma 7.63, respectively, are satisfied. Let
x(·) be a locally AC solution to (7.202a) (7.202b) for x(0) ∈ dom(ρ ◦ C) and locally
AC input u : [0, +∞) → R p . Then the observer (7.203) (respectively (7.204)), has
for each x̂(0) with C x̂(0) ∈ dom(ρ) or (C − K H )x̂(0) + K z(0) ∈ dom(ρ) = S,
respectively, a unique locally AC solution x̂(·), which exponentially recovers the
state x(·), in the sense that the observation error e(t) ≜ x(t) − x̂(t) satisfies the
exponential decay bound
‖e(t)‖ ≤ √(λmax(P)/λmin(P)) ‖e(0)‖ e^{−(λmin(Q)/(2 λmax(P))) t} (7.211)
for t ∈ R+ .
Proof Using Lemma 7.62 or Lemma 7.63 for the basic and extended observers,
respectively, it follows that for each locally AC solution to the observer plant (7.202a)
(7.202b), the observer also has a locally AC solution x̂(·) provided C x̂(0) ∈ dom(ρ)
or (C − K H )x̂(0) + K z(0) ∈ dom(ρ) = S, respectively. Hence, the observation
error e(·) = x(·) − x̂(·) is also locally AC, and satisfies for the extended observer
(7.204) almost everywhere the error dynamics, obtained by subtracting (7.202a)
(7.202b) and (7.204):
Note that the error dynamics for the basic observer is obtained as a special case
of (7.212) by taking K = 0. We consider now the candidate Lyapunov function
V(e) = ½ eᵀ P e. Since e(·) is locally AC, t → V(e(t)) is also locally AC, and the
derivative V̇(e(t)) exists for almost all t. Hence, V̇(e(t)) satisfies for almost all t:

V̇(e(t)) = eᵀ(t) P ė(t) = −½ eᵀ(t) Q e(t) − eᵀ(t) (C − K H)ᵀ (yL(t) − ŷL(t)). (7.213)

Since yL(t) ∈ ρ(C x(t)) and ŷL(t) ∈ ρ(C x̂(t) + K(z(t) − ẑ(t))), it follows from the
monotonicity of ρ(·) that e T (t)(C − K H )T (y L (t) − ŷ L (t)) ≥ 0. Note that in the
case of the extended observer and thus under the conditions of Lemma 7.63, that
ρ(·) = NS(·) is also monotone. Therefore, V̇(e(t)) ≤ −½ eᵀ(t) Q e(t). Since eᵀ Q e ≥
λmin(Q) eᵀ e ≥ (λmin(Q)/λmax(P)) eᵀ P e = (2 λmin(Q)/λmax(P)) V(e) for all e ∈ Rn, we have that

V̇(e(t)) ≤ −(λmin(Q)/λmax(P)) V(e(t)). (7.214)
This proves the exponential recovery of the state. The bound (7.211) is obtained
by integrating (7.214) and taking the square root of the resulting inequality.
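The bound (7.211) can be verified by simulation. The sketch below reuses the toy data above satisfying (7.205) with K = 0, together with a single-valued maximal monotone nonlinearity ρ(y) = y + y³ (all values invented), and checks the exponential decay of the observation error:

```python
# Numerical check of the decay bound (7.211) for a basic observer.
# Data: A - L H = diag(-1, -2), P = diag(1/2, 1/4), Q = I, so the bound reads
# ||e(t)|| <= sqrt(2) * ||e(0)|| * exp(-t).
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
H = np.eye(2)
L = A + np.diag([1.0, 2.0])
Acl = A - L @ H
B = np.array([[1.0], [0.0]])
G = np.array([[0.0], [1.0]])
P = np.diag([0.5, 0.25])
C = B.T @ P                          # enforces B'P = C (basic observer, K = 0)
rho = lambda y: y + y**3             # maximal monotone (here single valued)

def f(t, x, xhat):
    u = np.array([np.sin(t)])
    dx    = A @ x - B @ rho(C @ x) + G @ u
    dxhat = Acl @ xhat - B @ rho(C @ xhat) + L @ (H @ x) + G @ u
    return dx, dxhat

x, xhat, dt, t = np.array([1.0, -1.0]), np.zeros(2), 1e-3, 0.0
e0 = np.linalg.norm(x - xhat)
rate = 1.0/(2*0.5)                   # lambda_min(Q) / (2 lambda_max(P)) = 1
cond = np.sqrt(0.5/0.25)             # sqrt(lambda_max(P)/lambda_min(P))
ok = True
for _ in range(6000):                # 6 s of RK4 on the pair (x, xhat)
    k1 = f(t, x, xhat)
    k2 = f(t + dt/2, x + dt/2*k1[0], xhat + dt/2*k1[1])
    k3 = f(t + dt/2, x + dt/2*k2[0], xhat + dt/2*k2[1])
    k4 = f(t + dt, x + dt*k3[0], xhat + dt*k3[1])
    x    = x    + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    xhat = xhat + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
    ok = ok and np.linalg.norm(x - xhat) <= cond*e0*np.exp(-rate*t) + 1e-6
```

The monotonicity of ρ(·) makes the cross term in (7.213) nonpositive, which is why the error obeys the bound regardless of the (bounded) input u(·).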
Remark 7.65 (Why extended observers?) Consider (7.202a) (7.202b) with the matri-
ces
A = [1 0; 0 −1], H = (1 0), B = (1 0)ᵀ, C = (0 1). (7.215)
Further Reading: A preliminary version of the above material, taken from [115],
appeared in [119], strongly inspired by [120]. The results in [121–123] followed,
with nice applications in control of deep drilling systems with set-valued fric-
tion. Significant extensions of the above, have been proposed in [37, 124, 125].
Observers for perturbed sweeping processes12 with prox-regular sets, and with AC
and BV solutions (hence possible state jumps), are studied in [37] (relying in part
on the material in Sect. 3.14.5). The proposed observers' state respects the same
constraint as the plant's state does, hence positive plants have positive observers.
The relative degree condition, allowing for a nonzero feedthrough matrix D ≠ 0,
is relaxed in [125], who consider normal-cone right-hand sides, where the condition
met in Proposition 3.62 item 4 is used to extend the I/O constraint P B = Cᵀ
to Ker(D + Dᵀ) ⊆ Ker(P B − Cᵀ) and D ≽ 0. Regulation with output feedback,
using a state observer, is analyzed for such extended sweeping processes (the nor-
mal cone argument is equal to y = C x + DyL ), see also [126] for regulation with
output feedback. The error dynamics is interpreted in terms of a negative feedback
interconnection of passive systems. Set-valued controllers can be designed via
complementarity problems, which guarantee the viability of polytopes. Velocity observers
for unilaterally constrained Lagrangian systems, are studied in [124], taking into
account all modes of motion: free motion, persistently constrained, and impacts.13
The observers take the form of first-order sweeping processes, whose well posedness
is carefully analyzed via time-discretization techniques. Again a Passivity Theorem
interpretation is given in [124]. Related results on observer design concern maximal
monotone differential inclusions and first-order sweeping processes [121, 122, 127–
131], non-convex first-order sweeping process with prox-regular sets [132, 133],
differential inclusions in normal cone to prox-regular set [134], linear complemen-
tarity systems [135], mechanical systems with unilateral contact and impacts [136,
137], vibro-impact systems (see a definition in [41, Sect. 1.3.2]) [138–143].
References
1. Takegaki M, Arimoto S (1981) A new feedback method for dynamic control of manipulators.
ASME J Dyn Syst Meas Control 102:119–125
2. Hsia TC (1986) Adaptive control of robot manipulators: a review. In: Proceedings of IEEE
international conference on robotics and automation, San Francisco, CA, USA, pp 183–189
3. Craig JJ, Hsu P, Sastry SS (1987) Adaptive control of mechanical manipulators. Int J Robot
Res 6(2):16–28
12 Here “perturbed” means “single valued vector field”, in the terminology employed in the math-
ematical literature about the sweeping process. It does not refer to any perturbation as is usual in
Automatic Control.
13 See https://www.youtube.com/channel/UCBAVArzY9eXORe08WHQt5LA for simulations.
4. Spong MW, Ortega R (1990) On adaptive inverse dynamics control of rigid robots. IEEE
Trans Autom Control 35:92–95
5. Amestegui M, Ortega R, Ibarra JM (1987) Adaptive linearizing decoupling robot control: a
comparative study of different parametrizations. In: Proceedings of the 5th Yale workshop on
applications of adaptive systems, New Haven, CT, USA
6. Craig JJ (1988) Adaptive control of mechanical manipulators. Addison Wesley, Reading
7. Slotine JJ, Li W (1988) Adaptive manipulator control: a case study. IEEE Trans Autom Control
33:995–1003
8. Sadegh N, Horowitz R (1987) Stability analysis of adaptive controller for robotic manipulators.
In: Proceedings of IEEE international conference on robotics and automation, Raleigh, USA,
pp 1223–1229
9. Kelly R, Carelli R (1988) Unified approach to adaptive control of robotic manipulators. In:
Proceedings of 27th IEEE conference on decision and control, Austin, TX, USA, pp 1598–
1603
10. Slotine JJE, Li W (1989) Composite adaptive control of robot manipulators. Automatica
25:509–520
11. Slotine JJE, Li W (1987) On the adaptive control of robot manipulators. Int J Robot Res
6:49–59
12. Sadegh N, Horowitz R (1990) Stability and robustness analysis of a class of adaptive con-
trollers for robotic manipulators. Int J Robot Res 9(3):74–92
13. Sadegh N, Horowitz R, Kao WW, Tomizuka M (1990) A unified approach to design of adaptive and repetitive controllers for robotic manipulators. ASME J Dyn Syst Meas Control 112(4):618–629
14. Horowitz R, Kao WW, Boals M, Sadegh N (1989) Digital implementation of repetitive con-
trollers for robotic manipulators. In: Proceedings of IEEE international conference on robotics
and automation, Phoenix, AZ, USA, pp 1497–1503
15. Messner W, Horowitz R, Kao WW, Boals M (1991) A new adaptive learning rule. IEEE Trans
Autom Control 36(2):188–197
16. Brogliato B, Landau ID, Lozano R (1991) Adaptive motion control of robot manipulators: a
unified approach based on passivity. Int J Robust Nonlinear Control 1(3):187–202
17. Landau ID, Horowitz R (1989) Synthesis of adaptive controllers for robot manipulators using
a passive feedback systems approach. Int J Adapt Control Signal Process 3:23–38
18. Ortega R, Spong M (1989) Adaptive motion control of rigid robots: a tutorial. Automatica
25:877–888
19. Brogliato B (1991) Systèmes Passifs et Commande Adaptative des Manipulateurs. PhD the-
sis, Institut National Polytechnique de Grenoble, Laboratoire d’Automatique de Grenoble,
Grenoble, France. http://www.theses.fr/1991INPG0005. 11 Jan 1991
20. Lozano-Leal R, Brogliato B (1991) Adaptive control of robot manipulators with flexible
joints. In: Proceedings of American control conference, Boston, USA, pp 938–943
21. Lozano R, Brogliato B (1992) Adaptive control of robot manipulators with flexible joints.
IEEE Trans Autom Control 37(2):174–181
22. Brogliato B, Lozano R (1996) Correction to adaptive control of robot manipulators with
flexible joints. IEEE Trans Autom Control 41(6):920–922
23. Brogliato B, Ortega R, Lozano R (1995) Global tracking controllers for flexible-joint manip-
ulators: a comparative study. Automatica 31(7):941–956
24. Ortega R, Espinosa G (1991) A controller design methodology for systems with physical
structures: application to induction motors. In: Proceedings of the 30th IEEE international
conference on decision and control, Brighton, UK, pp 2344–2349
25. Ortega R, Espinosa G (1993) Torque regulation of induction motors. Automatica 29:621–633
26. Bloch AM, Leonard NE, Marsden JE (1997) Stabilization of mechanical systems using controlled Lagrangians. In: Proceedings of 36th IEEE conference on decision and control, San Diego, CA, USA, pp 2356–2361
27. Bloch AM, Leonard NE, Marsden JE (2000) Controlled Lagrangians and the stabilization of mechanical systems I: the first matching theorem. IEEE Trans Autom Control 45(12):2253–2269
28. Ortega R, van der Schaft AJ, Maschke B, Escobar G (2002) Interconnection and damping assignment passivity-based control of port-controlled Hamiltonian systems. Automatica 38(4):585–596
29. Blankenstein G, Ortega R, van der Schaft AJ (2002) The matching conditions of controlled
Lagrangians and interconnections and damping assignment passivity based control. Int J
Control 75(9):645–665
30. Merkin DR (1997) Introduction to the theory of stability. TAM, vol 24. Springer, Berlin
31. Goeleven D, Stavroulakis GE, Salmon G, Panagiotopoulos PD (1997) Solvability theory and
projection method for a class of singular variational inequalities: elastostatic unilateral contact
applications. J Optim Theory Appl 95(2):263–294
32. Facchinei F, Pang JS (2003) Finite-dimensional variational inequalities and complementarity
problems, vol I and II. Operations research. Springer Verlag, New-York
33. Acary V, Brogliato B (2008) Numerical methods for nonsmooth dynamical systems. Lecture
notes in applied and computational mechanics, vol 35. Springer-Verlag, Berlin Heidelberg
34. Monteiro-Marques MDP (1993) Differential inclusions in nonsmooth mechanical problems.
Shocks and dry friction. Progress in nonlinear differential equations and their applications,
Birkhauser, Basel-Boston-Berlin
35. Moreau JJ (1988) Unilateral contact and dry friction in finite freedom dynamics. In: Moreau JJ, Panagiotopoulos PD (eds) Nonsmooth mechanics and applications. CISM courses and lectures, vol 302, pp 1–82. International Centre for Mechanical Sciences, Springer-Verlag
36. Dieudonné J (1969) Eléments d’Analyse, vol 2. Gauthier-Villars
37. Tanwani A, Brogliato B, Prieur C (2014) Stability and observer design for Lur’e systems with
multivalued, nonmonotone, time-varying nonlinearities and state jumps. SIAM J Control
Optim 52(6):3639–3672
38. Haddad WM, Nersesov S, Chellaboina V (2003) Energy-based control for hybrid port-controlled Hamiltonian systems. Automatica 39:1425–1435
39. Haddad WM, Chellaboina V, Hui Q, Nersesov SG (2007) Energy and entropy based stabilization for lossless dynamical systems via hybrid controllers. IEEE Trans Autom Control 52(9):1604–1614
40. Leine RI, van de Wouw N (2008) Stability and convergence of mechanical systems with
unilateral constraints, vol 36. Lecture notes in applied and computational mechanics. Springer
Verlag, Berlin Heidelberg
41. Brogliato B (2016) Nonsmooth mechanics. Models, dynamics and control, 3rd edn. Commu-
nications and control engineering. Springer International Publishing, Switzerland, London.
Erratum/Addendum at https://hal.inria.fr/hal-01331565
42. Haddad WM, Chellaboina V (2001) Dissipativity theory and stability of feedback intercon-
nections for hybrid dynamical systems. Math Probl Eng 7:299–335
43. Haddad WM, Chellaboina V, Hui Q, Nersesov S (2004) Vector dissipativity theory for large-scale impulsive dynamical systems. Math Probl Eng 3:225–262
44. Haddad WM, Chellaboina V, Kablar NA (2001) Nonlinear impulsive dynamical systems. Part I: stability and dissipativity. Int J Control 74(17):1631–1658
45. Haddad WM, Chellaboina V, Nersesov S (2001) On the equivalence between dissipativity and optimality of nonlinear hybrid controllers. Int J Hybrid Syst 1(1):51–66
46. Bressan A, Rampazzo F (1993) On differential systems with quadratic impulses and their
applications to Lagrangian systems. SIAM J Control Optim 31(5):1205–1220
47. Tanwani A, Brogliato B, Prieur C (2015) Stability notions for a class of nonlinear systems
with measure controls. Math Control Signals Syst 27(2):245–275
48. Haddad WM, Chellaboina V, Hui Q, Nersesov SG (2005) Thermodynamic stabilization via energy dissipating hybrid controllers. In: Proceedings of IEEE conference on decision and control–European control conference, Seville, Spain, pp 4879–4884
49. Adly S, Goeleven D (2004) A stability theory for second-order nonsmooth dynamical systems
with application to friction problems. Journal de Mathématiques Pures et Appliquées 83:17–
51
50. Adly S (2006) Attractivity theory for second order nonsmooth dynamical systems with appli-
cation to dry friction. J Math Anal Appl 322(2):1055–1070
51. Yakubovich VA, Leonov GA, Gelig AK (2004) Stability of stationary sets in control systems
with discontinuous nonlinearities, stability, vibration and control of systems, vol 14. World
Scientific
52. Adly S, Attouch H, Cabot A (2003) Finite time stabilization of nonlinear oscillators subject
to dry friction. In: Alart P, Maisonneuve O, Rockafellar RT (eds.) Nonsmooth mechanics and
analysis. Theoretical and numerical advances. Springer Advances in Mechanics and Mathe-
matics, pp 289–304
53. Cabot A (2008) Stabilization of oscillators subject to dry friction: finite time convergence
versus exponential decay results. Trans Am Math Soc 360:103–121
54. Alvarez J, Orlov I, Acho L (2000) An invariance principle for discontinuous dynamic systems with applications to a Coulomb friction oscillator. ASME J Dyn Syst Meas Control 122:687–690
55. Orlov Y (2004) Finite time stability and robust control synthesis of uncertain switched systems.
SIAM J Control Optim 43(4):1253–1271
56. Bisoffi A, Lio MD, Teel AR, Zaccarian L (2018) Global asymptotic stability of a PID control
system with Coulomb friction. IEEE Trans Autom Control 63(8):2654–2661
57. Bastien J, Schatzman M, Lamarque CH (2002) Study of an elastoplastic model with an infinite
number of internal degrees of freedom. Eur J Mech A/Solids 21:199–222
58. Bastien J (2013) Convergence order of implicit Euler numerical scheme for maximal monotone
differential inclusions. Z Angew Math Phys 64:955–966
59. Pota HR, Moylan PJ (1993) Stability of locally dissipative interconnected systems. IEEE
Trans Autom Control 38(2):308–312
60. Andréa-Novel B, Bastin G, Brogliato B, Campion G, Canudas C, Khalil H, de Luca A, Lozano
R, Ortega R, Tomei P, Siciliano B (1996) Theory of robot control. In: Canudas de Wit C, Bastin
G, Siciliano B (eds.) Communications and control engineering. Springer Verlag, London
61. Lozano R, Brogliato B, Landau ID (1992) Passivity and global stabilization of cascaded
nonlinear systems. IEEE Trans Autom Control 37(9):1386–1388
62. Brogliato B, Lozano R, Landau ID (1993) New relationships between Lyapunov functions
and the passivity theorem. Int J Adapt Control Signal Process 7:353–365
63. Paden B, Panja R (1988) Globally asymptotically stable PD+ controller for robot manipula-
tors. Int J Control 47:1697–1712
64. Spong MW, Ortega R, Kelly R (1990) Comments on adaptive manipulator control: a case
study. IEEE Trans Autom Control 35:761–762
65. Johansson R (1990) Adaptive control of robot manipulator motion. IEEE Trans Robot Autom
6:483–490
66. Tomei P (1991) A simple PD controller for robots with elastic joints. IEEE Trans Autom
Control 36:1208–1213
67. Battilotti S, Lanari L, Ortega R (1997) On the role of passivity and output injection in the output feedback stabilisation problem: application to robot control. Eur J Control 3:92–103
68. Berghuis H, Nijmeijer H (1993) A passivity approach to controller-observer design for robots.
IEEE Trans Robot Autom 9(6):741–754
69. Berghuis H, Nijmeijer H (1993) Robust control of robots using only position measurements.
In: Proceedings of IFAC world congress, Sydney, Australia, pp 501–506
70. Rocha-Cozatl E, Moreno JA (2011) Dissipative design of unknown input observers for systems with sector nonlinearities. Int J Robust Nonlinear Control 21:1623–1644
71. Fossen TI, Strand JP (1999) Passive nonlinear observer design for ships using Lyapunov
methods: full-scale experiments with a supply vessel. Automatica 35:3–16
72. Arcak M, Kokotovic PV (2001) Observer-based control of systems with slope-restricted non-
linearities. IEEE Trans Autom Control 46(7):1146–1150
73. Adly S, Brogliato B, Le BK (2013) Well-posedness, robustness, and stability analysis of a
set-valued controller for Lagrangian systems. SIAM J Optim Control 51(2):1592–1614
97. Drakunov SV, Izosimov DB, Luk’yanov AG, Utkin VA, Utkin VI (1990) The block control
principle II. Autom Remote Control 51:737–746
98. Utkin VI, Chen DS, Chang HC (2000) Block control principle for mechanical systems. ASME
J Dyn Syst Meas Control 122:1–10
99. Marino R, Tomei P (1995) Nonlinear control design. Geometric, adaptive, robust. Prentice
Hall, Englewood Cliffs
100. Bridges M, Dawson DM (1995) Backstepping control of flexible joint manipulators: a survey.
J Robot Syst 12(3):199–216
101. Spong MW, Vidyasagar M (1989) Robot dynamics and control. Wiley, New York
102. Battilotti S, Lanari L (1995) Global set point control via link position measurement for flexible
joint robots. Syst Control Lett 25:21–29
103. Kelly R, Santibánez VS (1998) Global regulation of elastic joints robots based on energy
shaping. IEEE Trans Autom Control 43(10):1451–1456
104. Lozano R, Valera A, Albertos P, Arimoto S, Nakayama T (1999) PD control of robot manip-
ulators with joint flexibility, actuators dynamics and friction. Automatica 35:1697–1700
105. Lozano R, Brogliato B (1992) Adaptive hybrid force-position control for redundant manipu-
lators. IEEE Trans Autom Control 37(10):1501–1505
106. Namvar M, Aghili F (2005) Adaptive force-motion control of coordinated robots interacting
with geometrically unknown environments. IEEE Trans Robot 21(4):678–694
107. Li PY, Horowitz R (2001) Passive velocity field control (PVFC): Part I-geometry and robust-
ness. IEEE Trans Autom Control 46(9):1346–1359
108. Li PY, Horowitz R (2001) Passive velocity field control (PVFC): Part II-application to contour
following. IEEE Trans Autom Control 46(9):1360–1371
109. McClamroch NH (1989) A singular perturbation approach to modeling and control of manip-
ulators constrained by a stiff environment. In: Proceedings of the 28th IEEE international
conference on decision and control, vol 3, Tampa, FL, USA, pp 2407–2411
110. Brogliato B, Niculescu SI, Orhant P (1997) On the control of finite dimensional mechanical
systems with unilateral constraints. IEEE Trans Autom Control 42(2):200–215
111. Brogliato B, Niculescu SI, Monteiro-Marques MDP (2000) On tracking control of a class of
complementary-slackness hybrid mechanical systems. Syst Control Lett 39:255–266
112. Bourgeot JM, Brogliato B (2005) Tracking control of complementarity Lagrangian systems.
Int J Bifurc Chaos 15(6):1839–1866
113. Morarescu IC, Brogliato B (2010) Trajectory tracking control of multiconstraint complemen-
tarity lagrangian systems. IEEE Trans Autom Control 55(6):1300–1313
114. Morarescu IC, Brogliato B (2010) Passivity-based switching control of flexible-joint comple-
mentarity mechanical systems. Automatica 46(1):160–166
115. Brogliato B, Heemels WPMH (2009) Observer design for Lur’e systems with multivalued
mappings: a passivity approach. IEEE Trans Autom Control 54(8):1996–2001
116. Brézis H (1973) Opérateurs Maximaux Monotones. Mathematics Studies. North Holland,
Amsterdam
117. Rockafellar RT, Wets RJB (1998) Variational Analysis, Grundlehren der Mathematischen
Wissenschaften, vol 317. Springer Verlag
118. Hiriart-Urruty JB, Lemaréchal C (2001) Fundamentals of convex analysis. Grundlehren text
editions. Springer Verlag, Berlin
119. Heemels WPMH, Juloski AL, Brogliato B (2005) Observer design for Lur’e systems with
monotone multivalued mappings. In: Proceedings of 16th IFAC Triennal world congress,
Prague, Czech Republic, pp 187–192
120. Brogliato B (2004) Absolute stability and the Lagrange-Dirichlet theorem with monotone multivalued mappings. Syst Control Lett 51:343–353. Preliminary version in: Proceedings of the 40th IEEE conference on decision and control, vol 1, pp 27–32, 4–7 Dec 2001
121. Doris A, Juloski AL, Mihajlovic N, Heemels WPMH, van de Wouw N, Nijmeijer H (2008)
Observer design for experimental non-smooth and discontinuous systems. IEEE Trans Control
Syst Technol 16(6):1323–1332
122. de Bruin JCA, Doris A, van de Wouw N, Heemels WPMH, Nijmeijer H (2009) Control of
mechanical motion systems with non-collocation of actuation and friction: a Popov criterion
approach for input-to-state stability and set-valued nonlinearities. Automatica 45(2):405–415
123. van de Wouw N, Doris A, de Bruin JCA, Heemels WPMH, Nijmeijer H (2008) Output-
feedback control of Lur’e-type systems with set-valued nonlinearities: a Popov-criterion
approach. In: American control conference, Seattle, USA, pp 2316–2321
124. Tanwani A, Brogliato B, Prieur C (2016) Observer-design for unilaterally constrained
Lagrangian systems: a passivity-based approach. IEEE Trans Autom Control 61(9):2386–
2401
125. Tanwani A, Brogliato B, Prieur C (2018) Well-posedness and output regulation for implicit
time-varying evolution variational inequalities. SIAM J Control Optim 56(2):751–781
126. Miranda-Villatoro F, Castanos F (2017) Robust output regulation of strongly passive lin-
ear systems with multivalued maximally monotone controls. IEEE Trans Autom Control
62(1):238–249
127. Brogliato B, Goeleven D (2011) Well-posedness, stability and invariance results for a class of multivalued Lur’e dynamical systems. Nonlinear Anal Theory Methods Appl 74:195–212
128. Chen D, Yang G, Han Z (2012) Impulsive observer for input-to-state stability based syn-
chronization of Lur’e differential inclusion system. Commun Nonlinear Sci Numer Simul
17(7):2990–2996
129. Huang J, Zhang W, Shi M, Chen L, Yu L (2017) H∞ observer design for singular one-sided Lur’e differential inclusion system. J Frankl Inst 354(8):3305–3321
130. Shi MJ, Huang J, Chen L, Yu L (2016) Adaptive full-order and reduced-order observers for
one-sided Lur’e systems with set-valued mappings. IMA J Math Control Inf 35(2):569–589
131. Vromen T, van de Wouw N, Doris A, Astrid P, Nijmeijer H (2017) Nonlinear output-feedback
control of torsional vibrations in drilling systems. Int J Robust Nonlinear Control 27(17):3659–
3684
132. Adly S, Hantoute A, Nguyen BT (2018) Lyapunov stability of differential inclusions involving
prox-regular sets via maximal monotone operators. J Optim Theory Appl https://doi.org/10.
1007/s10957-018-1446-7
133. Adly S, Hantoute A, Nguyen BT (2018) Lyapunov stability of differential inclusions involving
prox-regular sets via maximal monotone operators. J Optim Theory Appl https://doi.org/10.
1007/s10957-018-1446-7
134. Adly S, Hantoute A, Nguyen BT (2018) Equivalence between differential inclusions involving prox-regular sets and maximal monotone operators. Submitted. arXiv:1704.04913v2
135. Heemels WPMH, Camlibel MK, Schumacher JM, Brogliato B (2011) Observer-based control
of linear complementarity systems. Int J Robust Nonlinear Control 21(10):1193–1218
136. Baumann M, Leine RI (2016) A synchronization-based state observer for impact oscillators
using only collision time information. Int J Robust Nonlinear Control 26:2542–2563
137. Baumann M, Biemond JJB, Leine RI, van de Wouw N (2018) Synchronization of impacting
systems with a single constraint. Phys D 362:9–23
138. Forni F, Teel AR, Zaccarian L (2013) Follow the bouncing ball: global results on tracking and state estimation with impacts. IEEE Trans Autom Control 58(6):1470–1485
139. Galeani S, Menini L, Tornambé A (2008) A high gain observer for the estimation of velocity
and coefficient of restitution in non-smooth mechanical systems. Int J Model Identif Control
4(1):44–58
140. Martinelli F, Menini L, Tornambé A (2004) Observability, reconstructibility and observer
design for linear mechanical systems unobservable in absence of impacts. ASME J Dyn Syst
Meas Control 125(4):549–562
141. Menini L, Tornambé A (2001) Asymptotic tracking of periodic trajectories for a simple
mechanical system subject to nonsmooth impacts. IEEE Trans Autom Control 46(7):1122–
1126
142. Menini L, Tornambé A (2002) Velocity observers for non-linear mechanical systems subject
to non-smooth impacts. Automatica 38:2169–2175
143. Menini L, Tornambé A (2002) State estimation of (otherwise unobservable) linear mechanical
systems through the use of non-smooth impacts: the case of two mating gears. Automatica
38:1823–1826
Chapter 8
Adaptive Control
8.1 Lagrangian Systems

In this section, we first examine the case of a PD controller with adaptive gravity compensation. Indeed, it was proved in Sect. 7.3.1 that gravity hampers asymptotic stability of the desired fixed point, since the closed-loop system possesses an equilibrium that differs from the desired one. We then pass to the tracking control of n-degree-of-freedom manipulators.
The controller is the PD input with adaptive gravity compensation

\[
u(t) = -\lambda_1 \dot q(t) - \lambda_2 \tilde q(t) + Y_g(q(t))\hat\theta_g(t), \tag{8.1}
\]

where we suppose that the gravity generalized torque satisfies g(q) = Y_g(q)θ_g for some known matrix Y_g(q) ∈ ℝ^{n×p} and unknown vector θ_g, and θ̃_g = θ_g − θ̂_g. The estimation algorithm is of the gradient type, and we know from Sect. 4.2.1 that such an estimation law defines a passive operator q̇ ↦ Y_g(q)θ̃_g, with storage function V₂(θ̃_g) = ½θ̃_gᵀθ̃_g. This strongly suggests decomposing the closed-loop system obtained by introducing (8.1) into (6.90) into two blocks, as follows:
\[
\begin{cases}
M(q(t))\ddot q(t) + C(q(t),\dot q(t))\dot q(t) + \lambda_1\dot q(t) + \lambda_2\tilde q(t) = -Y_g(q(t))\tilde\theta_g(t)\\[2pt]
\dot{\tilde\theta}_g(t) = \lambda_3 Y_g^T(q(t))\dot q(t).
\end{cases}
\tag{8.2}
\]
Obviously, the first block, with the rigid-joint–rigid-link dynamics, input u₁ = −Y_g(q)θ̃_g (= −y₂) and output y₁ = q̇ (= u₂), defines an OSP operator with storage function V₁(q̃, q̇) = ½q̇ᵀM(q)q̇ + (λ₂/2)q̃ᵀq̃, see Sect. 7.3.1. One is tempted to conclude asymptotic stability with the Lyapunov function V(q̃, q̇, θ̃_g) = V₁(q̃, q̇) + V₂(θ̃_g). However, notice that the overall system with input u = u₁ + y₂ and output y = y₁, although OSP, is not ZSD. Indeed, u ≡ y ≡ 0 implies λ₂q̃ = −Y_g(q)θ̃_g and θ̃̇_g = 0, nothing more. Hence, very little has been gained by adding an estimation of the gravity, even though the Passivity Theorem applies.
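To make the lack-of-detectability issue tangible, here is a minimal numerical sketch (ours, not from the book) of the closed loop (8.2) for a one-degree-of-freedom pendulum with gravity regressor Y_g(q) = sin(q); all numerical values (m, a, the gains, the desired position) are arbitrary illustrative choices.

```python
import numpy as np

# 1-DoF pendulum: m*qdd + a*sin(q) = u, gravity regressor Yg(q) = sin(q), theta_g = a.
# PD + adaptive gravity compensation (8.1): u = -l1*qd - l2*(q - q_des) + sin(q)*theta_hat,
# gradient update as in (8.2): d/dt(theta_tilde) = l3*sin(q)*qd, theta_tilde = a - theta_hat.
m, a = 1.0, 9.81                      # illustrative values (not from the book)
l1, l2, l3 = 5.0, 10.0, 2.0
q_des = np.pi / 4
q, qd, th_t = 0.0, 0.0, a             # start with theta_hat = 0, i.e. theta_tilde = a
dt, T = 1e-3, 60.0
for _ in range(int(T / dt)):
    qt = q - q_des
    qdd = (-l1 * qd - l2 * qt - np.sin(q) * th_t) / m   # first block of (8.2)
    th_t += dt * l3 * np.sin(q) * qd                    # second block: gradient update
    qd += dt * qdd                                      # semi-implicit Euler step
    q += dt * qd

# The loop settles (qd -> 0), but LaSalle only yields l2*q_tilde = -sin(q)*theta_tilde:
# the position error need not vanish, reflecting the missing ZSD property.
print(q - q_des, th_t, l2 * (q - q_des) + np.sin(q) * th_t)
```

In this run the pendulum comes to rest away from q_des, with the residual PD force exactly balancing the gravity mis-estimate, which is the equilibrium set allowed by the lack of ZSD.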
The lack of ZSD of the system in (8.2) is an obstacle to the asymptotic stability of the
closed-loop scheme. The problem is therefore to keep the negative feedback intercon-
nection structure of the two highlighted blocks, while introducing some detectability
property in the loop. However, the whole state is now (q, q̇, θ̃g ), and it is known
in identification and adaptive control theory that the estimated parameters converge
to the real ones (i.e., θ̃g (t) → 0) only if some persistent excitation conditions are
fullfilled. Those conditions are related to the spectrum of the signals entering the
regressor matrix Yg (q). Such a result is hopeless here since we are dealing with reg-
ulation. Hence, the best one may expect to obtain is convergence of (q̃, q̇) toward
zero. We may however hope that there exists a feedback adaptive controller that
can be analyzed through the Passivity Theorem and such that the underlying storage
function can be used as a Lyapunov function with Krasovskii–LaSalle Theorem to
prove asymptotic convergence. Let us consider the estimation algorithm proposed
in [1]:
\[
\dot{\tilde\theta}_g(t) = \lambda_3 Y_g^T(t)\left(\lambda_4\dot q(t) + \frac{2\tilde q(t)}{1+2\tilde q^T(t)\tilde q(t)}\right). \tag{8.3}
\]
This is a gradient update law. It defines a passive operator \(\lambda_4\dot q + \frac{2\tilde q}{1+2\tilde q^T\tilde q} \mapsto Y_g(q)\tilde\theta_g\), not \(\dot q \mapsto Y_g(q)\tilde\theta_g\). We therefore have to look at the dissipativity properties of the subsystem with dynamics \(M(q(t))\ddot q(t) + C(q(t),\dot q(t))\dot q(t) + \lambda_1\dot q(t) + \lambda_2\tilde q(t) = u_1(t)\) and output \(y_1(t) = \lambda_4\dot q(t) + \frac{2\tilde q(t)}{1+2\tilde q^T(t)\tilde q(t)}\): this is new compared to the foregoing developments. One computes
\[
\begin{aligned}
\langle u_1, y_1\rangle_t &= \int_0^t \Big(\lambda_4\dot q + \frac{2\tilde q}{1+2\tilde q^T\tilde q}\Big)^T\big[M(q)\ddot q + C(q,\dot q)\dot q + \lambda_1\dot q + \lambda_2\tilde q\big]\,ds\\[2pt]
&= \int_0^t \Big[\lambda_4\dot q^T(\lambda_2\tilde q + \lambda_1\dot q) + \frac{d}{ds}\Big(\frac{\lambda_4}{2}\dot q^T M(q)\dot q + \frac{2\dot q^T M(q)\tilde q}{1+2\tilde q^T\tilde q}\Big)\Big]ds\\[2pt]
&\quad + \int_0^t \Big[-\frac{2\dot q^T M(q)\dot q + 2\dot q^T C(q,\dot q)\tilde q}{1+2\tilde q^T\tilde q} + \frac{8\,(\dot q^T M(q)\tilde q)(\dot q^T\tilde q)}{(1+2\tilde q^T\tilde q)^2} + \frac{2\tilde q^T}{1+2\tilde q^T\tilde q}(\lambda_2\tilde q + \lambda_1\dot q)\Big]ds\\[2pt]
&\ge \frac{\lambda_4\lambda_2}{2}\tilde q^T\tilde q\,\Big|_0^t + \Big[\frac{\lambda_4}{2}\dot q^T M(q)\dot q + \frac{2\dot q^T M(q)\tilde q}{1+2\tilde q^T\tilde q}\Big]_0^t + \lambda_4\lambda_1\int_0^t \dot q^T\dot q\,ds\\[2pt]
&\quad + \int_0^t \Big[\frac{2\lambda_2\,\tilde q^T\tilde q}{1+2\tilde q^T\tilde q} - \Big(4\lambda_M + \frac{k_c}{\sqrt 2}\Big)\dot q^T\dot q - \frac{2\lambda_1\,\|\dot q\|\,\|\tilde q\|}{1+2\tilde q^T\tilde q}\Big]ds,
\end{aligned}
\tag{8.4}
\]
where we have used the fact that, due to the skew symmetry of Ṁ(q) − 2C(q, q̇), we have Ṁ(q) = C(q, q̇) + Cᵀ(q, q̇), and where
\[
\lambda_4 > \max\left\{\frac{1}{\lambda_1}\left(\frac{\lambda_1^2}{2\lambda_2} + 4\lambda_M + \frac{k_c}{\sqrt 2}\right),\ \frac{2\lambda_M}{\sqrt{\lambda_m\lambda_2}}\right\},
\]
with \(\lambda_m I_n \preceq M(q) \preceq \lambda_M I_n\) and ‖C(q, q̇)‖ ≤ k_c‖q̇‖ for any compatible matrix and vector norms. Under these gain conditions, one sees from (8.4) that the first subsystem is passive with respect to the supply rate u₁ᵀy₁, and a storage function is given by
\[
V_1(\tilde q,\dot q) = \frac{\lambda_4\lambda_2}{2}\tilde q^T\tilde q + \frac{\lambda_4}{2}\dot q^T M(q)\dot q + \frac{2\dot q^T M(q)\tilde q}{1+2\tilde q^T\tilde q}. \tag{8.5}
\]
The first subsystem even possesses a strict passivity property; see (8.4). Finally, a complete storage function is provided by the sum V(q̃, q̇, θ̃_g) = V₁(q̃, q̇) + V₂(θ̃_g); it can be shown that its derivative is negative semidefinite, and that the largest invariant set contained in {V̇ ≡ 0} is contained in the set {(q̃, q̇) = (0, 0)}, which ends the proof.
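The dissipation inequality behind (8.4)–(8.5) can be checked numerically on a scalar example (our own sanity check, not from the book): constant inertia M = m, so C = 0 and k_c = 0, with gains chosen to satisfy the condition on λ₄.

```python
import numpy as np

# 1-DoF check of the dissipation inequality V1(t) - V1(0) <= \int_0^t u1*y1 ds
# behind (8.4)-(8.5). Constant inertia M = m (so C = 0, k_c = 0); gravity
# regressor Yg = sin(q). Gains satisfy l4 > max{(1/l1)(l1^2/(2*l2) + 4*m), 2*m/sqrt(m*l2)}.
m = 1.0
l1, l2, l3 = 2.0, 4.0, 1.0
l4 = 3.0                                   # > max{2.25, 1.0}
q_des = 0.0
q, qd, th_t = 1.0, 0.0, 2.0                # q_tilde(0) = 1, theta_tilde(0) = 2

def V1(qt, qdot):
    # storage function (8.5) with M(q) = m
    return 0.5 * l4 * l2 * qt**2 + 0.5 * l4 * m * qdot**2 + 2 * m * qdot * qt / (1 + 2 * qt**2)

dt, T = 1e-3, 20.0
supply, V1_0, worst = 0.0, V1(q - q_des, qd), np.inf
for _ in range(int(T / dt)):
    qt = q - q_des
    y1 = l4 * qd + 2 * qt / (1 + 2 * qt**2)
    u1 = -np.sin(q) * th_t                 # interconnection with the update law (8.3)
    supply += dt * u1 * y1
    worst = min(worst, supply - (V1(qt, qd) - V1_0))
    qdd = (u1 - l1 * qd - l2 * qt) / m
    th_t += dt * l3 * np.sin(q) * y1
    qd += dt * qdd
    q += dt * qd

print(worst)   # stays >= -O(dt): the supply dominates the storage increment
```

Up to the O(dt) discretization error of the explicit integration, the accumulated supply never falls below the storage increment, as (8.4) predicts under the gain condition.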
Remark 8.1 The storage function associated with the first subsystem is quite original. It looks like the available storage of the closed-loop system when a PD controller is applied, but the added term comes from “nowhere”. Our analysis went through easily because we knew beforehand that such a storage function was a good one. The intuition behind it is not evident. It was first discovered in [2] and then used in [1].
Let us now pass to the controller presented in Sect. 7.3.4 in (7.68). It turns out that this scheme yields a much simpler stability analysis than the PD controller with adaptive gravity compensation: this is due to the fact that, as pointed out earlier, it uses the inertia matrix explicitly even for regulation.
parameters. Actually, one has M(q(t))q̈(t) + C(q(t), q̇(t))q̇(t) + g(q(t)) = Y(q(t), q̇(t), q̈(t))θ. The closed-loop system is therefore given by
\[
\begin{cases}
M(q(t))\dot s(t) + C(q(t),\dot q(t))s(t) + \lambda_1 s(t) = Y(q(t),\dot q(t),t)\tilde\theta(t)\\[2pt]
\dot{\tilde q}(t) = -\lambda\tilde q(t) + s(t)\\[2pt]
\dot{\tilde\theta}(t) = \lambda_2 Y^T(q(t),\dot q(t),t)s(t).
\end{cases}
\tag{8.7}
\]
The interpretation through the Passivity Theorem is obvious: the update law in (8.6) is a gradient that defines a passive operator s ↦ Y(q, q̇, t)θ̃, and the first subsystem has state (q̃, s). From the developments in Sect. 7.3.4, one therefore sees that the adaptive version of the Slotine and Li controller just yields a closed-loop system that is identical to the one in (7.69) and (7.70), with an additional passive block interconnected to the two previous ones in (7.69) and (7.71); see Fig. 8.1 and compare with Fig. 7.6. The form of the storage function follows immediately. Similarly to the PD controller with adaptive gravity compensation, one cannot expect asymptotic stability of the whole state, because the parameter estimates generally do not converge toward the real ones. Let us consider the quadratic function
\[
V(s,t) = \frac{1}{2}s^T M(q)s + \frac{1}{2}\tilde\theta^T\tilde\theta. \tag{8.8}
\]
Computing its derivative along the closed-loop trajectories, and using the same arguments as for the first stability proof of the fixed-parameter Slotine and Li controller in Sect. 7.3.4.3, one easily concludes the global convergence to zero of all signals except θ̃(t), as t → +∞, and the boundedness of all signals on [0, +∞).
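These conclusions are easy to reproduce on a hypothetical one-degree-of-freedom example (ours, not from the book), with signs arranged so that the cross terms cancel and V̇ = −λ₁s² for a function of the form (8.8):

```python
import numpy as np

# Adaptive Slotine & Li-type controller, 1-DoF: m*qdd = u, unknown m, track sin(t).
# s = de + lam*e, qr_dd = qdd_des - lam*(qdot - qd_des); u = theta_hat*qr_dd - l1*s.
# Closed loop: m*sdot = -l1*s + (theta_hat - m_true)*qr_dd, theta_hat_dot = -l2*qr_dd*s,
# so V = (1/2)*m*s^2 + (1/(2*l2))*(theta_hat - m_true)^2 gives Vdot = -l1*s^2.
m_true = 2.0                    # unknown to the controller
lam, l1, l2 = 2.0, 6.0, 2.0    # illustrative gains
q, qdot, th = 1.0, 0.0, 0.5    # theta_hat(0) = 0.5
dt, T, t = 1e-3, 40.0, 0.0
for _ in range(int(T / dt)):
    qd_des, qdd_des = np.cos(t), -np.sin(t)          # desired trajectory sin(t)
    e, de = q - np.sin(t), qdot - qd_des
    s = de + lam * e
    qr_dd = qdd_des - lam * de
    u = th * qr_dd - l1 * s                          # certainty-equivalence control
    th += dt * (-l2 * qr_dd * s)                     # gradient update law
    qdot += dt * u / m_true
    q += dt * qdot
    t += dt

# s and the tracking error converge; theta_hat stays bounded (here, with a persistently
# exciting reference, it happens to approach m_true, but that is not guaranteed in general).
print(e, s, th)
```

The storage-function argument only guarantees s, e → 0 and bounded θ̂; parameter convergence is a separate, persistency-of-excitation matter.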
Remark 8.2 Historically, the passivity interpretation of the Slotine and Li scheme was deduced from Lemma 7.23, see [3–5], where most of the adaptive schemes (including, e.g., [6]) designed for rigid manipulators have been analyzed through the Passivity Theorem. Indeed, this is based on a cross-terms cancelation, as defined in Lemma 7.23. Actually, the first subsystem in (8.7) with state s has relative degree one between its input u₁ = Y(q, q̇, t)θ̃ and its output y₁ = s. As we shall see when we present the adaptive control of linear time-invariant systems with relative degree one, the cross-terms cancelation equality is ubiquitous in direct adaptive control. The extension of the Slotine and Li scheme to the case of force-position control, when the system is in permanent contact with a flexible, stiff environment, was done in [7] (see also [8] for an extension of the scheme in [7]). The case of holonomically constrained systems, as in Sect. 6.7.1, is treated in [9], where experimental results are shown, and where it is pointed out that static friction (the set-valued part of Coulomb's friction model, or extensions of it) seriously degrades the performance.
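The cross-terms cancelation mechanism mentioned in Remark 8.2 can be written out in a few lines (a sketch under one possible sign convention, with the parameter term of (8.8) scaled by 1/λ₂; the signs in (8.7) depend on how θ̃ is defined):

```latex
% Cross-term cancelation sketch (convention: M\dot s + Cs + \lambda_1 s = Y\tilde\theta,
% \dot{\tilde\theta} = -\lambda_2 Y^T s, V = \tfrac12 s^T M s + \tfrac{1}{2\lambda_2}\tilde\theta^T\tilde\theta):
\begin{aligned}
\dot V &= s^T M\dot s + \tfrac12 s^T \dot M s + \tfrac{1}{\lambda_2}\tilde\theta^T\dot{\tilde\theta}\\
       &= s^T\big({-C}s - \lambda_1 s + Y\tilde\theta\big) + \tfrac12 s^T \dot M s - \tilde\theta^T Y^T s\\
       &= -\lambda_1 s^T s
        + \underbrace{s^T Y\tilde\theta - \tilde\theta^T Y^T s}_{=0\ \text{(cross terms)}}
        + \underbrace{\tfrac12 s^T(\dot M - 2C)s}_{=0\ \text{(skew symmetry)}}
        = -\lambda_1 s^T s.
\end{aligned}
```

The relative-degree-one structure is what lets the scalar s^T Y\tilde\theta produced by the plant be canceled exactly by the gradient update, with no differentiation of the regressor needed.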
Until now, we have presented only gradient-type update laws. It is clear that the estimation block can be replaced by any system that contains θ̃ in its state and is passive with respect to the same supply rate. The classical recursive least-squares estimation algorithm does not satisfy such requirements. However, it can be “passified”, as explained now. First of all, let us recall the form of the classical least-squares algorithm:
\[
\begin{cases}
\dot{\hat\theta}_{ls}(t) = P(t)Y^T(q(t),\dot q(t),t)s(t)\\[2pt]
\dot P(t) = -P(t)Y^T(q(t),\dot q(t),t)Y(q(t),\dot q(t),t)P(t), \qquad P(0) = P^T(0) \succ 0.
\end{cases}
\tag{8.9}
\]
The required passivity property is between s and −Y(q, q̇, t)θ̃ (recall that we have defined θ̃ = θ − θ̂). Let us compute the available storage of this system:
\[
\begin{aligned}
V_a(\tilde\theta, P) &= \sup_{s:\,(0,\tilde\theta(0),P(0))\to}\ -\int_0^t s^T Y\tilde\theta\, ds\\
&= \sup_{s:\,(0,\tilde\theta(0),P(0))\to}\ \frac{1}{2}\tilde\theta^T P^{-1}\tilde\theta\,\Big|_0^t - \frac{1}{2}\int_0^t \tilde\theta^T \frac{d(P^{-1})}{ds}\tilde\theta\, ds\\
&= \sup_{s:\,(0,\tilde\theta(0),P(0))\to}\ \frac{1}{2}\tilde\theta^T P^{-1}\tilde\theta\,\Big|_0^t - \frac{1}{2}\int_0^t \tilde\theta^T Y^T Y\tilde\theta\, ds,
\end{aligned}
\tag{8.10}
\]
where we used the fact that \(\frac{d}{dt}(P^{-1}) = Y^T Y\). One remarks that the available storage in (8.10) is not “far” from being bounded: it would suffice that Yθ̃ be L₂-bounded. However, it seems difficult to prove this. Consequently, let us propose the following modified least-squares estimation algorithm (see footnote 2):
\[
\begin{cases}
\hat\theta(t) = \hat\theta_{ls}(t) + S(t),\\[4pt]
\dot P(t) = \alpha(t)\Big[-P(t)\Big(\dfrac{Y^T(t)Y(t)}{1+\mathrm{tr}(Y^T(t)Y(t))} + \lambda R\Big)P(t) + \lambda P(t)\Big],\\[4pt]
\alpha(t) = \dfrac{s^T(t)Y(t)Y^T(t)s(t)}{(1+s^T(t)s(t))(1+\mathrm{tr}(Y^T(t)Y(t)))},\\[4pt]
A = \dfrac{Y^T(t)Y(t)}{1+\mathrm{tr}(Y^T(t)Y(t))} + \lambda R,\\[4pt]
S(t) = \dfrac{Y^T(t)s(t)}{(1+\mathrm{tr}(Y^T(t)Y(t)))(1+s^T(t)s(t))}\Big(\hat\theta_{ls}^T(t)A\hat\theta_{ls}(t) + M(1+\lambda\lambda_{\max}(R))\Big),\\[4pt]
\lambda \ge 0,\qquad R = R^T \succ 0,\qquad \lambda_{\min}(R)I_n \preceq P^{-1}(0) \preceq \big(\lambda_{\max}(R) + \tfrac{1}{\lambda}\big)I_n,\qquad M \ge \theta^T\theta.
\end{cases}
\tag{8.11}
\]
Then the following is true [10, 11].
Lemma 8.3 The following statements hold:

1. \(\lambda_{\min}(R) \le \lambda_i(P^{-1}) \le \lambda_{\max}(R) + \frac{1}{\lambda}\).
2. \(\int_0^t -s^T Y\tilde\theta\, ds = \frac{1}{2}\tilde\theta_{ls}^T P^{-1}\tilde\theta_{ls}\,\big|_0^t - \frac{1}{2}\int_0^t \tilde\theta_{ls}^T \frac{d(P^{-1})}{d\tau}\tilde\theta_{ls}\, d\tau + \int_0^t s^T Y S\, d\tau\), where \(\tilde\theta_{ls} = \theta - \hat\theta_{ls}\), and \(\hat\theta_{ls}\) is the classical least-squares estimate, \(\dot{\hat\theta}_{ls} = PY^T s\).
3. \(-\frac{1}{2}\int_0^t \tilde\theta_{ls}^T \frac{d(P^{-1})}{d\tau}\tilde\theta_{ls}\, d\tau + \int_0^t s^T Y S\, d\tau \ge 0\).

It follows that the mapping s ↦ −Yθ̃ is passive, with storage function \(\frac{1}{2}\tilde\theta_{ls}^T P^{-1}\tilde\theta_{ls}\).
The proof of Lemma 8.3 is not given here, for the sake of brevity and also because, despite its originality, it has not been proved that such a passified least-squares algorithm yields better closed-loop performance than the simple gradient update law (for instance, in terms of parameter convergence speed or robustness). It is therefore to be seen as a theoretical exercise (find out how to passify the classical least-squares algorithm) rather than something motivated by applications. The interest for us here is to illustrate the modularity provided by passivity-based controllers. As we shall see further, it applies equally well to adaptive control of relative degree one and two linear time-invariant systems.
In this section, we provide the complete exposition of the adaptive version of the
scheme of Sect. 7.6.1, which is the only adaptive scheme proposed in the literature
2 Let us note that the denomination “least-squares” somewhat loses its original meaning here, since
it is not clear that the proposed scheme minimizes any quadratic criterion. However, the name least
squares is kept for obvious reasons.
solving both the linearity-in-the-parameters and the a priori knowledge of the stiffness matrix K issues, while at the same time guaranteeing the global convergence of the tracking errors and the boundedness of all the closed-loop signals, with only position and velocity measurements (no acceleration feedback). It was published in [12, 13].
The starting point for the stability analysis of the adaptive version is the quadratic
function
where σ_p > 0 is a feedback gain, and θ̃(t) = θ̂(t) − θ is the parameter error vector. We do not define what θ is at this stage, because this vector of unknown parameters will be constructed progressively as the stability proof proceeds. Actually, it will be proved in Lemma 8.6 below that the nonadaptive control law may be written as
where θ5T h(q1 ) = det(M(q1 )). Thus, a nice property that will be used is that M(q1 ) =
M(q1 )T 0 so that det(M(q1 )) > 0: the controller hence defined is not singular. This
is used when the parameter vector θ5 is replaced by its estimate θ̂5 (t), by defining a
suitable projection algorithm. Another issue is that of the a priori knowledge of the
diagonal matrix K = diag(kii ), which has to be replaced by an estimate K̂ in the
controller. Since the fictitious input q2d is defined with K −1 , its adaptive counterpart
will be defined with K̂ −1 (t), so that K̂ (t) has to be nonsingular. Moreover the signal
q2d has to be twice differentiable. This implies that K̂ (t) will have in addition to be
twice differentiable as well. The two-parameter projection algorithms are given as
follows. We define \theta_K = (k_{11}, k_{22}, \ldots, k_{nn})^T, and we assume that a lower bound \alpha I_n on M(q_1) is known.
The Parameter Adaptation Algorithms
It is possible to define a subspace spanned by h(q_1) as S = \{v \mid v = h(q_1) for some q_1\}, and a set \Lambda = \{v \mid v^T h \ge \alpha^n for all h \in S\}. The set \Lambda is convex, and \theta_5 \in \Lambda. The first parameter adaptation law is as follows:
\dot{\hat\theta}_5(t) = \begin{cases} h(q_1(t))u^T(t)s_2(t) & \text{if } \hat\theta_5(t) \in \mathrm{Int}(\Lambda)\\ \mathrm{proj}[\Lambda,\, h(q_1(t))u^T(t)s_2(t)] & \text{if } \hat\theta_5(t) \in \mathrm{bd}(\Lambda) \text{ and } [h(q_1(t))u^T(t)s_2(t)]^T \hat\theta_5^{\perp} > 0, \end{cases} \quad (8.14)
θ̂˙6 (t) = Y6T (q1 (t), q̇1 (t), q2 (t), q̇2 (t))s2 (t). (8.15)
The gradient update laws in (8.14) and (8.15) will then be used to define the adaptive
controller as
θ̂5T (t)h(q1 (t))u(t) + Y6 (q1 (t), q̇1 (t), q2 (t), q̇2 (t))θ̂6 (t) = 0. (8.16)
The second projection algorithm is as follows, and concerns the estimate of the
stiffness matrix K :
\dot{\hat\theta}_{ki}(t) = \begin{cases} x_i(t) & \text{if } \hat\theta_{ki}(t) \ge \delta_k\\ x_i(t) & \text{if } \hat\theta_{ki}(t) \ge \frac{\delta_k}{2} \text{ and } x_i(t) \ge 0\\ f(\hat\theta_{ki}(t))\,x_i(t) & \text{if } \delta_k \ge \hat\theta_{ki}(t) \ge \frac{\delta_k}{2} \text{ and } x_i(t) \le 0, \end{cases} \quad (8.17)
s_1^T K(q_{2d} - q_{1d}) = \theta_k^T \mathrm{diag}(s_{1i})(q_{2d} - q_{1d}) = Y_{2d}(q_1, \dot q_1, q_{1d}, q_{2d})\,\theta_k \quad (8.18)
Y_{2d}(q_1, \dot q_1, q_{1d}, q_{2d}) = (q_{2d} - q_{1d})^T \mathrm{diag}(s_{1i}).
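The reparametrization in (8.18) is just the identity s^T K v = \theta_k^T \mathrm{diag}(s_i)\, v for a diagonal K = \mathrm{diag}(\theta_k). A quick numerical sketch (the test vectors below are made up, not taken from the scheme):

```python
import random

# Check of the linear-in-the-parameters rewriting (8.18):
# s^T K v = theta_k^T diag(s_i) v, with K = diag(theta_k). Test data random.
random.seed(1)
n = 4
theta_k = [random.uniform(0.5, 2.0) for _ in range(n)]   # diagonal of K
s = [random.uniform(-1.0, 1.0) for _ in range(n)]        # plays s1
v = [random.uniform(-1.0, 1.0) for _ in range(n)]        # plays q2d - q1d
lhs = sum(s[i] * theta_k[i] * v[i] for i in range(n))    # s^T K v
Y2d = [s[i] * v[i] for i in range(n)]                    # diag(s_i) v
rhs = sum(theta_k[i] * Y2d[i] for i in range(n))         # Y2d theta_k
print(abs(lhs - rhs))   # exactly 0 up to rounding
```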
\frac{1}{1+s}\big[M(q_1(t))\ddot q_1(t) + C(q_1(t),\dot q_1(t))\dot q_1(t) + g(q_1(t)) - K(q_2(t) - q_1(t))\big] = 0, \quad (8.19)
where \frac{1}{1+s}[f(t)] denotes the signal obtained by filtering f(t) through the transfer function \frac{1}{1+s}. Now we have (we drop the time argument for simplicity)
have (we drop the time argument for simplicity)
\frac{1}{1+s}[M(q_1)\ddot q_1] = M(q_1)\dot q_1 - M(q_1(0))\dot q_1(0) - \frac{1}{1+s}\big[M(q_1)\dot q_1 - M(q_1(0))\dot q_1(0)\big] - \frac{1}{1+s}\big[\dot M(q_1,\dot q_1)\dot q_1\big], \quad (8.20)
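Identity (8.20) can be checked numerically on a scalar toy example. The sketch below (plain Euler integration; the "inertia" m(q) = 2 + sin(q) and the trajectory q(t) = sin(t) are made-up test data) filters m(q)\ddot q through \frac{1}{1+s} by integrating \dot x = -x + m\ddot q, and compares the result with the acceleration-free expression built from positions and velocities only:

```python
import math

# Sanity check of the acceleration-free filtering identity (8.20)/(8.23) on a
# scalar toy example. x = (1/(1+s))[m*qdd] is obtained by integrating
# xdot = -x + m*qdd, and is compared with G(t) - (1/(1+s))[G](t), where
# G(t) = m*qd - m(0)*qd(0) - integral of mdot*qd.
q = math.sin
qd = math.cos
qdd = lambda t: -math.sin(t)
m = lambda t: 2.0 + math.sin(q(t))               # scalar M(q1) along q(t)
mdot = lambda t: math.cos(q(t)) * qd(t)          # dM/dt along the trajectory
dt, T = 1e-4, 2.0
x, I1, y = 0.0, 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    G = m(t) * qd(t) - m(0.0) * qd(0.0) - I1
    x += dt * (-x + m(t) * qdd(t))               # filter m*qdd through 1/(1+s)
    y += dt * (-y + G)                           # filter G through 1/(1+s)
    I1 += dt * mdot(t) * qd(t)
G_T = m(T) * qd(T) - m(0.0) * qd(0.0) - I1
print(abs(x - (G_T - y)))   # ~0: no acceleration measurement needed
```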
3 Another type of C^n projections is presented in [14], whose motivation is quite in the spirit of this one, see e.g., [14, Sect. III].
\frac{1}{1+s}[M(q_1)\ddot q_1] = \exp(-t)\Big[\exp(\tau)\Big(M(q_1(\tau))\dot q_1(\tau) - M(q_1(0))\dot q_1(0) - \int_0^\tau \dot M(q_1(y),\dot q_1(y))\dot q_1(y)\,dy\Big)\Big]_0^t - \int_0^t \exp(-t+\tau)\Big(M(q_1(\tau))\dot q_1(\tau) - M(q_1(0))\dot q_1(0) - \int_0^\tau \dot M(q_1(y),\dot q_1(y))\dot q_1(y)\,dy\Big)d\tau, \quad (8.22)
which finally yields
\frac{1}{1+s}[M(q_1)\ddot q_1] = M(q_1)\dot q_1 - M(q_1(0))\dot q_1(0) - \int_0^t \dot M(q_1(y),\dot q_1(y))\dot q_1(y)\,dy - \int_0^t \exp(-t+\tau)\Big(M(q_1)\dot q_1 - M(q_1(0))\dot q_1(0) - \int_0^\tau \dot M(q_1(y),\dot q_1(y))\dot q_1(y)\,dy\Big)d\tau. \quad (8.23)
Still integrating by parts, we get
\int_0^t \exp(-t+\tau)\,\dot M(q_1(\tau),\dot q_1(\tau))\dot q_1(\tau)\,d\tau = \int_0^t \dot M(q_1(\tau),\dot q_1(\tau))\dot q_1(\tau)\,d\tau - \int_0^t \exp(-t+\tau)\Big(\int_0^\tau \dot M(q_1(y),\dot q_1(y))\dot q_1(y)\,dy\Big)d\tau, \quad (8.24)
from which we can deduce (8.20) combining (8.23) and (8.24). Now using (8.19)
and (8.20), we obtain
Let us now proceed with the stability proof, which we start by differentiating the
function (8.12) along the system’s trajectories. The controller u(·) will then be con-
structed step by step within the proof. Afterward, we shall recapitulate and present
compact forms of the input and the closed-loop system. We obtain
\dot V(s_1,s_2,\tilde q_1,\tilde\theta,\tilde q_2) = s_1^T\big[M(q_1)\dot s_1 + C(q_1,\dot q_1)s_1\big] + \det(M(q_1))\,s_2^T J\dot s_2 + (\tilde q_1 - \tilde q_2)^T K(\dot{\tilde q}_1 - \dot{\tilde q}_2) + \sigma_p\,\tilde q_1^T\dot{\tilde q}_1 + \tilde\theta^T\dot{\tilde\theta} + \frac12\,\frac{d}{dt}\big[\det(M(q_1))\big]\,s_2^T J s_2. \quad (8.26)
8.1 Lagrangian Systems 585
Notice that
T_1 = s_1^T\big(M(q_1)\dot s_1 + C(q_1,\dot q_1)s_1 + K(\tilde q_1 - \tilde q_2)\big)
= s_1^T\big[M(q_1)(\ddot q_1 - \ddot q_{1d} + \lambda\dot{\tilde q}_1) + C(q_1,\dot q_1)(\dot{\tilde q}_1 + \lambda\tilde q_1) + K(q_{2d} - q_{1d})\big]
s_1^T\big[M(q_{1d})\ddot q_{1d} + C(q_{1d},\dot q_{1d})\dot q_{1d} + g(q_{1d}) - M(q_1)(\ddot q_{1d} - \lambda\dot{\tilde q}_1) - C(q_1,\dot q_1)(\dot q_{1d} - \lambda\tilde q_1) - g(q_1)\big]
\le s_1^T(\lambda M(q_1) + b_1 I_n)s_1 + s_1^T(-\lambda^2 M(q_1) + b_2 I_n)\tilde q_1 + b_3\big(s_1^T s_1\,\|\tilde q_1\| + \lambda\,\|s_1\|\,\tilde q_1^T\tilde q_1\big), \quad (8.32)
for some positive bounded functions b_1(\cdot), b_2(\cdot), b_3(\cdot) of q_{1d}(\cdot), \dot q_{1d}(\cdot), and \ddot q_{1d}(\cdot).
where a1 (·), a2 (·) and a3 (·) are positive bounded functions of q1d , q̇1d , q̈1d , and of
the dynamic model parameters. Now, from (8.31) and the fact that the various terms
of the dynamical model are linear in the parameters, we can write
where the matrix Yd (q1d , q̇1d , q̈1d ) is of appropriate dimensions and θ1 is a vector of
constant parameters. Since K is a diagonal matrix, we can write
s1T K (q2d − q1d ) = Y2d (q1 , q̇1 , q1d , q2d )θk , (8.38)
where
Y2d (q1 , q̇1 , q1d , q2d ) = (q2d − q1d )T diag(s1i ). (8.39)
(we recall that s1i denotes the ith element of the vector s1 ∈ Rn ). Now injecting (8.38)
into (8.29), we obtain
T1 = s1T (Δ1 + Δ2 ) − Y2d (q1 , q̇1 , q1d , q2d )θ̃k + Y2d (q1 , q̇1 , q1d , q2d )θ̂k
(8.40)
± (σv + σn q̃1T q̃1 )s1T M(q1 )s1 ,
where θ̃k (t) = θ̂k (t) − θk , σv > 0, σn > 0. The last term in (8.40) will be used to
compensate for the term s1T Δ1 . Now, from Lemma 8.4, we have M(q1 (t))s1 (t) =
Y4 f (t)θ4 . Introducing this into (8.40) we obtain
T1 = s1T (Δ1 + Δ2 ) − Y2d (q1 , q̇1 , q1d , q2d )θ̃k + Y2d (q1 , q̇1 , q1d , q2d )θ̂k
(8.41)
+ (σv + σn q̃1T q̃1 )s1T Y4 f (t)θ4 − (σv + σn q̃1T q̃1 )s1T M(q1 )s1 .
Provided k̂ii > 0 for all 1 ≤ i ≤ n, we can safely define the function q2d (·) as follows:
K̂ (q2d − q1d ) = −(σv + σn q̃1T q̃1 )Y4 f (t)θ̂4 − Yd (q1d , q̇1d , q̈1d )θ̂1 − σ p q̃1 , (8.42)
where K̂ = diag(k̂ii ) and θ̂k = (k̂11 , k̂22 , ..., k̂nn )T . Introducing (8.42) into (8.39) we
obtain
Y_{2d}(q_1, \dot q_1, q_{1d}, q_{2d})\,\hat\theta_k = \hat\theta_k^T \mathrm{diag}(s_{1i})(q_{2d} - q_{1d}) = s_1^T \hat K(q_{2d} - q_{1d})
= −(σv + σn q̃1T q̃1 )s1T Y4 f (t)θ̂4 − s1T Yd (q1d , q̇1d , q̈1d )θ̂1 −
− σ p s1T q̃1 ,
(8.43)
where σ p > 0. Introducing (8.43) and (8.36) into (8.41) we obtain
T1 = s1T Δ1 − s1T Yd θ̃1 − Y2d θ̃k − (σv + σn q̃1T q̃1 )(s1T Y4 f θ̃4 + s1T M(q1 )s1 ) −
− σ p s1T q̃1 .
(8.44)
Furthermore from (8.35), we have that
s_1^T\Delta_1 - (\sigma_v + \sigma_n\tilde q_1^T\tilde q_1)\,s_1^T M(q_1)s_1 - \lambda\sigma_p\,\tilde q_1^T\tilde q_1 \le -s_1^T s_1\big(\lambda_{\min}(M(q_1))\,\sigma_v - a_1\big) - \tilde q_1^T\tilde q_1\big(\lambda\sigma_p - a_2\big) - s_1^T s_1\,\tilde q_1^T\tilde q_1\big(\lambda_{\min}(M(q_1))\,\sigma_n - a_3\big). \quad (8.45)
If \sigma_v, \sigma_p, \sigma_n are chosen large enough so that
\begin{cases} \lambda_{\min}(M(q_1))\,\sigma_v - a_1 \ge \delta_0 > 0\\ \lambda\sigma_p - a_2 \ge \delta_1 > 0\\ \lambda_{\min}(M(q_1))\,\sigma_n - a_3 \ge 0, \end{cases} \quad (8.46)
we obtain
T_1 \le -\delta_0 s_1^T s_1 - \delta_1\tilde q_1^T\tilde q_1 - \sigma_p\dot{\tilde q}_1^T\tilde q_1 - s_1^T Y_d(q_{1d},\dot q_{1d},\ddot q_{1d})\tilde\theta_1 - Y_{2d}(q_1,\dot q_1,q_{1d},q_{2d})\tilde\theta_k - (\sigma_v + \sigma_n\tilde q_1^T\tilde q_1)\,s_1^T Y_{4f}(t)\tilde\theta_4. \quad (8.47)
\dot V(s_1,s_2,\tilde q_1,\tilde\theta,\tilde q_2) \le -\delta_0 s_1^T s_1 - \delta_1\tilde q_1^T\tilde q_1 - s_1^T Y_d(q_{1d},\dot q_{1d},\ddot q_{1d})\tilde\theta_1 - Y_{2d}(q_1,\dot q_1,q_{1d},q_{2d})\tilde\theta_k - (\sigma_v + \sigma_n\tilde q_1^T\tilde q_1)\,s_1^T Y_{4f}(t)\tilde\theta_4 + \tilde\theta^T\dot{\tilde\theta} + s_2^T T_2, \quad (8.48)
with
T_2 = \det(M(q_1))\,J\dot s_2 + \frac{J}{2}\,\frac{d}{dt}\big[\det(M(q_1))\big]\,s_2 - K(\tilde q_1 - \tilde q_2). \quad (8.49)
Let us define
where the precise definition of θ5 and θ6 will be given later. Let us introduce the
parameter update laws:
\dot{\hat\theta}_1(t) = Y_d^T(q_{1d}(t),\dot q_{1d}(t),\ddot q_{1d}(t))\,s_1(t)
\dot{\hat\theta}_4(t) = (\sigma_v + \sigma_n\tilde q_1^T\tilde q_1)\,Y_{4f}^T(t)\,s_1(t), \quad (8.51)
from Lemma 8.4. Now let us introduce (8.50), (8.51) and (8.17) into (8.48), in order
to obtain
V̇ (s1 , s2 , q̃1 , θ̃ , q̃2 ) ≤ −δ0 s1T s1 − δ1 q̃1T q̃1 + θ̃5T θ̃˙5 + θ̃6T θ̃˙6 + s2T T2 , (8.52)
has been used. The expression for the controller is obtained from the following
lemma.
with \det(M(q_1)) = \theta_5^T h(q_1) > \alpha^n for some \alpha > 0 and all q_1 \in \mathbb{R}^n. The vectors \theta_5
and θ6 are unknown parameters and h(q1 ) and Y6 (q1 , q̇1 , q2 , q̇2 ) are known functions.
q̈1 . Similarly q̇2d is a measurable signal (i.e., a function of positions and velocities);
see (8.39), Lemma 8.4, (8.42) and (8.17). However, notice that det(M(q1 ))q̈1 is a
function of q1 , q̇1 , and q2 . Thus q̈2d is a function of positions and velocities only. We
conclude that T2 can indeed be written in a compact form as in (8.54).
In view of Lemma 8.6, we obtain
\dot V(s_1,s_2,\tilde q_1,\tilde\theta,\tilde q_2) \le -\delta_0 s_1^T s_1 - \delta_1\tilde q_1^T\tilde q_1 + \tilde\theta_5^T\dot{\hat\theta}_5 + \tilde\theta_6^T\dot{\hat\theta}_6 + s_2^T\,\theta_5^T h(q_1)\,u + s_2^T Y_6(q_1,\dot q_1,q_2,\dot q_2)\theta_6
= -\delta_0 s_1^T s_1 - \delta_1\tilde q_1^T\tilde q_1 + \tilde\theta_5^T\dot{\hat\theta}_5 + \tilde\theta_6^T\dot{\hat\theta}_6 - \tilde\theta_5^T h(q_1)u^T s_2 + \hat\theta_5^T h(q_1)u^T s_2 + s_2^T Y_6(q_1,\dot q_1,q_2,\dot q_2)\theta_6. \quad (8.56)
Introducing the parameter adaptation laws in (8.14) and (8.15), and the adaptive
control law in (8.16), into (8.56), we get
\dot V(s_1,s_2,\tilde q_1,\tilde\theta,\tilde q_2) \le -\delta_0 s_1^T s_1 - \delta_1\tilde q_1^T\tilde q_1 + \tilde\theta_5^T\big[\dot{\hat\theta}_5 - h(q_1)u^T s_2\big], \quad (8.57)
where we recall that proj[Λ, z] denotes the orthogonal projection on the hyperplane
tangent to bd(Λ) at z, and proj⊥ [Λ, z] is the component of z that is perpendicular to
this hyperplane at z. Then using (8.15), we obtain
\tilde\theta_5^T\big[\dot{\hat\theta}_5 - h(q_1)u^T s_2\big] = \begin{cases} 0 & \text{if } \hat\theta_5 \in \mathrm{Int}(\Lambda)\\ -\tilde\theta_5^T\,\mathrm{proj}^{\perp}[\Lambda,\, h(q_1)u^T s_2] \le 0 & \text{if } \hat\theta_5 \in \mathrm{bd}(\Lambda) \text{ and } (h(q_1)u^T s_2)^T\hat\theta_5^{\perp} > 0. \end{cases} \quad (8.59)
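The sign property exploited in (8.59) is purely a consequence of convexity: removing the outward-normal component of the gradient can only decrease the update along the parameter error. A minimal numerical illustration (the half-plane Λ = {v : v₁ ≥ 1}, the sampled boundary points, and the gradients are all made up for the test):

```python
import random

# Illustration of the sign property used in (8.59): for a convex set Lambda
# (here the made-up half-plane {v : v[0] >= 1}), a true parameter theta5 in
# Lambda, an estimate on bd(Lambda), and a gradient g pointing out of Lambda,
# replacing g by its tangential projection can only decrease the update along
# thetatilde = thetahat5 - theta5, i.e. thetatilde^T (proj(g) - g) <= 0.
random.seed(0)
nrm = (-1.0, 0.0)        # outward unit normal of the boundary {v[0] = 1}
worst, checked = -1.0, 0
for _ in range(1000):
    theta5 = (1.0 + random.random(), random.uniform(-2.0, 2.0))  # in Lambda
    that = (1.0, random.uniform(-2.0, 2.0))                      # on bd(Lambda)
    g = (random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0))
    gn = g[0] * nrm[0] + g[1] * nrm[1]
    if gn <= 0:
        continue          # gradient points inward: no projection needed
    proj = (g[0] - gn * nrm[0], g[1] - gn * nrm[1])   # tangential component
    ttilde = (that[0] - theta5[0], that[1] - theta5[1])
    val = sum(t * (p - q) for t, p, q in zip(ttilde, proj, g))
    worst = max(worst, val)
    checked += 1
print(worst)   # never positive: projection only helps the Lyapunov argument
```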
It immediately follows from (8.12), (8.61), Lemma 4.8 and Theorem 4.10 that θ̃ (·),
s1 (·), s2 (·), q̃1 (·), q̃˙1 (·), q̃˙2 (·) and q̃2 (·) are bounded functions of time on [0, +∞).4
Moreover s1 ∈ L2 . Using the same reasoning as in the proof of the fixed parameters
Slotine and Li or Lozano and Brogliato schemes, we deduce that q̃1 (t) → 0 as
t → +∞. It is deduced from (8.25) that the term \frac{1}{1+s}[q_2] is bounded, so that q_{2d} is bounded also, and consequently both q_2(·) and \ddot q_1(·) are bounded. The boundedness
bounded also, and consequently both q2 (·) and q̈1 (·) are bounded. The boundedness
of q̇2 (·) follows from differentiating (8.42), which proves that q̇2d (·) is bounded.
4 It is clear that the desired trajectory q_{1d}(t) and its first and second derivatives are chosen as bounded functions of time.
Thus q̇2 (·) is bounded. The boundedness of the controller u can be inferred from
(8.16). One deduces that q̈2 (·) is bounded on [0, +∞).
8.1.2.1 Recapitulation
The closed-loop system that results from the controller defined in (8.14)–(8.16) and
(8.51) does not have a form as simple and intuitive as the closed-loop system of the
Slotine and Li adaptive controller, or of the closed-loop system of the Lozano and
Brogliato fixed-parameters controller. This seems, however, to be an intrinsic property of the adaptive scheme for (6.97), because one needs to invert the first dynamical equation to avoid measuring the acceleration \ddot q_1(t). Consequently, the matrix
M −1 (q1 ) necessarily appears in the fixed-parameters scheme, and it is a nonlinear-in-
the-parameters function. The adaptation for the matrix K may be avoided in practice
if one is able to estimate it accurately enough. But the linearity-in-the-parameters
issue is unavoidable and intrinsic to such controlled dynamics.
After a certain number of manipulations based on the above developments, we
may write the closed-loop dynamics as follows:
M(q_1(t))\dot s_1(t) + C(q_1(t),\dot q_1(t))s_1(t) = K(\tilde q_2 - \tilde q_1) + \tilde K(q_{1d}(t) - q_{2d}(t)) - (\sigma_v + \sigma_n\tilde q_1^T\tilde q_1)Y_{4f}(t)\tilde\theta_4(t) - Y_d(q_{1d}(t),\dot q_{1d}(t),\ddot q_{1d}(t))\tilde\theta_1(t) - \sigma_p\tilde q_1 - (\sigma_v + \sigma_n\tilde q_1^T\tilde q_1)M(q_1(t))s_1(t) + \Delta W(q_1(t),\dot q_1(t),q_{1d}(t),\dot q_{1d}(t),\ddot q_{1d}(t)),
with \Delta W(q_1(t),\dot q_1(t),q_{1d}(t),\dot q_{1d}(t),\ddot q_{1d}(t)) = M(q_1(t))[\ddot q_{1d}(t) - \lambda\dot{\tilde q}_1(t)] + C(q_1(t),\dot q_1(t))[\dot q_{1d}(t) - \lambda\tilde q_1(t)] - g(q_1(t)) + Y_d(q_{1d}(t),\dot q_{1d}(t),\ddot q_{1d}(t))\theta_1,
\tilde\theta_5^T(t)h(q_1(t))u(t) + Y_6(q_1(t),\dot q_1(t),q_2(t),\dot q_2(t))\tilde\theta_6(t) = 0,
the update laws in (8.14), (8.15), (8.17) and (8.51),
\dot{\tilde q}_i(t) = -\lambda\tilde q_i(t) + s_i(t), \; i = 1, 2, \quad (8.61)
where we recall that Yd (q1d (t), q̇1d (t), q̈1d (t))θ1 = −M(q1d )q̈1d − C(q1d , q̇1d )q̇1d −
g(q1d ), see (8.31). It is worth comparing (8.61) with (7.142) to measure the gap
between adaptive control and fixed-parameter control, and comparing (8.61) with
(8.7) to measure the gap between the flexible-joint case and the rigid-joint case.
Remark 8.7 As we saw in Sect. 7.6.1, the fixed parameters Lozano and Brogliato
scheme is a passivity-based controller using a backstepping design method. The
adaptive scheme is a highly nontrivial extension, where the linearity in the parameters
and the unknown stiffness matrix issues imply the use of very specific update laws,
and hamper the direct application of backstepping methods designed elsewhere for
some classes of nonlinear systems.
Let us now investigate how the backstepping approach may be used to solve the
adaptive control problem for flexible-joint manipulators. We will assume that K is
a known matrix. We have to solve two main problems in order to extend the fixed-
parameter scheme presented in Sect. 7.6.2 toward an adaptive version:
To solve (1), we can use the idea introduced in [12] which consists of adding the
determinant of the inertia matrix det(M(q1 )) in the Lyapunov function V1 (·) (see
the foregoing section on the adaptive passivity-based scheme). As we explained
earlier, the nonlinearity in the unknown parameters comes from the terms containing
the inverse of the inertia matrix M −1 (q1 ). Premultiplying by det(M(q1 )) allows us
to retrieve LP terms, as det(M(q1 ))M −1 (q1 ) is indeed LP (the price to pay is an
overparametrization of the controller). Moreover (2) implies that q2d (see (7.147))
and e2d (see after (7.150)) are available online, and thus do not depend on unknown
parameters. We can proceed as follows:
• Step 1: The right-hand side of (6.97) can be written as Y1 (q1 , q̇1 , t)θ1∗ . Thus, we
choose q2d in (7.147) as
K q2d = Y1 (q1 , q̇1 , t)θ̂1 (8.62)
Adding ±Y1 (·)θ1∗ to the right-hand side of the first equation in (6.97) and differ-
entiating (8.63), one obtains:
M(q_1(t))\dot s_1(t) + C(q_1(t),\dot q_1(t))s_1(t) + \lambda_1 s_1(t) = K\tilde q_2(t) - Y_1(t)\tilde\theta_1(t)
\dot{\tilde q}_2(t) = \dot q_2(t) - K^{-1}\frac{d}{dt}\big(Y_1(t)\hat\theta_1(t)\big). \quad (8.64)
• Step 2: Now consider e2d defined after (7.150). The first two terms are available
but the third term is a function of unknown parameters and it is not LP (it contains
M −1 ). Assume now that V2 is replaced by
V_{2a} = V_r(\tilde q_1, s_1, t) + \frac12\,\tilde\theta_1^T\tilde\theta_1 + \frac12\det(M(q_1))\,\tilde q_2^T\tilde q_2. \quad (8.65)
Setting q̇2 = e2d + e2 , i.e., q̃˙2 = e2d + e2 − K −1 dtd (Y1 θ̂1 ), we get along trajecto-
ries of (8.64):
\dot V_{2a} \le -\lambda_1\dot{\tilde q}_1^T\dot{\tilde q}_1 - \lambda_2\lambda_1\tilde q_1^T\tilde q_1 - s_1^T Y_1\tilde\theta_1 + \dot{\hat\theta}_1^T\tilde\theta_1 + \tilde q_2^T K s_1 + \tilde q_2^T\det(M(q_1))\,e_2 + \tilde q_2^T\det(M(q_1))\,[e_{2d} - \dot q_{2d}] + \tilde q_2^T\,\frac{d}{dt}\Big\{\frac{\det(M(q_1))}{2}\Big\}\tilde q_2. \quad (8.66)
Let us denote det(M) = Y2 (q1 )θ2∗ , and choose
where
Y_3(q_1,\dot q_1,q_2,t)\,\theta_3^* = \frac{d}{dt}\Big\{\frac{\det(M(q_1))}{2}\Big\}\tilde q_2 - \det(M(q_1))\,\dot q_{2d} + K s_1. \quad (8.68)
Choose also
θ̂˙1 (t) = Y1T (q1 (t), q̇1 (t), t)s1 (t) (8.69)
Thus, we obtain
\dot V_{2a} \le -\lambda_1\dot{\tilde q}_1^T\dot{\tilde q}_1 - \lambda_2\lambda_1\tilde q_1^T\tilde q_1 + \tilde q_2^T\det(M(q_1))\,e_2 + \tilde q_2^T\big[Y_2\theta_2^*\,e_{2d} + Y_3\theta_3^*\big], \quad (8.70)
(we drop the arguments for convenience). Introducing ±q̃2T Y2 θ̂2 e2d we obtain
θ̂˙3 (t) = −Y3T (q1 (t), q̇1 (t), q2 (t), t)q̃2 (t) (8.72)
We therefore obtain
V̇3a ≤ −λ1 q̃˙1T q̃˙1 − λ2 λ1 q̃1T q̃1 + q̃2T det(M(q1 ))e2 − q̃2T q̃2 . (8.74)
Remark 8.8 In order to avoid any singularity in the control input, the update law in
(8.73) has to be modified using a projection algorithm, assuming that θ2∗ belongs to a
known convex domain. We refer the reader to the foregoing section for details about
how this domain may be calculated, and the stability analysis related to the projection.
For the sake of clarity of this presentation, we do not introduce this modification here,
although we know it is necessary for the implementation of the algorithm.
• Step 3: At this stage our goal is partially reached, as we have defined signals q̃2
and e2 available online. Now consider the function
V_{4a} = V_{3a} + \frac12\det(M(q_1))\,e_2^T e_2. \quad (8.75)
We obtain
\dot V_{4a} \le -\lambda_1\dot{\tilde q}_1^T\dot{\tilde q}_1 - \lambda_2\lambda_1\tilde q_1^T\tilde q_1 + \tilde q_2^T\det(M(q_1))\,e_2 - \tilde q_2^T\tilde q_2 + e_2^T\det(M(q_1))\,[v - \dot e_{2d}] + e_2^T\,\frac{d}{dt}\Big\{\frac{\det(M(q_1))}{2}\Big\}e_2. \quad (8.76)
Notice that
-\det(M(q_1))\,\dot e_{2d} + \frac{d}{dt}\Big\{\frac{\det(M(q_1))}{2}\Big\}e_2 = Y_4(q_1,\dot q_1,q_2,\dot q_2)\,\theta_4^*, \quad (8.77)
for some matrix Y_4 and parameter vector \theta_4^* of suitable dimensions. Let us denote this time
det(M) = Y2 (q1 )θ5∗ (this is strictly equal to Y2 (q1 )θ2∗ defined above, but we choose
a different notation because the estimate of θ5∗ is going to be chosen differently).
Let us choose v = −q̃2 + w and
We obtain
V̇4a ≤ −λ1 q̃˙1T q̃˙1 − λ2 λ1 q̃1T q̃1 − q̃2T q̃2 − e2T wY2 θ̃5 + e2T Y4 θ̃4 − e2T e2 . (8.79)
θ̂˙4 (t) = −Y4T (q1 (t), q̇1 (t), q2 (t), q̇2 (t))e2 (t) (8.81)
(a projection algorithm has to be applied to θ̂5 ; see Remark 8.8 above). We obtain
To conclude this section, one may say that the backstepping procedure does not bring
much more than the passivity-based one to the adaptive control problem for flexible-
joint Lagrangian systems. The fact that the fictitious input q_{2d} is premultiplied by an unknown term K creates a difficulty that has been solved in [12], but has never
been tackled in the “backstepping control” literature. The linearity-in-the-parameters
problem solution also is an original one, motivated by the physics of the process,
and whose solution also was proposed in [12] and nowhere else, to the best of the
authors’ knowledge.
The problem of adaptive control of linear invariant systems has been a very active field
of research since the beginning of the 1960s. Two paths have been followed: the indi-
rect approach which consists of estimating the process parameters, and using those
estimated values into the control input, and the direct approach that we described
in the introduction of this chapter. The direct approach has many attractive features,
among them the nice passivity properties of the closed-loop system, which actually
is a direct consequence of Lemma 7.23. This is what we develop now.
Before passing to more general classes of systems, let us reconsider the following
first-order system similar to the one presented in Sect. 1.4:
where x(t) ∈ R, a ∗ and b∗ are constant parameters, and u(t) ∈ R is the input signal.
The control objective is to make the state x(·) track some desired signal xd (·) ∈ R
defined as follows:
ẋd (t) = −xd (t) + r (t), (8.85)
where r (·) is some time function. Let us assume first that a ∗ and b∗ are known to the
designer and define the tracking error as e = x − xd . Then, it is easy to see that the
input
u = \frac{1}{b^*}\,\big(r - (a^* + 1)x\big) \quad (8.86)
8.2 Linear Invariant Systems 595
forces the closed-loop to behave like ė(t) = −e(t) so that e(t) → 0 as t → +∞. Let
us assume now that a ∗ and b∗ are unknown to the designer, but that it is known that
b^* > 0. Let us rewrite the input in (8.86) as u = \theta^{*T}\phi, where \theta^{*T} = \big(-\frac{a^*+1}{b^*},\ \frac{1}{b^*}\big) and \phi = (x, r)^T are the vector of unknown parameters and the regressor, respectively.
Since the parameters are unknown, let us choose (following the so-called certainty
equivalence principle, which is not a principle but mainly a heuristic method) the
control as
where θ̃ = θ̂ − θ ∗ . The reader may have a look now at (8.2) and (8.7) to guess what
will follow. The dynamics in (8.89) may be rewritten as [e](s) = \frac{b^*}{1+s}[\tilde\theta^T\phi](s), where [\cdot](s) denotes the Laplace transform and s \in \mathbb{C}. Consequently, a gradient estimation algorithm should suffice to enable one to analyze the closed-loop scheme with the Passivity Theorem, since \frac{b^*}{1+s} is SPR. Let us choose
As shown in Sect. 4.2.1, this defines a passive operator e → −θ̃ T φ. The rest of the
stability analysis follows as usual (except that since we deal here with a time-varying
system, one has to resort to Barbalat’s Lemma to prove the asymptotic convergence
of e(·) toward 0. The ZSD property plus the Krasovskii–LaSalle invariance Lemma do
not suffice, so that the various results exposed in Sect. 5.1 cannot be directly applied).
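The scalar scheme above is easy to simulate. The sketch below uses plain Euler integration; the plant values a* = 2, b* = 3, the adaptation gain, and the reference r(t) = sin(t) are made-up choices, and the gradient law θ̂̇ = −γφe assumes b* > 0 as in the text:

```python
import math

# Simulation of the scalar MRAC: plant xdot = a*x + b*u with unknown a*, b*,
# reference model (8.85) xdot_d = -x_d + r, certainty-equivalence input
# u = thetahat^T phi, and gradient update thetahat_dot = -gamma*phi*e.
a_star, b_star = 2.0, 3.0            # unknown to the controller, b* > 0
r = lambda t: math.sin(t)            # bounded reference input
gamma, dt, T = 2.0, 1e-3, 60.0
x, xd = 0.0, 0.0                     # plant state and model state
th = [0.0, 0.0]                      # estimates of (-(a*+1)/b*, 1/b*)
for k in range(int(T / dt)):
    t = k * dt
    e = x - xd                       # tracking error
    phi = (x, r(t))                  # regressor
    u = th[0] * phi[0] + th[1] * phi[1]        # certainty-equivalence input
    x += dt * (a_star * x + b_star * u)        # plant
    xd += dt * (-xd + r(t))                    # reference model (8.85)
    th[0] -= dt * gamma * phi[0] * e           # gradient update law
    th[1] -= dt * gamma * phi[1] * e
print(abs(x - xd))   # tracking error at t = 60; theory gives e(t) -> 0
```

Note that the Lyapunov argument caps the transient: with V = e²/2 + (b*/2γ)θ̃ᵀθ̃ one gets V̇ = −e², so the simulated error stays bounded and converges even though the parameter estimates need not converge to θ*.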
Remark 8.9 • The system in (8.85) is called a reference model, and this adaptive technique is called Model Reference Adaptive Control (MRAC), a term coined by Landau [15].
• One can easily deduce the storage functions associated with each subsystem, and
form a Lyapunov candidate function for the overall closed-loop scheme.
• One may also proceed with a Lyapunov function analysis, and then retrieve the
passivity interpretation using the results in Sect. 7.3.3.
• We have supposed that b∗ > 0. Clearly, we could have supposed b∗ < 0. However,
when the sign of b∗ is not known, then the design becomes much more involved.
A solution consists of an indirect adaptive scheme with a modified estimation
algorithm [16]. The above passivity design is lost in such schemes.
H(s) = k\,\frac{B(s)}{A(s)} = C^T(sI_n - A)^{-1}B, \quad (8.92)
where s is the Laplace variable. The constant k is the high-frequency gain of the
system, and we assume in the following that
• k > 0.
• A(s) and B(s) are monic polynomials, and B(s) is Hurwitz (the system has strictly
stable zero dynamics), with known order m = n − 1.
The problem is basically that of canceling the dynamics of the process with a suit-
able dynamic output feedback in order to get a closed-loop system, whose dynamics
matches that of a given reference model with input r (t) and output ym (t). The refer-
ence model transfer function is given by
H_m(s) = k_m\,\frac{B_m(s)}{A_m(s)}, \quad (8.93)
where Hm (s) is chosen as a SPR transfer function. The control problem is that
of output tracking, i.e., one desires to find a differentiator-free dynamic output feedback such that all closed-loop signals remain bounded, and such that \lim_{t\to+\infty}|y(t) - y_m(t)| = 0. It is clear that one chooses r(t) bounded so that y_m(t) is bounded too. Since the parameters of the polynomials A(s) and B(s) as well as
k are unknown, the exact cancelation procedure cannot be achieved. Actually the
problem can be seen as follows: in the ideal case when the process parameters are
known, one is able to find out a dynamic output controller of the following form:
\begin{cases} u(t) = \theta^T\phi(t)\\ \dot\omega_1(t) = \Lambda\omega_1(t) + b\,u(t), \qquad \dot\omega_2(t) = \Lambda\omega_2(t) + b\,y(t)\\ \phi = (r, \omega_1^T, y, \omega_2^T)^T, \qquad \theta = (k_c, \theta_1^T, \theta_0, \theta_2^T)^T \end{cases} \quad (8.94)
with ω1 (t), θ1 , θ2 and ω2 (t) ∈ Rn−1 , θ0 ∈ R, and (Λ, b) is controllable. One sees
immediately that u(·) in (8.94) is a dynamic output feedback controller with a feed-
forward term. The set of gains (k_c, \theta_1, \theta_0, \theta_2) can be properly chosen such that the
closed-loop transfer function is
H_0(s) = \frac{k_c\,k\,B(s)\lambda(s)}{(\lambda(s) - C(s))A(s) - kB(s)D(s)} = H_m(s), \quad (8.95)
where the transfer function of the feedforward term is given by \frac{\lambda(s)}{\lambda(s) - C(s)}, while that of the feedback term is given by \frac{D(s)}{\lambda(s)}. C(s) has order n − 2 and D(s) has order n − 1. Notice that \lambda(s) is just the characteristic polynomial of the matrix \Lambda, i.e., \lambda(s) = \det(sI_{n-1} - \Lambda), and is therefore Hurwitz. We do not develop further the model
matching equations here (see e.g., [17] or [18] for details). Let us just denote the
set of “ideal” controller parameters such that (8.95) holds as θ ∗ . In general, those
gains will be combinations of the process parameters. Let us now write down the
state space equations of the whole system. Notice that we have
\dot z(t) \stackrel{\Delta}{=} \begin{pmatrix}\dot x(t)\\ \dot\omega_1(t)\\ \dot\omega_2(t)\end{pmatrix} = \begin{pmatrix} A & 0 & 0\\ 0 & \Lambda & 0\\ bC^T & 0 & \Lambda\end{pmatrix} z(t) + \begin{pmatrix} B\\ b\\ 0\end{pmatrix} u(t), \quad (8.96)
Now since the process parameters are unknown, so is θ ∗ . The controller in (8.94)
is thus replaced by its estimated counterpart, i.e., u = \hat\theta^T\phi. This gives rise to exactly
the same closed-loop structure as in (8.97), except that θ ∗ is replaced by θ̂ . Notice
that the system in (8.97) is not controllable nor observable, but it is stabilizable and
detectable. Also, its transfer function is exactly equal to H0 (s) when the input is r (t)
and the output is y. This is therefore an SPR transfer function. Now, we have seen in
the manipulator adaptive control case that the classical way to proceed is to add and
subtract \theta^{*T}\phi on the right-hand side of (8.97) in order to get (see (8.96) and (8.97))
a system of the form
where Am is given in the right-hand side of (8.97) while Bm is in the right-hand side
of (8.96) (actually in (8.97) the input matrix is given by Bm k ∗ ). We are now ready to
set the last step of the analysis: to this end, notice that we can define the same type of
dynamical structure for the reference model as the one that has been developed for
the process. One can define filters of the input r (t) and of the output ym (t) similarly
to the ones in (8.94). Let us denote their state as ω1m (·) and ω2m (·), whereas the total
reference model state will be denoted as z m (·). In other words, one is able to write
Defining e(t) = z(t) − z m (t) and introducing (8.99) into (8.98) one gets the follow-
ing error equation:
ė(t) = Am e(t) + Bm θ̃ T (t)φ(t). (8.100)
This needs to be compared with (8.7) and (8.2). Let us define the signal e1 = CmT e =
C T (x − xm ): clearly the transfer function CmT (s I3n−2 − Am )−1 Bm is equal to Hm (s)
which is SPR by definition. Hence, the subsystem in (8.100) is strictly passive with
input θ̃ T φ and output e1 (in the sense of Lemma 4.94) and is also OSP since it has
relative degree r = 1 (see Example 4.69). A gradient estimation algorithm of the form
θ̃˙ (t) = −λ1 φ(t)e1 (t), (8.101)
where \lambda_1 > 0, is passive with respect to the supply rate u_2 y_2 with y_2 = -\tilde\theta^T\phi and
u 2 = e1 . Due to the stabilizability properties of the first block in (8.100), it follows
from the Meyer–Kalman–Yakubovich Lemma that the overall system is asymptotically stable. Indeed there exists a storage function V_1(e) = e^T P e associated with the first block, such that V(e, \tilde\theta) = V_1(e) + \frac12\tilde\theta^T\tilde\theta is a Lyapunov function for the system in (8.100) and (8.101), i.e., one gets \dot V(t) = -e^T q q^T e \le 0 (see (3.99)). Notice
that in general the closed-loop system is not autonomous, hence the Krasovskii–
LaSalle Theorem does not apply. One has to resort to Barbalat’s Lemma (see the
Appendix) to prove the asymptotic convergence of the tracking error e toward 0.
Notice also that the form of V̇ (t) follows from a cross-terms cancelation, so that
Lemma 7.23 directly applies.
Let us now concentrate on the case when the plant in (8.91) and (8.92) has relative
degree two. Let us pass over the algebraic developments that allow one to show
that there is a controller such that when the process parameters are known, then the
closed-loop system has the same transfer function as the model reference. Such a
controller is a dynamic output feedback of the form u = θ ∗T φ. It is clear that one can
repeat exactly the above relative degree one procedure, to get a system as in (8.100)
and (8.101). However, this time, Hm (s) cannot be chosen as a SPR transfer function,
since it has relative degree two! Thus, the interconnection interpretation through the
Passivity Theorem no longer works. The basic idea is to modify the input u(·) so that
the transfer function between the estimator output and the first block output e1 , is
no longer Hm (s) but (s + a)Hm (s), for some a > 0 such that (s + a)Hm (s) is SPR.
To this end let us define a filtered regressor \bar\phi = \frac{1}{s+a}[\phi], i.e., \dot{\bar\phi} + a\bar\phi = \phi. Since we
aim at obtaining a closed-loop system such that e1 = Hm (s)(s + a)θ̃ T φ̄, let us look
for an input that realizes this goal:
will be suitable. Indeed, one can proceed as for the relative degree one case, i.e., add
and subtract \theta^{*T}\phi to u in order to get \dot z(t) = A_m z(t) + B_m\big(\dot{\tilde\theta}^T(t)\bar\phi(t) + \tilde\theta^T(t)\phi(t)\big),
such that the transfer function between θ̃ T φ̄ and e1 is Hm (s)(s + a). Then the update
law can be logically chosen as
the presentation of the closed-loop error equations: the whole developments would
take us too far.
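The filtered-regressor construction rests on the swapping identity (s + a)[θ̃ᵀφ̄] = θ̃̇ᵀφ̄ + θ̃ᵀφ, which follows directly from φ̄̇ + aφ̄ = φ. A quick numerical check (the signals θ̃(t) and φ(t) below are arbitrary test data, not taken from the scheme):

```python
import math

# Numerical check of the swapping identity behind the filtered regressor:
# with phibar = (1/(s+a))[phi], i.e. phibar_dot = -a*phibar + phi, one has
#   (s+a)[thetatilde^T phibar] = thetatilde_dot^T phibar + thetatilde^T phi.
a, dt, T = 2.0, 1e-4, 5.0
phi = lambda t: (math.sin(t), math.cos(3.0 * t))
th = lambda t: (math.exp(-t), math.sin(2.0 * t))          # plays thetatilde
thd = lambda t: (-math.exp(-t), 2.0 * math.cos(2.0 * t))  # its derivative
pb = [0.0, 0.0]                                           # phibar, zero init
prev, err = None, 0.0
for k in range(int(T / dt)):
    t = k * dt
    lhs = th(t)[0] * pb[0] + th(t)[1] * pb[1]             # thetatilde^T phibar
    if prev is not None:
        left = (lhs - prev) / dt + a * lhs                # (s+a) acting on lhs
        right = (thd(t)[0] * pb[0] + thd(t)[1] * pb[1]
                 + th(t)[0] * phi(t)[0] + th(t)[1] * phi(t)[1])
        err = max(err, abs(left - right))
    prev = lhs
    pb = [p + dt * (-a * p + f) for p, f in zip(pb, phi(t))]
print(err)   # O(dt): the identity holds up to discretization error
```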
V (z, ε, θ̃ , η̃, ζ̃ ) = Vz (z) + Vε (ε) + Vθ̃ (θ̃) + Vη̃ (η̃) + Vζ̃ (ζ̃ ), (8.106)
with Vz (·), Vε (·), Vθ̃ (·), Vη̃ (·), Vζ̃ (·) positive definite functions, λi > 0, 1 ≤ i ≤,
λε > 0, λη̃ > 0, λζ̃ > 0.
Now let us have a look at the equations in (8.105): note that we can rewrite
the closed-loop system similarly as in (7.47) and (7.48) as follows (ē1 is the first
component vector in Rr ):
\begin{pmatrix}\dot z(t)\\ \dot{\tilde\eta}(t)\\ \dot{\tilde\zeta}(t)\end{pmatrix} = \begin{pmatrix} A & 0 & 0\\ e_n\bar e_1^T & A_0 & 0\\ \bar b\,\bar e_1^T & 0 & A_b\end{pmatrix}\begin{pmatrix} z(t)\\ \tilde\eta(t)\\ \tilde\zeta(t)\end{pmatrix} + \begin{pmatrix} b\,\omega^T\tilde\theta(t)\\ 0_n\\ 0_m\end{pmatrix} + \begin{pmatrix} b\,\varepsilon_2(t)\\ 0_n\\ 0_m\end{pmatrix} \quad (8.108)
Thus we can conclude from Lemma 7.23 that the closed-loop system can be transformed into a system in P.5 With the notations of the preceding section, we get V_1 = V_{\tilde\theta} = \frac12\tilde\theta^T\Gamma^{-1}\tilde\theta, V_2 = V_z + V_{\tilde\eta} + V_{\tilde\zeta}, y_2 = -u_1 = z, y_1 = u_2 = b\,\omega^T\tilde\theta, g_1 = \Gamma\,\omega b^T (so that \frac{\partial V_1}{\partial\tilde\theta} = \Gamma^{-1}\tilde\theta), g_2 = \begin{pmatrix} I_r\\ 0_{n\times r}\\ 0_{m\times r}\end{pmatrix}, and \frac{\partial V_2}{\partial x_2} = \Big(\frac{\partial V_z}{\partial z}^T,\ \frac{\partial V_{\tilde\eta}}{\partial\tilde\eta}^T,\ \frac{\partial V_{\tilde\zeta}}{\partial\tilde\zeta}^T\Big)^T. The cross-terms cancelation equality is verified, as \frac{\partial V_1}{\partial x_1}^T g_1 u_1 = -\frac{\partial V_2}{\partial x_2}^T g_2 u_2 = -z^T b\,\omega^T\tilde\theta.
Similarly to the foregoing case, we only present here the closed-loop equations with-
out entering into the details on how the different terms are obtained. The interested
reader can consult the original papers [20, 21] for a comprehensive study of high-order
tuners. The closed-loop equations are the following:
\dot e(t) = -\lambda e(t) + q_0\tilde\theta^T(t)\omega(t) + q_0\sum_{i=1}^{m}\omega_i(t)\,\bar c\,z_i(t) + \varepsilon \quad (8.110)
\dot z_i(t) = \bar A z_i(t)\big(1 + \mu\omega_i^2(t)\big) - \mathrm{sign}(q_0)\,\bar A^{-1}\bar b\,\omega_i(t)\,e(t), \quad i \in \bar m, \quad (8.111)
where \bar m = \{1, \ldots, m\}, e is the scalar tracking error, \lambda > 0, q_0 is the high-frequency
gain of the open-loop system, |q0 | ≤ q̄, k ∈ Rm is the vector of estimated parameters
to be tuned, h(·) is an internal signal of the high-order tuner, θ̃ = h − q P , q P ∈ Rm
is a vector of unknown parameters, (c̄, Ā, b̄) is the minimal realization of a stable
transfer function, ω ∈ Rm is a regressor, and ε is an exponentially decaying term due
to nonzero initial conditions. The terms k_i and h_i denote the ith component of k and h, respectively, whereas \mu is a constant satisfying \mu > \frac{2m\bar q\,\|\bar c\|\,\|P\bar A^{-1}\bar b\|}{\lambda}. It is proved
in [20] that the system in (8.110) through (8.113) is stable using the function
V(e,\tilde\theta,z) = e^2 + |q_0|\,\tilde\theta^T\tilde\theta + \delta\sum_{i=1}^{m} z_i^T P z_i, \quad (8.114)
5 \varepsilon can be seen as an L_2-bounded disturbance and is therefore not important in our study.
\dot V(e,\tilde\theta,z) \le -\lambda' e^2 + \frac{1}{\lambda}\,\varepsilon^2, \quad (8.115)
with \lambda' = \lambda - \frac{2m\bar q\,\|\bar c\|\,\|P\bar A^{-1}\bar b\|}{\mu}. Now let us rewrite the system in (8.110) through
(8.113) as follows:
\begin{pmatrix}\dot e(t)\\ \dot z_1(t)\\ \dot z_2(t)\\ \vdots\\ \dot z_m(t)\end{pmatrix} = \begin{pmatrix} -\lambda & q_0\omega_1\bar c & \ldots & \ldots & q_0\omega_m\bar c\\ -\mathrm{sgn}(q_0)\bar A^{-1}\bar b\,\omega_1 & \bar A(1+\mu\omega_1^2) & 0 & \ldots & 0\\ -\mathrm{sgn}(q_0)\bar A^{-1}\bar b\,\omega_2 & 0 & \bar A(1+\mu\omega_2^2) & \ldots & 0\\ \vdots & \vdots & & \ddots & \vdots\\ -\mathrm{sgn}(q_0)\bar A^{-1}\bar b\,\omega_m & 0 & \ldots & 0 & \bar A(1+\mu\omega_m^2)\end{pmatrix}\begin{pmatrix} e(t)\\ z_1(t)\\ z_2(t)\\ \vdots\\ z_m(t)\end{pmatrix} + \begin{pmatrix} q_0\tilde\theta^T(t)\omega(t)\\ 0\\ 0\\ \vdots\\ 0\end{pmatrix} + \begin{pmatrix} \varepsilon(t)\\ 0\\ 0\\ \vdots\\ 0\end{pmatrix} \quad (8.116)
θ̃˙ (t) = −sgn(q0 )ω(t)e(t). (8.117)
We conclude from Corollary 4 that the system in (8.116) and (8.117) belongs to P, with V_1 = |q_0|\tilde\theta^T\tilde\theta, V_2 = e^2 + \delta\sum_{i=1}^{m} z_i^T P z_i, g_1 = -q_0\omega, g_2 = (1, 0, \ldots, 0)^T, u_1 = -y_2 = -e, u_2 = y_1 = q_0\,\omega^T\tilde\theta. We can neglect \varepsilon in the analysis or consider it as an L_2-bounded disturbance.
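The losslessness of the gradient law (8.117) with storage V₁ = |q₀|θ̃ᵀθ̃ can be checked numerically: along (8.117) one gets V̇₁ = 2u₁y₁, so the storage change equals the supplied energy exactly. A sketch with arbitrary test signals (e(t), ω(t), q₀ and θ̃(0) below are made up):

```python
import math

# Check that the gradient law (8.117) is lossless from u1 = -e to
# y1 = q0 * omega^T thetatilde with storage V1 = |q0| thetatilde^T thetatilde:
# along (8.117), dV1/dt = 2*u1*y1.
q0 = -1.5
e = lambda t: math.sin(2.0 * t)
omega = lambda t: (math.cos(t), math.sin(3.0 * t))
dt, T = 1e-4, 3.0
th = [0.7, -0.4]                                  # thetatilde(0)
V0 = abs(q0) * (th[0] ** 2 + th[1] ** 2)
sgn = 1.0 if q0 > 0 else -1.0
supplied = 0.0
for k in range(int(T / dt)):
    t = k * dt
    w = omega(t)
    y1 = q0 * (w[0] * th[0] + w[1] * th[1])
    supplied += dt * 2.0 * (-e(t)) * y1           # integral of 2*u1*y1
    th[0] += dt * (-sgn * w[0] * e(t))            # update law (8.117)
    th[1] += dt * (-sgn * w[1] * e(t))
VT = abs(q0) * (th[0] ** 2 + th[1] ** 2)
print(abs((VT - V0) - supplied))   # ~0 up to Euler integration error
```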
Comparing Eqs. (8.108) and (8.109) and Eqs. (8.116) and (8.117), we conclude
that the closed-loop error equations in both cases are pretty much similar. However,
this similarity is limited to the closed-loop system stability analysis. First, the basic philosophies of the two schemes are very different from each other: roughly speaking,
the high-order tuners philosophy aims at rendering the operator between the tracking
error and the estimates strictly passive (using a control input that is the extension of
“classical” certainty equivalent control laws), while preserving stability of the overall
system with an appropriate update law. On the contrary, the backstepping method
is based on the use of a very simple “classical” update law (a passive gradient),
and the difficulty is to design a control input (quite different in essence from the
certainty equivalent control laws) which guarantees stability. Second, notice that θ̃
in (8.109) truly represents the unknown parameters estimates, while θ̃ in (8.117) is
the difference between the vector of unknown plant parameters and a signal h(·)
internal to the high-order update law (the control input being computed with the
estimates k and their derivatives, up to the plant relative degree minus one). Third,
the tracking error in the backstepping scheme is part of an r -dimensional differential
equation (see the first equation in (8.105)), while it is the solution of a first-order
equation in the high-order tuner method (see (8.110)).
In [21], it is proved that the high-order tuner that leads to the error equations in
(8.110) through (8.113) defines a passive operator between the tracking error e and
(k − qP)ᵀ ω, and that this leads to nice properties of the closed-loop system, such
as a guaranteed speed of convergence of the tracking error toward zero. In [22], it has
been shown that the backstepping method also possesses interesting transient
performance. Such results tend to prove that schemes based on passivity
properties possess nice closed-loop properties. Other types of adaptive controllers
using passivity have been studied in [23].
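As a toy illustration of the passive-gradient update law mentioned above (a sketch under simplifying assumptions, not one of the schemes (8.105)–(8.117) themselves), consider the scalar plant ẏ = ay + u with unknown a: the certainty-equivalent input u = −θ̂y − y and the gradient update θ̂̇ = γy² yield V = y²/2 + (a − θ̂)²/(2γ) with V̇ = −y² ≤ 0, so y → 0 (all numerical values are illustrative):

```python
# Scalar adaptive regulation with a passive gradient update law.
# Plant: dy/dt = a*y + u, with a unknown to the controller.
a, gamma, dt = 2.0, 5.0, 1e-4   # true parameter, adaptation gain, Euler step
y, theta_hat = 1.0, 0.0
for _ in range(int(10.0 / dt)):
    u = -theta_hat * y - y            # certainty-equivalent input
    y += dt * (a * y + u)             # plant update (explicit Euler)
    theta_hat += dt * gamma * y * y   # passive gradient update
# y -> 0, while theta_hat stays bounded (it need not converge to a)
```

Note that the estimate settles at a value that stabilizes the loop, not necessarily at the true parameter: without persistent excitation, only the regulation error is guaranteed to vanish.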
References
1. Tomei P (1991) A simple PD controller for robots with elastic joints. IEEE Trans Autom
Control 36:1208–1213
2. Koditschek DE (1988) Application of a new Lyapunov function to global adaptive attitude
tracking. In: Proceedings of the 27th IEEE conference on decision and control, vol 1, pp
63–68. Austin, USA
3. Brogliato B (1991) Systèmes Passifs et Commande Adaptative des Manipulateurs. PhD the-
sis, Institut National Polytechnique de Grenoble, Laboratoire d’Automatique de Grenoble,
Grenoble, France. http://www.theses.fr/1991INPG0005
4. Brogliato B, Landau ID, Lozano R (1991) Adaptive motion control of robot manipulators: a
unified approach based on passivity. Int J Robust Nonlinear Control 1(3):187–202
5. Brogliato B, Lozano R, Landau ID (1993) New relationships between Lyapunov functions and
the passivity theorem. Int J Adapt Control Signal Process 7:353–365
6. Sadegh N, Horowitz R (1990) Stability and robustness analysis of a class of adaptive controllers
for robotic manipulators. Int J Robot Res 9(3):74–92
7. Lozano R, Brogliato B (1992) Adaptive hybrid force-position control for redundant manipu-
lators. IEEE Trans Autom Control 37(10):1501–1505
8. Namvar M, Aghili F (2005) Adaptive force-motion control of coordinated robots interacting
with geometrically unknown environments. IEEE Trans Robot 21(4):678–694
9. Liu YH, Kitagaki K, Ogasawara T, Arimoto S (1999) Model-based adaptive hybrid control for
manipulators under multiple geometric constraints. IEEE Trans Control Syst Technol 7(1):97–
109
10. Lozano R, de Wit CC (1990) Passivity-based adaptive control for mechanical manipulators
using LS type estimation. IEEE Trans Autom Control 35:1363–1365
11. Brogliato B, Lozano R (1992) Passive least-squares-type estimation algorithm for direct adap-
tive control. Int J Adapt Control Signal Process 6(1):35–44
12. Lozano R, Brogliato B (1992) Adaptive control of robot manipulators with flexible joints. IEEE
Trans Autom Control 37(2):174–181
13. Brogliato B, Lozano R (1996) Correction to “adaptive control of robot manipulators with
flexible joints”. IEEE Trans Autom Control 41(6):920–922
14. Cai Z, de Queiroz MS, Dawson DM (2006) A sufficiently smooth projection operator. IEEE
Trans Autom Control 51(1):135–139
15. Landau ID (1979) Adaptive control. The model reference approach. Marcel Dekker, New York
604 8 Adaptive Control
16. Lozano R, Brogliato B (1992) Adaptive control of first order nonlinear system without a priori
information on the parameters. IEEE Trans Autom Control 37(1):30–37
17. Narendra KS, Annaswamy A (1989) Stable adaptive systems. Prentice Hall, Upper Saddle
River
18. Sastry SS (1984) Model reference adaptive control-stability, parameter convergence and robust-
ness. IMA J Math Control Inf 1:27–66
19. Krstic M, Kanellakopoulos I, Kokotovic P (1994) Nonlinear design of adaptive controllers for
linear systems. IEEE Trans Autom Control 39:738–752
20. Morse AS (1992) High-order parameter tuners for the adaptive control of linear and nonlinear
systems. In: Isidori A, Tarn T-J (eds) Proceedings of the US-Italy joint seminar: systems,
models and feedback: theory and application. Progress in systems and control theory, vol 12,
pp 339–364. Springer, Capri (1992)
21. Ortega R (1993) On Morse’s new adaptive controller: parameter convergence and transient
performance. IEEE Trans Autom Control 38:1191–1202
22. Krstic M, Kokotovic P, Kanellakopoulos I (1993) Transient performance improvement with a
new class of adaptive controllers. Syst Control Lett 21:451–461
23. Owens DH, Pratzel-Wolters D, Ilchman A (1987) Positive-real structure and high-gain adaptive
stabilization. IMA J Math Control Inf 4:167–181
Chapter 9
Experimental Results
9.1 Flexible-Joint Manipulators
9.1.1 Introduction
how these schemes work in practice, and whether or not they bring significant
performance improvement with respect to the PD and the Slotine and Li controllers (which
can both be cast into the passivity-based schemes, but do not a priori incorporate
flexibility effects in their design). What follows is taken from [3, 4]. More generally, the
goal of this section is to present experimental results for passivity-based controllers
with increasing complexity, starting from the PD input. Let us stress that the
reliability of the presented experimental work is increased by the fact that theoretical and
numerical investigations predicted reasonably well the observed behaviors of the real
closed-loop plants, see [5]. The experimental results that follow should not be
considered as a definitive answer to the question: “What is the best controller?”. Indeed,
the answer to such a question may be very difficult, possibly impossible to give in
a general context. Our goal is only to show that the concepts that were presented in
the previous chapters may provide good results in practice.
In this work, the model as introduced in [6] is used, see (6.97). As we saw in Sect. 6.4,
this model possesses nice passivity properties as well as a triangular structure that
make it quite attractive for control design, see Sects. 7.6, 7.6.2, and 7.7.1. Only fixed
parameter controllers are considered here. As shown in [5] (see (7.140) and (7.157)),
the three nonlinear controllers for flexible-joint manipulators which are tested can
be written shortly as follows:
Controller 1

u = J [q̈2d − 2q̃˙2 − 2q̃2 − K (ṡ1 + s1)] + K (q2 − q1),
q2d = K⁻¹ uR + q1    (9.1)

Controller 2

u = J [q̈2d − 2q̃˙2 − 2q̃2 − (ṡ1 + s1)] + K (q2 − q1),
q2d = K⁻¹ uR + q1    (9.2)

Controller 3

u = J q̈2r + K (q2d − q1d) − B2 s2,
q2d = K⁻¹ uR + q1d    (9.3)
where uR = M(q1)q̈1r + C(q1, q̇1)q̇1r + g(q1) − λ1 s1 is as in (7.141). The signals
q̇1r = q̇1d − λq̃1 and s1 = q̃˙1 + λq̃1 are the classical signals used in the design of this
controller (the same definitions apply with subscript 2). Let us reiterate that the
expressions in (9.1), (9.2), and (9.3) are equivalent closed-loop representations. In
particular, no acceleration measurement is needed for the implementation, despite
the fact that ṡ1 may appear in the equivalent form of u. As pointed out in Remark 7.49,
the last controller is in fact an improved version (in the sense that it is a static state
feedback) of the dynamic state feedback proposed in [7, 8], that can be written as
u = J q̈2r − K [ q1d − q2d − ∫₀ᵗ (λ1 q̃1 − λ2 q̃2) dτ ] − λ2 s2,
q2d = s [sI + λ2]⁻¹ K⁻¹ uR + q1d − ∫₀ᵗ (λ1 q̃1 − λ2 q̃2) dτ,    (9.4)
where s ∈ C is the Laplace transform variable. This controller has not been
considered in the experiments, because it is logically expected not to provide better
results than its simplified counterpart: it is more complex but based on the same
idea. Controllers 1 and 2 are designed following a backstepping approach. The two
backstepping controllers differ in the fact that in Controller 2, the joint stiffness
K no longer appears in front of ṡ1 + s1 in the right-hand side of the u-equation. This
modification is expected to significantly decrease the input magnitude when K is
large. Indeed, this will be confirmed experimentally.
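The effect of this modification can be quantified directly on the u-equations of (9.1) and (9.2): the two inputs differ exactly by J(K − 1)(ṡ1 + s1), a term that grows linearly with the joint stiffness. A quick numerical check, with purely illustrative (hypothetical) signal and parameter values:

```python
# Comparison of the u-equations of Controllers 1 and 2: they differ only
# by the factor K multiplying (s1_dot + s1).  All values are illustrative.
J, K = 0.008, 50.0                       # hypothetical inertia and joint stiffness
q2dd_d, q2t_dot, q2t = 0.5, 0.1, 0.05    # sample signal values (hypothetical)
s1, s1_dot = 0.2, 0.1
q2_minus_q1 = 0.02

common = q2dd_d - 2*q2t_dot - 2*q2t
u1 = J*(common - K*(s1_dot + s1)) + K*q2_minus_q1   # Controller 1
u2 = J*(common - (s1_dot + s1)) + K*q2_minus_q1     # Controller 2
# u1 - u2 == -J*(K - 1)*(s1_dot + s1): grows linearly with K
```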
In [5], these controllers have been commented on and discussed from several points of
view. Most importantly, it was shown that when the joint stiffness grows unbounded
(i.e., the rigid manipulator model is retrieved), then the only controller that converges
to the rigid Slotine and Li control law is the passivity-based Controller 3 in (9.3).
In this sense, it can be concluded that Controller 3 is the extension of the rigid case
to the flexible-joint case, which cannot be stated for the other two control laws.
We believe that this elegant physical property plays a major role in the closed-loop
behavior of the plant. As shown in Sect. 7.6.2, the backstepping schemes presented
here do possess some closed-loop passivity properties. However, they are related to
transformed coordinates, as the reader may see in Sect. 7.6.2. On the contrary, the
passivity-based schemes possess this property in the original generalized coordinates
q̃: consequently, they are closer to the physical system than the other schemes. This is
to be considered as an intuitive explanation of the good experimental results obtained
with passivity-based schemes (PD, Slotine and Li, and Controller 3).
This section is devoted to presenting the two experimental devices in detail: a planar
two degree-of-freedom (dof) manipulator, and a planar system of two pulleys with
one actuator. They are shown in Figs. 9.1 and 9.2, respectively. We shall concentrate
on two points: the mechanical structure and the real-time computer connected to the
process. Actually, we focus in this description essentially on the first plant, which was a
two-DOF planar manipulator of the Laboratoire d'Automatique de Grenoble, France,
named Capri. The second process is much simpler, and is depicted in Fig. 9.3. It can
be considered as an equivalent one DOF flexible-joint manipulator. Its dynamics are
linear. Its physical parameters are given by I1 = 0.0085 kg m2 , I2 = 0.0078 kg m2 ,
K = 3.4 Nm/rad.
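With these parameters, the pulley system can be modeled as the standard linear one-DOF flexible joint, I1 q̈1 = K(q2 − q1), I2 q̈2 = u − K(q2 − q1). A minimal regulation sketch with a motor-side PD follows; the gains kp, kd and the simulation settings are hypothetical, chosen only for illustration (the energy argument guarantees convergence, since dissipation enters through kd q̇2):

```python
# Linear one-DOF flexible-joint (pulley) model with the parameters above,
# driven by a motor-side PD regulating toward qd.  Gains are hypothetical.
I1, I2, K = 0.0085, 0.0078, 3.4      # link inertia, motor inertia, stiffness
kp, kd = 3.0, 0.3                    # hypothetical PD gains
qd = 1.0                             # desired link position [rad]
q1 = dq1 = q2 = dq2 = 0.0
dt = 1e-4
for _ in range(int(5.0 / dt)):       # 5 s of simulated time
    u = -kp*(q2 - qd) - kd*dq2       # PD on the motor coordinate
    ddq1 = K*(q2 - q1)/I1            # link dynamics
    ddq2 = (u - K*(q2 - q1))/I2      # motor dynamics
    dq1 += dt*ddq1; q1 += dt*dq1     # semi-implicit (symplectic) Euler
    dq2 += dt*ddq2; q2 += dt*dq2
# q1 settles near qd: the only dissipation is the motor-side damping kd*dq2
```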
The Capri robot is a planar mechanism consisting of two links, of respective lengths
0.16 and 0.27 m, connected by two hubs. The first link is an aluminum (AU4G)
U-frame, chosen to improve stiffness with respect to the forearm, which can be designed
to be less rigid. The second link has a more peculiar structure, because it supports the
applied forces: it is designed as a pipe of diameter 0.05 m, and it is equipped with
piezoelectric force sensors. The force magnitude, point of application, and orientation
can be measured and calculated. The sides of the forearm, fitted with Kistler quartz load
washers, can measure extension and compression forces, and the half-spherical extremity
carries a Kistler three-component force transducer (only two of its components are used)
from which the magnitude and the orientation of the applied force can be calculated.
In this work, these force measurement devices are not needed, since we are
concerned with motion control only. The robot arm is actuated by two DC motors
located underneath the base table (therefore, the Capri robot is a parallel-drive
manipulator for which the model in (6.97) is the “exact” one, see Remark 6.45).

[Figure: schematic with inertias I1, I2 and angles q1, q2.]

They are coupled to the links by reducers (gears and notched belts), each
of them with ratio 1/50. The first motor (Infranor MX 10) delivers a continuous
torque of 30 N cm and a peak torque of 220 N cm for a total weight of 0.85 kg.
The second motor (Movinor MR 08) provides a continuous torque of 19 N cm and
a peak torque of 200 N cm, for a weight of 0.65 kg. The drive arrangement is such
that the motor weight is not carried by the links, which allows higher speeds. Both motors
are equipped with a 500 pulse/turn incremental encoder and a DC tachometer, making
the joint position q2 and velocity q̇2 available for feedback. The position q1 is measured
[Figure: definition of the angles q11 and q12 (see Fig. 9.4).]
by a potentiometer mounted on the last link. In the experiments, the velocity q̇1
has been obtained by differentiating the position signal (a filtering action has been
incorporated by calculating the derivative from one measurement every four only, i.e.,
every four sampling times). The effective working area of the robot arm is bounded
by sensors: an inductive sensor prevents the first arm from doing more than one turn,
i.e., q11 ∈ [−π/2, π/2] (see Fig. 9.4 for the definition of the angles). Two microswitches
prevent the second arm from overlapping on the first one. They both inhibit the
inverters (Infranor MSM 1207) controlling the DC motors.
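The decimated differentiation used above for q̇1 can be sketched as follows (the sampling period and the decimation factor m = 4 are as described; the helper name is of course hypothetical):

```python
def decimated_velocity(samples, dt, m=4):
    """Backward difference over m sampling periods: using one measurement
    every m smooths the quantization/measurement noise at the price of a
    small delay."""
    v = [0.0] * len(samples)
    for k in range(m, len(samples)):
        v[k] = (samples[k] - samples[k - m]) / (m * dt)
    return v
```

On a ramp q(t) = t sampled every dt, the estimate equals 1.0 after the initial m samples.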
Remark 9.1 The Capri robot has been modeled as a parallel-drive rigid-link robot,
with the second joint elastic. It is clear that such a model is only a crude approximation
of the real device. Some approximations may be quite justified, like the rigidity of
the first joint and of the links. Some others are much more inaccurate.
(i) The belt that couples the second actuator and the second joint is modeled as
a spring with constant stiffness, which means that only the first mode of its
dynamic response is considered.
(ii) There is some clearance in the mechanical transmission (especially at the joints,
due to the belts and the pulleys), and a serious amount of dry friction.
(iii) The frequency inverters that deliver the current to the motors possess a
nonsymmetric dead zone. Therefore, different amounts of current are necessary to
start motion in one direction or the other.
(iv) The value of q̇1 used in the algorithm and obtained by differentiating a
potentiometer signal is noisy, despite a filtering action.
(v) The inertial parameters have been calculated by simply measuring and
weighing the mechanical elements of the arms. The second joint stiffness has been
measured statically off-line. It has been found to be 50 Nm/rad. This value has
been used in the experiments without any further identification procedure.
(vi) Some saturation on the actuators currents has been imposed by the software,
for obvious safety reasons. Since nothing a priori guarantees stability when the
inputs are saturated, the feedback gains have to be chosen so that the control
input remains inside these limits.
Some of these approximations stem from the process to be controlled and cannot
be avoided (points (i), (ii), (iii)): avoiding them would imply modifying the mechanical
structure. The measurement noise effects in (iv) could perhaps be avoided via the use
of velocity observers or of dynamic position feedback. However, on the one hand, the
robustness improvement is not guaranteed and would deserve a deep analytical study.
On the other hand, the structure of the obtained schemes would be significantly modified
(compare, for instance, the schemes in Sects. 7.3.4 and 7.4, respectively). A much
simpler solution consists of replacing the potentiometer by an optical encoder.
The saturation in (vi) is necessary to protect the motors, and has been chosen in
accordance with the manufacturer recommendations and our own experience on
their natural “robustness”. The crude identification procedure in (v) has been judged
sufficient, because the aim of the work was not to make a controller perform as well as
possible in view of an industrial application, but rather to compare several controllers
and to show that nonlinear control schemes behave well. In view of this, the most
important fact is that they are all tested with the same (acceptable) parameter values,
i.e., if one controller proves to behave correctly with this set of parameters, do the
others behave as well or not? Another problem is that of the choice of the control
parameters, i.e., the feedback gains. We will come back to this important point later.
• Desired trajectory 2: q1d = ( 0.4 sin(2 f t), 0.8 sin( f t) )ᵀ
• Desired trajectory 3: q1d = ( (b⁵/(s + b)⁵)[g(t)], −(b⁵/(s + b)⁵)[g(t)] )ᵀ

where the parameters have been varied as indicated in the figure captions. These time
functions, which are sufficiently different from one another, have been chosen to allow
conclusions about the controllers' capability to adapt to a modification of the desired
motion. This is
gains can be modified and at the same time the poles remain real. In view of these
limitations and of the lack of a systematic manner to calculate optimal feedback gains,
advantage has been taken in [3] of the pulley-system linearity. Since this system is
linear, the controllers in (9.1), (9.2), and (9.3) reduce to linear feedbacks of the form
u = Gx + h(t), where h(t) accounts for the tracking terms. De Larminat [9] has
proposed a systematic (and more or less heuristic) method to calculate the matrix G
for LQ controllers. Notice that although the nonlinear backstepping and
passivity-based controllers have a linear structure when applied to a linear system,
their gains appear in a very nonlinear way in the state-feedback
matrix G. As an example, the term multiplying q1 for the scheme in (9.3) is a lengthy
rational expression in the gains λ, λ1, λ2, k and the inertias I1, I2 (the gains λ1 and λ2 can be introduced in
(7.140) and (7.141), respectively, instead of using only one gain in both expressions,
so that the passivity-based controller has three gains). The tuning method proposed
in [9] that applies to LQ controllers allows one to choose the weighting matrices
of the quadratic form to be minimized, in accordance with the desired closed-loop
bandwidth (or cutoff frequency ωc(CL)). The advantage of this method is that
the user focuses on a single closed-loop parameter to tune the gains, which is quite
appreciable in practice. Therefore, one gets an “optimal” state-feedback matrix G LQ ,
with a controller u = G LQ x in the case of regulation. Since the various controllers
used in the experiments yield some state-feedback matrices G PD , G BACK1 , G BACK2
and G MES , respectively, which are (highly) nonlinear functions of the gains as shown
above, we choose to calculate their gains so that the norms ||G LQ − G CONT || are
minimized. This amounts to solving a nonlinear set of equations f (Z ) = 0, where Z
is the vector of gains. This is in general a hard task, since we do not know a priori
any root (otherwise the job would be done!). This has been done numerically by
constructing a grid in the gain space of each scheme and minimizing the above norm
with a standard optimization routine. The experimental results prove that the method
may work well, although improvements are possible (especially in the numerical way to
solve f (Z ) = 0). Its extension to the nonlinear case remains an open problem.
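The grid-plus-refinement step can be sketched as follows. The map G_of below is a hypothetical stand-in for the actual nonlinear gain-to-state-feedback map, and the target row G_LQ is illustrative; only the search structure (coarse grid, then local minimization of the norm) reflects the procedure described above:

```python
import math

def G_of(Z):
    # hypothetical stand-in for the nonlinear map from the controller
    # gains Z = (lam, lam1, lam2) to the equivalent state-feedback row
    lam, lam1, lam2 = Z
    return [lam*lam1 + lam2, lam + lam1*lam2, lam*lam2, lam1]

def resid(Z, target):
    # the norm ||G_LQ - G_CONT|| to be minimized over the gains Z
    return math.sqrt(sum((a - b)**2 for a, b in zip(G_of(Z), target)))

G_LQ = [12.0, 8.0, 3.0, 2.0]   # illustrative "optimal" LQ row

# coarse grid over the gain space of the scheme ...
grid = [0.5 * (i + 1) for i in range(10)]          # 0.5 .. 5.0
Z0 = min(((a, b, c) for a in grid for b in grid for c in grid),
         key=lambda Z: resid(Z, G_LQ))
# ... then local refinement around the best grid point
Z, step = Z0, 0.25
for _ in range(6):
    Z = min([(Z[0]+i*step, Z[1]+j*step, Z[2]+k*step)
             for i in (-1, 0, 1) for j in (-1, 0, 1) for k in (-1, 0, 1)],
            key=lambda C: resid(C, G_LQ))
    step /= 2
```

Since the map is overdetermined (four feedback entries, three gains), an exact root of f(Z) = 0 generally does not exist and only the norm is driven down, which matches the remark above on the numerical difficulty of this step.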
The quadratic error sums e1 , e2 are reported in Tables 9.1 and 9.2. The error e3
is in Table 9.3. The maximum tracking errors |q1 − qd |max for the pulley system
are reported in Table 9.4. All the results for the pulley system in Tables 9.3 and
9.4 concern the desired motion q1d = sin(ωt). In each case, the presented values
represent an average over several experiments.

[Figure: tracking errors q̃11, q̃12 [rad] and input currents Ic1, Ic2 [A] versus time [s].]

Concerning trajectories 2 and 3 in Tables
9.1 and 9.2, the results outside brackets have been obtained after having retuned the
feedback gains. The ones between brackets have been obtained using the same gains
as for trajectory 1. When they are not modified, it means that we have not been able
to improve the results. A cross (×) indicates that no feedback gains have been found
to stabilize the system.
The next results that concern the Capri robot are reported in Figs. 9.5, 9.6, 9.7, 9.8,
9.9, 9.10, 9.11, 9.12, 9.13, 9.14, 9.15, 9.16, 9.17, 9.18, 9.19, and 9.20. The tracking
errors q̃11 , q̃12 and the inputs (currents) Ic1 and Ic2 at each motor, are depicted in
Figs. 9.5, 9.6, 9.7, 9.8, 9.9, 9.10, 9.11, 9.12, 9.13, 9.14, 9.15, and 9.16. Figures 9.17,
9.18, 9.19, and 9.20 contain results concerning the transient behavior when the second
link position tracking errors are initially of 0.4 rad. The inputs Ic1 and Ic2 are the
calculated ones, not the true input of the actuators (they coincide as long as there
is no saturation, i.e., Ic1 ≤ 2 A and Ic2 ≤ 2 A). The results concerning the pulley
system are in Figs. 9.21, 9.22, 9.23, 9.24, 9.25, 9.26, 9.27, 9.28 and 9.29. The signals
qd (t) and q1 (t) are shown in the upper boxes, and the torque input u is depicted in
the lower boxes (Fig. 9.21).
The gains of the PD controller that correspond to the tests on the Capri robot, reported
in Tables 9.1 and 9.2, are given in Table 9.5. They show that significant changes have
been necessary from one desired motion to the next. One sees that the PD gains
have had to be modified drastically to maintain a reasonable performance level. On
the contrary, it is observable from Tables 9.1 and 9.2 that even without any gain
modification, the other controllers still perform well in general. In any case, the
modifications have seldom exceeded 50% and concerned very few gains [4]. Since
this is also true for the Slotine and Li controller, we conclude that the insensitivity
of the performance with respect to desired motion changes is essentially due to the
compensation of the nonlinearities. The Slotine and Li controller seems to provide
the most invariant performance with respect to the desired motion. This is especially
apparent for trajectory 2 on the Capri experiments. In this case, it provides the best
error e2 , even after having retuned the gains for Controllers 2 and 3. This may be
explained by the fact that the input in (7.68) is much smoother than the others (see
Fig. 9.9). This, in turn, may be a consequence of its simplicity, and of the fact that
it does not use the noisy potentiometer signal.
[Figures: tracking errors q̃11, q̃12 [rad] and input currents Ic1, Ic2 [A] versus time [s].]
For the Capri experiments, it has not been possible to find feedback gains that
stabilize Controller 1. On the contrary, this has been possible for the pulley system, see
Figs. 9.24, 9.27, and 9.28.

[Figures: tracking errors q̃11, q̃12 [rad] and input currents Ic1, Ic2 [A] versus time [s].]

This confirms the fact that the modification of the intermediate Lyapunov function
(see (7.156) and (7.157)) may play a significant role in
practice, and that the term K (s1 + ṡ1 ) is a high gain in the loop if K is large.
[Figures: tracking errors q̃11, q̃12 [rad] and input currents Ic1, Ic2 [A] versus time [s].]
Although the PD algorithm provides a stable closed-loop behavior in all cases (for
the Capri experiments and at the price of very large gain modifications as we pointed
out above), its performance is poor for trajectories 1 and 2.

[Figures: tracking errors q̃11, q̃12 [rad] and input currents Ic1, Ic2 [A] versus time [s].]

The behavior is much
better for trajectory 3. This can be explained since it is almost a regulation task.
The improvements obtained with the Slotine and Li scheme show that the Coriolis
and centrifugal terms may play an important role depending on the desired motion.
[Figures: tracking errors q̃11, q̃12 [rad] and input currents Ic1, Ic2 [A] versus time [s].]
The PD and the Slotine and Li controls behave well for the Capri robot because
the joint stiffness is large. The results obtained for the pulley system show that the
behavior deteriorates a lot if K is small, see Tables 9.3 and 9.4.
[Figures: pulley system tracking error q̃12 [rad] and input current Ic2 [A] versus time [s].]
The rather complex structure of the nonlinear Controllers 1, 2, and 3 is not an obstacle
to their implementation with the available real-time computer described above. In
particular, recall that the acceleration and jerk are estimated by inverting the dynamics
(see Sect. 7.6). Such terms have a complicated structure and depend on the system’s
physical parameters in a nonlinear way. Some experiments have shown that the
sampling period (1 ms) could have been decreased to 0.5 ms.
[Figures: pulley system tracking error q̃12 [rad] and input current Ic2 [A] versus time [s].]
The major problem that prevents certain controllers from behaving correctly is the
input magnitude and shape. This has been noted above. The performance of
Controllers 2 and 3 may be worse than that of the Slotine and Li algorithm, mainly
because of the chattering in the input, which induces vibrations in the mechanical structure.
Chattering is particularly present during the regulation phases in Ic2 for trajectory 3
and Controllers 2 and 3, see Figs. 9.13 and 9.15.

[Figures: pulley system signals qd(t), q1(t) [rad] and torque input u [Nm] versus time [s].]

On the contrary, Figs. 9.7 and 9.10
show smooth inputs. It may be expected from Fig. 9.20 that a less noisy velocity q̇1
obtained from a better position measurement would bring the shape of Ic2 close to
the input in Fig. 9.18. Indeed, they differ only in terms of chatter. One concludes that
an optical encoder to measure q1 would be a better solution.
[Figures: pulley system signals qd(t), q1(t) [rad] and torque input u [Nm] versus time [s].]
Figs. 9.11 and 9.26, and 9.13 and 9.15). The advantage of passivity-based methods
is that the controllers are obtained in one shot, whereas the backstepping approach
a priori leads to various algorithms. This can be an advantage (more degrees of
freedom), but also a drawback, as the behavior of Controller 1 proves.

[Figures: tracking errors and inputs for the Capri robot; pulley system signals qd(t), q1(t) [rad] and torque input u [Nm] versus time [s].]

Notice in Figs. 9.23,
9.24, and 9.25 that Controllers 2 and 3 allow one to damp the oscillations much better
than Controller 1 and the PD (it is possible that the PD gains could have been tuned in
a better way for these experiments, see however the paragraph below on gain tuning
for the pulley system).
[Figure: pulley system signals qd(t), q1(t) [rad] and torque input u [Nm] versus time [s].]
Fig. 9.29 Controller 2 (similar results with Controller 3), ω = 7.5 rad/s
The transient behavior for the tracking error q̃12 can be improved slightly when
the flexibilities are taken into account in the controller design. This can be seen by
comparing Figs. 9.8 and 9.9 with Figs. 9.11 and 9.12, 9.26 and 9.14. The tracking
error tends to oscillate more for the Slotine and Li scheme than for the others.
Notice that these results have been obtained with initial tracking errors close to zero.
However, the results in Figs. 9.17, 9.18, 9.19, and 9.20 prove that the controllers
respond quite well to initial state deviation. The transient duration is around 0.5 s
for all the controllers. The tracking errors have a similar shape once the transient
has vanished. The only significant difference is in the initial input Ic2 . The torque is
initially much higher for nonzero initial conditions.
The method described in Remark 9.2 for tuning the gains in the case of the pulley
system provides good preliminary results. The gains used in all the experiments
on the pulley system have been kept constant: they have not been modified during
the tests on the real device to tentatively improve the results. This
tends to prove that such a method is quite promising since it relies on the choice of
a single parameter (the closed-loop bandwidth, chosen as ωc (C L) = 11 rad/s in the
experiments) and is, therefore, quite attractive to potential users.
The neglected dynamics of the actuators and current drivers may have a significant
influence on the closed-loop behavior. A close look at Tables 9.3 and 9.4 shows the
existence of a resonance phenomenon in the closed loop. This can be confirmed
numerically by replacing u with u f = u/(1 + τ s), which allows one to suspect that these
neglected actuator dynamics may play a crucial role in the loop. It might then be
argued that developing velocity observers for such systems may not be so important,
whereas some neglected dynamics have a significant effect.
Remark 9.3 The peaks in the input Ic2 for trajectory 1 are due to the saturation of the
DC tachometers when the trajectory is at its maximum speed. When the saturation
stops, the velocity signal delivered by the tachometers has a short noisy transient that
results in such peaks in the input. However, this has not had any significant influence
on the performance, since such peaks are naturally filtered by the actuators (let us
recall that the calculated inputs are depicted).
9.1.5 Conclusions
In this section, we have presented experimental results that concern the application of
passivity-based (PD, Slotine and Li, the controller in Sect. 7.6.1) and backstepping
controllers, to two quite different laboratory plants which serve as flexible-joint–
rigid-link manipulators. The major conclusion is that passivity-based controllers
provide generally very good results. In particular, the PD and Slotine and Li
algorithms show quite good robustness and provide a high level of performance when the
flexibility remains small enough. Tracking with high flexibility implies the choice
of controllers designed from a model that incorporates the joint compliance.
These experimental results illustrate nicely the developments of the foregoing
chapter: one goes from the PD scheme to the one in Sect. 7.6.1 by adding more
complexity, but always through the addition of new dissipative modules to the
controller, and consequently to the closed-loop system. These three schemes can really
be considered to belong to the same “family”, namely passivity-based controllers.
It is, therefore, not surprising that their closed-loop behavior when applied to real
plants reproduces this “dissipative modularity”: the PD works well when
nonlinearities and flexibilities remain small enough, the Slotine and Li algorithm improves
the robustness with respect to nonlinearities, and the scheme in Sect. 7.6.1 provides
a significant advantage over the other two only if these two dynamical effects are
large enough. Finally, it is noteworthy that all controllers present a good robustness
with respect to the uncertainties listed in Sect. 9.1.3.
Further Results: Experimental results on the control of flexible-joint manipulators
using passivity may be found in [10–18]. All these works conclude that passivity-based
controllers possess very good robustness with respect to dynamic uncertainties
and noise, as well as insensitivity of the performance with respect to the gains
(compared to a PD controller, whose performance decreases when the gains decrease).
Other studies and control algorithms have been proposed in [19–26]. Passivity-based
control of flexible link manipulators, a topic that we do not tackle in this book, is
analyzed in [27–30].
9.2 Stabilization of the Inverted Pendulum

9.2.1 Introduction
The inverted pendulum is a very popular experiment, used for educational purposes
in modern control theory. It is basically a pole which has a pivot on a cart that can
be moved horizontally. The pole moves freely around the cart and the control objec-
tive is to bring the pole to the upper unstable equilibrium position by moving the
cart on the horizontal plane. Since the angular acceleration of the pole cannot be
controlled directly, the inverted pendulum is an underactuated mechanical system.
Therefore, the techniques developed for fully actuated mechanical robot manipu-
lators cannot be used to control the inverted pendulum. The cart and pole system
is also well known because standard nonlinear control techniques are ineffective for controlling it. Indeed, since the relative degree of the system is not constant (when the output is chosen to be the swinging energy of the pendulum), the system is not input–output linearizable. Jakubczyk and Respondek [31] have shown that the inverted pendulum
is not feedback linearizable. An additional difficulty comes from the fact that when
the pendulum swings past the horizontal the controllability distribution does not have
a constant rank.
Consider the cart and pendulum system as shown in Fig. 9.30. We will consider
the standard assumptions, i.e., massless rod, point masses, no flexibilities, and no
friction. M is the mass of the cart, m the mass of the pendulum, concentrated in the
bob, θ the angle that the pendulum makes with the vertical and l the length of the
rod. The equations of motion can be obtained either by applying Newton’s second
law or by the Euler–Lagrange formulation. The system can be written as
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = \tau, \qquad (9.5)$$
where
$$q = \begin{pmatrix} x \\ \theta \end{pmatrix}, \quad M(q) = \begin{pmatrix} M+m & ml\cos\theta \\ ml\cos\theta & ml^{2} \end{pmatrix}, \qquad (9.6)$$
$$C(q,\dot{q}) = \begin{pmatrix} 0 & -ml\sin\theta\,\dot{\theta} \\ 0 & 0 \end{pmatrix}, \quad g(q) = \begin{pmatrix} 0 \\ -mgl\sin\theta \end{pmatrix}, \quad \tau = \begin{pmatrix} f \\ 0 \end{pmatrix}. \qquad (9.7)$$
Note that M(q) is symmetric and
$$\det(M(\theta)) = (M+m)ml^{2} - m^{2}l^{2}\cos^{2}\theta = ml^{2}\bigl(M + m\sin^{2}\theta\bigr) > 0. \qquad (9.8)$$
[Fig. 9.30: The cart–pendulum system: cart of mass M driven by a horizontal force f, cart position x, pendulum bob of mass m on a rod of length l, angle θ measured from the upright vertical.]
632 9 Experimental Results
Therefore, M(q) is positive definite for all q. From (9.6) and (9.7), it follows that
$$\dot{M}(q,\dot{q}) - 2C(q,\dot{q}) = \begin{pmatrix} 0 & ml\sin\theta\,\dot{\theta} \\ -ml\sin\theta\,\dot{\theta} & 0 \end{pmatrix}, \qquad (9.9)$$
which is a skew-symmetric matrix (see Lemma 6.17). The potential energy of the
pendulum can be defined as U (θ ) = mgl(cos θ − 1). Note that U (θ ) is related to
g(q) as follows:
$$g(q) = \frac{\partial U}{\partial q} = \begin{pmatrix} 0 \\ -mgl\sin\theta \end{pmatrix}. \qquad (9.10)$$
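Both properties can be spot-checked numerically. The sketch below (hypothetical Python with illustrative parameter values, not from the book) verifies the skew-symmetry stated in (9.9) and the relation g(q) = ∂U/∂q by a finite-difference derivative of U(θ):

```python
import numpy as np

M_cart, m, l, g = 1.0, 0.2, 0.5, 9.81   # illustrative values

def Mdot_minus_2C(theta, theta_dot):
    # dM/dt follows from (9.6): only the off-diagonal entries depend on theta
    Mdot = np.array([[0.0, -m * l * np.sin(theta) * theta_dot],
                     [-m * l * np.sin(theta) * theta_dot, 0.0]])
    C = np.array([[0.0, -m * l * np.sin(theta) * theta_dot],
                  [0.0, 0.0]])
    return Mdot - 2.0 * C

def potential(theta):
    # U(theta) = m g l (cos theta - 1)
    return m * g * l * (np.cos(theta) - 1.0)

S = Mdot_minus_2C(0.8, 1.3)
assert np.allclose(S, -S.T)              # skew-symmetry, as in (9.9)

# g(q) = dU/dq: the x-component is 0, the theta-component is -m g l sin(theta)
h, th = 1e-6, 0.8
dU_dtheta = (potential(th + h) - potential(th - h)) / (2 * h)
assert abs(dU_dtheta - (-m * g * l * np.sin(th))) < 1e-8
```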
Therefore, the system having f as the input and ẋ as the output is passive. Note that for f = 0 and θ ∈ [0, 2π), the system (9.5) has two sets of equilibrium points: (x, ẋ, θ, θ̇) = (∗, 0, 0, 0) is an unstable equilibrium point and (x, ẋ, θ, θ̇) = (∗, 0, π, 0) is a stable equilibrium point. The total energy E(q, q̇) is equal to 0 at the unstable equilibrium point, and to −2mgl at the stable equilibrium point. The
control objective is to stabilize the system around its unstable equilibrium point,
i.e., to bring the pendulum to its upper position and the cart displacement to zero
simultaneously.
Let us first note that the total energy E(q, q̇) = ½ q̇ᵀM(q)q̇ + mgl(cos θ − 1) in (9.11) satisfies Ė = ẋ f along the trajectories of (9.5) (see (9.12)). In view of (9.11) and (9.6), if ẋ = 0 and E(q, q̇) = 0, then
$$\frac{1}{2}\, m l^{2}\dot{\theta}^{2} = mgl(1 - \cos\theta). \qquad (9.14)$$
The above equation defines a very particular trajectory which corresponds to a homo-
clinic orbit. Note that θ̇ = 0 only when θ = 0. This means that the pendulum angular
position moves clockwise or counterclockwise until it reaches the equilibrium point
(θ, θ̇ ) = (0, 0). Thus, our objective can be reached if the system can be brought to
the orbit (9.14) for ẋ = 0, x = 0, and E = 0. Bringing the system to this homoclinic
orbit solves the problem of “swinging up” the pendulum. In order to balance the
pendulum at the upper equilibrium position, the control must eventually be switched
to a controller which guarantees (local) asymptotic stability of this equilibrium [32].
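The homoclinic orbit (9.14) is exactly the zero level set of the pendulum energy (with ẋ = 0). A small numerical check (illustrative Python with hypothetical parameter values, not from the book) confirms this:

```python
import math

m, l, g = 1.0, 1.0, 9.81   # illustrative values

def theta_dot_on_orbit(theta):
    # Angular velocity prescribed by (9.14): (1/2) m l^2 thetadot^2 = m g l (1 - cos theta)
    return math.sqrt(2.0 * g * (1.0 - math.cos(theta)) / l)

def energy(theta, theta_dot):
    # Pendulum energy with xdot = 0; E = 0 at the upright rest position
    return 0.5 * m * l**2 * theta_dot**2 + m * g * l * (math.cos(theta) - 1.0)

# Everywhere on the orbit the energy is exactly zero
for theta in (0.5, 1.5, 3.0):
    assert abs(energy(theta, theta_dot_on_orbit(theta))) < 1e-12
```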
By guaranteeing convergence to the above homoclinic orbit, we guarantee that the
trajectory will enter the basin of attraction of any (local) balancing controller. We
do not consider in this book the design of the balancing controller. The passivity property of the system suggests using the total energy E(q, q̇) in (9.11) in the controller design. Since we wish to bring x, ẋ, and E to zero, we propose the following Lyapunov function candidate:
kE 2 kv kx
V (q, q̇) = E (q, q̇) + ẋ 2 + x 2 , (9.15)
2 2 2
where k E , kv , and k x are strictly positive constants. Note that V (q, q̇) is a positive
semi-definite function. Differentiating V (q, q̇) and using (9.12) we obtain
$$\dot{V}(q,\dot{q}) = k_{E}E\dot{E} + k_{v}\dot{x}\ddot{x} + k_{x}x\dot{x} = k_{E}E\dot{x}f + k_{v}\dot{x}\ddot{x} + k_{x}x\dot{x} = \dot{x}\bigl(k_{E}Ef + k_{v}\ddot{x} + k_{x}x\bigr). \qquad (9.16)$$
Let us now compute ẍ from (9.5). The inverse of M(q) = M(θ) can be obtained from (9.6)–(9.8) and is given by
$$M^{-1}(\theta) = \frac{1}{\det(M(\theta))}\begin{pmatrix} ml^{2} & -ml\cos\theta \\ -ml\cos\theta & M+m \end{pmatrix}, \qquad (9.17)$$
so that
$$\begin{pmatrix} \ddot{x} \\ \ddot{\theta} \end{pmatrix} = \frac{1}{\det(M(\theta))}\left[\begin{pmatrix} 0 & m^{2}l^{3}\dot{\theta}\sin\theta \\ 0 & -m^{2}l^{2}\dot{\theta}\sin\theta\cos\theta \end{pmatrix}\begin{pmatrix} \dot{x} \\ \dot{\theta} \end{pmatrix} + \begin{pmatrix} -m^{2}l^{2}g\sin\theta\cos\theta \\ (M+m)mgl\sin\theta \end{pmatrix} + \begin{pmatrix} ml^{2}f \\ -mlf\cos\theta \end{pmatrix}\right],$$
and in particular
$$\ddot{x}(t) = \frac{m\sin\theta(t)\bigl(l\dot{\theta}^{2}(t) - g\cos\theta(t)\bigr) + f(t)}{M + m\sin^{2}\theta(t)}. \qquad (9.18)$$
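The closed-form expression (9.18) can be checked against a direct solve of the 2×2 linear system M(θ)q̈ = τ − C(q, q̇)q̇ − g(q); the following sketch (hypothetical Python with illustrative parameter values) does so at random states:

```python
import numpy as np

M_cart, m, l, g = 1.5, 0.3, 0.7, 9.81   # illustrative values

def xddot_closed_form(theta, theta_dot, f):
    # Closed-form cart acceleration (9.18)
    return (m * np.sin(theta) * (l * theta_dot**2 - g * np.cos(theta)) + f) \
           / (M_cart + m * np.sin(theta)**2)

def xddot_from_dynamics(x_dot, theta, theta_dot, f):
    # Solve M(q) qddot = tau - C qdot - g(q) directly and keep the first component
    Mq = np.array([[M_cart + m, m * l * np.cos(theta)],
                   [m * l * np.cos(theta), m * l**2]])
    C = np.array([[0.0, -m * l * np.sin(theta) * theta_dot],
                  [0.0, 0.0]])
    gq = np.array([0.0, -m * g * l * np.sin(theta)])
    rhs = np.array([f, 0.0]) - C @ np.array([x_dot, theta_dot]) - gq
    return np.linalg.solve(Mq, rhs)[0]

rng = np.random.default_rng(0)
for _ in range(5):
    th, thd, f = rng.uniform(-3, 3, size=3)
    assert abs(xddot_closed_form(th, thd, f) - xddot_from_dynamics(0.0, th, thd, f)) < 1e-10
```

Note that ẍ does not depend on ẋ, which is why the closed-form check can fix x_dot = 0.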
The control law f in (9.21) is obtained by choosing f such that k_E E f + k_v ẍ + k_x x = −k_dx ẋ, with k_dx > 0, which by (9.16) yields V̇ = −k_dx ẋ² ≤ 0 (9.22). Note that other functions f(ẋ) such that ẋ f(ẋ) > 0 are also possible in place of k_dx ẋ. The control law in (9.21) will have no singularities provided that k_E E + k_v/(1 + sin²θ) ≠ 0. This condition will be satisfied if for some ε > 0
$$|E| \le \frac{\frac{k_{v}}{k_{E}} - \varepsilon}{2} < \frac{k_{v}}{k_{E}(1+\sin^{2}\theta)}. \qquad (9.23)$$
Note that when using the control law (9.21), the pendulum can get stuck at the (lower) stable equilibrium point (x, ẋ, θ, θ̇) = (0, 0, π, 0). In order to avoid this singular point, which occurs when E = −2mgl (see (9.11)), we require |E| < 2mgl, i.e., |E| < 2g (for m = 1, l = 1). Taking also (9.23) into account, we require
$$|E| < c = \min\left(2g,\; \frac{\frac{k_{v}}{k_{E}} - \varepsilon}{2}\right). \qquad (9.24)$$
Since V(·) is a nonincreasing function (see (9.22)), the inequality in (9.24) will hold if the initial conditions are such that
$$V(0) < \frac{c^{2}}{2}. \qquad (9.25)$$
The above defines the region of attraction as will be shown in the next section.
The condition (9.25) imposes bounds on the initial energy of the system. Note that
the potential energy U = mgl(cos θ − 1) lies between −2g and 0, for m = l = 1.
This means that the initial kinetic energy should belong to [0, c + 2g). Note also
that the initial position of the cart x(0) is arbitrary since we can always choose an
appropriate value for k x in V (·) in (9.15). If x(0) is large we should choose k x to
be small. The convergence rate of the algorithm may, however, decrease when k x
is small. Note that when the initial kinetic energy K (q(0), q̇(0)) is zero, the initial
angular position θ (0) should belong to (−π, π ). This means that the only forbidden
point is θ (0) = π . When the initial kinetic energy K (q(0), q̇(0)) is different from
zero, i.e., K(q(0), q̇(0)) belongs to (0, c + 2g) (see (9.24) and (9.25)), then there are fewer restrictions on the initial angular position θ(0). In particular, θ(0) can even be pointing downwards, i.e., θ(0) = π, provided that K(q(0), q̇(0)) is not zero. Despite the fact that our controller is local, its basin of attraction is far from being small. The
simulation example and the real-time experiments will show this feature. For future
use, we will rewrite the control law f from (9.21) as
$$f(\theta,\dot{\theta},x,\dot{x}) = \frac{k_{v}\sin\theta\,\bigl(g\cos\theta - \dot{\theta}^{2}\bigr) - \bigl(1+\sin^{2}\theta\bigr)(k_{x}x + k_{dx}\dot{x})}{k_{v} + \bigl(1+\sin^{2}\theta\bigr)k_{E}E(q,\dot{q})}. \qquad (9.26)$$
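A closed-loop simulation sketch of the swing-up law (9.26) is given below (illustrative Python; the gains and the initial condition are hypothetical, chosen so that (9.23)–(9.25) hold, and this is not the authors' experimental code). It checks that the Lyapunov function (9.15) is nonincreasing, in agreement with V̇ = −k_dx ẋ²:

```python
import numpy as np

g = 9.81                                  # M = m = l = 1, as assumed in (9.26)
kE, kv, kx, kdx = 0.05, 1.0, 1.0, 1.0     # hypothetical gains satisfying (9.23)-(9.25)

def energy(x_dot, th, th_dot):
    # Total energy (9.11) with M = m = l = 1; E = 0 at the upright rest position
    q_dot = np.array([x_dot, th_dot])
    Mq = np.array([[2.0, np.cos(th)], [np.cos(th), 1.0]])
    return 0.5 * q_dot @ Mq @ q_dot + g * (np.cos(th) - 1.0)

def control(x, x_dot, th, th_dot):
    # Swing-up law (9.26)
    E = energy(x_dot, th, th_dot)
    s2 = 1.0 + np.sin(th) ** 2
    return (kv * np.sin(th) * (g * np.cos(th) - th_dot**2)
            - s2 * (kx * x + kdx * x_dot)) / (kv + s2 * kE * E)

def rhs(state):
    x, x_dot, th, th_dot = state
    f = control(x, x_dot, th, th_dot)
    x_ddot = (np.sin(th) * (th_dot**2 - g * np.cos(th)) + f) / (1.0 + np.sin(th)**2)
    th_ddot = g * np.sin(th) - np.cos(th) * x_ddot   # second row of the dynamics
    return np.array([x_dot, x_ddot, th_dot, th_ddot])

def lyapunov(state):
    # V from (9.15)
    x, x_dot, th, th_dot = state
    return 0.5 * kE * energy(x_dot, th, th_dot)**2 + 0.5 * kv * x_dot**2 + 0.5 * kx * x**2

# RK4 integration from rest at theta(0) = 1 rad
state = np.array([0.0, 0.0, 1.0, 0.0])
dt, V0 = 1e-3, lyapunov(np.array([0.0, 0.0, 1.0, 0.0]))
for _ in range(10000):                    # 10 s of closed-loop motion
    k1 = rhs(state); k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2); k4 = rhs(state + dt * k3)
    state = state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
assert lyapunov(state) <= V0 + 1e-6      # V is nonincreasing, consistent with (9.22)
```

With these gains, |E(0)| ≈ 4.5 stays below (k_v/k_E)/2 = 10, so the denominator of (9.26) remains bounded away from zero along the whole trajectory.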
In the state coordinates z = (z₁, z₂, z₃, z₄, z₅) = (x, ẋ, cos θ, sin θ, θ̇), in which the system dynamics are rewritten as (9.27), the total energy (9.11) becomes
$$E(z) = \frac{1}{2}\begin{pmatrix} z_{2} \\ z_{5} \end{pmatrix}^{T}\begin{pmatrix} M+m & mlz_{3} \\ mlz_{3} & ml^{2} \end{pmatrix}\begin{pmatrix} z_{2} \\ z_{5} \end{pmatrix} + mgl(z_{3} - 1), \qquad (9.28)$$
and the Lyapunov function candidate (9.15) reads
$$V(z) = \frac{k_{E}}{2}E^{2}(z) + \frac{k_{v}}{2}z_{2}^{2} + \frac{k_{x}}{2}z_{1}^{2}. \qquad (9.29)$$
The control law (9.26), expressed in the z coordinates, is denoted (9.30); it leads to
$$\dot{V}(z(t)) = -k_{dx}\, z_{2}^{2}(t). \qquad (9.31)$$
Introducing (9.30) into (9.27), we obtain a closed-loop system of the form ż(t) = F(z(t)). In order to apply Krasovskii–LaSalle's Theorem, we need to define a compact (closed and bounded) set Ω, with the property that every solution of the system
ż = F(z) which starts in Ω remains in Ω for all future time. Since V (z 1 , z 2 , z 3 , z 4 , z 5 )
in (9.29) is a nonincreasing function, (see (9.31)), then z 1 (·), z 2 (·), and z 5 (·) are
bounded. Note that z₃(·) and z₄(·) are also bounded, since z₃² + z₄² = 1. The set Ω is defined as
$$\Omega = \bigl\{ z \in \mathbb{R}^{5} \mid z_{3}^{2} + z_{4}^{2} = 1,\; V(z_{1}, z_{2}, z_{3}, z_{4}, z_{5}) \le V(z(0)) \bigr\}.$$
Therefore, the solutions of the closed-loop system ż(t) = F(z(t)) remain inside a
compact set Ω that is defined by the initial value of z. Let Γ be the set of all points in
Ω such that V̇(z) = 0. Let M be the largest invariant set in Γ. Krasovskii–LaSalle's Theorem ensures that every solution starting in Ω approaches M as t → ∞. Let us now compute the largest invariant set M in Γ. In the set Γ (see (9.31)), V̇(t) = 0,
and z 2 (t) = 0 for all t, which implies that z 1 (·) and V (·) are constant functions. From
(9.29), it follows that E(·) is also constant. Using (9.27), with M = m = l = 1, the
expression of ż 2 becomes
$$\dot{z}_{2}(t) = \frac{z_{4}(t)\bigl(z_{5}^{2}(t) - g z_{3}(t)\bigr) + f(z(t))}{1 + z_{4}^{2}(t)}. \qquad (9.32)$$
From (9.32) and (9.30), it follows that the control law has been chosen such that, in Γ (where z₂ ≡ 0 and hence ż₂ ≡ 0),
$$k_{E} E(t) f(t) = -k_{x} z_{1}(t). \qquad (9.33)$$
From the above equation we conclude that (E f)(·) is constant in Γ. Since E(·) is also constant, we either have (a) E(t) = 0 for all t, or (b) E(t) ≠ 0 for all t.
• Case a: If E ≡ 0, then from (9.33), z 1 ≡ 0 (i.e., x ≡ 0). Note that f (·) in (9.30)
is bounded in view of (9.23). Recall that E ≡ 0 means that the trajectories are
in the homoclinic orbit (9.14). In this case, we conclude that x(·), ẋ(·), and E(·)
converge to zero. Note that if E ≡ 0, then f (·) does not necessarily converge to
zero.
• Case b: If E ≠ 0, since (E f)(·) is constant, then the control input f(·) is also constant. However, a constant force input f(·) different from zero would lead us to a contradiction (see the proof below). We, therefore, conclude that f ≡ 0 in Γ. From (9.33), it then follows that z₁ ≡ 0 in Γ. It only remains to be proved that E(t) = 0 when z₁(t) = 0, z₂(t) = 0, and f(t) = 0. From (9.27), we get
Introducing (9.35) into (9.34), we obtain (g/l) z₄(t)z₃(t) − z₅²(t)z₄(t) = 0. Thus, we have either
$$\text{(a)}\;\; z_{5}^{2}(t) = \frac{g}{l}\, z_{3}(t), \qquad \text{or} \qquad \text{(b)}\;\; z_{4}(t) = 0. \qquad (9.36)$$
Differentiating (9.36) (a) we obtain
$$2 z_{5}(t)\dot{z}_{5}(t) = -\frac{g}{l}\, z_{5}(t) z_{4}(t). \qquad (9.37)$$
Let us first study (9.37), and (9.36) (b) afterward.
– If z₅(t) ≠ 0, (9.37) becomes 2ż₅(t) = −(g/l) z₄(t). Combining this equation with (9.35), we conclude that z₄ ≡ 0, which implies (9.36) (b).
– If z₅(t) = 0, then ż₅(t) = 0, which together with (9.35) implies that z₄(t) = 0, which implies (9.36) (b).
Also from (9.36) (b), we have z₄(t) = 0 and then ż₄(t) = 0. Since z₃(t) = ±1 when z₄(t) = 0, we conclude from (9.27) that z₅(t) = 0. So far we have proved that z₁(t) = 0, z₂(t) = 0, z₃(t) = ±1, z₄(t) = 0, and z₅(t) = 0. Moreover, z₃(t) = −1 (which corresponds to θ(t) = π (mod 2π)) has been excluded by imposing condition (9.24) (see also (9.11)). Therefore, (z₁(t), z₂(t), z₃(t), z₄(t), z₅(t))ᵀ = (0, 0, 1, 0, 0)ᵀ, which implies that E(t) = 0. This contradicts the assumption E(t) ≠ 0, and thus the only possible case is E(q(t), q̇(t)) = 0.
Let us end this proof with the contradiction argument. We prove that when z₂ ≡ 0, E is constant and different from zero, and f is constant, then f must be zero. From (9.27), we get
$$z_{4}(t)\bigl(3g z_{3}(t) + K_{2}\bigr) = \frac{f}{m}, \qquad (9.42)$$
with K 2 = −(2g + K 1 ). Taking the time derivative of (9.42), we obtain (see (9.27))
$$z_{5}(t)\Bigl(3g\bigl(z_{3}^{2}(t) - z_{4}^{2}(t)\bigr) + K_{2}\, z_{3}(t)\Bigr) = 0. \qquad (9.43)$$
If z₅(t) = 0, then ż₅(t) = 0, and from (9.39) we conclude that z₄(t) = 0. If z₅(t) ≠ 0, then (9.43) becomes
$$3g\bigl(z_{3}^{2}(t) - z_{4}^{2}(t)\bigr) + K_{2}\, z_{3}(t) = 0. \qquad (9.44)$$
Differentiating (9.44), it follows that z₅(t)z₄(t)(−12g z₃(t) − K₂) = 0. The case z₃(t) = −K₂/(12g) implies that θ(·) is constant, which implies z₅(t) = 0, and so z₄(t) = 0 (see (9.39)). In each case, we conclude that z₄(t) = 0 and z₅(t) = 0. From (9.38), it follows that f(t) = 0.
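The derivative identity used here, d/dt[3g(z₃² − z₄²) + K₂z₃] = z₅z₄(−12g z₃ − K₂), relies only on the kinematic relations ż₃ = −z₄z₅ and ż₄ = z₃z₅, so it holds along any smooth θ(t). A finite-difference check (illustrative Python; the trajectory and constants are hypothetical) confirms it:

```python
import math

g_, K2 = 9.81, -3.0        # illustrative constants

def theta(t):
    # Arbitrary smooth test trajectory (hypothetical)
    return 0.4 + 0.7 * t + 0.2 * math.sin(t)

def theta_dot(t):
    return 0.7 + 0.2 * math.cos(t)

def G(t):
    # Left-hand side of (9.44): 3g (z3^2 - z4^2) + K2 z3, with z3 = cos theta, z4 = sin theta
    th = theta(t)
    return 3 * g_ * (math.cos(th)**2 - math.sin(th)**2) + K2 * math.cos(th)

def G_dot_formula(t):
    # Claimed derivative: z5 z4 (-12 g z3 - K2), using z3dot = -z4 z5, z4dot = z3 z5
    th, thd = theta(t), theta_dot(t)
    return thd * math.sin(th) * (-12 * g_ * math.cos(th) - K2)

t0, h = 1.3, 1e-6
numeric = (G(t0 + h) - G(t0 - h)) / (2 * h)   # central difference
assert abs(numeric - G_dot_formula(t0)) < 1e-6
```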
Simulations showed that the nonlinear control law brings the system to the homoclinic
orbit (see the phase plot in Fig. 9.31). Switching to the linear controller occurs at
time t = 120 s. Note that before the switching the energy E goes to zero and that the
Lyapunov function V (·) is decreasing and converges to zero.
Real-time experiments showed that the nonlinear control law brings the system to the
homoclinic orbit (see the phase plot in Fig. 9.32). Switching to the linear controller
occurs at time t = 27 s. Notice that the control input lies in an acceptable range. Note
that in both simulation and experimental results, the initial conditions lie slightly outside the domain of attraction. This shows that the estimate of the domain of attraction given by (9.24) and (9.25) is conservative.
[Fig. 9.31: Simulation results: cart displacement [m], energy E, and Lyapunov function V versus time [s], and the phase plot (angular velocity versus angle [rad]) showing convergence to the homoclinic orbit.]
[Fig. 9.32: Experimental results: cart displacement [m], pendulum angle [rad], and control input versus time [s], and the corresponding phase plot.]
9.3 Conclusions
In this section, we have presented a control strategy for the inverted pendulum that brings the pendulum to a homoclinic orbit, while
the cart displacement converges to zero. Therefore, the state will enter the basin
of attraction of any locally convergent controller. The control strategy is based on
the total energy of the system, using its passivity properties. A Lyapunov function is
obtained using the total energy of the system. The convergence analysis is carried out
using the Krasovskii–LaSalle invariance principle. The system nonlinearities have not been compensated for, which has enabled us to exploit the physical properties of the system in the stability analysis. The proposed control strategy has been shown to be applicable to a wider class of underactuated mechanical systems, such as the hovercraft and the Pendubot, see [33, 34].
Some applications of passivity in control are given in the introduction of Chap. 4. Let us provide some more now. Networks of dissipative systems and their control are analyzed in [35–39] (robotic systems), [40–45] (chemical processes, reaction networks), [46] (power networks), and [47–49] (delayed networks). Passivity is used for the control of haptic systems [50–56], repetitive processes and iterative learning control [57–
60], marine vehicles and vessels [61–63], cable-driven systems [64, 65], unmanned
aerial vehicles with cable-suspended payloads [66], teleoperation systems [67–71],
single mast stacker crane [72], permanent-magnet synchronous motor [73], thermo-
hygrometric control in buildings [74], resonance elimination with active bearings
[75], heating, ventilation and air-conditioning [76], transfemoral prosthesis device
[77], grids2 [78–80], bioreactor system [81], AC/DC, DC/DC, boost converters [82–
84], hydraulic systems [85, 86], port Hamiltonian systems with holonomic bilateral
constraints [87], turbulent channel flow [88], wind-energy conversion systems [89],
cyber-physical systems [90], HIV-1 treatment scheduling [91], power supply [92],
multicellular converters [93], induction motors [94], float-glass process [95], visual
servoing [96–98], biped locomotion [99, 100], flexible multibody spacecrafts [101],
aircraft landing systems [102], electrostatic MEMS [103], neural networks [48, 104,
105], multi-virus propagation in networks [106], functional electrical stimulation
[107], electricity market trading [108], fault systems [109, 110], magnetically lev-
itated flexible beam [111], electropneumatic systems [112], attitude control [113,
114], internet congestion control [36, 115], visual human localization [116], actua-
tors with variable stiffness for mechanical systems in contact with unknown environ-
ments [117], shape-memory alloy position control systems [118], vortex motion in
a combustor [119], photovoltaic/battery systems [120], PEM fuel/cell battery [121],
photovoltaic/wind hybrid systems [122], influenza A virus treatment [123].
2 According to the U.S. Department of Energy Microgrid Exchange Group, the following criteria define a microgrid: A microgrid is a group of interconnected loads and distributed energy resources within clearly defined electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect and disconnect from the grid to enable it to operate in both grid-connected or island mode.
References
1. Spong MW (1989) Adaptive control of flexible joint manipulators. Syst Control Lett 13:15–21
2. Spong MW (1995) Adaptive control of flexible joint manipulators: comments on two papers.
Automatica 31(4):585–590
3. Brogliato B, Rey D (1998) Further experimental results on nonlinear control of flexible joint
manipulators. In: Proceedings of the American control conference, Philadelphia, PA, USA,
vol 4, pp 2209–2211
4. Brogliato B, Rey D, Pastore A, Barnier J (1998) Experimental comparison of nonlinear con-
trollers for flexible joint manipulators. Int J Robot Res 17(3):260–281
5. Brogliato B, Ortega R, Lozano R (1995) Global tracking controllers for flexible-joint manip-
ulators: a comparative study. Automatica 31(7):941–956
6. Spong MW (1987) Modeling and control of elastic joint robots. ASME J Dyn Syst Meas
Control 109:310–319
7. Lozano R, Brogliato B (1992) Adaptive control of robot manipulators with flexible joints.
IEEE Trans Autom Control 37(2):174–181
8. Brogliato B, Lozano R (1996) Correction to “adaptive control of robot manipulators with
flexible joints”. IEEE Trans Autom Control 41(6):920–922
9. de Larminat P (1993) Automatique. Commande des Systèmes Linéaires. Hermès, Paris,
France
10. Walsh A, Forbes JR (2018) Very strictly passive controller synthesis with affine parameter
dependence. IEEE Trans Autom Control 63(5):1531–1537
11. Walsh A, Forbes JR (2016) A very strictly passive gain-scheduled controller: theory and
experiments. IEEE/ASME Trans Mechatron 21(6):2817–2826
12. Giusti A, Malzahn J, Tsagarakis NG, Althoff M (2018) On the combined inverse-
dynamics/passivity-based control of elastic-joint robots. IEEE Trans Robot 34(6):1461–1471.
https://doi.org/10.1109/TRO.2018.2861917
13. Ott C, Albu-Schaeffer A, Kugi A, Hirzinger G (2008) On the passivity-based impedance
control of flexible joint robots. IEEE Trans Robot 24(2):416–429
14. Calanca A, Muradore R, Fiorini P (2017) Impedance control of series elastic actuators: passivity and acceleration-based control. Mechatronics 47:37–48
15. Tien LL, Albu-Schaeffer A, Hirzinger G (2007) MIMO state feedback controller for a flexible
joint robot with strong joint coupling. In: IEEE international conference on robotics and
automation, Roma, Italy pp 3824–3830
16. Albu-Schafer A, Ott C, Hirzinger G (2007) A unified passivity-based control framework for
position, torque and impedance control of flexible joint robots. Int J Robot Res 26(1):23–39
17. Albu-Schafer A, Ott C, Hirzinger G (2004) A passivity based cartesian impedance controller
for flexible joint robots – Part II: full state feedback, impedance design and experiments. In:
Proceedings of the IEEE international conference on robotics and automation, New Orleans,
LA, pp 2666–2672
18. Haninger K, Tomizuka M (2018) Robust passivity and passivity relaxation for impedance control of flexible-joint robots with inner-loop torque control. IEEE/ASME Trans Mechatron 23(6):2671–2680. https://doi.org/10.1109/TMECH.2018.2870846
19. Garofalo G, Ott C (2017) Energy based limit cycle control of elastically actuated robots. IEEE
Trans Autom Control 62(5):2490–2497
20. Li Y, Guo S, Xi F (2017) Preferable workspace for fatigue life improvement of flexible-joint robots under percussive riveting. ASME J Dyn Syst Meas Control 139:041012
21. Kostarigka AK, Doulgeri Z, Rovithakis GA (2013) Prescribed performance tracking for flex-
ible joint robots with unknown dynamics and variable elasticity. Automatica 49:1137–1147
22. Chang YC, Yen HM (2012) Robust tracking control for a class of electrically driven flexible-
joint robots without velocity measurements. Int J Control 85(2):194–212
23. Kelly R, Santibáñez V (1998) Global regulation of elastic joint robots based on energy shaping. IEEE Trans Autom Control 43(10):1451–1456
24. Liu C, Cheah CC, Slotine JJE (2008) Adaptive task-space regulation of rigid-link flexible-joint
robots with uncertain kinematics. Automatica 44:1806–1814
25. Battilotti S, Lanari L (1995) Global set point control via link position measurement for flexible
joint robots. Syst Control Lett 25:21–29
26. Battilotti S, Lanari L (1997) Adaptive disturbance attenuation with global stability for rigid
and elastic joint robots. Automatica 33(2):239–243
27. Feliu V, Pereira E, Diaz IM (2014) Passivity-based control of single-link flexible manipulators
using a linear strain feedback. Mech Mach Theory 71:191–208
28. Ryu JH, Kwon DS, Hannaford B (2004) Control of flexible manipulator with noncollocated
feedback: time-domain passivity approach. IEEE Trans Robot 20(4):776–780
29. Damaren CJ (2000) Passivity and noncollocation in the control of flexible multibody systems.
ASME J Dyn Syst Meas Control 122:11–17
30. Forbes JR, Damaren CJ (2012) Single-link flexible manipulator control accommodating passivity violations: theory and experiments. IEEE Trans Control Syst Technol 20(3):652–662
31. Jakubczyk B, Respondek W (1980) On the linearization of control systems. Bull Acad Polon
Sci Math 28:517–522
32. Spong MW (1994) The swing up control of the Acrobot. In: Proceedings of IEEE International
Conference on Robotics and Automation, San Diego, CA, USA, pp 616–621
33. Fantoni I, Lozano R, Spong M (1998) Energy based control of the pendubot. IEEE Trans
Autom Control 45(4):725–729
34. Fantoni I, Lozano R, Mazenc F, Pettersen KY (2000) Stabilization of an underactuated hov-
ercraft. Int J Robust Nonlinear Control 10:645–654
35. Arcak M (2007) Passivity as a design tool for group coordination. IEEE Trans Autom Control
52(8):1380–1390
36. Arcak M, Meissen C, Packard A (2016) Networks of dissipative systems. Briefs in electrical and computer engineering. Control, automation and robotics. Springer International Publishing
37. Tippett MJ, Bao J (2013) Dissipativity based distributed control synthesis. J Process Control
23:755–766
38. Hatanaka T, Chopra N, Fujita M, Spong MW (2015) Passivity-based control and estima-
tion in networked robotics. Communications and control engineering. Springer International
Publishing
39. Kottenstette N, Hall JF, Koutsoukos X, Sztipanovits J, Antsaklis P (2013) Design of networked
control systems using passivity. IEEE Trans Control Syst Technol 21(3):649–665
40. Alonso AA, Ydstie BE (1996) Process systems, passivity, and the second law of thermodynamics. Comput Chem Eng 20(Suppl):1119–1124
41. Alonso AA, Ydstie BE (2001) Stabilization of distributed systems using irreversible thermo-
dynamics. Automatica 37:1739–1755
42. Bao J, Lee PL (2007) Process control: the passive systems approach. Advances in industrial
control. Springer, London
43. Marton L, Szederkenyi G, Hangos KM (2018) Distributed control of interconnected chemical reaction networks with delay. J Process Control 71:52–62
44. Otero-Muras I, Szederkenyi G, Alonso AA, Hangos KM (2008) Local dissipative Hamiltonian description of reversible reaction networks. Syst Control Lett 57:554–560
45. Ydstie BE (2002) Passivity based control via the second law. Comput Chem Eng 26:1037–
1048
46. Trip S, Cucuzzella M, de Persis C, van der Schaft A, Ferrara A (2018) Passivity-based design
of sliding modes for optimal load frequency control. IEEE Trans Control Syst Technol. https://
doi.org/10.1109/TCST.2018.2841844
47. Li H, Gao H, Shi P (2010) New passivity analysis for neural networks with discrete and
distributed delays. IEEE Trans Neural Netw 21(11):1842–1847
48. Kwon OM, Lee SM, Park JH (2012) On improved passivity criteria of uncertain neural
networks with time-varying delays. Nonlinear Dyn 67(2):1261–1271
49. Zhang XM, Han QL, Ge X, Zhang BL (2018) Passivity analysis of delayed neural net-
works based on Lyapunov-Krasovskii functionals with delay-dependent matrices. IEEE Trans
Cybern. https://doi.org/10.1109/TCYB.2018.2874273
50. Atashzar SF, Shahbazi M, Tavakoli M, Patel R (2017) A passivity-based approach for stable patient-robot interaction in haptics-enabled rehabilitation systems: modulated time-domain passivity control. IEEE Trans Control Syst Technol 25(3):991–1006
51. Dong Y, Chopra N (2019) Passivity-based bilateral tele-driving system with parametric uncertainty and communication delays. IEEE Syst Lett 3(2):350–355
52. Griffiths PG, Gillespie RB, Freudenberg JS (2011) A fundamental linear systems conflict
between performance and passivity in haptic rendering. IEEE Trans Robot 27(1):75–88
53. Miller BE, Colgate JE, Freeman RA (2004) On the role of dissipation in haptic systems. IEEE
Trans Robot 20(4):768–771
54. Shen X, Goldfarb M (2006) On the enhanced passivity of pneumatically actuated impedance-
type haptic interfaces. IEEE Trans Robot 22(3):470–480
55. Ye Y, Pan YJ, Gupta Y, Ware J (2011) A power-based time domain passivity control for haptic
interfaces. IEEE Trans Control Syst Technol 19(4):874–883
56. Ryu JH, Kim YS, Hannaford B (2004) Sampled and continuous-time passivity and stability
of virtual environment. IEEE Trans Robot 20(4):772–776
57. Brogliato B, Landau ID, Lozano R (1991) Adaptive motion control of robot manipulators: a
unified approach based on passivity. Int J Robust Nonlinear Control 1(3):187–202
58. Kasac J, Novakovic B, Majetic D, Brezak D (2008) Passive finite dimensional repetitive
control of robot manipulators. IEEE Trans Control Syst Technol 16(3):570–576
59. Pakshin P, Emelianova J, Emelianov M, Galkowski K, Rogers E (2018) Passivity based
stabilization of repetitive processes and iterative learning control design. Syst Control Lett
122:101–108
60. Pakshin P, Emelianova J, Emelianov M, Galkowski K, Rogers E (2016) Dissipativity and
stabilization of nonlinear repetitive processes. Syst Control Lett 91:14–20
61. Donaire A, Romero JG, Perez T (2017) Trajectory tracking passivity-based control for marine
vehicles subject to disturbances. J Frankl Inst 354:2167–2182
62. Fossen TI, Strand JP (1999) Passive nonlinear observer design for ships using Lyapunov
methods: full-scale experiments with a supply vessel. Automatica 35:3–16
63. Fossen TI, Strand JP (2001) Nonlinear passive weather optimal positioning control (WOPC)
system for ships and rigs: experimental results. Automatica 37:701–715
64. Caverly RJ, Forbes JR, Mohammadshhi D (2015) Dynamic modeling and passivity-based
control of a single degree of freedom cable-actuated system. IEEE Trans Control Syst Technol
23(3):898–909
65. Caverly RJ, Forbes JR (2018) Flexible cable-driven parallel manipulator control: maintaining
positive cable tensions. IEEE Trans Control Syst Technol 26(5):1874–1883
66. Guerrero-Sanchez ME, Mercado-Ravell DA, Lozano R (2017) Swing-attenuation for a quadrotor transporting a cable-suspended payload. ISA Trans 68:433–449
67. Aziminejad A, Tavakoli M, Patel RV, Moallem M (2008) Stability and performance in delayed
bilateral teleoperation: theory and experiments. Control Eng Pract 16(11):1329–1343
68. Hu HC, Liu YC (2017) Passivity-based control framework for task-space bilateral teleopera-
tion with parametric uncertainty over reliable networks. ISA Trans 70:187–199
69. Niemeyer G, Slotine JJ (1991) Stable adaptive teleoperation. IEEE J Ocean Eng 16(2):152–
162
70. Nuno E, Basanez L, Ortega R (2011) Passivity-based control for bilateral teleoperation: a
tutorial. Automatica 47:485–495
71. Ryu JH, Kwon DS, Hannaford B (2004) Stable teleoperation with time-domain passivity
control. IEEE Trans Robot Autom 20:365–373
72. Rams H, Schoeberl M, Schlacher K (2018) Optimal motion planning and energy-based control
of a single mast stacker crane. IEEE Trans Control Syst Technol 26(4):1449–1457
73. Achour AY, Mendil B, Bacha S, Munteanu I (2009) Passivity-based current controller design
for a permanent magnet synchronous motor. ISA Trans 48:336–346
74. Okaeme CC, Mishra S, Wen JTY (2018) Passivity-based thermohygrometric control in build-
ings. IEEE Trans Control Syst Technol 26(5):1661–1672
75. Heindel S, Mueller PC, Rinderknecht S (2018) Unbalance and resonance elimination with
active bearings on general rotors. J Sound Vib 431:422–440
76. Chinde V, Kosaraju KC, Kelkar A, Pasumarthy R, Sarkar S, Singh NM (2017) A passivity-based power-shaping control of building HVAC systems. ASME J Dyn Syst Meas Control 139:111007
77. Azimi V, Shu T, Zhao H, Ambrose E, Ames AA, Simon D (2017) Robust control of a pow-
ered transfemoral prosthesis device with experimental verification. In: Proceedings American
control conference, Seattle, USA, pp 517–522
78. Ji F, Xiang J, Li W, Yue Q (2017) A feedback passivation design for DC microgrids and its DC/DC converters. Energies 10:1–15
79. Harnefors L, Wang X, Yepes A, Blaabjerg F (2016) Passivity-based stability assessment of
grid-connected VSCs-an overview. IEEE J Emerg Sel Top Power Electron 4(1):116–125
80. Stegink T, de Persis C, van der Schaft A (2017) A unifying energy-based approach to stability
of power grids with market dynamics. IEEE Trans Autom Control 62(6):2612–2622
81. Fossas E, Ros RM, Sira-Ramirez H (2004) Passivity-based control of a bioreactor system. J
Math Chem 36(4):347–360
82. Chan CY (2008) Simplified parallel-damped passivity-based controllers for dc-dc power con-
verters. Automatica 44(11):2977–2980
83. Lee TS (2004) Lagrangian modeling and passivity-based control of three-phase AC/DC
voltage-source converters. IEEE Trans Ind Electron 51(4):892–902
84. Son YI, Kim IH (2012) Complementarity PID controller to passivity-based nonlinear control
of boost converters with inductor resistance. IEEE Trans Control Syst Technol 20(3):826–834
85. Li PY, Wang MR (2014) Natural storage function for passivity-based trajectory control of hydraulic actuators. IEEE/ASME Trans Mechatron 19(3):1057–1068
86. Mazenc F, Richard E (2001) Stabilization of hydraulic systems using a passivity property.
Syst Control Lett 44:111–117
87. Castanos F, Gromov D (2016) Passivity-based control of implicit port-Hamiltonian systems
with holonomic constraints. Syst Control Lett 94:11–18
88. Heins PH, Jones BL, Sharma AS (2016) Passivity-based output-feedback control of turbulent
channel flow. Automatica 69:348–355
89. Qu YB, Song HH (2011) Energy-based coordinated control of wind energy conversion system
with DFIG. Int J Control 84(12):2035–2045
90. Koutsoukos X, Kottenstette N, Antsaklis P, Sztipanovits J (2008) Passivity-based control
design for cyber-physical systems. In: International workshop on cyber-physical systems
challenges and applications, Santorini Island, Greece
91. Palacios E, Espinoza-Perez G, Campos-Delgado DU (2007) A passivity-based approach for HIV-1 treatment scheduling. In: Proceedings of the American control conference, New York City, USA, pp 4106–4111
92. Valderrama GE, Stankovic AM, Mattavelli P (2003) Dissipativity-based adaptive and robust
control of UPS in unbalanced operation. IEEE Trans Power Electron 18(4):1056–1062
93. Cormerais H, Buisson J, Richard PY, Morvan C (2008) Modelling and passivity based control
of switched systems from bond graph formalism: application to multicellular converters. J
Frankl Inst 345:468–488
94. Fattah HAA, Loparo KA (2003) Passivity-based torque and flux tracking for induction motors
with magnetic saturation. Automatica 39:2123–2130
95. Ydstie BE, Jiao Y (2006) Passivity-based control of the float-glass process. IEEE Control
Syst Mag 26(6):64–72
96. Dean-Leon EC, Parra-Vega V, Espinosa-Romero A (2006) Visual servoing for constrained
planar robots subject to complex friction. IEEE/ASME Trans Mechatron 11(4):389–400
97. Hsu L, Costa RR, Lizarralde F (2007) Lyapunov/passivity based adaptive control of relative
degree two MIMO systems with an application to visual servoing. IEEE Trans Autom Control
52(2):364–371
646 9 Experimental Results
98. Morales B, Roberti F, Toibero JM, Carelli R (2012) Passivity based visual servoing of mobile
robots with dynamic compensation. Mechatronics 22(4):481–491
99. Henze B, Balachandran R, Roa-Garzon MA, Ott C, Albu-Schaeffer A (2018) Passivity analysis
and control of humanoid robots on movable ground. IEEE Robot Autom Lett 3(4):3457–3464
100. Spong MW, Holm JK, Lee D (2007) Passivity based control of bipedal locomotion. IEEE
Robot Autom Mag 14(2):30–40
101. Shi J, Kelkar AG (2007) A dissipative control design for Jupiter icy moons orbiter. ASME J
Dyn Syst Meas Control 129:559–565
102. Akmeliawati R, Mareels IMY (2010) Nonlinear energy-based control method for aircraft
automatic landing systems. IEEE Trans Control Syst Technol 18(4):871–884
103. Wickramasinghe IPM, Maithripala DHS, Berg JM, Dayawansa WP (2009) Passivity-based
stabilization of a 1-DOF electrostatic MEMS model with a parasitic capacitance. IEEE Trans
Control Syst Technol 17(1):249–256
104. Xiao Q, Huang Z, Zeng Z (2019) Passivity analysis for memristor-based inertial neural net-
works with discrete and distributed delays. IEEE Trans Syst Man Cybern Syst 49(2):375–385.
https://doi.org/10.1109/TSMC.2017.2732503
105. Zeng HB, He Y, Wu M, Xiao HQ (2014) Improved conditions for passivity of neural networks
with a time-varying delay. IEEE Trans Cybern 44(6):785–792
106. Lee P, Clark A, Alomair B, Bushnell L, Poovendran R (2018) Adaptive mitigation of multi-
virus propagation: a passivity-based approach. IEEE Trans Control Netw Syst 5(1):583–596
107. Ghanbari V, Duenas VH, Antsaklis PJ, Dixon WE (2018) Passivity-based iterative learning
control for cycling induced by functional electrical stimulation with electric motor assistance.
IEEE Trans Control Syst Technol. https://doi.org/10.1109/TCST.2018.2854773
108. Okawa Y, Muto K, Namerikawa T (2018) Passivity-based stability analysis of electricity
market trading with dynamic pricing. SICE J Control Meas Syst Integr 11(5):390–398
109. Manivannan R, Samidurai R, Cao J, Perc M (2018) Design of resilient reliable dissipativity
control for systems with actuator faults and probabilistic time-delay signals via sampled-data
approach. IEEE Trans Syst Man Cybern: Syst. https://doi.org/10.1109/TSMC.2018.2846645
110. Yang H, Cocquempot V, Jiang B (2008) Fault tolerance analysis for switched systems via
global passivity. IEEE Trans Circuits Syst-II: Express Briefs 55(12):1279–1283
111. Shimizu T, Kobayashi Y, Sasaki M, Okada T (2009) Passivity-based control of a magnetically
levitated flexible beam. Int J Robust Nonlinear Control 19:662–675
112. Saied KT, Smaoui M, Thomasset D, Mnif F, Derbel N (2008) Nonlinear passivity based
control law with application to electropneumatic system. In: Proceedings of 17th IFAC world
congress, Seoul, Korea, pp 1827–1832
113. Igarashi Y, Hatanaka T, Fujita M, Spong MW (2009) Passivity-based attitude synchronization
in SE(3). IEEE Trans Control Syst Technol 17(5):1119–1134
114. Tsiotras P (1998) Further passivity results for the attitude control problem. IEEE Trans Autom
Control 43(11):1597–1600
115. Wen J, Arcak M (2004) A unifying passivity framework for network flow control. IEEE Trans
Autom Control 49(2):162–174
116. Hatanaka T, Chopra N, Ishizaki T, Li N (2018) Passivity-based distributed optimization
with communication delays using PI consensus algorithm. IEEE Trans Autom Control
63(12):4421–4428
117. Misgeld BJE, Hewing L, Liu L, Leonhardt S (2019) Closed-loop positive real optimal control
of variable stiffness actuators. Control Eng Pract 82:142–150
118. Gorbet RB, Wang DWL (1998) A dissipativity approach to stability of a shape memory alloy
position control system. IEEE Trans Control Syst Technol 6(4):554–562
119. Tadmor G, Banaszuk A (2002) Observer-based control of vortex motion in a combustor
recirculation region. IEEE Trans Control Syst Technol 10(5):749–755
120. Mojallizadeh MR, Badamchizadeh MA (2016) Adaptive passivity-based control of a photo-
voltaic/battery hybrid power source via algebraic parameter identification. IEEE J Photovolt
6(2):532–539
121. Kalantar A, Kofighi A (2010) Adaptive passivity-based control of PEM fuel cell/battery hybrid
power source for stand-alone applications. Adv Electr Comput Eng 10(4):111–120
122. Croci L, Martinez A, Coirault P, Champenois G, Gaubert JP (2012) Passivity-based control of
photovoltaic-wind hybrid system with Euler-Lagrange modeling. In: Proceedings of IECON
2012 - 38th annual conference on IEEE industrial electronics society, Montreal, QC, Canada,
pp 1126–1131
123. Hernandez-Mejia G, Alanis AY, Hernandez-Gonzalez M, Findeisen R, Hernandez-Vargas EA
(2019) Passivity-based inverse optimal impulsive control for Influenza treatment in the host.
IEEE Trans Control Syst Technol. https://doi.org/10.1109/TCST.2019.2892351
Appendix A
Background Material
In this Appendix, we present the background for the main tools used throughout
the book, namely, Lyapunov stability, differential geometry for nonlinear systems,
Riccati equations, viscosity solutions of PDEs, some useful matrix algebra results,
some results that are used in the proof of the KYP Lemma, complementarity prob-
lems, variational inequalities, maximal monotone operators, and a counterexample
to Kalman’s conjecture.
where f (·) is a nonlinear vector function, and x(t) ∈ Rn is the state vector. We
suppose that the system is well-posed, i.e., a unique solution exists globally (see
Sect. 3.13.2 for details on existence, uniqueness, and continuous dependence on pa-
rameters). We may, for instance, assume that the conditions of Theorem 3.90 are
satisfied. We refer the reader to Theorems 3.142 and 3.143 for extensions of Lya-
punov stability to more general systems, like evolution variational inequalities. In
this section, we focus on ODEs.
The nonlinear system (A.1) is said to be autonomous (or time-invariant) if f (·) does
not depend explicitly on time, i.e.,
where o stands for higher order terms in x. Linearization of the original nonlinear
system at the equilibrium point is given by
(strictly) stable matrix, i.e., if all the eigenvalues of A have (negative) nonpositive
real parts. The stability of linear time-invariant systems can be determined according
to the following theorem.
Theorem A.7 ([1, 2]) Given a matrix A ∈ R^{n×n}, the following statements are equivalent:
• A is a Hurwitz matrix.
• There exists some positive definite matrix Q ∈ R^{n×n} such that A^T P + P A = −Q has a unique solution for P, and this P is positive definite.
• For every positive definite matrix Q ∈ R^{n×n}, A^T P + P A = −Q has a unique solution for P, and this solution is positive definite.
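Theorem A.7 is easy to verify numerically. The following sketch (Python with NumPy; the Hurwitz matrix A and the choice Q = I are arbitrary illustrative data, not taken from the text) solves the Lyapunov equation through the vectorization identity vec(A^T P + P A) = (I ⊗ A^T + A^T ⊗ I) vec(P) for column-major vec:

```python
import numpy as np

def lyap(A, Q):
    """Solve A^T P + P A = -Q for P (A assumed Hurwitz)."""
    n = A.shape[0]
    I = np.eye(n)
    # vec(A^T P) = (I kron A^T) vec(P), vec(P A) = (A^T kron I) vec(P)  (column-major vec)
    K = np.kron(I, A.T) + np.kron(A.T, I)
    p = np.linalg.solve(K, -Q.flatten(order="F"))
    return p.reshape((n, n), order="F")

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2: Hurwitz
Q = np.eye(2)                              # any Q = Q^T > 0
P = lyap(A, Q)

assert np.allclose(A.T @ P + P @ A, -Q)              # P solves the Lyapunov equation
assert np.allclose(P, P.T)                           # P is symmetric
assert np.all(np.linalg.eigvalsh(P) > 0)             # and positive definite, as the theorem asserts
```

The same routine fails to return a positive definite P when A is not Hurwitz, in accordance with the equivalence.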
which holds true because A is Hurwitz, see [3, Corollary 11.9.4, Fact 11.18.33]. Consider (3.310), which is equivalent to the existence of Q = Q^T ≻ 0 such that (A^T + (θ/2) I_n) P + P (A + (θ/2) I_n) = −Q. We infer that P = ∫_0^∞ exp(A^T t + (θ/2) I_n t) Q exp(A t + (θ/2) I_n t) dt. Let Q = α I_n; then using [3, Lemma 11.9.2] it follows that P = −α (A + A^T + θ I_n)^{−1}. Let us now use the series development [3, Proposition 9.4.13], which gives P = (α/θ) I_n + (α/θ^2)(−A − A^T) + O(1/θ^3), provided θ is large enough so that the spectral radius of −(1/θ)(A + A^T) is < 1. Under these assumptions, it is clear that increasing θ allows one to decrease P so that the set R_ρ in (3.312) increases in size, even if ρ does not.
There exist versions of the Lyapunov equation with solutions P = P^T ⪰ 0. The
following can be found in [4, Exercise 1].
Proposition A.8 Let A be a Hurwitz matrix, and let Q ⪰ 0. Then the Lyapunov
equation A^T P + P A = −Q has a unique solution P = P^T ⪰ 0.
Another interesting result is as follows [4, Proposition 1, p. 447]:
Proposition A.9 Let A ∈ R^{n×n}, and assume A has no eigenvalue on the imaginary
axis. If P = P^T and if A^T P + P A = Q ⪰ 0, then the number of eigenvalues of P
with negative (resp. positive) real part is less than or equal to the number of eigenvalues
of A with negative (resp. positive) real part.
Thus if A is Hurwitz, so is P. See also [1, Fact 12.21.14] for the Lyapunov equation
with positive semi-definite solution.
The local stability of the original nonlinear system can be inferred from stability
of the linearized system as stated in the following theorem.
Theorem A.10 If the linearized system is strictly stable (unstable), then the equi-
librium point of the nonlinear system is locally asymptotically stable (unstable).
The above theorem does not allow us to conclude anything when the linearized
system is marginally stable. Then, one has to rely on more sophisticated tools like
the invariant manifold theory [5].
Remark A.11 Let A^T P + P A ≺ 0, i.e., x^T (A^T P + P A) x < 0, i.e., x^T A^T P x + x^T P A x < 0 for all 0 ≠ x ∈ R^n. Since both terms are scalars and P = P^T, we have x^T A^T P x = x^T P A x. Thus, 2 x^T P A x < 0 for all x ≠ 0, which means that −P A ≻ 0 (in the sense of the quadratic form, P A being in general nonsymmetric).
Theorem A.15 (Local stability) The equilibrium point x = 0 of the system (A.2)
is (asymptotically) stable in a ball B, if there exists a scalar function V (x) with
continuous derivatives such that V (x) is positive definite and V̇ (x) is negative semi-
definite (negative definite) in the ball B.
Theorem A.16 (Global stability) The equilibrium point of system (A.2) is globally
asymptotically stable if there exists a scalar function V (x) with continuous first-
order derivatives such that V (x) is positive definite, V̇ (x) is negative definite, and
V (x) is radially unbounded, i.e., V (x) → ∞ as ‖x‖ → ∞.
Clearly, the global asymptotic stability implies that x = 0 is the unique fixed point
of (A.2) in the whole state space Rn .
Condition (ii) means that the output function has to be Lm bounded along the system’s
trajectories with initial condition x0 . Another formulation of Theorem A.18 is as
follows [7].
Theorem A.20 Under the same assumptions as in Theorem A.18, let K be the set of
points not containing whole trajectories of the system for 0 ≤ t < ∞. Then, if V̇ (x) ≤ 0
outside of K and V̇ (x) = 0 inside K, the system is asymptotically stable.
Notice in particular that {x = 0} ∉ K. K can be a surface, a line, etc. In Theorem
A.6, notice that if Q = C T C with (A, C) being an observable pair, then asymptotic
stability is obtained again. More formally:
Corollary A.21 If C ∈ R^{m×n} and the pair (A, C) is observable, then the matrix A
is asymptotically stable if and only if there exists a matrix P = P^T ≻ 0 that is the
unique solution of A^T P + P A + C^T C = 0.
The proof of this corollary is based on the quadratic function V (x) = x T P x, whose
derivative is computed along the solutions of ẋ(t) = Ax(t). Then use the Krasovskii–
LaSalle Theorem to conclude on the asymptotic stability, using that the Kalman
observability matrix is full rank. A similar result holds with controllability:
Corollary A.22 If the matrix A is asymptotically stable, then A^T P + P A + B B^T =
0 has a unique solution P = P^T ≻ 0 for any B ∈ R^{n×m} such that the pair (A, B) is
controllable. Conversely, if A^T P + P A + B B^T = 0 has a solution P = P^T ≻ 0 for
a matrix B ∈ R^{n×m} such that the pair (A, B) is controllable, then A is asymptotically
stable.
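Corollary A.21 can be illustrated numerically. In the sketch below (Python with NumPy; the pair (A, C) is an arbitrary example, not from the text), the Lyapunov equation A^T P + P A = −C^T C yields a positive definite P for an observable pair, and only a singular P ⪰ 0 when one mode is unobservable:

```python
import numpy as np

def lyap(A, Q):
    """Solve A^T P + P A = -Q via vectorization (A assumed Hurwitz)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A.T) + np.kron(A.T, np.eye(n))
    return np.linalg.solve(K, -Q.flatten(order="F")).reshape((n, n), order="F")

A = np.diag([-1.0, -2.0])
C_obs = np.array([[1.0, 1.0]])     # observable: rank [C; CA] = 2
C_unobs = np.array([[1.0, 0.0]])   # unobservable: the mode at -2 is invisible in y

P_obs = lyap(A, C_obs.T @ C_obs)      # observability-Gramian-type solution
P_unobs = lyap(A, C_unobs.T @ C_unobs)

assert np.linalg.eigvalsh(P_obs).min() > 0            # P = P^T > 0, A is Hurwitz
assert abs(np.linalg.eigvalsh(P_unobs).min()) < 1e-12  # only semi-definite
```

The singular direction of P_unobs is exactly the unobservable subspace, which is why observability is needed for the strict conclusion.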
Invariance results for time-invariant discrete-time systems have been obtained in
[8]. They apply to systems x(k + 1) = A x(k), with Lyapunov functions V (x) = x^T P x
satisfying V (x(k + 1)) − V (x(k)) = x^T Q x, A^T P A − P = Q ⪯ 0, P = P^T ≻ 0.
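The discrete-time Lyapunov (Stein) equation can be solved with the same vectorization device as its continuous counterpart; the following sketch (Python with NumPy; the Schur-stable matrix A is an arbitrary choice) checks that V indeed decreases along the trajectories:

```python
import numpy as np

# Solve A^T P A - P = Q using vec(A^T P A) = (A^T kron A^T) vec(P)  (column-major vec).
A = np.array([[0.5, 0.2], [0.0, 0.4]])   # spectral radius < 1: Schur stable
Q = -np.eye(2)                           # Q = Q^T < 0
n = A.shape[0]
K = np.kron(A.T, A.T) - np.eye(n * n)
P = np.linalg.solve(K, Q.flatten(order="F")).reshape((n, n), order="F")

assert np.allclose(A.T @ P @ A - P, Q)
assert np.all(np.linalg.eigvalsh(P) > 0)   # P = P^T > 0

x = np.array([1.0, -1.0])
V = lambda z: z @ P @ z
assert abs(V(A @ x) - V(x) - x @ Q @ x) < 1e-10   # V(x(k+1)) - V(x(k)) = x^T Q x < 0
```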
Lemma A.24 ([9, Lemma 1.6]) Suppose that the stationary set Ω of (A.1) is
bounded, and that there exists an ε-neighborhood Ωε = {y ∈ Rn | inf x∈Ω ρ(y, x) <
ε}, such that there exists a continuous function V (x, c) defined for all x ∈ Ωε , c ∈ Ω,
such that:
1. V (x, c) > 0 for all x ∈ Ωε \ Ω,
2. V (c, c) = 0,
3. for any solution x(t) of (A.1), V (x(t), c) is nonincreasing in t when x(t) ∈ Ωε .
Then, the stationary set Ω is Lyapunov stable.
An example of a function that satisfies items (1) and (2) is V (x, c) = dist^2(x, Ω) =
(x − proj[Ω; x])^T (x − proj[Ω; x]), where the second equality holds if Ω is convex.
The stability properties are called uniform when they hold independently of the initial
time t0 as in the following definitions.
The result of Theorem A.6 can be extended to linear time-varying systems of the
form (A.10) as follows.
Theorem A.32 A necessary and sufficient condition for the uniform asymptotic
stability of the origin of the system (A.10) is that a matrix P(t) exists such that
V (t, x) = x T P(t)x is positive definite for each t ≥ t0 , and
We present now the Lyapunov stability theorems for nonautonomous systems. The
following definitions are required.
(i) κ(0) = 0,
(ii) κ(χ ) > 0 for all χ > 0,
(iii) κ(·) is nondecreasing.
Statements (ii) and (iii) can also be replaced with (ii’) κ is strictly increasing, so
that the inverse function κ −1 (·) is defined. The function is said to be of class K∞ if
k = ∞ and κ(χ ) → ∞ as χ → ∞.
Theorem A.39 Assume that V (x, t) has continuous first derivatives around the
equilibrium point x = 0. Consider the following conditions on V (·) and V̇ (·), where
α(·), β(·), and γ (·) denote functions of class K, and let B_r be the closed ball with
radius r > 0 and center x = 0:
Lemma A.40 (Barbalat) If the differentiable function f (·) has a finite limit as t →
+∞, and if f˙(·) is uniformly continuous, then f˙(t) → 0 as t → ∞.
This lemma can be applied for studying stability of nonautonomous systems with
Lyapunov Theorem, as stated by the following result.
Lemma A.41 If a scalar function V (x, t) is lower bounded and V̇ (x, t) is negative
semi-definite, then V̇ (x, t) → 0 as t → ∞ if V̇ (x, t) is uniformly continuous in time.
Definition A.44 (Lie derivative) The Lie derivative of h(·) with respect to f (·) is
the scalar

L_f h = (∂h/∂x) f,

Definition A.45 (Lie bracket) The Lie bracket of f and g is the vector

[ f, g] = (∂g/∂x) f − (∂ f/∂x) g,

also denoted ad_f g. The Lie derivative along a bracket satisfies

L_{ad_f g} h = L_f (L_g h) − L_g (L_f h).
In this section, we present the normal form of a nonlinear system which has been
instrumental for the development of the feedback linearizing technique. For this, it
is convenient to define the notion of relative degree of a nonlinear system.
(i) L_g L_f^k h(x) = 0, for all x in a neighborhood of the origin and for all k < r − 1,
(ii) L_g L_f^{r−1} h(x) ≠ 0.
It is worth noticing that in the case of linear systems, e.g., f (x) = Ax, g(x) = B,
h(x) = C x, the integer r is characterized by the conditions C A^k B = 0 for all k <
r − 1 and C A^{r−1} B ≠ 0. It is well known that these are exactly the conditions that
define the relative degree of a linear system. Another interesting interpretation of the
relative degree is that r is exactly the number of times one has to differentiate the
output for the input to appear explicitly. Let us now assume that u and y both
have dimension m in (A.12).
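The linear characterization just recalled can be checked mechanically; the short sketch below (Python with NumPy; the chain of integrators is an arbitrary example triple, not from the text) returns the smallest r with C A^{r−1} B ≠ 0:

```python
import numpy as np

def relative_degree(A, B, C, tol=1e-12):
    """Smallest r with C A^(r-1) B != 0, or None if none exists up to n."""
    n = A.shape[0]
    M = B
    for r in range(1, n + 1):
        if np.abs(C @ M).max() > tol:   # C A^(r-1) B != 0
            return r
        M = A @ M                        # next Markov parameter
    return None

# Chain of three integrators with output y = x1: r should equal n = 3.
A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
B = np.array([[0.], [0.], [1.]])
C = np.array([[1., 0., 0.]])
assert relative_degree(A, B, C) == 3   # y must be differentiated 3 times to see u
```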
Definition A.50 (Uniform vector relative degree) Let u(t) ∈ Rm and y(t) ∈ Rm in
(A.12). The system is said to have a uniform relative degree r if ri = r for all
1 ≤ i ≤ m in the previous definition.
We note that this definition is different from the definition of the uniform relative
degree in [12, p. 427], where uniformity refers to the fact that the system (single-
input–single-output) has a (scalar) relative degree r at each x ∈ Rn . Here, we rather
employ uniformity in the sense that the vector relative degree has equal elements. In
the linear time-invariant multivariable case, such a property has favorable consequences, as
recalled a few lines below. The functions L_f^i h for i = 0, 1, . . . , r − 1 have a special
meaning, as demonstrated in the following theorem.
is a diffeomorphism z = φ(x) that transforms the system into the following normal
form:

ż_1 = z_2
⋮
ż_{r−1} = z_r
ż_r = b(z) + a(z) u      (A.14)
ż_{r+1} = q_{r+1}(z)
⋮
ż_n = q_n(z)
A similar canonical form can be derived for the multivariable case; however, it
is more involved [12]. In the case of a linear time-invariant system (A, B, C), a
similar canonical state space realization has been shown to exist in [13], provided
C A^i B = 0 for all i = 0, 1, . . . , r − 2, and the matrix C A^{r−1} B is nonsingular. This
canonical form, due to Sannuti, is quite interesting as the zero dynamics takes the form
ξ̇ (t) = A_0 ξ(t) + B_0 z_1(t): it involves only the output z_1 of the system. The conditions
on the Markov parameters are sufficient conditions for the invertibility of the
system [14]. Other such canonical state space representations have been derived by
Sannuti and coworkers [15–17], which are usually not mentioned in textbooks.
From Theorem A.51, we see that the state-feedback control law

u = (1/a(z)) (−b(z) + v),      (A.15)
(∂h/∂x) [g(x), ad_f g(x), . . . , ad_f^{n−2} g(x)] = 0.

The Frobenius Theorem shows that the existence of solutions to this equation is
equivalent to the involutivity of {g(x), ad_f g(x), . . . , ad_f^{n−2} g(x)}. It can be shown
that the second condition, i.e., L_g L_f^{n−1} h(x) ≠ 0, is ensured by the linear independence
of g(x), ad_f g(x), . . . , ad_f^{n−1} g(x).
Theorem A.52 For the system (A.12) there exists an output function h(x) such that
the triple { f (x), g(x), h(x)} has relative degree n at x = 0, if and only if:
(i) The matrix [g(0), ad_f g(0), . . . , ad_f^{n−1} g(0)] is full rank.
(ii) The set {g(x), ad_f g(x), . . . , ad_f^{n−2} g(x)} is involutive around the origin.
If the relative degree of the system satisfies r < n, then, under the action of the feedback
linearizing controller (A.15), there remains an (n − r)-dimensional subsystem. The
importance of this subsystem is underscored in the theorem below.
Theorem A.53 Consider the system (A.12), assumed to have relative degree r. Further,
assume that the trivial equilibrium of the following (n − r)-dimensional dynamical
system is locally asymptotically stable:

ż_{r+1} = q_{r+1}(0, . . . , 0, z_{r+1}, . . . , z_n)
⋮                                                  (A.16)
ż_n = q_n(0, . . . , 0, z_{r+1}, . . . , z_n)

where q_{r+1}, . . . , q_n are given by the normal form. Under these conditions, the control
law (A.15) yields a locally asymptotically stable closed-loop system.
Note the qualifier local in the above theorem: it can be shown that the
conditions above are not enough to ensure global asymptotic stability.
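As a numerical illustration of the linearizing law (A.15), consider the toy system ẋ_1 = x_2, ẋ_2 = x_1^3 + u with output y = x_1 (an arbitrary example with r = n = 2, so there are no zero dynamics; the gains below are likewise arbitrary). A simple explicit-Euler sketch in Python with NumPy:

```python
import numpy as np

# Here b(z) = z1^3 and a(z) = 1, so (A.15) reads u = -z1^3 + v; the outer loop
# v = -2 z1 - 2 z2 places the closed-loop poles of the linearized chain at -1 +/- j.
h, N = 0.01, 3000
x = np.array([1.0, 0.0])
for _ in range(N):                       # explicit Euler integration
    v = -2.0 * x[0] - 2.0 * x[1]        # stabilizing outer loop on the linear chain
    u = -x[0] ** 3 + v                  # cancels the nonlinearity exactly
    x = x + h * np.array([x[1], x[0] ** 3 + u])
assert np.linalg.norm(x) < 1e-3          # the state has converged to the origin
```

Because the cancellation is algebraic, the simulated closed loop is exactly the linear stable chain, which is what the assertion verifies.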
Further reading: The original Lyapunov Theorem is contained in [18], while stability
of nonlinear dynamic systems is widely covered in [19, 20]. The proofs of
the theorems concerning Lyapunov stability can be found in [2, 5, 21]. An
extensive presentation of differential geometry methods can be found in [12] and the
references therein. For the extension to the multivariable case and further details, we
refer the reader again to [12, 22].
This section intends to briefly describe viscosity solutions of first-order nonlinear
partial differential equations of the form

F(x, V (x), ∇V (x)) = 0.      (A.17)

Recall that V (·) is differentiable at x, with gradient ζ = ∇V (x), if and only if

lim_{z→x} [V (z) − V (x) − ζ^T (z − x)] / |z − x| = 0,      (A.18)
and this equality can equivalently be stated as the next two inequalities holding
simultaneously:

lim sup_{z→x} [V (z) − V (x) − ζ^T (z − x)] / |z − x| ≤ 0      (A.19)

(in other words, ζ satisfies (A.19) if and only if the plane z ↦ V (x) + ζ^T (z − x) is
tangent from above to the graph of V (·) at x), and

lim inf_{z→x} [V (z) − V (x) − ζ^T (z − x)] / |z − x| ≥ 0,      (A.20)

(in other words, ζ satisfies (A.20) if and only if the plane z ↦ V (x) + ζ^T (z − x) is
tangent from below to the graph of V (·) at x). The superdifferential of V (·) at x is
then defined as the set D^+ V (x) of vectors ζ ∈ R^n satisfying (A.19); the subdifferential
D^− V (x) is the set of vectors ζ ∈ R^n satisfying (A.20).
It is noteworthy that such sets may be empty, see the examples below. Sometimes
these sets are named one-sided differentials. The function V (·) is said to be a viscosity
subsolution of the partial differential equation (A.17) if for each x ∈ Rn one has
F(x, V (x), ζ ) ≤ 0,
for all ζ ∈ D + V (x). The function V (·) is said to be a viscosity supersolution of the
partial differential equation (A.17) if for each x ∈ Rn one has
F(x, V (x), ζ ) ≥ 0,
for all ζ ∈ D − V (x). The function V (·) is said to be a viscosity solution of the partial
differential equation (A.17) if it is both a viscosity subsolution and a viscosity super-
solution of this partial differential equation. As we already pointed out in Sect. 4.3.5,
in case of proper2 convex functions the viscosity subdifferential (or subgradient)
and the convex analysis subgradient are the same [23, Proposition 8.12]. We now
consider two illustrative examples taken from [24].
Example A.56 The same function V (x) = |x| is not a viscosity solution of −1 +
|∂V/∂x| = 0. At x = 0, choosing ζ = 0 one obtains −1 + |0| = −1 < 0, so the function
is not a supersolution, though it is a viscosity subsolution.
It is a fact that if V (·) is convex and not differentiable at x then D + V (x) = ∅. The
following Lemma says a bit more.
2 Proper in this context means that V (x) < +∞ for at least one x ∈ Rn , and V (x) > −∞ for all
x∈ Rn .
and

I^− = {x ∈ I | D^− V (x) ≠ ∅}.
The second item says that if a function is not differentiable at x, then necessarily one
of the two sets must be empty. This confirms the above examples. The third item says
that the points x where the continuous function V (·) admits a superdifferential or a
subdifferential are numerous in I: they form dense subsets of I
(take any point y ∈ I and any neighborhood of y: there is an x in such a neighborhood
at which V (·) has a one-sided differential). There is another way to define a viscosity
solution.
From the first item it becomes clear why a convex function that is not differentiable
at x has D + V (x) = ∅. Then a continuous function V (·) is a viscosity subsolution of
F(x, V (x), ∇V (x)) = 0 if for every C 1 function ϕ(·) such that V − ϕ has a local
maximum at x one has F(x, V (x), ∇ϕ(x)) ≤ 0. It is a viscosity supersolution of
F(x, V (x), ∇V (x)) = 0 if for every C 1 function ϕ(·) such that V − ϕ has a local
minimum at x one has F(x, V (x), ∇ϕ(x)) ≥ 0. The following result is interesting:
Proposition A.59 ([25]) Given a system ẋ(t) = f (x(t), u(t)) whose solutions
on [t_0, t_1] are absolutely continuous functions such that ẋ(t) = f (x(t), u(t))
for almost all t ∈ [t_0, t_1], a supply rate w(x, u) such that w(0, u) ≥ 0, and a
continuous function V : R^n → R such that V (0) = 0, the following two statements are
equivalent:

V (x(t_1)) − V (x(t_0)) ≤ ∫_{t_0}^{t_1} w(x(t), u(t)) dt holds for every solution x : [t_0, t_1] → R^n,      (A.22)

and

ζ^T f (x, u) ≤ w(x, u) for every x ∈ R^n, u ∈ U, and ζ ∈ D^− V (x).
In other words, one may write the infinitesimal version of the dissipation inequality
when the storage function is not differentiable, by replacing its gradient by a viscosity
subgradient. Notice that all the Lyapunov functions we worked with in Sect. 3.14 were
differentiable, hence no viscosity solutions were needed in those developments. On
the contrary, the models (the plants) were nonsmooth.
Studying and solving Riccati equations is a wide topic, and we do
not pretend to cover it in this small appendix. The results we present only aim at
showing that, under conditions different from those stated
in the foregoing chapters, the existence of solutions to algebraic Riccati equations can
be guaranteed. Let us consider the following algebraic Riccati equation:
P D P + P A + A T P − C = 0, (A.23)
where A ∈ R^{n×n}, C ∈ R^{n×n}, and D ∈ R^{n×n}, and P is the unknown matrix. Before going
on we need a number of definitions. A subspace Ω ⊂ R^{2n} is called N-neutral if
x^T N y = 0 for all x, y ∈ Ω (Ω may be Ker(N) or Ker(N^T)). The neutrality index
γ (M, N) of a pair of matrices (M, N) is the maximal dimension of a real M-invariant
N-neutral subspace of R^{2n}.
A pair of matrices (A, D) is sign controllable if for every λ_0 ∈ R at least one of
the subspaces Ker(λ_0 I_n − A)^n and Ker(−λ_0 I_n − A)^n is contained in Im[D, AD, . . . ,
A^{n−1} D], and for every λ + jμ ∈ C, λ ∈ R, μ ∈ R, μ ≠ 0, at least one of the two
subspaces Ker[(λ^2 + μ^2) I_n ± 2λ A + A^2]^n is contained in Im[D, AD, . . . , A^{n−1} D].
Another way to characterize the sign-controllability of the pair (A, D) is: for any
λ ∈ C, at least one of the two matrices [λ I_n − A, D] and [−λ̄ I_n − A, D] has full
rank [26]. Sign-controllability of (A, D) implies that there exists a matrix K such
that F = A + D K is unmixed, i.e., σ (F) ∩ σ (−F^T) = ∅. Sign-controllability also
implies that all purely imaginary modes of (A, D) are controllable [27].
We now define the two matrices in R^{2n×2n}:

M = [[A, D], [C, −A^T]],      H = [[0, I_n], [−I_n, 0]].
Theorem A.60 ([28]) Let D ⪰ 0 and let (A, D) be sign controllable. Suppose that the
matrix M is invertible. Then the following statements are equivalent:
• The ARE (A.23) has a real solution.
• The ARE (A.23) has a real solution P for which rank(P − P^T) ≤ 2(n − γ (M, H)).
• The matrix M has a real n-dimensional invariant subspace.
• Either n is even, or n is odd and M has a real eigenvalue.
If γ (M, H) = n, there exists a real symmetric solution.
P A + A^T P − P B B^T P + Q ⪰ 0.      (A.24)

Lemma A.61 ([29]) Suppose that the pair (A, B) is stabilizable. The following
statements are equivalent:
• The Riccati inequality (A.24) has a solution P = P^T.
• There exists P^− = (P^−)^T such that P^− A + A^T P^− − P^− B B^T P^− + Q = 0 and σ (A − B B^T P^−) ⊂ C^−.
• The Hamiltonian matrix H = [[A, −B B^T], [−Q, −A^T]] has no eigenvalues on the imaginary
axis.
Suppose that one of these conditions holds. Then any solution P of (A.24) satisfies
P ⪯ P^−.
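The link between the Hamiltonian matrix and the stabilizing solution P^− can be exercised numerically: stacking a basis of the stable invariant subspace of H as [X_1; X_2] gives P^− = X_2 X_1^{−1}. A sketch in Python with NumPy (the double-integrator data are an arbitrary test case, not taken from the text):

```python
import numpy as np

A = np.array([[0., 1.], [0., 0.]])   # double integrator
B = np.array([[0.], [1.]])
Q = np.eye(2)

# Hamiltonian matrix of the lemma: H = [[A, -B B^T], [-Q, -A^T]].
H = np.block([[A, -B @ B.T], [-Q, -A.T]])
w, V = np.linalg.eig(H)
Vs = V[:, w.real < 0]                    # basis of the stable invariant subspace
X1, X2 = Vs[:2, :], Vs[2:, :]
P = np.real(X2 @ np.linalg.inv(X1))      # stabilizing solution P^-

residual = P @ A + A.T @ P - P @ B @ B.T @ P + Q
assert np.allclose(residual, np.zeros((2, 2)), atol=1e-8)        # solves the ARE
assert np.all(np.linalg.eig(A - B @ B.T @ P)[0].real < 0)        # sigma(A - B B^T P) in C^-
```

Since H has no imaginary-axis eigenvalues here, the stable subspace has dimension n = 2 and X_1 is invertible, in line with the equivalences stated above.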
The notation σ (A) ⊂ C^− means that all the eigenvalues of A have negative real parts.
In case the pair (A, B) is not stabilizable, things are more complex and one has first
to perform a decomposition of A and B before proposing a test, see [30]. The next
lemma is used in the proof of Lemma 5.79 (which should therefore better be named
a theorem or a proposition, however, we kept the usual name for the Bounded Real
Lemma).
Lemma A.62 ([31, Lemma 2.1]) Assume that A is stable and the Riccati equation
A T P̄ + P̄ A + P̄ B B T P̄ + Q̄ = 0 (A.25)
A^T P + P A − P M P + N = 0
⇔ A^T P + P A + P M^T P − P M^T P − P M P + N = 0
⇔ (A^T − P M^T) P + P (A − M P) + N + P M^T P = 0      (A.27)
⇔ (A − M P)^T P + P (A − M P) + Q = 0,

with Q ≜ N + P M^T P.
Definition A.63 ([14]) The transfer matrix H(s) = C(s I_n − A)^{−1} B + D ∈ C^{m×m}
is invertible if there exist a proper transfer matrix Ĥ(s) and a nonnegative integer
l such that

Ĥ(s) H(s) = (1/s^l) I_m.      (A.28)

The least integer l satisfying (A.28) is called the inherent integration of H(s).
An m × m transfer matrix is invertible, if and only if it has rank m over the field of
proper transfer functions. Let us now give a definition of the normal rank of a transfer
function matrix.
Definition A.64 The transfer function H(s) ∈ C^{m×m}, analytic in the region Ω ⊆ C,
is said to have full normal rank if there exists s ∈ Ω such that det(H(s)) ≠ 0.
In this section, some matrix algebra results are provided, some of which are instru-
mental in the PR and dissipative systems characterization.
Since proving that [[G, g], [g^T, Γ]] ⪰ 0 is equivalent to proving that [[Γ, g^T], [g, G]] ⪰ 0, the
equivalence between (3.3) and (3.19) follows from Theorem A.65, identifying Γ with
−P A − A^T P and G with D + D^T. The matrix Γ − g^T G^{−1} g is the so-called Schur
complement of G in [[G, g], [g^T, Γ]], while the matrix G − g Γ^{−1} g^T is the Schur
complement of Γ in the same matrix. See, for instance, [9, Lemma 2.1]
or [39] for the proof. Thus, Theorem A.65 is sometimes called the Schur Complement
Lemma. Another useful result is the following:
Proof Let us prove the first part with ⪰. The "if" sense is easy to prove. The
"only if" is as follows. Assume M ⪰ 0. Let S be any real square matrix such
that M = S^T S, i.e., S is a square root of M. Let S = Q R be the QR factorization
of S, with an orthonormal matrix Q and an upper triangular matrix R. Then
M = R^T R is a Cholesky factorization of M. Let us partition the matrix R as

R = [[R_{11}, R_{12}], [0, R_{22}]] ≜ [[L^T, W], [0, R_{22}]].

From M = R^T R, it follows that M_{11} = L L^T, M_{12} = L W, and M_{22} = W^T W + R_{22}^T R_{22} ⪰ W^T W. Therefore, L and W satisfy the conditions of the proposition.
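The Schur Complement Lemma itself is easy to probe numerically. The sketch below (Python with NumPy; G, g, and the shifts are arbitrary test data) verifies, for G ≻ 0, that [[G, g], [g^T, Γ]] ⪰ 0 exactly when Γ − g^T G^{−1} g ⪰ 0:

```python
import numpy as np

rng = np.random.default_rng(0)
G = 2.0 * np.eye(2)                     # G = G^T > 0
g = rng.standard_normal((2, 2))
base = g.T @ np.linalg.inv(G) @ g       # makes the Schur complement easy to control
for shift in (1.0, 0.0, -1.0):
    Gamma = base + shift * np.eye(2)    # Schur complement of G equals shift * I
    M = np.block([[G, g], [g.T, Gamma]])
    psd_M = np.linalg.eigvalsh(M).min() >= -1e-10   # M positive semidefinite?
    psd_S = shift >= 0.0                # sign of Gamma - g^T G^{-1} g
    assert psd_M == psd_S               # the two tests agree, as the lemma states
```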
Another result that may be useful for the degenerate case of systems where D ⪰ 0
is the following one, which can be deduced from Proposition A.68.
Lemma A.69 Let Q = Q^T, S, and R = R^T be real matrices, with R = [[R_1, 0], [0, 0]] and
R_1 = R_1^T ≻ 0. Then

[[Q, S], [S^T, R]] = [[Q, S_1, S_2], [S_1^T, R_1, 0], [S_2^T, 0, 0]] ⪰ 0      (A.30)

if and only if

[[Q, S_1], [S_1^T, R_1]] ⪰ 0      (A.31)

and

Q ⪰ 0,  S_2 = 0.      (A.32)
Applying Theorem A.65, the reduced-order LMI (A.31) can be rewritten as the Riccati
inequality Q − S_1 R_1^{−1} S_1^T ⪰ 0. This is the reduced-order Riccati inequality satisfied
by a PR system with a feedthrough term D ⪰ 0.
The following is taken from [42] and also concerns the degenerate case when
D ⪰ 0; here A† denotes the Moore–Penrose pseudo-inverse of a matrix A.
Lemma A.70 Suppose that Q and R are symmetric. Then [[Q, S], [S^T, R]] ⪰ 0 if and only
if R ⪰ 0, Q − S R† S^T ⪰ 0, and S(I − R† R) = 0 (equivalently, (I − R R†) S^T = 0).
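A quick numerical check of Lemma A.70 on a degenerate example with singular R (Python with NumPy; all data below are arbitrary): the kernel condition S(I − R† R) = 0 is exactly what distinguishes the two cases.

```python
import numpy as np

R = np.diag([1.0, 0.0])                 # R = R^T >= 0, singular
Rp = np.linalg.pinv(R)                  # Moore-Penrose pseudo-inverse R^+
S_ok = np.array([[0.5, 0.0]])           # S(I - R^+ R) = 0: S vanishes on ker(R)
S_bad = np.array([[0.5, 0.3]])          # nonzero on ker(R): kernel condition fails
Q = np.eye(1)

def psd(M, tol=1e-10):
    return np.linalg.eigvalsh(M).min() >= -tol

M_ok = np.block([[Q, S_ok], [S_ok.T, R]])
M_bad = np.block([[Q, S_bad], [S_bad.T, R]])

assert np.allclose(S_ok @ (np.eye(2) - Rp @ R), 0)   # kernel condition holds
assert psd(Q - S_ok @ Rp @ S_ok.T) and psd(M_ok)     # all three conditions => PSD
assert not psd(M_bad)                                # kernel condition fails => not PSD
```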
The next lemma is stated in [43] and is used in the proof of Proposition 3.63.
Lemma A.71 Let M = M^T ∈ R^{m×m} with M ⪰ 0. The following statements hold:
1. N^T M N = 0 ⇒ M N = 0.
2. For any index set J ⊆ {1, . . . , m}, v^T M_{JJ} v = 0 ⇒ M_{•J} v = 0.
Proof (1) Being symmetric and ⪰ 0, M has a square root M^{1/2} = (M^{1/2})^T ⪰ 0 [4, Theorem 1, p. 181]. Hence N^T M N = (M^{1/2} N)^T (M^{1/2} N) = 0 implies M^{1/2} N = 0, so M N = 0. (2) Let the index set J ⊆ {1, . . . , m}
and the vector v be such that v^T M_{JJ} v = 0. Denote m̄ = {1, . . . , m}. We obtain

(v^T  0) [[M_{JJ}, M_{J, m̄∖J}], [M_{m̄∖J, J}, M_{m̄∖J, m̄∖J}]] (v ; 0) = 0.

Hence item 1 implies that [[M_{JJ}, M_{J, m̄∖J}], [M_{m̄∖J, J}, M_{m̄∖J, m̄∖J}]] (v ; 0) = 0. Equivalently,
M_{•J} v = 0.
The following can be found in classical books on matrix algebra or linear systems
[4, 44, 45]. Let A ∈ Rm×m and C ∈ Rn×n be nonsingular matrices. Then
so that
(I + C(s I − A)−1 B)−1 = I − C(s I − A + BC)−1 B, (A.34)
where I has the appropriate dimension. Let now A and B be square nonsingular
matrices. Then

[[A, 0], [C, B]]^{−1} = [[A^{−1}, 0], [−B^{−1} C A^{−1}, B^{−1}]],      (A.35)

and

[[A, D], [0, B]]^{−1} = [[A^{−1}, −A^{−1} D B^{−1}], [0, B^{−1}]].      (A.36)
Notice that if [[A, D], [C, B]] and A are both invertible, then the Schur complement H = B − C A^{−1} D is full rank [4, Exercise 15,
p. 46].
Let A and B be invertible n × n matrices, then [1, Fact 2.14.13]:
Recall that the Moore–Penrose pseudo-inverse of A ∈ R^{n×m} is the unique matrix
X ∈ R^{m×n} satisfying the Penrose equations: A X A = A, X A X = X, (A X)^T =
A X, (X A)^T = X A. It is usually denoted as X = A†. If A has full column rank
m (⇒ m ≤ n), then A† = (A^T A)^{−1} A^T. If A has full row rank n (⇒ n ≤ m), then
A† = A^T (A A^T)^{−1}. The so-called Banachiewicz–Schur form of the inverse of a partitioned
matrix is given as follows [46, 47]. Let M = [[A, D], [C, B]]; then its Banachiewicz–Schur
form is defined as

N(A^−) = [[A^− + A^− D H^− C A^−, −A^− D H^−], [−H^− C A^−, H^−]],      (A.39)

where H ≜ B − C A^− D, and A^− is any pseudo-inverse which satisfies A A^− A = A
(the Moore–Penrose pseudo-inverse is one example).
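The Penrose equations and the full-column-rank formula recalled above are easy to verify numerically; a short sketch in Python with NumPy (the random tall matrix is arbitrary test data):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))          # full column rank with probability 1
X = np.linalg.pinv(A)                    # Moore-Penrose pseudo-inverse A^+

# The four Penrose equations:
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose((A @ X).T, A @ X)
assert np.allclose((X @ A).T, X @ A)

# Full-column-rank formula A^+ = (A^T A)^{-1} A^T:
assert np.allclose(X, np.linalg.inv(A.T @ A) @ A.T)
```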
Theorem A.72 ([46, Theorem 2.10], [48]) Let M = [[A, D], [C, B]] and let N(A^−) be as in
(A.39). Then, the following statements are equivalent:
1. N(A^−) = M†.
2. Im(D) ⊆ Im(A), Im(C^T) ⊆ Im(A^T), Im(C) ⊆ Im(H), Im(D^T) ⊆ Im(H^T), and
the pseudo-inverses A^− and H^− in N(A^−) satisfy A^− = A† and H^− = H†.
Let us consider a slightly more general LMI than the one in (5.105):
[[A^T P + P A, P B, C^T], [B^T P, γ_1 I_m, D^T], [C, D, γ_2 I_m]] ≺ 0.      (A.40)
with E ≜ (γ_1 I_m − (1/γ_2) D^T D)^{−1}, which is defined since D̃ is invertible. Using Theorem
A.65, we infer that (A.40) implies the Riccati inequality:

A^T P + P A − (P B  C^T) D̃^{−1} (B^T P ; C) ≺ 0.      (A.42)

A^T P + P A − (1/γ_2) C^T C − (P B − (1/γ_2) C^T D) E (B^T P − (1/γ_2) D^T C) ≺ 0.      (A.43)

Setting γ_1 = γ^2 and γ_2 = −1 allows one to recover the usual Riccati inequality for
the strict Bounded Real Lemma. It is also possible to calculate that E = −(1/γ_1) I_m −
(1/γ_1) D^T (γ_1 γ_2 I_m − D D^T)^{−1} D. More on such bounded real LMIs and (Q, S, R)-
dissipativity may be found in [49, Theorem 1].
Fact 1 Let A ∈ Rn×n be such that In − A is invertible. Then, (In + A)(In − A)−1 =
(In − A)−1 (In + A).
Proof We have (I_n + A)(I_n − A)^{−1}(I_n − A) = I_n + A, and (I_n − A)^{−1}(I_n + A)
(I_n − A) = (I_n − A)^{−1}(I_n − A^2) = (I_n − A)^{−1}(I_n − A)(I_n + A) = I_n + A, where
we used that (I_n + A)(I_n − A) = (I_n − A)(I_n + A) = I_n − A^2. Post-multiplying both identities by (I_n − A)^{−1} yields the result.
Fact 2 Let A ∈ Rn×n be such that In + A is invertible. Then (In + A)−1 (In − A) =
(In − A)(In + A)−1 .
Fact 3 Let A ∈ Rn×n , μ and η real numbers. Then (In − μA)(In + η A) = (In +
η A)(In − μA).
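Facts 1–3 are immediate to confirm numerically; a short numpy sketch with a random small-norm matrix (so that In ± A are invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
A = 0.1 * rng.standard_normal((4, 4))   # small norm keeps I - A and I + A invertible
I = np.eye(4)

# Fact 1 / Fact 2: the Cayley-type factors commute
assert np.allclose((I + A) @ np.linalg.inv(I - A), np.linalg.inv(I - A) @ (I + A))
assert np.allclose(np.linalg.inv(I + A) @ (I - A), (I - A) @ np.linalg.inv(I + A))

# Fact 3: two polynomials in the same matrix always commute
mu, eta = 0.7, -1.3
assert np.allclose((I - mu * A) @ (I + eta * A), (I + eta * A) @ (I - mu * A))
```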
    −L L^T + h²(1 − 2θ) A^T R A = −(I_n − hθA)^T L̃L̃^T (I_n − hθA).   (A.44)

    −L L^T + h²(1 − 2θ) A^T R A = −L̃L̃^T + hθ(L̃L̃^T A + A^T L̃L̃^T) − h²θ² A^T L̃L̃^T A.   (A.47)
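The right-hand side of (A.47) is just the expansion of the congruence on the right-hand side of (A.44); a numerical check of that expansion (numpy, random data):

```python
import numpy as np

rng = np.random.default_rng(3)
n, h, theta = 3, 0.01, 0.5
A = rng.standard_normal((n, n))
Lt = rng.standard_normal((n, n))
X = Lt @ Lt.T                      # plays the role of L̃ L̃ᵀ
I = np.eye(n)

lhs = -(I - h * theta * A).T @ X @ (I - h * theta * A)
rhs = -X + h * theta * (X @ A + A.T @ X) - h**2 * theta**2 * A.T @ X @ A
assert np.allclose(lhs, rhs)
```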
It is noteworthy that, since L̃ depends on h, it is not possible to directly equate the coefficients of the same powers of h in order to obtain necessary and sufficient conditions for the preservation of both the energy storage function (i.e., hR = P) and the state dissipation function (i.e., L L^T = L̃L̃^T) after discretization. The following result aims at bridging this gap.
Proof Note that if A^T P + P A = −L L^T and (3.344) (1) are satisfied, then from Lemma A.73 we know that (A.44) (equivalently (A.47)) holds. Let L L^T = L̃L̃^T hold for all h > 0; then (A.44) implies

    h [(1 − 2θ) A^T R A + θ² A^T L L^T A] = θ (L L^T A + A^T L L^T).   (A.49)

For (A.49) to hold for any h > 0, one has to nullify the coefficients of the polynomial in h. Then we get

    L L^T = L̃L̃^T  ⟹  { θ L L^T A = −(θ L L^T A)^T ;  (2θ − 1) A^T R A = θ² A^T L L^T A }.   (A.50)

Let us split the proof according to the values of θ, i.e., (a) θ ∈ (0, 1], θ ≠ 1/2, (b) θ = 1/2, and (c) θ = 0.
Case (a). Let L L^T = L̃L̃^T hold for all h > 0 and (A.50) for all θ ∈ (0, 1]. Then θ = 1/2 yields

    L L^T = L̃L̃^T  ⟹  { L L^T A = −(L L^T A)^T ;  A^T R A = 0 ;  A^T L L^T A = 0 }.   (A.51)

    −L L^T = −(I − hθA)^T L̃L̃^T (I − hθA).   (A.52)
We have seen that with a system (A, B, C, D), whose transfer matrix is H(s) = C(sI_n − A)^{-1}B + D ∈ C^{m×m}, we can associate the spectral function Π(s) = C(sI_n − A)^{-1}B + B^T(−sI_n − A^T)^{-1}C^T + D + D^T, which is the Laplace transform of the kernel Λ(τ) = Ce^{Aτ}B 1(τ) + B^T e^{−A^T τ}C^T 1(−τ) + (D + D^T)δ_τ, where 1(·) is the unit step function and δ_τ is the Dirac measure. Let us now deal with nonnegative spectral functions and their factorization.

Definition A.75 A spectral function Π(s) is nonnegative if there exists a nonnegative real function Z(s) such that Π(s) = Z(s) + Z^T(−s).

Proposition 2.36 states the equivalence between nonnegativity of Π(s) and of the associated operator.

Definition A.76 A rational spectral function Π(s) ∈ C^{m×m} possesses a weak factorization if Π(s) = H^T(−s)H(s), where the factor H(s) ∈ C^{p×m} is rational, real, and analytic in Re[s] > 0. The factor H(s) is minimal if H(s) and Π(s) have the same poles in Re[s] ≤ 0 (in Re[s] < 0 these poles have the same multiplicity, and on the imaginary axis, a simple pole of H(s) corresponds to an order-two pole of Π(s)). The factorization is strong if the factor H(s) is minimal, square (p = m), and full rank in the half-plane Re[s] > 0.
We then have the following:
Theorem A.77 ([51, Theorem 6.4]) The following facts are equivalent:
1. Π (s) is nonnegative.
Proof From Proposition 2.36 it follows that 1 ⇔ 2. If Π (s) possesses a weak factor-
ization, Π (s) is clearly nonnegative, so 5 ⇒ 4 ⇒ 3 ⇒ 1. Let us prove 1 ⇔ 3. Realiza-
tion theory tells us that any minimal factor can be written as W + L T (s In − F)−1 G,
where F is stable since H (s) is analytic in Re[s] > 0. It follows that:
Quite similar developments are to be found in [53, Sect. 5.2]; see also [54, Lemma D.1]. One remarks that the matrices L and W in the proof correspond to those of the Lur'e equations. Consider the Lur'e equations in (3.2), with the associated transfer matrix H(s). Then W + L^T(sI_n − A)^{-1}B is a spectral factor associated with H(s)
[53]. As alluded to in the proof, algorithms for the calculation of spectral factorization
exist, see [52, 55, 56].
The results extend to the discrete-time case, where the s variable is replaced by
the z variable, and the condition Re[s] > 0 becomes |z| > 1, see Sect. 3.15.4.
The following results are used in Anderson’s proof of the KYP Lemma 3.1.
Lemma A.78 ([57]) Let (A, B, C) be a minimal realization for H (s). Suppose that
all the poles of H(s) lie in Re[s] < 0. Let H(s) and W0(s) be related as in (3.13).
Suppose that W0 (s) has a minimal realization (F, G, L). Then the matrices A and
F are similar.
Proof Since (A, B, C) is a realization for H(s), a direct calculation shows that

    (A₁, B₁, C₁) = ( [A 0; 0 −A^T], [B; C^T], [C^T; −B] )

is a realization of H(s) + H^T(−s). Since H(s) and H^T(−s) cannot have a pole in common (the poles of H(s) are in Re[s] < 0 and those of H^T(−s) are in Re[s] > 0), the degree of H(s) + H^T(−s) is equal to twice the degree of H(s).³ Thus the triple (A₁, B₁, C₁) is minimal. By direct calculation one finds that W₀^T(−s)W₀(s) admits a realization (A₂, B₂, C₂), with

    (A₂, B₂, C₂) = ( [F 0; L L^T −F^T], [G; 0], [0; −G] ).

Using items (i) and (ii) below (3.13), it can then be shown that the degree of W₀^T(−s)W₀(s) is twice the degree of W₀(s) and therefore the triple (A₂, B₂, C₂) is minimal. Let P = P^T ≻ 0 be the unique positive definite solution of F^T P + P F = −L L^T. The existence of such a P follows from item (i) below (3.13) and the minimality of (F, G, L). Then one may apply Lemma A.81 below, choosing T = [I_n 0; P I_n], to obtain the following alternative realization of W₀^T(−s)W₀(s):

    (A₃, B₃, C₃) = ( [F 0; 0 −F^T], [G; PG], [PG; −G] ).

Since (A₁, B₁, C₁) and (A₃, B₃, C₃) are minimal realizations of the same transfer matrix, and since A has eigenvalues with strictly negative real part, so has F from item (i) below (3.13). The result follows from Lemma A.81.
Corollary A.79 Let H(s) have a minimal realization (A, B, C), and let H(s) and W₀(s) be related as in (3.13). Then there exist matrices K, L such that W₀(s) has a realization with

    (A₃, B₃, C₃) = ( [A 0; 0 −A^T], [K; PK], [PK; −K] ).

³ Here the degree of H(s) ∈ C^{m×m} is defined as the dimension of a minimal realization of H(s).
Lemma A.80 ([57]) Let H (s) have a minimal realization (A, B, C) and let H (s)
and W0 (s) be related as in (3.13). Then there exists a matrix L̂ such that (A, B, L̂)
is a minimal realization for W0 (s).
Lemma A.81 Let (A₁, B₁, C₁) and (A₂, B₂, C₂) be two minimal realizations of the rational matrix H(s). Then there exists a nonsingular matrix T such that A₂ = T A₁ T⁻¹, B₂ = T B₁, C₂ = (T^T)⁻¹C₁. Conversely, if (A₁, B₁, C₁) is minimal and T is nonsingular, then the triple (A₂, B₂, C₂) defined by these relations is also minimal.
Corollary A.82 The only matrices which commute with [A 0; 0 −A^T] are of the form [T₁ 0; 0 T₂], where T₁ and T₂^T commute with A.
The next lemma is a specific version of the KYP Lemma, for lossless systems, that
is needed in its proof.
Lemma A.83 Let H(s) be PR and have only purely imaginary poles, with H(∞) = 0. Let (A, B, C) be a minimal realization of H(s). Then there exists P = P^T ≻ 0 such that

    P A + A^T P = 0,
    P B = C^T.   (A.57)
Proof The procedure consists in finding a minimal realization (A, B, C) for which the matrix P has an obvious form, and then using the fact that if P satisfies the set of equations (A.57), then P̄ := (T^T)⁻¹ P T⁻¹ satisfies the corresponding set of equations for the minimal realization (T A T⁻¹, T B, (T⁻¹)^T C). Thus, if one exhibits a symmetric positive definite P for a particular minimal realization, a symmetric positive definite P̄ will exist for all minimal realizations. It is possible to write H(s) as H(s) = Σ_i (A_i s + B_i)/(s² + ω_i²), where the ω_i are all different and the matrices A_i and B_i satisfy certain requirements [58], see (2.145). Let us realize each term (A_i s + B_i)(s² + ω_i²)⁻¹ separately with a minimal realization (F_i, G_i, H_i), and select a P_i such that (A.57) is satisfied, so as to obtain a minimal realization (F, G, H). Consider then a single term

    H(s) = (A s + B)/(s² + ω₀²).   (A.58)

If the degree of H(s) in (A.58) is equal to 2k, then there exist k complex vectors v_i such that v̄_i^T v_i = 1, v_i^T v_i = μ_i, 0 < μ_i ≤ 1, μ_i ∈ R, and H(s) = Σ_{i=1}^{k} ( v_i v̄_i^T/(s − jω₀) + v̄_i v_i^T/(s + jω₀) ) [58]. Direct sum techniques allow one to restrict considerations to obtaining a minimal realization for the degree-two case, i.e.,

    H(s) = v v̄^T/(s − jω₀) + v̄ v^T/(s + jω₀),   (A.59)

and then, with suitably chosen vectors y₁, y₂,

    (F, G, H, P) = ( [0 −ω₀; ω₀ 0], [y₁^T; y₂^T], [y₁^T; y₂^T], I_n )

defines a minimal realization of (A.59) with the set of equations (A.57) satisfied.
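For the skew-symmetric F above, the first equation of (A.57) with P = I is immediate, and it expresses losslessness: along ẋ = Fx the storage xᵀPx is conserved. A small numpy sketch (ω₀, the time step, and the initial state are arbitrary choices):

```python
import numpy as np

w0 = 2.0
F = np.array([[0.0, -w0], [w0, 0.0]])
P = np.eye(2)

# First equation of (A.57): P F + Fᵀ P = 0 for the skew-symmetric F above.
assert np.allclose(P @ F + F.T @ P, np.zeros((2, 2)))

# Losslessness: d/dt (xᵀPx) = xᵀ(PF + FᵀP)x = 0, so the "energy" is conserved.
x = np.array([1.0, 0.5])
dt, T = 1e-3, 5.0
c, s = np.cos(w0 * dt), np.sin(w0 * dt)
R = np.array([[c, -s], [s, c]])    # exact flow e^{F dt} is a rotation
for _ in range(int(T / dt)):
    x = R @ x
assert abs(x @ P @ x - (1.0**2 + 0.5**2)) < 1e-9
```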
This counterexample was inspired by a work from Barabanov [59] and suitably
modified in [60]. The whole construction is rather long, and only a sketch is provided
here.4 Let us consider the following fourth-order closed-loop system that will serve
as a pre-counterexample:
    (Σ)   ẋ₁(t) = x₂(t)
          ẋ₂(t) = −x₄(t)
          ẋ₃(t) = x₁(t) − 2x₄(t) − (9131/900) φ(x₄(t))
          ẋ₄(t) = x₁(t) + x₃(t) − x₄(t) − (1837/180) φ(x₄(t)),   (A.60)
⁴ The first author of this book is indebted to Prof. Bernat, Dept. of Mathematics, Univ. Autonoma de Barcelona, Spain, who provided him with useful references and comments about Kalman's conjecture counterexamples.
with

    φ(y) = { −900/9185 if y < −900/9185 ;  y if |y| ≤ 900/9185 ;  900/9185 if y > 900/9185 }.   (A.61)
As a first step, let us check that the system (Σ) satisfies the assumption of Kalman's conjecture, i.e., it is globally asymptotically stable for any φ(y) = ky with k ∈ [0, 1]. Notice that the characteristic function in (A.61) satisfies dφ/dy ∈ [0, 1]. The tangent linearization of the vector field of (Σ) is given by the Jacobian:

    [ 0 1 0 0 ; 0 0 0 −1 ; 1 0 0 −k₁ ; 1 0 1 −k₂ ],   (A.62)

with k₁ = 2 + (9131/900) (dφ/dx₄)(x₄) and k₂ = 1 + (1837/180) (dφ/dx₄)(x₄). The proof is based on the application of the Routh–Hurwitz criterion. One finds that a necessary and sufficient condition for this Jacobian to be Hurwitz for all x₄ ∈ R is that 0 < (dφ/dy)(y) < 91310/5511.
Notice that the characteristic function in (A.61) satisfies 0 ≤ (dφ/dy)(y), and not the strict inequality, so it is not yet a suitable nonlinearity. This will not preclude the proof from working, as we shall see later, because one will show the existence of a characteristic function that is close to this one and which satisfies the Hurwitz condition. Actually, φ(·) in (A.61) will be useful to show that (Σ) possesses a stable periodic orbit, and that there exists a slightly perturbed system (that is, a system very close to the one in (A.61)) which also possesses such an orbit and which satisfies the assumption of Kalman's conjecture.
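The Routh–Hurwitz threshold 91310/5511 ≈ 16.57 can be checked numerically by sweeping the slope g = dφ/dy in the Jacobian (A.62):

```python
import numpy as np

def jacobian(g):
    # Jacobian (A.62) with slope g = dφ/dy
    k1 = 2 + (9131 / 900) * g
    k2 = 1 + (1837 / 180) * g
    return np.array([[0, 1, 0, 0],
                     [0, 0, 0, -1],
                     [1, 0, 0, -k1],
                     [1, 0, 1, -k2]], dtype=float)

# Strictly inside (0, 91310/5511): Hurwitz (all eigenvalues in the open LHP).
for g in (0.01, 1.0, 10.0, 16.0):
    assert np.max(np.linalg.eigvals(jacobian(g)).real) < 0

# At g = 0 the linearization has the marginal pair ±j, and beyond the
# threshold 91310/5511 ≈ 16.57 stability is lost.
assert np.isclose(np.max(np.linalg.eigvals(jacobian(0.0)).real), 0.0, atol=1e-6)
assert np.max(np.linalg.eigvals(jacobian(17.0)).real) > 0
```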
Remark A.84 The reader may wonder how such a counterexample has been discov-
ered and worked out. Actually, the authors in [60] started from another counterex-
ample provided in [59] (but the arguments therein appeared to be incomplete) and
employed a numerical simulation procedure to locate a periodic trajectory by trying
different parameters. This explains the somewhat surprising and ad hoc values of the
parameters in (A.60)–(A.61).
The construction of a periodic orbit relies on the explicit integration of the trajectories of (A.61) in the domains where φ(·) is linear, i.e., D₀ = {x | |x₄| ≤ 900/9185} and D₋ = {x | x₄ < −900/9185}, respectively. Actually, since the system is symmetric with respect to the origin (check that the vector field satisfies f(−x) = −f(x)), it is not worth considering the domain D₊ = {x | x₄ > 900/9185} in the limit-cycle construction. These domains are separated by the hyperplanes Γ± = {x ∈ R⁴ | x₄ = ±900/9185}. These hyperplanes will serve later as Poincaré sections for the definition of a Poincaré map and the stability study. See Fig. A.1. The procedure is as follows: let us consider an
initial point x₀ ∈ R⁴ in the state space, and choose, for instance, x₀ ∈ Γ₋. The periodic orbit, if it exists, may consist of the concatenation of solutions of the system as
x evolves through the three domains D0 , D± within which the vector field is linear.
In each domain, one can explicitly obtain the solutions. Then, the existence of such
a periodic orbit simply means that the state x̄ attained by the system after having
integrated it sequentially through D− , D0 , D+ , and D0 again, satisfies x̄ = x0 . From
the very basic properties of solutions to linear differential equations, like continuous
dependence on initial data, this gives rise to a nonlinear system g(z) = 0, where z
contains not only the state x0 but the period of the searched orbit as well. In other
words, the existence proof is transformed into the existence of the zero of a certain
nonlinear system.
Remark A.85 Such a manner of proving the existence of periodic orbits has also
been widely used in vibro-impact mechanical systems, and is known in that field as
Kobrinskii's method [61].
Let us now investigate the proof in more detail. Due to the abovementioned symmetry,
it happens to be sufficient to apply the described concatenation method in the domains
D0 and D− . In these domains the system in (A.61) becomes:
    (Σ₀)  ẋ₁(t) = x₂(t)
          ẋ₂(t) = −x₄(t)
          ẋ₃(t) = x₁(t) − (10931/900) x₄(t)
          ẋ₄(t) = x₁(t) + x₃(t) − (2017/180) x₄(t),   (A.63)

and

    (Σ₋)  ẋ₁(t) = x₂(t)
          ẋ₂(t) = −x₄(t)
          ẋ₃(t) = x₁(t) − 2x₄(t) + 9131/9185
          ẋ₄(t) = x₁(t) + x₃(t) − x₄(t) + 1,   (A.64)
respectively. For brevity we will not provide here the whole expressions of the solutions of (A.63) and (A.64), but only those of x₁. In the case of the system in (A.63), it is given (with ω := √10799/360) by:

x₁(t) = a₁ [ (81/79222) e^{−10t} − (25/2002) e^{−6t/5} + (25496/25207) e^{−t/360} cos(ωt) − (17704/(25207√10799)) e^{−t/360} sin(ωt) ]
      + a₂ [ −(81/792220) e^{−10t} + (125/12012) e^{−6t/5} − (3896/378105) e^{−t/360} cos(ωt) + (137674504/(378105√10799)) e^{−t/360} sin(ωt) ]
      + a₃ [ (45/39611) e^{−10t} − (75/1001) e^{−6t/5} + (1860/25207) e^{−t/360} cos(ωt) − (710940/(25207√10799)) e^{−t/360} sin(ωt) ]
      + a₄ [ −(450/39611) e^{−10t} + (90/1001) e^{−6t/5} − (1980/25207) e^{−t/360} cos(ωt) − (4140/(1939√10799)) e^{−t/360} sin(ωt) ],   (A.65)
where the initial condition for the integration is x(0) = (a₁, a₂, a₃, a₄). The expressions for the other components of the state are quite similar. Let us now consider the construction of the nonlinear system g(z) = 0, g ∈ R⁵. The initial point from which the periodic solution is built is chosen in [60] as (a₁, a₂, a₃, −900/9185), i.e., it belongs to the boundary Γ₋ defined above. Due to the symmetry of the system in (A.61), it is in fact sufficient to construct only one half of this trajectory. In other words, the existence can be checked as follows:
• Calculate the time T > 0 that the solution of system (A.64) needs, in forward time, to go from the state (a₁, a₂, a₃, −900/9185) to the hyperplane Γ₋; i.e., T = min{t | t > 0, φ₂(t; 0, x(0)) ∈ Γ₋}.
• Calculate the time −τ < 0 that the solution of system (A.63) needs, in backward time, to go from the state (−a₁, −a₂, −a₃, 900/9185) ∈ Γ₊ to the hyperplane Γ₋; i.e., −τ = max{τ̄ | τ̄ < 0, φ₁(τ̄; 0, −x(0)) ∈ Γ₋}.
• Check that φ₂(T; 0, x(0)) = φ₁(−τ; 0, −x(0)), i.e., both portions of trajectories coincide when attaining Γ₋.
This is depicted in Fig. A.1, where one half of the searched orbit is drawn. We have denoted the solution of system (A.63) as φ₁ and that of (A.64) as φ₂. Actually, the third item represents the nonlinear system g(z) = 0, with z = (τ, T, a₁, a₂, a₃)^T. One gets:

    g₁(z) = φ₂,₁(T; 0, x(0)) − φ₁,₁(τ; 0, −x(0))
    g₂(z) = φ₂,₂(T; 0, x(0)) − φ₁,₂(τ; 0, −x(0))
    g₃(z) = φ₂,₃(T; 0, x(0)) − φ₁,₃(τ; 0, −x(0))   (A.67)
    g₄(z) = φ₂,₄(T; 0, x(0)) + 900/9185
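The decay rates and the frequency appearing in the solution (A.65) are exactly the eigenvalues of the linear part of (Σ₀) in (A.63), which is easy to confirm numerically:

```python
import numpy as np

# Linear part of (Σ₀) in (A.63)
A0 = np.array([[0, 1, 0, 0],
               [0, 0, 0, -1],
               [1, 0, 0, -10931 / 900],
               [1, 0, 1, -2017 / 180]])

eig = np.sort_complex(np.linalg.eigvals(A0))
expected = np.sort_complex(np.array([
    -10.0,
    -6 / 5,
    (-1 + 1j * np.sqrt(10799)) / 360,
    (-1 - 1j * np.sqrt(10799)) / 360,
]))
assert np.allclose(eig, expected)
```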
[Fig. A.1: one half of the searched periodic orbit: the arc φ₂(T; 0, x(0)) in D₋, starting at x(0) = (a₁, a₂, a₃, −900/9185) ∈ Γ₋, and the arc φ₁(−τ; 0, −x(0)) through D₀, meeting on Γ₋.]
With ω := √10799/360, the first component of g(z) reads:

g₁(z) = −(9131/9185) + (9131/9185) [ cos T + sin T − (2/√3) e^{−T/2} sin(√3T/2) ]
      + (54/9185) [ cos T − e^{−T/2} cos(√3T/2) − (1/√3) e^{−T/2} sin(√3T/2) ]
      + a₁ [ cos T + sin T − (2/√3) e^{−T/2} sin(√3T/2) + (81/79222) e^{10τ} − (25/2002) e^{6τ/5} + (25496/25207) e^{τ/360} cos(ωτ) + (17704/(25207√10799)) e^{τ/360} sin(ωτ) ]
      + a₂ [ −cos T + sin T + e^{−T/2} cos(√3T/2) + (1/√3) e^{−T/2} sin(√3T/2) − (81/792220) e^{10τ} + (125/12012) e^{6τ/5} − (3896/378105) e^{τ/360} cos(ωτ) − (137674504/(378105√10799)) e^{τ/360} sin(ωτ) ]
      + a₃ [ cos T − e^{−T/2} cos(√3T/2) − (1/√3) e^{−T/2} sin(√3T/2) + (45/39611) e^{10τ} − (75/1001) e^{6τ/5} + (1860/25207) e^{τ/360} cos(ωτ) + (710940/(25207√10799)) e^{τ/360} sin(ωτ) ]
      + (900/9185) [ sin T − (2/√3) e^{−T/2} sin(√3T/2) + (450/39611) e^{10τ} − (90/1001) e^{6τ/5} + (1980/25207) e^{τ/360} cos(ωτ) − (4140/(1939√10799)) e^{τ/360} sin(ωτ) ].   (A.68)
As we announced above, we will not write down the whole vector g(z) here, the rest of the entries having similar expressions. The next step is therefore to find a zero of the system g(z) = 0. Actually, there exist many different results in the Applied Mathematics literature (see for instance [62, Chap. XVIII]) that provide conditions assuring the existence of a zero and a way to compute it. However, they are in general of a local nature, i.e., the iterative mapping that is proposed (Newton-like) converges towards the zero (which is a fixed point of this mapping) only locally. In order to cope with this feature, Bernat and Llibre first locate numerically a periodic orbit for (A.61) and notice that it passes close to the point (a₁, a₂, a₃, −900/9185) with a₁ = 0.2227501959407, a₂ = −2.13366751019745, a₃ = −1.3951391555710.
is contained in the ball Br1 (z 0 ) and converges towards the unique zero of g(z) that
is inside the set C0 ⊂ Br2 (z 0 ).
The authors in [60] choose

    C₀ = [0.4, 0.5] × [4.1, 4.2] × [0.17, 0.27] × [−2.2, −2.1] × [−1.45, −1.33],

a box around the numerically located point (whose a₂ and a₃ components are −2.13366751019745 and −1.3951391555710), and take the ‖·‖∞ matrix norm. As one can see, the application of the theorem
requires the computation of Jacobians and bounds on their norms. The whole thing
takes 16 journal pages in [60] and is omitted here for obvious reasons. All the computations are made with an accuracy of 10⁻²⁰ and the numerical errors are
monitored. All the parameters appearing in Theorem A.86 are calculated and the
conditions are fulfilled. So the existence of a zero z̄ 0 is shown, consequently the
system (A.61) possesses a periodic orbit Ω(t) that passes through x̄0 , where z̄ 0 =
(T̄0 , τ̄0 , x̄0 ).
The stability of periodic trajectories can be classically studied with Poincaré maps.
Due to the way the trajectory Ω(t) has been built, one suspects that the Poincaré
section will be chosen to be Γ₋, whereas the Poincaré map will be the concatenation of four maps:

    P₁ : B_r(x̄₀) ∩ Γ₋ → Γ₋,   P₂ : B_r(x̄₁) ∩ Γ₋ → Γ₊,
    P₃ : B_r(−x̄₀) ∩ Γ₊ → Γ₊,   P₄ : B_r(−x̄₁) ∩ Γ₊ → Γ₋.   (A.70)
Since the expressions for the solutions are known, the partial derivatives of φ₂,ᵢ can be calculated. The term ∂T̄/∂x_j can be obtained from φ₂,₄(T̄; x) = −900/9185. Plugging this into (A.72) yields:

    ∂T̄/∂x_j = − (∂φ₂,₄/∂x_j) / (∂φ₂,₄/∂T̄).   (A.73)
At this stage, one should recall that the zero z̄₀ of g(z) = 0 is not known exactly; only its existence in a neighborhood of a known point has been established. So one is led to make the computations with the numerical approximation, and to monitor the numerical error afterwards. The computation of the Jacobian is therefore done with the values computed above, i.e., the first three components of x₀ equal to (0.2227501959407, −2.13366751019745, −1.3951391555710), whereas the time T is taken as T₀ = 4.1523442055633 s. The other Jacobians are computed in an analogous way, and one finally finds that the three eigenvalues of DP(x₀) are equal to 0.305, 0.006, and 9.1 × 10⁻⁶. Then one concludes that the eigenvalues of DP(x̄₀) are also smaller than 1, using a result on the characterization of the error in the calculation of eigenvalues of diagonalizable matrices.
A.7.3 Summary
The system in (A.61) has been proved to possess a periodic orbit via a classical method that consists of constructing a priori an orbit Ω and then proving that it does exist by showing the existence of the zero of some nonlinear system. Since the main problem associated with such a method is to "guess" that the constructed orbit has some chance to exist, a preliminary numerical study has been done to locate an approximation of the orbit. The local stability of Ω(t) is then investigated by computing the Jacobian of its Poincaré map (the reader should remark that we do not care about the explicit knowledge of the Poincaré map itself: the essential point is that we are able to calculate, in an approximate way here, its Jacobian and consequently the eigenvalues of the Jacobian). The system in (A.61) does not exactly fit within the Kalman conjecture assumptions, since it does not satisfy (dφ/dy)(y) > 0. The next step thus completes the counterexample by using a property of structural stability of the perturbed vector field in (A.61).
Let us denote by T_μ the Poincaré map with Poincaré section Γ₋, defined from the flow of the system in (A.61) with characteristic function μ(·), in the vicinity of x̄₀. Then the following is true:

Lemma A.87 ([60]) There exists a characteristic function Ψ such that:
• Ψ(·) is C¹,
• 0 < (dΨ/dy)(y) < 10 for all y ∈ R,
• Ψ(·) is sufficiently close to φ(·) in (A.61), with the C⁰ topology in B̄_r(0),⁵
• T_Ψ has a stable fixed point near x̄₀,
where one assumes that the periodic orbit Ω(t) ⊂ B_r(0), r > 0.
The proof is as follows: we know that T_φ has a stable fixed point x̄₀. Due to the stability, there exists a ball B̄_r(x̄₀) ⊂ D₋ such that T_φ(B̄_r(x̄₀)) ⊂ B̄_r(x̄₀). Then, using Theorem 3.215 on continuous dependence on initial conditions and parameters for ordinary differential equations with C⁰ and Lipschitz continuous (in the variable x) vector fields, there exists a function Ψ satisfying the first three items in Lemma A.87 and such that T_Ψ(B̄_r(x̄₀)) ⊂ B̄_r(x̄₀). In other words, a slight enough perturbation of the vector field in (A.61) allows one to transform the characteristic function φ(·), hence the whole vector field, into a C¹ function, assuring that the system is Hurwitz (its Jacobian is Hurwitz at any point of the state space). Brouwer's fixed-point Theorem then guarantees the existence of a fixed point for T_Ψ inside B_r(x̄₀) (let us recall that Brouwer's Theorem states that a continuous function g(·) that maps a closed ball to itself satisfies g(y) = y for some y). The fixed point of T_Ψ corresponds to a periodic orbit of the system in (A.61) with Ψ as characteristic function. The second item in Lemma A.87 assures that such a system is Hurwitz, see [60, Proposition 8.1]. This system, therefore, constitutes a counterexample to the Kalman conjecture in dimension 4. As shown in [60], it is easily extended to dimensions n > 4 by adding subsystems ẋᵢ(t) = −xᵢ(t), i ≥ 5.

⁵ i.e., the distance between Ψ(·) and φ(·) is defined from the norm of uniform convergence on B̄_r(0).
Definition A.88 (Mild solution) For x⁰ ∈ Rⁿ and φ ∈ C([−τ, 0], Rⁿ), a mild solution of the system (A.74) is the function defined by

    x(t) = e^{tA} x⁰ + ∫₀ᵗ e^{(t−s)A} [L x_s + B u(s)] ds,  t ≥ 0,
    x(θ) = φ(θ),  −τ ≤ θ ≤ 0.   (A.75)

By using a straightforward argument from fixed-point theory, one can see that the system (A.74) possesses a unique mild solution given as in Definition A.88. An example of delay operator is given by

    L f = A₁ f(−τ₁) + ∫_{−τ}^{0} A₂(θ) f(θ) dθ.   (A.76)
Now if we set μ = A₁ 1_{[−τ,0]}(·) + A₂(·), then we obtain the delay operator defined by (A.76). Here 1_{[−τ,0]}(·) is the indicator function of the interval [−τ, 0] (not the same indicator as the one of convex analysis used elsewhere in this book), i.e., the function that takes the value 1 on [−τ, 0] and 0 outside.
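A scalar illustration of the mild solution (A.75) (all numbers below are arbitrary choices): for ẋ(t) = a x(t) + b x(t − τ) with constant history φ ≡ 1 and u ≡ 0, the variation-of-constants formula closes on [0, τ], since there x(s − τ) = φ(s − τ) is known (the "method of steps"); a direct Euler integration of the delay equation converges to it:

```python
import math

# Scalar delay system: ẋ = a x(t) + b x(t - tau), history φ(θ) = 1 on [-tau, 0]
a, b, tau = -1.0, 0.5, 1.0

# On [0, tau] the mild-solution formula gives (for a ≠ 0):
# x(t) = e^{at} + b ∫_0^t e^{a(t-s)} ds = e^{at} + (b/(-a)) (1 - e^{at})
def mild(t):
    return math.exp(a * t) + (b / -a) * (1 - math.exp(a * t))

# Direct Euler integration of the delay equation, storing the history on a grid
dt = 1e-4
n_hist = int(tau / dt)
xs = [1.0] * (n_hist + 1)          # x on the grid of [-tau, 0]
for k in range(n_hist):            # advance over [0, tau]
    x_now, x_del = xs[-1], xs[-1 - n_hist]
    xs.append(x_now + dt * (a * x_now + b * x_del))

assert abs(xs[-1] - mild(tau)) < 1e-3
```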
Proposition A.89 Let K ⊆ Rⁿ be a non-empty, closed, and convex set, and N_K(x) its normal cone at x ∈ K. Let also M = M^T ≻ 0 be a constant n × n matrix, and y ∈ Rⁿ. Then x = proj_M[K; y] ⇔ M(x − y) ∈ −N_K(x) and, if K is a cone,

    x = proj_M[K; y]  ⇐⇒  K* ∋ M(x − y) ⊥ x ∈ K,   (A.78)

where K* is the dual cone of K. The last expression is a cone complementarity problem. The fact that the operators (I_n + M⁻¹N_K)⁻¹(·) and (M + N_K)⁻¹(M·) are single-valued (and Lipschitz continuous) may be shown using [63, Proposition 1] [64, Propositions 1.5.9, 4.3.3]. In a more general case where D ⪰ 0 is not full rank, conditions for single-valuedness are more stringent, and it may be the case that these operators are set-valued. One can consider extensions with operators of the type x ↦ (M + D∂ϕ)⁻¹(x), with ϕ(·) a convex lower semicontinuous function, and M and D constant matrices. The reason why operators of this form are called proximity (or proximal) operators in the Applied Mathematics literature is intuitively clear from (A.78).
At several places of the book, we have used Linear Complementarity Problems
(LCP).
Definition A.90 Let λ ∈ Rm , M ∈ Rm×m , q ∈ Rm , be constant. A Linear Comple-
mentarity Problem is a nonsmooth problem of the form: λ ≥ 0, w = Mλ + q ≥ 0,
w T λ = 0. This is rewritten compactly as 0 ≤ λ ⊥ w = Mλ + q ≥ 0.
Inequalities hold componentwise, due to the nonnegativity of the vectors.
Theorem A.91 (Fundamental Theorem of Complementarity) The Linear Comple-
mentarity Problem 0 ≤ λ ⊥ w = Mλ + q ≥ 0 has a unique solution λ for any q, if
and only if M is a P-matrix.
This may serve as the definition of a P-matrix, from which it then follows that a P-matrix is a matrix with all principal minors positive. Many other results about the solvability of LCPs exist [65]. This one is the central result of complementarity theory. The last equivalence in (A.89) allows us to state the following:

Lemma A.92 Let x ∈ Rⁿ be the solution of the Linear Complementarity Problem 0 ≤ w = Mx + q ⊥ x ≥ 0, where M = M^T ≻ 0. Then x = proj_M[Rⁿ₊; −M⁻¹q] = M⁻¹ proj_{M⁻¹}[Rⁿ₊; q] − M⁻¹q.
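Theorem A.91 is easy to illustrate on a small instance. The brute-force solver below, written only for this illustration, enumerates candidate active sets; since the random M chosen is symmetric positive definite, hence in particular a P-matrix, exactly one solution is found:

```python
import itertools
import numpy as np

def solve_lcp(M, q):
    """Solve 0 ≤ x ⊥ Mx + q ≥ 0 by enumerating candidate active sets."""
    n = len(q)
    sols = []
    for free in itertools.chain.from_iterable(
            itertools.combinations(range(n), k) for k in range(n + 1)):
        x = np.zeros(n)
        idx = list(free)
        if idx:
            x[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
        w = M @ x + q
        if np.all(x >= -1e-12) and np.all(w >= -1e-12):
            sols.append(x)
    return sols

rng = np.random.default_rng(4)
G = rng.standard_normal((3, 3))
M = G @ G.T + 3 * np.eye(3)        # symmetric ≻ 0, hence a P-matrix
q = rng.standard_normal(3)

sols = solve_lcp(M, q)
assert len(sols) >= 1              # existence (Theorem A.91)
x = sols[0]
for other in sols[1:]:
    assert np.allclose(x, other)   # uniqueness up to the tolerance

w = M @ x + q                      # complementarity holds at the solution
assert np.all(x >= -1e-12) and np.all(w >= -1e-12) and abs(x @ w) < 1e-9
```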
Let us make the link with proximity operators. We have that 0 ≤ w = M x +
q ⊥ x ≥ 0 ⇐⇒ M x + q ∈ −∂ψ K (x) with K = Rn+ . Thus, equivalently x ∈ (M +
∂ψ K )−1 (−q). We conclude that if M is a P-matrix, the operator (M + ∂ψ K )−1 is
single-valued and x = (M + ∂ψ K )−1 (−q). This result extends to functions other
than the indicator of convex sets, see [63, Proposition 1, Proposition 3] [64, 66].
Maximal monotone operators are defined in Sect. 3.14.1, and are used in set-valued
Lur’e systems. An important result is Corollary 3.12.1, which relates maximal mono-
tone operators and subdifferentials of proper convex lower semicontinuous functions.
Another notion is cyclic monotonicity, which is used in Theorem 4.135:
Definition A.93 A set-valued mapping M : Rⁿ ⇒ Rⁿ is called cyclically monotone if, for any set of pairs (xᵢ, yᵢ), i = 0, 1, ..., m (m arbitrary) such that yᵢ ∈ M(xᵢ), one has

    ⟨y₀, x₁ − x₀⟩ + ⟨y₁, x₂ − x₁⟩ + ··· + ⟨y_m, x₀ − x_m⟩ ≤ 0.
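Subdifferentials of convex functions are the canonical cyclically monotone maps; for a smooth convex function the map is its gradient, and the cyclic sums above are ≤ 0, as a random test illustrates (numpy; the convex function is an arbitrary smooth choice):

```python
import numpy as np

rng = np.random.default_rng(5)

# Gradient of the convex function f(x) = ||x||^4 / 4, i.e., ∇f(x) = ||x||^2 x
def grad_f(x):
    return (x @ x) * x

for _ in range(100):
    m = int(rng.integers(2, 6))
    xs = rng.standard_normal((m, 3))
    ys = np.array([grad_f(x) for x in xs])
    # cyclic sum  Σ ⟨y_i, x_{i+1} - x_i⟩  with indices taken modulo m
    s = sum(ys[i] @ (xs[(i + 1) % m] - xs[i]) for i in range(m))
    assert s <= 1e-10
```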
We have the following [67, Theorem 24.9, Corollary 31.5.2], where Γ 0 (X ) denotes
the set of proper, convex, lower semicontinuous functions X → R ∪ {+∞}:
for some α > 0. Then, for each v ∈ X , there exists a unique solution x ∈ X to the
variational inequality
It is important to notice that when f (·) = ΨC (·) (the indicator of the closed convex
non-empty set C), the proximal map agrees with the classical projection operator
onto C.
Lemma A.98 ([71, Lemma 4]) Consider the following variational inequality of the second kind: find x ∈ Rⁿ such that

    ⟨Px − r, y − x⟩ + φ(y) − φ(x) ≥ 0 for all y ∈ Rⁿ,   (A.81)

with P ∈ Rⁿˣⁿ a strongly monotone operator (but not necessarily symmetric) and φ(·) convex. Then, the unique solution of (A.81) satisfies

    x = Prox_{μφ}((I − μP)x + μr) = (I − μP)x + μr − Prox_{(μφ)*}((I − μP)x + μr)   (A.82)

for some μ > 0. Moreover, there exists μ > 0 such that the map x ↦ Prox_{μφ}((I − μP)x + μr) is a contraction.

Proof Let x be the solution of (A.81). Then, for any μ > 0, we have μr − μPx ∈ ∂(μφ)(x) or, equivalently, (I − μP)x + μr − x ∈ ∂(μφ)(x). Hence, x = Prox_{μφ}((I − μP)x + μr). The second equality in (A.82) is a direct consequence of Moreau's decomposition Theorem [72, Theorem 14.3]. Recalling that Prox_{μφ} is a nonexpansive operator, we have that

    ‖Prox_{μφ}(y₁) − Prox_{μφ}(y₂)‖ ≤ ‖I − μP‖ ‖x₁ − x₂‖,
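For φ the indicator of Rⁿ₊, Prox_{μφ} is the componentwise projection max(·, 0), and the contraction iteration of Lemma A.98 solves the variational inequality; a sketch (numpy; P, r, and μ are arbitrary choices satisfying ‖I − μP‖ < 1):

```python
import numpy as np

P = np.array([[2.0, 1.0], [0.0, 2.0]])   # strongly monotone, not symmetric
r = np.array([1.0, -1.0])
mu = 0.1
assert np.linalg.norm(np.eye(2) - mu * P, 2) < 1      # contraction condition

x = np.zeros(2)
for _ in range(200):
    # Prox_{μφ} is the projection onto the nonnegative orthant here
    x = np.maximum((np.eye(2) - mu * P) @ x + mu * r, 0.0)

# The fixed point solves the VI, which here reads 0 ≤ x ⊥ Px − r ≥ 0
w = P @ x - r
assert np.all(x >= 0) and np.all(w >= -1e-8) and abs(x @ w) < 1e-8
```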
Theorem A.99 ([73, Theorem 1]) Let S(·) satisfy the following assumptions:
(A1) For each t ≥ t₀, S(t) is a non-empty, closed, and convex subset of Rⁿ.
(A2) S(·) varies in an AC way, i.e., there exists an AC function v(·) such that for any y ∈ Rⁿ and s, t ≥ t₀,

    |d(y, S(t)) − d(y, S(s))| ≤ |v(t) − v(s)|.

• For every η > 0, there exists a nonnegative function k_η(·) ∈ L¹(I, R) such that for all t ∈ I and for any (x, y) ∈ B(0, η) × B(0, η) one has ‖f(t, x) − f(t, y)‖ ≤ k_η(t)‖x − y‖;
• there exists a nonnegative function β(·) ∈ L¹(I, R) such that, for all t ∈ I and for all x ∈ ⋃_{s∈I} S(s), ‖f(t, x)‖ ≤ β(t)(1 + ‖x‖).
Then, for any x₀ ∈ S(t₀), the inclusion (A.83) has a unique AC solution x(·) on I.

The first condition is a kind of local Lipschitz continuity property in the second variable of f(·, ·), and the second condition is a growth condition. In case t₁ = ∞, the theorem provides a result on the existence and uniqueness of a locally AC solution in a straightforward manner (in which k_η(·) and β(·) become L_{1,e}-functions and S(·) varies in a locally AC manner).
Here, we reproduce the proof in [74], as announced in Sect. 3.15.1. Let us consider a real rational square transfer matrix H(z) ∈ Cᵐˣᵐ. We want to show: (H(z) + H*(z) ≻ 0 for all |z| = 1) ⇔ (H(z) + H^T(z̄) ≻ 0 for all |z| ≥ 1). The ⇐ part is obvious. Let us focus on ⇒. Let us define K(z) := H(z⁻¹), so that K(z) is analytic in |z| ≤ 1, and K(z) + K*(z) ≻ 0 for |z| = 1. For any x ∈ Cᵐ, the function f_x(z) := x*K(z)x, which is complex-valued, is analytic. Hence, Re[f_x(z)] is harmonic in the domain |z| ≤ 1 (the real and imaginary parts of a holomorphic — equivalently, analytic — function are harmonic). Let us denote by u_x(r, θ) the polar form of Re[f_x(z)]. Then, for any θ, one has u_x(1, θ) > 0. The Poisson integral formula for u_x(r, θ), with r < 1, yields

    u_x(r, θ) = (1/2π) ∫₀^{2π} [(1 − r²)/(1 − 2r cos(φ − θ) + r²)] u_x(1, φ) dφ,  φ ∈ [0, 2π],

see [75, p. 17]. Therefore, for r < 1 and any θ, one has u_x(r, θ) > 0 or, equivalently, for any x ∈ Cᵐ, Re[f_x(z)] > 0 for |z| < 1. Recalling that K(z) + K*(z) ≻ 0 for |z| = 1, this means that K(z) + K*(z) ≻ 0 for all |z| ≤ 1. From the relationship between K(z) and H(z), the proof follows.
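A quick numerical illustration in the scalar case (the transfer function below is an arbitrary stable choice made positive real on the unit circle):

```python
import numpy as np

# H(z) = 1 + 1/(z - 0.5): analytic in |z| >= 1, and Re H > 0 on |z| = 1
def H(z):
    return 1.0 + 1.0 / (z - 0.5)

# positivity on the unit circle ...
theta = np.linspace(0.0, 2 * np.pi, 2000)
assert np.min(2 * H(np.exp(1j * theta)).real) > 0

# ... transfers to the whole region |z| >= 1, as the harmonicity argument shows
rng = np.random.default_rng(6)
z = (1.0 + 2.0 * rng.random(2000)) * np.exp(1j * 2 * np.pi * rng.random(2000))
assert np.min(2 * H(z).real) > 0
```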
References
1. Bernstein DS (2005) Matrix mathematics: theory, facts, and formulas with application to linear
systems theory. Princeton University Press, Princeton, New jersey
2. Vidyasagar M (1993) Nonlinear systems analysis, 2nd edn. Prentice Hall
3. Bernstein DS (2009) Matrix mathematics: theory, facts, and formulas, 2nd edn. Princeton
University Press, Princeton, New jersey
4. Lancaster P, Tismenetsky M (1985) The Theory of Matrices. Academic Press, New York, USA
5. Khalil HK (1992) Nonlinear systems. MacMillan, New York, USA (1992). 2nd edn. Published
in 1996, 3rd edn. Published in 2002
6. Byrnes CI, Martin CF (1995) An integral-invariance principle for nonlinear systems. IEEE
Trans Autom Control 40(6):983–994
7. Merkin Y (1997) Introduction to the theory of stability, TAM, vol 24. Springer, Berlin
8. Michel AN, Hou L (2011) Invariance results for linear, time-invariant, discrete-time systems.
Nonlinear Anal Hybrid Syst 5(3):535–539
9. Yakubovich VA, Leonov GA, Gelig AK (2004) Stability of stationary sets in control systems
with discontinuous nonlinearities, stability, vibration and control of systems, vol 14. World
Scientific
10. Filippov AF (1988) Differential equations with discontinuous righthand sides. Kluwer Aca-
demic Publishers, Mathematics and Its Applications
11. Paden B, Panja R (1988) Globally asymptotically stable PD+ controller for robot manipulators.
Int J Control 47:1697–1712
Appendix A: Background Material 695
12. Isidori A (1995) Nonlinear control systems, 3rd edn. Communications and control engineering.
Springer, London. 4th printing, 2002
13. Sannuti P (1983) Direct singular perturbation analysis of high-gain and cheap control problems.
Automatica 19(1):41–51
14. Sain MK, Massey JL (1969) Invertibility of linear time-invariant dynamical systems. IEEE
Trans Autom Control 14(2):141–149
15. Saberi A, Sannuti P (1987) Cheap and singular controls for linear quadratic regulators. IEEE
Trans Autom Control 32(3):208–219
16. Sannuti P, Wason HS (1985) Multiple time-scale decomposition in cheap control problems -
singular control. IEEE Trans Autom Control 30(7):633–644
17. Sannuti P, Saberi A (1987) A special coordinate basis of multivariable linear systems, finite
and infinite zero structure, squaring down and decoupling. Int J Control 45(5):1655–1704
18. Lyapunov AM (1907) The general problem of motion stability. Ann Faculté des Sciences de Toulouse, pp 203–474. Original in Russian, 1892; French translation, 1907
19. LaSalle J, Lefschetz S (1961) Stability by Liapunov’s direct method. Academic Press, New
York, NY, USA
20. Lefschetz S (1962) Stability of nonlinear control systems. Academic Press, New York, USA
21. Hahn W (1967) Stability of motion. Springer, New York
22. Nijmeijer H, van der Schaft AJ (1990) Nonlinear dynamical control systems. Springer, New York, USA
23. Rockafellar RT, Wets RJB (1998) Variational analysis. Grundlehren der mathematischen Wissenschaften, vol 317. Springer, Berlin
24. Bressan A (2011) Viscosity solutions of Hamilton–Jacobi equations and optimal control prob-
lems. https://www.math.psu.edu/bressan/PSPDF/HJ-lnotes.pdf
25. Rosier L, Sontag ED (2000) Remarks regarding the gap between continuous, Lipschitz, and
differentiable storage functions for dissipation inequalities appearing in H∞ control. Syst
Control Lett 41:237–249
26. Lancaster P, Rodman L (1995) Algebraic Riccati equations. Oxford University Press, Oxford
27. Clements D, Anderson BDO, Laub AJ, Matson JB (1997) Spectral factorization with imaginary-
axis zeros. Linear Algebra Appl 250:225–252
28. Rodman L (1997) Non-Hermitian solutions of algebraic Riccati equations. Can J Math
49(4):840–854
29. Knobloch HW, Isidori A, Flockerzi D (1993) Topics in control theory. Birkhäuser, Basel
30. Scherer C (1992) H∞ control by state feedback for plants with zeros on the imaginary axis.
SIAM J Control Optim 30:123–142
31. Petersen IR, Anderson BDO, Jonckheere EA (1991) A first principles solution to the nonsingular H∞ control problem. Int J Robust Nonlinear Control 1:171–185
32. Poubelle MA, Petersen IR, Gevers MR, Bitmead RR (1986) A miscellany of results of an
equation of count JF Riccati. IEEE Trans Autom Control 31(7):651–654
33. Ran ACM, Vreugdenhil R (1988) Existence and comparison theorems for algebraic Riccati
equations for continuous and discrete-time systems. Linear Algebra Appl 99:63–83
34. Barabanov NE, Ortega R (2004) On the solvability of extended Riccati equations. IEEE Trans
Autom Control 49(4):598–602
35. Barabanov NE, Ortega R (2002) Matrix pencils and extended algebraic Riccati equations. Eur
J Control 8(3):251–264
36. Wang HS, Yung CF, Chang FR (2006) A generalized algebraic Riccati equation. In: Control
for nonlinear descriptor systems. Lecture notes in control and information sciences, vol 326,
pp 141–148
37. Lee CH (2006) New upper solution bounds of the continuous algebraic Riccati matrix equation.
IEEE Trans Autom Control 51(2):330–334
38. Trentelman HL (1999) When does the algebraic Riccati equation have a negative semi-definite solution? In: Blondel VD, Sontag ED, Vidyasagar M, Willems JC (eds) Open problems in mathematical systems and control theory. Springer, Berlin, pp 229–237
39. van Antwerp JG, Braatz R (2000) A tutorial on linear and bilinear matrix inequalities. J Process
Control
40. Freund RW, Jarre F (2004) An extension of the positive real lemma to descriptor systems.
Optim Methods Softw 19(1):69–87
41. Knockaert L (2005) A note on strict passivity. Syst Control Lett 54(9):865–869
42. Boyd S, El Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. Studies in applied mathematics, vol 15. SIAM, Philadelphia
43. Camlibel MK, Heemels WPMH, Schumacher JM (2002) On linear passive complementarity systems. Eur J Control 8(3):220–237
44. Kailath T (1980) Linear systems. Prentice-Hall
45. Horn RA, Johnson CR (1985) Matrix analysis. Cambridge University Press, UK
46. Tian Y, Takane Y (2005) Schur complements and Banachiewicz-Schur forms. Electron J Linear
Algebra 13:405–418
47. Rohde CA (1965) Generalized inverses of partitioned matrices. SIAM J Appl Math 13:1033–
1035
48. Baksalary JK, Styan GPH (2002) Generalized inverses of partitioned matrices in Banachiewicz-
Schur form. Linear Algebra Appl 354:41–47
49. Tan Z, Soh YC, Xie L (1999) Dissipative control for linear discrete-time systems. Automatica
35:1557–1564
50. Greenhalgh S, Acary V, Brogliato B (2013) On preserving dissipativity of linear complementarity dynamical systems with the θ-method. Numerische Mathematik 125(4):601–637
51. Faurre P, Clerget M, Germain F (1979) Opérateurs Rationnels Positifs. Application à l'Hyperstabilité et aux Processus Aléatoires. Méthodes mathématiques de l'informatique. Dunod, Paris. In French
52. Davis MC (1963) Factoring the spectral matrix. IEEE Trans Autom Control 8(4):296–305
53. Anderson BDO, Vongpanitlerd S (1973) Network analysis and synthesis: a modern systems
theory approach. Prentice Hall, Englewood Cliffs, New Jersey, USA
54. Hughes TH (2018) On the optimal control of passive or non-expansive systems. IEEE Trans
Autom Control 63(12):4079–4093
55. Tuel WG (1968) Computer algorithm for spectral factorization of rational matrices. IBM J Res
Dev 12(2):163–170
56. Anderson BDO, Hitz KL, Diem ND (1974) Recursive algorithm for spectral factorization.
IEEE Trans Circuits Syst 21(6):742–750
57. Anderson BDO (1967) A system theory criterion for positive real matrices. SIAM J Control
5(2):171–182
58. Newcomb RW (1966) Linear multiport synthesis. McGraw-Hill, New York
59. Barabanov NE (1988) On the Kalman problem. Sibirskii Matematicheskii Zhurnal 29:3–11. Translated in Sib Math J, pp 333–341
60. Bernat J, Llibre J (1996) Counterexample to Kalman and Markus-Yamabe conjectures in dimension larger than 3. Dyn Contin Discret Impuls Syst 2:337–379
61. Brogliato B (2016) Nonsmooth mechanics: models, dynamics and control, 3rd edn. Communications and control engineering. Springer International Publishing, Cham, Switzerland. Erratum/Addendum at https://hal.inria.fr/hal-01331565
62. Kantorovich LV, Akilov GP (1982) Functional analysis, 2nd edn. Pergamon Press
63. Brogliato B, Goeleven D (2011) Well-posedness, stability and invariance results for a class of
multivalued Lur’e dynamical systems. Nonlinear Anal: Theory, Methods Appl 74:195–212
64. Facchinei F, Pang JS (2003) Finite-dimensional variational inequalities and complementarity
problems, vol I and II. Operations research. Springer, New York
65. Cottle RW, Pang JS, Stone RE (1992) The linear complementarity problem. Academic Press
66. Brogliato B, Goeleven D (2013) Existence, uniqueness of solutions and stability of nonsmooth multivalued Lur'e dynamical systems. J Convex Anal 20(3):881–900
67. Rockafellar RT (1970) Convex analysis. Princeton University Press, Princeton
68. Tanwani A, Brogliato B, Prieur C (2014) Stability and observer design for Lur’e systems
with multivalued, nonmonotone, time-varying nonlinearities and state jumps. SIAM J Control
Optim 52(6):3639–3672
Index

A
Absolute
  continuity, 166, 187
  stability, 167, 400
Absolute stability
  circle criterion, 170
  definition, 168
  discrete-time, 230
  multivalued nonlinearity, 190
  O'Shea–Zames–Falb multipliers, 179
  Popov criterion, 175
  set-valued nonlinearity, 183
  Tsypkin criterion, 230, 244
  with hysteresis, 220
AC function, 187
Actuator dynamics, 466, 629
Adaptive control
  Lagrangian systems, 576
  LTI systems, 594
  nonlinear plant, 271
  relative degree 2, 598
  relative degree 3, 599
  SPR model reference, 66
Aizerman's conjecture, 168
Algebraic Riccati Equation (ARE), 148
  stabilizing solution, 295, 668
Algebraic Riccati Inequality (ARI), 148
Anderson B.D.O., 86
Application
  aerial vehicle, 75
  aircraft landing, 641
  biological systems, 263
  biped robots, 641
  cable-driven systems, 641
  chemical processes, 641
  civil engineering structures, 75
  combustion engine, 263
  converters, 641
  economic optimal control, 243
  flat glass manufacture, 263
  flexible structure, 75, 159, 364, 630
  float-glass systems, 641
  grid, 641
  haptic interface, 263, 641
  HIV treatment, 641
  hovercraft, 641
  influenza A virus, 641
  internet congestion control, 641
  large vehicle platoons, 75
  manipulator with holonomic constraints, 579
  marine vehicles, 641
  memristive system, 322
  MEMS, 641
  missile guidance, 263
  nanopositioning, 75
  neural networks, 263
  optical cavity, 75
  particulate process, 263
  PEM fuel/cell battery, 641
  pendubot, 641
  photovoltaic/battery system, 641
  physiological systems, 263
  power converters, 263
  process systems, 263
  prosthesis, 641
  repetitive controllers, 230, 641
  repetitive processes, 243
  satellites, 263
  shape memory alloy, 641
  smart actuators, 263
  thermohygrometric control, 641
  variable stiffness actuator, 641
  virus propagation, 641
© Springer Nature Switzerland AG 2020
B. Brogliato et al., Dissipative Systems Analysis and Control, Communications and Control Engineering, https://doi.org/10.1007/978-3-030-19420-8