
SCHAUM'S OUTLINE OF

THEORY AND PROBLEMS


OF

STATE SPACE
and
LINEAR SYSTEMS

BY

DONALD M. WIBERG, Ph.D.


Associate Professor of Engineering
University of California, Los Angeles

SCHAUM'S OUTLINE SERIES


McGRAW-HILL BOOK COMPANY
New York, St. Louis, San Francisco, Dusseldorf, Johannesburg, Kuala Lumpur, London, Mexico,
Montreal, New Delhi, Panama, Rio de Janeiro, Singapore, Sydney, and Toronto
Copyright © 1971 by McGraw-Hill, Inc. All Rights Reserved. Printed in the
United States of America. No part of this publication may be reproduced,
stored in a retrieval system, or transmitted, in any form or by any means,
electronic, mechanical, photocopying, recording, or otherwise, without the
prior written permission of the publisher.
07-070096-6
1234567890 SHSH 754321
Preface

The importance of state space analysis is recognized in fields where the time behavior
of any physical process is of interest. The concept of state is comparatively recent, but the
methods used have been known to mathematicians for many years. As engineering, physics,
medicine, economics, and business become more cognizant of the insight that the state space
approach offers, its popularity increases.
This book was written not only for upper division and graduate students, but for prac-
ticing professionals as well. It is an attempt to bridge the gap between theory and practical
use of the state space approach to the analysis and design of dynamical systems. The book
is meant to encourage the use of state space as a tool for analysis and design, in proper
relation with other such tools. The state space approach is more general than the "classical"
Laplace and Fourier transform theory. Consequently, state space theory is applicable to all
systems that can be analyzed by integral transforms in time, and is applicable to many
systems for which transform theory breaks down. Furthermore, state space theory gives
a somewhat different insight into the time behavior of linear systems, and is worth studying
for this aspect alone.
In particular, the state space approach is useful because: (1) linear systems with time-
varying parameters can be analyzed in essentially the same manner as time-invariant linear
systems, (2) problems formulated by state space methods can easily be programmed on a
computer, (3) high-order linear systems can be analyzed, (4) multiple input-multiple output
systems can be treated almost as easily as single input-single output linear systems, and
(5) state space theory is the foundation for further studies in such areas as nonlinear
systems, stochastic systems, and optimal control. These are five of the most important
advantages obtained from the generalization and rigorousness that state space brings to
the classical transform theory.
Because state space theory describes the time behavior of physical systems in a mathe-
matical manner, the reader is assumed to have some knowledge of differential equations and
of Laplace transform theory. Some classical control theory is needed for Chapter 8 only.
No knowledge of matrices or complex variables is prerequisite.
The book may appear to contain too many theorems to be comprehensible and/or useful
to the nonmathematician. But the theorems have been stated and proven in a manner
suited to show the range of application of the ideas and their logical interdependence.
Space that might otherwise have been devoted to solved problems has been used instead
to present the physical motivation of the proofs. Consequently, I give my strongest recom-
mendation that the reader seek to understand the physical ideas underlying the proofs rather
than merely memorize the theorems. Since the emphasis is on applications, the book
might not be rigorous enough for the pure mathematician, but I feel that enough informa-
tion has been provided so that he can tidy up the statements and proofs himself.
The book has a number of novel features. Chapter 1 gives the fundamental ideas of
state from an informal, physical viewpoint, and also gives a correct statement of linearity.
Chapter 2 shows how to write transfer functions and ordinary differential equations in
matrix notation, thus motivating the material on matrices to follow. Chapter 3 develops
the important concepts of range space and null space in detail, for later application. Also
exterior products (Grassmann algebra) are developed, which give insight into determinants,
and which considerably shorten a number of later proofs. Chapter 4 shows how to actually
solve for the Jordan form, rather than just proving its existence. Also a detailed treatment
of pseudoinverses is given. Chapter 5 gives techniques for computation of transition
matrices for high-order time-invariant systems, and contrasts this with a detailed develop-
ment of transition matrices for time-varying systems. Chapter 6 starts with giving physical
insight into controllability and observability of simple systems, and progresses to the point
of giving algebraic criteria for time-varying systems. Chapter 7 shows how to reduce a
system to its essential parameters. Chapter 8 is perhaps the most novel. Techniques from
classical control theory are extended to time-varying, multiple input-multiple output linear
systems using state space formulation. This gives practical methods for control system
design, as well as analysis. Furthermore, the pole placement and observer theory developed
can serve as an introduction to linear optimal control and to Kalman filtering. Chapter 9
considers asymptotic stability of linear systems, and the usual restriction of uniformity is
dispensed with. Chapter 10 gives motivation for the quadratic optimal control problem,
with special emphasis on the practical time-invariant problem and its associated computa-
tional techniques. Since Chapters 6, 8, and 9 precede, relations with controllability, pole
placement, and stability properties can be explored.
The book has come from a set of notes developed for engineering course 122B at UCLA,
originally dating from 1966. It was given to the publisher in June 1969. Unfortunately,
the publication delay has dated some of the material. Fortunately, it also enabled a number
of errors to be weeded out.
Now I would like to apologize because I have not included references, historical develop-
ment, and credit to the originators of each idea. This was simply impossible to do because
of the outline nature of the book.
I would like to express my appreciation to those who helped me write this book. Chapter
1 was written with a great deal of help from A. V. Balakrishnan. L. M. Silverman helped
with Chapter 7 and P.K.C. Wang with Chapter 8. Interspersed throughout the book is
material from a course given by R. E. Kalman during the spring of 1962 at Caltech. J. J.
DiStefano, R. C. Erdmann, N. Levan, and K. Yao have used the notes as a text in UCLA
course 122B and have given me suggestions. I have had discussions with R. E. Mortensen,
M. M. Sholar, A. R. Stubberud, D. R. Vaughan, and many other colleagues. Improvements
in the final draft were made through the help of the control group under the direction of
J. Ackermann at the DFVLR in Oberpfaffenhofen, West Germany, especially by G. Grübel
and R. Sharma. Also, I want to thank those UCLA students, too numerous to mention, who
have served as guinea pigs and have caught many errors of mine. Ruthie Alperin was very
efficient as usual while typing the text. David Beckwith, Henry Hayden, and Daniel Schaum
helped publish the book in its present form. Finally, I want to express my appreciation of
my wife Merideth and my children Erik and Kristin for their understanding during the
long hours of involvement with the book.
DONALD M. WIBERG
University of California, Los Angeles
June 1971
CONTENTS

Chapter 1  MEANING OF STATE
    Introduction to State. State of an Abstract Object. Trajectories in State Space.
    Dynamical Systems. Linearity and Time Invariance. Systems Considered.
    Linearization of Nonlinear Systems.

Chapter 2  METHODS FOR OBTAINING THE STATE EQUATIONS
    Flow Diagrams. Properties of Flow Diagrams. Canonical Flow Diagrams for
    Time-Invariant Systems. Jordan Flow Diagram. Time-Varying Systems. General
    State Equations.

Chapter 3  ELEMENTARY MATRIX THEORY
    Introduction. Basic Definitions. Basic Operations. Special Matrices. Determinants
    and Inverse Matrices. Vector Spaces. Bases. Solution of Sets of Linear Algebraic
    Equations. Generalization of a Vector. Distance in a Vector Space. Reciprocal
    Basis. Matrix Representation of a Linear Operator. Exterior Products.

Chapter 4  MATRIX ANALYSIS
    Eigenvalues and Eigenvectors. Introduction to the Similarity Transformation.
    Properties of Similarity Transformations. Jordan Form. Quadratic Forms. Matrix
    Norms. Functions of a Matrix. Pseudoinverse.

Chapter 5  SOLUTIONS TO THE LINEAR STATE EQUATION
    Transition Matrix. Calculation of the Transition Matrix for Time-Invariant
    Systems. Transition Matrix for Time-Varying Differential Systems. Closed Forms
    for Special Cases of Time-Varying Linear Differential Systems. Periodically-
    Varying Linear Differential Systems. Solution of the Linear State Equations with
    Input. Transition Matrix for Time-Varying Difference Equations. Impulse
    Response Matrices. The Adjoint System.

Chapter 6  CONTROLLABILITY AND OBSERVABILITY
    Introduction to Controllability and Observability. Controllability in Time-Invariant
    Linear Systems. Observability in Time-Invariant Linear Systems. Direct Criteria
    from A, B, and C. Controllability and Observability of Time-Varying Systems.
    Duality.

Chapter 7  CANONICAL FORMS OF THE STATE EQUATION
    Introduction to Canonical Forms. Jordan Form for Time-Invariant Systems. Real
    Jordan Form. Controllable and Observable Forms for Time-Varying Systems.
    Canonical Forms for Time-Varying Systems.

Chapter 8  RELATIONS WITH CLASSICAL TECHNIQUES
    Introduction. Matrix Flow Diagrams. Steady State Errors. Root Locus. Nyquist
    Diagrams. State Feedback Pole Placement. Observer Systems. Algebraic
    Separation. Sensitivity, Noise Rejection, and Nonlinear Effects.

Chapter 9  STABILITY OF LINEAR SYSTEMS
    Introduction. Definitions of Stability for Zero-Input Linear Systems. Definitions
    of Stability for Nonzero Inputs. Liapunov Techniques. Liapunov Functions for
    Linear Systems. Equations for the Construction of Liapunov Functions.

Chapter 10  INTRODUCTION TO OPTIMAL CONTROL
    Introduction. The Criterion Functional. Derivation of the Optimal Control Law.
    The Matrix Riccati Equation. Time-Invariant Optimal Systems. Output Feedback.
    The Servomechanism Problem. Conclusion.

INDEX
Chapter 1
Meaning of State
1.1 INTRODUCTION TO STATE
To introduce the subject, let's take an informal, physical approach to the idea of state.
(An exact mathematical approach is taken in more advanced texts.) First, we make a
distinction between physical and abstract objects. A physical object is an object perceived
by our senses whose time behavior we wish to describe, and its abstraction is the mathe-
matical relationships that give some expression for its behavior. This distinction is made
because, in making an abstraction, it is possible to lose some of the relationships that make
the abstraction behave similarly to the physical object. Also, not all mathematical relation-
ships can be realized by a physical object.
The concept of state relates to those physical objects whose behavior can change with
time, and to which a stimulus can be applied and the response observed. To predict the
future behavior of the physical object under any input, a series of experiments could be
performed by applying stimuli, or inputs, and observing the responses, or outputs. From
these experiments we could obtain a listing of these inputs and their corresponding observed
outputs, i.e. a list of input-output pairs. An input-output pair is an ordered pair of real
time functions defined for all t ≥ t0, where t0 is the time the input is first applied. Of
course segments of these input time functions must be consistent and we must agree upon
what kind of functions to consider, but in this introduction we shall not go into these
mathematical details.
Definition 1.1: The state of a physical object is any property of the object which relates
input to output such that knowledge of the input time function for t ≥ t0
and the state at time t = t0 completely determines a unique output for t ≥ t0.
Example 1.1.
Consider a black box, Fig. 1-1, containing a switch to one of two voltage dividers. Intuitively, the
state of the box is the position of the switch, which agrees with Definition 1.1. This can be ascertained
by the experiment of applying a voltage V to the input terminal. Natural laws (Ohm's law) dictate that
if the switch is in the lower position A, the output voltage is V/2, and if the switch is in the upper
position B, the output voltage is V/4. Then the state A determines the input-output pair to be (V, V/2),
and the state B corresponds to (V, V/4).

[Fig. 1-1: Black box containing a switch between two voltage dividers.]

1.2 STATE OF AN ABSTRACT OBJECT


The basic ideas contained in the above example can be extended to many physical objects
and to the abstract relationships describing their time behavior. This will be done after
abstracting the properties of physical objects such as the black box. For example, the
color of the box has no effect on the experiment of applying a voltage. More subtly, the
value of resistance R is immaterial if it is greater than zero. All that is needed is a listing
of every input-output pair over all segments of time t ≥ t0, and the corresponding states
at time t0.


Definition 1.2: An abstract object is the totality of input-output pairs that describe the
behavior of a physical object.
Instead of a specific list of input time functions and their corresponding output time
functions, the abstract object is usually characterized as a class of all time functions that
obey a set of mathematical equations. This is in accord with the scientific method of
hypothesizing an equation and then checking to see that the physical object behaves in a
manner similar to that predicted by the equation. Hence we can often summarize the
abstract object by using the mathematical equations representing physical laws.
The mathematical relations which summarize an abstract object must be oriented,
in that m of the time functions that obey the relations must be designated inputs (denoted
by the vector u, having m elements ui) and k of the time functions must be designated
outputs (denoted by the vector y, having k elements yi). This need has nothing to do with
causality, in that the outputs are not "caused" by the inputs.

Definition 1.3: The state of an abstract object is a collection of numbers which together
with the input u(t) for all t ≥ t0 uniquely determines the output y(t)
for all t ≥ t0.

In essence the state parametrizes the listing of input-output pairs. The state is the
answer to the question "Given u(t) for t ≥ t0 and the mathematical relationships of the
abstract object, what additional information is needed to completely specify y(t) for t ≥ t0?"
Example 1.2.
A physical object is the resistor-capacitor network shown in Fig. 1-2. An experiment is performed
by applying a voltage u(t), the input, and measuring a voltage y(t), the output. Note that another
experiment could be to apply y(t) and measure u(t), so that these choices are determined by the experiment.

[Fig. 1-2: RC network with input voltage u(t), series resistor R, and output voltage y(t) across the capacitor C.]

The list of all input-output pairs for this example is the class of all functions u(t), y(t) which satisfy
the mathematical relationship

    RC dy/dt + y = u        (1.1)

This summarizes the abstract object. The solution of (1.1) is

    y(t) = y(t0) e^{(t0-t)/RC} + (1/RC) ∫_{t0}^{t} e^{(τ-t)/RC} u(τ) dτ        (1.2)

This relationship explicitly exhibits the list of input-output pairs. For any input time function u(τ) for
τ ≥ t0, the output time function y(t) is uniquely determined by y(t0), a number at time t0. Note the
distinction between time functions and numbers. Thus the set of numbers y(t0) parametrizes all input-
output pairs, and therefore is the state of the abstract object described by (1.1). Correspondingly, a
choice of state of the RC network is the output voltage at time t0.
Example 1.3.
The physical object shown in Fig. 1-3 is two RC networks in series. The pertinent equation is

    R^2 C^2 d^2y/dt^2 + 2.5RC dy/dt + y = u        (1.3)

[Fig. 1-3: Two RC sections in cascade: R and C, followed by 2R and C/2; input u(t), output y(t) across the C/2 capacitor.]

with a solution

    y(t) = y(t0)[(4/3) e^{(t0-t)/2RC} − (1/3) e^{2(t0-t)/RC}]
           + (2RC/3) (dy/dt)(t0) [e^{(t0-t)/2RC} − e^{2(t0-t)/RC}]
           + (2/(3RC)) ∫_{t0}^{t} [e^{(τ-t)/2RC} − e^{2(τ-t)/RC}] u(τ) dτ        (1.4)

Here the set of numbers y(t0) and (dy/dt)(t0) parametrizes the input-output pairs, and may be chosen as state.
Physically, the voltage and its derivative across the smaller capacitor at time t0 correspond to the state.
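As a sanity check on the zero-input part of (1.4), the sketch below (with the assumed normalization RC = 1 and hypothetical initial data) integrates (1.3) directly with u = 0 and compares with the two-exponential expression above.

```python
import numpy as np

# Example 1.3 with u = 0 and the assumed value RC = 1:
#   d2y/dt2 + 2.5 dy/dt + y = 0      (equation (1.3) normalized)
t0, y0, yd0 = 0.0, 1.0, -0.3         # hypothetical y(t0) and dy/dt(t0)

dt = 1e-4
ts = np.arange(t0, 6.0, dt)
y, yd = np.empty_like(ts), np.empty_like(ts)
y[0], yd[0] = y0, yd0
for k in range(len(ts) - 1):         # Euler integration of the state form
    y[k + 1] = y[k] + dt * yd[k]
    yd[k + 1] = yd[k] + dt * (-2.5 * yd[k] - y[k])

# Zero-input part of (1.4) with RC = 1 (characteristic roots -1/2 and -2).
e1, e2 = np.exp((t0 - ts) / 2), np.exp(2 * (t0 - ts))
y_exact = y0 * (4 * e1 - e2) / 3 + yd0 * 2 * (e1 - e2) / 3

print(np.max(np.abs(y - y_exact)))   # small: y(t0), dy/dt(t0) parametrize the pairs
```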

Definition 1.4: A state variable, denoted by the vector x(t), is the time function whose
value at any specified time is the state of the abstract object at that time.

Note this difference in going from a set of numbers to a time function. The state can
be a set consisting of an infinity of numbers (e.g. Problems 1.1 and 1.2), in which case the
state variable is an infinite collection of time functions. However, in most cases considered
in this book, the state is a set of n numbers and correspondingly x(t) is an n-vector function
of time.

Definition 1.5: The state space, denoted by Σ, is the set of all x(t).

Example 1.4.
The state variable in Example 1.2 is x(t) = y(t), whereas in Example 1.1 the state variable remains
either A or B for all time.

Example 1.5.
The state variable in Example 1.3 is the vector x(t) = (y(t), dy/dt(t)).

The state representation is not unique. There can be many different ways of expressing
the relationship of input to output.

Example 1.6.
In Example 1.3, instead of the voltage and its derivative across the smaller capacitor, the state could
be the voltage and its derivative across the larger capacitor, or the state could be the voltages across
both capacitors.

There can exist inputs that do not influence the state, and, conversely, there can exist
outputs that are not influenced by the state. These cases are called uncontrollable and
unobservable, respectively, about which much more will be said in Chapter 6.

Example 1.7.
In Example 1.1, the physical object is state uncontrollable. No input can make the switch change
positions. However, the switch position is observable. If the wire to the output were broken, it would be
unobservable. A state that is both unobservable and uncontrollable makes no physical sense, since it can-
not be detected by experiment. Examples 1.2 and 1.3 are both controllable and observable.

One more point to note is that we consider here only deterministic abstract objects.
The problem of obtaining the state of an abstract object in which random processes are
inputs, etc., is beyond the scope of this book. Consequently, all statements in the whole
book are intended only for deterministic processes.

1.3 TRAJECTORIES IN STATE SPACE


The state variable x(t) is an explicit function of time, but also depends implicitly on the
starting time t0, the initial state x(t0) = x0, and the input u(τ). This functional dependency
can be written as x(t) = φ(t; t0, x0, u(τ)), called a trajectory. The trajectory can be plotted
in n-dimensional state space as t increases from t0, with t an implicit parameter. Often
this plot can be made by eliminating t from the solutions to the state equation.

Example 1.8.
Given x1(t) = sin t and x2(t) = cos t, squaring each equation and adding gives x1^2 + x2^2 = 1. This
is a circle in the x1-x2 plane with t an implicit parameter.

Example 1.9.
In Example 1.3, note that equation (1.4) depends on t, u(τ), x(t0) and t0, where x(t0) is the vector with
components y(t0) and dy/dt(t0). Therefore the trajectories φ depend on these quantities.

Suppose now u(t) = 0 and RC = 1. Let x1 = y(t) and x2 = dy/dt. Then dx1/dt = x2 and d^2y/dt^2 =
dx2/dt. Therefore dt = dx1/x2 and so d^2y/dt^2 = x2 dx2/dx1. Substituting these relationships into (1.3)
gives

    x2 dx2/dx1 + 2.5x2 + x1 = 0

which is independent of t. This has a solution

    x1 + 2x2 = C(2x1 + x2)^4

where the constant C = [x1(t0) + 2x2(t0)]/[2x1(t0) + x2(t0)]^4. Typical trajectories in state space are shown
in Fig. 1-4. The one passing through the point x1(t0) = 0, x2(t0) = 1 is drawn in bold. The arrows
point in the direction of increasing time, and all trajectories eventually reach the origin for this particular
stable system.

[Fig. 1-4: Typical trajectories in the x1-x2 state space, all approaching the origin.]
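The claimed integral of the motion can be verified numerically. The sketch below (step size and horizon are arbitrary choices) marches along the bold trajectory from x1(t0) = 0, x2(t0) = 1 and watches the quantity C = (x1 + 2x2)/(2x1 + x2)^4 stay constant.

```python
import numpy as np

# Trajectories of Example 1.9: with u = 0 and RC = 1, equation (1.3) becomes
#   dx1/dt = x2,   dx2/dt = -x1 - 2.5*x2.
x1, x2 = 0.0, 1.0                     # the bold trajectory: x1(t0)=0, x2(t0)=1
C = lambda a, b: (a + 2 * b) / (2 * a + b) ** 4

dt, c0 = 1e-5, C(x1, x2)
for _ in range(200000):               # march along the trajectory to t = 2
    x1, x2 = x1 + dt * x2, x2 + dt * (-x1 - 2.5 * x2)
print(C(x1, x2) - c0)                 # ~0: x1 + 2*x2 = C*(2*x1 + x2)**4 holds
```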

1.4 DYNAMICAL SYSTEMS


In the foregoing we have assumed that an abstract object exists, and that sometimes
we can find a set of oriented mathematical relationships that summarizes this listing of
input and output pairs. Now suppose we are given a set of oriented mathematical relation-
ships, do we have an abstract object? The answer to this question is not always affirmative,
because there exist mathematical equations whose solutions do not result in abstract objects.

Example 1.10.
The oriented mathematical equation y(t) = ju(t) cannot give an abstract object, because either the
input or the output must be imaginary.

If a mathematical relationship always determines a real output y(t) existing for all
t ≥ t0 given any real input u(t) for all time t, then we can form an abstract object. Note
that by supposing an input u(t) for all past times as well as future times, we can form an
abstract object from the equation for a delayor, y(t) = u(t − T). [See Problem 1.1.]

However, we can also form an abstract object from the equation for a predictor
y(t) = u(t + T). If we are to restrict ourselves to mathematical relations that can be
mechanized, we must specifically rule out such relations whose present outputs depend
on future values of the input.

Definition 1.6: A dynamical system is an oriented mathematical relationship in which:

(1) A real output y(t) exists for all t ≥ t0 given a real input u(t) for all t.

(2) Outputs y(t) do not depend on inputs u(τ) for τ > t.

Given that we have a dynamical system relating y(t) to u(t), we would like to construct
a set of mathematical relations defining a state x(t). We shall assume that a state space
description can be found for the dynamical system of interest satisfying the following
conditions (although such a construction may take considerable thought):

Condition 1: A real, unique output y(t) = η(t, φ(t; t0, x0, u(τ)), u(t)) exists for all t > t0
given the state x0 at time t0 and a real input u(τ) for τ ≥ t0.

Condition 2: A unique trajectory φ(t; t0, x0, u(τ)) exists for all t > t0 given the state at
time t0 and a real input for all t ≥ t0.

Condition 3: A unique trajectory starts from each state, i.e.

    lim_{t→t0+} φ(t; t0, x(t0), u(τ)) = x(t0)        (1.5)

Condition 4: Trajectories satisfy the transition property

    φ(t; t0, x(t0), u(τ)) = φ(t; t1, x(t1), u(τ))   for t0 < t1 < t        (1.6)

where

    x(t1) = φ(t1; t0, x(t0), u(τ))        (1.7)

Condition 5: Trajectories φ(t; t0, x0, u(τ)) do not depend on inputs u(τ) for τ > t.

Condition 1 gives the functional relationship y(t) = η(t, x(t), u(t)) between initial state
and future input such that a unique output is determined. Therefore, with a proper state
space description, it is not necessary to know inputs prior to t0, but only the state at time t0.
The state at the initial time completely summarizes all the past history of the input.

Example 1.11.
In Example 1.2, it does not matter how the voltage across the capacitor was obtained in the past.
All that is needed to determine the unique future output is the state and the future input.

Condition 2 insures that the state at a future time is uniquely determined. Therefore
knowledge of the state at any time, not necessarily t0, uniquely determines the output. For
a given u(t), one and only one trajectory passes through each point in state space and exists
for all finite t ≥ t0. As can be verified in Fig. 1-4, one consequence of this is that the state
trajectories do not cross one another. Also, notice that condition 2 does not require the
state to be real, even though the input and output must be real.

Example 1.12.
The relation dy/dt = u(t) is obviously a dynamical system. A state space description dx/dt = ju(t)
with output y(t) = −jx(t) can be constructed satisfying conditions 1-5, yet the state is imaginary.

Condition 3 merely requires the state space description to be consistent, in that the
starting point of the trajectory should correspond to the initial state. Condition 4 says
that the input u(τ) takes the system from a state x(t0) to a state x(t), and if x(t1) is on
that trajectory, then the corresponding segment of the input will take the system from
x(t1) to x(t). Finally, condition 5 has been added to assure causality of the input-output
relationship resulting from the state space description, to correspond with the causality of
the original dynamical system.
Example 1.13.
We can construct a state space description of equation (1.1) of Example 1.2 by defining a state
x(t) = y(t). Then condition 1 is satisfied as seen by examination of the solution, equation (1.2). Clearly
the trajectory φ(t; t0, x0, u(τ)) exists and is unique given a specified t0, x0 and u(τ), so condition 2 is satisfied.
Also, conditions 3 and 5 are satisfied. To check condition 4, given x(t0) = y(t0) and u(τ) over t0 ≤ τ ≤ t,
then

    x(t) = x(t1) e^{(t1-t)/RC} + (1/RC) ∫_{t1}^{t} e^{(τ-t)/RC} u(τ) dτ        (1.8)

where

    x(t1) = x(t0) e^{(t0-t1)/RC} + (1/RC) ∫_{t0}^{t1} e^{(τ0-t1)/RC} u(τ0) dτ0        (1.9)

Substitution of (1.9) into (1.8) gives the previously obtained (1.2). Therefore the dynamical system (1.1) has
a state space description satisfying conditions 1-5.
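The transition property is also easy to see numerically. The sketch below uses the closed form under assumed data (RC = 1, unit step input, hypothetical times and state): the state reached at an intermediate time t1 by (1.9), used as the initial state in (1.8), reproduces the trajectory from t0.

```python
import numpy as np

# Transition property (1.6) for the RC network, with RC = 1 and a unit step
# input assumed.  phi(t; t0, x0) is the closed form (1.2) under this input.
def phi(t, t0, x0):
    return x0 * np.exp(t0 - t) + (1.0 - np.exp(t0 - t))

t0, t1, t, x0 = 0.0, 0.7, 2.0, 0.25    # hypothetical times and initial state
x_t1 = phi(t1, t0, x0)                 # equation (1.9): state reached at t1
print(phi(t, t0, x0) - phi(t, t1, x_t1))  # ~0: passing through t1 changes nothing
```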
Henceforth, instead of "dynamical system with a state space description" we will simply
say "system" and the rest will be understood.

1.5 LINEARITY AND TIME INVARIANCE


Definition 1.7: Given any two numbers α, β; two states x1(t0), x2(t0); two inputs u1(τ), u2(τ);
and two corresponding outputs y1(τ), y2(τ) for τ ≥ t0. Then a system is
linear if (1) the state x3(t0) = αx1(t0) + βx2(t0), the output y3(τ) = αy1(τ) +
βy2(τ), and the input u3(τ) = αu1(τ) + βu2(τ) can appear in the oriented ab-
stract object and (2) both y3(τ) and x3(τ) correspond to the state x3(t0) and
input u3(τ).

An equivalent statement is that the operators φ(t; t0, x0, u(τ)) = x(t) and η(t, φ(t; t0, x0, u(τ))) = y(t)
are linear on {u(τ)} ⊕ {x(t0)}.
Example 1.14.
In Example 1.2,

    y1(t) = x1(t) = x1(t0) e^{(t0-t)/RC} + (1/RC) ∫_{t0}^{t} e^{(τ-t)/RC} u1(τ) dτ

    y2(t) = x2(t) = x2(t0) e^{(t0-t)/RC} + (1/RC) ∫_{t0}^{t} e^{(τ-t)/RC} u2(τ) dτ

are the corresponding outputs y1(t) and y2(t) to the states x1(t) and x2(t) with inputs u1(τ) and u2(τ).
Since any magnitude of voltage is permitted in this idealized system, any state x3(t) = αx1(t) + βx2(t),
any input u3(t) = αu1(t) + βu2(t), and any output y3(t) = αy1(t) + βy2(t) will appear in the list of input-
output pairs that form the abstract object. Therefore part (1) of Definition 1.7 is satisfied. Furthermore,
let's look at the response generated by x3(t0) and u3(τ):

    y(t) = x3(t0) e^{(t0-t)/RC} + (1/RC) ∫_{t0}^{t} e^{(τ-t)/RC} u3(τ) dτ

         = [αx1(t0) + βx2(t0)] e^{(t0-t)/RC} + (1/RC) ∫_{t0}^{t} e^{(τ-t)/RC} [αu1(τ) + βu2(τ)] dτ

         = αy1(t) + βy2(t) = y3(t)

Since y3(t) = x3(t), both the future output and state correspond to x3(t0) and u3(τ), and the system is linear.
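The same superposition argument can be checked by machine. The sketch below uses hypothetical inputs, initial states, and constants α, β, with RC = 1 assumed and a crude Riemann sum standing in for the integral in (1.2); linearity of the sum makes the difference vanish to rounding error.

```python
import numpy as np

# Superposition check for Example 1.14 (RC = 1 assumed).  Response to initial
# state x0 and input u over [t0, t], by Riemann-sum quadrature of (1.2).
def response(ts, x0, u):
    t0, t, dt = ts[0], ts[-1], ts[1] - ts[0]
    return x0 * np.exp(t0 - t) + np.sum(np.exp(ts - t) * u(ts)) * dt

ts = np.linspace(0.0, 3.0, 300001)
u1 = lambda t: np.sin(t)               # hypothetical inputs and states
u2 = lambda t: 1.0 + 0.0 * t
x10, x20, a, b = 1.0, -0.5, 2.0, 3.0

y3 = response(ts, a * x10 + b * x20, lambda t: a * u1(t) + b * u2(t))
print(y3 - (a * response(ts, x10, u1) + b * response(ts, x20, u2)))   # ~0
```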

Example 1.15.
Consider the system of Example 1.1. For some α and β there is no state equal to αA + βB, where A and
B are the switch positions. Consequently the system violates condition (1) of Definition 1.7 and is not linear.

Example 1.16.
Given the system dx/dt = 0, y = u cos x. Then y1(t) = u1(t) cos x1(t0) and y2(t) = u2(t) cos x2(t0).
The state x3(t) = x3(t0) = αx1(t0) + βx2(t0) and is linear, but the output

    y(t) = [αu1(t) + βu2(t)] cos [αx1(t0) + βx2(t0)] ≠ αy1(t) + βy2(t)

except in special cases like x1(t0) = x2(t0) = 0, so the system is not linear.

If a system is linear, then superposition holds for nonzero u(t) with x(to) = 0 and also
for nonzero x(to) with u(t) = 0 but not both together. In Example 1.14, with zero initial
voltage on the capacitor, the response to a biased a-c voltage input (constant + sin ωt) could
be calculated as the response to a constant voltage input plus the response to an unbiased
a-c voltage input. Also, note from Example 1.16 that even if superposition does hold for
nonzero u(t) with x(to) = 0 and for nonzero x(to) with u(t) = 0, the system may still not
be linear.

Definition 1.8: A system is time-invariant if the time axis can be translated and an equiva-
lent system results.

One test for time-invariance is to compare the original output with the shifted output.
First, shift the input time function by T seconds. Starting from the same initial state
x0 at time t0 + T, does y(t + T) of the shifted system equal y(t) of the original system?

Example 1.17.
Given the nonlinear differential equation

    dx/dt = x^2 + u^2

with x(6) = a. Let τ = t − 6 so that dτ = dt and

    dx/dτ = x^2 + u^2

where x(τ = 0) = a, resulting in the same system.

If the nonlinear equation for the state x were changed to

    dx/dt = tx^2 + u^2

then with the substitution τ = t − 6,

    dx/dτ = τx^2 + u^2 + 6x^2

and the appearance of the last term on the right gives a different system. Therefore this is a time-
varying nonlinear system. Equations with explicit functions of t as coefficients multiplying the state will
usually be time-varying.
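The shifting test of Definition 1.8 is easy to mechanize. The sketch below applies it to both equations of this example, using a hypothetical input, arbitrary numbers, and crude Euler integration; only the time-varying candidate fails the test.

```python
import numpy as np

# Time-invariance test: integrate dx/dt = f(x, u, t), then repeat with the
# input shifted by T and the same initial state at t0 + T (Definition 1.8).
def simulate(f, x0, t0, u, t_end, dt=1e-4):
    x, t = x0, t0
    while t < t_end:
        x += dt * f(x, u(t), t)
        t += dt
    return x

u = lambda t: np.cos(t)               # hypothetical input
T, t0, x0 = 2.0, 6.0, 0.1             # arbitrary shift, start time, state

for f in (lambda x, u, t: x**2 + u**2,        # candidate of the first kind
          lambda x, u, t: t * x**2 + u**2):   # candidate of the second kind
    y_orig = simulate(f, x0, t0, u, t0 + 1.0)
    y_shift = simulate(f, x0, t0 + T, lambda t: u(t - T), t0 + T + 1.0)
    print(y_shift - y_orig)   # ~0 for the first system, nonzero for the second
```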

1.6 SYSTEMS CONSIDERED


This book will consider only time-invariant and time-varying linear dynamical systems
described by sets of differential or difference equations of finite order. We shall see in the
next chapter that in this case the state variable x(t) is an n-vector and the system is linear.
Example 1.18.
A time-varying linear differential system of order n with one input and one output is described by the
equation

    d^n y/dt^n + α1(t) d^{n-1}y/dt^{n-1} + ... + αn(t) y = β0(t) d^n u/dt^n + β1(t) d^{n-1}u/dt^{n-1} + ... + βn(t) u        (1.10)

Example 1.19.
A time-varying linear difference system of order n with one input and one output is described by
the equation

    y(k+n) + α1(k) y(k+n−1) + ... + αn(k) y(k) = β0(k) u(k+n) + ... + βn(k) u(k)        (1.11)

The values of α(k) depend on the step k of the process, in a way analogous to the way the α(t) depend on
t in the previous example.

1.7 LINEARIZATION OF NONLINEAR SYSTEMS


State space techniques are especially applicable to time-varying linear systems. In this
section we shall find out why time-varying linear systems are of such practical importance.
Comparatively little design of systems is performed from the time-varying point of view
at present, but state space methods offer great promise for the future.

Consider a set of n nonlinear differential equations of first order:

    dy1/dt = f1(y1, y2, ..., yn, u, t)
    dy2/dt = f2(y1, y2, ..., yn, u, t)
    . . . . . . . . . . . . . . . . .
    dyn/dt = fn(y1, y2, ..., yn, u, t)

A nonlinear equation of nth order, d^n y/dt^n = g(y, dy/dt, ..., d^{n-1}y/dt^{n-1}, u, t), can always
be written in this form by defining y1 = y, dy/dt = y2, ..., d^{n-1}y/dt^{n-1} = yn. Then a set
of n first order nonlinear differential equations can be obtained as

    dy1/dt = y2
    dy2/dt = y3
    . . . . . . .        (1.12)
    dy_{n-1}/dt = yn
    dyn/dt = g(y1, y2, ..., yn, u, t)

Example 1.20.
To reduce the second order nonlinear differential equation d^2y/dt^2 − 2y^3 + u dy/dt = 0 to two first
order nonlinear differential equations, define y = y1 and dy/dt = y2. Then

    dy1/dt = y2
    dy2/dt = 2y1^3 − u y2

Suppose a solution can be found (perhaps by computer) to equations (1.12) for some
initial conditions y1(t0), y2(t0), ..., yn(t0) and some input w(t). Denote this solution as the
trajectory φ(t; w(t), y1(t0), ..., yn(t0), t0). Suppose now that the initial conditions are changed:

    y(t0) = y1(t0) + x1(t0),   dy/dt(t0) = y2(t0) + x2(t0),   ...,   d^{n-1}y/dt^{n-1}(t0) = yn(t0) + xn(t0)

where x1(t0), x2(t0), ..., xn(t0) are small. Furthermore, suppose the input is changed slightly
to u(t) = w(t) + v(t) where v(t) is small. To satisfy the differential equations,

    d(φ1 + x1)/dt = f1(φ1 + x1, φ2 + x2, ..., φn + xn, w + v, t)

If f1, f2, ..., fn can be expanded about φ1, φ2, ..., φn and w using Taylor's theorem for sev-
eral variables, then neglecting higher order terms we obtain

    dφ1/dt + dx1/dt = f1(φ1, φ2, ..., φn, w, t) + (∂f1/∂y1) x1 + (∂f1/∂y2) x2 + ... + (∂f1/∂yn) xn + (∂f1/∂u) v

    dφ2/dt + dx2/dt = f2(φ1, φ2, ..., φn, w, t) + (∂f2/∂y1) x1 + (∂f2/∂y2) x2 + ... + (∂f2/∂yn) xn + (∂f2/∂u) v
    . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
    dφn/dt + dxn/dt = fn(φ1, φ2, ..., φn, w, t) + (∂fn/∂y1) x1 + (∂fn/∂y2) x2 + ... + (∂fn/∂yn) xn + (∂fn/∂u) v

where each ∂fi/∂yj is the partial derivative of fi(y1, y2, ..., yn, u, t) with respect to yj, evalu-
ated at y1 = φ1, y2 = φ2, ..., yn = φn and u = w. Now, since each φi satisfies the original
equation, dφi/dt = fi can be canceled to leave

    d/dt [x1]   [∂f1/∂y1  ∂f1/∂y2  ...  ∂f1/∂yn] [x1]   [∂f1/∂u]
         [x2] = [∂f2/∂y1  ∂f2/∂y2  ...  ∂f2/∂yn] [x2] + [∂f2/∂u] v
         [..]   [. . . . . . . . . . . . . . . .] [..]   [  ..  ]
         [xn]   [∂fn/∂y1  ∂fn/∂y2  ...  ∂fn/∂yn] [xn]   [∂fn/∂u]

which is, in general, a time-varying linear differential equation, so that the nonlinear equa-
tion has been linearized. Note this procedure is valid only for x1, x2, ..., xn and v small
enough that the higher order terms in the Taylor series can be neglected. The matrix
of ∂fi/∂yj evaluated at yj = φj is called the Jacobian matrix of the vector f(y, u, t).
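In practice the Jacobian matrix is often formed numerically rather than by hand. The sketch below is a minimal illustration (the helper `jacobians`, the finite-difference step, and the test values are assumptions, not from the text); it recovers by central differences the partial derivatives that Example 1.21 below works out analytically.

```python
import numpy as np

# Numerical linearization about a nominal point (phi, w) at time t: build the
# Jacobian matrices of f(y, u, t) by central differences (a sketch only).
def jacobians(f, phi, w, t, eps=1e-6):
    n = len(phi)
    A = np.zeros((n, n))                      # entries dfi/dyj at (phi, w, t)
    for j in range(n):
        dy = np.zeros(n); dy[j] = eps
        A[:, j] = (f(phi + dy, w, t) - f(phi - dy, w, t)) / (2 * eps)
    b = (f(phi, w + eps, t) - f(phi, w - eps, t)) / (2 * eps)   # dfi/du
    return A, b

# Example 1.20's system: dy1/dt = y2, dy2/dt = 2*y1**3 - u*y2.
f = lambda y, u, t: np.array([y[1], 2 * y[0]**3 - u * y[1]])
A, b = jacobians(f, np.array([1.0, -1.0]), 0.0, 1.0)
print(A)   # [[0, 1], [6, 0]]: matches df2/dy1 = 6*y1**2 = 6 at the nominal point
print(b)   # [0, 1]: df2/du = -y2 = 1 on the nominal trajectory
```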
Example 1.21.
Consider the system of Example 1.20 with initial conditions y(t0) = 1 and dy/dt(t0) = −1 at t0 = 1.
For the particular input w(t) = 0, we obtain the trajectories φ1(t) = t^{-1} and φ2(t) = −t^{-2}. Since f1 = y2,
then ∂f1/∂y1 = 0, ∂f1/∂y2 = 1 and ∂f1/∂u = 0. Since f2 = 2y1^3 − u y2, then ∂f2/∂y1 = 6y1^2, ∂f2/∂y2 = −u and
∂f2/∂u = −y2. Hence for initial conditions y(t0) = 1 + x1(t0), dy/dt(t0) = −1 + x2(t0) and inputs u = v(t),
we obtain

    d/dt [x1]   [  0       1] [x1]   [   0   ]
         [x2] = [6t^{-2}   0] [x2] + [t^{-2} ] v

This linear equation gives the solution y(t) = x1(t) + t^{-1} and dy/dt = x2(t) − t^{-2} for the original nonlinear
equation, and is valid as long as x1, x2 and v are small.

Example 1.22.
Given the system dy/dt = ky − y^2 + u. Taking u(t) = 0, we can find two constant solutions, φ(t) = 0
and ψ(t) = k. The equation for small motions x(t) about φ(t) = 0 is dx/dt = kx + u, so that y(t) = x(t);
and the equation for small motions x(t) about ψ(t) = k is dx/dt = −kx + u, so that y(t) = k + x(t).

Solved Problems
1.1. Given a delay line whose output is a voltage input delayed T seconds. What is the
physical object, the abstract object, the state variable and the state space? Also,
is it controllable, observable and a dynamical system with a state space description?

The physical object is the delay line for an input u(t) and an output y(t) = u(t − T). This
equation is the abstract object. Given an input time function u(t) for all t, the output y(t) is
defined for t ≥ t0, so it is a dynamical system. To completely specify the output given only u(t)
for t ≥ t0, the voltages already inside the delay line must be known. Therefore, the state at
time t0 is x(t0) = u[t0−T, t0), where the notation u[t2, t1) means the time function u(τ) for τ in the
interval t2 ≤ τ < t1. For ε > 0 as small as we please, u[t0−T, t0) can be considered as the un-
countably infinite set of numbers

    {u(t0 − T), u(t0 − T + ε), ..., u(t0 − ε)}

In this sense we can consider the state as consisting of an infinity of numbers. Then the state
variable is the infinite set of time functions

    x(t) = u[t−T, t) = {u(t − T), u(t − T + ε), ..., u(t − ε)}

The state space is the space of all time functions T seconds long, perhaps limited by the
breakdown voltage of the delay line.

An input u(t) for t0 ≤ t < t0 + T will completely determine the state T seconds later, and
any state will be observed in the output after T seconds, so the system is both observable and con-
trollable. Finally, x(t) is uniquely made up of x(t − τ) shifted τ seconds plus the input over τ sec-
onds, so that the mathematical relation y(t) = u(t − T) gives a system with a state space
description.
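A discrete-time caricature makes this infinite-dimensional state concrete: if time is sampled every dt seconds and T = N·dt, the state is the buffer of the last N input samples. The sketch below (N = 5 is an arbitrary choice) shows the buffer playing exactly the role described above.

```python
from collections import deque

# Discrete stand-in for the delay line: the state is the last N input samples,
# a finite caricature of the uncountable set u[t-T, t) described above.
N = 5                                 # assumed T / dt
state = deque([0.0] * N, maxlen=N)    # x(t0): voltages already inside the line

def step(u_now):
    """Push the newest sample in; the oldest sample emerges as the output."""
    y = state[0]
    state.append(u_now)               # maxlen drops state[0] automatically
    return y

print([step(u) for u in [1, 2, 3, 4, 5, 6, 7]])   # [0,0,0,0,0,1,2]: delayed by N
```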

1.2. Given the uncontrollable partial differential equation (diffusion equation)

    ∂y(r, t)/∂t = ∂^2 y(r, t)/∂r^2 + 0u(r, t)

with boundary conditions y(0, t) = y(1, t) = 0. What is the state variable?

The solution to this zero input equation for t ≥ t0 is

    y(r, t) = Σ_{n=1}^{∞} cn e^{-n^2 π^2 (t-t0)} sin nπr

where cn = 2 ∫_0^1 y(r, t0) sin nπr dr. All that is needed to determine the output is y(r, t0), so that
y(r, t) is a choice for the state at any time t. Since y(r, t) must be known for almost all r in the
interval 0 ≤ r ≤ 1, the state can be considered as an infinity of numbers similar to the case of
Problem 1.1.

1.3. Given the mathematical equation (dy/dt)^2 = y^2 + 0u. Is this a dynamical system
with a state space description?

A real output exists for all t ≥ t0 for any u(t), so it is a dynamical system. The equation
can be written as dy/dt = s(t)y, where s(t) is a member of a set of time functions that take on
the value +1 or −1 at any fixed time t. Hence knowledge of y(t0) and s(t) for t ≥ t0 uniquely
specify y(t), and they are the state.

1.4. Plot the trajectories of

    d^2y/dt^2 = { 0    if dy/dt − y ≥ 1 }  + 0u
                { −y   if dy/dt − y < 1 }

Changing variables to y = x1 and dy/dt = x2, the mathematical relation becomes

    dx2/dt = 0    or    dx2/dt = −x1

The former equation can be integrated immediately to x2(t) = x2(t0), a straight line in the phase
plane. The latter equation is solved by multiplying by dx1/2 to obtain x2 dx2/2 + x1 dx1/2 = 0.
This can be integrated to

    x2^2(t) + x1^2(t) = x2^2(t0) + x1^2(t0)

The result is an equation of circles in the phase plane. The straight lines lie to the left of the
line x2 − x1 = 1 and the circles to the right, as plotted in Fig. 1-5.

[Fig. 1-5: Phase plane trajectories: straight lines to the left of the line x2 − x1 = 1, circular arcs to the right; numbered points 1-7 mark particular states.]

Note x1 increases for x2 > 0 (positive velocity) and decreases for x2 < 0, giving the motion
of the system in the direction of the arrows as t increases. For instance, starting at the initial
conditions x1(t0) and x2(t0) corresponding to the point numbered 1, the system moves along the
outer trajectory to the point 6. Similarly point 2 moves to point 5. However, starting at either
point 3 or point 4, the system goes to point 7, where the system motion in the next instant is not
determined. At point 7 the output y(t) does not exist for future times, so that this is not a
dynamical system.

1.5. Given the electronic device diagrammed in Fig. 1-6, with a voltage input u(t) and a
voltage output y(t). The resistors R have constant values. For t0 ≤ t < t1, the
switch S is open; and for t ≥ t1, S is closed. Is this system linear?

[Fig. 1-6: Voltage divider network of resistors R with a switch S; input u(t), output y(t).]

Referring to Definition 1.7, it becomes apparent the first thing to do is find the state. No other
information is necessary to determine the output given the input, so there is no state, i.e. the
dimension of the state space is zero. This problem is somewhat analogous to Example 1.1, except
that the position of the switch is specified at each instant of time.

To see if the system is linear, since x(t) = 0 for all time, we only need assume two inputs u1(t)
and u2(t). Then y1(t) = u1(t)/2 for t0 ≤ t < t1 and y1(t) = u1(t)/3 for t ≥ t1. Similarly y2(t) =
u2(t)/2 or u2(t)/3. Now assume an input αu1(t) + βu2(t). The output is [αu1(t) + βu2(t)]/2 for
t0 ≤ t < t1 and [αu1(t) + βu2(t)]/3 for t ≥ t1. Substituting y1(t) and y2(t), the output is αy1(t) + βy2(t),
showing that superposition does hold and that the system is linear. The switch S can be consid-
ered a time-varying resistor whose resistance is infinite for t < t1 and zero for t ≥ t1. Therefore
Fig. 1-6 depicts a time-varying linear device.

1.6. Given the electronic device of Problem 1.5 (Fig. 1-6), with a voltage input u(t) and
a voltage output y(t). The resistors R have constant values. However, now the posi-
tion of the switch S depends on y(t). Whenever y(t) is positive, the switch S is open;
and whenever y(t) is negative, the switch S is closed. Is this system linear?

Again there is no state, and only superposition for zero state but nonzero input need be
investigated. The input-output relationship is now

    y(t) = [5u(t) + u(t) sgn u(t)]/12

where sgn u = +1 if u is positive and −1 if u is negative. Given two inputs u1 and u2 with
resultant outputs y1 and y2 respectively, an output y with an input u3 = αu1 + βu2 is expressible as

    y = [5(αu1 + βu2) + (αu1 + βu2) sgn (αu1 + βu2)]/12

To be linear, αy1 + βy2 must equal y, which would be true only if

    αu1 sgn u1 + βu2 sgn u2 = (αu1 + βu2) sgn (αu1 + βu2)

This equality holds only in special cases, such as sgn u1 = sgn u2 = sgn (αu1 + βu2), so that the
system is not linear.

1.7. Given the abstract object characterized by

    y(t) = x0 e^{t0-t} + ∫_{t0}^{t} e^{τ-t} u(τ) dτ

Is this time-varying?

This abstract object is that of Example 1.2, with RC = 1 in equation (1.1). By the same
procedure used in Example 1.17, it can be shown time-invariant. However, it can also be shown
time-invariant by the test given after Definition 1.8. The input time function u(t) is shifted by T,
to become ũ(t). Then as can be seen in Fig. 1-7,

    ũ(t) = u(t − T)

Starting from the same initial state at time t0 + T,

    ỹ(σ) = x0 e^{t0+T-σ} + ∫_{t0+T}^{σ} e^{τ-σ} u(τ − T) dτ

Let ξ = τ − T:

    ỹ(σ) = x0 e^{t0+T-σ} + ∫_{t0}^{σ-T} e^{ξ+T-σ} u(ξ) dξ

Evaluating ỹ at σ = t + T gives

    ỹ(t + T) = x0 e^{t0-t} + ∫_{t0}^{t} e^{ξ-t} u(ξ) dξ

which is identical with the output y(t).

[Fig. 1-7: The original input u and output y starting at t0, and the shifted input ũ and output ỹ starting at t0 + T.]

Supplementary Problems
1.8. Given the spring-mass system shown in Fig. 1-8. What is the physical object, the
abstract object, and the state variable?

[Fig. 1-8: Mass M suspended from a spring of constant k.]

1.9. Given the hereditary system y(t) = ∫_{-∞}^{t} K(t, τ) u(τ) dτ where K(t, τ) is some
single-valued continuously differentiable function of t and τ. What is the state
variable? Is the system linear? Is the system time-varying?

1.10. Given the discrete time system x(n+1) = x(n) + u(n), the series of inputs u(0), u(1), ..., u(k),
and the state at step 3, x(3). Find the state variable x(m) at any step m ≥ 0.

1.11. An abstract object is characterized by y(t) = u(t) for t0 ≤ t < t1, and by dy/dt = du/dt for t ≥ t1.
It is given that this abstract object will permit discontinuities in y(t) at t1. What is the dimension
of the state space for t0 ≤ t < t1 and for t ≥ t1?

1.12. Verify the solution (1.2) of equation (1.1), and then verify the solution (1.4) of equation (1.3).
Finally, verify the solution x1 + 2x2 = C(2x1 + x2)^4 of Example 1.9.

1.13. Draw the trajectories in two dimensional state space of the system d^2y/dt^2 + y = 0.

1.14. Given the circuit diagram of Fig. 1-9(a), where the nonlinear device NL has the voltage-current
relation shown in Fig. 1-9(b). A mathematical equation is formed using i = C dv/dt and v = f(i):

    dv/dt = (1/C) f^{-1}(v)

where f^{-1} is the inverse function. Also, v(t0) is taken to be the initial voltage on the capacitor.
Is this mathematical relation a dynamical system with a state space description?

[Fig. 1-9: (a) Capacitor C in parallel with the nonlinear device NL, voltage v(t); (b) the multivalued voltage-current characteristic of NL.]

1.15. Is the mathematical equation y^2 + 1 = u a dynamical system?

1.16. Is the system dy/dt = t^2 y time-varying? Is it linear?

1.17. Is the system dy/dt = 1/y time-varying? Is it linear?

1.18. Verify that the system of Example 1.16 is nonlinear.

1.19. Show equation (1.10) is linear.

1.20. Show equation (1.10) is time-invariant if the coefficients αi and βj for i = 1, 2, ..., n and
j = 0, 1, ..., n, are not functions of time.

1.21. Given dx1/dt = x1 + 2, dx2/dt = x2 + u, y = x1 + x2.

(a) Does this system have a state space description?
(b) What is the input-output relation?
(c) Is the system linear?

1.22. What is the state space description for the anticipatory system y(t) = u(t + T) in which only con-
dition 5 for dynamical systems is violated?

1.23. Is the system dx/dt = e^t u, y = e^{-t} x time-varying?

1.24. What is the state space description for the differentiator y = du/dt?

1.25. Is the equation y = f(t)u a dynamical system given values of f(t) for t0 ≤ t ≤ t1 only?

Answers to Supplementary Problems


1.8. The physical object is the spring-mass system, the abstract object is all x obeying M d^2x/dt^2 + kx = 0,
and the state variable is the vector having elements x(t) and dx/dt. This system has a zero input.

1.9. It is not possible to represent

    y(t) = y(t0) + ∫_{t0}^{t} K(t, τ) u(τ) dτ    unless    K(t, τ) = K(t0, τ)

(This is true for t0 ≥ T for the delay line.) For a general K(t, τ), the state at time t must be
taken as u(τ) for −∞ < τ < t. The system is linear and time-varying, unless K(t, τ) = K(t − τ),
in which case it is time-invariant.

1.10. x(k) = x(3) + Σ_{i=3}^{k-1} u(i) for k = 4, 5, ..., and x(k) = x(3) − Σ_{i=1}^{3-k} u(3 − i) for
k = 0, 1, 2, 3. Note we need not "tie" ourselves to an "initial" condition, because any one of the
values x(i) will be the state for i = 0, 1, ..., n.

1.11. The dimension of the state space is zero for t0 ≤ t < t1, and one for t ≥ t1. Because
the state space is time-dependent in general, it must be a family of sets for each time t. Usually
it is possible to consider a single set of input-output pairs over all t, i.e. the state space is time-
invariant. Abstract objects possessing this property are called uniform abstract objects. This
problem illustrates a nonuniform abstract object.

1.12. Plugging the solutions into the equations will verify them.

1.13. The trajectories are circles. It is a dynamical system.

1.14. It is a dynamical system, because v(t) is real and defined for all t ≥ t0. However, care must be
taken in giving a state space description, because f^{-1}(v) is not single-valued. The state space
description must include a means of determining which of the lines 1-2, 2-3 or 3-4 a particular
voltage corresponds to.

1.15. No, because the input u < 1 results in an imaginary output.

1.16. It is linear and time-varying.

1.17. It is nonlinear and time-invariant.

1.21. (a) Yes

(b) y(t) = e^{t-t0} [x1(t0) + x2(t0)] + ∫_{t0}^{t} e^{t-τ} [u(τ) + 2] dτ

(c) No

1.22. No additional knowledge is needed other than the input, so the state is zero dimensional and the
state space description is y(t) = u(t + T). It is not a dynamical system because it is not realizable
physically if u(t) is unknown in advance for all t ≥ t0. However, its state space description
violates only condition 5, so that other equations besides dynamical systems can have a state
space description if the requirement of causality is waived.

1.23. Yes. y(t) = e^{-t} x0 + ∫_{t0}^{t} e^{τ-t} u(τ) dτ, and the contribution of the initial condition x0 depends
on when the system is started. If x0 = 0, the system is equivalent to dx/dt = −x + u with
y = x, which is time-invariant.

1.24. If we define

    du/dt = lim_{ε→0} [u(t + ε) − u(t)]/ε

so that y(t0) is defined, then the state space is zero dimensional and knowledge of u(t) determines
y(t) for all t ≥ t0. Other definitions of du/dt may require knowledge of u(t) for t0 − ε ≤ t < t0,
which would be the state in that case.

1.25. Obviously y(t) is not defined for t > t1, so that as stated the equation is not a dynamical system.
However, if the behavior of engineering interest lies between t0 and t1, merely append y = 0u for
t ≥ t1 to the equation and a dynamical system results.
Chapter 2

Methods for Obtaining the State Equations

2.1 FLOW DIAGRAMS
Flow diagrams are a simple diagrammatical means of obtaining the state equations.
Because only linear differential or difference equations are considered here, only four basic
objects are needed. The utility of flow diagrams results from the fact that no differenti-
ating devices are permitted.

Definition 2.1: A summer is a diagrammatical abstract object having n inputs u1(t), u2(t),
..., un(t) and one output y(t) that obey the relationship

    y(t) = ±u1(t) ± u2(t) ± ... ± un(t)

where the sign is positive or negative as indicated in Fig. 2-1, for example.

[Fig. 2-1: Summer]

Definition 2.2: A scalor is a diagrammatical abstract object having one input u(t) and one
output y(t) such that the input is scaled up or down by the time function α(t)
as indicated in Fig. 2-2. The output obeys the relationship y(t) = α(t) u(t).

[Fig. 2-2: Scalor]

Definition 2.3: An integrator is a diagrammatical abstract object having one input u(t),
one output y(t), and perhaps an initial condition y(t0) which may be shown
or not, as in Fig. 2-3. The output obeys the relationship

    y(t) = y(t0) + ∫_{t0}^{t} u(τ) dτ

[Fig. 2-3: Integrator at Time t]

Definition 2.4: A delayor is a diagrammatical abstract object having one input u(k), one
output y(k), and perhaps an initial condition y(l) which may be shown or
not, as in Fig. 2-4. The output obeys the relationship

    y(l + i + 1) = u(l + i)    for i = 0, 1, 2, ...

[Fig. 2-4: Delayor at Time k]

2.2 PROPERTIES OF FLOW DIAGRAMS


Any set of time-varying or time-invariant linear differential or difference equations
of the form (1.10) or (1.11) can be represented diagrammatically by an interconnection of
the foregoing elements. Also, any given transfer function can also be represented merely
by rewriting it in terms of (1.10) or (1.11). Furthermore, multiple input and/or multiple
output systems can be represented in an analogous manner.
Equivalent interconnections can be made to represent the same system.
Example 2.1.
Given the system

    dy/dt = αy + αu        (2.1)

with initial condition y(t0). An interconnection for this system is shown in Fig. 2-5.

[Fig. 2-5: u(t) and the fed-back y(t) are summed, scaled by α, and integrated (initial condition y(t0)) to produce y(t).]

Since α is a constant function of time, the integrator and scalor can be interchanged if the initial condi-
tion is adjusted accordingly, as shown in Fig. 2-6.

[Fig. 2-6: Equivalent diagram with the scalor α moved to follow the integrator, with the initial condition adjusted accordingly.]

This interchange could not be done if α were a general function of time. In certain special cases it
is possible to use integration by parts to accomplish this interchange. If α(t) = t, then (2.1) can be inte-
grated to

    y(t) = y(t0) + ∫_{t0}^{t} τ [y(τ) + u(τ)] dτ

Using integration by parts,

    y(t) = y(t0) − ∫_{t0}^{t} ∫_{t0}^{τ} [y(ξ) + u(ξ)] dξ dτ + t ∫_{t0}^{t} [y(τ) + u(τ)] dτ

which gives the alternate flow diagram shown in Fig. 2-7.

[Fig. 2-7: Alternate flow diagram realizing the integrated-by-parts form with two integrators, a scalor t, and a summer.]

Integrators are used in continuous time systems, delayors in discrete time (sampled
data) systems. Discrete time diagrams can be drawn by considering the analogous con-
tinuous time system, and vice versa. For time-invariant systems, the diagrams are almost
identical, but the situation is not so easy for time-varying systems.
Example 2.2.
Given the discrete time system

    y(k + l + 1) = αy(k + l) + αu(k + l)        (2.2)

with initial condition y(l). The analogous continuous time system is equation (2.1), where d/dt takes the
place of a unit advance in time. This is more evident by taking the Laplace transform of (2.1),

    sY(s) − y(t0) = αY(s) + αU(s)

and the z transform of (2.2),

    zY(z) − zy(l) = αY(z) + αU(z)

Hence from Fig. 2-5 the diagram for (2.2) can be drawn immediately as in Fig. 2-8.

[Fig. 2-8: Same structure as Fig. 2-5 with the integrator replaced by a delayor; input u(k + l), output y(k + l).]

If the initial condition of the integrator or delayor is arbitrary, the output of that
integrator or delayor can be taken to be a state variable.

Example 2.3.
The state variable for (2.1) is y(t), the output of the integrator. To verify this, the solution to equa-
tion (2.1) is

    y(t) = y(t0) e^{α(t-t0)} + α ∫_{t0}^{t} e^{α(t-τ)} u(τ) dτ

Note y(t0) is the state at t0, so the state variable is y(t).

Example 2.4.
The state variable for equation (2.2) is y(k + l), the output of the delayor. This can be verified in a
manner similar to the previous example.

Example 2.5.
From Fig. 2-7, the state is the output of the second integrator only, because the initial condition of the
first integrator is specified to be zero. This is true because Fig. 2-7 and Fig. 2-5 are equivalent systems.

Example 2.6.
A summer or a scalor has no state associated with it, because the output is completely determined by
the input.

2.3 CANONICAL FLOW DIAGRAMS FOR TIME-INVARIANT SYSTEMS

Consider a general time-invariant linear differential equation with one input and one
output, with the letter p denoting the time derivative d/dt. Only the differential equations
need be considered, because by Section 2.2 discrete time systems follow analogously.

    p^n y + α1 p^{n-1} y + ... + α_{n-1} p y + αn y = β0 p^n u + β1 p^{n-1} u + ... + β_{n-1} p u + βn u        (2.3)

This can be rewritten as

    p^n (y − β0 u) + p^{n-1} (α1 y − β1 u) + ... + p(α_{n-1} y − β_{n-1} u) + αn y − βn u = 0

because αi p^{n-i} y = p^{n-i} αi y, which is not true if αi depends on time. Dividing through by p^n
and rearranging gives

    y = β0 u + (1/p)(β1 u − α1 y) + ... + (1/p^{n-1})(β_{n-1} u − α_{n-1} y) + (1/p^n)(βn u − αn y)        (2.4)

from which the flow diagram shown in Fig. 2-9 can be drawn starting with the output y
at the right and working to the left.

[Fig. 2-9: Flow Diagram of the First Canonical Form]

The output of each integrator is labeled as a state variable. The summer equations for
the state variables have the form

    y = x1 + β0 u
    dx1/dt = −α1 y + x2 + β1 u
    dx2/dt = −α2 y + x3 + β2 u        (2.5)
    . . . . . . . . . . . . . . . .
    dx_{n-1}/dt = −α_{n-1} y + xn + β_{n-1} u
    dxn/dt = −αn y + βn u

Using the first equation in (2.5) to eliminate y, the differential equations for the state vari-
ables can be written in the canonical matrix form

         [x1]       [−α1       1  0  ...  0] [x1]       [β1 − α1 β0]
         [x2]       [−α2       0  1  ...  0] [x2]       [β2 − α2 β0]
    d/dt [.. ]    = [......................] [.. ]    + [..........]  u        (2.6)
         [x_{n-1}]  [−α_{n-1}  0  0  ...  1] [x_{n-1}]  [β_{n-1} − α_{n-1} β0]
         [xn]       [−αn       0  0  ...  0] [xn]       [βn − αn β0]

We will call this the first canonical form. Note the 1s above the diagonal and the α's down
the first column of the n × n matrix. Also, the output can be written in terms of the state
vector

    y = (1 0 ... 0 0) x + β0 u        (2.7)

Note this form can be written down directly from the original equation (2.3).
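Since the matrices of (2.6) and (2.7) follow mechanically from the coefficients of (2.3), their construction is easy to automate. The sketch below (the function name and the coefficient values are illustrative only) assembles them for a second order system.

```python
import numpy as np

# Assemble the first canonical form (2.6)-(2.7) from the coefficients of (2.3).
# alpha = [a1, ..., an], beta = [b0, b1, ..., bn]: illustrative numbers below.
def first_canonical(alpha, beta):
    n = len(alpha)
    A = np.zeros((n, n))
    A[:, 0] = -np.asarray(alpha)          # alphas down the first column
    A[:n - 1, 1:] = np.eye(n - 1)         # ones above the diagonal
    b = np.asarray(beta[1:]) - np.asarray(alpha) * beta[0]   # beta_i - alpha_i*beta_0
    c = np.zeros(n); c[0] = 1.0           # y = x1 + beta_0 * u
    return A, b, c, beta[0]

# Illustrative second order system: y'' + 2y' + y = u' + u.
A, b, c, d = first_canonical([2.0, 1.0], [0.0, 1.0, 1.0])
print(A)   # [[-2, 1], [-1, 0]]
print(b)   # [1, 1]
```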
Another useful form can be obtained by turning the first canonical flow diagram "back-
wards." This change is accomplished by reversing all arrows and integrators, interchang-
ing summers and connection points, and interchanging input and output. This is a heuristic
method of deriving a specific form that will be developed further in Chapter 7.

[Fig. 2-10: Flow Diagram of the Second Canonical (Phase-variable) Form]

Here the output of each integrator has been relabeled. The equations for the state
variables are now

    dx1/dt = x2
    dx2/dt = x3
    . . . . . . .        (2.8)
    dx_{n-1}/dt = xn
    dxn/dt = −α1 xn − α2 x_{n-1} − ... − α_{n-1} x2 − αn x1 + u

    y = βn x1 + β_{n-1} x2 + ... + β1 xn + β0 [u − α1 xn − ... − α_{n-1} x2 − αn x1]

In matrix form, (2.8) may be written as

         [x1]       [ 0    1         0  ...  0 ] [x1]       [0]
         [x2]       [ 0    0         1  ...  0 ] [x2]       [0]
    d/dt [.. ]    = [ ........................ ] [.. ]    + [.]  u        (2.9)
         [x_{n-1}]  [ 0    0         0  ...  1 ] [x_{n-1}]  [0]
         [xn]       [−αn  −α_{n-1}   ...   −α1 ] [xn]       [1]

and

    y = (βn − αn β0   β_{n-1} − α_{n-1} β0   ...   β1 − α1 β0) x + β0 u        (2.10)

This will be called the second canonical form, or phase-variable canonical form. Here the
1s are above the diagonal but the α's go across the bottom row of the n × n matrix. By
eliminating the state variables x from (2.8), the general input-output relation (2.3) can be verified.
The phase-variable canonical form can also be written down upon inspection of the original
differential equation (2.3).
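The phase-variable form can be assembled the same way, and the construction can be spot-checked by comparing C(pI − A)^{-1}B + D with the ratio of the two polynomials of (2.3) at an arbitrary test value of p. The sketch below (function name, coefficients, and test point are all illustrative choices) does this for a second order system.

```python
import numpy as np

# Assemble the phase-variable form (2.9)-(2.10) from the coefficients of (2.3),
# then spot-check the transfer function at one arbitrary test point.
def phase_variable(alpha, beta):
    n = len(alpha)
    A = np.zeros((n, n))
    A[:n - 1, 1:] = np.eye(n - 1)              # ones above the diagonal
    A[n - 1, :] = -np.asarray(alpha)[::-1]     # -an ... -a1 across the bottom row
    B = np.zeros(n); B[-1] = 1.0
    C = (np.asarray(beta[1:]) - np.asarray(alpha) * beta[0])[::-1]
    return A, B, C, beta[0]

alpha, beta = [2.0, 1.0], [0.0, 1.0, 1.0]      # illustrative: y'' + 2y' + y = u' + u
A, B, C, D = phase_variable(alpha, beta)

s = 1.0 + 1.0j                                  # arbitrary test point
tf_ss = C @ np.linalg.solve(s * np.eye(len(alpha)) - A, B) + D
tf_poly = np.polyval(beta, s) / np.polyval([1.0] + alpha, s)
print(abs(tf_ss - tf_poly))                     # ~0
```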

2.4 JORDAN FLOW DIAGRAM

The general time-invariant linear differential equation (2.3) for one input and one out-
put can be written as

    y = [β0 p^n + β1 p^{n-1} + ... + βn] / [p^n + α1 p^{n-1} + ... + αn] u        (2.11)

By dividing once by the denominator, this becomes

    y = β0 u + [(β1 − α1 β0) p^{n-1} + ... + (βn − αn β0)] / [p^n + α1 p^{n-1} + ... + αn] u        (2.12)

Consider first the case where the denominator polynomial factors into distinct poles λi,
i = 1, 2, ..., n. Distinct means λi ≠ λj for i ≠ j, that is, no repeated roots. Because most
practical systems are stable, the λi usually have negative real parts.

    p^n + α1 p^{n-1} + ... + α_{n-1} p + αn = (p − λ1)(p − λ2) ··· (p − λn)        (2.13)

A partial fraction expansion can now be made having the form

    y = β0 u + ρ1/(p − λ1) u + ρ2/(p − λ2) u + ... + ρn/(p − λn) u        (2.14)

Here the residue ρi can be calculated as

    ρi = [(β1 − α1 β0) λi^{n-1} + (β2 − α2 β0) λi^{n-2} + ... + (β_{n-1} − α_{n-1} β0) λi + (βn − αn β0)]
         / [(λi − λ1)(λi − λ2) ··· (λi − λ_{i-1})(λi − λ_{i+1}) ··· (λi − λn)]        (2.15)

The partial fraction expansion (2.14) gives a very simple flow diagram, shown in Fig. 2-11.

[Fig. 2-11: Jordan Flow Diagram for Distinct Roots]

Note that because p. and A. can be complex numbers, the statesx.t are complex;'valued
~ ~

functions of time. The state equations assume the simple form

(2.16)

Xn = AnXn-+U

Consider now the general case. For simplicity, only one multiple root (actually one
Jordan block, see Section 4.4, page 73) will be considered, because the results are easily
extended to the general case. Then the denominator in (2.12) factors to
(2.17)
instead of (2.13). Here there are v identical roots. Performing the partial fraction expan-
sion for this case gives
PIU P2U Pvu PiJ+I U Pnu
y = f3 oU + (P-Al)V + (P-Al)V-l + ... + P-AI
+ P-Av+I
+ ... + P-An'
(218)

The residues at the multiple roots can be evaluated as

= k = 1,2, ... , v (2.19)

where f(p) is the polynomial fraction in P from (2.12). This gives the flow diagram shown
in Fig. 2-12 following.
CHAP. 2] METHODS FOR OBTAINING THE STATE EQUATIONS 23

u(t}------"""'"--------~------------..,...f y(t)

Fig. 2~12. Jordan Flow Diagram with One Multiple Root

The state equations are then

XV-1 A1X v - l + Xv
Xv = ).IX v+ U (2.20)
XV+1 Av+lXv +l +U

Xn AnXn+U
y = (3ou + PIX! + P2X2 + ... + PnX n
24 METHODS FOR OBTAINING THE 'STATE EQUATIONS (CHAp. 2

The matrix differential equations associated with this Jordan· form are
Xl Al 1 0 0 0 0 0 Xl 0
X2 0 Al 1 0 0 0 0 X2 0
..... W'.,. .... II .. lit ........... W' ..... 0#'"" ..... " ..

d X v -1 0 0 0 Al 1 0 0 XI/-1 0
dt Xv
= 0 0 0 0 Al 0 0 Xv + 1
u (2.21)
Xv+l 0 0 0 0 o AII+1 0 XII + 1 1
ill ••••• ill ............ " ill .....................

Xn 0 0 0 0 0 0 An Xn 1

and y (PI P2 ...


P )
n
(j:) + f3 0u (2.22)

In the n X n matrix, there is a diagonal row of ones above each Al on the diagonal, and then
the other A's follow on the diagonal.
Example 2.7.
Derive the Jordan form of the differential system

jj + 2y + y == u+ u (2.23)

Equation (2.23) can be written as y :::: (~: 1~)2 u whose partial fraction expansion gives

o 1
y (p + 1)2 U + P + 1u
Figure 2-13 is then the flow diagram.

yet)

u(t)--+40-/
+

Fig. 2-13

Because the scalor following Xl is zero, this state is unobservable. The matrix state equations in
Jordan form are
METHODS FOR OBTAINING THE STATE EQUATIONS 25

- 2.5 TIME-VARYING SYSTEMS


Finding the state equations of a time-varying system is not as easy as for time-invariant
systems. However, the procedure is somewhat analogous, and so only one method will be
given here.
The general time-varying differential equation of order n with one input and one output
is shown again for convenience.

= (1.10)

Differentiability of the coefficients a suitable number of times is assumed. Proceeding in


a manner somewhat similar to the second canonical form, we shall try defining states as in
(2.8). However, an amount Ylt) [to be determined] of the input u(t) enters into all the states.
Xl = x 2 + Yl(t)U
x2 X3 + Y2(t)U
(2.24)
Xn- l xn + Yn-l(t)u
xn -al(t)X n- l¥2(t)X n_ 1 - ••• - an(t)xl + Yn(t)u
Y Xl + Yo(t)u
By differentiating y n times and using the relations for each state, each of the unknown Yi
can be found. In the general case,
=
= (2.25)

Example 2.8.
Consider the second order equation
d 2y dy d 2u du
dt 2 + Il'l(t) dt + 1l'2(t)y = f3o(t) dt2 + f31(t) dt + f32(t)U (2.26)

Then by (2.24)
Y == Xl + yo(t)u (2.27)

and differentiating,
(2.28)

Substituting the first relation of (2.24) into (2.28) gives


y :;: x2 + [Yl(t) + yo(t)]u + Yo(t)u (2.29)

Differentiating again,
(2.90)

From (2.24) we have


X2 = -al(t)X2 - 1l'2(t)Xl + Y2(t)U (2.31)

Now substituting (2.27), (2.29) and (2.30) into (2.31) yields


y- [Yl(t) + yo(t)]u - [Yl(t) + 2Yo(t)]u - Yo(t) u
= -ll'l(t){y - [Yl(t) + Yo(t)]u - Yo(t)u} - t'X2(t)[y - Yo(t)u] + Y2(t)u (2.32)
26 METHODS FOR OBTAINING THE STATE EQUATIONS [CHAP. 2

Equating coefficients in (2.26) and (2.32),


Y2 + Yl + Yo +(Y1 + Yo)a1 + Yoa2 = 132 (2.33)
Yl + 210 + alYO = 131 (2.34)
Yo = 130 (2.35)
Substituting (2.35) into (2.34),
(2.36)
and putting (2.35) and (2.36) into (2.33),

Y2 = Po - PI + /32 + (alf3o + 2Po - f3l)a1 + f3oa1 - f30az (2.37)

Using equation (2.24), the matrix state equations become


J\ 0 1 0 0 Xi
X2 0 0 1 0 Xz
d
dt
••••• II 41 ....... ill,. II ill .. 'II." ill ....... ,. ...... ,. ..

+ u (2.38)
0 0 0 1 Yn-l(t)
xn -an(t) -an_ 1 (t) -an_2 (t) -a 1(t) xn Yn(t)

y (1 0 ... 0)

2.6 GENERAL STATE EQUATIONS


Multiple input-multiple output systems can be put in the same canonical forms as single
input-single output systems. Due to complexity of notation, they will not be considered here.
The input becomes a vector u(t) and the output a vector y(t). The components are the inputs
and outputs, respectively. Inspection of matrix equations (2.6), (2.9), (2.21) and (2.88)
indicates a similarity of form. Accordingly a general form for the state equations of a
linear differential system of order n with m inputs and k outputs is
dx/dt = A(t)x + B(t)u
y = C(t)x + D(t)u (2.39)
where x(t) is an n-vector,
u(t) is an m-vector,
y(t) is a k-vector,
A(t) is an n X n matrix,
B(t) is an n X m matrix,
C(t) is a k x n matrix,
D(t) is a k x m matrix.
In a similar manner a general form for discrete time systems is
x(n + 1) = A(n) x(n) + B(n) u(n)
y(n) = C(n) x(n) + D(n) u(n) (2.40)
where the dimensions are also similar to the continuous time case.
CHAP.2J METHODS FOR OBTAINING THE STATE EQUATIONS 27

Specifically, if the system has only one input u and one output y, the differential equa-
tions for the system are
dx/dt = A(t)x + b(t)u
Y = ct(t)x + d(t)u
and similarly for discrete time systems. Here c(t) is taken to be a column vector, and
ct(t) denotes the complex conjugate transpose of the column vector. Hence ct(t) is a row
vector, and ct(t)x is a scalar. Also, since u, y and d(t) are not boldface, they are scalars.
Since these state equations are matrix equations, to analyze their properties a knowledge
of matrix analysis is needed before progressing further.

Solved Problems
2.1. Find the matrix state equations in the first canonical form for the linear time-
invariant differential equation
y + 5y + 6y = it + u (2.41)
with initial conditions y(O) = Yo, y(O) = yo. Also find the initial conditions on the
state variables.
Using p = dldt, equation (2 .."-1) can be written as p2y + 5py + 6y = pu + u, Dividing by p2
and rearranging,
1 1
Y = -(u-5y) + ~(u-6y)
p p2_

The flow diagram of Fig. 2~14 can be drawn starting from the output at the right.

u----------~--~------~~--~

.>--t---.... Y

Fig. 2..14

Next, the outputs of the integrators are labeled the state variables Xl and x2 as shown. Now
an equation can be formed using the summer on the left:
X2 = -6y +u
Similarly, an equation can be formed using the summer on the right:
Xl = X2 - 5y +u
Also, the output equation is y = Xl' Substitution of this back into the previous equations gives
Xl -5X1 + X2 + u
(2.42)
28 METHODS FOR OBTAINING THE STATE EQUATIONS [CHAP. 2

The state equations can then be written in matrix notation as

with the output equation


y

The initial conditions on the state variables must be related to Yo and Yo, the given output initial
conditions. The output equation is xl(t) = yet), so that Xl(O) = yeO) = Yo. Also; substituting
yet) = Xl (t) into (2.42) and setting t == 0 gives

yeO) = -5y(D) + X2(O) + u(O)


Use of the given initial conditions determines
X2(O) = Yo + 5yo - u(O)

These relationships for the initial conditions can also be obtained by referring to the flow diagram
at time t = O.

2.2. Find the matrix state equations in the second canonical form for the equation (2.41)
of Problem 2.1, and the initial conditions on the state variables.
The flow diagram (Fig. 2-14) of the previous problem is turned "backwards" to get the flow
diagram of Fig. 2-15.

y......----I

.....+----u

Fig. 2-15

The outputs of the integrators are labeled Xl and x2 as shown. These state variables are dif-
ferent from those in Problem 2.1, but are also denoted Xl and Xz to keep the state vector x(t) nota-
tion, as is conventional. Then looking at the summers gives the equations
y = Xl + X2 (2.43)

X2 = -6Xl - 5x2 +u (2.44)


Furthermore, the input to the left integrator is

This gives the state equations


d
dt
(Xl)
X2 = (-~ -~)(::) + G) u

and y (1 1)(:~)
The initial conditions are found using (2.43),

and its derivative


/"""-'

CHAP. 2] METHODS FOR OBTAINING THE STATE EQUATIONS 29

Use of (2 ..M) and (2.45) then gives


Yo = X2(0) - 6x 1 (0) - 5x z(0) + u(O) (2.47)
Equations (2.46) and (2.47) can be solved for the initial conditions

Xl(O) -2yo - lyo + lu(O)


X2(O) SYo + iYo - lu(O)

2.3. Find the matrix state equations in Jordan canonical form for equation (2.41) of Prob-
lem 2.1, and the initial conditions on the state variables.
The transfer function is
y

A partial fraction expansion gives


-1 2
Y = p+2 u + p+8 u

From this the flow diagram can be drawn:

+
+
u---.. J----~y

Fig. 2-16

The state equations can then be written from the equalities at each summer:

(-~ _~)(::) + (~) u


Y = (-1 2) ( :: )

From the output equation and its derivative at time to,

Yo = 2xz(O) - Xl (0)
Yo = 2X2(0) - 0: 1(0)
The state equation is used to eliminate Xl(O) and X2(0):

Solving these equations for Xl(O) and x2(O) gives


Xl(O) u(O) - 3yo - Yo
xz(O) l[u(O) - 2yo - Yo]
30 METHODS FOR OBTAINING THE STATE EQUATIONS [CHAP. 2

2.4. Given the state equations

! (~:) = (-~ -!)(~:) + G) u

y (1 1)(:~)
Find the differential equation relating the input to the output.
In operator notation, the state equations are
PX l X2

PX2 - 6X l - 5x2 +U

Eliminating Xl and X2 then gives


p2y + 5py + 6y = pu, +u
This is equation (2.41) of Example 2.1 and the given state equations were derived from this equa,..
tion in Example 2.2 .. Therefore this is a way to check the results.

2.5. Given the feedback system of Fig. 2-17 find a state space representation of this
closed loop system.

K
R(s)---+.-t G(s) = - - 1-------4~-_ C(s)
8+1

1
H(s) = 8 +3

Fig. 2-17

The transfer function diagram is almost in flow diagram form already. U sing the Jordan
canonical form for the plant G(s) and the feedback H(s) separately gives the flow diagram of
Fig. 2-18.
r----- -----------1
I I
r ( t ) - -........ 1 - + - - 9 - - - _ c(t)
+ I
I
I
IL __________________ ! ~

Fig. 2-18

Note G(s) in Jordan form is enclosed by the dashed lines. Similarly the part for H(s) was drawn,
and then the transfer function diagram is used to connect the parts. From the flow diagram,

d
dt
(Xl)X2 =
(-1 -1) (Xl) + (1)
K -3 X2 0 ret), c(t) = (K O)(~~)
CHAP. 2] METHODS FOR OBTAINING THE STATE EQUATIONS 31

, 2.6. Given the linear, time-invariant, multiple input-multiple output, discrete-time system
y/n + 2) + (l:lyl(n + 1) + a2Yl(n) + Y2YZ(n + 1) + YSY2(n) == (31u1(n) + 8Iu 2(n)
y 2(n + 1) + Y Y2(n) + (Xsyl(n + 1) + YI(n) ==
1 (X4 (32u/n) + 82u2(n)
Put this in the form
x(n+ 1) Ax(n) + Bu(n)
y(n) Cx(n) + Du(n)

where y(n) = (~;~:?), u(n) = (=;~:?).


The first canonical form will be used. Putting the given input-output equations into z opera-
tions (analogous to p operations of continuous time system),
Z2Y1 + alZYI + (X2Yl + Y2 z Y2 + YSY2 == [hul + 0lu2
zY2 + YIY2 + (lSZYI + ()!4VI == f32 u I + 02U 2
Dividing by Z2 and z respectively and solving for YI and Y2'

Starting from the right, the flow diagram can be drawn as shown in Fig. 2-19.

Fig. 2-19

Any more than three delayors with arbitrary initial conditions are not needed because a
fourth such delayor would result in an unobservable or uncontrollable state. From this diagram
the state equations are found to be
32 METHODS FOR OBTAINING THE STATE EQUATIONS [CHAP. 2

2.7. Write the matrix state equations for a general time-varying second order discrete time
equation, i.e. find matrices A(n), B(n), C(n), D(n) such that
x(n + 1) = A(n) x(n) + B(n) u(n)
y(n) = C(n) x(n) + D(n) u(n) (2.48)
given the discrete time equation
y(n + 2) + a 1(n) y(n + 1) + a 2 (n) y(n) f3 o(n) u(n + 2) + f3 1(n) u(n + 1) + {32(n) u(n) (2.49)
Analogously with the continuous time equations. try
xl(n + 1) = x2(n) + "h(n) u(n) (2.50)

x2(n + 1) = -al(n) x2(n) - a2(n) Xl (n) + Y2(n) u(n) (2.51)


yen) = xl(n) + Yo(n) u(n) (2.52)

Stepping (2.52) up one and substituting (2.50) gives


yen + 1) = X2(n) + Yl(n) u(n) + yo(n + 1) u(n + 1) (2.53)

Stepping (2.53) up one and substituting (2.51) yields


yen + 2) = -al(n) X2(n) - a2(n) XI(n) + Y2(n) u(n) + YI(n + 1) u(n + 1) + Yo(n + 2) u(n + 2) (2.54)

Substituting (2.52). (2.53). (2.54) into (2.49) and equating coefficients of u(n), u(n + 1) and u(n + 2)
gives
Yo(n) J3o(n - 2)

YI(n) J31(n-1) - al(n)J3o(n-2)

Y2(n) J32(n) - al(n)J31(n -1) + [al(n)al(n -1) - a2(n)]J30(n - 2)

In matrix form this is

y(n) (1 0) (::~:D + yo(n) u(n)

2.8. Given the time-varying second order continuous time equation with zero input,
y + O!I(t)y + a 2(t)y = 0 (2.55)

write this in a form corresponding to the first canonical· form

(~:) - (=:;m ~)(~:) (2.56)

.Y = (y,(t) y.(t» (~:) (2.57)


2] METHODS FOR OBTAINING THE STATE EQUATIONS 33

To do this, differentiate (2.57) and substitute for Xl and Xz from (2.56),


if = (rl - Yial - Yz a 2)Xl + (Yl + yz)xz (2.58)
Differentiating again and substituting for Xl and X2 as before,
y = C"il - 2a{Yt - 0:111 - tl2Y2 - 2azY2 - Cl!2Yl + a1C1!zYz + a1Y1)x1 + (2ft - a111 - Cl!zYz + YZ)X2 (2.59)

Substituting (2.57), (2.58) and (2.59) into (2.55) and equating coefficients of Xl and Xz gives the
equations
Yl - alYl + (0:2 - tl1hl = 0
Y2 + (a1 - 2C1!z)yz + (CI!~ - a2 a 1 + Cl!2 --'- a2)YZ + 2·h - a211 = 0

In this case, Yl(t) may be taken to be zero, and any non-trivial yz(t) satisfying
12 + (a1 - 2a2)YZ + (a~ - Cl!Z a l + IX2 - 0:2h2 = 0 (2.60)
will give the desired canonical form.
This problem illustrates the utility of the given time~varying form (2.38). It may always be
found by differentiating known functions of time. Other forms usually involve the. solution of
equations such as (2.60), which may be quite difficult, or require differentiation of the ai(t). Addition
of an input contributes even more difficulty. However, in a later chapter forms analogous to the
first canonical form will be given.

Supplementary Problems
2.9. Given the discrete time equation yen + 2) + 3y(n + 1) + 2y(n) = u(n + 1) + 3u(n), find the matrix
state equations in (i) the first canonical form, (ii) the second canonical form, (iii) the Jordan canon-
ical form.

2.10. Find a matrix state equation for the multiple input-multiple output system
YI + atih + a2YI + Y3112 + Y4YZ ::::: (h u 1 + SlUZ
yz + 11112 + 1zY2 "+ a3111 + Cl!4Y1 ::::: f32 u 1 + S2 U Z
2.11. Write the matrix state equations for the system of Fig. 2-20, using Fig. 2-20 directly.

u
Xz + Xl
~----~y
p + p

2.12. Consider the plant dynamics and feedback com- 500


u---~
pensation illustrated in Fig. 2-21. Assuming p(p + 5)(1'2 + P + 100)
the initial conditions are specified in the form v
v(O), v(O), ii (0), ·v·(O), w(O) and w(O), write a
w _ _ _ _...., 1'2+ p + 100
state space equation for the plant plus compen- 1'2 + 2p + 100
sation in the form x = Ax + Bu and show the
relation between the specified initial conditions
and the components of xeD). Fig. 2-21
34 METHODS FOR OBTAINING THE STATE EQUATIONS [CHAP. 2

2.13. The procedure of turning "backwards" the flow diagram of the first canonical form to obtain the
second canonical form was never· justified. Do this by verifying that equations (2.8) satisfy the
original input-output relationship (2.3). Why can't the time-varying flow diagram corresponding
to equations (2.24) be turned backwards to get another canonical form?

2.14. Given the linear time-varying differential equation


y + al(thi + a2(t)y = f3o(t) U + fll(t)U + f32(t)U
with the initial conditions y(O) = Yo, y(O) = Yo.
(i) Draw the flow diagram and indicate the state variables.
(ii) Indicate on the flow diagram the values of the scalors at all times.
(iii) Write down the initial conditions of the state variables.

2.15. Verify equation (2.37) using equation (2.25).

2.16. The simplified equations of a d-c motor are

Motor Armature: Ri + L ~; = V - Kf :

Inertial Load:

Obtain a matrix state equation relating the input voltage V to the output shaft angle 8 using a
state vector

2.17. The equations describing the time behavior of the neutrons in a nuclear reactor are
6
Prompt Neutrons: In (p(t) - {l)n :I ""-iCi
+ i=l
Delayed Neutrons:
6
where f3 = :I f3i and p(t) is the time-varying reactivity, perhaps induced by control rod motion.
i=1
Write the matrix state equations.

2.18. Assume the simplified equations of motion for a missile are given by
Lateral Translation: z + K ¢ + K a + Kaf3
1 2 = 0
Rotation: if; + K 4a + K5f3 = 0

Angle of Attack: a::: ¢ - Ka z


Rigid Body Bending Moment: M(l) = Ka(l)a + K B (l)f3
where z::;: lateral translation, ¢ = attitude, a == angle of attack, f3 == engine deflection, M(l)
bending moment at station 1. Obtain a state space equation in the form
dx Ax + Bu, y::::: - ex + Du
dt

where u = f3 and y == (M~l)) , taking as the state vector x

(i) x = (D (ii) x = G) (iii) x


CHAP. 2] METHODS FOR OBTAINING THE STATE EQUATIONS 35

2.19. 'Consider the time-varying electrical network of Fig. 2-22. The -il
voltages across the inductors and the current in the capacitors
can be expressed by the relations

ea - el :t (L1i l ) =
di l
L ldt +
. dL I
'/,1&

d del dGI
il - i2 dt (Glel) Glert + cI&
o o
d~ . dL2
el - eb :t (L 2i 2 ) = L2dt + t2dt
It is more convenient to take as states the inductor fluxes

PI = itto
(e a - el) dt + PI(tO)

P2 ft (el - eb) dt + P2(t O)


to
and the capacitor charge
ql = ft
to
(i1 - i 2) dt + q1(tO}

Obtain a state space equation for the network in the form

d:x
dt
A(t)x + B(t)u

yet) C(t)x + D(t)u

where the state vector x, input vector u, and output vector yare

(i)
x=GJ u (::) y
=G)
(ii)
x=G) u
(::) y =G)
2.20. Given the quadratic Hamiltonian H = -iqTVq + ipTTp where q is a vector of n generalized co-
ordinates, p is a vector of corresponding conjugate momenta, V and Tare n X n matrices corre-
sponding to the kinetic and potential energy, and the superscript T on a vector denotes transpose.
Write a set of matrix state equations to describe the state.

Answers to Supplementary Problems


2.9. (i) x(n + 1) = (-3
-2 ~) x(n) + (~) u(n); yen) :::: (1 0) x(n)

(ii) x(n+ 1) (_~ _~) x(n) + (~) u(n); yen) (3 1) x(n)

(iii) x(n + 1) = (-~ _~) x(n) + (~) u(n); yen) (-1 2) x(n)
36 METHODS FOR OBTAINING THE STATE EQUATIONS [CHAP. 2

2.10. Similar to first canonical form

~) (~' ~l)U
1 -Y3
. -0:2 0 -Y4

C"
x +
-a3 0 -[1 1 x
-a4 0 -Y2 0 f32
°2
(~ ~) x
0 0
y
0 1

or similar to second canonical form

.
X
(-; -0:1

-Y4 -Y3 -Y2 -Y1


1

0
0
-0:4

0 ~.)x (~1 ~)U


+

{12
°2
(~ ~)x
0 0
Y
0 1

2.11.

C~' +:2) (::) +(~)


+"'1

:t G;) -"'2
0 -"'3 X3 "'3
U

Y (1 oo)G:)
2.12. For Xl = v) X2 = V, ::li3 = V, X4 = ·v: x5 =w, X6=W the initial conditions result immediately and
01 0 0 0 0 0
00 1 0 0 0 0
0 0 0 1 0 0 0
x
0 -500 -105 -6 0 0
x + 500
u

0 0 0 0- 0 1 0
100 1 1 0 -100 -2 0

2.13. The time-varying flow diagram cannot be turned backwards to obtain another canonical form be-
cause the order of multiplication by a time-varying coefficient and an integration cannot be inter-
changed.

2.14. (i)
u--------~--------------------~~----------------__,

I - - - -. . y

Fig. 2-23
CHAP. 2] METHODS FOR OBTAINING THE STATE EQUATIONS 37

(ii) The values of Yo, Y1 and Y2 are given by equations (2.33), (2.34) and (2.35).
(iii) X1(0) = Yo - Yo(O) u(O) x2(0) = Yo - (fo(O) + Y1(0» u(O) - Yo(O) U(O)

2.16.
d
dt ( i)
fJ
dfJ/dt
:::: fJ (0 1 0) ( ;
de/dt
)

2.17. !£
dt (~1).:
Os

o
2.18. (i)

(~ ~)
1
c =

(ii)

-Ka 1

K2 K S
(iii)
o
-K4

(~
1 0
c == 'Kex 0

2.19. (i)

(ii) A -0- o1 )
.. 1
-LL- 1
2 2

c D

::::
2.20. The equations of motion are

Using the state vector x :::: (q1 qn PI ... Pn)T, the matrix state equations are

X :::: ( 0
-V
T)o x
Chapter 3

Elementary Matrix Theory


3.1 INTRODUCTION
This and the following chapter present the minimum amount of matrix theory needed
to comprehend the material in the rest of the book. It is recommended that even those well
versed in matrix theory glance over the text of these next two chapters, if for no other
reason than to become familiar with the notation and philosophy. For those not so well
versed, the presentation is terse and oriented towards later use. For a more comprehensive
presentation of matrix theory, we suggest study of textbooks solely concerned with the
subject.

3.2 BASIC DEFINITIONS


Definition 3.1: A matrix, denoted by a capital boldfaced letter, such as A or <P, or by the
notation {ail or {au}, is a rectangular array of elements that are members
of a field or ring, such as real numbers, complex numbers, polynomials,
functions, etc. The kind of element will usually be clear from the context.
Exam pIe 3.1.
An example of a matrix is
A = (
0 2 j)
1J x2 sin t

Definition 3.2: A row of a matrix is the set of all elements through which one horizontal
line can be drawn.

Definition 3.3: A column of a matrix is the set of all elements through which one vertical
line can be drawn.
Example 3.2.
The rows of the matrix of Example 3.1 are (0 2 j) and (1i x 2 sin t). The columns of this matrix are

(~). (:,) and (Si~ t) .


Definition 3.4: A square matrix has the same number of rows and columns.

Definition 3.5: The order of a matrix is m x n, read m by n, if it has m rows and n


columns.

Definition 3.6: A scalar, denoted by a letter that is not in boldface type, is a 1 x 1 matrix.
In other words, it is one element. When part of a matrix A, the notation
atj means the particular element in the ith row and jth column.

Definition 3.7: A vector, denoted by a lower case boldfaced letter, such as a, or with its
contents displayed in braces, such as {at}, is a matrix with only one row or
only one column. Usually a denotes a column vector, and aT a row vector.

38
CHAP. 3] ELEMENTARY MATRIX THEORY 39

Definition 3.8: The diagonal of a square matrix is the set of all elements aij of the matrix
in which i = j. In other words, it is the set of all elements of a square
matrix through which can pass a diagonal line drawn from the upper left
hand corner to the lower right hand corner.
Example 3.3.
Given the matrix

The diagonal is the set of elements through which the solid line is drawn, b 11 , b22 , b33 , and not those sets
determined by the dashed lines.

Definition 3.9: The trace of a square matrix A, denoted tr A, is the sum of all elements
on the diagonal of A.
n
trA ~ au
i=l

3.3 BASIC OPERATIONS


Definition 3.10: Two matrices are equal if their corresponding elements are equal. A = B
means aij = bij for all i and j. The matrices must be of the same order.

Definition 3.11: Matrix addition is performed by adding corresponding elements. A + B =


C means aij + bij = Cji, for all i and j. Matrix subtraction is analogously
defined. The matrices must be of the same order.

Definition 3.12: Mat1'ix ditfe1'entiation or integration means differentiate or integrate each


element, since differentiation and integration are merely limiting opera-
tions on sums, which have been defined by Definition 3.1l.
Addition of matrices is associative and commutative, i.e. A + (B + C) = (A + B) + C
and A + B = B + A. In this and all the foregoing respects, the operations on matrices have
been natural extensions of the corresponding operations on scalars and vectors. However,
recall there is no multiplication of two vectors, and the closest approximations are the dot
(scalar) product and the cross product. Matrix multiplication is an extension of the dot
product a· b = a 1b 1 + a2 b2 + ... + anb n, a scalar.

Definition 3:13: To perform matTix multiplication, the element Cij of the product matrix C
is found by taking the dot product of the ith row of the left matrix
A and the jth column of the right matrix B, where C = AB, so that
n
Cij = ~ aikb kj •
k=l

Note that this definition requires the left matrix (A) to have the same number of columns
as the right matrix (B) has rows. In this case the matrices A and B are said to be compatible.
It is undefined in other cases, excepting when one of the matrices is 1 x 1, i.e. a scalar.
In this case each of the elements is multiplied by the scalar, e.g. aB = {ab ii } for all i and j.
Example 3.4.
The vector equation y = Ax, when y and x are 2 X 1 matrices, i.e. column vectors, is
40 ELEMENTARY MATRIX THEORY [CHAP. 3

2
where Yi = ~ aikxk for i = 1 and 2. But suppose x = Bz, so that Then
k=l

2 ( 2 ) 2
Yi = ~ aik ~ bk-z- = ~ c--z-
k=l 3= 1 J 1 j=1 ~J j

so that y = A(Bz) = Cz, where AB = C.

Example 3.4 can be extended to show (AB)C = A(BC), i.e. matrix multiplication is asso.;.
ciative. But, in general, matrix multiplication is not commutative, AB =F BA. Also, there
is no matrix division.
Example 3.5.
To show AB # BA, consider
AB
(~ ~)(~ ~) D

BA (~. ~)(~ ~) C

and note D oF C. Furthermore, to show there is no division, cQnsider

BF
(~ ~)(~ ~) C

Since BA = BF = C, "division" by B would imply A = F, which is certainly not true.

Suppose we have the vector equation Ax = Bx, where A and Bare n X n matrices. It
can be concluded that A = B only if x is an arbitrary n-vector. For, if x is arbitrary, we
may choose x successively as el, e2, ... , en and find that the column vectors al = h1,
a2 = h2, ... , an = b n • Here ej are unit vectors, defined after Definition 3.17, page 41.

Definition 3.14: To partition a matrix, draw a vertical and/or horizontal line between two
rows or columns and consider the subset of elements formed as individual
matrices, called submatrices.

As long as the submatrices are compatible, i.e. have the correct order so that addition
and multiplication are possible, the submatrices can be treated as elements in the basic
operations.
Example 3.6.
A 3 X 3 matrix A can be partitioned into a 2 X 2 matrix All, a 1 X 2 matrix A21 , a 2 X 1 matrix A 12 ,
and a 1 X 1 matrix A22 •

A
( -~-::--+-~-:: )
A similarly partitioned 3 X 3 matrix B adds as

A + B

and multiplies as

AB

Facility with partitioned matrix operations will often save time and give insight.
CHAP.3J ELEMENTARY MATRIX THEORY 41

3.4 SPECIAL MATRICES


Definition 3.15: The zero matrix, denoted 0, is a matrix whose elements are all zeros.
Definition 3.16: A diagonal matrix is a square matrix whose off-diagonal elements are all
zeros.
Definition 3.17: The unit matrix, denoted I, is a diagonal matrix whose diagonal elements
are all ones.
Sometimes the order of I is indicated by a subscript, e.g. In is an n X n matrix. Unit vectors,
denoted ei, have a one as the ith element and all other elements zero, so that 1m =
(el J e2 J . . . J em).
Note IA = AI = A, where A is any compatible matrix.
Definition 3.18: An upper triangular matrix has all zeros below the diagonal, and a lower
triangular matrix has all zeros above the diagonal. The diagonal elements
need not be zero.
An upper triangular matrix added to or multiplied by an upper triangular matrix results
in an upper triangular matrix, and similarly for lower triangular matrices.
Definition 3.19: A transpose matrix, denoted AT, is the matrix resulting from an inter-
change of rows and columns of a given matrix A. If A = {au}, then
AT:= {ajd, so that the element in the ith row and jth column of A becomes
the element in the jth row and ith column of AT.
Definition 3.20: The complex conjugate tranBpose matrix, denoted At, is the matrix whose
elements are complex conjugates of the elements of AT.
Note (AB)T = BTAT and (AB)t = BtAt.
Definition 3.21: A matrix A is sym'metric if A = AT.
Definition 3.22: A matrix A is Hermitian if A = At.
Definition 3.23: A matrix A is normal if AtA = AAt.
Definition 3.24: A matrix A is skew-symmetric if A = -AT.
Note that for the different cases: Hermitian A = At, skew Hermitian A = -At, real
symmetric AT = A, real skew-symmetric A = -AT, unitary AtA == I, diagonal D, or
orthogonal AtA = D, the matrix is normal.

3.5 DETERMINANTS AND INVERSE MATRICES


Definition 3.25: The determinant of an n X n (square) matrix {au} is the sum of the signed
products of all possible combinations of n elements, where each element
is taken from a different row and column.
n!
det A ~ (-1)palPla2P2" ·anpn (3.1)
Here Pl, P2, ... , pn is a permutation of 1,2, ... , n, and the sum is taken over all possible
permutations. A permutation is a rearrangement of 1,2, ... , n into some other order, such as
2, n, ... ,1, that is obtained by successive transpositions. A transposition is the interchange
of places of two (and only two) numbers· in the list 1, 2, ... , n. The exponent p of -1 in
(3.1) is the number of transpositions it takes to go from the natural order of 1,2, ... , n to
Pl, P2, ... ,pn. There are n! possible permutations of n numbers, so each determinant is the
sum of n! products.
42 ELEMENTARY MATRIX THEORY [CHAP. 3

Example 3.7.
To find the determinant of a 3 X 3 matrix, all possible permntations of 1,2,3 mllst be found. Per-
forming one transposition at a time, the following table can be formed.

P PI' P2' Pa

0 1, 2, 3
1 3, 2, 1
2 2, 3, 1
3 2, 1, 3
4 3, 1, 2
5 1, 3, 2

This table is not unique in that for P = 1, possible entries are also 1,3,2 and 2,1,3. However, these
entries can result only from an odd p, so that the sign of each product in a determinant is unique. Since
there are 8! = 6 terms, all possible permutations are given in the table. Notice at each step only two
numbers are interchanged. Using the table and (3.1) gives
det A = (-I)Oallu22u33 + (-1)1a12a21a33
+ (-1)2a12u23aSI + (-1)3u13a22a31 + (-1)4a18u21a32 + (-1)5ullu23a32
Theorem 3wl: Let A be an n X n matrix. Then det (AT) = det (A).
Proof is given in Problem 3.3, page 59.

Theorem 3.2: Given two n X n matrices A and B. Then det (AB) = (det A) (det B).
Proof of this theorem is most easily given using exterior products, defined in Section
3.13! page 56. The proof is presented in Problem 3.15, page 65.

Definition 3.26: Elementary row (or column) operations are:


(i) Interchange of two rows (or columns).
(ii) Multiplication of a row (or column) by a scalar.
(iii) Adding a scalar times a row (or column) to another row (column).
To perform an elementary row (column) operation on an n X m matrix A, calculate the
product EA (AE for columns), where E is an n x n(m X m) matrix found by performing the
elementary operation on the n x n(m X m) unit matrix I. The matrix E is called an elemen-
tary matrix.
Example 3.8.
Consider the 2 X 2 matrix A = {au}. To interchange two rows, interchange the two rows in I to
obtain E.
EA ==

To add 6 times the second column to the first, multiply

AE
~) ==

Using Theorem 3.2 on the product AE or EA, it can be found that (i) interchange of two
rows or columns changes the sign of a determinant, i.e. det (AE) = -detA, (ii) multiplication
of a row or column by a scalar a multiplies the determinant by a, i.e. det (AE) = a det A,
and (iii) adding a scalar times a row to another row does not change the value of the de-
terminant, i.e. det (AE) = det A.
CHAP.3J ELEMENTARY MATRIX THEORY 43

Taking the value of a in (ii) to be zero, it can be seen that a matrix containing a row
or column of zeros has a zero determinant. Furthermore, if two rows or columns are
identical or multiples of one another, then use of (iii) will give a zero row or column, so that
the determinant is zero.
Each elementary matrix E always has _an inverse E-l, found by undoing the row or
column operation of I. Of course an exception is a = 0 in (ii).

Example 3.9.
The inverse of E = G~) is E-l = G~). The inverse of E = (~ ~) is E-l = (-~ n.
Definition 3.27: The determinant of the matrix formed by deleting the ith row and the jth
column of the matrix A is the minor of the element aij, denoted detMij. The
cofactor Co = (-l)Hi detMij.

Example 3.10.
The minor of a22 of a 3 X 3 matrix A is det M22 = alla33 - a13aSl' The cofactor c22 = (-1)4 det M22 =
detM22 •

Theorem 3.3: The Laplace expansion of the determinant of an n X n matrix A is detA


n n
~
i=l
aijCij for any column j or det A = ~
j=l
aijCij for any row i.

Proof of this theorem is presented as part of the proof of Theorem 3.21, page 57.

Example 3.11.
The Laplace expansion of a 3 X 3 matrix A about the second column is
det A = a12c12 + a22cZ2 + aS2c32
- a I 2(a21 a 33 - a23 a 31) + a22(a ll aaa - a13 a 31) - a32(alla23 - a13 a 21)

Corollary 3.4: The determinant of a triangular n x n matrix equals the product of the
diagonal elements.

Proof is by induction. The corollary is obviously true for n = 1. For arbitrary n, the
Laplace expansion about the nth row (or column) of an n x n upper (or lower) triangular
matrix gives detA = annCnn • By assumption, Cnn = ana22' • ·an-l,n-l, proving the corollary.
Explanation of the induction method: First the hypothesis is shown true for n = no,
where no is a fixed number. (no = 1 in the foregoing proof.) Then assume the hypothesis
is true for an arbitrary n and show it is true for n + 1. Let n = no, for which it is known
true, so that it is true for no + 1. Then let n = no + 1, so that it is true for no + 2, etc.
In this manner the hypothesis is shown true for all n ~ no.

Corollary 3.5: The determinant of a diagonal matrix equals the product of the diagonal
elements.

This is true because a diagonal matrix is also a triangular matrix.


n The Laplace expansion for a matrix whose kth row equals the ith row is det A =
1: akjCij
j=1
for k # i, and for a matrix whose kth column equals the jth column the Laplace
n
expansion is det A = ~ ~kCij. But these determinants are zero since A has two identical
rows or columns. i=l
44 ELEMENTARY MATRIX THEORY [CHAP. 3

Definition 3.28: The Kronecker delta 8ij == 1 if i =j and 8 ij = 0 if i =1= j.


Using the Kronecker notation,
n
:l:
i=1
aikCij Ski detA (3.2)

Definition 3.29: The adjugate matrix of A is adj A = {cu} T, the transpose of the matrix
of cofactors ofA.
The adjugate is sometimes called "adjoint", but this term is saved for Definition 5.2.
Then (3.2) can be written in matrix notation as
A adj A = I det A (3.3)

Definition 3.30: An m x n matrix B is a left inverse of the n x m matrix A if BA = 1m , and


an m x n matrix C is a right inverse if AC = In. A is said to be nonsingular
if it has both a left and a right inverse.
If A has both a left inverse B and a right inverse C, C == IC = BAC = BI = B. Since
BA = 1m and AC = In, and if C = B, then A must be square. Furthermore suppose a non ..
singular matrix A has two inverses G and H. Then G = GI = GAB = IH == H so that a
nonsingular matrix A must be square and have a unique inverse denoted A -1 such that
A-1A== AA-1 = I. Finally, use of Theorem 3.2 gives detA detA-l:;:::: detI:;:::: 1, so that if
detA::: 0, A can have no inverse.

Theorem 3.6: Cramer's rule. Given an n X n (square) matrix A such that detA =1= O.
Then
A-I de! A adj A

The proof follows immediately from equation (3.3).

Example 3.12.
The inverse of a 2 X 2 matrix A. is

Another and usually faster means of obtaining the inverse of nonsingular matrix A is
to use elementary row operations to reduce A in the partitioned matrix A II to the unit
matrix. To reduce A to a unit matrix, interchange rows until an =1= O. Denote the inter-
change by E 1 • Divide the first row by an, denoting this row operation by E 2 • Then
E2E1A has a one in the upper left hand corner. Multiply the first row by Ui1 and subtract
it from the ith row for i = 2,3, ... , n, denoting this operation by Ea. The first column of
EaE2EIA is then the unit vector el. Next, interchange rows E3E2EIA until the element in
the second row and column is nonzero~ Then divide the second row by this element and
subtract from all other rows until the unit vector e2 is obtained. Continue in this manner
until Em' .. E1A = I. Then Em'" El = A-I, and operating on I by the same row operations
will prod Uc€) A -1. Furthermore, det A -1 = det E1 det E 2 • •• det Em from Theorem 3.2.

Example 3.13.
To find the inverse of (_~ ~), adjoin the unit matrix to obtain

(-~ ~ I ~ ~)
CHAP. 3] ELEMENTARY MATRIX THEORY 45

Interchange rows (det El = -1):


o
~)
1
1 1

Divide the first row by -1 (det E2 = -1):


1 -1 I 0 -1)
( o 1 1 0

It turns out the first column is already ell and all that is necessary to reduce this matrix to I is to add the
second roW to the first (det Es = 1).

(~ o
1
I 1-1)
1 0

The matrix to the right of the partition line is the inverse of the original matrix, which has a determinant
equal to [(-l)(-l)(l)} ~l = 1.

3.6 VECTOR SPACES


Not all matrix equations have solutions, and some have more than one.

Example 3.14.
(a) The matrix equation

has no solution because no ~ exists that satisfies the two equations written out as
~ 0
2~ 1

(b) The matrix equation

(~)
is satisfied for any ~.

To find the necessary and sufficient conditions for existence and uniqueness of solutions
of matrix equations, it is necessary to extend some geometrical ideas. These ideas are
apparent for vectors of 2 and 3 dimensions, and can be extended to include vectors having
an arbitrary number of elements.
Consider the vector (2 3). Since the elements are real, they can be represented as points
in a plane. Let (t 1 t 2 ) = (2 3). Then this vector can be represented as the point in the
t l , t2 plane shown in Fig. 3-1.

• (2 3)

~~-+~~~~~~.---~.-~---.----~--~

Fig.3-1. Representation of (2 3) in the Plane


46 ELEMENTARY MATRIX THEORY [CHAP. 3

If the real vector were (1 2 3), it could be represented as a point in (~1 ~2 g3) space
by drawing the ~3 axis out of the page. Higher dimensions, such as (1 2 3 4), are harder
to draw but can be imagined. In fact some vectors have an infinite number of elements.
This can be included in the discussion, as can the case where the elements of the vector are
other than real numbers.

Definition 3.31: Let CUn be the set of all vect01'S with ncomponents. Let al and a2 be vectors
having n components, i.e~ al and a2 are in CUn. This is denoted al E CUn,
a2 E CUn. Given arbitrary scalars (31 and !32, it is seen that ((31al + !32a2) E <Vn,
i.e. an arbitrary linear combination of al and a2 is in G(}n.
'VI is an infinite line and is an infinite
G[}2
plane. To represent diagrammatically these
and, in general, rvn for any n, one uses the area
enclosed by a closed curve. Let 1l be a set of
vectors in G[}n. This can be represented as
shown in Fig. 3-2. Fig.-3-2. A Set of Vectors 11 in "Un

Definition 3.32: A set of vectors 11 in rvn is closed under addition if, given any al E 11 and
any a2 E 11, then (at + a2) E 11.

Example 3.15.
(a) Given 11 is the set of all 3-vectors whose elements are integers. This subset of 'V3 is closed under addi-
tion because the sum of any two 3-vectors whose elements are integers is also a 3-vector whose elements
are integers.

(b) Given 'U is the set of all 2-vectors whose first element is unity. This set is not closed under addition
because the sum of two vectors in 1l must give a vector whose first element is two.

Definition 3.33: A set of vectors U in G[}n is closed under scalar multiplication if, given any
vector a E'U and any arbitrary scalar /3, then (3a E 'U. The scalar f3 can be
a real or complex number.

Example 3.16.
Given 11 is the set of all 3-vectors whose second and third elements are zero. Any scalar times any
vector in 'U gives another vector in 1l, so U is closed under scalar multiplication.

Definition 3.34: A set of n-vectors 'U in ti()n that contains at least one vector is called a vector
space if it is (1) closed under addition and (2) closed under scalar multi-
plication.

If a E 1{, where 'U is a vector space, then Oa::::: 0 E'U because 'U is closed under scalar
multiplication. Hence the zero vector is in every vector space.
Given 81, a2, ... , an, then the set of all linear combinations of the 3i is a vector space
(linear manifold).
1l

3.7 BASES
Definition 3.35: A vector space 'U in G(Jn is spanned by the vectors 31, a2, ... , ak (k need not
equal n) if (1) al E 11,- a2 E 11, ... , ak E 11 and (2) every vector in 'U is a
linear combination of the ai, 32, . . . , ak.
· CHAP. 3] ELEMENTARY MATRIX THEORY 47

Example 3.17.
Given a vector space 'U in 'Va to be the set· of all 3-vectors whose third element is zero. Then (1 2 0),
(1 1 0) and (0 1 0) span 'U because any vector in 'U can be represented as (a (1 0), and

(ex (1 - 0) = (,8 - a)(l 2 0) + (2a - (1)(1 1 0) + 0(0 1 0)

Definition 3.36: Vectors a1, a2, .•. , ak E G()n are linearly dependent if there exist scalars
!31, !32, .•• , /3k not all zero such that /31a1 + (32a2 + ~ .. + j3kak O. =
Example 3.18.
The three vectors of Example 3.17 are linearly dependent because

+1(1 2 0) - 1(1 1 0) - 1(0 1 0) = (0 0 0)

Note that any set of vectors that contains the zero vector is a linearly dependent set.

Definition 3.37: A set of vectors are linearly independent if they are not linearly dependent.

Theorem 3.7: If and only if the column vectors a1, a2, •.• , an of a square matrix A are
linearly dependent, then det A = O.

Proof: If the column vectors of A are linearly dependent, from Definition 3.36 for some
13 1, i32 , ••• , f3 n not all zero we get 0 = f3 1a 1 + {32a2 + ... + f3nan, Denote one nonzero f3 as f3 i ,
Then
1 0 o
o 1 o
o 0 o
o 0 1

Since a matrix with a zero column has a zero determinant, use of the product rule of
Theorem 3.2 gives det A det E = O. Because det E = f3 i ¥- 0, then det A = O.
Next, suppose the column vectors of A are linearly independent. Then so are the
column vectors of EA, for any elementary row operation E. Proceeding stepwise as on
page 44, we find E 1 , ••• , Em such that Em'" ElA = I. (Each step can be carried out since
the column under consideration is not a linear combination of the preceding columns.)
Hence, (detEm)" . (detEI)(detA) = 1, so that detA ¥- O.
Using this theorem it is possible to determine if al, a2, ... , ak, k ~ n, are linearly de-
pendent. Calculate all the k x k determinants formed by deleting all combinations of n- k
rows. If and only if all determinants are zero, the set of vectors is linearly dependent.
Example 3.19.

Consider (~) and ( ; ) . Deleting the bottom row gives det G:) = O. Deleting the top row

gives det (! ~) = -12. Hence the vectors are linearly independent. There is no need to check the

determinant formed by deleting the middle row.

Definition 3~38: A set of n-vectors ai, a2, ... , ak form a basis for '11 if (1) they span ,'11 and
(2) they are linearly independent.
48 ELEMENTARY MATRIX THEORY [CHAP. 3

Example 3.2U.
Any two of the three vectors given in Examples 3.17 and 3.18 form a basis of the given vector space,
since (1) they span 1..( as shown and (2) any two are linearly independent. To verify this for (1 2 0) and
(1· 1 0), set

This gives the equations f3t + {J2 = =


0, 2{Jl + {J2 O. The only solution is that {J1 and {J2 are both zero,
which violates the conditions of the definition of linear dependence.

Example 3.21.
Any three non coplanar vectors in three-dimensional Euclidean space form a basis of C0 3 (not necessarily
the orthogonal vectors). However, note that this definition has been abstracted to include vector spaces
that can be subspaces of Euclidean space. Since conditions on the solution of algebraic equations are the
goal of this section, it is best to avoid strictly geometric concepts and remain with the more abstract ideas
represented by the definitions.
Consider 'U to be any infinite plane in three-dimensional Euclidean space 'V s. Any two noncolinear
vectors in this plane form a basis for 11.

Theorem 3.8: If al, a2, ... ,ak are a basis of 'U, then every vector in 'U is expressible
uniquely as a linear combination of a1, a2, ... , ak.

The key word here is uniquely. The proof is given in Problem 3.6.
To express any vector in 'U uniquely, a basis is needed. Suppose we are given a set of
vectors that span 'U. The next theorem is used in constructing a basis from this set.

Theorem 3.9: Given nonzero vectors a1, a2, ... , am E "Un. The set is linearly dependent if
and only if some ak, for 1 < k ~ m, is a linear combination of al, a2, ... , ak-l.

Proof of this is given in Problem 3.7. This theorem states the given vectors need only
be considered in order for the determination of linear dependency. We need not check
and see that each ak is linearly independent from the remaining vectors.

Example 3.22.
Given the vectors (1 -1 0), (-2 2 0) and (1 0 0). They are linearly dependent because (-2 2 0) =
-2(1 -1 0). We need not check whether (1 0 0) can be formed as a linear combination of the first
two vectors.
To construct a basis from a given set of vectors al, a2, ... , am that span a vector space
11, test to see if a2 is a linear combination of al. If it is, delete it from the set. Then test
if 83 is a linear combination of al and a2, or only a1 if a2 has been deleted. N ext test
84, etc., and in this manner delete all linearly dependent vectors from the set in order. The
remaining vectors in the set form a basis of 'U.

Theorem 3.10: Given a vector space Ii with a basis a1, a2, ... , am and with another basis
bl, b2 , ••• , hI. Then m· = l.

Proof: Note al, hi, h2, ... , hI are a linearly dependent set of vectors. Using Theorem
3.9 delete the vector hk that is linearly dependent on al, hI, ... , h k -1. Then a1, hi, ... , b k -'-l,
hk+ 1, ••• , bl still span 'U. Next note a2, a1, hI, ... , hk~l, b k +h .•• , hI are a linearly depen-
dent set. Delete another b-vector such that the set still spans 'U. Continuing in this manner
gives aI, •.. , 82, a1 span 'U. If l < m, there is an 81 + 1 that is a linear combination of aI, ... ,
82, al. But the hypothesis states the a-vectors are a basis, so they an must be linearly
independent, hence l ~ m. Interchanging the b- and a-vectors in the argument gives m ~ l,
proving the theorem.
CHAP. 3] ELEMENTARY MATRIX THEORY 49

Since all bases in a vector space 11 contain the same number of vectors, we can give
the following definition.
Definition 3.39: A vector space 11 has dimension n if and only if a basis of 11 consists of n
vectors.
Note that this extends the intuitive definition of dimension to a subspace of 'V m •

3.8 SOLUTION OF SETS OF LINEAR ALGEBRAIC EQUATIONS


Now the rneans are at hand to examine the solution of matrix equations. Consider the
matrix equation

o Ax

If all ~i are zero, x = 0 is the trivial solution, which can be obtained in all cases. To obtain
a nontrivial solution, some of the ~. must be nonzero, which means the ai must be linearly
~

dependent, by definition. Consider the set of all solutions of Ax = O. Is this a vector space?
(1) Does the set contain at least one element? '
Yes, because x =0 is always one solution.
(2) Are solutions closed under addition?
Yes, because ifAz = 0 and Ay = 0, then the sum x = z + y is a solution of
Ax=O.
(3) Are solutions closed under scalar multiplication?
Yes, because if Ax = 0, then j3x is a solution of A({3x) = O.
So the set of all solutions of Ax = 0 is a vector space.

Definition 3.40: The vector space of all solutions of Ax = 0 is called the null space of A.

Theorem 3.11: If an m n matrix A has n columns with r linearly independent columns,


X
then the null space of A has dimension n - r.
The proof is given in Problem 3.8.

Definition 3.41: The dimension of the null space of A is called the nullity of A.

Corollary 3.12: If A is an n x n matrix with n linearly independent columns, the null space
has dimension zero. Hence the solution x = 0 is unique.

Theorem 3.13: The dimension of the vector space spanned by the row vectors of a matrix
is equal to the dimension of the vector space spanned by the column vectors.
See Problem 3.9 for the proof.
Example 3.23. 1 2
Given the matrix ( 3). It has one independent row vector and t,herefore must have only one
246 '
independent column vector.

Definition 3.42: The vector space of all y such that Ax = y for some x is called the range
space of A.
It is left to the reader to verify that the range space is a vector space.
50 ELEMENTARY MATRIX THEORY [CHAP. 3

Example 3.24.
The range space and the null space may have other vectors in common in addition to the zero vector.
Consider

A = (~ ~) b = (~) c = (;)
Then Ab = 0, so b is in the null space; and Ac = b, so b is also in the range space.

Definition 3.43: The rank of the m x n matrix A is the dimension of the range space of A.

Theorem 3.14: The rank of A equals the maximum number of linearly independent column
vectors of A, i.e. the range space has dimension r.

The proof is given in Problem 3.10. Note the dimension of the range space plus the
dimension of the null space = n for an m x n matrix. This provides a means of determining
the rank of A. Determinants can be used to check the linear dependency of the row or
column vectors.

Theorem 3.15: Given an m X n matrix A. Then rank A = rank AT = rankATA = rankAAT.


See Problem 3.12 for the proof.

Theorem 3.16: Given an m x n matrix A and an n x k matrix B, then rankAB ===-rank A,


rank AB ~ rank B. Also, if B is nonsingular, rank AB = rank A; and if A
is nonsingular, rank AB = rank B.
See Problem 3.13 for the proof.

3.9 GENERALIZATION OF A VECTOR


From Definition 3.7, a vector is defined as a matrix with only one row or column. Here
we generalize the concept of a vector space and of a vector.
Definition 3.44: A set 'U of objects x, y, z, ... is called a linear vector space and the objects
x, y, z, ... are called vectors if and only if for all complex numbers a and (3
and all objects x, y and z in 'U:
(1) x + y is in 'U,

(2) x + y = y + x,
(3) (x + y) + z = x + (y + z),
(4) for each x and y in 'U there is a unique z in 1-l such that x + z = y,
(5) aX is in 'U,

(6) a {(3x) = (a(3)x,


(7) Ix = x,
(8) a(x + y) aX + ay,
(9) (a + (3)x aX + (3x.

The vectors of Definition 3.7 (n-vectors) and the vector space of Definition 3.34 satisfy
this definition. Sometimes a and (3 are restricted to be real (linear vector spaces -over the
field of real numbers) but for generality they are taken to be complex here.
CHAP. 3] ELEMENTARY MATRIX THEORY 51

Example 3.25.
The set 11 of time functions that are linear combinations of sin t, sin 2t, sin St, ... is.a linear vector
space.

Example 3.26.
The set of all solutions to dx/dt = A(t, x is a linear vector space, but the set of all solutions to dx/dt =
A(t) x + B(t) u for fixed u(t) does not satisfy (1) or (5) of Definition 3.44, and is not a linear vector space.

Example 3.27.
The set of all complex valued discrete time functions x(nT) for n = 0,1, . .. is a linear vector space,
as is the set of all complex valued continuous time functions x(t).

All the concepts of linear independence, basis, null space, range space, etc., extend im-
mediately to a general linear vector space.
Example 3.28.
The functions sin t, sin 2t, sin St, .. ' form a basis for the linear vector space 11 of Example 3.25, and
so the dimension of 1l is countably infinite.

3.10 DISTANCE IN A VECTOR SPACE


The concept of distance can be extended to a general vector space; To compare the
distance from one point to another, no notion of an origin need be introduced. Further-
more, the ideas of distance involve two points (vectors), and the "yardstick" measuring
distance may very well change in length as it is moved from point to point. To abstract
this concept of distance, we have

Definition 3.45: A metric, denoted p(a, b), is any scalar function of two vectors a E 14 and
b E 14 with the properties
(1) p(a, b) ~ 0 (distance is-always positive),
(2) p(a, b) = 0 if and only if a =b (zero distance if and only if the
points coincide),
(3) p(a, b) = p(b, a) (distance from a to b is the same as distance
from b to a),
(4) p(a, b) + p(b, c) ~ p(a, c) (triangle inequality).
Example 3.29.
(a) An example of a metric for n-vectors a and b is

p(a, b) =
(a-b)t(a-b)
+
J1I2
[1 (a - b)t (a - b)

(b) For two real continuous scalar time functions x(t) and yet) for to:::: t === tv one metric is

p(x, y) = [5,:' [x(t) - yet)], dt J12


Further requirements are sometimes needed. By introducing the idea of an origin and
by making a metric have constant length, we have the following definition.

Definition 3.46: The norm, denoted Iiall, of a vector a is a metric from the origin to the
vector a E 11, with the additional property that the "yardstick" is not a
function of a. In other words a norm satisfies requirements (1)-(4) of a
metric (Definition 3.45), with b understood to be zero, and has the addi-
tional requirement
(5) Ilaall = lailiali.
52 ELEMENTARY MATRIX THEORY [CHAP. 3'

The other four properties of a metric read, for a norm,


(1) ]]all ~ 0,

(2) Ila[[ =0 if and only if a = 0,


(3) [Iall = Ila]1 (trivial),
(4) Ila[ I + Ileil ~ Iia + ell·
Example 3.30.
A norm in 'V2 is II(al a2)112 ~ Ilall2 :::: v'lall2 + la212.
Example 3.31.
A norm in 'V 2 is II(al a2)lll = Ilall l = lal! + la21.
Example 3.32.
A norm in 'V n is the Euclidean norm Ila112::::.ya:Fa
In the above examples the subscripts 1 and 2 distinguish different norms.
Any positive monotone increasing concave function of a norm is a metric from 0 to a.

Definition 3.47: The inner product, denoted (x, y), of any two vectors a and b in 11 is- a
complex scalar function of a and b such that given any complex numbers
a and {3,
(1) (a, a) ;:".. 0,
(2) (a, a) = 0 if and only if a = 0,
(3) (a, b)* = (h, a),
(4) (aa + ph, c) = a*(a, c) + ,8*(b, c).
An inner product is sometimes known as a scalar or dot product.
Note (a, a) is real, (a, ab) = a(a, b) and (a, 0) = o.
Example 3.33. n

An inner product in 'V n is (a, b) = atb = ~


i=l
at bi •
Example 3.34.
An inner product in the vedor space 'U of time functions of Example 3.25 is

(x, y) = i7To x*(t) y(t) dt

Definition 3.48: Two vectors a andb are orthogonal if (a, b) = O.


Example 3.35.
. In three-dimensional space, two vectors are perpendicular if they are orthogonal for the inner product
of Example 8.83.

Theorem 3.17: For any inner product (a, b) the Schwarz inequality [(a, b)]2::::::: (a, a)(b, b)
holds, and furthermore the equality holds if and only if a = ab or a or b
. is the zero vector. .
Proof: If a or b is the zero vector, the equality holds, so take b -=F O. Then for any
scalar j3,
0::::::: (a + ph, a + fih) = (a, a)
+ ,8*(b, a) + j3(a, h) + 1J312(b, b)
where the equality holds· if and only if a + Ph = O.. Setting j3 = -(b, a)/(b, b) and re-
arranging gives the Schwarz inequality.
CHAP. 3] ELEMENTARY MATRIX THEORY 53

Example 3.36.
Using the inner product of Example 3.34,

1f x*(t) y(t) dt I' '"" (f Ix*(t)I' dt) (f ly(t)12 dt)


Theorem 3.18: For any inner product, V(a + b, a + b) ..,,::; V(a, a) + V(b, b).
Proof: (a+b,a+b) = l(a+b,a+b)1
= I(a, a) + (h, a) + (a, h) + (h, b)] ~ I(a, a)1 + I(b, a)] + ](a, h)1 + I(h, b)1
Use of the Schwarz inequality then gives
(a + b, a + b) ~ (a, a) + 2v'(a, a)(b, b) + (b, b)
and taking the square root of both sides proves the theorem.
This theorem shows that the square root of the inner product (a, a) satisfies the triangle
inequality. Together with the requirements of the Definition 3.47 of an inner product, it
can be seen that y(a, a) satisfies the Definition 3.46 of a norm.

Definition 3.49: The natural norm, denoted Ila112' of a vector a is IIal12 = V(a, a).

Definition 3.50: An orthonormal set of vectors aI, a2, ... ,ale is a set for which the inner
product
j}
{~
if i #
if i = j
where 8ij is the Kronecker delta.
Example 3.37.
An orthonormal set of basis vectors in 'V n is the set of unit vectors ell e21 ••. I en, where ei is defined as
o

o
= 1 _ ith position
o

o
Given any set of k vectors, how can an orthonormal set of basis vectors be formed from
them? To illustrate the procedure, suppose there are only two vectors, al and a2. First
choose either one and make its length unity:

becomes

Because hI must have length unity, Yl = IIa11121, Now the second vector must be broken up
into its components:
54 ELEMENTARY_ MATRIX THEORY [CHAP. 3

Here YZb 1 is the component of a 2 along b p and a 2 - 1 2b 1 is the component perpendicular to


hi" To find 12' note that this is the dot, or inner, product 12 = b 1t . a 2 • Finally, the second
orthonormal vector is constructed by letting a z -1Zhl have unit length:
az - YZh l
IIa2 ~ 1 2b 1 11z
This process is called the Gra1n-Schmit orthonormalization procedure. The orthonormal hj
is constructed from aj and the preceding hi for i = 1, ... , j -1 according to
j-1

aj - i~ (bi , aj)bi
j-1

II aj - i~ (hi' aj)hi 112


Example 3.38.
Given the vectors ai
= (1 1 0). a~ = (1 -2 1)/,;2. Then 1181112 =,;2, so hi = (1 1 0)//2.
By the formula, the numerator is

82 - (b l ,az)b1 = ~(-~)
Y2
- ~(i
1
O)~(-~)~(~)
Y2 V2
-{2
1
1 0
_1
2V2
(-!)
2

Making this have length unity gives b~ = (3 -3 2)/-122.

Theorem 3.19: Any finite dimensional linear vector space in which an inner product exists
has an orthonormal basis.
Proof: Use the Gram-Schmit orthonormalization procedure on the set of all vectors
in 'U, or any set that spans 'U, to construct the orthonormal basis.

3.11 RECIPROCAL BASIS


Given a set of basis vectors hit h2' ... , b n for the vector space ~n, an arbitrary vector
x E 'V n can be expressed uniquely as
(3.4)

Definition 3.51: Given a set of basis vectors h1, h2, ... , hn for G()n, a reciprocal basis is a set
of vectors r1, r2, ... , rn such that the inner product
i, j = 1, 2, ... , n (3.5)

Because of this relationship, defining R as the matrix made up of the ri as column


vectors and B as the matrix with hi as column vectors, we have
RtB = I
which is obtained by partitioned multiplication and use of (3.5).
Since B has n linearly independent column vectors, B isnonsingular so that R is
uniquely determined as R = (B-1)t. This demonstrates the set of ri exists, and the set is a
basis for 'Vn. because R-1 = Bt exists so that all n of the rr are linearly independent.
Having a reciprocal basis, the coefficients- f3 in (3.4) can conveniently be expressed.
Taking the inner product with an arbitrary ri on both sides of (3.4) gives
(ri' x) = f3 t (ri' hi) + f3 2(ri' h z) + ... + f3 n (ri , h n )
Use of the property (3.5) then·gives (:3i = (ri'x),
~,CHAP. 3] ELEMENTARY MATRIX THEORY 55

Note that an orthonormal basis is its own reciprocal basis. This is what makes "break-
ing a vector into its components" easy for an orthonormal basis, and indicates how to go
about it for a non-orthonormal basis in CVn.

'3.12 MATRIX REPRESENTATION OF A LINEAR OPERATOR


Definition 3.52: A linear operator L is a function whose domain is the whole of a vector
space 'U 1 and whose range is contained in a vector space 112 , such that for
a and b in 'U1 and scalars ll' and /3,
L(ll'a + ,8b) = ll'L(a) + ,BL(b)
Example 3.39.
An m X n matrix A is a linear operator whose domain is the whole of CU n and whose range is in CUm, i.e.
it transforms n-vectors into m-vectors.

, Example 3.40.
Consider the set 'U of time functions that are linear combinations of sin nt, for n = 1,2, . ... Then
00

any x(t) in 'U can be expressed as x(t) = ~ ~n sin nt. The operation of integrating with respect to
time is a linear operator, because n=l

Example 3.41.
The operation of rotating a vector in CU 2 by an angle ¢ is a linear operator of 'V 2 onto CU 2 • Rotation of a
vector aa + f3b is the same as rotation of both a and b first and then adding a times the rotated a plus fi
times the rotated b.

Theorem 3.20: Given a vector space 1,11 with a basis b 1 , b 2 , ••• , bn, ... , and a vector space
112 with a basis C1, C2, ••• , Cm, • • •• Then the linear operator L whose
domain is 'U 1 and whose range is in 112 can be represented by a matrix
{YjJ whose ith column consists of the components of L(b i ) relative to the
basis Cl, C2, ••• , Cm, • • • •

Proof will be given for n dimensional 'U 1 and nt dimensional 1l2 • Consider any x in 'U 1•

Then x = :t
i=1
pib i • Furthermore since L(b i ) is a vector in 'U 2 , determine Yji such that

L(x) t
i=l
PiL(b)
n
Hence the jth component of L(x) relative to the basis {c j } equals ~ Yji,811 i.e. the matrix
{Yji} times a vector whose components are the Pi. i=l

Example 3.42.
The matrix representation of the rotation by an angle ¢ of Example 3.41 can be found as follows.
Consider the basis e 1 = (1 0) and e2 = (0 1) of CU 2• Then any x = (fi1 f32) = file1 + f32 e2' Rotating
e 1 clockwise by an angle ¢ gives L(e1) =
(cos 4»e1 - (sin ¢)e2. and similarly L(e2) =
(cos ¢)e2 + (sin 1»ev
so that 1'11 = cos 1>, 1'21 = -sin ¢, Y12 = sin 1>. Y22 = cos ¢. Therefore rotation by an angle 1> can be
represented by

L
COS 1>
( -sin 1>
sin
cos ¢
¢)
56 ELEMENTARY MATRIX THEORY [CHAP. 3

Example 3.43.
An elementary row or column operation on a matrix A is represented by the elementary matrix E as
given in the remarks following Definition 3.26.

The null space, range space, rank, etc., of linear transformation are obvious extensions
of the definitions for its matrix representation.

3.13 EXTERIOR PRODUCTS


The advantages of exterior products have become widely recognized only recently, but
the concept was introduced in 1844 by H, G. Grassmann. In essence it is a generalization
of the cross product. We shall only consider its application to determinants here, but it is
part of the general framework of tensor analysis, in which there are many applications.
First realize that a function of an m-vector can itself be a vector. For example,
z = Ax: + By illustrates a vector function z of the vectors x and y. An exterior product,pP
is a vector function of pm-vectors, 31, a2, ••• , 3 p • However,,pP is an element in a generalized
vector space, in that it satisfies Definition 3.44, but has no components that can be arranged
as in Definition 3.7 except in special cases.
The functional dependency of </JP upon 31, 32, ••• , ap is written as pP = 3i /\ 3j /\ ••• /\ ak,
where it is understood there are p of the a's. Each 3 is separated by a /\, read "wedge",
which is Grassmann notation.

Definition 3.53: Given m-vectors ai in 11, where 11 has dimension n. An exterior product
cf>P = 3i /\ 3j /\ ... /\ 3k for p = 0,1,2, ... , n is a vector in an abstract vector
space, denoted /\1YU, such that for any complex numbers a and (3,
(aai + (3aj) /\ ak /\ ' •. /\ at = a(ai /\ 3k /\ ... /\ al) + (3(aj /\ ak /\ ••• /\ aL) (8.6)
ai/\'" 1\3j/\" '/\ak/\'" /\ar = -3i/\" ' / \ ak/\" '/\3j/\'" /\31 (8.7)
3i /\ aj /\ ... /\ 3k #- 0 if ai, 3j, ••• , 3k are linearly independent.
Equation (8.6) says ~p is multilinear, and (3.7) says pP is alternating.

Example 3.44.
The case p= 0 and p = 1 are degenerate, in that 1\ o'U is the space of all complex numbers and
1\ 111 = 1J, the original vector space of m-vectors having dimension n. The first nontrivial example is then
1\2'U. Then equations (3.6) and (3.7) become the biliriearity property
(3.8)

and also (3.9)

Interchanging the vectors in (3.8) according to (3.9) and multiplying by -1 gives


ak /\ (a3t + f3 a j) = a(3k /\ 3i) + /3(ak /\ aj) (3.10)

By (3.10) we see that Hi /\ aj is linear in either 3i or 3j (bilinear) but not both because in general
(aai) /\ (a3) # a(3i /\ 3j).
Note that setting 3) = 3i in (3.9) gives Furthermore if and only if 3j is a linear com-
bina tion of ai' ~ /\ 3j = O.
n n
Let b 1• b 2 , ••• , b n be a basis of 'U. Then 3 i ::::::: ~
/-:=1
a'kbk and 3j = :I
1=1
I'tbl> so that
':CHAP.3] ELEMENTARY MATRIX .THEORY 57

,'Since hk /\ hk ::::: 0 and hk /\ b t ::::: -b[ /\ b k for k> l, this sum can be rearranged to
11. 1.,..1
ai /\ aj = ~ ~ (llk'Yl- lll'Yk)b k /\h l (3.11)
1=1 k=l

Because ai /\ aj is an arbitrary vector in /\2'11, and (3.11) is a linear combination of hie /\ hb then the vectors
hk /\ h z for 1"";;: k < l ".,;;: n form a basis for /\2'11. Summing over all possible k and l satisfying this rela-
tion shows that the dimension of /\211 is n(n - 1)/2 ::::: (;).

Similar to the case /\ 2U, if any ai is a linear combination of the other a's, the exterior
product is zero and otherwise not. Furthermore the exterior products bi /\ h j /\ ••• /\ bk for
1 ~ i < j < ... < k ~ n form a basis for/\ P'U, so that the dimension of /\ Pl.{ is
n!
(n-p)ip!
(8.12)

In particular the only basis vector for 1\ nu is hi /\ b2 /\ ... /\ bn, so 1\ nu is one-dimensional.


Thus the exterior product containing n vectors of an n-dimensional space U is a scalar.

Definition 3.54: The determinant of a linear operator L whose domain and range is the
vector space U with a basis hi, h2' ... , bn is defined as
L(h~) /\ L(h2) /\ ... /\ L(bn)
detL hi /\ b 2 /\ ••• /\ bn
(3.13)

The definiiton is completely independent of the matrix representation of L. If L is an


11, x n matrix A with column vectors aI, a2, ... , an, and since el, e2, ... , en are a basis
Ae1 /\ Ae2 /\ ... '/\ Aen a1 /\ a2 /\ • . • /\ an
detA = el /\ e2 /\ . . . /\ en e1 /\ e2 /\ ... /\ en

Without loss of generality, we may define e1 /\ e2 /\ ... Aen = 1, so that


det A = a1 /\ a2 /\ • • . /\ an (3.14)
,and det I = el /\ ez /\ ... /\ en = 1.
N ow note that the Grassmann notation has a built-in multiplication process.

Definition 3.55: For an exterior product ~p = a1/\ . . . /\ ap in /\ PU and an exterior product


",q = C1 /\ ••• /\ Cq in /\ qu, define exterior rnultiplication as

cpP /\ ",q = a1 /\ ••• /\ ap /\ C1 /\ ••• /\ Cq (3.15)

This definition is consistent with the Definition 3,53 for exterior products, and so
cpP /\ ",q is itself an exterior product.
Also, if 1n =n then a1/\ ••• /\ an-2 /\ a n -1 is an n-vector since ~n-l has dimension n from
(3.12), and must equal some vector in 'U since u and /\ n-'-lU must coincide.

:Theorem 3.21. (Laplace expansion). Given an 11, x n matrix A with column vectors ai.
Then det A = a1 /\ 82 /\ ... /\ an = a[(a2 /\ ... /\ an).

C':,y Proof: Let ei be the ith unit vector in 'Vnand Ej be the ith unit vector in 'Vn-l, i.e. ei has
',''1,'b.components and Ej has n -1 components. Then ai = alie1 + a2ie2 + ... + aniCn so that
58 ELEMENTARY MATRIX THEORY [CHAP. 3

The first exterior product in the sum on the right can be written
el /\ a2 /\ ... /\ an = el /\ (al2e1 + a22e2 + ... + an2en) /\ ••• /\ (alnel + ... + annen )
Using the multilinearity property gives
el /\ a2 /\ . • . /\ an

0)
a22
(0 )
aZn
el /\
( :

anz
/\ . . . /\ :

ann

Since detIn = 1 = detln - 1, then

Performing exactly the same multilinear operations on the left hand side as on the right
hand side gives
e, 1\ [a22 (:J + ... + an,(:'J} ... [a2n (:.) + ... + ann(:JJ
1\

= (~22) /\ ... /\ (7")


an2 ann
detMn = Cn

where Cll is the cofactor of all. Similarly,


n
ej /\ 32 /\ ••• /\ 3n = Cjl and a1 /\ 32/\ ••• /\ 3 n ~ aj1Cjl == (Cn C21 ••• Cn 1)a1
j=1

so that a2/\ .•. /\ an = (Cll C21 ••• Cnl)T and Theorem 3.21 is nothing more than a state-
ment of the Laplace expansion of the determinant about the first column. The use of column
interchanges generalizes the proof of the Laplace expansion about any column, and use of
det A = det AT provides the proof of expansion about any row.

Solved Problems
3.L Multiply the following matrices, where aI, 32, hI and b 2 are column vectors with n
elements.
(i) (1, 2) (!) (iii) (:D (b'lb2 ) (v)
(~ ~J(j)
(ii) (~) (3 4) (iv) (a,la2) ~D (vi) (a, Ia2) (~ ~)
U sing the rule for multiplication from Definition 3.13, and realizing that multiplication of a
k X n matrix times an n X m matrix results in a k X m matrix, we have

(i) (1 X 3 + 2 X 4) = (11) (iii) (v)

(ii)
1 X 3)
( (2 X 3)
(1 X4») = (iv) (vi)
(2 X 4)
3J ELEMENTARY MATRIX THEORY
59

Find the determinant of A (a) by direct computation, (b) by using elementary row
and column operations, and (c) by Laplace's expansion, where
0 0
2 0
A
(1 D
::=
3 1
0 0
(a) To facilitate direct computation, form the table

P Ph Pz, Pa, P4 P PI' P21 Ps, P4


0 1 2 3 4 12 3 1 2 4
1 1 2 4 3 13 3 1 4 2
2 1 3 4 2 14 3 2 4 1
3 1 3 2 4 15 3 2 1 4
4 1 4 2 3 16 3 4 1 2
5 1 4 3 2 17 3 4 2 1
6 2 4 3 1 18 4 3 2 1
7 2 4 1 3 19 4 3 1 2
8 2 3 1 4 20 4 2 1 3
9 2 3 4 1 21 4 2 3 1
10 2 1 4 3 22 4 1 3 2
11 2 1 3 4 23 4 1 , 2 3

There are 4! = 24 terms in this table. Then


detA = +1· 2 ·1 • 2 - 1 • 2 • 8 • 0 + 1· 0 • 8 • 0 - l ' 0 • 3' 2
+ 1'6·3'0 -1'6·1'0 + 0'6'1'0 - 0'6'1'0 + 0'0'1'2
- 0·0·8'0 + 0'1'8'0 - 0'1'1·2 + 0'1'3'2 - 0·1·g'0
+ 0·2·g·0 - 0,2,1-2 + 0'6'1'0 - 0'6'3'0 + 2·0·3'0
- 2'0·1·0 + 2-2-1-0 - 2·2'1'0 + 2'1·1'0 -2-1'3-0
= 4
(b) Using elementary row and column operations, subtract I, 3 and 4 times the bottom row from
the first, second and third rows respectively. This reduces A to a triangUlar matrix, whose
determinant is the product of its diagonal elements l ' 2 ·1· 2 = 4.

(c) Using the Laplace expansion about the bottom row,

det A = 2 det (~ ~ ~ ) = 2•2 4


131

3.3. Prove that det (AT) = det A, where A is an n X n matrix.


n!
If A = {Ctij}, then AT = {Ct ji } so that det (~.T) = ~ (-1)PCtP1 l a p2 2' •. uPnn •

Since a determinant is all possible combinations of products of elements where only one element
is taken from each row and colUmn, the individual terms in the sum are the same. Therefore the
only question is the sign of each product. Consider a typical term from a 3 X 3 matrix: aala12u23,
i.e, P1 =3, P2 = =
1, Ps 2. Permute the elements through a12a31a23 to Ct12a2Sa3i> so that the row
numbers are in natural 1, 2, 3 order instead of the column numbers. From this, it can be concluded
in general that it takes exactly the same number of permutations to undo PI' P2;' . " Pn to 1,2, ... , n
as it does to permute 1,2, ... , n to obtain Pl' P2 • .. " Pn • Therefore p must be the same for each
product term in the series, and so the determinants are equal.
60 ELEMENTARY MATRIX THEORY [CHAP. 3

3.4. A Vandermonde matrix V has the form

Prove V is nonsingular if and only if Bi "1= OJ for i "1= j.


This will be proven by showing
det V = (0 2 - 01)(03 - ° )(° °
2 3- 1)' •• (On - 0n-l)(On - On-2)' •• (On - 01) = II
1 :=:i< ;===n
(OJ - 0i)

For n = 2, det V = O2 - 01' which agrees with the hypothesis. By induction if the hypothesis
can be shown true for n given it is true for n -1, then the hypothesis holds for n :=: 2. Note each
term of det V will contain one and only one element from the nth column, so that

det V = "10 + "lIOn + ... + "In_l 0: - 1


If On = 0i, for i = 1,2, .. 0, n -1, then det V = 0 because two columns are equal. Therefore
°
01' 2 , • 00' 8 n - 1 are the roots of the polynomial, and
Yo + Yl(Jn + ... + "In_18~-1 = "In-l (8 n - 01)(On - O2 )' •• {On - (In-I)

But "In-l is the cofactor of 0:- 1


by the Laplace expansion; hence

"I n - l det (~~ ~~ ....


(J~-2 (J;-2
.............
. ••
~.".~~)
O~=~

By assumption that the hypothesis holds for n - 1,

Combining these relations gives


det V = (on - 01)«(Jn - 82 )' •• (8 n - On-I) IT (OJ - (Ji)
l===i<.i===n-I

3.5. Show' det(! ~) det A det C, where A and Care n x nand m x m matrices
respecti vely.
Either det A=::O or det A :/= O. If det A = 0, then the column vectors of A are linearly
dependent. Hence the column vectors of (:) are linearly dependent, so that

det(! ~) = 0
and the hypothesis holds.
If det A oF 0, then A-I exists and

(! ~) (! :)(~ ~)(! A~IB)

The rightmost matrix is an upper triangular matrix, so its determinant is the product of the
diagonal elements which is unity. Furthermore, repeated use of the Laplace expansion about the
diagonal elements of I gives

det ( ! ~) = det A det (! ~ ) = det C

Use of the product rule of Theorem 3.2 then gives the proof.
CHAP.8J ELEMENTARY MATRIX THEORY 61

3.6. Show that if at, a2, ... , ak are a basis of 'U, then every vector in 'U is expressible
uniquely as a linear combination of aI, a2, ... , a".
Let x be an arbitrary vector in 'U. Because x is in 'U, and 'U is spanned by the basis vectors
aI' a2' ... , ak by definition, the question is one of uniqueness. If there are two or more linear
combinations of a l1 a2' ... , ale that represent x, they can be written as
k
x = ~ f3iai
i=1
and k
x ~ aiai
i=l

Subtracting one expression from the other gives

Because the basis consists of linearly independent vectors, the only way this can be an equality is
for f3i = ai' Therefore all representations are the same, and the theorem is proved.

Note that both properties of a basis were used here. If a set of vectors did not span 'U, all
vectors would not be expressible as a linear combination of the set. If the set did span 'U hut
were linearly dependent, a representation of other vectors would not be unique.

3.7. Given the set of nonzero vectors aI, a2, ... , am in G()n. Show that the set is linearly
dependent if and only if some ak, for 1 < k ,.:::: m, is a linear combination of aI, a2,
... , ak-l.

If part: If ak is a linear combination of aI' a2' •.. , ak -1> then


k-t
ak = ~ f3i ai
i=1

where the f3i are not all zero since ak is nonzero. Then

o = f3lal + f32a2 + ... + f3k-lak-l + (-l)ak + Oak+l + ... + Oam


which satisfies the definition of linear dependence.

Only if part: If the set is linearly dependent, then

where not all the f3i are zero. Find that nonzero 13k such that all f3i = 0 for i> k. Then the
linear combination is

3.8. Show that if an In. x n n1atrix A has n columns with at most r linearly independent
columns, then the null space of A has dimension n - r.
Because A is m X n, then ai E CUm' X E "Un- Renumber the ai so that the first r are the inde-
pendent ones. The rest of the column vectors can be written as

(3.16)

beca use a r + I' .•. , an are linearly dependent and can therefore be expressed as a linear combination
of the linearly independent column vectors. Construct the n - r vectors Xl' X2 • ••. ,x n - r such that
62 ELEMENTARY MATRIX THEORY [CHAP. 3

/311 /321 /3n- r.1


/312 /322 /3n-r,-2

f31r /32r /3n-r,r


Xl = X2 Xn - r
-1 0 0
0 -1 0

0 0 -1

Note that Ax1 = 0 by the first equation of (3.16), Ax2 = 0 by the second equation of (3.16), etc.,
so these are n - r solutions.
Now it will be shown that these vectors are a basis for the null space of A, First, they must
be linearly independent because of the different positions of the -1 in the bottom part of the vectors.
To show they are a basis, then, it must be shown that all solutions can be expressed as a linear
combination of the "4. i.e. it must be shown the Xj span the null space of A. Consider an arbitrary
solution x of Ax = O. Then

~1 /311 /321 /3n-r, 1 0"1

~r f31r /32r /3n-r,r ar


X ~r+ 1 -~r+l -1 - ~r+2 0 - ., . - ~n 0 + 0
~r+2 0 -1 0 0

~n 0 0 -1 0
n-r
Or, in vector notation, x = ~ -~r+ixt + S
i=l

where s is a remainder if the xi do not span the null space of A. If s = 0, then the "4 do span
the null space of A. Check that the last n - r elements of s are zero by writing the vector equality
as a set of scalar equations.
n-r
Multiply both sides of the equation by A. Ax ~ -~r+iAxi + As
i=l

Since Ax = 0 and ~ = 0, As = O. Writing this out in terms of the column vectors of A


r
gives :£
i=l
O"i~ = O. But these column vectors are linearly independent, so ai = O. Hence the
n- r Xi are a basis, so the null space of A has dimension n - r.

3.9. Show that the dimension of the vector space spanned by the row vectors of a matrix
is equal to the dimension of the vector space spanned by the column vectors.
Without loss of generality let the first r-column vectors 3 i be linearly independent and let s
of the row vectors ai be linearly independent. Partition A as follows:

1 aLr+l atn
. , .... , .......... , . , ... .
~

arl ar 2 arr I ar. r + 1 arn


A a r +l,l U r +1,2 •.• a r +1,r 1 a r +Lr+l ... U r +1,n
---------L------
.......................... ··1···'·······,·······,···
I am • r +l

I aLr+l
..... ·1·'·'·········,·······,·
Xr 1 ar. r + 1 • • . am
=
Xr + l i a r +1, r + 1 ar + L n
..... 'I·······················
Xm 1 a-m,r-t 1
3] ELEMENTARY MATRIX THEORY 63

Yl ' .,
-----~-~---"""---
Yr Yr+ I ' • • Yn)
( '~~~ , , .,' •• ,' , • ~~r' • , ~~:.~~ ~' , , ',' .. ,' • , ;~~
r+l
SOthat Xi = (ail ai2 ," and
air)
~
= (ali a2j yr a r + l,j)' Since the Xi are T-vectors, ~
i=l
bixi = 0
for some nonzero bi , Let the vector b T = (b l bz .,' br + 1) so that
1+1
o ~ biXi
i=l

Therefore bTYj = 0 for j = 1,2, ' .. ,1",


T

Since the last n - l' column vectors a i are linearly dependent, ai = ~ Cl'ijaj for i = 1'+ 1, ' , "n.
r r 3=1
Then Yi = ~ a··y· so that bTYi = ~ aijbTYj = 0 for i = r+ 1, ' , "n, Hence
3=1 ~J J ]=1

o= (b TYl b TY2 '" bTYt bTYr+ 1 . . . bTYrJ = blal + b2a 2 + ... + br + 1 arT 1
Therefore l' + 1 of the row vectors ai are linearly dependent, so that s::::: '/", Now consider AT.
The same argument leads to r:::::: 8, so that r = 8,

Show that the rank of the m X n matrix A equals the maximum number of linearly
independent column vectors of A.

Let there be r linearly independent column vectors, and without loss of generality let them be
r
aI' a2 • ' , ., aT' The ai = ~ aijaj for i = r + 1, . , " n, Any Y in the range space of A can be ex-
j=1
pressed in terms of an arbitrary x as
n
y = Ax = ~ aixi
i=l

This shows the ai for i = 1, ...• r span the range space of A, and since they are linearly inde-
pendent they are a basis, so the rank of A = r.

For an m x n matrix A, give necessary and sufficient conditions for the existence and
uniqueness of the solutions of the matrix equations Ax = 0, Ax = b and AX = B
in terms of the column vectors of A.

For Ax = 0, the solution x = 0 always exists, A necessary and sufficient condition for
uniqueness of the solution x = 0 is that the column vectors a 1.,." an are linearly independent.
To'show the necessity: If a i are dependent, by definition of linearly dependent some nonzero ~i
n
exist such that ~ ai~[ = O. Then there exists another solution x = (~l , .. ~n) # O. To show
i=l n
sufficiency: If the a[ are independent, only zero ~i exist such that ~ ai~i = Ax = 0,
i=l
n
For Ax = b, rewrite as b = ~ aixi' Then from Problem 3.10 a necessary and sufficient
i=l
condition for existence of solution is· that b lie in the range space of A, i.e. the space spanned by
the column vectors. To find conditions on the uniqueness of solutions, write one solution as
n n n
(1']1 '1]2 ' • , 7J n ) and another as (~1 g2 ' ., ~?l)' Then b= ~ a i 7Ji = ~ ai~i so that 0 = ~ (1,11 - ~i)ai'
i=1 i=1 i=l
The solution is unique if and only if aI' ... , an are linearly independent,
64 ELEMENTARY ·MATRIX THEORY [CHAP. 3

Whether or not b = 0, necessary and sufficient conditions for existence and uniqueness of
solution to Ax = b are that b lie in the vector space spanned by the column vectors of A· and that
the column vectors are linearly independent.

Since AX = B can be written as Axj = b j for each column vector Xj of X and hi of B, by


the preceding it is required that each hi lie in the range space of A and that all the column vectors
form a basis for the range space of A.

3.12. Given an m X n matrix A. Show rank A = rank AT = rankATA = rankAAT.


By Theorem 3.14, the rank of A equals the number r of linearly -independent column vectors
of A. Hence the dimension of the vector space spanned by the column vectors equals r. By
Theorem 3.13, then the dimension of the vector space spanned by the row vectors equals r. But the
row vectors of A are the column vectors of AT, so AT has rank r.

To show rank A = rankATA, note both A and ATA have n columns, Then consider any vector
y in the null space of A, i.e. Ay O. Then ATAy = =
0, so that y is also in the null space of ATA.
Now consider any vector z in the null space of ATA, i.e. ATAz = O. Then zTATAz = llAzll~ = 0, so
that Az = 0, i.e. z is also in the nun spaca of A. Therefore the null space of A is equal to the
null space of ATA, and has some dimension k. Use of Theorem 3.11 gives rank A = n-k =
rank ATA. Substitution of AT for A in this expression gives rank AT = rank AAT.

3.13. Given an m x n matrix A and an n X k matrix B, show that rankAB ==== rank A,
rankAB ==== rankB. Also show that if B is nonsingular, rank AB = rank A, and
that if A is nonsingular, rankAB = rankB.
Let rank A = r, so that A has r linearly indepedendent column vectors a 1, ••. , a".. Then
r n
ai = ~ (lijaj _ for i = r + 1, ... , n.
j=l
Therefore AB = ~ aib: where
i=l
bi are the row vectors
of B, using partitioned multiplication. Hence

AB = ~
i=l
aib'[ +. i
~=r+l
~
]=1
a'ijajb; =
i=1
~ ai (bi + :i akib~)
k=r+l

so that all the column· vectors of AB are made up of a linear combination of the r independent
column vectors of A, and therefore rank AB ~ r.

Furthermore, use of Theorem 3.15 gives rank B = rank BT. Then use of the first part of this
problem with BT substituted for A and AT for B gives rankBTAT === rank B. Again, Theorem 3.15
gives rank AB = rank BTAT, so that rank AB === rank B.

If A is nonsingular, rank B = rank A -l(AB) ~ rank AB, using A-I for A and AB for B in
the first result of this problem. But since rank AB === rank B, then rank AB = rank B if A-1
exists. Similar reasoning can be used to prove the relnaining statement.

3.14. Given n vectors Xl, X2, ••• , Xn in a generalized vector space possessing a scalar product.
Define the Gram matrix G as the matrix whose elements gij = (Xi, Xj). Prove that
detG = 0 if and only if Xl,X2, •• • ,Xn are linearly dependent. Note that G is a matrix
whose elements are scalars, and that Xi might be a function of time.
Suppose det G = O. Then from Theorem 3.7, /31g1 + fJ2g2 + ... + [:3ngn = 0, where g is a column
n n
vector of G. Then 0 = ~ fJiU'lj = ~ [:3i(Xi, x).
i=1 i=1

Multiplying by a constant 137 and summing still gives zero, so that


n n
o = ~ /3; ~ fJ-i(xi, Xj)
j=l i=l
'CHAP.8J ELEMENTARY MATRIX THEORY 65
n
Use of property (2) of Definition 3.47 then gives :.I f3~xi = 0, which is the definition of linear
dependence. i:::: 1
n
Now suppose the xi are linearly dependent. Then there exist Yi such that .~ YjXj = O.
n n j=1
Taking the inner product with any xi gives 0 = ~ Yj(xt. Xj) = ~ YjUij for any i. Therefore
n j=1 j=1
~ Yigj = 0 and the column vectors of G are linearly dependent so that detG = O.
j=l

3.15. Given two linear transformations Ll and L2J both of whose domain and range are
in 11. Show det(LIL2) = (detLl)(detL2) so that as a particular case detAB =
det BA = det A det B.
Let U have a basis hi. b 2 , ••• , bn' Using exterior products, from (3.13),
L 1L 2 (b 1) L 1L 2 (b 2 ) /\ ••• /\ L 1L 2 (h n )
/\

hI /\ b2 /\ • • • /\ b n
If L2 is singular, the vectors L 2 (b i ) = Ci are linearly dependent, and so are the vectors L 1L 2 (b i ) =
£1 CCi)' Then det (L 1L 2 ) = 0 = det L 2 • If L2 is nonsingular, the vectors Ci are linearly independent
and form a basis of 'U. Then Cl/\ C2 /\ ••• /\ Cn ¥= 0, so that

Cl /\ c2 /\ • • • /\ en

Using exterior products, prove that det (I + ab T) = 1 + bTa.


det (I + ab T ) = (el + b 1a) /\ (e2 + b 2 a) /\ ••• /\ (en + bna)
el /\ e2 /\ ... /\ en + b 1a /\ e2/\ ... /\ en + b2e l /\ a /\ e3 /\ ... /\ en +
+ bnel/\ ... /\ e n - l /\ a

Use of Theorem 3.21 gives


det (I + ab T) = 1 + b1aT (e2/\'" /\ en) - b2a T(el/\ ea /\ ... /\ en)
+ '" + (-l)n-l bnaT(e l /\ . . . /\ en-I)

but el = (e2/\ ... /\ en), etc., so that


det (I + ab T) = 1 + aTblel + a T b2e Z + ... + aTbne n = 1 + aTb

Note use of this permits the definition of a projection matrix P = 1 - abT(aTb}-l such that
det P = 0, Pa = 0, PTb = 0, p2 = P, and the transformation y = Px leave:;; only the hyperplane
bTx pointwise invariant, i.e. bTy = bTpx = bT[I - abT(aTb)-l]x :::; (b T - bT)x == O.
66 ELEMENTARY MATRIX THEORY [CHAP. 3

Supplementary Problems
3.17. Prove that an upper triangular matrix added to or multiplied by an upper triangular matrix results
in an upper triangulal' matrix. -

3.18. Using the formula given in Definition 3.13 for operations with elements, multiply the following

matrices (: ~)(: ~ s~ t)
Next, partition in any compatible manner and verify the validity of partitioned multiplication.

3.19. Transpose ( ~ 2
j
sin t
)
'
and then take the complex conjugate.

3.20. Prove IA = AI = A for any compatible matrix A.

3.21. Prove all skew-symmetric matrices have all their diagonal elements equal to zero.

3.22. Prove (AB)t = BtAt.

3.23. Prove that matrix addition and multiplication are associative and distributive, and that matrix
addition is commutative.

3.24. Find a nonzero matrix which, when multiplied by the matrix B of Example 3.5, page 40, results
in the zero matrix. Hence conclude AB = 0 does not necessarily mean A = 0 or B = O.

3.25. How many times does one particular element appear in the sum for the determinant of an n X n
matrix'!

3.26. Prove (AB)-1 = B-IA-l if the indicated inverses exist.

3.27. Prove (A-l)T = (AT)-I.

3.28. Prove det A-I = (det A)-I.

3.29. Verify d et (21 2) (-1


3 det 1 ~) = det (~ !)
since
(~ :)(-~ ~) (~ 2) 3 . Also verify

(~ ~) -I (-~ ~) (~ -1 ~) -1

4
3.30.

3.31.
Given A =
G D· 1
5
Find A-I.

If both A-I and B-1 exist, does (A + B) -1 exist in general'!


dA
3.32. Given a matrix A(t) whose elements are functions of time. Show dA-l/clt = -A-laTA-I.

1
3.33. Let a nonsingular matrix A be partitioned into Alh A 12 , A21 and A22 such that Au and A22 - A21 Ail Al2
have inverses. Show that

A-I =

and if A21 = 0, then


CHAP. 3] ELEMENTARY MATRIX THEORY 67

3.34. Are the vectors (2 0 -1 8), (1 -3 4 0) and (1 1 -2 2) linearly independent?

3.35. Given the matrix equations

(a) (b)
0'0'2122) (~1)
~2
= (0)
0
(0)
:~:) G:)
0'32
= (~)
0
Using algebraic manipulations on the scalar equations ai1~1 + ~2~2 = 0, find the conditions under
which no solutions exist and the conditions under which many solutions exist, and thus verify
Theorem 3.11 and the results of Problem 3.11.

3.36. Let x be in the null space of A and y be in the range space of AT, Show xTy == O.

3.37. Given matrix A = ( ~ 2 3 4)


1 -1 -1 .
Find a basis for the null space of A.

3.38. For A = (_~ -!). show that (a) an arbitrary vector z = (::) can be expressed as the

sum of two vectors, z = x + y. where x is in the range space of A and y is in the null space of the
transpose of A, and (b) this is true for any matrix A.

3.39. Given n X k matrices A and B and an m X n matrix X such that XA = XB. Under what conditions
can we conclude A = B?

3.40. Given x, yin '"'(..(, where b 1, b 2 , •• " b n are an orthonormal basis of 'U. Show that
n
(x, y) ~ (x, bi)(bi • y)
i=l

3.41. Given real vectors x and y such that IIxl12 = Ily112' Show ex + y) is orthogonal to (x - y).

3.42. Show that rank (A + B) ~ rank A + rank B.


3.43. Define T as the operator that multiplies every vector in 'Va by a constant a and then adds on a
translation vector to. Is T a linear operator?

3.44. Given the three vectors 81 = ev'IT9 -4 3), a 2 = (v'li9 -1 7) and a3 = eVi19 -10 -5), use
the Gram-Schmit procedure on al' a 2 and as in the order given to find a set of orthonormal basis
vectors.

3.45. Show that the exterior product 4-P = 31/\ ... /\ a p satisfies Definition 3.44, i.e. that is an element
in a generalized vector space /\P'ti, the space of all linear combinations of p-fold exterior products.

3.46. Show (aIel + a2e2 + aSe3) /\ ({he1 + f32e2 + f3a e s) = (a2f33 - O!sf32)e1 + (alf33 - 0!3f31)e2 + (a1f32 - a2f31)ea.
illustrating that 3/\ b is the cross product in 'Va.

3.47. Given vectors Xl. X2' •• "Xn and an n X n matrix A such that Yl' Y2' ... , Yn are linearly independent,
where Yi = Axi . Prove that XII x2' ... ,xn are linearly independent.

n!
3.48. Prove that the dimension of /\P'U =
(n - p)! p!

3.49. Show that the remainder for the Schwartz inequality


1 n n
(a, a)(b, b) - I(a, b)12 = -2.~ .~ JUibj - u j b i l
2
~=1 ,=1
for n-vectors. What is the remainder for the inner product defined as (a, b) = f a*(t) b(t) dt?
68 ELEMENTARY MATRIX THEORY [CHAP. 3

Answers to Supplementary Problems


3.18.
2a + x2
bx 2
aj + sin
b sin t
t)

3.19.
0
2
'l7")
x2
(
-j sin t

3.24. ( -_22:
I" :
I" ) for any a and fJ

3.25. (n -1)! as is most easily seen by the Laplace expansion.

3.30. A-I = (~:;~ ~ -~)


-1 -1 1

3.S1. No

3.34. No

3.37. (5 -4 1 O)T and (6 -5 0 l)T are a basis.

3.38. (a) x = (_~) a and y =:: (~) () where a and () are arbitrary, and since x and yare independ-
ent .they span G[J2'

3.39. The n column vectors of X must be linearly independent.

3.43. No, this is an affine transformation.

3.44. b 1 == (-v'i19 -4 3)/12, b2 == (0 3 4)/5 and a3 is coplanar with al and a 2 so that only b 1 and b 2
are required.

3.48. One method of proof uses induction.

3.49. ~ ff 1a(t) beT) - aCT) bet) 12 dT dt


Chapter 4

Matrix Analysis
4.1 EIGENVALUES AND EIGENVECTORS
Definition 4.1: An eigenvalue of the n x n (square) matrix A is one of those scalars A that
permit a nontrivial (x # 0) solution to the equation
Ax = AX (4.1)

Note this equation can be rewritten as (A - AI)x = O. Nontrivial solution vectors x


exist only if det (A - AI) = O.

Example 4.1.
Find the eigenvalues of (~ :). The eigenvalue equation is

Then

{G :) - A (~ n}(::) (~) or

The characteristic equation is det C~ A 3 ~ A) = O. Then (3 - A)(3 - A) - 4 = 0 or 1.2 - 6A + 5 = 0,


a second-order polynomial equation whose roots are the eigenvalues Al:= 1, A2 = 5.

Definition 4.2: The characteristic polynomial of A is det (A - AI). Note the characteristic
polynOll1ial is an nth order polynomial. Then there are n eigenvalues
Al, A21 ••. , An that are the roots of this polynomial, although some might be
repeated roots.

Definition 4.3: An eigenvalue of the square matrix A is said to be distinct if it is not a


repeated root.

Definition 4.4: Associated with each eigenvalue Ai of the n x n matTix A there is a nonzero
solution vector Xi of the eigenvalue equation AXi = AiXi. This solution
vector is called an eigenvector.
Example 4.2 ••
In the previous example, the eigenvector assocated with the eigenvalue 1 is found as follows.

or
(~)
Then 2xl + 4X2 = 0 and Xl + 2X2 = 0, from which Xl = ---:2X2' Thus the eigenvector Xl is Xl = (-~) X2
where the scalar x2 can be any number.

Note that eigenvectors have arbitrary length. This is true because for any scalar a',

the equation Ax = Ax has a solution vector aX since A(ax) = aAx = aAx = A(ll'x).

69
70 MATRIX ANALYSIS [CHAP. 4

Definition 4.5: An eigenvector is normalized to unity if its length is unity, i.e. Ilxll = l.
Sometimes it is easier to normalize x such that one of the elements is unity.
Example 4.3.
The eigenvector normalized to unit length belonging to the eigenvalue 1 in the previous example is
Xl 1(-2) h
= _r;;:
v5 1
I
Izmg Its fi rst e ement to umty gIves
, w ereas norma 1'" .. Xl = ( 1)
-1/2
.

4.2 INTRODUCTION TO THE SIMILARITY TRANSFORMATION


Consider the general time-invariant continuous-time state equation with no inputs,
dx(t)/dt = Ax(t) (4.2)
where A is a constant coefficient n x n matrix of real numbers. The initial condition is
given as x(O) = Xo.

Example 4.4.
Written out, the state equations (4.2) are
dXI(t)/dt = allxI(t) + al2x 2(t) + ... + alnXn(t)
dX2(t)/dt = a2l x l(t) + a22x 2(t) + ... + a2nx n(t)

and the initial conditions are given, such as

Now define a new variable, an n-vector y(t), by the one to one relationship
y(t) = M-l.x(t) (4.3)
It is required that M be an n X n nonsingular constant coefficient matrix so that the solution
x can be determined from the solution for the new variable y(t). Putting x(t) = My(t)
into the system equation gives
M dy(t)/dt = AMy(t)
Multiplying on the left by M-l gives
dy(t)/dt = M-1AMy(t) (4.4)

Definition 4.6: The transformation T-IAT, where T is an arbitrary matrix, is called a


similarity transformation on A. It is called similarity because the problem
is similar to the original one but with a change of variables from x to y.
Suppose M was chosen very cleverly to make M-1AM a diagonal matrix.A.. Then

( ~ ~~ :'.'... ~.)(~:~~~)
Yl(t) )
dy(t) !!:.- Y2(t) = .A.y( t)
----a;r--- = dt : .. ..
(
Yn(t) o 0 ... An Yn(t)
CHAP. 4] MATRIX ANALYSIS 71

Writing this equation _out gives dyJdt = AiYi for i = 1,2, ... , n. The solution can be ex-
pressed simply as Yi(t) = Yi(O)e Ait • Therefore if an M such that M-1AM =.A. can be found,
solution of dx/dt = Ax becomes easy. Although not always, such an M can usually be found.
In cases where it cannot, a T can always be found where T-1AT is aln10st diagonal.
Physically it must be the case that not all differential equations can be reduced to this
simple form. Some differential equations have as solutions teA;t, and there is no way to get
this solution from the simple form.
The transformation M is constructed upon solution of the eigenvalue problem for all
the eigenvectors Xi, i = 1,2, ... , n. Because AXi = A1Xi for i = 1,2, ... , n, the equations
can be "stacked up" using the rules of multiplication of partitioned matrices:
(AXI I AX21 ... I AXn)
(AlXl I A2 x 21 ... IAnXn)

(Xl ! XQ I . . . IXn) (~~ ~2


•• ••••••••••• ~ ~)

o 0 ... An

Therefore M = (Xl IX2! . . . IXu) (4.5)


When M is singular, A cannot be found. Under a number of different conditions, it can
be shown M is nonsingular. One of these conditions is stated as the next theorem, and
other conditions will be found later.

Theorem 4.1: If the eigenvalues of an n X n matrix are distinct, then the eigenvectors are
linearly independent.
Note that if the eigenvectors are linearly independent, M is nonsingular.
Proof: The proof is by contradiction. Let A have distinct eigenvalues. Let Xl, XQ, ••• , Xu
be the eigenvectors of A, with Xl, XQ, ••• ,XI,; independent and Xk+ 1, ••• , Xu dependent. Then
k
~ {3 l..J
..::;.;
x.! for j = k+1, k+2, .. . ,n where not all (3 tJ.. = O. Since x.J is an eigenvector,
i=1
k

\Xj = Aj ~ f3ijXi
i=1
for j = k+1, .. . ,n
k Ii:

Also, Axj -- ~f3ijAxi = ~ f3 ij \Xi


i=l i=l

Subtracting this equation from the previous one gives


Ii:
o = ~ f3ij(\ - ,\)xi
i=1

But the Xi, i = 1,2, ... , k, were assumed to be linearly independent. Because not all f3 ij
are zero, some Ai = Aj. This contradicts the assumption that A had distinct eigenvalues,
and so all the eigenvectors of A must be linearly independent.

4.3 PROPERTIES OF SIMILARITY TRANSFORMATIONS


To determine when a T can be found such that T-IAT gives a diagonal matrix, the
properties of a similarity transformation must be examined. Define
S = T-1AT (4. 6)
72 MATRIX ANALYSIS [CHAP. 4

Then the eigenvalues of 8 are found as the roots of det (8 - AI) = O. But
det (S - AI) det (T-IAT - AI)
det (T-IAT - AT-lIT)
det [T-l(A - AI)T]
Using the product rule for determinants,
det (S - AI) = det T-l det (A - AI) det T
Since detT-l = (det T)-l from Problem 3.12, det (S - AI) = det (A - AI). Therefore we
have proved

Theorem 4.2: All similar matrices have the same eigenvalues.

Corollary 4.3: All similar matrices have the same traces and determinants.
Proof of this corollary is given in Probleln 4.1.
A useful fact to note here "~lso is that all triangular matrices B display eigenvalues on
the diagonal, because the detetminant of the triangular matrix (B - AI) is th~ _product of
its diagonal elements.

Theorem 4.4: A matrix A can be reduced to a diagonal matrix A by a similarity trans-


formation if and only if a set of n linearly independent eigenvectors can
be found.

Proof: By Theorem 4.2, the diagonal matrix A must have the eigenvalues of A appear-
ing on the diagonal. If AT = TA, by partitioned matrix multiplication it is required that
Ati = Ait, where t are the column vectors of T. Therefore it is required that T have the
eigenvectors of A as its column vectors, and T-l exists if and only if its column vectors are
linearly independent.
It has already been shown that when the eigenvalues are distinct, T is nonsingular.
So consider what happens when the eigenvalues are not distinct. Theorem 4.4 says that
the only way we can obtain a diagonal matrix is to find n linearly independent eigenvectors.
Then there are two cases:
Case 1. For each root that is repeated k times, the space of eigenvectors belonging to
that root is k-dirnensional. In this case the matrix can still be reduced to a diagonal form.

~ ~ ~).
Example 4.5.
Given the matrix A = ( Then det (A - AI) = -i\.(1 - i\.)2 and the eigenvalues are
-1 0 0
0, 1 and 1. For the zero eigenvalue, solution of Ax = 0 gives x = (0 1 -1). For the unity eigenvalue,
the eigenvalue problem is

(
~ ~ ~) (:~)
-1 0 0 X3
(1) ( : ; )
X3

This gives the set of equations


CHAP. 4] MATRIX ANALYSIS 73

Therefore all eigenvectors belonging to the eigenvalue 1 have the form

where xl and X2 are arbitrary. Hence' any two linearly independent vectors in the space spanned by
(0 1 0) and (1 0 -1) will do. The transformation matrix is then

T M = C~ ~-D
and T-IAT = A., where A has 0, 1 and 1 on the diagonal in that order.

Note that the occurrence of distinct eigenvalues falls into Case 1. Every distinct eigen-
value must have at least one eigenvector associated with it, and since there are n distinct
eigenvalues there are n eigenvectors. By Theorem 4.1 these are linearly independent.

Case 2. The conditions of Case 1 do not hold. Then the matrix cannot be reduced
to a diagonal form by a similarity transformation.

Example 4.6.
Given the matrix A = (~ ~). Since A is triangular, the eigenvalues are displayed as 1 and 1.
Then the eigenvalue problem is

which gives the set of equations X2 = 0, 0 = O. All eigenvectors belonging to 1 have the form (Xl O)T.
Two linearly independent eigenvectors are simply not available to form M.

Because in Case 2 a diagonal matrix cannot be formed by a similarity transformation,


there arises the question of what is the simplest matrix that is almost diagonal that can be
formed by a similarity transformation. This is answered in the next section.

4.4 JORDAN FORM


The form closest to diagonal to which an arbitrary n x n matrix can be transformed
by a similarity transformation is the Jordan form, denoted J. Proof of its existence in all
cases can be found in standard texts. In the interest of brevity we omit the lengthy develop-
ment needed to show this form can always be obtained, and merely show how to obtain it.
The Jordan form J is an upper triangular matrix and, as per the remarks of the preceding
section, the eigenvalues of the A matrix must be displayed on the diagonal. If the A matrix
has r linearly independent eigenvectors, the Jordan form has n - r ones above the diagonal,
and all other elements are zero. The general form is

J = (4.7)
74 MATRIX ANALYSIS [CHAP. 4

Each Lji(Ai) is an upper triangular square matrix, called a Jordan block, on the diagonal
of the Jordan form J. Several Lji(Ai) can be associated with each value of Ai, and may differ
in dimension from one another. A general Lji(Ai) looks like
Ai 1 0 0
0 Ai 1 0
Lji(Ai) 0 0 Ai 0 (4.8)
.................
0 0 0 Ai

where Ai are on the diagonal and ones occur in all places just above the diagonal.

(Ao; :01':1 000).


Example 4.7.
Consider the Jordan form J -- 1\ Because all ones must occur above the

o 0 0 A2
diagonal in a Jordan block, wherever a zero above the diagonal occurs in J there must occur a boundary
between two Jordan blocks. Therefore this J contains three Jordan blocks,

There is one and only one linearly independent eigenvector associated with each Jordan
block and vice versa. This leads to the calculation procedure for the other column vectors
tt of T called generalized eigenvectors associated wit~ each Jordan block Lji(Ai):
AXi AiXi
Atl Ait1 + Xi
(4.9)

Note the number of tl equals the number of ones in the associated Lji(Ai). Then
A(Xi I t1 It21 ... Itt I ... ) (AiXi I Ait1 + Xi I Ait2 + t1 I ... I Ait! + tl-1 I •.. )
= (Xi Ih It21 ... Itl I ... )Lji(Ai)

This procedure for calculating the tl works very well as long as Xi is determined to within
a multiplicative constant, because then each tz is determined to within a mUltiplicative
constant. However, difficulty is encountered whenever there is more than one Jordan
block associated with a single value of an eigenvalue. Considerable background in linear
algebra is required to find a construction procedure for the t! in this case, which arises so
seldom in practice that the general case will not be pursued here. If this case arises, a
trial and error procedure along the lines of the next example can be used.

Example 4.8.
Find the transformation matrix T that reduces the matrix A to Jordan form, where

A G-~ D.
CHAP. 4] MATRIX ANALYSIS 75

The characteristic equation is (2 - ;\)(3 - ;\)(1 -;\) + (2 -;\) = O. A factor 2 -;\ can be removed,
and the remaining equation can be arranged so that the characteristic equation becomes (2- ;\)3 = O.
Solving for the eigenvectors belonging to the eigenvalue 2 results in

Therefore any eigenvector can be expressed in a linear combination as

What combination should be tried to start the procedure described by equations (4,.9)? Trying the general
expression gives

Then

7"2 + 73 = f3
-7"2 - 73 -[3

=
These equations are satisfied if a f3 This gives the correct x =
a(1 1 -l)T. Normalizing x by setting
a = 1 gives t = ('7'1 '7'2 1 - 72)T. The transformation matl'ix is completed by any other linearly inde-
pendent choice of x, say (0 1- _1)T, and any choice of '7'1 and 7'2 such that t is linearly independent of the
choices of all x, say '7'1 = 0 and '7'2 = 1. This gives AT = TJ, or '0

(~ ! ~)( ~ ~~)
o -1 1 -1 0 -1
(~~ ~)(~ ~ ~)
-1 0 -1 0 0 2

4.5 QUADRATIC FORMS


Definition 4.7: A quadratic form £?( is a real polynomial in the real variables ~1' ~2' ••• , ~n
n n
containing only terms of the form a:ij~i~j' such that ~ = i~ j~ aij~i~j'
where aij is real for all i and i.

Example 4.9.
Some typical quadr~tic forms are
.w.l 7~i
.w.2 = 3~i - 2~1~2 + ~~ + 5~1~3 - 7~2~1

.w.3 = all~i+ a12~i~2 + a21~2~1 + a22~~


.w.4 t~i + (1- t2)~1~2 - et~;

Theorem 4.5: All quadratic forms E<. can be expressed as the inner product (x, Qx) and
vice versa, where Q is an n X n Hermitian matrix, i.e. Qt = Q.

Proof: First ~ to (x, Qx):


(4.10)
76 MATRIX ANALYSIS [CHAP. 4

Let Q = {qij} = i{aij+ajJ. Then qij = qw so Q is real and symmetric, and ~=XTQX.
Next, (x, Qx) to ~ (the problem is to prove the coefficients are real):
n n
(x, Qx) LL
= i=lj=l qi"~i~'
J J
and
Then n n
(x, Qx) = lex, Qx) + t(x, Qtx) = L ~ (qt·J + q:)~i~'
! i=lj=l J J
n n
So (x, Qx) = LL
i=1 j=1
Re (qi')~i~'
J J
=~ and the coefficients are real.

Theorem 406: The eigenvalues of an n X n Hermitian matrix Q = Qt are real, and the
eigenvectors belonging to distinct eigenvalues are orthogonal.
The most important case of real symmetric Q is included in Theorem 4.6 because the
set of real symmetric matrices is included in the set of Hermitian matrices.
Proof: The eigenvalue problems for specific Ai and Aj are

(4.11)

Since Q is Hermitian,
QtXj = A.jXj

Taking the complex conjugate transpose gives


xfQ
3
= >..~J xfJ (4.12)
Multiplying (4.12) on the right by Xi and (4.11) on the left by xT gives
XjQXi - xjQXi = 0 = (>..~ - >..)xTxi
If j = i, then yxtxi is a norm on Xi and cannot be zero, so that \ = >"i, meaning each eigen-
value is real. Then if j -:F i, >..*J. - A.t = A.J - A..t But for distinct eigenvalues, >...J - A.t =/= 0,
so xj Xi = 0 and the eigenvectors are orthogonal.

Theorem 4.7: Even if the eigenvalues are not distinct, a set of n orthonormal eigenvectors
can be found for an n X n normal matrix N.
The proof is left to the solved problems. Note both Hermitian and real symmetric
matrices are normal so that Theorem 4.6 is a special case of this theorem.

Corollary 4.8: A Hermitian (or real symmetric) matrix Q can always be reduced to a
diagonal matrix by a unitary transformation, where U-IQV =.A and
V-I = vt.
Proof: Since Q is Hermitian, it is also normal. Then by Theorem 4.7 there are n
orthonormal eigenvectors and they are all independent. By Theorem 4.4 this is a necessary
and sufficient condition for diagonaIization. To show a transformation matrix is unitary,
construct V with the orthonormal eigenvectors as column vectors. Then
xt1
xt2
CHAP. 4] -MATRIX ANALYSIS 77

But xTx
j = (Xi'X) = 8ij because they are orthonormal. Then utu = I. Since the column
vectors of U are linearly independent, U- 1 exists, so multiplying on the right by U-l gives
ut = U-l, which was to be proven.
Therefore if a quadratic form fL = xtQx is given, rotating coordinates by defining
x = Uy gives ~ == ytutQUy = ytAy. In other words, ~ can be expressed as
£t = Al1Ylj2 + A21Y21 2 + ... + An IYnl 2
where the Ai are the real eigenvalues of Q. Note ~ is always positive if the eigenvalues
of Q are positive, unless y, and hence x, is identically the zero vector. Then the square root of
~ is a norm of the x vector because an inner product can be defined as (x, Y)Q = xtQy.

Definition 4.8: An 11, X n Hermitian matrix Q is positive definite if its associated quadratic
form ~ is always positive except when x is identically the zero vector.
Then Q is positive definite if and only if all its eigenvalues are> O.

Definition 4.9: An n x n Hermitian matrix Q is nonnegative definite if its associated


quadratic form ~ is never negative. (It may be zero at times when x is
not zero.) Then Q is nonnegative if and only if all its eigenvalues are ~ O.
Example 4.10.
~ = -
~i 2~1~2 + ~i = (~1 - ~2)2 can be zero when ~1 =: ~2' and so is nonnegative definite.
The geometric solution of constant Q. when Q is positive definite is an ellipse in n-space.

Theorem 4.9: A unique positive definite Hermitian matrix R exists such that RR = Q,
where Q is a Hermitian positive definite matrix. R is called the square root
ofQ.
P1'oof: Let U be the unitary matrix that diagonalizes Q. Then Q = UAU t . Since
Au is a positive diagonal element of A, defineAl/2 as the diagonal matrix of positive A1/ 2.
Q = UAlIZAl/2Ut = UAl/2UtU.A 1 / 2 U t
Now let R = UA1!2Ut and it is symmetric, real and positive definite because its eigenvalues
are positive. Uniqueness is proved in Problem 4.5.
One way to check if a Hermitian matrix is positive definite (or nonnegative definite)
is to see if its eigenvalues are all positive (or nonnegative). Another way to check is to use
Sylvester's criterion.

Definition 4.10: The mth leading principal minor, denoted detQm, of the n x n Hermitian
matrix Q is the determinant of the matrix Qm formed by deleting the last
n - m rows and columns of Q.

Theorem 4.10: A Hermitian matrix Q is positive definite (or nonnegative definite) if and
only if all the leading principal minors of Q are positive (or nonnegative) .
....
A proof is given in Problem 4.6.
Example 4.11.
Given Q = {qij}' Then Q is positive definite if and only if

o < det Ql = q11: 0 < det Q 2 = det (q~l q12); .•. , 0 < det Q n = det Q
q12 QZ2
If < is replaced by ~. Q is nonnegative definite.
Rearrangement of the elements of Q sometimes leads to simpler algebraic inequalities.
78 MATRIX ANALYSIS [CHAP. 4

Example 4.12.
The quadratic form

Q. =
is positive definite if ql1 > 0 and qnq22 - qi2 > O. But Q. can be written another way:

GI --
~ qll;1 2+ 2 q12;1;2 + 2 -_ (;2;1) (q22
q22;2 . q12) (~2)
q12 qu ~1

which is positive definite if q22> 0 and qllq22 - qi2 > O. The conclusion is that Q is positive definite if
det Q > 0 and either q22 or qu can be shown greater than zero.

4.6 MATRIX NORMS


Definition 4.11: A nOTm of a 'matrix A, denoted IIAII, is the mininlum value of K. such that
[IAxii ~ K!ix!1 for all x.
Geometrically, multiplication by a matrix A changes the length of a vector. Choose
the vector Xo whose length is increased the most. Then IIAII is the ratio of the length of
Axo to the length of Xo.
The matrix norm is understood to be the same kind of norm as the vector norm ·in its
defining relationship. Vector norms have been covered in Section 3.10. Hence the matrix
norm, is to be taken in the sense of corresponding vector norm.
Example 4.13.
To find jjU112, where ut = U-l, consider any nonzero vector x.
IIUxll~ = xtutUx ::::: xtx = Ilxil~ and so IIUl1 2 = 1

Theorem 4.11: Properties ,of any matrix norm are:


(1) IIAxl1 ~ IIAlllixl1
(2) IIAII = max IIAull, whel~e the maximum value of IIAul1 is to be taken
over those u such that Ilull = 1-
(3) IIA+Blj ~ IjAl1 + IIBII
(4) IIABjj ~ IIAIIIIBII
(5) IAI ~ IIAII for any eigenvalue A of A.
(6) IIAII = 0 if and only if A = O.
Proof: Since IIAII = K
min , substitution into Definition 4.11 gives (1).

To show (2), consider the vector u= X/a, where a = Ilxll. Then


II Ax II = IlaAul1 = ialllAul1 ~ Kllxll = Klailiull
Division by Ja) gives IIAulj ~ Kilull, so that only unity length vectors need be considered
instead of all X in Definition 4.11. Geometrically, since A is a linear operator its effect on
the length of aX is in direct proportion to its effect on x, so that the selection of Xo should
depend only on its direction.
To show (3),
II(A+B)xil = IIAx+Bx)1 ~ IIAxl1 + IIBx!1 ~ (IIAII + IIBII) Ilxli
for any x. The first inequality results from the triangle inequality, Definition 3.46(4), for
vector norms, as defined previously.
CHAP.4J MATRIX ANALYSIS 79

To show (4), use Definition 4.11 on the vectors Bx and x:


[[ABxll = IIA(Bx)11 ~ IIAllllBxl1 ~ IIAllllBllllxl1
for any x.
To show (5), consider the eigenvalue problem Ax = Ax. Then using the norm property
of vectors Definition 3.46(5),
IAlllxl1 = IIAxl1 = IIAxI[ ~ I]Allllxll
Since x is an eigenvector, llxll ~ 0 and (5) follows.
To show (6), IJAII = 0 implies IIAxI] = 0 from Definition 4.11. Then Ax= 0 so that
Ax = Ox for any x. Therefore A = O. The converse 1[011 == 0 is obvious.

Theorem 4.12: ]jAII 2 -- pmax' where pZmax is the maximum eigenvalue of AtA, and further-
more
o ~ Pmin ~
IIAxllz ~ Pmax
Ilxllz
To calculate IIA1Jz' find the maximum eigenvalue of AtA.

Proof: Consider the eigenvalue problem

AtAgi = P;gi
Since (x, AtAx) = (Ax, Ax) = [[Ax]!; ~ 0, then AtA is nonnegative definite and pf ~ O.
Since At A is Hermitian, PT is real and the gj can be chosen orthonormal. Express any x
n
in CV n as x = ~ eigi . Then
i=1 n n

I]AxI]~ ~ ]1~iAgill:
i=1
~ gfp;
i=1

Ilxll; n = n

~ II~igill~
i=1
~ ~f
i=1
n n n
Since (pnmin ~ ~f ~ ~ ~TP7 ~ (pT)max ~ ~L taking square roots gives the inequality of
i=l 1=1 i=1

the theorem. Note that IIAxliz = Pmaxllxl12 when x is the eigenvector gi belonging to (P~)lnaK'

4.7 FUNCTIONS OF A MATRIX


Given an analytic scalar function f(a) of a scalar a, it can be uniquely expressed in a
convergent Maclaurin series,
f(a)

where Ik = dkf(a)/da k evaluated at a = O.

Definition 4.12: Given an analytic function f(a) of a scalar a, the function of an n X n 1natrix

A is f(A) = i
k=O
tkAk/k!.

Example 4.14.
Some functions of a matrix A are
cosA = (cosO)I + (-sinO)A + (-cosOjA2/2 + ... + (-1)mA2m/(2'm)! +
eAt = (eO)I + {eO)At + (e O)A2 t 2/2 + ... + Aktk/k! + ...
80 MATRIX ANALYSIS [CHAP~4

Theorem 4.13: If T-IAT = J, then f(A) = Tf(J) T-l.


Proof: Note Ak = AA· .. A = TJT-ITJT~l ... TJT-l = TJkT-l. Then

f(A) = k=O
00

"'1:, fkAk/k! = t
k=O
fk,TJkT- 1/k!

Theorem 4.14: If A = TAT-l, where.A. is the diagonal matrix of eigenvalues Ai, then

f(A) = f(A1)xlr! + f(A )x r t + ... + f(An)xnr~


2 2

Proof: From Theorem 4.13,


f(A) = Tf(A)T-l (4.13)
But
o
f(A) =
o

Therefore

f(A) ==
/(A') I(A2) 0) (4.14)
(
o f(An}
Also, let T = (Xl Ix21 ... IXn) and T-l = (rl Ir21 .- .. Irn)t, where ri is the reciprocal basis
vector. Then from (4.13) and (4.14),

f{A)

The theorem follows upon partitioned multiplication of these last two matrices.
The square root function /(0:) == a 1l2 is not analytic upon substitution of any Ai. There-
fore- the square root R of the positive definite matrix Q had to be adapted by -always taking
the positive square root of Xi for uniqueness in Theorem 4.9.

Definition 4.13: Let f(a) == a in Theorem 4.14 to get the spectral representation of
A == TAT~l:

A =
CHAP. A] MATRIX ANALYSIS 81

Note that this is valid only for those A that can be diagonalized by a similarity trans-
formation.
To calculate I(A) = T/(J)T-I, we must find I(J). From equation (4.7),

(Ll1(Al~..
Jk =
o
0
Lmn(An)
)k =
_ ( L~l(Al~..
0
k 0

Lmn(An)
)

where the last equality follows from partitioned matrix multiplication. Then

I(J) ~ IkJk/k! =
/(Lll) 0) (4.15)
k=O ( o '. I(Lmn) ,

Hence, it is only necessary to find I(L) and use (4.15). By calculation it can be found that
for an l x l matrix L,
k ) Ak-U-IJ
( l-l

k ) Ak -CZ- 2 J (4.16)
( l- 2

o o
where (mn) = (n-m)!m!'
n! the number of combinations of n elements taken m at a time.

Then from f(L) = t


n=O
fkLk/k!, the upper right hand terms all are

all = t
k,.=l-l
Ik(
l
~-)Ak-(t-l}/k! =
1
1:
k=l-l
I k Ak -CI-l)/[(l-1)!(k-l+1)lJ (4.17)

but
f
k=l-l
I k Ak - a - 1 )/(k-l+1)! (4.18)

The series converge since I(A} is analytic, so that by comparing (4.17) and (.4-.18),
1 dl-I/(A}
(l- I) ! dA l - 1
Therefore
/()") dl/dA [(l-l)!J_ldl_ll/dAI_l)

I(L) =
(
0.... !~~)........... .[~l.~ .2} .!].~1.~1~~~/.~>".1~~
.. (4.19)

° 0 ... I(A}
From (4.19) and (4.15) and Theorem 4.13, I(A) can be computed.
Another almosLequivalent ,method that can be used comes from the Cayley-Hamilton
theorem.

Theorem 4.15 (Cayley-Hamilton): Given an arbitrary n x n matrix A with a charac-


teristic polynomial ¢(>..) = det (A - AI). Then ¢(A) = O.
The proof is given in Problem 5.4.
82 MATRIX ANALYSIS [CHAP. 4

Example 4.15.
Given A Then

det (A - }'I) = ¢(}.) = }.2 - 6}. +5 (4.20)

By the Cayley-Hamilton theorem,

¢(A) = A2 - 6A + 51 = G! ~~) - G~) + G~)


6 5 = (~ ~)
The Cayley-Hamilton theorem gives a means of expressing any power of a matrix in
terms of a linear combination of Am for -m = 0,1, ... , n-l.

Example 4.16.
From Example 4.15, the given A matrix satisfies 0 = A2 - 6A + 51. Then A2 can be expressed in
terms of A and I by
A2 = SA - 51 (4.21)

Also A3 can be found by multiplying (4.21) by A and then using (4.21) again:

A3 = 6A2 - 5A = 6(6A -:- 51) - 5A = 31A - 301

Similarly any power of A can be found by this method, including A-I if it exists, because (4.21) can be
multiplied by A-I to obtain
A-I = (61-A)/5

Theorem 4.16: For an n X n matrix A,


I(A) = YIAn-1 + Y2 An - 2 + ... + Yn-I A + Yn I
where the scalars Yi can be found from
f(J) = Yl In-l + Y2Jn-2 + ... + Yn- l J + Yn I
Here 1(1) is found from (4.15) and (4.19), and very simply from (4.14) if A can be
diagonalized.

This method avoids the calculation of T and T-l at the expense of solving
n

I(J) = ~ Yi Jn - i for the Yi'


i=1
'11.-1

Pro 01; Since I(A) 1: fkAk/k!


= k=O and Ale = l: <XkmA.m
m=O
by the Cayley-Hamilton
theorem, then

The quantity in brackets is Yn-m •

Also, from Theorem 4.13,


n n
I(J) = T-lf(A)T = T-1 ~ YiAn-iT = ~
i=1
"'liT-IAn-iT
i=l

Example 4.17.
For the A given in Example 4.15, cos A = riA + Y21. Here A has eigenvalues }.1 = 1, }.2 = 5. From
cos A = YtA + Y21 we obtain
cos }.1 Y1}.1 + "'12

COS}.2 Yl}.2 + "'12


CHAP. 4] MATRIX ANALYSIS 83

Solving for Yl and Y2 gives


cosA
cos 1- 5(32
cos
1-5
2) + 5 cos 1 - cos 5 (1
3 5- 1 \0

cos
2
1( 1-1) + co~ (1 11)
-1 1 2 1

Use of complex variable theory gives a very neat representation of f(A) and leads to
other computational procedures.

Theorem 4.17: If f(a} is analytic in a region containing the eigenvalues >"i of A, then

f(A) = 21. £: /(s) (sI - A)-1 ds


7iJ J'
where the contour integration is around the boundary of the region.

Proof: Si~ce f(A) = Tf(J)T-l = T [2~j .f [(s) (sl - J)-l as]T-l, it suffices to show
that f(L) = 27ij f /(s) (s I ~ L)-l ds. Since

sI - L
(
s ~~ s.~~~
o
.. .
0
.. ::: .... ..)
...
~
s-)..,

then
(S - >..)1-1 (8 - >..)1-2 1)
(81 - L)-l (s 2y)' ( ... ......~ (~~. ~).'~~ .. : : : .... ~ .~ ~..
o o . .. (8 - >..)1-1

The upper right hand term all of 21. £: /(8) (sI - L)-l ds is
7iJ J'

all 27ij
1 f f(s}ds
(8 + >..}1
Because all the eigenvalues are within the contour, use of the Cauchy integral formula then
gives
1 d1- 1f(>..)
-(Z---l)-! d>..l-l
which is identical with equation (4.19).

Example 4.18.
Using Theorem 4.17,
cos A = 2~i f cos s(81 - A)-l ds

For A as given in Example 4.15,


(81 - A)-l =
s- 3
( -2 8 -
-2 )
3
-1 = 82 -
1
68 +5
(8 - 2
3

Then
cos A
_ 2- 1:
-
cos
21rj:r (s - 1)(8 - 5)
8 (8 - 3 2) 2 s - 3
ds
84 MATRIX ANALYSIS [CHAP. 4

Performing a matrix partial fraction expansion gives

cosA = _1 f cos1 (-2 2) ds + lf~ (2 2)d


2rri -4(8 -1) 2 -2 271"i 4(8 - 5) 2 2 8

4.8 PSEUDOINVERSE
When the determinant of an n X n matrix is zero, or even when the matrix is not square,
there exists a way to obtain a solution "as close as possible" to the equation' Ax = y. In
this section we let A be an m X n real matrix and first examine the properties of the real
symmetric square matrices ATA and AAT, which are n X nand m x m respectively.

Theorem 4.18: The matrix BTB is nonnegative definite.


Proof: Consider the quadratic form ~ = yTy, which is never negative. Let y = Bx,
where B can be m x n. Then xTBTBx:::"" 0, so that BTH is nonnegative definite.
Fromthistheorem, Theorem 4.6 and Definition 4.9, the eigenvalues of ATA are either zero
or positive and real. This is also true of the eigenvalues of AAT, in which case we let
B = AT in Theorem 4.18.

Theorem 4.19: Let A be an m x n matrix of rank r, where r ~ m and r ~ n; let gf be an


orthonormal eigenvector of AT A; let fi be an orthonormal eigenvector of
AAT, and let p~ be the nonzero eigenvalues of ATA. Then
~

(1) pf are also the nonzero eigenvalues of AAT.


(2) Ag i Pifi for i = 1,2, ... , r
(3) Agi = 0 for i = r+1, .. . ,n
(4) ATf.t Pigi for i = 1,2, .. . ,r
(5) AT£.t 0 for i = r+1, .. . ,m

Proof: From Problem 3.12, there are exactly r nonzero eigenvalues of ATA and AAT.
Then ATAg.-t = p~g,
t t
for i = 1,2, -. .. , rand ATAg;• = 0 for i = r + 1, ... , n. Define an
=
m-vector hi as hi == Ag/Pi for i 1,2, ... , r. Then
AA'I'bi = AATAgJpi = A(pfg)/Pi = pfhi
Furthermore,
hih; = gfATAg/ PiPj = Pjgfg/Pi = 8ij
Since for each i there is one normalized eigenvector, hi can be taken equal to fi and (2) is
proven. Furthermore, since there are r of the p?,t
these must be the eigenvalues of AAT and
(1) is proven. Also, we can find fi for i = r+ 1, ... , m such that AATfi= 0 and are ortho-
normal. Hence the fi are an orthonormal b~sis for 'Vm and the gi are an orthonormal basis
for CV n •
To prove (3), ATAgi = 0 for i= r+l, ... ,m. Then IIAgill~ = gfATAgi = 0, so that
Agi = O. Similarly, since AATfi = 0 for i == r+ 1, ... , m, then ATfi = 0 and (5) is proven.
Finally, to prove (4),
(ATAg)/p~ P1?g./p·1
~
'-CHAP. 4] MATRIX ANALYSIS 85

Example 4.19.
6 0 4 0
Let A -- (6 1 0 6 ). Then m = r = 2, n = 4.
72 6 24 36)
AAT = (52 86) ATA = 6 1 0 6
86 73 ( 24 0 16 0
36 6 0 36

The eigenvalue of AAT are pi


= 100 and P~ = 25. The eigenvalues of ATA are pi = 100, pi = 25,
p; = p! =
0, O. The eigenvectors of ATA and AAT are
gl 0.84 0.08 0.24 O.48)T

g2 = 0.24 -0.12 0.64 -O.72)T


ga (-0.49 0.00 0;78 O.49)T

g4 = 0.04 -0.98 -0.06 -0.13)T

f1 = 0.6 0.8}T

f2 = 0.8 -O.6)T

From the above, propositions (1), (2), (3), (4) and (5) of Theorem 4.19 can be verified directly. Computationally
it is easiest to find the eigenvalues p2 and p2 and eigenvectors fl and f2 of AAT, and then obtain the gj
from propositions (4) and (3). 1 2

l'

Theorem 4.20: Under the conditions of Theorem 4.19; A = ~ pJigT.


i=l

Proof; The m x' n matrix A is a mapping from CVn to CVm , Also, gl' g2' ... , gn and
£1' £2' .•. ,fm form orthonormal bases for CV n and CV m respectively. For arbitrary x in G() n'-
n
X = ~ ~,g.
i=l 1 1
where ~i = gfx (4.22)

Use of properties (2) and (3) of Theorem 4.19 gives


r n
Ax = ~ ~iAgi ~
+ i==r+l ~iAgi
i=l
r
From (4.22), gj = gix and so Ax = ~ PifigTx. Since this holds for arbitrary x, the
theorem is proven. i=l

Note that the representation of Theorem 4.20 holds even when A is rectangular, and has
no spectral representation.
Example 4.20.
Using the A matrix and the results of Example 4.19,

6
(6
° 4 0)
1 0 6
= 10 (0.6) ~0.84 0.08 0.24 0.48) + 5 ( 0.8) (0.24 -0.12 0.64 -0.72)
0.8 -0.6

Definition 4.14: The pseudoinverse, denoted A -I, of the m x n real matrix A is the n x rn
r
real matrix A -1 = ~ p:-lg j fJ.
i=l ~

Example 4.21.
Again, for the A matrix of Example 4.19,

=
0.1
o,84)
0.08 (0.6 0.8)
0.24
+ 0.2
( 0.24)
-0.12 (0.8 -a.6)
0.64
= (-~~~~!! ~:~:~:)
0.1168 -0.0576
(
0.48 -0.72 -0.0864 0.1248
86 MATRIX ANALYSIS [CHAP. 4

Theorem 4.21: Given an m X n real matrix A and an arbitrary m-vector y, consider the
equation Ax = y. Define Xo = A - Iy . Then IIAx - Yl12 ~ IIAxo - YI12 and
for those z # Xo such that jlAz - Yl12 : : : II Ax o- y112, then Ilzi 12 > Ilxo 112.
In other words, if no solution to Ax::::: y exists, Xo gives the closest possible solution.
If the solution to Ax = y is not unique, Xo gives the solution with the minimum norm.

P1~oof; Using the notation of Theorems 4.19 and 4.20, an arbitrary m-vector y and an
arbitrary n-vector x can be written as
m n

y ::::: ~ 'lJifp X ~ ~igi


= i=l (4.23)
i=l

where rh::::: fry and ~i = gJx. Then use of properties (2) and (3) of Theorem 4.19 gives
n r m
Ax - y = ~ ~iAgi ~ (~iPi - 'I])fi + ~ 'l]ifi (4.24)
i=l i=l i=r+l

Since the fi are orthonormal,


r m

IIAx-YII~ ~ (~iPi - 7]i)2 + ~ '1];


i=l i= r+l

To minimize IIAx - Yl12 the best we can do is choose gi = 7]/Pi for i::::: 1,2, ... , r. Then
those vectors z in G()n that minimize IIAx - ylk can be expressed as
r
z :::::
~
i=l
giTJ/Pi +
where ~i for i = r+l, ... , n is arbitrary. But

The z with minimum norm must have ~i::::: 0 for i::::: r + 1, ... , n. Then using 7]i ::::: fT y from
T T
(4.23) gives z with a minimum norm ::::: ~ p:-l7]i ig = ~ p:-lgif[y = A - I y ::::: Xo.
i=l 1 i=l 1

Example 4.22.
Solve the equations 6x! + 4X3 = 1 and 6xl + x 2 + 6X4 = 10.
This can be written Ax = y, where y = (1 10)T and A is the matrix of Example 4.19. Since the
rank of A is 2 and these are four columns, the solution is not unique. The solution Xo with the minimum
norm is
0.4728)
0.1936
-0.4592
(
1.1616

Theorem 4.22: If it exists, any solution to Ax = y can be expressed as x = A -Iy +


(I - A -IA)z, where z is any arbitrary n-vector.

Proof; For a solution to exist, 'lJi = 0 for i = r + 1, ... , m in equation (4.24), and
~i for i = 1,2, ... , r. Then any solution x can be written as
::::: 7]/Pi

r n
X = ~ gi"fJ/Pi ~
+ i=r+l gi'i (4.25)
i=l
where ti for i::::: r + 1, ... , n are arbitrary scalars. Denote in G()n an arbitrary vector
n
Z = ""
.tt::..; g.'.,I].
where f:.
- I
= g!'z.
~
Note from Definition 4.14 and Theorem 4.20,
i=l
T T

= ~ ~
~.tt::..;
p-lg.f!'f
i t1kkk
gTp (4.26)
k=l i=l
CHAP. 4] MATRIX ANALYSIS 87
n
Furthermore, since the gi are orthonormal basis vectors for 'V n , 1 = ~ gig;. Hence
i=l
n 7l.

(1- A -IA)z ~ gigTz = ~ gi~i (4.27)


i=r+1 i=r+l

From equation (4.23), TJi = fry so that substitution of (4.27) into (4.25) and use of Definition
4.14 for the pseudo inverse gives
x = A-Iy + (I-A-1A)z
Some further properties of the pseudoin verse are:
1. If A is nonsingular, A - I = A -1.
2. A -IA =1= AA -1 in general.
3. AA-IA=A

5. (AA -I)T = AA-I


6. (A-IA)T=A-IA

7. A -lAw = w for all w in the range space of AT.


8. A -IX =0 for all x in the null space of AT,
9. A -1(y + z) = A -1y + A - IZ for all y in the range space of A and all z in the null
space of AT.
10. Properties 3-6 and also properties 7-9 completely define a unique A - I and are
sometimes used as definitions of A -I.
11. Given a diagonal matrix A = diag (AI, A2, •.. , An) where some Ai may be zero.
Then A -I = diag (A;\ .\.2\ ... , .\.;;1) where 0- 1 is taken to be O.
12. =
Given a Hermitian matrix H Ht. Let H UAUt where ut = V-I. = Then
H-I = UA -Jut where A - I can be found from 11.
13. Given an arbitrary m x n matrix A. Let- H = AtA. Then A-I = H- At = (AH-I)t
1

where H- 1 can be computed from 12.


14. (A -1)-1=A
15. (AT)-I = (A-1)T
16. The rank of A, AtA, AAt, A -I, A -1A and AA -1 equals tr (AA -1)Q = r.
17. If A is square, there exists a unique polar decomposition A = UH, where H2 = At A,
U = AA -I, ut = U-I. If and only if A is nonsingular, H is positive definite real
symmetric and U is nonsingular. When A is singular, H becomes nonnegative
definite and U becomes singular.
18. If A(t) is a general matrix of continuous time functions, A -1(t) may have discon-
tinuities in the time functions of its elements.
Proofs of these properties are left to the solved and supplementary problems.
88 MATRIX ANALYSIS [CHAP. 4

Solved Problems
4.1. Show that all similar matrices have the same determinants and traces.
To show this, we show that the determinant of a matrix equals the product of its eigenvalues
and that the trace of a matrix equals the sum of its eigenvalues, and then use Theorem 4.2.
Factoring the characteristic polynomial gives
det (A - AI) = (AI - },,)(A2 - A)' .. (An - A) = AIA2" 'A n + ... + (Al + A2 + ... + An )(-A)n-l + (-A)n
Setting A = 0 gives det A = Al A2' .. An. Furthermore,
det (A - AI) (al - Aed /\ (a2 -l\e2) /\ ... /\ (an - Ae n )
+ (-A) [el /\ a2 /\ ... /\ an. + al /\ ez /\ ... /\ an + ... + a1/\ a 2 /\ .•.
al /\ a2 /\ ... /\ an /\ en]
+ '" + (- A)n-l [al /\ e2 /\ ... /\ en + el /\ a2 /\ ... /\ en + ... + el /\ e2 /\ ... /\ an]
+ (-A) ne l/\ e2 /\ ... /\ en
Comparing coefficients of -A again gives AIA2'" An = a 1 /\ a2/\ ..• /\ an' and also
Al + A2 + ... + An = al /\ ez /\ .•. /\ en + el /\ 32 /\ ••• /\ en + '" + el /\ e2 /\ ... /\ an

However,
al /\ e2 /\ ... /\ en :;:: (anel + a21e 2 + ... + an1en) /\ ~2 /\ ••• /\ en

and similarly e 1 /\ 32/\ ••• /\ en = a22' etc. Therefore


Al + A2 + ... + An = all + a22 + ... + ann tr A

4.2. Reduce the matrix A to Jordan form, where

(a) A
(! =: =~)
3 -4 1
(c) A
( ~~o -~ -~)
0-1

(b) A
( -~o ~ =~)
1-3
(d) A
( -~o -~ 0-1
~)

(a) Calculation of det (8 : X -~~ A 1 =!X) = 0 gives Al = 1, 1.2 = 2, A, = 3. The eigen-

vector Xi is solved for from the equations

(
8-Ai -8 -2 )
4
3
-3 - Ai
-4
-2
1 - Ai
Xi =
(0)
0
0
giving Xl = G)' G)' G) x, = X3 =

where the third element has been normalized to one in X2 and Xs and to two in Xl' Then

G -2)( -8

D G DG D
-3 -2
-4 1
3
2
3
2
1
=
3
2
1
0
2
0

(b) Calculation of det


(-l-A 0 1-A
1
-1
-4
) =: 0 gives Al = A2 = AS = -l.
0 1 -3 - A
CHAP.4J MATRIX ANALYSIS 89

Solution of the eigenvalue problem (A - (-l)l)x = 0 gives

G~ =DGJ G)
and only one vector, (a 0 O)T where a is arbitrary.
=
Therefore it can be concluded there is only
one Jordan block Lll(-l), so

C~ -D
1
J Lu(-I) == -1
0
Solving,
1-1)
gives tl ==
G 2 -4
1 -2
(f3 2a a)T where f3 is arbitrary.
tl ::::

_Finally, from
G) == xl

1-1)
2 -4 t2 :::::

1 -2
we find t2 == (y 2f3 - a {3 - a)T where y is arbitrary. Choosing a, {3 and y to be 1,0 and 0
respectively gives a nonsingular (Xl I tl l,t2 ), so that

(~o
~ -~)(-~ ~ =~)(~ ~ -~)
1 -2 0 1 -3 0 1 -1
== (-~ -: ~)
0 0 -1.

(c) Since A is triangular, it exhibits its eigenvalues on the diagonal, so Al == A2 = A3 -l. =


The solution of iicAx = (-1)x is x = (a P 2f3)T, so there are two linearly independent eigen-
vectors a(1 0 O)T and p(O 1 2)T. Therefore there are two Jordan blocks L 1(-I) and L 2 (-1).
These can form two different Jordan matrices J 1 or J 2 :

(
L (-1)
I 0
0) (-1~ -~1_~0)
L 2 (-1) =

It makes no difference whether we choose J 1 or J 2 because we merely reorder the eigenvectors


and generalized eigenvector in the T matrix, i.e.,
A(X21 Xl I t l ) = (X21 Xl I t 1)J2
How do we choose a and f3 to get the correct Xl to solve for tl? From (4.19)

2o -1) 0 tl
o 0
from which f3 = 0 80 that Xl = (a 0 O)T and tl = (y 8 28 - a)T where y and {} are also
arbitrary. From Problem 4.41 we can always take a = 1 and y, 8, etc. = 0, but in general

C~ -~ ~DG 20~a 2;) G28~" 2;)C~ -~ -D


Any choice of a,/3, y and 0 such that the inverse of the T matrix exists will give a similarity
transformation to a Jordan form.

(d) The A matrix is already in Jordan form. Any nonsingular matrix T will transform it, since
A == -I and T-l(-I}T == -I. This can also be seen from the eigenvalue problem
(A - AI)x = (-I - (-1)I)x = Ox = 0
so that any 3-vector X is an eigenvector. The space of eigenvectors belonging to -1 is three
dimensional, so there are three Jordan blocks LU(A) = -1, L 1Z (A) = -1 and LlS(A) = -1
on the diagonal for A == -1.
90 MATRIX ANALYSIS [CHAP. 4

4.3. Show that a general normal matrix N (Le. NNt = NtN), not necessarily with distinct
eigenvalues, can be diagonalized by a similarity transformation U such that ut = U-l.
The proof is by induction. First, it is true for a 1 X 1 matrix, because it is already a diagonal
matrix and U = I. Now assume it is true for a k -1 X k -1 matrix and prove it is true for a
t l
k Xk matrix; i.e. assume that for U k-l -- U-k-l

Let

Form T with the first column vector equal to the eigenvector Xl belonging to AI' an eigenvalue of Nk;.
Then form k - 1 other orthonormal vectors x2' X3,' •• , Xk from 'V k using the Gram~Schmit process,
and make T = (xl I x21 ... I Xk)' Note TtT = I. Then,

AIXll n1 x2 nr Xk)
( ;; }Xl IX, I ... I xkl
(
~~~2.'... ~~~~ ........ ~.;.~k.
AIXkl ntx2 ntxk

and

where the aij are some numbers.

But TtNkT is normal, because

Therefore

AIo a12
a22 ..•
a1k)
a2k
(A'at2t 0
a;z '"
0)
a~k
(Aiaf2 0
a;2 ••.
0a~2 ) (At at2 0 a12 •.• a2k
alk
)

==
( ••• oil ill " ............ oil .. oil ~
.... : .............. " • .. • • " .... II .... " ...... ill .. .. • .. " ... ill •• " .. " .. " .. .

o ak2 •.• akk aIk a;k ••• a~k a~k a;k ••• akk 0 ak2 •.• akk

Equating the first element of the matrix product on the left with the first element of the product
on the right gives

Therefore a12, ala, ..• , alk must all be zero so that

where

and Ak~l is normal. Since A k - l is k -1 X k - 1 and normal, by the inductive hypothesis there
t
exists a U k-l = U-l t
k-l such that Uk - 1 Ak - l Uk - 1 = D, where D is a diagonal matrix.
CHAP. 4] MATRIX ANALYSIS 91

Define Sk such that Then st Sk = I and

Therefore the matrix TS k diagonalizes N k , and by Theorem 4.2, D has the other eigenvalues
of N k on the diagonaJ.
A2, A3, ••• , Ak

Finally, to show TS k is unitary,

I = StSk = StISk

4.4. Prove that two n x n Hermitian matrices A = At and B = Bt can be simultaneously


diagonalized by an orthonormal matrix U (i.e. utu = I) if and only if AB = BA.
If A = UAut and B = UDUt, then
AB = UAUtUDUt == uAnut =UDAUt = UDUtuAut = BA

Therefore aU matrices that can be simultaneously diagonalized by an orthonormal U commute.

To show the converse, start with AB = BA. Assume A has distinct eigenvalues. Then
Axi = AiXi, so that AB~ = BAxi = AiB~. Hence if Xi is an eigenvector of A, so is B~. For
distinct eigenvalues, the eivenvectors are proportional, so that BXi = Pi~ where Pi is a constant of
proportionality. But then Pi is also an eigenvalue of H, and ~ is an eigenvector of B. By normal~
izing the ~ so that Xi tXi = 1, U = (Xl I ... Ixn) simultaneously diagonalizes A and B.

If neither A nor B have distinct eigenvalues, the proof is slightly more complicated. Let A-
be an eigenvalue of A having multiplicity m. For nondistinct eigenvalues, all eigenvectors of
A belong in the m dimensional null space of A - A-I spanned by orthonormal xi' X2'" •• , "m' There-
m
fore B~ = .~ cijXj'
t=l
where the constants cij can be determined by Cij = "T BXi' Then for
C = {cij} and X = (Xl Ix21 ... Ix n ), C = XtBX = XtBtX = ct, so C is an m X m Hermitian
matrix. Then C = UmDmU! where Dm and Um' are m X m diagonal and unitary matrices respec-
tively, Now A(XUm ) = ;\(XUm ) since linear combinations of eigenvectors are still eigenvectors,
and Dm = Ut,XtBXUm . Therefore the set of m column vectors of XU m together with all other
normalized eigenvectors of A can diagonalize both A and B. Finally, (XUm)t(XUm ) = (UlxtxUm ) =
(UkImUm ) = 17/l> so that the column vectors XUm are orthonormal.

4.5. Show the positive definite Hermitian square root R, such that R2 = Q, is unique.
Since R is Hermitian, UA i ut = R where U is orthonormal. Also, R2 and R commute, so that
both Rand Q can be simultaneously reduced to diagonal form by Problem 4.4, and Q = UDUt.
Therefore D = .Ai. Suppose another matrix 8 2 = Q such that S = V A2 vt. By similar reason-
ing, D = Ai. Since a number> 0 has a unique positive square root, .A2 =.A i and V and U are
matrices of orthonormal eigenvectors. The normalized eigenvectors corresponding to distinct
eigenvalues are unique. For any nondistinct eigenvalue with orthonormal eigenvectors Xl' x2" .. , x n ,

AXXt
92 MATRIX ANALYSIS [CHAP. 4

and for any other linear combination of orthonormal eigenvectors, Yl' Y2' ... ; Ym ,

where Tt = T- 1 Then
m m'

Hence Rand S are equal even though U and V may differ slightly when Q has nondistinct eigen-
values.

4.6. Prove Sylvester's theorem: A Hermitian matrix Q is positive definite if and only
if all principal minors det Qm > O.
If Q is positive definite, ~ == (x, Qx) ==== 0 for any x. Let Xm be the vector of the first m
elements of x. For those XO whose last m - n elements are zero, (x m• Qmxm) = (xO, QXO) ==== O.
Therefore Qm is positive definite, and all its eigenvalues are positive. From Problem 4.1, the
determinant of any matrix equals the product of its eigenvalues, so det'!m > O.

If det'lm > 0 for m = 1,2, ... , n, we proceed by induction. For n = 1, det Q = Al > O.
Assume now that if det Q 1 > 0, ... , det Qn-l > 0, then Qn-l is positive definite and must
possess an inverse. Partition Qn as

+-q-nn---q-:~----\q-)(: I Q;:l
q
<k = : )(_Q_:-...,...-1 ) (4.28)

We are also given det Qn > O. Then use of Problem 3.5 gives det'tn = (qnn - qtQ;~lq) detQn-l'
so that qnn - qtQ;~lq > O. Hence

> o

for any vector (xt-l 1 x~). Then for any vectors y defined by

substitution into (x, Qx) and use of (4.28) will give (y, QnY) > O.

4.7. Show that if [IAII < 1, then (I - A)-l = iAn.


n=O
k eo

Let Sk = ~ An and S = ~ An. Then


Ti=O n=O
00

11 ~ Akll
n=k+l
:::
11
~
=k+l
11Allk
CHAP. 4] MATRIX ANALYSIS 93

by properties (3) and (4) of Theorem 4.11. Using property (6) and IIAII < 1 gives S = lim Sk'
k--+«!
Note JIAk-t 1 11 ~ IIAll k + 1 so that lim Ak+l = O. Since (I-A)Sk == !-Ak+l, taking limits as
k_eo
k - co gives {I - A)S == I. Since S exists, it is (1 - A)-I. This is called a contraction mapping,
because lI(I - A)xlj :=: (1-IIAli)llxll :=: Ilxll·

1 1 2)
4.8. Find the spectral representation of A
(o -1 1 -.2
0 -1
.

3
The spectral representation of A == ~ Ai~rit. The matrix A has eigenvalues AI::::: 1,
i=1
11.2 ::::: 1 - j and A3=1 + j, and eigenvectors Xl = (1 1 -0.5)T, x2::::: (j 1 O)T and xs::::: (-i 1 O)T.
The reciprocal basis ri can be found as

1 j _j)-1 ( 0 0
-4 )
1 1 1 ::::: 0.5 -i 1 2 -2i
( -0.5 0 0 i 1 2+2j

Then

A (1) (-~.5 )(0 0 -2) + (1- j)(f) (-0.5j 0.5 1- j) + (1 + j)(-f) (0.5j 0.5 1 + j)

4.9. Show that the relations AA -lA = A, A -lAA- 1 = A -I, (AA -1)T = AA - I and
(A -IA)T = A -IA define a unique matrix A -I, that can also be expressed as in De-
finition 4.15.
r r
Represent A ~ Pifig[ and A-I = ~ p-lgkf~. Then
i=1 k=1 k

AA-1A

r r r
~ ~ ~ PiP~lpkfig'[ gjfJfkg~
i=l j=1 k=1 ,

r
AA -IA = ~ Pifig'[ A
i=l

Similarly A -IAA - I == A -I, and from equation (4.~6),

r
A-IA = ~ gigf == (A-IA)T
i=l
r
Similarly (AA -1)T == (~ fifT) T == AA -I.
i=1

To show uniqueness, assume two solutions X and Y satisfy the four relations. Then

1. AXA==A 3. (AX)T == AX 5. AYA==A 7. (Ay)T == AY

2. XAX=X 4. (XA)T == XA 6. ¥A¥=Y 8. (¥A)T = YA

and transposing 1 and 5 gives


9. ATXTAT = AT 10. A1'yTAT = AT
94 MATRIX ANALYSIS [CHAP. 4

The following chain of equalities can be established by using the equation number above the
equals sign as the justification for that step.
X ~ XAX ~ ATXTX ~ ATyTATXTX ~ ATyTXAX ~ AT¥TX ~ YAX
~ YAYAX ~ YAyxTAT b yyTATXTAT ~ yyTAT b YAY ~ Y

Therefore the four relations given form a definition for the pseudoinverse that is equivalent to
Definition 4.15.

4.10. The outcome of y of a certain experiment is thought to depend linearly upon a para-
meter x, such that y= aX + (3. The experiment is repeated three times, during
which x assumes values Xl = 1, X2 = -1 and X3 = 0, and the corresponding out-
comes Y1 = 2, and Y2 == -2 and Y3 = 3. If the linear relation is true,
2 a(l)+ (3
-2 a(-l) + P
3 a(O) + f3
However, experimental uncertainties are such that the relations are not quite satisfied
in each case, so a and f3 are to be chosen such that
3
L (Yi - aXi - f3)2
i=l

is minimum. Explain why the pseudoinverse can be used to select the best a and f3,
and then calculate the best a and f3 using the pseudoinverse.
The equations can be written in the form y = Ax as

3
Defining Xo = A-I y , by Theorem 4.21, Ily - Axolli = ~ (Yi - aOxi - f30)2 is minimized. Since
ATA = (~ :), then p, = V2 and P2 = va. an;' g, = (1 0) and g2 = (0 1). Since

f i = Ag/Pi' then f1 = (1 -1 0)/"/2 and £2 = (1 1 1)/V3. Now A-I can be calculated from
Definition 4.15 to be
t -i
(i 1 !
0)
so that the best ao =2 and f30 = 1.
n
Note this procedure can be applied to ~ (Yi - axf - (3xi - y)2, etc.
i=1

4.11. Show that if an n X n matrix C is nonsinguIar, then a matrix B exists such that
C == eB" or B = In C.
Reduce C to Jordan form, so that C = TJT-I. Then B = In C = Tin JT-1, so that the
problem is to find In L(A) where L(X) is an l x l Jordan block, because
CHAP. 4] MATRIX ANALYSIS 95

U sing the Maclaurin series for the logarithm of L(A.) - AI,


00

In L(A.) = In [A.J + L(A.) - A.I] I In A. - ~ (-iA.) - i [L(A.) - A.I]i


i=l
Note all the eigenvalues ~ of L(A.) - A.I are zero, so that the characteristic equation is ~l = O. Then
by the Cayley-Hamilton theorem, [L(A.) - A.I]l = 0, so that
[-1

In L(A.) = I In A. - ~ (-iA.)-i [L(A.) - A.I]i


i=l

Since A. ¥= 0 because C-l exists, In L(A.) exists and can be calculated, so that In J and hence In C
can be found.
Note B may be complex, because in the 1 X 1 case, In (-1) = hr. Also, in the converse case
where B is given, C is always nonsingular for arbitrary B because C-l = e- B •

Supplementary Problems
4.12. Why does at least one nonzero eigenvector belong to each distinct eigenvalue?

4.13. Find the eigenvalues and eigenvectors of A where A


(~ -2
~ -~).
0-1

4.14. Suppose all the eigenvalues of A are zero. Can we conclude that A = O?

4.15. Prove by induction that the generalized eigenvector tt of equation (4.9) lies in the null space of
(A - AiI)l+l.

4.16. Let x be an eigenvector of both A and B. Is x also an eigenvector of (A - 2B)?

4.17. Let Xl and x2 be eigenvectors of a matrix A corresponding to the eigenvalues of A.! and A.2' where
A.l # A.2' Show that ax! + j3x2 is not an eigenvector of A if a # 0 and j3 #- O.

4.18. U sing a similarity transformation to a diagonal matrix, solve the set of difference equations

where
(~)
4.19. Show that all the eigenvalues of the unitary matrix U, where utu = I, have an absolute value
of one.

4.20. Find the unitary matrix U and the diagonal matrix .A. such that

utC{ ~ -t)u = A

Check your work.

4.21. Reduce to the matrix A 1~ 3/~5 -4~/5) to Jordan form.


(
96 MATRIX ANALYSIS [CHAP. 4

4.22. Given a 3 X 3 real matrix P oft 0 such that p2:;::: O. Find its Jordan form.

4.23. Given the matrix A :;::: (- ~ ~ ~)


• Find the transformation matrix T that reduces A
101
to Jordan form, and identify the Jordan blocks Lij(}\i)'

4.24. Find the eigenvalues and eigenvectors of A -_ (1 +od2 What happens as e ~ O?

4.25. Find the square root of (! :).


4.26. Given the quadratic form ~ == ~i + 2~1~2 + 4~~ + 4~2~3 + 2~~. Is it positive definite?

I;Q

4.27. Show that ~ A n/n! converges for any A.


n=O

4.28. Show that the coefficient an of I in the Cayley-Hamilton theorem An + alAn-l + ... + anI:;::: 0
is zero if and only if A is singular.

4.29. Does A2f(A):;::: [f(A)]A2?

4.30. Let the matrix A have distinct eigenvalues A1 and A2' Does A3(A - A1I){A - A2I) == O?

4.31. Given a real vector x:::::: (Xl X2 ••• xn)T and a scalar a. Define the vector
grad x 0:: :;::: (aa/aX1 aa/fJX2 .•• fJa/fJxn)T
Show grad x xTQx = 2Qx if Q is symmetric, and evaluate grad x x TAx for a nonsymmetric A.

n t
4.32. Show that I:;::: ~ Xjri'
i=l

4.33. Suppose A2:;::: O. Can A be nonsingular?

4.34. Find eat, where A is the matrix of Problem 4.2(a).-

4.35. Find eAt for A == ( 0' "').


-w 0'

4.36. Find the pseudoinverse of (4-03).


0

4.37. Find the pseudoinverse of A -- (13 26

4.38. Prove that the listed properties 1-18 of the pseudoinverse are true.

4.39. Given an m X n real matrix A and scalars Of such that


Agi :;::: O~fi Afi =
Show that only n == 1 gives fif j :;::: aij if gTgj = aij'

4.40. Given a real m X n matrix A. Starting with the eigenvalues and eigenvectors of the real sym-
metric (n + m) X (n + m) matrix ( : \ !T), derive the conclusions of Theorems 4.19 and 4.20.
CHAP. 4] MATRIX ANALYSIS 97

4.41. Show that the T matrix of J = T-IAT is arbitrary to within n constants if A is an n X n matrix
in which each Jordan block has distinct eigenvalues. Specifically, show T = ToK where To is
fixed and
Kl 0 ... 0)
K =
(
.0." .~~ .. :::....~.
° ° ". Km
where K j is an l X l matrix corresponding to the jth Jordan block L j of the form

(: : :' ::: ::~:


~1 :: :: ... :;_1)
... .... ... ...
where O::i is an arbitrary constant.

4.42. Show that for any partitioned matrix (A I 0), (A 10)-1 (!-I) .
4.43. Show that Ixt Axl === IIAI1211xlii .

4.44. Show that another definition of the pseudoinverse is A-I = lim (ATA + d')-lAT, where P is any
positive definite symmetric matrix that commutes with ATA. €--+O

4.45. Show IIA -III = 0 if and only if A = O.

Answers to Supplementary Problems


4.12. Because det (A - Ail) := 0, the column vectors of A - Ail are linearly dependent, gIvmg a null
space of at least one dimension. The eigenvector of Ai lies in the null space of A - Ail.

4.13. A = 3,3, -3, Xs = a(-2 0 1) + (3(0 1 0); x-s::::: y(1 0 1)

4.14. No

4.16. Yes

4.18. Xl(n) = 2 and x2(n) = 1 for all n.

~}
0
COl)
4.20. A
u 1
0
U ~ V2 0
1 0-1
0

G ~.8).
0 1
4.21. T 0.6
-0.8 0.6
J ;:::::

G D 1
0

0
1 0
4.22. J =
GD 0
0
or J =
G 0
0
98 MATRIX ANALYSIS [CHAP. 4

4.23. T = G: T: 0) for a,T, e arbitrary, and J = G 1


1
o
~), all one big Jordan block.

4.24. Al = 1 + el2, A2 = 1- e/2, xl = (1 0), x2 = (1 e); as Ii ~ 0, At = A2' Xl = X20 Slight perturba-


tions on the eigenvalues break the multiplicity.
2
4.25.
(1 21)
4.26. Yes

4.27. Use matrix norms.

4.28. If A-I exists, A-I = a;l[An-1 + a 1An-2 + ... ]. If an = AIA2" .An = 0, then at least one eigen-
value is zero and A is singular.

4.29. Yes

4.30. Yes

4.31. (AT + A)x

4.33. No

4.34. eAt
-4e t + 6e 2t - est
Set - ge 2t + eSt
-3e t + 4e2t - eSt
6e t - 6e2t + eat
-e t + 2e2t -
2et - 3e2t + eSt
eSt)
(
-4e t + 8e 2t + eat -8e t + 2e2t + eSt -et + e2t + eSt

4.35.
sin wt)
cos CJJt

4.36. 1(4
25 :-3
00)

4.37. A-I = 5~ GD
4.40. There are r nonzero positive eigenvalues Pi and r nonzero negative eigenvalues -Pi' Corresponding
to the eigenvalues Pi are the eigenvectors (g~) and to -Pi are (_g~). Spectral representation of
(! I~T) then gives the desired result. f, f,

4.48. IxtAxI === IIxl1211Axl12 by Schwartz' inequality.


Chapter 5

Solutions to the Linear State Equation


5.1 TRANSITION MATRIX
From Section 1.3, a solution to a nonlinear state equation with an input u(t) and an
initial condition xo can be written in terms of its trajectory in state space as x(t) =
t/J(t; U(T), Xo, to). Since the state of a zero-input system does not depend on u(,), it can be
written x(t) = +(t; Xo, to). Furthermore, if the system is linear, then it is linear in the
initial condition so that from Theorem 3.20 we obtain the

Definition 5.1: The transition matrix, denoted ~(t, to), is the n x n matrix such that
x(t) = ,pet; Xo, to) = (t(t, to)xo.
This is true for any to, i.e. x(t) = cI»(t, T) XCi) for 7' > t as well as ,=== t. Substitution of
x(t)= 4t(t, to)xo for arbitrary Xo in the zero-input linear state equation dx/dt = A(t)x
gives the matrix equation for (t(t, to),
Bq,(t, to)/Bt = A(t) ",(t, to) (5.1)
Since for any Xo, Xo = x(t o) = (t(to, to)xo,
the initial condition on q,(t, to) is
q,(to, to) = I (5.2)
Notice that if the transition matrix can be found, we have the solution to a time-varying
linear differential equation. Also, analogous to the continuous time case, the discrete
time transition matrix obeys
cJt(k+l,m) = A(k)4»(k,m) (5.3)

with q,(m,m) = I (5.4)


so that x(k) = ~(k, m) x(m).
Theorem 5.1: Properties of the continuous time transition matrix for a linear, time-
varying system are
(1) transition property
4t(t2, to) = 4t(t2, t 1) CP(tl, to) (5.5)
(2) inversion property
(5.6)
(3) separation property
CP(tl, to) :::: B(tl) 8- l (to) (5.7)

(4) determinant property


det cI»(tl, to) = e'f1to [tr A(T)] d-r (5.8)
and properties of the discrete time transition matrix are
(5) transition property
cp(k, m) :::: 4t(k, l) IP(l, m) (5.9)

99
100 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

(6) inversion property


(5.10)
(7) separation property
4l(m, k) = 8(m)9- 1 (k) (5.11)
(8) determinant property
det4l(k,m) = [detA(k-I)][detA(k-2)]···[detA(m)] for k>m (5.12)

In the continuous time case, cp-l(t, to) always exists. However, in rather unusual cir-
cumstances, A( k) may be singular for some k, so there is no guarantee. that the inverses
in equations (5.10) and (5.11) exist.
Proof of Theorem 5.1: Because we have a linear zero-input dynamical system, the
transition relations (1.6) and (1.7) become lfJ(t, to) x(to) = 4l(t, tl) X(tl) and X(tl) = q;(tl, to) x(to).
Combining these relations gives 4t(t, to) x(to) = cp(t, t 1 ) 4t(tl, to) x(to). Since x(to) is an arbi-
trary initial condition, the transition property is proven. Setting t2 = to in equation (5.5)
and using (5.2) gives cIJ(to, t 1) cIJ(tl, to) = I, so that if det cp(to, t 1 ) #- 0 the inversion property
is proven. Furthermore let 8(t) = cp(t, 0) and set tl = °
in equation (5.5) so that
cIJ(tz, to) = 8(t2) 4t(O, to). Use of (5.6) gives cp(O,to) = cp-l(to, 0) = g-l(tO) so that the separation
property is proven.
To prove the determinant property, partition rp into its row vectors 4»1' 4»21 •.. , +n' Then
det cIJ = +1 /\ +2 /\ ... /\ +n
and
d(det cp)/dt d"t.
""1
/ dt /\ '1"2
A. /\ • • • /\..1..
"t"n
+ ..I.. /\
"t"I"1'2
d..l.. / dt /\ ... /\..1..
't'n

+ ' .. + +1/\ +2 /\ ... /\d+Jdt (5.13)


From the differential equation (5.1) for 4l, the row vectors are related by
n

d+/dt = ~ aik(t) +k for i = 1,2, .. . ,n


k=l

Because this is a linear, time-varying dynamical system, each element aik(t) is continuous
and single-valued, so that this uniquely represents d+/dt for each t.
n
+1/\ ... /\ d+/dt /\ ... /\ +n +1/\ ... /\ L k=l
aik + k /\ ••• /\ 'Pn = aii +
1 /\'" /\ +i /\ ... /\ 9>n
Then from equation (5.13),
d( det q,)/ dt ==: an +
1 /\(1)2 /\ ••• /\ +n + a22+1 /\ +2 /\ •.• /\ +n + ... + ann+! /\+2/\ ••• /\ +n
[tr A(t)] det cp
Separating variables gives
d(det cIJ)/det cp = tr A(t) dt
Integrating and taking antilogarithms results in
ftt [tr A(T)] dr
det cp(t, to) = ye 0

where y is the constant of integration. Setting t = to gives det 4t(to, to) = det I == 1 = y,
so that the determinant property is proven. Since efW = 0 if and only if j(t) = -00, the
inverse of q,(t, to) always exists because the elements of A(t) are bounded.
The proof of the properties for the discrete time -transition matrix is quite similar,
and the reader is referred to the supplementary problems.
CHAP. 5] SOLUTIONS TO TH~ LINEAR STATE EQUATION 101

5.2 CALCULATION OF THE TRANSITION MATRIX FOR


TIME ..INV ARIANT SYSTEMS
Theorem 5.2: The transition matrix for a time-invariant linear differential system is
<p(t, T) = eA(t-T) (5.14)
and for a time-invariant linear difference system is
cp(k, m) = Ak-m (5.15)

00

Proof: The Maclaurin series for eAt =L Aktk/k!, which is uniformly convergent
k=O 00

as shown in Problem 4.27. Differentiating with respect to t gives deAt/dt = ~ Ak+ltk/k!,


k=O
so substitution into equation (5.1) verifies that eA(t-T) is a solution. ~urthermore, for
t =T, eA(t~T) = I, so this is the unique solution starting from cf{T, T) = I.
Also, substitution of cp(k, m) = AIe-m into equation (5.3) verifies that it is a solution,
and for k = m, Ak-m = I. Note that eAeB:oF eBe A in general, but eAt0eAtl = eAtleAto = eA(tO+t1 )
and AeAt = eAtA, as is easily shown using the Maclaurin series.
Since time-invariant linear systems are the most important, numerical calculation
of eAt is often necessary. However, sometimes only x(t) for t == to is needed. Then x(t)
can be found by some standard differential equation routine such as Runge-Kutta or Adams,
etc., on the digital computer or by simulation on the analog computer.
When eAt must be found, a number of methods for numerical calculation are available.
No one method has yet been found that is the easiest in all cases. Here we present four
of the most useful, based on the methods of Section 4.7.

1. Series method:
(5.16)

2. Eigenvalue method:
(5.17)
and, if the eigenvalues are distinct,

3. Cayley-Hamilton: n-l
eAt L yJt)Ai (5.18)
i=O

n-l

where the Yi(t) are evaluated from eJt = ~ Yi (t)Jl.


i=O
Note that from (4.15),

eLnO'l)t 0 o o o
o e L21 (Al)t o o o
ill ............................................. ill ill ill .... ill ill. ill.

o o eL;r(Al)t 0 o (5.19)
o o o eL12{A2)t o
o o o o
102 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

(5.20)

then
e"it teAit t~-1 e"?it/(l- 1) ')

.~ ••• ~~it• ........... ~~~~ ~".it~~l.~ ~: ~ (5.21) ,


(
o 0 ... el\it

4. Resolvent matrix:
1:.- 1 {R(s)} (5.22)
where R(s) = (sI - A)-I.

The hard part of this method is computing the inverse of (sI - A), since it is a poly-
nomial in s. For matrices with many zero elements, substitution and elimination is
about the quickest method. For the general case up to about third order, Cramer's
rule can be used. Somewhat higher order systems can be handled from the flow diagram
of the Laplace transformed system. The elements rij (s) of R(s) are the response of the
ith state (integrator) to a unit impulse input of the jth state (integrator). For higher
order systems, Leverrier's algorithm might be faster.

Theorem 5.3: Leverrier's algorithm. Define the n x n real matrices F 1, F 2 , ••• , Fn and
scalars Bl, B2 , ••• , Bn, as follows:
-trAFdl
-trAF2/2

en = -tr AFnln

Then
sn-l F1 + sn-2F2 + ... + sFn- 1 + Fn
(sI -A)-1 = sn + (hs n- 1 + ... + en-1S + en
(5.23)

Also, AFn + enI = 0, to check the method. Proof is given in Problem 5.4.
Having R(s), a matrix partial fraction expansion can be performed. First, factor
detR(s) as
detR(s) = sn + fh8 n- 1 + ... + On-1S + en = (s - 1.1)(8 - 1.2)' .. (s - An) (5.24)

where the Ai are the eigenvalues of A and the poles of the system. Next, expand R(s) in
matrix partial fractions. If the eigenvalues are distinct, this has the form
1 1 1
R(8) --R1
A1
S -
+ --R"
S - 1.2 ~
+ '" + --\-Rn
S - I\n
(5.25)

where Rk is the matrix-valued residue


(5.26)
CHAP.5J SOLUTIONS TO THE LINEAR STATE EQUATION 103

For mth order roots A, the residue of (3 - A) - i is

- -1 d -
m i
I
(n~ - i)! ds
'-- - - s-AmRs
m i - [( ) ()] S=X
(5.27)

Then eAt is easily found by taking the inverse Laplace transform. In the case of distinct
roots, equation (5.25) becomes
eAt = d'ltRl + eA2tR2 + ... + eAntRn (5.28)

Note, from the spectral representation equation (5.17),


t
Ri = Xiri
so that the eigenvectors Xi and their reciprocal rj can easily be found from the R i .
In the case of repeated roots,
.,c-l{(S - Ai)-m} = tm- 1eAitj(m -1)! (5.29)
To find Ak, methods similar to those discussed to find eAt are available.

1. Series method: (5.30)

2. Eigenvalue method: (5.31)

and for distinct eigenvalues n


Ak = ~ A~xir7
i=1

3. Cayley-Han1ilton: 11-1
~ Yj(k)Ai (5.32)
i=O

n-l

where the Yi(k) are evaluated from Jk = ~ Yr(1c)Ji where from equation (4.15),
i=l

L~l(Al) 0 o o o
o L;'l(Al) o o o

o o L~l (AI) 0 o (5.33)


o o o L~C2(A2) o
........................................ " ....... I .............. ill".

o o o o
and if Lji(Ai) is l x l as in equation {5.20},
A~ kA~-I 1-l)[(l-1) 1(1c - l + 1) 1] _1)
(k 1Ar+

(
~.....A.~ .... : : : ...(~.! ~: ~:~'! ~ (~~ ~~!. (.k. ~ .l.~~: :J.-: (5.34)

o 0 ... Ai

4. Resolvent matrix:
(5.35)
where R(z) = (zI - A)-I.
Since R(z) is exactly the same form as R(8) except with z for s, the inversion procedures
given previously are exactly the same.
104 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

The series method is useful if Ak = 0 for some k = k o• Then the series truncates at
ko - 1. Because the eigenvalue problem Ax = Ax can be multiplied by A k-l to obtain
0= AkX = AA k-1X = Akx, then A = O. Therefore the series method is useful only for
systems with ko poles only at the origin. Otherwise it suffers from slow convergence,
roundoff, and difficulties in recognizing the resulting infinite series.
The eigenvalue method is not very fast because each eigenvector must be computed.
However, at the 1968 Joint Automatic Control Conference it was the general consensus
that this was the only method that anyone had any experience with that could compute
eAt up to twentieth order.

The Cayley-Hamilton method is very similar to the eigenvalue method, and usually
involves a few more multiplications.
The resolvent matrix method is usually simplest for systems of less than tenth order.
This is the extension to matrix form of the usual Laplace transform techniques for single
input-single output that has worked so successfully in the past. For very high order
systems, Leverrier's algorithm involves very high powers of A, which makes the spread of
the eigenvalues very large unless A is scaled properly. However, it involves no matrix
inversions, and gives a means of checking the amount of roundoff in thatAFn + onI should
equal o. In the case of distinct roots, R. = xir:t so that the eigenvectors can easily be
obtained. Perhaps a combination of both Leverrier's algorithm and the eigenvalue method
might be useful for very high order systems.

5.3 TRANSITION MATRIX FOR TIME-VARYING DIFFERENTIAL SYSTEMS


There is NO general solution for the transition matrix of a time-varying linear system
such as there is for the time-invariant case.
Example 5.1.
We found that the transformation A = TJT-l gave a general solution
cp(t, to) = cp(t - to) = eACt-to) = T eJCt-to) T-l

for the time-invariant case. For the time-varying case,


dx/dt = A( t)x
Then A(t) = T(t) J(t)T-l(t),where the elements of T and J must be functions of t. Attempting a
change of variable x = T(t)y results in
dy/dt = J(t)y - T-1(t) (dT(t)/dt)y

which does not simplify unless dT(t}/dt = 0 or some very fortunate combination of elements.
We may conclude that knowledge of the time-varying eigenvalues of a time-varying
system usually does not help.
The behavior of a time-varying system depends on the behavior of the coefficients of
the A(t) matrix.
Example 5.2.
Given the time-varying scalar system dE/dt = ~ sgn (t - t 1) where sgn is the signum function, so that
sgn (t - t 1) = -1 for t < tl and sgn (t - t 1 ) = +1 for t > t 1• This has a solution ~(t) = ~(to)e-(t-to)
for t < tl and ~(t) = ~(tl)e(t-tl)for t> t l . For times t < t 1, the system appears stable,J~~t actually
the solution grows without bound as t ~ co. We shall see in Chapter 9 that the concept of staBility must
be carefully defined for a time-varying system.
Also, the phenomenon of finite escape time can arise in a time-varying linear system,
whereas this is impossible in a time-invariant linear system.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 105

Example 5.3.
Consider the time-varying scalar system

Then

and the solution is represented in Fig. 5-1. The solution goes to to tl


infinity in a finite time. Fig. 5-1
These and other peculiarities make the analysis of time-varying linear systems relatively
more difficult than the analysis of time-invariant linear systems. However, the analysis
of time-varying systems is of considerable practical importance. For instance, a time-
varying linear system usually results fronl the linearization of a nonlinear system about a
nominal trajectory (see Section 1.6). Since a control system is usually designed to keep
the variations from the nominal small, the time-varying linear system is a good approxi-
mation.
Since there is no general solution for the transition matrix, what can be done? In
certain special cases a closed-form solution is available. A computer can almost always
find a numerical solution, and with the use of the properties of the transition matrix
(Theorem 5.1) this makes a powerful tool for analysis. Finally and perhaps most im-
portantly, solutions for systems with an input can be expressed in terms of the transition
matrix.

5.4 CLOSED FORMS FOR SPECIAL CASES OF TIME-VARYING


LINEAR DIFFERENTIAL SYSTEMS
Theorem 5.4: A general scalar time-varying linear differential system dUdt = a(t)~ has
the scalar transition matrix
rta:("I/)dll
cp (t)
,T = eh

Proof: Separating variables in the original equation, d~/~ = a(t)dt. Integrating and
taking antilogarithms gives ~(t) = foef:u(71) d71 •
Theorem 5.5: If A(t) A(T) = A(T) A(t) for all t, T, the time-varying linear differential sys-
tem dx/dt == A(t)x has the transition matrIx 4'(t, T)
.- = est T
A(7) d'TI

This is a severe requirement on A(t), and is usually met only on final examinations.

Proof: Use of the series form for the exponential gives


1 r t rt
I + i l'
t
A("l) d", + 2! J. A("l) dr; J. A(e) de + .,. (5.36)

Taking derivatives,
a ftr A(l1) d1'l (5.37)
-e
at
But from equation (5.36),
A(t)e f;A(71) d1)

This equation and (5.37) are equal if and only if

A(t) it A(1]) d1] ft A(1]) d"l A(t)


106 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

Differentiating with respect to T and multiplying by -1 gives the requirement


A(t) A(1-) = A(T) A(t)
Only A(t) A(T) = G(t, T) need be multiplied in the application of this test. Substitution
of T for t and t for T will then indicate if G(t, ,) = G(T, t).

Example 5.4.
~). t2,,2 fl..,. + t) ,we
Given A(t) ==: ( : Then from A(t) A(T) = G(t, T);;::: ( 0 1 see immediately

Theorem 5.6: A piecewise time-invariant system, in which A(t) = Ai for ti ~ t~ tt+l


for i = 0,1,2, . .. where each Ai is a constant matrix, has the transition
matrix

Proof;Use of the continuity property of dynamical systems, the transition property


(equation (5.5)) and the transition matrix for time-invariant systems gives this proof.
Successive application of this theorem gives
~(t, to) eAoCt-to) for to === t ::::: tl
q,(t, to) eAICt-tl) eAoCtl-tO) for tl::::: t ::::: t2
etc.
Example 5.5.
Given the flow diagram of Fig. 5-2 with a switch S that switches from the lower position to the
upper position at time t l • Then dxidt = Xl for to === t < tl and dxidt ==: 2xl for tt == t. The
solutions during each time interval are Xl(t) ==: xlOe t - to for to === t < tl and XI(t);;::: X'1(t 1)e 2(t-t1 ) for
tl === t, where Xl(tt) = xlOetl-to by continuity.

Fig.5~2

It is common practice to approximate slowly varying coefficients by piecewise con-


stants. This can be dangerous because errors tend to accumulate, but often suggests means
of system design that can be checked by simulation with the original system.
Another special case is that the nth order time-varying equation
dny d n- 1 y dy
tn- + a t n- 1 _n_1 + ... + a t - + a y 0
dtn dt - n-l dt n

can be solved by assuming y ==: tAo Then a scalar polynomial results for A, analogous to the'
characteristic equation. If there are multiplicities of order m in the solution of this poly-
nomial, y = (1n t)m-lt"" is a solution for i = 0, 1,2, ... , m-l.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 107

A number of "classical" second order linear equations have closed form solution in the
sense that the properties of the solutions have been investigated.

Bessel's equation:
(5.38)

Associated Legendre equation:


(1- t2) Y - 2ty + [n(n + 1) - m2/(1- t 2 )]y = 0

Hermite equation:
y- 2ty + 2ay = 0

Laguerre equation:
ty + (1- t)y + ay = 0
with solution Ln(t), or
ty + (k + 1- t)y + (a - k)y = 0
with solution dkLn(t)/dt k •

Hypergeometric equation:
Ordinary:
t(1- t)ii + [y - (a + fi + l)t]y - afiy o
Confluent:
tii + (y-t)iJ - ay =0
Mathieu equation:
ii + (a + fi cos t)y = 0 (5.89)
or, with T = cos t,
2

The solutions and details on their behavior are available in standard texts on engineering
mathematics and mathematical physics.
Also available in the linear time-varying case are a number of methods to give q.(t, T)
as an infinite series. Picard iteration, Peano-Baker integration, perturbation techniques,
etc., can be used, and sometimes give quite rapid convergence. However, even only three
or four terms in a series representation greatly complicate any sort of design procedure,
so discussion of these series techniques is left to standard texts. Use of a digital or analog
computer is recommended for those cases in which a closed form solution is not readily
found.

5.5 ,PERIODICALLY..VARYING LINEAR DIFFERENTIAL SYSTEMS


Floquet theory is applicable to time-varying linear systems whose coefficients are con-
stant or vary periodically. Floquet theory does not help find the solution, but instead gives
insight into the general behavior of periodically-varying systems.

Theorem 5.7: (Floquet). Given the dynamical linear time-varying system dx/dt = A(t)x,
where A(t) = A(t + 1Il). Then
q.(t, T) == P(t, T)e RCt - T
)

where P(t, T) = P(t+lIl, T) and R is a constant matrix.


108 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

Proof: The transition matrix satisfies


aq,(t, -r)/at = A(t) q,(t, -r) with q,(-r, -r) = I
Setting t = t + w, and using A(t) = A(t + w) gives
aq,(t +w, -r)/at = A(t + w) q,(t+w, -r) = A(t) ~(t +w,-r) (5.40)

It was shown in Example 3.25 that the solutions to dx/dt = A(t)x for any initial con-
dition form a generalized vector space. The column vectors 4>i(t, T) of 4t(t, -r) span this vector
space, and since det q,(t, T) 01= 0, the +Jt, T) are a basis. But equation (5.40) states that the
Pi(t + w, 'f) are solutions to dx/dt = A(t)x, so that
1'1

.pi (t +w, T) = ~. Cji 4>j(t, -r)


j=l
for i = 1,2, ... , n
Rewriting this in matrix form, where C = {Cji},
4J(t+w, -r) = q,(t, -r)C (5.41)

Then
c = q,(-r, t) q,(t+w, T)
Note that C-l exists, since it is the product of two nonsingular matrices. Therefore by
Problem 4.11 the logarithm of C exists and will be written in the form
C = eWR (5.42)
If pet, -r) can be any matrix, it is merely a change of variables to write
q,(t, T) = pet, -r)eR(t-'I") (5.43)
But from equations (5.43), (5.41) and (5.42),
P(t+w, -r) = 4t(t+w, -r)e-R(t+~-T) == q,(t, -r)eWRe-R(t+IiJ-'I")
q,(t, T)e-RCt-'I") = pet, T)

From (5.41)-(5.43), R = w- 1 ln [q,(" t) q,(t+W,T)] and P(t,-r) = c)(t,T)e- RCt - T ), so that to


find Rand pet, -r) the solution q,(t, T) must already be known. It may be concluded that
Floquet's theorem does not give the solution, but rather shows the form of the solution.
The matrix pet, -r) gives the periodic part of the solution, and eRCt - T ) gives the envelope of
the solution. Since eRCt-'I") is the transition matrix to dz/dt = Rz, this is the constant
coefficient equation for the envelope z(t). If the system dz/dt = Rz has all poles in the left
half plane, the original time-varying system x(t) is stable. If R has all eigenvalues in the
left half plane except for some on the imaginary axis, the steady state of z(t) is periodic
with the frequency of its imaginary eigenvalues. To have a periodic envelope in the sense
that no element of z(t) behaves exponentially, all the eigenvalues of R must be on the
imaginary axis. If any eigenvalues of R are in the right half plane, then z(t) and x(t) are
unstable. In particular, if the coefficients of the A( t) matrix are continuous functions of
some parameter a, the eigenvalues of R are also continuous functions of (1:', so that periodic
solutions form the stability:boundaries of the system.

Example 5.6._
Consider the Mathieu equation d2 x/dt 2 + (0: + f3 cos t)x = 0 (5.39). Its periodic solutions are called
Mathieu functions, which exist only for certain combinations of - IX . and f3. The values of IX and 13
for which these periodic solutions exist are given by the curves in Fig. 5.3 below. These curves then
form the boundary for regions of stability.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 109

Fig. 5-3

Whether the regions are stable or unstable can be determined by considering the point f3 = 0 and
a: < 0 in region 1. This is known to be unstable, so the whole region 1 is unstable. Since the curves
are stability boundaries, regions 2 and 6 are stable. Similarly all the odd numbered regions are unstable
and all the even numbered regions are stable. The line f3 =0, a:::::: 0 represents a degenerate case,
which agrees with physical intuition.
It is interesting to note from the example above that an originally unstable system
might be stabilized by the introduction of a periodically-varying parameter, and vice versa.
Another use of Floquet theory is in simulation of cJt(t, T). Only c)(t,7") for one period w
need be calculated numerically and then Floquet's theorem can be used to generate the
solution over the whole time span.

5.6 SOLUTION OF THE.LINEAR,STATE EQUATIONS WITH INPUT


Knowledge of the transition matrix gives the solution to the linear state equation with
input, even in time-varying systems.

Theorem 5.8: Given the linear differential system with input


dx/dt = A(t)x + B(t)u
y = C(t)x + D(t)u (2.39)
_with transition matrix q,(t:. T) obeying ac)(t, T)/at = A(t) 4l(t, T) [equation
(5.1)]. Then

x(t) 4l(t,to)x(to) + Jto rt cJt(t,T)B(T)U(T)dT


y(t) = C(t) 4l(t, to) x(t o) + it
to
C(t) 4t(t, T) B(T) U(T) dT + D(t)u
(5.44)
The integral is the superposition integral, and in the time-invariant case it becomes a
convolution integral.

Proof: Since the equation dx/dt = A(t)x has a solution x(t) = lIt(t, to), in accordance
with the method of yariation of parameters, we change variables to k(t) where
x(t) = cIJ(t, to) k(t) (5.45)
110 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

Substituting into equation (2.39),


dx/dt = (acp/at)k + ipdk/dt == A(t)+k + B(t)u
Use of equation (5.1) and multiplication by ~(to, t) gives
dk/dt = ~(to, t) B(t) u(t) _-

Integrating from to to t,
k(t) = k(to) + rt +(to, 1') B(T) U(T) dT
Jto (5.46)

Since equation (5.45) evaluated at t = to gives x(to) = k(t o), use of (5.45) in (5.46) yields

4J(to, t) x(t) = x(to) + rt ~(to, 1') B(T) U(T) dT


Jto
Multiplying by +(t, to) completes the proof for x(t). Substituting into y(t) == eft) x(t) +
D(t) u(t) gives y(t).

In the constant coefficient case, use of equation (5.14) gives


t
x(t) = eACt-to)x(to) + Jr eA<t-T)Bu(T) d1' (5.47)
to

and
y(t) = CeA<t-to)x(to) + _
J
r CeA<t-T)Bu(T) dT + Du(t)
to
t
(5.48)

This is the vector convolution integral.

Theorem 5.9: Given the linear difference equation


+ B(k) u(k)
x(k + 1) = A(k)x(k)
y(k) = C(k) x(k) + D(k) u(k) (2.40)

with transition matrix ~(k, m) obeying iII(k + 1, m) = A(k) .(k, m) [equa-


tion (5.3)]. Then
k---:-l k-2 k-l
x(k) = II A(i) x(m) + L TI A(i) B(j) u(j) + B(k -1) u(k -1)
i=m - ;=m i=j+l (5.49)
where the order of multiplication starts with the largest integer, i.e. A(k-1) A(k - 2)· ...

Proof: Stepping equation (2.40) up one gives


x(m +2) = A(m + 1) x(m+ 1) + B(m + 1) u(m+ 1)
Substituting in equation (2.40) for x(m + 1),
x(m + 2) = A(m + 1) A(m) x(m) + A(m + 1) B(m) u(m) + B(m + 1) u(m + 1)
Repetitive stepping and substituting gives
x(k) = A(k -1)· .. A(m) x(m) + A(k - 1)· .. A(m + 1) B(m) u(m)
+ A(k -1)·· ·A(m +2) B(m + 1) u(m + 1) + '" + A(k-1)B(k- 2) u(k- 2)
+ B(k - 1) u(k - 1)
This is equation (5.49) with the sums and products written out.
CHAP. 5] SOLUTIONS TO THE LINEAR S'rATE EQUATION 111

5.7 TRANSITION MATRIX FOR TIME~V ARYING DIFFERENCE EQUATIONS


Setting B = 0 in equation (5.49) gives the transition matrix for time-varying difference
equations as
k-l
iIJ(k, m) = II A(i) for k > m (5.50)
i=m
m-l
For k = m, iIJ(m, m) = I, and if A -l(i) exists for all i, 4»(k, m) = II A -l(i) for k < m.
Then equation (5.49) can be written as i=k

k-l
x(k) 4»(k, m) x(m) + ~ iIJ(k, i + 1) B(j) uU) (5.51)
j=m

This is very similar to the corresponding equation (5.44) for differential equations
except the integral is replaced by a sum.
Often difference equations result from periodic sampling and holding inputs to dif-
ferential systems.

net) X u(tk )
Hold
dx/dt = A(t)x + B(t)u yet)
o 'T J -y = C(t)x D(t)u + T

Fig. 5-4

In Fig. 5-4 the output of the hold element is u(k) = U(tk) for tk ~ t < tk+ 1, where
= T for all k. Use of (5.44) at time t = tk+1 and to = tk gives
~k+l - tk

x(k + 1) (5.52)

Comparison with the difference equations (2.40) results in

where the subscript 8 refers to the difference equations of the sampled system. For time-
invariant differential systems, As = eAT, Bs = iT eACT-r)B dT, C S :::::: C and Ds = D. Since
in this case As is a matrix exponential, it is nonsingular no matter what A is (see the com-
ment after Problem 4.11).

Although equation (5.50) is always a representation for iIJ(k, m), its behavior is not
usually displayed. Techniques corresponding to the differential case can be used to show
this behavior. For instance, Floquet's theorem becomes cp(k, m) = P(k, m)Rk-m, where
P(k, m) = P(k + 00, m) if A(k) = (k + Also (0).

(k1!n)!y(k+n) + a l (k+~~l)l y(k+n-1) + ... + lX _


n 1
(k+l)y(k+1) + fXny(k) = 0

has solutions of the form )..k/k!. Piecewise time-invariant, classical second order linear,
and series solutions also have a corresponding discrete time form.
112 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

5.8 IMPULSE RESPONSE MATRICES


With zero initial condition x(to) = 0, from (5.44) the output y(t) is

y(t) = .( C(t) 4o(t, T) B(T) U(T) dT + D(t) u(t) (5.53)

This suggests that y(t) can be written as a matrix generalization of the superposition
integral,
y(t) = (5.54)

where H(t, T) is the impulse response matrix, i.e. hij(t,T) is the response of the ith output at
time t due to an impulse at the ith input at time T. Comparison of equations (5.53) and
(5.54) gives
t===T
H(t, T) = t<T
(5.55)

In the time-invariant case the Laplace transformation of H(t, 0) gives the transfer function
matrix
~{H(t,O)} = C(sI-A)-lB +D (5.56)

Similarly, for discrete-time systems y(k) can be expressed as


k
y(k) == ~ H(k, m) u(m) (5.57)
m=-<X)

where
C(k) 4J(k, m. + 1) B(m) k>m
H(k,m) = D(k) k= m (5.58)
{ o k<m

Also, the Z, transfer function matrix in the time-invariant case is


Z{H(k,O)} = C(zl-A)-lB+D (5.59)

5.9 THE ADJOINT SYSTEM


The concept of the adjoint occurs quite frequently, especially in optimization problems.

DeJinition 5.2: The adioint, denoted La, of a linear operator L is defined by the relation
(p, Lx) = (LaP, x) for all x and p (5.60)

We are concerned with the system dx/dt = A(t)x. Defining L = A(t) - Id/dt, this· becomes
Lx = O. Using the scalar product (p,x) =
equation (5.60) using integration by parts.
it1 to
ptxdt, the adjoint system is found from

(p,Lx) =
i tl
ptA(t)x dt -
it! dx
pt dt dt
to to

. ( ' [xtA tp + xt ~~J dt + pt(to) x(t:) - pt(t,) x(t o)

For the case p(to) :::: 0 = p(t~), we find La = At(t) + Id/dt.


CHi\.P.5} SOLUTIONS TO THE LINEAR ,STATE EQUATION 113

Since Lx = 0 for all x, then (p,Lx) = 0 for all xand p. Using (5.60) it can be con-
cluded LaP = 0, so that the adjoint 8ystem is defined by the relation
dp/dt = -At(t)p (5.61)
Denote the transition matrix of this adjoint system as "M(t, to), i.e.,
o~(t, to)/ot = -At(t) ~(t, to) with ~(to, to) = I (5.62)

Theorem 5.10: Given the system dx/dt = A(t)x + B(t)u and its adjoint system dp/dt =
-At(t)p. Then tl

pt(tl) X(tl) = pt(to) x(to) + J r pt(t) B(t) u(t) dt


to
(5.63)

and "itt( t, to) = • -I( t, to) (5.64)

The column vectors I/Ii(t, to) Ofik(t, to) are the reciprocal basis to the column vectors tpi(t, to)
of +(t, to). Also, if u(t) = 0, then pt(t) x(t) = scalar constant for any t.

Proof; Differentiate pt(t) x(t) to obtain


d(ptx)/dt = (dpt/dt)x + pt dx/dt
Using the system equation dx/dt = A(t)x + B(t)u and equation (5.61) gives
d(ptx)/dt = ptBu
Integration from to to tl then yields (5.63). Furthermore if u(t) = 0,
pt(to) x(to) = pt(t) x(t)

From the transition relation, x(t) = .(t, to) x(to) and pet) = 'i'(t, to) p(to) so that
pt(to) Ix(to) = pt(to) 'i't(t, to) eIt(t, to) x(to)
for any p(to) and x(to). Therefore (5.64) must hold.
The adjoint system transition matrix gives another way to express the forced solution
of (5.44):
x(t)

The variable of integration T is the second argument of (J(t, 7"), which sometimes poses simula-
tion difficulties. Since eIt(to, T) = .-l(T, to) = 'i't(T, to), this become~

x(t) q,(t, to) [x(to) + J: ,.t(T, to) OtT) U(T) dTJ (5.65)

in which the variable of integration 7" is the first argument of "!V(T, to).
The adjoint often can be used conveniently when a final value is given and the system
motion backwards in time must be found.

Exa~:~:n5.7~x/dt = (3: ~:)x. Use the adjoint system to find the set of states (",(1) "2(1)) that
permit the system to pass through the point x1(2) = 1.

The adjoint system is dp/dt = -t 2 (1 3) 2 p.


..
This has a transItIon matrIx
.
'k(t, T) = O.2e(~/2)",-(rJ2) (_~ -~) + O.2e2r-2t2 (~ ,~)
114 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP; 5

Since pt(2)x(2) = pt(l}x{l). if we choose pt(2) = (10), then (10)x(2) = xl(2) = 1 = pt(l)x(l) =
Pl(l) xl(l) + P2(!) x 2(1). But pel) = .(1,2) p(2), so that
pt(l) = 0.2(3e-1.5 + 2e6 2e6 - 2e-1.5)
The set of states xC!) that gives xl(2) = 1 is determined by
1 = (O.6e-1.5 + OA( 6)xl(1) + (OAe 6 - 0.4e-1.5)x2(1)

Solved Problems
5.1. Given 4»(t, to), find A(t).
Using (5.2) in (5.1) at time to = t, A(t) = a~{t, to)/at evaluated at- to = t. This is a quick
check on any solution.

5.2. Find the transition matrix for the system


o
dx
CIt -4
-1
by (a) series method, (b) eigenvalue method, (c) Cayley-Hamilton method, (d) re-
solvent matrix method. In the resolvent matrix method, find (81 - A)-l by (1) sub-
stitution and elimination, (2) Cramer's rule, (3) flow diagram, (4) Leverrier's
algorithm.
(a) Series method. From (5.16), eAt = 1+ At/!! + A 2 t 2 /2! + . ". Substituting for A,

(./2 0)
G D C~
0 0
eAt o) o + ...
= 1 + -4t 4t + 0 6t 2 -8t2
0 -t o 0 2t2 _2t2

=
C- +!'2+ ...
t
1- 2t
0
+ 4t2/2 - 2t(1 ~ 2t) + ...
-tel - 2t + 4t2/2 + ... )
4t(1- 2t +o4t2/2 + ... )
1 - 2t + 4t2/2 + 2t(1 - 2t) + ...
)

Recognizing the series expression for e- t and e- 2t gives


o
-eAt (1- 2t)e- 2t o 2t
4te- )
-te- 2t (1 + 2t)e- 2t

(b) The eigenvalues of A are -1, -2 and -2, with corresponding eigenvectors (1 0 O)T and (0 2 l)T.
The generalized eigenvector corresponding to -2 is (0 1 l)T, so that

(-~ -~
o -1
~)
0
= (~ ~ ~)(-~ -~ ~)(~ ~ -~)
0 1 1 0 0 -2 0 -1 2
Using equation (5.17),

eAt = (~ ~ ~)(e~t e~2t te~2t)(~ ~ _~)


o 1 1 2t 0 0 e- 0 -1 2
Mult.iplying out the matrices gives the answer obtained in (a).
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 115

(c) Again, the eigenvalues of A are calculated to be -1, -2 and -2. To find the Yi (t) in equation
(5.18),

eJt =
C~'
0
e- 2t
0
t'~2)
e-2t
== roO
0
1
0
0) (1
~ + Y1 ~ -2o
o
1
-2
0) + Y2 C 0)
0
0
o4
o
-4
4
which gives the equations
e- t = Yo - Y1 + Y2
e- 2t Yo - 2Y1 + 4Y2
te- 2t = Y1 - 4Y2

Solving for the Yi'


'Yo = 4e- t - 3e- 2t - 2te- 2t
'Y1 = 4e- t - 4e- 2t - 3te- 2t
Y2 e- t - e- 2t - te- 2t

Using (5.18) then gives


o
eA ' = (4e-' - 3e- 2' - 2te- 2') (~ 1
o

o
12 -16
0)
4 -4
Summing these ll'latrices again gives the answer obtained in (a).

(d1) Taking the Laplace transform of the original equation,

s.,e(X 1) - XlO -"e(X 1)


S.,e(X2) - X20 -4.,e(x2) + 4.,e(xs)
s.,e(xa) - Xao -"e(X2)

Solving these equations by substitution and elimination,

"e(Xg) == - (8 ;~)2 + [s ~ 2 + (8'; 2)2J xao


Putting this in matrix form .,e(x) = R(s)xo,
1
o o

1 2 4
Res) o 8 +2- (s + 2)2 (s + 2)2

1 _1_+_2_
o (8 + 2)2 s+2 (8+2)2

Inverse Laplace transformation gives eat as found in (a).

(d2) From (5.22),


116 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

U sing Cramer'S rule,


8 + 2)2 o
R(s) :::::
(.+1)~.+2)2 ( :
8(8 + 1)
-(8 + 1)
4(s o
+ 1) )
(8 + 1)(8 + 4)
Performing a partial fraction expansion,

R(s)
= .! C D .!2G D (.:2 G D
1 :
0
0
0
+
0
1
0
+ P
0
-2
-1
Addition will give R(s) as in (d1).

(dS) The flow diagram of the Laplace transformed system is shown in Fig. 5·5.

Fig. 5-5

For x10 ::::: 1, .(x1)::::: l/(s + 1) and .e(X2)::::: .e(xs) ::::: O.


For X20 == 1, .(X1)::::: 0, + 2)2 and .,e(xs)::::: 4/(s + 2)2.
.(X2)::::: s/(s
For XgO ::::: 1, .e(X1)::::: 0, .(X2):::: -l/(s + 2)2 and .e(xs);;;:: (s + 4)/(8 + 2)2.

Therefore,
1
0 0
s+l

8 4
R(s) 0
(s + 2)2 (8 + 2)2

-1 8+4
0 (8 + 2)2
(s + 2)2

Again a partial fraction expansion can be performed to obtain the previous result.

(d4) Using Theorem 5.3 for Leverrier's algorithm,

F1 ::::: I 01 ::::: 5

F2 :::::
(1 0;)+51
0-4
o-1
82 ::::: 8

Fa:::::
(4 ~ -8
0

-1 -4
~) +81 83 4

Using equation (5.23),

~G
0 0 0

RCs) =
1
0 D+·G
S3 + 582
1
-1
+ 88
D+ G
+ 4
0
-1 D
A partial fraction expansion of this gives the previous result.
CHAP.5J SOLUTIONS TO THE LINEAR STATE EQUATION 117

5.3. Using (a) the eigenvalue method and then (b) the resolvent matrix, find the transition
matrix for

(a) The eigenvalues a.re -1 and -2. with eigenvectors (1 O)T and (1 -1)T respectively. The re-
ciprocal basis is (1 1) and (0 -1). Using the spectral representation equation (5.31),

(b) From equation (5.35),


R(z)
-1
z+2
)-1 == (z
1
+ 1)(z + 2)
(Z. +0 2 z +1)
1
so that

,2"-1
z 1
z+ z 1- z
z+ z)
+ 2. (-l)k - (-2».:)
(-2)k
( z
o z+2

5.4. Prove. Leverrier's algorithm,


sn-1F1 + sn-2F2 + ... + 8Fn - 1 + Fn
(sI-A)-1 = sn + 8lS n - l + ... + 8n-18 + fin (5.23)

and the Cayley-Hamilton theorem (Theorem 4.15).


Let det (sl - A) ::::: ¢(s) == sn + 8 l S n - 1 + . " + 8 n - l S + On' Use of Cramer's rule gives
¢(8)(sl - A)-I::::: F(s) where F(s) is the adjugate matrix, i.e. the matrix of signed cofactors trans'"
posed of (sI - A).

An intermediate result must be proven before proceeding, namely that tr F(s) == d¢/ds. In the
proof of Theorem 3.21, it was shown the cofactor cli of a general matrix B can be represented as
elj ::::: ej /\ b2 /\ ••• /\ bn and similarly it can be shown cij == b l /\ ... /\ hi-l A ej /\ bi+ 1/\ ••• /\ b n • so
that letting B == sl - A and using tr F(s) == c11(s) + c22(8) + ... + enn(s) gives
tr F(s) == el /\ (se2 - a2) /\ ... 1\ (sen - an) + (se l - al) /\ e2/\ ... /\ (sel - an)
+ ... + (sel - al) /\ ... 1\ (8en -l - an-I) /\ en

But

and
drp/ds == el/\ (se2 - a2) /\ ... /\ (sen - an) + .. , + (sel - al) /\ ... /\ en == tr F(s)
and so the intermediate result is established.
Substituting the definitions for ¢(s) and F(s) into the intermediate result,
trF(s) == tr (sn-lF l + sn-2F2 + ... +sFn - 1 +Fn)
== sn-l trFl + sn-2 trF2 + .. , + s trFn- 1 + trFn
drp/ds ::::: nsn-l + (n -1)8 1 S n - 2 + ... + 8n - l

Equating like powers of s,


tr Fk + 1 == (n - k)ok- (5.66)

for k == 1,2, ... , n-1. and tr Fl == n. Rearranging (5.23) as


(sn + 8 18 n - 1 + ... + 8 n)I == (sl - A)(sn-lFl + sn-2F2 + ... + Fn)
and equating the powers of s gives I == F l , 8n l == -AFnl and
8k-I := -AFk + F k +l (5.67)
118 SOLUTIONS TO THE ::i:..lNEARS'TATE EQtTATION [CHAP. 5

for k = 1,2, .. . ,n-I. These are half of the relationships required.- The other half are obtained
by taking _their trace

and substituting into (5.66) to get


kBk = -trAFk
which are the other half of the relationships needed for the proof.

To prove the Cayley-Hamilton theorem, successively substitute for F k + 1, i.e.,

Using the last relation onI = -AFn then gives

which is the Cayley-Hamilton theorem.

5.5. Given the time-varying system


dx
at
Find the transition matrix and verify the transition properties [equations (5.5)-{5.8)].

Note that A(t) commutes with A(T), i.e.,

A(t) A(T) =

It can be shown similar to Problem 4.4 that two n X n matrices Band C with n independent eigen-
vectors can be simultaneously diagonalized by a nonsingular matrix T if and only if B commutes
with C. Identifying A(t) and A(T) for fixed t and T with Band C, then A(t) = T(t)A(t)T-l(t) and
A(T) = T(T)A(T)T-l(T) means that T(t) = T(T) for all t and- T. This implies the matrix of eigen-
vectors T is constant, so dT/dt = O. Referring to Example 5.1, when dT(t)/dt = 0 then a general
solution exists for this special case of A(t)A(T) = A(T)A(t).

For the given time-varying system, A(t) has the eigenvalues Al = a + je- t and i\.2 = a - je-t.
Since A(t) commutes with A(r), the eigenvectors are constant, (j -l)T and (j 1)T. Consequently

T = (
-1
j
~) A(t) =
(
a -01e-t T'-l ::;:: -1
2
(-1 -1)
-j 1

From Theorem 5.5,

Substituting the numerical values, integrating and multiplying out gives


CHAP.5J SOLUTIONS TO THE LINEAR STATE EQUATION 119

To check this, use Problem 5.1:

Setting 7" = t in this gives A(t).


To verify ~(t2' to) ::;:: ~(t2' t l ) ~(tl' to), note
cos (e-to - e- t 2) = cos (e-to - e- t l + e-tl - e- t2 )
::;:: cos (e-to - e-tl) cos (e-tl - e- t2 ) - sin (e-to - e-t1) sin (e-t1 - e- t2)
and

so that
sin (e-tl- e- t2»)
cos (e-t1 - e- t2 )

X e-a(ti-to)
COS (e-to - e- t1 ) sin (e-to - e- t1 »)
( -sin (e-to - e- t1 ) cos (e-to - e- t1 )
To verify 4t- l (t l • to) ::;:: Iilo(to, til, calculate Iilo-I(t I ; to) by Cramer's rule:

4t- I (t t) ::;::
ea(to-t1)
.
(COS. (e-to - e-t1) -sin (e-to - e-tl»)
1, 0 cos2 (e-to - e:-t1) + 8m2 (e-to - e tI) sm (e-to - e-t1) cos (e-to - e-t1)
Since cos2 (e-to - e-tl) + sin2 (e-to - e-t1) ::;:: 1 and sin (e-to - e-t1) ::::: -sin (e-t1- eto), this
equals 4t(to, til.

8(t) ::;::
sin (1- e- t »)
cos (1- e- t )
Then
8- I (t) :::: e- at COS .(1- e- t ) -sin (1- e- t »)
( sin (1- e- t ) cos (1- e- t )
Since we have
cos (e-to - e- t1 ) = cos (e-to -1) cos (1- e- t1) - sin (e- to - 1) sin (1 - e-tl)
and a similar formula for the sine, multiplication of B(tI ) and 8- I (to) will verify this.
Finally, det !ft(t, 7") ::;:: e2a (t-'t) and trA(t)::;:: 2a, so that integration shows that the determinant
property holds.

5.6. Find the transition matrix for the system

dx/dt = (0 1)x
-1 - (~+ a)/t -2a - lit
Writing out the matrix equations, we find dxl/dt::;:: x2 and so
d 2x l ldt2 + (2a + lit) dxlldt + [1 + (0:2 + a}/tJxl ::::: 0
Multiplying by the integrating factor teat, which was found by much trial and error with the
equation, gives
t(e at d 2x l /dt 2 + 2/Xe at dx1/dt + a2eatxl) + (eat dx1/dt + o:eatxl) + teatXI 0
which can be rewritten as
td2(eatXl)/dt2 + d(eatxI)ldt + teatxI ::;:: 0
This has the same form as Bessel's equation (5.38), so the solution is
xI(t) cle-atJO(t) + c 2e- at Y O(t)
x2(t) = dx1ldt ::;:: -o:Xt(t) + e-at[-cIJI(t) + c2dYo/dt]
120 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

To solve for the constants CI and C2,

eaT C~2(T)X~T~Xl(T»)
80 that x(t) = FG(t) G-l(T) F-lx (T), where

F = (1 -a ~)
Yo(t) )
dYo/dt
and so ~(t, T)= FG(t) G-l(T)F-I. This is true only for t and- T> 0 or t and T < 0', because
at the point t = 0, the elements of the original A(t) matrix blow up. This accounts for Y 0(0) = 00.

Admittedly this problem was contrived, and in practice a man~made system would only
accidentally have this form. However, Bessel's equation often occurs in nature, and knowledge
that as t ~ co, J 0 (t) ,..., V2lrrt cos (t - '/T/4) and Yo (t) ,.., V2/'/Tt - sin (t - 7r/4) gives great insight
into the behavior of the system.

5.7. Given the time-varying difference equation x(n + 1) = A(n) x(n), where A(n) = Ao
if n is even and A(n) = Al if n is odd. Find the fundamental matrix, analyze by
Floquet theory, and give the conditions for stability if Ao and Al are nonsingular.
From equation (5.50),
AIAOA1' .. AIAo if k is even and m is even
AoAIAO' .. AIAO if k is odd and m is even
.z,(k,m)

=
1 Al AoAI ... AoAl
AoAIAo' .. AOA!
if k is odd and m is odd
if k is even and

For m even, .z,(k, m) P(k, m) (A 1A o)(k-m)/2, where P(k, m) I if k is even and P(k, m) = =
m is odd

(AoAl-I)1I2 if k is odd. For m odd, (J(k, m) = P(k, m)(AoAI)Ck-m) 12, where P(k; m) = I if k is
odd and P(k,m) = (AIAO-I)1I2 if k is even. For instability, the eigenvalues of R = (AIAo)l/2
must be outside the unit circle. Since the eigenvalues of B2 are the squares of the eigenvalues of B,
it is enough to find the eigenvalues of A1Ao. This agrees with the stability ~nalysis of the equation
x(n + 2) = A1AoX(n).

5.8. Find the impulse response of the system


d2 y/dt2 + (1- 20:) dy/dt + (a 2 - 0: + e- 2t )y ~ u
Choose Xl = Y and X2 = et (dxl/dt - aXI) to find a state representation in which A(T) commutes
with A(t). Then in matrix form the system is

y =(1 O)x

From equation (5.55) and .z,(t, T) obtained from Problem 5.5,


B(t, T) = (37' + a(t-7') sin (e- T - e- t )

This is the response yet) to an input u(t) = S(t - T).

5.9. In the system of Problem 5.8, let u(t) = eCa - 2)t, y(to) = Yo and (dy/dt)(to) = ayo.
Find the complete response.
From equations (5.44) and Problem 5.5,
yet) = eaCt-to) cos (e-to - e-t)yo + eat st e-7" sin (e- r - e- t ) dT
to
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 121

Changing variables from T to 11 in the integral, where e- T - e- t = 11, gives

Notice this problem cannot be solved by Laplace transformation in one variable.

5~10. Given a step input U(8) = 6/8 into a system with a transfer function
8+1
H(8) = 82 + 58 + 6
Find the output y(t) assuming zero initial conditions.
The easiest way to do this is by using classical techniques.
6(8 + 1) = 1: + _3_ _ _ 4_
..e{y(t)} = U(8) H(8) =
+ 582 + 68
s3 8 8+2 8+3
Taking the inverse Laplace transform determines y = 1 + 3e- 2t - 4e- 3t•

Doing this by state space techniques shows· how it corresponds with the classical techniques.
From Problem 2.3 the state equations are

(-~ _~) (:~) + (~) u


y (-1 2)(:;)
The transition matrix is obviously
4)(t, 7)

The response can be expres.sed directly in terms of (5..M).


t (e-2Ct-T)
yet) == 4)(t, to)O +
f to
(-1 2)
0
== 1 + 3e- 2t - 4e- 3t (5.68)

This integral is usually very complicated to solve analytically, although it is easy for a computer.
Instead, we shall use the transfer function matrix of equation (5.56).

-<:{H(t,O)} (-1 (8 +02)-12)


(8
0) (1)
+ 3)-1 1

2 1 8+1
= == R(s)
8+3 8+2 82 + 58 + 6
This is indeed our original transfer function, and the integral (5.68) is a convolution and its
Laplace transform is
..e{y(t)}

whose inverse Laplace transform gives yet) as before.

5.11. Using the adjoint matrix, synthesize a form of control law for use in guidance.
We desire to guide a vehicle to a final state x(tt), which is known. From (5.44),

x(tf ) = 'fl(tt, t) x(t) + jtt t


*(tt, 1') B(T) U(T) dT
122 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

Choose u(t) = U(t)c where U(t) is a pre specified matrix of time functions that are easily mech-
anized, such as polynomials in t. The vector c is constant, except that at intervals of time it is
recomputed as knowledge of x(t) becomes better. Then c can be computed as

c = [ft' ~(tf' T) B(.) U(T) dTJ-' [~(tf' t) x(t) - x(tf )]

However, this involves finding C)(tb t) as the transition matrix of dxldt == A(t)x with x(t) as the
initial condition going to x (tt). Therefore C)(tj, t) would have to be computed at each recomputa-
tion of c, starting with the best estimates of x(t). To avoid this, the adjoint transition matrix
'i'(r, tf ) can be found starting with the final time tf' and be stored and used for all recomputations
of c because, from equation (5.64), 'l't(r, tf) = .p(tf , r) and c is found from

c = [f: ~t(T. tf) B(.) U(.) dTJ-' [~t(t. 1,) x(t) - x(tf )]

Supplementary Problems
5.12. Prove equations (5.9), (5.10), (5.11), and (5.12).

5.1S. Given cp(k, m), how can A(k) be found?

5.14. Prove that AeAt == eAtA and then find the conditions on A and B such that eAe B == eBeA.

5.15. Verify that 4t(t,7") = eACt - r ) and cp(k, m) == Ak-m satisfy the properties of a transition matrix
given in equations (5.5)-(5.12).

5.16. Given the fundamental matrix


(J(t, r)
=
~
_12 (e-e- 4Ct
-
r)
4ct - r ) -
+ 11 e-e- 4Ct
-
4Ct -1')
r
) -
+1
1)
What is the state equation corresponding to this fundamental matrix?

5.17. Find the transition matrix to the system dxldt == (~ _!) x.

5.18. Calculate the transition matrix .p(t,O) for dxldt == (~ ~)x using (a) reduction to Jordan

form, (b) the Maclaurin series, (e) the resolvent matrix.

5.19. Find eAt by the series method, where


series method is the easiest.
A G~ D· This shows a case where the

5.20. Find eAt using the resolvent matrix and Leverrier's algorithm, where A == (-~ -~ ~).
o 0-3

5.21. Find eAt using the Cayley-Hamilton method, where A is the matrix given in Problem 5.20.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 123

5.22. Use the eigenvalue method to find eAt for

A (-~ -~ -~)
-1 -1 2

5.23. Use the resolvent matrix and Cramer's rule to find eAt for A as given in Problem 5.22.

5.24. Use the resolvent matrix and Cramer's rule to find Ak for A as given in Problem 5.22.

5.25. Find eAt by using the Maclaurin series, Cayley-Hamilton and resolvent matrix methods when

A ~ (~-~).
5.26. Find the fundamental, or transition, matrix for the system

dx
at (-~ u
o
using the matrix Laplace transform method.

5.27. Given the continuous time system

dxldt = (=i =D + G) x u x(O) == (~)


y ::::; (1 O)x + 4u

Compute yet) using the transition matrix if u is a unit step function. Compare this with the
solution obtained by finding the 1 X 1 transfer function matrix for the input to the output.

5.28. Given the discrete time system

x(n+1) = (=! =!) x(n) + (~) u(n) x(O) ::::; (~)


yen) ::::; (1 O)x(n) + 4u(n)

Compute yen) using the transition matrix if u is the series of ones 1,1,1, ... ,1, ....

-1
5.29. (a) Calculate q:,(t, to) for the system dx/dt == (
0 21)X using Laplace transforms.

(b) Calculate q:,(k, m) for the system x(k + 1)::::;


(
-1
0 ~) x(k) using 2: transforms.

5.30. How does the spectral representation for eAt extend to the case where the eigenvalues of A are
not distinct?
n-i
5.31. In the Cayley-Hamilton method of finding eAt, show that the equation eJt ~ Yi(t)Ai can always
i=O
be solved for the Yi(t). For simplicity, consider only the case of distinct eigenvalues.

5.32. Show that the column vectors of *'(t, r) span the vector space of solutions to dx/dt == A(t)x.

5.33. Show A(t) A(T) == A(T) A(t) when A(t) == a:(t)C, where C is a constant n X n matrix and a(t) is a
scalar function of t. Also, find the conditions on ai/t) such that A(t) A(r) == A(r) A(t) for a
2 X 2 A(t) matrix.

5.34. Given the time-varying system


dx
Cit
Find the transition matrix. Hint: Find an integrating factor.
124 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

5.35. Prove Floquet's theorem for discrete time systems, ~(k, m) = P(k, m)Rk-m where P(k, m) =
P(k + w, m)if A(k) = A(k + w).

5.36. Given the time-varying periodic system


sin t sin
dx/dt == ( . t . t
sm sm
t) x. Find the transition matrix

q,(t, to) and verify it satisfies Floquet's result q,(t, to) = pet, to)eRCt-to) where P is periodic and
R is constant. Also find the fundamental matrix of the adjoint system.

5.37. The linear system shown in Fig. 5-6 is excited by a square wave set) with period 2 and amplitude
= 1. The system equation is ii + [,8 + a sgn (sin 7Tt)]y = O.
Is(t)1

8(t)o----f Multiplier ' - -_ _ _- - 0 yet)

Fig. 5-6

It is found experimentally that the relationship between a and ,8 that permits a periodic
solution can be plotted as shown in Fig. 5-7.

----------------~~--~--~--~--~--_r----r_----fi
o 2 3 4 5 6

Fig. 5-7

Find the equation involving a and ,8 so that these lines could be obtained analytically. (Do not
attempt to solve the equations.) Also give the general form of solution for all a and ,8 and mark
the regions of stability and instability on the diagram.

5.38. Given the sampled data system of Fig. 5-8 where S is a sampler that transmits the value of e(t)
once a second to the hold circuit. Find the state space representation at the sampling instants of
the closed loop system. Use Problem 5.10.

j-j
ret) 0
+
.(t) Hold
H 82
8+1
+ 5s + 6
oy(t)

Fig. 5-8

5.39. Find tt( t, T) and the forced response for the system t2;j + t~ + n :::: p(t) with 'I1(t o) = no and
~(to) = ~o.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 125

5.40. Consider the system


(Xl) .
d
dt X
2 (: !)(::) + (;) u
or x Ax+Bu

-(~:) = (~ ~)(::) or y = Cx + Du

Find the transfer functions .,e{Yl}/.,e{ul} and .,e{Y2}/.e{u2} using the relation
"c{y}/.,e{u} = [C(ls - A)-IB + DJ
5.41. The steady state response xss(t) of an asymptotically stable linear differential system satisfies the
equation
dxs/dt = A(t)xss + B(t) u(t)

but does not satisfy the initial condition xss(to) = Xo and has no reference to the initial time to.
~(t,T) is known.
(a) Verify, by substitution into the given equation, that

xss(t) = ft~(t'T)B(T)U(T)dT
where f t is the indefinite integral evaluated at T = t. Hint: For an arbitrary vector function
f{t, T),
d
dt
fset) f(t, T) dT = let, {J(t»
dj3
dt -
do:.
f(t, o:.(t» Cit +
f~(t) a
aff(t, T) dt"
a(t) a(t)

(b) Suppose ACt) = ACt + T), B(t) = B(t + T), u(t) = u(t + T), and the system is stable. Find an
expression for a periodic xss(t) = xss(t + T) in the form

xssCt) = K(T) ft+T 41(t, T) ~(T) UCT) dT


t
where K(T) is an n X n matrix to be found depending on T and independent of t.

5.42. Check that h(t) = et+a(t-T) sin (e- T - e- t ) satisfies the system equation of Problem 5.8 with
u(t)= .sCt - 7').

2
5.43. Find in closed form the response of the system (1 _ t 2) d y _ ! dy u to the input u(t) =
dt 2 t dt
tyt 2 ..... 1 with zero initial conditions.

5.44. Consider the scalar system dy/dt = -(1 + t)y + (1 + t)u. If the initial condition is yeO) = 10.0,
find the sign and magnitude of the impulse in u(t) required at t 1.0 to make y(2) 1.0. = =
0 -3/t2)
5.45. Given the system dx/dt = ( -1 2/t x. Find the relationship between Xl(t) and X2(t) such

that x 1(t j ) = 1, using the adjoint.

5.46. Show that if an n X n nonsingular matrix solution T(t) to the equation dT/dt = A(t)T - TD(t}
is known explicitly, where D(t) is an n X n diagonal matrix, then an explicit solution to dx/dt =
A(t)x is also known.
126 SOLUTIONS TO THE LINEAR STATE EQUATION [CHAP. 5

Answers to Supplementary Problems


5.13. A(k) = <I-(k+ 1, k)

5.14. If and only if AB = BA does eAeD = eDe A •

5.16. A
(-2 -2)
-2 -2

5.17. !p(t, r) = l[e t -,. (~) (1 2) + e4(T-t) (-~) (-1 2)]

t
5.18.
(
e
o
01)

5.20. "11 = 9, "12 = 26, 'Y3 = 24

eAt = 0.5.- 2t G 1
1
o
o
0) +
o O.5e- 4t
( 1-1
-1 1
o o 1 - 0 0
5.21. eAt (6e-2t - 8e -3t + 3e -4t)I + 0.5(7 e - 2t - 12e- 3t + 5e- 4t )A + O.5( e - 2t - 2e -3t + e -4t)A2

5.22. eAt

5.23. (sI - A)-l F(s)/s(s - 1)2 D/s + B/(s - 1) + e/(s _1)2 where

D = (~ ~ =~) e
C~ -~ -D B = (-~ -~ ~)
1 1-1 -1 -1 0

5.24. Ak
( -k
k-1
-k -1
k
2k +
-2k;1
1)
-1 -1

5.25. eAt (~t 1 ~ €2t)

5.27. yet) ![ll- 2e- (t-t o)/2 - e- (t-t o)] U(t - to)

5.28. y(k) 4 + [7 - 3(-l)k - 4(-1/2)k]/12

5.29. eIt(t, to) = ( et~-t et-to - eto- t ) ,


cJt(k,m) = (-l~k-m 1 - (~1)k-m)
et-to

5.30. Let the generalized eigenvector tt of A have a reciprocal basis vector s1' etc. Then
eAt = eAtxlrt + eAttlst + teAtxlsT + ...
5.31. A Vandermonde matrix in the eigenvalues results, which is then always invertible.
CHAP. 5] SOLUTIONS TO THE LINEAR STATE EQUATION 127

5.34. 4t(t, to) =

5.36. = eT (COSh T sinh T)


sinh T cosh T

where T = cos to - cos t, so 4»(t, to) = pet, to) and R O. Also ~t(t, to)
4»(to, t).

5.37. Let 1'2 = f3 + a, 82 = f3 - a. Then e2R = 4»(2,0) = cp(2, 1) cp(l, 0) where

41'(2,1) =
COSO (sin 0)/0)
4»(1,0) =
cosy (sin Y)/Y)
( -s sin IS cos 8 ( -I' sin I' cosy

For periodicity of the envelope zCt + 471/8) = zCt), eigenvalues '}-. of e 2R = e:!:.:Jo.
det ('}-./ - e2R) = '}-.2 ~ '}-.2 cos (J + 1 = '}-.2 - '}-. tr e2R + det e 2R
det cp(2, 1) det 41(1,0) = 1
2 cos 8 = 2 cos I' cos e; - (1'/8 + S/y) sin y sin S

The stability boundaries are then determined by


±2 = 2 cos y cos 0 - (1'/8 + Sly) sin I' sin 0

The solution is of the form .pCt, T) = PCt, T)eR(t-T) and the given curves form the stability boundaries
between unstable regions and periodic regions.
Reference: B. Van Der Pol and M. J. O. Strutt, On the Stability of the Solutions of Mathieu's
Equation, Philosophical Magazine, 7th series, vol. V, January-June 1928, pp. 18-38.

5.38. x(k + 1) =
1 + e- 2 )/2
( (1- e- 3 )/3
e- 2 -
(5e-3 - 2)/3
1) x(k) +
(3 - 3e- 2 ) r(k)
2 - 2e- 3 6

5.39. 'I](t) '1]0 cos (In t/to) + ~oto sin (In tlto) - ft sin (1n rlt) p(r)/T dT
to

5.41. K(T) = (e- RT - 1)-1

5.42. Since h(t) is an element of the transition matrix,


d2h/dt2 + (1- 2) dh/dt + (0;2 - 0; + e-Zt)h o for t #' r

Also, since dh/dt and h are continuous in t,

lim i7"+1:' d 2h/dt2 dt


1:'--+0
T-(,

5.43. yet) = (1/4 ~ Vt Z -1 )(t - to) + (sin -1 t - sin- 1 t o)/2

1 - 10e- 4
5.44. u(t) = 2e- 5/2 oCt - 1.0)

5.46. Let x = T(t)z.


Chapter 6

Controllability and Observability


6.1 INTRODUCTION TO CONTROLLABILITY AND OBSERV ABILITY
Can all the states of a system be controlled and/or observed? This fundamental question
arises surprisingly often in both practical and theoretical investigations and is most easily
investigated using state space techniques.

Definition 6.1: A state Xl of a system is controllable if all initial conditions Xo at any


previous time to can be transferred to Xl in a finite time by some control
function u(t, xo).
If all states Xl are controllable, the system is called completely controllable or simply
controllable. If controllability is restricted to depend on to, the state is said to be control-
lable at time to. If the state can be transferred from Xo to Xl as quickly as desired inde-
pendent of to, instead of in some finite time, that state is totally controllable. The system
is totally controllable if all states are totally controllable. Finally, we may talk about the
output y instead of the state X and give similar definitions for output controllable, e.g. an
output controllable at time to means that a particular output y can be attained starting from
any arbitrary Xu at to.
To determine complete controllability at time to for linear systems, it is necessary and
sufficient to investigate whether the zero state instead of all initial states can be trans-
ferred to all final states. Writing the complete solution for the linear case,
h
fIJ(tl, to) x(to) + i to
CP(tl, i) B(i) u(or) di

which is equivalent to starting from the zero state and going to a final state (tt) = x
x(t1) - tIt(tt, to) x(to). Therefore if we can show the linear system can go from 0 to any
x(t 1), then it can go from any x(to) to any X(tl)'
The concept of observability will turn out to be the dual of controllability.

Definition 6.2: A state x(t) at some given t of a system is observable if knowledge of the
input u(-r) and output y(.) over a finite time segment to < or === t com-
pletely determines x(t).
If all states x(t) are observable, the system is called completely observable. If observa-
bility depends on to, the state is said to be observable at to. If the state can be determined
for 'T in any arbitrarily small time segment independent of to, it is totally observable.
Finally, we may talk about observability when U(i) = 0, and give similar definitions for
zero-input observable.
To determine complete observability for linear systems, it is necessary and sufficient to
see if the initial state x(to) of the zero-input system can be completely determined from
y(-r), because knowledge of x(to) and U(i) permits x(t) to be calculated from the complete
solution equation (5.44).

128
CHAP. 6] CONTROLLABILITY AND OBSERVABILITY 129

We have already encountered uncontrollable and unobservable states in Example 1.7,


page 3. These states were physically disconnected from the input or the output. By
physically disconnected we mean that for all time the flow diagram shows no connection,
i.e. the control passes through a scalor with zero gain. Then it follows that any state
vectors having elements that are disconnected from the input or output will be uncontrol-
lable or unobservable. However, there exist uncontrollable and unobservable systems in
which the flow diagram is not always disconnected.

Example 6.1.
Consider the time-varying system

%t (::) ( ao f3O)(Xl)X2 +
(eat)
eSt U

From the flow diagram of Fig. 6-1 it can be seen that u(t) passes through scalors that are never zero.

uo-------+

Fig. 6-1

For zero initial conditions,

xl(tl)e-atl = st!
to
U(T) dT = x2(tl)e-f3tl

Only those xz(t 1) = x 1(t 1)e tt C,6-O:) can be reached at tlo so that X2{t1) is fixed after Xl(t 1) is chosen. There-
fore the system is not controllable.

6.2 CONTROLLABILITY IN TIME-INVARIANT LINEAR SYSTEMS


For time-invariant systems dxldt = Ax + Bu in the case where A has distinct eigen-
values, connectedness between the input and all elements of the state vector becomes
equivalent to the strongest form of controllability, totally controllable. We shall first con-
sider the case of a scalar input to avoid complexity of notation, and consider distinct eigen-
values before going on to the general case.

Theorem 6.1: Given a scalar input u(t) to the time-invariant system dx/dt = Ax + bu,
where A has distinct eigenvalues Ai. Then the system is totally controllable
if and only if the vector f = M-1b has no zero elements. M is the modal
matrix with eigenvectors of A as its column vectors.

Proof; Only if part: Change variables by x = Mz. Then the system becomes
dz/ dt = Az + fu, where.A. is the diagonal matrix of distinct eigenvalues Ai_ A flow diagram
of this system is shown in Fig. 6-2 below.
If any element ii of the f vector is zero, the element of the state vector Zi is disconnected
from the control. Consequently any x made up of a linear combination of the z's involving
Zt will be uncontrollable. Therefore if the system is totally controllable, then all elements
it must be nonzero.
130 CONTROLLABILITY AND OBSERVABILITY [CHAP. 6

U o-------to

Fig. 6-2

If part: Now we assume all Ii are nonzero, and from the remarks following Definition
6.1 we need investigate only whether the transformed system can be transferred to an
arbitrary Z(t1) from z(to) = 0, where t1 can be arbitrarily close to to. To do this, note

i = 1,2, .. . ,n (6.1)

It is true, but yet unproven, that if the Ii are nonzero, many different u(.-) can transfer
o to Z(tl).
Now we construct a particular U{-r) that will always do the job. Prescribe U(T) as
n
U(T) == ~ I-'-ke-A~(1'-tl) (6.2)
k=l

where the JLk are constants to be chosen. Substituting the construction (6.2) into equation
(6.1) gives
n
== ~ f. JLk( e Ak (tl-1'), e At (tC 1'J) for all i (6.3)
k=l t

where the inner product is defined as

Equation (6.3) can be written in matrix notation as


ZI(tl)/h U11 g12 gIn 1-'-1
Z2(t1 )112 g21 U22 g2n P-2
= ........... ., •• 41 ......
(6.4)

Zn(tl)/In gln g2n Unn f1-n

where gik = (e Ak (tl- T ), eAtCtl-1'J). Note that because Ii =F 0 by assumption, division by Ii is


permitted. Since the time functions eAiCtC1') are obviously linearly independent, the Gram
matrix {Yik} is nonsingular (see Problem 3.14). Hence we can always solve equation (6.4)
for }-thjJ.2, ••• , p.n, which means the control (6.2) will always work.
Now we can consider what happens when A is not restricted to have distinct eigenvalues.
CHAP. 6] CONTROLLABILITY AND OBSERVABILITY un
Theorem 6.2: Given a scalar input u(t) to the time-invariant system dx/dt = Ax + bu,
where A is arbitrary. Then the system is totally controllable if and only if:
(1) each eigenvalue Ai associated with a Jordan block Lji(Ai) is distinct from
an eigenvalue associated with another Jordan block, and
(2) each element fi of f = T- 1b, where T-1AT = J, associated with the
bottom row of each Jordan block is nonzero.
Note that Theorem 6.1 is a special case of Theorem 6.2.

Proof: Only if part: The system is assumed controllable. The flow diagram for one
l x l Jordan block Lji(Ai) of the transformed system dzldt = Jz + fu is shown in Fig. 6-3.
The control u is connected to Zl, Z2, ••• , Z!-l and Zl only if it is nonzero. It does not matter
if 11,/2, ... and fl-1 are zero or not, so that the controllable system requires condition (2)
to hold. Furthermore suppose condition (1) did not hold. Then the bottom rows of two
different Jordan blocks with the same eigenvalues [Lvi(Ai) and L1ji(Ai)] could be written as
dzv/dt AiZv + /,,11-

~-----------------------------~.~~----------------
Fig. 6-3

Consider the particular state having one element equal to fTJzv - fvzn. Then
d(fT/zv- f"zTJ)ldt = fT/(AiZv + fvu) - /,,(AiZT/ + /TJu)
= Ai (fTJ Zv - !"Z1j)
Therefore /1/Z,,(t) - /vZTJ(t) = [fTJzv(O) - /vzTJ(O)]d,!t and is independent of the control. We have
found a particular state that is not controllable, so if the system is controllable, condition
(1) must hold.
If part: Again, a control can be constructed in similar manner to equation (6.2) to show
the system is totally controllable if conditions (1) and (2) of Theorem 6.2 hold.

Example 6.2.
To illustrate why condition (1) of Theorem 6.2 is important, consider the system

(~ ~)(:~) + G) u

y = (1 -3)(:~)
=
Then .,e{X1} = (8.,e{u} + xlO)/(s - 2) and .e{X2} (.e{u} + x2o)/(s - 2) so that y{t) =
(XIO - 3x20)e2t re-
gardless of the action of the control. The input is physically connected to the state and the state is
physically connected to the output, but the output cannot be controlled.
For discrete-time systems, an analogous theorem holds.
132 CONTROLLABILITY AND OBSERVABILITY [CHAP. 6

Theorem 6.3: Given a scalar inputu(m) to the time-invariant system x(m + 1) = Ax + bu(m),
where A is arbitrary. Then the system is completely controllable if and
only if conditions (1) and (2) of Theorem 6.2 hold.

Proof: Only if part: This is analogous to the only if part of Theorem 6.2, in that the
flow diagram shows the control is disconnected from at least one element of the state vector
if condition (2) does not hold, and a particular state vector with an element equal to
f7J zv - fv z7J is uncontrollable if condition (1) does not hold.

If part: Consider the transformed systems z(m + 1) = Jz(m) + fu(m), and for simplicity
assume distinct roots so that J = 4. Then for zero initial condition,

For an nth order system, the desired state can be reached on the nth step because
Zl(n)//I ,.\n-l
1
.\n-2
1 1 u(O)
Z2(n)//2 ,.\n-l
2
,.\n-2
2 1 u(1)
(6.5)
•••• ill ....... ill •• ill .... ,. ..

Zn(n)/fn .\n-l
n
,.\n-2
n 1 u(n-l)

Note that a Vandermonde matrix with distinct elements results, and so it is nonsingular.
Therefore we can solve (6.5) for a control sequence u(O),u(1), .. • ,u(n-l) to bring the system
to a desired state in n steps if the conditions of the theorem hold.
For discrete-time systems with a scalar input, it takes at least n steps to transfer to an
arbitrary desired state. The corresponding control can be found from equation (6.5), called
dead beat control. Since it takes n steps, only complete controllability was stated in the
theorem. We could (but will not) change the -definition of total controllability to say that
in the case of discrete-time systems, transfer in n steps is total control.
The phenomenon of hidden oscillations in sampled data systems deserves some mention
here. Given a periodic function, such as sin rot, if we sample it at a multiple of its period
it will be undetectable. Referring to Fig. 6-4, it is impossible to tell from the sample points
whether the dashed straight line or the sine wave is being sampled. This has nothing to
do with controllability or observability, because it represents a failure of the abstract object
(the difference equation) to represent a physical object. In this case, a differential-differ-
ence equation can be used to represent behavior between sampling instants.

Fig. 6-4

6.3 OBSERVABILITY IN TIME-INVARIANT LINEAR SYSTEMS


Analogous to Theorem 6.1, connectedness between state and output becomes equivalent
to total observability for dx/dt = Ax + Bu, Y = ctx + dTu, when the system is stationary
and A has distinct eigenvalues. To avoid complexity, first we -consider scalar outputs and
distinct eigenvalues.
· CHAP. 6] CONTROLLABILITY AND OBSERVABILITY 133

Theorem 664: Given a scalar output y(t) to the time-invariant system dx/dt = Ax + Bu,
y = ctx + dTu, where A has distinct eigenvalues Ai. Then the system is
totally observable if and only if the vector gt = ctM has no zero elements.
M is the modal matrix with eigenvectors of A as its column vectors.
Proof: From the remarks following Defini-
tion 6.2 we need to see if x(to) can be recon-
structed from measurement of y(T) over to < T === t
in the case where U(T) = O. We do this by chang-
ing variables as - x = Mz. Then the system be-
comes dz/dt = Az and y = ctMz = gtz. The flow
diagram for this system is given in Fig. 6.5.
Each Zi(t) = zi(to)ehiCt-tO) can be determined
by taking in measurements of y(t) at times Tk =
to + (tl - to)k/n for k = 1,2, ... , n and solving
the set of equations y
11

Y(i k) L g,zi(tO)e"-ikCtl-to)/n
= i=l t

for giZi(tO)' When written in matrix form this


set of equations gives a Vandermonde matrix
which is always nonsingular if the Ai are distinct.
If all gi oF 0, then all Zi(t O) can be found. To find
x(t), use x(to) = Mz(to) and dx/dt = Ax + Bu.
Only if gi oF 0 is each state connected to y. Fig. 6-5
The extension to a general A matrix is similar to the controllability Theorem 6.2.
Theorem 6.5: Given a scalar output y(t) from the time-invariant system dx/dt = Ax + Bu,
y = ctx + dTu, where A is arbitrary. Then the system is totally observable
if and only if:
(1) each eigenvalue Ai associated with a Jordan block Lji(Ai) is distinct from
an eigenvalue associated with another Jordan block, and
(2) each element gi of gt = etT, where T-IAT = J, associated with the
top row of each Jordan block is nonzero.
The proof is similar to that of Theorem 6.2.
Theorem 6.6: Given a scalar output y(m) from the time-invariant system x(m + 1) =
Ax(m) + Bu(m), Y(11~) = etx(m) + dTu(m), where A is arbitrary. Then the
system is completely observable if and only if conditions (1) and (2) of
Theorem 6.5 hold.
The proof is similar to that of Theorem 6.3.
Now we can classify the elements of the state vectors of dx/dt = Ax + Bu, y = Cx + Du
and of x(m + 1) = Ax(m) + Bu(m), y = Cx + Du according to whether they are controllable
or not, and whether they are observable or not. In particular, in the single input-single
output case when A has distinct eigenvalues, those elements of the state Zi that have non-
zero Ii and gi are both controllable and observable, those elements Zj that have zero !J but
nonzero gj are uncontrollable but observable, etc. When A has repeated eigenvalues, a
glance., at Fig. 6-3 shows that Zk is controllable if and only if not all of /k, h+l' ... , f~ are
zero, and the eigenvalues associated with individual Jordan blocks are distinct.
Unobservable and uncontrollable elements of a state vector cancel out of the transfer
function of a single input-single output system.
134 CONTROLLABILITY AND OBSERVABILITY [CHAP. 6

Theorem 6.7: For the single input-single output system dx/dt = Ax + bu, Y = ctx + du,
the transfer function ct(sI ~ A)-lb has poles that are canceled by zeros if
and only if some states are uncontrollable and/or unobservable. A similar
statement holds for discrete-time systems.
Prool; First, note that the Jordan flow diagram (see Section 2.4) to represent the
transfer function cannot be drawn with repeated eigenvalues associated with different
Jordan blocks. (Try it! The elements belonging to a particular eigenvalue must be com-
bined.) Furthermore, if the J matrix has repeated eigenvalues associated with different
Jordan blocks, an immediate cancellation occurs. This can be shown by considering the
bottom rows of two Jordan blocks with identical eigenvalues A.

d
dt
(Zl)
Z2

Then for zero intial conditions ..({Zl} = (s~ A)-lb1..({u} and ..({Z2} = (s - A)-lb 2..({u}, so that
..({y} [Cl(S - A)-lb 1 + C2(S - A)-lb 2 + d]..c{u}
Combining terms gives
..c{y}

This is a first-order transfer function representing a second-order system, so a cancellation


has occurred. Starting from the transfer function gives a system representation of
dz/dt = Az + u, Y = (c1b l + c2 b2 )z + du, which illustrates why the Jordan flow diagram can-
not be drawn for systems with repeated eigenvalues.
Now consider condition (2) of Theorems 6.2 and 6.5. Combining the flow diagrams of
Figs. 6-2 and 6-5 to represent the element Zi of the bottom row of a Jordan block gives
Fig. 6-6.
I
I

UO-O---~"I
I

I
· ·G--+Qf---~ y

I
I
Fig. 6-6

Comparing this figure with Fig. 2-12 shows that fiu i = Pi' the residue of Ai' If and only
if P'i = 0, a cancellation occurs, and Pi = 0 if and only if Ii and/or Ui = 0, which occurs when
the system is uncontrollable and/or unobservable.
Note that it is the uncontrollable and/or unobservable element of the state vector that
is canceled from the transfer function.

6.4 DIRECT CRITERIA FROM A, B, AND C


If we need not determine which elements of the state vector are controllable and ob-
servable, but merely need to investigate the controllability and/or observability of the whole
system, calculation of the Jordan form is not necessary. Criteria llsing only A, Band C
are available and provide an easy and general way to determine if the system is completely
controllable and observable.
CHAP. 6] CONTROLLABILITY AND OBSERV ABILITY 135

Theorem 6.8: The time-invariant system dx/dt = Ax + Bu is totally controllable if and


only if the n x nn1, matrix Q has rank n, where
Q = (BIABI ... I An-IB)
Note that this criterion is valid for vector u, and so is more general than Theorem 6.2.
One method of determining if rank Q =, n is to see if det (QQT) =1= 0, since rank Q =
rank QQT from property 16, page 87. However, there exist faster machine methods for
determining the rank.

Proof; To reach an arbitrary x(tt} from the zero initial state, we must find a control
u(.) such that
x(tI) (6.6)
n
Use of Theorem 4.16 gives eACtl- r ) L
i=l
·'/i(.)An-i so that substitution into (6.6) gives

X(tl) (BWI + ABw2 + ... + An-1Bwn) Q ( W~'nl)


Jrto
t1
where w k = 'Yn+l-k(')U(.) dT. Hence X(tl) lies in the range space of Q, so that Q must
have rank n to reach an arbitrary vector in "Un' Therefore if the system is totally con-
trollable, Q has rank n.
Now we assume Q has rank n and show that the system is totally controllable. This
time we construct u(,.) as
U(T) = /l/3(T - to) + /l20(1)(. - to) + ... + /ln8(n-D(. - to) (6.7)
where /lIe are constant 'Tn-vectors to be found and 8 Ck )(-.) is the kth derivative of the Dirac
delta function. Substituting this construction into equation (6.6) gives
x(t 1 ) = eACtl-to)B/ll + eACtl-tO) AB/l2 + ... + eACtcto) An-IB/l n (6.8)

since stl eACt1-T)B8(,. - to) dT = eACtl-to)B and the defining relation for 8Ck ) is

f_:
to

OCk)(t -~) g(~) d~ dkg/dtk

Using the inversion property of transition nlatrices and the definition of Q, equation (6.8)
can be rewritten

From Problem 3.11, page 63, a solution for the ILi always exists if rankQ = n. Hence SODle
(perhaps not unique) always exists such that the control (6.7) will drive the system to
ILi
X(t1).
The construction for the control (6.7) gives some insight as to why completely control-
lable stationary linear systems can be transferred to any desired state as quickly as possible.
No restrictions are put on the magnitude or shape of u(.). If the magnitude of the control
is bounded, the set of states to which the system can· be transferred by t1 are called the
reachable states at t l , which has the dual concept in observabiIity as recoverable states at
t l . Any further discussion of this point is beyond the scope of this text.
136 CONTROLLABILITY AND OBSERVABILITY [CHAP. 6

A proof can be given involving a construction for a bounded u(t) similar to equation
(6.2), instead of the unbounded u(t) of (6.7). However, as tl ...,.. to, any control must become
unbounded to introduce a jump from x(t o) to x(t l ).
The dual theorem to the one just proven is

Theorem 6.9: The time-invariant system dx/ dt = Ax + Bu, y = Cx + Du is totally ob-


servable if and only if the kn X n matrix P has rank n where-

Theorems 6.8 and 6.9 are also true (replacing totally by completely) for the discrete-time
system x(m + 1) = Ax(m) + Bu(m), y(m) = Cx(m) + Du(m). Since the proof of Theorem
6.9 is quite similar to that of Theorem 6.8 in the continuous-time case, we give a proof for
the discrete-time case. It is sufficient to see if the initial state x(l) can be reconstructed
from knowledge of y(m) for l ~ m < 00, in the case where u(m) = O. From the state equa-
tion,
y(l) = Cx(l)
y(l + 1) ::;: Cx(l + 1) = CAx(l)

y(l + n -1) = CAu-1x(l)

This can be rewritten as


y(l) )
. = Px(l)
(
Y(I+~-l)
and a unique solution exists if and only if P has rank n, as shown in Problem 3.11, page 63.

6.5 CONTROLLABILITY AND OBSERVABILITY OF TIME-VARYING SYSTEMS


In time-varying systems, the difference between totally and completely controllable be-
comes important.
Example 6.3.
Consider the time~varying scalar system dx/dt = -x + b(t)u and y = x, where b(t) is either zero
or unity as shown in Fig. 6-7.

u Y

L...--_---f -1 '--_.....

Fig. 6~7

If b(t) = 0 for to == t < to + at, the system is uncontrollable in the time inter~al [to, to + at). If
b(t) = 1 for to + at:::;:: t :::;:: t l , the system is totally controllable in the time interval to + a. t == t ~ t I •
However, the system is not totally controllable, but is completely controllable over the time interval
to== t """ tl to reach a desired final state x(t l )·
CHAP. 6] CONTROLLABILITY AND OBSERVABILITY 137

Now suppose b(t) = 0 for t1 < t === t 2 • and we wish to reach a final state x(t2 ). The state x(t2 ) can
be reached by controlling the system such that the state at time t1 is x(t1) = e(t2 -h)x(t2 ). Then with zero
input the free system will coast to the desired statex(t2 ) = <I>(t2 • t 1 ) x(t l ). Therefore if the system is totally
controllable for any time interval te === t === te + fl.t, it is completely controllable for all t:=, te'
For the time-varying system, a criterion analogous to Theorem 6.8 can be formed.

Theorem 6.10: The time-varying system dx/dt = A(t)x + B(t)u is totally controllable if
and only if the nlatrix Q(t) has rank n for times t everywhere dense in
[to, t l ], where Q(t) = (Q1 I Q21 ... [Qn), in which Ql = B(t) and Qk+l =
-A(t)Qk + dQk/dt for k = 1,2, ... , n-l. Here A(t) and B(t) are assumed
piecewise differentiable at least n - 2 and n -1 tinles, respectively.
The phrase "for times t everywhere dense in [to, t l ]" essentially means that there can exist
only isolated points in t in which rank Q < n. Because this concept occurs frequently, we
shall abbreviate it to "Q(t) has rank n(e.d.)".
Proof: First we assume rank Q = n in an interval containing time 'fJ such that to < 'fJ < tl
and show the system is totally controllable.
Construct u( T) as n
U(T) ~ ILk8Ck-1)( T - "l) (6.9)
k=l

To attain an arbitrary x(tt} starting from x(to) = 0 we must have

Jrto I
t1 k 1
n d -
q.(tl' ,) B(,) U(T) dT k~l d,k-l [q,(tl, ,) B(T)] '/"=7) ILk (6.10)

But
(6.11)

Note 0 = dl/dt = d(q.q,-l)/dt = Acpq,-l +cpdcp-l/dt so that d~-l/dt = -cp-lA and equation
(6.11) becomes

Similarly,
dk - 1
dT k - 1 [CP(tl' T) B(T)] (6.12)

Therefore (6.10) becomes

A solution always exists for the ILk because rank Q = nand cp is nonsingular.
Now assume the system is totally controllable and show rank Q = n. From Problem
6.8, page 142, there exists no constant n-vector z =1= 0 such that, for times t everywhere
dense in to";:::: t ..<::: t l ,
ztq.(to, t) B(t) = 0
By differentiating k times with respect to t and using equation (6.12), this becomes
ztq.(to, t) Qk(t) = O. Since q.(to, t) is always nonsingular, there is no n-vector yt =
ztq.(to, t) # 0 such that
o= (ytQl J ytQ2 J •.. J ytQn) = ytQ = Y1ql + Y2q2 + . . . + Ynqn
where qi are the row vectors of Q. Since the n row vectors are then linearly independent,
the rank of Q(t) is n(e.d.).
138 CONTROLLABILITY AND OBSERV ABILITY [CHAP. 6

Theorem 6.11: The time-varying system dx/dt = A(t)x + B(t)u, y;:::: C(t)x + D(t)u is totally
observable if and only if the matrix P(t) has rank n(e.d.), where PT(t) =
(Pi I pr I ... I PJ) in which PI = C(t) and P k + I = PkA(t) + dPk/dt for
k = 1,2, ... , n-l. Again A(t) and B(t) are assumed piecewise differentiable
at least n - 2 and n -1 times, respectively.

Again, the proof is somewhat similar to Theorem 6.10 and will not be given. The situ-
ation is somewhat different for the discrete-time case, because generalizing the proof follow-
ing Theorem 6.9 leads to the criterion rank P = n, where for this case
PI = C(l), P2 = C(l + 1) A(l), ... , Pk = C(l + k - 1) A(l + k - 2)· .. A(l)

The situation changes somewhat when only complete controllability is required. Since
any system that is totally controllable must be completely controllable, if rank Q(t) has
rank n for some t> to [not rank n(e.d)] then x(t o) can be transferred to X(tl) for tl ~ t.
On the other hand, systems for which rank Q(t) < n for all t might be completely control-
lable (but not totally controllable).

Example 6.4.
Given the system
_(ao O)(Xl)
(J X2
+ (fl(t»)u
h(t)

where
2krr ::: t < (2k + 1)rr
= k = 0, ±1, ±2, ...
(2k + l)rr ::: t < 2(k + 1)7T

Q(t)

At each instant of time, one row of Q(t) is zero, so rank Q(t) :::: 1 for all t. However,

If t - to > 7l', the rows of .;.(to, t) B(t) are linearly independent time functions, and from Problem 6.30,
page 145, the system is completely controllable for it - to> 7T. The system is not totally controllable because
for every to. if t2 - to < 11, either 11(T) or 12(T) is zero for to::: T::: i 2·

However, for systems with analytic A(t) and B(t), it can be shown that complete con-
trollability implies total controllability. Therefore rank Q = n is necessary and sufficient
for complete controllability also. (Note fl(t) and f2(t) are not analytic in Example 6.4.)
For complete controllability in a nonanalytic system with rank Q(t) < n, the rank of
q,(t, 1') B(1') must be_ found.

6.6 DUALITY
In this chapter we have repeatedly used the same kind of proof for observabiIity as was
used for controllability. Kalman first remarked on this duality between observing a
dynamic system and controlling it. He notes the determinant of the W matrix of Problem
6.8, page 142, is analogous to Shannon's definition of information. The dual of the optimal
control problem of Chapter 10 is the Kalman filter. This duality is manifested by the
following two systems:
CHAP. 6] CONTROLLABILITY AND OBSERVABILITY 139

System #1: dx/dt = A(t)x + B(t)u


y = C(t)x + D(t)u
System #2:- dwldt = -At(t)w + Ct(t)V
z = Bt(t)w + Dt(t)v
Then system #1 is totally controllable (observable) if and only if system #2 is totally
observable (controllable), which can be shown immediately from Theorems 6.10 and 6.11.

Solved Problems
6.1. Given the system

dx
dt =
(1
o 2-1) + (0)
lOx 0 u, y (1 -1 l)x
1 -4 3 1
find the controllable and uncontrollable states and then find the observable and un-
observable states.
Following the standard procedure for transforming a matrix to Jordan form gives A = TJT-l
as

G-~ -D
Then f == T- 1b == (0 1 O)T and gT
= C~ ~
== cTT = (0 1 1).
D( ~ : DC~ -~ 0
The flow diagram of the Jordan system is
shown in Fig. 6-8.
Zl

1
o
+ 8-2
+
Z2
+
o---+---+{ 1 1
U Y
8-2
+
za
o 1

Fig. 6-8

The element zl of the state vector z is controllable (through Z2) but unobservable. The element
z2 is both controllable and observable. The element za is uncontrollable but observable.
Note Theorem 6.1 is inapplicable.
140 CONTROLLABILITY AND OBSERVABILITY [CHAP. 6

6.2. Find the elements of the state vector z that are both controllable and observable for
the system of Problem 6.1.
Taking the Laplace transform of the system with zero initial conditions gives the transfer
function. Using the same symbols for original and transformed variables, we have
SXl Xl + 2X2 ~ Xs (6.13)

SX2 (6.14)

(6.15)

(6.16)

From (6.15), Xl = 4x2 + (s - 3)xs - u. Putting this in (6.13) gives (48 - 6)x2 = (8 - l)u - (8 - 2)2 XS •
Substituting this in (6.14), (s - 1)(8 - 2)2XS = (8 -1)2U. Then from (6.16),

y
-- [(s(8 -_1)2
2)3
_ 0 + ~Ju
2)2
(s -
-
-
(8-1)(8-2)
(8 -1)(8 - 2)2
u
Thus the transfer function k(s) = (8 - 2)-1, and from Theorem 6.7 the only observable and con-
trollable element of the state vector is z2 as defined in Problem 6.1.

6.3. Given the system of Problem 6.1. Is it totally observable and totally controllable?
Forming the P and Q matrices of Theorems 6.9 and 6.8 gives

P Q

Then rank P = 2, the dimension of the observable state space; and rank Q = 2, the dimension
of the controllable state space. Hence the system is neither controllable nor observable.

6.4. Given the time-invariant system

and that u(t) = e- t and y(t) =2 - (de- t • Find X1(t) and X2(t). Find X1(O) and X2(O).
What happens when = O? Q'

Since y = xl' then XI(t) = 2 - cde-t. Also, dX1/dt = aX2, so differentiating the output gives
x2(t)== -e- t + te-t. Then xl(D) = 2 and x 2(O) = -1. When a = 0, this procedure does not work
because dXl/dt = o. There is no way to find X2(t), because x2 is unobservable as can be verified from
Theorem 6.5. (For a = 0, the system is in Jordan form.)

6.5. The normalized equations of vertical motion y(r, 0, t) for a circular drumhead being
struck by a force u(t) at a point r = ro, 0 = 00 are
iJ2y
at2 V 2y + 27ir 8(r - ro) 8(0 - 00) u(t) (6.17)

where y(rl, 0, t) = 0 at r = rl, the edge of the drum. Can this force excite all the
modes of vibration of the drum?
The solution for the mode shapes is
00 oc
y(r, 0, t) ::::: ~ ~ I n (K m r/rl)[X2n,m (t) cos 2n1T09 + X2n +l,m (t) sin 2n'lTO]
m=l n=O
CHAP. 6] CONTROLLABILITY AND OBSERVABILITY 141

where ICm is the mth zero of the nth order Bessel function In(r). Substituting the motion of the first
harmonicm == 1, n == 1 into equation (6.17) gives
d2 x 2 ldt 2 ;\X21 + y cos 2;;8ou
d2x3ldt2 ;\x31 + y sin 2;;oou

where ;\ == K~ + (2;;)2 and y == ToJ1(lClrO!rl). Using the controllability criteria, it can be found
that one particular state that is not influenced by u(t) is the first harmonic rotated so that its node
line is at angle 8 0 , This is illustrated in Fig. 6-9. A noncolinear point of application of another
force is needed to excite, or damp out, this particular uncontrollable mode.

r--------~----------~r------ + Y

Fig. 6-9

6.6. Consider the system


1
dx
1
dt
o
where Ul and U2 are two separate scalar controls. Determine whether the system
is totally controllable if
(1) b (0 0 O)T
(2) b = (0 0 1)T
(3) b (1 0 oy
For each case, we investigate the controllability matrix Q == (B I AB I A2B) for

A
G~D and B
G b)
For b = 0 it is equivalent to scalar control, and by condition (I) of Theorem 6.2 the system is
uncontrollable. For case (2),
0 1 0 2
Q
G0
1
1
1
0
1
1
1 D
The first three columns are linearly independent, so Q has rank 3 and the system is controllable.
For case (3),
1 1 1 2
Q :::: o 1 o 1
o 1 o 1
The bottom two rows are identical and so the system is uncontrollable.
142 CONTROLLABILITY AND OBSERVABILITY [CHAP. 6

6.7. Investigate total controllability of the time-varying system

~~ Gg)x + (~)u
The Q(t) matrix of Theorem 6.10 is
Q(t)
et(l- t»)
-e t

Then det Q(t) = et(e t + 2t - 2). Since et + 2t = 2 only at one instant of time, rank Q(t) = 2(e.d.).

6.8. Show that the time-varying system dx/dt = A(t)x + B(t)u is totally controllable if
and only if the matrix W(t, ,) is postive definite for every, and every t> T, where

W(t, ,) it cp(T, 'rJ) B{7]) Bt('rJ) 4>t('1, 'rJ) d'rJ

Note that this criterion depends on cJt(t, T) and is not as useful as Theorems 6.10 and
6.11. Also note positive definite W is equivalent to linear independence of the rows
of 4>(7,7]) Bh) for T ~ 1) ~ t.
If W(t,7") is positive definite, W-I exists. Then choose
u(7") = -Bt(r) 4>t(t 1, r) W-I(t o, t l ) x(t I )

Substitution will verify that

so that the system is totally controllable if W(t,7") is positive definite. Now suppose the system is
totally controllable and showW(t, '7") is positive definite. First note for any constant vector k,

(k, Wk) = f·,. t


ktcJlo(r, 'TJ) B(7]) Bt(7J) 4Jt(r, 17)k dn
-

ft IiBth) 4Jt(r, 7])kll~


'T
dn == 0

Therefore the problem is to show W is nonsingular if the system is totally controllable. Suppose
W is singular, to obtain a contradiction. Then there exists a constant n-vector z # 0 such that
(z, Wz) = O. Define a continuous, m-vector function of time f(t);:::: -Bt(t) q;t(to, t)z. But

ftlllf(-r)II~ dT
to
stl to
ztcJlo(t o' t) B(t) Bt(t) cJlot(to, t)z dt

(z, W(t l , to)z) = 0

Hence f(t) = 0 for all t, so that 0 Itt to


£t(t) u(t) dt for any u(t). Substituting for f(t) gives

t!
o -
S to
ztcJlo(to, t) B(t) u(t) dt (6.18)

In particular, since the system is assumed totally controllable, take u(t) to be the control that
transfers 0 to cJlo(tl' to)z # O. Then
t!
z
J to
4J(to, t) B(t) u(t) dt

Substituting this into equation (6.18) gives 0 = ztz which is impossible for any nonzero z.
Therefore no nonzero z exists for which (z, Wz) ~ 0, so- that W must be positive definite.
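For a time-invariant pair (A, B) the transition matrix is the matrix exponential and W can be
evaluated numerically. A minimal sketch in Python (using scipy; the system matrices are
illustrative, not from the text):

    # Controllability Gramian W(t1, t0) with Phi(t, tau) = expm(A (t - tau))
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import quad_vec

    A = np.array([[0., 1.], [-2., -3.]])
    B = np.array([[0.], [1.]])
    t0, t1 = 0.0, 2.0

    def integrand(eta):
        # Phi(t0, eta) B B^T Phi^T(t0, eta)
        M = expm(A * (t0 - eta)) @ B
        return M @ M.T

    W, _ = quad_vec(integrand, t0, t1)
    print("eigenvalues of W:", np.linalg.eigvalsh(W))   # all > 0 <=> controllable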

Supplementary Problems
6.9. Consider the bilinear scalar system dξ/dt = u(t)ξ(t). It is linear in the initial state and in the
control, but not in both, so that it is not a linear system and the theorems of this chapter do not apply.
The flow diagram is shown in Fig. 6-10. Is this system completely controllable according to Defini-
tion 6.1?

Fig. 6-10

6.10. Given the system

        dx/dt = ( ·   ·   · )     ( · )
                ( ·   ·   · ) x + ( · ) u,        y = (−2  1  0)x
                (−2   0  −2 )     (−1 )

Determine which states are observable and which are controllable, and check your work by deriving
the transfer function.

6.11. Given the system

        dx/dt = ( · )x + ( · )u        y = (1  1)x

Classify the states according to their observability and controllability, compute the P and Q matrices,
and find the transfer function.

6.12. Six identical frictionless gears with inertia I are mounted on shafts as shown in Fig. 6-11, with
a center crossbar keeping the outer two pairs diametrically opposite each other. A torque u(t)
is the input and a torque y(t) is the output. Using the angular position of the two outer gearshafts
as two of the elements in a state vector, show that the system is state uncontrollable but totally
output controllable.

Fig. 6-11        Fig. 6-12

6.13. Given the electronic circuit of Fig. 6-12, where u(t) can be any voltage (function of time). Under
what conditions on R, L₁ and L₂ can both i₁(t₁) and i₂(t₁) be arbitrarily prescribed for t₁ > t₀,
given that i₁(t₀) and i₂(t₀) can be any numbers?

6.14. Consider the simplified model of a rocket vehicle

        ( · )

Under what conditions is the vehicle state controllable?



6.15. Find some other construction than equation (6.2) that will transfer a zero initial condition to an
arbitrary z(t₁).

6.16. Prove Theorem 6.5.

6.17. Prove Theorem 6.6.

6.18. What are the conditions similar to Theorem 6.2 for which a two-input system is totally controllable?

6.19. Given the controllable sampled data system

        ξ(n + 2) + 3ξ(n + 1) + 2ξ(n) = u(n + 1) − u(n)

Write the state equation, find the transition matrix in closed form, and find the control that will
force an arbitrary initial state to zero in the smallest number of steps. (This control depends upon
these arbitrary initial conditions.)

6.20. Given the system with nondistinct eigenvalues

        dx/dt = ( · )x        y = (0  1  −1)x

Classify the elements of the state vector z corresponding to the Jordan form into observable/not
observable, controllable/not controllable.

6.21. Using the criterion Q = (b | Ab | ⋯ | Aⁿ⁻¹b), develop the result of Theorem 6.1.

6.22. Consider the discrete system x(k + 1) = Ax(k) + bu(k), where x is a 2-vector, u is a scalar,
A = ( · ), b = ( · ).
(a) Is the system controllable? (b) If the initial condition is x(0) = ( · ), find the control
sequence u(0), u(1) required to drive the state to the origin in two sample periods (i.e., x(2) = 0).

6.23. Consider the discrete system x(k + 1) = Ax(k), y(k) = cᵀx(k), where x is a 2-vector, y is a
scalar, A = ( · ), cᵀ = (1  2).
(a) Is the system observable? (b) Given the observation sequence y(1) = 8, y(2) = 14, find the
initial state x(0).

6.24. Prove Theorem 6.8, constructing a bounded u(τ) instead of equation (6.7). Hint: See Problem 6.8.

6.25. Given the multiple input-multiple output time-invariant system dx/dt = Ax + Bu, y = Cx + Du,
where y is a k-vector and u is an m-vector. Find a criterion matrix somewhat similar to the Q
matrix of Theorem 6.8 that assures complete output controllability.

6.26. Consider three linear time-invariant systems of the form

        dx⁽ⁱ⁾/dt = A⁽ⁱ⁾x⁽ⁱ⁾ + B⁽ⁱ⁾u⁽ⁱ⁾,        y⁽ⁱ⁾ = C⁽ⁱ⁾x⁽ⁱ⁾,        i = 1, 2, 3

(a) Derive the transfer function matrix for the interconnected system of Fig. 6-13 in terms of
A⁽ⁱ⁾, B⁽ⁱ⁾ and C⁽ⁱ⁾, i = 1, 2, 3.

Fig. 6-13

(b) If the overall interconnected system in part (a) is observable, show that system 3 is observable.
(Note that u⁽ⁱ⁾ and y⁽ⁱ⁾ are vectors.)

6.27. Given the time-varying system

        dx/dt = ( · )x + ( · )u        y = (e⁻ᵗ  e⁻²ᵗ)x

Is this system totally controllable and observable?

6.28. Prove Theorem 6.9 for the continuous-time case.

6.29. Prove a controllability theorem similar to Theorem 6.10 for the discrete time-varying case.

6.30. Similar to Problem 6.8, show that the time-varying system dx/dt = A(t)x + B(t)u is completely
controllable if and only if the matrix W(t, τ) is positive definite for every τ and some finite t > τ.
Also show this is equivalent to linear independence of the rows of Φ(τ, η)B(η) for some finite η > τ.

6.31. Prove that the linear time-varying system dx/dt = A(t)x, y = C(t)x is totally observable if and
only if M(t₁, t₀) is positive definite for all t₁ > t₀, where

        M(t₁, t₀) = ∫_{t₀}^{t₁} Φᵀ(t, t₀) Cᵀ(t) C(t) Φ(t, t₀) dt

Answers to Supplementary Problems


6.9. No nonzero ξ(t₁) can be reached from ξ(t₀) = 0, so the system is uncontrollable.

6.10. The states belonging to the eigenvalue 2 are unobservable and those belonging to −1 are uncon-
trollable. The transfer function is 3(s + 3)⁻¹, showing only the states belonging to −3 are both
controllable and observable.

6.11. One state is observable and controllable, the other is neither observable nor controllable.

        Q = (1  −1)        and        h(s) = 2/(s + 1)
            (1  −1)

6.14. MB ≠ 0 and Za ≠ 0

6.15. Many choices are possible, such as

        u(t) = Σ_{k=1}^n μₖ{U[t − t₀ − (t₁ − t₀)(k − 1)/n] − U[t − t₀ − (t₁ − t₀)k/n]}

where U(t − T) is a unit step at t = T; another choice is u(t) = Σ_{k=1}^n μₖe^{−λₖt}. In both cases the
expression for μₖ is different from equation (6.4) and must be shown to have an inverse.

6.18. Two Jordan blocks with the same eigenvalue can be controlled if l₁₁l₂₂ − l₁₂l₂₁ ≠ 0, the l's being
the coefficients of u₁ and u₂ in the last rows of the Jordan blocks.

6.19. x(n + 1) = (−3  1)x(n) + ( 1)u(n);        y(n) = (1  0)x(n)
                 (−2  0)       (−1)

        Φ(n, k) = (−1)ⁿ⁻ᵏ(−1  1) + (−2)ⁿ⁻ᵏ(2  −1)
                         (−2  2)          (2  −1)

        (u(0))       (13  −5)(x₁(0))
        (u(1)) = 1/6 (10  −2)(x₂(0))

6.20. The flow diagram is shown in Fig. 6-14, where z₁ and z₂ are controllable, and z₂ and z₃ are
observable,

Fig. 6-14

for ξ = 1 and ρ = 0 in the transformation T = ( · ).

6.22. Yes; u(0) = −4, u(1) = 2.

6.23. Yes; x(0) = ( · ).

6.25. rank R = k, where R = (CB | CAB | ⋯ | CAⁿ⁻¹B | D)

6.26. H(s) = C⁽³⁾(Is − A⁽³⁾)⁻¹B⁽³⁾[C⁽¹⁾(Is − A⁽¹⁾)⁻¹B⁽¹⁾ + C⁽²⁾(Is − A⁽²⁾)⁻¹B⁽²⁾]

6.27. It is controllable but not observable.


Chapter 7
Canonical Forms
of the State Equation
7.1 INTRODUCTION TO CANONICAL FORMS
The general state equation dx/dt = A(t)x + B(t)u appears to have all n² elements of
the A(t) matrix determine the time behavior of x(t). The object of this chapter is to reduce
the number of states to m observable and controllable states, and then to transform the m²
elements of the corresponding A(t) matrix to only m elements that determine the input-
output time behavior of the system. First we look at time-invariant systems, and then at
time-varying systems.

7.2 JORDAN FORM FOR TIME-INVARIANT SYSTEMS


Section 2.4 showed how equation (2.21) in Jordan form can be found from the transfer
function of a time-invariant system. For single-input systems, to go directly from the
form dx/dt = Ax + bu, let x = Tz so that dz/dt = Jz + T⁻¹bu, where T⁻¹AT = J. The
matrix T is arbitrary to within n constants, so that T = T₀K as defined in Problem 4.41,
page 97.

For distinct eigenvalues, dz/dt = Λz + K⁻¹T₀⁻¹bu, where K is a diagonal matrix with
elements kᵢᵢ on the diagonal. Defining g = T₀⁻¹b, the equation for each state is dzᵢ/dt =
λᵢzᵢ + (gᵢu/kᵢᵢ). If gᵢ = 0, the state zᵢ is uncontrollable (Theorem 6.1) and does not enter
into the transfer function (Theorem 6.7). For controllable states, choose kᵢᵢ = gᵢ. Then
the canonical form of equation (2.16) is attained.

For the case of nondistinct eigenvalues, look at the l × l system of one Jordan block with
one input, dz/dt = Lz + T⁻¹bu. If the system is controllable, it is desired that T⁻¹b = e_l,
as in equation (2.21). Then using T = T₀K, we require T₀⁻¹b = Ke_l = (a_l  a_{l−1}  ⋯  a₁)ᵀ,
where the aᵢ are the l arbitrary constants in the T matrix as given in Problem 4.41. In this
manner the canonical form of equation (2.21) can be obtained.

Therefore by transformation to Jordan canonical form, the uncontrollable and un-
observable states can be found and perhaps omitted from further input-output considera-
tions. Also, the n² elements in the A matrix are transformed to the n eigenvalues that
characterize the time behavior of the system.
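For distinct eigenvalues this test is one line of linear algebra: compute g = T₀⁻¹b and look
for zero entries. A minimal sketch in Python (using numpy; the matrices are illustrative):

    # g = T0^{-1} b exposes the uncontrollable states z_i (those with g_i = 0)
    import numpy as np

    A = np.array([[-1., 0.], [1., -2.]])
    b = np.array([1., 1.])

    eigvals, T0 = np.linalg.eig(A)       # columns of T0 are eigenvectors
    g = np.linalg.solve(T0, b)           # g = T0^{-1} b

    for lam, gi in zip(eigvals, g):
        status = "uncontrollable" if abs(gi) < 1e-12 else "controllable"
        print(f"eigenvalue {lam:+.3f}: g_i = {gi:+.3f} -> {status}")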

7.3 REAL JORDAN FORM


Sometimes it is easier to program a computer if all the variables are real. A slight
drawback of the Jordan form is that the canonical states z(t) are complex if A has any
complex eigenvalues. This drawback is easily overcome by a change of variables. We keep
the same zᵢ as in Section 7.2 when λᵢ is real, but when λᵢ is complex we use the following
procedure. Since A is a real matrix, if λ is an eigenvalue then its complex conjugate λ*
is also an eigenvalue, and if t is an eigenvector then its complex conjugate t* is also. Without
loss of generality we can look at two Jordan blocks for the case of complex eigenvalues.

If Re means "real part of" and Im means "imaginary part of", this is

        d  (Re z + j Im z)   (L   0 )(Re z + j Im z)
        —— (Re z − j Im z) = (0   L*)(Re z − j Im z)
        dt

                             (Re L Re z − Im L Im z + j(Re L Im z + Im L Re z))
                           = (Re L Re z − Im L Im z − j(Re L Im z + Im L Re z))

By equating real and imaginary parts, the system can be rewritten in the "real" Jordan
form as

        d  (Re z)   (Re L   −Im L)(Re z)
        —— (Im z) = (Im L    Re L)(Im z)
        dt
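The corresponding numerical change of variables is immediate. A minimal sketch in Python
(using numpy; the 2 × 2 example matrix is illustrative):

    # Replace diag(L, L*) by the real 2x2 block (Re L  -Im L; Im L  Re L)
    import numpy as np

    A = np.array([[0., 2.], [-2., 0.]])      # eigenvalues +/- 2j
    lam, V = np.linalg.eig(A)
    L = lam[0]                               # one eigenvalue of the conjugate pair
    realJ = np.array([[L.real, -L.imag],
                      [L.imag,  L.real]])
    print(realJ)                             # real form replacing diag(L, L*)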

7.4 CONTROLLABLE AND OBSERVABLE FORMS FOR


TIME-VARYING SYSTEMS
We can easily transform a linear time-invariant system into a controllable or observable
subsystem by transformation to Jordan form. However, this cannot be done for time-
varying systems because they cannot be transformed to Jordan form, in general. In this
section we shall discuss a method of transformation to controllable and/or observable sub-
systems without solution of the transition matrix. Of course this method is applicable to
time-invariant systems as a subset of time-varying systems.

We consider the transformation of the time-varying system

        dx/dt = A^x(t)x + B^x(t)u,        y = C^x(t)x                (7.1)

into controllable and observable subsystems. The procedure for transformation can be
extended to the case y = C^x(t)x + D^x(t)u, but for simplicity we take D^x(t) = 0. We adopt
the notation of placing a superscript on the matrices A, B and C to refer to the state variable,
because we shall make many transformations of the state variable.

In this chapter it will always be assumed that A(t), B(t) and C(t) are differentiable n − 2,
n − 1 and n − 1 times, respectively. The transformations found in the following sections
lead to the first and second ("phase-variable") canonical forms (2.6) and (2.9) when applied
to time-invariant systems as a special case.
Before proceeding, we need two preliminary theorems.

Theorem 7.1: If the system (7.1) has a controllability matrix Q^x(t), and an equivalence
transformation x(t) = T(t)z(t) is made, where T(t) is nonsingular and dif-
ferentiable, then the controllability matrix of the transformed system is
Q^z(t) = T⁻¹(t)Q^x(t), and rank Q^z(t) = rank Q^x(t).

Proof: The transformed system is dz/dt = A^z(t)z + B^z(t)u, where A^z = T⁻¹(A^xT − dT/dt)
and B^z = T⁻¹B^x. Since Q^x = (Q₁^x | Q₂^x | ⋯ | Qₙ^x) and Q^z is similarly partitioned, we need to
show Qₖ^z = T⁻¹Qₖ^x for k = 1, 2, …, n using induction. First Q₁^z = B^z = T⁻¹B^x = T⁻¹Q₁^x.
Then assuming Qₖ₋₁^z = T⁻¹Qₖ₋₁^x,

        Qₖ^z = −A^zQₖ₋₁^z + dQₖ₋₁^z/dt
             = −T⁻¹(A^xT − dT/dt)(T⁻¹Qₖ₋₁^x) + (dT⁻¹/dt)Qₖ₋₁^x + T⁻¹ dQₖ₋₁^x/dt
             = T⁻¹(−A^xQₖ₋₁^x + dQₖ₋₁^x/dt) = T⁻¹Qₖ^x

for k = 2, 3, …, n. Now Q^z(t) = T⁻¹(t)Q^x(t), and since T(t) is nonsingular for any t,
rank Q^z(t) = rank Q^x(t) for all t.

It is reassuring to know that the controllability of a system cannot be altered merely
by a change of state variable. As we should expect, the same holds for observability.

Theorem 7.2: If the system (7.1) has an observability matrix P^x(t), then an equivalence
transformation x(t) = T(t)z(t) gives P^z(t) = P^x(t)T(t) and rank P^z(t) =
rank P^x(t).

The proof is similar to that of Theorem 7.1.


Use of Theorem 7.1 permits construction of a T_c(t) that separates (7.1) into its con-
trollable and uncontrollable states. Using the equivalence transformation x(t) = T_c(t)z(t),
(7.1) becomes

        d  (z₁)   (A₁₁^z(t)   A₁₂^z(t))(z₁)   (B₁^z(t))
        —— (z₂) = (0          A₂₂^z(t))(z₂) + (0      ) u                (7.2)
        dt

where the subsystem

        dz₁/dt = A₁₁^z(t)z₁ + B₁^z(t)u                                  (7.3)

is of order n₁ ≤ n and has a controllability matrix Q^z1(t) = (Q₁^z1 | Q₂^z1 | ⋯ | Qₙ₁^z1) with
rank n₁ (e.d.). This shows z₁ is controllable and z₂ is uncontrollable in (7.2).

The main problem is to keep T_c(t) differentiable and nonsingular everywhere, i.e. for all
values of t. Also, we will find that Q^z has n − n₁ zero rows.

Theorem 7.3: The system (7.1) is reducible to (7.2) by an equivalence transformation if
and only if Q^x(t) has rank n₁ (e.d.) and can be factored as Q^x = V₁(S | R),
where V₁ is an n × n₁ differentiable matrix with rank n₁ everywhere, S is
an n₁ × mn₁ matrix with rank n₁ (e.d.), and R is any n₁ × m(n − n₁) matrix.

Note here we do not say how the factorization Q^x = V₁(S | R) is obtained.

Proof: First assume (7.1) can be transformed by x(t) = T_c(t)z(t) to the form of (7.2).
Using induction, Q₁^z = (Q₁^z1; 0), and if Qᵢ^z = (Qᵢ^z1; 0) then for i = 1, 2, …,

        Qᵢ₊₁^z = −(A₁₁^z  A₁₂^z; 0  A₂₂^z)(Qᵢ^z1; 0) + d/dt (Qᵢ^z1; 0)
               = (−A₁₁^zQᵢ^z1 + dQᵢ^z1/dt; 0) = (Qᵢ₊₁^z1; 0)                (7.4)

Therefore

        Q^z = (Q₁^z1  ⋯  Qₙ₁^z1 | F)
              (0      ⋯  0      | 0)

where F(t) is the n₁ × m(n − n₁) matrix manufactured by the iteration process (7.4) for
i = n₁, n₁ + 1, …, n − 1. Since Q^z1 has rank n₁ (e.d.), Q^z must also. Use of Theorem 7.1 and
the nonsingularity of T_c shows Q^x has rank n₁ (e.d.). Furthermore, let T_c(t) = (T₁(t) | T₂(t)),
so that

        Q^x(t) = T_c(t)Q^z(t) = (T₁Q^z1 | T₁F) = T₁(Q^z1 | F)

Since T₁(t) is an n × n₁ differentiable matrix with rank n₁ everywhere and Q^z1 is an n₁ × mn₁
matrix with rank n₁ (e.d.), the "only if" part of the theorem has been proven.

For the proof in the other direction, since Q^x factors, then

        Q^x = (V₁ | V₂)(S  R)
                       (0  0)

where V₂(t) is any set of n − n₁ differentiable columns making V(t) = (V₁ | V₂) nonsingular.
But what is the system corresponding to the controllability matrix on the right? From
Theorem 6.10, the first partition gives B^z. Also,

        (Qᵢ₊₁^z1; 0) = −(A₁₁^z  A₁₂^z; A₂₁^z  A₂₂^z)(Qᵢ^z1; 0) + d/dt (Qᵢ^z1; 0)        i = 1, 2, …, n₁ − 1

so that the lower partition gives 0 = −A₂₁^z(t)S(t), and since S(t) has rank n₁ (e.d.) and A₂₁^z(t)
is continuous, by Problem 3.11, A₂₁^z(t) = 0. Therefore the transformation V(t) = (V₁(t) | V₂(t))
is the required equivalence transformation.

The dual relationship for observability is x(t) = T^O(t)z(t), which transforms (7.1) into

        d  (z₃)   (A₃₃^z(t)   0       )(z₃)   (B₃^z(t))
        —— (z₄) = (A₄₃^z(t)   A₄₄^z(t))(z₄) + (B₄^z(t)) u,        y = (C₃^z(t)  0)(z₃)        (7.5)
        dt                                                                        (z₄)

where the subsystem

        dz₃/dt = A₃₃^z(t)z₃,        y = C₃^z(t)z₃                (7.6)

is of order n₃ ≤ n and has an observability matrix P^z3(t) with rank n₃ (e.d.). Then P^z(t)
has n − n₃ zero columns.

Theorem 7.4: The system (7.1) is reducible to (7.5) by an equivalence transformation if
and only if P^x has rank n₃ (e.d.) and factors as P^x = (S; R)U₃, where U₃ is an
n₃ × n differentiable matrix with rank n₃ everywhere and S is a kn₃ × n₃
matrix with rank n₃ (e.d.).

The proof is similar to that of Theorem 7.3. Here T^O = (U₃; R₄)⁻¹, where R₄ is any
set of n − n₃ differentiable rows making T^O nonsingular, and S is the observability matrix
of the totally observable subsystem.

We can extend this procedure to find states w₁, w₂, w₃ and w₄ that are controllable and
observable, uncontrollable and observable, controllable and unobservable, and uncontrollable
and unobservable, respectively. The system (7.1) is transformed into

        d  (w₁)   (A₁₁^w  A₁₂^w  0      0    )(w₁)   (B₁^w)
        —— (w₂) = (0      A₂₂^w  0      0    )(w₂) + (0   ) u
        dt (w₃)   (A₃₁^w  A₃₂^w  A₃₃^w  A₃₄^w)(w₃)   (B₃^w)
           (w₄)   (0      A₄₂^w  0      A₄₄^w)(w₄)   (0   )

        y = (C₁^w  C₂^w  0  0)w                                        (7.7)

in which wᵢ is an nᵢ-vector for i = 1, 2, 3, 4 and where the subsystem

        d  (w₁)   (A₁₁^w   0    )(w₁)   (B₁^w)
        —— (w₃) = (A₃₁^w  A₃₃^w)(w₃) + (B₃^w) u
        dt

has a controllability matrix of rank n₁ + n₃ ≤ n (e.d.), and where the subsystem

        d  (w₁)   (A₁₁^w  A₁₂^w)(w₁)   (B₁^w)
        —— (w₂) = (0      A₂₂^w)(w₂) + (0   ) u,        y = (C₁^w  C₂^w)(w₁)        (7.8)
        dt                                                              (w₂)

has an observability matrix of rank n₁ + n₂ ≤ n (e.d.). Hence these subsystems are totally
controllable and totally observable, respectively. Clearly if such a system as (7.7) can be
found, the states w₁, w₂, w₃, w₄ will be as desired because the flow diagram of (7.7) shows
w₂ and w₄ disconnected from the control and w₃ and w₄ disconnected from the output.

Theorem 7.5: The system (7.1) is reducible to (7.7) by an equivalence transformation if
and only if P^x has rank n₁ + n₂ (e.d.) and factors as P^x = R₁U₁ + R₂U₂, and
Q^x has rank n₁ + n₃ (e.d.) and factors as Q^x = V₁S₁ + V₃S₃.

Here Rᵢ(t) is a kn × nᵢ matrix with rank nᵢ (e.d.), Sᵢ(t) is an nᵢ × mn matrix with rank nᵢ (e.d.),
Uᵢ(t) is an nᵢ × n differentiable matrix with rank nᵢ everywhere, Vᵢ(t) is an n × nᵢ differentiable
matrix with rank nᵢ everywhere, and Uᵢ(t)Vⱼ(t) = δᵢⱼIₙᵢ. Furthermore, the ranks of Rᵢ and Sᵢ
must be such that the controllability and observability matrices of (7.8) have the correct
rank.

Proof: First assume (7.1) can be transformed to (7.7). By reasoning similar to (7.4),

        Q^w = (Q₁₁  Q₁₃  F₁₂  F₁₄)
              ( 0    0    0    0 )
              (Q₃₁  Q₃₃  F₃₂  F₃₄)
              ( 0    0    0    0 )

and P^w has nonzero blocks only in its first two columns of partitions, so that Q^x and P^x have
rank n₁ + n₃ (e.d.) and n₁ + n₂ (e.d.), respectively. Let T = (V₁ | V₂ | V₃ | V₄) and
T⁻¹ = (U₁; U₂; U₃; U₄). Then

        Q^x = V₁(Q₁₁  Q₁₃  F₁₂  F₁₄) + V₃(Q₃₁  Q₃₃  F₃₂  F₃₄)

and

        P^x = (P₁₁  P₁₂  G₁₃  G₁₄)ᵀU₁ + (P₂₁  P₂₂  G₂₃  G₂₄)ᵀU₂

which is the required form.

Now suppose P^x and Q^x factor in the required manner. Then, by reasoning similar to
the proof of Theorem 7.3, Rᵢ = (Pᵢ₁  Pᵢ₂  Gᵢ₃  Gᵢ₄)ᵀ for i = 1, 2 and Sᵢ = (Qᵢ₁  Qᵢ₃  Fᵢ₂  Fᵢ₄)
for i = 1, 3, and A₁₃^w, A₁₄^w, A₂₁^w, A₂₃^w, A₂₄^w, A₄₁^w and A₄₃^w have all zero elements.
Theorem 7.5 leads to the following construction procedure:
1. Factor P^xQ^x = R₁S₁ to obtain R₁ and S₁.
2. Solve P^xV₁ = R₁ and U₁Q^x = S₁ for V₁ and U₁.
3. Check that U₁V₁ = Iₙ₁.

4. Factor P^x − R₁U₁ = R₂U₂ and Q^x − V₁S₁ = V₃S₃ to obtain R₂, U₂, V₃ and S₃.
5. Find the reciprocal basis vectors of U₂ to form V₂.
6. Find V₄ as any set of n₄ differentiable columns making T(t) nonsingular.
7. Check that UᵢVⱼ = δᵢⱼIₙᵢ.
8. Form T(t) = (V₁  V₂  V₃  V₄).

Unfortunately, factorization and finding sets of differentiable columns to make T(t)
nonsingular is not easy in general.

Example 7.1.

Given P^x = (sin t  1). Obviously rank P^x = 1 and it is factored by inspection, but suppose we
            (sin t  1)
try to mechanize the procedure for the general case by attempting elementary column operations analogous
to Example 3.13, page 44. Then

        (sin t  1)((sin t)⁻¹  −(sin t)⁻²)   (1  0)
        (sin t  1)(0           (sin t)⁻¹) = (1  0)

The matrix on the right is perfect for P^z, but the transformation matrix is not differentiable at t = iπ for
i = …, −1, 0, 1, …

However, if α(t) and β(t) are analytic functions with no common zeros, let

        E(t) = (α(t)  −β(t))
               (β(t)   α(t))

Then (α(t)  β(t))E(t) = (α²(t) + β²(t)   0), and E(t) is always nonsingular and differentiable.
This gives us a means of attempting a factorization even if not all the elements are analytic.
If α(t) and β(t) are analytic but have common zeros ζ₁, ζ₂, …, ζₖ, …, the matrix E(t)
can be fixed up as

        E(t) = (α(t)  −β(t)) ∏ₖ (1 − t/ζₖ)^{−pₖ} γₖ(ζₖ)
               (β(t)   α(t))

where pₖ is the order of their common zero ζₖ and γₖ(ζₖ) is a convergence factor.

Example 7.2.
Let α(t) = t² and β(t) = t³. Their common zero is t₁ = 0 with order 2 = p₁. Choose γ₁(ζ₁) = 1.
Then

        E(t) = (t²  −t³) t⁻² = (1  −t)
               (t³   t²)       (t   1)

Note E(0) is nonsingular.
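This construction is easy to verify symbolically. A minimal sketch in Python (using sympy,
with the α, β of this example):

    # E(t) for alpha = t^2, beta = t^3: common zero of order 2 at t = 0
    import sympy as sp

    t = sp.symbols('t')
    alpha, beta = t**2, t**3
    E = sp.simplify(sp.Matrix([[alpha, -beta], [beta, alpha]]) * t**(-2))
    print(E)          # Matrix([[1, -t], [t, 1]])
    print(E.det())    # t**2 + 1, nonzero for all real t, so E is nonsingular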


Using repeated applications of this form of elementary row or column operation, it is
clear that we can find T(t) = Eₙ(t)Eₙ₋₁(t)⋯E₁(t) such that P^xT = (P^z1 | 0), and similarly
for Q^x, if P^x or Q^x is analytic and of fixed rank r ≤ n. Also, denoting T⁻¹ = (U₁; U₂),
then P^x factors as P^z1U₁.

The many difficulties encountered if P^x or Q^x changes rank should be noted. Design of
filters or controllers in this case can become quite complicated.

7.5 CANONICAL FORMS FOR TIME-VARYING SYSTEMS


As discussed in Section 5.3, no form analogous to the Jordan form exists for a general
time-varying linear system. Therefore we shall study transformations to forms analogous
to the first and second canonical forms of Section 2.3.

Consider first the transformation of a single-input, time-varying system

        dx/dt = A(t)x + b(t)u                                (7.9)

to a form analogous to (2.6),

        dz     (−a₁(t)    1  0  ⋯  0)     (0)
        ——  =  (−a₂(t)    0  1  ⋯  0) z + (0) u                (7.10)
        dt     (  ⋮                 )     (⋮)
               (−aₙ₋₁(t)  0  0  ⋯  1)     (0)
               (−aₙ(t)    0  0  ⋯  0)     (1)

Theorem 7.6: The system (7.9) can be transformed to (7.10) by an equivalence trans-
formation if and only if Q^x(t), the controllability matrix of (7.9), is differ-
entiable and has rank n everywhere.

Note this implies (7.9) must be more than totally controllable to be put in the form (7.10),
in that rank Q^x = n everywhere, not n (e.d.). However, using the methods of the previous
section, the totally controllable states can be found to form a subsystem that can be put
in canonical form.

Proof: This is proven by showing the needed equivalence transformation is T(t) =
Q^x(t)K, where K is a nonsingular constant matrix. First let x = Q^x(t)w, where the n × n
matrix Q^x(t) is partitioned as (q₁^x | ⋯ | qₙ^x). Then

        dw     ( 0   0  ⋯   0  (−1)ⁿαₙ    )     (1)
        ——  =  (−1   0  ⋯   0  (−1)ⁿ⁻¹αₙ₋₁) w + (0) u                (7.11)
        dt     ( 0  −1  ⋯   0  (−1)ⁿ⁻²αₙ₋₂)     (⋮)
               ( ⋮                        )
               ( 0   0  ⋯  −1  −α₁        )     (0)

This is true because b^x = Q^xb^w = q₁^x and Q^x⁻¹(A^xQ^x − dQ^x/dt) = A^w, so that −A^xqᵢ^x + dqᵢ^x/dt =
qᵢ₊₁^x for i = 1, 2, …, n − 1. Also A^weₙ = Q^x⁻¹(A^xqₙ^x − dqₙ^x/dt). Setting w = Kz, where K is a
suitably chosen nonsingular constant matrix, will give the desired form (7.10).

If the system can be transformed to (7.10) by an equivalence transformation, from
Theorem 7.1, Q^x = TQ^z. Using Theorem 6.10 on (7.10), Q^z = K⁻¹, which has rank n every-
where, and T by definition has rank n everywhere, so Q^x must have rank n everywhere.
Now we consider the transformation to the second canonical form of Section 2.3, which
is also known as phase-variable canonical form:

        dz     (0      1      0      ⋯  0    )     (0)
        ——  =  (0      0      1      ⋯  0    ) z + (⋮) u                (7.12)
        dt     (⋮                            )     (0)
               (0      0      0      ⋯  1    )
               (a₁(t)  a₂(t)  a₃(t)  ⋯  aₙ(t))     (1)

The controllability matrix Q^z of this system is

        Q^z = (0   0    ⋯  0         (−1)ⁿ⁻¹  )
              (0   0    ⋯  (−1)ⁿ⁻²   qₙ₋₁,ₙ₋₁ )
              (⋮                              )
              (0  −1    ⋯  qₙ₋₂,₂    qₙ₋₁,₂   )
              (1   q₁₁  ⋯  qₙ₋₂,₁    qₙ₋₁,₁   )

where qᵢᵢ = (−1)ⁱaₙ for 1 ≤ i ≤ n, and for 1 ≤ k < i ≤ n each qᵢₖ is generated recursively from
the terms (−1)ⁱaₙ₋ᵢ₊ₖ, the products (−1)ᵏaₙ₋ⱼqᵢ₋ₖ,ⱼ₊₁ for j = 0, 1, …, i − k − 1, and derivatives
of the lower-order qᵢₖ (Problem 7.22 verifies this relationship for a third-order system).

Theorem 7.7: The system (7.9) can be transformed to (7.12) by an equivalence trans-
formation if and only if Q^x(t), the controllability matrix of (7.9), is differ-
entiable and has rank n everywhere.

Proof: The transformation matrix is T = Q^xQ^z⁻¹, and Q^z has rank n everywhere.
Therefore the proof proceeds similarly to that of Theorem 7.6, transforming to (7.11) and
then setting z = Q^zw. To determine the aᵢ(t) from the αᵢ(t) obtained in (7.11), note that
−A^zQ^z + dQ^z/dt = −Q^zA^w written out in its columns gives

        (q₂^z | q₃^z | ⋯ | qₙ^z | qₙ₊₁^z) = (q₂^z | q₃^z | ⋯ | qₙ^z | −Q^zA^weₙ)

Therefore qₙ₊₁^z = −Q^zA^weₙ, which gives a set of relations that can be solved recursively
for the aᵢ(t) in terms of the αᵢ(t) of (7.11).

Example 7.3.
Suppose we have a second-order system. Then q₃^z = −Q^zA^weₙ can be written out once in terms of
the aᵢ of (7.12) and once in terms of the αᵢ of (7.11). By equating the two expressions it is possible
to find, recursively, a₂ = −α₁ and a₁ = −α₂ − dα₁/dt.
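In the time-invariant case the transformation T = Q^xQ^z⁻¹ can be computed directly, since
both controllability matrices are constant. A minimal sketch in Python (using numpy; the
matrices are illustrative, and the target pair is built from the characteristic polynomial):

    # Time-invariant transformation to phase-variable form via T = Q^x (Q^z)^{-1}
    import numpy as np

    A = np.array([[1., 2.], [3., 4.]])
    b = np.array([[0.], [1.]])

    Qx = np.hstack([b, -A @ b])              # Theorem 6.10 form (b | -Ab), n = 2
    a1 = -np.linalg.det(A)                   # bottom row of (7.12): a1, a2 from
    a2 = np.trace(A)                         #   char poly s^2 - tr(A) s + det(A)
    Az = np.array([[0., 1.], [a1, a2]])
    bz = np.array([[0.], [1.]])
    Qz = np.hstack([bz, -Az @ bz])
    T = Qx @ np.linalg.inv(Qz)
    print(np.linalg.inv(T) @ A @ T)          # recovers Az
    print(np.linalg.inv(T) @ b)              # recovers bz = (0, 1)^T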

It appears that the conditions of Theorems 7.6 and 7.7 can be relaxed if the b^z(t) of
equation (7.10) or (7.12) is left a general vector function of time instead of eₙ. No results
are available at present for this case.

Note that if we are given (7.9), defining y = z₁ in (7.12) permits us to find a corre-
sponding scalar nth-order equation dⁿy/dtⁿ = a₁y + a₂ dy/dt + ⋯ + aₙ dⁿ⁻¹y/dtⁿ⁻¹ + u.
For the case where u is a vector instead of a scalar in (7.9), a possible approach is to set
all elements in u except one equal to zero, and if the resulting Q^x has rank n everywhere
then the methods developed previously are applicable. If this is not possible or desirable,
the form

        dw     (A₁₁^w  ⋯  A₁ₗ^w)
        ——  =  (  ⋮         ⋮  ) w + (f₁ | f₂ | ⋯ | fₗ)u                (7.13)
        dt     (Aₗ₁^w  ⋯  Aₗₗ^w)

may be obtained, where the fᵢ are in general nonzero n-vector functions of time, and each Aᵢᵢ^w

is of the form (7.11); and for i ≠ j,

        Aᵢⱼ^w = (0  ⋯  0  a₁ⁱʲ(t) )
                (⋮      ⋮    ⋮    )
                (0  ⋯  0  aₙᵢⁱʲ(t))

i.e. zero except possibly for its last column.

This form is obtained by calculating m Q^x matrices, one for each of the m columns of the B
matrix, i.e. treating the system as m single-input systems. Then from any l of these
single-input Q^x matrices, choose the first n₁ columns of the first Q^x matrix, the first n₂
columns of the second Q^x matrix, …, and the first nₗ columns of the lth Q^x matrix such that
these columns are independent and differentiable and n₁ + n₂ + ⋯ + nₗ = n. In this order
these columns form the T matrix. The proof is similar to that of Theorem 7.6.

To transform from this form to a form analogous to the first canonical form (7.10),
use a constant matrix similar to the K matrix used in the proof of Theorem 7.6. However,
as yet a general theory of reduction of time-varying linear systems has not been developed.

Solved Problems
7.1. Transform

        x(m + 1) = ( · )x(m) + ( · )u(m)

     to Jordan form.

     From Example 4.8, page 74, and Problem 4.41, page 97, the Jordan form J and the matrix T₀
     are found. Solving for the arbitrary constants in T = T₀K gives a₂ = 2, a₁ = −1, β₁ = −3.
     The proper transformation is then

        T = T₀K = ( · )

     Substitution of x = Tz in the system equation then gives the canonical form

        z(m + 1) = Jz(m) + ( · )u(m)

7.2. Transform

        dx/dt = ( · )x + ( · )u

     to real Jordan form.

     From Problem 4.8, page 93, and Problem 4.41, page 97, x = Tz where T = T₀ diag(α, β, β*).
     Substitution into the system equation gives

        dz/dt = ( ·     0      0  )     (−2/α      )
                ( 0   1 − j    0  ) z + ((1 − j)/β ) u
                ( 0     0    1 + j)     ((1 + j)/β*)

     Then α = −2 and β = 1 − j. Putting this in real form as in Section 7.3 gives the real Jordan
     representation.

7.3. Using the methods of Section 7.4, reduce

        dx/dt = ( ·   ·   · )     ( · )
                ( ·   ·   · ) x + ( · ) u,        y = (1  1  2)x
                ( ·  −1  −2 )     ( · )

     (i) to a controllable system, (ii) to an observable system, (iii) to a controllable
     and observable system.

     (i) Form the controllability matrix

        Q = ( ·   ·   · )
            ( ·   ·   · )
            (−2   0  −2 )

     Observe that Q = (b | −Ab | A²b), in accord with using the form of Theorem 6.10 in the
     time-invariant case, and not Q = (b | Ab | A²b), which is the form of Theorem 6.8. Performing
     elementary row operations to make the bottom row of Q^z zero gives the required transformation
     x = Tz, where

        T⁻¹ = ( · )        and        T = ( · )

     Using this change of variables, dz/dt = T⁻¹(AT − dT/dt)z + T⁻¹bu:

        dz/dt = ( · )z + ( · )u        y = (−1  −1  2)z

(ii) Forming the observability matrix gives

        P^x = ( · ) = ( · )( · )

where the factorization is made using elementary column transformations. Using the trans-
formation x = T^Oz,

        dz/dt = (−1  −3  0)     (−1)
                ( 0   2  0) z + ( 0) u        y = (1  2  0)z
                (−2  −2  1)     (−2)

(iii) Using the construction procedure, factor

        P^xQ^x = R₁S₁ = ( · )(−1  −1  −1)

Then solving P^xV₁ = R₁ for V₁ and U₁Q^x = S₁ for U₁ leaves one element of each, v₃₁ and
u₁₃, arbitrary. Note U₁V₁ = 1 for any values of u₁₃ and v₃₁. The usual procedure would be
to pick a value for each, but for instructional purposes we retain u₁₃ and v₃₁ arbitrary. Then
factoring P^x − R₁U₁ = R₂U₂ and Q^x − V₁S₁ = V₃S₃, and choosing v₃₁ = 0 and u₁₃ = 1, gives

        T = ( · )

Using UᵢVⱼ = δᵢⱼ we can determine T⁻¹.

The reduced system is then

        dz/dt = ( · )z + ( · )u,        y = (1  1  0)z

where z₁ is controllable and observable, z₂ is observable and uncontrollable, and z₃ is con-
trollable and unobservable. Setting v₃₁ = u₁₃ = 1 leads to the Jordan form, and setting
v₃₁ = u₁₃ = 0 leads to the system found in (ii), which coincidentally happened to be in the
correct controllable form in addition to the observable form.

7.4. Reduce the following system to controllable-observable form.

        dx/dt = ( · )x + ( · )u        y = (0  t²  0  −1)x

     Using the construction procedure following Theorem 7.5, form Q^x and P^x and factor
     P^xQ^x = R₁S₁, which gives S₁ = (2  2  2  2) and U₁ = (0  1  0  0). Factoring P^x − R₁U₁ = R₂U₂
     and Q^x − V₁S₁ = V₃S₃ gives

        U₂ = (0  0  0  1)        S₃ = (t   −5   −16 − 2t   −6t − 40)

     Then v₂ = (0  0  0  1)ᵀ and v₄ = (0  0  1  0)ᵀ, and T = (V₁ | V₂ | V₃ | V₄). It is interesting
     to note that the equivalence transformation

        T = ( · )

     will also put the system in the form (7.7).

7.5. Reduce the following system to controllable form:

        dx/dt = ( ·   · ) x + (  t  ) u
                ( ·   · )     (cos t)

     First we calculate Q^x and use elementary row operations to obtain

        E(t)Q^x = (  t     cos t)(  t     t cos t)   (t² + cos²t   (t² + cos²t) cos t)
                  (−cos t    t  )(cos t   cos²t  ) = (0             0                )

     The required transformation is

        T = E⁻¹(t) = [1/(t² + cos²t)] (  t    −cos t)
                                      (cos t    t   )

7.6. Put the time-invariant system

        x(m + 1) = ( · )x(m) + ( · )u(m)

     (i) into first canonical form (7.10) and (ii) into phase-variable canonical form (7.12).

     Note this is of the same form as the system of Problem 7.3, except that this is a discrete-time
     system. Since the controllability matrix has the same form in the time-invariant case, the pro-
     cedures developed there can be used directly. (See Problem 7.19 for time-varying discrete-time
     systems.)

     From part (i) of Problem 7.3, the system can be put in a form whose uncontrollable state is
     decoupled. Therefore the best we can do is put the controllable subsystem into the desired
     form. The required transformation to the first canonical form is found as in Theorem 7.6.
     To obtain the phase-variable canonical form,

        T = ( · )

     from which we obtain the phase-variable canonical form

        z_p(m + 1) = ( · )z_p(m) + ( · )u(m)

     By chance, this also happens to be in the form of the first canonical form.

7.7. Put the time-varying system

        dx/dt = ( ·     ·  ) x + ( · ) u        for t ≥ 0
                ( ·   sin t)

     (i) into first canonical form, (ii) into second canonical form, and (iii) find a scalar
     second-order equation with the given state space representation.

     To obtain the first canonical form, T = Q^xK is computed, giving

        T = ( · )

     To obtain the phase-variable form,

        T = ( · )

     The second-order scalar equation corresponding to this is

        d²y/dt² = a₁(t)y + a₂(t) dy/dt + u

     where y = z_{p1}.

7.8. Transform to canonical form the multiple-input system

        dx/dt = (1   2  −1)     (0  1)
                (1  −4   3) x + (1  1) u
                (0   1   0)     (0  1)

     To obtain the form (7.13), we calculate the two Q^x matrices resulting from each of the columns
     of the B^x matrix separately:

        Q^x(u₁ only) = ( · )        Q^x(u₂ only) = ( · )

     Note both of these matrices are singular. However, choosing T as the first two columns of
     Q^x(u₁ only) and the first column of Q^x(u₂ only) gives

        dw/dt = ( · )w + ( · )u

     Also, T could have been chosen as the first column of Q^x(u₁ only) and the first two columns of
     Q^x(u₂ only).

     To transform to a form analogous to the first canonical form, let w = Kz, where

        K = ( · )

     Then

        dz/dt = ( · )z + ( · )u
Supplementary Problems

7.9. Transform dx/dt = ( · )x + ( · )u to Jordan form.

7.10. Transform dx/dt = ( · )x + ( · )u to real Jordan form.

7.11. Using the methods of Section 7.4, reduce

        dx/dt = ( · )x + ( · )u        y = (1  −1  1)x

(i) to a controllable system, (ii) to an observable system, (iii) to an observable and controllable
system.

7.12. Prove Theorem 7.2, page 149.

7.13. Prove Theorem 7.4, page 150.

7.14. Show that the factorization requirement of Theorem 7.3 can be dropped if T(t) can be nonsingular
and differentiable for times t everywhere dense in [t₀, t₁].

7.15. Consider the system

        d  (x₁)   (f₁(t))
        —— (x₂) = (f₂(t)) u
        dt

where f₁(t) = 1 − cos t for 0 ≤ t ≤ 2π and zero elsewhere, and f₂(t) = 0 for 0 ≤ t ≤ 2π and
1 − cos t elsewhere. Can this system be put in the form of equation (7.2)?

7.16. Find the observable states of the system

        dx/dt = ( · )x        y = (1  1  1  1)x

7.17. Check that the transformation of Problem 7.5, page 159, puts the system in the form of equation
(7.2), page 149, by calculating A^z and B^z.

7.18. Develop Theorem 7.3 for time-varying discrete-time systems.

7.19. Develop the transformation to a form similar to equation (7.11) for time-varying discrete-time
systems.

7.20. Reduce the system

        dx/dt = ( ·   0   −t + 2)     (1)
                ( ·   1    t + 2) x + (1) u
                ( ·   0   −t + 1)     (0)

to a system of the form of equation (7.2).

7.21. Given the time-invariant system dx/dt = Ax + eₙu, where the system is in phase-variable canonical
form as given by equation (7.12). Let z = Tx where z is in the Jordan canonical form dz/dt =
Λz + bu and Λ is a diagonal matrix. Show that T is the Vandermonde matrix of eigenvalues.

7.22. Verify the relationship for the qᵢₖ in terms of the aᵢ following equation (7.12) for a third-order
system.

7.23. Solve for the aᵢ in terms of the αᵢ for i = 1, 2, 3 (third-order system) in a manner analogous to
Example 7.3, page 154.

7.24. Transform the system

        dx/dt = (−11     4   · )
                (−1/2    1   · ) x + ( · )u
                (−27/2   6   · )

to phase-variable canonical form.

7.25. Transform the system

        dx/dt = ( ·    ·     · )
                ( ·    ·     · ) x + ( · ) u
                (−eᵗ  e⁻ᵗ   −2 )

to phase-variable canonical form.

7.26. Using the results of Section 7.5, find the transformation x = Tw that puts the system

        dx/dt = ( · )x + ( · )u

into the form

        dw/dt = ( · )w + ( · )u

7.27. Obtain explicit formulas to go to phase-variable canonical form directly in the case of time-invariant
systems.

7.28. Use the duality principle to find a transformation that puts the system dx/dt = A(t)x and y = C(t)x
into the form

        dz/dt = ( · )z        y = (0  0  ⋯  0  1)z

7.29. Prove that ‖T‖ < ∞ for the transformation to phase-variable canonical form.

Answers to Supplementary Problems

7.9. T = ( · ), where β is any number ≠ 0.

7.10. T = ( · )

7.11. There is one controllable and observable state, one controllable and unobservable state, and one
uncontrollable and observable state.

7.15. No. Q^x = V₁(1  0) but V₁ does not have rank one everywhere.

7.16. The transformation T = ( · ) puts the system into the form of equation (7.6), for any r₄ᵢ that
make T nonsingular. Also, Jordan form can be used but is more difficult algebraically.

7.24. T⁻¹ = (−3/2   1   1)
            ( 5/2   1  −2)
            (−1    −1   1)

7.25. A

7.26. T = ( · )

7.28. This form is obtained using the same transformation that puts the system dw/dt = Aᵀ(t)w + Cᵀ(t)u
into phase-variable canonical form.

7.29. The elements of Q^z⁻¹ are a linear combination of the elements of Q^z, which are always finite as
determined by the recursion relation.
Chapter 8

Relations with Classical Techniques


8.1 INTRODUCTION
Classical techniques such as block diagrams, root locus, error constants, etc., have been
used for many years in the analysis and design of time-invariant single input-single output
systems. Since this type of system is a subclass of the systems that can be analyzed by
state space methods, we should expect that these classical techniques can be formulated
within the framework already developed in this book. This formulation is the purpose of
the chapter.

8.2 MATRIX FLOW DIAGRAMS


We have already studied flow diagrams in Chapter 2 as a graphical aid to obtaining the
state equations. The flow diagrams studied in Chapter 2 used only four basic objects
(summer, scalor, integrator, delayor) whose inputs and outputs were scalar functions of
time. Here we consider vector inputs and outputs to these basic objects. In this chapter
these basic objects will have the same symbols, and Definitions 2.1-2.4, pages 16-17, hold with
the following exceptions. A summer has n m-vector inputs u₁(t), u₂(t), …, uₙ(t) and one
output m-vector y(t) = ±u₁(t) ± u₂(t) ± ⋯ ± uₙ(t). A scalor has one m-vector input u(t)
and one output k-vector y(t) = A(t)u(t), where A(t) is a k × m matrix. An integrator has
one m-vector input u(t) and one output m-vector y(t) = y(t₀) + ∫_{t₀}^t u(τ) dτ. To denote
vector (instead of purely scalar) time function flow from one basic object to another, thick
arrows will be used.
Example 8.1.
Consider the two input - one output system

        dx/dt = (0  1) x + (2  0) u,        y = (3  2)x
                (0  0)     (0  1)

This can be diagrammed as in Fig. 8-1.

Fig. 8-1

Also, flow diagrams of transfer functions (block diagrams) can be drawn in a similar
manner for time-invariant systems. We denote the Laplace transform of x(t) as ..c{x}, etc.


Example 8.2.
The block diagram of the system considered in Example 8.1 is shown in Fig. 8-2.

Fig. 8-2

Using equation (5.56), or proceeding analogously from block diagram manipulation, this can be
reduced to the diagram of Fig. 8-3, where

        H(s) = (6/s   (2s + 3)/s²) = (3  2)[sI − (0  1)]⁻¹ (2  0)
                                                 (0  0)    (0  1)

Fig. 8-3
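The reduction H(s) = C(sI − A)⁻¹B is easy to verify symbolically. A minimal sketch in
Python (using sympy, with the A, B, C reconstructed above):

    # Transfer function matrix H(s) = C (sI - A)^{-1} B for Example 8.2
    import sympy as sp

    s = sp.symbols('s')
    A = sp.Matrix([[0, 1], [0, 0]])
    B = sp.Matrix([[2, 0], [0, 1]])
    C = sp.Matrix([[3, 2]])
    H = sp.simplify(C * (s*sp.eye(2) - A).inv() * B)
    print(H)     # Matrix([[6/s, (2*s + 3)/s**2]])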

Vector block diagram manipulations are similar to the scalar case, and are as useful
to the system designer. Keeping the system representation in matrix form is often helpful,
especially when analyzing multiple input-multiple output devices.

8.3 STEADY STATE ERRORS


Knowledge of the type of feedback system that will follow an input with zero steady
state error is useful for designers. In this section we shall investigate steady state errors
of systems in which only the output is fed back (unity feedback). The development can be
extended to nonunity feedback systems, but involves comparing the plant output with a
desired output which greatly complicates the notation (see Problem 8.22). Here we extend
the classical steady state error theory for systems with scalar unity feedback to time-
varying multiple input-multiple output systems. By steady state we mean the asymptotic
behavior of a function for large t. The system considered is diagrammed in Fig. 8-4. The
plant equation is dx/dt = A(t)x + B(t)e, the output is y = C(t)x and the reference input is
d(t) = y(t) + e(t), where y, d and e are all m-vectors. For this system it will always be
assumed that the zero output is asymptotically stable, i.e.
lim C(t)cI»A_BC(t, T)
t~oo
= 0

where alP A-BC(t, T)/at = [A(t) - B(t)C(t)]cI» A-BC (t, T) and lP A-BC(t, t) = I. Further, we shall
be concerned only with inputs d(t) that do not drive Ily(t)11 to infinity before t = co, so that
we obtain a steady state as t tends to infinity.

Fig. 8-4. Unity Feedback System with Asymptotically Stable Zero Output

Theorem 8.1: For the system of Fig. 8-4, lim_{t→∞} e(t) = 0 if and only if d = C(t)w + g,
where dw/dt = A(t)w + B(t)g for all t ≥ t₀, in which g(t) is any function
such that lim_{t→∞} g(t) = 0, and A, B, C are unique up to a transformation
on w.

Proof: Consider two arbitrary functions f(t) and h(t) whose limits may not exist as t
tends to ∞. If lim_{t→∞}[f(t) − h(t)] = 0, then f(t) − h(t) = r(t) for all t, where r(t) is an
arbitrary function such that lim_{t→∞} r(t) = 0. From this, if 0 = lim_{t→∞} e(t) =
lim_{t→∞}[d(t) − y(t)], then for all t,

        d(t) = y(t) + r(t) = C(t)Φ_{A−BC}(t, t₀)x(t₀) + ∫_{t₀}^t C(t)Φ_{A−BC}(t, τ)B(τ) d(τ) dτ + r(t)

             = ∫_{t₀}^t C(t)Φ_{A−BC}(t, τ)B(τ) d(τ) dτ + g(t) + C(t)Φ_{A−BC}(t, t₀)w(t₀)

where the change of variables g(t) = r(t) + C(t)Φ_{A−BC}(t, t₀)[x(t₀) − w(t₀)] is one-to-one for
arbitrary constant w(t₀) because lim_{t→∞} C(t)Φ_{A−BC}(t, t₀) = 0. This Volterra integral equation
for d(t) is equivalent to the differential equations dw/dt = [A(t) − B(t)C(t)]w + B(t)d and
d = C(t)w + g. Substituting the latter equation into the former gives the set of equations
that generate any d(t) such that lim_{t→∞} e(t) = 0.

Conversely, from Fig. 8-4, dx/dt = A(t)x + B(t)e = [A(t) − B(t)C(t)]x + B(t)d. Assuming
d = C(t)w + g and subtracting dw/dt = A(t)w + B(t)g gives

        d(x − w)/dt = [A(t) − B(t)C(t)](x − w)

Then        lim_{t→∞} e = lim_{t→∞} (d − y) = lim_{t→∞} [g − C(t)(x − w)] = 0

From the last part of the proof we see that e(t) = g(t) − C(t)Φ_{A−BC}(t, t₀)[x(t₀) − w(t₀)]
regardless of what the function g(t) is. Therefore, the system dw/dt = A(t)w + B(t)g with
d = C(t)w + g and the system dx/dt = [A(t) − B(t)C(t)]x + B(t)d with e = d − C(t)x are
inverse systems. Another way to see this is that in the time-invariant case we have the
transfer function matrix of the open loop system H(s) = C(sI − A)⁻¹B relating e to y. Then
for zero initial conditions, ℒ{d} = [H(s) + I]ℒ{g} and ℒ{e} = [H(s) + I]⁻¹ℒ{d}, so that
ℒ{g} = ℒ{e}. Consequently the case where g(t) is a constant vector forms a sort of bound-
ary between functions that grow with time and those that decay. Of course this neglects
those functions (like sin t) that oscillate, for which we can also use Theorem 8.1.

Furthermore, the effect of nonzero initial conditions w(t₀) can be incorporated into g(t).
Since we are interested in only the output characteristics of the plant, we need concern our-
selves only with observable states. Also, because uncontrollable but observable states of the
plant must tend to zero by the assumed asymptotic stability of the closed loop system, we
need concern ourselves only with states that are both observable and controllable. Use of
equation (6.9) shows that the response due to any wᵢ(t₀) is identical to the response due to
an input made up of delta functions and derivatives of delta functions. These are certainly
included in the class of all g(t) such that lim_{t→∞} g(t) = 0.

Since the case g(t) = constant forms a sort of boundary between increasing and de-
creasing functions, and since we can incorporate initial conditions into this class, we may
take g(t) as the unit vectors to give an indication of the kind of input the system can follow
with zero error. In other words, consider inputs

        dᵢ(t) = C(t) ∫_{t₀}^t Φ_A(t, τ)B(τ)eᵢ dτ + eᵢ        for i = 1, 2, …, m

which can be combined into the matrix function

        D(t) = C(t) ∫_{t₀}^t Φ_A(t, τ)B(τ)(e₁ | e₂ | ⋯ | eₘ) dτ + I = C(t) ∫_{t₀}^t Φ_A(t, τ)B(τ) dτ + I

Inputs of this form give unity error, and probably inputs that go to infinity any little bit
slower than this will give zero error.

Example 8.3.
Consider the system of Example 8.1 in which e(t) = u₂(t) and there is no input u₁(t). The zero input,
unity feedback system is then

        dx/dt = [(0  1) − (0)(3  2)] x
                 (0  0)   (1)

whose output

        y = (3  2)x = e⁻ᵗ{[3x₁(0) + 2x₂(0)] cos √2 t + (1/√2)[x₂(0) − 3x₁(0)] sin √2 t}

tends to zero asymptotically. Consequently Theorem 8.1 applies to the unity feedback system, so that the
equations

        dw/dt = (0  1)w + (0)g,        d = (3  2)(w₁) + g
                (0  0)    (1)                    (w₂)

where lim_{t→∞} g(t) = 0, generate the class of inputs d(t) that the system can follow with zero error.
Solving this system of equations gives

        d(t) = 3w₁(0) + 2w₂(0) + 3tw₂(0) + ∫₀ᵗ [3(t − τ) + 2]g(τ) dτ + g(t)

For g(t) = 0, we see that the system can follow arbitrary steps and ramps with zero error, which is in
agreement with the classical conclusion that the system is of type 2. Also, evaluating

        ∫₀ᵗ [3(t − τ) + 2] dτ = 1.5t² + 2t

shows the system will follow t² with constant error and will probably follow with zero error any function
t^{2−ε} for any ε > 0. This is in fact the case, as can be found by taking g(t) = t^{−ε}.

Now if we consider the system of Example 8.1 in which e(t) = u₁(t) and there is no input u₂(t),
then the closed loop system is

        dx/dt = [(0  1) − (2)(3  2)] x
                 (0  0)   (0)

The output of this system is y = 0.5x₂(0) + [3x₁(0) + 1.5x₂(0)]e⁻⁶ᵗ, which does not tend to zero
asymptotically, so that Theorem 8.1 cannot be used.

Definition 8.1: The system of Fig. 8-4 is called a type-l system (l = 1, 2, …) when
lim_{t→∞} e(t) = 0 for the inputs dᵢ = (t − t₀)^{l−1}U(t − t₀)eᵢ for all i = 1, 2, …, m.

In the definition, U(t − t₀) is the unit step function starting at t = t₀ and eᵢ is the ith unit
vector. All systems that do not satisfy Definition 8.1 will be called type-0 systems.

Use of Theorem 8.1 involves calculation of the transition matrix and integration of the
superposition integral. For classical scalar type-l systems the utility of Definition 8.1 is
that the designer can simply observe the power of s in the denominator of the plant transfer
function and know exactly what kind of input the closed loop system will follow. The fol-
lowing theorem is the extension of this, but is applicable only to time-invariant systems with
the plant transfer function matrix H(s) = C(sI − A)⁻¹B.

Theorem 8.2: The time-invariant system of Fig. 8-4 is of type l ≥ 1 if and only if
H(s) = s^{−l}R(s) + P(s), where R(s) and P(s) are any matrices such that
lim_{s→0} sR⁻¹(s) = 0 and ‖lim_{s→0} s^{l−1}P(s)‖ < ∞.

Proof: From Theorem 8.1, the system is of type l if and only if ℒ{(d₁ | d₂ | ⋯ | dₘ)} =
(l−1)! s^{−l}I = [H(s) + I]G(s), where ℒ{gᵢ}, the columns of G(s), are the Laplace transforms
of any functions gᵢ(t) such that 0 = lim_{t→∞} gᵢ(t) = lim_{s→0} sℒ{gᵢ}, where sℒ{gᵢ} is analytic for
Re s ≥ 0.

First, assume H(s) = s^{−l}R(s) + P(s), where lim_{s→0} sR⁻¹(s) = 0 so that R⁻¹(s) exists in a
neighborhood of s = 0. Choose

        G(s) = (l−1)! s^{−l}[H(s) + I]⁻¹ = (l−1)! [R(s) + s^lP(s) + s^lI]⁻¹

Since [H(s) + I]⁻¹ is the asymptotically stable closed loop transfer function matrix, it is
analytic for Re s ≥ 0. Then sG(s) has at most a pole of order l − 1 at s = 0 in the region
Re s ≥ 0. In some neighborhood of s = 0 where R⁻¹(s) exists we can expand

        sG(s) = (l−1)! s[R(s) + s^lP(s) + s^lI]⁻¹ = (l−1)! sR⁻¹(s)[I + Z(s) + Z²(s) + ⋯]

where Z(s) = sR⁻¹(s)[s^{l−1}P(s) + s^{l−1}I]. Since lim_{s→0} Z(s) = 0, this expansion is valid for
small |s|, and lim_{s→0} sG(s) = 0. Consequently sG(s) has no pole at s = 0 and must be
analytic in Re s ≥ 0, which satisfies Theorem 8.1.

Conversely, assume lim_{s→0} sG(s) = 0, where sG(s) is analytic in Re s ≥ 0. Write H(s) =
s^{−l}R(s) + P(s), where P(s) is any matrix such that ‖lim_{s→0} s^{l−1}P(s)‖ < ∞ and R(s) =
s^l[H(s) − P(s)] is still arbitrary. Then

        (l−1)! s^{−l}I = [s^{−l}R(s) + P(s) + I]G(s)

can be solved for sR⁻¹(s) as

        (l−1)! sR⁻¹(s) = sG(s)[I + W(s)]⁻¹ = sG(s)[I + W(s) + W²(s) + ⋯]

where (l−1)! W(s) = [s^{l−1}P(s) + s^{l−1}I]sG(s). This expansion is valid for ‖sG(s)‖ small
enough, so that R⁻¹(s) exists in some neighborhood of s = 0. Taking limits then gives
lim_{s→0} sR⁻¹(s) = 0.
We should be careful in the application of Theorem 8.2, however, in light of Theorem 8.1.
The classification of systems into type l is not as clear cut as it appears. A system with
H(s) = (s + ε)⁻¹ can follow inputs of the form e^{−εt}. As ε tends to zero this tends to a step
function, so that we need only take ε⁻¹ on the order of the time of operation of the system.

Unfortunately, for time-varying systems there is no guarantee that if a system is of
type N, then it is of type N − k for all k ≥ 0. However, this is true for time-invariant
systems. (See Problem 8.24.)
Example 8.4.

Fig. 8-5

The system shown in Fig. 8-5 has a plant transfer function matrix H(s) that can be written in the form

        H(s) = (1/s²)(−6s⁻¹ + 1   9) + (0           −1)  =  s⁻²R(s) + P(s)
                     (1           0)   (12s⁻¹ + 3    0)

in which

        ‖lim_{s→0} sP(s)‖ = ‖lim_{s→0} (0        −s)‖ < ∞
                                       (12 + 3s   0)

and where

        lim_{s→0} sR⁻¹(s) = lim_{s→0} s (0     1           )   (0   0 )
                                        (1/9  (6s⁻¹ − 1)/9 ) = (0  2/3) ≠ 0

Since lim_{s→0} sR⁻¹(s) has a nonzero element, the system is not of type 2 as appears to be the case upon
first inspection. Rewrite H(s) in the form (where R(s) and P(s) are different)

        H(s) = s⁻¹R(s) + P(s)

with R(s) = s[H(s) − P(s)] chosen so that again ‖lim_{s→0} P(s)‖ < ∞, but now

        lim_{s→0} sR⁻¹(s) = (0  0)
                            (0  0)

Since the closed loop system has poles at −1, −1, −0.5 and −0.5, the zero output is asymptotically stable.
Therefore the system is of type 1.

To find the error constant matrix of a type-l system, we use block diagram manipula-
tions on Fig. 8-4 to get ℒ{e} = [I + H(s)]⁻¹ℒ{d}. If it exists, then

        lim_{t→∞} e(t) = lim_{s→0} s[I + s⁻ˡR(s) + P(s)]⁻¹ℒ{d}
                       = lim_{s→0} s^{l+1}[sˡI + R(s) + sˡP(s)]⁻¹ℒ{d} = lim_{s→0} s^{l+1}R⁻¹(s)ℒ{d}

for any l > 0. Then an error constant matrix table can be formed for time-invariant sys-
tems of Fig. 8-4.

                        Steady State Error Constant Matrices

        System Type     Step Input                  Ramp Input          Parabolic Input

        0               lim_{s→0} [I + H(s)]⁻¹      *                   *

        1               0                           lim_{s→0} R⁻¹(s)    *

        2               0                           0                   lim_{s→0} R⁻¹(s)

In the table * means the system cannot follow all such inputs.
Example 8.5.
The type-1 system of Example 8.4 has an error constant matrix

        lim_{s→0} R⁻¹(s) = (0   0)
                           (0  −6)

Thus if the input were (t − t₀)U(t − t₀)e₂, the steady state output would be [(t − t₀)U(t − t₀) + 6]e₂.
The system can follow with zero steady state error an input of the form (t − t₀)U(t − t₀)e₁.
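The table entries are straightforward limits that a computer algebra system can evaluate.
A minimal sketch in Python (using sympy, for an illustrative scalar type-1 loop
H(s) = K/(s(s + 1)) rather than the matrix example above):

    # Steady-state errors from the final value theorem: e_ss = lim s [1+H]^{-1} L{d}
    import sympy as sp

    s, K = sp.symbols('s K', positive=True)
    H = K / (s*(s + 1))
    S = 1 / (1 + H)                               # error transfer function
    step_err = sp.limit(s * S * (1/s), s, 0)      # unit step input
    ramp_err = sp.limit(s * S * (1/s**2), s, 0)   # unit ramp input
    print(step_err, ramp_err)                     # 0, 1/K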

8.4 ROOT LOCUS


Because root locus is useful mainly for time-invariant systems, we shall consider only
time-invariant systems in this section. Both single and multiple inputs and outputs can
be considered using vector notation, i.e. we consider

        dx/dt = Ax + Bu        y = Cx + Du                (8.1)

Then the transfer function from u to y is H(s) = C(sI − A)⁻¹B + D, with poles determined
by det(sI − A) = 0. Note for the multiple input and output case these are the poles of the
whole system. The eigenvalues of A determine the time behavior of all the outputs.

We shall consider the case where equations (8.1) represent the closed loop system.
Suppose that the characteristic equation det(sI − A) = 0 is linear in some parameter K, so
that it can be written as

        sⁿ + (θ₁ + ψ₁K)sⁿ⁻¹ + (θ₂ + ψ₂K)sⁿ⁻² + ⋯ + (θₙ₋₁ + ψₙ₋₁K)s + θₙ + ψₙK = 0

This can be rearranged to standard root locus form under K variation,

        −1 = K (ψ₁sⁿ⁻¹ + ψ₂sⁿ⁻² + ⋯ + ψₙ₋₁s + ψₙ)/(sⁿ + θ₁sⁿ⁻¹ + ⋯ + θₙ₋₁s + θₙ)

The roots of the characteristic equation can be found as K varies using standard root locus
techniques. The assumed form of the characteristic equation results from both loop gain
variation and parameter variation of the open loop system.
Example 8.6.
Given the system of Fig. 8-6 with variable feedback gain K.

Fig. 8-6

The closed loop system can be written as

        dx/dt = (−K    1) x + (1) Kd
                (−K   −3)     (1)

The characteristic equation is s² + (3 + K)s + 4K = 0. Putting it into standard root locus form,

        −1 = K(s + 4)/(s(s + 3))

leads to the root locus shown in Fig. 8-7.

Fig. 8-7
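The same locus can be traced numerically by sweeping K and computing closed loop
eigenvalues. A minimal sketch in Python (using numpy, with the closed loop matrix
reconstructed above):

    # Root locus by gain sweep: eigenvalues of the closed loop A as K varies
    import numpy as np

    for K in [0.1, 0.5, 1.0, 2.0, 5.0, 10.0]:
        Acl = np.array([[-K, 1.], [-K, -3.]])
        roots = np.linalg.eigvals(Acl)           # roots of s^2 + (3+K)s + 4K
        print(f"K = {K:5.1f}: roots = {np.sort_complex(roots)}")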

Example 8.7.
The feedback system of Fig. 8-8 has an unknown parameter α; the plant transfer function is
(s + 3) sinh α / (s² + 3s + sinh α). Find the effect of variations in α upon the closed loop roots.

Fig. 8-8

Let sinh α = K. The usual procedure is to set the open loop transfer function equal to −1 to find
the closed loop poles of a unity feedback system:

        −1 = (s + 3)K/(s² + 3s + K)

This can be rearranged to form the characteristic equation of the closed loop system, s² + 3s + K +
(s + 3)K = 0. Further rearrangement gives the standard root locus form under K variation:

        −1 = K(s + 4)/(s(s + 3))

This happens to give the same root locus as in the previous example for sinh α ≥ 0.

8.5 NYQUIST DIAGRAMS


First we consider the time-invariant single input-single output system whose block
diagram is shown in Fig. 8-9. The standard Nyquist procedure is to plot G(s)H(s), where
s varies along the Nyquist path enclosing the right half s-plane. To do this, we need
polar plots of G(jω)H(jω), where ω varies from −∞ to +∞.

Fig. 8-9

Using standard procedures, we break the closed loop between e and v. Then setting
this up in state space form gives

        dx/dt = Ax + be        v = cᵀx + de                (8.2)

Then G(jω)H(jω) = cᵀ(jωI − A)⁻¹b + d. Usually a choice of state variable x can be found
such that the gain or parameter variation K of interest can be incorporated into the c vector
only. Digital computer computation of (jωI − A)⁻¹b as ω varies can be most easily done by
iterative techniques, such as Gauss-Seidel. Each succeeding evaluation of (jωᵢ₊₁I − A)⁻¹b
can be started with the initial condition (jωᵢI − A)⁻¹b, which usually gives fast convergence.

Example 8.8.
Given the system shown in Fig. 8-10. The state space form of this is, in phase-variable canonical
form for the transfer function from e to v,

        dx/dt = (0   1) x + (0) e        v = (K  0)x
                (0  −1)     (1)

Then cᵀ(jωI − A)⁻¹b = K/(jω(jω + 1)), giving the polar plot of Fig. 8-11.

Fig. 8-10        Fig. 8-11

About the only advantage of this over standard techniques is that it is easily mechanized
for a computer program. For multiple-loop or multiple-input systems, matrix block diagram
manipulations give such a computer routine even more flexibility.
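Such a routine reduces to one linear solve per frequency point. A minimal sketch in Python
(using numpy, with the Example 8.8 data; direct solves rather than the Gauss-Seidel
iteration suggested above):

    # Polar plot data: G(jw)H(jw) = c^T (jw I - A)^{-1} b + d over a frequency grid
    import numpy as np

    K = 2.0
    A = np.array([[0., 1.], [0., -1.]])
    b = np.array([0., 1.])
    c = np.array([K, 0.])
    d = 0.0

    for w in np.logspace(-1, 1, 5):
        G = c @ np.linalg.solve(1j*w*np.eye(2) - A, b) + d
        print(f"w = {w:6.3f}: GH = {G.real:+.4f}{G.imag:+.4f}j")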

Example 8.9.
Given the 2 input - 2 output system with block diagram shown in Fig. 8-12.
Then dx/dt = Ax + b₁e₁ + b₂e₂ and v₁ = c₁ᵀx and v₂ = c₂ᵀx. The loop connecting v₁ and e₁ can
be closed, so that e₁ = v₁ = c₁ᵀx. Then

        ℒ{v₂} = c₂ᵀ(sI − A − b₁c₁ᵀ)⁻¹b₂ ℒ{e₂}

so that we can ask the computer to give us the polar plot of c₂ᵀ(jωI − A − b₁c₁ᵀ)⁻¹b₂.

Fig. 8-12

8.6 STATE FEEDBACK POLE PLACEMENT


Here we discuss a way to feed back the state vector to shape the closed loop transition
matrix to correspond to that of any desired nth-order scalar linear differential equation.
For time-invariant systems in particular, the closed loop poles can be placed where desired.
This is why the method is called pole placement, though applicable to general time-varying
systems. To "place the poles," the totally controllable part of the state equation is trans-
formed via x(t) = T(t)z(t) to phase-variable canonical form (7.12) as repeated here:

        dz     (0      1      0      ⋯  0    )     (0)
        ——  =  (0      0      1      ⋯  0    ) z + (⋮) u                (7.12)
        dt     (⋮                            )     (0)
               (0      0      0      ⋯  1    )
               (a₁(t)  a₂(t)  a₃(t)  ⋯  aₙ(t))     (1)

Now the scalar control u is constructed by feeding back a linear combination of the z state
variables as u = kᵀ(t)z, where each kᵢ(t) = −αᵢ(t) − aᵢ(t):

        u = [−α₁(t) − a₁(t)]z₁ + [−α₂(t) − a₂(t)]z₂ + ⋯ + [−αₙ(t) − aₙ(t)]zₙ

Each αᵢ(t) is a time function to be chosen. This gives the closed loop system

        dz     (0        1        0        ⋯  0     )
        ——  =  (0        0        1        ⋯  0     ) z
        dt     (⋮                                   )
               (0        0        0        ⋯  1     )
               (−α₁(t)  −α₂(t)  −α₃(t)  ⋯  −αₙ(t))

Then z₁(t) obeys z₁⁽ⁿ⁾ + αₙ(t)z₁⁽ⁿ⁻¹⁾ + ⋯ + α₂(t)ż₁ + α₁(t)z₁ = 0, and each zᵢ₊₁(t) for
i = 1, 2, …, n − 1 is the ith derivative of z₁(t). Since the αᵢ(t) are to be chosen, the corre-
sponding closed loop transition matrix Φ_z(t, t₀) can be shaped accordingly. Note, however,
that x(t) = T(t)Φ_z(t, t₀)z₀, so that shaping of the transition matrix Φ_z(t, t₀) must be done
keeping in mind the effect of T(t).
This minor complication disappears when dealing with time-invariant systems. Then
T(t) is constant, and furthermore each αᵢ(t) is constant. In this case the time behavior of
x(t) and z(t) is essentially the same, in that both Aˣ and Aᶻ have the same characteristic
equation λⁿ + αₙλⁿ⁻¹ + ⋯ + α₂λ + α₁ = 0. For the closed loop system to have poles at the
desired values γ₁, γ₂, …, γₙ, comparison of coefficients of the λ's in (λ − γ₁)(λ − γ₂)⋯(λ − γₙ) = 0
determines the desired value of each αᵢ.
Example 8.10.
Given the system

        dx/dt = (0   1  0)     (0)
                (0   0  1) x + (0) u
                (2  −3  t)     (1)

It is desired to have a time-invariant closed loop system with poles at 0, −1 and −2. Then the desired
system will have a characteristic equation λ³ + 3λ² + 2λ = 0. Therefore we choose u = (−2  1  −(t + 3))x,
so kᵀ = (−2  1  −(t + 3)).
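The pole placement can be checked at any frozen instant, since the closed loop matrix is then
constant. A minimal sketch in Python (using numpy, with the system of this example):

    # Closed loop poles of Example 8.10 with u = (-2, 1, -(t+3)) x, at frozen t
    import numpy as np

    t = 4.7                                  # any frozen instant
    A = np.array([[0., 1., 0.],
                  [0., 0., 1.],
                  [2., -3., t]])
    b = np.array([[0.], [0.], [1.]])
    k = np.array([[-2., 1., -(t + 3.)]])
    Acl = A + b @ k                          # bottom row becomes (0, -2, -3)
    print(np.sort(np.linalg.eigvals(Acl)))   # approximately [-2, -1, 0]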

For multiple-input systems, the system is transformed to the form of equation (7.13),
except that each subsystem dwᵢ/dt = Aᵢᵢ^wwᵢ + bᵢ^wuᵢ must be in phase-variable canonical form
(7.12) and, for i ≠ j, the Aᵢⱼ^w(t) must be all zeros except for the bottom row. Procedures
similar to those used in Chapter 7 can usually attain this form, although general conditions
are not presently available. If this form can be attained, each control is chosen as
uᵢ = kᵢᵀ(t)wᵢ − eₙᵀAᵢⱼ^wwⱼ for j ≠ i, to "place the poles" of Aᵢᵢ^w(t) and to subtract off the
coupling terms.

Why bother to transform to canonical form when trial and error can determine k?
Example 8.11.
Place the poles of the system of Problem 7.8 at ρ₁, ρ₂ and ρ₃. We calculate det(λI − A − BK),
where K is the 2 × 3 matrix of feedback gains kᵢⱼ. This is

        λ³ − (k₁₃ + k₂₁ + k₂₂ + k₂₃ + 5)λ²
           + [k₁₁ + 2k₁₃ + 3k₂₁ + 4k₂₂ + 5k₂₃ + k₁₃(k₂₁ + k₂₂) − k₂₃(k₁₁ + k₁₂) + 8]λ
           − k₁₁ − k₁₃ − 2k₂₁ − 4k₂₂ − 6k₂₃ − k₁₁(k₂₂ + k₂₃) + k₁₂(k₂₁ + k₂₃) + k₁₃(k₂₁ − k₂₂) − 4

It would take much trial and error to choose the k's to match

        (λ − ρ₁)(λ − ρ₂)(λ − ρ₃) = λ³ − (ρ₁ + ρ₂ + ρ₃)λ² + (ρ₁ρ₂ + ρ₂ρ₃ + ρ₁ρ₃)λ − ρ₁ρ₂ρ₃
Trial and error is usually no good, because the algebra is nonlinear and increases greatly
with the order of the system. Also, Theorem 7.7 tells us when it is possible to "place the
poles", namely when Q(t) has rank n everywhere. Transformation to canonical form seems
the best method, as it can be programmed on a computer.

State feedback pole placement has a number of possible defects: (1) The solution appears
after transformation to canonical form, with no opportunity for obtaining an engineering
feeling for the system. (2) The compensation is in the feedback loop, and experience has
shown that cascade compensation is usually better. (3) All the state variables must be
available for measurement. (4) The closed loop system may be quite sensitive to small
variations in plant parameters. Despite these defects state feedback pole placement may
lead to a very good system. Furthermore, it can be used for very high-order and/or time-
varying systems for which any compensation may be quite difficult to find. Perhaps the
best approach is to try it and then test the system, especially for sensitivity.
Example 8.12.
Suppose that the system of Example 8.10 had t - e instead of t in the lower right hand corner of the
A(t) matrix, where e is a small positive constant. Then the closed loop system has a characteristic equation
AS + 3AZ + 2A - e = 0, which has an unstable root. Therefore this system is extremely sensitive.

8.7 OBSERVER SYSTEMS


Often we need to know the state of a system, and we can measure only the output of the
system. There are many practical situations in which knowledge of the state vector is
required, but only a linear combination of its elements is known. Knowledge of the state,
not the output, determines the future output if the future input is known. Conversely
knowledge of the present state and its derivative can be used in conjunction with the state
equation to determine the present input. Furthermore, if the state can be reconstructed
from the output, state feedback pole placement could be used in a system in which only the
output is available for measurement.
In a noise-free environment, n observable states can be reconstructed by differentiating
a single output n-l times (see Section 10.6). Ina noisy environment, the optimal recon-
struction of the state from the output of a linear system is given by the Kalman-Bucy filter.
A discussion of this is beyond the scope of this book. In this section, we discuss an observer
system that can be used in a noisy environment because it does not contain differentiators.
However, in general it does not reconstruct the state in an optimal manner.
174 RELATIONS WITH· CLASSICAL TECHNIQUES [CHAP. 8

To reconstruct all the states at all times, we assume the physical system to be observed
is totally observable. For simplicity, at first only single-output systems will be considered.
We wish to estimate the state of dx/dt = A(t)x + B(t)u, where the output y = et(t)x. The
state, as usual, is denoted x(t), and here we denote the estimate of the state as x(t).
First, consider an observer system of dimension n. The observer system is constructed as
dx/dt == A(t)x + k(t)[et(t)x - y] + B(t)u (8.3)
where k(t) is an n-vector to be chosen. Then the observer system can be incorporated into
the flow diagram as shown in Fig. 8-13.

~ Physical system ------+-0\.----------- Observer system -------..1.\


Fig. 8-13

Since the initial state x(to.), "where to is the time the observer system is started, is not
known, we choose x(to) == O. Then we can investigate the conditions under which x(t) tends
to x(t). Define the error e(t) == x(t) - (t). Thenx
de/dt = dx/dt - di/dt = [A(t) + k(t)ct(t)]e (8.4)

Similar to the method of the previous Section 8.6, k(t) can be chosen to "place the poles"
of the error equation (8 ..4-). By duality, the closed loop transition matrix 1j"(t, to) of the
adjoint equation dp/dt = -At(t)p - c(t)v is shaped using v == kt(t)p. Then. the transition
matrix fIJ(t, to) of equation (8 ..4.) is found as fIJ(t, to) = iJt(to' t), from equation (5.64). For
time-invariant systems, it is simpler to consider dw/dt = Atw + cv rather than the
adjoint. This is because the matrix At + ckt and the matrix A + ke t have the same
eigenvalues. This is easily proved by noting that if A is an eigenvalue of At + ck t , its com-
plex conj ugate A* is also. Then A* satisfies the characteristic equation det (A *1 - At - ckt) = O.
Taking the complex conjugate of this equation and realizing the determinant is invariant
under matrix transposition completes the proof. Hence the poles of equations (8.3) and (8.4)
can be placed where desired. Consequently the error e(t) can be made to decay "as quickly
as desired, and the state of the observer system tends to the state of the physical system.
However, as is indicated in Problem 8.3, we do not want to make the error tend to zero
too quickly in a practical system.
Example 8.13.
Given the physical system
dx y = (1 l)x
dt
Construct an observer system such that the error decays with poles at -:-:-2 and -3.

First we transform the hypothetical sy~em


dw
(it =
CHAP. 8] RELATIONS WITH CLASSICAL TECHNIQUES 175

to the phase variable canonical form


: = (_~ _~)Z + G)v
where w = (: ~) z, obtained by Theorem 7.7. We desire the closed loop system to have the charac~
teristic equation 0 = (x. + 2) (X. + 3) = x.2 + 5X. + 6. Therefore choose v = (-4 -l)z = (-1 O)w. Then
k = (-1 O)t and the observer system is constructed as

(-3 0)
2 -2 x +
A (1)
0 Y +0 u
(-1)
N ow we consider an observer system of dimension less than n. In the case of a single~
output system we only need to estimate n - 1 elements of the state vector because the known
output and the n -1 estimated elements will usually give an estimate of the nth element of
the state vector. In general for a system having k independent outputs we shall construct
an observer system of dimension n - k.
We choose P(t) to be certain n - k differentiable rows such that the n x n matrix
P(t)
C(t)
)-1 = (H(t) I G(t)) exists at all times where H has n - k columns. The estimate x
(
is constructed as
x(t) = H(t)w + G(t)y or, equivalently, (P(t) ) x=( w) (8.5)
C(t) y
Analogous to equation (8.3), we require
P d~/dt = P[Ax + L(Cx - y) + Bu}
where L(t) is an n x k matrix to be found. (It turns out we only need to find PL, not L.)
This is equivalent to constructing the following system to generate w, from equation (8.5),
dwldt = (dPldt) x+ Pdx/dt = Fw - PLy + PBu (8.6)

where F is determined from FP = dP/dt+PA+ PLC. Then (F1-PLl( ~) = dP/dt +PA


so that F and PL are determined from (F j-PL) = (dP/dt + PA)(H IG). From (8.5) and
(8.6) the error e = P(x -,x) = Px - w obeys the equation de/dt = Fe.
The flow diagram is then as shown in Fig. 8-14.

~ Physical system - - - - - - - - ......


1.. ......-------
Fig.8~14
Observer system -~-------fOol·1
Example 8.14.
Given the system of Example 8.13, construct a first-order observer system.

Since c = (1 1), choose P = (PI pz) with PI # P2" Then

(:r 1 (1
PI - P2 -1
-P2)
PI
(H I G)
176 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

Therefore
A
X = 1 (1)
Pi - P2 -1
w + 1 (0)
Pi - P2 1
Y

and
F == (dP/dt + PA)H = (P, P.l (-2 1)( 1) =
PI - P2 2 -2 -1
-3PI + 4P2
PI - P2

PL == -(dP/dt + PA)G = ~(2 1)(-P2) = pi - 2p~


PI - P2 2 -2 PI Pi - P2

so that (PI - P2)dw/dt = (-BPI + 4p2)W - (pi - 2p~)y - PI(Pl- P2)U is the first~order observer. A bad
choice of PI/P2 with 1 < PI/P2 < 4/3 gives an unstable observer and makes the error blow up.

The question is, can we place the poles of F by proper selection of P in a manner similar
to that of the n-.dimensional observer? One method is to use trial and error, which is some-
times more rapid for low-order, time-invariant systems. However, to show that the poles
of F can be placed arbitrarily, we use the transformation x = Tz to obtain the canonical
form

:t (~:) = (~~: ..~~............~)(~:) + T-lBu


Zt All A!2 • • • An Zl

y (~~ ~~ .. ........... ~ ) Z

o 0 ." ct
where the subsystem dzi/dt = Aiizi + Biu and Yi = ct Zi is in the dual phase variable canon-
ical form

i = 1,2, ... , l

(0 o l)Zi (8.7)

in which B, is defined from T-I B = (~:) and n, is the dimension of the ith subsystem.

As per the remarks following Example 8.10, the conditions under which this form can
always be obtained are not known at present for the time-varying case, and. an algorithm
is not available for the time-invariant multiple-output case.

However, assuming the subsystem (8.7) can be obtained, we construct" the observer
equation (8.6) for the subsystem (8.7) by the choice of Pi = (I Ik i ) where k(t) is an (14 -1)-
vector that will set the poles of the observer. We assume ki(t) is differentiable. Then

(~) = (*) and


(::r = (-H-T-) (8.8)
CHAP; 8] RELATIONS WITH ·CLASSICAL TECHNIQUES 177

We find F, = (dP'/ dt + p,AiilH. = [(0 I dkd dt l + (I Ik'lAii1( +) from which

0 0 0 ki1(t)
1 0 0 ki2(t)
Fi = 0 1 0 ki3(t)
.........................
o 0 1 k i• ni - (t) 1

By matching ,coefficients of the "characteristic equation" with "desired pole positions,"


we make- the error decay as quickly as desired.
Also, we find PiLi as PiLi = -(dPJdt + PiAii)Gi .

Then
'_
A

X =
- '(A)
T.z = T i,
A
Zl

where
A
Zi=

Example 8.15.
Again, consider the system of Example 8.13.

~~ - (-~ _~) x + (- ~ ) u y = (1 l)x

To construct an observer system with a pole at -2, use the transformation x = Tz where (Tt)-l =

(: ~). Then equations (8.7) are

dz
dt (0-2)
1 -4
z + (-4) u
-1
y - (0 1)z

The estimate ~ according to equation (8.8) is now obtained as

A
Z
(-H+)(;)
where kl = -2 sets the pole of the observer at -2. Then F = -2 and PL = -2 so that the observer
system is dw/dt = -2w + 2y - 2u. Therefore

T~
A
X = =
and the error PT-1x - w = 2Xl + x2 - w decays with a time constant of 1/2. This gives the block diagram
of Fig. 8-15.

u 8+4 Y
82 + 48 + 2 t----~

1
8+2

Fig. 8-15
178 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

8.8 ALGEBRAIC SEPARATION


In this section we use the observer system of Section 8.7 to generate a feedback control
to place the closed loop poles where desired, as discussed in Section 8.6. Specifically, we
consider the physical open loop system
dx/dt = A{t)x + B(t)u + J(t)d
y = C(t)x (8.9)

with an observer system (see equation (8.6)),


dw/dt = F(t)w -P(t)L(t)y + P(t)B(t)u + P(t)J(t)d
x= H(t)w + G(t)y (8.10)

and a feedback control u(t) that has been formed to place the poles of the closed loop sys-
tem as
u = W(t)x (8.11)

Then the closed loop system block diagram is as in Fig. 8-16.

G J=:==:.J
u

~ Ph:iCa! system - - -.....·1-.--~-- Observer ----------+-1..-


. . Feedback ~
Fig. 8-16

Theorem 8.3: (Algebraic Separation). For the system (8.9) with observer (8.10) and feed-
back control (8.11), the characteristic equation of the closed loop system can
be factored as det (AI - A - BW) det (AI - F).

This means we can set the poles of the closed loop system by choosing W using the pole place-
ment techniques of Section 8.6 .and by choosing P using the techniques of Section 8.7.

Proof: The equations governing the closed loop system are obtained by substituting
equation (8.11) into equations (8.9) and (8.10):

:t(~) (p:W~~~~ic F::B~)(~) + (~J)d


Changing variables to e = Px - wand using HP + GC = I and FP = dP / dt + PA + PLC
gives
CHAP. 8] RELATIONS WITH CLASSICAL TECHNIQUES 179

Note that the bottom equation de/dt = Fe generates an input -WHe to the closed loop
of observer system dx/dt = (A + BW)x. Use of Problem 3.5 then shows the characteristic
equation factors as hypothesized. Furthermore, the observer dynamics are in general
observable at the output (through coupling with x) but are uncontrollable by d and hence
cancel out of the closed loop transfer function.

Example 8.16.
For the system of Example 8.13, construct a one-dimensional observer system with a pole at -2 to
generate a feedback that places both the system poles at -1.

We employ the algebraic separation theorem to separately consider the system pole placement and
the observer pole placement. To place the pole of

dx
dt y == (1 l)x

using the techniques of Section 8.6 we would like


u == (-2 3/2)x
which gives closed loop poles at -1. However, we cannot use x to form iL, but must use l:: as found
from the observer system with a pole at -2, which was constructed in Example 8.15.

dw/dt = -2w + 2y - 2(u + d)


We then form the control as
u = (-2 3/2)~

Thus the closed loop system is as in Fig. 8-17.

y
8 +4 I - - -- _ -_ _
82 +48+2

1 +
8+2

Fig.S~17

Note that the control is still essentiaIIy in the feedback loop and that no reasons were given as to
why plant poles at -1 and observer pole at -2 were selected. However, the procedure works for high-
order, multiple input- multiple output, time-varying systems.

8.9 SENSITIVITY, NOISE REJECTION, AND NONLINEAR EFFECTS


Three major reasons for the use of feedback control, as opposed to open loop control,
are (1) to reduce the sensitivity of the response to parameter variations, (2) to reduce the
effect of noise disturbances, and (3) to make the response of nonlinear elements more linear.
A proposed design of a feedback system should be evaluated for sensitivity, noi_f?e tejection,
and the effect of nonlinearities. Certainly any system designed using the pole placement
techniques of Sections 8.6, 8.7 and 8.8 must be evaluated in these respects because of the
cookbook nature of pole placement.
180 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP; 8

In this section we consider these topics in a very cursory manner" mainly to show the
relationship with controllability and observability. Consequently we consider only: small
percentage changes in parameter variations, small noise compared, withthe'signal, -and
nonlinearities that are almost linear. Under these assumptions we will show how each
effect produces an unwanted input into a linear system and then how to minimize this un-
wanted input.
First we consider the effect of parameter variations. Let the subscriptN refer to the
nominal values and the subscript a refer to actual values. Then the nominal system',(the
system with zero parameter variations) can be represented by
dXN/dt = AN(t)xN + B(t)u
YN = C(t)XN + D(t)u (8.12)
These equations determine XN(t) and YN(t), so these quantities are assumed known. If some
of the elements of AN drift to some actual Aa (keeping B, C and D fixed only for simplicity),
then
dXa/dt Aa(t)Xa + B(t)u
Ya = C(t)Xa + D(t)u (8.13)
Then let ax = Xa - XN, SA = Aa - AN, 8y = Ya - YN, subtract
equations (8.12) from (8.13),
and neglect the product of small quantities BA Bx. Warning: That SA 8x is truly small
at all times must be verified by simulation. If this is so, then
d(8x)/dt == AN(t) 8x + BA(t) XN
8y == C(t) 8x (8.14)
In these equations AN(t), C(t) and XN(t) are known and 8A(t), theyariation of the parameters
of the A(t) matrix, is the input to drive the unwanted signal 8x.
For the case of noise disturbances d(t), the nominal system remains equations (8.i2) but
the actual system is
dKa/dt = AN(t)Xa + B(t)u + J(t)d
Ya = C(t)Xa + D(t)u + K(t)d (8.15)

Then we subtract equations (8.12) from (8.15) to obtain


_d(8x)/dt = AN(t) 8x + J(t)d
By = eft) 8x+ K(t)d _(8.16)

Here the noise d(t) drives the unwanted signal ax.


Finally, to show how a nonlinearity produces an unwanted signal, consider a scalar input
XN into a nonlinearity as shown in Fig. 8. .18. This can be redrawn into a linear system
with a large output and a nonlinear system with a small output (Fig. 8-19).

XN dN d

d * +
+

Fig. 8-18
+ Fig. 8. . 19
3d
CHAP. 8J- RELATIONS WITH -CLASSICAL TECHNIQUES 181

Here the unwanted signal is 8d which is generated by the nominal XN. This can be
incorporated into a block diagram containing linear elements, and the effect of the non-
linearity can be evaluated in a manner similar to that used in deriving equations (8.16).
d(8x)/dt = AN(t) ax + j(t)8d
8y = C(t) 8x + k(t) Sd (8.17)
Now observability and controllability theory can be applied to equations (8.14), (8.16)
and (8.17). We conclude that, if possible, we will choose C(t), AN(t) and the corresponding
input matrices B(t), D(t) or J(t), K(t) such that the unwanted signal is unobservable with
respect to the output 8y(t), or at least the elements of the state vector associated with
the dominant poles are uncontrollable with respect to the unwanted signal. If this is im-
possible, the system gain with respect to the unwanted signal should be made as low as
possible.

Example 8.17.
Consider the system d:x/dt = (-~ _ ~ ) x + G) u. The nominal value of the parameter ~ is zero
and the nominal input u{t) is a· unit step function.

dxNldt = (-~ -~) XN + G)


If XN{O) = 0,
equation (8.14),
then xN{t) = (~= :=:t). The effect of small variations in a can be evaluated from

d(ax)ldt = (-~ _~) ax + (~ ~)G = :=:')


Simplifying,
d(8x)/dt (-~ _~) Sx + G) 0(1- e-')

We ean eliminate the effects of a variation upon the output if c is chosen such that the output observability
matrix (ctb ctAb) == O. This results in a choice ct = (0 y) where y is any number.

Furthermore, all the analysis and synthesis techniques developed in this chapter can be
-used to analyze and design systems to reduce sensitivity and nonlinear effects and to reject
noise. This may be done by using the error constant matrix table, root locus, Nyquist,
and/or pole placement techniques on the equations (8.14), (8.16) and (8.17).
182 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

Solved Problems

8.1. For the system of Fig. 8-4, let ~~ = (~


t+1
_~)x + ~ 2)e
t+1
(t t+1
for t ~ 0,
with y = (1 l)x. Find the class of inputs that give a constant steady state error.
The closed loop system with zero input (d(t) = 0) is

dx
dt
[Ci1 _~) -. (t 12)(1 l)]X
t+1 t+1

which has a double pole at -1. Therefore the zero output of the closed loop system is asymp-
totically stable and the remarks following Theorem 8.1 are valid. The class of all d(t) that the
system can follow with lim e(t) = 0 is given by
t--+ OQ

dw
dt
Cl1 t+1
1t ) w
+ (t~.2).
t+1
g

with d (1 l)w + g. The transition matrix for this system is

T
1
+1
(t +1 - eT -
eT - t
t
t(1- eT - t
1 + te'T-t
»)
Then
d(t)

(Notice this system is unobservable.) For constant error let get) == K, an arbitrary constant. Then

K(t+1)
fto
t '7"+2
(r+1)2dT K(t + 1HIn (t + 1) - In (to + 1) + (to + 1)-1 - (t + 1)-1]

and the system follows with error K all functions that are asymptotic to this. Since the system is
reasonably well behaved, we can assume that the system will follow all functions going to infinity
slower than Ie(t + 1) In (t + 1).

8.2. Given the multiple~input system

:t (::) Z3
(-~ -~ ~)(::) + (~ -~)(~:)
0 0 -3 Za 1 0

Place the poles of the closed loop system at -4, -5 and -6.
Transform the system to phase variable canonical form using the results of Problem 7.21.

(-~ 1)("' ~)z


1 0
x -2 -8 0 le2

4 9 0 0 le3

Then
1/2)
z
(
leI
0
0
-1
0
-1
K2
0
~
le - 1
3
)(-! 1
5/2
:--4
3/2 1/2
-1 x
CHAP. 8] RELATIONS WITH CLASSICAL TECHNIQUES 183

and

dx
dt ( 0 1 0) (1 1 1)(Q'/(l
o 0
-6 -11
1
-6
x + -1 -2 -3
1 4 9
1C2
lCa
Kl)( )
-K2
0
::

To obtain phase variable canonical form for ul' we set

which gives aKl = 1/2, "'2 = -1, lI:a::::: 1/2.


For a # 0, we have the phase variable canonical system

dx
dt =
-11
~ ~) -6
x + (~ -~ +~ 1/2Q'
1
~;::)(::)
4
To have the closed loop poles at -4, -5 and -6 we desire a charateristic polynomial AS + 15A.2 + 74A + 120.
Therefore we choose ul =
-114xl - 63x 2 - 9xa and U2 OXI + OX2 + OXa. =
In the case a = 0, the state Zt is uncontrollable with respect to Ul and, from Theorem 7.7,
cannot be put into phase variable canonical form with respect to u 1 alone. Hence we must use
U2 to control Zl. and ean assure a pole at -4 by choosing u2 = -3z 1 • Then we have the single-input
system

can be transformed to phase variable canonical form, and ul:;:::: -12z2 + 6zs'
The above procedure can be generalized to give a means of obtaining multiple-input pole
placement.

8.3. Given the system d 2 y/dt2 = O. Construct the observer system such that it has a double
pole at -yo Then find the error e'= x - x as a function of y, if the output really is
y(t) + 'J](t), where 'Y)(t) is noise.
The observer system has the form

d~
dt
=
(~ 1)A -+ (kl)z
o X k A
[(1 O)x - y - 1]]

The characteristic equation of the closed loop system is A(A - k 1) - k2 = O. A double pole at -y has
the characteristic equation A2 + 2YA + y2 = O. Hence set kl = -2y and k 2 :;:::: _y2. Then the
equation for the error is

Note the noise drives the error and prevents it from reaching zero.
The transfer function is found from
.e{1]} (2YS+ y2)
.e {(::)} 82 + 2ys + y2 y 2s

As y ~ 0::>, then el ~ 1] and ez ~ drJldt. If 1](t) = 170 cos IUt, then 1U110. the amplitude of d11/dt,
may be large even though '110 is small, because the noise may be of very high frequency. We con-
clude that it is not a good idea to set the observer system gains too high, so that the observer system
can filter out some of the noise.
184 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

8.4. Given the discrete-time system

x(n+ 1) = (_~ !) x(n) + (n urn), y(n) = (1 O)x(n)

Design a feedback controller given dominant closed loop poles in the z plane at
(1 ± 1)/2.
We shall construct an observer to generate the estimate of the state from which we can con-
struct the desired control. The desired closed loop characteristic· equation is ",2 - X+ 1/2 = o.
Hence we choose u = 3~/2 - 2~2. To generate ~l and ~2 we choose a first-order observer with a
pole at 0.05 so that it will hardly affect the response due to the dominant poles and yet will be large
enough to filter high-frequency noise. The transformation of variables

x= (~ ~)z gives z(n+1) = (~-~)z(n) + (~)u(n), yen} (0 l)z(n)

Use of equation (8.8) gives

P = (1 -0.05) and

Then the observer is

""x -_ (-13 o1)""z where

and wen) is obtained from wen + 1) = O.05w(n) - 2.1525y(n) + u(n}.

8.5. Find the sensitivity of the poles of the system

~~ (-1; a' _~)x


to changes in the parameter a where lal is small.
We denote the actual system as dXa/dt = Aaxa and the nominal system as dXN/dt = ANxN,
where AN == Aa when a = O. In general, AN == Aa - SA where !lSAIl- is small since Ja\ is small.
We assume AN has distinct eigenvalues xf so that we can always find a corresponding eigenvector
Wi from ANWi == xf
Wi. Denote the eigenvectors of A~ as vt, so that A~ vi = vt . Note the xf
eigenvalues Xf are the same for AN and Air.
Taking the transpose gives
-- .I\i
. Nvti (8.18)

which we shall need later. Next we let the actual eigenvalues Xf = Xf + SXL. Substituting this
into the eigenvalue equation for the actual Aa gives

(AN + 8A)(wi + OWi) = (;\.f + SXi ) (Wi + 8Wi) (8.19)

Subtracting ANWi == Xf Wi and multiplying by vr gives

vt SA Wi + vT AN SWi = I~n\i v1 Wi + xf v! OWi + v1 (OXi I - SA) 8wi

Neglecting the last qua~tity on the right since it is of second order and using equation (8.18) then
leads to
v! 8Awi (8.20)
VTWi

Therefore for the particular system in question,

Ad. = (
-1 + a2 ct.) -1 0)
( 2 -2 +
(ai30 0ct.)
2 -2
CHAP. 8] RELATIONS WITH CLASSICAL TECHNIQUES 185

Then for AN we find

-1
(~) V1 = (~)
-2 W2
(~) V2 (-~)
Using equation (8.20),

).(21)
2
-'1 + (1 0) ( a0 aO·

= -2 + (-2 1) (~2 ~)(~) -2 -,-. 2a::

For larger values of a note we can use root locus under parameter variation to obtain exact values.
However, the root locus is difficult computationally for very high~order systems, whereas the
procedure just described has been applied to a 51st-order system.

8.6. Given the scalar nonlinear system of Fig. 8-20 with input a sin t. Should K> 1
be increased or decreased to minimize the effect of the nonlinearity?

a sin t y a sin t
+ +

Fig. 8·20 Fig. 8·21

The nominal linear system is shown in Fig. 8~21. The steady state value of eN = (a:: sin t)/(K - 1).
We approximate this as the input to the unwanted signal d 2 (Sy)/dt2 == e~, which gives the steady
state value of ay::;::: a3 (27 sin t - sin 3t)/[36(K -1)3]. This approximation that d 2 (oy)/dt2 = e~
instead of e!
must be verified by simulation. It turns out this a good approximation for lal ~ 1,
and we can conclude that for lal <" 1 we increase K to make ay/y become smaller.

8.7. A simplified, normalized representation of the control system of a solid-core nuclear


rocket engine is shown in Fig. 8-22 where Bn, ST, 8P, 8Pcr and BV are the changes from
nominal in neutron density, core temperature, core hydrogen preSSUFe, control rod
setting, and turbine power control valve setting, respectively. Also, G1(s), G2 (s) and
G3 (s) are scalar transfer functions and in the compensation K1, K2 , KS and K4 are scalar
constants so that the control is proportional plus integral. Find a simple means of
improving response.
liP

+ +

8V

Fig. 8·22
186 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

The main point is to realize this "multiple loop" system is really a multiple input-multiple
output system. The control system shown has constrained SPcr to be an inner loop. A means
of improving response is to rewrite the system in matrix form as shown in Fig. 8-23. This opens
up the possibility of "cross coupling" the feedback, such as having BPcr depend on 8P as well as
ST. Furthermore it is evident that the system is of type 1 and can follow a step input with zero
steady state error.

>

Fig. 8-23

8.8. Compensate the system (S+Pl)-I(S+P Z)-1 to have poles at -"'1 and -"'2' with an
observer pole at -7ro, by using the algebraic separation theorem, and discuss the effect
of noise 7J at the output.
The state space representation of the plant is

! (::) (-:tP. _p,l_ p.)(:~) + G}' + G)d


y = (1 O)x
The feedback compensation can be found immediately as
u = (PIPZ - ?Tl'ITz P1 + pz -?T1 - ?Tz) ~
To construct the observer system, let P = (al (lz). Then

(F!-PL)( ~) (-?To j-PL) (~1 ~z)


from which

Also PB = PJ = £lZ' Therefore the estimator dynamics are

!(:) = -?To ( : ) + [?TO(P1 + pz - ?To) - P1PZ]Y + u + d

To construct the estimate,

(~:) = (~r(:) = ( ~1
£lZ Vo - : , - p.)(:)
The flow diagram of the closed loop system is shown in Fig. 8-24.

A
Xt =Y

Fig. 8-24
CHAP. 8] RELATIONS WITH CLASSICAL TECHNIQUES 187

Note the noise 'tJ is fed through a first-order system and gain elements to form ?.t. If the noise
level is high, it is better to use a second-order observer or a Kalman filter because then the noise
goes through no gain elements directly to the control, but instead is processed through first- and
second-order systems. If there is no noise whatsoever, flow diagram manipulations can be used to
show the closed loop system is equivalent to one compensated by a lead network, i.e. the above flow
diagram with 'I] = 0 can be rearranged as in Fig. 8-25.

d 1 y
+

Fig. 8-25

Note the observer dynamics have cancelled out, and the closed loop system remains second-
order. This corresponds to the conclusion the observer dynamics are uncontrollable by d. However
any initial condition in the observer will produce an effect on y, as can be seen from the first flow
diagram.

Supplementary Problems
8.9. Given the matrix block diagram of Fig. 8-26. Show that this reduces to Fig. 8-27 when the
indicated inverse exists.

d y

+
y
=d====::;:j1 G(I+HG)-l ;>

Fig. 8-26 Fig. 8~27

8.10. Given the matrix block diagram of Fig. 8-28. Reduce the block diagram to obtain HI isolated in a
single feedback loop.

Fig. 8-28
188 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

8.11. Determine whether the scalar feedback system of Fig.


y
8-29 in which yet) is related to e(t) as d---+I
+
(~ _4~-1)X (t~2) e
dx :::::
+ t>O
dt
y = (1 O)x
(i) can follow a step input 0: with zero error, (ii) can
follow a ramp input at with zero error. Fig. 8-29

8.12. Given the time-varying system d2y/dt 2 + aCt) dy/dt + f3(t)y ::::: u. Find a feedback control u such
that the closed loop system behaves like d 2z/dt 2 + 8(t) dz/dt + ¢(t)z ::::: O.

8.13. Given the system


dx
dt (1
o
1-2) -1 1 x y ::::: (0 1 -1)x
1 0-1
Construct a third-order observer system with poles at 0, -1 and -1.

8.14. Given the system


dx
dt == (1 1-2)
o
1
-1
0-1
1 x Y
(~
Construct a first-order observer system with a pole at -1.

8.15. Given the system


dx
y = (1 O)x
dt

Construct a first-order observer system with a pole at -3 and then find a feedback control
u =: kl~l + k 2 '052 that places both the closed loop system poles at -2.

8.16. Given the system


dx
dt
_1.
4
(3 1) x
1 3 y (1 1)x

Construct a first-order observer system with a pole at -4.

8.17. Given the system dx/dt = A(t)x + B(t)u where y = C(t)x + D(t)u. What is the form of the
observer system when D(t) ~ O? What is the algebraic separation theorem when D(t) ¥= O?

8.18. Given the system


dx
dt
= _!(3
4 1
and y = (0 1)x

where x(O) = o. As a measure of the sensitivity of the system, assuming jo:(t) 1 ~ 1, find ay(t)
as an integral involving aCt) and u(t). Hint: It is easiest to use Laplace transforms.

8.19. Given the nominal system dXN/dt = AN(t)xN + BN(t)UN and YN == CN(t)XN + DN(t)UN' What is the
equation for 8x corresponding to equation (8.1.0 when the parameters of the system become Aa(t).
Ba(t), Ca(t), Da(t) and ua(t)? .

1 + t + fet) U(t»)
8.20. Given the system dx/dt == ( f(t) t2 x. Choose u(t) such that at least one state
will be insensitive to small variations in f(t), given the nominal solution XN(t).

8.21. Given that the input d(t) is generated by the scalar system dw/dt = a(t)W and d(t) = [yet) + p(t)Jw,
under what conditions on pet) can the system dUJ/dt = c.:(t)x + f3(t)e with y = y(t)x follow any such
d(t) with zero error?
CHAP. 8] RELATIONS WITH CLASSICAL TECHNIQUES 189

8.22. Given the general nonunity feedback time-invariant system of Fig. 8-30. Under what conditions
can lim e(t) = O? Set F = I and H;::; I and derive Theorem 8.2 from these conditions.
t-co

+
.({y}
.e{d}--------.."l J...----.,/' .e{e}

Fig. 8-30

8.23. In the proof of Theorem 8.1, why is

d(t) = ft C(t)
to
IPA-BC Ct, '7") B(r) d(r) dT + get) + C(t) q.A-BC(t, to) w(to)

equivalent to dw/dt = [A(t) - B(t) C(t)]w + B(t)d and d = C(t)w + g?


8.24. Show that if a time-invariant system is of type N, then it is of type N - k for all integers k such
that 0 ~ k ~ N.

8.25. Given the system of Fig. 8-31 where the constant matrix K has been introduced as compensation.
Show that
(a) The type number of the system cannot change if K is nonsingular.
(b) The system is of type zero if K is singular.
d(t)

+
)t

-
-<
K -
...,.. H(s) ">

Fig. 8-31

8.26. Design a system that will follow d(t) = sin t with zero steady state error.

Answers to Supplementary Problems


8.10. This cannot be done unless the indicated inverse exists (Fig. 8-32). The matrices must be in the
order given.

d y

Fig. 8-32
190 RELATIONS WITH CLASSICAL TECHNIQUES [CHAP. 8

8.11. The closed loop system output y = (~)2 [(2 f - 1) Xl(t O) + (t - to) X 2 (t O) ] tends to zero, so
Theorem 8.1 applies. Then t 0

d(t) = w,(t O) + ~ [1 - C:)'] w2(t O) + Jot L[1 - (f)'] y(.) d,- '"" In t for t > some t,

so that the system can follow steps but not ramps with zero error.

8.12. For the state equation


dx
dt
corresponding to the given input-output relation,
'U, = (fJ - ¢ a - fJ)x
which shows the basic idea behind "pole placement,"
d2y/dt 2 + Q' dy/dt + fJy = (a - fJ) dy/dt + (fJ - ¢}y

8.13.
d~
dt G-~ -D~ G) + [(0 1 -l)~-YI
8.14. dw/dt -w + (0 1)y,
.A-
X

8.15.
1
d(t)--+~ 1 - - -__- - _ yet)
82 + 38 +2

Fig. 8-33

8.16. This cannot be done because the system is unobservable.

8.17. Subtract D(t)u from 11 entering the observer and this reduces to the given formulation .

.e{a:(t) XN(t)} p{x (t)} - _ .e{u(t)}


8.18. ~{oy(t)} = where
(s + 1){s + t) ..... N - 4(s + 1)(8 + t)

8.20. d(8x)/dt
1 + t.+ t(t) tt(t») Sx + (XN1) ot
( j{t) t2 XN2

dXNl/dt - (1 + t + t(t)XNl - U(t)xN2 )


Q(t) = (:;~ dXN2Idt - t(t)XN1 - t2XN2
Choose u(t) such that det Q(t) = 0 for all t.

8.21. pet) = e
- itto a(1)) d1) [
e(t) + ft y(t)e I
to T
tCl (1)) d1)
fJ(T) fJ(T) dT
]
+ y(t)/C where e(t) is any function such

that lim e(t) = 0 and K is an arbitrary constant.


t-+- 00

8.22. lim [F(s) - G(s)(I + G{s) H(8»-1]8~{d} = 0 and is analytic for s =::: O.
s-+-o

8.24. Use Theorem 8.2.

8.25. U 5e Theorem 8.2.

8.26. One answer is R(s) == 8(82 + 1)-1.


Chapter 9

Stability of Linear Systems


9.1 INTRODUCTION
Historically, the concept of stability has been of great importance to the system designer.
The concept of stability seems simple enough for linear time-invariant systems. However,
we shall find that its extension to nonlinear and/or time-varying systems is quite compli-
cated. For the unforced linear time-invariant system dx/dt = Ax, we are assured that
the solution x(t) = TeJCt-to)T-lxo does not blow up if all the eigenvalues of A are in the left
half of the complex plane.
Other than transformation to Jordan form, there are many direct tests for stability.
The Routh-Hurwitz criterion can be applied to the characteristic polynomial of A as a yes-
or-no test for the existence of poles in the right half plane. More useful techniques are
those of root locus, Nyquist, Bode, etc., which indicate the degree of stability in some sense.
These are still the best techniques for the analysis of low-order time-invariant systems, and
we have seen how to apply thenl to multiple input-multiple output systems in the previous
chapter. Now we wish to extend the idea of stability to time-varying systems, and to do
this we must first examine carefully what is meant by stability for this type of system.

9.2 DEFINITIONS OF STABILITY FOR ZERO-INPUT LINEAR SYSTEMS


A type of "stability" results if we can say the response is bounded.

Definition 9.1: For every Xo and every to, if there exists a constant K depending on Xo and
to such that Ilx(t)11 ~ K for all t ~ to, then the response x(t) is bounded.
Even this simple definition has difficulties. The troubie is that we must specify the response
to what. The trajectory of x(t) = </J(t; U(T),X(tO), to) depends implicitly on three quantities:
U(T), x(to) and to. By considering only U(T) = 0 in this section, we have eliminated one
difficulty for a while. But can the boundedness of the response depend on x(to)?
Example 9.1.
Consider the scalar zero-input nonlinear equation dx/dt = -x + x 2 with initial condition x(O) = Xo.
The solution to the linearized equation dx/dt = -x is obviously bounded for all xo' However, the solution
to the nonlinear equation is
x(t)

For all negative values of xo, this is well behaved. For values of Xo > 1, the denominator vanishes at a
time tl = In Xo - In (xo -1), so that lim x(t) = 00,
t-t 1

It can be concluded that boundedness depends upon the initial conditions for nonlinear
equations in general.

Theorem 9.1: The boundedness of the response x(t) of the linear system dx/dt = A(t)x
is independent of the initial condition Xo.

191
STABILITY OF LINEAR SYSTEMS [CHAP. 9
192

Proof: Ilx(t)11 = 11q,(t; 0, Xo, to)11 = 114J(t, to)xoll :-:=: 114t(t, to)llllxoll
Since Ilxoll is a constant, if Ilx(t)11 becomes unbounded as t ~ it is solely
00, due to cp(t, to).
Now we shall consider other, different types of stability. First, note that x = 0 is a
steady state solution (an equilibrium state) to the zero-input linear system dx/dt = A(t)x.
We shall define a region of state space by Ilxll < 10, and see if there exists a small region of
nonzero perturbations surrounding the equilibrium state x = 0 that give rise to a trajectory
which remains within Ilx!1 < E. If this is true for all E> 0, no matter how small, then we
have

Definition 9.2: The equilibrium state x = 0 of dx/dt = A(t)x is stable in the sense of
Liapunov (for short, stable Ls.L.) if for any to and every real number E> 0,
there is some 0 > 0, as small as we please, depending on to and E such
that if Ilxoll < 0 then Ilx(t)11 < E for all t > to.
This definition is also valid for nonlinear s!ystems with an equilibrium state x = O.
It is the most common definition of stability, and in the literature "stable Ls.L." is often
shortened to "stable". States that are not stable Ls.L. will be called unstable. Note stability
Ls.L. is a local condition, in that 0 can be as small as we please. Finally, since x = 0 is an
obvious choice of equilibrium state for a linear system, when speaking about linear systems
we shall not be precise but instead will say the system is stable when we mean the zero state
is stable.
Example 9.2.
Consider the nonlinear system of Example 9.1. If Xo === 1, then

Ix(t)1 ::::
11 + (e t -1)(1- xo)1
In Definition 9.2 we can set 0 = € > 0 if e:::: 1, and if € > 1 we set 0:= 1 to show the zero state of
Example 9.1 is stable i.s.L. Hence the zero state is stable Ls.L. even though the response can become
unbounded for some Xo' (This situation corresponds to Fig. 9-2(b) of Problem 9.1.) Of course if the
response became unbounded for all Xo # 0, the zero state would be considered unstable. Another point to
note is that in the application of Definition 9.2, in the range where e is small there results the choice of
a correspondingly small 8.

Example 9.3.
Given the Van der Pol equation

with initial condition x(O}:::; xo' The trajectories in


state space can be plotted as shown in Fig. 9-1. We
will call the trajectory in bold the limit cycle.
Trajectories originating outside the limit cycle spiral
in towards it and trajectories originating inside the
limit cycle spiral out towards it. Consider a small
circle of any radius, centered at the origin but such
that it lies completely within the limit cycle. Call
its radius € and note only xo::::: 0 will result in a
trajectory that stays within Jixii2 < E. Therefore
the zero state of the Van der Pol equation is unstable
but any trajectory is bounded. (This situation corre-
sponds to Fig. 9-2(e) of Problem 9.1.)

Theorem 9.2: The transition matrix of a linear system is bounded as jl4J(t, to)jj < K(tO) for
all t === to if and only if the equilibrium state x = 0 of dx/dt = A(t)x
is stable Ls.L.
Note Ilx(t)11 is bounded if Ilcp(t, to)11 is bounded.
CHAP. 9] STABILITY OF LINEAR SYSTEMS 193

Proof: First assun1e 1)I'Jl(t, to))1 < K(tO) where K is a constant depending only on to.
If we are given any € > 0, then we can always find 8 = dK(tO) such that if Ilxoll < 8 then
E = K(to)8 > 111'Jl(t, to)llllxoll == 114I(t, to)xoll = Ilx(t)jj. From Definition 9.2 we conclude stability
i.s.L.
Next we assume stability Ls.L. Let us suppose cp(t, to) is not bounded, so that there is at
least one element ipij(t, to) that becomes large as t tends to co. If Ilxoll < 8 for a nonzero 8,
then the element Xj of Xo can be nonzero, which results in a trajectory that eventually leaves
any region in state space defined by Ilxll < E. This results in an unstable system, so that
we have reached a contradiction and conclude that cp(t, to) must be bounded.
Taken together, Theorems 9.1 and 9.2 show that boundedness of Ilx(t)1I is equivalent to
stability i.s.L. for linear systems, and is independent of Xo. When any form of stability is
independent of the size of the initial perturbation Xo, we say the stability is global, or speak of
stability in the large. Therefore another way of stating Theorem 9.1 is to say (local) sta-
bility i.s.L. implies global stability for linear systems. The nonlinear system of Example
9.1 is stable i.s.L. but not globally stable i.s.L.
In practical applications we often desire the response to return eventually to the equi-
librium position x = 0 after a small displacement. This is a stronger requirement than
stability Ls.L. which only demands that the response stay within a region jlxll < E.
Definition 9.3: The equilibrium state x = 0 is asymptotically stable if (1) it is stable
Ls.L. and (2) for any "to and any Xo sufficiently close to 0, x(t) -7 0 as t -7 co.
This definition is also valid for nonlinear systems. It turns out that (1) must be assumed
besides (2), because there exist pathological systems where x(t) -7 0 but are not stable Ls.L.
Example 9.4.
Consider the linear harmonic oscillator ~~ = (_~ ~) x with transition matrix

4I(t,O) =
- (cos t
-sin t
sin t)
cos t
The A matrix has eigenvalues at ±j. To apply Definition 9.2, Ilx(t)112::= 11~(t, t o)I[21[xoI12 = I[xolb < E = 0 since
11~(tJ t o)112 ::::: 1. Therefore the harmonic oscillator is stable Ls.L. However, x(t) never damps out to 0, so
the harmonic oscillator is not asymptotically stable.
Example 9.5.
The equilibrium state :Ii = 0 is asymptotically stable for the system of Example 9.1, since any small
perturbation (xo < 1) gives rise to a trajectory that eventually returns to O.
In all cases except one, if the conditions of the type of stability are independent of to,
then the adjective uniform is added to the descriptive phrase.
Example 9.6.
If S does not depend on to in Definition 9.2, we have uniform stability i.s.L.
Example 9.7.
The stability of any time-invariant system is uniform.
The exception to the usual rule of adding "uniform" to the descriptive phrase results
because here we only consider linear systems. In the general framework of stability def-
initions for time-varying nonlinear systems, there is no inconsistency. To avoid the com-
plexities of general nonlinear systems, here we give
Definition 9.4: If the linear system dx/dt = A(t)x is uniformly stable i.s.L. and if for all
to and for any fixed p however large Ilxoll < p gives rise to a response
x(t) -7 0 as t -7 co, then the system is unifo1'mly asymptotically stable.
The difference between Definitions 9.3 and 9.4 is that the conditions of 9.4 do not depend
on to, and additionally must hold for all p. If p could be as small as we please, this would
be analogous to Definition 9.8. This complication arises only because of the linearity, which
in turn implies that Definition 9.4 is also global.
194 STABILITY OF LINEAR SYSTEMS [CHAP. 9

Theorem 9.3: The linear system dx/dt = A(t)x is uniformly asymptotically stable if and
only if there exist two positive constants 1<1 and 1<2 such that ]]q,(t, to)11 ~
I<le- K2 <t-t o) for all t::::,.. to and all to.

The proof is given in Problem 9.2.


Example 9.8.
Given the linear time-varying scalar system dx/dt = -x/to This has a transition matrix .p(t, to) = to/to
For initial times to > 0 the system is asymptotically stable. However, the response does not tend to 0 as
fast as an exponential. This is because for to < 0 the system is unstable, and the asymptotic stability is
not uniform.

Example 9.9.
Any time-invariant linear system dx/dt = Ax is uniformly asymptotically stable if and only if all the
eigenvalues of A have negative real parts.

However for the time-varying system dx/dt = A(t)x, if A(t) has all its eigenvalues
with negative real parts for each fixed t, this in general does not mean the system is asymp-
totically stable or even stable.
Example 9.10.
Given the system

with initial conditions x(O) = Xo. The eigenvalues are Al = /C and A2 =: 31(. Then if /C < 0, both
eigenvalues have real parts less than zero. However, the exact solution is
2X1(t) =: 3(XlO + x20)e 5Kt - (XIO + 3x20)e7Kt and 2X2(t):::: (XlO + 3x20)e-Kt - (x lO + x20)e- SKt
For any nonzero real /C the system is unstable.
There are many other types of stability to consider in the general case of nonlinear
time-varying systems. For brevity, we shall not discuss any more than Definitions 9.1-9.4
for zero-input systems. Furthermore, these definitions of stability, and those of the next
section, carry over in an obvious manner to discrete-time systems. The only difference is
that t takes on discrete values.

9.3 DEFINITIONS OF STABILITY FOR NONZERO INPUTS


In some cases we are more interested in the input-output relationships of a system than
its zero-input response. Consider the system
dx/dt = A(t)x + B(t)u y = C(t)x (9.1)
with initial condition x(to) = xo and where ]]A(t)ll < KA , a constant, and 11B(t)11 < KB' an-
other constant, for all t:::-: to, and A(t) and B(t) are continuous.

Definition 9.5: The system (9.1) is externally stable if for any to, any xo, and any u such that
]]u(t)]] ~ 8 for all t:::-: to, there exists a constant E which depends only on
to, Xo and 8 such that ]Iy(t)]]-:::. E for all t ~ to.
In other words, if every bounded input produces a bounded output we have external stability.

Theoretn 9.4: The system (9.1), with Xo = 0 and single input and output, is uniformly
externally stable if and only if there exists a number f3 < 00 such that

rt ]h(t, 7")1 d7" ~ f3


Jto
for all t ~ to where h(t, 7") is the impulse response.
CHAP. 9] STABILITY OF LINEAR SYSTEMS 195

If f3 depends on to the system is only externally stable. If Xo oF 0, we additionally require


the zero-input system to be stable i.s.L. and C(t) bounded for t> to so that the output does
not become unbounded. If the system has multiple inputs and/or multiple outputs, the
criterion turns out to be
n
it IIH(t, ,)111
to
dT === f3 where the l1 norm of a vector v in 'V n is
L IVil
IIvl1 1 = i=l (see Sections 3.10 and 4.6).

Proof: First we show that if it Ih(t, ')ldT ~ f3, then we get external stability. Since
y(t) =
Jtrto h(t,.) u(.) dT, we can tak:onorms on both sides and use norm properties to obtain

Jy(t)1 :=::; i ~
t1

I h (t, .)1 JU(T)) dT === 8 i~


t1
)h(t, .)1 dT ~ 8/3

From Definition 9.5 we then have external stability.


t
Next, if the system is externally stable we shall prove
Jr to
Ih(t, .)ldT ==== f3 by contra-
diction. We set U1(T) = sgn h(t,.) for to:::::=. === t, where sgn is the signum function which
has bound 1 for any t. By the hypothesis of external stability,

Iy(t)) =
Now suppose rt )h(t, T)Jd•
Jto . : : :. f3 is not true. Then by taking suitable values of t and to
we can always make this larger than any pI"eassigned number a. Suppose we choose t = B
and to = (jo in such a way that i91h(8, .)1 dT > a. Again we set U2(T) = sgn h(O, T) so that
8 80
a <1 eo
h(B, T)U2(r)dT = y(B).
arrive at a contradiction.
Since a is any preassigned number, we can set a = € and

Example 9.11.
Consider the system dy/dt = u. This has an impulse response h(t,.) = U(t - .), a unit step starting

at time •. Then f t
to
IU (t - .) I dT = t - to which becomes unbounded as t tends to 00. Therefore this

system is not externally stable although the zero-input system is stable i.s.L.

In general, external stability has no relation whatsoever with zero-input stability con-
cepts because external stability has to do with the: time behavior of the output and zero-
input stability is concel"ned with the time behavior of the state.

Example 9.12.
Consider the scalar time-varying system

where 8(t, to) == ft 0'(.) d.. Then the transition matrix is <p(t, to) == ee(t,to). Therefore
to

so that ft
to
Ih(t, .)1 d. < ft
-00
Ih(t, .)1 d. = 1. Thus the system is externally stable. However, since a(t)
can be almost anything, any form of zero-input stability is open to question.
196 STABILITY OF LINEAR SYSTEMS [CHAP. 9

However, if we make C(t) a constant matrix, then we can identify the time behavior of
the output with that of the state. In fact, with a few more restrictions on (9.1) we have
Theorem 9.5: For the system (9.1) with l1 norms, C(t) = I, and nonsingular B(t) such that
lIB-1(t)111 < K n - 1, the system is externally stable if and only if it is uni-
formly asymptotically stable.
To have B(t) nonsingular requires u to be an n-vector. If B is constant and nonsingular,
it satisfies the requirements stated in Theorem 9.5. Theorem 9.5 is proved in Problem 9.2.

9.4 LIAPUNOV TECHNIQUES


The Routh-Hurwitz, Nyquist, root locus, etc., techniques are valid for linear, time-
invariant systems. The method of Liapunov has achieved some popularity for dealing with
nonlinear and/or time-varying systems. Unfortunately, in most cases its practical utility
is severely limited because response characteristics other than stability are desired.
Consider some metric p(x(t), 0). This is the "distance" between the state vector and
the 0 vector. If some metric, any metric at all, can be found such that the metric tends to
zero as t ~ et:J, it can be concluded that the system is asymptotically stable. Actually,
Liapunov realized we do not need a metric to show this, because the triangle inequality
(Property 4 of Definition 3.45) can be dispensed with.
Definition 9.6: A time-invariant Liapunov junction, denoted v(x), is any scalar function
of the state variable x that satisfies the following conditions for all t ~ to
and all x in the neighborhood of the origin:
(1) v(x) and its partial derivatives exist and are continuous;
(2) v(O) = 0;
(3) v{x) > 0 for x ¥: 0;
(4) dv/dt = (grad x v)T dx/dt < 0 for x ~ o.
The problem is to find a Liapunov function for a particular system, and there is no general
method to do this.
Example 9.13.
Given the scalar system dxldt = -x. We shall consider the particular function 1](x) = x 2 • Applying
the tests in Definition 9.6, (1) 1](x) = x 2 and B7JIBx = 2x are continuous, (2) 1](0) = 0, (3) 7J(x) = x 2 > 0
for all x # 0, and (4) d7Jldt = 2x dxldt = -2X2 < 0 for all x # o. Therefore 7J(x) is a Liapunov function.

Theorem 9.6: Suppose a time-invariant Liapunov function can be found for the state
variable x of the system dx/dt = £(x, t) where £(0, t) = O. Then the state
x = 0 is asymptotically stable.
Proof: Here we shall only prove x ~ 0 as t ~ co. Definition 9.6 assures existence and
continuity of v and dv/dt. Now consider v(p{t; to, xo)), i.e. as a function of t. Since v > 0
and dv/dt < 0 for p ~ 0, integration with respect to t shows V{tp(tl; to, xo)> V(p(t2; to, xo))
for to < t1 < t2 • Although v is thus positive and monotone decreasing, its limit may not be
zero. (Consider 1 + e- t .) Assume v has a constant limit K> O. Then dv/dt = 0 when
v = K. But dv/dt < 0 for p ~ 0, and when p = 0 then dv/dt = (gradxv)Tf(O, t) = O. So
dv/dt= 0 implies p = 0 which implies v = 0, a contradiction. Thus v ~ 0, assuring x ~ O.
If Definition 9.6 holds for all to, then we have uniforn1 asymptotic stability. Additionally
if the system is linear or if we substitute "everywhere" for "in a neighborhood of the
origin" in Definition 9.6, we have uniform global asymptotic stability. If condition (4) is
weakened to dv/dt ~ 0, we have only stability i.s.L.
CHAP. 9] STABILITY OF LINEAR SYSTEMS 197

Example 9.14.
For the system of Example 9.13, since we have found a Liapunov function v(x) = x2 , we conclude
the system dx/dt = -x is uniformly asymptotically stable. Notice that we did not need to solve the state
equation to do this, which is the advantage of the technique of Liapunov.

Definition 9.7: A time-varrying Liapunov junction, denoted v(x, t), is any scalar function
of the state variable x and time t that satisfies for all t ~ to and all x
in a neighborhood of the origin:
(1) v(x, t) and its first partial derivatives in x and t exist and are con-
tinuous;
(2) v(O, t) = 0;
(3) v(x, t) ~ a(l]xlD > 0 for x ¥ 0 and t ~ to, where a(O) = 0 and a(~)
is a continuous nondecreasing scalar function of ~;
(4) dv/dt = (grad x v)T dx/dt + av/at < 0 for x ¥ O.
Note for all t ~ to, v(x, t) must be == a continuous nondecreasing, time-invariant func-
tion of the norm Ilxll.

Theorem 9.7: Suppose a time-varying Liapunov function can be found for the state
variable x(t) of the system dx/dt = f(x, t). Then the state x = 0 is asymp-
totically stable.
Proof: Since dv/dt < 0 and v is positive, integration with respect to t shows that
v(x(to), to) > v(x(t), t) for t > to. Now the proof must be altered from that of Theorem 9.6
because the time dependence of the Liapunov function could permit v to tend to zero even
though x remains nonzero. (Consider v = x2e~t when x = t). Therefore we require
v(x, t) == a(llxll), and a = 0 implies Ilxll = O. Hence if v tends to zero with this additional
assumption, we have asymptotic stability for some to.
If the conditions of Definition 9.7 hold for all to and if v(x, t) ...:::: P(I!xID where P(~) is a
continuous nondecreasing scalar function of ~ with (3(0) = 0, then we have uniform asymp-
totic stability. Additionally, if the system is linear or if we substitute "everywhere" for
"in a neighborhood of the origin" in Definition 9.7 and require a(jjxjD -? co with Ilxll-? 00,
then we have uniform global asymptotic stability.
Example 9.15.
Given the scalar system dx/dt = x. The function x 2 e- 4t satisfies all the requirements of Definition
9.6 at any fixed t and yet the system is not stable. This is because there is no a(llxlD meeting the require-
ment of Definition 9.7.
It is often of use to weaken condition (4) of Definition 9.6 or 9.7 in the "following manner.
We need v(x, t) to eventually decrease to zero. However, it is permissible for v(x, t) to be
constant in a region of state space if we are assured the system trajectories will move to
a region of state space in which dv/dt is strictly less than zero. In other words, instead of
requiring dv/dt < 0 for all x =1= 0 and all t == to, we could require dv/dt === 0 for all x =I" 0
and dv/dt does not vanish identically in t == to for any to and any trajectory arising from a
nonzero initial condition x(to).
Example 9.16.
Consider the Sturm-Liouville equation d 2 y/dt2 + pet) dy/dt + q(t)y = O. We shall impose conditions
on the scalars pet) and q(t} such that uniform asymptotic stability is guaranteed. First, call y = Xl and
dy/dt = X2' and consider the function vex, t) = -xi + x~/q(t). Clearly conditions (1) and (2) of Definition
9.7 hold if q(t) is continuously differentiable, and for (3) to hold suppose l/q(t):=: iCl > 0 for all t so that
a(llxll) = IlxJl~ min {1, iCl}' Here min {1, iCl} = 1 if iCl === 1, and min {l, iCt} = iCl if iCl < 1. For (4) to
hold, we calculate dv dXI X2 dX 2 x~ dq
dt 2Xl at + 2 q(t) Cit - q2(t) at
198 STABILITY OF LINEAR SYSTEMS [CHAP. 9

Since dXl/dt = Xz and dX2/dt = -P(t)X2 - q(t)Xl' then


dv 2p(t) q(t) + dq/dt 2
dt - q2(t) x2
Hence we require 2p(t) q(t) + dq/dt > 0 for all t ==- to, since q-2 > Ki.
But even if this requirement
is satisfied, dv/dt= 0 for x 2 =
0 and any Xl' Condition (4) cannot hold. However, if we can show that
when X2 = 0 and Xl ~ 0, the system moves to a region where X2 ~ 0, then vex, t) will still be a Liapunov
function. Hence when X2 = 0, we have dX2/dt = -q(t)xl' Therefore if q(t) # 0, then X2 must become
nonzero when Xl ~ O. Thus vex, t) is a Liapunov function whenever, for any t ==- to, q(t) #- 0 and
lIq(t) ==- Kl > 0, and 2p(t) q(t) + dq/dt > 0. Under these conditions the zero state of the given Sturm-
Liouville equation is asymptotically stable.
Furthermore we can show it is uniformly globally asymptotically stable if these conditions are inde-
pendent of to and if q(t) === K2' The linearity of the system implies global stability, and since the previous
calculations did not depend on to we can show uniformity if we can find a P([lxiD ==- vex, t). If q(t) === /C2,
then [lxll~ max {1, /CZl} = ,8([lx[[) ==- vex, t).
For the discrete-time case, we have the analog for Definition 9.7 as
Definition 9.8: A discrete-time Liapunov function, denoted v(x, k), is any scalar function
of the state variable x and integer k that satisfies, for all k> ko and all
x in a neighborhood of the origin:
(1) v(x, k) is continuous;
(2) v(O, k) = 0;
(3) v(x, k) ~ a(llxlD > 0
for x =1= 0;
(4) ~v(x, k) = v(x(k + 1), k + 1) - v(x(k), k) < 0 for x =1= O.
Theorem 9.8: Suppose a discrete-time Liapunov function can be found for the state
variable x(k) of the system x(k + 1) = f(x, k). Then the state x = 0 is
asymptotically stable.
The proof is similar to that of Theorem 9.7. If the conditions of Definition 9.7 hold for
all ko and if v(x, k) ~ {3(llxID where (3a) is a continuous nondecreasing function of ~ with
(3(0) = 0, then we have uniform asymptotic stability. Additionally, if the system is linear
or if we substitute "everywhere" for "in a neighborhood of the origin" in Definition 9.8
and require a(llxlD -? 00 with Ilxll-? 00, then we have uniform global asymptotic stability.
Again, we can weaken condition (4) in Definition 9.8 to achieve the same results. Condition
(4) can be replaced by ~v(x, k) ~ 0 if we have assurance that the system trajectories will
move from regions of state space in which .6.v = 0 to regions in which L1v < O.
Example 9.17.
Given the linear system x(k + 1) = A(k) x(k), where IIA(k)11 <1 for all k and for some norm. Choose
as a discrete-time Liapunov function vex) = Ilxll. Th,en
ill' = v(x(k + 1» - v(x(k» = Ilx(k + 1)11 - Ilx(k)[1 :::: [[A(k) x(k)11 - Ilx(k)[1
Since IIA(k)11 < 1, then IIA(k} x(k)11 === IIA(k)llllx(k)11 < Ilx(k)11 so that Av < O. The other pertinent
properties of v can be verified, so this system is uniformly globally asymptotically stable.

9.5 LIAPUNOV FUNCTIONS FOR LINEAR SYSTEMS


The major difficulty with Liapunov techniques is finding a Liapunov function. Clearly,
if the system is unstable no Liapunov function exists. There is indeed cause to worry that
no Liapunov function exists even if the system is stable, so that the search for such a func-
tion would be in vain. However, for nonlinear systems it has been shown under quite
general conditions that if the equilibrium state x = 0 is uniformly globally asymptotically
stable, then a time-varying Liapunov function exists. The purpose of this section is to
manufacture a Liapunov function. We shall consider only linear systems, and to be specific
we shall be concerned with the asymptotically stable real linear system
dx/dt = A(t)x (9.2)
CHAP. 9] STABILITY OF LINEAR SYSTEMS 199

in :,vhich there exists a constant K3 depending on to such that IIA(t)112:::=; K3(tO) and where
f 11cf>(r, t)11 2 dT exists for some norm. Using Theorem 9.3 we can show that uniformly
asymptotically stable systems satisfy this last requirement because 11cf>(t, r)11 : :=; Kle-K2Ct~T)
so that

However, there do exist asymptotically stable systems that do not satisfy this requirement.
Example 9.18.
Consider the time-varying scalar system dx/dt = -x/2t. This has a transition matrix <t>(t, to) =
(to/t)1/2. The system is asymptotically stable for to > O. However, even for t === to > 0, we find

f'" II<P(r, t)112 dT


t

Note carefully the position of the arguments of <1>.

Theorem 9.9: For system (9.2), vex, t) = XTP(t)X is a time-varying Liapunov function,

where pet) = i'" q"T(T, t) Q(T) q"(,, t) dT in which Q(t) is any continuous
positive definite symmetric matrix such that IIQ(t)ll:::=; K5(tO) and Q(t) - a
is positive definite for all t ~ to if £ > 0 is small enough.
This theorem is of little direct help in the determination of stability because q,,(t, ,) must be
known in advance to compute vex, t). Note pet) is positive definite, real, and symmetric.

Proof: Since q" and Q are continuous, if pet) < 00 for all t ~ to then condition (1) of
Definition 9.7 is satisfied. Use of IIQII : :=; KS gives IIPll ~ KS i""jj4J(r, t)W dr. Since the in-
tegral exists for the systems (9.2) under discussion, then condition (1) holds. Condition (2)
obviously holds. To show condition (3),

vex, t) = i CO

(q,,(T, t) X(t))TQ(,) eIl(r, t) x(t) dT = i oo


XT(T) Q(T) X(T) dT

Since Q(t) - EI is positive definite, then xT(Q - cI)x ~ 0 so that XTQX ~ €xTx > 0 for any
x( T) # O. Therefore

Since IIA(t)112:::=; Ka for the system (9.2), then use of Problem 4.43 gives

vex, t) ~ €K3 1 ieJ> IIA(T)1121Ix(T)II~ dT ~ -€K;l f.co xTAxdr


= -€K- 1
a Jtr""xT(dx/dT) d, = €K3lllx(t)II~/2 = a(llxll) > 0

for all x # 0 . Finally, to satisfy condition (4), since vex, t)


dv/dt = -xT(t) Q(t) x(t) < 0 since Q is positive definite.
Example 9.19.
Consider the scalar system of Example 9.8, dx/dt = -x/to Then <p(t, to) = to/to A Liapunov function
can be found for this system for to > 0, when it is asymptotically stable, because

IIA(t)112 =t- 1 < tol = K3(tO) and foo


t
114>(r, t) 112 dr = f t
00 (t/r)2 dr = t

For simplicity, we choose Q(r) = 1 so that pet) = fOO (t/T)2d'T= t. Then vex, t) = tx 2 , which is a
time-varying Liapunov function for all t === to > O. t
200 STABILITY OF LINEAR SYSTEMS [CHAP. 9

Consider the system


dx/dt = A(t)x + f(x, u(t), t) (9.3)
where for t ~ to, f(x, u, t) is real, continuous for small Ilxjl and Ilull, and 11£11-'1> 0 as ljxll--')o 0
and as Ilull--')o O. As discussed in Section 8.9 this equation often results when considering
the linearization of a nonlinear system or considering parameter variations. (Let ax from
Section 8.9 equal x in equation (9.3).) In fact, most nonswitching control systems satisfy
this type of equation, and Ilfll is usually small because it is the job of the controller to keep
deviations from the nominal small.
Theorem 9.10: If the system (9.3) reduces to the asymptotically stable system (9.2) when
f = 0, then the equilibrium state is asymptotically stable for some small
Ilx[[ and j[u[l·
In other words, if the linearized system is asymptotically stable for t:::".. to, then the corre-
sponding equilibrium state of the nonlinear system is asymptotically stable to small dis-
turbances.
Proof; Use the Liapunov futiction v(~~-t) = XTP(t)X
given by Theorem 9.9, in which
q,(t, T) is the transition matrix of the asymptotically stable system (9.2). Then conditions
(1) and (2) of Definition 9.7 hold for this as a Liapunov function for system (9.3) in the
same manner as in the proof of Theorem 9.9. For small Ilx\j it makes little difference
whether we consider equation (9.3) or (9.2), and so the lower bound a(llxlD in condition (3)
can be fixed up to hold for trajectories of (9.3). For condition (4) we investigate dv/dt =
d(xTpx)/dt = 2xT P dx/dt + x T (dP/dt)x = 2x TPAx + 2x TPf + x T(dP/dt)x. But from the proof
of Theorem 9.9 we know dv/dt along motions of the system dx/dt = A(t)x satisfies dv/dt =
-xTQx = 2xTPAx + x T(dP/dt)x so that -Q = 2PA + dP/dt. Hence for the system (9.3),
dv/dt = -xTQx + 2x T Pf. But Ilfll-? 0 as Ilxll--')o 0 and [[ull--')o 0, so that for small enough
Ilxll and Ilull, the term -xTQx dominates and dv/dt is negative.
Example 9.20.
Consider the system of Example 9.1. The linearized system dx/dt = -x is uniformly asymptotically
stable, so Theorem 9.10 says there exist some small motions in the neighborhood of x::::: 0 which result
in the asymptotic stability of the nonlinear system. In fact, we know from the solution to the nonlinear
equation that initial conditions Xo < 1 cause trajectories that return to x::::: O.
Example 9.21.
Consider the system dx/dt = -x + u(l + x - u), whereu(t) = e, a small constant. Then I(x, e, t) :::::
1:(1 + X - e) does not tend to zero as x ~ O. Consequently Theorem 9.10 cannot be used directly. However,
note the steady state value of x = 1:. Therefore redefine variables as z = X-e. Then dz/dt = -z + eZ,
which is stable for all I: ~ 1.
Example 9.22.
Consider the system
dXl/dt = X2 + xl(xi + ~)/2 and
The linearized system dx/dt = X2 and dX2/dt = -Xl is stable i.s.L. because it is the harmonic oscillator
of Example 9.4. But the solution to the nonlinear equation is xi + xi ::::: [(xio + X~o)-l- t]-l which is
unbounded for any xlO or X20' The trouble is that the linearized system is only stable i.s.L., not asymp-
totically stable.

9.6 EQUATIONS FOR THE CONSTRUCTION OF LIAPUNOV FUNCTIONS


From Theorem 9.9, if we take the derivative of P(t) we obtain
dP/dt _q,T(t, t) Q(t) q,(t, t) + i«l [8f1l(T, t)/8t]T Q(T)cfJ(,-, t) dT

+ s,~ .pT(T, t)Q(T)[a.p(T, t)/atJdT


Since cfJ'(t, t) == I and 8lJJ(T, t)/8t == 8cfJ- 1(t, T)/8t = -q,(T, t) A(t), we obtain
dP/dt + AT(t) P(t) + P(t) A(t) = -Q(t) (9.4)
CHAP.9J STABILITY OF LINEAR SYSTEMS 201

Theorem 9.11: If and only if the solution P(t) to equation (9.4) is positive definite, where
A(t) and Q(t) satisfy the conditions of Theorem 9.9, then the system (9.2),
dx/dt = A(t)x, is uniformly asymptotically stable.

Proof: If P(t) is positive definite, then v = XTP(t)X is a Liapunov function. If the


system (9.2) is uniforn1Iy asymptotically stable, we obtain equation (9.4) for P.
Theorem 9.11 represents a rare occurrence in Liapunov theory, namely a necessary and
sufficient condition for asymptotic stability. Usually it is a difficult procedure to find a
Liapunov function for a nonlinear system, but equation (9.4) gives a means to generate a
Liapunov function for a linear equation. Unfortunately, finding a solution to equation (9.4)
means solving n(n + 1)/2 equations (since P is symmetric), whereas it is usually easier to
solve then equations associated with dx/dt = A(t)x. However, in the constant coefficient
case we can take Q to be a constant positive definite matrix, so that P is the solution to
(9.5)

This equation gives a set of n(n + 1)/2 linear algebraic equations that can be solved for P
after any positive definite Q has been selected. It turns out that the solution always exists
and is unique if the system is asymptotically stable. (See Problem 9.6.) Then the solution
P can be checked by Sylvester's criterion (Theorem 4.10), and if and only if P is positive
definite the system is asymptotically stable. This procedure is equivalent to determining
if all the eigenvalues of A have negative real parts. (See Problem 9.7.) Experience has
shown that it is usually easier to use the Lienard-Chipart (Routh-Hurwitz) test than the
Liapunov method, however.

Example 9.23.
Given the system d2y/dt 2 + 2dy/dt + y =:: O. To determine the stability by Liapunov's method we set
up the state equations
dx
= y =:: (1 O)x
dt

We arbitrarily choose Q = I and solve

(~ =~)(:~~ P22
0 1)
PI2)(-1-2 ::::
(-1o 0) -1

The solution is P ~
---c -21- (31 1)
- l' Using Sylvester's criterion we find Pn = 3 >0 and detP = 1 > 0,
and so P is positive definite and the system is stable. But it is much easier to use Routh's test which almost
immediately says the system is stable.

We may ask at this point what practical use can be made of Theorem 9.9, other than the
somewhat comforting conclusion of Theorem 9.10. The rather unsatisfactory answer is
that we might be able to construct a Liapunov function for systems of the form dx/dt:::::
A(t)x + f(x,t) where A(t) has been chosen such that we can use Theorem 9.9 (or equation
(9.5) when the system is time-invariant). A hint on how to chooseQ is given in Problem 9.7.
Example 9.24.
Given the system d2y/dt2 + (2 + e~t) dy/dt + y ::::: O. We set up the state equation in the form
dx/dt = Ax + f:
dx
y - (1 O)x
dt

As found in Example 9.23, 2v = 3xf + 2XIX2 + x~. Then


dv/dt = d(x'tex)/dt = 2xTP dx./dt = 2x TP(Ax + f)

:':::: -xTQx + 2x TPf = -xi - x~ - (Xl + x2)e~tx2


202 STABILITY OF LINEAR SYSTEMS [CHAP. 9

dv T ( 1
dt -x e- t /2

Using Sylvester's criterion, 1 > 0 and 1 + e- t > e- 2t/4, so the given system is uniformly asymptotically
stable.
For discrete-time systems, the analog of the construction procedure given in Theorem
9.9 is
""
v(x, k) = ~ xT(k) cpT(m, k) Q(m) q,(m, k) x(k)
m=k

and the analog of equation (9.5) is ATPA - P = -Q.

Solved Problems
9.L A ball bearing with rolling friction rests on a deformable surface in a gravity field.
Deform the surface to obtain examples of the various kinds of stability.

Global Asymptotic Global Stability Unstable Unstable


Asymptotic Stability Stability i.s.L. but but and
Stability but Unbounded i.s.L. Unbounded Bounded Unbounded
(a) (b) (c) Cd) (e) (f)
Fig. 9-2

We can see that if the equilibrium state is globally asymptotically stable as in Fig. 9-2(a),
then it is also stable i.s.L., as are (b), (c) and (d), etc. If the shape of the surface does not vary
with time, then the adjective "uniform" can be affixed to the description under each diagram. If the
shape of the surface varies with time, the adjective "uniform" mayor may not be affixed, depend-
ing on how the shape of the surface varies with time.

9.2. Prove Theorems 9.3 and 9.5.


We wish to show dx/dt = A(t)x is uniformly asymptotically stable if and only if JI4J(t, to)!! ..:=
K'le-KzCt-to) for all t ~ to and all to. Also we wish to show dx/dt = A(t)x + B(t)u, where
IIB-l(t)lh ..:= K'B-l' is externally stable if and only if Ijofl(t, to)11 ..:= K'le- K2 (t-tO), and hence if and only
if the unforced system is uniformly asymptotically stable.
Since IJx(t)l! ~ IIIJJ(t, to)l) !!~oll, if !!!fJ(t, to)11 :::: K'le-KzCt-to) and ))xol!":= 8, then Ilx(t)!! ~ IelO
and tends to zero as t ~ which proves uniform asymptotic stability. Next, if dx/dt = A(t)x
(1;),

is uniformly asymptotically stable, then. for any p > 0 and any 'I'J > 0 there is a tl (independent
of to by uniformity) such that if Ilxojl":= p then I1x(t)II":= 'I'J for all t> to + t l • In fact, for p = 1
and 7J = e- 1, there is a tl = T such that 'I'J = e- 1 ~ Ilx(to + T)II = 11~(to + T, to)xoll. Since
I)xoll = 1, from Theorem 4.11, property 2, 11q:,(to + T, to)xoll = 114J(to + T, to)ll. Therefore for all
to, 11q:,(to + T, to)11 ..:= e- 1 and for any positive integer k,
114t(to + kT, to)ll : : : : llq:,(to + kT, to + (k - 1)T) II' .. II<JI(to + 2T, to + T)llllofI(to + T, to) II ..:= e- k
Choose k such that to + kT === t < to + (k + l)T and define K'1 = 11q:,(t, to + kT)lle < (I;) from
Theorem 9.2. Then
IlofI(t, to)11 ..:= 114>(t, to + kT)llll4>(t o + kT, to)11

Defining K2 = 1fT proves Theorem 9.3.


CHAP. 9] STABILITY OF LINEAR SYSTEMS 203

If lllJ1(t, t o)lll ::::: K1e-K2Ct-to), then

ft lIB(t, -dill d1"


~
:0::: ft II~
III (t, 1")11 !lB(1") II dT :0::: iCO s t
~
11<b(t, T)II dT
iCDiCl (1 - e- K2 Ct - to) )!K2 :0::: iC iCl/ KZ {3
D

so that use of Theorem 9.4 proves the system is uniformly externally stable.
If the system is uniformly externally stable, then from Theorem 9.4,

{3 === ftljH(t,'T)lhdT = ftlllll(t,T)B(T)111dT


to to
Since IIB-l(T)lh::::: /CD-I' then

ft 11~(t, T)111 dT
to
:0:::

for all to and all t. Also, for the system (9.1) to be uniformly externally stable with arbitrary Xo,
it must be uniformly stable i.s.L. so that II~(T, t o)111 < K from Theorem 9.2. Therefore

00 > (3K -1 K
O === st 111I'(t, T)lh II~(T, to)lh dT :::: ft 111I'(t, T) cJI(T, t o)lh d1"
=: st ~

to
IlcJI(t, t o)I]1 dT 111I'(t, to)lll (t-
~

to)

Define the time T =: {3ICB-1KE so that at time t =: to + T, {3K o - I K === 111II(t o + T, t o)111{3K B -lKE. Hence
114t(to + T, to)lb :::::: e- 1 , so that using the argument found in the last part of the proof just given
for Theorem 9.3 shows that j]4a(t, t o)I]1 :::::: Kle-Ct-to)/T, which proves Theorem 9.5.

9.3. Is the scalar linear system dx/dt = 2t(2 sin t -l)x uniformly asymptotically stable?
We should expect something unusual because IACt)1 increases as t increases, and A(t) alternates
in sign. We find the transition matrix as <pCt, to) =: £lew-octo) where o(t) = 4 sin t - 4tcos t - t 2 •
Since e(t) < 4 + 41tl- t2, lim e(t) = -00, so that lim <I>(t, to) = 0 and the system is asymptoti-
t-oo t-co
cally stable. This is true globally, i.e. for any initial condition xo, because of the linearity of the
system (Theorem 9.1). Now we investigate uniformity by plotting <I>(t, to) starting from three dif-
ferent initial times: to = 0, to = 2" and to = 417" (Fig. 9-3). The peaks occur at <I>«2n + 1)1T, 2nn) =
e1TC4 -'oT)C4n+1) so that the vertical scale is compressed to give a reasonable plot. We can pick an
initial time to large enough that some initial condition in IXol < 8 will give rise to a trajectory
that will leave Ixi < E for any E > 0 and (3 > O. Therefore although the system is stable i.s.L.,
it is not uniformly stable Ls.L. and so cannot be uniformly asymptotically stable.

Fig. 9-3

9.4. If ~(t) ...:::: I< + Jrt [pCr) ~('T) + {J.(1")] dT where p(t) ~o and K is a constant, show that
to
d ]
eitto p(T)dT
~(t) == [K + f t -

to
e fT p('I)d'l)
to JL
()
T T

This is the Gronwall-Bellman lemma and is often useful in establishing bounds for
the response of linear systems.
We shall establish the following chain of assertions to prove the Gronwall-Bellman lemma.
204 STABILITY OF LINEAR SYSTEMS [CHAP. 9

Assertion (i): If dw/dt == 0 and w(to)::= 0, then wet) ~ 0 for t ~ to.

Proof: If ",(t 1) > 0, then there is a T in to::= T::= tl such that "'fT) = 0 and w(t) > 0 for
T ~ t == tp But using the mean value theorem gives ",(tl) :=:",(t 1) - weT) :=: (t l - T) dw/dt ~ 0,
which is a contradiction.

Assertion (ii): If d",/dt - a(t)", ~ 0 and ",(to) "",. 0, then w(t):=: 0 for t::::: to.

Proof: Multiplying the given inequality by e-I}<t·to), where oCt, to) :=: fta(r) dr, gives
to
o ~ e- 1H t,to) dw/dt - a(t)e- OCt• to) :::::; dCe-OCt. to) ",)/dt

Applying assertion (i) gives e-o(t.to) '" ~ 0, and since e-OCt.to) > 0 then '" ~ O.

Assertion (iii): If d¢/dt - a(t)¢ == dy/dt - a(t)y and ¢(to):=: y(to), then ¢(t) === yet).

Proof: Let '" = ¢ - y in assertion (ii).

Assertion (iv): If dw/dt - a(t)" 0" ,(t) and "(to) '" K, then _(t) '" (K + s,: e-8(T,t,) pH <iT ) .8(t, t,).

Proof: Set '" :=:


a( t) :::::; ,u( t) and y( to) :::::
¢ and y =
K.
(K + f to
t e- OCT, to) ,u(r) dr) eOCt , to) in assertion (iii), because dy/dt-

Assertion (v): The Gronwall-Bellman lemma is true.

Proof: Let ",(t) = II; + ft [per) HT) + ,u(T)] dT, so that the given inequality says ~(t):=: ",(t).
to
Since pet) === 0 we have dw/dt = pet) ~(t) + p.(t) :=; pet) wet) + ,u(t), and w(to) = K. Applying assertion
(iv) gives

9.5. Show that if A{t) remains bounded for all t === to (lJA{t)Jl L: K), then the system cannot
have finite escape time. Also show IIx(t)11 ~ Ilxolle~:IIA(T}lIdT for any A{t).
Integrating dx/ dt :::::: A( t)x between to and t gives x( t) = xo
norm of both sides gives
+ it to
A(T) x(T) dT. Taking the

Ilx(t)l) :::: II Xo + ft A(T) X(T) dT II .~


to
II xol! + Ilf to
t A(T) X(T) dT II :=; l)xoll + f t

to
J]A(T)llllx(T)l1 <iT

Since j)A(r))I ~ 0, we use the Gronwall-Bellman inequality of Problem 9.4 with pet) = 0 to obtain

I)x(t)!) ~ !)xo)le~:IIACT)lldT. If ))A(t))1 ~ K, then Ilx(t)I):=; IlxolleKCt-to), which shows x(t) is bounded
for finite t and so the system cannot have finite escape time. However, if I)A(t)11 becomes un-
bounded, we cannot conclude the system has finite escape time, as shown by Problem 9.3.

9.6. Under what circumstances does a unique solution P exist for the equation ATp +
PA = -Q?
We consider the general equation BP + PA = C where A, Band C are arbitrary real n X n
matrices. Define the column vectors of P and C as Pi and ci respectively, and the row vectors of A as
n
aJ for i:=: 1,2, .. . ,n. Then the equation BP+PA = C can be written as BP+i=l
~ Pia[ = C, which
in turn can be written as

[( ~ ~ ::: ~)
... ... ...
o 0 ... B
=
(f)
CHAP•. 9] STABILITY OF LINEAR SYSTEMS 205

Call the first matrix in brackets Band the second A,


and the vectors p and c. Then this equation
can be written (13 + A)p = c. Note ~ has eigenvalues equal to the eigenvalues of B, call them
f3i' repeated n times. Also, the rows of A - Aln are linearly dependent if and only if .the rows of
AT - ~.In are linearly dependent. This happens if and only if det (AT -Aln) = O. So A has eigen-
values equal to the eigenvalues of A, call them ai' repeated n times. Call T the matrix that reduces
B to Jordan form J, and V = {vij} the matrix that reduces A to Jordan form. Then

A A
Hence the eigenvalues of B + A are ai + f3 j for i, j = 1, 2, ... , n.
A unique solution to (B + A)p = c exists if and only if det (:8 + A):= IT (ai + f3) #- o.
alIi.j
Therefore if and only if ai + f3j # 0 for all i and i, does a unique solution to BP + PA = C exist.
If B =_AT, then f3j == aj, so we require ai + aj # O. If the real parts of all aj are in the left half
plane (for asymptotic stability), then it is impossible that ai + fX.j := O. Hence a unique solution for
P always exists if the system is asymptotically stable.

9.7. Show that PA + ATp = -Q, where Q is a positive definite matrix, has a positive
definite solution P if and only if the eigenvalues of A are in the left half plane.
This is obvious by Theorem 9.11, but we shall give a direct proof assuming A can be diagonaIized
as M-IAM =.4.. Construct P as (M-l)tM-l. Obviously P is Hermitian, but not necessarily real.
Note P is positive definite because for any nonzero vector x we have xtPx = xt(M-l)tM-1x =
(M-1x)t(M-1x) > O. Also, xtQx:= -(M- 1x)t(.4. + .A. t)(M-1x) > 0 if and only if the real parts of the
eigenvalues of A are in the left half plane. Furthermore, v == xt(M-l)tM-1x decays as fast as is
possible for a quadratic in the state. This is because the state vector decays with a time constant
'1- 1 equal to one over the real part of the maximum eigenvalue of A, and hence the square of the
norm of the state vector decays in the worst case with a time constant equal to 1/2'1. To investi-
gate the time behavior of v, we find
dv/dt = xtQx := -(M-1x)t(.A. + .4. t) (M-1x) === -2'l/(M- 1x)t(M- 1x) = -2'l]v
Fot a choice of M-1x equal to the unit vector that picks out the eigenvalue with real part 'l}, this
becomes an equality and then v decays with time constant 1/2'1.

9.8. In the study of passive circuits and unforced passive vibrating mechanical systems
we often encounter the time-invariant real matrix equation FdZyldt 2 + Gdy/dt +
Hy = 0 where F and H are positive definite and G is at least nonnegative definite.
Prove that the system is stable Ls.L.
Choose as a Liapunov. function

v dyT (F + FT) dy
dt dt
+ r
Jo
t dyT (G
dT
+ GT) dy dT +
dT
yT(H + HT)y

This is positive definite since· F and H are, and

v TdYT)(H
( Y dt 0
206 STABILITY OF LINEAR SYSTEMS [CHAP. 9

which is a positive definite quadratic function of the state. Also,

dv
dt
dyT [ d
dt
2Y
F dt2 +
dy
G dt + By
]
+ [ d2Y
F dt'}, + dy
G dt + By
J T dy
dt == o
By Theorem 9.6, the system is stable i.s.L. The reason this particular Liapunov function was
chosen is because it is the energy of the system.

Supplementary Problems
9.9. Given the scalar system dx/dt = tx + u and y == x.

(a) Find the impulse response h(t, to) and verify ft [k(t, 'T)[
tlJ
dT ::::: yrI;; e
t2
/
2

(b) Show the system is not stable i.s.L.


(c) Explain why Theorem 9.5 does not hold.

9.10. Given the system dx/dt = -x/t + u and y = x. (a) Show the response to a unit step function
input at time to > 0 gives an unbounded output. (b) Explain why Theorem 9.5 does not hold even
though the system is asymptotically stable for to > O.

9.11. By altering only one condition in Definition 9.6, show how Liapunov techniques can be used to give
sufficient conditions to show the zero state is not asymptotically stable.

9.12. Prove that the real parts of the eigenvalues of a constant matrix A are < u if and only if given
any symmetric, positive definite matrix Q there exists a symmetric, positive definite matrix P which
is the unique solution of the set of n(n + 1)/2 linear equations -2uP + ATp + PA = -Q.

9.13. Show that the scalar system dx/dt = -(1 + t)x is asymptotically stable for t === 0 using Liapunov
theory.

9.14. Consider the time~varying network of Fig. 9-4. Let x 1(t) ==


charge on the capacitor and X2{t) = flux in the inductor, with
initial values XIO and X20 respectively. Then L{t) dXl/dt = x2
and L(t) C(t) dX2/dt + L(t)Xl + R(t) C(t)x2 = O. Starting
with L(t) CCt)
R + (2 L/RC) 1 )
pet) ( 1 2/R
find conditions on R, L, and C that guarantee asymptotic
stability. Fig. 9-4

9.15. Using the results of Example 9.16, show that if a > 0, 0 < fJ < 1 and a 2 > fJ2(a 2 + p.-2), then
the Mathieu equation d2 y/dt2 + ady/dt + (1 + fJ cos 2t/p.)y = 0 is uniformly asymptotically stable.

9.16. Given the time-varying linear system dx/dt = A(t)x with initial condition Xo where the elements
of A(t) are continuous in t. Let H(t) be a symmetric matrix defined by B(t) = (A + AT)/2. Let
Amin(t) and Amax(t) be, for each t, the smallest and the largest eigenvalue of H(t). Using the
Liapunov function vex) = XTX, show
it Amin(T) d-r - it Jlmax(T)dT
[[XO!!2 B to::::: Ilx(t)112 ::::: IIxol12B to

9.17. What is the construction similar to that of Problem 9.7 for P in the discrete-time case
ATPA-P = -Q?
CHAP. 9] STABILITY OF LINEAR SYSTEMS 207

9.18. Show that if the system of Example 9.22, page 200, is changed slightly to dxl/dt = X2 - €X1 +
(xi+ x~ )/2 and dX2/dt = Xl - €x 2 + (xi + x~ )/2, where E> 0, then the system is uniformly
asymptotically stable (but not globally).

d
9.19. Given the system -x
dt
Construct a Liapunov function for the system in the case o(t) = O. This Liapunov function must
give the necessary and sufficient condition for stability, i.e. give the exact stability boundaries on a.
Next, use this Liapunov function on the system where oCt) is not identically zero to find a condition
on oCt) under which the system will always be stable.

9.20. Given the system dx/dt = (A(t) + B(t»x where dx/dt = A(t)x has a transition matrix with norm
114I(t"r)11 :::::: e-K2Ct-r) • Using the Gronwall-Bellman inequality with p. = 0, show Ilxll:o::: Ilxolle{K3-K2)(t-to)
if IIB(t)II:o::: IC3e-K2t.

9.21. Show that Ilx(t)11 = IIxolle to


Jt IIACT)lIdr + ft eoT(t IlA(ll)lld1J B(r) U(T) dT for the system dx/dt = A(t)x +
B(t)u with initial condition x(t o) = Xo. to

Answers to Supplementary Problems


9.9. (a)
it~~ Ih(t, T)I dT = e
t2/2 ft-00 e-~/2 dr ~ ~ et2/Z
2 2 Z
(b) ¢(t, to) = et 12e-tol

(e) The system is not externally stable since (3 depends on t in Theorem 9.4.

9.10. (a) yet) == (t - t~ /t)/2 for t:=:: to > O.


(b) The system is not uniformly asymptotically stable.

9.11. Change condition (4) to read dv/dt> 0 in a neighborhood of the origin.

9.12. Replace A by A - uI in equation (9.5).

9.13. Use v == x2

9.14. 0 < Kl :0::: R(t) === IC2 < OJ;)

o < ICg == L(t) === /C4 < CQ

o < /Cs == C(t) === ICe < CQ

o < /C7 === 1 + R(L/RZ - + CL/RC -


o < "8 === 1 + RL/RZ
. C/2) L/R

9.17. P == (Mt)-lM-l which is always positive definite, and Q == M-l(I - .At A)M-l which is positive
definite if and only if each eigenvalue of A has absolute value less than one.

9.18. Use any Liapunov function constructed for the linearized system.
2
9.19. If Q == 21, then P ==21 (2+a
a The system is stable for all a if (J == O. If e oF 0, we
require 4 + 2aO - 02(1 + a2/2)2 > o.

9.20. Ilx)) === !Jxolle- K2


<t-to) + ft e- K2
<t-r) IIB(r)11 Ilx)) dT so apply the Gronwall-Bellman inequality to
~
Ilxl)e K2t :0::: JJxolleK2to + /Cg f t

to
Ilxll dT

9.21. Use Problem 9.4 with Ilx(t)11 == ~(t), lixoll = ", jjA(t)11 = pet), and IIB(..-) U(T»)J = P.(T).
Chapter 10

Introduction to Optimal Control


10.1 INTRODUCTION
In this chapter we shall study a particular type of optimal control system as an intro-
duction to the subject. In keeping with the spirit of the book, the problem is stated in
such a manner as to lead to a linear closed loop system. A general optimization problem
usually does not lead to a linear closed loop system.
Suppose it is a control system designer's task to find a feedback control for an open loop
system in which all the states are the real output variables, so that
dx/dt == A(t)x + B(t)u y = Ix + Ou (10.1)
In other words, we are given the system shown in Fig. 10-1 and -wish to design what goes
into the box marked *. Later we will consider what happens when y(t) == C(t) x(t) where
C(t) is not restricted to be the unit matrix.

I I
~
u(t) x(t) = y(t)

d( t) =====+:;;;'+ I ~======:=>=S::qystem (10.1) 1=======:::;-1 =====;» ;::::1

~ * r::::: ========::::J.
Fig. 10-1. Vector Feedback Control System

A further restriction must be made on the type of system ,to be controlled.


Definition 10.1: A regulator is a feedback control system in which the input d(t) == O.
For a regulator the only forcing terms are due to the nonzero initial conditions of the
=
state variables, x(to) Xo ¥ 0 in general. We shall only study regulators first because
(1) later the extension to servomechanisms (which follow a specified d(t)) will become easier,
(2) it turns out that the solution is the same if d(t) is white noise (the proof of this result
is beyond the scope of the book), and (3) many systems can be reduced to regulators.
Example 10.1.
Given a step input of height a, u(t)::::: aU(t), where the unit step U(t - to) = 0 -for t < to and
U(t - to) = 1 for t === to, into a scalar system with transfer function 1/(8 + f3) and initial state xo. But
this is equivalent to a system with a transfer function 1/[8(8 + f3)], with initial states a and Xo and with no
input. In other words, we can add an extra integrator with initial condition a at the input to the system
flow diagram to generate the step, and the resultant system is a regulator. Note, however, the input
becomes a state that must be measured and fed back if the system is to be of the form (10.1).
Under the restriction that we are designing a regulator, we require d(t) == o. Then the
input u(t) is the output of the box marked * in Fig. 10-1 and is the feedback control. We
shall assume u is then some function to be formed of the present state x(t) and time. This
is no restriction, because from condition 1 of Section 1.4 upon the representation of the state
of this deterministic system, the present state completely summarizes all the past history
of the abstract object. Therefore we need not consider controls u = u(x(7'), t) for to 6 7' ~ t,
but can consider merely u = u(x(t), t) at the start of our development.

208
CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 209

10.2 THE CRITERION FUNCTIONAL


We desire the system to be optimal, but must be very exact in the sense in which the
system is optimal. We must find a mathematical expression to measure how the system
must be optin1al in comparison with other systems. A great many factors influence the
~ngineering utility of a system: cost, reliability, consumer acceptance, etc. The factors
mentioned are very difficult to measure and put a single number on for purposes of com-
parison. Consequently in this introductory chapter we shall simply avoid the question by
considering optin1ality only in terms of system dynamic performance. It is still left to the
art of engineering, rather than the science, to incorporate unmeasurable quantities into
the criterion of optimality.
In terms of performance, the system response is usually of most interest. Response is
the time behavior of the output, as the system tries to follow the input. Since the input to
a regulator is zero, we wish to have the time behavior of the output, which in this case
is the state, go to zero from the initial condition Xo. In fact, for the purposes of this chapter
there exists a convenient means to assign a number to the distance that the response [X(7)
for to ~ T ~ t 1] is from O. Although the criterion need not be a metric, a metric on the
space of all time functions between to and tt will accomplish this. Also, if we are interested
only in how close we are to zero at time t l , a metric only on all X(tl) is desired. To obtain
a linear closed loop system, here we shall only consider the particular quadratic metric

l(x,O) = txT(tl) SX(tl) + ~ rtlxT(T) Q(T) X(T) dT


J to
where S is an n x n symmetric constant matrix and Q(T) is a n x n symmetric time-varying
matrix. If either S or Q(T), or both, are positive definite and the other one at least non-
negative definite then f(x,O) is a norm on the product space {x(t l), X(T)}. It can be shown
this requirement can be weakened to S, Q nonnegative definite if the system dxldt = A(t)x
with y = yQ(t) x is observable, but for simplicity we assume one is positive definite. The
exact form of Sand Q( T) is to be fixed by the designer at the outset. Thus a number is
assigned to the response obtained by each control law u(x(t), t), and the optimum system is
that whose control law gives the minimum p{x, 0).
The choice of Q(T) is dictated by the relative importance of each state over the time
interval to ~ t < t 1•
Example 10.2.
Consider the system of Problem 2.18, page 34, with the choice of state variables (i). If the angle of
attack 0: = rp - K6Z is to be kept small, we can minimize the integral of 0:2 = xTHTHx where H =
(0 -Ka 1 0). Then Q = HTH is nonnegative definite and we must choose S to be positive definite.
Furthermore, if 0: is of importance only during the first part of the missile's flight, Q might be chosen as
a function of time, such as Q(t) = HTHe- t •
The choice of S is dictated by the relative importance of each state at the final time, t 1•
Example 10.3.
Consider the missile of Problem 2.18, page 34, with -the choice of state variables (i), whose target is
stationed at z = O. Then it makes no difference what path the missile flies to arrive near z = 0 at
t = t l • Therefore choose Q = O. What matters is how small .z2(t1} is. Also, we do not want z(t l }, ¢(tl )
or .if> (t 1) to be too large, so choose

s =
(~
where El. E2' E3 are small fixed positive numbers to be -chosen by trial and error after finding the closed
loop system for each fixed Ei' If any Ei = 0, an unstable system could result, but might not.
210 INTRODUCTION TO OPTIMAL CONTROL [CHAP. 10

For the system (10.1), the choice of control u(x:(t), t) to minimize p(x, 0) is u =
-B-l(t) Xo 8(t - to) in the case where B(t) has an inverse. Then x(t) = 0 for all t> to and
p(x, 0) = 0, its minimum value. If B(t) does not have an inverse, the optimal control is a
sum of delta functions and their derivatives, such as equation (6.7), page 135, and can drive
x(t) to 0 almost instantaneously. Because it is very hard to mechanize a delta function,
which is essentially achieving infinite closed loop gain, this solution is unacceptable. We
must place a bound on the control. Again, to obtain the linear closed loop system, here we
shall only consider the particular quadratic
1 It!
"2 J to UT(T) R(T) U(T) dT

where R(T) is an m x m symmetric time-varying positive definite matrix to be fixed at the


outset by the designer. R is positive definite to assure each element of u is bounded. This
is the generalized control "energy", and will be added to p2(X, 0). The relative magnitudes
of IIQ(-r)11 and IIR(T)II are in proportion to the relative values of the response and control
energy. The larger IIQ(T)II is relative to IIR(T)II, the quicker the response and the higher the
gain of the system.

10.3 DERIVATION OF THE OPTIMAL CONTROL LAW


The mathematical problem statement is that we are given dx/dt = A(t)x + B(t)u and
want to minimize
v[x, uJ (10.2)

Here we shall give a heuristic derivation of the optimal control and defer the rigorous
derivation to Problem 10.1. Consider a real n-vector function of time p(t), called the costate
or Lagrange multiplier, which will obey a differential equation that we shall determine.
Note for any x and u obeying the state equation, pT(t)[A(t)x + B(t)u - dx/dt] = O. Adding
this null quantity to the criterion changes nothing.

v[x, u]

Integrating the term pTdx/dT by parts gives


v [x, u] = VI[X(t1)] + v2 [x] + v3[U]
where
V1[X(t 1)] = txT(t1)SX(t1) - XT(tl)P(tl) + XT(to)p(to) (10.3)

v
2
[x] = St 1
(i xTQx + xTAp + x.T dp/dT) d., (10.4)

v [U]: rt'(!uTRU + uTBTp) dT (10.5)


3
J to
Introduction of the costate p has permitted v[x, u] to be broken into vi'v 2 and VS' and heuris-
tically we suspect that v[x, u] will be minimum when VI'V 2 and Vs are each independently
minimized. Recall from calculus that when a smooth function attains a local minimum,
its derivative is zero. Analogously, we suspect that if VI is a minimum, then the gradient of
(10.3) with respect to x(t 1) is zero:
(10.6)

and that if v
2
is a minimum, then the gradient of the integrand of (10.4) with respect to
x is zero:
dp/dt = _AT(t)p - Q(t)xop (10.7)
CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 211

Here XOP(t) is the response x(t) using the optimal control law UOP(x(t), t) as the input to the
system (10.1). Consequently we define p(t) as the vector which obeys equations (10.6) and
(10.7). It must be realized that these steps are heuristic because the minimum of v might
not occur at the combined minima of VI' v 2 and v 3 ; a differentiable function need not result
from taking the gradient at each instant of time; and also taking the gradient with respect
to-x does not always lead to a minimum. This is why the final step of setting the gradient
of the integrand of (10.5) with respect to u equal to zero (equation (10.8)) does not give a
rigorous proof of the following theorem.

Theorem 10.1: Given the feedback system (10.1) with the criterion (10.2), having the
costate defined by (10.6) and (10.7). Then if a minimum exists, it is obtained
by the optimal control
(10.8)
The proof is given in Problem 10.1, page 220.
To calculate the optimal control, we must find p(t). This is done by solving the system
equation (10.1) using the optimal control (10.8) together with the costate equation (10.7),
!i (XOP) ( A(t) -B(t)R- 1(t)B T(t»)(X OP )
dt p \ -Q(t) _AT(t) P (10.9)
with xop(to) = Xo and p(tl) = SXOP(tl)'
Example lOA.
Consider the scalar time-invariant system dx/dt = 2x + u with criterion

v X 2 (t 1 ) +~ f
o
1
(3x 2 + u 2/4) dt
Then A = 2, B = 1, R = 1/4, Q = 3, S = 2. Hence we solve

2
( -3 -2
-4)(XOP) p

with xop(O) = x o, p(l) = 2x(1). U sing the methods of Chapter 5,

(X;;g») = [e~4t (! :) + e~4t (_! -4)](X


2
P(O»)
p(O)
O
(10.10)

Evaluating this at t = 1 and using p(l) = 2x op (1) gives


15e+ 8 + 1 (10.11)
p(O) 10e+8 - 2 Xo

Then from Theorem 10.1,


2e+ 4t + 30e8 - 4t (10.12)
1 - 5e+8 Xo

Note this procedure has generated u OP as a function of t and Xo. If Xo is known, storage
of UOP(xo, t) as a function of time only, in the memory of a computer controller, permits open
loop control; i.e. starting at to the computer generates an input time-function for dxop/dt =
Ax°P + BuOP(xo, t) and no feedback is needed.
However, in most regulator systems the initial conditions Xo are not known. By the
introduction of feedback we can find a control law such that the system is optimum for
arbitrary Xo. For purposes of illustration we will now give one method for finding the
feedback, although this is not the most efficient way to proceed for the problem studied in
this chapter. We can eliminate Xo from dxop/dt = Axop + BuoP(xQ, t) by solving for XOP(t)
in terms of Xo and t, i.e. XOP(t) = .,,(t; Xo, to) where q, is the trajectory of the optimal system.
Then XOP(t) = ,p(t; Xo, to) can be solved for Xo in terms of XOP(t) and t, Xo = xo(XOP(t), t). Sub-
stituting for Xo in the system equation then gives the feedback control system dxop/dt =
AxoP + BuOP(xOP(t), t).
212 INTRODUCTION TO OPTIMAL CONTROL [CHAP. 10

Example 10.5.
To find the feedback control for Example 10.4, from (10.10) and (10.11)

Xo (5eS-4t _ e + 4t)
5e+ 8 - 1

Solving this for Xo and substituting into (10.12) gives the feedback control
2 + 30e 8 (1-t)
1 - 5e8(1-t) XOp(t)

Hence what goes in the box marked * in Fig. 10-1 is the time-varying gain element K(t) where
2 + BOeS(l-t)
K(t) =:!
1- 5880 - 0

and the overall closed loop system is


4 + 20e80 - O
-1------::5,...-,88=(::-l----;-t~) ::cop

10.4 THE MATRIX RICCATI EQUATION


To find the time-varying gain matrix K(t) directly, let p(t) = P(t) XOP(t), Here P(t) is
an n X n Hermitian matrix to be found. Then UOP(x, t) = -R-1BTpxop so that K = -R-IBTP.
The closed loop system then becomes dxop/dt = (A - BR-IBTP)XOP , and call its transition
matrix qJcl(t, i). Substituting p = PXOP into the bottom equation of (10.9) gives
(dP/dt)xOP + P dxoP/dt = -QxoP - ATPXOP

Using the top equation of (10.9) for dxoP/dt then gives


o= (dP/dt + Q + ATp + PA - PBR-IBTp)xOP

But XOP(t) = ~el(t, to)xo, and since Xo is arbitrary and lite! is nonsingular, we find the n x n
matrix P must satisfy the matrix Riccati equation
(10,13)

This has the "final" condition P(tl) = S since P(tl) = P(tl) X(tl) = SX(tl). Changing inde-
pendent variables by T = tl - t, the matrix Riccati equation -becomes dP/dT = Q + ATp +
PA - PBR-lBTp where the arguments of the matrices are tl - T instead of t. The equation
can then be solved numerically on a computer from T = 0 tOT = tl - to, starting at the
initial condition P(O) = S. OccaSionally the matrix Riccati equation can also be solved
analytically.

Example 10,6.
For the system of Example 10.4, the 1 X 1 matrix Riccati equation is
-dP/dt == 3 + 4P -- 4P2
with P(l) = 2. Since this is separable for this example,
dP 1 P(t) -- 3/2 1 P(t) + 1/2
f
t
4(P - 3/2)(P + 1/2) == "8 In P(tl) -- 3/2 - SIn P(t1) + 1/2
tl

Taking antilogarithms and rearranging, after setting tl =:! 1, gives


15 + e8(t-U
pet) == 10 -- 2e8(t-l)

Then K(t) = -R-1BTP = -4P(t), which checks with the answer obtained in Example 10.5.
CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 213

Another method of solution of the matrix Riccati equation is to use the transition matrix
of (10.9) directly. Partition the ttansition matrix as
~ (~l1(t, i)
at ~21(t, T)
~12(t, T))
~22(t, T)
;: : (A
-Q
-BR-IBT)(~l1(t, i)
-AT IP 21 (t, T)
~12(t,
<)22(t, i)
T)) (10.14)

Then
X(t)) ;::::
( p(t)
Eliminating x(t 1) from this gives
p(t) ::::: P(t) x(t) ::::: [<)21(t, t 1) + ~22(t, tl)S][IPU(t, t 1) +CP12(t, t1)S]-lX(t) (10.15)
so that P is the product of the two bracketed matrices. A sufficient condition (not necessary)
for the existence of the solution P(t) of the Riccati equation is that the open loop transition
matrix does not become unbounded in to ~ t ~ t 1• Therefore the inverse in equation (10.15)
can always be found under this condition.
Example 10.7.
For Example 10.4, use of equation (lO.10) gives pet} from (10.15) as
[3e 4 (t-lJ - 3e- 4 (t-l) + 2(6e4 (t-l) + 2e- 4 (t-1)}]l8
pet) [2e 4 ( t - l ) + 6e- 4 (t-l) + 2(4e4 (t-l) - 4e- 4 (t-l)}]l8
in Example 10.6.
which reduces to the answer obtained
We have a useful check on the solution of the matrix Riccati equation.

Theorem 10.2: If S is positive definite and Q(t) at least nonnegative definite, or vice versa,
and R(t) is positive definite, then an optimum v[x, UOP] exists if and only if
the solution P(t) to the matrix Riccati equation (10.13) exists and is bounded
and positive definite for all t < t l • Under these conditions v[x, nOP] :::::
txT(tO) P(to) x(to).
Proof is given in Problem 10.1, page 220. It is evident from the proof that if both Sand
Q(t) are nonnegative definite, P(t) can be nonnegative definite if v[x, u OP] exists.

Theorem 10.3: For S(t), Q(t) and R(t) symmetric, the solution P(t) to the matrix Riccati
equation (10.13) is symmetric.
Proof: Take the transpose of (10.13), note it is identical to (10.13), and recall that
there is only one solution P(t) that is equal to S at time t l •
Note this means for an n X n P(t) that only n(n + 1)/2 equations need be solved on the
computer, because S, Q and R can always be taken symmetric. Further aids in obtaining
a solution for the time-varying case are given in Problems 10.20 and 10.21.

10.5 TIME ..INVARIANT OPTIMAL SYSTEMS


SO far we have obtained only time-varying feedback gain elements. The most important
engineering application is for time-invariant closed loop systems. Consequently in this
section we shall consider A, B, Rand Q to be constant, S = 0 and t 1 ...,. 00 to obtain a con-
stant feedback element K.
Because the existence of the solution of the Riccati equation is guaranteed if the open
loop transition matrix does not become unbounded in to ~ t ~ tIl in the limit as tl ~ 00 we
should expect no trouble from asymptotically stable open loop systems. However, we wish
to incorporate unstable systems in the following development, and need the following ex-
istence theorem for the limit as tl 00 of the solution to the Riccati equation, denoted by II.
-)0
214 INTRODUCTION TO OPTIMAL CONTROL [CHAP. 10

Theorem 10.4: If the states of the system (10-.1) that are not asymptotically stable are
controllable, then lim P(to; t 1 ) = n(to) exists and n(to) is constant and posi-
tive definite. tl -+ ""

Proof: Define a control U1(T) = _BT(T) cpT(to, T) W-l(to, t 2) x(to) for to ,.:=:: T ~ t21 similar
to that used in Problem 6.8, page 142. Note W drives all the controllable states to' zero at
the time t2 < 00. Let U1(T) = 0 for T> t 2. Defining the response of the asymptotically
stable states as Xas(t), then 12"" xIsQXas dt < 00. Therefore

v[x, nl] = r (xTQx + U1TRu1) dt + J~(r.<J xIsQas dt


J~
t2
< co

Note v[x, u 1] ~ a(to, t 2) XT(tO) x(to) after carrying out the integration, since both x(t) and
ul(t) are linear in x(to). Here a(to, t 2 ) is some bounded scalar function of to and t 2 • Then
from Theorem 10.2,
txT(tO) n(to) x(to) = v[x, nOP] ~ a(to, t 2) xT(tO) x(toY < co
Therefore Iln(to)112 ~ 2a(to, t 2 ) < co so n(to) is bounded. It can be shown that for S = 0,
Ps=o (to; t) ~ Ps=o (to; t l ) for to ~ t ~ t l , and also that when S > 0, then
lim IIPs>o (to; t) - Ps=o (to; t)11 = 0
t-+ 00

Therefore, lim P(to; tl) must be a constant because for tl large any change in P(to; tl) must
tl-+C()

be to increase P(to; t l ) until it hits the bound n(to).


Since we are now dealing with a time-invariant closed loop system, by Definition 1.8 the
time axis can be translated and an equivalent system results. Hence we can send to -7 -co,
start the Riccati equation at P(t1) = S =0 and integrate numerically backwards in time
until the steady state constant n is reached.
Example 10.8.
Consider the system dx/dt = u with criterion p = ~f·l(X2 + u 2 ) dt. Then the Riccati equation is'
J to
-dP/dt = 1- p2 with P(t 1) = O. This has a solution P(to) = tanh (t1 - to). Therefore
II = lim P(to; t l ) = lim tanh (t 1 - to) = lim tanh (tl - to) = 1
~-~ ~-~ ~--~
The optimal control is U op = -R-lBTIIx = -x.
Since II is a positive definite constant solution of the matrix Riccati equation, it satisfies
the quadratic algebraic equation
(10.16)
Example 10.9.
For the system of Example 10.8, the quadratic algebraic equation satisfied by II is 0 = 1- II2. This
has solutions ±1, so that II = 1 which is the only positive definite solution.
A very useful engineering property of the quadratic optimal system is that it is always
stable if it is controllable. Since a linear, constant coefficient, stable closed loop system
always results, often the quadratic criterion is chosen solely to attain this desirable feature.
Theorem 10.5: If II exists for the constant coefficient system, the closed loop system is
asymptotically stable.
Proof: Choose a Liapunov function V = xTrrx. Since n is positive definite, V> 0
for all nonzero x. Then
dV/dt = xTII(Ax- BR-IBTnx) + (Ax - BR-IBTnx)TIIx = -xTQx - xTnBR-lBTnx < 0
for all x ¥= 0 because Q is positive definite and nBR-tBTn is nonnegative definite.
CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 215

Since there are in general n(n + 1) solutions of (10.16), it helps to know that TI is the
only positive definite solution.
Theorem 10.6: The unique positive definite solution of (10.16) is n.
Proof: Suppose there exist two positive definite solutions TIl and II2. Then
Q-Q o (10.17)
Recall from Problem 9.6, that the equation XF + GX = K has a unique solution X when-
ever Ai(F) + Aj(G) =F 0 for any i and j, where Ai(F) is an eigenvalue of F and "AG) is an
eigenvalue of G. But A - BR~lBTTIi for i = 1 and 2 are stability matrices, from Theorem
10.5. Therefore the sum of the real parts of any combination of their eigenvalues must
be less than zero. Hence (10.17) has the unique solution III - ll2 = O.
Generally the equation (10.16) can be solved for very high-order systems, n ~ 50. This
gives another advantage over classical frequency domain techniques, which are not so easily
adapted to computer solution. Algebraic solution of (10.16) for large n is difficult, because
there are n(n + 1) solutions that must be checked. However, if a good initial guess P(t2)
is available, i.e. P(t2) = 8P(t2) + II where 8P(t2) is small, numerical solution of (10.13) back-
wards from any t2 to a steady state gives II. In other words, 8P(t) = P(t) - n tends to zero
as t -? -00, which is true because:
Theorem 10.7: If A, B, Rand Qare constant matrices, and R, Q and the initial state P(t2)
are positive definite, then equation (10.13) is globally asymptotically stable
as t -00 relative to the equilibrium state II.
--,)0

Proof: BP obeys the equation (subtracting (10.16) from (10.13))


-d8P/dt = F T8P + 8P F -8P BR-IBT 8P
where F:;:::: A - BR-IBTn. Choose as a Liapunov function 2V = tr[8PT (Iln T)-18P]. For
brevity, we investigate here only the real scalar case. Then 2F = -Qrr- 1 - B2R- 1rr and
2V = rr- 2 SP2, so that dV /dt = n~2 sP d8P/dt = n- 2 Sp2 (Qrr- l + B2R~IP) > 0 for all non-
zero sP since Q, R, P and II are all > O. Hence V 0 as t -00, and SP O. It can be
--,)0 --,)0 --,)0

shown in the vector case that dV/dt > 0 also.


Example 10.10.
For the system of Example 10.8, consider the deviation due to an incorrect initial condition, i.e.
suppose P(t l ) = € instead of the correct value P(t l ) = O. Then P(to) = tanh (tl - to + tanh- l €). For
any finite €, lim P(t o) = 1 = II.
to- -00
Experience has shown that for very high-order systems, the approximations inherent
in numerical solution (truncation and roundoff) often lead to an unstable solution for bad
initial guesses, in spite of Theorem 10.7. Another method of computation of n will now
be investigated.
Theorem 10.8: If Ai is an eigenvalue of H, the constant 2n x 2n matrix corresponding to
(10.9), then -xf is also an eigenvalue of H.
Proof: Denote the top n elements of the ith eigenvector of H as fi and the bottom n
elements as gi. Then

Ai (!:) = (10.18)

or
\gi -Qfi - ATgi

or -AI Ct) = (-B::'B :::~)(~~,)


T
216 INTRODUCTION TO OPTIMAL CONTROL [CHAP~' 10

This shows -Ai is an eigenvalue of HT. Also, since detM = (detMT)* for any matrix
M, then for any ~ such that det (H - ~I) = 0 we find that det (HT - ~I) = O. This shows that
if ~* is an eigenvalue of HT, then ~ is an eigenvalue of H. Since -Ai is an eigenvalue of
HT, it is concluded that -At is an eigenvalue of H.
This theorem shows that the eigenvalues of H are placed symmetrically with regard
to the jw axis in the complex plane. Then H has at most n eigenvalues with real parts < 0,
and will have exactly n unless some are purely imaginary.
Example 10.11.
The H matrix corresponding to Example 10.8 is H =
which are symmetric with respect to the imaginary axis. (-10-1).
0
This has eigenvalues +1 and -1,

Example 10.12.
A fourth-order system might give rise to an 8 X 8 H having eigenvalues placed as shown in Fig. 10-2.

1m]..

x x

---~--------~--~--~~-----~~-ReA

x x

Fig. 10-2

The factorization into eigenvalues with real parts < 0 and real parts> 0 is another way
of looking at the Wiener factorization of a polynomial p{(2) into p+(w) p-(w).

Theorem 10.9: If AI, A2, ••. , An are n distinct eigenvalues of H having real parts < 0, then
n = GF-1 where F = (£1 j £21 ... j in) and G = (gtl g2j ... Ign) as defined
in equation (10.18).

Proof: Since n is the unique Hermitian positive definite solution of (10.16), we will
show first that GF-1 is a solution to (10.16), then that it is Hermitian, and finally that it is
positive definite.
Define an n X n diagonal matrix A with AI, A2, •.• , An its diagonal elements so that from
the eigenvalue equation (10.18),
, A -BR-IBT)(F)
( -Q _AT G
from which
FA AF - BR-IB TG (10.19)
GA = -QF - ATG (10.20)
Premultiplying (10.19) by F-l and substituting for A in (10.20) gives
GF-1AF - GF-IBR-IBTG = -QF - ATG
Postmultiplying by F-l shows GF-1 satisfies (10.16).
Next, we show GF-l is Hermitian. It suffices to show M = FtG is Hermitian, since
GF-l = Ft- 1MF-l is then Hermitian. Let the elements of M be mjk = gk; for j =F k, ff
CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 217

~)(!:) + (~)t (_~ ~) AkG:) }


~) + (_~ ~)H }(!:)
Since the term in braces equals 0, we have m jk = mk.j and thusGF-I is Hermitian.
Finally, to show GF-I is positive definite, define two n x n matrix functions of time as
8(t) = FeAtF-I and ,,(t) = GeAtF-I. Then 8(co) = 0 = ,,(co). Using. (10.19) and (10.20),
d8/dt (AF - BR-1BTG)eAtF-I
d,,/dt = -(QF + ATG)eAtF-l
Then
GF-I 9 t (0) ,,(0)
so that
i«) (eAtF-l)t(FtQF + GtBR-IBTG)(eAtF-l) dt
Since the integrand is positive definite, GF-l is positive definite, and GF-l = II.

Corollary 10.10: The closed loop response is x(t) = FeACt-to)F-lxo with costate p(t)
GeACt-to) F-l Xo.
The proof is similar to the proof that GF-l is positive definite.
Corollary 10.11: The eigenvalues of the closed loop matrix A - BR-l BTU are AI, A2, ••• , An
and the eigenvectors of A - BR-l BTn are fl' f2, ... , f n •
The proof follows immediately from equation (10.19). Furtherlnore, since AI, A2, •.• , An
are assumed distinct, then from Theorem 4.1 we know f1, f2, ... , fn are linearly independent,
so that F-l always exists. Furthermore, Theorem 10.5 assures Re Ai < 0 so no "-i can be
imaginary.
Example 10.13.
The H matrix corresponding to the system of Example 10.8 has eigenvalues -1 = Al and +1 = A2'
Corresponding to these eigenvalues,

and

where a and f3 are any nonzero constants. Usually, we would merely set a and {3 equal to one, but for
purposes of instruction we do not here. We discard A2 and its eigenvector since it has a real part > 0,
and form F;:::: It = a and G = 01 = a. Then II = GF-l = 1 because the a's cancel. From Problem 4.41,
in the vector case F = FoK and G = GoK where K is the diagonal matrix of arbitrary constants asso-
ciated with each eigenvector, but still D = GF-I = G oK(F oK)-l = GoFo.
Use of an eigenvalue and eigenvector routine to calculate n from Theorem 10.9 has
given results for systems of order n ~ 50. Perhaps the best procedure is to calculate an
approximate 110 = GF-l using eigenvectors, and next use Re (no + n~)/2 as an initial guess
to the Riccati equation (10.13). Then the Riccati equation stability properties (Theorem
10.7) will reduce any errors in 110, as well as provide a check on the eigenvector calculation.

10.6 OUTPUT FEEDBACK


Until now we have considered only systems in which the output was the state, y = Ix.
For y = Ix, all the states are available for measurement. In the general case y = C(t)x,
the states must be reconstructed from the output. Therefore we must assume the· observa-
hility of the closed loop system dx/dt = F(t)x where F(t) = A(t) - B(t) R-I(t) BT(t) P(t; tl).
218 INTRODUCTION TO OPTIMAL CONTROL [CHAP. 10

To reconstruct the state from the output, the output is differentiated n -1 times.
y = C(t)x = Nl(t)X
dy/dt = Nl(t)dx/dt + dN /dtx
1 = (NIF+dNddt)x = N2x

d n- 1y/dtn- 1 = (Nn - 1F + dNn-ddt)x = Nnx


where NT = (Ni I ..• IN~) is the observability matrix defined in Theorem 6.11. Define a
nk x k matrix of differentiation operators H(d/dt) by H(d/dt) = (I Ild/dt I ..• \Idn-1/dtn - 1).
Then x = N-I(t) H(d/dt)y. Since the closed loop system is observable, N has rank n. From
Property 16 of Section 4.8, page 87, the generalized inverse N-r has rank n. Using the
results of Problem 3.11, page 63, we conclude the n-vector x exists and is uniquely de-
termined. The optimal control is then u = R-1BTPN-IHy.
Example 10.14.
Given the system
y = (1 O)x

with optimal control u = k1xl + k2X2' Since y = Xl and dx/dt = X2, the optimal control in terms of the
output is u = k1y + k2 dy/dt.

x y
System (10.1) I C
I
I I

-R-IBTp
State
estimator

Fhr.l0-3

Since u involves derivatives of y, it may appear that this is not very practical. This
mathematical result arises because we are controlling a deterministic system in which there
is no noise, i.e. in which differentiation is feasible.
However, in most cases the noise is such that the probabilistic nature of the system must
be taken into account. A result of stochastic optimization theory is that under certain cir-
cumstances the best estimate of the state can be used in place of the state in the optimal
control and still an optimum is obtained (the "separation theorem"). An estimate of each
state can be obtained from the output, so that structure of the optimal controller is as shown
in Fig. 10-3.

10.7 THE SERVOMECHANISM PROBLEM


Here we shall discuss only servomechanism problems that can be reduced to regulator
problems. We wish to find the optimum compensation to go into the box marked * * in
Fig. 10-4.

o -e ** L >1 System (10.1) x; I c


y
;>
d---+:~~~~~~!~1~.~~~~1~·~~~11
U

Fig.l0~4. The Servomechanism Problem


CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 219

The criterion to be lllinimized is


leT(tl) Se(tl) + -
1
2J
r to
1'1
[eT(T) Q(T) e(T) + UT(T) R(T) U(T)] dT (10.21)

Note when e(t) = y(t) - d(t) is minimized, y(t) will follow d(t) closely.
To reduce this problem to a regulator, we consider only those d(t) that can be generated
= =
by arbitrary z(to) in the equation dz/dt A(t)z, d C(t)z. The coefficients A(t) and C(t)
are identical to those of the open loop system (10.1).
Example 10.15.
GiYen the open loop system

dx
dt ~o ~)x + (:~)u
-2 b3
y = (1 1 l)x

Then we consider only those inputs d(t) = ZlO + Z20(t + 1) + zsoe- 2t • In other words, we can consider as
inputs only ramps, steps and e- 2t functions and arbitrary linear combinations thereof.
Restricting d(t) in this manner permits defining new state variables w =x - z. Then
dw/dt = A(t)w + B(t)u e = C(t)w (10.22)
Now we have the regulator problem (10.22) subject to the criterion (10.21), and solution
of the matrix Riccati equation gives the optimal control 1:1S u = -R-IBTPW. The states w
can be found from e and its n -1 derivatives as in Section 10.6, so the content of the box
marked * * in Fig. 10-4 is R-IBTPN-IH.
The requirement that the input d(t) be generated by the zero-input equation dz/dt =
A(t)z has significance in relation to the error constants discussed in Section 8.3. Theorem
8.1 states that for unity feedback systems, such as that of Fig. 10-4, if the zero output is
asymptotically stable then the class of inputs d(t) that the system can follow (such that
lim e(t) = 0) is generated by the equations dw/dt = A(t)w + B(t)g and d(t) = C(t)w + g
t-CQ
where g(t) is any function such that lim g(t) = O. Unfortunately, such an input is not
t ...... «>
reducible to the regulator problem in general. However, taking g == 0 assures us the
system can follow the inputs that are reducible to the regulator problem if the closed loop
zero output is asymptotically stable.
Restricting the discussion to time-invariant systems gives us the assurance that the
closed loop zero output is asymptotically stable, from Theorem 10.5. If we further restrict
the discussion to inputs of the type d i = (t - to)lU(t - to)ei as in Definition 8.1, then Theorem
8.2 applies. Then we must introduce integral compensation when the open loop transfer
function matrix H(s) is not of the correct type l to follow the desired input.
Example 10.16.
Consider a system with transfer function G(s) in which G(0) ≠ ∞, i.e. it contains no pure integrations.
We must introduce integral compensation, 1/s. Then the optimal servomechanism to follow a step input
is as shown in Fig. 10-5, where the box marked ** contains R^{-1}b^T Π N^{-1}H(s). This is a linear combination
of 1, s, ..., s^n since the overall system G(s)/s is of order n + 1. Thus we can write the contents of **
as k1 + k2 s + ... + kn s^n. The compensation for G(s) is then k1/s + k2 + k3 s + ... + kn s^{n-1}. In a noisy
environment the differentiations cannot be realized and are approximated by a filter, so that the compen-
sation takes the form of integral plus proportional plus a filter.
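The construction can be spot-checked numerically. The following minimal sketch (not from the text; the plant G(s) = 1/(s + 2), all weights, and the step size are assumed values) augments the plant with an integrator, solves the algebraic Riccati equation for the gains, and simulates the closed loop to confirm the step is followed with zero steady-state error:

```python
# Hedged sketch of Example 10.16 with assumed numbers: G(s) = 1/(s + alpha).
import numpy as np
from scipy.linalg import solve_continuous_are
from scipy.integrate import solve_ivp

alpha, d0 = 2.0, 1.0
A = np.array([[0.0, 0.0], [1.0, -alpha]])   # w1 = integrator state, w2 = error
b = np.array([[1.0], [0.0]])                # the new input is mu = du/dt
Pi = solve_continuous_are(A, b, np.diag([0.0, 1.0]), np.array([[1.0]]))
k = (b.T @ Pi).ravel()                      # mu = -(k1 w1 + k2 w2)

def closed_loop(t, w):                      # error coordinates w = x - z
    return (A - b @ k[None, :]) @ w

w0 = [-alpha * d0, -d0]                     # x(0) = 0, z generates the step d0
sol = solve_ivp(closed_loop, [0.0, 10.0], w0)
print("final error e =", sol.y[1, -1])      # -> 0: y follows the step
```

The gains k1 and k2 are the integral and proportional parts of the compensation k1/s + k2 described above.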

[Fig. 10-5: step input d(t); the error e(t) drives the box ** to produce the control u(t) for G(s), whose output y is closed through unity feedback.]

10.8 CONCLUSION
In this chapter we have studied the linear optimal control problem. For time-invariant
systems, we note a similarity to the pole placement technique of Section 8.6. We can take
our choice as to how to approach the feedback control design problem. Either we select
the pole positions or we select the weighting matrices Q and R in the criterion. The equiva-
lence of the two methods is manifested by Corollary 10.11, although the equivalence is not
one-to-one because analysis similar to Problem 10.8 shows that a control u = k^T x is optimal
if and only if |det(jωI - A - bk^T)| ≥ |det(jωI - A)| for all ω. The dual of this equivalence
is that Section 8.7 on observers is similar to the Kalman-Bucy filter and the algebraic separa-
tion theorem of Section 8.8 is similar to the separation theorem of stochastic optimal control.
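This frequency-domain condition is easy to verify numerically. The sketch below (the system and weights are assumptions, not from the text) solves an LQ problem and checks the inequality at a few frequencies:

```python
# Hedged numerical check of |det(jwI - A - b k^T)| >= |det(jwI - A)|
# for an LQ-optimal gain; all data assumed.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
b = np.array([[0.0], [1.0]])
P = solve_continuous_are(A, b, np.eye(2), np.array([[1.0]]))
k = (b.T @ P).ravel()                      # optimal control u = -k x  (R = 1)

for w in [0.0, 0.5, 1.0, 2.0, 10.0]:
    M_ol = 1j * w * np.eye(2) - A          # jwI - A
    M_cl = M_ol + b @ k[None, :]           # jwI - (A - b k): closed loop
    assert abs(np.linalg.det(M_cl)) >= abs(np.linalg.det(M_ol)) - 1e-9
```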

Solved Problems
10.1. Prove Theorem 10.1, page 211, and Theorem 10.2, page 213.
The heuristic proof given previously was not rigorous for the following six reasons.
(1) By minimizing under the integral sign at each instant of time, i.e. with t fixed, the resulting
minimum is not guaranteed continuous in time. Therefore the resulting optimal functions
XOp(t) and pet) may not have a derivative and equation (10.7) would make no sense.
(2) The open loop time function u(t) was found, and only later related to the feedback control u(x, t).
(3) We wish to take piecewise continuous controls into account.
(4) Taking derivatives of smooth functions gives local maxima, minima, or inflection points.
We wish to guarantee a global minimum.
(5) We supposed the minimum of each of the three terms v1, v2, and v3 gave the minimum of v.
(6) We said in the heuristic proof that if the function were a minimum, then equations (10.6), (10.7)
and (10.8) held, i.e. were necessary conditions for a minimum. We wish to give sufficient condi-
tions for a minimum: we will start with the assumption that a certain quantity vex, t) obeys a
partial differential equation, and then show that a minimum is attained.
To start the proof, call the trajectory x(t) = φ(t; x(t0), t0) corresponding to the optimal system
dx/dt = Ax + Bu^{op}(x, t) starting from t0 and initial condition x(t0). Note φ does not depend on u
since u has been chosen as a specified function of x and t. Then v[x, u^{op}(x, t)] can be evaluated
if φ is known, simply by integrating out t and leaving the parameters t1, t0 and x(t0). Symbolically,

    v[x, u^{op}] = ½ φ^T(t1; x(t0), t0) S φ(t1; x(t0), t0)
        + ½ ∫_{t0}^{t1} [φ^T(τ; x(t0), t0) Q(τ) φ(τ; x(t0), t0) + (u^{op})^T(φ, τ) R u^{op}(φ, τ)] dτ

Since we can start from any initial conditions x(t0) and initial time t0, we can consider v[x, u^{op}] =
v(x(t0), t0; t1), where v is an explicit function of the n + 1 variables x(t0) and t0, depending on the fixed
parameter t1.
Suppose we can find a solution v(x, t) to the (Hamilton-Jacobi) partial differential equation

    ∂v/∂t + ½x^T Qx − ½(grad_x v)^T BR^{-1}B^T grad_x v + (grad_x v)^T Ax = 0      (10.23)

with the boundary condition v(x(t1), t1) = ½x^T(t1) S x(t1). Note that for any control u(x, t),

    ½u^T Ru + ½(grad_x v)^T BR^{-1}B^T grad_x v + (grad_x v)^T Bu
        = ½(u + R^{-1}B^T grad_x v)^T R (u + R^{-1}B^T grad_x v) ≥ 0

where the equality is attained only for

    u(x, t) = −R^{-1}B^T grad_x v      (10.24)

Rearranging the inequality and adding ½x^T Qx to both sides,

    ½x^T Qx + ½u^T Ru ≥ ½x^T Qx − ½(grad_x v)^T BR^{-1}B^T grad_x v − (grad_x v)^T Bu

Using (10.23) and dx/dt = Ax + Bu, we get

    ½x^T Qx + ½u^T Ru ≥ −[∂v/∂t + (grad_x v)^T(Ax + Bu)] = −(d/dt) v(x(t), t)



Integration preserves the inequality, except for a set of measure zero:

    ½ ∫_{t0}^{t1} (x^T Qx + u^T Ru) dt ≥ v(x(t0), t0) − v(x(t1), t1)

Given the boundary condition on the Hamilton-Jacobi equation (10.23), the criterion v[x, u] for any
control is

    v[x, u] = ½x^T(t1) S x(t1) + ½ ∫_{t0}^{t1} (x^T Qx + u^T Ru) dt ≥ v(x(t0), t0)

with equality for the control (10.24), so that if v(x(t0), t0) is nonnegative definite, then it is the cost due to the optimal control (10.24):

    u^{op}(x, t) = −R^{-1}(t) B^T(t) grad_x v(x, t)

To solve the Hamilton-Jacobi equation (10.23), set v(x, t) = ½x^T P(t)x where P(t) is a time-
varying Hermitian matrix to be found. Then grad_x v = P(t)x and the Hamilton-Jacobi equation
reduces to the matrix Riccati equation (10.13). Therefore the optimal control problem has been
reduced to the existence of positive definite solutions for t < t1 of the matrix Riccati equation.
Using more advanced techniques, it can be shown that a unique positive definite solution to the
matrix Riccati equation always exists if ‖A(t)‖ < ∞ for t < t1. We say positive definite because

    v(x(t0), t0) = ½x^T(t0) P(t0) x(t0) > 0

for all t0 < t1 and all nonzero x(t0), which proves Theorem 10.2. Since we know from Section 10.4 that the matrix
Riccati equation is equivalent to (10.9), this proves Theorem 10.1.
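This reduction is straightforward to carry out numerically. A minimal sketch (the system, weights and horizon are all assumed here, not taken from the text) integrates the matrix Riccati equation backward from P(t1) = S with a standard ODE solver:

```python
# Hedged sketch: backward integration of -dP/dt = Q + A^T P + P A - P B R^{-1} B^T P.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [0.0, 0.0]])      # assumed double-integrator plant
B = np.array([[0.0], [1.0]])
Q = np.eye(2); R = np.array([[1.0]]); S = np.zeros((2, 2))
t1 = 5.0

def riccati(t, p):
    P = p.reshape(2, 2)
    dP = -(Q + A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P)
    return dP.ravel()

sol = solve_ivp(riccati, [t1, 0.0], S.ravel())   # integrate backward in time
P0 = sol.y[:, -1].reshape(2, 2)
print("P(0) =", P0)   # optimal feedback at t = 0 is u = -R^{-1} B^T P(0) x
```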

10.2. Find the optimal feedback control for the scalar time-varying system dx/dt = -x/2t + u/t
to minimize

    v = ½ ∫_{t0}^{t1} (x² + u²) dt

Note the open loop system has a transition matrix φ(t, τ) = (τ/t)^{1/2}, which escapes at t = 0.
The corresponding Riccati equation is

    −dP/dt = 1 − P/t − P²/t²

with boundary condition P(t1) = 0. This has a solution

    P(t) = t(t1² − t²)/(t1² + t²)

Then P(t) > 0 for 0 < t < t1 and for t < t1 ≤ 0. However, for −t1 < t < 0 < t1, the interval
in which the open loop system escapes, P(t) < 0. Hence we do not have a nonnegative solution of
the Riccati equation, and the control is not the optimal one in this interval. This can only happen
when ‖A(t)‖ is not bounded in the interval considered.
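The closed form above admits a quick numerical check (t1 = 2 is an assumed value; the integration stops short of the escape point t = 0):

```python
# Hedged check: integrate -dP/dt = 1 - P/t - P^2/t^2 backward from P(t1) = 0
# and compare with P(t) = t(t1^2 - t^2)/(t1^2 + t^2) on 0 < t < t1.
import numpy as np
from scipy.integrate import solve_ivp

t1 = 2.0
rhs = lambda t, P: -(1.0 - P[0] / t - P[0]**2 / t**2)   # dP/dt
sol = solve_ivp(rhs, [t1, 0.1], [0.0], dense_output=True, rtol=1e-10)

ts = np.linspace(0.1, t1, 20)
closed_form = ts * (t1**2 - ts**2) / (t1**2 + ts**2)
print(np.max(np.abs(sol.sol(ts)[0] - closed_form)))     # small residual
```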

10.3. Given the nonlinear scalar system dy/dt = y³d. An open loop control d(t) = −t has
been found (perhaps by some other optimization scheme) such that the nominal
system dy_n/dt = y_n³ d with initial condition y_n(0) = 1 will follow the nominal path
y_n(t) = (1 + t²)^{-1/2}. Unfortunately the initial condition is not exactly one, but y(0) =
1 + ε where ε is some very small, unknown number. Find a feedback control that is
stable and will minimize the error y(t1) − y_n(t1) at some time t1.

Call the error y(t) − y_n(t) = x(t) at any time. Consider the feedback system of Fig. 10-6 below.
The equations corresponding to this system are

    dy/dt = dy_n/dt + dx/dt = (y_n + x)³(d + u)

Assume the error |x| ≪ |y| and corrections |u| ≪ |d| so that

    |3y_n²xd + y_n³u| ≫ |3y_n x²d + x³d + 3y_n²xu + 3y_n x²u + x³u|      (10.25)

[Fig. 10-6: the nominal control d(t) plus the feedback correction u drives the nonlinear plant; the output y(t) is compared with y_n(t) to form the error x(t).]

Since dy_n/dt = y_n³ d, we have the approximate relation

    dx/dt = −3t/(1 + t²) x + (1 + t²)^{-3/2} u

We choose to minimize

    v = ½x²(t1) + ½ ∫_0^{t1} R(t) u²(t) dt

where R(t) is some appropriately chosen weighting function such that neither x(t) nor u(t) at any
time t gets so large that the inequality (10.25) is violated. Then K(t) = −P(t)/[(1 + t²)^{3/2} R(t)]
where P(t) is the solution to the Riccati equation

    −dP/dt = −6tP/(1 + t²) − P²/[(1 + t²)³ R(t)]

with P(t1) = 1. This Riccati equation is solved numerically on a computer from t1 to 0. Then
the time-varying gain function K(t) can be found and used in the proposed feedback system.
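The numerical step just described might look as follows (a sketch only; the horizon t1 = 1 and the weighting R(t) = 1 are assumptions):

```python
# Hedged sketch: backward integration of the scalar Riccati equation of
# Problem 10.3, then the time-varying gain K(t).
import numpy as np
from scipy.integrate import solve_ivp

t1, R = 1.0, 1.0
def dPdt(t, P):   # from -dP/dt = -6tP/(1+t^2) - P^2/((1+t^2)^3 R)
    return 6.0 * t * P[0] / (1 + t**2) + P[0]**2 / ((1 + t**2)**3 * R)

sol = solve_ivp(dPdt, [t1, 0.0], [1.0], dense_output=True, rtol=1e-10)
ts = np.linspace(0.0, t1, 6)
K = -sol.sol(ts)[0] / ((1 + ts**2)**1.5 * R)
print(list(zip(ts, K)))   # the gain used in the feedback u = K(t) x
```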

10.4. Given the open loop system with a transfer function (s + α)^{-1}. Find a compensation
to minimize

    v = ½ ∫_0^∞ [(y − d0)² + (du/dt)²] dt

where d0 is the value of an arbitrary step input.

Since the closed loop system must follow a step, and has no pure integrations, we introduce
integral compensation. The open loop system in series with a pure integration is described by
dx2/dt = −αx2 + u, and defining x1 = u and du/dt = μ gives dx1/dt = du/dt = μ so that

    dx/dt = [0   0] x + [1] μ        y = (0  1)x
             [1  −α]     [0]

Since an arbitrary step can be generated by this, the equations for the error become

    dw/dt = [0   0] w + [1] μ        e = (0  1)w
             [1  −α]     [0]

subject to minimization of v = ½ ∫_0^∞ (e² + μ²) dt. The corresponding matrix Riccati equation is

    0 = [0  0] + [0   1] Π + Π [0   0] − Π [1  0] Π
        [0  1]   [0  −α]        [1  −α]      [0  0]

Using the method of Theorem 10.9 gives the solution Π, and the optimal control is then
μ = −λ(w1 + γw2), where λ = Π11 and γ = Π12/Π11 are positive constants determined by α.
Since e = w2 and w1 = αe + de/dt, the closed loop system is as shown in
Fig. 10-7 below.

[Fig. 10-7: unity feedback loop; the step input minus y gives −e, which drives the compensation λ(γ + α + s) producing μ, followed by the plant 1/(s(s + α)) with output y.]
Figure 10-8 is equivalent to that of Fig. 10-7 so that the compensation introduced is integral
plus proportional.

[Fig. 10-8: step input into the integral-plus-proportional compensation λ(1 + (α + γ)/s), followed by the plant 1/(s + α), with unity feedback.]
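A numerical sketch of this problem (α = 1 is an assumed value) solves the algebraic Riccati equation written above and reads off λ and γ:

```python
# Hedged sketch of Problem 10.4: Pi from the ARE, gains of mu = -lambda(w1 + gamma w2).
import numpy as np
from scipy.linalg import solve_continuous_are

alpha = 1.0
A = np.array([[0.0, 0.0], [1.0, -alpha]])
b = np.array([[1.0], [0.0]])
Pi = solve_continuous_are(A, b, np.diag([0.0, 1.0]), np.array([[1.0]]))

lam = Pi[0, 0]
gamma = Pi[0, 1] / Pi[0, 0]    # the (1,1) Riccati entry gives Pi11^2 = 2 Pi12
print("lambda =", lam, " gamma =", gamma)
print("closed-loop poles:", np.linalg.eigvals(A - b @ (b.T @ Pi)))
```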

10.5. Given the single-input time-invariant system in partitioned Jordan canonical form

    d/dt [zc] = [Jc  0 ] [zc] + [b] u
         [zd]   [0   Jd] [zd]   [0]

where zc are the j controllable states, zd are the n − j uncontrollable states, Jc and Jd
are j × j and (n−j) × (n−j) matrices corresponding to real Jordan form, Section 7.3,
b is a j-vector that contains no zero elements, u is a scalar control, and the straight
lines indicate the partitioning of the vectors and matrices. Suppose it is desired to
minimize the quadratic criterion

    v = ½ ∫_{t0}^∞ [zc^T Qc zc + ρ^{-1}u²] dτ

Here ρ > 0 is a scalar, and Qc is a positive definite symmetric real matrix. Show that the optimal
feedback control for this regulator problem is of the form u(t) = −k^T zc(t) where
k is a constant j-vector, and no uncontrollable states zd are fed back. From the results
of this, indicate briefly the conditions under which a general time-invariant single-
input system that is in Jordan form will have only controllable states fed back; i.e.
the conditions under which the optimal closed loop system can be separated into con-
trollable and uncontrollable parts.

The matrix Π satisfies equation (10.16). For the case given,

    A = [Jc  0 ]      B = [b]      Q = [Qc  0]      R^{-1} = ρ
        [0   Jd]          [0]          [0   0]

and we partition

    Π = [Πc     Πcd]
        [Πcd^T  Πd ]

Then u = −ρB^T Πz = −ρ(b^T Πc zc + b^T Πcd zd). Now if Πc can be shown to be constant and Πcd can be shown 0, then k^T = ρb^T Πc. But from
(10.16),

    0 = [Qc  0] + [Jc^T  0   ] Π + Π [Jc  0 ] − ρ Π [bb^T  0] Π
        [0   0]   [0     Jd^T]        [0   Jd]       [0     0]

Within the upper left-hand partition,

    0 = Jc^T Πc + Πc Jc − ρΠc bb^T Πc + Qc

the matrix Riccati equation for the controllable system, which has a constant positive definite
solution Πc. Within the upper right-hand corner,

    0 = Jc^T Πcd + Πcd Jd − ρΠc bb^T Πcd + 0

This is a linear equation in Πcd and has the solution Πcd = 0 by inspection, and thus u = −ρb^T Πc zc.
Now a general system dx/dt = Ax + Bu can be transformed to real Jordan form by x = Tz, where
T^{-1}AT = J. Then the criterion

    v = ½ ∫_{t0}^∞ (x^T Qx + ρ^{-1}u²) dτ = ½ ∫_{t0}^∞ (z^T T^T QTz + ρ^{-1}u²) dτ

gives the matrix Riccati equation shown for z, and zd is not fed back if

    T^T QT = [Qc  0]
             [0   0]

i.e. only the con-
trollable states are weighted in the criterion. Otherwise uncontrollable states must be fed back.
If the uncontrollable states are included, v diverges if any are unstable, but if they are all stable
the action of the controllable states is influenced by them and v remains finite.
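The separation can be seen numerically. In the sketch below (all numbers assumed) Q weights only the controllable block, and the resulting gain has a zero entry on the uncontrollable state:

```python
# Hedged sketch of Problem 10.5: with Q weighting only z_c, Pi_cd = 0 and
# the optimal gain feeds back no uncontrollable state.
import numpy as np
from scipy.linalg import solve_continuous_are

Jc = np.array([[-1.0, 1.0], [0.0, -1.0]])   # controllable Jordan block
Jd = np.array([[-3.0]])                     # stable uncontrollable block
A = np.block([[Jc, np.zeros((2, 1))], [np.zeros((1, 2)), Jd]])
B = np.array([[1.0], [1.0], [0.0]])         # b has no zero elements; zeros below
Q = np.diag([1.0, 1.0, 0.0])                # only z_c weighted in the criterion
R = np.array([[1.0]])                       # rho = 1

Pi = solve_continuous_are(A, B, Q, R)
k = (B.T @ Pi).ravel()
print("k =", k)                             # last component ~ 0: z_d not fed back
```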

10.6. Given the controllable and observable system

    dx/dt = A(t)x + B(t)u,        y = C(t)x      (10.26)

It is desired that the closed loop response of this system approach that of an ideal or
model system dw/dt = L(t)w. In other words, we have a hypothetical system
dw/dt = L(t)w and wish to adjust u such that the real system dx/dt = A(t)x + B(t)u
behaves in a manner similar to the hypothetical system. Find a control u such that
the error between the model and plant output vector derivatives becomes small, i.e.
minimize

    v = ½ ∫_{t0}^{t1} [(dy/dt − Ly)^T Q (dy/dt − Ly) + u^T Ru] dt      (10.27)

Note that in the case A, L = constants and B = C = I, R = 0, this is equivalent to
asking that the closed loop system dx/dt = (A + BK)x be identical to dx/dt = Lx
and hence have the same poles. Therefore in this case it reduces to a pole placement
scheme (see Section 8.6).
Substituting the plant equation (10.26) into the performance index (10.27),

    v = ½ ∫_{t0}^{t1} {[(Ċ + CA − LC)x + CBu]^T Q [(Ċ + CA − LC)x + CBu] + u^T Ru} dt      (10.28)

This performance index is not of the same form as criterion (10.2) because cross products of x
and u appear. However, define M(t) = dC/dt + CA − LC and let û = u + (R + B^T C^T QCB)^{-1} B^T C^T QMx. Since R is positive
definite, z^T Rz > 0 for any nonzero z. Since B^T C^T QCB is nonnegative definite, 0 < z^T Rz + z^T B^T C^T QCBz =
z^T(R + B^T C^T QCB)z so that R̂ = R + B^T C^T QCB is positive definite and hence its inverse always exists.
Therefore the control û can always be found in terms of u and x. Then the system (10.26) becomes

    dx/dt = Âx + Bû      (10.29)

and the performance index (10.28) becomes

    v = ½ ∫_{t0}^{t1} (x^T Q̂x + û^T R̂û) dt      (10.30)

in which, defining K(t) = R̂^{-1}B^T C^T QM,

    Â = A − BK,    Q̂ = (M − CBK)^T Q(M − CBK) + K^T RK    and    R̂ = R + B^T C^T QCB

Since R̂ has been shown positive definite and Q̂ is nonnegative definite by a similar argument, the
regulator problem (10.29) and (10.30) is in standard form. Then û = −R̂^{-1}B^T Px is the optimal
solution to the regulator problem (10.29) and (10.30), where P is the positive definite solution to

    −dP/dt = Q̂ + Â^T P + PÂ − PBR̂^{-1}B^T P

with boundary condition P(t1) = 0. The control to minimize (10.27) is then

    u = −R̂^{-1}B^T(P + C^T QM)x

Note that cases in which Q̂ is not positive definite or the system (10.29) is not controllable may
give no solution P to the matrix Riccati equation. Even though the conditions under which this
procedure works have not been clearly defined, the engineering approach would be to try it on a
particular problem and see if a satisfactory answer could be obtained.
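That engineering approach is easy to try in the time-invariant, steady-state case. The sketch below (plant, model and weights are all assumed numbers) carries out the completion of squares and solves the resulting standard regulator:

```python
# Hedged time-invariant sketch of Problem 10.6 (steady-state ARE in place of
# the finite-horizon Riccati equation; dC/dt = 0 since C is constant).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-1.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.eye(2)
L = np.diag([-2.0, -3.0])                  # desired model dw/dt = L w
Q = np.eye(2); R = np.array([[0.1]])

M = C @ A - L @ C                          # M = dC/dt + CA - LC with dC/dt = 0
CB = C @ B
Rhat = R + CB.T @ Q @ CB
K = np.linalg.solve(Rhat, CB.T @ Q @ M)
Ahat = A - B @ K
Qhat = (M - CB @ K).T @ Q @ (M - CB @ K) + K.T @ R @ K

P = solve_continuous_are(Ahat, B, Qhat, Rhat)
gain = np.linalg.solve(Rhat, B.T @ P) + K  # u = -(gain) x
print("closed-loop poles:", np.linalg.eigvals(A - B @ gain))  # near eig(L)
```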

10.7. In Problem 10.6, feedback was placed around the plant dx/dt = A(t)x + B(t)u,
y = C(t)x so that it would behave similar to a model that might be hypothetical. In
this problem we consider actually building a model dw/dt = L(t)w with output
v = J(t)w and using it as a prefilter ahead of the plant. Find a control u to the plant
such that the error between the plant and model output becomes small, i.e. minimize

    v = ½ ∫_{t0}^{t1} [(y − Jw)^T Q(y − Jw) + u^T Ru] dt      (10.31)

Again, we reduce this to an equivalent regulator problem. The plant and model equations can
be written as

    d/dt [w] = [L  0] [w] + [0] u      (10.32)
         [x]   [0  A] [x]   [B]

and the criterion is, in terms of the (w  x) vector,

    v = ½ ∫_{t0}^{t1} { [w]^T (−J  C)^T Q (−J  C) [w] + u^T Ru } dt
                       [x]                        [x]

Thus the solution proceeds as usual and u is a linear combination of w and x.

However, note that the system (10.32) is uncontrollable with respect to the w
variables. In fact, merely by setting L = 0 and using J(t) to generate the input, a form of general
servomechanism problem results. The conditions under which the general solution to the servo-
mechanism problem exists are not known. Hence we should expect the solution to this problem
to exist only under a more restrictive set of circumstances than Problem 10.6. But again, if a
positive definite solution of the corresponding matrix Riccati equation can be found, this procedure
can be useful in the design of a particular engineering system.

The resulting system can be diagrammed as shown in Fig. 10-9.

[Fig. 10-9: the model, with output v(t), acts as a prefilter ahead of the plant; the plant state x(t) and output y(t) are closed through the feedback block.]

10.8. Given the time-invariant controllable and observable single input-single output sys-
tem dx/dt = Ax + bu, y = c^T x. Assume that u = k^T x, where k has been chosen such
that 2v = ∫_0^∞ (qy² + u²) dt has been minimized. Find the asymptotic behavior of
the closed loop system eigenvalues as q → 0 and q → ∞.

Since the system is optimal, k^T = −b^T Π where Π satisfies (10.16), which is written here as

    0 = qcc^T + A^T Π + ΠA − Πbb^T Π      (10.33)

If q = 0, then Π = 0 is the unique nonnegative definite solution of the matrix Riccati equation.
Then u = 0, which corresponds to the intuitive feeling that if the criterion is independent of
response, make the control zero. Therefore as q → 0, the closed loop eigenvalues tend to the open
loop eigenvalues.

To examine the case q → ∞, add and subtract sΠ from (10.33), multiply on the right by (sI − A)^{-1}
and on the left by (−sI − A^T)^{-1} to obtain

    0 = q(−sI − A^T)^{-1}cc^T(sI − A)^{-1} − Π(sI − A)^{-1} − (−sI − A^T)^{-1}Π − (−sI − A^T)^{-1}Πbb^T Π(sI − A)^{-1}      (10.34)

Multiply on the left by b^T and on the right by b, and call the scalar G(s) = c^T(sI − A)^{-1}b and the
scalar H(s) = b^T Π(sI − A)^{-1}b. The reason for this notation is that, given an input with Laplace
transform p(s) to the open loop system, x(t) = L^{-1}{(sI − A)^{-1}bp(s)} so that

    L{y(t)} = c^T L{x(t)} = c^T(sI − A)^{-1}bp(s) = G(s)p(s)

and

    −L{u(t)} = b^T Π L{x(t)} = b^T Π(sI − A)^{-1}bp(s) = H(s)p(s)

In other words, G(s) is the open loop transfer function and H(s) is the transfer function from the
input to the control. Then (10.34) becomes

    0 = qG(−s)G(s) − H(s) − H(−s) − H(s)H(−s)

Adding 1 to each side and rearranging gives

    |1 + H(s)|² = 1 + q|G(s)|²      (10.35)

It has been shown that only optimal systems obey this relationship. Denote the numerator of H(s)
as n(H) and of G(s) as n(G), and the denominator of H(s) as d(H) and of G(s) as d(G). But d(G) =
det(sI − A) = d(H). Multiplying (10.35) by |d(G)|² gives

    |d(G) + n(H)|² = |d(G)|² + q|n(G)|²      (10.36)

As q → ∞, if there are m zeros of n(G), the open loop system, then 2m zeros of (10.36) tend to the
zeros of |n(G)|². The remaining 2(n − m) zeros tend to ∞ and are asymptotic to the zeros of the
equation s^{2(n−m)} = q. Since the closed loop eigenvalues are the left half plane zeros of |d(G) +
n(H)|², we conclude that as q → ∞, m closed loop poles tend to the m open loop zeros and the remain-
ing n − m closed loop poles tend to the left half plane zeros of the equation s^{2(n−m)} = q.

In other words, they tend to a stable Butterworth configuration of order n − m and radius q^γ
where 2(n − m) = γ^{-1}. If the system has 3 open loop poles and one open loop zero, then 2 closed
loop poles are asymptotic as shown in Fig. 10-10. This is independent of the open loop pole-zero con-
figuration. Also, note equation (10.36) requires the
closed loop poles to tend to the open loop poles as q → 0.
Furthermore, the criterion ∫_0^∞ (qy² + u²) dt is quite
general for scalar controls since c can be chosen by the
designer (also see Problem 10.19).

Of course we should remember that the results of
this analysis are valid only for this particular criterion
involving the output and are not valid for a general
quadratic in the state.

[Fig. 10-10: the two asymptotic closed loop poles lie on a left half plane Butterworth arc of radius q^{1/[2(n−m)]}.]
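The asymptote is easy to exhibit numerically. The sketch below (an assumed stable system with n = 3 poles and m = 1 zero) solves the Riccati equation for increasing q; one closed loop pole migrates to the open loop zero and the other two approach a Butterworth pair of radius q^{1/4}:

```python
# Hedged sketch of the q -> infinity behavior for an assumed
# G(s) = (s + 1)/(s^3 + 2s^2 + 3s + 1) in controllable canonical form.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -3.0, -2.0]])
b = np.array([[0.0], [0.0], [1.0]])
c = np.array([[1.0, 1.0, 0.0]])            # numerator s + 1
R = np.array([[1.0]])

for q in [1.0, 1e2, 1e4, 1e6]:
    Pi = solve_continuous_are(A, b, q * (c.T @ c), R)
    poles = np.linalg.eigvals(A - b @ (b.T @ Pi))
    print(q, np.sort_complex(poles), "Butterworth radius ~", q**0.25)
```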

Supplementary Problems
10.9. Given the scalar system dx/dt = 4x + u with initial condition x(0) = x0. Find a feedback control
u to minimize 2v = ∫_0^∞ (9x² + u²) dt.

10.10. Consider the plant

    dx/dt = [0   1 ] x + [0] u
             [0  −√5]     [1]

We desire to find u to minimize 2v = ∫_0^∞ (4x1² + u²) dt.

(a) Write the canonical equations (10.9) and their boundary conditions.
(b) Solve the canonical equations for x(t) and p(t).
(c) Find the open loop control u(x0, t).

10.11. For the system and criterion of Problem 10.10,

(a) write three coupled nonlinear scalar equations for the elements p_ij of the P(t) matrix,
(b) find P(t) using the relationship p(t) = P(t)x(t) and the results of part (b) of Problem 10.10,
(c) verify that the results of (b) satisfy the equations found in (a), and that P(t) is constant and
positive definite,
(d) draw a flow diagram of the closed loop system.

10.12. Consider the plant and the performance index

    v = ½ ∫_0^T (qx1² + u²) dt

Assuming the initial conditions are x1(0) = 5, x2(0) = −3, find the equations (10.9) whose solutions
will give the control (10.8). What are the boundary conditions for these equations (10.9)?

10.13. Consider the scalar plant dx/dt = x + u and the performance index

    v = ½Sx²(T) + ½ ∫_0^T (x² + u²) dt

(a) Using the state-costate transition matrix Φ(t, τ), find P(t) such that p(t) = P(t)x(t).
(b) What is lim_{t→−∞} P(t)?

10.14. Given the system of Fig. 10-11, find K(t) to minimize 2v = ∫_t^{t1} (x² + u²/ρ) dt and find
lim_{t1→∞} K(t).

[Fig. 10-11: zero input into the plant 1/(s + α), whose state x is fed back through the gain K(t).]

10.15. Given the system

    dx/dt = [−2  1] x + [1] u
             [0   1]     [0]

Find the control that minimizes 2v = ∫_0^∞ (x^T x + u²) dt.

10.16. Consider the motion of a missile in a straight line. Let dr/dt = v and dv/dt = u, where r is the position
and v is the velocity. Also, u is the acceleration due to the control force. Hence the state equation
is

    d/dt [r] = [0  1] [r] + [0] u
         [v]   [0  0] [v]   [1]

It is desired to minimize

    2v = q_v v²(T) + q_r r²(T) + ∫_0^T u²(τ) dτ

where q_v and q_r are scalars > 0.

Find the feedback control matrix M(θ) relating the control u(t) to the states r(t) and v(t) in
the form

    u(t) = M(θ) [r(t)]
                [v(t)]

where θ = T − t. Here θ is known as the "time to go".

10.17. Given the system

    dx/dt = [0  1] x + [0] u
             [0  0]     [1]

with criterion 2v = ∫_0^∞ (qx1² + u²) dt. Draw the root locus of the closed loop roots as q varies
from zero to ∞.

10.18. Consider the matrix

    H = [a   −b²/r]
        [−q   −a  ]

associated with the optimization problem

    dx/dt = ax + bu,    v = ½ ∫_0^T (qx² + ru²) dt,    x(0) = x0,    p(T) = 0

Show that the roots of H are symmetrically placed with respect to the jω axis and show that the
location of the roots of H depends only upon the ratio q/r for fixed a and b.

10.19. Given the system (10.1) and let S = 0, Q(t) = ηQ0(t) and R(t) = ρR0(t) in the criterion (10.2).
Prove that the optimal control law u^{op} depends only on the ratio η/ρ if Q0(t) and R0(t) are fixed.

10.20. Show that the transition matrix (10.14) is symplectic, i.e.

    [Φ11(t,τ)  Φ12(t,τ)]^T [0   I] [Φ11(t,τ)  Φ12(t,τ)]  =  [0   I]
    [Φ21(t,τ)  Φ22(t,τ)]   [−I  0] [Φ21(t,τ)  Φ22(t,τ)]     [−I  0]

and also show

    det [Φ11(t,τ)  Φ12(t,τ)]  =  1
        [Φ21(t,τ)  Φ22(t,τ)]

10.21. Using the results of Problem 10.20, show that for the case S = 0, P(t) = Φ21(t, t1) Φ11^{-1}(t, t1). This
provides a check on numerical computations.
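Such a check might look as follows (a sketch with assumed data for a time-invariant problem, where the state-costate transition matrix is a matrix exponential of the canonical-equation matrix):

```python
# Hedged check of the symplectic property and of P = Phi21 Phi11^{-1} for S = 0.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
BRB = np.array([[0.0, 0.0], [0.0, 1.0]])   # B R^{-1} B^T, assumed values
Q = np.eye(2)
H = np.block([[A, -BRB], [-Q, -A.T]])      # canonical equations dz/dt = Hz
E = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2), np.zeros((2, 2))]])

Phi = expm(H * (-1.0))                     # Phi(t, t1) with t - t1 = -1
assert np.allclose(Phi.T @ E @ Phi, E)     # symplectic
assert np.isclose(np.linalg.det(Phi), 1.0)

P = Phi[2:, :2] @ np.linalg.inv(Phi[:2, :2])
print("P(t) =", P)                         # symmetric, nonnegative definite
```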

10.22. For the scalar time-varying system dx/dt = −x tan t + u, find the optimal feedback control
u(x, t) to minimize v = ½ ∫_0^{t1} (3x² + 4u²) dt.

10.23. Find the solution P(t) to the scalar Riccati equation corresponding to the open loop system with
finite escape time dx/dt = −(x + u)/t with criterion v = ½ ∫_{t0}^{t1} (6x² + u²) dt. Consider the behavior
for 0 < t0 < t1, for t0 < t1 < 0 and for −γt1 < t0 < 0 < t1 where γ⁵ = 3/2.

10.24. Given the system shown in Fig. 10-12. Find the compensation K to optimize the criterion
2v = ∫_0^∞ [(y − d)² + u²] dt.

[Fig. 10-12: input d(t) minus output y(t) forms the error e(t), which passes through the compensation K to give the control u(t) driving the plant with output y(t).]
CHAP. 10] INTRODUCTION TO OPTIMAL CONTROL 229

10.25. In Problem 10.6, show that if C = B = I in equation (10.26) and if R = 0 and Q = I in equation
(10.27), then the system becomes identical to the model.

10.26. Use the results of Problem 10.4 to generate a root locus for the closed loop system as α varies.
What happens when α = 0?

10.27. Given the linear time-invariant system dx/dt = Ax + Bu with criterion

    2v = ∫_{t0}^{t1} (x^T Qx + u^T Ru) dt

where Q and R are both time-invariant positive definite symmetric matrices, but Q is n × n
and R is m × m. Find the feedback control u(x, t; t1) that minimizes v in terms of the constant
n × n matrices K11, K12, K21, K22, in which the 2n-vector (x ; p) satisfies the eigenvalue problem
given in equation (10.18), where λ1, λ2, ..., λn have negative real parts and λ_{n+1}, λ_{n+2}, ..., λ_{2n}
have positive real parts and are distinct.

10.28. Given only the matrices G and F as defined in Theorem 10.9, note that G and F are in general
complex since they contain partitions of eigenvectors. Find a method for calculating Π = GF^{-1}
using only the real number field, not complex numbers.

10.29. (a) Given the nominal scalar system dξ/dt = ξ + μ with ξ0 = ξ(0), and the performance criterion
J = ½ ∫_0^∞ [(η² − 1)ξ² + μ²] dt for constant η > 1. Construct two Liapunov functions for the
optimal closed loop system that are valid for all η > 1. Hint: One method of construction is
given in Chapter 10 and another in Chapter 9.
(b) Now suppose the actual system is dξ/dt = εξ³ + ξ + μ. Using both Liapunov functions obtained
in (a), give estimates upon how large ε > 0 can be such that the closed loop system remains
asymptotically stable. In other words, find a function f(η, ξ) such that ε < f(η, ξ) implies
asymptotic stability of the closed loop system.

10.30. Given the system dx/dt = A(t)x + B(t)u with x(t0) = x0 and with criterion function

    J = ½x^T(t1)x(t1) + ½ ∫_{t0}^{t1} u^T u dt

(a) What are the canonical equations and their boundary conditions?
(b) Given the transition matrix Φ(t, τ) where ∂Φ(t, τ)/∂t = A(t)Φ(t, τ) and Φ(τ, τ) = I, solve the
canonical equations in terms of Φ(t, τ) and find a feedback control u(t).
(c) Write and solve the matrix Riccati equation.

Answers to Supplementary Problems


10.9. u = -9x

10.10. (a) dx1/dt = x2            x1(0) = x10
       dx2/dt = −√5 x2 − p2   x2(0) = x20
       dp1/dt = −4x1          p1(∞) = 0
       dp2/dt = −p1 + √5 p2   p2(∞) = 0

(b) x1(t) = (2x10 + x20)e^{−t} − (x10 + x20)e^{−2t}
    x2(t) = −(2x10 + x20)e^{−t} + 2(x10 + x20)e^{−2t}
    p2(t) = (√5 − 1)(2x10 + x20)e^{−t} + (4 − 2√5)(x10 + x20)e^{−2t}
    p1(t) = 4(2x10 + x20)e^{−t} − 2(x10 + x20)e^{−2t}

(c) u(t) = (1 − √5)(2x10 + x20)e^{−t} + (2√5 − 4)(x10 + x20)e^{−2t}

10.11. (a) 0 = 4 − p12²
       0 = p11 − √5 p12 − p12 p22
       0 = 2p12 − 2√5 p22 − p22²

(d) [Fig. 10-13: flow diagram of the closed loop system]

10.13. (a) P(t) = {[S(1 + √2) + 1] e^{2√2(T−t)} − 1 − S(1 − √2)} / {[√2 − 1 + S] e^{2√2(T−t)} + 1 + √2 − S}

(b) lim_{t→−∞} P(t) = 1 + √2 regardless of S.

10.15. Any control results in v = 00 since there is an unstable uncontrollable state.



10.17. This is the diagram associated with Problem 10.8.

10.18. λ = ±√(a² + b²q/r)


10.19. u^{op} = −(η/ρ)R0^{-1}B^T P0 x, where P0 depends only on the ratio η/ρ as

    −dP0/dt = Q0 + A^T P0 + P0 A − (η/ρ)P0 BR0^{-1}B^T P0

10.20. Let Ψ(t, t0) = Φ^T(t, t0)EΦ(t, t0) − E where E = [0  I ; −I  0]. Then Ψ(t0, t0) = 0 and
dΨ/dt = Φ^T(t, t0)(H^T E + EH)Φ(t, t0) = 0, so Ψ(t, t0) = 0. This shows that for every increasing function
of time in Φ there is a corresponding decreasing function of time.

10.22. u = [tan t − ½ tan((t − t1)/2 + tan^{-1}(2 tan t1))] x

10.23. P(t) = (6t1⁵t − 6t⁶)/(3t1⁵ + 2t⁵). For 0 < t0 < t1 and t0 < t1 < 0, P(t) > 0. For −γt1 < t0 < 0 < t1,
P(t) < 0 and tends to −∞ as t tends to −γt1 from the right.

10.24. K:::: 1

10.26. At α = 0, the closed loop system poles are at e^{j7π/8} and e^{j9π/8}.

10.27. Express (x(t) ; p(t)) in terms of K11, K12, K21, K22 and the modal exponentials e^{λ_i t}. Note p(t1) = 0
and solve for x(t1) in terms of x(t). Then p(t) = P(t)x(t).

10.28. Let the first 2m ≤ n eigenvectors be complex conjugates, so that f_{2i−1} = f̄_{2i} and g_{2i−1} = ḡ_{2i}
for i = 1, 2, ..., m. Define

    F̃ = (Re f1 | Im f1 | Re f3 | Im f3 | ... | Re f_{2m−1} | Im f_{2m−1} | f_{2m+1} | ... | fn)

and G̃ similarly. Then F̃ and G̃ are real and Π = G̃F̃^{-1}.

10.29. (a) From Chapter 10, V1 = (η + 1)ξ² and from Chapter 9, V2 = ξ²/(2η).

(b) ε < ηξ^{-2} for both V1 and V2.

10.30. (a) dx/dt = Ax − BB^T p and dp/dt = −A^T p with x(t0) = x0 and p(t1) = x(t1).

(b) u = −B^T Φ^T(t1, t) [Φ(t, t1) − ∫_{t1}^{t} Φ(t, τ)BB^T Φ^T(t1, τ) dτ]^{-1} x(t)

(c) obvious from (b) above.


INDEX

Abstract object, 2-6 Component. See Element


uniform, 14 Contour integral representation of f(A), 83
Adjoint of a linear operator, 112 Contraction mapping, 92-93
Adjoint system, 112-114 Control
Adjugate matrix, 44 energy, 210
Aerospace vehicle, state equations of, 34 law, 208-211
Algebraic Controllable form of time-varying
equation for time-invariant optimal systems, 148-152
control, 214 Controllability, 128-146
separation, 178 of discrete-time systems, 132, 136
Alternating property, 56 of the output, 144
Analytic function, 79 for time-invariant systems with distinct
Anticipatory system. See State variable of eigenvalues, 129-130
anticipatory system; Predictor for time-invariant systems with nondistinct
Associated Legendre equation, 107 eigenvalues, 131, 135
Asymptotic of time-varying systems, 136-138, 142
behavior of optimal closed loop poles, 226 Convergence of the matrix exponential, 96
stability, 193 Convolution integral, 109
state estimator (see Observer systems; Costate, 210
Steady state errors) Cramer's rule, 44
Criterion functional, 209
Basis, 47
Bessel's equation, 107
BIBO stability. See External stability Dead beat control, 132
Block diagrams, 164-169 Decoupling. See Pole placement of
Bounded multiple-input systems
function, 191 Delay line, state variables of.
input-bounded output stability See State variable of a delay line
(see External stability) Delayor, 4, 17, 164
Boundedness of transition matrix, 192 Delta function
Butterworth configuration, 226 derivatives of Dirac, 135
Dirac, 135
Cancellations in a transfer function, 133-134 Kronecker, 44
Canonical Derivative
equations of Hamilton, 211 formula for integrals, 125
flow diagrams (see Flow diagrams, first of a matrix, 39
canonical form; Flow diagrams, Determinant, 41
second canonical form) by exterior products, 57
Cauchy integral representation, 83 of the product of two matrices, 42
Causality, 2-6 of similar matrices, 72
Cayley-Hamilton theorem, 81 of a transition matrix, 99-100
Characteristic polynomial, 69 of a transpose matrix, 42
Check on solution for transition matrices, 114 of a triangular matrix, 43
Closed Diagonal matrix, 41
forms for time-varying systems, 105 Diagonal of a matrix, 39
loop eigenvalues, 217 Difference equations, 7
loop optimal response, 217 from sampling, 111
under addition, 46 Differential equations, 7
under scalar multiplication, 46 Differentiator, state of.
Cofactor of a matrix element, 43 See State variable of a differentiator
Column of a matrix, 38 Diffusion equation, 10
Commutativity of matrices, 40 Dimension, 49
Compatible, 39-40 of null space, 49
Compensator of range space, 50
pole placement, 172 Disconnected flow diagram, 129
integral plus proportional, 219-223 Discrete-time systems, 7
Complex conjugate transpose matrix, 41 from sampling, 111


Distinct eigenvalues, 69-71 Gram-Schmidt process, 53-54


Dot product. See Inner product Grassmann algebra. See Exterior product
Duality, 138-139 Gronwall-Bellman lemma, 203
Dynamical systems, 4-6 Guidance control law, 121-122

Eigenvalue, 69 Hamilton-Jacobi equation, 220


of a nonnegative definite matrix, 77 Hamiltonian matrix, 211
of a positive definite matrix, 77 Harmonic oscillator, 193
of similar matrices, 72 Hereditary system.
of a time-varying system, 194 See State variable of hereditary system
of a triangular matrix, 72 Hermite equation, 107
of a unitary matrix, 95 Hermitian matrix, 41
Eigenvector, 69 nonnegative definite, 77
generalized, 74 positive definite, 77
expression for Π = GF^{-1}, 216, 229 Hidden oscillations in sampled data
of a normal matrix, 76 systems, 132
Element, 38 Hypergeometric equation, 107
Elementary operations, 42
Ellipse in n-space, 77 Impulse response matrices, 112
Equilibrium state, 192 Induced norm. See Matrix, norm of
Equivalence of pole placement and Induction, 43
optimal control, 220 Inner product, 52
Equivalence transformation, 148 for quadratic forms, 75
Estimation of state. See Observer systems Input-output pair, 1-6
Existence of solution Input solution to the state equation, 109
to matrix equations, 63 Instantaneous controllability and observability.
to Riccati equation, 213 See Totally controllable and observable
to time-invariant optimal control Integral
problem, 214 of a matrix, 39
exp At, 101-111 representation of f(A), 83
Exponential stability. Integral plus proportional control, 219-223
See Stability, uniform asymptotic Integrator, 16,164
Exterior product, 56 Inverse
External stability, 194 matrix, 44
of (81 - A), computation of, 102
Factorization of time-varying matrices, 152 system, 166
Feedback optimal control, 211-231
Finite escape time, 104-105,204 Jacobian matrix, 9
First canonical form, 19,20 Jordan block, 74
for time-varying systems, 153 Jordan form
Floquet theory, 107-109 flow diagram of, 21-24
Flow diagrams, 16-24 of a matrix, 73~75
of first canonical form, 19 of state equations, 147
interchange of elements in, 17
of Jordan form, 21-24 Kronecker delta, 44
of second canonical (phase variable) form, 20
for time-varying systems, 18 Lagrange multiplier, 210
Frequency domain characterization of Laguerre equation, 107
optimality, 220, 226 La place expansion of a determinant, 43, 57
Function of a matrix, 79-84 Least squares fit, 86
cosine, 82 Left inverse, 44
exponential, 96-104 Legendre equation, 107
logarithm, 94 Leverrier's algorithm, 102
square root, 80, 91 Liapunov function
Fundamental matrix. See Transition matrix construction of, for linear systems, 199-202
discrete-time, 198
Gauss elimination (Example 3.13), 44 time-invariant, 196
Gauss-Seidel method, 171 time-varying, 197
Global Linear
minimum of criterion functional, 220 dependence, 47
stability, 193 independence, 47
Gradient operator, 96 manifold, 46
Gram matrix, 64 network with a switch, 11, 12

Linear (cont.) Nonexistence


system, 6-7 of solution to the matrix Riccati
vector space, 50 equation, 221, 228
Linear operator, 55 of trajectories, 10
matrix representation of, 55 Nonlinear effects, 179-181
null space of, 49 Nonnegative definite, 77
range of, 49 N on singular , 44
Linearization, 8, 9 Norm, 51
Logarithm of a matrix, 94 of a matrix, 78
Loss matrix. See Criterion functional natural, 53
Lower triangular matrix, 41 Normal matrix, 41
Normalized eigenvector, 70
Mathieu equation, 107-109 Null space, 49
Matrix, 38. See also Linear operator Nullity, 49
addition and subtraction, 39 n-vectors, 38
characteristic polynomial of, 69 Nyquist diagrams, 171
compatibility, 39
differentiation of, 39 Observable form of time-varying
eigenvalue of, 69 systems, 148-152
eigenvector of, 69 Observability, 128-146
equality, 39 of discrete-time systems, 133-138
function of, 79-84 of time-invariant systems, 133-136
fundamental, 99-111 of time-varying systems, 136-138, 145
generalized eigenvector of, 74 Observer systems, 173-177
Hermitian, 107 Open loop optimal control, 211
Hermitian nonnegative definite, 77 Optimal control law, 210
Hermitian positive definite 77 Order of a matrix, 38
integration of, 39 ' Oriented mathematical relations, 2-4
Jordan form of, 73-75 Orthogonal vectors, 52
logarithm of, 94 Orthonormal vectors, 53
multiplication, 39 Output feedback, 217-218
norm of, 78
principal minor of, 77 Parameter variations, 170, 171
rank of, 50 Partial differential equation of state
representation of a linear operator, 55 diffusion, 10
Riccati equation, 211-231 wave, 141
square, 38 Partial fraction expansion, 21,22
state equations, 26, 27 of a matrix, 102
transition, 99-111 Partition of a matrix, 40
unitary, 41 Passive system stability, 205
Matrizant. See Transition matrix Penalty function. See Criterion functional
Periodically varying system, 107-109
Mean square estimate, 94
Permutation, 41
Metric, 51
Phase plane, 11
Minimal order observer, 175-177
Minimum Phase variable canonical form, 20, 21
for time-varying systems, 153-4
energy control (Problem 10.30), 229
Physical object, 1
norm solution of a matrix equation, 86
Piecewise time-invariant system, 106
Minor, 43 Polar decomposition of a matrix, 87
principal, 77 Pole placement, 172-173, 220, 224
Missile equations of state, 34 Pole-zero cancellation, 134
Model-following systems, 224-225 Pole-zero cancellation, 134
Motor equations of state, 34 Positive definite, 77
Multilinear, 56 Predictor, 5
Multiple loop system, 185-186 Principal minor, 77
Pseudoinverse, 84-87
Natural norm, 53 Quadratic form, 75
of a matrix, 79
Neighboring optimal control, 221 Random processes, 3
Noise rejection, 179-181 Rank, 50
Nominal optimal control, 221 n(e,d.), 137
Nonanticipative system. See Causality Range space, 49

Reachable states, 135 State (cont.)


Reciprocal basis, 54 reachable, 135
Recoverable states, 135 recoverable, 135
Regulator, 208 State equations
Representation in matrix form, 26,27
of a linear operator, 55 solutions of, with input, 109-110
spectral, 80 solutions of, with nonzero input, 99-109
for rectangular matrices, 85 State feedback pole placement, 172-173
Residue
State space, 3
of poles, 21, 22
conditions on description by, 5
matrix, 102-103
State transition matrix, 99-111
Response function. See Trajectory
Riccati equation, 212-213 State variable, 3
negative solution to, 221 of anticipatory system, 14
Right inverse, 44 of delay line, 10
RLC circuit, 2, 6, 11, 35 of differentiator, 14
stability of, 205-207 of diffusion equation, 10
Root locus, 169 of hereditary system, 10
Row of matrix, 38 of spring-mass system, 10
Steady state
Sampled systems, 111 errors, 165
Scalar, 38 response, 125
equation representation for time-varying Sturm-Liouville equation, 197
systems, 153-154, 160 stability properties of, 198
product (see Inner product) Submatrices, 40
transition matrix, 105 Sufficient conditions for optimal control, 220
Scalor, 16, 164 Summer, 16, 164
Schmidt orthonormalization. Superposition, 7
See Gram-Schmidt process integral, 109
Schwarz inequality, 52 Switch. See Linear network with a switch
Second canonical form, 20-21 Sylvester's criterion, 77
for time-varying systems, 153-154
Symmetric matrix, 41
Sensitivity, 179-181
Symmetry
Servomechanism, 208 of eigenvalues of H, 215
problem, 218-220 of solution to matrix Riccati equation, 213
Similarity transformation, 70-72 Symplectic, 228
Simultaneous diagonalization Systems, 7, 8
of two Hermitian matrices, 91 dynamical, 5
of two arbitrary matrices, 118 nonanticipative, 5
Skew-symmetric matrix, 41 physical, 1
Span, 46
Spectral representation, 80 Time to go, 228
Spring-mass system. Time-invariant, 7
See State variable of spring-mass system optimal systems, 213-217
Square root of a matrix, 77 Time-varying systems
Stability flow diagram of, 18
asymptotic, 194 matrix state equations of, 25-26
BIBO, 194 transition matrices for, 104-111
external, 194 Totally controllable and observable, 128
i.s.L., 192 Trace, 39
in the large, 193 of similar matrices, 72
of optimal control, 214 Trajectory, 4, 5
of solution of time-invariant Transfer function, 112
Riccati equation, 215
Transition
of a system, 192
matrix, 99-111
uniform, 193
property, 5
uniform asymptotic, 193
Transpose matrix, 41
State
of an abstract object, 1-3 Transposition, 41
controllable, 128 Triangle inequality, 51
observable, 128 Triangular matrix, 41
of a physical object, 1 Type-l systems, 167

Undetermined coefficients, method of. Unity feedback systems, 165


See Variation of parameters, method of Unobservable, 3
Uniform abstract object. Unstable, 192
See Abstract object, uniform Upper triangular matrix, 41
Uniform stability, 193
asymptotic, 193 Van der Pol equation, 192
Uncontrollable, 3 Vandermonde matrix, 60
states in optimal control, 223 Variation of parameters, method of, 109
Uniqueness of solution Variational equations. See Linearization
to ATp + PA = -Q, 204 Vector, 38, 46, 50
to matrix equations, 63 Vector flow diagrams, 164
to the time-invariant optimal control, 215 Voltage divider, 1, 3
for the square root matrix, 91
Unit Wave equation, 141
matrix. 41 White noise input, 208
vector, 41 Wiener factorization, 216
Unitary matrix, 41
for similarity transformation of a Zero matrix, 41
Hermitian matrix, 76 Zero-input stability, 191
Unit-time delayor. See Delayor z transfer function, 18
