Multivariable
Computer-controlled
Systems
A Transfer Function Approach
With 27 Figures
Efim N. Rosenwasser, Dr. rer. nat. Dr. Eng.
State Marine Technical University
Lozmanskaya str. 3
190008 Saint Petersburg
Russia

Bernhard P. Lampe, Dr. rer. nat. Dr. Eng.
University of Rostock
Institute of Automation
18051 Rostock
Germany
Series Editors
E.D. Sontag · M. Thoma · A. Isidori · J.H. van Schuppen
properties of the optimal system: its structure and the set of its poles, for
instance.
Chapter 9 describes the methods of the preceding three chapters in greater
detail for the special case of single-loop MIMO SD systems. This is done under
the supposition that the transfer matrices of all continuous parts are normal
and that the sampling period is non-pathological. When these suppositions
hold, important special cancellations take place; thus, the critical case, in
which the transfer matrices of continuous elements contain poles on the
imaginary axis, is considered. In this way, a fact important for applications is
established: the solvability of the associated H2 problem in the critical case
depends on the location of the critical elements inside the control loop with
respect to the input and output of the system. In this case there may be
situations in which the H2 problem has no solution.
Chapter 10 is devoted to the L2 problem for the standard SD system; it
contains, as special cases, the design of optimal tracking systems and the
redesign problem. In our opinion, this case constitutes a splendid example for
demonstrating the possibilities of frequency methods. This chapter demonstrates
that in the multidimensional case the solution of the L2 problem always
leads to a singular quadratic functional, for which a set of minimizing
control programs exists. Applying Laplace transforms during the evaluation
by the Wiener-Hopf method allows us to find the complete set of optimal
solutions; in this way, input signals of finite duration and constant signals
are included. We know of no alternative methods for constructing the general
solution to this problem.
The book closes with four appendices. Appendix A gives a short introduction
to the D-transformation (Taylor transformation) and its relationship to
other operator transformations for discrete sequences. In Appendix B some
auxiliary formulae are derived. Appendix C, written by Dr. K. Polyakov,
presents the MATLAB DirectSDM Toolbox. Using this toolbox, various
H2 and L2 problems for single-loop MIMO systems can be solved
numerically. Appendix D, composed by Dr. V. Rybinskii, describes a design method
for control with guaranteed performance. These controllers guarantee a
required performance for arbitrary members of certain classes of stochastic
disturbances. The MATLAB GarSD Toolbox, used for the numerical
solution of such problems, is also presented.
In our opinion, the best way to become well acquainted with the content of
the book is, of course, a thorough reading of all the chapters in sequence,
starting with Chapter 1. We recognize, however, that this requires effort and
staying-power from the reader, and an expert interested only in SD systems
can start directly with Chapter 6, looking into the preceding chapters only
when necessary.
The book is written in a mathematical style. We do not include
elementary introductory material on the functioning or the physical and
technological characteristics of computer-controlled systems; likewise, there
Rostock, May 17, 2006
Efim Rosenwasser
Bernhard Lampe
Contents
1 Polynomial Matrices  3
  1.1 Basic Concepts of Algebra  3
  1.2 Polynomials  5
  1.3 Matrices over Rings  7
  1.4 Polynomial Matrices  10
  1.5 Left and Right Equivalence of Polynomial Matrices  12
  1.6 Row and Column Reduced Matrices  15
  1.7 Equivalence of Polynomial Matrices  20
  1.8 Normal Rank of Polynomial Matrices  21
  1.9 Invariant Polynomials and Elementary Divisors  23
  1.10 Latent Equations and Latent Numbers  26
  1.11 Simple Matrices  29
  1.12 Pairs of Polynomial Matrices  34
  1.13 Polynomial Matrices of First Degree (Pencils)  38
  1.14 Cyclic Matrices  44
  1.15 Simple Realisations and Their Structural Stability  49
Appendices
References  461
Index  473
Part I
Algebraic Preliminaries
1
Polynomial Matrices
(a · b) · c = a · (b · c)
is true.
The set A is called a semigroup, if an associative operation is defined
in it. A semigroup A is called a group, if it contains a neutral element e, such
that for every a ∈ A
a · e = e · a = a
is correct, and furthermore, for any a ∈ A there exists a uniquely determined
element a⁻¹ ∈ A, such that
a · a⁻¹ = a⁻¹ · a = e . (1.1)
(a + b) + c = a + (b + c)
and
a + b = b + a .
Moreover, there exists a zero element 0, such that for an arbitrary a ∈ A
a + 0 = 0 + a = a .
(a + b)c = ac + bc , c(a + b) = ca + cb
1.2 Polynomials
1. Let N be a certain commutative associative ring with unit element;
especially, it can be a field. Let us consider the infinite sequences
(a0, a1, …, ak; 0, …) ,
where ak ≠ 0, and all elements starting from ak+1 are equal to zero.
Furthermore, we write
(a0, a1, …, ak; 0, …) = (b0, b1, …, bk; 0, …) ,
if and only if ai = bi (i = 0, …, k). Over the set of elements of the above form,
the operations of addition and multiplication are introduced in the following way.
The sum is defined by the relation
(a0, a1, …, ak; 0, …) + (b0, b1, …, bk; 0, …) = (a0 + b0, a1 + b1, …, ak + bk; 0, …) ,
and the product by
(a0, a1, …, ak; 0, …)(b0, b1, …, bk; 0, …)
= (a0 b0, a0 b1 + a1 b0, …, a0 bk + a1 bk-1 + … + ak b0, …, ak bk; 0, …) . (1.2)
It is easily proven that the above explained operations of addition and
multiplication are commutative and associative. Moreover, these operations are
also distributive. Any element a ∈ N is identified with the sequence (a; 0, …).
Furthermore, let λ be the sequence
λ = (0, 1; 0, …) .
Then we obtain
(a0, a1, …, ak; 0, …)
= (a0; 0, …) + (0, a1; 0, …) + … + (0, …, 0, ak; 0, …)
= a0 + a1 (0, 1; 0, …) + … + ak (0, …, 0, 1; 0, …)
= a0 + a1 λ + a2 λ² + … + ak λ^k .
The expression on the right side of the last equation is called a polynomial in
λ with coefficients in N. It is easily shown that this definition of a polynomial
is equivalent to other definitions in elementary algebra. For ak ≠ 0 the
polynomial ak λ^k is called the term of the polynomial
f(λ) = a0 + a1 λ + … + ak λ^k (1.3)
with the highest power. The number k is called the degree of the polynomial
(1.3), and it is denoted by deg f(λ). If we have in (1.3) a0 = a1 = … = ak = 0,
then the polynomial (1.3) is named the zero polynomial. A polynomial with
ak = 1 is called monic. If for two polynomials f1(λ), f2(λ) the relation f1(λ) =
a f2(λ) with a ∈ N is valid, then these polynomials are called equivalent. In
what follows, we will use the notation f1(λ) ∼ f2(λ) for the fact that the
polynomials f1(λ) and f2(λ) are equivalent.
In this book we only consider polynomials with coefficients from the
real number field R or the complex number field C. Following [206], we use the
notation F for a field that is either R or C. The sets of polynomials over these
fields are designated by R[λ], C[λ] or F[λ], respectively. The sets R[λ] and C[λ]
are commutative rings without zero divisors. In what follows, the elements of
R[λ] are called real polynomials.
f(λ) = an (λ - λ1) ⋯ (λ - λn) . (1.4)
f(λ) = an (λ - λ1)^{μ1} ⋯ (λ - λq)^{μq} , μ1 + … + μq = n , (1.5)
2. For f(λ), d(λ) ∈ F[λ] with d(λ) ≠ 0 there exist polynomials q(λ), r(λ) with
f(λ) = q(λ)d(λ) + r(λ) ,
where
deg r(λ) < deg d(λ) .
Hereby, the polynomial q(λ) is called the entire part, and the polynomial
r(λ) the remainder from the division of f(λ) by d(λ).
3. Let us have f(λ), g(λ) ∈ F[λ]. It is said that the polynomial g(λ) is a
divisor of f(λ), and we write g(λ) | f(λ), if
f(λ) = q(λ)g(λ)
holds with a certain polynomial q(λ).
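Division with remainder can be sketched as follows. The helper poly_divmod is our own (coefficients in ascending powers, over the rationals); it returns the entire part and the remainder as defined above.

```python
from fractions import Fraction

# Division with remainder over F[lambda]: f = q*d + r with deg r < deg d.
# Coefficient lists are ascending; the helper name is ours, not the book's.

def poly_divmod(f, d):
    f = [Fraction(x) for x in f]
    d = [Fraction(x) for x in d]
    q = [Fraction(0)] * max(len(f) - len(d) + 1, 1)
    r = f[:]
    while len(r) >= len(d) and any(r):
        # cancel the leading term of the current remainder
        k = len(r) - len(d)
        c = r[-1] / d[-1]
        q[k] = c
        for i, di in enumerate(d):
            r[i + k] -= c * di
        while r and r[-1] == 0:
            r.pop()
    return q, r

# Example: f = lambda^3 + 1 divided by d = lambda + 1 gives the entire part
# lambda^2 - lambda + 1 and zero remainder.
q, r = poly_divmod([1, 0, 0, 1], [1, 1])
assert q == [1, -1, 1] and r == []
```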
4. For any matrix A ∈ N^{n×n} there uniquely exists a matrix adj A of the form
adj A = [ A11 … An1
          …
          A1n … Ann ] , (1.8)
where Aik is the algebraic complement (the cofactor) of the element aik of the
matrix A, which is obtained as the determinant of the matrix that remains
after deleting the i-th row and k-th column, multiplied by the sign factor (-1)^{i+k}.
The matrix adj A is called the adjoint of the matrix A. The matrices A and
adj A are connected by the relation
A (adj A) = (adj A) A = (det A) In , (1.9)
where the identity matrix In is defined by
In = [ 1N 0N … 0N
       0N 1N … 0N
       …
       0N 0N … 1N ] = diag{1N, …, 1N}
with the unit element 1N of the ring N; diag denotes the diagonal matrix.
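Identity (1.9) is easy to check numerically. The following sketch (helper names det, adj, matmul are ours) builds adj A by cofactors and verifies A · adj A = (det A) · I for a small integer matrix of our own choosing.

```python
# Cofactor construction of the adjoint matrix and check of identity (1.9):
# A * adj(A) = adj(A) * A = det(A) * I. Pure-Python, fine for small matrices.

def minor(M, i, k):
    """The matrix that remains after deleting row i and column k."""
    return [row[:k] + row[k + 1:] for r, row in enumerate(M) if r != i]

def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** k * M[0][k] * det(minor(M, 0, k)) for k in range(len(M)))

def adj(M):
    n = len(M)
    # entry (i, k) of adj A is the cofactor of a_{ki}: note the transposition
    return [[(-1) ** (i + k) * det(minor(M, k, i)) for k in range(n)] for i in range(n)]

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][j] * Y[j][k] for j in range(n)) for k in range(n)] for i in range(n)]

A = [[1, 2, 0], [3, 1, 4], [0, 2, 1]]
d = det(A)
assert matmul(A, adj(A)) == [[d if i == k else 0 for k in range(3)] for i in range(3)]
```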
denotes the minor of the matrix A which is formed from the elements that
belong simultaneously to the rows with the numbers i1, …, ip and to the
columns with the numbers k1, …, kp. Let
C = AB
where all elements are polynomials in F[λ], especially in R[λ] or C[λ].
The set of these matrices will be designated by F^{n×m}[λ], or directly
by R^{n×m}[λ] resp. C^{n×m}[λ], and their subsets containing the constant matrices are
denoted by F^{n×m}, R^{n×m} or C^{n×m}, respectively. The matrices in R^{n×m} and R^{n×m}[λ]
are called real.
2. Let, especially,
ui(λ) = [ai1(λ) … aim(λ)] , (i = 1, 2, …, p)
be a certain set of rows of width m. The above defined rows will be called
linearly dependent in F^{1×m}[λ], if and only if there exist polynomials ci(λ) ∈ F[λ],
not all zero at the same time, such that
c1(λ)u1(λ) + … + cp(λ)up(λ) = O1m .
Here O1m is the 1×m row with all elements equal to the zero polynomial.
Based on these definitions, all derived concepts and insights can be
transferred to polynomial matrices, especially the normal rank of matrices
over rings and also the formula of Binet-Cauchy.
A(λ) = A0 λ^q + A1 λ^{q-1} + … + Aq , (1.10)
4. For A(λ) ∈ F^{n×n}[λ], Matrix (1.10) is related to its determinant det A(λ),
which itself is a polynomial in F[λ]. In accordance with the above statements,
the matrix is non-singular if its determinant is different from the zero
polynomial. A non-singular matrix A(λ) is related to the non-negative number
ord A(λ) = deg det A(λ)
that is called the order of the matrix A(λ). The degree and order of a matrix
A(λ) are connected by the inequalities
ord A(λ) ≤ n deg A(λ) (1.13)
or
deg A(λ) ≥ (1/n) ord A(λ) . (1.14)
For a regular matrix A(λ) the inequalities (1.13), (1.14) become equalities.
In general, for a given order ord A(λ), the degree of a matrix A(λ) can be
an arbitrarily large number. A non-singular square polynomial matrix A(λ)
with ord A(λ) = 0, i.e. det A(λ) = const ≠ 0, is called unimodular.
A(λ) = A0 λ⁴ + A1 λ³ + A2 λ² + A3 λ + A4 ,
where
A0 = [0 0; 1 0] , A1 = [0 0; 5 1] , A2 = [0 0; 6 3] , A3 = [1 0; 5 4] , A4 = [2 1; 6 4] .
In the present case we have deg A(λ) = 4. The matrix A(λ) is non-singular,
because of n = m = 2 and det A(λ) ≢ 0. At the same time, ord A(λ) = 2 due
to
det A(λ) = 4λ² + 7λ + 2 .
Moreover, the matrix A(λ) is anomalous, because det A0 = 0.
be given. In this case we have deg A(λ) = 5. At the same time, det A(λ) = 1
and ord A(λ) = 0; thus the matrix A(λ) is unimodular.
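The relation between order and unimodularity can be illustrated for 2×2 polynomial matrices. The helpers and the illustrative matrix below are our own, not taken from the text.

```python
# ord A = deg det A; ord A = 0 (det = const != 0) characterises unimodular
# matrices. Polynomials are ascending coefficient lists; helpers are ours.

def pmul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def psub(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def pdeg(a):
    k = [i for i, c in enumerate(a) if c != 0]
    return k[-1] if k else None      # None encodes the zero polynomial

def det2(M):
    """Determinant of a 2x2 polynomial matrix."""
    return psub(pmul(M[0][0], M[1][1]), pmul(M[0][1], M[1][0]))

# An illustrative unimodular matrix of our own: [[1, lambda], [0, 1]].
U = [[[1], [0, 1]],
     [[0], [1]]]
assert pdeg(det2(U)) == 0   # ord U = 0, hence U is unimodular
```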
A1(λ) = p(λ)A2(λ) .
Theorem 1.3 (following [113]). Let the matrix A(λ) ∈ F^{n×m}[λ] have
maximal rank ρA, and let the first ρA columns of A(λ) possess a non-vanishing
minor of order ρA. Then, depending on its dimensions, the matrix A(λ)
can be transformed by left elementary operations into one of the three forms:
ρA = m = n :
p(λ)A(λ) = [ g11(λ) g12(λ) … g1n(λ)
                0   g22(λ) … g2n(λ)
                …
                0      0   … gnn(λ) ] = Al(λ) , (1.15)
ρA = n < m :
p(λ)A(λ) = [ g11(λ) g12(λ) … g1n(λ) g1,n+1(λ) … g1m(λ)
                0   g22(λ) … g2n(λ) g2,n+1(λ) … g2m(λ)
                …
                0      0   … gnn(λ) gn,n+1(λ) … gnm(λ) ] = Al(λ) , (1.16)
ρA = m < n :
p(λ)A(λ) = [ g11(λ) g12(λ) … g1m(λ)
                0   g22(λ) … g2m(λ)
                …
                0      0   … gmm(λ)
                0      0   …    0
                …
                0      0   …    0   ] = Al(λ) . (1.17)
In (1.15)-(1.17) the matrix p(λ) is unimodular, and the gii(λ) are monic
polynomials, where every gii(λ) is of highest degree in its column. Hereby, the
matrix Al(λ) is uniquely determined by A(λ). Moreover, in Formulae (1.15)
and (1.16) the matrix p(λ) is also uniquely determined.
A1(λ) = A2(λ)q(λ) .
Theorem 1.4. Let the matrix A(λ) have maximal rank ρA, and let the first
ρA rows of A(λ) possess a non-zero minor of order ρA. Then, according
to its dimensions, by applying right elementary operations, the matrix A(λ)
can be transformed into one of the three forms:
ρA = m = n :
A(λ)q(λ) = [ g11(λ)   0    …    0
             g21(λ) g22(λ) …    0
             …
             gn1(λ) gn2(λ) … gnn(λ) ] = Ar(λ) , (1.18)
ρA = n < m :
A(λ)q(λ) = [ g11(λ)   0    …    0    0 … 0
             g21(λ) g22(λ) …    0    0 … 0
             …
             gn1(λ) gn2(λ) … gnn(λ)  0 … 0 ] = Ar(λ) , (1.19)
ρA = m < n :
A(λ)q(λ) = [ g11(λ)     0      …     0
             g21(λ)   g22(λ)   …     0
             …
             gm1(λ)   gm2(λ)   …   gmm(λ)
             gm+1,1(λ) gm+1,2(λ) … gm+1,m(λ)
             …
             gn1(λ)   gn2(λ)   …   gnm(λ) ] = Ar(λ) . (1.20)
In (1.18)-(1.20) the matrix q(λ) is unimodular, and the gii(λ) are monic
polynomials, where every gii(λ) has the highest degree in its row. Hereby,
the matrix Ar(λ) is uniquely determined by A(λ). Moreover, the matrix q(λ)
in (1.18) and (1.20) is also uniquely determined.
which has degree 4. The matrix on the right side of the last equation is a
Hermitian canonical form. As a consequence of Theorem 1.3, it follows that
this Hermitian form Al(λ) and its transformation matrix p(λ) are uniquely
determined. It should be remarked that the matrix Al(λ) does not possess the
smallest possible degree among all matrices that are left-equivalent to A(λ).
Indeed, multiplying A(λ) from the left by a suitable unimodular matrix yields
a left-equivalent matrix A1(λ) with deg A1(λ) = 2. This is the minimal degree,
and this result confirms Inequality (1.14).
Let a1(λ), …, an(λ) be the rows of the matrix A(λ) and
αi = deg ai(λ) , (i = 1, …, n) .
Then A(λ) can be written in the form
A(λ) = diag{λ^{α1}, …, λ^{αn}} A0 + A1(λ) . (1.21)
Herein A1(λ) is a matrix where the degree of its i-th row is smaller than αi,
and A0 is a constant matrix. Formula (1.21) can be transformed into
A(λ) = diag{λ^{α1}, …, λ^{αn}} [A0 + A1 λ^{-1} + … + Ap λ^{-p}] , (1.22)
The number
αl = α1 + … + αn
is called the left order of the matrix A(λ). Introduce the notation
αmax = max_{1≤i≤n} {αi} .
Then obviously
deg A(λ) = αmax . (1.23)
In analogy, assuming that b1(λ), …, bn(λ) are the columns of A(λ), we introduce
βi = deg bi(λ) , (i = 1, …, n) .
The number
βr = β1 + … + βn
is called the right order of the matrix A(λ). Introduce the notation
βmax = max_{1≤i≤n} {βi} .
Then we obtain
deg A(λ) = βmax .
Example 1.6. Consider the matrix
A(λ) = [ λ²+1    λ     3λ
          λ²    λ+1   1-2λ
         λ-2     λ      2  ] (1.25)
with the representation
A(λ) = diag{λ², λ², λ} [A0 + A1 λ^{-1} + A2 λ^{-2}] (1.26)
with
A0 = [1 0 0; 1 0 0; 1 1 0] , A1 = [0 1 3; 0 1 -2; -2 0 2] , A2 = [1 0 0; 0 1 1; 0 0 0] . (1.27)
Analogously, the column representation is
A(λ) = [B0 + B1 λ^{-1} + B2 λ^{-2}] diag{λ², λ, λ} ,
where
B0 = [1 1 3; 1 1 -2; 0 1 0] , B1 = [0 0 0; 0 1 1; 1 0 2] , B2 = [1 0 0; 0 0 0; -2 0 0] .
det B0 ≠ 0
is true.
Column-reduced matrices can be generated from row-reduced matrices
simply by transposition. Therefore, in the following only row-reduced matrices
will be considered.
Lemma 1.7. For Matrix (1.21) to be row reduced, a necessary and sufficient
condition is the validity of the equation
ord A(λ) = αl . (1.28)
Indeed, from (1.22) it follows that
det A(λ) = det A0 · λ^{αl} + a1(λ) (1.29)
with deg a1(λ) < αl. For (1.28) to be valid, by (1.29) the condition det A0 ≠ 0
is necessary. If, conversely, det A0 ≠ 0 is fulfilled, then (1.28) is true, and the
matrix A(λ) is row reduced.
Example 1.8. For Matrix (1.25) we get det A0 = 0, det B0 = 5; therefore
the matrix A(λ) is column reduced but not row reduced. Hereby, we obtain
ord A(λ) = βr = 4.
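The check behind Example 1.8 can be sketched numerically, assuming the entries of Matrix (1.25) as we read them from the text: row and column reducedness are decided by the leading row-coefficient matrix A0 and the leading column-coefficient matrix B0 (helper names are ours).

```python
# Build the leading coefficient matrices of a 3x3 polynomial matrix and
# test them for singularity; entries follow our reading of Matrix (1.25).

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

# A(l) = [[l^2+1, l, 3l], [l^2, l+1, 1-2l], [l-2, l, 2]]
# stored as {power: coefficient} dicts per entry
A = [[{2: 1, 0: 1},  {1: 1},        {1: 3}],
     [{2: 1},        {1: 1, 0: 1},  {1: -2, 0: 1}],
     [{1: 1, 0: -2}, {1: 1},        {0: 2}]]

def entry_deg(e):
    return max(p for p, c in e.items() if c != 0)

row_deg = [max(entry_deg(e) for e in row) for row in A]
col_deg = [max(entry_deg(A[i][j]) for i in range(3)) for j in range(3)]

# leading coefficient of each entry relative to its row / column degree
A0 = [[A[i][j].get(row_deg[i], 0) for j in range(3)] for i in range(3)]
B0 = [[A[i][j].get(col_deg[j], 0) for j in range(3)] for i in range(3)]

assert det3(A0) == 0      # A0 singular: not row reduced
assert det3(B0) == 5      # column reduced; det B0 = 5 as in Example 1.8
```

Note that the right order then is the sum of the column degrees, 2 + 1 + 1 = 4, matching ord A(λ) = 4.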
Theorem 1.9 ([133]). Any non-singular matrix A(λ) can be made row
reduced by left-equivalent transformations.
Proof. Assume the matrix A(λ) to be given in the form of Representation (1.22).
Then for det A0 ≠ 0 the matrix A(λ) is already row reduced. Therefore, take
a singular matrix A0, i.e. det A0 = 0. Then there exists a non-zero row vector
ν = (ν1, …, νn) such that
ν A0 = O1n (1.30)
is fulfilled. Let i1, …, iq, (1 ≤ q ≤ n) be the indices of the non-zero components
of ν, and α_{i1}, …, α_{iq} the corresponding exponents αi. Denote
γ = max_{1≤j≤q} {α_{ij}} ,
and let κ be a value of the index j, for which α_{iκ} = γ is valid. Then for the row
π(λ) = (ν1 λ^{γ-α1}, ν2 λ^{γ-α2}, …, ν_{n-1} λ^{γ-α_{n-1}}, νn λ^{γ-αn}) (1.31)
the relation
π(λ) diag{λ^{α1}, …, λ^{αn}} = λ^γ ν (1.32)
holds. Construct the matrix P(λ) from the identity matrix In by exchanging
its iκ-th row for the row π(λ). Then
P(λ)A(λ) = diag{λ^{α1}, …, λ^{α_{iκ-1}}, λ^γ, λ^{α_{iκ+1}}, …, λ^{αn}} [Ā0 + Ā1 λ^{-1} + … + Āp λ^{-p}] (1.33)
holds with
Āi = D Ai , (1.34)
and the matrix D is generated from the identity matrix In by exchanging the
iκ-th row for the row ν:
D = [ 1  0 … 0 … 0  0
      …
      ν1 ν2 … ν_{iκ} … ν_{n-1} νn
      …
      0  0 … 0 … 0  1 ] . (1.35)
Obviously, det D = ν_{iκ} ≠ 0. From (1.30) and (1.33) it follows that the iκ-th
row of the matrix Ā0 is identical to zero. That means, Equation (1.33) can be
written in the form
P(λ)A(λ) = diag{λ^{α1}, …, λ^{α_{iκ-1}}, λ^{γ-1}, λ^{α_{iκ+1}}, …, λ^{αn}}
[Ă0 + Ă1 λ^{-1} + … + Ăp λ^{-p}] , (1.36)
Example 1.10. For Matrix (1.25), Condition (1.30) yields
ν1 + ν2 + ν3 = 0 , ν3 = 0 ,
so we choose ν = (1, -1, 0), κ = 1 and γ = 2. Applying (1.31), (1.32) and
(1.34) yields
P(λ) = D = [1 -1 0; 0 1 0; 0 0 1] .
Using this result and (1.34), we find
Ā0 = [0 0 0; 1 0 0; 1 1 0] , Ā1 = [0 0 5; 0 1 -2; -2 0 2] , Ā2 = [1 -1 -1; 0 1 1; 0 0 0] .
Now, exchange the first row of Ā0 for the first row of Ā1, the first row of Ā1
for the first row of Ā2, and the first row of Ā2 for the zero row. As result we
get
Ă0 = [0 0 5; 1 0 0; 1 1 0] , Ă1 = [1 -1 -1; 0 1 -2; -2 0 2] , Ă2 = [0 0 0; 0 1 1; 0 0 0] .
The matrix Ă0 is regular. Therefore, the procedure stops, and with the help
of (1.36) we get
P(λ)A(λ) = diag{λ, λ², λ} [Ă0 + Ă1 λ^{-1} + Ă2 λ^{-2}]
= [  1    -1   5λ-1
    λ²   λ+1   1-2λ
   λ-2    λ     2   ] .
be given, with row degrees {α1, …, αn} and {β1, …, βn}, respectively. (1.37)
Then, if the matrices A(λ) and B(λ) are row reduced and left equivalent,
the sets of numbers {α1, …, αn} and {β1, …, βn} coincide.
Corollary 1.12. If the matrices A(λ) and B(λ) are left equivalent, and the
matrix A(λ) is row reduced, then
α1 + … + αn ≤ β1 + … + βn
is true, where the equality takes place if and only if the matrix B(λ) is also
row reduced.
Proof. Because the matrices (1.37) are left equivalent, they possess the same
order. Therefore, by Lemma 1.7 it follows
β1 + … + βn ≥ ord B(λ) = ord A(λ) = α1 + … + αn ,
where the equality in the left part takes place exactly when the matrix B(λ)
is row reduced.
Corollary 1.13. Under the conditions of Theorem 1.11,
deg A(λ) = deg B(λ) . (1.38)
Proof. From (1.23) and (1.37), we get
deg A(λ) = max_{1≤i≤n} {αi} = max_{1≤i≤n} {βi} = deg B(λ) .
2.
Theorem 1.15 ([51]). Any n×m matrix A(λ) with the normal rank ρ is
equivalent to the matrix
SA(λ) = [ Sρ(λ)    Oρ,m-ρ
          On-ρ,ρ   On-ρ,m-ρ ] , (1.41)
where
Sρ(λ) = diag{a1(λ), …, aρ(λ)} ,
and the ai(λ) are monic polynomials, where every polynomial ai+1(λ) is
divisible by ai(λ).
Matrix (1.41) is uniquely determined by the matrix A(λ), and it is named
the Smith-canonical form of the matrix A(λ).
1. Utilising the results from the preceding section, we are able to transfer
known results about the rank of number matrices to the normal rank of
polynomial matrices. Let
D(λ) = A(λ)B(λ)
with an n×ℓ matrix A(λ) and an ℓ×m matrix B(λ). Then the inequalities
rank D(λ) ≤ min{rank A(λ), rank B(λ)} (1.43)
and
rank D(λ) ≥ rank A(λ) + rank B(λ) - ℓ (1.44)
are true.
Proof. Assume
rank D(λ) = ρD
with
ρD > min{ρA, ρB} , (1.45)
where ρA = rank A(λ), ρB = rank B(λ). Then, due to Corollary 1.17, there
exists a value λ = λ0 with
rank A(λ0) = ρA , rank B(λ0) = ρB , rank D(λ0) = ρD ,
which gives
rank D(λ0) ≤ min{rank A(λ0), rank B(λ0)} .
But this contradicts (1.45). This contradiction proves the validity of Inequality
(1.43).
Inequality (1.44) can be proved analogously, because a corresponding
inequality holds for constant matrices.
2. From the inequalities of Sylvester (1.43), (1.44) ensue the following
relations:
rank[A(λ)B(λ)] = rank B(λ) for rank A(λ) = ℓ (1.46)
and
rank[A(λ)B(λ)] = rank A(λ) for rank B(λ) = ℓ . (1.47)
Herein, rank A(λ) = ℓ can only be fulfilled for n ≥ ℓ, and rank B(λ) = ℓ
only for m ≥ ℓ. Especially, Equation (1.46) is valid if the matrix A(λ) is
non-singular, and Equation (1.47) holds if the matrix B(λ) is non-singular.
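The Sylvester bounds can be illustrated numerically for constant matrices, to which polynomial matrices reduce after evaluation at a fixed point. The rank routine and the sample matrices below are our own demo code.

```python
from fractions import Fraction

# Check of the Sylvester bounds (1.43)-(1.44): for D = A*B with an n x l
# matrix A and an l x m matrix B,
#   rank A + rank B - l <= rank D <= min(rank A, rank B).

def rank(M):
    """Rank by exact Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 0, 1], [0, 1, 1]]          # 2 x 3, rank 2
B = [[1, 0], [0, 1], [1, 1]]        # 3 x 2, rank 2
D = matmul(A, B)
l = 3
assert rank(A) + rank(B) - l <= rank(D) <= min(rank(A), rank(B))
```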
4. Applying the above thoughts used in the proofs of Theorem 1.18, the
known statements for number matrices [51] can also be proved for polynomial
matrices.
Theorem 1.19. Let the matrices A(λ), B(λ) and the matrix D(λ) be
connected by
D(λ) = [A(λ)  B(λ)] .
Then,
rank D(λ) ≤ rank A(λ) + rank B(λ) . (1.48)
Theorem 1.20. For any polynomial matrices A(λ) and B(λ) of equal
dimension,
rank[A(λ) + B(λ)] ≤ rank A(λ) + rank B(λ) .
Remark 1.21. A relation corresponding to the last one was proven for number
matrices in [147].
3. The monic greatest common divisor of all minors of i-th order of the
matrix A(λ) is named its i-th determinantal divisor. If rank A(λ) = ρ is true,
then there exist the determinantal divisors D1(λ), D2(λ), …, Dρ(λ);
Dρ(λ) is named the greatest determinantal divisor. It can be shown that the
set of determinantal divisors is invariant against equivalence transformations
of the matrix A(λ). The polynomials
ai(λ) = Di(λ) / Di-1(λ) , D0(λ) = 1 , (i = 1, …, ρ) (1.51)
are called the invariant polynomials of the matrix A(λ). From (1.51) it follows
that
Dρ(λ) = a1(λ) a2(λ) ⋯ aρ(λ) . (1.52)
Dρ(λ) = (λ - λ1)^{ν1} ⋯ (λ - λq)^{νq} , (1.53)
where all numbers λi are different. We take from (1.51) that every invariant
polynomial ai(λ) permits a factorisation of the form
ai(λ) = (λ - λ1)^{ν1i} ⋯ (λ - λq)^{νqi} (1.54)
with
0 ≤ ν_{pi} ≤ ν_{p,i+1} ≤ νp , (p = 1, …, q) .
The factors different from one in the expression (1.54) are called elementary
divisors of the polynomial matrix A(λ) in the field C. In general, several
elementary divisors may belong to one and the same root λi. It follows from
the above that the set of invariant polynomials uniquely determines the set of
elementary divisors. The reverse is also true if the rank of the matrix A(λ) is
known.
Example 1.22. Assume the rank of the matrix A(λ) to be equal to four, and
the whole of its elementary divisors to be
(λ - 2)² , (λ - 2)² , λ - 2 , λ - 3 , λ - 3 , λ - 4 .
Using the set of invariant polynomials, we are able to specify immediately the
Smith-canonical form of the matrix A(λ). In the present case we get
SA(λ) = [ 1    0        0                0
          0   λ-2       0                0
          0    0   (λ-2)²(λ-3)           0
          0    0        0      (λ-2)²(λ-3)(λ-4) ] .
Lemma 1.23 ([51]). The system of elementary divisors of any diagonal
matrix is the union of the elementary divisors of its elements.
λ² , λ , λ - 1 , (λ - 1)² , λ , λ - 1
We realise immediately that the matrix A1(λ) possesses the one and only
elementary divisor λ³. The matrix A2(λ) for a = 0 has the two equal elementary
divisors λ and λ. In case of a ≠ 0, we find for A2(λ) the two different
elementary divisors λ and λ - a. That is why, for a ≠ 0, the totality of elementary
divisors of the matrix Ad(λ) = diag{A1(λ), A2(λ)} consists of λ³, λ, λ - a,
and the Smith-canonical form comes out as
SAd(λ) = diag{1, 1, 1, 1, λ, λ³(λ - a)} .
However, in case of a = 0, we find
SAd(λ) = diag{1, 1, 1, λ, λ, λ³} .
Remark 1.27. The above example illustrates the fact that the dependence of
the Smith-canonical form (or of the totality of its elementary divisors) on the
coefficients of the polynomial matrix is numerically unstable.
dA(λ) = det A(λ)
is said to be the characteristic polynomial of the matrix A(λ), and the equation
dA(λ) = 0 (1.55)
is its characteristic equation. The roots of the characteristic equation are called
the eigenvalues of the matrix A(λ). For A(λ) ∈ F^{n×n}[λ], the characteristic
polynomial dA(λ) is equivalent to the greatest determinantal divisor Dn(λ).
Therefore, the characteristic equation (1.55) is equivalent to
Dρ(λ) = 0 , (1.56)
which is called the latent equation. Its roots, the latent numbers, are the
numbers λi that are configured by the factorisation (1.53). The latent roots
of square matrices coincide with their eigenvalues.
Owing to (1.52), the latent equation can be written in the form
a1(λ) a2(λ) ⋯ aρ(λ) = 0 .
Hence it follows that every latent number is a root of at least one invariant
polynomial.
Corollary 1.30. It follows from Theorem 1.28 that the latent numbers λi of
a non-degenerated matrix A(λ) are exactly those numbers λi, for which
rank A(λi) < ρA
becomes valid.
4. For a non-degenerated matrix A(λ), let the monic greatest common
divisor of the minors of ρA-th order be equal to 1. In that case, the latent equation
(1.56) has no roots; thus the matrix A(λ) also does not possess latent roots.
Such polynomial matrices are said to be alatent. All invariant polynomials of
an alatent matrix are equal to 1.
Alatent square matrices turn out to be unimodular. For an alatent matrix
A(λ), the number matrix A(λ0) possesses its maximal rank for all λ0 ∈ C.
Proof. Under the made suppositions, due to Theorem 1.4, the Hermitian form
Ar(λ) has the shape
Ar(λ) = [In  On,m-n] ,
and the claimed relation emerges from (1.19) for Φ(λ) = q^{-1}(λ).
Analogously, we conclude from (1.17) that for n > m the vertical n×m
matrix A(λ) is alatent if and only if
A(λ) = Φ(λ) [ Im
              On-m,m ]
becomes true with a certain unimodular matrix Φ(λ).
where
6.
Theorem 1.32. Suppose the n×m matrix A(λ) to be alatent. Then every
submatrix formed from any of its rows is also alatent.
Proof. Take a positive integer p < n and present the matrix A(λ) in the form
A(λ) = [ a11(λ)    …  a1m(λ)
         …
         ap1(λ)    …  apm(λ)
         ap+1,1(λ) …  ap+1,m(λ)
         …
         an1(λ)    …  anm(λ) ] = [ Ap(λ)
                                   A1(λ) ] . (1.59)
It is shown indirectly that the submatrix Ap(λ) above the line is alatent.
Suppose the contrary. Then owing to (1.58), we get
Ap(λ) = ap(λ)bp(λ) ,
where the matrix ap(λ) is latent, and ord ap(λ) > 0. Applying this result,
from (1.59)
A(λ) = [ ap(λ)   Op,n-p
         On-p,p  In-p  ] [ bp(λ)
                           A1(λ) ]
is acquired. Let λ0 be an eigenvalue of the matrix ap(λ), so
A(λ0) = [ ap(λ0)  Op,n-p
          On-p,p  In-p  ] [ bp(λ0)
                            A1(λ0) ]
is valid. Because the rank of the first factor on the right side is smaller than n,
this implies rank A(λ0) < n, which is in contradiction to the supposed alatency
of A(λ).
Remark 1.33. In the same way, it is shown that any submatrix of an alatent
matrix A(λ) built from any of its columns is also alatent.
Corollary 1.34. Every submatrix built from any rows or columns of a
unimodular matrix is alatent.
Dρ(λ) = aρ(λ) , D1(λ) = D2(λ) = … = Dρ-1(λ) = 1 .
rank A(λi) = ρA - 1 , (i = 1, …, q)
Theorem 1.35. A necessary and sufficient condition for the simplicity of the
n×n matrix A(λ) is that there exists an n×1 column B(λ), such that the
matrix L(λ) = [A(λ)  B(λ)] becomes alatent.
Proof. Sufficiency: Let the matrix [A(λ)  B(λ)] be alatent and λi, (i =
1, …, q) the eigenvalues of A(λ). Hence it follows
rank [A(λi)  B(λi)] = n , (i = 1, …, q) .
with
B(λ) = Φ(λ) [ 0
              …
              0
              1 ] . (1.61)
The matrix L(λ) is alatent per construction.
Theorem 1.37. Let the matrices A(λ) ∈ F^{n×n}[λ], B(λ) ∈ F^{n×n}[λ] be given,
where the matrix A(λ) is simple, but the matrix B(λ) is of arbitrary structure.
Furthermore, let us have det A(λ) = d(λ) and
Lemma 1.40. For the matrix A ∈ F^{n×n}, we assume rank A = ρ, and let ‖·‖
be a certain norm in F^{n×n}. Then there exists a positive constant δ0, such that
for B ∈ F^{n×n} with ‖B‖ < δ0 always
rank(A + B) ≥ ρ .
d(λ) = d0 λ^k + … + dk , d0 ≠ 0 ,
and consider
d(λ, ε) = det[A(λ) + εB(λ)] = d0(ε) λ^k + … + dk(ε) ,
where
di(ε) = di + di1 ε + di2 ε² + … , (i = 1, …, k)
are polynomials in the variable ε with di(0) = di. Let λ̃ be a root of the
equation
d(λ, 0) = d(λ) = 0
with multiplicity μ, i.e. an eigenvalue of the matrix A(λ) with multiplicity μ.
Since the matrix A(λ) is simple, we obtain
rank A(λ̃) = n - 1 .
Consider now the equation
d(λ, ε) = 0 .
As known from [188], for |ε| < ε0, where ε0 > 0 is sufficiently small, there exist
continuous functions λ̃i(ε), (i = 1, …, μ), such that
d(λ̃i(ε), ε) = det[A(λ̃i(ε)) + εB(λ̃i(ε))] = 0 , (1.66)
where some of the functions λ̃i(ε) may coincide. Thereby, the limits
lim_{ε→0} λ̃i(ε) = λ̃ , (i = 1, …, μ)
hold. Write
A(λ̃i(ε)) + εB(λ̃i(ε)) = A(λ̃) + Gi(ε)
with
Gi(ε) = εB(λ̃) + L̃i(ε) ,
where the matrices L̃i(ε) for |ε| < ε0 depend continuously on ε, and L̃i(0) = Onn
holds. Next choose a constant ε1 > 0 with the property that for |ε| < ε1 and all
i = 1, …, μ, the relation
‖Gi(ε)‖ = ‖εB(λ̃) + L̃i(ε)‖ < δ0
is valid. Then, due to Lemma 1.40,
rank[A(λ̃i(ε)) + εB(λ̃i(ε))] = rank[A(λ̃) + Gi(ε)] ≥ n - 1 .
On the other side, it follows from (1.66) that for |ε| < ε1, we have
rank[A(λ̃i(ε)) + εB(λ̃i(ε))] ≤ n - 1 .
The above considerations can be made for all eigenvalues of the matrix A(λ);
therefore, Theorem 1.37 is proved by (1.60).
Rh(λ) = [a(λ)  b(λ)] , Rv(λ) = [ a(λ)
                                 c(λ) ] , (1.67)
where the first one is horizontal, and the second one is vertical. Due to
Rv'(λ) = [a'(λ)  c'(λ)] ,
the properties of vertical pairs can immediately be deduced from the properties
of horizontal pairs. Therefore, we will now consider only horizontal pairs. The
pairs (a(λ), b(λ)), [a(λ), c(λ)] are called non-degenerated if the matrices (1.67)
are non-degenerated. Unless explicitly supposed otherwise, we will always
consider non-degenerated pairs.
2. Let there exist for the pair (a(λ), b(λ)) a polynomial matrix g(λ), such that
a(λ) = g(λ)a1(λ) , b(λ) = g(λ)b1(λ) (1.68)
with polynomial matrices a1(λ), b1(λ). Then the matrix g(λ) is called a
common left divisor of the pair (a(λ), b(λ)). The common left divisor g(λ) is named
a greatest common left divisor (GCLD) of the pair (a(λ), b(λ)), if for any
common left divisor g1(λ)
g(λ) = g1(λ)ĝ(λ)
with a polynomial matrix ĝ(λ) is true. As known, any two GCLD are
right-equivalent [69].
4. For the pair (a(λ), b(λ)) there exists a unimodular matrix
r(λ) = [ r11(λ)  r12(λ)
         r21(λ)  r22(λ) ] , (1.69)
with block rows of heights n and m, for which
[a(λ)  b(λ)] [ r11(λ)  r12(λ)
               r21(λ)  r22(λ) ] = [N(λ)  Onm] (1.70)
holds. As known [69], the matrix N(λ) is a GCLD of the pair (a(λ), b(λ)).
5. Let
s(λ) = r^{-1}(λ) = [ s11(λ)  s12(λ)
                     s21(λ)  s22(λ) ]
be a unimodular polynomial matrix. Then we get from (1.70)
[a(λ)  b(λ)] = [N(λ)  Onm] [ s11(λ)  s12(λ)
                             s21(λ)  s22(λ) ] .
Due to Corollary 1.34, the pair (s11(λ), s12(λ)) is irreducible. Therefore, the
next statement is true:
If Relation (1.68) is true, and g(λ) is a GCLD of the pair (a(λ), b(λ)),
then the pair (a1(λ), b1(λ)) is irreducible.
The reverse statement is also true:
If Relation (1.68) is valid, and the pair (a1(λ), b1(λ)) is irreducible, then
the matrix g(λ) is a GCLD of the pair (a(λ), b(λ)).
If
p(λ) [ a(λ)
       c(λ) ] = [ L(λ)
                  On,m ]
is valid with a unimodular matrix p(λ), then L(λ) is a GCRD of the
corresponding pair [a(λ), c(λ)]. If L(λ) and L1(λ) are two GCRD, then they are
related by
L(λ) = f(λ)L1(λ) ,
where f(λ) is a unimodular matrix.
The vertical pair [a(λ), c(λ)] is called irreducible, if the matrix Rv(λ) in
(1.67) is alatent. The pair [a(λ), c(λ)] turns out to be irreducible, if and only
if, there exists a unimodular matrix p(λ) with
p(λ) [ a(λ)
       c(λ) ] = [ Im
                  On,m ] .
Immediately, it is seen that the pair [a(λ), c(λ)] is irreducible exactly when
there exist polynomial matrices U(λ), V(λ), for which
U(λ)a(λ) + V(λ)c(λ) = Im .
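The last criterion can be illustrated in the scalar case: for a(λ) = λ and c(λ) = λ - 1, the constant polynomials U = 1, V = -1 already satisfy U·a + V·c = 1, so the pair is irreducible. A sketch with our own helpers:

```python
# Scalar Bezout identity U(lambda) a(lambda) + V(lambda) c(lambda) = 1
# for the coprime pair a = lambda, c = lambda - 1.
# Polynomials are ascending coefficient lists; helper names are ours.

def pmul(a, b):
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a)); b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

a = [0, 1]       # lambda
c = [-1, 1]      # lambda - 1
U = [1]
V = [-1]
assert trim(padd(pmul(U, a), pmul(V, c))) == [1]   # the unit of F[lambda]
```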
Theorem 1.41. A necessary and sufficient condition for the pair (a(λ), b(λ))
to be irreducible is the existence of a pair (αl(λ), βl(λ)), such that the matrix
Ql(λ) = [ a(λ)   b(λ)
          αl(λ)  βl(λ) ]
becomes unimodular.
For the pair [a(λ), c(λ)] to be irreducible, it is necessary and sufficient that
there exists a pair [αr(λ), βr(λ)], such that the matrix
Qr(λ) = [ βr(λ)  c(λ)
          αr(λ)  a(λ) ]
becomes unimodular.
9.
Lemma 1.42. Necessary and sufficient for the irreducibility of the pair
(a(λ), b(λ)), with the n×n and n×m polynomial matrices a(λ) and b(λ),
is the condition
rank Rh(λi) = rank [a(λi)  b(λi)] = n , (i = 1, …, q) , (1.72)
10.
Lemma 1.43. Let the pair (a(λ), b(λ)) be given with the n×n and n×m
polynomial matrices a(λ), b(λ). Then, for the pair (a(λ), b(λ)) to be irreducible,
it is necessary that the matrix a(λ) has no more than m invariant polynomials
different from 1.
A(λ) = Aλ + B (1.73)
det(Aλ + B) ≢ 0 .
2. In accordance with the general definition, the two matrices of equal
dimension
A(λ) = Aλ + B , A1(λ) = A1λ + B1 (1.74)
are called left(right)-equivalent, if there exists a unimodular matrix p(λ)
(q(λ)), such that
A(λ) = p(λ)A1(λ) (A(λ) = A1(λ)q(λ)) .
The matrices (1.74) are called equivalent, if
A(λ) = p(λ)A1(λ)q(λ)
with unimodular matrices p(λ), q(λ). As follows from the above, the matrices
(1.74) are left(right)-equivalent exactly if their Hermitian canonical forms
coincide. For the equivalence of the matrices (1.74), it is necessary and
sufficient that their Smith-canonical forms coincide.
3. The matrices (1.74) are named strictly equivalent, if there exist constant
non-singular matrices P, Q with
A(λ) = P A1(λ) Q . (1.75)
If in (1.74) the conditions det A ≠ 0, det A1 ≠ 0 are valid, i.e. the matrices are
regular, then the matrices A(λ), A1(λ) are equivalent exactly when they are
strictly equivalent. If det A = 0 or det A1 = 0, i.e. the matrices (1.74) are
anomalous, then the conditions for equivalence and strict equivalence do not
coincide.
where a is a constant.
Theorem 1.45 ([51]). Let
Furthermore, let
( 1 )1 , . . . , ( q )q , 1 + . . . + q = (1.78)
with
    Ā = diag{Jν1(λ1), . . . , Jνq(λq)} ,   Â = diag{Jp1(0), . . . , Jpμ(0)} ,    (1.80)
where
    U = diag{Iρ, Â} ,   V = diag{Ā, In−ρ} .    (1.82)
Theorem 1.48. Let Relation (1.77) be true for the non-singular anomalous
matrix (1.73). Then there exists a unimodular matrix P(λ), such that
    P(λ)(Aλ + B) = Ā(λ) = Āλ + B̄ .    (1.83)
Moreover,
    det [ Ā1
          B̄2 ] ≠ 0    (1.85)
is true together with
    deg P(λ) ≤ n − ρ .    (1.86)
    κ1 + . . . + κn = ρ ,
among the numbers κ1, . . . , κn there are exactly ρ with the value one, and
the other n − ρ numbers are zero. Without loss of generality, we assume the
succession
    κ1 = κ2 = . . . = κρ = 1 ,   κρ+1 = κρ+2 = . . . = κn = 0 .
Then the matrix Ā in (1.83) takes the shape (1.84). Furthermore, if the matrix
Ā(λ) is represented in the form (1.87), then with respect to (1.79) and (1.84),
we get
    Ā0 = [ Ā1
           B̄2 ] .
Since the matrix Ā(λ) is row reduced, Relation (1.85) arises.
It remains to show Relation (1.86). As follows from (1.36), each step de-
creases the degree of one of the rows of the transformed matrices at least by
one. Hence each row of the matrix A(λ) cannot be transformed more than
once. Therefore, the number of transformation steps is at most n − ρ. Since,
however, in every step the transformation matrix P(λ) is either constant or
of degree one, Relation (1.86) holds.
Corollary 1.49. In the row-reduced form (1.83), n − ρ rows of the matrix
Ā(λ) are constant. Moreover, the rank of the matrix built from these rows is
equal to n − ρ, i.e., these rows are linearly independent.
Example 1.50. Consider the anomalous matrix
    A(λ) = Aλ + B = [ 1 1 2          [ 2 1 3        [ λ+2  λ+1  2λ+3
                      1 1 2      λ +   3 2 5      =   λ+3  λ+2  2λ+5
                      1 1 3 ]          3 2 6 ]        λ+3  λ+2  3λ+6 ]
with
    A0 = A ,   A1 = B .
In the first transformation step (1.30), we obtain
    γ1 + γ2 + γ3 = 0 ,
    2γ1 + 2γ2 + 3γ3 = 0 ,
hence
1 1 2 000
A1 () = P1 ()A() = diag{1, , } 1 1 2 + 3 2 5 1 .
1 1 3 326
B1 = LBL1
Remark 1.52. Theorem 1.51 implies the following property. If the matrix B
(the matrix λIn − B) has the entirety of elementary divisors
    (λ − λ1)^ν1 , . . . , (λ − λq)^νq ,   ν1 + . . . + νq = n ,
then it is similar to the Jordan matrix
    J = diag{ Jν1(λ1), . . . , Jνq(λq) } .    (1.88)
The matrix Qc(A, B) is named the controllability matrix of the pair (A, B).
Some statements regarding the controllability of pairs are listed now:
a) If the pair (A, B) is controllable, and the n × n matrix R is non-singular,
then also the pair (A1, B1) with A1 = RAR^−1, B1 = RB is controllable.
Indeed, from (1.89) we obtain
    Qc(A1, B1) = [ RB  RAB  . . .  RA^{n−1}B ] = R Qc(A, B) ,
Proof. The controllability matrix of the pair (A, LB) has the shape
    Qc(A, LB) = [ LB  ALB  . . .  A^{n−1}LB ]
              = L [ B  AB  . . .  A^{n−1}B ] = L Qc(A, B) ,    (1.90)
where Qc(A, B) is the controllability matrix (1.89). If the pair (A, B)
is not controllable, then we have rank Qc(A, B) < n, and there-
fore rank Qc(A, LB) < n. Thus the first statement is proved. If the
pair (A, B) is controllable and the matrix L is non-singular, then we
have rank Qc(A, B) = n, rank L = n, and from (1.90) it follows that
rank Qc(A, LB) = n. Hence the second statement is shown. Finally, if the
matrix L is singular, then rank L < n and rank Qc(A, LB) < n are true,
which proves the third statement.
c) Controllable pairs are structurally stable; this is stated in the next theorem.
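The controllability matrix (1.89) and the rank test used above can be sketched as follows; the double-integrator pair is a hypothetical example, not one from the book:

```python
import numpy as np

def controllability_matrix(A, B):
    # Q_c(A, B) = [B  AB  ...  A^(n-1)B], cf. (1.89)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical double-integrator pair, which is controllable
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

Qc = controllability_matrix(A, B)
controllable = np.linalg.matrix_rank(Qc) == A.shape[0]
print(controllable)
```

The pair is controllable exactly when Q_c has full row rank n.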
Theorem 1.54. Let the pair (A, B) be controllable, and (A1, B1) be an
arbitrary pair of the same dimension. Then there exists a positive number
ε0, such that the pair (A + εA1, B + εB1) is controllable for all |ε| < ε0.
Proof. Using (1.89), we obtain
    Qc(A + εA1, B + εB1) = Qc(A, B) + εQ1 + . . . + ε^n Qn ,    (1.91)
where the Qi, (i = 1, . . . , n) are constant matrices that do not depend
on ε. Since the pair (A, B) is controllable, the matrix Qc(A, B) contains
a non-zero minor of n-th order. Then due to Lemma 1.39, for sufficiently
small |ε|, the corresponding minor of the matrix (1.91) also remains dif-
ferent from zero.
Remark 1.55. Non-controllable pairs do not possess the property of struc-
tural stability. If the pair (A, B) is not controllable, then there exists a
pair (A1, B1) of equal dimension, such that the pair (A + εA1, B + εB1)
becomes controllable for arbitrarily small |ε| > 0.
8. The vertical pair [A, C] built from the constant m × m matrix A and the
n × m matrix C is called observable, if the vertical pair of polynomial matrices
[λIm − A, C] is irreducible. Obviously, the pair [A, C] is observable, if and
only if the horizontal pair (A′, C′) is controllable, where the prime denotes
transposition. For this reason, observable pairs possess all the properties
that have been derived above for controllable pairs. Especially, observable
pairs are structurally stable.
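The duality just stated gives an immediate observability test: apply the controllability-matrix rank test to the transposed pair. A sketch with a hypothetical (A, C) pair:

```python
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_observable(A, C):
    # [A, C] is observable iff the transposed pair (A', C') is controllable
    return np.linalg.matrix_rank(controllability_matrix(A.T, C.T)) == A.shape[0]

# Hypothetical example: measuring the first state of a 2x2 Jordan block
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))
```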
Proof. Let
    det(λIn − A) = dA(λ) ,   deg dA(λ) = n .
Then we obtain
    det(λIn − A − εB) = dA(λ) + εd1(ε, λ)
with deg d1(ε, λ) < n for all ε. Therefore, by virtue of Theorem 1.37, there
exists an ε0, such that for |ε| < ε0 the matrix λIn − A − εB remains simple,
i.e. the matrix A + εB is cyclic.
2. Square constant matrices that are not cyclic will in the sequel be called com-
posed. Composed matrices are not equipped with the property of structural
stability in the above defined sense. For any composed matrix A, we can find
a matrix B, such that the sum A + εB becomes cyclic, however small |ε| > 0 is
chosen. Moreover, the sum A + εB will be composed only in some special
cases. This fact is illustrated by a 2 × 2 matrix in the next example.
A = LBL1 = B .
5. Assume
    d(λ) = λ^n + d1 λ^{n−1} + . . . + dn    (1.93)
to be a monic polynomial. Then the n × n matrix AF of the form

    AF = [  0     1     0    . . .   0     0
            0     0     1    . . .   0     0
            .     .     .     . .    .     .
            0     0     0    . . .   0     1
          −dn  −dn−1  −dn−2  . . .  −d2   −d1 ]    (1.94)

satisfies
    det(λIn − AF) = λ^n + d1 λ^{n−1} + . . . + dn = d(λ) .
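The companion (Frobenius) matrix (1.94) is easy to build and to verify numerically; a small sketch for a hypothetical cubic polynomial:

```python
import sympy as sp

lam = sp.symbols('lambda')

def frobenius_matrix(coeffs):
    # Companion (Frobenius) matrix (1.94) of the monic polynomial
    # d(lambda) = lambda**n + d1*lambda**(n-1) + ... + dn; coeffs = [d1, ..., dn]
    n = len(coeffs)
    AF = sp.zeros(n, n)
    for i in range(n - 1):
        AF[i, i + 1] = 1                       # superdiagonal of ones
    for j, d in enumerate(reversed(coeffs)):
        AF[n - 1, j] = -d                      # last row: -dn, ..., -d1
    return AF

AF = frobenius_matrix([2, -5, 6])   # d(lambda) = lambda**3 + 2*lambda**2 - 5*lambda + 6
print(sp.expand(sp.det(lam * sp.eye(3) - AF)))
```

The printed characteristic polynomial reproduces d(λ), as the display above claims.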
    a1(λ) = a2(λ) = . . . = an−1(λ) = 1 ,   an(λ) = f(λ) .
where the matrices p(λ), q(λ) are unimodular, and d(λ) is the characteristic
polynomial of the matrix A. From the last equation, we conclude that the set
of invariant polynomials of the cyclic matrix A coincides with the set of invari-
ant polynomials of the accompanying Frobenius matrix of its characteristic
polynomial d(λ). Hereby, the matrices λIn − A and λIn − AF are equivalent,
hence the matrices A and AF are similar, i.e.
    A = L AF L^{−1}
be a Jordan block. The matrix Jn(a) turns out to be cyclic, because the matrix

    [ λ−a   −1    0   . . .    0    0    0
       0   λ−a   −1   . . .    0    0    0
       .    .     .    . .     .    .    .
       0    0     0   . . .   λ−a  −1    0
       0    0     0   . . .    0   λ−a  −1 ]

built from the first n − 1 rows of the matrix λIn − Jn(a), is alatent.
Let us represent the polynomial (1.93) in the form
    d(λ) = (λ − λ1)^ν1 · · · (λ − λq)^νq ,
    J = diag{ Jν1(λ1), . . . , Jνq(λq) }    (1.97)
    λIn − J = diag{ λIν1 − Jν1(λ1), . . . , λIνq − Jνq(λq) } ,    (1.98)
Obviously, we have
    det[λIνi − Jνi(λi)] = (λ − λi)^νi ,
    det(λIn − J) = (λ − λ1)^ν1 · · · (λ − λq)^νq = d(λ) ,
    rank(λi In − J) = n − 1 ,   (i = 1, . . . , q) ,
that means, the matrix J in (1.97) is cyclic. Therefore, Matrix (1.97) is similar to the
accompanying Frobenius matrix of the polynomial (1.93), thus
    J = L AF L^{−1} ,
    τ(λ) = (λIp − A, B, C) ,    (1.101)
Proof. Since the pair (A, B) is controllable and the pair [A, C] is observable,
there exists, owing to Theorem 1.54, an ε1 > 0, such that the pair (A +
εA1, B + εB1) becomes controllable and the pair [A + εA1, C + εC1] observable
for all |ε| < ε1. Furthermore, due to Theorem 1.56, there exists an ε2 > 0,
such that the matrix A + εA1 becomes cyclic for all |ε| < ε2. Consequently, for
|ε| < min(ε1, ε2) = ε0 all realisations (A + εA1, B + εB1, C + εC1) are simple.
Remark 1.59. Realisations that are not simple are not provided with the prop-
erty of structural stability. For instance, from the above considerations we
come to the following conclusion:
Let the realisation (A, B, C) be not simple, and (A1, B1, C1) be a random
realisation of equal dimension, where the entries of the matrices A1, B1, C1
are altogether statistically independent and uniformly distributed in a certain
interval [−ε, ε]. Then the realisation (A + A1, B + B1, C + C1) will be simple
with probability 1.
is used.
If, however, the nominal realisation (A, B, C) is not simple, then the struc-
tural properties will, roughly speaking, not be preserved even for tiny deviations.
Let in particular
    m2(λ) = a(λ) m1(λ) ,   d2(λ) = a(λ) d1(λ)
3. Let the fraction (2.1) be given, and let g(λ) be the GCD of the numerator
m(λ) and the denominator d(λ), such that
    m(λ) = g(λ) m1(λ) ,   d(λ) = g(λ) d1(λ)
with coprime m1(λ), d1(λ). Then we have
    φ(λ) = m1(λ) / d1(λ) .    (2.4)
Notation (2.4) is called an irreducible form of the rational fraction. Further-
more, assume
    d1(λ) = d0 λ^n + d1 λ^{n−1} + . . . + dn ,   d0 ≠ 0 .
Then, the numerator and denominator of (2.4) can be divided by d0, yielding
    φ(λ) = m̄(λ) / d̄(λ) .
Herein the numerator and denominator are coprime polynomials, and besides
the polynomial d̄(λ) is monic. This representation of a rational fraction will
be called its standard form. The standard form of a rational fraction is unique.
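The reduction to standard form (cancel the GCD, then make the denominator monic) can be sketched with a hypothetical fraction; the numerator and denominator below are illustrative only:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical fraction with GCD g(lambda) = lambda + 1 and leading coefficient d0 = 4
m = 2*lam**2 - 2
d = 4*lam**2 - 4*lam - 8

frac = sp.cancel(m / d)            # irreducible form (2.4)
num, den = sp.fraction(frac)
d0 = sp.LC(den, lam)               # leading coefficient of the denominator
num, den = sp.expand(num / d0), sp.expand(den / d0)   # standard form: den monic
print(num, '/', den)
```

The result has a monic denominator and coprime numerator and denominator, i.e. the unique standard form.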
6. In algebra, it is proved that the sets of rational fractions C(λ), R(λ) with
the above explained rules for addition and multiplication form fields. The zero
element of these fields proves to be the fraction 0/1, the unit element is the
rational fraction 1/1. If in (2.1) we have m(λ) ≢ 0, then the inverse element
φ^{−1}(λ) is determined by the formula
    φ^{−1}(λ) = d(λ) / m(λ) .
2.1 Rational Fractions
7. The number ind φ, for which the finite limit
    lim_{λ→∞} λ^{ind φ} φ(λ) = φ0 ≠ 0
exists, is called the index of the rational fraction (2.1). In case of ind φ = 0
(ind φ > 0), the fraction is called proper (strictly proper). In case of ind φ ≥ 0,
the fraction is said to be at least proper. In case of ind φ < 0, the fraction φ(λ) is
named improper. If the rational fraction φ(λ) is represented in the form (2.1)
and we introduce deg m(λ) = μ, deg d(λ) = ν, then the fraction is proper,
strictly proper or at least proper, if the corresponding relation μ = ν, μ < ν
or μ ≤ ν is true. The zero rational fraction is defined as strictly proper.
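Under this convention the index is simply deg d − deg m, so the classification is one subtraction; a minimal sketch (the helper name `ind` is illustrative, not the book's notation):

```python
import sympy as sp

lam = sp.symbols('lambda')

def ind(m, d):
    # ind = deg d - deg m: the power of lambda for which
    # lambda**ind * m/d has a finite non-zero limit as lambda -> oo
    return sp.degree(d, lam) - sp.degree(m, lam)

print(ind(lam + 1, lam**2 + 1))   # 1  -> strictly proper
print(ind(lam**2, lam**2 + 1))    # 0  -> proper
print(ind(lam**3, lam**2 + 1))    # -1 -> improper
```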
    φ(λ) = r(λ)/d(λ) + q(λ)    (2.5)
with polynomials r(λ), q(λ), where deg r(λ) < deg d(λ), such that the first
summand on the right side of (2.5) is strictly proper. The representation (2.5)
is unique. Practically, the polynomials r(λ) and q(λ) can be found in the
following way. Using (1.6), we uniquely obtain
    m(λ) = d(λ) q(λ) + r(λ)
with deg r(λ) < deg d(λ). Inserting the last relation into (2.1), we get (2.5).
9. The sum, the difference and the product of strictly proper fractions are
also strictly proper. The totality of strictly proper fractions builds a commu-
tative ring without unit element.
    φ(λ) = m(λ) / (d1(λ) d2(λ))
11. A separation of the form (2.6) can be generalised as follows. Let the
strictly proper fraction φ(λ) possess the shape
    φ(λ) = m(λ) / (d1(λ) d2(λ) · · · dn(λ)) ,
where all polynomials in the denominator are pairwise coprime. Then there
exists a unique representation of the form
    φ(λ) = m1(λ)/d1(λ) + m2(λ)/d2(λ) + . . . + mn(λ)/dn(λ) ,    (2.7)
where all fractions on the right side are strictly proper. In particular, let the
strictly proper irreducible fraction
    φ(λ) = m(λ) / d(λ)
with
    d(λ) = (λ − λ1)^ν1 · · · (λ − λq)^νq
be given, where all λi are different. Introduce (λ − λi)^νi = di(λ) and apply
(2.7); then we obtain a representation of the form
    φ(λ) = Σ_{i=1}^{q} mi(λ)/(λ − λi)^νi ,   deg mi(λ) < νi .    (2.8)
12. Each summand
    φi(λ) = mi(λ)/(λ − λi)^νi ,   deg mi(λ) < νi
can be written in the form
    φi(λ) = mi1/(λ − λi)^νi + mi2/(λ − λi)^{νi−1} + . . . + miνi/(λ − λi) ,
where the mij are certain constants. Inserting this relation into (2.8), we get
    φ(λ) = Σ_{i=1}^{q} [ mi1/(λ − λi)^νi + mi2/(λ − λi)^{νi−1} + . . . + miνi/(λ − λi) ] ,    (2.9)
13. For calculating the coefficients mik of the partial fraction expansion (2.9),
the formula
    mik = 1/(k−1)! · d^{k−1}/dλ^{k−1} [ m(λ)(λ − λi)^νi / d(λ) ] |_{λ=λi}    (2.10)
can be used. The coefficients (2.10) are closely connected with the expansion of
the function
    ψi(λ) = m(λ)(λ − λi)^νi / d(λ)
into a Taylor series in powers of (λ − λi), which exists because the function
ψi(λ) is analytic at the point λ = λi. Assume for instance
    ψi(λ) = ψi1 + ψi2 (λ − λi) + ψi3 (λ − λi)^2 + . . .
with
    ψik = 1/(k−1)! · [ d^{k−1} ψi(λ)/dλ^{k−1} ] |_{λ=λi} .
Then
    mik = ψik .
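Formula (2.10) translates directly into code; the fraction below, with a double pole at 1 and a simple pole at 2, is a hypothetical example chosen for illustration:

```python
import sympy as sp

lam = sp.symbols('lambda')

def m_ik(m, d, li, ni, k):
    # Formula (2.10): (k-1)-th Taylor coefficient of m(lambda)*(lambda-li)**ni / d(lambda)
    g = sp.cancel(m * (lam - li)**ni / d)
    return sp.diff(g, lam, k - 1).subs(lam, li) / sp.factorial(k - 1)

# Hypothetical fraction: double pole at 1, simple pole at 2
m = lam
d = (lam - 1)**2 * (lam - 2)

coeffs = [m_ik(m, d, 1, 2, 1), m_ik(m, d, 1, 2, 2), m_ik(m, d, 2, 1, 1)]
print(coeffs)
```

The result can be cross-checked against `sp.apart(m/d, lam)`, which performs the same partial fraction expansion.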
14. Let
    φ(λ) = m(λ) / (d1(λ) d2(λ))    (2.11)
be any rational fraction, where the polynomials d1(λ) and d2(λ) are coprime.
Moreover, we assume
    φ(λ) = m̃(λ) / (d1(λ) d2(λ)) + q(λ) ,
    m̃(λ) / (d1(λ) d2(λ)) = m1(λ)/d1(λ) + m2(λ)/d2(λ) ,
where the fractions on the right side are strictly proper. Altogether, for (2.11)
we get the unique representation
    m(λ)/(d1(λ) d2(λ)) = m1(λ)/d1(λ) + m2(λ)/d2(λ) + q(λ) ,    (2.12)
where deg m1(λ) < deg d1(λ) and deg m2(λ) < deg d2(λ) are valid.
2 Fractional Rational Matrices

Let g(λ) be any polynomial. Then (2.12) can be written in the shape
    φ(λ) = [ m1(λ)/d1(λ) + g(λ) ] + [ m2(λ)/d2(λ) + q(λ) − g(λ) ] ,    (2.13)
which is equivalent to
    φ(λ) = φ1(λ) + φ2(λ) = n1(λ)/d1(λ) + n2(λ)/d2(λ) ,    (2.14)
where
    n1(λ) = m1(λ) + g(λ) d1(λ) ,
    n2(λ) = m2(λ) + [ q(λ) − g(λ) ] d2(λ) .    (2.15)
Conversely, let
    φ(λ) = k1(λ)/d1(λ) + q1(λ) + k2(λ)/d2(λ) + q2(λ)
with deg k1(λ) < deg d1(λ), deg k2(λ) < deg d2(λ). Comparing the last equa-
tion with (2.12) and bearing in mind the uniqueness of the representation (2.12),
we get k1(λ) = m1(λ), k2(λ) = m2(λ) and q1(λ) + q2(λ) = q(λ). Assigning
q1(λ) = g(λ), q2(λ) = q(λ) − g(λ), we realise that the representation
(2.14) takes the form (2.12).
Selecting in (2.13) g(λ) = 0, we obtain the separation
    φ1(λ) = m1(λ)/d1(λ) ,   φ2(λ) = m2(λ)/d2(λ) + q(λ) ,    (2.16)
and selecting g(λ) = q(λ), the separation
    φ1(λ) = m1(λ)/d1(λ) + q(λ) ,   φ2(λ) = m2(λ)/d2(λ) ,    (2.17)
According to
    N(2) = [ −1  −3        N(3) = [ 0  1
             −5   0 ] ,             7  0 ] ,
the fraction (2.23) is irreducible. Since the polynomial d(λ) in (2.24) is monic,
the expression (2.23) establishes the standard form of the rational matrix
L(λ).
If the denominator d(λ) in (2.21) has the shape (2.20), then the numbers
λ1, . . . , λq are called the poles of the matrix L(λ), and the numbers ν1, . . . , νq
are their multiplicities.
2.3 McMillan Canonical Form

where
    ML(λ) = [ M(λ)       O_{ρ,m−ρ}
              O_{n−ρ,ρ}  O_{n−ρ,m−ρ} ]    (2.27)
and
    M(λ) = diag{ a1(λ)/d(λ), . . . , aρ(λ)/d(λ) } .    (2.28)
Executing all possible cancellations in (2.28), we arrive at
    M(λ) = diag{ ε1(λ)/ψ1(λ), . . . , ερ(λ)/ψρ(λ) } ,    (2.29)
2. The polynomial
    ψL(λ) = ψ1(λ) · · · ψρ(λ)    (2.30)
is said to be the McMillan denominator of the matrix L(λ), and the polyno-
mial
    εL(λ) = ε1(λ) · · · ερ(λ)    (2.31)
its McMillan numerator. The non-negative number
    Mdeg L(λ) = deg ψL(λ)    (2.32)
is called the McMillan degree of the matrix L(λ), or shortly its degree.
3.
Lemma 2.2. For a rational matrix L(λ) in standard form (2.21), the fraction
    χ(λ) = ψL(λ) / d(λ)    (2.33)
is a polynomial, and ψ1(λ) = d(λ).
Proof. Suppose, on the contrary, that
    a1(λ)/d(λ) = b1(λ)/ψ1(λ) ,
where deg ψ1(λ) < deg d(λ). Since the polynomial ψ1(λ) is divisible by the
polynomials ψ2(λ), . . . , ψρ(λ), we obtain from (2.29)
    M(λ) = diag{ b1(λ)/ψ1(λ), . . . , bρ(λ)/ψ1(λ) } ,
where b1(λ), . . . , bρ(λ) are polynomials. Inserting this relation and (2.27) into
(2.26), we arrive at the representation
    L(λ) = N1(λ) / ψ1(λ) ,
where N1(λ) is a polynomial matrix, and deg ψ1(λ) < deg d(λ). But this
inequality contradicts our assumption on the irreducibility of the standard
form (2.21). This conflict proves the correctness of ψ1(λ) = d(λ), and from
(2.30) arises (2.33).
From Lemma 2.2, for a denominator d(λ) of the form (2.20), we deduce
the relation
    ψL(λ) = (λ − λ1)^μ1 · · · (λ − λq)^μq = d(λ) ψ2(λ) · · · ψρ(λ) ,    (2.34)
    Mdeg L(λ) = μ1 + . . . + μq .
4.
Lemma 2.3. For any matrix L(λ), assuming (2.26), (2.27), we obtain
    deg d(λ) ≤ Mdeg L(λ) ≤ ρ deg d(λ) .
Proof. The left side of the claimed inequality establishes itself as a conse-
quence of Lemma 2.2. The right side is seen immediately from (2.28), because
under the assumption that all fractions ai(λ)/d(λ) are irreducible, we obtain
    ψL(λ) = [d(λ)]^ρ .
5.
Lemma 2.4. Let L(λ) in (2.21) be an n × n matrix with rank N(λ) = n. Then
    det L(λ) = c · εL(λ)/ψL(λ) ,   c = const. ≠ 0 .    (2.35)
Proof. For n = m and rank N(λ) = n, from (2.26)–(2.29) it follows
    L(λ) = p(λ) diag{ ε1(λ)/d(λ), ε2(λ)/ψ2(λ), . . . , εn(λ)/ψn(λ) } q(λ) .
Calculating the determinant on the right side of this equation according to
(2.30) and (2.31) yields Formula (2.35) with c = det p(λ) · det q(λ).
2.4 Matrix Fraction Description (MFD)
    L(λ) = a_l^{−1}(λ) b_l(λ) ,    (2.36)
which is called an LMFD (left matrix fraction description) of the matrix L(λ).
Analogously, if there exists a non-singular m × m matrix ar(λ) with
    L(λ) ar(λ) = N(λ) ar(λ) / d(λ) = br(λ)
and a polynomial n × m matrix br(λ), we call the representation
    L(λ) = br(λ) a_r^{−1}(λ)    (2.37)
a right MFD (RMFD) of the matrix L(λ), [69, 68], and the matrix ar(λ) is
named its right reducing polynomial.
2. The matrices al(λ) and bl(λ) in the LMFD (2.36) are called left denom-
inator and right numerator, and the matrices ar(λ), br(λ) of the RMFD
(2.37) its right denominator and left numerator, respectively. Obviously, the
set of left reducing polynomials of the matrix L(λ) coincides with the set
of its left denominators, and the same is true for the set of right reducing
polynomials and the set of right denominators.
Example 2.5. Let the matrices
2 2 + 2
2 7 + 18 2 + 7 2
L() =
( 2)( 3)
and
4 1
al () =
6
be given. Then by direct calculation, we obtain
3 +1
al ()L() = = bl () ,
2
3. For any matrix L(λ) (2.21), there always exist LMFDs and RMFDs. In-
deed, take
    al(λ) = d(λ) In ,   bl(λ) = N(λ) ,
then the rational matrix (2.21) can be written in form of an LMFD (2.36),
where
    det al(λ) = [d(λ)]^n ,
and therefore
    deg det al(λ) = ord al(λ) = n deg d(λ) .
In the same way, we see that
    ar(λ) = d(λ) Im ,   br(λ) = N(λ)
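The always-existing LMFD with a_l = d·I_n, b_l = N can be verified symbolically; the matrix below is a hypothetical example (this trivial LMFD is in general not irreducible):

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical 2x2 matrix L = N/d and the trivial LMFD a_l = d*I_n, b_l = N
N = sp.Matrix([[1, lam], [lam**2, 1]])
d = lam**2 - 1

al = d * sp.eye(2)
bl = N

# a_l^(-1) * b_l reproduces L, so (2.36) holds
diff = sp.simplify(al.inv() * bl - N / d)
print(diff == sp.zeros(2, 2))
```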
    L(λ) = a_{l1}^{−1}(λ) b_{l1}(λ) = a_{l2}^{−1}(λ) b_{l2}(λ)
and the pair (a_{l1}(λ), b_{l1}(λ)) is irreducible, then there exists a non-singular
n × n polynomial matrix g(λ) with
    a_{l2}(λ) = g(λ) a_{l1}(λ) ,   b_{l2}(λ) = g(λ) b_{l1}(λ) .
Furthermore, if the pair (a_{l2}(λ), b_{l2}(λ)) is also irreducible, then the matrix
g(λ) is unimodular.
Remark 2.6. A corresponding statement is true for right MFDs.
Inserting (2.38) and (2.39) in (2.26), we obtain an LMFD (2.36) and an RMFD
(2.37) with
    al(λ) = ā_l(λ) p^{−1}(λ) ,   bl(λ) = b̄(λ) q(λ) ,
    ar(λ) = q^{−1}(λ) ā_r(λ) ,   br(λ) = p(λ) b̄(λ) .    (2.40)
In [69] it is stated that the pairs (al(λ), bl(λ)) and [ar(λ), br(λ)] are irre-
ducible, i.e. by using (2.40), Relations (2.36) and (2.37) generate ILMFDs
and IRMFDs of the matrix L(λ).
6. If Relations (2.36) and (2.37) define ILMFDs and IRMFDs of the matrix
L(λ), then it follows from (2.40) and Statement 2.3 that the matrices al(λ)
and ar(λ) possess equal invariant polynomials different from one. Herein,
    det al(λ) ∼ det ar(λ) ∼ d(λ) ψ2(λ) · · · ψρ(λ) = ψL(λ) ,
where ψL(λ) is the McMillan denominator of the matrix L(λ), and ∼ denotes
equality up to a non-zero constant factor. Besides, the last relation together
with (2.32) yields
    ord al(λ) = ord ar(λ) = Mdeg L(λ) .    (2.41)
Moreover, we recognise from (2.40) that the matrices bl(λ) and br(λ) in the
ILMFD (2.36) and the IRMFD (2.37) are equivalent.
7.
Lemma 2.7. Let ā_l(λ) (ā_r(λ)) be a left (right) reducing polynomial for the
matrix L(λ) with ord ā_l(λ) = ρ̄ (ord ā_r(λ) = ρ̄). Then
    ρ̄ ≥ Mdeg L(λ) .
Proof. Let us have the ILMFD (2.36). Then due to Statement 2.3, we have
    ā_l(λ) = g(λ) al(λ) ,    (2.42)
where the matrix g(λ) is non-singular, from which the claim directly follows.
    L(λ) = a_{l1}^{−1}(λ) b_{l1}(λ)
with det a_{l1}(λ) ∼ det a_{r1}(λ). The reverse statement is also true.
    L(λ) = a_l^{−1}(λ) bl(λ) = br(λ) a_r^{−1}(λ)
    a_{l1}(λ) = g_l(λ) a_l(λ) ,
where the matrix g_l(λ) is non-singular. Let det g_l(λ) = h(λ) and choose the
m × m matrix g_r(λ) with det g_r(λ) ∼ h(λ). Then using
9.
Lemma 2.9. Let the PMD of the dimension n, p, m
be given, where the pair (a(λ), b(λ)) is irreducible. Then, if we have an ILMFD
    c(λ) a^{−1}(λ) = a_1^{−1}(λ) c_1(λ) ,    (2.43)
the pair (a1(λ), c1(λ)b(λ)) is irreducible.
Proof. Since the pair (a(λ), b(λ)) is irreducible, owing to (1.71), there exist
polynomial matrices X(λ), Y(λ) with
    a(λ) X(λ) + b(λ) Y(λ) = Ip .
In analogy, the irreducibility of the pair (a1(λ), c1(λ)) implies the existence of
polynomial matrices U(λ) and V(λ) with
    a1(λ) U(λ) + c1(λ) V(λ) = In .    (2.45)
Using the last two equations, we find
    a1(λ) U(λ) + c1(λ) V(λ) = a1(λ) U(λ) + c1(λ) Ip V(λ)
        = a1(λ) U(λ) + c1(λ) [ a(λ) X(λ) + b(λ) Y(λ) ] V(λ) = In ,
which, due to (2.43), may be written in the form
    a1(λ) [ U(λ) + c(λ) X(λ) V(λ) ] + c1(λ) b(λ) [ Y(λ) V(λ) ] = In .
From this equation, by virtue of (1.71), it is evident that the pair
(a1(λ), c1(λ)b(λ)) is irreducible.
In the same manner, it can be shown that the pair [a2(λ), c(λ)b1(λ)] is irre-
ducible.
Remark 2.10. The reader finds in [69] an equivalent statement to Lemma 2.9
in modified form.
10.
Lemma 2.11. Let the pair (a1(λ)a2(λ), b(λ)) be irreducible. Then also
the pair (a1(λ), b(λ)) is irreducible. Analogously, we have: If the pair
[a1(λ)a2(λ), c(λ)] is irreducible, then the pair [a2(λ), c(λ)] is also irreducible.
Proof. Produce
    L(λ) = a_2^{−1}(λ) a_1^{−1}(λ) b(λ) = [ a1(λ) a2(λ) ]^{−1} b(λ) .
Due to our supposition, the right side of this equation is an ILMFD. Therefore,
regarding (2.41), we get
    Mdeg L(λ) = ord[ a1(λ) a2(λ) ] = ord a1(λ) + ord a2(λ) .    (2.46)
Suppose the pair (a1(λ), b(λ)) to be reducible. Then there would exist an
ILMFD
    a_3^{−1}(λ) b_1(λ) = a_1^{−1}(λ) b(λ) ,
where ord a3(λ) < ord a1(λ), and we obtain
    L(λ) = a_2^{−1}(λ) a_3^{−1}(λ) b_1(λ) = [ a3(λ) a2(λ) ]^{−1} b_1(λ) .
From this equation it follows that a3(λ)a2(λ) is a left reducing polynomial for
L(λ). Therefore, Lemma 2.7 implies
    Mdeg L(λ) ≤ ord[ a3(λ) a2(λ) ] < ord a1(λ) + ord a2(λ) .
This relation contradicts (2.46), which is why the pair (a1(λ), b(λ)) has to be
irreducible. The second part of the Lemma is shown analogously.
be given with polynomial matrices a(λ), b(λ), c(λ), where the pairs (a(λ), b(λ))
and [a(λ), c(λ)] are irreducible. Then
    ψL(λ) ∼ det a(λ) .
Proof. Consider the ILMFD
    c(λ) a^{−1}(λ) = a_1^{−1}(λ) c_1(λ) .    (2.49)
Then
    L(λ) = a_1^{−1}(λ) [ c1(λ) b(λ) ] .
Due to Lemma 2.9, the right side of this equation is an ILMFD and because
of (2.50), we get
    ψL(λ) ∼ det a1(λ) ∼ det a(λ) .
Relation (2.48) now follows directly from (2.32).
    χ(λ) = ψ_{L1}(λ) ψ_{L2}(λ) / ψL(λ)
realises as a polynomial.
    L(λ) = a_1^{−1}(λ) b_1(λ) a_2^{−1}(λ) b_2(λ) .    (2.55)
Consider an ILMFD
    a_3^{−1}(λ) b_3(λ) = b_1(λ) a_2^{−1}(λ) ,    (2.56)
where
    det a3(λ) ∼ det a2(λ) ∼ ψ_{L2}(λ) .
Using (2.55) and (2.56), we find
    L(λ) = a_1^{−1}(λ) a_3^{−1}(λ) b_3(λ) b_2(λ) = a_4^{−1}(λ) b_4(λ) ,    (2.57)
where
    a4(λ) = a3(λ) a1(λ) ,   b4(λ) = b3(λ) b2(λ) .
Per construction, we get
Relations (2.52) and (2.57) define LMFDs of the matrix L(λ), where (2.52) is
an ILMFD. Therefore, the relation
    a4(λ) = g(λ) a(λ)
holds with an n × n polynomial matrix g(λ). From the last equation it arises
that the expression
    det a4(λ) / det a(λ) = det g(λ)
is a polynomial. Finally, this equation together with (2.54) and (2.58) yields
the claim of the Lemma.
Remark 2.14. From Lemma 2.13 under supposition (2.51), we get
    L1(λ) = L(λ) + G(λ)
is easily proved. The first factor on the right side is the alatent matrix Rh(λ)
and the second factor is a unimodular matrix. Therefore, the product is also
alatent and consequently, (2.62) is an ILMFD, which implies
Lemma 2.16. For the matrix L(λ) ∈ F_{nm}(λ), let an ILMFD (2.52) be given,
and let the matrix L1(λ) be determined by
    L1(λ) = L(λ) D(λ) .
Then
    L1(λ) = a^{−1}(λ) [ b(λ) D(λ) ]    (2.63)
defines an ILMFD of the matrix L1(λ), and Equations (2.59), (2.60) are
fulfilled.
The latent numbers of the matrix R_{1h}(λ) belong to the set of numbers
λ1, . . . , λq. But for any 1 ≤ i ≤ q, we have
    R_{1h}(λi) = R_h(λi) F(λi) ,
    rank R_{1h}(λi) = n ,   (i = 1, . . . , q) .
Therefore, the matrix R_{1h}(λ) satisfies Condition (1.72), and Lemma 1.42 guar-
antees that Relation (2.63) delivers an ILMFD of the matrix L1(λ). From this
fact we conclude the validity of (2.59), (2.60).
12.
Lemma 2.17. Let the irreducible rational matrix
    L(λ) = N(λ) / (d1(λ) d2(λ))    (2.64)
be given, where N(λ) is an n × m polynomial matrix, and d1(λ), d2(λ) are
coprime scalar polynomials. Moreover, let the ILMFDs
    L̃1(λ) = N(λ)/d1(λ) = a_1^{−1}(λ) b_1(λ) ,   L̃2(λ) = b_1(λ)/d2(λ) = a_2^{−1}(λ) b_2(λ)
exist. Then the expression
13.
Lemma 2.18. Let irreducible representations of the form (2.21)
    Li(λ) = Ni(λ)/di(λ) ,   (i = 1, 2)    (2.65)
with n × m polynomial matrices Ni(λ) be given, where the polynomials d1(λ)
and d2(λ) are coprime. Then we have
    L̃1(λ) = L(λ) d2(λ) = N1(λ) d2(λ)/d1(λ) + N2(λ) .
Applying (2.67), we obtain
    L̃1(λ) = a_1^{−1}(λ) b_1(λ) d2(λ) + a_1^{−1}(λ) N2(λ) .
From Lemmata 2.15–2.17, it follows that the right side of the last equation
is an ILMFD, because the polynomials d1(λ) and d2(λ) are coprime. Now
introduce the notation
    L̃2(λ) = b_1(λ) + a_1(λ) N2(λ)/d2(λ) = b_1(λ) + a_1(λ) a_2^{−1}(λ) b_2(λ)    (2.69)
and investigate the ILMFD
    a_1(λ) a_2^{−1}(λ) = ā_2^{−1}(λ) ā_1(λ) .    (2.70)
The left side of this equation is an IRMFD, because the matrices a_1(λ) and
a_2(λ) have no common eigenvalues. Therefore,
    ord ā_2(λ) = ord a_1(λ) ,    (2.71)
and from Lemmata 2.9 and 2.15 we gather that the right side of the equation
2 () = a1 () a1 ()b1 () + a2 ()b2 () = a1 ()b2 ()
L 1 1
    L(λ) = p(λ) [ diag{ ε1(λ)/(d1(λ)d2(λ)), ε2(λ)/(ψ2(λ)χ2(λ)), . . . , ερ(λ)/(ψρ(λ)χρ(λ)) }   O_{ρ,m−ρ}
                  O_{n−ρ,ρ}                                                                     O_{n−ρ,m−ρ} ] q(λ) ,
where all fractions are irreducible, all polynomials ψ2(λ), . . . , ψρ(λ) are
divisors of the polynomial d1(λ), and all polynomials χ2(λ), . . . , χρ(λ) are divi-
sors of the polynomial d2(λ). Furthermore, every ψi(λ) is divisible by ψi+1(λ),
and χi(λ) by χi+1(λ).
    L(λ) = ā_l^{−1}(λ) b̄(λ) ā_r^{−1}(λ) .    (2.73)
3.
Lemma 2.20. The pairs (ā_l(λ), b̄(λ)) and [ā_r(λ), b̄(λ)] defined by Relations
(2.72) are irreducible.
Proof. Build the LMFD and RMFD
    N(λ) ā_r(λ) / (d1(λ) d2(λ)) = ā_l^{−1}(λ) b̄(λ) ,   ā_l(λ) N(λ) / (d1(λ) d2(λ)) = b̄(λ) ā_r^{−1}(λ) .
With the help of (2.72), we immediately recognise that the right sides are an
ILMFD resp. an IRMFD. Therefore, the pairs (ā_l(λ), b̄(λ)), [ā_r(λ), b̄(λ)] are irre-
ducible.
74 2 Fractional Rational Matrices
Suppose (2.72); then under the conditions of Lemma 2.20, it follows that
in the representation (2.73), the quantities ord ā_l(λ) and ord ā_r(λ) take their
minimal values. A representation like (2.73) is named an irreducible DMFD
(IDMFD). The set of all IDMFDs of the matrix L(λ) according to given poly-
nomials d1(λ), d2(λ) has the form
    L(λ) = a_l^{−1}(λ) b(λ) a_r^{−1}(λ)
with
    a_l(λ) = p(λ) ā_l(λ) ,   b(λ) = p(λ) b̄(λ) q(λ) ,   a_r(λ) = ā_r(λ) q(λ) ,
L() = a1 1
l ()b()ar ()
with
+1 1 1 2 0
al () = , ar () = , b() = .
2 +2 2 3 2 1
exists. For ind L = 0, ind L > 0 and ind L ≥ 0, the matrix L(λ) is called proper,
strictly proper and at least proper, respectively. For rational matrices of the
form (2.21), we have
2. In a number of cases, we can also obtain the value of ind L from the LMFD
or RMFD.
Lemma 2.22. Suppose the matrix L(λ) in the standard form
    L(λ) = N(λ)/d(λ)    (2.75)
and let the relations
    L(λ) = a_l^{−1}(λ) b_l(λ) = b_r(λ) a_r^{−1}(λ)    (2.76)
define an LMFD resp. an RMFD of the matrix L(λ). Then ind L satisfies the
inequalities (2.77).
Proof. We have
    d(λ) b_l(λ) = a_l(λ) N(λ) ,
which results in
    deg[ d(λ) b_l(λ) ] = deg[ a_l(λ) N(λ) ] .    (2.78)
According to
    d(λ) b_l(λ) = [ d(λ) In ] b_l(λ)
and due to the regularity of the matrix d(λ)In, we get through (1.12)
which is equivalent to the first inequality in (2.77). The second inequality can
be shown analogously.
Corollary 2.23. If the matrix L(λ) is proper, i.e. ind L = 0, then for any
MFD (2.76), from (2.77) it follows
If the matrix L(λ) is even strictly proper, i.e. ind L > 0 is true, then we have
    L(λ) = a_l^{−1}(λ) b_l(λ)    (2.81)
Let αi and βi denote the degrees of the i-th rows of the matrices al(λ) and
bl(λ), and let
    δi = αi − βi ,   (i = 1, . . . , n)
and
    λL = min_{1≤i≤n} [ δi ] .
    ind L = λL .    (2.82)
where the B̄i, (i = 0, 1, . . .) are constant matrices, and B̄0 ≠ O_{nm}. Inserting
this and (2.83) into (2.81), we find
    L(λ) λ^{λL} = [ A0 + A1 λ^{−1} + . . . ]^{−1} [ B̄0 + B̄1 λ^{−1} + . . . ] ,
    lim_{λ→∞} L(λ) λ^{λL} = A_0^{−1} B̄0 ≠ O_{nm} ,    (2.84)
Corollary 2.25. ([69], [68]) If in the LMFD (2.81) the matrix al(λ) is row
reduced, then the matrix L(λ) is proper, strictly proper or at least proper, if
and only if we have λL = 0, λL > 0 or λL ≥ 0, respectively.
In the same way, the corresponding statement for right MFDs can be seen.
2.7 Strictly Proper Rational Matrices
    L(λ) = br(λ) a_r^{−1}(λ)
Let ᾱi and β̄i denote the degrees of the i-th columns of the matrices ar(λ)
and br(λ), and let
    δ̄i = ᾱi − β̄i ,   (i = 1, . . . , m)
and
    μL = min_{1≤i≤m} [ δ̄i ] .
Then, correspondingly,
    ind L = μL .
2. For any strictly proper rational n × m matrix L(λ), there exists an indef-
inite set of elementary PMDs
    τ(λ) = (λIp − A, B, C) ,    (2.85)
i.e. realisations (A, B, C), such that
    L(λ) = C (λIp − A)^{−1} B .    (2.86)
The right side of (2.86) is called a standard representation of the matrix,
or simply its representation. The number p appearing in (2.86) is called its
dimension. A representation, where the dimension p takes its minimal possible
value, is called minimal.
A standard representation (2.86) is minimal, if and only if its elementary
PMD is minimal, that means, if the pair (A, B) is controllable and the pair
[A, C] is observable.
The matrix L(λ) (2.86) is called the transfer function (transfer matrix) of
the elementary PMD (2.85), resp. of the realisation (A, B, C). The elementary
PMD (2.85) and the PMD
    τ1(λ) = (λIq − A1, B1, C1)    (2.87)
are called equivalent, if their transfer matrices coincide.
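The standard representation (2.86) is a direct symbolic computation; a sketch with a hypothetical second-order realisation (A, B, C):

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical realisation (A, B, C) of dimension p = 2
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])

p = A.rows
L = sp.simplify(C * (lam * sp.eye(p) - A).inv() * B)   # Formula (2.86)
print(L)
```

The resulting 1 × 1 transfer matrix is strictly proper, as the general statement above requires.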
4.
Lemma 2.28. Assume n = m in the standard representation (2.86) and
det L(λ) ≢ 0. Then p ≥ n holds, and
    det L(λ) = k(λ) / det(λIp − A)    (2.92)
is valid, where k(λ) is a scalar polynomial with
    deg k(λ) ≤ p − n ind L .    (2.93)
The case p < n results in det L(λ) ≡ 0.
Proof. In accordance with Lemma 2.8, there exists an LMFD
    C (λIp − A)^{−1} = a_1^{−1}(λ) b_1(λ) ,
5. Let the strictly proper rational matrix L(λ) of the form (2.21) with
    L(λ) = N(λ) / (d1(λ) d2(λ))
be given, where the polynomials d1(λ) and d2(λ) are coprime. Then there
exists a separation
    L(λ) = N1(λ)/d1(λ) + N2(λ)/d2(λ) ,    (2.94)
where N1(λ) and N2(λ) are polynomial matrices and both fractions in (2.94)
are strictly proper.
The matrices N1(λ) and N2(λ) in (2.94) are uniquely determined.
In practice, the separation (2.94) can be produced by performing the sep-
aration (2.6) for every element of the matrix L(λ).
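The element-wise separation can be sketched via partial fractions: expand each entry and collect the terms whose denominators divide d1(λ). The scalar entry below is a hypothetical example, not from the book:

```python
import sympy as sp

lam = sp.symbols('lambda')

def separate(entry, d1):
    # Split a strictly proper entry m/(d1*d2) into f1 + f2 with f1 = m1/d1,
    # f2 = m2/d2, both strictly proper, cf. (2.94)
    f1 = sp.S.Zero
    for t in sp.apart(entry, lam).as_ordered_terms():
        if sp.denom(sp.cancel(t * d1)) == 1:   # term's denominator divides d1
            f1 += t
    return sp.cancel(f1), sp.cancel(entry - f1)

# Hypothetical scalar entry with d1 = (lambda-2)**2, d2 = lambda-1
d1, d2 = (lam - 2)**2, lam - 1
f1, f2 = separate((lam + 2) / (d1 * d2), d1)
print(f1, f2)
```

Applying `separate` to every entry of a matrix yields the matrices N1(λ)/d1(λ) and N2(λ)/d2(λ).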
Example 2.30. Let
    L(λ) = 1/((λ−2)^2 (λ−1)) · [ λ+2   λ
                                 λ+3   λ^2+1 ]
be given. By choosing d1(λ) = (λ − 2)^2, d2(λ) = λ − 1, a separation (2.94) is
found with
    N1(λ) = [ −3λ+10   −λ+4        N2(λ) = [ 3  1
              −4λ+13   −λ+7 ] ,              4  2 ] .
6. The separation (2.94) is extendable to a more general case. Let the strictly
proper rational matrix have the form
    L(λ) = N(λ) / (d1(λ) d2(λ) · · · dκ(λ)) ,
where all polynomials in the denominator are pairwise coprime. Then there
exists a unique representation of the form
    L(λ) = N1(λ)/d1(λ) + . . . + Nκ(λ)/dκ(λ) ,    (2.95)
where all fractions on the right side are strictly proper. In particular, consider
(2.21), where the polynomial d(λ) has the form (2.20). Then under the
assumption
    di(λ) = (λ − λi)^νi ,   (i = 1, . . . , q) ,
from (2.95), we obtain the unique representation
    L(λ) = Σ_{i=1}^{q} Ni(λ)/(λ − λi)^νi ,    (2.96)
Each summand
    Li(λ) = Ni(λ)/(λ − λi)^νi
can be written as
    Li(λ) = Ni1/(λ − λi)^νi + Ni2/(λ − λi)^{νi−1} + . . . + Niνi/(λ − λi) ,    (2.97)
8. For calculating the matrices Nik in (2.97), we rely upon the formula
analogous to (2.10):
    Nik = 1/(k−1)! · d^{k−1}/dλ^{k−1} [ N(λ)(λ − λi)^νi / d(λ) ] |_{λ=λi} .    (2.99)
where
    N11 = [ 4  2        N12 = [ −3  −1        N21 = [ 3  1
            5  5 ] ,            −4  −1 ] ,            4  2 ] .
9. The partial fraction expansion (2.98) can be used in some cases for solving
the question of reducibility of certain rational matrices.
Indeed, it is easily shown that for the irreducibility of the strictly proper
matrix (2.21), it is necessary and sufficient that in the expansion (2.98)
must be true.
    L(λ) = R(λ)/d(λ) + G(λ) = L0(λ) + G(λ) ,    (2.101)
where the fraction in the middle part is strictly proper, and G(λ) is a polyno-
mial matrix. The representation (2.101) is unique. Practically, the dissection
(2.101) is performed by applying the dissection (2.5) to each element of L(λ).
Furthermore, the strictly proper matrix L0(λ) on the right side of (2.101) is
called the broken part of the matrix L(λ), and the matrix G(λ) its polynomial
part.
    L0(λ) = N0(λ) / (d1(λ) d2(λ)) ,    (2.102)
    L(λ) = N1(λ)/d1(λ) + N2(λ)/d2(λ) + G(λ) .    (2.103)
Example 2.33. For Matrix (2.22), we generate the separation (2.103) in the
shape
    A(λ) = 1/(λ−2) [ 1  3      1/(λ−3) [ 0  1      [ λ−5   0
                     5  0 ]  +           7  0 ]  +    0   λ+2 ] .
2.8 Separation of Rational Matrices
3. From (2.103) we learn that Matrix (2.101) can be presented in the form
    L(λ) = Q1(λ)/d1(λ) + Q2(λ)/d2(λ) ,    (2.104)
where
    Qi(λ)/di(λ) = Ri(λ)/di(λ) + Gi(λ) ,   (i = 1, 2) ,
where the fractions on the right sides are strictly proper, and G1(λ), G2(λ)
are polynomial matrices. Therefore,
    L(λ) = R1(λ)/d1(λ) + R2(λ)/d2(λ) + G1(λ) + G2(λ) .
Comparing this with (2.103), then due to the uniqueness of the expansion
(2.103), we get
    Q1(λ) = N1(λ) ,   Q2(λ) = N2(λ) + d2(λ) G(λ)    (2.106)
or
    Q1(λ) = N1(λ) + d1(λ) G(λ) ,   Q2(λ) = N2(λ) .    (2.107)
For the solution (2.106), the first summand in the separation (2.104) becomes
a strictly proper rational matrix, and for the solution (2.107), the second one
does. The particular separations defined by Formulae (2.106) and (2.107) are
called minimal with respect to d1(λ) resp. d2(λ). Due to their construction,
the minimal separations are uniquely determined.
Example 2.36. The separation of Matrix (2.22), which is minimal with respect
to d1(λ) = λ − 2, is given by the matrices
    Q1(λ) = [ 1  3        Q2(λ) = [ (λ−5)(λ−3)       1
              5  0 ] ,                  7       (λ−3)(λ+2) ] .
Example 2.37. For the strictly proper matrix in Example 2.30, we obtain a
unique minimal separation with Q1(λ) = N1(λ), Q2(λ) = N2(λ), where the
matrices N1(λ) and N2(λ) were already determined in Example 2.30.
2.9 Inverses of Square Polynomial Matrices
    L^{−1}(λ) = adj L(λ) / det L(λ)    (2.108)
2. The matrix L(λ) can be written with the help of (1.40), (1.49) in the
form
    L(λ) = p^{−1}(λ) diag{ h1(λ), h1(λ)h2(λ), . . . , h1(λ)h2(λ) · · · hn(λ) } q^{−1}(λ) ,
where p(λ) and q(λ) are unimodular matrices. How can the inverse matrix L^{−1}(λ)
be calculated? For that purpose, the general Formula (2.108) is used.
Denoting
    H(λ) = diag{ h1(λ), h1(λ)h2(λ), . . . , h1(λ)h2(λ) · · · hn(λ) } ,
we can write
    L^{−1}(λ) = q(λ) H^{−1}(λ) p(λ) .    (2.110)
Now, we have to calculate the matrix $H^{-1}(\lambda)$. Obviously,
$$H^{-1}(\lambda) = \frac{\operatorname{adj} H(\lambda)}{d_{L\min}(\lambda)} \qquad (2.114)$$
with
$$\operatorname{adj} H(\lambda) = \operatorname{diag}\bigl\{h_2(\lambda)\cdots h_n(\lambda),\; h_3(\lambda)\cdots h_n(\lambda),\; \ldots,\; h_n(\lambda),\; 1\bigr\}\,, \qquad (2.115)$$
$$d_{L\min}(\lambda) = h_1(\lambda)\,h_2(\lambda)\cdots h_n(\lambda) = a_n(\lambda)\,, \qquad (2.116)$$
where $a_n(\lambda)$ is the last invariant polynomial. Altogether, by using (2.110) we receive
$$L^{-1}(\lambda) = \frac{\widetilde{\operatorname{adj}}\, L(\lambda)}{d_{L\min}(\lambda)}\,, \qquad (2.117)$$
where
$$\widetilde{\operatorname{adj}}\, L(\lambda) = q(\lambda)\,\operatorname{adj} H(\lambda)\,p(\lambda)\,. \qquad (2.118)$$
Matrix (2.118) is called the monic adjoint matrix, and the polynomial $d_{L\min}(\lambda)$ the minimal polynomial of the matrix $L(\lambda)$. The rational matrix on the right side of (2.117) will be named the monic inverse of the polynomial matrix $L(\lambda)$.
3. Comparing (2.111) with (2.116) makes clear that the roots of the minimal polynomial $d_{L\min}(\lambda)$ comprise all eigenvalues of the matrix $L(\lambda)$, though possibly with lower multiplicity. It is remarkable that the fraction (2.117) is irreducible. The reason lies in the fact that the matrix $\operatorname{adj} H(\lambda)$ does not become zero for any value of $\lambda$. The same can be said about Matrix (2.118), because the matrices $q(\lambda)$ and $p(\lambda)$ are unimodular.
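The monic inverse can be computed without the Smith form: by (2.115)–(2.118), the common polynomial factor between $\operatorname{adj} L(\lambda)$ and $\det L(\lambda)$ is the product of the first $n-1$ invariant polynomials, i.e. the GCD of the entries of the adjugate. A small SymPy sketch with hypothetical data:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical example: invariant polynomials lam and lam*(lam - 1),
# so det L = lam**2*(lam - 1) while the minimal polynomial is lam*(lam - 1).
L = sp.Matrix([[lam, 0],
               [0, lam*(lam - 1)]])

adjL = L.adjugate()
detL = L.det()

# GCD of the adjugate entries = product of the first n-1 invariant
# polynomials; cancelling it yields the monic adjoint and d_Lmin.
g = sp.gcd_list(list(adjL) + [detL])
adj_monic = (adjL / g).applyfunc(sp.cancel)   # monic adjoint, cf. (2.118)
d_min = sp.cancel(detL / g)                   # minimal polynomial, cf. (2.116)

assert sp.expand(d_min - lam*(lam - 1)) == 0
# Monic inverse (2.117): L * adj_monic = d_min * I
assert sp.simplify(L*adj_monic - d_min*sp.eye(2)) == sp.zeros(2, 2)
```

The cancelled fraction $\widetilde{\operatorname{adj}}\,L(\lambda)/d_{L\min}(\lambda)$ is irreducible, as stated above.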
$$w_l(\lambda) = \frac{\operatorname{adj} a_l(\lambda)\, b_l(\lambda)}{d_{a_l}(\lambda)}\,, \qquad w_r(\lambda) = \frac{b_r(\lambda)\,\operatorname{adj} a_r(\lambda)}{d_{a_r}(\lambda)} \qquad (2.122)$$
2.
Denition 2.39. The transfer matrices wl () and wr () are called irre-
ducible, if the rational matrices on the right side of (2.122) are irreducible.
Now, we collect some facts on the reducibility of transfer matrices.
Lemma 2.40. If the matrices $a_l(\lambda)$, $a_r(\lambda)$ are not simple, then the transfer matrices (2.120), (2.121) are reducible.
Proof. If the matrices $a_l(\lambda)$, $a_r(\lambda)$ are not simple, then owing to (2.116), we conclude that the matrices $a_l^{-1}(\lambda)$, $a_r^{-1}(\lambda)$ are reducible, and therefore so are the fractions
$$w_l(\lambda) = \frac{\widetilde{\operatorname{adj}}\, a_l(\lambda)\, b_l(\lambda)}{d_{a_l \min}(\lambda)}\,, \qquad w_r(\lambda) = \frac{b_r(\lambda)\, \widetilde{\operatorname{adj}}\, a_r(\lambda)}{d_{a_r \min}(\lambda)}\,. \qquad (2.123)$$
3.
Lemma 2.41. If the pairs $(a_l(\lambda), b_l(\lambda))$, $[a_r(\lambda), b_r(\lambda)]$ are reducible, i.e. the matrices
$$R_h(\lambda) = \bigl[\,a_l(\lambda) \;\; b_l(\lambda)\,\bigr]\,, \qquad R_v(\lambda) = \begin{bmatrix} a_r(\lambda) \\ b_r(\lambda) \end{bmatrix} \qquad (2.124)$$
are latent, then the fractions (2.120), (2.121) are reducible.
Proof. If the pair $(a_l(\lambda), b_l(\lambda))$ is reducible, then by virtue of the results in Section 1.12, we obtain
$$a_l(\lambda) = g(\lambda)\,a_{l1}(\lambda)\,, \qquad b_l(\lambda) = g(\lambda)\,b_{l1}(\lambda) \qquad (2.125)$$
with $\operatorname{ord} g(\lambda) > 0$ and polynomial matrices $a_{l1}(\lambda)$, $b_{l1}(\lambda)$, where due to $\operatorname{ord} g(\lambda) > 0$ the relation
$$\deg \det a_{l1}(\lambda) < \deg \det a_l(\lambda)$$
holds. From (2.125), we gain the LMFD $w_l(\lambda) = a_{l1}^{-1}(\lambda)\,b_{l1}(\lambda)$ with a denominator of lower degree, i.e. the fraction (2.120) is reducible.
Theorem 2.42. If the pairs $(a_l(\lambda), b_l(\lambda))$, $[a_r(\lambda), b_r(\lambda)]$ are irreducible, then the monic transfer matrices (2.123) are irreducible.
Proof. Let the pair $(a_l(\lambda), b_l(\lambda))$ be irreducible. Then the matrix $R_h(\lambda)$ in (2.124) is alatent. Therefore, for an arbitrary fixed $\lambda = \bar\lambda$,
$$\operatorname{rank} R_h(\bar\lambda) = \operatorname{rank}\bigl[\,a_l(\bar\lambda) \;\; b_l(\bar\lambda)\,\bigr] = n\,. \qquad (2.126)$$
Multiplying the matrix $R_h(\lambda)$ from the left by the monic adjoint matrix $\widetilde{\operatorname{adj}}\, a_l(\lambda)$ and using (2.119), we find
$$\widetilde{\operatorname{adj}}\, a_l(\lambda)\, R_h(\lambda) = \bigl[\,d_{a_l \min}(\lambda)\,I_n \;\;\; \widetilde{\operatorname{adj}}\, a_l(\lambda)\, b_l(\lambda)\,\bigr]\,. \qquad (2.127)$$
Now, let $\lambda = \lambda_0$ be any root of the polynomial $d_{a_l \min}(\lambda)$; then due to $d_{a_l \min}(\lambda_0) = 0$, Relation (2.127) gives
$$\widetilde{\operatorname{adj}}\, a_l(\lambda_0)\, R_h(\lambda_0) = \bigl[\,O_{nn} \;\;\; \widetilde{\operatorname{adj}}\, a_l(\lambda_0)\, b_l(\lambda_0)\,\bigr]\,.$$
If the fraction $w_l(\lambda)$ in (2.123) were reducible, we would have $\widetilde{\operatorname{adj}}\, a_l(\lambda_0)\, b_l(\lambda_0) = O_{nm}$ for some such root $\lambda_0$, and therefore
$$\widetilde{\operatorname{adj}}\, a_l(\lambda_0)\, R_h(\lambda_0) = O_{n,n+m}\,. \qquad (2.128)$$
But from Relations (2.115), (2.118), we know $\operatorname{rank}\,\widetilde{\operatorname{adj}}\, a_l(\lambda_0) \ge 1$. Moreover, from (2.126) we get $\operatorname{rank} R_h(\lambda_0) = n$, and owing to the Sylvester inequality (1.44), we conclude
$$\operatorname{rank}\bigl[\,\widetilde{\operatorname{adj}}\, a_l(\lambda_0)\, R_h(\lambda_0)\,\bigr] \ge 1\,,$$
a contradiction.
for = 0 has only rank 1. On the other side, we immediately recognise that
the pair
0 1
al1 () = , bl1 () =
1 1 0
is an ILMFD of the transfer matrix (2.130), because the matrix
01
Rh1 () =
1 1 0
5.
Theorem 2.45. For the transfer matrices (2.122) to be irreducible, it is necessary and sufficient that the pairs $(a_l(\lambda), b_l(\lambda))$, $[a_r(\lambda), b_r(\lambda)]$ are irreducible and the matrices $a_l(\lambda)$, $a_r(\lambda)$ are simple.
that is named the transfer function (transfer matrix) of the PMD (2.131). Using (2.108), the transfer matrix can be presented in the form
$$w(\lambda) = \frac{c(\lambda)\,\operatorname{adj} a(\lambda)\, b(\lambda)}{\det a(\lambda)}\,. \qquad (2.133)$$
When in the general case $a(\lambda)$ is not simple, then by virtue of (2.117), we obtain
$$w(\lambda) = \frac{c(\lambda)\,\widetilde{\operatorname{adj}}\, a(\lambda)\, b(\lambda)}{d_{a \min}(\lambda)}\,. \qquad (2.134)$$
The rational matrix on the right side of (2.134) is called the monic transfer matrix of the PMD (2.131).
2.
Theorem 2.46. For a minimal PMD (2.131), the monic transfer matrix
(2.134) is irreducible.
$$c(\lambda)\,a^{-1}(\lambda) = a_1^{-1}(\lambda)\,c_1(\lambda)\,, \qquad (2.135)$$
$$w(\lambda) = a_1^{-1}(\lambda)\bigl[c_1(\lambda)\,b(\lambda)\bigr]\,. \qquad (2.138)$$
The right side of (2.138) is an ILMFD, which follows from the minimality of the PMD (2.131) and Lemma 2.9. But then, employing Lemma 2.8 yields that the fraction
$$w(\lambda) = \frac{\widetilde{\operatorname{adj}}\, a_1(\lambda)\,c_1(\lambda)\,b(\lambda)}{d_{a_1 \min}(\lambda)}$$
is irreducible, and this implies, owing to (2.137), the irreducibility of the right side of (2.134).
3.
Theorem 2.47. For the right side of Relation (2.133) to be irreducible, it is necessary and sufficient that the PMD (2.131) is minimal and the matrix $a(\lambda)$ is simple.
and the irreducibility of the right side of (2.133) follows from Theorem 2.46.
Lemma 2.48. Assume the PMDs (2.131) and (2.139) are equivalent and the PMD (2.131) is minimal. Then the expression
$$\pi(\lambda) = \frac{\det \bar a(\lambda)}{\det a(\lambda)} \qquad (2.141)$$
turns out to be a polynomial.
Proof. Lemma 2.8 implies the existence of the LMFD
$$\bar c(\lambda)\,\bar a^{-1}(\lambda) = a_2^{-1}(\lambda)\,c_2(\lambda)\,,$$
where
$$\det \bar a(\lambda) \sim \det a_2(\lambda)\,. \qquad (2.142)$$
Utilising (2.140), from this we gain the LMFD of the matrix $w(\lambda)$
$$w(\lambda) = a_2^{-1}(\lambda)\bigl[c_2(\lambda)\,\bar b(\lambda)\bigr]\,. \qquad (2.143)$$
On the other side, the minimality of the PMD (2.131) allows us to conclude that the right side of (2.138) is an ILMFD of the matrix $w(\lambda)$. Comparing (2.138) with (2.143), we obtain $a_2(\lambda) = g(\lambda)\,a_1(\lambda)$, where $g(\lambda)$ is a polynomial matrix. Therefore, the expression
$$\pi_1(\lambda) = \frac{\det a_2(\lambda)}{\det a_1(\lambda)} = \det g(\lambda)$$
proves to be a polynomial. Taking into account (2.136) and (2.142), we realise that the right side of Equation (2.141) becomes a polynomial as well; hereby, $\pi(\lambda) \sim \pi_1(\lambda)$ holds.
Corollary 2.49. If the PMDs (2.131) and (2.139) are equivalent and minimal, then
$$\det a(\lambda) \sim \det \bar a(\lambda)\,.$$
Proof. Lemma 2.48 offers under the given suppositions that both
$$\frac{\det \bar a(\lambda)}{\det a(\lambda)}\,, \qquad \frac{\det a(\lambda)}{\det \bar a(\lambda)}$$
are polynomials; this proves the claim.
5.
Lemma 2.50. Consider a regular PMD (2.131) and its corresponding transfer matrix (2.132). Moreover, let the ILMFD and IRMFD
$$w(\lambda) = p_l^{-1}(\lambda)\,q_l(\lambda) = q_r(\lambda)\,p_r^{-1}(\lambda) \qquad (2.144)$$
exist. Then the expressions
$$\pi_l(\lambda) = \frac{\det a(\lambda)}{\det p_l(\lambda)}\,, \qquad \pi_r(\lambda) = \frac{\det a(\lambda)}{\det p_r(\lambda)} \qquad (2.145)$$
turn out to be polynomials. Besides, the sets of poles of the matrices
$$w_1(\lambda) = p_l(\lambda)\,c(\lambda)\,a^{-1}(\lambda)\,, \qquad w_2(\lambda) = a^{-1}(\lambda)\,b(\lambda)\,p_r(\lambda)$$
are contained in the sets of roots of the polynomials $\pi_l(\lambda)$ and $\pi_r(\lambda)$, respectively.
Proof. Consider the PMDs
$$\tau_1(\lambda) = \bigl(p_l(\lambda),\, q_l(\lambda),\, I_n\bigr)\,, \qquad \tau_2(\lambda) = \bigl(p_r(\lambda),\, I_m,\, q_r(\lambda)\bigr)\,. \qquad (2.146)$$
Per construction, the PMDs (2.131) and (2.146) are equivalent, where the PMDs (2.146) are minimal. Therefore, due to Lemma 2.48, the functions (2.145) are polynomials. Now we build the LMFD
$$c(\lambda)\,a^{-1}(\lambda) = a_3^{-1}(\lambda)\,c_3(\lambda)\,, \qquad (2.147)$$
where
$$\det a_3(\lambda) \sim \det a(\lambda)\,. \qquad (2.148)$$
As above, we have an LMFD of the transfer matrix $w(\lambda)$
$$w(\lambda) = a_3^{-1}(\lambda)\bigl[c_3(\lambda)\,b(\lambda)\bigr]\,. \qquad (2.149)$$
This relation together with (2.144) determines two LMFDs of the transfer matrix $w(\lambda)$, where (2.144) is an ILMFD. Therefore,
$$a_3(\lambda) = g_l(\lambda)\,p_l(\lambda) \qquad (2.150)$$
holds with a non-singular $n \times n$ polynomial matrix $g_l(\lambda)$. Inversion of both sides of the last equation leads to
$$a_3^{-1}(\lambda) = p_l^{-1}(\lambda)\,g_l^{-1}(\lambda)\,. \qquad (2.151)$$
Moreover, from (2.150) together with (2.148), we receive
$$\det g_l(\lambda) = \frac{\det a_3(\lambda)}{\det p_l(\lambda)} \sim \frac{\det a(\lambda)}{\det p_l(\lambda)} = \pi_l(\lambda)\,. \qquad (2.152)$$
From (2.147) and (2.151), we obtain
$$p_l(\lambda)\,c(\lambda)\,a^{-1}(\lambda) = g_l^{-1}(\lambda)\,c_3(\lambda) = \frac{\operatorname{adj} g_l(\lambda)\,c_3(\lambda)}{\det g_l(\lambda)}\,,$$
and with the aid of (2.152), this yields the proof for a left MFD. The relation for a right MFD is proven analogously.
3.
Lemma 2.52. Let the right side of (2.153) define an ILMFD of the matrix $w(\lambda)$, and let Condition (2.154) be fulfilled. Let $\Delta_w(\lambda)$ and $\Delta_{w_1}(\lambda)$ be the McMillan denominators of the matrices $w(\lambda)$ resp. $w_1(\lambda)$. Then the fraction
$$\chi(\lambda) = \frac{\Delta_w(\lambda)}{\Delta_{w_1}(\lambda)}$$
proves to be a polynomial.
Proof. Take the ILMFD of the matrix $w_1(\lambda)$:
$$w_1(\lambda) = p_{l1}^{-1}(\lambda)\,q_{l1}(\lambda)\,.$$
4.
Lemma 2.54. Assume (2.154) is valid, and let $Q(\lambda)$, $Q_1(\lambda)$ be any polynomial matrices of appropriate dimension. Then
$$w_1(\lambda) + Q_1(\lambda) \prec_l w(\lambda) + Q(\lambda)\,.$$
Proof. The representation
$$w(\lambda) + Q(\lambda) = p_l^{-1}(\lambda)\bigl[q_l(\lambda) + p_l(\lambda)\,Q(\lambda)\bigr]\,,$$
due to Lemma 2.15, is also an ILMFD. Hence, owing to (2.154), the product $p_l(\lambda)\bigl[w_1(\lambda) + Q_1(\lambda)\bigr]$ turns out to be a polynomial matrix, which is what the lemma claims.
Remark 2.55. An analogous statement is true for subordination from the right. Therefore, when the matrix $w_1(\lambda)$ is subordinated to the matrix $w(\lambda)$, the broken part of $w_1(\lambda)$ is subordinated to the broken part of $w(\lambda)$. The reverse is also true.
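The subordination test behind Lemmas 2.52 and 2.54 can be illustrated in the simplest scalar case: with an ILMFD $w = p_l^{-1} q_l$, the matrix $w_1$ is subordinated to $w$ from the left exactly when $p_l\,w_1$ is polynomial. The following SymPy sketch uses hypothetical scalar data (the notation $\prec_l$ follows the text; the helper `subordinated_l` is ours, not the book's).

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical scalar data: w = p_l^(-1) * q_l with p_l = (lam-1)(lam-2).
p_l = (lam - 1)*(lam - 2)
w  = 1 / p_l
w1 = 1 / (lam - 1)          # poles contained in those of w
w2 = 1 / (lam - 3)          # pole outside those of w

def subordinated_l(pl, wc):
    """True iff pl*wc cancels to a polynomial (left subordination test)."""
    return sp.fraction(sp.cancel(pl*wc))[1] == 1

assert subordinated_l(p_l, w1)
assert not subordinated_l(p_l, w2)

# Lemma 2.54: adding polynomial summands preserves subordination.
assert subordinated_l(p_l, w1 + lam**2 + 3)
```

The last assertion mirrors Remark 2.55: only the broken (strictly proper) parts matter for subordination.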
5.
Theorem 2.56. Consider the strictly proper $n \times m$ matrix $w(\lambda)$ and its minimal realisation
$$w(\lambda) = C(\lambda I_p - A)^{-1}B\,. \qquad (2.156)$$
Then the relation
$$w_1(\lambda) \prec_l w(\lambda) \qquad (2.157)$$
holds if and only if the matrix $w_1(\lambda)$ admits the representation
$$w_1(\lambda) = C(\lambda I_p - A)^{-1}B_1$$
with a constant matrix $B_1$.
Proof. Let the ILMFD
$$C(\lambda I_p - A)^{-1} = a_1^{-1}(\lambda)\,b_1(\lambda)$$
be given; then
$$w(\lambda) = a_1^{-1}(\lambda)\bigl[b_1(\lambda)\,B\bigr]\,, \qquad (2.158)$$
$$a_1(\lambda)\,w_1(\lambda) = b_1(\lambda)\,B_1\,.$$
Build the block row
$$\tilde w(\lambda) = \bigl[\,w_1(\lambda) \;\; w(\lambda)\,\bigr] \qquad (2.159)$$
with $n$ rows, where $w_1(\lambda)$ has $s$ and $w(\lambda)$ has $m$ columns.
We will show
$$\operatorname{Mdeg} \tilde w(\lambda) = \operatorname{Mdeg} w(\lambda) = p\,. \qquad (2.160)$$
The equality $\operatorname{Mdeg} w(\lambda) = p$ immediately follows because the realisation (2.156) is minimal. It remains to show $\operatorname{Mdeg} \tilde w(\lambda) = \operatorname{Mdeg} w(\lambda)$. For this purpose, we multiply (2.159) from the left by the matrix $a_1(\lambda)$. Then, taking into account (2.157) and (2.158), we realise that
$$a_1(\lambda)\,\tilde w(\lambda) = \bigl[\,a_1(\lambda)\,w_1(\lambda) \;\; a_1(\lambda)\,w(\lambda)\,\bigr]$$
is a polynomial matrix, and therefore
$$\operatorname{Mdeg} \tilde w(\lambda) \le p\,.$$
Now, we will prove that strict inequality cannot happen. Indeed, assume $\operatorname{Mdeg} \tilde w(\lambda) = \tilde p < p$; then there exists a polynomial matrix $\tilde a(\lambda)$ with $\operatorname{ord} \tilde a(\lambda) = \deg \det \tilde a(\lambda) = \tilde p$ such that
$$\tilde a(\lambda)\,\tilde w(\lambda) = \bigl[\,\tilde a(\lambda)\,w_1(\lambda) \;\; \tilde a(\lambda)\,w(\lambda)\,\bigr]$$
is polynomial. In particular, $\tilde a(\lambda)\,w(\lambda)$ would then be polynomial, so that $\operatorname{Mdeg} w(\lambda) \le \tilde p < p$, a contradiction. Thus (2.160) holds, and the matrix $\tilde w(\lambda)$ allows a minimal realisation
$$\tilde w(\lambda) = \tilde C(\lambda I_p - \tilde A)^{-1}\tilde B \qquad (2.161)$$
with constant $p \times p$, $n \times p$ and $p \times (s + m)$ matrices $\tilde A$, $\tilde C$, $\tilde B$.
Partition the matrix $\tilde B$ into the form
$$\tilde B = \bigl[\,\tilde B_1 \;\; \tilde B_2\,\bigr]\,,$$
where $\tilde B_1$ has $s$ and $\tilde B_2$ has $m$ columns. Then from (2.161), we gain
$$\tilde w(\lambda) = \bigl[\,\tilde C(\lambda I_p - \tilde A)^{-1}\tilde B_1 \;\;\; \tilde C(\lambda I_p - \tilde A)^{-1}\tilde B_2\,\bigr]\,,$$
hence
$$w_1(\lambda) = \tilde C(\lambda I_p - \tilde A)^{-1}\tilde B_1\,, \qquad (2.162)$$
$$w(\lambda) = \tilde C(\lambda I_p - \tilde A)^{-1}\tilde B_2\,. \qquad (2.163)$$
Expressions (2.156) and (2.163) define realisations of the matrix $w(\lambda)$ of the same dimension $p$. However, since realisation (2.156) is minimal, realisation (2.163) also has to be minimal. According to (2.88), we can find a non-singular matrix $R$ with
$$\tilde A = R A R^{-1}\,, \qquad \tilde B_2 = R B\,, \qquad \tilde C = C R^{-1}\,.$$
Together with (2.162), this yields
$$w_1(\lambda) = C(\lambda I_p - A)^{-1}B_1\,, \qquad B_1 = R^{-1}\tilde B_1\,,$$
which proves the claim. Analogously, the subordination from the right,
$$w_1(\lambda) \prec_r w(\lambda)\,,$$
holds if and only if
$$w_1(\lambda) = C_1(\lambda I_p - A)^{-1}B$$
with a constant matrix $C_1$.
6.
Theorem 2.59. Consider the rational matrices
$$F(\lambda) = a_1^{-1}(\lambda)\,b_1(\lambda)\,, \qquad G(\lambda) = a_2^{-1}(\lambda)\,b_2(\lambda)\,, \qquad (2.165)$$
and build the LMFD
$$b_1(\lambda)\,a_2^{-1}(\lambda) = a_3^{-1}(\lambda)\,b_3(\lambda)\,. \qquad (2.166)$$
Condition (2.167) holds if and only if Matrix (2.168) is alatent, i.e. the pair $\bigl(a_3(\lambda)\,a_1(\lambda),\, b_3(\lambda)\,b_2(\lambda)\bigr)$ is irreducible. For the product $H(\lambda) = F(\lambda)\,G(\lambda)$ we have
$$H(\lambda) = a_1^{-1}(\lambda)\,b_1(\lambda)\,a_2^{-1}(\lambda)\,b_2(\lambda) = a_1^{-1}(\lambda)\,a_3^{-1}(\lambda)\,b_3(\lambda)\,b_2(\lambda) = \bigl[a_3(\lambda)\,a_1(\lambda)\bigr]^{-1}\,b_3(\lambda)\,b_2(\lambda)\,. \qquad (2.170)$$
Proof. Sufficiency: Let Matrix (2.168) be alatent. Then the right side of (2.170) is an ILMFD, and with the aid of (2.169), we get $a(\lambda) = g(\lambda)\,a_3(\lambda)\,a_1(\lambda)$, where $g(\lambda)$ is a unimodular matrix. Besides,
$$a(\lambda)\,F(\lambda) = g(\lambda)\,a_3(\lambda)\,b_1(\lambda)$$
is a polynomial matrix, and hence (2.167) is true.
Necessity: Assume (2.167); then we have
$$a(\lambda) = h(\lambda)\,a_1(\lambda)\,, \qquad (2.171)$$
where $h(\lambda)$ is a non-singular polynomial matrix. This relation leads us to
$$H(\lambda) = a^{-1}(\lambda)\,b(\lambda) = a_1^{-1}(\lambda)\,h^{-1}(\lambda)\,b(\lambda)\,. \qquad (2.172)$$
Comparing the expressions for $H(\lambda)$ in (2.170) and (2.172), we find
$$a_3^{-1}(\lambda)\,b_3(\lambda)\,b_2(\lambda) = h^{-1}(\lambda)\,b(\lambda)\,. \qquad (2.173)$$
But the matrix $\bigl[\,a_3(\lambda) \;\; b_3(\lambda)\,b_2(\lambda)\,\bigr]$ due to Lemma 2.9 is alatent, and the matrix $\bigl[\,h(\lambda) \;\; b(\lambda)\,\bigr]$ with respect to (2.171) and owing to Lemma 2.11 is alatent. Therefore, the left as well as the right side of (2.173) present ILMFDs of the same rational matrix. Then from Statement 2.3 on page 64 arise
$$h(\lambda) = \varphi(\lambda)\,a_3(\lambda)\,, \qquad b(\lambda) = \varphi(\lambda)\,b_3(\lambda)\,b_2(\lambda)\,, \qquad (2.174)$$
where the matrix $\varphi(\lambda)$ is unimodular. Applying (2.172) and (2.174), we arrive at the ILMFD
$$H(\lambda) = \bigl[\varphi(\lambda)\,a_3(\lambda)\,a_1(\lambda)\bigr]^{-1}\,b(\lambda)\,.$$
This expression and (2.170) define two LMFDs of the same matrix $H(\lambda)$. Since the matrix $\varphi(\lambda)$ is unimodular, we have
$$\operatorname{ord}\bigl[\varphi(\lambda)\,a_3(\lambda)\,a_1(\lambda)\bigr] = \operatorname{ord}\bigl[a_3(\lambda)\,a_1(\lambda)\bigr]\,,$$
and the right side of (2.170) is an ILMFD too. Therefore, Matrix (2.168) is alatent.
A corresponding statement holds for subordination from the right.
Theorem 2.60. Consider the rational matrices (2.164) and the IRMFDs
$$F(\lambda) = \bar b_1(\lambda)\,\bar a_1^{-1}(\lambda)\,, \qquad G(\lambda) = \bar b_2(\lambda)\,\bar a_2^{-1}(\lambda)\,.$$
7. The following theorem states an important special case in which the conditions of Theorems 2.59 and 2.60 are fulfilled.
Theorem 2.61. If for the rational $n \times p$ and $p \times m$ matrices $F(\lambda)$ and $G(\lambda)$ the relation
$$\operatorname{Mdeg}\bigl[F(\lambda)\,G(\lambda)\bigr] = \operatorname{Mdeg} F(\lambda) + \operatorname{Mdeg} G(\lambda) \qquad (2.175)$$
holds, i.e. the matrices $F(\lambda)$ and $G(\lambda)$ are independent, then Relations (2.176) take place.
Proof. Let us have the ILMFDs (2.165); then $\operatorname{Mdeg} F(\lambda) = \operatorname{ord} a_1(\lambda)$ and $\operatorname{Mdeg} G(\lambda) = \operatorname{ord} a_2(\lambda)$. Besides, the pair $[a_2(\lambda), b_1(\lambda)]$ is irreducible, which can be seen by assuming the contrary: in case of $\operatorname{ord} a_3(\lambda) < \operatorname{ord} a_2(\lambda)$ in (2.166), we would obtain from (2.170)
$$\operatorname{Mdeg}\bigl[F(\lambda)\,G(\lambda)\bigr] < \operatorname{Mdeg} F(\lambda) + \operatorname{Mdeg} G(\lambda)\,,$$
which contradicts (2.175). The irreducibility of the pair $[a_2(\lambda), b_1(\lambda)]$ together with (2.175) implies that the right side of (2.170) is an ILMFD. Owing to Theorem 2.59, the first relation in (2.176) is shown. The second relation in (2.176) is seen analogously.
8.
Remark 2.62. Under the conditions of Theorem 2.59, using (2.171) and (2.174), we obtain
$$a(\lambda)\,F(\lambda) = \varphi(\lambda)\,a_3(\lambda)\,b_1(\lambda)\,,$$
that means, the factor $\varphi(\lambda)\,a_3(\lambda)$ is a left divisor of the polynomial matrix $a(\lambda)\,F(\lambda)$. Analogously, we conclude from the conditions of Theorem 2.60, when the IRMFD $H(\lambda) = \bar b(\lambda)\,\bar a^{-1}(\lambda)$ is present, that
$$G(\lambda)\,\bar a(\lambda) = \bar b_2(\lambda)\,\bar a_3(\lambda)\,\bar\varphi(\lambda)$$
takes place with a unimodular matrix $\bar\varphi(\lambda)$. We learn from this equation that under the conditions of Theorem 2.60, the polynomial matrix $\bar a_3(\lambda)\,\bar\varphi(\lambda)$ is a right divisor of the polynomial matrix $G(\lambda)\,\bar a(\lambda)$.
$$d_{ik}(\lambda) = \frac{\Delta(\lambda)}{\Delta_{ik}(\lambda)} \qquad (2.178)$$
$$w_z(\lambda) = a^{-1}(\lambda)\,b(\lambda)\,, \qquad \det a(\lambda) \sim \Delta_z(\lambda)\,,$$
where $\Delta_z(\lambda)$ is the MMD of the row (2.179). Besides, the matrix $a(\lambda)$ cancels from the left for all matrices $w_i(\lambda)$, $(i = 1, \ldots, m)$, that means
$$w_i(\lambda) \prec_l w_z(\lambda)\,, \qquad (i = 1, \ldots, m)\,.$$
Hence the expressions
$$d_{zi}(\lambda) = \frac{\Delta_z(\lambda)}{\Delta_i(\lambda)}\,, \qquad (i = 1, \ldots, m)\,,$$
where the $\Delta_i(\lambda)$ are the MMDs of the matrices $w_i(\lambda)$, owing to Lemma 2.52 become polynomials. In the same way it can be seen that for a block column
$$w(\lambda) = w_s(\lambda) = \begin{bmatrix} \tilde w_1(\lambda) \\ \vdots \\ \tilde w_n(\lambda) \end{bmatrix} \qquad (2.180)$$
the expressions
$$d_{sk}(\lambda) = \frac{\Delta_s(\lambda)}{\tilde\Delta_k(\lambda)}\,, \qquad (k = 1, \ldots, n)\,,$$
become polynomials, where $\Delta_s(\lambda)$, $\tilde\Delta_k(\lambda)$ are the MMDs of the column (2.180) and of its elements. Consequently, the expressions
$$\frac{\Delta(\lambda)}{\Delta_{iz}(\lambda)}\,, \quad (i = 1, \ldots, n)\,; \qquad \frac{\Delta(\lambda)}{\Delta_{ks}(\lambda)}\,, \quad (k = 1, \ldots, m)$$
and
$$d_{ik}(\lambda) = \frac{\Delta(\lambda)}{\Delta_{ik}(\lambda)} = \frac{\Delta(\lambda)}{\Delta_{iz}(\lambda)} \cdot \frac{\Delta_{iz}(\lambda)}{\Delta_{ik}(\lambda)}$$
become polynomials.
Lemma 2.65. The element $w_{ik}(\lambda)$ is dominant in Matrix (2.177) if and only if
$$\operatorname{Mdeg} w(\lambda) = \operatorname{Mdeg} w_{ik}(\lambda)\,.$$
3.
Theorem 2.66. A necessary and sufficient condition for the matrix $w_2(\lambda)$ to be dominant in the block row
$$w(\lambda) = \bigl[\,w_1(\lambda) \;\; w_2(\lambda)\,\bigr]$$
is the subordination
$$w_1(\lambda) \prec_l w_2(\lambda)\,;$$
for a block column, the corresponding condition is
$$w_1(\lambda) \prec_r w_2(\lambda)\,.$$
Proof. The proof immediately arises from the proof of Theorem 2.56.
4.
Theorem 2.67. Consider the strictly proper rational block matrix $G(\lambda)$ of the shape
$$G(\lambda) = \begin{bmatrix} K(\lambda) & L(\lambda) \\ M(\lambda) & N(\lambda) \end{bmatrix}. \qquad (2.181)$$
Proof. Necessity: Let (2.183) be valid. Since the matrix $G(\lambda)$ is strictly proper, there exists a minimal realisation
$$G(\lambda) = \tilde C(\lambda I_p - \tilde A)^{-1}\tilde B\,. \qquad (2.185)$$
Partition
$$\tilde C = \begin{bmatrix} \tilde C_1 \\ \tilde C_2 \end{bmatrix}, \qquad \tilde B = \bigl[\,\tilde B_1 \;\; \tilde B_2\,\bigr]\,,$$
conformably with the block rows and block columns of (2.181).
Both Equations (2.182) and (2.187) are realisations of $N(\lambda)$ and possess the same dimension. Since (2.182) is a minimal realisation, (2.187) has to be minimal too. Therefore, Relations (2.88) can be used, which lead us to
$$\tilde A = R A R^{-1}\,, \qquad \tilde B_2 = R B\,, \qquad \tilde C_2 = C R^{-1}\,.$$
$$G_1(\lambda) \prec_r G_2(\lambda)$$
is true.
l ()K() = l ()B1
$$A(\lambda) = \frac{N(\lambda)}{d(\lambda)}\,, \qquad \deg d(\lambda) = p\,, \qquad (3.1)$$
$$A(\lambda) = a_l^{-1}(\lambda)\,b_l(\lambda) = b_r(\lambda)\,a_r^{-1}(\lambda)\,, \qquad (3.2)$$
and $\operatorname{Mdeg} A(\lambda)$ is the degree of the McMillan denominator of the matrix $A(\lambda)$. At first, Relation (2.34) implies $\operatorname{Mdeg} A(\lambda) \ge p$. In the following disclosure, matrices will play an important role for which
$$\operatorname{Mdeg} A(\lambda) = p\,. \qquad (3.3)$$
Since $\det a_l(\lambda) \sim \det a_r(\lambda)$ is valid and both polynomials are divisible by $d(\lambda)$, Relation (3.3) is equivalent to
$$d(\lambda) \sim \Delta_A(\lambda)\,, \qquad (3.4)$$
2. For a normal matrix (3.1), it is possible to build IMFDs (3.2) such that $\det a_l(\lambda) \sim d(\lambda)$ and $\det a_r(\lambda) \sim d(\lambda)$. If both ILMFD and IRMFD satisfy such conditions, the pair is called a complete MFD. Thus, normal rational matrices are rational matrices that possess a complete MFD.
It is emphasised that a complete MFD is always irreducible. Indeed, from (2.34) it is seen that for any matrix $A(\lambda)$ in the form (3.1) it always follows $\deg \Delta_A(\lambda) \ge \deg d(\lambda)$. Therefore, if we have any matrix $A(\lambda)$ satisfying (3.3), then the polynomials $\det a_l(\lambda)$ and $\det a_r(\lambda)$ possess the minimal possible degree, and hence the complete MFD is irreducible.
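The characterisation $\operatorname{Mdeg} A(\lambda) = \deg d(\lambda)$ can be checked numerically: the McMillan denominator is the least common multiple of the (cancelled) denominators of all minors of all orders. The SymPy sketch below uses a hypothetical normal matrix whose second-order minors are all divisible by $d(\lambda)$; the helper `mcmillan_denominator` is our own illustration, not a book routine.

```python
import sympy as sp
from itertools import combinations

lam = sp.symbols('lambda')

# Hypothetical normal matrix A = N/d: every second-order minor of N
# is divisible by d, so Mdeg A should equal deg d = 2.
d = (lam - 1)*(lam - 2)
N = sp.Matrix([[lam - 1, 2, 1],
               [0, (lam + 1)*(lam - 2), lam - 2]])
A = N / d

def mcmillan_denominator(M):
    """lcm of the cancelled denominators of all minors of all orders."""
    n, m = M.shape
    den = sp.Integer(1)
    for k in range(1, min(n, m) + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(m), k):
                minor = sp.cancel(M[list(rows), list(cols)].det())
                den = sp.lcm(den, sp.fraction(minor)[1])
    return sp.factor(den)

delta = mcmillan_denominator(A)
# Normality: the McMillan denominator coincides with d.
assert sp.expand(delta - sp.expand(d)) == 0
```

For a non-normal matrix the second-order minors contribute extra denominator factors and the lcm has degree larger than $\deg d(\lambda)$.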
with unimodular matrices. We take from (3.5) that the matrix $a_l(\lambda)$ is simple; furthermore, from (3.5) we obtain
$$N(\lambda) = p(\lambda)\,S_N(\lambda)\,q(\lambda)\,, \qquad (3.7)$$
where $p(\lambda)$, $q(\lambda)$ are unimodular and the matrix $S_N(\lambda)$ has the appropriate Smith canonical form. Thus, from (1.49) we receive
$$S_N(\lambda) = \begin{bmatrix} \operatorname{diag}\bigl\{g_1(\lambda),\; g_1(\lambda)g_2(\lambda)\,d(\lambda),\; \ldots,\; g_1(\lambda)\cdots g_\rho(\lambda)\,d^{\rho-1}(\lambda)\bigr\} & O_{\rho,m-\rho} \\ O_{n-\rho,\rho} & O_{n-\rho,m-\rho} \end{bmatrix}, \qquad (3.8)$$
where the polynomial $g_1(\lambda)$ and the denominator $d(\lambda)$ are coprime, because in the contrary case the fraction (3.1) would be reducible. According to (3.7) and (3.8), the matrix $A(\lambda)$ of (3.1) can be written in the shape
$$A(\lambda) = p(\lambda) \begin{bmatrix} \operatorname{diag}\Bigl\{\dfrac{g_1(\lambda)}{d(\lambda)},\; g_1(\lambda)g_2(\lambda),\; \ldots,\; g_1(\lambda)\cdots g_\rho(\lambda)\,d^{\rho-2}(\lambda)\Bigr\} & O_{\rho,m-\rho} \\ O_{n-\rho,\rho} & O_{n-\rho,m-\rho} \end{bmatrix} q(\lambda)\,,$$
where the fraction $g_1(\lambda)/d(\lambda)$ is irreducible. Therefore, choosing
$$a_l(\lambda) = \operatorname{diag}\{d(\lambda), 1, \ldots, 1\}\,p^{-1}(\lambda)\,,$$
$$b_l(\lambda) = \begin{bmatrix} \operatorname{diag}\bigl\{g_1(\lambda),\; g_1(\lambda)g_2(\lambda),\; \ldots,\; g_1(\lambda)\cdots g_\rho(\lambda)\,d^{\rho-2}(\lambda)\bigr\} & O_{\rho,m-\rho} \\ O_{n-\rho,\rho} & O_{n-\rho,m-\rho} \end{bmatrix} q(\lambda)\,,$$
we obtain the LMFD
$$A(\lambda) = a_l^{-1}(\lambda)\,b_l(\lambda)\,,$$
which is complete, because $\det a_l(\lambda) \sim d(\lambda)$ is true.
Corollary 3.2. It follows from (3.5), (3.6) that for a simple $n \times n$ matrix $a(\lambda)$, the rational matrix $a^{-1}(\lambda)$ is normal, and vice versa.
Corollary 3.3. From Equations (3.7), (3.8) we learn that for $k \ge 2$, all minors of $k$-th order of the numerator $N(\lambda)$ of a normal matrix are divisible by $d^{k-1}(\lambda)$.
Remark 3.4. Irreducible rational matrix rows or columns are always normal. Let, for instance, the column
$$A(\lambda) = \frac{1}{d(\lambda)} \begin{bmatrix} a_1(\lambda) \\ \vdots \\ a_n(\lambda) \end{bmatrix} \qquad (3.9)$$
be given, and write
$$\begin{bmatrix} a_1(\lambda) \\ \vdots \\ a_n(\lambda) \end{bmatrix} = c(\lambda) \begin{bmatrix} \mu(\lambda) \\ 0 \\ \vdots \\ 0 \end{bmatrix},$$
where $c(\lambda)$ is a unimodular $n \times n$ matrix, and $\mu(\lambda)$ is the GCD of the polynomials $a_1(\lambda), \ldots, a_n(\lambda)$. The polynomials $\mu(\lambda)$ and $d(\lambda)$ are coprime, because in the other case the rational matrix (3.9) could be cancelled. Choose
$$a_l(\lambda) = \operatorname{diag}\{d(\lambda), 1, \ldots, 1\}\,c^{-1}(\lambda)\,, \qquad b_l(\lambda) = \begin{bmatrix} \mu(\lambda) \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$
$$d(\lambda) = (\lambda - \lambda_1)^{\nu_1} \cdots (\lambda - \lambda_q)^{\nu_q}\,, \qquad \nu_1 + \ldots + \nu_q = p\,. \qquad (3.10)$$
Then a necessary and sufficient condition for the matrix $A(\lambda)$ to be normal is that each of its minors of second order possesses poles in the points $\lambda = \lambda_i$ $(i = 1, \ldots, q)$ with multiplicity not higher than $\nu_i$.
Proof. Necessity: Let $N(\lambda) = \bigl[n_{ij}(\lambda)\bigr]$ and let
$$A\binom{i\;j}{k\;\ell} = \det \begin{bmatrix} \dfrac{n_{ik}(\lambda)}{d(\lambda)} & \dfrac{n_{i\ell}(\lambda)}{d(\lambda)} \\[2mm] \dfrac{n_{jk}(\lambda)}{d(\lambda)} & \dfrac{n_{j\ell}(\lambda)}{d(\lambda)} \end{bmatrix} \qquad (3.11)$$
be a minor of the matrix $A(\lambda)$ that is generated by the elements of the rows with numbers $i, j$ and the columns with numbers $k, \ell$. Obviously,
$$A\binom{i\;j}{k\;\ell} = \frac{n_{ik}(\lambda)\,n_{j\ell}(\lambda) - n_{jk}(\lambda)\,n_{i\ell}(\lambda)}{d^2(\lambda)} \qquad (3.12)$$
is true. If the matrix $A(\lambda)$ is normal, then, due to Theorem 3.1, the numerator of the last fraction is divisible by $d(\lambda)$. Thus we have
$$A\binom{i\;j}{k\;\ell} = \frac{a^{ij}_{k\ell}(\lambda)}{d(\lambda)}\,, \qquad (3.13)$$
where $a^{ij}_{k\ell}(\lambda)$ is a certain polynomial. It is seen from (3.13) and (3.10) that the minor (3.11) possesses in $\lambda = \lambda_i$ poles of order $\nu_i$ or lower.
Sufficiency: Conversely, if for every minor (3.11) the representation (3.13) is correct, then the numerator of each fraction (3.12) is divisible by $d(\lambda)$; that means, every minor of second order of the matrix $N(\lambda)$ is divisible by $d(\lambda)$, or in other words, the matrix $A(\lambda)$ is normal.
5.
Theorem 3.6. If the matrix $A(\lambda)$ of (3.1) is normal, and (3.10) is assumed, then
$$\operatorname{rank} N(\lambda_i) = 1\,, \qquad (i = 1, \ldots, q)\,. \qquad (3.14)$$
Thereby, if the polynomial (3.10) has only single roots, i.e. $q = p$, $\nu_1 = \nu_2 = \ldots = \nu_p = 1$, then Condition (3.14) is not only necessary but also sufficient for the normality of the matrix $A(\lambda)$.
Proof. Equation (3.6) implies $\operatorname{rank} Q(\lambda_i) = 1$, $(i = 1, \ldots, q)$. Therefore, the matrix
$$N(\lambda_i) = Q(\lambda_i)\,b_l(\lambda_i)$$
is either the zero matrix or it has rank 1. The first possibility is excluded, since otherwise the fraction (3.1) would be reducible. Hence we get (3.14). If all roots $\lambda_i$ are simple and (3.14) holds, then every minor of second order of the matrix $N(\lambda)$ is divisible by $(\lambda - \lambda_i)$, $(i = 1, \ldots, q)$. Since in the present case $d(\lambda) = (\lambda - \lambda_1)(\lambda - \lambda_2) \cdots (\lambda - \lambda_p)$ is true, every minor of second order of $N(\lambda)$ is divisible by $d(\lambda)$, which means that the matrix $A(\lambda)$ is normal.
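For simple poles, the rank condition (3.14) gives a cheap normality test. The SymPy sketch below uses hypothetical data with $d(\lambda) = (\lambda-1)(\lambda-2)$ (simple roots) and then perturbs one entry to break the rank condition, which also destroys the divisibility of a second-order minor.

```python
import sympy as sp

lam = sp.symbols('lambda')

# d has only the simple roots 1 and 2, so by Theorem 3.6 the condition
# rank N(lambda_i) = 1 is necessary and sufficient for normality.
d = (lam - 1)*(lam - 2)
N = sp.Matrix([[lam - 1, 2, 1],
               [0, (lam + 1)*(lam - 2), lam - 2]])

assert N.subs(lam, 1).rank() == 1
assert N.subs(lam, 2).rank() == 1

# Breaking the rank condition at lambda = 1 destroys normality:
# a second-order minor is no longer divisible by d.
N_bad = N.copy()
N_bad[0, 1] = 3
assert N_bad.subs(lam, 1).rank() == 2
minor = N_bad[:, [1, 2]].det()
assert sp.rem(sp.expand(minor), d, lam) != 0
```

A single failing minor suffices to disprove normality, whereas the rank test must hold at every root of $d(\lambda)$.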
where the first and the last matrix on the right side are unimodular. Thus, for $\alpha \ne 0$, the matrix $A(\lambda)$ has the McMillan canonical form
$$M_A(\lambda) = \begin{bmatrix} \dfrac{1}{(\lambda-1)(\lambda-2)} & 0 \\[2mm] 0 & \dfrac{1}{(\lambda-1)(\lambda-2)} \end{bmatrix}.$$
In the present case we have $\psi_1(\lambda) = (\lambda-1)(\lambda-2) = d(\lambda)$, $\psi_2(\lambda) = (\lambda-1)(\lambda-2) = d(\lambda)$. Hence the McMillan denominator is $\Delta_A(\lambda) = (\lambda-1)^2(\lambda-2)^2$ and $\operatorname{Mdeg} A(\lambda) = 4$. However, if $\alpha = 0$ is true, then we get
$$M_A(\lambda) = \begin{bmatrix} \dfrac{1}{(\lambda-1)(\lambda-2)} & 0 \\[2mm] 0 & 1 \end{bmatrix}.$$
2.
Theorem 3.9. Let the matrices (3.15) have the same dimension, and let the polynomials $d_1(\lambda)$ and $d_2(\lambda)$ be coprime. Then the matrix
$$A(\lambda) = A_1(\lambda) + A_2(\lambda) \qquad (3.17)$$
is normal.
Proof. From (3.15) and (3.17), we generate
$$A(\lambda) = \frac{d_2(\lambda)\,N_1(\lambda) + d_1(\lambda)\,N_2(\lambda)}{d_1(\lambda)\,d_2(\lambda)}\,. \qquad (3.18)$$
The fraction (3.18) is irreducible, because the sum (3.17) has its poles at the zeros of $d_1(\lambda)$ and $d_2(\lambda)$ with the same multiplicities.
Denote
$$A_1(\lambda) = \frac{\bigl[\alpha_{ik}(\lambda)\bigr]}{d_1(\lambda)}\,, \qquad A_2(\lambda) = \frac{\bigl[\beta_{ik}(\lambda)\bigr]}{d_2(\lambda)}\,.$$
Then the minor (3.11) for the matrix $A(\lambda)$ has the shape
$$A\binom{i\;j}{k\;\ell} = \det \begin{bmatrix} \dfrac{\alpha_{ik}(\lambda)}{d_1(\lambda)} + \dfrac{\beta_{ik}(\lambda)}{d_2(\lambda)} & \dfrac{\alpha_{i\ell}(\lambda)}{d_1(\lambda)} + \dfrac{\beta_{i\ell}(\lambda)}{d_2(\lambda)} \\[2mm] \dfrac{\alpha_{jk}(\lambda)}{d_1(\lambda)} + \dfrac{\beta_{jk}(\lambda)}{d_2(\lambda)} & \dfrac{\alpha_{j\ell}(\lambda)}{d_1(\lambda)} + \dfrac{\beta_{j\ell}(\lambda)}{d_2(\lambda)} \end{bmatrix}. \qquad (3.19)$$
Applying the summation theorem for determinants and using the normality of $A_1(\lambda)$, $A_2(\lambda)$, after cancellation we obtain the expression
$$A\binom{i\;j}{k\;\ell} = \frac{b^{ij}_{k\ell}(\lambda)}{d_1(\lambda)\,d_2(\lambda)}$$
with certain polynomials $b^{ij}_{k\ell}(\lambda)$. It follows from this expression that the poles of the minor (3.19) are found among the roots of the denominator of Matrix (3.18), and they possess no higher multiplicity. Since this rational matrix is irreducible, Theorem 3.5 yields that the matrix $A(\lambda)$ is normal.
Corollary 3.10. If $A(\lambda)$ is a normal $n \times m$ rational matrix, and $G(\lambda)$ is an $n \times m$ polynomial matrix, then the rational matrix
$$A_1(\lambda) = A(\lambda) + G(\lambda)$$
is normal.
where the matrices $a_1(\lambda)$, $a_2(\lambda)$ are simple, and $\det a_1(\lambda) \sim d_1(\lambda)$, $\det a_2(\lambda) \sim d_2(\lambda)$ are valid. Applying these representations, we can write
$$A(\lambda) = a_1^{-1}(\lambda)\,b_1(\lambda)\,a_2^{-1}(\lambda)\,b_2(\lambda)\,. \qquad (3.21)$$
Building the LMFD
$$b_1(\lambda)\,a_2^{-1}(\lambda) = a_3^{-1}(\lambda)\,b_3(\lambda)\,,$$
we obtain
$$A(\lambda) = a_l^{-1}(\lambda)\,b_l(\lambda)$$
with $a_l(\lambda) = a_3(\lambda)\,a_1(\lambda)$, $b_l(\lambda) = b_3(\lambda)\,b_2(\lambda)$.
Per construction, $\det a_l(\lambda) \sim d_1(\lambda)\,d_2(\lambda)$ is valid; that is why the last relations define a complete LMFD, the matrix $a_l(\lambda)$ is simple, and the pair $\bigl(a_3(\lambda)\,a_1(\lambda),\, b_3(\lambda)\,b_2(\lambda)\bigr)$ is irreducible. Hereby, we still obtain
Theorem 3.13. If the matrices (3.15) are normal and the product (3.16) is irreducible, then the matrices $A_1(\lambda)$ and $A_2(\lambda)$ are independent in the sense of Section 2.4.
5. From Theorems 3.13 and 2.61, we conclude the following statement, which is formulated in the terminology of subordination of matrices in the sense of Section 2.12.
Theorem 3.14. Let the matrices (3.15) be normal, and let their product (3.16) be irreducible. Then
6.
Theorem 3.15. Let the separation
$$A(\lambda) = \frac{N(\lambda)}{d_1(\lambda)\,d_2(\lambda)} = \frac{N_1(\lambda)}{d_1(\lambda)} + \frac{N_2(\lambda)}{d_2(\lambda)} \qquad (3.22)$$
exist, where the matrix $A(\lambda)$ is normal and the polynomials $d_1(\lambda)$, $d_2(\lambda)$ are coprime. Then each of the fractions on the right side of (3.22) is normal.
Proof. At first we notice that the fractions on the right side of (3.22) are irreducible, since otherwise the fraction $A(\lambda)$ would be reducible. The fraction
$$A_1(\lambda) = \frac{N(\lambda)}{d_1(\lambda)}$$
is also normal, because it is irreducible and the minors of second order of the numerator are divisible by the denominator $d_1(\lambda)$. Therefore, there exists a complete LMFD
$$A_1(\lambda) = a_1^{-1}(\lambda)\,b_1(\lambda)\,. \qquad (3.23)$$
Multiplying (3.22) from the left by $a_1(\lambda)$ and using (3.23), we get
$$\frac{b_1(\lambda)}{d_2(\lambda)} = a_1(\lambda)\frac{N_1(\lambda)}{d_1(\lambda)} + a_1(\lambda)\frac{N_2(\lambda)}{d_2(\lambda)}\,,$$
this means
$$\frac{b_1(\lambda)}{d_2(\lambda)} - a_1(\lambda)\frac{N_2(\lambda)}{d_2(\lambda)} = a_1(\lambda)\frac{N_1(\lambda)}{d_1(\lambda)}\,.$$
The left side of the last equation is analytical at the zeros of the polynomial $d_1(\lambda)$, and the right side at the zeros of $d_2(\lambda)$. Consequently,
$$\frac{a_1(\lambda)\,N_1(\lambda)}{d_1(\lambda)} = L(\lambda)$$
is a polynomial matrix, and
$$\frac{N_1(\lambda)}{d_1(\lambda)} = a_1^{-1}(\lambda)\,L(\lambda)\,.$$
The fraction on the right side is irreducible, since otherwise the first fraction in (3.22) would be reducible. The matrix $a_1(\lambda)$ is simple, and therefore the last fraction, owing to Theorem 3.11, is normal. In analogy it may be shown that the matrix $N_2(\lambda)/d_2(\lambda)$ is normal.
2. We will use the terminology and the notation of Section 1.15 and consider an arbitrary realisation $(A, B, C)$ of dimension $(n, p, m)$. Doing so, the realisation $(A, B, C)$ is called minimal if the pair $(A, B)$ is controllable and the pair $[A, C]$ is observable. A minimal realisation is called simple if the matrix $A$ is cyclic. As shown in Section 1.15, the property of simplicity of the realisation $(A, B, C)$ is structurally stable; it is conserved at least for sufficiently small deviations in the matrices $A$, $B$, $C$. Realisations that are not simple do not possess the property of structural stability. Practically, this means that correct models of real linear objects in state space amount to simple realisations.
equivalently expressed by
$$w(\lambda) = \frac{C \operatorname{adj}(\lambda I_p - A)\,B}{d_A(\lambda)}\,, \qquad (3.25)$$
$$w(\lambda) = \frac{N(\lambda)}{d(\lambda)}\,. \qquad (3.26)$$
4.
Theorem 3.16. The transfer matrix (3.25) of the realisation $(A, B, C)$ is irreducible if and only if the realisation $(A, B, C)$ is simple.
Proof. As follows from Theorem 2.45, for the irreducibility of the fraction (3.25), it is necessary and sufficient that the elementary PMD
$$\tau(\lambda) = (\lambda I_p - A,\, B,\, C) \qquad (3.27)$$
is minimal and the matrix $\lambda I_p - A$ is simple.
Theorem 3.17. The transfer matrix (3.25) of a simple realisation $(A, B, C)$ is normal.
Proof. Assume that the realisation $(A, B, C)$ is simple. Then the elementary PMD (3.27) is also simple, and the fraction on the right side of (3.25) is irreducible. Hereby, due to the simplicity of the matrix $\lambda I_p - A$, the rational matrix
$$(\lambda I_p - A)^{-1} = \frac{\operatorname{adj}(\lambda I_p - A)}{\det(\lambda I_p - A)}$$
becomes normal, which means it is irreducible and all minors of second order of the matrix $\operatorname{adj}(\lambda I_p - A)$ are divisible by $\det(\lambda I_p - A)$. But then, for $\min\{m, n\} \ge 2$, owing to the Binet–Cauchy theorem, also the minors of second order of the matrix
$$Q(\lambda) = C \operatorname{adj}(\lambda I_p - A)\,B$$
possess this property, and this means that Matrix (3.24) is normal.
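The minor-divisibility step of this proof can be replayed symbolically. The sketch below uses hypothetical matrices: $A$ is a companion (hence cyclic) matrix, so $\lambda I_p - A$ is simple, and the single second-order minor of the $2 \times 2$ numerator $Q(\lambda) = C\operatorname{adj}(\lambda I_p - A)B$ must be divisible by $\det(\lambda I_p - A)$.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical simple realisation: A is a companion matrix (cyclic),
# with characteristic polynomial (lam+1)(lam+2)(lam+3).
A = sp.Matrix([[0, 1, 0],
               [0, 0, 1],
               [-6, -11, -6]])
B = sp.Matrix([[1, 0],
               [0, 1],
               [1, 1]])
C = sp.Matrix([[1, 0, 0],
               [0, 1, 1]])

M = lam*sp.eye(3) - A
Q = (C * M.adjugate() * B).applyfunc(sp.expand)   # numerator of (3.25)
dA = sp.expand(M.det())

# Second-order minor of Q (here: det Q) is divisible by det(lam*I - A),
# in accordance with the Binet-Cauchy argument above.
minor = sp.expand(Q.det())
assert sp.rem(minor, dA, lam) == 0
```

The divisibility of the adjugate's second-order minors by the determinant is Jacobi's identity for minors of the adjugate, which the Binet–Cauchy expansion then transfers to $Q(\lambda)$.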
Theorem 3.18. For a strictly proper rational matrix to possess a simple realisation, it is necessary and sufficient that this matrix is normal.
Proof. Necessity: When the rational matrix (3.26) allows a simple realisation $(A, B, C)$, it is normal by virtue of Theorem 3.17.
Sufficiency: Let the irreducible matrix (3.26) be normal and $\deg d(\lambda) = p$. Then there exists a complete LMFD
$$w(\lambda) = a^{-1}(\lambda)\,b(\lambda)$$
for it with $\operatorname{ord} a(\lambda) = p$, and consequently $\operatorname{Mdeg} w(\lambda) = p$. From this we conclude that Matrix (3.26) allows a minimal realisation $(A, B, C)$ of dimension $(n, p, m)$. We now assume that the matrix $A$ is not cyclic. Then the fraction
$$\frac{C \operatorname{adj}(\lambda I_p - A)\,B}{\det(\lambda I_p - A)}$$
is reducible, and we arrive at a representation
$$w(\lambda) = \frac{N_1(\lambda)}{d_1(\lambda)}\,,$$
where $\deg d_1(\lambda) < \deg d(\lambda)$. But this contradicts the supposition on the irreducibility of Matrix (3.26). Therefore, the matrix $A$ must be cyclic and the matrix $\lambda I_p - A$ simple; hence the minimal realisation has to be simple.
(3.1) is strictly proper, then the dimensions of the matrices in its minimal realisation in state space will also abruptly increase, i.e. the dynamical properties with respect to the original system will change drastically. In this section, a structurally stable representation (S-representation) of normal rational matrices will be introduced. Regarding normality, the S-representation is invariant with respect to parameter deviations in the transfer matrix, originated for instance by modelling or rounding errors.
2.
Theorem 3.19 ([144, 145]). The irreducible rational $n \times m$ matrix
$$A(\lambda) = \frac{N(\lambda)}{d(\lambda)} \qquad (3.28)$$
is normal if and only if its numerator permits the representation
$$N(\lambda) = P(\lambda)\,Q'(\lambda) + d(\lambda)\,G(\lambda) \qquad (3.29)$$
with an $n \times 1$ polynomial column $P(\lambda)$, an $m \times 1$ polynomial column $Q(\lambda)$, and an $n \times m$ polynomial matrix $G(\lambda)$.
Proof. Sufficiency: Let us have
$$P(\lambda) = \begin{bmatrix} p_1(\lambda) \\ \vdots \\ p_n(\lambda) \end{bmatrix}, \qquad Q(\lambda) = \begin{bmatrix} q_1(\lambda) \\ \vdots \\ q_m(\lambda) \end{bmatrix}, \qquad G(\lambda) = \begin{bmatrix} g_{11}(\lambda) & \ldots & g_{1m}(\lambda) \\ \vdots & \ddots & \vdots \\ g_{n1}(\lambda) & \ldots & g_{nm}(\lambda) \end{bmatrix}$$
with scalar polynomials $p_i(\lambda)$, $q_i(\lambda)$, $g_{ik}(\lambda)$. The minor (3.11) of Matrix (3.29) possesses the form
$$N\binom{i\;j}{k\;\ell} = \det \begin{bmatrix} p_i(\lambda)q_k(\lambda) + d(\lambda)g_{ik}(\lambda) & p_i(\lambda)q_\ell(\lambda) + d(\lambda)g_{i\ell}(\lambda) \\ p_j(\lambda)q_k(\lambda) + d(\lambda)g_{jk}(\lambda) & p_j(\lambda)q_\ell(\lambda) + d(\lambda)g_{j\ell}(\lambda) \end{bmatrix} = d(\lambda)\,n^{ij}_{k\ell}(\lambda)\,,$$
where $n^{ij}_{k\ell}(\lambda)$ is a polynomial. Therefore, an arbitrary minor of second order is divisible by $d(\lambda)$, and thus Matrix (3.28) is normal.
Necessity: Let Matrix (3.28) be normal. Then all minors of second order of its numerator $N(\lambda)$ are divisible by the denominator $d(\lambda)$. Applying (3.7) and (3.8), we find that the matrix $N(\lambda)$ allows the representation
$$N(\lambda) = p(\lambda) \begin{bmatrix} \operatorname{diag}\bigl\{g_1(\lambda),\; g_1(\lambda)g_2(\lambda)\,d(\lambda),\; \ldots,\; g_1(\lambda)\cdots g_\rho(\lambda)\,d^{\rho-1}(\lambda)\bigr\} & O_{\rho,m-\rho} \\ O_{n-\rho,\rho} & O_{n-\rho,m-\rho} \end{bmatrix} q(\lambda)\,, \qquad (3.30)$$
where the $g_i(\lambda)$, $(i = 1, \ldots, \rho)$ are monic polynomials and $p(\lambda)$, $q(\lambda)$ are unimodular matrices. Relation (3.30) can be arranged in the form
$$N(\lambda) = N_1(\lambda) + d(\lambda)\,N_2(\lambda)\,, \qquad (3.31)$$
where
$$N_1(\lambda) = g_1(\lambda)\,p(\lambda) \begin{bmatrix} 1 & O_{1,m-1} \\ O_{n-1,1} & O_{n-1,m-1} \end{bmatrix} q(\lambda)\,, \qquad (3.32)$$
$$N_2(\lambda) = g_1(\lambda)g_2(\lambda)\,p(\lambda) \begin{bmatrix} \operatorname{diag}\bigl\{0,\; 1,\; g_3(\lambda)d(\lambda),\; \ldots,\; g_3(\lambda)\cdots g_\rho(\lambda)\,d^{\rho-2}(\lambda)\bigr\} & O_{\rho,m-\rho} \\ O_{n-\rho,\rho} & O_{n-\rho,m-\rho} \end{bmatrix} q(\lambda)\,. \qquad (3.33)$$
Obviously, we have
$$N_1(\lambda) = g_1(\lambda)\,P_1(\lambda)\,Q_1'(\lambda)\,, \qquad (3.34)$$
where $P_1(\lambda)$ is the first column of the matrix $p(\lambda)$ and $Q_1'(\lambda)$ is the first row of $q(\lambda)$. Inserting (3.32)–(3.34) into (3.31), we arrive at the representation (3.29), where for instance
$$P(\lambda) = g_1(\lambda)\,P_1(\lambda)\,, \qquad Q'(\lambda) = Q_1'(\lambda)\,, \qquad G(\lambda) = N_2(\lambda)$$
can be used.
$$A(\lambda) = \frac{P(\lambda)\,Q'(\lambda)}{d(\lambda)} + G(\lambda)\,. \qquad (3.35)$$
4. Assume
$$P(\lambda) = d(\lambda)\,L_1(\lambda) + P_1(\lambda)\,, \qquad Q(\lambda) = d(\lambda)\,L_2(\lambda) + Q_1(\lambda)\,,$$
where
$$\deg P_1(\lambda) < \deg d(\lambda)\,, \qquad \deg Q_1(\lambda) < \deg d(\lambda)\,. \qquad (3.36)$$
Then from (3.29), we obtain
$$N(\lambda) = P_1(\lambda)\,Q_1'(\lambda) + d(\lambda)\,G_1(\lambda)$$
with
$$G_1(\lambda) = L_1(\lambda)\,Q_1'(\lambda) + P_1(\lambda)\,L_2'(\lambda) + d(\lambda)\,L_1(\lambda)\,L_2'(\lambda) + G(\lambda)\,.$$
Hence
$$A(\lambda) = \frac{P_1(\lambda)\,Q_1'(\lambda)}{d(\lambda)} + G_1(\lambda)\,, \qquad (3.37)$$
where
$$P(\lambda) = \varphi(\lambda)\begin{bmatrix} 1 \\ \Theta(\lambda) \end{bmatrix}, \qquad Q'(\lambda) = \bigl[\,\Lambda(\lambda) \;\; 1\,\bigr]\,\psi(\lambda)\,, \qquad (3.41)$$
and moreover
$$G(\lambda) = \varphi(\lambda)\begin{bmatrix} O_{1,m-1} & 0 \\ G_2(\lambda) & O_{n-1,1} \end{bmatrix}\psi(\lambda)\,. \qquad (3.42)$$
Doing so, Matrix (3.28) takes the S-representation
$$A(\lambda) = \frac{g(\lambda)\,P(\lambda)\,Q'(\lambda)}{d(\lambda)} + g(\lambda)\,G(\lambda)\,. \qquad (3.43)$$
Easily,
$$N_1(\lambda)\begin{bmatrix} \Lambda(\lambda) & 1 \\ L(\lambda) & \Theta(\lambda) \end{bmatrix} N_2(\lambda) = \begin{bmatrix} 1 & O_{1,m-1} \\ O_{n-1,1} & R(\lambda) \end{bmatrix} = B(\lambda) \qquad (3.46)$$
is established, where
$$R(\lambda) = L(\lambda) - \Theta(\lambda)\,\Lambda(\lambda)\,. \qquad (3.47)$$
The polynomials g(), d() are coprime, otherwise the fraction (3.28) would
be reducible. Thereby, all minors of second order of the matrix
$$N_g(\lambda) = \varphi(\lambda)\begin{bmatrix} \Lambda(\lambda) & 1 \\ L(\lambda) & \Theta(\lambda) \end{bmatrix}\psi(\lambda) \qquad (3.48)$$
and the observation that the matrices $N_1(\lambda)$ and $N_2(\lambda)$ are unimodular, we realise that the matrices $N_g(\lambda)$ and $B(\lambda)$ are equivalent. Since all minors of second order of the matrix $B(\lambda)$ are divisible by $d(\lambda)$, we immediately get that all elements of the matrix $R(\lambda)$ are also divisible by $d(\lambda)$, which runs into the equality
$$R(\lambda) = L(\lambda) - \Theta(\lambda)\,\Lambda(\lambda) = d(\lambda)\,G_2(\lambda)\,.$$
Using
() 1 1
= () 1 ,
()() () ()
we generate from (3.49) Formulae (3.40)–(3.42). Relation (3.43) is obtained by substituting (3.49) into (3.28).
P () = 1 () + 2 ()2 () + . . . + n ()n () ,
6.
Example 3.23. Generate the S-representation of a normal matrix (3.28) with
$$N(\lambda) = \begin{bmatrix} \lambda - 1 & 2 & 1 \\ 0 & (\lambda+1)(\lambda-2) & \lambda - 2 \end{bmatrix}, \qquad d(\lambda) = (\lambda - 1)(\lambda - 2)\,.$$
In the present case, the matrix $A(\lambda)$ possesses only the two single poles $\lambda_1 = 1$ and $\lambda_2 = 2$. Hereby, we have
$$N(\lambda_1) = \begin{bmatrix} 0 & 2 & 1 \\ 0 & -2 & -1 \end{bmatrix}, \qquad \operatorname{rank} N(\lambda_1) = 1\,,$$
$$N(\lambda_2) = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \end{bmatrix}, \qquad \operatorname{rank} N(\lambda_2) = 1\,,$$
which is why the matrix $A(\lambda)$, owing to Theorem 3.6, is normal. For the construction of the S-representation, Theorem 3.20 is used. In the present case, we have
$$g(\lambda) = 1\,, \qquad \varphi(\lambda) = I_2\,, \qquad \psi(\lambda) = I_3\,,$$
$$\Theta(\lambda) = \lambda - 2\,, \qquad \Lambda(\lambda) = \bigl[\,\lambda - 1 \;\; 2\,\bigr]\,, \qquad L(\lambda) = \bigl[\,0 \;\; (\lambda+1)(\lambda-2)\,\bigr]\,.$$
Applying (3.39), we produce
$$L(\lambda) - \Theta(\lambda)\,\Lambda(\lambda) = \bigl[\,-d(\lambda) \;\; d(\lambda)\,\bigr] = d(\lambda)\bigl[\,-1 \;\; 1\,\bigr]\,,$$
and therefore
$$G_2(\lambda) = \bigl[\,-1 \;\; 1\,\bigr]\,.$$
On the basis of (3.41), we find
$$P(\lambda) = \begin{bmatrix} 1 \\ \lambda - 2 \end{bmatrix}, \qquad Q'(\lambda) = \bigl[\,\lambda - 1 \;\; 2 \;\; 1\,\bigr]\,.$$
Moreover, due to (3.42), we get
$$G(\lambda) = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 1 & 0 \end{bmatrix}.$$
With these results, we obtain
$$A(\lambda) = \frac{1}{(\lambda-1)(\lambda-2)}\begin{bmatrix} 1 \\ \lambda - 2 \end{bmatrix}\bigl[\,\lambda - 1 \;\; 2 \;\; 1\,\bigr] + \begin{bmatrix} 0 & 0 & 0 \\ -1 & 1 & 0 \end{bmatrix}.$$
Regarding $\deg P(\lambda) = \deg Q(\lambda) = 1$, the generated S-representation is minimal.
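The identity $N(\lambda) = P(\lambda)Q'(\lambda) + d(\lambda)G(\lambda)$ of this example (with signs as reconstructed above) can be verified directly in SymPy:

```python
import sympy as sp

lam = sp.symbols('lambda')

# Data of Example 3.23 (signs as reconstructed): check N = P*Q' + d*G.
d = (lam - 1)*(lam - 2)
N = sp.Matrix([[lam - 1, 2, 1],
               [0, (lam + 1)*(lam - 2), lam - 2]])
P = sp.Matrix([1, lam - 2])
Qt = sp.Matrix([[lam - 1, 2, 1]])
G = sp.Matrix([[0, 0, 0],
               [-1, 1, 0]])

# The S-representation (3.29) holds exactly:
assert (P*Qt + d*G - N).applyfunc(sp.expand) == sp.zeros(2, 3)

# deg P = deg Q = 1 < deg d = 2, so the S-representation is minimal.
assert max(sp.degree(e, lam) for e in P) == 1
```

Running the same check after perturbing $G$ or $P$ shows how sensitively (3.29) ties the three factors together.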
$$\tilde J_p(\lambda, a) = \lambda I_p - J_p(a)\,.$$
In the present case, we have
$$g(\lambda) = 1\,, \qquad \varphi(\lambda) = \psi(\lambda) = I_p\,,$$
$$\Lambda(\lambda) = \bigl[\,(\lambda-a)^{p-1} \;\; \ldots \;\; \lambda-a\,\bigr]\,, \qquad \Theta(\lambda) = \begin{bmatrix} \lambda-a \\ \vdots \\ (\lambda-a)^{p-1} \end{bmatrix},$$
where
$$P(\lambda) = \begin{bmatrix} 1 \\ \Theta(\lambda) \end{bmatrix}, \qquad Q'(\lambda) = \bigl[\,\Lambda(\lambda) \;\; 1\,\bigr]$$
and
$$d(\lambda) = \det \tilde J_p(\lambda, a) = (\lambda - a)^p\,.$$
For determining the polynomial matrix $G(\lambda)$, observe that
$$P(\lambda)\,Q'(\lambda) = \begin{bmatrix} 1 \\ \lambda-a \\ \vdots \\ (\lambda-a)^{p-1} \end{bmatrix}\bigl[\,(\lambda-a)^{p-1} \;\; (\lambda-a)^{p-2} \;\; \ldots \;\; 1\,\bigr] = \begin{bmatrix} (\lambda-a)^{p-1} & (\lambda-a)^{p-2} & \ldots & 1 \\ (\lambda-a)^{p} & (\lambda-a)^{p-1} & \ldots & \lambda-a \\ \vdots & \vdots & \ddots & \vdots \\ (\lambda-a)^{2p-2} & (\lambda-a)^{2p-3} & \ldots & (\lambda-a)^{p-1} \end{bmatrix}.$$
As a result, we obtain
$$\operatorname{adj} \tilde J_p(\lambda, a) - P(\lambda)\,Q'(\lambda) = \begin{bmatrix} 0 & 0 & \ldots & 0 & 0 \\ -(\lambda-a)^{p} & 0 & \ldots & 0 & 0 \\ -(\lambda-a)^{p+1} & -(\lambda-a)^{p} & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ -(\lambda-a)^{2p-2} & -(\lambda-a)^{2p-3} & \ldots & -(\lambda-a)^{p} & 0 \end{bmatrix} = -(\lambda-a)^{p}\begin{bmatrix} 0 & 0 & \ldots & 0 & 0 \\ 1 & 0 & \ldots & 0 & 0 \\ \lambda-a & 1 & \ldots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ (\lambda-a)^{p-2} & (\lambda-a)^{p-3} & \ldots & 1 & 0 \end{bmatrix},$$
and hence
$$\tilde J_p^{-1}(\lambda, a) = \frac{P(\lambda)\,Q'(\lambda)}{(\lambda-a)^{p}} + G(\lambda)\,,$$
where $G(\lambda)$ is the last matrix on the right side above, taken together with the factor $-1$. Since $\deg P(\lambda) < p$ and $\deg Q(\lambda) < p$, the produced S-representation is minimal.
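The adjugate identity behind this S-representation can be checked concretely for $p = 3$ (an upper Jordan block, as in the text; the matrices follow the pattern derived above):

```python
import sympy as sp

lam, a = sp.symbols('lambda a')

# Upper Jordan block J_3(a) and its characteristic matrix.
J = sp.Matrix([[a, 1, 0],
               [0, a, 1],
               [0, 0, a]])
M = lam*sp.eye(3) - J
mu = lam - a

# S-representation data for p = 3, following the pattern of the text.
P = sp.Matrix([1, mu, mu**2])
Qt = sp.Matrix([[mu**2, mu, 1]])
G = -sp.Matrix([[0, 0, 0],
                [1, 0, 0],
                [mu, 1, 0]])

# adj(lam*I - J) = P*Q' + (lam-a)^3 * G, i.e.
# (lam*I - J)^(-1) = P*Q'/(lam-a)^3 + G.
assert (M.adjugate() - P*Qt - mu**3*G).applyfunc(sp.expand) == sp.zeros(3, 3)
assert sp.expand(M.det() - mu**3) == 0
```

The strictly lower triangular $G(\lambda)$ carries exactly the part of $P(\lambda)Q'(\lambda)$ that overshoots the upper triangular adjugate.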
Consider now
$$\tilde A_F^{-1}(\lambda) = (\lambda I_p - A_F)^{-1}\,,$$
where
$$A_F = \begin{bmatrix} 0 & 1 & 0 & \ldots & 0 \\ 0 & 0 & 1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & 1 \\ -d_p & -d_{p-1} & -d_{p-2} & \ldots & -d_1 \end{bmatrix} \qquad (3.53)$$
is the lower Frobenius normal form of dimension $p \times p$. Its characteristic matrix obviously has the shape
$$\lambda I_p - A_F = \begin{bmatrix} \lambda & -1 & 0 & \ldots & 0 \\ 0 & \lambda & -1 & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & -1 \\ d_p & d_{p-1} & d_{p-2} & \ldots & \lambda + d_1 \end{bmatrix}. \qquad (3.54)$$
$$\begin{aligned} d(\lambda) &= \lambda^{p} + d_1\lambda^{p-1} + \ldots + d_{p-1}\lambda + d_p\,, \\ d_1(\lambda) &= \lambda^{p-1} + d_1\lambda^{p-2} + \ldots + d_{p-1}\,, \\ d_2(\lambda) &= \lambda^{p-2} + d_1\lambda^{p-3} + \ldots + d_{p-2}\,, \\ &\;\;\vdots \\ d_{p-1}(\lambda) &= \lambda + d_1\,, \\ d_p(\lambda) &= 1\,, \end{aligned} \qquad (3.56)$$
where
$$P_F(\lambda) = \begin{bmatrix} 1 \\ \lambda \\ \vdots \\ \lambda^{p-1} \end{bmatrix}, \qquad Q_F'(\lambda) = \bigl[\,d_1(\lambda) \;\; \ldots \;\; d_{p-1}(\lambda) \;\; 1\,\bigr]\,. \qquad (3.58)$$
Per construction, $\deg d(\lambda) = p$ and $\deg a_{ik}(\lambda) \le p - 1$. Comparing this with (1.6), we recognise that $g_{ik}(\lambda)$ is the integral part and $a_{ik}(\lambda)$ the remainder when dividing the polynomial $b_{ik}(\lambda)$ by $d(\lambda)$. Utilising (3.58), we arrive at
$$b_{ik}(\lambda) = \lambda^{i-1}\,d_k(\lambda)\,. \qquad (3.60)$$
Due to $\deg d_k(\lambda) = p - k$, we obtain $\deg b_{ik}(\lambda) = p - k + i - 1$. Thus, for $k \ge i$, we get $g_{ik}(\lambda) = 0$. Substituting $i = k + \ell$, $(\ell = 1, \ldots, p-k)$ and taking into account (3.60) and (3.56), we receive
$$g_{ik}(\lambda) = \lambda^{\ell-1}\,, \qquad (i = k + \ell;\; \ell = 1, \ldots, p-k)\,.$$
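The Frobenius-form S-representation $\operatorname{adj}(\lambda I_p - A_F) = P_F(\lambda)Q_F'(\lambda) + d(\lambda)G(\lambda)$ can be verified for a concrete cubic. The coefficients below are hypothetical; only the structure (3.53), (3.56), (3.58) is taken from the text.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical coefficients for d(lam) = lam^3 + d1*lam^2 + d2*lam + d3.
d1c, d2c, d3c = 6, 11, 6
AF = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [-d3c, -d2c, -d1c]])          # lower Frobenius form (3.53)
d = lam**3 + d1c*lam**2 + d2c*lam + d3c

# Polynomials (3.56) and vectors (3.58):
dd1 = lam**2 + d1c*lam + d2c                  # d_1(lam)
dd2 = lam + d1c                               # d_2(lam)
PF = sp.Matrix([1, lam, lam**2])
QF = sp.Matrix([[dd1, dd2, 1]])

M = lam*sp.eye(3) - AF
assert sp.expand(M.det() - d) == 0

# adj(lam*I - A_F) - P_F*Q_F' is divisible by d entrywise, so the
# strictly proper part of (lam*I - A_F)^(-1) is exactly P_F*Q_F'/d.
R = (M.adjugate() - PF*QF).applyfunc(sp.expand)
G = (R / d).applyfunc(sp.cancel)
assert all(sp.fraction(e)[1] == 1 for e in G)   # G is a polynomial matrix
```

The entries of $G$ reproduce (up to sign) the integral parts $g_{ik}(\lambda) = \lambda^{\ell-1}$ computed above.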
and let $(A_0, B_0, C_0)$ be one of its simple realisations. Then any simple realisation of the matrix $A(\lambda)$ has the form $(QA_0Q^{-1}, QB_0, C_0Q^{-1})$ with a certain non-singular matrix $Q$. Keeping in mind that all simple matrices of the same dimension with the same characteristic polynomial are similar, the matrix $Q$ can be selected in such a way that the equation $QA_0Q^{-1} = A_1$ holds for a cyclic matrix $A_1$ of a prescribed form. Especially, we can achieve $A_1 = J$, where $J$ is a Jordan matrix (1.97) and every distinct root of the polynomial $d(\lambda)$ corresponds to exactly one Jordan block. The corresponding simple realisation $(J, B_J, C_J)$ is named a Jordan realisation. But, if we choose $A_1 = A_F$, where
3.6 Construction of Simple Jordan Realisations 127
Lemma 3.25. Let J_μ(a) be an upper Jordan block (3.50), and J_μ(λ, a) be its corresponding characteristic matrix. Introduce for fixed μ the matrices H_i, (i = 0, . . . , μ−1) of the following shape:

H_0 = I_μ ,   H_1 = [ 0 1 0 . . . 0 ; 0 0 1 . . . 0 ; .. ; 0 0 0 . . . 1 ; 0 0 0 . . . 0 ] ,   . . . ,   H_{μ−1} = [ O_{1,μ−1}  1 ; O_{μ−1,μ−1}  O_{μ−1,1} ] .   (3.70)

Then

adj [λI_μ − J_μ(a)] = (λ−a)^{μ−1} H_0 + (λ−a)^{μ−2} H_1 + . . . + (λ−a)H_{μ−2} + H_{μ−1}   (3.71)

is true, where

L_1 = u_1 v_1 ,
L_2 = u_1 v_2 + u_2 v_1 ,
 ..                                                                 (3.73)
L_μ = u_1 v_μ + u_2 v_{μ−1} + . . . + u_μ v_1 = U V .
Proof. The proof follows directly by inserting (3.71) and (3.70) into the left
side of (3.72).
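Relation (3.71) can be checked symbolically (a sketch assuming sympy; μ = 4 chosen for illustration, the H_i being powers of the upper shift matrix):

```python
import sympy as sp

lam, a = sp.symbols('lambda a')
mu = 4
# Characteristic matrix of an upper Jordan block J_mu(a)
J = sp.Matrix(mu, mu, lambda i, j: a if i == j else (1 if j == i + 1 else 0))
char = lam * sp.eye(mu) - J

def H(i):
    # H_i from (3.70): the i-th power of the upper shift matrix, H_0 = I
    return sp.Matrix(mu, mu, lambda r, c: 1 if c - r == i else 0)

rhs = sp.zeros(mu, mu)
for i in range(mu):
    rhs += (lam - a)**(mu - 1 - i) * H(i)
# (3.71): adj[lambda*I - J_mu(a)] equals the shift-matrix sum
assert sp.expand(char.adjugate() - rhs) == sp.zeros(mu, mu)
```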
Remark 3.27. Arguing in the reverse direction, it follows that, under Assumption (3.73), the right side of Relation (3.72) equals the left one.
A(λ) = N(λ)/d(λ) ,                                                  (3.74)
where
N(λ) = P(λ)Q*(λ) + d(λ)G(λ) .                                       (3.75)
Since Matrix (3.74) is strictly proper, it can be expanded into partial fractions (2.98). Applying (3.65) and (2.96)–(2.97), this expansion can be expressed in the form
A(λ) = Σ_{j=1}^{q} A_j(λ) ,                                         (3.76)

where

A_j(λ) = A_{j1}/(λ−λ_j)^{ν_j} + A_{j2}/(λ−λ_j)^{ν_j−1} + . . . + A_{j,ν_j}/(λ−λ_j) ,   (j = 1, . . . , q) .   (3.77)

Moreover,

N(λ)(λ−λ_j)^{ν_j}/d(λ) = A_{j1} + (λ−λ_j)A_{j2} + . . . + (λ−λ_j)^{ν_j−1}A_{j,ν_j} + (λ−λ_j)^{ν_j} R_j(λ) ,   (3.78)
where R_j(λ) is a rational matrix that is analytic at the point λ = λ_j.
Utilising (3.74), (3.75), we can write

N(λ)(λ−λ_j)^{ν_j}/d(λ) = P(λ)Q_j*(λ) + (λ−λ_j)^{ν_j} G(λ) ,        (3.79)

where

Q_j(λ) = Q(λ)/d_j(λ) ,   d_j(λ) = d(λ)/(λ−λ_j)^{ν_j} .
In accordance with (3.78), for the determination of the matrices A_jk, (k = 1, . . . , ν_j), we have to find the first ν_j terms of the Taylor-series expansion of the right side of (3.79). Obviously, the matrices A_jk, (k = 1, . . . , ν_j) do not depend on the matrix G(λ).
Near the point λ = λ_j, suppose the developments
where the vectors P_jk and Q_jk are determined by (3.66). Then we get
Taking into account Remark 3.27, we obtain from the last expression

A_j(λ) = (λ−λ_j)^{−ν_j} P_j adj[λI_{ν_j} − J_{ν_j}(λ_j)] Q̃_j
       = P_j [λI_{ν_j} − J_{ν_j}(λ_j)]^{−1} Q̃_j ,

and hence

A(λ) = Σ_{j=1}^{q} P_j [λI_{ν_j} − J_{ν_j}(λ_j)]^{−1} Q̃_j = P_J (λI_p − J)^{−1} Q_J ,
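The extraction of the matrices A_jk via (3.78) can be sketched with sympy (the Jordan realisation data below are invented for illustration): the A_jk are Taylor coefficients of A(λ)(λ−λ_j)^{ν_j} at λ_j, and reassembling the partial fractions recovers A(λ):

```python
import sympy as sp

lam = sp.symbols('lambda')
# Hypothetical Jordan realisation: poles 1 (one block of order 2) and 2
J = sp.Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 2]])
B = sp.Matrix([[0, 1], [1, 0], [1, 1]])
C = sp.Matrix([[1, 0, 1], [0, 1, 1]])
A = sp.simplify(C * (lam * sp.eye(3) - J).inv() * B)
# Reassemble A(lambda) from the A_jk of (3.77)-(3.78)
recon = sp.zeros(2, 2)
for lj, nu in [(1, 2), (2, 1)]:
    F = sp.simplify(A * (lam - lj)**nu)          # analytic at lambda = lj
    for k in range(1, nu + 1):
        # A_jk = (k-1)-th Taylor coefficient of F at lj
        Ajk = F.applyfunc(
            lambda e: sp.limit(sp.diff(e, lam, k - 1), lam, lj) / sp.factorial(k - 1))
        recon += Ajk / (lam - lj)**(nu - k + 1)
assert sp.simplify(A - recon) == sp.zeros(2, 2)
```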
3.
Example 3.28. Find the Jordan realisation of the strictly proper normal matrix
A(λ) = [ (λ−1)^2  1 ; 0  λ−2 ] / ((λ−1)^2 (λ−2)) .                  (3.80)
Inserting (3.81) and (3.82) into (3.63), we get the wanted S-representation. By construction, Relation (3.36) is fulfilled, i.e. the obtained S-representation is minimal. Therefore, Formulae (3.81), (3.82) open the possibility of a direct transition from the Frobenius realisation to the corresponding minimal S-representation of its transfer matrix.
2.
Example 3.29. Assume the Frobenius realisation with

A_F = [ 0 1 0 ; 0 0 1 ; −2 −1 −1 ] ,  B_F = [ 1 0 ; 2 3 ; 0 1 ] ,  C_F = [ 1 2 0 ; 3 1 1 ] .   (3.83)
d(λ) = λ^3 + λ^2 + λ + 2 ,
d_1(λ) = λ^2 + λ + 1 ,   d_2(λ) = λ + 1 .
A(λ) = C_F P_F(λ)Q_F*(λ)B_F / d(λ) + C_F G_F(λ)B_F .               (3.90)
C_F P_F(λ) = N_1 + N_2 λ + . . . + N_s λ^{s−1} = P̃(λ) ,            (3.91)

and also
where d_1(λ), . . . , d_{s−1}(λ) are the polynomials (3.56). Substituting (3.56) into the last equation, we find

M_s = B_1 ,
M_{s−1} = B_2 + d_1 B_1 ,
M_{s−2} = B_3 + d_1 B_2 + d_2 B_1 ,
 ..                                                                 (3.93)
M_1 = B_s + d_1 B_{s−1} + d_2 B_{s−2} + . . . + d_{s−1} B_1 ,
Ã(λ) = P̃(λ)Q̃*(λ)/d(λ) + C_F G_F(λ)B_F .                           (3.95)

Comparing this expression with (3.64) and paying attention to the fact that the matrices A(λ) and Ã(λ) are strictly proper, and the matrices G(λ) and C_F G_F(λ)B_F are polynomial matrices, we obtain

A(λ) = Ã(λ) .
A(λ) = C adj(λI_p − A)B / det(λI_p − A) = N(λ)/d_A(λ) .             (3.96)
can be generated with unimodular matrices Φ(λ), Ψ(λ). Then, for constructing the S-representation, Theorem 3.20 is applicable. Using Theorem 3.20, the last equation yields
that leads to
where
A(λ) = P(λ)Q*(λ)/d_A(λ) + G(λ) .
2. For calculating the adjoint matrix adj(λI_p − A), we can benefit from some general relations in [51]. Assume

d_A(λ) = λ^p − q_1 λ^{p−1} − . . . − q_p .

Then

adj(λI_p − A) = λ^{p−1} I_p + λ^{p−2} F_1 + . . . + F_{p−1} ,       (3.97)

where

F_1 = A − q_1 I_p ,   F_2 = A^2 − q_1 A − q_2 I_p ,  . . .
3.8 Construction of S-representations from Simple Realisations. General Case 137
or generally

F_k = A^k − q_1 A^{k−1} − . . . − q_k I_p .

The matrices F_1, . . . , F_{p−1} can be calculated successively by the recursion

F_k = AF_{k−1} − q_k I_p ,   (k = 1, 2, . . . , p−1;  F_0 = I_p) ,

and, moreover,

AF_{p−1} − q_p I_p = O_{pp} .
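The recursion admits a direct numerical sketch (illustrative data; numpy assumed, with the sign convention d_A(λ) = λ^p − q_1λ^{p−1} − . . . − q_p used above):

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])        # illustrative matrix, p = 2
p = A.shape[0]
# numpy returns monic coefficients of det(l*I - A); flip signs to get q_1, ..., q_p
q = -np.poly(A)[1:]
F = [np.eye(p)]                                  # F_0 = I_p
for k in range(1, p):
    F.append(A @ F[-1] - q[k - 1] * np.eye(p))   # F_k = A F_{k-1} - q_k I_p
# Termination check: A F_{p-1} - q_p I_p = 0
assert np.allclose(A @ F[-1] - q[p - 1] * np.eye(p), 0.0)
# adj(l*I - A) = l^{p-1} I + l^{p-2} F_1 + ... + F_{p-1}; spot-check at l = 2
l = 2.0
adj = sum(l**(p - 1 - k) * F[k] for k in range(p))
M = l * np.eye(p) - A
assert np.allclose(adj @ M, np.linalg.det(M) * np.eye(p))
```

The last assertion uses adj(M)·M = det(M)·I to validate the recursion at a sample point.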
3.
Example 3.33. Suppose the simple realisation (A, B, C) with

A = [ 0 1 ; 0 −1 ] ,   B = [ 0 1 ; 1 −1 ] ,   C = [ 1 0 ; 1 1 ] .   (3.98)
d_A(λ) = λ^2 + λ

and

adj(λI_2 − A) = λI_2 + F_1

with

F_1 = A + I_2 = [ 1 1 ; 0 0 ] .

Formula (3.97) delivers

adj(λI_2 − A) = [ λ+1  1 ; 0  λ ] ,

where

P(λ) = [ 1 ; λ+1 ] ,   Q(λ) = [ 1 ; λ ] ,   G(λ) = [ 0 0 ; 0 −1 ] .
Therefore, the S-representation of the matrix A(λ) for the realisation (3.98) takes the shape

A(λ) = [ 1 ; λ+1 ] [ 1  λ ] / (λ^2 + λ) + [ 0 0 ; 0 −1 ] .          (3.99)

The obtained S-representation is minimal.
A(λ) = N(λ)/d(λ)                                                    (3.100)
be given with deg d(λ) = p. Then, in accordance with Section 3.1, Matrix (3.100) allows the irreducible complete MFD

A(λ) = a_l^{−1}(λ)b_l(λ) = b_r(λ)a_r^{−1}(λ) ,                      (3.101)

for which

ord a_l(λ) = ord a_r(λ) = Mdeg A(λ) = p .
In principle, for building a complete MFD (3.101), the general methods from Section 2.4 can be applied. However, with respect to numerical effort and numerical stability, essentially more effective methods can be developed when we profit from the special structure of normal matrices while constructing complete MFDs.
2.
Theorem 3.34. Let the numerator of Matrix (3.100) be brought into the form
(3.38)
() 1
N () = g()() () . (3.102)
L() ()
Then the pair of matrices
d() O1,n1 1
al () = () , bl () = al ()A() (3.103)
() In1
Multiplying this from left with the matrix al () in (3.103), and considering
d() O1,n1 1 O1,n1 d() O1,n1
= ,
() In1 () In1 On1,1 In1
3.
Example 3.35. Let us have a normal matrix (3.100) with
2 3
N () = , d() = 2 4 + 3 .
2 + + 1 22 5
In this case, we find
0 1 2 1 01
N () = ,
1 +1 2 3 10
so we get
0 1 01
() = , () = , g() = 1 ,
1 +1 10
and with respect to (3.103), (3.104), we obtain immediately
d() 0 ( + 1) 1 d()( + 1) d()
al () = = ,
1 1 0 2 + + 1
1 2
bl () = al ()A() = .
0 1
we arrive at
2 7 3 1 2
()Rh () = .
2.5 + 1 0.5 0.5 1
A(λ) = P(λ)Q*(λ)/d(λ) + G(λ) ,                                      (3.105)

P(λ)/d(λ) = a_l^{−1}(λ)b_l(λ) ,   Q*(λ)/d(λ) = b_r(λ)a_r^{−1}(λ) .   (3.106)
Proof. Due to Remark 3.4, the matrix P(λ)/d(λ) is normal. Therefore, for the ILMFD (3.106), det a_l(λ) ∼ d(λ) is true, and the first row in (3.107) proves to be a complete LMFD of the matrix A(λ). In analogy, we realise that the second row contains a complete RMFD.
3.10 Normalisation of Rational Matrices 141
5.
Example 3.37. For Matrix (3.99) in Example 3.33,

P(λ)/d(λ) = [ 1 ; λ+1 ] / (λ^2 + λ) ,   Q*(λ)/d(λ) = [ 1  λ ] / (λ^2 + λ)
is built, which is normal only for ε = 0. For the nominal matrix (3.108), there exists the simple realisation (A, B, C) with

A = [ a 0 ; 0 b ] ,   B = [ 0 1 ; 1 0 ] ,   C = [ 0 1 ; 1 0 ] .     (3.110)
All other simple realisations of Matrix (3.108) are produced from (3.110) by similarity transformations. Realisation (3.110) corresponds to the system of differential equations of second order

ẋ_1 = ax_1 + u_2 ,
ẋ_2 = bx_2 + u_1 ,                                                  (3.111)
y_1 = x_2 ,  y_2 = x_1 .
For ε = 0, these equations do not turn into (3.111), and the component x_1 loses controllability. Moreover, for ε = 0 and a > 0, the object is no longer stabilisable, though the nominal object (3.111) was stabilisable.
In constructing the MFDs, similarly different solutions are obtained for ε = 0 and ε ≠ 0. Indeed, if the numerator of the perturbed matrix (3.109) is written in Smith canonical form, then we obtain for b − a + ε ≠ 0
a+ 0
0 b
1
a+
a+ ba+ 1 0 b
ba+ ba+
= .
b 1
ba+
0 ( a + )( b) 1 1
A () = ( a)2 ( b) .
A(λ) = C adj(λI_p − A)B / det(λI_p − A) = N(λ)/d(λ)                 (3.114)
Ã(λ) = Ñ(λ)/d̃(λ) ,                                                 (3.115)
which practically always deviates from a normal matrix. Even more, if the random calculation errors are independent, then Matrix (3.115) with probability 1 is not normal. Hence the transition from Matrix (3.115) to its minimal realisation leads to a realisation (Ã, B̃, C̃) of dimension n, q, m with q > p, that means, to an object of higher order with non-predictable dynamic properties.
5. Situations of this kind also arise during the application of frequency-domain methods for the design of linear MIMO systems [196, 6, 48, 206, 95], . . . . The algorithm of the optimal controller is normally based on the demand that it is described by a simple realisation (A_0, B_0, C_0). The design method is usually supplied by numerical calculations, so that the transfer matrix of the optimal controller Ã_0(λ) will practically never be normal. Therefore, the really produced realisation (Ã_0, B̃_0, C̃_0) of the optimal controller will have an increased order. Due to this fact, the system with this controller may show an unintended behaviour; especially, it might become (internally) unstable.
A°(λ) = N°(λ)/d°(λ) ,

the coefficients of which differ only slightly from the coefficients of Matrix (3.115).
rational matrix A() can be brought into the form (3.102)
() 1
N () = g()() () .
L() ()
Let d̃(λ) be the denominator of the approximated strictly proper matrix (3.115). Then, owing to (3.40), the following representation is an immediate consequence of the above considerations:
A°(λ) = g(λ)P°(λ)Q°*(λ)/d°(λ) + G°(λ) ,
where
1
d () = d(),
P () = () , Q () =
() 1 () (3.116)
()
b
g() = 1, () = + b,
() =
ba+
and
b a + 1 11
() = , () = .
0 1 10
Using (3.116), we finally find
b a + 1 1 a+
P () = = ,
0 1 + b + b
b 11
b
Q () = ba+ 1 = ba+ b
+ 1 ba+ .
10
In this chapter, and later on where possible, the fundamental results are formulated for real polynomials and real rational matrices, because this case dominates in technical applications and is more convenient to handle.
Then the matrix Q(λ, α, β) is simple. In connection with the above, the following task seems substantiated.

Structural eigenvalue assignment. For a given process (a(λ), b(λ)) and scalar polynomial d(λ), let the eigenvalue assignment (4.3) deliver the solution set Ω_d. Find the subset Ω̄_d ⊂ Ω_d for which the matrix Q(λ, α, β) possesses a prescribed sequence of invariant polynomials a_1(λ), . . . , a_{n+m}(λ).
is fulfilled.
4.2 Basic Controllers 151
4. Let the task of eigenvalue assignment for a PMD process be solvable, and let Ω be the resulting set of controllers (α(λ), β(λ)). For different controllers in Ω, the sequence of the invariant polynomials of Matrix (4.5) can be different. Therefore, the next task is also of interest.

Structural eigenvalue assignment for a PMD process. For a given PMD process (4.4) and polynomial d(λ), the set of solutions (α(λ), β(λ)) of the eigenvalue assignment (4.6) is designated by Ω. Find the subset Ω̄ ⊂ Ω for which the matrix Q̄(λ, α, β) possesses a prescribed sequence of invariant polynomials.
In the present chapter, the general solution of the eigenvalue assignment problem is derived, where the processes are given as polynomial pairs or as PMDs. Moreover, the structure of the set of invariant polynomials that can be prescribed for this task is stated. Although the following results are formulated for real matrices, they can be transferred practically without changes to the complex case. In the considerations below, the eigenvalue assignment problem is also called the modal control problem, and the determination of the structured eigenvalues is also named the structural modal control problem.
2. The next theorem presents a general expression for the set of all basic controllers for a given irreducible pair.

Theorem 4.1. Let (α_0(λ), β_0(λ)) be a certain basic controller for the process (a(λ), b(λ)). Then the set of all basic controllers (ᾱ_0(λ), β̄_0(λ)) is determined by the formula

ᾱ_0(λ) = D(λ)α_0(λ) − M(λ)b(λ) ,
β̄_0(λ) = D(λ)β_0(λ) − M(λ)a(λ) ,                                   (4.7)

where M(λ) ∈ R^{m×n}[λ] is an arbitrary polynomial matrix, and D(λ) ∈ R^{m×m}[λ] is an arbitrary unimodular matrix.
152 4 Assignment of Eigenvalues and Eigenstructures by Polynomial Methods
Proof. The set of all basic controllers is denoted by R_0, and the set of all pairs satisfying Condition (4.7) by R_p. At first, we will show R_0 ⊂ R_p.
Let (α_0(λ), β_0(λ)) be a certain basic controller, and

Q_l(λ, α_0, β_0) = [ a(λ)  b(λ) ; β_0(λ)  α_0(λ) ]                  (4.8)

the corresponding basic matrix with inverse

Q_l^{−1}(λ, α_0, β_0) = Q_r(λ, α_0, β_0) = [ α_r(λ)  b_r(λ) ; β_r(λ)  a_r(λ) ] ,   (4.9)

the column blocks having widths n and m. Owing to the properties of the inverse matrix (2.109), we have the relations

a(λ)α_r(λ) − b(λ)β_r(λ) = I_n ,
a(λ)b_r(λ) − b(λ)a_r(λ) = O_{nm} .                                  (4.10)
where
3. As emerges from (4.7), before constructing the set of all basic controllers, we first have to find one sample of them. Usually, search procedures for such a controller are founded on the following considerations.
Lemma 4.2. For the irreducible process (a(λ), b(λ)), there exist an m × m polynomial matrix a_r(λ) and an n × m polynomial matrix b_r(λ) such that the equation

a(λ)b_r(λ) = b(λ)a_r(λ)                                             (4.13)

is fulfilled, where the pair [a_r(λ), b_r(λ)] is irreducible.
Proof. Since the process (a(λ), b(λ)) is irreducible, there exists a basic controller (α_0(λ), β_0(λ)) such that the matrix

Q(λ, α_0, β_0) = [ a(λ)  b(λ) ; β_0(λ)  α_0(λ) ]

becomes unimodular. Thus, the inverse matrix

Q^{−1}(λ, α_0, β_0) = [ α_0r(λ)  b_r(λ) ; β_0r(λ)  a_r(λ) ]

is also unimodular. Then Statement (4.13) follows from (4.10). Moreover, the pair [a_r(λ), b_r(λ)] is irreducible thanks to Theorem 1.32.
Remark 4.3. If the matrix a(λ) is non-singular, i.e. det a(λ) ≢ 0, then there exists the transfer matrix of the process

w(λ) = a^{−1}(λ)b(λ) .

The right side of this equation proves to be an ILMFD of the matrix w(λ). If we consider an arbitrary IRMFD

w(λ) = b_r(λ)a_r^{−1}(λ) ,

then Equation (4.13) holds, and the pair [a_r(λ), b_r(λ)] is irreducible. Therefore, Lemma 4.2 is a generalisation of this property to the case where the matrix a(λ) is singular.
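As a small sympy illustration of Remark 4.3 and Relation (4.13) (the concrete matrices are made up for this sketch):

```python
import sympy as sp

lam = sp.symbols('lambda')
# Hypothetical left model (a, b) and a matching right model [a_r, b_r]
a = sp.Matrix([[lam, 1], [0, lam]])
b = sp.Matrix([[1], [1]])
w = sp.simplify(a.inv() * b)                     # process transfer matrix a^{-1} b
a_r = sp.Matrix([[lam**2]])
b_r = sp.Matrix([[lam - 1], [lam]])
# The right MFD reproduces the same transfer matrix ...
assert sp.simplify(w - b_r * a_r.inv()) == sp.zeros(2, 1)
# ... and Relation (4.13) holds: a b_r = b a_r
assert sp.expand(a * b_r - b * a_r) == sp.zeros(2, 1)
```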
In what follows, the original pair (a(λ), b(λ)) is called the left process model, and any pair [a_r(λ), b_r(λ)] satisfying (4.13) is named a right process model. If, in this case, the pair [a_r(λ), b_r(λ)] is irreducible, then the right process model is also designated as irreducible.
Lemma 4.4. Let [a_r(λ), b_r(λ)] be an irreducible right process model. Then any pair (α_0(λ), β_0(λ)) satisfying the Diophantine equation
with a unimodular matrix P(λ) turns out to be a basic controller for the left process model (a(λ), b(λ)).

Proof. Since the pair (a(λ), b(λ)) is irreducible, there exists a vertical pair [α_r(λ), β_r(λ)] with

a(λ)α_r(λ) − b(λ)β_r(λ) = I_n .

The pair [α_0(λ), β_0(λ)] should satisfy Condition (4.14). Then build the product

[ a(λ)  b(λ) ; β_0(λ)  α_0(λ) ] [ α_r(λ)  b_r(λ) ; β_r(λ)  a_r(λ) ] = [ I_n  O_{nm} ; M(λ)  P(λ) ] ,
Remark 4.6. It is easily shown that the set of all pairs (α_0(λ), β_0(λ)) satisfying Equation (4.14) for all possible unimodular matrices P(λ) generates the complete set of basic controllers.
1. As arises from Lemma 4.4, a basic controller (α_0(λ), β_0(λ)) can be found as a solution of the Diophantine matrix equation (4.14). In the present section, an alternative method for finding a basic controller is described that leads to a recursive solution of simpler scalar Diophantine equations and does not need the matrices a_r(λ), b_r(λ) arising in (4.14).
4.3 Recursive Construction of Basic Controllers 155
Lemma 4.7. A necessary and sufficient condition for the solvability of Equation (4.15) is that the greatest common divisor of the polynomials a_i(λ), (i = 1, . . . , n) is a divisor of the polynomial c(λ).
a_i(λ) = ε(λ)a_{1i}(λ) ,   (i = 1, . . . , n) ,                     (4.16)

from which it is clear that the polynomial c(λ) must be divisible by ε(λ).
Sufficiency: The proof is done by complete induction. The statement is assumed to be valid for some n = k > 0, and then it is shown that it is also valid for n = k + 1. Consider the equation

Σ_{i=1}^{k+1} a_i(λ)x_i(λ) = c(λ) ,                                 (4.17)

which is solvable due to the induction supposition, since all coefficients a_{1i}(λ) are coprime in their totality. Let x̃_i(λ), (i = 1, . . . , k) be any solution of Equation (4.19). Then, applying (4.16) and (4.18), we get

Σ_{i=1}^{k+1} a_i(λ)x̃_i(λ) = c(λ)
Here, the coefficients are coprime in their totality, though they are not pairwise coprime. In the present case, the auxiliary equation (4.18) can be given the shape
() = 2 0.5 ,
u x
3 () = 0.5 .
x
1 () = 0.5 , 2 () = 1 .
x
x
1 () = 0.5 , 2 () = 1 ,
x x
3 () = 0.5 .
4.
Lemma 4.9. Suppose the n × (n+1) polynomial matrix

A(λ) = [ a_11(λ) . . . a_{1,n+1}(λ) ; .. ; a_{n1}(λ) . . . a_{n,n+1}(λ) ]   (4.20)

A_1(λ) = [ A(λ) ; d_1(λ) . . . d_{n+1}(λ) ]

Σ_{i=1}^{n+1} (−1)^{n+1+i} d_i(λ)Δ_{1i}(λ) = 1 .
5.
Theorem 4.10. [193] Suppose the non-degenerate n × m polynomial matrix
Assume that the submatrix A(λ) to the left of the line has the form (4.20). Let D_A(λ) be the monic GCD of the minors of n-th order of the matrix A(λ), and D_Ā(λ) the monic GCD of the minors of n-th order of the matrix Ā(λ). The polynomials d_1(λ), . . . , d_{n+1}(λ) should be a solution of Equation (4.22). Then the monic GCD of the minors of n-th order of the matrix

A_d(λ) = [ a_11(λ) . . . a_1n(λ)   a_{1,n+1}(λ)   a_{1,n+2}(λ) . . . a_1m(λ) ;
           ..                      ..             ..                ..        ;
           a_{n1}(λ) . . . a_{nn}(λ)  a_{n,n+1}(λ)  a_{n,n+2}(λ) . . . a_{nm}(λ) ;
           d_1(λ) . . . d_n(λ)     d_{n+1}(λ)     d_{n+2}(λ) . . . d_m(λ) ]   (4.23)
6. Suppose an irreducible pair (a(λ), b(λ)), where a(λ) has the dimension n × n and b(λ) the dimension n × m. Then, by successively repeating the procedure explained in Theorem 4.10, the unimodular matrix

Q_l(λ, α_0, β_0) = [ a_11(λ) . . . a_1n(λ)   b_11(λ) . . . b_1m(λ) ;
                     ..                      ..                    ;
                     a_{n1}(λ) . . . a_{nn}(λ)  b_{n1}(λ) . . . b_{nm}(λ) ;
                     β_11(λ) . . . β_1n(λ)   α_11(λ) . . . α_1m(λ) ;
                     ..                      ..                    ;
                     β_{m1}(λ) . . . β_{mn}(λ)  α_{m1}(λ) . . . α_{mm}(λ) ]

is produced. The last m rows of this matrix present a certain basic controller
α_0(λ) = [ α_11(λ) . . . α_1m(λ) ; .. ; α_{m1}(λ) . . . α_{mm}(λ) ] ,
β_0(λ) = [ β_11(λ) . . . β_1n(λ) ; .. ; β_{m1}(λ) . . . β_{mn}(λ) ] .
7.
Example 4.12. Determine a basic controller for the process (a(λ), b(λ)) with

a(λ) = [ λ−1  λ+1 ; 0  λ−1 ] ,   b(λ) = [ 0  λ ; λ  0 ] .           (4.24)
is alatent, which is easily checked. Hence the design problem for a basic controller is solvable. In the first step, we search for polynomials d_1(λ), d_2(λ), d_3(λ) such that the matrix

A_1(λ) = [ λ−1  λ+1  0 ; 0  λ−1  λ ; d_1(λ)  d_2(λ)  d_3(λ) ]
The polynomial d_4(λ), which is not yet determined, can be chosen arbitrarily. Take for instance d_4(λ) = 0, so the alatent matrix becomes
1 +1 0
0
A2 () = 0 1 .
1
0 1 0
2 2
d_1(λ) = d_2(λ) = d_3(λ) = 0 ,   d_4(λ) = 1 .
Using this solution and Formula (4.7), we construct the set of all basic con-
trollers for the process (4.24).
Example 4.13. Find a basic controller for the process (a(λ), b(λ)) with

a(λ) = [ λ−1  λ+1 ; λ−1  λ+1 ] ,   b(λ) = [ 1 ; 0 ] .               (4.25)

In the present case, the matrix a(λ) is singular; nevertheless, a basic controller can be found, because the matrix

R_h(λ) = [ a(λ)  b(λ) ] = [ λ−1  λ+1  1 ; λ−1  λ+1  0 ]

(λ+1)d_1(λ) + (λ−1)d_2(λ) = 1
is named a left basic matrix. However, if the pair [a_r(λ), b_r(λ)] is an irreducible right process model, then the vertical pair [α_0r(λ), β_0r(λ)], for which the matrix

Q_r(λ, α_0r, β_0r) = [ α_0r(λ)  b_r(λ) ; β_0r(λ)  a_r(λ) ]          (4.27)

becomes unimodular, is said to be a right basic controller, and the configured matrix (4.27) is called a right basic matrix.
2. Using (4.27), we nd
0r () 0r () 0r () 0r () ar () br ()
Qr (, 0r , 0r ) = ,
br () ar () ar () br ()
0r
() 0r ()
where the symbol ∼ stands for the equivalence of polynomial matrices. Now, if [α_0r(λ), β_0r(λ)] is any right basic controller, then applying Theorem 4.1 and the last relation, the set of all right basic controllers is expressed by the formula

ᾱ_0r(λ) = α_0r(λ)D_r(λ) − b_r(λ)M_r(λ) ,
β̄_0r(λ) = β_0r(λ)D_r(λ) − a_r(λ)M_r(λ) ,                           (4.28)
3. The basic matrices (4.26) and (4.27) are called dual if the equation

Q_r(λ, α_0r, β_0r) = Q_l^{−1}(λ, α_0l, β_0l)                        (4.29)

holds, or equivalently, if
Remark 4.14. The validity of the relations of any one of the groups (4.31) or (4.32) is necessary and sufficient for the validity of Formulae (4.29), (4.30). Therefore, each of the groups of Relations (4.31) or (4.32) follows from the other one.
4. Applying the new notation, Formula (4.7) can be expressed in the form

[ β̄_0l(λ)  ᾱ_0l(λ) ] = [ M_l(λ)  D_l(λ) ] Q_l(λ, α_0l, β_0l)
                     = [ M_l(λ)  D_l(λ) ] [ a_l(λ)  b_l(λ) ; β_0l(λ)  α_0l(λ) ] ,

which results in

[ M_l(λ)  D_l(λ) ] = [ β̄_0l(λ)  ᾱ_0l(λ) ] Q_l^{−1}(λ, α_0l, β_0l)
                   = [ β̄_0l(λ)  ᾱ_0l(λ) ] Q_r(λ, α_0l, β_0l) ,      (4.33)

so that

D_l(λ) = β̄_0l(λ)b_r(λ) + ᾱ_0l(λ)a_r(λ) ,
M_l(λ) = β̄_0l(λ)α_0r(λ) + ᾱ_0l(λ)β_0r(λ) .
from where we derive

[ D_r(λ) ; M_r(λ) ] = Q_l(λ, α_0l, β_0l) [ ᾱ_0r(λ) ; β̄_0r(λ) ]

or

D_r(λ) = a_l(λ)ᾱ_0r(λ) − b_l(λ)β̄_0r(λ) ,
M_r(λ) = β_0l(λ)ᾱ_0r(λ) − α_0l(λ)β̄_0r(λ) .
5.
Theorem 4.15. Let the left and right irreducible models (a_l(λ), b_l(λ)) and [a_r(λ), b_r(λ)] of an object be given, which satisfy Condition (4.34), and, moreover, an arbitrary left basic controller (α_0l(λ), β_0l(λ)). Then a necessary and sufficient condition for the existence of a right basic controller [α_0r(λ), β_0r(λ)] that is dual to the controller (α_0l(λ), β_0l(λ)) is that the pair (α_0l(λ), β_0l(λ)) is a solution of the Diophantine equation

α_0l(λ)a_r(λ) − β_0l(λ)b_r(λ) = I_m .

Proof. The necessity follows from the Bezout identity (4.31). To prove the sufficiency, we notice that, due to the irreducibility of the pair (a_l(λ), b_l(λ)), there exists a pair [ᾱ_0r(λ), β̄_0r(λ)] that fulfils the relation

a_l(λ)ᾱ_0r(λ) − b_l(λ)β̄_0r(λ) = I_n .                              (4.35)
where

α_0r(λ) = ᾱ_0r(λ) + b_r(λ)[β_0l(λ)ᾱ_0r(λ) − α_0l(λ)β̄_0r(λ)] ,
β_0r(λ) = β̄_0r(λ) + a_r(λ)[β_0l(λ)ᾱ_0r(λ) − α_0l(λ)β̄_0r(λ)] .

It arises from (4.36) that the last pair is a right basic controller, which is dual to the left basic controller (α_0l(λ), β_0l(λ)).
Remark 4.16. In analogy, it can be shown that for a right basic controller [α_0r(λ), β_0r(λ)], there exists a dual left basic controller (ᾱ_0l(λ), β̄_0l(λ)) if and only if the relation

ᾱ_0l(λ)a_r(λ) − β̄_0l(λ)b_r(λ) = I_m

is fulfilled; the resulting pair then defines a left basic controller that is dual to the controller [α_0r(λ), β_0r(λ)].
6. The next theorem supplies a parametrisation of the set of all pairs of dual basic controllers.

Theorem 4.17. Suppose two dual basic controllers (α_0l(λ), β_0l(λ)) and [α_0r(λ), β_0r(λ)]. Then the set of all pairs of dual basic controllers (ᾱ_0l(λ), β̄_0l(λ)), [ᾱ_0r(λ), β̄_0r(λ)] is determined by the relations

ᾱ_0l(λ) = α_0l(λ) − M(λ)b_l(λ) ,   β̄_0l(λ) = β_0l(λ) − M(λ)a_l(λ) ,
ᾱ_0r(λ) = α_0r(λ) − b_r(λ)M(λ) ,   β̄_0r(λ) = β_0r(λ) − a_r(λ)M(λ) ,   (4.39)
For the duality of the controllers (ᾱ_0l(λ), β̄_0l(λ)) and [ᾱ_0r(λ), β̄_0r(λ)], it is necessary and sufficient that the matrices (4.41) satisfy Relation (4.29). But from (4.41), owing to the duality of the controllers (α_0l(λ), β_0l(λ)) and [α_0r(λ), β_0r(λ)], we get

Q_l(λ, ᾱ_0l, β̄_0l)Q_r(λ, ᾱ_0r, β̄_0r) = [ D_r(λ)  O_{nm} ; M_l(λ)D_r(λ) − D_l(λ)M_r(λ)  D_l(λ) ]

D_r(λ) = I_n ,   D_l(λ) = I_m ,   M_l(λ) = M_r(λ) = M(λ) .
Remark 4.19. Theorems 4.15 and 4.17 indicate that the pairs of left and right process models used for building the dual basic controllers may be chosen arbitrarily, as long as Condition (4.34) holds. If the pairs (a_l(λ), b_l(λ)), [a_r(λ), b_r(λ)] satisfy Condition (4.34), and the n × n polynomial matrix p(λ) and the m × m polynomial matrix q(λ) are unimodular, then the pairs (p(λ)a_l(λ), p(λ)b_l(λ)), [a_r(λ)q(λ), b_r(λ)q(λ)] fulfil this condition as well. Therefore, we can achieve, for instance, that in (4.34) the matrix a_l(λ) is row reduced and the matrix a_r(λ) is column reduced.
Remark 4.20. From Theorems 4.15 and 4.17, it follows that any solution [α_0r(λ), β_0r(λ)] of the Diophantine Equation (4.37) can be used as a first right basic controller. Then the corresponding dual left basic controller is found by Formula (4.38). After that, the complete set of all pairs of dual basic controllers is constructed by Relations (4.40).
Theorem 4.21. Let the process (a_l(λ), b_l(λ)) be irreducible. Then Equation (4.42) is solvable for any polynomial d(λ). Thereby, if (α_0l(λ), β_0l(λ)) is a certain basic controller for the process (a_l(λ), b_l(λ)), then the set of all controllers (α_l(λ), β_l(λ)) satisfying (4.42) can be represented in the form

α_l(λ) = D_l(λ)α_0l(λ) − M_l(λ)b_l(λ) ,
β_l(λ) = D_l(λ)β_0l(λ) − M_l(λ)a_l(λ) ,                             (4.44)

where det D_l(λ) ∼ d(λ) is valid. Besides, the pair (α_l(λ), β_l(λ)) is irreducible if and only if the pair (D_l(λ), M_l(λ)) is irreducible.
Proof. Denote the set of solutions of Equation (4.42) by N_0, and the set of pairs (4.44) by N_p. Let (α_0l(λ), β_0l(λ)) be a certain basic controller. Then for the matrices

Q_l(λ, α_0l, β_0l) = [ a_l(λ)  b_l(λ) ; β_0l(λ)  α_0l(λ) ] ,
Q_l^{−1}(λ, α_0l, β_0l) = Q_r(λ, α_0r, β_0r) = [ α_0r(λ)  b_r(λ) ; β_0r(λ)  a_r(λ) ] ,   (4.45)

the relation

Q_l(λ, α_0l, β_0l)Q_r(λ, α_0r, β_0r) = I_{n+m}

holds. Let (α_l(λ), β_l(λ)) be a controller satisfying Equation (4.42). Then, using (4.34), (4.43), (4.45) and the Bezout identity (4.31), we get

Q_l(λ, α_l, β_l)Q_r(λ, α_0r, β_0r) = [ a_l(λ)  b_l(λ) ; β_l(λ)  α_l(λ) ] [ α_0r(λ)  b_r(λ) ; β_0r(λ)  a_r(λ) ]
                                   = [ I_n  O_{nm} ; M_l(λ)  D_l(λ) ]   (4.46)

with
with
4.5 Eigenvalue Assignment for Polynomial Pairs 167
D_l(λ) = β_l(λ)b_r(λ) + α_l(λ)a_r(λ) ,
M_l(λ) = β_l(λ)α_0r(λ) + α_l(λ)β_0r(λ) .                            (4.47)
Q_l(λ, α_l, β_l) = N_l(λ)Q_l(λ, α_0l, β_0l) ,                       (4.48)

where

N_l(λ) = [ I_n  O_{nm} ; M_l(λ)  D_l(λ) ] ,                         (4.49)

from where we read (4.44). Calculating the determinant on both sides of (4.48) shows that

det Q_l(λ, α_l, β_l) ∼ det D_l(λ) ∼ d(λ) .

Thus N_0 ⊂ N_p was proven. By reversing the conclusions, we deduce as in Theorem 4.1 that also N_p ⊂ N_0 is true. Therefore, the sets N_0 and N_p coincide.
Notice that Formulae (4.44) may be written in the shape

[ β_l(λ)  α_l(λ) ] = [ M_l(λ)  D_l(λ) ] Q_l(λ, α_0l, β_0l)
                   = [ M_l(λ)  D_l(λ) ] [ a_l(λ)  b_l(λ) ; β_0l(λ)  α_0l(λ) ] .

Since the matrix Q_l(λ, α_0l, β_0l) is unimodular, the matrices [ β_l(λ)  α_l(λ) ] and [ M_l(λ)  D_l(λ) ] are right-equivalent, and that is why the pair (α_l(λ), β_l(λ)) is irreducible if and only if the pair (D_l(λ), M_l(λ)) is irreducible.
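A self-consistent numerical sketch of Theorem 4.21 (hedged: the block layout Q_l = [[a, b], [β, α]] and the signs in (4.44) are assumed here, since they are ambiguous in the source, and the data are illustrative):

```python
import sympy as sp

lam = sp.symbols('lambda')
a = sp.Matrix([[lam, 1], [0, lam]])              # left model a_l (n = 2)
b = sp.Matrix([[0], [1]])                        # b_l (m = 1)
# A basic controller: Q0 = [[a, b], [beta0, alpha0]] is unimodular
alpha0 = sp.Matrix([[0]])
beta0 = sp.Matrix([[1, 0]])
Q0 = sp.Matrix([[a, b], [beta0, alpha0]])
assert Q0.det().is_constant()
# Parametrisation (4.44): alpha = D*alpha0 - M*b, beta = D*beta0 - M*a
d = lam**2 + 1                                   # prescribed characteristic polynomial
D = sp.Matrix([[d]])
M = sp.Matrix([[lam, 1]])                        # arbitrary polynomial matrix
alpha = D * alpha0 - M * b
beta = D * beta0 - M * a
Q = sp.Matrix([[a, b], [beta, alpha]])
# The closed-loop characteristic polynomial equals d(lambda) up to a constant
assert sp.simplify(Q.det() / d).is_constant()
```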
2.
Example 4.22. For a prescribed polynomial d(λ), the solution set of the eigenvalue assignment problem for the process (4.24) in Example 4.12 has the form

α_l(λ) = [ d_11(λ)  d_12(λ) ; d_21(λ)  d_22(λ) ] [ (0.5λ+1)  0 ; 0  1 ] − [ m_11(λ)  m_12(λ) ; m_21(λ)  m_22(λ) ] [ 0  λ ; λ  0 ] ,

β_l(λ) = [ d_11(λ)  d_12(λ) ; d_21(λ)  d_22(λ) ] [ 0.5  0 ; 0  0 ] − [ m_11(λ)  m_12(λ) ; m_21(λ)  m_22(λ) ] [ λ−1  λ+1 ; 0  λ−1 ] .

Here the m_ik(λ) are arbitrary polynomials, and the d_ik(λ) are arbitrary polynomials bound by the condition
Example 4.23. The set of solutions of Equation (4.42) for the process (4.25)
in Example 4.13 has the form
l () = kd()d3 () + m1 ()} ,
l () = 0.5kd() 1 1 m1 () m2 () 1 + 1 ,
3. Now, consider the question of how the solution of Equation (4.42) looks when the process (a_l(λ), b_l(λ)) is reducible. In this case, with respect to the results in Section 1.12, there exists a latent square n × n polynomial matrix q(λ), such that
is true with an irreducible pair (a_l1(λ), b_l1(λ)). The solvability conditions for Equation (4.42) in case (4.50) are stated in the following theorem.
Theorem 4.24. Let (4.50) be valid and det q(λ) = ψ(λ). Then a necessary and sufficient condition for the solvability of Equation (4.42) is that the polynomial d(λ) is divisible by ψ(λ). Thus, if (ᾱ_0l(λ), β̄_0l(λ)) is a certain basic controller for the process (a_l1(λ), b_l1(λ)), then the set of all controllers satisfying Equation (4.42) is bound by the relations

α_l(λ) = D̄_l(λ)ᾱ_0l(λ) − M̄_l(λ)b_l1(λ) ,
β_l(λ) = D̄_l(λ)β̄_0l(λ) − M̄_l(λ)a_l1(λ) ,                          (4.51)

d̄(λ) = d(λ)/ψ(λ) .                                                 (4.52)
Proof. Let (4.50) be true. Then (4.42) can be presented in the shape

det { [ q(λ)  O_{nm} ; O_{mn}  I_m ] Q̄_l(λ, α_l, β_l) } ∼ d(λ) ,   (4.53)

where

Q̄_l(λ, α_l, β_l) = [ a_l1(λ)  b_l1(λ) ; β_l(λ)  α_l(λ) ] .

Calculating the determinants, we find

ψ(λ) det Q̄_l(λ, α_l, β_l) ∼ d(λ) ,
4.6 Eigenvalue Assignment by Transfer Matrices 169
i.e. for the solvability of Equation (4.53), it is necessary that the polynomial d(λ) is divisible by ψ(λ). If this condition is ensured and (4.52) is used, then Equation (4.53) leads to

det Q̄_l(λ, α_l, β_l) ∼ d̄(λ) .

Since the pair (a_l1(λ), b_l1(λ)) is irreducible, Equation (4.53) is always solvable thanks to Theorem 4.17, and its solution has the shape (4.51).
4. Let (a(λ), b(λ)) be an irreducible process and (α_l(λ), β_l(λ)) a controller such that det Q_l(λ, α_l, β_l) ∼ d(λ) ≢ 0 holds. Furthermore, let (α_0l(λ), β_0l(λ)) be a certain basic controller. Then, owing to Theorem 4.17, there exist m × m and m × n polynomial matrices D_l(λ) and M_l(λ) such that

α_l(λ) = D_l(λ)α_0l(λ) − M_l(λ)b_l(λ) ,
β_l(λ) = D_l(λ)β_0l(λ) − M_l(λ)a_l(λ) ,                             (4.54)

where det D_l(λ) ∼ d(λ). Relations (4.54) are called the basic representation of the controller (α_l(λ), β_l(λ)) with respect to the basis (α_0l(λ), β_0l(λ)).
Theorem 4.25. The basic representation (4.54) is unique in the sense that from the validity of (4.54) and the relation
which is equivalent to

[ M_l(λ) − M_l1(λ)   D_l(λ) − D_l1(λ) ] Q_l(λ, α_0l, β_0l) = O_{m,m+n} .
w_c(λ) = α_l^{−1}(λ)β_l(λ)                                          (4.56)

might be included into our considerations. Its standard form (2.21) can be written as

w_c(λ) = M_c(λ)/d_c(λ) ,                                            (4.57)

for which Relation (4.56) defines a certain LMFD. Conversely, if the transfer function of the controller is given in the standard form (4.57), then various LMFDs (4.56) and the corresponding characteristic matrices

Q_l(λ, α_l, β_l) = [ a_l(λ)  b_l(λ) ; β_l(λ)  α_l(λ) ]              (4.58)

can be investigated. Besides, every LMFD (4.56) is uniquely related to a characteristic polynomial Δ(λ) = det Q_l(λ, α_l, β_l).
In what follows, we will say that the transfer matrix w_c(λ) is a solution of the eigenvalue assignment for the process (a_l(λ), b_l(λ)) if it allows an LMFD (4.56) such that the corresponding pair (α_l(λ), β_l(λ)) satisfies Equation (4.42).

2. The set of transfer matrices (4.57) that supply the solution of the eigenvalue assignment is generally characterised by the next theorem.
Theorem 4.26. Let the pair (a_l(λ), b_l(λ)) be irreducible and (α_0l(λ), β_0l(λ)) be an appropriate left basic controller. Then, for the transfer matrix (4.56) to be a solution of Equation (4.42), it is necessary and sufficient that it allows a representation of the form

w_c(λ) = [α_0l(λ) − Φ(λ)b_l(λ)]^{−1} [β_0l(λ) − Φ(λ)a_l(λ)] ,       (4.59)

where Φ(λ) is a broken rational m × n matrix for which there exists an LMFD

Φ(λ) = D_l^{−1}(λ)M_l(λ) ,                                          (4.60)

where det D_l(λ) ∼ d(λ) is true and the polynomial matrix M_l(λ) is arbitrary.
Proof. Sufficiency: Suppose the LMFD (4.60). Then from (4.59) we get

w_c(λ) = [D_l(λ)α_0l(λ) − M_l(λ)b_l(λ)]^{−1} [D_l(λ)β_0l(λ) − M_l(λ)a_l(λ)] .   (4.61)

Thus, the set of equations

α_l(λ) = D_l(λ)α_0l(λ) − M_l(λ)b_l(λ) ,
β_l(λ) = D_l(λ)β_0l(λ) − M_l(λ)a_l(λ)                               (4.62)

describes a controller satisfying Relation (4.42).
Necessity: If (4.56) and det Q_l(λ, α_l, β_l) ∼ d(λ) are true, then for the matrices α_l(λ) and β_l(λ), we can find a basic representation (4.54), and under the invertibility condition for the matrix α_l(λ), we obtain (4.59), so the proof is complete.
Corollary 4.27. From (4.59) we learn that the transfer matrices w_c(λ), defined as the solution set of Equation (4.42), depend on a matrix parameter, namely the fractional rational matrix Φ(λ).
3. Let the transfer function of the controller be given in the form (4.59). Then, under Condition (4.60), it can be represented in the form of the LMFD (4.56), where the matrices α_l(λ), β_l(λ) are determined by (4.62). For applications, the question of the irreducibility of the pair (4.62) is important.

Theorem 4.28. The pair (4.62) is irreducible exactly when the pair [D_l(λ), M_l(λ)] is irreducible, i.e. when the right side of (4.59) is an ILMFD.
4. Let the process (a_l(λ), b_l(λ)) and a certain fractional rational m × n matrix w_c(λ) be given, for which expression (4.56) defines a certain LMFD. Thus, if

det Q_l(λ, α_l, β_l) = det [ a_l(λ)  b_l(λ) ; β_l(λ)  α_l(λ) ] ∼ d(λ) ,

then, owing to Theorem 4.26, the matrix w_c(λ) can be represented in the form (4.59), (4.61), where (α_0l(λ), β_0l(λ)) is a certain basic controller. Under these circumstances, the notation (4.59) of the matrix w_c(λ) is called its basic representation with respect to the basis (α_0l(λ), β_0l(λ)).
Theorem 4.29. For a fixed basic controller (α_0l(λ), β_0l(λ)), the basic representation (4.59) is unique in the sense that the validity of (4.59) and

Proof. Without loss of generality, we suppose that the right side of (4.60) is an ILMFD. Then, owing to Theorem 4.26, the right side of (4.61) is an ILMFD of the matrix w_c(λ). In addition, let us have the LMFD

Φ_1(λ) = D_1^{−1}(λ)M_1(λ) .

Then from (4.63) for the matrix w_c(λ), we obtain the LMFD
This relation and (4.61) define two different LMFDs of the matrix w_c(λ). By supposition, the LMFD (4.61) is irreducible, so with respect to Statement 2.3 on page 64, we obtain
Thus we derive
Theorem 4.30. Let the process $(a_l(\lambda), b_l(\lambda))$ be irreducible, and let the controller $(\alpha_l(\lambda), \beta_l(\lambda))$ have the basic representation (4.54). Then the matrices
$$Q_l(\lambda, \alpha_l, \beta_l) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \alpha_l(\lambda) & \beta_l(\lambda) \end{bmatrix} , \qquad S(\lambda) = \begin{bmatrix} I_n & O_{nm} \\ O_{mn} & D_l(\lambda) \end{bmatrix} \qquad (4.66)$$
are equivalent, and this fact does not depend on the matrix $M_l(\lambda)$.

Proof. Notice
$$\begin{bmatrix} I_n & O_{nm} \\ M_l(\lambda) & D_l(\lambda) \end{bmatrix} = \begin{bmatrix} I_n & O_{nm} \\ M_l(\lambda) & I_m \end{bmatrix} \begin{bmatrix} I_n & O_{nm} \\ O_{mn} & D_l(\lambda) \end{bmatrix} .$$
The first factor on the right side is a unimodular matrix and, therefore, the matrices (4.66) are equivalent.
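The factorisation used in this proof can be checked numerically. The following minimal sketch (plain Python, scalar case $n = m = 1$, sample values chosen arbitrarily) multiplies the two right-side factors and compares with the left side:

```python
# Verify [I, 0; M, D] == [I, 0; M, I] * [I, 0; 0, D] for scalar samples.
def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for M, D in [(3, 5), (-2, 7), (0, 1)]:
    lhs = [[1, 0], [M, D]]
    rhs = matmul2([[1, 0], [M, 1]], [[1, 0], [0, D]])
    assert lhs == rhs
```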
4.7 Structural Eigenvalue Assignment for Polynomial Pairs
$$a_1(\lambda) = a_2(\lambda) = \ldots = a_n(\lambda) = 1 \qquad (4.67)$$
and furthermore
$$a_{n+i}(\lambda) = b_i(\lambda), \quad (i = 1, \ldots, m) . \qquad (4.68)$$
Corollary 4.32. Theorem 4.31 supplies a constructive procedure for the design of closed systems with a prescribed sequence of invariant polynomials of the characteristic matrix. Indeed, let a sequence of monic polynomials $b_1(\lambda), \ldots, b_m(\lambda)$ with
$$b_1(\lambda) \cdots b_m(\lambda) \doteq d(\lambda)$$
be given, where for all $i = 2, \ldots, m$, the polynomial $b_i(\lambda)$ is divisible by $b_{i-1}(\lambda)$. Then we take
$$D_l(\lambda) = p(\lambda)\, \mathrm{diag}\{b_1(\lambda), \ldots, b_m(\lambda)\}\, q(\lambda) ,$$
where $p(\lambda)$, $q(\lambda)$ are unimodular matrices. After that, independently of the selection of $M_l(\lambda)$ in (4.54), the sequence of the last $m$ invariant polynomials of the matrix $Q_l(\lambda, \alpha_l, \beta_l)$ coincides with the sequence $b_1(\lambda), \ldots, b_m(\lambda)$.
Corollary 4.33. If the process $(a_l(\lambda), b_l(\lambda))$ is irreducible, then there exists a set of controllers for which the matrix $Q_l(\lambda, \alpha_l, \beta_l)$ becomes simple. This happens exactly when the matrix $D_l(\lambda)$ is simple, i.e., it allows the representation
$$D_l(\lambda) = p(\lambda)\, \mathrm{diag}\{1, \ldots, 1, d(\lambda)\}\, q(\lambda)$$
with unimodular matrices $p(\lambda)$, $q(\lambda)$.
Corollary 4.34. Let irreducible left and right models $(a_l(\lambda), b_l(\lambda))$ and $[a_r(\lambda), b_r(\lambda)]$ of the process be given. Then the sequence of invariant polynomials $a_{n+1}(\lambda), \ldots, a_{n+m}(\lambda)$ of the characteristic matrix $Q_l(\lambda, \alpha_l, \beta_l)$ coincides with the sequence of invariant polynomials of the matrix
$$D_l(\lambda) = \beta_l(\lambda) b_r(\lambda) + \alpha_l(\lambda) a_r(\lambda) .$$
4.8 Eigenvalue and Eigenstructure Assignment for PMD Processes

Theorem 4.35. Let the PMD (4.4) be non-singular and minimal, and
where
$$A(\lambda) = \begin{bmatrix} a(\lambda) & O_{pn} \\ -c(\lambda) & I_n \end{bmatrix} , \qquad B(\lambda) = \begin{bmatrix} b(\lambda) \\ O_{nm} \end{bmatrix} , \qquad (4.73)$$
$$C(\lambda) = \begin{bmatrix} O_{mp} & \alpha(\lambda) \end{bmatrix} , \qquad D(\lambda) = \beta(\lambda) \qquad (4.74)$$
is applicable. Observing
$$A^{-1}(\lambda) = \begin{bmatrix} a^{-1}(\lambda) & O_{pn} \\ c(\lambda) a^{-1}(\lambda) & I_n \end{bmatrix}$$
Lemma 4.37. Let the non-singular PMD (4.4) and its corresponding transfer matrix (4.69) be given, for which Relation (4.70) defines an ILMFD. Consider the matrix
$$Q_l(\lambda, \alpha, \beta) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \alpha(\lambda) & \beta(\lambda) \end{bmatrix} , \qquad (4.77)$$
where the matrices $\alpha(\lambda)$ and $\beta(\lambda)$ are defined as in (4.5). If under this condition, the PMD (4.4) is minimal, then
$$c(\lambda) a^{-1}(\lambda) = a_1^{-1}(\lambda) c_1(\lambda) .$$
$$w(\lambda) = a_1^{-1}(\lambda) \left[ c_1(\lambda) b(\lambda) \right]$$
defines an ILMFD of the matrix $w(\lambda)$. This expression and (4.70) define at the same time ILMFDs of the matrix $w(\lambda)$, so we have
Using this and (4.70), from (4.79) we obtain the statement (4.78).

Proof of Theorem 4.35. The minimality of the PMD (4.4) and Lemma 4.37 imply that the sets of solutions of (4.6) and of the equation
$$\det Q_l(\lambda, \alpha, \beta) \doteq d(\lambda)$$
coincide.
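For a scalar (SISO) pair, the eigenvalue assignment equation reduces to a polynomial Diophantine equation. The sketch below illustrates this with hypothetical data and the sign convention $\det Q = a\beta - b\alpha$, which may differ from the book's block conventions:

```python
# Scalar instance of the eigenvalue assignment det Q(q) ≐ d(q): for a SISO
# pair (a, b), solve a*beta - b*alpha = d for polynomials (alpha, beta).
# Polynomials are coefficient lists, lowest power first.
from fractions import Fraction as F

def pmul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def psub(p, q):
    n = max(len(p), len(q))
    r = [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
         for i in range(n)]
    while r and r[-1] == 0:
        r.pop()
    return r

a = [F(0), F(0), F(1)]          # a(q) = q^2  (hypothetical process)
b = [F(1)]                      # b(q) = 1
d = [F(1, 4), F(-1), F(1)]      # target d(q) = (q - 1/2)^2
beta = [F(1)]                   # choose beta(q) = 1 ...
alpha = psub(pmul(a, beta), d)  # ... then alpha(q) = a(q) - d(q) = q - 1/4
assert psub(pmul(a, beta), pmul(b, alpha)) == d   # assignment verified
```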
2. The next theorem supplies the solution of the eigenvalue assignment for the case when the PMD (4.4) is not minimal.

Theorem 4.38. Let the non-singular PMD (4.4) be not minimal, and let Relation (4.70) describe an ILMFD of the transfer matrix $w(\lambda)$. Then the relation
$$\psi(\lambda) = \frac{\det a(\lambda)}{\det a_l(\lambda)} \qquad (4.80)$$
turns out to be a polynomial. Thereby, Equation (4.6) is solvable exactly when
$$d(\lambda) = \psi(\lambda) d_1(\lambda) , \qquad (4.81)$$
where $d_1(\lambda)$ is any polynomial. If (4.81) is true, then the set of controllers that are solutions of Equation (4.5) coincides with the set of solutions of the equation
$$\det \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \alpha(\lambda) & \beta(\lambda) \end{bmatrix} \doteq d_1(\lambda) . \qquad (4.82)$$
This solution set can be constructed with the help of Theorem 4.17.

Proof. Owing to Lemma 2.48, the ratio (4.80) is a polynomial. With the help of (4.71) and (4.80), we gain
From (4.83), it is immediately seen that Equation (4.6) requires Condition (4.81) to be fulfilled for its solvability. Conversely, if (4.81) is fulfilled, Equation (4.83) leads to Equation (4.82).
Theorem 4.39. Let the non-singular PMD (4.4) be minimal, and let Relation (4.70) define an ILMFD of the transfer matrix (4.69). Furthermore, let $(\alpha^0(\lambda), \beta^0(\lambda))$ be a basic controller for the pair $(a_l(\lambda), b_l(\lambda))$, and let the set of pairs
$$\alpha(\lambda) = N(\lambda) \alpha^0(\lambda) - M(\lambda) b_l(\lambda) , \qquad \beta(\lambda) = N(\lambda) \beta^0(\lambda) + M(\lambda) a_l(\lambda) \qquad (4.84)$$
determine the set of solutions of the eigenvalue assignment (4.6). Moreover, let $q_1(\lambda), \ldots, q_{p+n+m}(\lambda)$ be the sequence of invariant polynomials of the matrix $Q(\lambda, \alpha, \beta)$. Then
$$q_1(\lambda) = q_2(\lambda) = \ldots = q_{p+n}(\lambda) = 1 , \qquad q_{p+n+i}(\lambda) = \nu_i(\lambda), \quad (i = 1, \ldots, m) , \qquad (4.85)$$
where
$$C_0(\lambda) = \begin{bmatrix} O_{mp} & \alpha^0(\lambda) \end{bmatrix} , \qquad D_0(\lambda) = \beta^0(\lambda) , \qquad (4.86)$$
and due to Theorem 1.41, the pair $(A(\lambda), B(\lambda))$ is irreducible.
b) Equation (4.6) is written in the form
$$\det \begin{bmatrix} A(\lambda) & B(\lambda) \\ C(\lambda) & D(\lambda) \end{bmatrix} \doteq d(\lambda) . \qquad (4.87)$$
Since the pair $(A(\lambda), B(\lambda))$ is irreducible, it follows from Theorem 4.17 that for any polynomial $d(\lambda)$, Equation (4.87) is solvable, and the set of solutions can be presented in the shape
$$D(\lambda) = N_1(\lambda) D_0(\lambda) + M_1(\lambda) \begin{bmatrix} b(\lambda) \\ O_{nm} \end{bmatrix} , \qquad C(\lambda) = N_1(\lambda) C_0(\lambda) - M_1(\lambda) \begin{bmatrix} a(\lambda) & O_{pn} \\ -c(\lambda) & I_n \end{bmatrix} , \qquad \det N_1(\lambda) \doteq d(\lambda) . \qquad (4.88)$$
c) On the other side, due to Theorem 4.38, the set of pairs $(\alpha(\lambda), \beta(\lambda))$ satisfying Equation (4.87) coincides with the set of solutions of the equation
$$\det \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \alpha(\lambda) & \beta(\lambda) \end{bmatrix} \doteq d(\lambda)$$
Assume in (4.88)
$$M_1(\lambda) = \begin{bmatrix} \bar M_1(\lambda) & \bar M_2(\lambda) \end{bmatrix} ,$$
where $\bar M_1(\lambda)$ has $p$ columns and $\bar M_2(\lambda)$ has $n$ columns. Then, with the help of (4.74), (4.86), Relation (4.88) can be presented in the shape
$$\beta(\lambda) = N_1(\lambda) \beta^0(\lambda) + \bar M_1(\lambda) b(\lambda) ,$$
$$\begin{bmatrix} O_{mp} & \alpha(\lambda) \end{bmatrix} = N_1(\lambda) \begin{bmatrix} O_{mp} & \alpha^0(\lambda) \end{bmatrix} - \begin{bmatrix} \bar M_1(\lambda) a(\lambda) - \bar M_2(\lambda) c(\lambda) & \bar M_2(\lambda) \end{bmatrix} .$$
a) The PMD $\Sigma_1(\lambda) = (a_2(\lambda), b_1(\lambda), c_1(\lambda))$ is equivalent to the PMD (4.4) and minimal.
b) The relation
$$\chi(\lambda) = \frac{\det a(\lambda)}{\det a_2(\lambda)} \qquad (4.94)$$
turns out to be a polynomial with
$$\chi(\lambda)\, \psi(\lambda) = \frac{\det a(\lambda)}{\det a_l(\lambda)} , \qquad (4.95)$$
where $\psi(\lambda)$ is the polynomial (4.80).
c) The relation
$$Q(\lambda, \alpha, \beta) = G_l(\lambda)\, Q_1(\lambda, \alpha, \beta)\, G_r(\lambda) \qquad (4.96)$$
is true with
$$G_l(\lambda) = \mathrm{diag}\{d_1(\lambda), 1, \ldots, 1\} , \qquad G_r(\lambda) = \mathrm{diag}\{d_2(\lambda), 1, \ldots, 1\} , \qquad (4.97)$$
and the matrix $Q_1(\lambda, \alpha, \beta)$ has the shape
$$Q_1(\lambda, \alpha, \beta) = \begin{bmatrix} a_2(\lambda) & O_{pn} & b_1(\lambda) \\ -c_1(\lambda) & I_n & O_{nm} \\ O_{mp} & \alpha(\lambda) & \beta(\lambda) \end{bmatrix} . \qquad (4.98)$$
d) The formula
$$\chi(\lambda) \doteq \det d_1(\lambda)\, \det d_2(\lambda) \qquad (4.99)$$
is valid.
e) Let $q_1(\lambda), \ldots, q_{p+n+m}(\lambda)$ be the sequence of invariant polynomials of the matrix $Q_1(\lambda, \alpha, \beta)$ and $\nu_1(\lambda), \ldots, \nu_m(\lambda)$ be the sequence of invariant polynomials of the matrix $N(\lambda)$ in the representation (4.91), where instead of $a_l(\lambda)$, $b(\lambda)$, $c(\lambda)$ we have to write $a_2(\lambda)$, $b_1(\lambda)$, $c_1(\lambda)$. Then
$$q_1(\lambda) = q_2(\lambda) = \ldots = q_{p+n}(\lambda) = 1 , \qquad q_{p+n+i}(\lambda) = \nu_i(\lambda), \quad (i = 1, \ldots, m) . \qquad (4.100)$$
Proof. a) Using (4.92) and (4.93), we find
$$w(\lambda) = c(\lambda) a^{-1}(\lambda) b(\lambda) = c_1(\lambda) a_2^{-1}(\lambda) b_1(\lambda) = w_1(\lambda) ,$$
where $w_1(\lambda)$ is the transfer function of the PMD $\Sigma_1(\lambda)$; this means that the PMDs $\Sigma(\lambda)$ and $\Sigma_1(\lambda)$ are equivalent. It remains to demonstrate that the PMD $\Sigma_1(\lambda)$ is minimal. Since the pair $[a_2(\lambda), c_1(\lambda)]$ is irreducible by construction, it is sufficient to show that the pair $(a_2(\lambda), b_1(\lambda))$ is irreducible. By construction, the pair
$$(a_1(\lambda), b_1(\lambda)) = (a_2(\lambda) d_2(\lambda), b_1(\lambda))$$
is irreducible. Hence, due to Lemma 2.11, the pair $(a_2(\lambda), b_1(\lambda))$ is also irreducible.
d) From (4.96), (4.97), we obtain
$$\det Q(\lambda, \alpha, \beta) = \det d_1(\lambda)\, \det d_2(\lambda)\, \det Q_1(\lambda, \alpha, \beta) .$$
Comparing the last two equations proves Relation (4.99).
e) Since the PMD $\Sigma_1(\lambda)$ is minimal and Relation (4.84) holds, Formula (4.100) follows from Theorem 4.39.
Proof. Let, for instance, the matrix $d_1(\lambda)$ be not simple. Then we learned from the considerations in Section 1.11 that there exists an eigenvalue $\tilde\lambda$ with $\mathrm{def}\, G_l(\tilde\lambda) > 1$. Hence, considering (4.96), (4.97), we get $\mathrm{def}\, Q(\tilde\lambda, \alpha, \beta) > 1$, i.e., the matrix $Q(\lambda, \alpha, \beta)$ is not simple. If $d_2(\lambda)$ is not simple, we conclude analogously.
and together with the results of Section 1.11, this yields that the matrix $Q(\lambda, \alpha, \beta)$ is simple.
5
Fundamentals for Control of Causal
Discrete-time LTI Processes
(Fig. 5.1 shows a linear block $L$ mapping the input sequence $\{u\}$ to the output sequence $\{y\}$.)
$$\{u\} = \begin{bmatrix} \{u\}_1 \\ \vdots \\ \{u\}_m \end{bmatrix} , \qquad \{y\} = \begin{bmatrix} \{y\}_1 \\ \vdots \\ \{y\}_n \end{bmatrix} , \qquad (5.1)$$
where
$$u_s = \begin{bmatrix} u_{s,1} \\ \vdots \\ u_{s,m} \end{bmatrix} , \qquad y_s = \begin{bmatrix} y_{s,1} \\ \vdots \\ y_{s,n} \end{bmatrix} . \qquad (5.2)$$
Furthermore, in Fig. 5.1, the letter $L$ symbolises a certain system of linear equations that holds between the input and output sequences. If $L$ stands for a system with a finite number of linear difference equations with constant coefficients, then the corresponding process is called a finite-dimensional discrete-time LTI object. In this section, exclusively such objects will be considered, and they will be called LTI objects for short.
2. Compatible with the introduced concepts, the LTI object in Fig. 5.1 is assigned to a system of scalar difference equations
$$\sum_{p=1}^{n} a_{ip}^{(0)} y_{p,k+\rho} + \ldots + \sum_{p=1}^{n} a_{ip}^{(\rho)} y_{p,k} = \sum_{r=1}^{m} b_{ir}^{(0)} u_{r,k+s} + \ldots + \sum_{r=1}^{m} b_{ir}^{(s)} u_{r,k} \qquad (5.3)$$
$$(i = 1, \ldots, n; \; k = 0, 1, \ldots) ,$$
where the $a_{ip}^{(j)}$, $b_{ir}^{(j)}$ are constant real coefficients. Introducing into the considerations the constant matrices
$$\tilde a_j = \left[ a_{ip}^{(j)} \right] , \qquad \tilde b_j = \left[ b_{ir}^{(j)} \right]$$
and using the notation (5.1), the system of scalar Equations (5.3) can be written in form of the vector difference equation
$$\tilde a_0 y_{k+\rho} + \ldots + \tilde a_\rho y_k = \tilde b_0 u_{k+s} + \ldots + \tilde b_s u_k , \qquad (k = 0, 1, \ldots) , \qquad (5.4)$$
It is easily checked that Equations (5.4) and (5.6) are equivalent. This can be done by substituting the expressions (5.5) into (5.6) and comparing the corresponding components of the sequences on the left and right sides.
$$\tilde a(q) y_k = \tilde b(q) u_k , \qquad (5.8)$$
where
$$\tilde a(q) = \tilde a_0 q^\rho + \ldots + \tilde a_\rho , \qquad \tilde b(q) = \tilde b_0 q^s + \ldots + \tilde b_s . \qquad (5.9)$$

5.1 Finite-dimensional Discrete-time LTI Processes

$$\tilde a(q) \{y\} = \tilde b(q) \{u\} . \qquad (5.10)$$
If
$$\det \tilde a(q) \not\equiv 0 \qquad (5.11)$$
and, moreover,
$$\det \tilde a_0 \neq 0 \qquad (5.12)$$
is valid, then the LTI process is called normal. If, however, instead of (5.12),
$$\det \tilde a_0 = 0 \qquad (5.13)$$
is true, then the LTI process is named anomalous, [39]. For instance, descriptor processes can be modelled by anomalous systems [34].
$$\phi(q) \tilde a(q) y_k = \phi(q) \tilde b(q) u_k , \qquad (k = 0, 1, \ldots) \qquad (5.14)$$
and using (5.7), this can be written in a form analogous to (5.4). Equation (5.14) is said to be derived from the original Equation (5.4), and Equation (5.4) itself is called original.
186 5 Fundamentals for Control of Causal Discrete-time LTI Processes
Example 5.1. Consider the equations of the LTI process in the form (5.3)
$$\tilde a_0 y_{k+2} + \tilde a_1 y_{k+1} + \tilde a_2 y_k = \tilde b_0 u_{k+2} + \tilde b_1 u_{k+1} + \tilde b_2 u_k , \qquad (5.16)$$
where
$$\tilde a_0 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} , \quad \tilde a_1 = \begin{bmatrix} 0 & 0 \\ 1 & 1 \end{bmatrix} , \quad \tilde a_2 = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix} , \quad \tilde b_0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} , \quad \tilde b_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \quad \tilde b_2 = \begin{bmatrix} 0 \\ 2 \end{bmatrix} .$$
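Whether a process is normal or anomalous is decided by $\det \tilde a_0$ alone, cf. (5.12), (5.13). A minimal check for the leading matrix of Example 5.1:

```python
# Classify an LTI process as normal or anomalous via det(a0), cf. (5.12)/(5.13).
a0 = [[1, 0],
      [0, 0]]          # leading coefficient matrix of Example 5.1

det_a0 = a0[0][0] * a0[1][1] - a0[0][1] * a0[1][0]
kind = "normal" if det_a0 != 0 else "anomalous"
assert kind == "anomalous"   # Example 5.1 is an anomalous (descriptor-like) process
```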
6.
Lemma 5.2. For any matrix $\phi(q)$, all solutions of the original equation (5.8) are also solutions of the derived equation (5.14).

Remark 5.3. The inverse statement to Lemma 5.2 is in general not true. Indeed, let $\{v\}$ be a solution of the equation
$$\phi(q) v_k = 0_k .$$
Then any sequence $\{y\}$ with
$$\tilde a(q) y_k = \tilde b(q) u_k + v_k \qquad (5.19)$$
for all possible $v_k$ presents a solution of the derived Equation (5.14), but only for $v_k = 0_k$ is it a solution of the original equation. It is easy to show that Relation (5.19) contains all solutions of the derived equation.
7. Consider the important special case, when in (5.14) the matrix (q) is
unimodular. In this case, the transition from the original equation (5.8) to the
derived equation (5.14) means manipulating the system (5.3) by operations
of the following types:
a) Exchange the places of two equations.
b) Multiply an equation by a non-zero constant.
c) Add one equation to any other equation that was multiplied before by an
arbitrary polynomial f (q).
In what follows, Equations (5.8) and (5.14) are called equivalent by the uni-
modular matrix (q). The reasons for using this terminology arise from the
next lemma.
Lemma 5.4. The solution sets of the equivalent equations (5.8) and (5.14)
coincide.
Proof. Let Equations (5.8) and (5.14) be equivalent, and let $R$, $R_x$ be their solution sets. Lemma 5.2 implies $R \subseteq R_x$. On the other side, Equations (5.8) are gained from Equations (5.14) by multiplying them from the left by $\phi^{-1}(q)$. Then Lemma 5.2 also implies $R_x \subseteq R$, thus $R = R_x$.
$$\phi(q) \tilde a(q) = \bar a(q) , \qquad \phi(q) \tilde b(q) = \bar b(q) . \qquad (5.20)$$
$$\bar a(q) y_k = \bar b(q) u_k . \qquad (5.21)$$
From Section 1.6, it is known that under supposition (5.11), the matrix $\phi(q)$ can always be selected in such a way that the matrix $\bar a(q)$ becomes row reduced. In this case, Equation (5.21) is also said to be row reduced. Let $\bar a_1(q), \ldots, \bar a_n(q)$ be the rows of the matrix $\bar a(q)$. As before, denote
$$\alpha_i = \deg \bar a_i(q) , \qquad (i = 1, \ldots, n) .$$
If under these conditions, Equation (5.21) is row reduced, then independently of the concrete shape of the matrix $\phi(q)$, the quantities
$$l = \sum_{i=1}^{n} \alpha_i , \qquad \alpha_{\max} = \deg \bar a(q) = \max_{1 \le i \le n} \{\alpha_i\}$$
take their minimal values in the set of equations equivalent to the original equation (5.8).
Example 5.5. Consider the anomalous process
$$\begin{aligned} y_{1,k+4} + 2y_{1,k+2} + y_{1,k+1} + 2y_{2,k+2} + y_{2,k} &= u_{k+3} + 2u_{k+1} \\ y_{1,k+3} + y_{1,k+1} + y_{1,k} + 2y_{2,k+1} &= u_{k+2} + u_k . \end{aligned} \qquad (5.22)$$
In the present case, we have
$$\tilde a(q) = \begin{bmatrix} q^4 + 2q^2 + q & 2q^2 + 1 \\ q^3 + q + 1 & 2q \end{bmatrix} = \tilde a_0 q^4 + \tilde a_1 q^3 + \tilde a_2 q^2 + \tilde a_3 q + \tilde a_4 ,$$
$$\tilde b(q) = \begin{bmatrix} q^3 + 2q \\ q^2 + 1 \end{bmatrix} = \tilde b_1 q^3 + \tilde b_2 q^2 + \tilde b_3 q + \tilde b_4 ,$$
where
$$\tilde a_0 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} , \quad \tilde a_1 = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix} , \quad \tilde a_2 = \begin{bmatrix} 2 & 2 \\ 0 & 0 \end{bmatrix} , \quad \tilde a_3 = \begin{bmatrix} 1 & 0 \\ 1 & 2 \end{bmatrix} , \quad \tilde a_4 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} ,$$
$$\tilde b_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \quad \tilde b_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} , \quad \tilde b_3 = \begin{bmatrix} 2 \\ 0 \end{bmatrix} , \quad \tilde b_4 = \begin{bmatrix} 0 \\ 1 \end{bmatrix} .$$
Choose
$$\phi(q) = \begin{bmatrix} 1 & -q \\ -q & q^2 + 1 \end{bmatrix} .$$
So, we generate the derived matrices
$$\bar a(q) = \phi(q) \tilde a(q) = \begin{bmatrix} q^2 & 1 \\ q + 1 & q \end{bmatrix} , \qquad \bar b(q) = \phi(q) \tilde b(q) = \begin{bmatrix} q \\ 1 \end{bmatrix} ,$$
where the matrix $\bar a(q)$ is row reduced. Applying this, Equations (5.22) might be expressed equivalently by
$$\begin{aligned} y_{1,k+2} + y_{2,k} &= u_{k+1} \\ y_{1,k+1} + y_{1,k} + y_{2,k+1} &= u_k . \end{aligned}$$
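The row reduction of Example 5.5 can be verified mechanically by multiplying out $\phi(q)\tilde a(q)$ and $\phi(q)\tilde b(q)$. The following self-contained sketch represents polynomials as coefficient lists (lowest power first):

```python
# Verify the row reduction of Example 5.5: phi(q)*atilde(q) = abar(q).
def padd(p, q):
    r = [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
         for i in range(max(len(p), len(q)))]
    while r and r[-1] == 0:
        r.pop()
    return r

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1 if p and q else 0)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    while r and r[-1] == 0:
        r.pop()
    return r

def matmul(A, B):
    """Product of matrices whose entries are polynomials."""
    out = []
    for i in range(len(A)):
        row = []
        for j in range(len(B[0])):
            acc = []
            for t in range(len(B)):
                acc = padd(acc, pmul(A[i][t], B[t][j]))
            row.append(acc)
        out.append(row)
    return out

atilde = [[[0, 1, 2, 0, 1], [1, 0, 2]],      # q^4+2q^2+q   2q^2+1
          [[1, 1, 0, 1],    [0, 2]]]         # q^3+q+1      2q
btilde = [[[0, 2, 0, 1]],                    # q^3+2q
          [[1, 0, 1]]]                       # q^2+1
phi    = [[[1],     [0, -1]],                # 1        -q
          [[0, -1], [1, 0, 1]]]              # -q       q^2+1

abar = matmul(phi, atilde)
bbar = matmul(phi, btilde)
assert abar == [[[0, 0, 1], [1]], [[1, 1], [0, 1]]]   # [[q^2, 1], [q+1, q]]
assert bbar == [[[0, 1]], [[1]]]                      # [q, 1]^T
```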
5.2 Transfer Matrices and Causality of LTI Processes
Lemma 5.6. The transfer matrices of the original equation (5.8) and of the derived equation (5.21) coincide.

Proof. Suppose
$$\bar w(q) = \bar a^{-1}(q) \bar b(q) . \qquad (5.24)$$
Then applying (5.20), we get
$$\bar w(q) = \bar a^{-1}(q) \bar b(q) = \tilde a^{-1}(q) \phi^{-1}(q) \phi(q) \tilde b(q) = \tilde a^{-1}(q) \tilde b(q) = \tilde w(q) .$$
2. From the above, it emerges that any forward model (5.8) is uniquely assigned a transfer matrix. The reverse statement is obviously wrong. Therefore the question arises: how is the set of forward models possessing a given transfer matrix structured? The next theorem gives the answer.

Theorem 5.8. Let
$$\tilde w(q) = \tilde a_0^{-1}(q) \tilde b_0(q)$$
be an ILMFD. Then the set of all forward models of LTI processes possessing this transfer matrix is determined by the relations
$$\tilde a(q) = \phi(q) \tilde a_0(q) , \qquad \tilde b(q) = \phi(q) \tilde b_0(q) , \qquad (5.25)$$
Proof. The right side of Relation (5.24) presents a certain LMFD of the rational matrix $\tilde w(q)$. Hence, by the properties of LMFDs considered in Section 2.4, we conclude that the set of all pairs $(\tilde a(q), \tilde b(q))$ according to the transfer matrix $\tilde w(q)$ is determined by Relations (5.25).
Corollary 5.9. A forward model of the LTI process (5.8) is called controllable, if the pair $(\tilde a(q), \tilde b(q))$ is irreducible. Hence Theorem 5.8 can be formulated in the following way: Let the forward model defined by the pair $(\tilde a(q), \tilde b(q))$ be controllable. Then the set of all forward models with transfer function (5.23) coincides with the set of all derived forward models.
3. The LTI process (5.8) is called weakly causal or strictly causal, if its transfer matrix (5.23) is proper or strictly proper, respectively. From the content of Section 2.6, it emerges that the LTI process (5.8), (5.9) is causal, if there exists the finite limit
$$\lim_{q \to \infty} \tilde w(q) = \tilde w_0 . \qquad (5.26)$$
Besides, when $\tilde w_0 = O_{nm}$ holds, the process is strictly causal. When the limit (5.26) becomes infinite, the process is named non-causal.

Theorem 5.10. For the process (5.8), (5.9) to be causal, the condition
$$\rho \ge s \qquad (5.27)$$
must be valid.
$$\tilde w(q) = \frac{\tilde N(q)}{d(q)} .$$
Remark 5.12. The conditions of Theorem 5.10 are in general not sufficient, as is illustrated by the following example.
4. If Equation (5.8) is row reduced, the causality question for the processes (5.8) can be answered without constructing the transfer matrix.

Theorem 5.14. Let Equation (5.8) be row reduced, let $\alpha_i$ be the degree of the $i$-th row of the matrix $\bar a(q)$, and let $\beta_i$ be the degree of the $i$-th row of the matrix $\bar b(q)$. Then the following statements are true:
a) For the weak causality of the process, it is necessary and sufficient that the conditions
$$\alpha_i \ge \beta_i , \qquad (i = 1, \ldots, n) \qquad (5.29)$$
are true, where at least for one $1 \le i \le n$ in (5.29) the equality sign has to take place.
b) For the strict causality of the process, the fulfilment of the inequalities
$$\alpha_i > \beta_i , \qquad (i = 1, \ldots, n) \qquad (5.30)$$
is necessary and sufficient. If
$$\alpha_i < \beta_i$$
holds for at least one $i$, the process is non-causal.

5.3 Normal LTI Processes

$$\tilde a_0 y_{k+\rho} + \ldots + \tilde a_\rho y_k = \tilde b_0 u_{k+\rho} + \ldots + \tilde b_\rho u_k , \qquad (k = 0, 1, \ldots) \qquad (5.31)$$
Theorem 5.15. For the weak causality of the normal process (5.31), the fulfilment of
$$\tilde b_0 \neq O_{nm} \qquad (5.33)$$
is necessary and sufficient.
For the strict causality of the normal process (5.31), the fulfilment of
$$\tilde b_0 = O_{nm} \qquad (5.34)$$
is necessary and sufficient.

Proof. From (5.32), it follows that the matrix $\tilde a(q)$ for a normal process is row reduced, and we have
$$\alpha_1 = \alpha_2 = \ldots = \alpha_n = \rho .$$
If (5.33) takes place, then in Condition (5.29) the equality sign stands for at least one $1 \le i \le n$. Therefore, as a consequence of Theorem 5.14, the process is weakly causal. If, however, (5.34) takes place, then Condition (5.30) is true and the process is strictly causal.
$$\tilde y_i = y_i , \qquad (i = 0, 1, \ldots, \rho - 1) . \qquad (5.37)$$
$$\hat a_i = \tilde a_0^{-1} \tilde a_i , \quad (i = 1, 2, \ldots, \rho) ; \qquad \hat b_i = \tilde a_0^{-1} \tilde b_i , \quad (i = 0, 1, \ldots, \rho) .$$
$$y_\rho = -\hat a_1 y_{\rho-1} - \ldots - \hat a_\rho y_0 + \hat b_0 u_\rho + \hat b_1 u_{\rho-1} + \ldots + \hat b_\rho u_0 . \qquad (5.39)$$
Hence, for a known input sequence (5.35) and given initial values (5.36), the vector $y_\rho$ is uniquely determined. For $k = 1$, from (5.38) we derive
$$y_{\rho+1} = -\hat a_1 y_\rho - \ldots - \hat a_\rho y_1 + \hat b_0 u_{\rho+1} + \hat b_1 u_\rho + \ldots + \hat b_\rho u_1 .$$
Thus, with the help of (5.35), (5.36) and (5.39), the vector $y_{\rho+1}$ is uniquely calculated. Obviously, this procedure can be continued uniquely for all $k > 0$. As a result, the sequence
$$\{y\} = \{y_0, \ldots, y_{\rho-1}, y_\rho, \ldots\}$$
is generated in a unique way; it is a solution of Equation (5.31) and fulfils the initial conditions (5.37).
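The recursive construction in this proof is straightforward to implement. A minimal sketch for a hypothetical scalar normal process $y_{k+2} + 0.5\,y_{k+1} = u_k$ (so $\rho = 2$ and $\tilde a_0 = 1$, hence $\hat a_i = \tilde a_i$, $\hat b_i = \tilde b_i$):

```python
# Recursive solution of a normal process, scalar case (Theorem 5.16).
def simulate(ahat, bhat, u, y_init):
    """ahat = [a1..a_rho], bhat = [b0..b_rho]; returns y_0..y_{len(u)-1}."""
    rho = len(ahat)
    y = list(y_init)
    for k in range(len(u) - rho):
        yk = sum(bhat[i] * u[k + rho - i] for i in range(rho + 1)) \
           - sum(ahat[i - 1] * y[k + rho - i] for i in range(1, rho + 1))
        y.append(yk)
    return y

# y_{k+2} + 0.5*y_{k+1} = u_k, zero initial values, constant input u = 1:
y = simulate([0.5, 0.0], [0.0, 0.0, 1.0], [1.0] * 6, [0.0, 0.0])
assert y == [0.0, 0.0, 1.0, 0.5, 0.75, 0.625]
```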
Remark 5.17. It follows from the proof of Theorem 5.16 that for weakly causal normal processes with given initial conditions, the vector $y_k$ of the solution $\{y\}$ is determined by the values of the input sequence $u_0, u_1, \ldots, u_k$. If the process, however, is strictly causal, then the vector $y_k$ is determined by the vectors $u_0, u_1, \ldots, u_{k-1}$.
3.
Theorem 5.18. Let the input (5.2) be a Taylor sequence (see Appendix A).
Then all solutions of Equation (5.31) are Taylor sequences.
$$a(\zeta) = \zeta^\rho \tilde a(\zeta^{-1}) = \tilde a_0 + \tilde a_1 \zeta + \ldots + \tilde a_\rho \zeta^\rho , \qquad b(\zeta) = \zeta^\rho \tilde b(\zeta^{-1}) = \tilde b_0 + \tilde b_1 \zeta + \ldots + \tilde b_\rho \zeta^\rho . \qquad (5.40)$$
$$\det a(0) = \det \tilde a_0 \neq 0 . \qquad (5.41)$$
Under the assumed conditions, there exists the $\zeta$-transform of the input sequence
$$u^0(\zeta) = \sum_{i=0}^{\infty} u_i \zeta^i . \qquad (5.42)$$
$$\begin{aligned} y^0(\zeta) = a^{-1}(\zeta) \Big[ &\tilde a_0 y_0 + \zeta (\tilde a_0 y_1 + \tilde a_1 y_0) + \ldots + \zeta^{\rho-1} (\tilde a_0 y_{\rho-1} + \tilde a_1 y_{\rho-2} + \ldots + \tilde a_{\rho-1} y_0) \Big] \\ + a^{-1}(\zeta) \Big[ &\tilde b_0 \big( u^0(\zeta) - u_0 - \zeta u_1 - \ldots - \zeta^{\rho-1} u_{\rho-1} \big) \\ + &\zeta \tilde b_1 \big( u^0(\zeta) - u_0 - \zeta u_1 - \ldots - \zeta^{\rho-2} u_{\rho-2} \big) + \ldots + \zeta^\rho \tilde b_\rho u^0(\zeta) \Big] , \end{aligned} \qquad (5.43)$$
$$\begin{aligned} a(\zeta) \hat y^0(\zeta) = \; &\tilde a_0 \hat y_0 + \zeta (\tilde a_0 \hat y_1 + \tilde a_1 \hat y_0) + \ldots + \zeta^{\rho-1} (\tilde a_0 \hat y_{\rho-1} + \tilde a_1 \hat y_{\rho-2} + \ldots + \tilde a_{\rho-1} \hat y_0) \\ + &\tilde b_0 \big( u^0(\zeta) - u_0 - \zeta u_1 - \ldots - \zeta^{\rho-1} u_{\rho-1} \big) \\ + &\zeta \tilde b_1 \big( u^0(\zeta) - u_0 - \zeta u_1 - \ldots - \zeta^{\rho-2} u_{\rho-2} \big) + \ldots + \zeta^\rho \tilde b_\rho u^0(\zeta) , \end{aligned} \qquad (5.45)$$
which holds for all sufficiently small $|\zeta|$. Notice that the coefficients of the matrices $\tilde b_i$, $(i = 0, \ldots, \rho)$ on the right side of (5.45) are proportional to $\zeta^i$. Hence, comparing the coefficients for $\zeta^i$, $(i = 0, \ldots, \rho - 1)$ on both sides of (5.45) yields
$$\begin{aligned} \tilde a_0 \hat y_0 &= \tilde a_0 y_0 \\ \tilde a_1 \hat y_0 + \tilde a_0 \hat y_1 &= \tilde a_1 y_0 + \tilde a_0 y_1 \\ &\;\;\vdots \\ \tilde a_{\rho-1} \hat y_0 + \tilde a_{\rho-2} \hat y_1 + \ldots + \tilde a_0 \hat y_{\rho-1} &= \tilde a_{\rho-1} y_0 + \tilde a_{\rho-2} y_1 + \ldots + \tilde a_0 y_{\rho-1} . \end{aligned} \qquad (5.46)$$
$$\hat y_i = y_i , \qquad (i = 0, 1, \ldots, \rho - 1) . \qquad (5.47)$$
$$\tilde a_0 \hat y_{k+\rho} + \tilde a_1 \hat y_{k+\rho-1} + \ldots + \tilde a_\rho \hat y_k = \tilde b_0 u_{k+\rho} + \tilde b_1 u_{k+\rho-1} + \ldots + \tilde b_\rho u_k , \qquad (k = 0, 1, \ldots) .$$
Comparing this with (5.31) and taking advantage of (5.47), we conclude that the coefficients of the expansion (5.44) build a solution of Equation (5.31), the initial conditions of which satisfy (5.37) for any initial vectors (5.36). But owing to Theorem 5.16, every ensemble of initial values (5.36) uniquely corresponds to a solution. Hence we discover that for any initial vectors (5.36), the totality of coefficients of the expansions (5.44) exhausts the whole solution set of the normal equation (5.31). Thus, in case of convergence of the $\zeta$-transform (5.42), all solutions of the normal equation (5.31) are Taylor sequences.
Corollary 5.19. When the input is a Taylor sequence $\{u\}$, it emerges from the proof of Theorem 5.18 that the right side of Relation (5.43) defines the $\zeta$-transform of the general solution of the normal equation (5.31).
4. From Theorem 5.18 and its Corollary, as well as from the relations between the $z$-transforms and $\zeta$-transforms, it arises that for a Taylor input sequence $\{u\}$, any solution $\{y\}$ of the normal equation (5.31) possesses the $z$-transform
$$y^*(z) = \sum_{k=0}^{\infty} y_k z^{-k} .$$
$$\begin{aligned} y^*(z) = \tilde w(z) u^*(z) + \tilde a^{-1}(z) \big[ &z^\rho (\tilde a_0 y_0 - \tilde b_0 u_0) + z^{\rho-1} (\tilde a_0 y_1 + \tilde a_1 y_0 - \tilde b_0 u_1 - \tilde b_1 u_0) + \ldots \\ + &z (\tilde a_0 y_{\rho-1} + \tilde a_1 y_{\rho-2} + \ldots + \tilde a_{\rho-1} y_0 - \tilde b_0 u_{\rho-1} - \tilde b_1 u_{\rho-2} - \ldots - \tilde b_{\rho-1} u_0) \big] . \end{aligned}$$
$$\begin{aligned} \tilde a_0 y_0^0 &= \tilde b_0 u_0 \\ \tilde a_1 y_0^0 + \tilde a_0 y_1^0 &= \tilde b_1 u_0 + \tilde b_0 u_1 \\ &\;\;\vdots \\ \tilde a_{\rho-1} y_0^0 + \ldots + \tilde a_0 y_{\rho-1}^0 &= \tilde b_{\rho-1} u_0 + \ldots + \tilde b_0 u_{\rho-1} \end{aligned} \qquad (5.48)$$
hold. Owing to $\det \tilde a_0 \neq 0$, the system (5.48) uniquely determines the totality of initial vectors $y_0^0, \ldots, y_{\rho-1}^0$. Taking these vectors as initial values, we conclude that the solution $\{y^0\}$, which is assigned to the initial values $y_0^0, \ldots, y_{\rho-1}^0$, possesses the $z$-transform
$$y^{0*}(z) = \tilde w(z) u^*(z) . \qquad (5.49)$$
In what follows, that solution of Equation (5.31) having the transform (5.49) is called the solution with vanishing initial energy. As a result of the above considerations, the following theorem is formulated.
Theorem 5.20. For the normal equation (5.31) and any Taylor input sequence $\{u\}$, there exists the solution with vanishing initial energy $\{y^0\}$, which has the $z$-transform (5.49). The initial conditions of this solution are uniquely determined by the system of equations (5.48).
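The triangular system (5.48) can be solved successively because $\tilde a_0$ is invertible. A scalar sketch with a hypothetical process $y_{k+2} + 0.5\,y_{k+1} = u_k$:

```python
# Initial vectors of the solution with vanishing initial energy, cf. (5.48),
# scalar normal case.
def vanishing_initial_values(a, b, u):
    """a = [a0..a_rho] with a[0] != 0, b = [b0..b_rho], u = first rho inputs."""
    rho = len(a) - 1
    y0 = []
    for k in range(rho):
        rhs = sum(b[j] * u[k - j] for j in range(k + 1)) \
            - sum(a[j] * y0[k - j] for j in range(1, k + 1))
        y0.append(rhs / a[0])
    return y0

# y_{k+2} + 0.5*y_{k+1} = u_k with constant input u = 1:
init = vanishing_initial_values([1.0, 0.5, 0.0], [0.0, 0.0, 1.0], [1.0, 1.0])
assert init == [0.0, 0.0]
```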
$$\{H\} = \{H_0, H_1, \ldots\}$$
$$\tilde a_0 H_{k+\rho} + \ldots + \tilde a_\rho H_k = \tilde b_0 U_{k+\rho} + \ldots + \tilde b_\rho U_k , \qquad (k = 0, 1, \ldots) \qquad (5.50)$$
$$\begin{aligned} \tilde a_0 H_0 &= \tilde b_0 \\ \tilde a_1 H_0 + \tilde a_0 H_1 &= \tilde b_1 \\ &\;\;\vdots \\ \tilde a_{\rho-1} H_0 + \ldots + \tilde a_0 H_{\rho-1} &= \tilde b_{\rho-1} \end{aligned} \qquad (5.51)$$
as initial values $H_0, \ldots, H_{\rho-1}$. Notice that due to (5.51), for $k > 0$ Equation (5.50) converts into the homogeneous equation
$$\tilde a_0 H_{k+\rho} + \ldots + \tilde a_\rho H_k = O_{nm} , \qquad (k = 1, 2, \ldots) .$$
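For a normal process, the weighting sequence is the response to a unit impulse, and the solution with vanishing initial energy is its convolution with the input. A scalar sketch with hypothetical coefficients:

```python
# Weighting sequence of a normal process and the convolution representation
# y_k = sum_j H_j * u_{k-j}  (scalar case, atilde_0 = 1).
def response(a, b, u):
    """Zero-initial-value solution of a[0]*y_{k+rho} + ... = b[0]*u_{k+rho} + ..."""
    rho = len(a) - 1
    y = [0.0] * rho
    for k in range(len(u) - rho):
        yk = sum(b[i] * u[k + rho - i] for i in range(rho + 1)) \
           - sum(a[i] * y[k + rho - i] for i in range(1, rho + 1))
        y.append(yk)
    return y

a = [1.0, 0.5, 0.0]            # y_{k+2} + 0.5*y_{k+1} = u_k
b = [0.0, 0.0, 1.0]
impulse = [1.0] + [0.0] * 5
step    = [1.0] * 6

H = response(a, b, impulse)    # weighting sequence
y = response(a, b, step)
conv = [sum(H[j] * step[k - j] for j in range(k + 1)) for k in range(6)]
assert H == [0.0, 0.0, 1.0, -0.5, 0.25, -0.125]
assert conv == y               # convolution reproduces the step response
```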
5.4 Anomalous LTI Processes
$$\tilde a(q) y_k = \tilde b(q) u_k \qquad (5.52)$$
with
$$\tilde a(q) = \tilde a_0 q^\rho + \tilde a_1 q^{\rho-1} + \ldots + \tilde a_\rho , \qquad \tilde b(q) = \tilde b_0 q^\rho + \tilde b_1 q^{\rho-1} + \ldots + \tilde b_\rho . \qquad (5.53)$$
$$\tilde a_0 y_{\rho+i} = d_i , \qquad (i = 0, 1, \ldots) . \qquad (5.55)$$
2.
Lemma 5.22. If the input of a causal anomalous process is a Taylor sequence, then all solutions of Equation (5.52) are Taylor sequences.

Proof. Without loss of generality, we assume that Equation (5.52) is row reduced, so that utilising (1.21) gives
$$\tilde a(q) = \mathrm{diag}\{q^{\alpha_1}, \ldots, q^{\alpha_n}\} A_0 + \tilde a_1(q) , \qquad (5.59)$$
where the degree of the $i$-th row of the matrix $\tilde a_1(q)$ is lower than $\alpha_i$ and $\det A_0 \neq 0$. Suppose $\deg \tilde a(q) = \alpha_{\max}$. Select
$$\phi(q) = \mathrm{diag}\{q^{\alpha_{\max} - \alpha_1}, \ldots, q^{\alpha_{\max} - \alpha_n}\}$$
and consider the derived equation (5.14), which with the help of (5.59) takes the form
$$\left[ A_0 q^{\alpha_{\max}} + \phi(q) \tilde a_1(q) \right] y_k = \phi(q) \tilde b(q) u_k . \qquad (5.60)$$
As is easily seen, Equation (5.60) is normal under the given suppositions. Therefore, owing to Theorem 5.18, for a Taylor input sequence $\{u\}$ all solutions of Equation (5.60) are Taylor sequences. But due to Lemma 5.2, all solutions of the original equation (5.52) are also solutions of the derived equation (5.60); thus Lemma 5.22 is proven.
where $y_{p,0}, \ldots, y_{p,\alpha_i - 1}$ are the initial values, and $\tilde B_i(z)$ is a polynomial in $z$, which depends on the coefficients $b_{ir}^{(j)}$ and the excitation $\{u\}$. Substituting $\zeta^{-1}$ for $z$, we obtain the equations for the $\zeta$-transforms
$$\begin{aligned} \sum_{p=1}^{n} a_{ip}^{(0)} &\left[ y_p^0(\zeta) - y_{p,0} - \ldots - \zeta^{\alpha_i - 1} y_{p,\alpha_i - 1} \right] \\ + \zeta \sum_{p=1}^{n} a_{ip}^{(1)} &\left[ y_p^0(\zeta) - y_{p,0} - \ldots - \zeta^{\alpha_i - 2} y_{p,\alpha_i - 2} \right] + \ldots \\ \ldots + \zeta^{\alpha_i} \sum_{p=1}^{n} a_{ip}^{(\alpha_i)} &\, y_p^0(\zeta) = B_i(\zeta) , \end{aligned} \qquad (5.63)$$
which has to be fulfilled for all $y_{p,k}$ that are actually determined by the left side of (5.63). Inserting (5.64) on the left side of (5.63), and comparing the coefficients of $\zeta^k$, $(k = 0, 1, \ldots)$ on both sides, a system of successive linear equations for the quantities $y_{p,k}$, $(k = 0, 1, \ldots)$ is created, which due to (5.62) is always solvable. In order to meet Condition (5.65), we generate the totality of linear relations that have to be fulfilled between the quantities $y_{p,k}$ and the first values of the input sequence $\{u\}$. These conditions determine the set of initial conditions $y_{p,k}$, for which the wanted solution of Equation (5.61) exists. Since with respect to Lemma 5.22, all solutions of Equation (5.61) (whenever they exist) possess $\zeta$-transforms, the suggested procedure always delivers the wanted result.
so, we gain
$$\begin{bmatrix} y_1^*(z) \\ y_2^*(z) \end{bmatrix} = \begin{bmatrix} z^2 + 1 & 2z \\ 1 & z \end{bmatrix}^{-1} \begin{bmatrix} z^2 y_{1,0} + z y_{1,1} + 2z y_{2,0} + z u^*(z) - z u_0 \\ z y_{2,0} + u^*(z) \end{bmatrix} . \qquad (5.67)$$
$$y_1^0(\zeta) = y_1^*(\zeta^{-1}) , \qquad y_2^0(\zeta) = y_2^*(\zeta^{-1}) , \qquad u^0(\zeta) = u^*(\zeta^{-1})$$
were used. Since by supposition, the input $\{u\}$ is a Taylor sequence, the expansion
$$u^0(\zeta) = \sum_{k=0}^{\infty} u_k \zeta^k$$
converges. Thus, the right side of (5.68) is analytical at the point $\zeta = 0$. Hence, the pair of convergent expansions
$$y_1^0(\zeta) = \sum_{k=0}^{\infty} y_{1,k} \zeta^k , \qquad y_2^0(\zeta) = \sum_{k=0}^{\infty} y_{2,k} \zeta^k \qquad (5.69)$$
Now we insert (5.69) into (5.70) and set equal those terms on both sides which do not depend on $\zeta$. Thus, we receive
When (5.71) and (5.72) hold, then in the first row of (5.70) the terms of zero and first degree in $\zeta$ cancel each other, respectively, and in the second equation the absolute terms cancel each other. Altogether, Equations (5.70) under Conditions (5.71), (5.72) might be written in the shape
$$\sum_{k=2}^{\infty} y_{1,k} \zeta^k + \zeta^2 \sum_{k=0}^{\infty} y_{1,k} \zeta^k + 2\zeta \sum_{k=1}^{\infty} y_{2,k} \zeta^k = \zeta \sum_{k=1}^{\infty} u_k \zeta^k$$
$$\zeta \sum_{k=0}^{\infty} y_{1,k} \zeta^k + \sum_{k=1}^{\infty} y_{2,k} \zeta^k = \zeta \sum_{k=0}^{\infty} u_k \zeta^k .$$
Dividing the first equation by $\zeta^2$ and the second by $\zeta$, we find
$$\sum_{k=0}^{\infty} y_{1,k+2} \zeta^k + \sum_{k=0}^{\infty} y_{1,k} \zeta^k + 2 \sum_{k=0}^{\infty} y_{2,k+1} \zeta^k = \sum_{k=0}^{\infty} u_{k+1} \zeta^k$$
$$\sum_{k=0}^{\infty} y_{1,k} \zeta^k + \sum_{k=0}^{\infty} y_{2,k+1} \zeta^k = \sum_{k=0}^{\infty} u_k \zeta^k .$$
Comparing the coefficients of the powers $\zeta^k$, $(k = 0, 1, \ldots)$, we conclude that for any selection of the constants $y_{1,0}$, $y_{2,0}$, $y_{1,1}$, the coefficients of the expansions (5.69) present a solution of Equation (5.66) which satisfies the initial conditions (5.71), (5.72). As a result of the above analysis, the following facts are ascertained:
Although the values of the numbers $\alpha_1$ and $\alpha_2$ are the same as in Example 5.23, the right side of Relation (5.75) now depends on the four values $y_{1,0}$, $y_{2,0}$, $y_{1,1}$ and $y_{2,1}$. Substituting $z = \zeta^{-1}$, as in the preceding example, the relations
$$\begin{bmatrix} y_1^0(\zeta) \\ y_2^0(\zeta) \end{bmatrix} = \begin{bmatrix} 1 + \zeta^2 & 2 \\ 1 & 1 \end{bmatrix}^{-1} \begin{bmatrix} y_{1,0} + \zeta y_{1,1} + 2 y_{2,0} + 2\zeta y_{2,1} + \zeta u^0(\zeta) - \zeta u_0 \\ y_{1,0} + y_{2,0} + \zeta u^0(\zeta) \end{bmatrix} \qquad (5.76)$$
take place. The right side of (5.76) is analytical at the point $\zeta = 0$. Hence, there uniquely exists a pair of convergent expansions (5.69), which are the Taylor series of the right side of (5.76). Thus from (5.76), we obtain
$$\begin{aligned} \big[ y_1^0(\zeta) - y_{1,0} - \zeta y_{1,1} \big] + \zeta^2 y_1^0(\zeta) + 2 \big[ y_2^0(\zeta) - y_{2,0} - \zeta y_{2,1} \big] &= \zeta \big[ u^0(\zeta) - u_0 \big] \\ \big[ y_1^0(\zeta) - y_{1,0} \big] + \big[ y_2^0(\zeta) - y_{2,0} \big] &= \zeta u^0(\zeta) . \end{aligned} \qquad (5.77)$$
Theorem 5.27. For the causal anomalous process (5.52) with Taylor input sequence $\{u\}$, there always exists the solution $\{y^0\}$, the $z$-transform $y^{0*}(z)$ of which is determined by the relation
$$y^{0*}(z) = \tilde w(z) u^*(z) ,$$
where
$$\tilde w(z) = \tilde a^{-1}(z) \tilde b(z)$$
is the assigned transfer matrix. The $\zeta$-transform of the solution $\{y^0\}$ has the view
$$y^{00}(\zeta) = w(\zeta) u^0(\zeta) \qquad (5.82)$$
with
$$w(\zeta) = \tilde w(\zeta^{-1}) = \tilde a^{-1}(\zeta^{-1})\, \tilde b(\zeta^{-1}) . \qquad (5.83)$$
Proof. Without loss of generality, we assume that Equation (5.52) is row reduced and in (5.59) $\det A_0 \neq 0$. In this case, the right side of (5.82) is analytical at the point $\zeta = 0$. Thus, there uniquely exists the Taylor series expansion
$$y^{00}(\zeta) = \sum_{k=0}^{\infty} \tilde y_k \zeta^k \qquad (5.84)$$
that converges for sufficiently small $|\zeta|$. From (5.83) and (5.53), using (5.40) we get
$$w(\zeta) = a^{-1}(\zeta) b(\zeta) , \qquad (5.85)$$
where
$$a(\zeta) = \tilde a_0 + \tilde a_1 \zeta + \ldots + \tilde a_\rho \zeta^\rho , \qquad b(\zeta) = \tilde b_0 + \tilde b_1 \zeta + \ldots + \tilde b_\rho \zeta^\rho . \qquad (5.86)$$
$$\left( \tilde a_0 + \tilde a_1 \zeta + \ldots + \tilde a_\rho \zeta^\rho \right) \sum_{k=0}^{\infty} \tilde y_k \zeta^k = \left( \tilde b_0 + \tilde b_1 \zeta + \ldots + \tilde b_\rho \zeta^\rho \right) \sum_{k=0}^{\infty} u_k \zeta^k . \qquad (5.87)$$
By comparison of the coefficients for $\zeta^k$, $(k = 0, 1, \ldots, \rho - 1)$, we find
$$\begin{aligned} \tilde a_0 \tilde y_0 &= \tilde b_0 u_0 \\ \tilde a_1 \tilde y_0 + \tilde a_0 \tilde y_1 &= \tilde b_1 u_0 + \tilde b_0 u_1 \\ &\;\;\vdots \\ \tilde a_{\rho-1} \tilde y_0 + \ldots + \tilde a_0 \tilde y_{\rho-1} &= \tilde b_{\rho-1} u_0 + \ldots + \tilde b_0 u_{\rho-1} . \end{aligned} \qquad (5.88)$$
With the aid of (5.88), Equation (5.87) is easily brought into the form
$$\sum_{k=\rho}^{\infty} \tilde a_0 \tilde y_k \zeta^k + \sum_{k=\rho-1}^{\infty} \tilde a_1 \tilde y_k \zeta^{k+1} + \ldots + \sum_{k=0}^{\infty} \tilde a_\rho \tilde y_k \zeta^{k+\rho} = \sum_{k=\rho}^{\infty} \tilde b_0 u_k \zeta^k + \sum_{k=\rho-1}^{\infty} \tilde b_1 u_k \zeta^{k+1} + \ldots + \sum_{k=0}^{\infty} \tilde b_\rho u_k \zeta^{k+\rho} .$$
Remark 5.28. Since in the anomalous case $\det \tilde a_0 = 0$, Relations (5.88) do not allow determining the initial conditions that are assigned to the solution with vanishing initial energy. For the determination of these initial conditions, the following procedure is possible. Using (5.59), we obtain
where
$$A(\zeta) = \mathrm{diag}\{\zeta^{\alpha_1}, \ldots, \zeta^{\alpha_n}\}\, \tilde a(\zeta^{-1}) = A_0 + A_1 \zeta + \ldots + A_\rho \zeta^\rho , \qquad (5.90)$$
$$B(\zeta) = \mathrm{diag}\{\zeta^{\alpha_1}, \ldots, \zeta^{\alpha_n}\}\, \tilde b(\zeta^{-1}) = \tilde B_0 + \tilde B_1 \zeta + \ldots + \tilde B_\rho \zeta^\rho .$$
$$\begin{aligned} A_0 \tilde y_0 &= \tilde B_0 u_0 \\ A_1 \tilde y_0 + A_0 \tilde y_1 &= \tilde B_1 u_0 + \tilde B_0 u_1 \\ &\;\;\vdots \\ A_{\rho-1} \tilde y_0 + \ldots + A_0 \tilde y_{\rho-1} &= \tilde B_{\rho-1} u_0 + \ldots + \tilde B_0 u_{\rho-1} . \end{aligned} \qquad (5.91)$$
Example 5.29. Find the initial conditions for the solution with vanishing initial energy for Equations (5.73). Notice that in this example
$$\tilde a(z) = \begin{bmatrix} z^2 + 1 & 2z^2 \\ z & z \end{bmatrix} , \qquad \tilde b(z) = \begin{bmatrix} z \\ 1 \end{bmatrix} ,$$
that means
$$A_0 = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} , \quad A_1 = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} , \quad A_2 = \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} , \qquad \tilde B_0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} , \quad \tilde B_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} , \quad \tilde B_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} . \qquad (5.92)$$
$$A_0 \tilde y_0 = \tilde B_0 u_0 ,$$
i.e., $\tilde y_0 = O_{21}$. Thus, the second equation in (5.91) takes the form
$$A_0 \tilde y_1 = \tilde B_1 u_0$$
6. For anomalous causal LTI processes, in the same way as for normal processes, we introduce the concept of the weighting sequence $\{H\}$: the input is taken with the $z$-transform
$$U^*(z) = I_m ,$$
so that
$$H^0(\zeta) = w(\zeta) .$$
Since the right side is analytical at $\zeta = 0$, there exists the convergent expansion
$$H^0(\zeta) = \sum_{k=0}^{\infty} H_k \zeta^k .$$
Applying Relations (5.85), (5.86), we obtain from the last two equations
$$\left( \tilde a_0 + \tilde a_1 \zeta + \ldots + \tilde a_\rho \zeta^\rho \right) \sum_{k=0}^{\infty} H_k \zeta^k = \tilde b_0 + \tilde b_1 \zeta + \ldots + \tilde b_\rho \zeta^\rho . \qquad (5.95)$$
$$\begin{aligned} \tilde a_0 H_0 &= \tilde b_0 \\ \tilde a_1 H_0 + \tilde a_0 H_1 &= \tilde b_1 \\ &\;\;\vdots \\ \tilde a_\rho H_0 + \ldots + \tilde a_0 H_\rho &= \tilde b_\rho . \end{aligned} \qquad (5.96)$$
If we make the coefficients at all powers of $\zeta$ on the left side equal to zero, then we find
$$\tilde a_0 H_{k+\rho+1} + \tilde a_1 H_{k+\rho} + \ldots + \tilde a_\rho H_{k+1} = O_{nm} , \qquad (k = 0, 1, \ldots) ,$$
i.e.,
$$\tilde a_0 H_{k+\rho} + \tilde a_1 H_{k+\rho-1} + \ldots + \tilde a_\rho H_k = O_{nm} , \qquad (k = 1, 2, \ldots) .$$
From this it is seen that for $k \ge 1$, the elements of the weighting sequence satisfy the homogeneous equation, which is derived from (5.52) for $\{u\} = O_{m1}$. Notice that for $\det \tilde a_0 = 0$, the determination of the matrices $H_i$, $(i = 0, 1, \ldots, \rho)$ is not possible with the help of (5.96). To overcome this difficulty, Relation (5.89) is recruited. So instead of (5.95), we obtain the result
$$\left( A_0 + A_1 \zeta + \ldots + A_\rho \zeta^\rho \right) \sum_{k=0}^{\infty} H_k \zeta^k = \tilde B_0 + \tilde B_1 \zeta + \ldots + \tilde B_\rho \zeta^\rho ,$$
$$\begin{aligned} A_0 H_0 &= \tilde B_0 \\ A_1 H_0 + A_0 H_1 &= \tilde B_1 \\ &\;\;\vdots \\ A_\rho H_0 + \ldots + A_0 H_\rho &= \tilde B_\rho . \end{aligned} \qquad (5.97)$$
Thus, with the help of the initial conditions (5.97), the weighting sequence $\{H\}$ can be calculated.

Example 5.30. Under the conditions of Example 5.29 and applying (5.97) and (5.92), we find
$$H_0 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} , \qquad H_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \qquad H_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix} .$$
$$A_0 H_{k+3} = -A_2 H_{k+1} , \qquad (k = 0, 1, \ldots)$$
or
$$H_{k+3} = \begin{bmatrix} 1 & 0 \\ -1 & 0 \end{bmatrix} H_{k+1} .$$
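System (5.97) and the subsequent homogeneous recursion determine the weighting sequence of the anomalous process of Examples 5.29/5.30. A direct numerical check:

```python
# Weighting sequence of the anomalous process of Examples 5.29/5.30,
# computed from system (5.97) and the recursion A0*H_{k+3} = -A2*H_{k+1}.
def solve2(A, v):
    """Cramer's rule for a 2x2 system A x = v."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(v[0] * A[1][1] - A[0][1] * v[1]) / det,
            (A[0][0] * v[1] - v[0] * A[1][0]) / det]

def matvec(A, x):
    return [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]

A0, A1, A2 = [[1, 2], [1, 1]], [[0, 0], [0, 0]], [[1, 0], [0, 0]]
B = [[0, 0], [1, 1], [0, 0]]                  # B0, B1, B2 of (5.92)

H0 = solve2(A0, B[0])
H1 = solve2(A0, [B[1][i] - matvec(A1, H0)[i] for i in range(2)])
H2 = solve2(A0, [B[2][i] - matvec(A1, H1)[i] - matvec(A2, H0)[i] for i in range(2)])
H3 = solve2(A0, [-matvec(A2, H1)[i] for i in range(2)])   # A0 H3 = -A2 H1
assert H0 == [0.0, 0.0] and H1 == [1.0, 0.0] and H2 == [0.0, 0.0]
assert H3 == [1.0, -1.0]
```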
5.5 Forward and Backward Models

$$\tilde a(q) y_k = \tilde b(q) u_k , \qquad (5.98)$$
where
$$\tilde a(q) = \tilde a_0 q^\rho + \ldots + \tilde a_\rho , \quad (n \times n) , \qquad \tilde b(q) = \tilde b_0 q^\rho + \ldots + \tilde b_\rho , \quad (n \times m) . \qquad (5.99)$$
As before, Equation (5.98) is designated as a forward model of the LTI process. Select a unimodular matrix $\phi(q)$, such that the matrix $\bar a(q) = \phi(q) \tilde a(q)$ in (5.20) becomes row reduced, and consider the equivalent equation
$$\bar a(q) y_k = \bar b(q) u_k , \qquad (5.100)$$
where $\bar b(q) = \phi(q) \tilde b(q)$. Let $\alpha_i$ be the degree of the $i$-th row of the matrix $\bar a(q)$; then we have
$$\bar a(q) = \mathrm{diag}\{q^{\alpha_1}, \ldots, q^{\alpha_n}\} \left( A_0 + A_1 q^{-1} + \ldots + A_\rho q^{-\rho} \right) ,$$
with
$$a(\zeta) = A_0 + A_1 \zeta + \ldots + A_\rho \zeta^\rho = \mathrm{diag}\{\zeta^{\alpha_1}, \ldots, \zeta^{\alpha_n}\}\, \bar a(\zeta^{-1}) \qquad (5.102)$$
and
is called the associated backward model of the LTI process. From (5.29), we recognise that $b(\zeta)$ is a polynomial matrix.
$$\tilde w(q) = \tilde a^{-1}(q) \tilde b(q) = \bar a^{-1}(q) \bar b(q) \qquad (5.104)$$
is called the transfer matrix of the forward model, and the matrix
$$w(\zeta) = a^{-1}(\zeta) b(\zeta) = \tilde a^{-1}(\zeta^{-1})\, \tilde b(\zeta^{-1}) = \bar a^{-1}(\zeta^{-1})\, \bar b(\zeta^{-1}) \qquad (5.105)$$
is the transfer matrix of the backward model. From (5.104) and (5.105), we obtain the reciprocal relations
$$w(\zeta) = \tilde w(\zeta^{-1}) , \qquad \tilde w(q) = w(q^{-1}) . \qquad (5.106)$$
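The reciprocal relations (5.106) can be spot-checked with exact rational arithmetic; the scalar transfer function below is hypothetical:

```python
# Check the reciprocal relation (5.106), w(zeta) = wtilde(1/zeta).
from fractions import Fraction

def wtilde(q):   # forward transfer function (hypothetical coefficients)
    return (3 * q**2 + 4 * q + 5) / (q**2 + q + 2)

def w(z):        # backward transfer function, coefficients reversed
    return (3 + 4 * z + 5 * z**2) / (1 + z + 2 * z**2)

for z in [Fraction(1, 2), Fraction(1, 3), Fraction(2, 5)]:
    assert w(z) == wtilde(1 / z)
```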
The matrix $\bar a(q)$ is named, as before, the eigenoperator of the forward model, and the matrix $a(\zeta)$ is the eigenoperator of the backward model. As seen from (5.102), the eigenoperator $a(\zeta)$ of the backward model is independent of the shape of the matrix $\tilde b(q)$ in (5.98). Obviously, the matrices $a(\zeta)$ and $b(\zeta)$ in (5.101) are not uniquely determined. Nevertheless, as we realise from (5.105), the transfer matrix $w(\zeta)$ is not affected. Moreover, later on we will prove that the structural properties of the matrix $a(\zeta)$ also do not depend on the special procedure for its construction.

¹ In (5.101), $\zeta$ for once means the operator $q^{-1}$. A distinction from the complex variable of the $\zeta$-transformation is not made, because the operator $q^{-1}$, due to the mentioned difficulties, will not be used later on.
Thus, the matrices $a(\zeta)$, $b(\zeta)$ of the associated backward model take the shape
$$a(\zeta) = \begin{bmatrix} 1 & \zeta^2 \\ 2\zeta & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ 2 & 0 \end{bmatrix} \zeta + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \zeta^2 ,$$
$$b(\zeta) = \begin{bmatrix} \zeta^2 \\ \zeta \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \zeta + \begin{bmatrix} 1 \\ 0 \end{bmatrix} \zeta^2 .$$
3.
Lemma 5.32. For the causality of the process (5.98), it is necessary and sufficient that the transfer function $w(\zeta)$ is analytical at the point $\zeta = 0$. For the strict causality of the process (5.98), the fulfilment of the equation
$$w(0) = O_{nm}$$
Corollary 5.33. The process (5.98) is strictly causal, if and only if the equation
$$w(\zeta) = \zeta\, w_1(\zeta)$$
holds with a matrix $w_1(\zeta)$, which is analytical at the point $\zeta = 0$.
4. It will be shown that the concepts of forward and backward models are closely connected with the properties of the z- and $\zeta$-transforms of the solution of Equation (5.98). Indeed, suppose a causal process; then, as was shown above, for a Taylor input sequence $\{u\}$, independently of whether the process is normal or anomalous, Equation (5.98) always possesses the solution with vanishing initial energy, and its z-transform $\bar y(z)$ satisfies an equation that formally coincides with (5.98). In what follows, Relation (5.107) is also called a forward model of the process (5.98). From (5.107), we receive
$$\bar y(z) = w(z)\bar u(z) = a^{-1}(z)b(z)\bar u(z) .$$
$$y^0(\zeta) = \bar w(\zeta)u^0(\zeta) ,$$
where $y^0(\zeta)$, $u^0(\zeta)$ are the $\zeta$-transforms of the process output for vanishing initial energy and of the input sequence, respectively. Moreover, $\bar w(\zeta)$ is the transfer matrix of the backward model (5.105). Owing to (5.105), the last equation might be presented in the form
$$\bar a(\zeta)y^0(\zeta) = \bar b(\zeta)u^0(\zeta) . \qquad (5.108)$$
5. The forward model (5.107) is called controllable, if the pair $(a(z), b(z))$ is irreducible, i.e. for all finite $z$
$$\operatorname{rank} R_h(z) = \operatorname{rank}\,\bigl[\,a(z)\ \ b(z)\,\bigr] = n .$$
6.
Lemma 5.34. If the forward model (5.107) is controllable, then the associated backward model (5.108) is also controllable.
Proof. Let the model (5.107) be controllable. Then the row-reduced model (5.100) is also controllable, and hence for all finite $z$
$$\operatorname{rank}\,\bigl[\,\tilde a(z)\ \ \tilde b(z)\,\bigr] = n .$$
$$w(z) = C(zI_p - A)^{-1}B + D \qquad (5.109)$$
$$\tau_0(\zeta) = (I_p - \zeta A,\ B,\ C) .$$
Lemma 5.35. Under the named suppositions, the PMD $\tau_0(\zeta)$ is minimal, i.e. the pairs $(I_p - \zeta A, B)$ and $[I_p - \zeta A, C]$ are irreducible.
These conditions together with Lemma 1.42 imply the irreducibility of the pairs
$$(I_p - \zeta A,\ B) , \qquad [I_p - \zeta A,\ C] .$$
where
$$\nu_{si} \ge \nu_{s,i-1} , \quad (s = 0, \ldots, q;\ i = 2, \ldots, \ell) , \qquad \sum_{s=0}^{q}\sum_{i=1}^{\ell}\nu_{si} = p .$$
Then the totality of invariant polynomials different from one of the matrix $I_p - \zeta A$ consists of the polynomials $\bar\alpha_1(\zeta), \ldots, \bar\alpha_\ell(\zeta)$ having the shape
Besides,
$$\det\bigl[I_\nu - \zeta J_\nu(a)\bigr] = (1 - a\zeta)^\nu \qquad (5.115)$$
and Matrix (5.114) possesses only the eigenvalue $\zeta = a^{-1}$ of multiplicity $\nu$. For $\zeta = a^{-1}$, from (5.114) we receive
$$I_\nu - a^{-1}J_\nu(a) = \begin{pmatrix} 0 & -a^{-1} & 0 & \ldots & 0 \\ 0 & 0 & -a^{-1} & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \ldots & -a^{-1} \\ 0 & 0 & 0 & \ldots & 0 \end{pmatrix} . \qquad (5.116)$$
Obviously,
$$\det\bigl[I_\nu - \zeta J_\nu(0)\bigr] = 1 ,$$
thus Matrix (5.117) is unimodular and has no elementary divisors.
Now, consider the general case, in which $A$ is expressed in the Jordan form
$$A = U \operatorname{diag}\bigl[J_{\nu_{01}}(0),\ J_{\nu_{11}}(z_1),\ \ldots,\ J_{\nu_q}(z_q)\bigr]\, U^{-1} .$$
Theorem 5.37. Suppose the forward and backward models (5.107) and (5.108) are controllable, and the sequence of the invariant polynomials different from 1 of the matrix $\tilde a(z)$ has the form (5.111). Then the sequence of invariant polynomials different from 1 of the matrix $\bar a(\zeta)$ has the form (5.112).
$$\det \tilde a(z) \sim \det(zI_p - A) .$$
$$\det \bar a(0) \neq 0 .$$
$$\det \tilde a(z) = \Delta(z) , \qquad \det \bar a(\zeta) = \bar\Delta(\zeta) \qquad (5.119)$$
and let $\deg \Delta(z) = p$. Then
$$\bar\Delta(\zeta) \sim \zeta^{\,p}\,\Delta(\zeta^{-1}) . \qquad (5.120)$$
10. As follows from the above, the set of eigenoperators of the associated backward model does not depend on the matrix $b(z)$ in (5.107), and it can be found by Formula (5.102). The reverse statement is in general not true.
Therefore, the transition from a backward model to the associated forward
model has to be considered separately.
a) For a given controllable backward model
$$w(z) = \tilde a^{-1}(z)\,\tilde b(z) = \bar a^{-1}(z^{-1})\,\bar b(z^{-1}) .$$
11. As just shown, for a known eigenoperator of the forward model $\tilde a(z)$, the set of all eigenoperators of the associated controllable backward models can be generated. When, with the aid of Formula (5.102), one eigenoperator $\bar a_0(\zeta)$ has been constructed, the set of all such operators is determined by the relation
$$\bar a(\zeta) = \eta(\zeta)\,\bar a_0(\zeta) ,$$
where $\eta(\zeta)$ is any unimodular matrix. The described procedure does not depend on the input operator $b(z)$. However, the reverse pass from an eigenoperator of a controllable backward model $\bar a(\zeta)$ to the eigenoperator $\tilde a(z)$ in general requires additional information about the input operator $\bar b(\zeta)$. In this connection, we ask for general rules for the transition from the matrix $\bar a(\zeta)$ to the matrix $\tilde a(z)$.
$$\bar a(\zeta)y^0(\zeta) = \bar b_1(\zeta)u^0(\zeta) , \qquad \bar a(\zeta)x^0(\zeta) = \bar b_2(\zeta)v^0(\zeta) \qquad (5.125)$$
Since the matrices (5.126), roughly speaking, are not strictly proper, they can be written as
$$w_1(\zeta) = \tilde w_1(\zeta) + d_1(\zeta) , \qquad w_2(\zeta) = \tilde w_2(\zeta) + d_2(\zeta) , \qquad (5.127)$$
$$\tilde w_1(\zeta) = \bar a^{-1}(\zeta)\tilde b_1(\zeta) , \qquad \tilde w_2(\zeta) = \bar a^{-1}(\zeta)\tilde b_2(\zeta) , \qquad (5.128)$$
where
$$\tilde b_1(\zeta) = \bar b_1(\zeta) - \bar a(\zeta)d_1(\zeta) , \qquad \tilde b_2(\zeta) = \bar b_2(\zeta) - \bar a(\zeta)d_2(\zeta) .$$
220 5 Fundamentals for Control of Causal Discrete-time LTI Processes
$$\bigl[\,\tilde w_1(\zeta)\ \ \tilde w_2(\zeta)\,\bigr]_l$$
$$\tilde w_2(\zeta) = C(I_q - \zeta G)^{-1}B_2 . \qquad (5.130)$$
$$\operatorname{Mdeg} \tilde w_1(\zeta) = \operatorname{Mdeg} \tilde w_2(\zeta) = \deg \det \bar a(\zeta) .$$
Hence the right side of (5.130) is a minimal standard realisation of the matrix $\tilde w_2(\zeta)$. Inserting (5.129) and (5.130) into (5.127), we arrive at
$$w_1(\zeta) = C(I_q - \zeta G)^{-1}B_1 + d_1(\zeta) , \qquad w_2(\zeta) = C(I_q - \zeta G)^{-1}B_2 + d_2(\zeta) ,$$
The matrices in the brackets possess poles only in the point $z = 0$. Thus, in the ILMFDs the matrices $\kappa_1(z)$ and $\kappa_2(z)$ are nilpotent. Applying this and (5.133), as well as Corollary 2.19, we find that the ILMFDs
$$\tilde w_1(z) = \bigl[\kappa_1(z)\,\tilde a_0(z)\bigr]^{-1} q_1(z) , \qquad \tilde w_2(z) = \bigl[\kappa_2(z)\,\tilde a_0(z)\bigr]^{-1} q_2(z)$$
12. Sometimes in the engineering literature, the pass from the original controllable forward model (5.98) to an associated backward model is made by procedures that are motivated by the SISO case. Then simply
$$\bar a(\zeta) = a(\zeta^{-1}) , \qquad \bar b(\zeta) = b(\zeta^{-1}) \qquad (5.134)$$
is applied. It is easy to see that this procedure does not work when $\det a_0 = 0$ in (5.98). In this case, we would get $\det \bar a(0) = 0$, which is impossible for a controllable backward model. If, however, $\det a_0 \neq 0$ takes place in (5.99), i.e. the original process is normal, then Formula (5.134) delivers a controllable associated backward model.
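A minimal numeric sketch of this failure, assuming a hypothetical 2x2 anomalous example with singular leading coefficient $a_0$:

```python
import numpy as np

# anomalous forward model a(q) = A0*q + A1 with singular leading coefficient A0
A0 = np.array([[1.0, 0.0],
               [0.0, 0.0]])
A1 = np.array([[0.0, 1.0],
               [0.0, 1.0]])
assert np.isclose(np.linalg.det(A0), 0.0)     # det a0 = 0: the process is anomalous

# naive SISO-style coefficient reversal: abar(zeta) = A0 + A1*zeta
def abar(zeta):
    return A0 + A1 * zeta

# abar(0) = A0 is singular, so the naive procedure cannot deliver a
# controllable backward model, for which det abar(0) != 0 is necessary
assert np.isclose(np.linalg.det(abar(0.0)), 0.0)
print("naive reversal fails for det a0 = 0")
```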
13. In the recent literature [69, 80, 115], the backward model is usually written in the form
$$a(q^{-1})y_k = b(q^{-1})u_k , \qquad (5.135)$$
where $q^{-1}$ is the right-shift operator that is inverse to the operator $q$. By definition, we have $q^{-1}y_k = y_{k-1}$.
As was demonstrated in [14], a strict foundation for using the operator $q^{-1}$ for a correct description of discrete LTI processes is connected with serious difficulties. The reason arises from the fact that the operator $q$ is only invertible over the set of two-sided infinite sequences. If, however, the equations of the LTI process (5.4) are only considered for $k \ge 0$, then the application of the operator $q^{-1}$ needs special attention. From this point of view, the application of the $\zeta$-transformation for investigating the properties of backward models seems more rigorous. Nevertheless, the description in the form (5.135) is sometimes more convenient, and it will also be used later on.
$$\|y_k\| < c\,\delta^k , \quad (k = 0, 1, \ldots)$$
is true, where $\|\cdot\|$ is a certain norm for finite-dimensional number vectors and $c$, $\delta$ are positive constants with $0 < \delta < 1$. If for the sequence $\{y\}$ such an estimate does not hold, then it is called unstable.
The homogeneous vector difference equation
$$\tilde a_0 y_{k+\rho} + \ldots + \tilde a_\rho y_k = O_{n1} \qquad (5.138)$$
is called stable, if all of its solutions are stable sequences. Equations of the form (5.138) that are not stable will be called unstable.
$$\tilde a(z) = \tilde a_0 z^\rho + \ldots + \tilde a_\rho .$$
Then, for the stability of Equation (5.138), it is necessary and sufficient that all roots of Equation (5.139) lie in the open unit disc $|z| < 1$.
Proof. Sufficiency: Let $\eta(z)$ be a unimodular matrix and
$$\hat a(z) = \eta(z)\tilde a(z) .$$
Then the equation
$$\hat a(q)y_k = O_{n1} \qquad (5.141)$$
at the same time with Equations (5.137) is stable or unstable, and the equation $\det \hat a(z) = 0$ possesses the same roots as Equation (5.139). Since the zero
input is a Taylor sequence, owing to Lemma 5.22, all solutions of Equation (5.19) are Taylor sequences. Passing in Equation (5.141) to the z-transforms, we obtain the result that for any initial conditions, the transformed solution of Equation (5.141) has the shape
$$\bar y(z) = \frac{R(z)}{\det \hat a(z)} ,$$
5.6 Stability of Discrete-time LTI Systems 223
where $R(z)$ is a polynomial vector. Besides, under Condition (5.140), the inverse z-transformation formula [1, 123] ensures that all originals according to the transforms of (5.141) must be stable. Thus the sufficiency is shown.
Necessity: It is shown that, if Equation (5.139) has one root $z_0$ with $|z_0| \ge 1$, then Equation (5.138) is unstable. Let $d$ be a constant vector, which is a solution of the equation
$$\tilde a(z_0)\,d = O_{n1} .$$
Then, we directly verify that
$$y_k = z_0^k\, d , \quad (k = 0, 1, \ldots)$$
is a solution of Equation (5.138), which is not a stable sequence.
Theorem 5.47. For the stability of Equation (5.142), it is necessary and sufficient that the characteristic polynomial $\det \bar a(\zeta)$ possesses no roots in the closed unit disc $|\zeta| \le 1$.
Theorem 5.49. Equations (5.138) and (5.142) are stable, if and only if the rational matrices $\tilde a^{-1}(z)$ and $\bar a^{-1}(\zeta)$ are stable.
$$\tilde a^{-1}(z) = \frac{\operatorname{adj} \tilde a(z)}{d_{\tilde a\,\mathrm{min}}(z)} , \qquad (5.143)$$
where $d_{\tilde a\,\mathrm{min}}(z)$ is the minimal polynomial of the matrix $\tilde a(z)$. Since the set of roots of the polynomial $d_{\tilde a\,\mathrm{min}}(z)$ contains all roots of the polynomial $\det \tilde a(z)$, the matrices $\tilde a(z)$ and $\tilde a^{-1}(z)$ are at the same time stable or unstable. The same can be said about the matrices $\bar a(\zeta)$ and $\bar a^{-1}(\zeta)$.
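The root criterion and the resulting decay of all solutions can be illustrated numerically. The sketch below assumes a hypothetical first-order matrix difference equation; for it, the roots of $\det \tilde a(z)$ are exactly the eigenvalues of $-A_1$.

```python
import numpy as np

# homogeneous difference equation  y_{k+1} + A1 y_k = 0,  i.e.  atilde(z) = I z + A1
A1 = np.array([[-0.5,  0.2],
               [ 0.0, -0.3]])

# det atilde(z) = det(zI + A1); its roots are the eigenvalues of -A1
roots = np.linalg.eigvals(-A1)
assert np.all(np.abs(roots) < 1.0)      # stability condition: all roots inside the unit disc

# every solution of the recursion is then a stable (geometrically decaying) sequence
y = np.array([1.0, -2.0])
for _ in range(200):
    y = -A1 @ y
assert np.linalg.norm(y) < 1e-10
print("all roots inside the unit disc; solutions decay")
```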
6. Hitherto, the forward model (5.107) and the backward model (5.108) have been called stable, when the matrices $\tilde a(z)$ and $\bar a(\zeta)$ are stable. For the considered class of systems, this definition is de facto equivalent to asymptotic stability in the sense of Lyapunov.
Theorem 5.50. Let the forward model (5.107) and the associated backward model (5.108) be controllable. Then for the stability of the corresponding models, it is necessary and sufficient that their transfer matrices $w(z)$ resp. $\bar w(\zeta)$ are stable.
$$w(z) = \frac{\operatorname{adj} \tilde a(z)\,\tilde b(z)}{d_{\tilde a\,\mathrm{min}}(z)} .$$
Under the made suppositions, this matrix is irreducible, a fact that arises from Theorem 2.42. Thus, the matrices $\tilde a(z)$ and $w(z)$ are either both stable or both unstable. This proves Theorem 5.50 for forward models. The proof for backward models runs analogously.
5.7 Closed-loop LTI Systems of Finite Dimension 225
[Diagram: block $L$ with the inputs $\{g_k\}$, $\{u_k\}$ and the output $\{y_k\}$]
Fig. 5.2. Process with two inputs
where $a(q) \in \mathbb{R}^{n\times n}[q]$, $b(q) \in \mathbb{R}^{n\times m}[q]$, $f(q) \in \mathbb{R}^{n\times \ell}[q]$. In future, we will only consider non-singular processes, for which $\det a(q) \not\equiv 0$ is true. When this condition is ensured, the rational matrices
$$w(q) = a^{-1}(q)b(q) , \qquad w_g(q) = a^{-1}(q)f(q) \qquad (5.145)$$
are defined, and they will be called the control and disturbance transfer matrix, respectively.
For the further investigations, we always suppose the following assumptions:
A1 The matrix $w_g(q)$ is at least proper, i.e. the process is causal with respect to the input $\{g\}$.
A2 The matrix $w(q)$ is strictly proper, i.e. the process is strictly causal with respect to the input $\{u\}$. This assumption is motivated by the following reasons:
a) In further considerations, only such kinds of models will occur.
b) This assumption enormously simplifies the answer to the question about the causality of the controller.
c) It can be shown that, when the matrix $w(q)$ is only proper, the closed-loop system contains de facto algebraic loops, which cannot appear in real sampled-data control systems [19].
The process (5.144) is called controllable by the control input, if for all finite $q$
$$\operatorname{rank} R_h(q) = \operatorname{rank}\,\bigl[\,a(q)\ \ b(q)\,\bigr] = n , \qquad (5.146)$$
[Diagram: closed-loop system, the process $L$ with inputs $\{g_k\}$, $\{u_k\}$, output $\{y_k\}$, and the controller $R$ in the feedback path]
with $\alpha(q) \in \mathbb{R}^{m\times m}[q]$, $\beta(q) \in \mathbb{R}^{m\times n}[q]$.
Together with Equation (5.144), this yields a model of the closed-loop system:
$$a(q)y_k - b(q)u_k = f(q)g_k , \qquad \beta(q)y_k + \alpha(q)u_k = O_{m1} . \qquad (5.147)$$
$$w_d(q) = \alpha^{-1}(q)\beta(q)$$
4. For the solution of many control problems, it is suitable to use the associated backward model additionally to the forward model (5.144) of the process. We will give a general approach for the design of such models, which supposes the controllability of the process by the control input. For this reason, we write (5.144) in the form
$$y_k = w(q)u_k + w_g(q)g_k ,$$
where
$$\bar w(\zeta) = w(\zeta^{-1}) , \qquad \bar w_g(\zeta) = w_g(\zeta^{-1}) .$$
When we have the ILMFD
Lemma 5.51. Let the process (5.144) be controllable by the control input. Then the matrix
$$\bar b_g(\zeta) = \bar a(\zeta)\,\bar w_g(\zeta) \qquad (5.152)$$
turns out to be a polynomial matrix.
$$\bigl[\,w_g(q)\ \ w(q)\,\bigr]_l , \qquad (5.153)$$
$$w(q) = C(qI_p - A)^{-1}B + D ,$$
$$C(I_p - \zeta A)^{-1} = \bar a_1^{-1}(\zeta)\,\bar b_1(\zeta) , \qquad (5.155)$$
turns out as an ILMFD of the matrix $\bar w(\zeta)$. Hence the right side of (5.150) is also an ILMFD of the matrix $\bar w(\zeta)$, such that
$$\bar a(\zeta) = \eta(\zeta)\,\bar a_1(\zeta)$$
$$\bar w_g(\zeta) = w_g(\zeta^{-1}) = C(I_p - \zeta A)^{-1}B_g + D_g$$
is a polynomial matrix.
4. Due to the supposed strict causality of the process with respect to the control input, the conditions
hold. That is why, for further considerations, the associated backward model of the process is written in the form
where the first condition in (5.156) is ensured. Starting with the backward model of the process (5.157), the controller is attempted in the form
with
$$\det \alpha(0) \neq 0 .$$
When we put this together with (5.157), we obtain the backward model of the closed-loop system
$$\bar a(\zeta)y_k - \bar b(\zeta)u_k = \bar b_g(\zeta)g_k , \qquad \beta(\zeta)y_k + \alpha(\zeta)u_k = O_{m1} . \qquad (5.159)$$
Besides, the characteristic matrix of the backward model of the closed-loop system $Q_l(\zeta, \alpha, \beta)$ takes the form
$$Q_l(\zeta, \alpha, \beta) = \begin{pmatrix} \bar a(\zeta) & -\bar b(\zeta) \\ \beta(\zeta) & \alpha(\zeta) \end{pmatrix} . \qquad (5.160)$$
Introduce the extended output vector
$$\xi_k = \begin{pmatrix} y_k \\ u_k \end{pmatrix} ,$$
so Equations (5.159) might be written in form of the backward model
$$Q_l(\zeta, \alpha, \beta)\,\xi_k = \bar B(\zeta)g_k , \qquad \bar B(\zeta) = \begin{pmatrix} \bar b_g(\zeta) \\ O \end{pmatrix} .$$
The polynomial
$$\bar\Delta(\zeta) = \det Q_l(\zeta, \alpha, \beta) \qquad (5.163)$$
is called the characteristic polynomial of the system (5.159), and the polynomial
$$\Delta(\zeta) = \det Q(\zeta, \alpha, \beta)$$
the characteristic polynomial of the system (5.161).
Theorem 5.52. For the stability of the systems (5.159) or (5.161), it is necessary and sufficient that the characteristic polynomials $\bar\Delta(\zeta)$ or $\Delta(\zeta)$, respectively, are stable.
Proof. The proof immediately follows from Theorems 5.46 and 5.47.
Corollary 5.53. Any stabilising controller for the processes (5.157) or (5.161) is causal, i.e.
$$\det \alpha(0) \neq 0 . \qquad (5.164)$$
Proof. When the system (5.159) is stable, due to Theorem 5.46, we have $\bar\Delta(0) \neq 0$, hence (5.164) is true. The proof for the system (5.161) runs analogously.
Corollary 5.54. Any stabilising controller $(\alpha(\zeta), \beta(\zeta))$ for the systems (5.159) or (5.161) possesses a transfer matrix
$$w_d(\zeta) = \alpha^{-1}(\zeta)\beta(\zeta) ,$$
$$w^0(\zeta) = Q_l^{-1}(\zeta, \alpha, \beta)\,\bar B(\zeta) ,$$
$$w(\zeta) = Q^{-1}(\zeta, \alpha, \beta)\,B(\zeta) ,$$
$$w_d(\zeta) = \alpha_l^{-1}(\zeta)\beta_l(\zeta) = \beta_r(\zeta)\alpha_r^{-1}(\zeta)$$
exist, then the pairs $(\alpha_l(\zeta), \beta_l(\zeta))$ and $[\alpha_r(\zeta), \beta_r(\zeta)]$ are left and right models of the controller, respectively. As above, we introduce the concepts of
controllable left and right controller models. The matrices
$$Q_l(\zeta, \alpha_l, \beta_l) = \begin{pmatrix} a_l(\zeta) & -b_l(\zeta) \\ \beta_l(\zeta) & \alpha_l(\zeta) \end{pmatrix} , \qquad Q_r(\zeta, \alpha_r, \beta_r) = \begin{pmatrix} \alpha_r(\zeta) & -b_r(\zeta) \\ \beta_r(\zeta) & a_r(\zeta) \end{pmatrix} \qquad (5.167)$$
Lemma 5.56. Let $(a_l(\zeta), b_l(\zeta))$, $[a_r(\zeta), b_r(\zeta)]$ as well as $(\alpha_l(\zeta), \beta_l(\zeta))$, $[\alpha_r(\zeta), \beta_r(\zeta)]$ be irreducible left and right models of the process or controller, respectively. Then
Due to the supposed irreducibility, we obtain for the left and right models
From Lemma 5.56, it arises that the design problems for left and right models of stabilising controllers are in principle equivalent.
Theorem 5.57. Let the process be controllable by the control input, and let Relations (5.165) and (5.166) determine controllable left and right IMFDs. Then a necessary and sufficient condition for the pair $(\alpha_l(\zeta), \beta_l(\zeta))$ to be a left model of a stabilising controller is that the matrices $\alpha_l(\zeta)$ and $\beta_l(\zeta)$ satisfy the relation
where $D_l(\zeta)$ is any stable polynomial matrix. For the pair $[\alpha_r(\zeta), \beta_r(\zeta)]$ to be a right model of a stabilising controller, it is necessary and sufficient that the matrices $\alpha_r(\zeta)$ and $\beta_r(\zeta)$ fulfill the relation
$$a_l(\zeta)\alpha_r^0(\zeta) - b_l(\zeta)\beta_r^0(\zeta) = I_n .$$
$$F_1(\zeta)a_r(\zeta) - F_2(\zeta)b_r(\zeta) = I_m , \qquad a_l(\zeta)G_1(\zeta) - b_l(\zeta)G_2(\zeta) = I_n . \qquad (5.174)$$
$$w_F(\zeta) = a_F^{-1}(\zeta)\,b_F(\zeta) = a_F^{-1}(\zeta)\,\bigl[\,d_1(\zeta)\ \ d_2(\zeta)\,\bigr] ,$$
$$d_1(\zeta)a_r(\zeta) - d_2(\zeta)b_r(\zeta) = a_F(\zeta) ,$$
the pair $(d_1(\zeta), d_2(\zeta))$, owing to Theorem 5.57, is a stabilising controller with the transfer function
$$w_d(\zeta) = d_1^{-1}(\zeta)d_2(\zeta) = F_1^{-1}(\zeta)F_2(\zeta) .$$
Theorem 5.59. Let the pair $(a_l(\zeta), b_l(\zeta))$ be irreducible and let $(\alpha_l^0(\zeta), \beta_l^0(\zeta))$ be an arbitrary basic controller, such that the matrix $Q_l(\zeta, \alpha_l^0, \beta_l^0)$ becomes unimodular. Then the set of all stabilising left controllers $(\alpha_l(\zeta), \beta_l(\zeta))$ for the system (5.159) is determined by the relations
$$\alpha_l(\zeta) = D_l(\zeta)\alpha_l^0(\zeta) - M_l(\zeta)b_l(\zeta) , \qquad \beta_l(\zeta) = D_l(\zeta)\beta_l^0(\zeta) + M_l(\zeta)a_l(\zeta) ,$$
Theorem 5.60. Let the pairs $(a_l(\zeta), b_l(\zeta))$ and $(\alpha_l(\zeta), \beta_l(\zeta))$ be irreducible and let the matrix $Q_l^{-1}(\zeta, \alpha_l, \beta_l)$ be represented in the form
$$Q_l^{-1}(\zeta, \alpha_l, \beta_l) = \begin{pmatrix} V_1(\zeta) & q_{12}(\zeta) \\ V_2(\zeta) & q_{21}(\zeta) \end{pmatrix} . \qquad (5.175)$$
$$Q_l(\zeta, \alpha_l, \beta_l) = N_l(\zeta)\,Q_l(\zeta, \alpha_l^0, \beta_l^0) , \qquad (5.176)$$
where
$$N_l(\zeta) = \begin{pmatrix} I_n & O_{nm} \\ M_l(\zeta) & D_l(\zeta) \end{pmatrix} . \qquad (5.177)$$
Inverting the matrices in Relation (5.176), we arrive at
$$Q_l^{-1}(\zeta, \alpha_l, \beta_l) = Q_l^{-1}(\zeta, \alpha_l^0, \beta_l^0)\,N_l^{-1}(\zeta) .$$
$$V_1(\zeta) = \alpha_r^0(\zeta) - b_r(\zeta)\Theta(\zeta) , \qquad V_2(\zeta) = \beta_r^0(\zeta) + a_r(\zeta)\Theta(\zeta) , \qquad (5.181)$$
or equivalently
$$\begin{pmatrix} V_1(\zeta) \\ V_2(\zeta) \end{pmatrix} = Q_r(\zeta, \alpha_r^0, \beta_r^0) \begin{pmatrix} I_n \\ \Theta(\zeta) \end{pmatrix} .$$
From this equation and (5.178), we generate
$$\begin{pmatrix} I_n \\ \Theta(\zeta) \end{pmatrix} = Q_l(\zeta, \alpha_l^0, \beta_l^0) \begin{pmatrix} V_1(\zeta) \\ V_2(\zeta) \end{pmatrix} ,$$
thus we read
$$\Theta(\zeta) = \beta_l^0(\zeta)V_1(\zeta) + \alpha_l^0(\zeta)V_2(\zeta) .$$
When the matrices $V_1(\zeta)$ and $V_2(\zeta)$ are stable, then the matrix $\Theta(\zeta)$ is also stable.
Furthermore, notice that the pair $(\alpha_l(\zeta), \beta_l(\zeta))$ is irreducible, because the pair $(D_l(\zeta), M_l(\zeta))$ is also irreducible. Hence Equation (5.180) defines an ILMFD of the matrix $\Theta(\zeta)$. But the matrix $\Theta(\zeta)$ is stable and therefore, $D_l(\zeta)$ is also stable. Since also the matrix $Q_l^{-1}(\zeta, \alpha_l, \beta_l)$ is stable, the blocks in (5.179) must be stable. Hence, owing to Theorem 5.49, it follows that the matrix $Q_l(\zeta, \alpha_l, \beta_l)$ is stable and consequently, the controller $(\alpha_l(\zeta), \beta_l(\zeta))$ is stabilising.
$$a_l(\zeta)V_1(\zeta) - b_l(\zeta)V_2(\zeta) = I_n , \qquad \beta_l(\zeta)V_1(\zeta) + \alpha_l(\zeta)V_2(\zeta) = O_{mn} .$$
4. On the basis of Theorem 4.24, the stabilisation problem can be solved for the system (5.159) even in those cases, when the pair $(\bar a(\zeta), \bar b(\zeta))$ is reducible.
Theorem 5.62. Suppose in (5.159)
with a latent matrix $\Lambda(\zeta)$ and the irreducible pair $(a_1(\zeta), b_1(\zeta))$. Then, if the matrix $\Lambda(\zeta)$ is unstable, the system (5.159) can never be stabilised by a feedback of the form (5.158), i.e. the process (5.157) is not stabilisable. However, if the matrix $\Lambda(\zeta)$ is stable, then there exists for this process a set of stabilising controllers, i.e. the process is stabilisable. The corresponding set of stabilising controllers coincides with the set of stabilising controllers of the irreducible pair $(a_1(\zeta), b_1(\zeta))$.
$$w(\zeta) = c(\zeta)\,a^{-1}(\zeta)\,b(\zeta) , \qquad w(\zeta) = p^{-1}(\zeta)\,q(\zeta) . \qquad (5.185)$$
$$\det Q(\zeta, \alpha, \beta) \sim d_+(\zeta) ,$$
6. The results in Sections 4.7 and 4.8, together with the design of the set of stabilising controllers, allow us to obtain at the same time information about the structure of the set of invariant polynomials of the characteristic matrices (5.160) and (5.162). For instance, in the case of Theorem 5.57 or 5.63, the first $n$ invariant polynomials $a_1(\zeta), \ldots, a_n(\zeta)$ of the matrices (5.160) are equal to 1, and the set of the remaining invariant polynomials $a_{n+1}(\zeta), \ldots, a_{n+m}(\zeta)$ coincides with the set of invariant polynomials of the matrix $D_l(\zeta)$.
with
$$a_{l1}(\zeta) = \sum_{k=0}^{r} A_k\zeta^k , \qquad b_{l1}(\zeta) = \sum_{k=0}^{r} B_k\zeta^k ,$$
the closed-loop system with the disturbed process (5.186) and the controller $(\alpha_l(\zeta), \beta_l(\zeta))$ remains stable.
Proof. The characteristic matrix of the closed-loop system with the disturbed process has the form
$$Q_{l1}(\zeta, \alpha_l, \beta_l) = \begin{pmatrix} a_l(\zeta) + a_{l1}(\zeta) & -[b_l(\zeta) + b_{l1}(\zeta)] \\ \beta_l(\zeta) & \alpha_l(\zeta) \end{pmatrix} .$$
$$\det Q_{l1}(\zeta, \alpha_l, \beta_l) = \Delta_1(\zeta) = \Delta(\zeta) + \Delta_2(\zeta) ,$$
where
$$\Delta(\zeta) = \det \begin{pmatrix} a_l(\zeta) & -b_l(\zeta) \\ \beta_l(\zeta) & \alpha_l(\zeta) \end{pmatrix} \qquad (5.188)$$
is the characteristic polynomial of the undisturbed system, and $\Delta_2(\zeta)$ is a polynomial, the coefficients of which tend to zero for $\epsilon \to 0$. Denote
$$\Delta_2(\zeta) = d_0 + d_1\zeta + \ldots + d_\nu\zeta^\nu ,$$
Comparing this and (5.188), we realise that for any point of the unit circle $|\zeta| = 1$
$$|\Delta_2(\zeta)| < |\Delta(\zeta)| ,$$
and from the Theorem of Rouché [171], it arises that the polynomials $\Delta(\zeta)$ and $\Delta(\zeta) + \Delta_2(\zeta)$ have the same number of zeros inside the unit disc. Hence the stability of the polynomial $\Delta(\zeta)$ implies the stability of the polynomial $\Delta_1(\zeta)$.
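The insensitivity argument via Rouché's theorem can be illustrated numerically: a small perturbation of the coefficients leaves the number of roots inside the unit disc unchanged. The polynomials below are hypothetical.

```python
import numpy as np

def n_roots_in_unit_disc(coeffs_ascending):
    # number of roots with |zeta| < 1; np.roots expects descending coefficients
    r = np.roots(coeffs_ascending[::-1])
    return int(np.sum(np.abs(r) < 1.0))

delta = np.array([1.0, -0.2, 0.1])     # roots at 1 +- 3j: none in the closed unit disc
assert n_roots_in_unit_disc(delta) == 0

# small coefficient perturbation (|delta2| < |delta| on the unit circle)
eps = 1e-3
delta2 = eps * np.array([1.0, 1.0, 1.0])
assert n_roots_in_unit_disc(delta + delta2) == 0   # root count unchanged (Rouche)
print("perturbed polynomial remains stable")
```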
Remark 5.66. It can be shown that for the solution of the stabilisation problem in the case of forward models of the closed-loop systems (5.147), an analogous statement with respect to the insensitivity of the solution of the stabilisation problem cannot be derived.
Part III
This section presents some auxiliary relations that are needed for the further developments.
$$y = w(p)x , \qquad (6.1)$$
where $p = \frac{d}{dt}$ is the differential operator, $w(p) \in \mathbb{R}^{n\times m}(p)$ is a rational matrix and $x = x(t)$, $y = y(t)$ are vectors of dimensions $m\times 1$, $n\times 1$, respectively. The process is symbolically presented in Fig. 6.1. In the following, the matrix
[Diagram, Fig. 6.1: $x \rightarrow w(p) \rightarrow y$]
Assuming an exp.per. input signal $x(t)$ (6.2), this section handles the existence problem for an exp.per. output signal of the process (6.1), i.e.
3.
Lemma 6.1. Let the matrix $w(p)$ be given in the standard form
$$w(p) = \frac{N(p)}{d(p)}$$
with $N(p) \in \mathbb{R}^{n\times m}[p]$ and the scalar polynomial
$$d(p) = (p - p_1)^{\nu_1} \cdots (p - p_q)^{\nu_q} , \qquad \nu_1 + \ldots + \nu_q = r . \qquad (6.4)$$
Furthermore, suppose
$$x(t) = x_s(t) = Xe^{st} , \qquad (6.5)$$
where $X \in \mathbb{C}^{m\times 1}$ is a constant vector and $s$ is a complex number with
$$s \neq p_i , \quad (i = 1, \ldots, q) . \qquad (6.6)$$
$$Y(s) = w(s)X$$
and
$$y_s(t) = w(s)Xe^{st} . \qquad (6.8)$$
Proof. Suppose a certain ILMFD
$$w(s) = a_l^{-1}(s)\,b_l(s)$$
with $a_l(s) \in \mathbb{R}^{n\times n}[s]$, $b_l(s) \in \mathbb{R}^{n\times m}[s]$. Then Relation (6.1) is equivalent to the differential equation
$$a_l\!\left(\frac{d}{dt}\right) y = b_l\!\left(\frac{d}{dt}\right) x . \qquad (6.9)$$
Relations (6.5) and (6.7) should hold, and the vectors $x_s(t)$ and $y_s(t)$ should determine special solutions of Equation (6.9). Due to
$$a_l\!\left(\frac{d}{dt}\right) y_s(t) = a_l(s)Y(s)e^{st} , \qquad b_l\!\left(\frac{d}{dt}\right) x_s(t) = b_l(s)Xe^{st} ,$$
6.1 Response of Linear Continuous-time Processes to Exponential-periodic Signals 243
the condition
$$a_l(s)Y(s) = b_l(s)X \qquad (6.10)$$
must be satisfied. Owing to the properties of ILMFDs, the eigenvalues of the matrix $a_l(s)$ turn out as the roots of the polynomial (6.4), but possibly with higher multiplicity. Thus, (6.6) implies $\det a_l(s) \neq 0$, and from (6.10) we derive
$$Y(s) = a_l^{-1}(s)\,b_l(s)X = w(s)X ,$$
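Lemma 6.1 can be verified directly for a scalar example: for the hypothetical choice $w(p) = 1/(p+2)$, the output $y_s(t) = w(s)Xe^{st}$ satisfies the underlying differential equation, since the derivative of the exponential is known exactly.

```python
import numpy as np

# hypothetical process w(p) = 1/(p + 2), exponential input x(t) = X e^{st}
a_coef = 2.0
s = -0.5 + 1.0j                    # s != pole (-2), condition (6.6)
X = 3.0

def w(p):
    return 1.0 / (p + a_coef)

Y = w(s) * X                       # (6.7)
for t in np.linspace(0.0, 5.0, 11):
    x = X * np.exp(s * t)
    y = Y * np.exp(s * t)          # (6.8): y_s(t) = w(s) X e^{st}
    ydot = s * y                   # exact derivative of the exponential
    assert np.isclose(ydot + a_coef * y, x)   # the ODE  y' + 2y = x  holds
print("exponential response verified")
```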
4. The question about the existence of an exp.per. output signal with the
same exponent and the same period is investigated.
Theorem 6.2. Let the transfer function of the process (6.1) be strictly proper, let the input signal have the form (6.2), and for all $k$, $(k = 0, \pm 1, \ldots)$ let the relations
$$s + kj\omega \neq p_i , \quad (i = 1, \ldots, q) , \qquad \omega = 2\pi/T , \quad j = \sqrt{-1} \qquad (6.13)$$
be valid. Then there exists a unique exp.per. output of the form (6.3) with
$$y_T(t) = \int_0^T \varphi_w(T, s, t - \tau)\,x_T(\tau)\, d\tau , \qquad (6.14)$$
where
$$x_k = \frac{1}{T}\int_0^T x_T(\tau)\,e^{-kj\omega\tau}\, d\tau . \qquad (6.16)$$
Then we obtain
$$x(t) = \sum_{k=-\infty}^{\infty} x_k\, e^{(s+kj\omega)t} ,$$
where
$$y_T(t) = \sum_{k=-\infty}^{\infty} w(s + kj\omega)\,x_k\, e^{kj\omega t} . \qquad (6.18)$$
Under our suppositions, the series (6.15) converges. Hence, due to the general properties of Fourier series [171], the order of summation and integration can be exchanged. Thus, we obtain Formula (6.14).
It remains to show the uniqueness of the above generated exp.per. solution.
Assume the existence of a second exp.per. output
in addition to the solution (6.3). Then the difference $\epsilon(t) = y(t) - y_1(t)$ is a solution of the homogeneous equation (6.12) with exponent $s$ and period $T$. But from (6.13) it emerges that Equation (6.12) does not possess solutions different from zero. Thus, $\epsilon(t) = 0$ and hence the exp.per. solutions (6.3) and (6.14) coincide.
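Formula (6.18) can be checked for an input whose periodic factor has a finite Fourier series. The first-order process and the numbers below are hypothetical; each harmonic satisfies the differential equation because $(p + a)w(p) = 1$.

```python
import numpy as np

a_coef = 1.0                       # hypothetical process w(p) = 1/(p + 1)
T = 2.0
omega = 2*np.pi/T
s = -0.3 + 0.4j

def w(p):
    return 1.0/(p + a_coef)

# T-periodic factor x_T(t) = 1 + cos(omega t): Fourier coefficients x_0 = 1, x_{+-1} = 1/2
xk = {0: 1.0, 1: 0.5, -1: 0.5}

for t in np.linspace(0.0, 2*T, 9):
    x = y = ydot = 0.0
    for k, c in xk.items():
        e = np.exp((s + 1j*k*omega)*t)
        x    += c*e                                   # input harmonic
        y    += w(s + 1j*k*omega)*c*e                 # output harmonic, formula (6.18)
        ydot += (s + 1j*k*omega)*w(s + 1j*k*omega)*c*e
    assert np.isclose(ydot + a_coef*y, x)             # y' + y = x holds termwise
print("exp.per. response verified")
```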
5. In the following, the series (6.15) is called the displaced pulse frequency response, abbreviated DPFR. This notion has a physical interpretation. Let $\delta(t)$ be the Dirac impulse and
6.2 Response of Open SD Systems to Exp.per. Inputs 245
$$\delta_T(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT)$$
be a periodic pulse sequence. Then, it is well known [159] that the function $\delta_T(t)$ can be developed in a generalised Fourier series
$$\delta_T(t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} e^{kj\omega t} .$$
Hence the DPFR $\varphi_w(T, s, t)$ is related to the response of the process (6.1) to an exponentially modulated sequence of unit impulses (6.19).
[Diagram, Fig. 6.2, digital control unit (DCU): $y \rightarrow$ ADC $\rightarrow \{\psi\} \rightarrow$ ALG $\rightarrow \{\upsilon\} \rightarrow$ DAC $\rightarrow v$]
The number $T > 0$, arising in (6.20), is named the sampling period or the period of time quantisation.
The block ALG in Fig. 6.2 stands for the control program or the control algorithm. If confusion is excluded, also the short name controller is used. It calculates from the sequence $\{\psi\}$ a new sequence $\{\upsilon\}$ with elements $\upsilon_k$, $(k = 0, 1, \ldots)$. The ALG is a causal discrete LTI object, which is described for instance by its forward model
$$\alpha_0\upsilon_{k+\rho} + \alpha_1\upsilon_{k+\rho-1} + \ldots + \alpha_\rho\upsilon_k = \beta_0\psi_{k+\rho} + \beta_1\psi_{k+\rho-1} + \ldots + \beta_\rho\psi_k \qquad (6.21)$$
$$\alpha_0\upsilon_k + \alpha_1\upsilon_{k-1} + \ldots + \alpha_\rho\upsilon_{k-\rho} = \beta_0\psi_k + \beta_1\psi_{k-1} + \ldots + \beta_\rho\psi_{k-\rho} . \qquad (6.22)$$
Finally in Fig. 6.2, the block DAC is the digital-to-analog converter, which transforms a discrete sequence $\{\upsilon\}$ into a continuous-time signal $v(t)$ by the relation
$$v(t) = m(t - kT)\,\upsilon_k , \qquad kT < t < (k+1)T . \qquad (6.23)$$
In (6.23), $m(t)$ is a given function on the interval $0 < t < T$, which is named the form function, because it establishes the shape of the control pulses [148]. In what follows, we always suppose that the function $m(t)$ is of bounded variation on the interval $0 \le t \le T$.
with the exponent s and the period T , which coincides with the time quanti-
sation period. We search for an exp.per. output of the form
At first, notice a special feature, when an exp.per. signal (6.24) is sent through a digital control unit. If (6.24) and (6.20) are valid, we namely obtain
$$\psi_k = e^{ksT}\psi_0 , \qquad \psi_0 = y_T(0) .$$
The result would be the same, if instead of the input $y(t)$ the exponential signal
$$y_s(t) = e^{st}y_T(0)$$
would be considered. The equivalence of the last two equations shows the so-called stroboscopic property of a digital control unit.
The awareness of the stroboscopic property makes it possible to connect the response of the digital control unit to an exp.per. excitation with its response to an exponential signal.
3. In connection with the above, consider the design task for a solution of Equations (6.20)-(6.23) under the conditions
Assume at first
$$m(t) = 1 , \qquad 0 \le t < T . \qquad (6.27)$$
Then from (6.23) and (6.25), we obtain
Considering $t \to kT + 0$, we find
where
$$g(s) = v_T(s, +0)$$
is an unknown vector function. The equality
$$\psi_k = y(kT) = e^{ksT}y_0$$
emerges from (6.28), so after inserting this and (6.29) into (6.22), we receive
$$\bigl(\alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT}\bigr)\,g(s) = \bigl(\beta_0 + \beta_1 e^{-sT} + \ldots + \beta_\rho e^{-\rho sT}\bigr)\,y_0 \qquad (6.30)$$
where
$$\alpha(s) = \alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT} , \qquad \beta(s) = \beta_0 + \beta_1 e^{-sT} + \ldots + \beta_\rho e^{-\rho sT}$$
are polynomial matrices in the variable $e^{-sT}$. Hereinafter, this notation means that the corresponding function depends on $e^{-sT}$. For $\det \alpha(s) \not\equiv 0$, from (6.30) we obtain
$$g(s) = w_d(s)\,y_0 , \qquad (6.31)$$
where
$$w_d(s) = \alpha^{-1}(s)\,\beta(s) .$$
248 6 Parametric Discrete-time Models of Continuous-time Multivariable Processes
which implies
$$v_T(s, t) = w_d(s)\,y_0\,e^{-s(t - kT)} , \qquad kT < t < (k+1)T .$$
4. Using (6.32), we are able to obtain the general solution for the case, when $m(t)$ is an arbitrary given function on the interval $0 \le t \le T$. Then instead of (6.32), the formula
$$v_T(s, t) = w_d(s)\,y_0\,e^{-st}m(t) , \quad 0 < t < T , \qquad v_T(s, t) = v_T(s, t+T) \qquad (6.33)$$
comes up. As a result of the above considerations, the following theorem has been proven.
Theorem 6.3. Let the input of the digital control unit (6.20)-(6.23) be the continuous-time signal (6.24). Furthermore, suppose
$$\det \alpha(s) = \det\bigl[\alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT}\bigr] \not\equiv 0 . \qquad (6.34)$$
Remark 6.4. It can be shown that under Supposition (6.34) the obtained
exp.per. solution with the exponent s and the period T is unique.
Remark 6.5. The obtained solution does not depend on the values of $y(t)$ for $0 < t < T$, but only on $y_0 = y_T(0)$. The stroboscopic property expresses itself in this way.
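The exp.per. response (6.31) of a control algorithm can be confirmed by running the difference equation (6.22) directly against the exponential input. The first-order coefficients below are hypothetical.

```python
import numpy as np

# hypothetical first-order control algorithm:  a0 u_k + a1 u_{k-1} = b0 p_k + b1 p_{k-1}
a0, a1 = 1.0, -0.4
b0, b1 = 0.5, 0.2
T = 1.0
s = -0.2 + 0.9j
y0 = 2.0

q = np.exp(-s*T)                              # e^{-sT}
wd = (b0 + b1*q) / (a0 + a1*q)                # controller transfer function, cf. (6.31)

# exponential input psi_k = e^{ksT} y0; the exp.per. output is u_k = wd e^{ksT} y0
psi = lambda k: np.exp(k*s*T)*y0
u_prev = wd*psi(0)
for k in range(1, 30):
    u_k = (b0*psi(k) + b1*psi(k-1) - a1*u_prev)/a0
    assert np.isclose(u_k, wd*psi(k))         # stroboscopic exp.per. solution
    u_prev = u_k
print("digital controller exp.per. response verified")
```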
where
$$\mu_k(s) = \frac{1}{T}\int_0^T \varphi_\mu(T, s, \tau)\,e^{-kj\omega\tau}\, d\tau .$$
Now, introduce the function
$$\mu(s) = \int_0^T e^{-s\tau}m(\tau)\, d\tau , \qquad (6.38)$$
which is called the transfer function of the form element. Thus from (6.36) and (6.38), we obtain
$$\mu_k(s) = \frac{1}{T}\int_0^T e^{-(s+kj\omega)\tau}m(\tau)\, d\tau = \frac{1}{T}\,\mu(s + kj\omega) ,$$
so Formula (6.37) reads
$$\varphi_\mu(T, s, t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} \mu(s + kj\omega)\,e^{kj\omega t} , \qquad (6.39)$$
$$\mu(s) = \frac{1 - e^{-sT}}{s} .$$
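Formula (6.38) and the zero-order-hold expression for $\mu(s)$ can be cross-checked by numerical integration; the sampling period and the point $s$ below are arbitrary.

```python
import numpy as np

# transfer function of the form element (6.38): mu(s) = int_0^T e^{-s tau} m(tau) d tau,
# evaluated here by the trapezoidal rule
def mu_numeric(s, m, T, n=20001):
    tau = np.linspace(0.0, T, n)
    f = np.exp(-s*tau)*m(tau)
    h = tau[1] - tau[0]
    return h*(f.sum() - 0.5*f[0] - 0.5*f[-1])

T = 1.5
s = 0.7 + 1.3j
# zero-order hold m(t) = 1 on [0, T) gives mu(s) = (1 - e^{-sT})/s
zoh = mu_numeric(s, lambda t: np.ones_like(t), T)
assert np.isclose(zoh, (1 - np.exp(-s*T))/s, atol=1e-6)
print("ZOH form element verified")
```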
6. Let us now consider the more general question about the passage of exp.per. signals through the open sampled-data system of Fig. 6.3, where DCU is a digital control unit described by Equations (6.20)-(6.23) and $L(p)$ is a continuous-time LTI process of the form (6.1) with the transfer function $w(p)$ that is at least proper. The problem amounts to the solution of the
[Diagram, Fig. 6.3: $g \rightarrow$ DCU $\rightarrow x \rightarrow L(p) \rightarrow y$]
hold. In order to solve the just stated problem, we point out that, owing to the stroboscopic property, we can restrict ourselves to exponential inputs of the form $g_s(t) = e^{st}g_T(0)$. Hence instead of (6.40), the equivalent task with
The exp.per. signal (6.42) acts as input to the continuous-time process. With regard to (6.39), this input might be written in the form
$$x(t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} \mu(s + kj\omega)\,e^{(s+kj\omega)t}\, w_d(s)\,g_0 . \qquad (6.43)$$
with
$$\varphi_{w\mu}(T, s, t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} w(s + kj\omega)\,\mu(s + kj\omega)\,e^{kj\omega t} . \qquad (6.45)$$
Denote
$$G(s) = w(s)\mu(s) , \qquad (6.46)$$
so Formula (6.45) can be written as the DPFR
$$\varphi_{w\mu}(T, s, t) = \varphi_G(T, s, t) .$$
$$\bar A(\lambda) = \lambda I_p - A$$
6.3 Functions of Matrices 251
$$d_A(\lambda) = \det \bar A(\lambda) = \det(\lambda I_p - A)$$
$$(\lambda I_p - A)^{-1} = \frac{\operatorname{adj}(\lambda I_p - A)}{d_{A\,\mathrm{min}}(\lambda)} , \qquad (6.47)$$
where the numerator is the monic adjoint matrix. Compatible with earlier results, Matrix (6.47) turns out to be strictly proper. Assume
$$d_{A\,\mathrm{min}}(\lambda) = (\lambda - \lambda_1)^{\mu_1} \cdots (\lambda - \lambda_q)^{\mu_q} , \qquad \mu_1 + \ldots + \mu_q = r \le p . \qquad (6.48)$$
where the $M_{ik} = N_{i,\mu_i-k+1}$ are constant matrices, and the $N_{ik}$ are calculated by Formula (2.99). Since the fraction in (6.47) is irreducible, observing (2.100) produces
$$M_{i,\mu_i} \neq O_{pp} , \quad (i = 1, \ldots, q) .$$
2. Denote
$$Z_{ik} = \frac{M_{ik}}{(k-1)!} , \quad (i = 1, \ldots, q;\ k = 1, \ldots, \mu_i) . \qquad (6.50)$$
The constant matrices (6.50) are called components of the matrix $A$. Each root $\lambda_i$ of the minimal polynomial (6.48) with the multiplicity $\mu_i$ corresponds to $\mu_i$ components
$$Z_{i1}, Z_{i2}, \ldots, Z_{i,\mu_i} .$$
The totality of all these matrices is named the set of components of the matrix $A$ according to the eigenvalue $\lambda_i$. The total number of components of the matrix $A$ is equal to the degree of its minimal polynomial.
3. Some general properties of the components (6.50) are listed below [51].
a) If the matrix $A$ is real, and $\lambda_1$, $\lambda_2$ are two conjugate complex eigenvalues, which arise in the minimal polynomial with the power $\mu$, then the corresponding components $Z_{1k}$, $Z_{2k}$ $(k = 1, \ldots, \mu)$ are conjugate complex.
$$Z_{i1} = Q_i , \quad (i = 1, \ldots, q)$$
is used. The matrices $Q_i$ are named the projectors of the matrix $A$. Some important properties of the projectors $Q_i$ are given now:
$$Q_i^2 = Q_i , \quad (i = 1, \ldots, q) , \qquad Q_i Q_\ell = Q_\ell Q_i = O_{pp} , \quad (i \neq \ell) .$$
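For a diagonalizable matrix, the projectors $Q_i$ can be obtained from Lagrange interpolation factors and the listed properties verified numerically. The 2x2 matrix below is a hypothetical example, and the Lagrange construction is one standard way to compute the components, not the book's Formula (2.99).

```python
import numpy as np

def expm_series(M, terms=30):
    # matrix exponential via its power series; adequate for small matrices
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

A = np.array([[3.0, 1.0],
              [0.0, 1.0]])            # distinct eigenvalues 3 and 1
lam = np.linalg.eigvals(A)
I = np.eye(2)

# projectors via Lagrange factors: Q_i = prod_{l != i} (A - lam_l I)/(lam_i - lam_l)
Q = []
for i in range(2):
    Pi = I.copy()
    for l in range(2):
        if l != i:
            Pi = Pi @ (A - lam[l]*I) / (lam[i] - lam[l])
    Q.append(Pi)

assert np.allclose(Q[0] @ Q[0], Q[0])               # Q_i^2 = Q_i
assert np.allclose(Q[0] @ Q[1], np.zeros((2, 2)))   # Q_i Q_l = O for i != l
assert np.allclose(Q[0] + Q[1], I)                  # resolution of the identity
# spectral formula f(A) = sum_i f(lam_i) Q_i, checked for f = exp
assert np.allclose(np.exp(lam[0])*Q[0] + np.exp(lam[1])*Q[1], expm_series(A))
print("projector properties verified")
```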
4. Let the matrix $A$ possess the minimal polynomial (6.48), and let $f(\lambda)$ be a known scalar function. It is said that the function $f(\lambda)$ is defined on the spectrum of the matrix $A$, if the expressions
$$\begin{array}{llll} f(\lambda_1), & f'(\lambda_1), & \ldots & f^{(\mu_1-1)}(\lambda_1) \\ f(\lambda_2), & f'(\lambda_2), & \ldots & f^{(\mu_2-1)}(\lambda_2) \\ \vdots & \vdots & & \vdots \\ f(\lambda_q), & f'(\lambda_q), & \ldots & f^{(\mu_q-1)}(\lambda_q) \end{array} \qquad (6.53)$$
make sense. The totality of the values (6.53) is addressed if we speak about the values of the function $f(\lambda)$ on the spectrum of the matrix $A$.
5. Let the function $f(\lambda)$ be given, which takes the values (6.53) on the spectrum of the matrix $A$. Moreover, a polynomial $h(\lambda)$ may fulfill the conditions
$$\begin{array}{ll} h(\lambda_1) = f(\lambda_1), & \ldots\quad h^{(\mu_1-1)}(\lambda_1) = f^{(\mu_1-1)}(\lambda_1) \\ h(\lambda_2) = f(\lambda_2), & \ldots\quad h^{(\mu_2-1)}(\lambda_2) = f^{(\mu_2-1)}(\lambda_2) \\ \quad\vdots & \qquad\vdots \\ h(\lambda_q) = f(\lambda_q), & \ldots\quad h^{(\mu_q-1)}(\lambda_q) = f^{(\mu_q-1)}(\lambda_q) . \end{array}$$
If these relations are true, then the polynomial $h(\lambda)$ is said to take the same values as the function $f(\lambda)$ on the spectrum of the matrix $A$, and we write
$$h(\Lambda_A) = f(\Lambda_A) \qquad (6.54)$$
for this fact. If we have a polynomial $h(\lambda)$ that satisfies Conditions (6.54), then the matrix
$$f(A) = h(A)$$
is established by definition as the value of the function $f(\lambda)$ for $\lambda = A$. Using this definition, the value of the function of a matrix does not depend on the concrete choice of the polynomial $h(\lambda)$ satisfying (6.54).
is valid. Consider a scalar polynomial $h_{k\nu}$, which takes the following values on the spectrum of the matrix $A$:
$$h_{k\nu}(\lambda_i) = h'_{k\nu}(\lambda_i) = \ldots = h_{k\nu}^{(\mu_i-1)}(\lambda_i) = 0 , \quad (i = 1, \ldots, k-1, k+1, \ldots, q) ,$$
$$h_{k\nu}(\lambda_k) = h'_{k\nu}(\lambda_k) = \ldots = h_{k\nu}^{(\nu-2)}(\lambda_k) = 0 , \qquad h_{k\nu}^{(\nu-1)}(\lambda_k) = 1 ,$$
$$h_{k\nu}^{(\nu)}(\lambda_k) = \ldots = h_{k\nu}^{(\mu_k-1)}(\lambda_k) = 0 .$$
8. Functions of one and the same matrix always commute, i.e. if the functions $f_1(\lambda)$ and $f_2(\lambda)$ are defined on the spectrum of the matrix $A$, then
$$f_1(A)f_2(A) = f_2(A)f_1(A) .$$
10. Let the $p \times p$ matrix $A$ possess the eigenvalues $\lambda_1, \ldots, \lambda_q$ with the multiplicities $\mu_1, \ldots, \mu_q$, and let the characteristic polynomial of the matrix $A$ have the shape
$$d_A(\lambda) = (\lambda - \lambda_1)^{\mu_1} \cdots (\lambda - \lambda_q)^{\mu_q} . \qquad (6.58)$$
If under these conditions the function $f(\lambda)$ is defined on the spectrum of the matrix $A$, then the characteristic polynomial of the matrix $f(A)$ has the form
$$d_f(\lambda) = (\lambda - f(\lambda_1))^{\mu_1} \cdots (\lambda - f(\lambda_q))^{\mu_q} .$$
Besides, if $f(\lambda_i) \neq 0$, $(i = 1, \ldots, q)$ is valid, then the matrix $f(A)$ has only nonzero eigenvalues, i.e. $f(A)$ is non-singular. However, if $f(\lambda_i) = 0$ for some $i$ takes place, then the matrix $f(A)$ is singular.
11. Now, the important question about the structure of the set of elementary divisors and the invariant polynomials of the matrix $f(A)$ is investigated. Let the matrix $A$ have the elementary divisors
$$(\lambda - \lambda_1)^{\nu_1}, \ldots, (\lambda - \lambda_r)^{\nu_r} , \qquad (6.59)$$
where among the numbers $\lambda_1, \ldots, \lambda_r$, equal ones are allowed. Then the following assertion is true [51]: In those cases, where $\nu_i = 1$, or $\nu_i > 1$ and $f'(\lambda_i) \neq 0$, the elementary divisor $(\lambda - \lambda_i)^{\nu_i}$ of the matrix $A$ corresponds to an elementary divisor
$$(\lambda - f(\lambda_i))^{\nu_i}$$
of the matrix $f(A)$. In case of $f'(\lambda_i) = 0$, $\nu_i > 1$, for the elementary divisor $(\lambda - \lambda_i)^{\nu_i}$ there exist more than one elementary divisor of the matrix $f(A)$.
12. Suppose again that the characteristic polynomial of the matrix $A$ has the shape (6.58) and the sequence of its elementary divisors has the shape (6.59). It is said that the matrices $A$ and $f(A)$ have the same structure, if among the numbers $f(\lambda_1), \ldots, f(\lambda_q)$ there are no equal ones and the sequence of elementary divisors of the matrix $f(A)$ possesses the form analogous to (6.59)
$$(\lambda - f(\lambda_1))^{\nu_1}, \ldots, (\lambda - f(\lambda_r))^{\nu_r} .$$
The results derived above are formulated in the next theorem.

Theorem 6.7. The following two conditions are necessary and sufficient for the matrices A and f(A) to possess the same structure:

a) f(λ_i) ≠ f(λ_k), (i ≠ k; i, k = 1, …, q); (6.60)

b) for all exponents ν_i in (6.59) with ν_i > 1,

f'(λ_i) ≠ 0.
Corollary 6.8. Let the matrix A be cyclic, i.e., in (6.59) r = q and ν_i = μ_i hold. Then the matrix f(A) is also cyclic if and only if Conditions a) and b) are true.
256 6 Parametric Discrete-time Models of Continuous-time Multivariable Processes
For

f(λ) = e^{λt}

we obtain

f(A) = Σ_{i=1}^{q} [ e^{λ_i t} M_{i1} + (t/1!) e^{λ_i t} M_{i2} + … + (t^{μ_i−1}/(μ_i−1)!) e^{λ_i t} M_{i,μ_i} ], (6.61)

where the derivatives on the spectrum are

f(λ_i) = e^{λ_i t}, f'(λ_i) = t e^{λ_i t}, …, f^{(μ_i−1)}(λ_i) = t^{μ_i−1} e^{λ_i t}.
Consider the series

e^{λt} = 1 + λt + λ²t²/2! + … , (6.62)

which converges for all λ, and consequently on any spectrum too. Inserting the matrix A instead of λ into (6.62), we receive

e^{At} = I_p + At + A²t²/2! + … . (6.63)

Particularly for t = 0, we get

e^{At}|_{t=0} = I_p.

Replacing t in (6.63) by −τ gives

e^{−Aτ} = I_p − Aτ + A²τ²/2! − … .

By multiplying this expansion with (6.63), we prove
6.4 Matrix Exponential Function 257
e^{At} e^{−Aτ} = e^{−Aτ} e^{At} = I_p + A(t − τ) + A²(t − τ)²/2! + … ,

hence

e^{At} e^{−Aτ} = e^{−Aτ} e^{At} = e^{A(t−τ)}.

For τ = t, we find immediately

e^{At} e^{−At} = I_p,

or

(e^{At})^{−1} = e^{−At}.
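The group property and the inversion formula can be verified numerically; in the sketch below, A is an arbitrary random test matrix, not one from the text.

```python
import numpy as np
from scipy.linalg import expm

# Numerical check of e^{At} e^{-A tau} = e^{A(t - tau)} and of
# (e^{At})^{-1} = e^{-At}; A is an arbitrary test matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
t, tau = 0.7, 0.3

assert np.allclose(expm(A * t) @ expm(-A * tau), expm(A * (t - tau)))
assert np.allclose(np.linalg.inv(expm(A * t)), expm(-A * t))
```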
5.
Theorem 6.9. For a positive constant T, the matrices A and e^{AT} possess the same structure if the eigenvalues λ_1, …, λ_q of A satisfy the conditions

e^{λ_i T} ≠ e^{λ_k T}, (i ≠ k; i, k = 1, …, q) (6.64)

or, equivalently,

λ_i − λ_k ≠ 2nπj/T = nωj, (i ≠ k; i, k = 1, …, q), (6.65)

where n is an arbitrary integer and ω = 2π/T.
Corollary 6.10. Let the matrix A be cyclic. Then a necessary and sufficient condition for the matrix e^{AT} to be cyclic is that Conditions (6.64) hold.

Theorem 6.11 ([71]). Let the pair (A, B) be controllable and the pair [A, C] be observable. Then, under Conditions (6.64), (6.65), the pair (e^{AT}, B) is controllable and the pair [e^{AT}, C] is observable.
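Conditions (6.64)/(6.65) are straightforward to test for a given eigenvalue set and sampling period. The helper below is a hypothetical illustration (the function name and the tolerance are my own choices).

```python
import numpy as np

def is_pathological(eigs, T, tol=1e-9):
    """Check condition (6.65): sampling is pathological iff two distinct
    eigenvalues differ by a nonzero integer multiple of j*2*pi/T."""
    omega = 2.0 * np.pi / T
    for i, li in enumerate(eigs):
        for lk in eigs[i + 1:]:
            d = li - lk
            n = np.round(d.imag / omega)
            if n != 0 and abs(d.real) < tol and abs(d.imag - n * omega) < tol:
                return True
    return False

# Hypothetical example: an undamped oscillator with poles +-j*omega0 is
# sampled pathologically when T = pi/omega0 (the poles then differ by j*omega).
omega0 = 2.0
eigs = np.array([1j * omega0, -1j * omega0])
assert is_pathological(eigs, T=np.pi / omega0)     # 2*omega0 = 2*pi/T
assert not is_pathological(eigs, T=1.0)            # generic period
```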
φ_w(T, s, t) = φ_w(T, s, t + T).
In this section, we will derive closed formulae for the sums of the series (6.66)
and (6.67).
2.
Lemma 6.12. Let the matrix w(s) be strictly proper and possess the partial fraction expansion

w(s) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} w_{ik}/(s − s_i)^k, (6.69)

where the w_{ik} are constant matrices. Then the sum of the series (6.67) is determined by the formulae

D_w(T, s, t) = D̄_w(T, s, t), 0 < t < T, (6.70)
D_w(T, s, t) = D̄_w(T, s, t − mT) e^{msT}, mT < t < (m+1)T, (m = 0, ±1, …), (6.71)

where

D̄_w(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} [w_{ik}/(k−1)!] ∂^{k−1}/∂λ^{k−1} [ e^{λt} / (1 − e^{(λ−s)T}) ] |_{λ=s_i}. (6.72)
6.5 DPFR and DLT of Rational Matrices 259
Appendix B yields

(1/T) Σ_{m=−∞}^{∞} e^{(s+mjω)t}/(s + mjω − a)^k = [1/(k−1)!] ∂^{k−1}/∂λ^{k−1} [ e^{λt}/(1 − e^{(λ−s)T}) ] |_{λ=a}.

Inserting this into (6.73), we obtain Formulae (6.70) and (6.72). We verify Formula (6.71) as follows. Let mT < t < (m+1)T; then

D_w(T, s, t) = φ_w(T, s, t) e^{st} = φ_w(T, s, t − mT + mT) e^{s(t−mT)} e^{msT}
= φ_w(T, s, t − mT) e^{s(t−mT)} e^{msT} = D̄_w(T, s, t − mT) e^{msT}.
3.
Lemma 6.13. Let A be a constant p×p matrix and

w_A(s) = (sI_p − A)^{−1} = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} M_{ik}/(s − s_i)^k. (6.77)

Then

D_{w_A}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} Z_{ik} ∂^{k−1}/∂λ^{k−1} [ e^{λt} / (1 − e^{(λ−s)T}) ] |_{λ=s_i}. (6.79)
Introduce the scalar function

f(λ, t) = e^{λt} (1 − e^{λT} e^{−sT})^{−1},

so Relation (6.79) can be presented in the form

D_{w_A}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} Z_{ik} ∂^{k−1}/∂λ^{k−1} f(λ, t) |_{λ=s_i}.
φ_{μw}(T, s, t) = φ_{μw}(T, s, t + T).
Now, closed expressions for the sums of the series (6.83), (6.84) will be derived.
2.
Lemma 6.16. Suppose the strictly proper matrix w(s) ∈ R_{nm}(s) has the shape (6.69). Then the series (6.84) converges for all s ≠ s_i + kjω, (i = 1, …, q; k = 0, ±1, …), its sum depends continuously on t, and it is determined by the formula [148]

D_{μw}(T, s, t) = D̄_{μw}(T, s, t), 0 ≤ t ≤ T, (6.85)

where the matrix D̄_{μw}(T, s, t) is given by either one of the two equivalent relations

D̄_{μw}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} [w_{ik}/(k−1)!] ∂^{k−1}/∂λ^{k−1} [ μ(λ) e^{λt} / (e^{(s−λ)T} − 1) ] |_{λ=s_i} + h^−_w(t), (6.86)

D̄_{μw}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} [w_{ik}/(k−1)!] ∂^{k−1}/∂λ^{k−1} [ μ(λ) e^{λt} / (1 − e^{(λ−s)T}) ] |_{λ=s_i} − h^+_w(t), (6.87)
where

h^−_w(t) = ∫_0^t h_w(t − τ) m(τ) dτ, h^+_w(t) = ∫_t^T h_w(t − τ) m(τ) dτ,

and

h_w(t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} [w_{ik}/(k−1)!] ∂^{k−1}/∂λ^{k−1} e^{λt} |_{λ=s_i}. (6.88)

Formulae (6.86), (6.87) are extended onto the whole t-axis by the relation

D_{μw}(T, s, t) = D̄_{μw}(T, s, t − mT) e^{msT}, mT < t < (m + 1)T.
Proof. Placing (6.69) into (6.84) gives

D_{μw}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} w_{ik} (1/T) Σ_{m=−∞}^{∞} μ(s + mjω) e^{(s+mjω)t} / (s + mjω − s_i)^k.
3. In the particular case when w(s) = w_A(s), where the matrix w_A(s) is from (6.74), the following relations hold.

Lemma 6.17. In the case of (6.74), Formulae (6.86), (6.87) can be represented by the two equivalent formulae

D_{μw_A}(T, s, t) = h^−(t) + e^{At} μ(A) (e^{sT} e^{−AT} − I_p)^{−1}, (6.89)
D_{μw_A}(T, s, t) = −h^+(t) + e^{At} μ(A) (I_p − e^{−sT} e^{AT})^{−1}, (6.90)

where the notations

h^−(t) = ∫_0^t e^{A(t−τ)} m(τ) dτ, h^+(t) = ∫_t^T e^{A(t−τ)} m(τ) dτ

and

μ(A) = ∫_0^T e^{−Aτ} m(τ) dτ

were used.
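For a zero-order hold, m(τ) ≡ 1 on [0, T), and the integral defining μ(A) has the closed form A^{−1}(I − e^{−AT}) when A is invertible. The following check is my own sketch, comparing the closed form with entry-wise quadrature of the defining integral.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

# Zero-order hold: m(tau) = 1 on [0, T), so
# mu(A) = int_0^T e^{-A tau} d tau = A^{-1} (I - e^{-AT})   (A invertible).
T = 0.8
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

mu_closed = np.linalg.inv(A) @ (np.eye(2) - expm(-A * T))

# Entry-wise quadrature of the defining integral.
mu_quad = np.array([[quad(lambda tau, r=r, c=c: expm(-A * tau)[r, c], 0, T)[0]
                     for c in range(2)] for r in range(2)])

assert np.allclose(mu_closed, mu_quad, atol=1e-8)
```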
6.6 DPFR and DLT for Modulated Processes 263
h_{w_A}(t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} Z_{ik} ∂^{k−1}/∂λ^{k−1} e^{λt} |_{λ=s_i}. (6.91)
Using (6.91) and (6.77), Formulae (6.86) and (6.87) can be given the form

D_{μw_A}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} Z_{ik} ∂^{k−1}/∂λ^{k−1} [ μ(λ) e^{λt} / (e^{(s−λ)T} − 1) + ∫_0^t e^{λ(t−τ)} m(τ) dτ ] |_{λ=s_i}, (6.92)

D_{μw_A}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} Z_{ik} ∂^{k−1}/∂λ^{k−1} [ μ(λ) e^{λt} / (1 − e^{(λ−s)T}) − ∫_t^T e^{λ(t−τ)} m(τ) dτ ] |_{λ=s_i}. (6.93)

Denoting by f_μ(λ, t) either one of the expressions in brackets, both formulae take the compact form

D_{μw_A}(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{μ_i} Z_{ik} ∂^{k−1}/∂λ^{k−1} [f_μ(λ, t)] |_{λ=s_i}.
4. Consider Relations (6.89) and (6.90) in more detail for the important special case of a zero-order hold (6.27). In this case,

μ(s) = μ_0(s) = (1 − e^{−sT})/s.
Owing to this equation and (6.27), from Formula (6.94) we gain the equivalent expressions

f_μ(λ, t) = (e^{λt} − 1)/λ + (1 − e^{−λT}) e^{λt} / [λ (e^{(s−λ)T} − 1)],

f_μ(λ, t) = (e^{λ(t−T)} − 1)/λ + (1 − e^{−λT}) e^{λt} / [λ (1 − e^{(λ−s)T})].
Passing in these equations to functions of matrices according to (6.95), we find in the present case

D_{μw_A}(T, s, t) = A^{−1}(e^{At} − I_p) + A^{−1}(I_p − e^{−AT}) e^{At} (e^{sT} e^{−AT} − I_p)^{−1}, (6.96)

D_{μw_A}(T, s, t) = A^{−1}(e^{A(t−T)} − I_p) + A^{−1}(I_p − e^{−AT}) e^{At} (I_p − e^{−sT} e^{AT})^{−1}. (6.97)

The functions

h^−_0(λ, t) = (e^{λt} − 1)/λ, h^+_0(λ, t) = (1 − e^{λ(t−T)})/λ, μ_0(λ) = (1 − e^{−λT})/λ (6.98)

are entire functions of the argument λ. In particular, for λ = 0 they take the values t, T − t, and T, respectively.
Corollary 6.21. Let (A, B, C) and (Â, B̂, Ĉ) be any realisations of the matrix w(s); then

C D_{μw_A}(T, s, t) B = Ĉ D_{μw_Â}(T, s, t) B̂.
6. It can be shown that a series of the form (6.84) also converges in the case when w(s) is only proper. Indeed, in this case we write

w_0 = lim_{s→∞} w(s).

The first term on the right side of (6.103) can be calculated by Formulae (6.100) and (6.101). In order to calculate the second term, we observe that

(1/T) Σ_{k=−∞}^{∞} μ(s + kjω) e^{(s+kjω)t} = φ_μ(T, s, t) e^{st},
Matrix (6.105) is called the parametric discrete model of the matrix (of the process) w(s). A list of general properties of the matrix D_w(T, ζ, t) will now be derived from Relation (6.105).

Since the matrix e^{AT} is regular, we conclude from (6.105) that the matrix D_w(T, ζ, t) is strictly proper for all t. Besides, for w(s) ∈ R_{nm}(s), also D_w(T, ζ, t) ∈ R_{nm}(ζ) is true.

Moreover, Corollary 6.15 ensures that the parametric discrete model (6.105) does not depend on the concrete choice of the realisation (A, B, C) configured in (6.80).
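Under the notation assumed here (ζ = e^{−sT}, minimal realisation (A, B, C)), the model D_w(T, ζ, t) = C(I − ζe^{AT})^{−1}e^{At}B can be compared against the series Σ_{k≥0} C e^{A(t+kT)} B ζ^k from which it arises. The sketch below does this for an arbitrary stable test system; the specific matrices are my own choices.

```python
import numpy as np
from scipy.linalg import expm

# Assumed notation: zeta = e^{-sT}, w(s) = C (sI - A)^{-1} B with stable A.
# Closed form of the parametric discrete model vs. its defining series.
T, t, zeta = 1.0, 0.3, 0.5
A = np.array([[-1.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

closed = C @ np.linalg.inv(np.eye(2) - zeta * expm(A * T)) @ expm(A * t) @ B
series = sum(C @ expm(A * (t + k * T)) @ B * zeta**k for k in range(200))

assert np.allclose(closed, series)
```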
6.7 Parametric Discrete Models of Continuous Processes 267
r(s) = (s − s_1)^{ν_1} ⋯ (s − s_q)^{ν_q},

which are called conditions for non-pathological behavior. Then the PMD

τ_d(ζ, t) = (I_p − ζ e^{AT}, e^{At} B, C) (6.107)
3.
Theorem 6.23. Let, under the conditions of Theorem 6.22, the sequence of invariant polynomials a_1(s), …, a_p(s) of the matrix sI_p − A have the form
Proof. Due to the suppositions, the matrices A and e^{AT} have the same structure. Thus, if (6.106) and (6.107) are fulfilled, the sequence ā_1(z), …, ā_p(z) of the invariant polynomials of the matrix zI_p − e^{AT} has the shape

ā_1(z) = (z − e^{s_1 T})^{ν_{11}} ⋯ (z − e^{s_q T})^{ν_{1q}},
  ⋮
ā_p(z) = (z − e^{s_1 T})^{ν_{p1}} ⋯ (z − e^{s_q T})^{ν_{pq}}.

But then, owing to Lemma 5.36, the sequence of invariant polynomials of the matrix I_p − ζ e^{AT} has the form

â_1(ζ) = (1 − ζ e^{s_1 T})^{ν_{11}} ⋯ (1 − ζ e^{s_q T})^{ν_{1q}},
  ⋮                                                           (6.110)
â_p(ζ) = (1 − ζ e^{s_1 T})^{ν_{p1}} ⋯ (1 − ζ e^{s_q T})^{ν_{pq}},
Corollary 6.25. If, under the conditions of Theorem 6.22, the matrix w(s) is normal, then for any t the matrix D_w(T, ζ, t) is also normal.

Proof. If the matrix w(s) is normal, then in any minimal standard representation (6.80) the matrix A is cyclic. According to Corollary 6.10, e^{AT} is then also cyclic. Besides, the minimality of the PMD (6.107) and Theorem 3.17 ensure that Matrix (6.105) is normal.
4.
Theorem 6.26. Let the strictly proper rational matrices w(s) and w_1(s) of sizes n×m and n×r, respectively, be related by

and let the poles s_1, …, s_q of the matrix w(s) satisfy the conditions for non-pathological behavior (6.106). Then
Then Theorem 2.56 and (6.113) imply the existence of a representation of the form

w_1(s) = C(sI_p − A)^{−1} B_1.

Passing to the parametric discrete-time models, we receive

D_w(T, ζ, t) = C (I_p − ζ e^{AT})^{−1} e^{At} B,
D_{w_1}(T, ζ, t) = C (I_p − ζ e^{AT})^{−1} e^{At} B_1.

Then the minimality of the PMD (6.107) together with Lemma 2.9 ensures that the right side of the relation

D_w(T, ζ, t) = a^{−1}(ζ) b(ζ) e^{At} B (6.115)
5.
Theorem 6.28. Let the rational matrices F(s), G(s) of sizes n×ℓ and ℓ×m, respectively, be given. Suppose the matrices F(s) and

L(s) = F(s)G(s)

are strictly proper, and let the poles of the matrices F(s) and G(s) together satisfy the conditions for non-pathological behavior (6.106). Moreover, let us have the ILMFDs

L(s) = a_0^{−1}(s) b_0(s), F(s) = a_1^{−1}(s) b_1(s),
G(s) = a_2^{−1}(s) b_2(s), b_1(s) a_2^{−1}(s) = a_3^{−1}(s) b_3(s);
then

α_0(ζ) = α_2(ζ) α_1(ζ), (6.118)

where α_2(ζ) is an n×n polynomial matrix. Besides, if α_{iν}(ζ), (i = 1, …, n; ν = 0, 1) are the sequences of invariant polynomials of the matrices α_0(ζ) and α_1(ζ), respectively, then
c) Denote

Δ_i(s) = det a_i(s), (i = 0, 1, 2);

then the conditions

are satisfied.
d) The following relation holds:
e) If the matrices L(s) and F(s) are normal, then the matrix α_2(ζ) is simple.
6.8 Parametric Discrete Models of Modulated Processes 271
The matrix D_{μw}(T, ζ, t) is called the parametric discrete model of the modulated process w(s)μ(s). Clearly, w(s) ∈ R_{nm}(s) implies D_{μw}(T, ζ, t) ∈ R_{nm}(ζ).

μ_0(s_i) = (1 − e^{−s_i T})/s_i ≠ 0

are valid; this means that Conditions (6.124) imply (6.125).
Theorem 6.33. If Conditions (6.124), (6.125) are fulfilled and the standard realisation (6.80) is minimal, where the matrix A is cyclic, then the matrix D_{μw}(T, ζ, t) is normal.

Theorem 6.34. Let the matrices w(s) and w_1(s) of sizes n×m and n×r, respectively, be at least proper and related by

w_1(s) ≺_l w(s).

Moreover, let the poles s_1, …, s_q of the matrix w(s) satisfy the strict conditions for non-pathological behavior (6.125), (6.126). Then
L(s) = F (s)G(s)
be at least proper, and let the poles of the matrices F(s) and G(s) as a whole satisfy the strict conditions for non-pathological behavior (6.125), (6.126). Moreover, let us have the ILMFDs

L(s) = a_0^{−1}(s) b_0(s), F(s) = a_1^{−1}(s) b_1(s),
G(s) = a_2^{−1}(s) b_2(s), b_1(s) a_2^{−1}(s) = a_3^{−1}(s) b_3(s);
then

α_0(ζ) = α_2(ζ) α_1(ζ),

where α_2(ζ) is an n×n polynomial matrix. Besides, if α_{iν}(ζ), (i = 1, …, n; ν = 0, 1) are the sequences of invariant polynomials of the matrices α_0(ζ) and α_1(ζ), then the relations

α_{iν}(ζ) = a_{iν,d}(ζ), (i = 1, …, n; ν = 0, 1)

take place.
c) Denote

g_ν(s) = det a_ν(s), (ν = 0, 1, 2);

then the conditions

det α_ν(ζ) = g_{νd}(ζ), (ν = 0, 1, 2)

are fulfilled.
d) The relation

α_0(ζ) D_{μF}(T, ζ, t) = α_2(ζ) β_1(ζ, t)

holds, where the matrix β_1(ζ, t) is a polynomial in ζ for all t.

e) If the matrices L(s) and F(s) are normal, then the matrices D_{μL}(T, ζ, t) and D_{μF}(T, ζ, t) are also normal, and the matrices α_0(ζ), α_1(ζ), α_2(ζ) are simple.
Remark 6.36. Notice that, by transposing, all above statements can be formulated for the case of subordination from the right and the corresponding IRMFDs.
6.9 Reducibility of Parametric Discrete Models 275
3. In particular, if the matrix D(ζ, t) has the shape (6.127), then the matrix

D(s, t) = N_D(s, t) / Δ(s)

is called rational-periodic (rat.per.). A rat.per. matrix is named (ir)reducible if Matrix (6.127) is (ir)reducible.
4. Proceeding from the concepts just introduced, the question arises whether the rat.per. matrix (6.81) is reducible for 0 < t < T. With respect to

D_w(T, s, t) = D̄_w(T, s, t) = C (I − e^{−sT} e^{AT})^{−1} e^{At} B, (6.128)

the question of reducibility of Matrix (6.128) leads to the question of reducibility of the rational matrix

D_w(T, ζ, t) = C (I − ζ e^{AT})^{−1} e^{At} B, (6.129)

which can be represented in the form (6.127) with

N_D(ζ, t) = C adj(I − ζ e^{AT}) e^{At} B,
Δ(ζ) = det(I − ζ e^{AT}).
Theorem 6.37. Let the realisation (A, B, C) be simple, i.e., the pair (A, B) is controllable, the pair [A, C] is observable, and the matrix A is cyclic. Furthermore, let the conditions for non-pathological behavior (6.106) be satisfied. Then Matrix (6.129) (and therefore also Matrix (6.128)) is irreducible.

τ_d(ζ, t) = (I − ζ e^{AT}, e^{At} B, C)

is minimal for any t. Besides, from Corollary 6.10 it emerges that the matrices e^{AT} and e^{−AT} are cyclic. Thus, the matrix I − ζ e^{AT} = e^{AT}(e^{−AT} − ζI) is simple, and its minimal polynomial is equivalent to its characteristic polynomial. Hence, by Theorem 2.47, Matrix (6.129) is irreducible for all t. This implies that the matrix

D_w(T, s, t) = C adj(I − e^{−sT} e^{AT}) e^{At} B / det(I − e^{−sT} e^{AT})

is irreducible too.
Remark 6.38. From the above proof, we see that under the conditions of Theorem 6.37, for any t, the right side of our last equation is irreducible.
Remark 6.39. If any one of the suppositions in Theorem 6.37 is violated, then
we have reducibility in the above sense.
Theorem 6.40. Let the pair (A, B) be controllable, the pair [A, C] observable, and the matrix A cyclic, and let the strict conditions for non-pathological behavior (6.124), (6.125) be fulfilled. Then Matrix (6.130) is irreducible for any t (and also irreducible in the above sense).

Remark 6.41. If one of the conditions in Theorem 6.40 is violated, then Matrix (6.130) turns out to be reducible.
6.9 Reducibility of Parametric Discrete Models 277
N_D(ζ_0, t) = O.
7
Mathematical Description, Stability and
Stabilisation of the Standard Sampled-data
System in Continuous Time
Fig. 7.1. Standard sampled-data system: generalised plant L with exogenous input x, output z, control u, measured output y, and digital controller C

w(p) = [ K(p)  L(p) ]  r
       [ M(p)  N(p) ]  n ,     (7.2)
where the letters outside the matrix indicate the dimensions of the corre-
sponding blocks. Henceforth, it is assumed that the matrix N (p) is strictly
proper and L(p) is at least proper. The restrictions imposed on the matrices
K(p) and M (p) will depend on the problem under consideration. Using (7.1)
and (7.2), we can write the equations of the plant in the operator form
280 7 Description and Stability of SD Systems
z = K(p)x + L(p)u,
y = M(p)x + N(p)u. (7.3)
ψ_k = y(kT), (7.4)

where the input signal y(t) is assumed to be continuous. The digital controller ALG can be described either by the forward model (6.21)

α_0 ξ_{k+ρ} + … + α_ρ ξ_k = β_0 ψ_{k+ρ} + … + β_ρ ψ_k (7.5)

or by the backward model

α_0 ξ_k + … + α_ρ ξ_{k−ρ} = β_0 ψ_k + … + β_ρ ψ_{k−ρ}. (7.6)
C = [ C_1 ]  r
    [ C_2 ]  n ,       B = [ B_1  B_2 ],

where C_1 has r rows, C_2 has n rows, and B_2 has m columns.
7.2 Equation Discretisation for the Standard SD System 281
Fig. 7.2. Structure of the standard sampled-data system: the plant blocks K(p), M(p), L(p), N(p) form z = z_1 + z_2 and y = y_1 + y_2 from the inputs x and u, and the digital controller chain ADC – ALG – DAC, with the sequences {ψ_k} and {ξ_k}, connects y to u
The standard realisation (7.8) can be associated with the state equations of
the plant
dv/dt = Av + B_1 x + B_2 u,
z = C_1 v + D_L u, y = C_2 v. (7.10)
Taken in the aggregate, the state equations (7.10) and the equations of the digital controller (7.4)–(7.7) form a system of differential-difference equations, which will be called a continuous-time model of the standard sampled-data system.
with the notation v_k = v(kT). Using (7.7), this equation can be written as

v(t) = e^{A(t−kT)} v_k + ∫_{kT}^{t} e^{A(t−τ)} m(τ − kT) dτ B_2 ξ_k + ∫_{kT}^{t} e^{A(t−τ)} B_1 x(τ) dτ. (7.11)

Assuming

t = kT + θ, τ = kT + σ, 0 ≤ θ ≤ T, 0 ≤ σ ≤ T,
Taken in the aggregate, the difference equations (7.12) and the equations of the digital controller (7.4)–(7.7) will be called a parametric discrete model of the standard sampled-data system. It can easily be shown that the continuous-time model (7.10), (7.4)–(7.7) and the parametric discrete model (7.12), (7.4)–(7.7) of the standard sampled-data system are equivalent, i.e., if the set of functions y(t), z(t), and u(t) and the sequence {ξ_k} satisfy the equations of the continuous-time model, then the set of sequences {v_k(θ)}, {z_k(θ)}, {u_k(θ)}, and {ξ_k} is a solution of the parametric discrete model, and vice versa.
α_0 ξ_{k+ρ} + … + α_ρ ξ_k = β_0 y_{k+ρ} + … + β_ρ y_k. (7.18)

It can be easily shown that the discrete backward model (7.19) together with (7.12) is equivalent to the original continuous-time model (7.10), (7.4)–(7.7), i.e., if a set of sequences {v_k(θ)}, {y_k(θ)}, {z_k(θ)} satisfies Equations (7.12), (7.19), then the functions
y(t) = y_T(s, t) e^{st}, z(t) = z_T(s, t) e^{st}, u(t) = u_T(s, t) e^{st}, (7.21)

where

3. Henceforth, for the PTM (7.22), we shall use the following special notation:

y_T(s, t) = w_yx(s, t), z_T(s, t) = w_zx(s, t), u_T(s, t) = w_ux(s, t).
Let us begin with the matrix wyx (s, t). First of all, we notice that from the
strict properness of the matrix N (s) and Fig. 7.2, it follows that the PTM
wyx (s, t) is continuous in t. Therefore, using the stroboscopic property, it can
be assumed that the input of the ADC is acted upon by the exponential matrix
signal
y(s, t) = wyx (s, 0)est .
Consider the open-loop system shown in Fig. 7.3. Using (6.41)–(6.45), we find

where
7.3 Parametric Transfer Matrix (PTM) 285
φ_N(T, s, t) = (1/T) Σ_{k=−∞}^{∞} N(s + kjω) μ(s + kjω) e^{kjωt}

and

w_d(s) = α^{−1}(s) β(s),

where

α(s) = α_0 + α_1 e^{−sT} + … + α_ρ e^{−ρsT},
β(s) = β_0 + β_1 e^{−sT} + … + β_ρ e^{−ρsT}. (7.24)
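Since α(s) and β(s) are polynomials in e^{−sT}, the discrete transfer function w_d(s) = α^{−1}(s)β(s) is periodic in s with period jω, ω = 2π/T. A scalar sketch (the coefficient values are my own arbitrary choices):

```python
import numpy as np

# A scalar control law with coefficient vectors alpha, beta defines
# w_d(s) = beta(e^{-sT}) / alpha(e^{-sT}); as a function of s it is
# periodic with period j*omega, omega = 2*pi/T.
T = 0.5
alpha = np.array([1.0, -0.4])        # alpha_0 + alpha_1 e^{-sT}
beta = np.array([0.3, 0.1])          # beta_0  + beta_1  e^{-sT}

def w_d(s):
    z = np.exp(-s * T)
    return np.polyval(beta[::-1], z) / np.polyval(alpha[::-1], z)

omega = 2.0 * np.pi / T
s = 0.2 + 1.0j
assert np.isclose(w_d(s), w_d(s + 1j * omega))
```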
y_1(t) = M(s) e^{st},

so we obtain

y(t) = y_1(t) + y_2(t) = φ_N(T, s, t) w_d(s) w_yx(s, 0) e^{st} + M(s) e^{st},

where we used the fact that, due to (6.66) and (6.67), the following equality holds:

φ_N(T, s, 0) = (1/T) Σ_{k=−∞}^{∞} N(s + kjω) μ(s + kjω) = D_N(T, s, 0). (7.27)
4. In order to find the PTM w_zx(s, t), let us consider the open-loop system shown in Fig. 7.4. Similarly to (7.23), we construct the exp.per. output

z_2(t) = φ_L(T, s, t) w_d(s) w_yx(s, 0) e^{st},

where

φ_L(T, s, t) = (1/T) Σ_{k=−∞}^{∞} L(s + kjω) μ(s + kjω) e^{kjωt}. (7.29)

z_1(t) = K(s) e^{st}.

Using (7.21), we obtain the required PTM from the input x to the output z:

w_zx(s, t) = φ_L(T, s, t) R_N(s) M(s) + K(s). (7.30)
6. It should be noted that the standard system shown in Fig. 7.2 is fairly general. It can describe any sampled-data system containing, besides continuous-time LTI units, a single digital controller (7.4)–(7.7). Nevertheless, systems encountered in applications often are not given in the standard form. Then, to obtain the latter, some structural transformations are needed. At the same time, there exists another way to construct the standard system for a given structure. For this purpose, we assume that the input of the system at hand is acted upon by an exponential signal (7.20), and that all signals in the system are exponential periodic with exponent s and period T. Then, using the stroboscopic property, the exp.per. system output z(t) can always be found in the form
where w_zx(s, t) is the PTM from the input x to the output z. Comparing this expression for w_zx(s, t) with the general formula (7.30), we can always find the matrices K(s), L(s), M(s), and N(s) associated with the equivalent standard system.
Fig. 7.5. Closed-loop system: input x through a summing junction to the digital controller C and the continuous plant G(p), producing the output z, which is fed back

x(t) = e^{st} I_n, z(t) = w_zx(s, t) e^{st}, w_zx(s, t) = w_zx(s, t + T), (7.31)

where the matrix w_zx(s, t) is continuous with respect to t. Then, using the stroboscopic property, consider the open-loop system shown in Fig. 7.6.

Fig. 7.6. Open-loop system obtained via the stroboscopic property: the feedback signal is replaced by the exp.per. signal w_zx(s, 0) e^{st}
where

D_{μG}(T, s, t) = (1/T) Σ_{k=−∞}^{∞} G(s + kjω) μ(s + kjω) e^{(s+kjω)t}

is the DLT of the matrix G(s)μ(s). It can be easily verified that for t = θ, 0 ≤ θ ≤ T, the matrix w_d(s, θ) determines the transfer matrix of the discrete system in Fig. 7.5 in the sense of the modified discrete Laplace transformation [177].
7. At the same time, if M(s) = O_n, then the standard sampled-data system with the input x and output z reduces to the continuous-time LTI system x → K(s) → z.
8. The method for constructing the PTM described in this section is fairly general. In fact, it does not exploit the fact that the matrix w(p) is rational. All the aforesaid still holds when we assume that the matrices K(p), L(p), M(p), N(p) are transfer matrices of some linear stationary operators such that the series φ_L(T, s, t) and φ_N(T, s, t) converge and the latter sum is continuous with respect to t. As a special case, this method can be used for constructing the PTM for a standard system with pure-delay elements.
Example 7.2. Consider the system with delayed feedback shown in Fig. 7.8 (digital controller C, plant G(p), output block F(p), and a feedback path through

Q_1(p) = Q(p) e^{−pτ},

with input x, output z, and feedback signal y). Using the techniques described in detail in Chapter 9, we find the PTM

w_zx(s, t) = φ_{FG}(T, s, t) w_d(s) [I − D_{QFG}(T, s, −τ) w_d(s)]^{−1} Q(s) F(s) e^{−sτ} + F(s).
dv/dt = Av + B_1 x + B_2 u,
z = C_1 v + D_L u, y = C_2 v, (7.34)

ψ_k = y_k = y(kT), (7.35)

α_0 ξ_k + … + α_ρ ξ_{k−ρ} = β_0 ψ_k + … + β_ρ ψ_{k−ρ}, (7.36)

u(t) = m(t − kT) ξ_k, kT < t < (k + 1)T. (7.37)
Introduce the matrix

Q(s, α, β) = [ I − e^{−sT} e^{AT}    O        −e^{−sT} e^{AT} μ(A) B_2 ]
             [ −C_2                  I_n      O_{n×m}                   ]    (7.38)
             [ O                     −β(s)    α(s)                      ],

where α(s) and β(s) are the matrices (7.24). Then, for any s with

det Q(s, α, β) ≠ 0, (7.39)

where the matrices v_0(s) and ξ_0(s) are given by the equation

Q(s, α, β) [ v_0(s) ]
           [ y_0(s) ] = R(s),    (7.41)
           [ ξ_0(s) ]

with

R(s) = [ (sI − A)^{−1} (I − e^{−sT} e^{AT}) B_1 ]
       [ O                                      ]    (7.42)
       [ O                                      ].
with

G(s, θ) = e^{Aθ} ∫_0^θ e^{(sI−A)σ} dσ B_1 = (sI − A)^{−1} (e^{sθ} I − e^{Aθ}) B_1. (7.44)

where

G(s, T) = (sI − A)^{−1} (e^{sT} I − e^{AT}) B_1.
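The closed form (7.44) can be checked against direct quadrature of the defining integral; in the sketch below, the matrix A and the values of s and θ are arbitrary test data.

```python
import numpy as np
from scipy.linalg import expm

# Check of the closed form (7.44):
# e^{A theta} int_0^theta e^{(sI - A) sigma} d sigma B1
#     = (sI - A)^{-1} (e^{s theta} I - e^{A theta}) B1.
s, theta = 0.4, 0.9
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
B1 = np.array([[1.0], [1.0]])
I = np.eye(2)

sig = np.linspace(0.0, theta, 4001)
vals = np.stack([expm((s * I - A) * x) for x in sig])   # integrand samples
h = sig[1] - sig[0]
integral = h * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

lhs = expm(A * theta) @ integral @ B1
rhs = np.linalg.inv(s * I - A) @ (np.exp(s * theta) * I - expm(A * theta)) @ B1
assert np.allclose(lhs, rhs, atol=1e-5)
```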
Combining (7.45) with the equations of the digital controller and using (7.17), we find the discrete backward model of the standard sampled-data system for the input (7.20):

v_k = e^{AT} v_{k−1} + e^{AT} μ(A) B_2 ξ_{k−1} + e^{(k−1)sT} G(s, T),
y_k = C_2 v_k, (7.46)
α_0 ξ_k + … + α_ρ ξ_{k−ρ} = β_0 y_k + … + β_ρ y_{k−ρ}.
The discrete model (7.46) together with (7.43) determines the backward
parametric discrete model for the input (7.20).
b) If a solution of the continuous-time model (7.34)–(7.37) satisfies Conditions (7.21) and (7.22), then it is associated with discrete sequences

{v(s)} = {…, v_{−1}(s), v_0(s), v_1(s), …},
{y(s)} = {…, y_{−1}(s), y_0(s), y_1(s), …},
{ξ(s)} = {…, ξ_{−1}(s), ξ_0(s), ξ_1(s), …}

that determine a solution of Equations (7.46) and satisfy the conditions

v_k(s) = e^{ksT} v_0(s), y_k(s) = e^{ksT} y_0(s), ξ_k(s) = e^{ksT} ξ_0(s),
where v_0(s), y_0(s), ξ_0(s) are unknown matrices to be found. Substituting these relations into (7.46), we obtain

v_0(s) = e^{−sT} e^{AT} v_0(s) + e^{−sT} e^{AT} μ(A) B_2 ξ_0(s) + (sI − A)^{−1}(I − e^{−sT} e^{AT}) B_1,
y_0(s) = C_2 v_0(s), (7.47)
α(s) ξ_0(s) = β(s) y_0(s).
This system can be written in form of the linear system of equations (7.41)
having a unique solution under Condition (7.39).
c) Now we will prove (7.40). From the first equation in (7.47), we find

v_0(s) = (e^{sT} e^{−AT} − I)^{−1} μ(A) B_2 ξ_0(s) + (sI − A)^{−1} B_1. (7.48)

Moreover, (7.47) yields

ξ_0(s) = α^{−1}(s) β(s) y_0(s) = w_d(s) y_0(s). (7.49)

Substituting (7.49) into (7.48), we find

v_0(s) = (e^{sT} e^{−AT} − I)^{−1} μ(A) B_2 w_d(s) y_0(s) + (sI − A)^{−1} B_1. (7.50)

Multiplying this from the left by C_2, we obtain

y_0(s) = C_2 (e^{sT} e^{−AT} − I)^{−1} μ(A) B_2 w_d(s) y_0(s) + C_2 (sI − A)^{−1} B_1. (7.51)

But from (6.89), (6.100), and (7.9), it follows that

C_2 (e^{sT} e^{−AT} − I)^{−1} μ(A) B_2 = φ_N(T, s, 0) = D_N(T, s, 0),
C_2 (sI − A)^{−1} B_1 = M(s).

Thus, Equation (7.51) can be written in the form

y_0(s) = D_N(T, s, 0) w_d(s) y_0(s) + M(s),

whence

y_0(s) = [I_n − D_N(T, s, 0) w_d(s)]^{−1} M(s).

Then, using (7.49), we obtain

ξ_0(s) = w_d(s) [I_n − D_N(T, s, 0) w_d(s)]^{−1} M(s). (7.52)
Therefore, the PTM w_vx(s, t) with respect to the output v is given for 0 ≤ t ≤ T by

w_vx(s, t) = v(t) e^{−st}
= e^{−st} [ e^{At} v_0(s) + ∫_0^t e^{A(t−σ)} m(σ) dσ B_2 ξ_0(s) ] + ∫_0^t e^{(A−sI)(t−σ)} dσ B_1. (7.53)
7.4 PTM as Function of the Argument s 293
Multiplying by e^{−st} and using the fact that for 0 < t < T

e^{−st} { C_1 [ e^{At} μ(A) (e^{sT} e^{−AT} − I)^{−1} + ∫_0^t e^{A(t−σ)} m(σ) dσ ] B_2 + D_L m(t) } = φ_L(T, s, t),

and moreover

C_1 (sI − A)^{−1} B_1 = K(s),

we obtain

w_zx(s, t) = φ_L(T, s, t) ξ_0(s) + K(s).

Using (7.52), we get (7.30).
3. Using Theorem 7.3, some general properties of the singular points of the
PTM wzx (s, t) can be investigated.
Theorem 7.4. Under the conditions of Theorem 7.3, the PTM w_zx(s, t) given by (7.30) is a meromorphic function of the argument s, i.e., all its singular points are poles. The set of poles of w_zx(s, t) belongs, for any t, to the set of roots of the equation

Δ(s) = det Q(s, α, β) = 0, (7.55)

where Q(s, α, β) is Matrix (7.38).
The coefficients of v_0(s) and ξ_0(s), as well as the last term on the right side of (7.57), are entire functions of s. Therefore, the claim of the theorem follows for 0 < t < T from the already proved properties of the matrices v_0(s) and ξ_0(s). Since w_zx(s, t) = w_zx(s, t + T), this result holds for all t.
s_{in} = −(1/T) ln ζ_i + 2nπj/T, (i = 1, …, q; n = 0, ±1, …). (7.60)

Proof. The matrix Q(s, α, β) in (7.38) depends only on e^{−sT}. Therefore, substituting ζ for e^{−sT} in (7.38), we obtain the polynomial matrix (7.58). The poles of the matrix Q^{−1}(ζ, α, β) coincide with the roots of the polynomial Δ̄(ζ), which are related to the poles of the matrices v_0(s), y_0(s), and ξ_0(s) by Equations (7.60). Due to (7.57), the same is valid for the poles of the PTM w_zx(s, t).
and
where
α(ζ) = α_0 + α_1 ζ + … + α_ρ ζ^ρ,
β(ζ) = β_0 + β_1 ζ + … + β_ρ ζ^ρ. (7.64)
2.
Definition 7.7. The standard sampled-data system (7.61)–(7.64) will be called internally stable if, with x(t) ≡ 0, for any solution of Equations (7.61)–(7.64) and for t > 0, k > 0, the following estimates hold:

with positive constants d_y and d_z. As follows from (7.57), the matrices C_1 and D_L do not influence the internal stability of the standard system (7.61)–(7.64).

In this section, we formulate some necessary and sufficient conditions for the internal stability of the standard sampled-data system. For brevity, we shall also use the term stability when we mean internal stability.

If x(t) ≡ 0, the standard sampled-data system can be associated with the discrete backward model generated from (7.19) with g_k = 0:
3.
Lemma 7.8. For the standard sampled-data system (7.61)–(7.64) to be internally stable, it is necessary and sufficient that the discrete backward model (7.67) be stable.

Proof. Necessity: Let the standard sampled-data system be stable. Then Estimates (7.65) and (7.66) hold. As a special case, for k > 0 we have

‖v(kT)‖ = ‖v_k‖ < d_v e^{−εkT}, ‖y(kT)‖ = ‖y_k‖ < d_y e^{−εkT}, ‖ξ_k‖ < d_ξ e^{−εkT}. (7.68)

With the notation e^{−εT} = ϑ, |ϑ| < 1, we obtain

Since Conditions (7.69) hold for all solutions of Equations (7.67), the discrete model is stable by definition.

Sufficiency: Let the discrete model (7.67) be stable. Then we have Inequalities (7.69), which can be written in the form (7.68). Due to (7.12) and (7.13), we have
(7.13), we have
A
v(kT + ) = e vk + eA() m() d B2 k
0
where
" "
" "
L1 = max e A
, L2 = max " e A()
m() d B2 "
0T 0T " 0
"
where
L = L1 dv + L2 d
is a constant. From (7.71), the following estimate can easily be derived:
4. Necessary and sufficient conditions for the internal stability of the system (7.61)–(7.64) are given by the following theorem.

Theorem 7.9. A necessary and sufficient condition for the standard sampled-data system (7.61)–(7.64) to be internally stable is that all eigenvalues of the matrix

Q̄(ζ, α, β) = [ I − ζ e^{AT}    O        −ζ e^{AT} μ(A) B_2 ]
              [ −C_2            I_n      O_{n×m}             ]    (7.72)
              [ O               −β(ζ)    α(ζ)                ]
Proof. Due to Lemma 7.8, the standard sampled-data system is stable iff the discrete model (7.67) is stable. The latter can be written in the form of a homogeneous backward-difference equation

Q̄(ζ, α, β) [ v_k ]
            [ y_k ] = O.
            [ ξ_k ]

Then, from Theorem 5.47, it follows that the discrete model (7.67) is stable iff Matrix (7.72) is stable. The claim regarding Matrix (7.72) follows from the equality

Q(s, α, β) = Q̄(ζ, α, β) |_{ζ = e^{−sT}}.
Corollary 7.10. Any controller ensuring, under the given assumptions, the internal stability of the closed-loop system is causal, i.e., det α_0 ≠ 0, because for det α_0 = 0, Matrix (7.72) is unstable.
Corollary 7.11. Theorem 7.9 can be formulated in an alternative way: a necessary and sufficient condition for the standard sampled-data system to be stable is that its characteristic polynomial (7.59) must be stable.
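As a toy illustration of the stability criterion (a hypothetical scalar example of my own, not the book's general matrix test), consider the unstable plant dv/dt = v + u with a zero-order hold and the static control law ξ_k = κ y_k. The backward model (7.46) with x = 0 then reduces to a scalar recursion whose pole must lie inside the unit disc.

```python
import numpy as np

# Hypothetical scalar example: plant dv/dt = v + u, y = v, zero-order hold
# with period T, static law xi_k = kappa * y_k.  The backward model gives
# v_k = e^{T} (1 + mu * kappa) v_{k-1} with mu = 1 - e^{-T}, so internal
# stability amounts to |e^{T} (1 + mu * kappa)| < 1.
T = 0.5
mu = 1.0 - np.exp(-T)                       # mu(A) for A = 1, ZOH

def closed_loop_pole(kappa):
    return np.exp(T) * (1.0 + mu * kappa)

assert abs(closed_loop_pole(0.0)) >= 1.0    # open loop is unstable
assert abs(closed_loop_pole(-1.2)) < 1.0    # a stabilising gain
assert abs(closed_loop_pole(-6.0)) >= 1.0   # too much gain destabilises
```

Sweeping κ this way reproduces, in miniature, the role of the polynomial pair (α(ζ), β(ζ)) in shaping the roots of the characteristic polynomial.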
2. Firstly, we consider the above-mentioned problems for the closed loop incorporated in the standard system in Fig. 7.3. This loop is shown in Fig. 7.9, where the control signal u is related to y by (7.4)–(7.7).
Let (Ā, B̄_2, C̄_2) be a realisation of the matrix N(p) in the form of the state equations

dv̄/dt = Ā v̄ + B̄_2 u, y = C̄_2 v̄ (7.73)

with a χ̄×1 state vector v̄ and constant matrices Ā, B̄_2, and C̄_2 of dimensions χ̄×χ̄, χ̄×m, and n×χ̄, respectively. Then the closed loop will be called stable if, for the system of equations (7.73) and (7.62)–(7.64), estimates similar to (7.65) hold:

‖v̄(t)‖ < d_v e^{−εt}, ‖u(t)‖ < d_u e^{−εt}, ‖ξ_k‖ < d_ξ e^{−εkT}, t > 0, k > 0.
7.6 Polynomial Stabilisation of the Standard SD System 299
Fig. 7.9. The closed loop: input x, plant N(p), and feedback of y through the digital controller chain ADC – ALG – DAC (sequences {ψ_k}, {ξ_k}) producing u

where

μ(Ā) = ∫_0^T e^{−Āτ} m(τ) dτ.
Let us have an ILMFD

w_N(ζ) = a_N^{−1}(ζ) b_N(ζ). (7.74)

dv̄/dt = Ā v̄ + B̄_2 u, y = C̄_2 v̄,
α(ζ) ξ_k = β(ζ) y_k,
u(t) = m(t − kT) ξ_k, kT < t < (k + 1)T.
As follows from Theorem 7.9, a necessary and sufficient condition for the stability of this system is that the matrix

Q̃(ζ, α, β) = [ I − ζ e^{ĀT}    O        −ζ e^{ĀT} μ(Ā) B̄_2 ]
              [ −C̄_2            I_n      O_{n×m}              ]    (7.77)
              [ O               −β(ζ)    α(ζ)                 ]

is stable. Hence the set of the pairs of stabilising polynomials (α(ζ), β(ζ)) coincides with the set of stabilising pairs for the nonsingular PMD

τ_N(ζ) = (I − ζ e^{ĀT}, ζ e^{ĀT} μ(Ā) B̄_2, C̄_2). (7.78)
Then the claim of the theorem for a given realisation (Ā, B̄_2, C̄_2) and the pair (a_N(ζ), b_N(ζ)) follows from Theorem 5.64. It remains to prove that the set of stabilising controllers does not depend on the choice of the realisation (Ā, B̄_2, C̄_2) and the pair (a_N(ζ), b_N(ζ)). With this aim in view, we notice that from the formulae of Section 6.8, it follows that

w_N(ζ) = (1/T) Σ_{k=−∞}^{∞} N(s + kjω) μ(s + kjω) |_{e^{−sT} = ζ}. (7.79)
4. A more complete result can be obtained under the assumption that the
poles of the matrix N (p) satisfy the strict conditions for non-pathological
behavior (6.124) and (6.125).
Proof. If (7.80) and (7.81) hold, then due to Theorem 6.30, the PMD (7.78) is minimal. Then, for the ILMFD

C̄_2 (I − ζ e^{ĀT})^{−1} = ā_1^{−1}(ζ) b̄_1(ζ),

we have

det ā_1(ζ) ∼ det(I − ζ e^{ĀT}).

Hence, from Lemma 2.9, it follows that for any ILMFD (7.74),

a_N(ζ) = ψ(ζ) ā_1(ζ)

is true with a unimodular matrix ψ(ζ). Therefore, in this case, due to (7.75), the polynomial

Δ_N(ζ) = const ≠ 0

is stable. Then the claim follows from Theorem 7.13.
5. A general criterion for the stabilisability of the closed loop is given by the following theorem.

Theorem 7.15. Let (A, B_2, C_2), of dimensions n, χ, m, be any realisation of the matrix N(p), and let (Ā, B̄_2, C̄_2) be one of its minimal realisations, with dimensions n, χ̄, m, such that χ > χ̄. Then the function

r(s) = det(sI − A) / det(sI − Ā) (7.82)

are equivalent, i.e., their transfer matrices coincide. Moreover, since the PMD τ_N is minimal, Relation (7.82) is a polynomial by Lemma 2.48. Let

det(sI − A) = (s − s_1)^{μ_1} ⋯ (s − s_q)^{μ_q}, μ_1 + … + μ_q = χ,
det(sI − Ā) = (s − s_1)^{μ̄_1} ⋯ (s − s_q)^{μ̄_q}, μ̄_1 + … + μ̄_q = χ̄.
Using Equation (4.71) for this and Matrix (7.77), and taking account of (7.85), we find

det Q̄(ζ, α, β) = det(I − ζ e^{AT}) det[α(ζ) − β(ζ) w_N(ζ)],
det Q̃(ζ, α, β) = det(I − ζ e^{ĀT}) det[α(ζ) − β(ζ) w_N(ζ)].
Re si < 0, (i = 1, . . . , ) .
6. Using the above results, we can consider the stabilisation problem for the
complete standard sampled-data system.
r(s) = det(sI − A) / det(sI − Ā) (7.89)
Proof. Using (7.61) and (7.62)–(7.64) and assuming x(t) ≡ 0, we can represent the standard sampled-data system in the form

dv/dt = Av + B_2 u, y = C_2 v,
α(ζ) ξ_k = β(ζ) y_k, (7.90)
u(t) = m(t − kT) ξ_k, kT < t < (k + 1)T.

Since

C_2 (pI − A)^{−1} B_2 = N(p),
2. The standard sampled-data system with the plant (7.92) will be called modal controllable if all roots of its characteristic polynomial are controllable, i.e., Δ̃(ζ) = const ≠ 0. Under the strict conditions for non-pathological behavior, necessary and sufficient conditions for the system to be modal controllable are given by the following theorem.

Theorem 7.17. Let the poles of the matrix

w(p) = [ K(p)  L(p) ] = [ C_1 ] (pI − A)^{−1} [ B_1  B_2 ] + [ O_r  D_L    ]    (7.96)
       [ M(p)  N(p) ]   [ C_2 ]                              [ O_n  O_{n×m} ]

satisfy Conditions (7.80) and (7.81). Then, a necessary and sufficient condition for the standard sampled-data system to be modal controllable is that the matrix N(p) dominates in the matrix w(p).
Proof. Sufficiency: Without loss of generality, we take $D_L = O_{rm}$ and assume that the standard representation is minimal. Let the matrix $N(p)$ dominate in Matrix (7.96). Then, due to Theorem 2.67, the realisation $(A, B_2, C_2)$ on the right-hand side of (7.96) is minimal. Construct the discrete model $D_{w\mu}(T, \zeta, t)$ of the matrix $w(p)\mu(p)$. Obviously, we have
$$D_{w\mu}(T, \zeta, t) = \begin{bmatrix} D_{K\mu}(T, \zeta, t) & D_{L\mu}(T, \zeta, t) \\ D_{M\mu}(T, \zeta, t) & D_{N\mu}(T, \zeta, t) \end{bmatrix}.$$
Using the second formula in (6.122) and (7.96), we obtain
$$D_{w\mu}(T, \zeta, t) = D_1(\zeta, t) + D_2(t),$$
where
$$D_1(\zeta, t) = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}(I - \zeta e^{AT})^{-1} e^{A(t-T)}\,\gamma(A)\begin{bmatrix} B_1 & B_2 \end{bmatrix},$$
$$D_2(t) = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}\int_t^T e^{A(t-\tau)}\,m(\tau)\,d\tau\;\begin{bmatrix} B_1 & B_2 \end{bmatrix}.$$
By virtue of Theorem 6.30, the right-hand side of the first equation defines a minimal standard representation of the matrix $D_1(\zeta, t)$. At the same time, the realisation $\left(e^{AT},\, e^{A(t-T)}\gamma(A)B_2,\, C_2\right)$ is also minimal. Therefore, we can take $A_0 = A$ in (7.89). Hence $r_1(s) = \mathrm{const} \ne 0$ and $\Delta(\zeta) = \mathrm{const} \ne 0$, and the sufficiency has been proven.
The necessity of the conditions of the theorem is seen by reversing the
above derivations.
Then the set of stabilising controllers $(\alpha(\zeta), \beta(\zeta))$ for the standard sampled-data system coincides with the set of solutions of the Diophantine equation
$$\alpha(\zeta)\,a_r(\zeta) - \beta(\zeta)\,b_r(\zeta) = D_l(\zeta),$$
where $D_l(\zeta)$ is any stable polynomial matrix.
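In the scalar case, such a Diophantine equation reduces to a linear system for the controller coefficients, obtained by equating powers of $\zeta$. A minimal numerical sketch (the helper name and the example polynomials are mine, not from the book; coefficients are stored lowest degree first):

```python
import numpy as np

def solve_diophantine(a, b, d, deg_alpha, deg_beta):
    """Solve alpha(z)*a(z) - beta(z)*b(z) = d(z) for scalar polynomials
    by stacking the convolution equations into one (Sylvester-type)
    linear system.  Coefficient lists are ordered lowest degree first."""
    n = max(len(d), deg_alpha + len(a), deg_beta + len(b))
    rhs = np.zeros(n)
    rhs[:len(d)] = d
    cols = []
    for i in range(deg_alpha + 1):        # columns for alpha_i:  z^i * a(z)
        col = np.zeros(n)
        col[i:i + len(a)] = a
        cols.append(col)
    for i in range(deg_beta + 1):         # columns for beta_i:  -z^i * b(z)
        col = np.zeros(n)
        col[i:i + len(b)] = -np.asarray(b, float)
        cols.append(col)
    sol, *_ = np.linalg.lstsq(np.column_stack(cols), rhs, rcond=None)
    return sol[:deg_alpha + 1], sol[deg_alpha + 1:]

# a(z) = z - 0.5, b(z) = z, target d(z) = 1:
# alpha*(z - 0.5) - beta*z = 1  is solved by alpha = -2, beta = -2.
alpha, beta = solve_diophantine([-0.5, 1.0], [0.0, 1.0], [1.0], 0, 0)
```

The same coefficient-matching idea extends to the matrix case, though there the ILMFD machinery of the text is needed.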
5. Any stabilising controller $(\alpha(\zeta), \beta(\zeta))$ for the standard sampled-data system fulfils $\det\alpha(0) \ne 0$, i.e., the matrix $\alpha(\zeta)$ is invertible. Therefore, any stabilising controller has a transfer matrix
$$w_d(\zeta) = \alpha^{-1}(\zeta)\,\beta(\zeta).$$
The following propositions hold:
c) The set of transfer matrices of all stabilising controllers for the standard sampled-data system can be written in the form
$$w_d(\zeta) = \left[\beta_0(\zeta) - \Phi(\zeta)\,b_N(\zeta)\right]\left[\alpha_0(\zeta) - \Phi(\zeta)\,a_N(\zeta)\right]^{-1},$$
where $\Phi(\zeta)$ is any stable rational matrix of compatible dimension.
d) The rational matrix $w_d(\zeta)$ is associated with a stabilising controller for a stabilisable standard system if and only if there exists any of the following representations:
$$w_d(\zeta) = F_1^{-1}(\zeta)\,F_2(\zeta), \qquad w_d(\zeta) = G_2(\zeta)\,G_1^{-1}(\zeta),$$
where the pairs of rational matrices $(F_1(\zeta), F_2(\zeta))$ and $[G_1(\zeta), G_2(\zeta)]$ are stable and satisfy the equations
$$F_1(\zeta)\,a_r(\zeta) - F_2(\zeta)\,b_r(\zeta) = I_m, \qquad a_l(\zeta)\,G_1(\zeta) - b_l(\zeta)\,G_2(\zeta) = I_n.$$
8
Analysis and Synthesis of SD Systems Under
Stochastic Excitation
$$K_x(\tau) = E\left[x(t)\,x'(t + \tau)\right],$$
which will be called the spectral density of the input signal, converges absolutely in some strip $-\sigma_0 \le \operatorname{Re} s \le \sigma_0$, where $\sigma_0$ is a positive number.
2. Let the block $L(p)$ in the matrix $w(p)$ (7.2) be at least proper and the remaining blocks be strictly proper. Let also the system (7.3)–(7.7) be internally stable. When the input of the standard sampled-data system is the above-mentioned signal, after the transient processes fade away, the steady-state stochastic process $z(t)$ is characterised by the covariance matrix [143, 148]
$$K_z(t_1, t_2) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} w(s, t_1)\,\Phi_x(s)\,w'(-s, t_2)\,e^{s(t_2 - t_1)}\,ds, \qquad (8.1)$$
4. Assume in particular that $\Phi_x(s) = I$, i.e., the input signal is white noise with uncorrelated components. For this case, we denote
$$r_z(t) = d_z(t).$$
Then (8.6) and (8.4) yield
$$r_z(t) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\left[w'(-s, t)\,w(s, t)\right]ds
= \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\left[w(s, t)\,w'(-s, t)\right]ds.$$
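For a purely discrete-time, stable, strictly proper model driven by unit white noise, the same trace integral can be evaluated without contour integration by solving a discrete Lyapunov equation. This is an illustrative cross-check, not the book's method; all names and the scalar example are mine.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def white_noise_output_variance(Ad, Bd, Cd):
    """Steady-state output variance trace E[z_k z_k'] for the stable model
    v_{k+1} = Ad v_k + Bd x_k, z_k = Cd v_k, with unit white noise x_k:
    solve P = Ad P Ad' + Bd Bd' and take trace(Cd P Cd')."""
    P = solve_discrete_lyapunov(Ad, Bd @ Bd.T)
    return np.trace(Cd @ P @ Cd.T)

# Scalar check: v_{k+1} = a v_k + x_k, z = v  gives  var = 1 / (1 - a^2).
a = 0.5
var = white_noise_output_variance(np.array([[a]]),
                                  np.array([[1.0]]),
                                  np.array([[1.0]]))
# var == 1 / (1 - 0.25) == 4/3
```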
8.1 Quasi-stationary Stochastic Processes in the Standard SD System 309
Moreover, for any function (matrix) $g(\zeta)$, we use as before the notation
$$\tilde g(s) = g(\zeta)\big|_{\zeta = e^{-sT}}.$$
As follows from [148], for a rational matrix $\Phi_x(s)$, the matrices (8.9) and (8.10) are rational matrices of the argument $\zeta = e^{-sT}$. Therefore, to calculate the integrals (8.8), we could profit from the technique described in [148]. There exists an alternative way to compute the integrals in (8.8). With this aim in view, we pass to the integration variable $\zeta$ in (8.8), such that
310 8 Analysis and Synthesis of SD Systems Under Stochastic Excitation
$$d_z(t) = \frac{1}{2\pi j}\oint\operatorname{trace} U_1(T, \zeta, t)\,\frac{d\zeta}{\zeta}
= \frac{1}{2\pi j}\oint\operatorname{trace} U_2(T, \zeta, t)\,\frac{d\zeta}{\zeta}, \qquad (8.13)$$
where, according to the notation (8.11),
$$U_i(T, \zeta, t) = \tilde U_i(T, s, t)\big|_{e^{-sT} = \zeta}, \qquad (i = 1, 2).$$
6.
Example 8.1. Let us nd the variance of the quasi-stationary output for the
simple single-loop system shown in Fig. 8.1, where the forming element is a
[Fig. 8.1: single-loop system: input $x$, forming element and integrator $1/s$ in the forward path, discrete controller $C$ in the feedback path, output $z$.]
with
$$v_0(s) = \frac{\left(1 - e^{-sT}\right)\beta(s)}{s\,\Delta(s)}, \qquad
\mu_0(s) = \frac{\left(1 - e^{-sT}\right)\alpha(s)}{s\,\Delta(s)}, \qquad (8.15)$$
$$c(s, t) = \frac{e^{st} - 1}{s},$$
where
$$\alpha(s) = \alpha_0 + \alpha_1 e^{-sT} + \dots + \alpha_\rho e^{-\rho sT}, \qquad
\beta(s) = \beta_0 + \beta_1 e^{-sT} + \dots + \beta_\rho e^{-\rho sT} \qquad (8.16)$$
and
$$\Delta(s) = \left(1 - e^{-sT}\right)\alpha(s) - T e^{-sT}\beta(s). \qquad (8.17)$$
Using (8.14)–(8.17), from (8.9) and (8.10), after fairly tedious calculations, it is found that for $0 \le t \le T$
$$\begin{aligned}
\tilde U_1(T, s, t) = \tilde U_2(T, s, t)
&= T\,\frac{\beta(s)\bar\beta(s)}{\Delta(s)\bar\Delta(s)}
+ tT\left[\frac{\beta(s)\bar\alpha(s)}{\Delta(s)\bar\Delta(s)} + \frac{\alpha(s)\bar\beta(s)}{\Delta(s)\bar\Delta(s)}\right] \\
&\quad + t^2 T\,\frac{\alpha(s)\bar\alpha(s)}{\Delta(s)\bar\Delta(s)}
+ t\left[\frac{\beta(s)}{\Delta(s)}\,e^{-sT} + \frac{\bar\beta(s)}{\bar\Delta(s)}\,e^{sT}\right] \\
&\quad + t^2\left[\frac{\alpha(s)}{\Delta(s)}\,e^{-sT} + \frac{\bar\alpha(s)}{\bar\Delta(s)}\,e^{sT}\right] + t. \qquad (8.18)
\end{aligned}$$
To derive Formula (8.18), we employed expressions for the sums of the following series:
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} v_0(s + kj\omega)\,\bar v_0(s + kj\omega) = T\,\frac{\beta(s)\bar\beta(s)}{\Delta(s)\bar\Delta(s)},$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} \mu_0(s + kj\omega)\,\bar\mu_0(s + kj\omega) = T\,\frac{\alpha(s)\bar\alpha(s)}{\Delta(s)\bar\Delta(s)},$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} v_0(s + kj\omega)\,\bar\mu_0(s + kj\omega) = T\,\frac{\beta(s)\bar\alpha(s)}{\Delta(s)\bar\Delta(s)},$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} v_0(s + kj\omega)\,\bar c(s + kj\omega, t) = \frac{\beta(s)}{\Delta(s)}\,e^{-sT}\,t,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} \mu_0(s + kj\omega)\,\bar c(s + kj\omega, t) = \frac{\alpha(s)}{\Delta(s)}\,e^{-sT}\,t,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} c(s + kj\omega, t)\,\bar c(s + kj\omega, t) = t$$
and
$$\frac{1}{T}\sum_{k=-\infty}^{\infty}\frac{e^{(s + kj\omega)t}}{(s + kj\omega)^2}
= \frac{\left(1 - e^{-sT}\right)t + T e^{-sT}}{\left(1 - e^{-sT}\right)^2}, \qquad 0 \le t \le T,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty}\frac{e^{(s + kj\omega)t}}{(s + kj\omega)^2}
= \frac{\left(1 - e^{-sT}\right)e^{-sT}\,t + T e^{-sT}}{\left(1 - e^{-sT}\right)^2}, \qquad -T \le t \le 0,$$
will be called the mean variance of the quasi-stationary output. Using here (8.6) and (8.7), we obtain
$$\bar d_z = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\left[\Phi_x(s)\,\tilde w_1(s)\right]ds
= \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\left[\tilde w_1(s)\,\Phi_x(s)\right]ds, \qquad (8.19)$$
where
$$\tilde w_1(s) = \frac{1}{T}\int_0^T w'(-s, t)\,w(s, t)\,dt. \qquad (8.20)$$
2. When $\Phi_x(s) = I$, for the mean variance we will use the special notation
$$\bar r_z = \frac{1}{T}\int_0^T r_z(t)\,dt.$$
The value
$$\|S\|_2 = +\sqrt{\bar r_z} \qquad (8.23)$$
will henceforth be called the $H_2$-norm of the stable standard sampled-data system $S$. Hence
$$\|S\|_2^2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\,\tilde w_1(s)\,ds. \qquad (8.24)$$
For further transformations, we write the right-hand side of (8.24) in the form
$$\|S\|_2^2 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\operatorname{trace}\, D_{\tilde w_1}(T, s, 0)\,ds,$$
Remark 8.3. The $H_2$-norm is defined directly by the PTM. This approach opens the possibility of defining the $H_2$-norm for any system possessing a PTM. Interesting results have already been published by the authors for the class of linear periodically time-varying systems [98, 100, 101, 88, 89]. In contrast to other approaches such as [200, 32, 203, 28, 204], the norm computation via the PTM yields closed formulae and requires evaluating only matrices of finite dimensions.
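As a hedged numerical illustration of the claim that $H_2$-type norms reduce to finite-dimensional computations: for a stable, strictly proper discrete-time system, the squared $H_2$-norm can be obtained either from a Lyapunov equation or by summing the impulse response, and the two routes must agree. The example system and function names below are mine.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def h2_norm_sq_lyap(Ad, Bd, Cd):
    # ||G||_2^2 = trace(Cd P Cd') with P = Ad P Ad' + Bd Bd'
    P = solve_discrete_lyapunov(Ad, Bd @ Bd.T)
    return np.trace(Cd @ P @ Cd.T)

def h2_norm_sq_impulse(Ad, Bd, Cd, n_terms=200):
    # ||G||_2^2 = sum_k trace(g_k g_k'),  g_k = Cd Ad^k Bd
    total, M = 0.0, Bd.copy()
    for _ in range(n_terms):
        g = Cd @ M
        total += np.trace(g @ g.T)
        M = Ad @ M
    return total

Ad = np.array([[0.5, 0.1], [0.0, 0.3]])
Bd = np.array([[1.0], [0.5]])
Cd = np.array([[1.0, 0.0]])
h2_a = h2_norm_sq_lyap(Ad, Bd, Cd)
h2_b = h2_norm_sq_impulse(Ad, Bd, Cd)
```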
where
$$D_L(s) = \frac{1}{T}\int_0^T \bar\varphi_{L\mu}(T, s, t)\,\varphi_{L\mu}(T, s, t)\,dt,$$
$$Q_L(s) = \frac{1}{T}\int_0^T \varphi_{L\mu}(T, s, t)\,dt = \frac{1}{T}\,L(s)\,\mu(s),$$
$$\bar Q_L(s) = \frac{1}{T}\int_0^T \bar\varphi_{L\mu}(T, s, t)\,dt = \frac{1}{T}\,\bar L(s)\,\bar\mu(s).$$
Using (8.28) in (8.24), we obtain
$$\begin{aligned}
\|S\|_2^2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\big[
&\bar M(s)\,\bar{\tilde R}_N(s)\,D_L(s)\,\tilde R_N(s)\,M(s) \\
&+ \bar M(s)\,\bar{\tilde R}_N(s)\,\bar Q_L(s)\,K(s) \qquad (8.29)\\
&+ \bar K(s)\,Q_L(s)\,\tilde R_N(s)\,M(s) + \bar K(s)\,K(s)\big]\,ds.
\end{aligned}$$
All matrices in the integrand, except for the matrix $\tilde R_N(s)$, are determined by the transfer matrix $w(s)$ of the continuous plant and are independent of the transfer matrix $\tilde w_d(s)$ of the controller. Moreover, each transfer matrix
8.3 Representing the PTM in Terms of the System Function 315
$\tilde w_d(s)$ of a stabilising controller is associated with a nonnegative value $\|S\|_2^2$. Therefore, the right-hand side of (8.29) can be considered as a functional defined over the set of transfer functions of stabilising controllers $\tilde w_d(s)$. Hence the following optimisation problem arises naturally.
$H_2$-problem. Let the matrix $w(p)$ in (7.2) be given, where the matrix $L(p)$ is at least proper and the remaining elements are strictly proper. Furthermore, the sampling period $T$ and the impulse form $m(t)$ are fixed. Find the transfer function of a stabilising controller $\tilde w_d(s)$ which minimises the functional (8.29).
with
$$D_{N\mu}(T, s, 0) = \varphi_{N\mu}(T, s, 0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} N(s + kj\omega)\,\mu(s + kj\omega)$$
and
$$\tilde w_d(s) = \tilde\alpha_l^{-1}(s)\,\tilde\beta_l(s), \qquad (8.31)$$
where
$$\tilde\alpha_l(s) = \alpha_0 + \alpha_1 e^{-sT} + \dots + \alpha_\rho e^{-\rho sT}, \qquad
\tilde\beta_l(s) = \beta_0 + \beta_1 e^{-sT} + \dots + \beta_\rho e^{-\rho sT}$$
are polynomial matrices in the variable $\zeta = e^{-sT}$. Moreover, $\mu(s)$ is the transfer function of the forming element (6.38). Matrix (8.31) will be called the transfer function of the controller.
$$w(p) = \begin{bmatrix} K(p) & L(p) \\ M(p) & N(p) \end{bmatrix},$$
where the block rows have dimensions $m$ and $n$, respectively. Henceforth, as above, we assume that the matrix $L(p)$ is at least proper and the remaining elements are strictly proper. The above matrix can be associated with the state equations (7.10)
$$\frac{dv}{dt} = Av + B_1 x + B_2 u, \qquad z = C_1 v + D_L u, \qquad y = C_2 v,$$
4. As follows from Theorems 7.3 and 7.4, the PTM (8.27) admits a representation of the form
$$w(s, t) = \frac{P_w(s, t)}{\Delta(s)}, \qquad (8.32)$$
where $P_w(s, t) = P_w(s, t + T)$ is a matrix whose elements are entire functions in $s$ for all $t$, and the function $\Delta(s)$ is given by
$$\Delta(s) = \det Q(s, \tilde\alpha_l, \tilde\beta_l), \qquad (8.33)$$
where $Q(s, \tilde\alpha_l, \tilde\beta_l)$ is a matrix of the form
$$Q(s, \tilde\alpha_l, \tilde\beta_l) = \begin{bmatrix}
I - e^{-sT} e^{AT} & O_n & -e^{-sT} e^{AT}\gamma(A)B_2 \\
-C_2 & I_n & O_{nm} \\
O_m & -\tilde\beta_l(s) & \tilde\alpha_l(s)
\end{bmatrix}.$$
where
$$Q(\zeta, \alpha_l, \beta_l) = \begin{bmatrix}
I - \zeta e^{AT} & O_n & -\zeta e^{AT}\gamma(A)B_2 \\
-C_2 & I_n & O_{nm} \\
O_m & -\beta_l(\zeta) & \alpha_l(\zeta)
\end{bmatrix}$$
with
$$\alpha_l(\zeta) = \alpha_0 + \alpha_1\zeta + \dots + \alpha_\rho\zeta^\rho, \qquad
\beta_l(\zeta) = \beta_0 + \beta_1\zeta + \dots + \beta_\rho\zeta^\rho. \qquad (8.35)$$
For brevity, we will refer to the matrices (8.35) as a controller; the matrix
$$w_d(\zeta) = \tilde w_d(s)\big|_{e^{-sT} = \zeta} = \alpha_l^{-1}(\zeta)\,\beta_l(\zeta),$$
as well as $\tilde w_d(s)$, will be called a transfer function of this controller.
$$w_N(\zeta) = D_{N\mu}(T, \zeta, 0) = a_l^{-1}(\zeta)\,b_l(\zeta), \qquad (8.36)$$
$$\Delta(\zeta) \doteq \lambda(\zeta)\,\Delta_d(\zeta), \qquad (8.38)$$
$$\Delta_d(\zeta) \doteq \det Q_N(\zeta, \alpha_l, \beta_l),$$
where
$$Q_N(\zeta, \alpha, \beta) = \begin{bmatrix} a_l(\zeta) & -b_l(\zeta) \\ -\beta_l(\zeta) & \alpha_l(\zeta) \end{bmatrix} \qquad (8.39)$$
is a polynomial matrix (7.76). If the stabilisability conditions hold, then the set of stabilising controllers for the standard sampled-data system coincides with the set of controllers (8.35) with stable matrices (8.39).
Then, as was proved before, the set of all causal stabilising controllers can be given by
$$\alpha_l(\zeta) = D_l(\zeta)\,\alpha_l^0(\zeta) - M_l(\zeta)\,b_l(\zeta), \qquad
\beta_l(\zeta) = D_l(\zeta)\,\beta_l^0(\zeta) - M_l(\zeta)\,a_l(\zeta), \qquad (8.41)$$
where
$$\Phi(\zeta) = D_l^{-1}(\zeta)\,M_l(\zeta)$$
is a stable rational matrix, which will hereinafter be called the system function of the standard sampled-data system.
$$w_N(\zeta) = D_{N\mu}(T, \zeta, 0) = b_r(\zeta)\,a_r^{-1}(\zeta) \qquad (8.42)$$
and let $(\alpha_l^0(\zeta), \beta_l^0(\zeta))$ and $[\alpha_r^0(\zeta), \beta_r^0(\zeta)]$ be two dual basic controllers corresponding to the IMFDs (8.36) and (8.42). These controllers will be called initial controllers. Then the transfer matrix of a stabilising controller admits the right representation
$$w_d(\zeta) = \beta_r(\zeta)\,\alpha_r^{-1}(\zeta) \qquad (8.43)$$
with
$$\alpha_r(\zeta) = \alpha_r^0(\zeta)\,D_r(\zeta) - b_r(\zeta)\,M_r(\zeta), \qquad
\beta_r(\zeta) = \beta_r^0(\zeta)\,D_r(\zeta) - a_r(\zeta)\,M_r(\zeta), \qquad (8.44)$$
$$w_d(\zeta) = V_2(\zeta)\,V_1^{-1}(\zeta), \qquad (8.46)$$
where
$$V_1(\zeta) = \left[a_l(\zeta) - b_l(\zeta)\,w_d(\zeta)\right]^{-1} = \alpha_r^0(\zeta) - b_r(\zeta)\,\Phi(\zeta),$$
$$V_2(\zeta) = w_d(\zeta)\left[a_l(\zeta) - b_l(\zeta)\,w_d(\zeta)\right]^{-1} = \beta_r^0(\zeta) - a_r(\zeta)\,\Phi(\zeta). \qquad (8.47)$$
8. Using (8.47), we can write the matrix $\tilde R_N(s)$ in (8.30) in terms of the system function $\Phi(\zeta)$. Indeed, using (8.30), (8.36), and (8.47), we find
$$\begin{aligned}
R_N(\zeta) = \tilde R_N(s)\big|_{e^{-sT} = \zeta}
&= w_d(\zeta)\left[I_n - D_{N\mu}(T, \zeta, 0)\,w_d(\zeta)\right]^{-1} \\
&= w_d(\zeta)\left[I_n - a_l^{-1}(\zeta)\,b_l(\zeta)\,w_d(\zeta)\right]^{-1} \qquad (8.48)\\
&= w_d(\zeta)\left[a_l(\zeta) - b_l(\zeta)\,w_d(\zeta)\right]^{-1} a_l(\zeta) = V_2(\zeta)\,a_l(\zeta).
\end{aligned}$$
Hence
$$\tilde R_N(s) = R_N(\zeta)\big|_{\zeta = e^{-sT}}
= \left[\tilde\beta_r^0(s) - \tilde a_r(s)\,\tilde\Phi(s)\right]\tilde a_l(s). \qquad (8.50)$$
where
$$\Psi(s, t) = \varphi_{L\mu}(T, s, t)\,\tilde a_r(s), \qquad \Theta(s) = \tilde a_l(s)\,M(s), \qquad (8.52)$$
$$\Pi(s, t) = \varphi_{L\mu}(T, s, t)\,\tilde\beta_r^0(s)\,\tilde a_l(s)\,M(s) + K(s).$$
Theorem 8.4. The poles of the matrix $\Pi(s, t)$ belong to the set of roots of the function $\tilde\lambda(s) = \lambda(e^{-sT})$, where $\lambda(\zeta)$ is the polynomial given by (8.37).
Proof. Assume $D_l(\zeta) = I$ and $M_l(\zeta) = O_{mn}$ in (8.41), i.e., we choose the initial controller $(\alpha_l^0(\zeta), \beta_l^0(\zeta))$. In this case, $\tilde\Phi(s) = O_{mn}$ and
$$w(s, t) = \Pi(s, t).$$
Since the controller $(\alpha_l^0(\zeta), \beta_l^0(\zeta))$ is a basic controller, we have (8.40), and from (8.38) it follows
$$\Delta(\zeta) \doteq \lambda(\zeta).$$
Assuming this, from (8.32) we get
$$w(s, t) = \Pi(s, t) = \frac{P_\Pi(s, t)}{\tilde\lambda(s)}, \qquad (8.53)$$
where the matrix $P_\Pi(s, t)$ is an entire function of the argument $s$. The claim of the theorem follows from (8.53).
Corollary 8.5. Let the standard sampled-data system be modal controllable, i.e., $\tilde\lambda(s) = \mathrm{const} \ne 0$. Then the matrix $\Pi(s, t)$ is an entire function in $s$.
Theorem 8.6. For any polynomial matrix $\Phi(\zeta)$, the set of poles of the matrix
$$G(s, t) = \Psi(s, t)\,\tilde\Phi(s)\,\Theta(s) \qquad (8.54)$$
belongs to the set of roots of the function $\tilde\lambda(s)$.
Proof. Let $\tilde\Phi(s) = \Phi(\zeta)\big|_{\zeta = e^{-sT}}$ with a polynomial matrix $\Phi(\zeta)$. Then, for any ILMFD
$$\Phi(\zeta) = D_l^{-1}(\zeta)\,M_l(\zeta),$$
the matrix $D_l(\zeta)$ is unimodular. Therefore, due to Theorem 4.1, the controller (8.41) is a basic controller. Hence we have $\Delta_d(\zeta) = \mathrm{const} \ne 0$. In this case, $\tilde\Delta_d(s) = \mathrm{const} \ne 0$ and (8.38) yields
$$\Delta(\zeta) \doteq \lambda(\zeta).$$
From this relation and (8.32), we obtain
$$w(s, t) = \frac{P_w(s, t)}{\tilde\lambda(s)},$$
where the matrix $P_w(s, t)$ is an entire function in $s$. Using (8.53) and the last equation, we obtain
$$G(s, t) = \Pi(s, t) - w(s, t) = \frac{P_G(s, t)}{\tilde\lambda(s)}, \qquad (8.55)$$
where the matrix $P_G(s, t)$ is an entire function in $s$.
10. In principle, for any $\Phi(\zeta)$, the right-hand side of (8.54) can be cancelled by a function $\tilde\lambda_1(s)$, where $\lambda_1(\zeta)$ is a polynomial independent of $t$. In this case, after cancellation, we obtain an expression similar to (8.55):
$$G(s, t) = \frac{P_{Gm}(s, t)}{\tilde\mu_m(s)}, \qquad (8.56)$$
where $\deg\mu_m(\zeta) < \deg\lambda(\zeta)$. If $\deg\mu_m(\zeta)$ has the minimal possible value independent of the choice of $\Phi(\zeta)$, the function (8.56) will be called globally irreducible.
Using (8.52), we can represent (8.56) in the form
$$\varphi_{L\mu}(T, s, t)\,\tilde a_r(s)\,\tilde\Phi(s)\,\tilde a_l(s)\,M(s) = \frac{P_{Gm}(s, t)}{\tilde\mu_m(s)} = G(s, t).$$
Hence
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} G_1(s + kj\omega, t)\,e^{(s + kj\omega)t}
= D_L(T, s, t)\,\tilde a_r(s)\,\tilde\Phi(s)\,\tilde a_l(s)\,D_M(T, s, t)
= \frac{N_{G1}(s, t)}{\tilde\mu_m(s)}, \qquad (8.57)$$
11. The following propositions prove some further cancellations in the calculation of the matrices $\Psi(s, t)$ and $\Theta(s)$ appearing in (8.52).
Theorem 8.8. For $0 < t < T$, let us have the irreducible representations
$$D_L(T, s, t)\,\tilde a_r(s) = \frac{\tilde N_L(s, t)}{\tilde\mu_L(s)}, \qquad (8.58)$$
$$\tilde a_l(s)\,D_M(T, s, t) = \frac{\tilde N_M(s, t)}{\tilde\mu_M(s)}, \qquad (8.59)$$
let the fractions
$$\frac{N_L(\zeta, t)}{\mu_L(\zeta)}, \qquad \frac{N_M(\zeta, t)}{\mu_M(\zeta)}$$
be irreducible and the fraction (8.57) be globally irreducible. Then the function
$$\kappa(\zeta) = \frac{\mu_L(\zeta)\,\mu_M(\zeta)}{\mu_m(\zeta)}$$
is a polynomial. Moreover,
$$\mu_L(\zeta)\,\mu_M(\zeta) \doteq \mu_m(\zeta).$$
$$A\,B = O_n \qquad (8.60)$$
Proof. Assume that the function (8.64) is not a polynomial. If $p(\zeta)$ is a GCD of the polynomials $d_0(\zeta)$ and $d_{AB}(\zeta)$, then
$$d_0(\zeta) = p(\zeta)\,d_1(\zeta), \qquad d_{AB}(\zeta) = p(\zeta)\,d_2(\zeta),$$
where the polynomials $d_1(\zeta)$ and $d_2(\zeta)$ are coprime and $\deg d_2(\zeta) > 0$. Substituting these equations into (8.63), we obtain
$$N_A(\zeta)\,\Phi(\zeta)\,N_B(\zeta) = N(\zeta)\,\frac{d_2(\zeta)}{d_1(\zeta)}.$$
Hence, for any root $\zeta_0$ of $d_2(\zeta)$, the equation
$$N_A(\zeta_0)\,\Phi(\zeta_0)\,N_B(\zeta_0) = O_n$$
can be written for any constant matrix $\Phi(\zeta_0)$. Then, with the help of Lemma 8.9, it follows that at least one of the following two equations holds:
$$N_A(\zeta_0) = O_{nm} \qquad \text{or} \qquad N_B(\zeta_0) = O.$$
In this case, at least one of the rational matrices (8.61) or (8.62) appears to be reducible. This contradicts the assumptions. Thus, $\deg d_2(\zeta) = 0$ and the function (8.64) is a polynomial.
Now, let the right-hand side of (8.63) be globally irreducible. We show that in this case $\deg d_1(\zeta) = 0$, so that we have (8.65). Indeed, if we assume the converse, we have $\deg d_0(\zeta) > \deg d_{AB}(\zeta)$. This contradicts the assumption that the right-hand side of (8.63) is globally irreducible.
$$\frac{N_L(\zeta, t)}{\mu_L(\zeta)}\;\Phi(\zeta)\;\frac{N_M(\zeta, t)}{\mu_M(\zeta)} = \frac{N_{G1}(\zeta)}{\mu_m(\zeta)}.$$
Since here the polynomial matrix $\Phi(\zeta)$ can be chosen arbitrarily, the claim of the theorem stems directly from Lemma 8.10.
Corollary 8.11. When, under the conditions of Theorem 8.8, the right-hand side of (8.55) is globally irreducible, then we have
Corollary 8.12. As follows from the above reasoning, the converse proposition is also valid: when, under the conditions of Theorem 8.8, Equation (8.66) holds, then the representations (8.53) and (8.55) are globally irreducible.
Theorem 8.13. Let the conditions of Theorem 8.8 hold. Then we have the irreducible representations
$$\Psi(s, t) = \varphi_{L\mu}(T, s, t)\,\tilde a_r(s) = \frac{\tilde P_\Psi(s, t)}{\tilde\mu_L(s)}, \qquad (8.67)$$
$$\Theta(s) = \tilde a_l(s)\,M(s) = \frac{\tilde P_\Theta(s)}{\tilde\mu_M(s)}, \qquad (8.68)$$
$$\Theta(s) = \tilde a_l(s)\,M(s) = \frac{\tilde P_{\Theta 1}(s)}{\tilde\mu_{M1}(s)},$$
where $\deg\mu_{M1}(\zeta) < \deg\mu_M(\zeta)$. With respect to (8.59), we thus obtain an expression of the form
$$\tilde a_l(s)\,D_M(T, s, t) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\Theta(s + kj\omega)\,e^{(s + kj\omega)t}
= \frac{\tilde N_{M1}(s, t)}{\tilde\mu_{M1}(s)}, \qquad (8.69)$$
$$L(s)\,\tilde a_r(s)\,\mu(s) = \frac{\tilde P_L(s)}{\tilde\mu_L(s)},$$
where
$$g_1(s) = \bar\Theta(s)\,\bar{\tilde\Phi}(s)\left[\frac{1}{T}\int_0^T \bar\Psi(s, t)\,\Psi(s, t)\,dt\right]\tilde\Phi(s)\,\Theta(s),$$
$$g_2(s) = \bar\Theta(s)\,\bar{\tilde\Phi}(s)\,\frac{1}{T}\int_0^T \bar\Psi(s, t)\,\Pi(s, t)\,dt, \qquad (8.70)$$
$$g_3(s) = \bar g_2(s) = \frac{1}{T}\int_0^T \bar\Pi(s, t)\,\Psi(s, t)\,dt\;\tilde\Phi(s)\,\Theta(s),$$
and
$$g_4(s) = \frac{1}{T}\int_0^T \bar\Pi(s, t)\,\Pi(s, t)\,dt.$$
Then,
$$\bar Q(s) = \frac{1}{T}\int_0^T \bar\Psi(s, t)\,\Pi(s, t)\,dt. \qquad (8.75)$$
Using (8.52), we find
$$\bar\Psi(s, t)\,\Pi(s, t)
= \bar{\tilde a}_r(s)\,\bar\varphi_{L\mu}(T, s, t)\,\varphi_{L\mu}(T, s, t)\,\tilde\beta_r^0(s)\,\tilde a_l(s)\,M(s)
+ \bar{\tilde a}_r(s)\,\bar\varphi_{L\mu}(T, s, t)\,K(s).$$
Substituting this into (8.75) and taking account of (8.72), after integration we find
8.4 Representing the H2 -norm in Terms of the System Function 327
$$\bar Q(s) = \bar{\tilde a}_r(s)\,\frac{1}{T}\,D_{\bar L L}(T, s, 0)\,\tilde\beta_r^0(s)\,\tilde a_l(s)\,M(s)
+ \bar{\tilde a}_r(s)\,\frac{1}{T}\,\bar L(s)\,\bar\mu(s)\,K(s)$$
and
$$Q(s) = \bar M(s)\,\bar{\tilde a}_l(s)\,\bar{\tilde\beta}_r^0(s)\,\frac{1}{T}\,D_{\bar L L}(T, s, 0)\,\tilde a_r(s)
+ \frac{1}{T}\,\bar K(s)\,L(s)\,\mu(s)\,\tilde a_r(s),$$
considering the identity
$$D_{\bar L L}(T, s, 0) = \bar D_{\bar L L}(T, s, 0).$$
where
$$J_1 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\big[
\bar\Theta(s)\,\bar{\tilde\Phi}(s)\,A_L(s)\,\tilde\Phi(s)\,\Theta(s)
- \bar\Theta(s)\,\bar{\tilde\Phi}(s)\,\bar Q(s)
- Q(s)\,\tilde\Phi(s)\,\Theta(s)\big]\,ds, \qquad (8.77)$$
$$J_2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\,g_4(s)\,ds.$$
Under the given assumptions, these integrals converge absolutely, i.e., all the integrands tend to zero as $|s|^{-2}$ for $|s| \to \infty$.
where the integral on the right-hand side converges absolutely. From (8.52),
(8.74) and (8.75), we have
$$\Theta(s)\,Q(s) = \tilde a_l(s)\,M(s)\,\bar M(s)\,\bar{\tilde a}_l(s)\,\bar{\tilde\beta}_r^0(s)\,\frac{1}{T}\,D_{\bar L L}(T, s, 0)\,\tilde a_r(s)
+ \frac{1}{T}\,\tilde a_l(s)\,M(s)\,\bar K(s)\,L(s)\,\mu(s)\,\tilde a_r(s),$$
$$\bar Q(s)\,\bar\Theta(s) = \bar{\tilde a}_r(s)\,\frac{1}{T}\,D_{\bar L L}(T, s, 0)\,\tilde\beta_r^0(s)\,\tilde a_l(s)\,M(s)\,\bar M(s)\,\bar{\tilde a}_l(s)
+ \frac{1}{T}\,\bar{\tilde a}_r(s)\,\bar L(s)\,\bar\mu(s)\,K(s)\,\bar M(s)\,\bar{\tilde a}_l(s).$$
Substitute this into the equation before and pass to an integral with finite integration limits. Then, using (8.12), we obtain the functional
$$J_1 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\operatorname{trace}\big[
\bar{\tilde\Phi}(s)\,A_L(s)\,\tilde\Phi(s)\,A_M(s)
- \bar{\tilde\Phi}(s)\,\bar C(s) - C(s)\,\tilde\Phi(s)\big]\,ds, \qquad (8.78)$$
where $\omega = 2\pi/T$, the matrix $A_L(s)$ is given by (8.73),
$$A_M(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\bar\Theta(s + kj\omega)\,\Theta(s + kj\omega)
= \bar{\tilde a}_l(s)\left[\frac{1}{T}\sum_{k=-\infty}^{\infty}\bar M(s + kj\omega)\,M(s + kj\omega)\right]\tilde a_l(s)
= \bar{\tilde a}_l(s)\,D_{\bar M M}(T, s, 0)\,\tilde a_l(s) \qquad (8.79)$$
and
$$C(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\Theta(s + kj\omega)\,Q(s + kj\omega)
= A_M(s)\,\bar{\tilde\beta}_r^0(s)\,\frac{1}{T}\,D_{\bar L L}(T, s, 0)\,\tilde a_r(s)
+ \tilde a_l(s)\,\frac{1}{T}\,D_{\bar M\bar K L}(T, s, 0)\,\tilde a_r(s), \qquad (8.80)$$
$$\bar C(s) = \bar{\tilde a}_r(s)\,\frac{1}{T}\,D_{\bar L L}(T, s, 0)\,\tilde\beta_r^0(s)\,A_M(s)
+ \bar{\tilde a}_r(s)\,\frac{1}{T}\,D_{\bar L K\bar M}(T, s, 0)\,\bar{\tilde a}_l(s).$$
6. Let us note several useful properties of the matrices AL (s), AM (s) and
C(s) appearing in the functional (8.78). We shall assume that (8.66) holds,
because this is true in almost all applied problems.
8.4 Representing the H2 -norm in Terms of the System Function 329
Theorem 8.16. The matrices $A_L(s)$ (8.73), $A_M(s)$ (8.79) and $C(s)$ (8.80) are rational periodic and admit the representations
$$A_L(s) = \frac{\tilde B_L(s)}{\tilde\mu_L(s)\,\bar{\tilde\mu}_L(s)}, \qquad
A_M(s) = \frac{\tilde B_M(s)}{\tilde\mu_M(s)\,\bar{\tilde\mu}_M(s)}, \qquad
C(s) = \frac{\tilde B_C(s)}{\tilde\mu_L(s)\,\bar{\tilde\mu}_L(s)\,\tilde\mu_M(s)\,\bar{\tilde\mu}_M(s)}
= \frac{\tilde B_C(s)}{\tilde\mu(s)\,\bar{\tilde\mu}(s)}, \qquad (8.81)$$
where the numerators are finite sums of the forms
$$\tilde B_L(s) = \sum_{k=-\nu}^{\nu} l_k\,e^{-ksT}, \quad l_{-k} = l_k', \qquad
\tilde B_M(s) = \sum_{k=-\nu}^{\nu} m_k\,e^{-ksT}, \quad m_{-k} = m_k', \qquad
\tilde B_C(s) = \sum_{k=-\nu}^{\nu} c_k\,e^{-ksT} \qquad (8.82)$$
with a suitable nonnegative integer $\nu$.
$$T\,A_L(s) = \sum_{k=-\infty}^{\infty}\bar{\tilde a}_r(s)\,\bar L(s + kj\omega)\,\bar\mu(s + kj\omega)\;L(s + kj\omega)\,\mu(s + kj\omega)\,\tilde a_r(s). \qquad (8.83)$$
Each summand on the right-hand side can have poles only at the roots of the product $\tilde\mu_1(s) = \tilde\mu_L(s)\,\bar{\tilde\mu}_L(s)$. Under the given assumptions, the series (8.83) converges absolutely and uniformly in any bounded part of the complex plane containing no roots of the function $\tilde\mu_1(s)$. Hence the sum of the series (8.83) can have poles only at the roots of the function $\tilde\mu_1(s)$. Therefore, the matrix $\tilde B_L(s) = T\,A_L(s)\,\tilde\mu_1(s)$ has no poles. Moreover, since $\bar A_L(s) = A_L(s)$, we have $\bar{\tilde B}_L(s) = \tilde B_L(s)$, and the first relation in (8.82) is proven. The remaining formulae in (8.82) are proved in a similar way using (8.66).
Corollary 8.17. When the original system is stabilisable, then the matrices
(8.81) do not possess poles on the imaginary axis.
and
$$C(\zeta) = A_M(\zeta)\,\bar\beta_r^0(\zeta)\,\frac{1}{T}\,D_{\bar L L}(T, \zeta, 0)\,a_r(\zeta)
+ \bar a_l(\zeta)\,\frac{1}{T}\,D_{\bar M\bar K L}(T, \zeta, 0)\,a_r(\zeta), \qquad (8.86)$$
$$\bar C(\zeta) = \bar a_r(\zeta)\,\frac{1}{T}\,D_{\bar L L}(T, \zeta, 0)\,\beta_r^0(\zeta)\,A_M(\zeta)
+ \bar a_r(\zeta)\,\frac{1}{T}\,D_{\bar L K\bar M}(T, \zeta, 0)\,\bar a_l(\zeta).$$
Per construction,
$$\bar A_L(\zeta) = A_L(\zeta), \qquad \bar A_M(\zeta) = A_M(\zeta).$$
where $\nu_1$ and $\nu_2$ are nonnegative integers and the $F_k$ are constant matrices. Substituting $e^{-sT} = \zeta$ in (8.81) and (8.82), we obtain
$$A_L(\zeta) = \frac{B_L(\zeta)}{\mu_L(\zeta)\,\mu_L(\zeta^{-1})}, \qquad
A_M(\zeta) = \frac{B_M(\zeta)}{\mu_M(\zeta)\,\mu_M(\zeta^{-1})}, \qquad
C(\zeta) = \frac{B_C(\zeta)}{\mu_L(\zeta)\,\mu_L(\zeta^{-1})\,\mu_M(\zeta)\,\mu_M(\zeta^{-1})}, \qquad (8.87)$$
8.5 Wiener-Hopf Method 331
$$B_C(\zeta) = \sum_{k=-\nu}^{\nu} c_k\,\zeta^k.$$
where
$$V_1^o(\zeta) = \alpha_r^0(\zeta) - b_r(\zeta)\,\Phi^o(\zeta), \qquad
V_2^o(\zeta) = \beta_r^0(\zeta) - a_r(\zeta)\,\Phi^o(\zeta). \qquad (8.91)$$
Proof. Since the matrix $\Phi^o(\zeta)$ is stable, the rational matrices (8.91) are also stable. Then, using the Bezout identity and the ILMFD (8.36), we have
$$a_l(\zeta)\,V_1^o(\zeta) - b_l(\zeta)\,V_2^o(\zeta)
= a_l(\zeta)\,\alpha_r^0(\zeta) - b_l(\zeta)\,\beta_r^0(\zeta)
- \left[a_l(\zeta)\,b_r(\zeta) - b_l(\zeta)\,a_r(\zeta)\right]\Phi^o(\zeta) = I_n.$$
After substituting this equation into (8.29) and some transformations, the integral (8.29) can be reduced to the form
$$\|S\|_2^2 = J_1^o + J_2,$$
where $J_1^o$ is given by (8.84) for $\Phi(\zeta) = \Phi^o(\zeta)$ and the value $J_2$ is constant. Per construction, the value $\|S\|_2^2$ is minimal. Therefore, Formula (8.90) gives the transfer matrix of an optimal stabilising controller.
$$R(\zeta) = R_+(\zeta) + R_-(\zeta), \qquad (8.94)$$
where the rational matrix $R_-(\zeta)$ is strictly proper and its poles incorporate all unstable poles of $R(\zeta)$. Such a separation will be called a principal separation.
c) The optimal matrix $\Phi^o(\zeta)$ is determined by the formula
$$\Phi^o(\zeta) = \Lambda^{-1}(\zeta)\,R_+(\zeta)\,\Gamma^{-1}(\zeta). \qquad (8.95)$$
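In the scalar rational case, a principal separation is an ordinary partial-fraction split in which the "unstable" fractions are collected separately. The sketch below is my own illustrative helper (simple poles only, with "unstable" read as poles on or outside the unit circle); it is not the book's algorithm.

```python
import numpy as np

def principal_separation(num, den, tol=1e-9):
    """Split R(z) = num(z)/den(z) (deg num < deg den, simple poles) into
    R = R_plus + R_minus, where R_minus collects the partial fractions at
    poles with |pole| >= 1 and R_plus the rest.  Returns two callables."""
    poles = np.roots(den)
    dden = np.polyder(den)
    # residue at a simple pole p:  num(p) / den'(p)
    res = [np.polyval(num, p) / np.polyval(dden, p) for p in poles]
    stable = [(r, p) for r, p in zip(res, poles) if abs(p) < 1 - tol]
    unstable = [(r, p) for r, p in zip(res, poles) if abs(p) >= 1 - tol]
    Rp = lambda z: sum(r / (z - p) for r, p in stable)
    Rm = lambda z: sum(r / (z - p) for r, p in unstable)
    return Rp, Rm

# R(z) = 1 / ((z - 0.5)(z - 2)): the pole at 2 goes into R_minus.
num = [1.0]
den = np.polymul([1.0, -0.5], [1.0, -2.0])
Rp, Rm = principal_separation(num, den)
```

At any test point the two parts must add up to the original function.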
$$\bar\Phi(\zeta)\,A_L(\zeta)\,\Phi(\zeta)\,A_M(\zeta) - \bar\Phi(\zeta)\,\bar C(\zeta) - C(\zeta)\,\Phi(\zeta)
= \overline{\left[\Lambda(\zeta)\Phi(\zeta)\Gamma(\zeta) - R(\zeta)\right]}\,\left[\Lambda(\zeta)\Phi(\zeta)\Gamma(\zeta) - R(\zeta)\right]
- \bar R(\zeta)\,R(\zeta)$$
can easily be verified (under the trace). Using the separation (8.94), the last relation can be transformed into
$$\begin{aligned}
&\bar\Phi(\zeta)\,A_L(\zeta)\,\Phi(\zeta)\,A_M(\zeta) - \bar\Phi(\zeta)\,\bar C(\zeta) - C(\zeta)\,\Phi(\zeta)\\
&\quad= \overline{\left[\Lambda\Phi\Gamma - R_+\right]}\left[\Lambda\Phi\Gamma - R_+\right]
+ \bar R_-(\zeta)\,R_-(\zeta)\\
&\qquad - \bar R_-(\zeta)\left[\Lambda\Phi\Gamma - R_+\right]
- \overline{\left[\Lambda\Phi\Gamma - R_+\right]}\,R_-(\zeta)
- \bar R(\zeta)\,R(\zeta).
\end{aligned}$$
Hence the functional (8.96) can be written in the form
$$J_w = J_{w1} + J_{w2} + J_{w3} + J_{w4},$$
where
$$J_{w1} = \frac{1}{2\pi j}\oint\operatorname{trace}\left\{\overline{\left[\Lambda\Phi\Gamma - R_+\right]}\left[\Lambda\Phi\Gamma - R_+\right]\right\}\frac{d\zeta}{\zeta},$$
$$J_{w2} = -\frac{1}{2\pi j}\oint\operatorname{trace}\left\{\bar R_-(\zeta)\left[\Lambda\Phi\Gamma - R_+\right]\right\}\frac{d\zeta}{\zeta}, \qquad (8.97)$$
$$J_{w3} = -\frac{1}{2\pi j}\oint\operatorname{trace}\left\{\overline{\left[\Lambda\Phi\Gamma - R_+\right]}\,R_-(\zeta)\right\}\frac{d\zeta}{\zeta},$$
$$J_{w4} = \frac{1}{2\pi j}\oint\operatorname{trace}\left\{\bar R_-(\zeta)\,R_-(\zeta) - \bar R(\zeta)\,R(\zeta)\right\}\frac{d\zeta}{\zeta}.$$
The integral $J_{w4}$ is independent of $\Phi(\zeta)$. As in the scalar case [146], it can be shown that $J_{w2} = J_{w3} = 0$. The integral $J_{w1}$ is nonnegative; its minimal value $J_{w1} = 0$ is reached for (8.95).
Corollary 8.23. The minimal value of the integral (8.92) is
$$J_{w\min} = J_{w4}.$$
$$R(\zeta) = R_+(\zeta) + R_-(\zeta),$$
where
$$R_+(\zeta) = \frac{\hat R_+(\zeta)}{\mu_L(\zeta)\,\mu_M(\zeta)} = \frac{\hat R_+(\zeta)}{\mu(\zeta)} \qquad (8.100)$$
with a polynomial matrix $\hat R_+(\zeta)$.
c) The optimal system function $\Phi^o(\zeta)$ is given by the formula
$$\Phi^o(\zeta) = \Lambda_L^{-1}(\zeta)\,R_+(\zeta)\,\Gamma_M^{-1}(\zeta). \qquad (8.101)$$
Proof. Let the factorisations (8.98) hold. Since, for the stabilisability of the system, the polynomials $\mu_L(\zeta)$ and $\mu_M(\zeta)$ must be stable, the following factorisations hold:
$$A_L(\zeta) = \bar\Lambda(\zeta)\,\Lambda(\zeta), \qquad A_M(\zeta) = \Gamma(\zeta)\,\bar\Gamma(\zeta), \qquad (8.102)$$
where
$$\Lambda(\zeta) = \frac{\Lambda_L(\zeta)}{\mu_L(\zeta)}, \qquad \Gamma(\zeta) = \frac{\Gamma_M(\zeta)}{\mu_M(\zeta)} \qquad (8.103)$$
are rational matrices, which are stable together with their inverses. From (8.103) we also have
$$\bar\Lambda(\zeta) = \frac{\Lambda_L(\zeta^{-1})}{\mu_L(\zeta^{-1})}, \qquad \bar\Gamma(\zeta) = \frac{\Gamma_M(\zeta^{-1})}{\mu_M(\zeta^{-1})}. \qquad (8.104)$$
$$C(\zeta) = \frac{B_C(\zeta)}{\mu_L(\zeta)\,\mu_L(\zeta^{-1})\,\mu_M(\zeta)\,\mu_M(\zeta^{-1})} = \frac{B_C(\zeta)}{\mu(\zeta)\,\mu(\zeta^{-1})},$$
$$\bar C(\zeta) = \frac{B_C(\zeta^{-1})}{\mu_L(\zeta)\,\mu_L(\zeta^{-1})\,\mu_M(\zeta)\,\mu_M(\zeta^{-1})} = \frac{\bar B_C(\zeta)}{\mu(\zeta)\,\mu(\zeta^{-1})}.$$
$$\Delta_d(\zeta) \doteq \det\Lambda_L(\zeta)\,\det\Gamma_M(\zeta).$$
4. From (8.76) and (8.97), it follows that the minimal value of $\|S\|_2^2$ is
$$\|S\|_2^2 = \frac{1}{2\pi j}\oint\operatorname{trace}\left[\bar R_-(\zeta)\,R_-(\zeta) - \bar R(\zeta)\,R(\zeta)\right]\frac{d\zeta}{\zeta}
+ \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\,g_4(s)\,ds.$$
$$\bar F(\zeta) = F'(\zeta^{-1}).$$
Obviously,
$$\overline{F_1(\zeta) + F_2(\zeta)} = \bar F_1(\zeta) + \bar F_2(\zeta).$$
where
$$R_1(\zeta) = \bar\Lambda^{-1}(\zeta)\,\bar a_r(\zeta)\,\frac{1}{T}\,D_{\bar L L}(T, \zeta, 0)\,\beta_r^0(\zeta)\,A_M(\zeta)\,\bar\Gamma^{-1}(\zeta), \qquad (8.106)$$
$$R_2(\zeta) = \bar\Lambda^{-1}(\zeta)\,\bar a_r(\zeta)\,\frac{1}{T}\,D_{\bar L K\bar M}(T, \zeta, 0)\,\bar a_l(\zeta)\,\bar\Gamma^{-1}(\zeta). \qquad (8.107)$$
Since
$$\frac{1}{T}\,\bar a_r(\zeta)\,D_{\bar L L}(T, \zeta, 0)\,a_r(\zeta) = A_L(\zeta), \qquad (8.108)$$
Matrix (8.106) can be written in the form
8.7 Modified Optimisation Algorithm 337
$$R_1(\zeta) = \bar\Lambda^{-1}(\zeta)\,A_L(\zeta)\,a_r^{-1}(\zeta)\,\beta_r^0(\zeta)\,A_M(\zeta)\,\bar\Gamma^{-1}(\zeta). \qquad (8.109)$$
$$R_1(\zeta) = \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\beta_r^0(\zeta)\,\Gamma(\zeta). \qquad (8.110)$$
$$R_-(\zeta) = R_1^-(\zeta) + R_2^-(\zeta)$$
with
$$R_1(\zeta) = \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\beta_r^0(\zeta)\,\Gamma(\zeta), \qquad (8.112)$$
where the matrix $R_2(\zeta)$ is the same as in (8.105). Therefore, to prove the lemma, it is sufficient to show
$$R_1^\ast(\zeta) = R_1(\zeta) - \Lambda(\zeta)\,Q(\zeta)\,\Gamma(\zeta),$$
where the second term on the right-hand side is a stable matrix, because the matrices $\Lambda(\zeta)$ and $\Gamma(\zeta)$ are stable. Therefore, (8.113) holds.
4.
Lemma 8.28. The transfer matrix of an optimal controller $w_d^o(\zeta)$ is independent of the choice of the initial controller.
where
$$\Xi(\zeta) = \bar\Lambda^{-1}(\zeta)\,\bar a_r(\zeta)\,\frac{1}{T}\,D_{\bar L K\bar M}(T, \zeta, 0)\,\bar a_l(\zeta)\,\bar\Gamma^{-1}(\zeta) - R_-(\zeta). \qquad (8.116)$$
Using Lemma 8.27 and (8.116), we find that the matrix $\Xi(\zeta)$ does not depend on the choice of initial controller. Hence only the first term on the right-hand side of (8.115) depends on the initial controller. From (8.115) and (8.95), we find the optimal system function
$$\Phi^o(\zeta) = a_r^{-1}(\zeta)\,\beta_r^0(\zeta) + \Lambda^{-1}(\zeta)\,\Xi(\zeta)\,\Gamma^{-1}(\zeta).$$
Using (8.91), we obtain the optimal matrices $V_1^o(\zeta)$ and $V_2^o(\zeta)$:
$$V_1^o(\zeta) = \alpha_r^0(\zeta) - b_r(\zeta)\,\Phi^o(\zeta)
= \alpha_r^0(\zeta) - b_r(\zeta)\,a_r^{-1}(\zeta)\,\beta_r^0(\zeta) - b_r(\zeta)\,\Lambda^{-1}(\zeta)\,\Xi(\zeta)\,\Gamma^{-1}(\zeta).$$
Since
$$\alpha_r^0(\zeta) - b_r(\zeta)\,a_r^{-1}(\zeta)\,\beta_r^0(\zeta)
= a_l^{-1}(\zeta)\left[a_l(\zeta)\,\alpha_r^0(\zeta) - b_l(\zeta)\,\beta_r^0(\zeta)\right] = a_l^{-1}(\zeta),$$
we get
$$V_1^o(\zeta) = a_l^{-1}(\zeta) - b_r(\zeta)\,\Lambda^{-1}(\zeta)\,\Xi(\zeta)\,\Gamma^{-1}(\zeta). \qquad (8.118)$$
The matrices (8.117) and (8.118) are independent of the matrix $\beta_r^0(\zeta)$. Then, using (8.90), we can find an expression for the transfer matrix of the optimal controller that is independent of $\beta_r^0(\zeta)$.
coincide neither with eigenvalues of the matrix $a_r(\zeta)$ nor with poles of the matrices $\Lambda(\zeta)$ and $\Gamma(\zeta)$. Then, Equation (8.110) can be written in the form
$$R_1(\zeta) = \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\beta_r^0(\zeta)\,b_l(\zeta)\,\bar b_l(\zeta)\left[b_l(\zeta)\,\bar b_l(\zeta)\right]^{-1}\Gamma(\zeta). \qquad (8.120)$$
$$\beta_r^0(\zeta)\,b_l(\zeta) + a_r(\zeta)\,\alpha_l^0(\zeta) = I_m.$$
Thus,
$$a_r^{-1}(\zeta)\,\beta_r^0(\zeta)\,b_l(\zeta) = a_r^{-1}(\zeta) - \alpha_l^0(\zeta).$$
$$R_1(\zeta) = R_{11}(\zeta) + R_{12}(\zeta),$$
where
$$R_{11}(\zeta) = \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\bar b_l(\zeta)\left[b_l(\zeta)\,\bar b_l(\zeta)\right]^{-1}\Gamma(\zeta)
= \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\Omega_l(\zeta)\,\Gamma(\zeta), \qquad (8.121)$$
$$R_{12}(\zeta) = -\Lambda(\zeta)\,\alpha_l^0(\zeta)\,\bar b_l(\zeta)\left[b_l(\zeta)\,\bar b_l(\zeta)\right]^{-1}\Gamma(\zeta)
= -\Lambda(\zeta)\,\alpha_l^0(\zeta)\,\Omega_l(\zeta)\,\Gamma(\zeta), \qquad (8.122)$$
with $\Omega_l(\zeta) = \bar b_l(\zeta)\left[b_l(\zeta)\,\bar b_l(\zeta)\right]^{-1}$.
$$R_1^-(\zeta) = R_{11}^-(\zeta) + R_{12}^-(\zeta).$$
But the parts of $R_{11}^-(\zeta)$ and $R_{12}^-(\zeta)$ generated by the poles of the matrix $\Omega_l(\zeta)$ cancel,
$$R_{11}^{\Omega}(\zeta) + R_{12}^{\Omega}(\zeta) = O_{mn},$$
because from (8.110) it follows that, under the given assumptions, the matrix $R_1(\zeta)$ has no poles that are simultaneously poles of the matrix $\Omega_l(\zeta)$. Then,
$$R_1^-(\zeta) = R_{11}^{a-}(\zeta),$$
where the right-hand side denotes the part of $R_{11}^-(\zeta)$ generated by the unstable eigenvalues of $a_r(\zeta)$; this matrix is independent of the choice of the initial controller. Using the last relation and (8.105), we obtain
$$R_-(\zeta) = R_1^-(\zeta) + R_2^-(\zeta) = R_{11}^{a-}(\zeta) + R_2^-(\zeta).$$
Per construction, this matrix is also independent of the choice of the initial controller. Substituting the last equation into (8.116) and using (8.117), (8.118) and (8.90), an expression can be derived for the optimal transfer matrix $w_d^o(\zeta)$ that is independent of the choice of the initial controller.
$$\alpha_r^0(\zeta)\,a_l(\zeta) - b_r(\zeta)\,\beta_l^0(\zeta) = I_n,$$
$$b_r(\zeta)\,\beta_l^0(\zeta)\,a_l^{-1}(\zeta) = \alpha_r^0(\zeta) - a_l^{-1}(\zeta),$$
$$\Omega_r(\zeta) = \left[\bar b_r(\zeta)\,b_r(\zeta)\right]^{-1}\bar b_r(\zeta)$$
1. To make the reading easier, we will use some additional terminology and notation.
The optimisation algorithms described above make it possible to find the optimal system matrix $\Phi^o(\zeta)$. Using the ILMFD and Formulae (8.41), (8.44) and (8.46), we are able to construct the transfer matrix of the optimal controller $w_{db}^o(\zeta)$, which will be called the backward transfer matrix. Using the ILMFD
$$w_{db}^o(\zeta) = \alpha_l^{-1}(\zeta)\,\beta_l(\zeta),$$
the matrix
$$Q_b(\zeta, \alpha_l, \beta_l) = \begin{bmatrix}
I - \zeta e^{AT} & O_n & -\zeta e^{AT}\gamma(A)B_2 \\
-C_2 & I_n & O_{nm} \\
O_m & -\beta_l(\zeta) & \alpha_l(\zeta)
\end{bmatrix} \qquad (8.125)$$
will be called the order of the optimal backward model. As shown above,
will be called the forward transfer function of the optimal controller. Using the ILMFD
$$w_{df}^o(z) = \alpha_f^{-1}(z)\,\beta_f(z),$$
a controllable forward model of the optimal discrete controller is found:
$$\alpha_f(q)\,\psi_k = \beta_f(q)\,y_k.$$
Together with (7.16), this equation determines a discrete forward model of the optimal system
$$q\,v_k = e^{AT} v_k + e^{AT}\gamma(A)B_2\,\psi_k + g_k, \qquad y_k = C_2 v_k, \qquad \alpha_f(q)\,\psi_k = \beta_f(q)\,y_k.$$
These difference equations are associated with the matrix
$$Q_f(z, \alpha_f, \beta_f) = \begin{bmatrix}
zI - e^{AT} & O_n & -e^{AT}\gamma(A)B_2 \\
-C_2 & I_n & O_{nm} \\
O_m & -\beta_f(z) & \alpha_f(z)
\end{bmatrix} \qquad (8.130)$$
that will be called the forward characteristic matrix of the optimal system. Below, we formulate some propositions determining properties of the characteristic matrices (8.125) and (8.130).
Moreover,
$$\Delta_f(z) \doteq z^{\rho}\,\Delta_b(z^{-1}), \qquad \Delta_b(\zeta) \doteq \zeta^{\rho}\,\Delta_f(\zeta^{-1}). \qquad (8.136)$$
Proof. Since Matrix (8.133) is strictly proper, there exists a minimal standard realisation of dimension $q$ in the form
$$w_f(z) = C^*(zI_q - U)^{-1}B^*.$$
Hence
$$\Delta_f(z) \doteq \frac{\det(zI - e^{AT})}{\det(zI_q - U)}. \qquad (8.138)$$
From (8.138), it follows that the matrix $U$ is nonsingular (this is a consequence of the non-singularity of the matrix $e^{AT}$). Comparing (8.128) with (8.133), we find
$$w_N(\zeta) = C^*(I_q - \zeta U)^{-1}B^*,$$
where the PMD $(I_q - \zeta U, B^*, C^*)$ is irreducible due to Lemma 5.35. Thus, for the ILMFD (8.128), we obtain
$$\operatorname{ord} a_l(\zeta) = \deg\det(I_q - \zeta U) = q,$$
because the matrix $U$ is nonsingular. From (8.129) and (8.139), we obtain
$$\Delta_b(\zeta) \doteq \frac{\det(I - \zeta e^{AT})}{\det(I_q - \zeta U)}.$$
Denote
$$\nu_f = \deg\det\alpha_f(z), \qquad \nu_b = \deg\det\alpha_l(\zeta).$$
Since the optimal controller $(\alpha_l(\zeta), \beta_l(\zeta))$ is stabilising, it is causal, i.e., the backward transfer function $w_{db}^o(\zeta)$ is analytical at the point $\zeta = 0$. Hence the forward transfer matrix of the controller $w_{df}^o(z)$ is at least proper. Therefore, the matrix $w_f(z)$ is strictly proper. Thus, the product $w_{df}^o(z)\,w_f(z)$ is a strictly proper matrix. Moreover, the rational fraction $\det\left[I_m - w_{df}^o(z)\,w_f(z)\right]$ is proper, because of
$$\lim_{z\to\infty}\det\left[I_m - w_{df}^o(z)\,w_f(z)\right] = 1.$$
$$\det\left[I_m - w_{df}^o(z)\,w_f(z)\right] = \frac{z^{\varkappa} + b_1 z^{\varkappa-1} + \dots}{z^{\varkappa} + a_1 z^{\varkappa-1} + \dots},$$
$$\det Q_{1f}(z) = \det a_f(z)\,\det\alpha_f(z)\;\frac{z^{\varkappa} + b_1 z^{\varkappa-1} + \dots}{z^{\varkappa} + a_1 z^{\varkappa-1} + \dots}.$$
Since the right-hand side is a polynomial, we obtain
$$\deg\det Q_{1f}(z) = \deg\det a_f(z) + \deg\det\alpha_f(z).$$
Lemma 8.31. Let us have unimodular matrices $m(z)$ and $n(z)$, such that the matrices
$$\hat a_f(z) = m(z)\,a_f(z), \qquad \hat\alpha_f(z) = n(z)\,\alpha_f(z)$$
are row reduced. Then the matrix
$$\hat Q_{1f}(z) = \operatorname{diag}\{m(z), n(z)\}\;Q_{1f}(z) \qquad (8.142)$$
is row reduced.
$$\hat a_f(z) = \operatorname{diag}\{z^{a_1}, \dots, z^{a_n}\}\,A_0 + \hat a_{1f}(z), \qquad a_1 + \dots + a_n = q,$$
$$\hat\alpha_f(z) = \operatorname{diag}\{z^{\epsilon_1}, \dots, z^{\epsilon_n}\}\,B_0 + \hat\alpha_{1f}(z), \qquad \epsilon_1 + \dots + \epsilon_n = \nu_f,$$
where $\det A_0 \ne 0$ and $\det B_0 \ne 0$. Hereby, the degree of the $i$-th row of the matrix $\hat a_{1f}(z)$ is less than $a_i$, and the degree of the $i$-th row of the matrix $\hat\alpha_{1f}(z)$ is less than $\epsilon_i$. Moreover,
$$\hat a_f^{-1}(z)\,\hat b_f(z) = a_f^{-1}(z)\,b_f(z) = w_f(z),$$
$$d_b(\zeta) \doteq \zeta^{\,q+\nu_f}\,d_f(\zeta^{-1}), \qquad (8.145)$$
Proof. Since Matrix (8.143) is row reduced and has the form (8.144), the matrix
$$\hat Q_{1b}(\zeta) = \operatorname{diag}\{\zeta^{a_1}, \dots, \zeta^{a_n}, \zeta^{\epsilon_1}, \dots, \zeta^{\epsilon_m}\}\;\hat Q_{1f}(\zeta^{-1}) \qquad (8.146)$$
defines a backward eigenoperator associated with the operator $\hat Q_{1f}(z)$. Then the following formula stems from Corollary 5.39:
$$\det\hat Q_{1b}(\zeta) \doteq \zeta^{\,q+\nu_f}\,\det\hat Q_{1f}(\zeta^{-1}). \qquad (8.147)$$
Per construction,
$$d_f(z) = \det\hat Q_{1f}(z) \doteq \det Q_{1f}(z), \qquad (8.148)$$
because the matrices $Q_{1f}(z)$ and $\hat Q_{1f}(z)$ are left-equivalent. Let us show the relation
$$d_b(\zeta) = \det\hat Q_{1b}(\zeta) \doteq \det Q_{1b}(\zeta). \qquad (8.149)$$
Notice that Matrix (8.146) can be represented in the form
$$\hat Q_{1b}(\zeta) = \begin{bmatrix}
\operatorname{diag}\{\zeta^{a_1}, \dots, \zeta^{a_n}\}\,\hat a_f(\zeta^{-1}) & \operatorname{diag}\{\zeta^{a_1}, \dots, \zeta^{a_n}\}\,\hat b_f(\zeta^{-1}) \\
\operatorname{diag}\{\zeta^{\epsilon_1}, \dots, \zeta^{\epsilon_m}\}\,\hat\beta_f(\zeta^{-1}) & \operatorname{diag}\{\zeta^{\epsilon_1}, \dots, \zeta^{\epsilon_m}\}\,\hat\alpha_f(\zeta^{-1})
\end{bmatrix}
= \begin{bmatrix} a_{1l}(\zeta) & b_{1l}(\zeta) \\ \beta_{1l}(\zeta) & \alpha_{1l}(\zeta) \end{bmatrix}, \qquad (8.150)$$
where the pairs $(a_{1l}(\zeta), b_{1l}(\zeta))$ and $(\alpha_{1l}(\zeta), \beta_{1l}(\zeta))$ are irreducible due to Lemma 5.34. We have
$$a_{1l}^{-1}(\zeta)\,b_{1l}(\zeta) = \hat a_f^{-1}(\zeta^{-1})\,\hat b_f(\zeta^{-1}) = w_f(\zeta^{-1}) = w_b(\zeta)$$
and the left-hand side is an ILMFD. On the other hand, the right-hand side of (8.128) is also an ILMFD and the following equations hold:
$$a_{1l}(\zeta) = \theta(\zeta)\,a_l(\zeta), \qquad b_{1l}(\zeta) = \theta(\zeta)\,b_l(\zeta), \qquad (8.151)$$
where $\theta(\zeta)$ is a unimodular matrix. In a similar way, it can be shown that
$$\beta_{1l}(\zeta) = \eta(\zeta)\,\beta_l(\zeta), \qquad \alpha_{1l}(\zeta) = \eta(\zeta)\,\alpha_l(\zeta) \qquad (8.152)$$
with a unimodular matrix $\eta(\zeta)$. Substituting (8.151) and (8.152) into (8.150), we find
$$\hat Q_{1b}(\zeta) = \operatorname{diag}\{\theta(\zeta), \eta(\zeta)\}\;Q_{1b}(\zeta),$$
i.e., the matrices $Q_{1b}(\zeta)$ and $\hat Q_{1b}(\zeta)$ are left-equivalent. Therefore, Relations (8.149) hold. Then Relation (8.145) directly follows from (8.147)–(8.149).
On the basis of Lemmata 8.29–8.32, the following theorem will be proved.
Theorem 8.33. Let $\Delta_f(z)$ and $\Delta_b(\zeta)$ be the forward and backward characteristic polynomials of the optimal system. Then,
$$\rho_f = \deg\Delta_f(z) = \rho + q + \nu_f, \qquad
\rho_b = \deg\Delta_b(\zeta) = \rho + \deg\det D_l(\zeta),$$
where the number $\rho$ is determined by (8.135), and the number $\rho_0$ of zero roots of the polynomial $\Delta_f(z)$ is
$$\rho_0 = q + \nu_f - \deg\det D_l(\zeta).$$
In this case, the polynomials $\Delta_f(z)$ and $\Delta_b(\zeta)$ are related by
$$\Delta_b(\zeta) \doteq \zeta^{\rho_f}\,\Delta_f(\zeta^{-1}),$$
i.e., $\Delta_b(\zeta)$ is equivalent to the reciprocal polynomial of $\Delta_f(z)$.
Proof. The proof is left as an exercise for the reader.
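The relation between the forward and backward characteristic polynomials is the usual reciprocal-polynomial correspondence $z \leftrightarrow \zeta = z^{-1}$: reversing the coefficient list inverts the nonzero roots. A two-line numerical check (the example polynomial is mine):

```python
import numpy as np

def reciprocal(poly):
    """Reciprocal polynomial: for f(z) = c_0 z^n + ... + c_n (highest
    degree first), return z^n f(1/z), i.e. the reversed coefficient list."""
    return np.asarray(poly)[::-1]

f = np.array([1.0, -0.9, 0.2])   # f(z) = (z - 0.5)(z - 0.4)
b = reciprocal(f)                # b(z) = 0.2 z^2 - 0.9 z + 1
# the nonzero roots of b are the reciprocals of the roots of f
rf = np.roots(f)
rb = np.roots(b)
```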
9
H2 Optimisation of a Single-loop Multivariable
SD System
1. The aforesaid approach for solving the H2 -problem is fairly general and
can be applied to any sampled-data system that can be represented in the
standard form. Nevertheless, as will be shown in this chapter, the specic
structure of a system and algebraic properties of the transfer matrices of the
continuous-time blocks can play an important role for solving the H2 -problem.
In this case, we can nd possibilities for some additional cancellations, extract-
ing matrix divisors and so on. Moreover, this makes it possible to investigate
important additional properties of the optimal solutions.
2. In this chapter, the above ideas are exemplarily illustrated by the single-
loop system shown in Fig. 9.1, where F (s), Q(s) and G(s) are rational matrices
[Fig. 9.1: single-loop sampled-data system: input $x$; discrete controller $C$ with sampling period $T$; continuous blocks $G(s)$ and $F(s)$ in the forward path producing the signals $h_1$ and $v$; feedback through $Q(s)$.]
will be taken as the output of the system. If the system is internally stable
and x(t) is a stationary centered vector, the covariance matrix of the quasi-
stationary output is given by
Since
2 h1 (t1 )h1 (t2 ) h1 (t1 )v (t2 )
z(t1 )z (t2 ) = ,
v(t1 )h1 (t2 ) v(t1 )v (t2 )
we have
where dh1 and dv are the mean variances of the corresponding output vectors.
To solve the H2 -problem, it is required to nd a stabilising discrete controller,
such that the right-hand side of (9.2) reaches the minimum.
w(s, t) = [ whx (s, t) ; wvx (s, t) ] ,      (9.3)
where whx (s, t) and wvx (s, t) are the PTMs of the system from the input x to
the outputs h and v, respectively.
2. To find the PTM wvx (s, t), we assume according to the previously exposed
approach
x(t) = est I , v(t) = wvx (s, t)est , wvx (s, t) = wvx (s, t + T )
and
y(t) = wyx (s, t)est , wyx (s, t) = wyx (s, t + T ) , (9.4)
9.2 General Properties
Fig. 9.2. Open-loop system corresponding to Fig. 9.1, driven by wyx (s, 0)e^{st}
where wyx (s, t) is the PTM from the input x to the output y. The matrix
wyx (s, t) is assumed to be continuous in t. For our purpose, it suffices to
assume that the matrix Q(s)F (s)G(s) is strictly proper.
Consider the open-loop system shown in Fig. 9.2. The exp.per. output y(t)
is expressed by
y(t) = QF G (T, s, t)w d (s)wyx (s, 0)est + Q(s)F (s)est .
Returning to Fig. 9.1 and using the last equation, we immediately get
wvx (s, t) = φFG (T, s, t) RQFG (s) Q(s)F (s) + F (s) ,      (9.5)
where
RQFG (s) = wd (s) [In − DQFG (T, s, 0) wd (s)]⁻¹ .
where the matrix G(p) is assumed to be at least proper. Combining (9.5) and
(9.7), we find that the PTM (9.3) has the form
w(s, t) = L (T, s, t)RN (s)M (s) + K(s) (9.8)
with
K(s) = [ O ; F (s) ] ,   L(s) = [ G(s) ; F (s)G(s) ] ,
M (s) = Q(s)F (s) ,   N (s) = Q(s)F (s)G(s) .      (9.9)
Under the given assumptions, the matrix L(s) is at least proper and the
remaining matrices in (9.9) are strictly proper. The matrix w(p) associated
with the PTM (9.8) has the form
w(p) = [ K(p)  L(p) ; M (p)  N (p) ] = [ O  G(p) ; F (p)  F (p)G(p) ; Q(p)F (p)  Q(p)F (p)G(p) ] .      (9.10)
are normal.
II The fraction
NQ (s)NF (s)NG (s)
N (s) = Q(s)F (s)G(s) = (9.13)
dQ (s)dF (s)dG (s)
is irreducible.
III The poles of the matrix N (s) should satisfy the strict conditions for non-
pathological behavior (6.124) and (6.125). Moreover, it is assumed that
the number of inputs and outputs of any continuous-time block does not
exceed the McMillan-degree of its transfer function.
These assumptions are introduced for the sake of simplicity of the solution.
They are satisfied for the vast majority of applied problems.
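In the scalar case, Assumption II amounts to requiring that the numerator and denominator of Q(s)F(s)G(s) have no common roots, which is easy to test numerically. A minimal sketch (the blocks Q, F, G below are illustrative examples, not taken from the text):

```python
import numpy as np

def common_roots(p, q, tol=1e-8):
    """Return the roots shared by two polynomials given as coefficient lists."""
    rp, rq = np.roots(p), np.roots(q)
    return [r for r in rp if rq.size and np.min(np.abs(rq - r)) < tol]

# Hypothetical scalar blocks Q(s)=1/(s+1), F(s)=(s+1)/(s+2), G(s)=1/(s+3):
num = np.polymul([1.0, 1.0], [1.0])                     # product of numerators
den = np.polymul(np.polymul([1, 1], [1, 2]), [1, 3])    # product of denominators
print(common_roots(num, den))   # a shared root at s = -1 -> fraction reducible
```

A non-empty result flags a violation of Assumption II for these illustrative blocks.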
where
L1 (s) = [ I ; F (s) ] = (1/dF (s)) [ dF (s)I ; NF (s) ] = NL1 (s)/dF (s) .
Let us show that this matrix is normal. Indeed, since the matrix F (s) is
normal, all second-order minors of the matrix NF (s) are divisible by dF (s).
Obviously, the same is true for all second-order minors of the matrix NL1 (s).
Thus, both factors on the right-hand side of (9.15) are normal matrices and their
product is irreducible, because the product F (s)G(s) is irreducible. Therefore,
the matrix L(s) is normal.
Lemma 9.5. Matrix (9.10) is normal. Moreover, the matrix N (s) dominates
in w(s).
Each factor on the right-hand side of (9.16) is a normal matrix. This state-
ment is proved similarly to the proof of Lemma 9.3. Moreover, Matrix (9.13)
is irreducible, such that the product on the right-hand side of (9.16) is irre-
ducible. Hence Matrix (9.16) is normal. It remains to prove that the matrix
N (s) = Q(s)F (s)G(s) dominates in Matrix (9.10).
Denote
νQ = deg dQ (s) ,   νF = deg dF (s) ,   νG = deg dG (s) .
Then
Mdeg w(p) = νQ + νF + νG = ν .
Hence
Mdeg w(p) = Mdeg N (p) .
This equation means that the matrix N (p) dominates in Matrix (9.10).
5. Let a minimal standard realisation of the matrix N (p) have the form
where the matrix A is cyclic. Then, as follows from Theorem 2.67, the minimal
standard realisation of the matrix w(p) can be written in the form
w(p) = [ C1 (pI − A)⁻¹ B1   C1 (pI − A)⁻¹ B2 + DL ; C2 (pI − A)⁻¹ B1   C2 (pI − A)⁻¹ B2 ] ,      (9.18)
or equivalently,
w(p) = [ C1 ; C2 ] (pI − A)⁻¹ [ B1  B2 ] + [ O  DL ; O  O ] .      (9.19)
dv/dt = Av + B1 x + B2 u ,
z = C1 v + DL u ,   y = C2 v .
9.3 Stabilisation
1.
Theorem 9.6. Let Assumptions IIII on page 350 hold. Then the single-loop
system shown in Fig. 9.1 is modal controllable (hence, is stabilisable).
DN (T, ζ, 0) = al⁻¹(ζ) bl (ζ) = br (ζ) ar⁻¹(ζ) ,      (9.20)
ψ(ζ) = det(I − ζ e^{AT}) / det al (ζ) = const. ≠ 0 ,      (9.22)
2.
Remark 9.7. In the general case, the assumption on irreducibility of the right-
hand side of (9.13) is essential. If the right-hand side of (9.13) is reducible by
an unstable factor, then the system in Fig. 9.1 is not stabilisable despite the
fact that all other assumptions of Theorem 9.6 hold.
In this case, the matrix N (p) is not dominant in the matrix w(p), because
it is analytical at p = 0, while some elements of the matrix w(p) have poles
at p = 0. Then the matrix A in the minimal standard realisation (9.19) will
have the eigenvalue zero. At the same time, the matrix A in the minimal
representation
N (p) = C2 (pI − A)⁻¹ B2      (9.24)
has no eigenvalue zero, because the right-hand side of (9.24) is analytical for
p = 0. Therefore, in the given case, (9.22) is not a stable polynomial. Hence
the single-loop system with (9.23) is not stabilisable.
over the set of stable rational matrices. The matrices AL (), AM (), C() and
C() can be calculated using Formulae (8.85) and (8.86). Then referring to
(9.9) and (8.86), we nd
1
AL () = a
r ()DL L (T, , 0)ar ()
T
1
= ar () 2 DG G (T, , 0) + DG F F G (T, , 0) ar () ,
T
(9.25)
AM () = al ()DM M (T, , 0)
al () = al ()DQF F Q (T, , 0)
al () .
1 1
C() = AM ()0r () DL L (T, , 0)ar () + al ()DM K L (T, , 0)ar ()
T T
1
= AM ()0r () 2 DG G (T, , 0) + DG F F G (T, , 0) ar () +
T
1
+ al ()DQF F F G (T, , 0)ar () ,
T
(9.26)
1 2
C() = a
r () DG G (T, , 0) + DG F F G (T, , 0) 0r ()AM ()
T
1
+ a r ()DG F F F Q (T, , 0)
al () ,
T
where the matrices al () and ar () are determined by the IMFDs (9.20).
2. Since under the given assumptions, the single-loop system is modal con-
trollable, all matrices in (9.25) and (9.26) are quasi-polynomials and can have
poles only at ζ = 0. Assume the factorisations
AL (ζ) = Λ̃(ζ)Λ(ζ) ,   AM (ζ) = Θ(ζ)Θ̃(ζ) ,      (9.27)
where the polynomial matrices Λ(ζ) and Θ(ζ) are stable. Then, there exists
an optimal controller, which can be found using the algorithm described in
Chapter 8:
a) Calculate the matrix
R(ζ) = Θ⁻¹(ζ) C(ζ) Λ⁻¹(ζ) ,
Δo (ζ) = Θ⁻¹(ζ) R+ (ζ) Λ⁻¹(ζ) .
d) The transfer matrix of the optimal controller wdo () is given by (8.90) and
(8.91).
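The factorisations (9.27) used in step a) are matrix spectral factorisations. In the scalar case the idea reduces to splitting the roots of ζ·A(ζ) into a root inside the unit circle and its reciprocal; a hedged sketch with illustrative coefficients (the matrix case needs the full machinery of [133]):

```python
import numpy as np

# Scalar spectral factorisation A(z) = sigma(z) * sigma(1/z) for a symmetric
# quasi-polynomial A(z) = a1/z + a0 + a1*z that is positive on |z| = 1.
def spectral_factor(a0, a1):
    roots = np.roots([a1, a0, a1])        # roots of z*A(z); they pair as (r, 1/r)
    r = roots[np.argmin(np.abs(roots))]   # the root inside the unit circle
    c = -a1 / r                           # then A(z) = c * (z - r) * (1/z - r)
    return np.sqrt(c), r                  # sigma(z) = sqrt(c) * (z - r)

s, r = spectral_factor(2.5, 1.0)          # illustrative coefficients
z = np.exp(1j * 0.7)                      # a point on the unit circle
A = 1.0 / z + 2.5 + z
sigma = lambda w: s * (w - r)
print(np.allclose(A, sigma(z) * sigma(1 / z)))  # True
```

The stable factor sigma(z) has its root at r inside the unit circle, mirroring the requirement that Λ(ζ), Θ(ζ) be stable.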
NM (s)
M (s) = Q(s)F (s) =
dM (s)
with
M (s) = CM (sI AM )1 BM ,
where B1 and C2 are constant matrices. For 0 < t < T , let us have as before
DM (T, ζ, t) = CM (I − ζ e^{AM T})⁻¹ e^{AM t} BM .      (9.31)
DM (T, ζ, t) = aM⁻¹(ζ) bM (ζ, t) ,   bM (ζ, t) = bM (ζ) e^{AM t} BM      (9.33)
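Closed forms of the type (9.31) can be checked numerically: writing z = e^{−sT}, the shifted discrete Laplace transform of a strictly proper M(s) = C(sI − A)⁻¹B satisfies, for 0 < t < T, the identity Σ_k C e^{A(t+kT)} B z^k = C (I − z e^{AT})⁻¹ e^{At} B. A sketch with illustrative A, B, C, T, z:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable realisation and sampling data (not from the text).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T, t, z = 0.5, 0.2, 0.3          # need |z| * spectral_radius(e^{AT}) < 1

# Closed form versus the defining series, truncated after enough terms.
closed = C @ np.linalg.inv(np.eye(2) - z * expm(A * T)) @ expm(A * t) @ B
series = sum((C @ expm(A * (t + k * T)) @ B) * z**k for k in range(200))
print(np.allclose(closed, series))  # True
```

The geometric matrix series Σ (z e^{AT})^k converges precisely under the stated radius condition, which is why the closed form holds.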
The set of matrices al () satisfying (9.36) coincides with the set of matrices
al () for the ILMFD (9.20), which satisfy Condition (9.21).
This fact follows from the minimality of the PMD (I − ζ e^{AT}, e^{At} B2 , C2 ). From (9.35) and
(9.36), we obtain an LMFD for the matrix DM (T, , t)
9.5 Factorisation of Quasi-polynomials of Type 1
DM (T, ζ, t) = al⁻¹(ζ) bl (ζ) e^{At} B1 = al⁻¹(ζ) bM (ζ, t) ,
al (ζ) = a1 (ζ) aM (ζ) ,      (9.37)
det a1 (ζ) = det al (ζ) / det aM (ζ) ∼ ΔG (ζ) .      (9.38)
Proof. Since
DM (T, s, t) = (1/T) Σ_{k=−∞}^{∞} M (s + kjω) e^{(s+kjω)t} ,
D̃M (T, s, t) = (1/T) Σ_{k=−∞}^{∞} M̃ (s + kjω) e^{(s+kjω)t} ,      (9.40)
This result can be proved by substituting (9.40) into (9.41) and integrating
term-wise. Substituting ζ for e^{−sT} in (9.41), we find
D̃MM (T, ζ, 0) = ∫₀ᵀ D̃M (T, ζ, t) DM (T, ζ⁻¹, t) dt .      (9.42)
x0 bM (, t) = O1n .
If no such row x0 exists, then the quasi-polynomial matrix PM (ζ) is positive
on the unit circle.
5.
Lemma 9.13. Let under the given assumptions the matrix aM () in the
ILMFD (9.32) be row reduced. Then,
PM () = mk k , mk = mk (9.45)
k=
with
0 M 1 , (9.46)
where
M = deg aM () deg det aM () = . (9.47)
Proof. Since the matrix DM (T, , t) is strictly proper for 0 < t < T , due to
Corollary 2.23 for the ILMFD (9.33), we have
At the same time, since the matrix aM () is row reduced, Equation (9.47)
follows from (9.29). Then using (9.44), we obtain (9.45) and (9.46).
6. Denote
qM () = det PM () .
Lemma 9.14. Let the matrix M (s) be normal and the product
NM (s)NM (s)
(s) =
M M (s)M (s) = (9.48)
dM (s)dM (s)
qM () = qk k , qk = qk , (9.49)
k=
0 n, (9.50)
where
J = ∫₀ᵀ e^{AM t} BM B′M e^{A′M t} dt .
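A finite-horizon Gramian of this type can be evaluated without quadrature via Van Loan's augmented-exponential trick; a sketch with illustrative matrices (here called A, B in place of AM, BM):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

# Illustrative data, not from the text.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
T = 0.7
n = A.shape[0]

# Van Loan: exponentiate the block matrix [[-A, BB'], [0, A']]; then
# J = int_0^T e^{At} B B' e^{A't} dt equals F3' * G2 of the result.
M = np.block([[-A, B @ B.T], [np.zeros((n, n)), A.T]])
E = expm(M * T)
J = E[n:, n:].T @ E[:n, n:]

# Cross-check: J solves the Lyapunov equation A X + X A' = e^{AT}BB'e^{A'T} - BB'.
rhs = expm(A * T) @ B @ B.T @ expm(A.T * T) - B @ B.T
X = solve_continuous_lyapunov(A, rhs)
print(np.allclose(J, X))  # True
```

Both routes are quadrature-free and agree to machine precision, which makes this a convenient way to tabulate such Gramians over a grid of sampling periods.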
From (9.51), it follows that the matrix DM M (T, , 0) is strictly proper
and can be written in the form
K()
DM M (T, , 0) = , (9.52)
M ()M ()
where
1
M () = em1 T em T ,
1
(9.53)
M () = em1 T em T ,
qM ()
fM () = .
det aM () det aM ( 1 )
Since
det aM () = M () , det aM ( 1 ) = M ( 1 )
with = const. = 0, a comparison of (9.56) with (9.58) yields
() = qM ()
AM () = al ()D MM (T, , 0)
al () = ak k , ak = ak , (9.60)
k=
where
0 1 (9.61)
and
= deg dN (s) = deg dQ (s) + deg dF (s) + deg dG (s) .
Proof. As was proved above, the sets of matrices al () from (9.59) and (9.20)
coincide. Taking into account (9.37) and (9.43), we rewrite the matrix AM ()
in the form
AM () = a1 ()PM ()
a1 () , (9.62)
where PM () is the quasi-polynomial (9.39). Using (9.44) from (9.62), we
obtain
T
AM () = [a1 ()bM (, t)] [a1 ()bM (, t)]' dt . (9.63)
0
Moreover, if we have the ILMFD (9.59), any pair (()al (), ()bl ()) with
any unimodular matrix () is also an ILMFD for Matrix (9.59). As a special
case, the matrix () can be chosen in such a way, that the matrix a1 () in
(9.37) is row reduced. Then with respect to (9.38), we obtain
From this and (9.63), the validity of (9.60) and (9.61) follows.
with
0 n. (9.66)
rM () = det a1 () det a1 ( 1 ) qk k ,
k=
Corollary 9.17. As follows from (9.67), the set of zeros of the function rM ()
includes the set of roots of the polynomial G () as well as those of the quasi-
polynomial G ( 1 ).
Theorem 9.18. Let Assumptions I-III on page 350 and the propositions of
Lemma 9.14 hold. Let also the quasi-polynomial AM () be positive on the unit
circle. Then, there exists a factorisation
AM () = () () = () ( 1 ) , (9.68)
det () rM
+
()
and the matrices al (), aM () in the ILMFDs (9.32), (9.59) can be chosen in
such a way that
deg () 1 .
Proof. With respect to the above results, the proof is a direct corollary of the
general theorem about factorisation given in [133].
Remark 9.19. As follows from Corollary 9.17, if the polynomial dG (s) has roots
on the imaginary axis, then the quasi-polynomial rM () has roots on the unit
circle. In this case, the factorisations (9.68) and (9.69) are impossible.
dG (s) = (s g1 )1 (s g )
Re gi < 0, (i = 1, . . . , ); Re gi > 0, (i = + 1, . . . , ) .
+
Then the polynomial rM () can be represented in the form
+
rM () = + +
G ()r1M () ,
where + +
G (), r1M () are stable polynomials and
g1 T 1
+1
G () = e
+ eg T eg+1 T eg T ,
NL (s)
L(s) = , Mdeg L(s) = ,
dL (s)
where
dL (s) = (s 1 )1 (s m )m , 1 + . . . + m =
with the minimal standard representation
L(s) = CL (sI AL )1 BL + DL .
Let also
μ(s) = ∫₀ᵀ e^{−sτ} m(τ ) dτ
be the transfer function of the forming element. Then for 0 < t < T from
be the transfer function of the forming element. Then for 0 < t < T from
(6.90), (6.100) and (6.103), we have
DLμ (T, s, t) = (1/T) Σ_{k=−∞}^{∞} L(s + kjω) μ(s + kjω) e^{(s+kjω)t}
= CL h(AL , t) BL + CL μ(AL ) e^{AL t} (I − e^{−sT} e^{AL T})⁻¹ BL + DL m(t) ,      (9.70)
where
h(AL , t) = ∫ₜᵀ e^{AL (t−τ )} m(τ ) dτ .
where
wL (ζ) = (I − ζ e^{AL T})⁻¹ BL ,      (9.72)
and
DL (t) = CL h(AL , t) BL + DL m(t)      (9.73)
is a matrix independent of ζ. Let us have an IRMFD
wL (ζ) = (I − ζ e^{AL T})⁻¹ BL = bL (ζ) aL⁻¹(ζ) .      (9.74)
DL (T, ζ, t) = bL (ζ, t) aL⁻¹(ζ) ,      (9.75)
9.6 Factorisation of Quasi-polynomials of Type 2
is a polynomial in ζ for all t. When Assumptions I–III on page 350 hold, then
the matrix aL (ζ) is simple and we have
det aL (ζ) ∼ ΔF (ζ) ΔG (ζ) .
and
PL () = aL ( 1 )DL L (T, , 0)aL () . (9.77)
Let us formulate a number of propositions determining some properties of the
matrices (9.76) and (9.77) required below.
Proof. Since
1
D L (T, s, t) = L (s + kj)(s + kj)e(s+kj)t
T
k=
DL (T, 1 , t) = DL
(T, 1 , t)
1 1 (9.79)
= aL ( 1 ) 1
bL ( , t) = a
L ()bL (, t) .
Hence T
PL () = bL (, t)bL (, t) dt (9.80)
0
is a symmetric quasi-polynomial.
Lemma 9.23. Let Assumptions I-III on page 350 hold and the matrix aL ()
from the IRMFD (9.74) be column reduced. Then,
PL () = k k , k = k
k=
with
0 L ,
where
L = deg aL () deg det aL () = .
deg aL () .
b) Let us show that Matrix (9.83) is normal. For this purpose using (9.81),
we write
DL L (T, s, 0) = DL
(T, s, 0)
(9.85)
1
= L(s + kj)(s + kj)(s kj) .
T
k=
Under the given assumptions, the matrix L (s) is normal. Hence Ma-
trix (9.81) is also normal as a product of irreducible normal matrices.
Therefore, the minimal standard representation of the matrix L(s) can be
written in the form
1
L(s) = CL sI2 AL L + D
B L .
where
L (t) = C2 h (AL , t)B
D L + D
L m(t) .
where
T
(AL ) = eAL t m(t) dt .
0
Under the given assumptions, the matrices AL and eAL T are cyclic, the
L ) is controllable and the pair [eAL T , CL ] is observable. More-
pair (eAL T , B
over, the matrix
(AL ) = (AL )(AL )
is commutative with the matrix eAL and due to (9.82), it is nonsingular.
Therefore, Matrix (9.83) is normal.
c) With respect to the normality of Matrix (9.83), calculating the determi-
nants on both sides of (9.83), we find
uL ()
fL () = det DL L (T, , 0) = , (9.88)
L ()L ()
L ()
fL () = , (9.89)
L ()L ( 1 )
where (n
L () = (1) eT i=1 i i
uL ()
is a symmetric quasi-polynomial. Moreover, since the product L () is
a polynomial, we receive
L () = k k , k = k ,
k=
where
0 .
Using (9.89), (9.77) and the relations
4. Using the above auxiliary results under Assumptions I-III on page 350,
we consider some properties of the quasi-polynomial AL ().
L () = bL ()a1
w r () . (9.91)
DL (T, , t) = bL (, t)a1
r () , (9.92)
where
bL (, t) = C1 h (A, t)B2 ar () + C1 (A)eAtbL () + m(t)DL ar () .
Simultaneously with the RMFD (9.92), we have the IRMFD (9.75), therefore
ar () = aL ()a2 () . (9.93)
1
AL () = a
r ()DL L (T, , 0)ar () = ak k , ak = ak , (9.95)
T
k=
where
0 . (9.96)
Proof. The coincidence of the sets of matrices ar () in (9.80) and (9.91) stems
from the minimality of the PMD
I eAT , eAt (A)B2 , C2 .
Using (9.79), (9.80) and (9.93), the matrix AL () can be written in the form
AL () = a
2 ()PL ()a2 () , (9.97)
where PL () is the quasi-polynomial matrix (9.77). Using (9.80) from (9.97),
we find
1 T
AL () = [bL (, t)a2 ()] [bL (, t)a2 ()] dt . (9.98)
T 0
Let the matrix aL () be column reduced. Then as before, we have
deg bL (, t) deg aL () deg det al () = .
Moreover, if we have the IRMFD (9.91), then any pair [ar ()(), bL ()()],
where () is any unimodular matrix, determines an IRMFD for Matrix (9.91).
In particular, the matrix () can be chosen in such a way that the matrix
a2 () in (9.93) becomes column reduced. In this case, we receive
deg a2 () deg det a2 () = deg Q () = q .
The last two estimates yield
deg [bL (, t)a2 ()] + q = .
Equations (9.95) and (9.96) follow from (9.98) and the last estimate.
Theorem 9.26. Denote
rL () = det AL () .
Then under Assumptions I-III on page 350 and the conditions of Lemma 9.23,
we have
rL () = rk k , rk = r k ,
k=
where
0 .
Proof. From (9.97), we have
rL () = det a2 () det a2 ( 1 )qL (), = const. = 0 (9.99)
that is equivalent to the claim, because deg det a2 () = Q and Q + = .
Corollary 9.27. From (9.99), it follows that the set of roots of the function
rL () includes the set of roots of the polynomial Q () and the set of roots of
the quasi-polynomial Q ( 1 ).
Theorem 9.28. Let Assumptions I-III on page 350 and the conditions of
Lemmata 9.23 and 9.24 hold. Let also the quasi-polynomial AL () be posi-
tive on the unit circle. Then, there exists a factorisation
AL () = ()() = ( 1 )() , (9.100)
det () rL
+
()
and the matrices ar (), aL () in the IRMFDs (9.91), (9.74) can be chosen,
such that
deg () .
Proof. As for Theorem 9.16, the proof is a direct corollary of the theorem
about factorisation from [133] with account for our auxiliary results.
Remark 9.29. From Corollary 9.27, it follows that, when the polynomial dQ (s)
has roots on the imaginary axis, then the quasi-polynomial (9.99) has roots
on the unit circle and the factorisations (9.100) and (9.101) are impossible.
dQ (s) = (s q1 )1 (s q ) , 1 + . . . , = Q
where d+ +
Q () and r1L () are stable polynomials and
q1 T 1
m m+1
Q () = e
d+ eqm T eqm+1 T eq T ,
o () = 1 ()R+ () 1 () ,
R+ () 1 () = 11 ()R1+ ()
with deg det 1 () = deg det () n. From the last two equations, we
obtain the LMFD
1
o () = [1 ()()] R1+ () ,
where deg det[1 ()()] 2 n.
On the other hand, let us have an ILMFD
o () = Dl1 ()Ml () .
3. Let g1 , . . . , g be the stable and g+1 , . . . , g the unstable poles of the ma-
trix G(s); and q1 , . . . , qm ; qm+1 , . . . , q be the corresponding sequences of poles
of the matrix Q(s). Then the characteristic polynomial has in the general case
its roots at the points 1 = eg1 T , . . . , = eg T ; +1 = eg+1 T , . . . , =
eg T ; and 1 = eq1 T , . . . , m = eqm T ; m+1 = eqm+1 T , . . . , = eq T .
4. The single-loop system shown in Fig. 9.1 will be called critical, if at least
one of the matrices Q(s), F (s) or G(s) has poles on the imaginary axis. These
poles will also be called critical. The following important conclusions stem
from the above reasoning.
a) The presence of critical poles of the matrix F (s) does not change the
H2 -optimisation procedure.
b) If any of the matrices Q(s) or G(s) has a critical pole, then the corre-
sponding factorisations (9.100) or (9.68) appear to be impossible, because
at least one of the polynomials det a2 () or det a1 () has roots on the unit
circle. In this case, formally following the Wiener-Hopf procedure leads to
a controller that does not stabilise.
c) As follows from the aforesaid, for solving the H2 -optimisation problems
for sampled-data systems with critical continuous-time elements, it is nec-
essary to take into account some special features of the system structure,
as well as the placement of the critical elements with respect to the system
input and output.
Fig. 9.3. Single-loop system with plant F (s) and discrete controller C in the feedback path
Let
F (s) = NF (s)/dF (s)
with
dF (s) = (s − f1 )^{α1} · · · (s − fρ )^{αρ} ,   α1 + . . . + αρ = ν .      (9.103)
Then,
DL̃L (T, s, 0) = (κ²/T) Σ_{k=−∞}^{∞} μ(s + kjω) μ(−s − kjω) + DF̃F (T, s, 0) .
This series can easily be summed. Indeed, using (6.36) and (6.39), we obtain
(1/T) Σ_{k=−∞}^{∞} μ(s + kjω) μ(−s − kjω) = (1/T) Σ_{k=−∞}^{∞} μ(s + kjω) ∫₀ᵀ e^{(s+kjω)t} m(t) dt
= ∫₀ᵀ [ (1/T) Σ_{k=−∞}^{∞} μ(s + kjω) e^{(s+kjω)t} ] m(t) dt = ∫₀ᵀ m²(t) dt = m̄2 .
Hence
DL̃L (T, s, 0) = κ² m̄2 + DF̃F (T, s, 0) .
Let us have a minimal standard realisation
and IMFDs
C (I − ζ e^{AT})⁻¹ = al⁻¹(ζ) bl (ζ) ,
(I − ζ e^{AT})⁻¹ B = br (ζ) ar⁻¹(ζ) ,
where
det al (ζ) ∼ det ar (ζ) ∼ (1 − ζ e^{f1 T})^{α1} · · · (1 − ζ e^{fρ T})^{αρ} = ΔF (ζ) .
AM () = al ()DF F (T, , 0)
al () (9.105)
3.
Theorem 9.31. Let the assumptions formulated above in this section hold.
Let the quasi-polynomials (9.104) and (9.105) be positive on the unit circle, so
that there exist factorisations (9.68) and (9.100). Let also the set of eigenvalues
of the matrices Θ(ζ) and Λ(ζ) not include the numbers ζi = e^{fi T} ,
where fi are the roots of the polynomial (9.103). Then the following proposi-
tions hold:
a) The matrix
1 1
R2 () = () al () 1 ()
ar ()DF F F (T, , 0) (9.106)
T
admits a unique separation
where
V1o () = a1
l () br ()
1
()R21 () 1 () ,
(9.109)
V2o () = ar () 1 ()R21 () 1 () .
c) The matrices (9.109) are stable and analytical at the points i , and the
set of their poles is included in the set of poles of the matrices 1 ()
and 1 ().
d) The characteristic polynomial of the optimal system o () is a divisor of
the polynomial det () det ().
where
R1 () = ()a1
r ()0r () () (9.111)
and the matrix R2 () is given by (9.106). Under the given assumptions owing
to Remark 8.19, the matrix C() is a quasi-polynomial. Therefore, the matrix
R() can have unstable poles only at the point = 0 and at the poles of
the matrices 1 () and 1 (). Hence under the given assumptions, Matrix
(9.110) is analytical at the points i = efi T . Simultaneously, all nonzero
poles of the matrix
r ()DF F F (T, , 0)
a al ()
belong to the set of the numbers ζi , because the remaining poles are cancelled
against the factors ar (ζ) and āl (ζ). Then it follows immediately that Matrix
(9.106) admits a unique separation (9.107). Using (9.111) and (9.107) from
(9.110), we obtain
R() = ()a1 r ()0 () () + R21 () + R22 () . (9.112)
By construction, R22 (ζ) is a strictly proper rational matrix, whose poles
comprise all unstable poles of the matrix R(ζ). Also by construction, the
expression in the square brackets can have poles at the points ζi . But
under the given assumptions, the matrix R() is analytical at these points.
Hence the matrix in the square brackets in (9.112) is a polynomial. Then the
right-hand side of (9.112) coincides with the principal separation (9.28) and
from (8.115), we obtain
R+ () = ()a1
r ()0r () () + R21 () ,
o () = a1
r ()0r () +
1
()R21 () 1 () ,
which is stable and analytical at the points i . Therefore, the matrices (9.109)
calculated by (8.117)(8.118) are stable and analytical at the points i . The
remaining claims of the theorem follow from the constructions of Section 8.7.
Example 9.32. Consider the simple SISO system shown in Fig. 9.4 with
F (s) = K/(s − a) ,
where K and a are constants. Moreover, assume that x(t) is unit white noise.
It is required to find the transfer function of a discrete controller wdo (), which
stabilises the closed-loop system and minimises the value
S22 = 2 du1 + dy .
Moreover,
K(a)eaT
DF (T, , 0) = .
1 eaT
Hence we can take
al () = 1 eaT , bl () = K(a)eaT ,
(9.114)
ar () = al (), br () = bl () .
where > 0 is a known constant. Moreover, using (9.113), (9.78) and (9.104),
it can be easily shown that
DL L (T, , 0) = 2 m
+ DF F (T, , 0)
(9.116)
q(1 )(1 1 )
= ,
(1 eaT )(1 1 eaT )
eaT
a r () = 1 1 eaT =
l () = a . (9.117)
AM () = , AL () = K1 (1 )(1 1 ) ,
() = 1 , () = 2 (1 ) , (9.118)
(2 + 1 + 0 2 )
DF F F (T, , 0) = (9.119)
(1 eaT )2 (1 eaT )
from (9.119), we find
0 + 1 + 2 2
DF F F (T, , 0) = .
( eaT )2 ( eaT )
1 eaT 0 + 1 + 2 2
a al () =
r ()DF F F (T, , 0) . (9.120)
T T 2 (1 eaT )
Owing to
m2 ( )
() = m2 (1 1 ) = , () = m1
n0 + n1 + n2 2
R2 () = ,
( )(1 eaT )
1
1 ()R21 () 1 () = (9.121)
(1 aT
e )(1 )
with a known constant 1 . Taking into account (9.114), we obtain the function
V1o () in (9.109):
2 1
V1o () = +
(1 eaT )(1 ) 1 eaT
with a known constant 2 . Due to Theorem 9.31, the function V1o () is ana-
lytical at the point = eaT . Hence
1 + eaT 2 eaT = 0 .
3
V2o () = , 3 = const.
1
wdo () = 3 = const.
o () 1 .
10
L2 -Design of SD Systems for 0 < t < ∞
determines the L2 -norm of the output signal z(t). Thus, the following optimi-
sation problem is formulated.
L2 -problem. Given the matrix w(p) in (7.2), the input vector x(t),
the sampling period T and the form of the control impulse m(t). Find
a stabilising controller (8.35) that ensures the internal stability of the
standard sampled-data system and the minimal value of z(t)L2 .
2. It should be noted that the general problem formulated above includes, for
different choices of the vector z(t), many important applied problems, also
including the tracking problem. Indeed, let us consider the block-diagram shown
in Fig. 10.1, where the dotted box denotes the initial standard system that
will be called nominal. Moreover in Fig. 10.1, Q(p) denotes the transfer matrix
of an ideal transition. To evaluate the tracking performance, it is natural to
use the value
Fig. 10.1. Standard SD system w(p) with controller C (nominal system, dotted box); the ideal transition Q(p) produces the reference output z̄, and e = z − z̄ is the tracking error
Je = ∫₀^∞ e′(t) e(t) dt = ∫₀^∞ [z(t) − z̄(t)]′ [z(t) − z̄(t)] dt .      (10.2)
If the PTM of the nominal system w(s, t) has the form (7.30)
w(s, t) = L (T, s, t)RN (s)M (s) + K(s) , (10.3)
then the tracking error e(t) can be considered as a transformed result of the
input signal x(t) by a new standard sampled-data system with the PTM
we (s, t) = w(s, t) Q(s) = L (T, s, t)RN (s)M (s) + K(s) Q(s) . (10.4)
This system is fenced by a dashed line in Fig. 10.1. The standard sampled-
data system with the PTM (10.4) is associated with a continuous-time LTI
plant having the transfer matrix
K(p) Q(p) L(p)
we (p) = .
M (p) N (p)
Then, the integral (10.2) coincides with (10.1) for the new standard sampled-
data system.
where d > 0 and are constants. It is known [39] that for any function
f (t) for Re s > , there exists the Laplace transform
F (s) = f (t)est dt (10.6)
0
where
f (t 0) + f (t + 0)
f(t) = .
2
As follows from the general properties of the Laplace transformation [39],
under the given assumptions in any half-plane Re s 1 > , we have
lim |F (s)| = 0
s
for s increasing to innity along any contour. Then for f (t) and s =
x + jy, x 1 > , the following estimation holds [22] :
c
|F (x + kjy)| , c = const. (10.7)
|k|
Hereinafter, the elements f (t) of the set will be called originals and denoted
by small letters, while the corresponding Laplace transforms (10.6) will be
called images and denoted by capital letters.
3. For any original f (t) and all sufficiently large Re s, the following series converges:
φf (T, s, t) = Σ_{k=−∞}^{∞} f (t + kT ) e^{−s(t+kT )} ,   −∞ < t < ∞ .      (10.8)
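The value of the series (10.8) is easy to verify numerically for a concrete original; for f(t) = e^{at}, t > 0, and 0 < t < T only the terms k ≥ 0 contribute, and the sum has the closed form e^{(a−s)t}/(1 − e^{(a−s)T}) for Re s > a. A sketch with illustrative numbers:

```python
import numpy as np

# Illustrative data: original f(t) = e^{at} (t > 0), period T, point s, shift t.
a, T, s, t = -0.5, 1.0, 0.3 + 1.0j, 0.4

# Truncated series sum_{k>=0} f(t + kT) e^{-s(t + kT)}  versus the closed form.
series = sum(np.exp(a * (t + k * T)) * np.exp(-s * (t + k * T)) for k in range(500))
closed = np.exp((a - s) * t) / (1 - np.exp((a - s) * T))
print(np.allclose(series, closed))  # True
```

The geometric ratio e^{(a−s)T} has modulus below one here, so 500 terms are far more than enough for machine precision.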
f (T, s, t0 ) = F (T, s, t0 )
f (T, s, t0 0) + f (T, s, t0 + 0)
F (T, s, t0 ) = ,
2
if φf (T, s, t) at t = t0 has a break of the first kind (finite break).
4. Together with the DPFR (10.8), we consider the discrete Laplace trans-
form (DLT) Df (T, s, t) of the function f (t):
Df (T, s, t) = Σ_{k=−∞}^{∞} f (t + kT ) e^{−ksT} = φf (T, s, t) e^{st} ,      (10.10)
which will be called the discrete Laplace transform of the image F (s).
Let us have a strictly proper rational matrix
NF (s)
F (s) = ,
dF (s)
dF (s) = (s − f1 )^{α1} · · · (s − fρ )^{αρ} ,   α1 + . . . + αρ = ν .
is rational in ζ for all t, and for 0 < t < T it can be represented in the form
DF (T, ζ, t) = [ Σ_{k=0}^{m} dk (t) ζ^k ] / ΔF (ζ) ,      (10.12)
where
ΔF (ζ) = (1 − ζ e^{f1 T})^{α1} · · · (1 − ζ e^{fρ T})^{αρ}
is the discretisation of the polynomial dF (s), and dk (t) are functions of
bounded variation on the interval 0 < t < T .
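The discretisation ΔF(ζ) can be computed directly from the continuous poles; a sketch assuming the factor form (1 − ζ e^{f_i T}), with illustrative poles f_i and period T:

```python
import numpy as np

# Discretisation Delta_F(zeta) = prod_i (1 - zeta * e^{f_i T}) of
# d_F(s) = prod_i (s - f_i); simple poles assumed for this sketch.
def discretise(f_roots, T):
    Delta = np.array([1.0])
    for f in f_roots:
        Delta = np.polymul(Delta, [-np.exp(f * T), 1.0])  # factor 1 - zeta*e^{fT}
    return Delta                                          # descending powers of zeta

D = discretise([-1.0, -2.0], 0.5)
# the roots of Delta_F sit at zeta_i = e^{-f_i T}
print(np.allclose(np.sort(np.roots(D)), np.sort(np.exp(np.array([1.0, 2.0]) * 0.5))))
```

Stable continuous poles (Re f_i < 0) thus give roots of ΔF(ζ) outside the unit circle, and poles on the imaginary axis give roots on the unit circle, which is the critical case discussed in Chapter 9.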
5. Below it will be shown that some images F (s), which are not rational
functions, may possess a DLT of the form (10.12).
Henceforth, the image F (s) will be called pseudo-rational, if its DLT for
0 < t < T can be represented in the form (10.12). The set of all pseudo-
rational images is determined by the following lemma.
Lemma 10.1. A necessary and sufficient condition for an image F (s) to be
pseudo-rational is that it can be represented as
F (s) = [ Σ_{k=0}^{m} e^{−ksT} ∫₀ᵀ dk (t) e^{−st} dt ] / ΔF (s) ,      (10.13)
where
ΔF (s) = ΔF (ζ)|_{ζ=e^{−sT}} = (1 − e^{−sT} e^{f1 T})^{α1} · · · (1 − e^{−sT} e^{fρ T})^{αρ} .
Proof. Necessity: Let the matrix DF (T, ζ, t) have the form (10.12). Then we
have
DF (T, s, t) = DF (T, ζ, t)|_{ζ=e^{−sT}} = [ Σ_{k=0}^{m} e^{−ksT} dk (t) ] / ΔF (s) .      (10.14)
Moreover for the image F (s), we have [148]
F (s) = ∫₀ᵀ DF (T, s, t) e^{−st} dt .      (10.15)
Indeed, we have
DF (T, s, t − τ ) = (1/T ) Σ_{k=−∞}^{∞} F (s + kjω) e^{(s+kjω)(t−τ )} ,
DG (T, s, τ ) = (1/T ) Σ_{k=−∞}^{∞} G(s + kjω) e^{(s+kjω)τ } .
Substituting this into the middle part of (10.17) and integrating, we prove
the first equality in (10.17). The second one is proved in a similar way.
Using known properties of the DLT and (10.14), we have
DF (T, s, t′) = [ Σ_{k=0}^{m} e^{−ksT} dk (t′) ] / ΔF (s) ,   0 < t′ < T ,
DF (T, s, t′) = DF (T, s, t′ + T ) e^{−sT} = [ Σ_{k=0}^{m} e^{−(k+1)sT} dk (t′ + T ) ] / ΔF (s) ,   −T < t′ < 0 .
10.3 Laplace Transforms of Standard SD System Output
DF G (T, ζ, 0) = NF G (ζ) / [ΔF (ζ) ΔG (ζ)] ,
n1
DF (T, s, t) = DF (T, s, t) = fi (t + iT )eisT , 0<t<T,
i=0
where
ΔX (s) = (1 − e^{−sT} e^{x1 T})^{q1} · · · (1 − e^{−sT} e^{xρ T})^{qρ} .      (10.20)
The equations of the digital controller have the form (7.4), (7.6) and (7.7):
k = y(kT ) , (k = 0, 1, . . . )
0 k + . . . + q kq = 0 k + . . . q kq (10.22)
u(t) = m(t kT )k , kT < t < (k + 1)T .
As follows from [148] for x(t) , all solutions z(t) of the system (10.21),
(10.22) belong to the set , where is a sufficiently large number. In particular, if < 0 and the system (10.21), (10.22) is internally stable, then we
can take < 0.
2. The following theorem gives a general expression for the Laplace trans-
form of the output Z(s) under zero initial energy.
Theorem 10.4. For Re s > with sufficiently large , there exists the
Laplace transform of the solution z(t) of the system (10.21)(10.22) under
zero initial energy. This image Z(s) has the form
Z(s) = L(s)(s)RN (s)DM X (T, s, 0) + K(s)X(s) , (10.23)
where
and
DMX (T, s, 0) = (1/T ) Σ_{k=−∞}^{∞} M (s + kjω) X(s + kjω) .
where
DNμ (T, s, 0) = (1/T ) Σ_{k=−∞}^{∞} N (s + kjω) μ(s + kjω)
and
μ(s) = ∫₀ᵀ e^{−sτ} m(τ ) dτ .      (10.26)
The matrix w d (s) in (10.25) is determined by the relation
wd (s) = α⁻¹(s) β(s) ,      (10.27)
where
α(s) = α0 + α1 e^{−sT} + . . . + αq e^{−qsT} ,
β(s) = β0 + β1 e^{−sT} + . . . + βq e^{−qsT} .
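Because α(s) and β(s) are polynomials in e^{−sT}, the controller frequency response wd(s) = α⁻¹(s)β(s) is periodic along the imaginary axis with period jω = 2πj/T. A quick numerical check with illustrative scalar coefficients:

```python
import numpy as np

# Illustrative first-order controller polynomials in e^{-sT}.
T = 0.5
alpha = lambda s: 1.0 - 0.4 * np.exp(-s * T)
beta  = lambda s: 0.3 + 0.1 * np.exp(-s * T)
w_d = lambda s: beta(s) / alpha(s)

s0 = 0.2 + 0.9j
omega = 2 * np.pi / T
print(np.isclose(w_d(s0), w_d(s0 + 1j * omega)))  # True: w_d(s + j*omega) = w_d(s)
```

This periodicity is what allows all frequency-domain computations for the discrete part to be restricted to one strip of width ω.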
Proof. a) Taking the Laplace images for the rst equation in (10.21) under
zero initial conditions, we obtain
whence it follows
where
(s) = k eksT (10.31)
k=0
is the discrete Laplace transform for the vector sequence {k } = {(kT )},
(k = 0, 1, . . . ). From (10.28) and (10.30), we have
V (s) = w2 (s)(s) (s) + w1 (s)X(s) . (10.32)
c) Let us nd an expression for the vector (s) appearing in (10.31). With
this aim in view, we notice that (10.32) for k = 0, 1, . . . yields
V (s + kj) = w2 (s + kj)(s + kj) (s) + w1 (s + kj)X(s + kj) .
Then, we find
DV (T, s, t) = (1/T ) Σ_{k=−∞}^{∞} V (s + kjω) e^{(s+kjω)t}
= Dw2μ (T, s, t) ψ̄(s) + Dw1X (T, s, t) .      (10.33)
where v (s) is the discrete Laplace transform of the sequence {vk } =
{v(kT )}, (k = 0, 1, . . . ). With regard to (10.35), Relation (10.34) can be
written in the form
v (s) = Dw2 (T, s, 0) (s) + Dw1 X (T, s, 0) . (10.36)
Multiplying from left by the matrix C2 and using the fact that
C2 v (s) = y (s) (10.39)
from (10.24), we find
y(s) = DNμ (T, s, 0) wd (s) y(s) + DMX (T, s, 0) .
Then,
y(s) = [In − DNμ (T, s, 0) wd (s)]⁻¹ DMX (T, s, 0) ,
Substituting here the right side of (10.44) and the expression for w(s, t) from
(10.3) after some transformations, we obtain a result equivalent to (10.23).
2.
Theorem 10.5. Denote by PX the set of roots of the equation
ΔX (s) = (1 − e^{−sT} e^{x1 T})^{q1} · · · (1 − e^{−sT} e^{xρ T})^{qρ} = 0
f (t) = Onm for t < 0 ,   f (t) = C e^{At} B for t > 0 ,
c) Let the image X(s) be pseudo-rational. Then for all t, the matrix
DF X (T, s, t) dened by (10.47) is continuous in t. For t = 0 from (10.48),
we obtain
1 T A
DF X (T, s, 0) = esT eAT I e B DX (T, s, ) d . (10.49)
0
d) Now proceed to the proof of the theorem. From (10.29), (6.89) and (6.100),
it follows for t = 0 that
1
Dw2 (T, s, 0) = (A) esT eAT I B2 .
Hence
I esT eAT v (s) = esT eAT (A)B2 (s)
T
sT AT
+e e eA B1 DX (T, s, ) d .
0
Combining this with (10.39) and (10.37), we obtain the system of equa-
tions
I esT eAT v (s) esT eAT (A)B2 (s)
T
= esT eAT eA B1 DX (T, s, ) d ,
0
(10.51)
C2 v (s) + y (s) = On1 ,
(s)y (s) + (s) (s) = Om1 .
10.4 Investigation of Poles of the Image Z(s)
where
T
sT AT
D1 (s) = e e eA B1 DX (T, s, ) d .
0
Thus, substituting herein (10.19), we find
T
sT AT ksT
e e e eA B1 xk ( ) d
0
k=0
D1 (s) = .
X (s)
Since the numerator of this expression is an integral function of s, it follows
that the set of poles of the vector D1 (s) belongs to the set PX . Obviously,
the same is true for the vector D(s). Then with account for
1
G (s) = Q (s, , )D(s) ,
we find that the set of all poles of the vectors v (s) and y (s), (s)
belongs to the union of the sets PX and P .
e) Substituting the relation
t
1
Dw2 (T, s, t) = eA(t ) m( ) d B2 + esT eAT I (A)eAt B2
0
and Equation (10.48) with F (s) = w2 (s) into (10.33), we find for 0 < t < T
DV (T, s, t)
t
sT AT 1
= e A(t )
m( ) d B2 + e e I (A)e B2 (s)
At
0
1 T A(t )
+ esT eAT I e B1 DX (T, s, ) d
0
t
+ eA(t ) B1 DX (T, s, ) d .
0
where
G1 (s) = ∫₀ᵀ e^{At} e^{−st} dt = (A − sI )⁻¹ (e^{−sT} e^{AT} − I ) ,
G2 (s) = ∫₀ᵀ e^{−st} ∫₀ᵀ e^{A(t−τ )} m(τ ) dτ dt ,
G3 (s) = ∫₀ᵀ e^{−st} ∫₀ᵗ e^{A(t−τ )} B1 DX (T, s, τ ) dτ dt .
Obviously, the matrices G1 (s) and G2 (s) are integral functions of s and
the poles of the matrix G3 (s) belong to the set PX . As was proved above,
the poles of the vectors v (s) and (s) belong to the union of PX and
P . Hence the poles of the image V (s) belong to PX P .
f) To conclude the proof, we notice that (10.30) and (10.43) yield
Z(s) = C1 V (s) + DL (s) (s) .
Due to (10.26), (s) is an integral function so that the set of all poles of
the vector Z(s) belongs to the set PX P .
10.5 Representing the Output Image in Terms of the System Function
Corollary 10.6. Under the assumptions of Theorem 10.5, the image Z(s) admits a representation of the form
$$ Z(s) = \frac{P_Z(s)}{\Delta(s)\,\Delta_X(s)} , $$
where P_Z(s) is an integral function in s and \Delta_X(s) is given by (10.20). According to (7.93),
$$ \Delta(\zeta) = \tilde{\Delta}(\zeta)\,\Delta_d(\zeta) , \qquad (10.52) $$
where the polynomial \tilde{\Delta}(\zeta) is independent of the choice of the discrete controller, so that \Delta(\zeta) is divisible by \Delta_d(\zeta).
As a result, we obtain
$$ Z(s) = p(s)\,\varphi(s)\,q(s) + r(s) , \qquad (10.54) $$
where
$$ \begin{aligned} p(s) &= L(s)\,\mu(s)\,\bar a_r(s) , \\ q(s) &= \bar a_l(s)\,D_{MX}(T,s,0) , \\ r(s) &= L(s)\,\mu(s)\,\beta_{0r}(s)\,\bar a_l(s)\,D_{MX}(T,s,0) + K(s)X(s) \\ &= L(s)\,\mu(s)\,\beta_{0r}(s)\,q(s) + K(s)X(s) . \end{aligned} \qquad (10.55) $$
3.
Theorem 10.7. Let the standard sampled-data system be modal controllable.
Then the matrix p(s) is an integral function of s and the set of all poles of the
vectors q (s) and r(s) belongs to the set PX .
Proof. The claim regarding the matrix p(s) follows immediately from Corol-
lary 8.15.
Then we consider the vector q (s) in (10.55). From (10.17), it follows
$$ D_{MX}(T,s,t) = \int_0^T D_M(T,s,\tau)\,D_X(T,s,t-\tau)\,d\tau . $$
For t = 0, we have
$$ D_{MX}(T,s,0) = \int_0^T D_M(T,s,\tau)\,D_X(T,s,-\tau)\,d\tau . $$
Moreover, we have
$$ r(s) = \frac{P_r(s)}{\Delta_X(s)} , \qquad (10.59) $$
where the numerator is an integral function in s.
$$ r^{\ast}(s) = D_{XM}(T,-s,0)\,\bar a_l^{\ast}(s)\,\beta_{0r}^{\ast}(s)\,\mu^{\ast}(s)\,L^{\ast}(s) + X^{\ast}(s)K^{\ast}(s) = q^{\ast}(s)\,\beta_{0r}^{\ast}(s)\,\mu^{\ast}(s)\,L^{\ast}(s) + X^{\ast}(s)K^{\ast}(s) . \qquad (10.61) $$
Multiplying (10.54) and (10.60), we obtain
$$ Z^{\ast}(s)Z(s) = q^{\ast}(s)\varphi^{\ast}(s)p^{\ast}(s)\,p(s)\varphi(s)q(s) + r^{\ast}(s)r(s) + q^{\ast}(s)\varphi^{\ast}(s)p^{\ast}(s)\,r(s) + r^{\ast}(s)\,p(s)\varphi(s)q(s) . $$
$$ J_2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} r^{\ast}(s)\,r(s)\,ds . $$
2. Let us transform the integral (10.63). With this purpose, we pass to finite integration limits in (10.63):
$$ J_1 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\Big[ q^{\ast}(s)\varphi^{\ast}(s)\,A_{L1}(s)\,\varphi(s)q(s) - q^{\ast}(s)\varphi^{\ast}(s)\,\bar B^{\ast}(s) - \bar B(s)\,\varphi(s)q(s)\Big]\,ds , \qquad (10.64) $$
where
$$ A_{L1}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} p^{\ast}(s+kj\omega)\,p(s+kj\omega) , \qquad \bar B(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} r^{\ast}(s+kj\omega)\,p(s+kj\omega) , \qquad (10.65) $$
$$ \bar B^{\ast}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} p^{\ast}(s+kj\omega)\,r(s+kj\omega) . $$
$$ r^{\ast}(s+kj\omega)\,p(s+kj\omega) = q^{\ast}(s)\,\beta_{0r}^{\ast}(s)\,L^{\ast}(s+kj\omega)L(s+kj\omega)\,\mu^{\ast}(s+kj\omega)\mu(s+kj\omega)\,\bar a_r(s) + X^{\ast}(s+kj\omega)K^{\ast}(s+kj\omega)\,L(s+kj\omega)\,\mu(s+kj\omega)\,\bar a_r(s) , \qquad (10.66) $$
and the integration is performed along the unit circle in the positive direction. The matrices appearing in (10.68) are given by
$$ A_{L1}(\zeta) = T\,A_L(\zeta) = \bar a_r^{\ast}(\zeta)\,D_{L^{\ast}L\mu^{\ast}\mu}(T,\zeta,0)\,\bar a_r(\zeta) $$
and by \bar B(\zeta), which can have poles at the roots of the polynomial \Delta_X(\zeta) and at the point \zeta = 0. Therefore, there exists a representation
$$ \bar B(\zeta) = \frac{N_B(\zeta)}{\Delta_X(\zeta)} , \qquad (10.70) $$
where
$$ V_{1o}(\zeta) = \alpha_{0r}(\zeta) - b_r(\zeta)\,\varphi_o(\zeta) , \qquad V_{2o}(\zeta) = \beta_{0r}(\zeta) + a_r(\zeta)\,\varphi_o(\zeta) , $$
and the polynomial matrices a_r(\zeta), b_r(\zeta) are determined by the IRMFD (8.42); moreover, [\alpha_{0r}, \beta_{0r}] is a right initial controller solving Equation (4.37).
Furthermore, if we have both the IMFDs and the system is modal controllable, then the characteristic polynomial of the optimal system \Delta_o(\zeta) satisfies the relation that \Delta_o(\zeta) is divisible by \Delta_d(\zeta), where
$$ \Delta_d(\zeta) \doteq \det D_l(\zeta) \doteq \det D_r(\zeta) . $$
2. We recognise that the form of the functional (10.68) coincides with that of (8.92). Nevertheless, a direct application of the Wiener-Hopf method in the form given in Section 8.6 is impossible, because the matrix q(\zeta) in (10.68) is not invertible. This means that the functional (10.68) is singular. Therefore, the minimisation needs a special approach. Below we describe such an approach, based on an idea of [124]. With this aim in view, we derive a number of auxiliary transformations.
a) Consider the rational vector (10.58)
$$ q(\zeta) = \bar a_l(\zeta)\,D_{MX}(T,\zeta,0) = \frac{N_q(\zeta)}{\Delta_X(\zeta)} . \qquad (10.71) $$
As follows from Remark 3.4, there exists a unimodular matrix R(\zeta) such that
$$ R(\zeta)\,N_q(\zeta) = \nu(\zeta)\,\bar 1_n , \qquad \bar 1_n = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} , $$
where \nu(\zeta) is a greatest common divisor of the elements of the column N_q(\zeta). Thus, with (10.71), we have
$$ q(\zeta) = \frac{\nu(\zeta)}{\Delta_X(\zeta)}\,Y(\zeta)\,\bar 1_n , \qquad q^{\ast}(\zeta) = \frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\bar 1_n^{\top}\,Y^{\ast}(\zeta) , \qquad (10.72) $$
$$ \varphi_o(\zeta) = \psi_o(\zeta)\,R(\zeta) , \qquad (10.76) \qquad \psi(\zeta)\,\bar 1_n = \psi_1(\zeta) , $$
and the integral (10.75) can be written in a form depending only on the column \psi_1(\zeta):
$$ J_1 = \frac{1}{2\pi j}\oint \Big[ \frac{\nu(\zeta^{-1})\,\nu(\zeta)}{\Delta_X(\zeta^{-1})\,\Delta_X(\zeta)}\,\psi_1^{\ast}(\zeta)\,T A_L(\zeta)\,\psi_1(\zeta) - \frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\psi_1^{\ast}(\zeta)\,\bar B^{\ast}(\zeta) - \frac{\nu(\zeta)}{\Delta_X(\zeta)}\,\bar B(\zeta)\,\psi_1(\zeta) \Big]\,\frac{d\zeta}{\zeta} . \qquad (10.77) $$
minimises the functional (10.68) for any stable m\times(n-1) matrix \psi_2(\zeta). Therefore, if n > 1 and there exists an optimal column \psi_1(\zeta), then there also exists a set of optimal system functions depending on the stable matrix parameter \psi_2(\zeta). Since each optimal system function (10.76) is associated by Theorem 10.9 with an optimal stabilising controller, we conclude for the L_2-problem that the existence of one optimal controller implies the existence of a set of optimal stabilising controllers depending on the choice of the stable matrix \psi_2(\zeta).
This result differs fundamentally from the situation for the H_2-problem, and it originates from the singularity of the functional (10.68).
10.7 Wiener-Hopf Method
$$ R(\zeta) = \frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\Pi^{-\ast}(\zeta)\,\bar B^{\ast}(\zeta) = \frac{1}{T}\,\frac{\nu(\zeta^{-1})\,\Pi^{-1}(\zeta)\,N_B^{\ast}(\zeta)}{\Delta_X(\zeta)\,\Pi_{+}^{\ast}(\zeta^{-1})} . \qquad (10.83) $$
The next stage of the minimisation requires performing the principal separation
$$ R(\zeta) = R_{+}(\zeta) + R_{-}(\zeta) , \qquad (10.84) $$
where R_{-}(\zeta) is a strictly proper rational matrix incorporating all unstable poles of the matrix R(\zeta), and R_{+}(\zeta) is a stable rational matrix.
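As an aside, the idea of the principal separation can be sketched numerically for a scalar rational function with simple poles. The sketch below is an illustration only, not the book's algorithm; it uses the ordinary z-domain convention (stable poles strictly inside the unit disk), and all function names are our own assumptions.

```python
import numpy as np

def principal_separation(num, den):
    """Split a strictly proper rational function num/den (coefficient
    lists, highest power first) with simple poles into a part with poles
    inside the unit disk and the remaining part. Illustrative sketch."""
    poles = np.roots(den)
    dden = np.polyder(den)
    # residue at a simple pole p: num(p) / den'(p)
    residues = [np.polyval(num, p) / np.polyval(dden, p) for p in poles]
    stable = [(r, p) for r, p in zip(residues, poles) if abs(p) < 1]
    unstable = [(r, p) for r, p in zip(residues, poles) if abs(p) >= 1]

    def evaluate(terms):
        return lambda z: sum(r / (z - p) for r, p in terms)

    return evaluate(stable), evaluate(unstable)

# R(z) = 1 / ((z - 0.5)(z - 2)): one pole inside, one outside the disk
num, den = [1.0], np.polymul([1.0, -0.5], [1.0, -2.0])
R_plus, R_minus = principal_separation(num, den)
z = 0.3 + 0.1j
total = R_plus(z) + R_minus(z)          # equals R(z) pointwise
print(abs(total - np.polyval(num, z) / np.polyval(den, z)))
```

Adding the two parts back together reproduces R(z) at any test point, which is the defining property of the separation.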
general form of the matrix R+ () is given in the following Theorem.
Theorem 10.10. The matrix R_{+}(\zeta) admits a representation
$$ R_{+}(\zeta) = \frac{N_{R+}(\zeta)}{\Delta_X(\zeta)} \qquad (10.85) $$
with
$$ \rho(\zeta) = \frac{\nu(\zeta^{-1})}{\Pi_{+}(\zeta^{-1})} \qquad (10.86) $$
and
$$ \nu(\zeta^{-1}) = g\,(1-\nu_1\zeta^{-1})^{b_1}\cdots(1-\nu_m\zeta^{-1})^{b_m} . \qquad (10.87) $$
Assume
whence it follows
$$ \psi_{1o}(\zeta) = \Pi^{-1}(\zeta)\,R_{+}(\zeta) = \frac{1}{T}\,\frac{\nu^{-1}(\zeta)\,N_{R+}(\zeta)}{\Pi_{+}(\zeta)} , \qquad (10.88) $$
$$ \psi_{1o}(\zeta) = \nu^{-1}(\zeta)\,\tilde N_{R+}(\zeta) , $$
where \tilde N_{R+}(\zeta) is a polynomial vector.
10.8 General Properties of Optimal Systems
determines the set of all optimal matrices \psi_{1o}(\zeta). Thus, with respect to (10.78), we find that the set of all optimal system matrices \varphi_o(\zeta) is determined by
$$ \varphi_o(\zeta) = \Big[\; \frac{1}{T}\,\frac{\nu^{-1}(\zeta)\,N_{R+}(\zeta)}{\Pi_{+}(\zeta)} \quad \psi_2(\zeta) \;\Big]\,R(\zeta) . $$
^1 The authors' attention to this fact was attracted by Dr. K. Polyakov.
$$ \varphi_o(\zeta) = D_l^{-1}(\zeta)\,M_l(\zeta) \qquad (10.89) $$
is an ILMFD for the matrix \varphi_o(\zeta). Thus, for the characteristic polynomial of the optimal system \Delta_o(\zeta), we have
$$ \Delta_o(\zeta) \doteq \det D_l(\zeta) . \qquad (10.90) $$
$$ \psi_{1o}(\zeta) = a^{-1}(\zeta)\,b(\zeta) . \qquad (10.91) $$
$$ \lambda(\zeta) = \frac{\Delta_o(\zeta)}{\det a(\zeta)} $$
is a polynomial.
$$ D_l(\zeta) = a_0(\zeta)\,a(\zeta) , \qquad \lambda_1(\zeta) = \frac{\det D_l(\zeta)}{\det a(\zeta)} $$
$$ \bar B(\zeta) = \bar a_r^{\ast}(\zeta)\,D_{L^{\ast}L\mu^{\ast}\mu}(T,\zeta,0)\,\beta_{0r}(\zeta)\,Y(\zeta)\,\bar 1_n\,\frac{\nu(\zeta)}{\Delta_X(\zeta)} + \bar a_r^{\ast}(\zeta)\,D_{L^{\ast}\mu^{\ast}KX}(T,\zeta,0) , $$
whence
$$ \frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\bar B(\zeta) = \frac{T\,\nu(\zeta^{-1})\,A_L(\zeta)\,\nu(\zeta)}{\Delta_X(\zeta^{-1})\,\Delta_X(\zeta)}\,\bar a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,Y(\zeta)\,\bar 1_n + \frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\bar a_r^{\ast}(\zeta)\,D_{L^{\ast}\mu^{\ast}KX}(T,\zeta,0) . $$
Hence
$$ R(\zeta) = \Pi^{-\ast}(\zeta)\,\frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\bar B(\zeta) = \Pi(\zeta)\,\nu(\zeta)\,\bar a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,Y(\zeta)\,\bar 1_n + \Pi^{-\ast}(\zeta)\,\frac{\nu(\zeta^{-1})}{\Delta_X(\zeta^{-1})}\,\bar a_r^{\ast}(\zeta)\,D_{L^{\ast}\mu^{\ast}KX}(T,\zeta,0) . $$
where Q(\zeta) is a stable rational matrix; then, with the new matrix, we get
$$ \tilde R(\zeta) = R(\zeta) - \Pi(\zeta)\,\nu(\zeta)\,Q(\zeta)\,Y(\zeta)\,\bar 1_n . $$
The second term on the right-hand side is a stable rational matrix, because the matrix \Pi(\zeta)\,\nu(\zeta) is stable. Therefore,
$$ \tilde R_{-}(\zeta) = R_{-}(\zeta) . $$
2. With account for Theorem 10.20, we can extend the modified method given in Section 8.6 to the problem at hand. This makes it possible to find the optimal vector \psi_{1o}(\zeta) without calculating an initial basic controller.
[Fig. 10.2: single-loop control system; in addition to the loop of Fig. 9.1 (controller C with sampler and hold, blocks F(s), G(s), Q(s)), the input x is passed through the reference operators Wv(s) and Wh(s), whose outputs are compared with v and h1 to form the errors e2 and e1.]
Fig. 10.2 uses the same notation as Fig. 9.1. In addition, Wv(s) and Wh(s) are the transfer functions
of the ideal LTI convolution operators
$$ \bar v(t) = \int_{-\infty}^{t} g_v(t-\tau)\,x(\tau)\,d\tau , \qquad \bar h_1(t) = \int_{-\infty}^{t} g_h(t-\tau)\,x(\tau)\,d\tau . $$
It is assumed that W_v(s) and W_h(s) are pseudo-rational and all their poles are stable. Hence for 0 < t < T, we have
$$ D_{W_v}(T,\zeta,t) = \frac{N_v(\zeta,t)}{\Delta_v(\zeta)} , \qquad D_{W_h}(T,\zeta,t) = \frac{N_h(\zeta,t)}{\Delta_h(\zeta)} , \qquad (10.92) $$
where N_v(\zeta,t) and N_h(\zeta,t) are polynomial matrices in \zeta, while \Delta_v(\zeta) and \Delta_h(\zeta) are stable scalar polynomials having no roots inside the closed unit disk.
10.10 Single-loop Control System
2. The input of the system is the vector x(t) with image X(s), which should be stable and pseudo-rational. The error under zero initial energy is
$$ e(t) = \begin{bmatrix} e_1(t) \\ e_2(t) \end{bmatrix} = \begin{bmatrix} h_1(t) - \bar h_1(t) \\ v(t) - \bar v(t) \end{bmatrix} . \qquad (10.93) $$
Moreover,
$$ \bar V(s) = W_v(s)\,X(s) , \qquad \bar H(s) = W_h(s)\,X(s) $$
are stable pseudo-rational vectors, and we have \bar v(t) = O(e^{\kappa t}) and \bar h(t) = O(e^{\kappa t}) for some \kappa < 0.
4. For W_v(s) = O and W_h(s) = O, the system shown in Fig. 10.2 is transformed into the single-loop system shown in Fig. 9.1 with the PTM
$$ w(s,t) = \varphi_{L\mu}(T,s,t)\,R_N(s)\,M(s) + K(s) , $$
where
$$ K(s) = \begin{bmatrix} O \\ F(s) \end{bmatrix} , \qquad L(s) = \begin{bmatrix} G(s) \\ F(s)G(s) \end{bmatrix} , \qquad M(s) = Q(s)F(s) , \qquad N(s) = Q(s)F(s)G(s) . \qquad (10.94) $$
Assume that the matrix L(s) is at least proper and the remaining matrices in (10.94) are strictly proper.
5. Under the given assumptions, for the output of the nominal system
$$ z(t) = \begin{bmatrix} h(t) \\ v(t) \end{bmatrix} . \qquad (10.95) $$
In particular, if the system is internally stable, then we can take \kappa < 0 and the vector (10.95) has a finite L_2-norm. Then, for the error vector (10.93), we have e(t) = O(e^{\kappa t}), where \kappa < 0. For Re s > \kappa, there exists the image of the error vector
$$ E(s) = \begin{bmatrix} H(s) - \bar H(s) \\ V(s) - \bar V(s) \end{bmatrix} , $$
which can be represented, using the above relations, in the form
$$ E(s) = L(s)\,\mu(s)\,R_N(s)\,D_{MX}(T,s,0) + K(s)X(s) - W_e(s)X(s) = Z(s) - W_e(s)X(s) , \qquad (10.97) $$
where
$$ W_e(s) = \begin{bmatrix} W_h(s) \\ W_v(s) \end{bmatrix} . $$
The right-hand sides of (10.97) and (10.96) differ only by the last term. Thus, for the application of the Wiener-Hopf method in the given case, we can use the above general relations, changing the matrix K(s) for
$$ K_e(s) = K(s) - W_e(s) = \begin{bmatrix} -W_h(s) \\ F(s) - W_v(s) \end{bmatrix} . \qquad (10.98) $$
The matrix W_e(s) is not rational in the general case, but this fact does not affect the optimisation procedure, because under the given assumptions all matrices in the cost functional are rational.
10.11 Wiener-Hopf Method for Single-loop Tracking System
where
$$ p(s) = L(s)\,\mu(s)\,\bar a_r(s) = \begin{bmatrix} G(s)\,\mu(s)\,\bar a_r(s) \\ F(s)G(s)\,\mu(s)\,\bar a_r(s) \end{bmatrix} , \qquad q(s) = \bar a_l(s)\,D_{MX}(T,s,0) = \bar a_l(s)\,D_{QFX}(T,s,0) , \qquad (10.100) $$
$$ r_e(s) = L(s)\,\mu(s)\,\beta_{0r}(s)\,\bar a_l(s)\,D_{QFX}(T,s,0) + K_e(s)X(s) = \begin{bmatrix} G(s)\,\mu(s)\,\beta_{0r}(s)\,\bar a_l(s)\,D_{QFX}(T,s,0) - W_h(s)X(s) \\ F(s)G(s)\,\mu(s)\,\beta_{0r}(s)\,\bar a_l(s)\,D_{QFX}(T,s,0) + F(s)X(s) - W_v(s)X(s) \end{bmatrix} . $$
Let the nominal system be modal controllable. Then, by Theorem 10.7, the matrix p(s) is an integral function in s, and the matrices q(s) and r(s) admit the representations (10.57) and (10.59):
$$ q(s) = \frac{N_q(s)}{\Delta_X(s)} , \qquad r(s) = \frac{P_r(s)}{\Delta_X(s)} . $$
$$ r_e(s) = \frac{P_e(s)}{\Delta_h(s)\,\Delta_v(s)\,\Delta_X(s)} , \qquad (10.101) $$
2. With account for (10.99) and (10.100), in the given case the functional (10.64) takes the form
$$ J_1 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\Big[ q^{\ast}(s)\varphi^{\ast}(s)\,A_{L1}(s)\,\varphi(s)q(s) - q^{\ast}(s)\varphi^{\ast}(s)\,\bar B_e^{\ast}(s) - \bar B_e(s)\,\varphi(s)q(s)\Big]\,ds , \qquad (10.102) $$
where
$$ A_{L1}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} p^{\ast}(s+kj\omega)\,p(s+kj\omega) = \bar a_r^{\ast}(s)\,D_{L^{\ast}L\mu^{\ast}\mu}(T,s,0)\,\bar a_r(s) \qquad (10.103) $$
$$ = \bar a_r^{\ast}(s)\,\big[ D_{G^{\ast}G\mu^{\ast}\mu}(T,s,0) + D_{G^{\ast}F^{\ast}FG\mu^{\ast}\mu}(T,s,0) \big]\,\bar a_r(s) $$
and
$$ \begin{aligned} \bar B_e(s) &= \frac{1}{T}\sum_{k=-\infty}^{\infty} p^{\ast}(s+kj\omega)\,r_e(s+kj\omega) \\ &= \bar a_r^{\ast}(s)\,D_{L^{\ast}L\mu^{\ast}\mu}(T,s,0)\,\beta_{0r}(s)\,\bar a_l(s)\,D_{MX}(T,s,0) + \bar a_r^{\ast}(s)\,D_{L^{\ast}\mu^{\ast}K_e X}(T,s,0) \\ &= \bar a_r^{\ast}(s)\,\big[ D_{G^{\ast}G\mu^{\ast}\mu}(T,s,0) + D_{G^{\ast}F^{\ast}FG\mu^{\ast}\mu}(T,s,0) \big]\,\beta_{0r}(s)\,\bar a_l(s)\,D_{MX}(T,s,0) \\ &\quad - \bar a_r^{\ast}(s)\,D_{G^{\ast}\mu^{\ast}W_h X}(T,s,0) + \bar a_r^{\ast}(s)\,D_{G^{\ast}F^{\ast}\mu^{\ast}F X}(T,s,0) - \bar a_r^{\ast}(s)\,D_{G^{\ast}F^{\ast}\mu^{\ast}W_v X}(T,s,0) . \end{aligned} $$
As was proved before, the rational periodic matrix (10.103) has no poles, and, with respect to (10.101), the matrix \bar B_e(s) admits a representation of the form
$$ \bar B_e(s) = \frac{\bar Q_e(s)}{\Delta_h(s)\,\Delta_v(s)\,\Delta_X(s)} , $$
where \bar Q_e(s) is an integral rational periodic function.
$$ q(\zeta) = \bar a_l(\zeta)\,D_{QFX}(T,\zeta,0) = \frac{N_q(\zeta)}{\Delta_X(\zeta)} , \qquad \bar B_e(\zeta) = \frac{\bar Q_e(\zeta)}{\Delta_h(\zeta)\,\Delta_v(\zeta)\,\Delta_X(\zeta)} , \qquad q(\zeta) = \frac{\nu(\zeta)}{\Delta_X(\zeta)}\,Y(\zeta)\,\bar 1_n , $$
$$ \psi_{1o}(\zeta) = \frac{1}{T}\,\frac{\nu^{-1}(\zeta)\,N_{e+}(\zeta)}{\Delta_h(\zeta)\,\Delta_v(\zeta)\,\Pi_{+}(\zeta)} . \qquad (10.105) $$
4. Let the conditions of Lemma 9.24 hold. Let us note some special features of the optimisation procedure for this case.
a) As follows from Section 9.6, if the polynomial d_Q(s) has no roots on the imaginary axis, then for the factorisation (10.104) we have
$$ \det \Lambda(\zeta) = \bar\Delta_Q^{+}(\zeta)\,\bar\Delta_Q^{-}(\zeta)\,\lambda(\zeta) . \qquad (10.106) $$
Then, if the polynomial d_Q(s) is factored into stable and unstable cofactors
$$ d_Q(s) = d_Q^{+}(s)\,d_Q^{-}(s) , $$
then \bar\Delta_Q^{+}(\zeta) is the discretisation of the polynomial d_Q^{+}(s). The polynomial \bar\Delta_Q^{-}(\zeta) in (10.106) is constructed as follows. Let \Delta_Q^{-}(\zeta) be the discretisation of d_Q^{-}(s); then
$$ \bar\Delta_Q^{-}(\zeta) = \Delta_Q^{-}(\zeta^{-1}) , \qquad \psi_{1o}(\zeta) = \frac{N(\zeta)}{\Delta_h(\zeta)\,\Delta_v(\zeta)\,\bar\Delta_Q^{+}(\zeta)\,\bar\Delta_Q^{-}(\zeta)\,\lambda(\zeta)} , $$
$$ \Delta_o(\zeta) \doteq \det a(\zeta) . $$
2. The two compared systems are shown in Fig. 10.3: a given continuous-time LTI system I, which will be called the reference system, and the standard sampled-data system II. As before, w(s) in Fig. 10.3 is a rational matrix
$$ w(s) = \begin{bmatrix} K(s) & L(s) \\ M(s) & N(s) \end{bmatrix} . \qquad (10.107) $$
It is assumed that L(s) is at least proper, while the other elements of the matrix w(s) are strictly proper. Moreover, we assume that the standard system
10.12 L2 Redesign of Continuous-time LTI Systems under Persistent Excitation 417
[Fig. 10.3: system I (top) is the continuous-time reference system, the plant w(s) closed by the LTI feedback U(s); system II (bottom) is the standard sampled-data system, the same plant w(s) closed by the discrete controller C.]
Fig. 10.3. Structure for redesign
is internally stable and modal controllable, and the forming element is a zero-order hold, i.e.
$$ \mu(s) = \mu_0(s) = \frac{1 - e^{-sT}}{s} . \qquad (10.108) $$
Suppose that the reference system I is stable and the rational matrix in the
feedback U (s) is analytical at the point s = 0.
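For a state-space realisation x' = Ax + Bu, the zero-order hold (10.108) leads to the familiar discrete model x_{k+1} = A_d x_k + B_d u_k with A_d = e^{AT} and B_d = (integral of e^{A tau} d tau) B. A minimal numeric sketch (scalar plant chosen for checkability; not taken from the book):

```python
import numpy as np

def zoh_discretize(A, B, T):
    """Discretize x' = Ax + Bu under a zero-order hold of period T,
    using the block-matrix exponential trick:
    expm([[A, B], [0, 0]] * T) = [[Ad, Bd], [0, I]]."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = np.eye(n + m)
    term = np.eye(n + m)
    for k in range(1, 30):          # truncated exponential series
        term = term @ (M * T) / k
        E = E + term
    return E[:n, :n], E[:n, n:]

# scalar check: A = -1, B = 1, T = 1  ->  Ad = e^{-1}, Bd = 1 - e^{-1}
Ad, Bd = zoh_discretize(np.array([[-1.0]]), np.array([[1.0]]), 1.0)
print(Ad[0, 0], Bd[0, 0])
```

The truncated series suffices here because the matrix has small norm; production code would use a proper `expm`.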
3. With account for (10.108), under zero initial energy the image of the output of the standard sampled-data system has the form
$$ Z(s) = L(s)\,\mu_0(s)\,w_d(s)\,\big[I_n - D_{N\mu_0}(T,s,0)\,w_d(s)\big]^{-1} D_{MX}(T,s,0) + K(s)X(s) . \qquad (10.109) $$
Under similar assumptions, the image of the output of the reference system \bar Z(s) is
$$ \bar Z(s) = w_c(s)\,X(s) , \qquad (10.110) $$
where the transfer matrix of the reference system is
$$ w_c(s) = L(s)\,U(s)\,\big[I_n - N(s)U(s)\big]^{-1} M(s) + K(s) . \qquad (10.111) $$
Suppose the input x(t) is the unit step. Then, after vanishing of the transient processes, the constant output \bar z in the reference system has the form
where we used the fact that the image X(s) has the form (10.112). Similarly, if the standard sampled-data system is internally stable and (10.108) and (10.112) hold, then there exists the limit
$$ z = \lim_{s\to 0} s\,Z(s) . $$
As follows from (10.113) and (10.114), under (10.112) the output signals of both systems have, in the general case, infinite L_2-norms. Nevertheless, under the condition
$$ z = \bar z , \qquad (10.115) $$
the difference
$$ e(t) = z(t) - \bar z(t) $$
has a finite L_2-norm, i.e. the following integral converges:
$$ J = \int_0^{\infty} e^{\top}(t)\,e(t)\,dt = \int_0^{\infty} \big[z(t)-\bar z(t)\big]^{\top}\big[z(t)-\bar z(t)\big]\,dt . $$
Using the Parseval formula, we can write this integral in the form
$$ J = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} E^{\ast}(s)\,E(s)\,ds = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \big[Z(s)-\bar Z(s)\big]^{\ast}\big[Z(s)-\bar Z(s)\big]\,ds , \qquad (10.116) $$
where Z(s) and \bar Z(s) are the images (10.109) and (10.110). Then the following optimisation problem is quite logical.
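The passage from the time-domain integral to the frequency-domain form (10.116) rests on Parseval's identity. Its discrete analogue is easy to check numerically; the snippet below is a generic sanity check with a random finite-energy sequence, unrelated to the specific images Z(s) and the reference output:

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.standard_normal(256)          # a finite-energy "error" sequence
E = np.fft.fft(e)                     # its discrete spectrum

time_energy = np.sum(e ** 2)
freq_energy = np.sum(np.abs(E) ** 2) / len(e)   # discrete Parseval identity

print(time_energy, freq_energy)       # the two energies coincide
```

The factor 1/N mirrors the 1/(2*pi*j) normalisation in (10.116) for numpy's unnormalised FFT convention.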
$$ \mu_0(s) = T + \ldots \qquad (10.118) $$
$$ \mu_0(kj\omega) = 0 , \quad (k = \pm 1, \pm 2, \ldots) . \qquad (10.119) $$
Using (10.118) and (10.119), it can easily be shown that in the vicinity of the point s = 0
$$ D_{N\mu_0}(T,s,0) = N(s) + \ldots . $$
c) Since X(s) = s^{-1}x_0, similarly to this equation we obtain
$$ s\,D_{MX}(T,s,0) = \frac{s}{T}\sum_{k=-\infty}^{\infty} \frac{M(s+kj\omega)\,x_0}{s+kj\omega} = \frac{1}{T}\,M(s)\,x_0 + \ldots . $$
1
= 0r (s) a r (s) (s) 0r (s) esT b r (s) (s) .
7. Using (10.109)-(10.111), the image of the error can be written in the form
$$ E(s) = Z(s) - \bar Z(s) = L(s)\,\mu_0(s)\,w_d(s)\,\big[I_n - D_{N\mu_0}(T,s,0)\,w_d(s)\big]^{-1} D_{MX}(T,s,0) - L(s)\,U(s)\,\big[I_n - N(s)U(s)\big]^{-1} M(s)\,X(s) . \qquad (10.125) $$
Under Condition (10.117), the right-hand side of this relation is analytical for s = 0. Let us find a representation of the error image E(s) in terms of the new system matrix. For this purpose, we use Equation (10.124). Then, from (10.53) and (10.124), we have
$$ R_N(s) = w_d(s)\,\big[I_n - D_{N\mu_0}(T,s,0)\,w_d(s)\big]^{-1} = \beta_{0r}(s)\,\bar a_l(s) + \bar a_r(s)\,\varphi(s)\,\bar a_l(s) = \beta_{0r}(s)\,\bar a_l(s) + \bar a_r(s)\,\varphi_0\,\bar a_l(s) + \big(1-e^{-sT}\big)\,\bar a_r(s)\,\tilde\varphi(s)\,\bar a_l(s) . $$
where
$$ \begin{aligned} p_0(s) &= L(s)\,\mu_0(s)\,\bar a_r(s) , \\ q_0(s) &= \big(1-e^{-sT}\big)\,\bar a_l(s)\,D_{MX}(T,s,0) , \\ r_0(s) &= L(s)\,\mu_0(s)\,\big[\beta_{0r}(s) + \bar a_r(s)\,\varphi_0\big]\,\bar a_l(s)\,D_{MX}(T,s,0) - L(s)\,U(s)\,\big[I_n - N(s)U(s)\big]^{-1} M(s)\,X(s) . \end{aligned} \qquad (10.126) $$
Let us prove some properties of the matrices (10.126), which will be important. In the present section it is always assumed that the conditions of Lemma 10.13 hold.
Lemma 10.16. The matrices p_0(s) and
$$ p_1(s) = L(s)\,\bar a_r(s) \qquad (10.127) $$
are integral functions of s.
Proof. The claim about the matrix p_0(s) was proved above. Further, we have
$$ L(s)\,\bar a_r(s) = \frac{p_0(s)}{\mu_0(s)} . \qquad (10.128) $$
The left-hand side of (10.128) can have poles only at poles of the matrix L(s), and, due to (10.108), the right-hand side can have poles only at the points s_k = kj\omega, (k = \pm 1, \pm 2, \ldots). But, by assumption, the matrix L(s) is analytical at the points s_k. Therefore, the left-hand side of (10.128) has no poles, i.e. it is an integral function of s.
Lemma 10.17. The vector q_0(s) is an integral function of s.
Proof. Let us transform the expression for the vector q_0(s) using the relation
$$ D_{MX}(T,s,0) = \int_0^T D_M(T,s,\tau)\,D_X(T,s,-\tau)\,d\tau . \qquad (10.129) $$
Lemma 10.18. If the conditions of Lemma 10.13 hold, the vector r0 (s) has
no poles on the imaginary axis.
where D(s) is an integral function. Then the image (10.130) can have simple pure imaginary poles only at the points s_k = kj\omega = 2k\pi j/T, (k = 0, \pm 1, \ldots). On the other hand, with account for (10.108), (10.126) and (10.127), from (10.130) we obtain
$$ Z_1(s) = \frac{Z_2(s)}{s}\,x_0 , \qquad (10.132) $$
where
$$ Z_2(s) = L(s)\,\beta_{0r}(s)\,q_1(s) + p_1(s)\,\varphi_0\,q_1(s) + K(s) . $$
The matrix Z_2(s) is analytical at the points s_k = kj\omega, (k = 0, \pm 1, \ldots), because under the given assumptions the right-hand side has no poles due to (6.106) and Lemmata 10.16 and 10.17. Moreover, the matrix Z_2(s) has no pole at s = 0. Indeed, if we assume the converse, then from (10.132) we find that the image (10.130) has a pole at s = 0 with a multiplicity greater than one, but this contradicts (10.131). Hence the matrix Z_2(s) is an integral function of s.
Therefore, with respect to (10.132), the vector r_0(s) can be written in the form
$$ r_0(s) = \frac{1}{s}\,\big[Z_2(s) - w_c(s)\big]\,x_0 , \qquad (10.133) $$
where w_c(s) is the transfer function of the reference system (10.111). Its poles are located in the left half-plane due to the assumption on its stability. From (10.133), we obtain that the vector r_0(s) can have a simple pole at the point s = 0. But, due to the choice \varphi(\zeta) = \varphi_0, we have
$$ w_c(s) = \frac{N_c(s)}{d_c(s)} , \qquad (10.134) $$
$$ r_0(s) = \frac{P_c(s)}{d_c(s)} , \qquad (10.135) $$
Proof. From the proof of Lemma 10.18, it follows that the matrix Z2 (s) is an
integral function of s. Moreover, the right-hand side of (10.133) is analytical
at the point s = 0. Therefore, substituting (10.134) into (10.133), we obtain
the claim of the corollary.
where
$$ A_{L1}(s) = \bar a_r^{\ast}(s)\,D_{L^{\ast}L\mu_0^{\ast}\mu_0}(T,s,0)\,\bar a_r(s) . \qquad (10.137) $$
Moreover,
$$ \bar B_0(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} r_0^{\ast}(s+kj\omega)\,p_0(s+kj\omega) , \qquad \bar B_0^{\ast}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty} p_0^{\ast}(s+kj\omega)\,r_0(s+kj\omega) \qquad (10.138) $$
$$ G(\zeta)\,q_0(\zeta) = \nu_0(\zeta)\,\bar 1_n , $$
where \nu_0(\zeta) is a scalar polynomial and G(\zeta) is a unimodular matrix. Then
$$ \psi(\zeta) = \varphi(\zeta)\,G^{-1}(\zeta) = \big[\,\psi_1(\zeta) \quad \psi_2(\zeta)\,\big] , $$
where \psi_1(\zeta) consists of one column and \psi_2(\zeta) of the remaining n-1 columns, and
$$ \bar B_0(\zeta) = \frac{D_0(\zeta)}{\Delta_c(\zeta)} . $$
From the last two equations, we conclude that the set of stable poles of the matrix R_0(\zeta) belongs to the set of roots of the polynomial \Delta_c(\zeta). Therefore, as a result of the principal separation (10.84), we obtain
$$ R_{0+}(\zeta) = \frac{N_0(\zeta)}{\Delta_c(\zeta)} $$
with a polynomial matrix N_0(\zeta). The optimal vector \psi_{1o}(\zeta) has the form
$$ \psi_{1o}(\zeta) = \frac{1}{T}\,\frac{\nu_0^{-1}(\zeta)\,N_0(\zeta)}{\Pi_{0+}(\zeta)\,\Delta_c(\zeta)} . \qquad (10.142) $$
The further procedure for constructing the set of optimal controllers is the same as in Section 10.7.
10. Similarly to Section 10.7, it can be found that the characteristic polynomial of the optimal system \Delta_o(\zeta) is divisible by the polynomial \Delta_c(\zeta). Hence, if in particular the function (10.142) is irreducible, then the characteristic polynomial of the optimal standard sampled-data system is divisible by the discretisation of the characteristic polynomial of the reference model d_c(s), and this fact does not depend on the choice of the controller which minimises Functional (10.141).
[Fig. 10.4: continuous-time reference system; the input x passes through Q(s) and the loop formed by U(s), G(s) and F(s), producing the outputs h1 and v.]
Suppose the reference system in Fig. 10.4 is asymptotically stable, the input
signal has the form (10.112) and the matrix U (s) is analytical at the point
s = 0. Then the L2 -redesign problem can be formulated as follows.
For the sampled-data system in Fig. 9.1, let us have the sampling period
T , the transfer function of the forming element (10.108) and the input signal
(10.112). Let z(t) denote the output vector (10.143) of the LTI system under
zero initial energy and the vector z(t) is the output of the sampled-data system
under similar assumptions. It is required to nd the transfer function of the
discrete controller wd () satisfying the following conditions:
a) The following equality holds:
2. Let us show that the problem formulated above can be reduced to the general scheme considered in Section 10.12. With this aim in view, we show that the system shown in Fig. 10.4 can be presented in the form of a reference system I from Fig. 10.3. Notice that the transfer matrix of the LTI system w_c(s) from the input x to the output \bar z can be represented in the form (10.111). Indeed, using standard structural transformations, it is easy to find the transfer matrices w_{hx}(s) and w_{vx}(s) from the input x(t) to the outputs \bar h(t) and \bar v(t):
$$ w_{hx}(s) = G(s)U(s)Q(s)F(s)\,\big[I - G(s)U(s)Q(s)F(s)\big]^{-1} , \qquad (10.144) $$
$$ w_{vx}(s) = F(s)\,\big[I - G(s)U(s)Q(s)F(s)\big]^{-1} . \qquad (10.145) $$
Then, assuming
$$ A = Q(s)F(s) , \qquad B = G(s)U(s) , $$
we obtain
$$ G(s)U(s)Q(s)F(s)\,\big[I - G(s)U(s)Q(s)F(s)\big]^{-1} = G(s)U(s)\,\big[I_n - Q(s)F(s)G(s)U(s)\big]^{-1} Q(s)F(s) . \qquad (10.146) $$
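The structural identity (10.146) is the matrix "push-through" rule BA(I - BA)^{-1} = B(I - AB)^{-1}A, which holds for arbitrary conformable matrices whenever the inverses exist. A quick numerical check with random matrices (purely illustrative; A and B here stand in for Q(s)F(s) and G(s)U(s) evaluated at a frequency point):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3)) * 0.3   # plays the role of Q(s)F(s)
B = rng.standard_normal((3, 2)) * 0.3   # plays the role of G(s)U(s)

I2, I3 = np.eye(2), np.eye(3)
lhs = B @ A @ np.linalg.inv(I3 - B @ A)
rhs = B @ np.linalg.inv(I2 - A @ B) @ A

print(np.max(np.abs(lhs - rhs)))   # ~0, confirming the identity
```

Note that the identity moves the inverse from a 3x3 to a 2x2 matrix, which is exactly the dimension reduction exploited in (10.146).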
Then we prove
$$ w_{vx}(s) = F(s)\,\big[I - G(s)U(s)Q(s)F(s)\big]^{-1} = F(s)\,\big\{ G(s)U(s)Q(s)F(s)\,\big[I - G(s)U(s)Q(s)F(s)\big]^{-1} + I \big\} = F(s)G(s)U(s)Q(s)F(s)\,\big[I - G(s)U(s)Q(s)F(s)\big]^{-1} + F(s) . \qquad (10.148) $$
where
$$ K(s) = \begin{bmatrix} O \\ F(s) \end{bmatrix} , \qquad L(s) = \begin{bmatrix} G(s) \\ F(s)G(s) \end{bmatrix} , \qquad M(s) = Q(s)F(s) , \qquad N(s) = Q(s)F(s)G(s) . \qquad (10.149) $$
Comparing (10.149) with (9.9), we arrive at the conclusion that the problem under consideration is a special case of the general problem described in Section 10.12, whenever the elements of Matrix (10.107) have the form (10.149). Therefore, the further solution of the L2-redesign problem can be found using the general algorithm of Section 10.12.
3. Under some additional assumptions taking into account the special structure of the reference system, we can establish some further important properties of the optimal system.
Theorem 10.20. Let the conditions of Lemma 9.24 hold, the factorisation (10.104) exist and \nu(\zeta) = const. Then there exists a set of optimal controllers. Moreover, if the ratio (10.142) is irreducible, then the optimal controller can be chosen in such a way that the characteristic polynomial of the optimal system \Delta_o(\zeta) becomes
$$ \Delta_o(\zeta) = \Delta_c(\zeta)\,\lambda(\zeta) , $$
where \Delta_c(\zeta) is a polynomial and \lambda(\zeta) is a polynomial such that \lambda(\zeta) \doteq \det \Lambda(\zeta).
For any choice of the optimal controller, the characteristic polynomial of the optimal system \Delta_o(\zeta) is divisible by the polynomial \tilde\Delta_o(\zeta).
Remark 10.21. If the polynomial dQ (s) has roots on the imaginary axis, then
the factorisation (10.104) is impossible.
Appendices
A
Operator Transformations of Taylor Sequences
4. Let the \zeta-transform (A.3) of the sequence (A.1) be convergent for |\zeta| < R. Then, for |z| > R^{-1}, the series
$$ u^{\ast}(z) = \sum_{k=0}^{\infty} u_k z^{-k} \qquad (A.5) $$
converges, and
$$ u^{0}(\zeta) = u^{\ast}(\zeta^{-1}) , \qquad u^{\ast}(z) = u^{0}(z^{-1}) , \qquad (A.7) $$
where u^{0}(\zeta) is the \zeta-transform defined by (A.4).
which results in
$$ y^{\ast}(z) = \frac{z}{z-a}\,y + \frac{u^{\ast}(z)}{z-a} . \qquad (A.12) $$
Particularly, y = 0 implies
$$ y^{\ast}(z) = \frac{u^{\ast}(z)}{z-a} , $$
and thus
$$ y^{0}(\zeta) = \frac{y}{1-a\zeta} + \frac{\zeta}{1-a\zeta}\,u^{0}(\zeta) . \qquad (A.13) $$
Especially for y = 0, we obtain
$$ (1-a\zeta)\,y^{0}(\zeta) = \zeta\,u^{0}(\zeta) . $$
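Relation (A.13) can be checked numerically by generating the sequence from the first-order recursion y_{k+1} = a*y_k + u_k with initial value y (the recursion form and the sample signals below are our assumptions for the check, standing in for the appendix's difference equation) and comparing the truncated zeta-series of both sides:

```python
# Check (A.13): with y_{k+1} = a*y_k + u_k, the zeta-transform
# y0(zeta) = sum_k y_k zeta^k satisfies
#   y0(zeta) = y/(1 - a*zeta) + zeta*u0(zeta)/(1 - a*zeta).
a, y_init, zeta, N = 0.5, 2.0, 0.3, 200

u = [1.0 / (k + 1) for k in range(N)]          # arbitrary decaying input
y = [y_init]
for k in range(N - 1):
    y.append(a * y[k] + u[k])

series = lambda seq: sum(c * zeta ** k for k, c in enumerate(seq))
lhs = series(y)
rhs = y_init / (1 - a * zeta) + zeta * series(u) / (1 - a * zeta)
print(lhs, rhs)   # agree up to the (negligible) series truncation
```

With |zeta| < 1 the truncation error of the partial sums is far below floating-point noise, so the two values coincide to machine precision.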
where
$$ F(T,s,t) = \sum_{i=1}^{q}\sum_{k=1}^{m_i} \frac{f_{ik}}{(k-1)!}\,\frac{\partial^{\,k-1}}{\partial s_i^{\,k-1}}\;\frac{e^{(s_i-s)t}}{1-e^{(s_i-s)T}} . $$
Besides, if we have m_1 \ne 0 in (B.1), then the sum of the series F(T,s,t) possesses jumps of finite height at the points t_n = nT, (n = 0, \pm 1, \ldots). However, when m_1 = m_2 = \ldots = m_{\nu-1} = 0 and m_\nu \ne 0, the periodic function F(T,s,t) has derivatives up to and including (\nu-1)-th order, where the (\nu-1)-th derivative is piecewise continuous, but the lower derivatives are continuous.
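For a single simple pole (q = 1, m_1 = 1, f_{11} = 1, i.e. f(t) = exp(s1*t)), the closed-form sum above reduces to a geometric series, which can be verified numerically. The check below assumes the series definition F(T, s, t) = sum over n >= 0 of f(t + nT)*exp(-s*(t + nT)); the parameter values are illustrative.

```python
import cmath

# For f(t) = exp(s1*t), the series
#   F(T, s, t) = sum_{n>=0} exp((s1 - s)*(t + n*T))
# sums to exp((s1 - s)*t) / (1 - exp((s1 - s)*T)), valid for Re(s1 - s) < 0.
s1, s, T, t = -0.4, 0.8 + 0.5j, 1.0, 0.3

partial = sum(cmath.exp((s1 - s) * (t + n * T)) for n in range(400))
closed = cmath.exp((s1 - s) * t) / (1 - cmath.exp((s1 - s) * T))
print(abs(partial - closed))   # ~0
```

Since Re(s1 - s) = -1.2 here, the terms decay geometrically and 400 terms are ample for machine-precision agreement.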
where
436 B Sums of Certain Series
$$ D_F(T,s,t) = e^{st}\,F(T,s,t) = \sum_{i=1}^{q}\sum_{k=1}^{m_i} \frac{f_{ik}}{(k-1)!}\,\frac{\partial^{\,k-1}}{\partial s_i^{\,k-1}}\;\frac{e^{s_i t}}{1-e^{(s_i-s)T}} . $$
If we have m_1 \ne 0 in (B.1), then the sum of the series D_F(T,s,t) has jumps of finite height at the points t_n = nT, (n = 0, \pm 1, \ldots). However, if m_1 = m_2 = \ldots = m_{\nu-1} = 0 and m_\nu \ne 0, then the periodic function D_F(T,s,t) has derivatives up to and including (\nu-1)-th order, where the (\nu-1)-th derivative is piecewise continuous, but the lower derivatives are continuous.
3. Suppose
$$ \mu(s) = \int_0^T e^{-st}\,m(t)\,dt , $$
where the function D_{F\mu}(T,s,t) is determined by each one of the equivalent formulae
$$ D_{F\mu}(T,s,t) = \sum_{i=1}^{q}\sum_{k=1}^{m_i} \frac{f_{ik}}{(k-1)!}\,\frac{\partial^{\,k-1}}{\partial s_i^{\,k-1}}\;\frac{e^{s_i t}\,\mu(s_i)}{e^{(s-s_i)T}-1} + h_F(t) , $$
$$ D_{F\mu}(T,s,t) = \sum_{i=1}^{q}\sum_{k=1}^{m_i} \frac{f_{ik}}{(k-1)!}\,\frac{\partial^{\,k-1}}{\partial s_i^{\,k-1}}\;\frac{e^{s_i t}\,\mu(s_i)}{1-e^{(s_i-s)T}} + h_F(t) , $$
where
$$ h_F(t) = \sum_{i=1}^{q}\sum_{k=1}^{m_i} \frac{f_{ik}}{(k-1)!}\,t^{k-1}\,e^{s_i t} . $$
C.1 Introduction
For the description of control system elements, the following two data structures
are used:
Polynomial and quasi-polynomial matrices;
Real rational matrices.
Polynomial matrices are realised as objects of the class poln. The special
variables s, p, z, d, and q are realised as functions, and they are used for
entering polynomial matrices. For example, after the input
P = [ s+1 s^2+s-6
s^3 s-12 ]
the MATLAB environment creates and displays the following polynomial
matrix:
438 C DirectSDM A Toolbox for Optimal Design of Multivariable SD Systems
P: polynomial matrix: 2 x 2
s + 1 s^2 + s - 6
s^3 s - 12
Moreover, the DirectSDM Toolbox supports operations with quasi-
polynomials (by this term we mean functions having poles only at the
origin) by means of the same class poln. For example, the input
P = [ z+1 z+1+z^-1
z^2-5 1+z^-2 ]
creates and displays the following quasi-polynomial matrix:
P: quasi-polynomial matrix: 2 x 2
z + 1 z + 1 + z^-1
z^2 - 5 1 + z^-2
Real rational matrices are stored and handled as objects of standard classes
of the Control Toolbox describing models of LTI-systems, namely, tf (trans-
fer matrix), zpk (zero-pole-gain form), and ss (state-space description). The
DirectSDM Toolbox redefines the display function for the classes tf and
zpk. Also, some errors in the Control Toolbox (versions up to 5.2) have
been corrected.
Since the synthesis procedures developed in the present book essentially exploit models in the form of polynomial matrices and matrix fraction descriptions (MFD), the DirectSDM Toolbox supports all basic operations with polynomial and quasi-polynomial matrices.
For objects of the poln class, the arithmetic operations (addition, sub-
traction, multiplication, division) as well as concatenation, transposition and
inversion (for square matrices) are overloaded. It should be noticed that all
operands used in binary operations should have the same independent variable
(s, p, z, d, or q), respectively.
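The poln class itself is MATLAB-only. As a rough illustration of the underlying polynomial-matrix arithmetic, here is a minimal Python sketch built on numpy's polynomial routines; all names are ours and not part of DirectSDM, and only multiplication is shown:

```python
import numpy as np

def pm_mul(P, Q):
    """Multiply two polynomial matrices; entries are 1-D coefficient
    arrays, highest power of the variable first (illustrative sketch)."""
    rows, inner, cols = len(P), len(Q), len(Q[0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = np.zeros(1)
            for k in range(inner):
                acc = np.polyadd(acc, np.polymul(P[i][k], Q[k][j]))
            out[i][j] = acc
    return out

# P = [[s+1, s^2+s-6], [s^3, s-12]], as in the toolbox example above
P = [[np.array([1.0, 1.0]), np.array([1.0, 1.0, -6.0])],
     [np.array([1.0, 0.0, 0.0, 0.0]), np.array([1.0, -12.0])]]
I2 = [[np.array([1.0]), np.array([0.0])],
      [np.array([0.0]), np.array([1.0])]]
PI = pm_mul(P, I2)     # multiplying by the identity returns P
print(PI[0][1])        # coefficients of s^2 + s - 6
```

A full analogue of poln would add the remaining overloaded operations (addition, transposition, inversion) in the same coefficient representation.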
Below, a short list of functions for handling polynomial and quasi-polynomial matrices is given.
Consider the multivariable single-loop system shown in Fig. C.1. The digital controller (in the dashed box), composed of a discrete filter with transfer matrix C(\zeta) and a hold circuit with transfer function \mu(s), is used for stabilising a continuous-time plant. The control loop includes a plant F(s), an actuator G(s) and a dynamic negative feedback Q(s). The exogenous disturbance w(t) and the measurement noise m(t) are modelled as vector stationary stochastic processes with spectral density matrices S_w(s) = F_w(s)F_w^{\ast}(s) and S_m(s) = F_m(s)F_m^{\ast}(s), respectively; the generating signals are independent unit centred white noises.
The output signal e(t) denotes the stabilisation error. The controller should
ensure minimal power of the error signal under restrictions imposed on the
control power. The frequency-dependent weighting functions Ve (s) and Vu (s)
are introduced in order to shape the frequency-domain properties of the sys-
tem (for example, to ensure roll-o of the controller frequency response at
high frequencies).
[Fig. C.1: single-loop stabilisation system; the controller C(\zeta) with hold \mu(s) drives the actuator G(s) and plant F(s); the disturbance w(t) is shaped by F_w(s) and the measurement noise m(t) by F_m(s); the feedback is Q(s), and the weighted outputs z_u(t) and z_y(t) are formed by V_u(s) and V_y(s).]
$$ z = K(s)\,x + L(s)\,u , \qquad y = M(s)\,x + N(s)\,u , $$
The function sdh2 can be used for the synthesis of H_2-optimal controllers for extended single-loop multivariable systems as described above. Consider, for example, a simplified model of course stabilisation for a Kazbek-type tanker [149]:
$$ F(s) = \begin{bmatrix} \dfrac{0.051}{(25s+1)s} \\[8pt] \dfrac{0.051}{25s+1} \end{bmatrix} , \qquad G(s) = \frac{1}{s+1} , $$
$$ F_w(s) = 1 , \quad F_m(s) = 0 , \quad V_e(s) = I , \quad V_u(s) = 1 , \quad T = 1 . $$
As distinct from the problem considered in [149], the yaw angle and the rotation rate are both measured, i.e., the controller has 2 inputs and 1 output.
The system shown in Fig. C.1 is described by a MATLAB structure as follows:
sys.F = tf({0.051; 0.051},{[25 1 0];[25 1]});
sys.G = tf(1, [1 1]);
sys.Fw = tf(1);
sys.Vu = tf(1);
sys.T = 1;
The only mandatory fields of the structure are sys.F, sys.G, sys.Fw, and sys.T.
If the others are not specified, they take the following default values:
sys.Fm = 0;
sys.Ve = eye(n);
sys.Vu = 0;
sys.Q = eye(n);
Here n denotes the number of outputs of the plant F (s) and eye(n) denotes
the identity matrix of the corresponding dimension.
The function call
[C,P] = sdh2 ( sys )
gives the transfer matrix of the (unique) optimal controller C(z) (in the variable z!) and the poles of the closed-loop system in the z-plane:
C: zero-pole-gain model 1 x 2
Sampling time: 1
P =
0.9627 + 0.0240i
0.9627 - 0.0240i
0.3679 + 0.0000i
0.3679 - 0.0000i
Since all poles are inside the unit disk, the optimal closed-loop system is
stable.
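The stability conclusion can also be double-checked outside MATLAB; a tiny Python snippet over the pole values transcribed from the output above:

```python
# Discrete-time stability check for the closed-loop poles returned by sdh2
# (values transcribed from the printed output above).
poles = [0.9627 + 0.0240j, 0.9627 - 0.0240j,
         0.3679 + 0.0000j, 0.3679 - 0.0000j]

radius = max(abs(p) for p in poles)
print(radius, radius < 1.0)   # largest modulus is below 1, so stable
```

For discrete-time systems, stability is equivalent to all poles having modulus strictly less than one, which is exactly what the check verifies.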
This example is investigated in detail in the demo script demoh2 included
in the DirectSDM Toolbox .
[Fig. C.2: single-loop L_2 tracking system; the reference signal is shaped by R(s), the controller C(\zeta) with hold \mu(s) acts through G(s) on the plant F(s) with feedback Q(s); the weighted output error z_y(t) is formed by V_y(s), and W_y(s) generates the reference output.]
The cost function includes the sum of the weighted integral quadratic output and control errors and coincides with the square of the L_2-norm of z(t):
$$ J = \|z(t)\|_2^2 = \int_0^{\infty} z^{\top}(t)\,z(t)\,dt = \int_0^{\infty} \big[ z_e^{\top}(t)\,z_e(t) + z_u^{\top}(t)\,z_u(t) \big]\,dt . \qquad (C.2) $$
The problem is formulated as follows: let all continuous elements of the system, the hold device \mu(s) and the sampling interval T be given. Find a stabilising digital controller C(\zeta) ensuring the minimum of the cost function (C.2).
It can be shown that the problem under consideration can be viewed as a special case of the general L_2-optimisation problem for standard sampled-data systems analysed in Chapter 10. Denote the signal acting upon the sampling unit by y(t). Then the system equations in operator form appear as
$$ z = K(s)\,x + L(s)\,u , \qquad y = M(s)\,x + N(s)\,u , $$
where the matrices of the corresponding standard system have the form
$$ K(s) = \begin{bmatrix} V_e(s)\,W_e(s)\,R(s) \\ V_u(s)\,W_u(s)\,R(s) \end{bmatrix} , \qquad L(s) = \begin{bmatrix} V_e(s)\,F(s) \\ V_u(s) \end{bmatrix} , $$
The function sdl2 can be used for the synthesis of L_2-optimal controllers for the extended single-loop multivariable system described above. Assume
$$ F(s) = \begin{bmatrix} \dfrac{1}{0.5s+1} \\[8pt] \dfrac{1}{(0.5s+1)s} \end{bmatrix} , \quad G(s) = 1 , \quad Q(s) = I , \quad X(s) = \begin{bmatrix} \dfrac{1}{s} \\[4pt] \dfrac{1}{s} \end{bmatrix} , $$
$$ W_e(s) = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} , \quad V_e(s) = I , \quad W_u(s) = \begin{bmatrix} 0 & 0 \end{bmatrix} , \quad V_u(s) = 0 , \quad T = 1 . $$
The system shown in Fig. C.2 is described by a MATLAB structure as follows:
sys.F = tf( {1; 1}, {[0.5 1]; conv(1,[0.5 1 0])} );
sys.X = tf( {1; 1}, {[1 0]; [1 0]} );
sys.We = tf( {0 0;0 1}, {1 1;1 1} );
sys.T = 1;
Among all the fields, only sys.F, sys.R, sys.Wy, and sys.T are required. If
the others are not given, they take the following default values:
sys.G = eye(m);
sys.Q = eye(n);
sys.Ve = eye(n);
sys.Wu = 0;
sys.Vu = 0;
Here n denotes the number of outputs of the plant F (s), m is the dimen-
sion of the input signal x(t), and eye() denotes the identity matrix of the
corresponding dimension.
The function call
[C,P] = sdl2 ( sys )
gives the transfer matrix of an optimal controller C(z) (non-unique for multivariable systems) and the poles of the optimal closed-loop system in the z-plane:
C: zero-pole-gain model 1 x 2
Sampling time: 1
P =
0
0
0.3673
-0.2329
Since all poles are inside the unit disk, the closed-loop system is stable.
This example is investigated in detail in the demo script demol2 included
in the DirectSDM Toolbox .
D
Design of SD Systems with Guaranteed
Performance
D.1 Introduction
During the design of control systems, nearly complete information is usually required, for analysis as well as for synthesis, about the conditions under which the system will operate. A typical practical problem consists in investigating the behaviour of a system disturbed by stochastic external signals. As shown in the present book, in this case the mean variance of the output can be used for evaluating the performance of a sampled-data system, and the optimisation criterion can be a weighted sum of the output variances.
For calculating the mean variance and for applying the optimisation procedure, the considered methods require the spectral density of the excitation. However, for the majority of real stochastic processes, even rough information about the spectral density is not available. For instance, there is no exact answer to the question about the spectrum of sea waves [23, 24, 122]. Therefore, it is impossible to predict under which conditions the system will operate.
The lack of rough information about the spectral density has the consequence that the variance of the output signal cannot be calculated; thus we cannot find the optimal controller. In engineering practice, this situation is managed in the following way: for the real spectral density of the affecting disturbance, several approximations are built, and for each approximation the analysis or synthesis problem for the optimal system is solved. However, this way of solution never takes into account the approximation error of the spectral density. Hence the influence of this error on the performance of the system cannot be estimated under real excitations. But in practice the situation may arise that a prescribed performance of the system must be guaranteed for any excitation from a given set. In this case, it cannot be predicted how variations in the parameters of the excitation affect the performance of the optimal system.
Hence the absence of even rough information about the spectral density of the excitation leads to the following problems:
– Analyse a system under incomplete information about the external excitation.
– Design a system that guarantees an upper bound on the performance index for all excitations of a certain set.
Below, such systems are called systems with guaranteed performance, and the synthesis procedure is named design for guaranteed performance. The set of excitations, for which the performance of the system is guaranteed to stay inside prescribed limits, is called the class of excitations. Taking a single-loop scalar system as an instance, the present appendix considers methods for the solution of analysis and design problems for guaranteed performance. Moreover, the modelling of classes of stochastic excitations is explained, which is needed for the definition and solution of the named tasks.
The practical computations are realised with the MATLAB toolbox GarSD, which was developed especially for the analysis and design of sampled-data systems with guaranteed performance. The package operates together with the MATLAB toolbox DirectSD [130] and the toolbox DirectSDM, which has been presented in Appendix C.
Consider the single-loop scalar sampled-data system with the structure shown in Fig. D.1.

[Fig. D.1. Single-loop sampled-data system: digital controller wd(ζ), forming element μ(s), actuator W(s), process F(s) with output e(t), feedback L(s), control u(t), sampling period T, excitation g(t)]

The centred stationary stochastic excitation g(t) with the spectral density Sg(s) affects the continuous process with the transfer function F(s). Furthermore, we have the transfer functions of the actuator W(s), of the feedback L(s), and of the forming element μ(s). The product W(s)F(s)L(s) is
assumed to be strictly proper, while W (s) and W (s)F (s) are at least proper.
The system is controlled by the digital controller with the transfer function
    wd(ζ) = ( Σ_{r=0}^{R} br ζ^r ) / ( Σ_{r=0}^{R} ar ζ^r ) = B(ζ)/A(ζ) ,   ζ = e^{sT} ,   (D.1)
where WF(T, s, t), W(T, s, t), WFL(T, s, t) are the corresponding displaced pulse frequency responses (DPFR).
Let Z be the vector containing the constructive parameters which have to be determined. The components of the vector Z have to be chosen in such a way that the transfer function of the controller wd(ζ) is uniquely established. In this case, the PTFs (D.2), (D.3) are functions of the vector Z. These functions will be denoted by wk(s, t, Z), k = 1, 2.
Then the formulae for calculating the variances of the outputs can be written in the form

    dk(t, Z) = (1/π) ∫_0^∞ Ak²(ω, t, Z) Sg(ω) dω ,   k = 1, 2 ,   (D.4)

with the magnitude of the parametric frequency response

    Ak(ω, t, Z) = |wk(s, t, Z)| ,   s = iω .
The functional

    J(Z) = d̄1(Z) + ρ² d̄2(Z)   (D.5)

is used as performance criterion, where ρ is a real weighting coefficient and d̄1(Z), d̄2(Z) are the mean variances, determined by

    d̄k(Z) = (1/T) ∫_0^T dk(t, Z) dt ,   k = 1, 2 .
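The computation behind (D.4) can be sketched numerically; the squared PFR magnitude A2 and the spectral density Sg below are hypothetical stand-ins, not quantities produced by GarSD:

```python
import numpy as np

def variance(A2, Sg, w_max=60.0, n=20001):
    """d = (1/pi) * integral_0^w_max of A2(w)*Sg(w) dw, trapezoidal rule.
    Truncating at w_max is admissible for low-pass systems, whose PFR
    decays at least like 1/w."""
    w = np.linspace(0.0, w_max, n)
    y = A2(w) * Sg(w)
    return float(np.sum((y[1:] + y[:-1]) / 2.0) * (w[1] - w[0]) / np.pi)

# hypothetical squared PFR magnitude and spectral density
A2 = lambda w: 1.0 / (1.0 + w**2)
Sg = lambda w: 0.1 / (w**4 - 2.0*w**2 + 2.0)
d = variance(A2, Sg)
```

The same routine, applied to dk(t, Z) on a grid of t values, yields the mean variances entering (D.5).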
The procedure for searching for the vector Zgar is called design for guaranteed performance.
Let J0 be the largest value of (D.5) for which the operation of the system is still accepted as successful. Then, if we can prove the inequality

    E(Zgar) ≤ J0 ,

we are able to state, with the aid of (D.6)–(D.8), that successful operation of the system with the parameter vector Zgar is guaranteed for any excitation from the class MS: there is no disturbance in the class MS for which the maximal possible value of (D.5) exceeds the bound J0.
We consider two modelling variants for the class of random excitations in problems for guaranteed performance.
In the first variant, the model involves the variance d0 of the excitation and the totality of its N moments dn,

    (1/π) ∫_0^∞ S(ω) ω^{2n} dω = dn ,   n = 0, 1, . . . , N .   (D.9)

This totality is a generalised characteristic of the spectral density that is robust against variations [18, 122].
In the second variant, the class MS is modelled by an enveloping spectral density Sog(ω). The construction ensures that there exists no frequency ω at which any element of the class MS takes a value greater than Sog(ω).
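The generalised characteristics (D.9) are plain weighted integrals and are easy to approximate numerically. A minimal sketch (the density below mirrors the worked example (D.14); the minus sign in its denominator is an assumption of this sketch):

```python
import numpy as np

def moments(S, N, w_max, n=100001):
    """d_n = (1/pi) * integral_0^w_max of S(w)*w**(2n) dw, n = 0..N,
    via the trapezoidal rule."""
    w = np.linspace(0.0, w_max, n)
    dw = w[1] - w[0]
    out = []
    for k in range(N + 1):
        y = S(w) * w**(2*k)
        out.append(float(np.sum((y[1:] + y[:-1]) / 2.0) * dw / np.pi))
    return out

# envelope density patterned after the worked example (D.14)
S_og = lambda w: 0.0757 / (w**4 - 2.489*w**2 + 1.848)
d = moments(S_og, 1, 3.04)   # [d_0, d_1]
```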
I. Let the class MS of excitations of the system in Fig. D.1 be given by the set dn, n = 0, . . . , N, of (D.9). Suppose throughout that the transfer function of the process F(s) and the product F(s)L(s) are strictly proper. In this case, we obtain for the parametric frequency response Ak(ω, t, Z)

    lim_{ω→∞} Ak(ω, t, Z) = 0 ,   k = 1, 2 ,

i.e. the system as a whole reacts to the input g(t) as a low-pass filter [148]. Besides, the PFR decreases not slower than 1/ω as ω → ∞. Thus, the integrals (D.4) converge absolutely, and the infinite upper limits in (D.4) may be substituted by finite values ω̄k, because

    Ak²(ω, t, Z) ≈ 0 ,   ω > ω̄k .
where

    Āk²(ω, Z) = (1/T) ∫_0^T Ak²(ω, t, Z) dt ,   k = 1, 2 .
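Averaging the squared PFR over one sampling period can be sketched in the same way; the periodically modulated response A below is a hypothetical example:

```python
import numpy as np

def averaged_pfr_sq(A, w, T, m=401):
    """(1/T) * integral_0^T of A(w, t)**2 dt via the trapezoidal rule."""
    t = np.linspace(0.0, T, m)
    y = A(w, t)**2
    return float(np.sum((y[1:] + y[:-1]) / 2.0) * (t[1] - t[0]) / T)

# hypothetical T-periodically modulated low-pass response
A = lambda w, t: (1.0 + 0.3*np.cos(2*np.pi*t)) / np.sqrt(1.0 + w**2)
Abar2 = averaged_pfr_sq(A, 1.0, 1.0)   # averaged square at w = 1
```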
The estimates D̄k(Z) of the mean variances d̄k(Z) are calculated by

    d̄k(Z) ≤ D̄k(Z) = Σ_{n=0}^{N} cnk(Z) dn ,   (D.10)

where the coefficients cnk(Z) define the polynomials

    Ck(ω, Z) = Σ_{n=0}^{N} cnk(Z) ω^{2n} ,

which must majorise the averaged PFR,

    Āk²(ω, Z) ≤ Ck(ω, Z) ,   ω ∈ [0, ωS] ,   (D.11)

and in addition approximate it as closely as possible,

    Ck(ω, Z) ≈ Āk²(ω, Z) ,   ω ∈ [0, ωS] .   (D.12)
    E(Z) = Σ_{n=0}^{N} cn1(Z) dn + ρ² Σ_{n=0}^{N} cn2(Z) dn .   (D.13)

Due to (D.11), this functional majorises the functional (D.5) and, for any given Z, constitutes an upper bound [155]. Moreover, it does not depend on the concrete spectral density; its value is determined only by the generalised characteristic of the class MS. The coefficients cnk(Z) can be computed by applying known numeric procedures [121], [155].
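To illustrate the idea behind (D.10)–(D.12), the following crude grid search (a stand-in for the numeric procedures of [121], [155]; the averaged PFR A2_bar and the moment values are hypothetical) picks the coefficients of a second-order majorant polynomial:

```python
import numpy as np

def majorant_coeffs(A2_bar, d, w_S, n_grid=400):
    """Choose c0, c1 of C(w) = c0 + c1*w**2 with C(w) >= A2_bar(w)
    on [0, w_S] (cf. (D.11)) while keeping the resulting bound
    c0*d[0] + c1*d[1] small; a crude grid search over c1."""
    w = np.linspace(0.0, w_S, n_grid)
    a = A2_bar(w)
    best = None
    for c1 in np.linspace(0.0, np.max(a) / w_S**2, 200):
        c0 = np.max(a - c1 * w**2)       # smallest feasible c0 for this c1
        cost = c0 * d[0] + c1 * d[1]     # resulting upper bound (D.10)
        if best is None or cost < best[0]:
            best = (cost, c0, c1)
    return best

# hypothetical averaged squared PFR and moments d0, d1
A2_bar = lambda w: 1.0 / (1.0 + 0.5 * w**2)
bound, c0, c1 = majorant_coeffs(A2_bar, [0.05807, 0.07895], 3.04)
```

For this monotonically decreasing A2_bar the search returns c1 = 0; for a non-monotone (resonant) response the quadratic term enters the trade-off.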
Remark D.1. Practical computations have shown that, for arbitrary excitations, the inclusion of moments of order higher than one has marginal influence on the estimate of the mean variance. Therefore, in practice the calculation of two coefficients for each polynomial Ck(ω, Z) is sufficient.
II. When the class MS is given by the envelope spectral density Sog(ω), an estimate of the mean variance can be found more precisely. Let the value ωS be known for the envelope spectral density, and let the value ω̄ be determined for the system with the given vector Z. The following considerations are valid for ω̄ ≤ ωS.
In this case, the functional (D.7) takes the form

    E(Z) = Σ_{n=0}^{N} cn1(Z) dn1 + ρ² Σ_{n=0}^{N} cn2(Z) dn2 ,

where the quantities dn1 and dn2 are determined by integrals of the shape

    dnk = (1/π) ∫_0^{ω̄k} Sog(ω) ω^{2n} dω ,   n = 0, . . . , N ;   k = 1, 2 ,

and the coefficients cnk(Z) are chosen in such a way that Conditions (D.12), (D.11) are satisfied on the interval [0, ω̄k].
Consider the search process for the vector Zgar that minimises (D.7) for the sampled-data system containing the digital controller with the transfer function (D.1). For this purpose, the application of genetic algorithms is suitable [131], [117].
There are two variants of using genetic algorithms for selecting a controller. Suppose we have to design the sampled-data system of Fig. D.1 for guaranteed performance. For the discrete transfer function DWμLF(T, s, t) of the open sampled-data system with the elements W(s), L(s), F(s) and μ(s), we have the representation [148]

    DWμLF(T, s, 0) = n(ζ)/d(ζ) ,   ζ = e^{sT} .
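A genetic search of the kind referred to here can be sketched in a few lines. This is a generic toy GA, not the algorithm of the GarSD/GenSD modules; the quadratic surrogate E below stands in for the functional (D.7), and the components of Z play the role of the controller parameters of (D.1):

```python
import random

def genetic_min(E, dim, pop=40, gens=60, lo=-5.0, hi=5.0, seed=1):
    """Toy genetic algorithm: tournament selection, blend crossover,
    Gaussian mutation; returns the best parameter vector found."""
    rnd = random.Random(seed)
    P = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(P, key=E)
    for _ in range(gens):
        Q = []
        while len(Q) < pop:
            a = min(rnd.sample(P, 3), key=E)          # tournament selection
            b = min(rnd.sample(P, 3), key=E)
            t = rnd.random()
            child = [t*x + (1.0-t)*y for x, y in zip(a, b)]  # blend crossover
            if rnd.random() < 0.2:                     # Gaussian mutation
                i = rnd.randrange(dim)
                child[i] += rnd.gauss(0.0, 0.3)
            Q.append(child)
        P = Q
        cand = min(P, key=E)
        if E(cand) < E(best):
            best = cand
    return best

# hypothetical smooth surrogate for the functional to be minimised
E = lambda Z: (Z[0] - 1.0)**2 + (Z[1] + 2.0)**2 + 0.5
Z_gar = genetic_min(E, 2)
```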
D.3.1 Structure
2. The module GarSD realises procedures for computing the functional (D.7)
for the sampled-data system and for its minimisation.
3. The module GenSD realises numerical minimisation procedures by applying
genetic algorithms.
As a supplement, the module SEAWAVE may be used. It collects various evaluation methods that are applicable to data describing the effects of sea waves on a ship. The module is suitable for the solution of problems for guaranteed performance, because it contains sea-wave spectra from real measurements.
The information about the external excitation is entered into the structure spectral by the command spt as
spectral=spt(<information>);
In the case of a known envelope spectral density Sog(ω), the variable information may have different formats:
1. Transfer function of the form filter (object tf)
2. Numerator and denominator polynomials of the fractional rational spectral density function
3. Vector with numerical values of the spectral density
4. Coefficients of the exponential functions of the spectral density.
For instance, the envelope spectral density may be given by Sog(ω) = 0.1/(ω⁴ − 2ω² + 2). This corresponds to a form filter with the transfer function Ffil(s) = 0.32/(s² + 0.91s + 1.41). It is set by one of the two commands
spectral=spt(0.1,[1 0 2 0 2]);
spectral=spt(tf(0.32,[1 0.91 1.41]));
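The stated correspondence between density and form filter can be checked numerically via |Ffil(iω)|². The sketch below assumes the density reads Sog(ω) = 0.1/(ω⁴ − 2ω² + 2); note that evaluating the denominator polynomial [1 0 2 0 2] at s = iω produces exactly this sign pattern:

```python
import numpy as np

# Compare S_og(w) = 0.1/(w**4 - 2*w**2 + 2) with the squared magnitude
# of the form filter F_fil(s) = 0.32/(s**2 + 0.91*s + 1.41) on the
# imaginary axis; the match is approximate by construction.
w = np.linspace(0.0, 5.0, 501)
F = 0.32 / ((1j*w)**2 + 0.91*(1j*w) + 1.41)
S_filter = np.abs(F)**2
S_og = 0.1 / (w**4 - 2.0*w**2 + 2.0)
err = float(np.max(np.abs(S_filter - S_og)))
```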
A spectral density of the form Sog(ω) = A ω^{-m} exp(-B ω^{-n}) is fed into the computer by the command
spectral=spt(A,B,m,n);
Moreover, the module spectral includes procedures that allow one to design the envelope spectral density for a given set of excitations, to build the set of excitations in various practical situations (for instance, in the case of three-dimensional disturbance models or of switching between different modes in the system), or to find a rational approximation of the envelope spectral density when several spectra are given by numerical data.
The class of excitations may also be given by the totality of the variances of its excitations and their moments d0, d1, . . . together with the limit frequency ωS of the class of spectral densities. Then the class is defined by the command
spectral=spt(d_i,beta_S);
The other one is free of these restrictions but needs more computation time. For realising the latter algorithm, some macros of the toolbox DirectSD were employed. The algorithm is selected automatically.
Moreover, the toolbox GarSD contains several procedures for testing the stability of sampled-data systems, for computing the poles or the oscillation index, for constructing the PFR, and for determining the transfer functions of stabilising controllers and of controllers with certain assigned poles.
with the values α = 0.051 s⁻¹, β = 25 s. The transfer functions of the process, the actuator and the feedback are given by
alpha=0.051; beta=25;
F=tf(alpha/beta,[1 1/beta 0]);
W=tf(1); L=tf(1);
and this information is collected in the structure of the system with the sam-
pling period 1 sec by the command
system=sys(F,W,L,1);
Suppose the class of excitations MS is given by the envelope spectral density

    Sog(ω) = 0.0757 / (ω⁴ − 2.489 ω² + 1.848) .   (D.14)

The structure of this class of excitations is generated by the command
spectral=spt(0.0757,[1 0 2.489 0 1.848]);
For the weighting coefficient ρ = 0.1, a system with guaranteed performance should be designed, where the sampling period is T = 1 s. As the number of iterations for the genetic algorithm we choose num=50. For the selection of the minimal controller order according to the first variant, the command
[system_gar,reg_gar,D_e,D_u,E]=regelgarsys('sta','min',1,system,spectral,0.1,50);
is used. As a result, we obtain the transfer function of the controller reg_gar in the form

    wd1(z) = (37.82 z − 34.99) / (z − 0.535) .

For any excitation of the class MS, the system with this controller guarantees values of the mean variances d̄e ≤ D̄e = 0.000066 and d̄u ≤ D̄u = 0.0105. Besides, the value of the functional (D.7) is estimated by E = 0.00017.
Applying the second variant of controller design, the controller order 1 can be chosen, for instance (the existence of a stabilising controller of first order for the given system has just been proven):
[system_gar,reg_gar,D_e,D_u,E]=regelgarsys('all',1,1,system,spectral,0.1,50);
The macro supplies the transfer function of the controller reg_gar:

    wd2(z) = (66.86 z − 61.45) / (z − 0.14) .

For any excitation of the class MS, the system with this controller guarantees values of the mean variances d̄e ≤ D̄e = 0.000061 and d̄u ≤ D̄u = 0.0173. Besides, the value of the functional (D.7) is estimated by E = 0.00023.
The envelope property of the spectral density obviously ensures that, for any excitation of the class MS, the values of the mean variances of the signals u(t) and e(t) in the systems with the controllers wd1(z) or wd2(z) will not exceed the calculated estimates.
Now, let us assume that the class MS is given by the set of variances

    d0 = 0.05807 ,   d1 = 0.07895   (D.15)

and the limit frequency ωS = 3.04. This new class involves the spectrum (D.14). The structure of excitations is generated by
spectral=spt([0.05807 0.07895],3.04);
Let us take the first variant for the controller design:
[system_gar,reg_gar,D_e,D_u,E]=regelgarsys('sta','min',1,system,spectral,0.1,50);
As a result, we obtain the transfer function of the controller reg_gar

    wd3(z) = (367.8 z − 333.8) / (z + 0.3107) .

For any excitation of the class MS, the system with this controller guarantees values of the mean variances d̄e ≤ D̄e = 0.000056 and d̄u ≤ D̄u = 0.0597 as well as the value E = 0.00065 in (D.7).
Finally, it remains to design the controller, for instance of first order, for the totality of variances by the second variant. The existence of a stabilising controller of first order was proven above. The command
[system_gar,reg_gar,D_e,D_u,E]=regelgarsys('all',1,1,system,spectral,0.1,50);
supplies the transfer function of the controller reg_gar:

    wd4(z) = (1.87 z − 0.24) / (z + 0.83) .

For any excitation of the class MS, the system with this controller guarantees values of the mean variances d̄e ≤ D̄e = 0.019 and d̄u ≤ D̄u = 0.050 as well as E = 0.019 in (D.7).
Let us now investigate the behaviour of the systems with the controllers wd3(z) and wd4(z) under various excitations of the class MS given by the set (D.15). For certain spectral densities of the class MS having the form

    S(ω) = a1 / (ω⁴ + a2 ω² + a3) ,

the values of the coefficients a1, a2, a3 are listed in Table D.1, together with the exact values of the mean variances of the signals e(t) and u(t) for the controllers wd3(z) and wd4(z), respectively.
Table D.1 shows that the values of the mean variances of the signals e(t) and u(t) for all considered excitations do not exceed the calculated estimates.
Table D.1. Variances of the output of the system with guaranteed performance for
various excitations from the class MS
16. B.A. Bamieh and J.B. Pearson. The H2 problem for sampled-data systems.
Syst. Contr. Lett., 19(1):1–12, 1992.
17. B.A. Bamieh, J.B. Pearson, B.A. Francis, and A. Tannenbaum. A lifting tech-
nique for linear periodic systems with applications to sampled-data control
systems. Syst. Contr. Lett., 17:79–88, 1991.
18. V.A. Besekerskii and A.V. Nebylov. Robust systems in automatic control.
Nauka, Moscow, 1983. (in Russian).
19. M.J. Blachuta. Contributions to the theory of discrete-time control for
continuous-time systems. Habilitation thesis, Silesian Techn. University, Gli-
wice, Poland, 1999.
20. M.J. Blachuta. Discrete-time modeling of sampled-data control systems with
direct feedthrough. IEEE Trans. Autom. Contr, 44(1):134–139, 1999.
21. Ch. Blanch. Sur les équations différentielles linéaires à coefficients lentement variables. Bull. technique de la Suisse romande, 74:182–189, 1948.
22. S. Bochner. Lectures on Fourier Integrals. University Press, Princeton, NJ,
1959.
23. I. Boroday, V. Mohrenschildt, et al. Behavior of ships in ocean waves. Su-
dostroyenie, Leningrad, 1969.
24. I.K. Boroday and V.V. Nezetaev. Application problems of dynamics for ships
on waves. Sudostroyenie, Leningrad, 1989.
25. G.D. Brown, M.G. Grimble, and D. Biss. A simple efficient H∞ controller algorithm. In Proc. 26th IEEE Conf. Decision Contr, Los Angeles, 1987.
26. B.W. Bulgakov. Schwingungen. GITTL, Moskau, 1954. (in Russisch).
27. F.M. Callier and C.A. Desoer. Linear system theory. Springer-Verlag, New
York, 1991.
28. M. Cantoni. Algebraic characterization of the H∞ and H2 norms for linear continuous-time periodic systems. In Proc. 4th Asian Control Conference, pages 1945–1950, Singapore, 2002.
29. S.S.L. Chang. Synthesis of optimum control systems. McGraw Hill, New York,
Toronto, London, 1961.
30. T. Chen and B.A. Francis. Optimal sampled-data control systems. Springer-
Verlag, Berlin, Heidelberg, New York, 1995.
31. T.A.C.M. Claasen and W.F.G. Mecklenbräuker. On stationary linear time-varying systems. IEEE Trans. Circuits and Systems, CAS-29(2):169–184, 1982.
32. P. Colaneri. Continuous-time periodic systems in H2 and H∞. Part I: Theoretical aspects; Part II: State feedback control. Kybernetika, 36(3):211–242; 329–350, 2000.
33. R.E. Crochiere and L.R. Rabiner. Multirate digital signal processing. Prentice-
Hall, Englewood Cliffs, NJ, 1983.
34. L. Dai. Singular control systems. Lecture notes in Control and Information
Sciences. Springer-Verlag, New York, 1989.
35. J.A. Daletskii and M.G. Krein. Stability of solutions of differential equations in Banach space. Nauka, Moscow, 1970. (in Russian).
36. R. D'Andrea. Software for modeling, analysis, and control design for multidimensional systems. In Proc. IEEE Symp. on Computer Aided Control System Design (CACSD99), pages 24–27, Kohala Coast, Island of Hawaii, Hawaii, USA, 1999.
37. C.E. de Souza and G.C. Goodwin. Intersample variance in discrete minimum
variance control. IEEE Trans. Autom. Contr, AC-29:759–761, 1984.
38. B.W. Dickinson. Systems – Analysis, Design and Computation. Prentice Hall, Englewood Cliffs, NJ, 1991.
39. G. Doetsch. Anleitung zum praktischen Gebrauch der Laplace Transformation
und z-Transformation. Oldenbourg, München, Wien, 1967.
40. R.C. Dorf and R.H. Bishop. Modern control systems. Pearson Prentice Hall,
Upper Saddle River, NJ, tenth edition, 2001.
41. J.C. Doyle. Guaranteed margins for LQG regulators. IEEE Trans. Autom.
Contr, AC-23(8):756–757, 1978.
42. J.C. Doyle, B.A. Francis, and A.R. Tannenbaum. Feedback control theory.
Macmillan, New York, 1992.
43. S. Engell. Lineare optimale Regelung. Springer-Verlag, Berlin, 1988.
44. D.K. Faddeev and V.N. Faddeeva. Numerische Methoden der linearen Algebra.
Oldenbourg, München, 1979. (with L. Bittner).
45. A. Feuer and G.C. Goodwin. Generalised sample and hold functions – frequency domain analysis of robustness, sensitivity and intersampling difficulties. IEEE Trans. Autom. Contr, AC-39(5):1042–1047, 1994.
46. N. Fliege. Multiraten-Signalverarbeitung. B.G. Teubner, Stuttgart, 1993.
47. V.N. Fomin. Control methods for discrete multidimensional processes. Univer-
sity press, Leningrad, 1985. (in Russian).
48. V.N. Fomin. Regelungsverfahren für diskrete Mehrgrößenprozesse. Verlag der Universität, Leningrad, 1985. (in Russisch).
49. G.F. Franklin, J.D. Powell, and A. Emami-Naeini. Feedback Control of Dy-
namic Systems. Prentice Hall, Upper Saddle River, NJ 07458, 4 edition, 2002.
50. G.F. Franklin, J.D. Powell, and H.L. Workman. Digital control of dynamic
systems. Addison Wesley, New York, 1990.
51. F.R. Gantmacher. The theory of matrices. Chelsea, New York, 1959.
52. E.G. Gilbert. Controllability and observability in multivariable control sys-
tems. SIAM J. Control, A(1):128–151, 1963.
53. G.C. Goodwin, S.F. Graebe, and M.E. Salgado. Control system design.
Prentice-Hall, Upper Saddle River, NJ 07458, 2001.
54. G.C. Goodwin and M. Salgado. Frequency domain sensitivity functions for
continuous-time systems under sampled-data control. Automatica, 30(8):1263–1270, 1994.
55. M.J. Grimble. Robust Industrial Control: Optimal Design Approach for Poly-
nomial Systems. International Series in Systems and Control Engineering.
Prentice Hall International (UK) Ltd, Hemel Hempstead, Hertfordshire, 1994.
56. M.J. Grimble and V. Kucera, editors. Polynomial methods for control systems
design. Springer-Verlag, London, 1996.
57. M. Günther. Kontinuierliche und zeitdiskrete Regelungen. B.G. Teubner,
Stuttgart, 1997.
58. T. Hagiwara and M. Araki. FR-operator approach to the H2-analysis and synthesis of sampled-data systems. IEEE Trans. Autom. Contr, AC-40(8):1411–1421, 1995.
59. V. Hahn. Direkte adaptive Regelstrategien für die diskrete Regelung von Mehrgrößensystemen. PhD thesis, University of Bochum, 1983.
60. M.E. Halpern. Preview tracking for discrete-time SISO systems. IEEE Trans.
Autom. Contr, AC-39(3):589–592, 1994.
61. S. Hara, H. Fujioka, and P.T. Kabamba. A hybrid state-space approach to
sampled-data feedback control. Linear Algebra and Its Applications, 205–206:675–712, 1994.
62. U.K. Herne. Methoden zur rechnergestützten Analyse und Synthese von Mehrgrößenregelsystemen in Polynommatrizendarstellung. PhD thesis, University of Bochum, 1988.
63. R. Isermann. Digitale Regelungssysteme. Band I: Grundlagen, deterministische Regelungen. Band II: Stochastische Regelungen, Mehrgrößenregelungen, Adaptive Regelungen, Anwendungen. Springer-Verlag, Berlin, 2 edition, 1987.
64. M.A. Jevgrafov. Analytische Funktionen. Nauka, Moskau, 1965. (in Russisch).
65. G. Jorke, B.P. Lampe, and N. Wengel. Arithmetische Algorithmen der
Mikrorechentechnik. Verlag Technik, Berlin, 1989.
66. E.I. Jury. Sampled-data control systems. John Wiley, New York, 1958.
67. P.T. Kabamba and S. Hara. Worst-case analysis and design of sampled-data
control systems. IEEE Trans. Autom. Contr, AC-38(9):1337–1358, 1993.
68. T. Kaczorek. Linear control systems, volume II - Synthesis of multivariable
systems. J. Wiley, New York, 1993.
69. T. Kailath. Linear Systems. Prentice Hall, Englewood Cliffs, NJ, 1980.
70. R. Kalman and J.E. Bertram. A unified approach to the theory of sampling systems. J. Franklin Inst., 267:405–436, 1959.
71. R. Kalman, Y.C. Ho, and K. Narendra. Controllability of linear dynamical systems. Contributions to the Theory of Differential Equations, 1:189–213, 1963.
72. R.E. Kalman. Mathematical description of linear dynamical systems. SIAM
J. Control, A(1):152–192, 1963.
73. S. Karlin. A first course in stochastic processes. Academic Press, New York,
1966.
74. V.J. Katkovnik and R.A. Polucektov. Discrete multidimensional control.
Nauka, Moscow, 1966. (in Russian).
75. J.P. Keller and B.D.O. Anderson. H∞-Optimierung abgetasteter Regelsysteme. Automatisierungstechnik, 40(4):114–123, 1993.
76. U. Keuchel. Methoden zur rechnergestützten Analyse und Synthese von Mehrgrößensystemen in Polynommatrizendarstellung. PhD thesis, University of Bochum, 1988.
77. P.P. Khargonekar and N. Sivarshankar. H2 -optimal control for sampled-data
systems. Systems & Control Letters, 18:627–631, 1992.
78. U. Korn and H.-H. Wilfert. Mehrgrößenregelungen. Verlag Technik, Berlin,
1982.
79. V. Kucera. Discrete Linear Control. The Polynomial Approach. Academia,
Prague, 1979.
80. V. Kucera. Analysis and Design of Discrete Linear Control Systems. Prentice
Hall, London, 1991.
81. B.C. Kuo and D.W. Peterson. Optimal discretization of continuous-data con-
trol systems. Automatica, 9(1):125–129, 1973.
82. H. Kwakernaak. Minimax frequency domain performance and robustness optimisation of linear feedback systems. IEEE Trans. Autom. Contr, AC-30(10):994–1004, 1985.
83. H. Kwakernaak. The polynomial approach to H∞ regulation. In E. Mosca and L. Pandolfi, editors, H∞ control theory, volume 1496 of Lecture Notes in Mathematics, pages 141–221. Springer-Verlag, London, 1990.
84. H. Kwakernaak and R. Sivan. Linear Optimal Control Systems. Wiley-
Interscience, New York, 1972.
85. S. Lall and C. Beck. Model reduction of complex systems in the linear-fractional
framework. In Proc. IEEE Int. Symp. on Computer Aided Control System
Design (CACSD99), pages 34–39, Kohala Coast, Island of Hawaii, Hawaii,
USA, 1999.
86. B.P. Lampe. Strukturelle Instabilität in linearen Systemen – Frequenzgangsmethoden auf dem Prüfstand der Mathematik. In Mitteilungen der Mathematischen Gesellschaft in Hamburg, volume XVIII, pages 9–26, Hamburg, Germany, 1999.
87. B.P. Lampe, G. Jorke, and N. Wengel. Algorithmen der Mikrorechentechnik.
Verlag Technik, Berlin, 1984.
88. B.P. Lampe, M.A. Obraztsov, and E.N. Rosenwasser. H2 -norm computation
for stable linear continuous-time periodic systems. Archives of Control Sci-
ences, 14(2):147–160, 2004.
89. B.P. Lampe, M.A. Obraztsov, and E.N. Rosenwasser. Statistical analysis
of stable FDLCP systems by parametric transfer matrices. Int. J. Control,
78(10):747–761, Jul 2005.
90. B.P. Lampe and U. Richter. Digital controller design by parametric transfer
functions - comparison with other methods. In Proc. 3. Int. Symp. Methods
Models Autom. Robotics, volume 1, pages 325–328, Miedzyzdroje, Poland, 1996.
91. B.P. Lampe and U. Richter. Experimental investigation of parametric fre-
quency response. In Proc. 4. Int. Symp. Methods Models Autom. Robotics,
pages 341–344, Miedzyzdroje, Poland, 1997.
92. B.P. Lampe and E.N. Rosenwasser. Design of hybrid analog-digital systems
by parametric transfer functions. In Proc. 32nd CDC, pages 3897–3898, San
Antonio, TX, 1993.
93. B.P. Lampe and E.N. Rosenwasser. Application of parametric frequency re-
sponse to identication of sampled-data systems. In Proc. 2. Int. Symp. Meth-
ods Models Autom. Robotics, volume 1, pages 295–298, Miedzyzdroje, Poland,
1995.
94. B.P. Lampe and E.N. Rosenwasser. Best digital approximation of continuous controllers and filters in H2. In Proc. 41st KoREMA, volume 2, pages 65–69, Opatija, Croatia, 1996.
95. B.P. Lampe and E.N. Rosenwasser. Best digital approximation of continuous controllers and filters in H2. AUTOMATIKA, 38(3–4):123–127, 1997.
96. B.P. Lampe and E.N. Rosenwasser. Parametric transfer functions for sampled-
data systems with time-delayed controllers. In Proc. 36th IEEE Conf. Decision
Contr, pages 1609–1614, San Diego, CA, 1997.
97. B.P. Lampe and E.N. Rosenwasser. Sampled-data systems: The L2-induced operator norm. In Proc. 4. Int. Symp. Methods Models Autom. Robotics, pages 205–207, Miedzyzdroje, Poland, 1997.
98. B.P. Lampe and E.N. Rosenwasser. Statistical analysis and H2-norm of finite dimensional linear time-periodic systems. In Proc. IFAC Workshop on Periodic Control Systems, pages 9–14, Como, Italy, Aug. 2001.
99. B.P. Lampe and E.N. Rosenwasser. Forward and backward models for anoma-
lous linear discrete-time systems. In Proc. 9th IEEE Symp. Methods Models
Autom. Robotics, pages 369–373, Miedzyzdroje, Poland, Aug 2003.
100. B.P. Lampe and E.N. Rosenwasser. Operational description and statistical analysis of linear periodic systems on the unbounded interval −∞ < t < ∞. European J. Control, 9(5):508–521, 2003.
101. B.P. Lampe and E.N. Rosenwasser. Closed formulae for the L2-norm of linear continuous-time periodic systems. In Proc. IFAC Workshop on Periodic Control Systems, pages 231–236, Yokohama, Japan, Sep 2004.
102. B.P. Lampe and E.N. Rosenwasser. Unterordnung und Dominanz rationaler Matrizen. Automatisierungstechnik, 53(9):434–444, 2005.
103. F.H. Lange. Signale und Systeme, volume 13. Verlag Technik, Berlin, 1971.
104. V.B. Larin, K.I. Naumenko, and V.N. Suntsov. Spectral methods for design of
linear systems with feedback. Naukova Dumka, Kiev, 1971. (in Russian).
105. B. Lennartson and T. Söderström. Investigation of the intersample variance in sampled-data control. Int. J. Control, 50:1587–1602, 1989.
106. B. Lennartson, T. Söderström, and Sun Zeng-Qi. Intersample behavior as measured by continuous-time quadratic criteria. Int. J. Control, 49:2077–2083, 1989.
107. O. Lingärde and B. Lennartson. Frequency analysis for continuous-time systems under multirate sampled-data control. In Proc. 13th IFAC Triennial World Congr., volume 2a10, 5, pages 349–354, San Francisco, USA, 1996.
108. L. Ljung. System Identification – Theory for the User. Prentice-Hall, Englewood Cliffs, NJ, 1987.
109. D.G. Luenberger. Dynamic equations in descriptor form. IEEE Trans. Autom.
Contr, AC-22(3):312–321, 1977.
110. J. Lunze. Robust multivariable feedback control. Akademie-Verlag, Berlin, 1988.
111. J. Lunze. Regelungstechnik 2 – Mehrgrößensysteme, Digitale Regelung. Springer-Verlag, Berlin, Heidelberg, 1997.
112. N.N. Lusin. Matrix theory for studying differential equations. Avtomatika i
Telemechanika, 5:466, 1940. (in Russian).
113. N.N. Lusin. Matrizentheorie zum Studium von Differentialgleichungen. Avtomatika i Telemechanika, 5:466, 1940.
114. J.M. Maciejowski. Multivariable feedback design. Addison-Wesley, Wokingham,
England a.o., 1989.
115. J.M. Maciejowski. Predictive control - with constraints. Pearson Education
Lim., Harlow, England, 2002.
116. A.G. Madievski and B.D.O. Anderson. A lifting technique for sampled-data
controller reduction for closed-loop transfer function consideration. In Proc.
32nd IEEE Conf. Decision Contr, pages 2929–2930, San Antonio, TX, 1993.
117. K.F. Man, K.S. Tang, and S. Kwong. Genetic algorithms. Springer-Verlag,
London Berlin Heidelberg, 1999.
118. S.G. Michlin. Vorlesungen über lineare Integralgleichungen. Dt. Verlag d. Wissenschaften, Berlin, 1962.
119. B.C. Moore. Principal component analysis in linear systems: Controllability,
observability and model reduction. IEEE Trans. Autom. Contr, AC-26(1):17–32, 1981.
120. R. Müller. Entwurf von Mehrgrößenreglern durch Frequenzgang-Approximation. PhD thesis, University of Dortmund, 1996.
121. A.V. Nebylov. Warranting of accuracy of control. Nauka, Moscow, 1998. (in
Russian).
122. A.V. Nebylov. Measuring parameters of a plane near the sea surface. Saint Pe-
tersburg State University Academic Press, St. Petersburg, 2000. (in Russian).
123. K. Ogata. Modern control engineering. Prentice-Hall, Upper Saddle River, NJ
07458, 2002.
124. V.G. Pak and V.N. Fomin. Linear quadratic optimal control problem under
known disturbance I. Abstract linear quadratic problem under known distur-
bance. Preprint VINITI, N2063-B97, St. Petersburg, 1997. (in Russian).
125. K. Parks and J.J. Bongiorno. Modern Wiener-Hopf design of optimal controllers – Part II: The multivariable case. IEEE Trans. Autom. Contr, AC-34(6):619–626, 1989.
126. R.V. Patel. Computation of minimal-order state-space realisations and observability indices using orthonormal transformations. In R.V. Patel, A.J. Laub, and P.M. Van Dooren, editors, Numerical Linear Algebra Techniques for Systems and Control, pages 195–212. IEEE Press, New York, 1994.
127. T.P. Perry, G.M.H. Leung, and B.A. Francis. Performance analysis of sampled-
data control systems. Automatica, 27(4):699–704, 1991.
128. U. Petersohn, H. Unger, and W. Wardenga. Beschreibung von Multirate-Systemen mittels Matrixkalkül. AEÜ, 48(1):34–41, 1994.
129. J.P. Petrov. Design of optimal control systems under incompletely known input
disturbances. University press, Leningrad, 1987. (in Russian).
130. K.Y. Polyakov, E.N. Rosenwasser, and B.P. Lampe. DirectSD - a toolbox for di-
rect design of sampled-data systems. In Proc. IEEE Intern. Symp. CACSD99,
pages 357–362, Kohala Coast, Island of Hawaii, Hawaii, USA, 1999.
131. K.Y. Polyakov, E.N. Rosenwasser, and B.P. Lampe. Quasipolynomial low-
order digital controller design using genetic algorithms. In Proc. 9th IEEE
Mediterranean Conf. on Control and Automation, pages WM1B5, Dubrovnik,
Croatia, June 2001.
132. K.Y. Polyakov, E.N. Rosenwasser, and B.P. Lampe. DirectSDM - a toolbox for
polynomial design of multivariable sampled-data systems. In Proc. IEEE Int.
Symp. Computer Aided Control Systems Design, pages 95–100, Taipei, Taiwan,
Sep 2004.
133. V.M. Popov. Hyperstability of control systems. Springer-Verlag, Berlin, 1973.
134. I.I. Priwalow. Einführung in die Funktionentheorie. 3. Aufl., B.G. Teubner, Leipzig, 1967.
135. R. Rabenstein. Diskrete Simulation linearer mehrdimensionaler Systeme. PhD thesis, University of Erlangen-Nürnberg, 1991.
136. J.R. Ragazzini and G.F. Franklin. Sampled-data control systems. McGraw-Hill,
New York, 1958.
137. J.R. Ragazzini and L.A. Zadeh. The analysis of sampled-data systems. AIEE Trans., 71:225–234, 1952.
138. J. Raisch. Mehrgrößenregelung im Frequenzbereich. R. Oldenbourg Verlag, München, 1994.
139. K.S. Rattan. Digitalization of existing control systems. IEEE Trans. Autom. Contr., AC-29:282–285, 1984.
140. K.S. Rattan. Compensating for computational delay in digital equivalent of continuous control systems. IEEE Trans. Autom. Contr., AC-34:895–899, 1989.
141. K. Reinschke. Lineare Regelungs- und Steuerungstheorie. Springer-Verlag,
Berlin, 2006.
142. G. Roppenecker. Vollständige modale Synthese linearer Systeme und ihre Anwendung zum Entwurf strukturbeschränkter Zustandsrückführungen. Fortschr.-Ber. VDI-Z., Reihe 8, Nr. 59, VDI-Verlag, Düsseldorf, 1983.
143. E.N. Rosenwasser. Lyapunov-Indizes in der linearen Regelungstheorie. Nauka, Moskau, 1977. (in Russian).
144. E.N. Rosenwasser, P.G. Fedorov, and B.P. Lampe. Construction of MFD-representation of real rational transfer matrices on basis of normalisation procedure. In Int. Conf. on Computer Methods for Control Systems, pages 39–42, Szczecin, Poland, December 1997.
145. E.N. Rosenwasser, P.G. Fedorov, and B.P. Lampe. Construction of state-space model with minimal dimension for multivariable system on basis of transfer matrix normalization procedure. In Proc. 5. Int. Symp. Methods Models Autom. Robotics, volume 1, pages 235–238, Miedzyzdroje, Poland, 1998.
146. E.N. Rosenwasser and B.P. Lampe. Digitale Regelung in kontinuierlicher Zeit
- Analyse und Entwurf im Frequenzbereich. B.G. Teubner, Stuttgart, 1997.
147. E.N. Rosenwasser and B.P. Lampe. Algebraische Methoden zur Theorie der Mehrgrößen-Abtastsysteme. Universitätsverlag, Rostock, 2000. ISBN 3-86009-195-6.
148. E.N. Rosenwasser and B.P. Lampe. Computer Controlled Systems - Analysis
and Design with Process-orientated models. Springer-Verlag, London Berlin
Heidelberg, 2000.
149. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Entwurf optimaler Kursregler mit Hilfe von parametrischen Übertragungsfunktionen. Automatisierungstechnik, 44(10):487–495, 1996.
150. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Frequency domain method for H2 optimization of time-delayed sampled-data systems. Automatica, 33(7):1387–1392, 1997.
151. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Optimal discrete filtering for time-delayed systems with respect to mean-square continuous-time error criterion. Int. J. Adapt. Control Signal Process., 12:389–406, 1998.
152. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Application of Laplace transformation for digital redesign of continuous control systems. IEEE Trans. Automat. Contr., 44(4):883–886, April 1999.
153. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Comments on "A technique for optimal digital redesign of analog controllers". IEEE Trans. Control Systems Technology, 7(5):633–635, September 1999.
154. W.J. Rugh. Linear system theory. Prentice-Hall, Englewood Cliffs, NJ, 1993.
155. V.O. Rybinskii and B.P. Lampe. Accuracy estimation for digital control systems at incomplete information about stochastic input disturbances. In B.P. Lampe, editor, Maritime Systeme und Prozesse, pages 43–52. Universitätsdruckerei, Rostock, 2001.
156. V.O. Rybinskii, B.P. Lampe, and E.N. Rosenwasser. Design of digital ship motion control with guaranteed performance. In Proc. 49. Int. Wiss. Kolloquium, volume 1, pages 381–386, Ilmenau, Germany, 2004.
157. M. Saeki. Method of solving a polynomial equation for an H∞ optimal control problem. IEEE Trans. Autom. Contr., AC-34:166–168, 1989.
158. M. Sagfors. Optimal Sampled-Data and Multirate Control. PhD thesis, Faculty of Chemical Engineering, Åbo Akademi University, Finland, 1998.
159. L. Schwartz. Méthodes mathématiques pour les sciences physiques. Hermann, Paris, 1961.
160. H. Schwarz. Optimale Regelung und Filterung - Zeitdiskrete Regelungssysteme.
Akademie-Verlag, Berlin, 1981.
161. L.S. Shieh, B.B. Decrocq, and J.L. Zhang. Optimal digital redesign of cascaded analogue controllers. Optimal Control Appl. Methods, 12:205–219, 1991.
162. L.S. Shieh, J.L. Zhang, and J.W. Sunkel. A new approach to the digital redesign of continuous-time controllers. Control Theory Adv. Techn., 8:37–57, 1992.
163. I.Z. Shtokalo. Generalisation of symbolic method principal formula onto linear differential equations with variable coefficients. Dokl. Akad. Nauk SSSR, 42:9–10, 1945. (in Russian).
164. S. Skogestad and I. Postlethwaite. Multivariable feedback control: Analysis and
design. Wiley, Chichester, 2nd edition, 2005.
165. L.M. Skvorzov. Transformation algorithm for mathematical models of multidimensional control systems. Izv. Akad. Nauk, Control theory and systems, 2:17–23, 1997.
166. V.B. Sommer, B.P. Lampe, and E.N. Rosenwasser. Experimental investigations of analog-digital control systems by frequency methods. Automation and Remote Control, 55(Part 2):912–920, 1994.
167. E.D. Sontag. Mathematical control theory: Deterministic finite dimensional systems. Springer-Verlag, New York, 1998.
168. D.S. Stearns. Digitale Verarbeitung analoger Signale. R. Oldenbourg Verlag, München, 1988.
169. R.F. Stengel. Stochastic optimal control. Theory and application. J. Wiley &
Sons, Inc., New York, 1986.
170. Y. Tagawa and R. Tagawa. A computer aided technique to derive the class of realizable transfer function matrices of a control system for a prescribed order controller. In Proc. IEEE Int. Symp. on Computer Aided Control System Design (CACSD'99), pages 321–327, Kohala Coast, Island of Hawaii, Hawaii, USA, 1999.
171. E.C. Titchmarsh. The theory of functions. Oxford science publ. University Press, Oxford, 2nd edition, 1997. Reprint.
172. H.T. Toivonen. Sampled-data control of continuous-time systems with an H∞-optimality criterion. Automatica, 28(1):45–54, 1992.
173. H.T. Toivonen. Worst-case sampling for sampled-data H∞ design. In Proc. 32nd IEEE Conf. Decision Contr., pages 337–342, San Antonio, TX, 1993.
174. H. Tolle. Mehrgrößenregelkreissynthese, volume 1, 2. R. Oldenbourg Verlag, München, 1983, 1985.
175. J. Tou. Digital and Sampled-Data Control Systems. McGraw-Hill, New York,
1959.
176. H.L. Trentelman and A.A. Stoorvogel. Sampled-data and discrete-time H2 optimal control. In Proc. 32nd Conf. Dec. Contr., pages 331–336, San Antonio, TX, 1993.
177. J.S. Tsypkin. Sampling systems theory. Pergamon Press, New York, 1964.
178. R. Unbehauen. Systemtheorie, volume 2. R. Oldenbourg Verlag, München, 7th edition, 1998.
179. H. Unger, U. Petersohn, and S. Lindow. Zur Beschreibung hybrider Multiraten-Systeme mittels Matrixkalküls. FREQUENZ, 1997. (submitted).
180. K.G. Valeyev. Application of Laplace transform for analysis of linear systems. In Proc. Intern. Conf. on Nonlin. Oscill., volume I, pages 126–132, Kiev, 1970. (in Russian).
181. B. van der Pol and H. Bremmer. Operational calculus based on the two-sided
Laplace integral. University Press, Cambridge, 1959.
182. A. Varga. On stabilization methods of descriptor systems. Syst. Contr. Lett., 24:133–138, 1995.
183. M. Vidyasagar. Control system synthesis. MIT Press, Cambridge, MA, 1994.
184. L.N. Volgin. Optimal discrete control of dynamic systems. Nauka, Moscow,
1986. (in Russian).
185. S. Volovodov, B.P. Lampe, and E.N. Rosenwasser. Application of method of integral equations for analysis of complex periodic behaviors in Chua's circuits. In Proc. 1st IEEE Int. Conf. Control Oscill. Chaos, pages 125–128, St. Petersburg, Russia, August 1997.
186. J. Wernstedt. Experimentelle Prozeßanalyse. Verlag Technik, Berlin, 1989.
187. E.T. Whittaker and G.N. Watson. A course of modern analysis. University Press, Cambridge, 4th edition, 1927.
188. J.H. Wilkinson. The algebraic eigenvalue problem. Clarendon Press, Oxford,
1965.
189. W.A. Wolovich. Linear Multivariable Systems. Springer-Verlag, New York,
1974.
190. W.A. Wolovich. Automatic control systems. Harcourt Brace, 1994.
191. W.M. Wonham. Linear multivariable control - A geometric approach. Springer-Verlag, New York, 3rd edition, 1985.
192. R.A. Yackel, B.C. Kuo, and G. Singh. Digital redesign of continuous systems by matching of states at multiple sampling periods. Automatica, 10:105–111, 1974.
193. D.V. Yakubovich. Algorithm for supplementing a rectangular polynomial matrix to a square matrix with given determinant. Kybernetika i Vychisl., 23:85–89, 1984.
194. Y. Yamamoto. A function space approach to sampled-data systems and tracking problems. IEEE Trans. Autom. Contr., AC-39(4):703–713, 1994.
195. Y. Yamamoto and P. Khargonekar. Frequency response of sampled-data systems. IEEE Trans. Autom. Contr., AC-41(2):161–176, 1996.
196. D.C. Youla, H.A. Jabr, and J.J. Bongiorno (Jr.). Modern Wiener-Hopf design of optimal controllers. Part II: The multivariable case. IEEE Trans. Autom. Contr., AC-21(3):319–338, 1976.
197. L.A. Zadeh. Circuit analysis of linear varying-parameter networks. J. Appl. Phys., 21(6):1171–1177, 1950.
198. L.A. Zadeh. Frequency analysis of variable networks. Proc. IRE, 39(March):291–299, 1950.
199. L.A. Zadeh. Stability of linear varying-parameter systems. J. Appl. Phys., 22(4):202–204, 1951.
200. C. Zhang and J. Zhang. H2 performance of continuous periodically time-varying controllers. Syst. Contr. Lett., 32:209–221, 1997.
201. P. Zhang, S.X. Ding, G.Z. Wang, and D.H. Zhou. Fault detection in multirate
sampled-data systems with time-delays. In Proc. 15th IFAC Triennial World
Congr., volume Fault detection, supervision and safety of technical processes,
page REG2179, Barcelona, 2002.
202. J. Zhou. Harmonic analysis of linear continuous-time periodic systems. PhD
thesis, Kyoto University, 2001.
203. J. Zhou, T. Hagiwara, and M. Araki. Trace formulas for the H2 norm of linear continuous-time periodic systems. In Prepr. IFAC Workshop on Periodic Control Systems, pages 3–8, Como, Italy, 2001.
204. J. Zhou, T. Hagiwara, and M. Araki. Trace formula of linear continuous-time periodic systems via the harmonic Lyapunov equation. Int. J. Control, 76(5):488–500, 2003.
205. K. Zhou and J.C. Doyle. Essentials of robust control. Prentice-Hall Intern.,
Upper Saddle River, NJ, 1998.
206. K. Zhou, J.C. Doyle, and K. Glover. Robust and optimal control. Prentice-Hall, Englewood Cliffs, NJ, 1996.
207. J.Z. Zypkin. Sampling systems theory. Pergamon Press, New York, 1964.
Index

coprime, 7
Defect
  normal, 22
degree, 5
  rational matrix, 61
denominator
  left, 63
  rat. matrix, 59
  right, 63
descriptor process, 185
descriptor system, 38, 90, 197
design
  guaranteed performance, 448
determinantal divisor, 24
  greatest, 24
determinants, 7
difference equation
  derived
    output, 185
  original, 185
difference equations
  equivalent, 187
  row reduced, 188
dimension
  standard presentation, 78
discretisation
  polynomial, 267
disturbance input, 225
disturbance transfer matrix, 225
divisor, 6
  common left, 35
  common right, 36
  greatest common, 6
  greatest common left, 35
  greatest common right, 36
DLT = discrete Laplace transform, 258
DMFD = double-sided MFD, 73

eigenoperator, 210
  forward model, 185
eigenvalue
  polynomial matrix, 26
eigenvalue assignment, 149
  PMD, 150
  structural, 150
    PMD, 151
    transfer matrix, 170
element
  inverse, 3
  neutral, 3
  opposite, 4
  zero, 3
elementary divisor, 24
  finite, 39
entire part, 6
equivalence
  strict, 39
excitation
  class, 448
exp.per. = exponential-periodic, 241
exponent
  exp.per. function, 241
exponential function
  matrix, 256

field, 4
  complex number, 4
  real numbers, 4
form element
  transfer function, 249
form function, 246
forward model, 185, 209, 212
  controllable, 189
  discrete
    sampled-data system, 283
forward transfer function, 342
fraction
  equality, 53
  improper, 55
  irreducible, 275
  irreducible form, 54
  proper, 55
  rational, 53
  reducible, 59
  strictly proper, 55
Frobenius matrix
  accompanying, 46
  characteristic, 122
  realisation, 50, 127
function
  exponential-periodic, 241
  fractional rational, 53
  of matrix, 253

GCD = greatest common divisor, 6