
Communications and Control Engineering

Published titles include:


Stability and Stabilization of Infinite Dimensional Systems with Applications
Zheng-Hua Luo, Bao-Zhu Guo and Ömer Morgül
Nonsmooth Mechanics (Second edition)
Bernard Brogliato
Nonlinear Control Systems II
Alberto Isidori
L2-Gain and Passivity Techniques in Nonlinear Control
Arjan van der Schaft
Control of Linear Systems with Regulation and Input Constraints
Ali Saberi, Anton A. Stoorvogel and Peddapullaiah Sannuti
Robust and H∞ Control
Ben M. Chen
Computer Controlled Systems
Efim N. Rosenwasser and Bernhard P. Lampe
Dissipative Systems Analysis and Control
Rogelio Lozano, Bernard Brogliato, Olav Egeland and Bernhard Maschke
Control of Complex and Uncertain Systems
Stanislav V. Emelyanov and Sergey K. Korovin
Robust Control Design Using H∞ Methods
Ian R. Petersen, Valery A. Ugrinovski and Andrey V. Savkin
Model Reduction for Control System Design
Goro Obinata and Brian D.O. Anderson
Control Theory for Linear Systems
Harry L. Trentelman, Anton Stoorvogel and Malo Hautus
Functional Adaptive Control
Simon G. Fabri and Visakan Kadirkamanathan
Positive 1D and 2D Systems
Tadeusz Kaczorek
Identification and Control Using Volterra Models
Francis J. Doyle III, Ronald K. Pearson and Babatunde A. Ogunnaike
Non-linear Control for Underactuated Mechanical Systems
Isabelle Fantoni and Rogelio Lozano
Robust Control (Second edition)
Jürgen Ackermann
Flow Control by Feedback
Ole Morten Aamo and Miroslav Krstic
Learning and Generalization (Second edition)
Mathukumalli Vidyasagar
Constrained Control and Estimation
Graham C. Goodwin, María M. Seron and José A. De Doná
Randomized Algorithms for Analysis and Control of Uncertain Systems
Roberto Tempo, Giuseppe Calafiore and Fabrizio Dabbene
Switched Linear Systems
Zhendong Sun and Shuzhi S. Ge
Subspace Methods for System Identification
Tohru Katayama
Digital Control Systems
Ioan D. Landau and Gianluca Zito
Efim N. Rosenwasser and Bernhard P. Lampe

Multivariable
Computer-controlled
Systems
A Transfer Function Approach

With 27 Figures

Efim N. Rosenwasser, Dr. rer. nat. Dr. Eng.
State Marine Technical University
Lozmanskaya str. 3
190008 Saint Petersburg
Russia

Bernhard P. Lampe, Dr. rer. nat. Dr. Eng.
University of Rostock
Institute of Automation
18051 Rostock
Germany

Series Editors
E.D. Sontag M. Thoma A. Isidori J.H. van Schuppen

British Library Cataloguing in Publication Data


Rosenwasser, Efim
Multivariable computer-controlled systems : a transfer
function approach. - (Communications and control
engineering)
1. Automatic control
I. Title II. Lampe, Bernhard P.
629.8
ISBN-13: 9781846284311
ISBN-10: 1846284317
Library of Congress Control Number: 2006926886
Communications and Control Engineering Series ISSN 0178-5354
ISBN-10: 1-84628-431-7 e-ISBN 1-84628-432-5 Printed on acid-free paper
ISBN-13: 978-1-84628-431-1
© Springer-Verlag London Limited 2006
MATLAB is a registered trademark of The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098,
U.S.A. http://www.mathworks.com
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as
permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the
publishers, or in the case of reprographic reproduction in accordance with the terms of licences issued
by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be
sent to the publishers.
The use of registered names, trademarks, etc. in this publication does not imply, even in the absence of
a specific statement, that such names are exempt from the relevant laws and regulations and therefore
free for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the infor-
mation contained in this book and cannot accept any legal responsibility or liability for any errors or
omissions that may be made.
Printed in Germany
Springer Science+Business Media
springer.com
To Elena and Bärbel
Preface

Classical control theory comprises two principal approaches for continuous-
time and discrete-time linear time-invariant (LTI) systems. The first, consti-
tuted of frequency-domain methods, is based on the concepts of the transfer
function and the frequency response. The second approach arises from the
state-space concept and uses either differential or difference equations for
describing dynamical systems.
Although these approaches were originally separate, it was finally accepted
that rather than hindering each other, they are, in fact, complementary; there-
fore more constructive and comprehensive methods of investigation could
be developed by applying and combining frequency-domain techniques with
state-space ones [68, 55, 53, 40, 49].
A different situation exists in the theory of linear computer-controlled
systems, which are a subclass of sampled-data (SD) systems, because they
are built of both continuous- and discrete-time components. Traditionally,
approximation methods, where the problem is reduced to a complete inves-
tigation of either continuous- or discrete-time LTI models, predominate in
this theory. This assertion can easily be corroborated by studying the leading
monograph in this field [14]. However, to obtain rigorous results, a unified and
accurate description of discrete- as well as continuous-time elements in con-
tinuous time is needed. Unfortunately, as a consequence of this approach, the
models become variable in time. Over the last few years, a series of methods
has been developed for this more complicated problem; many of them are cited
in [30, 158]. An analysis of those references, however, shows that there are
no frequency-domain methods for the analysis and design of SD systems that
could be applied analogously to those used in the theory of LTI systems.
The reason for this deficiency seems to be the lack of a transfer function
concept for this wider class of systems that would parallel the classical transfer
function for LTI systems [7]. Difficulties in introducing such a concept are
caused by the fact that linear computer-controlled systems are non-stationary
and have periodically varying coecients.

In [148] the authors demonstrated that these difficulties could be con-
quered by the concept of the parametric transfer function (PTF) w(s, t),
which, in contrast with the ordinary transfer function for LTI systems, de-
pends on an additional parameter: the time t. Applying the PTF permits the
development of frequency methods for the analysis and design of SD systems
after the pattern of classical methods and by doing so provides important
additional results, practical methods and solutions for a number of new prob-
lems. Last but not least, the PTF yields deeper insight into the structure and
nature of SD systems.
Though, for the most part, [148] handles single-input, single-output (SISO)
systems, it pays attention to practical constraints and makes clear the broad
potential of the PTF approach. Since its publication, the authors have taken
forward a number of investigations which extend these methods to multi-
input, multi-output (MIMO) systems. The results of these investigations are
summarized in the present monograph. In place of the PTF, we now make use
of the parametric transfer matrix (PTM) w(s, t). In making this extension,
obstacles arise because, in contrast with the transfer matrix for LTI systems,
the PTM w(s, t) is not a rational function of the argument s. Fortunately, these
obstacles turn out to be surmountable and we have developed investigation
methods that use only polynomial and rational matrices. Though the final
results are stated in a fairly general form, they open new possibilities for
solving important classes of applied multivariable problems for which other
methods fail.
The theory presented in Multivariable Computer-controlled Systems is con-
ceptually based on the work of J.S. Tsypkin [177], who proposed a general
frequency-domain description of SD systems in the complex s-plane, and
L.A. Zadeh [198, 199], who introduced the PTF concept into automatic con-
trol theory. Other significant results in this field are due to J.R. Ragazzini,
J.T. Tou and S.S.L. Chang [136, 175, 29]. A version of the well-known Wiener-
Hopf method by D. Youla et al. [196] was a useful tool.
The main body of the book consists of ten chapters and is divided into
three parts. Part I (Chapters 1–3) contains preliminary algebraic material.
Chapters 1 and 2 handle the fundamentals of polynomial and rational ma-
trices that are necessary for understanding ideas explained later. Chapter 3
describes a class of rational matrices that are termed normal in the text. At
first sight, these matrices seem to have a number of exotic properties because
their entries are bounded by a multitude of algebraic conditions; however, it
follows from the results of Chapters 1, 2 and 3 that, in practical applications,
it is with these matrices that we mostly have to deal.
Chapters 4 and 5 form Part II of the book, dedicated to some control
problems which are also necessary to further investigations but which are of
additional, independent, importance. Chapter 4 handles the eigenvalue as-
signment problem and the structure of the characteristic matrix of the closed
systems, where the processes are given by polynomial pairs or by polynomial
matrix description (PMD), from a general standpoint.
Chapter 5 looks more deeply into the question of whether the z- and
ζ-transforms are applicable for the investigation of normal and anomalous
discrete systems. In this connection it considers the construction of control-
lable forward and backward models of such systems.
We would emphasize that Chapters 1, 2, 4 and 5 use many results that
are known from the fundamental literature: see, for instance, [51, 133, 69,
114, 27, 80, 68, 206, 111, 167, 164] and others. We have decided to include
this material in the main body of the current book because of the following
considerations:
1. Their inclusion makes the book more readable because it reduces the
reader's need for additional literature to a minimum.
2. The presentation of the material is adapted to suit the objectives of
this particular book.
3. Chapters 1, 2, 4 and 5 contain a number of new results that, in our opin-
ion, will be interesting for readers who are not directly engaged with SD
systems.
Among the latter results are the concept of the simple polynomial matrix,
its property of structural stability and the analysis of rational matrices on
the basis of their dominance and subordination, all of which appear in Chapters
1 and 2. Chapter 2 also details investigations into the reducibility of rational
transfer matrices. Chapter 4 covers the theorems on eigenvalue and eigenstruc-
ture assignment for control systems with PMD processes. In Chapter 5, the
investigation of the applicability of the z- and ζ- (or Taylor-) transformations
to the mathematical description of anomalous discrete systems is obviously
new, as is the generation of controllable forward and backward models for
such systems.
Part III (Chapters 6–10) is mainly concerned with frequency methods for
the investigation of MIMO SD systems. Chapter 6 presents a frequency ap-
proach for parametrically discretized continuous MIMO processes and makes
clear the mutual algebraic properties of the continuous process and the dis-
crete model.
Chapter 7 is dedicated to the mathematical description of the standard SD
system. It is here that we introduce the PTM, among other substantial meth-
ods of description, and make careful investigation of its properties. Stability
and stabilization problems for closed-loop systems, in which the polynomial
solution obtained for the stabilization problem has a very general character, are
studied. In particular, cases with pathological sampling periods are included.
Chapter 8 deals with the analysis of the response of the standard SD
system to stationary stochastic excitation and with the solution of the H2
optimization problem on the basis of the PTM concept and the Wiener-Hopf
method. The method presented is extremely general and, in addition to finding
the optimal control program, it permits us to state a number of fundamental
properties of the optimal system: its structure and the set of its poles, for
instance.
Chapter 9 describes the methods of the preceding three chapters in greater
detail for the special case of single-loop MIMO SD systems. This is done with
the supposition that the transfer matrices of all continuous parts are normal
and that the sampling period is non-pathological. When these suppositions
hold, important special cancellations take place; thus, the critical case, in
which the transfer matrices of continuous elements contain poles on the imag-
inary axis, is considered. In this way we establish a fact important for
applications: the solvability of the connected H2 problem in the critical case
depends on the location of the critical elements inside the control loop with
respect to the input and output of the system. In this case there may be
situations in which the H2 problem has no solution.
Chapter 10 is devoted to the L2 problem for the standard SD system; it
contains, as special cases, the design of optimal tracking systems and the re-
design problem. In our opinion, this case constitutes a splendid example for
demonstrating the possibilities of frequency methods. This chapter demon-
strates that in the multidimensional case the solution of the L2 problem al-
ways leads to a singular quadratic functional for which a set of minimizing
control programs exists. Applying Laplace transforms during the evaluation
by the Wiener-Hopf method allows us to nd the complete set of optimal
solutions; by doing this, input signals of finite duration and constant signals
are included. We know of no alternative methods for constructing the general
solution to this problem.
The book closes with four appendices. Appendix A gives a short introduc-
tion to the ζ-transformation (Taylor transformation), and its relationship to
other operator transformations for discrete sequences. In Appendix B some
auxiliary formulae are derived. Appendix C, written by Dr. K. Polyakov,
presents the MATLAB DirectSDM Toolbox. Using this toolbox, various
H2 and L2 problems for single-loop MIMO systems can be solved numeri-
cally. Appendix D, composed by Dr. V. Rybinskii, describes a design method
for control with guaranteed performance. These controllers guarantee a
required performance for arbitrary members of certain classes of stochastic
disturbances. The MATLAB GarSD Toolbox, used for the numerical
solution of such problems, is also presented.
In our opinion, the best way to get well acquainted with the content of
the book is, of course, the thorough reading of all the chapters in sequence,
starting with Chapter 1. We recognize, however, that this requires effort and
staying-power of the reader; an expert interested only in SD systems can
start directly with Chapter 6, looking into the preceding chapters only when
necessary.
The book is written in a mathematical style. We do not include el-
ementary introductory material on the functioning or the physical and
technological characteristics of computer-controlled systems; likewise, there
is no discussion relating the theory and practice of such systems to their
historical context. For material of this sort, we refer the reader to the extensive
literature in those fields, the above-mentioned reference [14] by Åström and
Wittenmark, for example. From this viewpoint, our book and its predeces-
sor [148] can be seen as extensions of [14], the titles of both being inspired by it.
Multivariable Computer-controlled Systems is addressed to engineers
and scientific workers involved in the investigation and design of computer-
controlled systems. It can also be used as a complementary textbook on
process-oriented methods in computer-controlled systems by students on
courses in control theory, communications engineering and related fields. Prac-
tically oriented mathematicians and engineers working in systems theory will
find interesting insights in the following pages. The mathematical tools used in
this book are, in general, included in basic mathematics syllabuses for engi-
neers at technical universities. Necessary additional material is given directly
in the text. The References section is by no means a complete bibliography
as it contains only those works we used directly in the preparation of the book.
The authors gratefully acknowledge the financial support of the Ger-
man Science Foundation (Deutsche Forschungsgemeinschaft); we especially
thank Dr. Andreas Engelke for his engagement and helpful hints. Thanks to
Mrs. Hannelore Gellert from the University of Rostock and Mrs. Ludmila
Patrashewa from the Saint Petersburg University of Ocean Technology,
additional support by the Euler programme of the German Academic Exchange
Service (Deutscher Akademischer Austauschdienst) was possible, for which we
are grateful.
We are especially indebted to Professors B.D.O. Anderson, K.J. Åström,
P.M. Frank, G.C. Goodwin, M. Grimble, T. Kaczorek, V. Kučera, J. Lunze,
B. Lohmann, M. Šebek, A. Weinmann and a great number of unnamed colleagues
for many helpful discussions and valuable remarks.
The committed work of Oliver Jackson from Springer helped us to overcome
various editorial problems; we appreciate his careful work. We thank Sri
Ramoju Ravi for comments after reading the draft version.
The MATLAB Toolbox DirectSDM by K.Y. Polyakov is available as a free
download from
http://www.iat.uni-rostock.de/blampe/matlab_toolbox.html
In case of any problems, please contact bernhard.lampe@uni-rostock.de.

Rostock, Efim Rosenwasser
May 17, 2006 Bernhard Lampe
Contents

Part I Algebraic Preliminaries

1 Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 Basic Concepts of Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Matrices over Rings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 Left and Right Equivalence of Polynomial Matrices . . . . . . . . . . 12
1.6 Row and Column Reduced Matrices . . . . . . . . . . . . . . . . . . . . . . . 15
1.7 Equivalence of Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8 Normal Rank of Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . . 21
1.9 Invariant Polynomials and Elementary Divisors . . . . . . . . . . . . . . 23
1.10 Latent Equations and Latent Numbers . . . . . . . . . . . . . . . . . . . . . 26
1.11 Simple Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
1.12 Pairs of Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
1.13 Polynomial Matrices of First Degree (Pencils) . . . . . . . . . . . . . . . 38
1.14 Cyclic Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
1.15 Simple Realisations and Their Structural Stability . . . . . . . . . . . 49

2 Fractional Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


2.1 Rational Fractions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
2.2 Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
2.3 McMillan Canonical Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
2.4 Matrix Fraction Description (MFD) . . . . . . . . . . . . . . . . . . . . . . . . 63
2.5 Double-sided MFD (DMFD) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
2.6 Index of Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
2.7 Strictly Proper Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . 77
2.8 Separation of Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
2.9 Inverses of Square Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . 85
2.10 Transfer Matrices of Polynomial Pairs . . . . . . . . . . . . . . . . . . . . . . 87
2.11 Transfer Matrices of PMDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

2.12 Subordination of Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . 94


2.13 Dominance of Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

3 Normal Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105


3.1 Normal Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
3.2 Algebraic Properties of Normal Matrices . . . . . . . . . . . . . . . . . . . 110
3.3 Normal Matrices and Simple Realisations . . . . . . . . . . . . . . . . . . . 114
3.4 Structural Stable Representation of Normal Matrices . . . . . . . . . 116
3.5 Inverses of Characteristic Matrices of Jordan and Frobenius
Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
3.6 Construction of Simple Jordan Realisations . . . . . . . . . . . . . . . . . 126
3.7 Construction of Simple Frobenius Realisations . . . . . . . . . . . . . . 132
3.8 Construction of S-representations from Simple Realisations.
General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.9 Construction of Complete MFDs for Normal Matrices . . . . . . . . 138
3.10 Normalisation of Rational Matrices . . . . . . . . . . . . . . . . . . . . . . . . 141

Part II General MIMO Control Problems

4 Assignment of Eigenvalues and Eigenstructures by


Polynomial Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
4.2 Basic Controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
4.3 Recursive Construction of Basic Controllers . . . . . . . . . . . . . . . . . 154
4.4 Dual Models and Dual Bases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
4.5 Eigenvalue Assignment for Polynomial Pairs . . . . . . . . . . . . . . . . 165
4.6 Eigenvalue Assignment by Transfer Matrices . . . . . . . . . . . . . . . . 169
4.7 Structural Eigenvalue Assignment for Polynomial Pairs . . . . . . . 172
4.8 Eigenvalue and Eigenstructure Assignment for PMD Processes 174

5 Fundamentals for Control of Causal Discrete-time LTI


Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
5.1 Finite-dimensional Discrete-time LTI Processes . . . . . . . . . . . . . . 183
5.2 Transfer Matrices and Causality of LTI Processes . . . . . . . . . . . . 189
5.3 Normal LTI Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
5.4 Anomalous LTI Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
5.5 Forward and Backward Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
5.6 Stability of Discrete-time LTI Systems . . . . . . . . . . . . . . . . . . . . . 222
5.7 Closed-loop LTI Systems of Finite Dimension . . . . . . . . . . . . . . . 225
5.8 Stability and Stabilisation of the Closed Loop . . . . . . . . . . . . . . . 230

Part III Frequency Methods for MIMO SD Systems

6 Parametric Discrete-time Models of Continuous-time


Multivariable Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.1 Response of Linear Continuous-time Processes to
Exponential-periodic Signals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
6.2 Response of Open SD Systems to Exp.per. Inputs . . . . . . . . . . . 245
6.3 Functions of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
6.4 Matrix Exponential Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
6.5 DPFR and DLT of Rational Matrices . . . . . . . . . . . . . . . . . . . . . . 258
6.6 DPFR and DLT for Modulated Processes . . . . . . . . . . . . . . . . . . . 261
6.7 Parametric Discrete Models of Continuous Processes . . . . . . . . . 266
6.8 Parametric Discrete Models of Modulated Processes . . . . . . . . . 271
6.9 Reducibility of Parametric Discrete Models . . . . . . . . . . . . . . . . . 275

7 Description and Stability of SD Systems . . . . . . . . . . . . . . . . . . . 279


7.1 The Standard Sampled-data System . . . . . . . . . . . . . . . . . . . . . . . 279
7.2 Equation Discretisation for the Standard SD System . . . . . . . . . 280
7.3 Parametric Transfer Matrix (PTM) . . . . . . . . . . . . . . . . . . . . . . . . 283
7.4 PTM as Function of the Argument s . . . . . . . . . . . . . . . . . . . . . . . 289
7.5 Internal Stability of the Standard SD System . . . . . . . . . . . . . . . 295
7.6 Polynomial Stabilisation of the Standard SD System . . . . . . . . . 298
7.7 Modal Controllability and the Set of Stabilising Controllers . . . 304

8 Analysis and Synthesis of SD Systems Under Stochastic


Excitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
8.1 Quasi-stationary Stochastic Processes in the Standard SD
System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
8.2 Mean Variance and H2-norm of the Standard SD System . . . . . 312
8.3 Representing the PTM in Terms of the System Function . . . . . 315
8.4 Representing the H2-norm in Terms of the System Function . . 325
8.5 Wiener-Hopf Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
8.6 Algorithm for Realisation of Wiener-Hopf Method . . . . . . . . . . . 332
8.7 Modied Optimisation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 336
8.8 Transformation to Forward Model . . . . . . . . . . . . . . . . . . . . . . . . . 340

9 H2 Optimisation of a Single-loop System . . . . . . . . . . . . . . . . . . 347


9.1 Single-loop Multivariable SD System . . . . . . . . . . . . . . . . . . . . . . . 347
9.2 General Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
9.3 Stabilisation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
9.4 Wiener-Hopf Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
9.5 Factorisation of Quasi-polynomials of Type 1 . . . . . . . . . . . . . . . 355
9.6 Factorisation of Quasi-polynomials of Type 2 . . . . . . . . . . . . . . . 364
9.7 Characteristic Properties of Solution for Single-loop System . . . 373

9.8 Simplied Method for Elementary System . . . . . . . . . . . . . . . . . . 374

10 L2-Design of SD Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381


10.1 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 381
10.2 Pseudo-rational Laplace Transforms . . . . . . . . . . . . . . . . . . . . . . . 383
10.3 Laplace Transforms of Standard SD System Output . . . . . . . . . . 387
10.4 Investigation of Poles of the Image Z(s) . . . . . . . . . . . . . . . . . . . . 392
10.5 Representing the Output Image in Terms of the System
Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
10.6 Representing the L2-norm in Terms of the System Function . . . 399
10.7 Wiener-Hopf Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 402
10.8 General Properties of Optimal Systems . . . . . . . . . . . . . . . . . . . . . 407
10.9 Modied Optimisation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 409
10.10 Single-loop Control System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
10.11 Wiener-Hopf Method for Single-loop Tracking System . . . . . . . 412
10.12 L2 Redesign of Continuous-time LTI Systems under
Persistent Excitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 416
10.13 L2 Redesign of a Single-loop LTI System . . . . . . . . . . . . . . . . . . . 426

Appendices

A Operator Transformations of Taylor Sequences . . . . . . . . . . . . 431

B Sums of Certain Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435

C DirectSDM A Toolbox for Optimal Design of


Multivariable SD Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
C.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
C.2 Data Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
C.3 Operations with Polynomial Matrices . . . . . . . . . . . . . . . . . . . . . . 438
C.4 Auxiliary Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
C.5 H2-optimal Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
C.5.1 Extended Single-loop System . . . . . . . . . . . . . . . . . . . . . . . 440
C.5.2 Function sdh2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
C.6 L2-optimal Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 443
C.6.1 Extended Single-loop System . . . . . . . . . . . . . . . . . . . . . . . 443
C.6.2 Function sdl2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445

D Design of SD Systems with Guaranteed Performance . . . . . . 447


D.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
D.2 Design for Guaranteed Performance . . . . . . . . . . . . . . . . . . . . . . . . 448
D.2.1 System Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 448
D.2.2 Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 450
D.2.3 Calculation of Performance Criterion . . . . . . . . . . . . . . . . 451

D.2.4 Minimisation of Performance Criterion Estimate for


SD Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 452
D.3 MATLAB -Toolbox GarSD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
D.3.1 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
D.3.2 Setting Properties of External Excitations . . . . . . . . . . . . 454
D.3.3 Investigation of SD Systems . . . . . . . . . . . . . . . . . . . . . . . . 455

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Part I

Algebraic Preliminaries
1
Polynomial Matrices

1.1 Basic Concepts of Algebra


1. Let a certain set A with elements a, b, c, d, . . . be given. Assume that over
the set A an algebraic operation is defined which relates every pair of elements
(a, b) to a third element c ∈ A that is called the result of the operation.
If the named operation is designated by the symbol ∘, then the result is
symbolically written as
a ∘ b = c.
In general, we have a ∘ b ≠ b ∘ a. However, if for any two elements a, b in A
the equality a ∘ b = b ∘ a holds, then the operation is called commutative.
The operation is named associative, if for any a, b, c ∈ A the relation

(a ∘ b) ∘ c = a ∘ (b ∘ c)

is true.
The set A is called a semigroup, if an associative operation is defined
in it. A semigroup A is called a group, if it contains a neutral element e, such
that for every a ∈ A
a ∘ e = e ∘ a = a
is correct, and furthermore, for any a ∈ A there exists a uniquely determined
element a⁻¹ ∈ A, such that

a ∘ a⁻¹ = a⁻¹ ∘ a = e . (1.1)

The element a⁻¹ is called the inverse element of a. A group where the
operation is commutative is called a commutative group or Abelian group.
In many cases the operation in an Abelian group is called addition, and
it is designated by the symbol +. This notation is called additive. In additive
notation the neutral element is called the zero element, and it is denoted by
the symbol 0 (zero).

In other cases the operation is called multiplication, and it is written
in the same way as the ordinary multiplication of numbers. This notation is
named multiplicative. The neutral element in multiplicative notation is
designated by the symbol 1 (one). For the inverse element in multiplicative
notation we use a⁻¹, and in additive notation we write −a. In the latter case
the inverse element −a is also named the opposite element of a.

2. The set A is called an (associative) ring, if the two operations addition
and multiplication are defined on A. Hereby, the set A forms an Abelian
group with respect to addition, and a semigroup with respect to
multiplication. From the membership in an Abelian group it follows that

   (a + b) + c = a + (b + c)

and
   a + b = b + a.
Moreover, there exists a zero element 0, such that for an arbitrary a ∈ A

   a + 0 = 0 + a = a.

The element 0 is always uniquely determined. Between the operations
addition and multiplication of a ring the relations

   (a + b)c = ac + bc ,   c(a + b) = ca + cb

(left and right distributivity) are valid.
In many cases rings are considered which possess a number of further
properties. If for any two a, b always ab = ba is true, then the ring is called
commutative. If a unit element exists with 1a = a1 = a for all a ∈ A, then the
ring is named a ring with unit element. The element 1 in such a ring is always
uniquely determined.
Non-zero elements a, b of a ring satisfying ab = 0 are named (left
resp. right) zero divisors. A ring is called an integrity region, if it has no zero
divisors.

3. A commutative associative ring with unit element, where every non-zero
element a has an inverse a⁻¹ that satisfies Equation (1.1), is called a field.
In other words, a field is a ring where all elements different from zero form a
commutative group with respect to multiplication. It can be shown that
an arbitrary field is an integrity region. The set of complex numbers and the set of
real numbers with the ordinary addition and multiplication as operations are
important examples of fields. In the following, these fields will be designated
by C and R, respectively.

1.2 Polynomials
1. Let N be a certain commutative associative ring with unit element;
especially, it can be a field. Let us consider the infinite sequence
(a0, a1, . . . , ak; 0, . . .), where ak ≠ 0, and all elements starting from ak+1 are
equal to zero. Furthermore, we write

   (a0, a1, . . . , ak; 0, . . .) = (b0, b1, . . . , bk; 0, . . .) ,

if and only if ai = bi (i = 0, . . . , k). Over the set of elements of the above form,
the operations addition and multiplication are introduced in the following way.
The sum is defined by the relation

   (a0, a1, . . . , ak; 0, . . .) + (b0, b1, . . . , bk; 0, . . .) = (a0 + b0, a1 + b1, . . . , ak + bk; 0, . . .)

and the product of the sequences is given by

   (a0, a1, . . . , ak; 0, . . .)(b0, b1, . . . , bk; 0, . . .)
      = (a0 b0, a0 b1 + a1 b0, . . . , a0 bk + a1 b_{k−1} + . . . + ak b0, . . . , ak bk; 0, . . .) .     (1.2)

It is easily proven that the operations addition and multiplication explained
above are commutative and associative. Moreover, these operations are
distributive too. Any element a ∈ N is identified with the sequence (a; 0, . . .).
Furthermore, let λ be the sequence

   λ = (0, 1; 0, . . .) .

Then using (1.2), we get

   λ² = (0, 0, 1; 0, . . .) ,   λ³ = (0, 0, 0, 1; 0, . . .) ,   etc.

Herewith, we can write

   (a0, a1, . . . , ak; 0, . . .)
      = (a0; 0, . . .) + (0, a1; 0, . . .) + . . . + (0, . . . , 0, ak; 0, . . .)
      = a0 + a1 (0, 1; 0, . . .) + . . . + ak (0, . . . , 0, 1; 0, . . .)
      = a0 + a1 λ + a2 λ² + . . . + ak λ^k .

The expression on the right side of the last equation is called a polynomial in
λ with coefficients in N. It is easily shown that this definition of a polynomial
is equivalent to other definitions in elementary algebra. For ak ≠ 0 the
polynomial ak λ^k is called the term of the polynomial

   f(λ) = a0 + a1 λ + . . . + ak λ^k     (1.3)

with the highest power. The number k is called the degree of the polynomial
(1.3), and it is designated by deg f(λ). If in (1.3) we have a0 = a1 = . . . = ak = 0,

then the polynomial (1.3) is named the zero polynomial. A polynomial with
ak = 1 is called monic. If for two polynomials f1(λ), f2(λ) the relation f1(λ) =
a f2(λ) is valid with a non-zero a ∈ N, then these polynomials are called equivalent. In
what follows, we will use the notation f1(λ) ∼ f2(λ) for the fact that the
polynomials f1(λ) and f2(λ) are equivalent.
Inside this book we only consider polynomials with coefficients from the
real number field R or the complex number field C. Following [206] we use the
notation F for a field that is either R or C. The sets of polynomials over these
fields are designated by R[λ], C[λ] or F[λ], respectively. The sets R[λ] and C[λ]
are commutative rings without zero divisors. In what follows, the elements of
R[λ] are called real polynomials.

2. Some general properties of polynomials are listed below:

1. Any polynomial f(λ) ∈ C[λ] with deg f(λ) = n can be written in the form

      f(λ) = an (λ − λ1) ··· (λ − λn) .     (1.4)

   This representation is unique up to permutation of the factors. Some of
   the numbers λ1, . . . , λn, which are the roots of the polynomial f(λ), could
   be equal. In that case the product (1.4) is represented by

      f(λ) = an (λ − λ1)^{μ1} ··· (λ − λq)^{μq} ,   μ1 + . . . + μq = n ,     (1.5)

   where all λi, (i = 1, . . . , q) are different. The number μi, (i = 1, . . . , q)
   is called the multiplicity of the root λi. If f(λ) ∈ R[λ] then an is a real
   number, and in the products (1.4), (1.5) for every complex root λi there
   exists the conjugate complex root with equal multiplicity.
2. For given polynomials f(λ), d(λ) ∈ F[λ] there exists a uniquely
   determined pair of polynomials q(λ), r(λ) ∈ F[λ], such that

      f(λ) = q(λ)d(λ) + r(λ) ,     (1.6)

   where
      deg r(λ) < deg d(λ) .
   Hereby, the polynomial q(λ) is called the entire part, and the polynomial
   r(λ) is the remainder from the division of f(λ) by d(λ).
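Division with remainder as in (1.6) can be reproduced directly in a computer algebra system. The following Python/sympy sketch is purely illustrative (the polynomials f and d are arbitrary choices, not taken from the text):

```python
# Division with remainder (1.6): f = q*d + r with deg r < deg d.
from sympy import symbols, div, degree

lam = symbols('lambda')
f = lam**4 - 5*lam**3 + 6*lam**2 - 5*lam + 6   # illustrative dividend
d = lam**2 + 1                                 # illustrative divisor

q, r = div(f, d, lam)       # q: entire part, r: remainder
assert f.expand() == (q*d + r).expand()
assert degree(r, lam) < degree(d, lam)
```

Here q(λ) = λ² − 5λ + 5 and r(λ) = 1, so f(λ) = q(λ)d(λ) + r(λ) with deg r < deg d, exactly as (1.6) requires.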
3. Let f(λ), g(λ) ∈ F[λ]. It is said that the polynomial g(λ) is a
   divisor of f(λ), and we write g(λ)|f(λ), if

      f(λ) = q(λ)g(λ)

   is true, where q(λ) is a certain polynomial.
   The greatest common divisor (GCD) of the polynomials f1(λ) and f2(λ)
   will be designated by p(λ). The GCD is a common divisor
   of f1(λ) and f2(λ) that possesses the greatest possible degree. Up to

   equivalence, the GCD is uniquely determined. Any GCD p(λ) permits a
   representation of the form

      p(λ) = f1(λ)m1(λ) + f2(λ)m2(λ) ,

   where m1(λ), m2(λ) are certain polynomials in F[λ].

4. The two polynomials f1(λ) and f2(λ) are called coprime if their monic
   GCD is equal to one; that means, up to constants, these polynomials
   possess no common divisors. For the polynomials f1(λ) and f2(λ) to be
   coprime, it is necessary and sufficient that there exist polynomials m1(λ)
   and m2(λ) with
      f1(λ)m1(λ) + f2(λ)m2(λ) = 1 .

5. If
      f1(λ) = p(λ)f̃1(λ) ,   f2(λ) = p(λ)f̃2(λ) ,
   where p(λ) is a GCD of f1(λ) and f2(λ), then the polynomials f̃1(λ) and
   f̃2(λ) are coprime.
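The Bezout representation p(λ) = f1(λ)m1(λ) + f2(λ)m2(λ) of the GCD can be computed with the extended Euclidean algorithm. A small sympy illustration (the polynomials f1, f2 are arbitrary examples with a common factor):

```python
# Extended Euclidean algorithm: gcdex returns m1, m2 and the monic GCD p
# with f1*m1 + f2*m2 = p (Sect. 1.2, items 3 and 4).
from sympy import symbols, gcdex

lam = symbols('lambda')
f1 = (lam - 1)*(lam + 2)      # illustrative polynomials
f2 = (lam - 1)*(lam - 3)

m1, m2, p = gcdex(f1, f2, lam)
assert p == lam - 1                            # monic GCD
assert ((f1*m1 + f2*m2) - p).expand() == 0     # Bezout identity
```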

1.3 Matrices over Rings


1. Let N be a commutative ring with unit element forming an integrity
region, i.e. ab = 0 implies that a or b equals zero, where 0 is the zero
element of the ring N. Then from ab = 0, a ≠ 0 it always follows that b = 0.

2. The rectangular scheme

        [ a11 . . . a1m ]
   A =  [  ..  ..   ..  ]     (1.7)
        [ an1 . . . anm ]

is named a rectangular matrix over the ring N, where the aik, (i =
1, . . . , n; k = 1, . . . , m) are elements of the ring N. In what follows, the set of
these matrices is designated by N^{n×m}. The integers n and m are called the dimensions
of the matrix. In case of m = n we speak of a quadratic matrix A, for m < n
of a vertical and for m > n of a horizontal matrix A. For matrices over rings
the operations addition, (scalar) multiplication with elements of the ring N,
multiplication of matrices by matrices and transposition are defined. All these
operations are defined in the same way as for matrices over numbers [51, 44].

3. Every quadratic matrix A ∈ N^{n×n} is related to its determinant det A,
which is calculated in the same way as for number matrices. However, in the
given case the value of det A is an element of the ring N. A matrix A with
det A ≠ 0N is called regular or non-singular; for det A = 0N it is called
singular.

4. For any matrix A ∈ N^{n×n} there uniquely exists a matrix adj A of the form

            [ A11 . . . An1 ]
   adj A =  [  ..  ..   ..  ]     (1.8)
            [ A1n . . . Ann ]

where Aik is the algebraic complement (the cofactor) of the element aik of the
matrix A, which is obtained as the determinant of the matrix that remains after
cutting the i-th row and k-th column, multiplied by the sign-factor (−1)^{i+k}.
The matrix adj A is called the adjoint of the matrix A. The matrices A and
adj A are connected by the relation

   A(adj A) = (adj A)A = (det A)In ,     (1.9)

where the identity matrix In is defined by

         [ 1N 0N . . . 0N ]
   In =  [ 0N 1N . . . 0N ]  = diag{1N, . . . , 1N}
         [ ..  ..  . .  .. ]
         [ 0N 0N . . . 1N ]

with the unit element 1N of the ring N, and diag means the diagonal matrix.

5. In the following, matrices of dimension n × 1 are called columns and
matrices of dimension 1 × m rows, and both are referred to as vectors. The
number n is named the height of the column, and the number m the width of
the row, and both are the length of the vector.
Let u1, u2, . . . , uk be rows in N^{1×m}. As a linear combination of the rows
u1, . . . , uk, we term the row

   ū = c1 u1 + . . . + ck uk ,

where the ci, (i = 1, . . . , k) are elements of the ring N. The set of rows
{u1, . . . , uk} is named linearly dependent, if there exist coefficients c1, . . . , ck,
that are not all equal to zero, such that ū = O_{1×m}. Here and in the following, O_{n×m}
designates the zero matrix, i.e. that matrix in N^{n×m} having all its elements
equal to the zero element 0N.
If the equation

   c u = c1 u1 + . . . + ck uk

is valid with a c ≠ 0N, then we say that the row u depends linearly on the
rows u1, . . . , uk.
For the set {u1, . . . , uk} of rows to be linearly dependent, it is necessary
and sufficient that one row depends linearly on the others in the sense of
the above definition.
For rows over the ring N the following important statement is true: Any set of rows
of width m with more than m elements is linearly dependent. In analogy, any
set of columns of height n with more than n elements is also linearly dependent.

6. Let a finite or infinite set U of rows of width m be given. Furthermore,
let r be the maximal number of linearly independent elements of U, where
due to the above statement r ≤ m is valid. An arbitrary subset of r linearly
independent rows of U is called a basis of the set U, and the number r itself is
called the normal rank of U. All that is said above can be directly transferred
to sets of columns.

7. Let a matrix A ∈ N^{n×m} be given, let U be the set of rows of A,
and V the set of its columns. Then the following important statements hold:

1. The normal rank of the set U of the rows of the matrix A is equal to the
   normal rank of the set V of its columns. The common value of these ranks
   is called the normal rank of the matrix A, and it is designated by rank A.
2. The normal rank of the matrix A is equal to the highest order of its
   subdeterminants (minors) different from zero. (Here zero again means
   the zero element of the ring N.)
3. For the linear independence of all rows (columns) of a quadratic matrix,
   it is necessary and sufficient that it is non-singular.

For arbitrary matrices A ∈ N^{n×m}, the above statements imply

   rank V = rank U ≤ min(n, m) =: ρA .

Hereinafter, the symbol =: stands for equality by definition. In the following,
we say that the matrix A has maximal or full normal rank, if rank A =
ρA, or that it is non-degenerated. In the following the symbol rank also
denotes the rank of an ordinary number matrix. This notation does not lead to
contradictions, because for matrices over the fields of real or complex numbers
the normal rank coincides with the ordinary rank.

8. For Matrix (1.7), the expression

     ( i1 i2 . . . ip )         [ a_{i1 k1} . . . a_{i1 kp} ]
   A (                 )  = det [    ..      ..      ..     ]
     ( k1 k2 . . . kp )         [ a_{ip k1} . . . a_{ip kp} ]

denotes the minor of the matrix A, which is calculated from the elements that
are at the same time members of the rows with the numbers i1, . . . , ip and of
the columns with the numbers k1, . . . , kp. Let

   C = AB

be given with C ∈ N^{n×m}, A ∈ N^{n×ℓ}, B ∈ N^{ℓ×m}. Then if n = m, the matrix C is
quadratic, and for n ≤ ℓ we have
   det C =      Σ        A ( 1 2 . . . n ; k1 k2 . . . kn ) B ( k1 k2 . . . kn ; 1 2 . . . n ) .
           1≤k1<···<kn≤ℓ

This relation is called the Binet-Cauchy formula [51]. For n > ℓ we obtain
det C = 0.
The formula of Binet-Cauchy permits to express an arbitrary minor of a
product by the corresponding minors of its factors. For p ≤ ℓ this formula
takes the form

   C ( i1 . . . ip ; j1 . . . jp ) =      Σ        A ( i1 i2 . . . ip ; k1 k2 . . . kp ) B ( k1 k2 . . . kp ; j1 j2 . . . jp ) .
                                     1≤k1<···<kp≤ℓ

For p > ℓ, all minors in this relation are equal to zero.
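A numeric spot check of the Binet-Cauchy formula, here for n = 2, ℓ = 3 with arbitrarily chosen integer matrices (an illustration, not part of the original text):

```python
# det C as a sum over products of corresponding n-th order minors of A and B.
from itertools import combinations
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6]])        # n x l,  n = 2, l = 3
B = Matrix([[ 1, 0],
            [-1, 2],
            [ 2, 1]])          # l x n

n, l = A.shape
total = sum(A[:, list(k)].det() * B[list(k), :].det()
            for k in combinations(range(l), n))   # 1 <= k1 < ... < kn <= l

assert total == (A * B).det()
```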

1.4 Polynomial Matrices


1. By a polynomial matrix A(λ) we mean a matrix of the form

          [ a11(λ) . . . a1m(λ) ]
   A(λ) = [   ..     ..    ..   ]
          [ an1(λ) . . . anm(λ) ]

where all elements are polynomials in F[λ], especially also in R[λ] or C[λ].
The set of these matrices will be designated by F^{n×m}[λ], or directly
by R^{n×m}[λ] resp. C^{n×m}[λ], and their subsets containing the constant matrices are
denoted by F^{n×m}, R^{n×m} or C^{n×m}, respectively. The matrices in R^{n×m} and R^{n×m}[λ]
are called real.

2. Let, especially,

   ui(λ) = ( ai1(λ) . . . aim(λ) ) ,   (i = 1, 2, . . . , p)

be a certain set of rows of width m. The rows defined above will be called
linearly dependent in F^{1×m}[λ], if and only if there exist polynomials ci(λ) ∈ F[λ]
that are not all zero at the same time, such that

    p
    Σ  ci(λ)ui(λ) = O_{1×m} .
   i=1

Here O_{ℓ×m} is the matrix of dimension ℓ × m with all elements equal to the zero
polynomial.
Based on these definitions, all derived concepts and insights can be
transferred to polynomial matrices, especially the normal rank of matrices over
rings and also the formula of Binet-Cauchy.

3. Any polynomial matrix A(λ) ∈ F^{n×m}[λ] can be written in the form

   A(λ) = A0 λ^q + A1 λ^{q−1} + . . . + Aq ,     (1.10)

where the Ai, (i = 0, . . . , q) are constant matrices in F^{n×m}. The matrix A0 is named
the highest coefficient of the polynomial matrix A(λ). If A0 ≠ O_{n×m} is true,
then the number q is called the degree of the polynomial matrix A(λ), and
it is designated by deg A(λ). If n ≠ m, or det A(λ) ≡ 0 in case of n = m,
the matrix A(λ) is called singular. For det A(λ) ≢ 0 the matrix A(λ) is called
non-singular. A non-singular matrix (1.10) is called regular, if det A0 ≠ 0, and
anomalous, if det A0 = 0 is true.
In the general case we have

   deg[A(λ)B(λ)] ≤ deg A(λ) + deg B(λ) .     (1.11)

However, if one of the factors is regular, then

   deg[A(λ)B(λ)] = deg A(λ) + deg B(λ) .     (1.12)

4. For A(λ) ∈ F^{n×n}[λ], Matrix (1.10) is related to its determinant det A(λ),
which itself is a polynomial in F[λ]. In accordance with the above statements
the matrix is non-singular if its determinant is different from the zero
polynomial. A non-singular matrix A(λ) is related to the non-negative number

   ord A(λ) = deg det A(λ)

that is called the order of the matrix A(λ). The degree and order of a matrix
A(λ) are connected by the inequalities

   ord A(λ) ≤ n deg A(λ)     (1.13)

or
   deg A(λ) ≥ (1/n) ord A(λ) .     (1.14)

For a regular matrix A(λ) the inequalities (1.13), (1.14) become equalities.
In general, for a given order ord A(λ), the degree of a matrix A(λ) can be
an arbitrarily large number. A non-singular quadratic polynomial matrix A(λ)
with ord A(λ) = 0, i.e. det A(λ) = const ≠ 0, is called unimodular.

Example 1.1. The matrix

   A(λ) = [ λ − 2                          1                 ]
          [ λ⁴ − 5λ³ + 6λ² − 5λ + 6        λ³ − 3λ² + 4λ − 4 ]

can be written in the form (1.10) as

   A(λ) = A0 λ⁴ + A1 λ³ + A2 λ² + A3 λ + A4 ,

where

   A0 = [ 0 0 ; 1 0 ] ,   A1 = [ 0 0 ; −5 1 ] ,
   A2 = [ 0 0 ; 6 −3 ] ,   A3 = [ 1 0 ; −5 4 ] ,   A4 = [ −2 1 ; 6 −4 ] .

In the present case we have deg A(λ) = 4. The matrix A(λ) is non-singular,
because n = m = 2 and det A(λ) ≢ 0. At the same time ord A(λ) = 2 due
to
   det A(λ) = 4λ² − 7λ + 2 .
Moreover, the matrix A(λ) is anomalous, because det A0 = 0.
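The quantities of Example 1.1 are easy to verify mechanically; the following sympy sketch is used here purely as an illustration:

```python
# Example 1.1: deg A = 4, det A = 4*lam**2 - 7*lam + 2, hence ord A = 2;
# A is anomalous because its highest coefficient A0 is singular.
from sympy import symbols, Matrix, expand

lam = symbols('lambda')
A = Matrix([[lam - 2, 1],
            [lam**4 - 5*lam**3 + 6*lam**2 - 5*lam + 6,
             lam**3 - 3*lam**2 + 4*lam - 4]])

d = expand(A.det())
assert d == 4*lam**2 - 7*lam + 2     # ord A = deg det A = 2
A0 = Matrix([[0, 0], [1, 0]])        # coefficient matrix of lam**4
assert A0.det() == 0                 # anomalous
```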


Example 1.2. Let

   A(λ) = [ λ − 1           λ² + λ − 4                ]
          [ λ³ − 3λ + 5     λ⁵ + λ³ + 5λ² − 12λ + 21  ]

be given. In this case we have deg A(λ) = 5. At the same time det A(λ) = 1
and ord A(λ) = 0, thus the matrix A(λ) is unimodular.

1.5 Left and Right Equivalence of Polynomial Matrices

1. Let two polynomial matrices A1(λ), A2(λ) ∈ F^{n×m}[λ] be given. The
matrices A1(λ) and A2(λ) are called left-equivalent, if one of them can be generated
from the other by applying the following operations, which are called left
elementary operations:
1. Exchange of two rows
2. Multiplying the elements of any row by one and the same non-zero
   number in F
3. Adding to the elements of any row the corresponding elements of another
   row, multiplied by one and the same polynomial in F[λ].
It is known that matrices A1(λ), A2(λ) are left-equivalent if and only if
there exists a unimodular matrix p(λ), such that

   A1(λ) = p(λ)A2(λ) .

2. By applying left elementary operations, any matrix A() can be given a


special form that later on is named the left canonical form of Hermite.

Theorem 1.3 (following [113]). Let the matrix A(λ) ∈ F^{n×m}[λ] have maximal
rank ρA, and let the first ρA columns of A(λ) have a non-vanishing minor of
order ρA. Then, in dependence of its dimension, the matrix A(λ)
can be transformed by left elementary operations into one of the three forms:

ρA = m = n :

              [ g11(λ)  g12(λ)  . . .  g1n(λ) ]
   p(λ)A(λ) = [   0     g22(λ)  . . .  g2n(λ) ]  = Al(λ) ,     (1.15)
              [   ..      ..     ..      ..   ]
              [   0       0     . . .  gnn(λ) ]

ρA = n < m :

              [ g11(λ)  g12(λ)  . . .  g1n(λ)  g1,n+1(λ)  . . .  g1m(λ) ]
   p(λ)A(λ) = [   0     g22(λ)  . . .  g2n(λ)  g2,n+1(λ)  . . .  g2m(λ) ]  = Al(λ) ,     (1.16)
              [   ..      ..     ..      ..        ..      . . .   ..   ]
              [   0       0     . . .  gnn(λ)  gn,n+1(λ)  . . .  gnm(λ) ]

ρA = m < n :

              [ g11(λ)  g12(λ)  . . .  g1m(λ) ]
              [   0     g22(λ)  . . .  g2m(λ) ]
              [   ..      ..     ..      ..   ]
   p(λ)A(λ) = [   0       0     . . .  gmm(λ) ]  = Al(λ) .     (1.17)
              [   0       0     . . .    0    ]
              [   ..      ..    . . .    ..   ]
              [   0       0     . . .    0    ]

In (1.15)–(1.17) the matrix p(λ) is unimodular, and the gii(λ) are monic
polynomials, where every gii(λ) is of highest degree in its column. Hereby, the
matrix Al(λ) is uniquely determined by A(λ). Moreover, in Formulae (1.15)
and (1.16) the matrix p(λ) is also uniquely determined.

In the following, the respective matrix Al(λ) is said to be the left canonical
form of the corresponding matrix A(λ), or also its left Hermitian form.

3. By right elementary operations, we understand the above declared
operations applied to columns instead of rows. Two matrices are called right-equivalent, if
any one of them can be generated from the other by applying right elementary
operations. Two polynomial matrices A1(λ), A2(λ) are right-equivalent if and only
if there exists a unimodular matrix q(λ) with

   A1(λ) = A2(λ)q(λ) .

In analogy to Theorem 1.3 the following theorem holds.



Theorem 1.4. Let the matrix A(λ) have the maximal rank ρA, and let the first
ρA rows of A(λ) possess a non-zero minor of order ρA. Then, according
to its dimension, by applying right elementary operations, the matrix A(λ)
can be transformed into one of the three forms:

ρA = m = n :

              [ g11(λ)    0       . . .    0    ]
   A(λ)q(λ) = [ g21(λ)  g22(λ)    . . .    0    ]  = Ar(λ) ,     (1.18)
              [   ..      ..       ..      ..   ]
              [ gn1(λ)  gn2(λ)    . . .  gnn(λ) ]

ρA = n < m :

              [ g11(λ)    0       . . .    0      0  . . .  0 ]
   A(λ)q(λ) = [ g21(λ)  g22(λ)    . . .    0      0  . . .  0 ]  = Ar(λ) ,     (1.19)
              [   ..      ..       ..      ..     ..  ..   .. ]
              [ gn1(λ)  gn2(λ)    . . .  gnn(λ)   0  . . .  0 ]

ρA = m < n :

              [ g11(λ)       0           . . .    0         ]
              [ g21(λ)       g22(λ)      . . .    0         ]
              [   ..           ..         ..      ..        ]
   A(λ)q(λ) = [ gm1(λ)       gm2(λ)      . . .  gmm(λ)      ]  = Ar(λ) .     (1.20)
              [ gm+1,1(λ)    gm+1,2(λ)   . . .  gm+1,m(λ)   ]
              [   ..           ..        . . .    ..        ]
              [ gn1(λ)       gn2(λ)      . . .  gnm(λ)      ]

In (1.18)–(1.20) the matrix q(λ) is unimodular, and the gii(λ) are monic
polynomials, where every gii(λ) has the highest degree in its row. Hereby,
the matrix Ar(λ) is uniquely determined by A(λ). Moreover, the matrix q(λ)
in (1.18) and (1.20) is also uniquely determined.

The respective matrix Ar(λ) in (1.18)–(1.20) is said to be the right canonical
form of the polynomial matrix A(λ), or its right Hermitian form.

Example 1.5. Let

   A(λ) = [ λ⁴ + 1           λ⁴ + 3       ]
          [ λ⁶ + 2λ² + 1     λ⁶ + 4λ² + 2 ] .

In this case we have det A(λ) = λ⁴ − 2λ² − 1. Hence deg A(λ) = 6 and
ord A(λ) = 4. The matrix

   p(λ) = [ 0.5          0.5(1 − λ²) ] [ 1     0 ]
          [ −(λ² + 1)    λ⁴ + 1      ] [ −λ²   1 ]

is unimodular. By direct calculation we confirm

   p(λ)A(λ) = [ 1    2.5 − 0.5λ²  ]
              [ 0    λ⁴ − 2λ² − 1 ] ,

which has degree 4. The matrix on the right side of the last equation is a
Hermitian canonical form. As a conclusion of Theorem 1.3, it follows that
this Hermitian form Al(λ) and its transformation matrix p(λ) are uniquely
determined. It should be remarked that the matrix Al(λ) does not possess the
smallest possible degree of all matrices that are left-equivalent to A(λ). Indeed,
consider the product

   A1(λ) = [ 1   −λ² ] [ 1     0 ] A(λ) = [ 1 − λ²    3 − 2λ² ]
           [ 0    1  ] [ −λ²   1 ]        [ λ² + 1    λ² + 2  ] .

Then obviously deg A1(λ) = 2. This is the minimal degree, and this result
confirms Inequality (1.14).
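The claims of Example 1.5 can likewise be checked mechanically; the sympy sketch below (an illustration only) confirms that p(λ) is unimodular and that p(λ)A(λ) has the stated triangular form:

```python
# Example 1.5: p is unimodular and p*A is upper triangular with
# (1,1)-entry 1 and (2,2)-entry det A = lam**4 - 2*lam**2 - 1.
from sympy import symbols, Matrix, Rational

lam = symbols('lambda')
A = Matrix([[lam**4 + 1, lam**4 + 3],
            [lam**6 + 2*lam**2 + 1, lam**6 + 4*lam**2 + 2]])
p = (Matrix([[Rational(1, 2), Rational(1, 2)*(1 - lam**2)],
             [-(lam**2 + 1), lam**4 + 1]])
     * Matrix([[1, 0], [-lam**2, 1]]))

assert p.det().expand() == 1           # unimodular
Al = (p * A).expand()
assert Al[1, 0] == 0 and Al[0, 0] == 1
assert Al[1, 1] == lam**4 - 2*lam**2 - 1
```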

1.6 Row and Column Reduced Matrices

1. Let the non-singular quadratic matrix A(λ) ∈ F^{n×n}[λ] be given, and let
a1(λ), . . . , an(λ) be the rows of A(λ). With the notation

   αi = deg ai(λ) ,   (i = 1, . . . , n) ,

the matrix A(λ) can be written in the form

   A(λ) = diag{λ^{α1}, . . . , λ^{αn}} A0 + A¹(λ) .     (1.21)

Herein A¹(λ) is a matrix where the degree of its i-th row is smaller than αi,
and A0 is a constant matrix. Formula (1.21) can be transformed into

   A(λ) = diag{λ^{α1}, . . . , λ^{αn}} ( A0 + A1 λ⁻¹ + . . . + Ap λ⁻ᵖ ) ,     (1.22)

where p ≥ 0 is an integer, and the Ai are constant matrices. The number

   αl = α1 + . . . + αn

is named the left order of the matrix A(λ). Denote

   αmax = max_{1≤i≤n} {αi} .

Then obviously
   deg A(λ) = αmax .     (1.23)
In analogy, assuming that b1(λ), . . . , bn(λ) are the columns of A(λ) and

   βi = deg bi(λ) ,

we generate the representation

   A(λ) = ( B0 + B1 λ⁻¹ + . . . + Bq λ⁻q ) diag{λ^{β1}, . . . , λ^{βn}} .     (1.24)

The number
   αr = β1 + . . . + βn
is called the right order of the matrix A(λ). Introduce the notation

   βmax = max_{1≤i≤n} {βi} .

Then we obtain
   deg A(λ) = βmax .

Example 1.6. [68]: Consider the matrix

          [ λ² − 1    λ        3λ      ]
   A(λ) = [ λ²        λ − 1    −2λ − 1 ] .     (1.25)
          [ λ + 2     λ        −2      ]

In this case we have α1 = 2, α2 = 2, α3 = 1, and the left order of the matrix
A(λ) becomes αl = 5. In the representation (1.21) we get

        [ 1 0 0 ]           [ −1    λ        3λ      ]
   A0 = [ 1 0 0 ] , A¹(λ) = [ 0     λ − 1    −2λ − 1 ]
        [ 1 1 0 ]           [ 2     0        −2      ]

and therefore, (1.22) yields

   A(λ) = diag{λ², λ², λ} ( A0 + A1 λ⁻¹ + A2 λ⁻² )     (1.26)

with

        [ 1 0 0 ]        [ 0 1  3 ]        [ −1  0  0 ]
   A0 = [ 1 0 0 ] , A1 = [ 0 1 −2 ] , A2 = [  0 −1 −1 ] .     (1.27)
        [ 1 1 0 ]        [ 2 0 −2 ]        [  0  0  0 ]

At the same time we have β1 = 2, β2 = 1, β3 = 1, and the right order of A(λ)
becomes αr = 4. The representation (1.24) takes the form

   A(λ) = ( B0 + B1 λ⁻¹ + B2 λ⁻² ) diag{λ², λ, λ} ,

where

        [ 1 1  3 ]        [ 0  0  0 ]        [ −1 0 0 ]
   B0 = [ 1 1 −2 ] , B1 = [ 0 −1 −1 ] , B2 = [  0 0 0 ] .
        [ 0 1  0 ]        [ 1  0 −2 ]        [  2 0 0 ]
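For Example 1.6 the row and column degrees, together with the leading coefficient matrices A0 and B0, can be extracted programmatically (a sympy sketch, given for illustration):

```python
# Row degrees alpha_i, column degrees beta_j and the leading coefficient
# matrices of the matrix A(lam) from Example 1.6.
from sympy import symbols, Matrix, degree

lam = symbols('lambda')
A = Matrix([[lam**2 - 1, lam, 3*lam],
            [lam**2, lam - 1, -2*lam - 1],
            [lam + 2, lam, -2]])

alpha = [max(degree(e, lam) for e in A.row(i)) for i in range(3)]
beta  = [max(degree(e, lam) for e in A.col(j)) for j in range(3)]
assert alpha == [2, 2, 1] and beta == [2, 1, 1]   # alpha_l = 5, alpha_r = 4

A0 = Matrix(3, 3, lambda i, j: A[i, j].coeff(lam, alpha[i]))
B0 = Matrix(3, 3, lambda i, j: A[i, j].coeff(lam, beta[j]))
assert A0.det() == 0    # not row reduced
assert B0.det() == 5    # column reduced
```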


2. The matrix A(λ) is said to be row reduced, if in the representation
(1.21) we have
   det A0 ≠ 0     (1.28)
and it is said to be column reduced, if in the representation (1.24)

   det B0 ≠ 0

is true.
Column-reduced matrices can be generated from row-reduced matrices
simply by transposition. Therefore, in the following only row-reduced matrices
will be considered.

Lemma 1.7. For Matrix (1.21) to be row reduced, a necessary and sufficient
condition is the validity of the equation

   ord A(λ) = deg det A(λ) = αl .     (1.29)

Proof. From (1.21) we get

   det A(λ) = λ^{αl} det A0 + a1(λ)

with deg a1(λ) < αl. For (1.29) to be valid, (1.28) is necessary. If, conversely,
(1.29) is fulfilled, then det A0 ≠ 0 is true, and the matrix A(λ) is row
reduced.

Example 1.8. For Matrix (1.25) we get det A0 = 0, det B0 = 5; therefore
the matrix A(λ) is column reduced but not row reduced. Hereby, we obtain
ord A(λ) = αr = 4.
Theorem 1.9 ([133]). Any non-singular matrix A(λ) can be made row
reduced by left-equivalent transformations.

Proof. Assume the matrix A(λ) to be given in form of Representation (1.22).
For det A0 ≠ 0 the matrix A(λ) is already row reduced. Therefore, take
a singular matrix A0, i.e. det A0 = 0. Then there exists a non-zero row vector
ξ = (ξ1, . . . , ξn) such that
   ξ A0 = O_{1×n}     (1.30)
is fulfilled. Let ξ_{i1}, . . . , ξ_{iq}, (1 ≤ q ≤ n) be the non-zero components of ξ, and
α_{i1}, . . . , α_{iq} the corresponding exponents αi. Denote

   μ = max_{1≤j≤q} {α_{ij}} ,

and let κ be one of the indices ij for which α_{ij} = μ is valid. Then the row

   ν(λ) = ( ξ1 λ^{μ−α1}   ξ2 λ^{μ−α2}   . . .   ξn λ^{μ−αn} )     (1.31)

comes out as a polynomial row (for ξi = 0 the entry is zero, and for ξi ≠ 0 we
have μ − αi ≥ 0). Now consider the matrix P(λ) that is generated
from the identity matrix In by exchanging the κ-th row for the row (1.31):

          [ 1            0            . . .               0            ]
          [ ..           ..                               ..           ]
   P(λ) = [ ξ1 λ^{μ−α1}  ξ2 λ^{μ−α2}  . . .  ξ_{n−1} λ^{μ−α_{n−1}}  ξn λ^{μ−αn} ]     (1.32)
          [ ..           ..                               ..           ]
          [ 0            0            . . .               1            ]

(the shown row is the κ-th). Due to det P(λ) = ξκ ≠ 0, the matrix P(λ) is
unimodular. Therefore, as shown in [133], the equation

   P(λ)A(λ) = diag{λ^{α1}, . . . , λ^{αn}} ( A'0 + A'1 λ⁻¹ + . . . + A'p λ⁻ᵖ )     (1.33)

holds with
   A'i = D Ai ,     (1.34)

and the matrix D is generated from the identity matrix In by exchanging the
κ-th row for the row ξ:

       [ 1   0   . . .   0  ]
       [ ..  ..          .. ]
   D = [ ξ1  ξ2  . . .   ξn ]     (1.35)
       [ ..  ..          .. ]
       [ 0   0   . . .   1  ]

Obviously, det D = ξκ ≠ 0. From (1.30) and (1.33) it follows that the κ-th
row of the matrix A'0 is identical to zero. That means, Equation (1.33) can be
written in the form

   P(λ)A(λ) = diag{λ^{α1}, . . . , λ^{α_{κ−1}}, λ^{ακ−1}, λ^{α_{κ+1}}, . . . , λ^{αn}}
              · ( Ã0 + Ã1 λ⁻¹ + . . . + Ãp λ⁻ᵖ ) ,     (1.36)

where the matrices Ãi (i = 0, . . . , p − 1) are built from the matrices A'i by
substituting their κ-th rows by the κ-th row of A'_{i+1}. Hereby, the κ-th row
of Ãp is substituted by the zero row. If in Relation (1.36) the matrix Ã0 is
regular, then the matrix P(λ)A(λ) is row reduced, and the transformation
procedure finishes. If, however, the matrix Ã0 is still singular, we have to
repeat the transformation procedure again. It was shown in [133] that for a
non-singular matrix A(λ) this algorithm yields a row-reduced matrix
after a finite number of steps.
Example 1.10. Generate a row-reduced form of the matrix A(λ) in (1.26),
(1.27). Equation (1.30) leads to the system of linear equations

   ξ1 + ξ2 + ξ3 = 0 ,   ξ3 = 0 ,

so we choose ξ = ( 1  −1  0 ), κ = 1 and μ = 2. Applying (1.31), (1.32) and
(1.34) yields
               [ 1 −1 0 ]
   P(λ) = D =  [ 0  1 0 ] .
               [ 0  0 1 ]
Using this result and (1.34), we find

         [ 0 0 0 ]         [ 0 0  5 ]         [ −1  1  1 ]
   A'0 = [ 1 0 0 ] , A'1 = [ 0 1 −2 ] , A'2 = [  0 −1 −1 ] .
         [ 1 1 0 ]         [ 2 0 −2 ]         [  0  0  0 ]

Now, exchange the first row of A'0 for the first row of A'1, the first row of A'1
for the first row of A'2, and the first row of A'2 for the zero row. As result we
get

         [ 0 0  5 ]        [ −1 1  1 ]        [ 0  0  0 ]
   Ã0 =  [ 1 0 0 ] , Ã1 = [  0 1 −2 ] , Ã2 = [ 0 −1 −1 ] .
         [ 1 1 0 ]        [  2 0 −2 ]        [ 0  0  0 ]

The matrix Ã0 is regular. Therefore, the procedure stops, and with the help
of (1.36) we get

   P(λ)A(λ) = diag{λ, λ², λ} ( Ã0 + Ã1 λ⁻¹ + Ã2 λ⁻² )

              [ −1        1        5λ + 1  ]
            = [ λ²        λ − 1    −2λ − 1 ] .
              [ λ + 2     λ        −2      ]
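The result of Example 1.10 can be confirmed numerically: after the transformation the leading row-coefficient matrix is regular, i.e. P(λ)A(λ) is row reduced (a sympy sketch for illustration):

```python
# Example 1.10: P*A is row reduced; its leading row-coefficient matrix
# is regular, and the left order has dropped from 5 to 4 = ord A.
from sympy import symbols, Matrix, degree

lam = symbols('lambda')
A = Matrix([[lam**2 - 1, lam, 3*lam],
            [lam**2, lam - 1, -2*lam - 1],
            [lam + 2, lam, -2]])
P = Matrix([[1, -1, 0], [0, 1, 0], [0, 0, 1]])

PA = (P * A).expand()
alpha = [max(degree(e, lam) for e in PA.row(i)) for i in range(3)]
assert alpha == [1, 2, 1]                  # new left order: 1 + 2 + 1 = 4
A0t = Matrix(3, 3, lambda i, j: PA[i, j].coeff(lam, alpha[i]))
assert A0t.det() != 0                      # regular: row reduced
```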


3. A number of useful properties of row-reduced matrices follows from the
above explanations.

Theorem 1.11 (see [69]). Let the matrices

   A(λ) = diag{λ^{α1}, . . . , λ^{αn}} ( A0 + A1 λ⁻¹ + . . . ) ,
                                                                     (1.37)
   B(λ) = diag{λ^{β1}, . . . , λ^{βn}} ( B̄0 + B̄1 λ⁻¹ + . . . )

be given. If the matrices A(λ) and B(λ) are row reduced and left-equivalent,
then the sets of numbers {α1, . . . , αn} and {β1, . . . , βn} coincide.

Corollary 1.12. If the matrices A(λ) and B(λ) are left-equivalent, and the
matrix A(λ) is row reduced, then

    n        n
    Σ  αi ≤  Σ  βi
   i=1      i=1

is true, where equality takes place if and only if the matrix B(λ) is also
row reduced.

Proof. Because the matrices (1.37) are left-equivalent, they possess the same
order. Therefore, by Lemma 1.7 it follows

   β1 + . . . + βn ≥ ord B(λ) = ord A(λ) = α1 + . . . + αn ,

where equality on the left exactly takes place when the matrix B(λ)
is row reduced.
Corollary 1.13. Under the conditions of Theorem 1.11,

   deg A(λ) = deg B(λ) .     (1.38)

Proof. From (1.23) and (1.37), we get

   deg A(λ) = max_{1≤i≤n} {αi} = max_{1≤i≤n} {βi} = deg B(λ) ,

because the sets of numbers αi and βi coincide.


Corollary 1.14. Let the matrices A(λ) and B(λ) in (1.37) be left-equivalent,
where the matrix A(λ) is row reduced, but the matrix B(λ) is not row reduced.
Then we have
   deg A(λ) ≤ deg B(λ) .     (1.39)

Proof. Contrary to the claim, assume

   βmax = deg B(λ) < deg A(λ) = αmax .

Then B(λ) can be brought to a row-reduced form by applying Relation (1.36). In this
manner, we get

   Q(λ)B(λ) = diag{λ^{β̃1}, . . . , λ^{β̃n}} ( B̃0 + B̃1 λ⁻¹ + . . . )

with a unimodular matrix Q(λ) and det B̃0 ≠ 0. From (1.36), it is seen that

   deg[Q(λ)B(λ)] ≤ deg B(λ) = βmax < αmax = deg A(λ) ,

which contradicts Equation (1.38), because Q(λ)B(λ) is row reduced and
left-equivalent to A(λ). Hence (1.39) follows.

1.7 Equivalence of Polynomial Matrices


1. The matrices A1(λ), A2(λ) ∈ F^{n×m}[λ] are called equivalent, if

   A1(λ) = p(λ)A2(λ)q(λ)     (1.40)

is true with unimodular matrices p(λ), q(λ). Obviously, left-equivalent or
right-equivalent matrices are also equivalent. Formula (1.40) says that the
matrices A1(λ) and A2(λ) are equivalent if and only if they can be generated
from each other by left and right elementary operations.

2.
Theorem 1.15 ([51]). Any n × m matrix A(λ) with normal rank ρ is
equivalent to the matrix

   SA(λ) = [ Sρ(λ)       O_{ρ,m−ρ}   ]     (1.41)
            [ O_{n−ρ,ρ}   O_{n−ρ,m−ρ} ] ,

where the matrix Sρ(λ) has the form

   Sρ(λ) = diag{a1(λ), . . . , aρ(λ)} ,     (1.42)

and the ai(λ) are monic polynomials, where every polynomial ai+1(λ) is
divisible by ai(λ).
Matrix (1.41) is uniquely determined by the matrix A(λ), and it is named
the Smith canonical form of the matrix A(λ).

Corollary 1.16. It follows immediately from Relations (1.40)–(1.42) that,
under the condition rank A(λ) = ρ, there exists only a finite set of numbers
λ̃i, (i = 1, . . . , q) such that the number matrix A(λ̃i) satisfies the inequality
rank A(λ̃i) < ρ.

Corollary 1.17. Let a finite number of matrices A1(λ), . . . , Ap(λ) with
rank Ai(λ) = ρi, (i = 1, . . . , p) be given. Then for all fixed values λ̃, excluding
a certain finite set, the condition rank Ai(λ̃) = ρi is fulfilled.

1.8 Normal Rank of Polynomial Matrices

1. Utilising the results of the preceding section, we are able to transfer
known results on the rank of number matrices to the normal rank of
polynomial matrices.

Theorem 1.18. Assume

   D(λ) = A(λ)B(λ)

with polynomial matrices A(λ), B(λ), D(λ) of sizes n × ℓ, ℓ × m and n × m,
respectively. Then the relations

   rank D(λ) ≤ min{rank A(λ), rank B(λ)}     (1.43)

and
   rank D(λ) ≥ rank A(λ) + rank B(λ) − ℓ     (1.44)

are true.

Relations (1.43), (1.44) are named the inequalities of Sylvester.

Proof. Assume

   rank A(λ) = ρA ,   rank B(λ) = ρB ,   rank D(λ) = ρD

with
   ρD > min{ρA, ρB} .     (1.45)

Then due to Corollary 1.17, there exists a value λ = λ̃ with

   rank A(λ̃) = ρA ,   rank B(λ̃) = ρB ,   rank D(λ̃) = ρD .

It is possible to apply the Sylvester inequalities to the number matrices

   D(λ̃) = A(λ̃)B(λ̃) ,

which gives
   rank D(λ̃) ≤ min{rank A(λ̃), rank B(λ̃)} .

But this contradicts (1.45). This contradiction proves the validity of Inequality
(1.43).
Inequality (1.44) can be proved analogously, because a corresponding
inequality holds for constant matrices.
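A quick symbolic spot check of the Sylvester inequalities (1.43), (1.44) on small illustrative matrices; for these matrices sympy's rank() yields the normal rank, since row reduction is carried out over the field of rational functions in λ:

```python
# Sylvester inequalities for normal ranks, checked on a 2x2 example.
from sympy import symbols, Matrix

lam = symbols('lambda')
A = Matrix([[lam, 1], [lam**2, lam]])   # det == 0: normal rank 1
B = Matrix([[1, lam], [0, 1]])          # unimodular: normal rank 2
D = A * B
l = 2

rA, rB, rD = A.rank(), B.rank(), D.rank()
assert rD <= min(rA, rB)                # (1.43)
assert rD >= rA + rB - l                # (1.44)
assert rD == rA                         # (1.47), since B is non-singular
```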

2. From the inequalities of Sylvester (1.43), (1.44) ensue the following
relations:
   rank[A(λ)B(λ)] = rank B(λ)   for rank A(λ) = ℓ     (1.46)
and
   rank[A(λ)B(λ)] = rank A(λ)   for rank B(λ) = ℓ .     (1.47)
Herein, rank A(λ) = ℓ can only be fulfilled for n ≥ ℓ, and rank B(λ) = ℓ
only for m ≥ ℓ. Especially, Equation (1.46) is valid if the matrix A(λ) is
non-singular, and Equation (1.47) holds if the matrix B(λ) is non-singular.

3. For arbitrary matrices A(λ) ∈ F^{n×m}[λ] we introduce the notation

   def A(λ) = min{n, m} − rank A(λ) = ρA − rank A(λ)

and call it the normal defect of A(λ). Obviously, we always have
def A(λ) ≥ 0.
Arising from this fact and the above considerations, we conclude that the
rank of any matrix does not decrease if it is multiplied from the left by a
vertical or square matrix with defect zero. Analogously, multiplication
from the right by a horizontal or square matrix with defect zero also does not
change the rank.

4. Applying the thoughts used in the proof of Theorem 1.18, the
known statements for number matrices [51] can also be proved for polynomial
matrices.

Theorem 1.19. Let the matrices A(λ), B(λ) and the matrix D(λ) be
connected by

   D(λ) = [ A(λ)  B(λ) ] .

Then,

   rank D(λ) ≤ rank A(λ) + rank B(λ) .                              (1.48)

Theorem 1.20. For any polynomial matrices A(λ) and B(λ) of equal dimen-
sion,

   rank[A(λ) + B(λ)] ≤ rank A(λ) + rank B(λ) .

Remark 1.21. A corresponding relation to the last one was proven for number
matrices in [147].

1.9 Invariant Polynomials and Elementary Divisors

1. Applying (1.41) and (1.42), the Smith canonical form of a polynomial
matrix A(λ) can be written in the form

   S_A(λ) = | diag{ h_1(λ), h_1(λ)h_2(λ), ..., h_1(λ)h_2(λ)···h_ρ(λ) }   O_{ρ,m−ρ}   |      (1.49)
            | O_{n−ρ,ρ}                                                  O_{n−ρ,m−ρ} |

where h_1(λ), ..., h_ρ(λ) are scalar monic polynomials given by the relations

   h_1(λ) = a_1(λ),  h_1(λ)h_2(λ) = a_2(λ),  ...,  h_1(λ)h_2(λ)···h_ρ(λ) = a_ρ(λ) .   (1.50)

2. The polynomials a_i(λ), (i = 1, ..., ρ) configured by (1.42) are called the
invariant polynomials of the matrix A(λ). It was shown that the coincidence of
the sets of their invariant polynomials is not only a necessary but also a sufficient
condition for two polynomial matrices A_1(λ) and A_2(λ) to be equivalent, [51].

3. The monic greatest common divisor of all minors of i-th order of the
matrix A(λ) is named its i-th determinantal divisor. If rank A(λ) = ρ, then
there exist determinantal divisors D_1(λ), D_2(λ), ..., D_ρ(λ);
D_ρ(λ) is named the greatest determinantal divisor. It can be shown that the
set of determinantal divisors is invariant against equivalence transformations
of the matrix A(λ).

4. The invariant polynomials a_i(λ) are connected with the polynomials
D_i(λ) by the relation

   a_i(λ) = D_i(λ) / D_{i−1}(λ) ,   D_0(λ) = 1 ,   (i = 1, ..., ρ) .      (1.51)

Therefore, it follows from (1.51)

   D_1(λ) = a_1(λ),  D_2(λ) = a_1(λ)a_2(λ),  ...,  D_ρ(λ) = a_1(λ)a_2(λ)···a_ρ(λ) .   (1.52)

If Representations (1.49), (1.50) are used, then these relations can be written
in the form

   D_1(λ) = h_1(λ),  D_2(λ) = h_1²(λ)h_2(λ),  ...,  D_ρ(λ) = h_1^ρ(λ)h_2^{ρ−1}(λ)···h_ρ(λ) .

5. Suppose the greatest determinantal divisor D_ρ(λ) is given by
the linear factors

   D_ρ(λ) = (λ − λ_1)^{ν_1} ··· (λ − λ_q)^{ν_q} ,                   (1.53)

where all numbers λ_i are different. We take from (1.51) that every invariant
polynomial a_i(λ) permits a factorisation of the form

   a_i(λ) = (λ − λ_1)^{ν_{1i}} ··· (λ − λ_q)^{ν_{qi}} ,   (i = 1, ..., ρ)   (1.54)

with

   0 ≤ ν_{pi} ≤ ν_{p,i+1} ≤ ν_p ,   (p = 1, ..., q) .

The factors different from one in the expression (1.54) are called elemen-
tary divisors of the polynomial matrix A(λ) in the field ℂ. In general, every
root λ_i is configured to several elementary divisors. It follows from the above
that the set of invariant polynomials uniquely determines the set of el-
ementary divisors. The reverse is also true if the rank of the matrix A(λ) is
known.

Example 1.22. Assume the rank of the matrix A(λ) to be equal to four, and
the whole of its elementary divisors to be

   (λ − 2)² ,  (λ − 2)² ,  λ − 2 ,  λ − 3 ,  λ − 3 ,  λ − 4 .

Then as the set of invariant polynomials, we obtain

   a_4(λ) = (λ−2)²(λ−3)(λ−4) ,  a_3(λ) = (λ−2)²(λ−3) ,  a_2(λ) = λ−2 ,  a_1(λ) = 1 .

Using the set of invariant polynomials, we are able to specify immediately the
Smith canonical form of the matrix A(λ). In the present case we get

   S_A(λ) = diag{ 1 ,  λ−2 ,  (λ−2)²(λ−3) ,  (λ−2)²(λ−3)(λ−4) } .

6. For diagonal and block-diagonal matrices the system of elementary divisors
can be constructed from the elementary divisors of its elements.

Lemma 1.23 ([51]). The system of elementary divisors of any diagonal ma-
trix is the union of the elementary divisors of its elements.

Example 1.24. Let the diagonal matrix

   A(λ) = diag{ λ² ,  λ − 1 ,  λ(λ−1)² ,  λ(λ−1) }

be given. By decomposition of all diagonal elements into factors (1.54), we
obtain the totality of elementary divisors

   λ² ,  λ − 1 ,  λ ,  (λ−1)² ,  λ ,  λ − 1

and, finally, we find the Smith canonical form

   S_A(λ) = diag{ 1 ,  λ(λ−1) ,  λ(λ−1) ,  λ²(λ−1)² } .
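The chain D_i(λ) → a_i(λ) can be traced computationally. The sketch below (sympy assumed; `determinantal_divisors` is a helper written for this illustration, not a library function) computes the determinantal divisors of the diagonal matrix of Example 1.24 as monic gcds of all minors of i-th order and recovers the invariant polynomials via relation (1.51).

```python
import itertools
from functools import reduce
import sympy as sp

lam = sp.symbols('lambda')

def determinantal_divisors(M, var):
    # D_i = monic gcd of all minors of i-th order; a direct transcription
    # of the definition (exponential cost, small examples only)
    n, m = M.shape
    divs = []
    for order in range(1, min(n, m) + 1):
        minors = [M[list(r), list(c)].det()
                  for r in itertools.combinations(range(n), order)
                  for c in itertools.combinations(range(m), order)]
        divs.append(sp.monic(reduce(sp.gcd, minors), var))
    return divs

# Diagonal matrix of Example 1.24
A = sp.diag(lam**2, lam - 1, lam*(lam - 1)**2, lam*(lam - 1))
D = determinantal_divisors(A, lam)

# Invariant polynomials a_i = D_i / D_{i-1}, relation (1.51)
a = [sp.factor(sp.cancel(D[i] / (D[i - 1] if i else 1))) for i in range(4)]
print(a)
```

The printed list reproduces the Smith diagonal of Example 1.24: 1, λ(λ−1), λ(λ−1), λ²(λ−1)².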


Lemma 1.25 ([51]). The system of elementary divisors of the block-diagonal
matrix

   A_d(λ) = diag{ A_1(λ), A_2(λ), ..., A_n(λ) } ,

where the A_i(λ), (i = 1, ..., n) are rectangular matrices of any dimension, is
built by the union of the elementary divisors of its block elements.

Example 1.26. We choose n = 2 and

            | λ 1 0 |              | λ 0 0     |
   A_1(λ) = | 0 λ 1 | ,   A_2(λ) = | 0 1 0     | .
            | 0 0 λ |              | 0 0 λ − a |

We realise immediately that the matrix A_1(λ) possesses the one and only ele-
mentary divisor λ³. The matrix A_2(λ) for a = 0 has the two equal elementary
divisors λ and λ. In case of a ≠ 0, we find for A_2(λ) the two different elemen-
tary divisors λ and λ − a. That is why for a ≠ 0 the totality of elementary
divisors of the matrix A_d(λ) = diag{A_1(λ), A_2(λ)} consists of λ³, λ, λ − a,
and the Smith canonical form comes out as

   S_{A_d}(λ) = diag{ 1, 1, 1, 1, λ, λ³(λ − a) } .

However, in case of a = 0, we find

   S_{A_d}(λ) = diag{ 1, 1, 1, λ, λ, λ³ } .

Remark 1.27. The above example illustrates the fact that the dependence of
the Smith canonical form (or the totality of its elementary divisors) on the
coefficients of the polynomial matrix is numerically unstable.

1.10 Latent Equations and Latent Numbers


1. Let the non-singular matrix A(λ) ∈ F^{n×n}[λ] be given. The polynomial

   d_A(λ) = det A(λ)

is said to be the characteristic polynomial of the matrix A(λ), and the equation

   d_A(λ) = 0                                                       (1.55)

is its characteristic equation. The roots of the characteristic equation are called
the eigenvalues of the matrix A(λ). For A(λ) ∈ F^{n×n}[λ], the characteristic
polynomial d_A(λ) is equivalent to the greatest determinantal divisor D_n(λ).
Therefore, the characteristic equation (1.55) is equivalent to

   D_ρ(λ) = 0 ,                                                     (1.56)

where ρ = n is the normal rank of the matrix A(λ).

2. Let us consider an arbitrary matrix A(λ) ∈ F^{n×m}[λ] having full rank ρ =
ρ_A. For such a matrix, Equation (1.56) always makes sense, and also for n ≠ m it will be
called its latent equation. The roots of Equation (1.56) are named latent roots
(numbers) of the matrix A(λ). Obviously, the latent numbers are equal to the
numbers λ_i that are configured by the factorisation (1.53). The latent roots
of square matrices coincide with their eigenvalues.
Owing to (1.52), the latent equation can be written in the form

   a_1(λ) a_2(λ) ··· a_ρ(λ) = 0 .

Hence it follows that every latent number is the root of at least one invariant
polynomial.

3. In the following we investigate the important relation
between the rank of the polynomial matrix A(λ) and the rank of the number
matrix A(λ̄) that is generated from A(λ) by substituting λ = λ̄, where λ̄ is a
given complex number.

Theorem 1.28. Suppose that the matrix A(λ) possesses the rank ρ = ρ_A.
Then, if λ̄ does not coincide with one of the latent roots, the relation

   rank A(λ̄) = ρ

is true. However, if λ̄ = λ_i with a certain latent number λ_i, then
we have

   rank A(λ_i) = ρ − d_i ,                                          (1.57)

where d_i is the number of different elementary divisors that is connected with
the latent number λ_i.

Proof. If λ̄ is not a latent number, then it follows from (1.41), (1.42)

   rank S_A(λ̄) = ρ .

Due to (1.40), we obtain

   A(λ̄) = p(λ̄) S_A(λ̄) q(λ̄) ,

where we read rank A(λ̄) = rank S_A(λ̄) = ρ, because the multiplication with
the non-singular matrices p(λ̄), q(λ̄) does not change the rank. Now, let λ̄ =
λ_i, where λ_i is a latent number. Then there exists a number d_i ≥ 1, such that

   a_ρ(λ_i) = 0, ..., a_{ρ−d_i+1}(λ_i) = 0,   a_{ρ−d_i}(λ_i) ≠ 0, ..., a_1(λ_i) ≠ 0 .

Obviously, d_i is equal to the number of different elementary divisors of the
latent root λ_i. Hence it follows with the help of (1.40)–(1.42)

   rank A(λ_i) = rank S_A(λ_i) = ρ − d_i ,

which is equivalent to (1.57).

Corollary 1.29. The equation for the defect

   def A(λ_i) = d_i ,   (i = 1, ..., q)

is true.

Corollary 1.30. It follows from Theorem 1.28 that the latent numbers λ_i of
a non-degenerated matrix A(λ) are exactly those numbers λ_i for which

   rank A(λ_i) < ρ_A

becomes valid.
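Theorem 1.28 can be seen at work on the Smith form of Example 1.22. In the sketch below (sympy assumed), substituting each latent number λ_i drops the rank by exactly the number d_i of elementary divisors attached to it, in accordance with (1.57), while a non-latent value keeps the full normal rank.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Smith form of Example 1.22 (normal rank rho = 4)
S = sp.diag(1, lam - 2, (lam - 2)**2*(lam - 3),
            (lam - 2)**2*(lam - 3)*(lam - 4))

rho = S.rank()                   # normal rank
r0 = S.subs(lam, 0).rank()       # 0 is not a latent number
r2 = S.subs(lam, 2).rank()       # d = 3 elementary divisors vanish at 2
r3 = S.subs(lam, 3).rank()       # d = 2 at 3
r4 = S.subs(lam, 4).rank()       # d = 1 at 4

print(rho, r0, r2, r3, r4)
assert (rho, r0) == (4, 4)
assert (r2, r3, r4) == (4 - 3, 4 - 2, 4 - 1)   # rank A(lam_i) = rho - d_i
```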

4. For a non-degenerated matrix A(λ), let the monic greatest common divi-
sor of the minors of ρ_A-th order be equal to 1. In that case, the latent equation
(1.56) has no roots, thus the matrix A(λ) also does not possess latent roots.
Such polynomial matrices are said to be alatent. All invariant polynomials of
an alatent matrix are equal to 1.
Alatent square matrices turn out to be unimodular. For an alatent matrix
A(λ), the number matrix A(λ̄) possesses its maximal rank for all λ̄.

Theorem 1.31. The non-degenerated n × m matrix A(λ) with n < m proves
to be alatent if and only if there exists a unimodular matrix φ(λ) that meets

   A(λ) = [ I_n   O_{n,m−n} ] φ(λ) .

Proof. Under the made suppositions, due to Theorem 1.4, the Hermitian form
A_r(λ) has the shape

   A_r(λ) = [ I_n   O_{n,m−n} ] ,

and the claimed relation emerges from (1.19) for φ(λ) = q^{−1}(λ).
Analogously, we conclude from (1.17) that for n > m the vertical n × m
matrix A(λ) is alatent if and only if

   A(λ) = φ(λ) | I_m       |
               | O_{n−m,m} |

becomes true with a certain unimodular matrix φ(λ).

5. A non-degenerated matrix A(λ) ∈ F^{n×m}[λ] is said to be latent if it has
latent roots. Due to (1.40)–(1.42) it is clear that for n < m a latent matrix
A(λ) allows the representation

   A(λ) = a(λ) b(λ) ,                                               (1.58)

where

   a(λ) = p(λ) diag{ a_1(λ), ..., a_n(λ) } ,

   b(λ) = [ I_n   O_{n,m−n} ] q(λ)

and p(λ), q(λ) are unimodular matrices.
Obviously, det a(λ) ≅ a_1(λ) ··· a_n(λ) is valid. The matrix b(λ) proves to
be alatent, i.e. its rank is equal to n for all λ = λ̄. A corresponding
representation for n > m is also possible.

6.
Theorem 1.32. Suppose the n × m matrix A(λ) to be alatent. Then every
submatrix generated from any of its rows is also alatent.

Proof. Take a positive integer p < n and present the matrix A(λ) in the form

          | a_11(λ)       ...  a_1m(λ)       |
          | ...                ...           |
          | a_p1(λ)       ...  a_pm(λ)       |      | A_p(λ) |
   A(λ) = | a_{p+1,1}(λ)  ...  a_{p+1,m}(λ)  |  =   | A_1(λ) | .     (1.59)
          | ...                ...           |
          | a_n1(λ)       ...  a_nm(λ)       |

It is shown indirectly that the submatrix A_p(λ) above the line turns out to be
alatent. Suppose the contrary. Then owing to (1.58), we get

   A_p(λ) = a_p(λ) b_p(λ) ,

where the matrix a_p(λ) is latent, and ord a_p(λ) > 0. Applying this result to
(1.59),

   A(λ) = | a_p(λ)     O_{p,n−p} |  | b_p(λ) |
          | O_{n−p,p}  I_{n−p}   |  | A_1(λ) |

is acquired. Let λ̄ be an eigenvalue of the matrix a_p(λ), so

   A(λ̄) = | a_p(λ̄)    O_{p,n−p} |  | b_p(λ̄) |
          | O_{n−p,p}  I_{n−p}   |  | A_1(λ̄) |

is valid. Because the rank of the first factor on the right side is smaller than n,
this implies rank A(λ̄) < n, which is in contradiction to the supposed alatency
of A(λ).

Remark 1.33. In the same way, it is shown that any submatrix of an alatent
matrix A(λ) built from any of its columns also becomes alatent.

Corollary 1.34. Every submatrix built from any rows or columns of a
unimodular matrix is alatent.

1.11 Simple Matrices


1. A non-degenerated latent n × m matrix A(λ) of full rank ρ = ρ_A is called
simple, if

   D_ρ(λ) = a_ρ(λ) ,   D_1(λ) = D_2(λ) = ... = D_{ρ−1}(λ) = 1 .

In dependence on the dimension, for a simple matrix A(λ) from (1.40)–(1.42),
we derive the representations

   ρ_A = n = m :   A(λ) = φ(λ) diag{ 1, ..., 1, a_n(λ) } ψ(λ) ,

   ρ_A = n < m :   A(λ) = φ(λ) [ diag{ 1, ..., 1, a_n(λ) }   O_{n,m−n} ] ψ(λ) ,

   ρ_A = m < n :   A(λ) = φ(λ) | diag{ 1, ..., 1, a_m(λ) } | ψ(λ) ,
                               | O_{n−m,m}                 |

where φ(λ) and ψ(λ) are unimodular matrices.

2. From the last relations, we directly deduce the following statements:

a) For a non-degenerated matrix A(λ) to be simple, it is necessary and suffi-
cient that every latent root λ_i is configured to only one elementary divisor.
b) Let the non-degenerated n × m matrix A(λ) of rank ρ_A have the latent
roots λ_1, ..., λ_q. Then for the simplicity of A(λ), the relation

   rank A(λ_i) = ρ_A − 1 ,   (i = 1, ..., q)

or, equivalently, the condition

   def A(λ_i) = 1 ,   (i = 1, ..., q)                               (1.60)

is necessary and sufficient.
c) Another criterion for the membership of a matrix A(λ) in the class of
simple matrices is given by the following theorem.

Theorem 1.35. A necessary and sufficient condition for the simplicity of the
n × n matrix A(λ) is that there exists an n × 1 column B(λ), such that the
matrix L(λ) = [ A(λ)  B(λ) ] becomes alatent.

Proof. Sufficiency: Let the matrix [ A(λ)  B(λ) ] be alatent and λ_i, (i =
1, ..., q) be the eigenvalues of A(λ). Hence it follows

   rank [ A(λ_i)  B(λ_i) ] = n ,   (i = 1, ..., q) .

Hereby, we deduce from Theorem 1.28 that Condition (1.60) needs to be
satisfied if the last conditions are to be fulfilled, i.e. the matrix A(λ) has to
be simple.
Necessity: It is shown that for a simple matrix A(λ), there exists a col-
umn B(λ), such that the matrix [ A(λ)  B(λ) ] becomes alatent. Let us have
det A(λ) = d(λ) and Δ(λ) ≅ d(λ) as the equivalent monic polynomial. Then
the matrix A(λ) can be written in the form

   A(λ) = φ(λ) diag{ 1, ..., 1, Δ(λ) } ψ(λ) ,

where φ(λ), ψ(λ) are unimodular n × n matrices. The matrix Q(λ) of the
shape

   Q(λ) = | I_{n−1}    O_{n−1,1}  O_{n−1,1} |
          | O_{1,n−1}  Δ(λ)       1         |

is obviously alatent, because it has a minor of n-th order that is equal to one.
The matrix Ψ(λ) with

   Ψ(λ) = | ψ(λ)     O_{n,1} |  =  diag{ ψ(λ), 1 }
          | O_{1,n}  1       |

is unimodular. Applying the last two equations, we get

   φ(λ) Q(λ) Ψ(λ) = [ A(λ)  B(λ) ] = L(λ)

with

                 | 0 |
   B(λ) = φ(λ)   | ⋮ |                                              (1.61)
                 | 0 |
                 | 1 |

The matrix L(λ) is alatent per construction.

Remark 1.36. If the matrix φ(λ) is written in the form

   φ(λ) = [ φ_1(λ)  ...  φ_n(λ) ] ,

where φ_1(λ), ..., φ_n(λ) are the corresponding columns, then from (1.61), we
gain

   B(λ) = φ_n(λ) .
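The construction of Theorem 1.35 and Remark 1.36 can be replayed on a small instance. The sketch below (sympy assumed; the unimodular factors φ, ψ and the polynomial Δ are hypothetical choices) builds a simple 2 × 2 matrix, appends the last column of φ(λ) as B(λ), and checks that [A(λ) B(λ)] keeps full row rank at both eigenvalues of A(λ).

```python
import sympy as sp

lam = sp.symbols('lambda')

# Hypothetical unimodular factors and Delta(lambda) ~ det A(lambda)
phi = sp.Matrix([[1, 0], [lam, 1]])
psi = sp.Matrix([[1, lam], [0, 1]])
Delta = lam*(lam - 1)

A = phi * sp.diag(1, Delta) * psi      # a simple 2x2 matrix
B = phi[:, -1]                          # Remark 1.36: last column of phi
L = A.row_join(B)                       # L(lambda) = [A(lambda)  B(lambda)]

# [A B] must keep full row rank at every eigenvalue of A -> alatent
for root in (0, 1):
    assert A.subs(lam, root).rank() == 1     # def A(lam_i) = 1 (simplicity)
    assert L.subs(lam, root).rank() == 2     # Theorem 1.35
print(sp.factor(A.det()))
```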

3. Square simple matrices possess the property of structural stability, which
will be explained by the next theorem.

Theorem 1.37. Let the matrices A(λ) ∈ F^{n×n}[λ], B(λ) ∈ F^{n×n}[λ] be given,
where the matrix A(λ) is simple, but the matrix B(λ) is of any structure.
Furthermore, let us have det A(λ) = d(λ) and

   det[A(λ) + εB(λ)] = d(λ) + d_1(λ, ε) ,                           (1.62)

where d_1(λ, ε) is a polynomial satisfying the condition

   deg d_1(λ, ε) < deg d(λ) .                                       (1.63)

Then there exists a positive number ε_0, such that for |ε| < ε_0 all matrices
A(λ) + εB(λ) are simple.
Proof. The proof splits into several stages.
Lemma 1.38. Let ‖·‖ be a certain norm for finite-dimensional number ma-
trices. Then for any matrix B = [ b_ik ] ∈ F^{n×n} the estimation

   max_{1≤i,k≤n} |b_ik| ≤ κ ‖B‖                                     (1.64)

is true, where κ > 0 is a constant, independent of B.

Proof. Let ‖·‖_1 and ‖·‖_2 be any two norms in the space ℂ^{n×n}. Due to the
finite dimension of ℂ^{n×n}, any two norms are equivalent; that means, for an
arbitrary matrix B, we have

   γ_1 ‖B‖_1 ≤ ‖B‖_2 ≤ γ_2 ‖B‖_1 ,

where γ_1, γ_2 are positive constants not depending on the choice of B.
Take

   ‖B‖_1 = max_{1≤i≤n} Σ_{k=1}^{n} |b_ik| ,

then under the assumption ‖·‖_2 = ‖·‖, we win

   |b_ik| ≤ ‖B‖_1 ≤ γ_1^{−1} ‖B‖ ,

which is adequate to (1.64) with κ = γ_1^{−1}.

Lemma 1.39. Let the matrix A ∈ F^{n×n} be non-singular and ‖·‖ be a certain
norm in F^{n×n}. Then, there exists a positive constant δ_0, such that for ‖B‖ <
δ_0, all matrices A + B become non-singular.

Proof. Assume |b_ik| ≤ κ ‖B‖, where κ > 0 is the constant configured in (1.64).
Then we expand

   det(A + B) = det A + σ(A, B) ,

where σ(A, B) is a scalar function of the elements in A and B. For it, an
estimation

   |σ(A, B)| < α_1 κ ‖B‖ + α_2 κ² ‖B‖² + ... + α_n κⁿ ‖B‖ⁿ

is true, where α_i, (i = 1, ..., n) are constants that do not depend on B. Hence
there exists a number δ_0 > 0, such that ‖B‖ < δ_0 always implies

   |σ(A, B)| < |det A| .

That is why for ‖B‖ < δ_0, the desired relation det(A + B) ≠ 0 holds.

Lemma 1.40. For the matrix A ∈ F^{n×n}, we assume rank A = ρ, and let ‖·‖
be a certain norm in F^{n×n}. Then there exists a positive constant δ_0, such that
for B ∈ F^{n×n} with ‖B‖ < δ_0 always

   rank(A + B) ≥ ρ .

Proof. Let Δ_A be a non-zero minor of order ρ of A. Lemma 1.39 delivers the
existence of a number δ_0 > 0, such that for ‖B‖ < δ_0, the minor of the
matrix A + B corresponding to Δ_A is different from zero. However, this means
that the rank will not reduce after addition of B, but that was claimed by the
lemma.

Proof of Theorem 1.37. Let

   d(λ) = d_0 λᵏ + ... + d_k ,   d_0 ≠ 0

be the characteristic polynomial of the matrix A(λ). Then we obtain from
(1.62) and (1.63)

   det[A(λ) + εB(λ)] = d(λ, ε) = d_0 λᵏ + d_1(ε) λ^{k−1} + ... + d_k(ε) ,

where

   d_i(ε) = d_i + d_{i1} ε + d_{i2} ε² + ... ,   (i = 1, ..., k)

are polynomials in the variable ε with d_i(0) = d_i. Let λ̄ be a root of the
equation

   d(λ, 0) = d(λ) = 0

with multiplicity μ, i.e. an eigenvalue of the matrix A(λ) with multiplicity μ.
Since the matrix A(λ) is simple, we obtain

   rank A(λ̄) = n − 1 .

Hereby, due to Lemma 1.40, it follows the existence of a constant δ̄, such that
for every matrix G ∈ ℂ^{n×n} with ‖G‖ < δ̄ the relation

   rank[A(λ̄) + G] ≥ n − 1                                           (1.65)

is fulfilled. Now consider the equation

   d(λ, ε) = 0 .

As known from [188], for |ε| < ε̄, where ε̄ > 0 is sufficiently small, there exist
continuous functions λ_i(ε), (i = 1, ..., μ), such that

   d(λ_i(ε), ε) = det[A(λ_i(ε)) + εB(λ_i(ε))] = 0 ,                 (1.66)

where some of the functions λ_i(ε) may coincide. Thereby, the limits

   lim_{ε→0} λ_i(ε) = λ̄ ,   (i = 1, ..., μ)

exist, and we can write

   λ_i(ε) = λ̄ + η_i(ε) ,

where η_i(ε) are continuous functions with η_i(0) = 0. Consequently, we get

   A(λ_i(ε)) + εB(λ_i(ε)) = A(λ̄ + η_i(ε)) + εB(λ̄ + η_i(ε)) = A(λ̄) + G_i(ε)

with

   G_i(ε) = εB(λ̄) + L̃_i(ε) ,

and the matrices L̃_i(ε) for |ε| < ε̄ depend continuously on ε, and L̃_i(0) = O_{nn}
holds. Next choose a constant ε̃ > 0 with the property that for |ε| < ε̃ and all
i = 1, ..., μ, the relation

   ‖G_i(ε)‖ = ‖εB(λ̄) + L̃_i(ε)‖ < δ̄

is true. Therefore, we receive for |ε| < ε̃ from (1.65)

   rank[A(λ_i(ε)) + εB(λ_i(ε))] ≥ n − 1 .

On the other side, it follows from (1.66) that for |ε| < ε̄, we have

   rank[A(λ_i(ε)) + εB(λ_i(ε))] ≤ n − 1 .

Comparing the last two inequalities, we find for |ε| < min{ε̃, ε̄}

   rank[A(λ_i(ε)) + εB(λ_i(ε))] = n − 1 .

The above considerations can be made for all eigenvalues of the matrix A(λ);
therefore, Theorem 1.37 is proved by (1.60).

1.12 Pairs of Polynomial Matrices


1. Let us have a(λ) ∈ F^{n×n}[λ], b(λ) ∈ F^{n×m}[λ]. The entirety of both matrices
is called a horizontal pair, and it is designated by (a(λ), b(λ)). On the other
side, if we have a(λ) ∈ F^{m×m}[λ] and c(λ) ∈ F^{n×m}[λ], then we speak about a
vertical pair, and we write [a(λ), c(λ)]. The pairs (a(λ), b(λ)) and [a(λ), c(λ)]
may be configured to the rectangular matrices

   R_h(λ) = [ a(λ)  b(λ) ] ,   R_v(λ) = | a(λ) | ,                  (1.67)
                                        | c(λ) |

where the first one is horizontal, and the second one is vertical. Due to

   R_v′(λ) = [ a′(λ)  c′(λ) ] ,

the properties of vertical pairs can immediately be deduced from the properties
of horizontal pairs. Therefore, we will now consider only horizontal pairs. The
pairs (a(λ), b(λ)), [a(λ), c(λ)] are called non-degenerated if the matrices (1.67)
are non-degenerated. If not explicitly supposed otherwise, we will always con-
sider non-degenerated pairs.

2. Let there exist for the pair (a(λ), b(λ)) a polynomial matrix g(λ), such that

   a(λ) = g(λ) a_1(λ) ,   b(λ) = g(λ) b_1(λ)                        (1.68)

with polynomial matrices a_1(λ), b_1(λ). Then the matrix g(λ) is called a com-
mon left divisor of the pair (a(λ), b(λ)). The common left divisor g(λ) is named
a greatest common left divisor (GCLD) of the pair (a(λ), b(λ)), if for any
common left divisor g_1(λ)

   g(λ) = g_1(λ) φ(λ)

with a polynomial matrix φ(λ) is true. As known, any two GCLD are right-
equivalent [69].

3. If the pair (a(λ), b(λ)) is non-degenerated, then from Theorem 1.4 follows
the existence of a unimodular matrix

   r(λ) = | r_11(λ)  r_12(λ) |                                      (1.69)
          | r_21(λ)  r_22(λ) |

with n × n block r_11(λ) and m × m block r_22(λ), for which

   [ a(λ)  b(λ) ] | r_11(λ)  r_12(λ) |  =  [ N(λ)  O ]              (1.70)
                  | r_21(λ)  r_22(λ) |

holds. As known [69], the matrix N(λ) is a GCLD of the pair (a(λ), b(λ)).

4. The pair (a(λ), b(λ)) is called irreducible if the matrix R_h(λ) in (1.67) is
alatent. From the above considerations, it follows that the pair (a(λ), b(λ)) is
irreducible if and only if there exists a unimodular matrix r(λ) according to
(1.69) with

   [ a(λ)  b(λ) ] r(λ) = [ I_n  O_{nm} ] .

5. Let

   s(λ) = r^{−1}(λ) = | s_11(λ)  s_12(λ) |
                      | s_21(λ)  s_22(λ) |

be a unimodular polynomial matrix. Then we get from (1.70)

   [ a(λ)  b(λ) ] = [ N(λ)  O_{nm} ] | s_11(λ)  s_12(λ) | .
                                     | s_21(λ)  s_22(λ) |

Hence it follows immediately

   a(λ) = N(λ) s_11(λ) ,   b(λ) = N(λ) s_12(λ) ,

which can be written in the form

   [ a(λ)  b(λ) ] = N(λ) [ s_11(λ)  s_12(λ) ] .

Due to Corollary 1.34, the pair (s_11(λ), s_12(λ)) is irreducible. Therefore, the
next statement is true:
If Relation (1.68) is true, and g(λ) is a GCLD of the pair (a(λ), b(λ)),
then the pair (a_1(λ), b_1(λ)) is irreducible.
The reverse statement is also true:
If Relation (1.68) is valid, and the pair (a_1(λ), b_1(λ)) is irreducible, then
the matrix g(λ) is a GCLD of the pair (a(λ), b(λ)).

6. A necessary and sufficient condition for the irreducibility of the pair
(a(λ), b(λ)) with the n × n polynomial matrix a(λ) and the n × m polynomial
matrix b(λ) is the existence of an n × n polynomial matrix X(λ) and an m × n
polynomial matrix Y(λ), such that the relation

   a(λ) X(λ) + b(λ) Y(λ) = I_n                                      (1.71)

becomes true [69].

7. All that has been said up to now can be transferred practically without change
to vertical pairs [a(λ), c(λ)]. In this case, instead of the concepts common
left divisor and GCLD, we introduce the concepts common right divisor and
greatest common right divisor (GCRD). Hereby, if

   p(λ) | a(λ) |  =  | L(λ)   |
        | c(λ) |     | O_{nm} |

is valid with a unimodular matrix p(λ), then L(λ) is a GCRD of the corre-
sponding pair [a(λ), c(λ)]. If L(λ) and L_1(λ) are two GCRD, then they are
related by

   L(λ) = f(λ) L_1(λ) ,

where f(λ) is a unimodular matrix.
The vertical pair [a(λ), c(λ)] is called irreducible if the matrix R_v(λ) in
(1.67) is alatent. The pair [a(λ), c(λ)] turns out to be irreducible if and only
if there exists a unimodular matrix p(λ) with

   p(λ) | a(λ) |  =  | I_m    | .
        | c(λ) |     | O_{nm} |

Immediately, it is seen that the pair [a(λ), c(λ)] is exactly then irreducible,
when there exist polynomial matrices U(λ), V(λ), for which

   U(λ) a(λ) + V(λ) c(λ) = I_m .

8. The above stated irreducibility criteria will now be formulated alternatively.

Theorem 1.41. A necessary and sufficient condition for the pair (a(λ), b(λ))
to be irreducible is the existence of a pair (α_l(λ), β_l(λ)), such that the matrix

   Q_l(λ) = | a(λ)    b(λ)   |
            | α_l(λ)  β_l(λ) |

becomes unimodular.
For the pair [a(λ), c(λ)] to be irreducible, it is necessary and sufficient that
there exists a pair [α_r(λ), β_r(λ)], such that the matrix

   Q_r(λ) = | a(λ)  α_r(λ) |
            | c(λ)  β_r(λ) |

becomes unimodular.

9.
Lemma 1.42. Necessary and sufficient for the irreducibility of the pair
(a(λ), b(λ)), with the n × n and n × m polynomial matrices a(λ) and b(λ),
is the condition

   rank R_h(λ_i) = rank [ a(λ_i)  b(λ_i) ] = n ,   (i = 1, ..., q) ,      (1.72)

where the λ_i are the different eigenvalues of the matrix a(λ).

Proof. Sufficiency: For λ̄ ≠ λ_i, (i = 1, ..., q), we have rank a(λ̄) = n.
Therefore, together with (1.72), the relation rank R_h(λ̄) = n is true for all
finite λ̄. This means, however, that the pair (a(λ), b(λ)) is irreducible.
The necessity of Condition (1.72) is obvious.
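The rank test (1.72) is easy to mechanise. In the sketch below (sympy assumed; the pair is a hypothetical illustration), a diagonal a(λ) with eigenvalues 0 and 1 is paired with two candidate columns: one keeps full rank at both eigenvalues, the other fails at λ = 1.

```python
import sympy as sp

lam = sp.symbols('lambda')

a = sp.diag(lam, lam - 1)
b_good = sp.Matrix([1, 1])
b_bad = sp.Matrix([1, 0])        # its nonzero entry sits in the row that
                                 # stays nonzero at lam = 1, so R_h(1) drops rank

def irreducible(a_mat, b_mat, eigenvalues):
    # Rank test (1.72) at the distinct eigenvalues of a(lambda)
    Rh = a_mat.row_join(b_mat)
    n = a_mat.rows
    return all(Rh.subs(lam, li).rank() == n for li in eigenvalues)

eigs = sp.roots(a.det(), lam)    # {0: 1, 1: 1}
print(irreducible(a, b_good, eigs), irreducible(a, b_bad, eigs))
```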

10.
Lemma 1.43. Let the pair (a(λ), b(λ)) be given with the n × n and n × m
polynomial matrices a(λ), b(λ). Then for the pair (a(λ), b(λ)) to be irreducible,
it is necessary that the matrix a(λ) has not more than m invariant polynomials
different from 1.

Proof. Assume the number of invariant polynomials different from 1 of the
matrix a(λ) to be μ > m. Then it follows from (1.57) that there exists an eigen-
value λ_0 of the matrix a(λ) with rank a(λ_0) = n − μ. Applying Inequality
(1.48), we gain

   rank [ a(λ_0)  b(λ_0) ] ≤ n − μ + m < n ;

that means, the matrix [ a(λ)  b(λ) ] is not alatent and, consequently, the
pair (a(λ), b(λ)) is not irreducible.
Remark 1.44. Obviously, adequate statements as in Lemmata 1.42 and 1.43
could be formulated for vertical pairs too.

1.13 Polynomial Matrices of First Degree (Pencils)


1. For q = 1, n = m the polynomial matrix (1.10) takes the form

   A(λ) = λA + B                                                    (1.73)

with constant n × n matrices A, B. This special structure is also called a
pencil. The pencil A(λ) is non-singular if

   det(λA + B) ≢ 0 .

According to the general definition, the non-singular matrix (1.73) is called
regular for det A ≠ 0 and anomalous for det A = 0. Regular pencils arise
in connection with state space representations, while anomalous pencils are
configured to descriptor systems [109, 34, 182]. All introduced concepts and
statements that were developed for polynomial matrices of general structure
are also valid for pencils (1.73). At the same time, these matrices possess a
number of important additional properties that will be investigated in this
section. In what follows, we only consider non-singular pencils.

2. In accordance with the general definition, the two matrices of equal di-
mension

   A(λ) = λA + B ,   A_1(λ) = λA_1 + B_1                            (1.74)

are called left(right)-equivalent, if there exists a unimodular matrix p(λ)
(q(λ)), such that

   A(λ) = p(λ) A_1(λ) ,   (A(λ) = A_1(λ) q(λ)) .

The matrices (1.74) are equivalent, if they satisfy an equation

   A(λ) = p(λ) A_1(λ) q(λ)

with unimodular matrices p(λ), q(λ). As follows from the above disclosures,
the matrices (1.74) are exactly then left(right)-equivalent, if their Hermitian canon-
ical forms coincide. For the equivalence of the matrices (1.74), it is necessary
and sufficient that their Smith canonical forms coincide.

3. The matrices (1.74) are named strictly equivalent, if there exist constant
non-singular matrices P, Q with

   A(λ) = P A_1(λ) Q .                                              (1.75)

If in (1.74) the conditions det A ≠ 0, det A_1 ≠ 0 are valid, i.e. the matrices are
regular, then the matrices A(λ), A_1(λ) are equivalent exactly when
they are strictly equivalent. If det A = 0 or det A_1 = 0, i.e. the matrices (1.74)
are anomalous, then the conditions for equivalence and strict equivalence do
not coincide.

4. In order to formulate a criterion for the strict equivalence of anomalous
matrices (1.74), following [51], we consider the n × n Jordan block

            | a 1 0 ... 0 0 |
            | 0 a 1 ... 0 0 |
   J_n(a) = | . . . ... . . | ,                                     (1.76)
            | 0 0 0 ... a 1 |
            | 0 0 0 ... 0 a |

where a is a constant.
Theorem 1.45 ([51]). Let

   det A(λ) = det(λA + B) ≢ 0

be given with det A = 0 and

   0 < ord A(λ) = deg det A(λ) = ρ < n .                            (1.77)

Furthermore, let

   (λ − λ_1)^{ν_1}, ..., (λ − λ_q)^{ν_q} ,   ν_1 + ... + ν_q = ρ    (1.78)

be the entirety of elementary divisors of A(λ) in the field ℂ. In what follows,
the elementary divisors (1.78) will be called finite elementary divisors. Then
the matrix A(λ) is strictly equivalent to the matrix

   Ã(λ) = diag{ λI_ρ + A_ρ ,  λA_μ + I_{n−ρ} }                      (1.79)

with

   A_ρ = diag{ J_{ν_1}(λ_1), ..., J_{ν_q}(λ_q) } ,
                                                                    (1.80)
   A_μ = diag{ J_{p_1}(0), ..., J_{p_μ}(0) } ,

where p_1, ..., p_μ are positive integers with p_1 + ... + p_μ = n − ρ. The matrix
A_μ ∈ ℂ^{n−ρ,n−ρ} is nilpotent; that means, there exists an integer k with A_μᵏ =
O_{n−ρ,n−ρ}.

Remark 1.46. The above defined numbers p_1, ..., p_μ are determined by the
infinite elementary divisors of the matrix A(λ), [51]. Thereby, the matrices
(1.74) are strictly equivalent, if their finite and infinite elementary divisors
coincide.

Remark 1.47. Matrix (1.79) can be represented as

   Ã(λ) = λU + V ,                                                  (1.81)

where

   U = diag{ I_ρ , A_μ } ,   V = diag{ A_ρ , I_{n−ρ} } .            (1.82)

As is seen from (1.76) and (1.80)–(1.82), for ρ < n we always obtain
det U = 0, and the matrix Ã(λ) is, generally spoken, not row reduced.
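The structure of (1.79)–(1.82) can be checked on a toy pencil. The sketch below (sympy assumed; the block sizes are hypothetical choices, ρ = 2, n = 4, one infinite block of size 2, and the sign convention follows (1.79) as printed) builds Ã(λ) = λU + V and confirms that deg det Ã(λ) = ρ, that U is singular, and that A_μ is nilpotent.

```python
import sympy as sp

lam = sp.symbols('lambda')

def J(nu, a):
    # nu x nu Jordan block (1.76)
    return sp.Matrix(nu, nu, lambda i, k: a if i == k else (1 if k == i + 1 else 0))

A_rho = J(2, 5)                      # one finite block of size nu_1 = 2
A_mu  = J(2, 0)                      # one infinite block, p_1 = 2 (nilpotent)
U = sp.diag(sp.eye(2), A_mu)         # (1.82)
V = sp.diag(A_rho, sp.eye(2))
pencil = lam*U + V                   # (1.81): diag{lam*I + A_rho, lam*A_mu + I}

d = sp.factor(pencil.det())
print(d)                             # degree rho = 2 < n = 4
print(U.det())                       # 0: the pencil is anomalous
print(A_mu**2 == sp.zeros(2, 2))     # nilpotency of A_mu
```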

5. As any non-singular matrix, also an anomalous matrix (1.73) can be
brought into row reduced form by left equivalence transformations. Hereby,
we obtain for matrices of first degree some further results.

Theorem 1.48. Let Relation (1.77) be true for the non-singular anomalous
matrix (1.73). Then there exists a unimodular matrix P(λ), such that

   P(λ)(λA + B) = Ã(λ) = λÃ + B̃                                    (1.83)

is true with constant matrices

   Ã = | Ã_1       | ,   B̃ = | B̃_1 |                               (1.84)
       | O_{n−ρ,n} |         | B̃_2 |

where Ã_1 and B̃_1 have ρ rows, and B̃_2 has n − ρ rows. Moreover,

   det | Ã_1 | ≠ 0                                                  (1.85)
       | B̃_2 |

is true together with

   deg P(λ) ≤ n − ρ .                                               (1.86)

Proof. We apply the row transformation algorithm of Theorem 1.9 to the
matrix λA + B. Then after a finite number of steps, we arrive at a row reduced
matrix Ã(λ). Due to the fact that the degree of the transformed matrix does
not increase, we conclude deg Ã(λ) ≤ 1. The case deg Ã(λ) = 0 is excluded,
because otherwise the matrix A(λ) would be unimodular, in contradiction to (1.77).
Therefore, only deg Ã(λ) = 1 is possible. Moreover, we prove

   Ã(λ) = λÃ + B̃ = diag{ λ^{α_1}, ..., λ^{α_n} } ( A_0 + A_1 λ^{−1} )    (1.87)

with det A_0 ≠ 0, where each of the numbers α_i, (i = 1, ..., n) is either 0 or 1.
Due to

   α_1 + ... + α_n = ρ ,

among the numbers α_1, ..., α_n there are exactly ρ with the value one, and
the other n − ρ numbers are zero. Without loss of generality, we assume the
succession

   α_1 = α_2 = ... = α_ρ = 1 ,   α_{ρ+1} = α_{ρ+2} = ... = α_n = 0 .

Then the matrix Ã in (1.83) takes the shape (1.84). Furthermore, if the matrix
Ã(λ) is represented in the form (1.87), then with respect to (1.83) and (1.84),
we get

   A_0 = | Ã_1 | .
         | B̃_2 |

Since the matrix Ã(λ) is row reduced, Relation (1.85) arises.
It remains to show Relation (1.86). As follows from (1.36), each step de-
creases the degree of one of the rows of the transformed matrices at least by
one. Hence each row of the matrix A(λ) cannot be transformed more than
once. Therefore, the number of transformation steps is at most n − ρ. Since,
however, in every step the transformation matrix P(λ) is either constant or
of degree one, Relation (1.86) holds.
Corollary 1.49. In the row-reduced form (1.83), n − ρ rows of the matrix
Ã(λ) are constant. Moreover, the rank of the matrix built from these rows is
equal to n − ρ, i.e., these rows are linearly independent.
Example 1.50. Consider the anomalous matrix

                      | 1 1 2 |   | 2 1 3 |   | λ+2  λ+1  2λ+3 |
   A(λ) = λA + B = λ  | 1 1 2 | + | 3 2 5 | = | λ+3  λ+2  2λ+5 |
                      | 1 1 3 |   | 3 2 6 |   | λ+3  λ+2  3λ+6 |

appearing in [51], which is represented in the form

   A(λ) = diag{λ, λ, λ} ( A_0 + A_1 λ^{−1} )

with

   A_0 = A ,   A_1 = B .

In the first transformation step (1.30), we obtain

   γ_1 + γ_2 + γ_3 = 0
   2γ_1 + 2γ_2 + 3γ_3 = 0 .

Now, we can choose γ_1 = 1, γ_2 = −1, γ_3 = 0, and the matrices (1.32) and
(1.35) take the form

                  | 1 −1 0 |
   P_1(λ) = D_1 = | 0  1 0 |
                  | 0  0 1 |

hence

                                          | | −1 −1 −2 |   | 0 0 0 |        |
   A_1(λ) = P_1(λ) A(λ) = diag{1, λ, λ}   | |  1  1  2 | + | 3 2 5 | λ^{−1} | .
                                          | |  1  1  3 |   | 3 2 6 |        |

By appropriate manipulations, these matrices are transformed into

            | 1 0 0 |          | 1 0 0 |
   P_2(λ) = | λ 1 0 | ,  D_2 = | 1 1 0 | .
            | 0 0 1 |          | 0 0 1 |

Finally, we receive via the product

   A_2(λ) = P_2(λ) A_1(λ) = P_2(λ) P_1(λ) A(λ)

the row-reduced matrix

            | −1   −1   −2   |
   A_2(λ) = |  3    2    5   | .
            | λ+3  λ+2  3λ+6 |
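The two transformation steps of Example 1.50 can be verified symbolically. The sketch below (sympy assumed) multiplies out P_2(λ)P_1(λ)A(λ) and checks that the result is row reduced: rows 1 and 2 are constant, row 3 has degree one, and the matrix of highest-row-degree coefficients is non-singular.

```python
import sympy as sp

lam = sp.symbols('lambda')

# Matrices A, B and the transformation steps of Example 1.50
A = sp.Matrix([[1, 1, 2], [1, 1, 2], [1, 1, 3]])
B = sp.Matrix([[2, 1, 3], [3, 2, 5], [3, 2, 6]])
P1 = sp.Matrix([[1, -1, 0], [0, 1, 0], [0, 0, 1]])
P2 = sp.Matrix([[1, 0, 0], [lam, 1, 0], [0, 0, 1]])

A2 = (P2 * P1 * (lam*A + B)).expand()
print(A2)

# Highest-row-degree coefficient matrix: constant rows contribute themselves,
# the degree-one row contributes its leading coefficients
lead = sp.Matrix([list(A2.row(0)), list(A2.row(1)),
                  [sp.Poly(e, lam).LC() for e in A2.row(2)]])
print(lead.det())    # non-zero -> A2 is row reduced
```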


6. Let B be a constant n × n matrix. We assign to this matrix a matrix B(λ)
of degree one by

   B(λ) = λI_n − B ,

which is called the characteristic matrix of B. For polynomial matrices of this
form, all above introduced concepts and statements for polynomial matrices of
general form remain valid. Hereby, the characteristic polynomial of the matrix
B(λ),

   det B(λ) = det(λI_n − B) = d_B(λ) ,

usually is named the characteristic polynomial of the matrix B. In the same
way, we deal with the terminology of minimal polynomials, invariant polyno-
mials, elementary divisors etc. Obviously,

   ord B(λ) = deg det B(λ) = n .

As a consequence of Relation (1.75) for A_1 = A = I_n, we formulate:

Theorem 1.51 ([51]). For two characteristic matrices B(λ) = λI_n − B and
B_1(λ) = λI_n − B_1 to be equivalent, it is necessary and sufficient that the ma-
trices B and B_1 are similar, i.e. the relation

   B_1 = L B L^{−1}

is true with a certain non-singular constant matrix L.



Remark 1.52. Theorem 1.51 implies the following property. If the matrix B
(the matrix λI_n − B) has the entirety of elementary divisors

   (λ − λ_1)^{ν_1}, ..., (λ − λ_q)^{ν_q} ,   ν_1 + ... + ν_q = n ,

then the matrix B is similar to the matrix J of the form

   J = diag{ J_{ν_1}(λ_1), ..., J_{ν_q}(λ_q) } .                    (1.88)

The matrix J is said to be the Jordan (canonical) form or, shortly, the Jordan
matrix of the corresponding matrix B. For any n × n matrix B, the Jordan
matrix is uniquely determined, except for the succession of the diagonal blocks.

7. Let the horizontal pair of constant matrices (A, B) with A, n × n, and
B, n × m, be given. The pair (A, B) is called controllable, if the polynomial pair
(λI_n − A, B) is irreducible. This means that the pair (A, B) is controllable if
and only if the matrix

   R_c(λ) = [ λI_n − A   B ]

is alatent. It is known, see for instance [72, 69], that the pair (A, B) is con-
trollable, if and only if

   rank Q_c(A, B) = n ,

where the matrix Q_c(A, B) is determined by

   Q_c(A, B) = [ B   AB   ...   A^{n−1}B ] .                        (1.89)

The matrix Q_c(A, B) is named the controllability matrix of the pair (A, B).
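The rank test on (1.89) is straightforward to implement. The sketch below (numpy assumed; the pair is a hypothetical single-input example, a double integrator) builds Q_c(A, B) column-block by column-block and checks rank Q_c = n.

```python
import numpy as np

def controllability_matrix(A, B):
    # Q_c(A, B) = [B  AB ... A^(n-1)B], Eq. (1.89)
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical single-input example (double integrator)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0], [1.0]])
Qc = controllability_matrix(A, b)
rank = np.linalg.matrix_rank(Qc)
print(Qc)
print(rank)      # rank n = 2 -> the pair (A, b) is controllable
```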
Some statements regarding the controllability of pairs are listed now:
a) If the pair (A, B) is controllable, and the n × n matrix R is non-singular,
then also the pair (A_1, B_1) with A_1 = RAR^{−1}, B_1 = RB is controllable.
Indeed, from (1.89) we obtain

   Q_c(A_1, B_1) = [ RB   RAB   ...   RA^{n−1}B ] = R Q_c(A, B) ,

from which follows rank Q_c(A_1, B_1) = rank Q_c(A, B) = n, because R is
non-singular.
b) Theorem 1.53. Let the pair (A, B) with the n × n matrix A and the n × m
matrix B be given, and moreover, an n × n matrix L, which is commutative
with A, i.e. AL = LA. Then the following statements are true:
1. If the pair (A, B) is not controllable, then the pair (A, LB) is also not
controllable.
2. If the pair (A, B) is controllable and the matrix L is non-singular,
then the pair (A, LB) is controllable.
3. If the matrix L is singular, then the pair (A, LB) is not controllable.
44 1 Polynomial Matrices

Proof. The controllability matrix of the pair (A, LB) has the shape

    Q_c(A, LB) = [LB   ALB   ...   A^{n-1}LB]
               = L [B   AB   ...   A^{n-1}B] = L Q_c(A, B) ,     (1.90)

where Q_c(A, B) is the controllability matrix (1.89). If the pair (A, B)
is not controllable, then we have rank Q_c(A, B) < n, and there-
fore, rank Q_c(A, LB) < n. Thus the 1st statement is proved. If the
pair (A, B) is controllable and the matrix L is non-singular, then we
have rank Q_c(A, B) = n, rank L = n, and from (1.90) it follows
rank Q_c(A, LB) = n. Hence the 2nd statement is shown. Finally, if the
matrix L is singular, then rank L < n and rank Q_c(A, LB) < n are true,
which proves 3.
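Statement 3 can be observed numerically. In the sketch below (sample data, not from the text), L is a polynomial in A, hence commutes with A, and is singular; the controllability matrix of (A, LB) then loses rank exactly as the theorem predicts:

```python
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# companion matrix of (l-1)(l-2)(l-3) = l^3 - 6 l^2 + 11 l - 6
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [6.0, -11.0, 6.0]])
B = np.array([[0.0], [0.0], [1.0]])

L = A @ A - 3 * A + 2 * np.eye(3)   # (A - I)(A - 2I): commutes with A, singular
assert np.allclose(A @ L, L @ A)

print(np.linalg.matrix_rank(controllability_matrix(A, B)))      # 3
print(np.linalg.matrix_rank(controllability_matrix(A, L @ B)))  # 1
```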
c) Controllable pairs are structurally stable - this is stated in the next theorem.
Theorem 1.54. Let the pair (A, B) be controllable, and (A_1, B_1) be an
arbitrary pair of the same dimension. Then there exists a positive number
ε_0, such that the pair (A + εA_1, B + εB_1) is controllable for all |ε| < ε_0.
Proof. Using (1.89) we obtain

    Q_c(A + εA_1, B + εB_1) = Q_c(A, B) + εQ_1 + ... + ε^n Q_n ,     (1.91)

where the Q_i, (i = 1, ..., n) are constant matrices that do not depend
on ε. Since the pair (A, B) is controllable, the matrix Q_c(A, B) contains
a non-zero minor of n-th order. Then due to Lemma 1.39, for sufficiently
small |ε|, the corresponding minor of the matrix (1.91) also remains dif-
ferent from zero.
Remark 1.55. Non-controllable pairs do not possess the property of struc-
tural stability. If the pair (A, B) is not controllable, then there exists a
pair (A_1, B_1) of equal dimension, such that the pair (A + εA_1, B + εB_1)
for arbitrarily small |ε| > 0 becomes controllable.

8. The vertical pair [A, C] built from the constant m × m matrix A and the
n × m matrix C is called observable, if the vertical pair of polynomial matrices
[λI_m - A, C] is irreducible. Obviously, the pair [A, C] is observable, if and
only if the horizontal pair (A', C') is controllable, where the prime means
the transposition operation. Due to this reason, observable pairs possess all
the properties that have been derived above for controllable pairs. Especially,
observable pairs are structurally stable.

1.14 Cyclic Matrices


1. The constant n × n matrix A is said to be cyclic, if the assigned char-
acteristic matrix λ_A = λI_n - A is simple in the sense of the definition in
Section 1.11, see [69, 78, 191].

Cyclic matrices are provided with the important property of structural


stability, as is substantiated by the next theorem.

Theorem 1.56. Let the cyclic n × n matrix A, and an arbitrary n × n matrix
B be given. Then there exists a positive number ε_0 > 0, such that for |ε| < ε_0
all matrices A + εB are cyclic.

Proof. Let

    det(λI_n - A) = d_A(λ) ,   deg d_A(λ) = n .

Then we obtain

    det(λI_n - A - εB) = d_A(λ) + d_1(λ, ε)

with deg_λ d_1(λ, ε) < n for all ε. Therefore, by virtue of Theorem 1.37, there
exists an ε_0, such that for |ε| < ε_0 the matrix λI_n - A - εB remains simple,
i.e. the matrix A + εB is cyclic.

2. Square constant matrices that are not cyclic will in the following be called
composed. Composed matrices are not equipped with the property of structural
stability in the above defined sense. For any composed matrix A, we can find
a matrix B, such that the sum A + εB becomes cyclic, however small |ε| > 0 is
chosen. Moreover, the sum A + εB will remain composed only in some special
cases. This fact is illustrated by a 2 × 2 matrix in the next example.

Example 1.57. As follows from Theorem 1.51, any composed 2 × 2 matrix A
is similar to the matrix

    B = [ a  0 ] = aI_2 ,     (1.92)
        [ 0  a ]

where a = const., so we have

    A = L B L^{-1} = B .

Therefore, the set of all composed matrices in C^{2×2} is determined by Formula
(1.92) for any a. Assume now the matrix Q = A + εF to be composed. Then

    Q = [ q  0 ]
        [ 0  q ]

is true, and hence

    εF = Q - B = [ q - a    0   ]
                 [   0    q - a ]

becomes a composed matrix. Thus, when the 2 × 2 matrix A is composed, then
the sum A + εF remains composed, if and only if the matrix F is composed
too.

3. The property of structural stability of cyclic matrices allows a probability-
theoretic interpretation. For instance, the following statement is true:
Let A ∈ F^{n×n} be a composed matrix and B ∈ F^{n×n} any random matrix
whose entries b_ik are independent and uniformly distributed in a certain
interval. Then the sum A + B with probability 1 becomes a cyclic matrix.
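This statement can be made plausible with a short experiment. The sketch below (sample data; it relies on the fact that distinct eigenvalues are sufficient for cyclicity, since each eigenvalue then carries exactly one Jordan block) perturbs the composed matrix I_2 randomly:

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.eye(2)                        # aI_2: the prototype of a composed matrix
B = rng.uniform(-1.0, 1.0, (2, 2))   # random matrix with independent entries
eigs = np.linalg.eigvals(A + 1e-3 * B)

# distinct eigenvalues imply that the minimal polynomial equals the
# characteristic polynomial, i.e. A + 1e-3*B is cyclic (probability 1)
print(abs(eigs[0] - eigs[1]) > 1e-9)
```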

4. The property of structural stability has great practical importance. In-
deed, let for instance the differential equation of a certain linear process be
given in the form

    dx/dt = Ax + Bu ,   A = A_0 + ΔA ,

where x is the state vector, and A_0, ΔA are constant matrices, where A_0 is
cyclic. The matrix ΔA manifests the unavoidable errors during the set-up and
calculation of the matrix A. From Theorem 1.56 we conclude that the matrix
A remains cyclic, if the deviation ΔA satisfies the conditions of Theorem 1.56.
If however, the matrix A_0 is composed, then this property can be lost due to
the imprecision characterised by the matrix ΔA, however tiny this ever has
been with respect to the norm.

5. Assume

    d(λ) = λ^n + d_1 λ^{n-1} + ... + d_n     (1.93)

to be a monic polynomial. Then the n × n matrix A_F of the form

          [   0       1        0      ...    0      0   ]
          [   0       0        1      ...    0      0   ]
    A_F = [   :       :        :       .     :      :   ]     (1.94)
          [   0       0        0      ...    0      1   ]
          [ -d_n   -d_{n-1} -d_{n-2}  ...  -d_2   -d_1  ]

is called its accompanying (horizontal) Frobenius matrix with respect to the
polynomial d(λ). Moreover, we consider the vertical accompanying Frobenius
matrix

    [ 0   0   ...   0   0   -d_n     ]
    [ 1   0   ...   0   0   -d_{n-1} ]
    [ 0   1   ...   0   0   -d_{n-2} ]     (1.95)
    [ :   :    .    :   :     :      ]
    [ 0   0   ...   1   0   -d_2     ]
    [ 0   0   ...   0   1   -d_1     ]
The properties of the matrices (1.94) and (1.95) are analogous, so that we may
restrict ourselves to the investigation of (1.94). The characteristic matrix of A_F
has the form

                  [  λ     -1      0     ...    0      0    ]
                  [  0      λ     -1     ...    0      0    ]
    λI_n - A_F =  [  :      :      :      .     :      :    ]     (1.96)
                  [  0      0      0     ...    λ     -1    ]
                  [ d_n  d_{n-1} d_{n-2} ...   d_2   λ+d_1  ]


Appending to Matrix (1.96) the column b = [0 ... 0 1]', we receive the
extended matrix

    [  λ     -1      0     ...    0      0     0 ]
    [  0      λ     -1     ...    0      0     0 ]
    [  :      :      :      .     :      :     : ]
    [  0      0      0     ...    λ     -1     0 ]
    [ d_n  d_{n-1} d_{n-2} ...   d_2   λ+d_1   1 ]

This matrix is alatent, because it has a minor of n-th order that is equal to
(-1)^{n-1}. According to Theorem 1.35, Matrix (1.96) is simple, and therefore,
the matrix A_F is cyclic.

6. By direct calculation we recognise

    det(λI_n - A_F) = λ^n + d_1 λ^{n-1} + ... + d_n = d(λ) .

According to the properties of simple matrices, we conclude that the entirety of
invariant polynomials corresponding to the matrix A_F is presented by

    a_1(λ) = a_2(λ) = ... = a_{n-1}(λ) = 1 ,   a_n(λ) = d(λ) .
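The identity det(λI_n - A_F) = d(λ) is easily verified symbolically; the coefficients in the sketch below are arbitrary sample values:

```python
import sympy as sp

lam = sp.symbols('lambda')
d1, d2, d3 = 2, -5, 7                       # sample coefficients of d(lambda)
d = lam**3 + d1 * lam**2 + d2 * lam + d3

A_F = sp.Matrix([[0, 1, 0],
                 [0, 0, 1],
                 [-d3, -d2, -d1]])          # Frobenius matrix (1.94) for n = 3

char_poly = (lam * sp.eye(3) - A_F).det()
print(sp.expand(char_poly - d))             # 0
```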

Let A be any cyclic n × n matrix. Then the characteristic matrix λ_A =
λI_n - A is simple. Therefore, by applying equivalence transformations, λ_A
might be brought into the form

    λI_n - A = p(λ) diag{1, 1, ..., 1, d(λ)} q(λ) ,

where the matrices p(λ), q(λ) are unimodular, and d(λ) is the characteristic
polynomial of the matrix A. From the last equation, we conclude that the set
of invariant polynomials of the cyclic matrix A coincides with the set of invari-
ant polynomials of the accompanying Frobenius matrix of its characteristic
polynomial d(λ). Hereby, the matrices λI_n - A and λI_n - A_F are equivalent,
hence the matrices A and A_F are similar, i.e.

    A = L A_F L^{-1}

is true with a certain non-singular matrix L. It can be shown that in case of
a real matrix A, also the matrix L can be chosen real.

7. As just defined in (1.76), let

             [ a  1  0  ...  0  0 ]
             [ 0  a  1  ...  0  0 ]
             [ 0  0  a  ...  0  0 ]
    J_n(a) = [ :  :  :   .   :  : ]
             [ 0  0  0  ...  a  1 ]
             [ 0  0  0  ...  0  a ]

be a Jordan block. The matrix J_n(a) turns out to be cyclic, because the matrix

    [ λ-a   -1     0    ...    0     0     0 ]
    [  0    λ-a   -1    ...    0     0     0 ]
    [  :     :     :     .     :     :     : ]
    [  0     0     0    ...   λ-a   -1     0 ]
    [  0     0     0    ...    0    λ-a    1 ]

is alatent.
Let us represent the polynomial (1.93) in the form

    d(λ) = (λ - λ_1)^{μ_1} ... (λ - λ_q)^{μ_q} ,

where all numbers λ_i are different. Consider the matrix

    J = diag{J_{μ_1}(λ_1), ..., J_{μ_q}(λ_q)}     (1.97)

and its accompanying characteristic matrix

    λI_n - J = diag{λI_{μ_1} - J_{μ_1}(λ_1), ..., λI_{μ_q} - J_{μ_q}(λ_q)} ,     (1.98)

where the corresponding diagonal blocks take the shape

                              [ λ-λ_i   -1      0    ...    0      0   ]
                              [   0    λ-λ_i   -1    ...    0      0   ]
    λI_{μ_i} - J_{μ_i}(λ_i) = [   :      :      :     .     :      :   ]     (1.99)
                              [   0      0      0    ...  λ-λ_i   -1   ]
                              [   0      0      0    ...    0    λ-λ_i ]

Obviously, we have

    det[λI_{μ_i} - J_{μ_i}(λ_i)] = (λ - λ_i)^{μ_i} ,

so that from (1.98), we obtain

    det(λI_n - J) = (λ - λ_1)^{μ_1} ... (λ - λ_q)^{μ_q} = d(λ) .

At the same time, using (1.98) and (1.99), we find

    rank(λ_i I_n - J) = n - 1 ,   (i = 1, ..., q) ,

which means that Matrix (1.98) is simple, i.e. the matrix J is cyclic. Therefore,
Matrix (1.97) is similar to the accompanying Frobenius matrix of the
polynomial (1.93), thus

    J = L A_F L^{-1} ,

where L in general is a complex non-singular matrix.
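This similarity can be replayed for a concrete polynomial. A sketch for the sample choice d(λ) = (λ-1)²(λ-2) = λ³ - 4λ² + 5λ - 2: the Jordan form of its Frobenius matrix consists of the blocks J_2(1) and J_1(2):

```python
import sympy as sp

# Frobenius matrix of d(l) = (l-1)^2 (l-2) = l^3 - 4 l^2 + 5 l - 2
A_F = sp.Matrix([[0, 1, 0],
                 [0, 0, 1],
                 [2, -5, 4]])

P, J = A_F.jordan_form()      # A_F = P J P^{-1}
print(J)                      # diag{J_2(1), J_1(2)}, up to block ordering
print(P * J * P.inv() == A_F) # True
```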

1.15 Simple Realisations and Their Structural Stability


1. The triple of matrices a(λ), b(λ), c(λ) of dimensions p × p, p × m, n × p,
according to [69] and others, is called a polynomial matrix description (PMD)

    τ(λ) = (a(λ), b(λ), c(λ)) .     (1.100)

The integers n, p, m are the dimension of the PMD. In dependence on the
membership of the entries of the matrices a(λ), b(λ), c(λ) to the sets F[λ],
R[λ], C[λ], the sets of all PMDs with dimension n, p, m are denoted by F_npm[λ],
R_npm[λ], C_npm[λ], respectively.
A PMD (1.100) is called minimal, if the pairs (a(λ), b(λ)), [a(λ), c(λ)] are
irreducible.

2. A PMD of the form

    τ(λ) = (λI_p - A, B, C) ,     (1.101)

where A, B, C are constant matrices, is said to be elementary. Every
elementary PMD (1.101) is characterised by a triple of constant matrices
A, (p × p); B, (p × m); C, (n × p). The triple (A, B, C) is called a realisation of
the linear process in state space, or shortly a realisation. The numbers n, p, m are
named the dimension of the elementary realisation. The set of all realisations
with given dimension is denoted by F_npm, R_npm, C_npm, respectively.
Suppose the p × p matrix Q to be non-singular. Then the realisations
(A, B, C) and (QAQ^{-1}, QB, CQ^{-1}) are called similar.

3. The realisation (A, B, C) is called minimal, if the pair (A, B) is control-
lable and the pair [A, C] is observable, i.e. the elementary PMD (1.101) is
minimal. A minimal realisation with a cyclic matrix A is called a simple re-
alisation. The set of all minimal realisations of a given dimension will be
symbolised by F̂_npm, Ĉ_npm, R̂_npm respectively, and the set of all simple re-
alisations by F^s_npm, R^s_npm, C^s_npm. For a simple realisation (A, B, C) ∈ R^s_npm,
there always exists a similar realisation (Q_J A Q_J^{-1}, Q_J B, C Q_J^{-1}) ∈ C^s_npm,
where the matrix Q_J A Q_J^{-1} is of Jordan canonical form. Such a simple realisation is
called a Jordan realisation. Moreover, for this realisation, there exists a sim-
ilar realisation (Q_F A Q_F^{-1}, Q_F B, C Q_F^{-1}) ∈ R^s_npm, where the matrix Q_F A Q_F^{-1}
is a Frobenius matrix of the form (1.94). Such a simple realisation is called a
Frobenius realisation.

4. Simple realisations possess the important property of structural stability,


as the next theorem states.

Theorem 1.58. Let the realisation (A, B, C) of dimension n, p, m be simple,
and (A_1, B_1, C_1) be an arbitrary realisation of the same dimension. Then there
exists an ε_0 > 0, such that the realisation (A + εA_1, B + εB_1, C + εC_1) for all
|ε| < ε_0 remains simple.

Proof. Since the pair (A, B) is controllable and the pair [A, C] is observable,
there exists, owing to Theorem 1.54, an ε_1 > 0, such that the pair (A +
εA_1, B + εB_1) becomes controllable and the pair [A + εA_1, C + εC_1] observable
for all |ε| < ε_1. Furthermore, due to Theorem 1.56, there exists an ε_2 > 0,
such that the matrix A + εA_1 becomes cyclic for all |ε| < ε_2. Consequently, for
|ε| < min(ε_1, ε_2) = ε_0 all realisations (A + εA_1, B + εB_1, C + εC_1) are simple.

Remark 1.59. Realisations that are not simple are not provided with the prop-
erty of structural stability. For instance, from the above considerations we
come to the following conclusion:
Let the realisation (A, B, C) be not simple, and (A_1, B_1, C_1) be a random
realisation of equal dimension, where the entries of the matrices A_1, B_1, C_1
are altogether statistically independent and uniformly distributed in a certain
interval. Then the realisation (A + A_1, B + B_1, C + C_1) will be simple
with probability 1.

5. Theorem 1.58 has fundamental importance for developing methods on the
base of a mathematical description of linear time-invariant multivariable sys-
tems. The dynamics of such systems are described in continuous time by
state-space equations of the form

    y = Cx ,   dx/dt = Ax + Bu ,     (1.102)

corresponding to the realisation (A, B, C). In practical investigations, we al-
ways will meet A = A_0 + ΔA, B = B_0 + ΔB, C = C_0 + ΔC, where (A_0, B_0, C_0)
is the nominal realisation and the realisation (ΔA, ΔB, ΔC) characterises in-
accuracies due to finite word length etc. Now, if the nominal realisation is
simple, then at least for sufficiently small deviations (ΔA, ΔB, ΔC), the sim-
plicity is preserved. Analogous considerations are possible for the description
of the dynamics of discrete-time systems, where

    y_k = Cx_k ,   x_{k+1} = Ax_k + Bu_k     (1.103)

is used.
If however, the nominal realisation (A, B, C) is not simple, then the struc-
tural properties will, roughly speaking, not be preserved even for tiny deviations.

6. In principle, in many cases we can find suitable bounds of disturbances
for which a simple realisation remains simple. For instance, let the matrices
A_1, B_1, C_1 depend continuously on a scalar parameter ε, such that A_1 =
A_1(ε), B_1 = B_1(ε), C_1 = C_1(ε) with A_1(0) = O_pp, B_1(0) = O_pm, C_1(0) =
O_np. Now, if the parameter ε increases from zero to positive values, then
the realisation (A + A_1(ε), B + B_1(ε), C + C_1(ε)) for 0 ≤ ε < ε_0 remains
simple, where ε_0 is the smallest positive number for which at least one of the
following conditions takes place:
a) The pair (A + A_1(ε_0), B + B_1(ε_0)) is not controllable.
b) The pair [A + A_1(ε_0), C + C_1(ε_0)] is not observable.
c) The matrix A + A_1(ε_0) is not cyclic.
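The three conditions a)-c) suggest a direct numerical test for simplicity of a realisation. The sketch below (sample data; the distinct-eigenvalue test used here is only a sufficient criterion for cyclicity) combines the controllability, observability and cyclicity checks:

```python
import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_simple_realisation(A, B, C, tol=1e-9):
    n = A.shape[0]
    controllable = np.linalg.matrix_rank(ctrb(A, B), tol=tol) == n
    # [A, C] is observable iff (A', C') is controllable
    observable = np.linalg.matrix_rank(ctrb(A.T, C.T), tol=tol) == n
    # distinct eigenvalues are sufficient for cyclicity; a repeated eigenvalue
    # would require a minimal-polynomial test instead
    eigs = np.sort_complex(np.linalg.eigvals(A))
    cyclic = bool(np.all(np.abs(np.diff(eigs)) > tol))
    return controllable and observable and cyclic

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(is_simple_realisation(A, B, C))           # True
print(is_simple_realisation(np.eye(2), B, C))   # False: eye(2) is composed
```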
2 Fractional Rational Matrices

2.1 Rational Fractions


1. A fractional rational (rat.) function, or shortly a rational fraction, means
the ratio of two polynomials

    φ(λ) = m(λ)/d(λ) ,     (2.1)

where d(λ) ≠ 0. In dependence on the coefficient sets for the numerator and
denominator polynomials in (2.1), the corresponding set of rational fractions
is designated by F(λ), C(λ) or R(λ), respectively. Hereby, rational fractions
in R(λ) are named real. Over the set of rational functions, various algebraic
operations can be explained.

2. Two rational fractions

    φ_1(λ) = m_1(λ)/d_1(λ) ,   φ_2(λ) = m_2(λ)/d_2(λ)     (2.2)

are considered as equal, and we write φ_1(λ) = φ_2(λ) for that, when

    m_1(λ)d_2(λ) - m_2(λ)d_1(λ) = 0 .     (2.3)

Let in particular

    m_2(λ) = a(λ)m_1(λ) ,   d_2(λ) = a(λ)d_1(λ)

with a polynomial a(λ); then (2.3) is fulfilled, and the fractions

    φ_1(λ) = m_1(λ)/d_1(λ) ,   φ_2(λ) = a(λ)m_1(λ)/[a(λ)d_1(λ)]

are equal in the sense of the above definition. Immediately, it follows that the
rational fraction (2.1) does not change if the numerator and denominator are

cancelled by the same factor. Any polynomial f(λ) can be represented in the
form

    f(λ) = f(λ)/1 .

Therefore, for polynomial rings the relations F[λ] ⊂ F(λ), C[λ] ⊂ C(λ),
R[λ] ⊂ R(λ) are true.

3. Let the fraction (2.1) be given, and let g(λ) be the GCD of the numerator
m(λ) and the denominator d(λ), such that

    m(λ) = g(λ)m_1(λ) ,   d(λ) = g(λ)d_1(λ)

with coprime m_1(λ), d_1(λ). Then we have

    φ(λ) = m_1(λ)/d_1(λ) .     (2.4)

Notation (2.4) is called an irreducible form of the rational fraction. Further-
more, assume

    d_1(λ) = d_0 λ^n + d_1 λ^{n-1} + ... + d_n ,   d_0 ≠ 0 .

Then, the numerator and denominator of (2.4) can be divided by d_0, yielding

    φ(λ) = m̄(λ)/d̄(λ) .

Herein the numerator and denominator are coprime polynomials, and besides,
the polynomial d̄(λ) is monic. This representation of a rational fraction will
be called its standard form. The standard form of a rational fraction is unique.

4. The sum of rational fractions (2.2) is defined by the formula

    φ_1(λ) + φ_2(λ) = m_1(λ)/d_1(λ) + m_2(λ)/d_2(λ)
                    = [m_1(λ)d_2(λ) + m_2(λ)d_1(λ)] / [d_1(λ)d_2(λ)] .

5. The product of rational fractions (2.2) is explained by the relation

    φ_1(λ)φ_2(λ) = m_1(λ)m_2(λ) / [d_1(λ)d_2(λ)] .

6. In algebra, it is proved that the sets of rational fractions C(λ), R(λ) with
the above explained rules for addition and multiplication form fields. The zero
element of those fields proves to be the fraction 0/1; the unit element is the
rational fraction 1/1. If we have m(λ) ≢ 0 in (2.1), then the inverse element
φ^{-1}(λ) is determined by the formula

    φ^{-1}(λ) = d(λ)/m(λ) .


7. The integer ind φ, for which the finite limit

    lim_{λ→∞} φ(λ) λ^{ind φ} = φ_0 ≠ 0

exists, is called the index of the rational fraction (2.1). In case of ind φ = 0
(ind φ > 0), the fraction is called proper (strictly proper). In case of ind φ ≥ 0,
the fraction is said to be at least proper. In case of ind φ < 0, the fraction φ(λ) is
named improper. If the rational fraction φ(λ) is represented in the form (2.1)
and we introduce deg m(λ) = μ, deg d(λ) = ν, then the fraction is proper,
strictly proper or at least proper, if the corresponding relation μ = ν, μ < ν
or μ ≤ ν is true. The zero rational fraction is defined as strictly proper.

8. Any fraction (2.1) can be written in the shape

    φ(λ) = r(λ)/d(λ) + q(λ)     (2.5)

with polynomials r(λ), q(λ), where deg r(λ) < deg d(λ), such that the first
summand on the right side of (2.5) is strictly proper. The representation (2.5)
is unique. Practically, the polynomials r(λ) and q(λ) can be found in the
following way. Using (1.6), we uniquely receive

    m(λ) = d(λ)q(λ) + r(λ)

with deg r(λ) < deg d(λ). Inserting the last relation into (2.1), we get (2.5).
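With a computer algebra system, the polynomials q(λ) and r(λ) in (2.5) come straight from polynomial division; the fraction below is an arbitrary sample:

```python
import sympy as sp

lam = sp.symbols('lambda')
m = lam**3 + 2 * lam - 1
d = lam**2 - 1

q, r = sp.div(m, d, lam)          # m = d*q + r with deg r < deg d
print(q, r)                       # lam  3*lam - 1
phi = r / d + q                   # representation (2.5)
print(sp.simplify(phi - m / d))   # 0
```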

9. The sum, the difference and the product of strictly proper fractions are
also strictly proper. The totality of strictly proper fractions builds a commu-
tative ring without unit element.

10. Let the strictly proper rational fraction

    φ(λ) = m(λ) / [d_1(λ)d_2(λ)]

be given, where the polynomials d_1(λ) and d_2(λ) are coprime. Then we can
find a separation

    φ(λ) = m_1(λ)/d_1(λ) + m_2(λ)/d_2(λ) ,     (2.6)

where both fractions on the right side are strictly proper. Hereby, the poly-
nomials m_1(λ) and m_2(λ) are determined uniquely.

11. A separation of the form (2.6) can be generalised as follows. Let the
strictly proper fraction φ(λ) possess the shape

    φ(λ) = m(λ) / [d_1(λ)d_2(λ) ... d_n(λ)] ,

where all polynomials in the denominator are pairwise coprime. Then there
exists a unique representation of the form

    φ(λ) = m_1(λ)/d_1(λ) + m_2(λ)/d_2(λ) + ... + m_n(λ)/d_n(λ) ,     (2.7)

where all fractions on the right side are strictly proper. In particular, let the
strictly proper irreducible fraction

    φ(λ) = m(λ)/d(λ)

with

    d(λ) = (λ - λ_1)^{μ_1} ... (λ - λ_q)^{μ_q}

be given, where all λ_i are different. Introduce (λ - λ_i)^{μ_i} = d_i(λ) and apply
(2.7); then we obtain a representation of the form

    φ(λ) = Σ_{i=1}^{q} m_i(λ)/(λ - λ_i)^{μ_i} ,   deg m_i(λ) < μ_i .     (2.8)

The representation (2.8) is unique.

12. Furthermore, we can show that the fractions of the form

    φ_i(λ) = m_i(λ)/(λ - λ_i)^{μ_i} ,   deg m_i(λ) < μ_i

can be uniquely presented in the form

    φ_i(λ) = m_{i1}/(λ - λ_i)^{μ_i} + m_{i2}/(λ - λ_i)^{μ_i - 1} + ... + m_{iμ_i}/(λ - λ_i) ,

where the m_{ij} are certain constants. Inserting this relation into (2.8), we get

    φ(λ) = Σ_{i=1}^{q} [ m_{i1}/(λ - λ_i)^{μ_i} + m_{i2}/(λ - λ_i)^{μ_i - 1} + ... + m_{iμ_i}/(λ - λ_i) ] ,     (2.9)

which is named a partial fraction expansion. A representation of the form
(2.9) is unique.

13. For calculating the coefficients m_{ik} of the partial fraction expansion (2.9),
the formula

    m_{ik} = 1/(k-1)! · d^{k-1}/dλ^{k-1} [ m(λ)(λ - λ_i)^{μ_i} / d(λ) ] |_{λ=λ_i}     (2.10)

can be used. The coefficients (2.10) are closely connected to the expansion of
the function

    η_i(λ) = m(λ)(λ - λ_i)^{μ_i} / d(λ)

into a Taylor series in powers of (λ - λ_i), which exists because the function
η_i(λ) is analytical in the point λ = λ_i. Assume for instance

    η_i(λ) = η_{i1} + η_{i2}(λ - λ_i) + ... + η_{iμ_i}(λ - λ_i)^{μ_i - 1} + ...

with

    η_{ik} = 1/(k-1)! · d^{k-1}η_i(λ)/dλ^{k-1} |_{λ=λ_i} .

Comparing this expression with (2.10) yields

    m_{ik} = η_{ik} .
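Both the expansion (2.9) and the coefficient formula (2.10) are reproduced by a computer algebra system; the fraction below is a sample with a double pole at λ = 1:

```python
import sympy as sp

lam = sp.symbols('lambda')
phi = (3 * lam + 1) / ((lam - 1)**2 * (lam + 2))

print(sp.apart(phi, lam))    # the partial fraction expansion (2.9)

# m_{i1} for the pole lam_i = 1 of multiplicity 2, via (2.10) with k = 1
m11 = sp.limit(phi * (lam - 1)**2, lam, 1)
print(m11)                   # 4/3
```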

14. Let

    φ(λ) = m(λ) / [d_1(λ)d_2(λ)]     (2.11)

be any rational fraction, where the polynomials d_1(λ) and d_2(λ) are coprime.
Moreover, we assume

    φ(λ) = m̃(λ) / [d_1(λ)d_2(λ)] + q(λ)

for a representation of φ(λ) in the form (2.5), where m̃(λ), q(λ) are polynomials
with deg m̃(λ) < deg d_1(λ) + deg d_2(λ). Since d_1(λ), d_2(λ) are coprime, there
exists a unique decomposition

    m̃(λ) / [d_1(λ)d_2(λ)] = m_1(λ)/d_1(λ) + m_2(λ)/d_2(λ) ,

where the fractions on the right side are strictly proper. Altogether, for (2.11)
we get the unique representation

    φ(λ) = m_1(λ)/d_1(λ) + m_2(λ)/d_2(λ) + q(λ) ,     (2.12)

where deg m_1(λ) < deg d_1(λ) and deg m_2(λ) < deg d_2(λ) are valid.
Let g(λ) be any polynomial. Then (2.12) can be written in the shape

    φ(λ) = [m_1(λ)/d_1(λ) + g(λ)] + [m_2(λ)/d_2(λ) + q(λ) - g(λ)] ,     (2.13)

which is equivalent to

    φ(λ) = φ_1(λ) + φ_2(λ) = n_1(λ)/d_1(λ) + n_2(λ)/d_2(λ) ,     (2.14)

where

    n_1(λ) = m_1(λ) + g(λ)d_1(λ) ,
    n_2(λ) = m_2(λ) + [q(λ) - g(λ)] d_2(λ) .     (2.15)

The representation of a rational fraction φ(λ) in the form (2.14) is called a
separation with respect to the polynomials d_1(λ) and d_2(λ).
From (2.14), (2.15) it follows that a separation is always possible, but not
uniquely determined. It is shown now that Formulae (2.14), (2.15) include all
possible separations.
Indeed, if (2.14) holds, then the division of the fractions on the right side
of (2.14) by the denominators yields

    φ(λ) = [k_1(λ)/d_1(λ) + q_1(λ)] + [k_2(λ)/d_2(λ) + q_2(λ)]

with deg k_1(λ) < deg d_1(λ), deg k_2(λ) < deg d_2(λ). Comparing the last equa-
tion with (2.12) and bearing in mind the uniqueness of the representation (2.12),
we get k_1(λ) = m_1(λ), k_2(λ) = m_2(λ) and q_1(λ) + q_2(λ) = q(λ). While as-
signing q_1(λ) = g(λ), q_2(λ) = q(λ) - g(λ), we realise that the representation
(2.14) takes the form (2.13).
Selecting g(λ) = 0 in (2.13), we obtain the separation

    φ_1(λ) = m_1(λ)/d_1(λ) ,   φ_2(λ) = m_2(λ)/d_2(λ) + q(λ) ,     (2.16)

where the fraction φ_1(λ) is strictly proper. Analogously, we find for g(λ) = q(λ)
the separation

    φ_1(λ) = m_1(λ)/d_1(λ) + q(λ) ,   φ_2(λ) = m_2(λ)/d_2(λ) ,     (2.17)

where the fraction φ_2(λ) is strictly proper. Separation (2.16) is called min-
imal with respect to d_1(λ), and Separation (2.17) minimal with respect to
d_2(λ). From the above exposition, it follows that the minimal separations are
uniquely determined. An important special case arises for strictly proper frac-
tions (2.11). Then we receive q(λ) = 0 in (2.12), and the minimal separations
(2.16) and (2.17) coincide.
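The minimal separations can be computed mechanically: first the division (2.5), then the residues of the strictly proper remainder. A sketch for a sample improper fraction with the simple factors d_1 = λ - 1, d_2 = λ + 2:

```python
import sympy as sp

lam = sp.symbols('lambda')
d1, d2 = lam - 1, lam + 2
m = lam**3 + 1
phi = m / (d1 * d2)

q, r = sp.div(m, sp.expand(d1 * d2), lam)   # (2.5): phi = r/(d1*d2) + q
m1 = (r / d2).subs(lam, 1)                  # residue at the pole of d1
m2 = (r / d1).subs(lam, -2)                 # residue at the pole of d2

phi1 = m1 / d1                              # separation (2.16), minimal w.r.t. d1
phi2 = m2 / d2 + q
print(phi1, phi2)
print(sp.simplify(phi1 + phi2 - phi))       # 0
```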

2.2 Rational Matrices


1. An n × m matrix L(λ) of the form

           [ φ_11(λ)  ...  φ_1m(λ) ]
    L(λ) = [    :      :      :    ]     (2.18)
           [ φ_n1(λ)  ...  φ_nm(λ) ]

is called a fractional rational, or shortly rational, matrix, if all its entries are
fractional rational functions of the form (2.1). If φ_ik ∈ F(λ) (or C(λ), R(λ)), then the cor-
responding set of matrices (2.18) is denoted by F_nm(λ) (or C_nm(λ), R_nm(λ)),
respectively. In the following considerations, we mainly assume matrices in
R_nm[λ] and R_nm(λ), which practically arise in all technological applications.
But it is mentioned that most of the results derived below also hold for ma-
trices in F_nm(λ), C_nm(λ). Rational matrices in R_nm(λ) are named real.
By writing all elements of Matrix (2.18) over the common denominator, the
matrix can be denoted in the form

    L(λ) = Ñ(λ)/d̃(λ) ,     (2.19)

where the matrix Ñ(λ) is a polynomial matrix, and d̃(λ) is a scalar polyno-
mial. In description (2.19), the matrix Ñ(λ) is called the numerator and the
polynomial d̃(λ) the denominator of the matrix L(λ). Without loss of gener-
ality, we will assume that the polynomial d̃(λ) in (2.19) is monic and has the
linear factorisation

    d̃(λ) = (λ - λ_1)^{μ_1} ... (λ - λ_q)^{μ_q} .     (2.20)

The fraction (2.19) is called irreducible, if with respect to the factorisation
(2.20)

    Ñ(λ_i) ≠ O_nm ,   (i = 1, ..., q) .

However, if for at least one i, 1 ≤ i ≤ q,

    Ñ(λ_i) = O_nm

becomes true, then the fraction (2.19) is named reducible. The last equation
is fulfilled, if every element of the matrix Ñ(λ) is divisible by λ - λ_i. After
performing all possible cancellations, we always arrive at a representation of
the form

    L(λ) = N(λ)/d(λ) ,     (2.21)

where the fraction on the right side is irreducible, and the polynomial d(λ) is
monic. This representation of a rational matrix (2.18) is named its standard
form. The standard form of a rational matrix is uniquely determined. In the
following, we will always assume rational matrices in standard form if nothing
else is stated.

Example 2.1. Let the rational matrix

           [ (5λ+3)/(λ-2)            1/(λ-3)      ]
    L(λ) = [                                      ]     (2.22)
           [ (2λ+1)/[(λ-2)(λ-3)]    (2λ-4)/(λ-2)  ]

be given that can be represented in the form (2.19)

    L(λ) = 1/[(λ-2)(λ-3)] · [ (5λ+3)(λ-3)    λ-2          ] ,     (2.23)
                            [ 2λ+1           (2λ-4)(λ-3)  ]

where

    N(λ) = [ (5λ+3)(λ-3)    λ-2          ] ,
           [ 2λ+1           (2λ-4)(λ-3)  ]     (2.24)

    d(λ) = (λ-2)(λ-3) .

According to

    N(2) = [ -13  0 ] ,   N(3) = [ 0  1 ] ,
           [   5  0 ]            [ 7  0 ]

the fraction (2.23) is irreducible. Since the polynomial d(λ) in (2.24) is monic,
the expression (2.23) establishes the standard form of the rational matrix
L(λ).

If the denominator d(λ) in (2.21) has the shape (2.20), then the numbers
λ_1, ..., λ_q are called the poles of the matrix L(λ), and the numbers μ_1, ..., μ_q
are their multiplicities.

2.3 McMillan Canonical Form


1. Let the rational n × m matrix L(λ) be given in the standard form (2.21).
The matrix N(λ) is written in Smith canonical form (1.41):

    N(λ) = p(λ) [ diag{a_1(λ), ..., a_ρ(λ)}   O_{ρ,m-ρ}   ] q(λ) .     (2.25)
                [ O_{n-ρ,ρ}                   O_{n-ρ,m-ρ} ]

Then we produce from (2.21)

    L(λ) = p(λ) M_L(λ) q(λ) ,     (2.26)

where

    M_L(λ) = [ M(λ)       O_{ρ,m-ρ}   ]     (2.27)
             [ O_{n-ρ,ρ}  O_{n-ρ,m-ρ} ]

and

    M(λ) = diag{ a_1(λ)/d(λ), ..., a_ρ(λ)/d(λ) } .     (2.28)

Executing all possible cancellations in (2.28), we arrive at

    M(λ) = diag{ ε_1(λ)/ψ_1(λ), ..., ε_ρ(λ)/ψ_ρ(λ) } ,     (2.29)

where the ε_i(λ), ψ_i(λ), (i = 1, ..., ρ) are coprime monic polynomials, such
that ε_{i+1}(λ) is divisible by ε_i(λ), and ψ_i(λ) is divisible by ψ_{i+1}(λ).
Matrix (2.27), where M(λ) is represented in the form (2.29), is designated
as the McMillan (canonical) form of the rational matrix L(λ). The McMillan
form of an arbitrary rational matrix is uniquely determined.
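The invariant polynomials a_i(λ) needed in (2.28) can be obtained from the determinantal divisors D_k(λ) (the monic GCD of all k × k minors), since a_k = D_k/D_{k-1}. The sketch below applies this to a small sample matrix (chosen for a clean result, not from the text) and prints the McMillan diagonal entries of (2.29):

```python
import sympy as sp
from itertools import combinations
from functools import reduce

lam = sp.symbols('lambda')

def invariant_polynomials(N):
    # a_k(l) = D_k(l)/D_{k-1}(l), D_k = monic gcd of all k x k minors of N
    r = N.rank()
    D = [sp.Integer(1)]
    for k in range(1, r + 1):
        minors = [N[list(rows), list(cols)].det()
                  for rows in combinations(range(N.rows), k)
                  for cols in combinations(range(N.cols), k)]
        g = reduce(sp.gcd, minors)
        D.append(sp.Poly(g, lam).monic().as_expr())
    return [sp.cancel(D[k] / D[k - 1]) for k in range(1, r + 1)]

# sample standard form N(l)/d(l)
d = lam**2
N = sp.Matrix([[lam, 0],
               [0, lam**2 * (lam + 1)]])

for a in invariant_polynomials(N):
    print(sp.cancel(a / d))   # entries eps_i/psi_i of M(l): 1/lam, lam + 1
```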

2. The polynomial

    ψ_L(λ) = ψ_1(λ) ... ψ_ρ(λ)     (2.30)

is said to be the McMillan denominator of the matrix L(λ), and the polyno-
mial

    ε_L(λ) = ε_1(λ) ... ε_ρ(λ)     (2.31)

its McMillan numerator. The non-negative number

    Mdeg L(λ) = deg ψ_L(λ)     (2.32)

is called the McMillan degree of the matrix L(λ), or shortly its degree.

3.
Lemma 2.2. For a rational matrix L(λ) in standard form (2.21), the fraction

    χ(λ) = ψ_L(λ)/d(λ)     (2.33)

turns out to be a polynomial.

Proof. It is shown that under the actual assumptions the polynomials a_1(λ)
and d(λ) are coprime, such that the fraction a_1(λ)/d(λ) is irreducible. Indeed,
let us assume the contrary, such that

    a_1(λ)/d(λ) = b_1(λ)/ψ_1(λ) ,

where deg ψ_1(λ) < deg d(λ). Since the polynomial ψ_1(λ) is divisible by the
polynomials ψ_2(λ), ..., ψ_ρ(λ), we obtain from (2.29)

    M(λ) = diag{ b_1(λ)/ψ_1(λ), ..., b_ρ(λ)/ψ_1(λ) } ,

where b_1(λ), ..., b_ρ(λ) are polynomials. Inserting this relation and (2.27) into
(2.26), we arrive at the representation

    L(λ) = N_1(λ)/ψ_1(λ) ,

where N_1(λ) is a polynomial matrix, and deg ψ_1(λ) < deg d(λ). But this
inequality contradicts our assumption on the irreducibility of the standard
form (2.21). This conflict proves the correctness of ψ_1(λ) = d(λ), and from
(2.30) arises (2.33).
From Lemma 2.2, for a denominator d(λ) of the form (2.20), we deduce
the relation

    ψ_L(λ) = (λ - λ_1)^{ν_1} ... (λ - λ_q)^{ν_q} = d(λ)ψ_2(λ) ... ψ_ρ(λ) ,     (2.34)

where ν_i ≥ μ_i, (i = 1, ..., q). The number ν_i is called the McMillan multiplic-
ity of the pole λ_i.
From (2.34) and (2.32) arises

    Mdeg L(λ) = ν_1 + ... + ν_q .

4.
Lemma 2.3. For any matrix L(λ), assuming (2.26), (2.27), we obtain

    deg d(λ) ≤ Mdeg L(λ) ≤ ρ deg d(λ) .

Proof. The left side of the claimed inequality establishes itself as a conse-
quence of Lemma 2.2. The right side is seen immediately from (2.28), because
under the assumption that all fractions a_i(λ)/d(λ) are irreducible, we obtain

    ψ_L(λ) = [d(λ)]^ρ .
5.
Lemma 2.4. Let L(λ) in (2.21) be an n × n matrix with rank N(λ) = n. Then

    det L(λ) = c ε_L(λ)/ψ_L(λ) ,   c = const. ≠ 0 .     (2.35)

Proof. For n = m and rank N(λ) = n, from (2.26)-(2.29) it follows

    L(λ) = p(λ) diag{ ε_1(λ)/d(λ), ε_2(λ)/ψ_2(λ), ..., ε_n(λ)/ψ_n(λ) } q(λ) .

Calculating the determinant on the right side of this equation according to
(2.30) and (2.31) yields Formula (2.35) with c = det p(λ) det q(λ).

2.4 Matrix Fraction Description (MFD)


1. Let the rational n × m matrix L(λ) be given in the standard form (2.21).
We suppose the existence of a non-singular n × n polynomial matrix a_l(λ)
with

    a_l(λ)L(λ) = a_l(λ)N(λ)/d(λ) = b_l(λ)

and an n × m polynomial matrix b_l(λ). In this case, we call the polynomial
matrix a_l(λ) a left reducing polynomial of the matrix L(λ). Considering the
last equation, we gain the representation

    L(λ) = a_l^{-1}(λ) b_l(λ) ,     (2.36)

which is called an LMFD (left matrix fraction description) of the matrix L(λ).
Analogously, if there exists a non-singular m × m matrix a_r(λ) with

    L(λ)a_r(λ) = N(λ)a_r(λ)/d(λ) = b_r(λ)

and a polynomial n × m matrix b_r(λ), we call the representation

    L(λ) = b_r(λ) a_r^{-1}(λ)     (2.37)

a right MFD (RMFD) of the matrix L(λ), [69, 68], and the matrix a_r(λ) is
named its right reducing polynomial.

2. The polynomials a_l(λ) and b_l(λ) in the LMFD (2.36) are called left denom-
inator and right numerator, and the polynomials a_r(λ), b_r(λ) of the RMFD
(2.37) its right denominator and left numerator, respectively. Obviously, the
set of left reducing polynomials of the matrix L(λ) coincides with the set
of its left denominators, and the same is true for the set of right reducing
polynomials and the set of right denominators.
Example 2.5. Let the matrices

    L(λ) = 1/[(λ-2)(λ-3)] · [ 2λ              λ^2 + λ - 2   ]
                            [ λ^2 - 7λ + 18   -λ^2 + 7λ - 2 ]

and

    a_l(λ) = [ λ-4   1 ]
             [ λ-6   λ ]

be given. Then by direct calculation, we obtain

    a_l(λ)L(λ) = [ 3   λ+1 ] = b_l(λ) ,
                 [ λ    2  ]

such that L(λ) = a_l^{-1}(λ) b_l(λ).
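The computation in Example 2.5 can be replayed symbolically; the sketch also confirms that det a_l(λ) equals the denominator (λ-2)(λ-3):

```python
import sympy as sp

lam = sp.symbols('lambda')
d = (lam - 2) * (lam - 3)
L = sp.Matrix([[2 * lam, lam**2 + lam - 2],
               [lam**2 - 7 * lam + 18, -lam**2 + 7 * lam - 2]]) / d
a_l = sp.Matrix([[lam - 4, 1],
                 [lam - 6, lam]])

b_l = sp.simplify(a_l * L)      # polynomial matrix: the LMFD numerator
print(b_l)
print(sp.factor(a_l.det()))     # equals (lam - 2)*(lam - 3)
```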

3. For any matrix L(λ) (2.21), there always exist LMFDs and RMFDs. In-
deed, take

    a_l(λ) = d(λ)I_n ,   b_l(λ) = N(λ) ;

then the rational matrix (2.21) can be written in form of an LMFD (2.36),
where

    det a_l(λ) = [d(λ)]^n ,

and therefore

    deg det a_l(λ) = ord a_l(λ) = n deg d(λ) .

In the same way, we see that

    a_r(λ) = d(λ)I_m ,   b_r(λ) = N(λ)

is an RMFD (2.37) of Matrix (2.21), where ord a_r(λ) = m deg d(λ).
However, as will be shown in future examples, in most cases we are inter-
ested in LMFDs or RMFDs with the lowest possible ord a_l(λ) or ord a_r(λ).

4. In connection with the above demand, the problem arises to construct
an LMFD or RMFD, where det a_l(λ) or det a_r(λ) have the minimal possible
degrees. Those MFDs are called irreducible. In what follows, we speak about
irreducible left MFDs (ILMFDs) and irreducible right MFDs (IRMFDs). The
following statements are well known [69].

Statement 2.1 An LMFD (2.36) is an ILMFD, if and only if the pair
(a_l(λ), b_l(λ)) is irreducible, i.e. the matrix [a_l(λ)  b_l(λ)] is alatent.

Statement 2.2 An RMFD (2.37) is an IRMFD, if and only if the pair
[a_r(λ), b_r(λ)] is irreducible, i.e. the matrix

    [ a_r(λ) ]
    [ b_r(λ) ]

is alatent.

Statement 2.3 If the n × m matrix L(λ) possesses the two LMFDs

    L(λ) = a_l1^{-1}(λ) b_l1(λ) = a_l2^{-1}(λ) b_l2(λ)

and the pair (a_l1(λ), b_l1(λ)) is irreducible, then there exists a non-singular
n × n polynomial matrix g(λ) with

    a_l2(λ) = g(λ) a_l1(λ) ,   b_l2(λ) = g(λ) b_l1(λ) .

Furthermore, if the pair (a_l2(λ), b_l2(λ)) is also irreducible, then the matrix
g(λ) is unimodular.

Remark 2.6. A corresponding statement is true for right MFDs.

5. The theoretical equipment for constructing ILMFDs and IRMFDs is
founded on using the canonical form of McMillan. Indeed, from (2.27) and
(2.29), we get

    M_L(λ) = ã_l^{-1}(λ) b̃(λ) = b̃(λ) ã_r^{-1}(λ)     (2.38)

with

    ã_l(λ) = diag{d(λ), ψ_2(λ), ..., ψ_ρ(λ), 1, ..., 1}   (n × n) ,
    ã_r(λ) = diag{d(λ), ψ_2(λ), ..., ψ_ρ(λ), 1, ..., 1}   (m × m) ,     (2.39)

    b̃(λ) = [ diag{ε_1(λ), ..., ε_ρ(λ)}   O_{ρ,m-ρ}   ] .
            [ O_{n-ρ,ρ}                  O_{n-ρ,m-ρ} ]

Inserting (2.38) and (2.39) in (2.26), we obtain an LMFD (2.36) and an RMFD
(2.37) with

    a_l(λ) = ã_l(λ) p^{-1}(λ) ,   b_l(λ) = b̃(λ) q(λ) ,
    a_r(λ) = q^{-1}(λ) ã_r(λ) ,   b_r(λ) = p(λ) b̃(λ) .     (2.40)

In [69] it is stated that the pairs (a_l(λ), b_l(λ)) and [a_r(λ), b_r(λ)] are irre-
ducible, i.e. by using (2.40), Relations (2.36) and (2.37) generate ILMFDs
and IRMFDs of the matrix L(λ).

6. If Relations (2.36) and (2.37) define ILMFDs and IRMFDs of the matrix L(λ), then it follows from (2.40) and Statement 2.3 that the matrices a_l(λ) and a_r(λ) possess equal invariant polynomials different from one. Herein,

det a_l(λ) ∼ det a_r(λ) ∼ d(λ) ψ_2(λ) ⋯ ψ_ρ(λ) = Δ_L(λ) ,

where Δ_L(λ) is the McMillan denominator of the matrix L(λ). Besides, the last relation together with (2.32) yields

ord a_l(λ) = ord a_r(λ) = Mdeg L(λ) .    (2.41)

Moreover, we recognise from (2.40) that the matrices b_l(λ) and b_r(λ) in the ILMFD (2.36) and the IRMFD (2.37) are equivalent.

7.
Lemma 2.7. Let ã_l(λ) (ã_r(λ)) be a left (right) reducing polynomial for the matrix L(λ) with ord ã_l(λ) = γ (ord ã_r(λ) = γ). Then

γ ≥ Mdeg L(λ) .

Proof. Let us have the ILMFD (2.36). Then due to Statement 2.3, we have

ã_l(λ) = g(λ) a_l(λ) ,    (2.42)

where the matrix g(λ) is non-singular, from which the claim directly follows.

8. A number of auxiliary statements about general properties of MFDs should now be given.

Lemma 2.8. Let an LMFD

L(λ) = a_l1^{-1}(λ) b_l1(λ)

be given. Then there exists an RMFD

L(λ) = b_r1(λ) a_r1^{-1}(λ)

with det a_l1(λ) ∼ det a_r1(λ). The reverse statement is also true.

Proof. Let the ILMFD and IRMFD

L(λ) = a_l^{-1}(λ) b_l(λ) = b_r(λ) a_r^{-1}(λ)

be given. Then with (2.42), we have

a_l1(λ) = g_l(λ) a_l(λ) ,

where the matrix g_l(λ) is non-singular. Let det g_l(λ) = h(λ) and choose an m × m matrix g_r(λ) with det g_r(λ) ∼ h(λ). Then using

a_r1(λ) = a_r(λ) g_r(λ) ,   b_r1(λ) = b_r(λ) g_r(λ) ,

we obtain an RMFD of the desired form.

9.
Lemma 2.9. Let the PMD of dimension n, p, m

τ(λ) = (a(λ), b(λ), c(λ))

be given, where the pair (a(λ), b(λ)) is irreducible. Then, if we have an ILMFD

c(λ) a^{-1}(λ) = a_1^{-1}(λ) c_1(λ) ,    (2.43)

the pair (a_1(λ), c_1(λ)b(λ)) is irreducible.

On the other side, if the pair [a(λ), c(λ)] is irreducible, and we have an IRMFD

a^{-1}(λ) b(λ) = b_1(λ) a_2^{-1}(λ) ,    (2.44)

then the pair [a_2(λ), c(λ)b_1(λ)] is irreducible.

Proof. Since the pair (a(λ), b(λ)) is irreducible, owing to (1.71), there exist polynomial matrices X(λ), Y(λ) with

a(λ)X(λ) + b(λ)Y(λ) = I_p .

In analogy, the irreducibility of the pair (a_1(λ), c_1(λ)) implies the existence of polynomial matrices U(λ) and V(λ) with

a_1(λ)U(λ) + c_1(λ)V(λ) = I_n .    (2.45)

Using the last two equations, we find

a_1(λ)U(λ) + c_1(λ)V(λ) = a_1(λ)U(λ) + c_1(λ) I_p V(λ)
= a_1(λ)U(λ) + c_1(λ)[a(λ)X(λ) + b(λ)Y(λ)]V(λ) = I_n ,

which, due to (2.43), may be written in the form

a_1(λ)[U(λ) + c(λ)X(λ)V(λ)] + c_1(λ)b(λ)[Y(λ)V(λ)] = I_n .

From this equation, by virtue of (1.71), it is evident that the pair (a_1(λ), c_1(λ)b(λ)) is irreducible.

In the same manner, it can be shown that the pair [a_2(λ), c(λ)b_1(λ)] is irreducible.

Remark 2.10. The reader finds in [69] an equivalent statement to Lemma 2.9 in modified form.

10.
Lemma 2.11. Let the pair (a_1(λ)a_2(λ), b(λ)) be irreducible. Then the pair (a_1(λ), b(λ)) is also irreducible. Analogously, we have: if the pair [a_1(λ)a_2(λ), c(λ)] is irreducible, then the pair [a_2(λ), c(λ)] is also irreducible.

Proof. Produce

L(λ) = a_2^{-1}(λ) a_1^{-1}(λ) b(λ) = [a_1(λ)a_2(λ)]^{-1} b(λ) .

Due to our supposition, the right side of this equation is an ILMFD. Therefore, regarding (2.41), we get

Mdeg L(λ) = ord[a_1(λ)a_2(λ)] = ord a_1(λ) + ord a_2(λ) .    (2.46)

Suppose the pair (a_1(λ), b(λ)) to be reducible. Then there would exist an ILMFD

a_3^{-1}(λ) b_1(λ) = a_1^{-1}(λ) b(λ) ,

where ord a_3(λ) < ord a_1(λ), and we obtain

L(λ) = a_2^{-1}(λ) a_3^{-1}(λ) b_1(λ) = [a_3(λ)a_2(λ)]^{-1} b_1(λ) .

From this equation it follows that a_3(λ)a_2(λ) is a left reducing polynomial for L(λ). Therefore, Lemma 2.7 implies

Mdeg L(λ) ≤ ord[a_3(λ)a_2(λ)] < ord a_1(λ) + ord a_2(λ) .

This relation contradicts (2.46), which is why the pair (a_1(λ), b(λ)) has to be irreducible. The second part of the lemma is shown analogously.

11. The subsequent Lemmata state further properties of the denominator and the McMillan degree.

Lemma 2.12. Let a matrix of the form

L(λ) = c(λ) a^{-1}(λ) b(λ)    (2.47)

be given with polynomial matrices a(λ), b(λ), c(λ), where the pairs (a(λ), b(λ)) and [a(λ), c(λ)] are irreducible. Then

Δ_L(λ) ∼ det a(λ)

is true, and thus

Mdeg L(λ) = ord a(λ) .    (2.48)

Proof. Build the ILMFD

c(λ) a^{-1}(λ) = a_1^{-1}(λ) c_1(λ) .    (2.49)

Since by supposition the left side of (2.49) is an IRMFD, we have

det a(λ) ∼ det a_1(λ) .    (2.50)

Using (2.49), we obtain from (2.47)

L(λ) = a_1^{-1}(λ)[c_1(λ)b(λ)] .

Due to Lemma 2.9, the right side of this equation is an ILMFD, and because of (2.50), we get

Δ_L(λ) ∼ det a_1(λ) ∼ det a(λ) .

Relation (2.48) now follows directly from (2.32).

Lemma 2.13. Let

L(λ) = L_1(λ) L_2(λ)    (2.51)

be given with rational matrices L_1(λ), L_2(λ), L(λ), and let Δ_L1(λ), Δ_L2(λ), Δ_L(λ) be their accompanying McMillan denominators. Then the expression

φ(λ) = Δ_L1(λ) Δ_L2(λ) / Δ_L(λ)

realises as a polynomial.

Proof. Let the ILMFD

L(λ) = a^{-1}(λ) b(λ)    (2.52)

and in addition the ILMFDs

L_i(λ) = a_i^{-1}(λ) b_i(λ) ,  (i = 1, 2)    (2.53)

be given. Then

Δ_L(λ) ∼ det a(λ) ,   Δ_Li(λ) ∼ det a_i(λ) ,  (i = 1, 2) .    (2.54)

Equation (2.51) with (2.53) implies

L(λ) = a_1^{-1}(λ) b_1(λ) a_2^{-1}(λ) b_2(λ) .    (2.55)

Owing to Lemma 2.8, there exists an LMFD

a_3^{-1}(λ) b_3(λ) = b_1(λ) a_2^{-1}(λ) ,    (2.56)

where

det a_3(λ) ∼ det a_2(λ) ∼ Δ_L2(λ) .

Using (2.55) and (2.56), we find

L(λ) = a_1^{-1}(λ) a_3^{-1}(λ) b_3(λ) b_2(λ) = a_4^{-1}(λ) b_4(λ) ,    (2.57)

where

a_4(λ) = a_3(λ) a_1(λ) ,   b_4(λ) = b_3(λ) b_2(λ) .

Per construction, we get

det a_4(λ) ∼ Δ_L1(λ) Δ_L2(λ) .    (2.58)

Relations (2.52) and (2.57) define LMFDs of the matrix L(λ), where (2.52) is an ILMFD. Therefore, the relation

a_4(λ) = g(λ) a(λ)

holds with an n × n polynomial matrix g(λ). From the last equation it arises that the object

det a_4(λ)/det a(λ) = det g(λ)

is a polynomial. Finally, this equation together with (2.54) and (2.58) yields the claim of the lemma.
Remark 2.14. From Lemma 2.13 under supposition (2.51), we get

Mdeg[L_1(λ) L_2(λ)] ≤ Mdeg L_1(λ) + Mdeg L_2(λ) .

In the following investigations, we will call the matrices L_1(λ) and L_2(λ) independent, when the equality sign takes place in the last relation.
Lemma 2.15. Let L(λ) ∈ F_nm(λ), G(λ) ∈ F_nm[λ] and

L_1(λ) = L(λ) + G(λ)

be given. Then we have

Δ_L1(λ) = Δ_L(λ)    (2.59)

and therefore,

Mdeg L_1(λ) = Mdeg L(λ) .    (2.60)

Proof. Start with the ILMFD (2.52). Then the matrix

R_h(λ) = [a(λ)  b(λ)]    (2.61)

is alatent. By using (2.52), we build the LMFD

L_1(λ) = a^{-1}(λ)[b(λ) + a(λ)G(λ)]    (2.62)

for the matrix L_1(λ), to which the horizontal matrix

R_1h(λ) = [a(λ)  b(λ) + a(λ)G(λ)]

is configured. The identity

R_1h(λ) = R_h(λ) [ I_n     G(λ) ]
                 [ O_{mn}  I_m  ]

is easily proved. The first factor on the right side is the alatent matrix R_h(λ) and the second factor is a unimodular matrix. Therefore, the product is also alatent and consequently, (2.62) is an ILMFD, which implies

Mdeg L_1(λ) = ord a(λ) = Mdeg L(λ) ,

and Equation (2.60) follows.

Lemma 2.16. For the matrix L(λ) ∈ F_nm(λ), let an ILMFD (2.52) be given, and let the matrix L_1(λ) be determined by

L_1(λ) = L(λ) D(λ) ,

where the non-singular matrix D(λ) ∈ F_mm[λ] should be free of eigenvalues that coincide with eigenvalues of the matrix a(λ) in (2.52). Then the relation

L_1(λ) = a^{-1}(λ)[b(λ)D(λ)]    (2.63)

defines an ILMFD of the matrix L_1(λ), and Equations (2.59), (2.60) are fulfilled.

Proof. Let an ILMFD (2.52) and the set λ_1, ..., λ_q of eigenvalues of a(λ) be given. Since Matrix (2.61) is alatent, we gain

rank R_h(λ_i) = rank [a(λ_i)  b(λ_i)] = n ,  (i = 1, ..., q) .

Consider the LMFD (2.63) and the accompanying matrix

R_1h(λ) = [a(λ)  b(λ)D(λ)] .

The latent numbers of the matrix R_1h(λ) belong to the set of numbers λ_1, ..., λ_q. But for any 1 ≤ i ≤ q, we have

R_1h(λ_i) = R_h(λ_i) F(λ_i) ,

where the matrix F(λ) has the form

F(λ) = [ I_n     O_{nm} ]
       [ O_{mn}  D(λ)   ] .

Under the supposed conditions, rank F(λ_i) = n + m is valid; that means, the matrix F(λ_i) is non-singular, which implies

rank R_1h(λ_i) = n ,  (i = 1, ..., q) .

Therefore, the matrix R_1h(λ) satisfies Condition (1.72), and Lemma 1.42 guarantees that Relation (2.63) delivers an ILMFD of the matrix L_1(λ). From this fact we conclude the validity of (2.59), (2.60).

12.
Lemma 2.17. Let the irreducible rational matrix

L(λ) = N(λ)/(d_1(λ) d_2(λ))    (2.64)

be given, where N(λ) is an n × m polynomial matrix, and d_1(λ), d_2(λ) are coprime scalar polynomials. Moreover, let the ILMFDs

L̃_1(λ) = N(λ)/d_1(λ) = a_1^{-1}(λ) b_1(λ) ,   L̃_2(λ) = b_1(λ)/d_2(λ) = a_2^{-1}(λ) b_2(λ)

exist. Then the expression

L(λ) = [a_2(λ) a_1(λ)]^{-1} b_2(λ)

turns out to be an ILMFD of Matrix (2.64).

Proof. The proof immediately follows from Formulae (2.25)–(2.29), because the polynomials d_1(λ) and d_2(λ) are coprime.

13.
Lemma 2.18. Let irreducible representations of the form (2.21)

L_i(λ) = N_i(λ)/d_i(λ) ,  (i = 1, 2)    (2.65)

with n × m polynomial matrices N_i(λ) be given, where the polynomials d_1(λ) and d_2(λ) are coprime. Then we have

Mdeg[L_1(λ) + L_2(λ)] = Mdeg L_1(λ) + Mdeg L_2(λ) .    (2.66)



Proof. Proceed from the ILMFDs

L_i(λ) = ã_i^{-1}(λ) b̃_i(λ) ,  (i = 1, 2) .    (2.67)

Then under the actual assumptions, the matrices ã_1(λ) and ã_2(λ) have no common eigenvalues, and they satisfy

Mdeg L_i(λ) = ord ã_i(λ) ,  (i = 1, 2) .    (2.68)

Using (2.65), we arrive at

L(λ) = L_1(λ) + L_2(λ) = [N_1(λ) d_2(λ) + N_2(λ) d_1(λ)] / [d_1(λ) d_2(λ)] ,

where the fraction on the right side is irreducible. Consider the matrix

L̃_1(λ) = L(λ) d_2(λ) = [N_1(λ)/d_1(λ)] d_2(λ) + N_2(λ) .

Applying (2.67), we obtain

L̃_1(λ) = ã_1^{-1}(λ)[b̃_1(λ) d_2(λ) + ã_1(λ) N_2(λ)] .

From Lemmata 2.15–2.17, it follows that the right side of the last equation is an ILMFD, because the polynomials d_1(λ) and d_2(λ) are coprime. Now introduce the notation

L̃_2(λ) = b̃_1(λ) + ã_1(λ) N_2(λ)/d_2(λ) = b̃_1(λ) + ã_1(λ) ã_2^{-1}(λ) b̃_2(λ)    (2.69)

and investigate the ILMFD

ã_1(λ) ã_2^{-1}(λ) = a_1^{-1}(λ) a_2(λ) .    (2.70)

The left side of this equation is an IRMFD, because the matrices ã_1(λ) and ã_2(λ) have no common eigenvalues. Therefore,

ord ã_2(λ) = ord a_1(λ) ,    (2.71)

and from Lemmata 2.9 and 2.15 we gather that the right side of the equation

L̃_2(λ) = a_1^{-1}(λ)[a_1(λ) b̃_1(λ) + a_2(λ) b̃_2(λ)] = a_1^{-1}(λ) b_2(λ)

is an ILMFD. This relation together with (2.69) implies

L(λ) = [a_1(λ) ã_1(λ)]^{-1} b_2(λ) .

Hereby, Lemma 2.17 yields that the right side of the last equation is an ILMFD, from which by means of (2.68) and (2.71), we conclude (2.66).

Corollary 2.19. If we write with the help of (2.70)

L(λ) = ã_1^{-1}(λ) a_1^{-1}(λ)[a_1(λ) b̃_1(λ) + a_2(λ) b̃_2(λ)] ,

then the right side is an ILMFD.
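For scalar rational functions, the McMillan degree is simply the denominator degree after cancellation, so the additivity claim of Lemma 2.18 for coprime denominators can be illustrated directly. A sketch assuming Python with sympy; the fractions are invented examples.

```python
import sympy as sp

lam = sp.symbols('lam')

def scalar_mdeg(f):
    # For a scalar fraction, Mdeg = degree of the denominator after cancellation
    _, den = sp.fraction(sp.cancel(f))
    return sp.degree(den, lam)

# Coprime denominators: McMillan degrees add (Lemma 2.18)
L1 = (lam + 3) / (lam - 1)**2
L2 = 1 / (lam - 2)
print(scalar_mdeg(L1 + L2))   # 3 = 2 + 1

# Non-coprime denominators: additivity may fail
L3 = 1 / (lam - 1)
print(scalar_mdeg(L1 + L3))   # 2, not 3
```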



2.5 Double-sided MFD (DMFD)


1. Assume in (2.64) d_1(λ) and d_2(λ) to be monic and coprime polynomials, i.e.

L(λ) = N(λ)/(d_1(λ) d_2(λ))

is valid. Then applying (2.26)–(2.29) yields

L(λ) = p(λ) [ diag{ ε_1(λ)/(d_1(λ)d_2(λ)), ε_2(λ)/(ψ_2(λ)χ_2(λ)), ..., ε_ρ(λ)/(ψ_ρ(λ)χ_ρ(λ)) }  O_{ρ,m−ρ} ]
            [ O_{n−ρ,ρ}                                                                          O_{n−ρ,m−ρ} ] q(λ) ,

where all fractions are irreducible, all polynomials ψ_2(λ), ..., ψ_ρ(λ) are divisors of the polynomial d_1(λ), and all polynomials χ_2(λ), ..., χ_ρ(λ) are divisors of the polynomial d_2(λ). Furthermore, every ψ_i(λ) is divisible by ψ_{i+1}(λ), and every χ_i(λ) by χ_{i+1}(λ).

2. Consider now the polynomial matrices

ã_l(λ) = diag{d_1(λ), ψ_2(λ), ..., ψ_ρ(λ), 1, ..., 1} p^{-1}(λ) ,

b(λ) = [ diag{ε_1(λ), ..., ε_ρ(λ)}  O_{ρ,m−ρ} ]
       [ O_{n−ρ,ρ}                 O_{n−ρ,m−ρ} ] ,    (2.72)

ã_r(λ) = q^{-1}(λ) diag{d_2(λ), χ_2(λ), ..., χ_ρ(λ), 1, ..., 1}

with the dimensions n × n, n × m, m × m, respectively. So we can write

L(λ) = ã_l^{-1}(λ) b(λ) ã_r^{-1}(λ) .    (2.73)

A representation of the form (2.73) is called a double-sided or bilateral MFD (DMFD).

3.
Lemma 2.20. The pairs (ã_l(λ), b(λ)) and [ã_r(λ), b(λ)] defined by Relations (2.72) are irreducible.

Proof. Build the LMFD and RMFD

N(λ) ã_r(λ)/(d_1(λ)d_2(λ)) = ã_l^{-1}(λ) b(λ) ,   ã_l(λ) N(λ)/(d_1(λ)d_2(λ)) = b(λ) ã_r^{-1}(λ) .

With the help of (2.72), we immediately recognise that the right sides are an ILMFD resp. an IRMFD. Therefore, the pairs (ã_l(λ), b(λ)), [ã_r(λ), b(λ)] are irreducible.

Suppose (2.72); then under the conditions of Lemma 2.20, it follows that in the representation (2.73), the quantities ord ã_l(λ) and ord ã_r(λ) take their minimal values. A representation like (2.73) is named an irreducible DMFD (IDMFD). The set of all IDMFDs of the matrix L(λ) according to given polynomials d_1(λ), d_2(λ) has the form

L(λ) = a_l^{-1}(λ) b̄(λ) a_r^{-1}(λ)

with

a_l(λ) = p(λ) ã_l(λ) ,   b̄(λ) = p(λ) b(λ) q(λ) ,   a_r(λ) = ã_r(λ) q(λ) ,

where p(λ), q(λ) are unimodular matrices of appropriate type.

Example 2.21. Consider the rational matrix

L(λ) = [ 5λ² − 6λ − 12   2λ² + 3λ + 4 ]
       [ 2λ² − 2λ + 18   2λ − 7       ] / ((λ² + λ + 2)(λ − 3)) .

Assume d_1(λ) = λ² + λ + 2, d_2(λ) = λ − 3; then we can write

L(λ) = a_l^{-1}(λ) b(λ) a_r^{-1}(λ)

with

a_l(λ) = [ λ+1  1   ] ,   a_r(λ) = [ 1  0   ] ,   b(λ) = [ 2  0 ] .
         [ 2λ   λ+2 ]              [ 2  λ−3 ]            [ 2  1 ]

The obtained DMFD is irreducible because det a_l(λ) = d_1(λ), det a_r(λ) = d_2(λ), and the quantities ord a_l(λ) and ord a_r(λ) take their minimal possible values.
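A DMFD of the form (2.73) can be verified mechanically: the determinants of the outer factors must reproduce d_1(λ) and d_2(λ), and every entry of the product then has a denominator dividing d_1(λ)d_2(λ). A sketch assuming Python with sympy; the matrices are chosen here only so that det a_l = d_1 and det a_r = d_2, and should be treated as an illustration rather than the book's data.

```python
import sympy as sp

lam = sp.symbols('lam')

# Hypothetical DMFD L = a_l^{-1} * b * a_r^{-1} with det a_l = d1, det a_r = d2
d1 = lam**2 + lam + 2
d2 = lam - 3

a_l = sp.Matrix([[lam + 1, 1], [2*lam, lam + 2]])   # det = d1
a_r = sp.Matrix([[1, 0], [2, lam - 3]])             # det = d2
b   = sp.Matrix([[2, 0], [2, 1]])

assert sp.expand(a_l.det() - d1) == 0
assert sp.expand(a_r.det() - d2) == 0

L = sp.simplify(a_l.inv() * b * a_r.inv())
# Every entry of L has a denominator dividing d1*d2
for entry in L:
    _, den = sp.fraction(sp.cancel(entry))
    assert sp.rem(sp.expand(d1*d2), den, lam) == 0
```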

2.6 Index of Rational Matrices


1. As in the scalar case, we understand by the index of a rational n × m matrix L(λ) that integer ind L for which the finite limit

lim_{λ→∞} L(λ) λ^{ind L} = L_0 ≠ O_{nm}    (2.74)

exists. For ind L = 0, ind L > 0 and ind L ≥ 0 the matrix L(λ) is called proper, strictly proper and at least proper, respectively. For rational matrices of the form (2.21), we have

ind L = deg d(λ) − deg N(λ) .



2. In a number of cases we can also obtain the value of ind L from an LMFD or RMFD.

Lemma 2.22. Suppose the matrix L(λ) in the standard form

L(λ) = N(λ)/d(λ) ,    (2.75)

and let the relations

L(λ) = a_l^{-1}(λ) b_l(λ) = b_r(λ) a_r^{-1}(λ)    (2.76)

define an LMFD resp. RMFD of the matrix L(λ). Then ind L satisfies the inequalities

ind L = deg d(λ) − deg N(λ) ≤ deg a_l(λ) − deg b_l(λ) ,
ind L ≤ deg a_r(λ) − deg b_r(λ) .    (2.77)

Proof. From (2.75) and (2.76) we arrive at

d(λ) b_l(λ) = a_l(λ) N(λ) ,

which results in

deg[d(λ) b_l(λ)] = deg[a_l(λ) N(λ)] .    (2.78)

According to

d(λ) b_l(λ) = [d(λ) I_n] b_l(λ)

and due to the regularity of the matrix d(λ)I_n, we get through (1.12)

deg[d(λ) b_l(λ)] = deg d(λ) + deg b_l(λ) .    (2.79)

Moreover, using (1.11) we realise

deg[a_l(λ) N(λ)] ≤ deg a_l(λ) + deg N(λ) .    (2.80)

Comparing (2.78)–(2.80), we obtain

deg d(λ) + deg b_l(λ) ≤ deg a_l(λ) + deg N(λ) ,

which is equivalent to the first inequality in (2.77). The second inequality can be shown analogously.

Corollary 2.23. If the matrix L(λ) is proper, i.e. ind L = 0, then for any MFD (2.76) it follows from (2.77) that

deg b_l(λ) ≤ deg a_l(λ) ,   deg b_r(λ) ≤ deg a_r(λ) .

If the matrix L(λ) is even strictly proper, i.e. ind L > 0 is true, then we have

deg b_l(λ) < deg a_l(λ) ,   deg b_r(λ) < deg a_r(λ) .



3. Complete information about the index of L(λ) is obtained in the case where, in the LMFD (2.36) [RMFD (2.37)], the matrix a_l(λ) is row reduced [a_r(λ) is column reduced].

Theorem 2.24. Consider the LMFD

L(λ) = a_l^{-1}(λ) b_l(λ)    (2.81)

with a_l(λ), b_l(λ) of the dimensions n × n, n × m, where a_l(λ) is row reduced. Let α_i be the degree of the i-th row of a_l(λ), and β_i the degree of the i-th row of b_l(λ), and denote

μ_i = α_i − β_i ,  (i = 1, ..., n)

and

μ_L = min_{1≤i≤n} [μ_i] .

Then the index of the matrix L(λ) is determined by

ind L = μ_L .    (2.82)

Proof. Using (1.22), we can write

a_l(λ) = diag{λ^{α_1}, ..., λ^{α_n}} [A_0 + A_1 λ^{-1} + A_2 λ^{-2} + ...] ,    (2.83)

where the A_i, (i = 0, 1, ...) are constant matrices with det A_0 ≠ 0. Extracting from the rows of b_l(λ) the corresponding factors, we obtain

b_l(λ) = diag{λ^{α_1}, ..., λ^{α_n}} [B̃_0 λ^{−μ_L} + B̃_1 λ^{−μ_L−1} + B̃_2 λ^{−μ_L−2} + ...] ,

where the B̃_i, (i = 0, 1, ...) are constant matrices, and B̃_0 ≠ O_{nm}. Inserting this and (2.83) into (2.81), we find

L(λ) λ^{μ_L} = [A_0 + A_1 λ^{-1} + ...]^{-1} [B̃_0 + B̃_1 λ^{-1} + ...] .

Now, due to det A_0 ≠ 0, it follows

lim_{λ→∞} L(λ) λ^{μ_L} = A_0^{-1} B̃_0 ≠ O_{nm} ,    (2.84)

and by (2.74) we recognise the statement (2.82) to be true.

Corollary 2.25. ([69], [68]) If in the LMFD (2.81) the matrix a_l(λ) is row reduced, then the matrix L(λ) is proper, strictly proper or at least proper, if and only if we have μ_L = 0, μ_L > 0 or μ_L ≥ 0, respectively.

In the same way, the corresponding statement for right MFDs is obtained.

Theorem 2.26. Consider the RMFD

L(λ) = b_r(λ) a_r^{-1}(λ)

with a_r(λ), b_r(λ) of the dimensions m × m, n × m, where a_r(λ) is column reduced. Let ᾱ_i be the degree of the i-th column of a_r(λ) and β̄_i the degree of the i-th column of b_r(λ), and denote

μ̄_i = ᾱ_i − β̄_i ,  (i = 1, ..., m)

and

μ̄_L = min_{1≤i≤m} [μ̄_i] .

Then the index of the matrix L(λ) is determined by

ind L = μ̄_L .

Example 2.27. Consider the matrices

a_l(λ) = [ 2λ²+1  λ+2 ] ,   b_l(λ) = [ 2λ  3λ²+1 ] .
         [ −1     λ+1 ]              [ 5   7     ]

In this case the matrix a_l(λ) is row reduced, with α_1 = 2, α_2 = 1 and β_1 = 2, β_2 = 0. Consequently, we get μ_1 = 0, μ_2 = 1, thus μ_L = 0. Therefore, the matrix a_l^{-1}(λ) b_l(λ) is proper. Hereby, we obtain

A_0 = [ 2  0 ] ,   B̃_0 = [ 0  3 ] ,
      [ 0  1 ]           [ 0  0 ]

and from (2.84) it follows

lim_{λ→∞} a_l^{-1}(λ) b_l(λ) = A_0^{-1} B̃_0 = [ 0  1.5 ] .
                                               [ 0  0   ]
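The limit of Example 2.27 can be replayed symbolically. A sketch assuming Python with sympy; the matrix entries follow the reconstruction given here (including the assumed minus sign in the (2,1) entry of a_l), so treat them as an assumption.

```python
import sympy as sp

lam = sp.symbols('lam')

# Matrices of Example 2.27 (entries as reconstructed in this text)
a_l = sp.Matrix([[2*lam**2 + 1, lam + 2], [-1, lam + 1]])
b_l = sp.Matrix([[2*lam, 3*lam**2 + 1], [5, 7]])

L = a_l.inv() * b_l

# mu_L = 0, so L is proper and its limit for lam -> oo is finite and nonzero
L_inf = L.applyfunc(lambda e: sp.limit(e, lam, sp.oo))
print(L_inf)    # Matrix([[0, 3/2], [0, 0]])
```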


2.7 Strictly Proper Rational Matrices


1. According to the above definitions, Matrix (2.21) is strictly proper if ind L = deg d(λ) − deg N(λ) > 0. Strictly proper rational matrices possess many properties that are analogous to the properties of scalar strictly proper rational fractions, which were considered in Section 2.1. In particular, the sum, the difference and the product of strictly proper rational matrices are strictly proper too.

2. For any strictly proper rational n × m matrix L(λ), there exists an indefinite set of elementary PMDs

τ(λ) = (λI_p − A, B, C) ,    (2.85)

i.e. realisations (A, B, C), such that

L(λ) = C(λI_p − A)^{-1} B .    (2.86)

The right side of (2.86) is called a standard representation of the matrix, or simply its representation. The number p, configured in (2.86), is called its dimension. A representation where the dimension p takes its minimal possible value is called minimal.

A standard representation (2.86) is minimal, if and only if its elementary PMD is minimal, that means, if the pair (A, B) is controllable and the pair [A, C] is observable.

The matrix L(λ) in (2.86) is called the transfer function (transfer matrix) of the elementary PMD (2.85), resp. of the realisation (A, B, C). The elementary PMD (2.85) and the PMD

τ_1(λ) = (λI_q − A_1, B_1, C_1)    (2.87)

are called equivalent, if their transfer matrices coincide.

3. Now a number of statements on the properties of strictly proper rational matrices is formulated, which will be used later.

Statement 2.4 (see [69, 68]) The minimal PMDs (2.85) and (2.87) are equivalent, if and only if p = q. In this case, there exists a non-singular p × p matrix R with

A_1 = RAR^{-1} ,   B_1 = RB ,   C_1 = CR^{-1} ,    (2.88)

i.e., the corresponding realisations are similar.

Statement 2.5 Let the representation (2.86) be minimal and possess the ILMFD

C(λI_p − A)^{-1} = a_l^{-1}(λ) b_l(λ) .    (2.89)

Then, as follows from Lemma 2.9, the pair (a_l(λ), b_l(λ)B) is irreducible. In analogy, if we have an IRMFD

(λI_p − A)^{-1} B = b_r(λ) a_r^{-1}(λ) ,    (2.90)

then the pair [a_r(λ), C b_r(λ)] is irreducible.

Statement 2.6 If the representation (2.86) is minimal, then the matrices a_l(λ) in the ILMFD (2.89) and a_r(λ) in the IRMFD (2.90) possess the same invariant polynomials different from 1 as the matrix λI_p − A. Hereby, we have

Δ_L(λ) = det(λI_p − A) ∼ det a_l(λ) ∼ det a_r(λ) .    (2.91)

Particularly, it follows from (2.91) that Mdeg L(λ) of a strictly proper rational matrix L(λ) is equal to the dimension of its minimal standard representation.
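The connection between a minimal realisation and the McMillan denominator in (2.91) can be checked on a small case. A sketch assuming Python with sympy; the realisation (A, B, C) below is a made-up controllable and observable example, not one from the text.

```python
import sympy as sp

lam = sp.symbols('lam')

# A hypothetical controllable and observable realisation (A, B, C), p = 2
A = sp.Matrix([[0, 1], [-2, -3]])
B = sp.Matrix([[0], [1]])
C = sp.Matrix([[1, 0]])

p = A.rows
L = sp.simplify(C * (lam*sp.eye(p) - A).inv() * B)   # transfer matrix (2.86)
print(L[0])                                          # 1/(lam**2 + 3*lam + 2)

# For a minimal representation, the McMillan denominator is det(lam*I - A), cf. (2.91)
Delta = sp.expand((lam*sp.eye(p) - A).det())
print(Delta)                                         # lam**2 + 3*lam + 2
```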

4.
Lemma 2.28. Assume n = m in the standard representation (2.86) and det L(λ) ≢ 0. Then p ≥ n holds, and

det L(λ) = k(λ)/det(λI_p − A)    (2.92)

is valid, where k(λ) is a scalar polynomial with

deg k(λ) ≤ p − n ind L .    (2.93)

The case p < n results in det L(λ) ≡ 0.

Proof. In accordance with Lemma 2.8, there exists an LMFD

C(λI_p − A)^{-1} = a_1^{-1}(λ) b_1(λ) ,

where det a_1(λ) ∼ det(λI_p − A), which is why

L(λ) = a_1^{-1}(λ)[b_1(λ)B] .

Calculating the determinants of both sides yields (2.92) with k(λ) = det[b_1(λ)B].

To prove (2.93), we write L(λ) in the form (2.21), obtaining ind L = deg d(λ) − deg N(λ) > 0. Now calculating the determinant of the right side of (2.21), we gain

det L(λ) = det N(λ)/[d(λ)]^n .

Let deg d(λ) = q. Then deg N(λ) = q − ind L holds, where deg det N(λ) ≤ n(q − ind L) and deg [d(λ)]^n = nq. From this we directly generate (2.93). For p < n, on account of the Binet–Cauchy formula, we come to det[C adj(λI_p − A)B] ≡ 0.
Corollary 2.29. Consider the strictly proper n × n matrix L(λ) and its McMillan denominator and numerator Δ_L(λ) and ∇_L(λ), respectively. Then the following relation is true:

deg Δ_L(λ) − deg ∇_L(λ) ≥ n ind L .

Proof. Let (2.86) be a minimal standard representation of the matrix L(λ). Then due to Lemma 2.4, we have

det L(λ) = k ∇_L(λ)/Δ_L(λ) ,   k = const. ,

and the claim immediately results from (2.93).

5. Let the strictly proper rational matrix L(λ) of the form (2.21) with

L(λ) = N(λ)/(d_1(λ) d_2(λ))

be given, where the polynomials d_1(λ) and d_2(λ) are coprime. Then there exists a separation

L(λ) = N_1(λ)/d_1(λ) + N_2(λ)/d_2(λ) ,    (2.94)

where N_1(λ) and N_2(λ) are polynomial matrices and both fractions in (2.94) are strictly proper. The matrices N_1(λ) and N_2(λ) in (2.94) are uniquely determined.

In practice, the separation (2.94) can be produced by performing the separation (2.6) for every element of the matrix L(λ).

Example 2.30. Let

L(λ) = [ λ+2  λ    ]
       [ λ+3  λ²+1 ] / ((λ−2)²(λ−1))

be given. By choosing d_1(λ) = (λ−2)², d_2(λ) = λ−1, a separation (2.94) is found with

N_1(λ) = [ −3λ+10  −λ+4 ] ,   N_2(λ) = [ 3  1 ] .
         [ −4λ+13  −λ+7 ]              [ 4  2 ]


6. The separation (2.94) is extendable to a more general case. Let the strictly proper rational matrix have the form

L(λ) = N(λ)/(d_1(λ) d_2(λ) ⋯ d_ν(λ)) ,

where all polynomials in the denominator are pairwise coprime. Then there exists a unique representation of the form

L(λ) = N_1(λ)/d_1(λ) + ... + N_ν(λ)/d_ν(λ) ,    (2.95)

where all fractions on the right side are strictly proper. Particularly, consider (2.21), where the polynomial d(λ) has the form (2.20). Then under the assumption

d_i(λ) = (λ − λ_i)^{ρ_i} ,  (i = 1, ..., q) ,

from (2.95) we obtain the unique representation

L(λ) = Σ_{i=1}^{q} N_i(λ)/(λ − λ_i)^{ρ_i} ,    (2.96)

where deg N_i(λ) < ρ_i.



7. By further transformations the fraction

L_i(λ) = N_i(λ)/(λ − λ_i)^{ρ_i}

can be written as

L_i(λ) = N_i1/(λ − λ_i)^{ρ_i} + N_i2/(λ − λ_i)^{ρ_i−1} + ... + N_iρ_i/(λ − λ_i) ,    (2.97)

where the N_ik, (k = 1, ..., ρ_i) are constant matrices. Inserting (2.97) into (2.96), we arrive at the representation

L(λ) = Σ_{i=1}^{q} [ N_i1/(λ − λ_i)^{ρ_i} + N_i2/(λ − λ_i)^{ρ_i−1} + ... + N_iρ_i/(λ − λ_i) ] ,    (2.98)

which is called the partial fraction expansion of the matrix L(λ).

8. For calculating the matrices N_ik in (2.97), we rely upon the formula analogous to (2.10):

N_ik = 1/(k−1)! · d^{k−1}/dλ^{k−1} [ N(λ)(λ − λ_i)^{ρ_i}/d(λ) ] |_{λ=λ_i} .    (2.99)

In practice, the coefficients in (2.99) will be determined by partial fraction expansion of the scalar entries of L(λ).

Example 2.31. Assuming the conditions of Example 2.30, we get

L(λ) = N_11/(λ−2)² + N_12/(λ−2) + N_21/(λ−1) ,

where

N_11 = [ 4  2 ] ,   N_12 = [ −3  −1 ] ,   N_21 = [ 3  1 ] .
       [ 5  5 ]            [ −4  −1 ]            [ 4  2 ]
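Formula (2.99) can be evaluated mechanically. A sketch assuming Python with sympy, using the data of Examples 2.30/2.31 as reconstructed in this text (the entries, including the minus signs, are a reconstruction of the garbled print); the helper name `N_ik` is ours.

```python
import sympy as sp

lam = sp.symbols('lam')

# Data of Examples 2.30/2.31 (as reconstructed here)
N = sp.Matrix([[lam + 2, lam], [lam + 3, lam**2 + 1]])
d = (lam - 2)**2 * (lam - 1)

def N_ik(N, d, lam_i, rho_i, k):
    """Formula (2.99): coefficient matrix of 1/(lam - lam_i)^(rho_i - k + 1)."""
    F = (N * (lam - lam_i)**rho_i / d).applyfunc(sp.cancel)
    D = F.applyfunc(lambda e: sp.diff(e, lam, k - 1))
    return D.subs(lam, lam_i) / sp.factorial(k - 1)

N11 = N_ik(N, d, 2, 2, 1)
N12 = N_ik(N, d, 2, 2, 2)
N21 = N_ik(N, d, 1, 1, 1)
print(N11)   # Matrix([[4, 2], [5, 5]])
print(N12)   # Matrix([[-3, -1], [-4, -1]])
print(N21)   # Matrix([[3, 1], [4, 2]])

# The expansion (2.98) reassembles L = N/d
L = N11/(lam-2)**2 + N12/(lam-2) + N21/(lam-1)
assert sp.simplify(L - N/d) == sp.zeros(2, 2)
```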

9. The partial fraction expansion (2.98) can be used in some cases for answering the question of reducibility of certain rational matrices. Indeed, it is easily shown that for the irreducibility of the strictly proper matrix (2.21), it is necessary and sufficient that in the expansion (2.98)

N_i1 ≠ O_{nm} ,  (i = 1, ..., q)    (2.100)

must be true.

2.8 Separation of Rational Matrices


1. Let the n × m matrix L(λ) in (2.21) not be strictly proper, that means ind L ≤ 0. Then for every element of the matrix L(λ), the representation (2.5) can be generated, yielding

L(λ) = R(λ)/d(λ) + G(λ) = L_0(λ) + G(λ) ,    (2.101)

where the fraction in the middle part is strictly proper, and G(λ) is a polynomial matrix. The representation (2.101) is unique. Practically, the dissection (2.101) is done in such a way that the dissection (2.5) is applied to each element of L(λ).

Furthermore, the strictly proper matrix L_0(λ) on the right side of (2.101) is called the broken part of the matrix L(λ), and the matrix G(λ) its polynomial part.

Example 2.32. For Matrix (2.22), we obtain

L_0(λ) = [ 13(λ−3)  λ−2 ]
         [ 2λ+1     0   ] / ((λ−2)(λ−3)) ,   G(λ) = [ 5  0   ] .
                                                    [ 0  λ+2 ]


2. Let us have in (2.101)

L_0(λ) = N_0(λ)/(d_1(λ) d_2(λ)) ,    (2.102)

where the polynomials d_1(λ) and d_2(λ) are coprime, and deg N_0(λ) < deg d_1(λ) + deg d_2(λ). Then, as was shown above, there exists the unique separation

N_0(λ)/(d_1(λ)d_2(λ)) = N_1(λ)/d_1(λ) + N_2(λ)/d_2(λ) ,

where the fractions on the right side are strictly proper. Inserting this separation into (2.101), we find a unique representation of the form

L(λ) = N_1(λ)/d_1(λ) + N_2(λ)/d_2(λ) + G(λ) .    (2.103)
d1 () d2 ()

Example 2.33. For Matrix (2.22), we generate the separation (2.103) of the shape

A(λ) = [ 13  0 ] /(λ−2) + [ 0  1 ] /(λ−3) + [ 5  0   ] .
       [ −5  0 ]          [ 7  0 ]          [ 0  λ+2 ]

3. From (2.103) we learn that Matrix (2.101) can be presented in the form

L(λ) = Q_1(λ)/d_1(λ) + Q_2(λ)/d_2(λ) ,    (2.104)

where

Q_1(λ) = N_1(λ) + d_1(λ)F(λ) ,   Q_2(λ) = N_2(λ) + d_2(λ)[G(λ) − F(λ)] ,    (2.105)

where the polynomial matrix F(λ) is arbitrary.

The representation of the rational matrix L(λ) from (2.102) in the form (2.104), (2.105) is called its separation with respect to the polynomials d_1(λ) and d_2(λ). It is seen from (2.105) that for coprime polynomials d_1(λ) and d_2(λ), the separation (2.104) is always possible, but not uniquely determined. Nevertheless, the following theorem holds.
Nevertheless, the following theorem holds.

Theorem 2.34. The totality of pairs Q_1(λ), Q_2(λ) satisfying the separation (2.104) is given by Formula (2.105).

Proof. By P we denote the set of all polynomial pairs Q_1(λ), Q_2(λ) satisfying Relation (2.104), and by P_s the set of all polynomial pairs produced by (2.105) when we insert there any polynomial matrices F(λ). Since for any pair (2.105), Relation (2.104) holds, P_s ⊆ P is true. On the other side, let the matrices Q̃_1(λ), Q̃_2(λ) fulfill Relation (2.104). Then we obtain

Q̃_i(λ)/d_i(λ) = R_i(λ)/d_i(λ) + G_i(λ) ,  (i = 1, 2) ,

where the fractions on the right sides are strictly proper, and G_1(λ), G_2(λ) are polynomial matrices. Therefore,

L(λ) = R_1(λ)/d_1(λ) + R_2(λ)/d_2(λ) + G_1(λ) + G_2(λ) .

Comparing this with (2.103), then due to the uniqueness of the expansion (2.103), we get

R_1(λ) = N_1(λ) ,   R_2(λ) = N_2(λ) ,   G(λ) = G_1(λ) + G_2(λ) .

Denoting G_1(λ) = F(λ), G_2(λ) = G(λ) − F(λ), we find that Q̃_1(λ) and Q̃_2(λ) satisfy Relation (2.105), i.e. P ⊆ P_s is true. Consequently, the sets P and P_s contain each other.

Example 2.35. According to (2.104) and (2.105), we find for Matrix (2.22) the set of all separations with respect to the polynomials d_1(λ) = λ−2, d_2(λ) = λ−3. Using the results of Example 2.33, we obtain

Q_1(λ) = [ 13 + (λ−2)f_11(λ)   (λ−2)f_12(λ) ] ,
         [ −5 + (λ−2)f_21(λ)   (λ−2)f_22(λ) ]

Q_2(λ) = [ (λ−3)[5 − f_11(λ)]   1 − (λ−3)f_12(λ)       ] ,
         [ 7 − (λ−3)f_21(λ)     (λ−3)[(λ+2) − f_22(λ)] ]

where the f_ik(λ), (i, k = 1, 2) are some polynomials.
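That the family (2.105) satisfies (2.104) for every choice of F can be verified symbolically, independently of the free coefficients. A sketch assuming Python with sympy; the scalar data below are invented, and F carries symbolic coefficients c0, c1.

```python
import sympy as sp

lam, c0, c1 = sp.symbols('lam c0 c1')

# A scalar instance of (2.101)-(2.105): L = N1/d1 + N2/d2 + G (made-up data)
d1, d2 = lam - 2, lam - 3
N1, N2, G = sp.Integer(13), sp.Integer(7), lam + 5
L = N1/d1 + N2/d2 + G

# An arbitrary polynomial F with free coefficients c0, c1
F = c0 + c1*lam
Q1 = N1 + d1*F
Q2 = N2 + d2*(G - F)

# (2.104) holds identically in c0, c1: the separation is a whole family
assert sp.simplify(Q1/d1 + Q2/d2 - L) == 0
```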

4. Setting F(λ) = O_{nm} in (2.105), we arrive at the special solution of the form

Q_1(λ) = N_1(λ) ,   Q_2(λ) = N_2(λ) + d_2(λ)G(λ) .    (2.106)

Otherwise, taking F(λ) = G(λ) results in

Q_1(λ) = N_1(λ) + d_1(λ)G(λ) ,   Q_2(λ) = N_2(λ) .    (2.107)

For the solution (2.106), the first summand in the separation (2.104) becomes a strictly proper rational matrix, and for the solution (2.107) the second one does. The particular separations defined by Formulae (2.106) and (2.107) are called minimal with respect to d_1(λ) resp. d_2(λ). Due to their construction, the minimal separations are uniquely determined.

Example 2.36. The separation of Matrix (2.22) which is minimal with respect to d_1(λ) = λ−2 is given by the matrices

Q_1(λ) = [ 13  0 ] ,   Q_2(λ) = [ 5(λ−3)  1           ] .
         [ −5  0 ]              [ 7       (λ−3)(λ+2)  ]

With respect to d_2(λ) = λ−3, the separation given by the matrices

Q_1(λ) = [ 5λ+3  0    ] ,   Q_2(λ) = [ 0  1 ]
         [ −5    λ²−4 ]              [ 7  0 ]

is minimal. These minimal separations are unique per construction.
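The two minimal separations can be computed directly from the pieces of (2.103). A sketch assuming Python with sympy; the matrices N_1, N_2, G are those of Examples 2.32/2.33 as reconstructed in this text, so their entries should be treated as an assumption.

```python
import sympy as sp

lam = sp.symbols('lam')

# Pieces of the separation (2.103) for Matrix (2.22), as reconstructed here
d1, d2 = lam - 2, lam - 3
N1 = sp.Matrix([[13, 0], [-5, 0]])
N2 = sp.Matrix([[0, 1], [7, 0]])
G  = sp.Matrix([[5, 0], [0, lam + 2]])
L  = N1/d1 + N2/d2 + G

# Minimal with respect to d1: F = 0 in (2.105), i.e. (2.106)
Q1a, Q2a = N1, (N2 + d2*G).applyfunc(sp.expand)
# Minimal with respect to d2: F = G, i.e. (2.107)
Q1b, Q2b = (N1 + d1*G).applyfunc(sp.expand), N2

for Q1, Q2 in [(Q1a, Q2a), (Q1b, Q2b)]:
    assert sp.simplify(Q1/d1 + Q2/d2 - L) == sp.zeros(2, 2)

print(Q1b)   # Matrix([[5*lam + 3, 0], [-5, lam**2 - 4]])
```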

5. If, in particular, the original rational matrix (2.101) is strictly proper, then G(λ) = O_{nm} holds, and the minimal separations (2.106) and (2.107) coincide.

Example 2.37. For the strictly proper matrix in Example 2.30, we obtain a unique minimal separation with Q_1(λ) = N_1(λ), Q_2(λ) = N_2(λ), where the matrices N_1(λ) and N_2(λ) were already determined in Example 2.30.

2.9 Inverses of Square Polynomial Matrices


1. Assume the n × n polynomial matrix L(λ) to be non-singular, and let adj L(λ) be its adjoint matrix, determined by Equation (1.8). Then the matrix

L^{-1}(λ) = adj L(λ)/det L(λ)    (2.108)

is said to be the inverse of the matrix L(λ). Equation (1.9) implies

L(λ) L^{-1}(λ) = L^{-1}(λ) L(λ) = I_n .    (2.109)
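The defining identity (2.109) — equivalently L(λ)·adj L(λ) = det L(λ)·I_n — can be checked symbolically. A sketch assuming Python with sympy; the polynomial matrix below is a made-up illustration.

```python
import sympy as sp

lam = sp.symbols('lam')

# A hypothetical non-singular polynomial matrix
Lp = sp.Matrix([[lam + 1, lam], [2, lam**2]])

adjL = Lp.adjugate()
detL = sp.expand(Lp.det())

# (1.9)/(2.109): L * adj L = adj L * L = det(L) * I
assert (Lp * adjL).applyfunc(sp.expand) == (detL * sp.eye(2)).applyfunc(sp.expand)
assert (adjL * Lp).applyfunc(sp.expand) == (detL * sp.eye(2)).applyfunc(sp.expand)
print(detL)   # lam**3 + lam**2 - 2*lam
```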

2. The matrix L(λ) can be written with the help of (1.40), (1.49) in the form

L(λ) = p^{-1}(λ) diag{ h_1(λ), h_1(λ)h_2(λ), ..., h_1(λ)h_2(λ) ⋯ h_n(λ) } q^{-1}(λ) ,

where p(λ) and q(λ) are unimodular matrices. How can the inverse matrix L^{-1}(λ) be calculated? For that purpose, the general Formula (2.108) is used. Denoting

H(λ) = diag{ h_1(λ), h_1(λ)h_2(λ), ..., h_1(λ)h_2(λ) ⋯ h_n(λ) } ,

we can write

L^{-1}(λ) = q(λ) H^{-1}(λ) p(λ) .    (2.110)

Now we have to calculate the matrix H^{-1}(λ). Obviously, the characteristic polynomial of H(λ) amounts to

d_H(λ) = det H(λ) = h_1^n(λ) h_2^{n−1}(λ) ⋯ h_n(λ) ∼ det L(λ) = d_L(λ) .    (2.111)

Directly calculating the matrix of adjuncts adj H(λ) results in

adj H(λ) = diag{ h_1^{n−1}(λ) h_2^{n−1}(λ) ⋯ h_n(λ), h_1^{n−1}(λ) h_2^{n−2}(λ) ⋯ h_n(λ), ...
            ..., h_1^{n−1}(λ) h_2^{n−2}(λ) ⋯ h_{n−1}(λ) } ,    (2.112)

from which we gain

H^{-1}(λ) = adj H(λ)/d_H(λ) .    (2.113)

Herein, the numerator and denominator are constrained by Relations (2.112), (2.111).

In general, the rational matrix on the right side of (2.113) is reducible, and that is why we will write


H^{-1}(λ) = ãdj H(λ)/d_L min(λ)    (2.114)

with

ãdj H(λ) = diag{ h_2(λ) ⋯ h_n(λ), h_3(λ) ⋯ h_n(λ), ..., h_n(λ), 1 } ,    (2.115)
d_L min(λ) = h_1(λ) h_2(λ) ⋯ h_n(λ) = a_n(λ) ,    (2.116)

where a_n(λ) is the last invariant polynomial. Altogether, using (2.110) we receive

L^{-1}(λ) = ãdj L(λ)/d_L min(λ) ,    (2.117)

where

ãdj L(λ) = q(λ) ãdj H(λ) p(λ) .    (2.118)

Matrix (2.118) is called the monic adjoint matrix, and the polynomial d_L min(λ) the minimal polynomial of the matrix L(λ). The rational matrix on the right side of (2.117) will be named the monic inverse of the polynomial matrix L(λ).

3. Opposing (2.111) to (2.116) makes clear that among the roots of the minimal polynomial d_L min(λ) are all eigenvalues of the matrix L(λ), however, possibly with lower multiplicity. It is remarkable that the fraction (2.117) is irreducible. The reason for that lies in the fact that the matrix ãdj H(λ) becomes zero for no value of λ. The same can be said about Matrix (2.118), because the matrices q(λ) and p(λ) are unimodular.

4. Comparing (2.108) with (2.117), we find out that the fraction (2.108) is irreducible, if and only if

h_1(λ) = h_2(λ) = ... = h_{n−1}(λ) = 1 ,   h_n(λ) = a_n(λ) = d_L min(λ) ∼ det L(λ)

holds, i.e. if the characteristic polynomial of the matrix L(λ) is equivalent to its minimal polynomial. If the last conditions are fulfilled, then the matrix L(λ) can be presented in the form

L(λ) = p^{-1}(λ) diag{1, ..., 1, a_n(λ)} q^{-1}(λ) ,

that means, it is simple in the sense of Section 1.11, and the following theorem has been proved.

Theorem 2.38. The inverse matrix (2.108) is irreducible, if and only if the matrix L(λ) is simple.

5. From (2.117) we take the important equation

ãdj L(λ) L(λ) = L(λ) ãdj L(λ) = d_L min(λ) I_n .    (2.119)

2.10 Transfer Matrices of Polynomial Pairs


1. The pairs (a_l(λ), b_l(λ)), [a_r(λ), b_r(λ)] are called non-singular, if det a_l(λ) ≢ 0 resp. det a_r(λ) ≢ 0. For a non-singular pair (a_l(λ), b_l(λ)), the rational matrix

w_l(λ) = a_l^{-1}(λ) b_l(λ)    (2.120)

can be explained, and for the non-singular pair [a_r(λ), b_r(λ)], we build the rational matrix

w_r(λ) = b_r(λ) a_r^{-1}(λ) .    (2.121)

Matrix (2.120) or (2.121) is called the transfer matrix (transfer function) of the corresponding pair. Applying the general Formula (2.108), we obtain

w_l(λ) = adj a_l(λ) b_l(λ)/d_al(λ) ,   w_r(λ) = b_r(λ) adj a_r(λ)/d_ar(λ)    (2.122)

with the notation d_al(λ) = det a_l(λ), d_ar(λ) = det a_r(λ).

2.
Definition 2.39. The transfer matrices w_l(λ) and w_r(λ) are called irreducible, if the rational matrices on the right side of (2.122) are irreducible.

Now we collect some facts on the reducibility of transfer matrices.

Lemma 2.40. If the matrices a_l(λ), a_r(λ) are not simple, then the transfer matrices (2.120), (2.121) are reducible.

Proof. If the matrices a_l(λ), a_r(λ) are not simple, then owing to (2.116), we conclude that the matrices a_l^{-1}(λ), a_r^{-1}(λ) are reducible, and therefore also the fractions

w̃_l(λ) = ãdj a_l(λ) b_l(λ)/d_al min(λ) ,   w̃_r(λ) = b_r(λ) ãdj a_r(λ)/d_ar min(λ) .    (2.123)

But this means that fractions (2.120), (2.121) are reducible.

The matrices (2.123) are said to be the monic transfer matrices.

3.
Lemma 2.41. If the pairs (al (), bl ()), [ar (), br ()] are reducible, i.e. the
matrices

ar ()
Rh () = al () bl () , Rv () = (2.124)
br ()
are latent, then the fractions (2.120), (2.121) are reducible.
88 2 Fractional Rational Matrices

Proof. If the pair (a_l(λ), b_l(λ)) is reducible, then by virtue of the results in Section 1.12, we obtain

    a_l(λ) = g(λ) a_{l1}(λ) ,    b_l(λ) = g(λ) b_{l1}(λ)    (2.125)

with ord g(λ) > 0 and polynomial matrices a_{l1}(λ), b_{l1}(λ), where due to

    det a_l(λ) = det g(λ) det a_{l1}(λ) ,

the relation

    deg det a_{l1}(λ) < deg det a_l(λ)

holds. From (2.125), we gain

    w_l(λ) = a_{l1}^{−1}(λ) b_{l1}(λ) = adj a_{l1}(λ) b_{l1}(λ) / det a_{l1}(λ) .

The denominator of this rational matrix possesses a lower degree than that of (2.122), which implies that the fraction w_l(λ) in (2.122) is reducible. For the vertical pair [a_r(λ), b_r(λ)], we carry out the proof in the same way.

4. Let the matrices a_l(λ) and a_r(λ) be not simple. Then using (2.117), we receive the monic transfer matrices (2.123).

Theorem 2.42. If the pairs (a_l(λ), b_l(λ)), [a_r(λ), b_r(λ)] are irreducible, then the monic transfer matrices (2.123) are irreducible.

Proof. Let the pair (a_l(λ), b_l(λ)) be irreducible. Then the matrix R_h(λ) in (2.124) is alatent. Therefore, an arbitrary fixed λ = λ̄ yields

    rank R_h(λ̄) = rank [ a_l(λ̄)  b_l(λ̄) ] = n .    (2.126)

Multiplying the matrix R_h(λ) from left by the monic adjoint matrix ãdj a_l(λ), with benefit from (2.119), we find

    ãdj a_l(λ) R_h(λ) = [ d_{a_l min}(λ) I_n   ãdj a_l(λ) b_l(λ) ] .    (2.127)

Now, let λ = λ_0 be any root of the polynomial d_{a_l min}(λ); then due to d_{a_l min}(λ_0) = 0, from (2.127)

    ãdj a_l(λ_0) R_h(λ_0) = [ O_{nn}   ãdj a_l(λ_0) b_l(λ_0) ]

is preserved. If we assume that the matrix w̃_l(λ) in (2.123) is reducible, then for a certain root λ̃_0, we obtain

    ãdj a_l(λ̃_0) b_l(λ̃_0) = O_{nm}

and therefore

    ãdj a_l(λ̃_0) R_h(λ̃_0) = O_{n,n+m} .    (2.128)

But from Relations (2.115), (2.118), we know rank [ ãdj a_l(λ̃_0) ] ≥ 1. Moreover, from (2.126) we get rank R_h(λ̃_0) = n, and owing to the Sylvester inequality (1.44), we conclude

    rank [ ãdj a_l(λ̃_0) R_h(λ̃_0) ] ≥ 1 .

Consequently, Equation (2.128) cannot be fulfilled and therefore, the fraction w̃_l(λ) in (2.123) is irreducible. The proof for the irreducibility of the matrix w̃_r(λ) in (2.123) runs analogously.
Remark 2.43. The reverse statement of the just proven Theorem 2.42 is in general not true, as the next example illustrates.

Example 2.44. Consider the pair (a_l(λ), b_l(λ)) with

    a_l(λ) = [ λ  0 ; 0  λ ] ,    b_l(λ) = [ 1 ; 1 ] .    (2.129)

In this case, we have

    a_l^{−1}(λ) = (1/λ) [ 1  0 ; 0  1 ] ,

which means

    ãdj a_l(λ) = [ 1  0 ; 0  1 ] ,    d_{a_l min}(λ) = λ .

So we arrive at

    w̃_l(λ) = ãdj a_l(λ) b_l(λ) / d_{a_l min}(λ) = (1/λ) [ 1 ; 1 ]    (2.130)

and the fraction on the right side is irreducible. Nevertheless, the pair (2.129) is not irreducible, because the matrix

    R_h(λ) = [ λ  0  1 ; 0  λ  1 ]

for λ = 0 has only rank 1. On the other side, we immediately recognise that the pair

    a_{l1}(λ) = [ λ  0 ; −1  1 ] ,    b_{l1}(λ) = [ 1 ; 0 ]

is an ILMFD of the transfer matrix (2.130), because the matrix

    R_{h1}(λ) = [ λ  0  1 ; −1  1  0 ]

possesses rank 2 for all λ.
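The rank claims of Example 2.44 can be confirmed numerically (the matrices below are the reconstruction of the example's garbled displays): a 2 × 3 matrix has rank 2 exactly when at least one of its 2 × 2 minors is non-zero.

```python
# Rank check for Example 2.44: R_h(lambda) = [a_l(lambda)  b_l(lambda)]
# drops to rank 1 at lambda = 0, while R_h1(lambda), built from the
# irreducible pair (a_l1, b_l1), keeps rank 2 for every lambda.

def minors_2x3(M):
    # all three 2x2 minors of a 2x3 matrix
    (a, b, c), (d, e, f) = M
    return [a * e - b * d, a * f - c * d, b * f - c * e]

def rank_2x3(M, tol=1e-12):
    if any(abs(m) > tol for m in minors_2x3(M)):
        return 2
    if any(abs(x) > tol for row in M for x in row):
        return 1
    return 0

def R_h(lam):         # from the pair (2.129)
    return [[lam, 0.0, 1.0], [0.0, lam, 1.0]]

def R_h1(lam):        # from the irreducible pair (a_l1, b_l1)
    return [[lam, 0.0, 1.0], [-1.0, 1.0, 0.0]]
```

Note that R_h1 has the constant minor −1 (from its last two columns), which is why its rank never drops.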

5.
Theorem 2.45. For the transfer matrices (2.122) to be irreducible, it is necessary and sufficient that the pairs (a_l(λ), b_l(λ)), [a_r(λ), b_r(λ)] are irreducible and the matrices a_l(λ), a_r(λ) are simple.

Proof. The necessity follows from the above considerations.
That the condition is also sufficient, we see by assuming a_l(λ) to be simple. Then d_{a_l min}(λ) ∼ d_{a_l}(λ) and ãdj a_l(λ) ∼ adj a_l(λ) hold, and Theorem 2.42 yields that the fraction

    w_l(λ) = adj a_l(λ) b_l(λ) / d_{a_l}(λ)

is irreducible. For the second fraction in (2.122), the statement is proven analogously.

2.11 Transfer Matrices of PMDs

1. A PMD of the dimension n, p, m

    τ(λ) = (a(λ), b(λ), c(λ))    (2.131)

is called regular if the matrix a(λ) is non-singular. All descriptor systems of interest belong to the set of regular PMDs. A regular PMD (2.131) is related to a rational transfer matrix

    w_τ(λ) = c(λ) a^{−1}(λ) b(λ)    (2.132)

that is named the transfer function (-matrix) of the PMD (2.131). Using (2.108), the transfer matrix can be presented in the form

    w_τ(λ) = c(λ) adj a(λ) b(λ) / det a(λ) .    (2.133)

When in the general case a(λ) is not simple, then by virtue of (2.117), we obtain

    w_τ(λ) = c(λ) ãdj a(λ) b(λ) / d_{a min}(λ) .    (2.134)

The rational matrix on the right side of (2.134) is called the monic transfer matrix of the PMD (2.131).
2.
Theorem 2.46. For a minimal PMD (2.131), the monic transfer matrix (2.134) is irreducible.

Proof. Construct the ILMFD

    c(λ) a^{−1}(λ) = a_1^{−1}(λ) c_1(λ) .    (2.135)

Since the left side of (2.135) is an IRMFD,

    det a(λ) ∼ det a_1(λ) .    (2.136)

Furthermore, also the minimal polynomials of the matrices a(λ) and a_1(λ) coincide, because they possess the same sequences of invariant polynomials different from one. Therefore, we have

    d_{a min}(λ) = d_{a_1 min}(λ) .    (2.137)

Utilising (2.132) and (2.135), we can write

    w_τ(λ) = a_1^{−1}(λ) [c_1(λ) b(λ)] .    (2.138)

The right side of (2.138) is an ILMFD, which follows from the minimality of the PMD (2.131) and Lemma 2.9. But then, employing Lemma 2.8 yields the fraction

    w_τ(λ) = ãdj a_1(λ) c_1(λ) b(λ) / d_{a_1 min}(λ)

to be irreducible, and this implies, owing to (2.137), the irreducibility of the right side of (2.134).

3.
Theorem 2.47. For the right side of Relation (2.133) to be irreducible, it is necessary and sufficient that the PMD (2.131) is minimal and the matrix a(λ) is simple.

Proof. The necessity results from Lemmata 2.40 and 2.41.
Sufficiency: If the matrix a(λ) is simple, then

    det a(λ) ∼ d_{a min}(λ) ,

and the irreducibility of the right side of (2.133) follows from Theorem 2.46.
4. Let in addition to the PMD (2.131) be given a regular PMD of dimension n, q, m

    τ̃(λ) = (ã(λ), b̃(λ), c̃(λ)) .    (2.139)

The PMDs (2.131) and (2.139) are called equivalent if their transfer functions coincide, that means

    c(λ) a^{−1}(λ) b(λ) = c̃(λ) ã^{−1}(λ) b̃(λ) .    (2.140)

Lemma 2.48. Assume the PMDs (2.131) and (2.139) to be equivalent and the PMD (2.131) to be minimal. Then the expression

    φ(λ) = det ã(λ) / det a(λ)    (2.141)

turns out to be a polynomial.

Proof. Lemma 2.8 implies the existence of the LMFD

    c̃(λ) ã^{−1}(λ) = a_2^{−1}(λ) c_2(λ) ,

where

    det ã(λ) ∼ det a_2(λ) .    (2.142)

Utilising (2.140), from this we gain the LMFD of the matrix w_τ(λ)

    w_τ(λ) = a_2^{−1}(λ) [c_2(λ) b̃(λ)] .    (2.143)

On the other side, the minimality of the PMD (2.131) allows to conclude that the right side of (2.138) is an ILMFD of the matrix w_τ(λ). Comparing (2.138) with (2.143), we obtain a_2(λ) = g(λ) a_1(λ), where g(λ) is a polynomial matrix. Therefore, the expression

    φ_1(λ) = det a_2(λ) / det a_1(λ) = det g(λ)

proves to be a polynomial. Taking into account (2.136) and (2.142), we realise that the right side of Equation (2.141) becomes a polynomial. Hereby, φ(λ) ∼ φ_1(λ) holds.

Corollary 2.49. If the PMDs (2.131) and (2.139) are equivalent and minimal, then

    det a(λ) ∼ det ã(λ) .

Proof. Lemma 2.48 offers under the given suppositions that both

    det ã(λ) / det a(λ)    and    det a(λ) / det ã(λ)

are polynomials; this proves the claim.
5.
Lemma 2.50. Consider a regular PMD (2.131) and its corresponding transfer matrix (2.132). Moreover, let the ILMFD and IRMFD

    w_τ(λ) = p_l^{−1}(λ) q_l(λ) = q_r(λ) p_r^{−1}(λ)    (2.144)

exist. Then the expressions

    φ_l(λ) = det a(λ) / det p_l(λ) ,    φ_r(λ) = det a(λ) / det p_r(λ)    (2.145)

turn out to be polynomials. Besides, the sets of poles of the matrices

    w_1(λ) = p_l(λ) c(λ) a^{−1}(λ) ,    w_2(λ) = a^{−1}(λ) b(λ) p_r(λ)

are contained in the sets of roots of the polynomials φ_l(λ) resp. φ_r(λ).

Proof. Consider the PMDs

    τ_1(λ) = (p_l(λ), q_l(λ), I_n) ,    τ_2(λ) = (p_r(λ), I_m, q_r(λ)) .    (2.146)

Per construction, the PMDs (2.131) and (2.146) are equivalent, where the PMDs (2.146) are minimal. Therefore, due to Lemma 2.48, the functions (2.145) are polynomials. Now we build the LMFD

    c(λ) a^{−1}(λ) = a_3^{−1}(λ) c_3(λ) ,    (2.147)

where

    det a_3(λ) ∼ det a(λ) .    (2.148)

As above, we have an LMFD of the transfer matrix w_τ(λ)

    w_τ(λ) = a_3^{−1}(λ) [c_3(λ) b(λ)] .    (2.149)

This relation together with (2.144) determines two LMFDs of the transfer matrix w_τ(λ), where (2.144) is an ILMFD. Therefore,

    a_3(λ) = g_l(λ) p_l(λ)    (2.150)

holds with a non-singular n × n polynomial matrix g_l(λ). Inversion of both sides of the last equation leads to

    a_3^{−1}(λ) = p_l^{−1}(λ) g_l^{−1}(λ) .    (2.151)

Moreover, from (2.150) through (2.148), we receive

    det g_l(λ) = det a_3(λ) / det p_l(λ) ∼ det a(λ) / det p_l(λ) = φ_l(λ) .    (2.152)

From (2.147) and (2.151), we earn

    p_l(λ) c(λ) a^{−1}(λ) = g_l^{−1}(λ) c_3(λ) = adj g_l(λ) c_3(λ) / det g_l(λ) ,

and with the aid of (2.152), this yields the proof for a left MFD. The relation for a right MFD is proven analogously.
2.12 Subordination of Rational Matrices

1. Let us have the rational n × m matrix w(λ) and the ILMFD

    w(λ) = p_l^{−1}(λ) q_l(λ) .    (2.153)

Furthermore, let the rational n × s matrix w_1(λ) be given.

Definition 2.51. The matrix w_1(λ) is said to be subordinated from left to the matrix w(λ), and we write

    w_1(λ) ≺_l w(λ)    (2.154)

for that, when the polynomial matrix p_l(λ) is left-cancelling for w_1(λ), i.e. the product

    q_{l1}(λ) = p_l(λ) w_1(λ)

is a polynomial matrix.

2. In analogy, if the n × m matrix w(λ) has an IRMFD

    w(λ) = q_r(λ) p_r^{−1}(λ)

and the s × m matrix w_2(λ) is of such a kind that the product

    q_{r1}(λ) = w_2(λ) p_r(λ)

turns out to be a polynomial matrix, then the matrix w_2(λ) is said to be subordinated from right to the matrix w(λ), and we denote this fact by

    w_2(λ) ≺_r w(λ) .    (2.155)

3.
Lemma 2.52. Let the right side of (2.153) define an ILMFD of the matrix w(λ), and let Condition (2.154) be fulfilled. Let Δ_w(λ) and Δ_{w1}(λ) be the McMillan denominators of the matrices w(λ) resp. w_1(λ). Then the fraction

    φ(λ) = Δ_w(λ) / Δ_{w1}(λ)

proves to be a polynomial.

Proof. Take the ILMFD of the matrix w_1(λ):

    w_1(λ) = p_{l1}^{−1}(λ) q_{l1}(λ) .

Since the polynomial matrix p_l(λ) is left cancelling for w_1(λ), a factorisation p_l(λ) = g_1(λ) p_{l1}(λ) with an n × n polynomial matrix g_1(λ) is possible. Besides, we obtain

    det g_1(λ) = det p_l(λ) / det p_{l1}(λ) = Δ_w(λ) / Δ_{w1}(λ) = φ(λ) .

Remark 2.53. A corresponding statement holds when (2.155) is true.

4.
Lemma 2.54. Assume (2.154) to be valid, and let Q(λ), Q_1(λ) be any polynomial matrices of appropriate dimensions. Then

    w_1(λ) + Q_1(λ) ≺_l w(λ) + Q(λ) .

Proof. Start with the ILMFD (2.153). Then the expression

    w(λ) + Q(λ) = p_l^{−1}(λ) [q_l(λ) + p_l(λ) Q(λ)] ,

due to Lemma 2.15, is also an ILMFD. Hence owing to (2.154), the product p_l(λ)[w_1(λ) + Q_1(λ)] turns out to be a polynomial matrix, which is what the lemma claims.

Remark 2.55. An analogous statement is true for subordination from right. Therefore, when the matrix w_1(λ) is subordinated to the matrix w(λ), then the fractional part of w_1(λ) is subordinated to the fractional part of w(λ). The reverse is also true.

5.
Theorem 2.56. Consider the strictly proper rational n × m matrix w(λ) and its minimal realisation

    w(λ) = C(λI_p − A)^{−1} B .    (2.156)

Then for holding the relation

    w_1(λ) ≺_l w(λ) ,    (2.157)

where the rational n × s matrix w_1(λ) is strictly proper, it is necessary and sufficient that there exists a constant p × s matrix B_1, which guarantees

    w_1(λ) = C(λI_p − A)^{−1} B_1 .

Proof. Sufficiency: Build the ILMFD

    C(λI_p − A)^{−1} = a_1^{−1}(λ) b_1(λ) .

Then owing to Lemma 2.9, the expression

    w(λ) = a_1^{−1}(λ) [b_1(λ) B]    (2.158)

defines an ILMFD of the matrix w(λ). Hence the product

    a_1(λ) w_1(λ) = b_1(λ) B_1

proves to be a polynomial matrix, which is what (2.157) declares.

Necessity: Build the n × (s + m) matrix w̄(λ) of the form

    w̄(λ) = [ w_1(λ)  w(λ) ] .    (2.159)

We will show

    Mdeg w̄(λ) = Mdeg w(λ) = p .    (2.160)

The equality Mdeg w(λ) = p immediately follows because the realisation (2.156) is minimal. It remains to show Mdeg w̄(λ) = Mdeg w(λ). For this purpose, we multiply (2.159) from left by the matrix a_1(λ). Then taking into account (2.157) and (2.158), we realise that

    a_1(λ) w̄(λ) = [ a_1(λ) w_1(λ)   a_1(λ) w(λ) ]

is a polynomial matrix. Using ord a_1(λ) = p, Lemma 2.7 yields

    Mdeg w̄(λ) ≤ p .

Now, we will prove that strict inequality cannot happen. Indeed, assume Mdeg w̄(λ) = ρ < p; then there exists a polynomial matrix ā(λ) with ord ā(λ) = deg det ā(λ) = ρ, such that

    ā(λ) w̄(λ) = [ ā(λ) w_1(λ)   ā(λ) w(λ) ]

becomes a polynomial matrix. When this happens, also ā(λ) w(λ) becomes a polynomial matrix, and regarding Lemma 2.7, we have Mdeg w(λ) ≤ ρ < p. But this is impossible, due to our supposition Mdeg w(λ) = p, and the correctness of (2.160) is proven.

Since the matrix w̄(λ) is strictly proper and Mdeg w̄(λ) = p holds, there exists a minimal realisation

    w̄(λ) = C̄(λI_p − Ā)^{−1} B̄    (2.161)

with constant p × p, n × p and p × (s + m) matrices Ā, C̄, B̄. Bring the matrix B̄ into the form

    B̄ = [ B̄_1  B̄_2 ]

with a p × s block B̄_1 and a p × m block B̄_2. Then from (2.161), we gain

    w̄(λ) = [ C̄(λI_p − Ā)^{−1} B̄_1   C̄(λI_p − Ā)^{−1} B̄_2 ] .

When we relate this and (2.159), we find

    w_1(λ) = C̄(λI_p − Ā)^{−1} B̄_1 ,    (2.162)
    w(λ) = C̄(λI_p − Ā)^{−1} B̄_2 .    (2.163)

Expressions (2.156) and (2.163) define realisations of the matrix w(λ) of the same dimension p. However, since realisation (2.156) is minimal, also realisation (2.163) has to be minimal. According to (2.88), we can find a non-singular matrix R with

    Ā = R A R^{−1} ,    B̄_2 = R B ,    C̄ = C R^{−1} .

From this and (2.162), it follows

    w_1(λ) = C(λI_p − A)^{−1} B_1 ,    B_1 = R^{−1} B̄_1 ,

and the theorem is proven.
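Theorem 2.56 can be illustrated with a small sketch (the data below are a hypothetical example, not from the book): with A = [0 1; 0 0], B = (0, 1)^T, C = (1, 0), the minimal realisation gives w(λ) = C(λI − A)^{−1}B = 1/λ², so the ILMFD denominator is p_l(λ) = λ². Any w_1 built with the same pair (C, A) and a constant B_1, e.g. B_1 = (1, 0)^T giving w_1(λ) = 1/λ, then satisfies p_l(λ) w_1(λ) = λ, a polynomial, i.e. w_1 ≺_l w.

```python
# Sketch of Theorem 2.56 with hypothetical data (names A, B, B1, C are
# this example's, not the book's): sharing (C, A) forces subordination.

def resolvent_2x2(A, lam):
    # (lambda I - A)^{-1} via the 2x2 adjugate formula
    a, b = lam - A[0][0], -A[0][1]
    c, d = -A[1][0], lam - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def transfer(C, A, B, lam):
    R = resolvent_2x2(A, lam)
    return sum(C[i] * R[i][j] * B[j] for i in range(2) for j in range(2))

A = [[0.0, 1.0], [0.0, 0.0]]
C = [1.0, 0.0]
B = [0.0, 1.0]     # w(lambda)  = 1/lambda^2
B1 = [1.0, 0.0]    # w1(lambda) = 1/lambda

def p_l(lam):      # ILMFD denominator of w
    return lam * lam

def subordination_residual(lam):
    # p_l(lambda) * w1(lambda) should equal the polynomial lambda
    return p_l(lam) * transfer(C, A, B1, lam) - lam
```

The residual vanishing at arbitrary sample points confirms that p_l(λ) w_1(λ) is the polynomial λ.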

Remark 2.57. A corresponding theorem can be proven for subordination from right.

Theorem 2.58. Let (2.156) and the rational q × m matrix w_1(λ) be given. Then for holding the relation

    w_1(λ) ≺_r w(λ) ,

it is necessary and sufficient that there exists a constant q × p matrix C_1 with

    w_1(λ) = C_1(λI_p − A)^{−1} B .

6.
Theorem 2.59. Consider the rational matrices

    F(λ) ,    G(λ) ,    H(λ) = F(λ) G(λ)    (2.164)

and the ILMFDs

    F(λ) = a_1^{−1}(λ) b_1(λ) ,    G(λ) = a_2^{−1}(λ) b_2(λ) .    (2.165)

Furthermore, let us have the ILMFD

    b_1(λ) a_2^{−1}(λ) = a_3^{−1}(λ) b_3(λ) .    (2.166)

Then the relation

    F(λ) ≺_l H(λ)    (2.167)

is true, if and only if the matrix

    R_h(λ) = [ a_3(λ) a_1(λ)   b_3(λ) b_2(λ) ]    (2.168)

is alatent, i.e. the pair (a_3(λ) a_1(λ), b_3(λ) b_2(λ)) is irreducible.

Proof. Sufficiency: Start with the ILMFD

    H(λ) = a^{−1}(λ) b(λ) .    (2.169)

Then from (2.165) and (2.166), we obtain

    H(λ) = a_1^{−1}(λ) b_1(λ) a_2^{−1}(λ) b_2(λ) = a_1^{−1}(λ) a_3^{−1}(λ) b_3(λ) b_2(λ)
         = [a_3(λ) a_1(λ)]^{−1} b_3(λ) b_2(λ) .    (2.170)

Let Matrix (2.168) be alatent. Then the right side of (2.170) is an ILMFD, and with the aid of (2.169), we get a(λ) = g(λ) a_3(λ) a_1(λ), where g(λ) is a unimodular matrix. Besides,

    a(λ) F(λ) = g(λ) a_3(λ) b_1(λ)

is a polynomial matrix, and hence (2.167) is true.

Necessity: Assume (2.167); then we have

    a(λ) = h(λ) a_1(λ) ,    (2.171)

where h(λ) is a non-singular polynomial matrix. This relation leads us to

    H(λ) = a^{−1}(λ) b(λ) = a_1^{−1}(λ) h^{−1}(λ) b(λ) .    (2.172)

Comparing the expressions for H(λ) in (2.170) and (2.172), we find

    a_3^{−1}(λ) b_3(λ) b_2(λ) = h^{−1}(λ) b(λ) .    (2.173)

But the matrix [ a_3(λ)  b_3(λ) b_2(λ) ] due to Lemma 2.9 is alatent, and the matrix [ h(λ)  b(λ) ] with respect to (2.171) and owing to Lemma 2.11 is alatent. Therefore, the left as well as the right side of (2.173) present ILMFDs of the same rational matrix. Then from Statement 2.3 on page 64 arise

    h(λ) = φ(λ) a_3(λ) ,    b(λ) = φ(λ) b_3(λ) b_2(λ) ,    (2.174)

where the matrix φ(λ) is unimodular. Applying (2.172) and (2.174), we arrive at the ILMFD

    H(λ) = [φ(λ) a_3(λ) a_1(λ)]^{−1} b(λ) .

This expression and (2.170) define two LMFDs of the same matrix H(λ). Since the matrix φ(λ) is unimodular, we have

    ord[φ(λ) a_3(λ) a_1(λ)] = ord[a_3(λ) a_1(λ)] ,

and the right side of (2.170) is an ILMFD too. Therefore, Matrix (2.168) is alatent.
A corresponding statement holds for subordination from right.

Theorem 2.60. Consider the rational matrices (2.164) and the IRMFDs

    F(λ) = b̄_1(λ) ā_1^{−1}(λ) ,    G(λ) = b̄_2(λ) ā_2^{−1}(λ) .

Moreover, let the IRMFD

    ā_1^{−1}(λ) b̄_2(λ) = b̄_3(λ) ā_3^{−1}(λ)

be given. Then the relation

    G(λ) ≺_r H(λ)

is true, if and only if the pair [ ā_2(λ) ā_3(λ), b̄_1(λ) b̄_3(λ) ] is irreducible.
7. The following theorem states an important special case, where the conditions of Theorems 2.59 and 2.60 are fulfilled.

Theorem 2.61. If for the rational n × p and p × m matrices F(λ) and G(λ) the relation

    Mdeg[F(λ) G(λ)] = Mdeg F(λ) + Mdeg G(λ)    (2.175)

holds, i.e. the matrices F(λ) and G(λ) are independent, then the relations

    F(λ) ≺_l F(λ) G(λ) ,    G(λ) ≺_r F(λ) G(λ)    (2.176)

take place.

Proof. Let us have the ILMFDs (2.165); then Mdeg F(λ) = ord a_1(λ) and Mdeg G(λ) = ord a_2(λ). Besides, the pair [a_2(λ), b_1(λ)] is irreducible, which can be seen by assuming the contrary: in case of ord a_3(λ) < ord a_2(λ) in (2.166), we would obtain from (2.170)

    Mdeg H(λ) ≤ ord a_1(λ) + ord a_3(λ) < ord a_1(λ) + ord a_2(λ) ,

which contradicts (2.175). The irreducibility of the pair [a_2(λ), b_1(λ)] and (2.175) imply that the right side of (2.170) is an ILMFD. Owing to Theorem 2.59, the first relation in (2.176) is shown. The second relation in (2.176) is seen analogously.
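A scalar sketch of Theorem 2.61 (the functions below are an example chosen here, not from the book): F(λ) = 1/λ and G(λ) = 1/(λ − 1) have no common pole, so Mdeg(FG) = 2 = Mdeg F + Mdeg G. The ILMFD denominator of H = FG is a(λ) = λ(λ − 1), and a(λ)F(λ) = λ − 1 is a polynomial, exactly the left subordination F ≺_l FG claimed in (2.176).

```python
# Scalar illustration of Theorem 2.61: independent factors force
# subordination of each factor to the product.

def F(lam):
    return 1.0 / lam

def G(lam):
    return 1.0 / (lam - 1.0)

def a(lam):
    # denominator of the (scalar) ILMFD of H = F*G
    return lam * (lam - 1.0)

def residual(lam):
    # a(lambda)*F(lambda) minus the polynomial (lambda - 1);
    # a vanishing residual witnesses F subordinated from the left to FG
    return a(lam) * F(lam) - (lam - 1.0)
```

By symmetry the same check with G and the polynomial λ would witness the right subordination.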

8.
Remark 2.62. Under the conditions of Theorem 2.59, using (2.171) and (2.174), we obtain

    a(λ) F(λ) = φ(λ) a_3(λ) b_1(λ) ,

that means, the factor φ(λ) a_3(λ) is a left divisor of the polynomial matrix a(λ) F(λ). Analogously, we conclude from the conditions of Theorem 2.60, when the IRMFD H(λ) = b̄(λ) ā^{−1}(λ) is present, that

    G(λ) ā(λ) = b̄_2(λ) ā_3(λ) φ̄(λ)

takes place with a unimodular matrix φ̄(λ). We learn from this equation that under the conditions of Theorem 2.60, the polynomial matrix ā_3(λ) φ̄(λ) is a right divisor of the polynomial matrix G(λ) ā(λ).

2.13 Dominance of Rational Matrices

1. Consider the rational block matrix

    w(λ) = [ w_{11}(λ) ... w_{1m}(λ) ; ... ; w_{n1}(λ) ... w_{nm}(λ) ] ,    (2.177)

where the w_{ik}(λ) are rational matrices of appropriate dimensions. Let Δ(λ) and Δ_{ik}(λ), (i = 1, ..., n; k = 1, ..., m) be the McMillan denominators of the matrix w(λ) resp. of its blocks w_{ik}(λ). Hereinafter, we abbreviate McMillan denominator by MMD.

Lemma 2.63. All expressions

    d_{ik}(λ) = Δ(λ) / Δ_{ik}(λ)    (2.178)

turn out to be polynomials.

Proof. At first assume only a block row

    w(λ) = w^z(λ) = [ w_1(λ) ... w_m(λ) ] ,    (2.179)

and we should have an ILMFD

    w^z(λ) = a^{−1}(λ) b(λ)

for it. Then per construction

    det a(λ) ∼ Δ^z(λ) ,

where Δ^z(λ) is the MMD of the row (2.179). Besides, the polynomial matrix a(λ) is cancelling from left for all matrices w_i(λ), (i = 1, ..., m), that means

    w_i(λ) ≺_l w^z(λ) ,    (i = 1, ..., m) .

Therefore, the relations

    d_i^z(λ) = Δ^z(λ) / Δ_i(λ) ,    (i = 1, ..., m) ,

where the Δ_i(λ) are the MMDs of the matrices w_i(λ), owing to Lemma 2.52, become polynomials. In the same way it can be seen that for a block column

    w(λ) = w^s(λ) = [ w̃_1(λ) ; ... ; w̃_n(λ) ]    (2.180)

the expressions

    d_k^s(λ) = Δ^s(λ) / Δ̃_k(λ) ,    (k = 1, ..., n)

become polynomials, where Δ^s(λ), Δ̃_k(λ) are the MMDs of the column (2.180) and of its elements.

Denote by w_i^z(λ), (i = 1, ..., n), and w_k^s(λ), (k = 1, ..., m) all rows and columns of Matrix (2.177), and by Δ_i^z(λ), (i = 1, ..., n), Δ_k^s(λ), (k = 1, ..., m) their MMDs. With respect to the above shown, the relations

    Δ(λ) / Δ_i^z(λ) ,    (i = 1, ..., n) ;    Δ(λ) / Δ_k^s(λ) ,    (k = 1, ..., m)

are polynomials. Therefore, all relations

    d_{ik}(λ) = Δ(λ) / Δ_{ik}(λ) = [ Δ(λ) / Δ_i^z(λ) ] · [ Δ_i^z(λ) / Δ_{ik}(λ) ]

become polynomials.

Corollary 2.64. For any Matrix (2.177), the inequalities

    Mdeg w(λ) ≥ Mdeg w_{ik}(λ) ,    (i = 1, ..., n; k = 1, ..., m)

are true.

2. The element w_{ik}(λ) in Matrix (2.177) is said to be dominant if the equality

    Δ(λ) = Δ_{ik}(λ)

takes place.

Lemma 2.65. The element w_{ik}(λ) is dominant in Matrix (2.177) if and only if

    Mdeg w(λ) = Mdeg w_{ik}(λ) .

Proof. The necessity of this condition is obvious. That it is also sufficient follows from the fact that expression (2.178) is a polynomial, and from the equations

    Mdeg w(λ) = deg Δ(λ) ,    Mdeg w_{ik}(λ) = deg Δ_{ik}(λ) .

3.
Theorem 2.66. A necessary and sufficient condition for the matrix w_2(λ) to be dominant in the block row

    w(λ) = [ w_1(λ)  w_2(λ) ]

is that it meets the relation

    w_1(λ) ≺_l w_2(λ) .

A necessary and sufficient condition for the matrix w_2(λ) to be dominant in the block column

    w(λ) = [ w_1(λ) ; w_2(λ) ]

is that it meets the relation

    w_1(λ) ≺_r w_2(λ) .

Proof. The proof immediately arises from the proof of Theorem 2.56.

4.
Theorem 2.67. Consider the strictly proper rational block matrix G(λ) of the shape

    G(λ) = [ K(λ)  L(λ) ; M(λ)  N(λ) ]    (2.181)

with i and n block rows and ℓ and m block columns, and the minimal realisation

    N(λ) = C(λI_p − A)^{−1} B .    (2.182)

Then the matrix N(λ) is dominant in G(λ), i.e.

    Mdeg N(λ) = Mdeg G(λ) = p ,    (2.183)

if and only if there exist constant i × p and p × ℓ matrices C_1 and B_1 with

    K(λ) = C_1(λI_p − A)^{−1} B_1 ,
    L(λ) = C_1(λI_p − A)^{−1} B ,    (2.184)
    M(λ) = C(λI_p − A)^{−1} B_1 .

Proof. Necessity: Let (2.183) be valid. Since the matrix G(λ) is strictly proper, there exists a minimal realisation

    G(λ) = C̄(λI_p − Ā)^{−1} B̄    (2.185)

with constant matrices Ā, B̄ and C̄ of the dimensions p × p, p × (ℓ + m) and (i + n) × p, respectively. Assume

    C̄ = [ C̄_1 ; C̄_2 ] ,    B̄ = [ B̄_1  B̄_2 ] ,

where C̄_1 and C̄_2 have i and n rows and B̄_1 and B̄_2 have ℓ and m columns, and substitute this expression in (2.185); then we obtain

    G(λ) = [ C̄_1(λI_p − Ā)^{−1} B̄_1   C̄_1(λI_p − Ā)^{−1} B̄_2 ;
             C̄_2(λI_p − Ā)^{−1} B̄_1   C̄_2(λI_p − Ā)^{−1} B̄_2 ] .    (2.186)

Relating (2.181) to (2.186), we find

    N(λ) = C̄_2(λI_p − Ā)^{−1} B̄_2 .    (2.187)

Both Equations (2.182) and (2.187) are realisations of N(λ) and possess the same dimension. Since (2.182) is a minimal realisation, (2.187) has to be minimal too. Therefore, Relations (2.88) can be used, which lead us to

    Ā = R A R^{−1} ,    B̄_2 = R B ,    C̄_2 = C R^{−1}

with a non-singular matrix R. Inserting this into (2.186), Relation (2.184) is achieved with

    C_1 = C̄_1 R ,    B_1 = R^{−1} B̄_1 ,

which proves the necessity of the conditions of the theorem.

Sufficiency: Suppose Conditions (2.182), (2.184) to be true. Then,

    G(λ) = [ C_1 ; C ] (λI_p − A)^{−1} [ B_1  B ]

holds. Consider the matrix

    G_2(λ) = [ M(λ)  N(λ) ] = C(λI_p − A)^{−1} [ B_1  B ] .    (2.188)

The realisation on the right side of (2.188) is minimal, because of Mdeg N(λ) = p. Therefore, the pair (λI_p − A, [ B_1  B ]) is irreducible, and Mdeg G_2(λ) = p. Utilising (2.181) and (2.188), the matrix G(λ) can be written in the form

    G(λ) = [ G_1(λ) ; G_2(λ) ]

with

    G_1(λ) = C_1(λI_p − A)^{−1} [ B_1  B ] ,

where the pair [λI_p − A, C_1] is, roughly speaking, not necessarily irreducible. Therefore, according to Theorem 2.58, we obtain

    G_1(λ) ≺_r G_2(λ) ,

and with account of Theorem 2.66,

    Mdeg G(λ) = Mdeg G_2(λ) = p .

Corollary 2.68. Under Conditions (2.181)–(2.184), the relations

    K(λ) ≺_l L(λ) ,    K(λ) ≺_r M(λ)    (2.189)

are true.

Proof. Assume the ILMFD

    C_1(λI_p − A)^{−1} = α_l^{−1}(λ) β_l(λ) .

Owing to Lemma 2.9, the right side of

    C_1(λI_p − A)^{−1} B = α_l^{−1}(λ) [β_l(λ) B]

is an ILMFD of the matrix L(λ). Therefore, the product

    α_l(λ) K(λ) = β_l(λ) B_1

becomes a polynomial matrix. Herewith the first relation in (2.189) is shown. The second part can be proven analogously.
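The construction of Theorem 2.67 can be sketched numerically with hypothetical data (all names below are this example's, not the book's): with A = [0 1; 0 0], B = (0, 1)^T, C = (1, 0), the block N(λ) = 1/λ² has Mdeg N = p = 2, and the constants C_1 = (0, 1), B_1 = (1, 0)^T produce via (2.184) the blocks K = 0, L(λ) = 1/λ, M(λ) = 1/λ, so that G(λ) = [0 1/λ; 1/λ 1/λ²] with N dominant.

```python
# Blocks of G(lambda) built from one shared realisation (A, B, C) and
# constants (B1, C1), as in (2.184) of Theorem 2.67.

def resolvent_2x2(A, lam):
    # (lambda I - A)^{-1} via the 2x2 adjugate formula
    a, b = lam - A[0][0], -A[0][1]
    c, d = -A[1][0], lam - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def scalar_transfer(Crow, A, Bcol, lam):
    R = resolvent_2x2(A, lam)
    return sum(Crow[i] * R[i][j] * Bcol[j] for i in range(2) for j in range(2))

A  = [[0.0, 1.0], [0.0, 0.0]]
C  = [1.0, 0.0]; B  = [0.0, 1.0]     # N(lambda) = 1/lambda^2, p = 2
C1 = [0.0, 1.0]; B1 = [1.0, 0.0]     # constants of (2.184)

def G_blocks(lam):
    # returns (K, L, M, N) evaluated at lam
    K = scalar_transfer(C1, A, B1, lam)
    L = scalar_transfer(C1, A, B,  lam)
    M = scalar_transfer(C,  A, B1, lam)
    N = scalar_transfer(C,  A, B,  lam)
    return K, L, M, N
```

All four blocks share the two poles of N at λ = 0, so Mdeg G = Mdeg N = 2, as (2.183) asserts.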
3
Normal Rational Matrices

3.1 Normal Rational Matrices

1. Consider the rational n × m matrix A(λ) in the standard form (2.21)

    A(λ) = N(λ) / d(λ) ,    deg d(λ) = p ,    (3.1)

and, furthermore, let there be given certain ILMFD and IRMFD

    A(λ) = a_l^{−1}(λ) b_l(λ) = b_r(λ) a_r^{−1}(λ) ,    (3.2)

where, due to the irreducibility of the MFDs, we have

    ord a_l(λ) = ord a_r(λ) = Mdeg A(λ) ,

and Mdeg A(λ) is the degree of the McMillan denominator of the matrix A(λ). At first, Relation (2.34) implies Mdeg A(λ) ≥ p. In the following disclosure, matrices will play an important role for which

    Mdeg A(λ) = p .    (3.3)

Since det a_l(λ) ∼ det a_r(λ) is valid and both polynomials are divisible by d(λ), Relation (3.3) is equivalent to

    d(λ) ∼ Δ_A(λ) ,    (3.4)

where Δ_A(λ) is the McMillan denominator of A(λ). Further on, rational matrices satisfying (3.3), (3.4) will be called normal matrices.

2. For a normal matrix (3.1), it is possible to build IMFDs (3.2) such that

    det a_l(λ) ∼ d(λ) ,    det a_r(λ) ∼ d(λ) .

If both ILMFD and IRMFD satisfy such conditions, the pair is called a complete MFD. Thus, normal rational matrices are rational matrices that possess a complete MFD.

It is emphasised that a complete MFD is always irreducible. Indeed, from (2.34) it is seen that for any matrix A(λ) in form (3.1) it always follows deg Δ_A(λ) ≥ deg d(λ). Therefore, if we have any matrix A(λ) satisfying (3.3), then the polynomials det a_l(λ) and det a_r(λ) possess the minimal possible degree, and hence the complete MFD is irreducible.

3. A general characterisation of the set of normal rational matrices yields the next theorem.

Theorem 3.1. Let in (3.1) be min(n, m) ≥ 2. Then for the irreducible rational matrix (3.1) to be normal, it is necessary and sufficient that every minor of second order of the polynomial matrix N(λ) is divisible without remainder by the denominator d(λ). Moreover, if Relations (3.2) define a complete MFD, then the matrices a_l(λ) and a_r(λ) are simple.

Proof. Necessity: To consider a concrete case, assume a left MFD. Let

    A(λ) = N(λ) / d(λ) = a_l^{−1}(λ) b_l(λ)

be part of a complete MFD. Then the matrix a_l(λ) is simple, because from (2.34) it follows that Equations (3.3), (3.4) can be fulfilled only if all invariant polynomials of a_l(λ) except the last one are equal to one. Therefore, (2.39) delivers the representation

    a_l(λ) = ψ(λ) diag{1, ..., 1, d(λ)} φ(λ)    (3.5)

with unimodular matrices ψ(λ), φ(λ). We take from (3.5) that the matrix a_l(λ) is simple, and furthermore from (3.5), we obtain

    a_l^{−1}(λ) = φ^{−1}(λ) diag{d(λ), ..., d(λ), 1} ψ^{−1}(λ) / d(λ) ≜ Q(λ) / d(λ) .    (3.6)

All minors of second order of the matrix diag{d(λ), ..., d(λ), 1} are divisible by d(λ). Thus, by the Binet–Cauchy theorem this property passes to the numerator Q(λ) of the fraction on the right side of (3.6). But then, again due to the Binet–Cauchy theorem, the matrix N(λ) = Q(λ) b_l(λ) possesses the stated property. Hence the necessity of the condition of the theorem is proven.

Sufficiency: Assume that all minors of second order of the matrix N(λ) are divisible by d(λ). Then we learn from (1.40)–(1.42) that this matrix can be presented in the form

    N(λ) = p(λ) S_N(λ) q(λ) ,    (3.7)

where p(λ), q(λ) are unimodular, and the matrix S_N(λ) has the appropriate Smith canonical form. Thus, from (1.49) we receive

    S_N(λ) = [ diag{ g_1(λ), g_1(λ)g_2(λ)d(λ), ..., g_1(λ)···g_ρ(λ)d^{ρ−1}(λ) }   O_{ρ,m−ρ} ;
               O_{n−ρ,ρ}   O_{n−ρ,m−ρ} ] ,    (3.8)

where the polynomial g_1(λ) and the denominator d(λ) are coprime, because in the contrary case the fraction (3.1) would be reducible. According to (3.7) and (3.8), the matrix A(λ) of (3.1) can be written in the shape

    A(λ) = p(λ) [ diag{ g_1(λ)/d(λ), g_1(λ)g_2(λ), ..., g_1(λ)···g_ρ(λ)d^{ρ−2}(λ) }   O_{ρ,m−ρ} ;
                  O_{n−ρ,ρ}   O_{n−ρ,m−ρ} ] q(λ) ,

where the fraction g_1(λ)/d(λ) is irreducible. Therefore, choosing

    a_l(λ) = diag{d(λ), 1, ..., 1} p^{−1}(λ) ,
    b_l(λ) = [ diag{ g_1(λ), g_1(λ)g_2(λ), ..., g_1(λ)···g_ρ(λ)d^{ρ−2}(λ) }   O_{ρ,m−ρ} ;
               O_{n−ρ,ρ}   O_{n−ρ,m−ρ} ] q(λ) ,

we obtain the LMFD

    A(λ) = a_l^{−1}(λ) b_l(λ) ,

which is complete, because det a_l(λ) ∼ d(λ) is true.

Corollary 3.2. It follows from (3.5), (3.6) that for a simple n × n matrix a(λ), the rational matrix a^{−1}(λ) is normal, and vice versa.

Corollary 3.3. From Equations (3.7), (3.8) we learn that for k ≥ 2, all minors of k-th order of the numerator N(λ) of a normal matrix are divisible by d^{k−1}(λ).
Remark 3.4. Irreducible rational matrix rows or columns are always normal. Let for instance the column

    A(λ) = (1/d(λ)) [ a_1(λ) ; ... ; a_n(λ) ]    (3.9)

with polynomials a_i(λ), (i = 1, ..., n) be given. Then by applying left elementary operations, A(λ) can be brought into the form

    A(λ) = c(λ) (1/d(λ)) [ ε(λ) ; 0 ; ... ; 0 ] ,

where c(λ) is a unimodular n × n matrix, and ε(λ) is the GCD of the polynomials a_1(λ), ..., a_n(λ). The polynomials ε(λ) and d(λ) are coprime, because in the other case the rational matrix (3.9) could be cancelled. Choose

    a_l(λ) = diag{d(λ), 1, ..., 1} c^{−1}(λ) ,    b_l(λ) = [ ε(λ) ; 0 ; ... ; 0 ] ;

then obviously we have

    A(λ) = a_l^{−1}(λ) b_l(λ) ,

and this LMFD is complete, because of det a_l(λ) ∼ d(λ).

4. A general criterion for checking the normality of the rational matrix (3.1) directly from its elements yields the following theorem.

Theorem 3.5. Let the fraction (3.1) be irreducible, and furthermore

    d(λ) = (λ − λ_1)^{ν_1} ··· (λ − λ_q)^{ν_q} ,    ν_1 + ... + ν_q = p .    (3.10)

Then a necessary and sufficient condition for the matrix A(λ) to be normal is the fact that each of its minors of second order possesses poles in the points λ = λ_i, (i = 1, ..., q) with multiplicity not higher than ν_i.

Proof. Necessity: Let N(λ) = [n_{ij}(λ)] and

    A(i, j; k, ℓ) = det [ n_{ik}(λ)/d(λ)   n_{iℓ}(λ)/d(λ) ; n_{jk}(λ)/d(λ)   n_{jℓ}(λ)/d(λ) ]    (3.11)

be a minor of the matrix A(λ) that is generated by the elements of the rows with numbers i, j and the columns with numbers k, ℓ. Obviously

    A(i, j; k, ℓ) = [ n_{ik}(λ) n_{jℓ}(λ) − n_{jk}(λ) n_{iℓ}(λ) ] / d²(λ)    (3.12)

is true. If the matrix A(λ) is normal, then, due to Theorem 3.1, the numerator of the last fraction is divisible by d(λ). Thus we have

    A(i, j; k, ℓ) = a^{ij}_{kℓ}(λ) / d(λ) ,    (3.13)

where a^{ij}_{kℓ}(λ) is a certain polynomial. It is seen from (3.13) and (3.10) that the minor (3.11) possesses in λ = λ_i poles of order ν_i or lower.

Sufficiency: Conversely, if for every minor (3.11) the representation (3.13) is correct, then the numerator of each fraction (3.12) is divisible by d(λ); that means, every minor of second order of the matrix N(λ) is divisible by d(λ), or in other words, the matrix A(λ) is normal.

5.
Theorem 3.6. If the matrix A(λ) in (3.1) is normal, and (3.10) is assumed, then

    rank N(λ_i) = 1 ,    (i = 1, ..., q) .    (3.14)

Thereby, if the polynomial (3.10) has only single roots, i.e. q = p, ν_1 = ν_2 = ... = ν_p = 1, then Condition (3.14) is not only necessary but also sufficient for the normality of the matrix A(λ).

Proof. Equation (3.6) implies rank Q(λ_i) = 1, (i = 1, ..., q). Therefore, the matrix

    N(λ_i) = Q(λ_i) b_l(λ_i)

is either the zero matrix or it has rank 1. The first possibility is excluded, as otherwise the fraction (3.1) would have been reducible. Hence we get (3.14). If all roots λ_i are simple and (3.14) holds, then every minor of second order of the matrix N(λ) is divisible by (λ − λ_i), (i = 1, ..., q). Since in the present case d(λ) = (λ − λ_1)(λ − λ_2) ··· (λ − λ_p) is true, every minor of second order of N(λ) is divisible by d(λ), which means that the matrix A(λ) is normal.

6. We learn from Theorems 3.1–3.6 that the elements of a normal matrix A(λ) are constrained by a number of strict equations that hold between them, which ensure that all minors of second order are divisible by the denominator. Even small deviations, sometimes only in one element, cause these equations to be violated, and the matrix A(λ) is no longer normal, with the consequence that the order of the McMillan denominator of this matrix grows abruptly. As a whole, this leads to incorrect solutions during the construction of the IMFD and the corresponding realisations in state space. The above said gives evidence of the structural instability of normal matrices, and from that we conclude immediately the instability of the numeric operations with such matrices. On the other side, it is shown below that in practical problems, the frequency domain models for real objects are described essentially by normal transfer matrices. Therefore, the methods for practical solution of control problems have to be supplied by additional tools, which help to overcome the mentioned structural and numeric instabilities to reach correct results.
Example 3.7. Consider the rational matrix

    A(λ) = N(λ) / d(λ)

with

    N(λ) = [ λ−1  1 ; ε  λ−2 ] ,    d(λ) = (λ − 1)(λ − 2) ,

where ε is a constant. Due to det N(λ) = (λ − 1)(λ − 2) − ε, the matrix A(λ) proves to be normal if and only if ε = 0. It is easily checked that

    [ λ−1  1 ; ε  λ−2 ] = [ 1  0 ; λ−2  −1 ] [ 1  0 ; 0  λ² − 3λ + 2 − ε ] [ λ−1  1 ; 1  0 ] ,

where the first and last matrix on the right side are unimodular. Thus for ε ≠ 0, the matrix A(λ) has the McMillan canonical form

    M_A(λ) = [ 1/((λ−1)(λ−2))   0 ; 0   (λ² − 3λ + 2 − ε)/((λ−1)(λ−2)) ] .

In the present case we have ψ_1(λ) = (λ−1)(λ−2) = d(λ), ψ_2(λ) = (λ−1)(λ−2) = d(λ). Hence the McMillan denominator is Δ_A(λ) = (λ−1)²(λ−2)² and Mdeg A(λ) = 4. However, if ε = 0 is true, then we get

    M_A(λ) = [ 1/((λ−1)(λ−2))   0 ; 0   1 ] .

In this case, we obtain ψ_1(λ) = (λ−1)(λ−2) = d(λ), ψ_2(λ) = 1 and the McMillan denominator Δ_A(λ) = (λ−1)(λ−2), which yields Mdeg A(λ) = 2.
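The normality criterion in Example 3.7 (with N(λ) as reconstructed above) can be checked mechanically: the only second-order minor of N is det N = (λ−1)(λ−2) − ε, and since d has the simple roots 1 and 2, divisibility by d just means that this minor vanishes at both roots. Equivalently, by Theorem 3.6, rank N(1) = rank N(2) = 1 exactly when ε = 0.

```python
# Normality check for Example 3.7: A = N/d with
# N(lambda) = [[lambda-1, 1], [eps, lambda-2]],
# d(lambda) = (lambda-1)*(lambda-2).

def detN(lam, eps):
    # the single second-order minor of N
    return (lam - 1.0) * (lam - 2.0) - eps

def divisible_by_d(eps, tol=1e-12):
    # d has simple roots 1 and 2, so divisibility of the minor by d
    # is equivalent to the minor vanishing at both roots
    return abs(detN(1.0, eps)) < tol and abs(detN(2.0, eps)) < tol

def rank_N_at(lam, eps, tol=1e-12):
    # rank test (3.14) of Theorem 3.6 at a root of d
    N = [[lam - 1.0, 1.0], [eps, lam - 2.0]]
    if abs(N[0][0] * N[1][1] - N[0][1] * N[1][0]) > tol:
        return 2
    if any(abs(x) > tol for row in N for x in row):
        return 1
    return 0
```

Any ε ≠ 0 breaks both tests at once, illustrating the structural instability discussed in item 6.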

3.2 Algebraic Properties of Normal Matrices


1. In this section we give some general algebraic properties of normal ma-
trices that will be used further.
Theorem 3.8. Let two normal matrices
N1 () N2 ()
A1 () = , A2 () = (3.15)
d1 () d2 ()
of dimensions n  resp.  m be given. Then, if the fraction
N1 ()N2 ()
A() = A1 ()A2 () = (3.16)
d1 ()d2 ()
is irreducible, the matrix A() becomes normal.
Proof. If n = 1 or m = 1 is true, then the statement follows from the remark
after Theorem 3.1. Now let min(n, m) ≥ 2 and set N(λ) = N1(λ)N2(λ).
Due to the theorem of Binet-Cauchy, every minor of second order of the matrix
N(λ) is a bilinear form of the minors of second order of the matrices N1(λ)
and N2(λ), and consequently divisible by the product d1(λ)d2(λ). Therefore,
the fraction (3.16) is normal, because it is also irreducible. □
2.
Theorem 3.9. Let the matrices (3.15) have the same dimension, and the
polynomials d1(λ) and d2(λ) be coprime. Then the matrix

    A(λ) = A1(λ) + A2(λ)                                           (3.17)

is normal.

Proof. From (3.15) and (3.17), we generate

    A(λ) = (d2(λ)N1(λ) + d1(λ)N2(λ)) / (d1(λ)d2(λ)).               (3.18)

The fraction (3.18) is irreducible, because the sum (3.17) has its poles at the
zeros of d1(λ) and d2(λ) with the same multiplicity.
Denote

    A1(λ) = [α_ik(λ)]/d1(λ),    A2(λ) = [β_ik(λ)]/d2(λ).

Then the minor (3.11) for the matrix A(λ) has the shape

    A(i,j; k,ℓ) = det [ α_ik(λ)/d1(λ) + β_ik(λ)/d2(λ)   α_iℓ(λ)/d1(λ) + β_iℓ(λ)/d2(λ) ]
                      [ α_jk(λ)/d1(λ) + β_jk(λ)/d2(λ)   α_jℓ(λ)/d1(λ) + β_jℓ(λ)/d2(λ) ].   (3.19)

Applying the summation theorem for determinants, and using the normality
of A1(λ), A2(λ) after cancellation, we obtain the expression

    A(i,j; k,ℓ) = b_{kℓ}^{ij}(λ) / (d1(λ)d2(λ))

with certain polynomials b_{kℓ}^{ij}(λ). It follows from this expression that the poles
of the minor (3.19) are found among the roots of the denominator of the fraction
(3.18), and they possess no higher multiplicity. Since this rational matrix is
irreducible, Theorem 3.5 yields that the matrix A(λ) is normal. □
Corollary 3.10. If A(λ) is a normal n × m rational matrix, and G(λ) is an
n × m polynomial matrix, then the rational matrix

    A1(λ) = A(λ) + G(λ)

is normal.
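Theorem 3.9 can be illustrated numerically. The following sketch (with an ad-hoc example of my choosing) takes two normal matrices with rank-one numerators and coprime denominators and checks that the second-order minor of the numerator of their sum is divisible by d1(λ)d2(λ), as the proof requires:

```python
# Illustration of Theorem 3.9: A1 = N1/d1, A2 = N2/d2 are normal
# (rank-one numerators, so all their 2x2 minors vanish), d1, d2 coprime.
# The 2x2 minor of the numerator of A1 + A2 is divisible by d1*d2.
import sympy as sp

s = sp.symbols('s')
N1 = sp.Matrix([[1, s], [s, s**2]])    # rank one -> normal over d1
N2 = sp.Matrix([[1, 1], [1, 1]])       # rank one -> normal over d2
d1, d2 = s - 1, s + 1                  # coprime denominators

M = sp.expand(d2*N1 + d1*N2)           # numerator of A1 + A2, cf. (3.18)
minor = sp.expand(M.det())             # the only second-order minor
q, r = sp.div(minor, sp.expand(d1*d2), s)
print(r)   # -> 0: the minor is divisible by d1*d2
```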
3. For normal matrices the reverse to Theorem 2.42 is true.

Theorem 3.11. Let the polynomial n × n matrix a_l(λ) be simple and b_l(λ) be
any n × m polynomial matrix. If under this condition, the fraction

    A(λ) = a_l^{-1}(λ) b_l(λ) = adj a_l(λ) b_l(λ) / det a_l(λ)     (3.20)

is irreducible, then the pair (a_l(λ), b_l(λ)) is irreducible and the matrix A(λ) is
normal.

Proof. It is sufficient to consider the case min{n, m} ≥ 2. Since the matrix
a_l(λ) is simple, with the help of Corollary 3.2, it follows that the matrix a_l^{-1}(λ)
is normal and all minors of second order of the matrix adj a_l(λ) are divisible by
det a_l(λ). Due to the theorem of Binet-Cauchy, this property transfers to the
numerator on the right side of (3.20), which is why the matrix A(λ) is normal.
By using (3.6), we also obtain

    A(λ) = Q(λ) b_l(λ) / d(λ).

Here per construction, we have det a_l(λ) ∼ d(λ). Comparing this equation
with (3.20), we realise that the middle part of (3.20) proves to be a complete
LMFD, and consequently the pair (a_l(λ), b_l(λ)) is irreducible. □

Analogously, the following statement for right MFD can be shown:

Corollary 3.12. If the polynomial m × m matrix a_r(λ) is simple, det a_r(λ) ∼
d(λ), b_r(λ) is any polynomial n × m matrix, and the fraction

    b_r(λ) a_r^{-1}(λ) = R(λ)/d(λ) = A(λ)

is irreducible, then the pair [a_r(λ), b_r(λ)] is irreducible and the left side defines
a complete RMFD of the matrix A(λ).

4. Let us investigate some general properties of the MFD of the product of
normal matrices. Consider the normal matrices (3.15), where their product
(3.16) is assumed to exist and be irreducible. Moreover, let the complete
LMFDs

    A1(λ) = a1^{-1}(λ) b1(λ),    A2(λ) = a2^{-1}(λ) b2(λ)

be given, where the matrices a1(λ), a2(λ) are simple, and det a1(λ) ∼ d1(λ),
det a2(λ) ∼ d2(λ) are valid. Applying these representations, we can write

    A(λ) = a1^{-1}(λ) b1(λ) a2^{-1}(λ) b2(λ).                      (3.21)

Notice that the fraction L(λ) = b1(λ) a2^{-1}(λ), owing to the irreducibility of
A(λ), is also irreducible. Hence as a result of Corollary 3.12, the fraction L(λ)
is normal, and there exists the complete LMFD

    b1(λ) a2^{-1}(λ) = a3^{-1}(λ) b3(λ),

where det a3(λ) ∼ d2(λ). From this and (3.21), we get

    A(λ) = a_l^{-1}(λ) b_l(λ)    with    a_l(λ) = a3(λ)a1(λ),  b_l(λ) = b3(λ)b2(λ).

Per construction, det a_l(λ) ∼ d1(λ)d2(λ) is valid, which is why the last
relations define a complete LMFD, the matrix a_l(λ) is simple and the pair
(a3(λ)a1(λ), b3(λ)b2(λ)) is irreducible. Hereby, we also obtain

    Mdeg[A1(λ)A2(λ)] = Mdeg A1(λ) + Mdeg A2(λ).

Hence the following theorem has been proven:

Theorem 3.13. If the matrices (3.15) are normal and the product (3.16) is
irreducible, then the matrices A1(λ) and A2(λ) are independent in the sense
of Section 2.4.

5. From Theorems 3.13 and 2.61 we conclude the following statement, which
is formulated in the terminology of subordination of matrices in the sense of
Section 2.12.

Theorem 3.14. Let the normal matrices (3.15) be given, and let their product
(3.16) be irreducible. Then

    A1(λ) ≺_l A1(λ)A2(λ),    A2(λ) ≺_r A1(λ)A2(λ).

6.
Theorem 3.15. Let the separation

    A(λ) = N(λ)/(d1(λ)d2(λ)) = N1(λ)/d1(λ) + N2(λ)/d2(λ)           (3.22)

exist, where the matrix A(λ) is normal and the polynomials d1(λ), d2(λ) are
coprime. Then each of the fractions on the right side of (3.22) is normal.

Proof. At first we notice that the fractions on the right side of (3.22) are
irreducible, otherwise the fraction A(λ) would be reducible. The fraction

    A1(λ) = N(λ)/d1(λ)

is also normal, because it is irreducible and the minors of second order of the
numerator are divisible by the denominator d1(λ). Therefore,

    A1(λ) = a1^{-1}(λ) b1(λ),                                      (3.23)

where the matrix a1(λ) is simple and det a1(λ) ∼ d1(λ). Multiplying both
sides of Equation (3.22) from the left by a1(λ) and considering (3.23), we get

    b1(λ)/d2(λ) = a1(λ) N1(λ)/d1(λ) + a1(λ) N2(λ)/d2(λ),

this means

    b1(λ)/d2(λ) − a1(λ) N2(λ)/d2(λ) = a1(λ) N1(λ)/d1(λ).

The left side of the last equation is analytical at the zeros of the polynomial
d1(λ), and the right side at the zeros of d2(λ). Consequently,

    a1(λ)N1(λ)/d1(λ) = L(λ)

has to be a polynomial matrix L(λ) and

    N1(λ)/d1(λ) = a1^{-1}(λ) L(λ).

The fraction on the right side is irreducible, otherwise the fraction (3.22)
would be reducible. The matrix a1(λ) is simple, and therefore the last fraction,
owing to Theorem 3.11, is normal. In analogy it may be shown that the matrix
N2(λ)/d2(λ) is normal. □

3.3 Normal Matrices and Simple Realisations

1. At first sight, normal rational matrices seem to be quite artificial con-
structions, because their elements are bound by a number of crisp equations.
However, in this section we will demonstrate that, for most real problems, it is
exactly the normal matrices that give the correct description of multidimensional
LTI objects in the frequency domain.

2. We will use the terminology and the notation of Section 1.15, and con-
sider an arbitrary realisation (A, B, C) of dimension (n, p, m). Doing so, the
realisation (A, B, C) is called minimal, if the pair (A, B) is controllable and
the pair [A, C] is observable. A minimal realisation is called simple if the ma-
trix A is cyclic. As shown in Section 1.15, the property of simplicity of the
realisation (A, B, C) is structurally stable: it is conserved at least for suf-
ficiently small deviations in the matrices A, B, C. Realisations that are not
simple do not possess the property of structural stability. Practically,
this means that correct models of real linear objects in state space amount
to simple realisations.
3. As explained in Chapter 2, every realisation (A, B, C) is assigned a
strictly proper rational n × m matrix w(λ) by the relation

    w(λ) = C(λI_p − A)^{-1} B                                      (3.24)

equivalently expressed by

    w(λ) = C adj(λI_p − A) B / d_A(λ),                             (3.25)

where adj(λI_p − A) is the adjoint matrix and d_A(λ) = det(λI_p − A). As is
taken from (3.24), every realisation (A, B, C) is uniquely related to a transfer
matrix. Conversely, every strictly proper rational n × m matrix

    w(λ) = N(λ)/d(λ)                                               (3.26)

is configured to an infinite set of realisations (A, B, C) of dimensions (n, q, m)
with q ≥ Mdeg w(λ), where Mdeg w(λ) means the McMillan degree of the
matrix w(λ). Realisations, where the number q takes its minimal value, as
before will be called minimal. The realisation (A, B, C) is minimal, if and
only if the pair (A, B) is controllable and the pair [A, C] is observable.
In general, minimal realisations of arbitrary matrices w(λ) are not simple,
and therefore they do not possess the property of structural stability. In
this case small deviations in the coefficients of the linear objects (1.102),
(1.103) lead to essential changes in their transfer functions. In this connection,
the question arises, for which class of matrices the corresponding minimal
realisations will be simple. The answer to this question lies in the following
statements.

4.
Theorem 3.16. The transfer matrix (3.25) of the realisation (A, B, C) is ir-
reducible, if and only if the realisation (A, B, C) is simple.

Proof. As follows from Theorem 2.45, for the irreducibility of the fraction
(3.25), it is necessary and sufficient that the elementary PMD

    τ(λ) = (λI_p − A, B, C)                                        (3.27)

is minimal and the matrix λI_p − A is simple, which is equivalent to the demand
for simplicity of the realisation (A, B, C). □

Theorem 3.17. If the realisation (A, B, C) of dimension (n, p, m) is simple,
then the corresponding transfer matrix (3.24) is normal.

Proof. Assume that the realisation (A, B, C) is simple. Then the elementary
PMD (3.27) is also simple, and the fraction on the right side of (3.25) is
irreducible. Hereby, due to the simplicity of the matrix λI_p − A, the rational
matrix

    (λI_p − A)^{-1} = adj(λI_p − A) / det(λI_p − A)

is normal, which means it is irreducible and all minors of 2nd order of
the matrix adj(λI_p − A) are divisible by det(λI_p − A). But then for min{m, n} ≥
2, owing to the theorem of Binet-Cauchy, also the minors of 2nd order of the
matrix

    Q(λ) = C adj(λI_p − A) B

possess this property, and this means that Matrix (3.24) is normal. □
Theorem 3.18. For a strictly proper rational matrix to possess a simple re-
alisation, it is necessary and sufficient that this matrix is normal.

Proof. Necessity: When the rational matrix (3.26) allows a simple realisation
(A, B, C), then it is normal by virtue of Theorem 3.17.
Sufficiency: Let the irreducible matrix (3.26) be normal and deg d(λ) = p.
Then there exists a complete LMFD

    w(λ) = a^{-1}(λ) b(λ)

for it with ord a(λ) = p, and consequently Mdeg w(λ) = p. From this we con-
clude that Matrix (3.26) allows a minimal realisation (A, B, C) of dimension
(n, p, m). We now assume that the matrix A is not cyclic. Then the fraction

    C adj(λI_p − A) B / det(λI_p − A)

would be reducible. Hereby, Matrix (3.26) would permit the representation

    w(λ) = N1(λ)/d1(λ),

where deg d1(λ) < deg d(λ). But this contradicts the supposition on the irre-
ducibility of Matrix (3.26). Therefore, the matrix A must be cyclic and the
matrix λI_p − A simple, hence the minimal realisation has to be simple. □

3.4 Structural Stable Representation of Normal Matrices

1. The notation of normal matrices in the form (3.1) is structurally unstable,
because under arbitrarily small errors in its coefficients it loses the property that
the minors of 2nd order of the numerator are divisible by the denominator. In
that case, the quantity Mdeg A(λ) abruptly increases. Especially, if Matrix
(3.1) is strictly proper, then the dimensions of the matrices in its minimal real-
isation in state space will also abruptly increase, i.e. the dynamical properties
with respect to the original system change drastically. In this section, a
structurally stable representation (S-representation) of normal rational matri-
ces will be introduced. Regarding normality, the S-representation is invariant
with respect to parameter deviations in the transfer matrix, originated for instance
by modelling or rounding errors.

2.
Theorem 3.19 ([144, 145]). The irreducible rational n × m matrix

    A(λ) = N(λ)/d(λ)                                               (3.28)

is normal, if and only if its numerator permits the representation

    N(λ) = P(λ)Q'(λ) + d(λ)G(λ)                                    (3.29)

with an n × 1 polynomial column P(λ), an m × 1 polynomial column Q(λ),
and an n × m polynomial matrix G(λ).

Proof. Sufficiency: Let us have

    P(λ) = [p1(λ); ...; pn(λ)],   Q(λ) = [q1(λ); ...; qm(λ)],   G(λ) = [g_ik(λ)]

with scalar polynomials p_i(λ), q_i(λ), g_ik(λ). The minor (3.11) of Matrix (3.29)
possesses the form

    N(i,j; k,ℓ) = det [ p_i(λ)q_k(λ) + d(λ)g_ik(λ)   p_i(λ)q_ℓ(λ) + d(λ)g_iℓ(λ) ]
                      [ p_j(λ)q_k(λ) + d(λ)g_jk(λ)   p_j(λ)q_ℓ(λ) + d(λ)g_jℓ(λ) ]
                = d(λ) n_{kℓ}^{ij}(λ),

where n_{kℓ}^{ij}(λ) is a polynomial. Therefore, an arbitrary minor of second order is
divisible by d(λ), and thus Matrix (3.28) is normal.
Necessity: Let Matrix (3.28) be normal. Then all minors of second order of its
numerator N(λ) are divisible by the denominator d(λ). Applying (3.7) and
(3.8), we find that the matrix N(λ) allows the representation

    N(λ) = p(λ) [ D(λ)        O_{ρ,m−ρ}   ] q(λ),
                [ O_{n−ρ,ρ}   O_{n−ρ,m−ρ} ]

    D(λ) = diag{ g1(λ), g1(λ)g2(λ)d(λ), ..., g1(λ)g2(λ)···g_ρ(λ)d(λ) },   (3.30)

where the g_i(λ), (i = 1, ..., ρ) are monic polynomials and p(λ), q(λ) are
unimodular matrices. Relation (3.30) can be arranged in the form

    N(λ) = N1(λ) + d(λ)N2(λ),                                      (3.31)

where

    N1(λ) = g1(λ) p(λ) [ 1          O_{1,m−1}   ] q(λ),            (3.32)
                       [ O_{n−1,1}  O_{n−1,m−1} ]

    N2(λ) = g1(λ)g2(λ) p(λ) [ diag{0, 1, g3(λ), ..., g3(λ)···g_ρ(λ)}  O_{ρ,m−ρ}   ] q(λ).   (3.33)
                            [ O_{n−ρ,ρ}                               O_{n−ρ,m−ρ} ]

Obviously, we have

    N1(λ) = g1(λ) P1(λ) Q1'(λ),                                    (3.34)

where P1(λ) is the first column of the matrix p(λ) and Q1'(λ) is the first row
of q(λ). Inserting (3.32)-(3.34) into (3.31), we arrive at the representation
(3.29), where for instance

    P(λ) = g1(λ)P1(λ),   Q'(λ) = Q1'(λ),   G(λ) = N2(λ)

can be used. □

3. Inserting (3.29) into (3.28) yields

    A(λ) = P(λ)Q'(λ)/d(λ) + G(λ).                                  (3.35)

The representation of a normal rational matrix in the form (3.35) is called
its structurally stable representation or S-representation. Notice that the S-
representation of a normal matrix is structurally stable (invariant) with respect
to variations of the vectors P(λ), Q(λ), the matrix G(λ) and of the polynomial
d(λ), because the essential structural specialities of Matrix (3.35) are preserved.

4. Assume

    P(λ) = d(λ)L1(λ) + P1(λ),
    Q(λ) = d(λ)L2(λ) + Q1(λ),

where

    deg P1(λ) < deg d(λ),   deg Q1(λ) < deg d(λ).                  (3.36)

Then from (3.29), we obtain

    N(λ) = P1(λ)Q1'(λ) + d(λ)G1(λ)

with

    G1(λ) = L1(λ)Q1'(λ) + P1(λ)L2'(λ) + d(λ)L1(λ)L2'(λ) + G(λ).

Altogether from (3.35), we get

    A(λ) = P1(λ)Q1'(λ)/d(λ) + G1(λ),                               (3.37)

where the vectors P1(λ) and Q1(λ) satisfy Relation (3.36). Representation
(3.37) is named the minimal S-representation of a normal matrix.
A minimal S-representation (3.37) also turns out to be structurally stable,
if the parameter variations do not violate Condition (3.36).

5. The proof of Theorem 3.19 has constructive character, so it yields a prac-
tical method for calculating the vectors P(λ), Q(λ) and the matrix G(λ),
which appear in the S-representations (3.35) and (3.37). However, at first the
matrix N(λ) has to be brought into the shape (3.30), which is normally connected
with extensive calculations. In constructing the S-representation of a normal
matrix, the next statement allows an essential simplification in most practical
cases.

Theorem 3.20. Let the numerator N(λ) of a normal matrix (3.28) be given
in the form

    N(λ) = g(λ) Φ(λ) [ α(λ)  1    ] Ψ(λ),                          (3.38)
                     [ L(λ)  ζ(λ) ]

where g(λ) is a scalar polynomial, which is equal to the GCD of the elements
of N(λ). The polynomial matrices Φ(λ), Ψ(λ) are unimodular and L(λ) is an
(n − 1) × (m − 1) polynomial matrix. Furthermore, let

    α(λ) = [α_m(λ) ... α_2(λ)],    ζ'(λ) = [ζ_2(λ) ... ζ_n(λ)]

be row vectors. Then

    L(λ) = ζ(λ)α(λ) + d(λ)G2(λ),                                   (3.39)

where G2(λ) is an (n − 1) × (m − 1) polynomial matrix. Hereby, we have

    N(λ) = g(λ)P(λ)Q'(λ) + g(λ)d(λ)G(λ),                           (3.40)

where

    P(λ) = Φ(λ) [ 1    ],    Q'(λ) = [α(λ)  1] Ψ(λ),               (3.41)
                [ ζ(λ) ]

and moreover

    G(λ) = Φ(λ) [ O_{1,m−1}  0         ] Ψ(λ).                     (3.42)
                [ G2(λ)      O_{n−1,1} ]

Doing so, Matrix (3.28) takes the S-representation

    A(λ) = g(λ)P(λ)Q'(λ)/d(λ) + g(λ)G(λ).                          (3.43)

Proof. Introduce the unimodular matrices

    Φ_N(λ) = [ 1      O_{1,n−1} ],    Ψ_N(λ) = [ O_{m−1,1}  I_{m−1} ].   (3.44)
             [ −ζ(λ)  I_{n−1}   ]              [ 1          −α(λ)   ]

By direct calculation, we obtain

    Φ_N^{-1}(λ) = [ 1     O_{1,n−1} ],    Ψ_N^{-1}(λ) = [ α(λ)     1         ].   (3.45)
                  [ ζ(λ)  I_{n−1}   ]                   [ I_{m−1}  O_{m−1,1} ]

Easily,

    Φ_N(λ) [ α(λ)  1    ] Ψ_N(λ) = [ 1          O_{1,m−1} ] = B(λ)       (3.46)
           [ L(λ)  ζ(λ) ]          [ O_{n−1,1}  R(λ)      ]

is established, where

    R(λ) = L(λ) − ζ(λ)α(λ).                                        (3.47)

The polynomials g(λ), d(λ) are coprime, otherwise the fraction (3.28) would
be reducible. Thereby, all minors of second order of the matrix

    N_g(λ) = Φ(λ) [ α(λ)  1    ] Ψ(λ)                              (3.48)
                  [ L(λ)  ζ(λ) ]

are divisible by d(λ). With regard to

    N_g(λ) = Φ(λ) Φ_N^{-1}(λ) B(λ) Ψ_N^{-1}(λ) Ψ(λ)

and the observation that the matrices Φ(λ)Φ_N^{-1}(λ) and Ψ_N^{-1}(λ)Ψ(λ) are uni-
modular, we realise that the matrices N_g(λ) and B(λ) are equivalent. Since
all minors of second order of the matrix B(λ) are divisible by d(λ), we get
immediately that all elements of the matrix R(λ) are also divisible by d(λ),
which runs into the equality

    R(λ) = L(λ) − ζ(λ)α(λ) = d(λ)G2(λ)

that is equivalent to (3.39). Inserting the relation

    L(λ) = ζ(λ)α(λ) + d(λ)G2(λ)

into (3.38), we get

    N(λ) = g(λ)Φ(λ) [ α(λ)                     1    ] Ψ(λ)
                    [ ζ(λ)α(λ) + d(λ)G2(λ)     ζ(λ) ]
                                                                   (3.49)
         = g(λ)Φ(λ) [ α(λ)       1    ] Ψ(λ) + g(λ)d(λ)Φ(λ) [ O_{1,m−1}  0         ] Ψ(λ).
                    [ ζ(λ)α(λ)   ζ(λ) ]                      [ G2(λ)      O_{n−1,1} ]

Using

    [ α(λ)       1    ] = [ 1    ] [ α(λ)  1 ],
    [ ζ(λ)α(λ)   ζ(λ) ]   [ ζ(λ) ]

we generate from (3.49) Formulae (3.40)-(3.42). Relation (3.43) is obtained by
substituting (3.49) into (3.28). □

Remark 3.21. Let

    Φ(λ) = [ϕ1(λ) ... ϕn(λ)],    Ψ'(λ) = [ψ1(λ) ... ψm(λ)],

where the ϕ_i(λ), (i = 1, ..., n) are the columns of Φ(λ) and the ψ_i'(λ),
(i = 1, ..., m) are the rows of Ψ(λ). Then from (3.41), it follows

    P(λ) = ϕ1(λ) + ϕ2(λ)ζ2(λ) + ... + ϕn(λ)ζn(λ),
    Q(λ) = ψ1(λ)α_m(λ) + ψ2(λ)α_{m−1}(λ) + ... + ψ_{m−1}(λ)α_2(λ) + ψ_m(λ).

Remark 3.22. Equation (3.42) delivers

    rank G(λ) ≤ min{n − 1, m − 1}.

6.
Example 3.23. Generate the S-representation of a normal matrix (3.28) with

    N(λ) = [ λ−1   2              1   ],    d(λ) = (λ − 1)(λ − 2).
           [ 0     (λ+1)(λ−2)    λ−2 ]

In the present case, the matrix A(λ) possesses only the two single poles λ1 = 1
and λ2 = 2. Hereby, we have

    N(λ1) = [ 0   2   1 ],    rank N(λ1) = 1,
            [ 0  −2  −1 ]

    N(λ2) = [ 1   2   1 ],    rank N(λ2) = 1,
            [ 0   0   0 ]

which is why the matrix A(λ), owing to Theorem 3.6, is normal. For the construction
of the S-representation, Theorem 3.20 is used. In the present case, we have

    g(λ) = 1,  Φ(λ) = I2,  Ψ(λ) = I3,
    ζ(λ) = λ − 2,  α(λ) = [λ − 1, 2],  L(λ) = [0, (λ + 1)(λ − 2)].

Applying (3.39), we produce

    L(λ) − ζ(λ)α(λ) = [−d(λ), d(λ)] = d(λ) [−1, 1],

and therefore

    G2(λ) = [−1, 1].

On the basis of (3.41), we find

    P(λ) = [ 1   ],    Q'(λ) = [λ − 1, 2, 1].
           [ λ−2 ]

Moreover, due to (3.42), we get

    G(λ) = [ 0   0   0 ].
           [ −1  1   0 ]

With these results, we obtain

    A(λ) = [ 1   ] [λ − 1, 2, 1]                [ 0   0   0 ]
           [ λ−2 ] ───────────────────────   +  [ −1  1   0 ].
                    (λ − 1)(λ − 2)

Regarding deg P(λ) = deg Q(λ) = 1, the generated S-representation is mini-
mal. □
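The computed S-representation can be verified symbolically. The following sketch (sympy, variable names of my choosing) checks the identity N(λ) = P(λ)Q'(λ) + d(λ)G(λ) for the data of Example 3.23:

```python
# Verification of the S-representation found in Example 3.23:
# N(s) must equal P(s)*Q'(s) + d(s)*G(s) entrywise.
import sympy as sp

s = sp.symbols('s')
N = sp.Matrix([[s - 1, 2, 1],
               [0, (s + 1)*(s - 2), s - 2]])
d = (s - 1)*(s - 2)

P = sp.Matrix([1, s - 2])          # column P(s)
Q = sp.Matrix([[s - 1, 2, 1]])     # row Q'(s)
G = sp.Matrix([[0, 0, 0],
               [-1, 1, 0]])

residual = sp.expand(N - (P*Q + d*G))
print(residual)   # -> zero 2x3 matrix
```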

3.5 Inverses of Characteristic Matrices of Jordan and
Frobenius Matrices

1. In this section, S-representations for matrices of the shape (λI_p − A)^{-1} = A_λ^{-1}
will be constructed, where A = J_p(a) is a Jordan block (1.76) or A =
A_F is a Frobenius matrix (1.94). In the first case, the matrix A_λ is called
the characteristic Jordan matrix, and in the second case the characteristic
Frobenius matrix.

2. Consider the upper Jordan block with the eigenvalue a

    J_p(a) = [ a  1  ...  0  0 ]
             [ 0  a  ...  0  0 ]
             [ ...             ]                                   (3.50)
             [ 0  0  ...  a  1 ]
             [ 0  0  ...  0  a ]
and the corresponding characteristic matrix

    J_p(λ, a) = λI_p − J_p(a).

Now, a direct calculation of the adjoint matrix yields

    adj J_p(λ, a) = [ (λ−a)^{p−1}  (λ−a)^{p−2}  ...  λ−a          1           ]
                    [ 0            (λ−a)^{p−1}  ...  (λ−a)²       λ−a         ]
                    [ ...                                                     ]   (3.51)
                    [ 0            0            ...  (λ−a)^{p−1}  (λ−a)^{p−2} ]
                    [ 0            0            ...  0            (λ−a)^{p−1} ].

Matrix (3.51) has the shape (3.38) with

    g(λ) = 1,   Φ(λ) = Ψ(λ) = I_p,
    α(λ) = [(λ−a)^{p−1}, ..., λ−a],
    ζ'(λ) = [λ−a, ..., (λ−a)^{p−1}].

Consistent with Theorem 3.20, we get

    adj J_p(λ, a) = P(λ)Q'(λ) + d(λ)G(λ),                          (3.52)

where

    P'(λ) = [1, ζ'(λ)],    Q'(λ) = [α(λ), 1]

and

    d(λ) = det J_p(λ, a) = (λ − a)^p.
For determining the polynomial matrix G(λ), note that

    P(λ)Q'(λ) = [ 1           ]
                [ λ−a         ] [(λ−a)^{p−1}, ..., λ−a, 1]
                [ ...         ]
                [ (λ−a)^{p−1} ]

              = [ (λ−a)^{p−1}   (λ−a)^{p−2}   ...  1           ]
                [ (λ−a)^p       (λ−a)^{p−1}   ...  λ−a         ]
                [ ...                                          ]
                [ (λ−a)^{2p−2}  (λ−a)^{2p−3}  ...  (λ−a)^{p−1} ].

As a result, we obtain

    adj J_p(λ, a) − P(λ)Q'(λ) = − [ 0              0              ...  0         0 ]
                                  [ (λ−a)^p        0              ...  0         0 ]
                                  [ (λ−a)^{p+1}    (λ−a)^p        ...  0         0 ]
                                  [ ...                                            ]
                                  [ (λ−a)^{2p−2}   (λ−a)^{2p−3}   ...  (λ−a)^p  0 ]

    = −(λ−a)^p [ 0            0            ...  0  0 ]
               [ 1            0            ...  0  0 ]
               [ λ−a          1            ...  0  0 ]
               [ ...                                 ]
               [ (λ−a)^{p−2}  (λ−a)^{p−3}  ...  1  0 ].

Substituting this into (3.52) yields

    G(λ) = − [ 0            0            ...  0  0 ]
             [ 1            0            ...  0  0 ]
             [ λ−a          1            ...  0  0 ]
             [ ...                                 ]
             [ (λ−a)^{p−2}  (λ−a)^{p−3}  ...  1  0 ],

and applying (3.43) delivers the S-representation

    J_p^{-1}(λ, a) = P(λ)Q'(λ)/(λ−a)^p + G(λ).

Since deg P(λ) < p and deg Q(λ) < p, the produced S-representation is mini-
mal.
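This construction can be checked symbolically for a fixed block size; the following sketch (sympy, p = 3, names my own) computes G as (adj J_p(λ,a) − P(λ)Q'(λ))/(λ−a)^p and confirms the resulting S-representation of the inverse:

```python
# Check of the S-representation of the characteristic Jordan matrix, p = 3:
# G = (adj(sI - J) - P*Q')/(s-a)^p is polynomial, strictly lower triangular
# with entries -(s-a)^(i-j-1).
import sympy as sp

s, a = sp.symbols('s a')
p = 3
J = sp.Matrix(p, p, lambda i, j: a if i == j else (1 if j == i + 1 else 0))
M = s*sp.eye(p) - J

P = sp.Matrix([(s - a)**i for i in range(p)])               # [1, s-a, (s-a)^2]'
Q = sp.Matrix([[(s - a)**(p - 1 - j) for j in range(p)]])   # [(s-a)^2, s-a, 1]

G = sp.simplify((M.adjugate() - P*Q)/(s - a)**p)
print(G)

# the S-representation (sI - J)^{-1} = P*Q'/(s-a)^p + G
assert sp.simplify(M.inv() - (P*Q/(s - a)**p + G)) == sp.zeros(p, p)
```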

3. Now consider the problem to find the S-representation for

    A_F^{-1}(λ) = (λI_p − A_F)^{-1},

where

    A_F = [ 0     1        0        ...  0  ]
          [ 0     0        1        ...  0  ]
          [ ...                             ]                      (3.53)
          [ 0     0        0        ...  1  ]
          [ −d_p  −d_{p−1}  −d_{p−2}  ...  −d_1 ]

is the lower Frobenius normal form of dimension p × p. Its characteristic matrix
obviously has the shape

    λI_p − A_F = [ λ     −1       0        ...  0      ]
                 [ 0     λ        −1       ...  0      ]
                 [ ...                                 ]           (3.54)
                 [ 0     0        0        ...  −1     ]
                 [ d_p   d_{p−1}  d_{p−2}  ...  λ + d_1 ].

The adjoint matrix for (3.54) is calculated by

    adj(λI_p − A_F) = [ d̃_1(λ)  d̃_2(λ)  ...  d̃_{p−1}(λ)  1       ]
                      [                                   λ       ]
                      [          L_F(λ)                   ...     ]   (3.55)
                      [                                   λ^{p−1} ],

i.e. the first row is [d̃_1(λ), ..., d̃_{p−1}(λ), 1], the last column is
[1; λ; ...; λ^{p−1}], and L_F(λ) fills the lower-left block. Here and in what
follows, we denote

    d(λ)       = λ^p + d_1 λ^{p−1} + ... + d_{p−1} λ + d_p,
    d̃_1(λ)     = λ^{p−1} + d_1 λ^{p−2} + ... + d_{p−2} λ + d_{p−1},
    d̃_2(λ)     = λ^{p−2} + d_1 λ^{p−3} + ... + d_{p−2},
    ...                                                            (3.56)
    d̃_{p−1}(λ) = λ + d_1,
    d̃_p(λ)     = 1,

and L_F(λ) is a certain (p − 1) × (p − 1) polynomial matrix. Relation (3.55)
with Theorem 3.20 implies

    adj(λI_p − A_F) = P_F(λ)Q_F'(λ) + d(λ)G_F(λ),                  (3.57)

where

    P_F'(λ) = [1, λ, ..., λ^{p−1}],
    Q_F'(λ) = [d̃_1(λ), ..., d̃_{p−1}(λ), 1].                       (3.58)

It remains to calculate the matrix G_F(λ). Denote

    adj(λI_p − A_F) = [a_ik(λ)],   P_F(λ)Q_F'(λ) = [b_ik(λ)],
    G_F(λ) = [g_ik(λ)],   (i, k = 1, ..., p),                      (3.59)

then from (3.57), we obtain

    b_ik(λ) = −d(λ)g_ik(λ) + a_ik(λ).

Per construction, deg d(λ) = p, deg a_ik(λ) ≤ p − 1. Bringing this face to face
with (1.6), we recognise that −g_ik(λ) is the integral part and a_ik(λ) is the
rest when dividing the polynomial b_ik(λ) by d(λ). Utilising (3.58), we arrive
at

    b_ik(λ) = λ^{i−1} d̃_k(λ).                                      (3.60)

Due to deg d̃_k(λ) = p − k, we obtain deg b_ik(λ) = p − k + i − 1. Thus, for k ≥ i,
we get g_ik(λ) = 0. Substituting i = k + ℓ, (ℓ = 1, ..., p − k) and taking into
account (3.60) and (3.56), we receive

    b_{k+ℓ,k}(λ) = λ^{ℓ−1} d(λ) + d̂_{kℓ}(λ),

where deg d̂_{kℓ}(λ) < p. From this, we read

    g_ik(λ) = −λ^{ℓ−1},   (i = k + ℓ;  ℓ = 1, ..., p − k).

Altogether, this leads to

    G_F(λ) = − [ 0        0        0        ...  0  0 ]
               [ 1        0        0        ...  0  0 ]
               [ λ        1        0        ...  0  0 ]            (3.61)
               [ ...                                  ]
               [ λ^{p−2}  λ^{p−3}  λ^{p−4}  ...  1  0 ]

and the wanted S-representation

    (λI_p − A_F)^{-1} = P_F(λ)Q_F'(λ)/d(λ) + G_F(λ).               (3.62)

Per construction, deg P_F(λ) < p, deg Q_F(λ) < p is valid, so the produced
S-representation (3.62) is minimal.
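The identity (3.57) and the shape of G_F can be verified for a fixed p; the following sketch (sympy, p = 3, symbolic coefficients d1, d2, d3) computes G_F directly from the adjugate:

```python
# Check of the S-representation (3.62) for p = 3: with the companion
# matrix A_F of d(s) = s^3 + d1*s^2 + d2*s + d3, the matrix
# G_F = (adj(sI - A_F) - P_F*Q_F')/d must be polynomial.
import sympy as sp

s, d1, d2, d3 = sp.symbols('s d1 d2 d3')
AF = sp.Matrix([[0, 1, 0],
                [0, 0, 1],
                [-d3, -d2, -d1]])
d = s**3 + d1*s**2 + d2*s + d3

PF = sp.Matrix([1, s, s**2])                       # [1, s, ..., s^{p-1}]'
QF = sp.Matrix([[s**2 + d1*s + d2, s + d1, 1]])    # [d~1(s), d~2(s), 1]

M = s*sp.eye(3) - AF
GF = sp.simplify((M.adjugate() - PF*QF)/d)
print(GF)   # -> Matrix([[0, 0, 0], [-1, 0, 0], [-s, -1, 0]])
```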

3.6 Construction of Simple Jordan Realisations

1. Suppose the strictly proper normal rational matrix

    A(λ) = N(λ)/d(λ),    deg d(λ) = p                              (3.63)

and let (A0, B0, C0) be one of its simple realisations. Then any simple realisa-
tion of the matrix A(λ) has the form (QA0Q^{-1}, QB0, C0Q^{-1}) with a certain
non-singular matrix Q. Keeping in mind that all simple matrices of the same
dimension with the same characteristic polynomial are similar, the matrix Q
can be selected in such a way that the matrix A1 = QA0Q^{-1} takes a prescribed
cyclic form. Especially, we can achieve A1 = J, where J is a Jordan matrix
(1.97), and every distinct root of the polynomial d(λ) is configured to exactly
one Jordan block. The corresponding simple realisation (J, BJ, CJ) is named
a Jordan realisation. But if we choose A1 = AF, where AF is a Frobenius
matrix (3.53), then the corresponding simple realisation (AF, BF, CF) is called
a Frobenius realisation. These two simple realisations are said to be canonical.
In this section the question is considered, how to produce a Jordan reali-
sation from a given normal rational matrix in S-representation.

2. Suppose the normal strictly proper rational matrix (3.63) is given in the
S-representation

    A(λ) = P(λ)Q'(λ)/d(λ) + G(λ).                                  (3.64)

Then the following theorem answers the question of how to construct
a simple Jordan realisation.
Theorem 3.24. Suppose the normal n × m matrix A(λ) in the S-representation
(3.64) with

    d(λ) = (λ − λ1)^{ℓ1} ··· (λ − λq)^{ℓq},    ℓ1 + ... + ℓq = p.  (3.65)

Then a simple Jordan realisation of the matrix A(λ) is attained by the follow-
ing steps:
1) For each j, (j = 1, ..., q) calculate the vectors

    P_jk = 1/(k−1)! · d^{k−1}P(λ)/dλ^{k−1} |_{λ=λj},
                                                                   (3.66)
    Q_jk = 1/(k−1)! · d^{k−1}/dλ^{k−1} [ Q'(λ)(λ − λj)^{ℓj}/d(λ) ] |_{λ=λj},
    (k = 1, ..., ℓj).

2) For each j, (j = 1, ..., q) build the matrices

    P_j = [P_{j1}, P_{j2}, ..., P_{j,ℓj}]        (n × ℓj),
                                                                   (3.67)
    Q̃_j = [Q_{j,ℓj}; Q_{j,ℓj−1}; ...; Q_{j1}]    (ℓj × m).

3) Put together the matrices

    P_J = [P_1, P_2, ..., P_q]       (n × p),
                                                                   (3.68)
    Q_J = [Q̃_1; Q̃_2; ...; Q̃_q]     (p × m).

4) Build the simple Jordan matrix J (1.88) according to the polynomial
(3.65). Then

    A(λ) = P_J (λI_p − J)^{-1} Q_J,                                (3.69)

and the realisation (J, Q_J, P_J) is a simple Jordan realisation.
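The four steps above can be sketched in sympy as follows (the function name and internal structure are my own, not the book's; the sketch assumes the roots of d(λ) can be found by `sympy.roots`):

```python
# Sketch of steps 1)-4) of Theorem 3.24.
import sympy as sp

def jordan_realisation(P, Q, d, s):
    """P: n x 1 column P(s); Q: 1 x m row Q'(s); d: monic denominator.
    Returns (J, QJ, PJ) with A(s) = PJ*(s*I - J)^{-1}*QJ for a strictly
    proper normal A(s) = P*Q'/d + G."""
    roots = sp.roots(sp.Poly(d, s))              # {lambda_j: l_j}, cf. (3.65)
    PJ_cols, QJ_rows, J_blocks = [], [], []
    for lam, l in roots.items():
        # row Q_j'(s) = Q'(s)*(s - lam)^l/d(s), analytic at s = lam
        Qj = (Q*(s - lam)**l/d).applyfunc(sp.cancel)
        Pk, Qk, Pjk, Qjk = P, Qj, [], []
        for k in range(l):                       # Taylor coefficients (3.66)
            Pjk.append(Pk.subs(s, lam)/sp.factorial(k))
            Qjk.append(Qk.subs(s, lam)/sp.factorial(k))
            Pk, Qk = Pk.diff(s), Qk.diff(s)
        PJ_cols += Pjk                           # (3.67): P_j = [P_j1 ... P_jl]
        QJ_rows += Qjk[::-1]                     # (3.67): rows Q_jl, ..., Q_j1
        J_blocks.append(sp.Matrix(l, l, lambda i, j: lam if i == j
                                  else (1 if j == i + 1 else 0)))
    return (sp.diag(*J_blocks), sp.Matrix.vstack(*QJ_rows),
            sp.Matrix.hstack(*PJ_cols))
```

Applied to the data of Example 3.28 below, this sketch reproduces the matrices J, Q_J, P_J obtained there by hand.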

The proof of Theorem 3.24 is prepared by two lemmata.

Lemma 3.25. Let J_ℓ(a) be an upper Jordan block (3.50), and J_ℓ(λ, a) be its
corresponding characteristic matrix. Introduce for fixed ℓ the matrices
H_{ℓi}, (i = 0, ..., ℓ−1) of the following shape:

    H_{ℓ0} = I_ℓ,   H_{ℓ1} = [ 0 1 0 ... 0 ]   ...,   H_{ℓ,ℓ−1} = [ O_{1,ℓ−1}    1         ].
                             [ 0 0 1 ... 0 ]                      [ O_{ℓ−1,ℓ−1}  O_{ℓ−1,1} ]
                             [ ...         ]
                             [ 0 0 0 ... 1 ]                                      (3.70)
                             [ 0 0 0 ... 0 ]

Then,

    adj[λI_ℓ − J_ℓ(a)]
    = (λ−a)^{ℓ−1}H_{ℓ0} + (λ−a)^{ℓ−2}H_{ℓ1} + ... + (λ−a)H_{ℓ,ℓ−2} + H_{ℓ,ℓ−1}.   (3.71)

Proof. The proof deduces immediately from (3.51) and (3.70). □

Lemma 3.26. Assume the constant n × ℓ and ℓ × m matrices U and V with

    U = [u1, ..., uℓ],    V = [v_ℓ; ...; v1],

where the u_i are columns and the v_i rows, (i = 1, ..., ℓ). Then the equa-
tion

    U adj[λI_ℓ − J_ℓ(a)] V = L1 + (λ−a)L2 + ... + (λ−a)^{ℓ−1}L_ℓ   (3.72)

is true, where

    L1 = u1v1,
    L2 = u1v2 + u2v1,
    ...                                                            (3.73)
    L_ℓ = u1v_ℓ + u2v_{ℓ−1} + ... + u_ℓv1 = UV.

Proof. The proof follows directly by inserting (3.71) and (3.70) into the left
side of (3.72). □
Remark 3.27. Concluding in the reverse direction, it comes out that, under
assumption (3.73), the right side of Relation (3.72) is equal to the left one.

Proof (of Theorem 3.24). From (3.64), we obtain

    A(λ) = N(λ)/d(λ),                                              (3.74)

where

    N(λ) = P(λ)Q'(λ) + d(λ)G(λ).                                   (3.75)

Since Matrix (3.74) is strictly proper, it can be developed into partial fractions
(2.98). Applying (3.65) and (2.96)-(2.97), this expansion can be expressed in
the form

    A(λ) = Σ_{j=1}^{q} A_j(λ),                                     (3.76)

where

    A_j(λ) = A_{j1}/(λ−λ_j)^{ℓ_j} + A_{j2}/(λ−λ_j)^{ℓ_j−1} + ... + A_{j,ℓ_j}/(λ−λ_j),
    (j = 1, ..., q).                                               (3.77)

The constant matrices A_{jk}, (k = 1, ..., ℓ_j) appearing in (3.77) are determined
by the Taylor expansion at the point λ = λ_j:

    N(λ)(λ−λ_j)^{ℓ_j}/d(λ)
    = A_{j1} + (λ−λ_j)A_{j2} + ... + (λ−λ_j)^{ℓ_j−1}A_{j,ℓ_j} + (λ−λ_j)^{ℓ_j}R_j(λ),   (3.78)

where R_j(λ) is a rational matrix that is analytical in the point λ = λ_j.
Utilising (3.74), (3.75), we can write

    N(λ)(λ−λ_j)^{ℓ_j}/d(λ) = P(λ)Q_j'(λ) + (λ−λ_j)^{ℓ_j}G(λ),     (3.79)

where

    Q_j'(λ) = Q'(λ)/d_j(λ),    d_j(λ) = d(λ)/(λ−λ_j)^{ℓ_j}.

Conformable with (3.78), for the determination of the matrices A_{jk}, (k =
1, ..., ℓ_j), we have to find the first ℓ_j terms of the Taylor series of the right
side of (3.79). Obviously, the matrices A_{jk}, (k = 1, ..., ℓ_j)
do not depend on the matrix G(λ).
Near the point λ = λ_j, suppose the developments

    P(λ) = P_{j1} + (λ−λ_j)P_{j2} + ... + (λ−λ_j)^{ℓ_j−1}P_{j,ℓ_j} + ...,
    Q_j'(λ) = Q_{j1} + (λ−λ_j)Q_{j2} + ... + (λ−λ_j)^{ℓ_j−1}Q_{j,ℓ_j} + ...,

where the vectors P_{jk} and Q_{jk} are determined by (3.66). Then we get

    P(λ)Q_j'(λ) = P_{j1}Q_{j1} + (λ−λ_j)(P_{j1}Q_{j2} + P_{j2}Q_{j1})
                + (λ−λ_j)²(P_{j1}Q_{j3} + P_{j2}Q_{j2} + P_{j3}Q_{j1}) + ... .

Comparing this with (3.78) delivers

    A_{j1} = P_{j1}Q_{j1},
    A_{j2} = P_{j1}Q_{j2} + P_{j2}Q_{j1},
    ...
    A_{j,ℓ_j} = P_{j1}Q_{j,ℓ_j} + P_{j2}Q_{j,ℓ_j−1} + ... + P_{j,ℓ_j}Q_{j1}.

Substituting this into (3.77) leads to

    A_j(λ) = (λ−λ_j)^{−ℓ_j} [A_{j1} + (λ−λ_j)A_{j2} + ... + (λ−λ_j)^{ℓ_j−1}A_{j,ℓ_j}]
           = (λ−λ_j)^{−ℓ_j} [P_{j1}Q_{j1} + (λ−λ_j)(P_{j1}Q_{j2} + P_{j2}Q_{j1}) + ...
             + (λ−λ_j)^{ℓ_j−1}(P_{j1}Q_{j,ℓ_j} + ... + P_{j,ℓ_j}Q_{j1})].

Taking into account Remark 3.27, we obtain from the last expression

    A_j(λ) = (λ−λ_j)^{−ℓ_j} P_j adj[λI_{ℓ_j} − J_{ℓ_j}(λ_j)] Q̃_j
           = P_j [λI_{ℓ_j} − J_{ℓ_j}(λ_j)]^{−1} Q̃_j,

where the matrices P_j, Q̃_j are committed by (3.67). From the last equations
and (3.76), it follows

    A(λ) = Σ_{j=1}^{q} P_j [λI_{ℓ_j} − J_{ℓ_j}(λ_j)]^{−1} Q̃_j = P_J (λI_p − J)^{−1} Q_J,

where P_J, Q_J are the matrices (3.68) and

    λI_p − J = diag{ λI_{ℓ_1} − J_{ℓ_1}(λ_1), λI_{ℓ_2} − J_{ℓ_2}(λ_2), ..., λI_{ℓ_q} − J_{ℓ_q}(λ_q) },

which is equivalent to Formula (3.69). Since in the present case the p × p
matrix J possesses the minimal possible dimension, the realisation (J, B_J, C_J)
with B_J = Q_J, C_J = P_J is minimal. Therefore, the pair (J, B_J) is controllable,
and the pair [J, C_J] is observable. Finally, per construction, the matrix J is
cyclic. Hence (J, B_J, C_J) is a simple Jordan realisation of the matrix A(λ). □

3.
Example 3.28. Find the Jordan realisation of the strictly proper normal
matrix

    A(λ) = [ (λ−1)²   1   ]
           [ 0       λ−2  ] / ((λ−1)²(λ−2)).                       (3.80)

Using the notation in Section 3.4,

    g(λ) = 1,  Φ(λ) = Ψ(λ) = I2,  ζ(λ) = λ−2,  α(λ) = (λ−1)²,
    d(λ) = (λ−1)²(λ−2),  L(λ) = 0,  λ1 = 1,  λ2 = 2

is performed, and applying (3.41) yields

    P(λ) = [ 1   ],    Q'(λ) = [(λ−1)², 1].
           [ λ−2 ]

For constructing a simple Jordan realisation, we have to find the vectors (3.66).
For the root λ1 = 1, we introduce the notation

    P_1(λ) = P(λ) = [ 1   ],    Q_1'(λ) = Q'(λ)/(λ−2) = [(λ−1)²/(λ−2), 1/(λ−2)].
                    [ λ−2 ]

Using (3.66), we obtain

    P_11 = P(λ)|_{λ=1} = [ 1  ],    P_12 = dP(λ)/dλ|_{λ=1} = [ 0 ],
                         [ −1 ]                              [ 1 ]

and furthermore

    Q_11 = Q_1'(λ)|_{λ=1} = [0, −1],    Q_12 = dQ_1'(λ)/dλ|_{λ=1} = [0, −1].

Then (3.67) ensures

    P_1 = [ 1   0 ],    Q̃_1 = [ 0  −1 ].
          [ −1  1 ]           [ 0  −1 ]

For the single root λ2 = 2, we denote

    P_2(λ) = P(λ) = [ 1   ],    Q_2'(λ) = Q'(λ)/(λ−1)² = [1, 1/(λ−1)²]
                    [ λ−2 ]

and for λ = 2, we get

    P_21 = P(λ)|_{λ=2} = [ 1 ],    Q_21 = Q_2'(λ)|_{λ=2} = [1, 1].
                         [ 0 ]

Applying (3.68) yields

    P_J = [ 1   0  1 ],    Q_J = [ 0  −1 ].
          [ −1  1  0 ]           [ 0  −1 ]
                                 [ 1   1 ]

Thus the simple Jordan realisation of Matrix (3.80) possesses the shape
(J, B_J, C_J) with B_J = Q_J, C_J = P_J and

    J = [ 1  1  0 ]
        [ 0  1  0 ]
        [ 0  0  2 ].   □
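The realisation obtained in Example 3.28 can be verified directly via (3.69); the following sketch (sympy) checks that the constructed triple reproduces A(λ):

```python
# Verification of Example 3.28: the constructed Jordan realisation must
# reproduce A(s) via A(s) = PJ*(s*I - J)^{-1}*QJ, cf. (3.69).
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[(s - 1)**2, 1],
               [0, s - 2]]) / ((s - 1)**2*(s - 2))

J = sp.Matrix([[1, 1, 0],
               [0, 1, 0],
               [0, 0, 2]])
PJ = sp.Matrix([[1, 0, 1],
                [-1, 1, 0]])
QJ = sp.Matrix([[0, -1],
                [0, -1],
                [1, 1]])

residual = sp.simplify(PJ*(s*sp.eye(3) - J).inv()*QJ - A)
print(residual)   # -> zero 2x2 matrix
```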
3.7 Construction of Simple Frobenius Realisations

1. For constructing a simple Jordan realisation, the roots of the polynomial
d(λ) have to be calculated, and this task can be connected with serious
numerical problems. From this point of view, it is much easier to produce the
realisation (AF, BF, CF), where the matrix AF has the Frobenius normal form
(3.53), which is the accompanying matrix for the polynomial d(λ)
in (3.56). The assigned characteristic matrix of AF has the shape (3.54), and
the S-representation of the matrix (λI_p − AF)^{-1} is determined by Relation
(3.62). For a given realisation (AF, BF, CF), the transfer matrix A(λ) has the
shape (3.63) with

    N(λ) = C_F adj(λI_p − A_F) B_F.

Taking advantage of (3.57), the last equation can be represented in the
form

    N(λ) = P(λ)Q'(λ) + d(λ)G(λ)                                    (3.81)

with

    P(λ) = C_F [1; λ; ...; λ^{p−1}],
    Q'(λ) = [d̃_1(λ), ..., d̃_{p−1}(λ), 1] B_F,                     (3.82)
    G(λ) = C_F G_F(λ) B_F.

Inserting (3.81) and (3.82) into (3.63), we get the wanted S-representation.
Per construction, Relation (3.36) is fulfilled, i.e. the obtained S-representation
is minimal. Therefore, Formulae (3.81), (3.82) bring forth the possibility of a di-
rect transfer from the Frobenius realisation to the corresponding minimal
S-representation of its transfer matrix.

2.
Example 3.29. Assume the Frobenius realisation with

    A_F = [ 0   1   0 ],    B_F = [ 1  0 ],    C_F = [ 1  2  0 ].   (3.83)
          [ 0   0   1 ]           [ 2  3 ]           [ 3  1  1 ]
          [ −2  −1  −1 ]          [ 0  1 ]

Here A_F is the accompanying matrix for the polynomial

    d(λ) = λ³ + λ² + λ + 2,

so we get the coefficients d1 = 1, d2 = 1, d3 = 2. In this case, the polynomials
(3.56) have the shape

    d̃_1(λ) = λ² + λ + 1,    d̃_2(λ) = λ + 1.

Hence, recalling (3.58) for the considered example, we receive

    P_F'(λ) = [1, λ, λ²],    Q_F'(λ) = [λ² + λ + 1, λ + 1, 1].

Then using (3.82), we obtain

    P(λ) = C_F [ 1  ] = [ 2λ + 1     ],
               [ λ  ]   [ λ² + λ + 3 ]
               [ λ² ]
                                                                   (3.84)
    Q'(λ) = [λ² + λ + 1, λ + 1, 1] B_F = [λ² + 3λ + 3, 3λ + 4].

Moreover, a direct calculation with the help of (3.61) yields

    G(λ) = C_F G_F(λ) B_F = − [ 1  2  0 ] [ 0  0  0 ] [ 1  0 ] = − [ 2     0 ],
                              [ 3  1  1 ] [ 1  0  0 ] [ 2  3 ]     [ λ+3   3 ]
                                          [ λ  1  0 ] [ 0  1 ]

which together with (3.84) gives the result

    A(λ) = P(λ)Q'(λ)/d(λ) + G(λ)

         = [ 2λ + 1     ] [λ² + 3λ + 3, 3λ + 4]
           [ λ² + λ + 3 ] ──────────────────────   −  [ 2     0 ].   □
                            λ³ + λ² + λ + 2           [ λ+3   3 ]
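The passage from the Frobenius realisation to the minimal S-representation can be verified symbolically; the following sketch (sympy) checks Example 3.29:

```python
# Verification of Example 3.29: the transfer matrix of the Frobenius
# realisation equals the minimal S-representation P*Q'/d + G.
import sympy as sp

s = sp.symbols('s')
AF = sp.Matrix([[0, 1, 0], [0, 0, 1], [-2, -1, -1]])
BF = sp.Matrix([[1, 0], [2, 3], [0, 1]])
CF = sp.Matrix([[1, 2, 0], [3, 1, 1]])

d = s**3 + s**2 + s + 2
P = sp.Matrix([2*s + 1, s**2 + s + 3])
Q = sp.Matrix([[s**2 + 3*s + 3, 3*s + 4]])
G = -sp.Matrix([[2, 0], [s + 3, 3]])

W = CF*(s*sp.eye(3) - AF).inv()*BF          # transfer matrix of (AF, BF, CF)
residual = sp.simplify(W - (P*Q/d + G))
print(residual)   # -> zero 2x2 matrix
```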


3. It is remarkable that there exists a rather easy way from a minimal S-
representation (3.64) to the matrix A(λ) of the simple Frobenius realisation.

Theorem 3.30. Let the strictly proper normal n×m matrix A(λ) be given by
the minimal S-representation (3.64), where

    d(λ) = λ^s + d1 λ^{s−1} + ... + ds .                               (3.85)

Since the S-representation (3.64) is minimal, it follows that

    P(λ) = N1 + N2 λ + ... + Ns λ^{s−1} ,
                                                                       (3.86)
    Q(λ) = M1 + M2 λ + ... + Ms λ^{s−1} ,

where the Ni, Mi, (i = 1,...,s) are constant vectors of dimensions n×1 and
m×1, respectively. Introduce the columns B1,...,Bs recursively by

    B1 = Ms ,
    B2 = M_{s−1} − d1 B1 ,
    B3 = M_{s−2} − d1 B2 − d2 B1 ,
    ...                                                                (3.87)
    Bs = M1 − d1 B_{s−1} − d2 B_{s−2} − ... − d_{s−1} B1 .
134 3 Normal Rational Matrices

With account of (3.86) and (3.87), build the matrices

    C_F = [N1  N2  ...  Ns],   B_F = [B1'; B2'; ...; Bs'] .            (3.88)

Then the matrix

    Ā(λ) = C_F (λI_s − A_F)^{−1} B_F ,                                 (3.89)

where A_F is the accompanying Frobenius matrix of the polynomial (3.85), de-
fines the minimal standard realisation of the matrix A(λ); that means, the re-
alisation (A_F, B_F, C_F) is the simple Frobenius realisation of the matrix A(λ).

Proof. Using (3.62), we obtain from (3.89)

    Ā(λ) = C_F P_F(λ)Q_F(λ)B_F / d(λ) + C_F G_F(λ)B_F .                (3.90)

From (3.88) and (3.58), we get

    C_F P_F(λ) = N1 + N2 λ + ... + Ns λ^{s−1} = P(λ) ,                 (3.91)

and also

    Q_F(λ)B_F = d1(λ)B1' + d2(λ)B2' + ... + d_{s−1}(λ)B'_{s−1} + Bs' ,

where d1(λ),...,d_{s−1}(λ) are the polynomials (3.56). Substituting (3.56) into
the last equation, we find

    Q_F(λ)B_F = (λ^{s−1} + d1 λ^{s−2} + ... + d_{s−1})B1' + ... + (λ + d1)B'_{s−1} + Bs'
                                                                       (3.92)
              = λ^{s−1} B1' + λ^{s−2}(d1 B1' + B2') + ... + (d_{s−1}B1' + d_{s−2}B2' + ... + Bs') ,

such that from (3.87), it follows

    Ms = B1 ,
    M_{s−1} = B2 + d1 B1 ,
    M_{s−2} = B3 + d1 B2 + d2 B1 ,
    ...                                                                (3.93)
    M1 = Bs + d1 B_{s−1} + d2 B_{s−2} + ... + d_{s−1} B1 ,

and from (3.92) with (3.86), we find

    Q_F(λ)B_F = λ^{s−1} Ms' + ... + λ M2' + M1' = Q'(λ) .              (3.94)

Finally, by virtue of this and (3.91), we produce from (3.90)


    Ā(λ) = P(λ)Q'(λ)/d(λ) + C_F G_F(λ)B_F .                            (3.95)

Comparing this expression with (3.64) and paying attention to the fact that
the matrices A(λ) and Ā(λ) are strictly proper, while the matrices G(λ) and
C_F G_F(λ)B_F are polynomial matrices, we obtain

    Ā(λ) = A(λ) .

Example 3.31. Under the conditions of Example 3.29, we obtain

    P(λ) = [1; 3] + [2; 1]λ + [0; 1]λ² ,

so that with regard to (3.86), we configure

    N1 = [1; 3],   N2 = [2; 1],   N3 = [0; 1] ,

which agrees with (3.84). Moreover, (3.84) yields

    Q(λ) = [3; 4] + [3; 3]λ + [1; 0]λ²

and thus

    M1 = [3; 4],   M2 = [3; 3],   M3 = [1; 0] .

Applying (3.87), we obtain

    B1' = M3' = [1  0] ,
    B2' = M2' − d1 B1' = [3  3] − [1  0] = [2  3] ,
    B3' = M1' − d1 B2' − d2 B1' = [3  4] − [2  3] − [1  0] = [0  1] ,

which with respect to (3.88) can be written as

    B_F = [B1'; B2'; B3'] = [1 0; 2 3; 0 1] .

This result is again consistent with (3.83).
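The recursion (3.87) is easy to mechanise; the following sketch (the function name is ours) reproduces the rows of B_F in Example 3.31 from the vectors M_i and the coefficients d_i:

```python
# Sketch of the recursion (3.87): build the rows B_1,...,B_s of B_F from the
# coefficient vectors M_1,...,M_s of Q(λ) and the coefficients d_i of d(λ).

def frobenius_b_rows(M, d):
    """M = [M_1,...,M_s] (lists), d = [d_1,...,d_s]; returns [B_1,...,B_s]."""
    s = len(M)
    B = []
    for k in range(s):               # B_{k+1} = M_{s-k} - sum_i d_i * B_{k+1-i}
        acc = list(M[s - 1 - k])
        for i in range(1, k + 1):
            acc = [a - d[i - 1] * b for a, b in zip(acc, B[k - i])]
        B.append(acc)
    return B

# data of Example 3.31: d(λ) = λ³ + λ² + λ + 2, Q(λ) = M1 + M2 λ + M3 λ²
M = [[3, 4], [3, 3], [1, 0]]
d = [1, 1, 2]
print(frobenius_b_rows(M, d))   # -> [[1, 0], [2, 3], [0, 1]], the rows of B_F in (3.83)
```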


Note that formulae analogous to (3.87), (3.93) for realisations with vertical
Frobenius matrices were derived in a different way in [165].

Remark 3.32. It is important that Formulae (3.87), (3.93) depend only on the
coefficients of the characteristic polynomial (3.85), and not on its roots; that
is why the practical handling of these formulae is numerically less critical.

3.8 Construction of S-representations from Simple
Realisations. General Case

1. If a simple realisation (A, B, C) of a normal rational n×m transfer
matrix A(λ) is known, then the corresponding S-representation of A(λ) can
be built on the basis of the general considerations in Section 3.4. Indeed, let
the simple realisation (A, B, C) be given, so that

    A(λ) = C adj(λI_p − A)B / det(λI_p − A) = N(λ)/d_A(λ)              (3.96)

is valid, and by equivalence transformations, the representation

    adj(λI_p − A) = Φ(λ) [γ(λ)  1; L(λ)  β(λ)] Ψ(λ)

can be generated with unimodular matrices Φ(λ), Ψ(λ). Then for constructing
the S-representation, Theorem 3.20 is applicable. Using Theorem 3.20, the last
equation yields

    adj(λI_p − A) = P_a(λ)Q_a'(λ) + d_A(λ)G_a(λ) ,

which leads to

    C adj(λI_p − A)B = P(λ)Q'(λ) + d_A(λ)G(λ) ,

where

    P(λ) = C P_a(λ),   Q(λ) = B' Q_a(λ),   G(λ) = C G_a(λ)B .

The last relation proves to be an S-representation of the matrix (3.96),

    A(λ) = P(λ)Q'(λ)/d_A(λ) + G(λ) ,

which can easily be transformed into a minimal S-representation.

2. For calculating the adjoint matrix adj(λI_p − A), we can benefit from some
general relations in [51]. Assume

    d_A(λ) = λ^p − q1 λ^{p−1} − ... − q_p .

Then the adjoint matrix adj(λI_p − A) is determined by the formula

    adj(λI_p − A) = λ^{p−1} I_p + λ^{p−2} F1 + ... + F_{p−1} ,         (3.97)

where

    F1 = A − q1 I_p ,   F2 = A² − q1 A − q2 I_p ,   ...

or generally

    F_k = A^k − q1 A^{k−1} − ... − q_k I_p .

The matrices F1,...,F_{p−1} can be calculated successively by the recursion

    F_k = A F_{k−1} − q_k I_p ,   (k = 1, 2, ..., p−1;  F0 = I_p) .

After this, the solution can be checked by the equation

    A F_{p−1} − q_p I_p = O_{pp} .
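The recursion above can be combined with the classical Faddeev–LeVerrier trace formula q_k = tr(A F_{k−1})/k (an addition of ours, not stated in the text), so that a single loop produces both the coefficients q_k and the matrices F_k:

```python
# F_k = A F_{k-1} - q_k I together with q_k = tr(A F_{k-1})/k
# (Faddeev-LeVerrier) yields adj(λI - A) = λ^{p-1} I + λ^{p-2} F_1 + ... + F_{p-1}.

def adj_coefficients(A):
    p = len(A)
    I = [[float(i == j) for j in range(p)] for i in range(p)]
    F, Fs, q = I, [I], []
    for k in range(1, p + 1):
        AF = [[sum(A[i][t] * F[t][j] for t in range(p)) for j in range(p)]
              for i in range(p)]
        qk = sum(AF[i][i] for i in range(p)) / k
        q.append(qk)
        F = [[AF[i][j] - qk * I[i][j] for j in range(p)] for i in range(p)]
        if k < p:
            Fs.append(F)
    # the final F is A F_{p-1} - q_p I, which must vanish (the check above)
    assert all(abs(x) < 1e-9 for row in F for x in row)
    return Fs, q

# matrix A of Example 3.33 below: d_A(λ) = λ² - λ, F_1 = A - I
Fs, q = adj_coefficients([[0, 1], [0, 1]])
print(q)       # -> [1.0, 0.0]
print(Fs[1])   # -> [[-1.0, 1.0], [0.0, 0.0]]
```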

3.
Example 3.33. Suppose the simple realisation (A, B, C) with

    A = [0 1; 0 1],   B = [0 1; 1 1],   C = [1 0; 1 1] .               (3.98)

In the present case, we have

    d_A(λ) = λ² − λ ,

so we read p = 2 and q1 = 1, q2 = 0. Using (3.97), we find

    adj(λI₂ − A) = λ I₂ + F1

with

    F1 = A − I₂ = [−1 1; 0 0] .

Formula (3.97) delivers

    adj(λI₂ − A) = [λ−1  1; 0  λ] ,

which is easily confirmed by direct calculation. Applying Theorem 3.20, we get

    adj(λI₂ − A) = P(λ)Q'(λ) + d_A(λ)G(λ) ,

where

    P(λ) = [1; λ],   Q(λ) = [λ−1; 1],   G(λ) = [0 0; −1 0] .

Therefore, the S-representation of the matrix A(λ) for the realisation (3.98)
takes the shape

    A(λ) = [1; λ+1][1   λ] / (λ² − λ) + [0 0; 0 −1] .                  (3.99)

The obtained S-representation is minimal.

3.9 Construction of Complete MFDs for Normal
Matrices

1. Let the normal matrix in standard form (2.21)

    A(λ) = N(λ)/d(λ)                                                   (3.100)

be given with deg d(λ) = p. Then in accordance with Section 3.1, Matrix
(3.100) allows the irreducible complete MFD

    A(λ) = a_l^{−1}(λ)b_l(λ) = b_r(λ)a_r^{−1}(λ)                       (3.101)

for which

    ord a_l(λ) = ord a_r(λ) = Mdeg A(λ) = p .

In principle, for building a complete MFD (3.101), the general methods from
Section 2.4 can be applied. However, with respect to numerical effort and
numerical stability, essentially more effective methods can be developed when
we profit from the special structure of normal matrices while constructing
complete MFDs.

2.
Theorem 3.34. Let the numerator of Matrix (3.100) be brought into the form
(3.38)

    N(λ) = g(λ)Φ(λ) [γ(λ)  1; L(λ)  β(λ)] Ψ(λ) .                       (3.102)

Then the pair of matrices

    a_l(λ) = [d(λ)  O_{1,n−1}; −β(λ)  I_{n−1}] Φ^{−1}(λ),   b_l(λ) = a_l(λ)A(λ)   (3.103)

proves to be a complete LMFD, and the pair of matrices

    a_r(λ) = Ψ^{−1}(λ) [O_{m−1,1}  I_{m−1}; d(λ)  −γ(λ)],   b_r(λ) = A(λ)a_r(λ)

is a complete RMFD of Matrix (3.100).

Proof. Applying Relations (3.44)–(3.49), Matrix (3.102) is represented in the
form

    N(λ) = g(λ)Φ(λ) [1  O_{1,n−1}; β(λ)  I_{n−1}] [1  O_{1,m−1}; O_{n−1,1}  d(λ)G₂(λ)] [γ(λ)  1; I_{m−1}  O_{m−1,1}] Ψ(λ) ,

and with respect to (3.100), we get

    A(λ) = g(λ)Φ(λ) [1  O_{1,n−1}; β(λ)  I_{n−1}] [1/d(λ)  O_{1,m−1}; O_{n−1,1}  G₂(λ)] [γ(λ)  1; I_{m−1}  O_{m−1,1}] Ψ(λ) .

Multiplying this from the left with the matrix a_l(λ) in (3.103), and considering

    [d(λ)  O_{1,n−1}; −β(λ)  I_{n−1}] [1  O_{1,n−1}; β(λ)  I_{n−1}] = [d(λ)  O_{1,n−1}; O_{n−1,1}  I_{n−1}] ,

we find out that the product

    a_l(λ)A(λ) = g(λ) [1  O_{1,m−1}; O_{n−1,1}  G₂(λ)] [γ(λ)  1; I_{m−1}  O_{m−1,1}] Ψ(λ)
                                                                       (3.104)
               = g(λ) [γ(λ)  1; G₂(λ)  O_{n−1,1}] Ψ(λ) = b_l(λ)

proves to be a polynomial matrix. By construction, we have det a_l(λ) ∼ d(λ)
and ord a_l(λ) = deg d(λ); that is why the LMFD is complete. For a right MFD,
the proof runs analogously.

3.
Example 3.35. Let us have a normal matrix (3.100) with

    N(λ) = [−λ   −2λ+3; −λ²−λ−1   −2λ²+5],   d(λ) = λ² − 4λ + 3 .

In this case, we find

    N(λ) = [0  1; −1  λ+1] [λ−2  1; −2λ+3  −λ] [0  1; 1  0] ,

so we get

    Φ(λ) = [0  1; −1  λ+1],   Ψ(λ) = [0  1; 1  0],   g(λ) = 1 ,

and with respect to (3.103), (3.104), we obtain immediately

    a_l(λ) = [d(λ)  0; λ  1] [λ+1  −1; 1  0] = [d(λ)(λ+1)   −d(λ); λ²+λ+1   −λ] ,

    b_l(λ) = a_l(λ)A(λ) = [1  λ−2; 0  1] .

In the present case, we have deg a_l(λ) = 3. The degree of the matrix a_l(λ)
can be decreased if we build the row-reduced form. The extended matrix
according to the above pair has the shape

    R_h(λ) = [λ³−3λ²−λ+3   −λ²+4λ−3   1   λ−2; λ²+λ+1   −λ   0   1] .

Multiplying the matrix R_h(λ) from the left with the unimodular matrix

    Θ(λ) = [1   −λ+4; −0.5λ   0.5λ²−2λ+1] ,

we arrive at

    Θ(λ)R_h(λ) = [2λ+7   −3   1   2; −2.5λ+1   0.5λ   −0.5λ   −λ+1] .

This matrix corresponds to the complete LMFD, where

    a_l(λ) = [2λ+7   −3; −2.5λ+1   0.5λ],   b_l(λ) = [1   2; −0.5λ   −λ+1]

and the matrix a_l(λ) is row-reduced. Thus deg a_l(λ) = 1, and this degree
cannot be decreased.

4. For a known S-representation of the normal matrix

    A(λ) = P(λ)Q'(λ)/d(λ) + G(λ) ,                                     (3.105)

a complete MFD can be built by the following theorem.

Theorem 3.36. Suppose the ILMFD and IRMFD

    P(λ)/d(λ) = a_l^{−1}(λ)b_l(λ),   Q'(λ)/d(λ) = b_r(λ)a_r^{−1}(λ) .  (3.106)

Then the expressions

    A(λ) = a_l^{−1}(λ)[b_l(λ)Q'(λ) + a_l(λ)G(λ)] = a_l^{−1}(λ)b̄_l(λ) ,
                                                                       (3.107)
    A(λ) = [P(λ)b_r(λ) + G(λ)a_r(λ)]a_r^{−1}(λ) = b̄_r(λ)a_r^{−1}(λ)

define a complete MFD of Matrix (3.105).

Proof. Due to Remark 3.4, the matrix P(λ)/d(λ) is normal. Therefore, for
the ILMFD (3.106), det a_l(λ) ∼ d(λ) is true, and the first row in (3.107)
proves to be a complete LMFD of the matrix A(λ). In analogy, we realise that
the second row constitutes a complete RMFD.

5.
Example 3.37. For Matrix (3.99) in Example 3.33,

    P(λ)/d(λ) = [1; λ+1]/(λ² − λ),   Q'(λ)/d(λ) = [1   λ]/(λ² − λ)

is obtained. It is easily checked that in this case, we can choose

    a_l(λ) = [−2λ   λ; λ+1   −1],   b_l(λ) = [1; 0] ,

    a_r(λ) = [λ²−λ   λ; 0   −1],   b_r(λ) = [1   0] ,

and according to (3.107), we build the matrices

    b̄_l(λ) = [1 0; 0 1],   b̄_r(λ) = [1 0; λ+1 1] .

3.10 Normalisation of Rational Matrices

1. During the construction of complete LMFDs, RMFDs and simple realisa-
tions for normal rational matrices, we have to take into account the structural
peculiarities and the equations that exist between their elements. Indeed, even
arbitrarily small inaccuracies during the calculation of the elements of a nor-
mal matrix (3.100) will most likely lead to a situation where the divisibility
of all minors of second order by the denominator is violated, and the result-
ing matrix Ã(λ) is no longer normal. After that, also the irreducible MFD
built from the matrix Ã(λ) will not be complete, and the values of ord a_l(λ)
and ord a_r(λ) in the configured IMFDs will become too large. Also the matrix Ã
that is assigned by the minimal realisation (Ã, B̃, C̃) according to the matrix
Ã(λ) would have too high a dimension. As a consequence of these errors, after
transition to an IMFD or to corresponding minimal realisations, we would
obtain linear models with a totally different dynamic behaviour than that of the
original object, which is described by the transfer matrix (3.100).

2. Let us illustrate the above remarks by a simple example.

Example 3.38. Consider the nominal transfer matrix

    A(λ) = [λ−a   0; 0   λ−b] / ((λ−a)(λ−b)),   a ≠ b                  (3.108)

that proves to be normal. Assume that, due to practical calculations, the
approximated matrix

    Ã(λ) = [λ−a+ε   0; 0   λ−b] / ((λ−a)(λ−b))                         (3.109)

is built, which is normal only for ε = 0. For the nominal matrix (3.108), there
exists the simple realisation (A, B, C) with

    A = [a 0; 0 b],   B = [0 1; 1 0],   C = [0 1; 1 0] .               (3.110)

All other simple realisations of Matrix (3.108) are produced from (3.110) by
similarity transformations. Realisation (3.110) corresponds to the system of
differential equations of second order

    ẋ1 = a x1 + u2
    ẋ2 = b x2 + u1                                                     (3.111)
    y1 = x2 ,   y2 = x1 .
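A one-point numerical check (the sample values a = 1, b = 2 and the test point are ours) confirms that the realisation (3.110) reproduces the matrix (3.108): with A = diag(a, b) and B = C both equal to the column-swap matrix, C(λI − A)^{−1}B = diag(1/(λ−b), 1/(λ−a)).

```python
# Check that the realisation (3.110) reproduces the nominal transfer
# matrix (3.108) at a sample point λ.

a, b, lam = 1.0, 2.0, 5.0
inv = [[1.0 / (lam - a), 0.0], [0.0, 1.0 / (lam - b)]]    # (λI - A)^{-1}
swap = [[0.0, 1.0], [1.0, 0.0]]                            # B and C in (3.110)

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = mul(swap, mul(inv, swap))                              # C (λI-A)^{-1} B
N = [[(lam - a) / ((lam - a) * (lam - b)), 0.0],
     [0.0, (lam - b) / ((lam - a) * (lam - b))]]           # matrix (3.108)
assert all(abs(T[i][j] - N[i][j]) < 1e-12 for i in range(2) for j in range(2))
```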

For ε ≠ 0, we find the minimal realisation (A_ε, B_ε, C_ε) for (3.109), where

    A_ε = [a 0 0; 0 a 0; 0 0 b],   B_ε = [ε/(a−b)   0; 0   1; 1−ε/(a−b)   0],   C_ε = [1 0 1; 0 1 0] .   (3.112)

All other minimal realisations of Matrix (3.109) are obtained from (3.112) by
similarity transformations.
Realisation (3.112) is assigned to the differential equations of third order

    ẋ1 = a x1 + (ε/(a−b)) u1
    ẋ2 = a x2 + u2
    ẋ3 = b x3 + (1 − ε/(a−b)) u1
    y1 = x1 + x3 ,   y2 = x2 .

For ε = 0, these equations do not turn into (3.111), and the component x1
loses controllability. Moreover, for ε = 0 and a > 0, the object is no longer
stabilisable, though the nominal object (3.111) was stabilisable.
In constructing the MFDs, similarly different solutions are obtained for ε ≠ 0
and ε = 0. Indeed, if the numerator of the perturbed matrix (3.109) is written
in Smith canonical form, then we obtain for b − a + ε ≠ 0
    [λ−a+ε   0; 0   λ−b] = (1/(b−a+ε)) [λ−a+ε   −1; λ−b   −1] [1   0; 0   (λ−a+ε)(λ−b)] [λ−a+ε   λ−b; 1   1] .

Thus, the McMillan canonical form of (3.109) becomes

    Ã(λ) = (1/(b−a+ε)) [λ−a+ε   −1; λ−b   −1] [1/((λ−a)(λ−b))   0; 0   (λ−a+ε)/(λ−a)] [λ−a+ε   λ−b; 1   1] .   (3.113)

For ε ≠ 0, the McMillan denominator Δ_Ã(λ) of Matrix (3.109) results in

    Δ_Ã(λ) = (λ−a)²(λ−b) .

Hence the irreducible left MFD is built with the matrices

    a_l(λ) = [(λ−a)(λ−b)   0; 0   λ−a] [1/(b−a+ε)   −1/(b−a+ε); (λ−b)/(b−a+ε)   −(λ−a+ε)/(b−a+ε)] ,

    b_l(λ) = (1/(b−a+ε)) [λ−a+ε   −(λ−b); λ−a+ε   −(λ−a+ε)] .

For ε = 0, the situation changes. Then from (3.113), it arises

    A(λ) = (1/(b−a)) [λ−a   −1; λ−b   −1] [1/((λ−a)(λ−b))   0; 0   1] [λ−a   λ−b; 1   1]

and we arrive at the LMFD

    a_l(λ) = [(λ−a)(λ−b)/(b−a)   −(λ−a)(λ−b)/(b−a); λ−b   −(λ−a)] ,

    b_l(λ) = [(λ−a)/(b−a)   −(λ−b)/(b−a); 1   −1] ,

and for that, according to the general theory, we get ord a_l(λ) = 2.

3. In connection with the above considerations, the following problem arises.
Suppose a simple realisation (A, B, C) of dimension n, p, m. Then its assigned
(ideal) transfer matrix

    A(λ) = C adj(λI_p − A)B / det(λI_p − A) = N(λ)/d(λ)                (3.114)

is normal. However, due to inevitable inaccuracies, we could have the real
transfer matrix

    Ã(λ) = Ñ(λ)/d̃(λ) ,                                                (3.115)

which practically always deviates from a normal matrix. Even more, if the
random calculation errors are independent, then Matrix (3.115) with proba-
bility 1 is not normal. Hence the transition from Matrix (3.115) to its minimal
realisation leads to a realisation (Ã, B̃, C̃) of dimension n, q, m with q > p; that
means, to an object of higher order with non-predictable dynamic properties.

4. Analogous difficulties arise during the solution of identification problems
for linear MIMO systems in the frequency domain [108, 4, 120]. Let for in-
stance the real object be described by the simple realisation (A, B, C). Any
identification procedure in the frequency domain will only give an approxi-
mate transfer matrix (3.115). Even perfect preparation of the identification
conditions cannot prevent that the coefficients of the estimated transfer matrix
(3.115) will slightly deviate from the coefficients of the exact matrix (3.114).
But this deviation suffices for the probability that Matrix (3.115) is normal
to turn to zero. Therefore, the formal transition from Matrix (3.115) to
the corresponding minimal realisation will lead to a system of higher order,
i.e. the identification problem is incorrectly solved.

5. Situations of this kind also arise during the application of frequency do-
main methods for the design of linear MIMO systems [196, 6, 48, 206, 95].
The algorithm of the optimal controller is normally based on the demand that it
is described by a simple realisation (A0, B0, C0). The design method is usually
carried out by numerical calculations, so that the transfer matrix Ã0(λ) of the
optimal controller will practically never be normal. Therefore, the really produced
realisation (Ã0, B̃0, C̃0) of the optimal controller will have an increased order.
Due to this fact, the system with this controller may show an unintended
behaviour; especially, it might become (internally) unstable.

6. As a consequence of the outlined problems, the following general task is
stated [144, 145].

Normalisation problem. Suppose a rational matrix (3.115), the
coefficients of which deviate slightly from the coefficients of a certain
normal matrix (3.114). Then, find a normal matrix

    A∗(λ) = N∗(λ)/d∗(λ)

the coefficients of which differ only a little from the coefficients of Matrix
(3.115).

A possible approach to the solution of the normalisation problem consists in
the following reflection. By equivalence transformation, the numerator of the
rational matrix Ã(λ) can be brought into the form (3.102)

    Ñ(λ) = g(λ)Φ(λ) [γ(λ)  1; L(λ)  β(λ)] Ψ(λ) .

Let d̃(λ) be the denominator of the approximated strictly proper matrix
(3.115). Then owing to (3.40), from the above considerations, it follows im-
mediately the representation

    A∗(λ) = g(λ)P∗(λ)Q∗'(λ)/d∗(λ) + G∗(λ) ,

where

    d∗(λ) = d̃(λ),   P∗(λ) = Φ(λ)[1; β(λ)],   Q∗'(λ) = [γ(λ)   1] Ψ(λ)   (3.116)

and the matrix G∗(λ) is determined in such a way that the matrix A∗(λ)
becomes strictly proper.

Example 3.39. Apply the normalisation procedure to Matrix (3.109) for ε ≠ 0
and b − a + ε ≠ 0. Notice that

    [λ−a+ε   0; 0   λ−b] = [b−a+ε   −1; 0   1] [−(λ−b)/(b−a+ε)   1; −(λ−b)   −(λ−b)] [−1   −1; 1   0]

is true, i.e. in the present case, we can choose

    g(λ) = 1,   β(λ) = −λ + b,   γ(λ) = −(λ−b)/(b−a+ε)

and

    Φ(λ) = [b−a+ε   −1; 0   1],   Ψ(λ) = [−1   −1; 1   0] .

Using (3.116), we finally find

    P∗(λ) = [b−a+ε   −1; 0   1] [1; −λ+b] = [λ−a+ε; −λ+b] ,

    Q∗'(λ) = [−(λ−b)/(b−a+ε)   1] [−1   −1; 1   0] = [(λ−a+ε)/(b−a+ε)   (λ−b)/(b−a+ε)] .

Selecting the denominator d∗(λ) = (λ−a)(λ−b), we get

    G∗(λ) = (1/(b−a+ε)) [−1   −1; 1   1] .
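With sample values a = 1, b = 2, ε = 0.1 (our choice), one can check numerically that the normalised matrix A∗(λ) = P∗(λ)Q∗'(λ)/d∗(λ) + G∗(λ) is strictly proper and stays within order ε of the perturbed matrix (3.109):

```python
# Numerical sanity check of Example 3.39 (sample values a = 1, b = 2, ε = 0.1).

a, b, eps = 1.0, 2.0, 0.1
k = b - a + eps

def A_star(lam):                       # P*(λ)Q*'(λ)/d*(λ) + G*(λ)
    P = [lam - a + eps, -(lam - b)]
    Q = [(lam - a + eps) / k, (lam - b) / k]
    d = (lam - a) * (lam - b)
    G = [[-1 / k, -1 / k], [1 / k, 1 / k]]
    return [[P[i] * Q[j] / d + G[i][j] for j in range(2)] for i in range(2)]

def A_tilde(lam):                      # the perturbed matrix (3.109)
    d = (lam - a) * (lam - b)
    return [[(lam - a + eps) / d, 0.0], [0.0, (lam - b) / d]]

# strictly proper: all entries vanish for large λ
assert all(abs(x) < 1e-3 for row in A_star(1e6) for x in row)
# deviation from (3.109) is of order ε at a moderate test point
diff = max(abs(A_star(7.0)[i][j] - A_tilde(7.0)[i][j])
           for i in range(2) for j in range(2))
assert diff < 0.05
```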
Part II

General MIMO Control Problems


4
Assignment of Eigenvalues and Eigenstructures
by Polynomial Methods

In this chapter, and later on wherever possible, the fundamental results are formulated
for real polynomials or real rational matrices, because this case dominates in
technical applications, and its handling is more comfortable.

4.1 Problem Statement

1. Suppose the horizontal pair (a(λ), b(λ)) with a(λ) ∈ R^{n×n}[λ], b(λ) ∈ R^{n×m}[λ].
For the theory and in many applications, the following problem is important.
For a given pair (a(λ), b(λ)), find a pair (α(λ), β(λ)) with α(λ) ∈ R^{m×m}[λ],
β(λ) ∈ R^{m×n}[λ] such that the set of eigenvalues of the matrix

    Q(λ, α, β) = [a(λ)   b(λ); β(λ)   α(λ)]                            (4.1)

takes prescribed values λ1,...,λq with the multiplicities ν1,...,νq. In what
follows, the pair (a(λ), b(λ)) is called the process to control, or shortly the
process, and the pair (α(λ), β(λ)) the controller. Matrix (4.1) is designated
as the characteristic matrix of the closed loop, or shortly the characteristic
matrix. Denote

    d(λ) = (λ−λ1)^{ν1} ··· (λ−λq)^{νq} ,                               (4.2)

then the problem of eigenvalue assignment is formulated as follows.

Eigenvalue assignment. For a given process (a(λ), b(λ)) and pre-
scribed polynomial d(λ), find all controllers (α(λ), β(λ)) that ensure

    det Q(λ, α, β) = det [a(λ)   b(λ); β(λ)   α(λ)] ∼ d(λ) .           (4.3)

In what follows, the polynomial det Q(λ, α, β) is designated as the charac-
teristic polynomial of the closed loop. For a given process and polynomial

d(), Relation (4.3) can be seen as an equation depending on the controller


((), ()).

2. Let the just formulated task of eigenvalue assignment be solvable for a


given process with a certain polynomial d(). Suppose d to be the set of
controllers satisfying Equation (4.3). Assume a1 (), . . . , an+m () to be the
sequence of invariant polynomials of the matrix Q(, , ). In principle, for
dierent controllers in the set d , these sequences will be dierent, because
such a sequence a1 (), . . . , an+m () only has to meet the three demands: All
polynomials ai () are monic, each polynomial ai+1 () is divisible by ai (),
and
a1 () an+m () = d() .
Assume particularly

a1 () = a2 () = . . . = an+m1 () = 1, an+m () = d().

Then the matrix Q(, , ) is simple. In connection with the above said the
following task seems substantiated.
Structural eigenvalue assignment. For a given process
(a(), b()) and scalar polynomial d(), the eigenvalue assignment
d d ,
(4.3) should deliver the solution set d . Find the subset
where the matrix Q(, , ) possesses a prescribed sequence of invari-
ant polynomials a1 (), . . . , an+m ().

3. In many cases, it is useful to formulate the control problem more general,


when the process to control is described by a PMD

() = (a(), b(), c()) Rnpm [] , (4.4)

which then is called as a PMD process. Introduce the matrix Q (, , ) of


the shape
a() Opn b()
Q (, , ) = c() In Onm , (4.5)
Omp () ()
which is called the characteristic matrix of the closed loop with PMD process.
Then the eigenvalue assignment can be formulated as follows:

Eigenvalue assignment for a PMD process. For a given PMD


process (4.4) and polynomial d() of the form (4.2), nd the set of all
controllers ((), ()) for which the relation

det Q (, , ) d() (4.6)

is fullled.

4. Let the task of eigenvalue assignment for a PMD process be solvable, and let
Ω be the configured set of controllers (α(λ), β(λ)). For different controllers in
Ω, the sequence of the invariant polynomials of Matrix (4.5) can be different.
Therefore, also the next task is of interest.

Structural eigenvalue assignment for a PMD process. For a
given PMD process (4.4) and polynomial d(λ), the set of solutions
(α(λ), β(λ)) of the eigenvalue assignment (4.6) is designated by Ω.
Find the subset Ω̃ ⊂ Ω, where the matrix Q_Σ(λ, α, β) possesses a
prescribed sequence of invariant polynomials.

In the present chapter, the general solution of the eigenvalue assignment
problem is derived, where the processes are given as polynomial pairs or as
PMDs. Moreover, the structure of the set of invariant polynomials is stated,
which can be prescribed for this task. Although the following results are
formulated for real matrices, they could be transferred practically without
changes to the complex case. In the considerations below, the eigenvalue as-
signment problem is also called the modal control problem, and the determination
of the structured eigenvalues is also named the structural modal control problem.

4.2 Basic Controllers


1. In this section, the important question is investigated, how to design the
controller ((), ()) that Matrix (4.1) becomes unimodular, i.e.
a1 () = a2 () = . . . = an+m () = 1 .
In the following, such controllers are called basic controllers.
It follows directly from Theorem 1.41 that for the existence of a basic
controller for the process (a(, b()), it is necessary and sucient that the
pair (a(), b()) is irreducible, i.e. the matrix


Rh () = a() b()
is alatent. If a process meets this condition, it is called irreducible.

2. The next theorem presents a general expression for the set of all basic
controllers for a given irreducible pair.
Theorem 4.1. Let (0 (), 0 ()) be a certain basic controller for the process
(a(), b()). Then the set of all basic controllers (0 (), 0 ()) is determined
by the formula
0 () = D()0 () M ()b() ,
(4.7)
0 () = D()0 () M ()a() ,
where M () Rmn [] is an arbitrary, and D() Rmm [] is an arbitrary,
but unimodular matrix.

Proof. The set of all basic controllers is denoted by R0, and the set of all pairs
satisfying Condition (4.7) by Rp. At first, we will show R0 ⊂ Rp.
Let (α0(λ), β0(λ)) be a certain basic controller, and

    Q_l(λ, α0, β0) = [a(λ)   b(λ); β0(λ)   α0(λ)]                      (4.8)

be its configured characteristic matrix, which is unimodular. Introduce

    Q_l^{−1}(λ, α0, β0) = Q_r(λ, α0, β0) = [π_r(λ)   −b_r(λ); −κ_r(λ)   a_r(λ)] ,   (4.9)

where the column blocks have the widths n and m, respectively. Owing to the
properties of the inverse matrix (2.109), we have the relations

    a(λ)π_r(λ) − b(λ)κ_r(λ) = I_n ,
                                                                       (4.10)
    a(λ)b_r(λ) − b(λ)a_r(λ) = O_{nm} .

Let (ᾱ0(λ), β̄0(λ)) be any other basic controller, and

    Q_l(λ, ᾱ0, β̄0) = [a(λ)   b(λ); β̄0(λ)   ᾱ0(λ)]                     (4.11)

be its configured characteristic matrix. Then due to (4.10),

    Q_l(λ, ᾱ0, β̄0) Q_r(λ, α0, β0) = [I_n   O_{nm}; −M(λ)   D(λ)] ,     (4.12)

where

    D(λ) = −β̄0(λ)b_r(λ) + ᾱ0(λ)a_r(λ) ,
    M(λ) = −β̄0(λ)π_r(λ) + ᾱ0(λ)κ_r(λ) .

From (4.12) with regard to (4.9), we receive

    Q_l(λ, ᾱ0, β̄0) = [I_n   O_{nm}; −M(λ)   D(λ)] Q_l(λ, α0, β0) ,

which directly delivers Formulae (4.7). Taking determinants in (4.12), we get

    det D(λ) = det Q_l(λ, ᾱ0, β̄0) det Q_r(λ, α0, β0) = const. ,

i.e. the matrix D(λ) is unimodular. Therefore, every basic controller (ᾱ0(λ), β̄0(λ))
permits a representation (4.7); that is why R0 ⊂ Rp is true.
On the other side, if (4.7) is valid, then from (4.11), it follows
 
    Q_l(λ, ᾱ0, β̄0) = [a(λ)   b(λ); D(λ)β0(λ) − M(λ)a(λ)   D(λ)α0(λ) − M(λ)b(λ)]

                   = [I_n   O_{nm}; −M(λ)   D(λ)] [a(λ)   b(λ); β0(λ)   α0(λ)]

                   = [I_n   O_{nm}; −M(λ)   D(λ)] Q_l(λ, α0, β0) .

Since D(λ) is unimodular, also this matrix has to be unimodular, i.e. Rp ⊂ R0
is proven, and therefore the sets R0 and Rp coincide.

3. As emerges from (4.7), before constructing the set of all basic controllers,
at first we have to find one sample of them. Usually, search procedures for
such a controller are founded on the following considerations.

Lemma 4.2. For the irreducible process (a(λ), b(λ)), there exist an m×m
polynomial matrix a_r(λ) and an n×m polynomial matrix b_r(λ), such that the
equation

    a(λ)b_r(λ) = b(λ)a_r(λ)                                            (4.13)

is fulfilled, where the pair [a_r(λ), b_r(λ)] is irreducible.

Proof. Since the process (a(λ), b(λ)) is irreducible, there exists a basic con-
troller (α0(λ), β0(λ)), such that the matrix

    Q(λ, α0, β0) = [a(λ)   b(λ); β0(λ)   α0(λ)]

becomes unimodular. Thus, the inverse matrix

    Q^{−1}(λ, α0, β0) = [π_{0r}(λ)   −b_r(λ); −κ_{0r}(λ)   a_r(λ)]

is also unimodular. Then from (4.10), Statement (4.13) follows. Moreover,
the pair [a_r(λ), b_r(λ)] is irreducible thanks to Theorem 1.32.

Remark 4.3. If the matrix a(λ) is non-singular, i.e. det a(λ) ≢ 0, then there
exists the transfer matrix of the process

    w(λ) = a^{−1}(λ)b(λ) .

The right side of this equation proves to be an ILMFD of the matrix w(λ). If
we consider an arbitrary IRMFD

    w(λ) = b_r(λ)a_r^{−1}(λ) ,

then Equation (4.13) holds, and the pair [a_r(λ), b_r(λ)] is irreducible. There-
fore, Lemma 4.2 is a generalisation of this property for the case that the matrix
a(λ) is singular.

In what follows, the original pair (a(λ), b(λ)) is called the left process model,
and any pair [a_r(λ), b_r(λ)] satisfying (4.13) is named a right process model. If
in this case the pair [a_r(λ), b_r(λ)] is irreducible, then the right process model
is also designated as irreducible.

Lemma 4.4. Let [a_r(λ), b_r(λ)] be an irreducible right process model. Then
any pair (α0(λ), β0(λ)) satisfying the Diophantine equation

    −β0(λ)b_r(λ) + α0(λ)a_r(λ) = P(λ)                                  (4.14)

with a unimodular matrix P(λ) turns out to be a basic controller for the left
process model (a(λ), b(λ)).

Proof. Since the pair (a(λ), b(λ)) is irreducible, there exists a vertical pair
[π_r(λ), κ_r(λ)] with

    a(λ)π_r(λ) − b(λ)κ_r(λ) = I_n .

The pair (α0(λ), β0(λ)) should satisfy Condition (4.14). Then build the product

    [a(λ)   b(λ); β0(λ)   α0(λ)] [π_r(λ)   −b_r(λ); −κ_r(λ)   a_r(λ)] = [I_n   O_{nm}; M(λ)   P(λ)] ,

where M(λ) is a polynomial matrix. Since the matrix P(λ) is unimodular, the
matrix on the right side becomes unimodular. Thus, both matrices on the left
side are unimodular, and (α0(λ), β0(λ)) proves to be a basic controller.

Corollary 4.5. An arbitrary pair (α0(λ), β0(λ)) satisfying the Diophantine
equation

    −β0(λ)b_r(λ) + α0(λ)a_r(λ) = I_m

proves to be a basic controller.

Remark 4.6. It is easily shown that the set of all pairs (α0(λ), β0(λ)) satisfy-
ing Equation (4.14) for all possible unimodular matrices P(λ) generates the
complete set of basic controllers.

4.3 Recursive Construction of Basic Controllers

1. As arises from Lemma 4.4, a basic controller (α0(λ), β0(λ)) can be found
as a solution of the Diophantine matrix equation (4.14). In the present section,
an alternative method for finding a basic controller is described that leads
to a recursive solution of simpler scalar Diophantine equations, and does not
need the matrices a_r(λ), b_r(λ) arising in (4.14).

2. For deriving this approach, the polynomial equation

    Σ_{i=1}^{n} a_i(λ)x_i(λ) = c(λ)                                    (4.15)

is considered, where the a_i(λ), (i = 1,...,n), c(λ) are known polynomials,
and the x_i(λ), (i = 1,...,n) are unknown scalar polynomials. We will say that
the polynomials a_i(λ) are in all coprime if their monic GCD is equal to 1.
The next lemma is a corollary of a more general statement in [79].

Lemma 4.7. A necessary and sufficient condition for the solvability of Equa-
tion (4.15) is that the greatest common divisor of the polynomials a_i(λ),
(i = 1,...,n) is a divisor of the polynomial c(λ).

Proof. Necessity: Suppose ρ(λ) is a GCD of the polynomials a_i(λ). Then

    a_i(λ) = ρ(λ)a_{1i}(λ) ,   (i = 1,...,n)                           (4.16)

where the polynomials a_{1i}(λ), (i = 1,...,n) are in all coprime. Substituting
(4.16) into (4.15), we obtain

    ρ(λ) [Σ_{i=1}^{n} a_{1i}(λ)x_i(λ)] = c(λ) ,

from which it is clear that the polynomial c(λ) must be divisible by ρ(λ).
Sufficiency: The proof is done by complete induction. The statement
is supposed to be valid for one n = k > 0, and then it is shown that it is also
valid for n = k + 1. Consider the equation

    Σ_{i=1}^{k+1} a_i(λ)x_i(λ) = c(λ) .                                (4.17)

Without loss of generality, assume that the coefficients a_i(λ), (i = 1,...,k+1)
are in all coprime; otherwise, both sides of Equation (4.17) could be divided
by the common factor.
Let ρ(λ) be the GCD of the polynomials a1(λ),...,a_k(λ). Then (4.16) is
true, where the polynomials a_{1i}(λ), (i = 1,...,k) are in all coprime. Herein,
the polynomials ρ(λ) and a_{k+1}(λ) are also coprime; otherwise, the coefficients
of Equation (4.17) would not be in all coprime. Hence the Diophantine equa-
tion

    ρ(λ)u(λ) + a_{k+1}(λ)x_{k+1}(λ) = c(λ)                             (4.18)

is solvable. Let ū(λ), x̄_{k+1}(λ) be a certain solution of Equation (4.18). Inves-
tigate the equation

    Σ_{i=1}^{k} a_{1i}(λ)x_i(λ) = ū(λ) ,                               (4.19)

which is solvable due to the induction supposition, since the coefficients a_{1i}(λ),
(i = 1,...,k) are in all coprime. Let x̄_i(λ), (i = 1,...,k) be any solution of
Equation (4.19). Then applying (4.16) and (4.18), we get

    Σ_{i=1}^{k+1} a_i(λ)x̄_i(λ) = c(λ) ;

this means, the totality of polynomials x̄_i(λ), (i = 1,...,k+1) presents a
solution of Equation (4.17). Since the statement of the lemma holds for
k = 2, we have proved by complete induction that it is true for all k ≥ 2.

3. The idea of the proof consists in constructing a solution of (4.15) by
reducing the problem to the case of two variables. It can be used to generate
successively the solutions of Diophantine equations with several unknowns,
where in every step a Diophantine equation with two unknowns is solved.

Example 4.8. Find a solution of the equation

    (λ−1)(λ−2)x1(λ) + (λ−1)(λ−3)x2(λ) + (λ−2)(λ−3)x3(λ) = 1 .

Here, the coefficients are in all coprime, though they are not coprime by twos.
In the present case, the auxiliary equation (4.18) can be given the shape

    (λ−1)u(λ) + (λ−2)(λ−3)x3(λ) = 1 .

A special solution takes the form

    ū(λ) = −0.5λ + 2 ,   x̄3(λ) = 0.5 .

Equation (4.19) can be represented in the form

    (λ−2)x1(λ) + (λ−3)x2(λ) = −0.5λ + 2 .

As a special solution for the last equation, we find

    x̄1(λ) = 0.5 ,   x̄2(λ) = −1 .

Thus, as a special solution of the original equation, we obtain

    x̄1(λ) = 0.5 ,   x̄2(λ) = −1 ,   x̄3(λ) = 0.5 .
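The special solution can be confirmed with elementary coefficient-list polynomial arithmetic (helper names are ours):

```python
# Verify Example 4.8: with x1 = 1/2, x2 = -1, x3 = 1/2,
# (λ-1)(λ-2)x1 + (λ-1)(λ-3)x2 + (λ-2)(λ-3)x3 = 1.
# Polynomials are coefficient lists, lowest degree first.

from itertools import zip_longest

def p_add(p, q):
    return [a + b for a, b in zip_longest(p, q, fillvalue=0)]

def p_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def lin(r):                  # the factor (λ - r)
    return [-r, 1]

lhs = p_add(p_add(p_mul(p_mul(lin(1), lin(2)), [0.5]),
                  p_mul(p_mul(lin(1), lin(3)), [-1.0])),
            p_mul(p_mul(lin(2), lin(3)), [0.5]))
print(lhs)   # -> [1.0, 0.0, 0.0], i.e. the constant polynomial 1
```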


4.
Lemma 4.9. Suppose the n (n + 1) polynomial matrix

a11 () . . . a1,n+1 ()

A() = ...
..
. (4.20)
an1 () . . . an,n+1 ()

with rank A() = n. Furthermore, denote DA () as the monic GCD of the


minors of n-th order of the matrix A(). Then there exist scalar polynomials
d1 (), . . . , dn+1 (), such that the (n + 1) (n + 1) polynomial matrix

A1 () = A()
d1 () . . . dn+1 ()

satises the relation


det A1 () = DA () . (4.21)

Proof. Denote Bi () as that n n polynomial matrix which is held from A()


by cutting its i-th column. Then the expansion of the determinant by the last
row delivers

n+1
det A1 () = (1)n+1+i di ()i ()
i=1

with i () = det Bi (). Thus, Relation (4.21) is equivalent to the Diophantine


equation

n+1
(1)n+1+i di ()i () = DA () . (4.22)
i=1

By denition, the polynomial DA () is the GCD of the polynomials i ().


Hence we can write
i () = DA ()1i () ,
where the polynomials 1i (), (i = 1, . . . , n + 1) are in all coprime. By virtue
of this relation, Equation (4.22) can take the form


n+1
(1)n+1+i di ()1i () = 1 .
i=1

Since the polynomials 1i () are in all coprime, this equation is solvable


thanks to Lemma 4.7. Multiplying both sides by DA (), we conclude that
Equation (4.22) is solvable.

5.
Theorem 4.10. [193] Suppose the non-degenerate n×m polynomial matrix
Ā(λ), m > n+1, where

    Ā(λ) = [a11(λ) ... a1n(λ)  a1,n+1(λ) | a1,n+2(λ) ... a1m(λ);
            ... ;
            an1(λ) ... ann(λ)  an,n+1(λ) | an,n+2(λ) ... anm(λ)]

         = [A(λ) | a1,n+2(λ) ... a1m(λ); ... ; an,n+2(λ) ... anm(λ)] ,

and assume that the submatrix A(λ) on the left of the line has the form (4.20).
Let D_A(λ) be the monic GCD of the minors of n-th order of the matrix A(λ),
and D_Ā(λ) the monic GCD of the minors of n-th order of the matrix Ā(λ).
The polynomials d1(λ),...,d_{n+1}(λ) should be a solution of Equation (4.22).
Then the monic GCD of the minors of (n+1)-th order of the matrix

    A_d(λ) = [a11(λ) ... a1,n+1(λ) | a1,n+2(λ) ... a1m(λ);
              ... ;
              an1(λ) ... an,n+1(λ) | an,n+2(λ) ... anm(λ);
              d1(λ) ... d_{n+1}(λ)  | d_{n+2}(λ) ... dm(λ)]             (4.23)

satisfies the condition

    D_{A_d}(λ) = D_Ā(λ)

for any polynomials d_{n+2}(λ),...,dm(λ).

Corollary 4.11. If the matrix Ā(λ) is alatent, then also Matrix (4.23) is
alatent.

6. Suppose an irreducible pair $(a(\lambda), b(\lambda))$, where $a(\lambda)$ has the dimension $n\times n$ and $b(\lambda)$ the dimension $n\times m$. Then, by successively repeating the procedure explained in Theorem 4.10, the unimodular matrix
\[
Q_l(\lambda, \alpha_0, \beta_0) = \begin{bmatrix}
a_{11}(\lambda) & \dots & a_{1n}(\lambda) & b_{11}(\lambda) & \dots & b_{1m}(\lambda) \\
\vdots & & \vdots & \vdots & & \vdots \\
a_{n1}(\lambda) & \dots & a_{nn}(\lambda) & b_{n1}(\lambda) & \dots & b_{nm}(\lambda) \\
\beta_{11}(\lambda) & \dots & \beta_{1n}(\lambda) & \alpha_{11}(\lambda) & \dots & \alpha_{1m}(\lambda) \\
\vdots & & \vdots & \vdots & & \vdots \\
\beta_{m1}(\lambda) & \dots & \beta_{mn}(\lambda) & \alpha_{m1}(\lambda) & \dots & \alpha_{mm}(\lambda)
\end{bmatrix}
\]
is produced. The last $m$ rows of this matrix present a certain basic controller
\[
\alpha_0(\lambda) = \begin{bmatrix} \alpha_{11}(\lambda) & \dots & \alpha_{1m}(\lambda) \\ \vdots & & \vdots \\ \alpha_{m1}(\lambda) & \dots & \alpha_{mm}(\lambda) \end{bmatrix},
\qquad
\beta_0(\lambda) = \begin{bmatrix} \beta_{11}(\lambda) & \dots & \beta_{1n}(\lambda) \\ \vdots & & \vdots \\ \beta_{m1}(\lambda) & \dots & \beta_{mn}(\lambda) \end{bmatrix} .
\]

For the construction of the matrix $Q_l(\lambda, \alpha_0, \beta_0)$, essentially one scalar Diophantine equation with several unknowns of the type (4.15) must be solved in every step. As shown above, the solution of such equations amounts to the successive solution of simple Diophantine equations of the form (4.18).

7.
Example 4.12. Determine a basic controller for the process $(a(\lambda), b(\lambda))$ with
\[
a(\lambda) = \begin{bmatrix} \lambda-1 & \lambda+1 \\ 0 & 1 \end{bmatrix},
\qquad
b(\lambda) = \begin{bmatrix} 0 & \lambda \\ \lambda & 0 \end{bmatrix} . \tag{4.24}
\]
The pair $(a(\lambda), b(\lambda))$ turns out to be irreducible, because the matrix
\[
\begin{bmatrix} \lambda-1 & \lambda+1 & 0 & \lambda \\ 0 & 1 & \lambda & 0 \end{bmatrix}
\]
is alatent, which is easily checked. Hence the design problem for a basic controller is solvable. In the first step, we search for polynomials $d_1(\lambda)$, $d_2(\lambda)$, $d_3(\lambda)$, so that the matrix
\[
A_1(\lambda) = \begin{bmatrix} \lambda-1 & \lambda+1 & 0 \\ 0 & 1 & \lambda \\ d_1(\lambda) & d_2(\lambda) & d_3(\lambda) \end{bmatrix}
\]
becomes unimodular. Without loss of generality, we assume $\det A_1(\lambda) = 1$. Thus we arrive at the Diophantine equation
\[
\lambda(\lambda+1)d_1(\lambda) - \lambda(\lambda-1)d_2(\lambda) + (\lambda-1)d_3(\lambda) = 1 .
\]
A special solution of this equation is represented in the form
\[
d_1(\lambda) = \frac{1}{2}, \qquad d_2(\lambda) = 0, \qquad d_3(\lambda) = -\left(\frac{\lambda}{2} + 1\right) .
\]
The not yet designated polynomial $d_4(\lambda)$ can be chosen arbitrarily. Take for instance $d_4(\lambda) = 0$, so the alatent matrix becomes
\[
A_2(\lambda) = \begin{bmatrix} \lambda-1 & \lambda+1 & 0 & \lambda \\ 0 & 1 & \lambda & 0 \\ \frac{1}{2} & 0 & -\frac{\lambda}{2} - 1 & 0 \end{bmatrix} .
\]
2 2

It remains the task to complete $A_2(\lambda)$ so that it becomes a unimodular matrix. For that, we attempt
\[
A_3(\lambda) = \begin{bmatrix} \lambda-1 & \lambda+1 & 0 & \lambda \\ 0 & 1 & \lambda & 0 \\ \frac{1}{2} & 0 & -\frac{\lambda}{2} - 1 & 0 \\ \tilde d_1(\lambda) & \tilde d_2(\lambda) & \tilde d_3(\lambda) & \tilde d_4(\lambda) \end{bmatrix}
\]
with unknown polynomials $\tilde d_i(\lambda)$, $(i = 1, \dots, 4)$. Assume $\det A_3(\lambda) = 1$, so we obtain the Diophantine equation
\[
\left(\frac{\lambda^2}{2} + \lambda\right)\tilde d_1(\lambda) - \frac{\lambda^2}{2}\,\tilde d_2(\lambda) + \frac{\lambda}{2}\,\tilde d_3(\lambda) + \tilde d_4(\lambda) = 1 ,
\]
which has the particular solution
\[
\tilde d_1(\lambda) = \tilde d_2(\lambda) = \tilde d_3(\lambda) = 0, \qquad \tilde d_4(\lambda) = 1 .
\]
In summary, we obtain the unimodular matrix
\[
A_3(\lambda) = \begin{bmatrix} \lambda-1 & \lambda+1 & 0 & \lambda \\ 0 & 1 & \lambda & 0 \\ \frac{1}{2} & 0 & -\frac{\lambda}{2} - 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix},
\]
from which we read off the basic controller $(\alpha_0(\lambda), \beta_0(\lambda))$ with
\[
\alpha_0(\lambda) = \begin{bmatrix} -\frac{\lambda}{2} - 1 & 0 \\ 0 & 1 \end{bmatrix},
\qquad
\beta_0(\lambda) = \begin{bmatrix} \frac{1}{2} & 0 \\ 0 & 0 \end{bmatrix} .
\]

Using this solution and Formula (4.7), we construct the set of all basic controllers for the process (4.24). □
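As a numerical cross-check (a sketch assuming Python with sympy, and using the process data as reconstructed in this example), one can verify that the completed matrix is indeed unimodular and that its inverse, the dual basic matrix, is again polynomial:

```python
from sympy import symbols, Matrix, Rational, simplify, cancel, denom

lam = symbols('lam')

# Process data as reconstructed for Example 4.12 (an assumption of this sketch)
a = Matrix([[lam - 1, lam + 1], [0, 1]])
b = Matrix([[0, lam], [lam, 0]])

# Basic controller read off from the completed matrix A3
beta0  = Matrix([[Rational(1, 2), 0], [0, 0]])
alpha0 = Matrix([[-(lam/2 + 1), 0], [0, 1]])

Q_l = Matrix.vstack(Matrix.hstack(a, b), Matrix.hstack(beta0, alpha0))
assert simplify(Q_l.det()) == 1            # A3 is unimodular

# The inverse of a unimodular polynomial matrix must again be polynomial
Q_r = Q_l.inv()
assert all(denom(cancel(e)).is_number for e in Q_r)
```

The last assertion checks that no entry of the inverse has a genuine polynomial denominator, which is exactly the unimodularity property exploited throughout this section.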
Example 4.13. Find a basic controller for the process $(a(\lambda), b(\lambda))$ with
\[
a(\lambda) = \begin{bmatrix} \lambda-1 & \lambda+1 \\ \lambda-1 & \lambda+1 \end{bmatrix},
\qquad
b(\lambda) = \begin{bmatrix} 1 \\ 0 \end{bmatrix} . \tag{4.25}
\]
In the present case the matrix $a(\lambda)$ is singular; nevertheless, a basic controller can be found, because the matrix
\[
R_h(\lambda) = \begin{bmatrix} a(\lambda) & b(\lambda) \end{bmatrix} = \begin{bmatrix} \lambda-1 & \lambda+1 & 1 \\ \lambda-1 & \lambda+1 & 0 \end{bmatrix}
\]
is alatent. In accordance with the derived methods, we search for polynomials $d_1(\lambda)$, $d_2(\lambda)$, $d_3(\lambda)$ such that the condition
\[
\det \begin{bmatrix} \lambda-1 & \lambda+1 & 1 \\ \lambda-1 & \lambda+1 & 0 \\ d_1(\lambda) & d_2(\lambda) & d_3(\lambda) \end{bmatrix} = 1
\]
is satisfied, which is equivalent to the Diophantine equation
\[
-(\lambda+1)d_1(\lambda) + (\lambda-1)d_2(\lambda) = 1 ,
\]
which has the particular solution $d_1(\lambda) = d_2(\lambda) = -0.5$. Thus, we can take $\alpha_0(\lambda) = d_3(\lambda)$, $\beta_0(\lambda) = \begin{bmatrix} -0.5 & -0.5 \end{bmatrix}$, where $d_3(\lambda)$ is an arbitrary polynomial. □
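A quick check (sketch assuming Python with sympy; the data are those reconstructed for this example) confirms that the determinant equals 1 for $d_1 = d_2 = -0.5$ and is independent of the arbitrary polynomial $d_3(\lambda)$:

```python
from sympy import symbols, Matrix, Rational, expand

lam, d3 = symbols('lam d3')

# Example 4.13 data (as reconstructed): singular a(lam), b(lam) a column
R = Matrix([[lam - 1, lam + 1, 1],
            [lam - 1, lam + 1, 0],
            [-Rational(1, 2), -Rational(1, 2), d3]])

# det R = -(lam+1)*d1 + (lam-1)*d2 with d1 = d2 = -1/2; the d3-cofactor
# is the determinant of a singular matrix, so d3 drops out entirely
assert expand(R.det()) == 1
```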

4.4 Dual Models and Dual Bases


1. For the further investigations, we want to modify the introduced notation.
The initial irreducible process (a(), b()) is written as (al (), bl ()), and is
called, as before, a left process model. Any basic controller 0 (), 0 ()) is
written in the form (0l (), 0l ()), and it is called a left basic controller.
Hereby, the assigned unimodular matrix (4.8) that presents itself in the form

al () bl ()
Ql (, 0l , 0l ) = (4.26)
0l () 0l ()

is named a left basic matrix. However, if the pair [ar (), br ()] is an irre-
ducible right process model, then the vertical pair [0r (), 0r ()], for which
the matrix
0r () br ()
Qr (, 0r , 0r ) = (4.27)
0r () ar ()
becomes unimodular, is said to be a right basic controller, and the congured
matrix (4.27) is called a right basic matrix. .

2. Using (4.27), we find
\[
Q_r(\lambda, \alpha_{0r}, \beta_{0r}) = \begin{bmatrix} \alpha_{0r}(\lambda) & -b_r(\lambda) \\ -\beta_{0r}(\lambda) & a_r(\lambda) \end{bmatrix}
\sim \begin{bmatrix} a_r(\lambda) & -\beta_{0r}(\lambda) \\ -b_r(\lambda) & \alpha_{0r}(\lambda) \end{bmatrix},
\]
where the symbol $\sim$ stands for the equivalence of the polynomial matrices. Now, if $[\bar\alpha_{0r}(\lambda), \bar\beta_{0r}(\lambda)]$ is any right basic controller, then applying Theorem 4.1 and the last relation, the set of all right basic controllers is expressed by the formula
\[
\alpha_{0r}(\lambda) = \bar\alpha_{0r}(\lambda)D_r(\lambda) - b_r(\lambda)M_r(\lambda) ,
\qquad
\beta_{0r}(\lambda) = \bar\beta_{0r}(\lambda)D_r(\lambda) - a_r(\lambda)M_r(\lambda) , \tag{4.28}
\]
where the $m\times n$ polynomial matrix $M_r(\lambda)$ is arbitrary, and $D_r(\lambda)$ is any unimodular $n\times n$ polynomial matrix.
162 4 Assignment of Eigenvalues and Eigenstructures by Polynomial Methods

3. The basic matrices (4.26) and (4.27) are called dual if the equation
\[
Q_r(\lambda, \alpha_{0r}, \beta_{0r}) = Q_l^{-1}(\lambda, \alpha_{0l}, \beta_{0l}) \tag{4.29}
\]
holds, or equivalently, if
\[
Q_l(\lambda, \alpha_{0l}, \beta_{0l})\,Q_r(\lambda, \alpha_{0r}, \beta_{0r}) = Q_r(\lambda, \alpha_{0r}, \beta_{0r})\,Q_l(\lambda, \alpha_{0l}, \beta_{0l}) = I_{n+m} . \tag{4.30}
\]
The processes $(a_l(\lambda), b_l(\lambda))$, $[a_r(\lambda), b_r(\lambda)]$ configured by Equations (4.29) and (4.30), as well as the basic controllers $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$, $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$, will also be named dual.
From (4.30) and (4.26), (4.27) emerge two groups of equations, respectively for left and right dual models as well as left and right dual basic controllers:
\[
\begin{aligned}
a_l(\lambda)\alpha_{0r}(\lambda) - b_l(\lambda)\beta_{0r}(\lambda) &= I_n , &
a_l(\lambda)b_r(\lambda) - b_l(\lambda)a_r(\lambda) &= O_{nm} , \\
-\beta_{0l}(\lambda)\alpha_{0r}(\lambda) + \alpha_{0l}(\lambda)\beta_{0r}(\lambda) &= O_{mn} , &
-\beta_{0l}(\lambda)b_r(\lambda) + \alpha_{0l}(\lambda)a_r(\lambda) &= I_m
\end{aligned} \tag{4.31}
\]
and
\[
\begin{aligned}
\alpha_{0r}(\lambda)a_l(\lambda) - b_r(\lambda)\beta_{0l}(\lambda) &= I_n , &
-\alpha_{0r}(\lambda)b_l(\lambda) + b_r(\lambda)\alpha_{0l}(\lambda) &= O_{nm} , \\
\beta_{0r}(\lambda)a_l(\lambda) - a_r(\lambda)\beta_{0l}(\lambda) &= O_{mn} , &
-\beta_{0r}(\lambda)b_l(\lambda) + a_r(\lambda)\alpha_{0l}(\lambda) &= I_m .
\end{aligned} \tag{4.32}
\]
Relations (4.31) and (4.32) are called the direct and the reverse Bezout identity, respectively [69].

Remark 4.14. The validity of the relations of any one of the groups (4.31) or (4.32) is necessary and sufficient for the validity of Formulae (4.29), (4.30). Therefore, each of the groups of Relations (4.31) or (4.32) follows from the other one.
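The Bezout identities (4.31) can be verified mechanically for a concrete dual pair. The sketch below (assuming Python with sympy, and reusing the basic matrix reconstructed in Example 4.12) obtains $Q_r$ as the inverse of $Q_l$ and checks three of the identities:

```python
from sympy import symbols, Matrix, Rational, simplify, eye, zeros

lam = symbols('lam')

# Left basic matrix Q_l for the pair of Example 4.12 (data assumed as there)
a_l = Matrix([[lam - 1, lam + 1], [0, 1]])
b_l = Matrix([[0, lam], [lam, 0]])
beta0l  = Matrix([[Rational(1, 2), 0], [0, 0]])
alpha0l = Matrix([[-(lam/2 + 1), 0], [0, 1]])
Q_l = Matrix.vstack(Matrix.hstack(a_l, b_l), Matrix.hstack(beta0l, alpha0l))

# Dual right basic matrix: Q_r = Q_l**(-1) = [[alpha0r, -b_r], [-beta0r, a_r]]
Q_r = Q_l.inv()
alpha0r, b_r = Q_r[:2, :2], -Q_r[:2, 2:]
beta0r, a_r = -Q_r[2:, :2], Q_r[2:, 2:]

# Three of the direct Bezout identities (4.31)
assert simplify(a_l*alpha0r - b_l*beta0r - eye(2)) == zeros(2, 2)
assert simplify(a_l*b_r - b_l*a_r) == zeros(2, 2)
assert simplify(-beta0l*b_r + alpha0l*a_r - eye(2)) == zeros(2, 2)
```

Since $Q_r$ is constructed as the exact inverse, the identities hold by block multiplication; the sketch merely makes the sign convention of (4.27) explicit.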

4. Applying the new notation, Formula (4.7) can be expressed in the form
\[
\begin{bmatrix} \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix}
= \begin{bmatrix} -M_l(\lambda) & D_l(\lambda) \end{bmatrix} Q_l(\lambda, \bar\alpha_{0l}, \bar\beta_{0l})
= \begin{bmatrix} -M_l(\lambda) & D_l(\lambda) \end{bmatrix}
\begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \bar\beta_{0l}(\lambda) & \bar\alpha_{0l}(\lambda) \end{bmatrix},
\]
which results in
\[
\begin{bmatrix} -M_l(\lambda) & D_l(\lambda) \end{bmatrix}
= \begin{bmatrix} \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix} Q_l^{-1}(\lambda, \bar\alpha_{0l}, \bar\beta_{0l})
= \begin{bmatrix} \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix} Q_r(\lambda, \bar\alpha_{0r}, \bar\beta_{0r}) \tag{4.33}
\]
with the dual basic matrix
\[
Q_r(\lambda, \bar\alpha_{0r}, \bar\beta_{0r}) = \begin{bmatrix} \bar\alpha_{0r}(\lambda) & -b_r(\lambda) \\ -\bar\beta_{0r}(\lambda) & a_r(\lambda) \end{bmatrix} .
\]

Alternatively, (4.33) can be written in the form
\[
D_l(\lambda) = -\beta_{0l}(\lambda)b_r(\lambda) + \alpha_{0l}(\lambda)a_r(\lambda) ,
\qquad
M_l(\lambda) = -\beta_{0l}(\lambda)\bar\alpha_{0r}(\lambda) + \alpha_{0l}(\lambda)\bar\beta_{0r}(\lambda) .
\]
Analogously, Formula (4.28) can be presented in the form
\[
\begin{bmatrix} \alpha_{0r}(\lambda) \\ -\beta_{0r}(\lambda) \end{bmatrix}
= Q_r(\lambda, \bar\alpha_{0r}, \bar\beta_{0r}) \begin{bmatrix} D_r(\lambda) \\ M_r(\lambda) \end{bmatrix},
\]
from which we derive
\[
\begin{bmatrix} D_r(\lambda) \\ M_r(\lambda) \end{bmatrix}
= Q_l(\lambda, \bar\alpha_{0l}, \bar\beta_{0l}) \begin{bmatrix} \alpha_{0r}(\lambda) \\ -\beta_{0r}(\lambda) \end{bmatrix}
\]
or
\[
D_r(\lambda) = a_l(\lambda)\alpha_{0r}(\lambda) - b_l(\lambda)\beta_{0r}(\lambda) ,
\qquad
M_r(\lambda) = \bar\beta_{0l}(\lambda)\alpha_{0r}(\lambda) - \bar\alpha_{0l}(\lambda)\beta_{0r}(\lambda) .
\]

5.
Theorem 4.15. Let the left and right irreducible models $(a_l(\lambda), b_l(\lambda))$, $[a_r(\lambda), b_r(\lambda)]$ of an object be given, which satisfy the condition
\[
a_l(\lambda)b_r(\lambda) = b_l(\lambda)a_r(\lambda) \tag{4.34}
\]
and, moreover, an arbitrary left basic controller $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$. Then a necessary and sufficient condition for the existence of a right basic controller $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$ that is dual to the controller $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ is that the pair $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ is a solution of the Diophantine equation
\[
\alpha_{0l}(\lambda)a_r(\lambda) - \beta_{0l}(\lambda)b_r(\lambda) = I_m .
\]
Proof. The necessity follows from the Bezout identity (4.31). To prove the sufficiency, we notice that, due to the irreducibility of the pair $(a_l(\lambda), b_l(\lambda))$, there exists a pair $[\tilde\alpha_{0r}(\lambda), \tilde\beta_{0r}(\lambda)]$ that fulfils the relation
\[
a_l(\lambda)\tilde\alpha_{0r}(\lambda) - b_l(\lambda)\tilde\beta_{0r}(\lambda) = I_n . \tag{4.35}
\]
Now, build the product
\[
\begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix}
\begin{bmatrix} \tilde\alpha_{0r}(\lambda) & -b_r(\lambda) \\ -\tilde\beta_{0r}(\lambda) & a_r(\lambda) \end{bmatrix}
= \begin{bmatrix} I_n & O_{nm} \\ \beta_{0l}(\lambda)\tilde\alpha_{0r}(\lambda) - \alpha_{0l}(\lambda)\tilde\beta_{0r}(\lambda) & I_m \end{bmatrix},
\]
from which we obtain
\[
\begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix}
\begin{bmatrix} \alpha_{0r}(\lambda) & -b_r(\lambda) \\ -\beta_{0r}(\lambda) & a_r(\lambda) \end{bmatrix}
= \begin{bmatrix} I_n & O_{nm} \\ O_{mn} & I_m \end{bmatrix}, \tag{4.36}
\]
where
\[
\begin{aligned}
\alpha_{0r}(\lambda) &= \tilde\alpha_{0r}(\lambda) + b_r(\lambda)\left[\beta_{0l}(\lambda)\tilde\alpha_{0r}(\lambda) - \alpha_{0l}(\lambda)\tilde\beta_{0r}(\lambda)\right] , \\
\beta_{0r}(\lambda) &= \tilde\beta_{0r}(\lambda) + a_r(\lambda)\left[\beta_{0l}(\lambda)\tilde\alpha_{0r}(\lambda) - \alpha_{0l}(\lambda)\tilde\beta_{0r}(\lambda)\right] .
\end{aligned}
\]
It arises from (4.36) that the last pair is a right basic controller, which is dual to the left basic controller $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$.
Remark 4.16. In analogy, it can be shown that for a right basic controller $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$ there exists a dual left basic controller $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ if and only if the relation
\[
a_l(\lambda)\alpha_{0r}(\lambda) - b_l(\lambda)\beta_{0r}(\lambda) = I_n \tag{4.37}
\]
is fulfilled. Hereby, if the pair $(\tilde\alpha_{0l}(\lambda), \tilde\beta_{0l}(\lambda))$ satisfies the condition
\[
\tilde\alpha_{0l}(\lambda)a_r(\lambda) - \tilde\beta_{0l}(\lambda)b_r(\lambda) = I_m ,
\]
then the formulae
\[
\begin{aligned}
\alpha_{0l}(\lambda) &= \tilde\alpha_{0l}(\lambda) + \left[\tilde\alpha_{0l}(\lambda)\beta_{0r}(\lambda) - \tilde\beta_{0l}(\lambda)\alpha_{0r}(\lambda)\right] b_l(\lambda) , \\
\beta_{0l}(\lambda) &= \tilde\beta_{0l}(\lambda) + \left[\tilde\alpha_{0l}(\lambda)\beta_{0r}(\lambda) - \tilde\beta_{0l}(\lambda)\alpha_{0r}(\lambda)\right] a_l(\lambda)
\end{aligned} \tag{4.38}
\]
define a left basic controller that is dual to the controller $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$.

6. The next theorem supplies a parametrisation of the set of all pairs of dual basic controllers.

Theorem 4.17. Suppose two dual basic controllers $(\bar\alpha_{0l}(\lambda), \bar\beta_{0l}(\lambda))$ and $[\bar\alpha_{0r}(\lambda), \bar\beta_{0r}(\lambda)]$. Then the set of all pairs of dual basic controllers $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$, $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$ is determined by the relations
\[
\begin{aligned}
\alpha_{0l}(\lambda) &= \bar\alpha_{0l}(\lambda) - M(\lambda)b_l(\lambda) , &
\beta_{0l}(\lambda) &= \bar\beta_{0l}(\lambda) - M(\lambda)a_l(\lambda) , \\
\alpha_{0r}(\lambda) &= \bar\alpha_{0r}(\lambda) - b_r(\lambda)M(\lambda) , &
\beta_{0r}(\lambda) &= \bar\beta_{0r}(\lambda) - a_r(\lambda)M(\lambda) ,
\end{aligned} \tag{4.39}
\]
where $M(\lambda)$ is any polynomial matrix of appropriate dimension.

Proof. In order to determine the set of all pairs of dual controllers, we at first notice that from (4.7) and (4.28) it follows that the relations
\[
\begin{aligned}
\alpha_{0l}(\lambda) &= D_l(\lambda)\bar\alpha_{0l}(\lambda) - M_l(\lambda)b_l(\lambda) , &
\beta_{0l}(\lambda) &= D_l(\lambda)\bar\beta_{0l}(\lambda) - M_l(\lambda)a_l(\lambda) , \\
\alpha_{0r}(\lambda) &= \bar\alpha_{0r}(\lambda)D_r(\lambda) - b_r(\lambda)M_r(\lambda) , &
\beta_{0r}(\lambda) &= \bar\beta_{0r}(\lambda)D_r(\lambda) - a_r(\lambda)M_r(\lambda)
\end{aligned} \tag{4.40}
\]

hold, from which we get
\[
\begin{aligned}
Q_l(\lambda, \alpha_{0l}, \beta_{0l}) &= \begin{bmatrix} I_n & O_{nm} \\ -M_l(\lambda) & D_l(\lambda) \end{bmatrix} Q_l(\lambda, \bar\alpha_{0l}, \bar\beta_{0l}) , \\
Q_r(\lambda, \alpha_{0r}, \beta_{0r}) &= Q_r(\lambda, \bar\alpha_{0r}, \bar\beta_{0r}) \begin{bmatrix} D_r(\lambda) & O_{nm} \\ M_r(\lambda) & I_m \end{bmatrix} .
\end{aligned} \tag{4.41}
\]
For the duality of the controllers $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ and $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$, it is necessary and sufficient that the matrices (4.41) satisfy Relation (4.29). But from (4.41), owing to the duality of the controllers $(\bar\alpha_{0l}(\lambda), \bar\beta_{0l}(\lambda))$ and $[\bar\alpha_{0r}(\lambda), \bar\beta_{0r}(\lambda)]$, we get
\[
Q_l(\lambda, \alpha_{0l}, \beta_{0l})\,Q_r(\lambda, \alpha_{0r}, \beta_{0r})
= \begin{bmatrix} D_r(\lambda) & O_{nm} \\ -M_l(\lambda)D_r(\lambda) + D_l(\lambda)M_r(\lambda) & D_l(\lambda) \end{bmatrix},
\]
and Relation (4.29) is fulfilled if and only if
\[
D_r(\lambda) = I_n , \qquad D_l(\lambda) = I_m , \qquad M_l(\lambda) = M_r(\lambda) = M(\lambda) .
\]
Corollary 4.18. Each solution of Equation (4.35) uniquely corresponds to a right dual controller, and each solution of Equation (4.36) uniquely corresponds to a left dual controller.

Remark 4.19. Theorems 4.15 and 4.17 indicate that the pairs of left and right process models, used for building the dual basic controllers, may be chosen arbitrarily, as long as Condition (4.34) holds. If the pairs $(a_l(\lambda), b_l(\lambda))$, $[a_r(\lambda), b_r(\lambda)]$ satisfy Condition (4.34), and the $n\times n$ polynomial matrix $p(\lambda)$ and the $m\times m$ polynomial matrix $q(\lambda)$ are unimodular, then the pairs $(p(\lambda)a_l(\lambda), p(\lambda)b_l(\lambda))$, $[a_r(\lambda)q(\lambda), b_r(\lambda)q(\lambda)]$ fulfil this condition as well. Therefore, we can, for instance, arrange that in (4.34) the matrix $a_l(\lambda)$ is row reduced and the matrix $a_r(\lambda)$ is column reduced.

Remark 4.20. From Theorems 4.15 and 4.17, it follows that any solution $[\alpha_{0r}(\lambda), \beta_{0r}(\lambda)]$ of the Diophantine Equation (4.37) can be used as a first right basic controller. Then the corresponding dual left basic controller is found by Formula (4.38). After that, the complete set of all pairs of dual basic controllers is constructed by Relations (4.40).

4.5 Eigenvalue Assignment for Polynomial Pairs

1. As stated in Section 4.1, the eigenvalue assignment problem for the pair $(a_l(\lambda), b_l(\lambda))$ amounts to finding the set of controllers $(\alpha_l(\lambda), \beta_l(\lambda))$ which satisfy the condition
\[
\det Q_l(\lambda, \alpha_l, \beta_l) \sim d(\lambda) , \tag{4.42}
\]
where $d(\lambda)$ is a prescribed monic polynomial, the symbol $\sim$ denotes equality up to a nonzero constant factor, and
\[
Q_l(\lambda, \alpha_l, \beta_l) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix} . \tag{4.43}
\]
The general solution of the formulated problem in the case of an irreducible process is provided by the following theorem.

Theorem 4.21. Let the process $(a_l(\lambda), b_l(\lambda))$ be irreducible. Then Equation (4.42) is solvable for any polynomial $d(\lambda)$. Thereby, if $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ is a certain basic controller for the process $(a_l(\lambda), b_l(\lambda))$, then the set of all controllers $(\alpha_l(\lambda), \beta_l(\lambda))$ satisfying (4.42) can be represented in the form
\[
\alpha_l(\lambda) = D_l(\lambda)\alpha_{0l}(\lambda) - M_l(\lambda)b_l(\lambda) ,
\qquad
\beta_l(\lambda) = D_l(\lambda)\beta_{0l}(\lambda) - M_l(\lambda)a_l(\lambda) , \tag{4.44}
\]
where the $m\times n$ polynomial matrix $M_l(\lambda)$ is arbitrary, and for the $m\times m$ polynomial matrix $D_l(\lambda)$ the condition
\[
\det D_l(\lambda) \sim d(\lambda)
\]
is valid. Besides, the pair $(\alpha_l(\lambda), \beta_l(\lambda))$ is irreducible if and only if the pair $(D_l(\lambda), M_l(\lambda))$ is irreducible.

Proof. Denote the set of solutions of Equation (4.42) by $N_0$, and the set of pairs (4.44) by $N_p$. Let $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ be a certain basic controller. Then the matrices
\[
Q_l(\lambda, \alpha_{0l}, \beta_{0l}) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix},
\qquad
Q_l^{-1}(\lambda, \alpha_{0l}, \beta_{0l}) = Q_r(\lambda, \alpha_{0r}, \beta_{0r}) = \begin{bmatrix} \alpha_{0r}(\lambda) & -b_r(\lambda) \\ -\beta_{0r}(\lambda) & a_r(\lambda) \end{bmatrix} \tag{4.45}
\]
are unimodular, and the condition
\[
Q_l(\lambda, \alpha_{0l}, \beta_{0l})\,Q_r(\lambda, \alpha_{0r}, \beta_{0r}) = I_{n+m}
\]
holds. Let $(\alpha_l(\lambda), \beta_l(\lambda))$ be a controller satisfying Equation (4.42). Then, using (4.34), (4.43), (4.45) and the Bezout identity (4.31), we get
\[
Q_l(\lambda, \alpha_l, \beta_l)\,Q_r(\lambda, \alpha_{0r}, \beta_{0r})
= \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix}
\begin{bmatrix} \alpha_{0r}(\lambda) & -b_r(\lambda) \\ -\beta_{0r}(\lambda) & a_r(\lambda) \end{bmatrix}
= \begin{bmatrix} I_n & O_{nm} \\ -M_l(\lambda) & D_l(\lambda) \end{bmatrix} \tag{4.46}
\]
with
\[
D_l(\lambda) = -\beta_l(\lambda)b_r(\lambda) + \alpha_l(\lambda)a_r(\lambda) ,
\qquad
M_l(\lambda) = -\beta_l(\lambda)\alpha_{0r}(\lambda) + \alpha_l(\lambda)\beta_{0r}(\lambda) . \tag{4.47}
\]
Applying (4.46) and (4.47), we find
\[
Q_l(\lambda, \alpha_l, \beta_l) = N_l(\lambda)\,Q_l(\lambda, \alpha_{0l}, \beta_{0l}) , \tag{4.48}
\]
where
\[
N_l(\lambda) = \begin{bmatrix} I_n & O_{nm} \\ -M_l(\lambda) & D_l(\lambda) \end{bmatrix}, \tag{4.49}
\]
from which we read off (4.44). Calculating the determinant on both sides of (4.48) shows that
\[
\det Q_l(\lambda, \alpha_l, \beta_l) \sim \det D_l(\lambda) \sim d(\lambda) .
\]
Thus $N_0 \subseteq N_p$ is proven. By reversing the conclusions, we deduce as in Theorem 4.1 that also $N_p \subseteq N_0$ is true. Therefore, the sets $N_0$ and $N_p$ coincide.
Notice that Formulae (4.44) may be written in the shape
\[
\begin{bmatrix} \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix}
= \begin{bmatrix} -M_l(\lambda) & D_l(\lambda) \end{bmatrix} Q_l(\lambda, \alpha_{0l}, \beta_{0l})
= \begin{bmatrix} -M_l(\lambda) & D_l(\lambda) \end{bmatrix}
\begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix} .
\]
Since the matrix $Q_l(\lambda, \alpha_{0l}, \beta_{0l})$ is unimodular, the matrices $\begin{bmatrix} \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix}$ and $\begin{bmatrix} -M_l(\lambda) & D_l(\lambda) \end{bmatrix}$ are right-equivalent, and that is why the pair $(\alpha_l(\lambda), \beta_l(\lambda))$ is irreducible if and only if the pair $(D_l(\lambda), M_l(\lambda))$ is irreducible.

2.
Example 4.22. For a prescribed polynomial $d(\lambda)$, the solution set of the eigenvalue assignment problem for the process (4.24) in Example 4.12 has the form
\[
\alpha_l(\lambda) = \begin{bmatrix} d_{11}(\lambda) & d_{12}(\lambda) \\ d_{21}(\lambda) & d_{22}(\lambda) \end{bmatrix}
\begin{bmatrix} -(0.5\lambda + 1) & 0 \\ 0 & 1 \end{bmatrix}
- \begin{bmatrix} m_{11}(\lambda) & m_{12}(\lambda) \\ m_{21}(\lambda) & m_{22}(\lambda) \end{bmatrix}
\begin{bmatrix} 0 & \lambda \\ \lambda & 0 \end{bmatrix},
\]
\[
\beta_l(\lambda) = \begin{bmatrix} d_{11}(\lambda) & d_{12}(\lambda) \\ d_{21}(\lambda) & d_{22}(\lambda) \end{bmatrix}
\begin{bmatrix} 0.5 & 0 \\ 0 & 0 \end{bmatrix}
- \begin{bmatrix} m_{11}(\lambda) & m_{12}(\lambda) \\ m_{21}(\lambda) & m_{22}(\lambda) \end{bmatrix}
\begin{bmatrix} \lambda-1 & \lambda+1 \\ 0 & 1 \end{bmatrix} .
\]
Here the $m_{ik}(\lambda)$ are arbitrary polynomials, and the $d_{ik}(\lambda)$ are arbitrary polynomials bound by the condition
\[
d_{11}(\lambda)d_{22}(\lambda) - d_{21}(\lambda)d_{12}(\lambda) \sim d(\lambda) . \qquad \square
\]
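That $\det Q_l(\lambda, \alpha_l, \beta_l) \sim d(\lambda)$ holds independently of the free parameters $m_{ik}(\lambda)$ can be confirmed numerically. A sketch (assuming Python with sympy; the choice $d(\lambda) = \lambda^2$ with $D_l = \mathrm{diag}(\lambda^2, 1)$ and constant free parameters is illustrative):

```python
from sympy import symbols, Matrix, Rational, expand

lam, m11, m12, m21, m22 = symbols('lam m11 m12 m21 m22')

# Process and basic controller of Example 4.12 (data assumed as reconstructed)
a = Matrix([[lam - 1, lam + 1], [0, 1]])
b = Matrix([[0, lam], [lam, 0]])
beta0  = Matrix([[Rational(1, 2), 0], [0, 0]])
alpha0 = Matrix([[-(lam/2 + 1), 0], [0, 1]])

# Illustrative choice d(lam) = lam**2: D_l = diag(lam**2, 1), M_l free
D_l = Matrix([[lam**2, 0], [0, 1]])
M_l = Matrix([[m11, m12], [m21, m22]])

alpha_l = D_l*alpha0 - M_l*b        # formulas (4.44)
beta_l  = D_l*beta0  - M_l*a

Q = Matrix.vstack(Matrix.hstack(a, b), Matrix.hstack(beta_l, alpha_l))
# det Q = det D_l * det Q_l(lam, alpha0, beta0) = lam**2, whatever the m_ik
assert expand(Q.det() - lam**2) == 0
```

The determinant collapses to $\det D_l(\lambda)$ because, by (4.48), $Q_l(\lambda, \alpha_l, \beta_l)$ differs from the unimodular basic matrix only by the factor $N_l(\lambda)$.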

Example 4.23. The set of solutions of Equation (4.42) for the process (4.25) in Example 4.13 has the form
\[
\alpha_l(\lambda) = k\,d(\lambda)d_3(\lambda) - m_1(\lambda) ,
\qquad
\beta_l(\lambda) = -0.5\,k\,d(\lambda)\begin{bmatrix} 1 & 1 \end{bmatrix}
- \bigl(m_1(\lambda) + m_2(\lambda)\bigr)\begin{bmatrix} \lambda-1 & \lambda+1 \end{bmatrix} ,
\]
where $k \neq 0$ is a constant and $d_3(\lambda)$, $m_1(\lambda)$, $m_2(\lambda)$ are arbitrary polynomials. □

3. Now, consider the question of how the solution of Equation (4.42) looks when the process $(a_l(\lambda), b_l(\lambda))$ is reducible. In this case, with respect to the results in Section 1.12, there exists a latent square $n\times n$ polynomial matrix $q(\lambda)$, such that
\[
a_l(\lambda) = q(\lambda)a_{l1}(\lambda) , \qquad b_l(\lambda) = q(\lambda)b_{l1}(\lambda) \tag{4.50}
\]
is true with an irreducible pair $(a_{l1}(\lambda), b_{l1}(\lambda))$. The solvability conditions for Equation (4.42) in the case (4.50) are stated in the following theorem.

Theorem 4.24. Let (4.50) be valid and $\det q(\lambda) = \Delta(\lambda)$. Then a necessary and sufficient condition for the solvability of Equation (4.42) is that the polynomial $d(\lambda)$ is divisible by $\Delta(\lambda)$. Thus, if $(\tilde\alpha_{0l}(\lambda), \tilde\beta_{0l}(\lambda))$ is a certain basic controller for the process $(a_{l1}(\lambda), b_{l1}(\lambda))$, then the set of all controllers satisfying Equation (4.42) is bound by the relations
\[
\alpha_l(\lambda) = \tilde D_l(\lambda)\tilde\alpha_{0l}(\lambda) - \tilde M_l(\lambda)b_{l1}(\lambda) ,
\qquad
\beta_l(\lambda) = \tilde D_l(\lambda)\tilde\beta_{0l}(\lambda) - \tilde M_l(\lambda)a_{l1}(\lambda) , \tag{4.51}
\]
where the $m\times n$ polynomial matrix $\tilde M_l(\lambda)$ is arbitrary, and the $m\times m$ polynomial matrix $\tilde D_l(\lambda)$ satisfies the condition $\det \tilde D_l(\lambda) \sim \tilde d(\lambda)$. Here, the polynomial $\tilde d(\lambda)$ is determined by
\[
\tilde d(\lambda) = \frac{d(\lambda)}{\Delta(\lambda)} . \tag{4.52}
\]
Proof. Let (4.50) be true. Then (4.42) can be presented in the shape
\[
\det\left\{\begin{bmatrix} q(\lambda) & O_{nm} \\ O_{mn} & I_m \end{bmatrix} \tilde Q_l(\lambda, \alpha_l, \beta_l)\right\} \sim d(\lambda) , \tag{4.53}
\]
where
\[
\tilde Q_l(\lambda, \alpha_l, \beta_l) = \begin{bmatrix} a_{l1}(\lambda) & b_{l1}(\lambda) \\ \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix} .
\]
Calculating the determinants, we find
\[
\Delta(\lambda)\,\det \tilde Q_l(\lambda, \alpha_l, \beta_l) \sim d(\lambda) ,
\]
i.e., for the solvability of Equation (4.53), it is necessary that the polynomial $d(\lambda)$ is divisible by $\Delta(\lambda)$. If this condition is ensured and (4.52) is used, then Equation (4.53) leads to
\[
\det \tilde Q_l(\lambda, \alpha_l, \beta_l) \sim \tilde d(\lambda) .
\]
Since the pair $(a_{l1}(\lambda), b_{l1}(\lambda))$ is irreducible, this equation is always solvable thanks to Theorem 4.17, and its solution has the shape (4.51).

4. Let $(a_l(\lambda), b_l(\lambda))$ be an irreducible process and $(\alpha_l(\lambda), \beta_l(\lambda))$ a controller such that $\det Q_l(\lambda, \alpha_l, \beta_l) = d(\lambda) \not\equiv 0$ becomes true. Furthermore, let $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ be a certain basic controller. Then, owing to Theorem 4.17, there exist $m\times m$ and $m\times n$ polynomial matrices $D_l(\lambda)$ and $M_l(\lambda)$, such that
\[
\alpha_l(\lambda) = D_l(\lambda)\alpha_{0l}(\lambda) - M_l(\lambda)b_l(\lambda) ,
\qquad
\beta_l(\lambda) = D_l(\lambda)\beta_{0l}(\lambda) - M_l(\lambda)a_l(\lambda) , \tag{4.54}
\]
where $\det D_l(\lambda) \sim d(\lambda)$. Relations (4.54) are called the basic representation of the controller $(\alpha_l(\lambda), \beta_l(\lambda))$ with respect to the basis $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$.
Theorem 4.25. The basic representation (4.54) is unique in the sense that from the validity of (4.54) and of the relation
\[
\alpha_l(\lambda) = D_{l1}(\lambda)\alpha_{0l}(\lambda) - M_{l1}(\lambda)b_l(\lambda) ,
\qquad
\beta_l(\lambda) = D_{l1}(\lambda)\beta_{0l}(\lambda) - M_{l1}(\lambda)a_l(\lambda) , \tag{4.55}
\]
we can conclude $D_{l1}(\lambda) = D_l(\lambda)$, $M_{l1}(\lambda) = M_l(\lambda)$.

Proof. Suppose (4.54) and (4.55) are fulfilled at the same time. Subtracting (4.55) from (4.54), we get
\[
\left[D_l(\lambda) - D_{l1}(\lambda)\right]\alpha_{0l}(\lambda) - \left[M_l(\lambda) - M_{l1}(\lambda)\right]b_l(\lambda) = O_{mm} ,
\]
\[
\left[D_l(\lambda) - D_{l1}(\lambda)\right]\beta_{0l}(\lambda) - \left[M_l(\lambda) - M_{l1}(\lambda)\right]a_l(\lambda) = O_{mn} ,
\]
which is equivalent to
\[
\begin{bmatrix} -\left[M_l(\lambda) - M_{l1}(\lambda)\right] & D_l(\lambda) - D_{l1}(\lambda) \end{bmatrix} Q_l(\lambda, \alpha_{0l}, \beta_{0l}) = O_{m,m+n} .
\]
From this, it follows immediately that $M_{l1}(\lambda) = M_l(\lambda)$, $D_{l1}(\lambda) = D_l(\lambda)$, because the matrix $Q_l(\lambda, \alpha_{0l}, \beta_{0l})$ is unimodular.

4.6 Eigenvalue Assignment by Transfer Matrices

1. In the case $\det \alpha_l(\lambda) \not\equiv 0$, which means that the pair $(\alpha_l(\lambda), \beta_l(\lambda))$ is not singular, the transfer function of the controller
\[
w_\theta(\lambda) = \alpha_l^{-1}(\lambda)\beta_l(\lambda) \tag{4.56}
\]
may be included in our considerations. Its standard form (2.21) can be written as
\[
w_\theta(\lambda) = \frac{M_\theta(\lambda)}{d_\theta(\lambda)} , \tag{4.57}
\]
for which Relation (4.56) defines a certain LMFD. Conversely, if the transfer function of the controller is given in the standard form (4.57), then various LMFDs (4.56) and the corresponding characteristic matrices
\[
Q_l(\lambda, \alpha_l, \beta_l) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix} \tag{4.58}
\]
can be investigated. Besides, every LMFD (4.56) is uniquely related to a characteristic polynomial $\Delta(\lambda) = \det Q_l(\lambda, \alpha_l, \beta_l)$.
In what follows, we will say that the transfer matrix $w_\theta(\lambda)$ is a solution of the eigenvalue assignment for the process $(a_l(\lambda), b_l(\lambda))$ if it allows an LMFD (4.56) such that the corresponding pair $(\alpha_l(\lambda), \beta_l(\lambda))$ satisfies Equation (4.42).

2. The set of transfer matrices (4.57) that supply the solution of the eigenvalue assignment is characterised in general by the next theorem.
Theorem 4.26. Let the pair $(a_l(\lambda), b_l(\lambda))$ be irreducible and $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ be an appropriate left basic controller. Then, for the transfer matrix (4.56) to be a solution of Equation (4.42), it is necessary and sufficient that it allows a representation of the form
\[
w_\theta(\lambda) = \left[\alpha_{0l}(\lambda) - \Phi(\lambda)b_l(\lambda)\right]^{-1}\left[\beta_{0l}(\lambda) - \Phi(\lambda)a_l(\lambda)\right] , \tag{4.59}
\]
where $\Phi(\lambda)$ is a broken rational $m\times n$ matrix, for which there exists an LMFD
\[
\Phi(\lambda) = D_l^{-1}(\lambda)M_l(\lambda) , \tag{4.60}
\]
where $\det D_l(\lambda) \sim d(\lambda)$ is true and the polynomial matrix $M_l(\lambda)$ is arbitrary.

Proof. Sufficiency: Suppose the LMFD (4.60). Then from (4.59) we get
\[
w_\theta(\lambda) = \left[D_l(\lambda)\alpha_{0l}(\lambda) - M_l(\lambda)b_l(\lambda)\right]^{-1}\left[D_l(\lambda)\beta_{0l}(\lambda) - M_l(\lambda)a_l(\lambda)\right] . \tag{4.61}
\]
Thus, the set of equations
\[
\alpha_l(\lambda) = D_l(\lambda)\alpha_{0l}(\lambda) - M_l(\lambda)b_l(\lambda) ,
\qquad
\beta_l(\lambda) = D_l(\lambda)\beta_{0l}(\lambda) - M_l(\lambda)a_l(\lambda) \tag{4.62}
\]
describes a controller satisfying Relation (4.42).
Necessity: If (4.56) and $\det Q_l(\lambda, \alpha_l, \beta_l) \sim d(\lambda)$ are true, then for the matrices $\alpha_l(\lambda)$ and $\beta_l(\lambda)$ we can find a basic representation (4.54), and under the invertibility condition for the matrix $\alpha_l(\lambda)$ we obtain (4.59), which completes the proof.

Corollary 4.27. From (4.59) we learn that the transfer matrices $w_\theta(\lambda)$ defined by the solution set of Equation (4.42) depend on a matrix parameter, namely the fractional rational matrix $\Phi(\lambda)$.

3. Let the transfer function of the controller be given in the form (4.59). Then, under Condition (4.60), it can be represented in the form of the LMFD (4.56), where the matrices $\alpha_l(\lambda)$, $\beta_l(\lambda)$ are determined by (4.62). For applications, the question of the irreducibility of the pair (4.62) is important.

Theorem 4.28. The pair (4.62) is irreducible exactly when the pair $[D_l(\lambda), M_l(\lambda)]$ is irreducible, i.e., when the right side of (4.59) is an ILMFD.

Proof. The proof follows directly from Theorem 4.17.

4. Let the process $(a_l(\lambda), b_l(\lambda))$ and a certain fractional rational $m\times n$ matrix $w_\theta(\lambda)$ be given, for which the expression (4.56) defines a certain LMFD. Thus, if
\[
\det Q_l(\lambda, \alpha_l, \beta_l) = \det \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix} \sim d(\lambda) ,
\]
then, owing to Theorem 4.26, the matrix $w_\theta(\lambda)$ can be represented in the form (4.59), (4.61), where $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$ is a certain basic controller. Under these circumstances, the notation (4.59) of the matrix $w_\theta(\lambda)$ is called its basic representation with respect to the basis $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$.

Theorem 4.29. For a fixed basic controller $(\alpha_{0l}(\lambda), \beta_{0l}(\lambda))$, the basic representation (4.59) is unique in the sense that the simultaneous validity of (4.59) and of
\[
w_\theta(\lambda) = \left[\alpha_{0l}(\lambda) - \Phi_1(\lambda)b_l(\lambda)\right]^{-1}\left[\beta_{0l}(\lambda) - \Phi_1(\lambda)a_l(\lambda)\right] \tag{4.63}
\]
implies the equality $\Phi(\lambda) = \Phi_1(\lambda)$.

Proof. Without loss of generality, we suppose that the right side of (4.60) is an ILMFD. Then, owing to Theorem 4.26, the right side of (4.61) is an ILMFD of the matrix $w_\theta(\lambda)$. In addition, let us have the LMFD
\[
\Phi_1(\lambda) = D_1^{-1}(\lambda)M_1(\lambda) .
\]
Then from (4.63), for the matrix $w_\theta(\lambda)$, we obtain the LMFD
\[
w_\theta(\lambda) = \left[D_1(\lambda)\alpha_{0l}(\lambda) - M_1(\lambda)b_l(\lambda)\right]^{-1}\left[D_1(\lambda)\beta_{0l}(\lambda) - M_1(\lambda)a_l(\lambda)\right] . \tag{4.64}
\]
This relation and (4.61) define two different LMFDs of the matrix $w_\theta(\lambda)$. By supposition, the LMFD (4.61) is irreducible, so with respect to Statement 2.3 on page 64, we obtain
\[
D_1(\lambda)\alpha_{0l}(\lambda) - M_1(\lambda)b_l(\lambda) = U(\lambda)\left[D_l(\lambda)\alpha_{0l}(\lambda) - M_l(\lambda)b_l(\lambda)\right] ,
\]
\[
D_1(\lambda)\beta_{0l}(\lambda) - M_1(\lambda)a_l(\lambda) = U(\lambda)\left[D_l(\lambda)\beta_{0l}(\lambda) - M_l(\lambda)a_l(\lambda)\right] ,
\]
where $U(\lambda)$ is a non-singular $m\times m$ polynomial matrix. These relations can be written as
\[
\begin{bmatrix} -\left[M_1(\lambda) - U(\lambda)M_l(\lambda)\right] & D_1(\lambda) - U(\lambda)D_l(\lambda) \end{bmatrix} Q_l(\lambda, \alpha_{0l}, \beta_{0l}) = O_{m,m+n} , \tag{4.65}
\]
where
\[
Q_l(\lambda, \alpha_{0l}, \beta_{0l}) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_{0l}(\lambda) & \alpha_{0l}(\lambda) \end{bmatrix} .
\]
Since this matrix is unimodular by construction, it follows from (4.65) that
\[
M_1(\lambda) = U(\lambda)M_l(\lambda) , \qquad D_1(\lambda) = U(\lambda)D_l(\lambda) .
\]
Thus we derive
\[
\Phi(\lambda) = D_l^{-1}(\lambda)M_l(\lambda) = D_1^{-1}(\lambda)M_1(\lambda) = \Phi_1(\lambda) ,
\]
which completes the proof.

4.7 Structural Eigenvalue Assignment for Polynomial Pairs

1. The solution of the structural eigenvalue assignment for an irreducible process $(a_l(\lambda), b_l(\lambda))$ by the controller $(\alpha_l(\lambda), \beta_l(\lambda))$ is based on the following statement.

Theorem 4.30. Let the process $(a_l(\lambda), b_l(\lambda))$ be irreducible, and let the controller $(\alpha_l(\lambda), \beta_l(\lambda))$ have the basic representation (4.54). Then the matrices
\[
Q_l(\lambda, \alpha_l, \beta_l) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta_l(\lambda) & \alpha_l(\lambda) \end{bmatrix},
\qquad
S(\lambda) = \begin{bmatrix} I_n & O_{nm} \\ O_{mn} & D_l(\lambda) \end{bmatrix} \tag{4.66}
\]
are equivalent, and this fact does not depend on the matrix $M_l(\lambda)$.

Proof. Noticing that
\[
\begin{bmatrix} I_n & O_{nm} \\ -M_l(\lambda) & D_l(\lambda) \end{bmatrix}
= \begin{bmatrix} I_n & O_{nm} \\ -M_l(\lambda) & I_m \end{bmatrix}
\begin{bmatrix} I_n & O_{nm} \\ O_{mn} & D_l(\lambda) \end{bmatrix},
\]
Relations (4.48), (4.49) can be written in the form
\[
Q_l(\lambda, \alpha_l, \beta_l) = \begin{bmatrix} I_n & O_{nm} \\ -M_l(\lambda) & I_m \end{bmatrix} S(\lambda)\, Q_l(\lambda, \alpha_{0l}, \beta_{0l}) .
\]
The first and the last factor on the right side are unimodular matrices, and therefore the matrices (4.66) are equivalent.

Theorem 4.31. Let $a_1(\lambda), \dots, a_{n+m}(\lambda)$ and $b_1(\lambda), \dots, b_m(\lambda)$ be the sequences of invariant polynomials of the matrices $Q_l(\lambda, \alpha_l, \beta_l)$ and $D_l(\lambda)$, respectively. Then the equations
\[
a_1(\lambda) = a_2(\lambda) = \dots = a_n(\lambda) = 1 \tag{4.67}
\]
and furthermore
\[
a_{n+i}(\lambda) = b_i(\lambda), \qquad (i = 1, \dots, m) \tag{4.68}
\]
hold.

Proof. Let $b_1(\lambda), \dots, b_m(\lambda)$ be the sequence of invariant polynomials of $D_l(\lambda)$. Then the sequence of invariant polynomials of $S(\lambda)$ is equal to $1, \dots, 1, b_1(\lambda), \dots, b_m(\lambda)$. But the matrices $S(\lambda)$ and $Q_l(\lambda, \alpha_l, \beta_l)$ are equivalent; hence their sequences of invariant polynomials coincide, which means that Equations (4.67) and (4.68) are correct.

Corollary 4.32. Theorem 4.31 supplies a constructive procedure for the design of closed systems with a prescribed sequence of invariant polynomials of the characteristic matrix. Indeed, let a sequence of monic polynomials $b_1(\lambda), \dots, b_m(\lambda)$ with
\[
b_1(\lambda) \cdots b_m(\lambda) \sim d(\lambda)
\]
be given, where for all $i = 2, \dots, m$ the polynomial $b_i(\lambda)$ is divisible by $b_{i-1}(\lambda)$. Then we take
\[
D_l(\lambda) = p(\lambda)\,\mathrm{diag}\{b_1(\lambda), \dots, b_m(\lambda)\}\,q(\lambda) ,
\]
where $p(\lambda)$, $q(\lambda)$ are unimodular matrices. After that, independently of the selection of $M_l(\lambda)$ in (4.54), the sequence of the last $m$ invariant polynomials of the matrix $Q_l(\lambda, \alpha_l, \beta_l)$ coincides with the sequence $b_1(\lambda), \dots, b_m(\lambda)$.
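Invariant polynomials can be computed directly from their definition as quotients of GCDs of $k$-th order minors. The following helper (a sketch assuming Python with sympy; the test matrix is an illustrative choice, not taken from the text) implements this:

```python
from itertools import combinations
from sympy import symbols, Matrix, gcd, cancel, expand, LC

lam = symbols('lam')

def gcd_of_minors(M, k):
    """Monic GCD of all k-th order minors of M."""
    g = None
    rows, cols = M.shape
    for r in combinations(range(rows), k):
        for c in combinations(range(cols), k):
            d = M.extract(list(r), list(c)).det()
            g = d if g is None else gcd(g, d)
    g = cancel(g)
    return cancel(g / LC(g, lam)) if g != 0 else g

def invariant_polys(M):
    """Invariant polynomials a_k = d_k / d_(k-1), the Smith-form diagonal."""
    out, prev = [], 1
    for k in range(1, min(M.shape) + 1):
        dk = gcd_of_minors(M, k)
        out.append(cancel(dk / prev))
        prev = dk
    return out

# Illustrative D_l whose invariant polynomials are lam and lam*(lam - 1)
D_l = Matrix([[lam, 0], [lam, lam*(lam - 1)]])
assert [expand(p) for p in invariant_polys(D_l)] == [lam, lam**2 - lam]
```

Applying `invariant_polys` to a closed-loop matrix $Q_l(\lambda, \alpha_l, \beta_l)$ and to the chosen $D_l(\lambda)$ exhibits the coincidence asserted by Theorem 4.31.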

Corollary 4.33. If the process $(a_l(\lambda), b_l(\lambda))$ is irreducible, then there exists a set of controllers for which the matrix $Q_l(\lambda, \alpha_l, \beta_l)$ becomes simple. This happens exactly when the matrix $D_l(\lambda)$ is simple, i.e., it allows the representation
\[
D_l(\lambda) = p(\lambda)\,\mathrm{diag}\{1, \dots, 1, d(\lambda)\}\,q(\lambda)
\]
with unimodular matrices $p(\lambda)$, $q(\lambda)$.

Corollary 4.34. Let irreducible left and right models of the process $(a_l(\lambda), b_l(\lambda))$ and $[a_r(\lambda), b_r(\lambda)]$ be given. Then the sequence of invariant polynomials $a_{n+1}(\lambda), \dots, a_{n+m}(\lambda)$ of the characteristic matrix $Q_l(\lambda, \alpha_l, \beta_l)$ coincides with the sequence of invariant polynomials of the matrix
\[
D_l(\lambda) = -\beta_l(\lambda)b_r(\lambda) + \alpha_l(\lambda)a_r(\lambda) ,
\]
which is a direct consequence of Theorem 4.30 and Equations (4.47).



4.8 Eigenvalue and Eigenstructure Assignment for PMD Processes

1. In the present section, the general solution of the eigenvalue assignment (4.6) for non-singular PMD processes is developed. Moreover, the set of sequences of invariant polynomials is described for which the structural eigenvalue assignment is solvable.

Theorem 4.35. Let the PMD (4.4) be non-singular and minimal, and let
\[
w(\lambda) = c(\lambda)a^{-1}(\lambda)b(\lambda) \tag{4.69}
\]
be its corresponding transfer matrix. Furthermore, let us have the ILMFD
\[
w(\lambda) = a_l^{-1}(\lambda)b_l(\lambda) . \tag{4.70}
\]
Then the eigenvalue assignment problem (4.5) is solvable for any polynomial $d(\lambda)$. Besides, the set of pairs $(\alpha(\lambda), \beta(\lambda))$ that are solutions of (4.6) coincides with the set of pairs that are determined as solutions of the eigenvalue assignment for the irreducible pair $(a_l(\lambda), b_l(\lambda))$, and these may be produced on the base of Theorem 4.17.

Preparing the proof, some auxiliary statements are given.

Lemma 4.36. For the non-singular PMD (4.4), the formula
\[
\det Q_\tau(\lambda, \alpha, \beta) = \det a(\lambda)\,\det\left[\alpha(\lambda) - \beta(\lambda)w(\lambda)\right] \tag{4.71}
\]
holds, where the matrix $Q_\tau(\lambda, \alpha, \beta)$ is established in (4.5).

Proof. The matrix $Q_\tau(\lambda, \alpha, \beta)$ is brought into the form
\[
Q_\tau(\lambda, \alpha, \beta) = \begin{bmatrix} A(\lambda) & B(\lambda) \\ C(\lambda) & D(\lambda) \end{bmatrix}, \tag{4.72}
\]
where
\[
A(\lambda) = \begin{bmatrix} a(\lambda) & O_{pn} \\ -c(\lambda) & I_n \end{bmatrix},
\qquad
B(\lambda) = \begin{bmatrix} b(\lambda) \\ O_{nm} \end{bmatrix}, \tag{4.73}
\]
\[
C(\lambda) = \begin{bmatrix} O_{mp} & \beta(\lambda) \end{bmatrix},
\qquad
D(\lambda) = \alpha(\lambda) . \tag{4.74}
\]
Under the taken propositions, we have
\[
\det A(\lambda) = \det a(\lambda) \not\equiv 0 . \tag{4.75}
\]
Therefore, the well-known formula [51]
\[
\det Q_\tau(\lambda, \alpha, \beta) = \det A(\lambda)\,\det\left[D(\lambda) - C(\lambda)A^{-1}(\lambda)B(\lambda)\right] \tag{4.76}
\]
is applicable. Observing
\[
A^{-1}(\lambda) = \begin{bmatrix} a^{-1}(\lambda) & O_{pn} \\ c(\lambda)a^{-1}(\lambda) & I_n \end{bmatrix}
\]
and (4.72)–(4.75), we obtain (4.71).
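Formula (4.76) is easy to validate on a small polynomial example (sketch assuming Python with sympy; the blocks are illustrative choices, not taken from the text):

```python
from sympy import symbols, Matrix, simplify

lam = symbols('lam')

# Blocks chosen arbitrarily, with det A(lam) not identically zero
A = Matrix([[lam + 1, 1], [0, lam]])
B = Matrix([[1], [lam]])
C = Matrix([[lam, 1]])
D = Matrix([[lam**2]])

Q = Matrix.vstack(Matrix.hstack(A, B), Matrix.hstack(C, D))
# Schur-type block-determinant formula (4.76)
assert simplify(Q.det() - A.det()*(D - C*A.inv()*B).det()) == 0
```

The identity holds for any blocks with invertible $A(\lambda)$; the proof of Lemma 4.36 applies it to the particular block partition (4.73), (4.74).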

Lemma 4.37. Let the non-singular PMD (4.4) and its corresponding transfer matrix (4.69) be given, for which Relation (4.70) defines an ILMFD. Consider the matrix
\[
\bar Q_l(\lambda, \alpha, \beta) = \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta(\lambda) & \alpha(\lambda) \end{bmatrix}, \tag{4.77}
\]
where the matrices $\alpha(\lambda)$ and $\beta(\lambda)$ are defined as in (4.5). If, under this condition, the PMD (4.4) is minimal, then
\[
\det Q_\tau(\lambda, \alpha, \beta) \sim \det \bar Q_l(\lambda, \alpha, \beta) . \tag{4.78}
\]
Proof. Applying Formula (4.76) to Matrix (4.77), we find
\[
\det \bar Q_l(\lambda, \alpha, \beta) = \det a_l(\lambda)\,\det\left[\alpha(\lambda) - \beta(\lambda)w(\lambda)\right] . \tag{4.79}
\]
Consider now the ILMFD
\[
c(\lambda)a^{-1}(\lambda) = a_1^{-1}(\lambda)c_1(\lambda) .
\]
Since the left side is an IRMFD, the relation
\[
\det a_1(\lambda) \sim \det a(\lambda)
\]
holds. Thus, due to Lemma 2.9, the expression
\[
w(\lambda) = a_1^{-1}(\lambda)\left[c_1(\lambda)b(\lambda)\right]
\]
defines an ILMFD of the matrix $w(\lambda)$. This expression and (4.70) define at the same time ILMFDs of the matrix $w(\lambda)$, so we have
\[
\det a_1(\lambda) \sim \det a(\lambda) \sim \det a_l(\lambda) .
\]
Using this and (4.70), from (4.79) we obtain Statement (4.78).

Proof of Theorem 4.35. The minimality of the PMD (4.4) and Lemma 4.37 imply that the sets of solutions of (4.6) and of the equation
\[
\det \bar Q_l(\lambda, \alpha, \beta) \sim d(\lambda)
\]
coincide.

2. The next theorem supplies the solution of the eigenvalue assignment for the case when the PMD (4.4) is not minimal.

Theorem 4.38. Let the non-singular PMD (4.4) be not minimal, and let Relation (4.70) describe an ILMFD of the transfer matrix $w(\lambda)$. Then the relation
\[
\varphi(\lambda) = \frac{\det a(\lambda)}{\det a_l(\lambda)} \tag{4.80}
\]
turns out to be a polynomial. Thereby, Equation (4.6) is solvable exactly when
\[
d(\lambda) = \varphi(\lambda)d_1(\lambda) , \tag{4.81}
\]
where $d_1(\lambda)$ is any polynomial. If (4.81) is true, then the set of controllers that are solutions of Equation (4.5) coincides with the set of solutions of the equation
\[
\det \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta(\lambda) & \alpha(\lambda) \end{bmatrix} \sim d_1(\lambda) . \tag{4.82}
\]
This solution set can be constructed with the help of Theorem 4.17.

Proof. Owing to Lemma 2.48, Relation (4.80) is a polynomial. With the help of (4.71) and (4.80), we obtain
\[
\det Q_\tau(\lambda, \alpha, \beta) = \det a(\lambda)\,\det\left[\alpha(\lambda) - \beta(\lambda)w(\lambda)\right]
= \varphi(\lambda)\,\det a_l(\lambda)\,\det\left[\alpha(\lambda) - \beta(\lambda)w(\lambda)\right] .
\]
Using (4.79), we find out that Equation (4.5) leads to
\[
\varphi(\lambda)\,\det \bar Q_l(\lambda, \alpha, \beta) \sim d(\lambda) . \tag{4.83}
\]
From (4.83), it is immediately seen that Equation (4.6) needs Condition (4.81) to be fulfilled for its solvability. Conversely, if (4.81) is fulfilled, Equation (4.83) leads to Equation (4.82).

3. The solution of the structural eigenvalue assignment for a minimal PMD (4.4) is supplied by the following theorem.

Theorem 4.39. Let the non-singular PMD (4.4) be minimal, and let Relation (4.70) define an ILMFD of the transfer matrix (4.69). Furthermore, let $(\tilde\alpha_0(\lambda), \tilde\beta_0(\lambda))$ be a basic controller for the pair $(a_l(\lambda), b_l(\lambda))$, and let the set of pairs
\[
\alpha(\lambda) = N(\lambda)\tilde\alpha_0(\lambda) - M(\lambda)b_l(\lambda) ,
\qquad
\beta(\lambda) = N(\lambda)\tilde\beta_0(\lambda) - M(\lambda)a_l(\lambda) \tag{4.84}
\]
determine the set of solutions of the eigenvalue assignment (4.6). Moreover, let $q_1(\lambda), \dots, q_{p+n+m}(\lambda)$ be the sequence of invariant polynomials of the polynomial matrix $Q_\tau(\lambda, \alpha, \beta)$, and $\nu_1(\lambda), \dots, \nu_m(\lambda)$ be the sequence of invariant polynomials of the polynomial matrix $N(\lambda)$. Then
\[
q_1(\lambda) = q_2(\lambda) = \dots = q_{p+n}(\lambda) = 1 ,
\qquad
q_{p+n+i}(\lambda) = \nu_i(\lambda), \quad (i = 1, \dots, m) . \tag{4.85}
\]

Proof. a) It is shown that, under the conditions of Theorem 4.39, the pair $(A(\lambda), B(\lambda))$ defined by Relation (4.73) is irreducible. Indeed, let $(\tilde\alpha_0(\lambda), \tilde\beta_0(\lambda))$ be a basic controller for the pair $(a_l(\lambda), b_l(\lambda))$ that is determined by the ILMFD (4.70). Then, owing to Lemma 4.37, we have
\[
\det \begin{bmatrix} A(\lambda) & B(\lambda) \\ C_0(\lambda) & D_0(\lambda) \end{bmatrix} = \mathrm{const} \neq 0 ,
\]
where
\[
C_0(\lambda) = \begin{bmatrix} O_{mp} & \tilde\beta_0(\lambda) \end{bmatrix},
\qquad
D_0(\lambda) = \tilde\alpha_0(\lambda) , \tag{4.86}
\]
and, due to Theorem 1.41, the pair $(A(\lambda), B(\lambda))$ is irreducible.
b) Equation (4.6) is written in the form
\[
\det \begin{bmatrix} A(\lambda) & B(\lambda) \\ C(\lambda) & D(\lambda) \end{bmatrix} \sim d(\lambda) . \tag{4.87}
\]
Since the pair $(A(\lambda), B(\lambda))$ is irreducible, it follows from Theorem 4.17 that Equation (4.87) is solvable for any polynomial $d(\lambda)$, and the set of solutions can be presented in the shape
\[
D(\lambda) = N_1(\lambda)D_0(\lambda) - M_1(\lambda)\begin{bmatrix} b(\lambda) \\ O_{nm} \end{bmatrix} ,
\qquad
C(\lambda) = N_1(\lambda)C_0(\lambda) - M_1(\lambda)\begin{bmatrix} a(\lambda) & O_{pn} \\ -c(\lambda) & I_n \end{bmatrix} , \tag{4.88}
\]
where the $m\times(p+n)$ polynomial matrix $M_1(\lambda)$ is arbitrary, but the $m\times m$ polynomial matrix $N_1(\lambda)$ has to fulfil the single condition
\[
\det N_1(\lambda) \sim d(\lambda) .
\]
c) On the other side, due to Theorem 4.38, the set of pairs $(\alpha(\lambda), \beta(\lambda))$ satisfying Equation (4.87) coincides with the set of solutions of the equation
\[
\det \begin{bmatrix} a_l(\lambda) & b_l(\lambda) \\ \beta(\lambda) & \alpha(\lambda) \end{bmatrix} \sim d(\lambda)
\]
that has the form (4.84), where the $m\times n$ polynomial matrix $M(\lambda)$ is arbitrary, and the $m\times m$ polynomial matrix $N(\lambda)$ has to satisfy the condition
\[
\det N(\lambda) \sim d(\lambda) .
\]
Assume in (4.88)
\[
M_1(\lambda) = \begin{bmatrix} \tilde M_1(\lambda) & \tilde M_2(\lambda) \end{bmatrix} ,
\]
where $\tilde M_1(\lambda)$ has $p$ columns and $\tilde M_2(\lambda)$ has $n$ columns. Then, with the help of (4.74), (4.86), Relation (4.88) can be presented in the shape
\[
\alpha(\lambda) = N_1(\lambda)\tilde\alpha_0(\lambda) - \tilde M_1(\lambda)b(\lambda) ,
\]
\[
\begin{bmatrix} O_{mp} & \beta(\lambda) \end{bmatrix}
= N_1(\lambda)\begin{bmatrix} O_{mp} & \tilde\beta_0(\lambda) \end{bmatrix}
- \begin{bmatrix} \tilde M_1(\lambda)a(\lambda) - \tilde M_2(\lambda)c(\lambda) & \tilde M_2(\lambda) \end{bmatrix} .
\]
In order to avoid a contradiction between these equations and (4.84), it is necessary and sufficient that the condition
\[
N_1(\lambda) = N(\lambda) \tag{4.89}
\]
holds, and moreover that
\[
\tilde M_1(\lambda)b(\lambda) = M(\lambda)b_l(\lambda) ,
\qquad
\tilde M_1(\lambda)a(\lambda) - \tilde M_2(\lambda)c(\lambda) = O_{mp} ,
\qquad
\tilde M_2(\lambda) = M(\lambda)a_l(\lambda)
\]
are fulfilled. Now we directly conclude that these relations are satisfied for
\[
\tilde M_1(\lambda) = M(\lambda)a_l(\lambda)c(\lambda)a^{-1}(\lambda) ,
\qquad
\tilde M_2(\lambda) = M(\lambda)a_l(\lambda) . \tag{4.90}
\]
Besides, due to Lemma 2.9, the product $a_l(\lambda)c(\lambda)a^{-1}(\lambda)$ is a polynomial matrix. Substituting the last relations and (4.89) into (4.88), we find
\[
D(\lambda) = N(\lambda)\tilde\alpha_0(\lambda) - M(\lambda)\begin{bmatrix} a_l(\lambda)c(\lambda)a^{-1}(\lambda) & a_l(\lambda) \end{bmatrix}\begin{bmatrix} b(\lambda) \\ O_{nm} \end{bmatrix} ,
\]
\[
C(\lambda) = N(\lambda)C_0(\lambda) - M(\lambda)\begin{bmatrix} a_l(\lambda)c(\lambda)a^{-1}(\lambda) & a_l(\lambda) \end{bmatrix}\begin{bmatrix} a(\lambda) & O_{pn} \\ -c(\lambda) & I_n \end{bmatrix} . \tag{4.91}
\]
From this and Theorem 4.8, Equations (4.85) emerge immediately.

Corollary 4.40. In order to get a simple matrix $Q_\tau(\lambda, \alpha, \beta)$ under the conditions of Theorem 4.11, it is necessary and sufficient that the matrix $N(\lambda)$ in Formula (4.91) is simple.

4. The structure of the characteristic matrix Q(λ, α, β) for the case when
the non-singular PMD (4.4) is not minimal is settled by the following theorem.

Theorem 4.41. For the non-singular PMD (4.4), let the factorisations
\[
a(\lambda) = d_1(\lambda)a_1(\lambda)\,, \qquad b(\lambda) = d_1(\lambda)b_1(\lambda) \tag{4.92}
\]
be valid, where d_1(λ), a_1(λ) are p×p polynomial matrices, b_1(λ) is a
p×m polynomial matrix and the pair (a_1(λ), b_1(λ)) is irreducible. Moreover,
suppose
\[
a_1(\lambda) = a_2(\lambda)d_2(\lambda)\,, \qquad c(\lambda) = c_1(\lambda)d_2(\lambda) \tag{4.93}
\]
with p×p polynomial matrices d_2(λ), a_2(λ), the n×p polynomial matrix c_1(λ)
and the irreducible pair [a_2(λ), c_1(λ)]. Then the following statements are true:
4.8 Eigenvalue and Eigenstructure Assignment for PMD Processes 179

a) The PMD τ_1(λ) = (a_2(λ), b_1(λ), c_1(λ)) is equivalent to the PMD (4.4) and
minimal.
b) The quotient
\[
\varphi(\lambda) = \frac{\det a(\lambda)}{\det a_2(\lambda)} \tag{4.94}
\]
turns out to be a polynomial with
\[
\varphi(\lambda) \sim \Delta(\lambda) = \frac{\det a(\lambda)}{\det a_l(\lambda)}\,, \tag{4.95}
\]
where Δ(λ) is the polynomial (4.80).
c) The relation
\[
Q(\lambda, \alpha, \beta) = G_l(\lambda)\,Q_1(\lambda, \alpha, \beta)\,G_r(\lambda) \tag{4.96}
\]
is true with
\[
G_l(\lambda) = \mathrm{diag}\{d_1(\lambda), 1, \dots, 1\}\,, \qquad
G_r(\lambda) = \mathrm{diag}\{d_2(\lambda), 1, \dots, 1\}\,, \tag{4.97}
\]
and the matrix Q_1(λ, α, β) has the shape
\[
Q_1(\lambda, \alpha, \beta) = \begin{pmatrix}
a_2(\lambda) & O_{pn} & b_1(\lambda)\\
-c_1(\lambda) & I_n & O_{nm}\\
O_{mp} & \beta(\lambda) & \alpha(\lambda)
\end{pmatrix}. \tag{4.98}
\]
d) The formula
\[
\varphi(\lambda) \sim \det d_1(\lambda)\, \det d_2(\lambda) \tag{4.99}
\]
is valid.
e) Let q_1(λ), …, q_{p+n+m}(λ) be the sequence of invariant polynomials of the
matrix Q_1(λ, α, β) and ν_1(λ), …, ν_m(λ) be the sequence of invariant poly-
nomials of the matrix N(λ) in the representation (4.91), where instead of
a_l(λ), b(λ), c(λ) we have to write a_2(λ), b_1(λ), c_1(λ). Then
\[
q_1(\lambda) = q_2(\lambda) = \dots = q_{p+n}(\lambda) = 1\,, \qquad
q_{p+n+i}(\lambda) = \nu_i(\lambda)\,, \quad (i = 1, \dots, m)\,. \tag{4.100}
\]
Proof. a) Using (4.92) and (4.93), we find
\[
w(\lambda) = c(\lambda)a^{-1}(\lambda)b(\lambda) = c_1(\lambda)a_2^{-1}(\lambda)b_1(\lambda) = w_1(\lambda)\,,
\]
where w_1(λ) is the transfer function of the PMD τ_1(λ); this means the
PMDs τ(λ) and τ_1(λ) are equivalent. It remains to demonstrate that the PMD
τ_1(λ) is minimal. Since the pair [a_2(λ), c_1(λ)] is irreducible per construc-
tion, it is sufficient to show that the pair (a_2(λ), b_1(λ)) is irreducible. Per
construction, the pair
\[
(a_1(\lambda), b_1(\lambda)) = (a_2(\lambda)d_2(\lambda),\, b_1(\lambda))
\]
is irreducible. Hence, due to Lemma 2.11, also the pair (a_2(λ), b_1(λ)) is
irreducible.

b) From (4.92) and (4.93), we recognise that the quotient (4.94) is a polynomial.
Since the PMDs τ_l(λ) = (a_l(λ), b_l(λ), I_n) and τ_1(λ) are equivalent and
minimal, Corollary 2.49 implies
\[
\det a_l(\lambda) \sim \det a_2(\lambda)
\]
and this yields (4.95).
c) Relations (4.96)–(4.98) can be taken immediately from (4.6), (4.92),
(4.93).
d) Applying Formula (4.71) to Matrix (4.98), we obtain
\[
\det Q_1(\lambda, \alpha, \beta) = \det a_2(\lambda)\, \det\bigl[\alpha(\lambda) - \beta(\lambda)w(\lambda)\bigr]\,.
\]
Therefore, from (4.71) with the help of (4.94), we receive
\[
\det Q(\lambda, \alpha, \beta) = \varphi(\lambda)\, \det Q_1(\lambda, \alpha, \beta)\,.
\]
On the other side, from (4.96)–(4.98) it follows that
\[
\det Q(\lambda, \alpha, \beta) = \det G_l(\lambda)\, \det G_r(\lambda)\, \det Q_1(\lambda, \alpha, \beta)\,.
\]
Comparing the last two equations proves Relation (4.99).
e) Since the PMD τ_1(λ) is minimal and Relation (4.84) holds, Formula
(4.100) follows from Theorem 4.39.

Corollary 4.42. If one of the matrices d_1(λ) or d_2(λ) is not simple, then
the matrix Q(λ, α, β) cannot be made simple with the help of any controller
(α(λ), β(λ)).

Proof. Let, for instance, the matrix d_1(λ) be not simple. Then we know from
the considerations in Section 1.11 that there exists an eigenvalue λ̄ with
def G_l(λ̄) > 1. Hence, considering (4.96), (4.97), we get def Q(λ̄, α, β) > 1,
i.e., the matrix Q(λ, α, β) is not simple. If d_2(λ) is not simple, we conclude
analogously.

Corollary 4.43. Let the matrices d_1(λ) and d_2(λ) be simple and possess no
eigenvalues in common. Then for the simplicity of the matrix Q(λ, α, β), it
suffices that the matrix N(λ) in (4.84) is simple and has no common eigen-
values with the matrix d̃(λ) = d_1(λ)d_2(λ).

Proof. Let μ_1, …, μ_q and κ_1, …, κ_s be the different eigenvalues of the matrices
d_1(λ) and d_2(λ), respectively. Then the matrices G_l(λ) and G_r(λ) in (4.97)
are also simple, where the eigenvalues of G_l(λ) are the numbers μ_1, …, μ_q,
and the eigenvalues of the matrix G_r(λ) are the numbers κ_1, …, κ_s. Let the
matrix N(λ) in (4.84) be simple and possess the eigenvalues n_1, …, n_k
that are disjoint from all the values μ_i, (i = 1, …, q) and κ_j, (j = 1, …, s).
Then from Corollary 4.40, it follows that the matrix Q_1(λ, α, β) is simple
and possesses the set of eigenvalues {n_1, …, n_k}. From (4.96), we recall that
the set of eigenvalues of the matrix Q(λ, α, β) is built from the union of
the sets {μ_1, …, μ_q}, {κ_1, …, κ_s} and {n_1, …, n_k}. Using (4.96), we find
that for all appropriate i
\[
\operatorname{def} Q(\mu_i, \alpha, \beta) = 1\,, \qquad
\operatorname{def} Q(\kappa_i, \alpha, \beta) = 1\,, \qquad
\operatorname{def} Q(n_i, \alpha, \beta) = 1\,,
\]
and together with the results of Section 1.11, this yields that the matrix
Q(λ, α, β) is simple.
5
Fundamentals for Control of Causal
Discrete-time LTI Processes

5.1 Finite-dimensional Discrete-time LTI Processes


1. Generalised discrete linear processes can be represented by the abstract
scheme of Fig. 5.1, where {u}, {y} are input and output vector sequences.

[Fig. 5.1. Generalised discrete LTI process: a block L mapping the input sequence {u} to the output sequence {y}]

\[
\{u\} = \begin{pmatrix} \{u\}_1\\ \vdots\\ \{u\}_m\end{pmatrix}, \qquad
\{y\} = \begin{pmatrix} \{y\}_1\\ \vdots\\ \{y\}_n\end{pmatrix}
\]
and their components {u}_i, {y}_i are scalar sequences
\[
\{u\}_i = \{u_{i,0}, u_{i,1}, \dots\}\,, \qquad \{y\}_i = \{y_{i,0}, y_{i,1}, \dots\}\,.
\]

The above vector sequences can also be represented in the form
\[
\{u\} = \{u_0, u_1, \dots\}\,, \qquad \{y\} = \{y_0, y_1, \dots\}\,, \tag{5.1}
\]
where
\[
u_s = \begin{pmatrix} u_{s,1}\\ \vdots\\ u_{s,m}\end{pmatrix}, \qquad
y_s = \begin{pmatrix} y_{s,1}\\ \vdots\\ y_{s,n}\end{pmatrix}. \tag{5.2}
\]
Furthermore, in Fig. 5.1, the letter L symbolises a certain system of linear
equations that connects the input and output sequences. If L stands for
a system with a finite number of linear difference equations with constant coef-
ficients, then the corresponding process is called a finite-dimensional discrete-
time LTI object. In this section, exclusively such objects will be considered, and
they will shortly be called LTI objects.

2. Compatible with the introduced concepts, the LTI object in Fig. 5.1 is
configured by a system of scalar difference equations
\[
\sum_{p=1}^{n} a^{(0)}_{ip}\, y_{p,k+\ell} + \dots + \sum_{p=1}^{n} a^{(\ell)}_{ip}\, y_{p,k}
= \sum_{r=1}^{m} b^{(0)}_{ir}\, u_{r,k+s} + \dots + \sum_{r=1}^{m} b^{(s)}_{ir}\, u_{r,k} \tag{5.3}
\]
(i = 1, …, n; k = 0, 1, …), where the a^{(j)}_{ip}, b^{(j)}_{ir} are constant real coefficients. Introducing into the consid-
erations the constant matrices
\[
\tilde a_j = \bigl[a^{(j)}_{ip}\bigr]\,, \qquad \tilde b_j = \bigl[b^{(j)}_{ir}\bigr]
\]
and using the notation (5.1), the system of scalar Equations (5.3) can be
written in the form of the vector difference equation
\[
\tilde a_0 y_{k+\ell} + \dots + \tilde a_\ell y_k
= \tilde b_0 u_{k+s} + \dots + \tilde b_s u_k\,, \qquad (k = 0, 1, \dots)\,, \tag{5.4}
\]
which connects the components of the input and output sequences.


Introduce the shifted vector sequences of the form
\[
\{u_j\} = \{u_j, u_{j+1}, \dots\}\,, \qquad \{y_j\} = \{y_j, y_{j+1}, \dots\}\,. \tag{5.5}
\]
Then the system of Equations (5.4) can be written as an equation connecting
these shifted sequences:
\[
\tilde a_0 \{y_\ell\} + \dots + \tilde a_\ell \{y\}
= \tilde b_0 \{u_s\} + \dots + \tilde b_s \{u\}\,. \tag{5.6}
\]
It is easily checked that Equations (5.4) and (5.6) are equivalent. This can
be done by substituting the expressions (5.5) into (5.6) and comparing the
corresponding components of the sequences on the left and right sides.

3. Next, the right-shift (forward-shift) operator q is introduced by the rela-
tions
\[
q\,y_k = y_{k+1}\,, \qquad q\,u_k = u_{k+1}\,. \tag{5.7}
\]
Herewith, (5.4) can be written in the form
\[
\tilde a(q)y_k = \tilde b(q)u_k\,, \tag{5.8}
\]
where
\[
\tilde a(q) = \tilde a_0 q^{\ell} + \dots + \tilde a_\ell\,, \qquad
\tilde b(q) = \tilde b_0 q^{s} + \dots + \tilde b_s \tag{5.9}
\]

are polynomial matrices. However, if the operator q is defined by the relations
\[
q\{y_k\} = \{y_{k+1}\}\,, \qquad q\{u_k\} = \{u_{k+1}\}\,,
\]
then we come to equations that act on the sequences themselves:
\[
\tilde a(q)\{y\} = \tilde b(q)\{u\}\,. \tag{5.10}
\]

4. In what follows, Equations (5.10) or (5.8) will be called a forward model
of the LTI process. The matrix ã(q) is named the eigenoperator, and the matrix
b̃(q) the input operator of the forward model.
If not mentioned otherwise, we always suppose
\[
\det \tilde a(q) \not\equiv 0\,, \tag{5.11}
\]
i.e., the matrix ã(q) is non-singular. If (5.11) is valid, the LTI process is said
to be non-singular.
Moreover, we assume that in the equations (5.8), (5.9) of a non-singular
process always ã_0 ≠ O_{nn} is true, and at least one of the matrices ã_ℓ or b̃_s
is a nonzero matrix.
If under the mentioned propositions, the relation
\[
\det \tilde a_0 \neq 0 \tag{5.12}
\]
is valid, then the LTI process is called normal. If however, instead of (5.12),
\[
\det \tilde a_0 = 0 \tag{5.13}
\]
is true, then the LTI process is named anomalous, [39]. For instance, descriptor
processes can be modelled by anomalous systems [34].

5. For a given input sequence {u}, Relations (5.4) can be regarded as a
difference equation for the unknown output sequence {y}. If (5.13) is allowed,
then in general the difference equation (5.4) cannot be written as a recursion
for y_k, and we have to define what we understand by a solution.

Solution of a not necessarily normal difference equation. For
a known input sequence {u}, as a solution of Equation (5.4) we un-
derstand any sequence that is defined for all k ≥ 0 and the elements
of which satisfy Relation (5.4) for all k ≥ 0.

Suppose a non-singular polynomial matrix Π(q). Multiplying both sides of Equation
(5.8) from the left by Π(q), we obtain
\[
\Pi(q)\tilde a(q)y_k = \Pi(q)\tilde b(q)u_k\,, \qquad (k = 0, 1, \dots) \tag{5.14}
\]
and, using (5.7), this can be written in a form analogous to (5.4). Equation
(5.14) is said to be derived from Equation (5.4), and Equation
(5.4) itself is called original.

Example 5.1. Consider the equations of an LTI process in the form (5.3)
\[
\begin{aligned}
y_{1,k+2} + y_{1,k+1} + y_{2,k} &= u_{k+1}\\
y_{1,k+1} + 2y_{2,k+1} &= 2u_k\,.
\end{aligned} \tag{5.15}
\]
Denote y_k = (y_{1,k}\ \ y_{2,k})^T; then (5.15) is written in the form
\[
\tilde a_0 y_{k+2} + \tilde a_1 y_{k+1} + \tilde a_2 y_k
= \tilde b_0 u_{k+2} + \tilde b_1 u_{k+1} + \tilde b_2 u_k\,, \tag{5.16}
\]
where
\[
\tilde a_0 = \begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix}, \quad
\tilde a_1 = \begin{pmatrix} 1 & 0\\ 1 & 2\end{pmatrix}, \quad
\tilde a_2 = \begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}, \quad
\tilde b_0 = \begin{pmatrix} 0\\ 0\end{pmatrix}, \quad
\tilde b_1 = \begin{pmatrix} 1\\ 0\end{pmatrix}, \quad
\tilde b_2 = \begin{pmatrix} 0\\ 2\end{pmatrix},
\]
so that we obtain (5.9) with
\[
\tilde a(q) = \begin{pmatrix} q^2+q & 1\\ q & 2q\end{pmatrix}, \qquad
\tilde b(q) = \begin{pmatrix} q\\ 2\end{pmatrix}.
\]
Since det ã(q) = 2q³ + 2q² − q ≢ 0, the LTI process (5.15) is non-singular.
Besides, due to det ã_0 = 0, the process is anomalous.
Assume
\[
\Pi(q) = \begin{pmatrix} q & 1\\ 1 & -q\end{pmatrix};
\]
then (5.14), which is derived from the original Equation (5.16), takes the form
\[
\begin{pmatrix} q^3+q^2+q & 3q\\ q & -2q^2+1\end{pmatrix} y_k
= \begin{pmatrix} q^2+2\\ -q\end{pmatrix} u_k
\]
or, by means of (5.7), it is equivalently written as the system of equations
\[
\begin{aligned}
y_{1,k+3} + y_{1,k+2} + y_{1,k+1} + 3y_{2,k+1} &= u_{k+2} + 2u_k\\
y_{1,k+1} - 2y_{2,k+2} + y_{2,k} &= -u_{k+1}
\end{aligned} \qquad (k = 0, 1, \dots)\,. \tag{5.17}
\]
Hereby, Equations (5.15) are called original with respect to (5.17). □
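The two matrix products behind the derived equation can be checked mechanically. The following pure-Python sketch represents a polynomial as a list of coefficients in ascending powers of q (a representation chosen for this illustration, not the book's notation) and multiplies the polynomial matrices of Example 5.1:

```python
# Check of the derived equation in Example 5.1: polynomials are coefficient
# lists in ascending powers of q, matrices are nested lists of such polynomials.

def padd(p, r):
    n = max(len(p), len(r))
    p = p + [0] * (n - len(p))
    r = r + [0] * (n - len(r))
    return [x + y for x, y in zip(p, r)]

def pmul(p, r):
    out = [0] * (len(p) + len(r) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(r):
            out[i + j] += x * y
    return out

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def matmul(A, B):
    out = []
    for i in range(len(A)):
        row = []
        for j in range(len(B[0])):
            acc = [0]
            for k in range(len(B)):
                acc = padd(acc, pmul(A[i][k], B[k][j]))
            row.append(trim(acc))
        out.append(row)
    return out

a = [[[0, 1, 1], [1]],      # [q^2 + q   1 ]
     [[0, 1], [0, 2]]]      # [q         2q]
b = [[[0, 1]],              # [q]
     [[2]]]                 # [2]
Pi = [[[0, 1], [1]],        # the unimodular multiplier of Example 5.1
      [[1], [0, -1]]]

print(matmul(Pi, a))   # [[[0, 1, 1, 1], [0, 3]], [[0, 1], [1, 0, -2]]]
print(matmul(Pi, b))   # [[[2, 0, 1]], [[0, -1]]]
```

The printed coefficient lists encode exactly the derived matrices of the example: q³+q²+q and 3q in the first row, q and 1−2q² in the second, with right side q²+2 and −q.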

6.
Lemma 5.2. For any matrix Π(q), all solutions of the original equation (5.8)
are also solutions of the derived equation (5.14).

Proof. The derived equation is written in the form
\[
\Pi(q)\bigl[\tilde a(q)y_k - \tilde b(q)u_k\bigr] = 0_k\,, \tag{5.18}
\]
where 0_k denotes the zero vector for all k ≥ 0. Obviously, the vectors u_k, y_k satisfy (5.18)
for all k ≥ 0 whenever Equation (5.8) holds for all of them and all k ≥ 0.

Remark 5.3. The converse statement to Lemma 5.2 is in general not true. In-
deed, let {v} be a solution of the equation
\[
\Pi(q)v_k = 0_k\,.
\]
Then any solution of the equation
\[
\tilde a(q)y_k = \tilde b(q)u_k + v_k \tag{5.19}
\]
for any possible v_k presents a solution of the derived Equation (5.14), but only
for v_k = 0_k is it a solution of the original equation. It is easy to show that
Relation (5.19) contains all solutions of the derived equation.

7. Consider the important special case when in (5.14) the matrix Π(q) is
unimodular. In this case, the transition from the original equation (5.8) to the
derived equation (5.14) means manipulating the system (5.3) by operations
of the following types:
a) Exchange the places of two equations.
b) Multiply an equation by a non-zero constant.
c) Add to one equation any other equation that has been multiplied before by an
arbitrary polynomial f(q).
In what follows, Equations (5.8) and (5.14) are called equivalent by the uni-
modular matrix Π(q). The reason for using this terminology arises from the
next lemma.

Lemma 5.4. The solution sets of the equivalent equations (5.8) and (5.14)
coincide.

Proof. Let Equations (5.8) and (5.14) be equivalent, and let R, R_x be their
solution sets. Lemma 5.2 implies R ⊆ R_x. On the other side, Equations (5.8)
are gained from Equations (5.14) by multiplying them from the left by Π^{-1}(q).
Then Lemma 5.2 also implies R_x ⊆ R; thus R = R_x.

8. Assume in (5.14) a unimodular matrix Π(q). Introduce the notation
\[
\Pi(q)\tilde a(q) = \tilde a^*(q)\,, \qquad \Pi(q)\tilde b(q) = \tilde b^*(q)\,. \tag{5.20}
\]
Then the derived equation (5.14) can be written in the form
\[
\tilde a^*(q)y_k = \tilde b^*(q)u_k\,. \tag{5.21}
\]
From Section 1.6, it is known that under Supposition (5.11), the matrix Π(q)
can always be selected in such a way that the matrix ã*(q) becomes row
reduced. In this case, Equation (5.21) is also said to be row reduced. Let
ã*_1(q), …, ã*_n(q) be the rows of the matrix ã*(q). As before, denote
\[
\ell_i = \deg \tilde a^*_i(q)\,, \qquad (i = 1, \dots, n)\,.
\]
If under these conditions Equation (5.21) is row reduced, then, independently
of the concrete shape of the matrix Π(q), the quantities
\[
\ell_\Sigma = \sum_{i=1}^{n} \ell_i\,, \qquad
\ell_{\max} = \deg \tilde a^*(q) = \max_{1 \le i \le n} \{\ell_i\}
\]
take their minimal values in the set of equations equivalent to the original
equation (5.8).
Example 5.5. Consider the anomalous process
\[
\begin{aligned}
y_{1,k+4} + 2y_{1,k+2} + y_{1,k+1} + 2y_{2,k+2} + y_{2,k} &= u_{k+3} + 2u_{k+1}\\
y_{1,k+3} + y_{1,k+1} + y_{1,k} + 2y_{2,k+1} &= u_{k+2} + u_k\,.
\end{aligned} \tag{5.22}
\]
In the present case, we have
\[
\tilde a(q) = \begin{pmatrix} q^4 + 2q^2 + q & 2q^2 + 1\\ q^3 + q + 1 & 2q\end{pmatrix}
= \tilde a_0 q^4 + \tilde a_1 q^3 + \tilde a_2 q^2 + \tilde a_3 q + \tilde a_4\,,
\]
\[
\tilde b(q) = \begin{pmatrix} q^3 + 2q\\ q^2 + 1\end{pmatrix}
= \tilde b_1 q^3 + \tilde b_2 q^2 + \tilde b_3 q + \tilde b_4\,,
\]
where
\[
\tilde a_0 = \begin{pmatrix} 1 & 0\\ 0 & 0\end{pmatrix}, \quad
\tilde a_1 = \begin{pmatrix} 0 & 0\\ 1 & 0\end{pmatrix}, \quad
\tilde a_2 = \begin{pmatrix} 2 & 2\\ 0 & 0\end{pmatrix}, \quad
\tilde a_3 = \begin{pmatrix} 1 & 0\\ 1 & 2\end{pmatrix}, \quad
\tilde a_4 = \begin{pmatrix} 0 & 1\\ 1 & 0\end{pmatrix},
\]
\[
\tilde b_1 = \begin{pmatrix} 1\\ 0\end{pmatrix}, \quad
\tilde b_2 = \begin{pmatrix} 0\\ 1\end{pmatrix}, \quad
\tilde b_3 = \begin{pmatrix} 2\\ 0\end{pmatrix}, \quad
\tilde b_4 = \begin{pmatrix} 0\\ 1\end{pmatrix}.
\]
Choose
\[
\Pi(q) = \begin{pmatrix} 1 & -q\\ -q & q^2+1\end{pmatrix}.
\]
So, we generate the derived matrices
\[
\tilde a^*(q) = \Pi(q)\tilde a(q) = \begin{pmatrix} q^2 & 1\\ q+1 & q\end{pmatrix}, \qquad
\tilde b^*(q) = \Pi(q)\tilde b(q) = \begin{pmatrix} q\\ 1\end{pmatrix},
\]
where the matrix ã*(q) is row reduced. Applying this, Equations (5.22) might
be expressed equivalently by
\[
\begin{aligned}
y_{1,k+2} + y_{2,k} &= u_{k+1}\\
y_{1,k+1} + y_{1,k} + y_{2,k+1} &= u_k\,.
\end{aligned} \qquad \square
\]
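The row-reducedness claim can be checked numerically. In the following pure-Python sketch (polynomials as coefficient lists in ascending powers of q, an encoding chosen here for illustration), the leading row coefficient matrix of the derived eigenoperator of Example 5.5 is formed and its determinant inspected:

```python
# Row-reducedness test: take, in every row of a polynomial matrix, the
# coefficients of the row's highest power of q; the matrix is row reduced
# exactly when this constant "leading row coefficient" matrix is non-singular.

def row_degree(row):
    return max(len(p) - 1 for p in row)

def leading_row_matrix(A):
    out = []
    for row in A:
        d = row_degree(row)
        out.append([p[d] if len(p) > d else 0 for p in row])
    return out

a_star = [[[0, 0, 1], [1]],     # [q^2     1]
          [[1, 1], [0, 1]]]     # [q + 1   q]

L = leading_row_matrix(a_star)
det = L[0][0] * L[1][1] - L[0][1] * L[1][0]   # 2x2 determinant
print(L, det)   # [[1, 0], [1, 1]] 1  -> non-singular, hence row reduced
```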

5.2 Transfer Matrices and Causality of LTI Processes


1. For non-singular processes (5.8) under Condition (5.11), the rational ma-
trix
\[
\tilde w(q) = \tilde a^{-1}(q)\tilde b(q) \tag{5.23}
\]
is defined, which is called the transfer matrix (transfer function) of the forward model.
The next lemma indicates an important property of the transfer matrix.

Lemma 5.6. The transfer matrices of the original equation (5.8) and of the
derived equation (5.21) coincide.

Proof. Suppose
\[
\tilde w^*(q) = \tilde a^{*-1}(q)\tilde b^*(q)\,. \tag{5.24}
\]
Then, applying (5.20), we get
\[
\tilde w^*(q) = \tilde a^{*-1}(q)\tilde b^*(q)
= \tilde a^{-1}(q)\Pi^{-1}(q)\,\Pi(q)\tilde b(q)
= \tilde a^{-1}(q)\tilde b(q) = \tilde w(q)\,.
\]

Corollary 5.7. The transfer functions of equivalent forward models coin-
cide.

2. From the above, it emerges that any forward model (5.8) is uniquely as-
signed to a transfer matrix. The reverse statement is obviously wrong. There-
fore, the question arises: how is the set of forward models structured that
possess a given transfer matrix? The next theorem gives the answer.

Theorem 5.8. Let the rational n×m matrix w̃(q) be given and let
\[
\tilde w(q) = a_0^{-1}(q)\,b_0(q)
\]
be an ILMFD. Then the set of all forward models of LTI processes possessing
this transfer matrix is determined by the relations
\[
\tilde a(q) = \Pi(q)a_0(q)\,, \qquad \tilde b(q) = \Pi(q)b_0(q)\,, \tag{5.25}
\]
where Π(q) is any non-singular polynomial matrix.

Proof. The right side of Relation (5.24) presents a certain LMFD of the ratio-
nal matrix w̃(q). Hence, by the properties of LMFDs considered in Section 2.4,
we conclude that the set of all pairs (ã(q), b̃(q)) according to the transfer
matrix w̃(q) is determined by Relations (5.25).

Corollary 5.9. A forward model of the LTI process (5.8) is called control-
lable if the pair (ã(q), b̃(q)) is irreducible. Hence, Theorem 5.8 can be formulated
in the following way: Let the forward model defined by the pair (ã(q), b̃(q)) be
controllable. Then the set of all forward models with transfer function (5.23)
coincides with the set of all forward models derived from it.

3. The LTI process (5.8) is called weakly causal, strictly causal or causal, if its
transfer matrix (5.23) is proper, strictly proper or at least proper, respectively.
From the content of Section 2.6, it emerges that the LTI process (5.8), (5.9)
is causal if there exists the finite limit
\[
\lim_{q \to \infty} \tilde w(q) = \tilde w_0\,. \tag{5.26}
\]
Besides, when w̃_0 = O_{nm} holds, the process is strictly causal. When the limit
(5.26) becomes infinite, the process is named non-causal.

Theorem 5.10. For the process (5.8), (5.9) to be causal, the condition
\[
\ell \ge s \tag{5.27}
\]
is necessary. For strict causality, the inequality
\[
\ell > s
\]
must be valid.

Proof. Let the transfer matrix w̃(q) be given in the standard form (2.21)
\[
\tilde w(q) = \frac{N(q)}{d(q)}\,.
\]
When this matrix is at least proper, then deg N(q) ≤ deg d(q) becomes true.
Besides, Corollary 2.23 delivers for any LMFD (5.23) deg ã(q) ≥ deg b̃(q),
which is equivalent to (5.27). For strict causality, we conclude analogously.

Remark 5.11. For the further investigations, we consider exclusively causal pro-
cesses. Thus, in Equations (5.8), (5.9), always ℓ ≥ s is assumed.

Remark 5.12. The conditions of Theorem 5.10 are in general not sufficient, as
is illustrated by the following example.

Example 5.13. Assume the LTI process (5.8) with
\[
\tilde a(q) = \begin{pmatrix} q^3 & 1\\ q+1 & q+2\end{pmatrix}, \qquad
\tilde b(q) = \begin{pmatrix} 1\\ q^2\end{pmatrix}. \tag{5.28}
\]
In this case, we have ℓ = deg ã(q) = 3, s = deg b̃(q) = 2. At the same time,
we receive
\[
\tilde w(q) = \frac{1}{q^4 + 2q^3 - q - 1}\begin{pmatrix} -q^2 + q + 2\\ q^5 - q - 1\end{pmatrix}.
\]
Hence, the process (5.28) is non-causal. □

4. If Equation (5.8) is row reduced, the causality question for the process
(5.8) can be answered without constructing the transfer matrix.

Theorem 5.14. Let Equation (5.8) be row reduced, let ℓ_i be the degree of
the i-th row of the matrix ã(q), and let s_i be the degree of the i-th row of the
matrix b̃(q). Then the following statements are true:
a) For the weak causality of the process, it is necessary and sufficient that
the conditions
\[
\ell_i \ge s_i\,, \qquad (i = 1, \dots, n) \tag{5.29}
\]
are true, where at least for one 1 ≤ i ≤ n in (5.29) the equality sign has
to take place.
b) For the strict causality of the process, the fulfilment of the inequalities
\[
\ell_i > s_i\,, \qquad (i = 1, \dots, n) \tag{5.30}
\]
is necessary and sufficient.
c) When for at least one 1 ≤ i ≤ n
\[
\ell_i < s_i
\]
becomes true, then the process is non-causal.

Proof. The proof emerges immediately from Theorem 2.24.
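Theorem 5.14 turns the causality question into a comparison of row degrees. A small pure-Python sketch (coefficient lists in ascending powers of q, an encoding chosen for this illustration), applied to the data of Example 5.13, whose eigenoperator is already row reduced:

```python
# Row-degree causality test of Theorem 5.14, applied to Example 5.13.

def row_deg(row):
    return max(len(p) - 1 for p in row)

def causality(a_rows, b_rows):
    la = [row_deg(r) for r in a_rows]   # row degrees of a~(q)
    lb = [row_deg(r) for r in b_rows]   # row degrees of b~(q)
    if any(x < y for x, y in zip(la, lb)):
        return "non-causal"             # case c)
    if all(x > y for x, y in zip(la, lb)):
        return "strictly causal"        # case b)
    return "weakly causal"              # case a)

a = [[[0, 0, 0, 1], [1]],               # [q^3     1    ]
     [[1, 1], [2, 1]]]                  # [q + 1   q + 2]
b = [[[1]],                             # [1  ]
     [[0, 0, 1]]]                       # [q^2]

print(causality(a, b))   # row degrees (3, 1) vs (0, 2): 1 < 2 -> non-causal
```

The verdict agrees with the transfer matrix computed in Example 5.13, but no matrix inversion was necessary.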

5.3 Normal LTI Processes


1. This section considers LTI processes of the form
\[
\tilde a_0 y_{k+\ell} + \dots + \tilde a_\ell y_k
= \tilde b_0 u_{k+\ell} + \dots + \tilde b_\ell u_k\,, \qquad (k = 0, 1, \dots) \tag{5.31}
\]
under the supposition
\[
\det \tilde a_0 \neq 0\,. \tag{5.32}
\]
Some important properties of normal LTI processes will be formulated, which
emerge from Relation (5.32).

Theorem 5.15. For the weak causality of the normal process (5.31), the
fulfilment of
\[
\tilde b_0 \neq O_{nm} \tag{5.33}
\]
is necessary and sufficient.
For the strict causality of the normal process (5.31), the fulfilment of
\[
\tilde b_0 = O_{nm} \tag{5.34}
\]
is necessary and sufficient.

Proof. From (5.32) it follows that the matrix ã(q) of a normal process is row
reduced, and we have
\[
\ell_1 = \ell_2 = \dots = \ell_n = \ell\,.
\]
If (5.33) takes place, then in Condition (5.29) the equality sign stands for at
least one 1 ≤ i ≤ n. Therefore, as a consequence of Theorem 5.14, the process
is weakly causal. If however (5.34) takes place, then Condition (5.30) is true
and the process is strictly causal.

2. Let the vector input sequence
\[
\{u\} = \{u_0, u_1, \dots\} \tag{5.35}
\]
be given, and furthermore assume any ensemble of ℓ constant vectors of di-
mension n×1
\[
\bar y_0, \bar y_1, \dots, \bar y_{\ell-1}\,. \tag{5.36}
\]
In what follows, the vectors (5.36) are called initial values.

Theorem 5.16. For any input sequence (5.35) and any ensemble of initial
values (5.36), there exists a unique solution of the normal equation (5.31)
\[
\{y\} = \{\bar y_0, \bar y_1, \dots, \bar y_{\ell-1}, y_\ell, \dots\}
\]
satisfying the initial conditions
\[
y_i = \bar y_i\,, \qquad (i = 0, 1, \dots, \ell-1)\,. \tag{5.37}
\]

Proof. Assume that the vectors y_i, (i = 0, 1, …, ℓ−1) satisfy Condition (5.37).
Since Condition (5.32) is fulfilled, Equation (5.31) might be written in the
shape
\[
y_{k+\ell} = \bar a_1 y_{k+\ell-1} + \dots + \bar a_\ell y_k
+ \bar b_0 u_{k+\ell} + \bar b_1 u_{k+\ell-1} + \dots + \bar b_\ell u_k\,, \quad (k = 0, 1, \dots)\,, \tag{5.38}
\]
where
\[
\bar a_i = -\tilde a_0^{-1}\tilde a_i\,, \quad (i = 1, 2, \dots, \ell)\,; \qquad
\bar b_i = \tilde a_0^{-1}\tilde b_i\,, \quad (i = 0, 1, \dots, \ell)\,.
\]
For k = 0, from (5.38) we obtain
\[
y_\ell = \bar a_1 \bar y_{\ell-1} + \dots + \bar a_\ell \bar y_0
+ \bar b_0 u_\ell + \bar b_1 u_{\ell-1} + \dots + \bar b_\ell u_0\,. \tag{5.39}
\]
Hence, for a known input sequence (5.35) and given initial values (5.36), the
vector y_ℓ is uniquely determined. For k = 1, from (5.38) we derive
\[
y_{\ell+1} = \bar a_1 y_\ell + \dots + \bar a_\ell \bar y_1
+ \bar b_0 u_{\ell+1} + \bar b_1 u_\ell + \dots + \bar b_\ell u_1\,.
\]
Thus, with the help of (5.35), (5.36) and (5.39), the vector y_{ℓ+1} is uniquely
calculated. Obviously, this procedure can be uniquely continued for all k > 0.
As a result, in a unique way the sequence
\[
\{y\} = \{\bar y_0, \dots, \bar y_{\ell-1}, y_\ell, \dots\}
\]
is generated, which is a solution of Equation (5.31) and fulfils the initial con-
ditions (5.37).

Remark 5.17. It follows from the proof of Theorem 5.16 that for weakly causal
normal processes with given initial conditions, the vector y_k of the solution {y} is
determined by the values u_0, u_1, …, u_k of the input sequence. If the process,
however, is strictly causal, then the vector y_k is determined by the vectors
u_0, u_1, …, u_{k−1}.
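The recursion (5.38) can be sketched in a few lines of pure Python for 2×2 coefficient matrices. The concrete numbers below are an illustrative choice with det ã_0 ≠ 0, not an example taken from the book:

```python
# Recursive solution (5.38) of a normal vector difference equation.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c              # normality: det a~0 != 0
    return [[d / det, -b / det], [-c / det, a / det]]

def mv(M, v):                        # 2x2 matrix times 2-vector
    return [M[i][0] * v[0] + M[i][1] * v[1] for i in range(2)]

def solve_normal(A, B, y_init, u):
    """A = [a0..al], B = [b0..bl]; y_init holds the l initial vectors (5.37)."""
    l = len(A) - 1
    a0inv = inv2(A[0])
    y = [list(v) for v in y_init]
    for k in range(len(u) - l):
        # right side of (5.38): sum_i b_i u_{k+l-i} - sum_{i>=1} a_i y_{k+l-i}
        r = [0.0, 0.0]
        for i in range(l + 1):
            t = mv(B[i], u[k + l - i])
            r = [r[0] + t[0], r[1] + t[1]]
        for i in range(1, l + 1):
            t = mv(A[i], y[k + l - i])
            r = [r[0] - t[0], r[1] - t[1]]
        y.append(mv(a0inv, r))
    return y

A = [[[1.0, 0.0], [0.0, 1.0]], [[0.5, 0.0], [0.0, 0.5]]]   # a0, a1 (l = 1)
B = [[[0.0, 0.0], [0.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]]]   # b0, b1
u = [[1.0, 0.0]] * 5
y = solve_normal(A, B, [[0.0, 0.0]], u)
print(y[1], y[2])   # [1.0, 0.0] [0.5, 0.0]
```

Since b_0 = O here, the process is strictly causal, and indeed y_k in the output depends only on u_0, …, u_{k−1}, exactly as stated in Remark 5.17.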

3.
Theorem 5.18. Let the input (5.35) be a Taylor sequence (see Appendix A).
Then all solutions of Equation (5.31) are Taylor sequences.

Proof. Using (5.8), (5.9), the polynomial matrices
\[
a(\zeta) = \zeta^{\ell}\,\tilde a(\zeta^{-1}) = \tilde a_0 + \tilde a_1\zeta + \dots + \tilde a_\ell\zeta^{\ell}\,, \qquad
b(\zeta) = \zeta^{\ell}\,\tilde b(\zeta^{-1}) = \tilde b_0 + \tilde b_1\zeta + \dots + \tilde b_\ell\zeta^{\ell} \tag{5.40}
\]
are considered. Condition (5.32) implies
\[
\det a(0) = \det \tilde a_0 \neq 0\,. \tag{5.41}
\]
Under the assumed conditions, there exists the ζ-transform of the input se-
quence
\[
u^0(\zeta) = \sum_{i=0}^{\infty} u_i \zeta^i\,. \tag{5.42}
\]
Consider the vector
\[
\begin{aligned}
y^0(\zeta) = a^{-1}(\zeta)\bigl[&\tilde a_0\bar y_0 + (\tilde a_0\bar y_1 + \tilde a_1\bar y_0)\zeta + \dots
+ \zeta^{\ell-1}(\tilde a_0\bar y_{\ell-1} + \tilde a_1\bar y_{\ell-2} + \dots + \tilde a_{\ell-1}\bar y_0)\bigr]\\
+ a^{-1}(\zeta)\bigl[&\tilde b_0\bigl(u^0(\zeta) - u_0 - u_1\zeta - \dots - \zeta^{\ell-1}u_{\ell-1}\bigr)\\
&+ \zeta\,\tilde b_1\bigl(u^0(\zeta) - u_0 - u_1\zeta - \dots - \zeta^{\ell-2}u_{\ell-2}\bigr)
+ \dots + \zeta^{\ell}\,\tilde b_\ell\, u^0(\zeta)\bigr]\,,
\end{aligned} \tag{5.43}
\]
where u^0(ζ) is the convergent series (5.42). Since the vector u^0(ζ) is analytical
in the point ζ = 0 and Condition (5.41) is valid, the right side of (5.43) is
analytical in ζ = 0, and consequently
\[
y^0(\zeta) = \sum_{i=0}^{\infty} y_i \zeta^i \tag{5.44}
\]

also defines a convergent series. For determining the coefficients of Expansion
(5.44), substitute this equation on the left side of (5.43). Thus, by taking
advantage of (5.40), we come out with
\[
\begin{aligned}
(\tilde a_0 + \tilde a_1\zeta + \dots + \tilde a_\ell\zeta^{\ell}) \sum_{i=0}^{\infty} y_i\zeta^i
= {}&\tilde a_0\bar y_0 + (\tilde a_0\bar y_1 + \tilde a_1\bar y_0)\zeta + \dots
+ \zeta^{\ell-1}(\tilde a_0\bar y_{\ell-1} + \tilde a_1\bar y_{\ell-2} + \dots + \tilde a_{\ell-1}\bar y_0)\\
&+ \tilde b_0\bigl(u^0(\zeta) - u_0 - u_1\zeta - \dots - \zeta^{\ell-1}u_{\ell-1}\bigr)\\
&+ \zeta\,\tilde b_1\bigl(u^0(\zeta) - u_0 - u_1\zeta - \dots - \zeta^{\ell-2}u_{\ell-2}\bigr)
+ \dots + \zeta^{\ell}\,\tilde b_\ell\,u^0(\zeta)\,,
\end{aligned} \tag{5.45}
\]
which holds for all sufficiently small |ζ|. Notice that the terms with the
matrices b̃_i, (i = 0, …, ℓ) on the right side of (5.45) contain only powers ζ^j
with j ≥ ℓ. Hence, comparing the coefficients for ζ^i, (i = 0, …, ℓ−1) on both sides of
(5.45) yields
\[
\begin{aligned}
\tilde a_0 y_0 &= \tilde a_0 \bar y_0\\
\tilde a_1 y_0 + \tilde a_0 y_1 &= \tilde a_1 \bar y_0 + \tilde a_0 \bar y_1\\
&\;\;\vdots\\
\tilde a_{\ell-1} y_0 + \tilde a_{\ell-2} y_1 + \dots + \tilde a_0 y_{\ell-1}
&= \tilde a_{\ell-1} \bar y_0 + \tilde a_{\ell-2} \bar y_1 + \dots + \tilde a_0 \bar y_{\ell-1}\,.
\end{aligned} \tag{5.46}
\]
With regard to (5.41), we generate from (5.46)
\[
y_i = \bar y_i\,, \qquad (i = 0, 1, \dots, \ell-1)\,. \tag{5.47}
\]
Using (5.47) and (5.46), Relation (5.45) is written as
\[
\tilde a_0 \sum_{i=\ell}^{\infty} y_i\zeta^i
+ \zeta\,\tilde a_1 \sum_{i=\ell-1}^{\infty} y_i\zeta^i
+ \dots + \zeta^{\ell}\,\tilde a_\ell \sum_{i=0}^{\infty} y_i\zeta^i
= \tilde b_0 \sum_{i=\ell}^{\infty} u_i\zeta^i
+ \zeta\,\tilde b_1 \sum_{i=\ell-1}^{\infty} u_i\zeta^i
+ \dots + \zeta^{\ell}\,\tilde b_\ell \sum_{i=0}^{\infty} u_i\zeta^i\,.
\]
Dividing both sides of the last equation by ζ^ℓ yields the relation
\[
\tilde a_0 \sum_{i=0}^{\infty} y_{i+\ell}\,\zeta^i
+ \tilde a_1 \sum_{i=0}^{\infty} y_{i+\ell-1}\,\zeta^i
+ \dots + \tilde a_\ell \sum_{i=0}^{\infty} y_i\,\zeta^i
= \tilde b_0 \sum_{i=0}^{\infty} u_{i+\ell}\,\zeta^i
+ \tilde b_1 \sum_{i=0}^{\infty} u_{i+\ell-1}\,\zeta^i
+ \dots + \tilde b_\ell \sum_{i=0}^{\infty} u_i\,\zeta^i\,.
\]
A comparison of the coefficients of equal powers of ζ on both sides produces
\[
\tilde a_0 y_{k+\ell} + \tilde a_1 y_{k+\ell-1} + \dots + \tilde a_\ell y_k
= \tilde b_0 u_{k+\ell} + \tilde b_1 u_{k+\ell-1} + \dots + \tilde b_\ell u_k\,, \qquad (k = 0, 1, \dots)\,.
\]
Comparing this with (5.31) and taking advantage of (5.47), we con-
clude that the coefficients of the expansion (5.44) build a solution of Equation
(5.31), the initial conditions of which satisfy (5.37) for any initial vectors
(5.36). But owing to Theorem 5.16, every ensemble of initial values (5.36)
uniquely corresponds to a solution. Hence, we discover that for any initial
vectors (5.36), the totality of coefficients of the expansions (5.44) exhausts the
whole solution set of the normal equation (5.31). Thus, in the case of convergence
of the ζ-transform (5.42), all solutions of the normal equation (5.31) are
Taylor sequences.

Corollary 5.19. When the input {u} is a Taylor sequence, it emerges from
the proof of Theorem 5.18 that the right side of Relation (5.43) defines the
ζ-transform of the general solution of the normal equation (5.31).

4. From Theorem 5.18 and its Corollary, as well as from the relations be-
tween the z-transforms and ζ-transforms, it arises that for a Taylor input
sequence {u}, any solution {y} of the normal equation (5.31) possesses the
z-transform
\[
y^*(z) = \sum_{k=0}^{\infty} y_k z^{-k}\,.
\]
Applying (A.8), after transition in (5.31) to the z-transforms, we arrive at
\[
\begin{aligned}
\tilde a_0\bigl[z^{\ell} y^*(z) - z^{\ell}y_0 - \dots - z y_{\ell-1}\bigr]
+ \tilde a_1\bigl[z^{\ell-1} y^*(z) - z^{\ell-1}y_0 - \dots - z y_{\ell-2}\bigr]
+ \dots + \tilde a_\ell\, y^*(z)\\
= \tilde b_0\bigl[z^{\ell} u^*(z) - z^{\ell}u_0 - \dots - z u_{\ell-1}\bigr]
+ \tilde b_1\bigl[z^{\ell-1} u^*(z) - z^{\ell-1}u_0 - \dots - z u_{\ell-2}\bigr]
+ \dots + \tilde b_\ell\, u^*(z)\,.
\end{aligned}
\]
Using (5.23), after rearrangement this is represented in the form
\[
\begin{aligned}
y^*(z) = \tilde w(z)u^*(z)
+ \tilde a^{-1}(z)\bigl[&z^{\ell}(\tilde a_0 y_0 - \tilde b_0 u_0)
+ z^{\ell-1}(\tilde a_0 y_1 + \tilde a_1 y_0 - \tilde b_0 u_1 - \tilde b_1 u_0) + \dots\\
&+ z(\tilde a_0 y_{\ell-1} + \tilde a_1 y_{\ell-2} + \dots + \tilde a_{\ell-1} y_0
- \tilde b_0 u_{\ell-1} - \tilde b_1 u_{\ell-2} - \dots - \tilde b_{\ell-1} u_0)\bigr]\,.
\end{aligned}
\]
The initial vectors y_0^0, …, y_{ℓ−1}^0 have to be selected in such a way that the
relations
\[
\begin{aligned}
\tilde a_0 y_0^0 &= \tilde b_0 u_0\\
\tilde a_1 y_0^0 + \tilde a_0 y_1^0 &= \tilde b_1 u_0 + \tilde b_0 u_1\\
&\;\;\vdots\\
\tilde a_{\ell-1} y_0^0 + \dots + \tilde a_0 y_{\ell-1}^0
&= \tilde b_{\ell-1} u_0 + \dots + \tilde b_0 u_{\ell-1}
\end{aligned} \tag{5.48}
\]
hold. Owing to det ã_0 ≠ 0, the system (5.48) uniquely determines the to-
tality of initial vectors y_0^0, …, y_{ℓ−1}^0. Taking these vectors as initial values,
we conclude that the solution {y^0}, which is configured to the initial values
y_0^0, …, y_{ℓ−1}^0, possesses the z-transform
\[
y^{0*}(z) = \tilde w(z)u^*(z)\,. \tag{5.49}
\]
In what follows, that solution of Equation (5.31) having the transform (5.49)
is called the solution with vanishing initial energy. As a result of the above
considerations, the following theorem is formulated.

Theorem 5.20. For the normal equation (5.31) and any Taylor input se-
quence {u}, there exists the solution with vanishing initial energy {y^0}, which
has the z-transform (5.49). The initial conditions of this solution are uniquely
determined by the system of equations (5.48).

5. The Taylor matrix sequence
\[
\{H\} = \{H_0, H_1, \dots\}\,,
\]
for which the equation
\[
H^*(z) = \sum_{i=0}^{\infty} H_i z^{-i} = \tilde w(z)
\]
holds, is called the weighting sequence of the normal process (5.31).
Based on the reasoning arising in the proof of Theorem 5.20, we are
able to show that the weighting sequence {H} is the solution of the matrix
difference equation
\[
\tilde a_0 H_{k+\ell} + \dots + \tilde a_\ell H_k
= \tilde b_0 U_{k+\ell} + \dots + \tilde b_\ell U_k\,, \qquad (k = 0, 1, \dots) \tag{5.50}
\]
for the matrix input
\[
\{U\} = \{I_m, O_{mm}, O_{mm}, \dots\} \tag{5.51}
\]
with the solution of the equations
\[
\begin{aligned}
\tilde a_0 H_0 &= \tilde b_0\\
\tilde a_1 H_0 + \tilde a_0 H_1 &= \tilde b_1\\
&\;\;\vdots\\
\tilde a_{\ell-1} H_0 + \dots + \tilde a_0 H_{\ell-1} &= \tilde b_{\ell-1}
\end{aligned}
\]
as initial values H_0, …, H_{ℓ−1}. Notice that, due to (5.51), for k > 0 Equation
(5.50) converts into the homogeneous equation
\[
\tilde a_0 H_{k+\ell} + \dots + \tilde a_\ell H_k = O_{nm}\,, \qquad (k = 1, 2, \dots)\,.
\]
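For a scalar normal process, the recursion (5.50) with the impulse input (5.51) can be sketched directly. The coefficients below encode the illustrative choice ã(q) = q − 0.5, b̃(q) = 1 (an assumption of this sketch, not an example from the book), whose weighting sequence is the impulse response of 1/(q − 0.5):

```python
# Weighting sequence via the difference equation (5.50) with a unit impulse.

def weighting_sequence(a, b, N):
    """a = [a0..al], b = [b0..bl], scalar coefficients with a[0] != 0."""
    l = len(a) - 1
    U = [1.0] + [0.0] * (N - 1)          # unit impulse input (5.51)
    H = []
    for k in range(-l, N - l):           # solve (5.50) for H_{k+l}
        rhs = sum(b[i] * (U[k + l - i] if 0 <= k + l - i < N else 0.0)
                  for i in range(l + 1))
        rhs -= sum(a[i] * H[k + l - i] for i in range(1, l + 1)
                   if 0 <= k + l - i < len(H))
        H.append(rhs / a[0])
    return H

print(weighting_sequence([1.0, -0.5], [0.0, 1.0], 6))
# -> [0.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
```

After the impulse has passed, the recursion indeed becomes homogeneous: each new H_k is 0.5 times its predecessor.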

5.4 Anomalous LTI Processes


1. Conformable with the above introduced concepts, the non-singular causal
LTI process, described by the equations
\[
\tilde a(q)y_k = \tilde b(q)u_k \tag{5.52}
\]
with
\[
\tilde a(q) = \tilde a_0 q^{\ell} + \tilde a_1 q^{\ell-1} + \dots + \tilde a_\ell\,, \qquad
\tilde b(q) = \tilde b_0 q^{\ell} + \tilde b_1 q^{\ell-1} + \dots + \tilde b_\ell\,, \tag{5.53}
\]
is called anomalous when
\[
\det \tilde a_0 = 0\,. \tag{5.54}
\]
From a mathematical point of view, anomalous processes, which include de-
scriptor systems [34], are provided with a number of properties that distin-
guish them fundamentally from normal processes. Especially, notice the fol-
lowing:
a) While a normal process, described by (5.52), (5.53), will always
be causal, an anomalous process, described by (5.52)–(5.54), might be
non-causal, as shown in Example 5.13.
b) The successive procedure in Section 5.3 on the basis of (5.38), which was
used for calculating the output sequence, is not applicable to anomalous processes.
Indeed, denote
\[
d_i = -\tilde a_1 y_{\ell+i-1} - \dots - \tilde a_\ell y_i
+ \tilde b_0 u_{\ell+i} + \dots + \tilde b_\ell u_i\,;
\]
then Equation (5.52) is written as the difference equation
\[
\tilde a_0 y_{\ell+i} = d_i\,, \qquad (i = 0, 1, \dots)\,. \tag{5.55}
\]
However, due to (5.54), Equations (5.55) possess either no solution or
infinitely many solutions. In both cases, a unique resolution of (5.55) is
impossible, and the successive calculation of the output sequence breaks
down.
c) For a normal equation (5.52), the totality of initial vectors (5.36), config-
ured according to (5.37), can be prescribed arbitrarily. For causal anoma-
lous processes (5.52)–(5.54), the solution in general exists only for certain
initial conditions that are bound by additional equations, which also in-
clude values of the input sequence.

Example 5.21. Consider the anomalous process
\[
\begin{aligned}
y_{1,k+1} + 3y_{1,k} + 2y_{2,k+1} &= x_k\\
y_{1,k+1} + 2y_{2,k+1} + y_{2,k} &= 2x_k
\end{aligned} \qquad (k = 0, 1, \dots)\,. \tag{5.56}
\]

In this case, we configure
\[
\tilde a(q) = \begin{pmatrix} q+3 & 2q\\ q & 2q+1\end{pmatrix}, \qquad
\tilde b(q) = \begin{pmatrix} 1\\ 2\end{pmatrix}.
\]
By means of (5.23), we find the transfer matrix
\[
\tilde w(q) = \frac{1}{7q+3}\begin{pmatrix} -2q+1\\ q+6\end{pmatrix}.
\]
This matrix is proper, and thus the process (5.56) is causal. For k = 0, from
(5.56) we obtain
\[
\begin{aligned}
y_{1,1} + 2y_{2,1} &= x_0 - 3y_{1,0}\\
y_{1,1} + 2y_{2,1} &= 2x_0 - y_{2,0}\,.
\end{aligned} \tag{5.57}
\]
Equations (5.57) are consistent under the condition
\[
x_0 - 3y_{1,0} = 2x_0 - y_{2,0}\,, \tag{5.58}
\]
which reduces the system (5.57) to the single equation
\[
y_{1,1} + 2y_{2,1} = x_0 - 3y_{1,0} = 2x_0 - y_{2,0}
\]
that has infinitely many solutions. □
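The consistency constraint can be exercised numerically. A minimal sketch for Example 5.21 (the chosen numbers are illustrative):

```python
# Consistency condition (5.58): at k = 0 both equations of (5.57) share the
# left side y_{1,1} + 2*y_{2,1}, so a solution exists exactly when the right
# sides agree.

def step_consistent(x0, y10, y20):
    rhs1 = x0 - 3 * y10       # first equation of (5.57)
    rhs2 = 2 * x0 - y20       # second equation of (5.57)
    return rhs1 == rhs2       # condition (5.58)

print(step_consistent(1, 0, 1))   # y20 = x0 + 3*y10 satisfies (5.58): True
print(step_consistent(1, 0, 0))   # arbitrary initial values in general: False
```

This illustrates point c) above: unlike for normal processes, the initial values of an anomalous process cannot be prescribed independently of the input.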
With respect to the above said, it is clear that dealing with anomalous pro-
cesses (5.52)–(5.54) needs special attention. The present section provides the
adequate investigations.

2.
Lemma 5.22. If the input of a causal anomalous process is a Taylor sequence,
then all solutions of Equation (5.52) are Taylor sequences.

Proof. Without loss of generality, we assume that Equation (5.52) is row re-
duced, so that utilising (1.21) gives
\[
\tilde a(q) = \mathrm{diag}\{q^{\ell_1}, \dots, q^{\ell_n}\}A_0 + \tilde a_1(q)\,, \tag{5.59}
\]
where the degree of the i-th row of the matrix ã_1(q) is lower than ℓ_i, and
det A_0 ≠ 0. Suppose deg ã(q) = ℓ_max. Select
\[
\Pi(q) = \mathrm{diag}\{q^{\ell_{\max}-\ell_1}, \dots, q^{\ell_{\max}-\ell_n}\}
\]
and consider the derived equation (5.14), which with the help of (5.59) takes
the form
\[
\bigl[A_0 q^{\ell_{\max}} + \Pi(q)\tilde a_1(q)\bigr] y_k = \Pi(q)\tilde b(q)u_k\,. \tag{5.60}
\]
As is easily seen, Equation (5.60) is normal under the given suppositions.
Therefore, owing to Theorem 5.18, for a Taylor input sequence {u} all solutions
of Equation (5.60) are Taylor sequences. But due to Lemma 5.2, all solutions
of the original equation (5.52) are also solutions of the derived equation (5.60);
thus Lemma 5.22 is proven.
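The normalisation step (5.59)–(5.60) can be sketched concretely: multiplying row i of a row-reduced anomalous eigenoperator by q^(ℓmax−ℓi) lifts every row to degree ℓmax, so the leading coefficient matrix becomes the non-singular A_0. The data below is the eigenoperator of Example 5.23; polynomials are coefficient lists in ascending powers of q, an encoding chosen for this illustration:

```python
# Diagonal multiplier of Lemma 5.22 turning an anomalous equation normal.

def shift(p, k):                     # multiply a polynomial by q^k
    return [0] * k + list(p)

a = [[[1, 0, 1], [0, 2]],            # [q^2 + 1   2q]
     [[1], [0, 1]]]                  # [1          q]

degs = [max(len(p) - 1 for p in row) for row in a]        # row degrees (2, 1)
lmax = max(degs)
derived = [[shift(p, lmax - degs[i]) for p in row]
           for i, row in enumerate(a)]

# leading coefficient matrix of the derived, now degree-lmax, eigenoperator
lead = [[p[lmax] if len(p) > lmax else 0 for p in row] for row in derived]
print(lead)   # [[1, 0], [0, 1]] -> det != 0, the derived equation is normal
```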

3. Lemma 5.22 motivates a construction procedure for the solution set of
Equation (5.52) according to its initial conditions. For this reason, the process
equations (5.52) are written as a system of scalar equations of the shape (5.3):
\[
\sum_{p=1}^{n} a^{(0)}_{ip}\, y_{p,k+\ell_i} + \dots + \sum_{p=1}^{n} a^{(\ell_i)}_{ip}\, y_{p,k}
= \sum_{r=1}^{m} b^{(0)}_{ir}\, u_{r,k+\ell_i} + \dots + \sum_{r=1}^{m} b^{(\ell_i)}_{ir}\, u_{r,k} \tag{5.61}
\]
(i = 1, …, n; r = 1, …, m; k = 0, 1, …), where, due to the row reducedness, the condition
\[
\det\bigl[a^{(0)}_{ip}\bigr] = \det A_0 \neq 0 \tag{5.62}
\]
holds. Without loss of generality, we suppose
\[
\ell_i \ge \ell_{i+1}\,, \quad (i = 1, \dots, n-1)\,, \qquad \ell_n > 0\,,
\]
because this can always be obtained by rearrangement. Passing formally from
(5.61) to the z-transforms, we obtain
\[
\begin{aligned}
\sum_{p=1}^{n} a^{(0)}_{ip}\bigl[z^{\ell_i} y_p^*(z) - z^{\ell_i} y_{p,0} - \dots - z\, y_{p,\ell_i-1}\bigr]
+ \sum_{p=1}^{n} a^{(1)}_{ip}\bigl[z^{\ell_i-1} y_p^*(z) - z^{\ell_i-1} y_{p,0} - \dots - z\, y_{p,\ell_i-2}\bigr] + \dots\\
+ \sum_{p=1}^{n} a^{(\ell_i)}_{ip}\, y_p^*(z) = \tilde B_i(z)\,,
\end{aligned}
\]
where y_{p,0}, …, y_{p,ℓ_i−1} are the initial values, and B̃_i(z) is a polynomial in z
which depends on the coefficients b^{(j)}_{ir} and the excitation {u}. Substituting
here ζ^{-1} for z, we obtain the equations for the ζ-transforms
\[
\begin{aligned}
\sum_{p=1}^{n} a^{(0)}_{ip}\bigl[y_p^0(\zeta) - y_{p,0} - \dots - \zeta^{\ell_i-1} y_{p,\ell_i-1}\bigr]
+ \zeta \sum_{p=1}^{n} a^{(1)}_{ip}\bigl[y_p^0(\zeta) - y_{p,0} - \dots - \zeta^{\ell_i-2} y_{p,\ell_i-2}\bigr] + \dots\\
+ \zeta^{\ell_i} \sum_{p=1}^{n} a^{(\ell_i)}_{ip}\, y_p^0(\zeta) = B_i(\zeta)\,,
\end{aligned} \tag{5.63}
\]
where B_i(ζ) is a polynomial in ζ. The solution is put up as a set of power
series
\[
y_p^0(\zeta) = \sum_{k=0}^{\infty} \tilde y_{p,k}\, \zeta^k\,, \tag{5.64}
\]
which exist due to Condition (5.62). Besides, the condition
\[
\tilde y_{p,k} = y_{p,k} \tag{5.65}
\]
has to be fulfilled for all those ỹ_{p,k} that are actually configured by the left side
of (5.63). Inserting (5.64) on the left side of (5.63), and comparing the co-
efficients of ζ^k, (k = 0, 1, …) on both sides, a system of successive linear
equations for the quantities ỹ_{p,k}, (k = 0, 1, …) is created, which due to (5.62)
is always solvable. In order to meet Condition (5.65), we generate the totality
of linear relations that have to be fulfilled between the quantities y_{p,k} and the
first values of the input sequence {u}. These conditions determine the set of
initial conditions y_{p,k} for which the wanted solution of Equation (5.61) exists.
Since, with respect to Lemma 5.22, all solutions of Equation (5.61) (whenever
they exist) possess ζ-transforms, the suggested procedure always delivers the
wanted result.

Example 5.23. Investigate the row reduced anomalous system of equations

y_{1,k+2} + y_{1,k} + 2y_{2,k+1} = u_{k+1}
y_{1,k} + y_{2,k+1} = u_k        (k = 0, 1, …)    (5.66)

of the form (5.61). In the present case, we have κ_1 = 2, κ_2 = 1 and

[a_{ip}^{(0)}] = [1, 0; 0, 1].

A formal pass to the z-transforms yields

z^2 y_1*(z) − z^2 y_{1,0} − z y_{1,1} + y_1*(z) + 2z y_2*(z) − 2z y_{2,0} = z u*(z) − z u_0
y_1*(z) + z y_2*(z) − z y_{2,0} = u*(z),

so we gain

[y_1*(z); y_2*(z)] = [z^2+1, 2z; 1, z]^{-1} [z^2 y_{1,0} + z y_{1,1} + 2z y_{2,0} + z u*(z) − z u_0;  z y_{2,0} + u*(z)].    (5.67)

Substituting ζ^{-1} for z gives

[y_1^0(ζ); y_2^0(ζ)] = [1+ζ^2, 2ζ; ζ, 1]^{-1} [y_{1,0} + ζ y_{1,1} + 2ζ y_{2,0} + ζ u^0(ζ) − ζ u_0;  y_{2,0} + ζ u^0(ζ)],    (5.68)

where the conditions

y_1^0(ζ) = y_1*(ζ^{-1}),  y_2^0(ζ) = y_2*(ζ^{-1}),  u^0(ζ) = u*(ζ^{-1})

were used. Since by supposition the input {u} is a Taylor sequence, the ex-
pansion

u^0(ζ) = Σ_{k=0}^∞ u_k ζ^k

converges. Thus, the right side of (5.68) is analytical in the point ζ = 0. Hence
the pair of convergent expansions

y_1^0(ζ) = Σ_{k=0}^∞ ỹ_{1,k} ζ^k,  y_2^0(ζ) = Σ_{k=0}^∞ ỹ_{2,k} ζ^k    (5.69)

exists uniquely. From (5.68), we obtain

y_1^0(ζ) − y_{1,0} − ζ y_{1,1} + ζ^2 y_1^0(ζ) + 2ζ y_2^0(ζ) − 2ζ y_{2,0} = ζ u^0(ζ) − ζ u_0
ζ y_1^0(ζ) + y_2^0(ζ) − y_{2,0} = ζ u^0(ζ).    (5.70)

Now we insert (5.69) into (5.70) and set equal those terms on both sides
which do not depend on ζ. Thus, we receive

ỹ_{1,0} = y_{1,0},  ỹ_{2,0} = y_{2,0},    (5.71)

and for the term with ζ, we get

ỹ_{1,1} = y_{1,1}.    (5.72)

When (5.71) and (5.72) hold, then in the first row of (5.70) the terms of zero
and first degree in ζ neutralise each other, respectively, and in the second
equation the absolute terms cancel each other. Altogether, Equations (5.70)
under Conditions (5.71), (5.72) might be written in the shape

Σ_{k=2}^∞ ỹ_{1,k} ζ^k + ζ^2 Σ_{k=0}^∞ ỹ_{1,k} ζ^k + 2 Σ_{k=1}^∞ ỹ_{2,k} ζ^{k+1} = Σ_{k=1}^∞ u_k ζ^{k+1}
Σ_{k=0}^∞ ỹ_{1,k} ζ^{k+1} + Σ_{k=1}^∞ ỹ_{2,k} ζ^k = Σ_{k=0}^∞ u_k ζ^{k+1}.

Canceling the first equation by ζ^2, and the second by ζ, we find

Σ_{k=0}^∞ ỹ_{1,k+2} ζ^k + Σ_{k=0}^∞ ỹ_{1,k} ζ^k + 2 Σ_{k=0}^∞ ỹ_{2,k+1} ζ^k = Σ_{k=0}^∞ u_{k+1} ζ^k
Σ_{k=0}^∞ ỹ_{1,k} ζ^k + Σ_{k=0}^∞ ỹ_{2,k+1} ζ^k = Σ_{k=0}^∞ u_k ζ^k.

Comparing the coefficients of the powers ζ^k, (k = 0, 1, …), we conclude that
for any selection of the constants ỹ_{1,0} = y_{1,0}, ỹ_{2,0} = y_{2,0}, ỹ_{1,1} = y_{1,1}, the
coefficients of the expansions (5.69) present a solution of Equation (5.66) which
satisfies the initial conditions (5.71), (5.72). As a result of the above analysis,
the following facts are ascertained:

a) The general solution of Equation (5.66) is determined by the initial con-
ditions y_{1,0}, y_{2,0}, y_{1,1}, which can be chosen arbitrarily.
b) The right side of Relation (5.67) presents the z-transform of the general
solution of Equation (5.66).
c) The right side of Relation (5.68) presents the ζ-transform of the general
solution of Equation (5.66). □
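Assertion c) can be spot-checked numerically: the series coefficients constructed by coefficient comparison must reproduce the closed-form right side of (5.68) when both are evaluated at a point inside the disc of convergence. The following plain-Python sketch does this; the geometric input sequence, the initial values and the test point are hypothetical choices, not taken from the text.

```python
# Example 5.23, assertion c): the power series built by coefficient comparison
# must agree with the closed-form right side of (5.68).
N = 40
u = [0.5 ** k for k in range(N)]         # hypothetical input, u^0(z) = 1/(1 - z/2)
y1 = [0.0] * N
y2 = [0.0] * N
y1[0], y2[0], y1[1] = 0.5, -1.0, 2.0     # freely chosen initial data

for k in range(N - 2):
    y2[k + 1] = u[k] - y1[k]                         # 2nd row of (5.66)
    y1[k + 2] = u[k + 1] - y1[k] - 2.0 * y2[k + 1]   # 1st row of (5.66)

z = 0.2                                              # arbitrary test point
u0 = sum(uk * z ** k for k, uk in enumerate(u))      # truncated u^0(z)
s1 = sum(c * z ** k for k, c in enumerate(y1))       # truncated y_1^0(z)

# right side of (5.68): solve [[1+z^2, 2z], [z, 1]] x = rhs, first component
r1 = y1[0] + z * y1[1] + 2 * z * y2[0] + z * u0 - z * u[0]
r2 = y2[0] + z * u0
det = (1 + z ** 2) * 1 - 2 * z * z
x1 = (1 * r1 - 2 * z * r2) / det

assert abs(x1 - s1) < 1e-8               # series and closed form coincide
```

The truncation error of both series is negligible at |ζ| = 0.2, so the comparison is meaningful to well below the stated tolerance.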

Example 5.24. Investigate the row reduced anomalous system of equations

y_{1,k+2} + y_{1,k} + 2y_{2,k+2} = u_{k+1}
y_{1,k+1} + y_{2,k+1} = u_k        (k = 0, 1, …)    (5.73)

of the form (5.61). In the present case, we have κ_1 = 2, κ_2 = 1 and

[a_{ip}^{(0)}] = [1, 2; 1, 1],  det [a_{ip}^{(0)}] ≠ 0.    (5.74)

By formal pass to the z-transforms, we find

[y_1*(z); y_2*(z)] =
 [z^2+1, 2z^2; z, z]^{-1} [z^2 y_{1,0} + z y_{1,1} + 2z^2 y_{2,0} + 2z y_{2,1} + z u*(z) − z u_0;  z y_{1,0} + z y_{2,0} + u*(z)].    (5.75)

Although the values of the numbers κ_1 and κ_2 are the same as in Exam-
ple 5.23, the right side of Relation (5.75) now depends on four values y_{1,0},
y_{2,0}, y_{1,1} and y_{2,1}. Substitute z = ζ^{-1}; then, as in the preceding example, the
relations

[y_1^0(ζ); y_2^0(ζ)] =
 [1+ζ^2, 2; 1, 1]^{-1} [y_{1,0} + ζ y_{1,1} + 2 y_{2,0} + 2ζ y_{2,1} + ζ u^0(ζ) − ζ u_0;  y_{1,0} + y_{2,0} + ζ u^0(ζ)]    (5.76)

take place. The right side of (5.76) is analytical in the point ζ = 0. Hence
there uniquely exists a pair of convergent expansions (5.69), which are the
Taylor series of the right side of (5.76). Thus, from (5.76) we obtain

y_1^0(ζ) − y_{1,0} − ζ y_{1,1} + ζ^2 y_1^0(ζ) + 2 [y_2^0(ζ) − y_{2,0} − ζ y_{2,1}] = ζ u^0(ζ) − ζ u_0
y_1^0(ζ) − y_{1,0} + y_2^0(ζ) − y_{2,0} = ζ u^0(ζ).    (5.77)

Equating on both sides the terms not depending on ζ in (5.77), we find

(ỹ_{1,0} − y_{1,0}) + 2(ỹ_{2,0} − y_{2,0}) = 0
(ỹ_{1,0} − y_{1,0}) + (ỹ_{2,0} − y_{2,0}) = 0,

so we gain with the help of (5.74)

ỹ_{1,0} = y_{1,0},  ỹ_{2,0} = y_{2,0}.    (5.78)

Comparing the coefficients for ζ, we find

ỹ_{1,1} − y_{1,1} + 2(ỹ_{2,1} − y_{2,1}) = 0
ỹ_{1,1} + ỹ_{2,1} = u_0,

which might be composed in the form

(ỹ_{1,1} − y_{1,1}) + 2(ỹ_{2,1} − y_{2,1}) = 0
(ỹ_{1,1} − y_{1,1}) + (ỹ_{2,1} − y_{2,1}) = −y_{1,1} − y_{2,1} + u_0.

Recall (5.74) and recognise that the equations

ỹ_{1,1} = y_{1,1},  ỹ_{2,1} = y_{2,1}    (5.79)

hold if and only if the condition

y_{1,1} + y_{2,1} = u_0    (5.80)

is satisfied. The last equation is a consequence of the second equation in (5.73)
for k = 0. Suppose (5.80); then Relations (5.78) and (5.79) are also fulfilled,
such that Relations (5.77) might be comprised to the equations

Σ_{k=2}^∞ ỹ_{1,k} ζ^k + ζ^2 Σ_{k=0}^∞ ỹ_{1,k} ζ^k + 2 Σ_{k=2}^∞ ỹ_{2,k} ζ^k = Σ_{k=1}^∞ u_k ζ^{k+1}
Σ_{k=1}^∞ ỹ_{1,k} ζ^k + Σ_{k=1}^∞ ỹ_{2,k} ζ^k = Σ_{k=0}^∞ u_k ζ^{k+1}.

Reducing the first equation by ζ^2, and the second one by ζ, we find

Σ_{k=0}^∞ ỹ_{1,k+2} ζ^k + Σ_{k=0}^∞ ỹ_{1,k} ζ^k + 2 Σ_{k=0}^∞ ỹ_{2,k+2} ζ^k = Σ_{k=0}^∞ u_{k+1} ζ^k
Σ_{k=0}^∞ ỹ_{1,k+1} ζ^k + Σ_{k=0}^∞ ỹ_{2,k+1} ζ^k = Σ_{k=0}^∞ u_k ζ^k.

Comparing the coefficients for the powers ζ^k, (k = 0, 1, …), we conclude that
for any selection of the constants y_{1,0}, y_{2,0} and the quantities y_{1,1}, y_{2,1}, which
are connected by Relation (5.80), there exists a solution of Equation (5.73)
satisfying the initial conditions (5.78), (5.79). As a result of the above analysis,
the following facts are ascertained:
a) The general solution of Equation (5.73) is determined by the quantities
y_{1,0}, y_{2,0}, y_{1,1} and y_{2,1}, where the first two are freely selectable and the
other two are bound by Relation (5.80).
b) When (5.80) holds, the right side of Relation (5.75) presents the
z-transform of the general solution of Equation (5.73).
c) When (5.80) holds, the right side of Relation (5.76) presents the
ζ-transform of the general solution of Equation (5.73). □

4. In order to find the solutions of the equation system (5.61) for admissible
initial conditions, various procedures can be constructed. For this purpose,
the discovered expressions for the z- and the ζ-transforms are very helpful. A
further suitable approach consists in the direct solution of the derived normal
equation (5.60). Using this approach, the method of successive approximation
(5.38) is applicable.

Example 5.25. Under the conditions of Example 5.23, substitute k by k + 1 in
the second equation of (5.66). Then we obtain the derived normal system of
equations

y_{1,k+2} + y_{1,k} + 2y_{2,k+1} = u_{k+1}
y_{1,k+1} + y_{2,k+2} = u_{k+1}        (k = 0, 1, …),

that might be written in the form

y_{1,k+2} = −y_{1,k} − 2y_{2,k+1} + u_{k+1}
y_{2,k+2} = −y_{1,k+1} + u_{k+1}        (k = 0, 1, …).    (5.81)

Specify the initial conditions as

y_{1,0}, y_{2,0}, y_{1,1} arbitrary,  y_{2,1} = u_0 − y_{1,0};

then the wanted solution is generated directly from (5.81). □
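The derived normal form (5.81) can be stepped forward directly. The sketch below (plain Python; the input sequence is a hypothetical test signal) simulates (5.81) from admissible initial data and confirms that the generated sequence also solves the original anomalous system (5.66).

```python
# Simulate the derived normal system (5.81) of Example 5.25 and verify that
# the result satisfies the original anomalous equations (5.66).
N = 15
u = [0.3 * k - 1.0 for k in range(N)]    # hypothetical input sequence

y1, y2 = [0.0] * N, [0.0] * N
y1[0], y2[0], y1[1] = 1.0, -2.0, 0.25    # freely chosen
y2[1] = u[0] - y1[0]                     # admissibility condition

for k in range(N - 2):
    y1[k + 2] = -y1[k] - 2.0 * y2[k + 1] + u[k + 1]   # first row of (5.81)
    y2[k + 2] = -y1[k + 1] + u[k + 1]                 # second row of (5.81)

# residuals of both equations of the original system (5.66)
for k in range(N - 2):
    assert abs(y1[k + 2] + y1[k] + 2.0 * y2[k + 1] - u[k + 1]) < 1e-12
for k in range(N - 1):
    assert abs(y1[k] + y2[k + 1] - u[k]) < 1e-12
```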

Example 5.26. A corresponding consideration of Equation (5.73) leads to the
normal system

y_{1,k+2} + 2y_{2,k+2} = −y_{1,k} + u_{k+1}
y_{1,k+2} + y_{2,k+2} = u_{k+1}        (k = 0, 1, …)

with the initial conditions

y_{1,0}, y_{2,0}, y_{1,1} arbitrary,  y_{2,1} = u_0 − y_{1,1}. □

5. Although, in general, the solution of a causal anomalous equation (5.52)
exists only over the set of admissible initial conditions, for such an anoma-
lous system the solution for vanishing initial energy always exists.

Theorem 5.27. For the causal anomalous process (5.52) with Taylor input
sequence {u}, there always exists the solution {y_0}, the z-transform y_0*(z) of
which is determined by the relation

y_0*(z) = w̃(z)u*(z),

where

w̃(z) = ã^{-1}(z)b̃(z)

is the assigned transfer matrix. The ζ-transform of the solution {y_0} has the
view

y_0^0(ζ) = w(ζ)u^0(ζ)    (5.82)

with

w(ζ) = w̃(ζ^{-1}) = ã^{-1}(ζ^{-1})b̃(ζ^{-1}).    (5.83)

Proof. Without loss of generality, we assume that Equation (5.52) is row re-
duced and in (5.59) det A_0 ≠ 0. In this case, the right side of (5.82) is analytical
in the point ζ = 0. Thus, there uniquely exists the Taylor series expansion

y_0^0(ζ) = Σ_{k=0}^∞ y_k ζ^k    (5.84)

that converges for sufficiently small |ζ|. From (5.83) and (5.53) using (5.40),
we get

w(ζ) = ā^{-1}(ζ)b̄(ζ),    (5.85)

where

ā(ζ) = ã_0 + ã_1 ζ + … + ã_ℓ ζ^ℓ,  b̄(ζ) = b̃_0 + b̃_1 ζ + … + b̃_ℓ ζ^ℓ.    (5.86)

Applying (5.84)–(5.86) and (5.82), we obtain

(ã_0 + ã_1 ζ + … + ã_ℓ ζ^ℓ) Σ_{k=0}^∞ y_k ζ^k = (b̃_0 + b̃_1 ζ + … + b̃_ℓ ζ^ℓ) Σ_{k=0}^∞ u_k ζ^k.    (5.87)

By comparison of the coefficients for ζ^k, (k = 0, 1, …, ℓ−1), we find

ã_0 y_0 = b̃_0 u_0
ã_1 y_0 + ã_0 y_1 = b̃_1 u_0 + b̃_0 u_1    (5.88)
⋮
ã_{ℓ−1} y_0 + … + ã_0 y_{ℓ−1} = b̃_{ℓ−1} u_0 + … + b̃_0 u_{ℓ−1}.

With the aid of (5.88), Equation (5.87) is easily brought into the form

ã_0 Σ_{k=ℓ}^∞ y_k ζ^k + ζ ã_1 Σ_{k=ℓ−1}^∞ y_k ζ^k + … + ζ^ℓ ã_ℓ Σ_{k=0}^∞ y_k ζ^k
 = b̃_0 Σ_{k=ℓ}^∞ u_k ζ^k + ζ b̃_1 Σ_{k=ℓ−1}^∞ u_k ζ^k + … + ζ^ℓ b̃_ℓ Σ_{k=0}^∞ u_k ζ^k.

Cancellation on both sides by ζ^ℓ yields

ã_0 Σ_{k=0}^∞ y_{k+ℓ} ζ^k + ã_1 Σ_{k=0}^∞ y_{k+ℓ−1} ζ^k + … + ã_ℓ Σ_{k=0}^∞ y_k ζ^k
 = b̃_0 Σ_{k=0}^∞ u_{k+ℓ} ζ^k + b̃_1 Σ_{k=0}^∞ u_{k+ℓ−1} ζ^k + … + b̃_ℓ Σ_{k=0}^∞ u_k ζ^k.

Comparing the coefficients of the powers ζ^k, (k = 0, 1, …) on both sides, we
realise that the coefficients of Expansion (5.84) satisfy Equation (5.52) for all
k ≥ 0. ∎

Remark 5.28. Since in the anomalous case det ã_0 = 0, Relations (5.88) do not
allow to determine the initial conditions that are assigned to the solution with
vanishing initial energy. For the determination of these initial conditions, the
following procedure is possible. Using (5.59), we obtain

w(ζ) = A^{-1}(ζ)B(ζ),    (5.89)

where

A(ζ) = diag{ζ^{κ_1}, …, ζ^{κ_n}} ã(ζ^{-1}) = A_0 + A_1 ζ + … + A_μ ζ^μ,
B(ζ) = diag{ζ^{κ_1}, …, ζ^{κ_n}} b̃(ζ^{-1}) = B̃_0 + B̃_1 ζ + … + B̃_μ ζ^μ.    (5.90)

With the help of (5.82), (5.89) and (5.90), we derive

(A_0 + A_1 ζ + … + A_μ ζ^μ) Σ_{k=0}^∞ y_k ζ^k = (B̃_0 + B̃_1 ζ + … + B̃_μ ζ^μ) Σ_{k=0}^∞ u_k ζ^k.

By comparing the coefficients, we find

A_0 y_0 = B̃_0 u_0
A_1 y_0 + A_0 y_1 = B̃_1 u_0 + B̃_0 u_1    (5.91)
⋮
A_{μ−1} y_0 + … + A_0 y_{μ−1} = B̃_{μ−1} u_0 + … + B̃_0 u_{μ−1}.

Since per construction det A_0 ≠ 0, Equations (5.91) provide the means to
determine the vectors y_0, …, y_{μ−1}.

Example 5.29. Find the initial conditions for the solution with vanishing initial
energy for Equations (5.73). Notice that in this example

ã(z) = [z^2+1, 2z^2; z, z],  b̃(z) = [z; 1]

is assigned. From this and (5.90), we obtain

A(ζ) = [1+ζ^2, 2; 1, 1],  B(ζ) = [ζ; ζ],

that means

A_0 = [1, 2; 1, 1],  A_1 = [0, 0; 0, 0],  A_2 = [1, 0; 0, 0],
B̃_0 = [0; 0],  B̃_1 = [1; 1],  B̃_2 = [0; 0].    (5.92)

Applying (5.92) and (5.91), we get

A_0 y_0 = B̃_0 u_0,

i.e. y_0 = O_{21}. Thus, the second equation in (5.91) takes the form

A_0 y_1 = B̃_1 u_0

with the consequence

y_1 = A_0^{-1} B̃_1 u_0 = [u_0; 0].

Hence the solution with vanishing initial energy is determined by the initial
conditions

y_{1,0} = 0,  y_{2,0} = 0,  y_{1,1} = u_0,  y_{2,1} = 0.

Here, Relation (5.80) is satisfied. □
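The block-triangular structure of (5.91) makes this computation mechanical. A minimal sketch for the data of Example 5.29, with a hand-written 2×2 solver to stay dependency-free; the input values u_0, u_1 are hypothetical:

```python
# Solve the block-triangular system (5.91) for Example 5.29:
#   A0 y0 = Bt0 u0,   A1 y0 + A0 y1 = Bt1 u0 + Bt0 u1.
def solve2(A, r):
    # explicit 2x2 linear solve via Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [( A[1][1] * r[0] - A[0][1] * r[1]) / det,
            (-A[1][0] * r[0] + A[0][0] * r[1]) / det]

A0 = [[1.0, 2.0], [1.0, 1.0]]
A1 = [[0.0, 0.0], [0.0, 0.0]]
Bt0, Bt1 = [0.0, 0.0], [1.0, 1.0]       # B-tilde coefficients from (5.92)
u0, u1 = 3.0, -0.7                      # hypothetical first input values

y0 = solve2(A0, [Bt0[0] * u0, Bt0[1] * u0])
rhs1 = [Bt1[0] * u0 + Bt0[0] * u1 - (A1[0][0] * y0[0] + A1[0][1] * y0[1]),
        Bt1[1] * u0 + Bt0[1] * u1 - (A1[1][0] * y0[0] + A1[1][1] * y0[1])]
y1 = solve2(A0, rhs1)

assert y0 == [0.0, 0.0]                                 # y0 = O_21
assert abs(y1[0] - u0) < 1e-12 and abs(y1[1]) < 1e-12   # y1 = [u0; 0]
```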

6. For anomalous causal LTI processes, in the same way as for normal pro-
cesses, we introduce the concept of the weighting sequence

{H} = {H_0, H_1, …}.    (5.93)

The weighting sequence is a matrix sequence, whose z-transform H*(z) is
determined by

H*(z) = w̃(z).    (5.94)

From (5.94) it follows that the weighting sequence (5.93) might be seen as
the matrix solution of Equation (5.52) under vanishing initial energy for the
special input

{U} = {I_m, O_{mm}, O_{mm}, …},

because the z-transform of this input amounts to

U*(z) = I_m.

Passing in (5.94) to the variable ζ, we obtain an equation for the ζ-transform

H^0(ζ) = w(ζ).

Since the right side is analytical in ζ = 0, there exists the convergent expansion

H^0(ζ) = Σ_{k=0}^∞ H_k ζ^k.

Applying Relations (5.85), (5.86), we obtain from the last two equations

(ã_0 + ã_1 ζ + … + ã_ℓ ζ^ℓ) Σ_{k=0}^∞ H_k ζ^k = b̃_0 + b̃_1 ζ + … + b̃_ℓ ζ^ℓ.    (5.95)

By comparison of the coefficients at ζ^i, (i = 0, …, ℓ) on both sides, we find

ã_0 H_0 = b̃_0
ã_1 H_0 + ã_0 H_1 = b̃_1    (5.96)
⋮
ã_ℓ H_0 + … + ã_0 H_ℓ = b̃_ℓ.

When (5.96) is fulfilled, the terms with ζ^i, (i = 0, …, ℓ) on both sides of
(5.95) neutralise each other, hence this equation might be written as

ã_0 Σ_{k=ℓ+1}^∞ H_k ζ^k + ζ ã_1 Σ_{k=ℓ}^∞ H_k ζ^k + … + ζ^ℓ ã_ℓ Σ_{k=1}^∞ H_k ζ^k = O_{nm}.

Canceling both sides by ζ^{ℓ+1} results in

ã_0 Σ_{k=0}^∞ H_{k+ℓ+1} ζ^k + ã_1 Σ_{k=0}^∞ H_{k+ℓ} ζ^k + … + ã_ℓ Σ_{k=0}^∞ H_{k+1} ζ^k = O_{nm}.

If we make the coefficients at all powers of ζ on the left side equal to zero,
then we find

ã_0 H_{k+ℓ+1} + ã_1 H_{k+ℓ} + … + ã_ℓ H_{k+1} = O_{nm},  (k = 0, 1, …),

which is equivalently expressed by

ã_0 H_{k+ℓ} + ã_1 H_{k+ℓ−1} + … + ã_ℓ H_k = O_{nm},  (k = 1, 2, …).

From this it is seen that for k ≥ 1, the elements of the weighting sequence satisfy
the homogeneous equation, which is derived from (5.52) for {u} = O_{m1}. Notice
that for det ã_0 = 0, the determination of the matrices H_i, (i = 0, 1, …, ℓ) is
not possible with the help of (5.96). To overcome this difficulty, Relation (5.89)
is recruited. So instead of (5.95), we obtain the result

(A_0 + A_1 ζ + … + A_μ ζ^μ) Σ_{k=0}^∞ H_k ζ^k = B̃_0 + B̃_1 ζ + … + B̃_μ ζ^μ,
k=0

where a corresponding formula to (5.96) arises:

A_0 H_0 = B̃_0
A_1 H_0 + A_0 H_1 = B̃_1    (5.97)
⋮
A_μ H_0 + … + A_0 H_μ = B̃_μ.

Owing to det A_0 ≠ 0, the matrices H_i, (i = 0, 1, …, μ) can be determined. If
(5.97) is valid, we create the recursion formula

A_0 H_{k+μ+1} = −A_1 H_{k+μ} − … − A_μ H_{k+1},  (k = 0, 1, …).

Thus, with the help of the initial conditions (5.97), the weighting sequence
{H} can be calculated.

Example 5.30. Under the conditions of Example 5.29, applying (5.97) and
(5.92), we find

H_0 = [0; 0],  H_1 = [1; 0],  H_2 = [0; 0].

The further elements of the weighting sequence are calculated by means of
the recursion formula

A_0 H_{k+3} = −A_2 H_{k+1},  (k = 0, 1, …)

or

H_{k+3} = [1, 0; −1, 0] H_{k+1}. □

5.5 Forward and Backward Models

1. Suppose the causal LTI process

ã(q)y_k = b̃(q)u_k,    (5.98)

where

ã(q) = ã_0 q^ℓ + … + ã_ℓ,  (n×n),
b̃(q) = b̃_0 q^ℓ + … + b̃_ℓ,  (n×m).    (5.99)

As before, Equation (5.98) is designated as a forward model of the LTI process.
Select a unimodular matrix Φ(q), such that the matrix ã_Φ(q) = Φ(q)ã(q) in
(5.20) becomes row reduced, and consider the equivalent equation

ã_Φ(q)y_k = b̃_Φ(q)u_k,    (5.100)

where b̃_Φ(q) = Φ(q)b̃(q). Let κ_i be the degree of the i-th row of the matrix
ã_Φ(q); then we have

ã_Φ(q) = diag{q^{κ_1}, …, q^{κ_n}} (A_0 + A_1 q^{-1} + … + A_μ q^{-μ}),

where det A_0 ≠ 0. Then the equation of the form¹

a(ζ)y_k = b(ζ)u_k    (5.101)

with

a(ζ) = A_0 + A_1 ζ + … + A_μ ζ^μ = diag{ζ^{κ_1}, …, ζ^{κ_n}} ã_Φ(ζ^{-1})    (5.102)

and

b(ζ) = B_0 + B_1 ζ + … + B_μ ζ^μ = diag{ζ^{κ_1}, …, ζ^{κ_n}} b̃_Φ(ζ^{-1})    (5.103)

is called the associated backward model of the LTI process. From (5.29), we
recognise that b(ζ) is a polynomial matrix.

2. Hereinafter, the matrix

w̃(q) = ã^{-1}(q)b̃(q) = ã_Φ^{-1}(q)b̃_Φ(q)    (5.104)

is called the transfer matrix of the forward model, and the matrix

w(ζ) = a^{-1}(ζ)b(ζ) = ã^{-1}(ζ^{-1})b̃(ζ^{-1}) = ã_Φ^{-1}(ζ^{-1})b̃_Φ(ζ^{-1})    (5.105)

is the transfer matrix of the backward model. From (5.104) and (5.105), we
take the reciprocal relations

w(ζ) = w̃(ζ^{-1}),  w̃(q) = w(q^{-1}).    (5.106)

The matrix ã(q) is named, as before, the eigenoperator of the forward model
and the matrix a(ζ) is the eigenoperator of the backward model. As seen from
(5.102), the eigenoperator a(ζ) of the backward model is independent of the
shape of the matrix b̃(q) in (5.98). Obviously, the matrices a(ζ) and b(ζ) in
(5.101) are not uniquely determined. Nevertheless, as we realise from (5.105),
the transfer matrix w(ζ) is not affected. Moreover, later on we will prove that
the structural properties of the matrix a(ζ) also do not depend on the special
procedure for its construction.
¹ In (5.101), ζ for once means the operator q^{-1}. A distinction from the complex
variable of the ζ-transformation is not made, because the operator q^{-1}, due to
the mentioned difficulties, will not be used later on.

Example 5.31. Consider the forward model

3y_{1,k+2} + y_{2,k+3} + y_{2,k} = u_{k+2} + u_k
2y_{1,k+1} + y_{2,k+2} = u_{k+1}.

In the present case, we have

ã(q) = [3q^2, q^3+1; 2q, q^2],  b̃(q) = [q^2+1; q].

Thus, the transfer matrix of the forward model w̃(q) emerges as

w̃(q) = 1/(q^3 − 2) [q − 1; q^2 − 2]

and the corresponding LTI process is strictly causal. Select the unimodular
matrix

Φ(q) = [1, −q; 0, 1],

so we obtain

ã_Φ(q) = Φ(q)ã(q) = [q^2, 1; 2q, q^2],  b̃_Φ(q) = Φ(q)b̃(q) = [1; q].

Thus, the matrices a(ζ), b(ζ) of the associated backward model take the shape

a(ζ) = [1, ζ^2; 2ζ, 1] = [1, 0; 0, 1] + [0, 0; 2, 0] ζ + [0, 1; 0, 0] ζ^2,
b(ζ) = [ζ^2; ζ] = [0; 1] ζ + [1; 0] ζ^2. □
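The identities (5.102) and (5.105)/(5.106) can be spot-checked numerically for this example by evaluating both sides at a test point; the sketch below uses plain complex arithmetic and a hand-written 2×2 inverse (the test point ζ is an arbitrary choice).

```python
# Numerical spot check for Example 5.31: at a test point zeta, verify
# a(zeta) = diag{zeta^2, zeta^2} a_Phi(1/zeta) and w(zeta) = w_tilde(1/zeta).
def inv2(A):
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[ A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d,  A[0][0] / d]]

def matvec(A, v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

z = 0.3 + 0.4j                                  # arbitrary test point zeta
q = 1.0 / z

a_t  = [[3 * q**2, q**3 + 1], [2 * q, q**2]]    # a-tilde(q)
b_t  = [q**2 + 1, q]                            # b-tilde(q)
aPhi = [[q**2, 1], [2 * q, q**2]]               # Phi(q) a-tilde(q)
a_z  = [[1, z**2], [2 * z, 1]]                  # a(zeta)
b_z  = [z**2, z]                                # b(zeta)

# a(zeta) = diag{zeta^2, zeta^2} a_Phi(1/zeta)
for i in range(2):
    for j in range(2):
        assert abs(a_z[i][j] - z**2 * aPhi[i][j]) < 1e-9

# w(zeta) = a(zeta)^{-1} b(zeta) equals w-tilde(1/zeta) = a-tilde^{-1} b-tilde
w_back = matvec(inv2(a_z), b_z)
w_fwd  = matvec(inv2(a_t), b_t)
assert all(abs(w_back[i] - w_fwd[i]) < 1e-9 for i in range(2))
```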

3.

Lemma 5.32. For the causality of the process (5.98), it is necessary and
sufficient that the transfer matrix w(ζ) is analytical in the point ζ = 0. For
the strict causality of the process (5.98), additionally the fulfilment of the
equation

w(0) = O_{nm}

is necessary and sufficient.

Proof. Necessity: If the process (5.98) is causal, then the matrix w̃(q) is at
least proper and thus analytical in the point q = ∞. Hence Matrix (5.105) is
analytical in the point ζ = 0. If the process is strictly causal, then w̃(∞) =
O_{nm} is valid and we obtain the claim. Thus, the necessity is shown. We realise
that the condition is also sufficient when we reverse the steps of the proof. ∎

Corollary 5.33. The process (5.98) is strictly causal, if and only if the equa-
tion

w(ζ) = ζ w_1(ζ)

holds with a matrix w_1(ζ), which is analytical in the point ζ = 0.
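For the strictly causal process of Example 5.31, the criterion of Lemma 5.32 and the factorisation of Corollary 5.33 can be verified from the closed form w(ζ) = w̃(ζ^{-1}) = [ζ^2 − ζ^3; ζ − 2ζ^3]/(1 − 2ζ^3). A minimal sketch (the sample points are arbitrary):

```python
# Strict causality test (Lemma 5.32) for the transfer matrix of Example 5.31:
# w(zeta) = w_tilde(1/zeta) must be analytic at zeta = 0 with w(0) = O.
def w_backward(z):
    # closed form obtained from w_tilde(q) = [q - 1; q^2 - 2] / (q^3 - 2)
    den = 1.0 - 2.0 * z**3
    return [(z**2 - z**3) / den, (z - 2.0 * z**3) / den]

w0 = w_backward(0.0)
assert w0 == [0.0, 0.0]                  # w(0) = O  ->  strictly causal

# Corollary 5.33: w(zeta)/zeta stays bounded near zeta = 0
for eps in (1e-2, 1e-4, 1e-6):
    w1 = [c / eps for c in w_backward(eps)]
    assert all(abs(c) < 2.0 for c in w1)
```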

4. It will now be shown that the concepts of forward and backward models
are closely connected with the properties of the z- and ζ-transforms of the
solution of Equation (5.98). Indeed, suppose a causal process; then it was
shown above that for a Taylor input sequence {u}, independently of whether
the process is normal or anomalous, Equation (5.98) always possesses the
solution with vanishing initial energy, and its z-transform y*(z) satisfies the
equation

ã(z)y*(z) = b̃(z)u*(z)    (5.107)

that formally coincides with (5.98). In what follows, Relation (5.107) is also
called a forward model of the process (5.98). From (5.107), we receive

y*(z) = w̃(z)u*(z) = ã^{-1}(z)b̃(z)u*(z).

Substituting here ζ^{-1} for z, we obtain

y^0(ζ) = w(ζ)u^0(ζ),

where y^0(ζ), u^0(ζ) are the ζ-transforms of the process output for vanish-
ing initial energy and of the input sequence, respectively. Moreover, w(ζ) is the
transfer matrix of the backward model (5.105). Owing to (5.105), the last
equation might be presented in the form

a(ζ)y^0(ζ) = b(ζ)u^0(ζ)    (5.108)

which coincides with the associated backward model (5.101).

5. The forward model (5.107) is called controllable, if the pair (ã(z), b̃(z)) is
irreducible, i.e. for all finite z

rank R̃_h(z) = rank [ã(z)  b̃(z)] = n

is true. Analogously, the backward model (5.108) is called controllable, if for
all finite ζ

rank R_h(ζ) = rank [a(ζ)  b(ζ)] = n

is true. We will derive some general properties concerning the controllability
of forward and backward models.

6.

Lemma 5.34. If the forward model (5.107) is controllable, then the associated
backward model (5.108) is also controllable.

Proof. Let the model (5.107) be controllable. Then the row reduced model
(5.100) is also controllable and hence for all finite z

rank [ã_Φ(z)  b̃_Φ(z)] = n.

Let z_0 = 0, z_1, …, z_q be the distinct eigenvalues of the matrix ã(z). Then the
matrix ã_Φ(z) has the same eigenvalues. From Formula (5.102), we gain with
respect to det A_0 ≠ 0 that the set of eigenvalues of the matrix a(ζ) contains
the quantities ζ_i = z_i^{-1}, i = 1, …, q. From the above rank condition for all
1 ≤ i ≤ q, we obtain

rank [a(ζ_i)  b(ζ_i)] = rank ( diag{z_i^{−κ_1}, …, z_i^{−κ_n}} [ã_Φ(z_i)  b̃_Φ(z_i)] ) = n.

Thus, due to Lemma 1.42, the claim emerges. ∎

7. Let the forward model (5.107) be controllable and let the equation

w̃(z) = C(zI_p − A)^{-1}B + D    (5.109)

describe a minimal standard realisation of the transfer matrix w̃(z).
Then, with the help of (5.106), we find

w(ζ) = ζC(I_p − ζA)^{-1}B + D.    (5.110)

The expression on the right side is called a minimal standard realisation of
the transfer matrix of the associated backward model. Besides, the rational
matrix

w_0(ζ) = ζC(I_p − ζA)^{-1}B

might be seen as the transfer matrix of the PMD

τ_0(ζ) = (I_p − ζA, B, ζC).

Lemma 5.35. Under the named suppositions, the PMD τ_0(ζ) is minimal, i.e.
the pairs (I_p − ζA, B) and [I_p − ζA, C] are irreducible.

Proof. Let z_0 = 0, z_1, …, z_q be the eigenvalues of the matrix A in (5.109).
Then the eigenvalues of the matrix I_p − ζA turn out to be the numbers ζ_1 =
z_1^{-1}, …, ζ_q = z_q^{-1}. Since the representation (5.109) is minimal, for all finite z,
it follows

rank [zI_p − A  B] = p,  rank [zI_p − A; C] = p.

Thus, for all i = 1, …, q

rank [I_p − ζ_iA  ζ_iB] = rank [z_iI_p − A  B] = p,
rank [I_p − ζ_iA; ζ_iC] = rank [I_p − ζ_iA; C] = rank [z_iI_p − A; C] = p.

These conditions together with Lemma 1.42 imply the irreducibility of the
pairs

(I_p − ζA, B),  [I_p − ζA, C]. ∎

Lemma 5.36. Let z_0 = 0, z_1, …, z_q be the different eigenvalues of the matrix
A and a_1(z), …, a_ν(z) be the totality of its invariant polynomials different
from one, having the shape

a_1(z) = z^{λ_{01}} (z − z_1)^{λ_{11}} ⋯ (z − z_q)^{λ_{q1}}
⋮    (5.111)
a_ν(z) = z^{λ_{0ν}} (z − z_1)^{λ_{1ν}} ⋯ (z − z_q)^{λ_{qν}},

where

λ_{si} ≥ λ_{s,i−1}, (s = 0, …, q; i = 2, …, ν),  Σ_{s=0}^{q} Σ_{i=1}^{ν} λ_{si} = p.

Then the totality of invariant polynomials different from one of the matrix
I_p − ζA consists of the polynomials ā_1(ζ), …, ā_ν(ζ) having the shape

ā_1(ζ) = (ζ − z_1^{-1})^{λ_{11}} ⋯ (ζ − z_q^{-1})^{λ_{q1}}
⋮    (5.112)
ā_ν(ζ) = (ζ − z_1^{-1})^{λ_{1ν}} ⋯ (ζ − z_q^{-1})^{λ_{qν}}.

Proof. Firstly, we consider the case where the matrix A is a Jordan block
(1.76) of dimension ρ:

A = J_ρ(a) = [a, 1, 0, …, 0; 0, a, 1, …, 0; ⋯; 0, 0, …, a, 1; 0, 0, …, 0, a].    (5.113)

For a ≠ 0, we obtain

I_ρ − ζJ_ρ(a) = [1−ζa, −ζ, 0, …, 0; 0, 1−ζa, −ζ, …, 0; ⋯; 0, 0, …, 1−ζa, −ζ; 0, 0, …, 0, 1−ζa].    (5.114)

Besides,

det[I_ρ − ζJ_ρ(a)] = (1 − ζa)^ρ    (5.115)

and Matrix (5.114) possesses only the eigenvalue ζ = a^{-1} of multiplicity ρ.
For ζ = a^{-1}, from (5.114) we receive

I_ρ − a^{-1}J_ρ(a) = [0, −a^{-1}, 0, …, 0; 0, 0, −a^{-1}, …, 0; ⋯; 0, 0, …, 0, −a^{-1}; 0, 0, …, 0, 0].    (5.116)

Obviously, rank[I_ρ − a^{-1}J_ρ(a)] = ρ − 1 is true. Thus, owing to Theorem 1.28,
Matrix (5.114) possesses one elementary divisor (ζ − a^{-1})^ρ. For a = 0, from
(5.114) we obtain

I_ρ − ζJ_ρ(0) = [1, −ζ, 0, …, 0; 0, 1, −ζ, …, 0; ⋯; 0, 0, …, 1, −ζ; 0, 0, …, 0, 1].    (5.117)

Obviously

det[I_ρ − ζJ_ρ(0)] = 1;

thus Matrix (5.117) is unimodular and has no elementary divisor.
Now consider the general case, and let A be expressed in the Jordan form

A = U diag{J_{λ_{01}}(0), …, J_{λ_{11}}(z_1), …, J_{λ_{qν}}(z_q)} U^{-1},

where U is a certain non-singular matrix. Hence

I_p − ζA = U diag{I − ζJ_{λ_{01}}(0), …, I − ζJ_{λ_{11}}(z_1), …, I − ζJ_{λ_{qν}}(z_q)} U^{-1}.    (5.118)

According to Lemma 1.25, the set of elementary divisors of the block-diagonal
matrix (5.118) consists of the unification of the sets of elementary divisors of
its diagonal blocks. As follows from (5.113)–(5.117), no elementary divisor of
Matrix (5.118) is assigned to the eigenvalue zero, and a non-zero eigenvalue
z_k corresponds to the totality of elementary divisors

(ζ − z_k^{-1})^{λ_{k1}}, …, (ζ − z_k^{-1})^{λ_{kν}},

from which (5.112) directly follows. ∎


8. The next theorem establishes the connection between the eigenoperators
of controllable forward and backward models.

Theorem 5.37. Suppose the forward and backward models (5.107) and
(5.108) are controllable, and the sequence of the invariant polynomials different
from 1 of the matrix ã(z) has the form (5.111). Then the sequence of
invariant polynomials different from 1 of the matrix a(ζ) has the form (5.112).

Proof. The proof is divided into several steps.

a) When the right side of (5.109) defines a minimal standard realisation of
the transfer matrix of a controllable forward model, then the sequences of
the invariant polynomials different from 1 of the matrices ã(z) and zI_p − A
coincide. Thus, we get

det ã(z) ∼ det(zI_p − A).

This statement directly emerges from the content of Section 2.4.

b) In the same way, we conclude that, when the right side of (5.110) presents
a minimal standard realisation of the transfer matrix of the controllable
backward model (5.108), then

det a(ζ) ∼ det(I_p − ζA).

Thus, the sequences of invariant polynomials different from 1 of the ma-
trices a(ζ) and I_p − ζA coincide, because the pairs (I_p − ζA, B) and
[I_p − ζA, C] are irreducible.

c) Owing to Lemma 5.36, the sets of invariant polynomials of the matrices
zI_p − A and I_p − ζA are connected by Relations (5.111) and (5.112), such
that with respect to a) and b) analogous connections also exist between
the sets of invariant polynomials of the matrices ã(z) and a(ζ). ∎

Corollary 5.38. The last equivalence implies

det a(0) ≠ 0.

Hence the eigenoperator of a controllable backward model does not possess a
zero eigenvalue.

Corollary 5.39. Denote

det ã(z) = Δ̃(z),  det a(ζ) = Δ(ζ)    (5.119)

and let deg Δ̃(z) = p. Then

Δ(ζ) ∼ ζ^p Δ̃(ζ^{-1}).    (5.120)

Proof. Assume (5.111); then, with

η_s = Σ_{i=1}^{ν} λ_{si}, (s = 0, …, q),  Σ_{s=0}^{q} η_s = p,

and taking advantage of (5.112), we can write

Δ̃(z) ∼ a_1(z) ⋯ a_ν(z) = z^{η_0} (z − z_1)^{η_1} ⋯ (z − z_q)^{η_q},
Δ(ζ) ∼ ā_1(ζ) ⋯ ā_ν(ζ) = (ζ − z_1^{-1})^{η_1} ⋯ (ζ − z_q^{-1})^{η_q}.

Thus, (5.120) is ensured. ∎

In the following, the polynomials (5.119) are referred to as the character-
istic polynomials of the forward and of the backward model, respectively.

Remark 5.40. For an arbitrary polynomial f(ζ) with deg f(ζ) = p, the poly-
nomial

f̄(ζ) = ζ^p f(ζ^{-1})

usually is designated as the reciprocal to f(ζ), [14]. Thus, Relation (5.120)
might be written in the form

Δ(ζ) ∼ Δ̃̄(ζ),

i.e. the characteristic polynomial of the controllable backward model is equiv-
alent to the reciprocal of the characteristic polynomial of the controllable forward
model.
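Corollary 5.39 can be illustrated with the matrices of Example 5.31, for which det ã(z) = z^4 − 2z (hence p = 4) and det a(ζ) = 1 − 2ζ^3; here the equivalence (5.120) even holds with constant 1. A sketch checking the relation at sample points (the points are arbitrary test values):

```python
# Check Delta(zeta) ~ zeta^p * Delta_tilde(1/zeta) (Corollary 5.39) for the
# matrices of Example 5.31, where Delta_tilde(z) = z^4 - 2z and p = 4.
def det_a_tilde(z):
    # det of the forward eigenoperator [3z^2, z^3+1; 2z, z^2]
    return 3 * z**2 * z**2 - (z**3 + 1) * 2 * z          # = z^4 - 2z

def det_a_back(z):
    # det of the backward eigenoperator [1, zeta^2; 2 zeta, 1]
    return 1 * 1 - z**2 * 2 * z                          # = 1 - 2 zeta^3

p = 4
for z in (0.5, -1.3, 0.2 + 0.7j):
    assert abs(det_a_back(z) - z**p * det_a_tilde(1.0 / z)) < 1e-9
```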

9. The next assertion can be interpreted as a completion of Theorem 5.37.

Theorem 5.41. Let ã(z) be a non-singular n×n polynomial matrix and Φ(z)
a unimodular matrix, such that ã_Φ(z) = Φ(z)ã(z) becomes row reduced.
Furthermore, let κ_i be the degree of the i-th row of the matrix ã_Φ(z), and
build a(ζ) with the help of (5.102). Then for the matrix a(ζ), all assertions of
Theorem 5.37 and its corollaries are true.

Proof. Consider the controllable forward model

ã(z)y*(z) = Φ^{-1}(z)u*(z).

Multiplying this from the left by Φ(z), we obtain the row reduced model

ã_Φ(z)y*(z) = I_n u*(z)

and hence both models are causal. Passing from the last model to the associ-
ated backward model by applying Relations (5.102), (5.103), we get

a(ζ)y^0(ζ) = b(ζ)u^0(ζ)

and all assertions of Theorem 5.41 emerge from Theorem 5.37 and its corol-
laries. ∎

10. As shown above, the set of eigenoperators of the associated backward
model does not depend on the matrix b̃(z) in (5.107), and it can be found by
Formula (5.102). The reverse statement is in general not true. Therefore, the
transition from a backward model to the associated forward model has to be
considered separately.

a) For a given controllable backward model

a(ζ)y^0(ζ) = b(ζ)u^0(ζ),    (5.121)

the transfer matrix of the associated controllable forward model can be
designed with the help of the ILMFD

ã^{-1}(z)b̃(z) = a^{-1}(z^{-1})b(z^{-1}).
b) The following lemma provides a numerically well posed method.

Lemma 5.42. Let a controllable backward model (5.121) be given, where
the matrix a(ζ) has the form (5.102) and det A_0 ≠ 0. Furthermore, let β_i
be the degree of the i-th row of the matrix

R_h(ζ) = [a(ζ)  b(ζ)].

Introduce the polynomial matrices

ã(z) = diag{z^{β_1}, …, z^{β_n}} a(z^{-1}),
b̃(z) = diag{z^{β_1}, …, z^{β_n}} b(z^{-1}).    (5.122)

Then, under the condition

rank [ã(0)  b̃(0)] = n,    (5.123)

the pair (ã(z), b̃(z)) defines an associated controllable forward model.

Proof. The proof follows the reasoning for Lemma 5.34. ∎

Example 5.43. Consider the controllable backward model (5.121) with

a(ζ) = [1, 2ζ; 1+ζ, 1],  b(ζ) = [ζ; ζ^2 + 1].    (5.124)

In this case, we have β_1 = 1, β_2 = 2 and the matrices (5.122) take the
form

ã(z) = [z, 2; z^2 + z, z^2],  b̃(z) = [1; 1 + z^2].

Here Condition (5.123) is satisfied. Thus, these matrices define an associated
controllable forward model. □
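Condition (5.123) can be verified for this example by exhibiting a non-vanishing 2×2 minor of [ã(0) b̃(0)]; a tiny dependency-free sketch:

```python
# Rank test (5.123) for Example 5.43: with a_tilde(0) = [0, 2; 0, 0] and
# b_tilde(0) = [1; 1], the compound [a_tilde(0)  b_tilde(0)] must have rank 2.
M = [[0.0, 2.0, 1.0],
     [0.0, 0.0, 1.0]]

def rank_2x3(M):
    # a 2x3 matrix has rank 2 iff some 2x2 minor is non-zero
    minors = [M[0][i] * M[1][j] - M[0][j] * M[1][i]
              for i in range(3) for j in range(i + 1, 3)]
    if any(abs(m) > 1e-12 for m in minors):
        return 2
    return 1 if any(abs(x) > 1e-12 for row in M for x in row) else 0

assert rank_2x3(M) == 2      # Condition (5.123) holds
```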

11. As just shown, for a known eigenoperator ã(z) of the forward model, the
set of all eigenoperators of the associated controllable backward models can
be generated. When, with the aid of Formula (5.102), one eigenoperator a_0(ζ)
has been designed, then the set of all such operators is determined by the
relation

a(ζ) = ψ(ζ)a_0(ζ),

where ψ(ζ) is any unimodular matrix. The described procedure does not de-
pend on the input operator b̃(z). However, the reverse pass from an eigen-
operator a(ζ) of a controllable backward model to the eigenoperator ã(z) in
general requires additional information about the input operator b(ζ). In this
connection, we ask for general rules for the transition from the matrix a(ζ) to
the matrix ã(z).

Theorem 5.44. Let the two controllable backward models

a(ζ)y^0(ζ) = b_1(ζ)u^0(ζ),
a(ζ)x^0(ζ) = b_2(ζ)v^0(ζ)    (5.125)

be given, where a(ζ), b_1(ζ) and b_2(ζ) are polynomial matrices of dimensions
n×n, n×m and n×ℓ, respectively. Furthermore, let ã_1(z) and ã_2(z) be the
eigenoperators of the controllable forward models associated to (5.125). Then
the relations

ã_1(z) = ν_1(z)ã_0(z),  ã_2(z) = ν_2(z)ã_0(z)

are true, where ν_1(z) and ν_2(z) are nilpotent polynomial matrices, i.e. they
only possess the eigenvalue zero, and the n×n polynomial matrix ã_0(z) can
be chosen independently of the matrices b_1(ζ) and b_2(ζ); it is only committed
by the matrix a(ζ).

Proof. Consider the transfer matrices of the models (5.125)

w_1(ζ) = a^{-1}(ζ)b_1(ζ),  w_2(ζ) = a^{-1}(ζ)b_2(ζ).    (5.126)

Since the matrices (5.126), roughly speaking, are not strictly proper, they could
be written as

w_1(ζ) = w̄_1(ζ) + d_1(ζ),  w_2(ζ) = w̄_2(ζ) + d_2(ζ),    (5.127)

where d_1(ζ) and d_2(ζ) are polynomial matrices, and the matrices w̄_1(ζ) and
w̄_2(ζ) are strictly proper. But the right sides of Relations (5.126) are ILMFDs,
so Lemma 2.15 delivers that the relations

w̄_1(ζ) = a^{-1}(ζ)b̄_1(ζ),  w̄_2(ζ) = a^{-1}(ζ)b̄_2(ζ),    (5.128)

where

b̄_1(ζ) = b_1(ζ) − a(ζ)d_1(ζ),  b̄_2(ζ) = b_2(ζ) − a(ζ)d_2(ζ),

determine ILMFDs of the matrices (5.128). Let us have the minimal standard
realisation

w̄_1(ζ) = C(ζI_q − G)^{-1}B_1.    (5.129)

In (5.129), the matrix G is non-singular because of det a(0) ≠ 0. Moreover,
q = Mdeg w̄_1(ζ) is valid. Thus, we get from (5.128)

w̄_2(ζ) ∼_l w̄_1(ζ)

and owing to Theorem 2.56, the matrix w̄_2(ζ) allows the representation

w̄_2(ζ) = C(ζI_q − G)^{-1}B_2.    (5.130)

Since the right sides of (5.128) are ILMFDs,

Mdeg w̄_1(ζ) = Mdeg w̄_2(ζ) = deg det a(ζ).

Hence the right side of (5.130) is a minimal standard realisation of the matrix
w̄_2(ζ). Inserting (5.129) and (5.130) into (5.127), we arrive at

w_1(ζ) = C(ζI_q − G)^{-1}B_1 + d_1(ζ),
w_2(ζ) = C(ζI_q − G)^{-1}B_2 + d_2(ζ),

where the matrices C and G do not depend on the matrices b_1(ζ) and b_2(ζ)
configured in (5.125). Substituting now z^{-1} for ζ, we obtain the transfer ma-
trices of the forward models

w̃_1(z) = −C(zI_q − G^{-1})^{-1}G^{-2}B_1 − CG^{-1}B_1 + d_1(z^{-1}),
w̃_2(z) = −C(zI_q − G^{-1})^{-1}G^{-2}B_2 − CG^{-1}B_2 + d_2(z^{-1}),    (5.131)

where the realisations (G^{-1}, −G^{-2}B_1, C) and (G^{-1}, −G^{-2}B_2, C) turn out to be
minimal, because the realisations (G, B_1, C) and (G, B_2, C) are minimal. Build
the ILMFD

C(zI_q − G^{-1})^{-1} = ã_0^{-1}(z)b̃_0(z).    (5.132)

The matrix ã_0(z) does not depend on the matrices b_1(ζ) or b_2(ζ) in (5.126),
because the matrices C and G do not. Besides, ã_0(z) has no eigenvalues equal
to zero, because G^{-1} is regular. Using (5.132), from (5.131) we gain

w̃_1(z) = ã_0^{-1}(z) [−b̃_0(z)G^{-2}B_1 − ã_0(z)CG^{-1}B_1 + ã_0(z)d_1(z^{-1})],
w̃_2(z) = ã_0^{-1}(z) [−b̃_0(z)G^{-2}B_2 − ã_0(z)CG^{-1}B_2 + ã_0(z)d_2(z^{-1})].    (5.133)

The matrices in the brackets possess poles only in the point z = 0. Thus, in
the ILMFDs

−b̃_0(z)G^{-2}B_1 − ã_0(z)CG^{-1}B_1 + ã_0(z)d_1(z^{-1}) = ν_1^{-1}(z)q_1(z),
−b̃_0(z)G^{-2}B_2 − ã_0(z)CG^{-1}B_2 + ã_0(z)d_2(z^{-1}) = ν_2^{-1}(z)q_2(z)

the matrices ν_1(z) and ν_2(z) are nilpotent. Applying this and (5.133), as well
as Corollary 2.19, we find out that the ILMFDs

w̃_1(z) = [ν_1(z)ã_0(z)]^{-1} q_1(z),  w̃_2(z) = [ν_2(z)ã_0(z)]^{-1} q_2(z)

exist, from which all assertions of the Theorem may be read. ∎

12. Sometimes in the engineering literature, the passage from the original controllable forward model (5.98) to an associated backward model is made by procedures that are motivated by the SISO case. Then simply

    a(ζ) = ζ^ℓ ã(ζ^{-1}),    b(ζ) = ζ^ℓ b̃(ζ^{-1})                             (5.134)

is applied. It is easy to see that this procedure does not work, when det ã_0 = 0 in (5.98). In this case, we would get det a(0) = 0, which is impossible for a controllable backward model. If, however, in (5.99) det ã_0 ≠ 0 takes place, i.e. the original process is normal, then Formula (5.134) delivers a controllable associated backward model.

13. In the recent literature [69, 80, 115], the backward model is usually written in the form

    a(q^{-1}) y_k = b(q^{-1}) u_k ,                                            (5.135)

where q^{-1} is the right-shift operator that is inverse to the operator q. Per definition, we have

    q^{-1} y_k = y_{k-1},    q^{-1} u_k = u_{k-1} .                            (5.136)

Example 5.45. The backward model corresponding to the matrices (5.124) is written with the notation (5.136) in the form

    y_{1,k} + 2 y_{2,k-1} = u_{k-1} ,
                                                                               (5.137)
    y_{1,k} + y_{1,k-1} + y_{2,k} = u_{k-2} + u_k .


As was demonstrated in [14], a strict foundation for using the operator q^{-1} for a correct description of discrete LTI processes meets genuine difficulties. The reason arises from the fact that the operator q is only invertible over the set of two-sided unbounded sequences. If, however, the equations of the LTI process (5.4) are only considered for k ≥ 0, then the application of the operator q^{-1} needs special attention. From this point of view, the application of the ζ-transformation for investigating the properties of backward models seems more rigorous. Nevertheless, the description in the form (5.135) sometimes appears more convenient, and it will also be used later on.
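The caveat about applying q^{-1} to one-sided sequences can be made concrete with a small sketch (the function name and the zero pre-history convention below are our own, not from the book):

```python
# Right-shift operator q^{-1} on a one-sided sequence {y_0, y_1, ...}.
# For k = 0 the value y_{-1} is not available -- exactly the "special
# attention" mentioned in the text; here we assume zero pre-history.

def shift_back(seq, fill=0.0):
    """Apply q^{-1}: (q^{-1} y)_k = y_{k-1}, with y_{-1} := fill."""
    return [fill] + list(seq[:-1])

y = [1.0, 2.0, 3.0, 4.0]
print(shift_back(y))            # [0.0, 1.0, 2.0, 3.0]
```

On two-sided sequences no such fill value is needed, which is why the operator q is invertible only there.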

5.6 Stability of Discrete-time LTI Systems

1. The vector sequence {y} = {y_0, y_1, . . . } is called stable, if the inequality

    ‖y_k‖ < c ρ^k ,    (k = 0, 1, . . . )

is true, where ‖·‖ is a certain norm for finite-dimensional number vectors and c, ρ are positive constants with 0 < ρ < 1. If such an estimate does not hold for the sequence {y}, then it is called unstable.

The homogeneous vector difference equation

    ã_0 y_{k+ℓ} + . . . + ã_ℓ y_k = O_{n1}                                     (5.138)

is called stable, if all of its solutions are stable sequences. Equations of the form (5.138) that are not stable will be called unstable.

2. The next theorem establishes a criterion for the stability of Equation (5.138).

Theorem 5.46. Suppose the n × n polynomial matrix

    ã(z) = ã_0 z^ℓ + . . . + ã_ℓ

be non-singular. Let z_i, (i = 1, . . . , q) be the eigenvalues of the matrix ã(z), i.e. the roots of the equation

    Δ̃(z) = det ã(z) = 0 .                                                     (5.139)

Then, for the stability of Equation (5.138), it is necessary and sufficient that

    |z_j| < 1,    (j = 1, . . . , q) .                                         (5.140)

Proof. Sufficiency: Let ψ(z) be a unimodular matrix, such that the matrix

    ā(z) = ψ(z) ã(z)

is row reduced. Thus, the equivalent equation

    ā(z) y_k = O_{n1}                                                          (5.141)

is stable or unstable at the same time as Equation (5.138), and the equation det ā(z) = 0 possesses the same roots as Equation (5.139). Since the zero input is a Taylor sequence, owing to Lemma 5.22, all solutions of Equation (5.19) are Taylor sequences. Passing in Equation (5.141) to the z-transforms, we obtain the result that for any initial conditions, the transformed solution of Equation (5.141) has the shape

    ȳ(z) = R(z) / Δ̃(z) ,

where R(z) is a polynomial vector. Besides, under Condition (5.140), the inverse z-transformation formula [1, 123] ensures that all originals according to the transforms of (5.141) must be stable. Thus the sufficiency is shown.
Necessity: It is shown that, if Equation (5.139) has a root z_0 with |z_0| ≥ 1, then Equation (5.138) is unstable. Let d ≠ O_{n1} be a constant vector, which is a solution of the equation

    ã(z_0) d = O_{n1} .

Then, we directly verify that

    y_k = z_0^k d,    (k = 0, 1, . . .)

is a solution of Equation (5.138). Besides, due to |z_0| ≥ 1, this sequence is unstable and hence Equation (5.138) is unstable.
unstable and hence Equation (5.138) is unstable.

3. Let a(ζ) be the eigenoperator of the associated backward model designed by Formula (5.102). Then the homogeneous process equation might be written in form of the backward model

    a(ζ) y_k = a_0 y_k + a_1 y_{k-1} + . . . + a_ℓ y_{k-ℓ} = O_{n1}            (5.142)

with det a_0 ≠ 0. Denote

    Δ(ζ) = det a(ζ) ,

then the stability condition of Equation (5.142) may be formulated as follows.

Theorem 5.47. For the stability of Equation (5.142), it is necessary and sufficient that the characteristic polynomial

    det(a_0 + a_1 ζ + . . . + a_ℓ ζ^ℓ) = det a(ζ) = Δ(ζ)

has no roots inside the unit disc or on its border.

Proof. The proof follows immediately from Theorems 5.41–5.46.

Corollary 5.48. As a special case, Equation (5.142) is stable, if Δ(ζ) = const. ≠ 0, i.e. if the matrix a(ζ) is unimodular.
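A scalar sketch contrasting Theorem 5.47 with Theorem 5.46: the same recursion written backward and forward has reciprocal roots, so "all roots outside the closed unit disc" in ζ corresponds to "all roots inside" in z (the coefficients are our toy choice):

```python
import numpy as np

# Backward equation  y_k - 0.5*y_{k-1} = 0:  Delta(zeta) = 1 - 0.5*zeta,
# single root zeta = 2 outside the closed unit disc -> stable (Thm 5.47).
# The forward form has det(z - 0.5) = 0, root 0.5 inside the disc,
# matching Theorem 5.46: the two root sets are reciprocal.
backward_roots = np.roots([-0.5, 1.0])   # coeffs of -0.5*zeta + 1
forward_roots = np.roots([1.0, -0.5])    # coeffs of z - 0.5
print(backward_roots, forward_roots)     # roots 2.0 and 0.5
assert np.all(np.abs(backward_roots) > 1)
assert np.all(np.abs(forward_roots) < 1)
```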

4. Further on, the non-singular n × n polynomial matrices ã(z) and a(ζ) are called stable, if the conditions of Theorems 5.46 and 5.47 are true for them. Matrices ã(z) and a(ζ) are named unstable, when they are not stable. The sets of real stable polynomial matrices ã(z) and a(ζ) are denoted by R⁺_{nn}[z] and R⁺_{nn}[ζ], respectively. For the sets of corresponding scalar polynomials, we write R⁺[z] and R⁺[ζ], respectively.

5. In the following considerations, the stability conditions for Equations (5.138) and (5.142) will be applied to explain the stability of the inverse matrices ã^{-1}(z) and a^{-1}(ζ).

In what follows, the rational matrix w̃(z) ∈ R_{nm}(z) is called stable, if its poles z_1, . . . , z_q satisfy Condition (5.140). The rational matrix w(ζ) is called stable, if it is free of poles inside or on the border of the unit disc. In the light of this definition, any polynomial matrix is a stable rational matrix. The sets of real stable matrices w̃(z) and w(ζ) are denoted by R⁺_{nm}(z) and R⁺_{nm}(ζ), respectively. Rational matrices, which are not stable, are named unstable.

Theorem 5.49. Equations (5.138) and (5.142) are stable, if and only if the rational matrices ã^{-1}(z) and a^{-1}(ζ) are stable.

Proof. Applying Formula (2.114), we obtain the irreducible representation

    ã^{-1}(z) = adj ã(z) / d_{ã min}(z) ,                                      (5.143)

where d_{ã min}(z) is the minimal polynomial of the matrix ã(z). Since the set of roots of the polynomial d_{ã min}(z) contains all roots of the polynomial det ã(z), the matrices ã(z) and ã^{-1}(z) are at the same time stable or unstable. The same can be said about the matrices a(ζ) and a^{-1}(ζ).

6. From now on, the forward model (5.107) and the backward model (5.108) are called stable, when the matrices ã(z) and a(ζ) are stable. For the considered class of systems, this definition is de facto equivalent to asymptotic stability in the sense of Lyapunov.

Theorem 5.50. Let the forward model (5.107) and the associated backward model (5.108) be controllable. Then for the stability of the corresponding models, it is necessary and sufficient that their transfer matrices w̃(z) resp. w(ζ) are stable.

Proof. Using (5.104) and (5.143), we obtain

    w̃(z) = [adj ã(z) b̃(z)] / d_{ã min}(z) .

Under the made suppositions, this matrix is irreducible, and this fact arises from Theorem 2.42. Thus, the matrices ã(z) and w̃(z) are either both stable or both unstable. This fact proves Theorem 5.50 for forward models. The proof for backward models runs analogously.

5.7 Closed-loop LTI Systems of Finite Dimension


1. The input signal {u} of the process in Fig. 5.1 is now separated into two
components. The rst component is still denoted by {u} and contains the
directly controllable quantities called as the control input. Besides the control
input, additional quantities eect the process L, that depend on external fac-
tors. In Fig. 5.2, these quantities are assigned by the sequence {g} called as
the disturbance input. The forward model of this process might be represented
by the equation

{gk }
- {yk }
L -
-
{uk }
Fig. 5.2. Process with two inputs

(q)yk = b(q)uk + f(q)gk ,


a (k = 0, 1, . . . ) , (5.144)

where a(q) Rnn [q], b(z) Rnm [q], f(q) Rn [q]. In future, we will only
consider non-singular processes, for which det a (q) / 0 is true. When this
condition is ensured, the rational matrices

w(q)
1 (q)b(q),
=a w 1 (q)f(q)
g (q) = a (5.145)

are explained, and they will be called the control and disturbance transfer
matrix, respectively.
For the further investigations, we always suppose the following assumptions:
A1 The matrix w̃_g(q) is at least proper, i.e. the process is causal with respect to the input {g}.
A2 The matrix w̃(q) is strictly proper, i.e. the process is strictly causal with respect to the input {u}. This assumption is motivated by the following reasons:
   a) In further considerations, only such kinds of models will occur.
   b) This assumption enormously simplifies the answer to the question about the causality of the controller.
   c) It can be shown that, when the matrix w̃(q) is only proper, then the closed-loop system de facto contains algebraic loops, which cannot appear in real sampled-data control systems [19].

The process (5.144) is called controllable by the control input, if for all finite q

    rank R_h(q) = rank [ ã(q)  b̃(q) ] = n ,                                   (5.146)

and it is named controllable by the disturbance input, if for all finite q

    rank R_g(q) = rank [ ã(q)  f̃(q) ] = n .

2. To impart the process (5.144) appropriate dynamical properties, a controller R is fed back, which results in the structure shown in Fig. 5.3.

Fig. 5.3. Controlled process

The controller R itself is an at least causal discrete-time LTI object, which is given by the forward model

    α̃(q) u_k = β̃(q) y_k

with α̃(q) ∈ R_{mm}[q], β̃(q) ∈ R_{mn}[q]. Together with Equation (5.144), this performs a model of the closed-loop system:

    ã(q) y_k − b̃(q) u_k = f̃(q) g_k
                                                                               (5.147)
    −β̃(q) y_k + α̃(q) u_k = O_{m1} .
3. The dynamical properties of the closed-loop system (5.147) are characterised by the polynomial matrix

    Q̃_l(q, α̃, β̃) = [ ã(q)    −b̃(q)
                       −β̃(q)    α̃(q) ] ,                                     (5.148)

which is named the (left) characteristic matrix of the forward model of the closed-loop system. A wide class of control problems might be expressed purely algebraically.

Abstract control problem. For a given pair (ã(q), b̃(q)) with strictly proper transfer matrix w̃(q), find the set of pairs (α̃(q), β̃(q)) such that the matrix

    w̃_d(q) = α̃^{-1}(q) β̃(q)

is at least proper and the characteristic matrix (5.148) adopts certain prescribed properties. Besides, the closed-loop system (5.147) has to be causal.

4. For the solution of many control problems, it is suitable to use, in addition to the forward model (5.144) of the process, the associated backward model. We will give a general approach for the design of such models, which supposes the controllability of the process by the control input. For this reason, we write (5.144) in the form

    y_k = w̃(q) u_k + w̃_g(q) g_k .

Substituting here ζ^{-1} for q, we obtain

    y_k = w(ζ) u_k + w_g(ζ) g_k ,                                              (5.149)

where
    w(ζ) = w̃(ζ^{-1}),    w_g(ζ) = w̃_g(ζ^{-1}) .

When we have the ILMFD

    w(ζ) = a^{-1}(ζ) b_0(ζ) ,                                                  (5.150)

then (5.149) might be written as

    a(ζ) y_k = b_0(ζ) u_k + a(ζ) w_g(ζ) g_k .                                  (5.151)

For the further arrangements the next property is necessary.

Lemma 5.51. Let the process (5.144) be controllable by the control input. Then the matrix

    b_g(ζ) = a(ζ) w_g(ζ)                                                       (5.152)

turns out to be a polynomial matrix.

Proof. Due to supposition (5.146), the first relation in (5.145) is an ILMFD of the matrix w̃(q). Thus from (5.145), we obtain

    w̃_g(q) ≺_l w̃(q) ,                                                        (5.153)

because the matrix ã(q) reduces the matrix w̃_g(q). Starting with the minimal standard realisation

    w̃(q) = C(q I_p − A)^{-1} B + D ,

where A, B, C, D are constant matrices of appropriate dimension, we find from (5.153) with the help of Theorem 2.56, that the matrix w̃_g(q) allows the representation

    w̃_g(q) = C(q I_p − A)^{-1} B_g + D_g ,                                    (5.154)

where B_g and D_g are constant matrices of the dimensions p × ℓ and n × ℓ, respectively. Substituting ζ^{-1} for q, we obtain the standard realisation of the matrix w(ζ):

    w(ζ) = ζ C(I_p − ζA)^{-1} B + D ,

which is minimal due to Lemma 5.35, i.e. the pairs (I_p − ζA, B) and [I_p − ζA, C] are irreducible. Thus, when we build the ILMFD

    C(I_p − ζA)^{-1} = a_1^{-1}(ζ) b_1(ζ) ,                                    (5.155)

then the right side of the formula

    w(ζ) = a_1^{-1}(ζ) [ζ b_1(ζ) B + a_1(ζ) D]

turns out as an ILMFD of the matrix w(ζ). Hence the right side of (5.150) is also an ILMFD of the matrix w(ζ), such that

    a(ζ) = η(ζ) a_1(ζ)

is valid with a unimodular matrix η(ζ). From (5.154), we find

    w_g(ζ) = w̃_g(ζ^{-1}) = ζ C(I_p − ζA)^{-1} B_g + D_g .

Therefore, using (5.155), we realise that

    a(ζ) w_g(ζ) = η(ζ) [ζ b_1(ζ) B_g + a_1(ζ) D_g] = b_g(ζ)

is a polynomial matrix.

Inserting (5.152) into (5.151), we obtain the wanted backward model of the form

    a(ζ) y_k = b_0(ζ) u_k + b_g(ζ) g_k .

4. Due to the supposed strict causality of the process with respect to the control input, the conditions

    det a(0) ≠ 0,    b_0(0) = O_{nm}                                           (5.156)

hold. That is why for further considerations, the associated backward model of the process is denoted in the form

    a(ζ) y_k = b(ζ) u_k + b_g(ζ) g_k ,                                         (5.157)

where the first condition in (5.156) is ensured. Starting with the backward model of the process (5.157), the controller is attempted in the form

    α(ζ) u_k = β(ζ) y_k                                                        (5.158)

with
    det α(0) ≠ 0 .

When we put this together with (5.157), we obtain the backward model of
the closed-loop system
a()yk b()uk = bg ()gk
(5.159)
()yk + ()uk = Om1 .
Besides, the characteristic matrix of the backward model of the closed-loop
system Ql (, ) takes the form

a() b()
Ql (, , ) = . (5.160)
() ()
Introduce the extended output vector

yk
k = ,
uk
so Equations (5.159) might be written in form of the backward model

bg ()
Ql (, , )k = B()gk , B() = .
Om
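For a scalar toy example, the construction (5.157)–(5.160) can be traced numerically; all numerical values below are our own illustration, not from the book:

```python
import numpy as np

# Process (1 - 2*zeta) y = zeta * u  (open-loop root 0.5 lies inside
# the unit disc, so the process is unstable by Theorem 5.47).
# Controller u_k = beta * y_k with alpha = 1, beta = -1.5.
# Characteristic polynomial of (5.160):
#   Delta(zeta) = alpha*a - beta*b = (1 - 2*zeta) + 1.5*zeta = 1 - 0.5*zeta,
# whose root zeta = 2 is outside the unit disc -> stable closed loop.
a = np.array([1.0, -2.0])       # a(zeta) = 1 - 2*zeta  (coeffs by power)
b = np.array([0.0, 1.0])        # b(zeta) = zeta
alpha, beta = 1.0, -1.5
delta = alpha * a - beta * b    # Delta = [1, -0.5]
root = np.roots(delta[::-1])    # np.roots wants highest power first
print(delta, root)              # Delta = 1 - 0.5*zeta, root 2.0
```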

5. In analogy to the preceding investigations, consider the case when the process is described by a PMD of the form

    Π_0 = (a(ζ), b(ζ), c(ζ)) ∈ R_{npm}[ζ]

with det a(0) ≠ 0. Then the backward model of the closed-loop system might be presented in the shape

    a(ζ) x_k = b(ζ) u_k + b_g(ζ) g_k
    y_k = c(ζ) x_k                                                             (5.161)
    α(ζ) u_k = β(ζ) y_k .

In this case, the characteristic matrix of the closed-loop system Q_π(ζ, α, β) takes the form

    Q_π(ζ, α, β) = [ a(ζ)     O_{pn}   −b(ζ)
                     −c(ζ)    I_n      O_{nm}
                     O_{mp}   −β(ζ)    α(ζ) ] .                                (5.162)

Introduce here the extended vector

    ξ̄_k = [ x_k
            y_k
            u_k ] ,

so (5.161) might be written as backward model

    Q_π(ζ, α, β) ξ̄_k = B_π(ζ) g_k ,    B_π(ζ) = [ b_g(ζ)
                                                   O_{nℓ}
                                                   O_{mℓ} ] .

5.8 Stability and Stabilisation of the Closed Loop


1. The following investigations of stability and stabilisation refer to closed-loop systems and will be done with the backward models (5.159) and (5.161), which are preferred in the whole book. In what follows, an arbitrary controller (α(ζ), β(ζ)) is said to be stabilising, if the closed-loop system with it is stable.

The polynomial
    Δ(ζ) = det Q_l(ζ, α, β)                                                    (5.163)

is called the characteristic polynomial of the system (5.159), and the polynomial

    Δ_π(ζ) = det Q_π(ζ, α, β)

the characteristic polynomial of the system (5.161).

Theorem 5.52. For the stability of the systems (5.159) or (5.161), it is necessary and sufficient that the characteristic polynomials Δ(ζ) or Δ_π(ζ), respectively, are stable.

Proof. The proof immediately follows from Theorems 5.46 and 5.47.

Corollary 5.53. Any stabilising controller for the processes (5.157) or (5.161) is causal, i.e.

    det α(0) ≠ 0 .                                                             (5.164)

Proof. When the system (5.159) is stable, due to Theorem 5.46, we have

    det Q_l(0, α, β) = Δ(0) ≠ 0 ,

which with the aid of (5.160) yields

    det a(0) det α(0) ≠ 0 ,

hence (5.164) is true. The proof for the system (5.161) runs analogously.

Corollary 5.54. Any stabilising controller (α(ζ), β(ζ)) for the systems (5.159) or (5.161) possesses a transfer matrix

    w_d(ζ) = α^{-1}(ζ) β(ζ) ,

because from (5.164) it immediately emerges that the matrix α(ζ) is invertible.

Corollary 5.55. The stable closed-loop systems (5.159) and (5.161) possess the transfer matrices

    w_0(ζ) = Q_l^{-1}(ζ, α, β) B(ζ) ,
    w_π(ζ) = Q_π^{-1}(ζ, α, β) B_π(ζ) ,

which are analytical in the point ζ = 0.

2. Let the LMFD

    w(ζ) = a_l^{-1}(ζ) b_l(ζ)                                                  (5.165)

be given. Then, using the terminology of Chapter 4, the pair (a_l(ζ), b_l(ζ)) is called a left process model. Besides, if the pair (a_l(ζ), b_l(ζ)) is irreducible, then the left process model is named controllable. If we have at the same time the RMFD

    w(ζ) = b_r(ζ) a_r^{-1}(ζ) ,                                                (5.166)

then the pair [a_r(ζ), b_r(ζ)] is called a right process model. The right process model is named controllable, when the pair [a_r(ζ), b_r(ζ)] is irreducible.

Related to the above, the concept of controllability for left and right models of the controllers might be introduced. If the LMFD and RMFD

    w_d(ζ) = α_l^{-1}(ζ) β_l(ζ) = β_r(ζ) α_r^{-1}(ζ)

exist, then the pairs (α_l(ζ), β_l(ζ)) and [α_r(ζ), β_r(ζ)] are left and right models of the controller, respectively. As above, we introduce the concepts of controllable left and right controller models. The matrices

    Q_l(ζ, α_l, β_l) = [ a_l(ζ)    −b_l(ζ)
                         −β_l(ζ)    α_l(ζ) ] ,
                                                                               (5.167)
    Q_r(ζ, α_r, β_r) = [ α_r(ζ)    b_r(ζ)
                         β_r(ζ)    a_r(ζ) ]

are called the left and right characteristic matrices, respectively.

Lemma 5.56. Let (a_l(ζ), b_l(ζ)), [a_r(ζ), b_r(ζ)] as well as (α_l(ζ), β_l(ζ)), [α_r(ζ), β_r(ζ)] be irreducible left and right models of the process or controller, respectively. Then

    det Q_l(ζ, α_l, β_l) ∼ det Q_r(ζ, α_r, β_r) .                              (5.168)

Proof. Applying the general formulae (4.76) and (5.167), we easily find

    det Q_l(ζ, α_l, β_l) = det a_l det α_l det[I_n − a_l^{-1}(ζ) b_l(ζ) α_l^{-1}(ζ) β_l(ζ)] ,
                                                                               (5.169)
    det Q_r(ζ, α_r, β_r) = det a_r det α_r det[I_n − b_r(ζ) a_r^{-1}(ζ) β_r(ζ) α_r^{-1}(ζ)] .

Due to the supposed irreducibility, we obtain for the left and right models

    det a_l(ζ) ∼ det a_r(ζ),    det α_l(ζ) ∼ det α_r(ζ) .

Moreover, the expressions in the brackets of (5.169) coincide, that is why (5.168) is true.

From Lemma 5.56, it arises that the design problems for left and right models of stabilising controllers are in principle equivalent.

3. A number of statements are listed, concerning the stability of the closed-loop system and the design of the set of stabilising controllers.

Theorem 5.57. Let the process be controllable by the control input, and let Relations (5.165) and (5.166) determine controllable left and right IMFDs. Then a necessary and sufficient condition for the pair (α_l(ζ), β_l(ζ)) to be a left model of a stabilising controller is that the matrices α_l(ζ) and β_l(ζ) satisfy the relation

    α_l(ζ) a_r(ζ) − β_l(ζ) b_r(ζ) = D_l(ζ) ,                                   (5.170)

where D_l(ζ) is any stable polynomial matrix. For the pair [α_r(ζ), β_r(ζ)] to be a right model of a stabilising controller, it is necessary and sufficient that the matrices α_r(ζ) and β_r(ζ) fulfil the relation

    a_l(ζ) α_r(ζ) − b_l(ζ) β_r(ζ) = D_r(ζ) ,                                   (5.171)

where D_r(ζ) is any stable polynomial matrix.

Proof. Relation (5.170) will be shown.
Necessity: Let the polynomial matrices α_{0r}(ζ), β_{0r}(ζ) satisfy the equation

    a_l(ζ) α_{0r}(ζ) − b_l(ζ) β_{0r}(ζ) = I_n .

Then, owing to Lemma 4.4, the matrix

    Q_r(ζ, α_{0r}, β_{0r}) = [ α_{0r}(ζ)    b_r(ζ)
                               β_{0r}(ζ)    a_r(ζ) ]

is unimodular. Besides, we obtain

    Q_l(ζ, α_l, β_l) Q_r(ζ, α_{0r}, β_{0r}) = [ I_n       O_{nm}
                                                M_l(ζ)    D_l(ζ) ] ,           (5.172)

where D_l(ζ) and M_l(ζ) are polynomial matrices and, in addition, (5.170) is fulfilled. Per construction, the sets of eigenvalues of the matrices Q_l(ζ, α_l, β_l) and D_l(ζ) coincide. Thus, the stability of the matrix Q_l(ζ, α_l, β_l) implies the stability of the matrix D_l(ζ).
Sufficiency: Take the steps of the proof in reverse order to realise that the conditions are sufficient.
Relation (5.171) is shown analogously.
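In the scalar case, (5.170) is an ordinary polynomial Bezout (Diophantine) equation, which can be verified by coefficient arithmetic; the pair a, b and the particular solution below are our own toy example:

```python
import numpy as np

# Scalar instance of (5.170):  alpha*a - beta*b = D  with the stable
# target D(zeta) = 1 (deadbeat).  For a = 1 - 2*zeta, b = zeta one
# solution is alpha = 1, beta = -2, verified by polynomial products
# (coefficients listed by ascending power of zeta).
def polymul(p, q):
    return np.convolve(p, q)

a = [1.0, -2.0]      # 1 - 2*zeta
b = [0.0, 1.0]       # zeta
alpha = [1.0]
beta = [-2.0]
lhs = np.array(polymul(alpha, a)) - np.array(polymul(beta, b))
print(lhs)           # [1. 0.]  ->  D(zeta) = 1
```

Theorem 5.59 then parametrises every other solution from this basic one.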

Theorem 5.58. For the rational m × n matrix w_d(ζ) to be the transfer matrix of a stabilising controller for the system (5.159), where the process is completely controllable, it is necessary and sufficient that it allows a representation of the form

    w_d(ζ) = F_1^{-1}(ζ) F_2(ζ) = G_2(ζ) G_1^{-1}(ζ) ,                         (5.173)

where F_1(ζ), F_2(ζ) and G_1(ζ), G_2(ζ) are stable rational matrices satisfying

    F_1(ζ) a_r(ζ) − F_2(ζ) b_r(ζ) = I_m ,
                                                                               (5.174)
    a_l(ζ) G_1(ζ) − b_l(ζ) G_2(ζ) = I_n .

Proof. The first statement in (5.174) will be shown.
Necessity: Let (α_l(ζ), β_l(ζ)) be a left model of a stabilising controller. Then the matrices of this pair satisfy Relation (5.170) for a certain stable matrix D_l(ζ). Besides, we convince ourselves that the matrices

    F_1(ζ) = D_l^{-1}(ζ) α_l(ζ),    F_2(ζ) = D_l^{-1}(ζ) β_l(ζ)

are stable and satisfy Relations (5.173) and (5.174).
Sufficiency: Let the matrices F_1(ζ) and F_2(ζ) be stable and Relation (5.174) be satisfied. Then the rational matrix

    F(ζ) = [ F_1(ζ)  F_2(ζ) ]

is stable. Consider the ILMFD

    F(ζ) = a_F^{-1}(ζ) b_F(ζ) = a_F^{-1}(ζ) [ d_1(ζ)  d_2(ζ) ] ,

where the matrix a_F(ζ) is stable and

    d_1(ζ) = a_F(ζ) F_1(ζ),    d_2(ζ) = a_F(ζ) F_2(ζ)

are polynomial matrices. Due to

    d_1(ζ) a_r(ζ) − d_2(ζ) b_r(ζ) = a_F(ζ) ,

the pair (d_1(ζ), d_2(ζ)), owing to Theorem 5.57, is a stabilising controller with the transfer function

    w_d(ζ) = d_1^{-1}(ζ) d_2(ζ) = F_1^{-1}(ζ) F_2(ζ) .

Thus, the first statement in (5.174) is proven. The second statement in (5.174) can be shown analogously.

Theorem 5.59. Let the pair (a_l(ζ), b_l(ζ)) be irreducible and let (α_{0l}(ζ), β_{0l}(ζ)) be an arbitrary basic controller, such that the matrix Q_l(ζ, α_{0l}, β_{0l}) becomes unimodular. Then the set of all stabilising left controllers (α_l(ζ), β_l(ζ)) for the system (5.159) is determined by the relations

    α_l(ζ) = D_l(ζ) α_{0l}(ζ) − M_l(ζ) b_l(ζ) ,
    β_l(ζ) = D_l(ζ) β_{0l}(ζ) − M_l(ζ) a_l(ζ) ,

where D_l(ζ), M_l(ζ) are arbitrary polynomial matrices, but D_l(ζ) has to be stable.



Proof. The proof immediately emerges from Theorem 4.21.

Theorem 5.60. Let the pairs (a_l(ζ), b_l(ζ)) and (α_l(ζ), β_l(ζ)) be irreducible and the matrix Q_l^{-1}(ζ, α_l, β_l) be represented in the form

    Q_l^{-1}(ζ, α_l, β_l) = [ V_1(ζ)    q_12(ζ)
                              V_2(ζ)    q_22(ζ) ] ,                            (5.175)

where the blocks V_1(ζ), q_12(ζ) have n rows and V_2(ζ), q_22(ζ) have m rows. Then a necessary and sufficient condition for (α_l(ζ), β_l(ζ)) to be a stabilising controller is the fact that the matrices V_1(ζ) and V_2(ζ) are stable.

Proof. The necessity of the conditions of the theorem emerges immediately from Theorem 5.49.
Sufficiency: Let the ILMFD (5.165) and IRMFD (5.166) exist and let (α_{0l}(ζ), β_{0l}(ζ)), (α_{0r}(ζ), β_{0r}(ζ)) be dual left and right basic controllers. Then, observing (5.172), we get

    Q_l(ζ, α_l, β_l) = N_l(ζ) Q_l(ζ, α_{0l}, β_{0l}) ,                         (5.176)

where
    N_l(ζ) = [ I_n       O_{nm}
               M_l(ζ)    D_l(ζ) ] .                                            (5.177)

Inverting the matrices in Relation (5.176), we arrive at

    Q_l^{-1}(ζ, α_l, β_l) = Q_l^{-1}(ζ, α_{0l}, β_{0l}) N_l^{-1}(ζ) .

With respect to the properties of dual controllers, we obtain

    Q_l^{-1}(ζ, α_{0l}, β_{0l}) = Q_r(ζ, α_{0r}, β_{0r}) = [ α_{0r}(ζ)    b_r(ζ)
                                                             β_{0r}(ζ)    a_r(ζ) ] ,   (5.178)

where from (5.177), we find

    N_l^{-1}(ζ) = [ I_n                    O_{nm}
                    −D_l^{-1}(ζ) M_l(ζ)    D_l^{-1}(ζ) ] .

Applying this and (5.178), we obtain

    Q_l^{-1}(ζ) = [ α_{0r}(ζ) − b_r(ζ) Φ(ζ)    b_r(ζ) D_l^{-1}(ζ)
                    β_{0r}(ζ) − a_r(ζ) Φ(ζ)    a_r(ζ) D_l^{-1}(ζ) ]            (5.179)

with the notation

    Φ(ζ) = D_l^{-1}(ζ) M_l(ζ) .                                                (5.180)

Comparing (5.175) with (5.179), we produce

    V_1(ζ) = α_{0r}(ζ) − b_r(ζ) Φ(ζ) ,
                                                                               (5.181)
    V_2(ζ) = β_{0r}(ζ) − a_r(ζ) Φ(ζ) ,

or equivalently

    [ V_1(ζ) ]  =  Q_r(ζ, α_{0r}, β_{0r}) [ I_n    ]
    [ V_2(ζ) ]                            [ −Φ(ζ) ] .

From this equation and (5.178), we generate

    [ I_n    ]  =  Q_l(ζ, α_{0l}, β_{0l}) [ V_1(ζ) ]
    [ −Φ(ζ) ]                             [ V_2(ζ) ] ,

thus we read

    Φ(ζ) = β_{0l}(ζ) V_1(ζ) − α_{0l}(ζ) V_2(ζ) .

When the matrices V_1(ζ) and V_2(ζ) are stable, then the matrix Φ(ζ) is also stable.
Furthermore, notice that the pair (D_l(ζ), M_l(ζ)) is irreducible, because the pair (α_l(ζ), β_l(ζ)) is irreducible. Hence Equation (5.180) defines an ILMFD of the matrix Φ(ζ). But the matrix Φ(ζ) is stable and therefore, D_l(ζ) is also stable. Then the blocks in (5.179) must be stable, i.e. the matrix Q_l^{-1}(ζ, α_l, β_l) is stable. Hence, owing to Theorem 5.49, it follows that the matrix Q_l(ζ, α_l, β_l) is stable and consequently, the controller (α_l(ζ), β_l(ζ)) is stabilising.

Remark 5.61. In principle, we could understand the assertions of Theorem 5.60 as a corollary to Theorem 5.58. Nevertheless, in the proof we gain some important additional relations that will be used in the further disclosures.

Besides (5.181), an additional representation of the matrices V_1(ζ), V_2(ζ) will be used. For this purpose, notice that from (5.167) and (5.175), it emerges

    a_l(ζ) V_1(ζ) − b_l(ζ) V_2(ζ) = I_n ,
    −β_l(ζ) V_1(ζ) + α_l(ζ) V_2(ζ) = O_{mn} .

Resolving these equations for the variables V_1(ζ) and V_2(ζ), we obtain

    V_1(ζ) = [a_l(ζ) − b_l(ζ) w_d(ζ)]^{-1} ,
                                                                               (5.182)
    V_2(ζ) = w_d(ζ) [a_l(ζ) − b_l(ζ) w_d(ζ)]^{-1} ,

where w_d(ζ) is the transfer matrix of the controller. From (5.182), it follows directly

    w_d(ζ) = V_2(ζ) V_1^{-1}(ζ) .                                              (5.183)
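The identification of V_1(ζ) with the (1,1) block of Q_l^{-1} can be spot-checked at a single point ζ_0 in the scalar case (all numbers below are our own toy choices):

```python
import numpy as np

# Pointwise check of (5.175)/(5.182) for the scalar system
# a = 1 - 2*zeta, b = zeta with alpha = 1, beta = -1.5, evaluated at
# an arbitrary complex point zeta_0.
zeta0 = 0.3 + 0.1j
a = 1 - 2 * zeta0
b = zeta0
alpha, beta = 1.0, -1.5
Ql = np.array([[a, -b],
               [-beta, alpha]])
V1_inv_block = np.linalg.inv(Ql)[0, 0]     # (1,1) block of Q_l^{-1}
wd = beta / alpha                          # controller transfer function
V1_formula = 1.0 / (a - b * wd)            # formula (5.182)
print(np.isclose(V1_inv_block, V1_formula))   # True
```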

4. On the basis of Theorem 4.24, the stabilisation problem can be solved for the system (5.159) even in those cases, when the pair (a(ζ), b(ζ)) is reducible.

Theorem 5.62. Suppose in (5.159)

    a(ζ) = λ(ζ) a_1(ζ),    b(ζ) = λ(ζ) b_1(ζ)

with a latent matrix λ(ζ) and the irreducible pair (a_1(ζ), b_1(ζ)). Then, if the matrix λ(ζ) is unstable, the system (5.159) can never be stabilised by a feedback of the form (5.158), i.e. the process (5.157) is not stabilisable. If, however, the matrix λ(ζ) is stable, then there exists for this process a set of stabilising controllers, i.e. the process is stabilisable. The corresponding set of stabilising controllers coincides with the set of stabilising controllers of the irreducible pair (a_1(ζ), b_1(ζ)).

5. In analogy, following the reasoning of Section 4.8, the stabilisation problem for PMD processes is solved.

Theorem 5.63. Suppose the strictly causal LTI process is given as a minimal PMD

    Π_0(ζ) = (a(ζ), b(ζ), c(ζ)) ,                                              (5.184)

where det a(0) ≠ 0. For the transfer matrix

    w_π(ζ) = c(ζ) a^{-1}(ζ) b(ζ) ,

there should exist the ILMFD

    w_π(ζ) = p^{-1}(ζ) q(ζ) .                                                  (5.185)

Then the stabilisation problem

    det Q_π(ζ, α, β) ∼ d⁺(ζ) ,

where Q_π(ζ, α, β) is Matrix (5.162), is solvable for any stable polynomial d⁺(ζ). Besides, the set of stabilising controllers coincides with the set of stabilising controllers of the irreducible pair (p(ζ), q(ζ)) and can be designed on the basis of Theorems 5.57 and 5.59.
Theorem 5.64. Let the strictly causal PMD process (5.184) be not minimal and let the polynomial λ(ζ) be defined by the relation

    λ(ζ) = det a(ζ) / det p(ζ) ,

where the matrix p(ζ) is determined by an ILMFD (5.185). Then for the stabilisability of the PMD process (5.184), it is necessary and sufficient that the polynomial λ(ζ) is stable. If this condition is fulfilled, the set of stabilising controllers of the original system coincides with the set of stabilising controllers of the irreducible pair (p(ζ), q(ζ)) in the ILMFD (5.185).

6. The results in Sections 4.7 and 4.8, together with the design of the set of stabilising controllers, allow us to obtain at the same time information about the structure of the set of invariant polynomials of the characteristic matrices (5.160) and (5.162). For instance, in the case of Theorem 5.57 or 5.63, the first n invariant polynomials a_1(ζ), . . . , a_n(ζ) of the matrix (5.160) are equal to 1, and the set of the remaining invariant polynomials a_{n+1}(ζ), . . . , a_{n+m}(ζ) coincides with the set of invariant polynomials of the matrix D_l(ζ).

7. For practical applications, the question of the insensitivity of the obtained solution of the stabilisation problem plays a great role. The next theorem supplies the answer to this question.

Theorem 5.65. Let (α_l(ζ), β_l(ζ)) be a stabilising controller for the strictly causal process (a_l(ζ), b_l(ζ)), and instead of the process (a_l(ζ), b_l(ζ)) let there exist the disturbed strictly causal process

    (a_l(ζ) + a_{l1}(ζ), b_l(ζ) + b_{l1}(ζ))                                   (5.186)

with
    a_{l1}(ζ) = Σ_{k=0}^{r} A_k ζ^k ,    b_{l1}(ζ) = Σ_{k=0}^{r} B_k ζ^k ,

where r ≥ 0 is an integer, and A_k, B_k, (k = 0, . . . , r) are constant matrices. Suppose ‖·‖ be a certain norm for finite-dimensional number matrices. Then there exists a positive constant ε, such that for

    ‖A_k‖ < ε ,    ‖B_k‖ < ε ,    (k = 0, . . . , r) ,                          (5.187)

the closed-loop system with the disturbed process (5.186) and the controller (α_l(ζ), β_l(ζ)) remains stable.

Proof. The characteristic matrix of the closed-loop system with the disturbed process has the form

    Q_{l1}(ζ, α_l, β_l) = [ a_l(ζ) + a_{l1}(ζ)    −[b_l(ζ) + b_{l1}(ζ)]
                            −β_l(ζ)                α_l(ζ) ] .

Applying the sum theorem for determinants, we find

    det Q_{l1}(ζ, α_l, β_l) = Δ_1(ζ) = Δ(ζ) + Δ_2(ζ) ,

where
    Δ(ζ) = det [ a_l(ζ)    −b_l(ζ)
                 −β_l(ζ)    α_l(ζ) ]                                           (5.188)

is the characteristic polynomial of the undisturbed system, and Δ_2(ζ) is a polynomial, the coefficients of which tend to zero for ε → 0. Denote

    min_{|ζ|=1} |Δ(ζ)| = δ .                                                   (5.189)

Under our suppositions, δ > 0 is true, because the polynomial Δ(ζ) has no zeros on the unit circle. Attempt

    Δ_2(ζ) = d_0 + d_1 ζ + . . . + d_μ ζ^μ ,

where the coefficients d_i, (i = 0, 1, . . . , μ) continuously depend on the elements of the matrices A_k, B_k, and all of them become zero, when A_k = O_{nn}, B_k = O_{nm} is valid for all k = 0, 1, . . . , r. Thus, there exists an ε, such that the inequalities

    |d_i| < δ / (μ + 1),    (i = 0, 1, . . . , μ)

remain true, as long as Estimates (5.187) remain true. If (5.189) is fulfilled, we get

    max_{|ζ|=1} |Δ_2(ζ)| < δ .

Comparing this and (5.189), we realise that at any point of the unit circle |ζ| = 1

    |Δ_2(ζ)| < |Δ(ζ)| ,

and from the Theorem of Rouché [171], it arises that the polynomials Δ(ζ) and Δ(ζ) + Δ_2(ζ) have the same number of zeros inside the unit disc. Hence the stability of the polynomial Δ(ζ) implies the stability of the polynomial Δ_1(ζ).
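The robustness statement can be illustrated by counting roots in the closed unit disc before and after a small coefficient perturbation (the perturbation sizes below are chosen by us):

```python
import numpy as np

def n_roots_in_closed_disc(coeffs_by_power, tol=1e-9):
    """Count roots of a polynomial (coefficients by ascending power)
    with modulus <= 1 + tol."""
    r = np.roots(np.asarray(coeffs_by_power)[::-1])
    return int(np.sum(np.abs(r) <= 1 + tol))

# Undisturbed characteristic polynomial Delta(zeta) = 1 - 0.5*zeta is
# stable: its only root is 2, outside the disc.  A small perturbation
# Delta_2 leaves the root count in the disc unchanged, as Theorem 5.65
# and Rouche's theorem predict.
delta = np.array([1.0, -0.5])
delta2 = np.array([0.02, -0.01])
print(n_roots_in_closed_disc(delta), n_roots_in_closed_disc(delta + delta2))
# 0 0
```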

Remark 5.66. It can be shown that for the solution of the stabilisation problem in case of forward models of the closed-loop systems (5.147), an analogous statement with respect to the insensitivity of the solution of the stabilisation problem cannot be derived.
Part III

Frequency Methods for MIMO SD Systems


6 Parametric Discrete-time Models of Continuous-time Multivariable Processes

6.1 Response of Linear Continuous-time Processes to Exponential-periodic Signals

This section presents some auxiliary relations that are needed for the further disclosures.

1. Suppose the linear continuous-time process

    y = w(p) x ,                                                               (6.1)

where p = d/dt is the differential operator, w(p) ∈ R_{nm}(p) is a rational matrix and x = x(t), y = y(t) are vectors of dimensions m × 1, n × 1, respectively. The process is symbolically presented in Fig. 6.1.

Fig. 6.1. Continuous-time process

In the following, the matrix w(p) is called the transfer matrix of the continuous-time process (6.1).

2. The vectorial input signal x(t) is called exponential-periodic (exp.per.), if it has the form

    x(t) = e^{st} x_T(t),    x_T(t) = x_T(t + T) ,                             (6.2)

where s is a complex number and T > 0 is a real constant. Here, s is designated as the exponent and T as the period of the exponential-periodic function x(t). Further on, all components of the vector x_T(t) are supposed to be of bounded variation.

Assuming an exp.per. input signal x(t) (6.2), this section handles the existence problem for an exp.per. output signal of the process (6.1), i.e.

    y(t) = e^{st} y_T(t),    y_T(t) = y_T(t + T) .                             (6.3)

3.
Lemma 6.1. Let the matrix w(p) be given in the standard form

    w(p) = N(p) / d(p)

with N(p) ∈ R_{nm}[p] and the scalar polynomial

    d(p) = (p − p_1)^{ν_1} · · · (p − p_q)^{ν_q},    ν_1 + . . . + ν_q = r .   (6.4)

Furthermore, suppose

    x(t) = x_s(t) = X e^{st} ,                                                 (6.5)

where X ∈ C^{m×1} is a constant vector and s is a complex number with

    s ≠ p_i,    (i = 1, . . . , q) .                                           (6.6)

Then there exists a unique output of the form

    y(t) = y_s(t) = Y(s) e^{st}                                                (6.7)

with a constant vector Y(s) ∈ C^{n×1}. Besides,

    Y(s) = w(s) X

and
    y_s(t) = w(s) X e^{st} .                                                   (6.8)
Proof. Consider a certain ILMFD

    w(s) = a_l^{-1}(s)\, b_l(s)

with $a_l(s) \in R^{n\times n}[s]$, $b_l(s) \in R^{n\times m}[s]$. Then Relation (6.1) is equivalent to the differential equation

    a_l\!\left(\frac{d}{dt}\right) y = b_l\!\left(\frac{d}{dt}\right) x .                    (6.9)

Relations (6.5) and (6.7) should hold, and the vectors $x_s(t)$ and $y_s(t)$ should determine special solutions of Equation (6.9). Due to

    a_l\!\left(\frac{d}{dt}\right) y_s(t) = a_l(s)\, Y(s)\, e^{st} ,

    b_l\!\left(\frac{d}{dt}\right) x_s(t) = b_l(s)\, X\, e^{st} ,
6.1 Response of Linear Continuous-time Processes to Exponential-periodic Signals 243

the condition

    a_l(s)\, Y(s) = b_l(s)\, X                                         (6.10)

must be satisfied. Owing to the properties of ILMFDs, the eigenvalues of the matrix $a_l(s)$ turn out to be roots of the polynomial (6.4), possibly with higher multiplicity. Thus, (6.6) implies $\det a_l(s) \neq 0$, and from (6.10) we derive

    Y(s) = a_l^{-1}(s)\, b_l(s)\, X = w(s) X ,

i.e. Formula (6.8) really determines the wanted solution.


Now we prove that the solution found is unique. Besides (6.5) and (6.7), let Equation (6.9) have an additional special solution of the form

    x(t) = X e^{st}, \qquad y_{s1}(t) = Y_1(s) e^{st} ,

where $Y_1(s)$ is a constant vector. Then the difference

    \Delta_s(t) = y_s(t) - y_{s1}(t) = [Y(s) - Y_1(s)]\, e^{st}        (6.11)

must be a non-vanishing solution of the equation

    a_l\!\left(\frac{d}{dt}\right) \Delta_s(t) = 0 .                   (6.12)

Relation (6.12) represents a homogeneous system of linear differential equations with constant coefficients. This system may possess non-trivial solutions of the form (6.11) only when $\det a_l(s) = 0$. But this case is excluded by (6.6). Thus, $Y(s) = Y_1(s)$ is true, i.e. the solution of the form (6.7) is unique.

4. Now the question of the existence of an exp.per. output signal with the same exponent and the same period is investigated.

Theorem 6.2. Let the transfer matrix of the process (6.1) be strictly proper, let the input signal have the form (6.2), and for all $k$, $(k = 0, \pm 1, \ldots)$ let the relations

    s + kj\omega \neq p_i, \quad (i = 1, \ldots, q), \qquad \omega = 2\pi/T, \quad j = \sqrt{-1}      (6.13)

be valid. Then there exists a unique exp.per. output of the form (6.3) with

    y_T(t) = \int_0^T \varphi_w(T, s, t-\tau)\, x_T(\tau)\, d\tau ,    (6.14)

where $\varphi_w(T, s, t)$ is defined by the series

    \varphi_w(T, s, t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} w(s + kj\omega)\, e^{kj\omega t} .      (6.15)

Proof. The function $x_T(t)$ is represented as a Fourier series

    x_T(t) = \sum_{k=-\infty}^{\infty} x_k e^{kj\omega t} ,

where

    x_k = \frac{1}{T} \int_0^T x_T(\tau)\, e^{-kj\omega\tau}\, d\tau .                       (6.16)

Then we obtain

    x(t) = \sum_{k=-\infty}^{\infty} x_k e^{(s+kj\omega)t} .

Owing to the linearity of the operator (6.1) and Condition (6.13), Lemma 6.1 yields

    y(t) = \sum_{k=-\infty}^{\infty} w(s+kj\omega)\, x_k\, e^{(s+kj\omega)t} = e^{st} y_T(t) ,          (6.17)

where

    y_T(t) = \sum_{k=-\infty}^{\infty} w(s+kj\omega)\, x_k\, e^{kj\omega t} .               (6.18)

Using (6.16), the last expression reads

    y_T(t) = \sum_{k=-\infty}^{\infty} w(s+kj\omega) \left[ \frac{1}{T} \int_0^T x_T(\tau)\, e^{-kj\omega\tau}\, d\tau \right] e^{kj\omega t} .

Under our suppositions, the series (6.15) converges. Hence, due to the general properties of Fourier series [171], the order of summation and integration can be exchanged. Thus, we obtain Formula (6.14).
It remains to show the uniqueness of the exp.per. solution constructed above. Assume the existence of a second exp.per. output

    y_1(t) = e^{st} y_{1T}(t), \qquad y_{1T}(t) = y_{1T}(t+T)

in addition to the solution (6.3). Then the difference $\Delta(t) = y(t) - y_1(t)$ is a solution of the homogeneous equation (6.12) with exponent $s$ and period $T$. But from (6.13) it emerges that Equation (6.12) does not possess solutions different from zero. Thus, $\Delta(t) = 0$, and hence the exp.per. solutions (6.3) and (6.14) coincide.

5. In the following, the series (6.15) is called the displaced pulse frequency response, abbreviated DPFR. This notion has a physical interpretation. Let $\delta(t)$ be the Dirac impulse and

    \delta_T(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT)

a periodic pulse sequence. Then it is well known [159] that the function $\delta_T(t)$ can be developed into the generalised Fourier series

    \delta_T(t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} e^{kj\omega t} .

For the response of the process (6.1) to the exp.per. input

    x(t) = e^{st} \delta_T(t)                                          (6.19)

apply Formulae (6.17), (6.18) with $x_k = 1$, $(k = 0, \pm 1, \ldots)$. Thus, we obtain

    y(t) = e^{st} \varphi_w(T, s, t) .

Hence the DPFR $\varphi_w(T, s, t)$ describes the response of the process (6.1) to an exponentially modulated sequence of unit impulses (6.19).
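The construction used in the proof of Theorem 6.2 can be checked numerically for a scalar example. The sketch below is an illustration, not from the book: the process $w(s) = 1/(s+1)$ and the input $x_T(t) = 1 + \cos\omega t$ (which has only three Fourier harmonics, so the series (6.18) is finite) are arbitrary choices. It builds $y_T(t)$ by Formula (6.18) and verifies that $y(t) = e^{st} y_T(t)$ satisfies the differential equation $\dot y + y = x$, and that $e^{-st} y(t)$ is $T$-periodic.

```python
import numpy as np

# Illustrative check of (6.17)/(6.18) for w(s) = 1/(s+1) (assumed example).
T = 2.0
w_freq = 2 * np.pi / T            # omega = 2*pi/T
s = -0.2                          # exponent of the exp.per. input
ks = np.array([-1, 0, 1])         # x_T(t) = 1 + cos(omega t): harmonics k = -1, 0, 1
xk = np.array([0.5, 1.0, 0.5])    # Fourier coefficients x_k of x_T

def w(p):                         # transfer function w(p) = 1/(p+1)
    return 1.0 / (p + 1.0)

t = np.linspace(0.0, 3 * T, 601)
modes = np.exp(np.outer(t, s + 1j * ks * w_freq))      # e^{(s + k j omega) t}
x = modes @ xk                                          # input  x(t)
y = modes @ (w(s + 1j * ks * w_freq) * xk)              # output y(t) from Eq. (6.18)
ydot = modes @ ((s + 1j * ks * w_freq) * w(s + 1j * ks * w_freq) * xk)

# y must satisfy dy/dt + y = x exactly (the series is finite here)
assert np.max(np.abs(ydot + y - x)) < 1e-12

# e^{-st} y(t) must be T-periodic: compare the grid points t and t + T
yT = np.exp(-s * t) * y
assert np.allclose(yT[:200], yT[200:400], atol=1e-12)
```

The periodicity check uses the grid spacing $3T/600$, so index 200 corresponds to a shift of exactly one period $T$.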

6.2 Response of Open SD Systems to Exp.per. Inputs


1. In this section and further on, by a digital control unit (DCU) we understand a system with the structure shown in Fig. 6.2.¹ If the digital control unit works as a controller, we also call it a digital controller. Hereby, $y = y(t)$ and $v = v(t)$ are vectors of dimensions $m\times 1$ and $n\times 1$, respectively. Furthermore, $y(t)$ is assumed to be a continuous function of $t$.

                    DCU
      y         {ξ}         {ψ}         v
    ----> ADC ----> ALG ----> DAC ---->

    Fig. 6.2. Structure of a digital control unit

In Fig. 6.2, ADC is the analog-to-digital converter, which converts a continuous-time input signal $y(t)$ into a discrete-time vector sequence $\{\xi\}$ with the elements $\xi_k$, $(k = 0, 1, \ldots)$, i.e.

    \xi_k = y(kT) = y_k, \quad (k = 0, 1, \ldots) .                    (6.20)

¹ The concepts for the elements of a digital control system are not standardised in the literature.

The number $T > 0$ arising in (6.20) is named the sampling period or the period of time quantisation.
The block ALG in Fig. 6.2 stands for the control program or the control algorithm. If confusion is excluded, the short name controller is also used. It calculates from the sequence $\{\xi\}$ a new sequence $\{\psi\}$ with elements $\psi_k$, $(k = 0, 1, \ldots)$. The ALG is a causal discrete LTI object, described for instance by its forward model

    \alpha_0 \psi_{k+\rho} + \alpha_1 \psi_{k+\rho-1} + \ldots + \alpha_\rho \psi_k = \beta_0 \xi_{k+\rho} + \beta_1 \xi_{k+\rho-1} + \ldots + \beta_\rho \xi_k      (6.21)

or by the associated backward model

    \alpha_0 \psi_k + \alpha_1 \psi_{k-1} + \ldots + \alpha_\rho \psi_{k-\rho} = \beta_0 \xi_k + \beta_1 \xi_{k-1} + \ldots + \beta_\rho \xi_{k-\rho} .             (6.22)

In (6.21) and (6.22), the $\alpha_i$ and $\beta_i$ are constant real matrices of appropriate dimensions.
Finally, in Fig. 6.2 the block DAC is the digital-to-analog converter, which transforms a discrete sequence $\{\psi\}$ into a continuous-time signal $v(t)$ by the relation

    v(t) = m(t - kT)\, \psi_k, \qquad kT < t < (k+1)T .                (6.23)

In (6.23), $m(t)$ is a given function on the interval $0 < t < T$, named the form function, because it establishes the shape of the control pulses [148]. In what follows, we always suppose that the function $m(t)$ is of bounded variation on the interval $0 \le t \le T$.
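The chain (6.20)–(6.23) can be sketched directly in code. The fragment below is an illustration only: the scalar controller coefficients and the zero-order-hold form function $m(t) \equiv 1$ are arbitrary choices, not from the book. It implements the ADC sampling (6.20), a backward-model recursion (6.22) of order $\rho = 1$, and the DAC (6.23).

```python
import numpy as np

# Scalar DCU sketch: ADC (6.20), backward model (6.22) with rho = 1, DAC (6.23).
T = 0.5                                   # sampling period
a0, a1 = 1.0, -0.5                        # alpha_0, alpha_1 (illustrative values)
b0, b1 = 0.3, 0.2                         # beta_0, beta_1  (illustrative values)
m = lambda t: 1.0                         # zero-order-hold form function, Eq. (6.27)

def dcu(y, n_steps):
    """y: continuous-time input as a callable; returns v(t) as a callable."""
    xi = [y(k * T) for k in range(n_steps)]           # ADC: xi_k = y(kT)
    psi = []
    for k in range(n_steps):                          # ALG: a0*psi_k + a1*psi_{k-1}
        prev_psi = psi[k - 1] if k > 0 else 0.0       #      = b0*xi_k + b1*xi_{k-1}
        prev_xi = xi[k - 1] if k > 0 else 0.0
        psi.append((b0 * xi[k] + b1 * prev_xi - a1 * prev_psi) / a0)
    def v(t):                                         # DAC: v(t) = m(t - kT) psi_k
        k = min(int(t // T), n_steps - 1)
        return m(t - k * T) * psi[k]
    return v

v = dcu(np.cos, 20)
assert abs(v(0.0) - 0.3 * np.cos(0.0)) < 1e-12        # psi_0 = b0 * xi_0 / a0
assert v(0.1) == v(0.4)                               # constant over one period (ZOH)
```

The initial conditions $\psi_{-1} = \xi_{-1} = 0$ are an arbitrary convention of this sketch.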

2. During the investigation of open and closed sampled-data systems, the transition of exp.per. signals through a digital control unit (6.20)-(6.23) plays an important role. Suppose the input of a digital control unit is the continuous-time signal

    y(t) = e^{st} y_T(t), \qquad y_T(t) = y_T(t+T)                     (6.24)

with the exponent $s$ and the period $T$, which coincides with the time quantisation period. We search for an exp.per. output of the form

    v(t) = e^{st} v_T(s,t), \qquad v_T(s,t) = v_T(s,t+T) .             (6.25)

At first, notice a special feature when an exp.per. signal (6.24) is sent through a digital control unit. If (6.24) and (6.20) are valid, we namely obtain

    \xi_k = e^{ksT} \xi_0, \qquad \xi_0 = y_T(0) .

The result would be the same if, instead of the input $y(t)$, the exponential signal

    y_s(t) = e^{st} y_T(0)

were considered. The equivalence of the last two equations exhibits the so-called stroboscopic property of a digital control unit.
Awareness of the stroboscopic property makes it possible to connect the response of the digital control unit to an exp.per. excitation with its response to an exponential signal.

3. In connection with the above, consider the task of finding a solution of Equations (6.20)-(6.23) under the conditions

    y(t) = e^{st} y_0 ; \qquad v(t) = e^{st} v_T(t), \quad v_T(t) = v_T(t+T) .               (6.26)

Assume at first

    m(t) = 1, \qquad 0 \le t < T .                                     (6.27)

Then from (6.23) and (6.25), we obtain

    e^{st} v_T(s,t) = \psi_k, \qquad kT < t < (k+1)T .                 (6.28)

Letting $t \to kT + 0$, we find

    \psi_k = e^{ksT} g(s) ,                                            (6.29)

where

    g(s) = v_T(s, +0)

is an unknown vector function. The equality

    \xi_k = y(kT) = e^{ksT} y_0

emerges from (6.20) and (6.26), so after inserting this and (6.29) into (6.22), we receive

    \left(\alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT}\right) g(s) = \left(\beta_0 + \beta_1 e^{-sT} + \ldots + \beta_\rho e^{-\rho sT}\right) y_0

or the equivalent relation

    \tilde\alpha(s)\, g(s) = \tilde\beta(s)\, y_0 ,                    (6.30)

where

    \tilde\alpha(s) = \alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT} ,
    \tilde\beta(s) = \beta_0 + \beta_1 e^{-sT} + \ldots + \beta_\rho e^{-\rho sT}

are polynomial matrices in the variable $e^{-sT}$. Hereinafter, the tilde means that the corresponding function depends on $e^{-sT}$. For $\det \tilde\alpha(s) \not\equiv 0$, from (6.30) we obtain

    g(s) = \tilde w_d(s)\, y_0 ,                                       (6.31)

where

    \tilde w_d(s) = \tilde\alpha^{-1}(s)\, \tilde\beta(s) .

From (6.29) and (6.31), it follows that

    \psi_k = e^{ksT} \tilde w_d(s)\, y_0 .

Substituting this into (6.28), we arrive at

    e^{st} v_T(s,t) = e^{ksT} \tilde w_d(s)\, y_0, \qquad kT < t < (k+1)T ,

which implies

    v_T(s,t) = \tilde w_d(s)\, y_0\, e^{-s(t-kT)}, \qquad kT < t < (k+1)T .

This formula is equivalent to

    v_T(s,t) = \tilde w_d(s)\, y_0\, e^{-st}, \quad 0 < t < T, \qquad v_T(s,t) = v_T(s,t+T) .           (6.32)

4. Using (6.32), we are able to obtain the general solution for the case when $m(t)$ is an arbitrary given function on the interval $0 \le t \le T$. Then, instead of (6.32), the formula

    v_T(s,t) = \tilde w_d(s)\, y_0\, e^{-st} m(t), \quad 0 < t < T, \qquad v_T(s,t) = v_T(s,t+T)         (6.33)

arises. As a result of the above considerations, the following theorem has been proven.

Theorem 6.3. Let the input of the digital control unit (6.20)-(6.23) be the continuous-time signal (6.24). Furthermore, suppose

    \det \tilde\alpha(s) = \det\left(\alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT}\right) \not\equiv 0 .        (6.34)

Then there exists an exp.per. solution of Equations (6.20)-(6.23) with the exponent $s$ and the period $T$ of the shape (6.33).

Remark 6.4. It can be shown that under Supposition (6.34) the obtained exp.per. solution with the exponent $s$ and the period $T$ is unique.

Remark 6.5. The obtained solution does not depend on the values of $y(t)$ for $0 < t < T$, but only on $y_0 = y_T(0)$. In this way the stroboscopic property expresses itself.

5. Perform the Fourier expansion of the vector $v_T(s,t)$ as a function of $t$. At first, we note that

    v_T(s,t) = \tilde w_d(s)\, y_0\, \eta(T,s,t) ,                     (6.35)

where $\eta(T,s,t)$ is a scalar periodic function given by

    \eta(T,s,t) = e^{-st} m(t), \quad 0 < t < T ; \qquad \eta(T,s,t) = \eta(T,s,t+T) .       (6.36)

Take the Fourier series

    \eta(T,s,t) = \sum_{k=-\infty}^{\infty} \eta_k(s)\, e^{kj\omega t} ,                     (6.37)

where

    \eta_k(s) = \frac{1}{T} \int_0^T \eta(T,s,\tau)\, e^{-kj\omega\tau}\, d\tau .

Now, introduce the function

    \mu(s) = \int_0^T e^{-s\tau} m(\tau)\, d\tau ,                     (6.38)

which is called the transfer function of the form element. Thus, from (6.36) and (6.38), we obtain

    \eta_k(s) = \frac{1}{T} \int_0^T e^{-(s+kj\omega)\tau} m(\tau)\, d\tau = \frac{1}{T}\, \mu(s+kj\omega) ,

so Formula (6.37) reads

    \eta(T,s,t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} \mu(s+kj\omega)\, e^{kj\omega t} ,   (6.39)

and it allows us to establish the Fourier series of the vector (6.35).
Notice that in the special case (6.27), the transfer function of the form element (6.38) takes the form

    \mu(s) = \frac{1 - e^{-sT}}{s} .
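The relation $\eta_k(s) = \mu(s+kj\omega)/T$ can be checked by direct quadrature. The snippet below is an illustration assuming the zero-order hold $m(t) \equiv 1$; it compares the numerically computed Fourier coefficient of $\eta(T,s,t) = e^{-st} m(t)$ with $\mu_0(s+kj\omega)/T$ for a few harmonics.

```python
import numpy as np

T = 1.0
w_freq = 2 * np.pi / T
s = 0.4 + 0.3j                            # arbitrary test exponent

mu0 = lambda z: (1 - np.exp(-z * T)) / z  # ZOH form element: mu(s) = (1 - e^{-sT})/s

tau = np.linspace(0.0, T, 200001)
h = tau[1] - tau[0]
for k in (-2, 0, 3):
    # integrand of the Fourier coefficient: e^{-s tau} m(tau) e^{-k j omega tau}
    integrand = np.exp(-s * tau) * np.exp(-1j * k * w_freq * tau)
    # trapezoidal rule for (1/T) * int_0^T ...
    eta_k = (np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1])) * h / T
    assert abs(eta_k - mu0(s + 1j * k * w_freq) / T) < 1e-8
```

The same check works for any other form function $m(t)$ of bounded variation; only the lambda `mu0` and the integrand change.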

6. Let us now consider the more general question of the passage of exp.per. signals through the open sampled-data system of Fig. 6.3, where DCU is a digital control unit described by Equations (6.20)-(6.23) and L(p) is a continuous-time LTI process of the form (6.1) with the transfer matrix $w(p)$, which is at least proper. The problem amounts to the solution of the

      g          x           y
    ----> DCU ----> L(p) ---->

    Fig. 6.3. Digital control unit with continuous-time process

general system of Equations (6.20)-(6.23) and (6.9), where the conditions

    g(t) = e^{st} g_T(t), \quad g_T(t) = g_T(t+T); \qquad x(t) = e^{st} x_T(t), \quad x_T(t) = x_T(t+T);
    y(t) = e^{st} y_T(t), \quad y_T(t) = y_T(t+T)                      (6.40)

hold. In order to solve the problem just stated, we point out that owing to the stroboscopic property, we can restrict ourselves to exponential inputs of the form $g(t) = e^{st} g_T(0)$. Hence, instead of (6.40), the equivalent task with

    g(t) = e^{st} g_0, \qquad x(t) = e^{st} x_T(t), \qquad y(t) = e^{st} y_T(t)              (6.41)

may be considered. Further on, assume that Condition (6.13) holds and $\det \tilde\alpha(s) \not\equiv 0$. Then, with the aid of (6.32), we find

    x(t) = e^{st} \eta(T,s,t)\, \tilde w_d(s)\, g_0 .                  (6.42)

The exp.per. signal (6.42) acts as input to the continuous-time process. With regard to (6.39), this input can be written in the form

    x(t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} \mu(s+kj\omega)\, e^{(s+kj\omega)t}\, \tilde w_d(s)\, g_0 .        (6.43)

Calculating the responses of the continuous-time process to the various parts of the input signal (6.43), we find

    y(t) = e^{st} \varphi_{w\mu}(T,s,t)\, \tilde w_d(s)\, g_0          (6.44)

with

    \varphi_{w\mu}(T,s,t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} w(s+kj\omega)\, \mu(s+kj\omega)\, e^{kj\omega t} .           (6.45)

Denote

    G(s) = w(s)\, \mu(s) ,                                             (6.46)

so Formula (6.45) can be written as the DPFR

    \varphi_{w\mu}(T,s,t) = \varphi_G(T,s,t) .

The matrix (6.46) is called the transfer matrix of the modulated process.

6.3 Functions of Matrices


1. Expansions of the forms (6.15) and (6.45) will play an important role
in the following investigations. Above all, closed expressions for the sums of
series as well as algebraic properties of these sums are needed. The solution
of these problems succeeds by applying the theory of matrix functions. The
present section discusses some elements of this theory, where the declarations
orientate itself by [51]. The skilled reader may skip this and the next section.

Let A be a constant p p matrix, so

A = Ip A
6.3 Functions of Matrices 251

is the assigned characteristic matrix and

dA () = det A = det(Ip A)

is the characteristic polynomial of the matrix A. The minimal polynomial of


the matrix A is denoted by dA min (). Using (2.117), we can write


adj(I p A)
(Ip A)1 = , (6.47)
dA min ()

where the numerator is the monic adjoint matrix. Compatible with earlier
results, Matrix (6.47) turns out to be strictly proper. Assume

dA min () = ( 1 )1 ( q )q , 1 + . . . + q = r p . (6.48)

Then the partial fraction expansion (2.98) yields

M11 M12 M1,1


(Ip A)1 = + + ... + + ...
1 ( 1 )2 ( 1 )1
(6.49)
Mq1 Mq2 Mq,q
... + + + ... + ,
q ( q )2 ( q )q

where the Mik = Ni,i k+1 are constant matrices, and the Nik are calculated
by Formula (2.99). Since the fraction in (6.47) is irreducible, observing (2.100)
produces
Mi,i = Opp (i = 1, . . . , q) .

2. Denote

    Z_{ik} = \frac{M_{ik}}{(k-1)!}, \quad (i = 1, \ldots, q;\ k = 1, \ldots, \nu_i) .        (6.50)

The constant matrices (6.50) are called components of the matrix $A$. Each root $\lambda_i$ of the minimal polynomial (6.48) with the multiplicity $\nu_i$ corresponds to $\nu_i$ components

    Z_{i1}, Z_{i2}, \ldots, Z_{i,\nu_i} .

The totality of all these matrices is named the set of components of the matrix $A$ according to the eigenvalue $\lambda_i$. The total number of components of the matrix $A$ is equal to the degree of its minimal polynomial.

3. Some general properties of the components (6.50) are listed below [51].
a) If the matrix $A$ is real and $\lambda_1$, $\lambda_2$ are two conjugate complex eigenvalues, which arise in the minimal polynomial with the power $\nu$, then the corresponding components $Z_{1k}$, $Z_{2k}$ $(k = 1, \ldots, \nu)$ are conjugate complex.
b) The generating matrices (6.50) of any matrix commute, i.e.

    Z_{ik} Z_{lm} = Z_{lm} Z_{ik} .

c) No component (6.50) is a zero matrix.
d) The components (6.50) of the matrix $A$ are linearly independent, i.e. the equality

    \sum_i \sum_k c_{ik} Z_{ik} = O_{pp}

with scalar constants $c_{ik}$ implies $c_{ik} = 0$ for all $i, k$.
e) In the following, the particular notation

    Z_{i1} = Q_i, \quad (i = 1, \ldots, q)

is used. The matrices $Q_i$ are named the projectors of the matrix $A$. Some important properties of the projectors $Q_i$ are given now:

(i) The following relations hold:

    Q_i Z_{ik} = Z_{ik} Q_i = Z_{ik}, \quad (k = 1, \ldots, \nu_i) ,   (6.51)

    Q_i Z_{\ell k} = Z_{\ell k} Q_i = O_{pp}, \quad (i \neq \ell) .    (6.52)

(ii) From (6.51) for $k = 1$, we get

    Q_i^2 = Q_i, \quad (i = 1, \ldots, q) .

(iii) From (6.52) for $k = 1$, we find

    Q_i Q_\ell = Q_\ell Q_i = O_{pp}, \quad (i \neq \ell) .

(iv) Moreover, it can be shown that

    \sum_{i=1}^{q} Q_i = I_p .
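For a matrix with simple eigenvalues, each projector $Q_i = Z_{i1}$ can be computed by Lagrange interpolation on the spectrum, $Q_i = \prod_{l\neq i} (A - \lambda_l I)/(\lambda_i - \lambda_l)$. The sketch below uses an illustrative $2\times 2$ matrix (not from the book) and verifies the projector properties (ii)-(iv) together with the spectral resolution of $A$ itself.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                 # simple eigenvalues 2 and 3
lam = np.array([2.0, 3.0])
I = np.eye(2)

# Lagrange construction of the projectors Q_i for distinct eigenvalues
Q = []
for i in range(2):
    P = I.copy()
    for l in range(2):
        if l != i:
            P = P @ (A - lam[l] * I) / (lam[i] - lam[l])
    Q.append(P)

assert np.allclose(Q[0] @ Q[0], Q[0])                 # Q_i^2 = Q_i
assert np.allclose(Q[0] @ Q[1], np.zeros((2, 2)))     # Q_i Q_l = O, i != l
assert np.allclose(Q[0] + Q[1], I)                    # sum of projectors = I_p
assert np.allclose(lam[0] * Q[0] + lam[1] * Q[1], A)  # f(A) = A for f(x) = x
```

With multiple eigenvalues the higher components $Z_{ik}$, $k > 1$, also enter, and the simple Lagrange formula no longer applies; the partial fraction expansion (6.49) is then the general route.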

4. Let the matrix $A$ possess the minimal polynomial (6.48), and let $f(\lambda)$ be a known scalar function. The function $f(\lambda)$ is said to be defined on the spectrum of the matrix $A$ if the expressions

    f(\lambda_1),\ f'(\lambda_1),\ \ldots,\ f^{(\nu_1-1)}(\lambda_1)
    f(\lambda_2),\ f'(\lambda_2),\ \ldots,\ f^{(\nu_2-1)}(\lambda_2)
      \vdots                                                           (6.53)
    f(\lambda_q),\ f'(\lambda_q),\ \ldots,\ f^{(\nu_q-1)}(\lambda_q)

make sense. The totality of the values (6.53) is addressed when we speak about the values of the function $f(\lambda)$ on the spectrum of the matrix $A$.

5. Let the function $f(\lambda)$ be given, taking the values (6.53) on the spectrum of the matrix $A$. Moreover, let a polynomial $h(\lambda)$ fulfil the conditions

    h(\lambda_1) = f(\lambda_1), \ \ldots,\ h^{(\nu_1-1)}(\lambda_1) = f^{(\nu_1-1)}(\lambda_1)
    h(\lambda_2) = f(\lambda_2), \ \ldots,\ h^{(\nu_2-1)}(\lambda_2) = f^{(\nu_2-1)}(\lambda_2)
      \vdots
    h(\lambda_q) = f(\lambda_q), \ \ldots,\ h^{(\nu_q-1)}(\lambda_q) = f^{(\nu_q-1)}(\lambda_q) .

If these relations are true, then the polynomial $h(\lambda)$ is said to take the same values as the function $f(\lambda)$ on the spectrum of the matrix $A$, and we write

    h(\Lambda_A) = f(\Lambda_A)                                        (6.54)

for this fact. If we have a polynomial $h(\lambda)$ that satisfies Conditions (6.54), then the matrix

    f(A) = h(A)

is established by definition as the value of the function $f(\lambda)$ for $\lambda = A$. With this definition, the value of the function of a matrix does not depend on the concrete choice of the polynomial $h(\lambda)$ satisfying (6.54).

6. In particular, if all components (6.50) of the matrix $A$ are known, then for a function $f(\lambda)$ defined on the spectrum of this matrix, the formula

    f(A) = \sum_{i=1}^{q} \left[ f(\lambda_i) Z_{i1} + f'(\lambda_i) Z_{i2} + \ldots + f^{(\nu_i-1)}(\lambda_i) Z_{i,\nu_i} \right]           (6.55)

is valid. Consider a scalar polynomial $h_{k\kappa}(\lambda)$, which takes the following values on the spectrum of the matrix $A$:

    h_{k\kappa}(\lambda_i) = h'_{k\kappa}(\lambda_i) = \ldots = h^{(\nu_i-1)}_{k\kappa}(\lambda_i) = 0, \quad (i = 1, \ldots, k-1, k+1, \ldots, q),
    h_{k\kappa}(\lambda_k) = h'_{k\kappa}(\lambda_k) = \ldots = h^{(\kappa-2)}_{k\kappa}(\lambda_k) = 0, \qquad h^{(\kappa-1)}_{k\kappa}(\lambda_k) = 1 ,
    h^{(\kappa)}_{k\kappa}(\lambda_k) = \ldots = h^{(\nu_k-1)}_{k\kappa}(\lambda_k) = 0 .

In this case, from (6.55) arises

    h_{k\kappa}(A) = Z_{k\kappa} .

Thus, we recognise that each component of the matrix $A$ turns out to be a polynomial in this matrix. Applying Relation (6.50), we produce from (6.55) a representation of the matrix $f(A)$ by the coefficients of the partial fraction expansion (6.49):

    f(A) = \sum_{i=1}^{q} \left[ f(\lambda_i) M_{i1} + \frac{f'(\lambda_i)}{1!} M_{i2} + \ldots + \frac{f^{(\nu_i-1)}(\lambda_i)}{(\nu_i-1)!} M_{i,\nu_i} \right] .           (6.56)

7. The representation of the function of a matrix in the shape (6.55), (6.56) retains its sense also in those cases when the function $f(\lambda)$ is given by the sum of an (infinite) series that converges on the spectrum of the matrix $A$. Suppose

    f(\lambda) = \sum_{k=-\infty}^{\infty} u_k(\lambda) .

Then, if

    f(\Lambda_A) = \sum_{k=-\infty}^{\infty} u_k(\Lambda_A)

is true,

    f(A) = \sum_{k=-\infty}^{\infty} u_k(A)

is established.

8. Functions of one and the same matrix always commute, i.e. if the functions $f_1(\lambda)$ and $f_2(\lambda)$ are defined on the spectrum of the matrix $A$, then

    f_1(A)\, f_2(A) = f_2(A)\, f_1(A) .

9. In a number of problems, composite functions of matrices are considered. For example, let

    f(\lambda) = F[f_1(\lambda), \ldots, f_n(\lambda)] ,               (6.57)

where the $f_i(\lambda)$ are known scalar functions. Now, if the functions $f(\lambda)$ and $f_i(\lambda)$, $(i = 1, \ldots, n)$ are defined on the spectrum of $A$, then we obtain

    f(A) = F[f_1(A), \ldots, f_n(A)] .

Notice that the composite function (6.57) may be defined on the spectrum of $A$ even if some of the functions $f_i(\lambda)$, $(i = 1, \ldots, n)$ are not defined on the spectrum of $A$.

Example 6.6. Consider the scalar function

    f(\lambda) = \frac{\sin\lambda}{\lambda} ,

which can be written in the form

    f(\lambda) = f_1(\lambda)\, f_2(\lambda), \qquad f_1(\lambda) = \sin\lambda, \quad f_2(\lambda) = \lambda^{-1} .

The function $f_2(\lambda)$ is defined only on the spectrum of non-singular matrices. Nevertheless, the function $f(\lambda)$ is an entire function, and thus defined on the spectrum of arbitrary matrices. Hence the matrix

    f(A) = (\sin A)\, A^{-1} = A^{-1} \sin A

is defined for any matrix $A$ and can be constructed by (6.55) or (6.56). □
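Example 6.6 can be reproduced numerically even for a singular matrix, because $\sin\lambda/\lambda$ is entire: its Taylor series $\sum_{k\ge 0} (-1)^k \lambda^{2k}/(2k+1)!$ can be evaluated at $\lambda = A$ directly. A sketch with illustrative matrices (not from the book):

```python
import numpy as np

def sinc_matrix(A, terms=20):
    """f(A) for f(lambda) = sin(lambda)/lambda via the entire-function series
    sum_{k>=0} (-1)^k A^{2k} / (2k+1)!  -- defined for singular A as well."""
    p = A.shape[0]
    out = np.zeros_like(A, dtype=float)
    term = np.eye(p)                       # current term (-1)^k A^{2k} / (2k+1)!
    for k in range(terms):
        out += term
        term = -term @ A @ A / ((2 * k + 2) * (2 * k + 3))
    return out

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])                 # singular (nilpotent) matrix
F = sinc_matrix(A)
# Since f is even, f'(0) = 0, so for this A: f(A) = f(0) I = I
assert np.allclose(F, np.eye(2))

B = np.diag([1e-8, 2.0])                   # near-singular diagonal case
FB = sinc_matrix(B)
assert np.allclose(np.diag(FB), np.array([np.sin(1e-8) / 1e-8, np.sin(2.0) / 2.0]))
```

The series route sidesteps forming $A^{-1}$ entirely, which is exactly the point of the example: the composite function is defined where one of its factors is not.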

10. Let the $p\times p$ matrix $A$ possess the eigenvalues $\lambda_1, \ldots, \lambda_q$ with the multiplicities $\mu_1, \ldots, \mu_q$, and let the characteristic polynomial of the matrix $A$ have the shape

    d_A(\lambda) = (\lambda - \lambda_1)^{\mu_1} \cdots (\lambda - \lambda_q)^{\mu_q} .      (6.58)

If, under these conditions, the function $f(\lambda)$ is defined on the spectrum of the matrix $A$, then the characteristic polynomial of the matrix $f(A)$ has the form

    d_f(\lambda) = (\lambda - f(\lambda_1))^{\mu_1} \cdots (\lambda - f(\lambda_q))^{\mu_q} .

Besides, if $f(\lambda_i) \neq 0$, $(i = 1, \ldots, q)$ is valid, then the matrix $f(A)$ has only nonzero eigenvalues, i.e. $f(A)$ is non-singular. However, if $f(\lambda_i) = 0$ for any $i$, then the matrix $f(A)$ is singular.

11. Now, the important question about the structure of the set of elementary divisors and the invariant polynomials of the matrix $f(A)$ is investigated. Let the matrix $A$ have the elementary divisors

    (\lambda - \lambda_1)^{\mu_1}, \ \ldots, \ (\lambda - \lambda_r)^{\mu_r} ,               (6.59)

where among the numbers $\lambda_1, \ldots, \lambda_r$ equal ones are allowed. Then the following assertion is true [51]: in those cases where $\mu_i = 1$, or $\mu_i > 1$ and $f'(\lambda_i) \neq 0$, the elementary divisor $(\lambda - \lambda_i)^{\mu_i}$ of the matrix $A$ corresponds to an elementary divisor

    (\lambda - f(\lambda_i))^{\mu_i}

of the matrix $f(A)$. In the case $f'(\lambda_i) = 0$, $\mu_i > 1$, there exists more than one elementary divisor of the matrix $f(A)$ for the elementary divisor $(\lambda - \lambda_i)^{\mu_i}$.

12. Suppose again that the characteristic polynomial of the matrix $A$ has the shape (6.58) and the sequence of its elementary divisors has the shape (6.59). The matrices $A$ and $f(A)$ are said to have the same structure if among the numbers $f(\lambda_1), \ldots, f(\lambda_q)$ there are no equal ones and the sequence of elementary divisors of the matrix $f(A)$ possesses the form analogous to (6.59)

    (\lambda - f(\lambda_1))^{\mu_1}, \ \ldots, \ (\lambda - f(\lambda_r))^{\mu_r} .

The results derived above are formulated in the next theorem.

Theorem 6.7. The following two conditions are necessary and sufficient for the matrices $A$ and $f(A)$ to possess the same structure:
a)
    f(\lambda_i) \neq f(\lambda_k), \quad (i \neq k;\ i, k = 1, \ldots, q)                   (6.60)
b) for all exponents $\mu_{i_\nu}$, $(\nu = 1, \ldots, \ell;\ \ell < r)$ with $\mu_{i_\nu} > 1$,

    f'(\lambda_{i_\nu}) \neq 0, \quad (\nu = 1, \ldots, \ell) .

Corollary 6.8. Let the matrix $A$ be cyclic, i.e. in (6.59) $r = q$ and $\mu_i = \nu_i$ are true. Then the matrix $f(A)$ is also cyclic if and only if Conditions a) and b) are true.

6.4 Matrix Exponential Function


1. Consider the scalar function

f () = et ,

where t is a real parameter. This function is dened on the spectrum of every


matrix. Thus for any matrix A, formula (6.56) is applicable and we obtain

q
i t tei t t(i 1) ei t
f (A) = e Mi1 + Mi2 + . . . + Mi,i , (6.61)
i=1
1! (i 1)!

where

f (i ) = ei t , f  (i ) = tei t , . . . , f (i 1) (i ) = ti 1 ei t .

Matrix (6.61) is named the exponential function of the matrix A and it is


denoted by
f (A) = eAt .

2. A further important representation of the matrix $e^{At}$ is obtained by applying the series expansion

    e^{\lambda t} = 1 + \lambda t + \frac{\lambda^2 t^2}{2!} + \ldots ,                      (6.62)

which converges for all $\lambda$, and consequently on any spectrum too. Inserting the matrix $A$ instead of $\lambda$ into (6.62), we receive

    e^{At} = I_p + At + \frac{A^2 t^2}{2!} + \ldots .                  (6.63)

In particular, for $t = 0$ we get

    e^{At}\big|_{t=0} = I_p .

3. Differentiating (6.63) with respect to $t$, we obtain

    \frac{d}{dt}\left(e^{At}\right) = A\left(I_p + At + \frac{A^2 t^2}{2!} + \ldots\right) = A e^{At} .

4. Substituting the parameter $-\tau$ for $t$ in (6.62), we receive

    e^{-A\tau} = I_p - A\tau + \frac{A^2 \tau^2}{2!} - \ldots .

By multiplying this expansion with (6.63), we prove

    e^{At} e^{-A\tau} = e^{-A\tau} e^{At} = I_p + A(t-\tau) + \frac{A^2 (t-\tau)^2}{2!} + \ldots ,

hence

    e^{At} e^{-A\tau} = e^{-A\tau} e^{At} = e^{A(t-\tau)} .

For $\tau = t$, we find immediately

    e^{At} e^{-At} = e^{-At} e^{At} = I_p

or

    \left(e^{At}\right)^{-1} = e^{-At} .
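These identities are easy to confirm numerically with SciPy's matrix exponential (an illustrative check; the matrix and the step sizes are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
t, tau = 0.7, 0.3

# Semigroup property: e^{At} e^{-A tau} = e^{A(t - tau)}
assert np.allclose(expm(A * t) @ expm(-A * tau), expm(A * (t - tau)))

# Inverse: (e^{At})^{-1} = e^{-At}
assert np.allclose(np.linalg.inv(expm(A * t)), expm(-A * t))

# Derivative d/dt e^{At} = A e^{At}, checked by a central difference
h = 1e-6
num = (expm(A * (t + h)) - expm(A * (t - h))) / (2 * h)
assert np.allclose(num, A @ expm(A * t), atol=1e-6)
```

Note that the semigroup identity needs one and the same matrix $A$; for two matrices, $e^{A}e^{B} = e^{A+B}$ holds only when $A$ and $B$ commute.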

5.
Theorem 6.9. For a positive constant $T$, the matrices $A$ and $e^{AT}$ possess the same structure if the eigenvalues $\lambda_1, \ldots, \lambda_q$ of $A$ satisfy the conditions

    e^{\lambda_i T} \neq e^{\lambda_k T}, \quad (i \neq k;\ i, k = 1, \ldots, q)             (6.64)

or, equivalently,

    \lambda_i - \lambda_k \neq \frac{2\pi n j}{T} = n\omega j, \quad (i \neq k;\ i, k = 1, \ldots, q) ,            (6.65)

where $n$ is an arbitrary integer and $\omega = 2\pi/T$.

Proof. Owing to $d(e^{\lambda T})/d\lambda = T e^{\lambda T} \neq 0$ for the exponential function, Condition b) of Theorem 6.7 is always ensured. Therefore, the matrices $A$ and $e^{AT}$ have the same structure if and only if Conditions (6.60) hold, which in the present case take the shape (6.64). Conditions (6.65) are obvious implications of (6.64).

Corollary 6.10. Let the matrix $A$ be cyclic. Then, a necessary and sufficient condition for the matrix $e^{AT}$ to be cyclic is that Conditions (6.64) hold.

6. The next theorem is fundamental for the subsequent developments.

Theorem 6.11 ([71]). Let the pair $(A, B)$ be controllable and the pair $[A, C]$ observable. Then, under Conditions (6.64), (6.65), the pair $(e^{AT}, B)$ is controllable and the pair $[e^{AT}, C]$ is observable.

7. Since $e^{\lambda t} \neq 0$ for any $\lambda$, $t$, the matrix $e^{At}$ is non-singular for any finite $t$ and any matrix $A$.

6.5 DPFR and DLT of Rational Matrices


1. Let w(s) Rnm (s) be a strictly proper rational matrix and

1 2
w (T, s, t) = w(s + kj)ekjt , = (6.66)
T T
k=

be its displaced pulse frequency response. As the discrete Laplace transform


(DLT) of the matrix w(s), we understand the sum of the series

 1
Dw (T, s, t) = w(s + kj)e(s+kj)t , < t < . (6.67)
T
k=

The transforms (6.66) and (6.67) are closely connected:



Dw (T, s, t) = est w (T, s, t) . (6.68)

Per denition, we have

w (T, s, t) = w (T, s, t + T ) .

In this section, we will derive closed formulae for the sums of the series (6.66)
and (6.67).

2.
Lemma 6.12. Let the matrix $w(s)$ be strictly proper and possess the partial fraction expansion

    w(s) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} \frac{w_{ik}}{(s-s_i)^k} ,                      (6.69)

where the $w_{ik}$ are constant matrices. Then the sum of the series (6.67) is determined by the formulae

    D_w(T,s,t) = D'_w(T,s,t), \qquad 0 < t < T ,                       (6.70)
    D_w(T,s,t) = D'_w(T,s,t-\ell T)\, e^{\ell sT}, \qquad \ell T < t < (\ell+1)T, \quad (\ell = 0, \pm 1, \ldots) ,                  (6.71)

where

    D'_w(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} \frac{w_{ik}}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{e^{\lambda t}}{1 - e^{(\lambda-s)T}} \right]_{\lambda = s_i} .          (6.72)

Proof. Substituting (6.69) into (6.67), we gain

    D_w(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} w_{ik}\, \frac{1}{T} \sum_{m=-\infty}^{\infty} \frac{e^{(s+mj\omega)t}}{(s+mj\omega-s_i)^k} .       (6.73)

Appendix B yields

    \frac{1}{T} \sum_{m=-\infty}^{\infty} \frac{e^{(s+mj\omega)t}}{(s+mj\omega-a)^k} = \frac{1}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{e^{\lambda t}}{1 - e^{(\lambda-s)T}} \right]_{\lambda = a} .

Inserting this into (6.73), we obtain Formulae (6.70) and (6.72). Formula (6.71) is recognised as follows. Let $\ell T < t < (\ell+1)T$; then, using the periodicity of the DPFR, we conclude

    D_w(T,s,t) = \varphi_w(T,s,t)\, e^{st} = \varphi_w(T,s,t-\ell T)\, e^{s(t-\ell T)}\, e^{\ell sT} = D'_w(T,s,t-\ell T)\, e^{\ell sT} .

3.
Lemma 6.13. Let $A$ be a constant $p\times p$ matrix and

    w_A(s) = (sI_p - A)^{-1} .                                         (6.74)

Then the following formulae hold:

    D_{w_A}(T,s,t) = D'_{w_A}(T,s,t)
                   = \left(I_p - e^{-sT} e^{AT}\right)^{-1} e^{At} = e^{At} \left(I_p - e^{-sT} e^{AT}\right)^{-1}, \qquad 0 < t < T ,               (6.75)

    D_{w_A}(T,s,t) = \left(I_p - e^{-sT} e^{AT}\right)^{-1} e^{A(t-\ell T)}\, e^{\ell sT}
                   = e^{A(t-\ell T)}\, e^{\ell sT} \left(I_p - e^{-sT} e^{AT}\right)^{-1}, \qquad \ell T < t < (\ell+1)T .           (6.76)

Proof. Consider the partial fraction expansion of the form (6.49)

    w_A(s) = (sI_p - A)^{-1} = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} \frac{M_{ik}}{(s-s_i)^k} ,  (6.77)

where the $s_i$ are the eigenvalues of $A$. Here, due to (6.50),

    M_{ik} = (k-1)!\, Z_{ik} ,                                         (6.78)

where the $Z_{ik}$ are the components of the matrix $A$. Substituting

    w_{ik} = (k-1)!\, Z_{ik}

into (6.72) gives

    D'_{w_A}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} Z_{ik}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{e^{\lambda t}}{1 - e^{(\lambda-s)T}} \right]_{\lambda = s_i} .     (6.79)

Introduce the scalar function

    f(\lambda, t) = e^{\lambda t} \left(1 - e^{\lambda T} e^{-sT}\right)^{-1} ,

so Relation (6.79) can be presented in the form

    D'_{w_A}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} Z_{ik}\, \frac{\partial^{k-1} f(\lambda,t)}{\partial\lambda^{k-1}}\bigg|_{\lambda = s_i} .

Comparing this with (6.55), we find

    D'_{w_A}(T,s,t) = f(A,t) ,

which is equivalent to (6.75). Relation (6.76) follows immediately from (6.71) and (6.75).

4. On the basis of the stated facts, the next theorem is easily derived.

Theorem 6.14. Let the matrix $w(s)$ have the standard realisation

    w(s) = C(sI_p - A)^{-1} B .                                        (6.80)

Then the following formulae take place:

    D_w(T,s,t) = D'_w(T,s,t) = C\left(I_p - e^{-sT} e^{AT}\right)^{-1} e^{At} B
               = C e^{At} \left(I_p - e^{-sT} e^{AT}\right)^{-1} B, \qquad 0 < t < T ,       (6.81)

    D_w(T,s,t) = e^{\ell sT}\, C\left(I_p - e^{-sT} e^{AT}\right)^{-1} e^{A(t-\ell T)} B, \qquad \ell T < t < (\ell+1)T .            (6.82)

Proof. Insert (6.80) into (6.67) and obtain

    D_w(T,s,t) = C \left\{ \frac{1}{T} \sum_{k=-\infty}^{\infty} \left[(s+kj\omega)I_p - A\right]^{-1} e^{(s+kj\omega)t} \right\} B = C\, D_{w_A}(T,s,t)\, B ,

so Relations (6.81), (6.82) directly emerge from (6.75), (6.76).

Corollary 6.15. Since the left sides of Relations (6.81) and (6.82) depend only on the transfer matrix $w(s)$, the right sides of Formulae (6.81), (6.82) also do not depend on the concrete choice of the realisation $(A, B, C)$ configured by (6.80). Therefore, we are able to state that for two realisations $(A, B, C)$ and $(A_1, B_1, C_1)$, which define one and the same transfer matrix, the equality

    C\left(I_p - e^{-sT} e^{AT}\right)^{-1} e^{At} B = C_1\left(I_q - e^{-sT} e^{A_1 T}\right)^{-1} e^{A_1 t} B_1

holds.
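Formula (6.81) can be validated by truncating the series (6.67). The sketch below is an illustration: the scalar realisation $w(s) = 1/(s+1)$, i.e. $A = -1$, $B = C = 1$, is an arbitrary choice, and symmetric partial sums are used because the individual terms decay only like $1/k$. The truncated sum is compared with the closed expression $C\,(I - e^{-sT} e^{AT})^{-1} e^{At}\, B$, which in the scalar case reads $e^{at}/(1 - e^{-sT} e^{aT})$.

```python
import numpy as np

T, t, s = 1.0, 0.4, 0.5          # 0 < t < T; s off the pole lattice s_i + k j omega
w_freq = 2 * np.pi / T
a = -1.0                          # w(s) = 1/(s - a) with a = -1 (A = a, B = C = 1)

# Symmetric truncation of D_w(T,s,t) = (1/T) sum_k e^{(s + k j w) t} / (s + k j w - a)
k = np.arange(-200000, 200001)
sk = s + 1j * k * w_freq
series = np.sum(np.exp(sk * t) / (sk - a)) / T

# Closed form (6.81) in the scalar case: e^{at} / (1 - e^{-sT} e^{aT})
closed = np.exp(a * t) / (1.0 - np.exp(-s * T) * np.exp(a * T))

assert abs(series - closed) < 1e-3
```

Pairing the terms $k$ and $-k$ makes the partial sums converge like a Fourier series; the slow decay is why the closed forms (6.75)-(6.82) matter in practice.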

5. From the above formulae, using the relation

    \varphi_w(T,s,t) = D_w(T,s,t)\, e^{-st} ,

we obtain closed formulae for the DPFR $\varphi_w(T,s,t)$.

6.6 DPFR and DLT for Modulated Processes


1. In the present section, the properties of the DPFR (6.45)

    \varphi_G(T,s,t) = \varphi_{w\mu}(T,s,t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} w(s+kj\omega)\, \mu(s+kj\omega)\, e^{kj\omega t}         (6.83)

as well as of the DLT

    D_G(T,s,t) = D_{w\mu}(T,s,t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} w(s+kj\omega)\, \mu(s+kj\omega)\, e^{(s+kj\omega)t}                  (6.84)

will be investigated, which are connected by the simple relation

    D_{w\mu}(T,s,t) = \varphi_{w\mu}(T,s,t)\, e^{st} .

The properties of the DPFR imply

    \varphi_{w\mu}(T,s,t) = \varphi_{w\mu}(T,s,t+T) .

Now, closed expressions for the sums of the series (6.83), (6.84) will be derived.

2.
Lemma 6.16. Suppose the strictly proper matrix $w(s) \in R^{n\times m}(s)$ has the shape (6.69). Then the series (6.84) converges for all $s \neq s_i + kj\omega$, $(i = 1, \ldots, q;\ k = 0, \pm 1, \ldots)$, its sum depends continuously on $t$ and is determined by the formula [148]

    D_{w\mu}(T,s,t) = D'_{w\mu}(T,s,t), \qquad 0 \le t \le T ,         (6.85)

where the matrix $D'_{w\mu}(T,s,t)$ is given by either one of the two equivalent relations

    D'_{w\mu}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} \frac{w_{ik}}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{\mu(\lambda)\, e^{\lambda t}}{e^{(s-\lambda)T} - 1} \right]_{\lambda=s_i} + h^{\mu}_w(t) ,          (6.86)

    D'_{w\mu}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} \frac{w_{ik}}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{\mu(\lambda)\, e^{\lambda t}}{1 - e^{(\lambda-s)T}} \right]_{\lambda=s_i} + \bar h^{\mu}_w(t) ,    (6.87)

where

    h^{\mu}_w(t) = \int_0^t h_w(t-\tau)\, m(\tau)\, d\tau, \qquad \bar h^{\mu}_w(t) = -\int_t^T h_w(t-\tau)\, m(\tau)\, d\tau

and

    h_w(t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} \frac{w_{ik}}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ e^{\lambda t} \right]_{\lambda=s_i} .              (6.88)

Formulae (6.86), (6.87) are extended onto the whole $t$-axis by the relation

    D_{w\mu}(T,s,t) = D'_{w\mu}(T,s,t-\ell T)\, e^{\ell sT}, \qquad \ell T < t < (\ell+1)T .

Proof. Placing (6.69) into (6.84) gives

    D_{w\mu}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} w_{ik}\, \frac{1}{T} \sum_{m=-\infty}^{\infty} \frac{\mu(s+mj\omega)\, e^{(s+mj\omega)t}}{(s+mj\omega-s_i)^k} .

From Appendix B, it follows that

    \frac{1}{T} \sum_{m=-\infty}^{\infty} \frac{\mu(s+mj\omega)\, e^{(s+mj\omega)t}}{(s+mj\omega-a)^k}
      = \frac{1}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{\mu(\lambda)\, e^{\lambda t}}{e^{(s-\lambda)T} - 1} \right]_{\lambda=a}
      + \frac{1}{(k-1)!}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \int_0^t e^{\lambda(t-\tau)}\, m(\tau)\, d\tau \right]_{\lambda=a} ,

which results in Formulae (6.85), (6.86). Relation (6.87) is shown in the same way. The continuity with respect to $t$ is a consequence of the properties of the above series.

3. In the particular case $w(s) = w_A(s)$, where the matrix $w_A(s)$ is from (6.74), the following relations hold.

Lemma 6.17. In the case of (6.74), Formulae (6.86), (6.87) can be represented by the two equivalent formulae

    D'_{w_A\mu}(T,s,t) = h_\mu(A,t) + e^{At}\, \mu(A) \left(e^{sT} e^{-AT} - I_p\right)^{-1} ,           (6.89)
    D'_{w_A\mu}(T,s,t) = \bar h_\mu(A,t) + e^{At}\, \mu(A) \left(I_p - e^{-sT} e^{AT}\right)^{-1} ,      (6.90)

where the notations

    h_\mu(t) = \int_0^t e^{A(t-\tau)}\, m(\tau)\, d\tau, \qquad \bar h_\mu(t) = -\int_t^T e^{A(t-\tau)}\, m(\tau)\, d\tau

and

    \mu(A) = \int_0^T e^{-A\tau}\, m(\tau)\, d\tau

were used.

Proof. At first, we obtain from (6.77), (6.78) and (6.88)

    h_{w_A}(t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} Z_{ik}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ e^{\lambda t} \right]_{\lambda=s_i} ,        (6.91)

which, with the aid of (6.55), supplies

    h_{w_A}(t) = e^{At} .

Using (6.91) and (6.77), Formulae (6.86) and (6.87) can be given the form

    D'_{w_A\mu}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} Z_{ik}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{e^{\lambda t}\, \mu(\lambda)}{e^{(s-\lambda)T} - 1} + \int_0^t e^{\lambda(t-\tau)}\, m(\tau)\, d\tau \right]_{\lambda=s_i} ,            (6.92)

    D'_{w_A\mu}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} Z_{ik}\, \frac{\partial^{k-1}}{\partial\lambda^{k-1}} \left[ \frac{e^{\lambda t}\, \mu(\lambda)}{1 - e^{(\lambda-s)T}} - \int_t^T e^{\lambda(t-\tau)}\, m(\tau)\, d\tau \right]_{\lambda=s_i} .           (6.93)

Introduce the scalar function

    f(\lambda,t) = \frac{e^{\lambda t}\, \mu(\lambda)}{e^{(s-\lambda)T} - 1} + \int_0^t e^{\lambda(t-\tau)}\, m(\tau)\, d\tau
                 = \frac{e^{\lambda t}\, \mu(\lambda)}{1 - e^{(\lambda-s)T}} - \int_t^T e^{\lambda(t-\tau)}\, m(\tau)\, d\tau ;      (6.94)

then Formulae (6.92) and (6.93) can be written as

    D'_{w_A\mu}(T,s,t) = \sum_{i=1}^{q} \sum_{k=1}^{\nu_i} Z_{ik}\, \frac{\partial^{k-1} f(\lambda,t)}{\partial\lambda^{k-1}}\bigg|_{\lambda=s_i} .

According to (6.55), we obtain for this expression the compact form

    D'_{w_A\mu}(T,s,t) = f(A,t) .                                      (6.95)

If in Formula (6.94) the argument $\lambda$ is substituted by the matrix $A$, Formulae (6.89) and (6.90) are achieved.

4. Consider Relations (6.89) and (6.90) in more detail for the important special case of the zero-order hold (6.27). Here

    \mu(s) = \mu_0(s) = \frac{1 - e^{-sT}}{s} .

Owing to this equation and (6.27), from Formula (6.94) we gain the equivalent expressions

    f(\lambda,t) = \frac{e^{\lambda t} - 1}{\lambda} + \frac{\left(1 - e^{-\lambda T}\right) e^{\lambda t}}{\lambda \left(e^{(s-\lambda)T} - 1\right)} ,

    f(\lambda,t) = \frac{e^{\lambda(t-T)} - 1}{\lambda} + \frac{\left(1 - e^{-\lambda T}\right) e^{\lambda t}}{\lambda \left(1 - e^{(\lambda-s)T}\right)} .

Passing in these equations to functions of matrices according to (6.95), we find in the present case

    D'_{w_A\mu_0}(T,s,t) = A^{-1}\left(e^{At} - I_p\right) + A^{-1}\left(I_p - e^{-AT}\right) e^{At} \left(e^{sT} e^{-AT} - I_p\right)^{-1} ,          (6.96)

    D'_{w_A\mu_0}(T,s,t) = A^{-1}\left(e^{A(t-T)} - I_p\right) + A^{-1}\left(I_p - e^{-AT}\right) e^{At} \left(I_p - e^{-sT} e^{AT}\right)^{-1} .      (6.97)

Remark 6.18. At first glance, Formulae (6.96) and (6.97) seem to be meaningful only for a regular matrix $A$, but this is a fallacy. Applying the findings of Section 6.3, it is easily shown that the scalar functions

    h_0(\lambda,t) = \frac{e^{\lambda t} - 1}{\lambda}, \qquad \bar h_0(\lambda,t) = \frac{e^{\lambda(t-T)} - 1}{\lambda}, \qquad \mu_0(\lambda) = \frac{1 - e^{-\lambda T}}{\lambda}       (6.98)

are entire functions of the argument $\lambda$. In particular, for $\lambda = 0$,

    h_0(0,t) = t, \qquad \bar h_0(0,t) = t - T, \qquad \mu_0(0) = T ,

    h'_0(0,t) = \frac{t^2}{2}, \qquad \bar h'_0(0,t) = \frac{(t-T)^2}{2}, \qquad \mu'_0(0) = -\frac{T^2}{2}, \quad \ldots .          (6.99)

Thus, the functions (6.96), (6.97) are defined for any matrix $A$, among them singular ones too.
Example 6.19. Suppose
10
A= .
20
In this case, we have
1 0
I2 A =
2
and det(I2 A) = ( 1). Thus the eigenvalues of the matrix A are the
numbers 1 = 1, 2 = 0. Furthermore, we obtain

0
2 1
(I2 A)1 = .
( 1)
6.6 DPFR and DLT for Modulated Processes 265

The partial fraction expansion gives

(λI₂ − A)^{−1} = (1/(λ−1)) [ 1  0 ]  +  (1/λ) [  0  0 ]
                           [ 2  0 ]           [ −2  1 ] ,

i.e., the components of A possess the form

Z₁ = [ 1  0 ]      Z₂ = [  0  0 ]
     [ 2  0 ] ,         [ −2  1 ] .

For any function f(λ) defined for λ = 1 and λ = 0, we obtain from (6.55)

f(A) = Z₁ f(λ₁) + Z₂ f(λ₂) .

Particularly, from (6.98) and (6.99), we receive for the function h₀(λ, t)

h₀(1, t) = eᵗ − 1 ,   h₀(0, t) = t

and hence

h₀(A, t) = (eᵗ − 1) [ 1  0 ]  +  t [  0  0 ]   =  [ eᵗ − 1           0 ]
                    [ 2  0 ]       [ −2  1 ]      [ 2(eᵗ−1) − 2t     t ] .

In the same way, we can convince ourselves that

h̄₀(1, t) = e^{t−T} − 1 ,   h̄₀(0, t) = t − T

and

h̄₀(A, t) = [ e^{t−T} − 1                    0   ]
           [ 2(e^{t−T}−1) − 2(t−T)         t−T ] .

Analogously, we find

μ₀(A) = [ 1 − e^{−T}                0 ]
        [ 2(1−e^{−T}) − 2T          T ] .  □
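The component decomposition of Example 6.19 can be checked numerically. The following sketch (helper names are ours, not the book's) evaluates h₀(A, t) once via the spectral formula f(A) = Z₁ f(λ₁) + Z₂ f(λ₂) from (6.55), and once by summing the entire-function series h₀(λ, t) = Σ_{n≥1} λ^{n−1} tⁿ/n! term by term at the matrix A:

```python
# Numerical check of Example 6.19 (a sketch, not from the book):
# A = [[1, 0], [2, 0]], components Z1, Z2, eigenvalues 1 and 0.
import math

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, X):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def h0_series(A, t, terms=40):
    # h0(lam, t) = (e^{lam t} - 1)/lam = sum_{n>=1} lam^{n-1} t^n / n!,
    # evaluated at the matrix A term by term.
    S = [[0.0, 0.0], [0.0, 0.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]          # running power A^(n-1)
    for n in range(1, terms + 1):
        S = mat_add(S, mat_scale(t**n / math.factorial(n), P))
        P = mat_mul(P, A)
    return S

A  = [[1.0, 0.0], [2.0, 0.0]]
Z1 = [[1.0, 0.0], [2.0, 0.0]]
Z2 = [[0.0, 0.0], [-2.0, 1.0]]
t  = 0.7
h0 = lambda lam, t: (math.exp(lam * t) - 1) / lam if lam != 0 else t

spectral = mat_add(mat_scale(h0(1.0, t), Z1), mat_scale(h0(0.0, t), Z2))
direct   = h0_series(A, t)
err = max(abs(spectral[i][j] - direct[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)
```

Both evaluations produce the matrix [eᵗ−1, 0; 2(eᵗ−1)−2t, t] up to round-off, as the example predicts.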

5. The following results generalise the preceding considerations.

Theorem 6.20. Suppose the matrix w(s) is given in the standard form (6.80). Then
the formula

D_{wμ}(T, s, t) = C D_{μA}(T, s, t) B ,   0 ≤ t ≤ T      (6.100)

is valid, where the matrix D_{μA}(T, s, t) is determined by Formulae (6.89),
(6.90). This formula is extended onto the whole time axis by

D_{wμ}(T, s, t) = C D_{μA}(T, s, t − ℓT) B e^{ℓsT} ,   ℓT < t < (ℓ+1)T .      (6.101)

Corollary 6.21. Let (A, B, C) and (Ã, B̃, C̃) be any realisations of the matrix
w(s). Then

C D_{μA}(T, s, t) B = C̃ D_{μÃ}(T, s, t) B̃ .

6. It can be shown that a series of the form (6.84) also converges when
w(s) is only proper. Indeed, in this case we write

w(s) = w₁(s) + w₀ ,      (6.102)

where the matrix w₁(s) is strictly proper and

w₀ = lim_{s→∞} w(s) .

When (6.102) holds, then (6.84) implies

D_{wμ}(T, s, t) = D_{w₁μ}(T, s, t) + w₀ (1/T) Σ_{k=−∞}^{∞} μ(s + kjω) e^{(s+kjω)t} .      (6.103)

The first term on the right side of (6.103) can be calculated by Formulae
(6.100) and (6.101). In order to calculate the second term, we observe that

(1/T) Σ_{k=−∞}^{∞} μ(s + kjω) e^{(s+kjω)t} = φ_μ(T, s, t) e^{st} ,

where the function φ_μ(T, s, t) is defined by (6.36). Thus, we obtain

(1/T) Σ_{k=−∞}^{∞} μ(s + kjω) e^{(s+kjω)t} = D_μ(T, s, t) = m(t − ℓT) e^{ℓsT} ,   ℓT < t < (ℓ+1)T .
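The last identity can be illustrated numerically for a zero-order hold, μ₀(s) = (1 − e^{−sT})/s, where m(t) ≡ 1 on (0, T). For 0 < t < T (i.e. ℓ = 0) the symmetric partial sums of the series should approach m(t) = 1. A sketch with parameter values of our choosing:

```python
# Truncated-series check of (1/T) * sum_k mu0(s + k*j*w) * e^{(s + k*j*w)*t} = 1
# for a zero-order hold and 0 < t < T, with w = 2*pi/T.
import cmath, math

T = 1.0
w = 2 * math.pi / T
s = 0.3 + 0.2j                            # arbitrary complex parameter
t = 0.4                                   # interior point, 0 < t < T

def mu0(z):
    # zero-order hold: mu0(z) = (1 - e^{-zT}) / z
    return (1 - cmath.exp(-z * T)) / z

K = 4000                                  # symmetric truncation of the series
total = sum(mu0(s + 1j * k * w) * cmath.exp((s + 1j * k * w) * t)
            for k in range(-K, K + 1)) / T
print(abs(total - 1.0))                   # small residual
```

The residual shrinks as K grows (roughly like 1/K away from the jump points t = ℓT, as for any Fourier series of a function with jumps).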

6.7 Parametric Discrete Models of Continuous Processes


1. Substituting

e^{sT} = ζ      (6.104)

into (6.81) results in a rational matrix of the argument ζ:

D_w(T, ζ, t) = D_w(T, s, t) |_{e^{sT} = ζ}
             = C (ζ I_p − e^{AT})^{−1} e^{At} B ,   0 < t < T .      (6.105)

Matrix (6.105) is called the parametric discrete model of the matrix (of the
process) w(s). A list of general properties of the matrix D_w(T, ζ, t) will now
be derived from Relation (6.105).
Since the matrix e^{AT} is regular, we conclude from (6.105) that the matrix
D_w(T, ζ, t) is strictly proper for all t. Besides, for w(s) ∈ R^{n×m}(s) also
D_w(T, ζ, t) ∈ R^{n×m}(ζ) is true.
Moreover, Corollary 6.15 ensures that the parametric discrete model
(6.105) does not depend on the concrete choice of the realisation (A, B, C)
in (6.80).

2. We introduce a new concept to prepare the next theorem.
Suppose the monic polynomial

r(s) = (s − s₁)^{ν₁} ··· (s − s_q)^{ν_q} .

Then the monic polynomial

r^d(ζ) = (ζ − e^{s₁T})^{ν₁} ··· (ζ − e^{s_qT})^{ν_q}

is called the discretisation of the polynomial r(s).
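The discretisation maps each root s_i of r(s) to e^{s_i T} while preserving multiplicities. A small sketch (function names are ours; real roots only for simplicity, complex roots would need cmath):

```python
# Discretisation r(s) -> r^d(zeta): roots s_i are mapped to e^{s_i T}.
import math

def poly_from_roots(roots):
    # monic polynomial coefficients, highest power first
    coeffs = [1.0]
    for r in roots:
        nxt = coeffs + [0.0]              # multiply current polynomial by x
        for i in range(len(coeffs)):
            nxt[i + 1] -= r * coeffs[i]   # ... and subtract r times it
        coeffs = nxt
    return coeffs

def discretise(root_mult_pairs, T):
    # r(s) = prod (s - s_i)^{nu_i}  ->  r^d(zeta) = prod (zeta - e^{s_i T})^{nu_i}
    roots = []
    for s_i, nu in root_mult_pairs:
        roots += [math.exp(s_i * T)] * nu
    return poly_from_roots(roots)

# Example: r(s) = s (s + 1), T = 1, so r^d(zeta) = (zeta - 1)(zeta - e^{-1}).
rd = discretise([(0.0, 1), (-1.0, 1)], 1.0)
print(rd)
```

For the example shown, the coefficients are 1, −(1 + e^{−1}), e^{−1}, i.e. the expansion of (ζ − 1)(ζ − e^{−1}).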


Theorem 6.22. Let Relation (6.80) define a minimal standard realisation
of the matrix w(s), and let the eigenvalues s_i, (i = 1, …, q) of the matrix A
satisfy the relations

e^{s_i T} ≠ e^{s_k T} ,   (i ≠ k;  i, k = 1, …, q) ,      (6.106)

which are called conditions for non-pathological behavior. Then the PMD

τ_d(ζ, t) = (ζ I_p − e^{AT}, e^{At} B, C)      (6.107)

is minimal for any t.

Proof. Observing Theorem 6.11, we realise that under the conditions for non-
pathological behavior (6.106), the minimality of the realisation (6.80) en-
sures that the pair (e^{AT}, B) is controllable and the pair [e^{AT}, C] is observable.
Since the matrix e^{At} commutes with the matrix e^{AT} for all t, and in
addition det e^{At} ≠ 0 is true, Theorem 1.53 provides that the pair (e^{AT}, e^{At}B)
is controllable. As a consequence, the controllability of the pair (e^{AT}, e^{At}B)
and the observability of the pair [e^{AT}, C] together with Lemma 5.34 ensure
the irreducibility of the pairs (ζ I_p − e^{AT}, e^{At}B) and [ζ I_p − e^{AT}, C]. But this
means nothing else than that the PMD (6.107) is minimal.

3.
Theorem 6.23. Let under the conditions of Theorem 6.22 the sequence of
invariant polynomials a₁(s), …, a_p(s) of the matrix sI_p − A have the form

a₁(s) = (s − s₁)^{ν_{11}} ··· (s − s_q)^{ν_{1q}}
  ⋮                                                  (6.108)
a_p(s) = (s − s₁)^{ν_{p1}} ··· (s − s_q)^{ν_{pq}} .

Then for the invariant polynomials α₁(ζ), …, α_p(ζ) of the matrix ζ I_p − e^{AT},
the relations

α_i(ζ) = a_i^d(ζ) ,   (i = 1, …, p)      (6.109)

are true; that means, each invariant polynomial α_i(ζ) is the discretisation of
the corresponding invariant polynomial a_i(s).
Proof. Due to the suppositions, the matrices A and e^{AT} have the same struc-
ture. Thus, if (6.106) is fulfilled, the sequence ᾱ₁(z), …, ᾱ_p(z)
of the invariant polynomials of the matrix zI_p − e^{AT} has the shape

ᾱ₁(z) = (z − e^{s₁T})^{ν_{11}} ··· (z − e^{s_qT})^{ν_{1q}}
  ⋮
ᾱ_p(z) = (z − e^{s₁T})^{ν_{p1}} ··· (z − e^{s_qT})^{ν_{pq}} .

But then owing to Lemma 5.36, the sequence of invariant polynomials of the
matrix ζ I_p − e^{AT} has the form

α₁(ζ) = (ζ − e^{s₁T})^{ν_{11}} ··· (ζ − e^{s_qT})^{ν_{1q}}
  ⋮                                                  (6.110)
α_p(ζ) = (ζ − e^{s₁T})^{ν_{p1}} ··· (ζ − e^{s_qT})^{ν_{pq}} ,

which is equivalent to (6.109).

Corollary 6.24. Let ∆_w(s) and ∆_D(ζ) be the McMillan denominators of the
matrices w(s) and D_w(T, ζ, t), respectively. Then under the conditions of The-
orem 6.22

∆_D(ζ) = ∆_w^d(ζ)      (6.111)

and hence

Mdeg w(s) = Mdeg D_w(T, ζ, t) .      (6.112)

Proof. If (6.108) and (6.110) hold, we obtain

∆_w(s) = a₁(s) ··· a_p(s) ,   ∆_D(ζ) = α₁(ζ) ··· α_p(ζ)

and Equations (6.111) and (6.112) result from (6.109).

Corollary 6.25. If under the conditions of Theorem 6.22 the matrix w(s) is
normal, then for any t the matrix D_w(T, ζ, t) is also normal.

Proof. If the matrix w(s) is normal, then in any minimal standard repre-
sentation (6.80) the matrix A is cyclic. According to Corollary 6.10, e^{AT} is
then also cyclic. Besides, the minimality of the PMD (6.107) and Theorem 3.17
ensure that Matrix (6.105) is normal.
4.
Theorem 6.26. Let the strictly proper rational matrices w(s) and w₁(s) of
sizes n × m and n × r, respectively, be related by

w₁(s) ≺_l w(s)      (6.113)

and let the poles s₁, …, s_q of the matrix w(s) satisfy the conditions for
non-pathological behavior (6.106). Then

D_{w₁}(T, ζ, t) ≺_l D_w(T, ζ, t) .      (6.114)

Proof. Let us have the minimal standard realisation

w(s) = C(sI_p − A)^{−1} B .

Then Theorem 2.56 and (6.113) imply the existence of a representation of the
form

w₁(s) = C(sI_p − A)^{−1} B₁ .

Passing to the parametric discrete-time models, we receive

D_w(T, ζ, t) = C (ζ I_p − e^{AT})^{−1} e^{At} B ,
D_{w₁}(T, ζ, t) = C (ζ I_p − e^{AT})^{−1} e^{At} B₁ .

Consider now the ILMFD

C (ζ I_p − e^{AT})^{−1} = a^{−1}(ζ) b(ζ) .

Then the minimality of the PMD (6.107) together with Lemma 2.9 ensures
that the right side of the relation

D_w(T, ζ, t) = a^{−1}(ζ) [b(ζ) e^{At} B]      (6.115)

is an ILMFD of the matrix D_w(T, ζ, t) for any t. Besides, the product

a(ζ) D_{w₁}(T, ζ, t) = b(ζ) e^{At} B₁

turns out to be a polynomial matrix, i.e., we obtain (6.114).

Remark 6.27. As a consequence of (6.115), we find that for a parametric
discrete model D_w(T, ζ, t) there always exists an ILMFD (6.115) in which
the left divisor a(ζ) does not depend on t.

5.
Theorem 6.28. Let the rational matrices F(s) and G(s) of sizes n × m and
m × r, respectively, be given. Suppose the matrices F(s) and

L(s) = F(s) G(s)

are strictly proper, and let the poles of the matrices F(s) and G(s) together
satisfy the conditions for non-pathological behavior (6.106). Moreover, let us
have the ILMFDs

L(s) = a₀^{−1}(s) b₀(s) ,   F(s) = a₁^{−1}(s) b₁(s) ,

G(s) = a₂^{−1}(s) b₂(s) ,   b₁(s) a₂^{−1}(s) = a₃^{−1}(s) b₃(s) ,

and let a_{iρ}(s), (i = 1, …, n; ρ = 0, 1, 2) be the totality of invariant
polynomials of the matrices a₀(s), a₁(s), a₂(s). Then owing to the validity of

F(s) ≺_l L(s) = F(s) G(s) ,

the following relations hold:

a)
D_F(T, ζ, t) ≺_l D_L(T, ζ, t) = D_{FG}(T, ζ, t) .      (6.116)

b) If we have the ILMFDs

D_{FG}(T, ζ, t) = α₀^{−1}(ζ) β₀(ζ, t) ,   D_F(T, ζ, t) = α₁^{−1}(ζ) β₁(ζ, t) ,      (6.117)

then

α₀(ζ) = α₂(ζ) α₁(ζ) ,      (6.118)

where α₂(ζ) is an n × n polynomial matrix. Besides, if α_{iρ}(ζ), (i =
1, …, n; ρ = 0, 1) are the sequences of invariant polynomials of the ma-
trices α₀(ζ) and α₁(ζ), respectively, then

α_{iρ}(ζ) = a_{iρ}^d(ζ) ,   (i = 1, …, n; ρ = 0, 1) .      (6.119)

c) Denote

∆_i(s) = det a_i(s) ,   (i = 0, 1, 2) ;

then the conditions

det α_i(ζ) = ∆_i^d(ζ) ,   (i = 0, 1, 2)      (6.120)

are satisfied.
d) The following relation holds:

α₀(ζ) D_F(T, ζ, t) = α₂(ζ) β₁(ζ, t) .      (6.121)

e) If the matrices L(s) and F(s) are normal, then the matrix α₂(ζ) is simple.

Proof. a) Relation (6.116) is a consequence of Theorem 6.26.
b) Relations (6.118) and (6.119) follow immediately from (6.116), the concept
of subordination, and Theorems 6.23 and 6.26.
c) Formulae (6.120) are consequences of Theorem 6.23.
d) Formula (6.121) results from (6.118), because we have the ILMFDs (6.117).
e) If the matrices L(s) and F(s) are normal, then owing to Corollary 6.25, the
matrices D_{FG}(T, ζ, t) and D_F(T, ζ, t) are normal. Thus according to The-
orem 3.1, it follows that the matrices α₀(ζ) and α₁(ζ) are simple. Hence
the matrix α₂(ζ) is simple, because a product of two polynomial matrices
is simple if and only if both factors are simple matrices.
Remark 6.29. Recall that by transposition, analogous statements to Theo-
rems 6.26 and 6.28 can be formulated for subordination from the right and
corresponding IRMFDs.

6.8 Parametric Discrete Models of Modulated Processes


1. Substituting the closed expressions (6.89), (6.90) into (6.100), we obtain
the equivalent formulae

D_{wμ}(T, s, t) = C [h_μ(A, t) + e^{At} μ(A) (e^{sT} e^{−AT} − I_p)^{−1}] B ,
D_{wμ}(T, s, t) = C [h̄_μ(A, t) + e^{At} μ(A) (I_p − e^{−sT} e^{AT})^{−1}] B .

After replacing the complex variable in these relations by (6.104), we find the
rational matrix

D_{wμ}(T, s, t) |_{e^{sT} = ζ} = D_{wμ}(T, ζ, t) ,

which is determined by the equivalent expressions

D_{wμ}(T, ζ, t) = C [h_μ(A, t) + e^{A(t+T)} μ(A) (ζ I_p − e^{AT})^{−1}] B ,
                                                                              (6.122)
D_{wμ}(T, ζ, t) = C [h̄_μ(A, t) + ζ e^{At} μ(A) (ζ I_p − e^{AT})^{−1}] B .

The matrix D_{wμ}(T, ζ, t) is called the parametric discrete model of the modu-
lated process w(s)μ(s). Clearly, w(s) ∈ R^{n×m}(s) implies D_{wμ}(T, ζ, t) ∈ R^{n×m}(ζ).

2. Starting from Relations (6.122), some important properties of the matrix
D_{wμ}(T, ζ, t) will be established.
a) First, (6.122) provides that there exists the finite limit

lim_{ζ→∞} D_{wμ}(T, ζ, t) = C h_μ(A, t) B .

Thus for all t, the matrix D_{wμ}(T, ζ, t) is at least proper.


b) As in Corollary 6.21, we recognise that the right sides of Formulae (6.122)
do not depend on the concrete choice of the realisation (A, B, C) in the
standard form (6.80).

3. Bring the first formula of (6.122) into the form

D_{wμ}(T, ζ, t) = C h_μ(A, t) B + R(T, ζ, t) ,

where R(T, ζ, t) is the rational matrix

R(T, ζ, t) = C e^{A(t+T)} μ(A) (ζ I_p − e^{AT})^{−1} B .

The right side can be seen as the transfer matrix of the PMD

τ_m(ζ, t) = (ζ I_p − e^{AT}, e^{A(t+T)} μ(A) B, C) ,      (6.123)

which is called the parametric discrete model of the modulated process
w(s)μ(s).

Theorem 6.30. Let the standard realisation (6.80) be minimal, and let the
eigenvalues s₁, …, s_q of the matrix A satisfy the conditions for non-
pathological behavior

e^{s_i T} ≠ e^{s_k T} ,   (i ≠ k; i, k = 1, …, q)      (6.124)

and

μ(s_i) ≠ 0 ,   (i = 1, …, q) .      (6.125)

Then the PMD (6.123) is minimal for any t.

Proof. With attention to Theorem 6.11, we realise that for satisfied Conditions
(6.124), the pair (e^{AT}, B) is controllable and the pair [e^{AT}, C] is observable.
Thus, the pair (e^{AT}, e^{A(t+T)} μ(A) B) is also controllable, which is a consequence
of Theorem 1.53, because the matrices e^{AT} and e^{A(t+T)} μ(A) are regular due
to (6.125). Since the matrix e^{AT} is also non-singular, from the controllability
of the pair (e^{AT}, e^{A(t+T)} μ(A) B) and the observability of the pair [e^{AT}, C], we
derive that the pairs (ζ I_p − e^{AT}, e^{A(t+T)} μ(A) B) and [ζ I_p − e^{AT}, C] are irreducible.
But this means, the PMD (6.123) is minimal.

Corollary 6.31. Under the conditions of Theorem 6.30, for the matrix
D_{wμ}(T, ζ, t) there exists an ILMFD

D_{wμ}(T, ζ, t) = α^{−1}(ζ) β(ζ, t) ,      (6.126)

where the matrix α(ζ) depends neither on t nor on the function m(t).

Proof. Take the ILMFD

C (ζ I_p − e^{AT})^{−1} = α^{−1}(ζ) β₁(ζ) .

Then owing to Lemma 2.9, the expression

D_{wμ}(T, ζ, t) = α^{−1}(ζ) [β₁(ζ) e^{A(t+T)} μ(A) B + α(ζ) C h_μ(A, t) B]

defines an ILMFD of the matrix D_{wμ}(T, ζ, t).



4. Henceforth, the entirety of Relations (6.124) and (6.125) will be called
the strict conditions for non-pathological behavior. It can be shown that in the case
of a zero-order hold (6.27), where m(t) ≡ 1, the strict conditions for non-
pathological behavior (6.124), (6.125) coincide with the ordinary Conditions
(6.124). Indeed, if (6.124) is true, then none of the eigenvalues s₁, …, s_q can
have the form 2nπj/T, where n is an integer different from zero. Thus for all
i, (1 ≤ i ≤ q), the relations

μ₀(s_i) = (1 − e^{−s_i T}) / s_i ≠ 0

are valid; this means, Conditions (6.124) imply (6.125).
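Condition (6.124) is easy to test numerically: the sampled eigenvalues e^{s_i T} must be pairwise distinct. A sketch (helper names are ours) for an undamped oscillator with eigenvalues ±jω₀, for which T = π/ω₀ is a pathological period:

```python
# Checking the non-pathological condition e^{s_i T} != e^{s_k T} (i != k).
import cmath

def non_pathological(eigs, T, tol=1e-9):
    zs = [cmath.exp(s * T) for s in eigs]
    return all(abs(zs[i] - zs[k]) > tol
               for i in range(len(zs)) for k in range(i + 1, len(zs)))

eigs = [1j, -1j]                          # undamped oscillator, w0 = 1
print(non_pathological(eigs, 1.0))        # True: T = 1 is unproblematic
print(non_pathological(eigs, cmath.pi))   # False: e^{j*pi} = e^{-j*pi} = -1
```

This also illustrates the remark above: the pathological periods for a real plant arise exactly when two distinct eigenvalues (here a conjugate pair) are mapped to the same point e^{sT}.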

5. The properties of the matrix D_{wμ}(T, ζ, t) will be characterised by a num-
ber of theorems. The proofs of these theorems are repetitions of the proofs of
the corresponding statements in Section 6.7, with the only distinction that we
always have to replace the conditions for non-pathological behavior (6.106) by
the strict conditions for non-pathological behavior (6.124), (6.125). Therefore,
hereinafter all statements of this section will be given without proofs.

Theorem 6.32. Let ∆_w(s) and ∆_D(ζ) be the McMillan denominators of the
matrices w(s) and D_{wμ}(T, ζ, t), respectively. Then for fulfilled strict conditions
for non-pathological behavior, the equation

∆_D(ζ) = ∆_w^d(ζ)

holds, and thus

Mdeg w(s) = Mdeg D_{wμ}(T, ζ, t) .

Theorem 6.33. If Conditions (6.124), (6.125) are fulfilled and the standard
realisation (6.80) is minimal, where the matrix A is cyclic, then the matrix
D_{wμ}(T, ζ, t) is normal.

Theorem 6.34. Let the matrices w(s) and w₁(s) of sizes n × m and n × r,
respectively, be at least proper and related by

w₁(s) ≺_l w(s) .

Moreover, let the poles s₁, …, s_q of the matrix w(s) satisfy the strict conditions
for non-pathological behavior (6.124), (6.125). Then

D_{w₁μ}(T, ζ, t) ≺_l D_{wμ}(T, ζ, t) .

Theorem 6.35. Suppose the rational matrices F(s) and G(s) of sizes n × m
and m × r, respectively. Assume the matrices F(s) and

L(s) = F(s) G(s)

are at least proper, and let the poles of the matrices F(s) and G(s) as a whole sat-
isfy the strict conditions for non-pathological behavior (6.124), (6.125). More-
over, the ILMFDs

L(s) = a₀^{−1}(s) b₀(s) ,   F(s) = a₁^{−1}(s) b₁(s) ,

G(s) = a₂^{−1}(s) b₂(s) ,   b₁(s) a₂^{−1}(s) = a₃^{−1}(s) b₃(s)

should exist, and let a_{iρ}(s), (i = 1, …, n; ρ = 0, 1, 2) be the totality of the in-
variant polynomials of the matrices a₀(s), a₁(s), a₂(s). Then from the validity
of

F(s) ≺_l L(s) = F(s) G(s)

the following relations emerge:

a)
D_{Fμ}(T, ζ, t) ≺_l D_{FGμ}(T, ζ, t) = D_{Lμ}(T, ζ, t) .

b) Suppose the ILMFDs

D_{Lμ}(T, ζ, t) = α₀^{−1}(ζ) β₀(ζ, t) ,   D_{Fμ}(T, ζ, t) = α₁^{−1}(ζ) β₁(ζ, t) ;

then

α₀(ζ) = α₂(ζ) α₁(ζ) ,

where α₂(ζ) is an n × n polynomial matrix. Besides, if α_{iρ}(ζ), (i =
1, …, n; ρ = 0, 1) are the sequences of invariant polynomials of the ma-
trices α₀(ζ) and α₁(ζ), then the relations

α_{iρ}(ζ) = a_{iρ}^d(ζ) ,   (i = 1, …, n; ρ = 0, 1)

take place.
c) Denote

g_ρ(s) = det a_ρ(s) ,   (ρ = 0, 1, 2) ;

then the conditions

det α_ρ(ζ) = g_ρ^d(ζ) ,   (ρ = 0, 1, 2)

are fulfilled.
d) The relation

α₀(ζ) D_{Fμ}(T, ζ, t) = α₂(ζ) β₁(ζ, t)

holds, where the matrix β₁(ζ, t) is a polynomial in ζ for all t.
e) If the matrices L(s) and F(s) are normal, then the matrices D_{Lμ}(T, ζ, t)
and D_{Fμ}(T, ζ, t) are also normal, and the matrices α₀(ζ), α₁(ζ), α₂(ζ) are
simple.

Remark 6.36. Notice that by transposition, all statements above can be for-
mulated for subordination from the right and corresponding IRMFDs.

6.9 Reducibility of Parametric Discrete Models


1. As follows from Formulae (6.105) and (6.122), the parametric discrete
models of continuous-time and of modulated processes constructed above may be
represented for 0 < t < T in the form

D(ζ, t) = N_D(ζ, t) / ψ(ζ) ,      (6.127)

where ψ(ζ) is a scalar polynomial and N_D(ζ, t) is a polynomial matrix in ζ
for any fixed t. A rational fraction (matrix) of the form (6.127) is called
irreducible when there does not exist a representation

D(ζ, t) = N_{D1}(ζ, t) / ψ₁(ζ)

with deg ψ₁(ζ) < deg ψ(ζ). If, however, such a representation exists, the frac-
tion (6.127) is named reducible.

2. Let F(ζ) be a matrix of the argument ζ. The already known replacement

F*(s) = F(ζ) |_{ζ = e^{sT}}

will be investigated systematically. Obviously, the reverse relation holds:

F(ζ) = F*(s) |_{e^{sT} = ζ} .

3. In particular, if the matrix D(ζ, t) has the shape (6.127), then the matrix

D*(s, t) = N_D*(s, t) / ψ*(s)

is called rational-periodic (rat.per.). A rat.per. matrix is named (ir)reducible
if Matrix (6.127) is (ir)reducible.

4. Proceeding from the concepts just introduced, the question arises whether
the rat.per. matrix (6.81) is reducible for 0 < t < T. With respect to

D_w(T, s, t) = C (I e^{sT} − e^{AT})^{−1} e^{At} B ,      (6.128)

the question on reducibility of Matrix (6.128) leads to the question on re-
ducibility of the rational matrix

D_w(T, ζ, t) = C (ζ I − e^{AT})^{−1} e^{At} B ,      (6.129)

which can be represented in the form (6.127) with

N_D(ζ, t) = C adj(ζ I − e^{AT}) e^{At} B ,   ψ(ζ) = det(ζ I − e^{AT}) .

Theorem 6.37. Let the realisation (A, B, C) be simple, i.e., the pair (A, B) is
controllable, the pair [A, C] is observable, and the matrix A is cyclic. Further-
more, the conditions for non-pathological behavior (6.106) should be satisfied.
Then Matrix (6.129) (and therefore also Matrix (6.128)) is irreducible.

Proof. Owing to Theorem 6.22, the PMD

τ_d(ζ, t) = (ζ I − e^{AT}, e^{At} B, C)

is minimal for any t. Besides, from Corollary 6.10 it emerges that the matrices
e^{AT} and e^{−AT} are cyclic. Thus, the matrix ζ I − e^{AT} = e^{AT}(ζ e^{−AT} − I)
is simple and its minimal polynomial is equivalent to its characteristic poly-
nomial. Hence by Theorem 2.47, Matrix (6.129) is irreducible for all t. This
implies that the matrix

D_w(T, s, t) = C adj(I e^{sT} − e^{AT}) e^{At} B / det(I e^{sT} − e^{AT})

is irreducible too.

Remark 6.38. From the above proof, we see that under the conditions of
Theorem 6.37, the right side of the last equation is irreducible for any t.

Remark 6.39. If any one of the suppositions in Theorem 6.37 is violated, then
we can have reducibility in the above sense.

5. By analogy, we can handle the question on reducibility of the rat.per.
matrix D_{wμ}(T, s, t), which is written in the form

D_{wμ}(T, s, t) =
 = [C e^{A(t+T)} μ(A) adj(I e^{sT} − e^{AT}) B + C h_μ(A, t) B det(I e^{sT} − e^{AT})] / det(I e^{sT} − e^{AT}) .      (6.130)

Here the next theorem takes place.

Theorem 6.40. Let the pair (A, B) be controllable, the pair [A, C] observable
and the matrix A cyclic, and let the strict conditions for non-pathological
behavior (6.124), (6.125) be fulfilled. Then Matrix (6.130) is irreducible for
any t (and also irreducible in the above sense).

Proof. Same proof as for Theorem 6.37.

Remark 6.41. If one of the conditions in Theorem 6.40 is violated, then Ma-
trix (6.130) turns out to be reducible.

6. The definitions introduced in this section can be extended to wider classes
of functions, which we will meet in later considerations.
Let the matrix N_D(ζ, t) in (6.127) be an entire function of the argument
ζ for any t. Then the fraction (6.127) is called reducible if there exists a root
ζ₀ of the polynomial ψ(ζ) with

N_D(ζ₀, t) = O ,

and irreducible otherwise.
7
Mathematical Description, Stability and Stabilisation of the Standard Sampled-data System in Continuous Time

7.1 The Standard Sampled-data System


1. As a standard sampled-data system, we mean a system having the struc-
ture shown in Fig. 7.1. Here, x = x(t), y = y(t), z = z(t), and u = u(t) are
column-vectors of dimensions ℓ × 1, n × 1, r × 1, and m × 1, respectively.

Fig. 7.1. Standard sampled-data system (generalised plant L with inputs x, u and outputs z, y; digital controller C from y to u)

The block L is a generalised continuous-time LTI plant described by the following
equations:

[ z ]          [ x ]
[   ]  = w(p)  [   ] ,      (7.1)
[ y ]          [ u ]

where p = d/dt and w(p) is a real rational block transfer matrix of the form

          ℓ       m
w(p) = [ K(p)   L(p) ]  r      (7.2)
       [ M(p)   N(p) ]  n ,

where the letters outside the matrix indicate the dimensions of the corre-
sponding blocks. Henceforth, it is assumed that the matrix N(p) is strictly
proper and L(p) is at least proper. The restrictions imposed on the matrices
K(p) and M(p) will depend on the problem under consideration. Using (7.1)
and (7.2), we can write the equations of the plant in the operator form
280 7 Description and Stability of SD Systems

z = K(p) x + L(p) u ,
y = M(p) x + N(p) u .      (7.3)

2. In addition to the continuous-time plant L, the standard system contains
a multivariable digital controller C described by the block-diagram shown in
Fig. 7.2. According to Section 6.2, the equations of the ADC are

ξ_k = y(kT) ,      (7.4)

where the input signal y(t) is assumed to be continuous. The digital controller
ALG can be described either by the forward model (6.21)

α₀ ψ_{k+ρ} + … + α_ρ ψ_k = β₀ ξ_{k+ρ} + … + β_ρ ξ_k      (7.5)

or by the associated backward model (6.22)

ᾱ₀ ψ_k + … + ᾱ_ρ ψ_{k−ρ} = β̄₀ ξ_k + … + β̄_ρ ξ_{k−ρ} ,      (7.6)

where α_i, β_i, ᾱ_i, and β̄_i are constant real matrices of compatible dimensions
and det α₀ ≠ 0. Moreover, the equations for the DAC have the form

u(t) = m(t − kT) ψ_k ,   kT < t < (k+1)T .      (7.7)

Taken in the aggregate, Equations (7.3)–(7.7) form a system of equations,
which hereinafter will be called the operator model of the standard sampled-
data system. This model can be associated with the expanded structure shown
in Fig. 7.2.

7.2 Equation Discretisation for the Standard SD System

1. Hereinafter, in addition to the assumptions taken in Section 7.1, we sup-
pose that the matrices K(p) and M(p) are strictly proper. In this case the
matrix w(p) is at least proper and admits a multitude of standard realisations
of the form

w(p) = C(pI − A)^{−1} B + D      (7.8)

with

D = [ O_{r×ℓ}   D_L     ]
    [ O_{n×ℓ}   O_{n×m} ]

and constant matrices A, B, C, and D_L. Separating the blocks of suitable
dimensions, we receive

C = [ C₁ ]  (r and n rows) ,      B = [ B₁  B₂ ]  (ℓ and m columns) .
    [ C₂ ]
Fig. 7.2. Operator model of the standard sampled-data system (plant blocks K(p), M(p), L(p), N(p) with summators forming z and y; feedback path y → ADC → ALG → DAC → u)

Then (7.8) yields

w(p) = [ C₁(pI − A)^{−1}B₁    C₁(pI − A)^{−1}B₂ + D_L ]
       [ C₂(pI − A)^{−1}B₁    C₂(pI − A)^{−1}B₂      ] .

Comparing this transfer matrix with (7.2), we find

K(p) = C₁(pI − A)^{−1}B₁ ,   L(p) = C₁(pI − A)^{−1}B₂ + D_L ,
                                                                   (7.9)
M(p) = C₂(pI − A)^{−1}B₁ ,   N(p) = C₂(pI − A)^{−1}B₂ .

The standard realisation (7.8) can be associated with the state equations of
the plant

dv/dt = A v + B₁ x + B₂ u ,
                                                                   (7.10)
z = C₁ v + D_L u ,   y = C₂ v .

Taken in the aggregate, the state equations (7.10) and the equations of the
digital controller (7.4)–(7.7) form a system of differential-difference equations,
which will be called a continuous-time model of the standard sampled-data
system.

2. Let us show that the continuous-time model (7.10), (7.4)–(7.7) can be
transformed into an equivalent system of difference equations. With this aim
in view, we integrate the first equation of (7.10) over the interval kT ≤ t ≤
(k+1)T, where k is any integer. So, we receive

v(t) = e^{A(t−kT)} v_k + ∫_{kT}^{t} e^{A(t−τ)} B₂ u(τ) dτ + ∫_{kT}^{t} e^{A(t−τ)} B₁ x(τ) dτ

with the notation v_k = v(kT). Using (7.7), this equation can be written as

v(t) = e^{A(t−kT)} v_k + ∫_{kT}^{t} e^{A(t−τ)} m(τ − kT) dτ B₂ ψ_k + ∫_{kT}^{t} e^{A(t−τ)} B₁ x(τ) dτ .      (7.11)

Assuming

t = kT + ε ,   τ = kT + σ ,   0 ≤ ε ≤ T ,   0 ≤ σ ≤ T ,

after some transformations in (7.11), we get

v_k(ε) = e^{Aε} v_k + ∫_0^{ε} e^{A(ε−σ)} m(σ) dσ B₂ ψ_k + ∫_0^{ε} e^{A(ε−σ)} B₁ x(kT + σ) dσ      (7.12)

with the notation

v_k(ε) = v(kT + ε) .      (7.13)

Moreover, using (7.10), we obtain

y_k(ε) = y(kT + ε) = C₂ v_k(ε) ,   0 ≤ ε ≤ T ,      (7.14)

z_k(ε) = z(kT + ε) = C₁ v_k(ε) + D_L m(ε) ψ_k ,
                                                       0 < ε < T .      (7.15)
u_k(ε) = u(kT + ε) = m(ε) ψ_k .

Taken in the aggregate, the difference equations (7.12) and the equations of
the digital controller (7.4)–(7.7) will be called a parametric discrete model of
the standard sampled-data system. It can easily be shown that the continuous-
time model (7.10), (7.4)–(7.7) and the parametric discrete model (7.12), (7.4)–
(7.7) of the standard sampled-data system are equivalent, i.e., if the set of
functions y(t), z(t), and u(t) and the sequence {ψ_k} satisfy the equations of the
continuous-time model, then the set of sequences {v_k(ε)}, {z_k(ε)}, {u_k(ε)},
and {ψ_k} are a solution of the parametric discrete model, and vice versa.

3. Henceforth, we assume that the function m(t) is piecewise smooth. Then
under the given assumptions, the solutions v(t) and y(t) are continuous with
respect to t. Therefore, setting ε = T in (7.12) and (7.14), we find

v_{k+1} = e^{AT} v_k + e^{AT} μ(A) B₂ ψ_k + g_k ,
                                                                   (7.16)
y_k = C₂ v_k

with the notation

g_k = ∫_0^{T} e^{A(T−σ)} B₁ x(kT + σ) dσ ,

μ(A) = ∫_0^{T} e^{−Aσ} m(σ) dσ .      (7.17)

Then we supplement (7.16) with the equations of the digital controller as
forward model (7.5):

α₀ ψ_{k+ρ} + … + α_ρ ψ_k = β₀ y_{k+ρ} + … + β_ρ y_k .      (7.18)

Taken in the aggregate, Equations (7.16)–(7.18) form a system of difference
equations, which will be called a discrete forward model of the standard
sampled-data system.
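For a zero-order hold (m(t) ≡ 1) and no exogenous input (g_k = 0), the transition part of (7.16) reduces to the classical discretisation v_{k+1} = e^{AT} v_k + (∫_0^T e^{Ar} dr) B₂ ψ_k, since e^{AT} μ(A) = ∫_0^T e^{A(T−σ)} dσ. The following sketch (matrices and step counts are our own choices) checks one step of this recursion against fine-grained integration of dv/dt = A v + B₂ ψ_k:

```python
# One-step check of the ZOH discrete model against direct integration.
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def mat_series(A, T, shift, terms=30):
    # shift = 0: e^{AT} = sum_n A^n T^n / n!
    # shift = 1: int_0^T e^{Ar} dr = sum_n A^n T^{n+1} / (n+1)!
    S = [[0.0, 0.0], [0.0, 0.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]          # running power A^n
    for n in range(terms):
        c = T ** (n + shift) / math.factorial(n + shift)
        for i in range(2):
            for j in range(2):
                S[i][j] += c * P[i][j]
        P = mat_mul(P, A)
    return S

A   = [[0.0, 1.0], [-2.0, -3.0]]          # example matrix of our choosing
B2  = [1.0, 0.0]
T, psi = 0.5, 0.8
v0  = [1.0, -1.0]

Phi = mat_series(A, T, 0)                 # e^{AT}
Gam = mat_series(A, T, 1)                 # int_0^T e^{Ar} dr
v1  = [mat_vec(Phi, v0)[i] + psi * mat_vec(Gam, B2)[i] for i in range(2)]

# reference: RK4 integration of dv/dt = A v + B2 * psi over one period
def rhs(v):
    Av = mat_vec(A, v)
    return [Av[i] + B2[i] * psi for i in range(2)]

v, N = v0[:], 2000
h = T / N
for _ in range(N):
    k1 = rhs(v)
    k2 = rhs([v[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = rhs([v[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = rhs([v[i] + h * k3[i] for i in range(2)])
    v = [v[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

err = max(abs(v1[i] - v[i]) for i in range(2))
print(err < 1e-8)
```

The closed-form step and the integrated trajectory coincide at t = T, which is exactly the content of the discrete forward model for this special case.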

4. Without loss of generality, we assume that Equation (7.18) is row reduced.
Then, it can be shown that the system (7.16)–(7.18) is row reduced. A proof
of this fact will be given in Section 8.8. In this case, using Formulae (5.102),
(5.103), and (7.6), we can obtain a discrete backward model of the standard
sampled-data system

v_k = e^{AT} v_{k−1} + e^{AT} μ(A) B₂ ψ_{k−1} + g_{k−1} ,
y_k = C₂ v_k ,      (7.19)
ᾱ₀ ψ_k + … + ᾱ_ρ ψ_{k−ρ} = β̄₀ y_k + … + β̄_ρ y_{k−ρ} .

It can easily be shown that the discrete backward model (7.19) together with
(7.12) is equivalent to the original continuous-time model (7.10), (7.4)–(7.7),
i.e., if a set of sequences {v_k(ε)}, {y_k(ε)}, {z_k(ε)} satisfies Equations (7.12),
(7.19), then the functions

v(t) = v_k(t − kT) ,   y(t) = y_k(t − kT) ,   kT ≤ t ≤ (k+1)T ,
u(t) = u_k(t − kT) ,   z(t) = z_k(t − kT) ,   kT < t < (k+1)T

determine a solution of the continuous-time model, and vice versa.

7.3 Parametric Transfer Matrix (PTM)

1. The approach to a mathematical description of the standard system de-
scribed in Sections 7.1 and 7.2 makes it possible to analyse the processes either
in continuous time t or in discrete time t_k = kT. These methods are simi-
lar to the description of continuous-time and discrete LTI systems by means
of differential and difference equations, respectively. In this section we intro-
duce a novel characteristic of the standard sampled-data system, describing
its properties in the frequency domain. In this sense, this characteristic is a
counterpart of the classical concept of the transfer function (matrix) used in the
theory of LTI systems.

2. Consider the following auxiliary problem. Assume in Fig. 7.3

x(t) = e^{st} I_ℓ ,      (7.20)

where s is a complex parameter. We search for a solution of the operator model
(7.3)–(7.7) such that all signals in Fig. 7.3 are exponential-periodic functions
with exponent s and period equal to the sampling period T. In particular,
this means

y(t) = y_T(s, t) e^{st} ,   z(t) = z_T(s, t) e^{st} ,   u(t) = u_T(s, t) e^{st} ,      (7.21)

where

y_T(s, t) = y_T(s, t + T) ,   z_T(s, t) = z_T(s, t + T) ,   u_T(s, t) = u_T(s, t + T)      (7.22)

are matrices of dimensions n × ℓ, r × ℓ, and m × ℓ, respectively. The matrices
(7.22) hereinafter will be called the parametric transfer matrices (PTM) of
the standard sampled-data system from the input x to the outputs y, z, and
u, respectively.
In this section, we present a formal method for constructing the PTM
(7.22) on the basis of the stroboscopic property.

3. Henceforth for the PTM (7.22), we shall use the following special notation:

y_T(s, t) = w_{yx}(s, t) ,   z_T(s, t) = w_{zx}(s, t) ,   u_T(s, t) = w_{ux}(s, t) .

Let us begin with the matrix w_{yx}(s, t). First of all, we notice that from the
strict properness of the matrix N(s) and Fig. 7.2, it follows that the PTM
w_{yx}(s, t) is continuous in t. Therefore, using the stroboscopic property, it can
be assumed that the input of the ADC is acted upon by the exponential matrix
signal

y(s, t) = w_{yx}(s, 0) e^{st} .

Consider the open-loop system shown in Fig. 7.3. Using (6.41)–(6.45), we find

Fig. 7.3. Digital controller and continuous-time plant (input w_{yx}(s, 0)e^{st} into ADC → ALG → DAC → N(p), output y₂)

the exp.per. output

y₂(t) = φ_N(T, s, t) w_d(s) w_{yx}(s, 0) e^{st} ,      (7.23)

where

φ_N(T, s, t) = (1/T) Σ_{k=−∞}^{∞} N(s + kjω) μ(s + kjω) e^{kjωt}

and

w_d(s) = [α*(s)]^{−1} β*(s) ,

where

α*(s) = α₀ + α₁ e^{−sT} + … + α_ρ e^{−ρsT} ,
                                                                   (7.24)
β*(s) = β₀ + β₁ e^{−sT} + … + β_ρ e^{−ρsT} .

From Fig. 7.2, it follows that

y₁(t) = M(s) e^{st} ,

so we obtain

y(t) = y₁(t) + y₂(t) = φ_N(T, s, t) w_d(s) w_{yx}(s, 0) e^{st} + M(s) e^{st} .

Equating the expressions for y(t) here and in (7.21), we get

w_{yx}(s, t) = φ_N(T, s, t) w_d(s) w_{yx}(s, 0) + M(s) .      (7.25)

Since the matrix φ_N(T, s, t) is continuous with respect to t, we can take
t = 0, so that

w_{yx}(s, 0) = D_N(T, s, 0) w_d(s) w_{yx}(s, 0) + M(s) ,      (7.26)

where we used the fact that, due to (6.66) and (6.67), the following equality
holds:

φ_N(T, s, 0) = (1/T) Σ_{k=−∞}^{∞} N(s + kjω) μ(s + kjω) = D_N(T, s, 0) .      (7.27)
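Equality (7.27) can be illustrated numerically. For a zero-order hold and the scalar first-order lag N(s) = 1/(s + a) (our choice of example, not from the text), the series value D_N(T, s, 0) should coincide with the classical ZOH pulse transfer function (1 − e^{−aT}) / (a (ζ − e^{−aT})) evaluated at ζ = e^{sT}:

```python
# Truncated series for D_N(T, s, 0) vs. the classical ZOH discretisation
# of N(s) = 1/(s + a); w = 2*pi/T.
import cmath, math

a, T = 1.0, 1.0
w = 2 * math.pi / T
s = 0.2 + 0.1j

mu0 = lambda z: (1 - cmath.exp(-z * T)) / z   # zero-order hold
N   = lambda z: 1.0 / (z + a)                 # first-order lag

K = 20000
series = sum(N(s + 1j * k * w) * mu0(s + 1j * k * w)
             for k in range(-K, K + 1)) / T

zeta = cmath.exp(s * T)
closed = (1 - math.exp(-a * T)) / (a * (zeta - math.exp(-a * T)))
print(abs(series - closed))                   # small residual
```

Since the summands decay like 1/k², the truncated series converges absolutely and the residual is already negligible for moderate K.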

From (7.26), it follows that

w_{yx}(s, 0) = [I_n − D_N(T, s, 0) w_d(s)]^{−1} M(s) .      (7.28)

Substituting (7.28) into (7.25), we find the required PTM

w_{yx}(s, t) = φ_N(T, s, t) R_N(s) M(s) + M(s)

with the notation

R_N(s) = w_d(s) [I_n − D_N(T, s, 0) w_d(s)]^{−1} .
Fig. 7.4. Digital control unit and continuous-time plant (input w_{yx}(s, 0)e^{st} into ADC → ALG → DAC → L(p), output z₂)

4. In order to find the PTM w_{zx}(s, t), let us consider the open-loop system
shown in Fig. 7.4. Similarly to (7.23), we construct the exp.per. output

z₂(t) = φ_L(T, s, t) w_d(s) w_{yx}(s, 0) e^{st} ,

where

φ_L(T, s, t) = (1/T) Σ_{k=−∞}^{∞} L(s + kjω) μ(s + kjω) e^{kjωt} .      (7.29)

From Fig. 7.2, it follows that

z₁(t) = K(s) e^{st} .

Then with regard to (7.28), we obtain

z(t) = z₁(t) + z₂(t) = φ_L(T, s, t) R_N(s) M(s) e^{st} + K(s) e^{st} .

Using (7.21), we obtain the required PTM from the input x to the output z:

w_{zx}(s, t) = φ_L(T, s, t) R_N(s) M(s) + K(s) .      (7.30)

5. Similar calculations provide also

w_{ux}(s, t) = φ_μ(T, s, t) R_N(s) M(s) ,

where φ_μ(T, s, t) is the function (6.39).

6. It should be noted that the standard system shown in Fig. 7.2 is fairly
general. It can describe any sampled-data system containing, except for
continuous-time LTI units, a single digital controller (7.4)–(7.7). Neverthe-
less, systems encountered in applications often are not given in the standard
form. Then, for obtaining the latter, some structural transformations are
needed. At the same time, there exists another way to construct the standard
system for a given structure. For this purpose, we assume that the input of the
system at hand is acted upon by an exponential signal (7.20), and all signals
in the system are exponential periodic with exponent s and period T. Then
using the stroboscopic property, the exp.per. system output z(t) can always
be found in the form

z(t) = e^{st} w_{zx}(s, t) ,   w_{zx}(s, t) = w_{zx}(s, t + T) ,

where w_{zx}(s, t) is the PTM from the input x to the output z. Comparing
this expression for w_{zx}(s, t) with the general formula (7.30), we can always
find the matrices K(s), L(s), M(s), and N(s) associated with the equivalent
standard system.

Example 7.1. To illustrate the above approach, we consider the single-loop


system shown in Fig. 7.5, where the n m matrix G(p) is strictly proper. To

x u z
-g - C - G(p) -
6

Fig. 7.5. Single-loop digital system

nd the PTM wzx (s, t), we take

x(t) = est In , z(t) = wzx (s, t)est , wzx (s, t) = wzx (s, t + T ) , (7.31)

where the matrix $w_{zx}(s,t)$ is continuous with respect to t. Then, using the stroboscopic property, consider the open-loop system shown in Fig. 7.6.

[Fig. 7.6. Open-loop system for Fig. 7.5: the summing junction is driven by the input $e^{st}I_n$ and by the signal $w_{zx}(s,0)e^{st}$, followed by C and G(p).]

The exp.per. output of this open-loop system has the form


 
$$z(t) = \varphi_G(T,s,t)w_d(s)e^{st} + \varphi_G(T,s,t)w_d(s)w_{zx}(s,0)e^{st}.$$

Equating this expression for z(t) with (7.31), we find

$$w_{zx}(s,t) = \varphi_G(T,s,t)w_d(s)w_{zx}(s,0) + \varphi_G(T,s,t)w_d(s). \qquad (7.32)$$

Assuming t = 0 in (7.32), with the help of (7.27), we obtain

$$w_{zx}(s,0) = \left[I_n - D_G(T,s,0)w_d(s)\right]^{-1} D_G(T,s,0)w_d(s).$$

Substituting this into (7.32) yields

$$w_{zx}(s,t) = \varphi_G(T,s,t)w_d(s)\left(\left[I_n - D_G(T,s,0)w_d(s)\right]^{-1} D_G(T,s,0)w_d(s) + I_n\right).$$

After simplification, we obtain

$$w_{zx}(s,t) = \varphi_G(T,s,t)R_G(s), \qquad (7.33)$$

where $R_G(s) = w_d(s)\left[I_n - D_G(T,s,0)w_d(s)\right]^{-1}$ is constructed similarly to (7.42). Comparing (7.33) with (7.30), we find that in this example

$$K(s) = O_{nn}, \quad M(s) = I_n, \quad N(s) = L(s) = G(s),$$

i.e., Matrix (7.2) has the form

$$w(p) = \begin{pmatrix} O_{nn} & G(p) \\ I_n & G(p) \end{pmatrix}.$$
Multiplying both sides of (7.33) by $e^{st}$, with respect to (6.68), we find

$$\tilde w_d(s,t) = e^{st}w_{zx}(s,t) = D_G(T,s,t)w_d(s)\left[I_n - D_G(T,s,0)w_d(s)\right]^{-1},$$

where

$$D_G(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} G(s+kj\omega)\,\mu(s+kj\omega)\,e^{(s+kj\omega)t}, \qquad \omega = \frac{2\pi}{T},$$
is the DLT of the matrix $G(s)\mu(s)$. It can easily be verified that for $t = \tau$, $0 \le \tau \le T$, the matrix $\tilde w_d(s,\tau)$ determines the transfer matrix of the discrete system in Fig. 7.5 in the sense of the modified discrete Laplace transformation [177].
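The discrete transfer matrix obtained here can be checked numerically. The following sketch is my own illustration (not code from the book); it assumes a zero-order hold, $m(t) \equiv 1$, and verifies that $\zeta\,C\,(I - \zeta e^{AT})^{-1}e^{AT}\mu(A)B$ with $\zeta = z^{-1}$ coincides with the classical pulse transfer function $C(zI - \Phi)^{-1}\Gamma$ of the ZOH discretisation; the plant data are hypothetical.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical plant data (not from the book): a stable 2x2 example.
T = 0.5
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

Phi = expm(A * T)                      # e^{AT}
# mu(A) = int_0^T e^{-A tau} m(tau) dtau with ZOH pulse m(tau) = 1.
# Computed via the block-matrix trick: expm([[-A, I],[0, 0]]*T) carries
# int_0^T e^{-A tau} dtau in its upper-right block.
n = A.shape[0]
M = np.zeros((2 * n, 2 * n))
M[:n, :n] = -A
M[:n, n:] = np.eye(n)
mu = expm(M * T)[:n, n:]               # int_0^T e^{-A tau} dtau

z = 1.7 + 0.3j                         # arbitrary test point, |z| > 1
zeta = 1.0 / z

lhs = zeta * C @ np.linalg.inv(np.eye(n) - zeta * Phi) @ Phi @ mu @ B
Gamma = Phi @ mu @ B                   # classical ZOH input matrix
rhs = C @ np.linalg.inv(z * np.eye(n) - Phi) @ Gamma
print(np.allclose(lhs, rhs))           # the two expressions coincide
```

The equality is exact algebra, since $\Gamma = e^{AT}\mu(A)B$ for a zero-order hold and $\zeta(I-\zeta\Phi)^{-1} = (zI-\Phi)^{-1}$.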

7. At the same time, if $M(s) = O$, then the standard sampled-data system with the input x and output z reduces to the continuous-time LTI system shown in Fig. 7.7.

[Fig. 7.7. Continuous-time system as a standard sampled-data system: x → K(s) → z.]

The general Formula (7.30) yields

$$w(s,t) = K(s).$$

Therefore, in the special cases of a continuous-time LTI system in Fig. 7.7 or a discrete-time system in Fig. 7.5, the PTM $w_{zx}(s,t)$ reduces to the classical frequency-domain descriptions of such systems.

8. The method for constructing the PTM described in this section is fairly general. In fact, it does not exploit the fact that the matrix w(p) is rational. All the above still holds when we assume that the matrices K(p), L(p), M(p), N(p) are transfer matrices of some linear stationary operators such that the series $\varphi_L(T,s,t)$ and $\varphi_N(T,s,t)$ converge and the latter sum is continuous with respect to t. As a special case, this method can be used for constructing the PTM of a standard system with pure-delay elements.
Example 7.2. Consider the system with delayed feedback shown in Fig. 7.8.

[Fig. 7.8. Closed-loop sampled-data system with delayed feedback: the input x and the controller output u, passed through G(p), are summed and enter F(p), producing z; the feedback y is formed through $Q_1(p) = Q(p)e^{-p\tau}$ and returned to the controller C.]

Using the techniques described in detail in Chapter 9, we find the PTM

$$w_{zx}(s,t) = \varphi_{FG}(T,s,t)w_d(s)\left[I - D_{QFG}(T,s,-\tau)w_d(s)\right]^{-1}Q(s)F(s)e^{-s\tau} + F(s)$$

associated with the transfer matrix of the LTI plant

$$w(p) = \begin{pmatrix} F(p) & F(p)G(p) \\ Q(p)F(p)e^{-p\tau} & Q(p)F(p)G(p)e^{-p\tau} \end{pmatrix}.$$

7.4 PTM as Function of the Argument s


1. In this section, we construct a general representation for the PTM
wzx (s, t) on the basis of the parametric discrete model of the standard
sampled-data system. The general expression for the PTM makes it possible
to completely characterise the set of singular points of the matrix wzx (s, t) as
a function of the complex variable s.

2. The following theorem provides such a representation.


Theorem 7.3. Suppose that the matrices K(p) and M(p) are strictly proper and the LTI plant of the standard sampled-data system is given by the state equations

$$\frac{dv}{dt} = Av + B_1x + B_2u, \qquad z = C_1v + D_Lu, \quad y = C_2v \qquad (7.34)$$

with a matrix A. Let the digital controller be given by the equations

$$\xi_k = y_k = y(kT), \qquad (7.35)$$
$$\alpha_0\psi_k + \ldots + \alpha_\rho\psi_{k-\rho} = \beta_0\xi_k + \ldots + \beta_\rho\xi_{k-\rho}, \qquad (7.36)$$
$$u(t) = m(t-kT)\psi_k, \quad kT < t < (k+1)T. \qquad (7.37)$$
Introduce the matrix

$$Q(s,\alpha,\beta) = \begin{pmatrix} I - e^{-sT}e^{AT} & O & -e^{-sT}e^{AT}\mu(A)B_2 \\ -C_2 & I_n & O_{nm} \\ O_m & -\tilde\beta(s) & \tilde\alpha(s) \end{pmatrix}, \qquad (7.38)$$

where $\tilde\alpha(s)$ and $\tilde\beta(s)$ are the matrices (7.24). Then for any s with

$$\det Q(s,\alpha,\beta) \ne 0, \qquad (7.39)$$

there exists a unique solution of Equations (7.34)-(7.37) satisfying Conditions (7.21)-(7.22). Moreover, for 0 < t < T, we have

$$w_{zx}(s,t) = e^{-st}\Bigl[C_1e^{At}v_0(s) + C_1\int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2\,\psi_0(s)\Bigr] + C_1\int_0^t e^{(A-sI)(t-\tau)}\,d\tau\,B_1 + D_L\,e^{-st}m(t)\psi_0(s), \qquad (7.40)$$

where the matrices $v_0(s)$ and $\psi_0(s)$ are given by the equation

$$Q(s,\alpha,\beta)\begin{pmatrix} v_0(s) \\ y_0(s) \\ \psi_0(s) \end{pmatrix} = R(s) \qquad (7.41)$$

and the matrix R(s) has the form

$$R(s) = \begin{pmatrix} (sI-A)^{-1}\left(I - e^{-sT}e^{AT}\right)B_1 \\ O \\ O \end{pmatrix}. \qquad (7.42)$$

Proof. The proof is given in several stages.



a) First of all, we construct the parametric discrete forward model of Equations (7.34)-(7.37) for the input (7.20). With this aim in view, let $x(t) = e^{st}I$ in (7.12). As a result, after integration we obtain

$$v_k(\tau) = e^{A\tau}v_k + \int_0^\tau e^{A(\tau-\lambda)}m(\lambda)\,d\lambda\,B_2\,\psi_k + e^{ksT}G(s,\tau) \qquad (7.43)$$

with

$$G(s,\tau) = e^{A\tau}\int_0^\tau e^{(sI-A)\lambda}\,d\lambda\,B_1 = (sI-A)^{-1}\left(e^{s\tau}I - e^{A\tau}\right)B_1. \qquad (7.44)$$

For $\tau = T$, from (7.43) we find

$$v_{k+1} = e^{AT}v_k + e^{AT}\int_0^T e^{-A\lambda}m(\lambda)\,d\lambda\,B_2\,\psi_k + e^{ksT}G(s,T), \qquad (7.45)$$

where

$$G(s,T) = (sI-A)^{-1}\left(e^{sT}I - e^{AT}\right)B_1.$$

Combining (7.45) with the equations of the digital controller and using (7.17), we find the discrete backward model of the standard sampled-data system for the input (7.20):

$$v_k = e^{AT}v_{k-1} + e^{AT}\mu(A)B_2\,\psi_{k-1} + e^{(k-1)sT}G(s,T),$$
$$y_k = C_2v_k, \qquad (7.46)$$
$$\alpha_0\psi_k + \ldots + \alpha_\rho\psi_{k-\rho} = \beta_0y_k + \ldots + \beta_\rho y_{k-\rho}.$$
The discrete model (7.46), together with (7.43), determines the backward parametric discrete model for the input (7.20).
b) If a solution of the continuous-time model (7.34)-(7.37) satisfies Conditions (7.21) and (7.22), then it is associated with discrete sequences

$$\{v(s)\} = \{\ldots, v_{-1}(s), v_0(s), v_1(s), \ldots\},$$
$$\{y(s)\} = \{\ldots, y_{-1}(s), y_0(s), y_1(s), \ldots\},$$
$$\{\psi(s)\} = \{\ldots, \psi_{-1}(s), \psi_0(s), \psi_1(s), \ldots\}$$

that determine a solution of Equations (7.46) and satisfy the conditions

$$v_k(s) = e^{ksT}v_0(s), \quad y_k(s) = e^{ksT}y_0(s), \quad \psi_k(s) = e^{ksT}\psi_0(s),$$

where $v_0(s)$, $y_0(s)$, $\psi_0(s)$ are unknown matrices to be found. Substituting these relations into (7.46), we obtain

$$v_0(s) = e^{-sT}e^{AT}v_0(s) + e^{-sT}e^{AT}\mu(A)B_2\,\psi_0(s) + (sI-A)^{-1}\left(I - e^{-sT}e^{AT}\right)B_1,$$
$$y_0(s) = C_2v_0(s), \qquad (7.47)$$
$$\tilde\alpha(s)\psi_0(s) = \tilde\beta(s)y_0(s).$$

This system can be written in the form of the linear system of equations (7.41), which has a unique solution under Condition (7.39).
c) Now we prove (7.40). From the first equation in (7.47), we find

$$v_0(s) = \left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)B_2\,\psi_0(s) + (sI-A)^{-1}B_1. \qquad (7.48)$$

Moreover, (7.47) yields

$$\psi_0(s) = \tilde\alpha^{-1}(s)\tilde\beta(s)y_0(s) = w_d(s)y_0(s). \qquad (7.49)$$

Substituting (7.49) into (7.48), we find

$$v_0(s) = \left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)B_2\,w_d(s)y_0(s) + (sI-A)^{-1}B_1. \qquad (7.50)$$

Multiplying this from the left by $C_2$, we obtain

$$y_0(s) = C_2\left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)B_2\,w_d(s)y_0(s) + C_2(sI-A)^{-1}B_1. \qquad (7.51)$$

But from (6.89), (6.100), and (7.9), it follows that

$$C_2\left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)B_2 = \varphi_N(T,s,0) = D_N(T,s,0), \qquad C_2(sI-A)^{-1}B_1 = M(s).$$

Thus, Equation (7.51) can be written in the form

$$y_0(s) = D_N(T,s,0)w_d(s)y_0(s) + M(s),$$

whence

$$y_0(s) = \left[I_n - D_N(T,s,0)w_d(s)\right]^{-1}M(s).$$

Then, using (7.49), we obtain

$$\psi_0(s) = w_d(s)\left[I_n - D_N(T,s,0)w_d(s)\right]^{-1}M(s). \qquad (7.52)$$

For further transformations, we notice that for $0 \le t \le T$, Equations (7.43) and (7.44) yield

$$v(t) = e^{At}v_0(s) + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2\,\psi_0(s) + \int_0^t e^{A(t-\tau)}e^{s\tau}\,d\tau\,B_1.$$

Therefore, the PTM $w_{vx}(s,t)$ with respect to the output v is given for $0 \le t \le T$ by

$$w_{vx}(s,t) = v(t)e^{-st} = e^{-st}\Bigl[e^{At}v_0(s) + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2\,\psi_0(s)\Bigr] + \int_0^t e^{(A-sI)(t-\tau)}\,d\tau\,B_1. \qquad (7.53)$$

Since for 0 < t < T, we have

$$w_{zx}(s,t) = C_1w_{vx}(s,t) + D_Le^{-st}m(t)\psi_0(s),$$

a substitution of (7.53) into this equation gives (7.40).


d) It remains to prove that for 0 < t < T, Formula (7.40) can be reduced to the form (7.30). Obviously, for 0 < t < T, we have

$$z(t) = C_1v(t) + D_Lm(t)\psi_0(s),$$

which gives, with respect to (7.43) and (7.44),

$$z(t) = C_1\Bigl[e^{At}v_0(s) + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2\,\psi_0(s)\Bigr] + D_Lm(t)\psi_0(s) + C_1(sI-A)^{-1}\left(e^{st}I - e^{At}\right)B_1. \qquad (7.54)$$

Using (7.48), after simplification we find

$$z(t) = C_1\Bigl[e^{At}\mu(A)\left(e^{sT}e^{-AT} - I\right)^{-1} + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\Bigr]B_2\,\psi_0(s) + D_Lm(t)\psi_0(s) + e^{st}C_1(sI-A)^{-1}B_1.$$

Multiplying by $e^{-st}$ and using the fact that for 0 < t < T

$$e^{-st}C_1\Bigl[e^{At}\mu(A)\left(e^{sT}e^{-AT} - I\right)^{-1} + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\Bigr]B_2 + D_Le^{-st}m(t) = \varphi_L(T,s,t)$$

and, moreover, $C_1(sI-A)^{-1}B_1 = K(s)$, we obtain

$$w_{zx}(s,t) = \varphi_L(T,s,t)\psi_0(s) + K(s).$$

Using (7.52), we get (7.30).

3. Using Theorem 7.3, some general properties of the singular points of the PTM $w_{zx}(s,t)$ can be investigated.

Theorem 7.4. Under the conditions of Theorem 7.3, the PTM $w_{zx}(s,t)$ given by (7.30) is a meromorphic function of the argument s, i.e., all its singular points are poles. For any t, the set of poles of $w_{zx}(s,t)$ belongs to the set of roots of the equation

$$\Delta(s) = \det Q(s,\alpha,\beta) = 0, \qquad (7.55)$$

where $Q(s,\alpha,\beta)$ is Matrix (7.38).

Proof. From (7.41), we have

$$\begin{pmatrix} v_0(s) \\ y_0(s) \\ \psi_0(s) \end{pmatrix} = Q^{-1}(s,\alpha,\beta)R(s). \qquad (7.56)$$

The matrix R(s) determined by (7.42) is an entire function of the argument s. This follows from the fact that the matrix

$$R_1(s,A) = (sI-A)^{-1}\left(I - e^{-sT}e^{AT}\right)$$

is entire, because it is generated from the entire scalar function

$$R_1(s,a) = \frac{e^{-sT}\left(e^{sT} - e^{aT}\right)}{s-a},$$

where the scalar parameter a is replaced by the matrix A. Therefore, from (7.56) it follows that the matrices $v_0(s)$, $y_0(s)$, and $\psi_0(s)$ are meromorphic functions, and their poles belong to the set of roots of Equation (7.55). From (7.54), for 0 < t < T we have

$$w_{zx}(s,t) = e^{-st}C_1e^{At}v_0(s) + \Bigl[e^{-st}C_1\int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2 + D_Le^{-st}m(t)\Bigr]\psi_0(s) + e^{-st}C_1(sI-A)^{-1}\left(e^{st}I - e^{At}\right)B_1. \qquad (7.57)$$

The coefficients of $v_0(s)$ and $\psi_0(s)$, as well as the last term on the right-hand side of (7.57), are entire functions of s. Therefore, the claim of the theorem follows for 0 < t < T from the already proved properties of the matrices $v_0(s)$ and $\psi_0(s)$. Since $w_{zx}(s,t) = w_{zx}(s,t+T)$, this result holds for all t.

4. Introduce the polynomial matrices

$$\bar\alpha(\zeta) = \tilde\alpha(s)\big|_{e^{-sT}=\zeta} = \alpha_0 + \alpha_1\zeta + \ldots + \alpha_\rho\zeta^\rho, \qquad \bar\beta(\zeta) = \tilde\beta(s)\big|_{e^{-sT}=\zeta} = \beta_0 + \beta_1\zeta + \ldots + \beta_\rho\zeta^\rho,$$

$$\bar Q(\zeta,\alpha,\beta) = \begin{pmatrix} I - \zeta e^{AT} & O & -\zeta e^{AT}\mu(A)B_2 \\ -C_2 & I_n & O_{nm} \\ O_m & -\bar\beta(\zeta) & \bar\alpha(\zeta) \end{pmatrix} \qquad (7.58)$$

and the polynomial

$$\bar\Delta(\zeta) = \Delta(s)\big|_{e^{-sT}=\zeta} = \det \bar Q(\zeta,\alpha,\beta), \qquad (7.59)$$

which is called the characteristic polynomial of the standard sampled-data system.

Theorem 7.5. Let $\zeta_1, \ldots, \zeta_q$ be all the different roots of the polynomial $\bar\Delta(\zeta)$. Then for any t, the set of poles of $w_{zx}(s,t)$ belongs to the set of numbers

$$s_{in} = -\frac{1}{T}\ln\zeta_i + \frac{2n\pi j}{T}, \quad (i = 1,\ldots,q;\ n = 0,\pm1,\ldots). \qquad (7.60)$$

Proof. The matrix $Q(s,\alpha,\beta)$ in (7.38) depends on s only through $e^{-sT}$. Therefore, substituting $\zeta$ for $e^{-sT}$ in (7.38), we obtain the polynomial matrix (7.58). The poles of the matrix $\bar Q^{-1}(\zeta,\alpha,\beta)$ coincide with the roots of the polynomial $\bar\Delta(\zeta)$, which are related to the poles of the matrices $v_0(s)$, $y_0(s)$, and $\psi_0(s)$ by Equations (7.60). Due to (7.57), the same is valid for the poles of the PTM $w_{zx}(s,t)$.
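The correspondence between the roots of the characteristic polynomial and the pole set is easy to apply numerically. The sketch below is my own illustration (not from the book): it takes the roots $\zeta_i$ of a hypothetical characteristic polynomial in the backward-shift variable, forms the primary poles $s_i = -(1/T)\ln\zeta_i$, and checks that adding $2\pi j/T$ reproduces the same root, so a root with $|\zeta_i| > 1$ yields poles in the open left half-plane.

```python
import numpy as np

T = 1.0
# Hypothetical characteristic polynomial Delta(zeta) = 1 - 0.5*zeta,
# with the single root zeta_1 = 2 (outside the closed unit disk).
coeffs = [-0.5, 1.0]            # highest power first, as np.roots expects
zetas = np.roots(coeffs)        # -> array([2.])

# Primary poles s_i = -(1/T)*ln(zeta_i); the full pole set adds 2*pi*j*n/T.
poles = [-np.log(complex(zt)) / T for zt in zetas]
for zt, s_i in zip(zetas, poles):
    # |zeta_i| > 1  <=>  Re s_i < 0 (pole in the open left half-plane)
    print(zt, s_i, s_i.real < 0)

# Periodicity check: e^{-(s + 2*pi*j/T)*T} equals the same root zeta_i.
s_shift = poles[0] + 2j * np.pi / T
print(np.isclose(np.exp(-s_shift * T), zetas[0]))
```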

Remark 7.6. The standard sampled-data system (7.3)-(7.7) is associated with a set of continuous-time models with different realisations of the matrix w(p). Nevertheless, as follows from (7.30), the PTM $w_{zx}(s,t)$ is independent of the choice of this realisation, because for a given matrix w(p) it is uniquely determined by the matrices K(p), L(p), M(p), and N(p) appearing in (7.2). Hence it follows, in particular, that the set of poles of the PTM $w_{zx}(s,t)$ is independent of the choice of the realisation (A, B, C) in the standard form (7.8). Therefore, without loss of generality, it can be assumed that the realisation (A, B, C) is minimal, i.e., the pair (A, B) is controllable and the pair (A, C) is observable. This situation is similar to the case of an LTI system, where the uncontrollable and unobservable parts do not change the transfer function of the system.

7.5 Internal Stability of the Standard SD System

1. In this section, we investigate the stability of the standard sampled-data system, assuming that the continuous-time plant is given by the state equations (7.34). In this case, combining the equations of the digital controller with (7.34), we can write a continuous-time model of the standard sampled-data system in the form

$$\frac{dv}{dt} = Av + B_1x + B_2u, \qquad z = C_1v + D_Lu, \quad y = C_2v \qquad (7.61)$$

and

$$\alpha(\zeta)\psi_k = \beta(\zeta)y_k, \qquad (7.62)$$
$$u(t) = m(t-kT)\psi_k, \quad kT < t < (k+1)T, \qquad (7.63)$$

where

$$\alpha(\zeta) = \alpha_0 + \alpha_1\zeta + \ldots + \alpha_\rho\zeta^\rho, \qquad \beta(\zeta) = \beta_0 + \beta_1\zeta + \ldots + \beta_\rho\zeta^\rho \qquad (7.64)$$

are polynomial matrices.

2.
Definition 7.7. The standard sampled-data system (7.61)-(7.64) will be called internally stable if, for $x(t) \equiv 0$, any solution of Equations (7.61)-(7.64) satisfies, for t > 0 and k > 0, the estimates

$$\|v(t)\| < d_ve^{-\varepsilon t}, \quad \|u(t)\| < d_ue^{-\varepsilon t}, \quad \|\psi_k\| < d_\psi e^{-\varepsilon kT}, \qquad (7.65)$$

where $\|\cdot\|$ denotes any norm for number matrices, $d_v$, $d_u$, $d_\psi$, and $\varepsilon$ are positive constants, and $\varepsilon$ is independent of the initial conditions.

We note that under Conditions (7.65), the following estimates hold:

$$\|y(t)\| < d_ye^{-\varepsilon t}, \quad \|z(t)\| < d_ze^{-\varepsilon t} \qquad (7.66)$$

with positive constants $d_y$ and $d_z$. As follows from (7.57), the matrices $C_1$ and $D_L$ do not influence the internal stability of the standard system (7.61)-(7.64). In this section, we formulate necessary and sufficient conditions for the internal stability of the standard sampled-data system. For brevity, we shall also use the term stability when we mean internal stability.

If $x(t) \equiv 0$, the standard sampled-data system can be associated with the discrete backward model generated from (7.19) with $g_k = 0$:

$$v_k = e^{AT}v_{k-1} + e^{AT}\mu(A)B_2\,\psi_{k-1},$$
$$y_k = C_2v_k, \qquad (7.67)$$
$$\alpha_0\psi_k + \ldots + \alpha_\rho\psi_{k-\rho} = \beta_0y_k + \ldots + \beta_\rho y_{k-\rho}.$$

3.
Lemma 7.8. For the standard sampled-data system (7.61)-(7.64) to be internally stable, it is necessary and sufficient that the discrete backward model (7.67) be stable.

Proof. Necessity: Let the standard sampled-data system be stable. Then Estimates (7.65) and (7.66) hold. As a special case, for k > 0 we have

$$\|v(kT)\| = \|v_k\| < d_ve^{-\varepsilon kT}, \quad \|y(kT)\| = \|y_k\| < d_ye^{-\varepsilon kT}, \quad \|\psi_k\| < d_\psi e^{-\varepsilon kT}. \qquad (7.68)$$

With the notation $e^{-\varepsilon T} = \nu$, $|\nu| < 1$, we obtain

$$\|v_k\| < d_v\nu^k, \quad \|y_k\| < d_y\nu^k, \quad \|\psi_k\| < d_\psi\nu^k, \quad (k > 0). \qquad (7.69)$$



Since Conditions (7.69) hold for all solutions of Equations (7.67), the discrete model is stable by definition.

Sufficiency: Let the discrete model (7.67) be stable. Then we have Inequalities (7.69), which can be written in the form (7.68). Due to (7.12) and (7.13), we have

$$v(kT+\tau) = e^{A\tau}v_k + \int_0^\tau e^{A(\tau-\lambda)}m(\lambda)\,d\lambda\,B_2\,\psi_k$$

and, estimating by a norm, we obtain

$$\|v(kT+\tau)\| \le L_1\|v_k\| + L_2\|\psi_k\|, \qquad (7.70)$$

where

$$L_1 = \max_{0\le\tau\le T}\left\|e^{A\tau}\right\|, \qquad L_2 = \max_{0\le\tau\le T}\left\|\int_0^\tau e^{A(\tau-\lambda)}m(\lambda)\,d\lambda\,B_2\right\|$$

are constants. From (7.70) and (7.68), it follows that

$$\|v(t)\| \le Le^{-\varepsilon kT}, \quad kT \le t \le (k+1)T, \quad (k = 0,1,\ldots), \qquad (7.71)$$

where $L = L_1d_v + L_2d_\psi$ is a constant. From (7.71), the following estimate can easily be derived:

$$\|v(t)\| \le Le^{\varepsilon T}e^{-\varepsilon t}, \quad t > 0.$$

This relation and (7.61) yield (7.66).

4. Necessary and sufficient conditions for the internal stability of the system (7.61)-(7.64) are given by the following theorem.

Theorem 7.9. A necessary and sufficient condition for the standard sampled-data system (7.61)-(7.64) to be internally stable is that all roots s of the equation $\det Q(s,\alpha,\beta) = 0$, where

$$Q(s,\alpha,\beta) = \begin{pmatrix} I - e^{-sT}e^{AT} & O & -e^{-sT}e^{AT}\mu(A)B_2 \\ -C_2 & I_n & O_{nm} \\ O_m & -\tilde\beta(s) & \tilde\alpha(s) \end{pmatrix},$$

lie in the open left half-plane or, equivalently, that the polynomial matrix

$$\bar Q(\zeta,\alpha,\beta) = \begin{pmatrix} I - \zeta e^{AT} & O & -\zeta e^{AT}\mu(A)B_2 \\ -C_2 & I_n & O_{nm} \\ O_m & -\beta(\zeta) & \alpha(\zeta) \end{pmatrix} \qquad (7.72)$$

be stable, i.e., free of eigenvalues in the closed unit disk.

Proof. Due to Lemma 7.8, the standard sampled-data system is stable iff the discrete model (7.67) is stable. The latter can be written in the form of the homogeneous backward-difference equation

$$\bar Q(\zeta,\alpha,\beta)\begin{pmatrix} v_k \\ y_k \\ \psi_k \end{pmatrix} = O.$$

Then from Theorem 5.47, it follows that the discrete model (7.67) is stable iff Matrix (7.72) is stable. The claim regarding Matrix (7.72) follows from the equality

$$Q(s,\alpha,\beta) = \bar Q(\zeta,\alpha,\beta)\big|_{\zeta=e^{-sT}}.$$

Corollary 7.10. Any controller ensuring, under the given assumptions, the internal stability of the closed-loop system is causal, i.e., $\det\alpha_0 \ne 0$, because for $\det\alpha_0 = 0$, Matrix (7.72) is unstable.

Corollary 7.11. Theorem 7.9 can be formulated in an alternative way: a necessary and sufficient condition for the standard sampled-data system to be stable is that its characteristic polynomial (7.59) be stable.
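For a concrete plant and controller, the internal-stability test can be carried out by checking the eigenvalues of the closed-loop transition matrix of the discrete model (7.67), which is equivalent to the determinant condition above. The sketch below is my own illustration, not code from the book: it assumes a zero-order hold ($m(t) \equiv 1$) and a static controller $\psi_k = -K y_k$, both hypothetical choices made for simplicity.

```python
import numpy as np
from scipy.linalg import expm

def closed_loop_radius(A, B2, C2, K, T):
    """Spectral radius of the discrete closed loop
    v_{k+1} = (Phi - Gamma K C2) v_k for a ZOH (m(t) = 1) and the
    static output feedback psi_k = -K y_k; radius < 1 means stable."""
    n = A.shape[0]
    # Phi = e^{AT}; Gamma = int_0^T e^{A tau} dtau B2, via a block expm:
    # expm([[A, I],[0, 0]]*T) carries int_0^T e^{A tau} dtau top-right.
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = A
    M[:n, n:] = np.eye(n)
    E = expm(M * T)
    Phi, Gamma = E[:n, :n], E[:n, n:] @ B2
    return max(abs(np.linalg.eigvals(Phi - Gamma @ K @ C2)))

# Unstable scalar plant dv/dt = v + u, y = v, sampled with T = 0.1.
A = np.array([[1.0]]); B2 = np.array([[1.0]]); C2 = np.array([[1.0]])
print(closed_loop_radius(A, B2, C2, np.array([[2.0]]), 0.1))  # < 1: stable
print(closed_loop_radius(A, B2, C2, np.array([[0.0]]), 0.1))  # > 1: unstable
```

In the backward variable $\zeta$ of (7.72), eigenvalues $z$ of the forward transition matrix correspond to roots $\zeta = 1/z$, so "spectral radius < 1" matches "no roots of $\det\bar Q$ in the closed unit disk".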

7.6 Polynomial Stabilisation of the Standard SD System


1. Hereinafter, without loss of generality, we assume $D_L = O_{rm}$. Then the matrix w(p) is strictly proper and admits a set of realisations (A, B, C).

Definition 7.12. A realisation (A, B, C) will be called stabilisable if there exists a controller $(\alpha(\zeta), \beta(\zeta))$ such that Matrix (7.72) is stable. Such a controller will be called a stabilising controller for the realisation (A, B, C).

In this section, we present solutions to the following problems:
a) construction of the set of stabilisable realisations;
b) construction of the set of stabilising controllers for a stabilisable realisation (A, B, C).

2. Firstly, we consider the above-mentioned problems for the closed loop incorporated in the standard system in Fig. 7.3. This loop is shown in Fig. 7.9, where the control signal u is related to y by (7.4)-(7.7).

Let $(\tilde A, \tilde B_2, \tilde C_2)$ be a realisation of the matrix N(p) in form of the state equations

$$\frac{d\tilde v}{dt} = \tilde A\tilde v + \tilde B_2 u, \qquad y = \tilde C_2\tilde v \qquad (7.73)$$

with state vector $\tilde v$ and constant matrices $\tilde A$, $\tilde B_2$, and $\tilde C_2$ of compatible dimensions. Then the closed loop will be called stable if, for the system of equations (7.73) and (7.62)-(7.64), estimates similar to (7.65) hold:

$$\|\tilde v(t)\| < d_ve^{-\varepsilon t}, \quad \|u(t)\| < d_ue^{-\varepsilon t}, \quad \|\psi_k\| < d_\psi e^{-\varepsilon kT}, \quad t > 0,\ k > 0.$$

[Fig. 7.9. Closed loop of the standard sampled-data system: the input x and the feedback enter N(p); its output y is sampled by the ADC, the resulting sequence is processed by the control algorithm ALG into $\{\psi\}$, and the DAC forms the control signal u.]

3. The following theorem presents a solution to the stabilisation problem for the closed loop, when the continuous-time plant is given by a minimal realisation.

Theorem 7.13. Let $(A^0, B_2^0, C_2^0)$ be a minimal realisation of the matrix N(p), and let the rational matrix $w_N(\zeta)$ be determined by

$$w_N(\zeta) = D_N(T,\zeta,0) = \zeta C_2^0 e^{A^0T}\mu(A^0)\left(I - \zeta e^{A^0T}\right)^{-1}B_2^0,$$

where

$$\mu(A^0) = \int_0^T e^{-A^0\tau}m(\tau)\,d\tau.$$

Let us have an ILMFD

$$w_N(\zeta) = a_N^{-1}(\zeta)b_N(\zeta) \qquad (7.74)$$

with polynomial matrices $a_N(\zeta)$ and $b_N(\zeta)$ of dimensions $n\times n$ and $n\times m$, respectively. Then the relation

$$\Delta_N(\zeta) = \frac{\det\left(I - \zeta e^{A^0T}\right)}{\det a_N(\zeta)} \qquad (7.75)$$

is a polynomial. For an arbitrary choice of the minimal realisation $(A^0, B_2^0, C_2^0)$ and the matrices $a_N(\zeta)$, $b_N(\zeta)$, the polynomials (7.75) are equivalent, i.e., they are equal up to a constant factor. A necessary and sufficient condition for the set of minimal realisations of the matrix N(p) to be stabilisable is that the polynomial (7.75) be stable, i.e., free of roots inside the closed unit disk. If the polynomial $\Delta_N(\zeta)$ is stable and $(a_N(\zeta), b_N(\zeta))$ is an arbitrary pair forming the ILMFD (7.74), then the set of all controllers $(\alpha(\zeta), \beta(\zeta))$ stabilising the minimal realisations of the matrix N(p) is defined as the set of all pairs ensuring the stability of the matrix

$$Q_N(\zeta,\alpha,\beta) = \begin{pmatrix} a_N(\zeta) & b_N(\zeta) \\ \beta(\zeta) & \alpha(\zeta) \end{pmatrix}. \qquad (7.76)$$

Proof. Using an arbitrary minimal realisation $(A^0, B_2^0, C_2^0)$ and (7.62)-(7.64), we arrive at the problem of investigating the stability of the system

$$\frac{dv^0}{dt} = A^0v^0 + B_2^0u, \qquad y = C_2^0v^0,$$
$$\alpha(\zeta)\psi_k = \beta(\zeta)y_k,$$
$$u(t) = m(t-kT)\psi_k, \quad kT < t < (k+1)T.$$

As follows from Theorem 7.9, a necessary and sufficient condition for the stability of this system is that the matrix

$$Q^0(\zeta,\alpha,\beta) = \begin{pmatrix} I - \zeta e^{A^0T} & O & -\zeta e^{A^0T}\mu(A^0)B_2^0 \\ -C_2^0 & I_n & O_{nm} \\ O_m & -\beta(\zeta) & \alpha(\zeta) \end{pmatrix} \qquad (7.77)$$

be stable. Hence the set of pairs of stabilising polynomials $(\alpha(\zeta), \beta(\zeta))$ coincides with the set of stabilising pairs for the nonsingular PMD

$$\tau_N(\zeta) = \left(I - \zeta e^{A^0T},\ \zeta e^{A^0T}\mu(A^0)B_2^0,\ C_2^0\right). \qquad (7.78)$$

Then the claim of the theorem for a given realisation $(A^0, B_2^0, C_2^0)$ and the pair $(a_N(\zeta), b_N(\zeta))$ follows from Theorem 5.64. It remains to prove that the set of stabilising controllers does not depend on the choice of the realisation $(A^0, B_2^0, C_2^0)$ and the pair $(a_N(\zeta), b_N(\zeta))$. With this aim in view, we notice that from the formulae of Section 6.8, it follows that

$$w_N(\zeta) = \frac{1}{T}\sum_{k=-\infty}^{\infty} N(s+kj\omega)\,\mu(s+kj\omega)\Big|_{e^{-sT}=\zeta}, \qquad \omega = \frac{2\pi}{T}. \qquad (7.79)$$

Since all minimal realisations are equivalent, the matrix $w_N(\zeta)$ is independent of the choice of the realisation $(A^0, B_2^0, C_2^0)$. Hence the set of pairs $(a_N(\zeta), b_N(\zeta))$ is also independent of the choice of this realisation. The same holds for the set of stabilising controllers.

4. A more complete result can be obtained under the assumption that the poles of the matrix N(p) satisfy the strict conditions for non-pathological behaviour (6.124) and (6.125).

Theorem 7.14. Let the eigenvalues $s_1, \ldots, s_q$ of the matrix $A^0$, i.e., the roots of $\det(sI - A^0) = 0$, satisfy the strict conditions for non-pathological behaviour

$$s_i - s_k \ne \frac{2n\pi j}{T}, \quad (i \ne k;\ i,k = 1,\ldots,q;\ n = 0,\pm1,\ldots), \qquad (7.80)$$
$$\mu(s_i) \ne 0, \quad (i = 1,\ldots,q). \qquad (7.81)$$

Then all minimal realisations $(A^0, B_2^0, C_2^0)$ are stabilisable.

Proof. If (7.80) and (7.81) hold, then due to Theorem 6.30, the PMD (7.78) is minimal. Then for the ILMFD

$$C_2^0\left(I - \zeta e^{A^0T}\right)^{-1} = a_1^{-1}(\zeta)b_1(\zeta),$$

the determinants $\det a_1(\zeta)$ and $\det\left(I - \zeta e^{A^0T}\right)$ coincide up to a constant factor. Hence from Lemma 2.9, it follows that for any ILMFD (7.74),

$$a_N(\zeta) = \eta(\zeta)a_1(\zeta)$$

holds with a unimodular matrix $\eta(\zeta)$. Therefore, in this case, due to (7.75), the polynomial

$$\Delta_N(\zeta) = \mathrm{const.} \ne 0$$

is stable. The claim then follows from Theorem 7.13.
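The strict non-pathological conditions (7.80)-(7.81) can be verified numerically for a given pair (A, T). The sketch below is my own illustration (not from the book); it assumes a zero-order hold, for which $\mu(s) = (1 - e^{-sT})/s$ for $s \ne 0$ and $\mu(0) = T$.

```python
import numpy as np

def non_pathological(A, T, n_max=50, tol=1e-9):
    """Check the strict non-pathological sampling conditions (7.80)-(7.81)
    for a ZOH: no two eigenvalues of A may differ by 2*pi*j*n/T (n != 0
    or distinct eigenvalues), and mu(s_i) = (1 - e^{-s_i T})/s_i must be
    nonzero at every eigenvalue s_i of A."""
    eig = np.linalg.eigvals(A)
    omega = 2 * np.pi / T
    for i in range(len(eig)):
        for k in range(len(eig)):
            if i == k:
                continue
            d = eig[i] - eig[k]
            # (7.80): a difference equal to a multiple of j*omega is forbidden
            for n in range(-n_max, n_max + 1):
                if abs(d - 1j * n * omega) < tol:
                    return False
    for s in eig:
        mu = T if abs(s) < tol else (1 - np.exp(-s * T)) / s
        if abs(mu) < tol:          # (7.81)
            return False
    return True

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # eigenvalues +/- j
print(non_pathological(A, T=1.0))          # True: no resonance with sampling
print(non_pathological(A, T=np.pi))        # False: s1 - s2 = 2j = 2*pi*j/T
```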

5. A general criterion for the stabilisability of the closed loop is given by the following theorem.

Theorem 7.15. Let $(\tilde A, \tilde B_2, \tilde C_2)$ be any realisation of the matrix N(p), and let $(A^0, B_2^0, C_2^0)$ be one of its minimal realisations. Then the function

$$r(s) = \frac{\det(sI - \tilde A)}{\det(sI - A^0)} \qquad (7.82)$$

is a polynomial. If the minimal realisation $(A^0, B_2^0, C_2^0)$ is not stabilisable, then neither is the realisation $(\tilde A, \tilde B_2, \tilde C_2)$. If the minimal realisation $(A^0, B_2^0, C_2^0)$ is stabilisable, then for the stabilisability of the realisation $(\tilde A, \tilde B_2, \tilde C_2)$, it is necessary and sufficient that all roots of the polynomial (7.82) lie in the open left half-plane. Under this condition, the set of stabilising controllers $(\alpha(\zeta), \beta(\zeta))$ is independent of the realisation $(\tilde A, \tilde B_2, \tilde C_2)$ and is determined by the stability condition for the matrix (7.76).
Proof. Under the given assumptions, the PMDs

$$\tilde\tau_N(s) = \left(sI - \tilde A,\ \tilde B_2,\ \tilde C_2\right), \qquad \tau_N^0(s) = \left(sI - A^0,\ B_2^0,\ C_2^0\right) \qquad (7.83)$$

are equivalent, i.e., their transfer matrices coincide. Moreover, since the PMD $\tau_N^0(s)$ is minimal, Relation (7.82) is a polynomial by Lemma 2.48. Let us have

$$\det(sI - \tilde A) = (s-s_1)^{\nu_1}\cdots(s-s_q)^{\nu_q}, \qquad \det(sI - A^0) = (s-s_1)^{\nu_1^0}\cdots(s-s_q)^{\nu_q^0},$$

where $\nu_i \ge \nu_i^0$, $(i = 1,\ldots,q)$. Let $\nu_i > \nu_i^0$ for $i = 1,\ldots,\sigma$ and $\nu_i = \nu_i^0$ for $i = \sigma+1,\ldots,q$. Then from (7.82), we obtain

$$r(s) = (s-s_1)^{m_1}\cdots(s-s_\sigma)^{m_\sigma}, \qquad (7.84)$$

where $m_i = \nu_i - \nu_i^0$, $(i = 1,\ldots,\sigma)$. Moreover, since the PMDs (7.83) are equivalent, using (7.79) we obtain

$$\zeta\tilde C_2\left(I - \zeta e^{\tilde AT}\right)^{-1}e^{\tilde AT}\mu(\tilde A)\tilde B_2 = \zeta C_2^0\left(I - \zeta e^{A^0T}\right)^{-1}e^{A^0T}\mu(A^0)B_2^0 = w_N(\zeta). \qquad (7.85)$$

From (7.85), it follows that the PMDs

$$\tilde\tau_d(\zeta) = \left(I - \zeta e^{\tilde AT},\ \zeta e^{\tilde AT}\mu(\tilde A)\tilde B_2,\ \tilde C_2\right), \qquad (7.86)$$
$$\tau_d^0(\zeta) = \left(I - \zeta e^{A^0T},\ \zeta e^{A^0T}\mu(A^0)B_2^0,\ C_2^0\right) \qquad (7.87)$$

are equivalent. Then

$$\det\left(I - \zeta e^{\tilde AT}\right) = \left(1 - \zeta e^{s_1T}\right)^{\nu_1}\cdots\left(1 - \zeta e^{s_qT}\right)^{\nu_q},$$
$$\det\left(I - \zeta e^{A^0T}\right) = \left(1 - \zeta e^{s_1T}\right)^{\nu_1^0}\cdots\left(1 - \zeta e^{s_qT}\right)^{\nu_q^0},$$

and the relation

$$\Delta_1(\zeta) = \frac{\det\left(I - \zeta e^{\tilde AT}\right)}{\det\left(I - \zeta e^{A^0T}\right)} = \left(1 - \zeta e^{s_1T}\right)^{m_1}\cdots\left(1 - \zeta e^{s_\sigma T}\right)^{m_\sigma} \qquad (7.88)$$
is a polynomial. Consider the characteristic matrix (7.77) for the PMD (7.86):

$$\tilde Q(\zeta,\alpha,\beta) = \begin{pmatrix} I - \zeta e^{\tilde AT} & O & -\zeta e^{\tilde AT}\mu(\tilde A)\tilde B_2 \\ -\tilde C_2 & I_n & O_{nm} \\ O_m & -\beta(\zeta) & \alpha(\zeta) \end{pmatrix}.$$

Using Equation (4.71) for this and Matrix (7.77), and taking account of (7.85), we find

$$\det\tilde Q(\zeta,\alpha,\beta) = \det\left(I - \zeta e^{\tilde AT}\right)\det\left[\alpha(\zeta) - \beta(\zeta)w_N(\zeta)\right],$$
$$\det Q^0(\zeta,\alpha,\beta) = \det\left(I - \zeta e^{A^0T}\right)\det\left[\alpha(\zeta) - \beta(\zeta)w_N(\zeta)\right].$$

Hence with (7.88), it follows that

$$\det\tilde Q(\zeta,\alpha,\beta) = \Delta_1(\zeta)\det Q^0(\zeta,\alpha,\beta).$$

If the minimal realisation $(A^0, B_2^0, C_2^0)$ is not stabilisable, then the matrix $Q^0(\zeta,\alpha,\beta)$ is unstable for any controller $(\alpha(\zeta), \beta(\zeta))$. Due to the last equation,

the matrix $\tilde Q(\zeta,\alpha,\beta)$ is also unstable. If the polynomial $\Delta_1(\zeta)$ is not stable, then the matrix $\tilde Q(\zeta,\alpha,\beta)$ is also unstable, independently of the choice of the controller. Finally, if the polynomial $\Delta_1(\zeta)$ is stable, then the matrix $\tilde Q(\zeta,\alpha,\beta)$ is stable or unstable together with the matrix $Q^0(\zeta,\alpha,\beta)$.

As a conclusion, we note that from (7.88), it follows that the polynomial $\Delta_1(\zeta)$ is stable iff, in (7.84), we have

$$\mathrm{Re}\,s_i < 0, \quad (i = 1,\ldots,\sigma).$$

This completes the proof.

6. Using the above results, we can consider the stabilisation problem for the complete standard sampled-data system.

Theorem 7.16. Let the continuous-time plant of the standard sampled-data system be given by the state equations (7.61) with a matrix A, and let $(A^0, B_2^0, C_2^0)$ be any minimal realisation of the matrix N(p). Then the function

$$r(s) = \frac{\det(sI - A)}{\det(sI - A^0)} \qquad (7.89)$$

is a polynomial. Moreover, if the minimal realisation $(A^0, B_2^0, C_2^0)$ is not stabilisable, then the standard sampled-data system with the plant (7.61) is not stabilisable either. If the minimal realisation $(A^0, B_2^0, C_2^0)$ is stabilisable, then for stabilisability of the standard sampled-data system, it is necessary and sufficient that all roots $s_i$ of the polynomial (7.89) lie in the open left half-plane. Under this condition, the set of stabilising controllers for the standard sampled-data system coincides with the set of stabilising controllers for the minimal realisation $(A^0, B_2^0, C_2^0)$.

Proof. Using (7.61) and (7.62)-(7.64) and assuming $x(t) \equiv 0$, we can represent the standard sampled-data system in the form

$$\frac{dv}{dt} = Av + B_2u, \qquad y = C_2v,$$
$$\alpha(\zeta)\psi_k = \beta(\zeta)y_k, \qquad (7.90)$$
$$u(t) = m(t-kT)\psi_k, \quad kT < t < (k+1)T,$$

which should be completed with the output equation

$$z(t) = C_1v(t) + D_Lu(t). \qquad (7.91)$$

Since $C_2(pI-A)^{-1}B_2 = N(p)$, due to (7.9), Equations (7.90) can be considered as equations of the closed loop, where the continuous-time plant N(p) is given in form of a realisation $(A, B_2, C_2)$, which in general is not minimal. Obviously, a necessary and sufficient condition for the stability of the system (7.90) and (7.91) is that the system (7.90) be stable. Hence it follows that the stabilisation problem for the standard sampled-data system with the plant (7.61) is equivalent to the stabilisation problem for the closed loop, where the continuous-time plant is given as a realisation $(A, B_2, C_2)$. Therefore, all claims of Theorem 7.16 are corollaries of Theorem 7.15.

7.7 Modal Controllability and the Set of Stabilising Controllers

1. Let the continuous-time plant of the standard sampled-data system be given by the state equations

$$\frac{dv}{dt} = Av + B_1x + B_2u, \qquad z = C_1v, \quad y = C_2v. \qquad (7.92)$$

Then, as follows from Theorems 7.13-7.16, in the case of a stabilisable plant (7.92), the characteristic polynomial $\bar\Delta(\zeta)$ of the closed-loop standard sampled-data system can be represented in the form

$$\bar\Delta(\zeta) = \Delta^0(\zeta)\Delta_d(\zeta), \qquad (7.93)$$

where $\Delta^0(\zeta)$ is a stable polynomial, which is independent of the choice of the controller. Moreover, in (7.93),

$$\Delta_d(\zeta) = \det\begin{pmatrix} a_N(\zeta) & b_N(\zeta) \\ \beta(\zeta) & \alpha(\zeta) \end{pmatrix} = \det Q_N(\zeta,\alpha,\beta), \qquad (7.94)$$

where $(\alpha(\zeta), \beta(\zeta))$ is a discrete controller and the matrices $a_N(\zeta)$, $b_N(\zeta)$ define an ILMFD

$$w_N(\zeta) = \zeta C_2\left(I - \zeta e^{AT}\right)^{-1}\mu(A)e^{AT}B_2 = a_N^{-1}(\zeta)b_N(\zeta). \qquad (7.95)$$

From (7.93), it follows that the roots of the characteristic polynomial $\bar\Delta(\zeta)$ of the standard sampled-data system can be split into two groups. The first group (the roots of the polynomial $\Delta^0(\zeta)$) is determined only by the properties of the matrix w(p) and is independent of the properties of the discrete controller. Hereinafter, these roots will be called uncontrollable. The second group consists of those roots of the polynomial (7.94) which are determined by the matrix w(p) and the controller $(\alpha(\zeta), \beta(\zeta))$. Since the pair $(a_N(\zeta), b_N(\zeta))$ is irreducible, the controller $(\alpha(\zeta), \beta(\zeta))$ can be chosen in such a way that the polynomial $\Delta_d(\zeta)$ equals any given (stable) polynomial. In this connection, the roots of the second group will be called controllable.

2. The standard sampled-data system with the plant (7.92) will be called modally controllable if all roots of its characteristic polynomial are controllable, i.e., $\Delta^0(\zeta) = \mathrm{const.} \ne 0$. Under the strict conditions for non-pathological behaviour, necessary and sufficient conditions for the system to be modally controllable are given by the following theorem.

Theorem 7.17. Let the poles of the matrix

$$w(p) = \begin{pmatrix} K(p) & L(p) \\ M(p) & N(p) \end{pmatrix} = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}(pI-A)^{-1}\begin{pmatrix} B_1 & B_2 \end{pmatrix} + \begin{pmatrix} O & D_L \\ O & O_{nm} \end{pmatrix} \qquad (7.96)$$

satisfy Conditions (7.80) and (7.81). Then, a necessary and sufficient condition for the standard sampled-data system to be modally controllable is that the matrix N(p) dominate in the matrix w(p).

Proof. Sufficiency: Without loss of generality, we take $D_L = O_{rm}$ and assume that the standard representation is minimal. Let the matrix N(p) dominate in Matrix (7.96). Then, due to Theorem 2.67, the realisation $(A, B_2, C_2)$ on the right-hand side of (7.96) is minimal. Construct the discrete model $D_w(T,\zeta,t)$ of the matrix $w(p)\mu(p)$. Obviously, we have

$$D_w(T,\zeta,t) = \begin{pmatrix} D_K(T,\zeta,t) & D_L(T,\zeta,t) \\ D_M(T,\zeta,t) & D_N(T,\zeta,t) \end{pmatrix}.$$

Using the second formula in (6.122) and (7.96), we obtain

$$D_w(T,\zeta,t) = D_1(\zeta,t) + D_2(t),$$

where

$$D_1(\zeta,t) = \zeta\begin{pmatrix} C_1 \\ C_2 \end{pmatrix}\left(I - \zeta e^{AT}\right)^{-1}e^{A(t+T)}\mu(A)\begin{pmatrix} B_1 & B_2 \end{pmatrix},$$

$$D_2(t) = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}\int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,\begin{pmatrix} B_1 & B_2 \end{pmatrix}.$$

By virtue of Theorem 6.30, the right-hand side of the first equation defines a minimal standard representation of the matrix $D_1(\zeta,t)$. At the same time, the realisation $\left(e^{AT},\ e^{A(t+T)}\mu(A)B_2,\ C_2\right)$ is also minimal. Therefore, we can take $A^0 = A$ in (7.89). Hence $r(s) = \mathrm{const.} \ne 0$ and $\Delta^0(\zeta) = \mathrm{const.} \ne 0$, and the sufficiency is proven.

The necessity of the conditions of the theorem is seen by reversing the above derivations.

3. Under the stabilisability condition, the set of stabilising controllers is completely determined by the properties of the matrix N(p), and is defined as the set of pairs $(\alpha(\zeta), \beta(\zeta))$ satisfying (7.94) for all possible stable polynomials $\Delta_d(\zeta)$. The form of Equation (7.94) coincides with (5.163), where the matrix $Q_l(\zeta,\alpha,\beta)$ is given by (5.160). Therefore, to describe the set of stabilising controllers, all the results of Section 5.8 can be used.

4. As a special case, the following propositions hold:
a) Let $(\alpha^0(\zeta), \beta^0(\zeta))$ be a controller such that

$$\det\begin{pmatrix} a_N(\zeta) & b_N(\zeta) \\ \beta^0(\zeta) & \alpha^0(\zeta) \end{pmatrix} = \mathrm{const.} \ne 0.$$

Then the set of all stabilising controllers for the stabilisable standard sampled-data system is given by

$$\alpha(\zeta) = D_l(\zeta)\alpha^0(\zeta) - M_l(\zeta)b_N(\zeta), \qquad \beta(\zeta) = D_l(\zeta)\beta^0(\zeta) - M_l(\zeta)a_N(\zeta),$$

where $D_l(\zeta)$ and $M_l(\zeta)$ are arbitrary polynomial matrices, but $D_l(\zeta)$ has to be stable.
b) Together with the ILMFD (7.95), let us have an IRMFD

$$w_N(\zeta) = \zeta C_2\left(I - \zeta e^{AT}\right)^{-1}e^{AT}\mu(A)B_2 = b_r(\zeta)a_r^{-1}(\zeta).$$

Then the set of stabilising controllers $(\alpha(\zeta), \beta(\zeta))$ for the standard sampled-data system coincides with the set of solutions of the Diophantine equation

$$\alpha(\zeta)a_r(\zeta) - \beta(\zeta)b_r(\zeta) = D_l(\zeta),$$

where $D_l(\zeta)$ is an arbitrary stable polynomial matrix.
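For scalar plants, the Diophantine equation in b) reduces to a linear (Sylvester-type) system for the unknown coefficients of $\alpha(\zeta)$ and $\beta(\zeta)$. The sketch below is my own hypothetical example, not the book's: it takes $a_r(\zeta) = 1 - 2\zeta$ (root $\zeta = 1/2$ inside the unit disk, hence an unstable plant in this convention), $b_r(\zeta) = \zeta$, and the stable target $D_l(\zeta) = 1$.

```python
import numpy as np

def solve_diophantine(a, b, d, deg):
    """Solve alpha(z)*a(z) - beta(z)*b(z) = d(z) by matching coefficients
    (all polynomials stored lowest power first), with
    deg(alpha) = deg(beta) = deg.  Returns (alpha, beta)."""
    m = max(len(a), len(b)) + deg          # number of coefficient equations
    S = np.zeros((m, 2 * (deg + 1)))
    for j in range(deg + 1):               # column j: z^j*a(z); shifted -b(z)
        S[j:j + len(a), j] = a
        S[j:j + len(b), deg + 1 + j] = -np.asarray(b)
    rhs = np.zeros(m)
    rhs[:len(d)] = d
    sol, *_ = np.linalg.lstsq(S, rhs, rcond=None)
    assert np.allclose(S @ sol, rhs), "chosen degrees admit no solution"
    return sol[:deg + 1], sol[deg + 1:]

# a_r(z) = 1 - 2z, b_r(z) = z, target D_l(z) = 1: a constant-degree
# (static) controller suffices here.
alpha, beta = solve_diophantine([1.0, -2.0], [0.0, 1.0], [1.0], deg=0)
print(alpha, beta)   # alpha = [1.], beta = [-2.]
```

Indeed, $1\cdot(1-2\zeta) - (-2)\cdot\zeta = 1$, so $\det Q_N = \mathrm{const.} \ne 0$ and the controller is stabilising.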

5. Any stabilising controller $(\alpha(\zeta), \beta(\zeta))$ for the standard sampled-data system fulfills $\det\alpha(0) \ne 0$, i.e., the matrix $\alpha(\zeta)$ is invertible. Therefore, any stabilising controller has a transfer matrix

$$w_d(\zeta) = \alpha^{-1}(\zeta)\beta(\zeta).$$

The following propositions hold:
c) The set of transfer matrices of all stabilising controllers for the standard sampled-data system can be written in the form

$$w_d(\zeta) = \left[\alpha^0(\zeta) - \theta(\zeta)b_N(\zeta)\right]^{-1}\left[\beta^0(\zeta) - \theta(\zeta)a_N(\zeta)\right],$$

where $\theta(\zeta)$ is an arbitrary stable rational matrix of compatible dimensions.
d) The rational matrix $w_d(\zeta)$ is associated with a stabilising controller for a stabilisable standard system if and only if there exists any of the following representations:

$$w_d(\zeta) = F_1^{-1}(\zeta)F_2(\zeta), \qquad w_d(\zeta) = G_2(\zeta)G_1^{-1}(\zeta),$$

where the pairs of rational matrices $(F_1(\zeta), F_2(\zeta))$ and $(G_1(\zeta), G_2(\zeta))$ are stable and satisfy the equations

$$F_1(\zeta)a_r(\zeta) - F_2(\zeta)b_r(\zeta) = I_m, \qquad a_l(\zeta)G_1(\zeta) - b_l(\zeta)G_2(\zeta) = I_n.$$
8 Analysis and Synthesis of SD Systems Under Stochastic Excitation

8.1 Quasi-stationary Stochastic Processes in the Standard SD System
1. Let the input of the standard sampled-data system be acted upon by a vector signal x(t) that is modelled as a centred stochastic process with the autocorrelation matrix

$$K_x(\tau) = E\left[x(t)\,x'(t+\tau)\right],$$

where $E[\cdot]$ denotes the operator of mathematical expectation. Assume that the integral

$$\Phi_x(s) = \int_{-\infty}^{\infty} K_x(\tau)e^{-s\tau}\,d\tau,$$

which will be called the spectral density of the input signal, converges absolutely in some strip $-\sigma_0 \le \mathrm{Re}\,s \le \sigma_0$, where $\sigma_0$ is a positive number.

2. Let the block L(p) in the matrix w(p) (7.2) be at least proper and the remaining blocks be strictly proper. Let also the system (7.3)–(7.7) be internally stable. When the input of the standard sampled-data system is the above-mentioned signal, then after the transient processes have faded away, the steady-state stochastic process z(t) is characterised by the covariance matrix [143, 148]
$$K_z(t_1, t_2) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} w(s, t_1)\,\Phi_x(s)\, w'(-s, t_2)\, e^{s(t_2 - t_1)}\, ds\,, \qquad (8.1)$$
where $w(s,t) = w(s, t+T)$ is the PTM of the system. Hereinafter, the stochastic process z(t) with the correlation matrix (8.1) will be called quasi-stationary. As follows from (8.1), the covariance matrix $K_z(t_1, t_2)$ depends separately on each of its arguments $t_1$ and $t_2$ rather than only on their difference. Therefore, the quasi-stationary output z(t) is a non-stationary stochastic process. Since

$$w(s, t_1) = w(s, t_1 + T)\,, \qquad w'(-s, t_2) = w'(-s, t_2 + T)\,, \qquad (8.2)$$


we have
$$K_z(t_1, t_2) = K_z(t_1 + T, t_2 + T)\,.$$
Stochastic processes satisfying this condition will be called periodically non-stationary or, briefly, periodic. Using this term, we state that the steady-state (quasi-stationary) response of a stable standard sampled-data system to a stationary input signal is a periodically non-stationary stochastic process.

3. The scalar function
$$d_z(t) = \operatorname{trace} K_z(t, t) \qquad (8.3)$$
will be called the variance of the quasi-stationary output. Here trace denotes the trace of a matrix, defined as the sum of its diagonal elements. From (8.1) for $t_1 = t_2 = t$ and (8.3), we find
$$d_z(t) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[w(s,t)\,\Phi_x(s)\, w'(-s,t)\right] ds\,. \qquad (8.4)$$
Then using (8.2), we obtain
$$d_z(t) = d_z(t+T)\,,$$
i.e., the variance of the quasi-stationary output is a periodic function of its argument t. For matrices A, B of compatible dimensions, the relation
$$\operatorname{trace}(AB) = \operatorname{trace}(BA) \qquad (8.5)$$
is well known. Thus in addition to (8.4), the following equivalent relations hold:
$$d_z(t) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[\Phi_x(s)\, w'(-s,t)\, w(s,t)\right] ds\,, \qquad (8.6)$$
$$d_z(t) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[w'(-s,t)\, w(s,t)\,\Phi_x(s)\right] ds\,. \qquad (8.7)$$
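The cyclic trace identity (8.5), which justifies the rearrangements (8.6) and (8.7), is easily confirmed numerically for rectangular factors of compatible dimensions (the matrix sizes below are arbitrary illustration values):

```python
# Sketch: numerical check of trace(AB) = trace(BA) for compatible
# rectangular matrices.  Sizes are chosen arbitrarily for illustration.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # 3x5
B = rng.standard_normal((5, 3))   # 5x3, so AB is 3x3 while BA is 5x5

t1 = np.trace(A @ B)
t2 = np.trace(B @ A)
print(t1, t2)                     # the two traces coincide
```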

4. Assume in particular that $\Phi_x(s) = I$, i.e. the input signal is white noise with uncorrelated components. For this case, we denote
$$r_z(t) = d_z(t)\,.$$
Then (8.6) and (8.4) yield
$$r_z(t) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[w'(-s,t)\, w(s,t)\right] ds = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[w(s,t)\, w'(-s,t)\right] ds\,.$$
Substituting here $-s$ for $s$, we also find
$$r_z(t) = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[w'(s,t)\, w(-s,t)\right] ds = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[w(-s,t)\, w'(s,t)\right] ds\,.$$

5. A practical calculation of the variance $d_z(t)$ using Formulae (8.4), (8.6) and (8.7) causes some technical difficulties, because the integrands of these formulae are transcendental functions of the argument s. To solve the problem, it is reasonable to transform these integrals into integrals with finite integration limits. The corresponding equations, which stem from (8.6) and (8.7), have the form
$$d_z(t) = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2} \operatorname{trace}\,\tilde U_1(T,s,t)\, ds = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2} \operatorname{trace}\,\tilde U_2(T,s,t)\, ds\,, \qquad (8.8)$$
where $\omega = 2\pi/T$ and
$$\tilde U_1(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} \Phi_x(s+kj\omega)\, w'(-s-kj\omega, t)\, w(s+kj\omega, t)\,, \qquad (8.9)$$
$$\tilde U_2(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} w'(-s-kj\omega, t)\, w(s+kj\omega, t)\,\Phi_x(s+kj\omega)\,. \qquad (8.10)$$
In (8.9) and (8.10) for any function f(s), we denote
$$\bar f(s) = f(-s)\,.$$

Moreover, for any function (matrix) $g(\zeta)$, we use as before the notation
$$\tilde g(s) = g(\zeta)\big|_{\zeta = e^{-sT}}\,.$$
Obviously, the following reciprocal relations hold:
$$\tilde g(s) = g(\zeta)\big|_{\zeta = e^{-sT}}\,, \qquad g(\zeta) = \tilde g(s)\big|_{e^{-sT} = \zeta} \qquad (8.11)$$
and per construction
$$\tilde g(s) = \tilde g(s + j\omega)\,, \qquad \omega = 2\pi/T\,. \qquad (8.12)$$
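Relation (8.12) is immediate, because $e^{-(s+j\omega)T} = e^{-sT}$ for $\omega = 2\pi/T$; hence any function of $\zeta = e^{-sT}$ is $j\omega$-periodic in s. A quick numerical illustration, with an arbitrary rational $g(\zeta)$ chosen for the example:

```python
# Sketch: e^{-sT} is j*omega-periodic in s (omega = 2*pi/T), hence so is
# any function of zeta = e^{-sT}.  g(zeta) below is an arbitrary example.
import cmath

T = 0.7
omega = 2 * cmath.pi / T

def g(zeta):                      # arbitrary rational function of zeta
    return zeta / (zeta - 2.0)

def g_tilde(s):                   # g~(s) = g(zeta) evaluated at zeta = e^{-sT}
    return g(cmath.exp(-s * T))

s = 0.3 + 1.1j
diff = abs(g_tilde(s) - g_tilde(s + 1j * omega))
print(diff)                       # ~0: the periodicity relation (8.12)
```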

As follows from [148], for a rational matrix $\Phi_x(s)$ the matrices (8.9) and (8.10) are rational matrices of the argument $\zeta = e^{-sT}$. Therefore, to calculate the integrals (8.8), we could benefit from the technique described in [148]. There exists an alternative way to compute the integrals in (8.8). With this aim in view, we pass to the integration variable $\zeta$ in (8.8), such that
$$d_z(t) = \frac{1}{2\pi j}\oint \operatorname{trace}\, U_1(T,\zeta,t)\,\frac{d\zeta}{\zeta} = \frac{1}{2\pi j}\oint \operatorname{trace}\, U_2(T,\zeta,t)\,\frac{d\zeta}{\zeta}\,, \qquad (8.13)$$
where, according to the notation (8.11),
$$U_i(T,\zeta,t) = \tilde U_i(T,s,t)\big|_{e^{-sT} = \zeta}\,, \qquad i = 1, 2\,,$$
are rational matrices in $\zeta$. The integration in (8.13) is performed along the unit circle in the positive direction (anti-clockwise). The integrals (8.13) can easily be computed using the residue theorem, taking account of the fact that all poles of the PTM of a stable standard sampled-data system lie in the open left half-plane. Other ways of calculating these integrals are described in [11] and [177].
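The residue evaluation of unit-circle integrals of the form (8.13) can be illustrated on a scalar example (the integrand below is hypothetical, not one of the matrices above): with $\zeta = e^{j\varphi}$ one has $d\zeta/\zeta = j\,d\varphi$, so the contour integral reduces to an average over the circle, and the residue theorem gives the same value in closed form.

```python
# Sketch: evaluating (1/(2*pi*j)) * contour integral of F(zeta) dzeta/zeta
# over the unit circle, numerically vs. by the residue theorem.
# F is a hypothetical scalar integrand with one pole at zeta = a, |a| < 1.
import numpy as np

a = 0.6                                    # pole inside the unit circle

def F(zeta):
    return zeta / (zeta - a)

# Numerical quadrature: with zeta = e^{j*phi}, dzeta/zeta = j*dphi, so the
# integral becomes the average of F over the unit circle.
phi = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
numeric = np.mean(F(np.exp(1j * phi)))

# Residue theorem: F(zeta)/zeta = 1/(zeta - a), residue 1 at zeta = a.
by_residues = 1.0

print(numeric.real, by_residues)           # both equal 1
```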

6.

Example 8.1. Let us find the variance of the quasi-stationary output for the simple single-loop system shown in Fig. 8.1, where the forming element is a

[Fig. 8.1. Single sampled-data control loop: the input x enters a summing junction, passes through the integrator 1/s to the output z, and the discrete controller C closes the negative feedback loop.]

zero-order hold with transfer function
$$\mu(s) = \mu_0(s) = \frac{1 - e^{-sT}}{s}$$
and $\Phi_x(s) = 1$. Using Formulae (7.40)–(7.42), it can be shown that in this case the PTM w(s,t) can be written in the form
$$w(s,t) = e^{st}\left[v_0(s) + t\,\theta_0(s) + c(s,t)\right], \qquad 0 \le t \le T \qquad (8.14)$$
with
$$v_0(s) = \frac{1 - e^{-sT}}{s}\,\frac{\tilde\alpha(s)}{\tilde\Delta(s)}\,, \qquad \theta_0(s) = \frac{1 - e^{-sT}}{s}\,\frac{\tilde\beta(s)}{\tilde\Delta(s)}\,, \qquad c(s,t) = \frac{e^{-st} - 1}{s}\,, \qquad (8.15)$$
where
$$\tilde\alpha(s) = \alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT}\,, \qquad \tilde\beta(s) = \beta_0 + \beta_1 e^{-sT} + \ldots + \beta_\rho e^{-\rho sT} \qquad (8.16)$$
and
$$\tilde\Delta(s) = \left(1 - e^{-sT}\right)\tilde\alpha(s) - T\, e^{-sT}\,\tilde\beta(s)\,. \qquad (8.17)$$
Using (8.14)–(8.17) in (8.9) and (8.10), after fairly tedious calculations it is found that for $0 \le t \le T$
$$\tilde U_1(T,s,t) = \tilde U_2(T,s,t) = T\,\frac{\bar{\tilde\alpha}(s)\,\tilde\alpha(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)} + tT\left[\frac{\bar{\tilde\alpha}(s)\,\tilde\beta(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)} + \frac{\bar{\tilde\beta}(s)\,\tilde\alpha(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)}\right] + t^2 T\,\frac{\bar{\tilde\beta}(s)\,\tilde\beta(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)}$$
$$\qquad + \; t\left[e^{-sT}\,\frac{\tilde\alpha(s)}{\tilde\Delta(s)} + e^{sT}\,\frac{\bar{\tilde\alpha}(s)}{\bar{\tilde\Delta}(s)}\right] + t^2\left[e^{-sT}\,\frac{\tilde\beta(s)}{\tilde\Delta(s)} + e^{sT}\,\frac{\bar{\tilde\beta}(s)}{\bar{\tilde\Delta}(s)}\right] + t\,. \qquad (8.18)$$
To derive Formula (8.18), we employed expressions for the sums of the following series:
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} v_0(s+kj\omega)\,\bar v_0(s+kj\omega) = T\,\frac{\bar{\tilde\alpha}(s)\,\tilde\alpha(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)}\,,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} \theta_0(s+kj\omega)\,\bar\theta_0(s+kj\omega) = T\,\frac{\bar{\tilde\beta}(s)\,\tilde\beta(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)}\,,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} v_0(s+kj\omega)\,\bar\theta_0(s+kj\omega) = T\,\frac{\bar{\tilde\beta}(s)\,\tilde\alpha(s)}{\bar{\tilde\Delta}(s)\,\tilde\Delta(s)}\,,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} v_0(s+kj\omega)\,\bar c(s+kj\omega, t) = \frac{\tilde\alpha(s)}{\tilde\Delta(s)}\, e^{-sT}\, t\,,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} \theta_0(s+kj\omega)\,\bar c(s+kj\omega, t) = \frac{\tilde\beta(s)}{\tilde\Delta(s)}\, e^{-sT}\, t\,,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} c(s+kj\omega, t)\,\bar c(s+kj\omega, t) = t$$
and
$$\frac{1}{T}\sum_{k=-\infty}^{\infty}\frac{e^{(s+kj\omega)t}}{(s+kj\omega)^2} = \frac{\left(1 - e^{-sT}\right)t + T\, e^{-sT}}{\left(1 - e^{-sT}\right)^2}\,, \qquad 0 \le t \le T\,,$$
$$\frac{1}{T}\sum_{k=-\infty}^{\infty}\frac{e^{(s+kj\omega)t}}{(s+kj\omega)^2} = \frac{\left(1 - e^{-sT}\right)e^{-sT}\, t + T\, e^{-sT}}{\left(1 - e^{-sT}\right)^2}\,, \qquad -T \le t \le 0\,.$$
After substituting $e^{-sT} = \zeta$ in (8.18), we find a rational function of the argument $\zeta$, for which the integrals (8.13) can be calculated elementarily. □

8.2 Mean Variance and H2-norm of the Standard SD System

1. Let $d_z(t)$ be the variance of the quasi-stationary output determined by any one of Formulae (8.4), (8.6) or (8.7). Then the value
$$\bar d_z = \frac{1}{T}\int_0^T d_z(t)\, dt$$
will be called the mean variance of the quasi-stationary output. Using here (8.6) and (8.7), we obtain
$$\bar d_z = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[\Phi_x(s)\,\breve w_1(s)\right] ds = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[\breve w_1(s)\,\Phi_x(s)\right] ds\,, \qquad (8.19)$$
where
$$\breve w_1(s) = \frac{1}{T}\int_0^T w'(-s,t)\, w(s,t)\, dt\,. \qquad (8.20)$$

2. When $\Phi_x(s) = I$, for the mean variance we will use the special notation
$$\bar r_z = \frac{1}{T}\int_0^T r_z(t)\, dt\,.$$
The value $\bar r_z$ is determined by the properties of the standard sampled-data system and does not depend on the properties of the exogenous excitations. Formulae for calculating $\bar r_z$ can be derived from (8.19) and (8.20) with $\Phi_x(s) = I$. In particular, assuming
$$\breve w(s) = \frac{1}{T}\int_0^T w'(-s,t)\, w(s,t)\, dt\,, \qquad (8.21)$$
from the formulae in (8.19), we find
$$\bar r_z = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\left[\breve w(s)\right] ds\,. \qquad (8.22)$$
The value
$$\|S\|_2 = +\sqrt{\bar r_z} \qquad (8.23)$$
will henceforth be called the H2-norm of the stable standard sampled-data system S. Hence
$$\|S\|_2^2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \operatorname{trace}\,\breve w(s)\, ds\,. \qquad (8.24)$$
For further transformations, we write the right-hand side of (8.24) in the form
$$\|S\|_2^2 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2} \operatorname{trace}\,\tilde D_{\breve w}(T,s,0)\, ds\,,$$
where $\omega = 2\pi/T$ and
$$\tilde D_{\breve w}(T,s,0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} \breve w(s+kj\omega) \qquad (8.25)$$
is a rational matrix in $e^{-sT}$. Using the substitution $e^{-sT} = \zeta$, similarly to (8.13), we obtain
$$\|S\|_2^2 = \frac{1}{2\pi j}\oint \operatorname{trace}\, D_{\breve w}(T,\zeta,0)\,\frac{d\zeta}{\zeta}\,. \qquad (8.26)$$
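For a purely discrete system, the counterpart of (8.26) is the familiar identity that the squared H2-norm equals the unit-circle average of $|H|^2$, which by Parseval's relation equals the sum of the squared impulse-response samples. A scalar sketch with a hypothetical first-order example, written in the usual z-domain convention rather than the book's $\zeta = e^{-sT}$:

```python
# Sketch: discrete H2-norm of the hypothetical system H(z) = z/(z - a)
# computed three ways -- unit-circle average of |H|^2, sum of squared
# impulse-response samples h_k = a^k, and the closed form 1/(1 - a^2).
import numpy as np

a = 0.5

phi = np.linspace(0.0, 2 * np.pi, 8000, endpoint=False)
z = np.exp(1j * phi)
freq_avg = np.mean(np.abs(z / (z - a)) ** 2)       # (1/2pi) int |H|^2 dphi

impulse_sum = sum((a ** k) ** 2 for k in range(200))  # sum_k h_k^2
closed_form = 1.0 / (1.0 - a ** 2)

print(freq_avg, impulse_sum, closed_form)          # all ~ 4/3
```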

Example 8.2. Under the conditions of Example 8.1, we get
$$D_{\breve w}(T,\zeta,0) = T\,\frac{\alpha(\zeta)\,\alpha(\zeta^{-1})}{\Delta(\zeta)\,\Delta(\zeta^{-1})} + \frac{T^2}{2}\left[\frac{\alpha(\zeta)\,\beta(\zeta^{-1}) + \alpha(\zeta^{-1})\,\beta(\zeta)}{\Delta(\zeta)\,\Delta(\zeta^{-1})}\right] + \frac{T^3}{3}\,\frac{\beta(\zeta)\,\beta(\zeta^{-1})}{\Delta(\zeta)\,\Delta(\zeta^{-1})}$$
$$\qquad + \;\frac{T}{2}\left[\frac{\zeta\,\alpha(\zeta)}{\Delta(\zeta)} + \frac{\zeta^{-1}\alpha(\zeta^{-1})}{\Delta(\zeta^{-1})}\right] + \frac{T^2}{3}\left[\frac{\zeta\,\beta(\zeta)}{\Delta(\zeta)} + \frac{\zeta^{-1}\beta(\zeta^{-1})}{\Delta(\zeta^{-1})}\right] + \frac{T}{2}$$
and the integral (8.26) can be calculated elementarily. □

Remark 8.3. The H2-norm is defined directly by the PTM. This approach opens the possibility to define the H2-norm for any system possessing a PTM. Interesting results have already been published by the authors for the class of linear periodically time-varying systems [98, 100, 101, 88, 89]. In contrast to other approaches like [200, 32, 203, 28, 204], the norm computation via the PTM yields closed formulae and requires evaluating matrices of only finite dimensions.

3. Let us find a general expression for the H2-norm of the standard sampled-data system (6.3)–(6.6). Using the assumptions of Section 8.1 and notation (8.11), the PTM of the system can be written in the form
$$w(s,t) = \varphi_L(T,s,t)\,\tilde R_N(s)\, M(s) + K(s)\,. \qquad (8.27)$$
Substituting $-s$ for $s$ after transposition, we receive
$$w_*(s,t) = w'(-s,t) = M'(-s)\,\tilde R'_N(-s)\,\varphi'_L(T,-s,t) + K'(-s)\,.$$
Multiplying the last two equations, we find
$$w_*(s,t)\, w(s,t) = M'(-s)\tilde R'_N(-s)\,\varphi'_L(T,-s,t)\,\varphi_L(T,s,t)\,\tilde R_N(s)M(s) + M'(-s)\tilde R'_N(-s)\,\varphi'_L(T,-s,t)\,K(s)$$
$$\qquad + \; K'(-s)\,\varphi_L(T,s,t)\,\tilde R_N(s)M(s) + K'(-s)K(s)\,.$$
Using this in (8.21) yields
$$\breve w(s) = M'(-s)\tilde R'_N(-s)\,\tilde D_L(s)\,\tilde R_N(s)M(s) + M'(-s)\tilde R'_N(-s)\,\bar{\tilde Q}_L(s)\,K(s) + K'(-s)\,\tilde Q_L(s)\,\tilde R_N(s)M(s) + K'(-s)K(s)\,, \qquad (8.28)$$
where
$$\tilde D_L(s) = \frac{1}{T}\int_0^T \varphi'_L(T,-s,t)\,\varphi_L(T,s,t)\, dt\,,$$
$$\tilde Q_L(s) = \frac{1}{T}\int_0^T \varphi_L(T,s,t)\, dt = \frac{1}{T}\, L(s)\,\mu(s)\,,$$
$$\bar{\tilde Q}_L(s) = \frac{1}{T}\int_0^T \varphi'_L(T,-s,t)\, dt = \frac{1}{T}\, L'(-s)\,\mu(-s)\,.$$
Using (8.28) in (8.24), we obtain
$$\|S\|_2^2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\Big[ M'(-s)\tilde R'_N(-s)\tilde D_L(s)\tilde R_N(s)M(s) + M'(-s)\tilde R'_N(-s)\bar{\tilde Q}_L(s)K(s)$$
$$\qquad + \; K'(-s)\tilde Q_L(s)\tilde R_N(s)M(s) + K'(-s)K(s)\Big]\, ds\,. \qquad (8.29)$$
All matrices in the integrand, except for the matrix $\tilde R_N(s)$, are determined by the transfer matrix w(s) of the continuous plant and are independent of the transfer matrix $\tilde w_d(s)$ of the controller. Moreover, each transfer matrix $\tilde w_d(s)$ of a stabilising controller is associated with a nonnegative value $\|S\|_2^2$. Therefore, the right-hand side of (8.29) can be considered as a functional defined over the set of transfer functions of stabilising controllers $\tilde w_d(s)$. Hence the following optimisation problem arises naturally.

H2-problem. Let the matrix w(p) in (7.2) be given, where the matrix L(p) is at least proper and the remaining elements are strictly proper. Furthermore, the sampling period T and the impulse form m(t) are fixed. Find the transfer function of a stabilising controller $\tilde w_d(s)$ which minimises the functional (8.29).

8.3 Representing the PTM in Terms of the System Function

1. Equation (8.29) is not very convenient for solving the H2-optimisation problem. As will be shown below, a representation of the H2-norm in terms of the so-called system function is more suitable for this purpose. To construct such a representation, we must first write the PTM in terms of the system function. This topic is considered in the present section.

2. To simplify the further reading, we summarise some relations obtained above. Note that the notation differs slightly from that in the previous exposition.
Using (8.11), we present the PTM of the standard sampled-data system (7.2)–(7.7) in the form (8.27), where the matrix $\varphi_L(T,s,t)$ is determined by (7.29),
$$\tilde R_N(s) = \tilde w_d(s)\left[I_n - \tilde D_N(T,s,0)\,\tilde w_d(s)\right]^{-1} \qquad (8.30)$$
with
$$\tilde D_N(T,s,0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} N(s+kj\omega)\,\mu(s+kj\omega)$$
and
$$\tilde w_d(s) = \tilde\alpha_l^{-1}(s)\,\tilde\beta_l(s)\,, \qquad (8.31)$$
where
$$\tilde\alpha_l(s) = \alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_\rho e^{-\rho sT}\,, \qquad \tilde\beta_l(s) = \beta_0 + \beta_1 e^{-sT} + \ldots + \beta_\rho e^{-\rho sT}$$
are polynomial matrices in the variable $\zeta = e^{-sT}$. Moreover, $\mu(s)$ is the transfer function of the forming element (6.38). Matrix (8.31) will be called the transfer function of the controller.

3. The PTM (8.27) is associated with the rational matrix
$$w(p) = \begin{bmatrix} K(p) & L(p) \\ M(p) & N(p) \end{bmatrix}.$$
Henceforth as above, we assume that the matrix L(p) is at least proper and the remaining elements are strictly proper. The above matrix can be associated with the state equations (7.10)
$$\frac{dv}{dt} = Av + B_1 x + B_2 u\,, \qquad z = C_1 v + D_L u\,, \qquad y = C_2 v\,,$$
where A is a constant matrix. Without loss of generality, we can assume that the pairs
$$\left(A, \,[B_1 \;\; B_2]\right), \qquad \left[A, \begin{bmatrix} C_1 \\ C_2 \end{bmatrix}\right]$$
are controllable and observable, respectively.

4. As follows from Theorems 7.3 and 7.4, the PTM (8.27) admits a representation of the form
$$w(s,t) = \frac{P_w(s,t)}{\tilde\Delta(s)}\,, \qquad (8.32)$$
where $P_w(s,t) = P_w(s,t+T)$ is a matrix whose elements are integral functions in s for all t, and the function $\tilde\Delta(s)$ is given by
$$\tilde\Delta(s) = \det\tilde Q(s,\tilde\alpha_l,\tilde\beta_l)\,, \qquad (8.33)$$
where $\tilde Q(s,\tilde\alpha_l,\tilde\beta_l)$ is a matrix of the form
$$\tilde Q(s,\tilde\alpha_l,\tilde\beta_l) = \begin{bmatrix} I - e^{-sT} e^{AT} & O & e^{-sT} e^{AT}\varphi(A)B_2 \\ C_2 & -I_n & O_{nm} \\ O_m & \tilde\beta_l(s) & \tilde\alpha_l(s) \end{bmatrix}.$$
Assuming $e^{-sT} = \zeta$ in (8.33), we find the characteristic polynomial
$$\Delta(\zeta) = \tilde\Delta(s)\big|_{e^{-sT}=\zeta} = \det Q(\zeta,\alpha_l,\beta_l)\,, \qquad (8.34)$$
where
$$Q(\zeta,\alpha_l,\beta_l) = \begin{bmatrix} I - \zeta e^{AT} & O & \zeta e^{AT}\varphi(A)B_2 \\ C_2 & -I_n & O_{nm} \\ O_m & \beta_l(\zeta) & \alpha_l(\zeta) \end{bmatrix}$$

with
$$\alpha_l(\zeta) = \alpha_0 + \alpha_1\zeta + \ldots + \alpha_\rho\zeta^\rho\,, \qquad \beta_l(\zeta) = \beta_0 + \beta_1\zeta + \ldots + \beta_\rho\zeta^\rho\,. \qquad (8.35)$$
For brevity, we will refer to the matrices (8.35) as a controller, and to the matrix
$$w_d(\zeta) = \tilde w_d(s)\big|_{e^{-sT}=\zeta} = \alpha_l^{-1}(\zeta)\,\beta_l(\zeta)$$
as well as to $\tilde w_d(s)$ as transfer functions of this controller.

5. As was shown in Chapter 6, the following equations hold:
$$D_N(T,\zeta,0) = \tilde D_N(T,s,0)\big|_{e^{-sT}=\zeta} = \frac{1}{T}\sum_{k=-\infty}^{\infty} N(s+kj\omega)\,\mu(s+kj\omega)\Big|_{e^{-sT}=\zeta} = C_2\left(I - \zeta e^{AT}\right)^{-1}\zeta e^{AT}\varphi(A)B_2 = w_N(\zeta)\,.$$
If this rational matrix is associated with an ILMFD
$$w_N(\zeta) = D_N(T,\zeta,0) = a_l^{-1}(\zeta)\, b_l(\zeta)\,, \qquad (8.36)$$
then the function
$$\Delta_0(\zeta) = \frac{\det\left(I - \zeta e^{AT}\right)}{\det a_l(\zeta)} \qquad (8.37)$$
is a polynomial, which is independent of the choice of the controller (8.35). The characteristic polynomial (8.34) has the form
$$\Delta(\zeta) \sim \Delta_0(\zeta)\,\Delta_d(\zeta)\,, \qquad (8.38)$$
where $\Delta_d(\zeta)$ is a polynomial determined by the choice of the controller (8.35). Moreover, the standard system is stabilisable if and only if the polynomial $\Delta_0(\zeta)$ is stable. The polynomial $\Delta_d(\zeta)$ appearing in (8.38) satisfies the relation
$$\Delta_d(\zeta) \sim \det Q_N(\zeta,\alpha_l,\beta_l)\,,$$
where
$$Q_N(\zeta,\alpha,\beta) = \begin{bmatrix} a_l(\zeta) & b_l(\zeta) \\ \beta_l(\zeta) & \alpha_l(\zeta) \end{bmatrix} \qquad (8.39)$$
is a polynomial matrix (7.76). If the stabilisability conditions hold, then the set of stabilising controllers for the standard sampled-data system coincides with the set of controllers (8.35) with stable matrices (8.39).

6. Let $(\alpha_{0l}(\zeta), \beta_{0l}(\zeta))$ be a basic controller such that
$$\Delta_d(\zeta) \sim \det Q_N(\zeta,\alpha_{0l},\beta_{0l}) = \text{const} \neq 0\,. \qquad (8.40)$$
Then, as was proved before, the set of all causal stabilising controllers can be given by
$$\alpha_l(\zeta) = D_l(\zeta)\,\alpha_{0l}(\zeta) - M_l(\zeta)\, b_l(\zeta)\,, \qquad \beta_l(\zeta) = D_l(\zeta)\,\beta_{0l}(\zeta) - M_l(\zeta)\, a_l(\zeta)\,, \qquad (8.41)$$
where $M_l(\zeta)$ and $D_l(\zeta)$ are polynomial matrices; the first can be chosen arbitrarily, while the second must be stable. Then the transfer matrix of any stabilising controller for a given basic controller has a unique left representation of the form
$$w_d(\zeta) = \alpha_l^{-1}(\zeta)\,\beta_l(\zeta) = \left[\alpha_{0l}(\zeta) - \theta(\zeta)\, b_l(\zeta)\right]^{-1}\left[\beta_{0l}(\zeta) - \theta(\zeta)\, a_l(\zeta)\right],$$
where
$$\theta(\zeta) = D_l^{-1}(\zeta)\, M_l(\zeta)$$
is a stable rational matrix, which will hereinafter be called the system function of the standard sampled-data system.

7. Together with the ILMFD (8.36), let there be an IRMFD
$$w_N(\zeta) = D_N(T,\zeta,0) = b_r(\zeta)\, a_r^{-1}(\zeta) \qquad (8.42)$$
and let $(\alpha_{0l}(\zeta), \beta_{0l}(\zeta))$ and $[\alpha_{0r}(\zeta), \beta_{0r}(\zeta)]$ be two dual basic controllers corresponding to the IMFDs (8.36) and (8.42). These controllers will be called initial controllers. Then the transfer matrix of a stabilising controller admits the right representation
$$w_d(\zeta) = \beta_r(\zeta)\,\alpha_r^{-1}(\zeta) \qquad (8.43)$$
with
$$\alpha_r(\zeta) = \alpha_{0r}(\zeta)\, D_r(\zeta) - b_r(\zeta)\, M_r(\zeta)\,, \qquad \beta_r(\zeta) = \beta_{0r}(\zeta)\, D_r(\zeta) - a_r(\zeta)\, M_r(\zeta)\,, \qquad (8.44)$$
where $D_r(\zeta)$ is a stable polynomial matrix and $M_r(\zeta)$ is an arbitrary polynomial matrix. Thus
$$M_r(\zeta)\, D_r^{-1}(\zeta) = \theta(\zeta)$$
and Equation (8.43) appears as
$$w_d(\zeta) = \left[\beta_{0r}(\zeta) - a_r(\zeta)\,\theta(\zeta)\right]\left[\alpha_{0r}(\zeta) - b_r(\zeta)\,\theta(\zeta)\right]^{-1}. \qquad (8.45)$$

Moreover, according to (5.182), (5.183) and (5.181), we obtain
$$w_d(\zeta) = V_2(\zeta)\, V_1^{-1}(\zeta)\,, \qquad (8.46)$$
where
$$V_1(\zeta) = \left[a_l(\zeta) - b_l(\zeta)\, w_d(\zeta)\right]^{-1} = \alpha_{0r}(\zeta) - b_r(\zeta)\,\theta(\zeta)\,,$$
$$V_2(\zeta) = w_d(\zeta)\left[a_l(\zeta) - b_l(\zeta)\, w_d(\zeta)\right]^{-1} = \beta_{0r}(\zeta) - a_r(\zeta)\,\theta(\zeta)\,. \qquad (8.47)$$

8. Using (8.47), we can write the matrix $\tilde R_N(s)$ in (8.30) in terms of the system function $\theta(\zeta)$. Indeed, using (8.30), (8.36), and (8.47), we find
$$R_N(\zeta) = \tilde R_N(s)\big|_{e^{-sT}=\zeta} = w_d(\zeta)\left[I_n - D_N(T,\zeta,0)\, w_d(\zeta)\right]^{-1} = w_d(\zeta)\left[I_n - a_l^{-1}(\zeta)\, b_l(\zeta)\, w_d(\zeta)\right]^{-1}$$
$$\qquad = w_d(\zeta)\left[a_l(\zeta) - b_l(\zeta)\, w_d(\zeta)\right]^{-1} a_l(\zeta) = V_2(\zeta)\, a_l(\zeta)\,. \qquad (8.48)$$
From (8.47) and (8.48), we obtain
$$R_N(\zeta) = \beta_{0r}(\zeta)\, a_l(\zeta) - a_r(\zeta)\,\theta(\zeta)\, a_l(\zeta)\,. \qquad (8.49)$$
Hence
$$\tilde R_N(s) = R_N(\zeta)\big|_{\zeta=e^{-sT}} = \tilde\beta_{0r}(s)\,\tilde a_l(s) - \tilde a_r(s)\,\tilde\theta(s)\,\tilde a_l(s)\,. \qquad (8.50)$$
Substituting (8.50) into (8.27), we obtain
$$w(s,t) = -\Pi(s,t)\,\tilde\theta(s)\,\Xi(s) + \Psi(s,t)\,, \qquad (8.51)$$
where
$$\Pi(s,t) = \varphi_L(T,s,t)\,\tilde a_r(s)\,, \qquad \Xi(s) = \tilde a_l(s)\, M(s)\,, \qquad \Psi(s,t) = \varphi_L(T,s,t)\,\tilde\beta_{0r}(s)\,\tilde a_l(s)\, M(s) + K(s)\,. \qquad (8.52)$$
Equation (8.51) will hereinafter be called a representation of the PTM in terms of the system function. The matrices (8.52) will be called the coefficients of this representation.

9. Below, we will prove several propositions showing that the coefficient matrices (8.52) should be calculated taking account of a number of important cancellations.

Theorem 8.4. The poles of the matrix $\Psi(s,t)$ belong to the set of roots of the function $\tilde\Delta_0(s) = \Delta_0(e^{-sT})$, where $\Delta_0(\zeta)$ is the polynomial given by (8.37).

Proof. Assume $D_l(\zeta) = I$ and $M_l(\zeta) = O_{mn}$ in (8.41), i.e. we choose the initial controller $(\alpha_{0l}(\zeta), \beta_{0l}(\zeta))$. In this case, $\tilde\theta(s) = O_{mn}$ and
$$w(s,t) = \Psi(s,t)\,.$$
Since the controller $(\alpha_{0l}(\zeta), \beta_{0l}(\zeta))$ is a basic controller, we have (8.40) and from (8.38) it follows that
$$\Delta(\zeta) \sim \Delta_0(\zeta)\,.$$
Assuming this, from (8.32) we get
$$w(s,t) = \Psi(s,t) = \frac{P_\Psi(s,t)}{\tilde\Delta_0(s)}\,, \qquad (8.53)$$
where the matrix $P_\Psi(s,t)$ is an integral function of the argument s. The claim of the theorem follows from (8.53).

Corollary 8.5. Let the standard sampled-data system be modal controllable, i.e. $\tilde\Delta_0(s) = \text{const} \neq 0$. Then the matrix $\Psi(s,t)$ is an integral function in s.

Theorem 8.6. For any polynomial matrix $\theta(\zeta)$, the set of poles of the matrix
$$G(s,t) = \Pi(s,t)\,\tilde\theta(s)\,\Xi(s) \qquad (8.54)$$
belongs to the set of roots of the function $\tilde\Delta_0(s)$.

Proof. Let $\tilde\theta(s) = \theta(\zeta)\big|_{\zeta=e^{-sT}}$ with a polynomial matrix $\theta(\zeta)$. Then for any ILMFD
$$\theta(\zeta) = D_l^{-1}(\zeta)\, M_l(\zeta)\,,$$
the matrix $D_l(\zeta)$ is unimodular. Therefore, due to Theorem 4.1, the controller (8.41) is a basic controller. Hence we have $\Delta_d(\zeta) = \text{const} \neq 0$. In this case $\tilde\Delta_d(s) = \text{const} \neq 0$ and (8.38) yields
$$\Delta(\zeta) \sim \Delta_0(\zeta)\,.$$
From this relation and (8.32), we obtain
$$w(s,t) = \frac{P_w(s,t)}{\tilde\Delta_0(s)}\,,$$
where the matrix $P_w(s,t)$ is an integral function in s. Using (8.53) and the last equation, we obtain
$$G(s,t) = \Psi(s,t) - w(s,t) = \frac{P_G(s,t)}{\tilde\Delta_0(s)}\,, \qquad (8.55)$$
where the matrix $P_G(s,t)$ is an integral function in s.

Corollary 8.7. If the standard sampled-data system is modal controllable, then the matrix G(s,t) is an integral function of the argument s for any polynomial $\theta(\zeta)$.

10. In principle, for any $\theta(\zeta)$ the right-hand side of (8.54) can be cancelled by a function $\tilde\Delta_1(s)$, where $\Delta_1(\zeta)$ is a polynomial independent of t. In this case, after cancellation we obtain an expression similar to (8.55):
$$G(s,t) = \frac{P_{Gm}(s,t)}{\tilde\Delta_m(s)}\,, \qquad (8.56)$$
where $\deg\Delta_m(\zeta) < \deg\Delta(\zeta)$. If $\deg\Delta_m(\zeta)$ has the minimal possible value independent of the choice of $\theta(\zeta)$, the function (8.56) will be called globally irreducible.
Using (8.52), we can represent (8.56) in the form
$$\varphi_L(T,s,t)\,\tilde a_r(s)\,\tilde\theta(s)\,\tilde a_l(s)\, M(s) = \frac{P_{Gm}(s,t)}{\tilde\Delta_m(s)} = G(s,t)\,.$$
Multiplying this by $e^{-st}$, we find
$$e^{-st}\, G(s,t) = G_1(s,t) = D_L(T,s,t)\,\tilde a_r(s)\,\tilde\theta(s)\,\tilde a_l(s)\, M(s)\,.$$
Hence
$$\frac{1}{T}\sum_{k=-\infty}^{\infty} G_1(s+kj\omega, t)\, e^{(s+kj\omega)t} = D_L(T,s,t)\,\tilde a_r(s)\,\tilde\theta(s)\,\tilde a_l(s)\, D_M(T,s,t) = \frac{\tilde N_{G1}(s,t)}{\tilde\Delta_m(s)}\,, \qquad (8.57)$$
where $N_{G1}(\zeta,t)$ is a polynomial matrix in $\zeta$ for any $0 < t < T$.

11. The following propositions establish some further cancellations in the calculation of the matrices $\Pi(s,t)$ and $\Xi(s)$ appearing in (8.52).

Theorem 8.8. For 0 < t < T, let us have the irreducible representations
$$D_L(T,s,t)\,\tilde a_r(s) = \frac{\tilde N_L(s,t)}{\tilde\Delta_L(s)}\,, \qquad (8.58)$$
$$\tilde a_l(s)\, D_M(T,s,t) = \frac{\tilde N_M(s,t)}{\tilde\Delta_M(s)}\,, \qquad (8.59)$$
where $N_L(\zeta,t)$ and $N_M(\zeta,t)$ are polynomial matrices in $\zeta$, and $\Delta_L(\zeta)$ and $\Delta_M(\zeta)$ are scalar polynomials. Let also the fractions
$$\frac{N_L(\zeta,t)}{\Delta_M(\zeta)}\,, \qquad \frac{N_M(\zeta,t)}{\Delta_L(\zeta)}$$
be irreducible and the fraction (8.57) be globally irreducible. Then the function
$$\chi(\zeta) = \frac{\Delta_m(\zeta)}{\Delta_L(\zeta)\,\Delta_M(\zeta)}$$
is a polynomial. Moreover,
$$\Delta_L(\zeta)\,\Delta_M(\zeta) \sim \Delta_m(\zeta)\,.$$

The proof of the theorem is preceded by two auxiliary claims.


Lemma 8.9. Let A and B be constant matrices of dimensions n × m and ℓ × q, respectively. Moreover, for any m × ℓ matrix $\theta$, let the equation
$$A\,\theta\, B = O \qquad (8.60)$$
hold. Then at least one of the matrices A, B is a zero matrix.

Proof. Assume the converse, namely, let (8.60) hold for every $\theta$ while A and B are both nonzero. Let the elements $a_{ij}$ and $b_{pq}$ of the matrices A and B be nonzero. Take $\theta = I_{jp}$, where $I_{jp}$ is an m × ℓ matrix having a single unit element at the intersection of the j-th row and the p-th column, while all other elements are zero. It is easily verified that the product $A I_{jp} B$ is nonzero, since its (i,q) entry equals $a_{ij} b_{pq}$. This contradiction proves the lemma.
Lemma 8.10. Let us have two irreducible rational n × m and ℓ × q matrices in the standard form
$$A(\zeta) = \frac{N_A(\zeta)}{d_A(\zeta)}\,, \qquad B(\zeta) = \frac{N_B(\zeta)}{d_B(\zeta)}\,. \qquad (8.61)$$
Let also the fractions
$$\frac{N_A(\zeta)}{d_B(\zeta)}\,, \qquad \frac{N_B(\zeta)}{d_A(\zeta)} \qquad (8.62)$$
be irreducible. For any m × ℓ polynomial matrix $\theta(\zeta)$, let
$$\frac{N_A(\zeta)}{d_A(\zeta)}\,\theta(\zeta)\,\frac{N_B(\zeta)}{d_B(\zeta)} = \frac{N(\zeta)}{d_0(\zeta)}\,, \qquad (8.63)$$
where $d_0(\zeta)$ is a fixed polynomial and $N(\zeta)$ is a polynomial matrix. Then the function
$$\kappa(\zeta) = \frac{d_0(\zeta)}{d_A(\zeta)\, d_B(\zeta)} \qquad (8.64)$$
is a polynomial. Moreover, if the right-hand side of (8.63) is globally irreducible, then
$$d_{AB}(\zeta) = d_A(\zeta)\, d_B(\zeta) \sim d_0(\zeta)\,. \qquad (8.65)$$

Proof. Assume that the function (8.64) is not a polynomial. If $p(\zeta)$ is a GCD of the polynomials $d_0(\zeta)$ and $d_{AB}(\zeta)$, then
$$d_0(\zeta) = p(\zeta)\, d_1(\zeta)\,, \qquad d_{AB}(\zeta) = p(\zeta)\, d_2(\zeta)\,,$$
where the polynomials $d_1(\zeta)$ and $d_2(\zeta)$ are coprime and $\deg d_2(\zeta) > 0$. Substituting these equations into (8.63), we obtain
$$N_A(\zeta)\,\theta(\zeta)\, N_B(\zeta) = N(\zeta)\,\frac{d_2(\zeta)}{d_1(\zeta)}\,.$$
Let $\zeta_0$ be a root of the polynomial $d_2(\zeta)$. Then for $\zeta = \zeta_0$, the equality
$$N_A(\zeta_0)\,\theta(\zeta_0)\, N_B(\zeta_0) = O$$
holds for any constant matrix $\theta(\zeta_0)$. With the help of Lemma 8.9, it follows that at least one of the following two equations holds:
$$N_A(\zeta_0) = O_{nm} \qquad\text{or}\qquad N_B(\zeta_0) = O_{\ell q}\,.$$
In this case, at least one of the rational matrices (8.61) or (8.62) appears to be reducible. This contradicts the assumptions. Thus $\deg d_2(\zeta) = 0$ and $\kappa(\zeta)$ is a polynomial.
Now let the right-hand side of (8.63) be globally irreducible. We show that in this case $\deg d_1(\zeta) = 0$, so that (8.65) holds. Indeed, assuming the converse, we would have $\deg d_0(\zeta) > \deg d_{AB}(\zeta)$. This contradicts the assumption that the right-hand side of (8.63) is globally irreducible.

Proof (of Theorem 8.8). From (8.57)–(8.59) for $e^{-sT} = \zeta$, we obtain
$$\frac{N_L(\zeta,t)}{\Delta_L(\zeta)}\,\theta(\zeta)\,\frac{N_M(\zeta,t)}{\Delta_M(\zeta)} = \frac{N_{G1}(\zeta,t)}{\Delta_m(\zeta)}\,.$$
Since here the polynomial matrix $\theta(\zeta)$ can be chosen arbitrarily, the claim of the theorem stems directly from Lemma 8.10.

Corollary 8.11. When, under the conditions of Theorem 8.8, the right-hand side of (8.55) is globally irreducible, then we have
$$\Delta_L(\zeta)\,\Delta_M(\zeta) \sim \Delta_0(\zeta)\,. \qquad (8.66)$$

Corollary 8.12. As follows from the above reasoning, the converse proposition is also valid: when, under the conditions of Theorem 8.8, Equation (8.66) holds, then the representations (8.53) and (8.55) are globally irreducible.

Theorem 8.13. Let the conditions of Theorem 8.8 hold. Then we have the irreducible representations
$$\Pi(s,t) = \varphi_L(T,s,t)\,\tilde a_r(s) = \frac{P_\Pi(s,t)}{\tilde\Delta_L(s)}\,, \qquad (8.67)$$
$$\Xi(s) = \tilde a_l(s)\, M(s) = \frac{P_\Xi(s)}{\tilde\Delta_M(s)}\,, \qquad (8.68)$$
where the numerators are integral functions of the argument s.

Proof. Multiplying the first equation in (8.58) by $e^{st}$, we obtain
$$\Pi(s,t) = \varphi_L(T,s,t)\,\tilde a_r(s) = \frac{e^{st}\,\tilde N_L(s,t)}{\tilde\Delta_L(s)}\,.$$
The matrix $e^{st}\tilde N_L(s,t)$ is an integral function, and the fraction on the right-hand side is irreducible because the function $e^{st}$ has no zeros. Thus (8.67) has been proven.
Further, multiplying (8.59) by $e^{st}$ and integrating over t, we find
$$\Xi(s) = \tilde a_l(s)\, M(s) = \tilde a_l(s)\int_0^T e^{st}\, D_M(T,s,t)\, dt = \frac{\int_0^T e^{st}\,\tilde N_M(s,t)\, dt}{\tilde\Delta_M(s)}\,.$$
The numerator of the latter expression is an integral function in s, i.e., we have (8.68). It remains to prove that the representation (8.68) is irreducible.
Assume the converse, i.e., let the representation (8.68) be reducible. Then
$$\Xi(s) = \tilde a_l(s)\, M(s) = \frac{P_1(s)}{\tilde\Delta_{M1}(s)}\,,$$
where $\deg\Delta_{M1}(\zeta) < \deg\Delta_M(\zeta)$. With respect to (8.59), we thus obtain an expression of the form
$$\tilde a_l(s)\, D_M(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} \Xi(s+kj\omega)\, e^{-(s+kj\omega)t} = \frac{\tilde N_{M1}(s,t)}{\tilde\Delta_{M1}(s)}\,, \qquad (8.69)$$
where $N_{M1}(\zeta,t)$ is a polynomial matrix in $\zeta$ for $0 \le t \le T$. This contradicts the irreducibility assumption on the right-hand side of (8.59). Hence (8.68) is irreducible.

Corollary 8.14. In the case of modal controllability, the matrices (8.67), (8.68) and (8.69) are integral functions of the argument s for $0 \le t \le T$.

Corollary 8.15. In a similar way, it can be proved that for an irreducible representation (8.58), the following irreducible representation holds:
$$L(s)\,\mu(s)\,\tilde a_r(s) = \frac{\tilde P_L(s)}{\tilde\Delta_L(s)}\,,$$
where $\tilde P_L(s)$ is an integral function in s.

8.4 Representing the H2-norm in Terms of the System Function

1. In this section, on the basis of (8.21)–(8.25), we construct expressions for the value $\|S\|_2^2$ of the standard sampled-data system, using the representation of the PTM w(s,t) in terms of the system function $\theta(\zeta)$ defined by (8.51). From (8.51), we have
$$w_*(s,t) = w'(-s,t) = -\Xi_*(s)\,\tilde\theta_*(s)\,\Pi_*(s,t) + \Psi_*(s,t)\,.$$
Multiplying this with the function (8.51), we receive
$$w_*(s,t)\, w(s,t) = \Xi_*(s)\tilde\theta_*(s)\,\Pi_*(s,t)\,\Pi(s,t)\,\tilde\theta(s)\Xi(s) - \Xi_*(s)\tilde\theta_*(s)\,\Pi_*(s,t)\,\Psi(s,t)$$
$$\qquad - \;\Psi_*(s,t)\,\Pi(s,t)\,\tilde\theta(s)\Xi(s) + \Psi_*(s,t)\,\Psi(s,t)\,.$$
Substituting this into (8.21), we obtain
$$\breve w(s) = \frac{1}{T}\int_0^T w_*(s,t)\, w(s,t)\, dt = g_1(s) - g_2(s) - g_3(s) + g_4(s)\,,$$
where
$$g_1(s) = \Xi_*(s)\,\tilde\theta_*(s)\left[\frac{1}{T}\int_0^T \Pi_*(s,t)\,\Pi(s,t)\, dt\right]\tilde\theta(s)\,\Xi(s)\,,$$
$$g_2(s) = \Xi_*(s)\,\tilde\theta_*(s)\,\frac{1}{T}\int_0^T \Pi_*(s,t)\,\Psi(s,t)\, dt\,, \qquad (8.70)$$
$$g_3(s) = g_{2*}(s) = \left[\frac{1}{T}\int_0^T \Psi_*(s,t)\,\Pi(s,t)\, dt\right]\tilde\theta(s)\,\Xi(s)\,,$$
and
$$g_4(s) = \frac{1}{T}\int_0^T \Psi_*(s,t)\,\Psi(s,t)\, dt\,.$$

2. Next, we calculate the matrices (8.70). First of all, with regard to (8.52), we find
$$\tilde A_L(s) = \frac{1}{T}\int_0^T \Pi_*(s,t)\,\Pi(s,t)\, dt = \tilde a_{r*}(s)\left[\frac{1}{T}\int_0^T \varphi_{L*}(T,s,t)\,\varphi_L(T,s,t)\, dt\right]\tilde a_r(s)\,. \qquad (8.71)$$
Since
$$\varphi_L(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} L(s+kj\omega)\,\mu(s+kj\omega)\, e^{kj\omega t}\,, \qquad \varphi_{L*}(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} L'(-s-kj\omega)\,\mu(-s-kj\omega)\, e^{-kj\omega t}\,, \qquad (8.72)$$
after substituting (8.72) into (8.71) and integrating, we receive
$$\tilde A_L(s) = \tilde a_{r*}(s)\,\frac{1}{T}\,\tilde D_{L'L}(T,s,0)\,\tilde a_r(s)\,, \qquad (8.73)$$
where
$$\tilde D_{L'L}(T,s,0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} L'(-s-kj\omega)\, L(s+kj\omega)\,\mu(s+kj\omega)\,\mu(-s-kj\omega)\,.$$
Using (8.73) in (8.70), we obtain
$$g_1(s) = \Xi_*(s)\,\tilde\theta_*(s)\,\tilde A_L(s)\,\tilde\theta(s)\,\Xi(s)\,.$$

3. To calculate the matrices $g_2(s)$ and $g_3(s)$, we denote
$$Q(s) = \frac{1}{T}\int_0^T \Psi_*(s,t)\,\Pi(s,t)\, dt\,. \qquad (8.74)$$
Then,
$$Q_*(s) = \frac{1}{T}\int_0^T \Pi_*(s,t)\,\Psi(s,t)\, dt\,. \qquad (8.75)$$
Using (8.52), we find
$$\Pi_*(s,t)\,\Psi(s,t) = \tilde a_{r*}(s)\,\varphi_{L*}(T,s,t)\,\varphi_L(T,s,t)\,\tilde\beta_{0r}(s)\,\tilde a_l(s)\, M(s) + \tilde a_{r*}(s)\,\varphi_{L*}(T,s,t)\, K(s)\,.$$
Substituting this into (8.75) and taking account of (8.72), after integration we find
$$Q_*(s) = \tilde a_{r*}(s)\,\frac{1}{T}\tilde D_{L'L}(T,s,0)\,\tilde\beta_{0r}(s)\,\tilde a_l(s)\, M(s) + \frac{1}{T}\,\tilde a_{r*}(s)\, L'(-s)\,\mu(-s)\, K(s)$$
and
$$Q(s) = M'(-s)\,\tilde a_{l*}(s)\,\tilde\beta_{0r*}(s)\,\frac{1}{T}\tilde D_{L'L}(T,s,0)\,\tilde a_r(s) + \frac{1}{T}\, K'(-s)\, L(s)\,\mu(s)\,\tilde a_r(s)\,,$$
considering the identity
$$\tilde D_{L'L*}(T,s,0) = \tilde D_{L'L}(T,s,0)\,.$$

4. Using the above relations and (8.22)–(8.24), we obtain
$$\|S\|_2^2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\,\breve w(s)\, ds = J_1 + J_2\,, \qquad (8.76)$$
where
$$J_1 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\left[\Xi_*(s)\tilde\theta_*(s)\tilde A_L(s)\tilde\theta(s)\Xi(s) - \Xi_*(s)\tilde\theta_*(s)\, Q_*(s) - Q(s)\,\tilde\theta(s)\Xi(s)\right] ds\,, \qquad (8.77)$$
$$J_2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\, g_4(s)\, ds\,.$$
Under the given assumptions, these integrals converge absolutely, i.e. all the integrands decay like $|s|^{-2}$ as $|s| \to \infty$.

5. Since for a given initial controller the value $J_2$ is a constant, we have to consider only (8.77). With regard to (8.5), from (8.77) we obtain
$$J_1 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\left[\tilde\theta_*(s)\,\tilde A_L(s)\,\tilde\theta(s)\,\Xi(s)\,\Xi_*(s) - \tilde\theta_*(s)\, Q_*(s)\,\Xi_*(s) - \Xi(s)\, Q(s)\,\tilde\theta(s)\right] ds\,,$$
where the integral on the right-hand side converges absolutely. From (8.52), (8.74) and (8.75), we have
$$\Xi(s)\, Q(s) = \tilde a_l(s)M(s)\, M'(-s)\,\tilde a_{l*}(s)\,\tilde\beta_{0r*}(s)\,\frac{1}{T}\tilde D_{L'L}(T,s,0)\,\tilde a_r(s) + \frac{1}{T}\,\tilde a_l(s)M(s)\, K'(-s)L(s)\mu(s)\,\tilde a_r(s)\,,$$
$$Q_*(s)\,\Xi_*(s) = \tilde a_{r*}(s)\,\frac{1}{T}\tilde D_{L'L}(T,s,0)\,\tilde\beta_{0r}(s)\,\tilde a_l(s)M(s)\, M'(-s)\,\tilde a_{l*}(s) + \frac{1}{T}\,\tilde a_{r*}(s)L'(-s)\mu(-s)\, K(s)M'(-s)\,\tilde a_{l*}(s)\,.$$
Substitute this into the equation above and pass to an integral with finite integration limits. Then using (8.12), we obtain the functional
$$J_1 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\operatorname{trace}\left[\tilde\theta_*(s)\,\tilde A_L(s)\,\tilde\theta(s)\,\tilde A_M(s) - \tilde\theta_*(s)\,\tilde C_*(s) - \tilde C(s)\,\tilde\theta(s)\right] ds\,, \qquad (8.78)$$


where $\omega = 2\pi/T$, the matrix $\tilde A_L(s)$ is given by (8.73),
$$\tilde A_M(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\Xi(s+kj\omega)\,\Xi_*(s+kj\omega) = \tilde a_l(s)\left[\frac{1}{T}\sum_{k=-\infty}^{\infty} M(s+kj\omega)\, M'(-s-kj\omega)\right]\tilde a_{l*}(s) = \tilde a_l(s)\,\tilde D_{MM'}(T,s,0)\,\tilde a_{l*}(s) \qquad (8.79)$$
and
$$\tilde C(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\Xi(s+kj\omega)\, Q(s+kj\omega) = \tilde A_M(s)\,\tilde\beta_{0r*}(s)\,\frac{1}{T}\tilde D_{L'L}(T,s,0)\,\tilde a_r(s) + \tilde a_l(s)\,\frac{1}{T}\tilde D_{MK'L}(T,s,0)\,\tilde a_r(s)\,, \qquad (8.80)$$
$$\tilde C_*(s) = \tilde a_{r*}(s)\,\frac{1}{T}\tilde D_{L'L}(T,s,0)\,\tilde\beta_{0r}(s)\,\tilde A_M(s) + \tilde a_{r*}(s)\,\frac{1}{T}\tilde D_{L'KM'}(T,s,0)\,\tilde a_{l*}(s)\,.$$

6. Let us note several useful properties of the matrices $\tilde A_L(s)$, $\tilde A_M(s)$ and $\tilde C(s)$ appearing in the functional (8.78). We shall assume that (8.66) holds, because this is true in almost all applied problems.
Theorem 8.16. The matrices $\tilde A_L(s)$ (8.73), $\tilde A_M(s)$ (8.79) and $\tilde C(s)$ (8.80) are rational periodic and admit the representations
$$\tilde C(s) = \frac{\tilde B_C(s)}{\tilde\Delta_L(s)\,\bar{\tilde\Delta}_L(s)\,\tilde\Delta_M(s)\,\bar{\tilde\Delta}_M(s)} = \frac{\tilde B_C(s)}{\tilde\Delta_0(s)\,\bar{\tilde\Delta}_0(s)}\,,$$
$$\tilde A_L(s) = \frac{\tilde B_L(s)}{\tilde\Delta_L(s)\,\bar{\tilde\Delta}_L(s)}\,, \qquad \tilde A_M(s) = \frac{\tilde B_M(s)}{\tilde\Delta_M(s)\,\bar{\tilde\Delta}_M(s)}\,, \qquad (8.81)$$
where the numerators are finite sums of the forms
$$\tilde B_L(s) = \sum_{k=-\phi}^{\phi} l_k\, e^{-ksT}\,, \qquad l_{-k} = l'_k\,,$$
$$\tilde B_M(s) = \sum_{k=-\psi}^{\psi} m_k\, e^{-ksT}\,, \qquad m_{-k} = m'_k\,, \qquad (8.82)$$
$$\tilde B_C(s) = \sum_{k=-\chi}^{\chi} c_k\, e^{-ksT}\,.$$
Herein $\phi$, $\psi$ and $\chi$ are non-negative integers and $l_k$, $m_k$ and $c_k$ are constant real matrices.

Proof. Let us prove the claim for $\tilde A_L(s)$. First of all, the matrix $\tilde A_L(s)$ is rational periodic, as follows from the general properties given in Chapter 6. Due to Corollary 8.15, the following irreducible representations exist:
$$L(s)\,\mu(s)\,\tilde a_r(s) = \frac{\tilde P_L(s)}{\tilde\Delta_L(s)}\,, \qquad \tilde a_{r*}(s)\, L'(-s)\,\mu(-s) = \frac{\tilde P_{L*}(s)}{\bar{\tilde\Delta}_L(s)}\,,$$
where the matrix $\tilde P_L(s)$ is an integral function of s. Using (8.73), we can write
$$T\,\tilde A_L(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\tilde a_{r*}(s+kj\omega)\, L'(-s-kj\omega)\,\mu(-s-kj\omega)\; L(s+kj\omega)\,\mu(s+kj\omega)\,\tilde a_r(s+kj\omega)\,. \qquad (8.83)$$
Each summand on the right-hand side can have poles only at the roots of the product $\tilde\Delta_1(s) = \tilde\Delta_L(s)\,\bar{\tilde\Delta}_L(s)$. Under the given assumptions, the series (8.83) converges absolutely and uniformly in any restricted part of the complex plane containing no roots of the function $\tilde\Delta_1(s)$. Hence the sum of the series (8.83) can have poles only at the roots of the function $\tilde\Delta_1(s)$. Therefore, the matrix $\tilde B_L(s) = T\,\tilde A_L(s)\,\tilde\Delta_1(s)$ has no poles. Moreover, since $\tilde A_{L*}(s) = \tilde A_L(s)$, we have $\tilde B_{L*}(s) = \tilde B_L(s)$, and the first relation in (8.82) is proven. The remaining formulae in (8.82) are proved in a similar way using (8.66).

Corollary 8.17. When the original system is stabilisable, the matrices (8.81) do not possess poles on the imaginary axis.

Corollary 8.18. If the standard sampled-data system is modal controllable, then the matrices (8.81) are free of poles, i.e., they are integral functions of the argument s.

7. Passing in the integral (8.78) to the variable $\zeta = e^{-sT}$, we obtain
$$J_1 = \frac{1}{2\pi j}\oint\operatorname{trace}\left[\hat\theta(\zeta)\, A_L(\zeta)\,\theta(\zeta)\, A_M(\zeta) - \hat\theta(\zeta)\,\hat C(\zeta) - C(\zeta)\,\theta(\zeta)\right]\frac{d\zeta}{\zeta} \qquad (8.84)$$
with the notation
$$\hat F(\zeta) = F'(\zeta^{-1})\,.$$
The integration is performed along the unit circle in the positive direction (anti-clockwise). The matrices $A_L(\zeta)$, $A_M(\zeta)$, $C(\zeta)$ and $\hat C(\zeta)$ appearing in (8.84) admit the representations
$$A_L(\zeta) = \hat a_r(\zeta)\,\frac{1}{T}\, D_{L'L}(T,\zeta,0)\, a_r(\zeta)\,, \qquad A_M(\zeta) = a_l(\zeta)\, D_{MM'}(T,\zeta,0)\,\hat a_l(\zeta) \qquad (8.85)$$
and
$$C(\zeta) = A_M(\zeta)\,\hat\beta_{0r}(\zeta)\,\frac{1}{T}\, D_{L'L}(T,\zeta,0)\, a_r(\zeta) + a_l(\zeta)\,\frac{1}{T}\, D_{MK'L}(T,\zeta,0)\, a_r(\zeta)\,, \qquad (8.86)$$
$$\hat C(\zeta) = \hat a_r(\zeta)\,\frac{1}{T}\, D_{L'L}(T,\zeta,0)\,\beta_{0r}(\zeta)\, A_M(\zeta) + \hat a_r(\zeta)\,\frac{1}{T}\, D_{L'KM'}(T,\zeta,0)\,\hat a_l(\zeta)\,.$$
Per construction,
$$\hat A_L(\zeta) = A_L(\zeta)\,, \qquad \hat A_M(\zeta) = A_M(\zeta)\,.$$

8. A rational matrix $F(\zeta)$ having no poles except for $\zeta=0$ will be called a quasi-polynomial matrix. Any quasi-polynomial matrix $F(\zeta)$ can be written in the form
\[
F(\zeta)=\sum_{k=-\mu}^{\nu}F_k\,\zeta^k\,,
\]
where $\mu$ and $\nu$ are nonnegative integers and the $F_k$ are constant matrices. Substituting $e^{-sT}=\zeta$ in (8.81) and (8.82), we obtain
\[
A_L(\zeta)=\frac{B_L(\zeta)}{\Delta_L(\zeta)\Delta_L(\zeta^{-1})}\,,\qquad
A_M(\zeta)=\frac{B_M(\zeta)}{\Delta_M(\zeta)\Delta_M(\zeta^{-1})}\,,\qquad
C(\zeta)=\frac{B_C(\zeta)}{\Delta_L(\zeta)\Delta_L(\zeta^{-1})\Delta_M(\zeta)\Delta_M(\zeta^{-1})}\,,
\qquad(8.87)
\]
where the numerators are the quasi-polynomial matrices
\[
\begin{aligned}
B_L(\zeta)&=\sum_{k=-\rho}^{\rho}l_k\,\zeta^k\,,\qquad l_{-k}=l_k'\,,\\
B_M(\zeta)&=\sum_{k=-\chi}^{\chi}m_k\,\zeta^k\,,\qquad m_{-k}=m_k'\,,\\
B_C(\zeta)&=\sum_{k=-\mu}^{\nu}c_k\,\zeta^k\,.
\end{aligned}
\qquad(8.88)
\]
Here, $\rho$, $\chi$, $\mu$ and $\nu$ are nonnegative integers and $l_k$, $m_k$, and $c_k$ are constant real matrices. Per construction,
\[
\tilde B_L(\zeta)=B_L(\zeta)\,,\qquad \tilde B_M(\zeta)=B_M(\zeta)\,.
\qquad(8.89)
\]

Remark 8.19. If the standard sampled-data system is modal controllable, then the matrices (8.87) are quasi-polynomial matrices.

Remark 8.20. Hereinafter, the matrix $B_M(\zeta)$ will be called a quasi-polynomial matrix of type 1, and the matrix $B_L(\zeta)$ a quasi-polynomial matrix of type 2.

8.5 Wiener-Hopf Method

1. In this section, we consider a method for the solution of the $H_2$-optimisation problem for the standard sampled-data system that is based on minimisation of the integral in (8.84). Such an approach was previously applied to $H_2$-problems for continuous-time and discrete-time LTI systems, where it was called the Wiener-Hopf method [196, 47, 5, 80].

2. The following theorem provides a substantiation for applying the Wiener-


Hopf method to sampled-data systems.

Theorem 8.21. Suppose the standard sampled-data system is stabilisable and let the IMFDs (8.36) and (8.42) be given. Furthermore, let $(\alpha_{0r}(\zeta),\beta_{0r}(\zeta))$ be any right initial controller, which has a dual left controller, and let the stable rational matrix $\theta^o(\zeta)$ ensure the minimal value $J_{1\min}$ of the integral (8.84). Then the transfer function of the optimal controller $w_d^o(\zeta)$, for which the standard sampled-data system is stable and the functional (8.29) approaches the minimal value, has the form
\[
w_d^o(\zeta)=V_2^o(\zeta)\,\bigl[V_1^o(\zeta)\bigr]^{-1}\,,
\qquad(8.90)
\]
where
\[
V_1^o(\zeta)=\alpha_{0r}(\zeta)-b_r(\zeta)\,\theta^o(\zeta)\,,\qquad
V_2^o(\zeta)=\beta_{0r}(\zeta)-a_r(\zeta)\,\theta^o(\zeta)\,.
\qquad(8.91)
\]
If in addition, we have the IMFDs
\[
\theta^o(\zeta)=D_l^{-1}(\zeta)M_l(\zeta)=M_r(\zeta)D_r^{-1}(\zeta)\,,
\]
then the characteristic polynomial $\Delta^o(\zeta)$ of the optimal system satisfies the relation
\[
\Delta^o(\zeta)\sim\lambda(\zeta)\det D_l(\zeta)\sim\lambda(\zeta)\det D_r(\zeta)\,.
\]

Proof. Since the matrix $\theta^o(\zeta)$ is stable, the rational matrices (8.91) are also stable. Then using the Bezout identity and the ILMFD (8.36), we have
\[
a_l(\zeta)V_1^o(\zeta)-b_l(\zeta)V_2^o(\zeta)
=a_l(\zeta)\alpha_{0r}(\zeta)-b_l(\zeta)\beta_{0r}(\zeta)
-\bigl[a_l(\zeta)b_r(\zeta)-b_l(\zeta)a_r(\zeta)\bigr]\theta^o(\zeta)=I_n\,.
\]
Hence due to Theorem 5.58, $w_d^o(\zeta)$ is the transfer function of a stabilising controller. Using the ILMFDs (8.36) and (8.48), we find Matrix (8.49) as
\[
R_N^o(\zeta)=w_d^o(\zeta)\bigl[I_n-D_N(T,\zeta,0)\,w_d^o(\zeta)\bigr]^{-1}
=\beta_{0r}(\zeta)\,a_l(\zeta)-a_r(\zeta)\,\theta^o(\zeta)\,a_l(\zeta)\,.
\]
For $\zeta=e^{-sT}$, we have
\[
\bar R_N^o(s)=\bar w_d^o(s)\bigl[I_n-\bar D_N(T,s,0)\,\bar w_d^o(s)\bigr]^{-1}
=\bar\beta_{0r}(s)\,\bar a_l(s)-\bar a_r(s)\,\bar\theta^o(s)\,\bar a_l(s)\,.
\]
After substituting this equation into (8.29) and some transformations, the integral (8.29) can be reduced to the form
\[
\|S\|_2^2=J_1^o+J_2\,,
\]
where $J_1^o$ is given by (8.84) for $\theta(\zeta)=\theta^o(\zeta)$ and the value $J_2$ is constant. Per construction, the value $\|S\|_2^2$ is minimal. Therefore, Formula (8.90) gives the transfer matrix of an optimal stabilising controller.

8.6 Algorithm for Realisation of Wiener-Hopf Method


1. According to the aforesaid, we shall consider the problem of minimising the functional (8.84) over the set of stable rational matrices $\theta(\zeta)$, where the matrices $A_L(\zeta)$, $A_M(\zeta)$ and $C(\zeta)$ satisfy Conditions (8.87), (8.88) and (8.89). Moreover, if the stabilisability conditions hold for the system, then the matrices $A_L(\zeta)$, $A_M(\zeta)$, and $C(\zeta)$ do not possess poles on the integration path.

2. The following proposition presents a theoretical basis for the application


of the Wiener-Hopf method to the solution of the H2 -problem.
Lemma 8.22. Let us have a functional of the form
\[
J_w=\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl[
\Xi(\zeta)\theta(\zeta)\Theta(\zeta)\,\tilde\Theta(\zeta)\tilde\theta(\zeta)\tilde\Xi(\zeta)
-\tilde\theta(\zeta)C(\zeta)-\tilde C(\zeta)\theta(\zeta)\Bigr]\,\frac{d\zeta}{\zeta}\,,
\qquad(8.92)
\]
where the integration is performed along the unit circle in positive direction and $\Xi(\zeta)$, $\Theta(\zeta)$ and $C(\zeta)$ are rational matrices having no poles on the integration path. Furthermore, let the matrices $\Xi(\zeta)$, $\Theta(\zeta)$ be invertible and stable together with their inverses. Then, there exists a stable matrix $\theta^o(\zeta)$ that minimises the functional (8.92). The matrix $\theta^o(\zeta)$ can be constructed using the following algorithm:
a) Construct the rational matrix
\[
R(\zeta)=\tilde\Xi^{-1}(\zeta)\,C(\zeta)\,\tilde\Theta^{-1}(\zeta)\,.
\qquad(8.93)
\]
b) Perform the separation
\[
R(\zeta)=R_+(\zeta)+R_-(\zeta)\,,
\qquad(8.94)
\]
where the rational matrix $R_-(\zeta)$ is strictly proper and its poles incorporate all unstable poles of $R(\zeta)$. Such a separation will be called principal separation.
c) The optimal matrix $\theta^o(\zeta)$ is determined by the formula
\[
\theta^o(\zeta)=\Xi^{-1}(\zeta)\,R_+(\zeta)\,\Theta^{-1}(\zeta)\,.
\qquad(8.95)
\]
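For a scalar rational $R(\zeta)$ with simple poles, the principal separation of step b) can be computed directly from the pole-residue expansion. The sketch below treats poles inside the unit circle as the "unstable" ones purely for illustration (which region counts as stable depends on whether the forward or backward variable is used); the function names are hypothetical.

```python
import numpy as np

def principal_separation(poles, residues, direct, unstable):
    """Split R(z) = direct + sum_i residues[i]/(z - poles[i]) into
    R_plus (direct part plus stable poles) and R_minus (strictly
    proper, collecting the unstable poles)."""
    plus = [(p, r) for p, r in zip(poles, residues) if not unstable(p)]
    minus = [(p, r) for p, r in zip(poles, residues) if unstable(p)]
    Rp = lambda z: direct + sum(r / (z - p) for p, r in plus)
    Rm = lambda z: sum(r / (z - p) for p, r in minus)
    return Rp, Rm

# R(z) = 1/((z-0.5)(z-2)) = (-2/3)/(z-0.5) + (2/3)/(z-2)
poles = [0.5, 2.0]
residues = [-2.0 / 3.0, 2.0 / 3.0]
Rp, Rm = principal_separation(poles, residues, 0.0,
                              unstable=lambda p: abs(p) < 1)

z = 1.7j  # any point away from the poles
R = 1.0 / ((z - 0.5) * (z - 2.0))
assert np.isclose(Rp(z) + Rm(z), R)            # separation reproduces R
assert np.isclose(Rm(z), (-2/3) / (z - 0.5))   # R_minus holds the pole at 0.5
```

In the matrix case the same split is applied entry-wise after a partial-fraction expansion.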

Proof. Using (8.5), we have
\[
\operatorname{trace}\bigl[\tilde C(\zeta)\theta(\zeta)\bigr]
=\operatorname{trace}\bigl[\tilde C(\zeta)\theta(\zeta)\Theta(\zeta)\Theta^{-1}(\zeta)\bigr]
=\operatorname{trace}\bigl[\Theta^{-1}(\zeta)\tilde C(\zeta)\,\Xi^{-1}(\zeta)\cdot\Xi(\zeta)\theta(\zeta)\Theta(\zeta)\bigr]
=\operatorname{trace}\bigl[\tilde R(\zeta)\,\Xi(\zeta)\theta(\zeta)\Theta(\zeta)\bigr]
\]
and, similarly, $\operatorname{trace}[\tilde\theta(\zeta)C(\zeta)]
=\operatorname{trace}[\tilde\Theta(\zeta)\tilde\theta(\zeta)\tilde\Xi(\zeta)\,R(\zeta)]$. Therefore, the functional (8.92) can be represented in the form
\[
J_w=\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl[
\Xi(\zeta)\theta(\zeta)\Theta(\zeta)\,\tilde\Theta(\zeta)\tilde\theta(\zeta)\tilde\Xi(\zeta)
-\tilde\Theta(\zeta)\tilde\theta(\zeta)\tilde\Xi(\zeta)\,R(\zeta)
-\tilde R(\zeta)\,\Xi(\zeta)\theta(\zeta)\Theta(\zeta)\Bigr]\,\frac{d\zeta}{\zeta}\,,
\qquad(8.96)
\]
where $R(\zeta)=\tilde\Xi^{-1}(\zeta)C(\zeta)\tilde\Theta^{-1}(\zeta)$. The identity
\[
\tilde\Theta\tilde\theta\tilde\Xi\,\Xi\theta\Theta-\tilde\Theta\tilde\theta\tilde\Xi\,R-\tilde R\,\Xi\theta\Theta
=\bigl[\Xi\theta\Theta-R\bigr]^{\sim}\bigl[\Xi\theta\Theta-R\bigr]-\tilde R\,R
\]
can easily be verified. Using the separation (8.94), the last relation can be transformed into
\[
\begin{aligned}
\tilde\Theta\tilde\theta\tilde\Xi\,\Xi\theta\Theta-\tilde\Theta\tilde\theta\tilde\Xi\,R-\tilde R\,\Xi\theta\Theta
&=\bigl[\Xi\theta\Theta-R_+\bigr]^{\sim}\bigl[\Xi\theta\Theta-R_+\bigr]+\tilde R_-R_-\\
&\quad-\tilde R_-\bigl[\Xi\theta\Theta-R_+\bigr]
-\bigl[\Xi\theta\Theta-R_+\bigr]^{\sim}R_--\tilde R\,R\,.
\end{aligned}
\]
Hence the functional (8.96) can be written in the form
\[
J_w=J_{w1}+J_{w2}+J_{w3}+J_{w4}\,,
\]
where
\[
\begin{aligned}
J_{w1}&=\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl\{\bigl[\Xi\theta\Theta-R_+\bigr]^{\sim}\bigl[\Xi\theta\Theta-R_+\bigr]\Bigr\}\,\frac{d\zeta}{\zeta}\,,\\
J_{w2}&=-\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl\{\tilde R_-(\zeta)\bigl[\Xi\theta\Theta-R_+\bigr]\Bigr\}\,\frac{d\zeta}{\zeta}\,,\\
J_{w3}&=-\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl\{\bigl[\Xi\theta\Theta-R_+\bigr]^{\sim}R_-(\zeta)\Bigr\}\,\frac{d\zeta}{\zeta}\,,\\
J_{w4}&=\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl\{\tilde R_-(\zeta)R_-(\zeta)-\tilde R(\zeta)R(\zeta)\Bigr\}\,\frac{d\zeta}{\zeta}\,.
\end{aligned}
\qquad(8.97)
\]
The integral $J_{w4}$ is independent of $\theta(\zeta)$. As for the scalar case [146], it can be shown that $J_{w2}=J_{w3}=0$. The integral $J_{w1}$ is nonnegative, and its minimal value $J_{w1}=0$ is reached for (8.95).

Corollary 8.23. The minimal value of the integral (8.92) is
\[
J_{w\min}=J_{w4}\,.
\]

3. Using Lemma 8.22, we can formulate a proposition deriving a solution to the $H_2$-optimisation problem for the standard sampled-data system.
Theorem 8.24. Let the quasi-polynomial matrices $B_L(\zeta)$ and $B_M(\zeta)$ in (8.88) admit the factorisations
\[
B_L(\zeta)=\tilde\Lambda_L(\zeta)\,\Lambda_L(\zeta)\,,\qquad
B_M(\zeta)=\Lambda_M(\zeta)\,\tilde\Lambda_M(\zeta)\,,
\qquad(8.98)
\]
where $\Lambda_L(\zeta)$ and $\Lambda_M(\zeta)$ are invertible real stable polynomial matrices. Let also Condition (8.66) hold. Then the optimal matrix $\theta^o(\zeta)$ can be found using the following algorithm:
a) Construct the matrix
\[
R(\zeta)=\frac{\tilde\Lambda_L^{-1}(\zeta)\,B_C(\zeta)\,\tilde\Lambda_M^{-1}(\zeta)}{\Delta_L(\zeta)\Delta_M(\zeta)}
=\frac{\tilde\Lambda_L^{-1}(\zeta)\,B_C(\zeta)\,\tilde\Lambda_M^{-1}(\zeta)}{\Delta(\zeta)}\,.
\qquad(8.99)
\]
b) Perform the principal separation
\[
R(\zeta)=R_+(\zeta)+R_-(\zeta)\,,
\]
where
\[
R_+(\zeta)=\frac{\hat R_+(\zeta)}{\Delta_L(\zeta)\Delta_M(\zeta)}=\frac{\hat R_+(\zeta)}{\Delta(\zeta)}
\qquad(8.100)
\]
with a polynomial matrix $\hat R_+(\zeta)$.
c) The optimal system function $\theta^o(\zeta)$ is given by the formula
\[
\theta^o(\zeta)=\Lambda_L^{-1}(\zeta)\,\hat R_+(\zeta)\,\Lambda_M^{-1}(\zeta)\,.
\qquad(8.101)
\]
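In the scalar case the factorisation (8.98) reduces to classical spectral factorisation: a symmetric quasi-polynomial $B(\zeta)$ with $b_{-k}=b_k$ splits into $\tilde\lambda(\zeta)\lambda(\zeta)$ by grouping its reciprocal root pairs. The sketch below chooses the factor with roots outside the unit circle — an assumption made only for this illustration — and its helper name is hypothetical.

```python
import numpy as np

def spectral_factor(b):
    """Scalar spectral factorisation of a symmetric quasi-polynomial
    B(z) with real coefficients b_{-k} = b_k, passed as the coefficient
    list of z**n * B(z) (lowest power first).  Returns lam (highest
    power first) with B(z) = lam(1/z) * lam(z); lam is chosen, for this
    sketch, to have all roots outside the unit circle."""
    roots = np.roots(b[::-1])              # np.roots wants highest power first
    lam = np.poly([r for r in roots if abs(r) > 1])   # monic factor
    # fix the gain by matching B at a sample point
    z0 = 3.0
    Bz0 = sum(c * z0**(k - len(b)//2) for k, c in enumerate(b))
    g = Bz0 / (np.polyval(lam, 1/z0) * np.polyval(lam, z0))
    return np.sqrt(g) * lam

# B(z) = -2/z + 5 - 2z = (2 - 1/z)(2 - z);  z*B(z) has coefficients [-2, 5, -2]
lam = spectral_factor([-2.0, 5.0, -2.0])
z = 1.3 + 0.4j
B = -2/z + 5 - 2*z
assert np.isclose(np.polyval(lam, 1/z) * np.polyval(lam, z), B)
```

For matrix-valued $B_L$, $B_M$ a genuine polynomial-matrix spectral factorisation algorithm is required; the scalar computation only shows the root-splitting idea.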

Proof. Let the factorisations (8.98) hold. Since for the stabilisability of the system the polynomials $\Delta_L(\zeta)$ and $\Delta_M(\zeta)$ must be stable, the following factorisations hold:
\[
A_L(\zeta)=\tilde\Xi(\zeta)\,\Xi(\zeta)\,,\qquad
A_M(\zeta)=\Theta(\zeta)\,\tilde\Theta(\zeta)\,,
\qquad(8.102)
\]
where
\[
\Xi(\zeta)=\frac{\Lambda_L(\zeta)}{\Delta_L(\zeta)}\,,\qquad
\Theta(\zeta)=\frac{\Lambda_M(\zeta)}{\Delta_M(\zeta)}
\qquad(8.103)
\]
are rational matrices, which are stable together with their inverses. From (8.103) we also have
\[
\tilde\Xi(\zeta)=\frac{\tilde\Lambda_L(\zeta)}{\Delta_L(\zeta^{-1})}\,,\qquad
\tilde\Theta(\zeta)=\frac{\tilde\Lambda_M(\zeta)}{\Delta_M(\zeta^{-1})}\,.
\qquad(8.104)
\]
Regarding (8.102)-(8.104), the integral (8.84) can be represented in the form (8.92) with
\[
C(\zeta)=\frac{B_C(\zeta)}{\Delta_L(\zeta)\Delta_L(\zeta^{-1})\Delta_M(\zeta)\Delta_M(\zeta^{-1})}
=\frac{B_C(\zeta)}{\Delta(\zeta)\Delta(\zeta^{-1})}\,,\qquad
\tilde C(\zeta)=\frac{B_C'(\zeta^{-1})}{\Delta(\zeta)\Delta(\zeta^{-1})}
=\frac{\tilde B_C(\zeta)}{\Delta(\zeta)\Delta(\zeta^{-1})}\,.
\]
Then the matrix $R(\zeta)$ in (8.93) appears to be equal to (8.99). The matrices $\tilde\Lambda_L^{-1}(\zeta)$ and $\tilde\Lambda_M^{-1}(\zeta)$ can have only unstable poles, because the polynomial matrices $\Lambda_L(\zeta)$ and $\Lambda_M(\zeta)$ are stable. The matrix $B_C(\zeta)$ can have an unstable pole only at $\zeta=0$. Therefore, the set of stable poles of Matrix (8.99) belongs to the set of roots of the polynomial $\Delta(\zeta)=\Delta_L(\zeta)\Delta_M(\zeta)$. Hence the matrix $R_+(\zeta)$ in the principal separation (8.94) has the form (8.100), where $\hat R_+(\zeta)$ is a polynomial matrix. Equation (8.101) can be derived from (8.100), (8.103), and (8.95).

Corollary 8.25. If the system is modal controllable, then $R_+(\zeta)$ is a polynomial matrix.

Corollary 8.26. The characteristic polynomial of a modal controllable closed-loop system $\Delta_d(\zeta)\sim\det D_l(\zeta)\sim\det D_r(\zeta)$ is a divisor of the polynomial $\det\Lambda_L(\zeta)\det\Lambda_M(\zeta)$. Hereby, if the right-hand side of (8.101) is an irreducible DMFD (this is often the case in applications), then
\[
\Delta_d(\zeta)\sim\det\Lambda_L(\zeta)\,\det\Lambda_M(\zeta)\,.
\]

4. From (8.76) and (8.97), it follows that the minimal value of $\|S\|_2^2$ is
\[
\|S\|_2^2=\frac{1}{2\pi j}\oint \operatorname{trace}\Bigl[\tilde R_-(\zeta)R_-(\zeta)-\tilde R(\zeta)R(\zeta)\Bigr]\,\frac{d\zeta}{\zeta}
+\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\operatorname{trace}\,g_4(s)\,ds\,.
\]

8.7 Modified Optimisation Algorithm

1. The method for solving the $H_2$-problem described in Section 8.5 requires, for given IMFDs (8.36) and (8.42) of the plant, that the basic controller $[\alpha_{0r}(\zeta),\beta_{0r}(\zeta)]$ be found beforehand. This causes some numerical difficulties. In the present section, we describe a modified optimisation procedure, which does not need the basic controller. This method will be called the modified Wiener-Hopf method.

2. Let $F(\zeta)$ be a rational matrix and $F_+(\zeta)$, $F_-(\zeta)$ be the results of the principal separation (8.94). Then for the matrix $F_-(\zeta)$, we shall use the notation
\[
\{F(\zeta)\}_-=F_-(\zeta)\,.
\]
Obviously,
\[
\{F_1(\zeta)+F_2(\zeta)\}_-=\{F_1(\zeta)\}_-+\{F_2(\zeta)\}_-\,.
\]

3. Consider Matrix (8.93) in detail. Using the above relations as well as (8.86) and (8.102), we find
\[
R(\zeta)=\tilde\Xi^{-1}(\zeta)\,C(\zeta)\,\tilde\Theta^{-1}(\zeta)=R_1(\zeta)+R_2(\zeta)\,,
\qquad(8.105)
\]
where
\[
R_1(\zeta)=\frac{1}{T}\,\tilde\Xi^{-1}(\zeta)\,\tilde a_r(\zeta)\,D_{L'L}(T,\zeta,0)\,\beta_{0r}(\zeta)\,A_M(\zeta)\,\tilde\Theta^{-1}(\zeta)\,,
\qquad(8.106)
\]
\[
R_2(\zeta)=\frac{1}{T}\,\tilde\Xi^{-1}(\zeta)\,\tilde a_r(\zeta)\,D_{L'KM'}(T,\zeta,0)\,\tilde a_l(\zeta)\,\tilde\Theta^{-1}(\zeta)\,.
\qquad(8.107)
\]
Since
\[
\frac{1}{T}\,\tilde a_r(\zeta)\,D_{L'L}(T,\zeta,0)\,a_r(\zeta)=A_L(\zeta)\,,
\qquad(8.108)
\]
Matrix (8.106) can be written in the form
\[
R_1(\zeta)=\tilde\Xi^{-1}(\zeta)\,A_L(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,A_M(\zeta)\,\tilde\Theta^{-1}(\zeta)\,.
\qquad(8.109)
\]
With respect to (8.102), we obtain
\[
R_1(\zeta)=\Xi(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,\Theta(\zeta)\,.
\qquad(8.110)
\]
On the basis of (8.105)-(8.110), the following lemma can be proved.


Lemma 8.27. In the principal separation
\[
R(\zeta)=R_+(\zeta)+R_-(\zeta)\,,
\qquad(8.111)
\]
the matrix $R_-(\zeta)$ is independent of the choice of the basic controller.

Proof. If we choose another right initial controller with a matrix $\beta_{0r}'(\zeta)$, we obtain a new matrix $R'(\zeta)$ of the form
\[
R'(\zeta)=R_1'(\zeta)+R_2(\zeta)
\]
with
\[
R_1'(\zeta)=\Xi(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}'(\zeta)\,\Theta(\zeta)\,,
\qquad(8.112)
\]
where the matrix $R_2(\zeta)$ is the same as in (8.105). Therefore, to prove the lemma, it is sufficient to show
\[
\{R_1'(\zeta)\}_-=\{R_1(\zeta)\}_-\,.
\qquad(8.113)
\]
But from (4.39), it follows that
\[
\beta_{0r}'(\zeta)=\beta_{0r}(\zeta)-a_r(\zeta)\,Q(\zeta)
\qquad(8.114)
\]
with a polynomial matrix $Q(\zeta)$. Substituting this formula into (8.112), we find
\[
R_1'(\zeta)=R_1(\zeta)-\Xi(\zeta)\,Q(\zeta)\,\Theta(\zeta)\,,
\]
where the second term on the right-hand side is a stable matrix, because the matrices $\Xi(\zeta)$ and $\Theta(\zeta)$ are stable. Therefore, (8.113) holds.

4.
Lemma 8.28. The transfer matrix $w_d^o(\zeta)$ of the optimal controller is independent of the choice of the initial controller.

Proof. Using (8.105)-(8.111), we have
\[
R_+(\zeta)=R(\zeta)-R_-(\zeta)
=\Xi(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,\Theta(\zeta)+\Omega(\zeta)\,,
\qquad(8.115)
\]
where
\[
\Omega(\zeta)=\frac{1}{T}\,\tilde\Xi^{-1}(\zeta)\,\tilde a_r(\zeta)\,D_{L'KM'}(T,\zeta,0)\,\tilde a_l(\zeta)\,\tilde\Theta^{-1}(\zeta)-R_-(\zeta)\,.
\qquad(8.116)
\]
Using Lemma 8.27 and (8.116), we find that the matrix $\Omega(\zeta)$ does not depend on the choice of the initial controller. Hence only the first term on the right-hand side of (8.115) depends on the initial controller. From (8.115) and (8.95), we find the optimal system function
\[
\theta^o(\zeta)=a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)+\Xi^{-1}(\zeta)\,\Omega(\zeta)\,\Theta^{-1}(\zeta)\,.
\]
Using (8.91), we obtain the optimal matrices $V_1^o(\zeta)$ and $V_2^o(\zeta)$:
\[
V_2^o(\zeta)=\beta_{0r}(\zeta)-a_r(\zeta)\,\theta^o(\zeta)=-a_r(\zeta)\,\Xi^{-1}(\zeta)\,\Omega(\zeta)\,\Theta^{-1}(\zeta)
\qquad(8.117)
\]
and
\[
V_1^o(\zeta)=\alpha_{0r}(\zeta)-b_r(\zeta)\,\theta^o(\zeta)
=\alpha_{0r}(\zeta)-b_r(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)-b_r(\zeta)\,\Xi^{-1}(\zeta)\,\Omega(\zeta)\,\Theta^{-1}(\zeta)\,.
\]
The Bezout identity guarantees
\[
\alpha_{0r}(\zeta)-b_r(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)
=\alpha_{0r}(\zeta)-a_l^{-1}(\zeta)\,b_l(\zeta)\,\beta_{0r}(\zeta)
=a_l^{-1}(\zeta)\bigl[a_l(\zeta)\alpha_{0r}(\zeta)-b_l(\zeta)\beta_{0r}(\zeta)\bigr]=a_l^{-1}(\zeta)\,.
\]
Together these relations yield
\[
V_1^o(\zeta)=a_l^{-1}(\zeta)-b_r(\zeta)\,\Xi^{-1}(\zeta)\,\Omega(\zeta)\,\Theta^{-1}(\zeta)\,.
\qquad(8.118)
\]
The matrices (8.117) and (8.118) are independent of the matrix $\beta_{0r}(\zeta)$. Then using (8.90), we can find an expression for the transfer matrix of the optimal controller that is independent of $\beta_{0r}(\zeta)$.

5. As follows from Lemma 8.28, we can find an expression for the optimal matrix $w_d^o(\zeta)$, when we manage to find an expression for $R_-(\zeta)$ that is independent of $\beta_{0r}$. We will show a possibility for deriving such an expression. Assume that the matrix
\[
\Phi_l(\zeta)=b_l(\zeta)\,b_l'(\zeta^{-1})=b_l(\zeta)\,\tilde b_l(\zeta)
\]
is invertible and the poles of the matrix
\[
\Psi_l(\zeta)=\tilde b_l(\zeta)\,\bigl[b_l(\zeta)\,\tilde b_l(\zeta)\bigr]^{-1}
\qquad(8.119)
\]
coincide neither with eigenvalues of the matrix $a_r(\zeta)$ nor with poles of the matrices $\Xi(\zeta)$ and $\Theta(\zeta)$. Then, Equation (8.110) can be written in the form
\[
R_1(\zeta)=\Xi(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,b_l(\zeta)\,\tilde b_l(\zeta)\bigl[b_l(\zeta)\tilde b_l(\zeta)\bigr]^{-1}\Theta(\zeta)\,.
\qquad(8.120)
\]
Due to the inverse Bezout identity (4.32), we have
\[
\beta_{0r}(\zeta)\,b_l(\zeta)+a_r(\zeta)\,\alpha_{0l}(\zeta)=I_m\,.
\]
Thus,
\[
a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,b_l(\zeta)=a_r^{-1}(\zeta)-\alpha_{0l}(\zeta)\,.
\]
Hence Equation (8.120) yields
\[
R_1(\zeta)=R_{11}(\zeta)+R_{12}(\zeta)\,,
\]
where
\[
R_{11}(\zeta)=\Xi(\zeta)\,a_r^{-1}(\zeta)\,\tilde b_l(\zeta)\bigl[b_l(\zeta)\tilde b_l(\zeta)\bigr]^{-1}\Theta(\zeta)
=\Xi(\zeta)\,a_r^{-1}(\zeta)\,\Psi_l(\zeta)\,\Theta(\zeta)\,,
\qquad(8.121)
\]
\[
R_{12}(\zeta)=-\Xi(\zeta)\,\alpha_{0l}(\zeta)\,\tilde b_l(\zeta)\bigl[b_l(\zeta)\tilde b_l(\zeta)\bigr]^{-1}\Theta(\zeta)
=-\Xi(\zeta)\,\alpha_{0l}(\zeta)\,\Psi_l(\zeta)\,\Theta(\zeta)\,.
\qquad(8.122)
\]
Consider the separation
\[
R_{11}(\zeta)=R_{11}^a(\zeta)+R_{11}^-(\zeta)+R_{11}^+(\zeta)\,,
\qquad(8.123)
\]
where $R_{11}^a(\zeta)$ is a strictly proper function, whose poles are the unstable eigenvalues of the matrix $a_r(\zeta)$; $R_{11}^-(\zeta)$ is a strictly proper function, whose poles are the poles of Matrix (8.119); and $R_{11}^+(\zeta)$ is a rational function, whose poles are the stable eigenvalues of the matrix $a_r(\zeta)$ as well as the poles of the matrices $\Xi(\zeta)$ and $\Theta(\zeta)$. Similarly to (8.123), for (8.122) we find
\[
R_{12}(\zeta)=R_{12}^-(\zeta)+R_{12}^+(\zeta)\,,
\qquad(8.124)
\]
where $R_{12}^-(\zeta)$ is a strictly proper function, whose poles are the poles of Matrix (8.119), and $R_{12}^+(\zeta)$ is a stable rational matrix. Summing up Equations (8.123) and (8.124), we obtain
\[
R_1(\zeta)=R_{11}(\zeta)+R_{12}(\zeta)
=R_{11}^a(\zeta)+R_{11}^-(\zeta)+R_{12}^-(\zeta)+R_{11}^+(\zeta)+R_{12}^+(\zeta)\,.
\]
But
\[
R_{11}^-(\zeta)+R_{12}^-(\zeta)=O_{mn}\,,
\]
because from (8.110), it follows that under the given assumptions, the matrix $R_1(\zeta)$ has no poles that are simultaneously poles of the matrix $\Psi_l(\zeta)$. Then,
\[
\{R_1(\zeta)\}_-=R_{11}^a(\zeta)\,,
\]
where the matrix on the right-hand side is independent of the choice of the initial controller. Using the last relation and (8.105), we obtain
\[
R_-(\zeta)=\{R_1(\zeta)\}_-+\{R_2(\zeta)\}_-=R_{11}^a(\zeta)+\{R_2(\zeta)\}_-\,.
\]
Per construction, this matrix is also independent of the choice of the initial controller. Substituting the last equation into (8.116) and using (8.117), (8.118) and (8.90), an expression can be derived for the optimal transfer matrix $w_d^o(\zeta)$ that is independent of the choice of the initial controller.

6. A similar approach to the modified optimisation method can be proposed for the case when the matrix $\tilde b_r(\zeta)\,b_r(\zeta)$ is invertible, where $b_r(\zeta)$ is the polynomial matrix appearing in the IRMFD (8.42). In this case, (8.110) can be written in the form
\[
R_1(\zeta)=\Xi(\zeta)\bigl[\tilde b_r(\zeta)b_r(\zeta)\bigr]^{-1}\tilde b_r(\zeta)\,b_r(\zeta)\,a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)\,\Theta(\zeta)\,.
\]
Due to the inverse Bezout identity, we have
\[
a_r^{-1}(\zeta)\,\beta_{0r}(\zeta)=\beta_{0l}(\zeta)\,a_l^{-1}(\zeta)\,,\qquad
\alpha_{0r}(\zeta)\,a_l(\zeta)-b_r(\zeta)\,\beta_{0l}(\zeta)=I_n\,,
\]
whence
\[
b_r(\zeta)\,\beta_{0l}(\zeta)\,a_l^{-1}(\zeta)=\alpha_{0r}(\zeta)-a_l^{-1}(\zeta)\,.
\]
Using these relations, we obtain
\[
R_1(\zeta)=\Xi(\zeta)\bigl[\tilde b_r(\zeta)b_r(\zeta)\bigr]^{-1}\tilde b_r(\zeta)\,\alpha_{0r}(\zeta)\,\Theta(\zeta)
-\Xi(\zeta)\bigl[\tilde b_r(\zeta)b_r(\zeta)\bigr]^{-1}\tilde b_r(\zeta)\,a_l^{-1}(\zeta)\,\Theta(\zeta)\,.
\]
Assuming that no pole of the matrix
\[
\Psi_r(\zeta)=\bigl[\tilde b_r(\zeta)\,b_r(\zeta)\bigr]^{-1}\tilde b_r(\zeta)
\]
coincides with an eigenvalue of the matrix $a_l(\zeta)$ or with a pole of the matrices $\Xi(\zeta)$ or $\Theta(\zeta)$, we find that the further procedure of constructing the optimal controller is similar to that described above.

8.8 Transformation to Forward Model

1. To make the reading easier, we will use some additional terminology and notation. The optimisation algorithms described above make it possible to find the optimal system matrix $\theta^o(\zeta)$. Using the ILMFD
\[
\theta^o(\zeta)=D_l^{-1}(\zeta)\,M_l(\zeta)=M_r(\zeta)\,D_r^{-1}(\zeta)
\]
and Formulae (8.41), (8.44) and (8.46), we are able to construct the transfer matrix of the optimal controller $w_{db}^o(\zeta)$, which will be called the backward transfer matrix. Using the ILMFD
\[
w_{db}^o(\zeta)=\alpha_l^{-1}(\zeta)\,\beta_l(\zeta)\,,
\]
the matrix
\[
Q_b(\zeta,\alpha_l,\beta_l)=
\begin{bmatrix}
I-\zeta e^{AT} & O & -\zeta e^{AT}\mu(A)B_2\\
-C_2 & I_n & O_{nm}\\
O_m & -\beta_l(\zeta) & \alpha_l(\zeta)
\end{bmatrix}
\qquad(8.125)
\]
can be constructed. This matrix is called the backward characteristic matrix. As shown above, the characteristic polynomial of the backward model $\Delta_b(\zeta)$ is determined by the relation
\[
\Delta_b(\zeta)=\det Q_b(\zeta,\alpha_l,\beta_l)=\lambda_b(\zeta)\,d_b(\zeta)\,,
\qquad(8.126)
\]
where $\lambda_b(\zeta)$ and $d_b(\zeta)$ are polynomials. Especially, we know
\[
d_b(\zeta)=\det
\begin{bmatrix}
a_l(\zeta) & b_l(\zeta)\\
\beta_l(\zeta) & \alpha_l(\zeta)
\end{bmatrix}\,,
\qquad(8.127)
\]
where the matrices $a_l(\zeta)$ and $b_l(\zeta)$ are given by the ILMFD
\[
w_N(\zeta)=\zeta\,C_2\bigl(I-\zeta e^{AT}\bigr)^{-1}e^{AT}\mu(A)B_2=a_l^{-1}(\zeta)\,b_l(\zeta)\,.
\qquad(8.128)
\]
Moreover, the polynomial $\lambda_b(\zeta)$ appearing in (8.126) is determined by the relation
\[
\lambda_b(\zeta)=\frac{\det(I-\zeta e^{AT})}{\det a_l(\zeta)}
\qquad(8.129)
\]
and is independent of the choice of the controller. Further, the value
\[
n_b=\operatorname{ord}Q_b(\zeta,\alpha_l,\beta_l)=\deg\det Q_b(\zeta,\alpha_l,\beta_l)=\deg\lambda_b(\zeta)+\deg d_b(\zeta)
\]
will be called the order of the optimal backward model. As shown above,
\[
\deg\det D_l(\zeta)=\deg d_b(\zeta)\,.
\]

2. For applied calculations and simulation, it is often convenient to use the forward system model instead of the backward model. In the present section we consider the realisation of such a transformation and investigate some general properties of the forward model. Hereinafter, the matrix
\[
w_{df}^o(z)=w_{db}^o(z^{-1})
\]

will be called the forward transfer function of the optimal controller. Using the ILMFD
\[
w_{df}^o(z)=\alpha_f^{-1}(z)\,\beta_f(z)\,,
\]
a controllable forward model of the optimal discrete controller is found:
\[
\alpha_f(q)\,\psi_k=\beta_f(q)\,y_k\,.
\]
Together with (7.16), this equation determines a discrete forward model of the optimal system
\[
q\,v_k=e^{AT}v_k+e^{AT}\mu(A)B_2\,\psi_k+g_k\,,\qquad
y_k=C_2\,v_k\,,\qquad
\alpha_f(q)\,\psi_k=\beta_f(q)\,y_k\,.
\]
These difference equations are associated with the matrix
\[
Q_f(z,\alpha_f,\beta_f)=
\begin{bmatrix}
zI-e^{AT} & O & -e^{AT}\mu(A)B_2\\
-C_2 & I_n & O_{nm}\\
O_m & -\beta_f(z) & \alpha_f(z)
\end{bmatrix}
\qquad(8.130)
\]
that will be called the forward characteristic matrix of the optimal system. Below, we formulate some propositions determining properties of the characteristic matrices (8.125) and (8.130).

3. Similarly to (8.126)-(8.129), it can be shown that the polynomial
\[
\Delta_f(z)=\det Q_f(z,\alpha_f,\beta_f)\,,
\]
which is called the characteristic polynomial of the forward model, satisfies the relation
\[
\Delta_f(z)=\lambda_f(z)\,d_f(z)\,,
\qquad(8.131)
\]
where $\lambda_f(z)$ and $d_f(z)$ are polynomials. Moreover,
\[
d_f(z)=\det
\begin{bmatrix}
a_f(z) & b_f(z)\\
\beta_f(z) & \alpha_f(z)
\end{bmatrix}\,,
\qquad(8.132)
\]
where the matrices $a_f(z)$ and $b_f(z)$ are determined by the ILMFD
\[
w_f(z)=C_2\bigl(zI-e^{AT}\bigr)^{-1}e^{AT}\mu(A)B_2=a_f^{-1}(z)\,b_f(z)\,.
\qquad(8.133)
\]
The polynomial $\lambda_f(z)$ appearing in (8.131) satisfies the equation
\[
\lambda_f(z)=\frac{\det(zI-e^{AT})}{\det a_f(z)}\,.
\qquad(8.134)
\]
The value
\[
n_f=\operatorname{ord}Q_f(z,\alpha_f,\beta_f)=\deg\det Q_f(z,\alpha_f,\beta_f)=\deg\lambda_f(z)+\deg d_f(z)
\]
will be called the order of the optimal forward model.
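For a scalar fraction, the passage from the backward to the forward transfer function, $w_{df}(z)=w_{db}(z^{-1})$, amounts to substituting $\zeta=z^{-1}$ and clearing negative powers, i.e. to padding numerator and denominator to a common length and reading the lists in reversed order. A minimal sketch under that assumption:

```python
import numpy as np

def backward_to_forward(num_b, den_b):
    """w_b(zeta) = num_b(zeta)/den_b(zeta), coefficient lists in
    increasing powers of zeta.  Substituting zeta = 1/z and multiplying
    numerator and denominator by z**(n-1) makes each padded list, read
    as highest power of z first, the forward fraction w_f(z) = w_b(1/z)."""
    n = max(len(num_b), len(den_b))
    nb = np.pad(np.asarray(num_b, float), (0, n - len(num_b)))
    db = np.pad(np.asarray(den_b, float), (0, n - len(den_b)))
    return nb, db  # highest power of z first

# w_b(zeta) = (1 + 2 zeta)/(1 - 0.5 zeta)  ->  w_f(z) = (z + 2)/(z - 0.5)
num_f, den_f = backward_to_forward([1.0, 2.0], [1.0, -0.5])
z = 1.9
wf = np.polyval(num_f, z) / np.polyval(den_f, z)
wb = (1 + 2/z) / (1 - 0.5/z)
assert np.isclose(wf, wb)  # w_f(z) = w_b(1/z)
```

The matrix case proceeds analogously on each MFD factor.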

4. A connection between the polynomials (8.129) and (8.134) is determined


by the following lemma.
Lemma 8.29. The following equation holds:

= deg b () = deg f (z) . (8.135)

Moreover,
f (z) z b (z 1 ) , b () f ( 1 ) . (8.136)
Proof. Since Matrix (8.133) is strictly proper, there exists a minimal standard
realisation in the form

wf (z) = C (zIq U )1 B , q .

Then for the ILMFD on the right-hand side of (8.133), we have

det af (z) det(zIq U ) . (8.137)

Hence
det(zI eAT )
f (z) . (8.138)
det(zIq U )
From (8.138), it follows that the matrix U is nonsingular (this is a consequence
of the non-singularity of the matrix eAT ). Comparing (8.128) with (8.133), we
nd
wN () = C (Iq U )1 B ,
where the PMD (Iq U, B , C ) is irreducible due to Lemma 5.35. Thus
for the ILMFD (8.128), we obtain

det al () det(Iq U ) , (8.139)

where
ord al () = deg det(Iq U ) = q ,
because the matrix U is nonsingular. From (8.129) and (8.139), we obtain

det(I eAT )
b ()
det(Iq U )

and comparing this with (8.138) results in (8.135) and (8.136).
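Relation (8.136) says that $\lambda_b$ is, up to a constant factor, the reciprocal polynomial of $\lambda_f$: for $\lambda_f(z)=\sum_k c_k z^k$, the polynomial $z^{\kappa}\lambda_f(1/z)$ simply has the coefficient list reversed. A quick numerical confirmation on arbitrary illustration coefficients:

```python
import numpy as np

def reciprocal(coeffs):
    """Reciprocal polynomial: if p(z) has the given coefficients
    (highest power first), return z**deg(p) * p(1/z), which is the
    reversed coefficient list."""
    return np.asarray(coeffs)[::-1]

lam_f = np.array([2.0, -3.0, 1.0])     # 2z^2 - 3z + 1
lam_b = reciprocal(lam_f)              # z^2 - 3z + 2

zeta = 0.7
lhs = np.polyval(lam_b, zeta)
rhs = zeta**2 * np.polyval(lam_f, 1/zeta)
assert np.isclose(lhs, rhs)   # lam_b(zeta) = zeta^kappa * lam_f(1/zeta)
```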

5. The following propositions determine a property of the matrices
\[
Q_{1b}(\zeta)=
\begin{bmatrix}
a_l(\zeta) & b_l(\zeta)\\
\beta_l(\zeta) & \alpha_l(\zeta)
\end{bmatrix}\,,\qquad
Q_{1f}(z)=
\begin{bmatrix}
a_f(z) & b_f(z)\\
\beta_f(z) & \alpha_f(z)
\end{bmatrix}\,.
\]
Denote
\[
\rho_f=\deg\det\alpha_f(z)\,,\qquad
\rho_b=\deg\det\alpha_l(\zeta)\,.
\]

Lemma 8.30. The following equation holds:
\[
\operatorname{ord}Q_{1f}(z)=q+\rho_f\,.
\qquad(8.140)
\]
Proof. Applying Formula (4.12) to $Q_{1f}(z)$, we obtain
\[
\det Q_{1f}(z)=\det a_f(z)\,\det\alpha_f(z)\,\det\bigl[I_m-w_{df}^o(z)\,w_f(z)\bigr]\,.
\qquad(8.141)
\]
Since the optimal controller $(\alpha_l(\zeta),\beta_l(\zeta))$ is stabilising, it is causal, i.e., the backward transfer function $w_{db}^o(\zeta)$ is analytical at the point $\zeta=0$. Hence the forward transfer matrix of the controller $w_{df}^o(z)$ is at least proper. Moreover, the matrix $w_f(z)$ is strictly proper. Thus, the product $w_{df}^o(z)\,w_f(z)$ is a strictly proper matrix, and the rational fraction $\det\bigl[I_m-w_{df}^o(z)w_f(z)\bigr]$ is proper, because of
\[
\lim_{z\to\infty}\det\bigl[I_m-w_{df}^o(z)\,w_f(z)\bigr]=1\,.
\]
Thus, it follows that there exists a representation of the form
\[
\det\bigl[I_m-w_{df}^o(z)\,w_f(z)\bigr]
=\frac{z^{\tau}+b_1z^{\tau-1}+\ldots}{z^{\tau}+a_1z^{\tau-1}+\ldots}\,,
\]
where $\tau$ is a nonnegative integer. Substituting this into (8.141), we find
\[
\det Q_{1f}(z)=\det a_f(z)\,\det\alpha_f(z)\,
\frac{z^{\tau}+b_1z^{\tau-1}+\ldots}{z^{\tau}+a_1z^{\tau-1}+\ldots}\,.
\]
Since the right-hand side is a polynomial, we obtain
\[
\deg\det Q_{1f}(z)=\deg\det a_f(z)+\deg\det\alpha_f(z)\,.
\]
With regard to (8.137), this equation is equivalent to (8.140).
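Formula (8.141) and the resulting degree count can be checked on a toy scalar example with $a_f(z)=z-0.5$, $b_f=1$, $\alpha_f(z)=z-0.2$, $\beta_f=0.3$ (so $q=\rho_f=1$); all numbers here are arbitrary illustration values:

```python
import numpy as np

# Q_1f(z) = [[z - 0.5, 1.0], [0.3, z - 0.2]]
a_f = np.array([1.0, -0.5])      # z - 0.5   (q = 1)
alpha_f = np.array([1.0, -0.2])  # z - 0.2   (rho_f = 1)

# det Q_1f = (z - 0.5)(z - 0.2) - 0.3 * 1
det_Q1f = np.polysub(np.polymul(a_f, alpha_f), np.array([0.3]))

q, rho_f = len(a_f) - 1, len(alpha_f) - 1
assert len(det_Q1f) - 1 == q + rho_f     # ord Q_1f = q + rho_f, cf. (8.140)

# the same determinant via det a_f * det alpha_f * (1 - w_df(z) * w_f(z))
z = 2.3
w_f = 1.0 / np.polyval(a_f, z)           # strictly proper
w_df = 0.3 / np.polyval(alpha_f, z)      # proper
lhs = np.polyval(det_Q1f, z)
rhs = np.polyval(a_f, z) * np.polyval(alpha_f, z) * (1 - w_df * w_f)
assert np.isclose(lhs, rhs)              # cf. (8.141)
```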

Lemma 8.31. Let us have unimodular matrices $m(z)$ and $\ell(z)$, such that the matrices
\[
\hat a_f(z)=m(z)\,a_f(z)\,,\qquad
\hat\alpha_f(z)=\ell(z)\,\alpha_f(z)
\]
are row reduced. Then the matrix
\[
\hat Q_{1f}(z)=\operatorname{diag}\{m(z),\,\ell(z)\}\;Q_{1f}(z)
\qquad(8.142)
\]
is row reduced.

Proof. Rewrite (8.142) in the form
\[
\hat Q_{1f}(z)=
\begin{bmatrix}
m(z)a_f(z) & m(z)b_f(z)\\
\ell(z)\beta_f(z) & \ell(z)\alpha_f(z)
\end{bmatrix}
=
\begin{bmatrix}
\hat a_f(z) & \hat b_f(z)\\
\hat\beta_f(z) & \hat\alpha_f(z)
\end{bmatrix}\,,
\qquad(8.143)
\]
where the pairs $(\hat a_f(z),\hat b_f(z))$ and $(\hat\alpha_f(z),\hat\beta_f(z))$ are irreducible. Since the matrices $\hat a_f(z)$ and $\hat\alpha_f(z)$ are row reduced, we receive a representation of the form (1.21)
\[
\hat a_f(z)=\operatorname{diag}\{z^{a_1},\ldots,z^{a_n}\}\,A_0+\hat a_{1f}(z)\,,\qquad a_1+\ldots+a_n=q\,,
\]
\[
\hat\alpha_f(z)=\operatorname{diag}\{z^{\epsilon_1},\ldots,z^{\epsilon_m}\}\,B_0+\hat\alpha_{1f}(z)\,,\qquad \epsilon_1+\ldots+\epsilon_m=\rho_f\,,
\]
where $\det A_0\neq0$ and $\det B_0\neq0$. Hereby, the degree of the $i$-th row of the matrix $\hat a_{1f}(z)$ is less than $a_i$, and the degree of the $i$-th row of the matrix $\hat\alpha_{1f}(z)$ is less than $\epsilon_i$. Moreover,
\[
\hat a_f^{-1}(z)\,\hat b_f(z)=a_f^{-1}(z)\,b_f(z)=w_f(z)\,,\qquad
\hat\alpha_f^{-1}(z)\,\hat\beta_f(z)=\alpha_f^{-1}(z)\,\beta_f(z)=w_{df}^o(z)\,.
\]
Since the matrix $w_f(z)$ is strictly proper and the matrix $w_{df}^o(z)$ is at least proper, the degree of the $i$-th row of the matrix $\hat b_f(z)$ is less than $a_i$, and the degree of the $i$-th row of $\hat\beta_f(z)$ is not more than $\epsilon_i$. Therefore, Matrix (8.143) can be represented in the form (1.21)
\[
\hat Q_{1f}(z)=\operatorname{diag}\{z^{a_1},\ldots,z^{a_n},z^{\epsilon_1},\ldots,z^{\epsilon_m}\}\,D_0+\hat Q_{2f}(z)\,,
\qquad(8.144)
\]
where $D_0$ is the constant matrix
\[
D_0=
\begin{bmatrix}
A_0 & O_{nm}\\
C_0 & B_0
\end{bmatrix}\,.
\]
Since $\det D_0\neq0$, Matrix (8.142) is row reduced.

Lemma 8.32. The polynomials $d_b(\zeta)$ and $d_f(z)$ given by (8.127) and (8.132) are connected by
\[
d_b(\zeta)\sim\zeta^{\,q+\rho_f}\,d_f(\zeta^{-1})\,,
\qquad(8.145)
\]
i.e., $d_b(\zeta)$ is equivalent to the reciprocal polynomial for $d_f(z)$.

Proof. Since Matrix (8.143) is row reduced and has the form (8.144), the matrix
\[
\hat Q_{1b}(\zeta)=\operatorname{diag}\{\zeta^{a_1},\ldots,\zeta^{a_n},\zeta^{\epsilon_1},\ldots,\zeta^{\epsilon_m}\}\;\hat Q_{1f}(\zeta^{-1})
\qquad(8.146)
\]
defines a backward eigenoperator associated with the operator $\hat Q_{1f}(z)$. Then the following formula stems from Corollary 5.39:
\[
\det\hat Q_{1b}(\zeta)\sim\zeta^{\,q+\rho_f}\,\det\hat Q_{1f}(\zeta^{-1})\,.
\qquad(8.147)
\]
Per construction,
\[
\det\hat Q_{1f}(z)\sim\det Q_{1f}(z)=d_f(z)\,,
\qquad(8.148)
\]
because the matrices $Q_{1f}(z)$ and $\hat Q_{1f}(z)$ are left-equivalent. Let us show the relation
\[
\det\hat Q_{1b}(\zeta)\sim\det Q_{1b}(\zeta)=d_b(\zeta)\,.
\qquad(8.149)
\]
Notice that Matrix (8.146) can be represented in the form
\[
\hat Q_{1b}(\zeta)=
\begin{bmatrix}
\operatorname{diag}\{\zeta^{a_1},\ldots,\zeta^{a_n}\}\,\hat a_f(\zeta^{-1}) &
\operatorname{diag}\{\zeta^{a_1},\ldots,\zeta^{a_n}\}\,\hat b_f(\zeta^{-1})\\[1mm]
\operatorname{diag}\{\zeta^{\epsilon_1},\ldots,\zeta^{\epsilon_m}\}\,\hat\beta_f(\zeta^{-1}) &
\operatorname{diag}\{\zeta^{\epsilon_1},\ldots,\zeta^{\epsilon_m}\}\,\hat\alpha_f(\zeta^{-1})
\end{bmatrix}
=
\begin{bmatrix}
a_{1l}(\zeta) & b_{1l}(\zeta)\\
\beta_{1l}(\zeta) & \alpha_{1l}(\zeta)
\end{bmatrix}\,,
\qquad(8.150)
\]
where the pairs $(a_{1l}(\zeta),b_{1l}(\zeta))$ and $(\alpha_{1l}(\zeta),\beta_{1l}(\zeta))$ are irreducible due to Lemma 5.34. We have
\[
a_{1l}^{-1}(\zeta)\,b_{1l}(\zeta)=\hat a_f^{-1}(\zeta^{-1})\,\hat b_f(\zeta^{-1})=w_f(\zeta^{-1})=w_b(\zeta)
\]
and the left-hand side is an ILMFD. On the other hand, the right-hand side of (8.128) is also an ILMFD and the following equations hold:
\[
a_{1l}(\zeta)=\eta(\zeta)\,a_l(\zeta)\,,\qquad b_{1l}(\zeta)=\eta(\zeta)\,b_l(\zeta)\,,
\qquad(8.151)
\]
where $\eta(\zeta)$ is a unimodular matrix. In a similar way, it can be shown that
\[
\alpha_{1l}(\zeta)=\sigma(\zeta)\,\alpha_l(\zeta)\,,\qquad \beta_{1l}(\zeta)=\sigma(\zeta)\,\beta_l(\zeta)
\qquad(8.152)
\]
with a unimodular matrix $\sigma(\zeta)$. Substituting (8.151) and (8.152) into (8.150), we find
\[
\hat Q_{1b}(\zeta)=\operatorname{diag}\{\eta(\zeta),\,\sigma(\zeta)\}\;Q_{1b}(\zeta)\,,
\]
i.e., the matrices $Q_{1b}(\zeta)$ and $\hat Q_{1b}(\zeta)$ are left-equivalent. Therefore, Relation (8.149) holds. Then Relation (8.145) directly follows from (8.147)-(8.149).
On the basis of Lemmata 8.29-8.32, the following theorem will be proved.

Theorem 8.33. Let $\Delta_f(z)$ and $\Delta_b(\zeta)$ be the forward and backward characteristic polynomials of the optimal system. Then,
\[
n_f=\deg\Delta_f(z)=\kappa+q+\rho_f\,,\qquad
n_b=\deg\Delta_b(\zeta)=\kappa+\deg\det D_l(\zeta)\,,
\]
where the number $\kappa$ is determined by (8.135), and the number $\nu_0$ of zero roots of the polynomial $\Delta_f(z)$ is
\[
\nu_0=q+\rho_f-\deg\det D_l(\zeta)\,.
\]
In this case, the polynomials $\Delta_f(z)$ and $\Delta_b(\zeta)$ are related by
\[
\Delta_b(\zeta)\sim\zeta^{\,n_f}\,\Delta_f(\zeta^{-1})\,,
\]
i.e., $\Delta_b(\zeta)$ is equivalent to the reciprocal polynomial for $\Delta_f(z)$.

Proof. The proof is left as an exercise for the reader.
9
H2 Optimisation of a Single-loop Multivariable
SD System

9.1 Single-loop Multivariable SD System

1. The aforesaid approach for solving the H2 -problem is fairly general and
can be applied to any sampled-data system that can be represented in the
standard form. Nevertheless, as will be shown in this chapter, the specic
structure of a system and algebraic properties of the transfer matrices of the
continuous-time blocks can play an important role for solving the H2 -problem.
In this case, we can nd possibilities for some additional cancellations, extract-
ing matrix divisors and so on. Moreover, this makes it possible to investigate
important additional properties of the optimal solutions.

2. In this chapter, the above ideas are exemplarily illustrated by the single-loop system shown in Fig. 9.1, where $F(s)$, $Q(s)$ and $G(s)$ are rational matrices of compatible dimensions and $\varkappa$ is a constant.

[Fig. 9.1 (block diagram): the hold output $u$ of the discrete controller $C$ (sampling period $T$) drives $G(s)$, whose output is $h_1$; the sum of $h_1$ and the exogenous input $x$ passes through $F(s)$ to give the output $v$, which is fed back through $Q(s)$ to the sampled controller input $y$; $h=\varkappa h_1$.]

Fig. 9.1. Single-loop sampled-data system

The vector
\[
z(t)=
\begin{bmatrix}
\varkappa\,h_1(t)\\
v(t)
\end{bmatrix}
=
\begin{bmatrix}
h(t)\\
v(t)
\end{bmatrix}
\qquad(9.1)
\]

will be taken as the output of the system. If the system is internally stable and $x(t)$ is a stationary centered vector, the covariance matrix of the quasi-stationary output is given by
\[
K_z(t_1,t_2)=\mathrm{E}\,[z(t_1)\,z'(t_2)]\,.
\]
Since
\[
z(t_1)\,z'(t_2)=
\begin{bmatrix}
\varkappa^2\,h_1(t_1)h_1'(t_2) & \varkappa\,h_1(t_1)v'(t_2)\\
\varkappa\,v(t_1)h_1'(t_2) & v(t_1)v'(t_2)
\end{bmatrix}\,,
\]
we have
\[
\operatorname{trace}[z(t_1)z'(t_2)]=\operatorname{trace}[v(t_1)v'(t_2)]+\varkappa^2\operatorname{trace}[h_1(t_1)h_1'(t_2)]\,.
\]
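The block-trace identity above is elementary; a short numerical confirmation with arbitrary dimensions and an arbitrary constant $\varkappa$:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 0.7
h1 = rng.normal(size=3)
v = rng.normal(size=2)
z = np.concatenate([kappa * h1, v])   # z = [kappa*h1; v]

# trace of the stacked outer product splits into the two block traces
outer = np.outer(z, z)
assert np.isclose(np.trace(outer),
                  np.trace(np.outer(v, v))
                  + kappa**2 * np.trace(np.outer(h1, h1)))
```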

For $\Pi_x(s)=I$, we obtain the square of the $H_2$-norm of the system $S$ in Fig. 9.1 as
\[
\|S\|_2^2=r_z=\frac{1}{T}\int_0^T\operatorname{trace}\,[K_z(t,t)]\,dt=\varkappa^2 d_{h_1}+d_v\,,
\qquad(9.2)
\]
where $d_{h_1}$ and $d_v$ are the mean variances of the corresponding output vectors. To solve the $H_2$-problem, it is required to find a stabilising discrete controller, such that the right-hand side of (9.2) reaches its minimum.

9.2 General Properties

1. Using the general methods described in Section 7.3, we construct the PTM $w(s,t)$ of the single-loop system from the input $x$ to the output (9.1). Let us show that such a construction can easily be done directly on the basis of the block-diagram shown in Fig. 9.1 without transformation to the standard form. In the given case, we realise
\[
w(s,t)=
\begin{bmatrix}
w_{hx}(s,t)\\
w_{vx}(s,t)
\end{bmatrix}\,,
\qquad(9.3)
\]
where $w_{hx}(s,t)$ and $w_{vx}(s,t)$ are the PTMs of the system from the input $x$ to the outputs $h$ and $v$, respectively.

2. To find the PTM $w_{vx}(s,t)$, we assume according to the previously exposed approach
\[
x(t)=e^{st}I\,,\qquad
v(t)=w_{vx}(s,t)\,e^{st}\,,\qquad
w_{vx}(s,t)=w_{vx}(s,t+T)
\]
and
\[
y(t)=w_{yx}(s,t)\,e^{st}\,,\qquad
w_{yx}(s,t)=w_{yx}(s,t+T)\,,
\qquad(9.4)
\]

[Fig. 9.2 (block diagram): the signal $w_{yx}(s,0)e^{st}$ is applied to the controller $C$ and hold, then to $G(s)$; the input $x$ is added and the sum passes through $F(s)$ and $Q(s)$ to form the output $y$.]

Fig. 9.2. Open-loop sampled-data system

where $w_{yx}(s,t)$ is the PTM from the input $x$ to the output $y$. The matrix $w_{yx}(s,t)$ is assumed to be continuous in $t$. For our purpose, it suffices to assume that the matrix $Q(s)F(s)G(s)$ is strictly proper.
Consider the open-loop system shown in Fig. 9.2. The exponentially periodic output $y(t)$ is expressed by
\[
y(t)=\varphi_{QFG}(T,s,t)\,\bar w_d(s)\,w_{yx}(s,0)\,e^{st}+Q(s)F(s)\,e^{st}\,.
\]
Comparing the formulae for $y(t)$ here and in (9.4), we obtain
\[
w_{yx}(s,t)=\varphi_{QFG}(T,s,t)\,\bar w_d(s)\,w_{yx}(s,0)+Q(s)F(s)\,.
\]
Hence for $t=0$, we have
\[
w_{yx}(s,0)=\bigl[I_n-D_{QFG}(T,s,0)\,\bar w_d(s)\bigr]^{-1}Q(s)F(s)\,.
\]
Returning to Fig. 9.1 and using the last equation, we immediately get
\[
w_{vx}(s,t)=\varphi_{FG}(T,s,t)\,R_{QFG}(s)\,Q(s)F(s)+F(s)\,,
\qquad(9.5)
\]
where
\[
R_{QFG}(s)=\bar w_d(s)\bigl[I_n-D_{QFG}(T,s,0)\,\bar w_d(s)\bigr]^{-1}\,.
\]

Comparing (9.5) and (7.30), we find that in the given case
\[
K(p)=F(p)\,,\qquad L(p)=F(p)G(p)\,,\qquad
M(p)=Q(p)F(p)\,,\qquad N(p)=Q(p)F(p)G(p)\,,
\qquad(9.6)
\]
i.e., Matrix (7.2) has the form
\[
w_v(p)=
\begin{bmatrix}
F(p) & F(p)G(p)\\
Q(p)F(p) & Q(p)F(p)G(p)
\end{bmatrix}\,.
\]
The matrix $F(p)G(p)$ is assumed to be at least proper and the remaining blocks should be strictly proper. In a similar way, it can be shown that
\[
w_{hx}(s,t)=\varphi_{G}(T,s,t)\,R_{QFG}(s)\,Q(s)F(s)
\qquad(9.7)
\]
and the corresponding matrix $w_h(p)$ (7.2) is equal to
\[
w_h(p)=
\begin{bmatrix}
O & G(p)\\
Q(p)F(p) & Q(p)F(p)G(p)
\end{bmatrix}\,,
\]
where the matrix $G(p)$ is assumed to be at least proper. Combining (9.5) and (9.7), we find that the PTM (9.3) has the form
\[
w(s,t)=\varphi_{L}(T,s,t)\,R_{N}(s)\,M(s)+K(s)
\qquad(9.8)
\]
with
\[
K(s)=
\begin{bmatrix}
O\\
F(s)
\end{bmatrix}\,,\qquad
L(s)=
\begin{bmatrix}
G(s)\\
F(s)G(s)
\end{bmatrix}\,,\qquad
M(s)=Q(s)F(s)\,,\qquad
N(s)=Q(s)F(s)G(s)\,.
\qquad(9.9)
\]
Under the given assumptions, the matrix $L(s)$ is at least proper and the remaining matrices in (9.9) are strictly proper. The matrix $w(p)$ associated with the PTM (9.8) has the form
\[
w(p)=
\begin{bmatrix}
K(p) & L(p)\\
M(p) & N(p)
\end{bmatrix}
=
\begin{bmatrix}
O & G(p)\\
F(p) & F(p)G(p)\\
Q(p)F(p) & Q(p)F(p)G(p)
\end{bmatrix}\,.
\qquad(9.10)
\]

3. Hereinafter the standard form (2.21) of a rational matrix $R(s)$ is written as
\[
R(s)=\frac{N_R(s)}{d_R(s)}\,.
\qquad(9.11)
\]
The further exposition is based on the following three assumptions I-III, which usually hold in applications.

I. The matrices
\[
Q(s)=\frac{N_Q(s)}{d_Q(s)}\,,\qquad
F(s)=\frac{N_F(s)}{d_F(s)}\,,\qquad
G(s)=\frac{N_G(s)}{d_G(s)}
\qquad(9.12)
\]
are normal.

II. The fraction
\[
N(s)=Q(s)F(s)G(s)=\frac{N_Q(s)N_F(s)N_G(s)}{d_Q(s)d_F(s)d_G(s)}
\qquad(9.13)
\]
is irreducible.

III. The poles of the matrix $N(s)$ should satisfy the strict conditions for non-pathological behaviour (6.124) and (6.125). Moreover, it is assumed that the number of inputs and outputs of any continuous-time block does not exceed the McMillan degree of its transfer function.

These assumptions are introduced for the sake of simplicity of the solution. They are satisfied for the vast majority of applied problems.

4. Let us formulate a number of propositions following from the above assumptions.

Lemma 9.1. The following subordination relations hold:
\[
Q(s)F(s)\ \underset{l}{\prec}\ N(s)\,,\qquad
F(s)G(s)\ \underset{r}{\prec}\ N(s)\,,\qquad
G(s)\ \underset{r}{\prec}\ F(s)G(s)\,.
\qquad(9.14)
\]
Proof. The proof follows immediately from Theorem 3.14.

Lemma 9.2. All matrices (9.6) are normal.

Proof. The claim follows immediately from Theorem 3.8.

Lemma 9.3. All matrices (9.9) are normal.

Proof. Obviously, it suffices to prove the claim for $L(s)$. But
\[
L(s)=L_1(s)\,G(s)\,,
\qquad(9.15)
\]
where
\[
L_1(s)=
\begin{bmatrix}
I\\
F(s)
\end{bmatrix}
=\frac{1}{d_F(s)}
\begin{bmatrix}
d_F(s)\,I\\
N_F(s)
\end{bmatrix}
=\frac{N_{L_1}(s)}{d_F(s)}\,.
\]
Let us show that this matrix is normal. Indeed, since the matrix $F(s)$ is normal, all second-order minors of the matrix $N_F(s)$ are divisible by $d_F(s)$. Obviously, the same is true for all second-order minors of the matrix $N_{L_1}(s)$. Thus, both factors on the right-hand side of (9.15) are normal matrices and their product is irreducible, because the product $F(s)G(s)$ is irreducible. Therefore, the matrix $L(s)$ is normal.

Corollary 9.4. The matrix $F(s)G(s)$ dominates in the matrix $L(s)$.

Lemma 9.5. Matrix (9.10) is normal. Moreover, the matrix $N(s)$ dominates in $w(s)$.

Proof. Matrix (9.10) can be written in the form
\[
w(p)=\operatorname{diag}\{I,\ I,\ Q(p)\}
\begin{bmatrix}
O & I\\
F(p) & F(p)\\
F(p) & F(p)
\end{bmatrix}
\operatorname{diag}\{I,\ G(p)\}\,.
\qquad(9.16)
\]
Each factor on the right-hand side of (9.16) is a normal matrix. This statement is proved similarly to the proof of Lemma 9.3. Moreover, Matrix (9.13) is irreducible, such that the product on the right-hand side of (9.16) is irreducible. Hence Matrix (9.16) is normal. It remains to prove that the matrix $N(s)=Q(s)F(s)G(s)$ dominates in Matrix (9.10).
Denote
\[
\nu_Q=\deg d_Q(s)\,,\qquad \nu_F=\deg d_F(s)\,,\qquad \nu_G=\deg d_G(s)\,.
\]
Then by virtue of Theorem 3.13,
\[
\operatorname{Mdeg}N(p)=\nu_Q+\nu_F+\nu_G=\nu\,.
\]
On the other hand, using similar considerations, we find for Matrix (9.16)
\[
\operatorname{Mdeg}w(p)=\nu_Q+\nu_F+\nu_G=\nu\,.
\]
Hence
\[
\operatorname{Mdeg}w(p)=\operatorname{Mdeg}N(p)\,.
\]
This equation means that the matrix $N(p)$ dominates in Matrix (9.10).

5. Let a minimal standard realisation of the matrix N (p) have the form

N(p) = C_2 (pI - A)^{-1} B_2 ,   (9.17)

where the matrix A is cyclic. Then, as follows from Theorem 2.67, the minimal
standard realisation of the matrix w(p) can be written in the form

w(p) = \begin{bmatrix} C_1 (pI - A)^{-1} B_1 & C_1 (pI - A)^{-1} B_2 + D_L \\ C_2 (pI - A)^{-1} B_1 & C_2 (pI - A)^{-1} B_2 \end{bmatrix} ,   (9.18)

or equivalently,

w(p) = \begin{bmatrix} C_1 \\ C_2 \end{bmatrix} (pI - A)^{-1} \begin{bmatrix} B_1 & B_2 \end{bmatrix} + \begin{bmatrix} O & D_L \\ O & O \end{bmatrix} .   (9.19)

Equation (9.19) is associated with the state equations

\frac{dv}{dt} = Av + B_1 x + B_2 u , \qquad z = C_1 v + D_L u , \qquad y = C_2 v .
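The equivalence of the block form (9.18) and the compact form (9.19) can be checked numerically. The sketch below evaluates both forms of w(p) at a test frequency; the dimensions and random matrices are illustrative assumptions, not data from the text.

```python
import numpy as np

# Numeric check that (9.18) and (9.19) define the same transfer matrix w(p).
rng = np.random.default_rng(0)
nx, m1, m2, p1, p2 = 4, 2, 2, 2, 2     # states; inputs x, u; outputs z, y
A  = rng.standard_normal((nx, nx))
B1 = rng.standard_normal((nx, m1)); B2 = rng.standard_normal((nx, m2))
C1 = rng.standard_normal((p1, nx)); C2 = rng.standard_normal((p2, nx))
DL = rng.standard_normal((p1, m2))

def w_blocks(p):
    """Four blocks of (9.18), evaluated at the complex frequency p."""
    R = np.linalg.inv(p * np.eye(nx) - A)
    return np.block([[C1 @ R @ B1, C1 @ R @ B2 + DL],
                     [C2 @ R @ B1, C2 @ R @ B2]])

def w_compact(p):
    """Compact form (9.19): [C1; C2](pI - A)^{-1}[B1 B2] + constant block."""
    R = np.linalg.inv(p * np.eye(nx) - A)
    C = np.vstack([C1, C2]); B = np.hstack([B1, B2])
    D = np.block([[np.zeros((p1, m1)), DL],
                  [np.zeros((p2, m1)), np.zeros((p2, m2))]])
    return C @ R @ B + D

p0 = 0.7 + 1.3j
assert np.allclose(w_blocks(p0), w_compact(p0))
```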

9.3 Stabilisation
1.
Theorem 9.6. Let Assumptions I-III on page 350 hold. Then the single-loop system shown in Fig. 9.1 is modal controllable (hence, is stabilisable).

Proof. Under the given assumptions, the matrix

D_N(T, \zeta, 0) = D_{QFG}(T, s, 0)\big|_{e^{sT} = \zeta} = C_2 \left( \zeta I - e^{AT} \right)^{-1} e^{AT} \mu(A)\, B_2

is normal. Thus, in the IMFDs

D_N(T, \zeta, 0) = a_l^{-1}(\zeta)\, b_l(\zeta) = b_r(\zeta)\, a_r^{-1}(\zeta) ,   (9.20)

the matrices a_l(\zeta), a_r(\zeta) are simple and we have

\det a_l(\zeta) \sim \det a_r(\zeta) \sim \det\left( \zeta I - e^{AT} \right) \sim \Delta_Q(\zeta)\, \Delta_F(\zeta)\, \Delta_G(\zeta) .   (9.21)

In (9.21) and below, \Delta_R(\zeta) denotes the discretisation of the polynomial d_R(s) given by (9.11). Thus, in the given case, we have

\Delta(\zeta) = \frac{\det\left( \zeta I - e^{AT} \right)}{\det a_l(\zeta)} = \mathrm{const.} \neq 0 ,   (9.22)

whence the claim follows.
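The discretisation Δ_R(ζ) used in (9.21) and (9.22) maps every root s_i of d_R(s) to e^{s_i T}. A small numeric check (the matrix A and period T below are assumed example data) confirms that det(ζI − e^{AT}) coincides with the discretised characteristic polynomial of A:

```python
import numpy as np

# Delta(zeta) = prod (zeta - e^{s_i T}) for the roots s_i of det(sI - A)
# must equal det(zeta*I - e^{AT}).  Example data assumed.
T = 0.5
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])                       # eigenvalues -1 and -2
evals, V = np.linalg.eig(A)
eAT = (V * np.exp(evals * T)) @ np.linalg.inv(V)   # e^{AT}, A diagonalisable

def Delta(zeta):
    # discretisation of det(sI - A): roots s_i -> e^{s_i T}
    return np.prod(zeta - np.exp(evals * T))

def det_discrete(zeta):
    return np.linalg.det(zeta * np.eye(2) - eAT)

z0 = 0.3 + 0.4j
assert np.isclose(Delta(z0), det_discrete(z0))
```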

2.
Remark 9.7. In the general case, the assumption on irreducibility of the right-
hand side of (9.13) is essential. If the right-hand side of (9.13) is reducible by
an unstable factor, then the system in Fig. 9.1 is not stabilisable despite the
fact that all other assumptions of Theorem 9.6 hold.

Example 9.8. Let us have, instead of (9.12), the irreducible representations

Q(p) = \frac{N_Q(p)}{d_Q(p)} , \qquad F(p) = \frac{N_{1F}(p)}{p\, d_{1F}(p)} , \qquad G(p) = \frac{p\, N_{1G}(p)}{d_G(p)} ,   (9.23)

where d_{1F}(0) \neq 0, d_Q(0) \neq 0, d_G(0) \neq 0 and the matrix

\tilde N(p) = \frac{N_Q(p)\, N_{1F}(p)\, N_{1G}(p)}{d_Q(p)\, d_{1F}(p)\, d_G(p)}   (9.24)

is irreducible. Then, Matrix (9.10) takes the form



w(p) = \begin{bmatrix} K(p) & L(p) \\ M(p) & \tilde N(p) \end{bmatrix} = \begin{bmatrix} O & \dfrac{p\, N_{1G}(p)}{d_G(p)} \\[1ex] \dfrac{N_{1F}(p)}{p\, d_{1F}(p)} & \dfrac{N_{1F}(p)\, N_{1G}(p)}{d_{1F}(p)\, d_G(p)} \\[1ex] \dfrac{N_Q(p)\, N_{1F}(p)}{p\, d_Q(p)\, d_{1F}(p)} & \dfrac{N_Q(p)\, N_{1F}(p)\, N_{1G}(p)}{d_Q(p)\, d_{1F}(p)\, d_G(p)} \end{bmatrix} .

In this case, the matrix \tilde N(p) is not dominant in the matrix w(p), because it is analytical at p = 0, while some elements of the matrix w(p) have poles at p = 0. Then the matrix A in the minimal standard realisation (9.19) will have the eigenvalue zero. At the same time, the matrix \tilde A in the minimal representation

\tilde N(p) = \tilde C_2 (pI - \tilde A)^{-1} \tilde B_2

has no eigenvalue zero, because the right-hand side of (9.24) is analytical at p = 0. Therefore, in the given case, (9.22) is not a stable polynomial. Hence the single-loop system with (9.23) is not stabilisable.

9.4 Wiener-Hopf Method


1. Using the results of Chapter 8, we find that in this case, the H2-problem reduces to the minimisation of a functional of the form (8.84)

J_1 = \frac{1}{2\pi j} \oint_{|\zeta| = 1} \operatorname{trace} \left[ \tilde\Theta(\zeta)\, A_L(\zeta)\, \Theta(\zeta)\, A_M(\zeta) - \tilde\Theta(\zeta)\, C(\zeta) - \tilde C(\zeta)\, \Theta(\zeta) \right] \frac{d\zeta}{\zeta}

over the set of stable rational matrices. The matrices A_L(\zeta), A_M(\zeta), C(\zeta) and \tilde C(\zeta) can be calculated using Formulae (8.85) and (8.86). Then referring to (9.9) and (8.86), we find
A_L(\zeta) = \frac{1}{T}\, \bar a_r(\zeta)\, D_{L'L}(T, \zeta, 0)\, a_r(\zeta) = \frac{1}{T}\, \bar a_r(\zeta) \left[ \beta^2 D_{G'G}(T, \zeta, 0) + D_{G'F'FG}(T, \zeta, 0) \right] a_r(\zeta) ,
A_M(\zeta) = a_l(\zeta)\, D_{MM'}(T, \zeta, 0)\, \bar a_l(\zeta) = a_l(\zeta)\, D_{QFF'Q'}(T, \zeta, 0)\, \bar a_l(\zeta) .   (9.25)

Applying (9.9) and (8.78), we obtain


9.5 Factorisation of Quasi-polynomials of Type 1 355

C(\zeta) = A_M(\zeta)\, \Theta_{0r}(\zeta)\, \frac{1}{T}\, D_{L'L}(T, \zeta, 0)\, a_r(\zeta) + \frac{1}{T}\, a_l(\zeta)\, D_{MK'L}(T, \zeta, 0)\, a_r(\zeta)
= A_M(\zeta)\, \Theta_{0r}(\zeta)\, \frac{1}{T} \left[ \beta^2 D_{G'G}(T, \zeta, 0) + D_{G'F'FG}(T, \zeta, 0) \right] a_r(\zeta) + \frac{1}{T}\, a_l(\zeta)\, D_{QFF'FG}(T, \zeta, 0)\, a_r(\zeta) ,   (9.26)

\tilde C(\zeta) = \frac{1}{T}\, \bar a_r(\zeta) \left[ \beta^2 D_{G'G}(T, \zeta, 0) + D_{G'F'FG}(T, \zeta, 0) \right] \Theta_{0r}(\zeta)\, A_M(\zeta) + \frac{1}{T}\, \bar a_r(\zeta)\, D_{G'F'FF'Q'}(T, \zeta, 0)\, \bar a_l(\zeta) ,
T
where the matrices al () and ar () are determined by the IMFDs (9.20).

2. Since under the given assumptions, the single-loop system is modal controllable, all matrices in (9.25) and (9.26) are quasi-polynomials and can have poles only at \zeta = 0. Assume the following factorisations

A_L(\zeta) = \tilde\Gamma(\zeta)\, \Gamma(\zeta) , \qquad A_M(\zeta) = \Lambda(\zeta)\, \tilde\Lambda(\zeta) ,   (9.27)

where the polynomial matrices \Gamma(\zeta) and \Lambda(\zeta) are stable. Then, there exists
an optimal controller, which can be found using the algorithm described in
Chapter 8:
a) Calculate the matrix

R(\zeta) = \tilde\Gamma^{-1}(\zeta)\, C(\zeta)\, \tilde\Lambda^{-1}(\zeta) .

b) Perform the principal separation

R(\zeta) = R_+(\zeta) + R_-(\zeta) ,   (9.28)

where R_+(\zeta) is a polynomial matrix and the matrix R_-(\zeta) is strictly proper.
c) The optimal system function is given by the formula

\Theta_o(\zeta) = \Gamma^{-1}(\zeta)\, R_+(\zeta)\, \Lambda^{-1}(\zeta) .

d) The transfer matrix of the optimal controller wdo () is given by (8.90) and
(8.91).
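For scalar signals, steps a) and b) reduce to a spectral factorisation over the unit circle followed by a polynomial/strictly-proper split. The sketch below works through an assumed scalar example; it illustrates the two operations, not the matrix procedure of Chapter 8.

```python
import numpy as np

# Step a): factorise A(z) = 2.5 + z + 1/z (positive on |z| = 1, assumed
# example) as A(z) = sigma(z) * sigma(1/z) with sigma stable.
zA = np.array([1.0, 2.5, 1.0])            # coefficients of z * A(z)
roots = np.roots(zA)                      # -0.5 and its mirror -2
stable_roots = roots[np.abs(roots) < 1.0] # keep roots inside the unit circle
sigma_monic = np.poly(stable_roots)       # z + 0.5

def eval_A(z):
    return 2.5 + z + 1.0 / z

# gain chosen so that sigma(z) * sigma(1/z) reproduces A(z)
gain = np.sqrt(eval_A(1.0)) / np.polyval(sigma_monic, 1.0)
sigma = gain * sigma_monic                # stable spectral factor

z = np.exp(1j * np.linspace(0.1, 3.0, 7))
assert np.allclose(np.polyval(sigma, z) * np.polyval(sigma, 1.0 / z), eval_A(z))

# Step b): principal separation R = R_plus + R_minus, with polynomial part
# R_plus and strictly proper part R_minus carrying the unstable pole.
def R(z):       return z + 3.0 + 1.0 / (z - 2.0)
def R_plus(z):  return z + 3.0
def R_minus(z): return 1.0 / (z - 2.0)
assert np.allclose(R(z), R_plus(z) + R_minus(z))
```

The matrix case replaces the root-sorting step by a polynomial matrix factorisation, but the stable/antistable split of the spectrum is the same idea.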

9.5 Factorisation of Quasi-polynomials of Type 1


1. One of the fundamental steps in the Wiener-Hopf method requires factorising the quasi-polynomial (9.25) according to (9.27). In the present section, we investigate special features of the factorisation of a quasi-polynomial A_M(\zeta) of type 1 determined by the given assumptions.

2. Let us formulate some auxiliary propositions. Suppose

M(s) = Q(s)F(s) = \frac{N_M(s)}{d_M(s)}

with

d_M(s) = d_Q(s)\, d_F(s) = (s - m_1)^{\mu_1} \cdots (s - m_\rho)^{\mu_\rho} ,   (9.29)

\deg d_M(s) = \mu_1 + \ldots + \mu_\rho = \deg d_Q(s) + \deg d_F(s) = \chi_Q + \chi_F = \chi_M .

Let us have a corresponding minimal standard realisation similar to (9.17)

M(s) = C_M (sI - A_M)^{-1} B_M ,

where I is the identity matrix of compatible dimension. As follows from the subordination relations (9.14) and (9.18),

M(s) = C_2 (sI - A)^{-1} B_1 ,   (9.30)

where B_1 and C_2 are constant matrices. For 0 < t < T, let us have as before

D_M(T, \zeta, t) = C_M \left( \zeta I - e^{A_M T} \right)^{-1} e^{A_M t} B_M .   (9.31)

Matrix (9.31) is normal for all t. Let us have an ILMFD

C_M \left( \zeta I - e^{A_M T} \right)^{-1} = a_M^{-1}(\zeta)\, b_M(\zeta) .   (9.32)

Then under the given assumptions, the formulae

D_M(T, \zeta, t) = a_M^{-1}(\zeta)\, b_M(\zeta, t) , \qquad b_M(\zeta, t) = b_M(\zeta)\, e^{A_M t} B_M   (9.33)

determine an ILMFD of Matrix (9.31). Moreover, the matrix a_M(\zeta) is simple and

\det a_M(\zeta) \sim \Delta_Q(\zeta)\, \Delta_F(\zeta) .   (9.34)
On the other hand, using (9.30), we have

D_M(T, \zeta, t) = C_2 \left( \zeta I - e^{AT} \right)^{-1} e^{At} B_1 .   (9.35)

Consider the ILMFD

C_2 \left( \zeta I - e^{AT} \right)^{-1} = a_l^{-1}(\zeta)\, b_l(\zeta) .   (9.36)

The set of matrices a_l(\zeta) satisfying (9.36) coincides with the set of matrices a_l(\zeta) for the ILMFD (9.20), which satisfy Condition (9.21). This fact follows from the minimality of the PMD \left( \zeta I - e^{AT},\; e^{At} B_2,\; C_2 \right). From (9.35) and (9.36), we obtain an LMFD for the matrix D_M(T, \zeta, t)
 
D_M(T, \zeta, t) = a_l^{-1}(\zeta) \left[ b_l(\zeta)\, e^{At} B_1 \right] .

Since Equation (9.33) is an ILMFD, we obtain

a_l(\zeta) = a_1(\zeta)\, a_M(\zeta) ,   (9.37)

where a_1(\zeta) is a polynomial matrix. Here the matrix a_1(\zeta) is simple and, with respect to (9.34) and (9.35), we find

\det a_1(\zeta) = \frac{\det a_l(\zeta)}{\det a_M(\zeta)} \sim \Delta_G(\zeta) .   (9.38)

3. Consider the sum of the series

D_{MM'}(T, \zeta, 0) = \frac{1}{T} \sum_{k=-\infty}^{\infty} M(s + kj\omega)\, M'(-s - kj\omega) \bigg|_{e^{sT} = \zeta}

and the matrix

P_M(\zeta) = a_M(\zeta)\, D_{MM'}(T, \zeta, 0)\, \bar a_M(\zeta) .   (9.39)

Lemma 9.9. Matrix (9.39) is a symmetric quasi-polynomial matrix of the form (8.88).

Proof. Since

D_M(T, s, t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} M(s + kj\omega)\, e^{(s + kj\omega) t} , \qquad D_{M'}(T, -s, t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} M'(-s - kj\omega)\, e^{-(s + kj\omega) t} ,   (9.40)

we have the equality

D_{MM'}(T, s, 0) = \int_0^T D_M(T, s, t)\, D_{M'}(T, -s, t)\, dt .   (9.41)

This result can be proved by substituting (9.40) into (9.41) and integrating term-wise. Substituting \zeta for e^{sT} in (9.41), we find

D_{MM'}(T, \zeta, 0) = \int_0^T D_M(T, \zeta, t)\, D_{M'}(T, \zeta^{-1}, t)\, dt .   (9.42)

As follows from (9.33), for 0 < t < T,

D_{M'}(T, \zeta^{-1}, t) = D'_M(T, \zeta^{-1}, t) = b'_M(\zeta^{-1}, t) \left[ a'_M(\zeta^{-1}) \right]^{-1} = \bar b_M(\zeta, t)\, \bar a_M^{-1}(\zeta) .

Then substituting this and (9.33) into (9.42), we obtain

D_{MM'}(T, \zeta, 0) = a_M^{-1}(\zeta) \left[ \int_0^T b_M(\zeta, t)\, \bar b_M(\zeta, t)\, dt \right] \bar a_M^{-1}(\zeta) .   (9.43)

Hence, with account for (9.39),

P_M(\zeta) = \int_0^T b_M(\zeta, t)\, \bar b_M(\zeta, t)\, dt .   (9.44)
0

Obviously, the right-hand side of (9.44) is a quasi-polynomial matrix and

\bar P_M(\zeta) = P'_M(\zeta^{-1}) = P_M(\zeta) ,

i.e., the quasi-polynomial (9.39) is symmetric.

4. The symmetric quasi-polynomial P(\zeta) = \bar P(\zeta) of dimension n \times n will be called nonnegative (positive) on the unit circle, if for any nonzero vector x \in \mathbb{C}^{1 \times n} and |\zeta| = 1, we have

x\, P(\zeta)\, \bar x' \ge 0 , \qquad \left( x\, P(\zeta)\, \bar x' > 0 \right) ,

where the overbar denotes the complex conjugate value [133].

Lemma 9.10. The quasi-polynomial (9.44) is nonnegative on the unit circle.

Proof. Since we have \bar\zeta = \zeta^{-1} on the unit circle, Equation (9.44) yields

x\, P_M(\zeta)\, \bar x' = \int_0^T \left[ x\, b_M(\zeta, t) \right] \overline{\left[ x\, b_M(\zeta, t) \right]}'\, dt = \int_0^T \left| x\, b_M(\zeta, t) \right|^2 dt \ge 0 ,

where | \cdot | denotes the absolute value of the complex row vector.

Corollary 9.11. Since under the given assumptions the matrix b_M(\zeta, t) is continuous with respect to t, the quasi-polynomial P_M(\zeta) is nonnegative, but not positive, on the unit circle, if and only if there exists a constant nonzero row x_0 such that

x_0\, b_M(\zeta, t) = O_{1n} .

If such a row does not exist, then the quasi-polynomial matrix P_M(\zeta) is positive on the unit circle.

Remark 9.12. In applied problems, the quasi-polynomial matrix P_M(\zeta) is usually positive on the unit circle.
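The positivity of such a matrix can be probed numerically by sampling the unit circle. In the sketch below, P(ζ) is built as B(ζ)B′(ζ⁻¹) for an assumed real polynomial matrix B, which makes it symmetric and nonnegative by construction, mirroring the integrand structure of (9.44):

```python
import numpy as np

# On |z| = 1, x P(z) conj(x)' = |x B(z)|^2 >= 0 for P(z) = B(z) B'(1/z).
# B0, B1 are assumed example data.
rng = np.random.default_rng(1)
B0 = rng.standard_normal((2, 2))
B1 = rng.standard_normal((2, 2))

def B(z):
    return B0 + B1 * z

def P(z):
    return B(z) @ B(1.0 / z).T        # "bar" operation: transpose, z -> 1/z

for theta in np.linspace(0.0, 2.0 * np.pi, 25):
    z = np.exp(1j * theta)
    x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    val = x @ P(z) @ np.conj(x)
    assert val.real >= -1e-10 and abs(val.imag) < 1e-8
```

A strict inequality at every sampled point is consistent with (but does not prove) positivity; a zero value flags a candidate row x_0 in the sense of Corollary 9.11.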

5.
Lemma 9.13. Let under the given assumptions the matrix a_M(\zeta) in the ILMFD (9.32) be row reduced. Then,

P_M(\zeta) = \sum_{k=-\nu}^{\nu} m_k \zeta^k , \qquad m_{-k} = m'_k   (9.45)

with

0 \le \nu \le \chi_M - 1 ,   (9.46)

where

\deg a_M(\zeta) \le \deg \det a_M(\zeta) = \chi_M .   (9.47)

Proof. Since the matrix D_M(T, \zeta, t) is strictly proper for 0 < t < T, due to Corollary 2.23 for the ILMFD (9.33), we have

\deg b_M(\zeta, t) < \deg a_M(\zeta) , \qquad (0 < t < T) .

At the same time, since the matrix a_M(\zeta) is row reduced, Equation (9.47) follows from (9.29). Then using (9.44), we obtain (9.45) and (9.46).

6. Denote

q_M(\zeta) = \det P_M(\zeta) .

Lemma 9.14. Let the matrix M(s) be normal and the product

\breve M(s) = M(s)\, M'(-s) = \frac{N_M(s)\, N'_M(-s)}{d_M(s)\, d_M(-s)}   (9.48)

be irreducible. Let also the roots of the polynomial

g_M(s) = d_M(s)\, d_M(-s)

satisfy the conditions for non-pathological behaviour (6.106). Then,

q_M(\zeta) = \sum_{k=-\nu_q}^{\nu_q} q_k \zeta^k , \qquad q_{-k} = q_k ,   (9.49)

where the q_k are real constants and

0 \le \nu_q \le \chi_M - n ,   (9.50)

where n is the dimension of the vector y in Fig. 9.1.



Proof. a) From (9.31), after transposition and substitution of \zeta^{-1} for \zeta, we obtain

D_{M'}(T, \zeta^{-1}, t) = B'_M\, e^{A'_M t} \left( \zeta^{-1} I - e^{A'_M T} \right)^{-1} C'_M .

Substituting this and (9.31) into (9.42), we find

D_{MM'}(T, \zeta, 0) = C_M \left( \zeta I - e^{A_M T} \right)^{-1} J \left( \zeta^{-1} I - e^{A'_M T} \right)^{-1} C'_M ,   (9.51)

where

J = \int_0^T e^{A_M t}\, B_M B'_M\, e^{A'_M t}\, dt .
0
From (9.51), it follows that the matrix D_{MM'}(T, \zeta, 0) is strictly proper and can be written in the form

D_{MM'}(T, \zeta, 0) = \frac{K(\zeta)}{\alpha_M(\zeta)\, \bar\alpha_M(\zeta)} ,   (9.52)

where

\alpha_M(\zeta) = \left( \zeta - e^{m_1 T} \right)^{\mu_1} \cdots \left( \zeta - e^{m_\rho T} \right)^{\mu_\rho} , \qquad \bar\alpha_M(\zeta) = \left( \zeta - e^{-m_1 T} \right)^{\mu_1} \cdots \left( \zeta - e^{-m_\rho T} \right)^{\mu_\rho} ,   (9.53)

and K(\zeta) is a polynomial matrix.


b) Let us show that Matrix (9.52) is normal. Indeed, using (9.48), we can write

D_{MM'}(T, \zeta, 0) = D_{\breve M}(T, \zeta, 0) .

Since the matrix M(s) is normal, the matrix M'(-s) is normal. Hence \breve M(s) is also normal as a product of irreducible normal matrices. Moreover, since the poles of \breve M(s) satisfy Condition (6.106), the normality of Matrix (9.52) follows from Corollary 6.25.
c) Since Matrix (9.52) is strictly proper and normal, using (2.92) and (2.93), we have

f_M(\zeta) = \det D_{MM'}(T, \zeta, 0) = \frac{\zeta^n u_M(\zeta)}{\alpha_M(\zeta)\, \bar\alpha_M(\zeta)} ,   (9.54)

where u_M(\zeta) is a polynomial such that \deg u_M(\zeta) \le 2\chi_M - 2n. From (9.53), we find

\bar\alpha_M(\zeta) = (-1)^{\chi_M}\, e^{-T \sum_{i=1}^{\rho} m_i \mu_i}\, \zeta^{\chi_M}\, \alpha_M(\zeta^{-1}) .   (9.55)

Substituting (9.55) into (9.54), we obtain

f_M(\zeta) = \frac{\beta_M(\zeta)}{\alpha_M(\zeta)\, \alpha_M(\zeta^{-1})} ,   (9.56)

where \beta_M(\zeta) is a quasi-polynomial of the form

\beta_M(\zeta) = (-1)^{\chi_M}\, e^{T \sum_{i=1}^{\rho} m_i \mu_i}\, \zeta^{n - \chi_M}\, u_M(\zeta) .   (9.57)

Substituting \zeta^{-1} for \zeta in (9.56) and using the fact that f_M(\zeta) = f_M(\zeta^{-1}), we obtain

\beta_M(\zeta) = \beta_M(\zeta^{-1}) ,   (9.58)

i.e., the quasi-polynomial (9.57) is symmetric. Moreover, the product \zeta^{\chi_M - n} \beta_M(\zeta) is a polynomial due to (9.57).
Calculating the determinants of both sides of (9.43), we find

f_M(\zeta) = \frac{q_M(\zeta)}{\det a_M(\zeta)\, \det a_M(\zeta^{-1})} .

Since

\det a_M(\zeta) = \kappa\, \alpha_M(\zeta) , \qquad \det a_M(\zeta^{-1}) = \kappa\, \alpha_M(\zeta^{-1})

with \kappa = \mathrm{const.} \neq 0, a comparison of (9.56) with (9.58) yields

q_M(\zeta) = \kappa_1\, \beta_M(\zeta)

with \kappa_1 = \mathrm{const.} \neq 0. Therefore, the product \zeta^{\chi_M - n} q_M(\zeta) is a polynomial, as (9.49) and (9.50) claim.

7. Next, we provide some important properties of the quasi-polynomial matrix A_M(\zeta) in (9.25).
Theorem 9.15. Let Assumptions I-III on page 350 hold and let us have the ILMFD (9.36)

C_2 \left( \zeta I - e^{AT} \right)^{-1} = a_l^{-1}(\zeta)\, b_l(\zeta) .   (9.59)

Then, the matrices a_l(\zeta) in (9.59) and a_M(\zeta) in the ILMFD (9.32) can be chosen in such a way that the following equality holds:

A_M(\zeta) = a_l(\zeta)\, D_{MM'}(T, \zeta, 0)\, \bar a_l(\zeta) = \sum_{k=-\nu}^{\nu} a_k \zeta^k , \qquad a_{-k} = a'_k ,   (9.60)

where

0 \le \nu \le \chi - 1   (9.61)

and

\chi = \deg d_N(s) = \deg d_Q(s) + \deg d_F(s) + \deg d_G(s) .

Proof. As was proved above, the sets of matrices a_l(\zeta) from (9.59) and (9.20) coincide. Taking into account (9.37) and (9.43), we rewrite the matrix A_M(\zeta) in the form

A_M(\zeta) = a_1(\zeta)\, P_M(\zeta)\, \bar a_1(\zeta) ,   (9.62)

where P_M(\zeta) is the quasi-polynomial (9.39). Using (9.44), from (9.62) we obtain

A_M(\zeta) = \int_0^T \left[ a_1(\zeta)\, b_M(\zeta, t) \right] \overline{\left[ a_1(\zeta)\, b_M(\zeta, t) \right]}\, dt .   (9.63)

Let the matrix a_M(\zeta) be row reduced. Then we have

\deg b_M(\zeta, t) < \deg a_M(\zeta) \le \deg \det a_M(\zeta) = \chi_M .   (9.64)

Moreover, if we have the ILMFD (9.59), any pair \left( \eta(\zeta)\, a_l(\zeta),\; \eta(\zeta)\, b_l(\zeta) \right) with any unimodular matrix \eta(\zeta) is also an ILMFD for Matrix (9.59). As a special case, the matrix \eta(\zeta) can be chosen in such a way that the matrix a_1(\zeta) in (9.37) is row reduced. Then with respect to (9.38), we obtain

\deg a_1(\zeta) \le \deg \det a_1(\zeta) = \deg \Delta_G(\zeta) = \chi_G .

If this and (9.64) hold, we have

\deg \left[ a_1(\zeta)\, b_M(\zeta, t) \right] \le \chi_M + \chi_G - 1 = \chi - 1 .

From this and (9.63), the validity of (9.60) and (9.61) follows.

Theorem 9.16. Denote

r_M(\zeta) = \det A_M(\zeta) .

Under Assumptions I-III on page 350, we have

r_M(\zeta) = \sum_{k=-\nu}^{\nu} r_k \zeta^k , \qquad r_{-k} = r_k   (9.65)

with

0 \le \nu \le \chi - n .   (9.66)

Proof. From (9.62), we have

r_M(\zeta) = \det a_1(\zeta)\, \det a_1(\zeta^{-1})\, \det P_M(\zeta) = \det a_1(\zeta)\, \det a_1(\zeta^{-1})\, q_M(\zeta) .   (9.67)

Then with regard to (9.49) and (9.50),

r_M(\zeta) = \det a_1(\zeta)\, \det a_1(\zeta^{-1}) \sum_{k=-\nu_q}^{\nu_q} q_k \zeta^k ,

which is equivalent to (9.65) and (9.66), because \deg \det a_1(\zeta) = \chi_G, 0 \le \nu_q \le \chi_M - n and \chi_G + \chi_M = \chi .

Corollary 9.17. As follows from (9.67), the set of zeros of the function r_M(\zeta) includes the set of roots of the polynomial \Delta_G(\zeta) as well as those of the quasi-polynomial \Delta_G(\zeta^{-1}) .

8. Using the above auxiliary relations, we can formulate an important proposition about the factorisation of quasi-polynomials of type 1.

Theorem 9.18. Let Assumptions I-III on page 350 and the propositions of Lemma 9.14 hold. Let also the quasi-polynomial A_M(\zeta) be positive on the unit circle. Then, there exists a factorisation

A_M(\zeta) = \Lambda(\zeta)\, \bar\Lambda(\zeta) = \Lambda(\zeta)\, \Lambda'(\zeta^{-1}) ,   (9.68)

where \Lambda(\zeta) is a stable real n \times n polynomial matrix. Under these conditions, there exists a factorisation

r_M(\zeta) = \det A_M(\zeta) = r_M^+(\zeta)\, r_M^+(\zeta^{-1}) ,   (9.69)

where r_M^+(\zeta) is a real stable polynomial with \deg r_M^+(\zeta) \le \chi - n. Moreover,

\det \Lambda(\zeta) \sim r_M^+(\zeta)

and the matrices a_l(\zeta), a_M(\zeta) in the ILMFDs (9.32), (9.59) can be chosen in such a way that

\deg \Lambda(\zeta) \le \chi - 1 .

Proof. With respect to the above results, the proof is a direct corollary of the general theorem about factorisation given in [133].

Remark 9.19. As follows from Corollary 9.17, if the polynomial dG (s) has roots
on the imaginary axis, then the quasi-polynomial rM () has roots on the unit
circle. In this case, the factorisations (9.68) and (9.69) are impossible.

Remark 9.20. Let the polynomial

d_G(s) = (s - g_1)^{\varepsilon_1} \cdots (s - g_l)^{\varepsilon_l}

be free of roots on the imaginary axis. Let also

\mathrm{Re}\, g_i < 0 , \quad (i = 1, \ldots, \nu); \qquad \mathrm{Re}\, g_i > 0 , \quad (i = \nu + 1, \ldots, l) .

Then the polynomial r_M^+(\zeta) can be represented in the form

r_M^+(\zeta) = \Delta_G^+(\zeta)\, r_{1M}^+(\zeta) ,

where \Delta_G^+(\zeta), r_{1M}^+(\zeta) are stable polynomials and

\Delta_G^+(\zeta) = \left( \zeta - e^{g_1 T} \right)^{\varepsilon_1} \cdots \left( \zeta - e^{g_\nu T} \right)^{\varepsilon_\nu} \left( \zeta - e^{-g_{\nu+1} T} \right)^{\varepsilon_{\nu+1}} \cdots \left( \zeta - e^{-g_l T} \right)^{\varepsilon_l} ,

i.e., the numbers e^{g_i T}, (i = 1, \ldots, \nu) and e^{-g_i T}, (i = \nu + 1, \ldots, l) are found among the roots of the polynomial r_M^+(\zeta) .
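This root pattern (a stable pole g enters as e^{gT}, an unstable one through its mirror e^{-gT}) can be reproduced numerically. The data below is an assumed example with one stable and one unstable continuous-time pole:

```python
import numpy as np

# The symmetric product Delta_G(z) * Delta_G(1/z) has root pairs {r, 1/r};
# the stable spectral factor collects the roots inside the unit circle.
T = 1.0
g1, g2 = -1.0, 0.5                        # stable / unstable poles (assumed)
r1, r2 = np.exp(g1 * T), np.exp(g2 * T)   # ~0.368 (inside), ~1.649 (outside)

all_roots = np.array([r1, 1.0 / r1, r2, 1.0 / r2])
stable_factor_roots = np.sort(all_roots[np.abs(all_roots) < 1.0])

expected = np.sort(np.array([np.exp(g1 * T), np.exp(-g2 * T)]))
assert np.allclose(stable_factor_roots, expected)
```

The stable factor thus contains e^{-0.5 T} rather than e^{0.5 T}: the unstable continuous-time pole appears in the discrete factorisation only through its reflection.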

9.6 Factorisation of Quasi-polynomials of Type 2


1. Let the matrix L(s) be at least proper and have the standard form

L(s) = \frac{N_L(s)}{d_L(s)} , \qquad \mathrm{Mdeg}\, L(s) = \chi_L ,

where

d_L(s) = (s - \lambda_1)^{\nu_1} \cdots (s - \lambda_m)^{\nu_m} , \qquad \nu_1 + \ldots + \nu_m = \chi_L ,

with the minimal standard representation

L(s) = C_L (sI - A_L)^{-1} B_L + D_L .

Let also

\mu(s) = \int_0^T e^{-s\tau}\, m(\tau)\, d\tau

be the transfer function of the forming element. Then for 0 < t < T, from (6.90), (6.100) and (6.103), we have

D_{L\mu}(T, s, t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} L(s + kj\omega)\, \mu(s + kj\omega)\, e^{(s + kj\omega) t}
= C_L\, h_\mu(A_L, t)\, B_L + C_L\, \mu(A_L)\, e^{A_L t} \left( I - e^{-sT} e^{A_L T} \right)^{-1} B_L + D_L\, m(t) ,   (9.70)

where

h_\mu(A_L, t) = \int_t^T e^{A_L (t - \tau)}\, m(\tau)\, d\tau .

Replacing e^{sT} = \zeta in (9.70), we find the rational matrix

D_{L\mu}(T, \zeta, t) = D_{L\mu}(T, s, t)\big|_{e^{sT} = \zeta} = C_L\, \mu(A_L)\, e^{A_L t}\, w_L(\zeta) + \bar D_L(t) ,   (9.71)

where

w_L(\zeta) = \left( I - \zeta^{-1} e^{A_L T} \right)^{-1} B_L ,   (9.72)

and

\bar D_L(t) = C_L\, h_\mu(A_L, t)\, B_L + D_L\, m(t)   (9.73)

is a matrix independent of \zeta. Let us have an IRMFD

w_L(\zeta) = \left( I - \zeta^{-1} e^{A_L T} \right)^{-1} B_L = b_L(\zeta)\, a_L^{-1}(\zeta) .   (9.74)

Then using (9.71), we find an RMFD

D_{L\mu}(T, \zeta, t) = b_L(\zeta, t)\, a_L^{-1}(\zeta) ,   (9.75)

where the matrix

b_L(\zeta, t) = C_L\, e^{A_L t}\, \mu(A_L)\, b_L(\zeta) + \bar D_L(t)\, a_L(\zeta)

is a polynomial in \zeta for all t. When Assumptions I-III on page 350 hold, the matrix a_L(\zeta) is simple and we have

\det a_L(\zeta) \sim \Delta_F(\zeta)\, \Delta_G(\zeta) .

2. Consider the sum of the series

D_{L'L}(T, s, 0) = \frac{1}{T} \sum_{k=-\infty}^{\infty} L'(-s - kj\omega)\, L(s + kj\omega)\, \mu(s + kj\omega)\, \mu(-s - kj\omega)

and the rational matrices

D_{L'L}(T, \zeta, 0) = D_{L'L}(T, s, 0)\big|_{e^{sT} = \zeta}   (9.76)

and

P_L(\zeta) = a'_L(\zeta^{-1})\, D_{L'L}(T, \zeta, 0)\, a_L(\zeta) .   (9.77)

Let us formulate a number of propositions determining some properties of the matrices (9.76) and (9.77) required below.

Lemma 9.21. Matrix (9.77) is a symmetric quasi-polynomial.

Proof. Since

D'_{L\mu}(T, -s, t) = \frac{1}{T} \sum_{k=-\infty}^{\infty} L'(-s - kj\omega)\, \mu(-s - kj\omega)\, e^{-(s + kj\omega) t} ,

regarding (9.70), after integration we find

\int_0^T D'_{L\mu}(T, -s, t)\, D_{L\mu}(T, s, t)\, dt = \frac{1}{T} \sum_{k=-\infty}^{\infty} L'(-s - kj\omega)\, L(s + kj\omega)\, \mu(-s - kj\omega)\, \mu(s + kj\omega) = D_{L'L}(T, s, 0) .

Substituting \zeta for e^{sT}, we find

D_{L'L}(T, \zeta, 0) = \int_0^T D'_{L\mu}(T, \zeta^{-1}, t)\, D_{L\mu}(T, \zeta, t)\, dt .   (9.78)

Nevertheless, from (9.75) we have

D'_{L\mu}(T, \zeta^{-1}, t) = \left[ a'_L(\zeta^{-1}) \right]^{-1} b'_L(\zeta^{-1}, t) = \bar a_L^{-1}(\zeta)\, \bar b_L(\zeta, t) .   (9.79)

Using (9.75) and (9.79) in (9.78), we find

D_{L'L}(T, \zeta, 0) = \bar a_L^{-1}(\zeta) \left[ \int_0^T \bar b_L(\zeta, t)\, b_L(\zeta, t)\, dt \right] a_L^{-1}(\zeta) .

Hence

P_L(\zeta) = \int_0^T \bar b_L(\zeta, t)\, b_L(\zeta, t)\, dt   (9.80)

is a symmetric quasi-polynomial.

Lemma 9.22. The quasi-polynomial PL () is nonnegative on the unit circle.

Proof. The proof is similar to that given for Lemma 9.10.

Lemma 9.23. Let Assumptions I-III on page 350 hold and the matrix a_L(\zeta) from the IRMFD (9.74) be column reduced. Then,

P_L(\zeta) = \sum_{k=-\nu}^{\nu} p_k \zeta^k , \qquad p_{-k} = p'_k

with

0 \le \nu \le \chi_L ,

where

\deg a_L(\zeta) \le \deg \det a_L(\zeta) = \chi_L .

Proof. Under the given assumptions, the matrix w_L(\zeta) is normal. Hence

\deg \det a_L(\zeta) = \deg \Delta_L(\zeta) = \chi_L = \chi_F + \chi_G

and since the matrix a_L(\zeta) is column reduced, we have

\deg a_L(\zeta) \le \chi_L .

Moreover, since Matrix (9.71) is at least proper, due to Corollary 2.23, we obtain

\deg b_L(\zeta, t) \le \deg a_L(\zeta) .

The claim of the lemma follows from (9.80) and the last relations.

3. Introduce the following additional notations

q_L(\zeta) = \det P_L(\zeta)

and

\breve\mu(s) = \mu(s)\, \mu(-s) .

Lemma 9.24. Let Assumptions I-III on page 350 hold and the product

\breve L(s) = L'(-s)\, L(s) = \frac{N'_L(-s)\, N_L(s)}{d_L(-s)\, d_L(s)}   (9.81)

be irreducible. Let also the roots \breve\lambda_1, \ldots, \breve\lambda_{\bar m} of the polynomial

\breve d_L(s) = d_L(-s)\, d_L(s)

satisfy the conditions for non-pathological behaviour (6.106) and moreover,

\breve\mu(\breve\lambda_i) \neq 0 , \qquad (i = 1, \ldots, \bar m) .   (9.82)

Then,

q_L(\zeta) = \sum_{k=-\nu}^{\nu} q_k \zeta^k ,

where the q_k = \bar q_k are real constants and

0 \le \nu \le \chi_L .
Proof. a) First of all, we show that the rational matrix D_{L'L}(T, \zeta, 0) is at least proper. With this aim in view, recall that (9.71) yields

D'_{L\mu}(T, \zeta^{-1}, t) = w'_L(\zeta^{-1})\, e^{A'_L t}\, \mu(A'_L)\, C'_L + \bar D'_L(t) .

Using (9.72) and (9.73), it can easily be established that this matrix is at least proper. Then the product

D'_{L\mu}(T, \zeta^{-1}, t)\, D_{L\mu}(T, \zeta, t)

is also at least proper. Further, from (9.78) it follows that the matrix D_{L'L}(T, \zeta, 0) is at least proper. Then, if

d_L(s) = (s - \lambda_1)^{\nu_1} \cdots (s - \lambda_m)^{\nu_m} , \qquad \nu_1 + \ldots + \nu_m = \chi_L ,

then

D_{L'L}(T, \zeta, 0) = \frac{L(\zeta)}{\alpha_L(\zeta)\, \bar\alpha_L(\zeta)}   (9.83)

with

\alpha_L(\zeta) = \left( \zeta - e^{\lambda_1 T} \right)^{\nu_1} \cdots \left( \zeta - e^{\lambda_m T} \right)^{\nu_m} , \qquad \bar\alpha_L(\zeta) = \left( \zeta - e^{-\lambda_1 T} \right)^{\nu_1} \cdots \left( \zeta - e^{-\lambda_m T} \right)^{\nu_m} ,   (9.84)

where L(\zeta) is a polynomial matrix such that \deg L(\zeta) \le 2\chi_L .

b) Let us show that Matrix (9.83) is normal. For this purpose, using (9.81), we write

D_{L'L}(T, s, 0) = D_{\breve L \breve\mu}(T, s, 0) = \frac{1}{T} \sum_{k=-\infty}^{\infty} \breve L(s + kj\omega)\, \mu(s + kj\omega)\, \mu(-s - kj\omega) .   (9.85)

Using the fact that

\mu(-s) = \int_0^T e^{s\tau}\, m(\tau)\, d\tau ,

from (9.85) we can derive

D_{L'L}(T, s, 0) = \frac{1}{T} \sum_{k=-\infty}^{\infty} \breve L(s + kj\omega)\, \mu(s + kj\omega) \int_0^T e^{(s + kj\omega)\tau}\, m(\tau)\, d\tau
= \int_0^T \left[ \frac{1}{T} \sum_{k=-\infty}^{\infty} \breve L(s + kj\omega)\, \mu(s + kj\omega)\, e^{(s + kj\omega)\tau} \right] m(\tau)\, d\tau = \int_0^T D_{\breve L \mu}(T, s, \tau)\, m(\tau)\, d\tau .   (9.86)

Under the given assumptions, the matrix L'(-s) is normal. Hence Matrix (9.81) is also normal as a product of irreducible normal matrices. Therefore, the minimal standard representation of the matrix \breve L(s) can be written in the form

\breve L(s) = \breve C_L \left( sI - \breve A_L \right)^{-1} \breve B_L + \breve D_L .

Then, similarly to (9.70)-(9.73), for 0 < t < T we obtain

D_{\breve L \mu}(T, s, t) = \breve C_L\, \mu(\breve A_L)\, e^{\breve A_L t} \left( I - e^{-sT} e^{\breve A_L T} \right)^{-1} \breve B_L + \bar{\breve D}_L(t) ,   (9.87)

where

\bar{\breve D}_L(t) = \breve C_L\, h_\mu(\breve A_L, t)\, \breve B_L + \breve D_L\, m(t) .

Using (9.87) in (9.86), after integration and substitution e^{sT} = \zeta, we obtain

D_{L'L}(T, \zeta, 0) = \breve C_L\, \mu(\breve A_L)\, \mu(-\breve A_L) \left( I - \zeta^{-1} e^{\breve A_L T} \right)^{-1} \breve B_L + \int_0^T \bar{\breve D}_L(t)\, m(t)\, dt ,

where

\mu(-\breve A_L) = \int_0^T e^{\breve A_L t}\, m(t)\, dt .

Under the given assumptions, the matrices \breve A_L and e^{\breve A_L T} are cyclic, the pair \left( e^{\breve A_L T}, \breve B_L \right) is controllable and the pair \left[ e^{\breve A_L T}, \breve C_L \right] is observable. Moreover, the matrix

\breve\mu(\breve A_L) = \mu(\breve A_L)\, \mu(-\breve A_L)

is commutative with the matrix e^{\breve A_L T} and, due to (9.82), it is nonsingular. Therefore, Matrix (9.83) is normal.
c) With respect to the normality of Matrix (9.83), calculating the determinants on both sides of (9.83), we find

f_L(\zeta) = \det D_{L'L}(T, \zeta, 0) = \frac{u_L(\zeta)}{\alpha_L(\zeta)\, \bar\alpha_L(\zeta)} ,   (9.88)

where \alpha_L(\zeta), \bar\alpha_L(\zeta) are the polynomials (9.84), and u_L(\zeta) is a polynomial with \deg u_L(\zeta) \le 2\chi_L. Per construction, we have f_L(\zeta) = f_L(\zeta^{-1}). Therefore, similarly to (9.54), from (9.88) we obtain

f_L(\zeta) = \frac{\beta_L(\zeta)}{\alpha_L(\zeta)\, \alpha_L(\zeta^{-1})} ,   (9.89)

where

\beta_L(\zeta) = (-1)^{\chi_L}\, e^{T \sum_{i=1}^{m} \lambda_i \nu_i}\, \zeta^{-\chi_L}\, u_L(\zeta)

is a symmetric quasi-polynomial. Moreover, since the product \zeta^{\chi_L} \beta_L(\zeta) is a polynomial, we obtain

\beta_L(\zeta) = \sum_{k=-\nu}^{\nu} \beta_k \zeta^k , \qquad \beta_{-k} = \beta_k ,

where

0 \le \nu \le \chi_L .

Using (9.89), (9.77) and the relations

\det a_L(\zeta) \sim \alpha_L(\zeta) , \qquad \det a'_L(\zeta^{-1}) \sim \alpha_L(\zeta^{-1}) ,

we obtain the equality

q_L(\zeta) = \det a_L(\zeta)\, \det a'_L(\zeta^{-1})\, f_L(\zeta) = k_L\, \beta_L(\zeta) , \qquad k_L = \mathrm{const.}

This completes the proof.



4. Using the above auxiliary results, under Assumptions I-III on page 350 we consider some properties of the quasi-polynomial A_L(\zeta).
Using a minimal standard representation (9.18), introduce the matrix

\tilde w_L(\zeta) = \left( I - \zeta^{-1} e^{AT} \right)^{-1} B_2   (9.90)

and an arbitrary IRMFD

\tilde w_L(\zeta) = \tilde b_L(\zeta)\, a_r^{-1}(\zeta) .   (9.91)

Since Matrix (9.90) is normal, the matrix a_r(\zeta) is simple and

\det a_r(\zeta) \sim \det\left( \zeta I - e^{AT} \right) \sim \Delta_Q(\zeta)\, \Delta_F(\zeta)\, \Delta_G(\zeta) .

From (9.18), it follows that the representation

L(s) = C_1 (sI - A)^{-1} B_2 + D_L

exists. Hence, together with (9.71), we have

D_{L\mu}(T, \zeta, t) = C_1\, h_\mu(A, t)\, B_2 + C_1\, \mu(A)\, e^{At} \left( I - \zeta^{-1} e^{AT} \right)^{-1} B_2 + m(t)\, D_L .

Thus, with account for (9.91), we obtain the RMFD

D_{L\mu}(T, \zeta, t) = \tilde b_L(\zeta, t)\, a_r^{-1}(\zeta) ,   (9.92)

where

\tilde b_L(\zeta, t) = C_1\, h_\mu(A, t)\, B_2\, a_r(\zeta) + C_1\, \mu(A)\, e^{At}\, \tilde b_L(\zeta) + m(t)\, D_L\, a_r(\zeta) .

Simultaneously with the RMFD (9.92), we have the IRMFD (9.75), therefore

a_r(\zeta) = a_L(\zeta)\, a_2(\zeta) .   (9.93)

Moreover, the polynomial matrix a_2(\zeta) is simple and

\det a_2(\zeta) = \frac{\det a_r(\zeta)}{\det a_L(\zeta)} \sim \Delta_Q(\zeta) .   (9.94)
Theorem 9.25. The set of matrices a_r(\zeta) in the IRMFD (9.91) coincides with the set of matrices a_r(\zeta) in the IRMFD (9.20). Moreover, the matrices a_r(\zeta) in (9.91) and a_L(\zeta) in the IRMFD (9.74) can be chosen in such a way that the following representation holds:

A_L(\zeta) = \frac{1}{T}\, \bar a_r(\zeta)\, D_{L'L}(T, \zeta, 0)\, a_r(\zeta) = \sum_{k=-\nu}^{\nu} a_k \zeta^k , \qquad a_{-k} = a'_k ,   (9.95)

where

0 \le \nu \le \chi .   (9.96)

Proof. The coincidence of the sets of matrices a_r(\zeta) in (9.20) and (9.91) stems from the minimality of the PMD

\left( \zeta I - e^{AT} ,\; e^{At}\, \mu(A)\, B_2 ,\; C_2 \right) .

Using (9.79), (9.80) and (9.93), the matrix A_L(\zeta) can be written in the form

A_L(\zeta) = \frac{1}{T}\, \bar a_2(\zeta)\, P_L(\zeta)\, a_2(\zeta) ,   (9.97)

where P_L(\zeta) is the quasi-polynomial matrix (9.77). Using (9.80), from (9.97) we find

A_L(\zeta) = \frac{1}{T} \int_0^T \overline{\left[ b_L(\zeta, t)\, a_2(\zeta) \right]} \left[ b_L(\zeta, t)\, a_2(\zeta) \right] dt .   (9.98)

Let the matrix a_L(\zeta) be column reduced. Then, as before, we have

\deg b_L(\zeta, t) \le \deg a_L(\zeta) \le \deg \det a_L(\zeta) = \chi_L .

Moreover, if we have the IRMFD (9.91), then any pair \left[ a_r(\zeta)\, \eta(\zeta) ,\; \tilde b_L(\zeta)\, \eta(\zeta) \right], where \eta(\zeta) is any unimodular matrix, determines an IRMFD for Matrix (9.91). In particular, the matrix \eta(\zeta) can be chosen in such a way that the matrix a_2(\zeta) in (9.93) becomes column reduced. In this case, we obtain

\deg a_2(\zeta) \le \deg \det a_2(\zeta) = \deg \Delta_Q(\zeta) = \chi_Q .

The last two estimates yield

\deg \left[ b_L(\zeta, t)\, a_2(\zeta) \right] \le \chi_L + \chi_Q = \chi .

Equations (9.95) and (9.96) follow from (9.98) and the last estimate.
Theorem 9.26. Denote

r_L(\zeta) = \det A_L(\zeta) .

Then under Assumptions I-III on page 350 and the conditions of Lemma 9.23, we have

r_L(\zeta) = \sum_{k=-\nu}^{\nu} r_k \zeta^k , \qquad r_{-k} = r_k ,

where

0 \le \nu \le \chi .

Proof. From (9.97), we have

r_L(\zeta) = \kappa\, \det a_2(\zeta)\, \det a_2(\zeta^{-1})\, q_L(\zeta) , \qquad \kappa = \mathrm{const.} \neq 0 ,   (9.99)

which is equivalent to the claim, because \deg \det a_2(\zeta) = \chi_Q and \chi_Q + \chi_L = \chi .
Corollary 9.27. From (9.99), it follows that the set of roots of the function r_L(\zeta) includes the set of roots of the polynomial \Delta_Q(\zeta) and the set of roots of the quasi-polynomial \Delta_Q(\zeta^{-1}) .

5. Using the above results, we prove a theorem about factorisation of quasi-polynomials of type 2.

Theorem 9.28. Let Assumptions I-III on page 350 and the conditions of Lemmata 9.23 and 9.24 hold. Let also the quasi-polynomial A_L(\zeta) be positive on the unit circle. Then, there exists a factorisation

A_L(\zeta) = \tilde\Gamma(\zeta)\, \Gamma(\zeta) = \Gamma'(\zeta^{-1})\, \Gamma(\zeta) ,   (9.100)

where \Gamma(\zeta) is a stable polynomial matrix. Under the same conditions, the following factorisation is possible:

r_L(\zeta) = \det A_L(\zeta) = r_L^+(\zeta)\, r_L^+(\zeta^{-1}) ,   (9.101)

where r_L^+(\zeta) is a real stable polynomial with \deg r_L^+(\zeta) \le \chi. Moreover,

\det \Gamma(\zeta) \sim r_L^+(\zeta)

and the matrices a_r(\zeta), a_L(\zeta) in the IRMFDs (9.91), (9.74) can be chosen such that

\deg \Gamma(\zeta) \le \chi .

Proof. As for Theorem 9.16, the proof is a direct corollary of the theorem about factorisation from [133], with account for our auxiliary results.

Remark 9.29. From Corollary 9.27, it follows that, when the polynomial dQ (s)
has roots on the imaginary axis, then the quasi-polynomial (9.99) has roots
on the unit circle and the factorisations (9.100) and (9.101) are impossible.

Remark 9.30. Let the polynomial

d_Q(s) = (s - q_1)^{\kappa_1} \cdots (s - q_r)^{\kappa_r} , \qquad \kappa_1 + \ldots + \kappa_r = \chi_Q

be free of roots on the imaginary axis. Let also

\mathrm{Re}\, q_i < 0 , \quad (i = 1, \ldots, m); \qquad \mathrm{Re}\, q_i > 0 , \quad (i = m + 1, \ldots, r) .

Then the polynomial r_L^+(\zeta) can be represented in the form

r_L^+(\zeta) = d_Q^+(\zeta)\, r_{1L}^+(\zeta) ,

where d_Q^+(\zeta) and r_{1L}^+(\zeta) are stable polynomials and

d_Q^+(\zeta) = \left( \zeta - e^{q_1 T} \right)^{\kappa_1} \cdots \left( \zeta - e^{q_m T} \right)^{\kappa_m} \left( \zeta - e^{-q_{m+1} T} \right)^{\kappa_{m+1}} \cdots \left( \zeta - e^{-q_r T} \right)^{\kappa_r} ,

i.e., the numbers e^{q_i T}, (i = 1, \ldots, m) and e^{-q_i T}, (i = m + 1, \ldots, r) are found among the roots of the polynomial r_L^+(\zeta) .

9.7 Characteristic Properties of Solution for Single-loop System

1. Using the above results, we can formulate some characteristic properties of the solution to the H2-problem for the single-loop system shown in Fig. 9.1. We assume that Assumptions I-III on page 350 and the conditions of Lemmata 9.14, 9.24 hold.

2. Let the quasi-polynomials A_M(\zeta) and A_L(\zeta) be positive on the unit circle. Then, there exist factorisations (9.68) and (9.100). Moreover, the optimal system matrix \Theta_o(\zeta) has the form

\Theta_o(\zeta) = \Gamma^{-1}(\zeta)\, R_+(\zeta)\, \Lambda^{-1}(\zeta) ,

where R_+(\zeta) is a polynomial matrix and the relations \deg \det \Gamma(\zeta) \le \chi and \deg \det \Lambda(\zeta) \le \chi - n hold. Due to Lemma 2.8, there exists an LMFD

R_+(\zeta)\, \Lambda^{-1}(\zeta) = \Lambda_1^{-1}(\zeta)\, R_{1+}(\zeta)

with \deg \det \Lambda_1(\zeta) = \deg \det \Lambda(\zeta) \le \chi - n. From the last two equations, we obtain the LMFD

\Theta_o(\zeta) = \left[ \Lambda_1(\zeta)\, \Gamma(\zeta) \right]^{-1} R_{1+}(\zeta) ,

where \deg \det \left[ \Lambda_1(\zeta)\, \Gamma(\zeta) \right] \le 2\chi - n.
On the other hand, let us have an ILMFD

\Theta_o(\zeta) = D_l^{-1}(\zeta)\, M_l(\zeta) .

Then the function

\frac{\det \Lambda_1(\zeta)\, \det \Gamma(\zeta)}{\det D_l(\zeta)} \sim \frac{\det \Lambda(\zeta)\, \det \Gamma(\zeta)}{\det D_l(\zeta)}

is a polynomial. Since the system under consideration is modal controllable, due to the properties of the system function, the polynomial \det D_l(\zeta) is equivalent to the characteristic polynomial \Delta_o(\zeta) of the optimal system. Then we obtain

\deg \Delta_o(\zeta) = \deg \det D_l(\zeta) \le 2\chi - n .

3. Let g_1, \ldots, g_\nu be the stable and g_{\nu+1}, \ldots, g_l the unstable poles of the matrix G(s); and let q_1, \ldots, q_m; q_{m+1}, \ldots, q_r be the corresponding sequences of poles of the matrix Q(s). Then the characteristic polynomial has, in the general case, its roots at the points \zeta_1 = e^{g_1 T}, \ldots, \zeta_\nu = e^{g_\nu T}; \zeta_{\nu+1} = e^{-g_{\nu+1} T}, \ldots, \zeta_l = e^{-g_l T}; and \bar\zeta_1 = e^{q_1 T}, \ldots, \bar\zeta_m = e^{q_m T}; \bar\zeta_{m+1} = e^{-q_{m+1} T}, \ldots, \bar\zeta_r = e^{-q_r T}.

4. The single-loop system shown in Fig. 9.1 will be called critical, if at least
one of the matrices Q(s), F (s) or G(s) has poles on the imaginary axis. These
poles will also be called critical. The following important conclusions stem
from the above reasoning.
a) The presence of critical poles of the matrix F (s) does not change the
H2 -optimisation procedure.
b) If any of the matrices Q(s) or G(s) has a critical pole, then the corre-
sponding factorisations (9.100) or (9.68) appear to be impossible, because
at least one of the polynomials det a2 () or det a1 () has roots on the unit
circle. In this case, formal following the Wiener-Hopf procedure leads to
a controller that does not stabilise.
c) As follows from the aforesaid, for solving the H2 -optimisation problems
for sampled-data systems with critical continuous-time elements, it is nec-
essary to take into account some special features of the system structure,
as well as the placement of the critical elements with respect to the system
input and output.

9.8 Simplified Method for Elementary System


1. In principle, for the H2-optimisation of the single-loop structure we can use the modified Wiener-Hopf method described in Section 8.6. However, in some special cases a simplified optimisation procedure can be used that does not need the inversion of the matrices b_l(\zeta)\, b'_l(\zeta^{-1}) or b'_r(\zeta^{-1})\, b_r(\zeta). In this section, such a possibility is illustrated by the example shown in Fig. 9.3, where F(s) \in R_{nm}(s). Hereinafter, such a system will be called elementary.

[Fig. 9.3 shows the elementary sampled-data loop: the input x enters a summing junction whose output drives F(s), producing the output y; the output is sampled and fed back (signals u, u1) through the discrete controller C to the summing junction.]

Fig. 9.3. Simplified sampled-data system

The elementary system is a special case of the single-loop system in Fig. 9.1, when Q(s) = I_n and G(s) = I_m. In this case, from (9.9) we have

K(s) = \begin{bmatrix} O_{mm} \\ F(s) \end{bmatrix} , \qquad L(s) = \begin{bmatrix} \beta I_m \\ F(s) \end{bmatrix} , \qquad M(s) = F(s) , \qquad N(s) = F(s) .   (9.102)
It is assumed that the matrix

NF (s)
F (s) =
dF (s)

with

dF (s) = (s f1 )1 (s f ) , 1 + . . . + = (9.103)

is strictly proper and normal. Moreover, the fractions

NF (s)NF (s) NF (s)NF (s)


F (s)F  (s) = , F  (s)F (s) =
dF (s)dF (s) dF (s)dF (s)

are assumed to be irreducible. If



g(s) = dF (s)dF (s) = (s g1 )1 (s g ) ,

we shall assume that

(gi) = (gi )(gi ) = 0,


(i = 1, . . . , )

and the set of numbers gi satisfy Conditions (6.106).

2. To solve the H2-optimisation problem, we apply the general relations of Chapter 8. Hereby, Equation (9.102) leads to a number of serious simplifications.
Using (9.102), we have

L'(-s)\, L(s) = \beta^2 I_m + F'(-s)\, F(s) .

Then,

D_{L'L}(T, s, 0) = \beta^2\, \frac{1}{T} \sum_{k=-\infty}^{\infty} \mu(s + kj\omega)\, \mu(-s - kj\omega) + D_{F'F}(T, s, 0) .

This series can easily be summed. Indeed, using (6.36) and (6.39), we obtain

\frac{1}{T} \sum_{k=-\infty}^{\infty} \mu(s + kj\omega)\, \mu(-s - kj\omega) = \frac{1}{T} \sum_{k=-\infty}^{\infty} \mu(s + kj\omega) \int_0^T e^{(s + kj\omega) t}\, m(t)\, dt
= \int_0^T \left[ \frac{1}{T} \sum_{k=-\infty}^{\infty} \mu(s + kj\omega)\, e^{(s + kj\omega) t} \right] m(t)\, dt = \int_0^T m^2(t)\, dt = \tilde m_2 .

Hence

D_{L'L}(T, s, 0) = \beta^2 \tilde m_2 + D_{F'F}(T, s, 0) .
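The summation above can be verified by truncating the series. The sketch below takes the zero-order hold m(t) = 1 on [0, T] (an assumed special case), for which μ(s) = (1 − e^{-sT})/s and the sum equals ∫₀ᵀ m²(t) dt = T:

```python
import numpy as np

# Truncated numeric check of (1/T) * sum_k mu(s + kjw) mu(-s - kjw) = T
# for the zero-order hold m(t) = 1 on [0, T].
T = 1.0
w = 2.0 * np.pi / T

def mu(s):
    return (1.0 - np.exp(-s * T)) / s

s = 0.3 + 0.2j                          # arbitrary test point, not a pole
k = np.arange(-20000, 20001)
terms = mu(s + 1j * k * w) * mu(-s - 1j * k * w)
total = terms.sum() / T
assert abs(total - T) < 1e-3            # terms decay like 1/k^2
```

The slow 1/k² decay of the terms is why the closed-form summation used in the text is preferable to direct numerical summation in practice.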
Let us have a minimal standard realisation

F(s) = C (sI - A)^{-1} B

and the IMFDs

C \left( \zeta I - e^{AT} \right)^{-1} = a_l^{-1}(\zeta)\, b_l(\zeta) , \qquad \left( \zeta I - e^{AT} \right)^{-1} B = b_r(\zeta)\, a_r^{-1}(\zeta) ,

where

\det a_l(\zeta) \sim \det a_r(\zeta) \sim \left( \zeta - e^{f_1 T} \right)^{\nu_1} \cdots \left( \zeta - e^{f_l T} \right)^{\nu_l} = \Delta_F(\zeta) .

In general, the determinant of the matrix

A_L(\zeta) = \frac{1}{T}\, \bar a_r(\zeta)\, D_{L'L}(T, \zeta, 0)\, a_r(\zeta) = \beta^2 \tilde m_2\, \frac{1}{T}\, \bar a_r(\zeta)\, a_r(\zeta) + \frac{1}{T}\, \bar a_r(\zeta)\, D_{F'F}(T, \zeta, 0)\, a_r(\zeta)   (9.104)

will not vanish at the roots of the function \det a_r(\zeta)\, \det \bar a_r(\zeta), because these roots are cancelled in the second summand.
Similarly, the determinant of the quasi-polynomial

A_M(\zeta) = a_l(\zeta)\, D_{FF'}(T, \zeta, 0)\, \bar a_l(\zeta)   (9.105)

is not zero at these points due to cancellations on the right-hand side.

3.
Theorem 9.31. Let the assumptions formulated above in this section hold. Let the quasi-polynomials (9.104) and (9.105) be positive on the unit circle, so that there exist factorisations (9.68) and (9.100). Let also the set of eigenvalues of the matrices \Gamma(\zeta) and \Lambda(\zeta) not include the numbers \zeta_i = e^{f_i T}, where the f_i are the roots of the polynomial (9.103). Then the following propositions hold:
a) The matrix

R_2(\zeta) = \frac{1}{T}\, \tilde\Gamma^{-1}(\zeta)\, \bar a_r(\zeta)\, D_{F'FF'}(T, \zeta, 0)\, \bar a_l(\zeta)\, \tilde\Lambda^{-1}(\zeta)   (9.106)

admits a unique separation

R_2(\zeta) = R_{21}(\zeta) + R_{22}(\zeta) ,   (9.107)

where R_{22}(\zeta) is a strictly proper rational matrix having only unstable poles; it is analytical at the points \zeta_i = e^{f_i T}. Moreover, R_{21}(\zeta) is a rational matrix having its poles at the points \zeta_i.

b) The transfer function of the optimal controller w_{do}(ζ) is given by

$$ w_{do}(\zeta) = V_{2o}(\zeta)\,V_{1o}^{-1}(\zeta)\,, \tag{9.108} $$

where

$$ V_{1o}(\zeta) = a_l^{-1}(\zeta)\,b_r(\zeta) - \Lambda^{-1}(\zeta)\,R_{21}(\zeta)\,\Pi^{-1}(\zeta)\,, \qquad
 V_{2o}(\zeta) = a_r(\zeta)\,\Lambda^{-1}(\zeta)\,R_{21}(\zeta)\,\Pi^{-1}(\zeta)\,. \tag{9.109} $$

c) The matrices (9.109) are stable and analytical at the points ζ_i, and the
set of their poles is included in the set of poles of the matrices Λ^{-1}(ζ)
and Π^{-1}(ζ).
d) The characteristic polynomial of the optimal system Δ_o(ζ) is a divisor of
the polynomial det Λ(ζ) det Π(ζ).

Proof. Applying (8.105)-(8.110) to the case under consideration, we have

$$ R(\zeta) = \Lambda^{-1}(\zeta)\,C(\zeta)\,\Pi^{-1}(\zeta) = R_1(\zeta) + R_2(\zeta)\,, \tag{9.110} $$

where

$$ R_1(\zeta) = \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\theta_{0r}(\zeta)\,\Pi(\zeta) \tag{9.111} $$

and the matrix R_2(ζ) is given by (9.106). Under the given assumptions, owing
to Remark 8.19, the matrix C(ζ) is a quasi-polynomial. Therefore, the matrix
R(ζ) can have unstable poles only at the point ζ = 0 and at the poles of
the matrices Λ^{-1}(ζ) and Π^{-1}(ζ). Hence, under the given assumptions, Matrix
(9.110) is analytical at the points ζ_i = e^{-f_i T}. Simultaneously, all nonzero
poles of the matrix

$$ \bar a_r(\zeta)\,D_{F^\sim F F^\sim}(T,\zeta,0)\,\bar a_l(\zeta) $$

belong to the set of the numbers ζ_i, because the remaining poles are cancelled
against the factors ā_r(ζ) and ā_l(ζ). Then it follows immediately that Matrix
(9.106) admits a unique separation (9.107). Using (9.111) and (9.107), from
(9.110) we obtain

$$ R(\zeta) = \left[\,\Lambda(\zeta)\,a_r^{-1}(\zeta)\,\theta_{0r}(\zeta)\,\Pi(\zeta) + R_{21}(\zeta)\,\right] + R_{22}(\zeta)\,. \tag{9.112} $$

By construction, R_{22}(ζ) is a strictly proper rational matrix whose poles in-
clude all unstable poles of the matrix R(ζ). Also by construction, the
expression in the square brackets can have poles at the points ζ_i. But
under the given assumptions, the matrix R(ζ) is analytical at these points.
Hence the matrix in the square brackets in (9.112) is a polynomial. Then the
right-hand side of (9.112) coincides with the principal separation (9.28), and
from (8.115) we obtain

$$ R_+(\zeta) = \Lambda(\zeta)\,a_r^{-1}(\zeta)\,\theta_{0r}(\zeta)\,\Pi(\zeta) + R_{21}(\zeta)\,, \qquad
 R_-(\zeta) = R_{22}(\zeta)\,. $$

Using (8.95), we find the optimal system matrix

$$ \Theta_o(\zeta) = a_r^{-1}(\zeta)\,\theta_{0r}(\zeta) + \Lambda^{-1}(\zeta)\,R_{21}(\zeta)\,\Pi^{-1}(\zeta)\,, $$

which is stable and analytical at the points ζ_i. Therefore, the matrices (9.109)
calculated by (8.117)-(8.118) are stable and analytical at the points ζ_i. The
remaining claims of the theorem follow from the constructions of Section 8.7.

Example 9.32. Consider the simple SISO system shown in Fig. 9.4 with

[Figure: single-loop block diagram - the plant K/(s - a) is driven through a
summing junction by the input x and by the feedback of the discrete
controller C, which forms u from the sampled output]

Fig. 9.4. Example of an elementary sampled-data system

$$ F(s) = \frac{K}{s-a}\,, $$

where K and a are constants. Moreover, assume that x(t) is unit white noise.
It is required to find the transfer function of a discrete controller w_{do}(ζ) which
stabilises the closed-loop system and minimises the value

$$ S^2 = \sigma^2\,d_{u_1} + d_y\,. $$

In the given case, from (6.72) and (6.86) it follows that

$$ D_F(T,s,t) = \frac{K\,e^{at}}{1 - e^{aT}e^{-sT}}\,, \qquad 0 < t < T\,, $$
$$ D_{F\mu}(T,s,t) = \frac{K\,\mu(a)\,e^{at}}{e^{sT}e^{-aT} - 1} + K\int_0^t e^{a(t-\tau)}\,m(\tau)\,d\tau\,, \qquad 0 \le t \le T\,. \tag{9.113} $$

Moreover,

$$ D_{F\mu}(T,\zeta,0) = \frac{K\,\mu(a)\,e^{aT}\,\zeta}{1 - \zeta e^{aT}}\,. $$

Hence we can take

$$ a_l(\zeta) = 1 - \zeta e^{aT}\,, \qquad b_l(\zeta) = K\,\mu(a)\,e^{aT}\,\zeta\,, $$
$$ a_r(\zeta) = a_l(\zeta)\,, \qquad b_r(\zeta) = b_l(\zeta)\,. \tag{9.114} $$

As follows from (9.51), in this case

$$ D_{F F^\sim}(T,\zeta,0) = \frac{d}{(1 - \zeta e^{aT})(1 - \zeta^{-1} e^{aT})}\,, \tag{9.115} $$

where d > 0 is a known constant. Moreover, using (9.113), (9.78) and (9.104),
it can easily be shown that

$$ D_{L^\sim L}(T,\zeta,0) = \sigma^2\,\|m\|^2 + D_{F^\sim F}(T,\zeta,0)
 = \frac{q\,(1-\beta\zeta)(1-\beta\zeta^{-1})}{(1 - \zeta e^{aT})(1 - \zeta^{-1} e^{aT})}\,, \tag{9.116} $$

where q > 0 and β are constants such that |β| < 1.
As follows from (9.113) and (9.78),

$$ \bar a_l(\zeta) = \bar a_r(\zeta) = 1 - \zeta^{-1} e^{aT} = \frac{\zeta - e^{aT}}{\zeta}\,. \tag{9.117} $$

Then using (8.85), (9.115) and (9.116), we find

$$ A_M(\zeta) = d\,, \qquad A_L(\zeta) = K_1\,(1-\beta\zeta)(1-\beta\zeta^{-1})\,, $$

where K_1 > 0 is a constant. Thus, we obtain that in the factorisations (9.68),
(9.100), we can take

$$ \Pi(\zeta) = \gamma_1\,, \qquad \Lambda(\zeta) = \gamma_2\,(1-\beta\zeta)\,, \tag{9.118} $$

where γ_1 and γ_2 are real constants.


For further calculations, we notice that in the given case, Formulae (6.92)
and (6.93) yield

$$ D_{F F^\sim F}(T,\zeta,0) = \frac{\epsilon_2 + \epsilon_1\zeta + \epsilon_0\zeta^2}{(1-\zeta e^{aT})^2\,(1-\zeta^{-1}e^{aT})} \tag{9.119} $$

with constants ε_0, ε_1, and ε_2. Since

$$ D_{F^\sim F F^\sim}(T,\zeta,0) = D_{F F^\sim F}(T,\zeta^{-1},0)\,, $$

from (9.119) we find

$$ D_{F^\sim F F^\sim}(T,\zeta,0) = \frac{\epsilon_0 + \epsilon_1\zeta + \epsilon_2\zeta^2}{(\zeta - e^{aT})^2\,(1-\zeta e^{aT})}\,. $$

Hence using (9.117), we obtain

$$ \frac{1}{T}\,\bar a_r(\zeta)\,D_{F^\sim F F^\sim}(T,\zeta,0)\,\bar a_l(\zeta)
 = \frac{\epsilon_0 + \epsilon_1\zeta + \epsilon_2\zeta^2}{T\,\zeta^2\,(1-\zeta e^{aT})}\,. \tag{9.120} $$

Owing to

$$ \bar\Lambda(\zeta) = \gamma_2\,(1-\beta\zeta^{-1}) = \frac{\gamma_2\,(\zeta-\beta)}{\zeta}\,, \qquad \bar\Pi(\zeta) = \gamma_1\,, $$

and using (9.120), we find the function (9.106) in the form

$$ R_2(\zeta) = \frac{n_0 + n_1\zeta + n_2\zeta^2}{\zeta\,(\zeta-\beta)\,(1-\zeta e^{aT})}\,, $$

where n_0, n_1, n_2 are known constants. Performing the separation (9.107) with
|β| < 1, we obtain

$$ R_{21}(\zeta) = \frac{\eta}{1-\zeta e^{aT}}\,, $$

where η is a known constant. From this form and (9.118), we find

$$ \Lambda^{-1}(\zeta)\,R_{21}(\zeta)\,\Pi^{-1}(\zeta) = \frac{\tilde\gamma_1}{(1-\zeta e^{aT})(1-\beta\zeta)} \tag{9.121} $$

with a known constant γ̃_1. Taking into account (9.114), we obtain the function
V_{1o}(ζ) in (9.109):

$$ V_{1o}(\zeta) = \frac{K\,\mu(a)\,e^{aT}\,\zeta}{1-\zeta e^{aT}} - \frac{\tilde\gamma_1}{(1-\zeta e^{aT})(1-\beta\zeta)}\,. $$

Due to Theorem 9.31, the function V_{1o}(ζ) is analytical at the point
ζ = e^{-aT}. Hence the numerator of V_{1o}(ζ), reduced to the common
denominator, must vanish at ζ = e^{-aT}, which yields a linear equation
relating the constants. From the last two equations, we receive

$$ V_{1o}(\zeta) = \frac{\tilde\gamma_2}{1-\beta\zeta} $$

with a known constant γ̃_2. Furthermore, using (9.121) and (9.109), we obtain

$$ V_{2o}(\zeta) = \frac{\gamma_3}{1-\beta\zeta}\,, \qquad \gamma_3 = \text{const.} $$

Therefore, Formula (9.108) yields

$$ w_{do}(\zeta) = \frac{\gamma_3}{\tilde\gamma_2} = \text{const.} $$

and the characteristic polynomial of the closed loop appears as

$$ \Delta_o(\zeta) \sim 1 - \beta\zeta\,. $$

10
L2-Design of SD Systems for 0 ≤ t < ∞

10.1 Problem Statement


1. Let the input of the standard sampled-data system for t ≥ 0 be acted
upon by a vector input signal x(t) of dimension ℓ × 1, and let z(t) be the r × 1
output vector under zero initial energy. Then the system performance can be
evaluated by the value

$$ J = \int_0^\infty z'(t)\,z(t)\,dt = \sum_{i=1}^{r}\int_0^\infty z_i^2(t)\,dt\,, \tag{10.1} $$

where zi (t), (i = 1, . . . , r) are the components of the output vector z(t). It is


assumed that the conditions for the convergence of the integral (10.1) hold.
It is known [206] that the value

$$ \|z\|_{L_2} = +\sqrt{J} $$

determines the L2-norm of the output signal z(t). Thus, the following optimi-
sation problem is formulated.
L2 -problem. Given the matrix w(p) in (7.2), the input vector x(t),
the sampling period T and the form of the control impulse m(t). Find
a stabilising controller (8.35) that ensures the internal stability of the
standard sampled-data system and the minimal value of ‖z(t)‖_{L₂}.

2. It should be noted that the general problem formulated above includes, for
different choices of the vector z(t), many important applied problems, in-
cluding the tracking problem. Indeed, let us consider the block diagram shown
in Fig. 10.1, where the dotted box denotes the initial standard system, which
will be called nominal. Moreover, in Fig. 10.1, Q(p) denotes the transfer matrix
of an ideal transition. To evaluate the tracking performance, it is natural to
use the value

[Figure: block diagram - the nominal standard system w(p) (dotted box) with
input x and outputs z, y; the discrete controller C feeds u back; the block
Q(p) forms the ideal response z̄ from x, and the error e = z - z̄ is taken
at a summing junction]

Fig. 10.1. Tracking control loop

 

$$ J_e = \int_0^\infty e'(t)\,e(t)\,dt = \int_0^\infty \left[z(t)-\bar z(t)\right]'\left[z(t)-\bar z(t)\right]dt\,. \tag{10.2} $$

If the PTM of the nominal system w(s,t) has the form (7.30)

$$ w(s,t) = \varphi_{L\mu}(T,s,t)\,R_N(s)\,M(s) + K(s)\,, \tag{10.3} $$

then the tracking error e(t) can be considered as the result of transforming the
input signal x(t) by a new standard sampled-data system with the PTM

$$ w_e(s,t) = w(s,t) - Q(s) = \varphi_{L\mu}(T,s,t)\,R_N(s)\,M(s) + K(s) - Q(s)\,. \tag{10.4} $$

This system is fenced by a dashed line in Fig. 10.1. The standard sampled-
data system with the PTM (10.4) is associated with a continuous-time LTI
plant having the transfer matrix

$$ w_e(p) = \begin{bmatrix} K(p) - Q(p) & L(p) \\ M(p) & N(p) \end{bmatrix}. $$

Then, the integral (10.2) coincides with (10.1) for the new standard sampled-
data system.

3. Under some restrictions formulated below and using Parseval's formula
[181], the integral (10.1) can be transformed into

$$ J = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} Z'(-s)\,Z(s)\,ds\,, \tag{10.5} $$

where Z(s) is the Laplace transform of the output z(t). Thus, the L2-problem
formulated above can be considered as a problem of choosing a stabilising
controller which minimises the integral (10.5). This problem will be considered
in the present chapter.
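For a scalar sanity check of (10.5) (an added illustration; the signal below is a hypothetical choice, not from the text), take z(t) = e^{−t}, so Z(s) = 1/(s+1) and J = ∫₀^∞ e^{−2t} dt = 1/2; along s = jy the integrand becomes |Z(jy)|²/(2π), and a truncated Riemann sum recovers the same value:

```python
import numpy as np

y = np.linspace(-2000.0, 2000.0, 2_000_001)   # grid on the imaginary axis s = jy
dy = y[1] - y[0]
Zjy = 1.0 / (1j * y + 1.0)                    # Z(s) = 1/(s+1), i.e. z(t) = e^{-t}

# J = (1/2pi) * int |Z(jy)|^2 dy, Riemann-sum approximation of (10.5)
J_freq = dy * np.sum(np.abs(Zjy) ** 2) / (2 * np.pi)
J_time = 0.5                                  # int_0^inf e^{-2t} dt
print(abs(J_freq - J_time))                   # dominated by the |y| > 2000 tail
```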

10.2 Pseudo-rational Laplace Transforms


1. According to the above statement of the problem, to consider the inte-
gral (10.5), we have to find the Laplace transform of the output z(t) for the
standard sampled-data system under zero initial energy and investigate its
properties as a function of s. In this section, we describe some properties of a
class of transforms used below.

2. Henceforth, we denote by Λ_α the set of functions (matrices) f(t) that are
zero for t < 0, have bounded variation for t ≥ 0 and satisfy the estimate

$$ |f(t)| < d\,e^{\alpha t}\,, \qquad t > 0\,, $$

where d > 0 and α are constants. It is known [39] that for any function
f(t) ∈ Λ_α and Re s > α, there exists the Laplace transform

$$ F(s) = \int_0^\infty f(t)\,e^{-st}\,dt \tag{10.6} $$

and for any t ≥ 0 the following inversion formula holds:

$$ \hat f(t) = \lim_{a\to\infty}\frac{1}{2\pi j}\int_{c-ja}^{c+ja} F(s)\,e^{st}\,ds\,, \qquad c > \alpha\,, $$

where

$$ \hat f(t) = \frac{f(t-0) + f(t+0)}{2}\,. $$
As follows from the general properties of the Laplace transformation [39],
under the given assumptions in any half-plane Re s ≥ α₁ > α, we have

$$ \lim_{s\to\infty} |F(s)| = 0 $$

for s increasing to infinity along any contour. Then for f(t) ∈ Λ_α and s =
x + jy with x ≥ α₁ > α, the following estimate holds [22]:

$$ |F(x + kjy)| \le \frac{c}{|k|}\,, \qquad c = \text{const.} \tag{10.7} $$

Hereinafter, the elements f(t) of the set Λ_α will be called originals and denoted
by small letters, while the corresponding Laplace transforms (10.6) will be
called images and denoted by capital letters.

3. For any original f(t) ∈ Λ_α and Re s ≥ α₁ > α, the following series con-
verges:

$$ \varphi_f(T,s,t) = \sum_{k=-\infty}^{\infty} f(t+kT)\,e^{-s(t+kT)}\,, \qquad -\infty < t < \infty\,. \tag{10.8} $$

According to [148], φ_f(T,s,t) is the displaced pulse frequency response
(DPFR). The function φ_f(T,s,t) is periodic in t; therefore, it can be as-
sociated with a Fourier series of the form (6.15):

$$ \varphi_F(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} F(s+kj\omega)\,e^{kj\omega t}\,, \qquad \omega = \frac{2\pi}{T}\,. \tag{10.9} $$

Hereinafter, we shall assume that the function φ_f(T,s,t) is of bounded vari-
ation over the interval 0 ≤ t ≤ T. This holds as a rule in applications. Then,
we have the equality

$$ \varphi_f(T,s,t) = \varphi_F(T,s,t)\,, $$

which should be understood in the sense that

$$ \varphi_f(T,s,t_0) = \varphi_F(T,s,t_0) $$

for any t₀ where the function φ_f(T,s,t) is continuous, and

$$ \varphi_F(T,s,t_0) = \frac{\varphi_f(T,s,t_0-0) + \varphi_f(T,s,t_0+0)}{2}\,, $$

if φ_f(T,s,t) at t = t₀ has a break of the first kind (finite jump).
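The averaging rule above can be observed numerically. In the sketch below (an added illustration with assumed values), f(t) = e^{at} for t > 0 and f(t) = 0 for t < 0, so f has a finite jump at t = 0; the symmetric partial sums of the Fourier series (10.9) converge there to the half-sum, i.e. to the one-sided series (10.8) minus f(0+)/2:

```python
import numpy as np

T, a, s = 1.0, -0.4, 0.3       # sampling period, pole of F, test point (assumed)
w = 2 * np.pi / T
k = np.arange(-10_000, 10_001)

# phi_F(T, s, 0): symmetric partial sum of (1/T) sum_k F(s + k j w), F(s) = 1/(s - a)
phi_F = np.sum(1.0 / (s + 1j * w * k - a)) / T

# one-sided sum phi_f(T, s, 0) = sum_{k>=0} f(kT) e^{-skT}, counting f(0) = 1 fully
phi_f = 1.0 / (1.0 - np.exp((a - s) * T))

# at the jump t = 0 the Fourier series yields the half-sum (f(0-) = 0, f(0+) = 1)
print(abs(phi_F.real - (phi_f - 0.5)))   # small truncation error
```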

4. Together with the DPFR (10.8), we consider the discrete Laplace trans-
form (DLT) D_f(T,s,t) of the function f(t):

$$ D_f(T,s,t) = \sum_{k=-\infty}^{\infty} f(t+kT)\,e^{-ksT} = \varphi_f(T,s,t)\,e^{st} \tag{10.10} $$

and the associated series (6.67)

$$ D_F(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} F(s+kj\omega)\,e^{(s+kj\omega)t} = \varphi_F(T,s,t)\,e^{st}\,, \tag{10.11} $$

which will be called the discrete Laplace transform of the image F(s).
Let us have a strictly proper rational matrix

$$ F(s) = \frac{N_F(s)}{d_F(s)}\,, $$

where N_F(s) is a polynomial matrix and

$$ d_F(s) = (s-f_1)^{\nu_1}\cdots(s-f_\rho)^{\nu_\rho}\,, \qquad \nu_1 + \ldots + \nu_\rho = \chi\,, $$

where χ = deg d_F(s). Then as follows from (6.70)-(6.72), the function (matrix)

$$ D_F(T,\zeta,t) = D_F(T,s,t)\,\big|_{\,e^{-sT}=\zeta} $$

is rational in ζ for all t, and for 0 < t < T it can be represented in the form

$$ D_F(T,\zeta,t) = \frac{\sum_{k=0}^{m} d_k(t)\,\zeta^k}{\Delta_F(\zeta)}\,, \tag{10.12} $$

where

$$ \Delta_F(\zeta) = \left(1 - \zeta e^{f_1 T}\right)^{\nu_1}\cdots\left(1 - \zeta e^{f_\rho T}\right)^{\nu_\rho} $$

is the discretisation of the polynomial d_F(s), and d_k(t) are functions of
bounded variation on the interval 0 < t < T.

5. Below it will be shown that some images F(s), which are not rational
functions, may possess a DLT of the form (10.12).
Henceforth, the image F(s) will be called pseudo-rational if its DLT for
0 < t < T can be represented in the form (10.12). The set of all pseudo-
rational images is characterised by the following lemma.
Lemma 10.1. A necessary and sufficient condition for an image F(s) to be
pseudo-rational is that it can be represented as

$$ F(s) = \frac{\sum_{k=0}^{m} e^{-ksT}\int_0^T d_k(t)\,e^{-st}\,dt}{\Delta_F(s)}\,, \tag{10.13} $$

where

$$ \Delta_F(s) = \Delta_F(\zeta)\,\big|_{\,\zeta=e^{-sT}} = \left(1 - e^{-sT}e^{f_1 T}\right)^{\nu_1}\cdots\left(1 - e^{-sT}e^{f_\rho T}\right)^{\nu_\rho}\,. $$

Proof. Necessity: Let the matrix D_F(T,ζ,t) have the form (10.12). Then we
have

$$ D_F(T,s,t) = D_F(T,\zeta,t)\,\big|_{\,\zeta=e^{-sT}} = \frac{\sum_{k=0}^{m} e^{-ksT}\,d_k(t)}{\Delta_F(s)}\,. \tag{10.14} $$

Moreover, for the image F(s) we have [148]

$$ F(s) = \int_0^T D_F(T,s,t)\,e^{-st}\,dt\,. \tag{10.15} $$

With respect to (10.14), this yields (10.13).

Sufficiency: Denote

$$ D_k(s) = \int_0^T d_k(t)\,e^{-st}\,dt\,. $$

Then, (6.36)-(6.39) yield

$$ \frac{1}{T}\sum_{n=-\infty}^{\infty} D_k(s+nj\omega)\,e^{(s+nj\omega)t} = d_k(t)\,, \qquad 0 < t < T\,. $$

Hence using (10.13) and (10.11) for 0 < t < T, we obtain

$$ D_F(T,s,t) = \frac{\sum_{k=0}^{m} e^{-ksT}\,\frac{1}{T}\sum_{n=-\infty}^{\infty} D_k(s+nj\omega)\,e^{(s+nj\omega)t}}{\Delta_F(s)}
 = \frac{\sum_{k=0}^{m} e^{-ksT}\,d_k(t)}{\Delta_F(s)}\,, $$

which proves the claim.
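As a quick worked instance of Lemma 10.1 (an added special case, not from the text), take the scalar image F(s) = 1/(s−f), whose original is e^{ft}. Then m = 0, d₀(t) = e^{ft}, Δ_F(s) = 1 − e^{−sT}e^{fT}, and (10.13) indeed returns F(s):

```latex
\frac{\int_0^T e^{ft}\,e^{-st}\,dt}{1 - e^{(f-s)T}}
 = \frac{\bigl(1 - e^{(f-s)T}\bigr)/(s-f)}{1 - e^{(f-s)T}}
 = \frac{1}{s-f}\,.
```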
Lemma 10.2. Let the images F (s) and G(s) be pseudo-rational. Then, the
product H(s) = F (s)G(s) is pseudo-rational.
Proof. Together with (10.12), let us have

$$ D_G(T,s,t) = \frac{\sum_{i=0}^{r} e^{-isT}\,b_i(t)}{\Delta_G(s)}\,, \qquad 0 < t < T\,, \tag{10.16} $$

where

$$ \Delta_G(s) = \left(1 - e^{-sT}e^{g_1 T}\right)^{\kappa_1}\cdots\left(1 - e^{-sT}e^{g_\nu T}\right)^{\kappa_\nu}\,. $$

We will show that

$$ D_{FG}(T,s,t) = \int_0^T D_F(T,s,t-\tau)\,D_G(T,s,\tau)\,d\tau
 = \int_0^T D_F(T,s,\tau)\,D_G(T,s,t-\tau)\,d\tau\,. \tag{10.17} $$

Indeed, we have

$$ D_F(T,s,t-\tau) = \frac{1}{T}\sum_{k=-\infty}^{\infty} F(s+kj\omega)\,e^{(s+kj\omega)(t-\tau)}\,, $$
$$ D_G(T,s,\tau) = \frac{1}{T}\sum_{k=-\infty}^{\infty} G(s+kj\omega)\,e^{(s+kj\omega)\tau}\,. $$

Substituting this into the middle part of (10.17) and integrating, we prove
the first equality in (10.17). The second one is proved in a similar way.
Using known properties of the DLT and (10.14), we have

$$ D_F(T,s,t-\tau) = \frac{\sum_{k=0}^{m} e^{-ksT}\,d_k(t-\tau)}{\Delta_F(s)}\,, \qquad 0 < t-\tau < T\,, $$

$$ D_F(T,s,t-\tau) = D_F(T,s,t-\tau+T)\,e^{-sT}
 = \frac{\sum_{k=0}^{m} e^{-(k+1)sT}\,d_k(t-\tau+T)}{\Delta_F(s)}\,, \qquad -T < t-\tau < 0\,. $$

Substituting these forms and (10.16) into (10.17), we obtain an expression of
the form

$$ D_{FG}(T,s,t) = \frac{\sum_{k=0}^{q} e^{-ksT}\,h_k(t)}{\Delta_F(s)\,\Delta_G(s)}\,, \qquad 0 < t < T\,, \tag{10.18} $$

where h_k(t) are known functions. This expression has the form (10.14), so
that the image H(s) = F(s)G(s) is pseudo-rational.

Corollary 10.3. Using estimates similar to (10.7), it can be shown that in
the given case the function D_{FG}(T,s,t) is continuous for all t. Then from
(10.18) for t = 0 and e^{-sT} = ζ, we obtain

$$ D_{FG}(T,\zeta,0) = \frac{N_{FG}(\zeta)}{\Delta_F(\zeta)\,\Delta_G(\zeta)}\,, $$

where N_{FG}(ζ) is a polynomial matrix.
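The convolution formula (10.17) can also be verified numerically for scalar exponential originals (an added sketch with assumed data): with f(t) = e^{at} and g(t) = e^{bt}, both DLTs and the DLT of h = f * g are available in closed form, and quadrature of the right-hand side of (10.17) reproduces D_{FG}(T,s,t):

```python
import numpy as np

T, a, b, s, t = 1.0, -0.5, -1.2, 0.3, 0.4   # assumed scalar data, 0 < t < T
qa, qb = np.exp((a - s) * T), np.exp((b - s) * T)

def Df(tt):
    # DLT of f(t) = e^{at} on -T < tt < T (one-sided original, branch at tt = 0)
    return np.where(tt > 0, np.exp(a * tt) / (1 - qa),
                    np.exp(a * tt) * qa / (1 - qa))

def Dg(tt):
    return np.exp(b * tt) / (1 - qb)        # needed only for 0 < tt < T

def midpoint(fn, lo, hi, n=5000):
    x = lo + (hi - lo) * (np.arange(n) + 0.5) / n
    return (hi - lo) * np.mean(fn(x))

# right-hand side of (10.17), split at the kink tau = t
rhs = (midpoint(lambda tau: Df(t - tau) * Dg(tau), 0.0, t) +
       midpoint(lambda tau: Df(t - tau) * Dg(tau), t, T))

# left-hand side: DLT of h = f*g, h(t) = (e^{at} - e^{bt})/(a - b)
lhs = (np.exp(a * t) / (1 - qa) - np.exp(b * t) / (1 - qb)) / (a - b)
print(abs(rhs - lhs))                       # quadrature error only
```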

6. We note that many important originals encountered in applications have
pseudo-rational images, including originals f(t) of finite length. Indeed, suppose
f(t) = 0 outside the interval 0 ≤ t ≤ a. Without loss of generality, we can
assume a = nT, where n ≥ 1 is an integer. Moreover, if

$$ f(t) = f_i(t)\,, \qquad iT < t < (i+1)T\,, \quad (i = 0, 1, \ldots, n-1)\,, $$

then from (10.10) and (10.11) we have

$$ D_f(T,s,t) = D_F(T,s,t) = \sum_{i=0}^{n-1} f_i(t+iT)\,e^{-isT}\,, \qquad 0 < t < T\,, $$

whence it immediately follows that the Laplace transform F(s) of a finite-
length impulse is a pseudo-rational function, because the function D_F(T,ζ,t)
is a polynomial in ζ.
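As a concrete special case (added for illustration), take the rectangular pulse f(t) = 1 for 0 ≤ t < 2T and f(t) = 0 otherwise, i.e. n = 2 and f₀ = f₁ = 1. Then for 0 < t < T,

```latex
D_f(T,s,t) = f(t) + f(t+T)\,e^{-sT} = 1 + e^{-sT} = 1 + \zeta\,,
```

a polynomial in ζ, so the image F(s) = (1 + e^{-sT})(1 - e^{-sT})/s of this pulse is pseudo-rational with Δ_F(ζ) ≡ 1.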

10.3 Laplace Transforms of Standard SD System Output

1. Hereinafter we assume that the input x(t) of the system under consider-
ation is in Λ_α and its image X(s) is pseudo-rational, i.e.,

$$ D_X(T,s,t) = \frac{\sum_{k=0}^{\varkappa} e^{-ksT}\,x_k(t)}{\Delta_X(s)}\,, \qquad 0 < t < T\,, \tag{10.19} $$

where

$$ \Delta_X(s) = \left(1 - e^{-sT}e^{x_1 T}\right)^{q_1}\cdots\left(1 - e^{-sT}e^{x_\ell T}\right)^{q_\ell}\,. \tag{10.20} $$

The polynomial Δ_X(ζ) is assumed to be stable, i.e., free of roots in the closed
unit disk. As a special case, the vector (10.19) can be a polynomial in ζ =
e^{-sT}. The continuous-time parts of the standard sampled-data system will be
described in the form of the state equations

$$ \frac{dv}{dt} = Av + B_1 x + B_2 u\,, $$
$$ z = C_1 v + D_L u\,, \qquad y = C_2 v\,. \tag{10.21} $$

The equations of the digital controller have the form (7.4), (7.6) and (7.7):

$$ \xi_k = y(kT)\,, \qquad (k = 0, 1, \ldots) $$
$$ \alpha_0\psi_k + \ldots + \alpha_q\psi_{k-q} = \beta_0\xi_k + \ldots + \beta_q\xi_{k-q} \tag{10.22} $$
$$ u(t) = m(t - kT)\,\psi_k\,, \qquad kT < t < (k+1)T\,. $$

As follows from [148], for x(t) ∈ Λ_α all solutions z(t) of the system (10.21)-
(10.22) belong to a set Λ_γ, where γ is a sufficiently large number. In par-
ticular, if α < 0 and the system (10.21)-(10.22) is internally stable, then we
can take γ < 0.
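Equations (10.21)-(10.22) can be exercised on a minimal scalar instance (an added sketch; the plant, hold and controller below are assumed, not from the text): dv/dt = av + u with y = v, a zero-order hold m(t) ≡ 1, and the static control law ψ_k = −κ y(kT). Exact intersample integration gives the discrete model, and the deadbeat gain drives the sampled state to zero in one step:

```python
import numpy as np

a, T = 1.0, 0.5                      # unstable plant pole, sampling period
ead = np.exp(a * T)                  # e^{aT}
g = (ead - 1.0) / a                  # int_0^T e^{a(T-tau)} dtau  (ZOH input gain)
kappa = ead / g                      # deadbeat choice: e^{aT} - kappa*g = 0

v = 1.0                              # initial state (nonzero to see the decay)
states = [v]
for _ in range(5):                   # exact discrete recursion at the samples
    psi = -kappa * v                 # controller output held on [kT, (k+1)T)
    v = ead * v + g * psi            # v((k+1)T)
    states.append(v)

print(states[1])                     # zero up to round-off: deadbeat in one step
```

The choice of κ here is just the scalar pole-placement computation; any κ with |e^{aT} − κg| < 1 would give an internally stable sampled loop.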

2. The following theorem gives a general expression for the Laplace trans-
form of the output Z(s) under zero initial energy.

Theorem 10.4. For Re s > γ with sufficiently large γ, there exists the
Laplace transform of the solution z(t) of the system (10.21)-(10.22) under
zero initial energy. This image Z(s) has the form

$$ Z(s) = L(s)\,\mu(s)\,R_N(s)\,D_{MX}(T,s,0) + K(s)\,X(s)\,, \tag{10.23} $$

where

$$ K(s) = C_1(sI-A)^{-1}B_1\,, \qquad L(s) = C_1(sI-A)^{-1}B_2 + D_L\,, $$
$$ M(s) = C_2(sI-A)^{-1}B_1\,, \qquad N(s) = C_2(sI-A)^{-1}B_2 \tag{10.24} $$

and

$$ D_{MX}(T,s,0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} M(s+kj\omega)\,X(s+kj\omega)\,. $$

Moreover, as before, in (10.23) we have

$$ R_N(s) = w_d(s)\left[\,I_n - D_{N\mu}(T,s,0)\,w_d(s)\,\right]^{-1}\,, \tag{10.25} $$

where

$$ D_{N\mu}(T,s,0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} N(s+kj\omega)\,\mu(s+kj\omega) $$

and

$$ \mu(s) = \int_0^T e^{-s\tau}\,m(\tau)\,d\tau\,. \tag{10.26} $$

The matrix w_d(s) in (10.25) is determined by the relation

$$ w_d(s) = \left[\alpha(s)\right]^{-1}\beta(s)\,, \tag{10.27} $$

where

$$ \alpha(s) = \alpha_0 + \alpha_1 e^{-sT} + \ldots + \alpha_q e^{-qsT}\,, $$
$$ \beta(s) = \beta_0 + \beta_1 e^{-sT} + \ldots + \beta_q e^{-qsT}\,. $$
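As a simple illustration of (10.26) (an added special case): for the zero-order hold m(t) ≡ 1,

```latex
\mu(s) = \int_0^T e^{-s\tau}\,d\tau = \frac{1 - e^{-sT}}{s}\,,
```

an integral (entire) function of s, since the singularity at s = 0 is removable.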

Proof. a) Taking the Laplace images of the first equation in (10.21) under
zero initial conditions, we obtain

$$ sV(s) = AV(s) + B_2 U(s) + B_1 X(s)\,, $$

whence it follows that

$$ V(s) = w_2(s)\,U(s) + w_1(s)\,X(s) \tag{10.28} $$

with the notation

$$ w_2(s) = (sI-A)^{-1}B_2\,, \qquad w_1(s) = (sI-A)^{-1}B_1\,. \tag{10.29} $$

b) The image U(s) appearing in Equation (10.28) is determined by the rela-
tions

$$ U(s) = \int_0^\infty e^{-st}\,u(t)\,dt = \sum_{k=0}^{\infty}\int_{kT}^{(k+1)T} e^{-st}\,m(t-kT)\,dt\;\psi_k
 = \int_0^T e^{-st}\,m(t)\,dt\,\sum_{k=0}^{\infty}\psi_k\,e^{-ksT} = \mu(s)\,\psi^*(s)\,, \tag{10.30} $$

where

$$ \psi^*(s) = \sum_{k=0}^{\infty}\psi_k\,e^{-ksT} \tag{10.31} $$

is the discrete Laplace transform of the vector sequence {ψ_k} = {ψ(kT)},
(k = 0, 1, ...). From (10.28) and (10.30), we have

$$ V(s) = w_2(s)\,\mu(s)\,\psi^*(s) + w_1(s)\,X(s)\,. \tag{10.32} $$

c) Let us find an expression for the vector ψ*(s) appearing in (10.31). With
this aim in view, we notice that (10.32) yields, for k = 0, ±1, ...,

$$ V(s+kj\omega) = w_2(s+kj\omega)\,\mu(s+kj\omega)\,\psi^*(s) + w_1(s+kj\omega)\,X(s+kj\omega)\,. $$

Then, we find

$$ D_V(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} V(s+kj\omega)\,e^{(s+kj\omega)t}
 = D_{w_2\mu}(T,s,t)\,\psi^*(s) + D_{w_1 X}(T,s,t)\,. \tag{10.33} $$

Under the taken assumptions, the components of the vectors V(s+kjω)
vanish as |k|^{-2} when k increases. Therefore, the right side of (10.33) is
continuous in t and we can substitute t = 0 in (10.33):

$$ D_V(T,s,0) = D_{w_2\mu}(T,s,0)\,\psi^*(s) + D_{w_1 X}(T,s,0)\,. \tag{10.34} $$

Moreover, from (10.8) and (10.9) for t = 0, we have

$$ D_V(T,s,0) = \frac{1}{T}\sum_{k=-\infty}^{\infty} V(s+kj\omega) = \varphi_V(T,s,0) = \varphi_v(T,s,0)
 = \sum_{k=0}^{\infty} v(kT)\,e^{-ksT} = v^*(s)\,, \tag{10.35} $$

where v*(s) is the discrete Laplace transform of the sequence {v_k} =
{v(kT)}, (k = 0, 1, ...). With regard to (10.35), Relation (10.34) can be
written in the form

$$ v^*(s) = D_{w_2\mu}(T,s,0)\,\psi^*(s) + D_{w_1 X}(T,s,0)\,. \tag{10.36} $$

Further, taking the ζ-transform of the second equation in (10.22) under
zero initial energy and substituting e^{-sT} for ζ, we obtain

$$ \alpha(s)\,\psi^*(s) = \beta(s)\,y^*(s)\,, \tag{10.37} $$

where y*(s) is the discrete Laplace transform of the vector sequence
{y_k} = {y(kT)}, (k = 0, 1, ...). From (10.37), with regard to (10.27), we have

$$ \psi^*(s) = w_d(s)\,y^*(s)\,. \tag{10.38} $$

Substituting (10.38) into (10.36), we find

$$ v^*(s) = D_{w_2\mu}(T,s,0)\,w_d(s)\,y^*(s) + D_{w_1 X}(T,s,0)\,. $$

Multiplying from the left by the matrix C_2 and using the fact that

$$ C_2\,v^*(s) = y^*(s)\,, \tag{10.39} $$

from (10.24) we find

$$ y^*(s) = D_{N\mu}(T,s,0)\,w_d(s)\,y^*(s) + D_{MX}(T,s,0)\,. $$

Then,

$$ y^*(s) = \left[\,I_n - D_{N\mu}(T,s,0)\,w_d(s)\,\right]^{-1} D_{MX}(T,s,0)\,, $$

and with regard to (10.38), we find

$$ \psi^*(s) = w_d(s)\,y^*(s)
 = w_d(s)\left[\,I_n - D_{N\mu}(T,s,0)\,w_d(s)\,\right]^{-1} D_{MX}(T,s,0)
 = R_N(s)\,D_{MX}(T,s,0)\,. \tag{10.40} $$

d) Substituting (10.40) into (10.32), we have

$$ V(s) = w_2(s)\,\mu(s)\,R_N(s)\,D_{MX}(T,s,0) + w_1(s)\,X(s)\,. \tag{10.41} $$

Multiplying this equation from the left by C_1, we have

$$ C_1 V(s) = C_1(sI-A)^{-1}B_2\,\mu(s)\,R_N(s)\,D_{MX}(T,s,0) + K(s)\,X(s)\,. \tag{10.42} $$

But (10.21) yields

$$ Z(s) = C_1 V(s) + D_L\,U(s)\,. \tag{10.43} $$

From (10.30) and (10.40), it follows that

$$ U(s) = \mu(s)\,\psi^*(s) = \mu(s)\,R_N(s)\,D_{MX}(T,s,0)\,. $$

Finally, substituting (10.42) into (10.43), we obtain

$$ Z(s) = \left[\,C_1(sI-A)^{-1}B_2 + D_L\,\right]\mu(s)\,R_N(s)\,D_{MX}(T,s,0) + K(s)\,X(s)\,. $$

With respect to (10.24), this is equivalent to (10.23).

3. Equations (10.21) are associated with a realisation of the matrix w(p)
in (7.8). It is easily seen that the image Z(s) is independent of the specific form
of this realisation, because Formula (10.23) is completely determined by
Matrices (10.24), which are independent of the realisation of the matrix w(p).
Therefore, without loss of generality, we will assume that the pair
$\bigl(A,\,[\,B_1\;\;B_2\,]\bigr)$ is controllable and the pair
$\Bigl(A,\,\begin{bmatrix} C_1 \\ C_2 \end{bmatrix}\Bigr)$ is observable.

4. Equation (10.23) can be derived in an alternative way. As follows from
[148], under zero initial energy the connection between the input x(t) and
output z(t) of the standard sampled-data system is determined by a linear
periodic operator with the PTM w(s,t) defined by Formula (10.3). Moreover,
the discrete Laplace transform of the output D_Z(T,s,t) is given by the formula
[148]

$$ D_Z(T,s,t) = \frac{1}{T}\sum_{k=-\infty}^{\infty} Z(s+kj\omega)\,e^{(s+kj\omega)t}
 = \frac{1}{T}\sum_{k=-\infty}^{\infty} w(s+kj\omega, t)\,X(s+kj\omega)\,e^{(s+kj\omega)t}\,. \tag{10.44} $$

The image-vector Z(s) can be found from (10.15) as

$$ Z(s) = \int_0^T D_Z(T,s,t)\,e^{-st}\,dt\,. $$

Substituting here the right side of (10.44) and the expression for w(s,t) from
(10.3), after some transformations we obtain a result equivalent to (10.23).

10.4 Investigation of Poles of the Image Z(s)


1. From (10.23) it immediately follows that the image Z(s) is a meromor-
phic function of the variable s, i.e., all its singular points are poles. For an
effective application of Laplace transformation theory to the investigation
of sampled-data systems, it is important to study the properties of these
poles. At first glance, the image (10.23) must have poles at the poles
of the matrices L(s) and K(s), and at the poles of the matrix D_{MX}(T,s,0)
determined by M(s). This feature would make the application of (10.23) to the
solution of practical problems very difficult. Nevertheless, a more detailed in-
vestigation shows that, generally speaking, the image Z(s) can be free of poles
caused by the poles of the matrices K(s), L(s), and M(s). The present section
deals with this question in detail.

2.
Theorem 10.5. Denote by P_X the set of roots of the equation

$$ \Delta_X(s) = \left(1 - e^{-sT}e^{x_1 T}\right)^{q_1}\cdots\left(1 - e^{-sT}e^{x_\ell T}\right)^{q_\ell} = 0 $$

and by P_Δ the set of all roots of the equation

$$ \Delta(s) = \det Q(s,\alpha,\beta) = 0\,, $$

where Q(s,α,β) is Matrix (7.38):

$$ Q(s,\alpha,\beta) = \begin{bmatrix}
 I - e^{-sT}e^{AT} & O & -\,e^{-sT}e^{AT}\mu(A)B_2 \\
 -\,C_2 & I_n & O_{nm} \\
 O & -\,\beta(s) & \alpha(s)
 \end{bmatrix}. $$

Then the set of all poles of the image (10.23) belongs to P_X ∪ P_Δ.

Proof. a) From (6.75) and (6.76), it follows that

$$ \frac{1}{T}\sum_{k=-\infty}^{\infty}\bigl[(s+kj\omega)I - A\bigr]^{-1} e^{(s+kj\omega)(t-\tau)}
 = \begin{cases}
 \left(e^{sT}e^{-AT} - I\right)^{-1} e^{A(t-\tau)} + e^{A(t-\tau)}\,, & 0 < t-\tau < T\,, \\[2pt]
 \left(e^{sT}e^{-AT} - I\right)^{-1} e^{A(t-\tau)}\,, & -T < t-\tau < 0\,.
 \end{cases} \tag{10.45} $$
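Identity (10.45) can be checked numerically in the scalar case (an added sketch with assumed values, A = a): at t − τ = T/2 the symmetric partial sums of the series converge rapidly to the first branch of the right-hand side:

```python
import numpy as np

T, a, s = 1.0, -0.5, 0.3          # sampling period, scalar A = a, test point
w = 2 * np.pi / T
tmt = T / 2                        # t - tau, inside 0 < t - tau < T
k = np.arange(-5000, 5001)

# left-hand side of (10.45), symmetric truncation of the series
lhs = np.sum(np.exp((s + 1j * w * k) * tmt) / (s + 1j * w * k - a)) / T
# right-hand side, first branch: (e^{sT}e^{-aT} - 1)^{-1} e^{a(t-tau)} + e^{a(t-tau)}
rhs = np.exp(a * tmt) / (np.exp((s - a) * T) - 1) + np.exp(a * tmt)
print(abs(lhs.real - rhs))         # small truncation error
```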

b) Let us have a matrix

$$ f(t) = \begin{cases} O_{nm}\,, & t < 0\,, \\ C\,e^{At}B\,, & t > 0\,, \end{cases} $$

where A, B, and C are constant matrices of compatible dimensions. Then,
for sufficiently large Re s, we have

$$ F(s) = \int_0^\infty e^{-st}\,f(t)\,dt = C(sI-A)^{-1}B\,. $$

Moreover, using (10.45), we obtain

$$ D_F(T,s,t-\tau) = \begin{cases}
 C\left(e^{sT}e^{-AT} - I\right)^{-1} e^{A(t-\tau)}B + C\,e^{A(t-\tau)}B\,, & 0 < t-\tau < T\,, \\[2pt]
 C\left(e^{sT}e^{-AT} - I\right)^{-1} e^{A(t-\tau)}B\,, & -T < t-\tau < 0\,.
 \end{cases} \tag{10.46} $$

Let D_X(T,s,t) be the DLT of an image X(s). Then using (10.17), we
obtain

$$ D_{FX}(T,s,t) = \int_0^T D_F(T,s,t-\tau)\,D_X(T,s,\tau)\,d\tau\,. $$

Substituting here (10.46), we find

$$ D_{FX}(T,s,t) = C\left(e^{sT}e^{-AT} - I\right)^{-1}\int_0^T e^{A(t-\tau)}B\,D_X(T,s,\tau)\,d\tau
 + C\int_0^t e^{A(t-\tau)}B\,D_X(T,s,\tau)\,d\tau\,, \qquad 0 < t < T\,. \tag{10.47} $$

As a special case, for C = I Equation (10.47) yields

$$ D_{FX}(T,s,t) = \left(e^{sT}e^{-AT} - I\right)^{-1}\int_0^T e^{A(t-\tau)}B\,D_X(T,s,\tau)\,d\tau
 + \int_0^t e^{A(t-\tau)}B\,D_X(T,s,\tau)\,d\tau\,, \qquad 0 < t < T\,. \tag{10.48} $$

c) Let the image X(s) be pseudo-rational. Then for all t, the matrix
D_{FX}(T,s,t) defined by (10.47) is continuous in t. For t = 0, from (10.48)
we obtain

$$ D_{FX}(T,s,0) = \left(e^{sT}e^{-AT} - I\right)^{-1}\int_0^T e^{-A\tau}B\,D_X(T,s,\tau)\,d\tau\,. \tag{10.49} $$

Regarding (10.49), from (10.48) we find

$$ D_{FX}(T,s,t) = e^{At}\,D_{FX}(T,s,0) + \int_0^t e^{A(t-\tau)}B\,D_X(T,s,\tau)\,d\tau\,. $$

d) Now proceed to the proof of the theorem. From (10.29), (6.89) and (6.100),
it follows for t = 0 that

$$ D_{w_2\mu}(T,s,0) = \mu(A)\left(e^{sT}e^{-AT} - I\right)^{-1}B_2\,. $$

Therefore, with respect to (10.49), Equation (10.36) can be written in the
form

$$ v^*(s) = \left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)\,B_2\,\psi^*(s)
 + \left(e^{sT}e^{-AT} - I\right)^{-1}\int_0^T e^{-A\tau}B_1\,D_X(T,s,\tau)\,d\tau\,. \tag{10.50} $$

Hence

$$ \left(I - e^{-sT}e^{AT}\right)v^*(s) = e^{-sT}e^{AT}\mu(A)\,B_2\,\psi^*(s)
 + e^{-sT}e^{AT}\int_0^T e^{-A\tau}B_1\,D_X(T,s,\tau)\,d\tau\,. $$

Combining this with (10.39) and (10.37), we obtain the system of equa-
tions

$$ \left(I - e^{-sT}e^{AT}\right)v^*(s) - e^{-sT}e^{AT}\mu(A)\,B_2\,\psi^*(s)
 = e^{-sT}e^{AT}\int_0^T e^{-A\tau}B_1\,D_X(T,s,\tau)\,d\tau\,, $$
$$ -\,C_2\,v^*(s) + y^*(s) = O_{n1}\,, \tag{10.51} $$
$$ -\,\beta(s)\,y^*(s) + \alpha(s)\,\psi^*(s) = O_{m1}\,. $$

Introduce the composite vector

$$ G^*(s) = \begin{bmatrix} v^{*\prime}(s) & y^{*\prime}(s) & \psi^{*\prime}(s) \end{bmatrix}'\,. $$

Then the system of equations (10.51) can be written in the form

$$ Q(s,\alpha,\beta)\,G^*(s) = D(s)\,, $$

where D(s) is given by

$$ D(s) = \begin{bmatrix} D_1'(s) & O_{1n} & O_{1m} \end{bmatrix}'\,, $$

where

$$ D_1(s) = e^{-sT}e^{AT}\int_0^T e^{-A\tau}B_1\,D_X(T,s,\tau)\,d\tau\,. $$

Thus, substituting herein (10.19), we find

$$ D_1(s) = \frac{e^{-sT}e^{AT}\sum_{k=0}^{\varkappa} e^{-ksT}\int_0^T e^{-A\tau}B_1\,x_k(\tau)\,d\tau}{\Delta_X(s)}\,. $$

Since the numerator of this expression is an integral function of s, it follows
that the set of poles of the vector D_1(s) belongs to the set P_X. Obviously,
the same is true for the vector D(s). Then, taking into account

$$ G^*(s) = Q^{-1}(s,\alpha,\beta)\,D(s)\,, $$

we find that the set of all poles of the vectors v*(s), y*(s) and ψ*(s)
belongs to the union of the sets P_X and P_Δ.
e) Substituting the relation

$$ D_{w_2\mu}(T,s,t) = \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2
 + \left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)\,e^{At}B_2 $$

and Equation (10.48) with F(s) = w_1(s) into (10.33), we find for 0 < t < T

$$ D_V(T,s,t) = \left[\int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2
 + \left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)\,e^{At}B_2\right]\psi^*(s) $$
$$ \qquad + \left(e^{sT}e^{-AT} - I\right)^{-1}\int_0^T e^{A(t-\tau)}B_1\,D_X(T,s,\tau)\,d\tau
 + \int_0^t e^{A(t-\tau)}B_1\,D_X(T,s,\tau)\,d\tau\,. $$

After rearrangement, we get

$$ D_V(T,s,t) = e^{At}\left[\left(e^{sT}e^{-AT} - I\right)^{-1}\mu(A)\,B_2\,\psi^*(s)
 + \left(e^{sT}e^{-AT} - I\right)^{-1}\int_0^T e^{-A\tau}B_1\,D_X(T,s,\tau)\,d\tau\right] $$
$$ \qquad + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2\,\psi^*(s)
 + \int_0^t e^{A(t-\tau)}B_1\,D_X(T,s,\tau)\,d\tau\,. $$

According to (10.50), the expression in the square brackets equals v*(s).
Therefore,

$$ D_V(T,s,t) = e^{At}\,v^*(s) + \int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,B_2\,\psi^*(s)
 + \int_0^t e^{A(t-\tau)}B_1\,D_X(T,s,\tau)\,d\tau\,. $$

Using the relation

$$ V(s) = \int_0^T D_V(T,s,t)\,e^{-st}\,dt\,, $$

we obtain V(s) in the form

$$ V(s) = G_1(s)\,v^*(s) + G_2(s)\,\psi^*(s) + G_3(s)\,, $$

where

$$ G_1(s) = \int_0^T e^{At}\,e^{-st}\,dt = (A - sI)^{-1}\left(e^{AT}e^{-sT} - I\right), $$
$$ G_2(s) = \int_0^T e^{-st}\int_0^t e^{A(t-\tau)}m(\tau)\,d\tau\,dt\;B_2\,, $$
$$ G_3(s) = \int_0^T e^{-st}\int_0^t e^{A(t-\tau)}B_1\,D_X(T,s,\tau)\,d\tau\,dt\,. $$

Obviously, the matrices G_1(s) and G_2(s) are integral functions of s, and
the poles of the matrix G_3(s) belong to the set P_X. As was proved above,
the poles of the vectors v*(s) and ψ*(s) belong to the union of P_X and
P_Δ. Hence the poles of the image V(s) belong to P_X ∪ P_Δ.
f) To conclude the proof, we notice that (10.30) and (10.43) yield

$$ Z(s) = C_1 V(s) + D_L\,\mu(s)\,\psi^*(s)\,. $$

Due to (10.26), μ(s) is an integral function, so that the set of all poles of
the vector Z(s) belongs to the set P_X ∪ P_Δ.

Corollary 10.6. Under the assumptions of Theorem 10.5, the image Z(s)
admits a representation of the form

$$ Z(s) = \frac{P_Z(s)}{\Delta(s)\,\Delta_X(s)}\,, $$

where P_Z(s) is an integral function of s and Δ_X(s) is given by (10.20). Ac-
cording to (7.93),

$$ \Delta(\zeta) = \lambda(\zeta)\,\Delta_d(\zeta)\,, \tag{10.52} $$

where the polynomial λ(ζ) is independent of the choice of the discrete con-
troller.

10.5 Representing the Output Image in Terms of the
System Function

1. Unless stated otherwise, in this section we assume for simplicity that the
standard sampled-data system is modal controllable. Then we have λ(ζ) =
const ≠ 0, and from (10.52) it follows that

$$ \Delta(\zeta) \sim \Delta_d(\zeta)\,. $$

Moreover, we stick to all definitions and notation of Section 8.3.

2. In analogy with Section 8.3, the image (10.23) can be represented in
terms of the system function (system matrix) Θ(ζ). With this aim in view, we
substitute (8.50) into (10.23):

$$ R_N(s) = \left[\,\theta_{0r}(s) + a_r(s)\,\Theta(s)\,\right]\bar a_l(s)\,. \tag{10.53} $$

As a result, we obtain

$$ Z(s) = p(s)\,\Theta(s)\,q(s) + r(s)\,, \tag{10.54} $$

where

$$ p(s) = L(s)\,\mu(s)\,a_r(s)\,, $$
$$ q(s) = \bar a_l(s)\,D_{MX}(T,s,0)\,, \tag{10.55} $$
$$ r(s) = L(s)\,\mu(s)\,\theta_{0r}(s)\,\bar a_l(s)\,D_{MX}(T,s,0) + K(s)\,X(s)
 = L(s)\,\mu(s)\,\theta_{0r}(s)\,q(s) + K(s)\,X(s)\,. $$

Hereinafter, we shall call Equation (10.54) a representation of the image Z(s)
in terms of the system function. The matrix p(s) and the vectors q(s) and

r(s) will be called the coefficients of this representation. As follows from
(10.55), under the given assumptions the coefficients are meromorphic
functions of the argument s. In the present section, we investigate the set of
poles of the matrices (10.55).

3.
Theorem 10.7. Let the standard sampled-data system be modal controllable.
Then the matrix p(s) is an integral function of s, and the set of all poles of the
vectors q(s) and r(s) belongs to the set P_X.

Proof. The claim regarding the matrix p(s) follows immediately from Corol-
lary 8.15.
Next, we consider the vector q(s) in (10.55). From (10.17), it follows that

$$ D_{MX}(T,s,t) = \int_0^T D_M(T,s,\tau)\,D_X(T,s,t-\tau)\,d\tau\,. $$

For t = 0, we have

$$ D_{MX}(T,s,0) = \int_0^T D_M(T,s,\tau)\,D_X(T,s,-\tau)\,d\tau\,. $$

Hence, using (10.55), we obtain

$$ q(s) = \bar a_l(s)\,D_{MX}(T,s,0)
 = \int_0^T \left[\,\bar a_l(s)\,D_M(T,s,\tau)\,\right]D_X(T,s,-\tau)\,d\tau\,. \tag{10.56} $$

Due to Corollary 8.14, the matrix in the square brackets is an integral function
of s. Therefore, with respect to (10.19) and (6.71), we obtain the claim of the
theorem regarding the vector q(s).
To prove the claim about the vector r(s), we notice that for Θ(ζ) = O_{mn},
we have

$$ Z(s) = r(s)\,, $$

and the further proof is performed similarly to the proof of Theorem 8.4 using
Theorem 10.5.

Remark 10.8. From (10.56) it follows that

$$ q(s) = \frac{N_q(s)}{\Delta_X(s)}\,, \tag{10.57} $$

where N_q(ζ) is a polynomial vector and the function Δ_X(s) is given by (10.20).
Hence

$$ q(\zeta) = \frac{N_q(\zeta)}{\Delta_X(\zeta)}\,. \tag{10.58} $$

Moreover, we have

$$ r(s) = \frac{P_r(s)}{\Delta_X(s)}\,, \tag{10.59} $$

where the numerator is an integral function of s.

10.6 Representing the L2-norm in Terms of the System
Function

1. Using (10.54), we obtain

$$ \bar Z(s) = \bar q(s)\,\bar\Theta(s)\,\bar p(s) + \bar r(s)\,, \tag{10.60} $$

where, as before, we use the notation

$$ \bar f(s) = f'(-s)\,, $$

and the following relations hold:

$$ \bar p(s) = \bar a_r(s)\,\bar L(s)\,\bar\mu(s)\,, $$
$$ \bar q(s) = \bar D_{MX}(T,s,0)\,a_l(s) = D_{X^\sim M^\sim}(T,s,0)\,a_l(s)\,, \tag{10.61} $$
$$ \bar r(s) = D_{X^\sim M^\sim}(T,s,0)\,a_l(s)\,\bar\theta_{0r}(s)\,\bar L(s)\,\bar\mu(s) + \bar X(s)\,\bar K(s)
 = \bar q(s)\,\bar\theta_{0r}(s)\,\bar L(s)\,\bar\mu(s) + \bar X(s)\,\bar K(s)\,. $$

Multiplying (10.54) and (10.60), we obtain

$$ \bar Z(s)\,Z(s) = \bar q(s)\,\bar\Theta(s)\,\bar p(s)\,p(s)\,\Theta(s)\,q(s) + \bar r(s)\,r(s)
 + \bar q(s)\,\bar\Theta(s)\,\bar p(s)\,r(s) + \bar r(s)\,p(s)\,\Theta(s)\,q(s)\,. $$

Substituting this expression into (10.5) yields

$$ J = J_1 + J_2\,, \tag{10.62} $$

where

$$ J_1 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\Bigl[\,\bar q(s)\,\bar\Theta(s)\,\bar p(s)\,p(s)\,\Theta(s)\,q(s)
 + \bar q(s)\,\bar\Theta(s)\,\bar p(s)\,r(s) + \bar r(s)\,p(s)\,\Theta(s)\,q(s)\,\Bigr]\,ds\,, \tag{10.63} $$

$$ J_2 = \frac{1}{2\pi j}\int_{-j\infty}^{j\infty} \bar r(s)\,r(s)\,ds\,. $$

As follows from the above relations, the integral J2 is independent of the


system function, i.e., it is independent of the choice of the controller.

2. Let us transform the integral (10.63). With this purpose, we pass to finite
integration limits in (10.63):

$$ J_1 = \frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\Bigl[\,\bar q(s)\,\bar\Theta(s)\,A_{L1}(s)\,\Theta(s)\,q(s)
 + \bar q(s)\,\bar\Theta(s)\,\bar B(s) + B(s)\,\Theta(s)\,q(s)\,\Bigr]\,ds\,, \tag{10.64} $$

where

$$ A_{L1}(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\bar p(s+kj\omega)\,p(s+kj\omega)\,, $$
$$ B(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\bar r(s+kj\omega)\,p(s+kj\omega)\,, \tag{10.65} $$
$$ \bar B(s) = \frac{1}{T}\sum_{k=-\infty}^{\infty}\bar p(s+kj\omega)\,r(s+kj\omega)\,. $$

Let us find expanded expressions for the matrices (10.65). From Chapter 8, it
follows that

$$ A_{L1}(s) = \bar a_r(s)\left[\frac{1}{T}\sum_{k=-\infty}^{\infty}\bar L(s+kj\omega)\,L(s+kj\omega)\,\bar\mu(s+kj\omega)\,\mu(s+kj\omega)\right]a_r(s)
 = \bar a_r(s)\,D_{L^\sim L}(T,s,0)\,a_r(s) = T\,A_L(s)\,, $$

where the matrix A_L(s) is defined in (8.73). To calculate the matrix B(s), we
note that, due to (10.55) and (10.61),

$$ \bar r(s)\,p(s) = \bar q(s)\,\bar\theta_{0r}(s)\,\bar L(s)\,L(s)\,\bar\mu(s)\,\mu(s)\,a_r(s)
 + \bar X(s)\,\bar K(s)\,L(s)\,\mu(s)\,a_r(s)\,. $$

Therefore, for any integer k,

$$ \bar r(s+kj\omega)\,p(s+kj\omega)
 = \bar q(s)\,\bar\theta_{0r}(s)\,\bar L(s+kj\omega)\,L(s+kj\omega)\,\bar\mu(s+kj\omega)\,\mu(s+kj\omega)\,a_r(s) \tag{10.66} $$
$$ \qquad + \bar X(s+kj\omega)\,\bar K(s+kj\omega)\,L(s+kj\omega)\,\mu(s+kj\omega)\,a_r(s)\,. $$

Substituting (10.66) into (10.65), we find

$$ B(s) = \bar q(s)\,\bar\theta_{0r}(s)\,D_{L^\sim L}(T,s,0)\,a_r(s) + D_{X^\sim K^\sim L}(T,s,0)\,a_r(s)\,. $$

Replacing s by −s and transposing, we obtain

$$ \bar B(s) = \bar a_r(s)\,D_{L^\sim L}(T,s,0)\,\theta_{0r}(s)\,q(s) + \bar a_r(s)\,D_{L^\sim KX}(T,s,0)\,. $$

As shown in Chapter 8, the matrix A_L(s) is an integral function of s. More-
over, since the matrix p(s) is integral, it follows from (10.59) that the poles
of each product (10.66) belong to the set of roots of the function Δ_X(s). Thus,
for the matrix B(s), the series (10.65) converges uniformly in any closed domain
that is free of roots of the function Δ_X(s). Therefore, the matrix B(s) may
possess poles only among the roots of the function Δ_X(s). Hence there exists
the following representation:

$$ B(s) = \frac{N_{B1}(s)}{\Delta_X(s)}\,, $$

where N_{B1}(ζ) is a quasi-polynomial vector. From (10.52), it follows that

$$ \bar B(s) = \frac{N_B(s)}{\bar\Delta_X(s)}\,, \tag{10.67} $$

where N_B(ζ) = N_{B1}'(ζ^{-1}) is a quasi-polynomial vector.

3. Substituting the variable ζ = e^{−sT} for s in (10.64), we obtain

\[ J_1=\frac{1}{2\pi j}\oint\Bigl[\,\tilde q(\zeta)\tilde\Phi(\zeta)A_{L1}(\zeta)\Phi(\zeta)q(\zeta)+\tilde q(\zeta)\tilde\Phi(\zeta)\tilde B(\zeta)+B(\zeta)\Phi(\zeta)q(\zeta)\Bigr]\frac{d\zeta}{\zeta}, \]  (10.68)

where, as before, we use the notation

\[ \tilde f(\zeta)=f'(\zeta^{-1}) \]

and the integration is performed along the unit circle in the positive direction. The matrices appearing in (10.68) are given by

\[ A_{L1}(\zeta)=T\,A_L(\zeta)=\tilde a_r(\zeta)\,D_{L'L\mu\mu}(T,\zeta,0)\,a_r(\zeta), \]

\[ B(\zeta)=\tilde q(\zeta)\tilde\theta_{0r}(\zeta)\,D_{L'L\mu\mu}(T,\zeta,0)\,a_r(\zeta)+D_{X'K'L\mu}(T,\zeta,0)\,a_r(\zeta), \]  (10.69)

\[ \tilde B(\zeta)=\tilde a_r(\zeta)\,D_{L'L\mu\mu}(T,\zeta,0)\,\theta_{0r}(\zeta)q(\zeta)+\tilde a_r(\zeta)\,D_{L'\mu KX}(T,\zeta,0). \]
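Contour integrals of the form (10.68) are simply the zeroth Fourier coefficient of the integrand on the unit circle, so they can be evaluated to spectral accuracy by the trapezoid rule. A minimal scalar sketch (not from the book), including a Parseval-type sanity check for the notation f̃(ζ) = f′(ζ⁻¹):

```python
import numpy as np

# (1/(2*pi*j)) * contour integral of f(zeta) * dzeta/zeta over the unit
# circle equals the mean of f on the circle (its zeroth Fourier coefficient).
def circle_integral(f, m=4096):
    zeta = np.exp(2j * np.pi * np.arange(m) / m)  # m-point grid on the unit circle
    return np.mean(f(zeta))

# Parseval check: for q(zeta) = 1 + 2*zeta + 3*zeta^2 and (scalar case)
# qtilde(zeta) = q(1/zeta), the integral of qtilde*q is 1 + 4 + 9 = 14.
q = lambda z: 1 + 2 * z + 3 * z ** 2
val = circle_integral(lambda z: q(1 / z) * q(z))
```

On an m-point grid the rule is exact for integrands whose Laurent expansion has bandwidth below m, which is why the check reproduces 14 to machine precision.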

In Chapter 8, it was shown that the matrix A_L(ζ) is a symmetric quasi-polynomial. Moreover, from (10.67) it follows that B̃(ζ) is a rational matrix, which can have poles at the roots of the polynomial χ(ζ) and at the point ζ = 0. Therefore, there exists a representation

\[ \tilde B(\zeta)=\frac{\zeta^{-\kappa}N_{\tilde B}(\zeta)}{\chi(\zeta)}, \]  (10.70)

where N_{B̃}(ζ) is a polynomial vector and κ ≥ 0 is an integer.

10.7 Wiener-Hopf Method

1. Since the second summand in (10.62) is independent of the choice of the system matrix, the problem of minimising the functional (10.62) is equivalent to the problem of minimising the integral (10.68) over the set of stable rational matrices Φ(ζ). This fact is substantiated by the following theorem.

Theorem 10.9. Let Φₒ(ζ) be a stable rational matrix minimising the integral (10.68). Then the transfer function of the optimal controller w_{do}(ζ), ensuring the stability of the standard sampled-data system and minimising the functional (10.5), is given by

\[ w_{do}(\zeta)=V_{2o}(\zeta)\,V_{1o}^{-1}(\zeta), \]

where

\[ V_{1o}(\zeta)=\alpha_{0r}(\zeta)-b_r(\zeta)\Phi_o(\zeta),\qquad V_{2o}(\zeta)=\beta_{0r}(\zeta)-a_r(\zeta)\Phi_o(\zeta), \]

the polynomial matrices a_r(ζ), b_r(ζ) are determined by the IRMFD (8.42), and [α₀ᵣ, β₀ᵣ] is a right initial controller solving Equation (4.37). Furthermore, if we have both the IMFDs

\[ \Phi_o(\zeta)=D_l^{-1}(\zeta)M_l(\zeta)=M_r(\zeta)D_r^{-1}(\zeta) \]

and the system is modal controllable, then the characteristic polynomial of the optimal system Δₒ(ζ) satisfies the relation

\[ \Delta_o(\zeta)\sim\Delta_d(\zeta), \]

where ∼ denotes equality up to a nonzero constant factor and

\[ \Delta_d(\zeta)\sim\det D_l(\zeta)\,\det D_r(\zeta). \]

Proof. The proof is similar to that of Theorem 8.21.



2. We recognise that the form of the functional (10.68) coincides with that of (8.92). Nevertheless, a direct application of the Wiener-Hopf method in the form given in Section 8.6 is impossible, because the matrix coefficient q(ζ) in (10.68) is not invertible. This means that the functional (10.68) is singular. Therefore, the minimisation needs a special approach. Below we describe such an approach, based on an idea of [124]. With this aim in view, we derive a number of auxiliary transformations.
a) Consider the rational vector (10.58)

\[ q(\zeta)=a_l(\zeta)\,D_{MX}(T,\zeta,0)=\frac{N_q(\zeta)}{\chi(\zeta)}. \]  (10.71)

As follows from Remark 3.4, there exists a unimodular matrix R(ζ) such that

\[ R(\zeta)N_q(\zeta)=\lambda(\zeta)\,\mathbb 1_n,\qquad \mathbb 1_n=\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}, \]

where λ(ζ) is a greatest common divisor of the elements of the column N_q(ζ). Thus, with (10.71) we have

\[ q(\zeta)=\frac{\lambda(\zeta)}{\chi(\zeta)}\,Y(\zeta)\,\mathbb 1_n,\qquad \tilde q(\zeta)=\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}\,\mathbb 1_n'\,\tilde Y(\zeta) \]  (10.72)

with the unimodular matrix

\[ Y(\zeta)=R^{-1}(\zeta). \]

b) Substituting (10.72) into (10.68), we obtain

\[ J_1=\frac{1}{2\pi j}\oint\Bigl[\,\mathbb 1_n'\tilde Y(\zeta)\tilde\Phi(\zeta)\,\frac{\lambda(\zeta^{-1})A_{L1}(\zeta)\lambda(\zeta)}{\chi(\zeta^{-1})\chi(\zeta)}\,\Phi(\zeta)Y(\zeta)\mathbb 1_n+\mathbb 1_n'\tilde Y(\zeta)\tilde\Phi(\zeta)\tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}+\frac{\lambda(\zeta)}{\chi(\zeta)}\,B(\zeta)\Phi(\zeta)Y(\zeta)\mathbb 1_n\Bigr]\frac{d\zeta}{\zeta}. \]  (10.73)

Introduce the matrix

\[ \Pi_1(\zeta)=\Phi(\zeta)\,Y(\zeta). \]  (10.74)

Then, with account for (10.69), the functional (10.73) takes the form

\[ J_1=\frac{1}{2\pi j}\oint\Bigl[\,\mathbb 1_n'\tilde\Pi_1(\zeta)\,\frac{T\lambda(\zeta^{-1})A_L(\zeta)\lambda(\zeta)}{\chi(\zeta^{-1})\chi(\zeta)}\,\Pi_1(\zeta)\mathbb 1_n+\mathbb 1_n'\tilde\Pi_1(\zeta)\tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}+\frac{\lambda(\zeta)}{\chi(\zeta)}\,B(\zeta)\Pi_1(\zeta)\mathbb 1_n\Bigr]\frac{d\zeta}{\zeta}. \]  (10.75)

Since the matrix Y(ζ) is unimodular, the problems of minimising the functionals (10.73) and (10.75) are equivalent in the sense that, if a stable matrix Π₁ₒ(ζ) minimises the integral (10.75), then the matrix

\[ \Phi_o(\zeta)=\Pi_{1o}(\zeta)\,R(\zeta) \]  (10.76)

is stable and minimises the integral (10.73). The converse proposition is also valid.

c) Let

\[ \Pi_1(\zeta)=\bigl[\pi_1(\zeta)\;\;\Pi_2(\zeta)\bigr], \]

where π₁(ζ) denotes the first column of the m × n matrix Π₁(ζ) and Π₂(ζ) is the remaining m × (n−1) block. Then, obviously,

\[ \Pi_1(\zeta)\,\mathbb 1_n=\pi_1(\zeta) \]

and the integral (10.75) can be written in a form depending only on the column π₁(ζ):

\[ J_1=\frac{1}{2\pi j}\oint\Bigl[\,\tilde\pi_1(\zeta)\,\frac{T\lambda(\zeta^{-1})A_L(\zeta)\lambda(\zeta)}{\chi(\zeta^{-1})\chi(\zeta)}\,\pi_1(\zeta)+\tilde\pi_1(\zeta)\tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}+\frac{\lambda(\zeta)}{\chi(\zeta)}\,B(\zeta)\pi_1(\zeta)\Bigr]\frac{d\zeta}{\zeta}. \]  (10.77)

Assume that there exists a stable rational column π₁ₒ(ζ) minimising the functional (10.77). Then the stable rational matrix

\[ \Pi_{1o}(\zeta)=\bigl[\pi_{1o}(\zeta)\;\;\Pi_2(\zeta)\bigr] \]

with any stable rational m × (n−1) matrix Π₂(ζ) minimises the integral (10.75), because this integral depends only on the first column of Π₁(ζ). Furthermore, using (10.74), we find that the stable rational matrix

\[ \Phi_o(\zeta)=\Pi_{1o}(\zeta)R(\zeta)=\bigl[\pi_{1o}(\zeta)\;\;\Pi_2(\zeta)\bigr]R(\zeta) \]  (10.78)

minimises the functional (10.68) for any stable m × (n−1) matrix Π₂(ζ). Therefore, if n > 1 and there exists an optimal column π₁ₒ(ζ), then there also exists a set of optimal system functions depending on the stable matrix parameter Π₂(ζ). Since each optimal system function (10.76) is associated by Theorem 10.9 with an optimal stabilising controller, we conclude for the L2-problem that the existence of one optimal controller implies the existence of a set of optimal stabilising controllers depending on the choice of the stable matrix Π₂(ζ).

This result differs in principle from the situation for the H2-problem, and it originates from the singularity of the functional (10.68).

3. Consider the minimisation of the functional (10.77) on the basis of the Wiener-Hopf method. Suppose that there exists a factorisation

\[ A_L(\zeta)=\tilde\Lambda(\zeta)\,\Lambda(\zeta) \]  (10.79)

with a stable invertible m × m polynomial matrix Λ(ζ), and

\[ \lambda(\zeta)\,\lambda(\zeta^{-1})=\lambda_+(\zeta)\,\lambda_+(\zeta^{-1}) \]  (10.80)

with a stable polynomial λ₊(ζ). Since the polynomial χ(ζ) is stable, there exists a factorisation

\[ T\,\frac{\lambda(\zeta^{-1})\,A_L(\zeta)\,\lambda(\zeta)}{\chi(\zeta^{-1})\,\chi(\zeta)}=\tilde\Lambda_1(\zeta)\,\Lambda_1(\zeta), \]  (10.81)

where the rational matrix

\[ \Lambda_1(\zeta)=\frac{\sqrt T\,\Lambda(\zeta)\,\lambda_+(\zeta)}{\chi(\zeta)} \]  (10.82)

is stable together with its inverse. Further, according to the Wiener-Hopf minimisation algorithm, we find the matrix

\[ R(\zeta)=\tilde\Lambda_1^{-1}(\zeta)\,\tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}. \]  (10.83)

Using (10.70) and (10.82), we can write (10.83) in the form

\[ R(\zeta)=\frac{1}{\sqrt T}\,\zeta^{-\kappa}\,\tilde\Lambda^{-1}(\zeta)\,N_{\tilde B}(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta)\,\lambda_+(\zeta^{-1})}. \]
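In the scalar case, the factorisation (10.80) can be carried out numerically by splitting the roots of the associated symmetric quasi-polynomial. The sketch below is an illustration under stated assumptions (scalar, simple positive-on-the-circle quasi-polynomial); note that in this chapter's convention (ζ = e^{−sT}) "stable" means all roots outside the closed unit disk:

```python
import numpy as np

# Sketch of a scalar spectral factorisation of the type (10.80): a symmetric
# quasi-polynomial a(zeta) = c0 + sum_k ck*(zeta^k + zeta^-k), positive on
# the unit circle, is written as a(zeta) = lam_plus(zeta)*lam_plus(1/zeta),
# with lam_plus "stable": all roots outside the closed unit disk.
def quasi_val(c, z):
    return c[0] + sum(ck * (z ** k + z ** (-k)) for k, ck in enumerate(c[1:], 1))

def spectral_factor(c):
    full = np.array(list(reversed(c)) + list(c[1:]), dtype=float)  # coeffs of zeta^n * a(zeta)
    r = np.roots(full)                      # roots come in reciprocal pairs (r, 1/r)
    lam = np.poly(r[np.abs(r) > 1.0])       # keep the half outside the unit circle
    gain = quasi_val(c, 1.0) / np.polyval(lam, 1.0) ** 2
    return np.sqrt(gain) * lam              # coefficients, highest power first

lam_plus = spectral_factor([5.0, -2.0])     # a(zeta) = (zeta - 2)(1/zeta - 2)
```

For the test data a(ζ) = 5 − 2(ζ + ζ⁻¹) = (ζ − 2)(ζ⁻¹ − 2), so the stable factor is ζ − 2 (up to sign).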
The next stage of the minimisation requires performing the principal separation

\[ R(\zeta)=R_+(\zeta)+R_-(\zeta), \]  (10.84)

where R₋(ζ) is a strictly proper rational matrix incorporating all unstable poles of the matrix R(ζ), and R₊(ζ) is a stable rational matrix. The general form of the matrix R₊(ζ) is given by the following theorem.
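For a scalar rational function with simple poles, the principal separation (10.84) can be computed from residues. A minimal sketch (an assumption of this sketch: simple poles; the chapter's convention puts the unstable region inside the closed unit disk):

```python
import numpy as np

# Principal separation (10.84) for scalar R = num/den with simple poles:
# R = R_plus + R_minus, where R_minus is the strictly proper part carrying
# the unstable poles (|p| <= 1 here) and R_plus keeps the stable poles and
# any polynomial part.
def principal_separation(num, den):
    quot, rem = np.polydiv(num, den)        # polynomial part + strictly proper rest
    poles = np.roots(den)
    dden = np.polyder(den)
    res = [np.polyval(rem, p) / np.polyval(dden, p) for p in poles]  # residues
    plus = [(r, p) for r, p in zip(res, poles) if abs(p) > 1.0]
    minus = [(r, p) for r, p in zip(res, poles) if abs(p) <= 1.0]
    return quot, plus, minus

# R(zeta) = 1/((zeta - 2)(zeta - 0.5)): residue 2/3 at the stable pole 2,
# residue -2/3 at the unstable pole 0.5.
quot, plus, minus = principal_separation([1.0], [1.0, -2.5, 1.0])
```

Reassembling the polynomial part plus all residue terms reproduces R exactly, which makes the split easy to verify pointwise.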
Theorem 10.10. The matrix R₊(ζ) admits a representation

\[ R_+(\zeta)=\frac{N_{R+}(\zeta)}{\chi(\zeta)} \]  (10.85)

with a polynomial matrix (column) N_{R+}(ζ).

Proof. a) First of all, we show that the function

\[ \varphi(\zeta)=\frac{\lambda(\zeta^{-1})}{\lambda_+(\zeta^{-1})} \]  (10.86)

possesses only unstable poles. Let

\[ \lambda(\zeta)=(\zeta-\zeta_1)^{b_1}\cdots(\zeta-\zeta_m)^{b_m},\qquad b_1+\ldots+b_m=g, \]

and

\[ \lambda(\zeta^{-1})=\zeta^{-g}\,(1-\zeta_1\zeta)^{b_1}\cdots(1-\zeta_m\zeta)^{b_m}. \]  (10.87)

Assume

\[ |\zeta_i|>1\quad(i=1,\ldots,\rho);\qquad |\zeta_i|<1\quad(i=\rho+1,\ldots,m). \]

Then we can take

\[ \lambda_+(\zeta)=(\zeta-\zeta_1)^{b_1}\cdots(\zeta-\zeta_\rho)^{b_\rho}\,(1-\zeta_{\rho+1}\zeta)^{b_{\rho+1}}\cdots(1-\zeta_m\zeta)^{b_m}, \]

whence it follows that

\[ \lambda_+(\zeta^{-1})=\zeta^{-g}\,(1-\zeta_1\zeta)^{b_1}\cdots(1-\zeta_\rho\zeta)^{b_\rho}\,(\zeta-\zeta_{\rho+1})^{b_{\rho+1}}\cdots(\zeta-\zeta_m)^{b_m}. \]

Hence, using (10.87), we obtain

\[ \varphi(\zeta)=\frac{(1-\zeta_{\rho+1}\zeta)^{b_{\rho+1}}\cdots(1-\zeta_m\zeta)^{b_m}}{(\zeta-\zeta_{\rho+1})^{b_{\rho+1}}\cdots(\zeta-\zeta_m)^{b_m}}, \]

i.e., the function (10.86) has only unstable poles.

b) Let us show that the matrix Λ̃⁻¹(ζ) possesses only unstable poles. Indeed, per construction the matrix Λ⁻¹(ζ) has only stable poles. Hence the matrix Λ⁻¹(ζ⁻¹) can have only unstable poles. Therefore, the matrix Λ̃⁻¹(ζ) = [Λ⁻¹(ζ⁻¹)]′ can also have only unstable poles.

c) From a) and b), it follows that the stable poles of the matrix R(ζ) belong to the set of roots of the polynomial χ(ζ), whence (10.85) follows.

4. Using (10.85) and (10.82), we find the optimal column

\[ \pi_{1o}(\zeta)=-\Lambda_1^{-1}(\zeta)\,R_+(\zeta)=-\frac{1}{\sqrt T}\,\frac{\Lambda^{-1}(\zeta)\,N_{R+}(\zeta)}{\lambda_+(\zeta)}. \]  (10.88)

In particular, when λ₊(ζ) = const ≠ 0, which is true in almost all applied problems, we obtain

\[ \pi_{1o}(\zeta)=\Lambda^{-1}(\zeta)\,N_{R+}'(\zeta), \]

where N′_{R+}(ζ) is a polynomial vector (the constant factors being absorbed into it).

10.8 General Properties of Optimal Systems


1. Let (10.88) hold and let Π₂(ζ) be a stable rational m × (n−1) matrix. Then the expression

\[ \Pi_{1o}(\zeta)=\Bigl[-\frac{1}{\sqrt T}\,\frac{\Lambda^{-1}(\zeta)N_{R+}(\zeta)}{\lambda_+(\zeta)}\;\;\Pi_2(\zeta)\Bigr] \]

determines the set of all optimal matrices Π₁ₒ(ζ). Thus, with respect to (10.78), we find that the set of all optimal system matrices Φₒ(ζ) is determined by

\[ \Phi_o(\zeta)=\Bigl[-\frac{1}{\sqrt T}\,\frac{\Lambda^{-1}(\zeta)N_{R+}(\zeta)}{\lambda_+(\zeta)}\;\;\Pi_2(\zeta)\Bigr]R(\zeta). \]

Substituting ζ = e^{−sT}, we obtain

\[ \bar\Phi_o(s)=\Bigl[-\frac{1}{\sqrt T}\,\frac{\bar\Lambda^{-1}(s)\bar N_{R+}(s)}{\bar\lambda_+(s)}\;\;\bar\Pi_2(s)\Bigr]\bar R(s). \]

Substituting this into (10.54), we find the image of the optimal transient process

\[ Z_o(s)=p(s)\Bigl[-\frac{1}{\sqrt T}\,\frac{\bar\Lambda^{-1}(s)\bar N_{R+}(s)}{\bar\lambda_+(s)}\;\;\bar\Pi_2(s)\Bigr]\bar R(s)\,\bar q(s)+r(s). \]

However, from (10.72) for ζ = e^{−sT}, we have

\[ \bar R(s)\,\bar q(s)=\frac{\bar\lambda(s)}{\bar\chi(s)}\,\mathbb 1_n. \]

With regard to the last two relations, we obtain

\[ Z_o(s)=-\frac{1}{\sqrt T}\,\frac{\bar\lambda(s)}{\bar\chi(s)\,\bar\lambda_+(s)}\,p(s)\,\bar\Lambda^{-1}(s)\,\bar N_{R+}(s)+r(s). \]

This expression is independent of the choice of the matrix Π₂(ζ). So we arrive at the important conclusion that the optimal transient under zero initial energy is independent of the choice of the matrix Π₂(ζ).¹

¹ The authors' attention to this fact was drawn by Dr. K. Polyakov.

2. Let us have an ILMFD

\[ \Pi_{1o}(\zeta)=\bar D_l^{-1}(\zeta)\,\bar M_l(\zeta). \]  (10.89)

Then, by Lemma 2.16, the expression

\[ \Phi_o(\zeta)=\bar D_l^{-1}(\zeta)\,\bigl[\bar M_l(\zeta)R(\zeta)\bigr] \]

is an ILMFD for the matrix Φₒ(ζ). Thus, for the characteristic polynomial of the optimal system Δₒ(ζ), we have

\[ \Delta_o(\zeta)\sim\det\bar D_l(\zeta). \]  (10.90)

The following theorem states an important property of the polynomial Δₒ(ζ).

Theorem 10.11. Let us have an ILMFD for the optimal column (10.88):

\[ \pi_{1o}(\zeta)=a_\pi^{-1}(\zeta)\,b_\pi(\zeta). \]  (10.91)

Then, independently of the choice of the matrix Π₂(ζ), the function

\[ \Delta_\pi(\zeta)=\frac{\Delta_o(\zeta)}{\det a_\pi(\zeta)} \]

is a polynomial.

Proof. Let us have the ILMFDs (10.89) and (10.91). Then,

\[ \bar D_l(\zeta)\,\Pi_{1o}(\zeta)=\bar D_l(\zeta)\bigl[\pi_{1o}(\zeta)\;\;\Pi_2(\zeta)\bigr]=\bigl[\bar D_l(\zeta)\pi_{1o}(\zeta)\;\;\bar D_l(\zeta)\Pi_2(\zeta)\bigr] \]

is a polynomial matrix. Hence the matrix D̄_l(ζ)π₁ₒ(ζ) is also polynomial, and the polynomial matrix D̄_l(ζ) is a cancelling polynomial for the column π₁ₒ(ζ). Since the right-hand side of (10.91) is an ILMFD, we obtain

\[ \bar D_l(\zeta)=a_0(\zeta)\,a_\pi(\zeta), \]

where a₀(ζ) is a polynomial matrix, so the function

\[ \Delta_1(\zeta)=\frac{\det\bar D_l(\zeta)}{\det a_\pi(\zeta)} \]

is a polynomial. Then from (10.90), it follows immediately that Δ_π(ζ) is a polynomial.

The claim of Theorem 10.11 is an important applied result. It shows that, despite the wide choice of optimal system functions (and, consequently, optimal controllers), there are limitations on the attainable degree of stability. These limitations are imposed by the eigenvalues of the matrix a_π(ζ) and are independent of the choice of the matrix Π₂(ζ).

10.9 Modified Optimisation Algorithm

1. For solving the L2-problem, as for the H2-problem, we can apply a modified method that does not use a basic controller. The following lemma gives the substantiation for this method.

Lemma 10.12. In the principal separation (10.84), the matrix R₋(ζ) = R(ζ) − R₊(ζ) is independent of the choice of the initial basic controller.

Proof. From (10.69) and (10.72), we have

\[ \tilde B(\zeta)=\frac{\lambda(\zeta)}{\chi(\zeta)}\,\tilde a_r(\zeta)\,D_{L'L\mu\mu}(T,\zeta,0)\,\theta_{0r}(\zeta)\,Y(\zeta)\,\mathbb 1_n+\tilde a_r(\zeta)\,D_{L'\mu KX}(T,\zeta,0), \]

whence

\[ \tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}=\frac{T\lambda(\zeta^{-1})A_L(\zeta)\lambda(\zeta)}{\chi(\zeta^{-1})\chi(\zeta)}\,a_r^{-1}(\zeta)\,\theta_{0r}(\zeta)\,Y(\zeta)\,\mathbb 1_n+\tilde a_r(\zeta)\,D_{L'\mu KX}(T,\zeta,0)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}. \]

Using (10.81), this can be written in the form

\[ \tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}=\tilde\Lambda_1(\zeta)\Lambda_1(\zeta)\,a_r^{-1}(\zeta)\,\theta_{0r}(\zeta)\,Y(\zeta)\,\mathbb 1_n+\tilde a_r(\zeta)\,D_{L'\mu KX}(T,\zeta,0)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}. \]

Using (10.83), we obtain

\[ R(\zeta)=\tilde\Lambda_1^{-1}(\zeta)\tilde B(\zeta)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}=\Lambda_1(\zeta)\,a_r^{-1}(\zeta)\,\theta_{0r}(\zeta)\,Y(\zeta)\,\mathbb 1_n+\tilde\Lambda_1^{-1}(\zeta)\,\tilde a_r(\zeta)\,D_{L'\mu KX}(T,\zeta,0)\,\frac{\lambda(\zeta^{-1})}{\chi(\zeta^{-1})}. \]

If we choose, instead of θ₀ᵣ(ζ), another matrix (8.114)

\[ \theta_{0r}'(\zeta)=\theta_{0r}(\zeta)-a_r(\zeta)\,Q(\zeta), \]

where Q(ζ) is a stable rational matrix, then with the new matrix we get

\[ R'(\zeta)=R(\zeta)-\Lambda_1(\zeta)\,Q(\zeta)\,Y(\zeta)\,\mathbb 1_n. \]

The second term on the right-hand side is a stable rational matrix, because the matrix Λ₁(ζ) is stable. Therefore,

\[ R_-'(\zeta)=R_-(\zeta). \]

2. With account for Lemma 10.12, we can extend the modified method given in Section 8.6 to the problem at hand. This makes it possible to find the optimal vector π₁ₒ(ζ) without calculating an initial basic controller.

10.10 Single-loop Control System


1. As an example, in the present section we consider the L2-optimisation problem for the system shown in Fig. 10.2, where we use the same notation as in Fig. 9.1.

[Fig. 10.2. Single-loop control system: blocks Q(s), digital controller C with sampling period T, hold and plant F(s), G(s), and the ideal reference operators Wv(s), Wh(s); signals x, u, y, v, h1 and error outputs e1, e2.]

In addition, Wv(s) and Wh(s) are the transfer functions of the ideal LTI convolution operators
\[ v(t)=\int_{-\infty}^{t}g_v(t-\tau)\,x(\tau)\,d\tau,\qquad \bar h_1(t)=\int_{-\infty}^{t}g_h(t-\tau)\,x(\tau)\,d\tau, \]

where g_v(t) and g_h(t) are the corresponding impulse responses. Hereinafter, we assume that the matrices

\[ W_v(s)=\int_0^\infty g_v(t)\,e^{-st}\,dt,\qquad W_h(s)=\int_0^\infty g_h(t)\,e^{-st}\,dt \]

are pseudo-rational and all their poles are stable. Hence, for 0 < t < T, we have

\[ D_{W_v}(T,\zeta,t)=\frac{N_v(\zeta,t)}{\Delta_v(\zeta)},\qquad D_{W_h}(T,\zeta,t)=\frac{N_h(\zeta,t)}{\Delta_h(\zeta)}, \]  (10.92)

where N_v(ζ, t) and N_h(ζ, t) are polynomial matrices in ζ, while Δ_v(ζ) and Δ_h(ζ) are stable scalar polynomials having no roots inside the closed unit disk.
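The ideal convolution operators above can be approximated on a uniform grid by a rectangle rule. A scalar sketch (the operators in the text are matrix-valued; the impulse response below is only an example):

```python
import numpy as np

# Rectangle-rule approximation of the convolution
#   v(t) = integral of g(t - tau) * x(tau) d tau
# on a uniform grid with step h (scalar sketch).
def conv_output(g, x, h):
    return np.convolve(x, g)[:len(x)] * h

h = 1e-3
t = np.arange(0.0, 2.0, h)
g = np.exp(-t)                 # example impulse response g_v(t) = e^{-t}
x = np.ones_like(t)            # unit-step input
v = conv_output(g, x, h)       # should approach 1 - e^{-t}
```

For the exponential example the exact step response is 1 − e^{−t}, so the grid result can be checked pointwise with O(h) accuracy.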

2. The input of the system is the vector x(t) with image X(s), which is assumed to be stable and pseudo-rational. The error under zero initial energy

\[ e(t)=\begin{bmatrix}e_1(t)\\ e_2(t)\end{bmatrix}=\begin{bmatrix}\varkappa\bigl(h_1(t)-\bar h_1(t)\bigr)\\ v(t)-\bar v(t)\end{bmatrix} \]  (10.93)

is chosen as output vector, where ϰ > 0 is a scalar weighting coefficient. The system performance will be evaluated by

\[ J_e=\int_0^\infty e'(t)\,e(t)\,dt=\int_0^\infty\bigl[v(t)-\bar v(t)\bigr]'\bigl[v(t)-\bar v(t)\bigr]dt+\varkappa^2\int_0^\infty\bigl[h_1(t)-\bar h_1(t)\bigr]'\bigl[h_1(t)-\bar h_1(t)\bigr]dt, \]

where J_e is assumed to be finite.

3. Under the given assumptions, there exist the images

\[ \bar V(s)=W_v(s)\,X(s),\qquad \bar H(s)=W_h(s)\,X(s) \]

as stable pseudo-rational vectors. Moreover, v̄(t) and h̄₁(t) decay exponentially, being bounded by c e^{αt} for some α < 0.

4. For W_v(s) = O and W_h(s) = O, the system shown in Fig. 10.2 reduces to the single-loop system shown in Fig. 9.1 with the PTM

\[ w(s,t)=\varphi_{L\mu}(T,s,t)\,R_N(s)\,M(s)+K(s), \]

where

\[ K(s)=\begin{bmatrix}O\\F(s)\end{bmatrix},\qquad L(s)=\begin{bmatrix}\varkappa\,G(s)\\F(s)G(s)\end{bmatrix},\qquad M(s)=Q(s)F(s),\qquad N(s)=Q(s)F(s)G(s). \]  (10.94)

Assume that the matrix L(s) is at least proper and the remaining matrices in (10.94) are strictly proper.
5. Under the given assumptions, for the output of the nominal system

\[ z(t)=\begin{bmatrix}\varkappa\,h(t)\\ v(t)\end{bmatrix}, \]  (10.95)

by Theorem 10.4, there exists the Laplace transform

\[ Z(s)=L(s)\,\mu(s)\,R_N(s)\,D_{MX}(T,s,0)+K(s)\,X(s), \]  (10.96)

which converges in the half-plane Re s > γ, where γ is a sufficiently large number. Using (10.94), we rewrite (10.96) in the form
\[ H(s)=G(s)\,\mu(s)\,R_{QFG}(s)\,D_{QFX}(T,s,0),\qquad V(s)=F(s)G(s)\,\mu(s)\,R_{QFG}(s)\,D_{QFX}(T,s,0)+F(s)X(s). \]

In particular, if the system is internally stable, then we can take γ < 0 and the vector (10.95) has a finite L2-norm. Then, for the error vector (10.93), we have e(t) bounded by c e^{γt} with γ < 0. For Re s > γ, there exists the image of the error vector

\[ E(s)=\begin{bmatrix}\varkappa\bigl(H(s)-\bar H(s)\bigr)\\ V(s)-\bar V(s)\end{bmatrix}, \]

which can be represented, using the above relations, in the form

\[ E(s)=L(s)\,\mu(s)\,R_N(s)\,D_{MX}(T,s,0)+K(s)X(s)-W_e(s)X(s)=Z(s)-W_e(s)X(s), \]  (10.97)

where

\[ W_e(s)=\begin{bmatrix}\varkappa\,W_h(s)\\W_v(s)\end{bmatrix}. \]

The right-hand sides of (10.97) and (10.96) differ only by the last term. Thus, for the application of the Wiener-Hopf method in the given case, we can use the above general relations, changing the matrix K(s) to

\[ K_e(s)=K(s)-W_e(s)=\begin{bmatrix}-\varkappa\,W_h(s)\\F(s)-W_v(s)\end{bmatrix}. \]  (10.98)

The matrix W_e(s) is not rational in the general case, but this fact does not affect the optimisation procedure, because under the given assumptions all matrices in the cost functional are rational.

10.11 Wiener-Hopf Method for Single-loop Tracking System

1. Using (10.53), (10.94), and (10.98), the image of the error (10.97) can be written in the form

\[ Z(s)=p(s)\,\bar\Phi(s)\,\bar q(s)+r_e(s), \]  (10.99)

where
 

\[ p(s)=L(s)\,\mu(s)\,\bar a_r(s)=\begin{bmatrix}\varkappa\,G(s)\mu(s)\bar a_r(s)\\F(s)G(s)\mu(s)\bar a_r(s)\end{bmatrix}, \]

\[ \bar q(s)=\bar a_l(s)\,D_{MX}(T,s,0)=\bar a_l(s)\,D_{QFX}(T,s,0), \]  (10.100)

\[ r_e(s)=L(s)\mu(s)\,\bar\theta_{0r}(s)\,\bar a_l(s)\,D_{QFX}(T,s,0)+K_e(s)X(s)=\begin{bmatrix}\varkappa\bigl(G(s)\mu(s)\bar\theta_{0r}(s)\bar a_l(s)D_{QFX}(T,s,0)-W_h(s)X(s)\bigr)\\F(s)G(s)\mu(s)\bar\theta_{0r}(s)\bar a_l(s)D_{QFX}(T,s,0)+F(s)X(s)-W_v(s)X(s)\end{bmatrix}. \]

Let the nominal system be modal controllable. Then, by Theorem 10.7, the matrix p(s) is an integral function of s, and the matrices q̄(s) and r(s) admit the representations (10.57) and (10.59):

\[ \bar q(s)=\frac{\bar N_q(s)}{\bar\chi(s)},\qquad r(s)=\frac{P_r(s)}{\bar\chi(s)}. \]

Hence, for the vector r_e(s), we have

\[ r_e(s)=\frac{P_e(s)}{\bar\Delta_h(s)\,\bar\Delta_v(s)\,\bar\chi(s)}, \]  (10.101)

where Δ_h(ζ) and Δ_v(ζ) are the denominators of the functions in (10.92) and P_e(s) is an integral function of the argument s.

2. With account for (10.99) and (10.100), in the given case the functional (10.64) takes the form

\[ J_1=\frac{T}{2\pi j}\int_{-j\omega/2}^{j\omega/2}\Bigl[\,\bar q\,'(-s)\bar\Phi'(-s)A_{L1}(s)\bar\Phi(s)\bar q(s)+\bar q\,'(-s)\bar\Phi'(-s)\bar B_e'(-s)+\bar B_e(s)\bar\Phi(s)\bar q(s)\Bigr]ds, \]  (10.102)

where

\[ A_{L1}(s)=\frac1T\sum_{k=-\infty}^{\infty}p'(-s-kj\omega)\,p(s+kj\omega)=\bar a_r'(-s)\,D_{L'L\mu\mu}(T,s,0)\,\bar a_r(s)=\bar a_r'(-s)\bigl[\varkappa^2 D_{G'G\mu\mu}(T,s,0)+D_{G'F'FG\mu\mu}(T,s,0)\bigr]\bar a_r(s) \]  (10.103)

and

\[
\bar B_e'(-s)=\frac1T\sum_{k=-\infty}^{\infty}p'(-s-kj\omega)\,r_e(s+kj\omega)
=\bar a_r'(-s)\bigl[\varkappa^2 D_{G'G\mu\mu}(T,s,0)+D_{G'F'FG\mu\mu}(T,s,0)\bigr]\bar\theta_{0r}(s)\,\bar a_l(s)\,D_{MX}(T,s,0)
-\varkappa^2\,\bar a_r'(-s)\,D_{G'W_hX\mu}(T,s,0)+\bar a_r'(-s)\,D_{G'F'FX\mu}(T,s,0)-\bar a_r'(-s)\,D_{G'F'W_vX\mu}(T,s,0).
\]

As was proved before, the rational periodic matrix (10.103) has no poles, and with respect to (10.101), the matrix B̄′_e(−s) admits a representation of the form

\[ \bar B_e'(-s)=\frac{\bar Q_e(s)}{\bar\Delta_h(s)\,\bar\Delta_v(s)\,\bar\chi(s)}, \]

where Q̄_e(s) is an integral rational periodic function.

3. Using the variable ζ = e^{−sT}, we can transform the functional (10.102) into the form (10.68):

\[ J_1=\frac{1}{2\pi j}\oint\Bigl[\,\tilde q(\zeta)\tilde\Phi(\zeta)A_{L1}(\zeta)\Phi(\zeta)q(\zeta)+\tilde q(\zeta)\tilde\Phi(\zeta)\tilde B_e(\zeta)+B_e(\zeta)\Phi(\zeta)q(\zeta)\Bigr]\frac{d\zeta}{\zeta}, \]

where

\[ A_{L1}(\zeta)=\tilde a_r(\zeta)\bigl[\varkappa^2 D_{G'G\mu\mu}(T,\zeta,0)+D_{G'F'FG\mu\mu}(T,\zeta,0)\bigr]a_r(\zeta)=T\,A_L(\zeta), \]

\[ q(\zeta)=a_l(\zeta)\,D_{QFX}(T,\zeta,0)=\frac{N_q(\zeta)}{\chi(\zeta)}, \]

\[ \tilde B_e(\zeta)=\frac{Q_e(\zeta)}{\Delta_h(\zeta)\,\Delta_v(\zeta)\,\chi(\zeta)}, \]

and the vector Q_e(ζ) is a quasi-polynomial. Per construction, these rational matrices have no poles on the unit circle. Let there exist the factorisation

\[ A_L(\zeta)=\tilde\Lambda(\zeta)\,\Lambda(\zeta), \]  (10.104)

where Λ(ζ) is a stable invertible polynomial matrix, and, similarly to (10.72),

\[ q(\zeta)=\frac{\lambda(\zeta)}{\chi(\zeta)}\,Y(\zeta)\,\mathbb 1_n, \]

where Y(ζ) is a unimodular matrix and λ(ζ) is a scalar polynomial that admits a factorisation of the form (10.80). Then, according to the general procedure given in Section 10.7, we can construct the set of optimal controllers. The matrix R₊(ζ), found as the result of the principal separation, admits the representation

\[ R_+(\zeta)=\frac{N_{e+}(\zeta)}{\Delta_h(\zeta)\,\Delta_v(\zeta)\,\chi(\zeta)} \]

with a polynomial matrix N_{e+}(ζ). The optimal column π₁ₒ(ζ) can be written, by analogy with (10.88), in the form

\[ \pi_{1o}(\zeta)=-\frac{1}{\sqrt T}\,\frac{\Lambda^{-1}(\zeta)\,N_{e+}(\zeta)}{\Delta_h(\zeta)\,\Delta_v(\zeta)\,\lambda_+(\zeta)}. \]  (10.105)

4. Let the conditions of Lemma 9.24 hold. Let us note some special features of the optimisation procedure for this case.

a) As follows from Section 9.6, if the polynomial d_Q(s) has no roots on the imaginary axis, then for the factorisation (10.104) we have

\[ \det\Lambda(\zeta)\sim\Delta_Q^+(\zeta)\,\Pi_Q^+(\zeta)\,\sigma(\zeta), \]  (10.106)

where all factors are stable polynomials (σ(ζ) being determined by d_F and d_G) and

\[ \deg\det\Lambda(\zeta)=\deg d_Q(s)+\deg d_F(s)+\deg d_G(s). \]

Then, if the polynomial d_Q(s) is factored into stable and unstable cofactors

\[ d_Q(s)=d_Q^+(s)\,d_Q^-(s), \]

then Δ_Q⁺(ζ) is the discretisation of the polynomial d_Q⁺(s). The polynomial Π_Q⁺(ζ) in (10.106) is constructed as follows. Let Π_Q(ζ) be the discretisation of the polynomial d_Q⁻(s). Then,

\[ \Pi_Q^+(\zeta)=\zeta^{\nu}\,\Pi_Q(\zeta^{-1}), \]

where ν is the least nonnegative integer transforming the right-hand side into a polynomial.
into a polynomial.
b) If under the same conditions, the polynomial dQ (s) has roots on the imagi-
nary axis, the factorisation (10.104) is impossible and a formal application
of the Wiener-Hopf method provides a controller that does not stabilise.
c) Let () = const. = 0. Then using (10.105) and (10.106), we obtain

N ()
1o () = ,
h ()v ()+ +
Q ()Q ()()

where N_π(ζ) is a polynomial vector. Let the right-hand side be irreducible. Then, from Remark 3.4, it follows that π₁ₒ(ζ) is a normal matrix, and for the ILMFD (10.91) we have

\[ \det a_\pi(\zeta)\sim\Delta_h(\zeta)\,\Delta_v(\zeta)\,\Delta_Q^+(\zeta)\,\Pi_Q^+(\zeta)\,\sigma(\zeta). \]

Moreover, an optimal controller can be chosen in such a way that the characteristic polynomial of the optimal system Δₒ(ζ) has the form

\[ \Delta_o(\zeta)\sim\det a_\pi(\zeta). \]

For any choice of an optimal controller, the characteristic polynomial of the optimal system Δₒ(ζ) is divisible by the polynomial det a_π(ζ). Therefore, all roots of the product Δ_h(ζ)Δ_v(ζ)Δ_Q⁺(ζ)Π_Q⁺(ζ)σ(ζ), which are independent of the choice of optimal controller, are always among the roots of the polynomial Δₒ(ζ).

10.12 L2-Redesign of Continuous-time LTI Systems under Persistent Excitation

1. In the preceding sections of this chapter, it was essential that all poles of the image of the input signal be located in the left half-plane. This condition is equivalent to the vanishing of all input signals for t → ∞. Nevertheless, many applied problems involve situations where the system is acted upon by persistent signals, including constant excitations like the step signal

\[ x(t)=\begin{cases}0&\text{for }t<0\\x_0&\text{for }t\ge 0\end{cases} \]

with a constant vector x₀. In principle, in many cases the approach described above can be extended to non-vanishing input signals. Such a possibility is illustrated in this section by the example of the L2-redesign problem for the standard continuous-time system.

2. The two compared systems are shown in Fig. 10.3: a given continuous-time LTI system I, which will be called the reference system, and the standard sampled-data system II. As before, w(s) in Fig. 10.3 is a rational matrix

\[ w(s)=\begin{bmatrix}K(s)&L(s)\\M(s)&N(s)\end{bmatrix}. \]  (10.107)

[Fig. 10.3. Structure for redesign: reference system I closes the plant w(s) by the continuous feedback U(s) from y to u; sampled-data system II closes the same w(s) by the discrete controller C.]

It is assumed that L(s) is at least proper, while the other elements of the matrix w(s) are strictly proper. Moreover, we assume that the standard system is internally stable and modal controllable, and that the forming element is a zero-order hold, i.e.,

\[ \mu(s)=\mu_0(s)=\frac{1-e^{-sT}}{s}. \]  (10.108)

Suppose that the reference system I is stable and that the rational matrix in the feedback U(s) is analytical at the point s = 0.
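For a scalar first-order plant, the zero-order hold (10.108) yields the classical step-invariant discretisation, which can be written in closed form. A minimal sketch (an illustration; the text itself works with general matrix plants):

```python
import math

# ZOH (step-invariant) discretisation of the scalar plant x' = a*x + b*u
# under the hold (10.108): with u constant on [kT, (k+1)T),
#   x[k+1] = ad * x[k] + bd * u[k],  ad = e^{aT},  bd = b*(e^{aT} - 1)/a.
def zoh_discretize(a, b, T):
    ad = math.exp(a * T)
    bd = b * (ad - 1.0) / a if a != 0.0 else b * T
    return ad, bd

ad, bd = zoh_discretize(-1.0, 2.0, 0.5)
dc_gain = bd / (1.0 - ad)   # steady state under a unit step; equals -b/a = 2
```

The DC-gain check mirrors the steady-state reasoning used below: the discrete model must reproduce the continuous plant's static gain −b/a exactly, independently of T.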

3. With account for (10.108), under zero initial energy the image of the output of the standard sampled-data system has the form

\[ Z(s)=L(s)\,\mu_0(s)\,\bar w_d(s)\bigl[I_n-D_{N\mu_0}(T,s,0)\,\bar w_d(s)\bigr]^{-1}D_{MX}(T,s,0)+K(s)X(s). \]  (10.109)

Under similar assumptions, the image of the output of the reference system Z̄(s) is

\[ \bar Z(s)=w_c(s)\,X(s), \]  (10.110)

where the transfer matrix of the reference system

\[ w_c(s)=L(s)\,U(s)\bigl[I_n-N(s)U(s)\bigr]^{-1}M(s)+K(s) \]  (10.111)

is assumed to be strictly proper.



4. Let the inputs of both systems be acted upon by a step signal

\[ x(t)=x_0\,\mathbb 1(t),\qquad X(s)=\frac{x_0}{s},\qquad D_X(T,s,t)=\frac{x_0}{1-e^{-sT}}\quad(0<t<T), \]  (10.112)

where x₀ is a constant vector and

\[ \mathbb 1(t)=\begin{cases}0,&t<0\\1,&t>0\end{cases} \]

is the unit step. Then, after the transient processes have died out, the constant output z̄∞ of the reference system has the form

\[ \bar z_\infty=\lim_{s\to 0}s\,w_c(s)X(s)=\lim_{s\to 0}w_c(s)\,x_0=w_c(0)\,x_0, \]  (10.113)

where we used the fact that the image X(s) has the form (10.112). Similarly, if the standard sampled-data system is internally stable and (10.108) and (10.112) hold, then there exists the limit

\[ z_\infty=\lim_{t\to\infty}z(t)=\text{const}, \]  (10.114)

which can be found by the formula

\[ z_\infty=\lim_{s\to 0}s\,Z(s). \]
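The discrete Laplace image in (10.112) can be verified by summing the defining series directly. The sketch below assumes the one-sided (causal) form of the discrete Laplace transform, which suffices for the step input:

```python
import cmath

# Direct check of D_X(T, s, t) = x0/(1 - e^{-sT}) for the step (10.112),
# assuming the causal, one-sided series
#   D_x(T, s, t) = sum_{k>=0} x(t + k*T) * exp(-k*s*T),  0 <= t < T.
def d_transform(x, T, s, t, n_terms=2000):
    return sum(x(t + k * T) * cmath.exp(-k * s * T) for k in range(n_terms))

T, s, x0 = 0.5, 1.0 + 2.0j, 3.0
series = d_transform(lambda tau: x0, T, s, t=0.2)
closed = x0 / (1.0 - cmath.exp(-s * T))
```

For the step, the series is geometric with ratio e^{−sT}, so the truncation error decays like |e^{−sT}|^n and is negligible for Re s > 0.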

As follows from (10.113) and (10.114), under (10.112) the output signals of both systems have, in the general case, infinite L2-norms. Nevertheless, under the condition

\[ z_\infty=\bar z_\infty, \]  (10.115)

the difference

\[ e(t)=z(t)-\bar z(t) \]

has a finite L2-norm, i.e., the following integral converges:

\[ J=\int_0^\infty e'(t)\,e(t)\,dt=\int_0^\infty\bigl[z(t)-\bar z(t)\bigr]'\bigl[z(t)-\bar z(t)\bigr]dt. \]
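The role of condition (10.115) is easy to see numerically: the squared error between two step responses is integrable exactly when their steady states agree. A hedged illustration with first-order responses (chosen purely as an example, not taken from the book):

```python
import math

# e = z - zbar between two step responses; the integral of e^2 converges
# iff the steady-state values coincide (condition (10.115)).
def l2_error(z_inf, zbar_inf, a1=-1.0, a2=-2.0, t_end=50.0, dt=1e-3):
    J, t = 0.0, 0.0
    while t < t_end:
        z = z_inf * (1.0 - math.exp(a1 * t))
        zbar = zbar_inf * (1.0 - math.exp(a2 * t))
        J += (z - zbar) ** 2 * dt          # left-rectangle quadrature
        t += dt
    return J

J_matched = l2_error(1.0, 1.0)    # equal steady states: J -> 1/12 analytically
J_mismatch = l2_error(1.0, 2.0)   # unequal steady states: J grows with t_end
```

In the matched case the error is e^{−2t} − e^{−t}, whose squared integral is 1/4 − 2/3 + 1/2 = 1/12; in the mismatched case the integrand tends to a nonzero constant and J diverges as t_end grows.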

Using the Parseval formula, we can write this integral in the form

\[ J=\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}E'(-s)\,E(s)\,ds=\frac{1}{2\pi j}\int_{-j\infty}^{j\infty}\bigl[Z(-s)-\bar Z(-s)\bigr]'\bigl[Z(s)-\bar Z(s)\bigr]ds, \]  (10.116)

where Z(s) and Z̄(s) are the images (10.109) and (10.110). The following optimisation problem is then natural.

L2-redesign problem. Let a reference system I and a sampling period T be given, and suppose (10.108) holds. Find the transfer function (matrix) w_d(ζ) of a discrete-time controller such that the standard sampled-data system II is internally stable, satisfies Condition (10.115), and the integral (10.116) reaches its minimum.

Henceforth, this problem will be called the problem of L2-redesign of the reference system. The general solution described below is based on the general approach of Wiener and Hopf. First, the set of all stabilising controllers ensuring Condition (10.115) is constructed. Then the problem is reduced to the minimisation of a quadratic functional. In this case, some special features arise that are not encountered in the H2-problem.

5. According to the above statement of the problem, first of all we must construct the set of stabilising controllers for the standard sampled-data system that guarantee Condition (10.115). One such possibility is given by the following lemma.

Lemma 10.13. Let the poles of the matrix w(s) satisfy Conditions (6.106) for non-pathological behaviour, let the reference system be asymptotically stable, and let the standard sampled-data system be internally stable. Let also the rational matrix U(s) be analytical at s = 0. Then, for Condition (10.115) to be valid, it is sufficient that

\[ \bar w_d(0)=w_d(1)=U(0). \]  (10.117)

Proof. a) From (10.108), in the vicinity of the point s = 0 we have

\[ \mu_0(s)=T+\ldots\,. \]  (10.118)

Hereinafter, the dots denote a sum of terms that vanish as s → 0. Moreover, we have

\[ \mu_0(kj\omega)=0,\qquad(k=\pm1,\pm2,\ldots). \]  (10.119)

b) Consider the series

\[ D_{N\mu_0}(T,s,0)=\frac1T\sum_{k=-\infty}^{\infty}N(s+kj\omega)\,\mu_0(s+kj\omega). \]

Using (10.118) and (10.119), it can easily be shown that in the vicinity of the point s = 0

\[ D_{N\mu_0}(T,s,0)=N(s)+\ldots\,. \]

c) Since X(s) = s⁻¹x₀, similarly to this equation we obtain

\[ s\,D_{MX}(T,s,0)=\frac sT\sum_{k=-\infty}^{\infty}\frac{M(s+kj\omega)\,x_0}{s+kj\omega}=\frac1T\,M(s)\,x_0+\ldots\,. \]

Using Relations (10.109) and (10.118), it can be shown that

\[ s\,Z(s)=L(s)\,\bar w_d(s)\bigl[I_n-N(s)\bar w_d(s)\bigr]^{-1}M(s)\,x_0+K(s)\,x_0+\ldots\,. \]  (10.120)

At the same time, Equation (10.110) yields

\[ s\,\bar Z(s)=L(s)\,U(s)\bigl[I_n-N(s)U(s)\bigr]^{-1}M(s)\,x_0+K(s)\,x_0. \]

Since the reference system is asymptotically stable, the right-hand side tends to the finite value z̄∞ (10.113) as s → 0. Comparing this with (10.120), we find that under Condition (10.117) the right-hand side of (10.120) tends to the finite value z∞ = z̄∞ as s → 0.

6. Hereinafter, we assume that the conditions of Lemma 10.13 hold.

Lemma 10.14. Let the conditions of Lemma 10.13 hold and let the matrix

\[ \bar a_r(0)-U(0)\,\bar b_r(0)=a_r(1)-U(0)\,b_r(1) \]  (10.121)

be nonsingular. Assume

\[ \Theta_0=\bigl[a_r(1)-U(0)\,b_r(1)\bigr]^{-1}\bigl[\beta_{0r}(1)-U(0)\,\alpha_{0r}(1)\bigr]. \]  (10.122)

Then the set of all system matrices ensuring the internal stability of the standard sampled-data system and guaranteeing (10.115) has the form

\[ \Phi(\zeta)=\Theta_0+(1-\zeta)\,\Theta(\zeta), \]  (10.123)

where Θ(ζ) is any stable rational matrix.

Proof. As follows from (8.45), the set of transfer functions of all stabilising controllers is given by

\[ \bar w_d(s)=w_d(\zeta)\big|_{\zeta=e^{-sT}}=\bigl[\bar\beta_{0r}(s)-\bar a_r(s)\bar\Phi(s)\bigr]\bigl[\bar\alpha_{0r}(s)-e^{-sT}\,\bar b_r(s)\bar\Phi(s)\bigr]^{-1}. \]

For s → 0, with regard to (10.117), we obtain

\[ U(0)=\bigl[\bar\beta_{0r}(0)-\bar a_r(0)\bar\Phi(0)\bigr]\bigl[\bar\alpha_{0r}(0)-\bar b_r(0)\bar\Phi(0)\bigr]^{-1}. \]

For a nonsingular matrix (10.121), we obtain

\[ \bar\Phi(0)=\Theta_0, \]

where Θ₀ is given by (10.122). This condition is equivalent to

\[ \Phi(1)=\Theta_0, \]

whence (10.123) follows.

Corollary 10.15. From (10.123), for ζ = e^{−sT} we obtain

\[ \bar\Phi(s)=\Theta_0+\bigl(1-e^{-sT}\bigr)\,\bar\Theta(s). \]  (10.124)

7. Using (10.109)-(10.111), the image of the error can be written in the form

\[ E(s)=Z(s)-\bar Z(s)=L(s)\mu_0(s)\bar w_d(s)\bigl[I_n-D_{N\mu_0}(T,s,0)\bar w_d(s)\bigr]^{-1}D_{MX}(T,s,0)-L(s)U(s)\bigl[I_n-N(s)U(s)\bigr]^{-1}M(s)X(s). \]  (10.125)

Under Condition (10.117), the right-hand side of this relation is analytical at s = 0. Let us find a representation of the error image E(s) in terms of the new system matrix Θ(ζ). For this purpose, we use Equation (10.124). Then, from (10.53) and (10.124), we have

\[ R_N(s)=\bar w_d(s)\bigl[I_n-D_{N\mu_0}(T,s,0)\bar w_d(s)\bigr]^{-1}=\bar\theta_{0r}(s)\bar a_l(s)-\bar a_r(s)\bar\Phi(s)\bar a_l(s)=\bar\theta_{0r}(s)\bar a_l(s)-\bar a_r(s)\Theta_0\bar a_l(s)-\bigl(1-e^{-sT}\bigr)\bar a_r(s)\bar\Theta(s)\bar a_l(s). \]

From these relations and (10.125), we find

\[ E(s)=p_0(s)\,\bar\Theta(s)\,\bar q_0(s)+r_0(s), \]

where

\[ p_0(s)=L(s)\,\mu_0(s)\,\bar a_r(s), \]

\[ \bar q_0(s)=-\bigl(1-e^{-sT}\bigr)\,\bar a_l(s)\,D_{MX}(T,s,0), \]  (10.126)

\[ r_0(s)=L(s)\mu_0(s)\bigl[\bar\theta_{0r}(s)\bar a_l(s)-\bar a_r(s)\Theta_0\bar a_l(s)\bigr]D_{MX}(T,s,0)-L(s)U(s)\bigl[I_n-N(s)U(s)\bigr]^{-1}M(s)X(s). \]

Let us prove some properties of the matrices (10.126), which will be important in what follows. In the present section, it is always assumed that the conditions of Lemma 10.13 hold.

Lemma 10.16. The matrices p₀(s) and

\[ p_1(s)=L(s)\,\bar a_r(s) \]  (10.127)

are integral functions of s.



Proof. The claim about the matrix p₀(s) was proved above. Further, we have

\[ L(s)\,\bar a_r(s)=\frac{p_0(s)}{\mu_0(s)}. \]  (10.128)

The left-hand side of (10.128) can have poles only at poles of the matrix L(s), and, due to (10.108), the right-hand side can have poles only at the points s_k = kjω (k = ±1, ±2, …). But, by assumption, the matrix L(s) is analytical at the points s_k. Therefore, the left-hand side of (10.128) has no poles, i.e., it is an integral function of s.

Lemma 10.17. The vector q̄₀(s) is an integral function of s.

Proof. Let us transform the expression for the vector q̄₀(s) using the relation

\[ D_{MX}(T,s,0)=\int_0^T D_M(T,s,\tau)\,D_X(T,s,-\tau)\,d\tau. \]  (10.129)

From (10.112) and (6.71), for 0 < τ < T we have

\[ D_X(T,s,-\tau)=D_X(T,s,T-\tau)\,e^{-sT}=\frac{e^{-sT}}{1-e^{-sT}}\,x_0. \]

Substituting this result into (10.129), we find

\[ D_{MX}(T,s,0)=\frac{e^{-sT}}{1-e^{-sT}}\int_0^T D_M(T,s,\tau)\,d\tau\;x_0. \]

Finally, using (10.126), we obtain

\[ \bar q_0(s)=\bar q_1(s)\,x_0,\qquad \bar q_1(s)=-e^{-sT}\int_0^T\bar a_l(s)\,D_M(T,s,\tau)\,d\tau, \]

whence it immediately follows that the vector q̄₁(s) has no poles, because we have already proved that the matrix ā_l(s)D_M(T,s,τ) has no poles.

Lemma 10.18. If the conditions of Lemma 10.13 hold, the vector r₀(s) has no poles on the imaginary axis.

Proof. Consider the vector

\[ Z_1(s)=L(s)\mu_0(s)\bigl[\bar\theta_{0r}(s)\bar a_l(s)-\bar a_r(s)\Theta_0\bar a_l(s)\bigr]D_{MX}(T,s,0)+K(s)X(s). \]  (10.130)

This vector is the image of the output for Φ(ζ) = Θ₀. Since the standard sampled-data system is modal controllable, due to Theorem 10.5 the following representation holds:

\[ Z_1(s)=\frac{D(s)\,x_0}{1-e^{-sT}}, \]  (10.131)

where D(s) is an integral function. Then the image (10.130) can have simple pure imaginary poles only at the points s_k = kjω = 2kπj/T (k = 0, ±1, …). On the other hand, with account for (10.108), (10.126) and (10.127), from (10.130) we obtain

\[ Z_1(s)=\frac{Z_2(s)}{s}\,x_0, \]  (10.132)

where

\[ Z_2(s)=-L(s)\bigl[\bar\theta_{0r}(s)-\bar a_r(s)\Theta_0\bigr]\bar q_1(s)+K(s). \]

The matrix Z₂(s) is analytical at the points s_k = kjω (k = ±1, ±2, …), because, under the given assumptions, the right-hand side has no poles there due to (6.106) and Lemmata 10.16 and 10.17. Moreover, the matrix Z₂(s) has no pole at s = 0. Indeed, if we assume the converse, then from (10.132) we find that the image (10.130) has a pole at s = 0 with a multiplicity greater than one, which contradicts (10.131). Hence the matrix Z₂(s) is an integral function of s. Therefore, with respect to (10.132), the vector r₀(s) can be written in the form

\[ r_0(s)=\frac1s\bigl[Z_2(s)-w_c(s)\bigr]x_0, \]  (10.133)

where w_c(s) is the transfer function of the reference system (10.111), whose poles are located in the left half-plane due to the assumption of its stability. From (10.133), we see that the vector r₀(s) can have at most a simple pole at the point s = 0. But, due to the choice Φ(ζ) = Θ₀, we have

\[ \lim_{s\to0}s\,r_0(s)=z_\infty-\bar z_\infty=\bigl[Z_2(0)-w_c(0)\bigr]x_0=0, \]

so that the vector r₀(s) is analytical at s = 0. Hence this vector is analytical on the whole imaginary axis.

Corollary 10.19. Let us have the standard form

\[ w_c(s)=\frac{N_c(s)}{d_c(s)}. \]  (10.134)

Then the vector r₀(s) can be represented in the form

\[ r_0(s)=\frac{P_c(s)}{d_c(s)}, \]  (10.135)

where the vector P_c(s) is an integral function of s.

Proof. From the proof of Lemma 10.18, it follows that the matrix Z₂(s) is an integral function of s. Moreover, the right-hand side of (10.133) is analytical at the point s = 0. Therefore, substituting (10.134) into (10.133), we obtain the claim of the corollary.
424 10 L2 -Design of SD Systems

8. Using (10.126) and repeating the derivations of Section 10.6, we obtain
that under the given assumptions the L2 -optimal redesign problem reduces to
minimising the functional

    J1 = (T/2πj) ∫_{-jω/2}^{jω/2} [ q0(s) (s) AL1(s) (s) q0(s)
             - q0(s) (s) B0(s) - B0(s) (s) q0(s) ] ds ,    (10.136)

where
                    AL1(s) = ar(s) DLL00(T, s, 0) ar(s) .    (10.137)
Moreover,

    B0(s) = (1/T) Σ_{k=-∞}^{∞} r0(s + kjω) p0(s + kjω) ,
                                                            (10.138)
    B0(s) = (1/T) Σ_{k=-∞}^{∞} p0(s + kjω) r0(s + kjω) ,
and after transformations, we obtain

    B0(s) = ar(s)DLL00(T, s, 0) 0r(s) al(s) ar(s)0 al(s) DMX(T, s, 0)
            - ar(s)DLwcX(T, s, 0) + ar(s)DLKX0(T, s, 0) .    (10.139)

The matrices (10.137)-(10.139) are rational periodic. Therefore, Matrix
(10.137) is an integral function. Moreover, since the matrix p0(s) is an in-
tegral function and (10.135) holds, we have

                    B0(s) = D0(s) / c(s) ,    (10.140)

where the polynomial c(s) is the discretisation of the polynomial dc(s) and
D0(s) is an integral rational periodic function.

9. Using the new variable ζ = e^{-sT} in (10.136), we obtain the functional

    J1 = (1/2πj) ∮ [ q0(ζ) (ζ) AL1(ζ) (ζ) q0(ζ)
             - q0(ζ) (ζ) B0(ζ) - B0(ζ) (ζ) q0(ζ) ] dζ ,    (10.141)

that should be minimised over the set of stable rational matrices (ζ). Simi-
larly to (10.72), we obtain

                    G(ζ) q0(ζ) = 0(ζ) 1_{n1} ,

where 0(ζ) is a scalar polynomial and G(ζ) is a unimodular matrix. Then

                    (ζ) G^{-1}(ζ) = [ 1(ζ) 2(ζ) . . . m(ζ) ] ,
so that the functional (10.141) can be written in a form similar to (10.77):

    J1 = (1/2πj) ∮ [ 1(ζ) 0(ζ^{-1}) AL1(ζ) 0(ζ) 1(ζ)
             - 1(ζ) 0(ζ^{-1}) B0(ζ) 1(ζ) 0(ζ) ] dζ .
If the factorisations (10.79) and (10.80) hold, the optimisation problem is
solvable, because the matrix B0(ζ) has no poles on the unit circle. When we
have
                    0(ζ) 0(ζ^{-1}) = 0+(ζ) 0+(ζ^{-1})

with a stable polynomial 0+(ζ) and (10.79), then according to the general
Wiener-Hopf method, we construct the matrix

                    R0(ζ) = 1(ζ) B0(ζ) 0(ζ^{-1}) / 0+(ζ^{-1}) .
From (10.140) for e^{-sT} = ζ, we receive

                    B0(ζ) = D0(ζ) / c(ζ) .

From the last two equations, we conclude that the set of stable poles of the
matrix R0(ζ) belongs to the set of roots of the polynomial c(ζ). Therefore,
as a result of the principal separation (10.84), we obtain

                    R0+(ζ) = N0(ζ) / c(ζ)
with a polynomial matrix N0(ζ). The optimal vector o1(ζ) has the form

                    o1(ζ) = (1/T) 1^{-1}(ζ) N0(ζ) / [ 0+(ζ) c(ζ) ] .    (10.142)

The further procedure for constructing the set of optimal controllers is the
same as in Section 10.7.

10. Similarly to Section 10.7, it can be found that the characteristic poly-
nomial of the optimal system o(ζ) is divisible by the polynomial c(ζ).
Hence, if in particular the function (10.142) is irreducible, then the charac-
teristic polynomial of the optimal standard sampled-data system is divisible
by the discretisation of the characteristic polynomial of the reference model
dc(s), and this fact does not depend on the choice of the controller which
minimises Functional (10.141).

10.13 L2 Redesign of a Single-loop LTI System


1. To illustrate the general approach given in Section 10.12, we consider in
the present section the L2 redesign problem for a single-loop continuous-time
system shown in Fig. 10.4. Here the matrices F (s), Q(s), G(s) are the same

Fig. 10.4. Single-loop reference system

as in Fig. 9.1, and U (s) is a given rational matrix of compatible dimensions.


The output vector of the reference system has the form
 
                    z(t) = [ h1(t) ; v(t) ] .    (10.143)

Suppose the reference system in Fig. 10.4 is asymptotically stable, the input
signal has the form (10.112) and the matrix U (s) is analytical at the point
s = 0. Then the L2 -redesign problem can be formulated as follows.

For the sampled-data system in Fig. 9.1, let us have the sampling period
T , the transfer function of the forming element (10.108) and the input signal
(10.112). Let z(t) denote the output vector (10.143) of the LTI system under
zero initial energy, and let z̃(t) be the output of the sampled-data system
under similar assumptions. It is required to find the transfer function of the
discrete controller wd(ζ) satisfying the following conditions:
a) The following equality holds:

                    z∞ = lim_{t→∞} z(t) = lim_{t→∞} z̃(t) = z̃∞ .

b) The sampled-data system is internally stable.


c) The integral

    J = ∫_0^∞ [z(t) - z̃(t)]^T [z(t) - z̃(t)] dt

      = ∫_0^∞ [v(t) - ṽ(t)]^T [v(t) - ṽ(t)] dt

        + ∫_0^∞ [h1(t) - h̃1(t)]^T [h1(t) - h̃1(t)] dt

takes the minimal value.



2. Let us show that the problem formulated above can be reduced to the
general scheme considered in Section 10.12. With this aim in view, we show
that the system shown in Fig. 10.4 can be presented in the form of the reference
system I from Fig. 10.3. Notice that the transfer matrix of the LTI system
wc(s) from the input x to the output z can be represented in the form (10.111).
Indeed, using the standard structural transformations, it is easy to find the
transfer matrices whx(s) and wvx(s) from the input x(t) to the outputs h(t)
and v(t):

    whx(s) = G(s)U(s)Q(s)F(s) [I - G(s)U(s)Q(s)F(s)]^{-1} ,    (10.144)

    wvx(s) = F(s) [I - G(s)U(s)Q(s)F(s)]^{-1} .                (10.145)

Recall that for any matrices A and B of compatible dimensions, we have

                    A (I - BA)^{-1} = (I - AB)^{-1} A .

Then assuming
                    A = Q(s)F(s) ,    B = G(s)U(s) ,
we obtain

    G(s)U(s)Q(s)F(s) [I - G(s)U(s)Q(s)F(s)]^{-1}
        = G(s)U(s) [In - Q(s)F(s)G(s)U(s)]^{-1} Q(s)F(s) .    (10.146)

Using (10.144), we obtain

    whx(s) = G(s)U(s) [In - Q(s)F(s)G(s)U(s)]^{-1} Q(s)F(s) .    (10.147)

Then we prove

    wvx(s) = F(s) [I - G(s)U(s)Q(s)F(s)]^{-1}
           = F(s) { [I - G(s)U(s)Q(s)F(s)]^{-1} - I } + F(s)    (10.148)
           = F(s)G(s)U(s)Q(s)F(s) [I - G(s)U(s)Q(s)F(s)]^{-1} + F(s) .

From (10.148) and (10.146), it follows that

    wvx(s) = F(s)G(s)U(s) [In - Q(s)F(s)G(s)U(s)]^{-1} Q(s)F(s) + F(s) .

Using (10.147) and (10.148), we find

    wc(s) = [ whx(s) ; wvx(s) ] = L(s)U(s) [In - N(s)U(s)]^{-1} M(s) + K(s) ,

where
    K(s) = [ O ; F(s) ] ,    L(s) = [ G(s) ; F(s)G(s) ] ,
                                                            (10.149)
    M(s) = Q(s)F(s) ,    N(s) = Q(s)F(s)G(s) .

Comparing (10.149) with (9.9), we arrive at the conclusion that the problem
under consideration is a special case of the general problem described in Sec-
tion 10.12, whenever the elements of Matrix (10.107) have the form (10.149).
Therefore, the further solution of the L2 -redesign problem can be found using
the general algorithm of Section 10.12.

3. Under some additional assumptions taking into account the special struc-
ture of the reference system, we can establish some additional important prop-
erties of the optimal system.

Theorem 10.20. Let the conditions of Lemma 9.24 hold, the factorisation
(10.104) exist and (ζ) = const. Then there exists a set of optimal controllers.
Moreover, if the ratio (10.142) is irreducible, then the optimal controller can be
chosen in such a way that the characteristic polynomial of the optimal system
o(ζ) becomes
                    o(ζ) = c(ζ) (ζ) ,

where c(ζ) is a polynomial and (ζ) is a polynomial such that (ζ) = det (ζ)
and
                    deg (ζ) ≤ deg dQ(s) + deg dF(s) + deg dG(s) .

For any choice of the optimal controller, the characteristic polynomial of the
optimal system o(ζ) is divisible by the polynomial o(ζ).

Remark 10.21. If the polynomial dQ (s) has roots on the imaginary axis, then
the factorisation (10.104) is impossible.
Appendices
A
Operator Transformations of Taylor Sequences

1. Let the sequence of complex numbers

                    {uk} = {u0, u1, . . . }    (A.1)

be given. This sequence is called a Taylor sequence, if there exist positive
numbers M and ρ such that the inequalities

                    |ui| < M/ρ^i ,    (i = 0, 1, . . . )    (A.2)

are true.

2. For a Taylor sequence {uk} and |ζ| < ρ, the series

                    u0(ζ) = Σ_{k=0}^{∞} uk ζ^k    (A.3)

converges. The function u0(ζ) is called the ζ-transform (image) of the Taylor
sequence (A.1). Relation (A.3) is symbolically written as

                    {uk} ←→ u0(ζ) .    (A.4)

It is well known that the ζ-transform u0(ζ) is analytical in |ζ| < ρ. Thus,
Relation (A.4) might be interpreted as a map from the set of Taylor sequences
into the set of functions u0(ζ) which are analytical at the point ζ = 0.

3. Conversely, every function u0(ζ), analytical at ζ = 0, may be developed
in a neighbourhood of the origin into its Taylor series

                    u0(ζ) = u0 + u1 ζ + u2 ζ^2 + . . . ,

the coefficients of which satisfy an inequality of the form (A.2). Thus, the
coefficients of this expansion always establish a Taylor sequence, for which the
function u0(ζ) proves to be the ζ-transform. Hence there exists a one-to-one
map between the set of Taylor sequences and the set of functions of a complex
argument that are analytical at the origin.
432 A Operator Transformations of Taylor Sequences

4. Let the ζ-transform (A.3) of the sequence (A.1) be convergent for |ζ| < R.
Then for |z| > R^{-1}, the series

                    u*(z) = Σ_{k=0}^{∞} uk z^{-k}    (A.5)

converges, which is named the z-transform of the sequence {uk}. Relation (A.5)
is denoted by
                    {uk} ←→ u*(z) .    (A.6)

Obviously, also the reverse is correct: If we have (A.6) for |z| > R^{-1}, then
for |ζ| < R the series (A.3) converges, hence the sequence {uk} is a Tay-
lor sequence. Therefore, the sequence {uk} possesses a z-transform and a
ζ-transform, if and only if it is a Taylor sequence.

5. If we compare Formula (A.3) with (A.5), then we recognise that the ζ-
transform u0(ζ) and the z-transform u*(z) are connected by the interrelations

                    u0(ζ) = u*(ζ^{-1}) ,    u*(z) = u0(z^{-1}) .    (A.7)

Nevertheless, it must be clear that in general, the functions u0(ζ) and u*(z)
are defined in different regions.

6. The above considerations suggest that a complex function u0(ζ) repre-
sents a ζ-transform, if it is analytical at the origin, while the function u*(z)
represents a z-transform exactly when it is analytical at the infinitely far
point. In particular, a rational function u0(ζ) is a ζ-transform, if it has no
pole at ζ = 0, while the rational function u*(z) is a z-transform, whenever it
is at least proper.

7. In the control-theoretic and engineering literature, mainly the
z-transformation was investigated, and its properties are presented in de-
tail, e.g. in [3]. For the purposes of our book, both transformations are im-
portant. On the one hand, there is no need to consider the properties of the
ζ-transformation in detail, because due to Relation (A.7), any formula from
the theory of the z-transformation can be transferred into a corresponding
formula for the ζ-transformation by exchanging z for ζ^{-1}, and conversely.
On the other hand, however, we have to be careful, because the named trans-
formations are defined over different regions.

8. In particular, let {uk} be a Taylor sequence. Then, also the displaced
sequence
                    {uk+r} = {ur, ur+1, . . . }

is a Taylor sequence. As known from [3], we get from (A.6)

                    {uk+r} ←→ z^r [ u*(z) - Σ_{ν=0}^{r-1} uν z^{-ν} ] .    (A.8)

Substituting ζ^{-1} for z on the right side of (A.8), we obtain the corresponding
formula for the ζ-transformation

                    {uk+r} ←→ ζ^{-r} [ u0(ζ) - Σ_{ν=0}^{r-1} uν ζ^{ν} ] ,    (A.9)

where u0(ζ) is the ζ-transform given by (A.4).

9. Applying the ζ-transformation for solving difference equations requires
overcoming certain theoretical difficulties that arise from using the forward-
shift operator (right shifting) and the backward-shift operator (left shifting).
This fact was considered in [99]. The situation should be demonstrated by an
example, which was already considered in [14].
Let us have the scalar difference equation

                    yk+1 - a yk = uk ,    (k = 0, 1, . . . )    (A.10)

with a freely selectable initial condition y0 = ỹ, where a is any given number.
Suppose the input {uk} to be a Taylor sequence. Then after transition to
z-transforms according to (A.8), we get

                    z [ y*(z) - ỹ ] - a y*(z) = u*(z) ,    (A.11)

which results in

                    y*(z) = z ỹ/(z - a) + u*(z)/(z - a) .    (A.12)

Particularly, ỹ = 0 implies

                    (z - a) y*(z) = u*(z) .

On the other hand, applying the ζ-transformation to Equation (A.10), we
receive with respect to (A.9)

                    ζ^{-1} [ y0(ζ) - ỹ ] - a y0(ζ) = u0(ζ)

and thus
                    y0(ζ) = ỹ/(1 - aζ) + ζ u0(ζ)/(1 - aζ) .    (A.13)

Especially for ỹ = 0, we obtain

                    (1 - aζ) y0(ζ) = ζ u0(ζ) .

It is emphasised that Relation (A.13) may be derived from (A.12), if we
exchange z against ζ^{-1}.
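Expanding (A.13) into powers of ζ gives y_k = ỹ a^k + Σ_{j=0}^{k-1} a^{k-1-j} u_j, which must agree with the recursion (A.10). A short Python check with arbitrarily chosen data:

```python
# Check that the coefficients of the zeta-image (A.13) reproduce the
# recursion y_{k+1} = a*y_k + u_k (a, y0 and the input are arbitrary).
a, y0 = 0.5, 1.0
u = [0.8 ** k for k in range(20)]          # an input Taylor sequence

# Recursion (A.10).
y_rec = [y0]
for k in range(19):
    y_rec.append(a * y_rec[-1] + u[k])

# Series coefficients of y0/(1 - a*zeta) + zeta*u0(zeta)/(1 - a*zeta):
# y_k = y0*a^k + sum_{j<k} a^(k-1-j) * u_j.
y_img = [y0 * a ** k + sum(a ** (k - 1 - j) * u[j] for j in range(k))
         for k in range(20)]

assert all(abs(p - q) < 1e-12 for p, q in zip(y_rec, y_img))
```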

10. Assume ζ = e^{-sT} in (A.3), so the function

                    u*(s) = u0(ζ)|_{ζ=e^{-sT}} = Σ_{k=0}^{∞} uk e^{-ksT}    (A.14)

is obtained, which is called the discrete Laplace transformation (DLT) of the
sequence {uk}, [148]. Obviously, the transformation (A.14) converges, if and
only if {uk} is a Taylor sequence. Besides, if the ζ-transform (A.3) converges
in the circle |ζ| < R, then the transform (A.14) converges in the open half-
plane Re s > -(1/T) ln R.
B
Sums of Certain Series

1. Let the strictly proper scalar fraction

    F(s) = m(s)/d(s) = (m1 s^{n-1} + . . . + mn) / (s^n + d1 s^{n-1} + . . . + dn)    (B.1)

be given and let the expansion into partial fractions

    F(s) = Σ_{i=1}^{q} Σ_{k=1}^{νi} fik / (s - si)^k

be valid. Then for 0 < t < T, we obtain

    φF(T, s, t) = (1/T) Σ_{k=-∞}^{∞} (m(s + kjω)/d(s + kjω)) e^{kjωt} ,    (B.2)

where the sum of this series is given in closed form by

    φF(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{νi} (fik/(k-1)!) ∂^{k-1}/∂si^{k-1} [ e^{(si-s)t} / (1 - e^{(si-s)T}) ] .

Besides, if we have m1 ≠ 0 in (B.1), then the sum of the series φF(T, s, t)
possesses jumps of finite height at the points tn = nT , (n = 0, ±1, . . . ).
However, when m1 = m2 = . . . = m_{κ-1} = 0 and m_κ ≠ 0 take place, then
the periodic function φF(T, s, t) has derivatives up to and including (κ-1)-th
order, where the (κ-1)-th derivative is piecewise continuous, but the lower
derivatives are continuous.
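For the simplest case F(s) = 1/(s - s1), the closed-form sum in (B.2) reduces to e^{(s1-s)t}/(1 - e^{(s1-s)T}). This can be checked numerically in pure Python (all values below are arbitrary real test data; the symmetric partial sums converge slowly, hence the large term count):

```python
import math

# Check (B.2) for F(s) = 1/(s - s1): for 0 < t < T the series
#   (1/T) * sum_k e^{k j w t} / (s + k j w - s1),  w = 2*pi/T,
# sums to e^{(s1 - s) t} / (1 - e^{(s1 - s) T}).
T, s, s1, t = 1.0, 0.3, -0.5, 0.4
w = 2.0 * math.pi / T
x = s - s1                      # the shift appearing in every term

total = 1.0 / x                 # k = 0 term
for k in range(1, 100_000):     # terms +k and -k combined into one real term
    theta = w * k * t
    denom = x * x + (w * k) ** 2
    total += (2.0 * x * math.cos(theta) + 2.0 * w * k * math.sin(theta)) / denom
total /= T

closed = math.exp(-x * t) / (1.0 - math.exp(-x * T))
assert abs(total - closed) < 1e-3
```

Pairing the +k and -k terms makes every summand real and improves the numerical behaviour of the truncated series.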

2. Multiplying (B.2) by e^{st}, for 0 < t < T we obtain

    DF(T, s, t) = (1/T) Σ_{k=-∞}^{∞} (m(s + kjω)/d(s + kjω)) e^{(s+kjω)t} ,

where the sum of this series is

    DF(T, s, t) = e^{st} φF(T, s, t)
                = Σ_{i=1}^{q} Σ_{k=1}^{νi} (fik/(k-1)!) ∂^{k-1}/∂si^{k-1} [ e^{si t} / (1 - e^{(si-s)T}) ] .

If we have m1 ≠ 0 in (B.1), then the sum of the series DF(T, s, t) has jumps of
finite height at the points tn = nT , (n = 0, ±1, . . . ). However, if m1 = m2 =
. . . = m_{κ-1} = 0 and m_κ ≠ 0 are true, then the function DF(T, s, t) has
derivatives up to and including (κ-1)-th order, where the (κ-1)-th derivative
is piecewise continuous, but the lower derivatives are continuous.

3. Suppose
                    μ(s) = ∫_0^T e^{-st} m(t) dt ,

where the function m(t) is of bounded variation on the interval 0 ≤ t ≤ T
and has a finite number of jumps. Then for 0 < t < T, we obtain

    DFμ(T, s, t) = (1/T) Σ_{k=-∞}^{∞} F(s + kjω) μ(s + kjω) e^{(s+kjω)t} ,

where the sum DFμ(T, s, t) of this series is determined by each one of the
equivalent formulae

    DFμ(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{νi} (fik/(k-1)!) ∂^{k-1}/∂si^{k-1} [ e^{si t} μ(si) / (e^{(s-si)T} - 1) ] + ĥF(t) ,

    DFμ(T, s, t) = Σ_{i=1}^{q} Σ_{k=1}^{νi} (fik/(k-1)!) ∂^{k-1}/∂si^{k-1} [ e^{si t} μ(si) / (1 - e^{(si-s)T}) ] + h̄F(t) .

The functions ĥF(t) and h̄F(t) are given by the relations

    ĥF(t) = ∫_0^t hF(t - τ) m(τ) dτ ,    h̄F(t) = ∫_t^T hF(t - τ) m(τ) dτ ,

where
    hF(t) = Σ_{i=1}^{q} Σ_{k=1}^{νi} (fik/(k-1)!) t^{k-1} e^{si t} .

Besides, the sum of the series DFμ(T, s, t) depends continuously on t and
has a piecewise continuous derivative. For m1 = . . . = m_{κ-1} = 0, m_κ ≠ 0,
the function DFμ(T, s, t) possesses derivatives up to and including κ-th order.
Hereby, the κ-th derivative is piecewise continuous and the lower ones are
continuous.
C
DirectSDM - A Toolbox for Optimal Design of Multivariable SD Systems

C.1 Introduction

This section contains a short description of the DirectSDM Toolbox for


MATLAB. The toolbox is designed for solving optimisation problems for
multivariable sampled-data control systems. The computational procedures
used in this software are based on the frequency-domain theory of sampled-
data systems developed in the present book, and on the theory of matrix
polynomial equations [79, 80, 55, 56].
The DirectSDM Toolbox is compatible with MATLAB 6.0 and higher
and requires the Control Toolbox. The toolbox is not compatible with
the Polynomial Toolbox (http://www.polyx.com), although some functions
have the same names. The reader may download the DirectSDM Toolbox
from http://www.iat.uni-rostock.de/blampe/ .

C.2 Data Structures

For the description of control system elements, the following two data structures
are used:
- Polynomial and quasi-polynomial matrices;
- Real rational matrices.
Polynomial matrices are realised as objects of the class poln. The special
variables s, p, z, d, and q are realised as functions, and they are used for
entering polynomial matrices. For example, after the input
P = [ s+1 s^2+s-6
s^3 s-12 ]
the MATLAB environment creates and displays the following polynomial
matrix:

P: polynomial matrix: 2 x 2
s + 1 s^2 + s - 6
s^3 s - 12
Moreover, the DirectSDM Toolbox supports operations with quasi-
polynomials (by this term we mean functions having poles only at the
origin) by means of the same class poln. For example, the input
P = [ z+1 z+1+z^-1
z^2-5 1+z^-2 ]
creates and displays the following quasi-polynomial matrix:
P: quasi-polynomial matrix: 2 x 2
z + 1 z + 1 + z^-1
z^2 - 5 1 + z^-2
Real rational matrices are stored and handled as objects of standard classes
of the Control Toolbox describing models of LTI-systems, namely, tf (trans-
fer matrix), zpk (zero-pole-gain form), and ss (state-space description). The
DirectSDM Toolbox redefines the display function for the classes tf and
zpk. Also, some errors in the Control Toolbox (versions up to 5.2) have
been corrected.
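Internally, a polynomial matrix is essentially a coefficient array per entry. Purely as an illustration of the idea in Python (not toolbox code; the storage scheme below is an assumption, not DirectSDM's actual one), here is the example matrix above together with pointwise evaluation, mirroring what polyval does in the toolbox:

```python
# Illustrative sketch: a polynomial matrix stored entry-wise as coefficient
# lists in ascending powers of s, plus pointwise evaluation.
def polyval(coeffs, s):
    # Horner evaluation of c0 + c1*s + c2*s^2 + ... at the point s.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * s + c
    return acc

# P = [ s+1    s^2+s-6 ]
#     [ s^3    s-12    ]
P = [[[1, 1], [-6, 1, 1]],
     [[0, 0, 0, 1], [-12, 1]]]

value = [[polyval(entry, 2.0) for entry in row] for row in P]
# At s = 2: [[3, 0], [8, -10]]
assert value == [[3.0, 0.0], [8.0, -10.0]]
```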

C.3 Operations with Polynomial Matrices

Since the synthesis procedures developed in the present book essentially ex-
ploit models in the form of polynomial matrices and matrix fraction descriptions
(MFD), the DirectSDM Toolbox supports all basic operations with polyno-
mial and quasi-polynomial matrices.
For objects of the poln class, the arithmetic operations (addition, sub-
traction, multiplication, division) as well as concatenation, transposition and
inversion (for square matrices) are overloaded. It should be noticed that all
operands used in binary operations should have the same independent variable
(s, p, z, d, or q), respectively.
Below, a short list of functions for handling polynomial and quasi-
polynomial matrices is given.

Basic properties of polynomial matrices:


coef coefficient at term of given degree
coldeg column degrees
deg matrix degree
det determinant (a polynomial)
eig eigenvalues of square matrix (roots of the determinant)
lcoef leading coefficient
norm norm (Euclidean norm of coecient matrix)
polyval value for given argument value
polyder derivative
rank normal rank
roots roots of determinant (or those of each element)
rowdeg row degrees
trace trace
Simple transformations:
coladd column addition
colchg column interchange
colmul multiplication of column by polynomial
fliplr flip in left/right direction
flipud flip in up/down direction
rowadd row addition
rowchg row interchange
rowmul multiplication of row by polynomial
Special forms:
colherm column Hermite form
colred column-reduced form
echelonl left echelon form
echelonr right echelon form
ltriang lower triangular form
rowherm row Hermite form
rowred row-reduced form
smith canonical Smith form
utriang upper triangular form
Miscellaneous functions:
gcld a greatest left common divisor
gcrd a greatest right common divisor
invuni inversion of unimodular matrix
jfact spectral J-factorisation of Hermitian-conjugate matrix
null null-space basis
pinv pseudoinverse matrix
lfact left spectral factorisation
linv left inverse
rfact right spectral factorisation
sylv block Sylvester coecient matrix

Solution of Diophantine polynomial equations:


daxb equation AX = B
daxbyc equation AX + BY = C
daxybc equation AX + Y B = C
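The equation solved by daxbyc is the polynomial Diophantine equation AX + BY = C. For scalar polynomials a solution can be verified with elementary coefficient arithmetic; in the Python sketch below (helper names trim/add/mul are ad hoc, X and Y were found by inspection; the toolbox computes solutions and handles the matrix case), one instance is checked:

```python
from fractions import Fraction as Fr

# Polynomials as coefficient lists in ascending powers: a(s) = a[0] + a[1]*s + ...
def trim(p):
    # Drop trailing zero coefficients (keep at least the constant term).
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def add(p, q):
    n = max(len(p), len(q))
    return trim([(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
                 for i in range(n)])

def mul(p, q):
    out = [Fr(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return trim(out)

# Solve A*X + B*Y = C for the coprime pair A = s + 1, B = s + 2 and C = 1:
# by inspection X = -1, Y = 1, since -(s + 1) + (s + 2) = 1.
A = [Fr(1), Fr(1)]       # s + 1
B = [Fr(2), Fr(1)]       # s + 2
C = [Fr(1)]              # 1
X, Y = [Fr(-1)], [Fr(1)]
assert add(mul(A, X), mul(B, Y)) == C
```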

C.4 Auxiliary Algorithms


The DirectSDM Toolbox includes a number of auxiliary functions that are
necessary for realising the optimal design procedures described in the book.

Operations with MFD:


bezout solution to Bezout identity
lmfd left-coprime MFD
lmfd2ss state-space model for left MFD
rmfd right-coprime MFD
rmfd2ss state-space model for right MFD
ss2lmfd left MFD for state-space model
ss2rmfd right MFD for state-space model
rmfd2lmfd transformation from right MFD to left MFD
lmfd2rmfd transformation from left MFD to right MFD
Discrete transformations:
ztrm discrete Laplace transform DF(T, ζ, t)
dtfm discrete transfer matrix DFM(T, ζ, t) for plant with ZOH
dtfm2 discrete transform DM'F'FM(T, ζ, 0) for plant with ZOH

C.5 H2 -optimal Controller


C.5.1 Extended Single-loop System

Consider the multivariable single-loop system shown in Fig. C.1. The digi-
tal controller (in the dashed box), composed of a discrete filter with transfer
matrix C(ζ) and a hold circuit with transfer function μ(s), is used for sta-
bilising a continuous-time plant. The control loop includes a plant F(s), an
actuator G(s) and a dynamic negative feedback Q(s). The exogenous distur-
bance w(t) and measurement noise m(t) are modelled as vector stationary
stochastic processes with spectral density matrices Sw(s) = Fw'(-s)Fw(s) and
Sm(s) = Fm'(-s)Fm(s), respectively. The signals driving Fw(s) and Fm(s) are
independent unit centred white noises.
The output signal e(t) denotes the stabilisation error. The controller should
ensure minimal power of the error signal under restrictions imposed on the
control power. The frequency-dependent weighting functions Ve (s) and Vu (s)
are introduced in order to shape the frequency-domain properties of the sys-
tem (for example, to ensure roll-off of the controller frequency response at
high frequencies).

zu (t) 6 (t)
?
Vu (s) Fw (s)

u(t) 6
w(t)
y(t) T y(t) zy (t)
 - C() - (s)
 ?
q - G(s) - e- q - Vy (s) -
F (s)

Q(s)  e
m(t) 6
Fm (s)

(t) 6

Fig. C.1. Single-loop sampled-data system

The following assumptions should hold:


1. The matrices K(s), M (s), and N (s) are strictly proper, and L(s) is at
least proper.
2. The matrix N (s) is irreducible in the sense of Sec. 9.2.
3. The matrices Fw (s), Fm (s), Ve (s) and Vu (s) are stable.
4. The matrices G(s) and Q(s) are free of poles on the imaginary axis.
5. The transfer matrices F (s), G(s) and Q(s) are normal.
6. The sampling period T is non-pathological.
Assumption 1 is necessary for the optimisation problem to be correct (see
Chapter 9). Assumptions 2, 5, and 6 ensure that the assumptions of Chapter 9
hold. In applied problems, Assumption 5 holds almost always. Assumption 3
causes no loss of generality, because for any spectral density having no poles
at the imaginary axis, a stable forming lter can be derived. As was shown
in Chapter 9, when Assumption 4 is violated, a formal application of the
Wiener-Hopf optimisation technique leads to a non-stabilising controller.
The stabilisation quality is estimated by the average variance vz of the
output vector signal
                    z(t) = [ ze(t) ; zu(t) ] ,

which can be found (for centred stochastic processes) as

    J = vz = (1/T) ∫_0^T E{ z^T(t)z(t) } dt = (1/T) ∫_0^T E{ trace z(t)z^T(t) } dt ,    (C.1)

where E{·} denotes the mathematical expectation.


The problem can be formulated as follows: Let the continuous-time ele-
ments, the sampling period T and the hold device μ(s) be given. Find the
transfer matrix of a stabilising digital controller C(ζ) ensuring the minimum
of the cost function (C.1).
It can be shown that this problem is a special case of the general H2 -
optimisation problem for the standard sampled-data system investigated in
Chapter 8. Let x(t) be the stacked vector of the two white noise signals, and
denote by y(t) the vector signal acting upon the sampling unit. Then, the
operator equations of the system take the form

    z = K(s)x + L(s)u ,
    y = M(s)x + N(s)u ,

where the matrices of the associated standard system are:

    K(s) = [ Ve(s)F(s)Fw(s)  0 ; 0  0 ] ,    L(s) = [ Ve(s)F(s)G(s) ; Vu(s) ] ,

    M(s) = [ Q(s)F(s)Fw(s)  Q(s)Fm(s) ] ,    N(s) = Q(s)F(s)G(s) .
Thus, the cost function (C.1) equals the square of the H2 -norm of the above
standard sampled-data system.

C.5.2 Function sdh2

The function sdh2 can be used for the synthesis of H2-optimal controllers for
extended single-loop multivariable systems as described above. Consider, for
example, a simplified model of course stabilisation for a Kazbek-type tanker
[149]:

    F(s) = [ 0.051/((25s + 1)s) ; 0.051/(25s + 1) ] ,    G(s) = 1/(s + 1) ,

    Fw(s) = 1 ,    Fm(s) = 0 ,    Ve(s) = I ,    Vu(s) = 1 ,    T = 1 .

As distinct from the problem considered in [149], the yaw angle and the
rotation rate are both measured, i.e., the controller has 2 inputs and 1 output.
The system shown in Fig. C.1 must be described as a MATLAB structure
as follows:
sys.F = tf({0.051; 0.051},{[25 1 0];[25 1]});
sys.G = tf(1, [1 1]);
sys.Fw = tf(1);
sys.Vu = tf(1);
sys.T = 1;

Mandatory fields of the structure are only sys.F, sys.G, sys.Fw, and sys.T.
If others are not specified, they take the following default values:
sys.Fm = 0;
sys.Ve = eye(n);
sys.Vu = 0;
sys.Q = eye(n);
Here n denotes the number of outputs of the plant F (s) and eye(n) denotes
the identity matrix of the corresponding dimension.
The function call
[C,P] = sdh2 ( sys )
gives the transfer matrix of the (unique) optimal controller C(z) (in the vari-
able z!) and poles of the closed-loop system in the z-plane:
C: zero-pole-gain model 1 x 2

! 0.98247 (z-0.3679) 17.4769 (z-0.3679) !


! ------------------ ------------------ !
! (z-0.3457) (z-0.3457) !

Sampling time: 1

P =
0.9627 + 0.0240i
0.9627 - 0.0240i
0.3679 + 0.0000i
0.3679 - 0.0000i
Since all poles are inside the unit disk, the optimal closed-loop system is
stable.
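A discrete system is stable exactly when all closed-loop poles lie strictly inside the unit disk; the listed poles can be checked directly (a trivial Python sketch):

```python
# Verify that all closed-loop poles returned by sdh2 lie inside the unit disk.
poles = [0.9627 + 0.0240j, 0.9627 - 0.0240j, 0.3679 + 0.0j, 0.3679 - 0.0j]
assert all(abs(p) < 1.0 for p in poles)
print(max(abs(p) for p in poles))  # largest magnitude ~ 0.963
```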
This example is investigated in detail in the demo script demoh2 included
in the DirectSDM Toolbox.

C.6 L2 -optimal Controller


C.6.1 Extended Single-loop System

We consider the extended multivariable single-loop tracking system shown in
Fig. C.2. The control loop includes a plant F(s), a prefilter G(s) and a dynamic
negative feedback Q(s). The input signal x(t) has the Laplace transform X(s).
The digital controller (in the dashed box) consists of a discrete filter with
transfer matrix C(ζ) and a hold device with transfer function μ(s).
The transfer matrices We(s) and Wu(s) define ideal operators reflecting the
requirements on the output and control transients, respectively. The frequency-
dependent weighting matrices Vy(s) and Ve(s) can be used for shaping the
Fig. C.2. Single-loop sampled-data tracking system

frequency properties of the system and of the controller to be designed. Hence-
forth, we assume that We(s) and Wu(s) are free of unstable poles and that all
remaining assumptions made in the H2-optimisation problem hold again.
Introduce the stacked output signal

                    z(t) = [ ze(t) ; zu(t) ] .

The cost function includes the sum of weighted integral quadratic output and
control errors and coincides with the square of the L2-norm of z(t):

    J = ||z(t)||^2 = ∫_0^∞ z^T(t)z(t) dt = ∫_0^∞ [ ze^T(t)ze(t) + zu^T(t)zu(t) ] dt .    (C.2)

The problem is formulated as follows: Let all continuous elements of the sys-
tem, the hold device μ(s) and the sampling interval T be given. Find a sta-
bilising digital controller C(ζ) ensuring the minimum of the cost function (C.2).
It can be shown that the problem under consideration can be viewed as a
special case of the general L2-optimisation problem for the standard sampled-
data system analysed in Chapter 10. Denote the signal acting upon the
sampling unit by y(t). Then the system equations in operator form appear as

    z = K(s)x + L(s)u ,
    y = M(s)x + N(s)u ,

where the matrices of the corresponding standard system have the form

    K(s) = [ Ve(s)We(s)R(s) ; Vu(s)Wu(s)R(s) ] ,    L(s) = [ Ve(s)F(s) ; Vu(s) ] ,

    M(s) = G(s)R(s) ,    N(s) = G(s)Q(s)F(s) .



C.6.2 Function sdl2

The function sdl2 can be used for the synthesis of L2-optimal controllers for
the extended single-loop multivariable system described above. Assume

    F(s) = [ 1/(0.5s + 1) ; 1/((0.5s + 1)s) ] ,    G(s) = 1 ,    Q(s) = I ,
    X(s) = [ 1/s ; 1/s ] ,

    We(s) = [ 0 0 ; 0 1 ] ,    Ve(s) = I ,    Wu(s) = [ 0 0 ] ,    Vu(s) = 0 ,    T = 1 .

The system shown in Fig. C.2 is described as a MATLAB structure as
sys.F = tf( {1; 1}, {[0.5 1]; conv(1,[0.5 1 0])} );
sys.X = tf( {1; 1}, {[1 0]; [1 0]} );
sys.We = tf( {0 0;0 1}, {1 1;1 1} );
sys.T = 1;
Among all the fields, only sys.F, sys.X, sys.We, and sys.T are required. If
the others are not given, they take the following default values:
sys.G = eye(m);
sys.Q = eye(n);
sys.Ve = eye(n);
sys.Wu = 0;
sys.Vu = 0;
Here n denotes the number of outputs of the plant F (s), m is the dimen-
sion of the input signal x(t), and eye() denotes the identity matrix of the
corresponding dimension.
The function call
[C,P] = sdl2 ( sys )
gives the transfer matrix of an optimal controller C(z) (non-unique for multi-
variable systems) and the poles of the optimal closed-loop system in z-plane:
C: zero-pole-gain model 1 x 2

! 0.6891 (z-0.1469) (z-1) 0.21306 (z-0.124) (z+3.833) !


! ------------------------ --------------------------- !
! (z^2 + 0.2842z + 0.869) (z^2 + 0.2842z + 0.869) !

Sampling time: 1

P =
0

0
0.3673
-0.2329
Since all poles are inside the unit disk, the closed-loop system is stable.
This example is investigated in detail in the demo script demol2 included
in the DirectSDM Toolbox.
D
Design of SD Systems with Guaranteed Performance

D.1 Introduction

During the design of control systems, nearly complete information about the
conditions under which the system will operate is usually required for analysis
and synthesis. A typical practical problem consists in investigating the be-
haviour of a system that is disturbed by stochastic external signals. As shown
in the present book, in this case the mean variance of the output can be used
for evaluating the performance of a sampled-data system, and the optimisation
criterion can be a weighted sum of the output variances.
For calculating the mean variance and for applying the optimisation pro-
cedure, the considered methods require the spectral density of the excitation.
However, for the majority of real stochastic processes, even rough information
about the spectral density is not available. For instance, there is no exact an-
swer to the question about the spectrum of sea waves [23, 24, 122]. Therefore,
it is impossible to predict under which conditions the process will evolve.
The lack of rough information about the spectral density has the conse-
quence that the variance of the output signal cannot be calculated; thus we
cannot find the optimal controller. In engineering practice, this situation is
managed in the following way. For the real spectral density of the acting
disturbance, several approximations are built. Then for each approximation,
the analysis or synthesis problem for the optimal system is solved. However,
this way of solution never takes into account the approximation error of
the spectral density. Hence the influence of this error on the performance of
the system cannot be estimated under real excitations. But in practice, the
situation may arise that a prescribed performance of the system must be
guaranteed for any of a set of excitations. In this case, it cannot be predicted
how variations in the parameters of the excitation affect the performance of
the optimal system.

Hence the absence of rough information about the spectral density of the
excitation leads to the following problems:
1. Analyse a system under incomplete information about the external exci-
tation.
2. Design a system that guarantees an upper bound of the performance index
for all excitations of a certain set.
Below, such systems are called systems with guaranteed performance, and
the synthesis procedure is named design for guaranteed performance. The
set of excitations, for which the performance of the system is guaranteed to
stay inside prescribed limits, is called the class of excitations. Taking single-
loop scalar systems as an instance, the present appendix considers methods
for the solution of analysis and design problems for guaranteed performance.
Moreover, the modelling of the classes of stochastic excitations is explained,
which is needed for the definition and solution of the named tasks.
The practical computations are realised with the MATLAB-Toolbox
GarSD, which was particularly developed for the analysis and design of
sampled-data systems with guaranteed performance. The package operates
together with the MATLAB-Toolbox DirectSD [130] and the Toolbox
DirectSDM, which has been presented in Appendix C.

D.2 Design for Guaranteed Performance

D.2.1 System Description

Consider the single-loop scalar sampled-data system with the structure shown
in Fig. D.1. The centred (zero-mean) stationary stochastic excitation g(t) with the

g(t)
T u(t) ? e(t)
 - wd () - (s) - W (s) - f - F (s) -

L(s) 

Fig. D.1. Single-loop scalar sampled-data system

spectral density S_g(s) affects the continuous process with the transfer function
F(s). Furthermore, we have the transfer functions of the actuator W(s), of the
feedback L(s), and of the forming element μ(s). The product W(s)F(s)L(s) is

assumed to be strictly proper, while W(s) and W(s)F(s) are at least proper.
The system is controlled by the digital controller with the transfer function

  w_d(ζ) = ( Σ_{r=0}^{R} b_r ζ^r ) / ( Σ_{r=0}^{R} a_r ζ^r ) = B(ζ)/A(ζ) ,   ζ = e^{sT} ,   (D.1)

where A and B are polynomials with a_0 ≠ 0. The order R of the controller
and the sampling period T are prescribed. The deviation e(t) and the control
signal u(t) appear as outputs of the system. The PTFs from the input g(t) to
the outputs e(t) and u(t) become
 
  w_{ge}(s,t) = F(s) [ 1 − ( L(s) w_d(ζ) φ_{WF}(T,s,t) ) / ( 1 + w_d(ζ) φ_{WFL}(T,s,0) ) ] ,   (D.2)

  w_{gu}(s,t) = − ( F(s) L(s) w_d(ζ) φ_{W}(T,s,t) ) / ( 1 + w_d(ζ) φ_{WFL}(T,s,0) ) ,   (D.3)

where φ_{WF}(T,s,t), φ_{W}(T,s,t), φ_{WFL}(T,s,t) are the corresponding dis-
placed pulse frequency responses (DPFR).
Let Z be the vector containing the design parameters which have
to be determined. The components of the vector Z have to be chosen
in such a way that the transfer function of the controller w_d(ζ) is uniquely
established. In this case, the PTFs (D.2), (D.3) are functions of the vector Z.
These functions will be denoted by

  w_1(s,t,Z) = w_{ge}(s,t) ,   w_2(s,t,Z) = w_{gu}(s,t) .

Then the formulae for calculating the variances of the outputs can be written
in the form

  d_k(t,Z) = (1/π) ∫₀^∞ A_k²(ω,t,Z) S_g(ω) dω ,   k = 1,2 ,   (D.4)

with the magnitude of the parametric frequency response

  A_k(ω,t,Z) = |w_k(s,t,Z)|_{s=iω}

and the spectral density

  S_g(ω) = S_g(s)|_{s=iω} .

The functional

  J(Z) = d̄₁(Z) + ρ² d̄₂(Z)   (D.5)

is used as performance criterion, where ρ is a real weighting coefficient and
d̄₁(Z), d̄₂(Z) are the mean variances, which are determined by

  d̄_k(Z) = (1/T) ∫₀^T d_k(t,Z) dt ,   k = 1,2 .
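For a time-invariant element the parametric frequency response does not depend on t, and (D.4) reduces to an ordinary variance integral. Its structure can be sanity-checked numerically; the first-order element w(s) = 1/(s+1) under unit white noise (S_g ≡ 1) is an illustrative stand-in, not taken from the text, with exact variance 1/2:

```python
import numpy as np

# d = (1/pi) * Integral_0^inf |w(i*omega)|^2 * Sg(omega) d(omega),
# the time-invariant special case of (D.4), evaluated by trapezoidal
# quadrature with a truncated upper limit (cf. the finite limits beta).
omega = np.concatenate([np.linspace(0.0, 50.0, 200_001),
                        np.linspace(50.0, 5000.0, 100_001)])
A2 = 1.0 / (1.0 + omega**2)        # |w(i*omega)|^2 for w(s) = 1/(s+1)
f = A2 * 1.0                       # Sg(omega) = 1 (unit white noise)
d = np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(omega)) / np.pi
print(round(d, 3))                 # exact value is 1/2
```

The truncation error mirrors the β-cutoff argument used below: the integrand decays like 1/ω², so the discarded tail is negligible.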

D.2.2 Problem Statement

Consider the situation where we lack even rough knowledge of the spectral
density S_g(s) and have only the general characterisation that the excitations
belong to a class M_S of stochastic disturbances.
Suppose for certain known parameters Z that there exists an estimate
D̄_k(Z) of the mean variance, built from the generalised characteristics of the
class M_S, such that

  D̄_k(Z) ≥ d̄_k(Z)   (D.6)

is true over the whole set M_S.

Using this estimate, in analogy to (D.5), the functional Ē(Z) can be
written as the weighted sum of the estimates D̄_k(Z):

  Ē(Z) = D̄₁(Z) + ρ² D̄₂(Z) .   (D.7)

The value Ē(Z) corresponds to the maximal possible value of J(Z) in (D.5)
for the system operating with the parameters Z under any excitation from
the class M_S. Per construction, the functional (D.7) does not depend on the
concrete spectral density; its value is determined only by the vector Z of the
system parameters and a general characterisation of the excitations.
Assume that we can find a vector Z_gar such that the functional (D.7)
takes its minimal value, i.e.

  Ē(Z_gar) = min_Z max_{M_S} J(Z) .   (D.8)

The procedure for searching the vector Z_gar is called design for guaranteed
performance.
Let J₀ be the largest value of (D.5) for which the function of the system
is still accepted as successful. Then, if we can prove the inequality

  Ē(Z_gar) ≤ J₀ ,

then with the aid of (D.6)–(D.8) we are able to state that a successful opera-
tion of the system with the parameters Z_gar is guaranteed for any excitation
from the class M_S. There is no disturbance of the class M_S for which the
maximal possible value of (D.5) exceeds the bound J₀.
We consider two modelling variants for the class of random excitations in
problems for guaranteed performance.
In the first variant, the model involves the variance d₀ of the excitation
and the totality of its N moments d_n:

  (1/π) ∫₀^∞ S(ω) ω^{2n} dω = d_n ,   n = 0,1,…,N .   (D.9)

This totality is a generalised characteristic of the spectral density that is
robust against variations [18, 122].
In the second variant, the class M_S is modelled by an envelope spectral
density S_og(ω). The construction makes sure that there exists no frequency ω
at which any element of the class M_S takes a value greater than S_og(ω).

D.2.3 Calculation of Performance Criterion

I. Let the class M_S of the excitations of the system in Fig. D.1 be given by
the set d_n, n = 0,…,N, of (D.9). Suppose always that the transfer function
of the process F(s) and the product F(s)L(s) are strictly proper. In this case,
we obtain for the parametric frequency response A_k(ω,t,Z)

  lim_{ω→∞} A_k(ω,t,Z) = 0 ,   k = 1,2 ,

i.e. the system as a whole reacts to the input g(t) as a low-pass [148]. Besides,
the PFR decreases not slower than 1/ω as ω → ∞. Thus, the integrals (D.4)
converge absolutely, and the infinite limits in (D.4) may be substituted by
finite values β, because

  A_k²(ω,t,Z) ≈ 0 ,   ω > β .

Moreover, the spectral density is supposed to vanish above a known limit
frequency,

  S_g(ω) ≈ 0 ,   ω > β_S ,

where the value β_S is known.
On the basis of these suppositions, the mean variances d̄_k of the signals
e(t) and u(t) for known S_g(ω) may be calculated approximately by the formula

  d̄_k(Z) = (1/π) ∫₀^{β_S} Ā_k(ω,Z) S_g(ω) dω ,

where

  Ā_k(ω,Z) = (1/T) ∫₀^T A_k²(ω,t,Z) dt ,   k = 1,2 .
The estimates D̄_k(Z) of the mean variances d̄_k(Z) are calculated by

  d̄_k(Z) ≤ D̄_k(Z) = Σ_{n=0}^{N} c_{nk}(Z) d_n ,   (D.10)

where c_{nk}(Z) are the coefficients of the polynomials

  C_k(ω,Z) = Σ_{n=0}^{N} c_{nk}(Z) ω^{2n} ,

which are determined by

  Ā_k(ω,Z) ≤ C_k(ω,Z) ,   ω ∈ [0, β_S] ,   (D.11)

and in addition

  C_k(ω,Z) ≈ Ā_k(ω,Z) ,   ω ∈ [0, β_S] .   (D.12)

If (D.10) is satisfied, the functional (D.7) takes the form

  Ē(Z) = Σ_{n=0}^{N} c_{n1}(Z) d_n + ρ² Σ_{n=0}^{N} c_{n2}(Z) d_n .   (D.13)

Due to (D.11), this functional majorises the functional (D.5), and for any
given Z it constitutes an upper bound [155]. Moreover, it does not depend on
the concrete spectral density; its value is determined only by the generalised
characteristic of the class M_S. The coefficients c_{nk}(Z) can be computed by
applying known numerical procedures [121], [155].
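For N = 1 the choice of the coefficients c_{nk} in (D.10)–(D.12) amounts to a small linear program: minimise c₀d₀ + c₁d₁ subject to c₀ + c₁ω² ≥ Ā(ω) on a frequency grid. The sketch below is illustrative only (it is not the procedure of [121], [155]); the frequency response, the moments, and the non-negativity restriction on the c_n are assumptions of the sketch, and the grid constraints only guard the sampled frequencies, so in practice a finer grid or a safety margin would be added:

```python
import numpy as np

# Choose C(w) = c0 + c1*w^2 >= Abar(w) on a grid over [0, beta_S] while
# minimising the guaranteed bound c0*d0 + c1*d1 (cf. (D.10)-(D.12)).
# With two unknowns, the LP optimum lies where two constraints are
# active, so all candidate vertices can simply be enumerated.
beta_S = 3.0
w = np.linspace(0.0, beta_S, 61)
Abar = 0.1 + 2.0 * w**2 / (1.0 + w**4)   # illustrative averaged PFR
d0, d1 = 0.5, 0.2                        # illustrative moments of the class

def feasible(c0, c1):
    # Non-negative coefficients (an assumption of this sketch) keep the
    # bound valid when the moments are only known as upper limits.
    return c0 >= 0 and c1 >= 0 and np.all(c0 + c1 * w**2 >= Abar - 1e-12)

best = (Abar.max(), 0.0)                 # constant majorant, always feasible
for i in range(len(w)):
    for j in range(i + 1, len(w)):
        c1 = (Abar[j] - Abar[i]) / (w[j]**2 - w[i]**2)
        c0 = Abar[i] - c1 * w[i]**2
        if feasible(c0, c1) and c0*d0 + c1*d1 < best[0]*d0 + best[1]*d1:
            best = (c0, c1)
bound = best[0] * d0 + best[1] * d1
print(bound)   # below the naive constant bound Abar.max() * d0
```

The enumeration illustrates why a tilted majorant can pay off: weighting by the moments d_n trades a smaller constant term against a small quadratic term.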

Remark D.1. Practical computations have shown that for arbitrary excita-
tions, the inclusion of moments of order higher than one has only marginal
influence on the estimate of the mean variance. Therefore, in practice the
calculation of two coefficients for each polynomial C_k(ω,Z) is sufficient.

II. When the class M_S is given by the envelope spectral density S_og(ω), then
an estimate of the mean variance can be found more precisely. Let the value
β_S be known for the envelope spectral density, and let the value β be found
for the system with the given vector Z. The following considerations are valid
for β ≤ β_S.
In this case, the functional (D.7) takes the form

  Ē(Z) = Σ_{n=0}^{N} c_{n1}(Z) d_{n1} + ρ² Σ_{n=0}^{N} c_{n2}(Z) d_{n2} ,

where the quantities d_{n1} and d_{n2} are determined by integrals of the shape

  d_{nk} = (1/π) ∫₀^{β} S_og(ω) ω^{2n} dω ,   n = 0,…,N ;  k = 1,2 ,

and the coefficients c_{nk}(Z) are chosen in such a way that Conditions (D.11),
(D.12) are satisfied on the interval [0, β].

D.2.4 Minimisation of the Performance Criterion Estimate for
SD Systems

Consider the search for the vector Z_gar that minimises (D.7) for the sampled-
data system containing the digital controller with the transfer function (D.1).
For this purpose, the application of genetic algorithms is suitable [131], [117].
There are two variants of using genetic algorithms for selecting a con-
troller. Suppose we have to design the sampled-data system of Fig. D.1 for
guaranteed performance. For the discrete transfer function D_{WLF}(T,s,0) of the

open sampled-data system with the elements W (s), L(s), F (s) and (s), the
representation [148]

n() 
DW LF (T, s, 0) =
d()  =esT

takes place, where n() and d() are polynomials.


Investigate the equation

  A(ζ) d(ζ) + B(ζ) n(ζ) = Δ_des(ζ) Δ̃(ζ) ,

where Δ_des(ζ) is a polynomial containing as roots all desired pole positions ζ_i,
and Δ̃(ζ) is a stable polynomial. This equation has to be solved for A(ζ) and
B(ζ), and these polynomials establish the transfer function of a stabilising
controller, which guarantees that the values ζ_i are among the poles of the
closed-loop system.
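Numerically, this polynomial equation can be solved through the associated Sylvester-type linear system. The sketch below is illustrative only (it is not the DirectSD routine); the plant polynomials d(ζ), n(ζ) and the deadbeat-style target polynomial are made-up example data:

```python
import numpy as np

def solve_diophantine(d, n, delta):
    """Solve A*d + B*n = delta for polynomials A, B of degree len(d)-2.

    Polynomials are coefficient arrays, highest power first.  The
    Sylvester-type matrix is invertible when d and n are coprime.
    """
    p = len(d) - 1          # deg d
    m = p                   # number of coefficients in A (and in B)
    N = 2 * p               # unknowns = equations (deg delta <= 2p - 1)
    M = np.zeros((N, N))
    for k in range(m):
        e = np.zeros(m); e[k] = 1.0
        cd = np.convolve(e, d); M[:, k] = np.pad(cd, (N - len(cd), 0))
        cn = np.convolve(e, n); M[:, m + k] = np.pad(cn, (N - len(cn), 0))
    rhs = np.pad(np.asarray(delta, float), (N - len(delta), 0))
    x = np.linalg.solve(M, rhs)
    return x[:m], x[m:]     # coefficients of A and of B

# Illustrative data: d, n of an open-loop discrete model, target zeta^3.
d = np.array([1.0, -1.5, 0.5])     # (zeta - 1)(zeta - 0.5)
n = np.array([1.0, 0.5])           # zeta + 0.5, coprime with d
A, B = solve_diophantine(d, n, [1.0, 0.0, 0.0, 0.0])
closed = np.polyadd(np.polymul(A, d), np.polymul(B, n))
print(np.round(closed, 10))        # recovers the target polynomial
```

The final check multiplies the solution back into the left-hand side, which is exactly the closed-loop characteristic polynomial the controller assigns.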
The first variant then consists in selecting stable poles of the closed-loop
system. Hereby, it is assumed that there exists a special choice Z_gar of the
vector Z, built from the roots of the polynomial Δ̃(ζ), such that the corre-
sponding transfer function of the controller minimises the functional (D.7).
Besides, all roots of the polynomial Δ_des(ζ) are among the poles of the
closed-loop system.
The second variant consists in selecting the coefficients of the controller
directly, where the order R is given. For this, the existence of a stabilising
controller of this order has to be confirmed [14]. The elements of the vector
Z are the searched coefficients a_r (r = 1,…,R) and b_r (r = 0,…,R) of
the controller. The existence of a vector Z_gar is assumed, such that the cor-
responding controller guarantees, in addition to stability, also the minimal
value of the functional (D.7). During the search procedure, the stability of
the system must be assured.
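A bare-bones version of such a search can be sketched as follows; the population sizes, the mutation schedule, and the stand-in cost are illustrative choices, not the GenSD implementation. Candidates whose closed-loop polynomial has roots on or outside the unit circle (ζ = e^{sT} maps stability to |ζ| < 1) are penalised, which keeps the search among stabilising parameter vectors:

```python
import numpy as np

def stable(poly):
    """True when all roots lie strictly inside the unit circle."""
    return bool(np.all(np.abs(np.roots(poly)) < 1.0))

def cost(z):
    # Stand-in for the estimate E(Z): a quadratic with its minimum at a
    # stabilising parameter vector, plus a penalty outside the stability
    # region of the closed-loop polynomial zeta^2 + z0*zeta + z1.
    penalty = 0.0 if stable([1.0, z[0], z[1]]) else 1e3
    return (z[0] - 0.5) ** 2 + (z[1] - 0.06) ** 2 + penalty

rng = np.random.default_rng(1)
pop = rng.uniform(-1.0, 1.0, size=(40, 2))
for gen in range(150):
    order = np.argsort([cost(z) for z in pop])
    elite = pop[order[:10]]                      # keep the best candidates
    sigma = 0.3 * 0.97 ** gen                    # decaying mutation width
    children = elite[rng.integers(0, 10, size=30)] \
               + rng.normal(scale=sigma, size=(30, 2))
    pop = np.vstack([elite, children])
best = min(pop, key=cost)
print(cost(best))   # small; the best candidate is stabilising
```

Preserving the elite makes the best cost monotone, so the loop ratchets toward a stabilising minimiser even though individual mutations are random.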

D.3 MATLAB-Toolbox GarSD

The algorithm derived above was realised in the MATLAB-Toolbox GarSD,
which provides the solution of analysis and design problems for sampled-
data systems with guaranteed performance. The solution of the design prob-
lem needs elements of the theory of polynomial equations realised in the
MATLAB-Toolbox DirectSD, which has to be available [130]. The package
was tested on a PC under Windows XP with MATLAB 6.5.

D.3.1 Structure

The package consists of three modules.

1. The module SPECTRAL contains procedures to investigate the information
about the external excitation and to formulate the initial data that are
needed for the computations.
2. The module GarSD realises procedures for computing the functional (D.7)
for the sampled-data system and for its minimisation.
3. The module GenSD realises numerical minimisation procedures by applying
genetic algorithms.

In addition, the module SEAWAVE might be used. Here, various evaluation
methods are collected that are applicable to data describing the effects of sea
waves on a ship. The module is suitable for the solution of problems for
guaranteed performance, because it contains sea-wave spectra from real
measurements.

D.3.2 Setting Properties of External Excitations

The information about the external excitation is put into the structure
spectral by the command spt as

spectral=spt(<information>);

The variable information in case of a known envelope spectral density S_og(ω)
may have different formats:

1. Transfer function of the form filter (object tf)
2. Numerator and denominator polynomials of the fractional rational spec-
tral density function
3. Vector with numerical values of the spectral density
4. Coefficients of the exponential function of the spectral density.
For instance, the envelope spectral density may be given by S_og(ω) = 0.1/(ω⁴ −
2ω² + 2). This corresponds to a form filter with the transfer function F_fil(s) =
0.32/(s² + 0.91s + 1.41). It is set by either of the two commands

spectral=spt(0.1,[1 0 -2 0 2]);
spectral=spt(tf(0.32,[1 0.91 1.41]));
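The correspondence can be checked numerically: for unit-intensity white noise driving the form filter, the output spectral density is |F_fil(iω)|², which reproduces S_og(ω) up to the rounding of the filter coefficients (about 3 %):

```python
import numpy as np

# Sog(w) = 0.1/(w^4 - 2 w^2 + 2) versus the form-filter density
# |Ffil(i w)|^2 = 0.32^2 / |(i w)^2 + 0.91*i*w + 1.41|^2.
omega = np.linspace(0.0, 10.0, 2001)
Sog = 0.1 / (omega**4 - 2.0 * omega**2 + 2.0)
Sfil = 0.32**2 / ((1.41 - omega**2)**2 + (0.91 * omega)**2)
rel = np.max(np.abs(Sfil - Sog) / Sog)
print(rel)   # a few percent, due to the rounded filter coefficients
```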
The spectral density of the form S_og(ω) = A ω^{−m} exp(−B ω^{−n}) is fed into the
computer by the command

spectral=spt(A,B,m,n);

Moreover, the module SPECTRAL includes procedures that allow designing the
envelope spectral density for a given set of excitations, building the set
of excitations in various practical situations (for instance, in case of three-
dimensional disturbance models or of switching between different modes in
the system), or finding a rational approximation for the envelope spectral
density when several spectra are given by numerical data.
The class of excitations may also be given by the totality of the variance of
the excitation and its moments d₀, d₁, …, together with the limit frequency β_S
of the class of spectral densities. Then the class is defined by the command

spectral=spt(d_i,beta_S);

where d_i is the vector of the excitation variance and its moments.
Moreover, the module SPECTRAL contains procedures for testing whether
a given set of data can serve as such variances and moments.

D.3.3 Investigation of SD Systems

System description The toolbox is dedicated to work with sampled-data
systems with a structure as in Fig. D.1. As forming element, a zero-order hold
is used.
If we have to solve an analysis problem, i.e. when the transfer function
of the controller is known, then the structure of the system is fed in by the
command

system=sys(F,W,L,wd);

where the variables F, W, L, wd are objects of the class tf according to the
elements of the system. If the controller is unknown, then the structure of
the system is set by the command

system=sys(F,W,L,T);

where T is the sampling period (variable of type double). The structure
system is used inside the toolbox for the solution of analysis and design
problems.

Estimation of guaranteed performance Suppose a system as in Fig. D.1,
where all elements are known and stored in the structures system and
spectral. The toolbox contains macros which compute in this case an
estimate of the variance (D.6) for the time instants in the interval [0,T]:

[t,D]=aprsys(num,system,spectral,step);

where step is the time step width for estimating the time-varying
variance (the value of the variable step must not exceed the value of the
sampling period) and num is the identifier of the system output to be analysed.
For the output e(t), we take num=12, and num=22 for the output u(t). As a
result, the vectors t, D of equal size will be generated, which contain the
time instants of the estimated variance of the output and the values of this
estimate itself.
The value of the mean variance is estimated with the help of the command

[t,D]=aprsys(num,system,spectral);

As a result, the variable t takes the value of the empty matrix, and the variable
D contains the estimate of the mean variance.
The way the estimate is computed depends on the type of information
about the excitation. Depending on the type of system, the toolbox realises
two algorithms. One of them requires only negligible computation time, but
does not allow investigating systems with multiple or nearly multiple poles.

The other one is free of these restrictions, but needs more computation time.
For realising the latter algorithm, some macros of the toolbox DirectSD were
employed. The selection of the algorithm happens automatically.
Moreover, the toolbox GarSD contains several procedures for testing the
stability of sampled-data systems, for computing the poles or the oscillation
index, for constructing the PFR, and for determining the transfer functions
of stabilising controllers and of controllers with certain assigned poles.

Minimising the performance index estimate Suppose for the system in
Fig. D.1 that the transfer functions of all elements except the controller are
known and stored in the structure system. Design a system with guaranteed
performance over the class M_S, which is given by the structure spectral.
The solution of the minimisation problem for the estimate of the perfor-
mance index is provided by a genetic algorithm [117]. It is realised in the
macro regelgarsys, which accesses the structures system and spectral:

[system_gar,reg_gar,D_e,D_u,E]
   =regelgarsys(type,deg,T,system,spectral,rho,num);

The parameters in the macro regelgarsys have the following meanings: The
parameter type contains the type of the optimisation and can adopt the val-
ues type='sta' or type='all' according to the first or second variant for
selecting the elements of the vector Z. The parameter deg contains the order
of the searched controller; for the optimisation type 'sta', this parameter
may hold the value deg='min', which corresponds to the smallest degree of
stabilising controllers [148]. The parameter T contains the sampling period
of the wanted digital controller, and the parameter rho is the weighting co-
efficient ρ in the functional (D.7). The parameter num fixes the number of
iterations of the genetic algorithm; it is usually selected between 30 and 100.
The command supplies as results: the structure system_gar of the system
with guaranteed performance, the transfer function of the digital controller
reg_gar as object of class tf, the estimates D_e, D_u of the output variances
of the system with guaranteed performance over the class M_S, which is given
by the structure spectral, as well as the value E of the functional (D.7).

Applicative example Consider the design problem for a controller with
guaranteed performance for the system in [149], where the course of the tanker
Kasbek has to be kept under the condition of continuous excitation. The
behaviour of the ship and the rudder plant is described by

  F(s) = α / ( s (1 + β s) ) ,   W(s) = 1 ,

with the values α = 0.051 sec⁻¹, β = 25 sec. The transfer functions of the
process, the actuator and the feedback are given by

alpha=0.051; beta=25;
F=tf(alpha/beta,[1 1/beta 0]);
W=tf(1); L=tf(1);

and this information is collected in the structure of the system with the sam-
pling period 1 sec by the command

system=sys(F,W,L,1);

Suppose the class of excitations M_S is given by the envelope spectral
density

  S_og(ω) = 0.0757 / ( ω⁴ − 2.489 ω² + 1.848 ) .   (D.14)
The structure of this class of excitations is generated by the command

spectral=spt(0.0757,[1 0 -2.489 0 1.848]);

For the weighting coefficient ρ = 0.1, a system with guaranteed performance
should be designed, where the sampling period is T = 1 sec. As number of
iterations for the genetic algorithm, we choose num=50. For the selection of
the minimal controller order according to the first variant, the command

[system_gar,reg_gar,D_e,D_u,E]=
   regelgarsys('sta','min',1,system,spectral,0.1,50);

is used. As a result, we obtain the transfer function of the controller reg_gar
in the form

  w_d1(z) = ( 37.82 z − 34.99 ) / ( z − 0.535 ) .

For any excitation of the class M_S, the system with this controller guarantees
values of the mean variances d̄_e ≤ D̄_e = 0.000066 and d̄_u ≤ D̄_u = 0.0105.
Besides, the value of the functional (D.7) is estimated by Ē = 0.00017.
Applying the second variant of controller design, the controller order 1 can,
for instance, be chosen (the existence of a stabilising controller of first order
for the given system was proven before):

[system_gar,reg_gar,D_e,D_u,E]=
   regelgarsys('all',1,1,system,spectral,0.1,50);

The macro supplies the transfer function of the controller reg_gar:

  w_d2(z) = ( 66.86 z − 61.45 ) / ( z − 0.14 ) .

For any excitation of the class M_S, the system with this controller guarantees
values of the mean variances d̄_e ≤ D̄_e = 0.000061 and d̄_u ≤ D̄_u = 0.0173.
Besides, the value of the functional (D.7) is estimated by Ē = 0.00023.
The envelope property of the spectral density obviously ensures that, for
any excitation of the class M_S, the values of the mean variances of the
signals u(t) and e(t) in the systems with the controllers w_d1(z) or w_d2(z)
will not exceed the values of the obtained estimates.

Now, let us assume that the class M_S is given by the set of variances

  d₀ = 0.05807 ,   d₁ = 0.07895   (D.15)

and the limit frequency β_S = 3.04. This new class involves the spectrum (D.14).
The structure of excitations is generated by

spectral=spt([0.05807 0.07895],3.04);
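The values (D.15) are consistent with (D.14): evaluating the moment integrals (D.9) for S_og numerically (plain trapezoidal quadrature with a truncated upper limit, an approximation of the infinite integral) reproduces d₀ and d₁:

```python
import numpy as np

# d_n = (1/pi) * Integral_0^inf Sog(w) * w^(2n) dw  for (D.14)
omega = np.concatenate([np.linspace(0.0, 50.0, 400_001),
                        np.linspace(50.0, 5000.0, 200_001)])
Sog = 0.0757 / (omega**4 - 2.489 * omega**2 + 1.848)

def moment(n):
    f = Sog * omega**(2 * n)
    return np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(omega)) / np.pi

print(round(moment(0), 5), round(moment(1), 5))   # ~0.05807, ~0.07895
```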
Let us take the first variant for the controller design:

[system_gar,reg_gar,D_e,D_u,E]=
   regelgarsys('sta','min',1,system,spectral,0.1,50);

As a result, we obtain the transfer function of the controller reg_gar:

  w_d3(z) = ( 367.8 z − 333.8 ) / ( z + 0.3107 ) .

For any excitation of the class M_S, the system with this controller guarantees
values of the mean variances d̄_e ≤ D̄_e = 0.000056 and d̄_u ≤ D̄_u = 0.0597 as
well as the value Ē = 0.00065 in (D.7).

Finally, it remains to design the controller, for instance of first order, for
the totality of variances by the second variant. The existence of a stabilising
controller of first order was proven above. The command

[system_gar,reg_gar,D_e,D_u,E]=
   regelgarsys('all',1,1,system,spectral,0.1,50);

supplies the transfer function of the controller reg_gar:

  w_d4(z) = ( 1.87 z − 0.24 ) / ( z + 0.83 ) .

For any excitation of the class M_S, the system with this controller guarantees
values of the mean variances d̄_e ≤ D̄_e = 0.019 and d̄_u ≤ D̄_u = 0.050 as well
as Ē = 0.019 in (D.7).
Now we investigate the behaviour of the system with the controllers w_d3(z)
and w_d4(z) under various excitations of the class M_S for the given set (D.15).
For certain spectral densities of the class M_S having the form

  S(ω) = a₁ / ( ω⁴ + a₂ ω² + a₃ ) ,

the values of the coefficients a₁, a₂, a₃ are listed in Table D.1. In the same
table, the exact values of the mean variances of the signals e(t) and u(t) are
given for the controllers w_d3(z) and w_d4(z), respectively.
Table D.1 exemplifies that the values of the mean variances of the signals
e(t) and u(t) for all considered excitations do not exceed the values of the
calculated estimates.

Table D.1. Variances of the output of the system with guaranteed performance for
various excitations from the class M_S

Spectrum    a₁      a₂      a₃      d̄_e(w_d3)    d̄_u(w_d3)   d̄_e(w_d4)   d̄_u(w_d4)
   1       0.154   1.768   1.847   6.58·10⁻⁶    0.043       0.0023      0.0018
   2       0.265  −0.118   1.847   1.06·10⁻⁵    0.048       0.0041      0.0032
   3       0.225   0.672   1.847   9.23·10⁻⁶    0.047       0.00349     0.0027
References

1. J. Ackermann. Entwurf durch Polvorgabe. Regelungstechnik, 25:173–179, 209–
215, 1977.
2. J. Ackermann. Sampled-Data Control Systems: Analysis and Synthesis, Robust
System Design. Springer-Verlag, Berlin, 1985.
3. J. Ackermann. Abtastregelung. Springer-Verlag, Berlin, 3rd edition, 1988.
4. A.G. Alexandrov and Y.F. Orlov. Finite frequency identification of multivari-
able objects. In Proc. 2nd Russian-Swedish Control Conf. (RSCC95), pages
66–69, Saint Petersburg, Russia, 1995.
5. F.A. Aliev, V.B. Larin, K.I. Naumenko, and V.I. Suncev. Optimization of linear
control systems: Analytical methods and computational algorithms. Gordon &
Breach, Buffalo, 1998.
6. F.A. Aliev, V.B. Larin, K.I. Naumenko, and V.I. Suntsev. Optimization of lin-
ear time-invariant control systems. Naukova Dumka, Kiev, 1978. (in Russian).
7. B.D.O. Anderson. Controller design: Moving from theory to practice. IEEE
Control Systems, 13(4):16–24, 1993.
8. B.D.O. Anderson and J.B. Moore. Optimal Filtering. Prentice-Hall, Englewood
Cliffs, NJ, 1979.
9. M. Araki, T. Hagiwara, and Y. Ito. Frequency response of sampled-data
systems II. Closed-loop considerations. In Proc. 12th IFAC Triennial World
Congr., volume 7, pages 293–296, Sydney, 1993.
10. M. Araki and Y. Ito. Frequency response of sampled-data systems I. Open-loop
considerations. In Proc. 12th IFAC Triennial World Congr., volume 7, pages
289–292, Sydney, 1993.
11. K.J. Åström. Introduction to stochastic control theory. Academic Press, NY,
1970.
12. K.J. Åström, P. Hagander, and J. Sternby. Zeros of sampled-data systems.
Automatica, 20(4):31–38, 1984.
13. K.J. Åström and B. Wittenmark. Computer controlled systems: Theory and
design. Prentice-Hall, Englewood Cliffs, NJ, 1984.
14. K.J. Åström and B. Wittenmark. Computer Controlled Systems: Theory and
Design. Prentice-Hall, Englewood Cliffs, NJ, 3rd edition, 1997.
15. B.A. Bamieh and J.B. Pearson. A general framework for linear periodic systems
with applications to H∞ sampled-data control. IEEE Trans. Autom. Contr.,
AC-37(4):418–435, 1992.

16. B.A. Bamieh and J.B. Pearson. The H2 problem for sampled-data systems.
Syst. Contr. Lett., 19(1):1–12, 1992.
17. B.A. Bamieh, J.B. Pearson, B.A. Francis, and A. Tannenbaum. A lifting tech-
nique for linear periodic systems with applications to sampled-data control
systems. Syst. Contr. Lett., 17:79–88, 1991.
18. V.A. Besekerskii and A.V. Nebylov. Robust systems in automatic control.
Nauka, Moscow, 1983. (in Russian).
19. M.J. Blachuta. Contributions to the theory of discrete-time control for
continuous-time systems. Habilitation thesis, Silesian Techn. University, Gli-
wice, Poland, 1999.
20. M.J. Blachuta. Discrete-time modeling of sampled-data control systems with
direct feedthrough. IEEE Trans. Autom. Contr., 44(1):134–139, 1999.
21. Ch. Blanch. Sur les équations différentielles linéaires à coefficients lentement
variables. Bull. technique de la Suisse romande, 74:182–189, 1948.
22. S. Bochner. Lectures on Fourier Integrals. University Press, Princeton, NJ,
1959.
23. I. Boroday, V. Mohrenschildt, et al. Behavior of ships in ocean waves. Su-
dostroyenie, Leningrad, 1969.
24. I.K. Boroday and V.V. Nezetaev. Application problems of dynamics for ships
on waves. Sudostroyenie, Leningrad, 1989.
25. G.D. Brown, M.G. Grimble, and D. Biss. A simple efficient H∞ controller
algorithm. In Proc. 26th IEEE Conf. Decision Contr., Los Angeles, 1987.
26. B.W. Bulgakov. Schwingungen. GITTL, Moscow, 1954. (in Russian).
27. F.M. Callier and C.A. Desoer. Linear system theory. Springer-Verlag, New
York, 1991.
28. M. Cantoni. Algebraic characterization of the H∞ and H2 norms for linear
continuous-time periodic systems. In Proc. 4th Asian Control Conference,
pages 1945–1950, Singapore, 2002.
29. S.S.L. Chang. Synthesis of optimum control systems. McGraw Hill, New York,
Toronto, London, 1961.
30. T. Chen and B.A. Francis. Optimal sampled-data control systems. Springer-
Verlag, Berlin, Heidelberg, New York, 1995.
31. T.A.C.M. Claasen and W.F.G. Mecklenbräuker. On stationary linear time-
varying systems. IEEE Trans. Circuits and Systems, CAS-29(2):169–184, 1982.
32. P. Colaneri. Continuous-time periodic systems in H2 and H∞. Part I: The-
oretical aspects; Part II: State feedback control. Kybernetika, 36(3):211–242;
329–350, 2000.
33. R.E. Crochiere and L.R. Rabiner. Multirate digital signal processing. Prentice-
Hall, Englewood Cliffs, NJ, 1983.
34. L. Dai. Singular control systems. Lecture Notes in Control and Information
Sciences. Springer-Verlag, New York, 1989.
35. J.A. Daletskii and M.G. Krein. Stability of solutions of differential equations
in Banach space. Nauka, Moscow, 1970. (in Russian).
36. R. D'Andrea. Software for modeling, analysis, and control design for multidi-
mensional systems. In Proc. IEEE Symp. on Computer Aided Control System
Design (CACSD99), pages 24–27, Kohala Coast, Island of Hawaii, Hawaii,
USA, 1999.
37. C.E. de Souza and G.C. Goodwin. Intersample variance in discrete minimum
variance control. IEEE Trans. Autom. Contr., AC-29:759–761, 1984.

38. B.W. Dickinson. Systems – Analysis, Design and Computation. Prentice Hall,
Englewood Cliffs, NJ, 1991.
39. G. Doetsch. Anleitung zum praktischen Gebrauch der Laplace-Transformation
und z-Transformation. Oldenbourg, München, Wien, 1967.
40. R.C. Dorf and R.H. Bishop. Modern control systems. Pearson Prentice Hall,
Upper Saddle River, NJ, tenth edition, 2001.
41. J.C. Doyle. Guaranteed margins for LQG regulators. IEEE Trans. Autom.
Contr., AC-23(8):756–757, 1978.
42. J.C. Doyle, B.A. Francis, and A.R. Tannenbaum. Feedback control theory.
Macmillan, New York, 1992.
43. S. Engell. Lineare optimale Regelung. Springer-Verlag, Berlin, 1988.
44. D.K. Faddeev and V.N. Faddeeva. Numerische Methoden der linearen Algebra.
Oldenbourg, München, 1979. (with L. Bittner).
45. A. Feuer and G.C. Goodwin. Generalised sample and hold functions – frequency
domain analysis of robustness, sensitivity and intersampling difficulties. IEEE
Trans. Autom. Contr., AC-39(5):1042–1047, 1994.
46. N. Fliege. Multiraten-Signalverarbeitung. B.G. Teubner, Stuttgart, 1993.
47. V.N. Fomin. Control methods for discrete multidimensional processes. Univer-
sity press, Leningrad, 1985. (in Russian).
48. V.N. Fomin. Regelungsverfahren für diskrete Mehrgrößenprozesse. Verlag der
Universität, Leningrad, 1985. (in Russian).
49. G.F. Franklin, J.D. Powell, and A. Emami-Naeini. Feedback Control of Dy-
namic Systems. Prentice Hall, Upper Saddle River, NJ 07458, 4th edition, 2002.
50. G.F. Franklin, J.D. Powell, and H.L. Workman. Digital control of dynamic
systems. Addison Wesley, New York, 1990.
51. F.R. Gantmacher. The theory of matrices. Chelsea, New York, 1959.
52. E.G. Gilbert. Controllability and observability in multivariable control sys-
tems. SIAM J. Control, A(1):128–151, 1963.
53. G.C. Goodwin, S.F. Graebe, and M.E. Salgado. Control system design.
Prentice-Hall, Upper Saddle River, NJ 07458, 2001.
54. G.C. Goodwin and M. Salgado. Frequency domain sensitivity functions for
continuous-time systems under sampled-data control. Automatica, 30(8):1263–
1270, 1994.
55. M.J. Grimble. Robust Industrial Control: Optimal Design Approach for Poly-
nomial Systems. International Series in Systems and Control Engineering.
Prentice Hall International (UK) Ltd, Hemel Hempstead, Hertfordshire, 1994.
56. M.J. Grimble and V. Kučera, editors. Polynomial methods for control systems
design. Springer-Verlag, London, 1996.
57. M. Günther. Kontinuierliche und zeitdiskrete Regelungen. B.G. Teubner,
Stuttgart, 1997.
58. T. Hagiwara and M. Araki. FR-operator approach to the H2-analysis and syn-
thesis of sampled-data systems. IEEE Trans. Autom. Contr., AC-40(8):1411–
1421, 1995.
59. V. Hahn. Direkte adaptive Regelstrategien für die diskrete Regelung von
Mehrgrößensystemen. PhD thesis, University of Bochum, 1983.
60. M.E. Halpern. Preview tracking for discrete-time SISO systems. IEEE Trans.
Autom. Contr., AC-39(3):589–592, 1994.
61. S. Hara, H. Fujioka, and P.T. Kabamba. A hybrid state-space approach to
sampled-data feedback control. Linear Algebra and Its Applications, 205–
206:675–712, 1994.

62. U.K. Herne. Methoden zur rechnergestützten Analyse und Synthese von Mehrgrößenregelsystemen in Polynommatrizendarstellung. PhD thesis, University of Bochum, 1988.
63. R. Isermann. Digitale Regelungssysteme. Band I: Grundlagen, deterministische Regelungen. Band II: Stochastische Regelungen, Mehrgrößenregelungen, Adaptive Regelungen, Anwendungen. Springer-Verlag, Berlin, 2nd edition, 1987.
64. M.A. Jevgrafov. Analytische Funktionen. Nauka, Moscow, 1965. (in Russian).
65. G. Jorke, B.P. Lampe, and N. Wengel. Arithmetische Algorithmen der Mikrorechentechnik. Verlag Technik, Berlin, 1989.
66. E.I. Jury. Sampled-data control systems. John Wiley, New York, 1958.
67. P.T. Kabamba and S. Hara. Worst-case analysis and design of sampled-data control systems. IEEE Trans. Autom. Contr., AC-38(9):1337–1358, 1993.
68. T. Kaczorek. Linear control systems, volume II: Synthesis of multivariable systems. J. Wiley, New York, 1993.
69. T. Kailath. Linear Systems. Prentice Hall, Englewood Cliffs, NJ, 1980.
70. R. Kalman and J.E. Bertram. A unified approach to the theory of sampling systems. J. Franklin Inst., 267:405–436, 1959.
71. R. Kalman, Y.C. Ho, and K. Narendra. Controllability of linear dynamical systems. Contributions to the Theory of Differential Equations, 1:189–213, 1963.
72. R.E. Kalman. Mathematical description of linear dynamical systems. SIAM J. Control, A(1):152–192, 1963.
73. S. Karlin. A first course in stochastic processes. Academic Press, New York, 1966.
74. V.J. Katkovnik and R.A. Polucektov. Discrete multidimensional control. Nauka, Moscow, 1966. (in Russian).
75. J.P. Keller and B.D.O. Anderson. H∞-Optimierung abgetasteter Regelsysteme. Automatisierungstechnik, 40(4):114–123, 1993.
76. U. Keuchel. Methoden zur rechnergestützten Analyse und Synthese von Mehrgrößensystemen in Polynommatrizendarstellung. PhD thesis, University of Bochum, 1988.
77. P.P. Khargonekar and N. Sivarshankar. H2-optimal control for sampled-data systems. Systems & Control Letters, 18:627–631, 1992.
78. U. Korn and H.-H. Wilfert. Mehrgrößenregelungen. Verlag Technik, Berlin, 1982.
79. V. Kucera. Discrete Linear Control. The Polynomial Approach. Academia, Prague, 1979.
80. V. Kucera. Analysis and Design of Discrete Linear Control Systems. Prentice Hall, London, 1991.
81. B.C. Kuo and D.W. Peterson. Optimal discretization of continuous-data control systems. Automatica, 9(1):125–129, 1973.
82. H. Kwakernaak. Minimax frequency domain performance and robustness optimisation of linear feedback systems. IEEE Trans. Autom. Contr., AC-30(10):994–1004, 1985.
83. H. Kwakernaak. The polynomial approach to H∞ regulation. In E. Mosca and L. Pandolfi, editors, H∞ control theory, volume 1496 of Lecture Notes in Mathematics, pages 141–221. Springer-Verlag, London, 1990.
84. H. Kwakernaak and R. Sivan. Linear Optimal Control Systems. Wiley-Interscience, New York, 1972.
85. S. Lall and C. Beck. Model reduction of complex systems in the linear-fractional framework. In Proc. IEEE Int. Symp. on Computer Aided Control System Design (CACSD'99), pages 34–39, Kohala Coast, Island of Hawaii, Hawaii, USA, 1999.
86. B.P. Lampe. Strukturelle Instabilität in linearen Systemen - Frequenzgangsmethoden auf dem Prüfstand der Mathematik. In Mitteilungen der Mathematischen Gesellschaft in Hamburg, volume XVIII, pages 9–26, Hamburg, Germany, 1999.
87. B.P. Lampe, G. Jorke, and N. Wengel. Algorithmen der Mikrorechentechnik. Verlag Technik, Berlin, 1984.
88. B.P. Lampe, M.A. Obraztsov, and E.N. Rosenwasser. H2-norm computation for stable linear continuous-time periodic systems. Archives of Control Sciences, 14(2):147–160, 2004.
89. B.P. Lampe, M.A. Obraztsov, and E.N. Rosenwasser. Statistical analysis of stable FDLCP systems by parametric transfer matrices. Int. J. Control, 78(10):747–761, Jul 2005.
90. B.P. Lampe and U. Richter. Digital controller design by parametric transfer functions - comparison with other methods. In Proc. 3rd Int. Symp. Methods Models Autom. Robotics, volume 1, pages 325–328, Miedzyzdroje, Poland, 1996.
91. B.P. Lampe and U. Richter. Experimental investigation of parametric frequency response. In Proc. 4th Int. Symp. Methods Models Autom. Robotics, pages 341–344, Miedzyzdroje, Poland, 1997.
92. B.P. Lampe and E.N. Rosenwasser. Design of hybrid analog-digital systems by parametric transfer functions. In Proc. 32nd CDC, pages 3897–3898, San Antonio, TX, 1993.
93. B.P. Lampe and E.N. Rosenwasser. Application of parametric frequency response to identification of sampled-data systems. In Proc. 2nd Int. Symp. Methods Models Autom. Robotics, volume 1, pages 295–298, Miedzyzdroje, Poland, 1995.
94. B.P. Lampe and E.N. Rosenwasser. Best digital approximation of continuous controllers and filters in H2. In Proc. 41st KoREMA, volume 2, pages 65–69, Opatija, Croatia, 1996.
95. B.P. Lampe and E.N. Rosenwasser. Best digital approximation of continuous controllers and filters in H2. AUTOMATIKA, 38(3–4):123–127, 1997.
96. B.P. Lampe and E.N. Rosenwasser. Parametric transfer functions for sampled-data systems with time-delayed controllers. In Proc. 36th IEEE Conf. Decision Contr., pages 1609–1614, San Diego, CA, 1997.
97. B.P. Lampe and E.N. Rosenwasser. Sampled-data systems: The L2 induced operator norm. In Proc. 4th Int. Symp. Methods Models Autom. Robotics, pages 205–207, Miedzyzdroje, Poland, 1997.
98. B.P. Lampe and E.N. Rosenwasser. Statistical analysis and H2-norm of finite dimensional linear time-periodic systems. In Proc. IFAC Workshop on Periodic Control Systems, pages 9–14, Como, Italy, Aug. 2001.
99. B.P. Lampe and E.N. Rosenwasser. Forward and backward models for anomalous linear discrete-time systems. In Proc. 9th IEEE Symp. Methods Models Autom. Robotics, pages 369–373, Miedzyzdroje, Poland, Aug 2003.
100. B.P. Lampe and E.N. Rosenwasser. Operational description and statistical analysis of linear periodic systems on the unbounded interval −∞ < t < ∞. European J. Control, 9(5):508–521, 2003.
101. B.P. Lampe and E.N. Rosenwasser. Closed formulae for the L2-norm of linear continuous-time periodic systems. In Proc. IFAC Workshop on Periodic Control Systems, pages 231–236, Yokohama, Japan, Sep 2004.
102. B.P. Lampe and E.N. Rosenwasser. Unterordnung und Dominanz rationaler Matrizen. Automatisierungstechnik, 53(9):434–444, 2005.
103. F.H. Lange. Signale und Systeme, volume 1–3. Verlag Technik, Berlin, 1971.
104. V.B. Larin, K.I. Naumenko, and V.N. Suntsov. Spectral methods for design of linear systems with feedback. Naukova Dumka, Kiev, 1971. (in Russian).
105. B. Lennartson and T. Söderström. Investigation of the intersample variance in sampled-data control. Int. J. Control, 50:1587–1602, 1989.
106. B. Lennartson, T. Söderström, and Sun Zeng-Qi. Intersample behavior as measured by continuous-time quadratic criteria. Int. J. Control, 49:2077–2083, 1989.
107. O. Lingärde and B. Lennartson. Frequency analysis for continuous-time systems under multirate sampled-data control. In Proc. 13th IFAC Triennial World Congr., volume 2a10, 5, pages 349–354, San Francisco, USA, 1996.
108. L. Ljung. System Identification - Theory for the User. Prentice-Hall, Englewood Cliffs, NJ, 1987.
109. D.G. Luenberger. Dynamic equations in descriptor form. IEEE Trans. Autom. Contr., AC-22(3):312–321, 1977.
110. J. Lunze. Robust multivariable feedback control. Akademie-Verlag, Berlin, 1988.
111. J. Lunze. Regelungstechnik 2 - Mehrgrößensysteme, Digitale Regelung. Springer-Verlag, Berlin, Heidelberg, 1997.
112. N.N. Lusin. Matrix theory for studying differential equations. Avtomatika i Telemechanika, 5:466, 1940. (in Russian).
113. N.N. Lusin. Matrizentheorie zum Studium von Differentialgleichungen. Avtomatika i Telemechanika, 5:466, 1940.
114. J.M. Maciejowski. Multivariable feedback design. Addison-Wesley, Wokingham, England, 1989.
115. J.M. Maciejowski. Predictive control - with constraints. Pearson Education Lim., Harlow, England, 2002.
116. A.G. Madievski and B.D.O. Anderson. A lifting technique for sampled-data controller reduction for closed-loop transfer function consideration. In Proc. 32nd IEEE Conf. Decision Contr., pages 2929–2930, San Antonio, TX, 1993.
117. K.F. Man, K.S. Tang, and S. Kwong. Genetic algorithms. Springer-Verlag, London, Berlin, Heidelberg, 1999.
118. S.G. Michlin. Vorlesungen über lineare Integralgleichungen. Dt. Verlag d. Wissenschaften, Berlin, 1962.
119. B.C. Moore. Principal component analysis in linear systems: Controllability, observability and model reduction. IEEE Trans. Autom. Contr., AC-26(1):17–32, 1981.
120. R. Müller. Entwurf von Mehrgrößenreglern durch Frequenzgang-Approximation. PhD thesis, University of Dortmund, 1996.
121. A.V. Nebylov. Warranting of accuracy of control. Nauka, Moscow, 1998. (in Russian).
122. A.V. Nebylov. Measuring parameters of a plane near the sea surface. Saint Petersburg State University Academic Press, St. Petersburg, 2000. (in Russian).
123. K. Ogata. Modern control engineering. Prentice-Hall, Upper Saddle River, NJ, 2002.
124. V.G. Pak and V.N. Fomin. Linear quadratic optimal control problem under known disturbance I. Abstract linear quadratic problem under known disturbance. Preprint VINITI, N2063-B97, St. Petersburg, 1997. (in Russian).
125. K. Parks and J.J. Bongiorno. Modern Wiener-Hopf design of optimal controllers - Part II: The multivariable case. IEEE Trans. Autom. Contr., AC-34(6):619–626, 1989.
126. R.V. Patel. Computation of minimal-order state-space realisations and observability indices using orthonormal transformations. In R.V. Patel, A.J. Laub, and P.M. Van Dooren, editors, Numerical linear algebra techniques for systems and control, pages 195–212. IEEE Press, New York, 1994.
127. T.P. Perry, G.M.H. Leung, and B.A. Francis. Performance analysis of sampled-data control systems. Automatica, 27(4):699–704, 1991.
128. U. Petersohn, H. Unger, and W. Wardenga. Beschreibung von Multirate-Systemen mittels Matrixkalkül. AEÜ, 48(1):34–41, 1994.
129. J.P. Petrov. Design of optimal control systems under incompletely known input disturbances. University press, Leningrad, 1987. (in Russian).
130. K.Y. Polyakov, E.N. Rosenwasser, and B.P. Lampe. DirectSD - a toolbox for direct design of sampled-data systems. In Proc. IEEE Intern. Symp. CACSD'99, pages 357–362, Kohala Coast, Island of Hawaii, Hawaii, USA, 1999.
131. K.Y. Polyakov, E.N. Rosenwasser, and B.P. Lampe. Quasipolynomial low-order digital controller design using genetic algorithms. In Proc. 9th IEEE Mediterranean Conf. on Control and Automation, paper WM1-B5, Dubrovnik, Croatia, June 2001.
132. K.Y. Polyakov, E.N. Rosenwasser, and B.P. Lampe. DirectSDM - a toolbox for polynomial design of multivariable sampled-data systems. In Proc. IEEE Int. Symp. Computer Aided Control Systems Design, pages 95–100, Taipei, Taiwan, Sep 2004.
133. V.M. Popov. Hyperstability of control systems. Springer-Verlag, Berlin, 1973.
134. I.I. Priwalow. Einführung in die Funktionentheorie. 3. Aufl., B.G. Teubner, Leipzig, 1967.
135. R. Rabenstein. Diskrete Simulation linearer mehrdimensionaler Systeme. PhD thesis, University of Erlangen-Nürnberg, 1991.
136. J.R. Ragazzini and G.F. Franklin. Sampled-data control systems. McGraw-Hill, New York, 1958.
137. J.R. Ragazzini and L.A. Zadeh. The analysis of sampled-data systems. AIEE Trans., 71:225–234, 1952.
138. J. Raisch. Mehrgrößenregelung im Frequenzbereich. R. Oldenbourg Verlag, München, 1994.
139. K.S. Rattan. Digitalization of existing control systems. IEEE Trans. Autom. Contr., AC-29:282–285, 1984.
140. K.S. Rattan. Compensating for computational delay in digital equivalent of continuous control systems. IEEE Trans. Autom. Contr., AC-34:895–899, 1989.
141. K. Reinschke. Lineare Regelungs- und Steuerungstheorie. Springer-Verlag, Berlin, 2006.
142. G. Roppenecker. Vollständige modale Synthese linearer Systeme und ihre Anwendung zum Entwurf strukturbeschränkter Zustandsrückführungen. Number 59 in Fortschr.-Ber. VDI-Z., Reihe 8. VDI-Verlag, Düsseldorf, 1983.
143. E.N. Rosenwasser. Lyapunov-Indizes in der linearen Regelungstheorie. Nauka, Moscow, 1977. (in Russian).
144. E.N. Rosenwasser, P.G. Fedorov, and B.P. Lampe. Construction of MFD-representation of real rational transfer matrices on basis of normalisation procedure. In Int. Conf. on Computer Methods for Control Systems, pages 39–42, Szczecin, Poland, December 1997.
145. E.N. Rosenwasser, P.G. Fedorov, and B.P. Lampe. Construction of state-space model with minimal dimension for multivariable system on basis of transfer matrix normalization procedure. In Proc. 5th Int. Symp. Methods Models Autom. Robotics, volume 1, pages 235–238, Miedzyzdroje, Poland, 1998.
146. E.N. Rosenwasser and B.P. Lampe. Digitale Regelung in kontinuierlicher Zeit - Analyse und Entwurf im Frequenzbereich. B.G. Teubner, Stuttgart, 1997.
147. E.N. Rosenwasser and B.P. Lampe. Algebraische Methoden zur Theorie der Mehrgrößen-Abtastsysteme. Universitätsverlag, Rostock, 2000. ISBN 3-86009-195-6.
148. E.N. Rosenwasser and B.P. Lampe. Computer Controlled Systems - Analysis and Design with Process-orientated models. Springer-Verlag, London, Berlin, Heidelberg, 2000.
149. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Entwurf optimaler Kursregler mit Hilfe von Parametrischen Übertragungsfunktionen. Automatisierungstechnik, 44(10):487–495, 1996.
150. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Frequency domain method for H2 optimization of time-delayed sampled-data systems. Automatica, 33(7):1387–1392, 1997.
151. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Optimal discrete filtering for time-delayed systems with respect to mean-square continuous-time error criterion. Int. J. Adapt. Control Signal Process., 12:389–406, 1998.
152. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Application of Laplace transformation for digital redesign of continuous control systems. IEEE Trans. Automat. Contr., 44(4):883–886, April 1999.
153. E.N. Rosenwasser, K.Y. Polyakov, and B.P. Lampe. Comments on "A technique for optimal digital redesign of analog controllers". IEEE Trans. Control Systems Technology, 7(5):633–635, September 1999.
154. W.J. Rugh. Linear system theory. Prentice-Hall, Englewood Cliffs, NJ, 1993.
155. V.O. Rybinskii and B.P. Lampe. Accuracy estimation for digital control systems at incomplete information about stochastic input disturbances. In B.P. Lampe, editor, Maritime Systeme und Prozesse, pages 43–52. Universitätsdruckerei, Rostock, 2001.
156. V.O. Rybinskii, B.P. Lampe, and E.N. Rosenwasser. Design of digital ship motion control with guaranteed performance. In Proc. 49. Int. Wiss. Kolloquium, volume 1, pages 381–386, Ilmenau, Germany, 2004.
157. M. Saeki. Method of solving a polynomial equation for an H∞ optimal control problem. IEEE Trans. Autom. Contr., AC-34:166–168, 1989.
158. M. Sagfors. Optimal Sampled-Data and Multirate Control. PhD thesis, Faculty of Chemical Engineering, Åbo Akademi University, Finland, 1998.
159. L. Schwartz. Méthodes mathématiques pour les sciences physiques. Hermann, 115 Boul. Saint-Germain, Paris VI, 1961.
160. H. Schwarz. Optimale Regelung und Filterung - Zeitdiskrete Regelungssysteme. Akademie-Verlag, Berlin, 1981.
161. L.S. Shieh, B.B. Decrocq, and J.L. Zhang. Optimal digital redesign of cascaded analogue controllers. Optimal Control Appl. Methods, 12:205–219, 1991.
162. L.S. Shieh, J.L. Zhang, and J.W. Sunkel. A new approach to the digital redesign of continuous-time controllers. Control Theory Adv. Techn., 8:37–57, 1992.
163. I.Z. Shtokalo. Generalisation of symbolic method principal formula onto linear differential equations with variable coefficients. Dokl. Akad. Nauk SSR, 42:9–10, 1945. (in Russian).
164. S. Skogestad and I. Postlethwaite. Multivariable feedback control: Analysis and design. Wiley, Chichester, 2nd edition, 2005.
165. L.M. Skvorzov. Transformation algorithm for mathematical models of multidimensional control systems. Izv. Akad. Nauk, Control theory and systems, 2:17–23, 1997.
166. V.B. Sommer, B.P. Lampe, and E.N. Rosenwasser. Experimental investigations of analog-digital control systems by frequency methods. Automation and Remote Control, 55(Part 2):912–920, 1994.
167. E.D. Sontag. Mathematical control theory: deterministic finite dimensional systems. Springer-Verlag, New York, 1998.
168. D.S. Stearns. Digitale Verarbeitung analoger Signale. R. Oldenbourg Verlag, München, 1988.
169. R.F. Stengel. Stochastic optimal control. Theory and application. J. Wiley & Sons, Inc., New York, 1986.
170. Y. Tagawa and R. Tagawa. A computer aided technique to derive the class of realizable transfer function matrices of a control system for a prescribed order controller. In Proc. IEEE Int. Symp. on Computer Aided Control System Design (CACSD'99), pages 321–327, Kohala Coast, Island of Hawaii, Hawaii, USA, 1999.
171. E.C. Titchmarsh. The theory of functions. Oxford science publ. University Press, Oxford, 2nd edition, 1997. Reprint.
172. H.T. Toivonen. Sampled-data control of continuous-time systems with an H∞-optimality criterion. Automatica, 28(1):45–54, 1992.
173. H.T. Toivonen. Worst-case sampling for sampled-data H∞ design. In Proc. 32nd IEEE Conf. Decision Contr., pages 337–342, San Antonio, TX, 1993.
174. H. Tolle. Mehrgrößenregelkreissynthese, volumes 1, 2. R. Oldenbourg Verlag, München, 1983, 1985.
175. J. Tou. Digital and Sampled-Data Control Systems. McGraw-Hill, New York, 1959.
176. H.L. Trentelman and A.A. Stoorvogel. Sampled-data and discrete-time H2 optimal control. In Proc. 32nd Conf. Dec. Contr., pages 331–336, San Antonio, TX, 1993.
177. J.S. Tsypkin. Sampling systems theory. Pergamon Press, New York, 1964.
178. R. Unbehauen. Systemtheorie, volume 2. R. Oldenbourg Verlag, München, 7th edition, 1998.
179. H. Unger, U. Petersohn, and S. Lindow. Zur Beschreibung hybrider Multiraten-Systeme mittels Matrixkalküls. FREQUENZ, 1997. (submitted).
180. K.G. Valeyev. Application of Laplace transform for analysis of linear systems. In Proc. Intern. Conf. on Nonlin. Oscill., volume I, pages 126–132, Kiev, 1970. (in Russian).
181. B. van der Pol and H. Bremmer. Operational calculus based on the two-sided Laplace integral. University Press, Cambridge, 1959.
182. A. Varga. On stabilization methods of descriptor systems. Syst. Contr. Lett., 24:133–138, 1995.
183. M. Vidyasagar. Control system synthesis. MIT Press, Cambridge, MA, 1994.
184. L.N. Volgin. Optimal discrete control of dynamic systems. Nauka, Moscow, 1986. (in Russian).
185. S. Volovodov, B.P. Lampe, and E.N. Rosenwasser. Application of method of integral equations for analysis of complex periodic behaviors in Chua's circuits. In Proc. 1st IEEE Int. Conf. Control Oscill. Chaos, pages 125–128, St. Petersburg, Russia, August 1997.
186. J. Wernstedt. Experimentelle Prozeßanalyse. Verlag Technik, Berlin, 1989.
187. E.T. Whittaker and G.N. Watson. A course of modern analysis. University Press, Cambridge, 4th edition, 1927.
188. J.H. Wilkinson. The algebraic eigenvalue problem. Clarendon Press, Oxford, 1965.
189. W.A. Wolovich. Linear Multivariable Systems. Springer-Verlag, New York, 1974.
190. W.A. Wolovich. Automatic control systems. Harcourt Brace, 1994.
191. W.M. Wonham. Linear multivariable control - A geometric approach. Springer-Verlag, New York, Berlin, 3rd edition, 1985.
192. R.A. Yackel, B.C. Kuo, and G. Singh. Digital redesign of continuous systems by matching of states at multiple sampling periods. Automatica, 10:105–111, 1974.
193. D.V. Yakubovich. Algorithm for supplementing a rectangular polynomial matrix to a quadratic matrix with given determinant. Kybernetika i Vychisl., 23:85–89, 1984.
194. Y. Yamamoto. A function space approach to sampled-data systems and tracking problems. IEEE Trans. Autom. Contr., AC-39(4):703–713, 1994.
195. Y. Yamamoto and P. Khargonekar. Frequency response of sampled-data systems. IEEE Trans. Autom. Contr., AC-41(2):161–176, 1996.
196. D.C. Youla, H.A. Jabr, and J.J. Bongiorno (Jr.). Modern Wiener-Hopf design of optimal controllers. Part II: The multivariable case. IEEE Trans. Autom. Contr., AC-21(3):319–338, 1976.
197. L.A. Zadeh. Circuit analysis of linear varying-parameter networks. J. Appl. Phys., 21(6):1171–1177, 1950.
198. L.A. Zadeh. Frequency analysis of variable networks. Proc. IRE, 39(March):291–299, 1950.
199. L.A. Zadeh. Stability of linear varying-parameter systems. J. Appl. Phys., 22(4):202–204, 1951.
200. C. Zhang and J. Zhang. H2 performance of continuous periodically time-varying controllers. Syst. Contr. Lett., 32:209–221, 1997.
201. P. Zhang, S.X. Ding, G.Z. Wang, and D.H. Zhou. Fault detection in multirate sampled-data systems with time-delays. In Proc. 15th IFAC Triennial World Congr., volume Fault detection, supervision and safety of technical processes, paper REG2179, Barcelona, 2002.
202. J. Zhou. Harmonic analysis of linear continuous-time periodic systems. PhD thesis, Kyoto University, 2001.
203. J. Zhou, T. Hagiwara, and M. Araki. Trace formulas for the H2 norm of linear continuous-time periodic systems. In Prepr. IFAC Workshop on Periodic Control Systems, pages 3–8, Como, Italy, 2001.
204. J. Zhou, T. Hagiwara, and M. Araki. Trace formula of linear continuous-time periodic systems via the harmonic Lyapunov equation. Int. J. Control, 76(5):488–500, 2003.
205. K. Zhou and J.C. Doyle. Essentials of robust control. Prentice-Hall Intern., Upper Saddle River, NJ, 1998.
206. K. Zhou, J.C. Doyle, and K. Glover. Robust and optimal control. Prentice-Hall, Englewood Cliffs, NJ, 1996.
207. J.Z. Zypkin. Sampling systems theory. Pergamon Press, New York, 1964.
Index

def defect, 22
deg degree, 5
ind index, 74
H-norm, 313
ord order, 11
E = expectation, 307
trace trace, 308
-transformation, 431
z-transformation, 432
ADC = analog to digital converter, 245
ALG = control program, control algorithm, controller, 246
DAC = digital to analog converter, 246
DCU = digital control unit, digital controller, 245

Abelian group, 3
addition, 3
autocorrelation matrix, 307

backward model, 210
  discrete
    sampled-data system, 283
  standard realisation, 213
backward transfer matrix, 341
basic controller, 151
basic controllers
  dual, 162
basic matrix
  dual, 162
  left, 161
  right, 161
basic representation
  controller, 169
  transfer matrix, 171
behavior
  non-pathological, 267, 273
Bezout identity, 162
Binet-Cauchy formula, 10
block matrix
  dominant element, 101

characteristic equation
  polynomial matrix, 26
columns, 8
  height, 8
control algorithm, 246
control input, 225
control problem
  abstract, 227
control transfer matrix, 225
control unit
  digital, 245
controllability
  by control input, 225
  by disturbance input, 226
  characteristic roots, 304
  forward model, 212
  matrix, 43
  modal, 305
  pair, 43
controller, 149, 226, 246, 317
  digital, 245, 280
  stabilising, 230, 298
controller model
  left, 231
  right, 231
coprime, 7

Defect
  normal, 22
degree, 5
  rational matrix, 61
denominator
  left, 63
  rat. matrix, 59
  right, 63
descriptor process, 185
descriptor system, 38, 90, 197
design
  guaranteed performance, 448
determinantal divisor, 24
  greatest, 24
determinants, 7
difference equation
  derived
    output, 185
  original, 185
difference equations
  equivalent, 187
  row reduced, 188
dimension
  standard presentation, 78
discretisation
  polynomial, 267
disturbance input, 225
disturbance transfer matrix, 225
divisor, 6
  common left, 35
  common right, 36
  greatest common, 6
  greatest common left, 35
  greatest common right, 36
DLT = discrete Laplace transform, 258
DMFD = double-sided MFD, 73

eigenoperator, 210
  forward model, 185
eigenvalue
  polynomial matrix, 26
eigenvalue assignment, 149
  PMD, 150
  structural, 150
    PMD, 151
    transfer matrix, 170
element
  inverse, 3
  neutral, 3
  opposite, 4
  zero, 3
elementary divisor, 24
  finite, 39
entire part, 6
equivalence
  strict, 39
excitation
  class, 448
exp.per. = exponential-periodic, 241
exponent
  exp.per. function, 241
exponential function
  matrix, 256

field, 4
  complex number, 4
  real numbers, 4
form element
  transfer function, 249
form function, 246
forward model, 185, 209, 212
  controllable, 189
  discrete
    sampled-data system, 283
forward transfer function, 342
fraction
  equality, 53
  improper, 55
  irreducible, 275
  irreducible form, 54
  proper, 55
  rational, 53
  reducible, 59
  strictly proper, 55
Frobenius
  matrix
    accompanying, 46
    characteristic, 122
  realisation, 50, 127
function
  exponential-periodic, 241
  fractional rational, 53
  of matrix, 253

GCD = greatest common divisor, 6
GCLD = greatest common left divisor, 35
GCRD = greatest common right divisor, 36
group, 3
  additive notation, 3
  multiplicative notation, 4

Hermitian form
  left, 13
  right, 14

IDMFD = irreducible DMFD, 74
ILMFD = irreducible LMFD, 64
image, 383
  pseudo-rational, 385
index
  rat. function, 55
  rational matrix, 74
initial controller, 318
initial energy
  vanishing, 196, 204
initial values, 192
input operator
  forward model, 185
instability, 222
  polynomial matrix, 223
  rational matrix, 224
instability, structural, 109
integrity region, 4
irreducibility
  global, 321

Jordan
  block, 39
  canonical form, 43
  matrix
    characteristic, 122
  realisation, 50, 126
Jordan matrix, 43

Laplace transform
  discrete, 258, 384
Laplace transformation
  discrete, 434
latent equation, 26
  polynomial matrix, 26
latent number, 26
latent roots, 26
linear dependence, 8
LMFD = left MFD, 63
LTI = Linear Time Invariant, 184
LTI object, 184
  finite-dimensional, 184
LTI process
  non-singular, 185

matrices
  composed, 45
  cyclic, 44
  normal, 105
  same structure, 255
matrix
  adjoint, 8
  characteristic, 42
    backward, 341
    closed loop, 149
    closed-loop, 226
      left, 231
      PMD, 150
      right, 231
    forward, 342
  components, 251
  dimension, 7
  horizontal, 7
  non-degenerated, 9
  non-singular, 7
  normal
    structural stable representation, 118
  over ring, integrity region, 7
  projector, 252
  quadratic, 7
  rat. = rational, 59
  rational = broken rational, 59
  rational-periodic, 275
  regular, 7
  singular, 7
  vertical, 7
McMillan
  degree, 61
  denominator, 61
  form, 61
  multiplicity, 62
  numerator, 61
mean variance, 312
MFD
  complete, 106
  irreducible, 64
MFD = matrix fraction description, 63
minimal polynomial, 86
minor, 9
MMD = McMillan denominator, 100
modal control, 151
model
  parametric discrete
    process, 266
  process = process model, 272
  sampled-data system
    continuous, 281
    parametric discrete, 282
multiplication, 4
multiplicity
  pole, 60

non-pathological, 267
  strict, 273
normalisation, 144
numerator
  left, 63
  rat. matrix, 59
  right, 63

observability
  pair, 44
operation
  associative, 3
  commutative, 3
  left elementary, 12
  right elementary, 13
order
  backward model
    optimal, 341
  forward model
    optimal, 342
  polynomial matrix, 11
original, 383

pair
  controllable, 43
  horizontal, 34
  irreducible, 35, 36
  non-degenerated, 35
  non-singular, 87
  observable, 44
  vertical, 34
parametric discrete model, 266
partial fraction expansion, 56
  rational matrix, 81
pencil, 38
performance
  guaranteed, 448, 450
period
  exp.per. function, 241
PMD = polynomial matrix description, 49
  characteristic polynomial, 230
  elementary, 49
  equivalent, 78, 92
  regular, 90
pole, 60
poles
  critical, 374
polynomial, 5
  characteristic, 26, 230
    backward model, 217
    closed loop, 149
    const. matrix, 42
    controllable root, 304
    forward model, 217, 342
    standard sampled-data system, 294
    uncontrollable root, 304
  monic, 6
  reciprocal, 217
polynomial matrices
  alatent, 28
  elementary divisor, 24
  equivalent, 20
  latent, 28
  left-equivalent, 12
  right-equivalent, 13
  simple, 29
polynomial matrix, 10
  anomalous, 11
  column reduced, 17
  degree, 11
  determinantal divisor, 24
  eigenvalue, 26
  highest coefficient, 11
  inverse, 85
  latent equation, 26
  left canonical form, 13
  monic adjoint, 86
  monic inverse, 86
  non-singular, 11
  reducing, 63
  regular, 11
  right canonical form, 14
  row reduced, 17
  singular, 11
  Smith-canonical form, 21
  stable, 223
  unimodular, 11
  unstable, 223
polynomial rows
  linear dependent, 10
polynomials
  coprime
    in all, 155
  equivalent, 6
  invariant, 23
principal separation, 333
process, 149
  anomalous, 185
  causal, 190
  controllable, 189
  controlled, 226
  irreducible, 151
  modulated
    transfer matrix, 250
  non-causal, 190
  normal, 185
  stochastic
    periodically non-stationary, 308
    quasi-stationary, 307
  transfer matrix, 153
process model
  dual, 162
  left, 154, 231
    controllable, 231
  parametric discrete, 266
    modulated, 272
  right, 154, 231
    controllable, 231
projector
  matrix, 252
proper, 74
pseudo-rational, 385
PTM
  system function, 319
    coefficients, 319
PTM = parametric transfer matrix, 284
pulse frequency response
  displaced, 244

quasi-polynomial
  nonnegative on the unit circle, 358
  positive on the unit circle, 358
quasi-polynomial matrix, 330
  type 1, 331
  type 2, 331

rank
  full, 9
  maximal, 9
  normal, 9
rat. = fractional rational, 53
rat.per. = rational periodic, 275
rational matrices
  independent, 69
rational matrix
  broken part, 82
  dominant, 101
  index, 74
  polynomial part, 82
  proper, 74
  representation, 78
  separation, 83
realisation
  canonical, 127
  Frobenius, 50
  Jordan, 50
  simple, 114
realisations, 49
  dimension, 49
  minimal, 49
  similar, 49
  simple, 49
reducibility, 275
reference system, 416
remainder, 6
representation
  minimal, 78
ring, 4
  strictly proper fractions, 55
RMFD = right MFD, 63
Rouché, Theorem of, 238
rows, 8
  basis, 9
  linear combination, 8
  linear dependent, 8
  width, 8

S-representation, 117, 118
  minimal, 119
sampled-data system
  standard form, 279
sampling period, 246
semigroup, 3
separation
  minimal, 58, 84
  rat. function, 58
  rational matrix, 83
separation, principal, 333
signal
  exponential-periodic, 241
Smith-canonical form, 21
spectral density, 307
spectrum
  matrix, 252
  polynomial values, 253
stabilisability, 298
stability, 222
  backward model, 224
  forward model, 224
  internal, 296
  polynomial matrix, 223
  rational matrix, 224
standard form
  rational fraction, 54
  rational matrix, 59
standard realisation
  backward model, 213
standard representation, 78
standard sampled-data system, 279
  characteristic polynomial, 294
  modal controllable, 305
  model
    continuous, 281
  parametric discrete model, 282
stroboscopic property, 247, 248
structural modal control, 151
subordination, 94
Sylvester
  inequalities, 22
system
  elementary, 374
  guaranteed performance, 448
  single-loop
    critical, 374
system function
  PTM, 319
  representation
    Z(s), 398
    coefficients, 398
  standard sampled-data system, 318

Taylor sequence, 431
time quantisation, 246
trace of a matrix, 308
transfer function
  controller, 317
transfer function = transfer matrix, 78
transfer matrix
  continuous-time process, 241
  forward model, 189
  irreducible, 87
  monic, 87
  pair, 87
  parametric
    sampled-data system, 284
  PMD, 90
    monic, 90

variance, 308
vector, 8

weighting sequence
  normal process, 196
Wiener-Hopf method, 331
  modified, 336

zero divisor, 4
zero element, 3
zero matrix, 8
zero polynomial, 6
zero polynomial matrix, 10
