2 Timothy 4.7
ACKNOWLEDGEMENTS
the Edgar Allen Scholarship and the Linley Scholarship for doctoral
K. V. FERNANDO
SUMMARY
CONTENTS
Acknowledgements i
Summary ii
Contents iii
Part 1 Exordium 1
Chapter 1 Exordium 2
PART 1
Exordium
CHAPTER 1
Exordium
1. The prelude

comprehended. values a_i, i = 1,...,m (not all zeros) such that

    Σ_{i=1}^{m} a_i x_i = 0                                        (1)

the vectors x_i, i = 1,...,m. The rank is then defined as the largest

    Σ_i a_i x_i = 0,   1 ≤ r ≤ n

values of a_i, i = 1,...,m, which is obviously impractical. Fortunately,
submitted in the year 1881 and was published in 1883, and thus we
The matrix product defined by G^2 = X^T X (X^T is the transpose of X)
format,
algebra.

Theorem: The Gram determinant is zero if and only if the set {x_i}
is linearly dependent.

then the set {x_i} is linearly independent. The notation G^2 is
this property.
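As a numerical illustration of the theorem (with assumed values, not taken from the text): forming the Gramian G^2 = X^T X of a matrix with linearly dependent columns gives a vanishing Gram determinant.

```python
import numpy as np

# Illustrative 3x2 matrix (assumed values): the second column is twice
# the first, so the set of columns {x_i} is linearly dependent.
X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
G = X.T @ X                        # the Gramian G^2 = X^T X
det_G = np.linalg.det(G)           # Gram determinant: zero up to roundoff
rank_X = np.linalg.matrix_rank(X)  # only one independent column
```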
If the spectral expansion of the Gramian G^2 is written in the form,

    G^2 = V Σ^2 V^T,   Σ^2 = diag(d_1^2, ..., d_m^2)

then

    det G^2 = Π_{i=1}^{m} d_i^2

or computational problems.
3. Covariance matrices

    p(x) = 1/((2π)^{n/2} det M) · e^{−(1/2)(x−a)^T M^{−2} (x−a)}      (2)

    ∫ p(x) dx = 1                                                    (3)

where dx = dx_1 ... dx_n, x = (x_1, ..., x_n)^T

verified,

    E{x} = a

    E{(x−a)(x−a)^T} = M^2

called the 'average value' and the matrix M^2 the covariance matrix
of the process.
where y = [y_1; y_2] = U^T x, β = [β_1; β_2] = U^T a.

If d_i is almost zero then

    1/((2π)^{1/2} d_i) · e^{−(y_i−β_i)^2 / 2d_i^2} ≈ 0   except near y_i = β_i

following manner

We observe that the covariance matrix M^2 is a Gramian matrix

    P = U D U^T

Thus,

    x^T P x = Σ_i d_i y_i^2

where y = U^T x.
then the singular value decomposition of Q is given by

    Q = U D V^T

where the diagonal matrix D can be chosen to have positive diagonal
the year 1832, that is exactly 150 years ago, Jacobi derived this

where

    [U | Ũ]^T [U | Ũ] = I_n

    [V | Ṽ]^T [V | Ṽ] = I_m

    U^T U = V^T V = I_r

    Q Q^T = [U | Ũ] [ D^2  0 ; 0  0 ] [U | Ũ]^T = U D^2 U^T
the Gramian matrices C^T C and C C^T are explicitly present in the

    a = [ 1.0   1.0 ; fl(1.0 + ε)   1.0 ],   fl(a^T a) = [ 1.0   1.0 ; 1.0   1.0 ]
- 10 -
which does not require the formation of the Gramian matrices, which is due to Golub et al (1970) 10 and which is based on an extended
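The danger of forming the Gramian explicitly can be sketched with a Läuchli-type matrix (assumed values, in the spirit of the fl(1.0 + ε) example above): once ε^2 falls below the machine precision, fl(1 + ε^2) = 1 and the computed Gramian is exactly singular, although the matrix itself has full column rank.

```python
import numpy as np

# Lauchli-type matrix (assumed values): eps is chosen so that
# eps**2 is lost against 1.0 in IEEE double precision.
eps = 1e-9
A = np.array([[1.0, 1.0],
              [eps, 0.0],
              [0.0, eps]])
G = A.T @ A                        # fl(G) = [[1, 1], [1, 1]]: eps is lost
rank_G = np.linalg.matrix_rank(G)  # 1 -- the formed Gramian is singular
rank_A = np.linalg.matrix_rank(A)  # 2 -- the SVD of A itself sees full rank
```

This is exactly why SVD-based methods that avoid forming C^T C are preferred.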
forms Part 1.
expansion, is described.
matrix are the criteria for order reduction. The more modern
of reciprocal transformations.
outputs.
References
pp.35-54.
indefiniti

    A + B cos φ + C sin φ + (A' + B' cos φ + C' sin φ) cos ψ + (A'' + B'' cos φ + C'' sin φ) sin ψ

in formam simpliciorem [into the simpler form]

    G − G' cos η cos ε − G'' sin η sin ε
118-121.
11. Klema, V.C., and Laub, A.J.: 'The singular value decomposition:
PART 2
The Karhunen-Loève Expansion and Extensions
1. Introduction

The Karhunen-Loève expansion is one of the
The extension from the continuous to the discrete case has been
given by

    x_ij = Σ_{k=1}^{r} c_k u_ik v_jk                                 (1)

The rank of the matrix X is taken as r, and X ∈ M_{m,n}, C = diag(c_1, ..., c_r), c_k > 0, k = 1,...,r, U ∈ M_{m,r}, V ∈ M_{n,r}, r ≤ min(m,n). Also

    Σ_{k=1}^{m} u_ki u_kj = δ_ij                                     (2)

    Σ_{k=1}^{n} v_ki v_kj = δ_ij                                     (3)

problem defined by

    SU = UC^2   or   Σ_{k=1}^{m} s_ik u_kj = u_ij c_j^2,   j = 1,...,r    (4)

where                                                                (5)

    RV = VC^2                                                        (6)

where

    r_jk = Σ_{i=1}^{m} x_ij x_ik                                     (7)

dyadic format

    s_ij = Σ_{k=1}^{r} c_k^2 u_ik u_jk                               (8)

    r_ij = Σ_{k=1}^{r} c_k^2 v_ik v_jk                               (9)
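The row and column eigenproblems (4) and (6) can be checked numerically against the SVD (random data, assumed for illustration; the eigenvalues of S = XX^T and R = X^T X are the squared singular values c_k^2):

```python
import numpy as np

# Random 5x3 data matrix (assumed, for illustration only).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
U, c, Vt = np.linalg.svd(X, full_matrices=False)  # X = U C V^T, eqn (1)
S = X @ X.T                                       # row Gramian
R = X.T @ X                                       # column Gramian, eqn (7)
# Row eigenproblem SU = UC^2 (eqn 4) and column eigenproblem RV = VC^2 (eqn 6):
ok_rows = np.allclose(S @ U, U @ np.diag(c**2))
ok_cols = np.allclose(R @ Vt.T, Vt.T @ np.diag(c**2))
```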
    x = Σ_{k=1}^{r} c_k u_k v_k

where the sets {u_k} and {v_k} are orthonormal functions with

    (2*)
    (3*)

The analogies between eqns (1), (2) and (1*), (2*), etc are obvious,

    (4*)
    (6*)

    r(t,t_1) = Σ_{k=1}^{r} c_k^2 v_k(t) v_k(t_1)                     (9*)
Integration with respect to the variable w in eqns (2*) and (7*) can
(10*)
(11*)
then be modified as

    E[U^T U] = I                                                     (10)
the eigenvector matrix V which gives the modes of the system data
peaks, etc at each hour of the day. However, if the data in each
matrix U will show the weekly pattern, indicating the reduced demands
practice.
If w is the probability space variable, then Pkt(w) will correspond
    KLE (continuous)    —    KLE/PCA (discrete)
    SVE (continuous)    —    SVD (discrete)
6. References

1. Miller, K.S.: 'Complex stochastic processes', Addison-Wesley, ... pp.211-216.

10. Ahmed, N., and Rao, K.R.: 'Orthogonal transforms for signal ... York, 1972.

... Press, 1960.

... January 1980.

... 12, pp.1307-1312.

... Proc. Conf. on Decision & Control, January 1979, San Diego, CA, USA, pp.66-73.

... 42, pp.38-59.
CHAPTER 3
dimensional processes.
basic forms used for describing random signals and has had wide
has been used successfully for this problem, and particularly for
of air pollution and traffic flows 11-13,26 and then the K-L expansion and the singular value decomposition technique are identical 27. The decomposition has been used in the study of glass properties 28, meteorology 22, and image processing 29-31.
    X = [ x_1(1)  ...  x_1(n)
          ...
          x_m(1)  ...  x_m(n) ]

x_i(j) would represent the demand at time j hours (1..n) on day i (1..m).
The K-L expansion is now defined by the row-problem representation
The system modes are then identified with the eigenvalue problem
defined by
- Al4!:M
n,n
- AI
magnitude), then
- trace hI - trace hI
hour of each day, say for example, during periods of peak TV viewing.
problem format
    X = UB,   U ∈ M_{m,r}

    P1 ∈ M_{m,m},   P2 ∈ M_{n,n}

where P2 is the a priori probability matrix associated with the n
experiments. Then
and
between both the row and column data associated with either m or n
be given by
    E[V Λ1 V^T],   Λ1 ∈ M_{n,n}

and

    E[U Λ2 U^T],   Λ2 ∈ M_{m,m}
nature of scanning and time delays, and the resulting effects could
matrices P1 and P2.

Note: If P1 and P2 are positive definite matrices, then rank R1 =    (1)
    X = Σ_i c_ii u_i v_i^T
of the same kind. If the time variable is t and the space variable
e
2 . A e ... Ai eT
T
Further, it can be shown that A = UC and B = CV^T are also K-L expansions.
the truncated solution for C is then given by

    C̄ = Ū^T X V̄,   Ū ∈ M_{m,k},  C̄ ∈ M_{k,k},  V̄ ∈ M_{n,k},  k < r

    Ā = Ū C̄,   B̄ = C̄ V̄^T

which contain only the first k modes. The minimized error function
is then given by
truncated expansion
the least-squares estimate C̄ is then given by 14

    H = P1^{1/2} U,   H ∈ M_{m,r},   G ∈ M_{n,r}

    H̄^T H̄ = Ḡ^T Ḡ = I_r

Then,
Table 1

        JAN  FEB  MAR  APR  MAY  JUN  JUL  AUG  SEP  OCT  NOV  DEC
  1949  112  118  132  129  121  135  148  148  136  119  104  118
  1950  115  126  141  135  125  149  170  170  158  133  114  140
  1951  145  150  178  163  172  178  199  199  184  162  146  166
  1952  171  180  193  181  183  218  230  242  209  191  172  194
  1953  196  196  236  235  229  243  264  272  237  211  180  201
  1954  204  188  235  227  234  264  302  293  259  229  203  229
  1955  242  233  267  269  270  315  364  347  312  274  237  278
  1956  284  277  317  313  318  374  413  405  355  306  271  306
  1957  315  301  356  348  355  422  465  467  404  347  305  336
  1958  340  318  362  348  363  435  491  505  404  359  310  337
  1959  360  342  406  396  420  472  548  559  463  407  362  405
  1960  417  391  419  461  472  535  622  606  508  461  390  432
    X^s = c_s ā b̄^T

where

    ā = 10^{-1} (1.21 1.34 1.63 1.89 2.15 2.29 2.72 3.14 3.53 3.55 4.10 4.56)^T

    b̄ = 10^{-1} (2.47 2.40 2.76 2.72 2.77 3.18 3.58 3.58 3.08 2.72 2.38 2.67)^T

    c_s = 3.652×10^3

The average trend could be identified from the vector ā and the cyclic pattern from the vector b̄.
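The rank-one trend/cycle model described above can be sketched with the leading SVD term, here on the upper-left 3x3 corner of Table 1 (the full 12x12 computation follows the same pattern):

```python
import numpy as np

# Upper-left corner of the airline data in Table 1 (1949-1951, JAN-MAR).
X = np.array([[112.0, 118.0, 132.0],
              [115.0, 126.0, 141.0],
              [145.0, 150.0, 178.0]])
U, c, Vt = np.linalg.svd(X)
X1 = c[0] * np.outer(U[:, 0], Vt[0, :])  # best rank-one approximation
rel_err = np.linalg.norm(X - X1) / np.linalg.norm(X)
# rel_err is small: one trend/cycle mode dominates this data
```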
• in table 2.
P1 = diag (0.45 0.50 0.55 0.60 0.65 0.70 0.75 0.80 0.85 0.90 0.95 1.00)

P2 = diag (0.60 0.70 0.80 0.90 1.00 1.00 1.00 1.00 0.90 0.80 0.70 0.60)
with the elements of PI emphasizing the later data and the elements
C = diag (68.47 36.65 -30.69 23.65 19.55 -11.19 -9.21 6.10 -4.49 -1.82 -0.30 0.00)
in Table 3.
The optimized values of the error functions with the retention of
given in Table 4.
It can be seen from the reconstructed matrix that the best fits
energy

    = 3.31×10^{-2}   for k = 5
Table 2

X =
[  2.5  11.5   9.9   8.3  -1.8  -5.8 -10.5 -10.7  -3.7  -1.5  -1.2  -0.3
  -5.5   8.9   6.4   1.9 -10.4  -6.3  -5.1  -4.9   7.3   0.2  -2.0   9.5
  -1.8   7.3  14.0   0.9   7.0 -11.2 -14.3 -14.1   0.4   0.2   4.6   7.0
   1.1  14.8   3.1  -5.7  -8.1  -1.0 -15.9  -4.7  -3.5   3.6   8.4  10.0
   1.9   7.4  19.1  20.6  10.5  -7.2 -18.0  -9.8  -5.8  -3.0  -5.9  -9.2
  -2.1 -12.3   4.7  -0.7   2.3  -1.7   2.5  -6.3   1.2   1.5   4.5   5.8
  -2.9  -5.1  -6.7  -1.6  -5.4  -0.8   8.0  -8.7   5.6   3.9   1.1  12.7
   4.9   1.8   0.6   0.2  -0.3   9.0   1.5  -6.1   0.8  -6.2  -1.7  -0.6
  -2.7  -7.9   0.9  -3.0  -2.3  12.4   3.2   5.5   6.5  -3.4  -1.0  -8.1
  11.4  -1.4  -5.2 -15.0  -6.5  11.4  13.4  27.8  -7.1  -3.4  -5.5 -18.9
  -5.4 -17.1  -6.9 -12.1   4.5  -4.3  11.1  22.5   0.9  -0.4   5.2   4.9
   6.3  -8.2 -40.0   7.3  10.2   5.5  25.2   9.6  -5.7   8.1  -5.5 -12.8 ]
Table 3

X̄ =
[  2.3  11.2   9.3   7.4  -3.1  -3.6 -12.4 -11.0  -1.7  -0.4  -1.1   0.5
  -1.8   6.3   5.8  -2.7  -6.2  -2.3  -7.2  -8.1   2.7   0.6   3.4   9.5
  -3.9   6.1  13.7   5.5   3.1 -11.2 -15.0 -11.5   0.0   0.5   3.5   8.9
   2.1  16.8   5.2  -5.2  -9.9  -5.4 -12.7  -4.7  -1.8   2.4   4.3   9.8
   2.5   8.4  19.1  18.8  12.2  -7.3 -17.7 -10.9  -5.4  -3.6  -5.5 -10.4
  -5.4  -9.1   3.0   4.9   2.6  -1.8   1.8  -4.2   4.9  -0.6   2.6   6.3
  -4.5  -4.5  -8.3  -2.1  -3.9   1.0   5.2  -8.6   6.3   3.1   3.7  11.8
   1.5   4.2   1.4   1.8  -3.2   7.2   1.5  -5.4   2.9  -2.3  -2.8  -2.8
  -4.1  -7.8   3.1  -3.6  -2.9  14.5   5.5   4.4   4.0  -4.9  -2.9  -6.4
   8.9  -0.9  -6.9 -14.2  -6.0  13.0  11.4  28.8  -6.0  -3.9  -5.2 -19.0
  -7.7 -18.0  -6.4 -13.4   4.4  -4.9  11.7  21.7   0.4   1.0   5.2   4.1
   6.8  -8.3 -39.5   7.4   9.9   5.1  25.8   9.5  -6.2   8.0  -5.1 -12.4 ]
    k    trace E^T P1 E    J = P1 * (E P2 E^T)    trace E P2 E^T
    12   0.                0.                     0.
    11   0.                0.                     0.
    10   1.27×10^{-1}      8.96×10^{-2}           1.52×10^{-1}

    trace X^T P1 X = 9.52×10^3    trace X P2 X^T = 1.03×10^4

    = 8.19×10^3 = trace Λ
under consideration.
7. References

... London, 1973.

... 3, pp.257-270.

... pp.640-653.

12. STERLING, M.J.H., and ANTCLIFFE, D.J.: 'A technique for the ... 1974, 8, pp.533-538.

... pp.996-1017.

16. GOLUB, G.H., and REINSCH, C.: 'Singular value decomposition and

20. CLIFF, A.D., and ORD, J.K.: 'Spatial autocorrelation', Pion Ltd, London, 1973.

24. BOX, G.E.P., and JENKINS, G.M.: 'Time series analysis, forecasting

26. SAITO, O., and TAKEDA, H.: 'Two-stage predictor of air pollution ... pp.107-112.

... 65, 6, pp.712-715.

33. LOEVE, M.: Probability Theory, Van Nostrand, Princeton, NJ, 1963.
8. Appendix

    Σ_{j=1}^{n} x_i(j) = a_i,   E[a_i] = 0   for all i

    Σ_{i=1}^{m} x_i(j) = b_j,   E[b_j] = 0   for all j

If the signal matrix has large deviations from the expected values,

The rank one matrix X^s is defined as

    X^s = (1/d) a b^T   if d ≠ 0

    a = (a_1, ..., a_m)^T,   b = (b_1, ..., b_n)^T

    d = Σ_{i=1}^{m} a_i = Σ_{j=1}^{n} b_j

The matrix X^s can also be represented in the format

    X^s = c_s ā b̄^T,   c_s =
CHAPTER 4
1. Introduction
and cyclic effects are removed or neglected, and thus such techniques do
processes can also be studied. If the data is cyclic then one of the
brain during hemorrhage 11 and air pollution 14, among other applications.
ut10n .
model is similar to the KLE and can be implemented using the singular
2. The problem
    X_ij ∈ M_{m_i, n_j},   i,j = 1,2
failure.
3. The Karhunen-Loève extrapolation method 7-14
implemented usually in two stages. The first stage involves the formation
the method. The actual extrapolation using the modes or basis "pattern"
experiments each with n discrete sampled values. The data record denoted
by the row vector yi contains the information about the ith experiment
and it is assumed that the expected value of yi is zero, and that this
    Y1 ∈ M_{m1,n}

    R_y ≈ (1/m1) Σ_{i=1}^{m1} (y^i)^T (y^i) = (1/m1) Y1^T Y1

provided m1 is large.
    Y1 = A1 V^T,   A1 ∈ M_{m1,n}

The most important property of this matrix is that (the expected value of)

    (1/m1) A1^T A1 = D_y^2
Because of this diagonal form, the modes of the process can be decoupled
The diagonal values of the matrix D_y^2 may contain zero or near zero
entries and they can be neglected without loss of information. If the
expansion is truncated using only k modes, then it can be written in the
format,
    Ȳ1 = Ā1 V̄^T,   Ȳ1 ∈ M_{m1,n},  Ā1 ∈ M_{m1,k},  V̄ ∈ M_{n,k},  D̄_y^2 ∈ M_{k,k}

where Ȳ1 denotes the data matrix reconstructed from the most significant k modes and Ā1 and V̄ are truncated matrices of A1 and V, respectively,
error matrix. The above expansion can be decomposed into two parts.
    X21 = A2 V̄1^T + E21

    X22 = A2 V̄2^T + E22

where V̄ = [ V̄1
            V̄2 ]
If n1 is greater than the number of modes k, then the random coefficient matrix A2 can be obtained by solving the following least-squares problem:

    minimize trace

The least-squares estimate Â2 is then given by,

The estimate X̂22 of the unknown matrix X22 can be computed as,
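The corner-block extrapolation can be sketched for an exactly rank-r matrix (random assumed data): the modes V̄ come from the known row block, the coefficients A2 are fitted on X21 by least squares, and A2 V̄2^T then recovers the unknown corner.

```python
import numpy as np

# Build an exactly rank-2 matrix (assumed random factors) and hide its
# lower-right corner; the method recovers it from the other blocks.
rng = np.random.default_rng(1)
r = 2
X = rng.standard_normal((6, r)) @ rng.standard_normal((r, 8))
X21 = X[4:, :5]                           # known lower-left block
U, c, Vt = np.linalg.svd(X[:4, :])        # modes of the known row block
V1 = Vt[:r, :5].T                         # modes over the known columns
V2 = Vt[:r, 5:].T                         # modes over the columns to predict
A2 = np.linalg.lstsq(V1, X21.T, rcond=None)[0].T  # least-squares coefficients
X22_hat = A2 @ V2.T                       # estimate of the unknown corner
ok = np.allclose(X22_hat, X[4:, 5:])      # exact for noise-free rank-r data
```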
R ,
z
R
z
often been used for the processing of data or signals which are essentially
surface, and such extrapolation methods do not utilize all available data
in both directions and should not have a bias to one direction or the
the trend or cyclic or other effects from the data before the asymptotic
methods and to use the trend and other similar terms to our own
the prediction will be more consistent than if only one matrix is used.
the roundoff errors due to the reduced computational effort, and the
orthogonal matrices.
    x_ij = Σ_k u_ik v_jk d_k

where u_ik and v_jk are random variables with the following second order properties:

    E[u_ip u_iq] = δ_pq,   E[v_jp v_jq] = δ_pq

is given by

    Σ_{i=1}^{m1} x_ip x_iq = Σ_{r=1}^{k} Σ_{s=1}^{k} v_pr v_qs d_r d_s Σ_i u_ir u_is

    (1/m1) Σ_i u_ir u_is ≈ E[u_ir u_is] = δ_rs

    Σ_i x_ip x_iq ≈ m1 Σ_r v_pr v_qr d_r^2

    ≈ n U D̄^2 U^T                                                   (2)
l
Equations (1) and (2) are very similar to the results obtained using the conventional Karhunen-Loève extrapolation method, except that
there are two equations corresponding to both row and column extra-
    (1/n) Σ_{i=1}^{n} v_ir v_is ≈ E[v_ir v_is] = δ_rs

Similarly,

    Ū^T Ū ≈ I                                                        (4)
• Y =
I
= =
the least-squares estimate of the diagonal matrix D is given by 6 (Appendix 2)
c. =-
1
and * denotes the Hadamard product or the Schur product defined as the
d.
1
•
Since all three known matrices are used in the extrapolation, the estimate
are not necessary and we can extend this to any arbi trary number of
blocks, and the unknown block does not have to be the corner block.
,. i T i
c. r r (~ ) ~QVQ,
1
k t
The estimate of the unknown block X_st is given by

    X̂_st = Ū_s D̂ V̄_t^T
    x_1(N'+1) ...
    X = [ ...  x_M(N')  x_M(N'+1)  ...  x_M(N) ]

The suffix i in x_i(j) refers to the cycle. It is assumed that data up to and including the Mth cycle and the data points 1 to N' in the (M+1)th cycle are known. The unknown data to be predicted are the data points N'+1 to N.
This method has the advantage of having the continuity preserved from
8. Computational procedure

(a) Compute the singular value decompositions 18

    ... ,   Z1 = U_z D_z V_z^T

respectively.
(c) Solve the least squares problems defined in section 6 to give
the matrix D.
9. Example 1
To illustrate the method, one cycle ahead predictions equivalent
United States) in the years 1949 to 1960. This airline data is seasonal or cyclic with a trend and has been widely analysed in the literature 3,17.
Figure 3 shows the actual passenger levels and the predicted values.
The predictions were obtained using five years of immediate past data
gauged using the mean error and the mean squared error for each year.
(b) It was found that if more than five years of past data were used,
(c) Generally, acceptable predictions were obtained using only one mode
made. It was found that the ratio of 'energy' in the first mode
to the second mode d̂_1^2/d̂_2^2 is high, and this could explain the
marginal differences between the predictions using one and two modes.
(d) The elements of the matrices Y1^T Y1 and Z1^T Z1 are positive and thus,
(e) Since only past data is used, it is not possible to take into account
other factors which affect the number of passengers, such as the high number of airline accidents in the United States in 1958 19. This
might perhaps explain the large prediction errors for that year.
10. Example 2
Figure 4 shows the actual realized load levels and the predictions
It can be seen from Figure 4 that the prediction errors are within
and the Sunday. This is due to the different patterns of load which
for prediction are based on the weekdays, such errors are to be expected.
11. References
1. Willsky, A.S.: 'Digital signal processing and control and
(3), pp.82-83.
,
10. Belik, D.D., Nelson, D.J., and Olive, D.W.: 'Use of Karhunen-Loeve
12. Sterling, M.J.H., and Antcliffe, D.J.: 'A technique for the
13. Nicholson, H., and Swann, C.D.: 'The prediction of traffic flow
pp.533-538.
14. Saito, 0., and Takeda, H.: 'Two-stage predictor of air pollution
136, pp.295-336.
17. Box, G.E.P., and Jenkins, G.M.: 'Time series analysis, forecasting
18. Garbow, B.S., Boyle, J.M., Dongarra, J.J., and Moler, C.B.:
21. Srinivasan, K., and Pronovost, R.: 'Short term load forecasting using
(figure: partitioned data matrix with known blocks Y1 (rows), Z1 (columns) and X21, and the unknown corner block X22; the arrow indicates the direction of extrapolation)
(figure: actual and predicted numbers of passengers, 150 to 600 thousand, for the years 1954 to 1960)

Fig. 3: The international passengers leaving and entering the USA per month
(Fig. 4: actual and predicted load levels)
PART 3
of Balanced Systems
CHAPTER 5
1. Introduction
    ẋ = Ax + Bu,   y = Cx

    x ∈ M_{n,1},   u ∈ M_{m,1},   y ∈ M_{r,1}

    A ∈ M_{n,n},   B ∈ M_{n,m},   C ∈ M_{r,n}

In this case

    W_c = W_o = W,   W_c, W_o, W ∈ M_{n,n}

where

    W_c = ∫_0^∞ e^{At} B B^T e^{A^T t} dt

    W_o = ∫_0^∞ e^{A^T t} C^T C e^{At} dt
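In practice the Gramians are obtained from the Lyapunov equations A W_c + W_c A^T = −B B^T and A^T W_o + W_o A = −C^T C rather than from the integrals; a sketch with assumed system matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Small asymptotically stable system (assumed values).
A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
# For this diagonal A, (Wc)_ij = b_i b_j / (|a_ii| + |a_jj|).
```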
A linear system can be balanced using similarity transformations 6

    A = [ A11  A12        B = [ B1        C = [ C1  C2 ]
          A21  A22 ]            B2 ]

where the matrices A11 and A22 are square. The two subsystems S(A_ii, B_i, C_i) are also input-output balanced and the Gramians for
    A_ii W_i + W_i A_ii^T = −B_i B_i^T

    A_ii^T W_i + W_i A_ii = −C_i^T C_i,        i = 1,2

where W = diag(W_1, W_2), and

    A_ij W_j + W_i A_ji^T = −B_i B_j^T

    W_i A_ij + A_ji^T W_j = −C_i^T C_j,        i,j = 1,2
1. 1.J J1. J
If S(A22,B2,C2) is a weak subsystem, then the diagonal elements of the W_2 Gramian will be small in comparison with those of W_1. By eliminating the weaker subsystem S(A22,B2,C2), we obtain the reduced order model S(A11,B1,C1).
defined by
is approximated by

    ẋ = Ā x + B̄ u,   y = C̄ x

where

    Ā = A11 − A12 A22^{-1} A21

    B̄ = B1 − A12 A22^{-1} B2

    C̄ = C1 − C2 A22^{-1} A21
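A small numerical sketch of this first-order (residualized) approximation, with assumed matrices. Setting the fast derivative to zero preserves the dc gain −C A^{-1} B exactly, provided the feedthrough term D̄ = −C2 A22^{-1} B2 picked up from the eliminated states is also retained (the strictly proper model above omits it):

```python
import numpy as np

# Assumed 2-state system with one slow and one fast state.
A = np.array([[-1.0, 0.5],
              [0.2, -10.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[1.0, 0.3]])
A22inv = np.linalg.inv(A[1:, 1:])
Abar = A[:1, :1] - A[:1, 1:] @ A22inv @ A[1:, :1]
Bbar = B[:1] - A[:1, 1:] @ A22inv @ B[1:]
Cbar = C[:, :1] - C[:, 1:] @ A22inv @ A[1:, :1]
Dbar = -C[:, 1:] @ A22inv @ B[1:]          # feedthrough from x2
g_full = -C @ np.linalg.inv(A) @ B         # dc gain of the full model
g_red = -Cbar @ np.linalg.inv(Abar) @ Bbar + Dbar
```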
model reduction problems and also in data contraction 10, the Kron method of tearing 3, and elsewhere 7.
    2 w_ii a_ii = −Σ_j b_ij^2
where the lower case letters indicate the elements of the respective
systems,
(b) with small values of b_ij, which means that the states x_i are not strongly excited by impulses.
not strongly excited by impulses.
(a) will prevail. Thus, fast subsystems and weak subsystems have
Perturbation Method
    W_1 A^T + A W_1 = −B B^T                                         (3)

    T_2 = −A21 (A22^T)^{-1} T_1,   T_2 ∈ M_{r,n}
6. Numerical Procedure
S(A,B,C)
7. Example

by

    A = [ −6.560×10^{-3}  −7.577×10^{-2}   7.390×10^{-4}   3.564×10^{-3}
           7.577×10^{-2}  −8.383×10^{-3}   9.204×10^{-4}   4.445×10^{-3}
          −9.171×10^{-4}   1.142×10^{-3}  −9.219×10^{-2}  −3.086
           3.597×10^{-3}  −4.486×10^{-3}   3.136          −1.816 ]

    B = [ −4.713
           4.831
          −3.293×10^{-1}
           1.292 ]

    C = [ 5.530×10^{-4}   1.231×10^{-3}  −1.951×10^{-1}  −1.743×10^{-1}
          4.713           4.831          −2.653×10^{-1}  −1.281 ]

with W = diag (1.693×10^3   1.392×10^3   5.882×10^{-1}   4.597×10^{-1})

    λ(A11) = −7.472×10^{-3} ± j 7.577×10^{-2}

    λ(A22) = −9.543×10^{-1} ± j 2.989

Thus, the subsystem S(A22,B2,C2) is considerably weak and fast compared with the subsystem S(A11,B1,C1).
The balanced reduced-order model is then given by,

    Ā = [ ...             ...
          7.577×10^{-2}   −8.380×10^{-3} ],   B̄ = [ −4.713
                                                     4.831 ]
8. Conclusions
methods.
In some physical systems, singular perturbational parameters can be explicitly identified 1. However, in general, this may not
equations. The second order modes of the system (i.e. the diagonal
such aspects.
9. References
1. Kokotovic, P.V., O'Malley, R.E., and Sannuti, P.: 'Singular
London, 1978.
,
4. Fernando, K.V.M., and Nicholson, H.: 'Karhunen-Loeve expansion
pp.246-253.
12. O'Malley, R.E., Anderson, L.R.: 'Singular perturbations, order
which have fast subsystems and which are represented in the internally
problem.
1. Introduction
for model reduction since equal amounts of information about control la-
For continuous-time balanced systems, Moore 3 proposed direct
not generalize for the discrete-time case in the sense that the reduced-
    W_c(p) =

    W_o(p) =
If the system S(A,B,C) is a principal axis representation, then the
a principal axis representation by using similarity transformations 1,2.
For the infinite time definition with p → ∞, the Gramian matrices

    W_c − A W_c A^T = B B^T

    W_o − A^T W_o A = C^T C
For the internally balanced system S(A,B,C), the matrix equations are given by

    W − A W A^T = B B^T                                              (1)

    W − A^T W A = C^T C                                              (2)
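The discrete-time Gramians are computed from these Stein (discrete Lyapunov) equations; a sketch with assumed matrices:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Assumed stable discrete-time system (eigenvalues inside the unit circle).
A = np.array([[0.5, 0.1],
              [0.0, 0.9]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])
Wc = solve_discrete_lyapunov(A, B @ B.T)     # Wc - A Wc A^T = B B^T
Wo = solve_discrete_lyapunov(A.T, C.T @ C)   # Wo - A^T Wo A = C^T C
```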
system
x(k+l) = Ax(k) + Bu(k)
d
x (k+l) -
d
by the dual state Xi is small.
format,

    A = [ A11  A12
          A21  A22 ]
,
the submatrices W_i, i = 1,2 are not balanced Gramians of the subsystems.
eigenvalues of the subsystem are large (i.e. s → −∞) in the complex plane, and a subsystem is said to be slow if its eigenvalues are near the origin (i.e. s ≈ 0). We carry this definition to the discrete-time case where
slow states can be approximated using the fast states which is the
(slow-time)
(fast-time)

A21 are small and that the subsystem matrix A22 is fast. With μ → 0,
Xl (k+l)
(slow-time)
B =
We observe that the inverse of the matrix (I-A 22 ) always exists under
associated with these subsystems will vanish quickly and thus the overall
do not imply that all weak subsystems are fast and thus, this property
of reduction.
lO
Numerical experience (see Chapter 5) with continuous-time systems
subsystems are substantially weak and this property also can be expected
in discrete-time systems.
    W_1 − Ā^T W_1 Ā = C̄^T C̄                                        (4)

    [ I  |  A21^T ((I − A22)^T)^{-1} ]
The next proposition indicates that the reduced model has very
desired properties.
6. Numerical Procedure
use similarity transformations 4,9 to give the balanced system S(A,B,C)

(a) S(A,B,C)

and A21

(d) if (c) is true, calculate the first-order singular perturbational approximation

    Ā = A11 + A12 (I − A22)^{-1} A21

    B̄ = B1 + A12 (I − A22)^{-1} B2

    C̄ = C1 + C2 (I − A22)^{-1} A21
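A numerical sketch of this discrete residualization (assumed matrices): freezing the fast states through x2 = (I − A22)^{-1}(A21 x1 + B2 u) preserves the dc gain H(1) = C (I − A)^{-1} B, provided the feedthrough term C2 (I − A22)^{-1} B2 from the eliminated states is also retained:

```python
import numpy as np

# Assumed 2-state discrete system; the second state is fast.
A = np.array([[0.9, 0.1],
              [0.05, 0.2]])
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 1.0]])
M = np.linalg.inv(np.eye(1) - A[1:, 1:])   # (I - A22)^{-1}
Ar = A[:1, :1] + A[:1, 1:] @ M @ A[1:, :1]
Br = B[:1] + A[:1, 1:] @ M @ B[1:]
Cr = C[:, :1] + C[:, 1:] @ M @ A[1:, :1]
Dr = C[:, 1:] @ M @ B[1:]                  # feedthrough picked up from x2
H1_full = C @ np.linalg.inv(np.eye(2) - A) @ B
H1_red = Cr @ np.linalg.inv(np.eye(1) - Ar) @ Br + Dr
```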
7. Conclusions
8. References
1. Mullis, C.T., and Roberts, R.A.: 'Synthesis of minimum roundoff
5. Sandell, Jr., N.R., Variya, P., Athans, M., and Safonov, M.C.:
~
pp.925-936.
8. Blankenship, C.: 'Singularly perturbed difference equations in
this method are exactly balanced and retain the dominant modes of
1. Introduction
l
The balanced model-order reduction method of Moore is based
of the system in the balanced format, in which case the two Gramian
matrices are equal and diagonal. The diagonal values are called
the second order modes of the system. The robust part of the system
part (if any) by low values. The direct removal of the weak sub-
2. Preliminaries
For the continuous-time asymptotically stable time-invariant
system S(A,B,C)
    A W_c + W_c A^T = −B B^T
If the system S(A,B,C) is a balanced representation 1, then the
The diagonal values of the Gramian matrix Ware called the second-
order modes, and we assume that they are ordered in the non-increasing
the format
B C =
A
1·
By elimination of the weak subsystem S(A22,B2,C2), we may obtain the reduced-order approximation of the original system as the subsystem S(F,G,H)
x(k+l) = Fx(k) + Gu(k) y(k) = Hx(k)
equations given by
    W_c − F W_c F^T = G G^T

    W_o − F^T W_o F = H^T H
    W_c = W_o = W

    A = −(I+F)^{-1}(I−F)

    B = ±√2 (I+F)^{-1} G

    C = ±√2 H (I+F)^{-1}

S(A11,B1,C1) is given by

    A11 = −(I+F̄)^{-1}(I−F̄)

    B1 = ±√2 (I+F̄)^{-1} Ḡ

    C1 = ±√2 H̄ (I+F̄)^{-1}

where

    F̄ = F11 − F12 (I+F22)^{-1} F21

    Ḡ = G1 − F12 (I+F22)^{-1} G2

    H̄ = H1 − H2 (I+F22)^{-1} F21
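The bilinear relations can be checked numerically (assumed discrete system): the mapped continuous system has exactly the same Gramians as the discrete one, which is why the balanced property is preserved under the transformation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_discrete_lyapunov

# Assumed stable discrete system S(F,G,H).
F = np.array([[0.5, 0.1],
              [0.0, -0.3]])
G = np.array([[1.0], [1.0]])
H = np.array([[1.0, 0.5]])
I = np.eye(2)
A = -np.linalg.inv(I + F) @ (I - F)            # bilinear map of F
B = np.sqrt(2.0) * np.linalg.inv(I + F) @ G
C = np.sqrt(2.0) * H @ np.linalg.inv(I + F)
Wd = solve_discrete_lyapunov(F, G @ G.T)       # discrete Gramians
Wod = solve_discrete_lyapunov(F.T, H.T @ H)
Wc = solve_continuous_lyapunov(A, -B @ B.T)    # continuous Gramians
Woc = solve_continuous_lyapunov(A.T, -C.T @ C)
shared = np.allclose(Wd, Wc) and np.allclose(Wod, Woc)
```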
This result may be verified using the well known matrix lemma 10 for
matrix.
Thus, the required approximation corresponding to subsystem
systems.
that the matrices Fll and F21 which define the interaction between
approximations as
to the requirement that the eigenvalues of the matrix F22 are near
values of the matrix Fll are away from z =1 and thus the subsystem
'fast-time' as S(F,G,H).
F g (zo)
Gg(zo)
Hg(zo)
be derived.
    = S(A_g(z_0), B_g(z_0), C_g(z_0))
which was derived by Fernando and Nicholson (Chapter 6).
approach.
6. Conclusions
suggested by Verriest and Kailath 9 but was not elaborated by those
7. References
1. Moore, B.C.: 'Principal component analysis in linear systems:
CHAPTER 8
Model-Order Reduction
These two approaches are dual to each other and such reciprocal
reciprocal approaches.
1. Introduction
Moore 1 was able to interpret, in an elegant manner, the minimal
the system and the observable part which are excited by impulse
where the weak subsystem corresponds to fast dynamics and the strong
behave in this manner, and the concept has been exploited in modal
weak subsystem and the strong subsystem are due to slow and fast
S(A,B,C) described by
    ẋ(t) = A x(t) + B u(t),   y(t) = C x(t)
    W_c^2 = ∫_0^∞ (e^{At} B)(e^{At} B)^T dt

    W_o^2 = ∫_0^∞ (e^{A^T t} C^T)(e^{A^T t} C^T)^T dt

    W_c^2 = W_o^2 = W^2
Internally balanced representations can be obtained using similarity
called the second-order modes of the system and we assume that they
    F = A^{-1},   G = A^{-1} B,   H = C A^{-1}

The reciprocal system S(F,G,H) is also controllable and observable and has the same diagonal Gramian matrix W^2.
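The shared-Gramian property of the reciprocal system can be checked directly (assumed stable system): substituting F = A^{-1}, G = A^{-1}B into F W + W F^T = −G G^T and multiplying by A on the left and A^T on the right recovers the original Lyapunov equation, so the Gramian is unchanged.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Assumed stable system and its reciprocal.
A = np.array([[-1.0, 0.4],
              [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
F = np.linalg.inv(A)
G = F @ B
H = C @ F
Wc = solve_continuous_lyapunov(A, -B @ B.T)    # Gramian of S(A,B,C)
WcR = solve_continuous_lyapunov(F, -G @ G.T)   # Gramian of S(F,G,H)
same = np.allclose(Wc, WcR)
```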
by the system S(A,B,C) and thus these two systems are dual to each
system S(F,G,H)
    H F^{k−1} G = C A^{−k−1} B,   k ≥ 1
2
the diagonal Gramians Wi ' i - 1,2 can be associated with the internally
the weak subsystem and thus, the reduced-order model is simply given
= -
=
It can be shown 4 that the reduced-order model S(A11,B1,C1) is also an
with -
Thus. a direct duality exists between direct and reciprocal elimination
4. An Illustrative Example
transfer-function,
(s+4)
(s+l) (s+3) (s+5) (s+lO)
which has been previously studied by Meier and Luenberger 6, Wilson 7, and recently by Moore 1 and Fernando and Nicholson 9 (Chapter 11).
A =
-1.168
0.4143
-3.135
2.835
-2.835
-12.48
-0.3289
-3.249
, B
- -0.1307
o ~O5634
-0.05098 -0.3289 3.249 -2.952 -0.006875
2
and the balanced Gramian matrix W^2 is given by,

    W^2 = diag[ 0.01954   0.272×10^{-2}   0.1272×10^{-3}   0.8006×10^{-5} ]
The system can be decomposed into 2×2 subsystems, in the natural order,

result is given by

    A11 = [ -0.4249   1.257         B1 = [ -0.1164
            -1.257   -3.735 ]              -0.1427 ]

    λ(A11) = -1.113,  -2.460
Thus, the weak subsystem is faster than the strong subsystem, although
scalar het) denotes the impulse response of the original system and
defined as,

    minimize ∫_0^∞ h_e^2(t) dt
optimization.
5. Conclusions
We have demonstrated that a duality exists between direct
criteria.
reciprocal elimination    −0.006808(s − 12.29) / ((s + 1.003)(s + 3.158))    0.06931

'optimal'                 −0.003222(s − 22.66) / ((s + 1.099)(s + 2.511))    0.03929
6. References
pp.585-588.
7. Wilson, D.A.: 'Optimum solution of model-reduction problem',
London, 1978.
9. Fernando, K.V., and Nicholson, H.: 'On the Cauchy index of linear
(figure: gain (dB), −20 to −260, versus frequency w rad/sec, 0.1 to 100; curves: a - reciprocal elimination, b - direct elimination, c - original system)

(figure: magnitude, 1×10^{-2} to 1×10^{-5}, versus frequency w rad/sec, 0.1 to 10; curves: a - reciprocal elimination, b - direct elimination, c - original system)
PART 4
CHAPTER 9
reduction is highlighted.
1. Introduction
Moore 1 used concepts from the principal component analysis of
are required.
In this Chapter, we define a new matrix Wco which can be considered
component analysis.
investigated.
The spectral structure of the matrix Wco is paramount in our
analysis and in fact the absolute values of the spectrum are given by
between the singular values and the dc gain of the system and the
reduction.
2. Preliminaries
where the term e^{At} b represents the impulse response of the states of
dual system.
with

    W_o^2 = ∫_0^∞ (e^{A^T t} c^T)(e^{A^T t} c^T)^T dt

Lyapunov equations

    W_c^2 A^T + A W_c^2 = −b b^T                                     (1)

    W_o^2 A + A^T W_o^2 = −c^T c                                     (2)
Internally balanced
Table 1

    original         transformed
    A                P^{-1} A P
    b                P^{-1} b
    c                c P
    W_c^2            P^{-1} W_c^2 (P^{-1})^T
    W_o^2            P^T W_o^2 P
    W_c^2 W_o^2      P^{-1} W_c^2 W_o^2 P
    W_co             P^{-1} W_co P
magnitude.
We observe that the normal and the internally balanced forms
format which encompasses the normal and the balanced forms can be
(a) b̂_i = ± ĉ_i

(b) either a_ij = a_ji or â_i^2 = â_j^2 if b̂_i b̂_j = ĉ_i ĉ_j ≠ 0

Proof: Since the Gramian matrices W_c^2(P), W_o^2(P) are diagonal and equal to Σ̂^2, the diagonal elements of (1) and (2) are of the form,

    2 a_ii â_i^2 = −b̂_i^2 = −ĉ_i^2   for i = 1,n

and the difference and the sum of (3) and (4) are of the form,

    (â_j^2 − â_i^2)(a_ij − a_ji) = −b̂_i b̂_j + ĉ_i ĉ_j              (5)

    (â_j^2 + â_i^2)(a_ij + a_ji) = −b̂_i b̂_j − ĉ_i ĉ_j              (6)
in the matrix A, or â_i^2 = â_j^2. If the second possibility b̂_i b̂_j = −ĉ_i ĉ_j ≠ 0 is satisfied, then from (6), the element a_ij appears skew-symmetrically.

If the remaining possibility, b̂_i b̂_j = ĉ_i ĉ_j = 0, is true, then from (3) and (4) one possibility is a_ij^2 = a_ji^2. However, since â_i^2 and â_j^2 are positive, the condition a_ij = a_ji is not admissible and hence a_ij = −a_ji and â_i^2 = â_j^2. The other possibility is a_ij = a_ji = 0.
The appearance of non-symmetrical or non-skewsymmetrical elements
Proof: Without loss of generality assume that the first two singular values of the system S(A,b,c) are non-distinct and thus, the two

    Q = M ⊕ I_{n−2},   M = (1/√(m^2+1)) [ 1   m
                                          −m  1 ]

    (1+m^2) ā_12 = a_12 − (a_22 − a_11) m − a_21 m^2

    (1+m^2) ā_21 = a_21 − (a_22 − a_11) m − a_12 m^2

A real solution for m always exists for the above equation and the

is proved. □

orthogonal transformation. □
    W_co A + A W_co = −b c                                           (8)
It was shown by Moore 1 that it is inadequate and sometimes
fundamental to it.
eigenvalue matrix Λ as

    λ_i = σ_i^2     if c_i = b_i ≠ 0

    λ_i = −σ_i^2    if c_i = −b_i
(b) the square of the matrix W_co(P) is equal to the product

    W_co^2(P) = W_c^2(P) W_o^2(P)

    u_i = 1     if c_i = b_i ≠ 0

    u_i = −1    if c_i = −b_i
The i,j th element of the matrix U A^T U is equal to u_i u_j a_ji and we consider all four possibilities:

    c_i = b_i,    c_j = b_j :     a_ij = a_ji,     u_i u_j a_ji = a_ij

    c_i = −b_i,   c_j = −b_j :    a_ij = a_ji,     u_i u_j a_ji = a_ij

    c_i = b_i,    c_j = −b_j :    a_ij = −a_ji,    u_i u_j a_ji = a_ij

    c_i = −b_i,   c_j = b_j :     a_ij = −a_ji,    u_i u_j a_ji = a_ij

Thus, U A^T U = A, and by comparing (9) with (8), we obtain the required result (part a)
    W_co(P) =                                                        (10)

and hence

    = W_co^2(P)
It is easily seen that this result is true even under any arbitrary
similarity transformation F, which completes the proof. □
then the corresponding matrix W_co(P) is diagonal and the diagonal

    = σ_i^2,   or   = −σ_i^2   if sign(c_i) = −sign(b_i),   b_i, c_i ≠ 0

    W_c^2(P) = −diag( ..., b_i^2 / 2a_ii, ... )

    W_o^2(P) = −diag( ..., c_i^2 / 2a_ii, ... )   for a_ii ≠ 0

    a_ij = −a_ji

However, due to theorem 1, λ_i = σ_i^2, λ_j = −σ_j^2
J
    A = [ A11  A12        b = [ b1        c = [ c1  c2 ]
          A21  A22 ]            b2 ]

    Ā = [ Ā11  Ā12        b̄ = [ b̄1        c̄ = [ c̄1  c̄2 ]
          Ā21  Ā22 ]            b̄2 ]

with order of A11 = order of Ā11 etc, then the balanced representation

then

which completes the proof. □
= ,
as the total "energy" of the system and relative error ratios can be
Theorem 3: The sum of the eigenvalues λ_i, i = 1,n gives half the dc gain:

    trace Λ = −(1/2) c A^{-1} b

Proof:

    trace W_co = ∫_0^∞ c e^{2At} b dt = −(1/2) c A^{-1} b

Since trace W_co = trace Λ, the result follows. □
the above theorem. Instead of the requirement that the trace of the
model and the original model should have the same dc gain and due to
the direct relationship between the singular values and the dc gain,
(a) Compute the matrix W_co as the solution of (8) using any standard algorithm 2

(b) Compute the spectral decomposition

    W_co = V Λ V^{-1}

where V is an eigenvector matrix and Λ is the diagonal eigenvalue matrix.

(c) Compute the principal representation S(Ā,b̄,c̄) given by

    S(Ā,b̄,c̄) = S(V^{-1} A V, V^{-1} b, c V)
- -
where

    p_i = (u_i b_i / c_i)^{1/2}   if c_i ≠ 0

    p_i = 1                       if c_i = 0                         □
were obtained.
In references 3 and 4 and elsewhere, computation of principal
7. Conclusions
has been studied. It was shown that its properties can be used in
of the system and the dc gain. It was explained how this property
8. References
1. Moore, B.C.: 'Principal component analysis in linear systems:
CHAPTER 10
1. Introduction
The problem of determining minimality of state-space representations
general systems theory. Much effort (see for example Kailath²) has
criteria.
Fernando and Nicholson³ (see Chapter 9) defined a cross-Gramian
2. Preliminaries
S(A,b,c) defined by
W_c = ∫₀^∞ (e^{At}b)(e^{At}b)^T dt   (1)

W_o = ∫₀^∞ (ce^{At})^T(ce^{At}) dt   (2)

W_c A^T + A W_c = -bb^T   (3)

W_o A + A^T W_o = -c^Tc   (4)
instead of (1) and (2), then the assumptions regarding the asymptotic
matrix Wco as
W_co = ∫₀^∞ (e^{At}b)(ce^{At}) dt

given by

W_co A + A W_co = -bc   (5)
3. The results
We present the following result which relates the three Gramian
matrices.
Proposition 1: W_c W_o = W_co²

(W_c)_ij = -b_i b_j / (a_ii + a_jj)

(W_o)_ij = -c_i c_j / (a_ii + a_jj)

(W_co)_ij = -b_i c_j / (a_ii + a_jj)

and thus, W_c W_o = W_co²
sufficient.
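Proposition 1 can be confirmed numerically. A minimal sketch (random stable system assumed; `scipy` solvers used for equations (3)-(5)):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
A -= (max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stable A
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -b @ b.T)     # A Wc + Wc A^T = -b b^T
Wo = solve_continuous_lyapunov(A.T, -c.T @ c)   # A^T Wo + Wo A = -c^T c
Wco = solve_sylvester(A, A, -b @ c)             # Wco A + A Wco = -b c

# Proposition 1: Wc Wo = Wco^2
assert np.allclose(Wc @ Wo, Wco @ Wco)
```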
W_c - A W_c A^T = bb^T   (6)

W_o - A^T W_o A = c^Tc   (7)

W_co - A W_co A = bc   (8)
We also assume that there are no eigenvalues of the state matrix A
such that
λ_i(A) λ_j(A) = 1 for i = 1,n , j = 1,n
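For small systems the discrete-time equations (6)-(8) can be solved directly by vectorization, since vec(AXA) = (A^T ⊗ A)vec(X) for a column-major vec; the eigenvalue condition above guarantees the linear system is nonsingular. A sketch for the cross-Gramian equation (8) (illustrative code, not an algorithm from the text):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
M = rng.standard_normal((n, n))
A = 0.5 * M / max(abs(np.linalg.eigvals(M)))  # spectral radius 0.5, so no pair lambda_i lambda_j = 1
b = rng.standard_normal((n, 1))
c = rng.standard_normal((1, n))

# vec(A W A) = (A^T kron A) vec(W) with column-major (Fortran-order) vec,
# so (8) becomes (I - A^T kron A) vec(W_co) = vec(b c)
I = np.eye(n * n)
w = np.linalg.solve(I - np.kron(A.T, A), (b @ c).flatten(order="F"))
Wco = w.reshape((n, n), order="F")

assert np.allclose(Wco - A @ Wco @ A, b @ c)
```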
5. Conclusions
6. References
NJ, 1980.
3. Fernando, K.V., and Nicholson, H.: 'On the structure of balanced
4. Golub, G.H., Nash, S., Van Loan, C.: 'A Hessenberg-Schur method
CHAPTER 11
Abstract: The Cauchy index for linear SISO systems is given by the
1. Introduction
distinct poles, the Cauchy index is equal to the number of real poles
residues.
Fernando and Nicholson¹ (Chapter 9) defined a cross-Gramian
W_c²(T) = ∫₀^T (e^{At}b)(e^{At}b)^T dt ,  T > 0   (1)

W_o²(T) = ∫₀^T (ce^{At})^T(ce^{At}) dt ,  T > 0   (2)
Fernando and Nicholson¹ (Chapter 9) demonstrated that the information
and the observable states, while the controllable and the observable
ẋ(t) = Ax(t) + Bu(t)

with

p > n   (4)

Q_c(p) = [b  Ab  ...  A^{p-1}b] ,  Q_o(p) = [c^T  (cA)^T  ...  (cA^{p-1})^T]^T   (5)

W_o²(p) = Q_o^T(p) Q_o(p)   (7)

W_c²(p) = Q_c(p) Q_c^T(p)   (8)

and W_co(p) = Q_c(p) Q_o(p)   (9)

where

H(p) = [h_{i+j-1}] ,  h_k = cA^{k-1}b   (10)
The similarity between the Hankel matrix H(p) and the cross-
g(s) = Σ_{k=1}^∞ h_k s^{-k}

H(p) = [ h_1     h_2      ...   h_p
         h_2     h_3      ...   h_{p+1}
         .       .              .
         h_p     h_{p+1}  ...   h_{2p-1} ]
which relates the Cauchy index with the signature of the Hankel matrix R satisfying

R A = A^T R ,  R b = c^T

Proof: It is obvious from (9) and (10) that all non-zero eigenvalues of the Hankel matrix H(p) are given by the eigenvalues of the cross-Gramian matrix W_co(p). Thus, the signatures of these matrices are

R Q_c(n) = Q_o^T(n)

R W_c²(T) = W_co^T(T)
that is, b_i = c_i or b_i = -c_i for all i.

(b) the cross-Gramian matrix W_co(∞) = Λ is diagonal.

If b_i = c_i then λ_i = σ_i, and

Λ² = Σ² = diag( ..., σ_i², ... )

(d) If b_i b_j = c_i c_j ≠ 0 then a_ij = a_ji

If b_i b_j = -c_i c_j then a_ij = -a_ji

r_i = 1 if b_i = c_i

r_i = -1 if b_i = -c_i
g(s) = (s + 4) / ((s+1)(s+3)(s+5)(s+10))

     = 1/(24(s+1)) - 1/(28(s+3)) - 1/(40(s+5)) + 2/(105(s+10))
Thus, the Cauchy index for this system is zero which is one of the
W_c² = W_o² = Σ² = diag(0.0159, 0.272×10⁻², 0.126×10⁻³, 0.5×10⁻⁵)
The cross-Gramian matrix Wco is also diagonal and equal to the balanced
order           4    3    2    1
Cauchy index    0    1    0    1
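The residue computation behind this example can be reproduced with `scipy.signal.residue` (a sketch; the sign-counting rule assumes distinct real poles, as here):

```python
import numpy as np
from scipy.signal import residue

# g(s) = (s + 4) / ((s+1)(s+3)(s+5)(s+10)); with distinct real poles the
# Cauchy index is (#positive residues) - (#negative residues)
num = [1.0, 4.0]
den = np.poly([-1.0, -3.0, -5.0, -10.0])
r, p, k = residue(num, den)

cauchy_index = int(np.sum(np.sign(r.real)))
assert cauchy_index == 0   # two positive and two negative residues
```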
ĝ(s) = 1/(24(s+1)) + 1/(28(s+3)) + 1/(40(s+5)) + 2/(105(s+10))
which is equal to the transfer function g(s) except for the positions
of the zeros.
The Cauchy index for this transfer function is four, and thus
order           4    3    2    1
Cauchy index    4    3    2    1
However, such inference cannot be used for the transfer function g(s),
eliminating the second and the fourth states (instead of the third
and fourth), the reduced model will not have right half-plane zeros.
7. Conclusions
matrix and we have proved that the information contained in the Hankel
8. References
(4), pp.449-455.
York, 1976.
PART 5
Input-output Behaviour
CHAPTER 12
Individual Inputs
1. Introduction
linear dynamical systems by Kalman two decades ago, much effort has
(W-matrix) ,
namely the determinant and the trace of the inverse of the W-matrix,
k(W) = ||W|| ||W⁻¹||

If ||·|| is taken as the spectral norm of W, the condition number is then given by

k(W) = λ_max(W) / λ_min(W)
p(z) =
where a =
function
, z^i ∈ S ,  i = 1,m

is a measure of 'oscillatory energy' of the vector z^i. The loci of
eigenvalues. Thus, the role of the matrix Λ⁻¹ is to de-weight heavily the 'modes' which are in the set S, and lightly the 'modes' which are not. The expected
Although this is a distance measure, due to the form of
W = ∫₀^T x(t) x^T(t) dt
all other inputs being held zero, the response of the system is
given by
x(t) = x^i(t)
due to the i'th input only using the Mahalanobis distance. This is
by
=
The result follows due to the invariance of the trace under
similarity transformations.
(b) The sum of the indexes is equal to unity,

Σ_{i=1}^m d_i² = 1
The matrix W^i is given by the solution of the Lyapunov equation, and the result is due to the fact that W = Σ_{i=1}^m W^i.
(c) The indexes are always positive, and less than unity (equality
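The decomposition W = Σ_i W^i behind these properties follows from the linearity of the Lyapunov equation and is easy to confirm numerically (the random example system below is an assumption, not from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(4)
n, m = 4, 3
A = rng.standard_normal((n, n))
A -= (max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stable A
B = rng.standard_normal((n, m))

# full Gramian and the per-input Gramians W^i
W = solve_continuous_lyapunov(A, -B @ B.T)
W_i = [solve_continuous_lyapunov(A, -np.outer(B[:, i], B[:, i])) for i in range(m)]

# since B B^T = sum_i b^i (b^i)^T, linearity gives W = sum_i W^i
assert np.allclose(W, sum(W_i))
```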
5. Extension
diagonal matrix.
6. Conclusions
7. References
3. Mitra, D.: 'W matrix and the geometry of model equivalence and
1969.
10. Baram, Y., and Sandell, N.R.: 'An information theoretic approach
to dynamical systems modelling and identification', IEEE Trans.
11. Barnett, S., and Storey, C.: 'Matrix Methods in Stability Theory',
CHAPTER 13
1. Introduction
example, for a system with ten inputs and ten outputs (which is not
of the Bristol measure¹⁵,¹⁶,²², based on the conditioning number
dc gain of the system. The Bristol measure has been used widely in
processes.
Gramian matrix, which led Moore¹ to define balanced and other principal
realizations. Moore has shown the advantages of working directly on
of the system.
- 160 -
has been the problem of scaling the system to exemplify the inter-
actions. As pointed out by Brockett et al²¹, a unified scaling theory
a typical example.
In estimation and other stochastic problems, cross-correlations
(Chapter 12).
In common with the references 1-6, these measures are defined
outputs, explicitly.
matrices.
W_c^i = ∫₀^∞ (e^{At} b^i)(e^{At} b^i)^T dt

The vector b^i denotes the ith column of the matrix B and u_k(t) is the kth input.
In a statistical formulation, the matrix W_c^i can be considered as the (auto) covariance of the states x^i(t), under the white noise input

E[u_i(t)] = 0
Similarly, the observability Gramian matrix W_o^j is defined by

W_o^j = ∫₀^∞ (c_j e^{At})^T (c_j e^{At}) dt

where c_j denotes the jth row of the matrix C. The dual system is characterized by,

ẋ_d(t) = A^T x_d(t) + C^T v(t)

y_d(t) = B^T x_d(t)

and the corresponding dual Gramian matrix is W_o^j.
the inputs and the outputs of the system. It is well known that in
similar problems, the cross effects between the two processes x^i(t)
(Chapters 9-11),
W_co^{ij}
With the same white noise input in the controllable and the observable systems,

u_i(t) = v_j(t)

the matrix W_co^{ij} is given by

W_co^{ij} = ∫₀^∞ e^{At} b^i c_j e^{At} dt

W_co^{ij} A + A W_co^{ij} = -b^i c_j
The fundamental relationship between the cross-Gramian matrix W_co^{ij} and the controllability and observability Gramians W_c^i and W_o^j is given by⁵,¹⁸,¹⁹

(W_co^{ij})² = W_c^i W_o^j
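This relationship can be checked directly for one input-output pair (illustrative code with a random stable system; not an algorithm from the references):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

rng = np.random.default_rng(5)
n, m, p = 4, 2, 2
A = rng.standard_normal((n, n))
A -= (max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stable A
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

i, j = 0, 1
bi = B[:, [i]]   # ith column of B
cj = C[[j], :]   # jth row of C

Wc_i = solve_continuous_lyapunov(A, -bi @ bi.T)
Wo_j = solve_continuous_lyapunov(A.T, -cj.T @ cj)
Wco_ij = solve_sylvester(A, A, -bi @ cj)

# (W_co^{ij})^2 = W_c^i W_o^j
assert np.allclose(Wco_ij @ Wco_ij, Wc_i @ Wo_j)
```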
3. Input-output relationships
For the system S(A,B,C), driven by white noise inputs of the form,
E [u(t)] = 0
the cross-covariance between the jth output and the ith input is given
by
= ∫₀^∞ c_j e^{At} b^i dt = -c_j A⁻¹ b^i

which is the negative of the first moment of the system or the dc gain. Furthermore,

-c_j A⁻¹ b^i = 2 ∫₀^∞ c_j e^{2At} b^i dt = 2 trace ∫₀^∞ e^{At} b^i c_j e^{At} dt = 2 trace W_co^{ij}

Thus, the matrix W_co^{ij} carries information about the ith input and the jth output, which is not present in the Gramian matrices W_c^i and W_o^j individually. The matrix W_co^{ij} may thus be considered as the carrier of information from the ith input to the jth output.
be achieved by knowing the dc gain alone. This is also true for the
multivariable case where all possible dc gains from each input to all
is obviously different.
heuristic and fuzzy rules which have been determined through past input-
control all outputs (which might be required for the 'optimal' strategy)
gain.
As mentioned earlier, the dc gains are invariant under similarity
the dc gains are not invariant under input or output scaling. Thus, if
may convey the false impression that the particular input is important
systems, we require measures which are also invariant under input and
γ_pq = f_pq² / (f_pp f_qq) ,  0 ≤ γ_pq ≤ 1
to each other then the coherence measure tends to unity. If these are
Similarly, the coherence between the input i and the output j can
be measured by considering the states x^i(t) of the controllable system
and the dual states x_d^j(t) of the observable system. By using scalar
given by
γ_ij = (trace W_co^{ij})² / trace(W_c^i W_o^j) = (trace W_co^{ij})² / trace((W_co^{ij})²)
where both the denominator and the numerator, and hence the measure,
It is also seen that the measure is invariant under input and output
with unity variance white noise in the input u_i(t) from t = -∞, then

as t → 0⁻ ,  E[x(t)x^T(t)] → W_c^i

From t = 0⁺ onwards, if the system is unexcited with u_i(t) = 0, then

E[ ∫₀^∞ (y_j(t))² dt ] = trace(W_c^i W_o^j)

∫₀^∞ t (h^{ij}(t))² dt
dynamic energies.
above depends on both the static and the dynamic behaviour of the
system.
W̄_c^i = W̄_o^j = Σ^{ij}

Since (W_co^{ij})² = W_c^i W_o^j, it is seen that the cross-Gramian matrix W_co^{ij} is diagonal for balanced representations⁵, and we denote this diagonal matrix as V^{ij}. That is, W_co^{ij} = V^{ij}. It is easy to verify that the diagonal values of the matrix V^{ij}, denoted by v_k^{ij}, k = 1,n, are the eigenvalues of the cross-Gramian matrix W_co^{ij}.

In fact, these diagonal values are the singular values (second-order modes) up to sign; if σ_k^{ij} are the diagonal values of the matrix Σ^{ij}, then

|v_k^{ij}| = σ_k^{ij} ,  k = 1,n

trace W_co^{ij} = Σ_{k=1}^n v_k^{ij}

trace(W_c^i W_o^j) = trace((W_co^{ij})²) = Σ_{k=1}^n (v_k^{ij})²

γ_ij = ( Σ_{k=1}^n v_k^{ij} )² / Σ_{k=1}^n (v_k^{ij})²   (1)
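Equation (1) can be evaluated directly from the Lyapunov solutions; a minimal sketch for one input-output pair of a hypothetical system:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

rng = np.random.default_rng(6)
n = 4
A = rng.standard_normal((n, n))
A -= (max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stable A
bi = rng.standard_normal((n, 1))
cj = rng.standard_normal((1, n))

Wc_i = solve_continuous_lyapunov(A, -bi @ bi.T)
Wo_j = solve_continuous_lyapunov(A.T, -cj.T @ cj)
Wco_ij = solve_sylvester(A, A, -bi @ cj)

v = np.linalg.eigvals(Wco_ij).real   # the diagonal values v_k^{ij}
gamma = np.trace(Wco_ij) ** 2 / np.trace(Wc_i @ Wo_j)

# the two forms of the denominator agree, and (1) can use the eigenvalues
assert np.isclose(np.trace(Wc_i @ Wo_j), np.trace(Wco_ij @ Wco_ij))
assert np.isclose(gamma, v.sum() ** 2 / (v ** 2).sum())
```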
mixed exponentials.
For an nth order system with all positively decaying exponentials, the
However, if the Cauchy index lies between these extremes, then non-
The main difficulty with the Cauchy index is that, like the
rank of a matrix⁴, it is essentially a non-robust measure which can
that this measure always takes values between zero and one,
0 ≤ β_ij ≤ 1
system is high, then the measure will take values near unity. If
output j,
(a) Solve the equation W_co^{ij} A + A W_co^{ij} = -b^i c_j using the algorithm given in reference 12 or an equivalent algorithm.

(b) Compute the eigenvalues of the matrix W_co^{ij}, which are denoted by v_k^{ij} , k = 1,n.

(c) Compute the coherence measure γ_ij using equation (1) and/or the
would be to compute the balanced realizations for each input and output.
reasons.
roundoff errors.
wco
is,
(a) Determine the dc gain from input i to output j using step
responses.
∫₀^∞ t (h^{ij}(t))² dt = trace (W_co^{ij})² = trace (W_c^i W_o^j)
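This time-weighted energy identity can be confirmed by quadrature for a small example system (chosen arbitrarily here; `scipy` routines assumed):

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import expm, solve_sylvester

# small, well-damped system so the quadrature tail is negligible
A = np.array([[-1.0, 0.3, 0.0],
              [0.0, -2.0, 0.5],
              [0.1, 0.0, -3.0]])
b = np.array([[1.0], [0.5], [-1.0]])
c = np.array([[0.8, -0.4, 1.2]])

Wco = solve_sylvester(A, A, -b @ c)

# impulse response h(t) = c e^{At} b
h = lambda t: (c @ expm(A * t) @ b).item()
lhs, _ = quad(lambda t: t * h(t) ** 2, 0.0, 60.0)

assert np.isclose(lhs, np.trace(Wco @ Wco), rtol=1e-5)
```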
transfer function,
R(s) = s / ((s+2)(s+1))

was considered by Wang and Davison¹³ to illustrate the difficulties in
first moment of the system is zero and thus the system cannot track a
step input.
The coherence measures are obviously equal to zero,
γ_ij = β_ij = 0
due to the zero at the origin. Thus, our measures also indicate that
This example also indicates that if the first and the higher
if the derivatives are not dominant, then we may expect high coherence
values.
(s + z) / ((s + 1)(s + 3))
residues at the poles are positive (Cauchy index of 2) and thus the
Due to the form of the measures we would expect large values for them
1 / ((s + 1)(s + 3))

are given by
O. 1.10345
0.86487 0.48251
O. 1.0
0.76191 0.48251
the ninth state. There are three main inputs and three significant
Tables 2(a) and 2(b) show half the dc gain and the sum of the
However, from tables 2(c) and 2(d) which give the coherence measures,
These tables indicate that the three inputs are directly related to
solution.
            x1             x2             x6
u1   -1.021×10⁻²    -4.807         -1.248×10³
u2    2.098×10⁻¹     8.470×10¹      2.700×10⁴
u3   -4.764×10⁻³    -6.335×10⁻⁶    -4.645×10²

2(a): Half the dc gain
            x1             x2             x6
u1    8.483×10⁻³     4.698          1.179×10³
u2    2.088×10⁻¹     8.491×10¹      2.695×10⁴
u3    5.093×10⁻³     2.794×10⁻²     4.956×10²

2(b): Sum of the singular values
            x1             x2             x6
u1    1.450          1.047          1.120
u2    1.009          0.995          1.010
u3    0.875          5.1×10⁻⁸       0.878

2(c): The coherence measure γ_ij
            x1             x2             x6
u1    0.958          0.976          0.964
u2    0.891          0.988          0.924
u3    0.775          2.4×10⁻⁸       0.781

2(d): The modified coherence measure β_ij
order model⁹ of an F100 turbofan jet engine which was also considered by Denham et al². This model also has three main inputs and three
Tables 3a and 3b give half the dc gain and the sum of the singular
and these combinations agree with those of Denham et al². However,
since the sum of the singular values used by them appears in the
            y1             y2             y3
u1    5.000×10⁻¹     2.610×10⁻³     6.063×10⁻²
u2   -4.819×10²      8.627          1.412×10²
u3   -7.075         -1.973×10⁻¹    -1.072

3(a): Half the dc gain
            y1             y2             y3
u1    4.339×10⁻¹     2.528×10⁻³     5.864×10⁻²
u2    1.069×10³      9.594          1.508×10²
u3    1.048×10¹      1.938×10⁻¹     1.500

3(b): Sum of the singular values
Y1 Y2 Y3
u1 1.328 1.066 1.069
Y1 Y2 Y3
10. Conclusion
experimentally.
We do not claim that the measures defined in this paper are the
disciplines.
and control of systems as envisaged by Bristol²⁰ and others. These
11. References
111, pp.1479-1499.
Chicago, 1978.
York, 1966.
12. Bartels, R.H., and Stewart, G.W.: 'Solution of the matrix equation
14. Pernebo, L., and Silverman, L.: 'Balanced systems and model reduction',
December 1979.
15. Tung, L.S., and Edgar, T.F.: 'Analysis of control-output interactions
process control', AIChE 71st Annual Meeting, Miami, FL, November 1978.
17. Bristol, E.H.: 'The right half plane'll get you if you don't watch
18. Fernando, K.V., and Nicholson, H.: 'On the Cauchy index of linear
19. Fernando, K.V., and Nicholson, H.: 'On the minimality of SISO linear
21. Brockett, R.W., and Krishnaprasad, P.S.: 'A scaling theory for
pp.197-207.
1979.
CHAPTER 14
1. Introduction
Recently, a metric information measure known as the Mahalanobis
Mahalanobis distance.
Apart from the theoretical importance, such measures can be used
z^i ∈ S ,  i = 1,m

the Mahalanobis distance between the vectors is defined by

d_ij² = (z^i - z^j)^T Φ⁻¹ (z^i - z^j)

where

Φ = E[(z - z̄)(z - z̄)^T] ,  z̄ = E[z]
and gives high values if they are orthogonal and low values when
3. Discrimination of inputs
ẋ = Ax + Bu
is given by
W(T)
for deterministic unit impulses at the inputs. For stochastic inputs
of the form
the system being held zero, the response of the system is given by
x(t) = x^i(t) ,  b^i ∈ M_{n,1}

where the vector b^i is the ith column of the matrix B.
due to ith and jth inputs can be measured using the Mahalanobis
measure as,
d_ij² = trace W⁻¹ {(W^{ii} + W^{jj}) - (W^{ij} + W^{ji})}

where

W^{ij} = ∫₀^∞ e^{At} b^i (b^j)^T e^{A^Tt} dt

W = ∫₀^∞ e^{At} B B^T e^{A^Tt} dt

Obviously, high magnitudes of d_ij² indicate dissimilarity of the controllable subspaces due to inputs i and j.

d_ij² ≥ 0

(c) For input normal systems⁵, the Gramian matrix W is equal to unity, and thus the index can be written in the simplified form,

d_ij² = trace {(W^{ii} + W^{jj}) - (W^{ij} + W^{ji})}   (1)
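A sketch of the measure for a hypothetical system; the pair matrices W^{ij} are obtained from the Lyapunov equation A W + W A^T = -b^i (b^j)^T, consistent with the integral definition above:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n))
A -= (max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stable A
B = rng.standard_normal((n, 3))
B[:, 2] = B[:, 0]   # make input 3 a copy of input 1

def W_pair(i, j):
    # W^{ij} solves A W + W A^T = -b^i (b^j)^T
    return solve_continuous_lyapunov(A, -np.outer(B[:, i], B[:, j]))

W = solve_continuous_lyapunov(A, -B @ B.T)

def d2(i, j):
    M = (W_pair(i, i) + W_pair(j, j)) - (W_pair(i, j) + W_pair(j, i))
    return np.trace(np.linalg.solve(W, M))

assert np.isclose(d2(0, 2), 0.0)   # identical columns give zero distance
assert d2(0, 1) > 0.0              # distinct inputs give a positive distance
```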
5. A modified measure
d_ij² = 0

However, if b^j differs only with respect to sign, then the distance is given by

d_ij² = 4 trace W^{ii}

The modified measure d̄_ij² uses the operator t̄r, defined as

t̄r W = Σ_i |w_ii|

where w_ii are the diagonal elements of the matrix W. Thus, for column vectors b^i and b^j which differ with respect to sign only, the modified distance

d̄_ij² = 0
LJ
6. Conclusions
any two inputs in a multi input system has been defined using the
7. References
pp.330-331.
IT-9, pp.11-17.
PART 6
Closure
CHAPTER 15
Closure
and its extensions are studied, the models assumed for data reduction
it is often assumed that the data is stationary, does not have trends,
non existent, which is the case for most of the phenomena observed
in real life?
not true and there are many situations where orthogonality has been
2. References
pp.713-725.
PART 7
Appendices
APPENDIX 1
problems.
the unknown state or parameter matrix X which are related to the 'n'
Y = HX + E ,  Y ∈ M_{m,n} ,  H ∈ M_{m,p} ,  X ∈ M_{p,n} ,  P ∈ M_{m,m}   (1)
is given by
form
and
is given by
Y = XG^T + E ,  Y ∈ M_{m,n} ,  X ∈ M_{m,q} ,  G ∈ M_{n,q}   (2)
In this case, row i of the matrix Y correlates the same row of the
the ordering of rows and columns are not generic properties. The
expansion
y_ij = Σ_{k,ℓ} h_ik z_kℓ g_jℓ + e_ij

Y = HZG^T + E   (4)
state matrix Z, where Hand G are known maximal rank matrices and
y • Fz + e
where e = vec(E). In a statistical framework, we may also introduce
the properties
E[e] = 0 ,  E[ee^T] = R
where E[·] is the expectation operator and R is the error covariance
J = e^T S e ,  S ∈ M_{mn,mn}   (5)
given by
S = Q ⊗ P

Ẑ = (H^T P H)⁻¹ H^T P Y Q G (G^T Q G)⁻¹   (6)

  = Z + (H^T P H)⁻¹ H^T P E Q G (G^T Q G)⁻¹
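As the second line shows, the estimator (6) reduces to the true state matrix when the error term is zero, which gives a quick sanity check (hypothetical dimensions; identity weighting matrices for simplicity):

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, p, q = 6, 5, 3, 2
H = rng.standard_normal((m, p))   # known full-rank matrices H and G
G = rng.standard_normal((n, q))
Z = rng.standard_normal((p, q))   # unknown state matrix
P = np.eye(m)                     # weighting matrices
Q = np.eye(n)

Y = H @ Z @ G.T                   # noise-free observations, E = 0

# Zhat = (H^T P H)^{-1} H^T P Y Q G (G^T Q G)^{-1}
Zhat = (np.linalg.solve(H.T @ P @ H, H.T @ P @ Y @ Q @ G)
        @ np.linalg.inv(G.T @ Q @ G))

assert np.allclose(Zhat, Z)
```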
where
Y - Ŷ = Y - NPYQM
or
A * B = B * A = Σ_{i=1}^m Σ_{j=1}^n a_ij b_ij ,  A,B ∈ M_{m,n}

I * B = trace B
Alternatively,
J = y^T[S - S(M⊗N)S]y = y^T[S - (QMQ)⊗(PNP)]y
and if the weighting matrices P and Q are set equal to the inverses
by
E[(ẑ - z)(ẑ - z)^T]
column and row 'scanning' and the estimate of the state matrix Z
problems.
Y = HX + E

with the estimate X̂ used as an 'observed' matrix in the row problem

X̂ = ZG^T + Ē

which will give the unknown state matrix Z, corresponding to eqn 6.

Ā = A - BPC

where A and Ā are the open- and closed-loop system matrices respectively for
the linear dynamical system S(A,B,C).
4. References
1. NICHOLSON, H.: 'Sequential least-squares prediction based on
spectral analysis', Int. Jl. Compr. Math., Sect B, 1972,
3, pp.257-270.
2. BELLMAN, R.: 'Introduction to matrix analysis', McGraw Hill,
APPENDIX 2
Y = HZG^T + E   (1)

J = e^T(Q⊗P)e = e^T S e = Q * (E^T P E)
where ⊗ is the Kronecker product²,³ and e = vec(E), where vec(·) is
A * B = B * A = Σ_{i=1}^m Σ_{j=1}^n a_ij b_ij ,  A,B ∈ M_{m,n}
W = Σ_i d_ii u_i u_i^T ,  W,D,U ∈ M_{m,m}

W = Σ_i d_ii u_i v_i^T ,  r = rank(W)

with Λ,D ∈ M_{r,r}
Y could, for example, contain the observed outputs from a plant and
its model or the outputs from a system model and its reduced order
Y = UZV^T + E

Y = Σ_i z_ii F_i + E   (2)

where F_i = h_i g_i^T ,  F_i ∈ M_{m,n}

y = Σ_i z_ii (g_i ⊗ h_i) + e = Fz + e
and the matrix estimate Ẑ can be formed using the above definition.
(F^T S F)_ij = (g_i^T Q g_j)(h_i^T P h_j)
matrices, with
where ∘ denotes the Hadamard product³,⁴,⁸, or the Schur product², which
and B. Thus
(F^T S F)_ij = a_ij b_ij
Since the matrices A and B are positive definite, with H,G,P and Q
of maximal rank, then by Schur's lemma³,⁴,⁸, the matrix F^TSF is also
F^T S y = (h_1^T P Y Q g_1  ...  h_i^T P Y Q g_i  ...  h_k^T P Y Q g_k)^T
4. References
3. Marcus, M., and Minc, H.: 'A survey of matrix theory and matrix
l29D, in press.
pp.775-777.
APPENDIX 3
1. Introduction
"fast" and "slow" phenomena depend on the real parts of the poles of the
time-scale effects, the origin need not be the most desired position
real axis.
B = [ B1 ]
    [ B2 ]
where we assume that all submatrices conform to the orders of their sub-
H(s) = [C1  C2] [ sI - A11    -A12    ]⁻¹ [ B1 ]
                [ -A21      sI - A22  ]    [ B2 ]
can be expressed as
where

H1(s) = C(s)[sI - A(s)]⁻¹B(s)

A(s) = A11 + A12(sI - A22)⁻¹A21

B(s) = B1 + A12(sI - A22)⁻¹B2

C(s) = C1 + C2(sI - A22)⁻¹A21

H2(s) = C2(sI - A22)⁻¹B2
H(s) at s = s₀.
In large-scale system studies, we may approximate the system at
frequency approximations.
If the point of approximation is the origin, then we obtain the
A(0) = A11 - A12 A22⁻¹ A21

B(0) = B1 - A12 A22⁻¹ B2

C(0) = C1 - C2 A22⁻¹ A21
provided that the first moment (dc gain) of the second subsystem, which is given by C2 A22⁻¹ B2, is "small" or singular compared with that of the
approximation. This seems to be more general than the conventional
phase properties.
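Since the decomposition H(s) = H1(s) + H2(s) holds identically, the slow-subsystem matrices above reproduce the full transfer function exactly at the point s = 0. A sketch with a random partitioned system (the example system is an assumption):

```python
import numpy as np

rng = np.random.default_rng(9)
n1, n2 = 3, 2
n = n1 + n2
A = rng.standard_normal((n, n))
A -= (max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)  # stable A
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

A11, A12 = A[:n1, :n1], A[:n1, n1:]
A21, A22 = A[n1:, :n1], A[n1:, n1:]
B1, B2 = B[:n1, :], B[n1:, :]
C1, C2 = C[:, :n1], C[:, n1:]

# low-frequency (s = 0) approximation matrices
A0 = A11 - A12 @ np.linalg.solve(A22, A21)
B0 = B1 - A12 @ np.linalg.solve(A22, B2)
C0 = C1 - C2 @ np.linalg.solve(A22, A21)

# H(0) = H1(0) + H2(0) exactly
H_full = -C @ np.linalg.solve(A, B)
H_apx = -C0 @ np.linalg.solve(A0, B0) - C2 @ np.linalg.solve(A22, B2)
assert np.allclose(H_full, H_apx)
```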
is given by
as ao ... -
- -
the transfer function can be written in the form,
H(z) = C(zI - A)⁻¹B
time problem.
B(1) = B1 + A12(I - A22)⁻¹B2

C(1) = C1 + C2(I - A22)⁻¹A21
approximation possible and any point on the real axis, but within the
order model.
5. Conclusions
this note give an alternative, and a more refined insight into the
6. References
pp.246-253.
(4), pp.911-917.
APPENDIX 4
1. Introduction
The Routh approximation method of Hutton and Friedland¹ and
models are always stable provided the original systems are stable.
H(s) = (b_1 s^{n-1} + ... + b_n) / (a_0 s^n + ... + a_n)

H̃(s) = (b_n s^{n-1} + ... + b_1) / (a_n s^n + ... + a_0)
not be used.
Recently, Shamash²,³ used two examples to discredit the Routh
and show that the defects pointed out by Shamash can be easily
rectified.
To avoid ambiguities, we call the technique the Direct Routh
2. Illustrative Examples
Example 1: Shamash³ used the following problem as a counter-example
the form,
H(s) = (100s² + 1100s + 1000) / (s³ + 111s² + 1110s + 1000) = 100(s+1)(s+10) / ((s+1)(s+10)(s+100))
where two of the numerator zeros cancel out two of the poles. These
additional poles representing low-frequency effects were introduced
100 / (s + 100)

0.9009 / (s + 0.9009)   (RRA)
||h||² = ∫₀^∞ h²(t) dt

||h||² = Σ_{i=1}^n …
100 / (s + 111)   (DRA)
is given by ||g₁||² = 50(100/111), which is near the value of the
that high frequencies dominate in this system and that the DRA
well as the DRA and their impulse energies, we have avoided the
H(s) = (8169.13s³ + 50664.97s² + 9984.32s + 500) / (100s⁴ + 10520s³ + 32101s² + 10105s + 500)
was also investigated by Shamash²,³. Approximate cancellation of
H(s) ≈ 81.6913(s + 6.004) / ((s + 100)(s + 5))
0.1936(s + 0.05007) / ((s + 0.09796 + j0.009921)(s + 0.09796 - j0.009921))
indicates high leakage when compared with ||h||² = 34.07 for the
original system.
(81.69s + 506.6) / (s² + 105.2s + 520.0)   (DRA)

= 81.69(s + 6.201) / ((s + 5.201)(s + 100.0))
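The impulse energies quoted for Example 1 follow from ‖h‖² = k²/(2a) for a first-order model k/(s + a), and the reciprocal transformation H̃(s) = s⁻¹H(s⁻¹) simply reverses the coefficient order; both the closed form and this convention are assumptions consistent with the examples above:

```python
import numpy as np

# impulse energy of k/(s + a):  ||h||^2 = integral of (k e^{-a t})^2 = k^2 / (2a)
energy = lambda k, a: k * k / (2.0 * a)

# exact reduced model 100/(s + 100) versus the DRA model 100/(s + 111)
assert np.isclose(energy(100.0, 100.0), 50.0)
assert np.isclose(energy(100.0, 111.0), 50.0 * 100.0 / 111.0)

# the reciprocal transformation reverses the coefficient order of Example 1
den = [1.0, 111.0, 1110.0, 1000.0]
assert den[::-1] == [1000.0, 1110.0, 111.0, 1.0]
```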
3. Conclusions
We have demonstrated that reciprocal transformations should
4. References
1. Hutton, M.F., and Friedland, B.: 'Routh approximations for