Two fundamental aspects of systems theory that we will employ for system realization are the so-called observability (measurability) and controllability (excitability). Observability is the ability to recover, from the measured output, the response characteristics that evolve from an initial state x(0). Controllability pertains to the extent to which the input can excite the system response components x(t) by a train of excitations u(t). Because of their relevance to system realization, we review them first below before moving on to the description of system realization.
where we have dropped the bar ( ¯ ) symbol on A and B for notational simplicity. First, if there were no excitation except the initial condition x(0) ≠ 0, the corresponding outputs y for the successive discrete time steps t = 0, ∆t, 2∆t, ..., (p − 1)∆t can be expressed as
$$
\tilde{y}_p =
\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{p-1} \end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{(p-1)} \end{bmatrix} x_0
= V_p\, x_0,
\qquad V_p \ (mp \times n)
\tag{2}
$$
where the matrix Vp (mp × n) is called the observability matrix. It will be shown that, together with the controllability matrix we are about to derive, the observability matrix plays a crucial role in two ways: it facilitates the formulation of the minimal realization and at the same time becomes a part of the factored Hankel matrix needed for realization computations.
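The construction of the observability matrix and its rank test can be sketched in a few lines (a minimal illustration with made-up system matrices, assuming numpy is available; `observability_matrix` is a hypothetical helper, not notation from the text):

```python
import numpy as np

def observability_matrix(C, A, p):
    """Stack C, CA, CA^2, ..., CA^(p-1) into V_p of shape (m*p, n)."""
    blocks = [C]
    for _ in range(p - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# Hypothetical example: a 2-state system observed through one sensor.
A = np.array([[0.9, 0.2], [0.0, 0.7]])
C = np.array([[1.0, 0.0]])
Vp = observability_matrix(C, A, p=4)
print(Vp.shape)                   # (4, 2)
print(np.linalg.matrix_rank(Vp))  # 2 -> x0 is uniquely recoverable
```

When the rank of Vp equals the state dimension n, the rows of Vp span the full n-dimensional space and the initial state can be reconstructed uniquely, which is exactly the condition discussed next.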
Now, if we apply a train of excitations {u(0), u(1), u(2), ..., u(s − 1)}, the internal state variable vector x_s can be written as
$$
x_s = A^s x_0 + W_s
\begin{bmatrix} u_{s-1} \\ u_{s-2} \\ u_{s-3} \\ \vdots \\ u_0 \end{bmatrix},
\qquad
W_s = \begin{bmatrix} B & AB & A^2B & \dots & A^{(s-1)}B \end{bmatrix}\ (n \times rs)
\tag{3}
$$
(1) Under what condition can the initial state $x_0 \neq 0$ be uniquely reconstructed from a set of measured outputs $\tilde{y}_p^T = (y_0^T,\, y_1^T,\, y_2^T,\, \dots,\, y_{p-1}^T)$, if there is no excitation?
(2) If the system is initially at rest, namely $x_0 = 0$, what is the condition that ensures the zero initial state can be driven to the desired state $x_s$ when a train of inputs $\tilde{u}_s^T = (u_{s-1}^T,\, u_{s-2}^T,\, u_{s-3}^T,\, \dots,\, u_0^T)$ is applied to the system?
With regard to the second question, suppose that the rank of A is n. This means that x_s will consist of n distinct response components if and only if the column vectors of the controllability matrix W_s span the same n-dimensional space. It should be noted that, while the initial state x_0 can be uniquely reconstructed provided the row vectors of V_p span the n-dimensional space, there exists in general no unique set of input vectors that satisfies the controllability requirement. To see this, we observe with x_0 = 0:
$$
\underset{(n \times 1)}{x_s} \;=\;
\underset{(n \times rs)}{W_s}\;
\underset{(rs \times 1)}{\tilde{u}_s}
\tag{4}
$$
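The non-uniqueness just noted can be seen numerically: when W_s has full row rank there are infinitely many input trains reaching x_s, and the pseudoinverse selects the minimum-norm one. A small sketch with hypothetical matrices (assuming numpy; `controllability_matrix` is an illustrative helper, not notation from the text):

```python
import numpy as np

def controllability_matrix(A, B, s):
    """[B, AB, A^2 B, ..., A^(s-1) B], of shape (n, r*s)."""
    blocks = [B]
    for _ in range(s - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

A = np.array([[0.9, 0.2], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
Ws = controllability_matrix(A, B, s=4)
assert np.linalg.matrix_rank(Ws) == 2      # columns span R^2: controllable

# Many input trains reach x_s; the pseudoinverse picks the minimum-norm one.
x_target = np.array([1.0, -1.0])
u_tilde = np.linalg.pinv(Ws) @ x_target    # stacked (u_{s-1}, ..., u_0)
assert np.allclose(Ws @ u_tilde, x_target)
```

Adding to `u_tilde` any vector in the null space of W_s still reaches the same x_s, which is the sense in which the input train is not unique.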
On the other hand, as long as the row rank of Vp is n, one obtains uniquely
$$
W_s = \Psi
\begin{bmatrix}
b_1 & \lambda_1 b_1 & \dots & \lambda_1^{(s-1)} b_1 \\
b_2 & \lambda_2 b_2 & \dots & \lambda_2^{(s-1)} b_2 \\
\vdots & \vdots & & \vdots \\
b_i & \lambda_i b_i & \dots & \lambda_i^{(s-1)} b_i \\
\vdots & \vdots & & \vdots \\
b_n & \lambda_n b_n & \dots & \lambda_n^{(s-1)} b_n
\end{bmatrix},
\qquad
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_i \\ \vdots \\ b_n \end{bmatrix}
= \Psi^{-1} B
\tag{7}
$$
The observability and controllability matrices thus serve both as the theoretical basis for providing a minimal-order realization and as tools of computational utility. These will be discussed below.
The basic building block for a system realization is the Hankel matrix Hps (0)
defined as
$$
H_{ps}(0) = V_p\, W_s =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{(p-1)} \end{bmatrix}
\begin{bmatrix} B & AB & A^2B & \dots & A^{(s-1)}B \end{bmatrix}
\tag{8}
$$
$$
(mp \times rs) = (mp \times n)\,(n \times rs)
$$
To see that the above Hankel matrix is the most fundamental building block, we recall that, given the measured input and output, one can obtain the FRFs or IRFs. A key step is thus to recognize that $H_{ps}(0)$ can be constructed in terms of the Markov parameters $\{Y(i),\ i = 1, 2, \dots, N\}$. To this end,
we expand the Hankel matrix Hps (0) to read:
$$
H_{ps}(0) =
\begin{bmatrix}
CB & CAB & CA^2B & \dots & CA^{(s-1)}B \\
CAB & CA^2B & CA^3B & \dots & CA^{(s)}B \\
CA^2B & CA^3B & CA^4B & \dots & CA^{(s+1)}B \\
\vdots & \vdots & \vdots & & \vdots \\
CA^{(p-1)}B & CA^{(p)}B & CA^{(p+1)}B & \dots & CA^{(s+p-2)}B
\end{bmatrix}
\quad \text{(Mathematical Expression)}
$$
$$
=
\begin{bmatrix}
Y(1) & Y(2) & Y(3) & \dots & Y(s) \\
Y(2) & Y(3) & Y(4) & \dots & Y(s+1) \\
Y(3) & Y(4) & Y(5) & \dots & Y(s+2) \\
\vdots & \vdots & \vdots & & \vdots \\
Y(p) & Y(p+1) & Y(p+2) & \dots & Y(p+s-1)
\end{bmatrix}
\quad \text{(Measured Data)}
\tag{9}
$$
Note that the first expression in the above equation consists of the system pa-
rameters (A, B , C ) that are to be determined, whereas the second expression
consists of the Markov parameters that can be extracted from the measured
data.
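Assembling a block Hankel matrix from measured Markov parameters is mechanical; a minimal sketch (assuming numpy; the Markov parameter values below are made up purely for illustration):

```python
import numpy as np

def block_hankel(markov, p, s, k=0):
    """H_ps(k): the (i, j) block is Y(k+i+j+1), where markov[t] holds Y(t+1)."""
    return np.vstack([np.hstack([markov[k + i + j] for j in range(s)])
                      for i in range(p)])

# Scalar (m = r = 1) illustration with made-up Markov parameters Y(1)..Y(5).
markov = [np.array([[float(t)]]) for t in (1, 2, 3, 4, 5)]
H = block_hankel(markov, p=2, s=3)
print(H)          # [[1. 2. 3.]
                  #  [2. 3. 4.]]
H1 = block_hankel(markov, p=2, s=3, k=1)
print(H1[0, 0])   # 2.0 -> shifted matrix starts at Y(2)
```

The same routine with k = 0 and k > 0 produces the basic and shifted Hankel matrices used throughout the realization algorithms below.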
A generalization of the basic Hankel matrix Hps (0) that is also needed in
system realization can be expressed as
$$
H_{ps}(k) = V_p\, A^k\, W_s =
\begin{bmatrix}
Y(k+1) & Y(k+2) & Y(k+3) & \dots & Y(k+s) \\
Y(k+2) & Y(k+3) & Y(k+4) & \dots & Y(k+s+1) \\
Y(k+3) & Y(k+4) & Y(k+5) & \dots & Y(k+s+2) \\
\vdots & \vdots & \vdots & & \vdots \\
Y(k+p) & Y(k+p+1) & Y(k+p+2) & \dots & Y(k+p+s-1)
\end{bmatrix}
\tag{10}
$$
which shows that a Hankel matrix made of the Markov parameters can be
factored in terms of the observability matrix Vp , the system dynamics op-
erator A and the controllability matrix Ws . This intrinsic factored form of
Hankel matrices is exploited in the development of computational algorithms
for system realization.
4 Minimum Realization
Suppose you are given a set of test data and asked to perform system realization. Three questions come to every analyst's mind: how do I determine the order of the realization model? What modes should be included in the model? And how good will the realization model be? This section addresses the first of these questions.
During the early days of modal (not model!) testing, the lack of criteria for determining the model order and the modal accuracy often led to realization models of different orders and varying modal components from the same test data, depending upon the skills and tools available to each analyst. For complex problems, therefore, a series of model syntheses was required before a consensus could be reached.
The problem of minimal realization was perhaps first solved by Ho and Kalman in 1965, who presented what is now known as the minimal realization theorem for noise-free data, which states: there exists a minimum size of the system operator A for a given pair of input and output. Thus, a realization whose size is less than the minimal realization order is deficient. On the other hand, a realization whose size is greater than the minimal realization order will contain a set of dependent (superfluous) internal state vector components.
To see this, we recall the Laplace-transformed transfer function:
$$
\bar{n} < n
\tag{12}
$$
First, if the rank of W̄s < n, then one can perform a similarity transformation
T such that
$$
\hat{A} = T^{-1} \bar{A}\, T =
\begin{bmatrix}
\hat{A}_c\ (\hat{n}_c \times \hat{n}_c) & \hat{A}_{12}\ (\hat{n}_c \times (n-\hat{n}_c)) \\
0\ ((n-\hat{n}_c) \times \hat{n}_c) & \hat{A}_{22}\ ((n-\hat{n}_c) \times (n-\hat{n}_c))
\end{bmatrix}
$$
$$
\hat{B} =
\begin{bmatrix} \hat{B}_c\ (\hat{n}_c \times r) \\ 0\ ((n-\hat{n}_c) \times r) \end{bmatrix},
\qquad
\hat{C}^T =
\begin{bmatrix} \hat{C}_c^T \\ \hat{C}_2^T \end{bmatrix}
\tag{13}
$$
Second, if the rank of $\bar{V}_p < n$, then employing a step similar to that used in deriving the above non-controllable case, we obtain
$$
\hat{A} =
\begin{bmatrix}
\hat{A}_o\ (\hat{n}_o \times \hat{n}_o) & 0\ (\hat{n}_o \times (n-\hat{n}_o)) \\
\hat{A}_{21}\ ((n-\hat{n}_o) \times \hat{n}_o) & \hat{A}_{22}\ ((n-\hat{n}_o) \times (n-\hat{n}_o))
\end{bmatrix}
$$
$$
\hat{B} =
\begin{bmatrix} \hat{B}_o \\ \hat{B}_2 \end{bmatrix},
\qquad
\hat{C}^T =
\begin{bmatrix} \hat{C}_o^T \\ 0 \end{bmatrix}
\tag{14}
$$
However, since the two realizations come from the same transfer function H(s), one must have
$$
H(s) = \hat{C}_c (s\hat{I}_c - \hat{A}_c)^{-1} \hat{B}_c
= \hat{C}_o (s\hat{I}_o - \hat{A}_o)^{-1} \hat{B}_o
= C (sI - A)^{-1} B
\tag{15}
$$
and find a minimal realization.
Step 1: Realization. Since the input is a unit impulse, the output becomes
the Markov parameters. Hence, we can construct a series of Hankel matrices
as follows:
$$
H_{22}(1) = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix},
\qquad
H_{33}(0) = \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 2 \\ 1 & 2 & 1 \end{bmatrix},
\qquad
H_{44}(0) = \begin{bmatrix} 1 & 1 & 1 & 2 \\ 1 & 1 & 2 & 1 \\ 1 & 2 & 1 & 3 \\ 2 & 1 & 3 & 2 \end{bmatrix}
\tag{18}
$$
where $H_{22}$ is taken from {1, 1, 2, 1, ...} instead of from the beginning of the series. Since $|H_{22}| \neq 0$, $|H_{33}| \neq 0$ and $|H_{44}| = 0$, we conclude that the rank of the system is 3. In other words, the ranks of the observability and controllability matrices are three.
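This determinant-based rank test is easy to check numerically; a small sketch (assuming numpy; `hankel` is an illustrative helper built from the example's Markov parameter sequence):

```python
import numpy as np

# Markov parameter sequence Y(1), Y(2), ... of the example: 1, 1, 1, 2, 1, 3, 2
Y = [1, 1, 1, 2, 1, 3, 2]

def hankel(first, size):
    """size x size Hankel matrix whose (i, j) entry is Y(first + i + j), 1-based."""
    return np.array([[Y[first - 1 + i + j] for j in range(size)]
                     for i in range(size)], dtype=float)

H22 = hankel(2, 2)   # the shifted H_22(1), starting at Y(2)
H33 = hankel(1, 3)
H44 = hankel(1, 4)
print(np.linalg.det(H22))                # ~ 1  (nonzero)
print(np.linalg.det(H33))                # ~ -1 (nonzero)
print(abs(np.linalg.det(H44)) < 1e-12)   # True -> singular, so the rank is 3
```

In floating point the determinant of a singular matrix comes out as a tiny residual rather than exactly zero, so a tolerance test (or, better, a singular value inspection) is the practical form of this check.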
and finally B and C are obtained from:
$$
B_1 = S^{1/2} P^T
\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}
=
\begin{bmatrix} 0.8881 \\ 0.0 \\ -0.4597 \end{bmatrix}
\tag{22}
$$
$$
C_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} P\, S^{1/2}
= \begin{bmatrix} 0.8881 & 0.0 & -0.4597 \end{bmatrix}
\tag{23}
$$
A check with the above realization shows that, with $x_0^T = (0\ \ 0\ \ 0)$, it reconstructs the output.
so that S = I
One can verify that this second realization, although the sequences of x are completely different, gives the same output y.
Of course, their bases are different; they are related by a similarity transformation such that $\phi_1 = T\phi_2$, so that the two realizations are related according to
The minimum realization theory of Ho and Kalman is valid for noise-free data. In practice there are two major factors that influence realization computations: incompleteness of the measured data, and data contaminated by noise. The incompleteness of measured data implies that the collected data are either insufficient or may not satisfy the periodicity requirement (7.9) as well as the asymptotic properties of (7.10). One possible way to deal with incomplete and noisy measured data is to perform overdetermined matrix computations, which necessarily involve singular value decompositions. In the structural engineering community it was Juang and Pappa who in 1984 extended the Ho-Kalman algorithm to handle structural test data.
$$
H_{ps}(0) = P_p\, S_{ps}\, Q_s^T,
\qquad
S_{ps} =
\begin{bmatrix} S_{nn} & 0_{nz} \\ 0_{zn} & S_{zz} \end{bmatrix}
\tag{28}
$$
where $S_{zz}$ is close to zero and can be ignored, unless some of its entries represent the physical rigid-body modes of the structure.
It should be mentioned that the task of truncating the zero eigenvalues is not as straightforward as it seems. This is because the eigenvalues change continuously in most large-scale test data, and the distinction between the lowest structural eigenvalue and the zero eigenvalues becomes blurred, especially for very flexible or very large structures. Nevertheless, it is this truncation concept that constitutes a central aspect of Juang and Pappa's ERA.
Once the eigenvalue truncation is decided, we partition Pp and QTs as follows:
$$
P_p = \begin{bmatrix} P_{pn}(1{:}mp,\, 1{:}n) & P_{pz}(1{:}mp,\, 1{:}z) \end{bmatrix},
\qquad
Q_s^T =
\begin{bmatrix} Q_{sn}^T(1{:}n,\, 1{:}rs) \\ Q_{sz}^T(1{:}z,\, 1{:}rs) \end{bmatrix}
\tag{29}
$$
6.1 Computation of A
Now, in order to compute the system operator A, we recall the Hankel matrix
in the form of
$$
H_{ps}(1) = V_p\, A\, W_s
\tag{32}
$$
$$
V_p^+ = S_{nn}^{-1/2}\, P_{pn}^T,
\qquad
W_s^+ = Q_{sn}\, S_{nn}^{-1/2}
\tag{33}
$$
Thus, A can be obtained by the following computations:
$$
V_p^+\, H_{ps}(1)\, W_s^+
= S_{nn}^{-1/2} P_{pn}^T \;\, P_{pn} S_{nn}^{1/2}\; A \;\, S_{nn}^{1/2} Q_{sn}^T \;\, Q_{sn} S_{nn}^{-1/2}
= A
$$
$$
\Downarrow
\tag{36}
$$
$$
A = V_p^+\, H_{ps}(1)\, W_s^+
$$
$$
V_p =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{(p-1)} \end{bmatrix}
(1{:}mp,\, 1{:}n)
\approx P_{pn}(1{:}mp,\, 1{:}n)\, S_{nn}^{1/2}
$$
$$
W_s = \begin{bmatrix} B & AB & A^2B & \dots & A^{(s-1)}B \end{bmatrix}(1{:}n,\, 1{:}rs)
\approx S_{nn}^{1/2}\, Q_{sn}^T(1{:}n,\, 1{:}rs)
\tag{37}
$$
$$
B = S_{nn}^{1/2}\, Q_{sn}^T(1{:}n,\, 1{:}r),
\qquad
C = P_{pn}(1{:}m,\, 1{:}n)\, S_{nn}^{1/2}
\tag{38}
$$
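The computations in (36)-(38) can be sketched end to end; this is a minimal noise-free illustration (assuming numpy; `era` is an illustrative helper and the test system below is hypothetical):

```python
import numpy as np

def era(markov, p, s, n):
    """Basic ERA sketch. markov[i] is Y(i+1) = C A^i B, each of shape (m, r).
    Returns an order-n realization (A, B, C) following (36)-(38)."""
    m, r = markov[0].shape
    H0 = np.vstack([np.hstack(markov[i:i + s]) for i in range(p)])          # H_ps(0)
    H1 = np.vstack([np.hstack(markov[i + 1:i + 1 + s]) for i in range(p)])  # H_ps(1)
    P, sv, QT = np.linalg.svd(H0, full_matrices=False)
    Pn, QTn = P[:, :n], QT[:n, :]
    Sh = np.diag(np.sqrt(sv[:n]))          # S_nn^(1/2)
    Shi = np.diag(1.0 / np.sqrt(sv[:n]))   # S_nn^(-1/2)
    A = Shi @ Pn.T @ H1 @ QTn.T @ Shi      # V_p^+ H_ps(1) W_s^+
    B = (Sh @ QTn)[:, :r]                  # first r columns of W_s
    C = (Pn @ Sh)[:m, :]                   # first m rows of V_p
    return A, B, C

# Round-trip check on a hypothetical 2nd-order system with noise-free data.
A0, B0, C0 = np.diag([0.9, 0.5]), np.array([[1.0], [1.0]]), np.array([[1.0, 1.0]])
markov = [C0 @ np.linalg.matrix_power(A0, i) @ B0 for i in range(9)]
A, B, C = era(markov, p=4, s=4, n=2)
assert all(np.allclose(C @ np.linalg.matrix_power(A, i) @ B, markov[i])
           for i in range(9))
```

For noise-free data of exact rank n the recovered triple reproduces every Markov parameter; with noisy data the truncation order n is chosen by inspecting the singular values `sv`, as discussed above.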
So far we have described the basic form of the eigensystem realization algorithm. Experience with the basic ERA indicated that it often has to deal with nonsquare Hankel matrices, which can encounter numerical robustness difficulties. In addition, the accuracies of the mode shapes contained in C or B leave much to be desired. One improvement over the basic ERA is to work with a symmetrized form of the Hankel matrices. This is discussed below.
6.3 Realization Based on Product of Hankel Matrices
$$
R_{HH^T} = H(0)\, H^T(0) = (P_p S_{nn} Q_s^T)(Q_s S_{nn} P_p^T) = P_p\, S_{nn}^2\, P_p^T
$$
$$
R_{H^TH} = H^T(0)\, H(0) = (Q_s S_{nn} P_p^T)(P_p S_{nn} Q_s^T) = Q_s\, S_{nn}^2\, Q_s^T
\tag{39}
$$
where the singular value decompositions of (32) and the identities of (31) have
been used.
The above two matrices, $R_{HH^T}$ and $R_{H^TH}$, can be expressed in terms of the observability and controllability matrices (31) as
$$
\underset{(rs \times rs)}{R_{H^TH}} =
\underset{(rs \times n)}{W_s^T}\;
\underset{(n \times n)}{S_{nn}}\;
\underset{(n \times rs)}{W_s}
\tag{41}
$$
of the matrix, e.g., $R_{HH^T}$:
$$
R_{HH^T} =
\begin{bmatrix}
Y(1) & Y(2) & Y(3) & \dots & Y(r) \\
Y(2) & Y(3) & Y(4) & \dots & Y(r+1) \\
Y(3) & Y(4) & Y(5) & \dots & Y(r+2) \\
\vdots & \vdots & \vdots & & \vdots \\
Y(p) & Y(p+1) & Y(p+2) & \dots & Y(p+r-1)
\end{bmatrix}
\times
\begin{bmatrix}
Y^T(1) & Y^T(2) & Y^T(3) & \dots & Y^T(p) \\
Y^T(2) & Y^T(3) & Y^T(4) & \dots & Y^T(p+1) \\
Y^T(3) & Y^T(4) & Y^T(5) & \dots & Y^T(p+2) \\
\vdots & \vdots & \vdots & & \vdots \\
Y^T(r) & Y^T(r+1) & Y^T(r+2) & \dots & Y^T(p+r-1)
\end{bmatrix}
\tag{42}
$$
It is seen from the above expression that $R_{HH^T}$ consists of correlation functions of the Markov parameters. Therefore, if the noise present in the Markov parameters is uncorrelated, $R_{HH^T}$ should filter out the noise much more effectively than performing realizations with H(0) alone. Specifically, since $R_{HH^T}$ works on the Markov parameter product form $Y\,Y^T$, one may conjecture that it filters the output noise more effectively than the input noise. On the other hand, since $R_{H^TH}$ involves the form $Y^T Y$, it would filter the input noise more effectively than the output noise.
Apart from this possible filtering role, experience indicates that either form captures the eigenvalues competitively, as does using H(0) as discussed in the previous section. Thus, as an accurate determination of the system mode shapes is more critical, and the mode shapes are typically extracted from C, one should evaluate which matrix yields the more accurate C. However, as the number of sensors m is typically much larger than the number of actuators r, $R_{HH^T}$-based realizations are computationally more intensive than those based on $R_{H^TH}$. We now present two realizations following closely the so-called FastERA implementation proposed by Peterson.
Noting that the computed Vp is related to the analytical expression of the
observability matrix by
$$
V_p =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{(p-1)} \end{bmatrix}
(1{:}mp,\, 1{:}n)
\tag{44}
$$
we obtain two shifted operators from Vp :
$$
V_{p-1}^0 =
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{(p-2)} \end{bmatrix}
\ ((p-1)m \times n),
\qquad
V_{p-1}^1 =
\begin{bmatrix} CA \\ CA^2 \\ CA^3 \\ \vdots \\ CA^{(p-1)} \end{bmatrix}
\ ((p-1)m \times n)
\tag{45}
$$
Note that $V_{p-1}^0$ and $V_{p-1}^1$ are related via
$$
V_{p-1}^0\, A = V_{p-1}^1
\tag{46}
$$
$$
A = [V_{p-1}^0]^+\, V_{p-1}^1
\tag{47}
$$
so that B is given by
$$
B = [V_{p-1}^0]^+
\begin{bmatrix} Y(1) \\ Y(2) \\ Y(3) \\ \vdots \\ Y(p-1) \end{bmatrix}
\tag{49}
$$
$$
C = V_{p-1}^0(1{:}m,\, 1{:}n)
\tag{50}
$$
as in the case of the ERA derivation.
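The shifted-observability identities (46)-(47) can be checked directly on a small hypothetical pair (A, C), assuming numpy: build the stacked matrix, drop the last or first block row, and recover A by a pseudoinverse.

```python
import numpy as np

# V0 stacks C, CA, ..., CA^(p-2); V1 stacks CA, ..., CA^(p-1); then A = [V0]^+ V1.
A_true = np.array([[0.9, 0.3], [0.0, 0.6]])
C = np.array([[1.0, 0.0]])
p, m = 4, C.shape[0]
V = np.vstack([C @ np.linalg.matrix_power(A_true, i) for i in range(p)])
V0, V1 = V[:-m, :], V[m:, :]        # drop last / first block row
A_est = np.linalg.pinv(V0) @ V1     # eq. (47)
assert np.allclose(A_est, A_true)
```

The recovery is exact here because V0 has full column rank (the pair is observable); with noisy data the pseudoinverse instead gives a least-squares estimate of A.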
For this case, we obtain the following shifted operators from the controllability
matrix Ws :
$$
W_{s-1}^0 = \begin{bmatrix} B & AB & A^2B & \dots & A^{(s-2)}B \end{bmatrix}(1{:}n,\, 1{:}(s-1)r)
$$
$$
W_{s-1}^1 = \begin{bmatrix} AB & A^2B & A^3B & \dots & A^{(s-1)}B \end{bmatrix}(1{:}n,\, 1{:}(s-1)r)
\tag{51}
$$
Notice that $W_{s-1}^0$ and $W_{s-1}^1$ are related by
$$
A\, W_{s-1}^0 = W_{s-1}^1
\tag{52}
$$
The B matrix is just the first r columns of $W_{s-1}^0$:
$$
B = W_{s-1}^0(1{:}n,\, 1{:}r)
\tag{54}
$$
$$
C\, W_{s-1}^0 = \begin{bmatrix} Y(1) & Y(2) & Y(3) & \dots & Y(s-1) \end{bmatrix}
$$
$$
\Downarrow
\tag{55}
$$
$$
C = \begin{bmatrix} Y(1) & Y(2) & Y(3) & \dots & Y(s-1) \end{bmatrix}\,[W_{s-1}^0]^+\,(1{:}m,\, 1{:}n)
$$
The realizations (Ā, B̄, C) obtained in the preceding two sections are valid for the discrete state space model (1). While they are adequate for discrete event dynamics, it will prove more convenient to deduce the continuous state space model. The second-order structural dynamics models can then be obtained from the continuous state space model. In this section we focus on the derivation of a modal-form, continuous state space equation. We defer the transformation of the modal-form state space equation into the second-order structural equations to the next chapter.
First, we transform the discrete system matrix Ā into its continuous case A
as follows. Using the relation
Second, we recall the discrete operator B̄ (6.37) for the zero-order-hold case given by
$$
\bar{B} = (\bar{A} - I)\, A^{-1} B
\tag{58}
$$
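Undoing the zero-order-hold discretization amounts to a matrix logarithm for A and an inversion of (58) for B. A minimal sketch via eigendecomposition (assuming numpy, a diagonalizable Ā, and hypothetical numerical values; `discrete_to_continuous` is an illustrative helper):

```python
import numpy as np

# Recover the continuous pair (A, B) from a discrete ZOH pair (Ad, Bd),
# assuming Ad is diagonalizable and its eigenvalues lie off the negative real axis.
def discrete_to_continuous(Ad, Bd, dt):
    lam, Psi = np.linalg.eig(Ad)
    A = (Psi @ np.diag(np.log(lam)) @ np.linalg.inv(Psi)) / dt  # A = log(Ad)/dt
    if np.allclose(A.imag, 0.0):
        A = A.real
    B = A @ np.linalg.solve(Ad - np.eye(Ad.shape[0]), Bd)       # invert (58)
    return A, B

# Round-trip check with hypothetical values.
dt = 0.1
A_true = np.array([[0.0, 1.0], [-4.0, -0.4]])
B_true = np.array([[0.0], [1.0]])
lam, Psi = np.linalg.eig(A_true)
Ad = (Psi @ np.diag(np.exp(lam * dt)) @ np.linalg.inv(Psi)).real  # expm(A dt)
Bd = (Ad - np.eye(2)) @ np.linalg.solve(A_true, B_true)           # ZOH B, eq. (58)
A_rec, B_rec = discrete_to_continuous(Ad, Bd, dt)
assert np.allclose(A_rec, A_true) and np.allclose(B_rec, B_true)
```

The principal matrix logarithm is well defined here because the sampling rate is fast enough that the discrete eigenvalues stay off the negative real axis; undersampled modes would alias and the recovery would fail.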
from which we have
$$
C = S_{d0} + S_{v0}\, A + S_{a0}\, A^2
\tag{61}
$$
In practice, the output data from each sensor type are treated separately. This means the modal output matrix becomes different according to sensor type:
$$
C_\psi = S_d\, \Psi \qquad\qquad \text{for displacement sensing}
$$
$$
\phantom{C_\psi} = S_v\, \Psi \Lambda^{-1} \qquad \text{for velocity sensing}
\tag{62}
$$
$$
\phantom{C_\psi} = S_a\, \Psi \Lambda^{-2} \qquad \text{for acceleration sensing}
$$
$$
C_\psi = \begin{bmatrix} \dots & \Re(C_{\psi i}) \pm \Im(C_{\psi i}) & \dots \end{bmatrix},
\qquad i = 1, \dots, n
$$
Observe that the internal variable z(t) is associated with an arbitrary basis
vector. On the other hand, the input u(t) and output y(t) are the same ones
measured from the experiments. In other words, for all feasible internal vector
representations, the input/output transfer function is unique, which will be
utilized for extracting the structural physical parameters in the subsequent
lecture.