Lecture 4 and 5
Controllability and Observability: Kalman decompositions
Spring 2013 - EE 194, Advanced Control (Prof. Khan)
January 30 (Wed.) and Feb. 04 (Mon.), 2013
(The lecture material is largely based on: John S. Bay, Fundamentals of Linear State Space Systems, McGraw-Hill Series in Electrical Engineering, 1998.)
I. OBSERVABILITY OF DT LTI SYSTEMS
Consider the DT LTI dynamics again, now with an observation model. We are interested in deriving the necessary conditions for estimation, and thus the additional noise terms can be removed. The system to be considered is the following:

x_{k+1} = A x_k + B u_k,    (1)
y_k = C x_k.    (2)

As before, we consider the worst case of one observation per time-step k, i.e., p = 1.
Definition 1 (Observability). A DT LTI system is said to be observable in n time-steps when the initial condition, x_0, can be recovered from a sequence of observations, y_0, ..., y_{n-1}, and inputs, u_0, ..., u_{n-1}.
The observability problem is to find x_0 from a sequence of observations, y_0, ..., y_{k-1}, and inputs, u_0, ..., u_{k-1}. To this end, note that
y_0 = C x_0,
y_1 = C x_1 = C A x_0 + C B u_0,
y_2 = C x_2 = C A^2 x_0 + C A B u_0 + C B u_1,
    ⋮
y_{k-1} = C A^{k-1} x_0 + C A^{k-2} B u_0 + ... + C B u_{k-2}.
In matrix form, the above is given by

\begin{bmatrix} y_0 \\ y_1 \\ y_2 \\ \vdots \\ y_{k-1} \end{bmatrix}
=
\begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{k-1} \end{bmatrix} x_0
+
\begin{bmatrix}
0 & 0 & \cdots & 0 \\
CB & 0 & \cdots & 0 \\
CAB & CB & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
CA^{k-2}B & CA^{k-3}B & \cdots & 0
\end{bmatrix}
\begin{bmatrix} u_0 \\ u_1 \\ u_2 \\ \vdots \\ u_{k-1} \end{bmatrix}.
The above is a linear system of equations. Moving the known input terms to the left-hand side and collecting the result in \tilde{y}_{0:k-1}, it is compactly written as

\tilde{y}_{0:k-1} = O_{0:k-1} x_0,  where O_{0:k-1} ∈ R^{k×n},    (3)

and hence, from the known inputs and known observations, the initial condition, x_0, can be recovered if and only if

rank(O_{0:k-1}) = n.    (4)

Using the same arguments as for controllability, we note that rank(O_{0:k-1}) < n for k ≤ n - 1, as the observability matrix then has at most n - 1 rows. Similarly, adding another observation after the nth observation does not help, by Cayley-Hamilton arguments, as

rank(O_{0:n-1}) = rank(O_{0:n}) = rank(O_{0:n+1}) = ....    (5)
Hence, the observability matrix is defined as

O = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix},    (6)

without subscripts.
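To make Eqs. (3)-(6) concrete, the following minimal numpy sketch builds O, checks the rank condition, and recovers x_0 from n observations and known inputs. The system matrices here are illustrative values chosen for the sketch, not taken from the notes.

import numpy as np

# A minimal sketch of Eqs. (1)-(6): build the observability matrix, check the
# rank condition, and recover x_0. The matrices A, B, C are illustrative.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.9]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])          # single output, p = 1
n = A.shape[0]

# Observability matrix O = [C; CA; ...; CA^(n-1)], Eq. (6).
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
assert np.linalg.matrix_rank(O) == n      # rank condition, Eq. (4)

# Simulate n steps from an "unknown" x_0 with known scalar inputs.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((n, 1))
u = rng.standard_normal(n)
x, y = x0.copy(), []
for k in range(n):
    y.append((C @ x).item())
    x = A @ x + B * u[k]

# Subtract the known input contribution from each y_k, leaving
# y_tilde = O x_0 as in Eq. (3), then solve the linear system for x_0.
y_tilde = np.array([
    y[k] - sum((C @ np.linalg.matrix_power(A, k - 1 - j) @ B).item() * u[j]
               for j in range(k))
    for k in range(n)
]).reshape(-1, 1)
x0_hat = np.linalg.lstsq(O, y_tilde, rcond=None)[0]
assert np.allclose(x0_hat, x0)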
Remark 1. In the literature, rank(C) = n is often referred to as the system being (A, B)-controllable. Similarly, rank(O) = n is often referred to as the system being (A, C)-observable.
Remark 2. Show that

rank(C) = n ⟺ C C^T ≻ 0 ⟺ (C C^T)^{-1} exists,    (7)
rank(O) = n ⟺ O^T O ≻ 0 ⟺ (O^T O)^{-1} exists.    (8)

Remark 3. Show that

(A, C)-observable ⟺ (A^T, C^T)-controllable,    (9)
(A, B)-controllable ⟺ (A^T, B^T)-observable.    (10)
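A quick numerical check of Remarks 2 and 3, using an illustrative observable pair (A, C) chosen here (not a system from the notes):

import numpy as np

# Remark 2/3, numerically: full rank of O goes with invertibility of O^T O,
# and (A, C) observable matches (A^T, C^T) controllable. Illustrative values.
A = np.array([[0.5, 1.0, 0.0],
              [0.0, 0.5, 1.0],
              [0.0, 0.0, 0.9]])
C = np.array([[1.0, 0.0, 0.0]])
n = A.shape[0]

O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
assert np.linalg.matrix_rank(O) == n
assert np.linalg.matrix_rank(O.T @ O) == n          # (O^T O)^(-1) exists, Eq. (8)

ctrb_dual = np.hstack([np.linalg.matrix_power(A.T, i) @ C.T for i in range(n)])
assert np.linalg.matrix_rank(ctrb_dual) == n        # (A^T, C^T) controllable, Eq. (9)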
II. KALMAN DECOMPOSITIONS
Suppose it is known that a SISO (single-input single-output) system has rank(C) = n_c < n and rank(O) = n_o < n. In other words, the SISO system in question is neither controllable nor observable. We are interested in a transformation that rearranges the system modes (eigenvalues) into:

- modes that are both controllable and observable;
- modes that are neither controllable nor observable;
- modes that are controllable but not observable; and
- modes that are observable but not controllable.

Such a transformation is called a Kalman decomposition.
A. Decomposing controllable modes
Firstly, consider separating only the controllable subspace from the rest of the state-space. To this end, consider a state transformation matrix, V, whose first n_c columns are a maximal set of linearly independent columns from the controllability matrix, C. The remaining n - n_c columns are chosen to be linearly independent from the first n_c columns (and linearly independent among themselves). The transformation operator (matrix) is n × n, defined as

V = [\, v_1 \;\cdots\; v_{n_c} \;|\; v_{n_c+1} \;\cdots\; v_n \,] \triangleq [\, V_1 \;\; V_2 \,].    (11)
It is straightforward to show that the transformed system (with x_k = V \bar{x}_k) is

\bar{x}_{k+1} = V^{-1} A V \bar{x}_k + V^{-1} B u_k,    (12)
y_k = C V \bar{x}_k.    (13)
Partition V^{-1} as

V^{-1} = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix},    (14)

where \tilde{V}_1 is n_c × n and \tilde{V}_2 is (n - n_c) × n. Now note that

I_n = V^{-1} V = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix} = \begin{bmatrix} \tilde{V}_1 V_1 & \tilde{V}_1 V_2 \\ \tilde{V}_2 V_1 & \tilde{V}_2 V_2 \end{bmatrix} = \begin{bmatrix} I_{n_c} & 0 \\ 0 & I_{n-n_c} \end{bmatrix}.    (15)
The above leads to the following:¹

\tilde{V}_2 V_1 = 0 ⟹ the controllability matrix, C, lies in the null space of \tilde{V}_2,    (16)

further leading to

\tilde{V}_2 B = 0, and \tilde{V}_2 A V_1 = 0.    (17)

¹This is because all of the freedom in the linear independence of C is already captured in V_1.
The transformed system in Eq. (12) is thus given by

\bar{A} \triangleq V^{-1} A V = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix} = \begin{bmatrix} \tilde{V}_1 A V_1 & \tilde{V}_1 A V_2 \\ \tilde{V}_2 A V_1 & \tilde{V}_2 A V_2 \end{bmatrix} \triangleq \begin{bmatrix} A_c & A_{c\bar{c}} \\ 0 & A_{\bar{c}} \end{bmatrix},    (18)

\bar{B} \triangleq V^{-1} B = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix} B = \begin{bmatrix} \tilde{V}_1 B \\ \tilde{V}_2 B \end{bmatrix} \triangleq \begin{bmatrix} B_c \\ 0 \end{bmatrix}.    (19)
Now recall that a similarity transformation does not change the rank of the controllability matrix, \bar{C}, of the transformed system. Mathematically,

rank(\bar{C}) = rank([\, V^{-1}B \;\; (V^{-1}AV)V^{-1}B \;\; \cdots \;\; (V^{-1}AV)^{n-1}V^{-1}B \,])
             = rank([\, V^{-1}B \;\; V^{-1}AB \;\; \cdots \;\; V^{-1}A^{n-1}B \,])
             = rank(V^{-1} [\, B \;\; AB \;\; \cdots \;\; A^{n-1}B \,])
             = rank([\, B \;\; AB \;\; \cdots \;\; A^{n-1}B \,])
             = rank(C)
             = n_c.
Furthermore,

rank(\bar{C}) = rank\left( \begin{bmatrix} \bar{B} & \bar{A}\bar{B} & \cdots & \bar{A}^{n_c-1}\bar{B} & | & \cdots & \bar{A}^{n-1}\bar{B} \end{bmatrix} \right)
             = rank\left( \begin{bmatrix} B_c & A_c B_c & \cdots & A_c^{n_c-1} B_c & | & \cdots & A_c^{n-1} B_c \\ 0 & 0 & \cdots & 0 & | & \cdots & 0 \end{bmatrix} \right)
             = rank\left( \begin{bmatrix} B_c & A_c B_c & \cdots & A_c^{n_c-1} B_c \\ 0 & 0 & \cdots & 0 \end{bmatrix} \right)
             = n_c,

where, to go from the second equation to the third, we use the Cayley-Hamilton theorem (applied to A_c, so that the columns A_c^j B_c with j ≥ n_c add nothing to the rank). All of this shows that the subsystem (A_c, B_c) is controllable.
Conclusions: Using the transformation matrix, V, defined in Eq. (11), we have transformed the original system, x_{k+1} = A x_k + B u_k, into the following system:

\bar{x}_{k+1} = \bar{A} \bar{x}_k + \bar{B} u_k,    (20)

that is,

\begin{bmatrix} x_c(k+1) \\ x_{\bar{c}}(k+1) \end{bmatrix} = \begin{bmatrix} A_c & A_{c\bar{c}} \\ 0 & A_{\bar{c}} \end{bmatrix} \begin{bmatrix} x_c(k) \\ x_{\bar{c}}(k) \end{bmatrix} + \begin{bmatrix} B_c \\ 0 \end{bmatrix} u_k,    (21)

such that the transformed state-space is separated into a controllable subspace and an uncontrollable subspace.
Consider the transformed system in Eq. (20) and partition the (transformed) state-vector, \bar{x}(k), into an n_c × 1 component, x_c(k), and an (n - n_c) × 1 component, x_{\bar{c}}(k). We get

x_c(k+1) = A_c x_c(k) + A_{c\bar{c}} x_{\bar{c}}(k) + B_c u_k,    (22)
x_{\bar{c}}(k+1) = A_{\bar{c}} x_{\bar{c}}(k).    (23)
Clearly, the lower subsystem, x_{\bar{c}}(k), is not controllable. In the transformed coordinate space, \bar{x}_k, we can only steer a portion, x_c(k), of the transformed states. The subspace of R^n that can be steered is denoted by R(C), the range (column space) of the controllability matrix, C, and R(C) is called the controllable subspace.
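Equation (23) says that the inputs never enter the lower subsystem. The short simulation below illustrates this on a system written directly in the form of Eq. (21) (the numbers are illustrative values chosen here, not from the notes): two very different input sequences produce the same trajectory for the uncontrollable component.

import numpy as np

# Illustrative system already in the decomposed form of Eq. (21):
# the last state is the uncontrollable component x_cbar.
A_bar = np.array([[0.9, 1.0, 0.5],
                  [0.0, 0.8, 0.3],
                  [0.0, 0.0, 0.7]])   # block upper-triangular, A_cbar = 0.7
B_bar = np.array([[1.0], [0.5], [0.0]])

def simulate(u_seq, x0):
    x = x0.copy()
    traj = [x.copy()]
    for u in u_seq:
        x = A_bar @ x + B_bar * u
        traj.append(x.copy())
    return np.hstack(traj)

rng = np.random.default_rng(1)
x0 = np.ones((3, 1))
traj1 = simulate(rng.standard_normal(20), x0)
traj2 = simulate(10 * rng.standard_normal(20), x0)

# The controllable components differ, but the uncontrollable one (last row)
# is identical: it evolves as x_cbar(k+1) = A_cbar x_cbar(k), untouched by u.
assert not np.allclose(traj1[:2], traj2[:2])
assert np.allclose(traj1[2], traj2[2])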
Finally, recall that the eigenvalues of similar matrices, A and V^{-1}AV, are the same. Recall the structure of V^{-1}AV from Eq. (18); since V^{-1}AV is block upper-triangular, the eigenvalues of A are given by the eigenvalues of A_c and A_{\bar{c}}, called the controllable eigenvalues (modes) and uncontrollable eigenvalues (modes), respectively.
Example 1. Consider the CT LTI system:

\dot{x} = \begin{bmatrix} 2 & 1 & 1 \\ 5 & 3 & 6 \\ 5 & 1 & 4 \end{bmatrix} x + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u,

with controllability matrix

C = \begin{bmatrix} 1 & 2 & 4 \\ 0 & 5 & 5 \\ 0 & 5 & 5 \end{bmatrix},

and rank(C) = 2 < 3. Define V as

V = \begin{bmatrix} 1 & 2 & 0 \\ 0 & 5 & 0 \\ 0 & 5 & 1 \end{bmatrix},

and note that the requirements for choosing V are satisfied. Finally, verify that the transformed system matrices are

V^{-1} A V = \begin{bmatrix} 0 & 6 & 1 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}, \qquad V^{-1} B = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}.

Verify that the top-left subsystem is controllable. From this structure, we see that the two-dimensional controllable subsystem is grouped into the first two state variables and that the third state variable has neither an input signal nor a coupling from the other subsystem. It is therefore not controllable.
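The construction of this subsection is easy to automate. The sketch below is a minimal implementation (its test system is an illustrative uncontrollable pair chosen here, not the matrices of Example 1): it picks a maximal independent set of columns from the controllability matrix, completes the basis, and checks the block structure of Eqs. (18)-(19) and that (A_c, B_c) is controllable.

import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def controllability_decomposition(A, B):
    """Return (V, n_c) such that V^(-1) A V and V^(-1) B have the block
    structure of Eqs. (18)-(19). Columns of V are chosen greedily: first a
    maximal independent set from the controllability matrix, then standard
    basis vectors to complete the basis, as in Eq. (11)."""
    n = A.shape[0]
    cols = []
    for c in np.hstack([ctrb(A, B), np.eye(n)]).T:      # candidate columns
        if np.linalg.matrix_rank(np.column_stack(cols + [c])) == len(cols) + 1:
            cols.append(c)
        if len(cols) == n:
            break
    return np.column_stack(cols), np.linalg.matrix_rank(ctrb(A, B))

# Illustrative uncontrollable pair (values chosen here, not from the notes).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
B = np.array([[1.0], [1.0], [0.0]])
V, n_c = controllability_decomposition(A, B)
A_bar = np.linalg.solve(V, A @ V)                       # V^(-1) A V
B_bar = np.linalg.solve(V, B)                           # V^(-1) B

assert np.allclose(A_bar[n_c:, :n_c], 0.0)              # zero block in Eq. (18)
assert np.allclose(B_bar[n_c:], 0.0)                    # zero block in Eq. (19)
assert np.linalg.matrix_rank(ctrb(A_bar[:n_c, :n_c], B_bar[:n_c])) == n_c  # (A_c, B_c) controllable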
B. Decomposing observable modes
Now we can easily follow the same drill and come up with another transformation, \bar{x}_k = W x_k, where the first n_o rows of W are n_o linearly independent rows of O and the remaining rows are chosen to be linearly independent of the first n_o rows (and linearly independent among themselves). Finally, the transformed system,

\bar{x}_{k+1} = \bar{A} \bar{x}_k + \bar{B} u_k = \begin{bmatrix} A_o & 0 \\ A_{\bar{o}o} & A_{\bar{o}} \end{bmatrix} \bar{x}_k + \begin{bmatrix} B_o \\ B_{\bar{o}} \end{bmatrix} u_k,    (24)

y_k = \begin{bmatrix} C_o & 0 \end{bmatrix} \bar{x}_k,    (25)

is separated into an observable subspace and an unobservable subspace. In fact, we can now write this result as a theorem.
Theorem 1. Consider the DT LTI system,

x_{k+1} = A x_k,
y_k = C x_k,

where A ∈ R^{n×n} and C ∈ R^{p×n}. Let O be the observability matrix as defined in Eq. (6). If rank(O) = n_o < n, then there exists a non-singular matrix, W ∈ R^{n×n}, such that

W^{-1} A W = \begin{bmatrix} A_o & 0 \\ A_{\bar{o}o} & A_{\bar{o}} \end{bmatrix}, \qquad C W = \begin{bmatrix} C_o & 0 \end{bmatrix},

where A_o ∈ R^{n_o × n_o}, C_o ∈ R^{p × n_o}, and the pair (A_o, C_o) is observable.
By carefully applying a sequence of such controllability and observability transformations, the complete Kalman decomposition results in a system of the following form:

\begin{bmatrix} x_{co}(k+1) \\ x_{c\bar{o}}(k+1) \\ x_{\bar{c}o}(k+1) \\ x_{\bar{c}\bar{o}}(k+1) \end{bmatrix}
=
\begin{bmatrix} A_{co} & 0 & A_{13} & 0 \\ A_{21} & A_{c\bar{o}} & A_{23} & A_{24} \\ 0 & 0 & A_{\bar{c}o} & 0 \\ 0 & 0 & A_{43} & A_{\bar{c}\bar{o}} \end{bmatrix}
\begin{bmatrix} x_{co}(k) \\ x_{c\bar{o}}(k) \\ x_{\bar{c}o}(k) \\ x_{\bar{c}\bar{o}}(k) \end{bmatrix}
+
\begin{bmatrix} B_{co} \\ B_{c\bar{o}} \\ 0 \\ 0 \end{bmatrix} u(k),    (26)

y(k) = \begin{bmatrix} C_{co} & 0 & C_{\bar{c}o} & 0 \end{bmatrix}
\begin{bmatrix} x_{co}(k) \\ x_{c\bar{o}}(k) \\ x_{\bar{c}o}(k) \\ x_{\bar{c}\bar{o}}(k) \end{bmatrix}.    (27)
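A small sketch for reading off the sizes of the four blocks in Eq. (26) directly from rank computations. It relies on the standard linear-algebra identity dim(R(C) ∩ N(O)) = rank(C) - rank(OC), which is assumed here rather than derived in the notes; the test system is written directly in the form of Eq. (26), one state per block, with illustrative numbers chosen here.

import numpy as np

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

def kalman_block_dims(A, B, C):
    """Sizes (n_co, n_c_obar, n_cbar_o, n_cbar_obar) of the four Kalman blocks,
    using dim(R(ctrb) intersect N(obsv)) = rank(ctrb) - rank(obsv @ ctrb)."""
    n = A.shape[0]
    P, O = ctrb(A, B), obsv(A, C)
    n_c, n_o = np.linalg.matrix_rank(P), np.linalg.matrix_rank(O)
    n_c_obar = n_c - np.linalg.matrix_rank(O @ P)    # controllable, unobservable
    n_co = n_c - n_c_obar                            # controllable and observable
    n_cbar_obar = (n - n_o) - n_c_obar               # neither
    n_cbar_o = n - n_co - n_c_obar - n_cbar_obar     # uncontrollable, observable
    return n_co, n_c_obar, n_cbar_o, n_cbar_obar

# A 4-state system written directly in the form of Eq. (26), one state per
# block (illustrative numbers chosen here, not from the notes).
A = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.3, 2.0, 0.1, 0.2],
              [0.0, 0.0, 3.0, 0.0],
              [0.0, 0.0, 0.4, 4.0]])
B = np.array([[1.0], [1.0], [0.0], [0.0]])
C = np.array([[1.0, 0.0, 1.0, 0.0]])

print(kalman_block_dims(A, B, C))   # expected: (1, 1, 1, 1)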
C. Kalman decomposition theorem
Theorem 2. Given a DT (or CT) LTI system that is neither controllable nor observable, there exists a non-singular transformation, x = T \bar{x}, such that the system dynamics can be transformed into the structure given by Eqs. (26)-(27).

Proof: The proof is left as an exercise and constitutes Homework 1.
APPENDIX A
SIMILARITY TRANSFORMS
Consider any invertible matrix, V ∈ R^{n×n}, and a state transformation, x_k = V z_k. The transformed DT-LTI system is given by

x_{k+1} = A x_k + B u_k,
V z_{k+1} = A V z_k + B u_k,
z_{k+1} = (V^{-1} A V) z_k + (V^{-1} B) u_k \triangleq \bar{A} z_k + \bar{B} u_k,
y_k = (C V) z_k \triangleq \bar{C} z_k.
The new (transformed) system is now defined by the system matrices, \bar{A}, \bar{B}, and \bar{C}. The two systems, x_k and z_k, are called similar systems.
Remark 4. Each of the following can be proved (a quick numerical check follows the list).

- Given any DT-LTI system, there are infinitely many possible transformations.
- Similar systems have the same eigenvalues; the eigenvectors of the corresponding system matrices are, in general, different.
- Similar systems have the same transfer function; in other words, the transfer function of the DT-LTI dynamics is unique, with infinitely many possible state-space realizations.
- The ranks of the controllability and observability matrices are invariant under a similarity transformation.
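The numerical check below uses a randomly generated system and a randomly generated transformation; all values are illustrative and chosen here.

import numpy as np

# Numerical sanity check of Remark 4 on a random DT-LTI system and a random
# invertible transformation V (any invertible V will do).
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
V = rng.standard_normal((n, n))            # invertible with probability 1

A2 = np.linalg.solve(V, A @ V)             # V^(-1) A V
B2 = np.linalg.solve(V, B)                 # V^(-1) B
C2 = C @ V                                 # C V

# Same eigenvalues.
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                   np.sort_complex(np.linalg.eigvals(A2)))

# Same transfer function H(z) = C (zI - A)^(-1) B, spot-checked at a few z.
for z in (1.5, 2.0 + 1.0j, -0.7):
    H1 = C @ np.linalg.solve(z * np.eye(n) - A, B)
    H2 = C2 @ np.linalg.solve(z * np.eye(n) - A2, B2)
    assert np.allclose(H1, H2)

# Same controllability and observability ranks.
ctrb = lambda A, B: np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
obsv = lambda A, C: np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])
assert np.linalg.matrix_rank(ctrb(A, B)) == np.linalg.matrix_rank(ctrb(A2, B2))
assert np.linalg.matrix_rank(obsv(A, C)) == np.linalg.matrix_rank(obsv(A2, C2))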
APPENDIX B
TRIVIA: NULL SPACES
Consider a set of m vectors, x_i ∈ R^n, i = 1, ..., m.
Definition 2 (Span). The span of m vectors, x_i ∈ R^n, i = 1, ..., m, is defined as the set of all vectors that can be written as a linear combination of the x_i's, i.e.,

span(x_1, ..., x_m) = { y | y = Σ_{i=1}^{m} α_i x_i },    (28)

for α_i ∈ R.
Definition 3. The null space, N(A) ⊆ R^n, of an n × n matrix, A, is defined as the set of all vectors, y, such that

A y = 0,    (29)

for y ∈ N(A).
Consider a real-valued matrix, A ∈ R^{n×n}, and let x_1 ≠ 0 and x_2 ≠ 0 be the maximal set of linearly independent vectors² such that

A x_1 = 0,  A x_2 = 0,    (30)

which leads to

A Σ_{i=1}^{2} α_i x_i = Σ_{i=1}^{2} α_i A x_i = 0.    (31)

Following the definitions of span and null space, we can say that span(x_1, x_2) lies in the null space of A, i.e.,

N(A) = span(x_1, x_2).    (32)

In this particular case, the dimension of N(A) is 2 and³ rank(A) = n - 2.

²In other words, there does not exist any other vector, x_3, that is linearly independent of x_1 and x_2 such that A x_3 = 0.
³This further leads to an alternate definition of rank as n - n_c, where n_c ≜ dim(N(A)).
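A small numerical illustration of these definitions (the matrix and vectors below are illustrative values chosen here): a matrix is built so that two chosen vectors lie in its null space, and the null space and the rank-nullity relation are then recovered from the SVD.

import numpy as np

# Build a matrix whose rows are orthogonal to x1 and x2, so that
# span(x1, x2) lies in N(A); then recover N(A) from the SVD and check
# dim N(A) = n - rank(A). All values are illustrative.
n = 4
x1 = np.array([1.0, 0.0, 1.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0, -1.0])

rows = np.array([[1.0, 0.0, -1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0]])
A = np.vstack([rows, rows.sum(axis=0, keepdims=True), np.zeros((1, n))])

assert np.allclose(A @ x1, 0.0) and np.allclose(A @ x2, 0.0)

rank = np.linalg.matrix_rank(A)
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T                     # columns span N(A)
assert null_basis.shape[1] == n - rank       # dim N(A) = n - rank(A)
assert np.linalg.matrix_rank(np.column_stack([x1, x2, null_basis])) == 2   # N(A) = span(x1, x2)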
Let us recall the uncontrollable case with rank(C) = n_c. Clearly, C has n_c linearly independent (l.i.) columns (vectors in R^n); let the matrix V_1 ∈ R^{n × n_c} consist of these n_c l.i. columns. On the other hand, let the matrix \hat{V}_1 ∈ R^{n × (n - n_c)} consist of the remaining columns of C, and note that

every column of \hat{V}_1 ∈ span(columns of V_1),

or, equivalently, span(columns of V_1) contains every column of \hat{V}_1.
The above leads to the following:

Remark 5. If there exists a matrix, \tilde{V}_2, such that \tilde{V}_2 V_1 = 0, then

- every column of V_1 ∈ N(\tilde{V}_2),
- span(columns of V_1) ⊆ N(\tilde{V}_2),
- every column of \hat{V}_1 ∈ N(\tilde{V}_2).
A. Kalman decomposition, revisited
Construct a transformation matrix, V ∈ R^{n×n}, whose first n × n_c block is V_1. Choose the remaining n - n_c columns to be linearly independent from the first n_c columns (and linearly independent among themselves); call the second n × (n - n_c) block V_2:

V \triangleq \begin{bmatrix} V_1 & V_2 \end{bmatrix}, \qquad V^{-1} \triangleq \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix},

I_n = V^{-1} V = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix} \begin{bmatrix} V_1 & V_2 \end{bmatrix} = \begin{bmatrix} \tilde{V}_1 V_1 & \tilde{V}_1 V_2 \\ \tilde{V}_2 V_1 & \tilde{V}_2 V_2 \end{bmatrix} = \begin{bmatrix} I_{n_c} & 0 \\ 0 & I_{n-n_c} \end{bmatrix}.
From the above, note that \tilde{V}_2 V_1 = 0. Now consider the transformed system matrices, \bar{A} and \bar{B}:

\bar{A} \triangleq V^{-1} A V = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix} A \begin{bmatrix} V_1 & V_2 \end{bmatrix} = \begin{bmatrix} \tilde{V}_1 A V_1 & \tilde{V}_1 A V_2 \\ \tilde{V}_2 A V_1 & \tilde{V}_2 A V_2 \end{bmatrix},

\bar{B} \triangleq V^{-1} B = \begin{bmatrix} \tilde{V}_1 \\ \tilde{V}_2 \end{bmatrix} B = \begin{bmatrix} \tilde{V}_1 B \\ \tilde{V}_2 B \end{bmatrix}.
Note that

every column of A V_1 ∈ span(columns of V_1) ⊆ N(\tilde{V}_2),
every column of B ∈ N(\tilde{V}_2),

since the columns of A V_1 are columns of A C (which, by the Cayley-Hamilton theorem, remain in the controllable subspace) and the columns of B are among the columns of C. Hence \tilde{V}_2 A V_1 = 0 and \tilde{V}_2 B = 0, which yields the block structure of Eqs. (18)-(19).
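These identities are easy to confirm numerically. The sketch below uses the same illustrative uncontrollable pair as the earlier sketch in Section II-A; for that pair, the first n_c columns of the controllability matrix happen to be linearly independent, so they can be taken as V_1 directly.

import numpy as np

# Check Vt2 @ V1 = 0, Vt2 @ C_mat = 0, Vt2 @ A @ V1 = 0, and Vt2 @ B = 0,
# where Vt1, Vt2 denote the row blocks of V^(-1). Illustrative values.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
B = np.array([[1.0], [1.0], [0.0]])
n = A.shape[0]
C_mat = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
n_c = np.linalg.matrix_rank(C_mat)                  # here n_c = 2

V1 = C_mat[:, :n_c]                   # first n_c columns are independent here
V2 = np.array([[0.0], [0.0], [1.0]])                # completes the basis
V = np.hstack([V1, V2])
V_inv = np.linalg.inv(V)
Vt1, Vt2 = V_inv[:n_c, :], V_inv[n_c:, :]           # row blocks of V^(-1)

assert np.allclose(Vt2 @ V1, 0.0)        # Eq. (16)
assert np.allclose(Vt2 @ C_mat, 0.0)     # every column of C_mat lies in N(Vt2)
assert np.allclose(Vt2 @ A @ V1, 0.0)    # A V1 stays in the controllable subspace
assert np.allclose(Vt2 @ B, 0.0)         # B lies in N(Vt2)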