
BALAKRISHNAN

Sep 30, 2003

EE695R Homework #3 Solutions

1. Eigenvalues of the Lyapunov operator. The Lyapunov operator L_A : R^{n×n} → R^{n×n} associated with A ∈ R^{n×n} is defined as

L_A(P) = A^T P + P A.
(a) Assuming that A has distinct eigenvalues, prove that the eigenvalues of the Lyapunov
operator are given by λ_i + λ_j^*, i, j = 1, . . . , n, where λ_i, i = 1, . . . , n are the eigenvalues
of A and r^* denotes the complex conjugate transpose of r.
Hint: Try P = v_i v_j^*, where v_i, i = 1, . . . , n are the left eigenvectors of A.
(b) Suppose that A has distinct eigenvalues, denoted λ_1, . . . , λ_n. Use (a) to conclude that
the Lyapunov equation

A^T P + P A + Q = 0

has a unique solution P for any Q if and only if λ_i + λ_j^* ≠ 0, i, j = 1, . . . , n.

Solution:

(a) As suggested in the hint, we let P_ij = v_i v_j^*, where v_i^T A = λ_i v_i^T. Note that this means
that A^T v_i = λ_i v_i, and that v_i^* A = λ_i^* v_i^*. Now consider

L_A(P_ij) = A^T v_i v_j^* + v_i v_j^* A = (λ_i + λ_j^*) v_i v_j^* = (λ_i + λ_j^*) P_ij.

Thus, we have shown that λ_i + λ_j^*, i, j = 1, . . . , n are some (possibly not all) of the
eigenvalues of L_A. To show that these are indeed all the eigenvalues of L_A, we need to
show that P_ij, i, j = 1, . . . , n are linearly independent. This is left as an exercise. (You
will have to use the fact that v_i, i = 1, . . . , n are linearly independent.) A numerical
sanity check appears after part (b).
(b) The Lyapunov equation is simply a linear equation L_A(P) = −Q, and therefore has a
unique solution if and only if none of the eigenvalues of L_A is zero, that is, if and only if
λ_i + λ_j^* ≠ 0 for all i, j.
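
Here is a quick numerical sanity check of (a) and (b), a sketch in base MATLAB. It uses the identity vec(A^T P + P A) = (I ⊗ A^T + A^T ⊗ I) vec(P), so the matrix representing L_A in the vec basis is kron(eye(n), A') + kron(A', eye(n)):

    % Eigenvalues of the Lyapunov operator versus the pair sums lambda_i + lambda_j.
    % (Since A is real, its spectrum is closed under conjugation, so the multiset
    % {lambda_i + lambda_j} equals {lambda_i + lambda_j^*}.)
    n = 3;
    A = randn(n);                               % distinct eigenvalues w.p. 1
    L = kron(eye(n), A') + kron(A', eye(n));    % matrix of L_A acting on vec(P)
    lam = eig(A);
    pairSums = reshape(lam*ones(1,n) + ones(n,1)*lam.', [], 1);
    disp(norm(sort(eig(L)) - sort(pairSums)))   % ~ 1e-14

Consistent with (b), L (and hence the Lyapunov equation) is singular exactly when some pair sum λ_i + λ_j^* vanishes.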

2. Let all the eigenvalues of A ∈ R^{n×n} have negative real part, and let B ∈ R^{n×n_i}. Show that
the controllability Gramian

W_c = ∫_0^∞ e^{At} BB^T e^{A^T t} dt

is nonsingular if and only if (A, B) is controllable.
Solution:
Suppose W_c is singular. Then there exists some nonzero v ∈ R^n such that v^T W_c v = 0. This
means that ∫_0^∞ v^T e^{At} BB^T e^{A^T t} v dt = 0, or v^T e^{At} B = 0 for all t ≥ 0. Evaluating the last
equation and its derivatives at t = 0, we get

v^T B = v^T AB = · · · = v^T A^{n−1} B = 0,

so the controllability matrix drops rank, implying that (A, B) is not controllable.
Conversely, suppose that (A, B) is not controllable. Then you can retrace the steps of the
preceding argument, using the Cayley-Hamilton theorem along the way, to conclude that W_c
is singular. Here is another proof. Since (A, B) is not controllable, we have from the PBH
eigenvector test that there exists nonzero v ∈ C^n such that v^* B = 0 and v^* A = λv^*. Now, the
Gramian W_c satisfies

A W_c + W_c A^T + B B^T = 0.

Multiplying this equation on the left and right by v^* and v respectively, we get
2(ℜλ) v^* W_c v = 0. Since A is stable, ℜλ < 0, so v^* W_c v = 0; and since W_c is positive
semidefinite, this gives W_c v = 0, so W_c is singular.
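
As a numerical illustration of this equivalence, here is a sketch (lyap, from the Control System Toolbox, solves AX + XA^T + Q = 0):

    % Compare singularity of Wc with the rank of the controllability matrix.
    A = [-1 -2; 2 -2];  B = [1; 1];             % a stable, controllable pair
    Wc = lyap(A, B*B');                         % controllability Gramian
    disp([rank(Wc), rank([B, A*B])])            % both equal 2

For an uncontrollable pair such as A = diag([-1, -2]) with B = [1; 0], both ranks drop to 1 together.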

3. A matrix

M = [M_11 M_12; M_21 M_22]

with M_ij ∈ R^{n×n} is Hamiltonian if JM is symmetric, or equivalently

JMJ = M^T,

where

J = [0 I; −I 0].
(a) Show that M is Hamiltonian if and only if M_22 = −M_11^T, and M_21 and M_12 are each
symmetric.
(b) Show that if v is an eigenvector of M, then Jv is an eigenvector of M^T.
(c) Show that if λ is an eigenvalue of M, then so is −λ.
(d) Show that det(sI − M) = det(−sI − M), so that det(sI − M) is a polynomial in s^2.

Solution:

(a) Expanding out JMJ = M^T, we get

[−M_22 M_21; M_12 −M_11] = [M_11^T M_21^T; M_12^T M_22^T],

and the claim follows.


(b) Let λ be an eigenvalue of M with eigenvector v, i.e., Mv = λv. Then JMJ^{−1}(Jv) =
JMv = λ(Jv). Since J^{−1} = −J and JMJ = M^T, we have JMJ^{−1} = −M^T, so
M^T(Jv) = −λ(Jv); thus −λ is an eigenvalue of M^T with eigenvector Jv.
(c) This follows from (b), as M and M^T have the same eigenvalues.
(d) You can argue this from the fact that if λ is an eigenvalue of M, so is −λ. Here is
another proof:

det(sI − M) = det(sI − M^T) = det(sI − JMJ) = det(sI + J^{−1}MJ)
            = det(sI + M) = (−1)^{2n} det(−sI − M) = det(−sI − M).
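
As a numerical spot check of (a), (c) and (d), here is a sketch in base MATLAB:

    % Build a random Hamiltonian matrix using the characterization in (a),
    % then verify that det(sI - M) has (numerically) vanishing odd coefficients.
    n = 3;
    M11 = randn(n);
    M12 = randn(n);  M12 = M12 + M12';          % symmetric
    M21 = randn(n);  M21 = M21 + M21';          % symmetric
    M = [M11, M12; M21, -M11'];                 % Hamiltonian by part (a)
    p = poly(M);                                % coefficients of det(sI - M)
    disp(norm(p(2:2:end)))                      % ~ 0: polynomial in s^2

The vanishing odd-power coefficients are exactly the statement that the eigenvalues come in ±λ pairs.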

4. Suppose that H(s) has a state-space realization (A, B, C). We showed in class that
σ_i(H(jω_0)) = γ for some i, if and only if a certain Hamiltonian matrix has imaginary
eigenvalues ±jω_0. Extend the Hamiltonian eigenvalue result to the case when H(s) has a
state-space realization (A, B, C, D), where D is nonzero.
Solution: For the case when D ≠ 0, the Hamiltonian is

M_γ = [A 0; 0 −A^T] + [B 0; 0 −C^T] [−D γI; γI −D^T]^{−1} [C 0; 0 B^T]
    = [A − BR^{−1}D^T C, −γBR^{−1}B^T; γC^T S^{−1}C, −A^T + C^T DR^{−1}B^T],

where R = (D^T D − γ^2 I) and S = (DD^T − γ^2 I).


Here is the derivation. Let γ be a singular value of H(jω_0). Then we have nonzero u and v
such that

H(jω_0)u = γv,   H(jω_0)^* v = γu,                                        (1)

so that

(C(jω_0 I − A)^{−1}B + D)u = γv,   (B^T(−jω_0 I − A^T)^{−1}C^T + D^T)v = γu.   (2)
Define

r = (jω_0 I − A)^{−1}Bu,   s = (−jω_0 I − A^T)^{−1}C^T v.                 (3)
Now solving for u and v in terms of r and s,

[u; v] = [−D γI; γI −D^T]^{−1} [C 0; 0 B^T] [r; s].                       (4)

Note that (4) guarantees that [r; s] ≠ [0; 0].
From (3),

(jω_0 I − A)r = Bu,   (−jω_0 I − A^T)s = C^T v.                           (5)
From (4) and (5), we obtain

([A 0; 0 −A^T] + [B 0; 0 −C^T] [−D γI; γI −D^T]^{−1} [C 0; 0 B^T]) [r; s] = jω_0 [r; s].   (6)

Thus

M_γ [r; s] = jω_0 [r; s].                                                 (7)
This proves one direction of the result.

Now we prove the converse. Suppose that M_γ has eigenvalue jω_0, that is, (6) holds for some
[r; s] ≠ [0; 0]. Define u and v by equation (4); clearly [u; v] ≠ 0. Then from (4) and (6),
we conclude (2), which establishes that γ is a singular value of H(jω_0).
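
A numerical check of this result is easy to sketch (random data; only invertibility of jω_0 I − A and of R and S is needed, which holds generically here):

    % If gamma is a singular value of H(jw0), then Mgamma has eigenvalue jw0.
    rng(0);                                     % fix the seed for repeatability
    n = 3; m = 2; p = 2;
    A = randn(n) - 2*eye(n);
    B = randn(n, m);  C = randn(p, n);  D = randn(p, m);
    w0 = 1.3;
    g  = max(svd(C/(1i*w0*eye(n) - A)*B + D));  % gamma = sigma_max(H(jw0))
    R  = D'*D - g^2*eye(m);   S = D*D' - g^2*eye(p);
    Mg = [A - B/R*D'*C,   -g*(B/R)*B';
          g*(C'/S)*C,     -A' + C'*D/R*B'];
    disp(min(abs(eig(Mg) - 1i*w0)))             % ~ 0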
5. Write a MATLAB function implementing a bisection algorithm to compute the H_∞ norm
of a continuous-time system with state-space realization {A, B, C}. Use the algorithm to
compute the H_∞ norm of H with realization

A = [−1 −2; 2 −2];   B = [1 1; 1 2];   C = [1 2].

Solution: Send email to ragu@ecn, and I will e-mail you the code.
‖H‖_∞ = 2.2955, achieved at ω ≈ 2.1743.
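
In the meantime, here is a minimal sketch of such a bisection, assuming D = 0. It uses lyap from the Control System Toolbox, and initializes the interval with the standard Hankel-singular-value bounds max_i σ_i ≤ ‖H‖_∞ ≤ 2 Σ_i σ_i:

    function [l, u] = hinf_bisect(A, B, C, tol)
    % Bisection for ||H||inf, H(s) = C*inv(sI - A)*B, with A stable and D = 0.
    Wc = lyap(A, B*B');                         % controllability Gramian
    Wo = lyap(A', C'*C);                        % observability Gramian
    hsv = sqrt(abs(eig(Wc*Wo)));                % Hankel singular values
    l = max(hsv);  u = 2*sum(hsv);              % initial bounds on ||H||inf
    while (u - l) > tol*l
        g = (l + u)/2;
        M = [A, (B*B')/g; -(C'*C)/g, -A'];      % Hamiltonian at level g
        if any(abs(real(eig(M))) < 1e-9)        % imaginary eigenvalue present?
            l = g;                              % some sigma crosses g: g < ||H||inf
        else
            u = g;                              % g > ||H||inf
        end
    end

Running [l, u] = hinf_bisect(A, B, C, 1e-4) on the realization above should bracket ‖H‖_∞ ≈ 2.2955.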
[Plot: σ_max(H(jω)) versus ω, for ω from 10^{−2} to 10^{2} (log scale).]

Figure 1: Singular value plot for Problem 5

6. Consider a discrete-time system

x(k + 1) = Ax(k) + Bu(k),   x(0) = 0,   y(k) = Cx(k),

with transfer matrix H(z) = C(zI − A)^{−1}B. The H_∞ norm of H is defined as

‖H‖_∞ = max_{θ∈[0,2π]} σ_max(H(e^{jθ})).

(a) If A is nonsingular, show that one of the singular values of H(e^{jθ_0}) equals γ if and only
if the matrix

[A − BB^T(A^{−1})^T C^T C/γ^2, −BB^T(A^{−1})^T/γ; (A^{−1})^T C^T C/γ, (A^{−1})^T]

has an eigenvalue e^{jθ_0}. Use this fact to devise a bisection algorithm to compute ‖H‖_∞
for the discrete-time system.

(b) Write a MATLAB function to compute the H_∞ norm of a discrete-time system with
state-space realization {A, B, C}. Use the algorithm to compute the H_∞ norm of the
system with

A = [0.8 0.8; −1.2 −0.6];   B = [1 1; 2 1];   C = [1 −1; −1 −2].

Solution:

(a) Suppose σ_i(H(e^{jθ})) = γ. For convenience, let z = e^{jθ}. Then, for some nonzero u, v,

C(zI − A)^{−1}Bv = γu,   B^T(z^{−1}I − A^T)^{−1}C^T u = γv.

The second equation can be rewritten, after a lot of algebra, as

(−B^T A^{−T}C^T + (−B^T A^{−T})(zI − A^{−T})^{−1}A^{−T}C^T) u = γv.

Then, using the same approach as in the solution to Problem 4, we get

σ_i(H(e^{jθ})) = γ  ⟺  [A − BB^T(A^{−1})^T C^T C/γ^2, −BB^T(A^{−1})^T/γ; (A^{−1})^T C^T C/γ, (A^{−1})^T]
has an eigenvalue e^{jθ}.

(b) Send email to ragu@ecn for the code.

‖H‖_∞ ≈ 17.6150, achieved at θ ≈ −1.4665 (or 4.8167).
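
Here is a sketch of such a function, built directly on the eigenvalue test from part (a); the lower bound is initialized from a coarse frequency grid, and the upper bound is grown by doubling:

    function [l, u] = hinf_bisect_dt(A, B, C, tol)
    % Bisection for ||H||inf, H(z) = C*inv(zI - A)*B, A stable and nonsingular.
    n = size(A, 1);
    l = 0;
    for th = linspace(0, 2*pi, 64)              % coarse-grid lower bound
        l = max(l, max(svd(C*((exp(1i*th)*eye(n) - A)\B))));
    end
    u = 2*l;
    while unit_circle_eig(A, B, C, u), l = u;  u = 2*u;  end
    while (u - l) > tol*l
        g = (l + u)/2;
        if unit_circle_eig(A, B, C, g), l = g; else u = g; end
    end
    end

    function tf = unit_circle_eig(A, B, C, g)
    % Test matrix from part (a): some singular value of H(e^{j theta})
    % equals g iff this matrix has an eigenvalue on the unit circle.
    Ait = inv(A)';                              % (A^{-1})^T
    M = [A - B*B'*Ait*(C'*C)/g^2,  -B*B'*Ait/g;
         Ait*(C'*C)/g,             Ait];
    tf = any(abs(abs(eig(M)) - 1) < 1e-7);
    end

On the system above this should bracket ‖H‖_∞ ≈ 17.6150.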

[Plot: σ_max(H(e^{jθ})) versus θ, 0 ≤ θ ≤ 2π.]

Note that in this problem, we have assumed that A is nonsingular. Can you think of how one
can handle the case of singular A?

By the way, here is another approach that will almost always work: use the bilinear mapping
z = (1 + s)/(1 − s) to map the closed unit disk onto the closed left half plane. Let H̃(s) =
H(z). You can show that H̃ can be regarded as the transfer function of a continuous-time
system with realization

Ã = −(I − A)(I + A)^{−1},   B̃ = (I + A)^{−1}B,   C̃ = 2C(I + A)^{−1},   D̃ = −C(I + A)^{−1}B.

Note that there is a one-to-one correspondence between the elements of {H̃(jω) | ω ∈ R}
and {H(e^{jθ}) | θ ∈ [0, 2π)}. You can compute ‖H̃‖_∞ using the bisection algorithm for the H_∞
norm of continuous-time systems.
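
A quick spot check of this realization at one frequency, as a sketch (s = (z − 1)/(z + 1) is the inverse of the bilinear map, and |z| = 1 gives s on the imaginary axis):

    % Compare H(z) at z = e^{j th} with Htilde(s) at the mapped point s.
    A = [0.8 0.8; -1.2 -0.6];  B = [1 1; 2 1];  C = [1 -1; -1 -2];
    I = eye(2);
    At = -(I - A)/(I + A);   Bt = (I + A)\B;    % Atilde, Btilde
    Ct = 2*C/(I + A);        Dt = -C/(I + A)*B; % Ctilde, Dtilde
    th = 0.7;  z = exp(1i*th);  s = (z - 1)/(z + 1);
    disp(norm(C/(z*I - A)*B - (Ct/(s*I - At)*Bt + Dt)))   % ~ 0

The construction fails only when I + A is singular, i.e., when −1 is an eigenvalue of A, which is why it "almost always" works.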

7. (From Fall 1995 take-home midterm.) Peak magnitude of a transfer function over a polytopic
region in the complex plane.
Let H(s) be the transfer function of a finite-dimensional linear time-invariant system, with a
realization (A, B,C). (Note that D = 0.)

(a) Let L(α, θ) denote the line segment in the complex plane given by

L(α, θ) = { s ∈ C | s = α + jλe^{jθ}, λ ∈ [0, 1] }.

Given γ > 0 and s_0 ∈ L(α, θ) with s_0 = α + jλ_0 e^{jθ} for some λ_0 ∈ [0, 1], find a matrix
M_γ such that

σ_max(H(s_0)) = γ if and only if M_γ has an eigenvalue jλ_0.

(b) Using the result of part (a), implement a bisection algorithm in MATLAB, which, given
α, θ and ε, computes max_{s∈L(α,θ)} σ_max(H(s)) to within a relative accuracy of ε (recall
that this means that the algorithm returns a lower bound l and an upper bound u such
that on exit, you have (u − l) < εl).
(c) Use this algorithm to compute

max_{s∈∂S} σ_max(H(s))

and the values of s where this maximum is achieved, where ∂S is the boundary (i.e.,
the four edges) of the square shown in Figure 2, and H has a state-space realization

A = [−1 −1 −1; 1 −3 0; 0 −1 −1];   B = [1 1; 2 1; 1 1];   C = [−1 1 1; 1 1 −1].

(d) Let S denote the interior of the square (shaded region in Figure 2). Under what conditions
on H do we have

max_{s∈∂S} σ_max(H(s)) = sup_{s∈S} σ_max(H(s))?

Are these conditions satisfied by the H in the example in part (c)?


Figure 2: Polytopic region for Problem 7c: the square with vertices 1/√2, j/√2, −1/√2 and −j/√2

Solution:
(a) Define H̃ by

H̃(s) = H(α + s e^{jθ}).

It is easily verified that H̃(s) has a state-space realization ((A − αI)e^{−jθ}, Be^{−jθ}, C). Note that
the state-space entries are complex! Now, it is easy to verify (the derivation is exactly along the
lines of the case when the state-space matrices are real) that given γ > 0, σ_max(H̃(jλ_0)) = γ if
and only if the matrix

M_γ = [(A − αI)e^{−jθ}, (Be^{−jθ})(Be^{−jθ})^*/γ; −C^*C/γ, −((A − αI)e^{−jθ})^*]   (8)

has an imaginary eigenvalue jλ_0.
Therefore, with s_0 = α + jλ_0 e^{jθ} for some λ_0 ∈ [0, 1], σ_max(H(s_0)) = γ if and only if
the matrix M_γ given by (8) has an imaginary eigenvalue jλ_0.
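
For concreteness, here is a sketch of the resulting eigenvalue test for a single segment, following (8); the function name is illustrative, not taken from the posted code referenced in part (b):

    function tf = seg_has_crossing(A, B, C, alpha, theta, gamma)
    % True iff some singular value of H(s) equals gamma at some point
    % s = alpha + j*lambda0*e^{j theta} of the segment, lambda0 in [0, 1].
    n  = size(A, 1);
    Ah = (A - alpha*eye(n))*exp(-1i*theta);     % shifted and rotated A
    Bh = B*exp(-1i*theta);
    M  = [Ah, (Bh*Bh')/gamma; -(C'*C)/gamma, -Ah'];   % Mgamma of (8)
    ev = eig(M);
    tf = any(abs(real(ev)) < 1e-8 & imag(ev) >= 0 & imag(ev) <= 1);
    end

Bisecting on γ over each edge, with this test in the inner loop, yields the algorithm of part (b).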
(b) The MATLAB codes are in
http://www.ece.purdue.edu/~ragu/695R/for-web/{mtans.m, lineseg_linf.m}

(c) Let E_i denote the edge of the square in the i-th quadrant. Then E_1 = L(1/√2, π/4), E_2 =
L(−1/√2, −π/4), E_3 = L(−1/√2, −3π/4) and E_4 = L(1/√2, 3π/4). The MATLAB code
from part (b) yields the following table of values:

Edge   Maximum of σ_max(H(s))   Maximizer s_max
E_1    1.27372                  j/√2
E_2    5.58470                  −0.62468 + 0.08243j
E_3    5.58470                  −0.62468 − 0.08243j
E_4    1.27372                  −j/√2

Thus,

max_{s∈∂S} σ_max(H(s)) = 5.58470, achieved at s = −0.62468 ± 0.08243j.

Figure 3 shows the plot of the maximum singular value of H(s) along ∂S .


Figure 3: Plot of σ_max(H(s)) along the edges, sampled at 800 equally spaced points, traveling
counterclockwise from the rightmost vertex.

(d) This part is not as straightforward as it looks. If H(s) were the transfer function of a single-input
single-output system, then the answer would follow directly from the maximum modulus theorem
of complex variable theory:

max_{s∈∂S} σ_max(H(s)) = sup_{s∈S} σ_max(H(s))

if and only if H has no poles in S.
However, we have a multi-input multi-output system, so H is a matrix of transfer functions, and
the maximum modulus theorem does not apply. It turns out, however, that there is another result
from complex variable theory that does apply: the maximum singular value of H(s) is "subharmonic"
in any region where H is analytic¹, and subharmonic functions obey the maximum principle. Thus
σ_max(H(s)) achieves its maximum over the closure of S on the boundary ∂S, and the equality
above holds, if and only if H has no poles in S.
We find that in our example, H does have a pole in S, so in fact sup_{s∈S} σ_max(H(s)) is unbounded.

¹ See "Subharmonic Functions and Performance Bounds on Linear Time-Invariant Feedback Systems", S. Boyd
and C. A. Desoer, IMA Journal of Mathematical Control and Information, Volume 2, pages 153–170, 1985.
