Matthew M. Peet
Arizona State University
Lectures 1-2
1. Linear Systems
2. Convex Optimization and Linear Matrix Inequalities
3. Optimal Control
4. LMI Solutions to the H∞ and H2 Optimal Control Problems
Definition 1.
L2[0, ∞) is the Hilbert space of functions f : R+ → R^n with inner product
    ⟨u, y⟩_L2 = ∫_0^∞ u(t)^T y(t) dt
Definition 2.
The normed space of bounded linear operators from X to Y is denoted L(X, Y), with norm
    ‖P‖_L(X,Y) := sup_{x ∈ X, x ≠ 0} ‖Px‖_Y / ‖x‖_X
Definition 3.
Given u ∈ L2[0, ∞), the Laplace Transform of u is û = Λu, where
    û(s) = (Λu)(s) = lim_{T→∞} ∫_0^T u(t) e^{−st} dt
Question:
    ‖Λ‖ = sup_{u ∈ L2} ‖Λu‖_H2 / ‖u‖_L2 = ?
Theorem 7.
1. If u ∈ L2 [0, ∞), then Λu ∈ H2 .
2. If û ∈ H2, then there exists a u ∈ L2[0, ∞) such that û = Λu (onto).
Definition 8.
The inverse of the Laplace transform, Λ^{−1} : H2 → L2[0, ∞), is
    u(t) = (Λ^{−1}û)(t) = (1/2π) ∫_{−∞}^{∞} e^{σt} e^{ıωt} û(σ + ıω) dω
Lemma 9.
• Thus Λ is unitary.
• L2[0, ∞) and H2 are isomorphic.
This answers the earlier question:
    ‖Λ‖ = sup_{u ∈ L2} ‖Λu‖_H2 / ‖u‖_L2 = 1
Definition 10.
A function Ĝ : C̄+ → C^{n×m} is in H∞ if
1. Ĝ(s) is analytic on the right half-plane, C+.
2. lim_{σ→0+} Ĝ(σ + ıω) = Ĝ(ıω)
3. sup_{s ∈ C+} σ̄(Ĝ(s)) < ∞
Definition 11.
Given Ĝ ∈ H∞, define the multiplication operator M_Ĝ ∈ L(H2): for û ∈ H2,
    ŷ(s) = Ĝ(s)û(s)
is analytic.
• Thus M_Ĝ : H2 → H2.
• Thus Λ^{−1} M_Ĝ Λ maps L2[0, ∞) → L2[0, ∞).
Theorem 12.
G is a Causal, Linear, Time-Invariant Operator on L2 if and only if there exists
some Ĝ ∈ H∞ such that G = Λ−1 MĜ Λ.
(ΛGu)(ıω) = Ĝ(ıω)û(ıω)
Theorem 13.
Definition 14.
The space of rational functions is defined as
    R := { p(s)/q(s) : p, q are polynomials }
    RH2 = R ∩ H2
    RH∞ = R ∩ H∞
RH∞ is the set of proper rational functions with no poles in the closed right half-plane (CRHP).
Definition 15.
• A rational function r(s) = p(s)/q(s) is Proper if the degree of p is less than or equal to the degree of q.
• A rational function r(s) = p(s)/q(s) is Strictly Proper if the degree of p is less than the degree of q.
Proposition 1.
1. Ĝ ∈ RH∞ if and only if Ĝ is proper with no poles on the closed right
half-plane.
Theorem 16.
• For any stable state-space system, G, there exists some Ĝ ∈ RH∞ such
that
G = Λ−1 MĜ Λ
• For any Ĝ ∈ RH∞ , the operator G = Λ−1 MĜ Λ can be represented in
state-space for some A, B, C and D where A is Hurwitz.
Input Signals:
• w: Disturbance, Tracking Signal, etc.
▶ Exogenous inputs are those inputs to the system that cannot be manipulated.
• u: Output from controller
▶ Input to actuator
[Figure: plant P with exogenous input w, control input u, regulated output z, and measured output y]
For the linear system P, we have 4 subsystems:
    [ z ]   [ P11  P12 ] [ w ]
    [ y ] = [ P21  P22 ] [ u ]
    P11 : w ↦ z    P12 : u ↦ z
    P21 : w ↦ y    P22 : u ↦ y
The map from w to z is given by
    S(P, K) = P11 + P12 K (I − P22 K)^{−1} P21
Note that all Pij can themselves be MIMO.
M. Peet Lecture 01: 19 / 135
The Regulator
[Figure: feedback interconnection of controller K and plant P0, with process noise nproc entering at the plant input and sensor noise nsensor entering at the measurement]
Suppose P0 is
    ẋ = Ax + Bq
    r = Cx + Dq
Define
    z1 = r    nproc = w1
    z2 = u    nsensor = w2
Substituting q = w1 + u and y = r + w2 leads to the reconfigured plant
    [ z1(t) ]   [ P0  0  P0 ] [ w1(t) ]
    [ z2(t) ] = [ 0   0  I  ] [ w2(t) ]
    [ y(t)  ]   [ P0  I  P0 ] [ u(t)  ]
If P0 = (A, B, C, D), then
        [ A  B  0  B ]
    P = [ C  D  0  D ]
        [ 0  0  0  I ]
        [ C  D  I  D ]
[Figure: a System with disturbance wsystem, control usystem, regulated output zsystem, and measurement ysystem, in feedback with a Controller that additionally accepts command inputs and produces diagnostic outputs]
Formulate the above as
[Figure: a single reconfigured Plant with exogenous inputs (wsystem, wcommands), regulated outputs (zsystem, zdiag), measured outputs (ysystem, ycommands) fed to the Controller, and controller outputs (usystem, udiag)]
[Figure: tracking interconnection of controller K and plant P0, with weights Werr, Wact, Wproc, and Wsens on the error, actuator, process-noise, and sensor-noise channels]
    e = tracking error        r = tracking input
    nproc = process noise     nsensor = sensor noise
Define q = nproc + u. Then, with
    r = w1    nproc = w2    nsensor = w3,
we have
    z1 = e = r − P0(nproc + u)
    z2 = u
    y = [ y1 ] = [ r             ] = [ w1               ]
        [ y2 ]   [ nsensor + P0 q ]   [ w3 + P0(w2 + u) ]
which leads to
        [ I  −P0  0  −P0 ]
    P = [ 0   0   0   I  ]
        [ I   0   0   0  ]
        [ 0   P0  I   P0 ]
Linear Fractional Transformation (LFT)
[Figure: feedback interconnection of plant P and controller K with signals z, w, y, u]
Plant:
    [ z ]   [ P11  P12 ] [ w ]              [ A   B1   B2  ]
    [ y ] = [ P21  P22 ] [ u ]   where  P = [ C1  D11  D12 ]
                                            [ C2  D21  D22 ]
The map from w to z is given by
    S(P, K) = P11 + P12 K (I − P22 K)^{−1} P21
Controller:
    u = Ky   where   K = [ AK  BK ]
                         [ CK  DK ]
Linear Fractional Transformation
z = P11 w + P12 u
y = P21 w + P22 u
u = Ky
Solving for u,
u = KP21 w + KP22 u
Thus
(I − KP22 )u = KP21 w
u = (I − KP22 )−1 KP21 w
Hence the map from w to z is given by
    z = [P11 + P12 (I − KP22)^{−1} K P21] w
Other Fractional Transformations
[Figure: feedback interconnection of P and K with signals z, w, y, u]
    S(P, K) := P11 + P12 (I − KP22)^{−1} K P21
    S̄(P, K) := P22 + P21 K (I − P11 K)^{−1} P12
[Figure: two-port interconnection of P and K with signal pairs (z1, w1) and (z2, w2)]
    S(P, K) := [ S(P, K11)                     P12 (I − K11 P22)^{−1} K12 ]
               [ K21 (I − P22 K11)^{−1} P21    S̄(K, P22)                  ]
Well-Posedness
The interconnection doesn’t always make sense. Suppose
        [ A   B1   B2  ]
    P = [ C1  D11  D12 ]   and   K = [ AK  BK ]
        [ C2  D21  D22 ]             [ CK  DK ]
Definition 17.
The interconnection S(P, K) is well-posed if for any smooth w and any x(0)
and xK (0), there exist functions x, xK , u, y, z such that
From
u(t) = DK y(t) + CK xK (t)
y(t) = D22 u(t) + C2 x(t) + D21 w(t)
We have
    [ I    −DK ] [ u(t) ]   [ 0   CK ] [ x(t)  ]   [ 0   ]
    [ −D22  I  ] [ y(t) ] = [ C2  0  ] [ xK(t) ] + [ D21 ] w(t)
Proposition 2.
The interconnection S(P, K) is well-posed if and only if (I − D22 DK ) is
invertible.
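Proposition 2 rests on a Schur-complement determinant identity: the 2×2 block matrix above is invertible exactly when I − D22·DK is. A minimal numeric sanity check (the matrices below are arbitrary random values, not from any particular plant):

```python
import numpy as np

rng = np.random.default_rng(0)
DK = rng.standard_normal((2, 2))
D22 = rng.standard_normal((2, 2))

# Block matrix from the well-posedness condition
M = np.block([[np.eye(2), -DK],
              [-D22, np.eye(2)]])

# Schur complement identity: det(M) = det(I) * det(I - D22 @ DK)
lhs = np.linalg.det(M)
rhs = np.linalg.det(np.eye(2) - D22 @ DK)
print(np.isclose(lhs, rhs))  # True
```

So the block system for (u, y) has a unique solution precisely when the Schur complement I − D22·DK is nonsingular.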
Definition 18.
The Optimal H∞-Control Problem is: choose K to minimize ‖S(P, K)‖_H∞.
Definition 19.
The Optimal H2-Control Problem is: choose K to minimize ‖S(P, K)‖_H2 subject to S(P, K) ∈ H∞.
Choose K to minimize ‖S(P, K)‖_H∞. Equivalently, choose [ AK BK ; CK DK ] to minimize the H∞-norm of the closed-loop system with realization
    Acl = [ A 0 ; 0 AK ] + [ B2 0 ; 0 BK ][ I −DK ; −D22 I ]^{−1}[ 0 CK ; C2 0 ]
    Bcl = [ B1 + B2 DK Q D21 ; BK Q D21 ]
    Ccl = [ C1 0 ] + [ D12 0 ][ I −DK ; −D22 I ]^{−1}[ 0 CK ; C2 0 ]
    Dcl = D11 + D12 DK Q D21
where Q = (I − D22 DK)^{−1}.
Realizability
[Figure: block diagram of the interconnection of P11, P12, P21, P22 with K; R denotes the closed inner loop]
Let R = (I − KP22)^{−1}K. Then
    ‖P11 + P12 (I − KP22)^{−1} K P21‖_H∞ = ‖P11 + P12 R P21‖_H∞
Problems:
• How to optimize ‖·‖_H∞?
• Is the controller stable?
▶ Does the inverse (I + RP22)^{−1} exist? Yes.
▶ Is it a bounded linear operator?
▶ In which space?
▶ Youla parameterization
(γ, x) ∈ S 0
where S 0 := {(γ, x) : γ − f0 (x) ≥ 0, fi (x) ≥ 0, i = 1, · · · , k}.
• Two optimization problems are Equivalent if a solution to one can be used
to construct a solution to the other.
Convexity
Definition 20.
A set Q is convex if for any x, y ∈ Q and any λ ∈ [0, 1], λx + (1 − λ)y ∈ Q.
Conclusion:
• Convex Optimization includes positivity induced from any partial ordering.
• In particular, we focus on Matrix Positivity.
Definition 23.
A symmetric matrix P ∈ Sn is Positive Semidefinite, denoted P ≥ 0 if
xT P x ≥ 0 for all x ∈ Rn
Definition 24.
A symmetric matrix P ∈ Sn is Positive Definite, denoted P > 0, if x^T P x > 0 for all x ≠ 0.
• P is Negative Semidefinite if −P ≥ 0
• P is Negative Definite if −P > 0
• A matrix which is neither Positive nor Negative Semidefinite is Indefinite
The set of positive or negative matrices is a convex cone.
Lemma 25.
P ∈ Sn is positive definite if and only if all its eigenvalues are positive.
Lemma 26.
For any P > 0, there exists a positive square root, P^{1/2} > 0, such that P = P^{1/2} P^{1/2}.
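Lemmas 25 and 26 can both be checked numerically via an eigendecomposition; a minimal sketch (the matrix P below is an arbitrary illustrative example):

```python
import numpy as np

# Build an arbitrary symmetric positive definite P
A = np.array([[2.0, 1.0], [0.0, 1.0]])
P = A @ A.T + np.eye(2)          # P > 0 by construction

# Eigendecomposition: P = V diag(d) V^T
d, V = np.linalg.eigh(P)
assert np.all(d > 0)             # all eigenvalues positive <=> P > 0 (Lemma 25)

# Square root: P^(1/2) = V diag(sqrt(d)) V^T (Lemma 26)
P_half = V @ np.diag(np.sqrt(d)) @ V.T

print(np.allclose(P_half @ P_half, P))  # True
```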
    minimize    trace CX
    subject to  trace Ai X = bi for all i
                X ⪰ 0

    minimize    c^T x
    subject to  F0 + x1 F1 + x2 F2 + · · · + xn Fn ⪰ 0

    Find X :
        Σ_i Ai X Bi + Q > 0
▶ See http://www.mathworks.com/help/robust/lmis.html
• Stability
    A^T X + XA ≺ 0
    X ≻ 0
• Stabilization
    AX + BZ + XA^T + Z^T B^T ≺ 0
    X ≻ 0
• H2 Synthesis
    min Tr(W) :
    [A B2][X; Z] + [X Z^T][A^T; B2^T] + B1 B1^T ≺ 0
    [ X           (CX + DZ)^T ]
    [ CX + DZ     W           ] ≻ 0
We will go beyond these examples.
Lyapunov Theory
ẋ(t) = f (x(t))
Theorem 27 (Lyapunov).
Suppose there exists a continuously differentiable function V for which V(0) = 0 and V(x) > 0 for x ≠ 0. Furthermore, suppose lim_{‖x‖→∞} V(x) = ∞ and V̇(x(t)) < 0 along all trajectories of ẋ(t) = f(x(t)). Then the system is globally asymptotically stable.
For the linear system ẋ(t) = Ax(t), stability is certified by a P > 0 satisfying
    A^T P + P A < 0
Proof.
Suppose there exists a P > 0 such that AT P + P A < 0.
• Define the Lyapunov function V (x) = xT P x.
• Then V (x) > 0 for x 6= 0 and V (0) = 0.
• Furthermore,
    V̇(x(t)) = ẋ(t)^T P x(t) + x(t)^T P ẋ(t)
             = x(t)^T A^T P x(t) + x(t)^T P A x(t)
             = x(t)^T (A^T P + P A) x(t)
• Hence V̇ (x(t)) < 0 for all x 6= 0. Thus the system is globally stable.
• Global stability implies A is Hurwitz.
The Lyapunov Inequality
Proof.
For the other direction, if A is Hurwitz, let
    P = ∫_0^∞ e^{A^T s} e^{A s} ds
• Then P > 0 and
    P A + A^T P = ∫_0^∞ d/ds [ e^{A^T s} e^{A s} ] ds = [ e^{A^T s} e^{A s} ]_0^∞ = −I < 0.
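This converse construction can be reproduced numerically with scipy's Lyapunov solver; a sketch (the Hurwitz A below is an arbitrary example):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0], [0.0, -3.0]])   # Hurwitz: eigenvalues -1, -3

# Solve A^T P + P A = -I, the Lyapunov equation from the proof
P = solve_continuous_lyapunov(A.T, -np.eye(2))

# P is the certificate: P > 0 and A^T P + P A = -I < 0
assert np.all(np.linalg.eigvalsh(P) > 0)
print(np.allclose(A.T @ P + P @ A, -np.eye(2)))  # True
```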
Other Versions:
Lemma 29.
(A, B) is controllable if and only if there exists an X > 0 such that
    AX + XA^T + BB^T ≤ 0
Lemma 30.
(C, A) is observable if and only if there exists an X > 0 such that
    A^T X + XA + C^T C ≤ 0
Definition 31.
The Static State-Feedback Problem is to find a feedback matrix K such that
    ẋ(t) = (A + BK)x(t)
is stable.
    Find X > 0, K :
    X(A + BK) + (A + BK)^T X < 0
Definition 32.
Two optimization problems are equivalent if a solution to one will provide a
solution to the other.
Theorem 33.
Problem 1 is equivalent to Problem 2.
The Dual Lyapunov Equation
Problem 1: Find X > 0 such that XA + A^T X < 0.
Problem 2: Find Y > 0 such that Y A^T + AY < 0.
Lemma 34.
Problem 1 is equivalent to problem 2.
Proof.
First we show 1) solves 2). Suppose X > 0 is a solution to Problem 1. Let Y = X^{−1} > 0.
• If XA + A^T X < 0, then
    AY + Y A^T = X^{−1}(XA + A^T X)X^{−1} < 0.
Proof.
Now we show 2) solves 1) in a similar manner. Suppose Y > 0 is a solution to Problem 2. Let X = Y^{−1} > 0.
• Then
    XA + A^T X = X(A X^{−1} + X^{−1} A^T)X = X(AY + Y A^T)X < 0
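The congruence argument above is easy to confirm numerically; a sketch with an arbitrary Hurwitz A:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, 1.0], [0.0, -1.0]])

# X solves Problem 1: X A + A^T X = -I < 0, with X > 0
X = solve_continuous_lyapunov(A.T, -np.eye(2))
assert np.all(np.linalg.eigvalsh(X) > 0)

# Y = X^{-1} then solves Problem 2: Y A^T + A Y < 0
Y = np.linalg.inv(X)
M = Y @ A.T + A @ Y
print(np.all(np.linalg.eigvalsh(M) < 0))  # True
```

Indeed M = X^{−1}(XA + A^T X)X^{−1} = −X^{−2}, which is negative definite.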
Theorem 35.
Problem 1 is equivalent to Problem 2.
Proof.
We will show that 2) Solves 1). Suppose X > 0, Z solves 2). Let P = X > 0 and K = ZP^{−1}. Then Z = KP and
    AP + PA^T + BZ + Z^T B^T = (A + BK)P + P(A + BK)^T < 0.
Now suppose that P > 0 and K solve 1). Let X = P > 0 and Z = KP . Then
AP + P AT + BZ + Z T B T = (A + BK)P + P (A + BK)T < 0
The Stabilization Problem
Theorem 36.
(A, B) is static-state-feedback stabilizable if and only if there exists some P > 0
and Z such that
AP + P AT + BZ + Z T B T < 0
with u(t) = ZP −1 x(t).
Standard Format:
    [A B][P; Z] + [P Z^T][A^T; B^T] < 0
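To see Theorem 36 in action without an LMI solver, one can construct a stabilizing K by any means (here LQR via the Riccati equation — an illustrative shortcut, not the synthesis method of the theorem), then recover a feasible pair (P, Z) and verify the inequality:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Unstable open-loop pair (A, B); values are arbitrary
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0], [1.0]])

# LQR gain K = -R^{-1} B^T S, with S from the Riccati equation (Q = I, R = I)
S = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = -B.T @ S
Acl = A + B @ K
assert np.all(np.linalg.eigvals(Acl).real < 0)   # A + BK is Hurwitz

# Recover P > 0 from (A+BK) P + P (A+BK)^T = -I, and set Z = K P
P = solve_continuous_lyapunov(Acl, -np.eye(2))
Z = K @ P

# Verify the stabilization LMI: A P + P A^T + B Z + Z^T B^T < 0
M = A @ P + P @ A.T + B @ Z + Z.T @ B.T
print(np.all(np.linalg.eigvalsh(M) < 0))  # True
```

By construction M = (A+BK)P + P(A+BK)^T = −I, so the LMI holds with u(t) = ZP^{−1}x(t) = Kx(t).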
Proof.
• Now we have V̇(x(t)) − (γ − 1/γ)‖u(t)‖² + ‖y(t)‖² < 0.
• Integrating in time, we get
    ∫_0^T [ V̇(x(t)) − (γ − 1/γ)‖u(t)‖² + ‖y(t)‖² ] dt
    = V(x(T)) − V(x(0)) − (γ − 1/γ) ∫_0^T ‖u(t)‖² dt + ∫_0^T ‖y(t)‖² dt < 0
• Since V(x(T)) ≥ 0 and V(x(0)) = 0, this gives
    ‖y‖²_L2 < (γ − 1/γ)‖u‖²_L2
• By definition, this means ‖G‖²_H∞ ≤ γ − 1/γ < γ², or
    ‖G‖_H∞ < γ
Lemma 39.
Suppose
    Ĝ(s) = [ A  B ; C  D ].
Then the following are equivalent.
• G is passive. i.e. (hu, GuiL2 ≥ 0).
• There exists a P > 0 such that
    [ A^T P + P A   P B − C^T ]
    [ B^T P − C     −D^T − D  ] ≤ 0
[Figure: feedback interconnection of plant P and controller K with signals z, w, y, u]
Plant:
    [ z ]   [ P11  P12 ] [ w ]              [ A   B1   B2  ]
    [ y ] = [ P21  P22 ] [ u ]   where  P = [ C1  D11  D12 ]
                                            [ C2  D21  D22 ]
The map from w to z is given by
    S(P, K) = P11 + P12 K (I − P22 K)^{−1} P21
Controller:
    u = Ky   where   K = [ AK  BK ]
                         [ CK  DK ]
Optimal Control
Choose K to minimize ‖S(P, K)‖_H∞. Equivalently, choose [ AK BK ; CK DK ] to minimize the H∞-norm of the closed-loop system with realization
    Acl = [ A 0 ; 0 AK ] + [ B2 0 ; 0 BK ][ I −DK ; −D22 I ]^{−1}[ 0 CK ; C2 0 ]
    Bcl = [ B1 + B2 DK Q D21 ; BK Q D21 ]
    Ccl = [ C1 0 ] + [ D12 0 ][ I −DK ; −D22 I ]^{−1}[ 0 CK ; C2 0 ]
    Dcl = D11 + D12 DK Q D21
where Q = (I − D22 DK)^{−1}.
[Figure: feedback interconnection of P and K with signals z, w, y, u]
By the KYP lemma, ‖S(P̂, K̂)‖_H∞ < γ if and only if there exists some X > 0 such that
    [ (A + B2 F)^T X + X(A + B2 F)   X B1 ]       1  [ (C1 + D12 F)^T ]
    [ B1^T X                         −γI  ]  +  ——— [ D11^T           ] [ (C1 + D12 F)  D11 ] < 0
                                                 γ
To apply the variable substitution trick, we must also construct the dual form of
this LMI.
Lemma 41 (KYP Dual).
Suppose
    Ĝ(s) = [ A  B ; C  D ].
Then the following are equivalent.
• kGkH∞ ≤ γ.
• There exists a Y > 0 such that
    [ Y A^T + AY   B     Y C^T ]
    [ B^T          −γI   D^T   ] < 0
    [ CY           D     −γI   ]
Applying a congruence transformation with X = Y^{−1}, this holds if and only if X > 0 and
    [ Y^{−1} 0 0 ]   [ Y A^T + AY   B     Y C^T ]   [ Y^{−1} 0 0 ]
    [ 0      I 0 ]   [ B^T          −γI   D^T   ]   [ 0      I 0 ]
    [ 0      0 I ]   [ CY           D     −γI   ]   [ 0      0 I ]

      [ A^T X + XA   XB    C^T ]
    = [ B^T X        −γI   D^T ] < 0.
      [ C            D     −γI ]
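For intuition, the H∞ norm that these LMIs certify can be estimated directly as the peak of the largest singular value over frequency. A sketch, using the hypothetical scalar example G(s) = 1/(s+1), whose H∞ norm is known to be 1:

```python
import numpy as np

# State-space data for G(s) = 1/(s+1)
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[0.0]])

def sigma_max(w):
    """Largest singular value of G(jw) = C (jwI - A)^{-1} B + D."""
    G = C @ np.linalg.inv(1j * w * np.eye(1) - A) @ B + D
    return np.linalg.svd(G, compute_uv=False)[0]

# Sweep a log-spaced frequency grid; the peak is at w -> 0 for this G
freqs = np.logspace(-3, 3, 2000)
hinf_est = max(sigma_max(w) for w in freqs)
print(round(hinf_est, 3))  # 1.0
```

A frequency sweep gives only a lower-bound estimate on a grid; the KYP LMI, by contrast, is an exact certificate.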
Theorem 42.
The following are equivalent:
• There exists an F such that kS(P, K(0, 0, 0, F ))kH∞ ≤ γ.
• There exist Y > 0 and Z such that
    [ Y A^T + AY + Z^T B2^T + B2 Z   B1    Y C1^T + Z^T D12^T ]
    [ B1^T                           −γI   D11^T              ] < 0
    [ C1 Y + D12 Z                   D11   −γI                ]
Let Z = F Y. Then
    [ Y A^T + Z^T B2^T + AY + B2 Z   B1    Y C1^T + Z^T D12^T ]
    [ B1^T                           −γI   D11^T              ]
    [ C1 Y + D12 Z                   D11   −γI                ]

      [ Y A^T + Y F^T B2^T + AY + B2 F Y   B1    Y C1^T + Y F^T D12^T ]
    = [ B1^T                               −γI   D11^T                ]
      [ C1 Y + D12 F Y                     D11   −γI                  ]

      [ Y (A + B2 F)^T + (A + B2 F)Y   B1    Y (C1 + D12 F)^T ]
    = [ B1^T                           −γI   D11^T            ] < 0.
      [ (C1 + D12 F)Y                  D11   −γI              ]

Let F = Z Y^{−1}. Then
    [ Y (A + B2 F)^T + (A + B2 F)Y   B1    Y (C1 + D12 F)^T ]
    [ B1^T                           −γI   D11^T            ]
    [ (C1 + D12 F)Y                  D11   −γI              ]

      [ Y A^T + Y F^T B2^T + AY + B2 F Y   B1    Y C1^T + Y F^T D12^T ]
    = [ B1^T                               −γI   D11^T                ]
      [ C1 Y + D12 F Y                     D11   −γI                  ]

      [ Y A^T + Z^T B2^T + AY + B2 Z   B1    Y C1^T + Z^T D12^T ]
    = [ B1^T                           −γI   D11^T              ] < 0
      [ C1 Y + D12 Z                   D11   −γI                ]
Full-State Feedback Optimal Control
Form B
    min_{γ,Y,Z} γ :
    [ −Y   0                               0     0                  ]
    [ 0    Y A^T + AY + Z^T B2^T + B2 Z    B1    Y C1^T + Z^T D12^T ] < 0
    [ 0    B1^T                            −γI   D11^T              ]
    [ 0    C1 Y + D12 Z                    D11   −γI                ]
[Figure: feedback interconnection of plant P and controller K with signals z, w, y, u]
Plant:
    [ z ]   [ P11  P12 ] [ w ]              [ A   B1   B2  ]
    [ y ] = [ P21  P22 ] [ u ]   where  P = [ C1  D11  D12 ]
                                            [ C2  D21  D22 ]
The map from w to z is given by
    S(P, K) = P11 + P12 K (I − P22 K)^{−1} P21
Controller:
    u = Ky   where   K = [ AK  BK ]
                         [ CK  DK ]
Optimal Control
Choose K to minimize ‖S(P, K)‖_H∞. Equivalently, choose [ AK BK ; CK DK ] to minimize the H∞-norm of the closed-loop system with realization
    Acl = [ A 0 ; 0 AK ] + [ B2 0 ; 0 BK ][ I −DK ; −D22 I ]^{−1}[ 0 CK ; C2 0 ]
    Bcl = [ B1 + B2 DK Q D21 ; BK Q D21 ]
    Ccl = [ C1 0 ] + [ D12 0 ][ I −DK ; −D22 I ]^{−1}[ 0 CK ; C2 0 ]
    Dcl = D11 + D12 DK Q D21
where Q = (I − D22 DK)^{−1}.
Likewise,
    Ccl := [ C1 0 ] + [ D12 0 ][ I + DK Q D22   DK Q ; Q D22   Q ][ 0 CK ; C2 0 ]
         = [ C1 + D12 DK Q C2    D12 (I + DK Q D22) CK ]
Thus we have
    [ A + B2 DK Q C2      B2 (I + DK Q D22) CK     B1 + B2 DK Q D21   ]
    [ BK Q C2             AK + BK Q D22 CK         BK Q D21           ]
    [ C1 + D12 DK Q C2    D12 (I + DK Q D22) CK    D11 + D12 DK Q D21 ]
Define the new controller variables
    AK2 = AK + BK Q D22 CK      BK2 = BK Q
    CK2 = (I + DK Q D22) CK     DK2 = DK Q
This implies that
    CK = (I − DK D22) CK2
and
    (I + DK2 D22) DK = DK2,
which can be inverted to get DK = (I + DK2 D22)^{−1} DK2.
Proof.
• Since
    [ Y1  I ; I  X1 ] > 0,
by the Schur complement, X1 > 0 and Y1 − X1^{−1} > 0. Since I − X1 Y1 = X1 (X1^{−1} − Y1) and X1^{−1} − Y1 is invertible, we conclude that I − X1 Y1 is invertible.
• Choose any two square invertible matrices X2 and Y2 such that
    X2 Y2^T = I − X1 Y1
Proof.
• Now define X and Y as the solutions of X Ycl = Xcl and Y Xcl = Ycl. Then
    XY = (Xcl Ycl^{−1})(Ycl Xcl^{−1}) = I
Likewise, Y X = I. Hence Y = X^{−1}.
then
    [ Y1  I ; I  X1 ] > 0
and Ycl = [ Y1  I ; Y2^T  0 ] has full column rank. Hence
    Ycl^T = [ Y1  Y2 ; I  0 ]
has full column rank. Now, since XY = I implies X1 Y1 + X2 Y2^T = I, we have, with Xcl = [ I  0 ; X1  X2 ],
    Xcl Ycl = [ I  0 ; X1  X2 ][ Y1  I ; Y2^T  0 ] = [ Y1  I ; X1 Y1 + X2 Y2^T  X1 ] = [ Y1  I ; I  X1 ]
Proof: If.
Define
    [ AK2  BK2 ]   [ X2  X1 B2 ]^{−1} ( [ An  Bn ]   [ X1 A Y1  0 ] ) [ Y2^T   0 ]^{−1}
    [ CK2  DK2 ] = [ 0   I     ]       ( [ Cn  Dn ] − [ 0        0 ] ) [ C2 Y1  I ]
As discussed previously, this means the closed-loop system is
    [ Acl  Bcl ]   [ A   0  B1  ]   [ 0  B2  ]
    [ Ccl  Dcl ] = [ 0   0  0   ] + [ I  0   ] [ AK2  BK2 ] [ 0   I  0   ]
                   [ C1  0  D11 ]   [ 0  D12 ] [ CK2  DK2 ] [ C2  0  D21 ]
                 = [ A 0 B1 ; 0 0 0 ; C1 0 D11 ]
                   + [ 0 B2 ; I 0 ; 0 D12 ][ X2  X1 B2 ; 0  I ]^{−1} ([ An  Bn ; Cn  Dn ] − [ X1 A Y1  0 ; 0  0 ])[ Y2^T  0 ; C2 Y1  I ]^{−1} [ 0 I 0 ; C2 0 D21 ]
Proof: If.
Expanding out, we obtain
    [ Ycl  0  0 ]^T [ Acl^T X + X Acl   X Bcl   Ccl^T ] [ Ycl  0  0 ]
    [ 0    I  0 ]   [ Bcl^T X           −γI     Dcl^T ] [ 0    I  0 ] =
    [ 0    0  I ]   [ Ccl               Dcl     −γI   ] [ 0    0  I ]

    [ A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T   ∗^T                                 ∗^T                  ∗^T ]
    [ A^T + An + [B2 Dn C2]^T             X1 A + A^T X1 + Bn C2 + C2^T Bn^T   ∗^T                  ∗^T ]
    [ [B1 + B2 Dn D21]^T                  [X1 B1 + Bn D21]^T                  −γI                  ∗^T ] < 0
    [ C1 Y1 + D12 Cn                      C1 + D12 Dn C2                      D11 + D12 Dn D21     −γI ]
Hence, by the KYP lemma, S(P, K) = [ Acl  Bcl ; Ccl  Dcl ] satisfies
    ‖S(P, K)‖_H∞ < γ.
Because the inequalities are strict, we can assume that X2 has full row rank. Define
    Y = [ Y1  Y2 ; Y2^T  Y3 ] = X^{−1}   and   Ycl = [ Y1  I ; Y2^T  0 ]
Then, according to the converse transformation lemma, Ycl has full row rank and
    [ X1  I ; I  Y1 ] > 0.
Optimal Output Feedback Control
Proof: Only If.
Now, using the given AK, BK, CK, DK, define the variables
    [ An  Bn ]   [ X2  X1 B2 ] [ AK2  BK2 ] [ Y2^T   0 ]   [ X1 A Y1  0 ]
    [ Cn  Dn ] = [ 0   I     ] [ CK2  DK2 ] [ C2 Y1  I ] + [ 0        0 ]
where
    AK2 = AK + BK (I − D22 DK)^{−1} D22 CK      BK2 = BK (I − D22 DK)^{−1}
    CK2 = (I + DK (I − D22 DK)^{−1} D22) CK     DK2 = DK (I − D22 DK)^{−1}
Then, as before,
    [ Acl  Bcl ]   [ A   0  B1  ]   [ 0  B2  ]
    [ Ccl  Dcl ] = [ 0   0  0   ] + [ I  0   ] [ AK2  BK2 ] [ 0   I  0   ]
                   [ C1  0  D11 ]   [ 0  D12 ] [ CK2  DK2 ] [ C2  0  D21 ]
where
    [ AK2  BK2 ]   [ X2  X1 B2 ]^{−1} ( [ An  Bn ]   [ X1 A Y1  0 ] ) [ Y2^T   0 ]^{−1}
    [ CK2  DK2 ] = [ 0   I     ]       ( [ Cn  Dn ] − [ 0        0 ] ) [ C2 Y1  I ]
Theorem 46.
For an LTI system P, if w is white noise with unit spectral density and z = P w, then
    E[z(t)^T z(t)] = ‖P‖²_H2
Now suppose the noise is colored, with spectral density Ŝw(ıω). Define Ĥ by Ĥ(ıω)Ĥ(ıω)* = Ŝw(ıω) and define the filtered system
    P̂s(s) = [ P̂11(s)Ĥ(s)   P̂12(s) ]
             [ P̂21(s)Ĥ(s)   P̂22(s) ]
Now the spectral density Ŝz of the output of the true plant under colored noise equals that of the artificial plant under white noise, i.e.
    Ŝz(s) = S(P, K)(s) Ŝw(s) S(P, K)(s)*
          = S(P, K)(s) Ĥ(s) Ĥ(s)* S(P, K)(s)* = S(Ps, K)(s) S(Ps, K)(s)*
Thus if K minimizes the H2-norm of the filtered plant (‖S(Ps, K)‖²_H2), it will minimize the output variance of the true plant under the influence of colored noise with density Ŝw.
Theorem 47.
Suppose P̂(s) = C(sI − A)^{−1}B. Then the following are equivalent.
1. A is Hurwitz and ‖P̂‖_H2 < γ.
2. There exists some X > 0 such that
    AX + XA^T + BB^T < 0   and   Trace(C X C^T) < γ².
Proof.
Suppose A is Hurwitz and ‖P̂‖_H2 < γ. Then the Controllability Grammian is defined as
    Xc = ∫_0^∞ e^{At} B B^T e^{A^T t} dt
Now recall the Laplace transform
    (Λ e^{At})(s) = ∫_0^∞ e^{At} e^{−ts} dt
                  = ∫_0^∞ e^{−(sI−A)t} dt
                  = [ −(sI − A)^{−1} e^{−(sI−A)t} ]_{t=0}^{t=∞}
                  = (sI − A)^{−1}
Hence (Λ C e^{At} B)(s) = C(sI − A)^{−1} B.
Proof.
(Λ C e^{At} B)(s) = C(sI − A)^{−1}B implies, by Parseval's identity, that ‖P̂‖²_H2 = Trace(C Xc C^T).
Proof.
Likewise, Trace(B^T Xo B) = ‖P̂‖²_H2, where Xo is the observability Grammian. To show that we can take the inequality X > 0 to be strict, we simply let
    X = ∫_0^∞ e^{At} (B B^T + εI) e^{A^T t} dt
for ε > 0 sufficiently small.
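The two Grammian characterizations of the H2 norm agree; a numeric sketch (the stable system below is an arbitrary example):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability Grammian: A Xc + Xc A^T + B B^T = 0
Xc = solve_continuous_lyapunov(A, -B @ B.T)
# Observability Grammian: A^T Xo + Xo A + C^T C = 0
Xo = solve_continuous_lyapunov(A.T, -C.T @ C)

# ||P||_H2^2 = Trace(C Xc C^T) = Trace(B^T Xo B)
h2_c = np.trace(C @ Xc @ C.T)
h2_o = np.trace(B.T @ Xo @ B)
print(np.isclose(h2_c, h2_o))  # True
```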
Theorem 48.
The following are equivalent.
1. ‖S(K, P)‖_H2 < γ.
2. K = Z X^{−1} for some Z and X > 0, where
    [A B2][X; Z] + [X Z^T][A^T; B2^T] + B1 B1^T < 0
    Trace( (C1 X + D12 Z) X^{−1} (C1 X + D12 Z)^T ) < γ²
Theorem 49.
The following are equivalent.
1. ‖S(K, P)‖_H2 < γ.
2. K = Z X^{−1} for some Z and X > 0, where
    [A B2][X; Z] + [X Z^T][A^T; B2^T] + B1 B1^T < 0
    [ X              (C1 X + D12 Z)^T ]
    [ C1 X + D12 Z   W                ] > 0
    Trace W < γ²
Applying the Schur Complement gives the alternative formulation convenient for
control.
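The Schur complement step can be sanity-checked numerically: for X > 0, the block matrix [ X, M^T ; M, W ] is positive definite exactly when W − M X^{−1} M^T is. A sketch with arbitrary illustrative data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.eye(3) + 0.1 * np.ones((3, 3))        # X > 0
M = rng.standard_normal((2, 3))
W = M @ np.linalg.inv(X) @ M.T + np.eye(2)   # makes the Schur complement equal I > 0

block = np.block([[X, M.T], [M, W]])
schur = W - M @ np.linalg.inv(X) @ M.T

both_pd = (np.all(np.linalg.eigvalsh(block) > 0)
           and np.all(np.linalg.eigvalsh(schur) > 0))
print(both_pd)  # True
```

This is exactly the trick that converts the nonlinear trace condition of Theorem 48 into the linear matrix inequality of Theorem 49.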
Theorem 50.
Suppose P̂ (s) = C(sI − A)−1 B. Then the following are equivalent.
1. A is Hurwitz and kP̂ kH2 < γ.
2. There exist some X, Z > 0 such that
    [ A^T X + XA   XB  ]          [ X  C^T ]
    [ B^T X        −γI ] < 0,     [ C  Z   ] > 0,     Trace Z < γ²
Theorem 51 (Lall).
The following are equivalent.
• There exists a K̂ = [ AK  BK ; CK  DK ] such that ‖S(K, P)‖_H2 < γ.
• There exist X1 , Y1 , Z, An , Bn , Cn , Dn such that
    [ A Y1 + Y1 A^T + B2 Cn + Cn^T B2^T   ∗^T                                 ∗^T ]
    [ A^T + An + [B2 Dn C2]^T             X1 A + A^T X1 + Bn C2 + C2^T Bn^T   ∗^T ] < 0,
    [ [B1 + B2 Dn D21]^T                  [X1 B1 + Bn D21]^T                  −I  ]

    [ Y1               I                 ∗^T ]
    [ I                X1                ∗^T ] > 0,
    [ C1 Y1 + D12 Cn   C1 + D12 Dn C2    Z   ]

    D11 + D12 Dn D21 = 0,   trace(Z) < γ²
[Figure: feedback interconnection of the nominal system M and the uncertainty ∆, with signals p and q]
Questions:
• Is S(∆, M) stable for all ∆ ∈ ∆?
• Determine
    sup_{∆ ∈ ∆} ‖S(∆, M)‖_H∞.
Definition 52.
We say the pair (M, ∆) is Robustly Stable if (I − M22 ∆) is invertible for all
∆ ∈ ∆.
∆ := {∆ ∈ Rn×n : k∆k ≤ 1}
Theorem 54.
If ẋ(t) = A(t)x(t) is Quadratically Stable, then it is stable for all A ∈ ∆.
Let ∆ := {∆ ∈ R^{n×n} : ‖∆‖ ≤ 1}. Then ẋ(t) = A(t)x(t) is quadratically stable for all A ∈ ∆ if and only if there exists some P > 0 such that
    x(t)^T P (A0 x(t) + M p) + (A0 x(t) + M p)^T P x(t) < 0
for all p ∈ {p : p = ∆q, q = N x + Q p, ∆ ∈ ∆}.
Theorem 56.
The system
    ẋ(t) = A x(t) + M p(t),   q(t) = N x(t) + Q p(t),   p(t) = ∆ q(t)
is quadratically stable if and only if there exists some P > 0 such that
    [ x ]^T [ A^T P + P A   P M ] [ x ]
    [ y ]   [ M^T P         0   ] [ y ] < 0
for all
    [ x ]    {  [ x ]   [ x ]^T [ −N^T N   −N^T Q    ] [ x ]      }
    [ y ] ∈  {  [ y ] : [ y ]   [ −Q^T N   I − Q^T Q ] [ y ] ≤ 0  }
If.
If
    [ x ]^T [ A^T P + P A   P M ] [ x ]
    [ y ]   [ M^T P         0   ] [ y ] < 0
for all [x; y] such that
    [ x ]^T [ −N^T N   −N^T Q    ] [ x ]
    [ y ]   [ −Q^T N   I − Q^T Q ] [ y ] ≤ 0,
then
    x^T P (A x + M y) + (A x + M y)^T P x < 0
for all x, y such that
    ‖y‖² ≤ ‖N x + Q y‖².
Therefore, since p = ∆q implies ‖p‖ ≤ ‖q‖, we have quadratic stability.
The only if direction is similar.
Corollary 57 (S-Procedure).
z^T F z ≥ 0 for all z ∈ {x : x^T G x ≥ 0} if there exists a τ ≥ 0 such that F − τG ⪰ 0.
The S-procedure is Necessary if {x : x^T G x > 0} ≠ ∅.
Theorem 58.
The system
    ẋ(t) = A x(t) + M p(t),   q(t) = N x(t) + Q p(t),   p(t) = ∆ q(t)
is quadratically stable if and only if there exists some µ ≥ 0 and P > 0 such that
    [ A P + P A^T   P N^T ]       [ M M^T    M Q^T     ]
    [ N P           0     ] + µ   [ Q M^T    Q Q^T − I ] < 0
Definition 59.
(Sl , ∆) is QS if
Definition 61.
Given system M ∈ L(L2 ) and set ∆ as above, we define the Structured
Singular Value of (M, ∆) as
    µ(M, ∆) = 1 / ( inf { ‖∆‖ : ∆ ∈ ∆, I − M22 ∆ is singular } )
Theorem 62.
Let
    ∆n = {∆ ∈ ∆ : ‖∆‖ < 1/µ(M, ∆)}.
Then the pair (M, ∆n) is robustly stable.