
ME 433 – STATE SPACE CONTROL

Lecture 8


Static Optimization
1. Optimization without constraints
Problem definition: Find the values of m parameters u1, u2,…,um that
minimize a performance function or index

$$L(u_1,u_2,\ldots,u_m) \;\Rightarrow\; dL = \frac{\partial L}{\partial u}\,du + \frac{1}{2}\,du^T\,\frac{\partial^2 L}{\partial u^2}\,du + O(3)$$

We define the decision vector $u = [\,u_1\;\; u_2\;\; \cdots\;\; u_m\,]^T$
and write the performance index as $L(u)$.

Necessary conditions for a minimum:
$$\frac{\partial L}{\partial u} = 0 \qquad \left(\frac{\partial L}{\partial u_i} = 0,\; i = 1,\ldots,m\right)$$

$$\frac{\partial^2 L}{\partial u^2} \ge 0 \qquad \left(\left[\frac{\partial^2 L}{\partial u^2}\right]_{i,j} = \frac{\partial^2 L}{\partial u_i\,\partial u_j}\right) \qquad \text{Positive semidefinite Hessian}$$

Static Optimization
Sufficient conditions for a minimum:
$$\frac{\partial L}{\partial u} = 0 \qquad \left(\frac{\partial L}{\partial u_i} = 0,\; i = 1,\ldots,m\right)$$

$$\frac{\partial^2 L}{\partial u^2} > 0 \qquad \left(\left[\frac{\partial^2 L}{\partial u^2}\right]_{i,j} = \frac{\partial^2 L}{\partial u_i\,\partial u_j}\right) \qquad \text{Positive definite Hessian}$$
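As an added illustration (a hypothetical quadratic index, not from the original slides), both conditions can be checked in closed form:

$$L(u) = \tfrac{1}{2}\,u^T A u + b^T u,\qquad A = A^T \;\Rightarrow\; \frac{\partial L}{\partial u} = u^T A + b^T,\qquad \frac{\partial^2 L}{\partial u^2} = A$$

If $A > 0$, the stationary point $u^* = -A^{-1} b$ exists and is the unique minimum; if $A$ is only positive semidefinite and singular, the first-order condition may have no solution or infinitely many.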

Note:

Positive semidefinite: $Q \ge 0$ if $x^T Q x \ge 0 \;\;\forall x \ne 0$;
equivalently, $Q \ge 0$ if all $\lambda_i \ge 0$, or if all $m_i \ge 0$.

Positive definite: $Q > 0$ if $x^T Q x > 0 \;\;\forall x \ne 0$;
equivalently, $Q > 0$ if all $\lambda_i > 0$, or if all $m_i > 0$.

($\lambda_i$: eigenvalues, $m_i$: principal minors)


Static Optimization
Examples:

⎡ 1 −1⎤ ⎡ u1 ⎤
L = [ u1 u2 ] ⎢ ⎥ ⎢ ⎥
⎣ −1 4 ⎦ ⎣u2 ⎦

⎡ −1 1 ⎤⎡ u1 ⎤
€ L = [ u1 u2 ] ⎢ ⎥⎢ ⎥
⎣ 1 3⎦⎣ u2 ⎦

L = ( u1 − u22 )( u1 − 3u22 )
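As a quick numerical check (an added sketch, not part of the original slides; the helper name `classify` is made up), the Hessians of the two quadratic examples can be classified by their eigenvalues with NumPy. The third example shows why the semidefinite case is inconclusive: the origin satisfies the first-order condition, but it is not a minimum.

```python
import numpy as np

def classify(H):
    """Classify a symmetric Hessian by its eigenvalues."""
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > 0):
        return "positive definite (minimum)"
    if np.all(eig >= 0):
        return "positive semidefinite (inconclusive)"
    if np.all(eig < 0):
        return "negative definite (maximum)"
    return "indefinite (saddle)"

# Hessians of the two quadratic examples: L = u^T A u has Hessian 2A.
A1 = np.array([[1.0, -1.0], [-1.0, 4.0]])
A2 = np.array([[-1.0, 1.0], [1.0, 3.0]])
print(classify(2 * A1))   # expected: positive definite
print(classify(2 * A2))   # expected: indefinite

# Third example: L = (u1 - u2^2)(u1 - 3 u2^2). Its Hessian at u = 0 is
# [[2, 0], [0, 0]] (only semidefinite), and L < 0 on the curve u1 = 2 u2^2,
# so the origin is not a minimum even though the gradient vanishes there.
H3 = np.array([[2.0, 0.0], [0.0, 0.0]])
print(classify(H3))       # expected: positive semidefinite
```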


Static Optimization
2. Optimization with constraints
Problem definition: Find the values of m parameters u1, u2,…,um that
minimize a performance function or index

$$L(u_1,u_2,\ldots,u_m,\, x_1, x_2,\ldots,x_n)$$


Subject to the constraint equation
$$f(x,u) = 0$$

The n state parameters x1, x2,…,xn are determined by the decision
parameters u1, u2,…,um through the constraint equation (n equations).
We define:

Decision vector   $u = [\,u_1\;\; u_2\;\; \cdots\;\; u_m\,]^T$
State vector      $x = [\,x_1\;\; x_2\;\; \cdots\;\; x_n\,]^T$
Constraint vector $f = [\,f_1\;\; f_2\;\; \cdots\;\; f_n\,]^T$


Static Optimization

If

$$L(u_1,u_2,\ldots,u_m,\, x_1, x_2,\ldots,x_n)$$

and

$$f(x,u) = 0$$

are linear in both x and u, then, in general, a minimum does NOT exist.
Inequality constraints on the magnitudes of x and u are then necessary to
make the problem meaningful. If these inequality constraints are also
linear, the problem is a linear programming problem.
We focus initially on nonlinear L and f. This, of course, does not by itself
guarantee that a minimum exists.


Static Optimization
2.1 Optimization with constraints – Approach A
At a stationary point, dL is equal to zero to first order for all increments
du when df is zero, letting x change as a function of u. Thus we require

$$dL = L_x\,dx + L_u\,du = 0$$
$$df = f_x\,dx + f_u\,du = 0$$

where

$$L_x = \frac{\partial L}{\partial x},\qquad L_u = \frac{\partial L}{\partial u},\qquad f_x = \frac{\partial f}{\partial x},\qquad f_u = \frac{\partial f}{\partial u}$$

Assuming $f_x$ is nonsingular, $df = 0$ gives $dx = -f_x^{-1} f_u\,du$, and substituting
into $dL$ gives $dL = (L_u - L_x f_x^{-1} f_u)\,du$. Hence, if dL is to be zero for
arbitrary du, it is necessary that

$$L_u - L_x f_x^{-1} f_u = 0 \qquad \text{(m equations)}$$


Static Optimization
2.2 Optimization with constraints – Approach B
At a stationary point, dL is equal to zero to first order for all increments
du when df is zero, letting x change as a function of u. Thus we require

$$dL = L_x\,dx + L_u\,du = 0,\qquad df = f_x\,dx + f_u\,du = 0 \;\;\Leftrightarrow\;\; \begin{bmatrix} L_x & L_u \\ f_x & f_u \end{bmatrix}\begin{bmatrix} dx \\ du \end{bmatrix} = 0$$

This set of equations defines a stationary point. For a non-trivial solution,
the (n+1) × (n+m) matrix must have rank less than n+1. This means that its rows
must be linearly dependent, so there exists an n-vector λ (the Lagrange
multiplier) such that

$$[\,1\quad \lambda^T\,]\begin{bmatrix} L_x & L_u \\ f_x & f_u \end{bmatrix} = 0
\;\;\Rightarrow\;\;
\begin{cases} L_x + \lambda^T f_x = 0 \;\;\Rightarrow\;\; \lambda^T = -L_x f_x^{-1} \\ L_u + \lambda^T f_u = 0 \;\;\Rightarrow\;\; L_u - L_x f_x^{-1} f_u = 0 \end{cases}$$


Static Optimization
2.3 Optimization with constraints – Approach C
We adjoin the constraints to the performance index to define the
Hamiltonian function
$$H(x,u,\lambda) = L(x,u) + \lambda^T f(x,u)$$

where $\lambda \in \mathbb{R}^n$ is a to-be-determined Lagrange multiplier. To choose x, u
and λ to yield a stationary point we proceed as follows.
$$dH = \frac{\partial H}{\partial x}\,dx + \frac{\partial H}{\partial u}\,du + \frac{\partial H}{\partial \lambda}\,d\lambda$$

$$\frac{\partial H}{\partial \lambda} = f = 0 \qquad \text{(n equations)}$$

$$\frac{\partial H}{\partial x} = \frac{\partial L}{\partial x} + \lambda^T \frac{\partial f}{\partial x} = 0 \;\;\Rightarrow\;\; \lambda^T = -L_x f_x^{-1} \qquad \text{(n equations)}$$

$$\frac{\partial H}{\partial u} = \frac{\partial L}{\partial u} + \lambda^T \frac{\partial f}{\partial u} = 0 \;\;\Rightarrow\;\; L_u - L_x f_x^{-1} f_u = 0 \qquad \text{(m equations)}$$
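A minimal symbolic sketch of Approach C (added for illustration; the scalar problem L = x² + u² with f = x + u − 1 = 0 is a made-up example, not from the slides):

```python
import sympy as sp

x, u, lam = sp.symbols('x u lam', real=True)

# Hypothetical scalar example: L = x^2 + u^2 subject to f = x + u - 1 = 0.
L = x**2 + u**2
f = x + u - 1

# Hamiltonian H = L + lam * f (scalar case of H = L + lam^T f).
H = L + lam * f

# Stationarity: dH/dlam = 0, dH/dx = 0, dH/du = 0.
sol = sp.solve([sp.diff(H, lam), sp.diff(H, x), sp.diff(H, u)],
               [x, u, lam], dict=True)[0]
print(sol)  # expected: {x: 1/2, u: 1/2, lam: -1}
```

Note that the computed multiplier agrees with $\lambda^T = -L_x f_x^{-1} = -2x\cdot 1 = -1$ at the optimum.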

Static Optimization
2.4 Optimization with constraints – Sufficient conditions
So far, we have derived necessary conditions for a minimum point of L(x,u)
that also satisfies the constraints f(x,u) = 0. We are now interested in
sufficient conditions.
$$dL = [\,L_x\;\; L_u\,]\begin{bmatrix} dx \\ du \end{bmatrix} + \frac{1}{2}\,[\,dx^T\;\; du^T\,]\begin{bmatrix} L_{xx} & L_{xu} \\ L_{ux} & L_{uu} \end{bmatrix}\begin{bmatrix} dx \\ du \end{bmatrix} + O(3) \qquad (1)$$

$$df = [\,f_x\;\; f_u\,]\begin{bmatrix} dx \\ du \end{bmatrix} + \frac{1}{2}\,[\,dx^T\;\; du^T\,]\begin{bmatrix} f_{xx} & f_{xu} \\ f_{ux} & f_{uu} \end{bmatrix}\begin{bmatrix} dx \\ du \end{bmatrix} + O(3) \qquad (2)$$

where

$$L_{xx} = \frac{\partial^2 L}{\partial x^2},\quad L_{uu} = \frac{\partial^2 L}{\partial u^2},\quad L_{xu} = \frac{\partial^2 L}{\partial x\,\partial u},\quad f_{xx} = \frac{\partial^2 f}{\partial x^2},\quad f_{uu} = \frac{\partial^2 f}{\partial u^2},\quad f_{xu} = \frac{\partial^2 f}{\partial x\,\partial u}$$


Static Optimization

$$[\,1\quad \lambda^T\,]\begin{bmatrix} dL \\ df \end{bmatrix} = [\,H_x\;\; H_u\,]\begin{bmatrix} dx \\ du \end{bmatrix} + \frac{1}{2}\,[\,dx^T\;\; du^T\,]\begin{bmatrix} H_{xx} & H_{xu} \\ H_{ux} & H_{uu} \end{bmatrix}\begin{bmatrix} dx \\ du \end{bmatrix} + O(3) \qquad (3)$$

For a stationary point we need f = 0, and also that dL = 0 to first order for
all increments dx, du. Since f = 0, we also have df = 0. These conditions
require Hx = 0 and Hu = 0 (necessary conditions). By (2), to first order we have

$$dx = -f_x^{-1} f_u\,du$$
Replacing this in (3) yields

$$dL = \frac{1}{2}\,du^T\,[\,-f_u^T f_x^{-T}\quad I\,]\begin{bmatrix} H_{xx} & H_{xu} \\ H_{ux} & H_{uu} \end{bmatrix}\begin{bmatrix} -f_x^{-1} f_u \\ I \end{bmatrix}du + O(3)$$


Static Optimization
To ensure that this stationary point is a minimum we need dL>0 to the
second order for all increments du:

$$[\,-f_u^T f_x^{-T}\quad I\,]\begin{bmatrix} H_{xx} & H_{xu} \\ H_{ux} & H_{uu} \end{bmatrix}\begin{bmatrix} -f_x^{-1} f_u \\ I \end{bmatrix} > 0$$

$$H_{uu} - H_{ux} f_x^{-1} f_u - f_u^T f_x^{-T} H_{xu} + f_u^T f_x^{-T} H_{xx} f_x^{-1} f_u > 0$$


$$\left.\frac{\partial^2 L}{\partial u^2}\right|_{f=0} \equiv\; H_{uu} - H_{ux} f_x^{-1} f_u - f_u^T f_x^{-T} H_{xu} + f_u^T f_x^{-T} H_{xx} f_x^{-1} f_u \qquad (4)$$


Static Optimization
Examples:

1 ⎡1 1 ⎤ ⎡ x ⎤ ⎡ x ⎤
(a) L( x,u) = [x u] ⎢ ⎥ ⎢ ⎥ + [0 1] ⎢ ⎥
2 ⎣1 2 ⎦ ⎣ u ⎦ ⎣ u ⎦
f ( x,u) = x − 3 = 0

1 ⎛ x 2 u 2 ⎞
(b) L( x,u) = ⎜ 2 + 2 ⎟
€ 2 ⎝ a b ⎠
f ( x,u) = x + mu − c = 0

1 T 1
(c) L( x,u) = x Qx + uT Ru
€ 2 2
f ( x,u) = x + Bu + c = 0
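As an added worked sketch of example (c) (assuming $Q = Q^T \ge 0$ and $R = R^T > 0$; here $f_x = I$), the three stationarity conditions of Approach C give a closed-form optimum:

$$H = \tfrac{1}{2}\,x^T Q x + \tfrac{1}{2}\,u^T R u + \lambda^T (x + Bu + c)$$
$$H_\lambda = x + Bu + c = 0,\qquad H_x^T = Qx + \lambda = 0 \;\Rightarrow\; \lambda = -Qx,\qquad H_u^T = Ru + B^T\lambda = 0$$
$$\Rightarrow\; Ru = B^T Q x = -B^T Q (Bu + c) \;\Rightarrow\; u^* = -(R + B^T Q B)^{-1} B^T Q\,c$$

The curvature in (4) reduces to $\left.\partial^2 L/\partial u^2\right|_{f=0} = R + B^T Q B > 0$ (since $H_{xu}=0$, $H_{xx}=Q$, $H_{uu}=R$, $f_u=B$), so $u^*$ is indeed a minimum.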

Static Optimization
2.5 Optimization with constraints – Lagrange multiplier
We now give an interpretation of the Lagrange multiplier. Let us
suppose that the constraints are increased by infinitesimal amounts so
that we have f(x,u)=df, where df is an infinitesimal constant vector. How
does the optimal value change?
$$dH_x^T = H_{xx}\,dx + H_{xu}\,du + f_x^T\,d\lambda = 0$$
$$dH_u^T = H_{ux}\,dx + H_{uu}\,du + f_u^T\,d\lambda = 0$$
$$df = f_x\,dx + f_u\,du$$
The partial derivatives are evaluated at the original optimal value.
These equations determine dx, du, dλ.
$$dx = f_x^{-1}\,df - f_x^{-1} f_u\,du$$

$$d\lambda = -f_x^{-T}\left(H_{xx}\,dx + H_{xu}\,du\right)$$

$$du = -\left(\left.\frac{\partial^2 L}{\partial u^2}\right|_{f=0}\right)^{-1}\left[\,H_{ux} - f_u^T f_x^{-T} H_{xx}\,\right] f_x^{-1}\,df \;\equiv\; -C\,df$$

Static Optimization

Existence of a neighboring optimal solution (for infinitesimal change in f)
is guaranteed by

$$L_{uu} = \left(\left.\frac{\partial^2 L}{\partial u^2}\right|_{f=0}\right) > 0$$

which is the sufficient condition for a local minimum (Equation (4)).
Substituting the expressions for dx and du into (3), and using Hx = Hu = 0, we
get

$$dL = -\lambda^T\,df + \frac{1}{2}\,df^T\left[\,f_x^{-T} H_{xx} f_x^{-1} - C^T L_{uu} C\,\right] df$$

$$\frac{\partial L_{\min}}{\partial f} = -\lambda^T$$

$$\frac{\partial^2 L_{\min}}{\partial f^2} = f_x^{-T} H_{xx} f_x^{-1} - C^T L_{uu} C$$
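A small numerical sanity check of the sensitivity result (an added sketch using example (c) with made-up values of Q, R, B, c; the helper `solve` is hypothetical): perturbing the constraint level by df and re-solving should change the minimum cost by approximately −λᵀ df.

```python
import numpy as np

# Made-up data for example (c): L = 1/2 x'Qx + 1/2 u'Ru, f = x + Bu + c = 0.
Q = np.diag([2.0, 1.0])          # Q = Q' >= 0
R = np.array([[1.0]])            # R = R' > 0
B = np.array([[1.0], [0.5]])
c = np.array([1.0, -2.0])

def solve(df=np.zeros(2)):
    """Minimize L subject to x + Bu + c = df; return (Lmin, lambda)."""
    # Eliminating x = df - c - Bu and setting dL/du = 0 gives
    # (R + B'QB) u = B'Q (df - c).
    u = np.linalg.solve(R + B.T @ Q @ B, B.T @ Q @ (df - c))
    x = df - c - B @ u
    lam = -Q @ x                              # lambda' = -Lx fx^{-1} = -x'Q
    Lmin = 0.5 * x @ Q @ x + 0.5 * u @ R @ u
    return Lmin, lam

L0, lam = solve()
df = 1e-4 * np.array([1.0, -0.5])             # small constraint perturbation
L1, _ = solve(df)
print(L1 - L0)          # actual change in the minimum cost
print(-lam @ df)        # first-order prediction  dLmin ≈ -lambda' df
```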

Static Optimization
2.6 Optimization with constraints – Numerical solution
1. Select an initial u.

2. Determine x from f(x,u) = 0.

3. Determine λ from $\lambda^T = -L_x f_x^{-1}$.

4. Determine the gradient vector $H_u = L_u + \lambda^T f_u$.

5. Update the control/decision vector by $\Delta u = -k H_u^T$ for a scalar k > 0
   (Steepest Descent Method).

6. Determine the predicted change $\Delta L = H_u\,\Delta u = -k\,H_u H_u^T$. Stop if it is
   small enough; go to step 2 otherwise (see the sketch below).
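A minimal Python sketch of this procedure (added for illustration; the scalar problem L = ½(x² + u²) with constraint f = x + 2u − 3 = 0 is made up, and its true optimum is u = 1.2, x = 0.6):

```python
import numpy as np

# Made-up scalar example: L = 1/2 (x^2 + u^2), f = x + 2u - 3 = 0.
def L_x(x, u): return x           # dL/dx
def L_u(x, u): return u           # dL/du
f_x, f_u = 1.0, 2.0               # constraint Jacobians (constant here)

u = 0.0                           # 1. select an initial u
k = 0.1                           # step size
for it in range(200):
    x = 3.0 - 2.0 * u             # 2. solve f(x,u) = 0 for x
    lam = -L_x(x, u) / f_x        # 3. lambda' = -Lx fx^{-1}
    Hu = L_u(x, u) + lam * f_u    # 4. gradient Hu = Lu + lambda' fu
    du = -k * Hu                  # 5. steepest descent step
    dL_pred = -k * Hu * Hu        # 6. predicted change in L
    u += du
    if abs(dL_pred) < 1e-12:
        break

print(u, 3.0 - 2.0 * u)           # expected: approximately u = 1.2, x = 0.6
```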

