
Static optimization: unconstrained problems

Graduate course on Optimal and Robust Control (spring12)


Zdenek Hurak
Department of Control Engineering
Faculty of Electrical Engineering
Czech Technical University in Prague
February 17, 2013
1 / 24
Lecture outline
General optimization problem
Classes of optimization problems
Optimization without constraints
2 / 24
General optimization problem
minimize    f(x)
subject to  x ∈ ℝⁿ,
            h_eq(x) = 0,
            h_ineq(x) ≤ 0.

Note
max f(x) = −min(−f(x))
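A minimal numerical sketch of the note above, assuming base MATLAB with fminsearch; the cost function and the starting point are invented for illustration:

% maximize f(x) = -(x - 2)^2 + 3 by minimizing -f(x)
f = @(x) -(x - 2).^2 + 3;
[xmax, fneg] = fminsearch(@(x) -f(x), 0);   % minimize -f, starting from x = 0
fmax = -fneg                                % recovers max f(x) = 3 at x = 2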
3 / 24
Classes of optimization problems
Linear programming
Quadratic programming
Semidefinite programming
. . .
(General) nonlinear programming
4 / 24
Nonlinear optimization without constraints
min f(x),    x ∈ ℝⁿ.
[Figure: surface plot of a cost function of two variables, axes x, y, z]
5 / 24
Optimization without constraints scalar case
min f(x),    x ∈ ℝ.
[Figure: graph of a scalar cost function y(x) on the interval [0, 10]]
Local minimum at x* if f(x) ≥ f(x*) in an ε-neighbourhood of x*.
Local maximum at x* if f(x) ≤ f(x*) in an ε-neighbourhood of x*.
6 / 24
Assumptions
real variables (no integers)
smoothness (at least first and second derivatives)
convexity
7 / 24
Taylor approximation of the function around the minimum
f(x* + ε) = f(x*) + f'(x*)ε + ½ f''(x*)ε² + O(ε³)

or

f(x* + ε) = f(x*) + f'(x*)ε + ½ f''(x*)ε² + o(ε²)

Big-O or little-o concepts:

lim_{ε→0} O(ε³)/ε³ ≤ M < ∞,        lim_{ε→0} o(ε²)/ε² = 0
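A quick symbolic check of the second-order expansion, sketched under the assumption that MATLAB's Symbolic Math Toolbox is available; the function and the expansion point are illustrative only:

syms x epsilon real
f = cos(x) + x^2;          % invented smooth function; f'(0) = 0, f''(0) = 1
xstar = 0;
taylor(subs(f, x, xstar + epsilon), epsilon, 'Order', 3)
% returns 1 + epsilon^2/2, i.e. f(x*) + f'(x*)*epsilon + (1/2)*f''(x*)*epsilon^2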
8 / 24
First-order necessary conditions
Taylor approximation of the increment in the cost function
Δf = f(x* + ε) − f(x*) = f'(x*)ε + o(ε)

The classical necessary condition on the first derivative at the
critical point:

f'(x*) = 0

Recall this is just necessary, not sufficient!
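A sketch of finding the candidate points symbolically, assuming the Symbolic Math Toolbox; the cost function is invented for illustration:

syms x real
f = x^3 - 3*x;                    % illustrative cost
crit = solve(diff(f, x) == 0, x)  % returns x = -1 and x = 1
% Both satisfy f'(x) = 0, yet only x = 1 is a local minimum (x = -1 is a
% local maximum), which is why the condition is necessary but not sufficient.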
9 / 24
Second-order necessary conditions for minimum
Higher-order Taylor approximation of the increment in the cost
function
Δf = f(x* + ε) − f(x*) = f'(x*)ε + ½ f''(x*)ε² + o(ε²)

The classical necessary condition on the second derivative at the
critical point (knowing that the first derivative vanishes)

f''(x*) ≥ 0

Proof: if f''(x*) < 0, there is some δ such that for |ε| < δ

½ |f''(x*)| ε² > |o(ε²)|,

hence Δf < 0, which contradicts that x* is a local minimum.
10 / 24
Second-order sufficient conditions for minimum

f'(x*) = 0,    f''(x*) > 0

Note this is just sufficient, not necessary! If f''(x*) = 0,
higher-order terms need to be investigated.
[Figure: graph of y(x) = x⁴ on [−1, 1]: a minimum at x = 0 even though y''(0) = 0]
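A short sketch (Symbolic Math Toolbox assumed) showing why higher-order terms must be examined for y(x) = x⁴:

syms x real
f = x^4;
subs(diff(f, x),    x, 0)   % f'(0)    = 0  -> stationary point
subs(diff(f, x, 2), x, 0)   % f''(0)   = 0  -> second-order test inconclusive
subs(diff(f, x, 3), x, 0)   % f'''(0)  = 0
subs(diff(f, x, 4), x, 0)   % f''''(0) = 24 > 0, the first nonzero derivative
                            % is of even order, so x = 0 is a local minimum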
11 / 24
What if the cost function is a function of several variables
Pick an arbitrary vector d

f(x* + αd) =: g(α)

Taylor expansion

g(α) = g(0) + g'(0)α + o(α)

First-order necessary condition

g'(0) = 0
12 / 24
Back to f(·) using the chain rule:

g'(α) = (∇f(x* + αd))ᵀ d,        ∇f being the gradient

Since g'(0) = (∇f(x*))ᵀ d must vanish for every direction d,

∇f(x*) = 0

(Some regard the gradient as a column vector, some as a row vector,
some do not care.)
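A sketch of computing the gradient and the stationary point symbolically, assuming the Symbolic Math Toolbox; the cost below is invented for illustration:

syms x1 x2 real
f = (x1 - 1)^2 + x1*x2 + x2^2;    % illustrative cost
g = gradient(f, [x1; x2])         % column vector of partial derivatives
sol = solve(g == 0, [x1, x2])     % single candidate: x1 = 4/3, x2 = -2/3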
13 / 24
Second-order necessary conditions of minimum
Again, back from g(α) to f(x):

g'(α) = Σ_{i=1}^{n} ∂f/∂x_i(x* + αd) d_i

g''(α) = Σ_{i,j=1}^{n} ∂²f/(∂x_i ∂x_j)(x* + αd) d_i d_j

g''(0) = Σ_{i,j=1}^{n} ∂²f/(∂x_i ∂x_j)(x*) d_i d_j

g''(0) = dᵀ ∇²f(x*) d,        ∇²f being the Hessian

∇²f(x*) ≥ 0    (positive semidefinite)
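A sketch of forming the Hessian and checking definiteness, again assuming the Symbolic Math Toolbox and reusing the invented cost from the previous slide:

syms x1 x2 real
f = (x1 - 1)^2 + x1*x2 + x2^2;
H = hessian(f, [x1; x2])     % here constant: [2 1; 1 2]
eig(double(H))               % eigenvalues 1 and 3, both > 0 -> positive definite,
                             % so the stationary point found above is a minimum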
14 / 24
Second-order sufficient conditions of minimum

f(x* + αd) = f(x*) + α (∇f(x*))ᵀ d + ½ α² dᵀ ∇²f(x*) d + o(α²)

∇f(x*) = 0,    ∇²f(x*) > 0 (positive definite)

⇒ x* is a local minimum
15 / 24
Extending the argument to arbitrary direction d
For every direction d (say, of unit length) there is a corresponding
δ(d) such that for |α| < δ(d)

½ dᵀ ∇²f(x*) d α² > |o(α²)|

If we can prove that a minimum of δ(d) exists over all d, then x* is a
local minimum.
Theorem (Weierstrass)
A continuous function achieves a minimum on a compact set.
Compact set = closed and bounded for finite-dimensional spaces.
16 / 24
What the previous step does not say
"If at a given stationary point x* and for an arbitrary direction d
the one-variable function g(α) achieves a local minimum, it can be
concluded that the original function f(x) achieves a local minimum."

NO! See

f(x, y) = (x − y²)(x − 3y²)
[Figure: surface plot of f(x, y) = (x − y²)(x − 3y²) near the origin]
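A minimal numerical sketch of the counterexample in base MATLAB; the particular direction and step sizes are arbitrary:

f = @(x, y) (x - y.^2).*(x - 3*y.^2);
d = [1; 1]/sqrt(2);              % an arbitrary direction through the origin
a = 1e-3;
f(a*d(1), a*d(2))                % > 0: along this line the origin looks like
                                 % a local minimum
t = 1e-3;
f(2*t^2, t)                      % = -t^4 < 0: points between the parabolas
                                 % x = y^2 and x = 3y^2 give negative values,
                                 % so the origin is not a local minimum of f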
17 / 24
Alternative development of necessary and sufficient
conditions

f(x* + d) = f(x*) + (∇f(x*))ᵀ d + ½ dᵀ ∇²f(x*) d + o(‖d‖²)

Fréchet (above) vs. Gâteaux (before) derivative.
18 / 24
Classication of stationary (critical) points
∇f(x*) = 0

∇²f(x*) > 0: Minimum

∇²f(x*) indefinite: Saddle point

∇²f(x*) singular (det ∇²f(x*) = 0): Singular point
19 / 24
Quadratic surfaces
f(x) = ½ xᵀ Q x + bᵀ x,    Q = [q11 q12; q21 q22],    bᵀ = [b1 b2]

∇f(x) = Qx + b

First-order necessary conditions for the stationary point:

x = −Q⁻¹ b

Hessian:

∇²f(x) = Q
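A symbolic verification sketch of ∇f(x) = Qx + b, assuming the Symbolic Math Toolbox and a symmetric Q:

syms q11 q12 q22 b1 b2 x1 x2 real
Q = [q11 q12; q12 q22];              % symmetric Q assumed
b = [b1; b2];  x = [x1; x2];
f = x.'*Q*x/2 + b.'*x;
simplify(gradient(f, x) - (Q*x + b)) % returns [0; 0]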
20 / 24
Example - minimum of a quadratic function
f(x) = ½ xᵀ [1 1; 1 2] x + [0 1] x
[Figure: contour plot of f(x) for x1, x2 ∈ [−10, 10], contour levels 50, 100, 150, 200]
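A short numeric sketch for this example, in base MATLAB:

Q = [1 1; 1 2];  b = [0; 1];
xstar = -Q\b        % stationary point [1; -1]
eig(Q)              % both eigenvalues ((3 +/- sqrt(5))/2) positive -> Q > 0, a minimum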
21 / 24
Example - saddle point of a quadratic function
f(x) = ½ xᵀ [−1 1; 1 2] x + [0 1] x
[Figure: contour plot of f(x) for x1, x2 ∈ [−10, 10], contour levels −50, 0, 50, 100, 150]
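The same numeric check exposes the saddle; the signs in Q below follow the reconstruction of the example above:

Q = [-1 1; 1 2];  b = [0; 1];
xstar = -Q\b        % stationary point
eig(Q)              % one negative and one positive eigenvalue -> Q indefinite,
                    % a saddle point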
22 / 24
Example - singular point
f(x) = (x1 − x2²)(x1 − 3x2²)
[Figure: contour plot and surface plot of the cost L(u) for u1, u2 ∈ [−0.5, 0.5]]
syms x1 x2
f = (x1 - x2^2)*(x1 - 3*x2^2)
fx  = simplify([diff(f, x1); diff(f, x2)])   % gradient
fxx = [diff(fx, x1), diff(fx, x2)]           % Hessian
fxxx(:,:,1) = diff(fxx, x1);                 % third-order partial derivatives
fxxx(:,:,2) = diff(fxx, x2)
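A possible follow-up, evaluating the derivatives above at the stationary point (0, 0):

subs(fx,  [x1 x2], [0 0])   % gradient vanishes -> stationary point
subs(fxx, [x1 x2], [0 0])   % Hessian [2 0; 0 0] is only positive semidefinite and
                            % singular, so second-order conditions are inconclusive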
23 / 24
Summary
necessary and sufficient conditions of optimality (gradient,
Hessian)
classification of stationary points: minimum/maximum, saddle
point, singular point
24 / 24