Optimality Criteria
Important question: how do we know that we have found an optimum?
[Figure: plots of f(x) and its derivative df(x)/dx, marking local maxima, local minima, and inflection points.]
2. The second-order condition for a minimum of f(x) can be derived by considering the quadratic expansion of the function around the optimum point x* using a Taylor series:

    f(x) ≈ f(x*) + ∇f(x*)ᵀ d + (1/2) dᵀ ∇²f(x*) d + ...

Since ∇f(x*) = 0 at a stationary point, a minimum requires

    (1/2) dᵀ ∇²f(x*) d ≥ 0 for all d.
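As a quick numerical check of the quadratic expansion above, the sketch below uses an assumed one-dimensional example, f(x) = x⁴ - 2x² (not from the text), which has a local minimum at x* = 1. The increment f(x* + d) - f(x*) is well approximated by the quadratic term (1/2) f″(x*) d², and that term is non-negative at a minimum:

```python
# Sketch: the quadratic Taylor term around a local minimum.
# Assumed example (not from the text): f(x) = x**4 - 2*x**2,
# with a local minimum at x* = 1 where f'(1) = 0 and f''(1) = 8.

def f(x):
    return x**4 - 2*x**2

def fpp(x):
    # Analytic second derivative of the example function.
    return 12*x**2 - 4

x_star = 1.0
for d in (0.1, -0.1, 0.05, -0.05):
    actual = f(x_star + d) - f(x_star)
    quadratic = 0.5 * fpp(x_star) * d**2   # (1/2) d^T H(x*) d in one dimension
    # Near a minimum the quadratic term is non-negative and dominates.
    assert quadratic >= 0
    assert abs(actual - quadratic) < abs(d)**3 * 10
```

The error of the quadratic approximation shrinks like |d|³, which is why the second-order term alone decides the character of the stationary point.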
The matrix of second derivatives of F(x) (the Hessian matrix) is

    H = | ∂²F/∂x1²      ∂²F/∂x1∂x2   ...  ∂²F/∂x1∂xn |
        | ∂²F/∂x2∂x1    ∂²F/∂x2²     ...  ∂²F/∂x2∂xn |
        | ...                                         |
        | ∂²F/∂xn∂x1    ∂²F/∂xn∂x2   ...  ∂²F/∂xn²   |
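The Hessian above can also be assembled numerically. The sketch below uses an assumed example function F (not from the text), builds H by central finite differences, and applies Sylvester's criterion (all leading principal minors positive) to test positive definiteness:

```python
# Sketch: numerical Hessian H[i][j] = d^2 F / dx_i dx_j via central
# finite differences, then a positive-definiteness test for the 2x2 case.
# Assumed example function (not from the text): F(x) = x1^2 + x1*x2 + 2*x2^2.

def F(x):
    return x[0]**2 + x[0]*x[1] + 2*x[1]**2

def hessian(F, x, h=1e-4):
    n = len(x)
    H = [[0.0]*n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            xpp = list(x); xpp[i] += h; xpp[j] += h
            xpm = list(x); xpm[i] += h; xpm[j] -= h
            xmp = list(x); xmp[i] -= h; xmp[j] += h
            xmm = list(x); xmm[i] -= h; xmm[j] -= h
            H[i][j] = (F(xpp) - F(xpm) - F(xmp) + F(xmm)) / (4*h*h)
    return H

H = hessian(F, [0.0, 0.0])           # exact Hessian is [[2, 1], [1, 4]]
minor1 = H[0][0]
minor2 = H[0][0]*H[1][1] - H[0][1]*H[1][0]
assert minor1 > 0 and minor2 > 0     # positive definite -> local minimum
```

For a quadratic F the finite-difference Hessian is exact up to rounding; for general functions it is an approximation whose error depends on the step h.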
Example 1:
Find all stationary points for the following function. Using Optimality
conditions, classify them as minimum, maximum or inflection points.
The objective function is: f(x, y) = -2x + x² - xy + 2y²
The gradient vector:

    ∇f = [ ∂f/∂x ]   [ -2 + 2x - y ]
         [ ∂f/∂y ] = [ -x + 4y     ]

The Hessian matrix:

    [ ∂²f/∂x²    ∂²f/∂x∂y ]   [  2  -1 ]
    [ ∂²f/∂y∂x   ∂²f/∂y²  ] = [ -1   4 ]
The first-order optimality (necessary) conditions:

    -2 + 2x - y = 0
    -x + 4y = 0

From the second equation, x = 4y; substituting into the first gives -2 + 8y - y = 0, so y* = 2/7 = 0.285714 and x* = 8/7 = 1.14286.
The value of f(x, y) at the point x* = 1.14286, y* = 0.285714 is f* = -1.14286. The point is a minimum point. Since the Hessian matrix is positive definite, the function is convex; therefore any local minimum is also a global minimum.
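The calculations of this example can be verified in a few lines (a sketch; the objective, gradient, and Hessian are those stated above):

```python
# Verify the stationary point of Example 1: f(x, y) = -2x + x^2 - xy + 2y^2.
def f(x, y):
    return -2*x + x**2 - x*y + 2*y**2

# Gradient components from the text.
def fx(x, y): return -2 + 2*x - y
def fy(x, y): return -x + 4*y

x_star, y_star = 8/7, 2/7            # = (1.14286, 0.285714)
assert abs(fx(x_star, y_star)) < 1e-12
assert abs(fy(x_star, y_star)) < 1e-12
assert abs(f(x_star, y_star) - (-8/7)) < 1e-12   # f* = -8/7 = -1.14286

# Hessian [[2, -1], [-1, 4]]: leading principal minors 2 > 0 and
# 2*4 - (-1)*(-1) = 7 > 0, so H is positive definite (global minimum).
assert 2 > 0 and 2*4 - (-1)*(-1) > 0
```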
[Figure: MATLAB contour plot of f(x, y), showing the minimum at the point (x*, y*).]
[Figure: 3-D surface plot of f(x, y), confirming that a global minimum exists at the point (x*, y*).]
Important observations:
- The minimum point does not change if we add a constant to the objective function.
- The minimum point does not change if we multiply the objective function by a positive constant.
- The problem changes from a minimization to a maximization problem if we multiply the objective function by -1.
- The unconstrained problem is a convex problem if the objective function is convex. For convex problems, any local minimum is also a global minimum.
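These observations can be illustrated with a minimal grid-search sketch (the one-dimensional example function and the grid are assumptions, not from the text):

```python
# Sketch: the minimizer is invariant under adding a constant or scaling
# by a positive constant, and multiplying by -1 swaps min and max.
# Assumed example (not from the text): f(x) = (x - 3)^2, minimized at x = 3.
def f(x):
    return (x - 3)**2

xs = [i * 0.5 for i in range(-10, 21)]         # grid from -5.0 to 10.0
argmin = lambda fn: min(xs, key=fn)

assert argmin(f) == 3.0                         # original minimizer
assert argmin(lambda x: f(x) + 100) == 3.0      # adding a constant
assert argmin(lambda x: 7 * f(x)) == 3.0        # positive scaling
assert max(xs, key=lambda x: -f(x)) == 3.0      # -f: minimization becomes maximization
```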
The general constrained optimization problem:

    Minimize f(x)                        (objective function)
    subject to
    gj(x) ≥ 0,   j = 1, . . . , m        (inequality constraints)
    hk(x) = 0,   k = 1, . . . , l        (equality constraints)
    xi,lower ≤ xi ≤ xi,upper
    where x = (x1, x2, x3, . . . , xn).
    gj(x) ≥ 0 ;            j = 1, 2, 3, . . . , J
    hk(x) = 0 ;            k = 1, 2, 3, . . . , K
    xi(L) ≤ xi ≤ xi(U) ;   i = 1, 2, 3, . . . , N
The Kuhn-Tucker conditions:

    ∇f(x) - Σj uj ∇gj(x) - Σk vk ∇hk(x) = 0   (optimality; sums over j = 1, ..., J and k = 1, ..., K)
    gj(x) ≥ 0     for j = 1, 2, ..., J         (feasibility)
    hk(x) = 0     for k = 1, 2, ..., K         (feasibility)
    uj gj(x) = 0  for j = 1, 2, ..., J         (complementary slackness condition)
    uj ≥ 0        for j = 1, 2, ..., J         (non-negativity)
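A generic checker for these conditions can be sketched as follows, assuming the gj(x) ≥ 0 convention used in these notes and no equality constraints; the one-variable usage example at the bottom is hypothetical, not from the text:

```python
# Sketch: check the Kuhn-Tucker conditions for a candidate point, given
# grad f, the constraint gradients, the constraint values, and multipliers.
def is_kt_point(grad_f, grad_gs, g_vals, us, tol=1e-6):
    n = len(grad_f)
    # Optimality: grad f - sum_j u_j * grad g_j = 0 (no equality constraints).
    for i in range(n):
        r = grad_f[i] - sum(u * gg[i] for u, gg in zip(us, grad_gs))
        if abs(r) > tol:
            return False
    for u, g in zip(us, g_vals):
        if g < -tol:             # feasibility: g_j(x) >= 0
            return False
        if abs(u * g) > tol:     # complementary slackness: u_j g_j(x) = 0
            return False
        if u < -tol:             # non-negativity: u_j >= 0
            return False
    return True

# Hypothetical example: f(x) = x^2 at x = 1 with g1(x) = x - 1 >= 0 active.
# There grad f = (2,), grad g1 = (1,), g1 = 0, and u1 = 2 works.
assert is_kt_point([2.0], [[1.0]], [0.0], [2.0])
assert not is_kt_point([2.0], [[1.0]], [0.0], [-2.0])  # negative multiplier fails
```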
Exercise 4.1.1
Let us take the following function to be minimized:
f(x) = (x2 + y 11)2 + (x + y2 7)2
Subjected to:
The region on the other side of the hatched portion of a constraint line is feasible. The combination of the two constraints and the variable bounds makes the interior region feasible, as depicted in the figure.
First, we transform the variable bounds into two inequality constraints: g3(x, y) = x ≥ 0 and g4(x, y) = y ≥ 0. Thus, the above problem has four inequality constraints (J = 4) and no equality constraint (K = 0). There are two problem variables: N = 2. Thus, for each point a total of 2 + 3 × 4 + 0 = 14 Kuhn-Tucker conditions need to be checked. To formulate all K-T conditions, we first calculate the gradient of the objective function. In Table 4.1, we compute these gradients numerically and also compute the constraint values at all four points.
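The gradient and constraint values of Table 4.1 can be reproduced numerically. The sketch below assumes the constraints g1 = 26 - (x - 5)² - y² ≥ 0 and g2 = 20 - 4x - y ≥ 0, which are consistent with the constraint gradients and values used throughout this exercise:

```python
# Sketch: evaluate grad f and the four constraints of Exercise 4.1.1
# at the four candidate points.
def grad_f(x, y):
    return (4*x**3 + 4*x*y - 42*x + 2*y**2 - 14,
            2*x**2 + 4*y**3 + 4*x*y - 26*y - 22)

def g(x, y):
    return (26 - (x - 5)**2 - y**2,   # g1 (assumed form, matches the text's values)
            20 - 4*x - y,             # g2 (assumed form, matches the text's values)
            x,                        # g3: variable bound x >= 0
            y)                        # g4: variable bound y >= 0

points = [(1, 5), (0, 0), (3, 2), (3.396, 0)]
for p in points:
    print(p, grad_f(*p), g(*p))

assert grad_f(1, 5) == (18, 370)      # matches the gradient quoted in the text
assert g(1, 5) == (-15, 11, 1, 5)     # matches the constraint values at x(1)
assert grad_f(3, 2) == (0, 0)         # x(3) is a stationary point of f
assert g(0, 0) == (1, 20, 0, 0)
```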
For the first point, x(1) = (1, 5)ᵀ, the gradient of the objective function is

    ∇f(x, y) = (fx, fy)ᵀ = (4x³ + 4xy - 42x + 2y² - 14, 2x² + 4y³ + 4xy - 26y - 22)ᵀ = (18, 370)ᵀ
The constraint gradients:

    (g1)x = -2(x - 5) ;  (g1)y = -2y
    (g2)x = -4 ;  (g2)y = -1
    (g3)x = 1 ;  (g3)y = 0
    (g4)x = 0 ;  (g4)y = 1
For the first point, substituting the values into the optimality condition

    ∇f(x) - Σj uj ∇gj(x) - Σk vk ∇hk(x) = 0

gives the following KKT conditions:

    fx - u1 (g1)x - u2 (g2)x - u3 (g3)x - u4 (g4)x = 0 :
        18 - (8)u1 - (-4)u2 - (1)u3 - (0)u4 = 0 ;
    fy - u1 (g1)y - u2 (g2)y - u3 (g3)y - u4 (g4)y = 0 :
        370 - (-10)u1 - (-1)u2 - (0)u3 - (1)u4 = 0 ;
The remaining Kuhn-Tucker conditions at x(1):

    gj(x) ≥ 0 for j = 1, 2, ..., J :  g1 = -15, g2 = 11, g3 = 1, g4 = 5 ;
    hk(x) = 0 for k = 1, 2, ..., K :  none (K = 0) ;
    uj gj(x) = 0 for j = 1, 2, ..., J (complementary slackness condition) :
        (-15)u1 = 0 ;  (11)u2 = 0 ;  (1)u3 = 0 ;  (5)u4 = 0 ;
    uj ≥ 0 :  u1, u2, u3, u4 ≥ 0.
It is clear that the feasibility condition gj(x) ≥ 0 is not satisfied, since g1(x(1)) = -15 < 0. Thus x(1) is not a K-T point; in fact, it is not even a feasible point.
For the second point, x(2) = (0, 0)ᵀ, we obtain the following conditions:

    -14 - 10u1 + 4u2 - u3 = 0 ,
    -22 + u2 - u4 = 0 ,
    g1 = 1 ≥ 0 ;  g2 = 20 ≥ 0 ;  g3 = 0 ≥ 0 ;  g4 = 0 ≥ 0 ,
    (1)u1 = 0 ;  (20)u2 = 0 ;  (0)u3 = 0 ;  (0)u4 = 0 ,
    u1, u2, u3, u4 ≥ 0.
Complementary slackness forces u1 = 0 and u2 = 0, and the optimality equations then give u3 = -14 and u4 = -22. The final set of conditions is not satisfied with these values, because u3 and u4 are negative. Thus, the point x(2) is not a K-T point. Since no constraint is violated, the point is a feasible point, as shown in Figure 4.1; nevertheless, x(2) cannot be an optimal point.
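The multiplier values at x(2) follow directly from the two optimality equations once complementary slackness fixes u1 = u2 = 0; a minimal check:

```python
# Sketch: solve the remaining multipliers at x(2) = (0, 0).
u1, u2 = 0.0, 0.0                 # g1 = 1 > 0 and g2 = 20 > 0 are inactive
u3 = -14 - 10*u1 + 4*u2           # from -14 - 10 u1 + 4 u2 - u3 = 0
u4 = -22 + u2                     # from -22 + u2 - u4 = 0
assert (u3, u4) == (-14.0, -22.0)
assert u3 < 0 and u4 < 0          # violates u_j >= 0, so x(2) is not a K-T point
```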
Similarly, the conditions for the third point, x(3) = (3, 2)ᵀ, are the following:

    -4u1 + 4u2 - u3 = 0 ;
    4u1 + u2 - u4 = 0 ;
    18 > 0 ;  6 > 0 ;  3 > 0 ;  2 > 0 ,
    (18)u1 = 0 ;  (6)u2 = 0 ;  (3)u3 = 0 ;  (2)u4 = 0 ,
    u1, u2, u3, u4 ≥ 0.

The vector u* = (0, 0, 0, 0)ᵀ satisfies all of the above conditions.
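A quick check of why u* = (0, 0, 0, 0)ᵀ works at x(3) = (3, 2)ᵀ: the gradient of f vanishes there, and every constraint is strictly satisfied, so all multipliers may be zero:

```python
# Sketch: verify that grad f vanishes at x(3) = (3, 2) and all
# constraint values quoted in the text are strictly positive.
fx = 4*3**3 + 4*3*2 - 42*3 + 2*2**2 - 14
fy = 2*3**2 + 4*2**3 + 4*3*2 - 26*2 - 22
assert (fx, fy) == (0, 0)          # optimality holds with all u_j = 0

g_vals = (18, 6, 3, 2)             # g1..g4 at (3, 2), values from the text
assert all(gv > 0 for gv in g_vals)  # feasibility and slackness are trivial
```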
Thus, the point x(3) is a K-T point (Figure 4.1). As mentioned earlier, K-T points are likely candidates for minimum points. To conclude, we may say that establishing the optimality of a point requires the satisfaction of more conditions.
The K-T conditions obtained for the point x(4) = (3.396, 0)ᵀ are:

    -3.21u1 + 4u2 - u3 = 0 ,
    1 + u2 - u4 = 0 ,
    23.427 > 0 ;  6.416 > 0 ;  3.396 > 0 ;  0 = 0 ,
    (23.427)u1 = 0 ;  (6.416)u2 = 0 ;  (3.396)u3 = 0 ;  (0)u4 = 0 ,
    u1, u2, u3, u4 ≥ 0.

The solution to the above conditions is the vector u* = (0, 0, 0, 1)ᵀ.
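The multiplier u4 = 1 follows from the second optimality equation once the inactive constraints force u1 = u2 = u3 = 0; a minimal check:

```python
# Sketch: solve the remaining multiplier at x(4) = (3.396, 0).
u1 = u2 = u3 = 0.0          # g1, g2, g3 are strictly positive (inactive)
u4 = 1 + u2                 # from 1 + u2 - u4 = 0
assert u4 == 1.0
assert all(u >= 0 for u in (u1, u2, u3, u4))   # all conditions hold: K-T point
```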
Thus, the point x(4) is also a K-T point. It is clear from the figure that the point x(3) is the minimum point, but the point x(4) is not a minimum point. Thus, we may conclude from the above exercise that a K-T point may or may not be a minimum point. But if a point is not a K-T point (like x(1) or x(2)), then it cannot be an optimum point.
Limitations

The necessity theorem helps identify points that are not optimal: a point is not optimal if it does not satisfy the Kuhn-Tucker conditions. On the other hand, not all points that satisfy the Kuhn-Tucker conditions are optimal. The Kuhn-Tucker sufficiency theorem gives conditions under which a point becomes an optimal solution to a single-objective NLP.
Sufficiency Condition

Consider the problem: minimize f(x), subject to gj(x) ≥ 0 for j = 1, 2, ..., J and hk(x) = 0 for k = 1, 2, ..., K. If f(x) is convex, each gj(x) is concave, and each hk(x) is linear, then any point satisfying the Kuhn-Tucker conditions is an optimal solution.
Some Remarks