
Uploaded by Muhammad Bilal Junaid

Lec 22 - 23 KKT Conditions


Email: nasirmm@yahoo.com

Optimality Criteria

Important question: how do we know that we have found the optimum? We need necessary and sufficient conditions for optimality.

The first-order optimality condition for the minimum of f(x) can be derived by considering the linear expansion of the function around the optimum point x* using a Taylor series:

f(x) ≈ f(x*) + ∇f(x*)T (x − x*) ,

where x − x* is the distance from the optimum.

Unconstrained Problems:

For f(x) ≥ f(x*) to hold near the optimum, this condition can only be ensured if ∇f(x*) = 0; the gradient of f(x) must vanish at the optimum. Thus the first-order necessary condition for the minimum of a function is that its gradient is zero at the optimum.

[Figure: plot of f(x) showing a local maximum, local minima, and an inflection point; the slope is zero at each of these points.]

The gradient vanishes at a maximum point also, and at any other point where the slope is zero. Therefore, it is only a necessary condition and not a sufficient condition.

Graph for Function and its derivative

[Figure: f(x) and df(x)/dx plotted together; df(x)/dx crosses zero at the local maximum, the local minima, and the inflection point.]

Unconstrained Problems

The second-order condition for the minimum of f(x) can be derived by considering the quadratic expansion of the function around the optimum point x* using a Taylor series:

f(x) ≈ f(x*) + ∇f(x*)T d + (1/2) dT ∇²f(x*) d ,   where d = x − x* .

For x* to be a local minimum, f(x) − f(x*) must be greater than or equal to zero in the neighborhood of x*. Since ∇f(x*) = 0, we must have

(1/2) dT ∇²f(x*) d ≥ 0 .

Unconstrained Problems

The term dT ∇²f(x*) d is non-negative for every d when the Hessian matrix of second partial derivatives is positive semidefinite:

H = ∇²F(x) =

| ∂²F/∂x1²     ∂²F/∂x1∂x2   ...   ∂²F/∂x1∂xn |
| ∂²F/∂x2∂x1   ∂²F/∂x2²     ...   ∂²F/∂x2∂xn |
|   ...           ...       ...      ...     |
| ∂²F/∂xn∂x1   ∂²F/∂xn∂x2   ...   ∂²F/∂xn²   |

Unconstrained Problems

If the Hessian is positive definite at x*, a local minimum has been found; if the Hessian is positive definite for all possible values of x, this would imply a convex design space.

Let x* be the point that we think is the minimum for f(x).

Necessary condition (for optimality): ∇f(x*) = 0.

A point that satisfies the necessary condition is a stationary point. It can be a minimum, maximum, or saddle point.

How do we know that we have a minimum? Answer: the sufficiency condition. The sufficient conditions for x* to be a strict local minimum are:

∇f(x*) = 0
∇²f(x*) is positive definite
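The two conditions above can be checked numerically. The following Python sketch (not part of the slides; the finite-difference steps, tolerance, and the test function are illustrative assumptions) approximates the gradient and Hessian by central differences and applies Sylvester's criterion for the 2x2 case:

```python
# Sketch: numerically verify the first- and second-order optimality
# conditions at a candidate point. The step sizes, tolerance, and the
# test function are illustrative assumptions, not from the lecture.

def grad(f, x, y, h=1e-5):
    # Central-difference gradient of f at (x, y).
    return ((f(x + h, y) - f(x - h, y)) / (2 * h),
            (f(x, y + h) - f(x, y - h)) / (2 * h))

def hessian(f, x, y, h=1e-4):
    # Central-difference Hessian of f at (x, y).
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[fxx, fxy], [fxy, fyy]]

def is_strict_local_min(f, x, y, tol=1e-3):
    gx, gy = grad(f, x, y)
    H = hessian(f, x, y)
    stationary = abs(gx) < tol and abs(gy) < tol
    # Sylvester's criterion for a 2x2 matrix: H is positive definite
    # iff both leading principal minors are positive.
    pos_def = H[0][0] > 0 and H[0][0] * H[1][1] - H[0][1] * H[1][0] > 0
    return stationary and pos_def

f = lambda x, y: (x - 1)**2 + (y + 2)**2   # assumed test function, min at (1, -2)
print(is_strict_local_min(f, 1.0, -2.0))   # True: both conditions hold
print(is_strict_local_min(f, 0.0, 0.0))    # False: gradient is nonzero
```

The same check returns False at a saddle point, where the stationarity test passes but a principal minor is negative.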

Example 1:

Find all stationary points of the following function. Using the optimality conditions, classify them as minimum, maximum, or inflection points.

The objective function is: f(x, y) = −2x + x² − xy + 2y²

The gradient vector:

∇f = ( ∂f/∂x , ∂f/∂y )T = ( −2 + 2x − y , −x + 4y )T

The Hessian matrix:

H = | ∂²f/∂x²    ∂²f/∂x∂y |   |  2  −1 |
    | ∂²f/∂y∂x   ∂²f/∂y²  | = | −1   4 |

Example 1:

The first-order (necessary) optimality conditions are:

−2 + 2x − y = 0 ;
−x + 4y = 0 .

From the second equation, x = 4y, and then the first gives −2 + 8y − y = 0, so y = 2/7. The possible solution point is x = 8/7 ≈ 1.14286 and y = 2/7 ≈ 0.285714.

For a minimum, the Hessian must be positive definite at this point. Let us find the leading principal minors:

A1 = |a11| = 2 ; A2 = det H = (2)(4) − (−1)(−1) = 8 − 1 = 7 ; both are positive.

So, H is positive definite.

Example 1:

The function value at the point x* = 1.14286, y* = 0.285714 is f = −1.14286. The point is a minimum point.

Since the Hessian matrix is positive definite everywhere, we know the function is convex. Therefore any local minimum is a global minimum.

[X,Y] = meshgrid(-1:.1:2);
Z = -2.*X + X.*X - X.*Y + 2.*Y.*Y;
contour(X,Y,Z,100)
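The stationary point can also be verified in exact arithmetic. This short Python check (a sketch of my own, not part of the original MATLAB material) solves the necessary conditions with rational numbers and confirms the principal minors and the function value:

```python
from fractions import Fraction as F

# f(x, y) = -2x + x^2 - xy + 2y^2  (Example 1's objective)
f = lambda x, y: -2*x + x*x - x*y + 2*y*y

# Solve the necessary conditions -2 + 2x - y = 0 and -x + 4y = 0
# exactly: x = 4y  =>  -2 + 7y = 0  =>  y = 2/7, x = 8/7.
y = F(2, 7)
x = 4 * y

# The gradient must vanish at the stationary point.
assert -2 + 2*x - y == 0 and -x + 4*y == 0

# Leading principal minors of H = [[2, -1], [-1, 4]] (Sylvester's criterion).
A1 = 2
A2 = 2*4 - (-1)*(-1)
print(A1, A2)      # 2 7  -> H is positive definite
print(f(x, y))     # -8/7, i.e. about -1.14286
```

Working in Fractions avoids the rounding in the decimal values 1.14286 and 0.285714 quoted on the slide.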

Example 1:

Contour graph using MATLAB: a graphical presentation of the function and the minimum at the point (x*, y*).

Example 1:

3-D plot: it confirms the solution as well, that a global minimum exists here at the point (x*, y*).

[X,Y] = meshgrid(-1:.1:2);
Z = -2.*X + X.*X - X.*Y + 2.*Y.*Y;
mesh(X,Y,Z);  % pass X and Y so the axes show x and y rather than matrix indices

Example 1:

Important observations:

- The minimum point does not change if we add a constant to the objective function.
- The minimum point does not change if we multiply the objective function by a positive constant.
- The problem changes from a minimization to a maximization problem if we multiply the objective function by a negative sign.
- The unconstrained problem is a convex problem if the objective function is convex. For convex problems any local minimum is also a global minimum.

Find values of the variables that minimize or maximize the objective function while satisfying the constraints. The standard form of the constrained optimization problem can be written as:

Minimize: F(x)   (objective function)

Subject to:

gj(x) ≥ 0 ,   j = 1, . . . , m   (inequality constraints)
hk(x) = 0 ,   k = 1, . . . , l   (equality constraints)
xi,lower ≤ xi ≤ xi,upper   (variable bounds)

where x = (x1, x2, x3, x4, x5, . . . , xn) are the design variables.


To prove a claim of optimality in constrained minimization (or maximization), we have to check the found point with respect to the Karush–Kuhn–Tucker (KKT) conditions.

Kuhn and Tucker extended the Lagrangian theory to include the general classical single-objective nonlinear programming problem:

Minimize: f(x)

Subject to:

gj(x) ≥ 0   for j = 1, 2, ..., J
hk(x) = 0   for k = 1, 2, ..., K

x = ( x1, x2, ..., xN )

Interior:

If no constraints are active and (thus) the solution lies in the interior of the feasible space, then the necessary condition for optimality is the same as for the unconstrained case: ∇f(x*) = 0.

Exterior:

If the solution lies on the boundary of the feasible space, then the condition ∇f(x*) = 0 does not apply, because some constraints will block movement toward this minimum. Some constraints will (thus) be active.

We cannot get any more improvement (in this case) if for x* there does not exist a vector d that is both a descent direction and a feasible direction. In other words: the possible feasible directions do not intersect the possible descent directions at all.

For the problem: minimize the objective function f(x)

subject to: gj(x) ≥ 0 ;   j = 1, 2, 3, . . . J
hk(x) = 0 ;   k = 1, 2, 3, . . . K
xi(L) ≤ xi ≤ xi(U) ;   i = 1, 2, 3, . . . N

this is a constrained optimization problem. Here gj(x) are the inequality constraint functions (J in total); hk(x) are the equality constraint functions (K in total). A point is feasible if all constraints and bounds are satisfied.

For the problem:

Minimize: f(x)

subject to: gj(x) ≥ 0 ;   j = 1, 2, 3, . . . J
hk(x) = 0 ;   k = 1, 2, 3, . . . K
xi(L) ≤ xi ≤ xi(U) ;   i = 1, 2, 3, . . . N

the Kuhn–Tucker conditions are:

∇f(x) − Σj=1..J uj ∇gj(x) − Σk=1..K vk ∇hk(x) = 0   (optimality)
gj(x) ≥ 0   for j = 1, 2, ..., J   (feasibility)
hk(x) = 0   for k = 1, 2, ..., K   (feasibility)
uj gj(x) = 0   for j = 1, 2, ..., J   (complementary slackness)
uj ≥ 0   for j = 1, 2, ..., J   (non-negativity)

If the definition of feasibility changes (for example, to gi(x) ≤ 0), the optimality and feasibility conditions change accordingly. The necessary conditions become:

∇f(x) + Σi ui ∇gi(x) + Σk vk ∇hk(x) = 0   (optimality)
gi(x) ≤ 0   for i = 1, 2, ..., J   (feasibility)
hk(x) = 0   for k = 1, 2, ..., K   (feasibility)
ui gi(x) = 0   for i = 1, 2, ..., J   (complementary slackness)
ui ≥ 0   for i = 1, 2, ..., J   (non-negativity)
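Once candidate multipliers are known, the four condition groups for the gj(x) ≥ 0 form can be checked mechanically. The Python sketch below is my own illustration (function names, signatures, and the tolerance are assumptions; equality constraints are omitted for brevity):

```python
# Sketch: check the Kuhn-Tucker conditions (g_j(x) >= 0 form) at a
# candidate point, given candidate multipliers u_j. All names and the
# tolerance are illustrative assumptions; no equality constraints here.

def kkt_satisfied(grad_f, grad_gs, gs, x, u, tol=1e-6):
    # grad_f(x) -> list of partials; grad_gs: list of constraint-gradient
    # functions; gs: list of constraint functions.
    n = len(grad_f(x))
    # Optimality: grad f - sum_j u_j * grad g_j = 0
    for i in range(n):
        r = grad_f(x)[i] - sum(uj * gg(x)[i] for uj, gg in zip(u, grad_gs))
        if abs(r) > tol:
            return False
    for uj, g in zip(u, gs):
        if g(x) < -tol:             # feasibility: g_j(x) >= 0
            return False
        if abs(uj * g(x)) > tol:    # complementary slackness
            return False
        if uj < 0:                  # non-negativity
            return False
    return True

# Assumed toy problem: minimize (x - 2)^2 subject to x >= 0.
f_grad = lambda x: [2 * (x[0] - 2)]
g = lambda x: x[0]             # constraint g(x) = x >= 0
g_grad = lambda x: [1.0]
print(kkt_satisfied(f_grad, [g_grad], [g], [2.0], [0.0]))  # True: interior optimum
print(kkt_satisfied(f_grad, [g_grad], [g], [0.0], [0.0]))  # False: optimality fails
```

This is exactly the procedure applied by hand to the four points of Exercise 4.1.1 below.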

Exercise 4.1.1

Let us take the following function to be minimized:

f(x) = (x² + y − 11)² + (x + y² − 7)²

subject to:

g1(x) = 26 − (x − 5)² − y² ≥ 0 ,
g2(x) = 20 − 4x − y ≥ 0 ,
x ≥ 0 , y ≥ 0 .

Here not every point in the search space is feasible. The feasible points are those that satisfy the above two constraints and the variable bounds.

Let us also choose four points x(1) = (1, 5)T, x(2) = (0, 0)T, x(3) = (3, 2)T, and x(4) = (3.396, 0)T to investigate whether each point is a K-T point. The feasible search space and these four points are shown on a contour plot of the objective function in Figure 4.1.

Exercise 4.1.1

The region on the other side of the hatched portion of a constraint line is feasible. The combination of the two constraints and the variable bounds makes the interior region feasible, as depicted in the figure.

Exercise 4.1.1

At first, we transform the variable bounds into two inequality constraints: g3(x) = x ≥ 0 and g4(y) = y ≥ 0. Thus, the above problem has four inequality constraints (J = 4) and no equality constraint (K = 0). There are two problem variables: N = 2.

Thus, for each point a total of 2 + 3 × 4 + 0 = 14 Kuhn–Tucker conditions need to be checked: N optimality conditions, plus feasibility, complementary slackness, and non-negativity for each of the J inequality constraints.

To formulate all K-T conditions, we first calculate the gradient of the objective function. In Table 4.1, we compute these gradients numerically and also compute the constraint values at all four points.

Exercise 4.1.1

At the point x(1) = (1, 5)T the constraint values are:

g1(x, y) = 26 − (x − 5)² − y² = 26 − (1 − 5)² − 5² = 26 − 16 − 25 = −15.0 ,
g2(x, y) = 20 − 4x − y = 20 − 4(1) − 5 = 11.0 ,
g3(x, y) = x = 1.0 ,
g4(x, y) = y = 5.0 .

The partial derivatives of the objective are

fx = 4x³ + 4xy − 42x + 2y² − 14 ,
fy = 2x² + 4y³ + 4xy − 26y − 22 ,

so the gradient at this point is

∇f(x, y) = ( fx , fy )T = (18, 370)T .

Exercise 4.1.1

The constraint gradients are:

g1 = 26 − (x − 5)² − y² ;   (g1)x = −2(x − 5) ,  (g1)y = −2y
g2 = 20 − 4x − y ;          (g2)x = −4 ,         (g2)y = −1
g3 = x ;                    (g3)x = 1 ,          (g3)y = 0
g4 = y ;                    (g4)x = 0 ,          (g4)y = 1
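The gradient and constraint values used in Table 4.1 can be reproduced with a few lines of Python. The formulas are taken from the exercise; the script itself is an illustrative sketch:

```python
# Sketch: evaluate the objective gradient and the four constraint values
# of Exercise 4.1.1 at each candidate point. The expressions are from
# the exercise; the code is an illustration.

def grad_f(x, y):
    fx = 4*x**3 + 4*x*y - 42*x + 2*y**2 - 14
    fy = 2*x**2 + 4*y**3 + 4*x*y - 26*y - 22
    return fx, fy

def constraints(x, y):
    g1 = 26 - (x - 5)**2 - y**2
    g2 = 20 - 4*x - y
    return g1, g2, x, y            # g3 = x, g4 = y

for pt in [(1, 5), (0, 0), (3, 2), (3.396, 0)]:
    print(pt, grad_f(*pt), constraints(*pt))
```

At (1, 5) this reproduces the gradient (18, 370) and the constraint values (−15, 11, 1, 5) computed above, and at (3, 2) it confirms that the objective gradient vanishes.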

Exercise 4.1.1

For the first point, substituting the values into the optimality condition

∇f(x) − Σj=1..J uj ∇gj(x) − Σk=1..K vk ∇hk(x) = 0

gives (there are no equality constraints, so the v terms vanish):

fx − u1 (g1)x − u2 (g2)x − u3 (g3)x − u4 (g4)x = 0 :
18 − (8)u1 − (−4)u2 − (1)u3 − (0)u4 = 0 ;

fy − u1 (g1)y − u2 (g2)y − u3 (g3)y − u4 (g4)y = 0 :
370 − (−10)u1 − (−1)u2 − (0)u3 − (1)u4 = 0 ;

using the constraint gradients at (1, 5):

∇g1 = (8, −10)T ; ∇g2 = (−4, −1)T ; ∇g3 = (1, 0)T ; ∇g4 = (0, 1)T .

Exercise 4.1.1

The remaining K-T conditions for the first point are:

Feasibility, gj(x) ≥ 0 for j = 1, 2, ..., J (and hk(x) = 0 for k = 1, 2, ..., K, vacuous here since K = 0):

g1 = −15 < 0 ; g2 = +11 > 0 ; g3 = 1 > 0 ; g4 = 5 > 0

Complementary slackness, uj gj(x) = 0 for j = 1, 2, ..., J:

(−15)u1 = 0 ; (11)u2 = 0 ; (1)u3 = 0 ; (5)u4 = 0

Non-negativity, uj ≥ 0 for j = 1, 2, ..., J:

u1 , u2 , u3 , u4 ≥ 0

Exercise 4.1.1

It is clear that the feasibility condition gj(x) ≥ 0 is not satisfied. In fact, since the first constraint value is negative, this constraint is violated at this point, and the point x(1) is not a feasible point, as shown in Figure 4.1. Therefore the point x(1) is not a K-T point.

If a point is infeasible, it cannot be an optimal point.

Exercise 4.1.1

For the second point x(2) = (0, 0)T we obtain the following conditions:

−14 − 10u1 + 4u2 − u3 = 0 ,
−22 + u2 − u4 = 0 ,
1 > 0 ; 20 > 0 ; 0 ≥ 0 ; 0 ≥ 0 ,
(1)u1 = 0 ; (20)u2 = 0 ; (0)u3 = 0 ; (0)u4 = 0 .

Complementary slackness forces u1 = 0 and u2 = 0; the optimality conditions then give u3 = −14 and u4 = −22.

Exercise 4.1.1

Since u3 = −14 and u4 = −22 are negative, the non-negativity conditions are not satisfied with these values. Thus, the point x(2) is not a K-T point.

Since the constraints are not violated, the point is a feasible point, as shown in Figure 4.1, but it cannot be an optimal point.

Exercise 4.1.1

Similarly, the conditions for the third point x(3) = (3, 2)T are the following (note that ∇f(3, 2) = (0, 0)T):

−4u1 + 4u2 − u3 = 0 ;
4u1 + u2 − u4 = 0 ;
18 > 0 ; 6 > 0 ; 3 > 0 ; 2 > 0 ,
(18)u1 = 0 , (6)u2 = 0 , (3)u3 = 0 , (2)u4 = 0 .

The vector u* = (0, 0, 0, 0)T satisfies all the above conditions.

Exercise 4.1.1

The vector u* = (0, 0, 0, 0)T satisfies all the above conditions. Thus, the point x(3) is a K-T point (Figure 4.1).

As mentioned earlier, K-T points are likely candidates for minimum points. To conclude, we may say that confirming the optimality of a point requires the satisfaction of further conditions.

Exercise 4.1.1

The K-T conditions obtained for the point x(4) = (3.396, 0)T are:

−3.21u1 + 4u2 − u3 = 0 ,
1 + u2 − u4 = 0 ,
23.427 > 0 ; 6.416 > 0 ; 3.396 > 0 ; 0 ≥ 0 ,
(23.427)u1 = 0 , (6.416)u2 = 0 , (3.396)u3 = 0 , (0)u4 = 0 .

The solution to the above conditions is the vector u* = (0, 0, 0, 1)T.
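The claim that u* = (0, 0, 0, 1)T works at x(4) can be verified numerically. This Python sketch is my own check (the loose tolerance is an assumption, needed because the slide rounds the gradient, roughly (0.03, 1.07), to (0, 1)):

```python
# Sketch: verify that u* = (0, 0, 0, 1) satisfies the K-T conditions at
# x(4) = (3.396, 0) of Exercise 4.1.1, within a loose tolerance. The
# tolerance is an illustrative assumption.

x, y = 3.396, 0.0
u = [0.0, 0.0, 0.0, 1.0]

fx = 4*x**3 + 4*x*y - 42*x + 2*y**2 - 14
fy = 2*x**2 + 4*y**3 + 4*x*y - 26*y - 22
g = [26 - (x - 5)**2 - y**2, 20 - 4*x - y, x, y]
grads = [(-2*(x - 5), -2*y), (-4, -1), (1, 0), (0, 1)]

# Optimality residual: grad f - sum_j u_j * grad g_j
rx = fx - sum(uj * gx for uj, (gx, gy) in zip(u, grads))
ry = fy - sum(uj * gy for uj, (gx, gy) in zip(u, grads))

tol = 0.1   # loose: x(4) itself is only given to three decimals
assert abs(rx) < tol and abs(ry) < tol                   # optimality
assert all(gj >= -tol for gj in g)                       # feasibility
assert all(abs(uj * gj) < tol for uj, gj in zip(u, g))   # compl. slackness
assert all(uj >= 0 for uj in u)                          # non-negativity
print("x(4) passes the K-T check with u* = (0, 0, 0, 1)")
```

Only g4 = y is active here, which is why only u4 is nonzero.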

Exercise 4.1.1

The solution to the above conditions is the vector u* = (0, 0, 0, 1)T. Thus, the point x(4) is also a K-T point.

It is clear from the figure that the point x(3) is the minimum point, but the point x(4) is not a minimum point. Thus, we may conclude from the above exercise that a K-T point may or may not be a minimum point. But if a point is not a K-T point (like point x(1) or x(2)), then it cannot be an optimum point.

Kuhn–Tucker Optimization Problem:

Find vectors x (N×1), u (1×J), and v (1×K) that satisfy:

∇f(x) − Σj uj ∇gj(x) − Σk vk ∇hk(x) = 0   (optimality)
gj(x) ≥ 0   for j = 1, 2, ..., J   (feasibility)
hk(x) = 0   for k = 1, 2, ..., K   (feasibility)
uj gj(x) = 0   for j = 1, 2, ..., J   (complementary slackness)
uj ≥ 0   for j = 1, 2, ..., J   (non-negativity)

If x* is an optimal solution to the NLP, then there exists a (u*, v*) such that (x*, u*, v*) solves the Kuhn–Tucker problem. The above equations not only give the necessary conditions for optimality, but also provide a way of finding the optimal point.

Limitations

The necessity theorem helps identify points that are not optimal: a point is not optimal if it does not satisfy the Kuhn–Tucker conditions. On the other hand, not all points that satisfy the Kuhn–Tucker conditions are optimal points.

The Kuhn–Tucker sufficiency theorem gives conditions under which a point becomes an optimal solution to a single-objective NLP.

Sufficiency Condition

The sufficient conditions for x* to be an optimal solution of the classical single-objective NLP problem, where f, gj, and hk are twice-differentiable functions, are that:

1) The necessary KKT conditions are met.
2) The Hessian matrix of the Lagrangian, ∇²L(x*) = ∇²f(x*) + Σi ui ∇²gi(x*) + Σj vj ∇²hj(x*), is positive definite on a subspace of Rⁿ, as defined by the conditions:

∇gj(x*)T y = 0 for every j belonging to I1 = { j | gj(x*) = 0, uj* > 0 } (active constraints), and
∇hk(x*)T y = 0 for k = 1, ..., K, for every y ≠ 0.

Consider the classical single-objective NLP problem:

Minimize: f(x)

subject to: gj(x) ≥ 0   for j = 1, 2, ..., J
hk(x) = 0   for k = 1, 2, ..., K

Let the objective function f(x) be convex, let the inequality constraint functions gj(x) for j = 1, ..., J all be concave (so that each gj(x) ≥ 0 defines a convex region), and let the equality constraints hk(x) for k = 1, ..., K be linear.

If this is true, then the necessary KKT conditions are also sufficient. Therefore, in this case, if there exists a solution x* that satisfies the KKT necessary conditions, then x* is an optimal solution to the NLP problem. In fact, it is a global optimum.
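A minimal illustration of this sufficiency result, using a problem of my own (not from the slides): a convex objective with a linear constraint, where the KKT point is verified to be the global feasible minimum.

```python
# Sketch: a tiny convex example of the sufficiency result. Minimize the
# convex f(x, y) = (x - 3)^2 + y^2 subject to the linear (hence concave)
# constraint g(x, y) = 2 - x >= 0. Problem and numbers are illustrative.

f = lambda x, y: (x - 3)**2 + y**2
g = lambda x, y: 2 - x

# Candidate KKT point: the constraint is active, so x = 2, and
# stationarity in y gives y = 0. Stationarity in x:
#   df/dx - u * dg/dx = 2(x - 3) - u * (-1) = -2 + u = 0  =>  u = 2.
x_star, y_star, u = 2.0, 0.0, 2.0

assert 2 * (x_star - 3) - u * (-1) == 0      # optimality in x
assert 2 * y_star - u * 0 == 0               # optimality in y
assert g(x_star, y_star) == 0 and u >= 0     # active constraint, u >= 0

# Because f is convex and g is linear, this KKT point is the global
# minimum over the feasible set; spot-check against a coarse grid.
best = min(f(0.1 * i, 0.1 * j - 5)
           for i in range(-50, 21) for j in range(0, 101)
           if 2 - 0.1 * i >= 0)
assert f(x_star, y_star) <= best + 1e-9
print(f(x_star, y_star))
```

The grid check is of course not a proof; convexity of f plus linearity of g is what guarantees the global claim.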

Some Remarks

The usefulness of these conditions depends on the problem formulation and the method. Sufficiency conditions are difficult to verify, and practical problems often do not have the required nice properties. For example, you will have problems if you do not know the explicit constraint equations (e.g., in FEM-based analysis), or when testing each priority level separately.
