
Optimization using Calculus

Optimization of Functions of Multiple Variables subject to Equality Constraints

D Nagesh Kumar, IISc

Optimization Methods: M2L4

Objectives
Optimization of functions of multiple variables subject to equality constraints using
the method of constrained variation and the method of Lagrange multipliers.


Constrained optimization
A function of multiple variables, f(X), is to be optimized subject to one or more equality constraints in many variables. These equality constraints, g_j(X), may or may not be linear. The problem statement is as follows:

Maximize (or minimize) f(X), subject to g_j(X) = 0, j = 1, 2, ..., m

where

X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}


Constrained optimization (contd.)


With the condition that m ≤ n; otherwise, if m > n, the problem becomes over-defined and there will be no solution. Of the many available methods, the method of constrained variation and the method of Lagrange multipliers are discussed here.


Solution by method of Constrained Variation


For the optimization problem defined above, let us consider a specific case with n = 2 and m = 1 before we derive the necessary and sufficient conditions for the general problem using Lagrange multipliers. The problem statement is as follows:

Minimize f(x_1, x_2), subject to g(x_1, x_2) = 0

For f(x_1, x_2) to have a minimum at a point X^* = [x_1^*, x_2^*], a necessary condition is that the total differential of f(x_1, x_2) must be zero at [x_1^*, x_2^*]:

df = \frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 = 0        (1)

Method of Constrained Variation (contd.)


Since g(x_1^*, x_2^*) = 0 at the minimum point, the variations dx_1 and dx_2 about the point [x_1^*, x_2^*] must be admissible variations, i.e. the varied point must still lie on the constraint:

g(x_1^* + dx_1, x_2^* + dx_2) = 0        (2)

Assuming dx_1 and dx_2 are small, the Taylor series expansion of (2) gives

g(x_1^* + dx_1, x_2^* + dx_2) = g(x_1^*, x_2^*) + \frac{\partial g}{\partial x_1}(x_1^*, x_2^*)\, dx_1 + \frac{\partial g}{\partial x_2}(x_1^*, x_2^*)\, dx_2 = 0        (3)


Method of Constrained Variation (contd.)


or

dg = \frac{\partial g}{\partial x_1} dx_1 + \frac{\partial g}{\partial x_2} dx_2 = 0 \quad \text{at} \; [x_1^*, x_2^*]        (4)

which is the condition that must be satisfied for all admissible variations. Assuming \partial g / \partial x_2 \neq 0, (4) can be rewritten as

dx_2 = -\frac{\partial g / \partial x_1}{\partial g / \partial x_2}(x_1^*, x_2^*)\, dx_1        (5)
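To make admissibility concrete, here is a small numerical sketch (the constraint g(x1, x2) = x1² + x2 − 2 and the point (1, 1) are hypothetical choices for illustration, not from the slides): choosing dx2 according to equation (5) keeps the varied point on the constraint to first order, so the residual constraint violation shrinks quadratically with dx1.

```python
# Numerical check of equation (5): if dx2 = -(dg/dx1)/(dg/dx2) * dx1,
# the varied point satisfies the constraint to first order.
# Hypothetical constraint g(x1, x2) = x1**2 + x2 - 2, which is zero at (1, 1).

def g(x1, x2):
    return x1**2 + x2 - 2

x1s, x2s = 1.0, 1.0            # a point on the constraint, g = 0
dg_dx1, dg_dx2 = 2 * x1s, 1.0  # partial derivatives of g at that point

for dx1 in (1e-2, 1e-3, 1e-4):
    dx2 = -(dg_dx1 / dg_dx2) * dx1   # admissible variation, eq. (5)
    err = abs(g(x1s + dx1, x2s + dx2))
    # the constraint violation shrinks like dx1**2 (second order), not dx1
    assert err < 10 * dx1**2
```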


Method of Constrained Variation (contd.)


Equation (5) indicates that once the variation in x_1 (dx_1) is chosen arbitrarily, the variation in x_2 (dx_2) is fixed automatically so that the variation remains admissible. Substituting equation (5) in (1) gives:

df = \left( \frac{\partial f}{\partial x_1} - \frac{\partial g / \partial x_1}{\partial g / \partial x_2} \frac{\partial f}{\partial x_2} \right) dx_1 = 0 \quad \text{at} \; (x_1^*, x_2^*)        (6)

The expression in parentheses is called the constrained variation of f. Equation (6) has to be satisfied for all dx_1, hence

\frac{\partial f}{\partial x_1} \frac{\partial g}{\partial x_2} - \frac{\partial f}{\partial x_2} \frac{\partial g}{\partial x_1} = 0 \quad \text{at} \; (x_1^*, x_2^*)        (7)

This gives the necessary condition for [x_1^*, x_2^*] to be an extreme point (maximum or minimum).
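Condition (7) is easy to check numerically. As a sketch on a hypothetical problem not taken from the slides — minimize f = x1² + x2² subject to x1 + x2 = 2, whose constrained minimum is at (1, 1) — the left-hand side of (7) vanishes at the optimum but not at other feasible points:

```python
# Check of the necessary condition (7) on a hypothetical problem:
# minimize f = x1**2 + x2**2 subject to g = x1 + x2 - 2 = 0.
# The constrained minimum is at (1, 1).

def condition_7(x1, x2):
    # df/dx1 * dg/dx2 - df/dx2 * dg/dx1, which must vanish at the optimum
    df_dx1, df_dx2 = 2 * x1, 2 * x2
    dg_dx1, dg_dx2 = 1.0, 1.0
    return df_dx1 * dg_dx2 - df_dx2 * dg_dx1

assert condition_7(1, 1) == 0   # holds at the constrained minimum
assert condition_7(0, 2) != 0   # fails at another feasible point
```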


Solution by method of Lagrange multipliers


Continuing with the same specific case of the optimization problem, with n = 2 and m = 1, we define a quantity \lambda, called the Lagrange multiplier, as

\lambda = -\frac{\partial f / \partial x_2}{\partial g / \partial x_2}\bigg|_{(x_1^*, x_2^*)}        (8)

Using this in (6),

\frac{\partial f}{\partial x_1} + \lambda \frac{\partial g}{\partial x_1} = 0 \quad \text{at} \; (x_1^*, x_2^*)        (9)

and (8) can be written as

\frac{\partial f}{\partial x_2} + \lambda \frac{\partial g}{\partial x_2} = 0 \quad \text{at} \; (x_1^*, x_2^*)        (10)

Solution by method of Lagrange multipliers (contd.)


Also, the constraint equation has to be satisfied at the extreme point:

g(x_1, x_2)\big|_{(x_1^*, x_2^*)} = 0        (11)

Hence equations (9) to (11) represent the necessary conditions for the point [x_1^*, x_2^*] to be an extreme point. Note that \lambda could be expressed in terms of \partial g / \partial x_1 as well, in which case \partial g / \partial x_1 has to be non-zero. Thus, these necessary conditions require that at least one of the partial derivatives of g(x_1, x_2) be non-zero at an extreme point.


Solution by method of Lagrange multipliers (contd.)


The conditions given by equations (9) to (11) can also be generated by constructing a function L, known as the Lagrangian function, as

L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda g(x_1, x_2)        (12)

Treating L as a function of x_1, x_2 and \lambda, the necessary conditions for its extremum are given by

\frac{\partial L}{\partial x_1}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_1}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_1}(x_1, x_2) = 0

\frac{\partial L}{\partial x_2}(x_1, x_2, \lambda) = \frac{\partial f}{\partial x_2}(x_1, x_2) + \lambda \frac{\partial g}{\partial x_2}(x_1, x_2) = 0        (13)

\frac{\partial L}{\partial \lambda}(x_1, x_2, \lambda) = g(x_1, x_2) = 0
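For a quadratic objective and a linear constraint, the three conditions in (13) form a linear system in (x1, x2, λ). A minimal sketch, again using the hypothetical problem of minimizing x1² + x2² subject to x1 + x2 = 2 (not from the slides), solves that system exactly with rational arithmetic:

```python
from fractions import Fraction as F

# Conditions (13) for the hypothetical problem
#   minimize f = x1**2 + x2**2  subject to  g = x1 + x2 - 2 = 0
# are linear:  2*x1 + lam = 0,  2*x2 + lam = 0,  x1 + x2 = 2.
# Solve the 3x3 system A [x1, x2, lam]^T = b by Gaussian elimination.

A = [[F(2), F(0), F(1)],
     [F(0), F(2), F(1)],
     [F(1), F(1), F(0)]]
b = [F(0), F(0), F(2)]

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]   # normalize pivot row
        for r in range(n):
            if r != col and M[r][col] != 0:          # eliminate column entry
                M[r] = [v - M[r][col] * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

x1, x2, lam = solve(A, b)
assert (x1, x2, lam) == (1, 1, -2)   # stationary point of L
```

The result agrees with definition (8): at the optimum, λ = −(∂f/∂x2)/(∂g/∂x2) = −2x2 = −2.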


Necessary conditions for a general problem


For a general problem with n variables and m equality constraints, the problem is defined as shown earlier:

Maximize (or minimize) f(X), subject to g_j(X) = 0, j = 1, 2, ..., m

where

X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

In this case the Lagrange function, L, will have one Lagrange multiplier \lambda_j for each constraint g_j(X):

L(x_1, x_2, ..., x_n, \lambda_1, \lambda_2, ..., \lambda_m) = f(X) + \lambda_1 g_1(X) + \lambda_2 g_2(X) + \dots + \lambda_m g_m(X)        (14)


Necessary conditions for a general problem (contd.)


L is now a function of n + m unknowns, x_1, x_2, ..., x_n, \lambda_1, \lambda_2, ..., \lambda_m, and the necessary conditions for the problem defined above are given by

\frac{\partial L}{\partial x_i} = \frac{\partial f}{\partial x_i}(X) + \sum_{j=1}^{m} \lambda_j \frac{\partial g_j}{\partial x_i}(X) = 0, \quad i = 1, 2, ..., n

\frac{\partial L}{\partial \lambda_j} = g_j(X) = 0, \quad j = 1, 2, ..., m        (15)

which represent n + m equations in terms of the n + m unknowns x_i and \lambda_j. The solution to this set of equations gives

X^* = \begin{bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{bmatrix} \quad \text{and} \quad \lambda^* = \begin{bmatrix} \lambda_1^* \\ \lambda_2^* \\ \vdots \\ \lambda_m^* \end{bmatrix}        (16)
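One way to read conditions (15): at X*, the gradient of f is a linear combination of the constraint gradients, with coefficients −λj. A short sketch on a hypothetical instance (n = 3, m = 2; f = x1² + x2² + x3², g1 = x1 + x2 + x3 − 3, g2 = x1 − x2 — none of this is from the slides), whose solution by (15) works out by hand to X* = (1, 1, 1), λ* = (−2, 0):

```python
# Conditions (15) component-wise: grad f + lam1*grad g1 + lam2*grad g2 = 0.
# Hypothetical instance: f = x1^2 + x2^2 + x3^2,
#   g1 = x1 + x2 + x3 - 3,  g2 = x1 - x2, with X* = (1, 1, 1).

grad_f  = [2.0, 2.0, 2.0]    # grad f at (1, 1, 1)
grad_g1 = [1.0, 1.0, 1.0]    # grad g1 (constant)
grad_g2 = [1.0, -1.0, 0.0]   # grad g2 (constant)
lam = [-2.0, 0.0]            # lambda* solving (15)

for i in range(3):
    assert grad_f[i] + lam[0] * grad_g1[i] + lam[1] * grad_g2[i] == 0

# both constraints hold at X*, i.e. dL/dlam_j = 0 in (15)
assert 1 + 1 + 1 - 3 == 0 and 1 - 1 == 0
```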


Sufficient conditions for a general problem


A sufficient condition for f(X) to have a relative minimum at X^* is that each root of the polynomial in z, defined by the following determinant equation, be positive:

\begin{vmatrix}
L_{11} - z & L_{12} & \cdots & L_{1n} & g_{11} & g_{21} & \cdots & g_{m1} \\
L_{21} & L_{22} - z & \cdots & L_{2n} & g_{12} & g_{22} & \cdots & g_{m2} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & & \vdots \\
L_{n1} & L_{n2} & \cdots & L_{nn} - z & g_{1n} & g_{2n} & \cdots & g_{mn} \\
g_{11} & g_{12} & \cdots & g_{1n} & 0 & 0 & \cdots & 0 \\
g_{21} & g_{22} & \cdots & g_{2n} & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
g_{m1} & g_{m2} & \cdots & g_{mn} & 0 & 0 & \cdots & 0
\end{vmatrix} = 0        (17)


Sufficient conditions for a general problem (contd.)


where

L_{ij} = \frac{\partial^2 L}{\partial x_i \partial x_j}(X^*, \lambda^*), \quad i, j = 1, 2, ..., n

g_{pq} = \frac{\partial g_p}{\partial x_q}(X^*), \quad p = 1, 2, ..., m \; \text{and} \; q = 1, 2, ..., n        (18)

Similarly, a sufficient condition for f(X) to have a relative maximum at X^* is that each root of the polynomial in z, defined by equation (17), be negative. If equation (17), on solving, yields some roots that are positive and others that are negative, then the point X^* is neither a maximum nor a minimum.
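For n = 2 and m = 1, the determinant in (17) is a polynomial of degree one in z, so its single root can be recovered from two evaluations. A sketch on the same hypothetical problem as before (minimize x1² + x2² subject to x1 + x2 = 2, giving L11 = L22 = 2, L12 = L21 = 0, g11 = g12 = 1 at the optimum — an illustration, not from the slides):

```python
from fractions import Fraction as F

# Sufficiency check (17) for the hypothetical problem
#   minimize f = x1**2 + x2**2  subject to  x1 + x2 = 2,
# at X* = (1, 1): L11 = L22 = 2, L12 = L21 = 0, g11 = g12 = 1.

def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def bordered(z):
    # the bordered matrix of equation (17) for n = 2, m = 1
    return [[F(2) - z, F(0),     F(1)],
            [F(0),     F(2) - z, F(1)],
            [F(1),     F(1),     F(0)]]

# the determinant is linear in z: two evaluations determine its root
d0, d1 = det3(bordered(F(0))), det3(bordered(F(1)))
root = -d0 / (d1 - d0)
assert root == 2 and root > 0   # positive root => relative minimum
```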


Example
Minimize f(X) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 + 7x_1 + 5x_2

subject to x_1 + x_2 = 5, i.e. g(X) = x_1 + x_2 - 5 = 0

The Lagrangian function is

L(x_1, x_2, \lambda_1) = f(X) + \lambda_1 g(X) = 3x_1^2 + 6x_1 x_2 + 5x_2^2 + 7x_1 + 5x_2 + \lambda_1 (x_1 + x_2 - 5)


The necessary conditions (13) give

\frac{\partial L}{\partial x_1} = 6x_1 + 6x_2 + 7 + \lambda_1 = 0

\frac{\partial L}{\partial x_2} = 6x_1 + 10x_2 + 5 + \lambda_1 = 0
\;\Rightarrow\; 3x_1 + 5x_2 = -\frac{1}{2}(5 + \lambda_1)
\;\Rightarrow\; 3(x_1 + x_2) + 2x_2 = -\frac{1}{2}(5 + \lambda_1)

\frac{\partial L}{\partial \lambda_1} = x_1 + x_2 - 5 = 0

Substituting x_1 + x_2 = 5 in the first condition gives 30 + 7 + \lambda_1 = 0, i.e. \lambda_1 = -37. Then 15 + 2x_2 = -\frac{1}{2}(5 - 37) = 16, so

x_2 = \frac{1}{2}, \quad x_1 = \frac{9}{2}

X^* = \left[ \frac{9}{2}, \frac{1}{2} \right]; \quad \lambda^* = [-37]

For the sufficiency check, equation (17) becomes

\begin{vmatrix} L_{11} - z & L_{12} & g_{11} \\ L_{21} & L_{22} - z & g_{12} \\ g_{11} & g_{12} & 0 \end{vmatrix} = \begin{vmatrix} 6 - z & 6 & 1 \\ 6 & 10 - z & 1 \\ 1 & 1 & 0 \end{vmatrix} = 2z - 4 = 0

whose single root z = 2 is positive, so X^* is a relative minimum.
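The arithmetic of the example can be verified mechanically. The sketch below checks that X* = (9/2, 1/2) with λ1* = −37 satisfies the necessary conditions (13) for this problem, and that the root of the sufficiency determinant (17) is positive:

```python
from fractions import Fraction as F

# Verify the worked example: L = f + lam*g with
#   f = 3*x1**2 + 6*x1*x2 + 5*x2**2 + 7*x1 + 5*x2,  g = x1 + x2 - 5.

x1, x2, lam = F(9, 2), F(1, 2), F(-37)

# necessary conditions (13)
assert 6*x1 + 6*x2 + 7 + lam == 0    # dL/dx1 = 0
assert 6*x1 + 10*x2 + 5 + lam == 0   # dL/dx2 = 0
assert x1 + x2 - 5 == 0              # dL/dlam = g = 0

# sufficiency (17): det [[6-z, 6, 1], [6, 10-z, 1], [1, 1, 0]]
# expands to 2*z - 4, whose root z = 2 is positive => minimum
z = F(2)
det = (6 - z) * ((10 - z) * 0 - 1) - 6 * (0 - 1) + 1 * (6 - (10 - z))
assert det == 0 and z > 0
```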

Thank you
