
Optimization Techniques

Topic: Lagrange Multipliers method

Dr. Nasir M Mirza


Email: nasirmm@yahoo.com

Lagrange Multipliers
The method of Lagrange multipliers gives a set of necessary conditions to
identify optimal points of equality-constrained optimization problems.
This is done by converting a constrained problem to an equivalent
unconstrained problem with the help of certain unspecified parameters
known as Lagrange multipliers.
The classical problem formulation
minimize:
f(x1, x2, ..., xn)
subject to:
h1(x1, x2, ..., xn) = 0 (equality constraint)
can be converted to
minimize:
L(x, λ) = f(x) - λ h1(x)
where
L(x, λ) is the Lagrangian function.
The λ is an unspecified constant, positive or negative.
It is called the Lagrange multiplier.

Finding an Optimum using Lagrange Multipliers

The new problem is:

minimize L(x, λ) = f(x) - λ h1(x)

Suppose that we fix λ = λ* and the unconstrained minimum of L(x, λ)
occurs at x = x*, and x* satisfies h1(x*) = 0; then x* minimizes f(x)
subject to h1(x) = 0.
The trick is to find an appropriate value for the Lagrange multiplier λ.
This can be done by treating λ as a variable, finding the unconstrained
minimum of L(x, λ), and adjusting λ so that h1(x) = 0 is satisfied.

Method
1. The original problem is rewritten as:
   minimize L(x, λ) = f(x) - λ h1(x)
2. Take the derivatives of L(x, λ) with respect to xi and set them
   equal to zero.
   If there are n variables (i.e., x1, ..., xn) then you will get n
   equations with n + 1 unknowns (i.e., n variables xi and one
   Lagrange multiplier λ).
3. Express all xi in terms of the Lagrange multiplier λ.
4. Plug x in terms of λ into the constraint h1(x) = 0 and solve for λ.
5. Calculate x by using the value just found for λ.

Note that the n derivatives and one constraint equation result
in n + 1 equations for n + 1 unknowns!
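The steps above can be sketched on a small illustrative problem (my own choice, not one from the slides): minimize f(x1, x2) = x1² + x2² subject to h1(x1, x2) = x1 + x2 - 1 = 0.

```python
# Illustrative sketch of the method (assumed example, not from the slides):
#   minimize f(x1, x2) = x1**2 + x2**2  subject to  x1 + x2 - 1 = 0.
#
# Step 1: L(x, lam) = x1**2 + x2**2 - lam*(x1 + x2 - 1)
# Step 2: dL/dx1 = 2*x1 - lam = 0 and dL/dx2 = 2*x2 - lam = 0
# Step 3: x1 = lam/2 and x2 = lam/2
# Step 4: substituting into the constraint: lam/2 + lam/2 - 1 = 0, so lam = 1
lam = 1.0
# Step 5: recover the variables from the multiplier
x1, x2 = lam / 2.0, lam / 2.0

assert abs(x1 + x2 - 1.0) < 1e-12     # constraint satisfied
assert abs(2 * x1 - lam) < 1e-12      # stationarity in x1
# Nearby feasible points (x1 + t, x2 - t) only increase the objective:
f0 = x1**2 + x2**2
assert all((x1 + t)**2 + (x2 - t)**2 > f0 for t in (-0.1, 0.1))
```

The feasible perturbation in the last check stays on the constraint line, which is what distinguishes a constrained minimum from an unconstrained one.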

Multiple constraints
The Lagrange multiplier method can be used for any number of
equality constraints.
Suppose we have a classical problem formulation with k equality
constraints:

minimize:
f(x1, x2, ..., xn)
subject to:
h1(x1, x2, ..., xn) = 0
......
hk(x1, x2, ..., xn) = 0

This can be converted to: minimize L(x, λ) = f(x) - λᵀ h(x),
where λᵀ is the transpose of the vector of Lagrange multipliers λ,
which has length k.
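The multi-constraint form can be sketched on a small illustrative problem of my own choosing (not from the slides), with k = 2 equality constraints:

```python
# Assumed example (not from the slides):
#   minimize f = x**2 + y**2 + z**2
#   subject to h1 = x + y + z - 1 = 0 and h2 = x - y = 0.
# L(x, lam) = f - lam1*h1 - lam2*h2; setting the five partials to zero:
#   2x - lam1 - lam2 = 0
#   2y - lam1 + lam2 = 0
#   2z - lam1        = 0
#   x + y + z = 1,  x = y
# Subtracting the first two equations with x = y gives lam2 = 0; then
# x = y = z = lam1/2, and the first constraint gives lam1 = 2/3.
lam1, lam2 = 2.0 / 3.0, 0.0
x = y = z = lam1 / 2.0

assert abs(x + y + z - 1.0) < 1e-12 and abs(x - y) < 1e-12
# d = (1, 1, -2) is a feasible direction (it preserves both constraints);
# moving along it only increases the objective:
f0 = x * x + y * y + z * z
t = 0.05
f1 = (x + t)**2 + (y + t)**2 + (z - 2 * t)**2
assert f1 > f0
```

Note that the k multipliers appear one per constraint, exactly as in λᵀh(x) above.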

Example: Lagrange Multipliers

Minimise/Maximise a function given a constraint.
Maximise

R(x, y) = 5x + 2x² − 4y

subject to G(x, y) = 2x + y − 20 = 0

Construct the Lagrangian:

L(x, y, λ) = R(x, y) − λ G(x, y) = 5x + 2x² − 4y − λ(2x + y − 20)

∂L/∂x = 0  →  5 + 4x = 2λ

∂L/∂y = 0  →  −4 − λ = 0  →  λ = −4, so x = −13/4 = −3.25

∂L/∂λ = 0  →  2x + y = 20  →  y = 20 − 2(−3.25) = 26.5
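The stationary point can be checked numerically; note that the constraint 2x + y = 20 with x = −3.25 gives y = 20 − 2(−3.25) = 26.5:

```python
# Check of the worked example: R(x, y) = 5x + 2x**2 - 4y
# with G(x, y) = 2x + y - 20 = 0 and L = R - lam*G.
lam = -4.0                      # from dL/dy = -4 - lam = 0
x = (2.0 * lam - 5.0) / 4.0     # from dL/dx = 5 + 4x - 2*lam = 0
y = 20.0 - 2.0 * x              # from the constraint 2x + y = 20

assert abs(x - (-3.25)) < 1e-12
assert abs(y - 26.5) < 1e-12
assert abs(5 + 4 * x - 2 * lam) < 1e-12   # stationarity in x
assert abs(-4 - lam) < 1e-12              # stationarity in y
assert abs(2 * x + y - 20.0) < 1e-12      # constraint satisfied
```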

Example 1
Consider the problem:
[The problem statement and its solution appeared as images in the original slides and are not reproduced here.]

Example 2
We have a problem with an inequality constraint:

maximize h(x) = −x² + 8x
subject to x − 2 ≥ 0

The Lagrangian is given as

L(x, λ) = −x² + 8x + λ(x − 2),  with λ ≥ 0 and λ(x − 2) = 0

Then we have

Lx = −2x + 8 + λ = 0
Example 2
Case 1: λ(x − 2) = 0 with λ = 0.
From the equation for Lx we get x = 4, which also satisfies x − 2 ≥ 0.
Hence this solution (x* = 4), which gives h(4) = 16, is a possible
candidate for the maximum solution.
Case 2: λ(x − 2) = 0 with λ ≠ 0, so that x = 2.
Now from the equation for Lx we get λ = −4, which does not satisfy
the condition λ ≥ 0.
From these two cases we conclude that the optimum
solution is x* = 4 and h* = 16.
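The slide's original problem statement did not survive extraction; the case analysis above is consistent with maximizing h(x) = −x² + 8x subject to x − 2 ≥ 0 (this reconstructed h is an assumption), and a brute-force scan over the feasible set confirms the conclusion:

```python
# Assumed reconstruction of Example 2: maximize h(x) = -x**2 + 8x
# subject to x - 2 >= 0 (consistent with x* = 4, h(4) = 16, and
# lam = -4 at x = 2 in the case analysis above).
def h(x):
    return -x * x + 8.0 * x

# Grid scan over the feasible set [2, 10]:
xs = [2.0 + 0.001 * i for i in range(8001)]
best = max(xs, key=h)

assert abs(best - 4.0) < 1e-6     # maximizer at x* = 4
assert abs(h(best) - 16.0) < 1e-6 # optimal value h* = 16
assert h(2.0) < h(4.0)            # the boundary point x = 2 is not optimal
```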

Example 3
Find the shortest distance between the point (2, 2) and the upper
half of the disk of radius one, whose centre is at the origin. In
order to simplify the calculation, we minimize h, the square of the
distance:

minimize h(x, y) = (x − 2)² + (y − 2)²
subject to x² + y² ≤ 1 and y ≥ 0
Example 3
The Lagrangian function for this problem is

L(x, y, λ, v) = (x − 2)² + (y − 2)² − λ(1 − x² − y²) − v y

The necessary conditions are

Lx = 2(x − 2) + 2λx = 0       (8.20)
Ly = 2(y − 2) + 2λy − v = 0   (8.21)
x² + y² ≤ 1                   (8.22)
y ≥ 0                         (8.23)
λ(1 − x² − y²) = 0            (8.24)
v y = 0                       (8.25)

with λ ≥ 0 and v ≥ 0.

From (8.24) we see that either λ = 0 or x² + y² = 1, i.e., we are on the
boundary of the semicircle. If λ = 0, we see from (8.20) that x = 2.
But x = 2 does not satisfy (8.22) for any y, and
hence we conclude λ > 0 and x² + y² = 1.

Example 3

Figure : Shortest Distance from a Point to a Semi-Circle

Example 3
From (8.25) we conclude that either v = 0 or y = 0.
(a) If v = 0, then from (8.20), (8.21) and λ > 0, we get x = y.
Solving the latter with x² + y² = 1 gives (x, y) = (√2/2, √2/2).
If y = 0, then solving with x² + y² = 1 gives
(b) (x, y) = (1, 0) and (c) (x, y) = (−1, 0).
These three points are shown in the figure.

Of the three points found that satisfy the necessary conditions,
clearly the point (√2/2, √2/2) found in (a) is the nearest point
and solves the closest-point problem. The point (−1, 0) in (c) is in
fact the farthest point; and the point (1, 0) in (b) is neither the
closest nor the farthest point.
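As a check, the distances from (2, 2) to the three candidate points can be compared directly:

```python
import math

# Distances from (2, 2) to the three candidate points of Example 3.
p = (2.0, 2.0)

def dist(q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

a = (1 / math.sqrt(2), 1 / math.sqrt(2))  # case (a): x = y on the boundary
b = (1.0, 0.0)                             # case (b)
c = (-1.0, 0.0)                            # case (c)

assert dist(a) < dist(b) < dist(c)         # (a) nearest, (c) farthest
# The shortest distance works out to 2*sqrt(2) - 1:
assert abs(dist(a) - (2 * math.sqrt(2) - 1)) < 1e-12
```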

Example 4
Objective: Minimize the material (surface area) of a closed cylindrical
can of radius R and height H holding a given volume.

1. Write the objective function:

Area = f = 2(πR²) + 2πRH

2. Derive the constraint equation:

V = πR²H = C

so that

H = C/(πR²)

Example 4
3. Set up the Lagrange expression:

L = 2πR² + 2πRH + λ(πR²H − C)
    (objective)     (constraint)

4. Take partials with respect to each of the variables:

∂L/∂R = 4πR + 2πH + λ(2πRH) = 0

∂L/∂H = 2πR + λ(πR²) = 0

∂L/∂λ = πR²H − C = 0

5. Solve the equations simultaneously to obtain the variable values
at the optimum:

2πR + λπR² = 0  →  λ = −2/R

πR²H − C = 0  →  H = C/(πR²)

Example 4
6. Then by substitution into ∂L/∂R = 0:

4πR + 2πH − (2/R)(2πRH) = 0

Substituting H = C/(πR²) and multiplying through by R²:

4πR³ + 2C − 4C = 0

4πR³ = 2C

R = ∛(C/(2π))

Since C = πR²H, this gives 4πR³ = 2πR²H, so

R = H/2

i.e., the optimal cylinder has height equal to its diameter, H = 2R.
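The result can be checked numerically; the choice C = 1 below is arbitrary (my assumption for the sketch):

```python
import math

# Check of the cylinder result: for volume C, R = (C/(2*pi))**(1/3) and
# H = C/(pi*R**2) should satisfy H = 2R, and perturbing R (with H adjusted
# to keep the volume fixed at C) should only increase the surface area.
def area(R, C):
    H = C / (math.pi * R**2)       # height that fixes the volume at C
    return 2 * math.pi * R**2 + 2 * math.pi * R * H

C = 1.0                            # arbitrary volume for the check
R_opt = (C / (2 * math.pi)) ** (1.0 / 3.0)
H_opt = C / (math.pi * R_opt**2)

assert abs(H_opt - 2 * R_opt) < 1e-9        # height equals diameter
assert area(R_opt, C) < area(1.1 * R_opt, C)
assert area(R_opt, C) < area(0.9 * R_opt, C)
```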

LINEAR PROGRAMMING (graphical)

Used to solve problems that involve linear objective
functions and linear constraints.
1. Identify the design variables, the objective function and
the constraints.
The general form of the objective function:

f = Σ (i = 1 to n) ai xi = a1 x1 + a2 x2 + ...

The general form of the constraint functions:

gj = Σ (i = 1 to n) bij xi

LINEAR PROGRAMMING (graphical)


2. Identify the boundaries of the feasibility region using
the given constraints.
Consider inequalities as equalities to establish the
boundaries of the feasibility region.

3. Plot the constraint boundaries and the objective


function to determine the optimum design point.
Note: In linear programming the optimum always lies on the boundary
of the feasible region, at a vertex (an intersection of constraint
boundaries).

Linear Programming (cont.)

Objective function:

f = X1 + 3X2

Constraint functions:

1 → X1 + X2 ≤ 10
2 → X1 + 4X2 ≤ 16
X1 ≥ 0
X2 ≥ 0

[Figure: the feasible solution area in the (X1, X2) plane, bounded by
the two constraint lines and the coordinate axes.]

Linear Programming (cont.)

Solve for the intersection point of the two constraints:

X1 + X2 = 10  →  X1 = 10 − X2

(10 − X2) + 4X2 = 16  →  3X2 = 6  →  X2 = 2, X1 = 8

Three possible solutions (vertices) exist:

X1 = 0,  X2 = 4:  f = 0 + 3(4) = 12
X1 = 8,  X2 = 2:  f = 8 + 3(2) = 14
X1 = 10, X2 = 0:  f = 10 + 3(0) = 10

The optimum occurs at X1 = 8, X2 = 2, the intersection of the two
constraints.
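The graphical procedure amounts to intersecting pairs of constraint boundaries, keeping the feasible intersection points, and evaluating the objective at each vertex; a small sketch with the constraint data above hard-coded:

```python
from itertools import combinations

# Vertex enumeration for: maximize f = x1 + 3*x2
# subject to x1 + x2 <= 10, x1 + 4*x2 <= 16, x1 >= 0, x2 >= 0.
lines = [            # each row (a1, a2, b) is the boundary a1*x1 + a2*x2 = b
    (1.0, 1.0, 10.0),
    (1.0, 4.0, 16.0),
    (1.0, 0.0, 0.0),  # x1 = 0
    (0.0, 1.0, 0.0),  # x2 = 0
]

def feasible(x1, x2, eps=1e-9):
    return (x1 + x2 <= 10.0 + eps and x1 + 4.0 * x2 <= 16.0 + eps
            and x1 >= -eps and x2 >= -eps)

vertices = []
for (a1, a2, b1), (c1, c2, b2) in combinations(lines, 2):
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        continue                       # parallel boundaries: no intersection
    x1 = (b1 * c2 - a2 * b2) / det     # Cramer's rule for the 2x2 system
    x2 = (a1 * b2 - c1 * b1) / det
    if feasible(x1, x2):
        vertices.append((x1, x2))

best = max(vertices, key=lambda v: v[0] + 3.0 * v[1])
assert abs(best[0] - 8.0) < 1e-9 and abs(best[1] - 2.0) < 1e-9
```

This reproduces the result above: the optimum f = 14 at the vertex (8, 2).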
