In optimal design problems, values for a set of n design variables, (x1, x2, ..., xn), are to be found that minimize a scalar-valued objective function of the design variables, such that a set of m inequality constraints is satisfied. Constrained optimization problems are generally expressed as

\[
\min_{x_1, x_2, \ldots, x_n} J = f(x_1, x_2, \ldots, x_n)
\quad \text{such that} \quad
\begin{cases}
g_1(x_1, x_2, \ldots, x_n) \le 0 \\
g_2(x_1, x_2, \ldots, x_n) \le 0 \\
\qquad \vdots \\
g_m(x_1, x_2, \ldots, x_n) \le 0
\end{cases}
\tag{1}
\]
If the objective function is quadratic in the design variables and the constraint equations are
linear in the design variables, the optimization problem usually has a unique solution.
Consider the simplest constrained minimization problem:
\[
\min_{x} \; \tfrac{1}{2} k x^2 \quad \text{such that} \quad x \ge b. \tag{2}
\]
This problem has a single design variable, the objective function is quadratic (J = ½kx²), and there is a single constraint inequality, which is linear in x (g(x) = b − x ≤ 0). If b > 0, so that the unconstrained minimum x = 0 violates the constraint, the constraint binds the optimum and the optimal solution, x*, is given by x* = b. If b ≤ 0, the constraint does not bind the optimum and the optimal solution is x* = 0. Not all optimization problems are so easy; most require a more systematic approach. The method of Lagrange multipliers is one such approach, and will be applied to this simple problem.
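To make this concrete, the closed-form solution above can be sketched in a few lines of Python; the function name and the values of k and b are illustrative, not from the notes:

```python
def constrained_spring_min(k, b):
    """Solve min (1/2) k x^2 subject to x >= b.

    Returns the optimal design variable x* and the Lagrange
    multiplier lam*, which is k*b when the constraint binds, else 0.
    """
    if b > 0:   # constraint binds: the unconstrained optimum x = 0 is infeasible
        x_star = b
        lam_star = k * b
    else:       # constraint inactive: the unconstrained optimum x = 0 is feasible
        x_star = 0.0
        lam_star = 0.0
    return x_star, lam_star

# Case 1: b = 1  -> x* = b, lam* = k*b
print(constrained_spring_min(2.0, 1.0))   # (1.0, 2.0)
# Case 2: b = -1 -> x* = 0, lam* = 0
print(constrained_spring_min(2.0, -1.0))  # (0.0, 0.0)
```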
Lagrange multiplier methods involve the modification of the objective function through
the addition of terms that describe the constraints. The objective function J = f (x) is
augmented by the constraint equations through a set of non-negative multiplicative Lagrange multipliers, λj ≥ 0. The augmented objective function, JA(x, λ), is a function of the n design variables and m Lagrange multipliers,

\[
J_A(x_1, \ldots, x_n, \lambda_1, \ldots, \lambda_m)
= f(x_1, \ldots, x_n) + \sum_{j=1}^{m} \lambda_j \, g_j(x_1, \ldots, x_n) \tag{3}
\]
Case 1: b = 1
If b = 1 then the minimum of ½kx² is constrained by the inequality x ≥ b, and the optimal value of λ should minimize JA(x, λ) at x = b. Figure 1(a) plots JA(x, λ) for a few non-negative values of λ and Figure 1(b) plots contours of JA(x, λ).
[Figure 1. (a): augmented cost JA(x, λ) versus the design variable x for λ = 0, 1, 2, 3, 4, with b = 1; the optimum is marked with a *. (b): contours of JA(x, λ) over the design variable x and the Lagrange multiplier λ.]
Note that JA(x, λ) is independent of λ at x = b, and that the optimum (x*, λ*) is a saddle point of the augmented objective,

\[
J_A(x^*, \lambda) \le J_A(x^*, \lambda^*) \le J_A(x, \lambda^*).
\]
This example has a physical interpretation. The objective function J = ½kx² represents the potential energy in a spring. The minimum potential energy in a spring corresponds to a stretch of zero (x = 0). The constrained problem:

\[
\min_{x} \; \tfrac{1}{2} k x^2 \quad \text{such that} \quad x \ge 1
\]

means: minimize the potential energy in the spring such that the stretch in the spring is greater than or equal to 1. The solution to this problem is to set the stretch in the spring equal to the smallest allowable value (x* = 1). The force applied to the spring in order to achieve this objective is f = kx*. This force is the Lagrange multiplier for this problem, (λ* = kx*).
Case 2: b = −1
If b = −1 then the minimum of ½kx² is not constrained by the inequality x ≥ b. The derivation above would give x* = −1, with λ* = −k. The negative value of λ* indicates that the constraint does not affect the optimal solution, and λ should therefore be set to zero. Setting λ = 0, JA(x, λ) is minimized at x* = 0. Figure 2(a) plots JA(x, λ) for a few negative values of λ and Figure 2(b) plots contours of JA(x, λ).
[Figure 2. (a): augmented cost JA(x, λ) versus the design variable x for λ = 0, −1, −2, −3, −4, with b = −1; the optimum is marked with a *. (b): contours of JA(x, λ) over the design variable x and the Lagrange multiplier λ. Again, JA(x, λ) is independent of λ at x = b.]
The conditions

\[
\left. \frac{\partial J_A}{\partial x_k} \right|_{x_i = x_i^*, \; \lambda_j = \lambda_j^*} = 0 \tag{8a}
\]

\[
\left. \frac{\partial J_A}{\partial \lambda_k} \right|_{x_i = x_i^*, \; \lambda_j = \lambda_j^*} = 0 \tag{8b}
\]

hold at the optimum point (x*, λ*).
[Figure 3: two sketches of the gradient condition ∇f = λ∇g at the constraint boundary x = b, one with λ < 0 and one with λ > 0, with the feasible ("ok") and infeasible ("not ok") sides of g(x) indicated.]
Figure 3. If increasing the constraint, g(x), results in an improved objective (a reduced cost), then λ < 0, and the constraint g(x) ≤ 0 should not bind the optimum point. If increasing the constraint results in an increased cost, then λ > 0, and the constraint g(x) ≤ 0 must be enforced at the optimum point.
\[
\min_{x_1, x_2} \; x_1^2 + 0.5 x_1 + 3 x_1 x_2 + 5 x_2^2
\quad \text{such that} \quad
\begin{cases}
3 x_1 + 2 x_2 + 2 \le 0 \\
15 x_1 - 3 x_2 - 1 \le 0
\end{cases}
\tag{9}
\]
This example also has a quadratic objective function and inequality constraints that are linear
in the design variables. Contours of the objective function and the two inequality constraints
are plotted in Figure 4. The feasible region of these two inequality constraints is to the left of the lines in the figure, labeled as g1 ok and g2 ok. The figure shows that the inequality g1(x1, x2) ≤ 0 constrains the solution and the inequality g2(x1, x2) ≤ 0 does not. This is
visible in Figure 4 with n = 2, but for more complicated problems it may not be immediately
clear which inequality constraints are active.
Using the method of Lagrange multipliers, the augmented objective function is
\[
J_A(x_1, x_2, \lambda_1, \lambda_2) = x_1^2 + 0.5 x_1 + 3 x_1 x_2 + 5 x_2^2
+ \lambda_1 (3 x_1 + 2 x_2 + 2) + \lambda_2 (15 x_1 - 3 x_2 - 1) \tag{10}
\]
Unlike the first examples with n = 1 and m = 1, we cannot plot contours of JA(x1, x2, λ1, λ2)
since this would be a plot in four-dimensional space. Nonetheless, the same optimality
conditions hold.
\[
\min_{x_1} J_A \;\Rightarrow\;
\left. \frac{\partial J_A}{\partial x_1} \right|_{x_1^*, x_2^*, \lambda_1^*, \lambda_2^*} = 0
\;\Rightarrow\; 2 x_1 + 0.5 + 3 x_2 + 3 \lambda_1 + 15 \lambda_2 = 0 \tag{11a}
\]
\[
\min_{x_2} J_A \;\Rightarrow\;
\left. \frac{\partial J_A}{\partial x_2} \right|_{x_1^*, x_2^*, \lambda_1^*, \lambda_2^*} = 0
\;\Rightarrow\; 3 x_1 + 10 x_2 + 2 \lambda_1 - 3 \lambda_2 = 0 \tag{11b}
\]
\[
\max_{\lambda_1} J_A \;\Rightarrow\;
\left. \frac{\partial J_A}{\partial \lambda_1} \right|_{x_1^*, x_2^*, \lambda_1^*, \lambda_2^*} = 0
\;\Rightarrow\; 3 x_1 + 2 x_2 + 2 = 0 \tag{11c}
\]
\[
\max_{\lambda_2} J_A \;\Rightarrow\;
\left. \frac{\partial J_A}{\partial \lambda_2} \right|_{x_1^*, x_2^*, \lambda_1^*, \lambda_2^*} = 0
\;\Rightarrow\; 15 x_1 - 3 x_2 - 1 = 0 \tag{11d}
\]
If the objective function is quadratic in the design variables and the constraints are linear
in the design variables, the optimality conditions are simply a set of linear equations in the
design variables and the Lagrange multipliers. In this example the optimality conditions are
expressed as four linear equations with four unknowns. In general we may not know which
inequality constraints are active. If there are only a few constraint equations, it's not too hard to try all combinations of any number of constraints, fixing the Lagrange multipliers for the other inequality constraints equal to zero.
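This try-all-combinations strategy can be sketched numerically. The sketch below (the function name and tolerance are illustrative, not from the notes) enumerates every candidate active set, solves the corresponding linear KKT system, and accepts a candidate only when all of its multipliers are non-negative and the remaining constraints are satisfied:

```python
import itertools
import numpy as np

def solve_qp_active_sets(H, c, G, d, tol=1e-9):
    """Minimize 0.5 x^T H x + c^T x  subject to  G x + d <= 0
    by enumerating combinations of active constraints (illustrative sketch)."""
    n, m = H.shape[0], G.shape[0]
    for k in range(m + 1):
        for S in itertools.combinations(range(m), k):
            S = list(S)
            GS, dS = G[S], d[S]
            # KKT system for the constraints assumed active (g_j = 0)
            K = np.block([[H, GS.T], [GS, np.zeros((k, k))]]) if k else H
            rhs = np.concatenate([-c, -dS]) if k else -c
            try:
                sol = np.linalg.solve(K, rhs)
            except np.linalg.LinAlgError:
                continue
            x, lam_S = sol[:n], sol[n:]
            lam = np.zeros(m)
            lam[S] = lam_S
            # accept only if feasible and all multipliers are non-negative
            if np.all(G @ x + d <= tol) and np.all(lam >= -tol):
                return x, lam
    return None

# Example of equation (9): H, c from the objective; G, d from g1, g2
H = np.array([[2.0, 3.0], [3.0, 10.0]])
c = np.array([0.5, 0.0])
G = np.array([[3.0, 2.0], [15.0, -3.0]])
d = np.array([2.0, -1.0])
x, lam = solve_qp_active_sets(H, c, G, d)
print(np.round(x, 2), np.round(lam, 2))  # x ≈ (-0.81, 0.21), lam ≈ (0.16, 0)
```

Applied to equation (9), this returns the solution with only g1 active, matching the case-by-case solution worked out by hand below.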
First, let's find the unconstrained minimum by assuming that neither constraint g1(x1, x2) nor g2(x1, x2) is active, λ1 = 0, λ2 = 0, and

\[
\begin{bmatrix} 2 & 3 \\ 3 & 10 \end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}
=
\begin{bmatrix} -0.5 \\ 0 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1^* \\ x_2^* \end{bmatrix}
=
\begin{bmatrix} -0.45 \\ 0.14 \end{bmatrix}, \tag{12}
\]

which is the unconstrained minimum shown in Figure 4 as a *. Plugging this solution into the constraint equations gives g1(x1*, x2*) = 0.93 and g2(x1*, x2*) = −8.17, so the unconstrained minimum is not feasible with respect to constraint g1, since g1(−0.45, 0.14) > 0.
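The 2×2 solve is easy to reproduce; a minimal numpy check, under the sign conventions of equations (11a)–(11b):

```python
import numpy as np

# Unconstrained minimum of J = x1^2 + 0.5*x1 + 3*x1*x2 + 5*x2^2:
# setting the gradient to zero gives the 2x2 linear system below.
H = np.array([[2.0, 3.0], [3.0, 10.0]])
rhs = np.array([-0.5, 0.0])
x = np.linalg.solve(H, rhs)
print(np.round(x, 2))   # approximately [-0.45  0.14]

# Evaluate the constraints at the unconstrained minimum:
g1 = 3 * x[0] + 2 * x[1] + 2   # positive: infeasible with respect to g1
g2 = 15 * x[0] - 3 * x[1] - 1  # negative: feasible with respect to g2
print(round(g1, 2), round(g2, 2))
```

(The value 0.93 in the text comes from the rounded solution (−0.45, 0.14); the unrounded value is ≈ 0.91, with the same sign.)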
Next, assuming both constraints g1(x1, x2) and g2(x1, x2) are active, optimal values for x1, x2, λ1, and λ2 are sought, and all four equations must be solved together.

\[
\begin{bmatrix}
2 & 3 & 3 & 15 \\
3 & 10 & 2 & -3 \\
3 & 2 & 0 & 0 \\
15 & -3 & 0 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \lambda_1 \\ \lambda_2 \end{bmatrix}
=
\begin{bmatrix} -0.5 \\ 0 \\ -2 \\ 1 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1^* \\ x_2^* \\ \lambda_1^* \\ \lambda_2^* \end{bmatrix}
=
\begin{bmatrix} -0.10 \\ -0.85 \\ 3.55 \\ -0.56 \end{bmatrix}, \tag{13}
\]

which is the constrained minimum shown in Figure 4 as a o at the intersection of the g1 line with the g2 line. Note that λ2 < 0, indicating that constraint g2 should not be active; g2(−0.10, −0.85) = 0 (ok). Enforcing the constraint g2 needlessly compromises the optimality of this solution. So, while this solution is feasible (both g1 and g2 evaluate to zero), the solution could be improved by letting go of the g2 constraint and moving along the g1 line.
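The 4×4 system can be verified the same way (a minimal numpy sketch):

```python
import numpy as np

# Both constraints assumed active: stack (11a)-(11d) as one linear
# system in (x1, x2, lam1, lam2).
K = np.array([[ 2.0,  3.0, 3.0, 15.0],
              [ 3.0, 10.0, 2.0, -3.0],
              [ 3.0,  2.0, 0.0,  0.0],
              [15.0, -3.0, 0.0,  0.0]])
rhs = np.array([-0.5, 0.0, -2.0, 1.0])
x1, x2, lam1, lam2 = np.linalg.solve(K, rhs)
print(round(lam2, 2))   # negative: g2 should not have been enforced
```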
So, assuming only constraint g1 is active, g2 is not active, λ2 = 0, and

\[
\begin{bmatrix}
2 & 3 & 3 \\
3 & 10 & 2 \\
3 & 2 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \lambda_1 \end{bmatrix}
=
\begin{bmatrix} -0.5 \\ 0 \\ -2 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1^* \\ x_2^* \\ \lambda_1^* \end{bmatrix}
=
\begin{bmatrix} -0.81 \\ 0.21 \\ 0.16 \end{bmatrix}, \tag{14}
\]

which is the constrained minimum shown as a o on the g1 line in Figure 4. Note that λ1 > 0, which indicates that this constraint is active. Plugging x1* and x2* into g2(x1, x2) gives a value of −13.78, so this constrained minimum is feasible with respect to both constraints. This is the solution we're looking for.
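And the 3×3 system with only g1 active, again as a minimal numpy check:

```python
import numpy as np

# Only g1 active (lam2 = 0): 3x3 KKT system in (x1, x2, lam1)
K = np.array([[2.0,  3.0, 3.0],
              [3.0, 10.0, 2.0],
              [3.0,  2.0, 0.0]])
rhs = np.array([-0.5, 0.0, -2.0])
x1, x2, lam1 = np.linalg.solve(K, rhs)
g2 = 15 * x1 - 3 * x2 - 1
print(round(lam1, 2), round(g2, 1))  # lam1 > 0 and g2 < 0: optimal and feasible
```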
As a final check, assuming only constraint g2 is active, g1 is not active, λ1 = 0, and

\[
\begin{bmatrix}
2 & 3 & 15 \\
3 & 10 & -3 \\
15 & -3 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ \lambda_2 \end{bmatrix}
=
\begin{bmatrix} -0.5 \\ 0 \\ 1 \end{bmatrix}
\;\Rightarrow\;
\begin{bmatrix} x_1^* \\ x_2^* \\ \lambda_2^* \end{bmatrix}
=
\begin{bmatrix} 0.06 \\ -0.03 \\ -0.04 \end{bmatrix}, \tag{15}
\]

which is the constrained minimum shown as a o on the g2 line in Figure 4. Note that λ2 < 0, which indicates that this constraint is not active, contradicting our assumption. Further, plugging x1* and x2* into g1(x1, x2) gives a value of +2.12, so this constrained minimum is not feasible with respect to g1, since g1(0.06, −0.03) > 0.
[Figure 4: contours over the design variables x1 (horizontal, −1 to 1) and x2 (vertical, −1 to 0.5), with the feasible sides of the constraint lines labeled g1 ok and g2 ok; panel (a) shows contours of J, panel (b) shows contours of JA.]
Figure 4. Contours of the objective function and the constraint equations for the example of equation (9). (a): J = f(x1, x2); (b): JA = f(x1, x2) + λ1 g1(x1, x2). Note that the contours of JA are shifted so that the minimum of JA is at the optimal point along the g1 line.
\[
\frac{\partial J_A(x, \lambda)}{\partial x} = 0^{\mathsf{T}} \tag{20}
\]

results in Hx + Aᵀλ = 0, from which x = −H⁻¹Aᵀλ. Substituting this solution into JA results in

\[
\max_{\lambda} \; -\tfrac{1}{2} \lambda^{\mathsf{T}} A H^{-1} A^{\mathsf{T}} \lambda - \lambda^{\mathsf{T}} b
\quad \text{such that} \quad \lambda \ge 0, \tag{21}
\]

or

\[
\min_{\lambda} \; \tfrac{1}{2} \lambda^{\mathsf{T}} A H^{-1} A^{\mathsf{T}} \lambda + \lambda^{\mathsf{T}} b
\quad \text{such that} \quad \lambda \ge 0, \tag{22}
\]

which is independent of x.
Equation (19) is called the primal quadratic programming problem and equation (22)
is called the dual quadratic programming problem. The primal problem has n + m unknown
variables (x and λ) whereas the dual problem has only m unknown variables (λ) and is
therefore easier to solve.
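A minimal scalar check of this primal–dual relationship, under the sign conventions used above (constraint written as Ax ≤ b, so the spring constraint x ≥ b becomes −x ≤ −b); the variable names are illustrative:

```python
# Spring problem: min (1/2) k x^2  subject to  x >= b_spring,
# written as A x <= b with A = [-1], b = [-b_spring].
k = 2.0
b_spring = 1.0

# Dual problem (22): min (1/2) lam * (A H^-1 A^T) * lam + lam * b, lam >= 0.
# Here H = k, A = -1, b = -b_spring, so the dual objective is
#   lam^2 / (2k) - lam * b_spring,
# minimized at lam* = k * b_spring (clipped at zero when b_spring <= 0).
lam_star = max(k * b_spring, 0.0)

# Recover the primal solution: x* = -H^-1 A^T lam* = lam* / k
x_star = lam_star / k
print(lam_star, x_star)   # lam* = k*b and x* = b when b > 0
```

This reproduces the Case 1 result: for b = 1 the dual gives λ* = k and the recovered primal solution sits on the constraint, x* = b.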