
CY4A2 Nonlinear Programming Lecture 7

Algorithms for Constrained Optimisation
Quadratic Programming (QP)

A quadratic program is an optimisation problem with a quadratic objective and linear constraints:

    min_x  (1/2) x^T H x + g^T x                         (1)

subject to

    A x − b = 0
    C x − d ≤ 0                                          (2)

where H is constant and symmetric, A and C are constant matrices, and g, b and d are constant vectors. The solution to an unconstrained QP problem with H positive definite is obvious:

    x* = −H^{−1} g
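As a concrete illustration of this formula, here is a tiny MATLAB fragment; the matrix H and vector g are made-up example data, not from the lecture:

H = [4 1; 1 3];      % hypothetical symmetric positive definite H
g = [-1; -2];

% Unconstrained minimiser of (1/2)x'Hx + g'x is x* = -H^{-1}g
xstar = -(H \ g);

Using the backslash solve is cheaper and numerically safer than forming inv(H) explicitly.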
The solution of an equality constrained QP problem with H positive definite can be found by looking at the Lagrangian function:

    L = (1/2) x^T H x + g^T x + λ^T (A x − b)            (3)

The gradient of the Lagrangian is:

    ∇L = H x + g + A^T λ = 0                             (4)
This equation, together with the equality constraint A x − b = 0, defines the following set of linear equations:

    [ H   A^T ] [ x ]       [ −g ]
    [ A   0   ] [ λ ]   =   [  b ]                       (5)
The solution is then:

    x* = −H^{−1} (g + A^T λ)
    λ  = −(A H^{−1} A^T)^{−1} (b + A H^{−1} g)
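A minimal MATLAB sketch of this equality-constrained solve, assuming small made-up data for H, g, A and b, assembles and solves the linear system (5) directly:

% Hypothetical example data
H = [4 1; 1 3];            % symmetric positive definite
g = [-1; -2];
A = [1 1];  b = 1;         % one equality constraint: x1 + x2 = 1

% KKT system from (5): [H A'; A 0] [x; lambda] = [-g; b]
n = size(H, 1);  p = size(A, 1);
KKT = [H, A'; A, zeros(p)];
sol = KKT \ [-g; b];
xstar  = sol(1:n);
lambda = sol(n+1:end);

Solving the full KKT system avoids forming H^{−1}, which the closed-form expressions above would require.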
If the QP includes inequality constraints, then a typical solution algorithm involves the active set strategy, through which a sequence of equality constrained problems is solved:

    min_d  (1/2) (x + d)^T H (x + d) + g^T (x + d)       (6)

subject to

    a_i^T (x + d) − b_i = 0,   i ∈ W_k                   (7)

where d is a search direction. The active set W_k includes all equality constraints and the active inequality constraints at the current iteration. The active set is updated at every iteration according to a selection criterion.
The MATLAB function QUADPROG solves
general QP problems.
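For example, the QP (1)-(2) could be passed to QUADPROG as follows; the data matrices here are placeholders chosen only for illustration:

% quadprog solves  min 0.5*x'*H*x + g'*x
% subject to  C*x <= d  (inequalities) and  A*x = b  (equalities)
H = [4 1; 1 3];   g = [-1; -2];     % hypothetical data
C = [-1 0; 0 -1]; d = [0; 0];      % encodes x >= 0
A = [1 1];        b = 1;           % encodes x1 + x2 = 1
x = quadprog(H, g, C, d, A, b);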
Recall that most nonlinear constrained optimisation problems can be written as follows:

    min_x  f(x)                                          (8)

subject to

    h(x) = 0
    g(x) ≤ 0                                             (9)

where g : R^n → R^m and h : R^n → R^p.
Sequential Quadratic Programming (SQP)
This is the most widely used algorithm for non-
linearly constrained optimisation.
The Karush-Kuhn-Tucker conditions are en-
forced in an iterative manner.
An approximate Quadratic Programming sub-
problem is solved at each major iteration.
The solution to the QP problem gives a search
direction.
Using the search direction a line search is car-
ried out.
At each major iteration an approximation of
the Hessian is updated using the BFGS method.
The Quadratic Programming problem solved at each iteration of SQP is an approximation of the original problem, with linear constraints and a quadratic objective:

    min_d  f(x_k) + ∇f(x_k)^T d + (1/2) d^T ∇²L(x_k) d

subject to

    g_i(x_k) + ∇g_i(x_k)^T d ≤ 0
    h_i(x_k) + ∇h_i(x_k)^T d = 0                         (10)
where the Lagrangian function L is defined as:

    L(x) = f(x) + λ^T h(x) + μ^T g(x)                    (11)

where λ is a vector of approximate Lagrange multipliers and μ is a vector of approximate KKT multipliers.
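As a sketch of how the data for this QP sub-problem could be assembled at an iterate xk and handed to QUADPROG, suppose gradf, gfun, hfun, Jg and Jh are user-written functions returning ∇f, the constraint values and their Jacobians; these names are assumptions, not part of the lecture:

% Linearised constraints from (10):
%   Jg(xk)*d <= -g(xk)   and   Jh(xk)*d = -h(xk)
Hk = eye(length(xk));                 % current Hessian approximation
dk = quadprog(Hk, gradf(xk), ...
              Jg(xk), -gfun(xk), ...  % inequality part
              Jh(xk), -hfun(xk));     % equality part

The constant term f(x_k) in the objective does not affect the minimiser, so it is simply dropped.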
Instead of using the true Hessian of the Lagrangian, ∇²L(x) is replaced by H_k, which is the BFGS approximation of the Hessian:

    H_{k+1} = H_k + (q_k q_k^T)/(q_k^T p_k) − (H_k p_k p_k^T H_k)/(p_k^T H_k p_k)    (12)
where:

    p_k = x_{k+1} − x_k
    q_k = ∇f(x_{k+1}) + λ^T ∇h(x_{k+1}) − (∇f(x_k) + λ^T ∇h(x_k))                    (13)
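In MATLAB the update (12) is a one-liner; pk and qk are assumed to have been computed from (13):

% BFGS update of the Hessian approximation, as in (12)
Hk = Hk + (qk*qk')/(qk'*pk) - (Hk*(pk*pk')*Hk)/(pk'*Hk*pk);

In practice the update is skipped (or damped) unless qk'*pk > 0, which keeps Hk positive definite.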
The solution to the QP sub-problem produces a search direction vector d_k, which is used to form the new iterate x_{k+1}:

    x_{k+1} = x_k + α d_k                                (14)
A line search is carried out to choose the step length α such that the following penalty function (also called a merit function) is minimised:

    Φ(x) = f(x) + ρ ( Σ_{i=1}^{p} |h_i(x)| + Σ_{i=1}^{m} max{0, g_i(x)} )            (15)

where ρ is a penalty factor. This line search sub-problem helps to improve the convergence of the algorithm, and it balances the sometimes conflicting objectives of reducing the objective function and satisfying the constraints.
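A direct MATLAB transcription of (15) might look as follows; ffun, hfun and gfun are hypothetical handles for the objective and the constraint functions, and rho is a user-chosen penalty factor:

function phi = merit(x, rho, ffun, hfun, gfun)
% Penalty (merit) function from (15)
h = hfun(x);      % equality constraint values, one per row
g = gfun(x);      % inequality constraint values, one per row
phi = ffun(x) + rho*(sum(abs(h)) + sum(max(0, g)));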
SQP Algorithm:

Step 1. Given the current iterate x_k and a current approximate Hessian H_k, solve the QP subproblem to obtain the search direction d_k. Notice that the solution of the QP subproblem provides estimates of the multipliers λ and μ.

Step 2. Given d_k, solve the line search problem to minimise the merit function Φ(x) and then find the next iterate x_{k+1}.

Step 3. Update the Hessian approximation H_{k+1} using the BFGS formula.

Step 4. Check if the convergence criterion is satisfied; if not, set k = k + 1 and go back to Step 1.
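Putting the four steps together, a bare-bones MATLAB sketch of this loop might look as follows. This is only an illustration under simplifying assumptions: gradients are approximated by finite differences, the line search is plain backtracking on the merit function sketched after (15), q is formed from ∇f alone rather than the full Lagrangian gradient of (13), and gfun and hfun are assumed to each return at least one constraint value. The function names are assumptions, not the lecture's:

function x = sqp_sketch(ffun, gfun, hfun, x0)
% Minimal SQP loop: QP subproblem + line search + BFGS update
n = length(x0); x = x0(:); Hk = eye(n); rho = 10;
for k = 1:100
    gradf = fdgrad(ffun, x);
    % Step 1: solve the QP subproblem (10) for the direction d
    d = quadprog(Hk, gradf, fdjac(gfun, x), -gfun(x), ...
                 fdjac(hfun, x), -hfun(x));
    if norm(d) < 1e-6, break; end          % Step 4: convergence test
    % Step 2: backtracking line search on the merit function (15)
    alpha = 1;
    while merit(x + alpha*d, rho, ffun, hfun, gfun) > ...
          merit(x, rho, ffun, hfun, gfun) && alpha > 1e-8
        alpha = alpha/2;
    end
    xnew = x + alpha*d;
    % Step 3: BFGS update (12); skipped unless curvature is positive
    p = xnew - x; q = fdgrad(ffun, xnew) - gradf;
    if q'*p > 1e-10
        Hk = Hk + (q*q')/(q'*p) - (Hk*(p*p')*Hk)/(p'*Hk*p);
    end
    x = xnew;
end
end

function g = fdgrad(f, x)
% Forward-difference gradient approximation
e = 1e-7; g = zeros(length(x), 1);
for i = 1:length(x)
    xe = x; xe(i) = xe(i) + e;
    g(i) = (f(xe) - f(x)) / e;
end
end

function J = fdjac(f, x)
% Forward-difference Jacobian, one row per constraint
e = 1e-7; f0 = f(x); J = zeros(length(f0), length(x));
for i = 1:length(x)
    xe = x; xe(i) = xe(i) + e;
    J(:, i) = (f(xe) - f0) / e;
end
end

A real implementation would also update the multiplier estimates at each iteration and use the Lagrangian gradient in q, as in (13).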
Example

    min_x  f(x) = e^{x_1} (4 x_1^2 + 2 x_2^2 + 4 x_1 x_2 + 2 x_2 + 1)

subject to

    x_1 x_2 − x_1 − x_2 + 1.5 ≤ 0
    −x_1 x_2 − 10 ≤ 0
MATLAB code to compute the objective function:

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 ...
    + 4*x(1)*x(2) + 2*x(2) + 1);

MATLAB code to compute the constraints:

function [c, ceq] = confun(x)
c = [x(1)*x(2) - x(1) - x(2) + 1.5;
     -x(1)*x(2) - 10];
ceq = [];
MATLAB code to invoke a constrained optimisation routine and hence solve the problem:

x0 = [-1, 1];    % Initial guess
options = optimset('LargeScale', 'off', 'Display', 'iter');
[x, fval] = fmincon(@objfun, x0, [], [], ...
    [], [], [], [], @confun, options);

The minimiser is x* = [−9.5474, 1.0474]^T, such that f(x*) = 0.0236. Both constraints are active at the solution.
The following table shows the intermediate results when the SQP algorithm is executed:

Iter   F-count        f(x)   Max constraint   Directional derivative
   1         3      1.8394          0.5                0.0486
   2         7     1.85127     -0.09197               -0.556
   3        11    0.300167         9.33                0.17
   4        15    0.529834       0.9209               -0.965
   5        20    0.186965       -1.517               -0.168
   6        24   0.0729085       0.3313              -0.0518
   7        28   0.0353322     -0.03303              -0.0142
   8        32   0.0235566     0.003184            -6.22e-006
   9        36   0.0235504   9.032e-008             1.76e-010