Equality constrained minimization
- eliminating equality constraints
- Newton's method with equality constraints
- infeasible start Newton method
- implementation
equality constrained quadratic minimization (with P ∈ Sⁿ₊)

    minimize   (1/2) xᵀPx + qᵀx + r
    subject to Ax = b

optimality condition:

    [ P  Aᵀ ] [ x* ]   [ −q ]
    [ A  0  ] [ ν* ] = [  b ]

- coefficient matrix is called KKT matrix
- KKT matrix is nonsingular if and only if  Ax = 0, x ≠ 0  ⟹  xᵀPx > 0
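The optimality condition above is one linear system in (x*, ν*). A minimal sketch of solving it with numpy, using made-up example data (P, q, A, b below are not from the slides):

```python
import numpy as np

# Minimize (1/2) x^T P x + q^T x + r subject to A x = b by solving the
# KKT system  [P A^T; A 0] [x*; nu*] = [-q; b].
P = np.array([[2.0, 0.5], [0.5, 1.0]])   # P positive definite, so KKT nonsingular
q = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])               # one equality constraint
b = np.array([1.0])

n, p = P.shape[0], A.shape[0]
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
sol = np.linalg.solve(KKT, np.concatenate([-q, b]))
x_star, nu_star = sol[:n], sol[n:]

# Verify the optimality conditions: P x* + q + A^T nu* = 0 and A x* = b.
assert np.allclose(P @ x_star + q + A.T @ nu_star, 0)
assert np.allclose(A @ x_star, b)
```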
example: optimal allocation with resource constraint

    minimize   f1(x1) + f2(x2) + · · · + fn(xn)
    subject to x1 + x2 + · · · + xn = b

reduced problem:

    minimize f1(x1) + · · · + fn−1(xn−1) + fn(b − x1 − · · · − xn−1)

(variables x1, . . . , xn−1)
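A sketch of the elimination, for the simple (made-up) choice fi(x) = (1/2) ci x², where the reduced problem's optimality condition ∇g(z) = 0 becomes a linear system in z = (x1, . . . , xn−1):

```python
import numpy as np

# Eliminate x_n = b - x_1 - ... - x_{n-1}; for f_i(x) = (1/2) c_i x^2 the
# reduced gradient condition is  c_i z_i - c_n (b - sum(z)) = 0.
c = np.array([1.0, 2.0, 4.0])   # example curvatures c_i > 0 (made up)
b = 1.0
n = len(c)

H = np.diag(c[:-1]) + c[-1] * np.ones((n - 1, n - 1))  # Hessian of reduced problem
z = np.linalg.solve(H, c[-1] * b * np.ones(n - 1))
x = np.append(z, b - z.sum())   # recover the eliminated variable

# Check against the closed-form solution x_i = (b / sum(1/c_j)) / c_i
# obtained from the optimality conditions of the original problem.
x_exact = (b / np.sum(1.0 / c)) / c
assert np.allclose(x, x_exact)
assert np.isclose(x.sum(), b)   # feasible by construction
```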
Newton step

Newton step Δx_nt of f at feasible x is given by solution v of

    [ ∇²f(x)  Aᵀ ] [ v ]   [ −∇f(x) ]
    [   A      0 ] [ w ] = [    0   ]

interpretations
- Δx_nt solves second order approximation (with variable v)

      minimize   f̂(x + v) = f(x) + ∇f(x)ᵀv + (1/2)vᵀ∇²f(x)v
      subject to A(x + v) = b

- Δx_nt equations follow from linearizing optimality conditions

      ∇f(x + v) + Aᵀw ≈ ∇f(x) + ∇²f(x)v + Aᵀw = 0,    A(x + v) = b
Newton decrement

    λ(x) = (Δx_ntᵀ ∇²f(x) Δx_nt)^{1/2} = (−∇f(x)ᵀ Δx_nt)^{1/2}

properties
- gives an estimate of f(x) − p* using quadratic approximation f̂:

      f(x) − inf_{Ay=b} f̂(y) = λ(x)²/2

- directional derivative in Newton direction:

      (d/dt) f(x + tΔx_nt) |_{t=0} = −λ(x)²
Newton's method with equality constraints

given starting point x ∈ dom f with Ax = b, tolerance ε > 0.
repeat
1. Compute the Newton step and decrement Δx_nt, λ(x).
2. Stopping criterion. quit if λ²/2 ≤ ε.
3. Line search. Choose step size t by backtracking line search.
4. Update. x := x + tΔx_nt.

- a feasible descent method: x^(k) feasible and f(x^(k+1)) < f(x^(k))
- affine invariant
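A minimal sketch of the algorithm above, applied to a small equality constrained analytic centering instance (the function newton_eq, the problem data, and all tolerances below are made-up illustration choices):

```python
import numpy as np

# Newton's method with equality constraints for
#   minimize -sum(log x_i)  subject to  Ax = b,
# so grad f = -1/x and Hess f = diag(x)^{-2}.
def newton_eq(A, b, x, eps=1e-10, alpha=0.1, beta=0.5, max_iter=50):
    p, n = A.shape
    f = lambda z: -np.sum(np.log(z))
    for _ in range(max_iter):
        grad = -1.0 / x
        hess = np.diag(1.0 / x**2)
        # 1. Newton step from KKT system [H A^T; A 0][dx; w] = [-grad; 0]
        KKT = np.block([[hess, A.T], [A, np.zeros((p, p))]])
        sol = np.linalg.solve(KKT, np.concatenate([-grad, np.zeros(p)]))
        dx = sol[:n]
        lam2 = dx @ hess @ dx            # Newton decrement squared
        # 2. stopping criterion
        if lam2 / 2 <= eps:
            break
        # 3. backtracking line search (keep x + t dx > 0 so f stays finite)
        t = 1.0
        while np.any(x + t * dx <= 0):
            t *= beta
        while f(x + t * dx) > f(x) + alpha * t * grad @ dx:
            t *= beta
        # 4. update (stays feasible since A dx = 0)
        x = x + t * dx
    return x

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([3.0])
x0 = np.array([0.5, 0.5, 2.0])           # feasible starting point: A x0 = b
x = newton_eq(A, b, x0)
assert np.allclose(x, [1.0, 1.0, 1.0], atol=1e-6)   # analytic center
```

Each iterate stays feasible because the Newton step satisfies AΔx_nt = 0, matching the "feasible descent method" property above.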
Infeasible start Newton method

primal-dual residual: r(x, ν) = (∇f(x) + Aᵀν, Ax − b)

given starting point x ∈ dom f, ν, tolerance ε > 0, α ∈ (0, 1/2), β ∈ (0, 1).
repeat
1. Compute primal and dual Newton steps Δx_nt, Δν_nt.
2. Backtracking line search on ‖r‖₂:
   t := 1; while ‖r(x + tΔx_nt, ν + tΔν_nt)‖₂ > (1 − αt) ‖r(x, ν)‖₂, t := βt.
3. Update. x := x + tΔx_nt, ν := ν + tΔν_nt.
until Ax = b and ‖r(x, ν)‖₂ ≤ ε.
- not a descent method: f(x^(k+1)) > f(x^(k)) is possible
- directional derivative of ‖r(y)‖₂ in the Newton direction Δy = (Δx_nt, Δν_nt):

      (d/dt) ‖r(y + tΔy)‖₂ |_{t=0} = −‖r(y)‖₂
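A sketch of the infeasible start method for the same made-up analytic centering instance, f(x) = −Σ log xᵢ (the helper name and all parameter values are illustration choices, not from the slides). Both steps come from one KKT solve with right-hand side −r, and the line search is on ‖r‖₂:

```python
import numpy as np

# Infeasible start Newton method for  minimize -sum(log x)  s.t.  Ax = b.
# Residual r(x, nu) = (grad f(x) + A^T nu, Ax - b) with grad f = -1/x.
def infeasible_newton(A, b, x, nu, eps=1e-10, alpha=0.1, beta=0.5, max_iter=50):
    p, n = A.shape
    r = lambda x, nu: np.concatenate([-1.0 / x + A.T @ nu, A @ x - b])
    for _ in range(max_iter):
        hess = np.diag(1.0 / x**2)
        KKT = np.block([[hess, A.T], [A, np.zeros((p, p))]])
        sol = np.linalg.solve(KKT, -r(x, nu))
        dx, dnu = sol[:n], sol[n:]
        # backtracking line search on ||r||_2 (first keep x in dom f)
        t = 1.0
        while np.any(x + t * dx <= 0):
            t *= beta
        while np.linalg.norm(r(x + t * dx, nu + t * dnu)) > \
                (1 - alpha * t) * np.linalg.norm(r(x, nu)):
            t *= beta
        x, nu = x + t * dx, nu + t * dnu
        if np.allclose(A @ x, b) and np.linalg.norm(r(x, nu)) <= eps:
            break
    return x, nu

A = np.array([[1.0, 1.0, 1.0]]); b = np.array([3.0])
x0 = np.array([1.0, 2.0, 5.0])           # infeasible: A x0 != b
x, nu = infeasible_newton(A, b, x0, np.zeros(1))
assert np.allclose(x, [1.0, 1.0, 1.0], atol=1e-6)
```

Once a full step t = 1 is accepted, Ax = b holds exactly and the method coincides with the feasible Newton method.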
example: equality constrained analytic centering

    minimize   −Σ_{i=1}^n log xᵢ
    subject to Ax = b

(dual problem: maximize −bᵀν + Σ_{i=1}^n log(Aᵀν)ᵢ + n)

three methods for an example with A ∈ R^{100×500}, different starting points
1. Newton method with equality constraints (requires x^(0) ≻ 0, Ax^(0) = b)
2. Newton method applied to the dual problem
3. infeasible start Newton method

[figure: convergence plots of f(x^(k)) − p*, p* − g(ν^(k)), and ‖r(x^(k), ν^(k))‖₂ versus iteration number for the three methods]
complexity per iteration of three methods is identical

1. use block elimination to solve KKT system

       [ diag(x)^{−2}  Aᵀ ] [ Δx ]   [ diag(x)^{−1} 1 ]
       [      A         0 ] [  w ] = [        0       ]

   reduces to solving A diag(x)² Aᵀ w = b

2. solve Newton system

       A diag(Aᵀν)^{−2} Aᵀ Δν = −b + A diag(Aᵀν)^{−1} 1

3. use block elimination to solve KKT system

       [ diag(x)^{−2}  Aᵀ ] [ Δx ]   [ diag(x)^{−1} 1 ]
       [      A         0 ] [  w ] = [     b − Ax     ]

   reduces to solving A diag(x)² Aᵀ w = 2Ax − b

conclusion: in each case, solve ADAᵀw = h with D positive diagonal
Equality constrained minimization 11-15
Network flow optimization

    minimize   Σ_{i=1}^n φᵢ(xᵢ)
    subject to Ax = b

- directed graph with n arcs, p + 1 nodes
- xᵢ: flow through arc i; φᵢ: cost flow function for arc i (with φᵢ''(x) > 0)
- node-incidence matrix Ã ∈ R^{(p+1)×n} defined as

      Ãᵢⱼ = 1 if arc j leaves node i; −1 if arc j enters node i; 0 otherwise

- reduced node-incidence matrix A ∈ R^{p×n} is Ã with last row removed
- b ∈ Rᵖ is (reduced) source vector
- rank A = p if graph is connected
KKT system (with H = diag(φ₁''(x₁), . . . , φₙ''(xₙ)) diagonal)

    [ H  Aᵀ ] [ v ]   [ g ]
    [ A   0 ] [ w ] = [ h ]

- sparsity pattern of coefficient matrix is given by graph connectivity:

      (AH^{−1}Aᵀ)ᵢⱼ ≠ 0  ⟺  (AAᵀ)ᵢⱼ ≠ 0  ⟺  nodes i and j are connected by an arc
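The sparsity claim can be checked on a tiny made-up chain graph 1 → 2 → 3 → 4 (the arc list and the values in H below are illustration choices):

```python
import numpy as np

# Node-incidence matrix: entry (i, j) is +1 if arc j leaves node i,
# -1 if it enters node i, 0 otherwise.
arcs = [(0, 1), (1, 2), (2, 3)]          # p + 1 = 4 nodes, n = 3 arcs
A_full = np.zeros((4, len(arcs)))
for j, (u, v) in enumerate(arcs):
    A_full[u, j] = 1.0
    A_full[v, j] = -1.0
A = A_full[:-1, :]                       # reduced incidence matrix, p x n

H = np.diag([1.0, 2.0, 3.0])             # H = diag(phi_i''(x_i)) > 0
M = A @ np.linalg.inv(H) @ A.T

# (A H^{-1} A^T)_ij != 0 exactly when nodes i and j share an arc:
assert M[0, 1] != 0 and M[1, 2] != 0     # adjacent node pairs
assert M[0, 2] == 0 and M[2, 0] == 0     # nodes 1 and 3: no arc between them
```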
example: analytic centering with equality constraints (matrix variable X ∈ Sⁿ)

    minimize   −log det X
    subject to tr(AᵢX) = bᵢ,   i = 1, . . . , p

optimality conditions:

    X* ≻ 0,    −(X*)^{−1} + Σ_{j=1}^p ν*ⱼ Aⱼ = 0,    tr(AᵢX*) = bᵢ,   i = 1, . . . , p

Newton equation at feasible X (variables ΔX, w):

    X^{−1} ΔX X^{−1} + Σ_{j=1}^p wⱼ Aⱼ = X^{−1},    tr(AᵢΔX) = 0,   i = 1, . . . , p

(follows from linearizing the optimality conditions, using (X + ΔX)^{−1} ≈ X^{−1} − X^{−1} ΔX X^{−1})
solution by block elimination
- eliminate ΔX from the first equation: ΔX = X − Σ_{j=1}^p wⱼ X Aⱼ X
- substitute ΔX in the second equation:

      Σ_{j=1}^p tr(X Aᵢ X Aⱼ) wⱼ = bᵢ,   i = 1, . . . , p        (2)

  a dense positive definite set of linear equations with variable w ∈ Rᵖ

flop count (dominant terms) using Cholesky factorization X = LLᵀ
- form p products LᵀAⱼL: (3/2)pn³
- form p(p + 1)/2 inner products tr((LᵀAᵢL)(LᵀAⱼL)): (1/2)p²n²
- solve (2) via Cholesky factorization: (1/3)p³
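The block elimination above can be verified numerically on small made-up data (the Aᵢ, X, and dimensions below are illustration choices; bᵢ is set to tr(AᵢX) so X is feasible by construction):

```python
import numpy as np

# Solve  sum_j tr(X A_i X A_j) w_j = b_i  for w, recover
# Delta X = X - sum_j w_j X A_j X, and check the Newton equation.
rng = np.random.default_rng(1)
n_, p_ = 4, 2
A_list = [np.eye(n_), np.diag([1.0, 2.0, 3.0, 4.0])]   # symmetric A_i
M = rng.standard_normal((n_, n_))
X = M @ M.T + n_ * np.eye(n_)            # a positive definite X

G = np.array([[np.trace(X @ Ai @ X @ Aj) for Aj in A_list] for Ai in A_list])
b_vec = np.array([np.trace(Ai @ X) for Ai in A_list])   # b_i = tr(A_i X)
w = np.linalg.solve(G, b_vec)            # the dense p x p system (2)
dX = X - sum(wj * X @ Aj @ X for wj, Aj in zip(w, A_list))

# Newton equation: X^{-1} dX X^{-1} + sum_j w_j A_j = X^{-1} ...
Xinv = np.linalg.inv(X)
lhs = Xinv @ dX @ Xinv + sum(wj * Aj for wj, Aj in zip(w, A_list))
assert np.allclose(lhs, Xinv)
# ... and tr(A_i dX) = 0 for each i.
assert np.allclose([np.trace(Ai @ dX) for Ai in A_list], 0)
```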