
# 15.093 Optimization Methods

Lecture 8: Robust Optimization

Papers

Slide 1

Bertsimas and Sim, The Price of Robustness, Operations Research, 2003.
Bertsimas and Sim, Robust Discrete Optimization, Mathematical Programming, 2003.

Structure

Slide 2

Motivation
Data Uncertainty
Robust Mixed Integer Optimization
Robust 0-1 Optimization

Motivation

Slide 3

The classical paradigm in optimization is to develop a model that assumes
that the input data is precisely known and equal to some nominal values.

This approach, however, does not take into account the influence of data
uncertainty on the quality and feasibility of the model. Can we design
solution approaches that are immune to data uncertainty, that is, they
are robust?

Slide 4
Ben-Tal and Nemirovski (2000):
In real-world applications of Linear Optimization (NetLib library),
one cannot ignore the possibility that a small uncertainty in the data
can make the usual optimal solution completely meaningless from a
practical viewpoint.

3.1

Literature

Slide 5

Ben-Tal and Nemirovski (1997), El-Ghaoui et al. (1996):

Nonlinear convex models
Not extendable to discrete optimization

Goal

Slide 6

Develop an approach to address data uncertainty for optimization problems
that:

allows us to control the degree of conservatism of the solution;
is computationally tractable both practically and theoretically.

Data Uncertainty

Slide 7

$$
\begin{array}{ll}
\text{minimize} & \mathbf{c}'\mathbf{x} \\
\text{subject to} & A\mathbf{x} \le \mathbf{b} \\
& \mathbf{l} \le \mathbf{x} \le \mathbf{u} \\
& x_i \in \mathbb{Z}, \quad i = 1, \dots, k,
\end{array}
$$

WLOG data uncertainty affects only $A$ and $\mathbf{c}$, but not the vector $\mathbf{b}$.

Slide 8

(Model of data uncertainty $U$): each entry $a_{ij}$, $j \in J_i$, is modeled as an
independent, symmetric and bounded random variable $\tilde{a}_{ij}$ that takes
values in $[a_{ij} - \hat{a}_{ij},\ a_{ij} + \hat{a}_{ij}]$.

Robust MIP

Slide 9

Consider an integer $\Gamma_i \in [0, |J_i|]$, $i = 0, 1, \dots, m$.

$\Gamma_i$ adjusts the robustness of the proposed method against the level of
conservatism of the solution.

We want to be protected against all cases in which up to $\Gamma_i$ of the $a_{ij}$'s
are allowed to change.

Slide 10
Nature will be restricted in its behavior, in that only a subset of the
coefficients will change in order to adversely affect the solution.

We will guarantee that if nature behaves like this, then the robust
solution will be feasible deterministically; even if more coefficients
change, the robust solution will be feasible with very high probability.

6.1

Problem

Slide 11

$$
\begin{array}{ll}
\text{minimize} & \mathbf{c}'\mathbf{x} + \displaystyle\max_{\{S_0 \,\mid\, S_0 \subseteq J_0,\ |S_0| \le \Gamma_0\}} \sum_{j \in S_0} d_j |x_j| \\
\text{subject to} & \displaystyle\sum_j a_{ij} x_j + \max_{\{S_i \,\mid\, S_i \subseteq J_i,\ |S_i| \le \Gamma_i\}} \sum_{j \in S_i} \hat{a}_{ij} |x_j| \le b_i, \quad \forall i \\
& \mathbf{l} \le \mathbf{x} \le \mathbf{u} \\
& x_i \in \mathbb{Z}, \quad i = 1, \dots, k.
\end{array}
$$

6.2

Theorem 1

Slide 12

The robust problem can be reformulated as an equivalent MIP:

$$
\begin{array}{lll}
\text{minimize} & \mathbf{c}'\mathbf{x} + z_0 \Gamma_0 + \displaystyle\sum_{j \in J_0} p_{0j} & \\
\text{subject to} & \displaystyle\sum_j a_{ij} x_j + z_i \Gamma_i + \sum_{j \in J_i} p_{ij} \le b_i & \forall i \\
& z_0 + p_{0j} \ge d_j y_j & \forall j \in J_0 \\
& z_i + p_{ij} \ge \hat{a}_{ij} y_j & \forall i \ne 0,\ j \in J_i \\
& p_{ij}, y_j, z_i \ge 0 & \forall i,\ j \in J_i \\
& -y_j \le x_j \le y_j & \forall j \\
& l_j \le x_j \le u_j & \forall j \\
& x_i \in \mathbb{Z} & i = 1, \dots, k.
\end{array}
$$

6.3

Proof

Slide 13

$$
\beta_i(\mathbf{x}^*) = \max_{\{S_i \,\mid\, S_i \subseteq J_i,\ |S_i| \le \Gamma_i\}} \sum_{j \in S_i} \hat{a}_{ij} |x_j^*|
$$

equals the linear optimization problem

$$
\begin{array}{lll}
\beta_i(\mathbf{x}^*) = \max & \displaystyle\sum_{j \in J_i} \hat{a}_{ij} |x_j^*| \, z_{ij} & \\
\text{s.t.} & \displaystyle\sum_{j \in J_i} z_{ij} \le \Gamma_i & \\
& 0 \le z_{ij} \le 1 & \forall i,\ j \in J_i.
\end{array}
$$

Dual:

$$
\begin{array}{lll}
\beta_i(\mathbf{x}^*) = \min & \displaystyle\sum_{j \in J_i} p_{ij} + \Gamma_i z_i & \\
\text{s.t.} & z_i + p_{ij} \ge \hat{a}_{ij} |x_j^*| & \forall j \in J_i \\
& p_{ij} \ge 0 & \forall j \in J_i \\
& z_i \ge 0 & \forall i.
\end{array}
$$
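Because the inner linear program above has an integral optimal solution, the protection function $\beta_i(\mathbf{x}^*)$ can be evaluated without solving an LP: just take the $\Gamma_i$ largest contributions $\hat{a}_{ij}|x_j^*|$. A minimal sketch (the function name is ours):

```python
def protection(x, a_hat, gamma):
    """Evaluate beta_i(x) = max over S_i with |S_i| <= Gamma_i of
    sum_{j in S_i} a_hat_ij * |x_j|: since the LP relaxation has an
    integral optimum, take the gamma largest contributions."""
    contributions = sorted((ah * abs(xj) for ah, xj in zip(a_hat, x)),
                           reverse=True)
    return sum(contributions[:gamma])

# Deviations a_hat = [4, 1, 3] and solution x = [1, -2, 1] give
# contributions [4, 2, 3]; with Gamma = 2 nature picks 4 and 3.
print(protection([1, -2, 1], [4, 1, 3], 2))  # -> 7
```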

Slide 14

| $|J_i|$ | $\Gamma_i$ |
|---|---|
| 5 | 5 |
| 10 | 8.3565 |
| 100 | 24.263 |
| 200 | 33.899 |

Table 1: Choice of $\Gamma_i$ as a function of $|J_i|$ so that the probability of
constraint violation is less than 1%.

6.4

Size

Slide 15

Original problem has $n$ variables and $m$ constraints.

Robust counterpart has $2n + m + l$ variables, where $l = \sum_{i=0}^{m} |J_i|$ is the
number of uncertain coefficients, and $2n + m + l$ constraints.

6.5 Probabilistic Guarantee

6.5.1 Theorem 2

Slide 16

Let $\mathbf{x}^*$ be an optimal solution of the robust MIP.

(a) If $A$ is subject to the model of data uncertainty $U$:

$$
\Pr\left( \sum_j \tilde{a}_{ij} x_j^* > b_i \right) \le \frac{1}{2^n} \left[ (1-\mu) \sum_{l=\lfloor \nu \rfloor}^{n} \binom{n}{l} + \mu \sum_{l=\lfloor \nu \rfloor + 1}^{n} \binom{n}{l} \right],
$$

where $n = |J_i|$, $\nu = \frac{\Gamma_i + n}{2}$ and $\mu = \nu - \lfloor \nu \rfloor$; the bound is tight.

(b) As $n \to \infty$,

$$
\frac{1}{2^n} \left[ (1-\mu) \sum_{l=\lfloor \nu \rfloor}^{n} \binom{n}{l} + \mu \sum_{l=\lfloor \nu \rfloor + 1}^{n} \binom{n}{l} \right] \approx 1 - \Phi\left( \frac{\Gamma_i - 1}{\sqrt{n}} \right).
$$
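The bound in part (a) is straightforward to evaluate numerically. A sketch (the helper name is ours) that reproduces the choices in Table 1, e.g. $|J_i| = 10$ with $\Gamma_i = 8.3565$ gives a bound just under 1%:

```python
from math import comb, floor

def violation_bound(n, gamma):
    """Bound of Theorem 2(a), with nu = (Gamma_i + n)/2 and
    mu = nu - floor(nu)."""
    nu = (gamma + n) / 2
    k = floor(nu)
    mu = nu - k
    total = ((1 - mu) * sum(comb(n, l) for l in range(k, n + 1))
             + mu * sum(comb(n, l) for l in range(k + 1, n + 1)))
    return total / 2 ** n

print(violation_bound(10, 8.3565))  # -> about 0.009
```

Larger $\Gamma_i$ tightens the bound, at the cost of a more conservative solution.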

Slide 17
Slide 18

7 Experimental Results

7.1 Knapsack Problems

Slide 19

$$
\begin{array}{ll}
\text{maximize} & \displaystyle\sum_{i \in N} c_i x_i \\
\text{subject to} & \displaystyle\sum_{i \in N} w_i x_i \le b \\
& \mathbf{x} \in \{0,1\}^n.
\end{array}
$$
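For the knapsack problem the uncertainty sits in the weights, so robustness is a feasibility question: a 0-1 solution is robust for a given $\Gamma$ if the nominal load plus the $\Gamma$ largest weight deviations among the chosen items still fits in $b$. A minimal check, consistent with the robust constraint above (the helper name is ours):

```python
def robust_feasible(x, w, delta, b, gamma):
    """Check sum(w_i x_i) + (sum of the gamma largest delta_i among
    the chosen items) <= b."""
    load = sum(wi * xi for wi, xi in zip(w, x))
    devs = sorted((di for di, xi in zip(delta, x) if xi == 1),
                  reverse=True)
    return load + sum(devs[:gamma]) <= b

# Two items of weight 10 with deviation 2 each, capacity 22:
# nominally feasible (20 <= 22), robust for Gamma = 1 (22 <= 22),
# not robust for Gamma = 2 (24 > 22).
print(robust_feasible([1, 1], [10, 10], [2, 2], 22, 1))  # -> True
print(robust_feasible([1, 1], [10, 10], [2, 2], 22, 2))  # -> False
```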

(Figure: approximate bound vs. Bound 2 as a function of $\Gamma_i$, log scale.)

| $\Gamma_i$ | Violation Probability | Optimal Value | Reduction |
|---|---|---|---|
| 0 | 0.5 | 5592 | 0% |
| 2.8 | $4.49 \times 10^{-1}$ | 5585 | 0.13% |
| 36.8 | $5.71 \times 10^{-3}$ | 5506 | 1.54% |
| 82.0 | $5.04 \times 10^{-9}$ | 5408 | 3.29% |
| 200 | 0 | 5283 | 5.50% |

$\tilde{w}_i$ are independently distributed and follow symmetric distributions in
$[w_i - \delta_i,\ w_i + \delta_i]$;

$\mathbf{c}$ is not subject to data uncertainty.

7.1.1

Data

Slide 20

$|N| = 200$, $b = 4000$;
$w_i$ randomly chosen from $\{20, 21, \dots, 29\}$;
$c_i$ randomly chosen from $\{16, 17, \dots, 77\}$;
$\delta_i = 0.1 w_i$.
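A sketch of this instance generation (the function name and the seeding are our own additions):

```python
import random

def knapsack_instance(n=200, b=4000, seed=0):
    """Slide 20 data: w_i uniform on {20,...,29}, c_i uniform on
    {16,...,77}, delta_i = 0.1 * w_i."""
    rng = random.Random(seed)
    w = [rng.randint(20, 29) for _ in range(n)]
    c = [rng.randint(16, 77) for _ in range(n)]
    delta = [0.1 * wi for wi in w]
    return c, w, delta, b
```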

7.1.2

Results

Slide 21

Slide 22

8 Robust 0-1 Optimization

Nominal combinatorial optimization:

$$
\begin{array}{ll}
\text{minimize} & \mathbf{c}'\mathbf{x} \\
\text{subject to} & \mathbf{x} \in X \subseteq \{0,1\}^n.
\end{array}
$$

Robust counterpart:

$$
\begin{array}{ll}
Z^* = \text{minimize} & \mathbf{c}'\mathbf{x} + \displaystyle\max_{\{S \,\mid\, S \subseteq J,\ |S| = \Gamma\}} \sum_{j \in S} d_j x_j \\
\text{subject to} & \mathbf{x} \in X,
\end{array}
$$

WLOG $d_1 \ge d_2 \ge \dots \ge d_n$.

8.1

Remarks

Slide 23

Examples: the shortest path, the minimum spanning tree, the minimum
assignment, the traveling salesman, the vehicle routing and matroid
intersection problems.

Other approaches to robustness are hard. Scenario-based uncertainty:

$$
\begin{array}{ll}
\text{minimize} & \max(\mathbf{c}_1'\mathbf{x},\ \mathbf{c}_2'\mathbf{x}) \\
\text{subject to} & \mathbf{x} \in X
\end{array}
$$

is NP-hard for the shortest path problem.

8.2

Approach

Slide 24

$$
Z^* = \min_{\mathbf{x} \in X}\ \mathbf{c}'\mathbf{x} + \max\ \sum_j d_j x_j u_j
\quad \text{s.t.}\ 0 \le u_j \le 1,\ \sum_j u_j \le \Gamma
$$

Dual:

$$
\begin{array}{ll}
Z^* = \displaystyle\min_{\mathbf{x} \in X}\ \mathbf{c}'\mathbf{x} + \min & \displaystyle\sum_j y_j + \Gamma \theta \\
\text{s.t.} & y_j + \theta \ge d_j x_j, \quad \forall j \\
& y_j, \theta \ge 0.
\end{array}
$$

8.3

Algorithm A

Slide 25

Solution: $y_j = \max(d_j x_j - \theta, 0)$

$$
Z^* = \min_{\mathbf{x} \in X,\ \theta \ge 0}\ \mathbf{c}'\mathbf{x} + \Gamma \theta + \sum_j \max(d_j x_j - \theta, 0)
$$

Since $X \subseteq \{0,1\}^n$,

$$
\max(d_j x_j - \theta, 0) = \max(d_j - \theta, 0)\, x_j
$$

$$
Z^* = \min_{\mathbf{x} \in X,\ \theta \ge 0}\ \Gamma \theta + \sum_j \left( c_j + \max(d_j - \theta, 0) \right) x_j
$$

Slide 26

$d_1 \ge d_2 \ge \dots \ge d_n \ge d_{n+1} = 0$. For $d_l \ge \theta \ge d_{l+1}$,

$$
\min_{\mathbf{x} \in X,\ d_l \ge \theta \ge d_{l+1}}\ \sum_{j=1}^{n} c_j x_j + \Gamma \theta + \sum_{j=1}^{l} (d_j - \theta) x_j
= \Gamma d_l + \min_{\mathbf{x} \in X}\ \sum_{j=1}^{n} c_j x_j + \sum_{j=1}^{l} (d_j - d_l) x_j = Z^l
$$

$$
Z^* = \min_{l=1,\dots,n+1} Z^l = \min_{l=1,\dots,n+1} \left( \Gamma d_l + \min_{\mathbf{x} \in X}\ \sum_{j=1}^{n} c_j x_j + \sum_{j=1}^{l} (d_j - d_l) x_j \right).
$$

8.4

Theorem 3

Slide 27

Algorithm A correctly solves the robust 0-1 optimization problem.

It requires at most $|J| + 1$ solutions of nominal problems. Thus, if the
nominal problem is polynomially solvable, then so is its robust counterpart;
in particular, the robust versions of the minimum spanning tree, matching,
shortest path and matroid intersection problems are polynomially solvable.
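Algorithm A can be sketched as a loop over the candidate values $\theta \in \{d_1, \dots, d_n, 0\}$, each requiring one nominal solve. Below, a user-supplied nominal solver encodes the feasible set $X$; the selection set $X = \{ \mathbf{x} \in \{0,1\}^n : \sum_i x_i = k \}$ of the robust sorting example is used for illustration (both helper names are ours):

```python
def algorithm_a(c, d, gamma, solve_nominal):
    """Robust 0-1 optimization via at most n+1 nominal solves:
    Z* = min over theta in {d_1,...,d_n,0} of
    Gamma*theta + min_{x in X} sum_j (c_j + max(d_j - theta, 0)) x_j."""
    n = len(c)
    thetas = sorted(d, reverse=True) + [0.0]  # candidate breakpoints
    best_val, best_x = float("inf"), None
    for theta in thetas:
        costs = [c[j] + max(d[j] - theta, 0.0) for j in range(n)]
        x = solve_nominal(costs)
        val = gamma * theta + sum(costs[j] * x[j] for j in range(n))
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

def make_selection_solver(k):
    """Nominal solver for X = {x in {0,1}^n : sum x_i = k}:
    pick the k cheapest items."""
    def solve(costs):
        chosen = sorted(range(len(costs)), key=costs.__getitem__)[:k]
        x = [0] * len(costs)
        for j in chosen:
            x[j] = 1
        return x
    return solve

# Small instance: choose k = 2 of 4 items; with Gamma = 1 nature may
# inflate the single largest d_j among the chosen items.
val, x = algorithm_a([3, 1, 4, 2], [2, 5, 1, 3], 1, make_selection_solver(2))
print(val)  # -> 8.0
```

On this instance, brute-force enumeration of all 2-subsets confirms that 8 is the robust optimum (items 1 and 3 cost $1 + 2$ plus a worst-case deviation of 5).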

9 Experimental Results

9.1 Robust Sorting

Slide 28
$$
\begin{array}{ll}
\text{minimize} & \displaystyle\sum_{i \in N} c_i x_i \\
\text{subject to} & \displaystyle\sum_{i \in N} x_i = k \\
& \mathbf{x} \in \{0,1\}^n.
\end{array}
$$

Robust counterpart:

$$
\begin{array}{ll}
Z(\Gamma) = \text{minimize} & \mathbf{c}'\mathbf{x} + \displaystyle\max_{\{S \,\mid\, S \subseteq J,\ |S| = \Gamma\}} \sum_{j \in S} d_j x_j \\
\text{subject to} & \displaystyle\sum_{i \in N} x_i = k \\
& \mathbf{x} \in \{0,1\}^n.
\end{array}
$$

| $\Gamma$ | $Z(\Gamma)$ | % change in $Z(\Gamma)$ | $\sigma(\Gamma)$ | % change in $\sigma(\Gamma)$ |
|---|---|---|---|---|
| 0 | 8822 | 0% | 501.0 | 0.0% |
| 10 | 8827 | 0.056% | 493.1 | -1.6% |
| 20 | 8923 | 1.145% | 471.9 | -5.8% |
| 30 | 9059 | 2.686% | 454.3 | -9.3% |
| 40 | 9627 | 9.125% | 396.3 | -20.9% |
| 50 | 10049 | 13.91% | 371.6 | -25.8% |
| 60 | 10146 | 15.00% | 365.7 | -27.0% |
| 70 | 10355 | 17.38% | 352.9 | -29.6% |
| 80 | 10619 | 20.37% | 342.5 | -31.6% |
| 100 | 10619 | 20.37% | 340.1 | -32.1% |

9.1.1

Data

Slide 29

$|N| = 200$; $k = 100$;
$c_j \sim U[50, 200]$; $d_j \sim U[20, 200]$;
For testing robustness, generate instances such that each cost component
independently deviates with probability $\rho = 0.2$ from the nominal value
$c_j$ to $c_j + d_j$.
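The robustness test above can be simulated directly; a minimal sketch (the helper name and RNG handling are ours):

```python
import random

def realized_cost(x, c, d, rho=0.2, rng=None):
    """One realization of the experiment: each cost component
    independently jumps from c_j to c_j + d_j with probability rho;
    return the realized cost of the 0-1 solution x."""
    rng = rng or random.Random()
    return sum((c[j] + (d[j] if rng.random() < rho else 0)) * x[j]
               for j in range(len(c)))

# At the extremes the realization is deterministic:
print(realized_cost([1, 1], [10, 20], [5, 5], rho=0.0))  # -> 30
print(realized_cost([1, 1], [10, 20], [5, 5], rho=1.0))  # -> 40
```

Repeating this over many samples estimates the mean and standard deviation $\sigma(\Gamma)$ of the cost of a given robust solution.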

9.1.2

Results

Slide 30

MIT OpenCourseWare
http://ocw.mit.edu

Fall 2009