
RAMS

Reliability, Availability,
Maintainability, and Safety

This is the Title of my Thesis


Your Name
December 2012

PROJECT THESIS
Department of Production and Quality Engineering
Norwegian University of Science and Technology

Supervisor 1: Professor Ask Burlefot


Supervisor 2: Professor Fingal Olsson

Preface
Here, you give a brief introduction to your work: what it is (e.g., a Master's thesis in RAMS at
NTNU as part of the study program xxx and . . . ) and when it was carried out (e.g., during the
autumn semester of 2021). If the project has been carried out for a company, you should mention
this and also describe the cooperation with the company. You may also describe how the idea
for the project came about.

Trondheim, 2012-12-16
(Your signature)
Ola Nordmann


Acknowledgment
I would like to thank the following persons for their great help during . . .
If the project has been carried out in cooperation with an external partner (e.g., a company),
you should acknowledge the contribution and give thanks to the involved persons.
You should also acknowledge the contributions made by your supervisor(s).
O.N.
(Your initials)


Summary and Conclusions


Here you give a summary of your work and your results. This is like a management summary and
should be written in clear and simple language, without many difficult terms and without
abbreviations. Everything you present here must be treated in more detail in the main report.
You should not give any references to the report in the summary; just explain what you have
done and what you have found out. The Summary and Conclusions should be no more than
two pages.

Contents
Preface
Acknowledgment
Summary and Conclusions

1 Introduction
  1.1 Objectives
  1.2 Organization of the Thesis

2 Background of Primal-Dual Method
  2.1 Definitions
    2.1.1 Linear Programming Problem
    2.1.2 Feasible Solution of a Linear Program
    2.1.3 Integer Linear Programming (ILP)
    2.1.4 Feasible Solution of an Integer Linear Program
    2.1.5 LP Relaxation
    2.1.6 Integrality Gap
    2.1.7 Performance guarantee or Approximation Ratio
    2.1.8 Dual
    2.1.9 Duality Gap
  2.2 Example
  2.3 Theorems
  2.4 Basic steps of solving a combinatorial problem

3 Basic Primal-Dual Algorithm for Hitting Set Problem
  3.1 Hitting Set Problem
    3.1.1 Basic Algorithm
    3.1.2 Approximation Ratio
    3.1.3 Example

4 Improvements of the Primal-Dual Algorithm and Their Analysis
  4.1 Improvement by Adding a Reverse Delete Step
    4.1.1 Algorithm
    4.1.2 Approximation Ratio
  4.2 Next Improvement: Uniformly increase all dual variables
    4.2.1 Algorithm
    4.2.2 Approximation Ratio

5 Primal-Dual Algorithm for Steiner Tree/Forest problem
  5.1 Steiner Tree Problem
    5.1.1 Steiner Tree Problem formulation
    5.1.2 Algorithm
    5.1.3 Example
    5.1.4 Iteration k = 1
    5.1.5 k=2
    5.1.6 k=3
    5.1.7 k=4
    5.1.8 Reverse Delete

6 Steiner Tree/Forest Problem As a Hitting Set Problem
  6.1 Steiner Tree problem
  6.2 Hitting Set Problem
  6.3 Representing Steiner Tree problem as Hitting Set problem

7 Relation between the primal-dual algorithm of Steiner Tree/Forest Problem and Hitting Set Problem

8 Analysis of Steiner Tree/Forest Algorithm
  8.1 Analysis of the Approximation Ratio
  8.2 Proof

A Acronyms

B Additional Information
  B.1 Introduction
    B.1.1 More Details

Bibliography

Curriculum Vitae

Chapter 1
Introduction
The primal-dual method is a well-known mathematical method that is widely used for solving
combinatorial optimization problems that can be represented as integer linear programs. The
method was first proposed by Dantzig, Ford, and Fulkerson as a tool for solving linear
programs [1]. Today it is mainly used to design approximation algorithms for NP-hard problems
that can be modelled as integer linear programs.

Many problems, such as vertex cover, set cover, maximum independent set, and Steiner tree, are
NP-hard, and no polynomial-time algorithms are known that find an optimal solution for them
(unless P = NP). For these problems we do not yet know whether there exists any algorithm that
produces an optimal solution in polynomial time.

In the primal-dual method we instead try to find, in polynomial time, a solution that closely
approximates the optimal solution.

1.1 Objectives
The main objectives of this work are:

1. To present the primal-dual method from a computer science perspective and to highlight its
application to combinatorial problems such as the Hitting Set problem and the Steiner
Tree/Forest problem.

2. To reformulate the Steiner Tree/Forest problem as a Hitting Set problem.

3. To analyze the relation between the primal-dual algorithms of the Steiner Tree/Forest problem
and the Hitting Set problem.

1.2 Organization of the Thesis


Chapter 2 gives the background of the primal-dual method. The application of the basic
primal-dual algorithm to the Hitting Set problem and its analysis are given in Chapter 3.
Chapter 4 describes improvements of the basic primal-dual algorithm together with a finer
analysis that closely follows the analysis given in the paper by Goemans and Williamson [1].
The primal-dual algorithm for the Steiner Tree/Forest problem is described in Chapter 5. In
Chapter 6 we represent the Steiner Tree/Forest problem as a Hitting Set problem. Chapter 7
analyses the relation between the primal-dual algorithms of the Steiner Tree/Forest problem and
the Hitting Set problem. A detailed analysis of the primal-dual algorithm for the Steiner Tree
problem, similar to the analysis given in [1] and [2], is given in Chapter 8. Chapter 9
concludes the thesis.

Chapter 2
Background of Primal-Dual Method
Some basic definitions required to understand the primal-dual method are given below. The descriptions are based on [1], [2], [3], and [4].

2.1 Definitions
2.1.1 Linear Programming Problem:
Linear programming is a mathematical method for finding the best value (maximum or minimum) of a
linear objective function over $\mathbb{R}^n$, subject to linear equality or inequality
constraints. A linear program for a minimization problem can be written in matrix form as

$$\min\; c^T x \quad \text{subject to}\quad Ax \ge b,\quad x \ge 0,$$

where $x$ is an $(n \times 1)$ vector of real variables whose values are to be determined, $c$ is
an $(n \times 1)$ vector and $b$ is an $(m \times 1)$ vector of known real coefficients, and $A$
is an $(m \times n)$ matrix of known coefficients. We assume that all $x_j$, $c_j$, $b_i$, and
$a_{ij}$ are real numbers.

Here $c^T x$ is the objective function to be minimized, subject to the inequality constraints
$Ax \ge b$ and $x \ge 0$.



Hence, we can also write the linear program in summation form:

$$\min \sum_{j=1}^{n} c_j x_j \quad \text{subject to}\quad \sum_{j=1}^{n} a_{ij} x_j \ge b_i \;\; (i = 1,\dots,m),\qquad x_j \ge 0 \;\; (j = 1,\dots,n).$$

Similarly, we can write a linear program for a maximization problem:

$$\max\; c^T x \quad \text{subject to}\quad Ax \le b,\quad x \ge 0,$$

or, in summation form,

$$\max \sum_{j=1}^{n} c_j x_j \quad \text{subject to}\quad \sum_{j=1}^{n} a_{ij} x_j \le b_i \;\; (i = 1,\dots,m),\qquad x_j \ge 0 \;\; (j = 1,\dots,n).$$
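As a small illustration (not part of the original text), the matrix form above can be handed directly to a numerical LP solver. The sketch below is minimal and assumes SciPy is available; the coefficients c, A, and b are made-up data. It rewrites the greater-or-equal constraints in the less-or-equal form that the solver expects.

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 3.0])                 # objective coefficients (assumed data)
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])               # constraint matrix (assumed data)
b = np.array([4.0, 6.0])                 # right-hand sides (assumed data)

# linprog expects "<=" constraints, so Ax >= b is rewritten as -Ax <= -b;
# the default variable bounds already enforce x >= 0.
res = linprog(c, A_ub=-A, b_ub=-b)
print(res.x, res.fun)                    # optimal solution and objective value
```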


2.1.2 Feasible Solution of a Linear Program:


A solution $(x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ that satisfies all the inequality constraints
is said to be a feasible solution. In a linear programming problem the constraints define a
convex polytope, the feasible region, and any point in that region is a feasible solution. An
optimal solution is attained at an extreme point of the polytope.

Figure 2.1: Feasible region of a linear programming problem with two variables

2.1.3 Integer Linear Programming (ILP)


An integer linear program is a linear programming problem in which all variables are restricted to be integers.

2.1.4 Feasible Solution of an Integer Linear Program:


In a linear programming problem the constraints define a convex polytope, the feasible region.
In an integer linear program, however, the feasible region is the set of integer-valued points
inside that polytope, not the entire polytope. Any such point is a feasible solution, and the
optimal solution need not lie at an extreme point of the polytope.

Figure 2.2: Feasible region of an integer linear programming problem with two variables

2.1.5 LP Relaxation:
For any ILP (Integer Linear Program) we can generate an LP (Linear Program) by taking the same
objective function and the same inequality constraints but relaxing the constraint that the
variables must be integers. For example, the constraint $x \in \{0, 1\}$ can be relaxed to the
continuous constraint $0 \le x \le 1$. The optimal solution of the LP is not necessarily
integral. Since the feasible region of the LP contains the feasible region of the IP, the
optimal value of the LP is no worse than the optimal value of the IP.


2.1.6 Integrality Gap:


Integrality gap of a minimization problem:
Let $OPT_{ILP}$ be the optimal value of the original ILP and $OPT_{LP}$ the optimal value of the
relaxed LP. For a minimization problem, $OPT_{LP}$ gives a lower bound on $OPT_{ILP}$. The
integrality gap is the maximum ratio between $OPT_{ILP}$ and $OPT_{LP}$:

$$\text{Integrality gap} = \max \frac{OPT_{ILP}}{OPT_{LP}}.$$

Figure 2.3: Integrality gap of a minimization problem

Integrality gap of a maximization problem:
Let $OPT_{ILP}$ be the optimal value of the original ILP and $OPT_{LP}$ the optimal value of the
relaxed LP. For a maximization problem, $OPT_{LP}$ gives an upper bound on $OPT_{ILP}$. The
integrality gap is the maximum ratio between $OPT_{LP}$ and $OPT_{ILP}$:

$$\text{Integrality gap} = \max \frac{OPT_{LP}}{OPT_{ILP}}.$$

Figure 2.4: Integrality gap of a maximization problem


2.1.7 Performance guarantee or Approximation Ratio:


Let $OPT_{LP}$ be the optimal value of the relaxed LP, let $A$ be the value of the integral
solution found by rounding the fractional LP solution, and let $OPT_{ILP}$ be the optimal value
of the original integer program. The algorithm is an $\alpha$-approximation algorithm if:

1) For a minimization problem, $OPT_{ILP} \le A \le \alpha \cdot OPT_{ILP}$, where $\alpha > 1$.
2) For a maximization problem, $\alpha \cdot OPT_{ILP} \le A \le OPT_{ILP}$, where $\alpha < 1$.

Figure 2.5: Approximation Ratio of a minimization problem

Figure 2.6: Approximation Ratio of a maximization problem


2.1.8 Dual:
Given a linear programming problem, the dual can be formulated by viewing the problem from
another perspective. For a minimization primal problem the dual is a maximization problem, and
vice versa.

The dual of the primal minimization problem $\min\, c^T x$ subject to $Ax \ge b$, $x \ge 0$ is

$$\max\; b^T y \quad \text{subject to}\quad A^T y \le c,\quad y \ge 0.$$

When the original problem (called the primal problem) is a minimization problem, any feasible
solution to its dual provides a lower bound on the optimal value of the primal problem.

The dual of the primal maximization problem $\max\, c^T x$ subject to $Ax \le b$, $x \ge 0$ is

$$\min\; b^T y \quad \text{subject to}\quad A^T y \ge c,\quad y \ge 0.$$

When the original primal problem is a maximization problem, any feasible solution to its dual
provides an upper bound on the optimal value of the primal problem.
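As an aside (not in the original text), the primal-dual relationship can be checked numerically. The sketch below re-uses the made-up data from the earlier SciPy example and solves both programs; at optimality the two objective values coincide, which previews the strong duality theorem of Section 2.3.

```python
import numpy as np
from scipy.optimize import linprog

# Primal:  min c^T x  s.t.  Ax >= b, x >= 0   (made-up data)
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0],
              [1.0, 2.0]])
b = np.array([4.0, 6.0])
primal = linprog(c, A_ub=-A, b_ub=-b)            # rewrite >= as <=

# Dual:  max b^T y  s.t.  A^T y <= c, y >= 0
# linprog minimizes, so we maximize b^T y by minimizing -b^T y.
dual = linprog(-b, A_ub=A.T, b_ub=c)

print(primal.fun, -dual.fun)                     # equal at optimality
```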


2.1.9 Duality Gap:


The optimal values of the primal and dual problems need not be equal; their difference is called
the duality gap. If $OPT_{DUAL}$ is the optimal dual value and $OPT_{PRIMAL}$ is the optimal
primal value, then:

For a minimization problem the duality gap is $OPT_{PRIMAL} - OPT_{DUAL}$, which is always
greater than or equal to 0.

For a maximization problem the duality gap is $OPT_{DUAL} - OPT_{PRIMAL}$, which is always
greater than or equal to 0.

2.2 Example:
Min Vertex Cover problem:
Input: a graph $G = (V, E)$.
Output: a minimum subset of vertices such that each edge is covered.
The integer linear program for the problem is

$$\min \sum_{v \in V} x_v \quad \text{subject to}\quad x_u + x_v \ge 1 \;\;\forall (u, v) \in E,\qquad x_v \in \{0, 1\} \;\;\forall v \in V.$$

Basically, for every edge $(u, v) \in E$, either $u$ or $v$ should be picked in the solution.


The relaxed linear program is

$$\min \sum_{v \in V} x_v \quad \text{subject to}\quad x_u + x_v \ge 1 \;\;\forall (u, v) \in E,\qquad 0 \le x_v \le 1 \;\;\forall v \in V.$$

Dual: the dual problem of this LP is

$$\max \sum_{(u,v) \in E} y_{uv} \quad \text{subject to}\quad \sum_{v : (u,v) \in E} y_{uv} \le 1 \;\;\forall u \in V,\qquad y_{uv} \ge 0 \;\;\forall (u, v) \in E.$$

This is basically the (fractional) maximum matching problem, as the constraints say that in an
integral solution each vertex can be incident to at most one chosen edge.

Let us consider the graph shown below, a triangle on three vertices with variables $x_1, x_2, x_3$.

The optimal solution of the integer linear program:

If we take any two vertices of the triangle, all three edges are covered. So setting any two of
$x_1, x_2, x_3$ to 1 gives an optimal solution of the integer linear program, with $OPT_{ILP} = 2$.


The optimal solution of the relaxed linear program:

Setting each of $x_1, x_2, x_3$ to $1/2$ gives an optimal solution of the relaxed linear program,
with $OPT_{LP} = 3/2$. Hence, obviously, $OPT_{LP} \le OPT_{ILP}$ here.

The rounded integral solution:

If we round the fractional solution of the relaxed problem by setting $x_i = 1$ whenever
$x_i \ge 1/2$ and $x_i = 0$ otherwise, all three variables are set to 1 and the rounded solution
has value $A = 3$. Hence $OPT_{LP} \le OPT_{ILP} \le A$ here.

The solution of the dual:

Only one edge of the triangle can be taken in an integral dual solution, so a feasible integral
dual solution has value 1.
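To make these numbers concrete, the following sketch (my own illustration, not from the thesis) solves the LP relaxation for the triangle with SciPy, finds the ILP optimum by brute force over the eight binary assignments, and applies the rounding rule. It should report OPT_LP = 1.5, OPT_ILP = 2, and a rounded value of 3.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (0, 2), (1, 2)]          # the triangle
n = 3
c = np.ones(n)                            # unweighted vertex cover

# LP relaxation: min sum x_v  s.t.  x_u + x_v >= 1 per edge, 0 <= x <= 1.
A = np.zeros((len(edges), n))
for k, (u, v) in enumerate(edges):
    A[k, u] = A[k, v] = 1.0
lp = linprog(c, A_ub=-A, b_ub=-np.ones(len(edges)), bounds=[(0, 1)] * n)
print("OPT_LP  =", lp.fun)                # 1.5, attained at x = (1/2, 1/2, 1/2)

# ILP optimum by brute force over all 0/1 assignments.
opt_ilp = min(sum(x) for x in itertools.product([0, 1], repeat=n)
              if all(x[u] + x[v] >= 1 for u, v in edges))
print("OPT_ILP =", opt_ilp)               # 2

# Threshold rounding of the fractional solution.
rounded = [1 if xi >= 0.5 else 0 for xi in lp.x]
print("rounded value =", sum(rounded))    # 3
```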


2.3 Theorems:
The basic theorems of the primal-dual method are stated below, with proofs that closely follow
the presentation given by Vazirani [2].

Theorem 1 (Weak Duality Theorem). If $(x_1, x_2, \dots, x_n)$ is a feasible solution for the
primal minimization linear program and $(y_1, y_2, \dots, y_m)$ is a feasible solution for the
dual maximization linear program, then

$$\sum_{j=1}^{n} c_j x_j \;\ge\; \sum_{i=1}^{m} b_i y_i.$$

Proof. Consider the primal minimization problem. Since $(y_1, y_2, \dots, y_m)$ is a feasible
dual solution and $(x_1, x_2, \dots, x_n)$ is non-negative,

$$\sum_{j=1}^{n} c_j x_j \;\ge\; \sum_{j=1}^{n} \Big( \sum_{i=1}^{m} a_{ij} y_i \Big) x_j \;=\; \sum_{i=1}^{m} \Big( \sum_{j=1}^{n} a_{ij} x_j \Big) y_i. \qquad (1)$$

Also, since $(x_1, x_2, \dots, x_n)$ is a feasible primal solution and $(y_1, y_2, \dots, y_m)$
is non-negative,

$$\sum_{i=1}^{m} \Big( \sum_{j=1}^{n} a_{ij} x_j \Big) y_i \;\ge\; \sum_{i=1}^{m} b_i y_i. \qquad (2)$$

Combining (1) and (2), the theorem follows:

$$\sum_{j=1}^{n} c_j x_j \;\ge\; \sum_{i=1}^{m} b_i y_i.$$

Similarly, for a maximization problem it can be proved that

$$\sum_{j=1}^{n} c_j x_j \;\le\; \sum_{i=1}^{m} b_i y_i.$$

Theorem 2 (Strong Duality Theorem). The primal program has a finite optimum if and only if its
dual has a finite optimum. Moreover, if $x = (x_1, \dots, x_n)$ and $y = (y_1, \dots, y_m)$ are
optimal solutions for the primal and dual programs respectively, then

$$\sum_{j=1}^{n} c_j x_j \;=\; \sum_{i=1}^{m} b_i y_i.$$

Theorem 3 (Complementary Slackness Theorem). Let $x$ and $y$ be primal and dual feasible
solutions, respectively. Then $x$ and $y$ are both optimal if and only if all of the following
conditions are satisfied:

Primal complementary slackness conditions:

$$\forall j,\; 1 \le j \le n: \quad \text{either } x_j = 0 \text{ or } \sum_{i=1}^{m} a_{ij} y_i = c_j.$$

Dual complementary slackness conditions:

$$\forall i,\; 1 \le i \le m: \quad \text{either } y_i = 0 \text{ or } \sum_{j=1}^{n} a_{ij} x_j = b_i.$$

Proof. Since $x$ and $y$ are feasible primal and dual solutions respectively,

$$\sum_{j=1}^{n} a_{ij} x_j \ge b_i, \quad i = 1, 2, \dots, m$$

and

$$\sum_{i=1}^{m} a_{ij} y_i \le c_j, \quad j = 1, 2, \dots, n.$$

Hence we can also write

$$\Big( \sum_{j=1}^{n} a_{ij} x_j - b_i \Big) y_i \ge 0, \quad i = 1, 2, \dots, m,$$

$$\Big( c_j - \sum_{i=1}^{m} a_{ij} y_i \Big) x_j \ge 0, \quad j = 1, 2, \dots, n.$$

Summing over $i$ and $j$ we have

$$\sum_{i=1}^{m} \Big( \sum_{j=1}^{n} a_{ij} x_j - b_i \Big) y_i \ge 0, \qquad (1)$$

$$\sum_{j=1}^{n} \Big( c_j - \sum_{i=1}^{m} a_{ij} y_i \Big) x_j \ge 0. \qquad (2)$$

Adding (1) and (2),

$$\sum_{i,j} a_{ij} x_j y_i - \sum_{i} b_i y_i + \sum_{j} c_j x_j - \sum_{i,j} a_{ij} x_j y_i \ge 0$$

$$\Rightarrow\quad \sum_{j} c_j x_j - \sum_{i} b_i y_i \ge 0. \qquad (3)$$

By the strong duality theorem, $x$ and $y$ are both optimal exactly when

$$\sum_{j} c_j x_j = \sum_{i} b_i y_i,$$

which holds if and only if the inequalities (1) and (2) hold with equality, and we obtain the
desired result:

from (1), either $y_i = 0$ or $\sum_{j=1}^{n} a_{ij} x_j = b_i$;
from (2), either $x_j = 0$ or $\sum_{i=1}^{m} a_{ij} y_i = c_j$.
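As a small illustration (not part of the original text), the two slackness conditions can be checked numerically for candidate solutions of the standard-form pair used above. The helper below is a sketch with made-up names and data; the specific x and y are the optimal pair from the earlier SciPy example.

```python
import numpy as np

def complementary_slackness_holds(A, b, c, x, y, tol=1e-9):
    """Check complementary slackness for the pair
    min c^T x, Ax >= b, x >= 0   and   max b^T y, A^T y <= c, y >= 0."""
    A, b, c, x, y = map(np.asarray, (A, b, c, x, y))
    primal_ok = np.all((x <= tol) | (np.abs(A.T @ y - c) <= tol))  # x_j = 0 or sum_i a_ij y_i = c_j
    dual_ok = np.all((y <= tol) | (np.abs(A @ x - b) <= tol))      # y_i = 0 or sum_j a_ij x_j = b_i
    return bool(primal_ok and dual_ok)

# The optimal pair of the earlier made-up example satisfies both conditions:
A = [[1.0, 1.0], [1.0, 2.0]]; b = [4.0, 6.0]; c = [2.0, 3.0]
print(complementary_slackness_holds(A, b, c, x=[2.0, 2.0], y=[1.0, 1.0]))  # True
```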


2.4 Basic steps of solving a combinatorial problem:


(1) Formulate the optimization problem as an integer program (IP).
(2) Relax the integer constraints to convert the integer program into a linear program (LP).
(3) Solve the LP to obtain an optimal solution $OPT_{LP}$, using the primal-dual algorithm
(described in the next chapter).
(4) Construct a feasible solution to the IP by rounding $OPT_{LP}$ to integers; this produces
our approximate solution. A compact version of this pipeline is sketched below.
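The sketch below is my own illustration of steps (2) to (4) for covering-type problems, using a generic LP solver in place of the primal-dual algorithm of step (3); the function name and the threshold-rounding rule are assumptions, not the thesis's method.

```python
import numpy as np
from scipy.optimize import linprog

def relax_solve_round(c, A, b, threshold=0.5):
    """Steps (2)-(4) for a covering IP  min c^T x, Ax >= b, x in {0,1}^n:
    solve the LP relaxation (0 <= x <= 1) and round each variable at `threshold`."""
    lp = linprog(c, A_ub=-np.asarray(A), b_ub=-np.asarray(b),
                 bounds=[(0, 1)] * len(c))
    x_int = (lp.x >= threshold).astype(int)
    return lp.fun, x_int

# Triangle vertex cover from Section 2.2: returns OPT_LP = 1.5 and x = (1, 1, 1).
print(relax_solve_round([1, 1, 1], [[1, 1, 0], [1, 0, 1], [0, 1, 1]], [1, 1, 1]))
```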

Chapter 3
Basic Primal-Dual Algorithm for Hitting Set
Problem
To understand the primal-dual algorithm, the hitting set problem is taken as an example.

3.1 Hitting Set Problem:


Input:
A set E of elements.
A cost function on the elements, $c : E \rightarrow \mathbb{Q}^+$.
A collection of sets $T = \{T_1, T_2, \dots, T_k\}$, where $T_i \subseteq E$, $i = 1, 2, \dots, k$.
Output:
A minimum-cost subset $A \subseteq E$ of elements such that $A \cap T_i \ne \emptyset$ for all $T_i \in T$.

Example:
Consider a set of elements $E = \{e_1, e_2, e_3, e_4, e_5\}$ with costs $\{2, 3, 2, 4, 1\}$
respectively, and $T = \{T_1, T_2, T_3\}$ with
$T_1 = \{e_1, e_3, e_5\}$,
$T_2 = \{e_2, e_3\}$,
$T_3 = \{e_2, e_4\}$.
The minimum cost of a solution is 4, obtained by taking $e_2$ and $e_5$.


Integer Linear Program of the Hitting Set Problem:

$$\min \sum_{e \in E} c_e x_e \quad \text{subject to}\quad \sum_{e \in T_i} x_e \ge 1 \;\;\forall T_i \in T,\qquad x_e \in \{0, 1\} \;\;\forall e \in E.$$

Relaxed Primal Problem:

By relaxing the integral constraints, we get the following relaxed primal problem:

$$\min \sum_{e \in E} c_e x_e \quad \text{subject to}\quad \sum_{e \in T_i} x_e \ge 1 \;\;\forall T_i \in T,\qquad x_e \ge 0 \;\;\forall e \in E.$$

Corresponding Dual Problem:

$$\max \sum_{i=1}^{k} y_i \quad \text{subject to}\quad \sum_{i : e \in T_i} y_i \le c_e \;\;\forall e \in E,\qquad y_i \ge 0 \;\;\forall T_i \in T.$$

3.1.1 Basic Algorithm:


∀e ∈ E: x_e = 0  (primal infeasible solution)
∀T_i ∈ T: y_i = 0  (dual feasible solution)
A = ∅
while A is not feasible do:
    Δ = ∅
    choose a violated T_i and increase y_i until ∃ e ∉ A with Σ_{i: e ∈ T_i} y_i = c_e
    for each such tight e:
        Δ = Δ ∪ {e}
    A = A ∪ Δ
for each e ∈ A:
    if A \ {e} is feasible:
        A = A \ {e}
return A
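A direct translation of this algorithm into Python is sketched below; it is my own illustration, not code from the thesis, and the function and variable names are assumptions. It keeps the dual values explicitly, raises the dual of one violated set at a time until some element becomes tight, and finally removes elements that are not needed for feasibility.

```python
def primal_dual_hitting_set(cost, sets):
    """Basic primal-dual algorithm for the hitting set problem.
    cost: dict element -> cost; sets: list of sets of elements.
    Returns a feasible hitting set A."""
    y = [0.0] * len(sets)                      # dual variables, one per set T_i
    A = set()                                  # current (initially infeasible) primal solution

    def violated():
        return [i for i, T in enumerate(sets) if not (A & T)]

    while violated():
        i = violated()[0]                      # choose one violated set T_i
        # Raise y_i until some element of T_i becomes tight:
        # sum over sets containing e of their duals equals c_e.
        slack = {e: cost[e] - sum(y[j] for j, T in enumerate(sets) if e in T)
                 for e in sets[i]}
        delta = min(slack.values())
        y[i] += delta
        A |= {e for e, s in slack.items() if s == delta}   # add all newly tight elements

    # Delete step: drop elements whose removal keeps A feasible.
    for e in list(A):
        if all((A - {e}) & T for T in sets):
            A -= {e}
    return A
```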

3.1.2 Approximation Ratio:


The cost of the solution produced by the algorithm is

$$C = \sum_{e \in A} c_e = \sum_{e \in A} \sum_{i : e \in T_i} y_i \quad \text{(as every picked element is tight)} = \sum_{i} |A \cap T_i| \cdot y_i.$$

Now, if $\max_i |T_i| = f$, then $|A \cap T_i| \le f$ for every $T_i \in T$, so

$$C \le f \sum_{i=1}^{k} y_i \le f \cdot OPT$$

by weak duality. Hence it is an $f$-approximation algorithm.


3.1.3 Example:
We can represent the minimum vertex cover problem as a hitting set problem, and by using the
basic primal-dual algorithm we obtain a 2-approximate solution (a small usage sketch is given
below).

Min Vertex Cover problem:
Input: a graph $G = (V, E)$ with weights $w_i \ge 0$ for all $i \in V$.
Output: a minimum-weight subset of vertices such that each edge is covered.
The integer linear program for the problem is

$$\min \sum_{i \in V} w_i x_i \quad \text{subject to}\quad x_i + x_j \ge 1 \;\;\forall (i, j) \in E,\qquad x_i \in \{0, 1\} \;\;\forall i \in V.$$

Basically, for every edge $(u, v) \in E$, either $u$ or $v$ should be picked in the solution.

We can represent the problem as a hitting set problem as follows:

The set of elements = the set of vertices of the graph G.
The collection of sets is $T = \{T_1, T_2, \dots, T_k\}$, where $T_i = \{u, v\}$ for each edge
$(u, v) \in E$.
So the size of each $T_i$ is 2, hence $\max_i |T_i| = 2$ and the algorithm gives a 2-approximate
solution.
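For instance, using the `primal_dual_hitting_set` sketch from Section 3.1.1 (my own illustration), the unweighted triangle from Section 2.2 is solved by listing each edge as a two-element set:

```python
# Unweighted triangle: vertices 1, 2, 3; every edge becomes a two-element set.
cost = {1: 1.0, 2: 1.0, 3: 1.0}
edges = [{1, 2}, {1, 3}, {2, 3}]
cover = primal_dual_hitting_set(cost, edges)
print(cover)   # a vertex cover with two vertices, e.g. {1, 2}
```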

Chapter 4
Improvements of the Primal-Dual Algorithm and Their Analysis

4.1 Improvement by Adding a Reverse Delete Step

In the previous algorithm, after finding a feasible solution A, we delete from A the elements
that are not needed for the feasibility of A. The algorithm can be improved by deleting the
elements of A in the reverse of the order in which they were added, because more expensive
elements tend to be added later; deleting unnecessary elements in reverse order therefore gives
a better result. Here we again let A denote the final solution.

4.1.1 Algorithm :
∀e ∈ E: x_e = 0  (primal infeasible solution)
∀T_i ∈ T: y_i = 0  (dual feasible solution)
A_0 = ∅
B_0 = A  (where A is the feasible solution the loop produces; B_k is used only in the analysis)
k = 0
while A_k is not feasible do:
    k = k + 1
    Δ_k = ∅
    choose a violated set T_i and increase y_i until ∃ e_k ∉ A_{k-1} with Σ_{i: e_k ∈ T_i} y_i = c_{e_k}
    for each such tight e_k:
        Δ_k = Δ_k ∪ {e_k}
    A_k = A_{k-1} ∪ Δ_k
    B_k = B_{k-1} \ Δ_k
l = k
for j = l down to 1:
    for each e ∈ Δ_j:
        if A_l \ {e} is feasible:
            A_l = A_l \ {e}
return A_l
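A sketch of this variant in Python (again my own illustration, with assumed names) only needs to remember the order in which elements became tight; the deletion pass then walks that list backwards.

```python
def primal_dual_reverse_delete(cost, sets):
    """Primal-dual hitting set with a reverse delete step: elements are
    removed in the reverse of the order in which they were added."""
    y = [0.0] * len(sets)
    A, order = set(), []                       # order records when elements were added

    def violated():
        return [i for i, T in enumerate(sets) if not (A & T)]

    while violated():
        i = violated()[0]                      # choose one violated set T_i
        slack = {e: cost[e] - sum(y[j] for j, T in enumerate(sets) if e in T)
                 for e in sets[i]}
        delta = min(slack.values())
        y[i] += delta
        newly_tight = [e for e, s in slack.items() if s == delta]
        A |= set(newly_tight)
        order.extend(newly_tight)

    for e in reversed(order):                  # reverse delete
        if all((A - {e}) & T for T in sets):
            A -= {e}
    return A
```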

4.1.2 Approximation Ratio:


(1) Let A be the solution before the reverse delete step. If the algorithm runs for l
iterations, then for the kth iteration ($1 \le k \le l$)

$$A_k = \bigcup_{t=1}^{k} \Delta_t, \qquad B_k = \bigcup_{t=k+1}^{l} \Delta_t.$$

Here $B_k$ is an augmentation of $A_k$, since $A_k \cup B_k = A$ is feasible.

Let the cost of our solution be

$$C = \sum_{e \in A} c_e = \sum_{e \in A} \sum_{i : e \in T_i} y_i = \sum_{i} |A \cap T_i| \cdot y_i = \sum_{i} \Big| \Big[ \bigcup_{t=1}^{k-1} \Delta_t \cup \bigcup_{t=k}^{l} \Delta_t \Big] \cap T_i \Big| \cdot y_i,$$

where $k$ is the iteration in which $T_i$ is chosen as a violated set. Hence

$$C = \sum_{i} |(A_{k-1} \cup B_{k-1}) \cap T_i| \cdot y_i = \sum_{i} |B_{k-1} \cap T_i| \cdot y_i,$$

because $A_{k-1} \cap T_i = \emptyset$: when $T_i$ is chosen as a violated set it does not
intersect the already picked elements.

So, if we ensure for each iteration k that $|B_{k-1} \cap T_i| \le \alpha$, then by weak duality
it is an $\alpha$-approximation algorithm. To minimize the approximation ratio we should
therefore always choose minimal violated sets in our algorithm.

(2) Now let A be the final solution after the reverse delete step, and after the kth iteration
let $B'_k = A \setminus A_k$.

Here $B'_k$ is a minimal augmentation of $A_k$, since
(1) $A_k \cup B'_k = A$ is feasible, and
(2) $A_k \cup B'_k \setminus \{e\}$ is not feasible for any $e \in B'_k$.

Claim: $A_k \cup B'_k \setminus \{e\}$ is not feasible for any $e \in B'_k$.
If element $e_k$ is considered in iteration k of the reverse delete step, then no $e_j$ with
$j < k$ has been deleted yet, and every $e_j$ with $j > k$ has already been considered in the
reverse delete step; the elements that remain are therefore necessary for the feasibility of A.
Hence the claim is proved.

Now the cost of the solution of our algorithm is

$$C = \sum_{e \in A} c_e = \sum_{i} |A \cap T_i| \cdot y_i = \sum_{i} |(A_{k-1} \cup B'_{k-1}) \cap T_i| \cdot y_i = \sum_{i} |B'_{k-1} \cap T_i| \cdot y_i,$$

where $B'_{k-1}$ is the minimal augmentation of $A_{k-1}$ after $k-1$ iterations and, as before,
$A_{k-1} \cap T_i = \emptyset$.

So, if we ensure for each iteration k that $|B'_{k-1} \cap T_i| \le \alpha$, then it is an
$\alpha$-approximation algorithm. In other words, letting D range over the minimal augmentations
$B'_k$, we need $|D \cap T_i| \le \alpha$ in each iteration.

4.2 Next Improvement: Uniformly increase all dual variables

In the previous algorithm we increase only one violated dual variable at a time. In this
improvement we increase the dual variables of all violated sets simultaneously in each
iteration.

4.2.1 Algorithm:
∀e ∈ E: x_e = 0
∀T_i ∈ T: y_i = 0
A_0 = ∅
B_0 = A
k = 0
while A_k is not feasible do:
    k = k + 1
    Γ_k = violated(A_{k-1})
    Δ_k = ∅
    increase y_i uniformly for all T_i ∈ Γ_k until ∃ e_k ∉ A_{k-1} with Σ_{i: e_k ∈ T_i} y_i = c_{e_k}
    for each such tight e_k:
        Δ_k = Δ_k ∪ {e_k}
    A_k = A_{k-1} ∪ Δ_k
l = k
for j = l down to 1:
    for each e ∈ Δ_j:
        if A_l \ {e} is feasible:
            A_l = A_l \ {e}
return A_l
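A sketch of the uniform-increase variant (my own illustration, with assumed names): in each round every currently violated set raises its dual by the same amount, chosen as the smallest increase that makes some element tight.

```python
def primal_dual_uniform(cost, sets):
    """Primal-dual hitting set where all violated sets' duals rise uniformly,
    followed by a reverse delete step."""
    y = [0.0] * len(sets)
    A, order = set(), []

    def violated():
        return [i for i, T in enumerate(sets) if not (A & T)]

    while violated():
        V = violated()
        # For an element e not yet in A, raising every violated dual by eps
        # reduces its slack by eps * (number of violated sets containing e).
        slack, rate = {}, {}
        for e in set().union(*(sets[i] for i in V)) - A:
            slack[e] = cost[e] - sum(y[j] for j, T in enumerate(sets) if e in T)
            rate[e] = sum(1 for i in V if e in sets[i])
        eps = min(slack[e] / rate[e] for e in slack)
        for i in V:
            y[i] += eps
        newly_tight = [e for e in slack if abs(slack[e] - eps * rate[e]) < 1e-12]
        A |= set(newly_tight)
        order.extend(newly_tight)

    for e in reversed(order):                  # reverse delete
        if all((A - {e}) & T for T in sets):
            A -= {e}
    return A
```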

4.2.2 Approximation Ratio:


With this algorithm we can obtain a better approximation ratio.

The cost of the solution of the algorithm is

$$C = \sum_{e \in A} c_e = \sum_{e \in A} \sum_{i : e \in T_i} y_i = \sum_{i} |A \cap T_i| \cdot y_i.$$

Now, for iteration k let

$\Gamma_k$ = the collection of violated sets, and
$\epsilon_k$ = the amount by which each of their dual variables is increased.

At the end of the algorithm,

$$\sum_{i} y_i = \sum_{k} |\Gamma_k| \cdot \epsilon_k.$$

Since $y_i = \sum_{k : T_i \in \Gamma_k} \epsilon_k$, the cost C can similarly be written as

$$C = \sum_{i} |A \cap T_i| \sum_{k : T_i \in \Gamma_k} \epsilon_k = \sum_{k} \Big( \sum_{T_i \in \Gamma_k} |A \cap T_i| \Big) \epsilon_k. \qquad (1)$$

For the algorithm to be an $\alpha$-approximation it suffices (by weak duality) that

$$C \;\le\; \alpha \sum_{i} y_i = \alpha \sum_{k} |\Gamma_k| \cdot \epsilon_k. \qquad (2)$$

Comparing (1) and (2), the algorithm is an $\alpha$-approximation if

$$\sum_{k} \Big( \sum_{T_i \in \Gamma_k} |A \cap T_i| \Big) \epsilon_k \;\le\; \alpha \sum_{k} |\Gamma_k| \cdot \epsilon_k.$$

This holds if we ensure for each k that

$$\sum_{T_i \in \Gamma_k} |A \cap T_i| \;\le\; \alpha\, |\Gamma_k|, \qquad \text{or equivalently}\qquad \frac{\sum_{T_i \in \Gamma_k} |A \cap T_i|}{|\Gamma_k|} \;\le\; \alpha.$$

Then the algorithm is an $\alpha$-approximation algorithm.

Use:
Using a specialized version of this algorithm for the Steiner forest problem, we can prove that
$\alpha = 2$. This is discussed in the following chapters.

Chapter 5
Primal-Dual Algorithm for Steiner
Tree/Forest problem
5.1 Steiner Tree Problem
Input: an undirected graph $G = (V, E)$,
an edge cost function $c : E \rightarrow \mathbb{Q}^+$,
and a collection of disjoint subsets of V, $\{R_1, R_2, \dots, R_m\}$ (the required sets of vertices).
Output: a minimum-cost subgraph of G in which each pair of vertices belonging to the same set
$R_i$ is connected (this is the minimum Steiner forest problem).
The minimum Steiner tree problem is the special case in which $m = 1$.

5.1.1 Steiner Tree Problem formulation


Some functions used to represent the Steiner tree problem:

(1) We define a connectivity requirement function $r : V \times V \rightarrow \{0, 1\}$ as follows:

$$r(u, v) = \begin{cases} 1 & \text{if } u \text{ and } v \text{ belong to the same set } R_i, \\ 0 & \text{otherwise.} \end{cases}$$

The problem is then to find a minimum-cost subgraph of G that contains a $u$-$v$ path for each
pair $(u, v)$ with $r(u, v) = 1$.


(2) We also consider a function on all cuts of G, $f : 2^V \rightarrow \{0, 1\}$, defined by

$$f(S) = \begin{cases} 1 & \text{if } \exists\, u \in S,\ v \notin S \text{ such that } r(u, v) = 1, \\ 0 & \text{otherwise.} \end{cases}$$

(3) Another function is $\delta : 2^V \rightarrow 2^E$, where

$$\delta(S) = \{(u, v) \in E \mid (u \in S \text{ and } v \notin S) \text{ or } (v \in S \text{ and } u \notin S)\}.$$
The primal problem can be written as

$$\min \sum_{e \in E} c_e x_e \quad \text{subject to}\quad \sum_{e \in \delta(S)} x_e \ge f(S) \;\;\forall S \subseteq V,\qquad x_e \in \{0, 1\} \;\;\forall e \in E.$$

Relaxed primal problem:

$$\min \sum_{e \in E} c_e x_e \quad \text{subject to}\quad \sum_{e \in \delta(S)} x_e \ge f(S) \;\;\forall S \subseteq V,\qquad x_e \ge 0 \;\;\forall e \in E.$$

Corresponding dual problem:

$$\max \sum_{S \subseteq V} f(S)\, y_S \quad \text{subject to}\quad \sum_{S : e \in \delta(S)} y_S \le c_e \;\;\forall e \in E,\qquad y_S \ge 0 \;\;\forall S \subseteq V.$$

The primal-dual algorithm for the Steiner tree problem is the same as the primal-dual algorithm
for the hitting set problem: we start with a primal infeasible and a dual feasible solution and
increase the dual variables of violated sets until the primal solution becomes feasible. The
choice of violated sets is, however, different in the Steiner tree algorithm.



At iteration k the violated sets are minimal sets called active sets, collected in

$$\Gamma_k = \{ S \subseteq V \mid S \text{ is a connected component of } (V, F_{k-1}) \text{ and } f(S) = 1 \}.$$

5.1.2 Algorithm:
∀e ∈ E: x_e = 0
∀S ⊆ V: y_S = 0
F_0 = ∅
k = 0
while F_k is not feasible do:
    k = k + 1
    Γ_k = active(F_{k-1})
    Δ_k = ∅
    increase y_S uniformly for all S ∈ Γ_k until ∃ e_k ∉ F_{k-1} with Σ_{S: e_k ∈ δ(S)} y_S = c_{e_k}
    for each such tight e_k:
        Δ_k = Δ_k ∪ {e_k}
    F_k = F_{k-1} ∪ Δ_k
F = F_k
for j = k down to 1:
    for each e ∈ Δ_j:
        if F \ {e} is feasible:
            F = F \ {e}
return F
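The sketch below (my own rendering for small instances, not code from the thesis) implements this scheme: components are tracked explicitly, every active component's dual grows at unit rate, and each edge accumulates a "load" equal to the sum of the duals of the sets whose cut contains it; an edge is added when its load reaches its cost, and a reverse delete pass runs at the end. Function and variable names are my own, and the code assumes every required pair is connected in G.

```python
import itertools

def primal_dual_steiner_forest(vertices, edges, cost, required_sets):
    """Primal-dual sketch for Steiner forest. vertices: list; edges: list of
    frozensets {u, v}; cost: dict edge -> cost; required_sets: disjoint vertex sets."""
    def requirement(u, v):
        return any(u in R and v in R for R in required_sets)

    comp = {v: {v} for v in vertices}           # vertex -> its component (shared set objects)
    load = {e: 0.0 for e in edges}              # accumulated dual load on each edge
    F, order = set(), []

    def active(S):                              # f(S) = 1 ?
        return any(requirement(u, v) for u in S for v in vertices if v not in S)

    def feasible(edge_set):
        c = {v: {v} for v in vertices}
        for u, v in (tuple(e) for e in edge_set):
            if c[u] is not c[v]:
                c[u] |= c[v]
                for w in c[v]:
                    c[w] = c[u]
        return all(not requirement(u, v) or c[u] is c[v]
                   for u, v in itertools.combinations(vertices, 2))

    while not feasible(F):
        # Candidate edges join two different components; their load grows at a
        # rate equal to the number of active components among their endpoints.
        eps, tight = None, []
        for e in edges:
            u, v = tuple(e)
            if comp[u] is comp[v]:
                continue
            rate = sum(1 for S in (comp[u], comp[v]) if active(S))
            if rate == 0:
                continue
            t = (cost[e] - load[e]) / rate
            if eps is None or t < eps - 1e-12:
                eps, tight = t, [e]
            elif abs(t - eps) <= 1e-12:
                tight.append(e)
        assert eps is not None, "some required pair is not connected in G"
        for e in edges:                          # raise all active duals by eps
            u, v = tuple(e)
            if comp[u] is not comp[v]:
                load[e] += eps * sum(1 for S in (comp[u], comp[v]) if active(S))
        for e in tight:                          # add tight edges, merge components
            u, v = tuple(e)
            if comp[u] is not comp[v]:
                F.add(e); order.append(e)
                comp[u] |= comp[v]
                for w in list(comp[v]):
                    comp[w] = comp[u]

    for e in reversed(order):                    # reverse delete
        if feasible(F - {e}):
            F -= {e}
    return F
```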


5.1.3 Example:
Let us consider the graph below.

The required sets of vertices are R1 = {a, b} and R2 = {c, d}.

5.1.4 Iteration k = 1:
Active sets: $\{a\}, \{b\}, \{c\}, \{d\}$.

If we increase the dual variables of the active sets by 6, the edges (a, e) and (b, f) become
tight, since

$$\sum_{S : (a,e) \in \delta(S)} y_S = y_{\{a\}} = 6 = c_{(a,e)}$$

(the dual variables of all other sets $S \subseteq V$ with $(a, e) \in \delta(S)$ are zero).
Similarly,

$$\sum_{S : (b,f) \in \delta(S)} y_S = y_{\{b\}} = 6 = c_{(b,f)}.$$

So we can add the tight edges to our solution:

$$A_1 = \{(a, e), (b, f)\}.$$

Added edges are marked with bolder lines in the figure; sets with non-zero dual values are
marked with circles.

5.1.5 k=2
Active sets: $\{a, e\}, \{b, f\}, \{c\}, \{d\}$.

If we increase the dual variables of the active sets by 2, the edge (a, c) becomes tight, since

$$\sum_{S : (a,c) \in \delta(S)} y_S = y_{\{a\}} + y_{\{c\}} + y_{\{a,e\}} = 6 + 8 + 2 = 16 = c_{(a,c)}.$$

$$A_2 = A_1 \cup \{(a, c)\}.$$

5.1.6 k=3
Active sets: $\{a, e, c\}, \{b, f\}, \{d\}$.

If we increase the dual variables of the active sets by 1, the edge (d, f) becomes tight, since

$$\sum_{S : (d,f) \in \delta(S)} y_S = y_{\{d\}} + y_{\{b,f\}} = 9 + 3 = 12 = c_{(d,f)}.$$

$$A_3 = A_2 \cup \{(d, f)\}.$$

5.1.7 k=4
Active sets: $\{a, e, c\}, \{b, f, d\}$.

If we increase the dual variables of the active sets by 1, the edge (a, b) becomes tight, since

$$\sum_{S : (a,b) \in \delta(S)} y_S = y_{\{a\}} + y_{\{a,e\}} + y_{\{a,c,e\}} + y_{\{b\}} + y_{\{b,f\}} + y_{\{b,f,d\}} = 6 + 2 + 2 + 6 + 3 + 1 = 20 = c_{(a,b)}.$$

$$A_4 = A_3 \cup \{(a, b)\}.$$

5.1.8 Reverse Delete:


The edges were added in the following order: (a, e) and (b, f); then (a, c); then (d, f); then
(a, b). Our required sets are R1 = {a, b} and R2 = {c, d}. Considering the edges in the reverse
order of addition:

(1) If we delete (a, b), a and b are no longer connected.
(2) If we delete (d, f), c and d are no longer connected.
(3) If we delete (a, c), c and d are no longer connected.
(4) If we delete (b, f), c and d are no longer connected.
(5) If we delete (a, e), both a and b, and c and d, remain connected.

So we can delete the edge (a, e) from our solution. The final Steiner forest (a tree in this
case) is shown below:


Chapter 6
Steiner Tree/Forest Problem As a Hitting Set Problem
6.1 Steiner Tree problem
Input:
An undirected graph $G = (V, E)$,
an edge cost function $c : E \rightarrow \mathbb{Q}^+$,
and a collection of disjoint subsets of V, $\{R_1, R_2, \dots, R_m\}$ (the required sets of vertices).
Output:
A minimum-cost subgraph of G in which each pair of vertices belonging to the same set $R_i$ is
connected.

Relaxed primal problem:

$$\min \sum_{e \in E} c_e x_e \quad \text{subject to}\quad \sum_{e \in \delta(S)} x_e \ge f(S) \;\;\forall S \subseteq V,\qquad x_e \ge 0 \;\;\forall e \in E.$$


6.2 Hitting Set Problem:


Input: a set E of elements,
a cost function on the elements, $c : E \rightarrow \mathbb{Q}^+$,
and a collection of sets $T = \{T_1, T_2, \dots, T_k\}$, where $T_i \subseteq E$, $i = 1, 2, \dots, k$.
Output: a minimum-cost subset $A \subseteq E$ of elements such that $A \cap T_i \ne \emptyset$
for all $T_i \in T$.

Relaxed primal problem:

$$\min \sum_{e \in E} c_e x_e \quad \text{subject to}\quad \sum_{e \in T_i} x_e \ge 1 \;\;\forall T_i \in T,\qquad x_e \ge 0 \;\;\forall e \in E.$$

6.3 Representing Steiner Tree problem as Hitting Set problem


We can write the Steiner tree problem as a hitting set problem as follows:

The ground set of elements = the set of edges E of the graph G.

The cost function on the elements is the edge cost function of G, $c : E \rightarrow \mathbb{Q}^+$.

The collection of sets is $T = \{T_1, T_2, \dots, T_k\}$, where

$$T_i = \{ e \mid e \in \delta(S_i) \}, \qquad S_i \subseteq V \text{ with } f(S_i) = 1;$$

that is, there is one set $T_i = \delta(S_i)$ for every cut that separates some required pair.
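A brute-force sketch of this reduction (my own illustration, practical only for very small graphs since it enumerates all 2^|V| vertex subsets): it builds one hitting set T_i = δ(S_i) for every subset S_i with f(S_i) = 1, after which the hitting-set machinery from Chapter 3 can be applied. The example data at the bottom are assumptions.

```python
import itertools

def steiner_to_hitting_set(vertices, edges, required_sets):
    """Enumerate all cuts delta(S) with f(S) = 1 and return them as the
    collection T of sets over the ground set of edges (edges as frozensets)."""
    def f(S):
        # f(S) = 1 iff some required pair is separated by S.
        return any((u in S) != (v in S)
                   for R in required_sets
                   for u, v in itertools.combinations(R, 2))

    T = []
    for r in range(1, len(vertices)):
        for S in itertools.combinations(vertices, r):
            S = set(S)
            if f(S):
                T.append({e for e in edges if len(e & S) == 1})  # delta(S)
    return T

# Tiny assumed example: path a - b - c with the single required pair {a, c}.
edges = [frozenset({'a', 'b'}), frozenset({'b', 'c'})]
cuts = steiner_to_hitting_set(['a', 'b', 'c'], edges, [{'a', 'c'}])
print(len(cuts))   # 4 separating cuts: {a}, {c}, {a,b}, {b,c}
```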

Chapter 7
Relation between the primal-dual algorithm
of Steiner Tree/Forest Problem and Hitting
Set Problem
In the Steiner tree problem, the collection $\Gamma_k$ of minimal violated sets, or active sets,
at iteration k becomes

$$\Gamma_k = \{ T_i = \delta(S_i) : S_i \subseteq V,\ S_i \text{ is a connected component at iteration } k,\ f(S_i) = 1 \}.$$

So at each iteration k only the dual variables of the connected components $S_i$ with
$f(S_i) = 1$ (that is, $\exists\, u \in S_i,\ v \notin S_i$ with $u, v \in R_j$ for some
$j = 1, \dots, m$) are incremented.

Chapter 8
Analysis of Steiner Tree/Forest Algorithm
8.1 Analysis of the Approximation Ratio
Let A be the final solution. The cost of the algorithm is

$$C = \sum_{e \in A} c_e = \sum_{e \in A} \sum_{i : e \in T_i} y_i \quad \text{(since every picked element is tight)} = \sum_{i} |A \cap T_i| \cdot y_i.$$

Writing $y_i = \sum_{k : T_i \in \Gamma_k} \epsilon_k$ as in Section 4.2.2,

$$C = \sum_{i} |A \cap T_i| \sum_{k : T_i \in \Gamma_k} \epsilon_k = \sum_{k} \Big( \sum_{T_i \in \Gamma_k} |A \cap T_i| \Big) \epsilon_k. \qquad (1)$$

If we want to prove that this is a 2-approximation algorithm, then C should be at most

$$2 \sum_{i} y_i = 2 \sum_{k} |\Gamma_k| \cdot \epsilon_k = \sum_{k} 2\,|\Gamma_k| \cdot \epsilon_k. \qquad (2)$$

Comparing (1) and (2), we want to ensure that

$$\sum_{k} \Big( \sum_{T_i \in \Gamma_k} |A \cap T_i| \Big) \epsilon_k \;\le\; \sum_{k} 2\,|\Gamma_k| \cdot \epsilon_k. \qquad (3)$$

So, if we prove that for each iteration k

$$\sum_{T_i \in \Gamma_k} |A \cap T_i| \;\le\; 2\,|\Gamma_k|,$$

then inequality (3) is ensured. In other words, we want to ensure for each k that

$$\frac{\sum_{T_i \in \Gamma_k} |A \cap T_i|}{|\Gamma_k|} = \frac{\sum_{T_i \in \Gamma_k} |(A_{k-1} \cup B_{k-1}) \cap T_i|}{|\Gamma_k|} = \frac{\sum_{T_i \in \Gamma_k} |B_{k-1} \cap T_i|}{|\Gamma_k|} \;\le\; 2, \qquad \text{since } A_{k-1} \cap T_i = \emptyset,$$

where $B_{k-1} = A \setminus A_{k-1}$ is the minimal augmentation of $A_{k-1}$ (cf. Section 4.1.2).
Every $T_i \in \Gamma_k$ is of the form $T_i = \delta(S_i)$ for a component $S_i \in V_k$, where

$$V_k = \{ S_i \subseteq V \mid S_i \text{ is a connected component at iteration } k \text{ and } f(S_i) = 1 \},$$

and $|\Gamma_k| = |V_k|$, so this is the same as

$$\frac{\sum_{S_i \in V_k} |B_{k-1} \cap \delta(S_i)|}{|V_k|} = \frac{\sum_{S_i \in V_k} \deg_{B_{k-1}}(S_i)}{|V_k|} \;\le\; 2. \qquad (4)$$

So, basically, we want to prove that in each iteration k the average degree, with respect to the
edges of $B_{k-1}$, of the connected components with $f(S_i) = 1$ is at most 2.

8.2 Proof:
At each iteration k we have a graph, say $H_k$, with vertex set V and edge set $A_{k-1}$.
$H_k$ has connected components of two types:
(1) active components $S \subseteq V$ with $f(S) = 1$;
(2) inactive components $S \subseteq V$ with $f(S) = 0$.

We shrink each such component to a single node; this gives another set of nodes, say $V'_k$.
Now consider the graph $H'_k = (V'_k, B_{k-1})$.

Two points to notice about $H'_k$:
(1) $H'_k$ is a forest: since every component of $H_k$ is shrunk to a node of $H'_k$, the graph
$H'_k$ has no cycles.
(2) For every $v \in V'_k$, $\deg(v) = \deg_{B_{k-1}}(S_v)$, where $S_v$ is the component of
$H_k$ corresponding to v.

In $H'_k = (V'_k, B_{k-1})$ we have
(1) active nodes, corresponding to the active components of $H_k$, and
(2) inactive nodes, corresponding to the inactive components of $H_k$.

Now divide $V'_k$ into parts:
(1) $P_k = \{ v \in V'_k \mid$ the corresponding component $S_v$ of $H_k$ satisfies $f(S_v) = 1 \} = \{ v \in V'_k \mid S_v \in V_k \}$;
(2) $Q_k = V'_k \setminus P_k \setminus q_k$, where $q_k$ is the set of isolated nodes of $V'_k \setminus P_k$.

In $H'_k$ we want to prove that the average degree of the active nodes is at most 2:

$$\frac{\sum_{v \in P_k} \deg(v)}{|P_k|} \;\le\; 2.$$

Claim 1: the average degree of the vertices of a forest is at most 2.
Let the number of vertices be n. A forest has at most $n - 1$ edges, so the sum of the degrees is
at most $2(n - 1)$ and the average degree is at most $2(n - 1)/n \le 2$.

Claim 2: $\dfrac{\sum_{v \in Q_k} \deg(v)}{|Q_k|} \ge 2$ in $H'_k$, for each k.
Take $v \in Q_k$. Its degree cannot be 0, since $Q_k$ contains no isolated nodes. Now suppose
$\deg(v) = 1$, and let $S_v$ be the corresponding component of $H_k$; by our division of $V'_k$,
$f(S_v) = 0$. Since $\deg(v) = 1$, there is exactly one edge of $B_{k-1}$ that crosses $S_v$, and
that edge is necessary for the feasibility of A because $B_{k-1}$ is a minimal augmentation.
Removing it would therefore disconnect some required pair, and since it is the only edge of A
crossing $S_v$, that pair is separated by $S_v$: there exist $u \in S_v$ and $w \notin S_v$ with
$r(u, w) = 1$. This implies $f(S_v) = 1$, a contradiction. Hence $\deg(v) \ge 2$ in $H'_k$, i.e.
$\deg_{B_{k-1}}(S_v) \ge 2$. So every $v \in Q_k$ has degree at least 2, and therefore

$$\frac{\sum_{v \in Q_k} \deg(v)}{|Q_k|} \;\ge\; 2.$$

Claim 3: the average degree of the active vertices of $H'_k$ is at most 2, for each k.
Discard the isolated nodes $q_k$: they have degree 0 and touch no edge of $B_{k-1}$, so what
remains is still a forest on the vertex set $P_k \cup Q_k$. By Claim 1 the average degree of all
its vertices is at most 2, and by Claim 2 the average degree of the inactive vertices $Q_k$ is at
least 2. Hence the average degree of the active vertices $P_k$ is at most 2.

Claim 3 directly proves that the average degree of the active sets in each iteration is at most
2, which is inequality (4). Hence the primal-dual algorithm for the Steiner tree / Steiner
forest problem is a 2-approximation algorithm.

Appendix A
Acronyms
FTA Fault tree analysis
MTTF Mean time to failure
RAMS Reliability, availability, maintainability, and safety


Appendix B
Additional Information
This is an example of an Appendix. You can write an Appendix in the same way as a chapter,
with sections, subsections, and so on.

B.1 Introduction
B.1.1 More Details


Bibliography


Curriculum Vitae
Name:

Your Name

Gender:

Female

Date of birth:

1. January 1995

Address:

Nordre gate 1, N7005 Trondheim

Home address:

Kings road 1, 4590 Vladivostok, Senegal

Nationality:

English

Email (1):

your.name@stud.ntnu.no

Email (2):

yourname@gmail.com

Telephone:

+47 12345678

Your picture

Language Skills
Describe which languages you speak and/or write. Specify your skills in each language.

Education
School 1
School 2
School 3

Computer Skills
Program 1

Program 2
Program 3

Experience
Job 1
Job 2
Job 3

Hobbies and Other Activities

