
Optimization Methods: Linear Programming - Revised Simplex Method

Module – 3 Lecture Notes – 5

Revised Simplex Method, Duality and Sensitivity analysis

Introduction

In the previous class, the simplex method was discussed, in which the entire simplex tableau needs to be computed at each iteration. The revised simplex method is an improvement over the simplex method: it is computationally more efficient and accurate.

Duality of an LP problem is a useful property that makes the problem easier to solve in some cases and leads to the dual simplex method. It is also helpful in sensitivity or post-optimality analysis of decision variables.

In this lecture, the revised simplex method, duality of LP problems, the dual simplex method, and sensitivity or post-optimality analysis will be discussed.

Revised Simplex method

The benefit of the revised simplex method is most clearly seen for large LP problems. In the simplex method, the entire simplex tableau is updated at every iteration even though only a small part of it is actually used. The revised simplex method uses exactly the same steps as the simplex method; the only difference lies in the details of computing the entering and departing variables, as explained below.

Let us consider the following LP problem, with general notations, after transforming it to its
standard form and incorporating all required slack, surplus and artificial variables.
$$
\begin{aligned}
(Z)\quad & -c_1 x_1 - c_2 x_2 - c_3 x_3 - \cdots - c_n x_n + Z = 0\\
(x_i)\quad & c_{11} x_1 + c_{12} x_2 + c_{13} x_3 + \cdots + c_{1n} x_n = b_1\\
(x_j)\quad & c_{21} x_1 + c_{22} x_2 + c_{23} x_3 + \cdots + c_{2n} x_n = b_2\\
& \qquad\qquad \vdots\\
(x_l)\quad & c_{m1} x_1 + c_{m2} x_2 + c_{m3} x_3 + \cdots + c_{mn} x_n = b_m
\end{aligned}
$$

As the revised simplex method is mostly beneficial for large LP problems, it will be discussed in the context of matrix notation. The matrix notation of the above LP problem can be expressed as follows:

Minimize z = CᵀX
subject to: AX = B
with: X ≥ 0

where

$$
X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix},\quad
C = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix},\quad
B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{bmatrix},\quad
0 = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad
A = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{m1} & c_{m2} & \cdots & c_{mn} \end{bmatrix}
$$
It can be noted, for subsequent discussion, that the column vector corresponding to a decision variable $x_k$ is $\begin{bmatrix} c_{1k} & c_{2k} & \cdots & c_{mk} \end{bmatrix}^{T}$.
Let X_S be the column vector of basic variables. Also let C_S be the row vector of cost coefficients corresponding to X_S, and S be the basis matrix corresponding to X_S.

1. Selection of entering variable


For each of the nonbasic variables, calculate the coefficient (WP - c), where P is the column vector associated with the nonbasic variable at hand, c is the cost coefficient associated with that nonbasic variable, and W = C_S S⁻¹.
For a maximization (minimization) problem, the nonbasic variable having the most negative (highest positive) coefficient, as calculated above, is the entering variable.

2. Selection of departing variable


a. A new column vector U is calculated as U = S⁻¹B.
b. Corresponding to the entering variable, another vector V is calculated as V = S⁻¹P, where P is the column vector corresponding to the entering variable.
c. It may be noted that the lengths of both U and V are the same (= m). For i = 1, ..., m, the ratios U(i)/V(i) are calculated, provided V(i) > 0. The index i = r for which the ratio is the least is noted. The r th basic variable of the current basis is the departing variable.
   If it is found that V(i) ≤ 0 for all i, then further calculation is stopped, concluding that a bounded solution does not exist for the LP problem at hand.

3. Update to new basis

The old basis S is updated to the new basis S_new as S_new = (E S⁻¹)⁻¹, i.e., S_new⁻¹ = E S⁻¹, where

$$
E = \begin{bmatrix}
1 & 0 & \cdots & h_1 & \cdots & 0 & 0\\
0 & 1 & \cdots & h_2 & \cdots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots & & \vdots & \vdots\\
0 & 0 & \cdots & h_r & \cdots & 0 & 0\\
\vdots & \vdots & & \vdots & \ddots & \vdots & \vdots\\
0 & 0 & \cdots & h_{m-1} & \cdots & 1 & 0\\
0 & 0 & \cdots & h_m & \cdots & 0 & 1
\end{bmatrix}
$$

with the h's occupying the r th column, and

$$
h_i = \begin{cases} -\dfrac{V(i)}{V(r)} & \text{for } i \neq r\\[1ex] \dfrac{1}{V(r)} & \text{for } i = r \end{cases}
$$

S is replaced by S_new and steps 1 through 3 are repeated. If all the coefficients calculated in step 1, i.e., (WP - c), are positive (negative) in the case of a maximization (minimization) problem, then the optimum solution has been reached, and the optimal solution is

X_S = S⁻¹B and z = C_S X_S

Duality of LP problems

Each LP problem (called the Primal in this context) is associated with a counterpart known as the Dual LP problem. Instead of the primal, solving the dual LP problem is sometimes easier when: a) the dual has fewer constraints than the primal (the time required for solving LP problems is directly affected by the number of constraints, i.e., by the number of iterations necessary to converge to an optimum solution, which in the simplex method usually ranges from 1.5 to 3 times the number of structural constraints in the problem); and b) the dual involves maximization of an objective function (it may then be possible to avoid the artificial variables that would otherwise be needed in a primal minimization problem).
The dual LP problem can be constructed by defining a new decision variable for each constraint in the primal problem and a new constraint for each variable in the primal. The coefficient of the j th variable in the dual's objective function is the j th component of the primal's requirements vector (the right-hand-side values of the constraints in the primal). The dual's requirements vector consists of the coefficients of the decision variables in the primal objective function. The coefficients of each constraint in the dual (i.e., its row vectors) are the column vectors associated with the corresponding decision variable in the coefficient matrix of the primal problem. In other words, the coefficient matrix of the dual is the transpose of the primal's coefficient matrix. Finally, maximizing the primal problem is equivalent to minimizing the dual, and their respective optimal values are exactly equal.
When a primal constraint is a less-than-or-equal-to inequality, the corresponding dual variable is non-negative. An equality constraint in the primal problem means that the corresponding dual variable is unrestricted in sign. Obviously, the dual of the dual is the primal. In summary, the following relationships exist between the primal and the dual.

Primal                                         Dual

Maximization                                   Minimization
Minimization                                   Maximization
i th variable                                  i th constraint
j th constraint                                j th variable
i th variable with x_i ≥ 0                     Inequality sign of the i th constraint:
                                               ≤ if the dual is a maximization,
                                               ≥ if the dual is a minimization
i th variable unrestricted                     i th constraint with = sign
j th constraint with = sign                    j th variable unrestricted
RHS of the j th constraint                     Cost coefficient associated with the
                                               j th variable in the objective function
Cost coefficient associated with the           RHS of the i th constraint
i th variable in the objective function

For better understanding and quick reference, the construction is summarised below for a general primal-dual pair.

Primal problem:

    Maximize Z = c1x1 + c2x2 + ... + cnxn

    subject to    c11x1 + c12x2 + ... + c1nxn = b1      (dual variable y1)
                  c21x1 + c22x2 + ... + c2nxn ≤ b2      (dual variable y2)
                      ...
                  cm1x1 + cm2x2 + ... + cmnxn ≤ bm      (dual variable ym)
                  x1 ≥ 0, x2 unrestricted, ..., xn ≥ 0

The right-hand sides b1, b2, ..., bm become the cost coefficients of the dual objective function, whose sense is opposite to that of the primal, i.e., Minimize b1y1 + b2y2 + ... + bmym. The coefficients of x1 (the first column of the primal coefficient matrix) form the first dual constraint; its right-hand side is the cost coefficient c1, and its sign follows from the sign restriction on x1 (x1 ≥ 0 gives ≥ for a minimization dual): c11y1 + c21y2 + ... + cm1ym ≥ c1. Similarly, x2 being unrestricted makes the second dual constraint an equality: c12y1 + c22y2 + ... + cm2ym = c2. The first primal constraint being an equality makes y1 unrestricted in sign, while the ≤ constraints give y2 ≥ 0, ..., ym ≥ 0.

Dual problem:

    Minimize Z' = b1y1 + b2y2 + ... + bmym

    subject to    c11y1 + c21y2 + ... + cm1ym ≥ c1
                  c12y1 + c22y2 + ... + cm2ym = c2
                      ...
                  c1ny1 + c2ny2 + ... + cmnym ≥ cn
                  y1 unrestricted, y2 ≥ 0, ..., ym ≥ 0

It may be noted that, before finding its dual, all the constraints should be transformed to 'less-than-equal-to' or 'equal-to' type for a maximization problem, and to 'greater-than-equal-to' or 'equal-to' type for a minimization problem. This can be done by multiplying both sides of a constraint by -1, so that the inequality sign gets reversed.

The construction of a dual is illustrated with the following example.

Primal:
    Maximize Z = 4x1 + 3x2
    subject to    x1 + (3/2)x2 ≤ 6000
                  x1 - x2 ≥ 2000
                  x1 ≤ 4000
                  x1 unrestricted, x2 ≥ 0

Dual:
    Minimize Z' = 6000y1 - 2000y2 + 4000y3
    subject to    y1 - y2 + y3 = 4
                  (3/2)y1 + y2 ≥ 3
                  y1 ≥ 0, y2 ≥ 0, y3 ≥ 0

It may be noted that the second constraint in the primal is transformed to -x1 + x2 ≤ -2000 before constructing the dual.
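For the symmetric case (a maximization primal with all constraints of ≤ type and all variables non-negative), the mechanics of constructing the dual reduce to transposing the coefficient matrix and swapping the roles of the objective coefficients and the right-hand sides. The following short Python sketch is purely illustrative (the function name and the small numerical example are hypothetical, not from the notes); equality constraints and unrestricted variables would need the additional rules described above.

    import numpy as np

    def dual_of(c, A, b):
        """Dual of the symmetric primal:  maximize c^T x  s.t.  A x <= b, x >= 0.
        Returns the data of:  minimize b^T y  s.t.  A^T y >= c, y >= 0."""
        c, A, b = np.asarray(c), np.asarray(A), np.asarray(b)
        return b, A.T, c      # dual objective coeffs, dual constraint matrix, dual RHS

    # Hypothetical example:
    #   maximize 3x1 + 5x2  s.t.  x1 <= 4,  2x2 <= 12,  3x1 + 2x2 <= 18,  x >= 0
    dual_c, dual_A, dual_b = dual_of([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
    # dual: minimize 4y1 + 12y2 + 18y3
    #       s.t.  y1 + 3y3 >= 3,  2y2 + 2y3 >= 5,  y1, y2, y3 >= 0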

Primal-Dual relationships

The following points are important to note regarding the primal-dual relationship:

1. If one problem (either the primal or the dual) has an optimal feasible solution, the other problem also has an optimal feasible solution. The optimal objective function value is the same for both the primal and the dual.

2. If one problem has no solution (infeasible), the other problem is either infeasible or
unbounded.

3. If one problem is unbounded, the other problem is infeasible.

Dual Simplex Method

Computationally, the dual simplex method is the same as the simplex method; however, their approaches differ. The simplex method starts with a nonoptimal but feasible solution, whereas the dual simplex method starts with an optimal but infeasible solution. The simplex method maintains feasibility during successive iterations, whereas the dual simplex method maintains optimality. The steps involved in the dual simplex method are:

1. All the constraints (except those with equality (=) sign) are modified to 'less-than-equal-to' (≤) form. Constraints with 'greater-than-equal-to' (≥) sign are multiplied by -1 throughout so that the inequality sign gets reversed. Finally, all these constraints are transformed to equality (=) form by introducing the required slack variables.

2. The modified problem, as in step one, is expressed in the form of a simplex tableau. If all the cost coefficients are positive (i.e., the optimality condition is satisfied) and one or more basic variables have negative values (i.e., an infeasible solution), then the dual simplex method is applicable.

3. Selection of exiting variable: The basic variable with the most negative value is the exiting variable. If there are two candidates for the exiting variable, either one is selected. The row of the selected exiting variable is marked as the pivotal row.

4. Selection of entering variable: The cost coefficients corresponding to all the negative elements of the pivotal row are identified. Their ratios are calculated after changing the sign of the elements of the pivotal row, i.e.,

   ratio = (cost coefficient) / (-1 × element of pivotal row)

   The column corresponding to the minimum ratio is identified as the pivotal column, and the associated decision variable is the entering variable.

5. Pivotal operation: The pivotal operation is exactly the same as in the simplex method, considering the pivotal element to be the element at the intersection of the pivotal row and the pivotal column.

6. Check for optimality: If all the basic variables have nonnegative values then the
optimum solution is reached. Otherwise, Steps 3 to 5 are repeated until the optimum is
reached.
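Steps 3 to 6 can be condensed into a short tableau-based sketch. The following Python/NumPy code is a minimal illustration only (the function and variable names are not from the notes); it assumes the tableau has already been set up as in steps 1 and 2, with the Z row stored as row 0 and the Z column omitted.

    import numpy as np

    def dual_simplex(T, basis):
        """Dual simplex iterations on a tableau T of shape (m+1, n+1):
        row 0 is the Z row, rows 1..m are the constraint rows, and the last
        column holds the right-hand sides (br). `basis` lists the column index
        of the basic variable of each constraint row. The starting tableau is
        assumed to satisfy the optimality condition but to have one or more
        negative br values (infeasible)."""
        while True:
            b = T[1:, -1]
            if np.all(b >= 0):                      # step 6: feasible, hence optimal
                return T, basis
            r = int(np.argmin(b)) + 1               # step 3: most negative br -> pivotal row
            row = T[r, :-1]
            cols = np.where(row < 0)[0]             # step 4: negative pivotal-row elements
            if cols.size == 0:
                raise ValueError("no feasible solution exists")
            ratios = T[0, cols] / row[cols]         # Z-row coeff / pivotal-row element (both <= 0)
            s = cols[int(np.argmin(ratios))]        # pivotal column -> entering variable
            T[r, :] /= T[r, s]                      # step 5: pivotal operation
            for i in range(T.shape[0]):
                if i != r:
                    T[i, :] -= T[i, s] * T[r, :]
            basis[r - 1] = s                        # entering variable replaces the exiting one

    # For the example that follows, basis = [2, 3, 4, 5] (x3, x4, x5, x6) and T is the
    # iteration-1 tableau below (as a float array, without the Z column).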

Consider the following problem:

Minimize Z = 2x1 + x2
subject to    x1 ≥ 2
              3x1 + 4x2 ≤ 24
              4x1 + 3x2 ≥ 12
              -x1 + 2x2 ≥ 1

By introducing the slack variables (after rewriting all the constraints in ≤ form), the problem is reformulated with equality constraints as follows:

Minimize Z = 2x1 + x2
subject to    -x1 + x3 = -2
              3x1 + 4x2 + x4 = 24
              -4x1 - 3x2 + x5 = -12
              x1 - 2x2 + x6 = -1

Expressing the problem in the tableau form:

Iteration 1:

Basis    Z      x1      x2      x3      x4      x5      x6       br
Z        1      -2      -1       0       0       0       0        0
x3       0      -1       0       1       0       0       0       -2
x4       0       3       4       0       1       0       0       24
x5       0      -4      -3       0       0       1       0      -12
x6       0       1      -2       0       0       0       1       -1
Ratios          0.5     1/3     --      --      --      --

Pivotal row: x5 row; pivotal column: x2; pivotal element: -3.

Tableaus for successive iterations are shown below. The pivotal row, pivotal column and pivotal element for each tableau are indicated below it.

Iteration 2:

Basis    Z      x1      x2      x3      x4      x5      x6       br
Z        1     -2/3      0       0       0     -1/3      0        4
x3       0      -1       0       1       0       0       0       -2
x4       0     -7/3      0       0       1      4/3      0        8
x2       0      4/3      1       0       0     -1/3      0        4
x6       0     11/3      0       0       0     -2/3      1        7
Ratios          2/3     --      --      --      --      --

Pivotal row: x3 row; pivotal column: x1; pivotal element: -1.

Iteration 3:

Basis    Z      x1      x2      x3      x4      x5      x6       br
Z        1       0       0     -2/3      0     -1/3      0     16/3
x1       0       1       0      -1       0       0       0        2
x4       0       0       0     -7/3      1      4/3      0     38/3
x2       0       0       1      4/3      0     -1/3      0      4/3
x6       0       0       0     11/3      0     -2/3      1     -1/3
Ratios          --      --      --      --      0.5     --

Pivotal row: x6 row; pivotal column: x5; pivotal element: -2/3.

Iteration 4:

Basis    Z      x1      x2      x3      x4      x5      x6       br
Z        1       0       0     -2.5      0       0     -0.5     5.5
x1       0       1       0      -1       0       0       0        2
x4       0       0       0       5       1       0       2       12
x2       0       0       1     -0.5      0       0     -0.5     1.5
x5       0       0       0     -5.5      0       1     -1.5     0.5
Ratios          --      --      --      --      --      --

As all the br are positive, the optimum solution is reached. Thus, the optimal solution is Z = 5.5 with x1 = 2 and x2 = 1.5.
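As a quick cross-check (not part of the original notes), the same problem can be handed to an off-the-shelf LP solver. The sketch below assumes SciPy is available; it should reproduce Z = 5.5 at (x1, x2) = (2, 1.5).

    from scipy.optimize import linprog

    # Minimize Z = 2x1 + x2 with all constraints rewritten in <= form for linprog.
    c = [2, 1]
    A_ub = [[-1,  0],   #  x1 >= 2         ->  -x1        <= -2
            [ 3,  4],   #  3x1 + 4x2 <= 24
            [-4, -3],   #  4x1 + 3x2 >= 12 ->  -4x1 - 3x2 <= -12
            [ 1, -2]]   # -x1 + 2x2 >= 1   ->   x1 - 2x2  <= -1
    b_ub = [-2, 24, -12, -1]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
    print(res.fun, res.x)   # expected: 5.5 and [2.0, 1.5]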

Solution of Dual from Final Simplex Tableau of Primal


Primal:
    Maximize Z = 4x1 - x2 + 2x3
    subject to    2x1 + x2 + 2x3 ≤ 6
                  x1 - 4x2 + 2x3 ≤ 0
                  5x1 - 2x2 - 2x3 ≤ 4
                  x1, x2, x3 ≥ 0

Dual:
    Minimize Z' = 6y1 + 0y2 + 4y3
    subject to    2y1 + y2 + 5y3 ≥ 4
                  y1 - 4y2 - 2y3 ≥ -1
                  2y1 + 2y2 - 2y3 ≥ 2
                  y1, y2, y3 ≥ 0
Final simplex tableau of primal:

[Final tableau figure not reproduced; in it, the dual variables y1, y2, y3 and the dual objective Z' are marked against the Z row of the primal's final simplex tableau.]
As illustrated above, the solution of the dual can be read off from the Z row of the primal's final tableau, against the coefficients of the slack variables of the respective constraints: y1 = 1, y2 = 1/3, y3 = 1/3, and Z' = Z = 22/3.
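Again as a cross-check (not part of the notes), both problems can be solved directly; the sketch below assumes SciPy is available and should show the primal and dual optima coinciding at 22/3.

    from scipy.optimize import linprog

    # Primal: maximize 4x1 - x2 + 2x3 (handled as minimizing the negative)
    primal = linprog(c=[-4, 1, -2],
                     A_ub=[[2, 1, 2], [1, -4, 2], [5, -2, -2]], b_ub=[6, 0, 4],
                     bounds=[(0, None)] * 3, method="highs")

    # Dual: minimize 6y1 + 0y2 + 4y3 subject to A^T y >= c, written as -A^T y <= -c
    dual = linprog(c=[6, 0, 4],
                   A_ub=[[-2, -1, -5], [-1, 4, 2], [-2, -2, 2]], b_ub=[-4, 1, -2],
                   bounds=[(0, None)] * 3, method="highs")

    print(-primal.fun, dual.fun)   # both should equal 22/3 = 7.333...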
3

Sensitivity or post optimality analysis

A dual variable, associated with a constraint, indicates the change in the optimum Z value for a small change in the RHS of that constraint. Thus,

ΔZ = y_i Δb_i

where y_i is the dual variable associated with the i th constraint, Δb_i is the small change in the RHS of the i th constraint, and ΔZ is the change in the objective function owing to Δb_i.

Let, for an LP problem, the i th constraint be 2x1 + x2 ≤ 50 and the optimum value of the objective function be 250. What happens if the RHS of the i th constraint changes to 55, i.e., the i th constraint becomes 2x1 + x2 ≤ 55? To answer this, let the dual variable associated with the i th constraint be y_i, whose optimum value is 2.5 (say). Thus, Δb_i = 55 - 50 = 5 and y_i = 2.5. So, ΔZ = y_i Δb_i = 2.5 × 5 = 12.5, and the revised optimum value of the objective function is (250 + 12.5) = 262.5.
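The arithmetic of this example can be written out in a couple of illustrative lines of Python (the variable names are hypothetical):

    y_i = 2.5              # optimal dual variable of the i-th constraint (given)
    delta_b = 55 - 50      # change in the RHS of that constraint
    delta_Z = y_i * delta_b
    print(delta_Z, 250 + delta_Z)   # 12.5 and 262.5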
