
Linear Programming Problem

Optimization Problem
Problems which seek to maximize or minimize an objective function of a finite number of variables subject to certain constraints are called optimization problems.
Example:

Maximize z = c1x1 + c2x2 + ... + cnxn

Subject to:

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm

xj ≥ 0 ∀ j (j = 1, 2, ..., n)
Feasible Solution
Any solution of a linear programming problem that satisfies all the constraints of the model is called a feasible solution. Consider the problem

Maximize z = c1x1 + c2x2 + ... + cnxn

Subject to:

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm

xj ≥ 0 ∀ j (j = 1, 2, ..., n)

The solution x1 = s1, x2 = s2, ..., xn = sn will be a feasible solution of the given problem if it does not violate any of the constraints.
Programming Problem
Programming problems always deal with determining optimal allocations of limited resources to meet given objectives. The constraints on the limited resources are given by linear or non-linear inequalities or equations. The given objective may be to maximize or minimize a certain function of a finite number of variables.
Linear Programming and Linear Programming
Problem
Suppose we are given m linear inequalities or equations in n unknown variables x1, x2, ..., xn, and we wish to find non-negative values of these variables which satisfy the constraints and maximize or minimize some linear function of these variables (the objective function). This procedure is known as linear programming, and the problem so described is known as a linear programming problem.
Mathematically, suppose we have m linear inequalities or equations in n unknown variables x1, x2, ..., xn of the form

Σ_{j=1}^{n} aij xj {≤, =, ≥} bi   (i = 1, 2, ..., m)

where for each constraint one and only one of the signs ≤, =, ≥ holds. We wish to find the non-negative values of xj, j = 1, 2, ..., n which satisfy the constraints and maximize or minimize the linear function z = Σ_{j=1}^{n} cj xj. Here aij, bi and cj are known constants.

Using the short-cut (compact) notation, the linear programming problem can be written as

Optimize (maximize or minimize) z = Σ_{j=1}^{n} cj xj

Subject to

Σ_{j=1}^{n} aij xj {≤, =, ≥} bi   (i = 1, 2, ..., m)

xj ≥ 0 ∀ j (j = 1, 2, ..., n)
Application of LPM
(i) Linear programming is widely applicable in business and economic activities.
(ii) It is also applicable in government, military and industrial operations.
(iii) It is also extensively used in development and distribution planning.

Objective Function
In a linear programming problem, a linear function of the type z = Σ_{j=1}^{n} cj xj of the variables xj, j = 1, 2, ..., n which is to be optimized is called the objective function. An objective function contains no constant term; i.e., we cannot write the objective function in the form

z = Σ_{j=1}^{n} cj xj + k

Example of Linear Programming Problem:


Suppose m types of machines A1, A2, ..., Am are producing n products P1, P2, ..., Pn. Let (i) aij be the hours required on the ith machine (i = 1, 2, ..., m) to produce one unit of the jth product (j = 1, 2, ..., n), (ii) bi (i = 1, 2, ..., m) be the total available hours per week for machine i, and (iii) cj be the per-unit profit on the sale of the jth product.
Machines       P1     P2     ...    Pn     Total Available Time
A1             a11    a12    ...    a1n    b1
A2             a21    a22    ...    a2n    b2
...            ...    ...           ...    ...
Am             am1    am2    ...    amn    bm
Unit Profits   c1     c2     ...    cn

Construct the LPP


Suppose xj (j = 1, 2, ..., n) is the number of units of the jth product produced per week. The objective function is given by

z = c1x1 + c2x2 + ... + cnxn

The constraints are given by

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm

Since the amount of production cannot be negative, xj ≥ 0 (j = 1, 2, ..., n).

The weekly profit is given by z = c1x1 + c2x2 + ... + cnxn. Now we wish to determine the values of the variables xj for which all the constraints are satisfied and the objective function is maximized. That is,

Maximize z = c1x1 + c2x2 + ... + cnxn

Subject to

a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm

and xj ≥ 0 (j = 1, 2, ..., n)
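As a concrete illustration of this product-mix model, the sketch below solves a small instance with SciPy's linprog. The machine-hour matrix, weekly capacities and unit profits are invented numbers, not data from the text; since linprog minimizes, the profit vector is negated.

```python
# Product-mix sketch: maximize c.x subject to A x <= b, x >= 0,
# using illustrative (made-up) data for 2 machines and 3 products.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 3.0, 1.0],   # hours on machine A1 per unit of P1, P2, P3
              [4.0, 1.0, 2.0]])  # hours on machine A2 per unit of P1, P2, P3
b = np.array([40.0, 60.0])       # b_i: available hours per week on A1, A2
c = np.array([5.0, 4.0, 3.0])    # c_j: unit profit of P1, P2, P3

# linprog minimizes, so maximize z = c.x by minimizing -c.x
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")
print("optimal weekly production x:", res.x)
print("maximum weekly profit z    :", -res.fun)
```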
Formulation of Linear Programming Problem
(i) Transportation Problem
Suppose given amounts of a uniform product are available at each of a number of origins, say warehouses. We wish to send specified amounts of the product to each of a number of different destinations, say retail stores. We are interested in determining the minimum-cost routing from the warehouses to the retail stores.
Let us define,
m = no. of warehouses
n = no. of retail stores

(Figure: a shipping network with m origins having supplies a1, a2, ..., am and n destinations having demands b1, b2, ..., bn, where xij units are shipped from origin i to destination j at unit cost cij.)

xij = the amount of product shipped from the ith warehouse to the jth retail store.
Since negative amounts cannot be shipped, we have xij ≥ 0 ∀ i, j.
ai = the total no. of units of the product available for shipment at the ith warehouse (i = 1, 2, ..., m).
bj = the no. of units of the product required at the jth retail store.
Since we cannot supply more than the available amount of the product from the ith warehouse to the different retail stores, we have

xi1 + xi2 + ... + xin ≤ ai ;  i = 1, 2, ..., m

We must supply each retail store with the no. of units desired; the total amount received at any retail store is the sum of the amounts received from each warehouse. That is

x1j + x2j + ... + xmj = bj ;  j = 1, 2, ..., n


The needs of the retail stores can be satisfied only if

Σ_{i=1}^{m} ai ≥ Σ_{j=1}^{n} bj

Let us define cij as the per-unit cost of shipping from the ith warehouse to the jth retail store; then the total cost of shipping is given by

z = Σ_{i=1}^{m} Σ_{j=1}^{n} cij xij

We wish to determine the xij which minimize the cost

z = Σ_{i=1}^{m} Σ_{j=1}^{n} cij xij

subject to the constraints

xi1 + xi2 + ... + xin ≤ ai ;  i = 1, 2, ..., m
x1j + x2j + ... + xmj = bj ;  j = 1, 2, ..., n
xij ≥ 0 ∀ i, j

It is a linear programming problem in mn variables with (m+n) constraints.
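A small numerical sketch of this transportation model is given below, assuming SciPy is available. The supplies, demands and unit costs are invented for illustration; the mn variables xij are stored as one flattened vector, with one ≤ row per warehouse and one = row per store.

```python
# Transportation sketch: 2 warehouses, 3 retail stores, illustrative data.
import numpy as np
from scipy.optimize import linprog

a = np.array([30.0, 40.0])          # a_i: supply available at each warehouse
d = np.array([20.0, 25.0, 25.0])    # b_j: demand required at each store
C = np.array([[8.0, 6.0, 10.0],     # c_ij: unit shipping costs
              [9.0, 12.0, 13.0]])
m, n = C.shape

A_ub = np.zeros((m, m * n))         # supply rows: sum_j x_ij <= a_i
for i in range(m):
    A_ub[i, i * n:(i + 1) * n] = 1.0

A_eq = np.zeros((n, m * n))         # demand rows: sum_i x_ij = b_j
for j in range(n):
    A_eq[j, j::n] = 1.0

res = linprog(C.ravel(), A_ub=A_ub, b_ub=a, A_eq=A_eq, b_eq=d,
              bounds=[(0, None)] * (m * n), method="highs")
print("minimum shipping cost:", res.fun)
print("shipping plan x_ij:\n", res.x.reshape(m, n))
```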
(2) The Diet Problem
Suppose we are given the nutrient content of a number of different foods, the minimum daily requirement for each nutrient, and the quantity of each nutrient contained in one ounce of each food being considered. Since we know the cost per ounce of each food, the problem is to determine the minimum-cost diet that satisfies the minimum daily requirements of the nutrients.
Let us define
m = the no. of nutrients
n = the no. of foods

aij = the quantity (mg) of the ith nutrient per ounce (oz) of the jth food
bi = the minimum daily quantity of the ith nutrient required
cj = the cost per oz of the jth food
xj = the quantity (oz) of the jth food to be purchased

The total amount of the ith nutrient contained in all the purchased foods cannot be less than the minimum daily requirement. Therefore we have

ai1x1 + ai2x2 + ... + ainxn = Σ_{j=1}^{n} aij xj ≥ bi ;  i = 1, 2, ..., m

The total cost of all purchased foods is given by

z = Σ_{j=1}^{n} cj xj

Now our problem is to minimize the cost z = Σ_{j=1}^{n} cj xj subject to the constraints

ai1x1 + ai2x2 + ... + ainxn = Σ_{j=1}^{n} aij xj ≥ bi ;  i = 1, 2, ..., m

and xj ≥ 0.

This is a linear programming problem.
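As a rough sketch, the diet model can be handed to the same solver. The nutrient contents, daily requirements and food costs below are illustrative numbers only; because linprog only accepts ≤ rows, each ≥ requirement is multiplied by −1.

```python
# Diet sketch: 2 nutrients, 3 foods, illustrative data.
import numpy as np
from scipy.optimize import linprog

A = np.array([[2.0, 1.0, 3.0],   # a_ij: mg of nutrient i per oz of food j
              [1.0, 2.0, 1.0]])
b = np.array([10.0, 8.0])        # b_i: minimum daily requirement of nutrient i
c = np.array([0.6, 0.4, 0.9])    # c_j: cost per oz of food j

# A x >= b  is rewritten as  -A x <= -b  for linprog
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 3, method="highs")
print("ounces of each food to purchase:", res.x)
print("minimum daily cost             :", res.fun)
```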
Feasible Solution
Any set of values of the variables xj which satisfies the constraints

Σ_{j=1}^{n} aij xj {≤, =, ≥} bi ,

where the aij and bi are constants, is called a solution of the linear programming problem, and any solution which also satisfies the non-negativity restrictions, i.e. xj ≥ 0, is called a feasible solution.
Optimal Feasible Solution
In a linear programming problem there is, in general, an infinite number of feasible solutions, and out of all these solutions we must find the one which optimizes the objective function z = Σ_{j=1}^{n} cj xj. Such a solution is called an optimal feasible solution.

In other words, any feasible solution which satisfies the following conditions is called an optimal feasible solution:

(i) Σ_{j=1}^{n} aij xj {≤, =, ≥} bi ,
(ii) xj ≥ 0 ,
(iii) it optimizes the objective function z = Σ_{j=1}^{n} cj xj.

Slack and Surplus Variables


In LP problems, generally the constraints are not all equations. Since equations are easier to handle than inequalities, a simple conversion is needed to turn the inequalities into equalities. Let us consider first the constraints having less-than-or-equal signs (≤). Any constraint of this category can be written as

ah1x1 + ah2x2 + ... + ahnxn ≤ bh        (1)

Let us introduce a new variable x_{n+h} ≥ 0 defined by

x_{n+h} = bh − Σ_{j=1}^{n} ahj xj ≥ 0 ,

which converts the inequality into the equality

ah1x1 + ah2x2 + ... + ahnxn + x_{n+h} = bh        (2)

The new variable x_{n+h} is the difference between the amount of the resource available and the amount actually used; it is called a slack variable.
Next we consider the constraints having greater-than-or-equal signs (≥). A typical inequality in this set can be written as

ak1x1 + ak2x2 + ... + aknxn ≥ bk        (3)

Introducing a new variable x_{n+k} ≥ 0, the inequality can be written as the equality

ak1x1 + ak2x2 + ... + aknxn − x_{n+k} = bk        (4)

Here the variable x_{n+k} is called a surplus variable, because it is the difference between the amount of the resource used and the minimum amount required.

Therefore, when using the algebraic method for solving a linear programming problem, the problem with the original constraints can be transformed into an LP problem whose constraints form a system of simultaneous linear equations by using slack and surplus variables.
Example: Consider the LP problem

Min  z = −x1 − 3x2
s.t.  x1 − 2x2 ≤ 4
     −x1 + x2 ≥ 3
      x1, x2 ≥ 0

Now, introducing two new variables x3 and x4, the problem can be written as

Min  z = −x1 − 3x2 + 0·x3 + 0·x4
s.t.  x1 − 2x2 + x3 = 4
     −x1 + x2 − x4 = 3
      x1, x2, x3, x4 ≥ 0

Here x3 is the slack variable and x4 is the surplus variable.
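The same conversion can be written as a small routine. In the sketch below the helper name to_standard_form is ours, not from the text; it appends one column per constraint, +1 for a slack (≤ row) and −1 for a surplus (≥ row), and the sample data reproduce the two constraints of this example.

```python
# Convert (row, sense, rhs) constraints into the equality system
# A_std x = b with slack/surplus columns appended.
import numpy as np

def to_standard_form(rows, senses, rhs):
    rows = np.asarray(rows, dtype=float)
    m, _ = rows.shape
    extra = np.zeros((m, m))
    for i, sense in enumerate(senses):
        extra[i, i] = 1.0 if sense == "<=" else -1.0   # slack: +1, surplus: -1
    return np.hstack([rows, extra]), np.asarray(rhs, dtype=float)

A_std, b = to_standard_form([[1, -2], [-1, 1]], ["<=", ">="], [4, 3])
print(A_std)   # columns 3 and 4 carry the slack x3 and the surplus x4
print(b)
```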
Effect of Introducing Slack and Surplus Variables
Suppose we have a linear programming problem P1 such that

Optimize Z = c1x1 + c2x2 + ... + cnxn        (1)

Subject to the condition

ah1x1 + ah2x2 + ... + ahnxn {≤, =, ≥} bh ,  h = 1, 2, ..., m        (2)

where one and only one of the signs in the bracket holds for each constraint.

The problem is converted to another linear programming problem P2 such that

Z = c1x1 + c2x2 + ... + cnxn + 0·x_{n+1} + ... + 0·x_{n+m}        (3)

Subject to the condition

ah1x1 + ah2x2 + ... + ahnxn ± x_{n+h} = bh ,  h = 1, 2, ..., m,        (4)

that is, AX = b, where A = (aij) is the augmented coefficient matrix (the coefficients of the slack and surplus variables being ±1 or 0) and aj denotes the jth column of A.

We claim that optimizing (3) subject to (4) with xj ≥ 0 is completely equivalent to optimizing (1) subject to (2) with xj ≥ 0.
To prove this, we first note that if we have any feasible solution to the original constraints, then our method of introducing slack or surplus variables will yield a set of non-negative slack or surplus variables such that equation (4) is satisfied with all variables non-negative. Conversely, if we have a feasible solution to (4) with all variables non-negative, then its first n components will yield a feasible solution to (2). Thus there exists a one-to-one correspondence between the feasible solutions to the original set of constraints and the feasible solutions to the set of simultaneous linear equations.
Now if X* ≥ 0 is an optimal feasible solution to the linear programming problem P2, then its first n components (x1*, x2*, ..., xn*) form an optimal solution to P1. Conversely, by annexing the slack and surplus variables to any optimal solution to P1 we obtain an optimal solution to P2.

Therefore, we may conclude that if slack and surplus variables having zero cost are introduced to convert the original set of constraints into a set of simultaneous linear equations, the resulting problem is equivalent to the original problem.
Example: Consider the following LP problem.

Maximize z = 3x1 + 2x2
Subject to the constraints
x1 + x2 ≤ 6
x2 ≤ 3
x1, x2 ≥ 0

Find the basic solutions and the optimal feasible solution.



Solution. By introducing slack variables x3 and x4, the problem is put into the following standard form:

x1 + x2 + x3 = 6
x2 + x4 = 3
x1, x2, x3, x4 ≥ 0

So the constraint matrix A is given by

A = [1 1 1 0; 0 1 0 1] = (a1, a2, a3, a4),   b = (6, 3)ᵀ,   Rank(A) = 2.

Therefore, the basic solutions correspond to finding a 2×2 basis B. Following are the possible ways of extracting B out of A:

(i) B = (a1, a2) = [1 1; 0 1],  B⁻¹ = [1 −1; 0 1],
    xB = (x1, x2)ᵀ = B⁻¹b = (3, 3)ᵀ,  xN = (x3, x4)ᵀ = (0, 0)ᵀ.

(ii) B = (a1, a3) = [1 1; 0 0].  Since |B| = 0, it is not possible to find B⁻¹ and hence xB.

(iii) B = (a1, a4) = [1 0; 0 1],  B⁻¹ = [1 0; 0 1],
    xB = (x1, x4)ᵀ = B⁻¹b = (6, 3)ᵀ,  xN = (x2, x3)ᵀ = (0, 0)ᵀ.

(iv) B = (a2, a3) = [1 1; 1 0],  B⁻¹ = [0 1; 1 −1],
    xB = (x2, x3)ᵀ = B⁻¹b = (3, 3)ᵀ,  xN = (x1, x4)ᵀ = (0, 0)ᵀ.

(v) B = (a2, a4) = [1 0; 1 1],  B⁻¹ = [1 0; −1 1],
    xB = (x2, x4)ᵀ = B⁻¹b = (6, −3)ᵀ,  xN = (x1, x3)ᵀ = (0, 0)ᵀ.

(vi) B = (a3, a4) = [1 0; 0 1],  B⁻¹ = [1 0; 0 1],
    xB = (x3, x4)ᵀ = B⁻¹b = (6, 3)ᵀ,  xN = (x1, x2)ᵀ = (0, 0)ᵀ.

Hence we have the following five basic solutions:

x1 = (3, 3, 0, 0)ᵀ,  x2 = (6, 0, 0, 3)ᵀ,  x3 = (0, 3, 3, 0)ᵀ,  x4 = (0, 6, 0, −3)ᵀ,  x5 = (0, 0, 6, 3)ᵀ

All of these except x4 are BFS, because x4 violates the non-negativity restrictions. The BFS belong to a four-dimensional space. Projected onto the (x1, x2)-space, these basic feasible solutions give rise to the following four points:

(3, 3), (6, 0), (0, 3), (0, 0)

From the graphical representation the extreme points are (0, 0), (0, 3), (3, 3) and (6, 0), which are the same as the BFS. Therefore the extreme points are precisely the BFS. The number of BFS is 4, which is less than the 6 possible choices of basis.

The optimal BFS is (6, 0) in the (x1, x2)-space, which gives z = 3·6 + 2·0 = 18.
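The enumeration of bases above can be reproduced with a few lines of NumPy, used here purely as an illustration: every pair of columns of A is tried, the singular pair (a1, a3) is skipped, and each basic solution is flagged as feasible or not.

```python
# Enumerate all 2x2 bases of A, solve B x_B = b, and flag the BFS.
import numpy as np
from itertools import combinations

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])

for cols in combinations(range(4), 2):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                      # (a1, a3) is singular: no basic solution
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)
    status = "BFS" if (x >= 0).all() else "basic but infeasible"
    print(cols, x, status)
```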

Corner Point Feasible Solution


A feasible solution which does not lie on the line segment connecting any other two feasible solutions is called a corner point feasible solution. For example, in the problem above, (3, 3) is a corner point feasible solution.

Properties of Corner Point Feasible Solution


(i) If there is exactly one optimal solution of the linear programming problem, then it is a corner point feasible solution.
(ii) If there are two or more optimal solutions of the given problem, then at least two of them are adjacent corner points.
(iii) In a linear programming problem there is only a finite number of corner points.
(iv) If a corner point feasible solution is better than all of its adjacent corner point solutions, then it is better than all other feasible solutions.
Methods for Solving Linear Programming Problems
(1) Graphical Method
(2) Algebraic Method
(3) Simplex Method
Graphical Method
The graphical method for solving a linear programming problem involves two basic steps.

Step 1: Determine the feasible solution space.
We represent the values of the variable x1 along the X-axis and the corresponding values of the variable x2 along the Y-axis. Any point lying in the first quadrant satisfies x1 ≥ 0 and x2 ≥ 0. The easiest way of accounting for the remaining constraints is to replace the inequalities with equations and then plot the resulting straight lines. Next we consider the effect of each inequality: all the inequality does is divide the (x1, x2)-plane into the two half-planes on either side of the plotted line; one side satisfies the inequality and the other does not. For a constraint of the ≤ type, any point lying on or below the line satisfies the inequality. A convenient procedure for determining the feasible side is to use the origin (0, 0) as a reference point.

Step 2: Determine the optimal solution.

Problem: Find the non-negative values of the variables x1 and x2 which satisfy the constraints

3x1 + 5x2 ≤ 15
5x1 + 2x2 ≤ 10

and which maximize the objective function z = 5x1 + 3x2.
Solution: We introduce an x1x2 coordinate system. Any point lying in the first quadrant has x1, x2 ≥ 0. Now we draw the straight lines 3x1 + 5x2 = 15 and 5x1 + 2x2 = 10 on the graph. Any point lying on or below the line 3x1 + 5x2 = 15 satisfies 3x1 + 5x2 ≤ 15. Similarly, any point lying on or below the line 5x1 + 2x2 = 10 satisfies the constraint 5x1 + 2x2 ≤ 10.

(Figure: the lines 3x1 + 5x2 = 15 and 5x1 + 2x2 = 10 plotted in the (x1, x2)-plane, with the intercept C(0, 5) of the second line marked, together with level lines of the objective z = 5x1 + 3x2.)

So the region bounded by these lines and the axes (region FDOA in the figure) contains the set of points satisfying both constraints and the non-negativity restrictions; the points in this region are the feasible solutions. Now we wish to find the line with the largest value of z = 5x1 + 3x2 which still has at least one point in common with the region of feasible solutions. This line is drawn in the graph above, and it shows that the values of x1 and x2 at the intersection point of the two constraint lines are the required solution: approximately x1 = 1.053 and x2 = 2.368.

From the objective function we then get the maximum value of z, which is given by z = 5 × 1.053 + 3 × 2.368 ≈ 12.37.
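A quick numerical check of this graphical solution, assuming NumPy: the optimum sits where the two constraint lines intersect, so we solve that 2×2 system and compare z = 5x1 + 3x2 over the corner points of the feasible region.

```python
# Verify the graphical solution: intersection of the binding constraints
# and the value of z at each corner of the feasible region.
import numpy as np

corner = np.linalg.solve(np.array([[3.0, 5.0], [5.0, 2.0]]),
                         np.array([15.0, 10.0]))       # ~ (1.053, 2.368)

for p in [(0.0, 0.0), (2.0, 0.0), (0.0, 3.0), tuple(corner)]:
    print(p, "z =", 5 * p[0] + 3 * p[1])                # best z ~ 12.37
```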

Existence of Extreme Basic Feasible Solution:


Reduction of any feasible solution to a basic feasible
solution
Let us consider a linear programming problem with m linear equations in n unknowns such that

AX = b,   X ≥ 0,

which has at least one basic feasible solution. Without loss of generality suppose that Rank(A) = m, and let X = (x1, x2, ..., xn) be a feasible solution. Further suppose that x1, x2, ..., xp > 0 and that x_{p+1}, x_{p+2}, ..., xn = 0, and let a1, a2, ..., ap be the columns of A corresponding to the respective variables x1, x2, ..., xp.

If a1, a2, ..., ap are linearly independent, then X is a basic feasible solution; in such a case p ≤ m. If p = m, then from the theory of systems of linear equations the solution is a non-degenerate basic feasible solution. If p < m, the system has a degenerate basic feasible solution with (m − p) of the basic variables equal to zero.
If a1, a2, ..., ap are dependent, then there exist scalars α1, α2, ..., αp with at least one positive αj such that

Σ_{j=1}^{p} αj aj = 0.

Consider the point x̂ = (x̂1, x̂2, ..., x̂n) defined by

x̂j = xj − θ0 αj ;  j = 1, 2, ..., p
x̂j = 0 ;  j = p+1, p+2, ..., n

where

θ0 = min over {j = 1, 2, ..., p : αj > 0} of xj/αj = xk/αk > 0.

If αj ≤ 0, then x̂j > 0, since both xj and θ0 are positive. If αj > 0, then by the definition of θ0 we have θ0 ≤ xj/αj, hence x̂j = xj − θ0 αj ≥ 0. Thus x̂j ≥ 0 for all j. Furthermore,

x̂k = xk − θ0 αk = xk − (xk/αk) αk = 0.

Hence x̂ has at most (p − 1) positive components. Also,

A x̂ = Σ_{j=1}^{n} aj x̂j = Σ_{j=1}^{p} aj (xj − θ0 αj) = Σ_{j=1}^{p} aj xj − θ0 Σ_{j=1}^{p} αj aj = b.

Thus we have constructed a feasible solution x̂, since A x̂ = b and x̂ ≥ 0, with at most (p − 1) positive components. If the columns of A corresponding to these positive components are linearly independent, then x̂ is a basic feasible solution. Otherwise the process is repeated. Eventually a basic feasible solution (BFS) will be obtained.
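The constructive argument translates almost line by line into code. The sketch below is ours, not from the text: the function name reduce_to_bfs and the use of an SVD null-space vector to find the dependence α are implementation choices. It repeatedly applies the θ0-step until the columns carrying the positive components are independent.

```python
# Reduce a feasible solution of A x = b, x >= 0 to a basic feasible solution.
import numpy as np

def reduce_to_bfs(A, b, x, tol=1e-10):
    x = np.asarray(x, dtype=float).copy()
    while True:
        pos = np.where(x > tol)[0]            # indices of positive components
        B = A[:, pos]
        if np.linalg.matrix_rank(B) == len(pos):
            return x                          # columns independent: x is a BFS
        alpha = np.linalg.svd(B)[2][-1]       # null-space vector: B @ alpha ~ 0
        if alpha.max() <= tol:
            alpha = -alpha                    # ensure at least one alpha_j > 0
        theta0 = min(x[pos[j]] / alpha[j]
                     for j in range(len(pos)) if alpha[j] > tol)
        x[pos] = x[pos] - theta0 * alpha      # drives one component to zero

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
x0 = np.array([2.0, 2.0, 2.0, 1.0])           # feasible (A @ x0 = b) but not basic
x_bfs = reduce_to_bfs(A, b, x0)
print("BFS:", x_bfs, " check A @ x =", A @ x_bfs)
```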
Example: Consider the following LP problem.

Maximize z = 3x1 + 2x2
Subject to the constraints
x1 + x2 ≤ 6
x2 ≤ 3
x1, x2 ≥ 0

Find the basic solutions, the BFS and the extreme points.


Solution. By introducing slack variables x3 and x4, the problem is put into the following standard form:

x1 + x2 + x3 = 6
x2 + x4 = 3
x1, x2, x3, x4 ≥ 0

So the constraint matrix A is given by

A = [1 1 1 0; 0 1 0 1] = (a1, a2, a3, a4),   b = (6, 3)ᵀ,   Rank(A) = 2.

Therefore, the basic solutions correspond to finding a 2×2 basis B. Following are the possible ways of extracting B out of A:

(i) B = (a1, a2) = [1 1; 0 1],  B⁻¹ = [1 −1; 0 1],
    xB = (x1, x2)ᵀ = B⁻¹b = (3, 3)ᵀ,  xN = (x3, x4)ᵀ = (0, 0)ᵀ.

(ii) B = (a1, a3) = [1 1; 0 0].  Since |B| = 0, it is not possible to find B⁻¹ and hence xB.

(iii) B = (a1, a4) = [1 0; 0 1],  B⁻¹ = [1 0; 0 1],
    xB = (x1, x4)ᵀ = B⁻¹b = (6, 3)ᵀ,  xN = (x2, x3)ᵀ = (0, 0)ᵀ.

(iv) B = (a2, a3) = [1 1; 1 0],  B⁻¹ = [0 1; 1 −1],
    xB = (x2, x3)ᵀ = B⁻¹b = (3, 3)ᵀ,  xN = (x1, x4)ᵀ = (0, 0)ᵀ.

(v) B = (a2, a4) = [1 0; 1 1],  B⁻¹ = [1 0; −1 1],
    xB = (x2, x4)ᵀ = B⁻¹b = (6, −3)ᵀ,  xN = (x1, x3)ᵀ = (0, 0)ᵀ.

(vi) B = (a3, a4) = [1 0; 0 1],  B⁻¹ = [1 0; 0 1],
    xB = (x3, x4)ᵀ = B⁻¹b = (6, 3)ᵀ,  xN = (x1, x2)ᵀ = (0, 0)ᵀ.

Hence we have the following five basic solutions:

x1 = (3, 3, 0, 0)ᵀ,  x2 = (6, 0, 0, 3)ᵀ,  x3 = (0, 3, 3, 0)ᵀ,  x4 = (0, 6, 0, −3)ᵀ,  x5 = (0, 0, 6, 3)ᵀ

All of these except x4 are BFS, because x4 violates the non-negativity restrictions. The BFS belong to a four-dimensional space. Projected onto the (x1, x2)-space, these basic feasible solutions give rise to the following four points:

(3, 3), (6, 0), (0, 3), (0, 0)

From the graphical representation the extreme points are (0, 0), (0, 3), (3, 3) and (6, 0), which are the same as the BFS. Therefore the extreme points are precisely the BFS. The number of BFS is 4, which is less than the 6 possible choices of basis.
General Mathematical Formulation for Linear
Programming
Let us define the objective function which is to be optimized:

z = c1x1 + c2x2 + ... + cnxn

We have to find the values of the decision variables x1, x2, ..., xn on the basis of the following m constraints:

a11x1 + a12x2 + ... + a1nxn (≤, =, ≥) b1
a21x1 + a22x2 + ... + a2nxn (≤, =, ≥) b2
...
am1x1 + am2x2 + ... + amnxn (≤, =, ≥) bm

and xj ≥ 0 ;  j = 1, 2, ..., n
The above formulation can be written in the following compact form by using the summation sign:

Optimize (maximize or minimize) z = Σ_{j=1}^{n} cj xj

Subject to the conditions

Σ_{j=1}^{n} aij xj (≤, =, ≥) bi ;  i = 1, 2, ..., m

and

xj ≥ 0 ;  j = 1, 2, ..., n

The constants cj (j = 1, 2, ..., n) are called the cost coefficients, the constants bi (i = 1, 2, ..., m) are called the stipulations, and the constants aij (i = 1, 2, ..., m; j = 1, 2, ..., n) are called the structural coefficients. In matrix notation the above formulation can be written as:


Optimize z = CX
Subject to the conditions
AX (≤, =, ≥) B,  X ≥ 0

where

C = (c1, c2, ..., cn) is the 1×n row vector of cost coefficients,
X = (x1, x2, ..., xn)ᵀ is the n×1 column vector of decision variables,
A = (aij) = [a11 a12 ... a1n; a21 a22 ... a2n; ... ; am1 am2 ... amn] is the m×n matrix of structural coefficients, and
B = (b1, b2, ..., bm)ᵀ is the m×1 column vector of stipulations.

Here A is called the coefficient matrix, X is called the decision vector, B is called the requirement vector and C is called the cost vector of the linear programming problem.
The Standard Form of LP Problem
The use of basic solutions to solve general LP models requires putting the problem in standard form. The following are the characteristics of the standard form:
(i) All the constraints are expressed in the form of equations, except the non-negativity restrictions on the decision variables, which remain inequalities.
(ii) The right hand side of each constraint equation is
non-negative
(iii) All the decision variables are non-negative
(iv) The objective function may be of the maximization
or the minimization type
Conversion of Inequalities into Equations:
An inequality constraint of the type ≤ or ≥ can be converted to an equation by adding a variable to, or subtracting a variable from, the left-hand side of such a constraint. These new variables are called slack variables or simply slacks. They are added if the constraints are of the ≤ type and subtracted if the constraints are of the ≥ type. Since in the case of the ≥ type the subtracted variable represents the surplus of the left-hand side over the right-hand side, it is commonly known as a surplus variable and is in fact a negative slack.
For example,

x1 + x2 ≤ b1   is equivalent to   x1 + x2 + s1 = b1 ,  s1 ≥ 0,

and

x1 + x2 ≥ b2   is equivalent to   x1 + x2 − s2 = b2 ,  s2 ≥ 0.

The general LP problem discussed above can be expressed in the following standard form:

Optimize z = Σ_{j=1}^{n} cj xj

Subject to the conditions

Σ_{j=1}^{n} aij xj ± si = bi ;  i = 1, 2, ..., m

xj ≥ 0 ;  j = 1, 2, ..., n
and si ≥ 0 ;  i = 1, 2, ..., m

In matrix notation, the general LP problem can be written in the following standard form:

Optimize z = CX
Subject to the conditions
AX ± S = B
X ≥ 0
S ≥ 0

Example: Express the following LP problem in standard form.

Maximize z = 3x1 + 2x2
Subject to the conditions
2x1 + x2 ≤ 2
3x1 + 4x2 ≥ 12
x1, x2 ≥ 0

Solution: Introducing a slack variable s1 and a surplus variable s2, the problem can be expressed in standard form as

Maximize z = 3x1 + 2x2
Subject to the conditions
2x1 + x2 + s1 = 2
3x1 + 4x2 − s2 = 12
x1, x2, s1, s2 ≥ 0

Conversion of an Unrestricted Variable into Non-negative Variables
An unrestricted variable xj can be expressed in terms of two non-negative variables by using the substitution

xj = xj⁺ − xj⁻ ;   xj⁺, xj⁻ ≥ 0.

For example, if xj = −10, then xj⁺ = 0 and xj⁻ = 10. If xj = 10, then xj⁺ = 10 and xj⁻ = 0.

The substitution is made in all constraints and in the objective function. After solving the problem in terms of xj⁺ and xj⁻, the value of the original variable xj is then determined by back substitution.


Example: Express the following linear programming problem in standard form.

Maximize z = 3x1 + 2x2 + 5x3
Subject to
2x1 − 3x2 ≤ 3
x1 + 2x2 + 3x3 ≥ 5
3x1 + 2x3 ≤ 2
x1, x2 ≥ 0

Solution: Here x1 and x2 are restricted to be non-negative while x3 is unrestricted. Let us express x3 as x3 = x3⁺ − x3⁻, where x3⁺ ≥ 0 and x3⁻ ≥ 0. Now, introducing slack and surplus variables, the problem can be written in the standard form

Maximize z = 3x1 + 2x2 + 5(x3⁺ − x3⁻)
Subject to the conditions
2x1 − 3x2 + s1 = 3
x1 + 2x2 + 3x3⁺ − 3x3⁻ − s2 = 5
3x1 + 2x3⁺ − 2x3⁻ + s3 = 2
x1, x2, x3⁺, x3⁻, s1, s2, s3 ≥ 0
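The splitting can also be carried out numerically. The toy problem in the sketch below is ours, not from the text: a minimization with one free variable is solved with SciPy after substituting x3 = x3⁺ − x3⁻, and back substitution recovers the (negative) optimal value of x3.

```python
# minimize x1 + 2*x3  subject to  x1 + x3 >= 4,  x1 <= 6,  x1 >= 0, x3 free.
import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, -2.0])          # costs of x1, x3plus, x3minus
A = np.array([[-1.0, -1.0, 1.0],        # -(x1 + x3plus - x3minus) <= -4
              [ 1.0,  0.0, 0.0]])       #   x1                     <=  6
b = np.array([-4.0, 6.0])

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 3, method="highs")
x1, x3p, x3m = res.x
print("x1 =", x1, " x3 =", x3p - x3m)   # back substitution: x3 = x3plus - x3minus
```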

Conversion of Maximization to Minimization:


The maximization of a function f(x1, x2, ..., xn) is equivalent to the minimization of −f(x1, x2, ..., xn) in the sense that both problems yield the same optimal values of x1, x2, ..., and xn.

