Introduction to LP Problem
When the objective function and all constraints are linear
functions of the variables, the problem is known as a
linear programming (LP) problem.
A large number of engineering applications have been
successfully formulated and solved as LP problems.
LP problems also arise during the solution of nonlinear
problems as a result of linearizing functions around a
given point.
It is important to recognize that a problem is of the LP
type because of the availability of well-established
methods, such as the simplex method, for solving such
problems.
Problems with thousands of variables and constraints
can be handled with the simplex method.
The simplex method requires that the LP problem be
stated in a standard form that involves only the equality
constraints.
Conversion of a given LP problem to this form is
discussed first.
In the standard LP form, since the constraints are linear
equalities, the simplex method essentially boils down to
solving systems of linear equations.
A review of solving linear systems of equations using the
Gauss-Jordan form and the LU decomposition will help.
A =
| a11  a12  ...  a1n |
| a21  a22  ...  a2n |
| ...                |
| am1  am2  ...  amn |
Unrestricted Variables
The standard LP form requires all variables to be
non-negative.
If an actual optimization variable is unrestricted in sign,
it can be converted to the standard form by defining it
as the difference of two new positive variables.
For example, if variable x is unrestricted in sign, it is
replaced by two new variables y1 and y2 with x = y1 − y2.
Both new variables are non-negative.
After the solution is obtained, if y1 > y2 then x will be
positive and if y1 < y2, then x will be negative.
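The sign-splitting trick is easy to illustrate in code. A minimal sketch (the function name is mine; the particular decomposition y1 = max(x, 0), y2 = max(−x, 0) is one standard choice, though a solver may return any pair with the same difference):

```python
def split_unrestricted(x):
    """Represent a free-sign variable x as the difference of two
    non-negative variables: x = y1 - y2."""
    y1 = max(x, 0.0)
    y2 = max(-x, 0.0)
    return y1, y2

for x in (2.5, -4.0, 0.0):
    y1, y2 = split_unrestricted(x)
    assert y1 >= 0 and y2 >= 0    # both new variables are non-negative
    assert y1 - y2 == x           # x is recovered as the difference
# As the text notes: x > 0 gives y1 > y2, and x < 0 gives y1 < y2.
```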
Example
Convert the following problem to the standard LP form.
Maximize: z = 3x + 8y
Subject to: 3x + 4y ≥ −20
x + 3y ≥ 6
x ≥ 0
Note that y is unrestricted in sign.
Define new variables (all ≥ 0): x = y1; y = y2 − y3
Maximize: z = 3y1 + 8y2 − 8y3
Subject to: −3y1 − 4y2 + 4y3 ≤ 20
y1 + 3y2 − 3y3 ≥ 6
y1, y2, y3 ≥ 0
Multiplying the objective function by −1 (to convert the
maximization into a minimization) and introducing slack and
surplus variables in the constraints, the problem in the
standard LP form is as follows:
Minimize f = −3y1 − 8y2 + 8y3
Subject to:
−3y1 − 4y2 + 4y3 + y4 = 20
y1 + 3y2 − 3y3 − y5 = 6
y1, ..., y5 ≥ 0
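The conversion can be sanity-checked numerically: map any point of the original problem into the new variables and confirm that feasibility and the objective value carry over. A sketch (function and variable names are mine, not from the notes):

```python
# Check the standard-form conversion of:
#   maximize z = 3x + 8y,  s.t. 3x + 4y >= -20,  x + 3y >= 6,  x >= 0, y free
# against:
#   minimize f = -3y1 - 8y2 + 8y3
#   -3y1 - 4y2 + 4y3 + y4 = 20,   y1 + 3y2 - 3y3 - y5 = 6,   all yi >= 0

def to_standard(x, y):
    """Map an original point (x >= 0, y free in sign) to (y1, ..., y5)."""
    y1 = x
    y2, y3 = max(y, 0.0), max(-y, 0.0)        # y = y2 - y3, both >= 0
    y4 = 20 - (-3*y1 - 4*y2 + 4*y3)           # slack of the first constraint
    y5 = (y1 + 3*y2 - 3*y3) - 6               # surplus of the second constraint
    return y1, y2, y3, y4, y5

x, y = 2.0, 4.0                               # a feasible point of the original
y1, y2, y3, y4, y5 = to_standard(x, y)
assert all(v >= 0 for v in (y1, y2, y3, y4, y5))   # feasible in standard form
z = 3*x + 8*y                                 # original (max) objective
f = -3*y1 - 8*y2 + 8*y3                       # standard (min) objective
assert f == -z                                # objectives agree up to sign
```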
LP Problems
Once an LP problem is converted to its standard form, the
constraints represent a system of m equations in n unknowns.
When m = n (i.e., the number of constraints is equal to the number
of optimization variables), then the solution for all variables is
obtained from the solution of constraint equations and there is no
consideration of the objective function. This situation clearly does
not represent an optimization problem.
On the other hand, m > n does not make sense because in this
case, some of the constraints must be linearly dependent on the
others.
Thus, from an optimization point of view, the only meaningful case
is when the number of constraints is smaller than the number of
variables (after the problem has been expressed in the standard LP
form) that is m < n.
Basic Principles
The general description of a set of linear
equations in the matrix form:
[A][X] = [B]
[A] ( m x n ) matrix
[X] ( n x 1 ) vector
[B] (m x 1 ) vector
Matrix Representation
[A]{x} = {b}

| a11  a12  ...  a1n | { x1 }   { b1 }
| a21  a22  ...  a2n | { x2 } = { b2 }
| ...                | { .. }   { .. }
| an1  an2  ...  ann | { xn }   { bn }
Gaussian Elimination
One of the most popular techniques for solving
simultaneous linear equations of the form
[A][X] = [C]
consists of two steps:
1. Forward elimination of unknowns.
2. Back substitution.
Forward Elimination
The goal of Forward Elimination is to transform
the coefficient matrix into an Upper Triangular
Matrix
| 25   5  1 |        | 25    5      1    |
| 64   8  1 |   →    |  0   −4.8   −1.56 |
| 144 12  1 |        |  0    0      0.7  |
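The reduction above can be reproduced directly. A short pure-Python sketch (the right-hand side b is taken from the gaussE listing later in these notes):

```python
# Forward elimination on the 3x3 example, then back substitution.
A = [[25.0, 5.0, 1.0],
     [64.0, 8.0, 1.0],
     [144.0, 12.0, 1.0]]
b = [106.8, 177.2, 279.2]

n = 3
Ab = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]

# Forward elimination: zero out the entries below each pivot
for i in range(n - 1):
    for j in range(i + 1, n):
        factor = Ab[j][i] / Ab[i][i]
        Ab[j] = [ajk - factor * aik for ajk, aik in zip(Ab[j], Ab[i])]

# Ab is now upper triangular, matching the display above
assert abs(Ab[1][1] - (-4.8)) < 1e-9
assert abs(Ab[2][2] - 0.7) < 1e-9

# Back substitution, from the last equation upward
x = [0.0] * n
for i in range(n - 1, -1, -1):
    s = sum(Ab[i][k] * x[k] for k in range(i + 1, n))
    x[i] = (Ab[i][n] - s) / Ab[i][i]

# Residual check: A x should reproduce b
for row, bi in zip(A, b):
    assert abs(sum(r * xk for r, xk in zip(row, x)) - bi) < 1e-9
```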
Linear equations: a set of n equations in n unknowns:

a11 x1 + a12 x2 + ... + a1n xn = b1   (Eq. 1)
a21 x1 + a22 x2 + ... + a2n xn = b2   (Eq. 2)
...
an1 x1 + an2 x2 + ... + ann xn = bn   (Eq. n)
Computer Program
function x = gaussE(A,b,ptol)
% gaussE  Solve A*x = b with Gauss elimination and back substitution.
%         No pivoting is used.
%
% Synopsis:  x = gaussE(A,b)
%            x = gaussE(A,b,ptol)
%
% Input:   A,b  = coefficient matrix and right hand side vector
%          ptol = (optional) tolerance for detection of zero pivot
%                 Default: ptol = 50*eps
%
% Output:  x = solution vector, if solution exists
%
% Example: A = [25 5 1; 64 8 1; 144 12 1];  b = [106.8; 177.2; 279.2];
%          x = gaussE(A,b)

if nargin<3, ptol = 50*eps; end
[m,n] = size(A);
if m~=n, error('A matrix needs to be square'); end
nb = n+1;  Ab = [A b];    % Augmented system
fprintf('\nBegin forward elimination with Augmented system:\n');  disp(Ab);

% --- Forward elimination
for i = 1:n-1
    if abs(Ab(i,i)) < ptol, error('Zero pivot encountered'); end
    for j = i+1:n
        Ab(j,i:nb) = Ab(j,i:nb) - (Ab(j,i)/Ab(i,i))*Ab(i,i:nb);
    end
end

% --- Back substitution
x = zeros(n,1);
x(n) = Ab(n,nb)/Ab(n,n);
for i = n-1:-1:1
    x(i) = (Ab(i,nb) - Ab(i,i+1:n)*x(i+1:n))/Ab(i,i);
end
The variables that we choose to solve for are called basic, and the
remaining variables are called non-basic. Consider the example problem:
Minimize f = −x + y
subject to:
x − 2y ≥ 2
x + y ≤ 4
x ≤ 3
x, y ≥ 0.
In the standard LP form, with x = x1, y = x2, surplus variable x3, and
slack variables x4 and x5, the constraints become:
x1 − 2x2 − x3 = 2
x1 + x2 + x4 = 4
x1 + x5 = 3
x1, ..., x5 ≥ 0.
Basic Solutions
The general solution is valid for any values of the nonbasic variables.
Since all variables are non-negative and we are interested in
minimizing the objective function, we assign zero values to the
non-basic variables.
A solution from the constraint equations obtained by
setting non-basic variables to zero is called a basic
solution.
Therefore, one possible basic solution for the above
example is as follows: x3 = -2 ; x4 = 4 ; x5 = 3
Since all variables must be ≥ 0, this basic solution is not
feasible because x3 is negative.
Let's find another basic solution by choosing (again arbitrarily) x1, x4, and
x5 as basic variables and x2 and x3 as non-basic. This gives x1 = 2, x4 = 2,
x5 = 1, which is a basic feasible solution since all values are non-negative.
In general, the number of basic solutions equals the number of ways of
choosing 3 basic variables out of 5:
5! / [ 3! 2! ] = 10.
All 10 basic solutions are tabulated below:

No.  Basis        Solution {x1, x2, x3, x4, x5}   Status
1    x1, x2, x3   {3, 1, -1, 0, 0}                infeasible
2    x1, x2, x4   {3, 1/2, 0, 1/2, 0}             feasible
3    x1, x2, x5   {10/3, 2/3, 0, 0, -1/3}         infeasible
4    x1, x3, x4   {3, 0, 1, 1, 0}                 feasible
5    x1, x3, x5   {4, 0, 2, 0, -1}                infeasible
6    x1, x4, x5   {2, 0, 0, 2, 1}                 feasible
7    x2, x3, x4   {-}                             no solution
8    x2, x3, x5   {0, 4, -10, 0, 3}               infeasible
9    x2, x4, x5   {0, -1, 0, 5, 3}                infeasible
10   x3, x4, x5   {0, 0, -2, 4, 3}                infeasible
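The table can be reproduced by brute force: enumerate every choice of 3 basic variables out of 5, solve the resulting 3x3 system, and keep the feasible solution with the lowest objective value. A pure-Python sketch (helper names are mine):

```python
from itertools import combinations

# Standard-form constraints of the example:
#   x1 - 2x2 - x3           = 2
#   x1 +  x2       + x4     = 4
#   x1                 + x5 = 3
A = [[1.0, -2.0, -1.0, 0.0, 0.0],
     [1.0,  1.0,  0.0, 1.0, 0.0],
     [1.0,  0.0,  0.0, 0.0, 1.0]]
b = [2.0, 4.0, 3.0]
c = [-1.0, 1.0, 0.0, 0.0, 0.0]     # f = -x1 + x2

def solve3(M, rhs):
    """Solve a 3x3 system by Gaussian elimination with partial pivoting.
    Returns None when the system is singular (no unique solution)."""
    Ab = [row[:] + [r] for row, r in zip(M, rhs)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(Ab[r][i]))
        if abs(Ab[p][i]) < 1e-12:
            return None
        Ab[i], Ab[p] = Ab[p], Ab[i]
        for j in range(i + 1, 3):
            f = Ab[j][i] / Ab[i][i]
            Ab[j] = [v - f * w for v, w in zip(Ab[j], Ab[i])]
    x = [0.0] * 3
    for i in (2, 1, 0):
        s = sum(Ab[i][k] * x[k] for k in range(i + 1, 3))
        x[i] = (Ab[i][3] - s) / Ab[i][i]
    return x

best = None
for basis in combinations(range(5), 3):        # C(5,3) = 10 candidate bases
    M = [[A[r][j] for j in basis] for r in range(3)]
    sol = solve3(M, b)
    if sol is None:
        continue                               # no solution for this basis
    x = [0.0] * 5
    for j, v in zip(basis, sol):
        x[j] = v
    if min(x) < -1e-9:
        continue                               # basic but infeasible
    f = sum(ci * xi for ci, xi in zip(c, x))
    if best is None or f < best[0]:
        best = (f, x)
```

Running this recovers row 4 of the table as the best basic feasible solution: x = {3, 0, 1, 1, 0} with f = −3.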
Graphical Solution
[Figure: graphical solution in the (x1, x2) plane. The constraint lines
x2 = 4.0 − x1, x2 = 0.5x1 − 1.0, and x1 = 3 bound the feasible region;
the basic feasible solutions 2, 4, and 6 lie at its vertices, with the
optimum at solution 4.]
Final Thoughts
Since the optimum of an LP with a bounded solution is attained at a
vertex of the feasible region, one of these basic feasible solutions
must be the optimum.
Thus, a brute-force method to find an optimum is to
compute all possible basic solutions.
The one that is feasible and has the lowest value of the
objective function is the optimum solution.
For the example problem, the fourth basic solution is
feasible and has the lowest value of the objective
function.
Thus, this represents the optimum solution for the
problem. Optimum solution:
x1 = 3, x2 = 0, x3 = 1, x4 = 1, x5 = 0, f = -3.
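The reported optimum is easy to verify against the standard-form equations; a quick pure-Python check:

```python
# Verify the reported optimum of the example problem:
#   x1 - 2x2 - x3 = 2,  x1 + x2 + x4 = 4,  x1 + x5 = 3,  all xi >= 0
x = (3.0, 0.0, 1.0, 1.0, 0.0)

assert x[0] - 2*x[1] - x[2] == 2.0
assert x[0] + x[1] + x[3] == 4.0
assert x[0] + x[4] == 3.0
assert all(v >= 0 for v in x)    # feasible

f = -x[0] + x[1]                 # original objective f = -x + y
assert f == -3.0
```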