
Optimization Techniques

Intro. to Linear Programming

Dr. Nasir M Mirza


Email: nasirmm@yahoo.com

Introduction to LP Problem
When the objective function and all constraints are linear
functions of the variables, the problem is known as a
linear programming (LP) problem.
A large number of engineering applications have been
successfully formulated and solved as LP problems.
LP problems also arise during the solution of nonlinear
problems as a result of linearizing functions around a
given point.
It is important to recognize that a problem is of the LP
type because of the availability of well-established
methods, such as the simplex method, for solving such
problems.
Problems with thousands of variables and constraints
can be handled with the simplex method.

Introduction to LP Problem
The simplex method requires that the LP problem be
stated in a standard form that involves only the equality
constraints.
Conversion of a given LP problem to this form is
discussed first.
In the standard LP form, since the constraints are linear
equalities, the simplex method essentially boils down to
solving systems of linear equations.
A review of solving linear systems of equations using the
Gauss-Jordan form and the LU decomposition will help.

The Standard LP Problem


Here we present methods for solving linear programming
(LP) problems expressed in the following standard
form:
Find x in order to:
Minimize: f(x) = cT x
Subject to: Ax = b and x ≥ 0,
where:
x = [x1, x2, ..., xn]T is the vector of optimization variables;
c = [c1, c2, ..., cn]T is the vector of objective (cost) coefficients;
b = [b1, b2, ..., bm]T ≥ 0 is the vector of right-hand sides of the constraints;
A is the m x n matrix of constraint coefficients:
A = [ a11  a12  ...  a1n
      a21  a22  ...  a2n
      ...
      am1  am2  ...  amn ]

The Standard LP Problem


Note that in this standard form, the problem is of the
minimization type.
All constraints are expressed as equalities with the right-hand side greater than or equal to (≥) 0.
Furthermore, all optimization variables are restricted to
be non-negative.
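
As a concrete illustration, here is a minimal sketch of solving a
standard-form LP with MATLAB's linprog (Optimization Toolbox); the
data are hypothetical, chosen only to show the calling pattern:

% Solve  min c'x  subject to  Ax = b, x >= 0   (hypothetical data)
c  = [-3; -5; 0];        % cost coefficients
A  = [1 2 1];            % one equality constraint: x1 + 2x2 + x3 = 4
b  = 4;
lb = zeros(3,1);         % lower bounds enforce x >= 0
x  = linprog(c, [], [], A, b, lb, [])   % []: no inequalities, no upper bounds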

How to deal with maximization?


Maximization Problem
As already mentioned, a maximization problem can be
converted to a minimization problem simply by
multiplying the objective function by a negative sign.
For example,
Maximize z(x) = 3x + 5y is the same as
Minimize: f(x) = -3x -5y.
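
With a minimizing solver, the negated costs are passed in and the sign
of the optimum is flipped back at the end. A minimal sketch, assuming
linprog and hypothetical constraint data:

% Maximize 3x + 5y by minimizing -(3x + 5y)   (hypothetical data)
c  = [3; 5];                      % original (maximization) costs
A  = [1 2; 3 1];  b = [8; 9];     % x + 2y <= 8,  3x + y <= 9
lb = zeros(2,1);                  % x, y >= 0
x  = linprog(-c, A, b, [], [], lb, []);
zmax = c' * x;                    % recover the maximum of the original objective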

Constant Term in the Objective Function


From the optimality conditions, it is easy to see that the optimum
solution x* does not change if a constant is either added to or
subtracted from the objective function.
Thus, a constant in the objective function can simply be ignored.
After the solution is obtained, the optimum value of the objective
function is adjusted to account for this constant.
Alternatively, a new dummy optimization variable can be defined to
multiply the constant and a constraint added to set the value of this
variable to 1.
For example, consider the following objective function of two
variables:
Minimize: f(x, y) = 3x + 5y + 7
In standard LP form, it can be written as follows:
Minimize: f(x, y, z) = 3x + 5y + 7z
subject to: z = 1
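
In practice it is usually simpler to drop the constant during the solve
and add it back afterwards. A minimal sketch, again assuming linprog
and hypothetical constraint data:

% Minimize 3x + 5y + 7: solve without the constant, then adjust
c  = [3; 5];
A  = [-1 -1];  b = -2;            % the constraint x + y >= 2 as -x - y <= -2
lb = zeros(2,1);
x  = linprog(c, A, b, [], [], lb, []);
fopt = c' * x + 7;                % adjust the optimum for the ignored constant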

Negative Values on the Right-Hand Sides of Constraints


The standard form requires that all constraints must be
arranged such that the constant term, if any, is a
positive quantity on the right-hand side.
If a constant appears as negative on the right-hand side
of a given constraint, multiply the constraint by a
negative sign.
Keep in mind that the direction of the inequality changes
(≤ becomes ≥, and vice versa) when both sides are
multiplied by a negative sign.
For example,
3x1 + 5x2 ≤ -7 is the same as -3x1 - 5x2 ≥ 7.

Standard Form for LP Problems


Less-than Type Constraints
Add a new non-negative variable (called a slack variable) to convert a
less-than-or-equal-to (LE) constraint to an equality.
For example, 3x + 5y ≤ 7 is converted to 3x + 5y + z = 7,
where z ≥ 0 is a slack variable.

Greater than Type Constraints


Subtract a new non-negative variable (called a surplus variable) to
convert a greater-than-or-equal-to (GE) constraint to an equality.
For example, 3x + 5y ≥ 7 is converted to 3x + 5y - z = 7,
where z ≥ 0 is a surplus variable.
Note that, since the right-hand sides of the constraints are restricted
to be positive, we cannot simply multiply both sides of the GE
constraints by -1 to convert them into the LE type.
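
Both conversions can be mechanized by appending identity columns to the
constraint matrix. A minimal sketch with hypothetical data (the names
Ale, Age, and so on are ours, not from the slides):

% Convert  Ale*x <= ble  and  Age*x >= bge  to equality form
Ale = [3 5];  ble = 7;            % 3x + 5y <= 7  (LE constraint)
Age = [3 5];  bge = 7;            % 3x + 5y >= 7  (GE constraint)
ms  = size(Ale,1);  mg = size(Age,1);
Aeq = [Ale,  eye(ms),       zeros(ms,mg);    % +1 columns: slack variables
       Age,  zeros(mg,ms), -eye(mg)];        % -1 columns: surplus variables
beq = [ble; bge];
% every slack/surplus variable added this way is restricted to be >= 0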

Unrestricted Variables
The standard LP form restricts all variables to be
non-negative.
If an actual optimization variable is unrestricted in sign,
it can be converted to the standard form by defining it
as the difference of two new non-negative variables.
For example, if variable x is unrestricted in sign, it is
replaced by two new variables y1 and y2 with x = y1 - y2.
Both of the new variables are non-negative.
After the solution is obtained, if y1 > y2 then x will be
positive and if y1 < y2, then x will be negative.
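
In matrix terms, the split simply appends the negation of the
unrestricted variable's column. A minimal sketch with hypothetical data:

% Split an unrestricted variable (column j) into y1 - y2
A  = [1 4; 1 3];        % hypothetical constraint coefficients on [w, x]
j  = 2;                 % suppose x (column 2) is unrestricted in sign
A2 = [A, -A(:,j)];      % columns of A2 multiply [w, y1, y2], with x = y1 - y2
% after solving, recover x as sol(2) - sol(3)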

Example
Convert the following problem to the standard LP form.
Maximize: z = 3x + 8y
Subject to: 3x + 4y ≥ -20
x + 3y ≥ 6
x ≥ 0
Note that y is unrestricted in sign.
Define new variables (all ≥ 0):
x = y1 ; y = y2 - y3

Substituting these and multiplying the first constraint by a negative
sign, the problem is as follows:
Maximize: z = 3y1 + 8y2 - 8y3
Subject to: -3y1 - 4y2 + 4y3 ≤ 20
y1 + 3y2 - 3y3 ≥ 6
y1, y2, y3 ≥ 0

Example
Multiplying the objective function by a negative sign and
introducing slack/surplus variables in the constraints, the
problem in the standard LP form is as follows:
Minimize: f = -3y1 - 8y2 + 8y3
Subject to:
-3y1 - 4y2 + 4y3 + y4 = 20
y1 + 3y2 - 3y3 - y5 = 6
y1, ..., y5 ≥ 0
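
For reference, the standard-form data of this example can be assembled
in MATLAB as below; this is a sketch of the data only. (Note that the
example is purely about form conversion: z grows without bound as y
increases, so a solver such as linprog would report this problem as
unbounded rather than return an optimum.)

% Standard-form data for the example above
c   = [-3; -8; 8; 0; 0];          % costs for y1..y5
Aeq = [-3 -4  4  1  0;
        1  3 -3  0 -1];
beq = [20; 6];
% all of y1..y5 are restricted to be >= 0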

The Optimum of LP Problems


Since linear functions are always convex, the LP
problem is a convex programming problem.
This means that if an optimum solution exists, it
is a global optimum.
The optimum solution of an LP problem always
lies on the boundary of the feasible domain.
We can easily prove this by contradiction: if the optimum
were an interior point, a small move in the direction of -c
would remain feasible and reduce f further, contradicting
optimality.

LP Problems
Once an LP problem is converted to its standard form, the
constraints represent a system of m equations in n unknowns.
When m = n (i.e., the number of constraints is equal to the number
of optimization variables), then the solution for all variables is
obtained from the solution of constraint equations and there is no
consideration of the objective function. This situation clearly does
not represent an optimization problem.
On the other hand, m > n does not make sense because in this
case, some of the constraints must be linearly dependent on the
others.
Thus, from an optimization point of view, the only meaningful case
is when the number of constraints is smaller than the number of
variables (after the problem has been expressed in the standard LP
form), that is, m < n.

Solving Linear Systems of Equations


So, solving LP problems involves solving a system of
underdetermined linear equations (the number of
equations is less than the number of unknowns).
A review of the Gauss-Jordan procedure for solving a
system of linear equations is presented next.
Consider the solution of the following system of
equations: Ax = b where A is an m x n coefficient
matrix, x is n x 1 vector of unknowns, and b is an m x 1
vector of known right-hand sides.

Basic Principles
The general description of a set of linear
equations in the matrix form:
[A][X] = [B]
[A] : (m x n) matrix
[X] : (n x 1) vector
[B] : (m x 1) vector

How we proceed with this system:

Write the equations in natural form
Identify unknowns and order them
Isolate unknowns
Write the equations in matrix form

Matrix Representation

[A]{x} = {b}

[ a11  a12  ...  a1n ] [ x1 ]   [ b1 ]
[ a21  a22  ...  a2n ] [ x2 ] = [ b2 ]
[ ...                ] [ .. ]   [ .. ]
[ an1  an2  ...  ann ] [ xn ]   [ bn ]

Gaussian Elimination
One of the most popular techniques for solving
simultaneous linear equations of the form

[A][X] = [C]
It consists of two steps:
1. Forward elimination of unknowns
2. Back substitution

Forward Elimination
The goal of Forward Elimination is to transform
the coefficient matrix into an Upper Triangular
Matrix

[ 25    5    1 ]        [ 25    5      1    ]
[ 64    8    1 ]  -->   [  0   -4.8   -1.56 ]
[ 144  12    1 ]        [  0    0      0.7  ]

Forward Elimination
Linear Equations: A set of n equations and n unknowns

a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1     (Eq. 1)
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2     (Eq. 2)
...
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn     (Eq. n)

Computer Program
function x = gaussE(A,b,ptol)
% gaussE  Show steps in Gauss elimination and back substitution.
%         No pivoting is used.
%
% Synopsis:  x = gaussE(A,b)
%            x = gaussE(A,b,ptol)
%
% Input:   A,b  = coefficient matrix and right-hand-side vector
%          ptol = (optional) tolerance for detection of a zero pivot;
%                 default: ptol = 50*eps
%
% Output:  x = solution vector, if a solution exists

if nargin<3, ptol = 50*eps; end
[m,n] = size(A);
if m~=n, error('A matrix needs to be square'); end
nb = n+1;  Ab = [A b];           % augmented system
fprintf('\nBegin forward elimination with augmented system:\n'); disp(Ab);

% --- Elimination

Computer Program (continued)


% program continued
for i = 1:n-1
    pivot = Ab(i,i);
    if abs(pivot) < ptol, error('zero pivot encountered'); end
    for k = i+1:n
        factor = Ab(k,i)/pivot;                     % multiplier for row k
        Ab(k,i:nb) = Ab(k,i:nb) - factor*Ab(i,i:nb);
        fprintf('Multiplication factor is %g\n', factor);
        disp(Ab); pause;
    end
    fprintf('\nAfter elimination in column %d with pivot = %f\n\n', i, pivot);
    disp(Ab); pause;
end

% --- Back substitution
x = zeros(n,1);                  % initialize the solution vector
x(n) = Ab(n,nb)/Ab(n,n);
for i = n-1:-1:1
    x(i) = (Ab(i,nb) - Ab(i,i+1:n)*x(i+1:n))/Ab(i,i);
end
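
A usage sketch, calling gaussE on the 3-by-3 system shown in the
forward-elimination example (this A and b appeared in the original
listing):

A = [25 5 1; 64 8 1; 144 12 1];
b = [106.8; 177.2; 279.2];
x = gaussE(A, b)        % prints each elimination step, then returns the solution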

Basic Solutions of an LP Problem

As mentioned before, the solution of an LP problem reduces to solving a
system of underdetermined (fewer equations than variables) linear
equations.
From the m equations, at most we can solve for m variables in terms of
the remaining n - m variables.
The variables that we choose to solve for are called basic, and the
remaining variables are called non-basic.
Consider the following example:
Minimize: f = -x + y
Subject to: x - 2y ≥ 2
x + y ≤ 4
x ≤ 3
x, y ≥ 0.

Basic Solutions of an LP Problem


In standard form, this LP problem becomes:
Minimize: f = -x1 + x2
Subject to: x1 - 2x2 - x3 = 2
x1 + x2 + x4 = 4
x1 + x5 = 3
xi ≥ 0, for all i = 1, 2, ..., 5,
where x3 is a surplus variable for the first constraint,
and x4 and x5 are slack variables for the two less-than
type constraints.

The total number of variables is n = 5, and
the number of equations is m = 3.
Thus, we can have three basic variables and two non-basic variables.
If we arbitrarily choose x3, x4, and x5 as basic
variables, a general solution of the constraint
equations can readily be written as follows:
x3 = -2 + x1 - 2x2 ;
x4 = 4 - x1 - x2 ;
x5 = 3 - x1

Basic Solutions
The general solution is valid for any values of the non-basic variables.
Since all variables must be non-negative and we are interested in
minimizing the objective function, we assign zero values to
the non-basic variables.
A solution of the constraint equations obtained by
setting the non-basic variables to zero is called a basic
solution.
Therefore, one possible basic solution for the above
example is as follows: x3 = -2 ; x4 = 4 ; x5 = 3.
Since all variables must be ≥ 0, this basic solution is not
feasible because x3 is negative.

Basic solutions

Let's find another basic solution by choosing (again arbitrarily) x1, x4, and
x5 as basic variables and x2 and x3 as non-basic.
By setting the non-basic variables to zero, we solve for the basic variables:
x1 = 2; x1 + x4 = 4 ; x1 + x5 = 3
It can easily be verified that the solution is x1 = 2, x4 = 2, and x5 = 1. Since
all variables have non-negative values, this basic solution is feasible.
A quick check with MATLAB's backslash operator is sketched below.
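
This check solves the 3-by-3 basis system directly; A and b are the
standard-form data derived earlier:

% Verify the basic solution with MATLAB's backslash operator
A = [1 -2 -1 0 0; 1 1 0 1 0; 1 0 0 0 1];   % standard-form constraint matrix
b = [2; 4; 3];
basis = [1 4 5];                  % chosen basic variables x1, x4, x5
xB = A(:,basis) \ b               % returns [2; 2; 1]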

The maximum number of basic solutions depends on the number of
constraints and the number of variables in the problem:
Possible basic solutions = n! / [ m! (n - m)! ]
For the example problem, m = 3 and n = 5; therefore, the maximum
number of basic solutions is
5! / [ 3! 2! ] = 10.

All basic solutions

All ten basic solutions are computed from the constraint equations and
are summarized in the following table. The set of basic variables for a
particular solution is called a basis for that solution.

Basis           Solution (x1, ..., x5)      Status
{x1, x2, x3}    {3, 1, -1, 0, 0}            infeasible
{x1, x2, x4}    {3, 1/2, 0, 1/2, 0}         feasible
{x1, x2, x5}    {10/3, 2/3, 0, 0, -1/3}     infeasible
{x1, x3, x4}    {3, 0, 1, 1, 0}             feasible
{x1, x3, x5}    {4, 0, 2, 0, -1}            infeasible
{x1, x4, x5}    {2, 0, 0, 2, 1}             feasible
{x2, x3, x4}    {-}                         no solution
{x2, x3, x5}    {0, 4, -10, 0, 3}           infeasible
{x2, x4, x5}    {0, -1, 0, 5, 3}            infeasible
{x3, x4, x5}    {0, 0, -2, 4, 3}            infeasible

Graphical Solution
[Figure: the feasible region in the (x1, x2) plane, bounded by the
constraint lines x2 = 0.5x1 - 1.0, x2 = 4.0 - x1, and x1 = 3, with
objective-function contours (f = -1.0, -1.5, -2.0, ...). The basic
feasible solutions Sol 2, Sol 4, and Sol 6 lie at vertices of the
region; Sol 4 is the optimum.]

Final Thoughts
One of these basic feasible solutions must be the
optimum.
Thus, a brute-force method to find an optimum is to
compute all possible basic solutions.
The one that is feasible and has the lowest value of the
objective function is the optimum solution.
For the example problem, the fourth basic solution is
feasible and has the lowest value of the objective
function.
Thus, this represents the optimum solution for the
problem. Optimum solution:
x1 = 3, x2 = 0, x3 = 1, x4 = 1, x5 = 0, f = -3.
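
The brute-force procedure is easy to script. A minimal MATLAB sketch
that enumerates all 10 bases for the example, solves each basis system,
and keeps the best feasible basic solution:

% Enumerate all basic solutions and pick the best feasible one
A = [1 -2 -1 0 0; 1 1 0 1 0; 1 0 0 0 1];
b = [2; 4; 3];
c = [-1; 1; 0; 0; 0];
n = 5;  m = 3;
fbest = inf;  xbest = [];
for basis = nchoosek(1:n, m)'     % each column is one choice of basis
    B = A(:, basis);
    if abs(det(B)) < 1e-12, continue; end  % singular basis: no basic solution
    x = zeros(n,1);
    x(basis) = B \ b;             % non-basic variables stay at zero
    if all(x >= 0) && c'*x < fbest
        fbest = c'*x;  xbest = x; % best feasible basic solution so far
    end
end
xbest, fbest                      % gives x = [3; 0; 1; 1; 0] and f = -3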
