Simplex algorithm - Definition


In mathematical optimization theory, the simplex algorithm of George Dantzig is a fundamental technique for numerically solving linear programming problems. There is also an unrelated method of the same name for solving nonlinear programming problems.
In both cases, the method uses the concept of a simplex, which is a polytope of N + 1 vertices in N dimensions: a line segment on a line, a triangle in a plane, a tetrahedron (a 4-vertex pyramid) in three-dimensional space, and so forth.

The simplex algorithm in linear programming


A linear programming problem consists of a collection of linear inequalities on a number of real variables and a fixed linear functional which is to be maximized (or minimized). Some further details are given on the page for linear programming.
In geometric terms we are considering a closed convex polytope P, defined by intersecting a number of half-spaces in n-dimensional Euclidean space, each consisting of the points lying to one side of a hyperplane. The objective is to maximise a linear functional L; if we consider the hyperplanes H(c) defined by L(x) = c as c increases, these form a parallel family. If the problem is well-posed, we want to find the largest value of c such that H(c) intersects P (if there is no largest such value of c, this isn't a reasonable question for optimization as it stands). In this case we can show that the optimum value of c is attained on the boundary of P. Methods for finding this optimum point on P may either improve a feasible point by moving through the interior of P (so-called interior point methods), or start and remain on the boundary.
The simplex algorithm falls into the latter class of methods. The idea is to move along the facets of P in search of the optimum, from vertex to vertex. Note that the optimum cannot occur at a mid-point (for example, in the middle of an edge lying as a line segment on the boundary of P) unless the whole relevant face is parallel to H. Therefore it is possible to look solely at moves along the edges of the polytope, ignoring the interior. The algorithm specifies how this is to be done.
We start at some vertex of the polytope, and at every iteration we choose an adjacent vertex such that the value of the objective
function does not decrease. If no such vertex exists, we have found a solution to the problem. But usually, such an adjacent vertex is
nonunique, and a pivot rule must be specified to determine which vertex to pick. Various pivot rules exist.
In 1972, Klee and Minty gave an example of a linear programming problem in which the polytope P is a distortion of an n-dimensional cube. They showed that the simplex method as formulated by Dantzig visits all 2^n vertices before arriving at the optimal vertex, which shows that the worst-case complexity of the algorithm is exponential. Similar examples have been found for other pivot rules. It is an open question whether there is a pivot rule with polynomial worst-case complexity.
Nevertheless, the simplex method is remarkably efficient in practice. Attempts to explain this employ the notion of average complexity or (more recently) smoothed complexity.


Other algorithms for solving linear programming problems are described in the article on linear programming.

The simplex algorithm in nonlinear programming


The simplex algorithm finds an optimum solution to a problem with N variables when the objective varies smoothly from one point in the N-dimensional solution space to the neighboring points. Examples include a 2-dimensional map showing the elevation of a piece of land or the depth of a pool of oil, or optimizing the outcome of an experiment in which many conditions are varied over some known range. With an elevation map, simple inspection shows where the highest elevations are; this illustrates that the usefulness of the simplex method lies in cases where the value at a given point is hard to come by and an exhaustive search is infeasible.
A computer program using the simplex method to automatically find an optimum typically (1) chooses N + 1 points; (2) finds the values (if unknown) at those points (the elevation, depth, speed, cost, execution time, or whatever is being optimized); (3) stops if a good enough value has been found (by any of several criteria), else replaces the worst-valued point with a new point closer to the other points (typically by half), creating a new, smaller simplex; and (4) goes to step (2).
Many variations exist depending on the actual nature of the problem being solved. Perhaps the most common is to use a constant-size small simplex that climbs local gradients to local maxima. Visualize a small triangle on an elevation map flip-flopping its way up a hill to a local peak.
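The numbered steps above can be sketched in a few lines of code. This is an illustrative sketch only, not a full implementation: it follows the shrink step described in the text, plus a standard reflection move (an addition, not described above) so the simplex can travel uphill rather than only collapse in place. The quadratic "elevation" test function is an arbitrary choice.

```python
# Simplex search sketch for maximizing a smooth function of N variables.

def elevation(p):
    x, y = p
    return -(x - 3.0) ** 2 - (y - 2.0) ** 2      # single peak of 0 at (3, 2)

def simplex_search(f, simplex, tol=1e-9, max_iter=2000):
    for _ in range(max_iter):
        simplex.sort(key=f, reverse=True)        # best vertex first
        if f(simplex[0]) - f(simplex[-1]) < tol: # step 3: good enough, stop
            break
        worst = simplex[-1]
        others = simplex[:-1]
        centroid = [sum(c) / len(others) for c in zip(*others)]
        # try reflecting the worst point through the centroid (uphill move)
        reflected = [2 * c - w for c, w in zip(centroid, worst)]
        if f(reflected) > f(worst):
            simplex[-1] = reflected
        else:
            # step 3: replace the worst point by one halfway toward the others
            simplex[-1] = [(w + c) / 2.0 for w, c in zip(worst, centroid)]
    return simplex[0]

start = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]     # step 1: N + 1 points, N = 2
best = simplex_search(elevation, start)
```

For this smooth test surface the triangle climbs to the neighborhood of the peak at (3, 2).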

External links
An Introduction to Linear Programming and the Simplex Algorithm by Spyros Reveliotis of the Georgia Institute of Technology.
Java-based interactive simplex tool hosted by Argonne National Laboratory.



This article is licensed under the GNU Free Documentation License. It uses material from this Wikipedia article.

The Classical Simplex Method


The Simplex Method is the earliest solution algorithm for solving LP problems. It is an efficient implementation of solving a series of systems of linear equations. By using a greedy strategy while jumping from a feasible vertex to the next adjacent vertex, the algorithm terminates at an optimal solution.
The aim of this site is to enhance understanding of the developmental process of the Simplex Method.
Professor Hossein Arsham

MENU
1. Introduction
2. Algebraic Method
3. Pivoting Row Operations
4. The Simplex Method

Introduction
The Graphical Method is limited to solving LP problems having one or two decision variables. However, it provides a clear illustration of where the feasible and non-feasible regions, as well as the vertices, are. Having a visual understanding of the problem helps with a more rational thought process. For example, we learned that if a linear program has a non-empty, bounded feasible region, then the optimal solution is always at one of the vertices of its feasible region (a corner point). Therefore, what needs to be done is to find all the intersection points (vertices) and then examine which one among all feasible vertices provides the optimal solution. Now, by using concepts from analytical geometry, we will overcome this limitation of human vision. The Algebraic Method is designed to extend the graphical method results to multi-dimensional LP problems.

Algebraic Method:
LP models with multi-dimensional decision variables
We are interested in solving the following general problem:

Max (or Min) C.X

subject to:
AX ≤ a
BX ≥ b
DX = d
with possibly some restricted variables: some Xi's ≥ 0, some Xi's ≤ 0, and all others
unrestricted in sign.
The customary notation is used: C = {Cj} for the objective function coefficients (known
as the cost coefficients because, historically, the first LP problem was a cost minimization
problem), and X = {Xj} for the decision variables.
Here is a four-step procedure for the algebraic solution algorithm:
1. Construction of the Boundaries of the Constraints Set:
Transform all inequalities (except the restricted condition on each
variable, if any) to equalities by adding slack or subtracting
surplus variables. Construction of the boundary of the restricted
variables is included in the next step.
2. Finding All the Vertices: If the number of variables (including slack
and surplus) is more than the number of equations, then set the
following number of variables to zero:
(Total number of variables, including slack and surplus variables) - (Number of equations)

The variables to set to zero are the slack, surplus, and restricted variables (any
Xi ≥ 0 or Xi ≤ 0) only. After setting this many variables to zero, find the
other variables by solving the resulting square system of equations.
Notice that if Xi ≥ 0, it could be written as Xi - Si = 0 with Si ≥ 0; setting the
corresponding Si to zero means Xi = 0, therefore it is not necessary to explicitly
introduce additional slack and surplus variables for the restricted variables. Note
also that when an Xi is unrestricted in sign, it does not add a constraint.
3. Check For Feasibility: All slack and surplus variables must be non-negative;
also check the restricted condition on each variable, if any. Each feasible
solution is called a Basic Feasible Solution, which is a corner point of
the feasible region.

4. Selecting the Optimal Corner Point: Among all feasible solutions
(i.e., feasible corner points), find the optimal one (if any) by evaluating
the objective function. There might be multiple optimal solutions.

Remember that a linear system AX = b has a solution if and only if the rank of A is equal
to the rank of the augmented matrix (A, b).
Definitions to know:
Each solution to any system of equations is called a Basic Solution (BS). Those BS
which are feasible are called Basic Feasible Solutions (BFS).
In every basic solution, the variables which you set equal to zero are called the
Non-Basic Variables (NBV); all other variables, which you compute by using the system
of equations, are called Basic Variables (BV).
Note that the list of decision variables which are BV's for an optimal solution is readily
available in the QSB optimal solution tableau.
Slack is the leftover of a resource. Surplus is the excess of production.
Number of basic solutions: After converting all inequalities into equalities, let T = total
number of variables including slack and surplus variables, E = number of equations, and
R = total number of slack, surplus, and restricted decision variables; then the
maximum number of basic solutions is:
R! / [(T - E)! (R + E - T)!]

where ! stands for the factorial. For example, 4! = (1)(2)(3)(4) = 24. Note that, by
definition, 0! = 1.
Notice that if E ≥ T or T > R + E, then the initial LP formulation could be wrong. The
remedies for corrective actions are discussed in The Dark Side of LP: Tools for Modeling
Validation section.
The main result: If an LP problem has bounded solution(s), then the algebraic method
generates the solution(s).
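The counting formula is easy to check directly. As a small sketch (not part of the original text), the function below evaluates R! / [(T - E)! (R + E - T)!] and reproduces the counts used in the worked examples that follow:

```python
from math import factorial

def max_basic_solutions(T, E, R):
    """Maximum number of basic solutions: R! / [(T-E)! (R+E-T)!]."""
    if E >= T or T > R + E:
        # per the note above, the LP formulation is suspect in these cases
        raise ValueError("check the LP formulation: need E < T <= R + E")
    return factorial(R) // (factorial(T - E) * factorial(R + E - T))

print(max_basic_solutions(5, 3, 3))  # Example 1 below: 3
print(max_basic_solutions(4, 2, 2))  # Example 2 below: 1
```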
Example 1: A Problem Without Any Restricted Variable:
Max X1 + X2
subject to:
X1 + X2 ≥ 10
X1 ≤ 8
X2 ≤ 12

Introducing the slack and surplus variables, we have:

X1 + X2 - S1 = 10
X1 + S2 = 8
X2 + S3 = 12
For this problem E = 3, T = 5, R = 3; therefore, there are at most 3! / [2! · 1!] = 3
basic solutions. To find the basic solutions, notice that there are 3 equations with 5
unknowns; setting any 2 (= 5 - 3) of the slack and surplus variables to zero and solving the three
resultant systems of equations, we have:
S1    S2    S3    X1    X2    X1 + X2
 0     0    10     8     2      10
10     0     0     8    12      20*
 0    10     0    -2    12      10

The optimal solution is S1 = 10, S2 = 0, S3 = 0, X1 = 8, X2 = 12, with an optimal value of 20.
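The vertex enumeration of Example 1 can be reproduced mechanically. The following sketch (an illustrative implementation, not part of the original text) sets each pair of slack/surplus variables to zero and solves the remaining square system with exact fractions:

```python
from fractions import Fraction
from itertools import combinations

# Equality system of Example 1 in the variables [X1, X2, S1, S2, S3]:
#   X1 + X2 - S1 = 10,   X1 + S2 = 8,   X2 + S3 = 12
A = [[1, 1, -1, 0, 0],
     [1, 0,  0, 1, 0],
     [0, 1,  0, 0, 1]]
b = [10, 8, 12]
slack = [2, 3, 4]                              # indices of S1, S2, S3

def solve(M, rhs):
    """Gauss-Jordan on a square system; returns None if singular."""
    n = len(M)
    M = [[Fraction(x) for x in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return None
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[n] for row in M]

best = None
for zero in combinations(slack, 2):            # choose the non-basic variables
    keep = [j for j in range(5) if j not in zero]
    sol = solve([[row[j] for j in keep] for row in A], b)
    if sol is None:
        continue
    x = [Fraction(0)] * 5
    for j, v in zip(keep, sol):
        x[j] = v
    if all(x[j] >= 0 for j in slack):          # feasibility of slack/surplus
        value = x[0] + x[1]                    # objective X1 + X2
        if best is None or value > best[0]:
            best = (value, x)

print(best[0])   # 20
```

The three basic solutions found match the table above, and the feasible one with the largest objective value is X1 = 8, X2 = 12, S1 = 10.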
Example 2: A Problem With One Restricted Variable:
The following problem is attributed to Andreas Enge and Petra Huhn.
Maximize 3X1 + X2 - 4X3
subject to:
X1 + X2 - X3 = 1,
X2 ≥ 2,
X1 ≥ 0
After adding a surplus variable, we have:
Maximize 3X1 + X2 - 4X3
subject to:
X1 + X2 - X3 = 1,
X2 - S1 = 2,
X1 ≥ 0, S1 ≥ 0
This LP problem cannot be solved by the graphical method. However, the algebraic
method is general in the sense that it does not put any limitations on the dimensionality of
the problem. Notice that we have two equations with one surplus variable and one
restricted decision variable. The parameters for this problem are: T = 4, R = 2, and E = 2.
This gives the total number of possible basic solutions: 2! / [(2!)(0!)] = 1. By setting the
surplus variable and X1 to zero we get:

X1    X2    X3    S1    3X1 + X2 - 4X3
 0     2     1     0         -2*

Thus the optimal solution is X1 = 0, X2 = 2, X3 = 1, with an optimal value of -2.


Example 3: The Carpenter's Problem:
(Maximize 5X1 + 3X2 subject to 2X1 + X2 ≤ 40 and X1 + 2X2 ≤ 50, with X1, X2 ≥ 0.)
Introducing the slack variables to convert the inequalities into equalities, we have:
2X1 + X2 + S1 = 40
X1 + 2X2 + S2 = 50
Here we have 2 equations with 4 unknowns. Since the variables X1 and X2 are both
restricted, we may set any two of the four variables, including these two, to zero. Solving the six
resultant systems of equations, we have:
S1     S2     X1     X2     5X1 + 3X2
 0      0     10     20       110*
 0    -30      0     40       infeasible
 0     30     20      0       100
15      0      0     25        75
-60     0     50      0       infeasible
40     50      0      0         0

Therefore, from the above table, we see that the optimal solution is S1 = 0, S2 = 0, X1 = 10, X2 = 20, with an optimal value of $110.

Example 4: A Problem With Mixed Constraints:

Min X1 + 2X2
subject to:
X1 + X2 ≥ 4
-X1 + X2 ≤ 2
X1 ≥ 0, and X2 is unrestricted in sign.
Introducing the surplus and slack variables, we have:
X1 + X2 - S1 = 4
-X1 + X2 + S2 = 2
Here we have 2 equations with 4 unknowns. Since only X1 is restricted, we must set any
two of the variables S1, S2, and X1 to zero. Solving the three resultant systems of
equations, we have:
X1    X2    S1    S2    X1 + 2X2
 0     4     0    -2    infeasible
 0     2    -2     0    infeasible
 1     3     0     0        7*

Therefore, from the above table, we see that the optimal solution is X1 = 1, X2 = 3, S1 = 0, S2 = 0, with an optimal value of 7.
Example 5: A Transportation Problem:
The goal is to find the most cost-effective way to transport the goods. The supply at each
origin (e.g., warehouse) O1 and O2, the demand at each destination (e.g., market) D1 and D2,
and the unit transportation costs are summarized in the following table.

The Unit Transportation Cost Matrix

            D1      D2      Supply
O1          20      30        200
O2          10      40        100
Demand     150     150        300
Let Xij denote the amount of shipment from source i to destination j. The LP
formulation of the problem, minimizing the total transportation cost, is:
Min 20X11 + 30X12 + 10X21 + 40X22
subject to:
X11 + X12 = 200
X21 + X22 = 100
X11 + X21 = 150
X12 + X22 = 150
all Xij ≥ 0
Because this transportation problem is balanced (total supply = total demand),
all constraints are in equality form. Moreover, any one of the constraints is
redundant (adding any two constraints and subtracting a third, we obtain the
remaining one). Let us delete the last constraint. The problem then reduces to:
Min 20X11 + 30X12 + 10X21 + 40X22
subject to:
X11 + X12 = 200
X21 + X22 = 100
X11 + X21 = 150
all Xij ≥ 0
This LP problem cannot be solved by the graphical method. However, the algebraic
method has no limitation on the LP dimension. Notice that we have three equations with
four restricted decision variables. Setting each of the variables in turn to zero, we get:
X11    X12    X21    X22    Total Transportation Cost
  0    200    150    -50    infeasible
200      0    -50    150    infeasible
150     50      0    100    8500
 50    150    100      0    6500*

Thus the optimal strategy is X11 = 50, X12 = 150, X21 = 100, and X22 = 0, with a least total transportation cost of $6,500.
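The enumeration for this example can be compressed further. With X11 = t, the three equality constraints give X12 = 200 - t, X21 = 150 - t, and X22 = t - 50, so each basic solution corresponds to a value of t that zeroes one of the four variables. The following sketch (illustrative, not from the original text) checks all four candidates:

```python
# Example 5: parameterize the shipments by t = X11.
def shipment(t):
    return [t, 200 - t, 150 - t, t - 50]       # [X11, X12, X21, X22]

def cost(x):
    c = [20, 30, 10, 40]                       # unit costs for X11, X12, X21, X22
    return sum(ci * xi for ci, xi in zip(c, x))

candidates = []
for t in [0, 200, 150, 50]:                    # t zeroing X11, X12, X21, X22 in turn
    x = shipment(t)
    if all(v >= 0 for v in x):                 # feasibility: all shipments >= 0
        candidates.append((cost(x), x))

best = min(candidates)
print(best)    # (6500, [50, 150, 100, 0])
```

Two of the four candidates are infeasible (a negative shipment), and the cheaper of the remaining two is the optimal strategy from the table above.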
Example 6: A Problem With Too Few Constraints:
As our last example, consider the following problem:
Max X1 + X2
subject to:
X1 + X2 ≤ 5
Introducing the slack variable, we have:
X1 + X2 + S1 = 5
The parameters for this problem are T = 3, R = 1, and E = 1. Notice that T > R + E, so, as
warned earlier, there are too few restricted variables to produce a corner point. By setting
the slack variable to zero we are left with the single equation X1 + X2 = 5 in two variables.
Therefore, there is no corner point; moreover, the feasible region is unbounded. However, any
solution such as X1 = 0, X2 = 5 is an optimal solution to the LP problem, with an optimal
value of 5.
For a refined algebraic method, visit the Solution Algorithms for LP Models Web site.
For more numerical examples, extensions, and proofs, read the following articles:
Arsham H., Affine Geometric Method for Linear Programs, Journal of Scientific
Computing, Vol. 12, No. 3, 289-303, 1997.
Arsham H., Links Among a Linear System of Equations, Matrix Inversion, and Linear
Program Solver Routines, Journal of Mathematical Education in Science and
Technology, Vol. 29, No. 5, 764-769, 1998.

Pivoting Row Operations


As you recall, the Algebraic Method involves solving many linear systems of
equations. When the LP problem has many variables and constraints, solving
so many systems of equations by hand becomes very tedious, and for very
large-scale problems it is an impossible task. Therefore, we need the computer
to do the computations for us. One of the algorithmic and computerized
approaches to solving linear systems of equations is known as the
Gauss-Jordan (GJ) pivoting operation.

GJ pivoting will also be required later in the Simplex Method, and now is a
good time to develop the habits it calls for.
Pivoting uses row operations (known as Gauss-Jordan row operations) to change one
matrix entry (the pivot) to 1, and then to change all other entries in the pivot's column
into zeros.
Once a pivot is chosen, the row operations of pivoting are as follows:
Step 1: Make the pivot 1 by dividing the pivot's row by the pivot number.
Step 2: Make the remainder of the pivot's column into zeros by adding to each row a
suitable multiple of the pivot row.
Note: The number changed to 1 is called the pivot; it is usually encircled and cannot be
zero. If it is zero, then interchange this row with a row below it that has a non-zero element
in that column (if there is none, the conversion is impossible).
A Numerical Example: Using the Gauss-Jordan row operations, solve the following
system of equations:
2X1 + X2 + X3 = 3
X1 + 3X3 = 1
2X1 + X2 = 2
The aim is to convert the coefficient matrix of the system into the identity matrix; the
resulting RHS elements (shown as ?) then provide the solution (if one exists):

1  0  0  |  ?
0  1  0  |  ?
0  0  1  |  ?

Step 1. Use row operations column by column.

Step 2. In each column:
a) First obtain the 1 in the appropriate row by multiplying that row by the reciprocal of
the pivot. Note: If there is a 0 in this position, exchange rows with a row below having
a non-zero element, if possible.
b) Reduce all other values in the column to zero by adding the appropriate multiple of the
row containing the 1 to each row.
Let's apply the above procedure to our numerical example.
Notation: Old Row [ ], New Row { }. Putting the coefficient matrix and the RHS next to
each other, the augmented matrix is:

2  1  1  |  3
1  0  3  |  1
2  1  0  |  2

Row #    Operation         Results            RHS

{1}      [1]/2             1    1/2    1/2    3/2
{2}      -1{1}+[2]         0   -1/2    5/2   -1/2
{3}      -2{1}+[3]         0     0     -1     -1

{1}      (-1/2){2}+[1]     1     0      3      1
{2}      [2]/(-1/2)        0     1     -5      1
{3}      (0){2}+[3]        0     0     -1     -1

{1}      (-3){3}+[1]       1     0      0     -2
{2}      (5){3}+[2]        0     1      0      6
{3}      [3]/(-1)          0     0      1      1

The solution is X1 = -2, X2 = 6, and X3 = 1, which can be verified by substitution.


Also visit the Web sites Solving a linear system of equations, Pivot Engine, Solve System
of Linear Equations, and The Equator Module.
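The two pivoting steps are straightforward to mechanize. The sketch below (an illustrative implementation using exact fractions, not part of the original text) applies them column by column and reproduces the solution of the numerical example:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Gauss-Jordan pivoting on a square augmented matrix [A | b]."""
    n = len(aug)
    m = [[Fraction(x) for x in row] for row in aug]
    for c in range(n):                       # pivot column by column
        # if the pivot is zero, swap with a lower row having a non-zero entry
        p = next(r for r in range(c, n) if m[r][c] != 0)
        m[c], m[p] = m[p], m[c]
        m[c] = [x / m[c][c] for x in m[c]]   # Step 1: make the pivot 1
        for r in range(n):                   # Step 2: zero the rest of the column
            if r != c:
                m[r] = [x - m[r][c] * y for x, y in zip(m[r], m[c])]
    return [row[-1] for row in m]            # the RHS column is the solution

# 2X1 + X2 + X3 = 3,   X1 + 3X3 = 1,   2X1 + X2 = 2
sol = gauss_jordan([[2, 1, 1, 3],
                    [1, 0, 3, 1],
                    [2, 1, 0, 2]])
print([int(v) for v in sol])   # [-2, 6, 1]
```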

The Simplex Method


The Simplex Method is another algorithm for solving LP problems. Recall
that the Algebraic Method provides all vertices, even those which are not
feasible. Therefore, it is not an efficient way of solving LP problems with large
numbers of constraints. The Simplex Method is a modification of the
Algebraic Method which overcomes this deficiency. However, the Simplex
Method has its own deficiencies. For example, it requires that all variables be
non-negative (≥ 0); also, all other constraints must be in ≤ form with
non-negative right-hand-side (RHS) values.

Like the Algebraic Method, the Simplex Method is a tabular solution algorithm.
However, each tableau in the simplex method corresponds to a movement from one basic
variable set (BVS, an extreme or corner point) to another, making sure that the objective
function improves at each iteration, until the optimal solution is reached.
The presentation of the simplex method is not universal. In the U.S.A., professors on the
West Coast enjoy solving minimization problems, while in the East the maximization
version is preferred. Even within each of these groups you will find differences in
presenting the simplex rules. The following process describes all the steps involved in
applying the simplex solution algorithm:
1. Convert the LP to the following form:

Convert the minimization problem into a maximization one (by multiplying the
objective function by -1).
All variables must be non-negative.
All RHS values must be non-negative (multiply both sides by -1, if needed).
All constraints must be in ≤ form (except the non-negativity conditions). No
strict equality or ≥ constraints are allowed.
If this condition cannot be satisfied, then use the Initialization of the Simplex
Method: Artificial-Free.
2. Convert all constraints to equalities by adding a different slack
variable for each one of them.
3. Construct the initial simplex tableau with all slack variables in the BVS.
The last row of the tableau contains the coefficients of the objective
function (the Cj row).
4. Determine whether the current tableau is optimal. That is:
If all RHS values are non-negative (the feasibility condition), and
if all elements of the last row, that is, the Cj row, are non-positive (the
optimality condition),

then stop: the current tableau contains an optimal solution.
Otherwise, go to the next step.

5. If the current BVS is not optimal, determine which nonbasic variable
should become a basic variable and which basic variable should
become a nonbasic variable. To find the new BVS with a better
objective function value, perform the following tasks:
o Identify the entering variable: the variable with the largest
positive Cj value (in case of a tie, select the variable that
corresponds to the leftmost of the columns).
o Identify the outgoing variable: the variable with the smallest
non-negative column ratio (to find the column ratios, divide the
RHS column by the entering variable's column, wherever possible).
In case of a tie, select the variable that corresponds to the
uppermost of the tied rows.
o Generate the new tableau: perform the Gauss-Jordan pivoting
operation to convert the entering column to an identity column
vector (including the element in the Cj row).

Go to step 4.
A short discussion of the simplex method's strategy: At the start of the simplex procedure,
the basis consists of the slack variables; remember that the first BVS has only slack
variables in it. The Cj row presents the increase in the value of the objective function that
results if one unit of the variable corresponding to the jth column is brought into the
basis. This row answers the question: can we improve the objective function by moving
to a new BVS? We will call it the Indicator Row (since it indicates whether the
optimality condition is satisfied).
The criterion for entering a new variable into the BVS is to cause the largest per-unit
improvement of the objective function. The criterion for removing a variable from the
current BVS is to maintain feasibility (making sure that the new RHS values, after
pivoting, remain non-negative).
Warning: Whenever during the simplex iterations you get a negative RHS, it means you
have selected a wrong outgoing variable. The best remedy is to start all over again.
Notice that there is a solution corresponding to each simplex tableau: the numerical
values of the basic variables are the RHS values, while the other variables (non-basic
variables) are always equal to zero.
Notice also that variables can exit and enter the basis repeatedly during the simplex
algorithm.

Numerical Recipes states that the simplex algorithm is 'almost always' O(max(N, M)),
meaning that the number of iterations is on the order of the number of variables or the
number of constraints, whichever is larger.
Geometric Interpretation of the Simplex Method: The simplex method always starts at the
origin (which is a corner point) and then jumps from a corner point to a neighboring
corner point until it reaches the optimal corner point (if the problem is bounded). Therefore,
at each of the simplex iterations, we are searching for a better solution among the vertices of
a simplex. A simplex in an n-dimensional space is the simplest shape having n + 1 vertices.
For example, a triangle is a simplex in 2-dimensional space, while a pyramid (tetrahedron)
is a simplex in 3-dimensional space. These movements can be seen by matching each
simplex tableau with a specific corner point in the graphical method, e.g., for the carpenter's
problem, as shown next.
A Numerical Example: The Carpenter's Problem
Maximize 5X1 + 3X2
subject to:
2X1 + X2 ≤ 40
X1 + 2X2 ≤ 50
and both X1, X2 are non-negative.
After adding two slack variables S1 and S2, the problem is equivalent to:
Maximize 5X1 + 3X2
subject to:
2X1 + X2 + S1 = 40
X1 + 2X2 + S2 = 50
and variables X1, X2, S1, S2 are all non-negative.
The initial tableau is:

BVS    X1     X2     S1    S2    RHS    Column Ratio (C/R)
S1    [2]      1      1     0     40    40/2 = 20
S2      1      2      0     1     50    50/1 = 50
Cj      5      3      0     0

The solution shown by this tableau is S1 = 40, S2 = 50, X1 = 0, and X2 = 0. This
solution is the origin, shown in our graphical method.
This tableau is not optimal, since some of the Cj elements are positive. The incoming
variable is X1 and the outgoing variable is S1 (by the C/R test). The pivot element is in
brackets. After pivoting, we have:
BVS    X1     X2      S1    S2    RHS    Column Ratio (C/R)
X1      1    1/2     1/2     0     20    20/(1/2) = 40
S2      0   [3/2]   -1/2     1     30    30/(3/2) = 20
Cj      0    1/2    -5/2     0

The solution of this tableau is X1 = 20, S2 = 30, S1 = 0, and X2 = 0. This solution is the
corner point (20, 0), shown in our graphical method.
This tableau is not optimal, since some of the Cj elements are positive. The incoming
variable is X2 and the outgoing variable is S2 (by the C/R test). The pivot element is in
brackets. After pivoting, we have:
BVS    X1     X2      S1      S2    RHS
X1      1      0     2/3    -1/3     10
X2      0      1    -1/3     2/3     20
Cj      0      0    -7/3    -1/3

The solution of this tableau is X1 = 10, X2 = 20, S1 = 0, and S2 = 0. This solution is the
corner point (10, 20), shown in our graphical method.
This tableau is optimal, since all Cj elements are non-positive and all RHS values are
non-negative. The optimal solution is X1 = 10, X2 = 20, S1 = 0, S2 = 0. To find the optimal
value, plug this solution into the objective function: 5X1 + 3X2 = 5(10) + 3(20) = $110.
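The tableau iterations above can be automated. The sketch below (an illustrative implementation of the entering/outgoing rules and Gauss-Jordan pivoting described in steps 4 and 5, using exact fractions and assuming a bounded problem) solves the carpenter's problem:

```python
from fractions import Fraction

def simplex(A, b, c):
    """Maximize c.x subject to A x <= b, x >= 0, with all b >= 0."""
    m, n = len(A), len(c)
    # initial tableau: each constraint row gets its own slack column
    rows = [[Fraction(v) for v in A[i]] +
            [Fraction(int(i == j)) for j in range(m)] +
            [Fraction(b[i])] for i in range(m)]
    cj = [Fraction(v) for v in c] + [Fraction(0)] * (m + 1)
    basis = [n + i for i in range(m)]              # slacks start in the BVS
    while max(cj[:-1]) > 0:                        # optimality condition
        col = cj[:-1].index(max(cj[:-1]))          # entering: largest positive Cj
        ratios = [(r[-1] / r[col], i) for i, r in enumerate(rows) if r[col] > 0]
        row = min(ratios)[1]                       # outgoing: smallest column ratio
        basis[row] = col
        rows[row] = [v / rows[row][col] for v in rows[row]]
        for i in range(m):                         # Gauss-Jordan pivoting
            if i != row:
                rows[i] = [v - rows[i][col] * w for v, w in zip(rows[i], rows[row])]
        cj = [v - cj[col] * w for v, w in zip(cj, rows[row])]
    x = [Fraction(0)] * (n + m)
    for i, r in enumerate(rows):                   # basic variables = RHS values
        x[basis[i]] = r[-1]
    return x[:n], -cj[-1]

# The carpenter's problem: max 5X1 + 3X2, 2X1 + X2 <= 40, X1 + 2X2 <= 50
sol, value = simplex([[2, 1], [1, 2]], [40, 50], [5, 3])
print([int(v) for v in sol], int(value))   # [10, 20] 110
```

Tracing the iterations reproduces the three tableaux shown above: the pivots are X1 replacing S1, then X2 replacing S2, ending with Cj row (0, 0, -7/3, -1/3).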
Visit also the Web sites tutOR, The Simplex Place, Simplex Machine, and LP-Explorer.

Further Readings:
Maros I., Computational Techniques of the Simplex Method, Kluwer Publishers, 2002.


The Copyright Statement: The fair use, according to the 1996 Fair Use Guidelines for
Educational Multimedia, of materials presented on this Web site is permitted for
non-commercial and classroom purposes only.
This site may be mirrored intact (including these notices), on any server with public
access, and linked to other Web pages. All files are available at
http://www.mirrorservice.org/sites/home.ubalt.edu/ntsbarsh/Business-stat for mirroring.
Kindly e-mail me your comments, suggestions, and concerns. Thank you.
Professor Hossein Arsham

This site was launched on 2/25/1994, and its intellectual materials have been thoroughly
revised on a yearly basis. The current version is the 8th Edition. All external links are
checked once a month.
