http://www.cise.ufl.edu/research/sparse/Morgan/chapter3.htm
Due to the variety of expressive modes that linear programs can assume, the simplex.m routine was needed to translate the various possibilities into the form the routines expected. For instance, some linear programs impose equality constraints (Ax = b) while others impose only inequality constraints (Ax <= b or Ax >= b), and the two cases require different processing. In the case of inequality constraints, slack variables -- variables adding no cost to the solution -- were used to create equality constraints, and the initial basic feasible solution consisted simply of these slack variables. In the case of equality constraints, however, an initial basic feasible solution needed to be found first. Following standard practice, the initial BFS was found by solving the same set of equations with the slack variables given positive costs and the original variables given zero cost. If this produced a solution with a minimized cost of zero, then all the slack variables had been swapped out and this solution became the initial basic feasible solution to the actual problem. If the minimized cost was nonzero, then the original problem had no feasible solution.
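The Phase-I setup described above can be sketched as follows (in Python rather than the thesis's MATLAB; phase1_setup and its argument layout are hypothetical illustrations, not the actual simplex.m interface). Each row receives one auxiliary column; auxiliaries on equality rows get unit Phase-I cost while everything else gets zero cost, so a minimized Phase-I cost of zero means the costed auxiliaries have all been swapped out of the basis.

```python
def phase1_setup(A, eq):
    """Sketch of the Phase-I construction (hypothetical helper).

    A:  m x n constraint coefficient rows.
    eq: list of m flags, True where the row is an equality Ax = b.
    Returns the augmented matrix, the Phase-I cost vector, and the
    initial basis (the auxiliary columns).
    """
    m, n = len(A), len(A[0])
    A_aug = [row[:] for row in A]
    for i in range(m):
        for j in range(m):
            # One slack/auxiliary column per row, identity pattern.
            A_aug[i].append(1.0 if i == j else 0.0)
    # Zero cost on the original variables; positive (unit) cost on the
    # auxiliaries of equality rows, so driving the Phase-I objective to
    # zero forces those auxiliaries out of the basis.
    cost = [0.0] * n + [1.0 if eq[i] else 0.0 for i in range(m)]
    basis = list(range(n, n + m))  # start from the auxiliary columns
    return A_aug, cost, basis
```

A zero Phase-I optimum then hands the remaining basis directly to Phase II as the initial basic feasible solution; a nonzero optimum signals infeasibility.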
1 of 4
07-11-2012 10:43
has already been compiled and optimized for the particular platform, the actual running time of this routine ought to be very impressive even though the flop count may not be.
Bartels-Golub Method
The Bartels-Golub Method so closely parallels the Revised Simplex Method that the code looks nearly the same. Basis inversion is not used; the double backsolve is used instead. Algebraically, the results are identical. It should be noted, though, that the double backsolve is performed three times per iteration, whereas the matrix inversion was performed only once in the Revised Simplex Method. The extra calculations do not change the asymptotic behavior, but they do erode Bartels-Golub's slight time-complexity edge over the Revised Simplex Method. LU decomposition could be performed only once per iteration to determine the inverse of the basis; however, Bartels and Golub's original presumption was that the LU factors would be much less dense than the basis inverse, thus saving space in memory. A measurement of the actual space saved was attempted in order to justify this assumption.
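The double backsolve amounts to one forward substitution through L followed by one back substitution through U, so Bx = b is solved from the factors without ever forming the basis inverse. A minimal sketch, in Python rather than the thesis's MATLAB, with dense lists standing in for the sparse factors:

```python
def forward_solve(L, b):
    # Solve L y = b for lower-triangular L.
    y = []
    for i in range(len(b)):
        s = b[i] - sum(L[i][j] * y[j] for j in range(i))
        y.append(s / L[i][i])
    return y

def back_solve(U, y):
    # Solve U x = y for upper-triangular U.
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / U[i][i]
    return x

def double_backsolve(L, U, b):
    # B x = b with B = L U: one forward solve, then one back solve.
    return back_solve(U, forward_solve(L, b))
```

Because each substitution touches only the stored nonzeros of its triangular factor, the cost tracks the density of L and U rather than that of the (typically denser) basis inverse.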
Forrest-Tomlin Method
The matrix U remains a row-permuted upper-triangular matrix throughout the algorithm. It is column-permuted to proper upper-triangular form when a new variable enters the basis, so that the new column appears at the far right, as shown in Figures 11, 12, and 13. This is the implementation suggested by Forrest and Tomlin in their original paper.
Figure 11
Figure 12
Figure 13
Calculation of the row factor in each iteration occurs during the update step. When the new column enters the basis, the row elements in the corresponding row need to be canceled, thus casting the diagonal entry of the incoming column as the bottom-most entry of the proper upper-triangular matrix. These row entries are canceled by post-multiplying the non-diagonal entries by the inverse of U (rather, by forward-solving the row entries through U). The result, also a row as shown in Figure 14, has its entries negated to simulate the inverse of an eta matrix containing this row, and is then placed in R.

Figure 14
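The row-factor computation can be sketched as follows (in Python, with a hypothetical row_factor helper; the actual routine works on sparse MATLAB structures): the off-diagonal entries of the leaving row are forward-solved through the trailing upper-triangular block of U, and the negated result is the eta row placed in R.

```python
def row_factor(U, p):
    """Sketch of the Forrest-Tomlin row update (hypothetical helper).

    Solves r^T * U[p+1:, p+1:] = U[p, p+1:] by forward substitution
    (the trailing block is upper-triangular, so the entries resolve
    left to right), then negates r to simulate the eta-matrix inverse.
    """
    n = len(U)
    idx = list(range(p + 1, n))
    r = []
    for k, j in enumerate(idx):
        # Each column j of the trailing block involves only the
        # already-computed entries r[0..k-1], plus the diagonal.
        s = U[p][j] - sum(r[t] * U[idx[t]][j] for t in range(k))
        r.append(s / U[j][j])
    return [-v for v in r]  # negated entries form the row stored in R
```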
Reid's Method
Reid's Method is nearly identical to the Sparse Bartels-Golub Method. The extra step that Reid takes is simple in concept but complex to implement, so a separate routine was added: rotate. The difficulty lay in achieving an efficient implementation, because a linear search for each row and column singleton would take more time than would otherwise be saved. Therefore, a linear search occurs only twice: once for column singletons and once for row singletons, both right at the beginning of the rotation step. As each singleton is eliminated, only the necessary rows and columns are checked for new singletons. So long as the matrices remain sparse, this process is far more efficient. Implementation of Reid's Method becomes even more expensive in MATLAB: the overhead of interpreting the code on each iteration of rotate is significant. The flop count should still indicate an improvement in calculation efficiency over the other methods, since the rotations involve integer manipulations rather than costly floating-point calculations. Only U and the permutation vectors are manipulated by this routine, so the Sparse Bartels-Golub code can continue without any extra modifications.
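The singleton bookkeeping described above can be sketched in Python (eliminate_singletons is a hypothetical illustration, not the thesis's rotate routine; only the row-singleton pass is traced, the column pass being symmetric). One initial scan builds the nonzero counts, and after each elimination only the rows touched by the removed row and column are re-checked.

```python
from collections import Counter

def eliminate_singletons(nonzeros):
    """Eliminate row singletons from a sparsity pattern.

    nonzeros: set of (row, col) positions of nonzero entries.
    Returns the singleton entries in elimination order.
    """
    row_cnt = Counter(i for i, _ in nonzeros)
    col_cnt = Counter(j for _, j in nonzeros)
    live = set(nonzeros)
    queue = [i for i, c in row_cnt.items() if c == 1]  # one linear scan
    order = []
    while queue:
        i = queue.pop()
        entry = next(((r, c) for r, c in live if r == i), None)
        if entry is None:
            continue  # row emptied by an earlier elimination
        order.append(entry)
        _, col = entry
        # Drop the singleton's row and column; only rows that lost an
        # entry can become new singletons, so only those are re-checked.
        for r, c in [e for e in live if e[0] == i or e[1] == col]:
            live.discard((r, c))
            row_cnt[r] -= 1
            col_cnt[c] -= 1
            if r != i and row_cnt[r] == 1:
                queue.append(r)
    return order
```

Note that the count updates are pure integer manipulations, matching the claim above that the rotations avoid floating-point work.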