
LU Factorization

Kristin Davies

Peter He

Feng Xie

Hamid Ghaffari

April 4, 2007

Outline

Introduction to LU Factorization (Kristin)

LU Update (Hamid)

Introduction – Transformations – Sparsity – Simplex – Implementation

Matrix decomposition into the product of a lower and an upper triangular matrix:

A = LU

Example for a 3×3 matrix:

    [ a11 a12 a13 ]   [ l11  0   0  ] [ u11 u12 u13 ]
A = [ a21 a22 a23 ] = [ l21 l22  0  ] [  0  u22 u23 ]
    [ a31 a32 a33 ]   [ l31 l32 l33 ] [  0   0  u33 ]

Introduction – LU Existence

LU factorization can be completed on an invertible matrix if and only if all its principal minors are non-zero.

Recall: invertible — A is invertible if there exists B s.t. AB = BA = I.

Recall: principal minors (3×3 case) —

det(a11), det(a22), det(a33),

det [ a11 a12 ] , det [ a22 a23 ] , det [ a11 a13 ] ,
    [ a21 a22 ]       [ a32 a33 ]       [ a31 a33 ]

    [ a11 a12 a13 ]
det [ a21 a22 a23 ]
    [ a31 a32 a33 ]

Imposing the requirement that the diagonal of either L or U must consist of ones results in a unique LU factorization:

[ a11 a12 a13 ]   [  1   0   0 ] [ u11 u12 u13 ]
[ a21 a22 a23 ] = [ l21  1   0 ] [  0  u22 u23 ]
[ a31 a32 a33 ]   [ l31 l32  1 ] [  0   0  u33 ]

or

[ a11 a12 a13 ]   [ l11  0   0  ] [ 1 u12 u13 ]
[ a21 a22 a23 ] = [ l21 l22  0  ] [ 0  1  u23 ]
[ a31 a32 a33 ]   [ l31 l32 l33 ] [ 0  0   1  ]

LU factorization is useful in numerical analysis for:

– Solving systems of linear equations (AX=B)
– Computing the inverse of a matrix

It is especially useful when there is a need to solve a set of equations for many different values of B: factor A once, then reuse L and U for each B.

Transformation Algorithms

Modified forms of Gaussian elimination:

– Crout factorization – U has 1’s on its diagonal
– Cholesky factorization – U=LT or L=UT

Solution procedure:

– Construct the matrices L and U (if possible)
– Solve LY=B for Y using forward substitution
– Solve UX=Y for X using back substitution

Transformations – Doolittle

Doolittle factorization – L has 1’s on its diagonal

General algorithm – determine rows of U from top to

bottom; determine columns of L from left to right

for i=1:n
  for j=i:n
    Σk Lik·Ukj = Aij gives row i of U
  end
  for j=i+1:n
    Σk Ljk·Uki = Aji gives column i of L
  end
end
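The double loop above translates directly into code. A minimal dense Doolittle factorization in pure Python (a sketch: no pivoting, so it assumes all leading principal minors are nonzero), checked against the 3×3 example used on the following slides:

```python
def doolittle(A):
    """Doolittle LU: L has 1's on its diagonal.  No pivoting, so every
    leading principal minor of A must be nonzero."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        # row i of U:  sum_k L[i][k]*U[k][j] = A[i][j]
        for j in range(i, n):
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        # column i of L:  sum_k L[j][k]*U[k][i] = A[j][i]
        for j in range(i + 1, n):
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[2, -1, -2], [-4, 6, 3], [-4, -2, 8]]   # the example on the next slides
L, U = doolittle(A)
# L = [[1,0,0],[-2,1,0],[-2,-1,1]],  U = [[2,-1,-2],[0,4,-1],[0,0,3]]
```

The two inner loops mirror the pseudocode exactly: the first fills row i of U, the second column i of L.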

Row step (for j=i:n, Σk Lik·Ukj = Aij gives row i of U):

[  1   0   0 ] [ u11 u12 u13 ]   [  2 −1 −2 ]
[ l21  1   0 ] [  0  u22 u23 ] = [ −4  6  3 ]
[ l31 l32  1 ] [  0   0  u33 ]   [ −4 −2  8 ]

Row 1 of U:

(1 0 0)·(u11, 0, 0)T = 2  ⇒  u11 = 2

Similarly, u12 = −1 , u13 = −2.

Column step (for j=i+1:n, Σk Ljk·Uki = Aji gives column i of L):

[  1   0   0 ] [ 2  −1  −2  ]   [  2 −1 −2 ]
[ l21  1   0 ] [ 0  u22 u23 ] = [ −4  6  3 ]
[ l31 l32  1 ] [ 0   0  u33 ]   [ −4 −2  8 ]

Column 1 of L:

(l21 1 0)·(2, 0, 0)T = −4  ⇒  2·l21 = −4  ⇒  l21 = −2

Similarly, l31 = −2.

Continuing in the same way — 2nd row of U, 2nd column of L, 3rd row of U — fills in the remaining entries:

[  1   0  0 ] [ 2 −1 −2 ]   [  2 −1 −2 ]
[ −2   1  0 ] [ 0  4 −1 ] = [ −4  6  3 ]
[ −2  −1  1 ] [ 0  0  3 ]   [ −4 −2  8 ]

Execute the algorithm for our example (n=3):

i=1:
  j=1: Σk L1k·Uk1 = A11 gives row 1 of U
  j=2: Σk L1k·Uk2 = A12 gives row 1 of U
  j=3: Σk L1k·Uk3 = A13 gives row 1 of U
  j=1+1=2: Σk L2k·Uk1 = A21 gives column 1 of L
  j=3: Σk L3k·Uk1 = A31 gives column 1 of L
i=2:
  j=2: Σk L2k·Uk2 = A22 gives row 2 of U
  j=3: Σk L2k·Uk3 = A23 gives row 2 of U
  j=2+1=3: Σk L3k·Uk2 = A32 gives column 2 of L
i=3:
  j=3: Σk L3k·Uk3 = A33 gives row 3 of U

Transformations – Crout

Crout factorization – U has 1’s on its diagonal

General algorithm – determine columns of L from left to right; determine rows of U from top to bottom (the mirror image of Doolittle)

for i=1:n
  for j=i:n
    Σk Ljk·Uki = Aji gives column i of L
  end
  for j=i+1:n
    Σk Lik·Ukj = Aij gives row i of U
  end
end

Column step (for j=i:n, Σk Ljk·Uki = Aji gives column i of L):

[ l11  0   0  ] [ 1 u12 u13 ]   [  2 −1 −2 ]
[ l21 l22  0  ] [ 0  1  u23 ] = [ −4  6  3 ]
[ l31 l32 l33 ] [ 0  0   1  ]   [ −4 −2  8 ]

Column 1 of L:

(l11 0 0)·(1, 0, 0)T = 2  ⇒  l11 = 2

Similarly, l21 = −4 , l31 = −4.

Transformations – Solution

Once the L and U matrices have been found, we can easily solve our system:

AX=B  ⇒  LUX=B ; let Y=UX, then LY=B and UX=Y

[ l11  0   0  ] [ y1 ]   [ b1 ]     [ u11 u12 u13 ] [ x1 ]   [ y1 ]
[ l21 l22  0  ] [ y2 ] = [ b2 ]     [  0  u22 u23 ] [ x2 ] = [ y2 ]
[ l31 l32 l33 ] [ y3 ]   [ b3 ]     [  0   0  u33 ] [ x3 ]   [ y3 ]

Forward Substitution:

y1 = b1 / l11
yi = ( bi − Σ_{j=1}^{i−1} lij·yj ) / lii ,  for i = 2,…,n

Backward Substitution:

xn = yn / unn
xi = ( yi − Σ_{j=i+1}^{n} uij·xj ) / uii ,  for i = (n−1),…,1
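The two substitution formulas translate directly into code. A pure-Python sketch, using the L and U computed earlier for the example matrix:

```python
def forward_sub(L, b):
    """Solve L y = b for lower triangular L, top to bottom."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][j] * y[j] for j in range(i))) / L[i][i]
    return y

def backward_sub(U, y):
    """Solve U x = y for upper triangular U, bottom to top."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# factors of the earlier example; solve A x = b for b = (1, 1, 1)
L = [[1, 0, 0], [-2, 1, 0], [-2, -1, 1]]
U = [[2, -1, -2], [0, 4, -1], [0, 0, 3]]
x = backward_sub(U, forward_sub(L, [1, 1, 1]))
# x = [3.125, 1.25, 2.0]
```

To solve for another right-hand side, only the two substitutions are repeated; the factorization is reused.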

Module for Forward and Backward Substitution.
http://math.fullerton.edu/mathews/n2003/BackSubstitutionMod.html

Forward and Backward Substitution.
www.ac3.edu/papers/Khoury-thesis/node13.html

LU Decomposition.
http://en.wikipedia.org/wiki/LU_decomposition

Crout Matrix Decomposition.
http://en.wikipedia.org/wiki/Crout_matrix_decomposition

Doolittle Decomposition of a Matrix.
www.engr.colostate.edu/~thompson/hPage/CourseMat/Tutorials/CompMethods/doolittle.pdf

Sparse Matrix

A matrix with few nonzero entries — for example diag(d1,…,dn) as n ≫ 0. Storing all n² entries is wasteful, which motivates some kinds of matrix storage format.

Sparse Matrix

Coordinate-list (triplet) storage structure:

       row    1 1 2 2 4 4
list = column 3 5 3 4 2 3
       value  3 4 5 7 2 6
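In code, coordinate storage is just the list of nonzero (row, column, value) triples. A sketch; the dense matrix below is one consistent with the slide's list and is otherwise an assumption of mine:

```python
def to_triplets(A):
    """Coordinate-list storage: one (row, column, value) triple per nonzero."""
    return [(i + 1, j + 1, A[i][j])        # 1-based indices, as on the slide
            for i in range(len(A)) for j in range(len(A[0])) if A[i][j] != 0]

A = [[0, 0, 3, 0, 4],     # a matrix consistent with the slide's list
     [0, 0, 5, 7, 0],
     [0, 0, 0, 0, 0],
     [0, 2, 6, 0, 0]]
# rows 1 1 2 2 4 4, columns 3 5 3 4 2 3, values 3 4 5 7 2 6
assert to_triplets(A) == [(1, 3, 3), (1, 5, 4), (2, 3, 5), (2, 4, 7), (4, 2, 2), (4, 3, 6)]
```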

Sparse Matrix

Compressed column storage of

    [ 1 −3  0 −1  0 ]
    [ 0  0 −2  0  3 ]
A = [ 2  0  0  0  0 ]
    [ 0  4  0 −4  0 ]
    [ 5  0 −5  0  6 ]

Subscripts: 1 2 3 4 5 6 7 8 9 10 11
Colptr:     1 4 6 8 10 12
Rowind:     1 3 5 1 4 2 5 1 4 2 5
Value:      1 2 5 −3 4 −2 −5 −1 −4 3 6
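The three arrays can be generated from the dense matrix above; a sketch, using 1-based subscripts to match the slide:

```python
def to_csc(A):
    """Build the Colptr / Rowind / Value arrays of compressed column storage
    (1-based, as on the slide)."""
    m, n = len(A), len(A[0])
    colptr, rowind, value = [1], [], []
    for j in range(n):
        for i in range(m):
            if A[i][j] != 0:
                rowind.append(i + 1)
                value.append(A[i][j])
        colptr.append(len(value) + 1)    # start of the next column's entries
    return colptr, rowind, value

A = [[1, -3,  0, -1, 0],
     [0,  0, -2,  0, 3],
     [2,  0,  0,  0, 0],
     [0,  4,  0, -4, 0],
     [5,  0, -5,  0, 6]]
colptr, rowind, value = to_csc(A)
assert colptr == [1, 4, 6, 8, 10, 12]
assert rowind == [1, 3, 5, 1, 4, 2, 5, 1, 4, 2, 5]
assert value == [1, 2, 5, -3, 4, -2, -5, -1, -4, 3, 6]
```

Column j's entries live in positions colptr[j] … colptr[j+1]−1, so a single column can be scanned without touching the rest of the matrix.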

Structure Decomposition of Sparse Matrix (square)

A ↔ G(A) ↔ Adj(G), with Adj(G)ij ∈ {0,1}

If P is a permutation matrix, then G(A) = G(PT A P).

Structure Decomposition of Sparse Matrix (square)

1. There is no loop in G(A).

Step 1. Set t = 1.

Step 2. Collect the output vertices St = { i1t, i2t, …, intt } ⊂ V(G).

Step 3. Delete them from the graph: adj(G) ← adj(G) \ adj_{i1t i2t … intt}(G); set t ← t + 1 and return to Step 2.

Structure Decomposition of Sparse Matrix (square)

After p times, {1, 2, …, n} = S1 ∪ S2 ∪ … ∪ Sp (a separation of the set), and there exists P = ( Ps1T, Ps2T, …, PspT )T.

Structure Decomposition of Sparse Matrix (square)

2. There is a loop in G(A).

If the graph is not strongly connected, the reachability matrix of adj(A) contains zero entries.

Step 1. Choose the j-th column, t = 1, and

St(j) = { i | (R ∩ RT)ij = 1, 1 ≤ i ≤ n }

Structure Decomposition of Sparse Matrix (square)

      = { i1t, i2t, …, intt }

( j ∈ St ⊂ {1, 2, …, n}; St is closed under strong connection. )

Step 2. Choose the j1-th column ( j1 ≠ j ), set j = j1 and t = t + 1, and return to Step 1.

After p times,

{1, 2, …, n} = S1 ∪ S2 ∪ … ∪ Sp

Structure Decomposition of Sparse Matrix (square)

In case 1, PT adj(G) P is a lower triangular block matrix; therefore PT A P is a lower triangular block matrix.

A note on case 2 mentioned above:

                   [ Â11 Â12  ⋯  Â1p ]
Â = PT adj(G) P =  [ Â21 Â22  ⋯  Â2p ]
                   [  ⋮   ⋮   ⋱   ⋮  ]
                   [ Âp1  ⋯   ⋯  Âpp ]

Structure Decomposition of Sparse Matrix (square)

Define B = (bij) ∈ R^{p×p} by

bij = 0 for i = j; otherwise bij = 1 if Âij ≠ 0 and bij = 0 if Âij = 0.

Structure Decomposition of Sparse Matrix (square)

By an analysis similar to case 1,

                                  [ ×         ]
P(j1 j2 … jp)T B P(j1 j2 … jp) =  [ × ⋱       ]  ∈ R^{p×p}
                                  [ ⋮    ⋱    ]
                                  [ × × ⋯ ×   ]

Therefore, PT A P is a lower triangular block matrix.

Structure Decomposition of Sparse Matrix (square)

Application: large-scale linear systems. Generally speaking, for a large-scale system, system stability can easily be judged after the decomposition. The method is used in hierarchical optimization of large-scale dynamic systems.

Structure Decomposition of Sparse Matrix (General)

Dulmage–Mendelsohn Decomposition: ∃ P, Q such that

          [ Ah  ×  × ]
PT A Q =  [     As × ]
          [        Av ]

Structure Decomposition of Sparse Matrix (General)

Ah → the block diagonal form
Av → the block diagonal form
As → the block upper triangular form

The D-M decomposition can be seen in reference [3].

Computation and storage: minimize fill-in; the details can be seen in reference [2].

LU Factorization Method: Gilbert/Peierls

1. L = I
2. U = I
3. for k = 1:n
4.   s = L \ A(:,k)
5.   (partial pivoting on s)
6.   U(1:k,k) = s(1:k)
7.   L(k:n,k) = s(k:n) / U(k,k)
8. end

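A dense, unpivoted sketch of the column loop above (the real Gilbert/Peierls code works on sparse columns and pivots in step 5; this only mirrors the left-looking structure):

```python
def left_looking_lu(A):
    """Dense sketch of the Gilbert/Peierls column loop, no pivoting:
    for each column k, solve s = L \\ A(:,k) with the columns of L built
    so far, then split s into U(1:k,k) and L(k+1:n,k) / U(k,k)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for k in range(n):
        s = [A[i][k] for i in range(n)]
        for j in range(n):                  # forward substitution: s = L \ A(:,k)
            for i in range(j + 1, n):
                s[i] -= L[i][j] * s[j]
        for i in range(k + 1):              # U(1:k,k) = s(1:k)
            U[i][k] = s[i]
        for i in range(k + 1, n):           # L(k+1:n,k) = s(k+1:n) / U(k,k)
            L[i][k] = s[i] / U[k][k]
    return L, U
```

Run on the deck's 3×3 example this reproduces the same L and U as the Doolittle loop; in the sparse setting the forward solve touches only the reachable entries of each column.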

LU Factorization Method: Gilbert/Peierls

Theorem: the algorithm for LU factorization of A with partial pivoting can be implemented to run in O(flops(LU) + m) time on a RAM, where m is the number of nonzero entries of A.

Note: the theorem says that the LU factorization will run in time within a constant factor of the best possible, but it does not say what the …

Sparse triangular solve x = L\b (column-oriented forward substitution):

x = b;
for j = 1:n
  if x(j) ~= 0
    x(j+1:n) = x(j+1:n) - L(j+1:n,j) * x(j);
  end
end
--- Total time: O(n + flops)

xj ≠ 0 ∧ lij ≠ 0  ⇒  xi ≠ 0
bi ≠ 0  ⇒  xi ≠ 0

Let G(L) have an edge j → i whenever lij ≠ 0.

Let β = { i | bi ≠ 0 } and χ = { i | xi ≠ 0 }.

Then χ = Reach_{G(L)}(β).

--- Total time: O(flops)
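The reachability characterization is what makes the O(flops) bound possible: a depth-first search from β finds χ, and the numeric substitution then touches only those entries. A pure-Python sketch for a unit lower triangular L stored by columns (the storage layout here is my own assumption, not from the slides):

```python
def sparse_lower_solve(Lcols, b):
    """Solve L x = b for unit lower triangular L, stored by columns:
    Lcols[j] is a list of (row, value) pairs strictly below the diagonal.
    Symbolic step: chi = Reach_G(L)(beta); numeric step: substitute over chi only."""
    beta = [j for j, bj in enumerate(b) if bj != 0]
    seen, stack, chi = set(), list(beta), []
    while stack:                          # DFS over edges j -> i with l_ij != 0
        j = stack.pop()
        if j not in seen:
            seen.add(j)
            chi.append(j)
            stack.extend(i for i, _ in Lcols.get(j, ()))
    chi.sort()                            # ascending index order is topological here
    x = list(b)
    for j in chi:                         # column-oriented forward substitution
        for i, lij in Lcols.get(j, ()):
            x[i] -= lij * x[j]
    return x

# unit diagonal, l31 = 2 and l43 = 3 (1-based); only the rows reachable
# from b's nonzeros are ever touched
assert sparse_lower_solve({0: [(2, 2.0)], 2: [(3, 3.0)]}, [1, 0, 0, 0]) == [1, 0, -2, 6]
```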


References

J. R. Gilbert and T. Peierls, Sparse partial pivoting in time proportional to arithmetic operations, SIAM J. Sci. Statist. Comput., 9 (1988), pp. 862–874

Mihalis Yannakakis’ website: http://www1.cs.columbia.edu/~mihalis

A. Pothen and C. Fan, Computing the block triangular form of a sparse matrix, ACM Trans. on Math. Soft., Vol. 16, No. 4, Dec. 1990, pp. 303–324

Simplex Method – Problem Size

Problem size is determined by A. On average there are 5–10 nonzeros per column (e.g. greenbea.mps of NETLIB).

Simplex Method – Computational Form, Basis

Note: A is of full row rank and m < n.

Basic variables: x1, x2

  x1  x2  x3  x4
[  1   0   1   2 ]
[  0   1   2   1 ]

Simplex Method – Notations

β  index set of basic variables
γ  index set of non-basic variables
B := Aβ  basis
R := Aγ  non-basis (columns)

               [ xβ ]       [ cβ ]
A = [ B | R ], x = [ xγ ], c = [ cγ ]

Simplex Method – Basic Feasible Solution

Ax = b  ⇔  B·xβ + R·xγ = b  ⇒  xβ = B⁻¹(b − R·xγ)

Setting xγ = 0 (with x ≥ 0, Ax = b) gives a basic feasible solution. The goal is a basic feasible solution which is also optimal.

Simplex Method – Checking Optimality

Objective value: cT x = cβT xβ + cγT xγ = c0 + dT xγ

where c0 is constant and d holds the reduced costs.

Simplex Method – Improving Basic Feasible Solution

Increase a non-basic variable xq as much as possible while the basic variables remain feasible; the objective is c0 + dT xγ. If xq can grow without bound (xq → ∞) while remaining feasible, the objective value is unbounded.

Simplex Method – Improving Basic Feasible Solution

Choose xq (incoming variable) s.t. dq < 0, and increase xq as much as possible. Some basic variable xp goes to 0 first (outgoing variable). Either the solution is unbounded, or we move to a neighboring improving basis.


Simplex Method – Basis Updating

B = [ b1, …, bp, …, bm ]

Write a = Σ_{i=1}^{m} vⁱ bi = Bv ( v = B⁻¹a ), and let B̄ = [ b1, …, a, …, bm ].

Then, for vᵖ ≠ 0 (vᵖ is the pivot element):

bp = (1/vᵖ) a − Σ_{i≠p} (vⁱ/vᵖ) bi

bp = B̄η , where η = ( −v¹/vᵖ, …, −v^{p−1}/vᵖ, 1/vᵖ, −v^{p+1}/vᵖ, …, −vᵐ/vᵖ )T

η defines an elementary transformation matrix E with B̄⁻¹ = E·B⁻¹.

Simplex Method – Basis Updating

E = [ e1, …, e_{p−1}, η, e_{p+1}, …, em ]   (ETM — Elementary Transformation Matrix)

    [ 1      η1        ]
    [   ⋱    ⋮         ]
E = [        ηp        ]
    [        ⋮    ⋱    ]
    [        ηm      1 ]

i.e. the identity with column p replaced by η.

Simplex Method – Basis Updating

The basis tends to get denser after each update ( B̄⁻¹ = E·B⁻¹ ):

     [ 1      η1        ] [ w1 ]   [ w1 + wp·η1 ]
     [   ⋱    ⋮         ] [ ⋮  ]   [     ⋮      ]
Ew = [        ηp        ] [ wp ] = [   wp·ηp    ]
     [        ⋮    ⋱    ] [ ⋮  ]   [     ⋮      ]
     [        ηm      1 ] [ wm ]   [ wm + wp·ηm ]

Ew = w if wp = 0
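The update above can be applied without ever forming E. A sketch with 0-based indices (`eta_from` and `apply_eta` are names of my own, not from the slides); note the defining property E·v = e_p, and E·w = w whenever w_p = 0, matching the slide:

```python
def eta_from(v, p):
    """eta = (-v1/vp, ..., 1/vp, ..., -vm/vp)^T for pivot position p (0-based)."""
    vp = v[p]
    return [(1.0 / vp) if i == p else (-vi / vp) for i, vi in enumerate(v)]

def apply_eta(eta, p, w):
    """Compute E w, where E is the identity with column p replaced by eta."""
    wp = w[p]
    return [eta[p] * wp if i == p else w[i] + eta[i] * wp for i in range(len(w))]

v = [2.0, 4.0, -1.0]                 # v = B^-1 a for the entering column a
eta = eta_from(v, p=1)
assert apply_eta(eta, 1, v) == [0.0, 1.0, 0.0]                 # E v = e_p
assert apply_eta(eta, 1, [3.0, 0.0, 5.0]) == [3.0, 0.0, 5.0]   # E w = w when w_p = 0
```

Applying E to each row of B⁻¹ in this way is exactly the product-form update B̄⁻¹ = E·B⁻¹, at the cost of one scaled vector addition per row.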

Simplex Method – Algorithm

1. Find an initial feasible basis B
2. Initialization: B⁻¹b
4. Choose incoming variable xq: B⁻¹aq
5. Choose outgoing variable xp (pivot step)
6. Update basis: E·B⁻¹

Simplex Method – Algorithm

Numerical stability calls for a large pivot element; sparsity calls for preserving zeros. In practice, compromise.

Simplex Method – Typical Operations in Simplex Method

Typical operations: B⁻¹w, wT B⁻¹

Two ways:

– Product form of the inverse ( B⁻¹ = Ek·Ek−1 ⋯ E1 ) (obsolete)
– LU factorization

Simplex Method – LU Factorization

Reduce complexity using the LU update ( B̄⁻¹ = E·B⁻¹ )

Refactorization (to reinstate efficiency and numerical accuracy)

Sparse LU Updates in Simplex Method

Hamid R. Ghaffari

Outline

LU Update Methods

Preliminaries

Bartels-Golub LU Update

Sparse Bartels-Golub Method

Reid’s Method

The Forrest-Tomlin Method

Suhl-Suhl Method

More Details on the Topic

Revised Simplex Algorithm

– Determine the current solution: d = B⁻¹b
– Choose xq to enter the basis based on the greatest cost contribution: c̄ = cN⁰ − cB⁰·B⁻¹N, { q | c̄q = min_t c̄t }
– If xq cannot decrease the cost ( c̄q ≥ 0 ), d is an optimal solution
– Determine the xp that leaves the basis (becomes zero) as xq increases: w = B⁻¹Aq, dp/wp = min_t { dt/wt | wt > 0 }
– If xq can increase without causing another variable to leave the basis ( wi ≤ 0 for all i ), the solution is unbounded
– Update the dictionary: update B⁻¹

Note: In general we do not compute the inverse.

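One pricing-plus-ratio-test step of the iteration above can be sketched in code (`pivot_step` is a hypothetical helper of mine, and the data at the bottom is made up for illustration):

```python
def pivot_step(c_bar, d, W):
    """One pricing + ratio test of the revised simplex iteration (a sketch).
    c_bar: reduced costs of the nonbasic variables; d = B^-1 b;
    W[q] = B^-1 A_q for each nonbasic q."""
    q = min(range(len(c_bar)), key=lambda t: c_bar[t])
    if c_bar[q] >= 0:
        return "optimal"                 # no entering variable can decrease the cost
    w = W[q]
    ratios = [(d[t] / w[t], t) for t in range(len(d)) if w[t] > 0]
    if not ratios:
        return "unbounded"               # x_q can increase forever
    _, p = min(ratios)                   # d_p / w_p reaches zero first
    return (q, p)

# toy data, made up for illustration
assert pivot_step([-3.0, 1.0], d=[4.0, 6.0], W={0: [2.0, 1.0], 1: [1.0, 1.0]}) == (0, 0)
```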

Problems with the Revised Simplex Algorithm

– Error accumulates through repeated matrix manipulations (ill-conditioned matrices).
– Each iteration takes about m³ − m floating-point (real number) calculations.
– Goal: reduce the cost of this O(m³)-time algorithm as well as improve its accuracy.

Introducing Spike

…basis, then we have: [figure not recovered — replacing a column of the basis turns the upper triangular factor into a spiked upper triangular matrix]

The methods generally diverge at the next step: reduction of the spiked upper triangular matrix back to an upper-triangular matrix. (Chvátal, p. 150)

Bartels-Golub Method

Illustration

The first variant of the Revised Simplex Method was the Bartels-Golub

Method.

Bartels-Golub Method

Algorithm

Replace B⁻¹ with U⁻¹L⁻¹ throughout the revised simplex iteration:

d = B⁻¹b              →  d = U⁻¹L⁻¹b
c̄ = cN⁰ − cB⁰·B⁻¹N    →  c̄ = cN⁰ − cB⁰·U⁻¹L⁻¹N
{ q | c̄q = min_t c̄t }   (unchanged)
c̄q ≥ 0 ⇒ d optimal      (unchanged)
w = B⁻¹Aq             →  w = U⁻¹L⁻¹Aq
dp/wp = min_t { dt/wt | wt > 0 };  wi ≤ 0 for all i ⇒ unbounded
Update B⁻¹            →  Update U⁻¹ and L⁻¹

Bartels-Golub Method

Characteristics

– Can we do better?

Sparse Bartels-Golub Method

eta matrices

[ 1              ]   [ 1            ] [ 1            ] [ 1            ]
[ l21 1          ] = [ l21 1        ]·[     1        ]·[     1        ]
[ l31 l32 1      ]   [ l31     1    ] [     l32 1    ] [         1    ]
[ l41 l42 l43 1  ]   [ l41       1  ] [     l42   1  ] [         l43 1 ]

Single-Entry-Eta Decomposition (the entries of one column commute):

[ 1           ]   [ 1           ] [ 1           ] [ 1           ]
[ l21 1       ] = [ l21 1       ]·[     1       ]·[     1       ]
[ l31     1   ]   [         1   ] [ l31     1   ] [         1   ]
[ l41       1 ]   [           1 ] [           1 ] [ l41       1 ]

and hence L⁻¹ is also the product of the same matrices with their off-diagonal entries negated (taken in reverse order).
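The negation claim can be checked numerically. A sketch (4×4 example with made-up integer entries, factors listed column by column; for the inverse the negated factors must be multiplied in reverse order):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def single_eta(n, i, j, l):
    """Identity plus a single off-diagonal entry l in position (i, j)."""
    E = [[float(r == c) for c in range(n)] for r in range(n)]
    E[i][j] = l
    return E

n = 4
L = [[1, 0, 0, 0], [2, 1, 0, 0], [3, 4, 1, 0], [5, 6, 7, 1]]
# single-entry factors, column by column (within one column they commute)
entries = [(1, 0, 2), (2, 0, 3), (3, 0, 5), (2, 1, 4), (3, 1, 6), (3, 2, 7)]
P = single_eta(n, *entries[0])
for e in entries[1:]:
    P = matmul(P, single_eta(n, *e))
assert P == L                        # the product of single-entry etas is L
# inverse: same factors, entries negated, taken in reverse order
i0, j0, l0 = entries[-1]
Pinv = single_eta(n, i0, j0, -l0)
for i, j, l in reversed(entries[:-1]):
    Pinv = matmul(Pinv, single_eta(n, i, j, -l))
assert matmul(P, Pinv) == [[float(r == c) for c in range(n)] for r in range(n)]
```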

Sparse Bartels-Golub Method

Algorithm

Replace L⁻¹ with the product of eta matrices Πt ηt:

d = U⁻¹L⁻¹b             →  d = U⁻¹ (Πt ηt) b
{ q | c̄q = min_t c̄t }     (unchanged)
c̄q ≥ 0 ⇒ d optimal        (unchanged)
w = U⁻¹L⁻¹Aq            →  w = U⁻¹ (Πt ηt) Aq
dp/wp = min_t { dt/wt | wt > 0 };  wi ≤ 0 for all i ⇒ unbounded
Update U⁻¹ and L⁻¹      →  Update U⁻¹ and create any necessary eta matrices. If there are too many eta matrices, completely refactor the basis.

Sparse Bartels-Golub Method Advantages

– Only the eta matrices and U need to be stored.
– It is enough to store the location and value of the off-diagonal element for each eta matrix.
– Complexity improves significantly, to O(m²).
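The per-eta storage just described can be sketched as follows — each eta matrix reduces to a triple, and applying the whole product to a vector is a single pass (`apply_etas` is a name of my own):

```python
def apply_etas(etas, b):
    """Apply a product of single-entry eta matrices to b.  Each eta matrix
    is stored only as (i, j, l): the identity plus one off-diagonal entry l
    at position (i, j).  The list is given in application order (rightmost
    factor of the product first)."""
    x = list(b)
    for i, j, l in etas:
        x[i] += l * x[j]
    return x

# two eta matrices, applied right-to-left
assert apply_etas([(1, 0, 2.0), (2, 1, 3.0)], [1, 1, 1]) == [1, 3, 10]
```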

Sparse Bartels-Golub Method Disadvantages

– As eta matrices accumulate, it eventually becomes cheaper to redecompose the basis.
– Refactoring can also promote stability if noticeable round-off errors begin to occur.
– Refactoring happens quite frequently, often after every twenty iterations or so. (Chvátal, p. 111)
– If the spike always occurs in the first column and extends to the bottom row, the Sparse Bartels-Golub Method becomes worse than the Bartels-Golub Method:
– The upper-triangular matrix will always be fully decomposed, resulting in huge amounts of fill-in;
– Large numbers of eta matrices;
– O(n³)-cost decomposition.

Reid’s Suggestion on the Sparse Bartels-Golub Method

Apply the elimination only to the part that remained upper-Hessenberg.

Reid’s Method

Task: find a way to reduce the bump before attempting to decompose it.

Row singleton: any row of the bump that has only one non-zero entry.

Column singleton: any column of the bump that has only one non-zero entry.

Method:

– When a column singleton is found in a bump, it is moved to the top left corner of the bump.
– When a row singleton is found, it is moved to the bottom right corner of the bump.

Reid’s Method

Column Rotation

Reid’s Method

Row Rotation

Reid’s Method

Characteristics

Advantages:

– It significantly reduces the growth of the number of eta matrices in the Sparse Bartels-Golub Method.
– So, the basis should not need to be decomposed nearly as often.
– … some attempt to maintain stability.

Disadvantages:

– The rotations make absolutely no allowance for stability whatsoever.
– So, Reid’s Method remains numerically less stable than the Sparse Bartels-Golub Method.

The Forrest-Tomlin Method

Replace L⁻¹ with (Πt Rt) L⁻¹, where the Rt are row factors:

d = U⁻¹L⁻¹b             →  d = U⁻¹ (Πt Rt) L⁻¹ b
c̄ = cN⁰ − cB⁰·U⁻¹L⁻¹N   →  c̄ = cN⁰ − cB⁰·U⁻¹ (Πt Rt) L⁻¹ N
{ q | c̄q = min_t c̄t }     (unchanged)
c̄q ≥ 0 ⇒ d optimal        (unchanged)
w = U⁻¹L⁻¹Aq            →  w = U⁻¹ (Πt Rt) L⁻¹ Aq
dp/wp = min_t { dt/wt | wt > 0 };  wi ≤ 0 for all i ⇒ unbounded
Update U⁻¹ and L⁻¹      →  Update U⁻¹, creating a row factor as necessary. If there are too many factors, completely refactor the basis.

The Forrest-Tomlin Method Characteristics

Advantages:

– At most one row-eta matrix factor will occur for each iteration, where an unpredictable number occurred before.
– The code can take advantage of such knowledge for predicting necessary storage space and calculations.
– Fill-in should also be relatively slow, since fill-in can only occur within the spiked column.

Disadvantages:

– The Sparse Bartels-Golub Method allowed the LU-decomposition to pivot for numerical stability, but the Forrest-Tomlin Method makes no such allowances.
– Therefore, severe calculation errors due to near-singular matrices are more likely to occur.

Suhl-Suhl Method

This method is a modification of the Forrest-Tomlin Method.

For More Detail

L. M. Suhl and U. H. Suhl, A fast LU update for linear programming, Annals of Operations Research 43 (1993), pp. 33–47

Steven S. Morgan, A Comparison of Simplex Method Algorithms, University of Florida, 1997

Vasek Chvátal, Linear Programming, W.H. Freeman & Company (September 1983)

Thanks
