
SIAM J. NUMER. ANAL. Vol. 8, No. 4, December 1971

DIRECT METHODS FOR SOLVING SYMMETRIC INDEFINITE SYSTEMS OF LINEAR EQUATIONS*

J. R. BUNCH† AND B. N. PARLETT‡

Abstract. Methods for solving symmetric indefinite systems are surveyed, including a new one which is stable and almost as fast as the Cholesky method.

1. Introduction.

1.1. The problem. Consider direct as against iterative methods for solving Ax = b, where A is an n × n real symmetric matrix. If A is also positive definite, then Cholesky's method and triangular factorization are fast ((1/6)n³ multiplications), stable, and preserve symmetry. If A is indefinite, these methods can produce very inaccurate results and fail to give warning of what has occurred. It is therefore usual to recommend Gaussian elimination with partial or complete pivoting for indefinite systems, and the symmetry of A is of no advantage. Such systems do arise in practice, not so much in models of physical problems but in the intermediate stages which are produced by numerical methods. There have been attempts to devise algorithms which take advantage of symmetry and are also competitive with Gaussian elimination. These are reviewed and a new method is proposed which uses Kahan's generalization of a pivot to include 2 × 2 principal submatrices. The bound on growth of elements in the reduced matrices is nearly as good as Wilkinson's bound for elimination with complete pivoting; the number of multiplications, additions, and comparisons is at most of the order (1/6)n³ for each. Unfortunately, bandwidth is not preserved. The results in §§3-6 also hold for Hermitian systems, but for convenience we shall only consider real symmetric systems.

1.2. Summary of methods. Table 1 summarizes properties of some methods for solving Ax = b, where A is an n × n matrix and b is an arbitrary n-vector. Normalization: max_{i,j} |A_{ij}| = 1. Notation: f(n) = (2 · 3^{1/2} · 4^{1/3} ⋯ n^{1/(n−1)})^{1/2} < 2n^{(1/4) log n} (Wilkinson [15, p. 285]). In the rest of the article, the notation ~n³ will denote that lower order terms have been omitted. If x is an n-vector and A is an n × n matrix, then the max-norms of x and A are defined by ‖x‖_∞ = max_{1≤i≤n} |x_i| and ‖A‖_∞ = max_{1≤i≤n} Σ_j |A_{ij}|, and the one-norms by ‖x‖₁ = Σ_i |x_i| and ‖A‖₁ = max_{1≤j≤n} Σ_i |A_{ij}|. If A is symmetric, then ‖A‖₁ = ‖A‖_∞.
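As a quick illustration (ours, not part of the paper), the following Python snippet evaluates f(n) directly from the definition just given; the function name is our own.

```python
import numpy as np

def f(n):
    """f(n) = (2 * 3**(1/2) * 4**(1/3) * ... * n**(1/(n-1)))**(1/2),
    evaluated in log space to avoid overflow for large n."""
    r = np.arange(2, n + 1)
    return float(np.exp(0.5 * np.sum(np.log(r) / (r - 1))))

# f(100) is about 357; with the k**(1/2) factor of section 2.2 this gives
# Wilkinson's complete-pivoting growth bound ~ 3570 for k = 100.
print(f(100))
```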

1.3. Origin of indefinite systems. In the Rayleigh quotient iteration for finding eigenvectors of a positive definite matrix A (Wilkinson [16, p. 629]), there is a repeated need to solve equations of the form (A − r_i I)x_{i+1} = x_i, where r_i = x_i^t A x_i / x_i^t x_i. Because r_i lies in the spectrum of A, A − r_i I cannot be definite. Symmetric indefinite systems also arise from the application of the Ritz procedure to problems in control theory (Bosarge and Johnson [1]).

* Received by the editors January 20, 1970. This work was supported in part by the Atomic Energy Commission and in part by Contract NONR 3656(23) with the Computer Center, University of California, Berkeley, California.
† Applied Mathematics Division, Argonne National Laboratory, Argonne, Illinois. Now at Department of Mathematics, The University of Chicago, Chicago, Illinois 60637.
‡ Computer Science Department, University of California, Berkeley, California 94720.


[Table 1: properties of some methods for solving Ax = b, comparing work, storage, and stability; the body of the table did not survive reproduction.]



2. Known methods and their stability.

2.1. Stability. Consider the equation Ax = b, where A is n × n, det A ≠ 0 and b is an arbitrary n-vector. Let x̄ denote an approximation to A⁻¹b computed by some method. Unless x̄ = 0, there are infinitely many matrices E such that (A + E)x̄ = b. If it can be shown, for a particular method, that there is an E which is, in some sense, small, then the method may be regarded as satisfactory in that it has produced the exact solution of a system close to the original one. For brevity, the method is said to be stable. The standard criterion for stability is that, for some norm ‖·‖, the ratio ‖E‖/‖A‖ should be small. Note, however, that such a perturbation E may give very large relative changes to some of the smaller elements. Ideal would be a method for which, for each nonzero element A_{ij}, the ratio |E_{ij}/A_{ij}| would be small. No practical method known to the authors has this property. This fact would not matter if the size of matrix elements represented their importance to the given problem. This puts the burden on the user to choose the units for his variables and to scale his equations so that this is so.

2.2. General case: Gaussian elimination. Suppose that we could perform Gaussian elimination without interchanges in order to solve Ax = b in exact arithmetic. First we obtain A = LU, where L is unit lower triangular and U is upper triangular. Then we solve Lc = b for c, and finally Ux = c for x. However, in finite precision, we have error at each step. We can consider the computed L and U matrices to be the exact decomposition of A + F for some error matrix F. Similarly, (L + δL)c̄ = b and (U + δU)x̄ = c̄ for some error matrices δL and δU. Then E = F + L(δU) + (δL)U + (δL)(δU). Wilkinson [14] showed that for Gaussian elimination

|F_{ij}| ≤ 2.01 min(i − 1, j) 2^{−t} max_{n−min(i−1,j−1) ≤ k ≤ n} max_{r,s} |A^{(k)}_{rs}|

whenever |L_{ij}| ≤ 1 for 1 ≤ i, j ≤ n, and also

‖E‖_∞ ≤ 1.06 · 2^{−t} max_{1 ≤ k ≤ n} max_{r,s} |A^{(k)}_{rs}| · (2.01n² + n³ + 0.27n⁴2^{−t}),

where t is the number of binary digits to which numbers are represented, and A^{(k)} is the reduced matrix of order k in the elimination process. This shows the importance of preventing rapid growth in the elements in the reduced matrices. There are two well-known strategies for choosing permutation matrices P and Q such that Gaussian elimination without interchanges applied to PAQ will provide sufficiently small element growth in the reduced matrices. The first strategy, called complete pivoting, requires that we bring the largest element in the reduced matrix into the leading diagonal position. This strategy is called complete since we search the entire reduced matrix. Wilkinson [15] showed that with this complete strategy,

max_{i,j} |A^{(n−k+1)}_{ij}| ≤ k^{1/2} f(k) max_{i,j} |A_{ij}|.


In words, the elements in the reduced matrices can never become too large; so this strategy is never bad. It is conjectured that the true bound is

max_{i,j} |A^{(n−k+1)}_{ij}| ≤ k max_{i,j} |A_{ij}|

when A is real (Cryer [5, p. 343]). Equivalently, the above says that there exist permutation matrices P and Q such that Gaussian elimination without interchanges applied to PAQ gives

max_{i,j} |F_{ij}| ≤ 2.01 · 2^{−t} n^{3/2} f(n) max_{i,j} |A_{ij}|

and

‖E‖_∞ ≤ 1.06 · 2^{−t} n^{1/2} f(n) max_{i,j} |A_{ij}| (2.01n² + n³ + 0.27n⁴2^{−t}).

The second strategy, called partial pivoting, requires that we bring the largest element in the first column of the reduced matrix into the leading diagonal position. This strategy is called partial since we search only a part of the reduced matrix. This is equivalent to the application of Gaussian elimination without interchanges to PA, where P is a permutation matrix. Here max_{i,j} |A^{(n−k+1)}_{ij}| ≤ 2^{k−1} max_{i,j} |A_{ij}|. This bound is sharp (Wilkinson [16, p. 212]). It should be remarked that the partial pivoting algorithm can be organized so that the reduced matrices are not calculated explicitly. Each element is modified only once, at the moment when its final value can be calculated. This is essentially Crout's method.

The Escalator method (Householder [8, pp. 78-79]) is usually presented as a way of calculating A⁻¹ if the inverse of a principal submatrix is known. It can be recast as a way of calculating the triangular factors of A if the factors of a principal submatrix are known. It is possible to incorporate row and column interchanges into this approach, but it is not clear how to pick suitable interchanges without doing many subsidiary calculations. For general cases, the Escalator method is inferior to the elimination methods described above. For positive definite matrices, it reduces to the Cholesky method. It is useful when inner products can be accumulated easily in double precision.

2.3. Symmetric case. A basic approach of great power for solving symmetric operator equations is the spectral decomposition method. If A is an n × n symmetric matrix with det A ≠ 0, then the spectral decomposition of A is A = ΩΛΩᵗ, where Λ = diag(λ₁, ⋯, λ_n) and Ω is orthogonal. Given this, the solution of Ax = b is easily obtained as ΩΛ⁻¹Ωᵗb. However, this stable decomposition involves much more work than does Gaussian elimination and requires the use of irrational operations. It reveals more of the structure of A than is needed for finding x. We shall not consider it further.

The symmetric form of the triangular factorization LU is the decomposition LDLᵗ, where L is unit lower triangular and D is diagonal. Since we may only perform congruences on A, the error matrix depends on L, D, b and N, where N is a permutation matrix such that LDLᵗ = NANᵗ.
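Returning for a moment to the partial pivoting bound above: its sharpness can be checked numerically on the classic worst-case matrix (unit diagonal, −1 below the diagonal, last column of 1's). The snippet below is our illustration, not part of the paper.

```python
import numpy as np

def growth_partial_pivoting(n):
    """Return max_k max_ij |A^(k)_ij| during Gaussian elimination with
    partial pivoting, for the classic worst-case matrix (max |A_ij| = 1)."""
    A = np.eye(n) - np.tril(np.ones((n, n)), -1)
    A[:, -1] = 1.0
    g = np.abs(A).max()
    for k in range(n - 1):
        p = k + np.abs(A[k:, k]).argmax()     # row interchange (partial pivoting)
        A[[k, p]] = A[[p, k]]
        A[k+1:, k:] -= np.outer(A[k+1:, k] / A[k, k], A[k, k:])
        g = max(g, np.abs(A).max())
    return g

print(growth_partial_pivoting(10))   # 512.0 = 2**9, matching the 2**(n-1) bound
```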


2.4. Symmetric positive definite case. (a) Cholesky's method. The Cholesky decomposition (Wilkinson [16, pp. 229-233]) is the best known decomposition for a positive definite matrix A (i.e., xᵗAx > 0 for x ≠ 0). Here A = LLᵗ, where L is lower triangular. No interchanges are required for stability, so N = I. In fact, for all i, j, k, |A^{(k)}_{ij}| ≤ max_{r,s} |A_{rs}|.

(b) LDLᵗ decomposition. If A is positive definite, then its LDLᵗ decomposition (the symmetric form of Gaussian elimination) is stable. The elements of L are bounded by |L_{ij}| ≤ (A^{(n−j+1)}_{ii}/A^{(n−j+1)}_{jj})^{1/2}, and the error matrix E is as small as the error matrix for the Cholesky decomposition of A. Here the Cholesky factor is L̄ = LD^{1/2}, and N = I.

(c) Method of congruent transformations. This method (Westlake [13, p. 21]; de Meersman and Schotsmans [9]) uses the LDLᵗ decomposition with interchanges. Here, the largest diagonal element is brought into the leading diagonal position at each step. If A is positive definite, then the elements of L are bounded by 1 and the method is stable. In general, N ≠ I here.

If max_j |A_{ij}| = 1 for each row index i and max_i |A_{ij}| = 1 for each column index j, then A is said to be equilibrated. To any symmetric positive definite matrix A there corresponds a unique positive diagonal matrix D such that DAD is equilibrated. Then DAD has 1's on its diagonal, and its off-diagonal elements are strictly less than 1 in magnitude. It is worth noting that if the (reduced) matrix is equilibrated at each step of the elimination, then the three methods above coalesce.
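As a concrete illustration of the LDLᵗ decomposition just described, here is a minimal Python/NumPy sketch (ours, not the paper's; the function name is hypothetical). Without interchanges it is stable only for positive definite A, which is precisely the failure mode discussed in §2.5 below.

```python
import numpy as np

def ldlt_no_pivot(A):
    """LDL^t factorization without interchanges (stable only when A is
    positive definite). Returns (L, d) with A = L @ np.diag(d) @ L.T."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for k in range(n):
        # d_k = A_kk - sum_j<k  L_kj^2 d_j
        d[k] = A[k, k] - (L[k, :k] ** 2) @ d[:k]
        for i in range(k + 1, n):
            # L_ik = (A_ik - sum_j<k L_ij L_kj d_j) / d_k
            L[i, k] = (A[i, k] - (L[i, :k] * L[k, :k]) @ d[:k]) / d[k]
    return L, d
```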

2.5. Symmetric case: failure of Cholesky and LDLᵗ methods. If A is symmetric but indefinite, then the LDLᵗ decomposition, the method of congruent transformations, and the Cholesky decomposition fail in exact arithmetic for a matrix as simple as

A = [0 1; 1 0].

There exists no permutation matrix N such that NANᵗ has an LDLᵗ or an LLᵗ decomposition. Note that LLᵗ is always positive semidefinite. Thus, the real Cholesky decomposition will fail in exact arithmetic whenever A is indefinite and det A ≠ 0; i.e., there exists no permutation N such that NANᵗ = LLᵗ, where L is lower triangular and real. We could avoid the problem of the calculation of square roots of negative numbers during the Cholesky decomposition by factoring out the negative signs into a diagonal matrix; i.e., we would seek a permutation matrix N such that NANᵗ = LSLᵗ, where L is real lower triangular and S is a diagonal matrix with S_{jj} = ±1. But this decomposition, as well as the LDLᵗ decomposition and the method of congruent transformations, will fail in exact arithmetic whenever all the diagonal elements in a reduced matrix are zero; i.e., there exists no permutation N such that NANᵗ permits an LSLᵗ or an LDLᵗ decomposition. Associated with failure of an algorithm in exact arithmetic is instability in finite precision; i.e., large intermediate quantities can occur, causing the error matrix E (§2.1) to be large in comparison to A. Consider the following example.

Let

A = [ε 1; 1 dε],  b = [1; 1],  0 < ε ≪ d ≤ 1.

Let x be the exact solution of Ax = b, and let x_c be the solution obtained from finite precision calculation. The relative error, ‖x − x_c‖/‖x‖, of the solution of a system of linear equations is bounded in proportion to the condition number of A, K(A) = ‖A‖ ‖A⁻¹‖ (Wilkinson [14]). We should not expect small relative error if K(A) is large. Here K(A) = ‖A‖₁‖A⁻¹‖₁ = (1 + ε)²/(1 − dε²) = 1 + 2ε + O(ε²) ≈ 1, so we do demand a small relative error from a stable method. For the abovementioned symmetric methods, N = I, the real Cholesky factor does not exist, and

A = LDLᵗ = [1 0; 1/ε 1] [ε 0; 0 dε − 1/ε] [1 1/ε; 0 1],

A = L̄SL̄ᵗ = [ε^{1/2} 0; ε^{−1/2} (1/ε − dε)^{1/2}] [1 0; 0 −1] [ε^{1/2} ε^{−1/2}; 0 (1/ε − dε)^{1/2}].

However, if ε is small enough (e.g., ε = 10⁻⁵ in nine decimal digit arithmetic), then the finite precision operation dε − 1/ε yields −1/ε. Let L_c, D_c, L̄_c and S_c be the matrices resulting from computing L, D, L̄ and S respectively in finite precision. Then L_c = L, S_c = S,

D_c = [ε 0; 0 −1/ε],  L̄_c = [ε^{1/2} 0; ε^{−1/2} ε^{−1/2}],

and the computed solution x_c from either the LDLᵗ or LSLᵗ method is x_c = [1; 1 − ε]. But

A⁻¹ = (dε² − 1)⁻¹ [dε −1; −1 ε]

and

x = A⁻¹b = (1 − dε²)⁻¹ [1 − dε; 1 − ε].

The relative error in the computed solution is ‖x − x_c‖/‖x‖ ≈ dε. If we calculate A⁻¹ by direct inversion in finite precision, dε² − 1 is replaced by −1 for small enough ε, and the computed inverse is

(A⁻¹)_c = [−dε 1; 1 −ε].

Then

(A⁻¹)_c b = [1 − dε; 1 − ε]  and  ‖x − (A⁻¹)_c b‖/‖x‖ ≈ dε².

Thus, direct inversion (or Gaussian elimination with partial or complete pivoting) gives a highly accurate solution while the LDLᵗ and LSLᵗ decompositions give unnecessarily inaccurate results for this well-conditioned symmetric indefinite system.
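A numerical rendition of this example (our own check, in IEEE double precision rather than nine-decimal-digit arithmetic, with ε = 10⁻¹⁰ and d = 1). The element growth of order 1/ε destroys most of the accuracy of the LDLᵗ solution, while elimination with partial pivoting is accurate to machine precision.

```python
import numpy as np

eps, d = 1e-10, 1.0
A = np.array([[eps, 1.0], [1.0, d * eps]])
b = np.array([1.0, 1.0])

# LDL^t without interchanges: L = [1 0; 1/eps 1], D = diag(eps, d*eps - 1/eps);
# in floating point the second pivot d*eps - 1/eps rounds to -1/eps.
L = np.array([[1.0, 0.0], [1.0 / eps, 1.0]])
D = np.array([A[0, 0], A[1, 1] - A[1, 0] ** 2 / A[0, 0]])
x_ldlt = np.linalg.solve(L.T, np.linalg.solve(L, b) / D)

x_lu = np.linalg.solve(A, b)   # Gaussian elimination with partial pivoting

x = np.array([1.0 - d * eps, 1.0 - eps]) / (1.0 - d * eps ** 2)  # exact solution

print(np.linalg.norm(x - x_ldlt) / np.linalg.norm(x))  # far above machine eps
print(np.linalg.norm(x - x_lu) / np.linalg.norm(x))    # ~ machine eps
```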


2.6. Symmetric case: present situation. If we ignore the symmetry of A and apply elimination with partial or complete pivoting to A, then in general A will no longer be symmetric after the first step of the elimination. We then need ~n² storage positions in the computer and we must perform ~n³/3 multiplications and ~n³/3 additions, as well as ~n²/2 or ~n³/3 comparisons for partial or complete pivoting, respectively. But we do obtain stability for the LU decomposition. This procedure is presently recommended for the solution of symmetric indefinite systems of linear equations (Fox [6, pp. 80, 185]), and thus the symmetry of A is of no advantage.

2.7. Tridiagonal method. Some headway in the problem was made recently by Parlett and Reid [11]. They reduce a symmetric matrix to tridiagonal form by stabilized elementary congruences and solve the tridiagonal system by Gaussian elimination with partial pivoting. This requires ~n²/2 storage locations and ~n³/3 operations, and the method is stable.

2.8. Indirect methods. The Seidel iterative method and the method of relaxation (Householder [8, pp. 48-51, 81]) require ~n² operations for each cycle for a full matrix. The number of cycles required depends on the matrix, the starting values, and the needed accuracy. Usually the number of cycles exceeds n, so at least ~n³ operations are required. The method of steepest descent (Householder [8, pp. 47-51, 82]) requires ~2n² operations at each step. Again, usually at least n steps are required. So the number of operations is at least ~2n³. The conjugate gradient method, also called the Stiefel-Hestenes method, is a finite iterative method designed for positive definite matrices, but it can be used for symmetric matrices (Fox [6, pp. 208-213]; Householder [8, pp. 73-82]). It requires ~2n² operations at each of n steps. Once again ~2n³ operations are required. See Reid [12] for useful observations on this method. It should be stressed that special properties of A and b may make any of the iterative methods described above particularly attractive. Moreover, when only a few rows of A can be contained in fast memory, then iterative methods may offer advantages. Yet block iterative methods are usually faster than simple iterations, and they involve the direct solution of equations with principal submatrices as coefficients.

3. Block diagonal pivoting.

3.1. The decomposition. Let us consider the symmetric decomposition of a symmetric matrix in greater detail.

Let

A = [P Cᵗ; C B],

where P is j × j, C is (n − j) × j, B is (n − j) × (n − j), and j ≥ 1. If P⁻¹ exists, then

A = L [P 0; 0 B − CP⁻¹Cᵗ] Lᵗ,  where L = [I_j 0; CP⁻¹ I_{n−j}],

and I_j, I_{n−j} are the identity matrices of order j and n − j, respectively. Any element of CP⁻¹ will be called a multiplier.
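In code, one step of this block elimination might look as follows (our sketch, with a hypothetical function name; with j = 1 it is ordinary symmetric elimination and with j = 2 it is Kahan's generalized pivot of §3.3 below).

```python
import numpy as np

def block_eliminate(A, j):
    """One step of block elimination: treat the leading j-by-j block P
    (j = 1 or 2) as the pivot; return the multipliers C @ inv(P) and the
    reduced matrix B - C @ inv(P) @ C.T (which is again symmetric)."""
    P, C, B = A[:j, :j], A[j:, :j], A[j:, j:]
    M = C @ np.linalg.inv(P)      # the multipliers
    return M, B - M @ C.T         # reduced matrix of order n - j
```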


The variants of LDLᵗ are unstable for symmetric (indefinite) systems (§2.5). This instability results from the impossibility of bringing an off-diagonal element into the pivotal position. When some off-diagonal element A_{ji}, j > i, is very large, we can bring A_{ji} into the (2,1) position by congruences, but never into a pivotal position. Thus, the multipliers cannot be bounded.

3.2. Lagrange's method of reduction. In 1759, Lagrange devised a method for reducing a real quadratic form φ(x) = xᵗAx of rank r by a real nonsingular linear transformation to a diagonal form

α₁x₁² + ⋯ + α_r x_r²,

where α₁, ⋯, α_r are all nonzero (Mirsky [10, pp. 368-374]), and the number of positive (and negative) squares is invariant (Sylvester's inertia theorem). His method corresponds to the LDLᵗ decomposition of a symmetric matrix A when the LDLᵗ decomposition exists. Suppose, however, that A₁₁ = ⋯ = A_{nn} = 0, while det A ≠ 0. Then the LDLᵗ decomposition for A does not exist, but A_{rs} ≠ 0 for some r ≠ s. For simplicity, let us assume that A₁₂ ≠ 0. In this case, Lagrange proposed the following transformation: x₁ = y₁ + y₂, x₂ = y₁ − y₂, x₃ = y₃, ⋯, x_n = y_n. This maps 2A₁₂x₁x₂ into 2A₁₂(y₁² − y₂²). Thus, φ is transformed into a quadratic form in y₁, ⋯, y_n in which the coefficients of y₁² and y₂² are nonzero. Then we can proceed with the triangular decomposition (Mirsky [10, pp. 371-372]; Gantmacher [7, p. 199]). We have xᵗAx = yᵗ(TᵗAT)y, and TᵗAT is a symmetric matrix with (TᵗAT)₁₁, (TᵗAT)₂₂ ≠ 0 and

T = [1 1 0; 1 −1 0; 0 0 I_{n−2}].

This procedure is also applicable to complex quadratic forms.

3.3. Kahan's proposal. In 1965, W. Kahan (in correspondence with R. de Meersman and L. Schotsmans) proposed that Lagrange's method could be made the basis of a stable method which preserves symmetry. Kahan made the important observation that Lagrange's transformation could be interpreted as generalizing the notion of a pivot from a 1 × 1 to a 2 × 2 principal submatrix. Let us see why. Partition the matrices so that

A = [P Cᵗ; C B],  T = [S 0; 0 I],  det(P) ≠ 0,  order(S) = order(P).

Then

TᵗAT = [SᵗPS SᵗCᵗ; CS B].

If S is chosen so that SᵗPS = D is diagonal and the associated variables are then eliminated, the reduced matrix which remains is B − CSD⁻¹SᵗCᵗ.


This is exactly the matrix B − CP⁻¹Cᵗ obtained by treating P as a pivot and performing block elimination. Thus, there is no need to find (as Lagrange did) a matrix S which diagonalizes P. Consequently, by using P's of order greater than one, it will always be possible to reduce A by elimination to block diagonal form. However, no P of order greater than two need be considered in the light of the following fact. If all principal 1 × 1 and 2 × 2 submatrices of a symmetric matrix A are singular, then A = 0. In exact arithmetic, it is only necessary to employ this generalization when the diagonal of A is null. However, in finite precision calculation, it is advantageous to employ it more often.

3.4. Pivotal strategy. Kahan considered two pivotal strategies. In the first, one searches the entire matrix for m_c = max_{i,j,k} {A_{ii}², |A_{jj}A_{kk} − A_{jk}²|}. If m_c = A_{ii}², then use A_{ii} as a 1 × 1 pivot by interchanging rows and columns 1 and i. If m_c = |A_{jj}A_{kk} − A_{jk}²|, then use

[A_{jj} A_{jk}; A_{jk} A_{kk}]

as a 2 × 2 pivot by interchanging rows and columns 1 with j and 2 with k. Since we search all 2 × 2 principal minors, this is called a complete pivoting strategy, in analogy with complete pivoting for Gaussian elimination. However, the searching here requires between ~n³/6 and ~n³/3 multiplications to find m_c for all steps (depending on the number of 1 × 1 and 2 × 2 pivots used). The decomposition itself requires ~n³/6 multiplications. Thus, this strategy would require between ~n³/3 and ~n³/2 multiplications to solve Ax = b. This is more than for Gaussian elimination. Kahan considered a second pivotal strategy in which we scan only the first column and the main diagonal; this is called a partial pivoting strategy, in analogy with partial pivoting for Gaussian elimination. The searching here only requires between ~n²/4 and ~n²/2 multiplications, since we take m_p = max_j {A₁₁², |A₁₁A_{jj} − A_{j1}²|}. However, this partial pivoting strategy is unstable. Example.

There are 3 × 3 symmetric matrices with max_{i,j} |A_{ij}| = 1 and 0 < ε ≪ 1 for which m_p = |A₁₁A₂₂ − A₂₁²| = (3/4)ε², so that the leading 2 × 2 principal submatrix P would be used as a 2 × 2 pivot, and the reduced matrix B − CP⁻¹Cᵗ (§3.1) is the 1 × 1 matrix [−2/(3ε) + O(1)]. Thus max_k max_{i,j} |A^{(k)}_{ij}| = max {1, |−2/(3ε) + O(1)|} → ∞ as ε → 0. So we are unable to bound the reduced matrices, and we have no bound on the error matrix.


Note, however, that with the complete pivoting strategy we would use the 2 × 2 principal submatrix maximizing |A_{jj}A_{kk} − A_{jk}²| as the pivot, interchanging rows and columns to bring it into the leading position. For the matrices just described, the reduced matrix B − CP⁻¹Cᵗ is then the 1 × 1 matrix [−ε(6 − ε)/(2(2 + ε))], which tends to 0 as ε → 0 and is bounded by 5/6 for ε ≤ 1. Thus max_k max_{i,j} |A^{(k)}_{ij}| ≤ 1 for ε ≤ 1, and this is stable. For this reason, Kahan believed that the complete strategy was essential to this approach and concluded that it was not competitive with standard elimination algorithms which ignore symmetry.

3.5. Parlett's observation. In 1967, B. Parlett observed that the examples for which the partial pivoting strategy was unstable were also unequilibrated and ill-conditioned (K(A) ≫ 1). A symmetric matrix A is equilibrated if max_j |A_{ij}| = 1 for each row index i. Parlett conjectured that the partial diagonal pivoting strategy would be stable when applied to equilibrated matrices.

4. The decomposition for diagonal pivoting.

4.1. Definitions. Let A be an n × n symmetric nonsingular matrix. We want to decompose A into the "diagonal" form MDMᵗ by congruences, where D is a block diagonal matrix, each block being of order 1 or 2, and M is unit lower triangular with M_{i+1,i} = 0 whenever D_{i+1,i} ≠ 0. Let μ₀ = max_{i,j} |A_{ij}|, μ₁ = max_i |A_{ii}|, and ν = |A₁₁A₂₂ − A₂₁²|. Let

A = [P Cᵗ; C B],

where P is j × j, C is (n − j) × j, B is (n − j) × (n − j), and j = 1 or 2. Let A^{(n)} = A; let A^{(k)} be the reduced matrix of order k in the elimination process.

4.2. The 1 × 1 pivot. Suppose P is of order 1. (We shall not make a distinction between a matrix P of order 1 and its element, which we shall also call P.) Let us assume that we have already interchanged rows and columns so that |P| = μ₁, i.e., P is the maximal diagonal element in magnitude. If P⁻¹ exists (i.e., μ₁ ≠ 0), then A^{(n−1)} = B − CP⁻¹Cᵗ. Since |P| = μ₁ and μ₀ = max_{i,j} |A_{ij}|, we have the following lemma.


LEMMA 1. If P is of order 1 and |P| = μ₁ ≠ 0, then
(i) max_i |(CP⁻¹)_i| ≤ μ₀/μ₁,
(ii) max_{i,j} |A^{(n−1)}_{ij}| ≤ (1 + μ₀/μ₁)μ₀.

(Here A^{(n−1)}_{ij} = A_{ij} − A_{i1}A_{j1}/P.) Thus a 1 × 1 pivot P is useful if and only if |P| = μ₁ is large relative to μ₀, i.e., if and only if μ₁/μ₀ is bounded away from zero.

4.3. The 2 × 2 pivot. Suppose P is of order 2 and P⁻¹ exists (i.e., ν ≠ 0). Then CP⁻¹ is (n − 2) × 2 and the (k − 2)nd row of CP⁻¹ is

(A_{k1}, A_{k2})P⁻¹ = (A₁₁A₂₂ − A₂₁²)⁻¹ (A_{k1}A₂₂ − A_{k2}A₂₁, A_{k2}A₁₁ − A_{k1}A₁₂).

Then A^{(n−2)} = B − CP⁻¹Cᵗ, but A^{(n−1)} does not exist. Since ν = |A₁₁A₂₂ − A₂₁²|, |A_{k1}|, |A_{k2}| ≤ μ₀, and |A₁₁|, |A₂₂| ≤ μ₁, we have the following lemma.

LEMMA 2. If P is of order 2 and |det P| = ν ≠ 0, then
(i) max_{i,q} |(CP⁻¹)_{iq}| ≤ μ₀(μ₀ + μ₁)/ν for q = 1, 2,
(ii) max_{i,j} |A^{(n−2)}_{ij}| ≤ [1 + 2μ₀(μ₀ + μ₁)/ν]μ₀.

Thus a 2 × 2 pivot is useful if and only if we can bound ν away from zero. In particular, from Lemma 1 we need to have ν bounded away from zero whenever μ₁/μ₀ is near zero. (Note that the use of the standard norm inequalities would give bounds in Lemmas 1 and 2 which are too crude for our needs.)

4.4. Bounding ν. We can easily bound ν from above, since ν = |A₁₁A₂₂ − A₂₁²| ≤ |A₁₁A₂₂| + |A₂₁|² ≤ μ₁² + μ₀².

LEMMA 3. |det P| = ν ≤ μ₀² + μ₁².

This upper bound is sharp for A defined by: A₁₁ = μ₁, A_{ii} = −μ₁ for i > 1; A_{i+1,i} = μ₀ = A_{i,i+1}, A_{ij} = 0 otherwise; where 0 < μ₁ ≤ μ₀.

But, as we saw in Lemma 2, we need a lower bound on ν which bounds ν away from zero when μ₁/μ₀ is small. Clearly, such a lower bound does not exist without interchanges; consider a nonsingular symmetric A whose leading 2 × 2 submatrix is singular, for example

A = [0 0 1; 0 1 1; 1 1 1],  with P = [0 0; 0 1],

for which ν = 0. We shall exhibit later three different pivotal strategies which provide us with the necessary interchanges so that we have a 2 × 2 pivot P with |det P| = ν ≥ μ₀² − μ₁². Assuming this lower bound, we have the following lemma.

LEMMA 4. If |det P| = ν ≥ μ₀² − μ₁² > 0, then
(i) max_{i,q} |(CP⁻¹)_{iq}| ≤ μ₀/(μ₀ − μ₁) for q = 1, 2,
(ii) max_{i,j} |A^{(n−2)}_{ij}| ≤ μ₀[1 + 2/(1 − μ₁/μ₀)].

The lower bound on ν is sharp for A defined by: A_{ii} = μ₁, A_{i+1,i} = μ₀ = A_{i,i+1}, A_{ij} = 0 otherwise; where 0 ≤ μ₁ < μ₀.


4.5. Criteria for choosing a 1 × 1 or 2 × 2 pivot. Define A^{(n)} = A, μ₀^{(n)} = μ₀, μ₁^{(n)} = μ₁, ν^{(n)} = ν. Let A^{(k)} be the reduced matrix of order k. Let μ₀^{(k)} = max_{i,j} |A^{(k)}_{ij}|, μ₁^{(k)} = max_i |A^{(k)}_{ii}|, and ν^{(k)} = |A^{(k)}_{11}A^{(k)}_{22} − (A^{(k)}_{21})²|. Note that if A^{(k)} uses a 2 × 2 pivot, then A^{(k−1)}, μ₀^{(k−1)}, μ₁^{(k−1)} and ν^{(k−1)} do not exist. All considerations in §§4.1-4.4 hold for A^{(k)}.

If we made our criterion the minimization of the number of multiplications (comparisons), then we would want a 1 × 1 (2 × 2) pivot at each step. But this would be unstable. Instead, let us aim to minimize the element growth that can take place in the transformation from one reduced matrix to the next. Let F_j^{(k)} be the growth factor permitted by choosing a j × j pivot for A^{(k)}, where j = 1 or 2. If the hypothesis of Lemma 4 holds (i.e., ν^{(k)} ≥ (μ₀^{(k)})² − (μ₁^{(k)})² whenever A^{(k)} uses a 2 × 2 pivot), then, by Lemmas 1 and 4,

F₁^{(k)} = 1 + μ₀^{(k)}/μ₁^{(k)},  F₂^{(k)} = 1 + 2/(1 − μ₁^{(k)}/μ₀^{(k)}).

Thus, we are led to the following strategy S_α, 0 < α < 1: for each reduced matrix A^{(k)}, choose a 1 × 1 pivot if and only if μ₁^{(k)} ≥ αμ₀^{(k)} (and a 2 × 2 pivot otherwise). With S_α we have F₁^{(k)} ≤ 1 + 1/α and F₂^{(k)} ≤ 1 + 2/(1 − α) for all A^{(k)}. Note that these bounds are sharp. But at any stage the choice of a 2 × 2 pivot carries us further towards the complete reduction than does the choice of a 1 × 1 pivot. We would therefore like to compare the growth F₁^{(k)}F₁^{(k−1)} over two successive 1 × 1 pivots with F₂^{(k)}, but in an a priori analysis we must be content with comparing bounds on these quantities. Let G(α) = max {(1 + 1/α)², 1 + 2/(1 − α)}. Thus we seek min_{0<α<1} G(α), which is (9 + √17)/2 and is achieved by α = α₀ = (1 + √17)/8. Note that G(α) → ∞ as α → 0, 1. But α = 0 (α = 1) corresponds to the use of a 1 × 1 (2 × 2) pivot at each step. Either case is unstable, so we expect the bounds to go to infinity as α approaches 0 or 1. We immediately obtain the following bounds on multipliers and on elements in the reduced matrices under strategy S_α. Let m be any multiplier.

LEMMA 5. Under strategy S_α, if ν^{(k)} ≥ (μ₀^{(k)})² − (μ₁^{(k)})² whenever A^{(k)} uses a 2 × 2 pivot, then

|m| ≤ 1/α for a 1 × 1 pivot,  |m| ≤ 1/(1 − α) for a 2 × 2 pivot.

For α = α₀ = (1 + √17)/8,

|m| ≤ (√17 − 1)/2 < 1.562 for a 1 × 1 pivot,  |m| ≤ (√17 + 7)/4 < 2.781 for a 2 × 2 pivot.

LEMMA 6. Under strategy S_α with α = α₀, if ν^{(k)} ≥ (μ₀^{(k)})² − (μ₁^{(k)})² whenever A^{(k)} uses a 2 × 2 pivot, then μ₀^{(k)} ≤ μ₀[(9 + √17)/2]^{(n−k)/2} for k < n.

Bunch [3] gives a much better bound on the elements in the reduced matrices.
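A quick numerical check (ours, not part of the paper) that α₀ minimizes G(α) as claimed above:

```python
import numpy as np

alpha = np.linspace(0.001, 0.999, 99_901)
G = np.maximum((1 + 1 / alpha) ** 2, 1 + 2 / (1 - alpha))
a0 = (1 + np.sqrt(17)) / 8
print(alpha[G.argmin()], G.min())     # ~ 0.6404 and ~ 6.5616
print(a0, (9 + np.sqrt(17)) / 2)      # alpha_0 and (9 + sqrt(17))/2, for comparison
```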


The strategy S_α allows us to proceed in the following order:
(i) calculate μ₁^{(k)} and μ₀^{(k)} (no multiplications, ~k(k + 1)/2 comparisons);
(ii) if μ₁^{(k)} ≥ α₀μ₀^{(k)}, then we use a 1 × 1 pivot;
(iii) otherwise we find a 2 × 2 pivot by some strategy.
Thus, we search for a 2 × 2 pivot for A^{(k)} if and only if μ₁^{(k)} < α₀μ₀^{(k)}.

5. The complete and partial pivoting strategies.

5.1. Complete pivoting. Let us consider the complete pivoting strategy ("complete" in the sense that we search over all the principal 2 × 2 minors) suggested by W. Kahan (§3.3). Let ν_c = max_{i,j} |A_{ii}A_{jj} − A_{ij}²|. The complete strategy involves: (i) finding μ₁ = max_i |A_{ii}| and μ₀ = max_{i,j} |A_{ij}|; (ii) choosing a 1 × 1 or 2 × 2 pivot according to S_α (§4.5); (iii) for a 1 × 1, interchanging so that |P| = μ₁ (§4.2); (iv) for a 2 × 2, finding ν_c and interchanging so that |det P| = ν_c (§4.3). This would be repeated for each reduced matrix. Actually this process is slightly faster than the one originally considered by Kahan. The result of all this work is that we do obtain a lower bound for ν_c in terms of μ₀ and μ₁.

THEOREM 1. μ₀² − μ₁² ≤ ν_c ≤ μ₀² + μ₁².

Proof. The upper bound follows from Lemma 3 of §4.4. Let μ₀ = |A_{rs}|. Then ν_c = max_{i,j} |A_{ii}A_{jj} − A_{ij}²| ≥ |A_{rr}A_{ss} − A_{rs}²| ≥ μ₀² − μ₁², since μ₁ = max_i |A_{ii}|.

Thus Lemma 4 in §4.4 holds for ν_c when μ₁ < μ₀. According to S_{α₀}, we choose a 1 × 1 pivot for the reduced matrix A^{(k)} if and only if μ₁^{(k)} ≥ α₀μ₀^{(k)}.

The complete diagonal pivoting strategy requires ~(n³/3 − p³/6) multiplications and additions, and ~(n³/12 + p³/12) comparisons, where p is the number of 1 × 1 pivots used.

Examples. If A is n × n and positive definite, then p = n. If A is defined by A_{i,i+1} = 1 = A_{i+1,i} for 1 ≤ i ≤ n − 1, where n is even, and A_{ij} = 0 otherwise, then p = 0.

5.2. Partial pivoting for equilibrated matrices. As we saw in §5.1, the complete pivoting strategy involves more work than we are willing to perform, while in §3.4 we saw that a partial pivoting strategy is unstable for unequilibrated matrices. We shall now show that a partial pivoting strategy is stable when applied to equilibrated matrices (with equilibrated reduced matrices). Let A be equilibrated, i.e., let max_j |A_{ij}| = μ₀ for every i, where μ₀ > 0 (usually we normalize by taking μ₀ = 1). Let ν_p = max_j |A₁₁A_{jj} − A_{j1}²|. The partial strategy involves: (i) equilibrating A (thus we know μ₀); (ii) finding μ₁ and choosing a 1 × 1 or 2 × 2 pivot according to S_α (§4.5); (iii) for a 1 × 1, interchanging so that |P| = μ₁ (§4.2); (iv) for a 2 × 2, finding ν_p and interchanging so that |det P| = ν_p (§4.3). Now let us find a lower bound for ν_p.

THEOREM 2. If A is equilibrated (max_j |A_{ij}| = μ₀ for every i), then μ₀² − μ₁² ≤ ν_p ≤ μ₀² + μ₁².

Proof. The upper bound follows from Lemma 3 of §4.4. By equilibration, either (i) |A₁₁| = μ₀, or (ii) there exists an integer k ≥ 2 with |A_{k1}| = μ₀. If (i), then μ₀² − μ₁² = 0 and the lower bound is trivial. If (ii), then ν_p ≥ |A₁₁A_{kk} − A_{k1}²| ≥ μ₀² − |A₁₁A_{kk}| ≥ μ₀² − μ₁². This completes the proof.

Thus, Lemma 4 in §4.4 holds for ν_p if μ₁ < μ₀. According to S_{α₀}, we choose a 1 × 1 pivot for the equilibrated reduced matrix A^{(k)} of order k if and only if μ₁^{(k)} ≥ α₀μ₀^{(k)}.

For A = A^{(n)}, only 2(n − 1) multiplications, n − 1 additions, and n − 1 comparisons are required to calculate ν_p^{(n)}, in contrast to n(n − 1) multiplications, n(n − 1)/2 additions, and n(n − 1)/2 comparisons to calculate ν_c^{(n)} in §5.1.

5.3. Criticism of the partial pivoting strategy. The drawback to this method and the criteria which we have found for the pivoting strategy is that the matrix must be equilibrated at the beginning and then each reduced matrix should be equilibrated. But a fast algorithm for equilibrating symmetric matrices in the max-norm has never been exhibited, i.e., we seek a diagonal matrix D such that DAD is equilibrated. However, Bunch [4] presents an algorithm which can equilibrate any symmetric matrix in a very simple way. In §6, we shall exhibit another version of diagonal pivoting which is applicable to unequilibrated matrices; we call this unequilibrated diagonal pivoting. Equilibration is unnecessary for this strategy, and this is the algorithm that we recommend.

6. Unequilibrated diagonal pivoting.

6.1. Maximal off-diagonal element. In order to obtain a lower bound of μ₀² − μ₁² for ν_p in Theorem 2 of §5.2, we needed the fact that, due to equilibration, there existed an element of maximal absolute value in the first column. However, if μ₁ < μ₀, then there exist integers i, j with i > j such that |A_{ij}| = μ₀. We need only bring the element A_{ij} up to the (2,1) position and then we will have a 2 × 2 pivot with a maximal off-diagonal element. We shall call this variation unequilibrated diagonal pivoting. Let μ₁ = max_i |A_{ii}| = |A_{kk}|, where k is the least such integer. Let μ₀ = max_{i,j} |A_{ij}| = |A_{rs}|, where, in order to be specific, we take r to be the least such row integer, and s to be the least such column integer in the rth row. Let ν_b = |A_{rr}A_{ss} − A_{rs}²|. This strategy involves: (i) finding μ₁ and the least integer k with |A_{kk}| = μ₁; (ii) finding μ₀ and the least row integer r and then the least column integer s in the rth row such that |A_{rs}| = μ₀; (iii) choosing a 1 × 1 or 2 × 2 pivot according to S_α (§4.5); (iv) for a 1 × 1, interchanging row and column 1 with k so that |P| = μ₁ (§4.2); (v) for a 2 × 2, interchanging rows and columns 1 with r and 2 with s so that |det P| = ν_b and |A₂₁| = μ₀ (§4.3). This procedure is repeated for each reduced matrix, as in the sketch below.
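To make the bookkeeping concrete, here is a compact Python/NumPy sketch of the whole decomposition under this strategy. This is our illustration, not the paper's code: for brevity, ties are broken by NumPy's argmax rather than by the least-index rules above, and none of the operation-count refinements of §6.4 are attempted.

```python
import numpy as np

ALPHA = (1 + np.sqrt(17)) / 8                  # alpha_0 from section 4.5

def _swap(A, M, perm, k, i, j):
    """Interchange rows/columns i and j of A, keeping the already-computed
    multipliers (columns < k of M) and the permutation record consistent."""
    if i != j:
        A[[i, j], :] = A[[j, i], :]
        A[:, [i, j]] = A[:, [j, i]]
        M[[i, j], :k] = M[[j, i], :k]
        perm[[i, j]] = perm[[j, i]]

def diagonal_pivoting(A):
    """MDM^t decomposition of a nonsingular symmetric A by unequilibrated
    diagonal pivoting: on each reduced matrix, compare mu1 = max |A_ii| with
    mu0 = max |A_ij| and take a 1x1 or 2x2 pivot according to S_alpha."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    M, D, perm = np.eye(n), np.zeros((n, n)), np.arange(n)
    k = 0
    while k < n:
        S = A[k:, k:]                          # the reduced matrix A^(n-k)
        d = np.abs(np.diag(S)).argmax()        # candidate 1x1 pivot
        mu1 = abs(S[d, d])
        r, s = divmod(np.abs(S).argmax(), n - k)
        mu0 = abs(S[r, s])
        if mu1 >= ALPHA * mu0:                 # 1x1 pivot
            _swap(A, M, perm, k, k, k + d)
            D[k, k] = A[k, k]
            M[k+1:, k] = A[k+1:, k] / A[k, k]
            A[k+1:, k+1:] -= np.outer(M[k+1:, k], A[k+1:, k])
            k += 1
        else:                                  # 2x2 pivot with |P_21| = mu0
            if r < s:
                r, s = s, r
            _swap(A, M, perm, k, k, k + s)     # bring the maximal element
            _swap(A, M, perm, k, k + 1, k + r) # into the (2,1) position of P
            P = A[k:k+2, k:k+2]
            D[k:k+2, k:k+2] = P
            C = A[k+2:, k:k+2]
            M[k+2:, k:k+2] = C @ np.linalg.inv(P)
            A[k+2:, k+2:] -= M[k+2:, k:k+2] @ C.T
            k += 2
    return M, D, perm                          # A[perm][:, perm] = M D M^t

# Quick check on a random symmetric (generically indefinite) matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B + B.T
M, D, p = diagonal_pivoting(A)
print(np.abs(M @ D @ M.T - A[np.ix_(p, p)]).max())   # ~ 1e-15
print(np.abs(M).max())    # multipliers stay bounded (< 2.781, Lemma 5)
```

Given the factorization, Ax = b is then solved by permuting b and solving in turn with M, the block diagonal D (1 × 1 and 2 × 2 systems), and Mᵗ.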


Note that calculating ν_b requires only 2 multiplications, instead of n(n − 1) for ν_c and 2(n − 1) for ν_p. Clearly, from the definitions of ν_b, ν_p, and ν_c and from Lemma 3 of §4.4, we have the following lemma.

LEMMA 7. ν_b, ν_p ≤ ν_c ≤ μ₀² + μ₁².

6.2. Bounding ν_b. Let us now bound ν_b from below.

THEOREM 3. If |A₂₁| = μ₀, then μ₀² − μ₁² ≤ ν_b ≤ μ₀² + μ₁².

Proof. The upper bound follows from Lemma 7. Since |A₂₁| = μ₀, ν_b = |A₁₁A₂₂ − A₂₁²| ≥ μ₀² − |A₁₁A₂₂| ≥ μ₀² − μ₁².

Here, as in §5.1 and §5.2, symmetry was used to get the lower bound on ν_b: by symmetry, if μ₀ = |A₂₁|, then |A₁₂| = μ₀. If A were not symmetric, then μ₀ = |A₂₁| would not imply that |A₁₂| = μ₀ (in fact we could have A₁₂ = 0). Thus, no such lower bound on the absolute value of the determinant of 2 × 2 submatrices can exist for nonsymmetric matrices. Theorem 3 implies that Lemma 4 in §4.4 holds for ν_b if μ₁ < μ₀. According to S_{α₀} (§4.5), we would choose a 1 × 1 pivot for the reduced matrix A^{(k)} of order k if and only if μ₁^{(k)} ≥ α₀μ₀^{(k)}.

6.3. Comments on this strategy. From §6.1, we see that we need do no searching over the 2 × 2 principal minors; we merely choose the principal minor with maximal off-diagonal element. The searching for the μ₀^{(k)} requires between ~n³/12 and ~n³/6 comparisons and no multiplications or additions. We used the terms "complete" and "partial" strategies in §§4-5 to distinguish between searching over all the 2 × 2 principal minors and over the 2 × 2 principal minors with off-diagonal element in the first column. In analogy with Gaussian elimination with complete pivoting, we would like to call the strategy in §6.1 a complete pivoting strategy, since we search all of A^{(k)} for its maximal element, but we do not wish to cause confusion with the use of the word "complete" in §4, where it meant searching all the 2 × 2 principal minors. Now, in analogy with Gaussian elimination with partial pivoting, we ask if there could be a partial strategy in which we search only the first column of A^{(k)} for its maximal element. Such a partial strategy requires at most n(n − 1)/2 comparisons to calculate the maximal element in the first column of all the reduced matrices. If such a partial strategy were stable, then it would require only ~n³/6 multiplications, ~n³/6 additions, and ~n³/6 comparisons to solve Ax = b, A = Aᵗ, det A ≠ 0. However, any such partial strategy is unstable for unequilibrated matrices, as a two-parameter family of examples (depending on α and ε, with 0 < α < 1 and 0 < ε ≪ 1) shows; for further comments on this topic, see Bunch [2, pp. 46-49].

6.4. Conclusions. The conclusions in this section are proved in Bunch [3]. Let Σ_{(j)} denote summation over those indices i, 1 ≤ i ≤ n, such that, if A^{(i)} exists, then it yields a j × j pivot, j = 1 or 2. Let p be the number of 1 × 1 pivots used. Let Mults (Divs, Adds, Comps) mean the number of multiplications (divisions, additions, comparisons). Then the requirements for solving Ax = b appear in Table 2.
[Table 2: exact counts and upper bounds for Mults, Divs, Adds, and Comps in terms of n, p, and the Σ_{(j)}; each count is at most ~n³/6 to leading order (cf. §1.1). The entries of the table did not survive reproduction.]

Recall that for Gaussian elimination with partial or complete pivoting the number of comparisons required is n²/2 − n/2 and n³/3 + n²/2 − 5n/6, respectively (Table 1).

Backward error analysis reveals that the stability of this method depends on the bounding of the growth of the elements of the reduced matrices (Bunch [3]). Thus we would like to bound B(n) = max_A (max_k max_{i,j} |A^{(k)}_{ij}| / max_{i,j} |A_{ij}|), where the A are n × n nonsingular symmetric matrices. Further, we would like the bound to be independent of α, 0 < α < 1. But, as we recall from §4.5, α → 0 (α → 1) corresponds to the use of a 1 × 1 (2 × 2) pivot at each step, and either situation is unstable. Thus B(n) → ∞ as α → 0, 1.

We saw in §4.5 that max_{i,j} |A^{(k)}_{ij}| ≤ (2.57)^{n−k} max_{i,j} |A_{ij}| (i.e., B(n) ≤ (2.57)^{n−1}) for α = α₀ = (1 + √17)/8. But, by the use of Wilkinson's techniques for Gaussian elimination with complete pivoting (Wilkinson [15, pp. 281-285]), Bunch [3] obtains B(n) ≤ n^{1/2} f(n) c(α) h(n, α), where

f(n) = (∏_{r=2}^{n} r^{1/(r−1)})^{1/2}  (recall §1.2),

c(α) is max {α⁻¹, (1 + α²)/(1 − α²)} if the last reduced matrix is 1 × 1 and (1 − α²)^{−1/2} times this if the last reduced matrix is 2 × 2, and

h(n, α)² = ∏_{r=2}^{n} f_r^{1/(r−1)},

where f_r = α⁻² if A^{(r)} yields a 1 × 1 pivot, and f_r = (1 − α²)⁻¹ if A^{(r)} yields a 2 × 2 pivot or if A^{(r+1)} yields a 2 × 2 pivot (so that A^{(r)} does not exist). This bound on B(n) holds for each α, 0 < α < 1, but h(n, α) → ∞ as α → 0 when p = n (e.g., when A is positive definite) and as α → 1 when p = 0 (§5.1). Furthermore, this is an a posteriori bound, since we cannot calculate c(α)h(n, α) until we know the position of the blocks of order 1 and 2 in the decomposition.

Wilkinson obtained the factor n^{1/2} f(n) max_{i,j} |A_{ij}| as the bound on the elements in the reduced matrices for solving Ax = b by Gaussian elimination with complete pivoting. We have the extra factor c(α)h(n, α) since our pivots are not necessarily the maximal elements in the reduced matrices. We would like an a priori bound on the element growth for the diagonal pivoting method, i.e., a bound independent of the selection of a 1 × 1 or a 2 × 2 pivot at each stage. Bunch [3] shows that

inf_{0<α<1} c(α)h(n, α) > (2.029)n^{0.3465},

while for α = α₀ = (1 + √17)/8,

c(α₀)h(n, α₀) < 3.07(n − 1)^{0.446}.

Thus, for α = α₀, B(n) < 3.07 n^{1/2} f(n) (n − 1)^{0.446}, which is within a factor 3.07(n − 1)^{0.446} of Wilkinson's bound for Gaussian elimination with complete pivoting.
REFERENCES

[1] W. E. BOSARGE, JR. AND O. G. JOHNSON, Error bounds of high order accuracy for the state regulator problem via piecewise polynomial approximations, SIAM J. Control, to appear.
[2] J. R. BUNCH, On direct methods for solving symmetric systems of linear equations, Tech. Rep. 33, University of California Computer Center, Berkeley, 1969.
[3] J. R. BUNCH, Analysis of the diagonal pivoting method, this Journal, 8 (1971), pp. 656-680.
[4] J. R. BUNCH, Equilibration of symmetric matrices in the max-norm, to appear.
[5] C. W. CRYER, Pivot size in Gaussian elimination, Numer. Math., 12 (1968), pp. 335-345.
[6] L. FOX, An Introduction to Numerical Linear Algebra, Oxford University Press, London, 1964.
[7] F. R. GANTMACHER, The Theory of Matrices, vol. 1, Chelsea, New York, 1959.
[8] A. S. HOUSEHOLDER, The Theory of Matrices in Numerical Analysis, Blaisdell, New York, 1964.
[9] R. DE MEERSMAN AND L. SCHOTSMANS, Note on the inversion of symmetric matrices by the Gauss-Jordan method, I.C.C. Bull., 3 (1964), pp. 152-155.
[10] L. MIRSKY, An Introduction to Linear Algebra, Clarendon Press, Oxford, 1955.
[11] B. N. PARLETT AND J. K. REID, On the solution of a system of linear equations whose matrix is symmetric but not definite, BIT, to appear.
[12] J. K. REID, A note on the least squares solution of a band system of linear equations by Householder reductions, Comput. J., 10 (1967), pp. 188-189.
[13] J. WESTLAKE, Handbook of Numerical Matrix Inversion and Solution of Linear Equations, John Wiley, New York, 1968.
[14] J. H. WILKINSON, Rounding errors in algebraic processes, Information Processing, Proc. International Conference on Information Processing, UNESCO, Paris, 1959.
[15] J. H. WILKINSON, Error analysis of direct methods of matrix inversion, J. Assoc. Comput. Mach., 8 (1961), pp. 281-330.
[16] J. H. WILKINSON, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965.
