Applying Eq. (11.4) to the third row of the decomposition yields

l33 = sqrt(979 − (22.454)^2 − (20.917)^2) = 6.1101

Thus, the Cholesky decomposition yields

        |  2.4495                    |
[L] =   |  6.1237   4.1833           |
        | 22.454   20.917    6.1101  |

The validity of this decomposition can be verified by substituting it and its transpose into Eq. (11.2) to see if their product yields the original matrix [A]. This is left for an exercise.

Figure 11.3 presents pseudocode for implementing the Cholesky decomposition algorithm. It should be noted that the algorithm in Fig. 11.3 could result in an execution error if the evaluation of akk involves taking the square root of a negative number. However, for cases where the matrix is positive definite,¹ this will never occur. Because many symmetric matrices dealt with in engineering are, in fact, positive definite, the Cholesky algorithm has wide application. Another benefit of dealing with positive definite symmetric matrices is that pivoting is not required to avoid division by zero. Thus, we can implement the algorithm in Fig. 11.3 without the complication of pivoting.

¹ A positive definite matrix is one for which the product {X}^T [A] {X} is greater than zero for all nonzero vectors {X}.

11.2 GAUSS-SEIDEL

Iterative or approximate methods provide an alternative to the elimination methods described to this point. Such approaches are similar to the techniques we developed to obtain the roots of a single equation in Chap. 6. Those approaches consisted of guessing a value and then using a systematic method to obtain a refined estimate of the root. Because the present part of the book deals with a similar problem, obtaining the values that simultaneously satisfy a set of equations, we might suspect that such approximate methods could be useful in this context.

The Gauss-Seidel method is the most commonly used iterative method. Assume that we are given a set of n equations

[A]{X} = {B}

Suppose that for conciseness we limit ourselves to a 3 × 3 set of equations. If the diagonal elements are all nonzero, the first equation can be solved for x1, the second for x2, and the third for x3 to yield

x1 = (b1 − a12 x2 − a13 x3) / a11        (11.5a)
x2 = (b2 − a21 x1 − a23 x3) / a22        (11.5b)
x3 = (b3 − a31 x1 − a32 x2) / a33        (11.5c)

Now, we can start the solution process by choosing guesses for the x's. A simple way to obtain initial guesses is to assume that they are all zero. These zeros can be substituted into Eq. (11.5a), which can be used to calculate a new value for x1 = b1/a11. Then, we substitute this new value of x1 along with the previous guess of zero for x3 into Eq. (11.5b) to compute a new value for x2. The process is repeated for Eq. (11.5c) to calculate a new estimate for x3. Then we return to the first equation and repeat the entire procedure until our solution converges closely enough to the true values. Convergence can be checked using the criterion [recall Eq. (3.5)]

|εa,i| = |(xi^j − xi^(j−1)) / xi^j| × 100% < εs

for all i, where the superscripts j and j − 1 denote the present and previous iterations.
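To make the procedure concrete, the short Python sketch below (not from the text; the 3 × 3 system and the tolerance are assumptions chosen only for illustration) sweeps through Eqs. (11.5a) through (11.5c) starting from all-zero guesses and stops once the approximate error of Eq. (3.5) falls below εs for every unknown.

# Minimal sketch of the Gauss-Seidel iteration for a 3 x 3 system,
# following Eqs. (11.5a)-(11.5c) with the stopping test of Eq. (3.5).
# The coefficients, right-hand side, and tolerance are illustrative
# assumptions, not values taken from the text.
a = [[4.0, 1.0, -1.0],
     [2.0, 7.0, 1.0],
     [1.0, -3.0, 12.0]]      # diagonally dominant, so the iteration converges
b = [3.0, 19.0, 31.0]
es = 1.0e-5                  # stopping criterion, in percent

x = [0.0, 0.0, 0.0]          # initial guesses of zero
for sweep in range(100):
    max_ea = 0.0
    for i in range(3):
        old = x[i]
        # solve the ith equation for x[i] using the newest available values
        s = b[i] - sum(a[i][j] * x[j] for j in range(3) if j != i)
        x[i] = s / a[i][i]
        if x[i] != 0.0:
            ea = abs((x[i] - old) / x[i]) * 100.0    # Eq. (3.5)
            max_ea = max(max_ea, ea)
    if max_ea < es:
        break

print(x)    # approaches [1.0, 2.0, 3.0]

Note that each newly computed value is used immediately in the remaining equations of the same sweep; this immediate reuse is the defining feature of the Gauss-Seidel method.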
For a set of two equations, it can be shown that Gauss-Seidel will converge if |a11| > |a12| and |a22| > |a21|. That is, the diagonal element must be greater than the off-diagonal element for each row.

FIGURE 11.5
Illustration of (a) convergence and (b) divergence of the Gauss-Seidel method. Notice that the same functions are plotted in both cases (u: 11x1 + 13x2 = 286; v: 11x1 − 9x2 = 99).

Thus, the order in which the equations are implemented (as depicted by the direction of the first arrow from the origin) dictates whether the computation converges.

For n equations, the analogous condition is

|aii| > Σ |aij|   (sum over j = 1 to n, j ≠ i)        (11.10)

That is, the diagonal coefficient in each of the equations must be larger than the sum of the absolute values of the other coefficients in the equation. This criterion is sufficient but not necessary for convergence. That is, although the method may sometimes work if Eq. (11.10) is not met, convergence is guaranteed if the condition is satisfied. Systems where Eq. (11.10) holds are called diagonally dominant. Fortunately, many engineering problems of practical importance fulfill this requirement.

11.2.2 Improvement of Convergence Using Relaxation

Relaxation represents a slight modification of the Gauss-Seidel method that is designed to enhance convergence. After each new value of x is computed using Eq. (11.5), that value is modified by a weighted average of the results of the previous and the present iterations:

xi_new = λ xi_new + (1 − λ) xi_old

where λ is a weighting factor that is assigned a value between 0 and 2.

If λ = 1, (1 − λ) is equal to 0 and the result is unmodified. However, if λ is set at a value between 0 and 1, the result is a weighted average of the present and the previous results. This type of modification is called underrelaxation. It is typically employed to make a nonconvergent system converge or to hasten convergence by dampening out oscillations.

For values of λ from 1 to 2, extra weight is placed on the present value. In this instance, there is an implicit assumption that the new value is moving in the correct direction toward the true solution but at too slow a rate. Thus, the added weight of λ is intended to improve the estimate by pushing it closer to the truth. Hence, this type of modification, which is called overrelaxation, is designed to accelerate the convergence of an already convergent system. The approach is also called successive or simultaneous overrelaxation, or SOR.

The choice of a proper value for λ is highly problem-specific and is often determined empirically. For a single solution of a set of equations it is often unnecessary. However, if the system under study is to be solved repeatedly, the efficiency introduced by a wise choice of λ can be extremely important. Good examples are the very large systems of partial differential equations that often arise when modeling continuous variations of variables (recall the distributed system depicted in Fig. PT3.1b). We will return to this topic in Part Eight.
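As a small illustration of the relaxation formula, the Python sketch below (an assumed example, not the algorithm of Fig. 11.6) wraps the Gauss-Seidel update in a function with a weighting factor lam: lam = 1 reproduces ordinary Gauss-Seidel, values between 0 and 1 apply underrelaxation, and values between 1 and 2 apply overrelaxation.

# Gauss-Seidel with relaxation: after each new x[i] is computed, it is
# blended with its previous value using the weighting factor lam.
# The function name, arguments, and the demo system are assumptions
# made for illustration only.
def gauss_seidel_relaxed(a, b, x, lam=1.0, es=1.0e-5, imax=100):
    n = len(b)
    for _ in range(imax):
        max_ea = 0.0
        for i in range(n):
            old = x[i]
            new = (b[i] - sum(a[i][j] * x[j] for j in range(n) if j != i)) / a[i][i]
            x[i] = lam * new + (1.0 - lam) * old     # relaxation step
            if x[i] != 0.0:
                max_ea = max(max_ea, abs((x[i] - old) / x[i]) * 100.0)
        if max_ea < es:
            break
    return x

a = [[4.0, 1.0, -1.0], [2.0, 7.0, 1.0], [1.0, -3.0, 12.0]]
b = [3.0, 19.0, 31.0]
print(gauss_seidel_relaxed(a, b, [0.0, 0.0, 0.0], lam=1.2))   # overrelaxation demo

Whether a particular λ actually speeds things up depends on the system, which is why the text notes that good values are usually found empirically.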
11.2.3 Algorithm for Gauss-Seidel

An algorithm for the Gauss-Seidel method, with relaxation, is depicted in Fig. 11.6. Note that this algorithm is not guaranteed to converge if the equations are not input in a diagonally dominant form.

The pseudocode has two features that bear mentioning. First, there is an initial set of nested loops to divide each equation by its diagonal element. This reduces the total number of operations in the algorithm. Second, notice that the error check is designated by a variable called sentinel. If any of the equations has an approximate error greater than the stopping criterion (εs), then the iterations are allowed to continue. The use of the sentinel allows us to circumvent unnecessary calculations of error estimates once one of the equations exceeds the criterion.

SUBROUTINE Gseid(a, b, n, x, imax, es, lambda)
  DOFOR i = 1, n
    dummy = ai,i
    DOFOR j = 1, n
      ai,j = ai,j / dummy
    END DO
    bi = bi / dummy
  END DO
  DOFOR i = 1, n
    sum = bi
    DOFOR j = 1, n
      IF i ≠ j THEN sum = sum − ai,j * xj
    END DO
    xi = sum
  END DO
  iter = 1
  DO
    sentinel = 1
    DOFOR i = 1, n
      old = xi
      sum = bi
      DOFOR j = 1, n
        IF i ≠ j THEN sum = sum − ai,j * xj
      END DO
      xi = lambda * sum + (1 − lambda) * old
      IF sentinel = 1 AND xi ≠ 0 THEN
        ea = ABS((xi − old) / xi) * 100
        IF ea > es THEN sentinel = 0
      END IF
    END DO
    iter = iter + 1
    IF sentinel = 1 OR (iter ≥ imax) EXIT
  END DO
END Gseid

FIGURE 11.6
Pseudocode for Gauss-Seidel with relaxation.

11.2.4 Problem Contexts for the Gauss-Seidel Method

Aside from circumventing the round-off dilemma, the Gauss-Seidel technique has a number of other advantages that make it particularly attractive in the context of certain engineering problems. For example, when the matrix in question is very large and very sparse (that is, most of the elements are zero), elimination methods waste large amounts of computer memory by storing zeros.
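To illustrate the memory point, the Python sketch below (an assumed example, not from the text) applies Gauss-Seidel to a tridiagonal system while storing only the three nonzero diagonals, roughly 3n numbers instead of the n^2 entries a full coefficient matrix would require.

# Gauss-Seidel on a tridiagonal system, storing only the nonzero diagonals.
# The test system -x[i-1] + 4*x[i] - x[i+1] = b[i] is an assumption chosen
# so that the exact solution is all ones.
n = 1000
sub = [-1.0] * n      # coefficient of x[i-1] in equation i (sub[0] unused)
diag = [4.0] * n      # coefficient of x[i]
sup = [-1.0] * n      # coefficient of x[i+1] (sup[n-1] unused)
b = [2.0] * n
b[0] = b[n - 1] = 3.0

x = [0.0] * n
for sweep in range(200):
    max_ea = 0.0
    for i in range(n):
        old = x[i]
        s = b[i]
        if i > 0:
            s -= sub[i] * x[i - 1]
        if i < n - 1:
            s -= sup[i] * x[i + 1]
        x[i] = s / diag[i]
        if x[i] != 0.0:
            max_ea = max(max_ea, abs((x[i] - old) / x[i]) * 100.0)
    if max_ea < 1.0e-8:
        break

print(max(abs(v - 1.0) for v in x))    # near zero: solution is approximately all ones

Because only the nonzero coefficients are stored and visited, both the memory and the work per sweep grow linearly with n rather than with n^2.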
