
STR 613: Numerical Analysis
Instructor: Dr. Ahmed Amir Khalil

Comparison

 Direct methods:
     Systematic procedures based on algebraic elimination.
     A fixed number of operations.
     Examples: Gauss Elimination, Gauss-Jordan, Matrix Inversion, and LU Decomposition.
 Iterative methods:
     Asymptotic, iterative procedures.
     A variable number of iterations (operations).
     A trial solution is substituted into the system to determine the mismatch, from which an improved solution is computed.
     Examples: Jacobi, Gauss-Seidel, Relaxation, and Successive Over-Relaxation.

Algorithm: Cholesky factorization

 Storage: n(n + 1)/2
 Operations: n^3/3 + O(n^2)
 A cheap way to check if a symmetric matrix is positive definite.

    for i = 1:n
        if i > 1
            % compute L(i,1:i)
            for j = 1:i-1
                L(i,j) = (A(i,j) - L(i,1:j-1)*L(j,1:j-1)')/L(j,j);
            end
        end
        L(i,i) = sqrt(A(i,i) - L(i,1:i-1)*L(i,1:i-1)');
    end
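The Cholesky pseudocode above can be sketched in plain Python (a minimal, unoptimized translation; function and variable names are ours). The failure branch doubles as the cheap positive-definiteness test mentioned above: the factorization breaks down exactly when a diagonal pivot is non-positive.

```python
import math

def cholesky(A):
    """Return lower-triangular L (list of lists) with A = L * L^T.

    Raises ValueError if A is not positive definite, i.e. when a
    diagonal pivot comes out non-positive.
    """
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):  # compute L[i][0:i], the row below the diagonal
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = (A[i][j] - s) / L[j][j]
        d = A[i][i] - sum(L[i][k] ** 2 for k in range(i))
        if d <= 0.0:
            raise ValueError("matrix is not positive definite")
        L[i][i] = math.sqrt(d)
    return L
```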

Iterative Methods: Jacobi Iteration

 Diagonal dominance is a sufficient condition for convergence.
 For the system Ax = b,

    \sum_{j=1}^{n} a_{ij} x_j = b_i   (i = 1, 2, ..., n)

  solving the i-th equation for x_i gives

    x_i = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j - \sum_{j=i+1}^{n} a_{ij} x_j \Big)   (i = 1, 2, ..., n)

 Assume an initial solution x^(0).
 Generate an improved solution vector x^(1).
 Repeat until the solution converges.
 The number of iterations required to achieve convergence depends on:
     Dominance of the diagonal coefficients.
     The initial solution vector.
     The algorithm used.
     The convergence criteria specified.

Jacobi Iteration

    x_i^{(1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(0)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(0)} \Big)   (i = 1, 2, ..., n)

  and in general

    x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \Big)   (i = 1, 2, ..., n)

  OR, in residual form,

    x_i^{(k+1)} = x_i^{(k)} + \frac{R_i^{(k)}}{a_{ii}}   (i = 1, 2, ..., n)

  where

    R_i^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)}
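The Jacobi update above can be sketched in plain Python (a minimal illustration on a small diagonally dominant system; all names are ours). The key property is that every component of x^(k+1) is computed only from the previous vector x^(k):

```python
def jacobi_step(A, b, x):
    """One Jacobi sweep: each x_i^(k+1) uses only values from x^(k)."""
    n = len(b)
    return [
        (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
        for i in range(n)
    ]

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Iterate until the largest change in any component is below tol."""
    x = list(x0) if x0 is not None else [0.0] * len(b)
    for k in range(max_iter):
        x_new = jacobi_step(A, b, x)
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```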

Example using Jacobi Iteration

    4x_1 -  x_2        +  x_4        = 100
    -x_1 + 4x_2 -  x_3        +  x_5 = 100
         -  x_2 + 4x_3 -  x_4        = 100
     x_1        -  x_3 + 4x_4 -  x_5 = 100
            x_2        -  x_4 + 4x_5 = 100

  Residuals:

    R_1 = 100 - 4x_1 + x_2 - x_4
    R_2 = 100 + x_1 - 4x_2 + x_3 - x_5
    R_3 = 100 + x_2 - 4x_3 + x_4
    R_4 = 100 - x_1 + x_3 - 4x_4 + x_5
    R_5 = 100 - x_2 + x_4 - 4x_5

  Iteration history:

    k    x1          x2          x3          x4          x5
    0     0           0           0           0           0
    1    25          25          25          25          25
    2    25          31.25       37.5        31.25       25
    3    25          34.375      40.625      34.375      25
    4    25          35.15625    42.1875     35.15625    25
    :     :           :           :           :           :
    16   25          35.714286   42.857140   35.714286   25
    :     :           :           :           :           :
    18   25          35.714285   42.857143   35.714285   25
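This example can be checked with a short self-contained Python script (our own sketch; it repeats the Jacobi sweep until the largest change falls below 1e-6, roughly matching the table's stopping point):

```python
# Coefficient matrix and right-hand side of the 5-equation example above.
A = [[ 4.0, -1.0,  0.0,  1.0,  0.0],
     [-1.0,  4.0, -1.0,  0.0,  1.0],
     [ 0.0, -1.0,  4.0, -1.0,  0.0],
     [ 1.0,  0.0, -1.0,  4.0, -1.0],
     [ 0.0,  1.0,  0.0, -1.0,  4.0]]
b = [100.0] * 5
x = [0.0] * 5
for k in range(200):
    # Jacobi sweep: every component uses only the previous vector.
    x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(5) if j != i)) / A[i][i]
             for i in range(5)]
    converged = max(abs(u - v) for u, v in zip(x_new, x)) < 1e-6
    x = x_new
    if converged:
        break
```

The converged vector agrees with the table: x1 = x5 = 25, x2 = x4 = 250/7 ≈ 35.714286, x3 = 300/7 ≈ 42.857143.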

Gauss-Seidel

 Diagonal dominance is a sufficient condition for convergence.
 Similar to Jacobi, except that the most recently computed values of all x_i are used in all computations:

    x_i^{(k+1)} = \frac{1}{a_{ii}} \Big( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(k)} \Big)   (i = 1, 2, ..., n)

  OR, in residual form,

    x_i^{(k+1)} = x_i^{(k)} + \frac{R_i^{(k)}}{a_{ii}}   (i = 1, 2, ..., n)

  where

    R_i^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)}
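The "most recently computed values" rule amounts to updating the vector in place, which is the only change from the Jacobi sweep (a minimal Python sketch; names are ours):

```python
def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep: x is updated in place, so each x[i]
    immediately sees the freshly computed values x[0..i-1]."""
    n = len(b)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x
```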

Example using Gauss-Seidel

    k    x1          x2          x3          x4          x5
    0     0           0           0           0           0
    1    25          31.25       32.8125     26.953125   23.925781
    2    26.074219   33.740234   40.173340   34.506226   25.191498
    :     :           :           :           :           :
    15   25          35.714286   42.857143   35.714286   25

Comparison

  Jacobi:

    k    x1          x2          x3          x4          x5
    16   25          35.714286   42.857140   35.714286   25
    :     :           :           :           :           :
    18   25          35.714285   42.857143   35.714285   25

  Gauss-Seidel:

    k    x1          x2          x3          x4          x5
    15   25          35.714286   42.857143   35.714286   25

Successive Over-Relaxation

 Southwell observed that in many cases the changes were always in the same direction.
 Overcorrecting (i.e., over-relaxing) the value of x_i by the right amount accelerates convergence:

    x_i^{(k+1)} = x_i^{(k)} + \lambda \frac{R_i^{(k)}}{a_{ii}}   (i = 1, 2, ..., n)

    R_i^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)}   (i = 1, 2, ..., n)

 If \lambda = 1.0, we get Gauss-Seidel.
 If 1 < \lambda < 2, we get an over-relaxed system.
 The relaxation factor does not change the final solution, since it multiplies the residual R_i, which approaches zero when the final solution is reached.

  The same-direction behaviour is visible in the history of x2 and x3 from the previous example:

    k    x2          x3
    0     0           0
    1    25          25
    2    31.25       37.5
    3    34.375      40.625
    4    35.15625    42.1875
    :     :           :
    16   35.714286   42.857140
    :     :           :
    18   35.714285   42.857143
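The scaled correction can be sketched as a one-line change to the Gauss-Seidel sweep (our own minimal Python illustration; because x is updated in place, summing A[i][j]*x[j] over all j reproduces the residual R_i exactly as defined above):

```python
def sor_step(A, b, x, lam):
    """One SOR sweep: a Gauss-Seidel correction scaled by the
    relaxation factor lam (lam = 1.0 reproduces Gauss-Seidel)."""
    n = len(b)
    for i in range(n):
        residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
        x[i] += lam * residual / A[i][i]
    return x
```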

Successive Over-Relaxation (continued)

    x_i^{(k+1)} = x_i^{(k)} + \lambda \frac{R_i^{(k)}}{a_{ii}}   (i = 1, 2, ..., n)

    R_i^{(k)} = b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(k+1)} - \sum_{j=i}^{n} a_{ij} x_j^{(k)}   (i = 1, 2, ..., n)

 If \lambda < 1, we get an under-relaxed system, useful if the Gauss-Seidel algorithm causes the solution to overshoot (oscillate).

Example: In the previous example, the number of iterations k required to achieve an accuracy of 0.000001 using Successive Over-Relaxation (SOR):

    \lambda    1.0    1.01    1.02    1.05    1.14    1.15
    k          15     14      14      13      13      14
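This experiment can be reproduced with a self-contained Python sketch (our own code; the exact iteration counts depend on the stopping criterion used, so they may differ slightly from the table, but over-relaxation should not take more sweeps than \lambda = 1.0 on this system):

```python
def sor_solve(A, b, lam, tol=1e-6, max_iter=200):
    """Run SOR sweeps until the largest correction is below tol;
    return the solution and the number of sweeps used."""
    n = len(b)
    x = [0.0] * n
    for k in range(1, max_iter + 1):
        biggest = 0.0
        for i in range(n):
            residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
            delta = lam * residual / A[i][i]
            x[i] += delta
            biggest = max(biggest, abs(delta))
        if biggest < tol:
            return x, k
    return x, max_iter

# The 5-equation example system from earlier.
A = [[ 4.0, -1.0,  0.0,  1.0,  0.0],
     [-1.0,  4.0, -1.0,  0.0,  1.0],
     [ 0.0, -1.0,  4.0, -1.0,  0.0],
     [ 1.0,  0.0, -1.0,  4.0, -1.0],
     [ 0.0,  1.0,  0.0, -1.0,  4.0]]
b = [100.0] * 5
x_gs, k_gs = sor_solve(A, b, 1.0)    # Gauss-Seidel
x_sor, k_sor = sor_solve(A, b, 1.05) # over-relaxed
```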

Error

 Absolute error = approximate value - exact value.
 Relative error = absolute error / exact value.
 Absolute error should never be used as an accuracy criterion: an error of 0.001 in a value of 100.000 is very different from an error of 0.001 in a value of 0.001.

Accuracy

 Number of significant digits.
 Direct methods: error due to round-off.
 Iterative methods:
     Error is built in.
     Approach the exact solution asymptotically.
     Terminate when the solution converges based on some criterion.
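A small numeric illustration of the 100.000 vs 0.001 point above (our own numbers; the same absolute error gives wildly different relative errors):

```python
def abs_and_rel_error(approx, exact):
    """Return (absolute error, relative error) as defined above."""
    abs_err = approx - exact
    return abs_err, abs_err / exact

# Same absolute error of 0.001 on a large and on a tiny value:
e_large = abs_and_rel_error(100.001, 100.000)  # relative error ~ 1e-5
e_small = abs_and_rel_error(0.002, 0.001)      # relative error = 1.0 (100%)
```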

Convergence

 Achieved when the accuracy criterion is satisfied. Common choices, in absolute form and relative form:

    \max_i |\Delta x_i| \le \varepsilon
          or   \max_i \left| \frac{\Delta x_i}{x_i} \right| \le \varepsilon

    \sum_{i=1}^{n} |\Delta x_i| \le \varepsilon
          or   \sum_{i=1}^{n} \left| \frac{\Delta x_i}{x_i} \right| \le \varepsilon

    \Big( \sum_{i=1}^{n} (\Delta x_i)^2 \Big)^{1/2} \le \varepsilon
          or   \Big( \sum_{i=1}^{n} \Big( \frac{\Delta x_i}{x_i} \Big)^2 \Big)^{1/2} \le \varepsilon
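The three absolute criteria above are the max-norm, 1-norm, and 2-norm of the change vector, and can be computed in a few lines of Python (our own sketch):

```python
import math

def convergence_measures(x_new, x_old):
    """Return (max-norm, 1-norm, 2-norm) of the change vector
    dx = x_new - x_old, i.e. the three absolute criteria above."""
    dx = [abs(u - v) for u, v in zip(x_new, x_old)]
    return max(dx), sum(dx), math.sqrt(sum(d * d for d in dx))
```

Dividing each component of dx by the corresponding x_i before taking the norms gives the relative forms.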
