
Applied Numerical Methods

Iterative Methods for Linear Equation Systems

Diego Mendoza
Master in Mechanical Engineering Program
Universidad Autónoma del Caribe

March 4, 2016
Introduction
Examples of linear equation systems

Figure: Structure
Introduction

Figure: Equation system


Figure: Partial differential equation.
Figure: PDE linear system
Introduction

Iterative methods use the following recurrence formula to solve
a linear equation system:

x^{(k+1)} = B x^{(k)} + C    (1)

Iterative methods for linear equation systems (LES) can be
broadly classified as:
1. Stationary: methods in which B and C in Eq. 1 do not
depend on k.
2. Nonstationary: methods in which B and C (Eq. 1) change
during the iterative process.
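
The recurrence in Eq. 1 maps directly to code. The following is a minimal NumPy sketch of a generic stationary iteration, assuming B and C have already been formed from some splitting of A; the function name, tolerance and iteration limit are illustrative choices, not part of the slides.

import numpy as np

def stationary_iteration(B, C, x0, tol=1e-8, max_iter=500):
    """Generic stationary scheme x^(k+1) = B x^(k) + C (Eq. 1)."""
    x = x0.astype(float)
    for k in range(max_iter):
        x_new = B @ x + C          # one application of the recurrence
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k + 1    # converged
        x = x_new
    return x, max_iter             # iteration limit reached

The scheme converges for any initial guess x^{(0)} whenever the spectral radius of B is smaller than one.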
Some stationary methods

▶ Jacobi: Easy to understand and implement, but
convergence is slow.
▶ Gauss-Seidel: Like the Jacobi method, except that it uses
updated values as soon as they are available. Faster than
the Jacobi method, though still relatively slow.
▶ Successive over-relaxation (SOR): Can be derived from
the Gauss-Seidel method by introducing an extrapolation
parameter ω. For the optimal choice of ω, SOR may
converge faster than Gauss-Seidel by an order of
magnitude.
Some nonstationary methods

▶ Conjugate gradient (CG): The CG method derives its
name from the fact that it generates a sequence of
conjugate (or orthogonal) vectors. These vectors are the
residuals of the iterates. CG is an extremely effective
method when the coefficient matrix is symmetric positive
definite.
▶ Biconjugate gradient (BiCG): BiCG generates two
CG-like sequences of vectors, one based on a system with
the original coefficient matrix A, and one on A^T. It is
useful when the matrix is nonsymmetric and nonsingular.
Jacobi

The Jacobi method is easily derived by examining each of the
n equations in the linear system Ax = b in isolation. Solving
the ith equation for x_i gives

x_i = (b_i − ∑_{j≠i} a_{ij} x_j) / a_{ii}

This suggests the iterative method defined by

x_i^{(k+1)} = (b_i − ∑_{j≠i} a_{ij} x_j^{(k)}) / a_{ii}    (2)

known as the Jacobi recurrence formula.
Algorithm 1 pseudocode for the Jacobi method
1: Choose an initial guess x^{(0)} to the solution x.
2: for k = 0, 1, 2, . . . do
3:   for i = 1 to n do
4:     x̄_i = 0
5:     for j = 1, 2, . . . , i − 1, i + 1, . . . , n do
6:       x̄_i = x̄_i + a_{ij} x_j^{(k)}
7:     end for
8:     x̄_i = (b_i − x̄_i) / a_{ii}
9:   end for
10:  x^{(k+1)} = x̄
11:  Check for convergence; continue if necessary.
12: end for
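
As a concrete illustration, here is a minimal NumPy sketch of Algorithm 1; the small test system at the bottom is assumed example data, not the system of the following example slide.

import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=200):
    """Jacobi iteration following Algorithm 1."""
    n = len(b)
    x = x0.astype(float)
    for k in range(max_iter):
        x_bar = np.empty(n)
        for i in range(n):
            # sum of a_ij * x_j^(k) over j != i
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x_bar[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x_bar - x, np.inf) < tol:   # convergence check
            return x_bar, k + 1
        x = x_bar
    return x, max_iter

# Illustrative diagonally dominant test system (assumed data)
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
x, iters = jacobi(A, b, np.zeros(3))
print(x, iters)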
Example

Solve the following linear system using Jacobi,
starting from x^{(0)} = [0, 0, 0]^T.


Gauss-Seidel

If we proceed as with the Jacobi method, but now assume that
the equations are examined one at a time in sequence, and
that previously computed results are used as soon as they are
available, we obtain the Gauss-Seidel method.

x_i^{(k+1)} = (b_i − ∑_{j<i} a_{ij} x_j^{(k+1)} − ∑_{j>i} a_{ij} x_j^{(k)}) / a_{ii}    (3)
Algorithm 2 pseudocode for the Gauss-Seidel method
1: Choose an initial guess x^{(0)} to the solution x.
2: for k = 0, 1, 2, . . . do
3:   for i = 1 to n do
4:     σ = 0
5:     for j = 1, 2, . . . , i − 1 do
6:       σ = σ + a_{ij} x_j^{(k+1)}
7:     end for
8:     for j = i + 1, i + 2, . . . , n do
9:       σ = σ + a_{ij} x_j^{(k)}
10:    end for
11:    x_i^{(k+1)} = (b_i − σ) / a_{ii}
12:  end for
13:  Check for convergence; continue if necessary.
14: end for
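
A minimal NumPy sketch of Algorithm 2, structured like the Jacobi sketch above; the tolerance and iteration limit are illustrative defaults.

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=200):
    """Gauss-Seidel iteration following Algorithm 2."""
    n = len(b)
    x = x0.astype(float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # updated values x_j^(k+1) for j < i, old values x_j^(k) for j > i
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:   # convergence check
            return x, k + 1
    return x, max_iter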
Successive Over Relaxation (SOR)

SOR is devised by applying extrapolation to the Gauss-Seidel
method. This extrapolation takes the form of a weighted
average between the previous iterate and the computed
Gauss-Seidel iterate, successively for each component.

x_i^{(k+1)} = ω x̄_i^{(k+1)} + (1 − ω) x_i^{(k)},   0 < ω < 2    (4)

where x̄ denotes a Gauss-Seidel iterate, and ω is the
extrapolation factor. If ω = 1, the SOR method simplifies to
the Gauss-Seidel method.
Algorithm 3 pseudocode for the SOR method
1: Choose an initial guess x^{(0)} to the solution x.
2: for k = 0, 1, 2, . . . do
3:   for i = 1 to n do
4:     σ = 0
5:     for j = 1, 2, . . . , i − 1 do
6:       σ = σ + a_{ij} x_j^{(k+1)}
7:     end for
8:     for j = i + 1, i + 2, . . . , n do
9:       σ = σ + a_{ij} x_j^{(k)}
10:    end for
11:    σ = (b_i − σ) / a_{ii}
12:    x_i^{(k+1)} = x_i^{(k)} + ω (σ − x_i^{(k)})
13:  end for
14:  Check for convergence; continue if necessary.
15: end for
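
A minimal NumPy sketch of Algorithm 3; the relaxation factor ω = 1.25 is an arbitrary illustrative default, since the optimal value depends on the problem.

import numpy as np

def sor(A, b, x0, omega=1.25, tol=1e-8, max_iter=200):
    """SOR iteration following Algorithm 3."""
    n = len(b)
    x = x0.astype(float)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            gs = (b[i] - sigma) / A[i, i]              # Gauss-Seidel value
            x[i] = x_old[i] + omega * (gs - x_old[i])  # relaxed update (Eq. 4)
        if np.linalg.norm(x - x_old, np.inf) < tol:    # convergence check
            return x, k + 1
    return x, max_iter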
Conjugate Gradient (CG)

The Conjugate Gradient method is an effective method for
symmetric positive definite systems.
The matrix A is positive definite if
▶ x^T A x > 0 ∀ x ≠ 0
The matrix A is symmetric if
▶ A = A^T
The general strategy is to use an optimization procedure to find
the solution of the linear equation system Ax = b.
The first-order optimality condition for the unconstrained
minimization problem

min_x f(x) = (1/2) x^T A x − b^T x

is

∇f(x*) = 0 ⇔ Ax* − b = 0

Therefore the minimizer of the quadratic function
f(x) = (1/2) x^T A x − b^T x is also a solution to the linear system
Ax = b.
The minimizer of f(x) can be found using a line-search
strategy

x^{(k+1)} = x^{(k)} + α_k p^{(k)}

where x^{(k+1)} is the new value of x, α_k is the step length and
p^{(k)} is a descent direction of f(x).
The general procedure for the line search consists of:
1. Finding a (local) descent direction p such that

∇f(x^{(k)})^T p(x^{(k)}) < 0

The steepest descent direction is p(x^{(k)}) = −∇f(x^{(k)})

2. Finding the step length along the search direction.
The step length can be found by minimizing
φ(α) = f(x^{(k)} + α p^{(k)}):

dφ/dα = d f(x^{(k)} + α p^{(k)}) / dα = 0
where

φ(α) = (1/2) (x^{(k)} + α p^{(k)})^T A (x^{(k)} + α p^{(k)}) − (x^{(k)} + α p^{(k)})^T b

The derivative becomes

dφ/dα = p^{(k)T} (A x^{(k)} − b) + α p^{(k)T} A p^{(k)} = 0

The optimal step length is

α_k = p^{(k)T} r^{(k)} / (p^{(k)T} A p^{(k)})

where r^{(k)} = b − A x^{(k)} is the residual of the linear equation.


The iterative line-search optimization scheme is:

x^{(k+1)} = x^{(k)} + (p^{(k)T} r^{(k)} / (p^{(k)T} A p^{(k)})) p^{(k)}    (5)

If p^{(k)} = −∇f(x^{(k)}) = r^{(k)}, the steepest descent method is
obtained.
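
A minimal NumPy sketch of the line-search scheme of Eq. 5 with p^{(k)} = r^{(k)}, i.e. the steepest descent method; the stopping rule on the residual norm is an illustrative choice.

import numpy as np

def steepest_descent(A, b, x0, tol=1e-8, max_iter=500):
    """Line-search scheme of Eq. 5 with p = r (steepest descent)."""
    x = x0.astype(float)
    for k in range(max_iter):
        r = b - A @ x                     # residual = -grad f(x)
        if np.linalg.norm(r) < tol:
            return x, k
        alpha = (r @ r) / (r @ (A @ r))   # optimal step length along r
        x = x + alpha * r
    return x, max_iter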
The conjugate gradient method uses a set of nonzero direction
vectors {p^{(1)}, . . . , p^{(n)}} that are A-conjugate, and residual
vectors {r^{(0)}, r^{(1)}, . . .} that are mutually orthogonal, i.e.,

p^{(i)T} A p^{(j)} = 0 and r^{(i)T} r^{(j)} = 0 ∀ i ≠ j

where A is a symmetric matrix.
▶ To construct the direction vectors {p^{(1)}, . . .} and the
approximations {x^{(1)}, . . .}, the method starts with an
initial approximation x^{(0)} and uses the steepest descent
direction r^{(0)} = b − A x^{(0)} as the first search
direction p^{(1)}.
▶ p^{(k)} is generated by setting

p^{(k)} = r^{(k−1)} + s_{k−1} p^{(k−1)}

where

s_{k−1} = − p^{(k−1)T} A r^{(k−1)} / (p^{(k−1)T} A p^{(k−1)})
The basic algorithm is as follows:
Algorithm 4 pseudocode for the Conjugate Gradient method
1: Compute r^{(0)} = b − A x^{(0)} and set p^{(1)} = r^{(0)}.
2: for k = 1, 2, . . . do
3:   α_k = r^{(k−1)T} r^{(k−1)} / (p^{(k)T} A p^{(k)})
4:   x^{(k)} = x^{(k−1)} + α_k p^{(k)}
5:   r^{(k)} = r^{(k−1)} − α_k A p^{(k)}
6:   s_k = r^{(k)T} r^{(k)} / (r^{(k−1)T} r^{(k−1)})
7:   p^{(k+1)} = r^{(k)} + s_k p^{(k)}
8:   Check for convergence; continue if necessary.
9: end for
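
A minimal NumPy sketch of Algorithm 4, assuming A is symmetric positive definite; it reuses the quantity r^{(k)T} r^{(k)} from step 6 to avoid recomputing inner products.

import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-8, max_iter=None):
    """Conjugate Gradient following Algorithm 4 (A symmetric positive definite)."""
    n = len(b)
    max_iter = max_iter if max_iter is not None else n
    x = x0.astype(float)
    r = b - A @ x            # r^(0)
    p = r.copy()             # p^(1) = r^(0)
    rs_old = r @ r
    for k in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # step 3
        x = x + alpha * p                # step 4
        r = r - alpha * Ap               # step 5
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:        # convergence check
            return x, k + 1
        p = r + (rs_new / rs_old) * p    # steps 6-7
        rs_old = rs_new
    return x, max_iter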
Figure: Performance of iterative methods for linear equation systems
Exercise 1
Exercise 2
Exercise 3
Exercise 4
Algorithm 5 pseudocode for the Preconditioned Conjugate Gradient method
1: Compute r^{(0)} = b − A x^{(0)} for some initial guess x^{(0)}.
2: for i = 1, 2, . . . do
3:   solve M z^{(i−1)} = r^{(i−1)}
4:   ρ_{i−1} = r^{(i−1)T} z^{(i−1)}
5:   if i = 1 then
6:     p^{(1)} = z^{(0)}
7:   else
8:     β_{i−1} = ρ_{i−1} / ρ_{i−2}
9:     p^{(i)} = z^{(i−1)} + β_{i−1} p^{(i−1)}
10:  end if
11:  q^{(i)} = A p^{(i)}
12:  α_i = ρ_{i−1} / (p^{(i)T} q^{(i)})
13:  x^{(i)} = x^{(i−1)} + α_i p^{(i)}
14:  r^{(i)} = r^{(i−1)} − α_i q^{(i)}
15:  Check for convergence; continue if necessary.
16: end for
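
A minimal NumPy sketch of Algorithm 5, with the update of z, ρ and p moved to the end of each pass (an equivalent reorganization). The slides do not fix a particular preconditioner, so the diagonal (Jacobi) choice M = diag(A) used below is an assumption for illustration only.

import numpy as np

def preconditioned_cg(A, b, x0, tol=1e-8, max_iter=None):
    """Preconditioned Conjugate Gradient following Algorithm 5,
    with the assumed Jacobi preconditioner M = diag(A)."""
    n = len(b)
    max_iter = max_iter if max_iter is not None else n
    M_diag = np.diag(A)                  # preconditioner diagonal (assumption)
    x = x0.astype(float)
    r = b - A @ x
    z = r / M_diag                       # solve M z = r
    p = z.copy()
    rho_old = r @ z
    for i in range(max_iter):
        q = A @ p                        # q^(i) = A p^(i)
        alpha = rho_old / (p @ q)
        x = x + alpha * p
        r = r - alpha * q
        if np.linalg.norm(r) < tol:      # convergence check
            return x, i + 1
        z = r / M_diag                   # solve M z = r
        rho_new = r @ z
        p = z + (rho_new / rho_old) * p
        rho_old = rho_new
    return x, max_iter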
