These are methods which compute a sequence of progressively more accurate iterates to approximate the solution of $Ax = b$. We need such methods for solving many large linear systems. Sometimes the matrix is too large to be stored in the computer memory, making a direct method too difficult to use. More importantly, the operations cost of $\frac{2}{3}n^3$ for Gaussian elimination is too large for most large systems. With iteration methods, the cost can often be reduced to $O(n^2)$ or less. Even when a special form for $A$ can be used to reduce the cost of elimination, iteration will often be faster. There are other, more subtle, reasons, which we do not discuss here.
We begin with an example. Consider the linear system

$$9x_1 + x_2 + x_3 = b_1$$
$$2x_1 + 10x_2 + 3x_3 = b_2$$
$$3x_1 + 4x_2 + 11x_3 = b_3$$

In equation $i$, solve for $x_i$:

$$x_1 = \tfrac{1}{9}\left[ b_1 - x_2 - x_3 \right]$$
$$x_2 = \tfrac{1}{10}\left[ b_2 - 2x_1 - 3x_3 \right]$$
$$x_3 = \tfrac{1}{11}\left[ b_3 - 3x_1 - 4x_2 \right]$$

Let $x^{(0)} = \left[ x_1^{(0)}, x_2^{(0)}, x_3^{(0)} \right]^T$ be an initial guess to the solution $x$. Then define

$$x_1^{(k+1)} = \tfrac{1}{9}\left[ b_1 - x_2^{(k)} - x_3^{(k)} \right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[ b_2 - 2x_1^{(k)} - 3x_3^{(k)} \right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[ b_3 - 3x_1^{(k)} - 4x_2^{(k)} \right]$$

for $k = 0, 1, 2, \ldots$. This is the Jacobi iteration method.
NUMERICAL EXAMPLE. Let $b = [10, 19, 0]^T$. The solution is $x = [1, 2, -1]^T$. To measure the error, we use

$$\text{Error} = \left\| x - x^{(k)} \right\|_\infty = \max_i \left| x_i - x_i^{(k)} \right|$$

 k    x1^(k)    x2^(k)    x3^(k)    Error      Ratio
 0    0.0000    0.0000    0.0000    2.00E+0
 1    1.1111    1.9000    0.0000    1.00E+0    0.500
 2    0.9000    1.6778   -0.9939    3.22E-1    0.322
 3    1.0351    2.0182   -0.8556    1.44E-1    0.448
 4    0.9819    1.9496   -1.0162    5.06E-2    0.349
 5    1.0074    2.0085   -0.9768    2.32E-2    0.462
 6    0.9965    1.9915   -1.0051    8.45E-3    0.364
 7    1.0015    2.0022   -0.9960    4.03E-3    0.477
 8    0.9993    1.9985   -1.0012    1.51E-3    0.375
 9    1.0003    2.0005   -0.9993    7.40E-4    0.489
10    0.9999    1.9997   -1.0003    2.83E-4    0.382
30    1.0000    2.0000   -1.0000    3.01E-11   0.447
31    1.0000    2.0000   -1.0000    1.35E-11   0.447
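As a check on the table, the Jacobi sweep can be coded in a few lines. This is a minimal sketch, assuming NumPy is available; the array `A` is the coefficient matrix implied by the divisors 9, 10, 11 above, and `jacobi` is an illustrative helper name, not part of the original notes.

```python
import numpy as np

# Coefficient matrix and right side of the example above.
A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])
x_true = np.array([1.0, 2.0, -1.0])

def jacobi(A, b, x0, iters):
    """Return the Jacobi iterates x^(0), ..., x^(iters)."""
    d = np.diag(A)          # diagonal entries a_ii
    R = A - np.diag(d)      # off-diagonal part of A
    x = x0.copy()
    history = [x.copy()]
    for _ in range(iters):
        # x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii, all i at once
        x = (b - R @ x) / d
        history.append(x.copy())
    return history

iterates = jacobi(A, b, np.zeros(3), 31)
errors = [np.max(np.abs(x_true - x)) for x in iterates]
```

The computed `errors` reproduce the Error column of the table, e.g. `errors[2]` is approximately 3.22E-1.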
In the Jacobi method, the new value $x_1^{(k+1)}$ is already available when $x_2^{(k+1)}$ is computed. Now immediately use every new iterate:

$$x_1^{(k+1)} = \tfrac{1}{9}\left[ b_1 - x_2^{(k)} - x_3^{(k)} \right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[ b_2 - 2x_1^{(k+1)} - 3x_3^{(k)} \right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[ b_3 - 3x_1^{(k+1)} - 4x_2^{(k+1)} \right]$$

for $k = 0, 1, \ldots$. This is the Gauss-Seidel iteration method.
NUMERICAL EXAMPLE. Again let $b = [10, 19, 0]^T$, with solution $x = [1, 2, -1]^T$, and measure the error by

$$\text{Error} = \left\| x - x^{(k)} \right\|_\infty = \max_i \left| x_i - x_i^{(k)} \right|$$

 k    x1^(k)    x2^(k)    x3^(k)    Error     Ratio
 0    0.0000    0.0000    0.0000    2.00E+0
 1    1.1111    1.6778   -0.9131    3.22E-1   0.161
 2    1.0262    1.9687   -0.9958    3.13E-2   0.097
 3    1.0030    1.9981   -1.0001    3.00E-3   0.096
 4    1.0002    2.0000   -1.0001    2.24E-4   0.074
 5    1.0000    2.0000   -1.0000    1.65E-5   0.074
 6    1.0000    2.0000   -1.0000    2.58E-6   0.155
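The corresponding Gauss-Seidel loop differs from the Jacobi sketch only in using each updated component immediately. As before this is an illustrative sketch assuming NumPy, with the same reconstructed `A` and `b`:

```python
import numpy as np

A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])
x_true = np.array([1.0, 2.0, -1.0])

def gauss_seidel(A, b, x0, iters):
    """Return the Gauss-Seidel iterates x^(0), ..., x^(iters)."""
    n = len(b)
    x = x0.copy()
    history = [x.copy()]
    for _ in range(iters):
        for i in range(n):
            new = A[i, :i] @ x[:i]      # already-updated components x_j^(k+1), j < i
            old = A[i, i+1:] @ x[i+1:]  # not-yet-updated components x_j^(k), j > i
            x[i] = (b[i] - new - old) / A[i, i]
        history.append(x.copy())
    return history

iterates = gauss_seidel(A, b, np.zeros(3), 6)
errors = [np.max(np.abs(x_true - x)) for x in iterates]
```

After six sweeps the error is already near 2.58E-6, matching the table's much faster decay than Jacobi's.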
A GENERAL SCHEMA

Rewrite $Ax = b$ as

$$Nx = b + Px \qquad (1)$$

with $A = N - P$. The iteration method is then

$$Nx^{(k+1)} = b + Px^{(k)}, \qquad k = 0, 1, 2, \ldots \qquad (2)$$

EXAMPLE. Let $N$ be the diagonal of $A$, with $P = N - A$. Then (2) becomes

$$a_{i,i}\, x_i^{(k+1)} = b_i - \sum_{\substack{j=1 \\ j \neq i}}^{n} a_{i,j}\, x_j^{(k)}, \qquad 1 \le i \le n$$

for $k = 0, 1, \ldots$. This is the Jacobi method.

EXAMPLE. Let $N$ be the lower triangular part of $A$, including its diagonal, with $P = N - A$. Then (2) becomes

$$\sum_{j=1}^{i} a_{i,j}\, x_j^{(k+1)} = b_i - \sum_{j=i+1}^{n} a_{i,j}\, x_j^{(k)}, \qquad 1 \le i \le n$$

for $k = 0, 1, \ldots$. This is the Gauss-Seidel method.
EXAMPLE. Another method could be defined by letting $N$ be the tridiagonal matrix formed from the diagonal, super-diagonal, and sub-diagonal of $A$, with $P = N - A$:

$$N = \begin{bmatrix}
a_{1,1} & a_{1,2} & & & 0 \\
a_{2,1} & a_{2,2} & a_{2,3} & & \\
 & \ddots & \ddots & \ddots & \\
 & & \ddots & \ddots & a_{n-1,n} \\
0 & & & a_{n,n-1} & a_{n,n}
\end{bmatrix}$$
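The steps of the general schema can be sketched with this tridiagonal choice of $N$ on the 3x3 example used earlier. This is an illustration assuming NumPy; in practice one would use a dedicated tridiagonal solver rather than a general dense solve for $Nz = f$:

```python
import numpy as np

A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])

# N keeps the diagonal, sub-diagonal, and super-diagonal of A; P = N - A.
N = np.triu(np.tril(A, 1), -1)
P = N - A

# Iterate N x^(k+1) = b + P x^(k).
x = np.zeros(3)
for _ in range(20):
    x = np.linalg.solve(N, b + P @ x)
```

Because $P$ retains only the two corner entries of $A$, this splitting converges noticeably faster here than Jacobi.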
CONVERGENCE

When does the iteration method (2) converge? Subtract (2) from (1), obtaining

$$N\left( x - x^{(k+1)} \right) = P\left( x - x^{(k)} \right)$$

$$x - x^{(k+1)} = N^{-1}P\left( x - x^{(k)} \right)$$

$$e^{(k+1)} = M e^{(k)}, \qquad M = N^{-1}P \qquad (3)$$

where $e^{(k)} = x - x^{(k)}$ denotes the error. Then

$$\left\| e^{(k+1)} \right\| \le \|M\| \left\| e^{(k)} \right\|, \qquad k \ge 0$$

$$\left\| e^{(k)} \right\| \le \|M\|^k \left\| e^{(0)} \right\|, \qquad k \ge 0$$
EXAMPLE. For the Jacobi method applied to our earlier system,

$$x_1^{(k+1)} = \tfrac{1}{9}\left[ b_1 - x_2^{(k)} - x_3^{(k)} \right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[ b_2 - 2x_1^{(k)} - 3x_3^{(k)} \right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[ b_3 - 3x_1^{(k)} - 4x_2^{(k)} \right]$$

the iteration matrix is

$$M = \begin{bmatrix}
0 & -\tfrac{1}{9} & -\tfrac{1}{9} \\
-\tfrac{2}{10} & 0 & -\tfrac{3}{10} \\
-\tfrac{3}{11} & -\tfrac{4}{11} & 0
\end{bmatrix}, \qquad \|M\|_\infty = \tfrac{7}{11} \doteq 0.636$$

This is consistent with the earlier table of values, although the actual convergence rate was better than predicted by (3).
For the Gauss-Seidel method applied to the same system,

$$x_1^{(k+1)} = \tfrac{1}{9}\left[ b_1 - x_2^{(k)} - x_3^{(k)} \right]$$
$$x_2^{(k+1)} = \tfrac{1}{10}\left[ b_2 - 2x_1^{(k+1)} - 3x_3^{(k)} \right]$$
$$x_3^{(k+1)} = \tfrac{1}{11}\left[ b_3 - 3x_1^{(k+1)} - 4x_2^{(k+1)} \right]$$

we have

$$M = N^{-1}P = \begin{bmatrix}
9 & 0 & 0 \\
2 & 10 & 0 \\
3 & 4 & 11
\end{bmatrix}^{-1}
\begin{bmatrix}
0 & -1 & -1 \\
0 & 0 & -3 \\
0 & 0 & 0
\end{bmatrix}
= \begin{bmatrix}
0 & -\tfrac{1}{9} & -\tfrac{1}{9} \\
0 & \tfrac{1}{45} & -\tfrac{5}{18} \\
0 & \tfrac{1}{45} & \tfrac{13}{99}
\end{bmatrix}$$

$$\|M\|_\infty = 0.3$$
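Both iteration matrices can be verified numerically. A sketch assuming NumPy, forming $M = N^{-1}P$ with $P = N - A$ for each splitting; `inf_norm` is an illustrative helper name:

```python
import numpy as np

A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])

# Jacobi: N is the diagonal of A.
N_j = np.diag(np.diag(A))
M_j = np.linalg.solve(N_j, N_j - A)

# Gauss-Seidel: N is the lower triangular part of A, diagonal included.
N_gs = np.tril(A)
M_gs = np.linalg.solve(N_gs, N_gs - A)

def inf_norm(M):
    """Matrix infinity norm: maximum absolute row sum."""
    return np.max(np.abs(M).sum(axis=1))

norm_j = inf_norm(M_j)    # 7/11, about 0.636
norm_gs = inf_norm(M_gs)  # 0.3
```

The computed norms agree with the values 7/11 and 0.3 derived above.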
The matrix $A$ is called diagonally dominant if

$$\left| a_{i,i} \right| > \sum_{\substack{j=1 \\ j \neq i}}^{n} \left| a_{i,j} \right|, \qquad i = 1, \ldots, n$$

For the Jacobi method,

$$M = \begin{bmatrix}
0 & -\tfrac{a_{1,2}}{a_{1,1}} & \cdots & -\tfrac{a_{1,n}}{a_{1,1}} \\
-\tfrac{a_{2,1}}{a_{2,2}} & 0 & \cdots & -\tfrac{a_{2,n}}{a_{2,2}} \\
\vdots & & \ddots & \vdots \\
-\tfrac{a_{n,1}}{a_{n,n}} & \cdots & -\tfrac{a_{n,n-1}}{a_{n,n}} & 0
\end{bmatrix}$$

With diagonally dominant matrices $A$,

$$\|M\|_\infty = \max_{1 \le i \le n} \sum_{\substack{j=1 \\ j \neq i}}^{n} \left| \frac{a_{i,j}}{a_{i,i}} \right| < 1 \qquad (4)$$

and therefore the Jacobi iteration converges.
GAUSS-SEIDEL ITERATION

Assume $A$ is diagonally dominant. Subtracting the Gauss-Seidel equations from the corresponding exact equations gives

$$\sum_{j=1}^{i} a_{i,j}\, e_j^{(k+1)} = -\sum_{j=i+1}^{n} a_{i,j}\, e_j^{(k)}, \qquad 1 \le i \le n$$

$$e_i^{(k+1)} = -\sum_{j=1}^{i-1} \frac{a_{i,j}}{a_{i,i}}\, e_j^{(k+1)} - \sum_{j=i+1}^{n} \frac{a_{i,j}}{a_{i,i}}\, e_j^{(k)} \qquad (5)$$

Introduce

$$\alpha_i = \sum_{j=1}^{i-1} \left| \frac{a_{i,j}}{a_{i,i}} \right|, \qquad \beta_i = \sum_{j=i+1}^{n} \left| \frac{a_{i,j}}{a_{i,i}} \right|, \qquad 1 \le i \le n$$

Taking bounds in (5),

$$\left| e_i^{(k+1)} \right| \le \alpha_i \left\| e^{(k+1)} \right\| + \beta_i \left\| e^{(k)} \right\|, \qquad i = 1, \ldots, n \qquad (6)$$

Let $\nu$ be an index for which

$$\left\| e^{(k+1)} \right\| = \max_{1 \le i \le n} \left| e_i^{(k+1)} \right| = \left| e_\nu^{(k+1)} \right|$$

Then using $i = \nu$ in (6),

$$\left\| e^{(k+1)} \right\| \le \alpha_\nu \left\| e^{(k+1)} \right\| + \beta_\nu \left\| e^{(k)} \right\|$$

$$\left\| e^{(k+1)} \right\| \le \frac{\beta_\nu}{1 - \alpha_\nu} \left\| e^{(k)} \right\|$$

Define

$$\eta = \max_{1 \le i \le n} \frac{\beta_i}{1 - \alpha_i}$$

Then

$$\left\| e^{(k+1)} \right\| \le \eta \left\| e^{(k)} \right\| \qquad (7)$$

Recall from (4) that for the Jacobi method,

$$\|M\|_\infty = \max_{1 \le i \le n} \sum_{\substack{j=1 \\ j \neq i}}^{n} \left| \frac{a_{i,j}}{a_{i,i}} \right| = \max_{1 \le i \le n} \left( \alpha_i + \beta_i \right) < 1$$

and $\eta \le \|M\|_\infty$. Consequently, for $A$ diagonally dominant, the Gauss-Seidel method also converges, and it does so more rapidly than the Jacobi method in most cases. Showing $\eta \le \|M\|_\infty$ follows by showing

$$\frac{\beta_i}{1 - \alpha_i} - \left( \alpha_i + \beta_i \right) \le 0, \qquad 1 \le i \le n$$

which holds because $\alpha_i \left( 1 - \alpha_i - \beta_i \right) \ge 0$.

For our earlier example with $A$ of order 3, we have $\eta = 0.375$. This is not as good as computing $\|M\|_\infty$ directly for the Gauss-Seidel method, but it does show that the rate of convergence is better than for the Jacobi method.
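The bound $\eta$ can be computed directly for this $A$. A short sketch assuming NumPy; the names `alpha`, `beta`, `eta` mirror $\alpha_i$, $\beta_i$, $\eta$ above:

```python
import numpy as np

A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])

absA = np.abs(A)
d = np.abs(np.diag(A))
n = A.shape[0]

# alpha_i: strictly lower row sums / |a_ii|; beta_i: strictly upper row sums / |a_ii|.
alpha = np.array([absA[i, :i].sum() / d[i] for i in range(n)])
beta  = np.array([absA[i, i+1:].sum() / d[i] for i in range(n)])

eta = np.max(beta / (1.0 - alpha))   # 0.375 for this example
```

Here $\beta_2 / (1 - \alpha_2) = 0.3 / 0.8 = 0.375$ is the maximizing term.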
CONVERGENCE: AN ADDENDUM

Since

$$\|M\| = \left\| N^{-1}P \right\| \le \left\| N^{-1} \right\| \|P\|,$$

the condition $\|M\| < 1$ is satisfied if $N$ satisfies

$$\left\| N^{-1} \right\| \|P\| < 1, \qquad \text{i.e.,} \qquad \|A - N\| < \frac{1}{\left\| N^{-1} \right\|}$$

since $P = N - A$. We also want to choose $N$ so that systems $Nz = f$ are easily solvable.

A more complete result: the iteration

$$Nx^{(k+1)} = b + Px^{(k)}, \qquad k = 0, 1, 2, \ldots$$

will converge, for all right sides $b$ and all initial guesses $x^{(0)}$, if and only if all eigenvalues $\lambda$ of $M = N^{-1}P$ satisfy

$$|\lambda| < 1$$

This is the basis of deriving other splittings $A = N - P$ that lead to convergent iteration methods.
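The eigenvalue criterion is easy to test numerically. A sketch for the Jacobi splitting of our example matrix, assuming NumPy; the spectral radius `rho` is the quantity that must lie below 1:

```python
import numpy as np

A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])

N = np.diag(np.diag(A))         # Jacobi splitting
M = np.linalg.solve(N, N - A)   # M = N^{-1} P with P = N - A

# The iteration converges for every b and x^(0) iff all |lambda| < 1.
rho = np.max(np.abs(np.linalg.eigvals(M)))
```

Since $\rho(M) \le \|M\|_\infty = 7/11$ here, convergence is guaranteed; the spectral radius, not the norm, gives the true asymptotic rate.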
RESIDUAL CORRECTION

For $k = 0, 1, \ldots$, define

$$r^{(k)} = b - Ax^{(k)}$$

$$Ne^{(k)} = r^{(k)}$$

$$x^{(k+1)} = x^{(k)} + e^{(k)}$$

This is the general residual correction method. To see how this fits into our earlier framework, proceed as follows:

$$x^{(k+1)} = x^{(k)} + e^{(k)} = x^{(k)} + N^{-1}r^{(k)} = x^{(k)} + N^{-1}\left( b - Ax^{(k)} \right)$$

Thus,

$$Nx^{(k+1)} = Nx^{(k)} + b - Ax^{(k)} = b + (N - A)x^{(k)} = b + Px^{(k)}$$
Sometimes the residual correction scheme is a preferable way of approaching the development of an iterative method.
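The residual correction loop can be sketched as follows, assuming NumPy and the same example data; $N$ is taken here as the Jacobi choice, but any nonsingular, easily solvable $N$ fits the schema:

```python
import numpy as np

A = np.array([[9.0,  1.0,  1.0],
              [2.0, 10.0,  3.0],
              [3.0,  4.0, 11.0]])
b = np.array([10.0, 19.0, 0.0])

N = np.diag(np.diag(A))          # easily solvable approximation of A
x = np.zeros(3)
for _ in range(60):
    r = b - A @ x                # residual r^(k) = b - A x^(k)
    e = np.linalg.solve(N, r)    # approximate error from N e^(k) = r^(k)
    x = x + e                    # x^(k+1) = x^(k) + e^(k)
```

With $N = \operatorname{diag}(A)$ this reproduces the Jacobi iterates exactly, confirming the equivalence derived above.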