
AMTH247

Practical 3
Linear Equations
Doing Assignment 3
March 27, 2006

In this practical we will implement the algorithms we have discussed in the


lectures as Scilab functions. All the Scilab functions defined in this practical
are in the file lineqn.sci in the directory for this practical.

Contents

1 LU Factorization
  1.1 Elimination Matrices
  1.2 LU Factorization

2 Pivoting
  2.1 Permutation Matrices
  2.2 Partial Pivoting
      2.2.1 Selecting the Pivot
      2.2.2 LU Factorization with Partial Pivoting
  2.3 Complete Pivoting

3 Solving Linear Equations
  3.1 Forward and Back Substitution
  3.2 Solving Linear Systems
  3.3 An Example

4 Cholesky Factorization

5 Hints on Assignment 3

1 LU Factorization

We will start with LU factorization without pivoting. Recall that this
algorithm constructs a series of elimination matrices M_1, M_2, ..., M_{n-1}
such that the LU factorization is

    A = LU

with

    U = M_{n-1} ... M_2 M_1 A

and

    L = N_1 N_2 ... N_{n-1}

where the N_j are the inverses of the elimination matrices M_j.

1.1 Elimination Matrices

First we will write a function to compute the elimination matrices. Recall
that when eliminating in the kth column of a matrix, only the elements on or
below the diagonal are of interest:

    [ ...      :         ... ]
    [ ...   a_{k,k}      ... ]
    [ ...   a_{k+1,k}    ... ]
    [ ...      :         ... ]
    [ ...   a_{n,k}      ... ]

and the corresponding elimination matrix is

          [ 1  ...   0                    0  ...  0 ]
          [ :        :                    :       : ]
          [ 0  ...   1                    0  ...  0 ]
    M_k = [ 0  ...  -a_{k+1,k}/a_{k,k}    1  ...  0 ]
          [ :        :                    :       : ]
          [ 0  ...  -a_{n,k}/a_{k,k}      0  ...  1 ]

with inverse

          [ 1  ...   0                    0  ...  0 ]
          [ :        :                    :       : ]
          [ 0  ...   1                    0  ...  0 ]
    N_k = [ 0  ...   a_{k+1,k}/a_{k,k}    1  ...  0 ]
          [ :        :                    :       : ]
          [ 0  ...   a_{n,k}/a_{k,k}      0  ...  1 ]

The following function elimmat(a,k) computes the elimination matrix m
and its inverse matrix n which eliminate column k of matrix a.

function [m,n] = elimmat (a, k)
  [nr,nc] = size(a)     // get the size of the matrix
  if (nr ~= nc)         // and check it is square
    error("matrix is not square")
  end
  m = eye(nr,nr)
  n = eye(nr,nr)
  for i = k+1:nr
    m(i,k) = -a(i,k)/a(k,k)
    n(i,k) = a(i,k)/a(k,k)
  end
endfunction

Example:
We will use a random matrix with normally distributed entries.

-->a = rand(5, 5, 'normal')
a =

! - 0.7616491    0.4529708    0.2546697    0.3240162  - 0.9239258 !
!   0.6755537    0.7223316  - 1.5417209  - 0.1884803  - 1.3874078 !
!   1.4739762    1.9273333  - 0.6834217    0.4241610    2.7266682 !
!   1.1443051    0.6380837  - 0.7209534  - 1.0327357  - 0.7123134 !
!   0.8529775  - 0.8498895    0.8145126  - 0.6779672  - 1.7086774 !

Here is the matrix, and its inverse, to eliminate below the diagonal of the
second column:

-->[m,n] = elimmat(a, 2)
n =

!   1.    0.           0.    0.    0. !
!   0.    1.           0.    0.    0. !
!   0.    2.6682113    1.    0.    0. !
!   0.    0.8833668    0.    1.    0. !
!   0.  - 1.1765919    0.    0.    1. !
m =

!   1.    0.           0.    0.    0. !
!   0.    1.           0.    0.    0. !
!   0.  - 2.6682113    1.    0.    0. !
!   0.  - 0.8833668    0.    1.    0. !
!   0.    1.1765919    0.    0.    1. !

Checking that it works correctly:

-->m*a
ans =

! - 0.7616491    0.4529708    0.2546697    0.3240162  - 0.9239258 !
!   0.6755537    0.7223316  - 1.5417209  - 0.1884803  - 1.3874078 !
! - 0.3285439    0.           3.4302152    0.9270663    6.4285652 !
!   0.5475434    0.           0.6409516  - 0.8662385    0.5132765 !
!   1.6478286    0.         - 0.9994637  - 0.8997316  - 3.3410901 !

-->m*n
ans =

!   1.    0.    0.    0.    0. !
!   0.    1.    0.    0.    0. !
!   0.    0.    1.    0.    0. !
!   0.    0.    0.    1.    0. !
!   0.    0.    0.    0.    1. !

1.2 LU Factorization

We want to compute the LU factorization

    U = M_{n-1} ... M_2 M_1 A
    L = N_1 N_2 ... N_{n-1}

Our algorithm for computing L and U is

1. Start with

       L_0 = I        U_0 = A.

2. For k = 1, ..., n-1 do

   (a) Compute the elimination matrix M_k and its inverse N_k which
       eliminate the kth column of U_{k-1}.

   (b) Update

       L_k = L_{k-1} N_k        U_k = M_k U_{k-1}.

3. We finish with the LU factorization

       L = L_{n-1}        U = U_{n-1}.

One interesting feature of our algorithm is that

    L_k U_k = A

is true at each step. This is easy to prove by induction, since
L_k U_k = L_{k-1} N_k M_k U_{k-1} = L_{k-1} U_{k-1} and L_0 U_0 = A, and it
makes the correctness of the algorithm easy to establish.
Here is the algorithm in Scilab:

function [l,u] = lufactor (a)
  [nr,nc] = size(a)
  if (nr ~= nc)
    error("matrix is not square")
  end
  l = eye(nr,nr)
  u = a
  for k = 1:nr-1
    [m,n] = elimmat(u,k)
    l = l*n
    u = m*u
  end
endfunction
Example:

-->a = rand(5, 5, 'normal')
a =

!   0.9520307  - 0.1728369  - 0.5637165  - 0.7004486  - 0.3109791 !
!   0.0698768    0.7629083  - 0.3888655    0.3353388  - 0.2330131 !
! - 2.8759126  - 0.6019869  - 0.6594738  - 0.8262233    0.9741226 !
! - 1.3772844  - 0.0239455    0.6543045    0.4694334  - 0.2343923 !
!   0.7915156  - 1.5619521  - 0.6773066    1.1323911  - 0.1821937 !

-->[l,u] = lufactor(a)
u =

!   0.9520307  - 0.1728369  - 0.5637165  - 0.7004486  - 0.3109791 !
!   0.           0.7755941  - 0.3474901    0.3867501  - 0.2101880 !
!   0.           0.         - 2.865989   - 2.3816214  - 0.2699219 !
!   0.           0.           0.         - 0.1712941  - 0.7317862 !
!   0.           0.           0.           0.         - 13.571831 !
l =

!   1.           0.           0.           0.           0. !
!   0.0733977    1.           0.           0.           0. !
! - 3.0208192  - 1.4493355    1.           0.           0. !
! - 1.4466806  - 0.3532587    0.0990817    1.           0. !
!   0.8313971  - 1.8286059    0.2945080  - 18.233917    1. !

-->l*u - a
ans =

!   0.    0.             0.            0.             0.        !
!   0.    0.             0.            0.             0.        !
!   0.    0.             0.            0.             0.        !
!   0.    0.             1.388E-17     0.             2.776E-17 !
!   0.  - 1.110E-16    - 1.110E-16   - 4.441E-16    - 8.882E-16 !

Because of rounding errors we don't have an exact LU factorization.
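To reduce the check to a single number we can take a matrix norm of the
discrepancy, along these lines (the exact value will vary with the random
matrix):

err = norm(l*u - a)   // typically a small multiple of %eps, Scilab's machine epsilon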

2 Pivoting

2.1 Permutation Matrices

Here is a Scilab function to compute the n × n permutation matrix which
swaps rows, or equivalently columns, i1 and i2.

function p = perm (n, i1, i2)
  p = eye(n,n);
  p(i1,i1) = 0; p(i1,i2) = 1;
  p(i2,i2) = 0; p(i2,i1) = 1;
endfunction
Example:

-->perm(6,2,4)
ans =

!   1.    0.    0.    0.    0.    0. !
!   0.    0.    0.    1.    0.    0. !
!   0.    0.    1.    0.    0.    0. !
!   0.    1.    0.    0.    0.    0. !
!   0.    0.    0.    0.    1.    0. !
!   0.    0.    0.    0.    0.    1. !

2.2 Partial Pivoting

2.2.1 Selecting the Pivot

This is easy enough to program as a simple loop, running down the column and
comparing each element to the largest found so far; a sketch of such a loop
is given below. We will use the Scilab function maxi. In general

    [m,k] = maxi(a)

for a matrix a returns its maximum element m and its position k. The position
is a pair of numbers for a general matrix, but only a single index for a
vector.
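For concreteness, here is a minimal sketch of that explicit loop, written as
a hypothetical helper (it is not part of lineqn.sci) which returns the
largest magnitude in a column vector v and its index:

function [m,k] = pivotsearch (v)
  m = abs(v(1))            // largest magnitude found so far
  k = 1                    // and its position
  for i = 2:length(v)
    if (abs(v(i)) > m)
      m = abs(v(i))
      k = i
    end
  end
endfunction

Note that pivotsearch takes absolute values, as pivot selection requires,
while the example below applies maxi directly to the raw entries.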
Example:
This example finds the largest element on or below the diagonal in the second
column of a matrix. First we use a(2:5,2) to form the vector of elements on
or below the diagonal in the second column, and then use maxi to find the
maximum element. Note that the position returned is the position in the
subvector, not the position in the whole column or the larger matrix.
-->a=rand(5,5)
a =

!   0.4094825    0.5896177    0.3873779    0.7340941    0.5376230 !
!   0.8784126    0.6853980    0.9222899    0.2615761    0.1199926 !
!   0.1138360    0.8906225    0.9488184    0.4993494    0.2256303 !
!   0.1998338    0.5042213    0.3435337    0.2638578    0.6274093 !
!   0.5618661    0.3493615    0.3760119    0.5253563    0.7608433 !

-->[m,k] = maxi(a(2:5,2))
k =
    2.
m =
    0.8906225

Note that here a(2:5,2) is

!   0.6853980 !
!   0.8906225 !
!   0.5042213 !
!   0.3493615 !

and the pivot row is row 3 of the original matrix.


2.2.2 LU Factorization with Partial Pivoting

Here we compute an LU factorization

    LU = PA

In terms of permutation matrices P_k, elimination matrices M_k, and their
inverses N_k,

    L = P_{n-1} ... P_2 P_1 P_1 N_1 P_2 N_2 ... P_{n-1} N_{n-1}
    U = M_{n-1} P_{n-1} ... M_2 P_2 M_1 P_1 A
    P = P_{n-1} ... P_2 P_1

Our algorithm for computing L, U and P is similar to the algorithm without
pivoting, except at each step we need to compute the permutation to switch
the rows required by pivoting and update our matrices accordingly.
Here is the algorithm in Scilab:

function [l,u,p] = luppivot (a)
  [nr,nc] = size(a)
  if (nr ~= nc)
    error("matrix is not square")
  end
  l = eye(nr,nr)
  u = a
  p = eye(nr,nr)
  for k = 1:nr-1
    [m,kk] = maxi(abs(u(k:nr,k)))   // find maximum element
    kk = kk+k-1                     // adjust to position in matrix
    if (m == 0)
      error ("singular matrix")
    end
    pp = perm(nr,k,kk)              // compute permutation matrix
    l = pp*l*pp                     // and update
    u = pp*u
    p = pp*p
    [m,n] = elimmat(u,k)            // compute elimination matrix
    l = l*n                         // and update
    u = m*u
  end
endfunction

Example:

-->a = rand(5, 5, 'normal')
a =

! - 0.5005553    0.0116391    1.8651793    0.0259119    0.7042731 !
! - 0.4575284    0.2232736    0.1645912    0.2720405  - 0.9063738 !
!   0.5623151  - 1.4344474  - 1.035891     0.7953703    0.2634747 !
! - 0.6453261    1.7363821    0.9182207  - 1.681167     1.2296215 !
! - 0.3647833    1.748736   - 0.9355485  - 1.3770621  - 1.1579023 !

-->[l,u,p] = luppivot(a)
p =

!   0.    0.    0.    1.    0. !
!   1.    0.    0.    0.    0. !
!   0.    1.    0.    0.    0. !
!   0.    0.    1.    0.    0. !
!   0.    0.    0.    0.    1. !
u =

! - 0.6453261    1.7363821    0.9182207  - 1.681167     1.2296215 !
!   0.         - 1.3352074    1.1529499    1.3299301  - 0.2494982 !
!   0.           0.         - 1.3566504    0.4601505  - 1.589842  !
!   0.           0.           0.         - 0.6482352    1.5170419 !
!   0.           0.           0.           0.         - 0.9071508 !
l =

!   1.           0.           0.           0.           0. !
!   0.7756625    1.           0.           0.           0. !
!   0.7089879    0.7547893    1.           0.           0. !
! - 0.8713658  - 0.0588497    0.1237859    1.           0. !
!   0.5652698  - 0.5746011    0.5838679  - 0.1060772    1. !

-->l*u - p*a
ans =

!   0.    0.           0.            0.            0.        !
!   0.  - 9.368E-17    0.          - 6.245E-17     0.        !
!   0.  - 5.551E-17    5.551E-17     1.110E-16     0.        !
!   0.    0.           0.            2.220E-16     5.551E-17 !
!   0.    0.         - 1.110E-16     0.            2.220E-16 !

2.3 Complete Pivoting

Here we compute an LU factorization

    LU = PAQ

The algorithm is similar to that for partial pivoting, except we have to
take into account the permutation matrices acting on the right to perform
column interchanges.
function [l,u,p,q] = lucpivot (a)
  [nr,nc] = size(a)
  if (nr ~= nc)
    error("matrix is not square")
  end
  l = eye(nr,nr)
  u = a
  p = eye(nr,nr)
  q = eye(nr,nr)
  for k = 1:nr-1
    [m,kk] = maxi(abs(u(k:nr,k:nr)))  // largest element of the submatrix
    k1 = kk(1)+k-1                    // its row in the full matrix
    k2 = kk(2)+k-1                    // its column in the full matrix
    if (m == 0)
      error ("singular matrix")
    end
    pp = perm(nr,k,k1)                // row interchange
    qq = perm(nr,k,k2)                // column interchange
    l = pp*l*pp
    u = pp*u*qq
    p = pp*p
    q = q*qq
    [m,n] = elimmat(u,k)
    l = l*n
    u = m*u
  end
endfunction
Example:

-->a = rand(5, 5, 'normal')
a =

! - 0.4577385  - 2.7290776    1.8655072  - 0.1043591    0.5163254 !
!   0.0168437  - 0.2563031    0.1910551    0.2973099    0.0075659 !
! - 0.5875092  - 0.5003796    1.3189198    0.5308515    1.0422456 !
! - 1.4029475    1.1937458    0.9307226  - 1.5404673    2.6705108 !
!   0.2301981  - 1.5206395    0.8575199  - 0.3966362    2.4976094 !

-->[l,u,p,q] = lucpivot(a)
q =

!   0.    0.    0.    1.    0. !
!   1.    0.    0.    0.    0. !
!   0.    0.    1.    0.    0. !
!   0.    0.    0.    0.    1. !
!   0.    1.    0.    0.    0. !
p =

!   1.    0.    0.    0.    0. !
!   0.    0.    0.    1.    0. !
!   0.    0.    0.    0.    1. !
!   0.    0.    1.    0.    0. !
!   0.    1.    0.    0.    0. !
u =

! - 2.7290776    0.5163254    1.8655072  - 0.4577385  - 0.1043591 !
!   0.           2.8963605    1.7467278  - 1.6031702  - 1.5861158 !
!   0.           0.         - 3.2297262    1.7084631    0.8717137 !
!   0.           0.           0.         - 1.1599967    0.4663628 !
!   0.           0.           0.           0.           0.3192085 !
l =

!   1.           0.           0.           0.           0. !
! - 0.4374173    1.           0.           0.           0. !
!   0.5571991    0.7629967    1.           0.           0. !
!   0.1833512    0.3271612    0.6912116    1.           0. !
!   0.0939157  - 0.0141299  - 0.0125508  - 0.0505368    1. !

-->l*u - p*a*q
ans =

!   0.    0.    0.             0.             0.        !
!   0.    0.    0.             0.           - 1.110E-16 !
!   0.    0.    0.             0.             0.        !
!   0.    0.  - 2.220E-16    - 1.665E-16      1.041E-17 !
!   0.    0.  - 2.776E-17    - 1.665E-16      5.551E-17 !

3 Solving Linear Equations

Given an LU factorization

    A = LU

to solve the linear system

    Ax = b

we proceed in two steps:

1. Solve Lz = b.
2. Solve Ux = z.

These triangular systems are solved by forward and back substitution.

3.1 Forward and Back Substitution

Algorithms for forward and back substitution for triangular systems are
given as Algorithms 2.1 and 2.2 in Heath and are easy to transcribe into
Scilab:

function x = forsub (l, b)
  [nr,nc] = size(l)
  if (nr ~= nc)
    error ("matrix is not square")
  end
  x = zeros(nr,1)
  for j = 1:nr
    x(j) = b(j)/l(j,j)
    for i = (j+1):nr
      b(i) = b(i) - l(i,j)*x(j)
    end
  end
endfunction

function x = backsub (u, b)
  [nr,nc] = size(u)
  if (nr ~= nc)
    error ("matrix is not square")
  end
  x = zeros(nr,1)
  for j = nr:-1:1
    x(j) = b(j)/u(j,j)
    for i = 1:(j-1)
      b(i) = b(i) - u(i,j)*x(j)
    end
  end
endfunction
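As a quick check of forsub and backsub, here is a small hand-made example
(the matrices and right-hand sides are our own, chosen so the answers are
easy to verify by hand):

l = [2 0 0; 1 3 0; 4 5 6];
z = forsub(l, [2; 7; 32])      // expect z = [1; 2; 3]
u = [2 1 4; 0 3 5; 0 0 6];
x = backsub(u, [16; 21; 18])   // expect x = [1; 2; 3]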

3.2 Solving Linear Systems

We will write functions to solve a linear system using (a) no pivoting, (b)
partial pivoting, and (c) complete pivoting. These follow the steps detailed
in the lecture notes:
// linear system, no pivoting
function x = lusolve1 (a, b)
  [l,u] = lufactor(a)
  z = forsub(l, b)
  x = backsub(u, z)
endfunction

// linear system, partial pivoting
function x = lusolve2 (a, b)
  [l,u,p] = luppivot(a)
  bb = p*b
  z = forsub(l, bb)
  x = backsub(u, z)
endfunction

// linear system, complete pivoting
function x = lusolve3 (a, b)
  [l,u,p,q] = lucpivot(a)
  bb = p*b
  z = forsub(l, bb)
  y = backsub(u, z)
  x = q*y
endfunction

3.3 An Example

We will use tridiagonal systems of the form

    Ax = b

with

        [ 6  1             ]        [  7 ]
        [ 8  6  1          ]        [ 15 ]
    A = [    8  6  .       ]    b = [  : ]
        [       .  .  1    ]        [ 15 ]
        [          8  6    ]        [ 14 ]

which have the exact solution x = (1, 1, ..., 1)^T.
First we will write a Scilab function to produce these matrices:

function [a,b] = testmat(n)
  a = zeros(n,n)
  for i = 1:n
    a(i,i) = 6
  end
  for i = 1:n-1
    a(i,i+1) = 1
    a(i+1,i) = 8
  end
  b = a*ones(n,1)
endfunction
-->[a,b] = testmat(5)
b =

!    7. !
!   15. !
!   15. !
!   15. !
!   14. !
a =

!   6.    1.    0.    0.    0. !
!   8.    6.    1.    0.    0. !
!   0.    8.    6.    1.    0. !
!   0.    0.    8.    6.    1. !
!   0.    0.    0.    8.    6. !

Let us test our linear equation solvers for a 20 × 20 matrix:

-->[a,b] = testmat(20);
-->x1 = lusolve1(a,b);
-->x2 = lusolve2(a,b);
-->x3 = lusolve3(a,b);
-->err1 = norm(x1 - ones(20,1))
err1 =
    3.935E-11
-->err2 = norm(x2 - ones(20,1))
err2 =
    0.
-->err3 = norm(x3 - ones(20,1))
err3 =
    0.

We see that while solving without pivoting gave a moderate size error, using
partial or complete pivoting gave the exact answer. Now try a 60 × 60 matrix:

-->[a,b] = testmat(60);
-->x1 = lusolve1(a,b);
-->x2 = lusolve2(a,b);
-->x3 = lusolve3(a,b);
-->err1 = norm(x1 - ones(60,1))
err1 =
    43.269257
-->err2 = norm(x2 - ones(60,1))
err2 =
    2.254E-13
-->err3 = norm(x3 - ones(60,1))
err3 =
    2.254E-13
In this case, pivoting was essential to get a reasonable answer.
The fact that partial and complete pivoting give the same error suggests
that they are actually performing the same calculation. This is indeed the
case, since we can check that the matrix Q produced by complete pivoting,
which performs the column interchanges, is the identity matrix.
-->[a,b] = testmat(10);
-->[l,u,p,q] = lucpivot(a);
-->q
q =

!   1.    0.    0.    0.    0.    0.    0.    0.    0.    0. !
!   0.    1.    0.    0.    0.    0.    0.    0.    0.    0. !
!   0.    0.    1.    0.    0.    0.    0.    0.    0.    0. !
!   0.    0.    0.    1.    0.    0.    0.    0.    0.    0. !
!   0.    0.    0.    0.    1.    0.    0.    0.    0.    0. !
!   0.    0.    0.    0.    0.    1.    0.    0.    0.    0. !
!   0.    0.    0.    0.    0.    0.    1.    0.    0.    0. !
!   0.    0.    0.    0.    0.    0.    0.    1.    0.    0. !
!   0.    0.    0.    0.    0.    0.    0.    0.    1.    0. !
!   0.    0.    0.    0.    0.    0.    0.    0.    0.    1. !

4 Cholesky Factorization

The algorithm for Cholesky factorization given in Lecture 10 can easily be
turned into a Scilab function:

function c = cholesky (a)
  [n,nc] = size(a)
  if (n ~= nc)
    error("matrix is not square")
  end
  c = a;
  for i = 1:n-1
    // zero the upper triangle
    for j = i+1:n
      c(i,j) = 0
    end
  end
  for k = 1:n
    // the rest is Heath, Algorithm 2.7
    c(k,k) = sqrt(c(k,k))
    for i = k+1:n
      c(i,k) = c(i,k)/c(k,k)
    end
    for j = k+1:n
      for i = j:n
        c(i,j) = c(i,j) - c(i,k)*c(j,k)
      end
    end
  end
endfunction
To try out our function we need a symmetric positive definite matrix. We can
use the fact that for any non-singular matrix A, the matrix AA^T is symmetric
and positive definite.

-->a = rand(5, 5, 'normal')
a =

! - 1.2875914  - 0.7218988    0.3759656  - 1.6280467  - 1.7970058 !
!   0.6450695  - 2.3544971  - 1.3667445  - 0.4386207    0.7604477 !
!   0.6696589  - 0.5485232  - 0.0346505    0.7757721    1.0562604 !
! - 0.4483985    0.0207588  - 1.3850463  - 0.6024774  - 0.6588100 !
! - 1.5316782    1.1316247    0.3828793  - 1.1049895    1.2431698 !

-->a = a*a'
a =

!   8.2001453  - 0.2971600  - 3.6403952    2.2063853    0.8641976 !
! - 0.2971600    8.5984307    2.2337923    1.3181482  - 2.7457074 !
! - 3.6403952    2.2337923    2.4680297  - 1.4269283  - 1.2038004 !
!   2.2063853    1.3181482  - 1.4269283    2.916855     0.0267063 !
!   0.8641976  - 2.7457074  - 1.2038004    0.0267063    6.5396823 !

-->c = cholesky(a)
c =

!   2.8635896    0.           0.           0.           0.        !
! - 0.1037719    2.9304713    0.           0.           0.        !
! - 1.2712699    0.7172465    0.5809131    0.           0.        !
!   0.7704963    0.4770919  - 1.35926      0.4979819    0.        !
!   0.3017882  - 0.9262641  - 0.2681751  - 0.2578950    2.3349975 !

-->c*c' - a
ans =

1.0E-14 *

! - 0.1776357    0.           0.    0.           0.        !
!   0.         - 0.1776357    0.    0.           0.        !
!   0.           0.           0.    0.           0.        !
!   0.           0.           0.    0.0444089    0.        !
!   0.           0.           0.    0.           0.0888178 !
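Once we have the Cholesky factor, solving a symmetric positive definite
system Ax = b reduces to two triangular solves, just as in Section 3, since
A = CC^T. Here is a minimal sketch (cholsolve is our own name, not a
function from lineqn.sci), reusing forsub and backsub:

function x = cholsolve (a, b)
  c = cholesky(a)       // a = c*c'
  z = forsub(c, b)      // solve c z = b
  x = backsub(c', z)    // solve c' x = z
endfunction

For the matrix above, cholsolve(a, a*ones(5,1)) should return a vector close
to ones(5,1).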

5 Hints on Assignment 3

Question 2
This question is trickier than it looks. Note that the question asks for when
a computed quantity is zero; to answer this you need to look at each step of
the computation, not just the final result. Your answers to parts (b) and (d)
should be expressed in terms of the machine epsilon ε_mach.
You can use Scilab to check your answers.
Question 3
Part (a) We are only interested in the U part of the LU factorization. It is
fairly easy to go through the elimination process step by step, selecting
pivot elements by hand and, if you like, using elimmat and perm to perform
the calculations. There is a clear pattern in the computation.
Part (b) This is an experimental question. The first thing you need to do is
write a function to produce matrices of the required form and any size (see
testmat in this Practical for a similar example). Make sure you try fairly
large matrices, e.g. at least 100 × 100.
We saw how to generate right-hand sides so that the solution is known in
Lecture 8, and this is used in the example below. In this problem you can
take a random vector or a vector of ones, or preferably both, as the known
solution.
One way to present your results is as graphs of condition number, relative
error and residual as functions of n, the size of the system. For each n
there are a number of computations to perform, so it is a good idea to write
a script file or function to automate the process (a sketch follows the
example below). The steps for each n are:
1. Generate the matrix and right-hand side.
2. Compute the solution.
3. Compute the condition number, error and residual.
To compare the size of the residuals it is helpful to fix some scale on
which to measure them. Note that if, for example, we double the right-hand
side of a system of equations, then the solution and residuals will also be
doubled. I suggest taking the norm of the residual divided by the norm of
the right-hand side as a measure of the size of the residual.
Here is an example, assuming testmat3 is your function to produce matrices
for the question:

-->n=100
n =
    100.
-->a = testmat3(n);
-->x = ones(n,1);
-->b = a*x;
-->xx = a\b;
-->cond(a)
ans =
    44.802251
-->err = norm(xx-x)/norm(x)
err =
    0.678233
-->res = norm(a*xx-b)/norm(b)
res =
    0.3191424
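Here is a sketch of how the whole experiment might be automated (again
assuming testmat3 is your function; the sizes are only an illustration):

nvals = [10 25 50 100 200];            // illustrative sizes
conds = []; errs = []; resids = [];
for n = nvals
  a = testmat3(n);
  x = ones(n,1);                       // known solution
  b = a*x;
  xx = a\b;
  conds  = [conds,  cond(a)];
  errs   = [errs,   norm(xx-x)/norm(x)];
  resids = [resids, norm(a*xx-b)/norm(b)];
end
plot2d(nvals', errs')                  // similarly for conds and resids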

Question 4
This is another experimental question. The functions lusolve1, lusolve2 and
lusolve3 are the linear equation solvers to use.
What we are really after in this question is a general idea of how the error
and residual vary with the size of the system and the pivoting strategy used.
I suggest using (at least) 4 different sized systems, say 10, 50, 100 and
200, and for each size solving (at least) 5 different random systems. It is
better to tabulate your results for each matrix rather than taking averages.
We are looking for typical behaviour, and there may be substantial variation
in the quantities of interest for fixed size and pivoting strategy.
Some miscellaneous remarks:
1. It is preferable to use normal rather than uniform random matrices, since
   the former have variations in the sign and magnitude of components more
   typical of those met in practice.
2. Again it is a good idea to use a script file or function to automate your
   computations (a sketch is given after this list).
3. It is a good idea to include data on the condition number as well as the
   error and residuals.
4. Recall that an estimate for the relative error in solving a linear system
   is

       Error ≈ cond(A) ε_mach.

   Therefore it might be a good idea to take the relative error divided by
   the condition number times ε_mach as a measure of the error, since this
   takes into account that part of the error that can be attributed to the
   condition number of the linear system.
5. Recall the remarks on scaling residuals in the hints for Question 3.

6. Our algorithms were designed for clarity rather than efficiency. You will
notice that our programs can be rather slow for large matrices.
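By way of illustration, here is a minimal sketch of a helper for one trial
(trial is our own name, not part of lineqn.sci; %eps is Scilab's built-in
machine epsilon):

function trial (n)
  a = rand(n, n, 'normal');
  x = ones(n,1);                      // known solution
  b = a*x;
  for s = 1:3                         // one pass per pivoting strategy
    select s
    case 1 then xx = lusolve1(a,b);   // no pivoting
    case 2 then xx = lusolve2(a,b);   // partial pivoting
    case 3 then xx = lusolve3(a,b);   // complete pivoting
    end
    err = norm(xx-x)/norm(x)
    scaled = err/(cond(a)*%eps)       // scaled error from remark 4
    res = norm(a*xx-b)/norm(b)
  end
endfunction

Calling trial(n) several times for each n gives the entries of your table.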

