DISSERTATION SUBMITTED
by
SUSHREE SWAGATIKA
Dr. Anasuya Nath
UTKAL UNIVERSITY
APRIL 2017
DEPARTMENT OF MATHEMATICS
UTKAL UNIVERSITY
VANIVIHAR, BHUBANESWAR
PIN-751004.
CERTIFICATE
I, Dr. Anasuya Nath, hereby certify that the project entitled "Solution of system of linear equations" is a record of bona fide study carried out by Sushree Swagatika under my guidance and supervision, in partial fulfilment of the requirements for the Master's degree in Mathematics.
Dr. Anasuya Nath
Research Supervisor
Assistant Professor
Department of Mathematics
Utkal University, Vanivihar, Bhubaneswar.
DECLARATION
I, Sushree Swagatika, hereby declare that the project entitled "Solution of system of linear equations" is an original record of studies and bona fide work carried out by me under the guidance and supervision of Dr. Anasuya Nath, Assistant Professor, Department of Mathematics, Utkal University, Vanivihar, Bhubaneswar, and has not been submitted by me elsewhere for the award of any degree, diploma, title or recognition before.
This project became a reality with the kind support and help of many individuals. I would like to extend my sincere thanks to all of them.
Contents

1 Introduction
  1.1 Introduction to linear algebra
  1.2 Rank, nullity of a matrix
  1.3 Elementary matrices and elementary operations
  1.4 Introduction to linear equations
...
  3.4 Crout's Method
  3.5 Tridiagonal System
Chapter 1
Introduction
John von Neumann and Alan Turing, two world-famous pioneers of computer science, made significant contributions to the development of computational linear algebra. In 1947, von Neumann and Goldstine investigated the effect of rounding errors on the solution of linear equations. One year later, Turing [Tur48] initiated a method for factoring a matrix into a product of a lower triangular matrix and an echelon matrix (the factorization is known as the LU decomposition). At present, computational linear algebra is of broad interest. This is due to the fact that the field is now recognized as an absolutely essential tool in many branches of computer applications that require computations which are lengthy and difficult to get right when done by hand, for example in computer graphics, geometric modeling, and robotics.
of linear equations. There are two types of elementary matrix operations: row operations and column operations. Generally, row operations are more useful.
Let A be an m × n matrix. Any one of the following three operations on the rows [columns] of A is called an elementary row [column] operation:
1. interchanging any two rows [columns] of A;
2. multiplying any row [column] of A by a nonzero scalar;
3. adding any scalar multiple of a row [column] of A to another row [column].
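As an illustration (not from the text), the three row operations can be written as small routines on a matrix stored as a list of rows; the function names are our own:

```python
# Sketch of the three elementary row operations on a matrix stored as a
# list of rows. Function names are illustrative, not from the text.

def swap_rows(A, i, j):
    A[i], A[j] = A[j], A[i]           # operation 1: interchange rows i and j

def scale_row(A, i, c):
    assert c != 0                      # the scalar must be nonzero
    A[i] = [c * x for x in A[i]]       # operation 2: multiply row i by c

def add_multiple(A, i, j, c):
    # operation 3: add c times row j to row i
    A[i] = [x + c * y for x, y in zip(A[i], A[j])]

A = [[1, 2], [3, 4]]
swap_rows(A, 0, 1)         # A is now [[3, 4], [1, 2]]
scale_row(A, 0, 2)         # A is now [[6, 8], [1, 2]]
add_multiple(A, 1, 0, -1)  # A is now [[6, 8], [-5, -6]]
```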
Chapter 2
2.1 Introduction
A system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables. For example,
$$3x + 2y - z = 1$$
$$2x - 2y + 4z = -2$$
$$-x + \tfrac{1}{2}y - z = 0$$
is a system of three equations in the variables x, y, z. The assignment
$$x = 1,\quad y = -2,\quad z = -2$$
is a solution, since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually. A general system of m linear equations with n unknowns can be written as
$$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m$$
where $x_1, x_2, \dots, x_n$ are the unknowns, $a_{11}, a_{12}, \dots, a_{mn}$ are the coefficients of the system, and $b_1, b_2, \dots, b_m$ are the constant terms. A system of linear equations is homogeneous if all of the constant terms are zero:
$$Ax = 0$$
In matrix form the system can be written as $Ax = b$:
$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n}\\ a_{21} & a_{22} & a_{23} & \dots & a_{2n}\\ \vdots & & & & \vdots\\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix} = \begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix}$$
and the augmented matrix is given by
$$[A:b] = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} & b_1\\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} & b_2\\ \vdots & & & & & \vdots\\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} & b_n \end{bmatrix}$$
After reducing this matrix to its echelon form: if the augmented column becomes a free column, then the augmented column is spanned by the pivot columns, i.e., by columns of A, and b lies in the column space of A. If the augmented column becomes a pivot column, then it is linearly independent of the columns of A and cannot be spanned by them, so b does not lie in the column space of A and the system of linear equations is inconsistent. If b is a free column, then all pivot columns of [A:b] are columns of A, and the number of pivot columns in the augmented matrix [A:b] equals the number of pivot columns in A. Since the rank of a matrix equals its number of pivot columns, we can say that for the system to be consistent,
$$\mathrm{Rank}[A : b] = \mathrm{Rank}\,A$$
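The consistency test Rank[A:b] = Rank A can be sketched in code; the `rank` helper below is our own illustrative row-echelon routine, not part of the text, and the test case is the 3 × 3 example solved later in Section 2.4:

```python
# Sketch of the consistency test Rank[A:b] = Rank A via a plain
# row-echelon rank routine (helper names are our own).

def rank(M, tol=1e-9):
    M = [row[:] for row in M]          # work on a copy
    rows, cols = len(M), len(M[0])
    r = 0                              # number of pivot rows found so far
    for c in range(cols):
        # pick the largest entry in column c at or below row r as pivot
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue                   # free column: no pivot here
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):   # eliminate below the pivot
            f = M[i][c] / M[r][c]
            M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 1, 1], [1, 2, 3], [1, 4, 7]]
b = [6, 14, 30]
Ab = [row + [bi] for row, bi in zip(A, b)]
consistent = rank(Ab) == rank(A)       # True: b lies in the column space of A
```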
2.2 Substitution, Elimination and Graphing methods to solve systems of linear equations
Substitution is a method of solving a system of equations by removing all but one of the variables in one of the equations and then solving that equation. This is achieved by isolating one variable in one equation and then substituting the resulting expression into another equation. For example, to solve the system x + y = 4, 2x − 3y = 3, isolate the variable x in the first equation to get x = 4 − y, then substitute this expression for x into the second equation to get 2(4 − y) − 3y = 3. This equation simplifies to −5y = −5, or y = 1. Plug this value into the first equation to find the value of x: x + 1 = 4, or x = 3.
Another way of solving a linear system is the elimination method. In the elimination method we add or subtract the equations to get an equation in one variable: when the coefficients of one variable are opposites, we add the equations to eliminate that variable, and when the coefficients of one variable are equal, we subtract the equations to eliminate it.
EXAMPLE:
$$3y + 2x = 6$$
$$5y - 2x = 10$$
Adding the two equations,
$$(3y + 2x) + (5y - 2x) = 6 + 10$$
$$\Rightarrow 8y = 16$$
$$\Rightarrow y = 2$$
The value of y can now be substituted into either of the original equations to find the value of x:
$$3y + 2x = 6$$
$$3\cdot 2 + 2x = 6$$
$$6 + 2x = 6$$
$$x = 0$$
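As a minimal numeric sketch of the elimination step above (the tuple representation is our own):

```python
# The elimination example, done numerically: the x-coefficients 2 and -2
# are opposites, so adding the two equations eliminates x.

# 3y + 2x = 6  and  5y - 2x = 10, stored as (coeff_y, coeff_x, rhs)
eq1 = (3, 2, 6)
eq2 = (5, -2, 10)

summed = tuple(a + b for a, b in zip(eq1, eq2))  # (8, 0, 16), i.e. 8y = 16
y = summed[2] / summed[0]                        # y = 2
x = (eq1[2] - eq1[0] * y) / eq1[1]               # back-substitute into 3y + 2x = 6
```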
2.3 Cramer's Rule
For a nonsingular n × n system Ax = b we have $x = A^{-1}b$, and $A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A)$. Making this substitution we get
$$x = \frac{1}{\det(A)}\,\mathrm{adj}(A)\,b = \frac{1}{\det(A)} \begin{bmatrix} C_{11} & C_{21} & \dots & C_{n1}\\ C_{12} & C_{22} & \dots & C_{n2}\\ \vdots & & & \vdots\\ C_{1n} & C_{2n} & \dots & C_{nn} \end{bmatrix} \begin{bmatrix} b_1\\ b_2\\ \vdots\\ b_n \end{bmatrix} = \frac{1}{\det(A)} \begin{bmatrix} b_1C_{11} + b_2C_{21} + \dots + b_nC_{n1}\\ b_1C_{12} + b_2C_{22} + \dots + b_nC_{n2}\\ \vdots\\ b_1C_{1n} + b_2C_{2n} + \dots + b_nC_{nn} \end{bmatrix}$$
Now for any row j, it follows that
$$x_j = \frac{b_1C_{1j} + b_2C_{2j} + \dots + b_nC_{nj}}{\det(A)}$$
Now look at the matrix $A_j$, obtained from A by replacing its j-th column with b:
$$A_j = \begin{bmatrix} a_{11} & \dots & a_{1,j-1} & b_1 & a_{1,j+1} & \dots & a_{1n}\\ a_{21} & \dots & a_{2,j-1} & b_2 & a_{2,j+1} & \dots & a_{2n}\\ \vdots & & & \vdots & & & \vdots\\ a_{n1} & \dots & a_{n,j-1} & b_n & a_{n,j+1} & \dots & a_{nn} \end{bmatrix}$$
The matrix $A_j$ differs from A only in a single column (column j). Hence the cofactors of $b_1, b_2, \dots, b_n$ in $A_j$ are the same as the cofactors of $a_{1j}, a_{2j}, \dots, a_{nj}$ in A, and thus $\det(A_j) = b_1C_{1j} + b_2C_{2j} + \dots + b_nC_{nj}$. Substituting this back into the formula $x_j = \frac{b_1C_{1j} + \dots + b_nC_{nj}}{\det(A)}$, we get
$$x_j = \frac{\det(A_j)}{\det(A)}$$
This is Cramer's rule.
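Cramer's rule $x_j = \det(A_j)/\det(A)$ can be sketched for the 3 × 3 system from Section 2.1; the explicit cofactor-expansion determinant is our own helper:

```python
# Cramer's rule for a 3x3 system: x_j = det(A_j)/det(A), where A_j is A
# with column j replaced by b.

def det3(M):
    # cofactor expansion along the first row
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    dA = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = b[i]            # replace column j by b
        xs.append(det3(Aj) / dA)
    return xs

A = [[3, 2, -1], [2, -2, 4], [-1, 0.5, -1]]
b = [1, -2, 0]
x = cramer3(A, b)                      # [1.0, -2.0, -2.0]
```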
2.4 Solution of a system of linear equations using the augmented matrix
Let A be an m × n matrix. The set of all solutions of AX = 0 is called the null space of A and is denoted by N(A). Thus,
$$N(A) = \{X \in \mathbb{R}^n \mid AX = 0\}$$
The null space is closed under linear combinations.
PROOF: Suppose $X_1, X_2 \in N(A)$; then $AX_1 = AX_2 = 0$, and for any scalars $c_1, c_2$,
$$A(c_1X_1 + c_2X_2) = c_1AX_1 + c_2AX_2 = 0 + 0 = 0$$
So $c_1X_1 + c_2X_2 \in N(A)$.
EXAMPLE: Solve the system
$$x + y + z = 6$$
$$x + 2y + 3z = 14$$
$$x + 4y + 7z = 30$$
Solution: The system of linear equations can be put in the matrix form
$$\begin{bmatrix} 1 & 1 & 1\\ 1 & 2 & 3\\ 1 & 4 & 7 \end{bmatrix} \begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} 6\\ 14\\ 30 \end{bmatrix}$$
Let us form the augmented matrix and reduce it to echelon form using row transformations:
$$[A : b] = \left[\begin{array}{ccc|c} 1 & 1 & 1 & 6\\ 1 & 2 & 3 & 14\\ 1 & 4 & 7 & 30 \end{array}\right]$$
Operating $R_2 \to R_2 - R_1$, $R_3 \to R_3 - R_1$, and then $R_3 \to R_3 - 3R_2$, we have
$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 6\\ 0 & 1 & 2 & 8\\ 0 & 0 & 0 & 0 \end{array}\right]$$
Here we find that the first and second columns are pivot columns, while the third and augmented columns are free columns. Since the augmented column is a free column, it can be expressed as a linear combination of the pivot columns:
$$\begin{bmatrix} 1 & 1 & 1\\ 0 & 1 & 2\\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} 6\\ 8\\ 0 \end{bmatrix}$$
The linear combination of the pivot columns $(1, 0, 0)^T$ and $(1, 1, 0)^T$ that gives the augmented column yields a particular solution. Let us assign z = 0 to the free variable; the values of the pivot variables, chosen so that the augmented column is expressed as a linear combination of the pivot columns, give the particular solution
$$X_p = \begin{bmatrix} x\\ y\\ z \end{bmatrix} = \begin{bmatrix} -2\\ 8\\ 0 \end{bmatrix}$$
This means
$$(-2)\begin{bmatrix} 1\\ 0\\ 0 \end{bmatrix} + 8\begin{bmatrix} 1\\ 1\\ 0 \end{bmatrix} = \begin{bmatrix} 6\\ 8\\ 0 \end{bmatrix}$$
We can check that $X_p$ satisfies the original system of equations: $AX_p = b$.
Now let us find the null space. To find the null space, assign the free variable z = 1 and solve AX = 0: from the echelon form, $y + 2z = 0$ gives $y = -2$, and $x + y + z = 0$ gives $x = 1$. So every vector in the null space is a multiple
$$X_n = k \begin{bmatrix} 1\\ -2\\ 1 \end{bmatrix}$$
Since $AX_p = b$ and $AX_n = 0$, we have
$$A(X_n + X_p) = AX_n + AX_p = 0 + b = b$$
so adding any null-space vector to $X_p$ gives another solution. If the rank r of the matrix equals the number of columns, i.e., if
$$r = n$$
then the null space is zero and the system of equations has a unique solution,
$$X = X_p$$
If the rank r of the matrix is less than the number of columns in A, i.e., if
$$r < n$$
then the null space is nonzero and the system has infinitely many solutions of the form $X = X_p + X_n$.
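The complete solution of the worked example can be checked numerically. With z = 1 the echelon form gives y = −2 and x = 1, so $(1, -2, 1)^T$ spans the null space; the `matvec` helper is our own:

```python
# Checking the complete solution X = Xp + k*Xn of the worked example
# x+y+z=6, x+2y+3z=14, x+4y+7z=30.

A = [[1, 1, 1], [1, 2, 3], [1, 4, 7]]
b = [6, 14, 30]
Xp = [-2, 8, 0]        # particular solution (free variable z = 0)
Xn = [1, -2, 1]        # basis vector of the null space (z = 1)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

assert matvec(A, Xp) == b              # A Xp = b
assert matvec(A, Xn) == [0, 0, 0]      # A Xn = 0
k = 3.0
X = [p + k * n for p, n in zip(Xp, Xn)]
assert matvec(A, X) == b               # every Xp + k*Xn solves the system
```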
Chapter 3
We write the linear system as $A^{(1)}x = b^{(1)}$ with
$$A^{(1)} = [a_{ij}^{(1)}],\qquad b^{(1)} = [b_1^{(1)}, \dots, b_n^{(1)}]^T,\qquad 1 \le i, j \le n$$
in which n is the order of the system. We reduce the system to the triangular form $Ux = g$ by adding multiples of one equation to another equation, eliminating some unknowns from the equations below. Additional row operations are used in the modifications given in succeeding sections. To keep the presentation simple, we make some technical assumptions in defining the algorithm.
GAUSSIAN ELIMINATION ALGORITHM:
STEP 1: Assume $a_{11}^{(1)} \ne 0$. Define the row multipliers by
$$m_{i1} = \frac{a_{i1}^{(1)}}{a_{11}^{(1)}},\qquad i = 2, 3, \dots, n$$
These are used in eliminating the $x_1$ term. Define
$$a_{ij}^{(2)} = a_{ij}^{(1)} - m_{i1}a_{1j}^{(1)},\qquad i, j = 2, \dots, n$$
$$b_i^{(2)} = b_i^{(1)} - m_{i1}b_1^{(1)},\qquad i = 2, \dots, n$$
Also the first rows of A and b are left undisturbed, and the first column of $A^{(2)}$, below the diagonal, is set to zero. The system $A^{(2)}x = b^{(2)}$ looks like
$$\begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \dots & a_{1n}^{(1)}\\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & \dots & a_{2n}^{(2)}\\ \vdots & & & & \vdots\\ 0 & a_{n2}^{(2)} & a_{n3}^{(2)} & \dots & a_{nn}^{(2)} \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n \end{bmatrix} = \begin{bmatrix} b_1^{(1)}\\ b_2^{(2)}\\ \vdots\\ b_n^{(2)} \end{bmatrix}$$
We continue to eliminate unknowns, going on to columns 2, 3, ..., and this is expressed generally in the following.
STEP k: Let $1 \le k \le n-1$. Assume that $A^{(k)}x = b^{(k)}$ has been constructed, with $x_1, x_2, \dots, x_{k-1}$ eliminated at successive stages, and $A^{(k)}$ has the form
$$A^{(k)} = \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \dots & a_{1n}^{(1)}\\ 0 & a_{22}^{(2)} & a_{23}^{(2)} & \dots & a_{2n}^{(2)}\\ \vdots & & \ddots & & \vdots\\ 0 & \dots & 0 & a_{kk}^{(k)} \dots & a_{kn}^{(k)}\\ \vdots & & \vdots & \vdots & \vdots\\ 0 & \dots & 0 & a_{nk}^{(k)} \dots & a_{nn}^{(k)} \end{bmatrix}$$
Assume $a_{kk}^{(k)} \ne 0$. Define the multipliers
$$m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}},\qquad i = k+1, \dots, n$$
Use these to remove the unknown $x_k$ from equations $k+1$ through $n$. Define
$$a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik}a_{kj}^{(k)},\qquad b_i^{(k+1)} = b_i^{(k)} - m_{ik}b_k^{(k)};\qquad i, j = k+1, \dots, n$$
The earlier rows 1 through k are left undisturbed, and zeros are introduced into column k below the diagonal element.
By continuing in this manner, after $n-1$ steps we obtain $A^{(n)}x = b^{(n)}$:
$$\begin{bmatrix} a_{11}^{(1)} & \dots & a_{1n}^{(1)}\\ 0 & \ddots & \vdots\\ 0 & \dots & a_{nn}^{(n)} \end{bmatrix} \begin{bmatrix} x_1\\ \vdots\\ x_n \end{bmatrix} = \begin{bmatrix} b_1^{(1)}\\ \vdots\\ b_n^{(n)} \end{bmatrix}$$
For notational convenience, let $U = A^{(n)}$ and $g = b^{(n)}$. The system $Ux = g$ is upper triangular, and it is quite easy to solve. First
$$x_n = \frac{g_n}{u_{nn}}$$
and then
$$x_k = \frac{1}{u_{kk}}\Big[g_k - \sum_{j=k+1}^{n} u_{kj}x_j\Big],\qquad k = n-1, n-2, \dots, 1$$
This completes the Gaussian elimination.
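The algorithm above can be transcribed directly; this is a sketch without pivoting, so it assumes every pivot $a_{kk}^{(k)}$ is nonzero, as the text does. The test system is the 3 × 3 example solved later by the compact methods:

```python
# Gaussian elimination (no pivoting) followed by back substitution.

def gauss_solve(A, b):
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    n = len(A)
    for k in range(n - 1):             # STEP k: eliminate x_k from rows below
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]      # row multiplier m_ik
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):     # back substitution on Ux = g
        s = sum(A[k][j] * x[j] for j in range(k + 1, n))
        x[k] = (b[k] - s) / A[k][k]
    return x

x = gauss_solve([[1, 1, 1], [1, 2, 2], [1, 2, 3]], [5, 6, 8])  # [4.0, -1.0, 2.0]
```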
Theorem 3.1.1 If L and U are the lower and upper triangular matrices defined previously using Gaussian elimination, then
$$A = LU$$
PROOF: To visualize the matrix element $(LU)_{ij}$, use the vector formula
$$(LU)_{ij} = \begin{bmatrix} m_{i1} & \dots & m_{i,i-1} & 1 & 0 & \dots & 0 \end{bmatrix} \begin{bmatrix} u_{1j}\\ \vdots\\ u_{jj}\\ 0\\ \vdots\\ 0 \end{bmatrix}$$
For $i \le j$,
$$(LU)_{ij} = m_{i1}u_{1j} + m_{i2}u_{2j} + \dots + m_{i,i-1}u_{i-1,j} + u_{ij} = \sum_{k=1}^{i-1} m_{ik}a_{kj}^{(k)} + a_{ij}^{(i)} = \sum_{k=1}^{i-1}\big[a_{ij}^{(k)} - a_{ij}^{(k+1)}\big] + a_{ij}^{(i)} = a_{ij}^{(1)} = a_{ij}$$
For $i > j$,
$$(LU)_{ij} = m_{i1}u_{1j} + \dots + m_{ij}u_{jj} = \sum_{k=1}^{j-1} m_{ik}a_{kj}^{(k)} + m_{ij}a_{jj}^{(j)} = \sum_{k=1}^{j-1}\big[a_{ij}^{(k)} - a_{ij}^{(k+1)}\big] + a_{ij}^{(j)} = a_{ij}^{(1)} = a_{ij}$$
This completes the proof.
Consider the result A = LU. There is some nonuniqueness in the choice of L and U if we insist only that L and U be lower and upper triangular, respectively. If A is nonsingular and we have two decompositions
$$A = L_1U_1 = L_2U_2$$
then
$$L_2^{-1}L_1 = U_2U_1^{-1}$$
The inverses and products of lower triangular matrices are again lower triangular, and similarly for upper triangular matrices. The left and right sides of the above equation are lower and upper triangular, respectively. Thus both must equal a diagonal matrix, call it D, and
$$L_1 = L_2D,\qquad U_1 = D^{-1}U_2$$
The choice of D is tied directly to the choice of the diagonal elements of either L or U, and once they have been chosen, D is uniquely determined.
If the diagonal elements of L are all required to equal 1, then the resulting decomposition A = LU is the one given by Gaussian elimination. The associated compact method gives explicit formulas for $l_{ij}$ and $u_{ij}$, and it is known as Doolittle's method. If we choose to have the diagonal elements of U all equal 1, the associated compact method for calculating A = LU is called Crout's method. There is only a multiplying diagonal matrix to distinguish it from Doolittle's method.
EXAMPLE: Solve the system
$$x_1 + x_2 + x_3 = 5$$
$$x_1 + 2x_2 + 2x_3 = 6$$
$$x_1 + 2x_2 + 3x_3 = 8$$
Here
$$A = \begin{bmatrix} 1 & 1 & 1\\ 1 & 2 & 2\\ 1 & 2 & 3 \end{bmatrix},\qquad X = \begin{bmatrix} X_1\\ X_2\\ X_3 \end{bmatrix},\qquad B = \begin{bmatrix} 5\\ 6\\ 8 \end{bmatrix}$$
Let A = LU
$$\begin{bmatrix} 1 & 1 & 1\\ 1 & 2 & 2\\ 1 & 2 & 3 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ a & 1 & 0\\ b & c & 1 \end{bmatrix} \begin{bmatrix} d & e & f\\ 0 & g & h\\ 0 & 0 & i \end{bmatrix} = \begin{bmatrix} d & e & f\\ ad & ae+g & af+h\\ bd & be+cg & bf+ch+i \end{bmatrix}$$
Comparing entries:
$$d = 1,\quad e = 1,\quad f = 1$$
$$ad = 1,\quad ae + g = 2,\quad af + h = 2 \;\Rightarrow\; a = 1,\quad g = 1,\quad h = 1$$
$$bd = 1,\quad be + cg = 2,\quad bf + ch + i = 3 \;\Rightarrow\; b = 1,\quad c = 1,\quad i = 1$$
Let Ly = B:
$$\begin{bmatrix} 1 & 0 & 0\\ 1 & 1 & 0\\ 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} Y_1\\ Y_2\\ Y_3 \end{bmatrix} = \begin{bmatrix} 5\\ 6\\ 8 \end{bmatrix}$$
$$Y_1 = 5$$
$$Y_1 + Y_2 = 6 \;\Rightarrow\; Y_2 = 1$$
$$Y_1 + Y_2 + Y_3 = 8 \;\Rightarrow\; Y_3 = 2$$
Thus $Y_1 = 5,\; Y_2 = 1,\; Y_3 = 2$.
Let Ux = y:
$$\begin{bmatrix} 1 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_1\\ X_2\\ X_3 \end{bmatrix} = \begin{bmatrix} 5\\ 1\\ 2 \end{bmatrix}$$
$$X_3 = 2$$
$$X_2 + X_3 = 1 \;\Rightarrow\; X_2 = -1$$
$$X_1 + X_2 + X_3 = 5 \;\Rightarrow\; X_1 = 4$$
$$X = \begin{bmatrix} 4\\ -1\\ 2 \end{bmatrix}$$
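The same computation can be sketched as Doolittle's compact method (unit diagonal on L), followed by the two substitution sweeps; the factorization routine is our own transcription:

```python
# Doolittle factorization A = LU with unit diagonal on L, then
# forward substitution Ly = b and back substitution Ux = y.

def doolittle(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[1, 1, 1], [1, 2, 2], [1, 2, 3]]
b = [5, 6, 8]
L, U = doolittle(A)
n = len(A)
y = [0.0] * n
for i in range(n):                     # forward substitution: Ly = b
    y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
x = [0.0] * n
for i in range(n - 1, -1, -1):         # back substitution: Ux = y
    x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
# y = [5.0, 1.0, 2.0] and x = [4.0, -1.0, 2.0], matching the hand computation
```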
3.2.2 Equivalent definitions of positive semidefinite matrices
Theorem 3.2.1 For a symmetric n × n matrix $M \in L(V)$, the following are equivalent.
1. $v^TMv \ge 0$ for all $v \in V$.
2. All the eigenvalues of M are non-negative.
3. There exists a matrix B such that $B^TB = M$.
4. M is the Gram matrix of vectors $v_1, \dots, v_n \in U$, where U is some vector space; that is,
$$\forall i, j:\; M_{i,j} = v_i^Tv_j$$
3.3 Cholesky Method
A matrix decomposition or matrix factorization is a factorization of a matrix into a product of matrices. There are many different matrix decompositions; one of them is the Cholesky decomposition.
The Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose. The Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.
The Cholesky decomposition of a Hermitian positive-definite matrix A is a decomposition of the form $A = LL^*$, where L is a lower triangular matrix with real and positive diagonal entries, and $L^*$ denotes the conjugate transpose of L. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition.
In the 3 × 3 real case,
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0\\ l_{21} & l_{22} & 0\\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} l_{11} & l_{21} & l_{31}\\ 0 & l_{22} & l_{32}\\ 0 & 0 & l_{33} \end{bmatrix} = LL^T$$
Every positive-definite matrix A can be decomposed into a product of a unique lower triangular matrix L and its transpose: $A = LL^T$.
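A minimal sketch of the real Cholesky factorization; the test matrix here is our own example, chosen so that L comes out with integer entries:

```python
# Cholesky factorization A = L L^T for a real symmetric positive-definite
# matrix: diagonal entries via a square root, off-diagonals via division.
import math

def cholesky(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)  # real, positive diagonal
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

A = [[4, 2, 2], [2, 5, 3], [2, 3, 6]]
L = cholesky(A)   # [[2, 0, 0], [1, 2, 0], [1, 1, 2]]
```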
3.4 Crout's Method
In Crout's method the matrix is factored as A = LU with a general lower triangular L and a unit upper triangular U:
$$\begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & & & \vdots\\ a_{n1} & a_{n2} & \dots & a_{nn} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & \dots & 0\\ l_{21} & l_{22} & \dots & 0\\ \vdots & & & \vdots\\ l_{n1} & l_{n2} & \dots & l_{nn} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} & \dots & u_{1n}\\ 0 & 1 & u_{23} & \dots & u_{2n}\\ \vdots & & & & \vdots\\ 0 & 0 & 0 & \dots & 1 \end{bmatrix}$$
Therefore, by LU-decomposition, the system of linear equations Ax = b can be solved in three steps:
1. Construct the lower triangular matrix L and the upper triangular matrix U.
2. Using forward substitution, solve Ly = b.
3. Using backward substitution, solve Ux = y.
We further elaborate the process by considering a 3 × 3 matrix A. We consider solving the system of equations of the form Ax = b, where
$$A = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix},\qquad x = \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix},\qquad b = \begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix}$$
The matrix A is factorized as a product of two matrices L (lower triangular) and U (upper triangular) as follows:
$$\begin{bmatrix} l_{11} & 0 & 0\\ l_{21} & l_{22} & 0\\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13}\\ 0 & 1 & u_{23}\\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
$$\Rightarrow \begin{bmatrix} l_{11} & l_{11}u_{12} & l_{11}u_{13}\\ l_{21} & l_{21}u_{12}+l_{22} & l_{21}u_{13}+l_{22}u_{23}\\ l_{31} & l_{31}u_{12}+l_{32} & l_{31}u_{13}+l_{32}u_{23}+l_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$
This implies
$$l_{11} = a_{11},\qquad l_{21} = a_{21},\qquad l_{31} = a_{31}$$
$$l_{11}u_{12} = a_{12} \;\Rightarrow\; u_{12} = \frac{a_{12}}{l_{11}} = \frac{a_{12}}{a_{11}}$$
$$l_{11}u_{13} = a_{13} \;\Rightarrow\; u_{13} = \frac{a_{13}}{l_{11}} = \frac{a_{13}}{a_{11}}$$
$$l_{21}u_{12} + l_{22} = a_{22} \;\Rightarrow\; l_{22} = a_{22} - l_{21}u_{12}$$
$$l_{21}u_{13} + l_{22}u_{23} = a_{23} \;\Rightarrow\; u_{23} = \frac{1}{l_{22}}(a_{23} - l_{21}u_{13})$$
$$l_{31}u_{12} + l_{32} = a_{32} \;\Rightarrow\; l_{32} = a_{32} - l_{31}u_{12}$$
$$l_{31}u_{13} + l_{32}u_{23} + l_{33} = a_{33} \;\Rightarrow\; l_{33} = a_{33} - l_{31}u_{13} - l_{32}u_{23}$$
Once all the values of the $l_{ij}$'s and $u_{ij}$'s are obtained, we can write Ax = b as LUx = b. Let Ux = y; then Ly = b:
$$\begin{bmatrix} l_{11} & 0 & 0\\ l_{21} & l_{22} & 0\\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix} = \begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix} \;\Rightarrow\; \begin{bmatrix} l_{11}y_1\\ l_{21}y_1 + l_{22}y_2\\ l_{31}y_1 + l_{32}y_2 + l_{33}y_3 \end{bmatrix} = \begin{bmatrix} b_1\\ b_2\\ b_3 \end{bmatrix}$$
$$\Rightarrow\; y_1 = \frac{b_1}{l_{11}},\qquad y_2 = \frac{1}{l_{22}}(b_2 - l_{21}y_1),\qquad y_3 = \frac{1}{l_{33}}(b_3 - l_{31}y_1 - l_{32}y_2)$$
Having obtained y by forward substitution, we solve Ux = y:
$$\begin{bmatrix} 1 & u_{12} & u_{13}\\ 0 & 1 & u_{23}\\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1\\ x_2\\ x_3 \end{bmatrix} = \begin{bmatrix} y_1\\ y_2\\ y_3 \end{bmatrix}$$
By back substitution we get
$$x_3 = y_3$$
$$x_2 + u_{23}x_3 = y_2 \;\Rightarrow\; x_2 = y_2 - u_{23}x_3$$
$$x_1 + u_{12}x_2 + u_{13}x_3 = y_1 \;\Rightarrow\; x_1 = y_1 - u_{12}x_2 - u_{13}x_3$$
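The 3 × 3 Crout formulas above can be transcribed directly; the test system is the same one solved earlier in this chapter, and the routine name is our own:

```python
# Crout's method for a 3x3 system: general lower triangular L, unit
# diagonal on U, then Ly = b (forward) and Ux = y (backward).

def crout3(A):
    L = [[0.0] * 3 for _ in range(3)]
    U = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
    for i in range(3):
        L[i][0] = A[i][0]                            # l_i1 = a_i1
    U[0][1] = A[0][1] / L[0][0]                      # u_12 = a_12 / l_11
    U[0][2] = A[0][2] / L[0][0]                      # u_13 = a_13 / l_11
    L[1][1] = A[1][1] - L[1][0] * U[0][1]
    U[1][2] = (A[1][2] - L[1][0] * U[0][2]) / L[1][1]
    L[2][1] = A[2][1] - L[2][0] * U[0][1]
    L[2][2] = A[2][2] - L[2][0] * U[0][2] - L[2][1] * U[1][2]
    return L, U

A = [[1, 1, 1], [1, 2, 2], [1, 2, 3]]
b = [5, 6, 8]
L, U = crout3(A)
y0 = b[0] / L[0][0]                                  # forward substitution
y1 = (b[1] - L[1][0] * y0) / L[1][1]
y2 = (b[2] - L[2][0] * y0 - L[2][1] * y1) / L[2][2]
x2 = y2                                              # back substitution
x1 = y1 - U[1][2] * x2
x0 = y0 - U[0][1] * x1 - U[0][2] * x2
```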
3.5 Tridiagonal System
The matrix $A = [a_{ij}]$ is tridiagonal if
$$a_{ij} = 0 \quad\text{for } |i - j| > 1$$
This gives the form
$$A = \begin{bmatrix} a_1 & c_1 & 0 & \dots & 0\\ b_2 & a_2 & c_2 & & \\ 0 & b_3 & a_3 & c_3 & \\ & & \ddots & \ddots & c_{n-1}\\ 0 & \dots & 0 & b_n & a_n \end{bmatrix}$$
Tridiagonal matrices occur in a variety of applications. In addition, many numerical methods for solving boundary value problems for ordinary and partial differential equations involve the solution of tridiagonal systems. Virtually all of these applications yield tridiagonal matrices for which the LU factorization can be formed without pivoting, and for which there is no large increase in error as a consequence.
By considering the factorization A = LU without pivoting, we find that most elements of L and U will be zero, and we are led to the following general formula for the decomposition A = LU:
$$A = \begin{bmatrix} \alpha_1 & 0 & \dots & 0\\ b_2 & \alpha_2 & & \\ 0 & b_3 & \alpha_3 & \\ & & \ddots & \\ 0 & \dots & b_n & \alpha_n \end{bmatrix} \begin{bmatrix} 1 & \gamma_1 & & 0\\ 0 & 1 & \gamma_2 & \\ & & \ddots & \gamma_{n-1}\\ 0 & \dots & 0 & 1 \end{bmatrix}$$
We can multiply to obtain a way to recursively compute the $\{\alpha_i\}$ and $\{\gamma_i\}$:
$$a_1 = \alpha_1,\qquad \alpha_1\gamma_1 = c_1$$
$$a_i = \alpha_i + b_i\gamma_{i-1},\qquad i = 2, \dots, n$$
$$\alpha_i\gamma_i = c_i,\qquad i = 2, 3, \dots, n-1$$
These can be solved to give
$$\alpha_1 = a_1,\qquad \gamma_1 = \frac{c_1}{\alpha_1}$$
$$\alpha_i = a_i - b_i\gamma_{i-1},\qquad \gamma_i = \frac{c_i}{\alpha_i};\qquad i = 2, 3, \dots, n-1$$
$$\alpha_n = a_n - b_n\gamma_{n-1}$$
To solve LUx = f, let Ux = z and Lz = f. Then
$$z_1 = \frac{f_1}{\alpha_1},\qquad z_i = \frac{f_i - b_iz_{i-1}}{\alpha_i},\qquad i = 2, 3, \dots, n$$
$$x_n = z_n,\qquad x_i = z_i - \gamma_ix_{i+1},\qquad i = n-1, n-2, \dots, 1$$
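The recursions above (often called the Thomas algorithm) can be sketched directly; the example matrix below is our own, a small symmetric tridiagonal system:

```python
# Tridiagonal solve via the recursions alpha/gamma, then Lz = f (forward)
# and Ux = z (backward). Diagonals are lists: a (main), b (sub), c (super).

def thomas(a, b, c, f):
    n = len(a)
    alpha = [0.0] * n
    gamma = [0.0] * (n - 1)
    alpha[0] = a[0]
    gamma[0] = c[0] / alpha[0]
    for i in range(1, n):
        alpha[i] = a[i] - b[i] * gamma[i - 1]
        if i < n - 1:
            gamma[i] = c[i] / alpha[i]
    z = [0.0] * n                      # forward: Lz = f
    z[0] = f[0] / alpha[0]
    for i in range(1, n):
        z[i] = (f[i] - b[i] * z[i - 1]) / alpha[i]
    x = [0.0] * n                      # backward: Ux = z
    x[-1] = z[-1]
    for i in range(n - 2, -1, -1):
        x[i] = z[i] - gamma[i] * x[i + 1]
    return x

# b[0] and c[-1] are unused (no entries outside the band); x is close to [1, 1, 1]
x = thomas([2, 2, 2], [0, -1, -1], [-1, -1, 0], [1, 0, 1])
```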
Chapter 4
4.1 The Jacobi Method
The Jacobi method assumes (1) that the system given by
$$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$$
$$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$$
$$\vdots$$
$$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$$
has a unique solution and (2) that the coefficient matrix A has no zeros on its main diagonal. If any of the diagonal entries $a_{11}, a_{22}, \dots, a_{nn}$ are zero, then rows or columns must be interchanged to obtain a coefficient matrix that has nonzero entries on the main diagonal.
To begin the Jacobi method, solve the first equation for $x_1$, the second equation for $x_2$, and so on, as follows:
$$x_1 = \frac{1}{a_{11}}(b_1 - a_{12}x_2 - a_{13}x_3 - \dots - a_{1n}x_n)$$
$$x_2 = \frac{1}{a_{22}}(b_2 - a_{21}x_1 - a_{23}x_3 - \dots - a_{2n}x_n)$$
$$\vdots$$
$$x_n = \frac{1}{a_{nn}}(b_n - a_{n1}x_1 - a_{n2}x_2 - \dots - a_{n,n-1}x_{n-1})$$
Then make an initial approximation of the solution, $(x_1, x_2, x_3, \dots, x_n)$, and substitute these values of $x_i$ into the right-hand sides of the rewritten equations to obtain the first approximation. After this procedure has been completed, one iteration has been performed. In the same way, the second approximation is formed by substituting the first approximation's x-values into the right-hand sides of the rewritten equations. By repeated iterations, we form a sequence of approximations that often converges to the actual solution. This procedure is illustrated in Example 1.
4.1.1 Example 1: Applying the Jacobi Method
Use the Jacobi method to approximate the solution of the following system of linear equations:
$$5x_1 - 2x_2 + 3x_3 = -1$$
$$-3x_1 + 9x_2 + x_3 = 2$$
$$2x_1 - x_2 - 7x_3 = 3$$
Continue the iterations until two successive approximations are identical when rounded to three significant digits.
To begin, write the system in the form
$$x_1 = -\frac{1}{5} + \frac{2}{5}x_2 - \frac{3}{5}x_3$$
$$x_2 = \frac{2}{9} + \frac{3}{9}x_1 - \frac{1}{9}x_3$$
$$x_3 = -\frac{3}{7} + \frac{2}{7}x_1 - \frac{1}{7}x_2$$
Because we do not know the actual solution, choose
$$x_1 = 0,\quad x_2 = 0,\quad x_3 = 0$$
as a convenient initial approximation. So the first approximation is
$$x_1 = -\frac{1}{5} + \frac{2}{5}(0) - \frac{3}{5}(0) = -0.200$$
$$x_2 = \frac{2}{9} + \frac{3}{9}(0) - \frac{1}{9}(0) \approx 0.222$$
$$x_3 = -\frac{3}{7} + \frac{2}{7}(0) - \frac{1}{7}(0) \approx -0.429$$
Continuing this procedure, we obtain the following sequence of approximations:

n    0       1       2       3       4       5       6       7
x1   0.000   -0.200  0.146   0.192   0.181   0.185   0.186   0.186
x2   0.000   0.222   0.203   0.328   0.332   0.329   0.331   0.331
x3   0.000   -0.429  -0.517  -0.416  -0.421  -0.424  -0.423  -0.423
Because the last two columns above are identical, we can conclude that to three significant digits the solution is
$$x_1 = 0.186,\quad x_2 = 0.331,\quad x_3 = -0.423$$
For the system of linear equations given in the above example, the Jacobi method is said to converge. That is, repeated iterations succeed in producing an approximation that is correct to three significant digits. As is generally true for iterative methods, greater accuracy would require more iterations.
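The Jacobi iteration for this example can be sketched as follows (the routine name and iteration count are our own choices):

```python
# Jacobi iteration: every component of the new iterate is computed from
# the previous iterate only.

def jacobi(A, b, x0, iters):
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        x_new = []
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new                      # update all components at once
    return x

A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
x = jacobi(A, b, [0.0, 0.0, 0.0], 20)
# x approaches (0.186, 0.331, -0.423) to three significant digits
```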
4.2 The Gauss-Seidel Method
The first computation is identical to that given in the above example. That is, using $(x_1, x_2, x_3) = (0, 0, 0)$ as the initial approximation, we obtain the following new value for $x_1$:
$$x_1 = -\frac{1}{5} + \frac{2}{5}(0) - \frac{3}{5}(0) = -0.200$$
Now that we have a new value for $x_1$, however, we use it to compute a new value for $x_2$. That is,
$$x_2 = \frac{2}{9} + \frac{3}{9}(-0.200) - \frac{1}{9}(0) \approx 0.156$$
$$x_3 = -\frac{3}{7} + \frac{2}{7}(-0.200) - \frac{1}{7}(0.156) \approx -0.508$$
n    0       1       2       3       4       5
x1   0.000   -0.200  0.167   0.191   0.186   0.186
x2   0.000   0.156   0.334   0.333   0.331   0.331
x3   0.000   -0.508  -0.429  -0.422  -0.423  -0.423
Note that after only five iterations of the Gauss-Seidel method, we achieved the same accuracy as was obtained with seven iterations of the Jacobi method in the first example.
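Gauss-Seidel differs from Jacobi only in using each newly computed component immediately; a sketch on the same system (routine name and iteration count are our own):

```python
# Gauss-Seidel iteration: components are overwritten in place, so each
# update already uses the newest available values.

def gauss_seidel(A, b, x0, iters):
    n = len(A)
    x = x0[:]
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # in-place: newest values used
    return x

A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
x = gauss_seidel(A, b, [0.0, 0.0, 0.0], 15)
# converges to (0.186, 0.331, -0.423) in fewer sweeps than Jacobi
```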
Neither of the iterative methods presented in this section always converges. That is, it is possible to apply the Jacobi method or the Gauss-Seidel method to a system of linear equations and obtain a divergent sequence of approximations. In such cases, the method is said to diverge.
EXAMPLE 3: Apply the Jacobi method to the system
$$x_1 - 5x_2 = -4$$
$$7x_1 - x_2 = 6$$
using the initial approximation $(x_1, x_2) = (0, 0)$, and show that the method diverges.
SOLUTION: As usual, begin by rewriting the given system in the form
$$x_1 = -4 + 5x_2$$
$$x_2 = -6 + 7x_1$$
The first approximation is
$$x_1 = -4 + 5(0) = -4$$
$$x_2 = -6 + 7(0) = -6$$
Repeated iterations produce the following divergent sequences. For the Jacobi method:

n    0   1    2      3        4           5           6            7
x1   0   -4   -34    -174     -1244       -6124       -42,874      -214,374
x2   0   -6   -34    -244     -1244       -8574       -42,874      -300,124

For the Gauss-Seidel method:

n    0   1    2      3        4           5
x1   0   -4   -174   -6124    -214,374    -7,503,124
x2   0   -34  -1244  -42,874  -1,500,624  -52,521,874
Thus neither the Jacobi method nor the Gauss-Seidel method converges to the solution of the system of linear equations given in Example 3. We will now look at a special type of coefficient matrix A, called a strictly diagonally dominant matrix, for which it is guaranteed that both methods will converge.
DEFINITION OF A STRICTLY DIAGONALLY DOMINANT MATRIX: An n × n matrix A is strictly diagonally dominant if the absolute value of each entry on the main diagonal is greater than the sum of the absolute values of the other entries in the same row. That is,
$$|a_{11}| > |a_{12}| + |a_{13}| + \dots + |a_{1n}|$$
$$|a_{22}| > |a_{21}| + |a_{23}| + \dots + |a_{2n}|$$
$$\vdots$$
$$|a_{nn}| > |a_{n1}| + |a_{n2}| + \dots + |a_{n,n-1}|$$
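The definition can be checked mechanically; in the sketch below the first test matrix is our own, while the second is the coefficient matrix of the divergent example above:

```python
# Strict diagonal dominance: |a_ii| must exceed the sum of the absolute
# values of the other entries in row i, for every row.

def is_strictly_diagonally_dominant(A):
    return all(
        abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
        for i, row in enumerate(A)
    )

assert is_strictly_diagonally_dominant([[5, -2, 1], [1, 9, 1], [2, -1, -7]])
assert not is_strictly_diagonally_dominant([[1, -5], [7, -1]])  # divergent example
```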
4.3 Sufficient condition for the convergence of the Gauss-Jacobi and Gauss-Seidel methods
The following theorem states that strict diagonal dominance is sufficient for the convergence of either the Jacobi method or the Gauss-Seidel method.
Let x be the exact solution, $x^{(m)}$ the m-th iterate, and define the error
$$e^{(m)} = x - x^{(m)}$$
Now,
$$e_i^{(m+1)} = -\sum_{j=1,\, j\ne i}^{n} \frac{a_{ij}}{a_{ii}}\,(x_j - x_j^{(m)}) = -\sum_{j=1,\, j\ne i}^{n} \frac{a_{ij}}{a_{ii}}\,e_j^{(m)}$$
so
$$|e_i^{(m+1)}| = \Big|\sum_{j=1,\, j\ne i}^{n} \frac{a_{ij}}{a_{ii}}\,e_j^{(m)}\Big| \le \sum_{j=1,\, j\ne i}^{n} \frac{|a_{ij}|}{|a_{ii}|}\,|e_j^{(m)}| \le \Big(\sum_{j=1,\, j\ne i}^{n} \frac{|a_{ij}|}{|a_{ii}|}\Big)\,\|e^{(m)}\|_\infty$$
Let
$$\nu = \max_{i}\sum_{j=1,\, j\ne i}^{n} \frac{|a_{ij}|}{|a_{ii}|}$$
Then
$$|e_i^{(m+1)}| \le \nu\,\|e^{(m)}\|_\infty,\qquad \forall i = 1, 2, \dots, n$$
$$\Rightarrow\; \|e^{(m+1)}\|_\infty \le \nu\,\|e^{(m)}\|_\infty \le \nu^2\,\|e^{(m-1)}\|_\infty \le \dots \le \nu^{m+1}\,\|e^{(0)}\|_\infty$$
By strict diagonal dominance, $\nu < 1$, so the error tends to zero and the iteration converges.
Interchanging the rows of the divergent example gives the strictly diagonally dominant system $7x_1 - x_2 = 6$, $x_1 - 5x_2 = -4$, for which the iteration converges:

n    0        1        2        3        4       5
x1   0.0000   0.8571   0.9959   0.9999   1.000   1.000
x2   0.0000   0.9714   0.9992   1.000    1.000   1.000

On the other hand, the coefficient matrix of the system
$$-4x_1 + 5x_2 = 1$$
$$x_1 + 2x_2 = 3$$
is not a strictly diagonally dominant matrix, and yet both methods converge to the solution $x_1 = 1$ and $x_2 = 1$ when we use an initial approximation of $(x_1, x_2) = (0, 0)$. Strict diagonal dominance is thus sufficient but not necessary for convergence.
Given n points in the plane, we seek a polynomial function of degree n − 1,
$$p(x) = a_0 + a_1x + a_2x^2 + \dots + a_{n-1}x^{n-1}$$
whose graph passes through the specified points. This procedure is called polynomial curve fitting. If all x-coordinates of the points are distinct, then there is precisely one polynomial function of degree n − 1 (or less) that fits the n points.
To solve for the n coefficients of p(x), substitute each of the n points into the polynomial function and obtain n linear equations in the n variables $a_0, a_1, a_2, \dots, a_{n-1}$.
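A sketch of the procedure: build the resulting linear (Vandermonde) system and solve it by the Gaussian elimination of Chapter 3; the three sample points are our own:

```python
# Polynomial curve fitting: substituting each point (x, y) into
# p(x) = a0 + a1 x + ... + a_{n-1} x^{n-1} gives a Vandermonde system,
# solved here by elimination and back substitution.

def fit_polynomial(points):
    n = len(points)
    A = [[x ** j for j in range(n)] for x, _ in points]   # Vandermonde matrix
    b = [y for _, y in points]
    for k in range(n - 1):                                # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            A[i] = [u - m * v for u, v in zip(A[i], A[k])]
            b[i] -= m * b[k]
    a = [0.0] * n
    for k in range(n - 1, -1, -1):                        # back substitution
        s = sum(A[k][j] * a[j] for j in range(k + 1, n))
        a[k] = (b[k] - s) / A[k][k]
    return a

# Fit a parabola through (1, 4), (2, 0), (3, 12): p(x) = 24 - 28x + 8x^2
coeffs = fit_polynomial([(1, 4), (2, 0), (3, 12)])
```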
Bibliography
[Tur48] Turing, A.M., Rounding-off errors in matrix processes, The Quarterly Journal of Mechanics and Applied Mathematics, 1948.
[2] Babajee, D., A comparison between Cramer's rule and a proposed Cramer-elimination method, 2013.
[3] Friedberg, S.H., Insel, A.J., Spence, L.E., Linear Algebra, 4th Edition, Chapter 3, 2013.
[4] Hoffman, K.M., Kunze, R., Linear Algebra, 2nd Edition, Chapter 1, 1971.
[5] Jain, M.K., Iyengar, S.R.K., Numerical Methods for Scientific and Engineering Computation, 6th Edition, Chapter 2, 2012.
[6] Isaacson, E., Keller, H.B., Analysis of Numerical Methods, 1st Edition, Chapter 2, 1966.
[7] Lipschutz, S., Lipson, M., Schaum's Outlines: Linear Algebra, Chapter 3, 1976.
[9] Rohn, J., An existence theorem for systems of linear equations, Linear and Multilinear Algebra, 1991.
[10] Vandenberghe, L., Cholesky factorization, 2018.