
SOLUTION OF SYSTEMS OF LINEAR EQUATIONS

DISSERTATION SUBMITTED

by

SUSHREE SWAGATIKA

Under the guidance of

Dr. Anasuya Nath

A project submitted in partial fulfillment of the requirements for the


degree of

MASTER OF SCIENCE IN MATHEMATICS


(Batch 2017-2019)
at

UTKAL UNIVERSITY

APRIL 2019

DEPARTMENT OF MATHEMATICS

UTKAL UNIVERSITY
VANIVIHAR, BHUBANESWAR
PIN-751004.
CERTIFICATE
I, Dr. Anasuya Nath, hereby certify that the project entitled "Solution of
Systems of Linear Equations" is a record of bona fide study carried out by
Sushree Swagatika under my guidance and supervision in partial fulfilment
of the requirements for the Master's degree in Mathematics.

Dr.Anasuya Nath
Research Supervisor
Assistant Professor
Department of Mathematics
Utkal University, Vanivihar, Bhubaneswar.
DECLARATION
I, Sushree Swagatika, hereby declare that the project entitled "Solution
of Systems of Linear Equations" is an original record of studies and bona
fide work carried out by me under the guidance and supervision of
Dr. Anasuya Nath, Assistant Professor, Department of Mathematics, Utkal
University, Vanivihar, Bhubaneswar, and has not been submitted by me
elsewhere for the award of any degree, diploma, title or recognition.

Department of Mathematics SUSHREE SWAGATIKA


Utkal University, Vanivihar, Bhubaneswar April 2019
ACKNOWLEDGEMENT
Foremost, I express my sincere thanks to God, the Lord Almighty, who
shows me the light of knowledge and the path of success, for making this
venture an ecstasy.

I would like to express my sincere thanks to Dr. Namita Das, Head
of the Department of Mathematics, Utkal University, for providing the
necessary facilities for carrying out this project work.

I would like to show my great appreciation to my guide Dr. Anasuya
Nath, Department of Mathematics, Utkal University. I cannot thank her
enough for her tremendous support and help. I feel motivated and
encouraged every time I attend her meetings. Without her encouragement
and guidance this project would not have materialized.

Finally, yet importantly, I would like to express my heartfelt thanks to
my beloved parents for their blessings, and to all my friends for their help
and wishes for the successful completion of my project.

This project became a reality with the kind support and help of many
individuals. I would like to extend my sincere thanks to all of them.

Place: Vanivihar, Bhubaneswar Sushree Swagatika


Date:

Contents

1 Introduction
1.1 Introduction to linear algebra
1.2 Rank and nullity of a matrix
1.3 Elementary matrices and elementary operations
1.4 Introduction to linear equations

2 Types of systems of linear equations and solutions
2.1 Introduction
2.2 Substitution, elimination and graphing methods to solve systems of linear equations
2.3 Cramer's Rule to solve systems of linear equations
2.4 Solution of a system of linear equations using the augmented matrix
2.4.1 Solution of a system of non-homogeneous equations using the augmented matrix

3 Factorization of a matrix to solve systems of linear equations
3.1 Gaussian elimination
3.2 Doolittle's Method
3.2.1 Positive Semidefinite Matrices
3.2.2 Equivalent definitions of positive semidefinite matrices
3.3 Cholesky Method
3.4 Crout's Method
3.5 Tridiagonal Systems

4 Iterative methods to solve systems of linear equations
4.1 The Jacobi Method
4.1.1 Example 1: Applying the Jacobi Method
4.2 The Gauss-Seidel Method
4.2.1 Example 2: Applying the Gauss-Seidel Method
4.2.2 Example 3: An example of divergence
4.3 Sufficient condition for the convergence of the Jacobi and Gauss-Seidel methods
4.3.1 Example 4: Interchanging rows to obtain convergence
4.4 Applications of systems of linear equations
4.4.1 Polynomial Curve Fitting
4.4.2 Network Analysis
Chapter 1

Introduction

1.1 Introduction to linear algebra


Linear algebra is an important course for a wide range of students for at
least two reasons. First, few subjects can claim such widespread applications
in other areas of mathematics (multivariable calculus, differential equations,
and probability, for example) as well as in physics, biology, chemistry,
economics, finance, psychology, sociology, and all fields of engineering.
Second, the subject presents the student at the sophomore level with an
excellent opportunity to learn how to handle abstract concepts. Linear algebra
is one of the best-known mathematical disciplines because of its rich
theoretical foundations and its many useful applications to science and
engineering. Solving systems of linear equations and computing determinants
are two examples of fundamental problems in linear algebra that have been
studied for a long time. Leibniz found the formula for determinants in 1693,
and in 1750 Cramer presented a method for solving systems of linear
equations, which is today known as Cramer's Rule. This was the first
foundation stone in the development of linear algebra and matrix theory.
At the beginning of the evolution of digital computers, matrix calculus
received a great deal of attention. John von Neumann and Alan Turing were
world-famous pioneers of computer science, and they made significant
contributions to the development of computational linear algebra. In 1947,
von Neumann and Goldstine investigated the effect of rounding errors on the
solution of linear equations. One year later, Turing [Tur48] initiated a method
for factoring a matrix into a product of a lower triangular matrix and an
echelon matrix (the factorization is known as the LU decomposition). At
present, computational linear algebra is of broad interest. This is because the
field is now recognized as an absolutely essential tool in many branches of
computer applications that require computations which are lengthy and
difficult to get right when done by hand, for example in computer graphics,
geometric modeling, and robotics.

1.2 Rank and nullity of a matrix

Let A be any m × n matrix. Then A consists of n column vectors a1, a2, ..., an,
which are m-vectors. The rank of A is the maximal number of linearly
independent column vectors in A, i.e., the maximal number of linearly
independent vectors among a1, a2, ..., an. If A = 0, then the rank of A is 0.
The nullity of A is defined to be the dimension of the null space of A.

1.3 Elementary matrices and elementary operations

Elementary operations are used to obtain simple computational methods for
determining the rank of a linear transformation and the solution of a system
of linear equations. There are two types of elementary matrix operations: row
operations and column operations. Generally, row operations are more useful.
Let A be an m × n matrix. Any one of the following three operations on the
rows [columns] of A is called an elementary row [column] operation:
1. interchanging any two rows [columns] of A;
2. multiplying any row [column] of A by a nonzero scalar;
3. adding any scalar multiple of a row [column] of A to another row [column].

1.4 Introduction to linear equations

Systems of linear equations arise in a large number of areas, both directly
in modeling physical situations and indirectly in the numerical solution of
other mathematical models. These applications occur in virtually all areas
of the physical, biological, and social sciences. In addition, linear systems are
involved in optimization theory; in solving systems of nonlinear equations;
in the approximation of functions; in the numerical solution of boundary
value problems for ordinary differential equations; and in numerous other
problems. Because of the widespread importance of linear systems, much
research has been devoted to their numerical solution. Excellent algorithms
have been developed for the most common types of problems for linear
systems, and some of these are defined, analysed, and illustrated here.

Chapter 2

Types of systems of linear equations and solutions

2.1 Introduction

A system of linear equations (or linear system) is a collection of two or more
linear equations involving the same set of variables. For example,

3x + 2y − z = 1

2x − 2y + 4z = −2

−x + (1/2)y − z = 0

is a system of three equations in the three variables x, y, z. A solution to
a linear system is an assignment of values to the variables such that all the
equations are simultaneously satisfied. A solution to the system above is given by

x = 1

y = −2

z = −2

since it makes all three equations valid. The word "system" indicates that the
equations are to be considered collectively, rather than individually. A general
system of m linear equations with n unknowns can be written as

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
am1 x1 + am2 x2 + ... + amn xn = bm

where x1, x2, ..., xn are the unknowns, a11, a12, ..., amn are the coefficients of
the system, and b1, b2, ..., bm are the constant terms. A system of linear
equations is homogeneous if all of the constant terms are zero:

a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
...
am1 x1 + am2 x2 + ... + amn xn = 0

A homogeneous system is equivalent to a matrix equation of the form

Ax = 0

where A is an m × n matrix, x is a column vector with n entries, and 0 is the
zero vector with m entries. Every homogeneous system has at least one
solution, known as the zero solution (or trivial solution), which is obtained by
assigning the value zero to each of the variables. If the system has a
nonsingular matrix (det(A) ≠ 0) then the trivial solution is the only one. If the
system has a singular matrix then the solution set contains infinitely many
solutions. A system of non-homogeneous linear equations can be put into
the matrix form:

\[
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}
\]
and the augmented matrix is given by
\[
[A : b] =
\left(\begin{array}{ccccc|c}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} & b_1 \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} & b_2 \\
\vdots & \vdots & \vdots & & \vdots & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn} & b_n
\end{array}\right)
\]
After reducing this matrix to its echelon form, if the augmented column
becomes a free column then the augmented column is spanned by the pivot
columns, i.e., by columns of A, so b lies in the column space of A. If the
augmented column becomes a pivot column then it is linearly independent of
the columns of A and cannot be spanned by them, so b does not lie in the
column space of A; the system of linear equations is then inconsistent.
If b is a free column then all pivot columns of [A : b] are columns of A, so the
number of pivot columns in the augmented matrix [A : b] equals the number
of pivot columns in A. Since the rank of a matrix equals its number of pivot
columns, the system is consistent exactly when

Rank[A : b] = Rank A

If b is a pivot column of [A : b] then the number of pivot columns in the
augmented matrix [A : b] is greater than the number of pivot columns of A:

Rank[A : b] > Rank[A]

The system of equations will be inconsistent in that case.
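As a numerical illustration of this rank test, here is a small Python sketch
(assuming NumPy is available; the matrix is the one reused in the example of
Section 2.4.1, paired with one consistent and one inconsistent right-hand side):

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 3.0],
              [1.0, 4.0, 7.0]])

for b in (np.array([6.0, 14.0, 30.0]), np.array([6.0, 14.0, 31.0])):
    Ab = np.column_stack([A, b])            # the augmented matrix [A : b]
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(Ab)
    if rank_A == rank_Ab:
        print(f"b = {b}: consistent (rank {rank_A})")
    else:
        print(f"b = {b}: inconsistent (rank[A:b] = {rank_Ab} > rank A = {rank_A})")
```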

2.2 Substitution, elimination and graphing methods to solve systems of linear equations

Substitution is a method of solving a system of equations by removing all but
one of the variables in one of the equations and then solving that equation.
This is achieved by isolating one variable in an equation and then substituting
its expression into another equation. For example, to solve the system of
equations x + y = 4, 2x − 3y = 3, isolate the variable x in the first equation to
get x = 4 − y, then substitute this expression for x into the second equation to
get 2(4 − y) − 3y = 3. This equation simplifies to −5y = −5, or y = 1. Plug this
value into the first equation to find the value of x: x + 1 = 4, or x = 3.
Another way of solving a linear system is to use the elimination method. In the
elimination method we either add or subtract the equations to get an equation
in one variable. When the coefficients of one variable are opposites we add the
equations to eliminate that variable, and when the coefficients of one variable
are equal we subtract the equations to eliminate it.
EXAMPLE:

3y + 2x = 6

5y − 2x = 10

We can eliminate the x-variable by adding the two equations.

3y + 2x = 6

+ 5y − 2x = 10

⇒ 8y = 16
⇒ y = 2

The value of y can now be substituted into either of the original equations
to find the value of x.

3y + 2x = 6

3(2) + 2x = 6

6 + 2x = 6

x = 0

The solution of the linear system is (0, 2).


A system of linear equations contains two or more equations, e.g., y = 0.5x + 2
and y = x − 2. The solution of such a system is the ordered pair that is a
solution to both equations. To solve a system of linear equations graphically,
we graph both equations in the same coordinate system. The solution to the
system is the point where the two lines intersect.
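Both small systems above can also be checked numerically; a minimal sketch,
assuming NumPy is available:

```python
import numpy as np

# x + y = 4, 2x - 3y = 3  (solved above by substitution: x = 3, y = 1)
A1 = np.array([[1.0, 1.0], [2.0, -3.0]])
b1 = np.array([4.0, 3.0])
print(np.linalg.solve(A1, b1))   # [3. 1.]

# 2x + 3y = 6, -2x + 5y = 10  (solved above by elimination: x = 0, y = 2)
A2 = np.array([[2.0, 3.0], [-2.0, 5.0]])
b2 = np.array([6.0, 10.0])
print(np.linalg.solve(A2, b2))   # [0. 2.]
```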

2.3 Cramer's Rule to solve systems of linear equations

Theorem 2.3.1 Suppose that Ax = b is a system of linear equations where A is
an n × n matrix, x = (x_1, x_2, \ldots, x_n)^T, and b = (b_1, b_2, \ldots, b_n)^T.
Let A_j denote the matrix obtained by taking the j-th column of A and
replacing it with the column matrix b. If det(A) ≠ 0 then this system has a
unique solution given by
\[
x_1 = \frac{\det(A_1)}{\det(A)}, \quad x_2 = \frac{\det(A_2)}{\det(A)}, \quad \ldots, \quad x_n = \frac{\det(A_n)}{\det(A)}
\]

PROOF: We know that A is invertible since det(A) ≠ 0. Hence there exists a
unique solution x = A^{-1}b that satisfies this system. We know that
\[
A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A),
\]
where adj(A) is the transpose of the matrix of cofactors C_{ij} of A. Making
this substitution, we get
\[
x = \frac{1}{\det(A)}\,\mathrm{adj}(A)\,b
= \frac{1}{\det(A)}
\begin{pmatrix}
C_{11} & C_{21} & \cdots & C_{n1} \\
C_{12} & C_{22} & \cdots & C_{n2} \\
\vdots & \vdots & & \vdots \\
C_{1n} & C_{2n} & \cdots & C_{nn}
\end{pmatrix}
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}.
\]
Hence for any row j it follows that
\[
x_j = \frac{b_1 C_{1j} + b_2 C_{2j} + \cdots + b_n C_{nj}}{\det(A)}.
\]
Now look at the matrix A_j that we defined earlier:
\[
A_j =
\begin{pmatrix}
a_{11} & a_{12} & \cdots & a_{1,j-1} & b_1 & a_{1,j+1} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2,j-1} & b_2 & a_{2,j+1} & \cdots & a_{2n} \\
\vdots & \vdots & & \vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{n,j-1} & b_n & a_{n,j+1} & \cdots & a_{nn}
\end{pmatrix}.
\]
The matrix A_j differs from A only in a single column (column j). Hence the
cofactors of b_1, b_2, ..., b_n in A_j are the same as the cofactors of
a_{1j}, a_{2j}, ..., a_{nj} in A, and thus, expanding det(A_j) along column j,
\[
\det(A_j) = b_1 C_{1j} + b_2 C_{2j} + \cdots + b_n C_{nj}.
\]
Substituting this back into the formula for x_j above, we get
\[
x_j = \frac{\det(A_j)}{\det(A)}.
\]

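Cramer's Rule translates directly into code. A sketch in Python with NumPy
(for illustration only; elimination methods are preferred in practice):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's Rule; requires det(A) != 0."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("det(A) = 0: Cramer's Rule does not apply")
    n = len(b)
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b                     # replace column j of A with b
        x[j] = np.linalg.det(Aj) / det_A
    return x

# Example: the 3x3 system from Section 2.1, with solution (1, -2, -2).
A = [[3, 2, -1], [2, -2, 4], [-1, 0.5, -1]]
b = [1, -2, 0]
print(cramer_solve(A, b))                # approximately [ 1. -2. -2.]
```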
2.4 Solution of a system of linear equations using the augmented matrix

Let A be an m × n matrix. The set of all solutions of AX = 0 is called the null
space of A and is denoted by N(A). Thus,

N(A) = {X ∈ R^n | AX = 0}

AX is a linear combination of the columns of the matrix A. So finding the null
space of a matrix amounts to finding which linear combinations of the columns
of A give the zero vector.

AX = 0

can also be written as:

    
a11 a12 a13 . . . a1n x1 0
    
 a21 a22 a23 . . . a2n 
  x2  =  0 
   

. . . ... ... ... ... . . . . . .
   

an1 an2 an3 . . . ann xn 0

a11 x1 + a12 x2 + ... + a1n xn = 0

a21 x1 + a22 x2 + ... + a2n xn = 0


..
.

am1 x1 + am2 x2 + ... + amn xn = 0

So,finding null space of a m × n matrix A is like solution the system of linear


homogeneous equation a subspace of Rn .

Theorem 2.4.1 The null space of an m × n matrix A is a subspace of R^n.

PROOF: Suppose X_1, X_2 ∈ N(A); then AX_1 = AX_2 = 0, and for any scalars
c_1, c_2,

A(c_1 X_1 + c_2 X_2) = c_1 AX_1 + c_2 AX_2 = 0 + 0 = 0

So

c_1 X_1 + c_2 X_2 ∈ N(A)

and hence N(A) is a subspace of R^n.


To get the null space we have to solve the system of linear equations AX = 0.
X = 0 is always a solution of AX = 0, so X = 0 always belongs to N(A); this is
quite obvious because N(A), being a subspace, must always contain the zero
vector. Since finding the null space of a matrix is just solving the linear
homogeneous equations in x_1, x_2, ..., x_n, the assignment
x_1 = x_2 = ... = x_n = 0 is always a solution of AX = 0. So X = 0 is called the
trivial solution of the system of linear equations. It also means that the zero
combination of the columns of A is the zero vector.

2.4.1 Solution of a system of non-homogeneous equations using the augmented matrix

We can show this by an example. Solve:

x + y + z = 6

x + 2y + 3z = 14

x + 4y + 7z = 30

Solution: The system of linear equations can be put in the matrix form
\[
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 3 \\ 1 & 4 & 7 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} 6 \\ 14 \\ 30 \end{pmatrix}
\]

Let us form the augmented matrix and reduce it to echelon form using row
transformations:
\[
[A : b] =
\left(\begin{array}{ccc|c}
1 & 1 & 1 & 6 \\
1 & 2 & 3 & 14 \\
1 & 4 & 7 & 30
\end{array}\right)
\]
Operating R2 → R2 − R1 and R3 → R3 − R1, followed by R3 → R3 − 3R2, we
have:
\[
\left(\begin{array}{ccc|c}
1 & 1 & 1 & 6 \\
0 & 1 & 2 & 8 \\
0 & 0 & 0 & 0
\end{array}\right)
\]

Here we find that the first and second columns are pivot columns, and the
third and augmented columns are free columns. Since the augmented column
is a free column, it can be expressed as a linear combination of the pivot
columns:
\[
\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \\ z \end{pmatrix}
=
\begin{pmatrix} 6 \\ 8 \\ 0 \end{pmatrix}
\]
The linear combination of the pivot columns (1, 0, 0)^T and (1, 1, 0)^T that
gives the augmented column yields a particular solution. Let us assign z = 0 to
the free variable. The values of the pivot variables for which the augmented
column is expressed as a linear combination of the pivot columns give the
particular solution
\[
X_p = \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -2 \\ 8 \\ 0 \end{pmatrix}
\]
This means (−2) × (first pivot column) + 8 × (second pivot column) =
(6, 8, 0)^T. We can check that X_p satisfies the original system of equations:

AXp = b

Now let us find the null space. To find the null space, assign the free variable
z = 1; then y + 2z = 0 gives y = −2, and x + y + z = 0 gives x = 1. So the
vectors belonging to the null space are given by
\[
X_n = k \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix}
\]

If we add any vector X_n belonging to the null space to the particular solution
X_p, then X_n + X_p will also be a solution. We can check: if

AX_p = b

then

A(X_n + X_p) = AX_n + AX_p = 0 + b = b

So X = X_n + X_p is also a solution of the system of non-homogeneous linear
equations, and the complete solution is given by
\[
X = X_n + X_p = k \begin{pmatrix} 1 \\ -2 \\ 1 \end{pmatrix} + \begin{pmatrix} -2 \\ 8 \\ 0 \end{pmatrix}
\]

If the rank of the matrix r equals the number of columns in A, i.e., if

r = n

then the null space is the zero space and the system of equations has the
unique solution

X = X_p

If the rank of the matrix r is less than the number of columns in A, i.e., if

r < n

then the null space is (n − r)-dimensional, and there are infinitely many
solutions.
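This worked example can be reproduced symbolically. A sketch assuming
SymPy is installed; linsolve returns the same one-parameter family of
solutions found above:

```python
from sympy import Matrix, symbols, linsolve

x, y, z = symbols('x y z')
A = Matrix([[1, 1, 1], [1, 2, 3], [1, 4, 7]])
b = Matrix([6, 14, 30])

# Complete solution of the non-homogeneous system: Xp plus the null space.
print(linsolve((A, b), x, y, z))    # {(z - 2, 8 - 2*z, z)}

# The null space of A gives the direction vector k(1, -2, 1)^T.
print(A.nullspace())                # [Matrix([[1], [-2], [1]])]
```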

Chapter 3

Factorization of a matrix to solve systems of linear equations

3.1 Gaussian elimination

This is the formal name given to the method of solving systems of linear
equations by successively eliminating unknowns and reducing to systems of
lower order. A precise definition of Gaussian elimination is necessary for
implementing it on a computer and for analysing the effects of the rounding
errors that occur when computing with it.
To solve Ax = b, we reduce it to an equivalent system Ux = g in which U is
upper triangular. This system can be easily solved by a process of back
substitution. Denote the original system by A^{(1)} x = b^{(1)},
\[
A^{(1)} = [a_{ij}^{(1)}], \qquad b^{(1)} = [b_1^{(1)}, \ldots, b_n^{(1)}]^T, \qquad 1 \le i, j \le n
\]
in which n is the order of the system. We reduce the system to the triangular
form Ux = g by adding multiples of one equation to another equation,
eliminating some unknowns from the later equations. Additional row
operations are used in the modifications given in succeeding sections. To keep
the presentation simple, we make some technical assumptions in defining the
algorithm.
GAUSSIAN ELIMINATION ALGORITHM:
STEP 1: Assume a_{11}^{(1)} ≠ 0. Define the row multipliers by
\[
m_{i1} = \frac{a_{i1}^{(1)}}{a_{11}^{(1)}}, \qquad i = 2, 3, \ldots, n
\]
These are used in eliminating the x_1 term. Define
\[
a_{ij}^{(2)} = a_{ij}^{(1)} - m_{i1} a_{1j}^{(1)}, \qquad i, j = 2, \ldots, n
\]
\[
b_i^{(2)} = b_i^{(1)} - m_{i1} b_1^{(1)}, \qquad i = 2, \ldots, n
\]
The first rows of A and b are left undisturbed, and the first column of A^{(1)},
below the diagonal, is set to zero. The system A^{(2)} x = b^{(2)} looks like
\[
\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & a_{n2}^{(2)} & a_{n3}^{(2)} & \cdots & a_{nn}^{(2)}
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1^{(1)} \\ b_2^{(2)} \\ \vdots \\ b_n^{(2)} \end{pmatrix}
\]
We continue to eliminate unknowns, going on to columns 2, 3, ..., and this is
expressed generally as follows.
STEP k: Let 1 ≤ k ≤ n − 1. Assume that A^{(k)} x = b^{(k)} has been
constructed, with x_1, x_2, ..., x_{k-1} eliminated at successive stages, and
that A^{(k)} has the form
\[
A^{(k)} =
\begin{pmatrix}
a_{11}^{(1)} & a_{12}^{(1)} & a_{13}^{(1)} & \cdots & a_{1n}^{(1)} \\
0 & a_{22}^{(2)} & a_{23}^{(2)} & \cdots & a_{2n}^{(2)} \\
\vdots & & \ddots & & \vdots \\
0 & \cdots & a_{kk}^{(k)} & \cdots & a_{kn}^{(k)} \\
\vdots & & \vdots & & \vdots \\
0 & \cdots & a_{nk}^{(k)} & \cdots & a_{nn}^{(k)}
\end{pmatrix}
\]
Assume a_{kk}^{(k)} ≠ 0. Define the multipliers
\[
m_{ik} = \frac{a_{ik}^{(k)}}{a_{kk}^{(k)}}, \qquad i = k + 1, \ldots, n
\]

Use these to remove the unknown x_k from equations k + 1 through n. Define
\[
a_{ij}^{(k+1)} = a_{ij}^{(k)} - m_{ik} a_{kj}^{(k)}, \qquad
b_i^{(k+1)} = b_i^{(k)} - m_{ik} b_k^{(k)}, \qquad i, j = k + 1, \ldots, n
\]
The earlier rows 1 through k are left undisturbed, and zeros are introduced
into column k below the diagonal element.
By continuing in this manner, after n − 1 steps we obtain A^{(n)} x = b^{(n)}:
\[
\begin{pmatrix}
a_{11}^{(1)} & \cdots & a_{1n}^{(1)} \\
& \ddots & \vdots \\
0 & & a_{nn}^{(n)}
\end{pmatrix}
\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}
=
\begin{pmatrix} b_1^{(1)} \\ \vdots \\ b_n^{(n)} \end{pmatrix}
\]
For notational convenience, let U = A^{(n)} and g = b^{(n)}. The system
Ux = g is upper triangular, and it is quite easy to solve. First
\[
x_n = \frac{g_n}{u_{nn}}
\]
and then
\[
x_k = \frac{1}{u_{kk}}\Big[g_k - \sum_{j=k+1}^{n} u_{kj} x_j\Big], \qquad k = n - 1, n - 2, \ldots, 1.
\]
This completes the Gaussian elimination.
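The algorithm above translates directly into code. A minimal sketch in Python
with NumPy, without pivoting, so it assumes every pivot a_kk^(k) is nonzero,
exactly as in the technical assumptions above:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Reduce Ax = b to Ux = g, then back substitute (no pivoting)."""
    U = np.array(A, dtype=float)
    g = np.array(b, dtype=float)
    n = len(g)
    # Elimination: steps k = 1, ..., n-1.
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]          # row multiplier m_ik
            U[i, k:] -= m * U[k, k:]
            g[i] -= m * g[k]
    # Back substitution.
    x = np.empty(n)
    for k in range(n - 1, -1, -1):
        x[k] = (g[k] - U[k, k + 1:] @ x[k + 1:]) / U[k, k]
    return x

# The same system solved by Doolittle's method in Section 3.2.
A = [[1, 1, 1], [1, 2, 2], [1, 2, 3]]
b = [5, 6, 8]
print(gaussian_elimination(A, b))          # [ 4. -1.  2.]
```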

Theorem 3.1.1 If L and U are the lower and upper triangular matrices defined
previously using Gaussian elimination, then

A = LU

PROOF: To visualize the matrix element (LU)_{ij}, note that row i of L is
\[
\begin{pmatrix} m_{i1} & \cdots & m_{i,i-1} & 1 & 0 & \cdots & 0 \end{pmatrix}
\]
and that u_{kj} = a_{kj}^{(k)}. For i ≤ j,
\[
(LU)_{ij} = m_{i1} u_{1j} + m_{i2} u_{2j} + \cdots + m_{i,i-1} u_{i-1,j} + u_{ij}
= \sum_{k=1}^{i-1} m_{ik} a_{kj}^{(k)} + a_{ij}^{(i)}
= \sum_{k=1}^{i-1} \big[a_{ij}^{(k)} - a_{ij}^{(k+1)}\big] + a_{ij}^{(i)}
= a_{ij}^{(1)} = a_{ij}.
\]
For i > j,
\[
(LU)_{ij} = m_{i1} u_{1j} + \cdots + m_{ij} u_{jj}
= \sum_{k=1}^{j-1} m_{ik} a_{kj}^{(k)} + m_{ij} a_{jj}^{(j)}
= \sum_{k=1}^{j-1} \big[a_{ij}^{(k)} - a_{ij}^{(k+1)}\big] + a_{ij}^{(j)}
= a_{ij}^{(1)} = a_{ij}.
\]
This completes the proof.
Consider the result A = LU. There is some nonuniqueness in the choice of L
and U if we insist only that L and U be lower and upper triangular,
respectively. If A is nonsingular and we have two decompositions

A = L_1 U_1 = L_2 U_2

then

L_2^{-1} L_1 = U_2 U_1^{-1}

The inverses and products of lower triangular matrices are again lower
triangular, and similarly for upper triangular matrices. The left and right sides
of the above equation are lower and upper triangular, respectively. Thus they
must equal a diagonal matrix, call it D, and

L_1 = L_2 D, U_1 = D^{-1} U_2

The choice of D is tied directly to the choice of the diagonal elements of either
L or U, and once they have been chosen, D is uniquely determined.
If the diagonal elements of L are all required to equal 1, then the resulting
decomposition A = LU is the one given by Gaussian elimination. The
associated compact method gives explicit formulas for l_{ij} and u_{ij}, and it
is known as Doolittle's method. If we choose to have the diagonal elements of
U all equal 1, the associated compact method for calculating A = LU is called
Crout's method. There is only a multiplying diagonal matrix to distinguish it
from Doolittle's method.

3.2 Doolittle's Method

Doolittle's method is the LU factorization of A in which the diagonal elements
of the lower triangular matrix L have unit value. The procedure is:
1. Create the matrices A, X, B, where A is the coefficient matrix, X is the
vector of variables and B is the vector of constants.
2. Let A = LU, where L is the lower triangular matrix and U is the upper
triangular matrix, and assume that the diagonal entries of L are equal to 1.
3. Let Ly = B and solve for the y's by forward substitution.
4. Let Ux = y and solve for the variable vector x by back substitution.
EXAMPLE:

x1 + x2 + x3 = 5

x1 + 2x2 + 2x3 = 6

x1 + 2x2 + 3x3 = 8

\[
A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{pmatrix}, \qquad
X = \begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix}, \qquad
B = \begin{pmatrix} 5 \\ 6 \\ 8 \end{pmatrix}
\]
Let A = LU:
\[
\begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 2 \\ 1 & 2 & 3 \end{pmatrix}
=
\begin{pmatrix} 1 & 0 & 0 \\ a & 1 & 0 \\ b & c & 1 \end{pmatrix}
\begin{pmatrix} d & e & f \\ 0 & g & h \\ 0 & 0 & i \end{pmatrix}
=
\begin{pmatrix} d & e & f \\ ad & ae + g & af + h \\ bd & be + cg & bf + ch + i \end{pmatrix}
\]
Comparing entries:

d = 1, e = 1, f = 1

ad = 1, ae + g = 2, af + h = 2, so a = 1, g = 1, h = 1

bd = 1, be + cg = 2, bf + ch + i = 3, so b = 1, c = 1, i = 1

Let Ly = B:
\[
\begin{pmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}
\begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \end{pmatrix}
=
\begin{pmatrix} 5 \\ 6 \\ 8 \end{pmatrix}
\]

Y1 = 5

Y1 + Y2 = 6, so Y2 = 1

Y1 + Y2 + Y3 = 8, so Y3 = 2

Let Ux = y:
\[
\begin{pmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \\ X_3 \end{pmatrix}
=
\begin{pmatrix} 5 \\ 1 \\ 2 \end{pmatrix}
\]

X3 = 2

X2 + X3 = 1, so X2 = −1

X1 + X2 + X3 = 5, so X1 = 4

\[
X = \begin{pmatrix} 4 \\ -1 \\ 2 \end{pmatrix}
\]
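A sketch of Doolittle's compact method in Python (unit diagonal on L, as
above; no pivoting is attempted):

```python
import numpy as np

def doolittle(A):
    """Compact Doolittle factorization A = LU with unit diagonal on L."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # Row k of U: u_kj = a_kj - sum_{s<k} l_ks u_sj.
        U[k, k:] = A[k, k:] - L[k, :k] @ U[:k, k:]
        # Column k of L: l_ik = (a_ik - sum_{s<k} l_is u_sk) / u_kk.
        L[k + 1:, k] = (A[k + 1:, k] - L[k + 1:, :k] @ U[:k, k]) / U[k, k]
    return L, U

L, U = doolittle([[1, 1, 1], [1, 2, 2], [1, 2, 3]])
print(L)   # unit lower triangular
print(U)   # upper triangular; L @ U reproduces A
```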

3.2.1 Positive Semidefinite Matrices

A positive semidefinite matrix is defined as a symmetric matrix with
non-negative eigenvalues. The original definition is that a matrix M ∈ L(V) is
positive semidefinite iff
1. M is symmetric, and
2. v^T M v ≥ 0 for all v ∈ V.
If the matrix is symmetric and v^T M v > 0 for all nonzero v ∈ V, then it is
called positive definite. When the matrix satisfies the opposite inequality it is
called negative definite.
The two definitions of a positive semidefinite matrix turn out to be equivalent.

3.2.2 Equivalent definitions of positive semidefinite matrices

Theorem 3.2.1 For a symmetric n × n matrix M ∈ L(V), the following are
equivalent.
1. v^T M v ≥ 0 for all v ∈ V.
2. All the eigenvalues of M are non-negative.
3. There exists a matrix B such that B^T B = M.
4. M is the Gram matrix of some vectors v_1, ..., v_n of a vector space, i.e.,
M_{ij} = v_i^T v_j for all i, j.

Proof. 1 ⇒ 2: Say λ is an eigenvalue of M. Then there exists an eigenvector
v ∈ V, v ≠ 0, such that Mv = λv. So 0 ≤ v^T M v = λ v^T v. Since v^T v is
positive for every v ≠ 0, λ is non-negative.
2 ⇒ 3: Since the matrix M is symmetric, it has a spectral decomposition
\[
M = \sum_i \lambda_i x_i x_i^T
\]
Define y_i = \sqrt{\lambda_i}\, x_i. This definition is possible because the
λ_i's are non-negative. Then
\[
M = \sum_i y_i y_i^T
\]
Define B to be the matrix whose i-th row is y_i^T. Then it is clear that
B^T B = M. From this construction, the rows of B are orthogonal. In general,
any matrix of the form B^T B is positive semidefinite, and the matrix B need
not have orthogonal rows (it can even be rectangular).
But this representation is not unique, and there always exists such a matrix B
for M with B^T B = M. This decomposition is unique if B is itself positive
semidefinite; the positive semidefinite B with B^T B = M is called the square
root of M.

3.3 Cholesky Method

A matrix decomposition or matrix factorization is a factorization of a matrix
into a product of matrices. There are many different matrix decompositions;
one of them is the Cholesky decomposition.
The Cholesky decomposition or Cholesky factorization is a decomposition of a
Hermitian, positive-definite matrix into the product of a lower triangular
matrix and its conjugate transpose. The Cholesky decomposition is roughly
twice as efficient as the LU decomposition for solving systems of linear
equations.
The Cholesky decomposition of a Hermitian positive-definite matrix A is a
decomposition of the form A = LL*, where L is a lower triangular matrix with
real and positive diagonal entries, and L* denotes the conjugate transpose of
L. Every Hermitian positive-definite matrix (and thus also every real-valued
symmetric positive-definite matrix) has a unique Cholesky decomposition. In
the real symmetric case this reads
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
= \begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
\begin{pmatrix} l_{11} & l_{21} & l_{31} \\ 0 & l_{22} & l_{32} \\ 0 & 0 & l_{33} \end{pmatrix}
= LL^T
\]
Every positive definite matrix A can be decomposed into a product of a
unique lower triangular matrix L and its transpose: A = LL^T.
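A sketch of the Cholesky factorization in Python; the formulas follow from
equating entries of A and LL^T, and the example matrix is an arbitrary
symmetric positive-definite one (NumPy's built-in np.linalg.cholesky does the
same job):

```python
import numpy as np

def cholesky(A):
    """Cholesky factorization A = L L^T for a symmetric positive-definite A."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]
            if i == j:
                L[i, i] = np.sqrt(s)       # diagonal entries are real and positive
            else:
                L[i, j] = s / L[j, j]
    return L

A = np.array([[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]])
L = cholesky(A)
print(np.allclose(L @ L.T, A))             # True
```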

3.4 Crout's Method

Crout's method is another way of solving a system of linear equations of the
form Ax = b, in which the matrix A is decomposed into a product of a lower
triangular matrix L and an upper triangular matrix U with unit diagonal, that
is, A = LU. Explicitly, we can write it as
\[
\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{n1} & a_{n2} & a_{n3} & \cdots & a_{nn}
\end{pmatrix}
=
\begin{pmatrix}
l_{11} & 0 & 0 & \cdots & 0 \\
l_{21} & l_{22} & 0 & \cdots & 0 \\
\vdots & \vdots & \vdots & & \vdots \\
l_{n1} & l_{n2} & l_{n3} & \cdots & l_{nn}
\end{pmatrix}
\begin{pmatrix}
1 & u_{12} & u_{13} & \cdots & u_{1n} \\
0 & 1 & u_{23} & \cdots & u_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{pmatrix}
\]
Therefore, by the LU decomposition, the system of linear equations Ax = b
can be solved in three steps:
1. Construct the lower triangular matrix L and the upper triangular matrix U.
2. Using forward substitution, solve Ly = b.
3. Solve Ux = y by backward substitution.
We further elaborate the process by considering a 3 × 3 matrix A. We consider
solving the system of equations of the form Ax = b, where
\[
A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}, \qquad
x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}, \qquad
b = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}
\]
The matrix A is factorized as a product of the two matrices L (lower
triangular) and U (upper triangular) as follows:
\[
\begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
\begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix}
=
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\]
\[
\Rightarrow
\begin{pmatrix}
l_{11} & l_{11} u_{12} & l_{11} u_{13} \\
l_{21} & l_{21} u_{12} + l_{22} & l_{21} u_{13} + l_{22} u_{23} \\
l_{31} & l_{31} u_{12} + l_{32} & l_{31} u_{13} + l_{32} u_{23} + l_{33}
\end{pmatrix}
=
\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}
\]
This implies
\[
l_{11} = a_{11}, \quad l_{21} = a_{21}, \quad l_{31} = a_{31}
\]
\[
l_{11} u_{12} = a_{12} \Rightarrow u_{12} = \frac{a_{12}}{l_{11}} = \frac{a_{12}}{a_{11}}
\]

\[
l_{11} u_{13} = a_{13} \Rightarrow u_{13} = \frac{a_{13}}{l_{11}} = \frac{a_{13}}{a_{11}}
\]
\[
l_{21} u_{12} + l_{22} = a_{22} \Rightarrow l_{22} = a_{22} - l_{21} u_{12}
\]
\[
l_{21} u_{13} + l_{22} u_{23} = a_{23} \Rightarrow u_{23} = \frac{1}{l_{22}}\,(a_{23} - l_{21} u_{13})
\]
\[
l_{31} u_{12} + l_{32} = a_{32} \Rightarrow l_{32} = a_{32} - l_{31} u_{12}
\]
\[
l_{31} u_{13} + l_{32} u_{23} + l_{33} = a_{33} \Rightarrow l_{33} = a_{33} - l_{31} u_{13} - l_{32} u_{23}
\]
Once all the values of the l_{ij}'s and u_{ij}'s are obtained, we can write
Ax = b as LUx = b. Let Ux = y; then Ly = b:
\[
\begin{pmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{pmatrix}
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}
\Rightarrow
\begin{pmatrix} l_{11} y_1 \\ l_{21} y_1 + l_{22} y_2 \\ l_{31} y_1 + l_{32} y_2 + l_{33} y_3 \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix}
\]
\[
\Rightarrow y_1 = \frac{b_1}{l_{11}}, \quad y_2 = \frac{1}{l_{22}}(b_2 - l_{21} y_1), \quad y_3 = \frac{1}{l_{33}}(b_3 - l_{31} y_1 - l_{32} y_2)
\]
By forward substitution we obtain y; then we solve Ux = y:
\[
\begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}
\]
By back substitution we get
\[
x_3 = y_3, \quad x_2 = y_2 - u_{23} x_3, \quad x_1 = y_1 - u_{12} x_2 - u_{13} x_3
\]
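The 3 × 3 formulas above generalize to any order. A sketch of Crout's method
in Python, with the unit diagonal now on U, in contrast to Doolittle's method:

```python
import numpy as np

def crout(A):
    """Crout factorization A = LU with unit diagonal on U."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for k in range(n):
        # Column k of L: l_ik = a_ik - sum_{s<k} l_is u_sk.
        L[k:, k] = A[k:, k] - L[k:, :k] @ U[:k, k]
        # Row k of U: u_kj = (a_kj - sum_{s<k} l_ks u_sj) / l_kk.
        U[k, k + 1:] = (A[k, k + 1:] - L[k, :k] @ U[:k, k + 1:]) / L[k, k]
    return L, U

L, U = crout([[1, 1, 1], [1, 2, 2], [1, 2, 3]])
print(L)    # lower triangular
print(U)    # unit upper triangular; L @ U reproduces A
```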
3.5 Tridiagonal Systems

The matrix A = [a_{ij}] is tridiagonal if a_{ij} = 0 for |i − j| > 1. This gives
the form
\[
A =
\begin{pmatrix}
a_1 & c_1 & & & \\
b_2 & a_2 & c_2 & & \\
& b_3 & a_3 & \ddots & \\
& & \ddots & \ddots & c_{n-1} \\
& & & b_n & a_n
\end{pmatrix}
\]
Tridiagonal matrices occur in a variety of applications. In addition, many
numerical methods for solving boundary value problems for ordinary and
partial differential equations involve the solution of tridiagonal systems.
Virtually all of these applications yield tridiagonal matrices for which the LU
factorization can be formed without pivoting, and for which there is no large
increase in error as a consequence.
By considering the factorization A = LU without pivoting, we find that most
elements of L and U will be zero, and we are led to the following general
formula for the decomposition:
\[
A = LU =
\begin{pmatrix}
\alpha_1 & & & \\
b_2 & \alpha_2 & & \\
& \ddots & \ddots & \\
& & b_n & \alpha_n
\end{pmatrix}
\begin{pmatrix}
1 & \gamma_1 & & \\
& 1 & \gamma_2 & \\
& & \ddots & \ddots \\
& & & 1
\end{pmatrix}
\]
We can multiply out to obtain a way to recursively compute {α_i} and {γ_i}:
\[
a_1 = \alpha_1, \qquad \alpha_1 \gamma_1 = c_1
\]
\[
a_i = \alpha_i + b_i \gamma_{i-1}, \qquad i = 2, \ldots, n
\]
\[
\alpha_i \gamma_i = c_i, \qquad i = 2, 3, \ldots, n - 1
\]
These can be solved to give

\[
\alpha_1 = a_1, \qquad \gamma_1 = \frac{c_1}{\alpha_1}
\]
\[
\alpha_i = a_i - b_i \gamma_{i-1}, \qquad \gamma_i = \frac{c_i}{\alpha_i}, \qquad i = 2, 3, \ldots, n - 1
\]
\[
\alpha_n = a_n - b_n \gamma_{n-1}
\]
To solve LUx = f, let Ux = z and Lz = f. Then
\[
z_1 = \frac{f_1}{\alpha_1}, \qquad z_i = \frac{f_i - b_i z_{i-1}}{\alpha_i}, \qquad i = 2, 3, \ldots, n
\]
\[
x_n = z_n, \qquad x_i = z_i - \gamma_i x_{i+1}, \qquad i = n - 1, n - 2, \ldots, 1.
\]
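These recursions are often called the Thomas algorithm. A sketch in Python
that stores only the three diagonals, assuming the factorization exists without
pivoting as discussed above:

```python
import numpy as np

def solve_tridiagonal(a, b, c, f):
    """Solve a tridiagonal system: a = diagonal, b = subdiagonal (b[0] unused),
    c = superdiagonal (c[n-1] unused), f = right-hand side."""
    n = len(a)
    alpha = np.empty(n)
    gamma = np.empty(n)
    z = np.empty(n)
    alpha[0] = a[0]
    gamma[0] = c[0] / alpha[0]
    z[0] = f[0] / alpha[0]
    for i in range(1, n):
        alpha[i] = a[i] - b[i] * gamma[i - 1]
        if i < n - 1:
            gamma[i] = c[i] / alpha[i]
        z[i] = (f[i] - b[i] * z[i - 1]) / alpha[i]
    x = np.empty(n)
    x[-1] = z[-1]
    for i in range(n - 2, -1, -1):
        x[i] = z[i] - gamma[i] * x[i + 1]
    return x

# Example: -x_{i-1} + 2 x_i - x_{i+1} = 1, a classic boundary value discretization.
n = 5
x = solve_tridiagonal(np.full(n, 2.0), np.full(n, -1.0), np.full(n, -1.0), np.ones(n))
print(x)
```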

Chapter 4

Iterative methods to solve systems of linear equations

As a numerical technique, Gaussian elimination is rather unusual because it is
direct. That is, a solution is obtained after a single application of Gaussian
elimination. Once a "solution" has been obtained, Gaussian elimination offers
no method of refinement. This lack of refinement can be a problem because,
as the previous section shows, Gaussian elimination is sensitive to rounding
error.
Numerical techniques more commonly involve an iterative method. For
example, in calculus we probably studied Newton's iterative method for
approximating the zeros of a differentiable function. In this section we will
look at two iterative methods for approximating the solution of a system of n
linear equations in n variables.

4.1 The Jacobi Method

The first iterative technique is called the Jacobi method, after Carl Gustav
Jacob Jacobi (1804-1851). This method makes two assumptions: (1) that the
system given by

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
...
an1 x1 + an2 x2 + ... + ann xn = bn

has a unique solution, and (2) that the coefficient matrix A has no zeros on its
main diagonal. If any of the diagonal entries a11, a22, ..., ann are zero, then
rows or columns must be interchanged to obtain a coefficient matrix that has
nonzero entries on the main diagonal.
To begin the Jacobi method, solve the first equation for x1, the second
equation for x2, and so on, as follows:
\[
x_1 = \frac{1}{a_{11}}\,(b_1 - a_{12} x_2 - a_{13} x_3 - \cdots - a_{1n} x_n)
\]
\[
x_2 = \frac{1}{a_{22}}\,(b_2 - a_{21} x_1 - a_{23} x_3 - \cdots - a_{2n} x_n)
\]
\[
\vdots
\]
\[
x_n = \frac{1}{a_{nn}}\,(b_n - a_{n1} x_1 - a_{n2} x_2 - \cdots - a_{n,n-1} x_{n-1})
\]
Then make an initial approximation of the solution,

(x1, x2, x3, ..., xn),

and substitute these values of xi into the right-hand side of the rewritten
equations to obtain the first approximation. After this procedure has been
completed, one iteration has been performed. In the same way, the second
approximation is formed by substituting the first approximation's x-values
into the right-hand side of the rewritten equations. By repeated iterations, we
form a sequence of approximations that often converges to the actual
solution. This procedure is illustrated in Example 1.

4.1.1 Example 1: Applying the Jacobi Method

Use the Jacobi method to approximate the solution of the following system of
linear equations.

5x1 − 2x2 + 3x3 = −1
−3x1 + 9x2 + x3 = 2
2x1 − x2 − 7x3 = 3

Continue the iterations until two successive approximations are identical
when rounded to three significant digits.
To begin, write the system in the form
\[
x_1 = -\frac{1}{5} + \frac{2}{5} x_2 - \frac{3}{5} x_3
\]
\[
x_2 = \frac{2}{9} + \frac{3}{9} x_1 - \frac{1}{9} x_3
\]
\[
x_3 = -\frac{3}{7} + \frac{2}{7} x_1 - \frac{1}{7} x_2.
\]
Because we do not know the actual solution, choose

x1 = 0, x2 = 0, x3 = 0

as a convenient initial approximation. So the first approximation is
\[
x_1 = -\frac{1}{5} + \frac{2}{5}(0) - \frac{3}{5}(0) = -0.200
\]
\[
x_2 = \frac{2}{9} + \frac{3}{9}(0) - \frac{1}{9}(0) \approx 0.222
\]
\[
x_3 = -\frac{3}{7} + \frac{2}{7}(0) - \frac{1}{7}(0) \approx -0.429
\]
Continuing this procedure, we obtain the sequence of approximations shown
below:

n    0      1       2       3       4       5       6       7
x1   0.000  -0.200  0.146   0.192   0.181   0.185   0.186   0.186
x2   0.000  0.222   0.203   0.328   0.332   0.329   0.331   0.331
x3   0.000  -0.429  -0.517  -0.416  -0.421  -0.424  -0.423  -0.423

Because the last two columns above are identical, we can conclude that to
three significant digits the solution is

x1 = 0.186, x2 = 0.331, x3 = −0.423

For the system of linear equations given in this example, the Jacobi method is
said to converge. That is, repeated iterations succeed in producing an
approximation that is correct to three significant digits. As is generally true
for iterative methods, greater accuracy would require more iterations.
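A sketch of the Jacobi iteration in Python with NumPy, written as the
splitting x ← D^{-1}(b − Rx) with D the diagonal part of A; it reproduces the
table above:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=5e-4, max_iter=100):
    """Jacobi iteration for Ax = b; assumes nonzero diagonal entries."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.diag(A)                     # diagonal of A
    R = A - np.diag(d)                 # off-diagonal part
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / d        # all components use the old iterate
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
print(jacobi(A, b).round(3))           # approximately [ 0.186  0.331 -0.423]
```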

4.2 The Gauss-Seidel Method

We will now look at a modification of the Jacobi method called the
Gauss-Seidel method, named after Carl Friedrich Gauss (1777-1855) and
Philipp L. Seidel (1821-1896). This modification is no more difficult to use
than the Jacobi method, and it often requires fewer iterations to produce the
same degree of accuracy.
With the Jacobi method, the values of xi obtained in the nth approximation
remain unchanged until the entire (n + 1)th approximation has been
calculated. With the Gauss-Seidel method, on the other hand, we use the new
values of each xi as soon as they are known. That is, once we have determined
x1 from the first equation, its value is then used in the second equation to
obtain the new x2. Similarly, the new x1 and x2 are used in the third equation
to obtain the new x3, and so on. This procedure is demonstrated in the
following example.

4.2.1 Example 2: Applying the Gauss-Seidel Method

We now use the Gauss-Seidel iteration method to approximate the solution of
the system of equations given in Example 1.
The first computation is identical to that of Example 1. That is, using
(x1, x2, x3) = (0, 0, 0) as the initial approximation, we obtain the following
new value for x1:
\[
x_1 = -\frac{1}{5} + \frac{2}{5}(0) - \frac{3}{5}(0) = -0.200
\]
Now that we have a new value for x1, however, use it to compute a new value
for x2. That is,
\[
x_2 = \frac{2}{9} + \frac{3}{9}(-0.200) - \frac{1}{9}(0) \approx 0.156
\]
Similarly, use x1 = −0.200 and x2 = 0.156 to compute a new value for x3.
That is,
\[
x_3 = -\frac{3}{7} + \frac{2}{7}(-0.200) - \frac{1}{7}(0.156) \approx -0.508.
\]
So the first approximation is x1 = −0.200, x2 = 0.156, and x3 = −0.508.
Continued iterations produce the sequence of approximations shown in the
following table:

n    0      1       2       3       4       5
x1   0.000  -0.200  0.167   0.191   0.186   0.186
x2   0.000  0.156   0.334   0.333   0.331   0.331
x3   0.000  -0.508  -0.429  -0.422  -0.423  -0.423

Note that after only five iterations of the Gauss-Seidel method, we achieved
the same accuracy as was obtained with seven iterations of the Jacobi method
in Example 1.
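The only change from the Jacobi sketch is that updated components are used
immediately within each sweep:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=5e-4, max_iter=100):
    """Gauss-Seidel iteration for Ax = b; assumes nonzero diagonal entries."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # New values x[0..i-1] are already in x and are used right away.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

A = [[5, -2, 3], [-3, 9, 1], [2, -1, -7]]
b = [-1, 2, 3]
print(gauss_seidel(A, b).round(3))     # approximately [ 0.186  0.331 -0.423]
```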

Neither of the iterative methods presented in this section always converges.
That is, it is possible to apply the Jacobi method or the Gauss-Seidel method
to a system of linear equations and obtain a divergent sequence of
approximations. In such cases, we say that the method diverges.

4.2.2 Example 3: An example of divergence

Apply the Jacobi method to the system

x1 − 5x2 = −4

7x1 − x2 = 6

using the initial approximation (x1, x2) = (0, 0), and show that the method
diverges.
SOLUTION: As usual, begin by rewriting the given system in the form

x1 = −4 + 5x2

x2 = −6 + 7x1

Then the initial approximation (0, 0) produces

x1 = −4 + 5(0) = −4

x2 = −6 + 7(0) = −6

as the first approximation. Repeated iterations produce the sequence of
approximations shown below:

n    0   1    2    3     4      5      6        7
x1   0   -4   -34  -174  -1224  -6124  -42,874  -214,374
x2   0   -6   -34  -244  -1224  -8574  -42,874  -300,124

For this particular system of linear equations we can determine that the
actual solution is x1 = 1 and x2 = 1. So we can see from the chart that the
approximations given by the Jacobi method become progressively worse
instead of better, and we can conclude that the method diverges.
The problem of divergence in Example 3 is not resolved by using the
Gauss-Seidel method rather than the Jacobi method. In fact, for this
particular system the Gauss-Seidel method diverges more rapidly, as shown
below:

n    0   1     2      3        4           5
x1   0   -4    -174   -6124    -214,374    -7,503,124
x2   0   -34   -1224  -42,874  -1,500,624  -52,521,874

With an initial approximation of (x1, x2) = (0, 0), neither the Jacobi method
nor the Gauss-Seidel method converges to the solution of the system of linear
equations given in Example 3. We will now look at a special type of coefficient
matrix A, called a strictly diagonally dominant matrix, for which it is
guaranteed that both methods will converge.
DEFINITION OF A STRICTLY DIAGONALLY DOMINANT MATRIX: An
n × n matrix A is strictly diagonally dominant if the absolute value of each
entry on the main diagonal is greater than the sum of the absolute values of
the other entries in the same row. That is,
\[
|a_{11}| > |a_{12}| + |a_{13}| + \cdots + |a_{1n}|
\]
\[
|a_{22}| > |a_{21}| + |a_{23}| + \cdots + |a_{2n}|
\]
\[
\vdots
\]
\[
|a_{nn}| > |a_{n1}| + |a_{n2}| + \cdots + |a_{n,n-1}|.
\]

4.3 Sufficient condition for the convergence of the Jacobi and Gauss-Seidel methods

The following theorem states that strict diagonal dominance is sufficient for
the convergence of either the Jacobi method or the Gauss-Seidel method.

Theorem 4.3.1 (Convergence of the Jacobi and Gauss-Seidel methods) If A is
strictly diagonally dominant, then the system of linear equations given by
Ax = b has a unique solution to which the Jacobi method and the
Gauss-Seidel method will converge for any initial approximation.

PROOF: For the Jacobi method we can prove this in the following way.
Suppose Ax = b, with A = [a_{ij}], b = (b_1, b_2, \ldots, b_n)^T and
x = (x_1, x_2, \ldots, x_n)^T. Now
\[
\sum_{j=1}^{n} a_{ij} x_j = b_i, \qquad i = 1, 2, \ldots, n
\]
\[
\Rightarrow a_{ii} x_i + \sum_{j \ne i} a_{ij} x_j = b_i
\Rightarrow x_i = \frac{1}{a_{ii}} \Big[ b_i - \sum_{j \ne i} a_{ij} x_j \Big]
\]
The Jacobi iteration is therefore
\[
x_i^{(m+1)} = \frac{1}{a_{ii}} \Big[ b_i - \sum_{j \ne i} a_{ij} x_j^{(m)} \Big]
\]
This method is valid if a_{ii} ≠ 0.
Claim: the method converges when the matrix is strictly diagonally dominant.
Define the errors
\[
e_i^{(m+1)} = x_i - x_i^{(m+1)}, \qquad e^{(m)} = x - x^{(m)}
\]
Subtracting the iteration formula from the exact relation gives
\[
e_i^{(m+1)} = -\sum_{j \ne i} \frac{a_{ij}}{a_{ii}} \big( x_j - x_j^{(m)} \big)
= -\sum_{j \ne i} \frac{a_{ij}}{a_{ii}}\, e_j^{(m)}
\]
Hence
\[
|e_i^{(m+1)}| = \Big| \sum_{j \ne i} \frac{a_{ij}}{a_{ii}}\, e_j^{(m)} \Big|
\le \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|}\, |e_j^{(m)}|
\le \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|}\, \|e^{(m)}\|_\infty
\]
Let
\[
\nu = \max_i \sum_{j \ne i} \frac{|a_{ij}|}{|a_{ii}|}
\]
Then
\[
|e_i^{(m+1)}| \le \nu \|e^{(m)}\|_\infty, \qquad \forall i = 1, 2, \ldots, n
\]
\[
\Rightarrow \|e^{(m+1)}\|_\infty \le \nu \|e^{(m)}\|_\infty
\le \nu^2 \|e^{(m-1)}\|_\infty \le \cdots \le \nu^{m+1} \|e^{(0)}\|_\infty
\]
If ν < 1, which is exactly strict diagonal dominance, the right-hand side tends
to 0 as m → ∞, i.e., the method converges if the matrix is strictly diagonally
dominant. Similarly, we can prove the result for the Gauss-Seidel method.
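The quantity ν from the proof is easy to compute, so strict diagonal
dominance can be tested directly; a small sketch:

```python
import numpy as np

def jacobi_contraction_factor(A):
    """Return nu = max_i sum_{j != i} |a_ij| / |a_ii|; nu < 1 guarantees convergence."""
    A = np.abs(np.asarray(A, dtype=float))
    d = np.diag(A)
    return np.max((A.sum(axis=1) - d) / d)

print(jacobi_contraction_factor([[7, -1], [1, -5]]))  # 0.2 < 1: dominant (Example 4)
print(jacobi_contraction_factor([[1, -5], [7, -1]]))  # 7 > 1: not dominant; Example 3 diverged
```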

4.3.1 Example 4: Interchanging rows to obtain convergence

Interchange the rows of the system

x1 − 5x2 = −4
7x1 − x2 = 6

to obtain one with a strictly diagonally dominant coefficient matrix. Then
apply the Gauss-Seidel method to approximate the solution to four significant
digits.
SOLUTION: Begin by interchanging the two rows of the given system to
obtain

7x1 − x2 = 6
x1 − 5x2 = −4

Note that the coefficient matrix of this system is strictly diagonally dominant.
Then solve for x1 and x2 as follows:
\[
x_1 = \frac{6}{7} + \frac{1}{7} x_2
\]
\[
x_2 = \frac{4}{5} + \frac{1}{5} x_1
\]
Using the initial approximation (x1, x2) = (0, 0), we obtain the sequence of
approximations shown below:

n    0       1       2       3       4      5
x1   0.0000  0.8571  0.9959  0.9999  1.000  1.000
x2   0.0000  0.9714  0.9992  1.000   1.000  1.000

So we can conclude that the solution is x1 = 1 and x2 = 1.
Do not conclude from the theorem given above that strict diagonal dominance
is a necessary condition for the convergence of the Jacobi or Gauss-Seidel
methods. For instance, the coefficient matrix of the system

−4x1 + 5x2 = 1
x1 + 2x2 = 3

is not a strictly diagonally dominant matrix, and yet both methods converge
to the solution x1 = 1 and x2 = 1 when we use an initial approximation of
(x1, x2) = (0, 0).

4.4 Applications of systems of linear equations

In this section we will:
(1) set up and solve a system of equations to fit a polynomial function to a set
of data points;
(2) set up and solve a system of equations to represent a network.
Systems of linear equations arise in a wide variety of applications. The first
application shows how to fit a polynomial function to a set of data points in
the plane. The second application focuses on networks and Kirchhoff's Laws
of electricity.

4.4.1 Polynomial Curve Fitting

Suppose n points in the xy-plane,

(x1, y1), (x2, y2), ..., (xn, yn),

represent a collection of data, and we are asked to find a polynomial function
of degree n − 1,
\[
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1},
\]
whose graph passes through the specified points. This procedure is called
polynomial curve fitting. If all x-coordinates of the points are distinct, then
there is precisely one polynomial function of degree n − 1 (or less) that fits
the n points.
To solve for the n coefficients of p(x), substitute each of the n points into the
polynomial function and obtain n linear equations in the n variables
a_0, a_1, a_2, ..., a_{n-1}:
\[
a_0 + a_1 x_1 + a_2 x_1^2 + \cdots + a_{n-1} x_1^{n-1} = y_1
\]
\[
a_0 + a_1 x_2 + a_2 x_2^2 + \cdots + a_{n-1} x_2^{n-1} = y_2
\]
\[
\vdots
\]
\[
a_0 + a_1 x_n + a_2 x_n^2 + \cdots + a_{n-1} x_n^{n-1} = y_n
\]
Example 1 demonstrates this procedure with a second-degree polynomial.


EXAMPLE 1 (Polynomial Curve Fitting): Determine the polynomial
p(x) = a_0 + a_1 x + a_2 x^2 whose graph passes through the points (1, 4),
(2, 0) and (3, 12).
SOLUTION: Substituting x = 1, 2, and 3 into p(x) and equating the results to
the respective y-values produces the following system of linear equations in
the variables a_0, a_1, and a_2:

p(1) = a_0 + a_1(1) + a_2(1)^2 = a_0 + a_1 + a_2 = 4
p(2) = a_0 + a_1(2) + a_2(2)^2 = a_0 + 2a_1 + 4a_2 = 0
p(3) = a_0 + a_1(3) + a_2(3)^2 = a_0 + 3a_1 + 9a_2 = 12

The solution of this system is

a_0 = 24, a_1 = −28, and a_2 = 8

so the polynomial function is

p(x) = 24 − 28x + 8x^2.
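The coefficient matrix of such a system is a Vandermonde matrix, so the fit
reduces to a single linear solve. A sketch in Python with NumPy:

```python
import numpy as np

def fit_polynomial(xs, ys):
    """Return coefficients a_0, ..., a_{n-1} of the degree n-1 interpolant."""
    xs = np.asarray(xs, dtype=float)
    V = np.vander(xs, increasing=True)   # rows [1, x_i, x_i^2, ...]
    return np.linalg.solve(V, ys)

coeffs = fit_polynomial([1, 2, 3], [4, 0, 12])
print(coeffs)                            # [ 24. -28.   8.]  ->  p(x) = 24 - 28x + 8x^2
```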

4.4.2 Network Analysis

Networks composed of branches and junctions are used as models in such
fields as economics, traffic analysis, and electrical engineering. In a network
model, we assume that the total flow into a junction is equal to the total flow
out of the junction.
Bibliography

[1] Atkinson, K.E., An Introduction to Numerical Analysis, 2nd Edition, Chapter 8, 1989.

[2] Babajee, D., A Comparison Between Cramer's Rule and a Proposed Cramer-Elimination Method, 2013.

[3] Friedberg, S.H., Insel, A.J., Spence, L.E., Linear Algebra, 4th Edition, Chapter 3, 2013.

[4] Hoffmann, K.M., Kunze, R., Linear Algebra, 2nd Edition, Chapter 1, 1971.

[5] Jain, M.K., Iyengar, S.R.K., Numerical Methods for Scientific and Engineering Computation, 6th Edition, Chapter 2, 2012.

[6] Isaacson, E., Keller, H.B., Analysis of Numerical Methods, 1st Edition, Chapter 2, 1966.

[7] Lipschutz, S., Lipson, M., Schaum's Outlines: Linear Algebra, Chapter 3, 1976.

[8] Mathews, J.H., Numerical Methods for Mathematics, Science, and Engineering, 2nd Edition, Chapter 3, 1992.

[9] Rohn, J., An Existence Theorem for Systems of Linear Equations, Linear and Multilinear Algebra, 1991.

[10] Vandenberghe, L., Cholesky Factorization, 2018.
