
West University of Timisoara

Faculty of Mathematics and Computer Science

Department of Computer Science

Liliana Braescu          Stefan Balint

Nadia Bonchis            Eva Kaslik

NUMERICAL METHODS

Timisoara 2007
Introduction

The substantial reforms of Higher Education systems, in agreement with
the Bologna Process, include: the three-cycle degree structure and qualification
frameworks, quality assurance of higher education, the European credit
transfer and accumulation system, mobility of students and staff, and the
social dimension of the European Higher Education area.
These determined changes in the Romanian Higher Education System and
the elaboration of new curricula and adequate syllabi.

The textbook Numerical Methods was elaborated in agreement with these
requirements, based on the syllabus elaborated by the Department of Computer
Science and approved by the Council of the Faculty of Mathematics
and Computer Science of the West University of Timisoara.

This textbook is written especially for students of the Computer Science
Department, covering all the subjects of the syllabus at the knowledge level
of the fifth-semester student. The lectures are tailored so that they can be
presented within the allocated time.

A distinctive element of this book is its large number of exercises and
algorithms, together with their implementations in Borland C. This last
characteristic is specific to computer science students.

The authors would like to thank Prof. Dr. Gheorghe Bocsan for
reading this manuscript and for his comments and pertinent recommendations.

The authors
Contents

1 Systems of Linear Equations                                              9

1.1 Gaussian Elimination and Pivoting . . . . . . . . . . . . . . .        9
1.2 LU Factorization . . . . . . . . . . . . . . . . . . . . . . . .      16
1.3 Tridiagonal Systems . . . . . . . . . . . . . . . . . . . . . .       24
1.4 Cholesky Factorization . . . . . . . . . . . . . . . . . . . . .      30
1.5 Householder Factorization . . . . . . . . . . . . . . . . . . .       35
1.6 The Jacobi Method . . . . . . . . . . . . . . . . . . . . . . .       42
1.7 The Gauss-Seidel Method . . . . . . . . . . . . . . . . . . . .       46
1.8 Successive Over-relaxation (SOR) Method . . . . . . . . . . .         51

2 Numerical Solutions of Equations and Systems of Nonlinear Equations    59

2.1 Fixed-Point Iterative Method . . . . . . . . . . . . . . . . . .      59
2.2 The Newton Method . . . . . . . . . . . . . . . . . . . . . . .       63
2.3 Quasi-non-expansion Operators . . . . . . . . . . . . . . . . .       65

3 Interpolation, Polynomial Approximation, Spline Functions               71

3.1 The Newton Divided Difference Formulas . . . . . . . . . . . .        72
3.2 The Lagrange Interpolating Polynomial . . . . . . . . . . . . .       85
3.3 Piecewise Polynomial Approximations: Spline Functions. Introduction . 91
3.4 The Spline Polynomial Interpolation . . . . . . . . . . . . . .       94
3.5 The Bernstein Polynomial . . . . . . . . . . . . . . . . . . . .     110

4 Numerical Differentiation                                              117

4.1 The Approximation of the Derivatives by Finite Differences . .       117
4.2 The Approximation of the Derivatives Using Derivatives of the
    Interpolating Polynomials . . . . . . . . . . . . . . . . . . .      120

5 Numerical Integration                                                  121

5.1 The Newton-Cotes Formula, Trapezoidal Rule, Simpson Formula .        122
5.2 Gaussian Integration Formulas . . . . . . . . . . . . . . . . .      128

6 Differential Equations, Initial-Value Problems                         133

6.1 Finite Difference Method for a Numerical Solution of Initial-
    Value Problems (IVP) . . . . . . . . . . . . . . . . . . . . .       134
6.2 The Taylor Method for a Numerical Solution of IVP . . . . . .        137
6.3 The Runge-Kutta Method of the Second Order . . . . . . . . .         140
6.4 The Runge-Kutta Method of the Third Order and Fourth Order .         147
6.5 The Adams-Bashforth and Adams-Moulton Methods . . . . . .            150
6.6 The Predictor-corrector Method . . . . . . . . . . . . . . . .       155
6.7 The Finite Differences Method for a Numerical Solution of a
    Limit Linear Problem . . . . . . . . . . . . . . . . . . . . .       159
6.8 The Collocation Method and the Least Squares Method . . . .          165
Chapter 1
Systems of Linear Equations

1.1 Gaussian Elimination and Pivoting


Definition 1.1.1. A linear system of n algebraic equations in n unknowns
is a system of the form:

    a11 x1 + a12 x2 + . . . + a1n xn = b1
    a21 x1 + a22 x2 + . . . + a2n xn = b2
    .....................................                         (1.1.1)
    an1 x1 + an2 x2 + . . . + ann xn = bn

in which aij, bi are given real numbers, i = 1 . . . n, j = 1 . . . n, and
x1, x2, . . . , xn are the real unknowns.

Definition 1.1.2. A solution of the system (1.1.1) is an ordered system
of numbers x1, x2, . . . , xn which, replaced into the system (1.1.1), verify the
equations; it is regarded as a vector x = (x1, x2, . . . , xn)T from Rn.

Remark 1.1.1. The system (1.1.1) can be written in the following matrix
form:
    Ax = b                                                        (1.1.2)
where: A = (aij) i=1,n; j=1,n, b = (b1, b2, . . . , bn)T, x = (x1, x2, . . . , xn)T.

Theorem 1.1.1. (Cramer)

If det(A) ≠ 0 (i.e. A is non-singular), then the system has a unique solution
given by:
    x = A−1 b                                                     (1.1.3)

Definition 1.1.3. The system (1.1.1) is called of Cramer type if det(A) ≠ 0.

Comment 1.1.1. In the case of a Cramer-type system, if n is large then
the determination of the solution using formula (1.1.3) is difficult and
leads to many errors, because the computation of the inverse matrix A−1
requires a large number of operations.

Comment 1.1.2. Another method for solving system (1.1.1) is to
obtain from it an equivalent upper-triangular system U x = y (all elements
situated under the main diagonal are null: uij = 0 for i > j).

Theorem 1.1.2. (Gauss Elimination with Back Substitution)

If A is an n × n non-singular matrix, then there exists a system U x = y,
equivalent to Ax = b, where U is an upper-triangular matrix with ukk ≠ 0.
After U and y are constructed, back substitution can be used to solve U x = y
for x.

The Gauss method with pivot for computing a numerical solution of
a linear Cramer-type system consists of two stages (the first stage has n − 1
steps, and the second has n steps), and leads to an exact solution of the
system (if the truncation errors are negligible).

Stage 1
In the first stage we construct a new system having the same solution as
the initial system (1.1.1), whose matrix is upper-triangular. This
stage consists of the n − 1 steps described below.
In the first step the unknown x1 is eliminated from equations 2, 3, 4, . . . , n.
This step is realized by successively multiplying the first equation by

    m21 = −a21/a11,   m31 = −a31/a11,   . . . ,   mn1 = −an1/a11

and adding the equations obtained in this way to equations 2, 3, 4, . . . , n.
The new system looks like:

    a(1)11 x1 + a(1)12 x2 + . . . + a(1)1n xn = b(1)1
                a(2)22 x2 + . . . + a(2)2n xn = b(2)2
                ........................                          (1.1.4)
                a(2)n2 x2 + . . . + a(2)nn xn = b(2)n

where the superscript 1 indicates that the first equation remains unchanged,
the superscript 2 indicates that the first elimination has been carried out,
and:

    a(1)ij = aij,   b(1)i = bi   for i, j = 1, 2, . . . , n

    a(2)ij = a(1)ij + mi1 · a(1)1j,   b(2)i = b(1)i + mi1 · b(1)1   for i, j = 2, 3, . . . , n.
At the second step the first equation is ignored, and the unknown x2
is eliminated from equations 3, 4, . . . , n by successively multiplying the
second equation by mi2 = −a(2)i2 / a(2)22, i = 3, . . . , n, and adding the
equation obtained in this way to equations 3, 4, . . . , n. After this step we
obtain the following system:

    a(1)11 x1 + a(1)12 x2 + a(1)13 x3 + . . . + a(1)1n xn = b(1)1
                a(2)22 x2 + a(2)23 x3 + . . . + a(2)2n xn = b(2)2
                            a(3)33 x3 + . . . + a(3)3n xn = b(3)3      (1.1.5)
                            ..............................
                            a(3)n3 x3 + . . . + a(3)nn xn = b(3)n

where the superscript 3 indicates that the second elimination has been carried
out and:

    a(3)ij = a(2)ij + mi2 · a(2)2j,   b(3)i = b(2)i + mi2 · b(2)2,   mi2 = −a(2)i2 / a(2)22   for i, j = 3, 4, . . . , n.

At the k-th step the first k − 1 equations are ignored, and the unknown
xk is eliminated from equations k + 1, . . . , n by multiplying equation k by
the ratios mik = −a(k)ik / a(k)kk, and adding the equation obtained in this
way to equation i. The obtained system is:

    a(1)11 x1 + a(1)12 x2 + a(1)13 x3 + . . . + a(1)1k xk + a(1)1,k+1 xk+1 + . . . + a(1)1n xn = b(1)1
                a(2)22 x2 + a(2)23 x3 + . . . + a(2)2k xk + a(2)2,k+1 xk+1 + . . . + a(2)2n xn = b(2)2
                            a(3)33 x3 + . . . + a(3)3k xk + a(3)3,k+1 xk+1 + . . . + a(3)3n xn = b(3)3
                            ························
                                        a(k)kk xk + a(k)k,k+1 xk+1 + . . . + a(k)kn xn = b(k)k
                                              a(k+1)k+1,k+1 xk+1 + . . . + a(k+1)k+1,n xn = b(k+1)k+1
                                              ························
                                              a(k+1)n,k+1 xk+1 + . . . + a(k+1)n,n xn = b(k+1)n
                                                                  (1.1.6)
where:

    a(k+1)ij = a(k)ij + mik · a(k)kj,   b(k+1)i = b(k)i + mik · b(k)k,   mik = −a(k)ik / a(k)kk   for i, j = k + 1, . . . , n.

The last step of the first stage is step n − 1, in which the unknown
xn−1 is eliminated from equation n. These n − 1 steps give the system:

    a(1)11 x1 + a(1)12 x2 + a(1)13 x3 + . . . + a(1)1k xk + . . . + a(1)1n xn = b(1)1
                a(2)22 x2 + a(2)23 x3 + . . . + a(2)2k xk + . . . + a(2)2n xn = b(2)2
                            a(3)33 x3 + . . . + a(3)3k xk + . . . + a(3)3n xn = b(3)3
                            ·····················
                                        a(k)kk xk + . . . + a(k)kn xn = b(k)k          (1.1.7)
                                        ·····················
                                                            a(n)nn xn = b(n)n

Stage 2
In the second stage we solve the obtained system (1.1.7) by back substitution.
From the last equation we obtain:

    xn = b(n)n / a(n)nn.                                          (1.1.8)

Introducing the computed xn in equation n − 1 we obtain the unknown
xn−1. Introducing xn and xn−1 in equation n − 2 we obtain the unknown
xn−2, etc.; the process is repeated until we are left with one equation and one
unknown, x1. The unknowns xn−1, xn−2, . . . , x1 are given by the formula:

    xk = (b(k)k − Σ_{j=k+1}^{n} a(k)kj · xj) · 1/a(k)kk,   k = n − 1, n − 2, . . . , 1.   (1.1.9)

Remark 1.1.2. The method described above (Gauss with trivial pivot) assumes
that at each step the element a(k)kk from the main diagonal of the system
(1.1.7) is different from zero. These coefficients are called pivots or pivotal
elements. If there is a pivotal element equal to zero then the procedure breaks
down, since mik cannot be defined. To ensure that zero pivots are not used
to compute the multipliers at step k, a search is made in column k
for nonzero elements in rows k + 1, . . . , n. If some entry in a row i > k
is nonzero, then we switch rows i and k. This interchange of rows does
not change the solution of the system. Such an interchange is always possible
because A is non-singular (if all elements in column k from the pivot down,
including the pivot, are zero, then the matrix is singular).

Because the computer uses fixed-precision arithmetic, it is possible that
a small error is introduced each time an arithmetic operation is
performed. Hence, the trivial pivoting strategy in Gaussian elimination can
lead to significant error in the solution of a linear system of equations.

Remark 1.1.3. The purpose of the pivoting strategy is to move the entry
of greatest magnitude to the main diagonal and then use it to eliminate the
remaining entries in the column. This implies an interchange between row
k and the row which contains the element of largest magnitude in the
column. This modified method is called the Gauss method with partial
pivot.

Remark 1.1.4. In the Gauss method with partial pivot we can further reduce
the errors by choosing at each step a pivot given by the element of largest
magnitude in the column and row simultaneously. This is called the Gauss
method with total pivot.

Remark 1.1.5. The Gauss method with pivot has the following matrix
representation:
    A(n−1) = M · A                                                (1.1.10)
where A(n−1) is the matrix of the system (1.1.7) and M is the matrix:

        ( 1    0    0   . . .  0 )
        ( m21  1    0   . . .  0 )
    M = ( ...  ...  ... . . . ...)                                (1.1.11)
        ( mn1  mn2  mn3 . . .  1 )

Remark 1.1.6. Denoting A(n−1) = U, M−1 = L and supposing that the
system (1.1.1) is compatible determinate, the equality (1.1.10) becomes:

    A = L · U.                                                    (1.1.12)

This shows that the Gauss method with pivot leads to the existence of
the factorization A = L · U, where L and U are lower-triangular and
upper-triangular matrices, respectively.

Exercises
Solve the following systems using the Gauss method:

a) with pivot

     2x − 4y + 2z = 20
      x − 4y −  z = 2
    − x +  y +  z = −2

b) with partial pivot

        − 8y − 2z = −1
    6x + 4y       = 4
    8x      − 6z  = 7

c) with total pivot

      x −  y − 2z = 2
    −2x − 4y +  z = 8
      x +  y + 2z = 2

The algorithm of implementation of the Gauss method is:

for k = 1 . . . n − 1
    for i = k + 1 . . . n
        m = aik / akk
        for j = k + 1 . . . n
            aij = aij − m · akj
        bi = bi − m · bk
xn = bn / ann
for i = n − 1 . . . 1
    xi = (bi − Σ_{j=i+1}^{n} aij · xj) · 1/aii

Input data:
- n - dimension of the space

- (aij) i=1,...,n; j=1,...,n - matrix of the system

- (bi) i=1,...,n - the column vector from the right-hand side

Output data:
- (xi) i=1,...,n - the solution of the system Ax = b

Implementation in Borland C language:

#include<stdio.h>
#include<math.h>
#include<malloc.h>
float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int imin, int imax);
void CitireMat(float **a, int n, int m);
void CitireVect(float *a, int init, int n);
void ScriereMat(float **a, int n, int m);
void ScriereVect(float *a, int init, int n);
void gauss(float **a, float *b, int n);
void main()
{
float **a, *b;
int n;
printf("n= "); scanf("%d", &n);
a=Matrice(1,n,1,n);
b=Vector(0,n);
CitireMat(a,n,n);
ScriereMat(a,n,n);
CitireVect(b,1,n);
ScriereVect(b,1,n);
printf("SOLUTION\n");
gauss(a,b,n);
}
#define Swap(a,b) {t=a; a=b; b=t;}
void gauss(float **a, float *b, int n)
{
float amax, suma, m, t, *x;
int i, imax, j, k;
x=Vector(0,n);
for(k=1;k<=n;k++)
{
/* partial pivoting: find the entry of largest magnitude in column k */
amax=0.0;
for(i=k;i<=n;i++)
if(amax<fabs(a[i][k]))
{
amax=fabs(a[i][k]);
imax=i;
}
if(amax==0)
{
printf("singular matrix!\n");
return;
}
if(imax!=k)
{
for(j=k;j<=n;j++) Swap(a[imax][j], a[k][j]);
Swap(b[imax], b[k]);
}
/* normalize the pivot row */
m=1.0/a[k][k];
for(j=k;j<=n;j++) a[k][j]*=m;
b[k]*=m;
/* eliminate x_k from the rows below */
for(i=k+1;i<=n;i++)
{
m=a[i][k]/a[k][k];
for(j=k;j<=n;j++) a[i][j]-=a[k][j]*m;
b[i]-=b[k]*m;
}
}
/* back substitution */
x[n]=b[n]/a[n][n];
for(k=n-1;k>=1;k--)
{
suma=0.0;
for(j=k+1;j<=n;j++)
suma+=a[k][j]*x[j];
x[k]=(b[k]-suma)/a[k][k];
}
ScriereVect(x,1,n);
}

1.2 LU Factorization
In the Gaussian elimination method, for solving a Cramer-type system

    Ax = b,                                                       (1.2.1)

the system was reduced to a triangular form and then solved by back
substitution. It is much easier to solve triangular systems. Let us exploit
this idea and assume that a given n × n matrix A can be written as a product
of two matrices L and U, so that

    A = L · U                                                     (1.2.2)

where L is an n × n lower-triangular matrix and U is an n × n upper-triangular
matrix:
        ( λ11  0    0   . . .  0  )
        ( λ21  λ22  0   . . .  0  )
    L = ( ...  ...  ... . . . ... )                               (1.2.3)
        ( λn1  λn2  λn3 . . . λnn )

        ( µ11  µ12  µ13 . . . µ1n )
        ( 0    µ22  µ23 . . . µ2n )
    U = ( ...  ...  ... . . . ... )                               (1.2.4)
        ( 0    0    0   . . . µnn )

Theorem 1.2.1. If the matrix A can be written in the form (1.2.2) then the
system (1.2.1) is decomposed into two triangular systems:

    Ly = b                                                        (1.2.5)

    Ux = y                                                        (1.2.6)

which are equivalent to:

    λ11 y1                               = b1
    λ21 y1 + λ22 y2                      = b2
    ···················                                           (1.2.7)
    λn1 y1 + λn2 y2 + . . . + λnn yn     = bn

    µ11 x1 + µ12 x2 + . . . + µ1n xn     = y1
             µ22 x2 + . . . + µ2n xn     = y2
    ···············                                               (1.2.8)
                              µnn xn     = yn.

Proof: immediate.
Both systems are triangular and therefore easy to solve. What we need
is a procedure to generate the factorization. For this we observe that we
obtain L and U by Gaussian elimination. In fact, we have the following theorem:

Theorem 1.2.2. The solution of the system (1.2.7) is given by:

    y1 = b1 / λ11
                                                                  (1.2.9)
    yi = (bi − Σ_{j=1}^{i−1} λij yj) · 1/λii,   i = 2, 3, . . . , n

and the solution of the system (1.2.8) is given by:

    xn = yn / µnn
                                                                  (1.2.10)
    xi = (yi − Σ_{j=i+1}^{n} µij xj) · 1/µii,   i = n − 1, n − 2, . . . , 1.

Proof: immediate.

Theorem 1.2.3. If A is a non-singular matrix and satisfies the conditions:

    det(A[i]) ≠ 0,   i = 1, 2, . . . , n − 1                      (1.2.11)

then there exist a lower-triangular matrix L and an upper-triangular matrix
U such that the equality (1.2.2) takes place. We denoted by det(A[i]) the
corner determinants up to order n − 1 of the matrix A.

Proof: the matrices:

        ( λ11  0    0   . . .  0  )           ( µ11  µ12  µ13 . . . µ1n )
        ( λ21  λ22  0   . . .  0  )           ( 0    µ22  µ23 . . . µ2n )
    L = ( ...  ...  ... . . . ... )   and U = ( ...  ...  ... . . . ... )
        ( λn1  λn2  λn3 . . . λnn )           ( 0    0    0   . . . µnn )

are considered as having n(n + 1) unknowns. Computing the product L · U and
equating it with A, we obtain a system having n2 equations and n2 + n unknowns.
For the uniqueness of the solution, n elements of the matrix L or U should
be specified.
Remark 1.2.1. The additional conditions on the unknowns can be specified
in the following ways:
• the diagonal elements of the matrix U are µii = 1, known as the Crout
method;
• the diagonal elements of the matrix L are λii = 1, known as the Doolittle
method.

In the case of the Crout factorization we have:

    aij = Σ_{k=1}^{min(i,j)} λik · µkj,   µii = 1,   i, j = 1, . . . , n   (1.2.12)

For j = 1 we obtain:

    ai1 = λi1 · µ11 = λi1,   i = 1, . . . , n                     (1.2.13)

hence, the first column of the matrix L is equal to the first column of the
matrix A.
If in (1.2.12) we take i = 1 then we obtain:

    a1j = λ11 · µ1j,   j = 1, 2, . . . , n                        (1.2.14)

hence,
    µ1j = a1j / a11,   j = 1, 2, . . . , n                        (1.2.15)

which represents the elements of the first row of the matrix U.
Supposing that the first r − 1 columns of the matrix L and the first r − 1
rows of the matrix U are determined, from (1.2.12) we have:

    air = λir + Σ_{k=1}^{r−1} λik · µkr,   i = r, . . . , n

    arj = λrr · µrj + Σ_{k=1}^{r−1} λrk · µkj,   j = r + 1, . . . , n

and hence λir (column r of L) and µrj (row r of U) are obtained in the case
of the Crout factorization.

Exercises
Solve the following systems using LU factorization:

      4x − 2y +  z = 16
a)   −3x −  y + 3z = 3
       x −  y + 3z = 6

      2x + 3y + 3z = 2
b)         5y + 7z = 2
      6x + 9y + 8z = 5
The algorithm of implementation of the LU factorization is:

// Construction of the matrices L and U:
for k = 1 . . . n
    ukk = 1
    lkk = akk − Σ_{s=1}^{k−1} lks · usk
    for i = k + 1 . . . n
        uki = (aki − Σ_{s=1}^{k−1} lks · usi) · 1/lkk
        lik = (aik − Σ_{s=1}^{k−1} lis · usk) · 1/ukk
// Determination of the y vector representing the solution of the system Ly = b:
y1 = b1 / l11
for i = 2 . . . n
    yi = (bi − Σ_{j=1}^{i−1} lij · yj) · 1/lii
// Determination of the x vector representing the solution of the system Ux = y:
xn = yn / unn
for i = n − 1 . . . 1
    xi = (yi − Σ_{j=i+1}^{n} uij · xj) · 1/uii

Input data:
- n - dimension of the space

- (aij) i=1,...,n; j=1,...,n - matrix of the system

- (bi) i=1,...,n - column vector from the right-hand side

Output data:
- (xi) i=1,...,n - the solution of the system Ax = b

Implementation of the above algorithm, in Borland C language, is:

#include<stdio.h>
#include<math.h>
#include<stdlib.h>
#include<malloc.h>
float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int n);
void CitireMat(float **a, int n, int m);
void CitireVect(float *a, int n);
void ScriereMat(float **a, int n, int m);
void ScriereVect(float *a, int n);
void LU(float **a, float *b, int n);
void main(void)
{
float **a, *b;
int n;
printf("n= ");
scanf("%d", &n);
a=Matrice(1,n,1,n);
b=Vector(n);
CitireMat(a,n,n);
ScriereMat(a,n,n);
CitireVect(b,n);
ScriereVect(b,n);
printf("SOLUTION\n");
LU(a,b,n);
}
void CitireMat(float **a, int n, int m)
{
int i,j;
for(i=1; i<=n; i++)
for(j=1; j<=m; j++)
{
printf("[%i][%i]=",i,j);
scanf("%f",&a[i][j]);
}
}
void CitireVect(float *a, int n)
{
int i;
for(i=1; i<=n; i++)
{
printf("[%i]=",i);
scanf("%f", &a[i]);
}
}
void ScriereMat(float **a, int n, int m)
{
int i,j;
for(i=1; i<=n; i++)
{
for(j=1;j<=m;j++)
printf("%g ",a[i][j]);
printf("\n");
}
}
void ScriereVect(float *a, int n)
{
int i;
for(i=1; i<=n; i++)
{
printf("%g",a[i]);
printf("\n");
}
}
float **Matrice(int imin, int imax, int jmin, int jmax)
{
int i, ni=imax-imin+1, nj=jmax-jmin+1;
float **p;
p=(float **) malloc((size_t)(ni*sizeof(float *)));
p-=imin;
p[imin]=(float *) malloc((size_t)(ni*nj*sizeof(float)));
p[imin]-=jmin;
for(i=imin+1;i<=imax;i++) p[i]=p[i-1]+nj;
return p;
}
float *Vector(int n)
{
float *p;
/* one extra element, so that the 1-based indices 1..n are valid */
p=(float *)malloc((size_t)((n+1)*sizeof(float)));
return p;
}
void LU(float **a, float *b, int n)
{
float **u, **l, *y, *x, suma;
int i,k,j;
u=Matrice(1,n,1,n);
l=Matrice(1,n,1,n);
x=Vector(n);
y=Vector(n);
/* Crout factorization: the diagonal of U is equal to 1 */
for(k=1;k<=n;k++)
{
u[k][k]=1;
suma=0;
for(j=1;j<=k-1;j++)
suma=suma+l[k][j]*u[j][k];
l[k][k]=a[k][k]-suma;
for(i=k+1;i<=n;i++)
{
suma=0;
for(j=1;j<=k-1;j++)
suma=suma+l[k][j]*u[j][i];
u[k][i]=(a[k][i]-suma)/l[k][k];
suma=0;
for(j=1;j<=k-1;j++)
suma=suma+l[i][j]*u[j][k];
l[i][k]=a[i][k]-suma;
}
}
/* forward substitution: Ly = b */
y[1]=b[1]/l[1][1];
for(i=2;i<=n;i++)
{
suma=0;
for(j=1;j<=i-1;j++)
suma=suma+l[i][j]*y[j];
y[i]=(b[i]-suma)/l[i][i];
}
/* back substitution: Ux = y */
x[n]=y[n]/u[n][n];
for(i=n-1;i>=1;i--)
{
suma=0;
for(j=i+1;j<=n;j++)
suma=suma+u[i][j]*x[j];
x[i]=(y[i]-suma)/u[i][i];
}
ScriereVect(x,n);
}

1.3 Tridiagonal Systems

Definition 1.3.1. A tridiagonal system is a Cramer system

    Ax = b                                                        (1.3.1)

in which the matrix A is tridiagonal (has nonzero elements only on the main
diagonal and in the positions adjacent to the diagonal), i.e., A has the form:

        ( b1   c2   0    0    . . .             0  )
        ( a2   b2   c3   0    . . .             0  )
        ( 0    a3   b3   c4   . . .             0  )
    A = ( ···  ···  ···  ···  · · ·            ··· )              (1.3.2)
        ( 0    . . .     an−1  bn−1            cn  )
        ( 0    . . .     0     an              bn  )

A tridiagonal system can be solved very efficiently by the factorization method.
Thus, using the Crout factorization, we have the following matrices L and U
corresponding to the given tridiagonal matrix:

        ( β1   0    0    . . .  0     0  )
        ( a2   β2   0    . . .  0     0  )
    L = ( ···  ···  ···  · · ·  ···  ··· )                        (1.3.3)
        ( 0    0    0    . . .  βn−1  0  )
        ( 0    0    0    . . .  an    βn )

        ( 1    ν2   0    . . .  0    0  )
        ( 0    1    ν3   . . .  0    0  )
    U = ( ···  ···  ···  · · ·  ··· ··· )                         (1.3.4)
        ( 0    0    0    . . .  1    νn )
        ( 0    0    0    . . .  0    1  )

Theorem 1.3.1. The equality A = L · U is satisfied if and only if the following
equalities take place:

    β1 = b1                                                       (1.3.5)
    βi · νi+1 = ci+1,   i = 1, . . . , n − 1                      (1.3.6)
    ai · νi + βi = bi,   i = 2, . . . , n                         (1.3.7)

Proof: If A = L · U then:
- multiplying row 1 of the matrix L with column 1 of the matrix
U we obtain β1 · 1; equating this with the element from the first row
and the first column of the matrix A, i.e. b1, we obtain β1 = b1.
- multiplying row i of the matrix L with column i + 1 of the
matrix U we obtain βi · νi+1; equating this with the element from
row i and column i + 1 of the matrix A, i.e. ci+1, we obtain
βi · νi+1 = ci+1.

- multiplying row i of the matrix L with column i of the matrix
U we obtain ai · νi + βi; equating this with the element from row i
and column i of the matrix A, i.e. bi, we obtain ai · νi + βi = bi.

Reciprocally, if the relations (1.3.5), (1.3.6) and (1.3.7) take place then the
equality A = L · U is satisfied.

Let us denote by xij an arbitrary element of the matrix A; we observe that:

• for i = 1 we have: x11 = b1, x12 = c2 and x1j = 0 if j > 2.

• for i = 2, . . . , n − 1 we have: xij = 0 if j ∉ {i − 1, i, i + 1} and
xi,i−1 = ai, xii = bi, xi,i+1 = ci+1.

• for i = n we have: xnj = 0 if j < n − 1 and xn,n−1 = an, xnn = bn.

An arbitrary element λij of the matrix L has the properties:

- for i = 1, λ11 = β1 and λ1j = 0 for j ≠ 1

- λij = 0 for i = 2, . . . , n if j ∉ {i, i − 1}, and λi,i−1 = ai, λii = βi.

An arbitrary element µij of the matrix U is:

- µij = 0 for i = 1, . . . , n − 1 if j ∉ {i, i + 1}, and µii = 1, µi,i+1 = νi+1

- for i = n we have µnj = 0 for j < n and µnn = 1.

An arbitrary element γij of the matrix product L · U is given by:

    γij = Σ_{k=1}^{n} λik · µkj.

For i = 1 and j ≤ 2 we have γ11 = b1 = x11 and γ12 = β1 · ν2 = c2 = x12; the
element γ1j for j > 2 is:

    γ1j = 0 = x1j

For i = 2, . . . , n − 1, γij is:

    γij = λi,i−1 · µi−1,j + λii · µij

- if j ∉ {i − 1, i, i + 1} then µi−1,j = µij = 0 and γij = 0 = xij
- if j = i − 1 then µi−1,i−1 = 1, µi,i−1 = 0 and γi,i−1 = ai = xi,i−1
- if j = i then µi−1,i = νi, µi,i = 1 and γii = ai · νi + βi = bi = xi,i
- if j = i + 1 then µi−1,i+1 = 0, µi,i+1 = νi+1 and γi,i+1 = βi · νi+1 = ci+1 = xi,i+1.

In the same way, it can be proved that γnj = xnj.

Theorem 1.3.2. If the equality A = L · U is satisfied and the numbers βi
verifying (1.3.5), (1.3.6) and (1.3.7) are nonzero, then the solution of the
system (1.3.1), with right-hand side denoted d = (d1, . . . , dn)T, is given by
the relations:

    xn = yn and xi = yi − νi+1 · xi+1,   i = n − 1, n − 2, . . . , 1,   (1.3.8)

in which y1, y2, . . . , yn are given by the equalities:

    y1 = d1/β1 and yi = (di − ai · yi−1)/βi,   i = 2, . . . , n.   (1.3.9)

Proof: If x represents the solution of the equation Ax = d and A = LU,
then LU x = d. Let y = U x; then Ly = d.
From Ly = d and βi ≠ 0 we have y1 = d1/β1 and yi = (di − ai · yi−1)/βi,
i = 2, . . . , n.
From U x = y we have xn = yn and xi = yi − νi+1 · xi+1, i = n − 1, n − 2, . . . , 1.
Proposition 1.3.1. If |b1| > |c2|, |bi| > |ai| + |ci+1| for i = 2, . . . , n − 1 and
|bn| > |an|, then the system:

    β1 = b1
    βi · νi+1 = ci+1,   i = 1, . . . , n − 1                      (1.3.10)
    ai · νi + βi = bi,   i = 2, . . . , n

in the unknowns β1, . . . , βn and ν2, . . . , νn has a unique solution, and βi ≠ 0,
i = 1, . . . , n.

Proof: Outline: The numbers β1, . . . , βn and ν2, . . . , νn are obtained in the
order β1, ν2, β2, ν3, β3, ν4, . . . , νn, βn using the relations βi · νi+1 = ci+1 and
ai · νi + βi = bi.

Definition 1.3.2. A square matrix A = (aij) of order n is strictly diagonally
dominant if its elements verify the relations:

    |aii| > Σ_{j=1, j≠i}^{n} |aij|,   i = 1, 2, . . . , n.        (1.3.11)

Proposition 1.3.2. If the tridiagonal matrix A is strictly diagonally dominant,
then the factorization A = L · U, with L and U of the forms (1.3.3) and
(1.3.4), is possible.

Proof: Consequence of Proposition 1.3.1.
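
Strict diagonal dominance is also easy to test in code. The following small
helper, written here only as an illustration in the style of the book's listings
(the name IsDiagDominant is not from the book), checks condition (1.3.11):

#include<math.h>

/* Returns 1 if the square matrix a (with 1-based indices 1..n) is strictly
   diagonally dominant in the sense of (1.3.11), and 0 otherwise. */
int IsDiagDominant(float **a, int n)
{
int i, j;
float suma;
for(i=1;i<=n;i++)
{
suma=0.0;
for(j=1;j<=n;j++)
if(j!=i) suma+=fabs(a[i][j]);
if(fabs(a[i][i])<=suma) return 0;
}
return 1;
}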

Exercises
1. Solve the following system using the LU method:

     x + 2y          = 3
    3x + 2y + 4z     = 1
         y +  z + 2t = −2
             2z + 3t = 3

The algorithm of implementation of the LU factorization in the case of
tridiagonal matrices is:

// Construction of the matrices L and U:
β1 = b1
γ2 = c2 / β1
for i = 2 . . . n − 1
    βi = bi − ai · γi
    γi+1 = ci+1 / βi
βn = bn − an · γn
// Determination of the vector y as a solution of the system Ly = d:
y1 = d1 / β1
for i = 2 . . . n
    yi = (di − ai · yi−1) / βi
// Determination of the vector x as a solution of the system Ux = y:
xn = yn
for i = n − 1 . . . 1
    xi = yi − γi+1 · xi+1

Input data:
- n - dimension of the space

- (ai) i=2,...,n - the adjacent lower diagonal

- (bi) i=1,...,n - the main diagonal

- (ci) i=2,...,n - the adjacent upper diagonal

- (di) i=1,...,n - the column vector from the right-hand side

Output data:
- (xi) i=1,...,n - the solution of the system Ax = d
Implementation of the above algorithm, in Borland C language, is:

#include<stdio.h>
#include<malloc.h>
float *Vector(int n);
void CitireVect(float *a, int init, int n);
void ScriereVect(float *a, int init, int n);
void Tridiag(float *a, float *b, float *c, float *d, int n);
void main(void)
{
float *a, *b, *c, *d;
int n;
printf("n= ");
scanf("%d", &n);
a=Vector(n);
b=Vector(n);
c=Vector(n);
d=Vector(n);
printf("Enter the elements of the main diagonal:\n");
CitireVect(b,1,n);
ScriereVect(b,1,n);
printf("Enter the elements above the main diagonal:\n");
CitireVect(c,2,n);
ScriereVect(c,2,n);
printf("Enter the elements below the main diagonal:\n");
CitireVect(a,2,n);
ScriereVect(a,2,n);
printf("Enter the right-hand side terms:\n");
CitireVect(d,1,n);
ScriereVect(d,1,n);
printf("SOLUTION\n");
Tridiag(a,b,c,d,n);
}
void CitireVect(float *a, int init, int n)
{
int i;
for(i=init; i<=n; i++)
{
printf("[%i]=",i);
scanf("%f", &a[i]);
}
}
void ScriereVect(float *a, int init, int n)
{
int i;
for(i=init; i<=n; i++)
{
printf("%g",a[i]);
printf("\n");
}
}
void Tridiag(float *a, float *b, float *c, float *d, int n)
{
int i;
float *beta, *gamma, *x, *y;
beta=Vector(n);
gamma=Vector(n);
x=Vector(n);
y=Vector(n);
/* Crout factorization of the tridiagonal matrix */
beta[1]=b[1];
gamma[2]=c[2]/beta[1];
for(i=2;i<=n-1;i++)
{
beta[i]=b[i]-a[i]*gamma[i];
gamma[i+1]=c[i+1]/beta[i];
}
beta[n]=b[n]-a[n]*gamma[n];
/* forward substitution: Ly = d */
y[1]=d[1]/beta[1];
for(i=2;i<=n;i++)
y[i]=(d[i]-a[i]*y[i-1])/beta[i];
/* back substitution: Ux = y */
x[n]=y[n];
for(i=n-1;i>=1;i--)
x[i]=y[i]-gamma[i+1]*x[i+1];
ScriereVect(x,1,n);
}

1.4 Cholesky Factorization

The Cholesky factorization is a method used in the computation of a numerical
solution of Cramer systems which are symmetric and have positive definite
matrices.

Definition 1.4.1. The Cramer system

    Ax = b                                                        (1.4.1)

is called symmetric if the matrix A of the system is symmetric, i.e. A has
the property:
    AT = A.                                                       (1.4.2)

Definition 1.4.2. A square matrix A of order n (symmetric or non-symmetric)
is called positive definite if for any x ∈ Rn, x ≠ 0, the following inequality
is satisfied:
    xT Ax > 0                                                     (1.4.3)

Theorem 1.4.1. (Sylvester)

A matrix A is positive definite if and only if all corner determinants det(A[k]),
k = 1, 2, . . . , n, are strictly positive, where we denoted by A[k] the matrix:

    A[k] = (aij) 1≤i≤k, 1≤j≤k                                     (1.4.4)

Theorem 1.4.2. (Cholesky factorization)

A matrix A is symmetric and positive definite if and only if there exists a
non-singular and lower triangular matrix L having real elements, such that
the following equality is satisfied:

    A = L · LT.                                                   (1.4.5)

Proof: We will prove that, if there exists a non-singular and lower triangular
matrix L such that A = L · LT, then the matrix A is symmetric and positive
definite.
Symmetry: A = L · LT ⇒ AT = (L · LT)T = (LT)T · LT = L · LT = A ⇒ A = AT.

A is positive definite: let x ∈ Rn and consider the number xT Ax. We have:

    xT Ax = <Ax, x> = <L · LT x, x> = <LT x, LT x> = ||LT x||2 > 0, (∀) x ≠ 0,

since L is non-singular and hence LT x = 0 ⇔ x = 0.

In the following, we will determine the elements of the matrix L. We
consider:
         ( λ11  0    0   . . .  0  )
         ( λ21  λ22  0   . . .  0  )
    L =  ( λ31  λ32  λ33 . . .  0  )                              (1.4.6)
         ( ···  ···  ··· · · · ··· )
         ( λn1  λn2  λn3 . . . λnn )
and
         ( λ11  λ21  λ31 . . . λn1 )
         ( 0    λ22  λ32 . . . λn2 )
    LT = ( 0    0    λ33 . . . λn3 )                              (1.4.7)
         ( ···  ···  ··· · · · ··· )
         ( 0    0    0   . . . λnn )

The element from row i and column j of the product L · LT is:

    pij = Σ_{k=1}^{min(i,j)} λik · λjk                            (1.4.8)

and it is equal to aij:

    pij = aij,   i, j = 1, 2, . . . , n                           (1.4.9)

- for i = j = 1 we find:
p11 = λ11^2 = a11, with a11 > 0 because A is positive definite, and hence

    λ11 = √a11,   λ11 > 0;

- for j = 1 and i = 2, . . . , n we find:

    pi1 = λi1 · λ11 = ai1 and hence λi1 = ai1 / λ11;

- for i = j we find:

    pii = Σ_{k=1}^{i} λik^2 = aii and hence λii = √(aii − Σ_{k=1}^{i−1} λik^2),

its existence being assured on the basis of Sylvester's Theorem;

- for fixed j and i = j + 1, j + 2, . . . , n we find:

    pij = Σ_{k=1}^{j} λik · λjk = aij and hence λij = (aij − Σ_{k=1}^{j−1} λik · λjk) · 1/λjj.

In this way, we can conclude that, if A = L · LT then the elements λij of the
matrix L are given by the formulas:

    λ11 = √a11 and λi1 = ai1 / λ11,   i = 2, . . . , n;           (1.4.10)

    λii = √(aii − Σ_{k=1}^{i−1} λik^2),   i = 2, . . . , n;       (1.4.11)

    λij = (aij − Σ_{k=1}^{j−1} λik · λjk) · 1/λjj,   i = j + 1, j + 2, . . . , n.   (1.4.12)

Thus, if the matrix A is symmetric and positive definite, then the formulas
(1.4.10), (1.4.11) and (1.4.12) define a non-singular and lower triangular
matrix L for which A = L · LT.

Theorem 1.4.3. If the matrix A is symmetric and positive definite, then the
solution of the system
    Ax = b                                                        (1.4.13)
is given by the formulas:

    xn = yn / λnn and xi = (yi − Σ_{k=i+1}^{n} λki · xk) · 1/λii,   i = n − 1, n − 2, . . . , 1
                                                                  (1.4.14)
in which y1, . . . , yn are given by:

    y1 = b1 / λ11 and yi = (bi − Σ_{k=1}^{i−1} λik · yk) · 1/λii,   i = 2, 3, . . . , n.   (1.4.15)

Exercises
1. Solve the following systems using the Cholesky factorization:

     x1 +  x2 +   x3 = 2
a)   x1 + 5x2 +  5x3 = 4
     x1 + 5x2 + 14x3 = −5

     x1 + 2x2 +  3x3 = 0
b)  2x1 + 5x2 +   x3 = −12
    3x1 +  x2 + 35x3 = 59

The algorithm for the Cholesky factorization is:

// Construction of the matrix L:
for k = 1 . . . n
    lkk = √(akk − Σ_{s=1}^{k−1} lks^2)
    for i = k + 1 . . . n
        lik = (aik − Σ_{s=1}^{k−1} lis · lks) · 1/lkk
// Determination of the vector y as solution of the system Ly = b:
y1 = b1 / l11
for i = 2 . . . n
    yi = (bi − Σ_{j=1}^{i−1} lij · yj) · 1/lii
// Determination of the vector x as solution of the system LT x = y:
xn = yn / lnn
for i = n − 1 . . . 1
    xi = (yi − Σ_{j=i+1}^{n} lji · xj) · 1/lii
Input data:
- n - dimension of the space

- (aij) i=1,...,n; j=1,...,n - matrix of the system

- (bi) i=1,...,n - column vector from the right-hand side

Output data:
- (xi) i=1,...,n - the solution of the system Ax = b

Implementation of the above algorithm for the Cholesky factorization, in
Borland C language, is:

#include<stdio.h>
#include<math.h>
#include<stdlib.h>
#include<malloc.h>
/* Matrice, Vector, CitireMat, CitireVect, ScriereMat and ScriereVect are
   the helper functions defined in the LU listing of Section 1.2 */
float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int n);
void CitireMat(float **a, int n, int m);
void CitireVect(float *a, int n);
void ScriereMat(float **a, int n, int m);
void ScriereVect(float *a, int n);
void Cholesky(float **a, float *b, int n);
void main(void)
{
float **a, *b;
int n;
printf("n= ");
scanf("%d", &n);
a=Matrice(1,n,1,n);
b=Vector(n);
CitireMat(a,n,n);
ScriereMat(a,n,n);
CitireVect(b,n);
ScriereVect(b,n);
printf("SOLUTION\n");
Cholesky(a,b,n);
}
void Cholesky(float **a, float *b, int n)
{
float **l, *y, *x, suma;
int i,k,j;
l=Matrice(1,n,1,n);
x=Vector(n);
y=Vector(n);
/* construction of the matrix L, formulas (1.4.10)-(1.4.12) */
for(k=1; k<=n; k++)
{
suma=0.0;
for(i=1; i<=k-1; i++)
suma=suma+l[k][i]*l[k][i];
l[k][k]=sqrt(a[k][k]-suma);
for(i=k+1; i<=n; i++)
{
suma=0.0;
for(j=1; j<=k-1; j++)
suma=suma+l[i][j]*l[k][j];
l[i][k]=(a[i][k]-suma)/l[k][k];
}
}
/* forward substitution: Ly = b */
y[1]=b[1]/l[1][1];
for(i=2;i<=n;i++)
{
suma=0.0;
for(j=1;j<=i-1;j++)
suma=suma+l[i][j]*y[j];
y[i]=(b[i]-suma)/l[i][i];
}
/* back substitution: LT x = y */
x[n]=y[n]/l[n][n];
for(i=n-1;i>=1;i--)
{
suma=0.0;
for(j=i+1;j<=n;j++)
suma=suma+l[j][i]*x[j];
x[i]=(y[i]-suma)/l[i][i];
}
ScriereVect(x,n);
}

1.5 Householder Factorization

The Householder factorization is a numerical method for solving symmetric
systems of Cramer type.

Definition 1.5.1. The Cramer system:

    Ax = b                                                        (1.5.1)

is called symmetric if the matrix A is symmetric, i.e. A has the property:

    AT = A.                                                       (1.5.2)

Definition 1.5.2. The Householder factorization of the matrix A means
determining a matrix U having the property that U AU is a symmetric and
tridiagonal matrix T, i.e.
    U AU = T.                                                     (1.5.3)

The Householder factorization is based on the following result:

Proposition 1.5.1. For any symmetric square matrix A of order n,
there exists a vector v = (v1, v2, . . . , vn)T such that the column vector
a1 = Ae1, e1 = (1, 0, . . . , 0)T (a1 represents the first column of the matrix A)
has the property:

    a1 − 2v · <v, a1> / ||v||2 = λ · e1.                          (1.5.4)

Proof: Let v be a vector of the form:

    v = a1 + α · e1                                               (1.5.5)

where α is an unknown real constant; this constant will be determined by
imposing on the vector v = a1 + α · e1 the condition (1.5.4).
Replacing the vector v = a1 + α · e1 in the equality (1.5.4) we find

    ( a1 · [||v||2 − 2(||a1||2 + α · a11)] − 2α · e1 · (||a1||2 + α · a11) ) / ||v||2 = λ · e1   (1.5.6)

and hence we obtain the equality:

    ||v||2 − 2(||a1||2 + α · a11) = 0.                            (1.5.7)

Taking into account the equality:

    ||v||2 = ||a1||2 + 2α · a11 + α2                              (1.5.8)

we have:
    α = ±||a1||.                                                  (1.5.9)

Choosing α = ±||a1|| and v = a1 + α · e1, it is easy to see that the equality
(1.5.4) is verified.

Remark 1.5.1. In the following, we consider the vector v (called the
Householder vector) given by:

    v = a1 + sign(a11) · ||a1|| · e1.                             (1.5.10)

Remark 1.5.2. Proposition 1.5.1 establishes the fact that, for any symmetric
matrix A, the components 2, 3, . . . , n of the vector

    2v · <v, a1> / ||v||2                                         (1.5.11)

coincide with those of the vector a1 (the first column of the matrix A), and
hence the components 2, 3, . . . , n of the vector:

    a1 − 2v · <v, a1> / ||v||2                                    (1.5.12)

are null.

Proposition 1.5.2. For any symmetric matrix A, the matrix P defined by

    P = I − 2 · v · vT / ||v||2                                   (1.5.13)

is symmetric and it has the property that the components 2, 3, . . . , n of the
first column of the matrix P A are null, where we denoted by v the vector
defined by v = a1 + sign(a11) · ||a1|| · e1, i.e. the matrix v · vT is:

             ( v1^2   v1 v2  . . .  v1 vn )
    v · vT = ( v2 v1  v2^2   . . .  v2 vn )                       (1.5.14)
             ( ···    ···    · · ·  ···   )
             ( vn v1  vn v2  . . .  vn^2  )

Proof: The equality PT = P is immediate, and hence P is a symmetric
matrix.
The elements of the first column of the matrix P A are the components of
the vector (1.5.12), and hence its components 2, 3, . . . , n are null.

Definition 1.5.3. We call the Householder matrix of order n − 1 associated
with the matrix A, denoted by Pn−1, a matrix of the form:

    Pn−1 = In−1 − 2 · v · vT / ||v||2                             (1.5.15)

where v = a1^(n−1) + sign(a21) · ||a1^(n−1)|| · e1, with a1^(n−1) the vector
constituted from the last n − 1 elements of the vector a1 (the first column of
the matrix A), e1 = (1, 0, . . . , 0)T of dimension n − 1, and In−1 the unit
matrix of order n − 1.

Proposition 1.5.3. The Householder matrix Pn−1, associated with the symmetric
matrix A, is symmetric and it has the property that the matrix U1 defined by:

         ( 1  0 . . . 0 )
         ( 0            )
    U1 = ( ...   Pn−1   )                                         (1.5.16)
         ( 0            )

is symmetric and verifies:

           ( a11  a12     a13     . . .  a1n    )
           ( α1   a(1)22  a(1)23  . . .  a(1)2n )
    U1 A = ( 0    a(1)32  a(1)33  . . .  a(1)3n )                 (1.5.17)
           ( ···  ···     ···     · · ·  ···    )
           ( 0    a(1)n2  a(1)n3  . . .  a(1)nn )

                     ( a11  α1      0       . . .  0      )
                     ( α1   a(1)22  a(1)23  . . .  a(1)2n )
    A(1) = U1 A U1 = ( 0    a(1)32  a(1)33  . . .  a(1)3n )       (1.5.18)
                     ( ···  ···     ···     · · ·  ···    )
                     ( 0    a(1)n2  a(1)n3  . . .  a(1)nn )

Proof: The symmetry of the Householder matrix Pn−1 results from the
definition. From the symmetry of Pn−1 we obtain the symmetry of U1. Relations
(1.5.17) and (1.5.18) are verified by simple computations.
Remark 1.5.3. The Householder matrix of order n − 2, denoted by Pn−2,
is constructed with the help of the column vector a2^(n−2) formed with the
last n − 2 elements of the second column of the matrix A(1). The corresponding
matrix U2 is defined by:

         ( 1  0  0 . . . 0 )
         ( 0  1  0 . . . 0 )
    U2 = ( 0  0            )                                      (1.5.19)
         ( .. ..   Pn−2    )
         ( 0  0            )

and it has the property:

                        ( a11  α1      0       0       . . .  0      )
                        ( α1   a(1)22  α2      0       . . .  0      )
    A(2) = U2 A(1) U2 = ( 0    α2      a(2)33  a(2)34  . . .  a(2)3n )   (1.5.20)
                        ( ···  ···     ···     ···     · · ·  ···    )
                        ( 0    0       a(2)n3  a(2)n4  . . .  a(2)nn )

Using the matrix Pn−2, we obtained a new row and a new column of the
tridiagonal matrix to which we will reduce the matrix A.
Continuing with n − 2 transformations in total, we obtain the equality:

    U AU = T                                                      (1.5.21)

in which T is a tridiagonal matrix.

Theorem 1.5.1. If U AU = T then the solution of the system Ax = b is
given by:
    x = Uy                                                        (1.5.22)
where y represents the solution of the system

    T y = U b.                                                    (1.5.23)

Proof: Ax = b and U AU = T ⇒ U−1 T U−1 x = b ⇒ T U−1 x = U b. Denoting
y = U−1 x we have T y = U b, and we obtain x = U y.

Exercises
1. Determine the solutions of the following systems, using the Householder
factorization:

    2x1 + 2x2 +  x3 = √2
a)  2x1 −  x2 +  x3 = 5
     x1 +  x2 + 2x3 = 0

     x1 −  x2 + 2x3 + 2x4 = 0
b)  −x1 +  x2 + 3x3       = −1
    2x1 + 3x2 −  x3 + 2x4 = 2
    2x1       + 2x3       = 1
The algorithm for the Householder factorization is:

for i = 1 . . . n − 2
    // We initialize U as the identity matrix
    for l = 1 . . . n
        for m = 1 . . . n
            if m = l then uml = 1
            if m ≠ l then uml = 0
    // We generate the vector v
    norm_a = √( Σ_{j=i+1}^{n} aij^2 )
    ei+1 = 1
    for j = i + 2 . . . n
        ej = 0
    for j = i + 1 . . . n
        vj = aij + sign(ai,i+1) · norm_a · ej
    norm_v = Σ_{j=i+1}^{n} vj^2
    // We generate the matrix U
    for j = i + 1 . . . n
        for k = i + 1 . . . n
            ujk = ujk − 2 · vj · vk / norm_v
    // D = AU
    for m = 1 . . . n
        for l = 1 . . . n
            dml = Σ_{k=1}^{n} amk · ukl
    // A = UD = UAU
    for m = 1 . . . n
        for l = 1 . . . n
            aml = Σ_{k=1}^{n} umk · dkl

Using the above algorithm, we determine the solution y of the system
UAUy = Ub, and after that we obtain the solution of the system Ax = b:

for i = 1 . . . n
    xi = Σ_{j=1}^{n} uij · yj

Input data:
- n - space dimension

- (aij) i=1,...,n; j=1,...,n - matrix of the system

- (bi) i=1,...,n - the column vector from the right-hand side

Output data:
- (xi) i=1,...,n - solution of the system Ax = b

Implementation of the above algorithm for the Householder factorization, in
Borland C language, is:

#include<stdio.h>
#include<stdlib.h>
#include<math.h>
#include<malloc.h>
float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int n);
void CitireMat(float **a, int n, int m);
void CitireVect(float *a, int n);
void ScriereMat(float **a, int n, int m);
void ScriereVect(float *a, int n);
void Householder(float **a, float *b, int n);
void main(void)
{
float **a, *b;
int n;
printf("n= ");
scanf("%d", &n);
a=Matrice(1,n,1,n);
b=Vector(n);
CitireMat(a,n,n);
ScriereMat(a,n,n);
CitireVect(b,n);
ScriereVect(b,n);
printf("TRIDIAGONAL MATRIX T=UAU\n");
Householder(a,b,n);
}
void Householder(float **a, float *b, int n)
{
int i,j,k,m,l;
float **u, **c, xnorm, *e, vnorm, *v;
int sign;
c=Matrice(1,n,1,n);
u=Matrice(1,n,1,n);
e=Vector(n);
v=Vector(n);
for(i=1;i<=n-2;i++)
{
/* u starts as the identity matrix */
for(j=1;j<=n;j++)
for(k=1;k<=n;k++)
{
if(k==j) u[j][k]=1;
else u[j][k]=0;
}
/* the Householder vector v, built from row i of the symmetric matrix a */
xnorm=0;
for(j=i+1;j<=n;j++)
{
e[j]=0;
xnorm=xnorm+a[i][j]*a[i][j];
}
e[i+1]=1;
xnorm=sqrt(xnorm);
vnorm=0;
for(j=i+1;j<=n;j++)
{
if(a[i][i+1]>0) sign=1;
else
{
if(a[i][i+1]<0) sign=-1;
else sign=0;
}
v[j]=a[i][j]+sign*xnorm*e[j];
vnorm=vnorm+v[j]*v[j];
}
for(j=i+1;j<=n;j++)
for(k=i+1;k<=n;k++)
u[j][k]=u[j][k]-2*v[j]*v[k]/vnorm;
/* c = a*u, then a = u*c = u*a*u */
for(m=1;m<=n;m++)
for(l=1;l<=n;l++)
{
c[m][l]=0.0;
for(k=1;k<=n;k++)
c[m][l]+=a[m][k]*u[k][l];
}
for(m=1;m<=n;m++)
for(l=1;l<=n;l++)
{
a[m][l]=0.0;
for(k=1;k<=n;k++)
a[m][l]+=u[m][k]*c[k][l];
}
}
/* a now holds the tridiagonal matrix T = UAU */
ScriereMat(a,n,n);
}
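
The listing above stops at the tridiagonalization T = UAU; the solve step of
Theorem 1.5.1 (T y = U b by the tridiagonal algorithm of Section 1.3, then
x = U y) is not implemented in it. A minimal sketch of this remaining step is
given below; the function name SolveHouseholder and the argument utot (a
matrix assumed to accumulate the product of all the Householder matrices U)
are introduced here only for illustration and are not part of the book's listing:

/* t holds the tridiagonal matrix produced above; utot is assumed to hold
   the accumulated transformation U. A sketch only, under these assumptions. */
void SolveHouseholder(float **t, float **utot, float *b, int n)
{
int i, j;
float *beta, *gama, *ub, *y, *x;
beta=Vector(n); gama=Vector(n);
ub=Vector(n); y=Vector(n); x=Vector(n);
/* right-hand side of the tridiagonal system: ub = U b */
for(i=1;i<=n;i++)
{
ub[i]=0.0;
for(j=1;j<=n;j++) ub[i]+=utot[i][j]*b[j];
}
/* Crout factorization of t, as in Section 1.3 */
beta[1]=t[1][1];
for(i=2;i<=n;i++)
{
gama[i]=t[i-1][i]/beta[i-1];
beta[i]=t[i][i]-t[i][i-1]*gama[i];
}
/* forward and back substitution: y solves T y = U b (in place) */
y[1]=ub[1]/beta[1];
for(i=2;i<=n;i++) y[i]=(ub[i]-t[i][i-1]*y[i-1])/beta[i];
for(i=n-1;i>=1;i--) y[i]=y[i]-gama[i+1]*y[i+1];
/* solution of the original system: x = U y */
for(i=1;i<=n;i++)
{
x[i]=0.0;
for(j=1;j<=n;j++) x[i]+=utot[i][j]*y[j];
}
ScriereVect(x,n);
}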

1.6 The Jacobi Method

The Jacobi method is an iterative method of successive approximations
(iterations) for solving Cramer systems numerically.
The iteration technique is popular for finding roots of equations. Generalized
fixed-point iteration is applied to systems of linear equations in order to
produce accurate results.
Consider the Cramer system:

    Ax = b.                                                       (1.6.1)

We denote by aij the elements of the matrix A, i, j = 1, . . . , n, and we
consider the matrices D, L, U:

        ( a11  0    0   . . .  0  )
        ( 0    a22  0   . . .  0  )
    D = ( 0    0    a33 . . .  0  )                               (1.6.2)
        ( ···  ···  ··· · · · ··· )
        ( 0    0    0   . . . ann )

 
        ( 0    0    0   . . .  0 )
        ( a21  0    0   . . .  0 )
    L = ( a31  a32  0   . . .  0 )                                (1.6.3)
        ( ···  ···  ··· · · · ···)
        ( an1  an2  an3 . . .  0 )

        ( 0   a12  a13  . . .  a1n    )
        ( 0   0    a23  . . .  a2n    )
    U = ( ··· ···  ···  · · ·  ···    )                           (1.6.4)
        ( 0   0    0    . . .  an−1,n )
        ( 0   0    0    . . .  0      )

Remark 1.6.1. The matrix A can be written as:

    A = D + L + U.                                                (1.6.5)

Theorem 1.6.1. If aii ≠ 0 for i = 1, 2, . . . , n, then x(∗) is a solution of the
system (1.6.1) if and only if x(∗) verifies:

    x(∗) = D−1[b − (L + U)x(∗)].                                  (1.6.6)

Proof: If x(∗) is a solution of the system (1.6.1), then Ax(∗) = b. From this,
we obtain successively:

    (L + D + U)x(∗) = b ⇒ Dx(∗) + (L + U)x(∗) = b ⇒

    ⇒ Dx(∗) = b − (L + U)x(∗) ⇒ x(∗) = D−1[b − (L + U)x(∗)].

Reciprocally: if x(∗) verifies (1.6.6) then:

    x(∗) = D−1[b − (L + U)x(∗)],

and hence:

    Dx(∗) = b − (L + U)x(∗) ⇒ (D + L + U)x(∗) = b ⇒ Ax(∗) = b.

Definition 1.6.1. Let x(0) ∈ Rn be a given vector. The sequence of vectors:

    x(k+1) = D−1[b − (L + U)x(k)],   k = 0, 1, 2, . . .           (1.6.7)

is called the Jacobi trajectory of the vector x(0).

Definition 1.6.2. We say that the Jacobi trajectory of the vector x(0)
converges if the sequence x(k+1) defined by (1.6.7) is convergent.

In the case in which the Jacobi trajectory of the vector x(0) converges, this
trajectory will be called the Jacobi sequence of successive approximations.

Theorem 1.6.2. If the Jacobi trajectory of the vector x(0) converges, then the
limit of the Jacobi sequence of successive approximations is a solution of the
system (1.6.1).

Proof: We denote by x(∗) the limit of the Jacobi sequence of successive
approximations. Passing to the limit for k → ∞ in the relation (1.6.7), we
obtain the equality x(∗) = D−1[b − (L + U)x(∗)]. On the basis of Theorem 1.6.1
we have that x(∗) is a solution of the system (1.6.1).

Theorem 1.6.3. The Jacobi trajectory of the vector x(0) converges if and only
if the sequence y(k) defined by:

    y(k+1) = −D−1(L + U)y(k),   k = 1, 2, . . .                   (1.6.8)
    y(0) = x(0) − x(∗)

converges to zero, where x(∗) represents the solution of the system (1.6.1).
The matrix −D−1(L + U) is called the Jacobi matrix.

Proof: We will prove that the vector x(k+1) on the Jacobi trajectory of the
vector x(0) and the vector y(k+1) defined by (1.6.8) verify:

    y(k+1) = x(k+1) − x(∗),   k = 0, 1, 2, . . .                  (1.6.9)

For k = 0 we must prove the equality y(1) = x(1) − x(∗). To this aim, using
(1.6.8) we compute y(1), and we obtain:

    y(1) = −D−1(L + U)y(0) = −D−1(L + U)x(0) + D−1(L + U)x(∗) =

    = x(1) − D−1 b + D−1(L + U)x(∗) = x(1) − D−1[b − (L + U)x(∗)] =
    = x(1) − x(∗).

We suppose now that (1.6.9) is true for k = l:

    y(l+1) = x(l+1) − x(∗),

and we show that it is true for k = l + 1, too, i.e.

    y(l+2) = x(l+2) − x(∗).

Thus, using (1.6.8) we compute y(l+2) and we obtain:

    y(l+2) = −D−1(L + U)y(l+1) = −D−1(L + U)x(l+1) + D−1(L + U)x(∗) =

    = x(l+2) − D−1 b + D−1(L + U)x(∗) = x(l+2) − D−1[b − (L + U)x(∗)] =
    = x(l+2) − x(∗).

In this way, we proved that the relation (1.6.9) is true for any k = 0, 1, 2, . . ..
From (1.6.9), we obtain that if x(k+1) converges then y(k+1) converges,
too. Moreover, according to Theorem 1.6.2 we have lim_{k→∞} x(k+1) = x(∗),
and hence lim_{k→∞} y(k+1) = 0. If y(k+1) converges to zero, from (1.6.9) we
obtain that x(k+1) converges to x(∗).

Theorem 1.6.4. The sequence of vectors defined by:

    y(k+1) = −D−1(L + U)y(k),   k = 1, 2, . . .                   (1.6.10)

converges to zero for any y(0) ∈ Rn if and only if the spectral radius ρ of the
matrix −D−1(L + U) is strictly sub-unitary.

A sufficient condition for the convergence to zero of the sequence y(k+1), for
any y(0), is given by the next theorem, in which the matrix norm is defined as
follows:
    ||A|| = max_{||x||≠0} ||Ax|| / ||x||,

where A is a square matrix of dimension n and x = (x1, x2, ..., xn)T.

Theorem 1.6.5. If the norm of the matrix −D−1(L + U) is strictly sub-unitary,
then for any y(0) ∈ Rn, the sequence y(k+1) defined by (1.6.8) converges to
zero.

Proof: By mathematical induction, the following inequality is proved:

    ||y(k+1)|| ≤ ||−D−1(L + U)||^(k+1) · ||y(0)||,   k = 0, 1, 2, . . .

Remark 1.6.2. If the spectral radius ρ of the matrix −D−1(L + U) is strictly
sub-unitary, then for any x(0) ∈ Rn, the Jacobi sequence of successive
approximations converges to the solution of the system (1.6.1).

Remark 1.6.3. If the norm of the matrix −D−1(L + U) is strictly sub-unitary,
then for any x(0) ∈ Rn, the Jacobi sequence of successive approximations
converges to the solution of the system (1.6.1).

Theorem 1.6.6. The components x1(k+1), . . . , xn(k+1) of the vector x(k+1)
situated on the Jacobi trajectory of the vector x(0) are given by the relations:

    xi(k+1) = (bi − Σ_{j=1, j≠i}^{n} aij · xj(k)) · 1/aii,   i = 1, 2, . . . , n;  k = 0, 1, . . .   (1.6.11)

Exercises
1. Decide whether the Jacobi method can be applied for solving the following
system:

    5x1 − 2x2 + 3x3 = −1
   −3x1 + 9x2 +  x3 = 2
    2x1 −  x2 − 7x3 = 3

The algorithm of the Jacobi method:

for i = 1 . . . n
    xi = 0
repeat
    for i = 1 . . . n
        xi(k+1) = (bi − Σ_{j=1, j≠i}^{n} aij · xj(k)) · 1/aii
until max_{1≤i≤n} |xi(k+1) − xi(k)| < ε

Input data:
- n - space dimension

- (aij) i=1,...,n; j=1,...,n - matrix of the system

- (bi) i=1,...,n - the column vector from the right-hand side

Output data:
- (xi) i=1,...,n - solution of the system Ax = b

- k - number of steps
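
The book collects the Borland C implementations of the Jacobi, Gauss-Seidel
and over-relaxation algorithms at the end of Section 1.8; as the listing there
is cut off, a minimal sketch of the Jacobi step alone is given here. The
function name jacobi and the parameters eps and kmax are illustrative choices,
not taken from the book; the helpers Vector and ScriereVect of Section 1.2 and
math.h (for fabs) are assumed:

/* A sketch of the Jacobi iteration (1.6.11); eps is the tolerance and kmax
   an iteration cap, both illustrative. */
void jacobi(float **a, float *b, int n, float eps, int kmax)
{
float *x, *xnew, suma, dmax;
int i, j, k=0;
x=Vector(n);
xnew=Vector(n);
for(i=1;i<=n;i++) x[i]=0.0;    /* x(0) = 0 */
do
{
dmax=0.0;
for(i=1;i<=n;i++)
{
suma=0.0;
for(j=1;j<=n;j++)
if(j!=i) suma+=a[i][j]*x[j];
xnew[i]=(b[i]-suma)/a[i][i];
if(fabs(xnew[i]-x[i])>dmax) dmax=fabs(xnew[i]-x[i]);
}
for(i=1;i<=n;i++) x[i]=xnew[i];
k++;
} while(dmax>=eps && k<kmax);
printf("k=%d\n",k);
ScriereVect(x,n);
}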

1.7 The Gauss-Seidel Method

The Gauss-Seidel method is an iterative method of successive approximations
(iterations) for solving Cramer systems numerically. The advantage of this
method is that it substantially improves the rate of convergence.
Considering the Cramer system:

    Ax = b,                                                       (1.7.1)

we decompose the matrix A as

    A = L + D + U                                                 (1.7.2)

where: L is a lower triangular matrix, D is a diagonal matrix and U is an
upper triangular matrix.

Theorem 1.7.1. If aii ≠ 0 for i = 1, 2, . . . , n then x(∗) is a solution of the
system (1.7.1) if and only if x(∗) verifies:

    x(∗) = (L + D)−1(b − U x(∗)).                                 (1.7.3)

Proof: If x(∗) is a solution of the system (1.7.1) then Ax(∗) = b. We obtain
successively:

    (L + D)x(∗) + U x(∗) = b ⇒ (L + D)x(∗) = b − U x(∗) ⇒ x(∗) = (L + D)−1(b − U x(∗)).

Reciprocally: if x(∗) verifies (1.7.3) then:

    x(∗) = (L + D)−1(b − U x(∗)).

From this we obtain:

    (L + D)x(∗) = b − U x(∗) ⇒ (L + D)x(∗) + U x(∗) = b ⇒ Ax(∗) = b.

Definition 1.7.1. For a vector x(0) ∈ Rn, the sequence of vectors x(k) defined
by:
    x(k+1) = (L + D)−1(b − U x(k))                                (1.7.4)

is called the Gauss-Seidel trajectory of the vector x(0).

Definition 1.7.2. We say that the Gauss-Seidel trajectory of the vector
x(0) converges if the sequence x(k+1) defined by (1.7.4) converges.

In the case in which the Gauss-Seidel trajectory of the vector x(0) converges,
it is called the Gauss-Seidel sequence of successive approximations.

Theorem 1.7.2. If the Gauss-Seidel trajectory of the vector x(0) converges,
then the limit of the Gauss-Seidel sequence of successive approximations is a
solution of the system (1.7.1).

Proof: We denote by x(∗) the limit of the Gauss-Seidel sequence of successive
approximations. Passing to the limit for k → ∞ in relation (1.7.4) we obtain
the equality x(∗) = (L + D)−1(b − U x(∗)). On the basis of Theorem 1.7.1, we
have that x(∗) is a solution of the system (1.7.1).

Theorem 1.7.3. The Gauss-Seidel trajectory of the vector x(0) converges if
and only if the sequence y(k+1) defined by:

    y(k+1) = −(L + D)−1 U y(k),   k = 1, 2, . . .                 (1.7.5)
    y(0) = x(0) − x(∗)

converges to zero, where we denoted by x(∗) the solution of the system (1.7.1).
The matrix −(L + D)−1 U is called the Gauss-Seidel matrix.

Proof: We will prove that the vector x(k+1) on the trajectory of x(0) and
the vector y(k+1) given by (1.7.5) verify:

    y(k+1) = x(k+1) − x(∗),   k = 0, 1, 2, . . .                  (1.7.6)

For k = 0, the equality y(1) = x(1) − x(∗) must be shown. Thus, computing
y(1) with (1.7.5) we find:

    y(1) = −(L + D)−1 U y(0) = −(L + D)−1 U(x(0) − x(∗)) =

    = −(L + D)−1 U x(0) + (L + D)−1 U x(∗) =
    = x(1) − (L + D)−1 b + (L + D)−1 U x(∗) = x(1) − (L + D)−1(b − U x(∗)) =
    = x(1) − x(∗).

Supposing that (1.7.6) is true for k = l:

    y(l+1) = x(l+1) − x(∗),

we will show that it is true for k = l + 1, i.e.:

    y(l+2) = x(l+2) − x(∗).

We compute y(l+2) using (1.7.5) and we obtain:

    y(l+2) = −(L + D)−1 U y(l+1) = −(L + D)−1 U(x(l+1) − x(∗)) =

    = −(L + D)−1 U x(l+1) + (L + D)−1 U x(∗) =
    = x(l+2) − (L + D)−1 b + (L + D)−1 U x(∗) = x(l+2) − (L + D)−1(b − U x(∗)) =
    = x(l+2) − x(∗).

Thus, we proved that (1.7.6) is true for any k = 0, 1, 2, . . ..

From (1.7.6) we obtain that, if x(k+1) converges then y(k+1) converges,
too. Moreover, from Theorem 1.7.2 we have lim_{k→∞} x(k+1) = x(∗), and hence
lim_{k→∞} y(k+1) = 0. Due to the fact that y(k+1) converges to zero, from (1.7.6)
it results that x(k+1) converges to x(∗).

Theorem 1.7.4. The sequence of vectors:

    y(k+1) = −(L + D)−1 U y(k),   k = 1, 2, . . .                 (1.7.7)

converges to zero for any y(0) ∈ Rn if and only if the spectral radius ρ of the
Gauss-Seidel matrix −(L + D)−1 U is strictly sub-unitary.

A sufficient condition for the sequence y(k+1) to converge to zero, for any
y(0), is given by the next theorem, in which the matrix norm is defined as
follows:
    ||A|| = max_{||x||≠0} ||Ax|| / ||x||,

where A is a square matrix of dimension n and x = (x1, x2, ..., xn)T.

Theorem 1.7.5. If the norm of the matrix −(L + D)−1 U is strictly sub-unitary,
then for any y(0) ∈ Rn, the sequence y(k+1) defined by (1.7.7) converges to
zero.

Proof: Using mathematical induction, the following inequality is proved:

    ||y(k+1)|| ≤ ||−(L + D)−1 U||^(k+1) · ||y(0)||,   k = 0, 1, 2, . . .

Remark 1.7.1. If the spectral radius ρ of the matrix −(L + D)−1 U is strictly
sub-unitary, then for any x(0) ∈ Rn, the Gauss-Seidel sequence of successive
approximations converges to the solution of the system (1.7.1).

Remark 1.7.2. If the norm of the matrix −(L + D)−1 U is strictly sub-unitary,
then for any x(0) ∈ Rn, the Gauss-Seidel sequence of successive approximations
converges to the solution of the system (1.7.1).

Proposition 1.7.1. The points on the Gauss-Seidel trajectory of the vector
x(0) verify:

    x(k+1) = D−1(b − Lx(k+1) − U x(k)),   k = 0, 1, 2, . . .      (1.7.8)

Proof: Considering the equality (1.7.4): x(k+1) = (L + D)−1(b − U x(k)), we
obtain successively:

    (L + D)x(k+1) = b − U x(k) ⇒
    ⇒ Lx(k+1) + Dx(k+1) = b − U x(k) ⇒
    ⇒ x(k+1) = D−1(b − Lx(k+1) − U x(k)).

Consequence 1.7.1. The components x1(k+1), . . . , xn(k+1) of the vector x(k+1)
on the Gauss-Seidel trajectory of the vector x(0) are given by the relations:

    x1(k+1) = (b1 − Σ_{j=2}^{n} a1j · xj(k)) · 1/a11              (1.7.9)

    xi(k+1) = (bi − Σ_{j=1}^{i−1} aij · xj(k+1) − Σ_{j=i+1}^{n} aij · xj(k)) · 1/aii,
              i = 2, . . . , n;  k = 0, 1, . . .                  (1.7.10)

Exercises
1. Check whether the Jacobi and Gauss-Seidel methods can be applied for
solving the following system:

    4x1 +  x2 = −1
    4x1 + 3x2 = −2

The algorithm of the Gauss-Seidel method:

for i = 1 . . . n
    xi = 0
repeat
    for i = 1 . . . n
        xi(k+1) = (bi − Σ_{j=1}^{i−1} aij · xj(k+1) − Σ_{j=i+1}^{n} aij · xj(k)) · 1/aii
until max_{1≤i≤n} |xi(k+1) − xi(k)| < ε

Input data:
- n - space dimension

- (aij) i=1,...,n; j=1,...,n - matrix of the system

- (bi) i=1,...,n - column vector from the right-hand side

Output data:
- (xi) i=1,...,n - solution of the system Ax = b

- k - number of steps
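
In the same illustrative spirit as the Jacobi sketch of Section 1.6 (again,
gauss_seidel, eps and kmax are names chosen here, not the book's), the
Gauss-Seidel step needs only one vector, since the new components are used as
soon as they are computed:

/* A sketch of the Gauss-Seidel iteration (1.7.10); assumes the helpers
   Vector and ScriereVect of Section 1.2 and math.h for fabs. */
void gauss_seidel(float **a, float *b, int n, float eps, int kmax)
{
float *x, suma, xold, dmax;
int i, j, k=0;
x=Vector(n);
for(i=1;i<=n;i++) x[i]=0.0;
do
{
dmax=0.0;
for(i=1;i<=n;i++)
{
suma=0.0;
for(j=1;j<=i-1;j++) suma+=a[i][j]*x[j];   /* already updated: x(k+1) */
for(j=i+1;j<=n;j++) suma+=a[i][j]*x[j];   /* still old: x(k) */
xold=x[i];
x[i]=(b[i]-suma)/a[i][i];
if(fabs(x[i]-xold)>dmax) dmax=fabs(x[i]-xold);
}
k++;
} while(dmax>=eps && k<kmax);
printf("k=%d\n",k);
ScriereVect(x,n);
}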
1.8 Successive Over-relaxation (SOR) Method

The successive over-relaxation method is an iterative method for numerically
solving systems of Cramer type, by successive approximations. The aim of this
method is to substantially improve the rate of convergence of the Gauss-Seidel
method.
Considering the Cramer system:

    Ax = b,                                                       (1.8.1)

we decompose the matrix A as:

    A = L + D + U,                                                (1.8.2)

where L is a lower triangular matrix, D is a diagonal matrix, and U is an
upper triangular matrix.
If aii ≠ 0 for i = 1, 2, . . . , n, then for any x(0) ∈ Rn we consider the
Gauss-Seidel trajectory of the vector x(0) defined by:

    x(k+1) = (L + D)−1(b − U x(k)),   k = 0, 1, 2, . . .          (1.8.3)

The points of this trajectory verify the relations:

    x(k+1) = D−1(b − Lx(k+1) − U x(k)),   k = 0, 1, 2, . . .      (1.8.4)

Definition 1.8.1. The vectors ∆x(k) defined by:

    ∆x(k) = x(k+1) − x(k),   k = 0, 1, 2, . . .                   (1.8.5)

are called corrections.

In terms of corrections, the relations (1.8.4) are written in the form:

    x(k+1) = x(k) + ∆x(k) = x(k) + D−1(b − Lx(k+1) − (U + D)x(k)),   k = 0, 1, 2, . . .   (1.8.6)

Definition 1.8.2. Let ω > 0. The sequence defined by:

    x(k+1) = x(k) + ω∆x(k) = x(k) + ωD−1(b − Lx(k+1) − (U + D)x(k)),   k = 0, 1, 2, . . .   (1.8.7)

is called the trajectory of the vector x(0) obtained by successive relaxations.

Remark 1.8.1. The relations (1.8.7) define the successive relaxation method,
which is called the under-relaxation method if ω < 1 and the over-relaxation
method if ω > 1. For ω = 1, the relation (1.8.7) is reduced to the Gauss-Seidel
successive approximation (1.7.4).

Theorem 1.8.1. The vectors x(k) on the trajectory of x(0) obtained by
successive over-relaxation verify:

    x(k+1) = (L + (1/ω)D)−1 [b − ((1 − 1/ω)D + U) x(k)],   k = 0, 1, 2, . . .   (1.8.8)

Proof: Consider the equality (1.8.7):

    x(k+1) = x(k) + ωD−1(b − Lx(k+1) − (U + D)x(k)),   k = 0, 1, 2, . . .

from which we obtain:

    (1/ω)Dx(k+1) = (1/ω)Dx(k) + b − Lx(k+1) − (U + D)x(k),

    (L + (1/ω)D) x(k+1) = b − ((1 − 1/ω)D + U) x(k),

    x(k+1) = (L + (1/ω)D)−1 [b − ((1 − 1/ω)D + U) x(k)].

Definition 1.8.3. We say that the trajectory of the vector x(0), obtained
by successive over-relaxations, converges if the sequence x(k+1) defined by
(1.8.8) is convergent.

In the case in which the trajectory of the vector x(0) obtained by successive
over-relaxations converges, it is called the sequence of successive
approximations.

Theorem 1.8.2. If the trajectory of the vector x(0), obtained by successive
over-relaxations, converges then the limit of the sequence of successive
approximations is a solution of the system (1.8.1).

Proof: Suppose that the trajectory of the vector x(0), obtained by successive
over-relaxations, converges to the vector x(∗): lim_{k→∞} x(k) = x(∗). Passing
to the limit for k → ∞ in (1.8.8) we obtain:

    x(∗) = (L + (1/ω)D)−1 [b − ((1 − 1/ω)D + U) x(∗)]             (1.8.9)

From here, it results successively:

    (L + (1/ω)D) x(∗) = b − (1 − 1/ω)Dx(∗) − U x(∗),

    Lx(∗) + (1/ω)Dx(∗) = b − Dx(∗) + (1/ω)Dx(∗) − U x(∗),

    (L + D + U)x(∗) = b,

    Ax(∗) = b.

Theorem 1.8.3. The traje tory of the ve tor x(0) ,


obtained by su essive
over-relaxations, onverges if and only if the sequen e y (k) dened by:
 −1   
1 1
y (k+1)
= − L+ D 1− D + U y (k) k = 1, 2,(1.8.10)
...
ω ω
y (0) = x(0) − x(∗)
onverges to zero.
Proof: Using mathemati al indu tion, we obtain that the ve tor x(k+1) on
the traje tory of the ve tor x(0) , and ve tor y (k+1) dened by (1.8.10) verify:
y (k+1) = x(k+1) − x(∗) k = 0, 1, 2, . . . (1.8.11)
Theorem 1.8.4. The sequen e of the ve tors y(k+1) dened by:
 −1   
1 1
y (k+1)
=− L+ D 1− D + U y (k) k = 1, 2, . . . (1.8.12)
ω ω
onverges to zero for any y (0) ∈ IRn if and only if the spe tral radius ρ of the
matrix
 −1   
1 1
− L+ D 1− D + U is stri tly sub-unitary.
ω ω
A su ient ondition for the sequen e y (k+1) to onverge to zero, for any
y (0) , is given by the next theorem in whi h the matrix norm is dened as
follows:
kAxk
kAk = max ,
kxk6=0 kxk

where A is a quadrati matrix of dimension n and x = (x1 , x2 , ..., xn )T .


54 Systems of Linear Equations
−1    
1 1
Theorem 1.8.5. If the norm of the matrix − L+ D 1− D+U
ω ω
is stri tly sub-unitary, then for any y (0) ∈ IRn the sequen e y (k+1) dened by
(1.8.12) tends to zero.

Remark 1.8.2. If the spe tral radius ρ of the matrix −


−1   
L + ω1 D 1 − ω1 D + U
is stri tly sub-unitary, then for any x(0) ∈ IRn , the su essive approximations
sequen e, obtained by su essive over-relaxations, onverges to the solution
of the system (1.8.1).

Remark 1.8.3.
−1   
If the norm of the matrix − L + ω1 D 1 − ω1 D + U
is stri tly sub-unitary, then for any x(0) ∈ IRn , the su essive approximations
sequen e, obtained by su essive over-relaxations, onverges to the solution
of the system (1.8.1).

Theorem 1.8.6. (Kahan)


The su essive relaxation method onverges only if ω is hosen in the interval
0 < ω < 2.

Proof: Outline −1   


Sin e the spe tral radius of the matrix − L + ω1 D 1 − ω1 D + U should
be sub-unitary, the denition of the spe tral radius, and the properties of the
determinants we have: max|λi | ≥ |1 − ω| and |1 − ω| < 1.
Moreover, the onvergen e speed is maximized by sele ting ω su h that the
spe tral radius to be minimum (i.e. ω should have a maximal value).

Theorem 1.8.7. The omponents x(k+1)


1
(k+1)
, . . . , x1 of the ve tor xk+1 on the
traje tory of the ve tor x(0) , obtained by su essive over-relaxations, verify the
relations:
" n
#
(k+1) (k) ω X (k)
x1 = (1 − ω) · x1 + b1 − a1j · xj (1.8.13)
a11 j=1

" i−1 n
#
(k+1) (k) ω X (k+1)
X (k)
xi = (1 − ω) · xi + bi − aij · xj − aij · xj (1.8.14)
aii j=1 j=i

i = 2, . . . , n; k = 0, 1, . . .
Su essive Over-relaxation (SOR) Method 55

Exer ises
De ide if the Ja obi, Gauss-Seidel and su essive over-relaxations methods
methods
 an be applied for solving the followings system:
2x1 + x2 = 3
4x1 + 3x2 = −5
The algorithm for implementation of the su essive over-relaxation:

for i = 1 . . . n
xi = 0
repeat
for i = 1 . . . n " #
i−1 n
ω X X
xk+1
i = (1 − ω) · xki + bi − aij · xk+1
j − aij · xkj
aii j=1 j=i

until max |xk+1


i − xki | < ε
1≤i≤n
Input data:
- n - spa e dimension

- (aij ) i=1,...,n - matrix of the system


j=1,...,n

- (bi )i=1,...,n - the olumn ve tor from the right hand side

- ω

Output data:
- (xi )i=1,...,n - solution of the system Ax = b

- k - step number

Implementation of the algorithms Ja obi, Gauss-Seidel and su essive over-


relaxation, in Borland C language:
#in lude<stdio.h>
#in lude<mallo .h>
#in lude< onio.h>
#in lude <math.h>
oat **Matri e(int imin, int imax, int jmin, int jmax);
oat *Ve tor(int n);
void CitireVe t(oat *a, int n);
void S riereVe t(oat *a, int n);
void CitireMat(oat **a, int n, int m);
56 Systems of Linear Equations

void S riereMat(oat **a, int n, int m);


int gauss_seidel(oat **a, oat *b,int n,oat *xi);
int ja obi(oat **a, oat *b,int n, oat *xi);
int relaxare_su esiva(oat **a, oat *b,int n,oat *xi);
void main(void)
{
oat **a, *b,*xgs, *xja , *xrel,omega;
int i,n, nr_pasi_gs=0,nr_pasi_ja =0,nr_pasi_rel=0;
printf("n="); s anf("%d",&n);
a=Matri e(1,n,1,n);
b=Ve tor(n);
xgs=Ve tor(n);
xja =Ve tor(n);
xrel=Ve tor(n);
CitireMat(a,n,n);
S riereMat(a,n,n);
CitireVe t(b,n);
S riereVe t(b,n);
nr_pasi_gs = gauss_seidel(a,b,n,xgs);
nr_pasi_ja = ja obi(a,b,n,xja );
nr_pasi_rel = relaxare_su esiva(a,b,n,xrel);
printf("-SOLUTIE-\n");
printf("Gauss-Seidel\t\tJa obi\t\t\tRelaxare-su esiva\n");
for(i=1; i<=n; i++)
printf("Xgs[%d℄ = %lf\tXj[%d℄= %lf\t\tXgs[%d℄ = %lf\n", i, xgs[i℄,
i, xja [i℄, i, xrel[i℄);
printf("Numarul de pasi:\n");
printf("Gauss-Seidel\tJa obi\tRel-su \n%d\t\t%d\t%d\n",
nr_pasi_gs, nr_pasi_ja , nr_pasi_rel);
printf("Apasati o tasta pt. a termina");
get h();
}
double epsilon = 0.0000000001;
oat max(oat zi[℄, oat xi[℄,int n); /* returneaza maximul dintre |zi-xi|
*/
int gauss_seidel(oat **a, oat *b,int n, oat *xi)
{
oat *z,suma;
int i,j,k=0;
z=Ve tor(n);
for(i=1; i<=n; i++)
Su essive Over-relaxation (SOR) Method 57

xi[i℄ = 0;
do
{
for(i=1; i<=n; i++)
z[i℄ = xi[i℄;
for(i=1; i<=n; i++)
{
suma=0.0;
for(j=1; j<=n; j++)
if (i != j)
suma += a[i℄[j℄*xi[j℄;
xi[i℄ = 1.0/a[i℄[i℄*(b[i℄ - suma);
}
k++;
}while (max(z, xi,n) >= epsilon);
return k;
}
int ja obi(oat **a, oat *b,int n,oat *xi)
{
oat *z,suma;
int k=0, i,j;
z=Ve tor(n);
for(i=1; i<=n; i++)
xi[i℄ = 0;
do
{
for(i=1; i<=n; i++)
z[i℄ = xi[i℄;
for(i=1; i<=n; i++)
{
suma=0.0;
for(j=1; j<=n; j++)
if (i != j)
suma += a[i℄[j℄*z[j℄;
xi[i℄ = 1.0/a[i℄[i℄*(b[i℄ - suma);
}
k++;
}while (max(z, xi,n) >= epsilon);
return k;
}
int relaxare_su esiva(oat **a, oat *b,int n,oat *xi)
58 Systems of Linear Equations

{
oat *z,suma,omega;
int k=0, i,j;
printf("omega="); s anf("%f",&omega);
for(i=1; i<=n; i++)
xi[i℄ = 0;
do
{
for(i=1; i<=n; i++)
z[i℄ = xi[i℄;
for(i=1; i<=n; i++)
{
suma=0.0;
for(j=1; j<=n; j++)
if (i != j)
suma += a[i℄[j℄*xi[j℄;
xi[i℄ = (1-omega)*z[i℄ + omega/a[i℄[i℄*(b[i℄ - suma);
}
k++;
}while (max(z, xi,n) >= epsilon);
return k;
}
oat max(oat zi[℄, oat xi[℄, int n)
/* returneaza maximul dintre |zi-xi| */
{
int i;
oat maxim = fabs(zi[1℄ - xi[1℄);
for(i=2, maxim = fabs(zi[1℄ - xi[1℄); i<=n; i++)
if(maxim < fabs(zi[i℄-xi[i℄))
maxim = fabs(zi[i℄-xi[i℄);
return maxim;
}
Chapter 2
Numeri al Solutions of Equations
and Systems of Nonlinear
Equations
Denition 2.0.1. A nonlinear system of n algebrai equations and n
unknowns, is a system of the form:

 f1 (x1 , x2 , . . . , xn ) = 0
............... (2.0.1)

fn (x1 , x2 , . . . , xn ) = 0
or
F (x) = 0, (2.0.2)
where: F (x) represents the ve tor (f1 (x), . . . , fn (x)) in whi h the fun tions
T

f1 , . . . , fn : D ⊂ IRn → IR1 are given, and x is the ve tor (x1 , x2 , . . . , xn )T .


A solution of a system (2.0.1) is an ordered system of numbers x∗1 , . . . , x∗n
whi h, repla ed in the system, verify the equalities (2.0.1).

2.1 Fixed-Point Iterative Method


The system (2.0.1) is transformed into the system:

 g1 (x1 , x2 , . . . , xn ) = x1

............... (2.1.1)

 g (x , x , . . . , x ) = x ,
n 1 2 n n

if we hoose the fun tions g1 , . . . , gn : D ⊂ IRn → IR1 , for example, of the


form:
gi (x1 , . . . , xn ) = fi (x1 , . . . , xn ) + xi .

59
60 Numeri al Solutions of Equations and Systems of Nonlinear Equations

The ordered system of numbers x∗1 , . . . , x∗n is a solution of the system


(2.0.1) if and only if it is a solution of the system (2.1.1).
In the following, we onsider the nonlinear systems written as (2.1.1) or
in the ve tor form:
G(x) = x (2.1.2)
where: x is the ve tor (x1 , x2 , . . . , xn )T , and G(x) is the ve tor (g1 (x), . . . , gn (x))T .
The fun tion G : D ⊂ IRn → IRn dened by:

G(x) = (g1 (x), . . . , gn (x))T for x0 ∈ D, (2.1.3)

is alled nonlinear operator.


Remark 2.1.1. The operator G(x) an be expressed using the operator F (x)
dened by (2.0.2), as follows:

G(x) = x − [F ′ (x)]−1 · F (x),

where F ′ (x) represents the Ja obi matrix whi h is supposed ontinuous and
invertible.
This formula an be dedu ed from the equivalen es:

F (x) = 0 ⇔ [F ′ (x)]−1 · F (x) = 0 ⇔ x − [F ′ (x)]−1 · F (x) = x ⇔ G(x) = x.

Denition 2.1.1. The ve tor x(∗) ∈ D is alled xed point for the operator
G if:
G(x(∗) ) = x(∗) . (2.1.4)
It is evident that x(∗) is a xed point for G if and only if x(∗) is a solution of
the system (2.1.1) equivalent with (2.0.1).
In the followings, we suppose that the operator G has a xed point x(∗) .
To nd the xed point x(∗) of the operator G (i.e. the solution x(∗) of the
system (2.0.1)), we use the algorithm:

x(k+1) = G(xk ), k = 0, 1, 2, . . . x(0) ∈ D (2.1.5)

with x0 given.
Denition 2.1.2. The point x(∗) is alled attra tor if there is an open
sphere S(x(∗) , r) = {x ∈ IRn | kx − x(∗) k < r} having the properties:
1. S(x(∗) , r) ⊂ D and x(k) obtained from (2.1.5) is well dened for ∀ x(0) ∈
S(x(∗) , r);
2. ∀ x(0) ∈ S(x(∗) , r) the sequen e x(k) dened of (2.1.5) belongs to D and
−−−→ x(∗) .
x(k) −k→∞
Fixed-Point Iterative Method 61

Remark 2.1.2. If x(∗) is an attra tor, then x(k) is alled the su essive
approximation sequen e of the xed point x(∗) .

Theorem 2.1.1. Let G : D ⊂ IRn → IRn a nonlinear operator and x(∗) ∈ D


a xed point of G. If there is an open sphere S(x(∗) , r) ⊂ D and a onstant
α ∈ (0, 1) su h that:

kG(x) − x(∗) k ≤ α kx − x(∗) k, ∀ x ∈ S(x(∗) , r)

then x(∗) is an attra tor and


kx(k+1) − x(∗) k
lim sup ≤ α,
k→∞ kx(k) − x(∗) k

lim sup kx(k) − x(∗) k1/k ≤ α.


k→∞

Proof: For x(0) ∈ S(x(∗) , r) we onsider x(1) = G(x(0) ). From the inequality

kG(x(0) ) − x(∗) k ≤ α kx(0) − x(∗) k

it results
kx(1) − x(∗) k ≤ α kx(0) − x(∗) k < α · r < r,
from whi h we obtain:
x(1) ∈ S(x(∗) , r) ⊂ D.
After that, we onsider x(2) = G(x(1) ), and from

kG(x(1) ) − x(∗) k ≤ α kx(1) − x(∗) k ≤ α2 kx(0) − x(∗) k < α2 · r < r

we obtain
x(2) ∈ S(x(∗) , r) ⊂ D.
In the followings, we onsider x(3) = G(x(2) ), and using a similar evaluation
we have:
kx(3) − x(∗) k ≤ α3 kx(0) − x(∗) k < α3 · r < r.
By mathemati al indu tion it is shown that x(k+1) = G(x(k) ) is well dened
and the terms of this sequen e verify:

kx(k) − x(∗) k ≤ αk kx(0) − x(∗) k ∀ k = 1, 2, . . .

Be ause α ∈ (0, 1), the sequen e αk onverges to 0 for k → ∞, from whi h


we obtain that x(k) onverges to x(∗) for k → ∞.
62 Numeri al Solutions of Equations and Systems of Nonlinear Equations

If there exists k0 su h that x(k) = x(∗) for k ≥ k0 , then lim sup kx(k+1) −x(∗) k =
k→∞
0 ≤ α.
If x(k) 6= x(∗) for k ≥ k0 , then the inequality kx(k+1) − x(∗) k ≤ α kx(k) − x(∗) k
shows that
kx(k+1) − x(∗) k
≤ α,
kx(k) − x(∗) k
and hen e
kx(k+1) − x(∗) k
lim sup ≤ α.
k→∞ kx(k) − x(∗) k
From kx(k) − x(∗) k ≤ αk kx(0) − x(∗) k, the following inequality results:

lim sup kx(k) − x(∗) k1/k ≤ α.


k→∞

Theorem 2.1.2. Let G : D ⊂ IRn → IRn be a nonlinear operator and


x(∗) ∈ D a xed point of G. If G is of C 1 - lass on D, and the spe tral radius
ρ of the Ja obi matrix in x(∗) , asso iated to the operator G, is stri tly sub-
unitary (ρ < 1), then x(∗) is an attra tor and
lim sup kx(k) − x(∗) k1/k = ρ.
k→∞

Theorem 2.1.3. Let G : D ⊂ IRn → IRn be a nonlinear operator and


x(∗) ∈ D a xed point of G. If G is of C 1 - lass on D, and the norm µ of
the Ja obi matrix in x(∗) , asso iated to the operator G, is stri tly sub-unitary
(µ < 1), then x(∗) is an attra tor and
lim sup kx(k) − x(∗) k1/k = µ.
k→∞

Theorem 2.1.4. (Fixed-Point)


Let G : D ⊂ IRn → IRn a nonlinear operator. If G is of C 1 - lass on D,
and the norm µ of the Ja obi matrix is stri tly sub-unitary (µ < 1) for any
x ∈ D, then for any x0 the sequen e of iterates
x(k+1) = G(xk ), k = 0, 1, 2, . . . x(0) ∈ D
onverges to a unique xed point x(∗) ∈ D.
Consequen e 2.1.1. If the Ja obi matrix of G is null in x(∗) , then x(∗) is
an attra tor and
kx(k+1) − x(∗) k
lim sup kx(k) − x(∗) k1/k = 0 = lim sup .
k→∞ k→∞ kx(k) − x(∗) k
The Newton Method 63

2.2 The Newton Method


The Newton method is an iterative method based on su essive approxima-
tions, for solving systems of n algebrai equations having n unknowns. We
onsider the system:
F (x) = 0 (2.2.1)
in whi h F : D ⊂ IRn − IRn is a nonlinear operator. We admit that (2.2.1)
has a solution x(∗) ∈ D, F is of C 1 - lass on D and F ′ (x) is invertible for any
x ∈ D.
Denition 2.2.1. If for a ve tor x(0) ∈ D the sequen e:
x(k+1) = x(k) − [F ′ (x(k) )]−1 · F (x(k) ), k = 0, 1, 2, . . . (2.2.2)
is well dened, then it is alled the lassi al sequen e of su essive it-
erations of Newton; we denoted by F ′(x(k)) the Ja obi matrix in the point
x(k) whi h is supposed ontinuous and invertible.
Denition 2.2.2. If for a ve tor x(0) ∈ D the sequen e:
x(k+1) = x(k) − [F ′ (x(0) )]−1 · F (x(k) ), k = 0, 1, 2, . . . (2.2.3)
is well dened, then it is alled the simplied sequen e of su essive
iterations of Newton.
Theorem 2.2.1. If for a ve tor x(0) ∈ D, the lassi al sequen e of su essive
iterations of Newton onverges, then the limit of the sequen e x(k) is a solution
of the equation (2.2.1).
Proof:We denote by x the limit of the sequen e x(k) . Passing to limit for
k → ∞ in relation (2.2.2) we obtain:
x = x − [F ′ (x)]−1 · F (x),
and hen e [F ′ (x)]−1 · F (x) = 0, i.e. F (x) = 0.
Theorem 2.2.2. If for a ve tor x(0) ∈ D, the simplied sequen e of su es-
sive iterations of Newton onverges, then the limit of the sequen e x(k) is a
solution of the equation (2.2.1).
Proof: We denote by x the limit of the sequen e x(k) and passing to limit
for k → ∞ in the relation (2.2.3) we obtain:
x = x − [F ′ (x(0) )]−1 · F (x)
and hen e [F ′ (x(0) )]−1 · F (x) = 0, i.e. F (x) = 0.
64 Numeri al Solutions of Equations and Systems of Nonlinear Equations

Theorem 2.2.3. Let F : D ⊂ IRn → IRnand the equation F (x) = 0 having


the solution x ∈ D. If there is an open sphere S(x(∗) , r) = {x ∈ IRn |
(∗)

kx − x(∗) k < r} on whi h F is of C 1 - lass and F ′ (x(∗) ) is non-singular then


in the ase of the lassi al Newton method, x(∗) is an attra tor.
Proof: The Consequen e 2.1.1 from previous se tion is applied.
Theorem 2.2.4. Under the hypothesis of the previous theorem, if the spe tral
radius ρ of the matrix
I − [F ′ (x(0) )]−1 · F ′ (x(∗) )
is stri tly sub-unitary then in the ase of the simplied Newton method, x(∗)
is an attra tive xed point.
Proof: The Theorem 2.1.2 is applied.

Exer ises
1. Using Fixed-Point and Newton methods, nd the solutions of the following
systems:
 2
 x1 + x2

 − x1 = 0
 6
a) on the domain D = [0, 1] × [1, 2]
 2
 x1 + x2 − x2 = 0


8
 3
x − 20x − 1 = 0
b)
x3 + xy − 10y + 10 = 0 on the domain D = [0, 1] × [1, 2]

The algorithm of the lassi al Newton method in the ase n = 2 is the fol-
lowing:

//Computation of the Ja obi matrix


h=0.0001
f1 (x1 + h, x2 ) − f1 (x1 , x2 )
jac11 =
h
f2 (x1 + h, x2 ) − f2 (x1 , x2 )
jac21 =
h
f1 (x1 , x2 + h) − f1 (x1 , x2 )
jac12 =
h
f2 (x1 , x2 + h) − f2 (x1 , x2 )
jac22 =
h
Quasi-non-expansion Operators 65

// Classi al Newton method


for i = 1 . . . n
xi = x0i
repeat
- the Ja obi matrix is omputed in xk
- a method for solving the linear system is used :
JF (xk )z k = −F (xk )
for i = 1 . . . n
xk+1
i = zik + xki

until max |fi (xk+1 )| < ε


1≤i≤n

Input data:
- n = 2 dimension for the presented ase

- initial iteration x0 = (x01 , . . . , x0n )

Output data:
- (xi )i=1,...,n - the solution of the system

- k - number of the steps

2.3 Quasi-non-expansion Operators


Let D ⊂ IRn , D be an open set and G : D ⊂ IRn → IRn a nonlinear operator,
x(∗) ∈ D a xed point of G: G(x(∗) ) = x(∗) .

Denition 2.3.1. The operator G is alled quasi-non-expansive on S(x(∗) , r) ⊂


D, if:
kG(x) − x(∗) k < kx − x(∗) k, ∀ x ∈ S(x(∗) , r), x 6= x(∗) (2.3.1)

Remark 2.3.1. If G is quasi-non-expansive operator on S(x(∗) , r) then x(∗)


is a unique xed point in S(x(∗) , r).

Theorem 2.3.1. If G : D ⊂ IRn → IRn is a ontinuous operator on D and


x(∗) ∈ D is a xed point of G, G is quasi-non-expansive on S(x(∗) , r) ⊂ D,
then for any x(0) ∈ S(x(∗) , r) the sequen e x(k+1) = G(x(k) ) onverges to x(∗)
for k → ∞.
66 Numeri al Solutions of Equations and Systems of Nonlinear Equations

Proof: The inequality:

kx(k+1) − x(∗) k = kG(x(k) ) − x(∗) k < kx(k) − x(∗) k k = 0, 1, 2, . . .

shows that the sequen e kx(k) − x(∗) k de reases.


Denoting L = lim kx(k) − x(∗) k and supposing that L 6= 0, we onsider a
k→∞
subsequen e x(kj ) of x(k) whi h onverges to the ve tor y (∗) ∈ S(x(∗) , r), for
whi h we have ky (∗) − x(∗) k = L 6= 0. Under these hypothesis we obtain:

L = lim kG(x(kj ) ) − x(∗) k = kG(y (∗) ) − x(∗) k < ky (∗) − x(∗) k = L


j→∞

whi h is absurd. It results that L = 0, i.e. x(k) −−−−→


k→∞
x(∗) .

In the followings, we will show that the operator from the lassi al Newton
iteration is quasi-non-expansive in a ertain sphere, from whi h we obtain the
lo al onvergen e of the Newton method.

Theorem 2.3.2. Let F : D ⊂ IRn → IRn be a nonlinear operator and x(∗) a


solution of the equation F (x) = 0. We suppose that F is of C 1 - lass on the
sphere S(x(∗) , r) ⊂ D and that the followings onditions are satised:
1. [F ′ (x)]−1 exists and k[F ′ (x)]−1 k ≤ β, ∀ x ∈ S ;
2. kF ′ (x) − F ′ (y)k ≤ γ kx − yk, ∀ x, y ∈ S ;
3. the onstants β, γ, r satisfy the relation β · γ · r < 1.
In these onditions, for any x(0) ∈ S , the sequen e x(k) generated by the
lassi al Newton method onverges to x(∗) .
Proof: We onsider the operator G : S ⊂ D → IRn dened by

G(x) = x − [F ′ (x)]−1 · F (x).

We show that G is quasi-non-expansive on S . Let x ∈ S and R(x) = F (x) −


F ′ (x)(x − x(∗) ) = F (x) − F (x(∗) ) − F ′ (x)(x − x(∗) ). A ording to the theorem
of the means we have:

kR(x)k = k(F ′ (y)−F ′ (x))(x−x(∗) )k ≤ γky −xk·kx−x(∗) k ≤ γ ·r ·kx−x(∗) k.

From here we obtain:

k[F ′ (x)]−1 ·R(x)k ≤ k[F ′ (x)]−1 k·kR(x)k ≤ β·γ·rkx−x(∗) k < kx−x(∗) k, ∀x ∈ S, x 6= x(∗) .
Quasi-non-expansion Operators 67

We ompute kG(x) − x(∗) k and we nd:

kG(x) − x(∗) k = kx − x(∗) − [F ′ (x)]−1 · F (x)k =


= kx − x(∗) − [F ′ (x)]−1 · (F ′ (x)(x − x(∗) ) + R(x))k =
= k[F ′ (x)]−1 · R(x)k < kx − x(∗) k

for any x ∈ S, x 6= x(∗) .


It follows that G is quasi-non-expansive on S and a ording to Theorem 2.3.1
the sequen e generated by the Newton method onverges to x(∗) .

Theorem 2.3.3. Under hypothesis of Theorem 2.3.2, the sequen e generated


by the iterations:
x(k+1) = x(k) − kF ′ (x(k) )k−2 · [F ′ (x(k) )]T · F (x(k) )

onverges to x(∗) for any x(0) ∈ S ; these iterations are alled the gradient
method.
Proof: As in the previous ase we prove that the operator

G(x) = x − kF ′ (x)k−2 · [F ′ (x)]T · F (x)

is quasi-non-expansive on S . From here, on the basis of Theorem 2.3.1, the


onvergen e of the sequen e x(k) to x(∗) results for any x(0) ∈ S .
In this ase, we onsider:

R(x) = F (x) − F ′ (x)(x − x(∗) ) = F (x) − F (x(∗) ) − F ′ (x)(x − x(∗) ).

Based on the previous theorem, R(x) veries

kR(x)k ≤ γ · r · kx − x(∗) k ∀ x ∈ S.

Be ause βγr < 1 we have γr < 1


β
and we an write:

1
kR(x)k < · kx − x(∗) k.
β
Taking into a ount the inequality kF ′ (x)−1 k ≤ β we obtain:

kR(x)k < kF ′ (x)−1 k−1 · kx − x(∗) k

Be ause:

kx − x(∗) k = kF ′ (x) · [F ′ (x)]−1 · (x − x(∗) )k ≤ k[F ′ (x)]−1 k · kF ′ (x) · (x − x(∗) )k


68 Numeri al Solutions of Equations and Systems of Nonlinear Equations

it results:
kx − x(∗) k · k[F ′ (x)]−1 k−1 ≤ kF ′ (x) · (x − x(∗) )k
hen e:
kR(x)k < kF ′ (x) · (x − x(∗) )k ∀ x ∈ S.
Let's evaluate kG(x) − x(∗) k2 for x ∈ S and x 6= x(∗) . We have:

kG(x) − x(∗) k2 = kx − kF ′ (x)k−2 · [F ′ (x)]T · F (x) − x(∗) k2

= < x − x(∗) − kF ′ (x)k−2 · [F ′ (x)]T · F (x), x − x(∗) − kF ′ (x)k−2 · [F ′ (x)]T · F (x) >=

= < x − x(∗) , x − x(∗) > −2· < x − x(∗) , kF ′ (x)k−2 · [F ′ (x)]T · F (x) > +

+ < kF ′ (x)k−2 · [F ′ (x)]T · F (x), kF ′ (x)k−2 · [F ′ (x)]T · F (x) > =

= kx − x(∗) k2 − 2· < x − x(∗) , kF ′ (x)k−2 · [F ′ (x)]T · F (x) > +

+ kF ′ (x)k−4 · kF ′ (x)T · F (x)k2 ≤

≤ kx − x(∗) k2 − 2 · kF ′ (x)k−2 · < x − x(∗) , [F ′ (x)]T · F (x) > +

+ kF ′ (x)k−4 · kF ′ (x)T k2 · kF (x)k2 ≤

≤ kx − x(∗) k2 − kF ′ (x)k−2 < 2 · F ′ (x)(x − x(∗) ), F (x) >

+ kF ′ (x)k−2 · kF (x)k2 .
Taking into a ount the equality:

R(x) = F (x) − F ′ (x)(x − x(∗) )

we have:

kR(x)k2 = kF (x)k2 − 2· < F (x), F ′ (x)(x − x(∗) ) > +kF ′ (x)(x − x(∗) )k2 .

It follows:

−2· < F (x), F ′ (x)(x − x(∗) ) >= kR(x)k2 − kF (x)k2 − kF ′ (x)(x − x(∗) )k2 .
Quasi-non-expansion Operators 69

We an on lude that for kG(x) − x(∗) k2 we have:

kG(x) − x(∗) k2 ≤ kx − x(∗) k2 + kF ′ (x)k−2 · [kR(x)k2 − kF (x)k2 − kF ′ (x)(x − x(∗) )k2 ]+


+ kF ′ (x)k−2 · kF (x)k2
= [kF ′ (x)−1 k · kR(x)k]2 < kx − x(∗) k2 .
70 Numeri al Solutions of Equations and Systems of Nonlinear Equations
Chapter 3
Interpolation, Polynomials
Approximation, Spline Fun tions
Suppose that X = {xi | xi ∈ IR1 , i = 0, 1, . . . , m} is a set of m + 1 distin t
real numbers: x0 < x1 < . . . < xm−1 < xm and a fun tion f : X → IR1
known only by its values in xi : yi = f (xi ), i = 0, 1, . . . , m .
Be ause we do not have an analyti expression of the fun tion f , for al ulat-
ing its value at an arbitrary point we should determine a fun tion ϕ, alled
interpolating fun tion, whi h takes values yi in the points xi : yi = ϕ(xi ), i =
0, 1, . . . , m .
The form of the approximating fun tion will be hosen su h that the on-
ditions yi = ϕ(xi ), i = 0, 1, . . . , m ondu t to a linear system of algebrai
equations. A possible hoi e is the following: we take a set of m + 1 simple
known fun tions ϕi (x), i = 0, 1, . . . , m and a set of m+1 unknown parameters
ai , i = 0, 1, . . . , m. Using these we an write the approximating fun tion:
Xm
ϕ(x) = ai ϕi (x).
i=0

In order to assure the uniqueness of the solution of the linear system, m + 1


unknown parameters ai and m + 1 unknown distin t fun tions ϕi (x) (linearly
independent), were hosen.
In the followings, we will onsider fun tions whi h have this property. A set
of su h fun tions is given by the monomials xi , i = 0, 1, . . . , m , and in this
ase the interpolating fun tion is a polynomial of degree m:
Xm
ϕ(x) = ai x i .
i=0

In this way, the problem of determining the interpolating fun tion is redu ed
to the determination of the oe ients ai from the interpolating ondition,

71
72 Interpolation, Polynomials Approximation, Spline Fun tions

i.e. it is redu ed to solving the linear system:


m
X
ai xij = yj , j = 0, 1, . . . , m
i=0

equivalent to:
m
X
ai xij = f (xj ), j = 0, 1, . . . , m,
i=0
i.e., 

 a0 + a1 x0 + a2 x20 + . . . + am xm
0 = f (x0 )

 a + a x + a x2 + . . . + a xm = f (x )
0 1 1 2 1 m 1 1

 .................................


a0 + a1 xm + a2 x2m + . . . + am xmm = f (xm ).
This system has a unique solution be ause its determinant is dierent of zero
(it's a determinant of Vandermonde type and xi are distin t).
We an on lude that the interpolating polynomial is unique for a given
fun tion f and for given data points x0 < x1 < . . . < xm−1 < xm . This
approximating polynomial is also alled global interpolation due to the fa t
that only one polynomial is used on the interval [a, b].

3.1 The Newton Divided Dieren e Formulas


An interpolating polynomial is given by the Newton polynomial with
divided dieren es.
Denition 3.1.1. For a given fun tion f : X → IR1 , the rst divided
dieren e of f at the point xr is given by the number:
f (xr+1 ) − f (xr )
(D1 f )(xr ) = .
xr+1 − xr
The rst divided dieren e of f at the point xr is denoted by [xr , xr+1 , f ] or
f (xr , xr+1 ):
f (xr+1 ) − f (xr )
[xr , xr+1 , f ] = f (xr , xr+1 ) = .
xr+1 − xr
Remark 3.1.1. We denote by Fm the set of real fun tions dened on X :
Fm = {f | f : X → IR1 }. For a fun tion f ∈ Fm we onsider the divided
dieren es:
f (x1 ) − f (x0 ) f (x2 ) − f (x1 ) f (xm ) − f (xm−1 )
, ,...,
x1 − x0 x2 − x1 xm − xm−1
The Newton Divided Dieren e Formulas 73

f (xr+1 ) − f (xr )
denoted by , r = 0, 1, . . . , m − 1.
xr+1 − xr

These divided dieren es onstitute a set of m numbers atta hed to the points
x0 < x1 < . . . < xm−1 :

f (x1 ) − f (x0 )
x0 −→ = [x0 , x1 , f ]
x1 − x0

f (x2 ) − f (x1 )
x1 −→ = [x1 , x2 , f ]
x2 − x1

...........................

f (xm ) − f (xm−1 )
xm−1 −→ = [xm−1 , xm , f ].
xm − xm−1

In this way, we obtain a fun tion D1 f dened on the set {xi | i = 0, . . . , m −


1}. Thus, D1 f is a real fun tion dened on the set {x0 < . . . < xm−1 } as
follows:
(D1 f )(xr ) = [xr , xr+1 , f ], r = 0, 1, . . . , m − 1.

We onsider the sets of fun tions Fm = {f | f : X → IR1 } and Fm−1 = {f |


f : {x0 , x1 , . . . , xm−1 } → IR1 }, and the operator D1 : Fm → Fm−1 dened by

f 7−→ D1 f

where (D1 f )(xr ) = [xr , xr+1 , f ].

Denition 3.1.2. The operator D1 : Fm −→ Fm−1 dened by (D1 f )(xr ) =


[xr , xr+1 , f ], r = 0, 1, . . . , m − 1 is alled the operator of the rst divided
dieren e.

Propostion 3.1.1. The operator of the rst divided dieren e is a linear


operator.

Proof: Let be f, g ∈ Fm and α, β ∈ IR1 . Cal ulating [D1 (αf + βg)](xr ) we


74 Interpolation, Polynomials Approximation, Spline Fun tions

nd:

D1 (αf + βg)(xr ) = [xr , xr+1 , αf + βg] =

(αf + βg)(xr+1 ) − (αf + βg)(xr )


= =
xr+1 − xr

αf (xr+1 ) + βg(xr+1 ) − αf (xr ) − βg(xr )


= =
xr+1 − xr

f (xr+1 ) − f (xr ) g(xr+1 ) − g(xr )


=α· +β· =
xr+1 − xr xr+1 − xr

= α · (D1 f )(xr ) + β · (D1 f )(xr ).

Denition 3.1.3. The se ond divided dieren e of the fun tion f at the
point xr , r ≤ m − 2, is the number:
(D1 f )(xr+1 ) − (D1 f )(xr ) [xr+1 , xr+2 , f ] − [xr , xr+1 , f ]
(D2 f )(xr ) = = ,
xr+2 − xr xr+2 − xr

and it will be denoted by:


[xr+1 , xr+2 , f ] − [xr , xr+1 , f ]
[xr , xr+1 , xr+2 , f ] =
xr+2 − xr

(D1 f )(xr+1 ) − (D1 f )(xr )


[xr , xr+1 , xr+2 , f ] = .
xr+2 − xr

Propostion 3.1.2. The following equality takes pla e:


f (xr ) f (xr+1 )
(D2 f )(xr ) = [xr , xr+1 , xr+2 , f ] = + +
(xr − xr+1 )(xr − xr+2 ) (xr+1 − xr )(xr+1 − xr+2 )

f (xr+2 )
+
(xr+2 − xr )(xr+2 − xr+1 )
The Newton Divided Dieren e Formulas 75

Proof: A ording to the denition we have:


(D1 f )(xr+1 ) − (D1 f )(xr )
[xr , xr+1 , xr+2 , f ] =
xr+2 − xr

 
1 f (xr+2 ) − f (xr+1 ) f (xr+1 ) − f (xr )
= · − =
xr+2 − xr xr+2 − xr+1 xr+1 − xr


1 f (xr+2 )(xr+1 − xr )
= · −
xr+2 − xr (xr+2 − xr+1 )(xr+1 − xr )


f (xr+1 )(xr+1 − xr + xr+2 − xr+1 ) f (xr )(xr+2 − xr+1 )
− + =
(xr+2 − xr+1 )(xr+1 − xr ) (xr+2 − xr+1 )(xr+1 − xr )

f (xr+2 ) f (xr+1 )
= − +
(xr+2 − xr )(xr+2 − xr+1 ) (xr+2 − xr+1 )(xr+1 − xr )

f (xr )
+ =
(xr+2 − xr )(xr+1 − xr )

f (xr ) f (xr+1 )
= + +
(xr − xr+1 )(xr − xr+2 ) (xr+1 − xr )(xr+1 − xr+2 )

f (xr+2 )
+ .
(xr+2 − xr )(xr+2 − xr+1 )

Remark 3.1.2. For any k ≤ m we an dene the k th divided dieren e


of the fun tion f at the point xr ( r ≤ m − k ):

k (Dk−1 f )(xr+1 ) − (Dk−1 f )(xr ) [xr+1 , . . . , xr+k , f ] − [xr , xr+1 , . . . , xr+k−1 , f ]


(D f )(xr ) = = .
xr+k − xr xr+k − xr
The fun tion whi h asso iates to the point xr the k th divided dieren e of f
at xr , is denoted by (Dk f ).
By mathemati al indu tion it is shown that the equality:
k
X f (xr+i )
(Dk f )(xr ) =
i=0
(xr+i − xr )(xr+i − xr+1 ) . . . (xr+i − xr+i−1 )(xr+i − xr+i+1 ) . . . (xr+i − xr+k )
76 Interpolation, Polynomials Approximation, Spline Fun tions

takes pla e, where it an be observed that the fa tor (xr+i − xr+i ) is missing
from the denominator.
Remark 3.1.3. Considering the set of fun tions Fm−k = {f : {x0 , x1 , . . . , xm−k } →
IR1 }, using the k th divided dieren e we an asso iate to every fun tion
f ∈ Fm the fun tion Fm−k :
f 7−→ Dk f
where Dk f is dened by (Dk f )(xr ) = [xr , xr+1 , . . . , xr+k , f ] for r ≤ m − k .
The orresponden e f 7−→ Dk f will be denoted by Dk and will be alled
the operator of the k th divided dieren e.
Remark 3.1.4. The operator Dk : Fm → Fm−k of the k th divided dieren e
is linear.
Remark 3.1.5. For k = m, the mth divided dieren e is dened only at x0
and it is given by:
m
m
X f (xi )
(D f )(x0 ) = .
i=0
(xi − x0 )(xi − x1 ) . . . (xi − xi−1 )(xi − xi+1 ) . . . (xi − xm )

Thus, the following equality an be proved:


Propostion 3.1.3.
(W f )(x0 , x1 , . . . , xm )
(Dm f )(x0 ) =
V (x0 , x1 , . . . , xm )
where:

1
x0 x20 . . . xm−1
0 f (x 0 )

m−1
1 x1 x21 . . . x1 f (x1 )
(W f )(x0 , x1 , . . . , xm ) =
· · · ··· ··· ··· ··· · · ·
1 xm x2m . . . xm−1
m f (xm )
and
1
x0 x20 . . . xm−1
0 xm0

1 x1 x21 . . . xm−1
1
m
x1
V (x0 , x1 , . . . , xm ) =
· · · ··· ··· ··· ··· · · ·
1 xm x2m . . . xm−1
m xmm

Proof: Using the Vandermonde formula we have:



1 x0 x20 . . .
xm−1
0 xm0

1 x1 x21 . . . xm−1
1
m
x1 Y
V (x0 , x1 , . . . , xm ) = = (xi − xj ).
· · · · · · · ·2· · · · ··· · · ·
i>j
1 xm x
m ... xm−1
m xmm

The Newton Divided Dieren e Formulas 77

Developing W f with respe t to the last olumn we obtain:

(W f )(x0 , x1 , . . . , xm ) = (−1)m+2 · f (x0 ) · V (x1 , . . . , xm )+

+ (−1)m+3 · f (x1 ) · V (x0 , x2 , . . . , xm )+

+ · · · + (−1)2(m+1) · f (xm ) · V (x0 , . . . , xm−1 ).

Hen e,

(W f )(x0 , x1 , . . . , xm )
=
V (x0 , x1 , . . . , xm )
m
1 X Y
= Y · (−1)m+2+k · f (xk ) · (xi − xj ) =
(xi − xj ) k=0 i>j
i>j i,j6=k

Y
(xi − xj )
m i>j
X i,j6=k
m+2+k
= (−1) · f (xk ) · Y =
k=0 (xi − xj )
i>j
m
X f (xk )
= (−1)m+2+k · =
k=0
(xk − x0 ) . . . (xk − xk−1 )(xk+1 − xk ) . . . (xm − xk )

m
X f (xk )
= (−1)m+2+k · =
k=0
(xk − x0 ) . . . (xk − xk−1 )(−1)m−k (xk − xk+1 ) . . . (xk − xm )

m
X f (xi )
=
i=0
(xi − x0 )(xi − x1 ) . . . (xi − xi−1 )(xi − xi+1 ) . . . (xi − xm )

= (Dm f )(x0 ).

Remark 3.1.6. From the obtained representation of (Dm f )(x0 ) it results


that, for any permutation (i0 , i1 , . . . , im ) of the numbers (0, 1, . . . , m), we
have:
[xi0 , xi1 , . . . , xim ; f ] = [x0 , x1 , . . . , xm ; f ].

In other words, the mth divided dieren e does not depend of the order of
knots.
78 Interpolation, Polynomials Approximation, Spline Fun tions

Propostion 3.1.4. If f is a polynomial of the maximum degree m − 1, then


(Dm f )(x0 ) = 0, ∀ x0 .

Proof: If f is a polynomial of the maximum degree m − 1, then f (x) =


m−1
X
ak xk . Taking into a ount that Dm is a linear operator, we have:
k=0

m−1
X
m
(D f )(x0 ) = ak Dm (xk )(x0 ).
k=0

On the other hand, we have:

W (xk )(x0 , x1 , . . . , xm )
Dm (xk )(x0 ) =
V (x0 , x1 , . . . , xm )

1 x0 x20 . . . xm−1 0 xk0
2 m−1
1 x 1 x . . . x xk1
with W (xk )(x0 , x1 , . . . , xm ) = 1 1
= 0.
· · · · · · · · · · · · · ·· · · ·
1 xm x2 . . . xm−1 xk
m m m
In this way, we nd the equality from the enun iation.

Propostion 3.1.5. If f, g : X → IR1 then:


m
X
[x0 , x1 , . . . , xm , f · g] = [x0 , x1 , . . . , xk , f ] · [xk , . . . , xm , g].
k=0

Proof: Mathemati al indu tion with respe t to m will be used.


Thus, for m = 1 the left hand side is [x0 , x1 , f · g] whi h represents the
rst divided dieren e of f · g . So, a ording to the denition we have:

(f · g)(x1 ) − (f · g)(x0 ) f (x1 ) · g(x1 ) − f (x0 ) · g(x0 )


[x0 , x1 , f · g] = = =
x1 − x0 x1 − x0

f (x1 ) · g(x1 ) − f (x0 ) · g(x1 ) + f (x0 ) · g(x1 ) − f (x0 ) · g(x0 )


= =
x1 − x0

= g(x1 ) · [x0 , x1 , f ] + f (x0 ) · [x0 , x1 , g] =

= f (x0 ) · [x0 , x1 , g] + [x0 , x1 , f ] · g(x1 ).


The Newton Divided Dieren e Formulas 79

We suppose that the relation:

m−1
X
[x0 , x1 , . . . , xm , f · g] = [x0 , x1 , . . . , xk , f ] · [xk , . . . , xm−1 , g]
k=0

is true, and we ompute [x0 , x1 , . . . , xm , f · g]. Thus, we obtain:

[x0 , x1 , . . . , xm , f · g] =
1
= · [[x1 , . . . , xm , f · g] − [x0 , . . . , xm−1 , f · g]] =
xm − x0
"m−1
1 X
= · [x1 , . . . , xk+1 , f ] · [xk+1 , . . . , xm , g]−
xm − x0 k=0

−[x0 , . . . , xk , f ] · [xk , . . . , xm−1 , g]] =


m−1
1 X
= · [x1 , . . . , xk+1 , f ] · [xk+1 , . . . , xm , g]−
xm − x0 k=0

−[x0 , . . . , xk , f ] · [xk , . . . , xm−1 , g]+

+ [x0 , . . . , xk , f ] · [xk+1 , . . . , xm , g] − [x0 , . . . , xk , f ] · [xk+1 , . . . , xm , g] =

m−1
1 X
= · [x0 , . . . , xk , f ] · {[xk+1 , . . . , xm , g] − [xk , . . . , xm−1 , g]}+
xm − x0 k=0

m−1
1 X
+ · [xk+1 , . . . , xm , g] · {[x1 , . . . , xk+1 , f ] − [x0 , . . . , xk , f ]} =
xm − x0 k=0

m−1
1 X
= · [x0 , . . . , xk , f ] · (xm − xk ) · [xk , . . . , xm , g]+
xm − x0 k=0

m
1 X
+ · [xk , . . . , xm , g] · (xk − x0 ) · [x0 , . . . , xk , f ] =
xm − x0 k=1
80 Interpolation, Polynomials Approximation, Spline Fun tions

1
= · {(xm − x0 ) · [x0 , f ] · [x0 , . . . , xm , g]+
xm − x0
m−1
X
+ (xm − x0 ) · [x0 , . . . , xk , f ] · [xk , . . . , xm , g]+
k=1

+ (xm − x0 ) · [x0 , . . . , xm , f ] · [xm , g]} =


m
X
= [x0 , . . . , xk , f ] · [xk , . . . , xm , g].
k=0

We return to the obje tive of this se tion: the Newton polynomial


with divided dieren es. Thus, using the divided dieren es presented
above, we will onstru t the interpolating polynomial of degree m:
m
X
ϕ(x) = ai x i ,
i=0

whi h approximates the fun tion f : X → IR1 known only by its values in
the data points xi : yi = f (xi ), i = 0, 1, . . . , m .
The approximating fun tion ϕ(x) is a polynomial fun tion of degree m, de-
noted by pm (x), for whi h the oe ients ai will be omputed using divided
dieren es:
ai = [x0 , x1 , . . . , xi , f ], i = 0, 1, . . . , m.

Denition 3.1.4. We all Newton polynomial with divided dieren es


whi h approximates the fun tion f : X → IR1 given by: yi = f (xi ), i =
0, 1, . . . , m, the polynomial fun tion of degree m:

pm (x) = f (x0 ) + [x0 , x1 , f ](x − x0 ) + [x0 , x1 , x2 , f ](x − x0 )(x − x1 ) + . . .

+[x0 , x1 , . . . , xm , f ](x − x0 )(x − x1 ) . . . (x − xm−1 ).

Remark 3.1.7. Newton polynomial with divided dieren es has the prop-
erty that its graph passes through the points (xi , yi ) = (xi , f (xi )), i = 0, 1, . . . , m.

Using the Newton polynomial with divided dieren es, the fun tion f :
X → IR1 given by: yi = f (xi ), i = 0, 1, . . . , m, is written as:

f (x) = f (x0 ) + (D1 f )(x0 )(x − x0 ) + (D2 f )(x0 )(x − x0 )(x − x1 ) + . . .

+(Dm f )(x0 )(x − x0 )(x − x1 ) . . . (x − xm−1 ) + Rm (x),


The Newton Divided Dieren e Formulas 81

where Rm (x) represents the remainder term (the error of approximation) of


the interpolating polynomial. In order to evaluate the approximating error,
we need supplementary information about the approximating fun tion f (x)
and its derivatives.
Propostion 3.1.6. (Mean Value Theorem)
Let be a ≤ x0 < x1 < . . . < xm ≤ b. If f ∈ C m−1 [a, b] and f (m) is derivable
on (a, b), then there exists ξ su h that:
1
(Dm f )(x0 ) = · f (m) (ξ) a < ξ < b.
m!
Proof: We will onsider the auxiliary fun tion:

ϕ(x) = (W f )(x, x0 , . . . , xm−1 ) + (Dm f )(x0 ) · V (x, x0 , . . . , xm−1 ).


The fun tion ϕ has the property ϕ(xk ) = 0, k = 0, 1, 2, . . . , m. Applying
su essively the Theorem of Rolle on the subintervals determined by these
points, we obtain that ϕ(m) has at least one zero ξ ∈ (a, b). Be ause

ϕ(m) (x) = [f (m) (x) − m! · Dm f (x0 )] · V (x0 , . . . , xm−1 )


it results
f (m) (ξ)
(Dm f )(x0 ) = .
m!

From the Mean Value Theorem, we obtain the expression of the mth
divided dieren e as fun tion of the mth derivative of the fun tion f . In
this way, the approximating error be omes:
f (m+1) (ξ)
Rm (x) = (x − x0 )(x − x1 ) . . . (x − xm−1 )(x − xm ),
(m + 1)!
where ξ ∈ (a, b).
The interpolating polynomial appears espe ially as a omponent of other
numeri al algorithms (numeri al integration or numeri al dierentiation). In
these appli ations, equal intervals are onsidered given by equidistant knots
(i.e. the distan e between two onse utive knots is equal to a onstant h
alled step of the mesh):

xi+1 − xi = h, i = 0, 1, . . . , m − 1.
We introdu e the forward dieren e operator △ and the ba kward dieren e
operator ▽ dened as follows:

△f (x) = f (x + h) − f (x),
82 Interpolation, Polynomials Approximation, Spline Fun tions

▽f (x) = f (x) − f (x − h).


The results obtained by applying the operators △ or ▽ to the fun tion f (x),
dene the rst nite dieren es.
Thus, divided dieren es an be expressed using nite dieren es and step
h.
Hen e, for the rst divided dieren es we have:

f (x1 ) − f (x0 ) f (x0 + h) − f (x0 ) △f (x0 )


[x0 , x1 , f ] = = = ,
x1 − x0 h h

f (xm ) − f (xm−1 ) f (xm ) − f (xm − h) ▽f (xm )


[xm−1 , xm , f ] = [xm , xm−1 , f ] = = = .
xm − xm−1 h h
For the se ond divided dieren es we obtain:
△f (x1 ) △f (x0 )
[x1 , x2 , f ] − [x0 , x1 , f ] − △2 f (x0 )
[x0 , x1 , x2 , f ] = = h h = ,
x2 − x0 2h 2!h2

▽f (xm ) ▽f (xm−1 )
[x1 , x2 , f ] − [x0 , x1 , f ] − ▽2 f (xm )
[xm−2 , xm−1 , xm , f ] = = h h = .
x2 − x0 2h 2!h2
By mathemati al indu tion, the mth divided dieren es are obtained:

△m f (x0 )
[x0 , x1 , x2 , . . . , xm , f ] = ,
m!hm
▽m f (xm )
[x0 , x1 , x2 , . . . , xm , f ] =
.
m!hm
Based on the above formulas, the Newton polynomial with forward nite
dieren es is:
△f (x0 ) △2 f (x0 )
pm (x) = f (x0 ) + (x − x0 ) + (x − x0 )(x − x1 ) + . . .
h 2!h2
△m f (x0 )
+ (x − x0 ) . . . (x − xm−1 ), and the Newton polynomial
m!hm
with ba kward nite dieren es is:
▽f (xm ) ▽2 f (xm )
pm (x) = f (xm ) + (x − xm ) + (x − xm )(x − xm−1 ) + . . .
h 2!h2
▽m f (xm )
+ (x − xm ) . . . (x − x1 ).
m!hm
The Newton Divided Dieren e Formulas 83

Exer ises

1. Approximate numeri ally the fun tion f (x) = x, knowing its values in
the knots: x0 = 1, x1 = 1.5, x2 = 2, x3 = 2.5, x4 = 3, as follows:

• using the Newton polynomial with divided dieren es;

• using the Newton polynomial with forward nite dieren es;

• using the Newton polynomial with ba kward nite dieren es.

Plot, on the same artesian oordinates, the obtained interpolating


√ polyno-
mial fun tions together with the given fun tion f (x) = x.
2. Using the Mean Value Theorem, approximate numeri ally the derivatives
f ′ (0.1), f ′′ (0.2), f ′′′ (0.1), where the fun tion f is given as follows:

xi 0.1 0.2 0.3 0.4


f (xi ) 0.995 0.98007 0.95534 0.92106

In the followings, we will present the algorithm for the implementation of


the k th divided dieren e at the point xr .

If r > n − k , then we annot ompute the derivative of order k at the


point xr .
If r ≤ n − k , then the k th divided dieren e is:
k  k 
k
X Y 1
(D f )(xr ) = f (xr+i ) ·
i=0
x − xr+j
j=0 r+i
j6=i
Input data:
- n

- xi - knots, i = 0, . . . , n

- f (xi ) - values of the fun tion f at the knots

- r - the order of the knot in whi h we approximate the derivative

- k - order of the derivative

Output data:
- approximating the derivative of order k at the knot xr
84 Interpolation, Polynomials Approximation, Spline Fun tions

Implementation in Borland C:
#in lude<stdio.h>
oat **Matri e(int imin, int imax, int jmin, int jmax);
oat *Ve tor(int n);
void CitireVe t(oat *a, int n);
void S riereVe t(oat *a, int n);
oat dif_divizate(oat *x, oat *f,int n, int r, int k);
void main()
{
oat *f, *x;
int n,r,k;
printf("n= "); s anf("%d",&n);
x=Ve tor(n);
f=Ve tor(n);
printf("Introdu eti nodurile: \n");
CitireVe t(x,n);
printf("Introdu eti valorile fun tiei in noduri: \n");
CitireVe t(f,n);
printf("Introdu eti ordinul nodului in are se fa e derivata: ");
s anf("%d",&r);
printf("Introdu eti ordinul derivatei: "); s anf("%d",&k);
printf("Valoarea derivatei de ordinul %d in x[%d℄ este:
%g",k,r, dif_divizate(x, f,n, r, k));
}
oat dif_divizate(oat *x, oat *f,int n, int r, int k)
{
oat suma, produs;
int i,j;
if(r>n-k){ printf("Nu se poate al ula derivata de ordin %d in pun tul
x[%d℄ !", k,r); }
else{
suma=0.0;
for(i=0;i<=k;i++)
{
produs=f[r+i℄;
for(j=0;j<=k;j++)
if(j!=i) produs=produs/(x[r+i℄-x[r+j℄);
suma=suma+produs;
}
}
return suma;
The Lagrange Interpolating Polynomial 85

3.2 The Lagrange Interpolating Polynomial


Let [a, b] ⊂ IR1 , the knots xi ∈ [a, b], i = 0, 1, . . . , m su h that xi 6= xj for
i 6= j , and a fun tion f : [a, b] → IR1 .
Denition 3.2.1. The problem of determining the polynomial P whi h has
minimum degree and satises the property:
P (xi ) = f (xi ), i = 0, 1, . . . , m,

is alled the Lagrange interpolating polynomial problem.


Denition 3.2.2. A solution of the Lagrange interpolating polynomial prob-
lem (if it exists) is alled Lagrange interpolating polynomial and it will be
denoted by Lm f , Lm being the operator whi h asso iates f to the polynomial
Lm f .
Theorem 3.2.1. The Lagrange interpolating polynomial problem has a unique
solution whi h is a polynomial of degree m.
Proof: We onsider a polynomial of degree m having undetermined oe-
ients:
Pm (x) = a0 + a1 x + a2 x2 + . . . + am xm .
Imposing this polynomial to satisfy:

Pm (xi ) = f (xi ) i = 0, 1, . . . , m

we obtain the following linear system having m + 1 equations and m + 1


unknowns a0 , a1 , . . . , am :

a0 + a1 x0 + a2 x20 + . . . + am xm 0 = f (x0 )
a0 + a1 x 1 + a2 x 1 + . . . + a m x m
2
1 = f (x1 )
.................................
a0 + a1 xm + a2 x2m + . . . + am xm m = f (xm ).

The determinant of the system is



1 x0 x20 ... xm0

m
1 x1 x21 ... m
x1 Y
V (x0 , x1 , . . . , xm ) = = (xi − xj ) 6= 0
. . . . . . . .2. ... . . .
i,j=0
1 xm x
m ... xmm
i>j
86 Interpolation, Polynomials Approximation, Spline Fun tions

and it follows that, the system has a unique solution. This means that
there exists a unique polynomial of degree m whi h satises the onditions
P (xi ) = f (xi ). The fa t that does not exist polynomial of degree less then
m whi h veries P (xi ) = f (xi ) does not exist, an be proved supposing
the ontrary ( ase in whi h a system of m + 1 equations with maximum m
unknowns is obtained, whi h does not have a solution for any f ).

Denition 3.2.3. The Lagrange interpolating polynomials orresponding to


the fun tions li : [a, b] → IR1 dened by li (xi ) = 1 and li (xj ) = 0 for i 6= j ,
are alled fundamental Lagrange interpolating polynomials.
Theorem 3.2.2. The fundamental Lagrange interpolating polynomials li (x),
i = 0, m are given by the formulas:
(x − x0 ) . . . (x − xi−1 )(x − xi+1 ) . . . (x − xm )
li (x) = .
(xi − x0 ) . . . (xi − xi−1 )(xi − xi+1 ) . . . (xi − xm )
Proof: The system from the previous proof is solved for the parti ularly
ase of the fun tions li . Thus, we obtain:
m

1 x 0 . . . x 0

. . . . . . . . . . . .

1 xi−1 . . . xm i−1

1
li (x) = · 1 x . . . xm
V (x0 , . . . , xm ) m
1 xi+1 . . . xi+1
. . . . . . . . . . . .

1 xm . . . xm
m

i.e.,
V (x0 , . . . , xi−1 , x, xi+1 , . . . , xm )
li (x) = =
V (x0 , . . . , xi−1 , xi , xi+1 , . . . , xm )
(x − x0 ) . . . (x − xi−1 )(x − xi+1 ) . . . (x − xm )
+ .
(xi − x0 ) . . . (xi − xi−1 )(xi − xi+1 ) . . . (xi − xm )

Remark 3.2.1. The Lagrange interpolating polynomial whi h approx-


imates the fun tion f is given by the formula:
m
X
(Lm f )(x) = li (x) · f (xi ).
i=0

This equality an be obtained dire tly by veri ation.


The Lagrange Interpolating Polynomial 87

Theorem 3.2.3. The Lagrange interpolating operator Lm dened on the set


F = {f : [a, b] → IR1 } taking values in the set of the polynomials of degree
m, it is linear and idempotent i.e., (Lm f )(x) = (L2m f )(x) (it is a proje tor).
Proof: Linearity:
m
X
Lm (αf + βg)(x) = li (x) · (αf (xi ) + βg(xi )) = α(Lm f )(x) + β(Lm g)(x).
i=0

For proving that the operator is idempotent we onsider the fun tions lk (x) =
xk , k = 0, 1, . . . , n. We observe that: Lm (lk )(x) = lk (x) = xk , k = 0, 1, . . . , m
and hen e we obtain:

Lm (f )(x) = a0 + a1 x + . . . + am xm ⇒
⇒ Lm (Lm f )(x) = a0 + a1 x + . . . + am xm ⇒ (L2m f )(x) = (Lm f )(x).

Remark 3.2.2. If we onsider the operator Lm dened on the set of ontin-


uous fun tions dened on [a, b], then the norm of the operator Lm dened by
kLm k = sup kLm f k, where kf k = sup |f (x)|, is evaluated as follows:
kf k≤1 x∈[a,b]

m
X
kLm k = max |li (x)|.
a≤x≤b
i=0

Denition 3.2.4. The dieren e f (x) − (Lm f )(x) = (Rm f )(x) is alled
trun ation error of order m, and the approximating formula f (x) =
(Lm f )(x) + (Rm f )(x) is alled the Lagrange approximation formula.
Theorem 3.2.4. The trun ation error (Rm f )(x) from the Lagrange approx-
imation formula is a linear and idempotent operator.
Proof: Linearity of (Rm f )(x) results from the linearity of (Lm f )(x). For
proving that the error is idempotent, we take into a ount that, if f is a
polynomial of degree m then Lm = f . It results that, if f is a polynomial of
degree m, then Rm f = f −Lm f = 0. Thus, Rm (Rm f ) = Rm (f −Lm f ) = Rm f
for any f , i.e., Rm
2
f = Rm f .

In the followings, we will present some formulas for the representation of


the trun ation error of the Lagrange interpolating formula.
88 Interpolation, Polynomials Approximation, Spline Fun tions

Theorem 3.2.5. Let be α = min{x, x0 , . . . , xm } and β = max{x, x0 , . . . , xm }.


If f is of C m - lass on [α, β] and f (m) is derivable on (α, β) then there exists
ξ ∈ (α, β) su h that
u(x)
(Rm f )(x) = · f (m+1) (ξ)
(m + 1)!
where u(x) is the polynomial dened by
u(x) = (x − x0 )(x − x1 ) . . . (x − xm ).

Proof: Let be the fun tion F dened by



u(z) (Rm f )(z)
F (z) =

u(x) (Rm f )(x)

From the hypothesis on f it results that F ∈ C m [α, β] and there exists


F (m+1) on (α, β). Moreover, we observe that F (x) = 0 and F (xi ) = 0,
i = 0, 1, . . . , m, hen e F has m + 2 distin t zeros on [α, β]. Applying su es-
sively the Rolle theorem it results that F (m+1) has a least one zero in this
interval. Taking the derivative of order m + 1 of the fun tion F and imposing
the ondition F (m+1) (ξ) = 0 we obtain

(m+1)
(m + 1)! f (m+1) (ξ)
F (ξ) = = 0,
u(x) (Rm f )(x)

where we used that (Rm f )(m+1) = f (m+1) −(Lm f )(m+1) = f (m+1) . Computing
this determinant we obtain (Rm f )(x) from the enun iation.

Theorem 3.2.6. For f : [a, b] → IR1 we have


(Rm f )(x) = u(x) · [x, x0 , . . . , xm ; f ], x ∈ [a, b].

Proof: Supposing that x 6= xk , k = 0, 1, . . . , m (this is a natural ondition


be ause (Rm f )(xk ) = 0, k = 0, 1, . . . m), based on the divided dieren e
formula we obtain:
m
f (x) X f (xk )
[x, x0 , . . . , xm ; f ] = +
u(x) k=0 (xk − x) · u′ (xk )

where u(x) = (x − x0 ) . . . (x − xm ), and u′ (xk ) = (xk − x0 ) . . . (xk − xk−1 )(xk −


xk+1 ) . . . (xk − xm ). Multiplying every member of this formula with u(x), we
get the equality from the enun iation.
The Lagrange Interpolating Polynomial 89

Remark 3.2.3. If the interpolating knots are equidistant, i.e. xi = x0 + ih,


i = 0, 1, . . . , m, h > 0, then the Lagrange interpolating polynomial and the
trun ation error an be written as follows :
m
tm+1 X 1
(Lm f )(x0 + th) = · (−1)m−i Cmi
· f (xi ),
m! i=0 t−i

hm+1 · tm+1 (m+1)


(Rm f )(x0 + th) = ·f (ξ).
(m + 1)!
Pra ti al points of view for the interpolating polynomial:
1. The Newton and Lagrange interpolating polynomials are dierent only
by the form; the trun ation errors are the same, if we onsider the
same mesh of knots. From numeri al al ulus point of view, the New-
ton interpolating is preferred be ause this request a small number of
the arithmeti if we ompare with the Lagrange interpolating. Both
algorithms use the same memory size.
2. If we denote by α and β the smaller and the larger interpolating knots,
respe tively, then from omputational point of view, the following in-
terpolating polynomials are indi ated: for x not far of α, it is indi ated
to use the Newton polynomial with forward nite dieren es; for x not
far of β , it is indi ated to use the Newton polynomial with ba kward
nite dieren es.
3. Other interpolating polynomial forms orrespond to the spa e of the
fun tions in whi h interpolator is sear hed. Thus, if we hoose the
spline fun tions as spa e, then we obtain spline polynomial interpola-
tion; if we hoose the trigonometri fun tions as spa e, then we obtain
spline polynomial trigonometri , et .

Exer ises
1. Using the Lagrange interpolating polynomial, approximate numeri ally
cos(0.12), in the ase in whi h the followings values are known:
xi 0.1 0.2 0.3 0.4
cos(xi ) 0.995 0.98007 0.95534 0.92106
2. Using the Lagrange interpolating polynomial, ompute the Australian
population from the years 1960, 1970 and 1975, if the followings data are
given:
90 Interpolation, Polynomials Approximation, Spline Fun tions

an 1954 1961 1971 1976


population 8.99 10.51 12.94 13.92

For implementation of the Lagrange interpolation polynomial the following


formula will be used:
n  n 
X Y x − xj
L(x) = f (xi ) ·
i=0
x − xj
j=0 i
j6=i
Input data:
- n

- xi - knots, i = 0, . . . , n

- f (xi ) - values of the fun tion f in the knots

- x - point in whi h the value of the fun tion f is approximated

Output data:
- L(x)

Implementation of the above algorithm, using the Borland C language:


#in lude<stdio.h>
oat **Matri e(int imin, int imax, int jmin, int jmax);
oat *Ve tor(int n);
void CitireVe t(oat *a, int n);
void S riereVe t(oat *a, int n);
oat Lagrange(oat *x, oat *f,int n, oat a);
void main()
{
oat *f, *x,a;
int n;
printf("n= "); s anf("%d",&n);
x=Ve tor(n);
f=Ve tor(n);
printf("Introdu eti nodurile: \n");
CitireVe t(x,n);
printf("Introdu eti valorile fun tiei in noduri: \n");
CitireVe t(f,n);
printf("Introdu eti abs isa pun tului in are se aproximeaza valoarea
fun tiei: "); s anf("%f",&a);
Pie ewise Polynomial Approximations: Spline Fun tions. Introdu tion 91

printf("f(%g)= %g",a, Lagrange(x, f,n, a));


}
oat Lagrange(oat *x, oat *f,int n, oat a)
{
oat suma, produs;
int i,j;
suma=0.0;
for(i=0;i<=n;i++)
{
produs=f[i℄;
for(j=0;j<=n;j++)
if(j!=i) produs=produs*(a-x[j℄)/(x[i℄-x[j℄);
suma=suma+produs;
}
return suma;
}

3.3 Pie ewise Polynomial Approximations: Spline


Fun tions. Introdu tion
In some ases, the global interpolation presented in the previous se tions (on
the entire interval [a, b]), does not onverge. Indeed, even if the trun ation
errors Rm (x) suggest that the pre ision in reases when the knots number
in reases, the interpolation polynomial may not onverge; an example has
been given in 1901 by Runge: f (x) = 1/(1 + x2 ), x ∈ [−5, 5] (trun ation
error tends to innity when m −→ ∞). More pre isely, the Faber Theo-
rem asses that for any set of data points in the interval [a, b] there exists a
ontinuous fun tion for whi h the digression of the interpolating polynomial
in reases however mu h for m −→ ∞. The fa t that there exists at least
one fun tion for whi h the interpolating polynomial does not onverge, re-
du es the appli ability of the global interpolation, and it is used only as a
omponent of numeri al algorithms for small values of m.
Thus, the idea of pie ewise polynomial approximations omes up, i.e.,
for every subinterval [xi−1 , xi ] of [a, b] another interpolating polynomial is
dened. The polygonal fun tion represents the simplest example of pie ewise
interpolating polynomial ( alled spline interpolation ).
The natural spline is the urve ( alled the elasti ) obtained by for ing
a exible elasti rod through the n points Pi , i = 1, n but letting the slope
92 Interpolation, Polynomials Approximation, Spline Fun tions

at the ends be free to equilibrate the position that minimizes the os illatory
behavior of the urve.
Ia ob Bernoulli (1705) gives the idea that "the elasti " an be obtained by
minimization of the integral from the squared urvature, in a lass of admissi-
ble fun tions. Thus, the theory of Euler-Bernoulli on erning to deformation
of the thin beams is formulated (1742).
The expression whi h is minimized in this theory is:
Z l
Ep = µ(s) · K 2 (s)ds
0

in whi h: Ep is the potential energy; l is length of the rod; µ is the rod


density; s - ar length; K - urvature as fun tion of the ar length.
Choosing the artesian referen e xOy and denoting by f the fun tion whose
graphi is "the elasti " we have:

f ′′ (x)
K(x) = and ds = {1 + [f ′ (x)]2 }1/2 dx.
{1 + [f ′ (x)]2 }3/2

Supposing the rod homogeneity µ(s) = µ (i.e. onstant density) and denoting
by (a, f (a)), (b, f (b)) the ends of the urve, the expression whi h should be
minimized be omes:
Z b
[f ′′ (x)]2
Ep = µ · ′ 2 5/2
dx.
a {1 + [f (x)] }

Admitting that f ′ (x) ≈ c, x ∈ [a, b] we obtain:


Z b
Ep ≈ c1 · [f ′′ (x)]2 dx,
a

µ
where c1 = .
(1 + c2 )5/2
If Pi = (xi , yi ), i = 1, 2, . . . , n are n knots whi h dene the division
∆ : a ≤ x1 < x2 < . . . < xn ≤ b of the interval [a, b], then the onsidered
problem is redu ed to the minimization of the integral:
Z b
[f ′′ (x)]2 dx
a

on a set of smooth fun tions for whi h we impose to pass through Pi :

f (xi ) = yi , i = 1, 2, . . . , n.
Pie ewise Polynomial Approximations: Spline Fun tions. Introdu tion 93

Due to Zphysi al reasons it results that f ′ is ontinuous and the potential


b
energy [f ′′ (x)]2 dx is nite.
a
In this way, we are led to the set:
2,2
H[a,b] 1
= {f ∈ C[a,b] | f ′ absolutely ontinuous on [a, b] and f ′′ ∈ L2[a,b] }.

From this set we will hoose those fun tions whi h pass through Pi :
2,2
U (y) = {f ∈ H[a,b] | f (xi ) = yi , i = 1, 2, . . . , n}

where y is the ve tor y = (y1 , . . . , yn ).


The Euler-Bernoulli problem be omes the following minimum prob-
lem with interpolating restri tions:
Determine the element u∗ ∈ U (y) su h that:
ku∗ ′′ k2 = inf ku′′ k2 ,
u∈U (y)

where k · k2 is the norm of the spa e L2[a,b] :


Z b
kuk22 = u2 (x) dx.
a

A ording to the given model, the spline interpolation problem an be


enoun ed in the following manner:
Denition 3.3.1. Let X be a linear spa e, (Y, k · k) a normed linear spa e,
U = {f ∈ X | f (xi ) = yi , i = 1, . . . , n} ⊂ X and T : X → Y . The problem
of the determination of the fun tion u∗ ∈ U having the property
kT u∗ k = inf kT uk
u∈U

is alled spline interpolation problem.


A solution of the spline interpolation problem is alled interpolating spline
fun tion, and the set U is alled interpolating set.
The elements from the formulation of the spline interpolation problem are:
spa es X and Y , interpolating set U and appli ation T . By parti ularization
these elements we an give dierent types of spline fun tions (polynomial,
exponential, trigonometri ) whi h are solutions of the problems of this type.
In the ase of the Euler-Bernoulli problem presented above, the elements
from the spline interpolation problem are:
2,2
X = H[a,b] ,
94 Interpolation, Polynomials Approximation, Spline Fun tions
Z b
(Y, k · k) = (L2[a,b] , kf k22 = f 2 ),
a
2,2 2,2
U = U (y) = {f ∈ H[a,b] | f (xi ) = yi , i = 1, . . . , n} ⊂ H[a,b] ,

T f = f ′′ .

3.4 The Spline Polynomial Interpolation


For dening the spa es X , Y and the interpolating set U we onsider:

• the set of the fun tions f ∈ C[a,b]


m−1
, m ∈ {1, 2, . . .}, having the derivative
f (m−1)
absolutely ontinuous on [a, b]:
m
H[a,b] m−1
= {f ∈ C[a,b] | f (m−1) absolutely ontinuous on [a, b]},

whi h admit the following Taylor representation:


m−1 Z x
X (x − a)k (k) (x − t)m−1 (m)
f (x) = · f (a) + · f (t)dt,
k=0
k! a (m − 1)!

hen e
m,2
X = H[a,b] m−1
= {f ∈ C[a,b] | f (m−1) absolutely ontinuous on [a, b] and f (m) ∈ L2[a,b] }.

m,2
The spa e X = H[a,b] is a subspa e of the ve torial spa e H[a,b]
m
and an
be organized as a Hilbert spa e with the s alar produ t dened by:
Z b m−1
X
(m) (m)
< f, g >m,2 = f (x) · g (x) dx + f (k) (a) · g (k) (a),
a k=0

whi h generates the norm:


m−1
X  2
kf k2m,2 = kf (m) k22 + f (k) (a) .
k=0

• the linear spa e with norm:


Z b
(Y, k · k2 ) = (L2[a,b] , kf k22 = f 2 (x)dx.
a
The Spline Polynomial Interpolation 95

• a set of n linear independent fun tionals Φ = {ϕi | i = 1, 2, . . . , n},


m,2
m ∈ IN , dened on H[a,b] , for dening the interpolating set:
m,2
U = U (y) = {f ∈ H[a,b] | ϕi (f ) = yi , i = 1, 2, . . . , n},
y ∈ IRn .
Denition 3.4.1. The problem of nding those elements u∗ ∈ U whi h sat-
isfy the property:
ku∗(m) k2 = inf ku(m) k2
u∈U (y)

is alled polynomial spline interpolation problem.


Remark 3.4.1. The polynomial spline interpolation problem onsists of the
determination of those fun tion, from the set U (y), whi h are more appro-
priate for the polynomial p of maximum degree m − 1.
Denition 3.4.2. A solution u∗ of the polynomial spline interpolation prob-
lem is alled polynomial spline fun tion.
The theorem of existen e of the solution of the polynomial spline inter-
polation problem is based on the following lemma.
Lema 3.4.1. Let be {v1 , v2 , . . . , vm } a base for the ve torial spa e of the poly-
nomials of maximum degree m − 1. For any number k ∈ {1, 2, . . . , m}, there
exists a linear fun tional ϕk dened on the spa e H[a,b]
m,2
having the property:
ϕk (vi ) = δki .
Proof: m,2
Any fun tion f ∈ H[a,b] an be represented by Taylor formula as
follows:
Z x
x−a ′ (x − a)m−1 (m−1) (x − t)m−1 (m)
f (x) = f (a)+ ·f (a)+. . .+ ·f (a)+ ·f (t) dt.
1! (m − 1)! a (m − 1)!
We onsider the Taylor polynomial:
x−a ′ (x − a)m−1 (m−1)
pf (x) = f (a) + · f (a) + . . . + ·f (a)
1! (m − 1)!
whi h we write in the base vi (x) as follows:
m
X
pf (x) = ci (f ) · vi (x).
i=1
m,2
The linear fun tionals dened on H[a,b] by ϕi (f ) = ci (f ), i = 1, n verify
ϕk (vi ) = δki .
96 Interpolation, Polynomials Approximation, Spline Fun tions

Theorem 3.4.1. (existence)
If the functionals $\varphi_i \in \Phi$, $i = \overline{1,n}$, are bounded and the set:

$$U(y) = \{f \in H^{m,2}_{[a,b]} \mid \varphi_i(f) = y_i,\ i = \overline{1,n}\}$$

is non-empty, then the polynomial spline interpolation problem has at least one solution.

Proof: We consider the set $U^{(m)} = \{v \mid v = u^{(m)},\ u \in U\} \subset L^2_{[a,b]}$ and the problem which consists of the determination of $v^* \in U^{(m)}$ having the property

$$\|v^*\|_2 = \inf_{v \in U^{(m)}} \|v\|_2.$$

If this problem has a solution, then the polynomial spline interpolation problem has at least one solution; therefore, it suffices to show that the problem above has a solution.
We remark that the set $U^{(m)}$ is non-empty (because $U$ is non-empty). From the linearity of the functionals $\varphi_i$, for any $u_1, u_2 \in U$ and $\alpha \in [0,1]$ we have:

$$\varphi_i(\alpha u_1 + (1-\alpha)u_2) = \alpha\varphi_i(u_1) + (1-\alpha)\varphi_i(u_2) = \alpha y_i + (1-\alpha)y_i = y_i,$$

$i = \overline{1,n}$. This means that $\alpha u_1 + (1-\alpha)u_2 \in U$ for any $u_1, u_2 \in U$ and $\alpha \in [0,1]$ ($U$ is a convex set). Because the differentiation operator $D^{(m)}$ of order $m$ is linear, it results that the set $U^{(m)} = D^{(m)}U$ is convex.

We will show that the set $U^{(m)}$ is closed in $L^2_{[a,b]}$, i.e., if $(g_k)$ is a sequence (from $U^{(m)}$) convergent to $g \in L^2_{[a,b]}$ in the $L^2_{[a,b]}$ sense, then $g \in U^{(m)}$. For this we will show that there exists a polynomial $p$ of maximum degree $m-1$ such that the function:

$$f(x) = p(x) + \int_a^x \frac{(x-t)^{m-1}}{(m-1)!}\cdot g(t)\,dt$$

belongs to $U$ (then, evidently, $f^{(m)} = g \in U^{(m)}$).
Because $g_k \in U^{(m)}$, there exists $f_k \in U$ such that $g_k = f_k^{(m)}$. It follows:

$$f_k(x) = p_k(x) + \int_a^x \frac{(x-t)^{m-1}}{(m-1)!}\cdot g_k(t)\,dt,$$

where $p_k$ are Taylor polynomials of maximum degree $m-1$.

We suppose that from the $n$ linear functionals $\varphi_1, \varphi_2, \ldots, \varphi_n$, the first $m$ ($m \le n$) are linearly independent on the set of polynomials of maximum degree $m-1$. In particular, we admit that $\varphi_1, \varphi_2, \ldots, \varphi_m$ are such that the matrix:

$$A = (\varphi_i(v_j))_{i=\overline{1,m},\ j=\overline{1,m}}$$

with $v_j(x) = \dfrac{(x-a)^{j-1}}{(j-1)!}$, is nonsingular.

In this case we have:

$$A\cdot\begin{pmatrix} p_k(a) \\ \vdots \\ p_k^{(m-1)}(a) \end{pmatrix} = \begin{pmatrix} \varphi_1(p_k) \\ \vdots \\ \varphi_m(p_k) \end{pmatrix}.$$

The values $\varphi_i(f_k) = y_i$ are bounded, and $\varphi_i\!\left(\displaystyle\int_a^x \frac{(x-t)^{m-1}}{(m-1)!}\cdot g_k(t)\,dt\right)$ converges to $\varphi_i\!\left(\displaystyle\int_a^x \frac{(x-t)^{m-1}}{(m-1)!}\cdot g(t)\,dt\right)$, $i = 1, 2, \ldots, n$; from this we have that the sequences $\varphi_i(p_k)$, $i = \overline{1,m}$, are bounded. Hence, every sequence $p_k^{(j)}(a)$, $j = 0, 1, \ldots, m-1$, is bounded and so contains a convergent subsequence $(p_{k_l}^{(j)}(a))$.

Let $p^{(j)}(a) = \lim_{k_l\to\infty} p_{k_l}^{(j)}(a)$, $j = 0, 1, \ldots, m-1$. Using these values we define a polynomial $p$ of maximum degree $m-1$:

$$p(x) = p(a) + \frac{p'(a)}{1!}\cdot(x-a) + \ldots + \frac{p^{(m-1)}(a)}{(m-1)!}\cdot(x-a)^{m-1}.$$

The subsequence $f_{k_l}$ given by

$$f_{k_l}(x) = p_{k_l}(x) + \int_a^x \frac{(x-t)^{m-1}}{(m-1)!}\cdot g_{k_l}(t)\,dt$$

converges to:

$$f(x) = p(x) + \int_a^x \frac{(x-t)^{m-1}}{(m-1)!}\cdot g(t)\,dt.$$

Because the set $U$ is closed we obtain that $f \in U$.

The case in which fewer than $m$ of the functionals are linearly independent on the set of polynomials of maximum degree $m-1$ can be reduced to the previous one.
Because $U^{(m)}$ is non-empty, convex and closed in $L^2_{[a,b]}$, based on a theorem from functional analysis (the theorem of the best approximation), it results that the problem $\|v^*\|_2 = \inf_{v \in U^{(m)}} \|v\|_2$ has at least one solution.

Theorem 3.4.2. (uniqueness)
1. For any solutions $u^*$, $u^{**}$ of the polynomial spline interpolation problem, the difference $u^* - u^{**}$ is a polynomial of maximum degree $m-1$.
2. The polynomial spline interpolation problem has a unique solution if and only if the set $U_0 = U(0) = \{f \in H^{m,2}_{[a,b]} \mid \varphi_i(f) = 0,\ i = 1, 2, \ldots, n\}$ does not contain any nonzero polynomial of maximum degree $m-1$.

Proof:
1. The first affirmation is a consequence of the uniqueness of the solution of the best approximation problem. Indeed, $(u^*)^{(m)}$ and $(u^{**})^{(m)}$ are solutions of the best approximation problem and hence $(u^*)^{(m)} = (u^{**})^{(m)}$. It results that the difference $u^* - u^{**}$ is a polynomial of maximum degree $m-1$.

2. For proving the second affirmation, first we will show that if the polynomial spline interpolation problem has only one solution, then $U_0 = U(0)$ does not contain any nonzero polynomial. For this we consider the solution $u^*$ of the polynomial spline interpolation problem and we suppose the contrary, i.e., that the set $U_0$ contains a nonzero polynomial $p$ of degree less than or equal to $m-1$. Considering $u^{**} = p + u^*$, because $\varphi_i(p) = 0$, $i = \overline{1,n}$, it results that $\varphi_i(u^{**}) = \varphi_i(u^*)$, $i = \overline{1,n}$. From here we obtain $u^{**} \in U(y)$, which together with the equality $u^{**(m)} = u^{*(m)}$ proves that $u^*$ and $u^{**}$ are two distinct solutions of the polynomial spline interpolation problem, which is impossible.

Similarly it can be shown that if the set $U_0 = U(0)$ does not contain any nonzero polynomial, then the polynomial spline interpolation problem has a unique solution.

The following theorem establishes an orthogonality property for a spline.

Theorem 3.4.3. The function $u^* \in U$ is a solution of the polynomial spline interpolation problem if and only if

$$\int_a^b u^{*(m)}\cdot g^{(m)} = 0, \quad \forall g \in U_0.$$

Proof: $u^* \in U$ is a solution of the polynomial spline interpolation problem if and only if $u^{*(m)} \in U^{(m)}$ is a solution of the best approximation problem, and $u^{*(m)} \in U^{(m)}$ is a solution of the best approximation problem if and only if it is orthogonal to $U_0^{(m)}$.

Consequence 3.4.1. If $u^*$ is a solution of the polynomial spline interpolation problem, then:

$$\|u^{(m)}\|_2^2 = \|u^{(m)} - u^{*(m)}\|_2^2 + \|u^{*(m)}\|_2^2, \quad \forall u \in U.$$

Proof: For $u \in U$ we have

$$\|u^{(m)}\|_2^2 = \|u^{(m)} - u^{*(m)}\|_2^2 + \|u^{*(m)}\|_2^2 + 2\int_a^b (u - u^*)^{(m)}\cdot u^{*(m)}.$$

Because $u - u^* \in U_0$, according to Theorem 3.4.3, it results that $\displaystyle\int_a^b (u - u^*)^{(m)}\cdot u^{*(m)} = 0$.
We consider the set $S$ defined by:

$$S = \left\{f \in H^{m,2}_{[a,b]} \;\Big|\; \int_a^b f^{(m)}\cdot g^{(m)} = 0,\ \forall g \in U_0\right\}.$$

Theorem 3.4.4. $S$ is a closed linear subspace of $H^{m,2}_{[a,b]}$.
Proof: Let $f_1, f_2 \in S$ and $\alpha, \beta \in \mathbb{R}^1$. We have:

$$\int_a^b (\alpha f_1 + \beta f_2)^{(m)}\cdot g^{(m)} = \alpha\int_a^b f_1^{(m)}\cdot g^{(m)} + \beta\int_a^b f_2^{(m)}\cdot g^{(m)} = 0,$$

and hence $S$ is a linear space.

For proving that $S$ is closed we consider a sequence of functions $(f_k)$ from $S$ convergent to $f \in H^{m,2}_{[a,b]}$. We should prove that $f \in S$. From $f_k \in S$ we have $\displaystyle\int_a^b f_k^{(m)}\cdot g^{(m)} = 0$, and from the convergence condition $f_k \to f$ in $H^{m,2}_{[a,b]}$ it results that $\langle f_k - f, g\rangle_{m,2} \to 0$ as $k \to \infty$, hence $\langle (f_k - f)^{(m)}, g^{(m)}\rangle_{L^2} \to 0$ as $k \to \infty$.

In this way we have:

$$\int_a^b f^{(m)}\cdot g^{(m)} = \lim_{k\to\infty}\left[\int_a^b (f^{(m)} - f_k^{(m)})\cdot g^{(m)} + \int_a^b f_k^{(m)}\cdot g^{(m)}\right] = 0$$

for any $g \in U_0$, hence $f \in S$.

Theorem 3.4.5. $S$ is the set of all solutions of the polynomial spline interpolation problems corresponding to all $y \in \mathbb{R}^n$, and it contains the set of polynomials of maximum degree $m-1$.

Proof: Let $u^*$ be a solution of the polynomial spline interpolation problem. It results that $\displaystyle\int_a^b u^{*(m)}\cdot g^{(m)} = 0$ for any $g \in U_0$, and hence $u^* \in S$. If $f \in S$, then $\displaystyle\int_a^b f^{(m)}\cdot g^{(m)} = 0$ for any $g \in U_0$, so $f$ is a solution of the polynomial spline interpolation problem corresponding to $y_1, y_2, \ldots, y_n$, where $y_i = \varphi_i(f)$, $i = 1, 2, \ldots, n$. The fact that every polynomial $p$ of maximum degree $m-1$ belongs to $S$ results from the equality $\displaystyle\int_a^b p^{(m)}\cdot g^{(m)} = 0$ (since $p^{(m)} = 0$).

Theorem 3.4.6. Let $\{v_1, \ldots, v_d\}$ be a basis of the space $P_{m-1} \cap U_0$ and $u_i^*$ a solution of the polynomial spline interpolation problem on the set $U_i = \{f \in H^{m,2}_{[a,b]} \mid \varphi_j(f) = \delta_{ij},\ j = 1, 2, \ldots, n\}$. The set $\{u_1^*, u_2^*, \ldots, u_n^*\} \cup \{v_1, v_2, \ldots, v_d\}$ is a basis for $S$.

Proof: Let $f \in S$ and $h = f - \sum_{i=1}^n u_i^*\cdot\varphi_i(f)$. The function $h$ belongs to the set $S$ and to $U_0$; moreover, the function $h$ verifies $\displaystyle\int_a^b [h^{(m)}]^2 = 0$ (take $g = h$ in the definition of $S$). From here $h^{(m)} = 0$, i.e., $h \in P_{m-1}$ and hence $h \in P_{m-1} \cap U_0$. Because the system of vectors $\{v_1, \ldots, v_d\}$ is a basis in $P_{m-1} \cap U_0$ we have:

$$h = \sum_{j=1}^d c_j\cdot v_j$$

and thus we obtain:

$$f = \sum_{i=1}^n u_i^*\cdot\varphi_i(f) + \sum_{j=1}^d c_j\cdot v_j.$$

For showing that the system of functions $\{u_1^*, u_2^*, \ldots, u_n^*, v_1, v_2, \ldots, v_d\}$ is linearly independent, we consider the relation of linear dependence

$$\sum_{i=1}^n a_i\cdot u_i^* + \sum_{j=1}^d b_j\cdot v_j = 0.$$

Applying the functional $\varphi_k$ results in

$$\varphi_k\left[\sum_{i=1}^n a_i\cdot u_i^* + \sum_{j=1}^d b_j\cdot v_j\right] = a_k = 0, \quad k = 1, 2, \ldots, n,$$

because $\varphi_k(u_i^*) = \delta_{ki}$ and $\varphi_k(v_j) = 0$, $j = \overline{1,d}$ ($v_j \in U_0$). Replacing $a_k$ in the relation of linear dependence we obtain

$$\sum_{j=1}^d b_j\cdot v_j = 0.$$

Using the linear independence of the vectors $v_1, \ldots, v_d$ we obtain that $b_j = 0$, $j = \overline{1,d}$.

If $P_{m-1} \cap U_0 = \{0\}$ (the polynomial spline interpolation problem has a unique solution), then the space $S$ has dimension $n$ and $u_1^*, \ldots, u_n^*$ is a basis in $S$.

Due to the properties:

$$\varphi_k(u_i^*) = \delta_{ki}, \quad k, i = \overline{1,n},$$

the functions $u_i^*$ are called fundamental interpolating spline functions.

Theorem 3.4.7. Let $y = (y_1, \ldots, y_n) \in \mathbb{R}^n$ and $u_1^*, \ldots, u_n^*$ the fundamental interpolating spline functions. The function $u_y^*$ defined by

$$u_y^* = \sum_{i=1}^n u_i^*\cdot y_i$$

is a solution of the polynomial spline interpolation problem related to the set $U = U(y)$.

Proof: We have $\varphi_k(u_y^*) = y_k$, $k = \overline{1,n}$, hence $u_y^* \in U$. Because $u_i^*$ are solutions of the polynomial spline interpolation problems which interpolate the sets $U_i = \{f \in H^{m,2}_{[a,b]} \mid \varphi_j(f) = \delta_{ij},\ j = 1, 2, \ldots, n\}$, it results that $\displaystyle\int_a^b u_i^{*(m)}\cdot g^{(m)} = 0$, $\forall g \in U_0$, $i = \overline{1,n}$. It follows

$$\int_a^b u_y^{*(m)}\cdot g^{(m)} = \sum_{i=1}^n y_i\int_a^b u_i^{*(m)}\cdot g^{(m)} = 0, \quad \forall g \in U_0,$$

and hence $u_y^* \in S$.

Remark 3.4.2. The previous theorem shows that if $f \in H^{m,2}_{[a,b]}$, then the function:

$$Sf = \sum_{i=1}^n u_i^*\cdot\varphi_i(f)$$

is a spline function which interpolates the function $f$, i.e. $\varphi_i(Sf) = \varphi_i(f)$. The application $S : H^{m,2}_{[a,b]} \to S$ defined above is a linear operator, and if the polynomial spline interpolation problem has a unique solution, then $S$ is idempotent.

Definition 3.4.3. The operator $S$ defined above is called the polynomial interpolating spline operator.

The structure of the solution of the polynomial spline interpolation problem depends on the nature of the functionals which define the interpolating set $U$. In the following we will formulate a theorem for the structure of the solution in the case of Birkhoff functionals.

For defining Birkhoff functionals we will consider a division $\Delta : a \le x_1 < \ldots < x_k \le b$ of $[a,b]$. For any point of the division, we consider a natural number $r_i$, $i = \overline{1,k}$, having the property $r_i \le m-1$, and a set of subscripts $I_i \subset \{0, 1, \ldots, r_i\}$, its elements being denoted by $j \in I_i$.

Definition 3.4.4. The functionals $\varphi_{ij}$ defined as:

$$\varphi_{ij}(f) = f^{(j)}(x_i), \quad i = \overline{1,k},\ j \in I_i,$$

are called Birkhoff functionals.

Theorem 3.4.8. (structural characterization)
Let $\Phi$ be a set of Birkhoff functionals and $U$ the corresponding interpolating set. The function $u \in U$ is a solution of the polynomial spline interpolation problem if and only if the following properties take place:

i. $u^{(2m)}(x) = 0$, $x \in [x_1, x_k]\setminus\{x_1, \ldots, x_k\}$;

ii. $u^{(m)}(x) = 0$, $x \in (a, x_1) \cup (x_k, b)$;

iii. $u^{(2m-1-\mu)}(x_i - 0) = u^{(2m-1-\mu)}(x_i + 0)$, $\mu \in \{0, 1, \ldots, m-1\}\setminus I_i$, for $i = 1, \ldots, k$.

Remark 3.4.3. The theorem of characterization expresses the fact that a solution $u$ of the polynomial spline interpolation problem, in the case in which $\Phi$ is a set of Birkhoff functionals, is a polynomial of degree $2m-1$ on every interval $(x_i, x_{i+1}) \subset (a, b)$, a polynomial of degree $m-1$ on the extreme intervals $[a, x_1)$ and $(x_k, b]$, and in the points $x_i$ the derivative of order $2m-1-\mu$ is continuous if the value of the $\mu$-th derivative in $x_i$ does not belong to $\Phi$. The solution $u$ is called a spline of degree $2m-1$ or natural spline of degree $2m-1$.

According to the established results we can state the following theorem.

Theorem 3.4.9. If $\Phi$ is a set of Birkhoff functionals, $U$ is the corresponding interpolating set and $S$ is the set of the spline functions which interpolate on $U$, then the following assertions are equivalent:

1. $u \in S \iff \|u^{(m)}\|_2 = \inf_{v \in U} \|v^{(m)}\|_2$;

2. $u \in S \iff \displaystyle\int_a^b u^{(m)}\cdot g^{(m)} = 0,\ \forall g \in U_0$;

3. $u \in S \iff$
$$\begin{cases} u^{(2m)}(x) = 0, & x \in [x_1, x_k]\setminus\{x_i\}_{i=1,\ldots,k} \\ u^{(m)}(x) = 0, & x \in [a, x_1) \cup (x_k, b] \\ u^{(2m-1-\mu)}(x_i+0) - u^{(2m-1-\mu)}(x_i-0) = 0, & \mu \in \{0, 1, \ldots, m-1\}\setminus I_i,\ i = 1, \ldots, k. \end{cases}$$

In particular, if $\varphi_i(f) = f(x_i)$, $i = 1, \ldots, n$, then the condition

$$u^{(2m-1-\mu)}(x_i+0) - u^{(2m-1-\mu)}(x_i-0) = 0, \quad \mu \in \{0, 1, \ldots, m-1\}\setminus I_i,\ i = 1, \ldots, k,$$

becomes

$$u \in C^{2m-2}_{[a,b]},$$

and Theorem 3.4.8 permits us to write $u(x)$ as follows:

$$u(x) = \sum_{i=0}^{m-1} a_i x^i + \sum_{k=1}^n b_k (x - x_k)_+^{2m-1},$$

in which the $m+n$ parameters $a_i$ and $b_k$, $i = \overline{0,m-1}$, $k = \overline{1,n}$, are determined by the interpolating conditions $u(x_j) = f(x_j)$, $j = \overline{1,n}$, and by the condition that $u \in P_{m-1}$ on the extreme intervals.

Examples of polynomial spline functions

Let $[a,b]$ be an interval and $\{(x_i, y_i)\}_{i=0}^{n-1}$ $n$ points which determine the knots $x_i$ of the division $\Delta : a = x_0 < x_1 < \ldots < x_{n-1} = b$.

• The polynomial spline function of first order $S(x)$ (polygonal line) is the piecewise polynomial function determined by $n-1$ polynomials $S_i(x)$ of first degree (segments of straight lines):

$$S(x) = S_i(x) = s_{i,0} + s_{i,1}(x - x_i) \qquad (3.4.1)$$

for $x \in [x_{i-1}, x_i]$, $i = \overline{1,n-1}$, with coefficients $s_{i,0}$, $s_{i,1}$ satisfying the properties:

(i) the spline function passes through every point $\{(x_i, y_i)\}_{i=0}^{n-1}$, i.e. $S(x_i) = y_i$, $i = \overline{0,n-1}$;

(ii) the spline function is continuous on the interval $[a,b]$, i.e. $S_i(x_i) = S_{i+1}(x_i)$, $i = \overline{1,n-2}$.

Imposing on the function $S(x)$ the conditions (i)-(ii), the coefficients $s_{i,0}$ and $s_{i,1}$ are obtained, and the following formula for the polynomial spline function of first order is found:

$$S(x) = S_i(x) = y_i + \frac{y_i - y_{i-1}}{x_i - x_{i-1}}(x - x_i),$$

with $x \in [x_{i-1}, x_i]$, $i = \overline{1,n-1}$.
• The polynomial spline function of second order (quadratic spline) $S(x)$ is the piecewise polynomial function determined by $n-1$ polynomials $S_i(x)$ of second degree (segments of parabolas):

$$S(x) = S_i(x) = s_{i,0} + s_{i,1}(x - x_i) + s_{i,2}(x - x_i)^2 \qquad (3.4.2)$$

for $x \in [x_{i-1}, x_i]$, $i = \overline{1,n-1}$, with coefficients $s_{i,0}$, $s_{i,1}$ and $s_{i,2}$ satisfying the properties:

(i) the spline function passes through every point $\{(x_i, y_i)\}_{i=0}^{n-1}$, i.e. $S(x_i) = y_i$, $i = \overline{0,n-1}$;

(ii) the spline function is continuous on the interval $[a,b]$, i.e. $S_i(x_i) = S_{i+1}(x_i)$, $i = \overline{1,n-2}$;

(iii) the spline function is smooth on the interval $[a,b]$, i.e. $S_i'(x_i) = S_{i+1}'(x_i)$, $i = \overline{1,n-2}$.
• The polynomial spline function of third order (cubic spline) $S(x)$ is the piecewise polynomial function determined by $n-1$ polynomials $S_i(x)$ of third degree (cubic segments):

$$S(x) = S_i(x) = s_{i,0} + s_{i,1}(x - x_i) + s_{i,2}(x - x_i)^2 + s_{i,3}(x - x_i)^3 \qquad (3.4.3)$$

for $x \in [x_{i-1}, x_i]$, $i = \overline{1,n-1}$, with coefficients $s_{i,0}$, $s_{i,1}$, $s_{i,2}$ and $s_{i,3}$ satisfying the properties:

(i) the spline function passes through every point $\{(x_i, y_i)\}_{i=0}^{n-1}$, i.e. $S(x_i) = y_i$, $i = \overline{0,n-1}$;

(ii) the spline function is continuous on the interval $[a,b]$, i.e. $S_i(x_i) = S_{i+1}(x_i)$, $i = \overline{1,n-2}$;

(iii) the spline function is smooth on the interval $[a,b]$, i.e. $S_i'(x_i) = S_{i+1}'(x_i)$, $i = \overline{1,n-2}$;

(iv) the second derivative of the spline function is continuous on the interval $[a,b]$, i.e. $S_i''(x_i) = S_{i+1}''(x_i)$, $i = \overline{1,n-2}$.

Note that each cubic polynomial $S_i$ has $s_{i,0}$, $s_{i,1}$, $s_{i,2}$ and $s_{i,3}$ as unknowns; therefore there are $4n-4$ unknowns corresponding to $n$ knots, and hence $n-1$ cubic polynomials, and $4n-6$ equations given by (i)-(iv) (in the case in which we have $n+1$ knots and hence $n$ cubic polynomials, the number of unknowns is $4n$ and the number of equations is $4n-2$). We still need two more equations, which can be obtained by imposing boundary conditions at the endpoints $x_0$ and $x_{n-1}$. The most commonly used boundary conditions are

$$S_1''(x_0) = S_{n-1}''(x_{n-1}) = 0 \qquad (3.4.4)$$

or

$$S_1'(x_0) = y_0' \quad\text{and}\quad S_{n-1}'(x_{n-1}) = y_{n-1}'. \qquad (3.4.5)$$

The boundary conditions given by Eq. (3.4.4) are called free or natural boundary conditions, and the corresponding cubic spline is called a natural cubic spline. The boundary conditions given by Eq. (3.4.5) are called clamped boundary conditions.

For computing the coefficients $s_{i,0}$, $s_{i,1}$, $s_{i,2}$ and $s_{i,3}$ of the cubic spline polynomial we impose on $S(x)$ the conditions (i)-(iv). Thus, from Eq. (3.4.3), we have:

$$S_i'(x) = s_{i,1} + 2s_{i,2}(x - x_i) + 3s_{i,3}(x - x_i)^2 \qquad (3.4.6)$$

and

$$S_i''(x) = 2s_{i,2} + 6s_{i,3}(x - x_i). \qquad (3.4.7)$$

From $S_i$, we get:

$$S_i(x_i) = s_{i,0} = y_i \qquad (3.4.8)$$

and

$$S_{i+1}(x_i) = s_{i+1,0} + s_{i+1,1}h_i + s_{i+1,2}h_i^2 + s_{i+1,3}h_i^3 \qquad (3.4.9)$$

for $i = \overline{0,n-2}$, where

$$h_i = x_i - x_{i+1}. \qquad (3.4.10)$$

From Eq. (3.4.8), Eq. (3.4.9) and the continuity condition (ii), we get

$$s_{i+1,1}h_i + s_{i+1,2}h_i^2 + s_{i+1,3}h_i^3 = y_i - y_{i+1}. \qquad (3.4.11)$$

Since $S_i'(x_i) = s_{i,1} = S_{i+1}'(x_i)$, from Eq. (3.4.6) we obtain:

$$s_{i,1} = s_{i+1,1} + 2s_{i+1,2}h_i + 3s_{i+1,3}h_i^2.$$

This can be rewritten as

$$2s_{i+1,2}h_i + 3s_{i+1,3}h_i^2 = s_{i,1} - s_{i+1,1} \qquad (3.4.12)$$

for $i = \overline{0,n-2}$.

Since $S_i''(x_i) = 2s_{i,2} = S_{i+1}''(x_i)$, from Eq. (3.4.7) we obtain:

$$2s_{i,2} = 2s_{i+1,2} + 6s_{i+1,3}h_i. \qquad (3.4.13)$$

Solving this equation for $s_{i+1,3}$ we have:

$$s_{i+1,3} = \frac{s_{i,2} - s_{i+1,2}}{3h_i}. \qquad (3.4.14)$$

Substituting $s_{i+1,3}$ from Eq. (3.4.14) in Eq. (3.4.11) and then solving for $s_{i+1,1}$ gives

$$s_{i+1,1} = \frac{y_i - y_{i+1}}{h_i} - \frac{h_i}{3}(s_{i,2} + 2s_{i+1,2}). \qquad (3.4.15)$$

Substituting $s_{i+1,1}$ from Eq. (3.4.15) and $s_{i+1,3}$ from Eq. (3.4.14) in Eq. (3.4.12) and then simplifying yields:

$$h_{i-1}s_{i-1,2} + 2(h_i + h_{i-1})s_{i,2} + h_i s_{i+1,2} = 3\frac{y_{i+1} - y_i}{h_i} - 3\frac{y_i - y_{i-1}}{h_{i-1}}, \qquad (3.4.16)$$

for $i = \overline{1,n-2}$.

For natural or free boundary conditions, from Eq. (3.4.7), we have

$$s_{0,2} = 0, \quad s_{n-1,2} = 0. \qquad (3.4.17)$$

Equations (3.4.16)-(3.4.17) can be written as a tridiagonal system in the unknowns $s_{0,2}, s_{1,2}, \ldots, s_{n-1,2}$, which can be solved by the LU decomposition method developed in the first chapter.

Remark: If in the conditions (ii)-(iv) the functions $S_{i+1}(x)$, $S_{i+1}'(x)$ and $S_{i+1}''(x)$ are replaced by the similar conditions written with $S_{i-1}(x)$, $S_{i-1}'(x)$ and $S_{i-1}''(x)$, then the coefficients $s_{i,2}$, $s_{i,1}$, $s_{i,3}$ and $s_{i,0}$ (in this order) of the cubic spline polynomial are given by:

$$h_{i-1}s_{i-1,2} + 2(h_i + h_{i-1})s_{i,2} + h_i s_{i+1,2} = 3\frac{y_{i+1} - y_i}{h_i} - 3\frac{y_i - y_{i-1}}{h_{i-1}}, \qquad (3.4.18)$$

$$s_{i,1} = \frac{y_i - y_{i-1}}{h_{i-1}} - \frac{h_{i-1}}{3}(2s_{i-1,2} + s_{i,2}), \qquad (3.4.19)$$

$$s_{i,3} = \frac{s_{i,2} - s_{i-1,2}}{3h_{i-1}}, \qquad (3.4.20)$$

$$s_{i,0} = y_i, \qquad (3.4.21)$$

where $h_i = x_{i+1} - x_i$.

Exercises

1. Find the natural cubic spline function which interpolates the data:

x_i : 27.7  28  29  30
y_i : 4.1  4.3  4.1  3.0

Compute S(28.5).

The algorithm for the implementation of cubic spline interpolation:

// Construction of the tridiagonal system
for i = 0 . . . n - 1
    h_i = x_{i+1} - x_i
for i = 2 . . . n - 1
    subdiag_i = h_{i-1}
    supradiag_i = h_i
for i = 1 . . . n - 1
    diag_i = 2 * (h_{i-1} + h_i)
    free_term_i = 3 * ((y_{i+1} - y_i)/h_i - (y_i - y_{i-1})/h_{i-1})

The LU factorization method is applied for solving the tridiagonal system (constructed above) of n - 1 equations with n - 1 unknowns. The solution of this system is the vector (c_i)_{i=1...n-1}.

c_0 = c_n = 0
for i = 0 . . . n - 1
    a_i = y_i
    b_i = (y_{i+1} - y_i)/h_i - (2*c_i + c_{i+1}) * h_i / 3
    d_i = (c_{i+1} - c_i)/(3 * h_i)

Input data:
- n
- x_i - knots, i = 0, . . . , n
- y_i - values of f in the knots

Output data:
- for i = 0 . . . n - 1:
  P_i(x) = a_i + b_i(x - x_i) + c_i(x - x_i)^2 + d_i(x - x_i)^3

Implementation of the above algorithm, made in Borland C:

#include<stdio.h>
#include<malloc.h>
float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int n);
void CitireVect(float *a, int init, int n);
void ScriereVect(float *a, int init, int n);
void Tridiag(float *a, float *b, float *c, float *d, float *x, int n);
void spline(float *x, float *y, int n);
void main()
{
float *y, *x;
int n;
printf("Enter the number of knots: "); scanf("%d",&n);
x=Vector(n);
y=Vector(n);
printf("Enter the knots: \n");
CitireVect(x,0,n-1);
printf("Enter the values of the function in the knots: \n");
CitireVect(y,0,n-1);
printf("SOLUTION\n");
spline(x,y,n-1);
}
void spline(float *x, float *y, int n)
{
float *h, *a, *b, *c, *d, *subD, *diag, *supraD, *lib;
int i;
h=Vector(n);
a=Vector(n);
b=Vector(n);
c=Vector(n);
d=Vector(n);
subD=Vector(n);
diag=Vector(n);
supraD=Vector(n);
lib=Vector(n);
for(i=0;i<n;i++)
{
h[i]=x[i+1]-x[i];              /* lengths of the subintervals */
}
c[0]=0;                        /* natural boundary conditions (3.4.17) */
c[n]=0;
for(i=2;i<n;i++)
{
subD[i]=h[i-1];                /* subdiagonal of the system (3.4.16) */
supraD[i]=h[i];                /* superdiagonal */
}
for(i=1;i<n;i++)
{
diag[i]=2*(h[i-1]+h[i]);
lib[i]=3.0*((y[i+1]-y[i])/h[i]-(y[i]-y[i-1])/h[i-1]);   /* free terms */
}
Tridiag(subD,diag,supraD,lib,c,n-1);
for(i=0;i<n;i++)
{
a[i]=y[i];
b[i]=(y[i+1]-y[i])/h[i]-(2.0*c[i]+c[i+1])*h[i]/3.0;
d[i]=(c[i+1]-c[i])/(3.0*h[i]);
}
for(i=0;i<n;i++)
{
printf("P%d (x)=%g +%g(x-%g)+ %g (x-%g)^2+ %g (x-%g)^3\n",
i,a[i],b[i],x[i],c[i],x[i],d[i],x[i]);
}
}
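
To answer the exercise's request for S(28.5), one still needs to evaluate the computed piecewise polynomial at a given point. The helper below is a minimal sketch (a hypothetical addition, not part of the original program): it assumes the arrays a, b, c, d and the knots x filled in by spline() above for n subintervals, locates the subinterval containing the evaluation point, and applies the corresponding P_i.

/* Sketch (hypothetical helper): evaluate the cubic spline at point t,
   assuming the coefficient arrays a, b, c, d computed above for n
   subintervals with knots x[0..n]. */
float spline_eval(float *x, float *a, float *b, float *c, float *d,
                  int n, float t)
{
int i = 0;
float dx;
while (i < n - 1 && t > x[i + 1])   /* locate [x_i, x_{i+1}] containing t */
    i++;
dx = t - x[i];
return a[i] + dx * (b[i] + dx * (c[i] + dx * d[i]));  /* Horner form of P_i */
}

For the data of the exercise, 28.5 lies in the second subinterval [28, 29], so S(28.5) = P_1(28.5).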
3.5 The Bernstein Polynomial

Definition 3.5.1. Let $f : [0,1] \to \mathbb{R}^1$. The polynomial defined by

$$(B_m f)(x) = \sum_{k=0}^m C_m^k\cdot x^k\cdot(1-x)^{m-k}\cdot f\!\left(\frac{k}{m}\right), \quad x \in [0,1],$$

is called the Bernstein polynomial (of degree $m$) for the approximation of the function $f$ on $[0,1]$.
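
As a quick illustration, the definition translates directly into code. The sketch below is an illustrative addition (the function name is ours): it evaluates $(B_m f)(x)$, building the binomial coefficients iteratively instead of via factorials.

#include <math.h>

/* Sketch: evaluate the Bernstein polynomial (B_m f)(x) on [0,1].
   The binomial coefficient C_m^k is updated iteratively:
   C_m^{k+1} = C_m^k * (m - k)/(k + 1). */
double bernstein(double (*f)(double), int m, double x)
{
double sum = 0.0, binom = 1.0;      /* binom = C_m^k, starting at C_m^0 = 1 */
int k;
for (k = 0; k <= m; k++) {
    sum += binom * pow(x, k) * pow(1.0 - x, m - k) * f((double)k / m);
    binom = binom * (m - k) / (k + 1);
}
return sum;
}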
Theorem 3.5.1. For any $f, g : [0,1] \to \mathbb{R}^1$ and $\alpha, \beta \in \mathbb{R}^1$ we have:

$$[B_m(\alpha f + \beta g)](x) = \alpha(B_m f)(x) + \beta(B_m g)(x), \quad \forall x \in [0,1].$$

Proof:

$$[B_m(\alpha f + \beta g)](x) = \sum_{k=0}^m C_m^k\cdot x^k\cdot(1-x)^{m-k}\cdot(\alpha f + \beta g)\!\left(\frac{k}{m}\right) =$$

$$= \sum_{k=0}^m C_m^k\cdot x^k\cdot(1-x)^{m-k}\cdot\left[\alpha f\!\left(\frac{k}{m}\right) + \beta g\!\left(\frac{k}{m}\right)\right] =$$

$$= \alpha\sum_{k=0}^m C_m^k\cdot x^k\cdot(1-x)^{m-k}\cdot f\!\left(\frac{k}{m}\right) + \beta\sum_{k=0}^m C_m^k\cdot x^k\cdot(1-x)^{m-k}\cdot g\!\left(\frac{k}{m}\right) =$$

$$= \alpha(B_m f)(x) + \beta(B_m g)(x).$$

Theorem 3.5.2. Let $f : [0,1] \to \mathbb{R}^1$. If $f(x) \ge 0$ for any $x \in [0,1]$, then:

$$(B_m f)(x) \ge 0, \quad \forall x \in [0,1].$$

Proof: Because $x \in [0,1]$, it results that $x^k \ge 0$ and $(1-x)^{m-k} \ge 0$. From $f(x) \ge 0$ for any $x \in [0,1]$, every term of the sum is non-negative, and hence:

$$(B_m f)(x) \ge 0, \quad \forall x \in [0,1].$$

Theorem 3.5.3. If $f(x) \equiv 1$, then $(B_m f)(x) \equiv 1$; if $f(x) \equiv x$, then $(B_m f)(x) \equiv x$; and if $f(x) \equiv x^2$, then $(B_m f)(x) \equiv x^2 + \dfrac{x(1-x)}{m}$.

Theorem 3.5.4. If $f : [0,1] \to \mathbb{R}^1$ verifies $m \le f(x) \le M$, $\forall x \in [0,1]$, then $(B_m f)(x)$ verifies $m \le (B_m f)(x) \le M$, $\forall x \in [0,1]$ (here $m$ and $M$ denote bounds of $f$).

Proof: We consider the function $g(x) = f(x) - m \ge 0$, $\forall x \in [0,1]$. Applying Theorem 3.5.2 we obtain $(B_m g)(x) \ge 0$. On the basis of Theorems 3.5.1 and 3.5.3 we have $(B_m g)(x) = (B_m f)(x) - m$ and hence $(B_m f)(x) - m \ge 0$, from where we obtain $(B_m f)(x) \ge m$. Analogously, the inequality $(B_m f)(x) \le M$ is obtained.

Theorem 3.5.5. (Weierstrass approximation)
If $f \in C_{[0,1]}$, then $B_m f$ converges uniformly to $f$ on $[0,1]$ for $m \to \infty$.

Proof: If $f$ is continuous on $[0,1]$, then it is uniformly continuous, and hence $\forall\varepsilon > 0$, $\exists\delta(\varepsilon) > 0$ such that $\forall x', x'' \in [0,1]$ with $|x' - x''| < \delta(\varepsilon)$ we have $|f(x') - f(x'')| < \frac{\varepsilon}{2}$.

We consider the difference $f(x) - (B_m f)(x)$ written in the form:

$$f(x) - (B_m f)(x) = \sum_{k=0}^m\left[f(x) - f\!\left(\frac{k}{m}\right)\right]\cdot C_m^k\cdot x^k\cdot(1-x)^{m-k}.$$

For $x \in [0,1]$ we denote:

$$I_m = \left\{k \;\Big|\; \left|x - \frac{k}{m}\right| < \delta(\varepsilon)\right\}, \qquad J_m = \left\{k \;\Big|\; \left|x - \frac{k}{m}\right| \ge \delta(\varepsilon)\right\}$$

and we consider $M = \max\{|f(x)| : x \in [0,1]\}$. We evaluate the difference $f(x) - (B_m f)(x)$:

$$|f(x) - (B_m f)(x)| \le \frac{\varepsilon}{2}\sum_{k\in I_m} C_m^k x^k(1-x)^{m-k} + 2M\sum_{k\in J_m} C_m^k x^k(1-x)^{m-k} \le \frac{\varepsilon}{2} + 2M\sum_{k\in J_m} C_m^k x^k(1-x)^{m-k}.$$

From the inequality $\left|\frac{k}{m} - x\right| \ge \delta$ it results that $1 \le \left(\frac{k}{m} - x\right)^2/\delta^2$, and hence

$$\sum_{k\in J_m} C_m^k x^k(1-x)^{m-k} \le \frac{1}{\delta^2}\sum_{k\in J_m}\left(\frac{k}{m} - x\right)^2 C_m^k x^k(1-x)^{m-k} \le \frac{1}{\delta^2}\sum_{k=0}^m\left(\frac{k}{m} - x\right)^2 C_m^k x^k(1-x)^{m-k} \le \frac{1}{\delta^2}\cdot\frac{x(1-x)}{m} \le \frac{1}{4m\delta^2}.$$

It follows:

$$|f(x) - (B_m f)(x)| \le \frac{\varepsilon}{2} + \frac{M}{2m\delta^2},$$

from where we have:

$$|f(x) - (B_m f)(x)| \le \varepsilon \quad\text{if}\quad m > \frac{M}{\varepsilon\delta^2},\ x \in [0,1].$$

Definition 3.5.2. The equality:

$$f = B_m f + R_m f,$$

in which $R_m f(x) = f(x) - B_m f(x)$, is called the Bernstein formula for approximation; the term $R_m f(x)$ is called the error term.

Theorem 3.5.6. If $f \in C^2_{[0,1]}$, then:

$$(R_m f)(x) = -\frac{x(1-x)}{2m}\cdot f''(\xi), \quad 0 \le \xi \le 1,$$

and

$$|(R_m f)(x)| \le \frac{1}{8m}\cdot\sup\{|f''(\xi)| \mid \xi \in [0,1]\}.$$

Proof: We have:

$$(R_m f)(x) = \int_0^1 \varphi(x,t)f''(t)\,dt,$$

where, denoting $(s)_+ = \max(s, 0)$,

$$\varphi(x,t) = (x-t)_+ - \sum_{k=0}^m C_m^k x^k(1-x)^{m-k}\cdot\left(\frac{k}{m} - t\right)_+.$$

From here we obtain:

$$(R_m f)(x) = -\frac{x(1-x)}{2m}\cdot f''(\xi), \quad 0 \le \xi \le 1.$$

The last inequality from the statement results from the inequality $x(1-x) \le \frac{1}{4}$, $\forall x \in [0,1]$.

Theorem 3.5.7. If $f : [a,b] \to \mathbb{R}^1$, then the Bernstein polynomial of degree $m$ which approximates the function $f$ on $[a,b]$ is

$$(B_m f)(x) = \sum_{k=0}^m C_m^k\cdot\left(\frac{x-a}{b-a}\right)^k\cdot\left(\frac{b-x}{b-a}\right)^{m-k}\cdot f\!\left(a + (b-a)\cdot\frac{k}{m}\right) = \frac{1}{(b-a)^m}\cdot\sum_{k=0}^m C_m^k\cdot(x-a)^k\cdot(b-x)^{m-k}\cdot f\!\left(a + (b-a)\cdot\frac{k}{m}\right).$$

Proof: The Bernstein polynomial of degree $m$ is written for the function $g(t) = f(a + (b-a)t)$, $t \in [0,1]$, and then the substitution $t = \frac{x-a}{b-a}$ is made.

Exercises

1. Using the Bernstein polynomial formula, determine the Bezier curve associated to the points A(1,1), B(2,-1), C(3,2) and D(4,-1).

Remark: The coordinate functions $x(t)$ and $y(t)$ of the Bezier curve can be written as linear combinations of the Bernstein polynomials:

$$x(t) = \sum_{i=0}^n C_n^i\cdot t^i(1-t)^{n-i}x_i, \qquad y(t) = \sum_{i=0}^n C_n^i\cdot t^i(1-t)^{n-i}y_i.$$

The algorithm for the determination of the Bezier curve using the Bernstein polynomials:

$$x(t) = \sum_{i=0}^n \binom{n}{i} t^i(1-t)^{n-i}x_i, \qquad y(t) = \sum_{i=0}^n \binom{n}{i} t^i(1-t)^{n-i}y_i$$

Input data:
- n
- x_i - abscissas of the control points, i = 0, . . . , n
- y_i - ordinates of the control points

Output data:
- the expressions of x(t) and y(t)

Implementation of this algorithm, made in Borland C:

#include<stdio.h>
#include<math.h>
#include<conio.h>
#include<malloc.h>
float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int imin, int imax);
void CitireVect(float *a, int init, int n);
void ScriereVect(float *a, int init, int n);
int fact(int n);
float comb(int n, int k);
void main()
{
float *x, *y;
int n,k;
printf("Enter the number of points: "); scanf("%d", &n);
x=Vector(0,n-1);
y=Vector(0,n-1);
printf("Enter the abscissas of the points \n");
CitireVect(x,0,n-1);
printf("Enter the ordinates of the points \n");
CitireVect(y,0,n-1);
/* the curve for n points has degree n-1 */
printf("x(t)= %g (1-t)^%d+",x[0],n-1);
for(k=1;k<n-1;k++)
printf("%g t^%d (1-t)^%d+",comb(n-1,k)*x[k],k,n-1-k);
printf("%g t^%d\n",x[n-1],n-1);
printf("y(t)= %g (1-t)^%d+",y[0],n-1);
for(k=1;k<n-1;k++)
printf("%g t^%d (1-t)^%d+",comb(n-1,k)*y[k],k,n-1-k);
printf("%g t^%d\n",y[n-1],n-1);
}
int fact(int n)
{
int i,prod=1;
for(i=2;i<=n;i++)
prod*=i;
return prod;
}
float comb(int n, int k)
{
return (fact(n)/(fact(k)*fact(n-k)));
}
Chapter 4

Numerical Differentiation

In the following, we present two methods for approximating the derivatives of a function $f$ at an arbitrary point:

• the approximation of the derivatives by finite differences, used in the case in which the function is known, but computing its derivatives is too difficult;

• the approximation of the derivatives using derivatives of the interpolating polynomials (Newton or Lagrange), used in the case when the function is known only by its values in given points.

Let $X = \{x_i \mid x_i \in \mathbb{R}^1,\ i = 0, 1, \ldots, m\}$ be a set of $m+1$ distinct real numbers $x_0 < x_1 < \ldots < x_{m-1} < x_m$ and the function $f : X \to \mathbb{R}^1$ known by its values in the points $x_i$: $y_i = f(x_i)$, $i = 0, 1, \ldots, m$.

4.1 The Approximation of the Derivatives by Finite Differences

In the previous chapter, the forward $\triangle$ and backward $\triangledown$ finite difference operators were defined:

$$\triangle f(x) = f(x+h) - f(x), \qquad \triangledown f(x) = f(x) - f(x-h).$$

Moreover, the Newton polynomial with forward finite differences:

$$p_m(x) = f(x_0) + \frac{\triangle f(x_0)}{h}(x-x_0) + \frac{\triangle^2 f(x_0)}{2!h^2}(x-x_0)(x-x_1) + \ldots + \frac{\triangle^m f(x_0)}{m!h^m}(x-x_0)\ldots(x-x_{m-1}),$$

and the Newton polynomial with backward finite differences:

$$p_m(x) = f(x_m) + \frac{\triangledown f(x_m)}{h}(x-x_m) + \frac{\triangledown^2 f(x_m)}{2!h^2}(x-x_m)(x-x_{m-1}) + \ldots + \frac{\triangledown^m f(x_m)}{m!h^m}(x-x_m)\ldots(x-x_1)$$

were given.

Hence, for a function $f$ with derivatives up to a sufficiently high order, the following representation was obtained:

$$f(x) = p_m(x) + \frac{f^{(m+1)}(\xi)}{(m+1)!}(x-x_0)(x-x_1)\ldots(x-x_{m-1})(x-x_m).$$

Using these results, the approximations of the derivatives of $f$ by finite differences can be obtained.

In order to express the first derivative by forward finite differences, we take $m = 1$ in the corresponding representation of $f$ and we compute the first derivative with respect to $x$:

$$f'(x) = \frac{\triangle f(x_0)}{h} + \frac{f''(\xi)}{2}[(x-x_0) + (x-x_1)] + \frac{f^{(3)}(\eta(x))}{3!}(x-x_0)(x-x_1),$$

where $\xi \in (x_0, x_1)$, $\eta \in (x_0, x_1)$.

Substituting $x = x_0$, $x + h = x_1$, we obtain the following approximation for the first order derivative, written in forward finite differences:

$$f'(x) = \frac{\triangle f(x)}{h} - \frac{f''(\xi)}{2}h, \quad \xi \in (x, x+h),$$

which is equivalent to

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{f''(\xi)}{2}h, \quad \xi \in (x, x+h).$$

For obtaining the approximation of the first order derivative by backward finite differences, we take $m = 1$ in the corresponding representation of $f$, we compute the first order derivative with respect to $x$ and we substitute $x = x_1$, $x - h = x_0$:

$$f'(x) = \frac{\triangledown f(x)}{h} + \frac{f''(\xi)}{2}h, \quad \xi \in (x-h, x),$$

which is equivalent to

$$f'(x) = \frac{f(x) - f(x-h)}{h} + \frac{f''(\xi)}{2}h, \quad \xi \in (x-h, x).$$

In similar ways the higher order derivatives can be obtained. For example, we compute the second derivative by forward finite differences. Thus, for $m = 2$ the representation of the function is:

$$f(x) = f(x_0) + \frac{\triangle f(x_0)}{1!h}(x-x_0) + \frac{\triangle^2 f(x_0)}{2!h^2}(x-x_0)(x-x_1) + \frac{f^{(3)}(\xi)}{3!}(x-x_0)(x-x_1)(x-x_2).$$

Computing the first two derivatives with respect to $x$ we obtain:

$$f'(x) = \frac{\triangle f(x_0)}{1!h} + \frac{\triangle^2 f(x_0)}{2!h^2}[(x-x_0) + (x-x_1)] + \frac{f^{(3)}(\xi)}{3!}[(x-x_1)(x-x_2) + (x-x_0)(x-x_2) + (x-x_0)(x-x_1)] + \frac{f^{(4)}(\eta(x))}{4!}(x-x_0)(x-x_1)(x-x_2),$$

$$f''(x) = \frac{\triangle^2 f(x_0)}{h^2} + \frac{f^{(3)}(\xi)}{3}[(x-x_0) + (x-x_1) + (x-x_2)] + \frac{f^{(4)}(\eta) + f^{(4)}(\xi_1)}{4!}[(x-x_1)(x-x_2) + (x-x_0)(x-x_2) + (x-x_0)(x-x_1)] + \frac{f^{(5)}(\eta_1(x))}{5!}(x-x_0)(x-x_1)(x-x_2),$$

where $\xi, \eta, \xi_1, \eta_1 \in (x_0, x_2)$.

Substituting $x = x_1$, $x - h = x_0$, $x + h = x_2$, we obtain:

$$f''(x_1) = \frac{\triangle^2 f(x_0)}{h^2} - \frac{h^2}{12}f^{(4)}(\xi),$$

which is equivalent to

$$f''(x_1) = \frac{f(x_0+2h) - 2f(x_0+h) + f(x_0)}{h^2} - \frac{h^2}{12}f^{(4)}(\xi), \quad \xi \in (x_0, x_0+2h).$$

Hence, the approximation of the second order derivative written with central finite differences is:

$$f''(x_0) = \frac{f(x_0+h) - 2f(x_0) + f(x_0-h)}{h^2} - \frac{h^2}{12}f^{(4)}(\xi), \quad \xi \in (x_0-h, x_0+h).$$
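
These formulas translate directly into code. The sketch below is an illustrative addition (the step h and the test function are arbitrary choices): it computes the forward and backward approximations of f' and the central approximation of f''.

#include <stdio.h>
#include <math.h>

/* Finite difference approximations of derivatives. The one-sided
   formulas for f' have truncation error O(h); the central formula
   for f'' has error O(h^2), as derived above. */
double fwd_diff(double (*f)(double), double x, double h)
{
return (f(x + h) - f(x)) / h;               /* forward difference  */
}
double bwd_diff(double (*f)(double), double x, double h)
{
return (f(x) - f(x - h)) / h;               /* backward difference */
}
double central_diff2(double (*f)(double), double x, double h)
{
return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h);
}
int main(void)
{
double x = 1.0, h = 0.01;
printf("f'(1)  ~ %g (forward), %g (backward), exact cos(1)=%g\n",
       fwd_diff(sin, x, h), bwd_diff(sin, x, h), cos(x));
printf("f''(1) ~ %g, exact -sin(1)=%g\n",
       central_diff2(sin, x, h), -sin(x));
return 0;
}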
h2 12
120 Numeri al Dierentiation

4.2 The Approximation of the Derivatives Using Derivatives of the Interpolating Polynomials

The first derivative of the function $f$, known by its values in the knots $x_0 < x_1 < \ldots < x_{m-1} < x_m$ of $[a,b]$, can be expressed by means of the first derivative of the Newton interpolating polynomial:

$$f'(x) = \frac{d}{dx}\Big(f(x_0) + (D^1 f)(x_0)(x-x_0) + (D^2 f)(x_0)(x-x_0)(x-x_1) + \ldots + (D^m f)(x_0)(x-x_0)(x-x_1)\ldots(x-x_{m-1}) + R_m(x)\Big),$$

or of the Lagrange interpolating polynomial:

$$f'(x) = \frac{d}{dx}\left(\sum_{i=0}^m \frac{(x-x_0)\ldots(x-x_{i-1})(x-x_{i+1})\ldots(x-x_m)}{(x_i-x_0)\ldots(x_i-x_{i-1})(x_i-x_{i+1})\ldots(x_i-x_m)}\cdot f(x_i) + (R_m f)(x)\right).$$

The higher order derivative $f^{(n)}(x)$ of the function $f$ is obtained by differentiating $n$ times the interpolating polynomial.

Exercises

1. Approximate the first order derivative of the function $f(x) = x$, using its values at the knots: $x_0 = 1$, $x_1 = 1.5$, $x_2 = 2$, $x_3 = 2.5$, $x_4 = 3$.

2. Approximate the derivatives $f'(0.1)$, $f''(0.2)$ if the values of the function $f$ at the following knots are given:

x_i :    0.1     0.2      0.3      0.4
f(x_i) : 0.995   0.98007  0.95534  0.92106
Chapter 5

Numerical Integration

Sometimes it is not possible to evaluate a definite integral exactly by well-known techniques, and hence numerical methods are required. Thus, we seek numerical procedures to evaluate the definite integral given by

$$\int_a^b f(x)\,dx,$$

where $f$ is continuous or Riemann-Darboux integrable on $[a,b]$. To approximate this integral we look for a sum $\sum_{j=0}^N f(x_j)\cdot c_j$, where $x_j$ are $N+1$ distinct points $a = x_0 < x_1 < \ldots < x_{N-1} < x_N = b$ called the quadrature points or nodes (knots), and the quantities $c_j$ are coefficients called weights. The basic problem is the selection of the nodes and coefficients so that

$$\left|\int_a^b f(x)\,dx - \sum_{j=0}^N f(x_j)\cdot c_j\right|$$

is minimum for a large class of functions.

The classical method is to replace the function $f$ by a polynomial that can be easily integrated. The use of the interpolating polynomial as replacement function leads to a variety of integration formulas, called the Newton-Cotes formulas. The interpolating polynomial can be obtained using a Newton divided difference formula for the interpolating polynomial or the Lagrange interpolating polynomial.

5.1 The Newton-Cotes Formula, Trapezoidal Rule, Simpson Formula

Let $f : [a,b] \to \mathbb{R}^1$ be a Riemann-Darboux integrable function and consider the equidistant knots $x_i = a + ih$, $i = 0, 1, 2, \ldots, N$ of $[a,b]$ ($h = \frac{b-a}{N}$ and $x_N = b$).

We consider the Lagrange interpolating polynomial for $f$:

$$P_N(x) = \sum_{j=0}^N f(x_j)\cdot L_j(x),$$

where

$$L_j(x) = \prod_{\substack{i=0\\ i\ne j}}^N \frac{x-x_i}{x_j-x_i},$$

and the integral of the interpolating polynomial:

$$\int_a^b P_N(x)\,dx = \sum_{j=0}^N f(x_j)\cdot c_j \quad\text{where}\quad c_j = \int_a^b \prod_{\substack{i=0\\ i\ne j}}^N \frac{x-x_i}{x_j-x_i}\,dx.$$

The quadrature Newton-Cotes formulas approximate the value of the integral $\int_a^b f(x)\,dx$ with the value of the integral $\int_a^b P_N(x)\,dx$ of the interpolating polynomial:

$$\int_a^b f(x)\,dx \approx \int_a^b P_N(x)\,dx = \sum_{j=0}^N f(x_j)\cdot c_j.$$

The trapezoidal rule and the Simpson formula represent particular cases of the quadrature Newton-Cotes formulas. Thus, for $N = 1$ we have:

$$\int_a^b f(x)\,dx \approx \frac{b-a}{2}\cdot[f(b) + f(a)] = \frac{h}{2}\cdot[f(b) + f(a)],$$

called the trapezoidal rule (without error term).

If we apply the trapezoidal rule on each subinterval $[x_{i-1}, x_i]$ of the interval $[a,b]$ ($x_i$ are equidistant knots, $x_i = a + ih$, $i = 0, 1, 2, \ldots, N$), we obtain:

$$\int_a^b f(x)\,dx = \int_{x_0}^{x_1} f(x)\,dx + \int_{x_1}^{x_2} f(x)\,dx + \ldots + \int_{x_{N-1}}^{x_N} f(x)\,dx \approx$$
$$\approx \frac{h}{2}\cdot\left[f(a) + 2\cdot\sum_{i=1}^{N-1} f(x_i) + f(b)\right],$$

called the composite trapezoidal rule (without error term).

For $N = 2$ in the quadrature Newton-Cotes formulas we obtain:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\cdot[f(a) + 4f(a+h) + f(b)],$$

called the Simpson formula.

In the following we present some further quadrature Newton-Cotes formulas.

For $N = 3$:

$$\int_a^b f(x)\,dx \approx \frac{3h}{8}\cdot[f(a) + 3f(a+h) + 3f(a+2h) + f(b)].$$

For $N = 4$:

$$\int_a^b f(x)\,dx \approx \frac{2h}{45}\cdot[7f(a) + 32f(a+h) + 12f(a+2h) + 32f(a+3h) + 7f(b)].$$

For $N = 5$:

$$\int_a^b f(x)\,dx \approx \frac{5h}{288}\cdot[19f(a) + 75f(a+h) + 50f(a+2h) + 50f(a+3h) + 75f(a+4h) + 19f(b)].$$

For $N = 6$:

$$\int_a^b f(x)\,dx \approx \frac{h}{140}\cdot[41f(a) + 216f(a+h) + 27f(a+2h) + 272f(a+3h) + 27f(a+4h) + 216f(a+5h) + 41f(b)].$$

For $N = 7$:

$$\int_a^b f(x)\,dx \approx \frac{7h}{17280}\cdot[751f(a) + 3577f(a+h) + 1323f(a+2h) + 2989f(a+3h) + 2989f(a+4h) + 1323f(a+5h) + 3577f(a+6h) + 751f(b)].$$

In these cases, the truncation errors are established by calculus on the basis of the Mean Value Theorem for Definite Integrals:

Theorem 5.1.1. (Mean Value Theorem for Definite Integrals)
Let $f$ be continuous on a closed interval $[a,b]$ and $w$ be an integrable function which does not change sign on $[a,b]$. Then there is at least one number $c \in [a,b]$ such that

$$\int_a^b f(x)w(x)\,dx = f(c)\int_a^b w(x)\,dx.$$

Thus, for $N = 1$ and $x_0 = a$, $x_1 = b$ we have:

$$f(x) = \frac{x-b}{a-b}f(a) + \frac{x-a}{b-a}f(b) + \frac{f''(\xi)}{2!}(x-a)(x-b),$$

where $a < \xi(x) < b$. Integrating from $a$ to $b$ we obtain

$$\int_a^b f(x)\,dx = \frac{b-a}{2}[f(a) + f(b)] + \frac{1}{2}\int_a^b f''(\xi)(x-a)(x-b)\,dx.$$

On the basis of the Mean Value Theorem, the above error term becomes:

$$\frac{1}{2}\int_a^b f''(\xi)(x-a)(x-b)\,dx = \frac{1}{2}f''(c)\int_a^b (x-a)(x-b)\,dx,$$

where $a < c < b$. Let $u = x - a$ and $h = b - a$. Then the error term is

$$\frac{1}{2}\int_a^b f''(\xi)(x-a)(x-b)\,dx = \frac{1}{2}f''(c)\cdot\frac{-h^3}{6},$$

and hence the truncation error for $N = 1$ is given by

$$E_1 = -\frac{h^3}{12}\cdot f^{(2)}(c).$$
Analogously, we obtain the following truncation errors:

$$N = 2: \quad E_2 = -\frac{h^5}{90}\cdot f^{(4)}(c);$$

$$N = 3: \quad E_3 = -\frac{3h^5}{80}\cdot f^{(4)}(c);$$

$$N = 4: \quad E_4 = -\frac{8h^7}{945}\cdot f^{(6)}(c);$$

$$N = 5: \quad E_5 = -\frac{275h^7}{12096}\cdot f^{(6)}(c);$$

$$N = 6: \quad E_6 = -\frac{9h^9}{1400}\cdot f^{(8)}(c);$$

$$N = 7: \quad E_7 = -\frac{8183h^9}{518400}\cdot f^{(8)}(c).$$

Exercises

1. Use the trapezoidal rule, the composite trapezoidal rule and the Simpson formula, respectively, to approximate the definite integral:

$$\int_0^1 \frac{1}{1+x^2}\,dx$$

The algorithms for the trapezoidal rule, the composite trapezoidal rule and the Simpson formula are:

// trapezoidal rule
h = b - a
integral ~ (h/2) * [f(a) + f(b)]

// composite trapezoidal rule
h = (b - a)/n
integral ~ (h/2) * [f(a) + 2 * (f(x_1) + ... + f(x_{n-1})) + f(b)]
where x_i = a + i * h

// the Simpson formula
h = (b - a)/2
integral ~ (h/3) * [f(a) + 4 * f(a + h) + f(b)]

Input data:
- a and b - the ends of the interval on which we approximate the definite integral
- n - the number of subintervals

Output data:
- the value of the integral computed using the 3 methods

Implementation in Borland C:
#include<stdio.h>
#include<ctype.h>
#include<math.h>
float f(float x)
{
return 1.0/(1.0+x*x);   /* integrand of the exercise */
}
float trapezgen(float a, float b, int n)
{
int i;
float h,s=0.0;
h=(b-a)/n;
for(i=1;i<n;i++)
s+=f(a+i*h);
return (h*0.5*(f(a)+f(b))+h*s);
}
float simpson(float a, float b)
{
float h;
h=(b-a)/2;
return ((h/3)*(f(a)+4*f(a+h)+f(b)));
}
float trapez(float a, float b)
{
float h;
h=b-a;
return ((h/2)*(f(a)+f(b)));
}
void main(void){
float a,b;
int n;
char op;
printf("\n\nEnter a: ");scanf("%f",&a);
printf("\nEnter b: ");scanf("%f",&b);
printf("\nEnter n: ");scanf("%d",&n);
printf("\nMenu\n");
printf("Choose one of the options:\n\n simple (T)rapezoidal rule\n composite (G)eneral trapezoidal rule\n (S)impson formula\n e(X)it\n");
fflush(stdin);
scanf(" %c",&op);
op=toupper(op);
while (op !='X')
{
switch (op)
{
case 'T' : printf("Approximation of the integral by the trapezoidal rule: %g ",trapez(a,b));
break;
case 'G' : printf("Approximation of the integral by the composite trapezoidal rule: %g ",trapezgen(a,b,n));
break;
case 'S' : printf("Approximation of the integral by the Simpson formula: %g ",simpson(a,b));
break;
}
printf("\nMenu\n");
printf("Choose one of the options:\n\n simple (T)rapezoidal rule\n composite (G)eneral trapezoidal rule\n (S)impson formula\n e(X)it\n");
fflush(stdin);
scanf(" %c",&op);
op=toupper(op);
}
}

5.2 Gaussian Integration Formulas

The Newton-Cotes formulas are of the form:

$$\int_a^b f(x)\,dx \approx a_0 f(x_0) + a_1 f(x_1) + \ldots + a_N f(x_N), \qquad (5.2.1)$$

where the knots $x_0, x_1, \ldots, x_N$ of the interval $[a,b]$ are equidistant.

These methods are clearly preferable for integrating a function that is given in equally-spaced tabulated form. However, if a function $f$ is known analytically, there is no need to require equally-spaced (equidistant) nodes for the integration formulas.

If the knots $x_0, x_1, \ldots, x_N$ are not fixed in advance and if there are no other restrictions on them, then in Eq. (5.2.1) there are $2N+2$ unknowns or parameters $a_0, a_1, \ldots, a_N$ and $x_0, x_1, \ldots, x_N$, which should satisfy $2N+2$ equations.

We can obtain the $2N+2$ equations by imposing that formula (5.2.1) be exact for the polynomials $1, x, x^2, \ldots, x^{2N+1}$. Gauss showed that by selecting $x_0, x_1, \ldots, x_N$ properly it is possible to construct formulas far more accurate than the corresponding Newton-Cotes formulas. The formulas based on this principle are called Gaussian integration formulas.

In the following, we will determine the parameters in the case of two points. More precisely, we will determine the four parameters $a_0, a_1, x_0, x_1$ ($2N+2 = 4$), if the integral involved is of the form $\displaystyle\int_{-1}^1 f(x)\,dx$.

Hence, the parameters $a_0, a_1, x_0, x_1$ will be computed such that:

$$\int_{-1}^1 f(x)\,dx \approx a_0 f(x_0) + a_1 f(x_1)$$

and such that equality is obtained for the polynomials $1, x, x^2, x^3$. Imposing these conditions, the following system is obtained:

$$\begin{cases} a_0 + a_1 = 2 \\ a_0 x_0 + a_1 x_1 = 0 \\ a_0 x_0^2 + a_1 x_1^2 = \frac{2}{3} \\ a_0 x_0^3 + a_1 x_1^3 = 0. \end{cases} \qquad (5.2.2)$$
The solution of this nonlinear system is:

$$a_0 = a_1 = 1, \qquad -x_0 = x_1 = \frac{\sqrt{3}}{3},$$

and the integration formula becomes:

$$\int_{-1}^1 f(x)\,dx \approx f\!\left(-\frac{\sqrt{3}}{3}\right) + f\!\left(\frac{\sqrt{3}}{3}\right). \qquad (5.2.3)$$

This integration formula is called the two-points Gaussian integration formula.

It is remarkable that by adding $f\!\left(-\frac{\sqrt{3}}{3}\right)$ and $f\!\left(\frac{\sqrt{3}}{3}\right)$, we get the exact value of the integral of any polynomial of degree three or less.

In order to use Eq. (5.2.3) for computing integrals $\displaystyle\int_a^b f(x)\,dx$ on an arbitrary segment $[a,b]$ we make the change of variable:

$$x = \frac{(b-a)t + b + a}{2}$$

and we obtain:

$$\int_a^b f(x)\,dx = \frac{b-a}{2}\int_{-1}^1 f\!\left(\frac{(b-a)t + a + b}{2}\right)dt \approx \frac{b-a}{2}\left[f\!\left(\frac{(a-b)\sqrt{3} + 3(a+b)}{6}\right) + f\!\left(\frac{(b-a)\sqrt{3} + 3(a+b)}{6}\right)\right].$$
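
A minimal sketch of this two-point rule on an arbitrary interval (an illustrative addition; the test integrand is an arbitrary choice) reads:

#include <stdio.h>
#include <math.h>

/* Two-point Gaussian integration on [a,b], obtained from Eq. (5.2.3)
   through the change of variable x = ((b-a)t + b + a)/2. */
static double cube(double x) { return x * x * x; }

double gauss2(double (*f)(double), double a, double b)
{
double t  = sqrt(3.0) / 3.0;                 /* nodes +/- sqrt(3)/3 on [-1,1] */
double x0 = ((b - a) * (-t) + b + a) / 2.0;
double x1 = ((b - a) * ( t) + b + a) / 2.0;
return (b - a) / 2.0 * (f(x0) + f(x1));
}

int main(void)
{
/* exact for polynomials of degree <= 3: integral of x^3 on [0,2] is 4 */
printf("%g\n", gauss2(cube, 0.0, 2.0));
return 0;
}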

Using the same technique as the one presented for obtaining formula (5.2.3), we can determine formulas for a bigger number of terms $x_i$ and $a_i$. The inconvenience consists of the difficulty of solving the resulting nonlinear systems, so we present an alternative derivation of these formulas.

Since the $x_i$ are unknowns, we use the Lagrange interpolating polynomial, which allows arbitrarily-spaced base points:

$$f(x) = \sum_{j=0}^N f(x_j)\cdot L_j(x) + \frac{f^{(N+1)}(\xi(x))}{(N+1)!}\prod_{j=0}^N (x-x_j), \qquad (5.2.4)$$

where

$$L_j(x) = \prod_{\substack{i=0\\ i\ne j}}^N \frac{x-x_i}{x_j-x_i} \quad\text{and}\quad -1 < \xi(x) < 1.$$

If $f$ is a polynomial of degree $2N+1$ or less, then the equality (5.2.1) should give the exact value of the integral. In this case, the term $\frac{f^{(N+1)}(\xi(x))}{(N+1)!}$ is a polynomial of degree $N$ or less and will be denoted by $q_N(x)$:

$$q_N(x) = \frac{f^{(N+1)}(\xi(x))}{(N+1)!}. \qquad (5.2.5)$$

Replacing (5.2.5) in (5.2.4) and integrating between $-1$ and $1$ we obtain

$$\int_{-1}^1 f(x)\,dx = \sum_{j=0}^N f(x_j)\int_{-1}^1 L_j(x)\,dx + \int_{-1}^1 q_N(x)\cdot\prod_{j=0}^N (x-x_j)\,dx. \qquad (5.2.6)$$

We want to select $x_j$ in such a way that the error term in Eq. (5.2.6) vanishes whenever $f(x)$ is a polynomial of degree $2N+1$ or less. It follows that we want:

$$\int_{-1}^1 q_N(x)\cdot\prod_{j=0}^N (x-x_j)\,dx = 0. \qquad (5.2.7)$$

Because $\prod_{j=0}^N (x-x_j)$ is a polynomial of degree $N+1$ and $q_N(x)$ is a polynomial of degree $N$ or less, the equality (5.2.7) is verified if the polynomial $\prod_{j=0}^N (x-x_j)$ of degree $N+1$ is orthogonal to all polynomials of degree $N$ or less, on the interval $[-1,1]$.

The Legendre polynomials defined by

$$P_0(x) = 1, \quad P_1(x) = x,$$

$$P_i(x) = \frac{1}{i}\cdot[(2i-1)\cdot x\cdot P_{i-1}(x) - (i-1)\cdot P_{i-2}(x)], \quad i = 2, 3, \ldots,$$

are orthogonal polynomials over $[-1,1]$ with respect to the weight function $w(x) = 1$:

$$\int_{-1}^1 P_i(x)\cdot P_j(x)\,dx = 0, \quad i \ne j.$$
The Legendre polynomials are also linearly independent and, therefore, $q_N(x)$ can be written as a linear combination of the Legendre polynomials $P_i(x)$, $i = 0, 1, 2, \ldots, N$. If in $\prod_{j=0}^N (x-x_j)$ we choose $x_j$, $j = 0, 1, \ldots, N$, as the zeros of the Legendre polynomial $P_{N+1}(x)$, then the equality (5.2.7) is satisfied because $\prod_{j=0}^N (x-x_j)$ will be collinear with $P_{N+1}(x)$ (this is possible because the zeros of the Legendre polynomial $P_{N+1}(x)$ are real and distinct).

By selecting the zeros of the Legendre polynomial $P_{N+1}(x)$ as knots for (5.2.1), the equality (5.2.6) is reduced to the equality

$$\int_{-1}^1 f(x)\,dx = \sum_{j=0}^N f(x_j)\int_{-1}^1 L_j(x)\,dx$$

for any polynomial $f$ of degree $2N+1$ or less. Therefore,

$$\int_{-1}^1 f(x)\,dx = \sum_{j=0}^N f(x_j)\int_{-1}^1 L_j(x)\,dx + \frac{f^{(2N+2)}(\eta)}{(2N+2)!}\int_{-1}^1 \prod_{i=0}^N (x-x_i)^2\,dx,$$

and thus

$$\int_{-1}^1 f(x)\,dx \approx \sum_{j=0}^N a_j f(x_j), \qquad (5.2.8)$$

where $a_j = \displaystyle\int_{-1}^1 L_j(x)\,dx$ and $x_j$, $j = 0, 1, \ldots, N$, are the zeros of the Legendre polynomial $P_{N+1}(x)$.
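
The recurrence above makes the Legendre polynomials easy to evaluate numerically, and combined with a Newton iteration on $P_{N+1}$ it yields the Gaussian nodes. The sketch below is an illustrative addition (the helper names are ours): it evaluates $P_n(x)$ and $P_n'(x)$ by the recurrence and refines a node estimate.

#include <stdio.h>
#include <math.h>

/* Evaluate P_n and P_n' at x, using the recurrence
   i*P_i = (2i-1)*x*P_{i-1} - (i-1)*P_{i-2} and the identity
   (x^2 - 1)*P_n'(x) = n*(x*P_n(x) - P_{n-1}(x)). */
void legendre(int n, double x, double *p, double *dp)
{
double p0 = 1.0, p1 = x, pi;
int i;
if (n == 0) { *p = 1.0; *dp = 0.0; return; }
pi = x;
for (i = 2; i <= n; i++) {
    pi = ((2 * i - 1) * x * p1 - (i - 1) * p0) / i;
    p0 = p1;
    p1 = pi;
}
*p  = pi;
*dp = n * (x * pi - p0) / (x * x - 1.0);
}

/* Refine an approximate zero of P_n by Newton's method. */
double legendre_zero(int n, double x)
{
double p, dp;
int it;
for (it = 0; it < 50; it++) {
    legendre(n, x, &p, &dp);
    x -= p / dp;
}
return x;
}

int main(void)
{
/* the zeros of P_2 are the two-point Gauss nodes +/- sqrt(1/3) */
printf("%.10f (exact %.10f)\n", legendre_zero(2, 0.5), sqrt(1.0 / 3.0));
return 0;
}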

Exercises

1. Using the Gaussian formulas for $N = 2$ and $N = 3$, approximate the definite integral:

$$\int_0^1 e^{2x}\,dx.$$
Chapter 6

Differential Equations, Initial-Value Problems

Differential equations are divided into two classes, ordinary and partial, according to the number of independent variables present in the differential equations: one for ordinary and more than one for partial. The order of a differential equation is the order of its highest derivative. The general solution of an $N$th-order ordinary differential equation contains $N$ independent arbitrary constants. To determine these $N$ arbitrary constants, we need $N$ conditions. If these conditions are prescribed at one point, then these conditions are called initial conditions. A differential equation with initial conditions is called an initial-value problem (IVP). If these $N$ conditions are prescribed at different points, then these conditions are called boundary conditions. A differential equation with boundary conditions is called a boundary-value problem (BVP).

These two types of problems have different properties. An initial-value problem can be identified with a time-dependent problem. In this chapter we develop numerical methods (single-step and multi-step methods) to solve first order differential equations with given initial conditions, and numerical methods to solve second order differential equations with given boundary conditions.

6.1 Finite Difference Method for a Numerical Solution of Initial-Value Problems (IVP)

Let the initial-value problem be

$$\begin{cases} y' = f(x, y) \\ y(x_0) = y_0 \end{cases} \qquad (6.1.1)$$

where $f : (\alpha, \beta) \times (\gamma, \delta) \to \mathbb{R}^1$ is a function of $C^1$ class and $x_0 \in (\alpha, \beta)$, $y_0 \in (\gamma, \delta)$.

We consider the points $x_{i+1} = x_i + h = x_0 + (i+1)h$ for $i = 0, 1, \ldots, N-1$, $h > 0$, and we admit that $x_i \in (\alpha, \beta)$ for $i = \overline{0, N-1}$; $a = x_0$ and $b = x_N = x_0 + Nh$.

If the maximal domain of the solution of the (IVP) (6.1.1) contains the points $x_i$, which are referred to as mesh points, $i = \overline{0, N-1}$, then in these points the solution $y = y(x)$ of the (IVP) verifies:

$$y'(x_i) = f(x_i, y(x_i)), \quad i = \overline{0, N-1}. \qquad (6.1.2)$$

The simplest way to approximate Eq. (6.1.2) is to replace the derivative $y'(x_i)$ by a divided difference of first order:

$$y'(x_i) \approx \frac{y(x_i + h) - y(x_i)}{h}, \quad i = \overline{0, N-1}.$$

In this way, the following equation with differences results:

$$y(x_{i+1}) = y(x_i) + hf(x_i, y(x_i)), \quad i = \overline{0, N-1}, \qquad (6.1.3)$$

called Euler's formula or Euler's forward formula.

Starting with $y(x_0) = y_0$ for $i = 0$ and using Euler's formula (6.1.3), we first compute $y(x_1)$, then $y(x_2)$, $y(x_3)$, and so on. If $h$ is sufficiently small, then the values $y(x_1), y(x_2), \ldots, y(x_N)$ approximate numerically the values of the solution of the (IVP) (6.1.1) at the points $x_1, x_2, \ldots, x_N$.

The accuracy of a numerical solution is given by the error. More precisely, three central concepts should be followed in a numerical method:

1. convergence;

2. consistency;

3. stability.
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size $h$ goes to zero.

A numerical method is said to be consistent if the ratio between the local error (the difference between the result given by the method and the exact solution) and the step size $h$ goes to zero.

A numerical method is said to be stable if the errors which appear in a certain step are not amplified in the next steps (i.e. small and controllable errors, producing stable solutions).

If in equation (6.1.2) the derivative $y'(x_i)$ is replaced by

$$y'(x_i) \approx \frac{y(x_i) - y(x_i - h)}{h}, \quad i = \overline{1, N},$$

then the following equation with differences is obtained:

$$y(x_i) - y(x_{i-1}) - h\cdot f(x_i, y(x_i)) = 0, \quad i = \overline{1, N},$$

or

$$y(x_{i+1}) = y(x_i) + h\cdot f(x_{i+1}, y(x_{i+1})), \quad i = \overline{0, N-1}, \qquad (6.1.4)$$

called the implicit Euler formula or backward Euler formula.

The equation with differences (6.1.4) should be solved for $y(x_{i+1})$ at each step, in order to obtain numerically the approximate values of the solution of the (IVP) (6.1.1).
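
Since $y(x_{i+1})$ appears on both sides of (6.1.4), each step requires solving a (generally nonlinear) equation; a simple option is a few fixed-point iterations started from the explicit Euler prediction. The sketch below is an illustrative addition under that assumption (it is not the book's algorithm, and the right-hand side and iteration count are arbitrary choices):

#include <stdio.h>

float f(float x, float y)
{
return x - 5 * y;   /* sample right-hand side (assumption) */
}

/* One backward Euler step: solve y1 = y0 + h*f(x1, y1) by fixed-point
   iteration, started from the explicit Euler prediction. Convergence
   requires h*|df/dy| < 1, which holds here for h = 0.1. */
float backward_euler_step(float x1, float y0, float h)
{
float y1 = y0 + h * f(x1 - h, y0);   /* predictor */
int it;
for (it = 0; it < 20; it++)
    y1 = y0 + h * f(x1, y1);         /* corrector iterations */
return y1;
}

int main(void)
{
float x = 0.0f, y = 1.0f, h = 0.1f;
int i;
for (i = 0; i < 10; i++) {
    y = backward_euler_step(x + h, y, h);
    x += h;
    printf("x=%g  y=%g\n", x, y);
}
return 0;
}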
If in equation (6.1.2) the derivative $y'(x_i)$ is replaced by

$$y'(x_i) \approx \frac{y(x_i + h) - y(x_i - h)}{2h}, \quad i = \overline{1, N},$$

then the following equation with differences is obtained:

$$y(x_{i+1}) = y(x_{i-1}) + 2h\cdot f(x_i, y(x_i)), \quad i = \overline{1, N-1}, \qquad (6.1.5)$$

called the midpoint formula. For starting (6.1.5), the value $y(x_1)$ must be found using another method.
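
For instance, $y(x_1)$ can be supplied by one explicit Euler step; a minimal sketch of the resulting two-step scheme (an illustrative addition, reusing the sample right-hand side f and the #include from the previous sketch) is:

/* Midpoint (two-step) scheme (6.1.5), bootstrapped with one explicit
   Euler step for y(x1). Assumes f as defined in the previous sketch. */
void midpoint_ivp(float x0, float y0, float h, int n)
{
float yprev = y0;                 /* Y_0 */
float y = y0 + h * f(x0, y0);     /* Y_1 by Euler's formula */
float ynext;
int i;
for (i = 1; i < n; i++) {
    ynext = yprev + 2 * h * f(x0 + i * h, y);   /* formula (6.1.5) */
    yprev = y;
    y = ynext;
}
printf("y(%g) ~ %g\n", x0 + n * h, y);
}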

Exercises

1. Solve the following (IVP):

$$\begin{cases} y'(x) = x - 5y(x) \\ y(0) = 1, \quad x \in [0,1] \end{cases}$$

using the Euler formula for $n = 4$.

The algorithm for solving an (IVP) using the Euler method:

h = (b - x_0)/n
for i = 1 . . . n
    y_i = y_{i-1} + h * f(x_{i-1}, y_{i-1})
    x_i = x_{i-1} + h

Input data:
- x_0, b - the ends of the interval for x
- y_0 = y(x_0)
- n - the number of points

Output data:
- the couples (x_i, y_i), i = 0, . . . , n

Implementation of the above algorithm in Borland C:

#include<stdio.h>
#include<malloc.h>
float *Vector(int imin, int imax);
float f(float x, float y);
void Euler(float x0, float y0, float b, int n);
void main()
{
float x0, y0, b;
int n;
printf("n= "); scanf("%d", &n);
printf("b= "); scanf("%f", &b);
printf("x0= "); scanf("%f", &x0);
printf("y0= "); scanf("%f", &y0);
Euler(x0, y0, b, n);
}
void Euler(float x0, float y0, float b, int n)
{
float h, *x, *y;
int i;
x=Vector(0,n);
y=Vector(0,n);
x[0]=x0;
y[0]=y0;
h=(b-x[0])/n;
for(i=1;i<=n;i++)
{
y[i]=y[i-1]+h*f(x[i-1],y[i-1]);
x[i]=x[i-1]+h;
}
for(i=0;i<=n;i++)
{
printf("x[%d]=%f ", i,x[i]);
printf("y[%d]=%f\n", i,y[i]);
}
}
float f(float x, float y)
{
return (x-5*y);
}
float *Vector(int imin, int imax)
{
float *p;
p=(float *)malloc((size_t)((imax-imin+1) * sizeof(float)));
return (p-imin);
}

6.2 The Taylor Method for a Numerical Solution of IVP

Let the (IVP) be

$$\begin{cases} y' = f(x, y) \\ y(x_0) = y_0 \end{cases} \qquad (6.2.1)$$

where $f : (\alpha, \beta) \times (\gamma, \delta) \to \mathbb{R}^1$ is an indefinitely differentiable function and $x_0 \in (\alpha, \beta)$, $y_0 \in (\gamma, \delta)$.

The solution $y = y(x)$ of the (IVP) (6.2.1) is indefinitely differentiable and verifies:

$$y(x) = y(x_0) + \frac{y'(x_0)}{1!}(x-x_0) + \frac{y''(x_0)}{2!}(x-x_0)^2 + \ldots + \frac{y^{(n)}(x_0)}{n!}(x-x_0)^n + \frac{y^{(n+1)}(\xi)}{(n+1)!}(x-x_0)^{n+1}, \qquad (6.2.2)$$

where $|x_0 - \xi| < |x_0 - x|$.

If we denote by $h$ the difference $x - x_0$, then (6.2.2) can be written as follows:

$$y(x_0 + h) = y(x_0) + \frac{y'(x_0)}{1!}h + \frac{y''(x_0)}{2!}h^2 + \ldots + \frac{y^{(n)}(x_0)}{n!}h^n + \frac{y^{(n+1)}(\xi)}{(n+1)!}h^{n+1}. \qquad (6.2.3)$$

We know $y(x_0)$ in the right-hand side of this equality, but we do not know the derivatives $y'(x_0), y''(x_0), \ldots, y^{(n)}(x_0)$ and $y^{(n+1)}(\xi)$. In order to find them, let us first find $y'(x), y''(x), \ldots, y^{(n)}(x)$. By successive differentiation we have

$$y'(x) = f(x, y),$$

$$y''(x) = \frac{d}{dx}[f(x, y(x))] = f'(x, y) = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\cdot y'(x) = f_x + f_y\cdot f,$$

$$y^{(3)}(x) = f''(x, y) = f_{xx}(x, y(x)) + 2f(x, y(x))\cdot f_{xy}(x, y(x)) + f^2(x, y(x))\cdot f_{yy}(x, y(x)) + f_x(x, y(x))\cdot f_y(x, y(x)) + f(x, y(x))\cdot f_y^2(x, y(x)),$$

$$\ldots$$

$$y^{(n)}(x) = f^{(n-1)}(x, y).$$

Using these, the equality (6.2.3) becomes:

$$y(x_0 + h) = y(x_0) + \frac{f(x_0, y_0)}{1!}h + \frac{f'(x_0, y_0)}{2!}h^2 + \frac{f''(x_0, y_0)}{3!}h^3 + \ldots + \frac{f^{(n-1)}(x_0, y_0)}{n!}h^n + \frac{f^{(n)}(\xi, y(\xi))}{(n+1)!}h^{n+1}. \qquad (6.2.4)$$

Since $f^{(k)}(x_0, y_0)$ is complicated (otherwise, we would solve the equation analytically), the evaluation of higher derivatives is time consuming. Therefore, instead of using a high degree Taylor series over a relatively large distance, we divide the interval $[x_0, b]$ into small subintervals and use a lower-degree Taylor series over each subinterval. Let $a = x_0 < x_1 < x_2 < \ldots < x_N = b$ be a partition of the interval $[a = x_0, b]$ into $N$ equally-spaced subintervals of length $h = \frac{b-a}{N}$; $x_i = x_0 + ih$, $i = \overline{0,N}$.
From (6.2.4) we have:

$$y(x_1) = y(x_0) + \frac{f(x_0, y_0)}{1!}h + \frac{f'(x_0, y_0)}{2!}h^2 + \frac{f''(x_0, y_0)}{3!}h^3 + \ldots + \frac{f^{(n-1)}(x_0, y_0)}{n!}h^n + \frac{f^{(n)}(\xi, y(\xi))}{(n+1)!}h^{n+1}. \qquad (6.2.5)$$

For convenience we denote by $T_n(x, y, h)$ the expression

$$T_n(x, y, h) = f(x, y) + \frac{f'(x, y)}{2!}h + \ldots + \frac{f^{(n-1)}(x, y)}{n!}h^{n-1} \qquad (6.2.6)$$

and we write (6.2.5) in the form

$$y(x_1) = y(x_0) + h\cdot T_n(x_0, y_0, h) + \frac{f^{(n)}(\xi, y(\xi))}{(n+1)!}h^{n+1}. \qquad (6.2.7)$$

Since we do not know $\xi$, we cannot compute the last term; however, since $h$ is small, we may ignore the last term and we obtain:

$$y(x_1) \approx y_0 + h\cdot T_n(x_0, y_0, h). \qquad (6.2.8)$$

The approximate value of $y(x_1)$ is now known, so we can find the approximate value of $y$ at $x_2$:

$$y(x_2) \approx Y_1 + h\cdot T_n(x_1, Y_1, h),$$

where

$$Y_1 = y_0 + h\cdot T_n(x_0, y_0, h).$$

Similarly, we find the approximate values of $y$ at $x_3, x_4, \ldots, x_N$:

$$\begin{cases} Y_0 = y(x_0) \\ Y_{i+1} = Y_i + h\cdot T_n(x_i, Y_i, h) \quad\text{for } i = 0, 1, \ldots, N-1, \end{cases} \qquad (6.2.9)$$

where

$$T_n(x_i, Y_i, h) = f(x_i, Y_i) + \frac{h}{2!}f'(x_i, Y_i) + \ldots + \frac{h^{n-1}}{n!}f^{(n-1)}(x_i, Y_i).$$

This method for solving numerically an (IVP) is called the Taylor Series method of order $n$. The Taylor Series method of first order is also called Euler's method.

In the case of the Euler method we have:

$$\begin{cases} Y_0 = y(x_0) \\ Y_{i+1} = Y_i + h\cdot f(x_i, Y_i) \quad\text{for } i = 0, 1, \ldots, N-1. \end{cases} \qquad (6.2.10)$$
Exercises

1. Solve the following (IVP):

$$\begin{cases} y'(x) = y(x) - x \\ y(0) = 0, \quad x \in [0,1] \end{cases}$$

using the Taylor Series method of order 1, 2 and 3, respectively, for $h = 0.25$.
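
As a hint for the order-2 case: here $f(x,y) = y - x$, so $f'(x,y) = f_x + f_y\cdot f = -1 + (y - x)$, and $T_2(x, y, h) = f + \frac{h}{2}f'$. A minimal C sketch for this particular equation (an illustrative addition, hard-coding these derivatives) is:

#include <stdio.h>
#include <math.h>

/* Taylor Series method of order 2 for y' = y - x, y(0) = 0 (the IVP of
   the exercise). Here f'(x,y) = f_x + f_y*f = -1 + (y - x). */
int main(void)
{
double x = 0.0, y = 0.0, h = 0.25;
int i;
for (i = 0; i < 4; i++) {
    double fv  = y - x;              /* f(x_i, Y_i)  */
    double fpv = -1.0 + (y - x);     /* f'(x_i, Y_i) */
    y += h * (fv + h / 2.0 * fpv);   /* Y_{i+1} = Y_i + h*T_2 */
    x += h;
    printf("x=%g  Y=%g  exact=%g\n", x, y, 1.0 + x - exp(x));
}
return 0;
}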

6.3 The Runge-Kutta Method of the Second Order

In order to solve an initial-value problem:

$$\begin{cases} y' = f(x, y) \\ y(x_0) = y_0 \end{cases} \qquad (6.3.1)$$

we can use a Taylor Series method of higher order for better accuracy; however, in order to use high orders, we need to evaluate high order derivatives of $f(x, y)$. The Runge-Kutta methods do not require the evaluation of the derivatives of $f(x, y)$ and, at the same time, they keep the desirable property of higher-order local truncation error.

Let us start with the Taylor Series method of order two:

$$y(x_{i+1}) = y(x_i) + h\cdot f(x_i, y(x_i)) + \frac{h^2}{2!}f'(x_i, y(x_i)) + \frac{h^3}{3!}f''(\xi_i, y(\xi_i)). \qquad (6.3.2)$$

The idea is to avoid $f'(x_i, y(x_i))$ by approximating it with a forward difference:

$$f'(x_i, y(x_i)) = \frac{f(x_i + h, y(x_i + h)) - f(x_i, y(x_i))}{h} - \frac{h}{2}f''(\xi_i, y(\xi_i)),$$

and hence Eq. (6.3.2) becomes:

$$y(x_{i+1}) = y(x_i) + \frac{h}{2}[f(x_i, y(x_i)) + f(x_i + h, y(x_i + h))] - \frac{h^3}{12}f''(\eta_i, y(\eta_i)), \qquad (6.3.3)$$

which leads to the equation with differences:

$$Y_{i+1} = Y_i + \frac{h}{2}[f(x_i, Y_i) + f(x_{i+1}, Y_{i+1})]. \qquad (6.3.4)$$

In this equation, the unknown $Y_{i+1}$ from the right-hand side is replaced using Euler's formula

$$Y_{i+1} = Y_i + hf(x_i, Y_i),$$

yielding

$$Y_{i+1} = Y_i + \frac{h}{2}[f(x_i, Y_i) + f(x_{i+1}, Y_i + hf(x_i, Y_i))]. \qquad (6.3.5)$$

This formula is called the Runge-Kutta formula of second order.

Comparing it with the Taylor Series method of second order

$$Y_{i+1} = Y_i + h\cdot f(x_i, Y_i) + \frac{h^2}{2!}f'(x_i, Y_i), \qquad (6.3.6)$$

the difference is clear. In equation (6.3.5) we do not need $f'(x_i, Y_i)$, and yet it has the same truncation order as equation (6.3.6).

Our objective is to develop a general procedure to derive Eq. (6.3.5) from Eq. (6.3.6) and other similar higher-order methods.
We are looking for a formula of the form:

$$Y_{i+1} = Y_i + w_1 k_1 + w_2 k_2, \qquad (6.3.7)$$

where

$$k_1 = h\cdot f(x_i, Y_i),$$
$$k_2 = h\cdot f(x_i + \alpha_2\cdot h,\ Y_i + \beta_{21}\cdot k_1),$$

and the constants $w_1$, $w_2$, $\alpha_2$ and $\beta_{21}$ should be determined such that Eqs. (6.3.7) and (6.3.6) represent the same equation with differences.

To this aim, we write the left-hand side of equation (6.3.7) as follows:

$$Y_{i+1} = Y_i + h\cdot f(x_i, Y_i) + \frac{h^2}{2!}\cdot f'(x_i, Y_i) + \frac{h^3}{3!}\cdot f''(x_i, Y_i) + \text{h.o.t. (higher-order terms)},$$

in which, replacing

$$f = f(x_i, Y_i), \quad f_x = f_x(x_i, Y_i), \quad f_y = f_y(x_i, Y_i),$$
$$f_{xx} = f_{xx}(x_i, Y_i), \quad f_{xy} = f_{xy}(x_i, Y_i), \quad f_{yy} = f_{yy}(x_i, Y_i),$$

we obtain

$$Y_{i+1} = Y_i + h\cdot f + \frac{h^2}{2!}\cdot[f_x + f\cdot f_y] + \frac{h^3}{3!}\cdot[f_{xx} + 2f\cdot f_{xy} + f^2\cdot f_{yy} + f_y(f_x + f\cdot f_y)] + \text{h.o.t.} \qquad (6.3.8)$$

For $k_2$ from the right-hand side of the equation with differences (6.3.7), we use the Taylor Series formula of second order for a function of two variables, which gives

$$k_2 = h\cdot f(x_i + \alpha_2 h, Y_i + \beta_{21}k_1) = h\cdot\left\{f(x_i, Y_i) + \left(\alpha_2 h\frac{\partial}{\partial x} + \beta_{21}k_1\frac{\partial}{\partial y}\right)f(x_i, Y_i) + \frac{1}{2!}\left(\alpha_2 h\frac{\partial}{\partial x} + \beta_{21}k_1\frac{\partial}{\partial y}\right)^2 f(x_i, Y_i) + \text{h.o.t.}\right\}$$

$$= h\left[f + \alpha_2 h f_x + \beta_{21}h f\cdot f_y + \frac{h^2}{2}(\alpha_2^2 f_{xx} + 2\alpha_2\beta_{21}f\cdot f_{xy} + \beta_{21}^2 f^2 f_{yy})\right] + \text{h.o.t.}$$

Replacing $k_1$ and $k_2$ in the right-hand side of equation (6.3.7) we obtain

$$Y_{i+1} = Y_i + h(w_1 + w_2)\cdot f + w_2 h^2\cdot[\alpha_2 f_x + \beta_{21}f\cdot f_y] + \frac{w_2 h^3}{2}\cdot[\alpha_2^2 f_{xx} + 2\alpha_2\beta_{21}f\cdot f_{xy} + \beta_{21}^2 f^2 f_{yy}] + \text{h.o.t.} \qquad (6.3.9)$$

Comparing the coefficients of $h$ and $h^2$ on the right-hand sides of Eqs. (6.3.8) and (6.3.9), by identification we get

$$\begin{cases} w_1 + w_2 = 1 \\ \alpha_2\cdot w_2 = \frac{1}{2} \\ \beta_{21}\cdot w_2 = \frac{1}{2}. \end{cases} \qquad (6.3.10)$$

We have four unknowns and only three equations; therefore, we have one degree of freedom in the solution of Eq. (6.3.10). Solving Eq. (6.3.10) in terms of $\alpha_2$ we get

$$w_2 = \frac{1}{2\alpha_2}, \qquad w_1 = 1 - w_2 = 1 - \frac{1}{2\alpha_2} = \frac{2\alpha_2 - 1}{2\alpha_2}, \qquad \beta_{21} = \frac{1}{2w_2} = \alpha_2. \qquad (6.3.11)$$

We can use the degree of freedom to make the truncation error, produced by the $h^3$ terms in the expansions of Eqs. (6.3.8) and (6.3.9), as small as possible.
The asymptotic form of the error term is found by taking the difference between the $h^3$ terms in Eqs. (6.3.8) and (6.3.9), and is given by:

$$\text{error} = h^3\cdot\left\{\left(\frac{1}{6} - \frac{\alpha_2}{4}\right)[f_{xx} + 2f\cdot f_{xy} + f^2\cdot f_{yy}] + \frac{1}{6}f_y\cdot[f_x + f\cdot f_y]\right\}. \qquad (6.3.12)$$

If $M$ and $L$ are such that

$$|f(x, y)| < M \quad\text{and}\quad \left|\frac{\partial^{i+j} f}{\partial x^i\,\partial y^j}\right| \le \frac{L^{i+j}}{M^{j-1}} \quad\text{for } i + j \le n,$$

then the error is bounded by:

$$|\text{error}| \le h^3\cdot M L^2\cdot\left(4\cdot\left|\frac{1}{6} - \frac{\alpha_2}{4}\right| + \frac{1}{3}\right), \qquad (6.3.13)$$

and it is minimum for $\alpha_2 = \frac{2}{3}$:

$$|\text{error}| \le \frac{M\cdot L^2}{3}\cdot h^3. \qquad (6.3.14)$$
For $\alpha_2 = \frac{2}{3}$ we have $\beta_{21} = \frac{2}{3}$, $w_1 = \frac{1}{4}$, $w_2 = \frac{3}{4}$, and the equation with differences (6.3.7) becomes:

$$Y_{i+1} = Y_i + \frac{h}{4}\left[f(x_i, Y_i) + 3f\!\left(x_i + \frac{2}{3}h,\ Y_i + \frac{2}{3}h\cdot f(x_i, Y_i)\right)\right], \qquad (6.3.15)$$

which can be written as follows:

$$\begin{cases} \hat{Y}_{i+2/3} = Y_i + \dfrac{2}{3}h\cdot f(x_i, Y_i) \\[2mm] Y_{i+1} = Y_i + \dfrac{h}{4}\left[f(x_i, Y_i) + 3f\!\left(x_i + \dfrac{2}{3}h,\ \hat{Y}_{i+2/3}\right)\right] \end{cases} \qquad (6.3.16)$$

This form shows that the method can be viewed as a predictor-corrector method. The first equation predicts $\hat{Y}_{i+2/3}$, a preliminary value, while the second equation gives the corrected value $Y_{i+1}$ by using the preliminary value.
Other solutions of the system (6.3.10) used in the literature are:

a) \alpha_2 = \beta_{21} = 1, w_1 = w_2 = 1/2.
   In this case the difference equation is:

   Y_{i+1} = Y_i + \frac{h}{2} [f(x_i, Y_i) + f(x_i + h, Y_i + h f(x_i, Y_i))]   (6.3.17)

   which is known as the Improved Euler method or Heun method.
   Eq. (6.3.17) can be written as follows:

   \hat{Y}_{i+1} = Y_i + h f(x_i, Y_i)
   Y_{i+1} = Y_i + \frac{h}{2} [f(x_i, Y_i) + f(x_i + h, \hat{Y}_{i+1})]   (6.3.18)

   The first equation predicts the preliminary value \hat{Y}_{i+1}, while the second equation gives the corrected value Y_{i+1} by using the preliminary value \hat{Y}_{i+1}. This can be viewed as a predictor-corrector method, too.

b) \alpha_2 = \beta_{21} = 1/2, w_1 = 0, w_2 = 1.
   In this case the difference equation is:

   Y_{i+1} = Y_i + h f\left( x_i + \frac{h}{2}, Y_i + \frac{h}{2} f(x_i, Y_i) \right)   (6.3.19)

   which is known as the Modified Euler method or Improved Polygon method.
   Writing Eq. (6.3.19) as:

   \hat{Y}_{i+1/2} = Y_i + \frac{h}{2} f(x_i, Y_i)
   Y_{i+1} = Y_i + h f\left( x_i + \frac{h}{2}, \hat{Y}_{i+1/2} \right)   (6.3.20)

   this can be viewed as a predictor-corrector method, too.

Exercises
1. Solve the (IVP):

   y'(x) = x^3 + x \cdot y^2(x)
   y(0) = 1,   x \in [0, 1]

   using the second-order Runge-Kutta method for n = 10.

The algorithm for solving numerically an (IVP) using the second-order Runge-Kutta method:

h = (b - x_0)/n
for i = 1 ... n
    k_1 = h \cdot f(x_{i-1}, y_{i-1})
    k_2 = h \cdot f(x_{i-1} + \alpha_2 h, y_{i-1} + \beta_{21} k_1)
    y_i = y_{i-1} + w_1 k_1 + w_2 k_2
    x_i = x_{i-1} + h

- for the Improved Euler method: \alpha_2 = \beta_{21} = 1, w_1 = w_2 = 1/2
- for the Modified Euler method: \alpha_2 = \beta_{21} = 1/2, w_1 = 0, w_2 = 1
- for the Runge-Kutta method with the smallest error: \alpha_2 = \beta_{21} = 2/3, w_1 = 1/4, w_2 = 3/4
Input data:
- x_0, b - the ends of the interval for x
- y_0 = y(x_0)
- n - number of points

Output data:
- couples (x_i, y_i), i = 0, ..., n

Implementation of this algorithm in Borland C language:

#include <stdlib.h>
#include <stdio.h>
#include <conio.h>

/* right-hand side of the (IVP) in the exercise: y' = x^3 + x*y^2 */
float f(float x, float y)
{
    return x*x*x + x*y*y;
}

/* second-order Runge-Kutta method with parameters a2, b21, w1, w2 */
void rk2(float a2, float b21, float w1, float w2, float h, float x0, float y0,
int n)
{
    int i;
    float k1, k2, x, y;
    for(i = 1; i <= n; i++)
    {
        k1 = h*f(x0, y0);
        k2 = h*f(x0 + a2*h, y0 + b21*k1);
        y = y0 + w1*k1 + w2*k2;
        x = x0 + h;
        printf("(%g,%g)\n", x, y);
        x0 = x;
        y0 = y;
    }
}

void main()
{
    int n;
    float b, h, x0, y0, a2, b21, w1, w2;
    char c;
    printf("Enter n: ");  scanf("%d", &n);
    printf("Enter b: ");  scanf("%f", &b);
    printf("Enter x0: "); scanf("%f", &x0);
    printf("Enter y0: "); scanf("%f", &y0);
    h = (b - x0)/(float)n;
    do
    {
        printf("\n");
        printf(" Runge-Kutta methods\n");
        printf("1. Improved Euler method\n");
        printf("2. Modified Euler method\n");
        printf("3. Runge-Kutta method with the smallest error\n");
        printf("4. Exit\n");
        printf("Enter your option: "); scanf(" %c", &c);
        printf("\n");
        switch(c)
        {
        case '1': { // Improved Euler method
            a2 = 1.0; b21 = 1.0; w1 = 0.5; w2 = 0.5;
            rk2(a2, b21, w1, w2, h, x0, y0, n);
            break;
        }
        case '2': { // Modified Euler method
            a2 = 0.5; b21 = 0.5; w1 = 0; w2 = 1;
            rk2(a2, b21, w1, w2, h, x0, y0, n);
            break;
        }
        case '3': { // Runge-Kutta with the smallest error
            a2 = 2.0/3.0; b21 = 2.0/3.0; w1 = 0.25; w2 = 0.75;
            rk2(a2, b21, w1, w2, h, x0, y0, n);
            break;
        }
        case '4': exit(0);
        default: { printf("Invalid option!"); break; }
        }
    } while (c != '4');
}

6.4 The Runge-Kutta Method of the Third Order and Fourth Order
The second-order Runge-Kutta methods were derived from Y_{i+1} = Y_i + w_1 k_1 + w_2 k_2. In order to derive the third-order Runge-Kutta method, we add one term or stage, w_3 k_3:

Y_{i+1} = Y_i + w_1 k_1 + w_2 k_2 + w_3 k_3   (6.4.1)

where

k_1 = h \cdot f(x_i, Y_i)
k_2 = h \cdot f(x_i + \alpha_2 h, Y_i + \beta_{21} k_1)
k_3 = h \cdot f(x_i + \alpha_3 h, Y_i + \beta_{31} k_1 + \beta_{32} k_2)

and w_1, w_2, w_3, \alpha_2, \beta_{21}, \alpha_3, \beta_{31}, \beta_{32} are eight unknown constants which should be determined by expanding Eq. (6.4.1) in a Taylor series around (x_i, Y_i) and comparing it with the corresponding terms of the third-order Taylor series method:

Y_{i+1} = Y_i + h \cdot f(x_i, Y_i) + \frac{h^2}{2!} f'(x_i, Y_i) + \frac{h^3}{3!} f''(x_i, Y_i) + higher-order terms   (6.4.2)
Expanding k_2 and k_3 in Taylor series around (x_i, Y_i) and keeping terms up to h^3, we get:

k_2 = h \left\{ f + h [\alpha_2 f_x + \beta_{21} f f_y] + \frac{h^2}{2} [\alpha_2^2 f_{xx} + 2 \alpha_2 \beta_{21} f f_{xy} + \beta_{21}^2 f^2 f_{yy}] + higher-order terms \right\}

k_3 = h \left\{ f + h [\alpha_3 f_x + (\beta_{31} + \beta_{32}) f f_y] + \frac{h^2}{2} [2 \beta_{32} (\alpha_2 f_x + \beta_{21} f f_y) f_y + \alpha_3^2 f_{xx} + 2 \alpha_3 (\beta_{31} + \beta_{32}) f f_{xy} + (\beta_{31} + \beta_{32})^2 f^2 f_{yy}] + higher-order terms \right\}.

Replacing k_1, k_2 and k_3 in Eq. (6.4.1) we obtain

Y_{i+1} = Y_i + h (w_1 + w_2 + w_3) f + h^2 \{ (w_2 \alpha_2 + w_3 \alpha_3) f_x + [w_2 \beta_{21} + w_3 (\beta_{31} + \beta_{32})] f f_y \}
        + h^3 \left\{ \frac{1}{2} (w_2 \alpha_2^2 + w_3 \alpha_3^2) f_{xx} + [w_2 \alpha_2 \beta_{21} + w_3 \alpha_3 (\beta_{31} + \beta_{32})] f f_{xy} + \frac{1}{2} [w_2 \beta_{21}^2 + w_3 (\beta_{31} + \beta_{32})^2] f^2 f_{yy} + w_3 \alpha_2 \beta_{32} f_x f_y + w_3 \beta_{21} \beta_{32} f f_y^2 \right\} + higher-order terms.

Substituting for f'(x_i, Y_i) and f''(x_i, Y_i) in equation (6.4.2), we have:

Y_{i+1} = Y_i + h f + \frac{h^2}{2} (f_x + f f_y) + \frac{h^3}{3!} (f_{xx} + 2 f f_{xy} + f^2 f_{yy} + f_x f_y + f f_y^2) + higher-order terms.
Comparing the coefficients of h, h^2 and h^3, by identification we get:

w_1 + w_2 + w_3 = 1
w_2 \alpha_2 + w_3 \alpha_3 = 1/2
w_2 \beta_{21} + w_3 (\beta_{31} + \beta_{32}) = 1/2
w_2 \alpha_2^2 + w_3 \alpha_3^2 = 1/3   (6.4.3)
w_2 \alpha_2 \beta_{21} + w_3 \alpha_3 (\beta_{31} + \beta_{32}) = 1/3
w_2 \beta_{21}^2 + w_3 (\beta_{31} + \beta_{32})^2 = 1/3
w_3 \alpha_2 \beta_{32} = 1/6
w_3 \beta_{21} \beta_{32} = 1/6

Since w_3 \alpha_2 \beta_{32} = 1/6 = w_3 \beta_{21} \beta_{32}, it results that \alpha_2 = \beta_{21}.
Further,

w_2 \alpha_2^2 + w_3 \alpha_3^2 = 1/3 = w_2 \alpha_2 \beta_{21} + w_3 \alpha_3 (\beta_{31} + \beta_{32})

together with \alpha_2 = \beta_{21} implies \alpha_3 = \beta_{31} + \beta_{32}. Thus, the system (6.4.3) is reduced to the system:

w_1 + w_2 + w_3 = 1
w_2 \beta_{21} + w_3 (\beta_{31} + \beta_{32}) = 1/2
w_2 \beta_{21}^2 + w_3 (\beta_{31} + \beta_{32})^2 = 1/3   (6.4.4)
w_3 \beta_{21} \beta_{32} = 1/6

which has 4 equations and 6 unknowns. Thus we obtain a two-parameter family of such methods.
The classical method of order three is given by

w_1 = w_3 = \frac{1}{6};   w_2 = \frac{2}{3};   \alpha_2 = \frac{1}{2};   \alpha_3 = 1;   \beta_{21} = \frac{1}{2};   \beta_{31} = -1;   \beta_{32} = 2

and it follows that

Y_{i+1} = Y_i + \frac{1}{6} (k_1 + 4 k_2 + k_3)   (6.4.5)

where

k_1 = h \cdot f(x_i, Y_i)
k_2 = h \cdot f(x_i + \frac{h}{2}, Y_i + \frac{k_1}{2})
k_3 = h \cdot f(x_i + h, Y_i - k_1 + 2 k_2).
Higher-order Runge-Kutta formulas can be derived in the same way; however, as the order increases, the complexity increases very rapidly. The best-known Runge-Kutta method, of four stages and fourth order, is given by:

Y_{i+1} = Y_i + \frac{1}{6} (k_1 + 2 k_2 + 2 k_3 + k_4)   (6.4.6)

where

k_1 = h \cdot f(x_i, Y_i)
k_2 = h \cdot f(x_i + \frac{h}{2}, Y_i + \frac{k_1}{2})
k_3 = h \cdot f(x_i + \frac{h}{2}, Y_i + \frac{k_2}{2})
k_4 = h \cdot f(x_i + h, Y_i + k_3).

At first sight these formulas may seem complicated, but they are easy to program and they achieve a very good speed of convergence.
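To make this concrete, here is a minimal sketch in Borland C style of one step of the classical fourth-order formula (6.4.6); the right-hand side f(x, y) = x + y and the function name rk4step are our own choices for illustration, not part of the text:

#include <stdio.h>

/* a hypothetical right-hand side, y' = x + y (for illustration only) */
float f(float x, float y)
{
    return x + y;
}

/* one step of the classical fourth-order Runge-Kutta formula (6.4.6) */
float rk4step(float x, float y, float h)
{
    float k1 = h*f(x, y);
    float k2 = h*f(x + h/2, y + k1/2);
    float k3 = h*f(x + h/2, y + k2/2);
    float k4 = h*f(x + h, y + k3);
    return y + (k1 + 2*k2 + 2*k3 + k4)/6;
}

void main()
{
    float x = 0.0, y = 1.0, h = 0.1;   /* y(0) = 1 */
    int i;
    for(i = 1; i <= 10; i++)
    {
        y = rk4step(x, y, h);
        x = x + h;
        printf("(%g,%g)\n", x, y);
    }
}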

6.5 The Adams-Bashforth and Adams-Moulton Methods

The methods discussed in the previous sections use only Y_i to obtain the value of Y_{i+1}; therefore, those methods are called single-step methods. As we move away from x_0, the error |y(x_{i+1}) - Y_{i+1}| increases. Since the approximate values at x_0, x_1, ..., x_i are available, it seems reasonable to use those values to approximate Y_{i+1} more accurately.
A method that uses k approximate values of y to compute Y_{i+1} is called a k-step method or multi-step method.
In the following we present multi-step methods for solving the (IVP):

y' = f(x, y)
y(x_0) = y_0

where f : (a, b) \times (c, d) \to \mathbb{R} is a function of C^1 class.


The solution y = y(x) of this (IVP) verifies:

y'(x) = f(x, y(x)),   \forall x \in (\alpha, \beta),   a \le \alpha < x_0 < \beta \le b,

where (\alpha, \beta) is the maximal interval on which the solution is defined.
For any knots x_0 < x_1 < x_2 < ... < x_i < x_{i+1} < ... < x_N from the maximal interval (\alpha, \beta) on which the solution is defined, we have:

\int_{x_i}^{x_{i+1}} y'(x) dx = \int_{x_i}^{x_{i+1}} f(x, y(x)) dx   or   y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x)) dx.

Since y(x) is not known and f(x, y(x)) cannot be integrated exactly, we approximate f(x, y(x)) by an interpolating polynomial that uses the previously obtained data points (x_i, f(x_i, y(x_i))), (x_{i-1}, f(x_{i-1}, y(x_{i-1}))), ..., (x_{i-k}, f(x_{i-k}, y(x_{i-k}))).
Let k = 0. Then the equality

y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x)) dx

becomes:

y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} [f(x_i, y(x_i)) + (x - x_i) \cdot f'(\eta_i(x), y(\eta_i(x)))] dx
           = y(x_i) + h \cdot f(x_i, y(x_i)) + \frac{h^2}{2} f'(\xi_i, y(\xi_i)),

where h = x_{i+1} - x_i, x_i < \xi_i < x_{i+1}, and f' = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y} \cdot y'. This gives the one-step Euler method

Y_{i+1} = Y_i + h f(x_i, Y_i).

Let k = 1. Although any interpolating polynomial through (x_i, f(x_i, y(x_i))) and (x_{i-1}, f(x_{i-1}, y(x_{i-1}))) can be used, it is very convenient to use the Newton backward difference formula. Let h = x_{i+1} - x_i = x_i - x_{i-1}. Then the equality

y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x)) dx

becomes

y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} \left[ f(x_i, y(x_i)) + (x - x_i) \cdot \frac{\nabla f(x_i, y(x_i))}{h} + \frac{(x - x_i)(x - x_{i-1})}{2!} \cdot f''(\eta_i, y(\eta_i)) \right] dx
= y(x_i) + h \cdot f(x_i, y(x_i)) + \frac{h}{2} \cdot \nabla f(x_i, y(x_i)) + \frac{f''(\xi_i, y(\xi_i))}{2!} \int_{x_i}^{x_{i+1}} (x - x_i)(x - x_{i-1}) dx
= y(x_i) + \frac{h}{2} \{3 f(x_i, y(x_i)) - f(x_{i-1}, y(x_{i-1}))\} + \frac{5}{12} h^3 f''(\xi_i, y(\xi_i)),

where x_i < \eta_i, \xi_i < x_{i+1}. This two-step method, which uses the information at the points x_i and x_{i-1}, is called the second-order Adams-Bashforth method and is given by

Y_{i+1} = Y_i + \frac{h}{2} [3 f(x_i, Y_i) - f(x_{i-1}, Y_{i-1})].
Similarly, for k = 2, using the three points (x_i, f(x_i, y(x_i))), (x_{i-1}, f(x_{i-1}, y(x_{i-1}))) and (x_{i-2}, f(x_{i-2}, y(x_{i-2}))), we get

y(x_{i+1}) = y(x_i) + \frac{h}{12} \{23 f(x_i, y(x_i)) - 16 f(x_{i-1}, y(x_{i-1})) + 5 f(x_{i-2}, y(x_{i-2}))\} + \frac{3}{8} h^4 f^{(3)}(\xi_i, y(\xi_i)),

and the corresponding difference equation is

Y_{i+1} = Y_i + \frac{h}{12} \{23 f(x_i, Y_i) - 16 f(x_{i-1}, Y_{i-1}) + 5 f(x_{i-2}, Y_{i-2})\}.
For k = 3 we have:

y(x_{i+1}) = y(x_i) + \frac{h}{24} \{55 f(x_i, y(x_i)) - 59 f(x_{i-1}, y(x_{i-1})) + 37 f(x_{i-2}, y(x_{i-2})) - 9 f(x_{i-3}, y(x_{i-3}))\} + \frac{251}{720} h^5 f^{(4)}(\xi_i, y(\xi_i))

and the corresponding difference equation:

Y_{i+1} = Y_i + \frac{h}{24} \{55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})\}.
For k = 4 we have:

y(x_{i+1}) = y(x_i) + \frac{h}{720} \{1901 f(x_i, y(x_i)) - 2774 f(x_{i-1}, y(x_{i-1})) + 2616 f(x_{i-2}, y(x_{i-2})) - 1274 f(x_{i-3}, y(x_{i-3})) + 251 f(x_{i-4}, y(x_{i-4}))\} + \frac{95}{288} h^6 f^{(5)}(\xi_i, y(\xi_i))

and the corresponding difference equation:

Y_{i+1} = Y_i + \frac{h}{720} \{1901 f(x_i, Y_i) - 2774 f(x_{i-1}, Y_{i-1}) + 2616 f(x_{i-2}, Y_{i-2}) - 1274 f(x_{i-3}, Y_{i-3}) + 251 f(x_{i-4}, Y_{i-4})\}.
In principle, the preceding procedure can be continued to obtain higher-order Adams-Bashforth formulas, but as k increases the formulas become more complex.
Multi-step methods need help getting started. Generally, a k-step method must have starting values Y_0, Y_1, ..., Y_{k-1}, which must be computed by other methods. Keep in mind that these starting values must be as accurate as those produced by the final method; if the starting method is of lower order, use a smaller step size to generate accurate starting values. (A sketch of a two-step method with a one-step starter is given below.)
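For instance, the second-order Adams-Bashforth method needs Y_1 before its first step, and the Heun (Improved Euler) method (6.3.17), which has the same order, can supply it. Below is a minimal sketch in Borland C style; the right-hand side f(x, y) = -y, the step size and the interval are our own choices for illustration:

#include <stdio.h>

/* a hypothetical right-hand side, y' = -y (for illustration only) */
float f(float x, float y)
{
    return -y;
}

void main()
{
    int i, n = 10;
    float x[11], y[11], h = 0.1, k1, k2;
    x[0] = 0.0;
    y[0] = 1.0;                        /* y(0) = 1 */
    /* starting value Y1 from the Heun method (6.3.17), also of order two */
    k1 = h*f(x[0], y[0]);
    k2 = h*f(x[0] + h, y[0] + k1);
    y[1] = y[0] + (k1 + k2)/2;
    x[1] = x[0] + h;
    /* second-order Adams-Bashforth steps */
    for(i = 2; i <= n; i++)
    {
        y[i] = y[i-1] + (h/2)*(3*f(x[i-1], y[i-1]) - f(x[i-2], y[i-2]));
        x[i] = x[i-1] + h;
    }
    for(i = 0; i <= n; i++)
        printf("(%g,%g)\n", x[i], y[i]);
}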
By using (x_i, f(x_i, y(x_i))), (x_{i-1}, f(x_{i-1}, y(x_{i-1}))), ..., (x_{i-k}, f(x_{i-k}, y(x_{i-k}))), we derived the Adams-Bashforth methods. We can also include the point (x_{i+1}, f(x_{i+1}, y(x_{i+1}))) in the interpolation: an interpolating polynomial through (x_{i+1}, f(x_{i+1}, y(x_{i+1}))), (x_i, f(x_i, y(x_i))), ..., (x_{i-k}, f(x_{i-k}, y(x_{i-k}))) that satisfies P(x_j) = f(x_j, y(x_j)) for j = i+1, i, ..., i-k generates a class of methods known as the Adams-Moulton methods.
Let k = 0. Replacing f(x, y(x)) by the interpolating polynomial through (x_{i+1}, f(x_{i+1}, y(x_{i+1}))) and (x_i, f(x_i, y(x_i))) in the formula

y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x)) dx

we get

y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} \left[ f(x_i, y(x_i)) \cdot \frac{x - x_{i+1}}{x_i - x_{i+1}} + f(x_{i+1}, y(x_{i+1})) \cdot \frac{x - x_i}{x_{i+1} - x_i} + \frac{(x - x_i)(x - x_{i+1})}{2!} \cdot f''(\xi_i(x), y(\xi_i(x))) \right] dx
= y(x_i) + \frac{h}{2} [f(x_i, y(x_i)) + f(x_{i+1}, y(x_{i+1}))] - \frac{h^3}{12} f''(\eta_i, y(\eta_i)).

We obtain in this way the second-order Adams-Moulton formula, which is also known as the Trapezoidal method:

Y_{i+1} = Y_i + \frac{h}{2} [f(x_i, Y_i) + f(x_{i+1}, Y_{i+1})].
For k = 1, using the quadratic interpolating polynomial through (x_{i+1}, f(x_{i+1}, y(x_{i+1}))), (x_i, f(x_i, y(x_i))) and (x_{i-1}, f(x_{i-1}, y(x_{i-1}))), we find

y(x_{i+1}) = y(x_i) + \frac{h}{12} [5 f(x_{i+1}, y(x_{i+1})) + 8 f(x_i, y(x_i)) - f(x_{i-1}, y(x_{i-1}))] - \frac{h^4}{24} f^{(3)}(\zeta_i, y(\zeta_i))

and

Y_{i+1} = Y_i + \frac{h}{12} [5 f(x_{i+1}, Y_{i+1}) + 8 f(x_i, Y_i) - f(x_{i-1}, Y_{i-1})].
For k = 2 we obtain:

y(x_{i+1}) = y(x_i) + \frac{h}{24} [9 f(x_{i+1}, y(x_{i+1})) + 19 f(x_i, y(x_i)) - 5 f(x_{i-1}, y(x_{i-1})) + f(x_{i-2}, y(x_{i-2}))] - \frac{19}{720} h^5 f^{(4)}(\xi_i, y(\xi_i)),

and

Y_{i+1} = Y_i + \frac{h}{24} [9 f(x_{i+1}, Y_{i+1}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})].
For k = 3 we have:

y(x_{i+1}) = y(x_i) + \frac{h}{720} [251 f(x_{i+1}, y(x_{i+1})) + 646 f(x_i, y(x_i)) - 264 f(x_{i-1}, y(x_{i-1})) + 106 f(x_{i-2}, y(x_{i-2})) - 19 f(x_{i-3}, y(x_{i-3}))] - \frac{3}{160} h^6 f^{(5)}(\alpha_i, y(\alpha_i)),

and

Y_{i+1} = Y_i + \frac{h}{720} [251 f(x_{i+1}, Y_{i+1}) + 646 f(x_i, Y_i) - 264 f(x_{i-1}, Y_{i-1}) + 106 f(x_{i-2}, Y_{i-2}) - 19 f(x_{i-3}, Y_{i-3})].

Note that these equations define Y_{i+1} implicitly, which is why the Adams-Moulton formulas are called implicit methods, while the Adams-Bashforth methods define Y_{i+1} explicitly.
6.6 The Predictor-Corrector Method

In the previous section we derived multi-step methods.
We note that in the case of the fourth-order Adams-Moulton method:

Y_{i+1} = Y_i + \frac{h}{24} [9 f(x_{i+1}, Y_{i+1}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})]   (6.6.1)

the absolute truncation error is less than that of the fourth-order Adams-Bashforth method:

Y_{i+1} = Y_i + \frac{h}{24} [55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})].   (6.6.2)

Hence, of Eqs. (6.6.1) and (6.6.2), Eq. (6.6.1) is preferable, but it is an implicit formula: if f(x, y) is nonlinear, it is generally difficult to solve Eq. (6.6.1) explicitly for Y_{i+1}.
However, Eq. (6.6.1) is a nonlinear equation with root Y_{i+1} and can be solved by a successive approximation method. For fixed i, Y_{i+1} is the solution of:

y = g(y)   (6.6.3)

where

g(y) = Y_i + \frac{h}{24} [9 f(x_{i+1}, y) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})].
To solve Eq. (6.6.3), it is very convenient to use the fixed-point iteration method

y^{(k+1)} = g(y^{(k)}),   k = 0, 1, 2, ...   (6.6.4)

because Y_{i+1} is a fixed point of g.
If |g'(y)| < 1 for all y with |y - Y_{i+1}| < |y^{(0)} - Y_{i+1}|, then the sequence of iterates (6.6.4) converges. Since g'(y) = \frac{9h}{24} \cdot \frac{\partial f}{\partial y}, the sequence (6.6.4) converges if h < 24 / (9 |\partial f / \partial y|) and y^{(0)} is sufficiently close to Y_{i+1}.
Thus, by properly selecting y^{(0)} (sufficiently close to Y_{i+1}), the sequence of iterates (6.6.4) converges within a few iterations.
For calculating y^{(0)} we use Eq. (6.6.2):

Y_{i+1}^{(0)} = Y_i + \frac{h}{24} [55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})].   (6.6.5)

This approximation is improved using Eq. (6.6.1):

Y_{i+1}^{(1)} = Y_i + \frac{h}{24} [9 f(x_{i+1}, Y_{i+1}^{(0)}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})].   (6.6.6)

We use Eq. (6.6.5) to predict a value of Y_{i+1}, and therefore this equation is known as a predictor. The predicted value Y_{i+1}^{(0)} is replaced in (6.6.6), and in this way a corrected value Y_{i+1}^{(1)} is obtained; for this reason, Eq. (6.6.6) is known as a corrector.
A combination of an explicit method to predict and an implicit method to correct is known as a predictor-corrector method.
It has been shown (Henrici, 1962) that if the predictor method has at least the order of the corrector method, then one iteration is sufficient to preserve the asymptotic accuracy of the corrector method.
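To make the iteration (6.6.4) concrete, here is a minimal sketch in Borland C style; for brevity it uses the second-order (trapezoidal) Adams-Moulton corrector with an Euler predictor rather than the fourth-order pair, but the structure is the same. The right-hand side f(x, y) = -y, the tolerance eps and the iteration cap are our own choices for illustration:

#include <stdio.h>
#include <math.h>

/* a hypothetical right-hand side, y' = -y (for illustration only) */
float f(float x, float y)
{
    return -y;
}

void main()
{
    int i, k, n = 10;
    float x = 0.0, y = 1.0, h = 0.1;   /* y(0) = 1 */
    float p, yc, eps = 1e-6;
    for(i = 1; i <= n; i++)
    {
        /* predictor: an Euler step supplies y^(0) */
        p = y + h*f(x, y);
        /* corrector: fixed-point iteration (6.6.4) on the trapezoidal formula */
        for(k = 0; k < 20; k++)
        {
            yc = y + (h/2)*(f(x, y) + f(x + h, p));
            if(fabs(yc - p) < eps) break;
            p = yc;
        }
        y = yc;
        x = x + h;
        printf("(%g,%g)\n", x, y);
    }
}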
A commonly used predictor-corrector method is the combination of the fourth-order Adams-Bashforth formula as predictor and the fourth-order Adams-Moulton formula as corrector. Thus

Y_{i+1}^{(p)} = Y_i + \frac{h}{24} [55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})]
Y_{i+1} = Y_i + \frac{h}{24} [9 f(x_{i+1}, Y_{i+1}^{(p)}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})]
(6.6.7)

The scheme (6.6.7) is widely used in combination with the fourth-order Runge-Kutta method as starter. Like the fourth-order Runge-Kutta method, the predictor-corrector scheme (6.6.7) is one of the most reliable and widely used methods for the numerical solution of an initial-value problem.

Exercises
1. Approximate the solution of the following (IVP)

   y'(x) = y^2(x) - x
   y(0) = 0,   x \in [0, 1]

   for n = 10, using:

   a) the fourth-order Adams-Bashforth method,
   b) the fourth-order Adams-Moulton method,
   c) the fourth-order predictor-corrector method.

Algorithms for implementation:

// fourth-order Adams-Bashforth method
The fourth-order Runge-Kutta method is applied for determining Y_1, Y_2 and Y_3.
for i = 4 ... n
    x_i = x_{i-1} + h
    y_i = y_{i-1} + \frac{h}{24} (55 f(x_{i-1}, y_{i-1}) - 59 f(x_{i-2}, y_{i-2}) + 37 f(x_{i-3}, y_{i-3}) - 9 f(x_{i-4}, y_{i-4}))

// fourth-order predictor-corrector method
The fourth-order Runge-Kutta method is applied for determining Y_1, Y_2 and Y_3.
for i = 4 ... n
    x_i = x_{i-1} + h
    P = y_{i-1} + \frac{h}{24} (55 f(x_{i-1}, y_{i-1}) - 59 f(x_{i-2}, y_{i-2}) + 37 f(x_{i-3}, y_{i-3}) - 9 f(x_{i-4}, y_{i-4}))
    y_i = y_{i-1} + \frac{h}{24} (9 f(x_i, P) + 19 f(x_{i-1}, y_{i-1}) - 5 f(x_{i-2}, y_{i-2}) + f(x_{i-3}, y_{i-3}))

Input data:
- x_0, b - the ends of the interval of x
- y_0 = y(x_0)
- n

Output data:
- the computed couples (x_i, y_i), i = 0, ..., n

Implementation in Borland C language:

#include <stdio.h>
#include <malloc.h>

float x0, y0, b;
int n;

float *Vector(int imin, int imax);
float f(float a, float b);
void rk4(int m, float *x, float *y);
void predcor(void);

void main()
{
    printf("b= ");  scanf("%f", &b);
    printf("x0= "); scanf("%f", &x0);
    printf("y0= "); scanf("%f", &y0);
    printf("n= ");  scanf("%d", &n);
    predcor();
}

/* right-hand side of the (IVP) in the exercise: y' = y^2 - x */
float f(float a, float b)
{
    return b*b - a;
}

/* classical fourth-order Runge-Kutta method, used as starter */
void rk4(int m, float *x, float *y)
{
    float h, k1, k2, k3, k4;
    int i;
    x[0] = x0;
    y[0] = y0;
    h = (b - x[0])/n;
    for(i = 1; i <= m; i++)
    {
        k1 = h*f(x[i-1], y[i-1]);
        k2 = h*f(x[i-1] + h*0.5, y[i-1] + 0.5*k1);
        k3 = h*f(x[i-1] + h*0.5, y[i-1] + 0.5*k2);
        k4 = h*f(x[i-1] + h, y[i-1] + k3);
        y[i] = y[i-1] + (k1 + 2*k2 + 2*k3 + k4)/6;
        x[i] = x[i-1] + h;
    }
}

/* fourth-order Adams-Bashforth predictor / Adams-Moulton corrector (6.6.7) */
void predcor(void)
{
    float *x, *y, h, P;
    int i;
    x = Vector(0, n);
    y = Vector(0, n);
    rk4(3, x, y);                     /* starting values Y1, Y2, Y3 */
    h = (b - x0)/n;
    for(i = 4; i <= n; i++)
    {
        x[i] = x[i-1] + h;
        P = y[i-1] + (h/24)*(55*f(x[i-1], y[i-1]) - 59*f(x[i-2], y[i-2])
                             + 37*f(x[i-3], y[i-3]) - 9*f(x[i-4], y[i-4]));
        y[i] = y[i-1] + (h/24)*(9*f(x[i], P) + 19*f(x[i-1], y[i-1])
                                - 5*f(x[i-2], y[i-2]) + f(x[i-3], y[i-3]));
    }
    for(i = 0; i <= n; i++)
    {
        printf("x[%d]=%g\t", i, x[i]);
        printf("y[%d]=%g\n", i, y[i]);
    }
}
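The helper Vector is declared above but its definition is not listed in this chapter; a minimal sketch of what it might look like (our own version, for completeness) is:

#include <malloc.h>

/* allocate a float array addressable with indices imin..imax */
float *Vector(int imin, int imax)
{
    float *v = (float *)malloc((imax - imin + 1)*sizeof(float));
    return v - imin;
}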

6.7 The Finite Differences Method for a Numerical Solution of a Linear Boundary Value Problem

In this section we consider the most common numerical method for solving a boundary value problem (BVP): the finite difference method.
The basic idea underlying the finite difference method is to replace the derivatives in a differential equation by suitable difference quotients and then to solve the resulting system of equations.
We illustrate this method with the following linear second-order ordinary differential equation with boundary conditions:

y'' = p(x) \cdot y' + q(x) \cdot y + r(x),   x \in [a, b]   (6.7.1)

y(a) = \alpha
y(b) = \beta.   (6.7.2)

Let us assume that the (BVP) (6.7.1)-(6.7.2) has a unique solution.
In order to approximate the solution, we replace the derivatives in Eq. (6.7.1) by finite differences. This reduces Eq. (6.7.1) to a system of linear equations.
In order to accomplish this, we divide the interval [a, b] into N + 1 equal subintervals of length h = \frac{b - a}{N + 1}:

a = x_0 < x_1 < x_2 < ... < x_{i-1} < x_i < ... < x_N < x_{N+1} = b.

The points (knots) x_i represent the mesh points.
At an interior mesh point x_i, i \ne 0 and N + 1, Eq. (6.7.1) becomes

y''(x_i) = p(x_i) \cdot y'(x_i) + q(x_i) \cdot y(x_i) + r(x_i).   (6.7.3)
The simplest way to approximate Eq. (6.7.3) is to replace the derivatives y'(x_i) and y''(x_i) by the central difference formulas:

y'(x_i) = \frac{y(x_i + h) - y(x_i - h)}{2h} - \frac{h^2}{6} y^{(3)}(\xi_i)   (6.7.4)

and

y''(x_i) = \frac{y(x_i - h) - 2 y(x_i) + y(x_i + h)}{h^2} - \frac{h^2}{12} y^{(4)}(\eta_i).   (6.7.5)

Substituting Eqs. (6.7.4) and (6.7.5) in (6.7.3), we get

\frac{y(x_i - h) - 2 y(x_i) + y(x_i + h)}{h^2} = p(x_i) \cdot \frac{y(x_i + h) - y(x_i - h)}{2h} + q(x_i) \cdot y(x_i) + r(x_i) + \frac{h^2}{12} y^{(4)}(\eta_i) - p(x_i) \frac{h^2}{6} y^{(3)}(\xi_i).   (6.7.6)
Since \xi_i and \eta_i are not known and h^2 is small, we ignore the last two terms. Denoting the approximate value of y at x_i by Y_i (i.e., Y_i \approx y(x_i)), the approximate value of y at x_i + h by Y_{i+1} (Y_{i+1} \approx y(x_i + h)), and the approximate value of y at x_i - h by Y_{i-1} (Y_{i-1} \approx y(x_i - h)), we get from Eq. (6.7.6) the following:

\frac{Y_{i-1} - 2 Y_i + Y_{i+1}}{h^2} = p(x_i) \cdot \frac{Y_{i+1} - Y_{i-1}}{2h} + q(x_i) \cdot Y_i + r(x_i),   i = 1, 2, ..., N,   (6.7.7)

which is an algebraic linear system of N equations in N unknowns.
Collecting similar terms, Eq. (6.7.7) is rewritten as

Y_{i-1} \left( 1 + \frac{h}{2} p(x_i) \right) - Y_i (2 + h^2 q(x_i)) + Y_{i+1} \left( 1 - \frac{h}{2} p(x_i) \right) = h^2 r(x_i),   i = 1, 2, ..., N.   (6.7.8)
Denoting:

a_i = 1 + \frac{h}{2} p(x_i)
b_i = -(2 + h^2 q(x_i))
c_i = 1 - \frac{h}{2} p(x_i),

the equality (6.7.8) becomes

a_i Y_{i-1} + b_i Y_i + c_i Y_{i+1} = h^2 r(x_i),   i = 1, 2, ..., N,   (6.7.9)

and the boundary conditions become

\alpha = y(a) = y(x_0) = Y_0   and   \beta = y(b) = y(x_{N+1}) = Y_{N+1}.   (6.7.10)

Thus, (6.7.9) and (6.7.10) are written in the matrix form

A Y = s   (6.7.11)
where

A = \begin{pmatrix}
b_1 & c_1 &     &        &         \\
a_2 & b_2 & c_2 &        &         \\
    & a_3 & b_3 & \ddots &         \\
    &     & \ddots & \ddots & c_{N-1} \\
    &     &     & a_N    & b_N
\end{pmatrix},
Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_{N-1} \\ Y_N \end{pmatrix},
s = \begin{pmatrix} h^2 r(x_1) - a_1 \alpha \\ h^2 r(x_2) \\ \vdots \\ h^2 r(x_{N-1}) \\ h^2 r(x_N) - c_N \beta \end{pmatrix}.

This tridiagonal system can be solved very efficiently by the factorization method.
Let us consider the second-order ordinary differential equation with mixed boundary conditions:

y'' = p(x) \cdot y' + q(x) \cdot y + r(x),   x \in [a, b],   (6.7.12)

\gamma_1 \cdot y(a) + \gamma_2 \cdot y'(a) = \alpha
\gamma_3 \cdot y(b) + \gamma_4 \cdot y'(b) = \beta.   (6.7.13)

Eq. (6.7.12) is reduced, as before, to the system:

a_i \cdot Y_{i-1} + b_i \cdot Y_i + c_i \cdot Y_{i+1} = h^2 \cdot r(x_i)   (6.7.14)

in which we have:

a_i = 1 + \frac{h}{2} p(x_i)
b_i = -(2 + h^2 q(x_i))   (6.7.15)
c_i = 1 - \frac{h}{2} p(x_i).

If we replace y'(a) by the forward finite difference formula and y'(b) by the backward finite difference formula, the conditions (6.7.13) become

\gamma_1 Y_0 + \gamma_2 \frac{Y_1 - Y_0}{h} = \alpha
\gamma_3 Y_{N+1} + \gamma_4 \frac{Y_{N+1} - Y_N}{h} = \beta.   (6.7.16)
Solving the first equation for Y_0 as a function of Y_1, and the second equation for Y_{N+1} as a function of Y_N, the system (6.7.14) reduces to a tridiagonal system. Unfortunately, this approximation of the first derivative is only of first order. To overcome this drawback, we can use higher-order approximations for y'(a) and y'(b). Using the asymmetrical (one-sided) formulas, we get

\gamma_1 Y_0 + \gamma_2 \frac{-3 Y_0 + 4 Y_1 - Y_2}{2h} = \alpha
\gamma_3 Y_{N+1} + \gamma_4 \frac{3 Y_{N+1} - 4 Y_N + Y_{N-1}}{2h} = \beta.   (6.7.17)
Solving the first equation for Y_0 and the second for Y_{N+1} yields

Y_0 = \frac{2h\alpha - \gamma_2 (4Y_1 - Y_2)}{2h\gamma_1 - 3\gamma_2}
Y_{N+1} = \frac{2h\beta - \gamma_4 (Y_{N-1} - 4Y_N)}{2h\gamma_3 + 3\gamma_4}.   (6.7.18)

For i = 1, Eq. (6.7.14) reduces to

a_1 Y_0 + b_1 Y_1 + c_1 Y_2 = h^2 \cdot r(x_1).

Replacing Y_0, we have

\left( b_1 - \frac{4 a_1 \gamma_2}{2h\gamma_1 - 3\gamma_2} \right) Y_1 + \left( c_1 + \frac{a_1 \gamma_2}{2h\gamma_1 - 3\gamma_2} \right) Y_2 = h^2 r(x_1) - \frac{2h\alpha a_1}{2h\gamma_1 - 3\gamma_2}.   (6.7.19)

For i = N, Eq. (6.7.14) becomes

a_N Y_{N-1} + b_N Y_N + c_N Y_{N+1} = h^2 \cdot r(x_N),

and replacing Y_{N+1} from (6.7.18), we get

\left( a_N - \frac{c_N \gamma_4}{2h\gamma_3 + 3\gamma_4} \right) Y_{N-1} + \left( b_N + \frac{4 c_N \gamma_4}{2h\gamma_3 + 3\gamma_4} \right) Y_N = h^2 r(x_N) - \frac{2h\beta c_N}{2h\gamma_3 + 3\gamma_4}.   (6.7.20)
In this way we obtain the system of equations

A_1 \cdot Y = s_1   (6.7.21)

in which A_1 coincides with the matrix A of (6.7.11), except for its first and last rows:

A_1 = \begin{pmatrix}
b_1 - \dfrac{4 a_1 \gamma_2}{2h\gamma_1 - 3\gamma_2} & c_1 + \dfrac{a_1 \gamma_2}{2h\gamma_1 - 3\gamma_2} &     &        & \\
a_2 & b_2 & c_2 &        & \\
    & \ddots & \ddots & \ddots & \\
    &     & a_{N-1} & b_{N-1} & c_{N-1} \\
    &     &     & a_N - \dfrac{c_N \gamma_4}{2h\gamma_3 + 3\gamma_4} & b_N + \dfrac{4 c_N \gamma_4}{2h\gamma_3 + 3\gamma_4}
\end{pmatrix}

Y = \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_{N-1} \\ Y_N \end{pmatrix},
s_1 = \begin{pmatrix} h^2 r(x_1) - \dfrac{2h\alpha a_1}{2h\gamma_1 - 3\gamma_2} \\ h^2 r(x_2) \\ \vdots \\ h^2 r(x_{N-1}) \\ h^2 r(x_N) - \dfrac{2h\beta c_N}{2h\gamma_3 + 3\gamma_4} \end{pmatrix}.

Exercises
1. Approximate the solution of the following (BVP)

   y'' = -y
   y(0) = 1
   y(\pi/2) = 0

   using the finite difference method for n = 4.
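As a quick check of the setup: here p(x) = 0, q(x) = -1, r(x) = 0, \alpha = 1, \beta = 0 and h = (\pi/2)/(n+1) = \pi/10 \approx 0.3142, so a_i = c_i = 1 and b_i = -(2 - h^2) \approx -1.9013 for every i, while the right-hand side is s = (-1, 0, 0, 0)^T. The tridiagonal system (6.7.11) therefore reads

\begin{pmatrix} -1.9013 & 1 & 0 & 0 \\ 1 & -1.9013 & 1 & 0 \\ 0 & 1 & -1.9013 & 1 \\ 0 & 0 & 1 & -1.9013 \end{pmatrix} \begin{pmatrix} Y_1 \\ Y_2 \\ Y_3 \\ Y_4 \end{pmatrix} = \begin{pmatrix} -1 \\ 0 \\ 0 \\ 0 \end{pmatrix},

and the computed Y_i can be compared against the exact solution y(x) = \cos x.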


The algorithm for implementation is:

h = \frac{b - a}{n + 1}
x_i = a + i h for i = 0, ..., n
a_i = 1 + \frac{h}{2} p(x_i) for i = 1, ..., n
b_i = -(2 + h^2 q(x_i)) for i = 1, ..., n
c_i = 1 - \frac{h}{2} p(x_{i-1}) for i = 2, ..., n + 1
s_i = h^2 r(x_i) for i = 2, ..., n - 1
s_1 = h^2 r(x_1) - a_1 \alpha
s_n = h^2 r(x_n) - c_{n+1} \beta

We solve the tridiagonal system A \cdot y = s, where

A = \begin{pmatrix}
b_1 & c_2 &     &        &     \\
a_2 & b_2 & c_3 &        &     \\
    & \ddots & \ddots & \ddots & \\
    &     & a_{n-1} & b_{n-1} & c_n \\
    &     &     & a_n & b_n
\end{pmatrix},
y = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix},
s = \begin{pmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{pmatrix}.

Input data:
- a, b - the ends of the interval of x
- \alpha = y(a)
- \beta = y(b)
- n

Output data:
- computed couples (x_i, y_i), i = 1, ..., n

Implementation of this algorithm in Borland C language:

#include <stdio.h>
#include <conio.h>
#include <math.h>

float *Vector(int imin, int imax);
void tridiag(int n, float *a, float *b, float *c, float *d, float *x);
float p(float x);
float q(float x);
float r(float x);
void difinite(float lmin, float lmax, float alpha, float beta, int n);

void main()
{
    float lmin, lmax, alpha, beta;
    int n;
    printf("a= "); scanf("%f", &lmin);
    printf("b= "); scanf("%f", &lmax);
    printf("y[%g]= ", lmin); scanf("%f", &alpha);
    printf("y[%g]= ", lmax); scanf("%f", &beta);
    printf("n= "); scanf("%d", &n);
    difinite(lmin, lmax, alpha, beta, n);
}

/* finite difference method for the (BVP) y'' = p(x)y' + q(x)y + r(x) */
void difinite(float lmin, float lmax, float alpha, float beta, int n)
{
    float *x, *z, h, *a, *b, *c, *s;
    int i;
    x = Vector(0, n);
    z = Vector(0, n);
    a = Vector(0, n);
    b = Vector(0, n);
    c = Vector(0, n);
    s = Vector(0, n);
    h = (lmax - lmin)/(n + 1);
    for(i = 0; i <= n; i++)
        x[i] = lmin + i*h;
    for(i = 1; i <= n; i++)
    {
        b[i] = -(2 + h*h*q(x[i]));
        s[i] = h*h*r(x[i]);
    }
    s[1] = s[1] - alpha*(1 + h*0.5*p(x[1]));   /* s1 = h^2 r(x1) - a1*alpha */
    s[n] = s[n] - beta*(1 - h*0.5*p(x[n]));    /* sn = h^2 r(xn) - c_{n+1}*beta */
    for(i = 2; i <= n; i++)
    {
        a[i] = 1 + h*0.5*p(x[i]);
        c[i] = 1 - h*0.5*p(x[i-1]);
    }
    tridiag(n, a, b, c, s, z);
    for(i = 1; i <= n; i++)
    {
        printf("x[%d]=%f\t", i, x[i]);
        printf("y[%d]=%f\n", i, z[i]);
    }
}

/* coefficients of the exercise y'' = -y:  p = 0, q = -1, r = 0 */
float p(float x)
{
    return 0;
}

float q(float x)
{
    return -1;
}

float r(float x)
{
    return 0;
}
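The routine tridiag, declared above, is the factorization method for tridiagonal systems from Section 1.3; its listing is not reproduced in this chapter. A minimal sketch consistent with the calling convention used here (sub-diagonal a[2..n], diagonal b[1..n], super-diagonal c[2..n], so that row i reads a[i]x[i-1] + b[i]x[i] + c[i+1]x[i+1] = d[i]) could be:

/* our own minimal version, for completeness */
void tridiag(int n, float *a, float *b, float *c, float *d, float *x)
{
    int i;
    float *beta = Vector(1, n), *delta = Vector(1, n);
    beta[1] = b[1];
    delta[1] = d[1];
    for(i = 2; i <= n; i++)            /* forward elimination */
    {
        beta[i] = b[i] - a[i]*c[i]/beta[i-1];
        delta[i] = d[i] - a[i]*delta[i-1]/beta[i-1];
    }
    x[n] = delta[n]/beta[n];
    for(i = n - 1; i >= 1; i--)        /* back substitution */
        x[i] = (delta[i] - c[i+1]*x[i+1])/beta[i];
}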

6.8 The Collocation Method and the Least Squares Method

Let us consider the second-order ordinary differential equation with mixed boundary conditions:

y'' + p(x) \cdot y' + q(x) \cdot y = f(x),   x \in [a, b],   (6.8.1)

\gamma_1 \cdot y(a) + \gamma_2 \cdot y'(a) = \alpha
\gamma_3 \cdot y(b) + \gamma_4 \cdot y'(b) = \beta.   (6.8.2)

For approximating a solution of the (BVP) (6.8.1)-(6.8.2) (supposing that it exists and is unique), we consider a set \{\Phi_0, \Phi_1, ..., \Phi_N\} of linearly independent functions of C^2 class which verify:

\gamma_1 \cdot \Phi_0(a) + \gamma_2 \cdot \Phi_0'(a) = \alpha   and   \gamma_3 \cdot \Phi_0(b) + \gamma_4 \cdot \Phi_0'(b) = \beta,   (6.8.3)

\gamma_1 \cdot \Phi_i(a) + \gamma_2 \cdot \Phi_i'(a) = 0   and   \gamma_3 \cdot \Phi_i(b) + \gamma_4 \cdot \Phi_i'(b) = 0,   (6.8.4)

for i = 1, ..., N.
We seek an approximate solution of Eq. (6.8.1) of the form:

Y_N(x) = \Phi_0(x) + \sum_{i=1}^{N} c_i \Phi_i(x)   (6.8.5)

in which the c_i are constants to be determined. There are many techniques for determining the c_i. Since \Phi_0 verifies (6.8.3) and \Phi_i verifies (6.8.4) for i = 1, 2, ..., N, the function Y_N(x) verifies the mixed boundary conditions (6.8.2) for any choice of the c_i. Replacing the function Y_N(x) in the left-hand side of Eq. (6.8.1), we find the function

h = \Phi_0'' + \sum_{i=1}^{N} c_i \Phi_i'' + p(x) \left[ \Phi_0' + \sum_{i=1}^{N} c_i \Phi_i' \right] + q(x) \left[ \Phi_0 + \sum_{i=1}^{N} c_i \Phi_i \right].

The function R = h - f is called the residual function and measures the extent to which Y_N verifies Eq. (6.8.1):

R(x; c_1, ..., c_N) = \Phi_0''(x) + p(x) \Phi_0'(x) + q(x) \Phi_0(x) + \sum_{i=1}^{N} c_i [\Phi_i''(x) + p(x) \Phi_i'(x) + q(x) \Phi_i(x)] - f(x).   (6.8.6)

Y_N(x) is an exact solution if and only if R(x; c_1, ..., c_N) \equiv 0. Generally, the solution is not exact, but as the number of functions \Phi_i increases, R becomes small. We try to make R(x; c_1, ..., c_N) small by choosing c_1, ..., c_N appropriately.
The collocation method requires that R(x; c_1, ..., c_N) be zero at given points x_1, x_2, ..., x_N of [a, b]. Taking (6.8.6) into account, it results that:

\sum_{i=1}^{N} c_i [\Phi_i''(x_k) + p(x_k) \Phi_i'(x_k) + q(x_k) \Phi_i(x_k)] = f(x_k) - \Phi_0''(x_k) - p(x_k) \Phi_0'(x_k) - q(x_k) \Phi_0(x_k)   (6.8.7)

for k = 1, 2, ..., N.
The relations (6.8.7) form a system of N linear algebraic equations in the N unknowns c_1, ..., c_N and can be written in matrix form as

A \cdot c = b.   (6.8.8)

We solve this system, and the obtained solution (c_1, ..., c_N) is used for the construction of Y_N(x) of the form (6.8.5), in order to obtain an approximate solution of the (BVP) (6.8.1)-(6.8.2).

The least squares method requires that the integral

I = \int_a^b R^2(x; c_1, ..., c_N) dx   (6.8.9)

have a minimum value. At the minimum point we have:

\frac{\partial I}{\partial c_i} = 2 \int_a^b R \cdot \frac{\partial R}{\partial c_i} dx = 0,   i = 1, 2, ..., N.   (6.8.10)

Since

\frac{\partial R}{\partial c_i} = \Phi_i''(x) + p(x) \Phi_i'(x) + q(x) \Phi_i(x),   i = 1, 2, ..., N,   (6.8.11)

replacing R and \partial R / \partial c_i in (6.8.10), we get

\sum_{j=1}^{N} c_j \int_a^b [\Phi_j''(x) + p(x) \Phi_j'(x) + q(x) \Phi_j(x)] \cdot [\Phi_i''(x) + p(x) \Phi_i'(x) + q(x) \Phi_i(x)] dx =
= - \int_a^b [\Phi_0''(x) + p(x) \Phi_0'(x) + q(x) \Phi_0(x) - f(x)] \cdot [\Phi_i''(x) + p(x) \Phi_i'(x) + q(x) \Phi_i(x)] dx,   i = 1, ..., N.   (6.8.12)

The system (6.8.12) consists of N linear algebraic equations in N unknowns and can be written in the matrix form

A \cdot c = b.   (6.8.13)

From this system c is found, and it is used for the construction of the approximate solution Y_N.

Exercises
1. Approximate the solution of the following (BVP)

   y'' + y = x
   y(0) = 0
   y(1) = 0

   using:

   a) the collocation method for n = 4,
   b) the least squares method for n = 3.
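For part a), one possible choice of basis functions (ours, for illustration) is \Phi_0(x) \equiv 0 (admissible because \alpha = \beta = 0) and \Phi_i(x) = x^i (1 - x), i = 1, ..., N, which satisfy \Phi_i(0) = \Phi_i(1) = 0. Here p \equiv 0, q \equiv 1 and f(x) = x, so the residual (6.8.6) becomes

R(x; c_1, ..., c_N) = \sum_{i=1}^{N} c_i [\Phi_i''(x) + \Phi_i(x)] - x,   with   \Phi_i''(x) = i(i-1) x^{i-2} - i(i+1) x^{i-1},

and requiring R = 0 at N collocation points x_1, ..., x_N (for instance x_k = k/(N+1)) yields the system (6.8.8) with A_{ki} = \Phi_i''(x_k) + \Phi_i(x_k) and b_k = f(x_k) = x_k. For part b), the same basis can be substituted into (6.8.12). The resulting Y_N can be compared against the exact solution y(x) = x - \sin x / \sin 1.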


Bibliography

[1] Berbente Corneliu, Mitran Sorin, Zancu Silviu, Metode Numerice, Ed. Tehnica, Bucuresti, 1998.

[2] Beu Titus A., Calcul numeric in C, Editia a 2-a, Ed. Albastra, Cluj-Napoca, 2000.

[3] Coman Gheorghe, Analiza Numerica, Ed. Libris, Cluj-Napoca, 1995.

[4] Dinu Mariana, Linca Gheorghe, Algoritmi si teme speciale de analiza numerica, Ed. Matrix Rom, Bucuresti, 1999.

[5] O. Dogaru, Gh. Bocsan, I. Despi, A. Ionica, V. Iordan, L. Luca, D. Petcu, P. Popovici, Informatica pentru definitivare si grad, Ed. de Vest, Timisoara, 1998.

[6] Kelley W., Peterson A., Difference Equations, An Introduction with Applications, Academic Press, Elsevier, 2000.

[7] Maruster Stefan, Metode numerice in rezolvarea ecuatiilor neliniare, Ed. Tehnica, Bucuresti, 1981.

[8] P. Naslau, R. Negrea, L. Cadariu, B. Caruntu, D. Popescu, M. Balmez, C. Dumitrascu, Matematici asistate pe calculator, Ed. Politehnica, Timisoara, 2005.

[9] Vithal A. Patel, Numerical Analysis, Humboldt State University, USA, 1994.
