NUMERICAL METHODS
Timisoara 2007
Introduction

The textbook Numerical Methods was written in agreement with the requirements of, and based on, the syllabus elaborated by the Department of Computer Science and approved by the Council of the Faculty of Mathematics and Computer Science of the West University of Timisoara.
This textbook is written especially for students of the Computer Science Department, covering all the subjects of the syllabus at the knowledge level of a fifth-semester student. The lectures are tailored so that they can be presented within the allocated time.

The authors would like to thank Prof. Dr. Gheorghe Bocsan for reading this manuscript and for his comments and pertinent recommendations.

The authors
Systems of Linear Equations
Stage 1

In the first stage we construct a new system having the same solution as the initial system (1.1.1), whose matrix is upper-triangular. This stage consists of n − 1 steps, which will be described below.

In the first step the unknown x_1 is eliminated from equations 2, 3, 4, . . . , n. This step is realized by successively multiplying the first equation by

m_21 = −a_21/a_11 ,  m_31 = −a_31/a_11 ,  . . . ,  m_n1 = −a_n1/a_11

and adding the obtained equations to equations 2, 3, 4, . . . , n, respectively. The new system looks like:
a^(1)_11 x_1 + a^(1)_12 x_2 + . . . + a^(1)_1n x_n = b^(1)_1
          a^(2)_22 x_2 + . . . + a^(2)_2n x_n = b^(2)_2      (1.1.4)
          ........................
          a^(2)_n2 x_2 + . . . + a^(2)_nn x_n = b^(2)_n

where the superscript 1 indicates that the first equation remains unchanged, the superscript 2 indicates that the first elimination has been carried out, and:

a^(2)_ij = a_ij + m_i1 · a_1j ,  b^(2)_i = b_i + m_i1 · b_1  for i, j = 2, 3, . . . , n.
At the second step the first equation is ignored, and the unknown x_2 is eliminated from equations 3, 4, . . . , n by successively multiplying the second equation by m_i2 = −a^(2)_i2 / a^(2)_22 , i = 3, . . . , n, and adding the obtained equations to equations 3, 4, . . . , n. After this step we obtain the following system:

a^(1)_11 x_1 + a^(1)_12 x_2 + a^(1)_13 x_3 + . . . + a^(1)_1n x_n = b^(1)_1
          a^(2)_22 x_2 + a^(2)_23 x_3 + . . . + a^(2)_2n x_n = b^(2)_2
                    a^(3)_33 x_3 + . . . + a^(3)_3n x_n = b^(3)_3      (1.1.5)
                    ..............................
                    a^(3)_n3 x_3 + . . . + a^(3)_nn x_n = b^(3)_n

where the superscript 3 indicates that the second elimination has been carried out, and:

a^(3)_ij = a^(2)_ij + m_i2 · a^(2)_2j ,  b^(3)_i = b^(2)_i + m_i2 · b^(2)_2 ,  m_i2 = −a^(2)_i2 / a^(2)_22  for i, j = 3, 4, . . . , n.
At the k-th step the first k − 1 equations are ignored, and the unknown x_k is eliminated from equations k + 1, . . . , n by successively multiplying equation k by the ratio m_ik = −a^(k)_ik / a^(k)_kk and adding the obtained equation to equation i. The obtained system is:

a^(1)_11 x_1 + a^(1)_12 x_2 + . . . + a^(1)_1k x_k + a^(1)_1,k+1 x_{k+1} + . . . + a^(1)_1n x_n = b^(1)_1
          a^(2)_22 x_2 + . . . + a^(2)_2k x_k + a^(2)_2,k+1 x_{k+1} + . . . + a^(2)_2n x_n = b^(2)_2
                    a^(3)_33 x_3 + . . . + a^(3)_3k x_k + a^(3)_3,k+1 x_{k+1} + . . . + a^(3)_3n x_n = b^(3)_3
                    ························
                              a^(k)_kk x_k + a^(k)_k,k+1 x_{k+1} + . . . + a^(k)_kn x_n = b^(k)_k
                                        a^(k+1)_{k+1,k+1} x_{k+1} + . . . + a^(k+1)_{k+1,n} x_n = b^(k+1)_{k+1}
                                        ························
                                        a^(k+1)_{n,k+1} x_{k+1} + . . . + a^(k+1)_{nn} x_n = b^(k+1)_n      (1.1.6)

where:

a^(k+1)_ij = a^(k)_ij + m_ik · a^(k)_kj ,  b^(k+1)_i = b^(k)_i + m_ik · b^(k)_k ,  m_ik = −a^(k)_ik / a^(k)_kk  for i, j = k + 1, . . . , n.
The last step of the first stage is step n − 1, in which the unknown x_{n−1} is eliminated from equation n. These n − 1 steps give the system:

a^(1)_11 x_1 + a^(1)_12 x_2 + a^(1)_13 x_3 + . . . + a^(1)_1k x_k + . . . + a^(1)_1n x_n = b^(1)_1
          a^(2)_22 x_2 + a^(2)_23 x_3 + . . . + a^(2)_2k x_k + . . . + a^(2)_2n x_n = b^(2)_2
                    a^(3)_33 x_3 + . . . + a^(3)_3k x_k + . . . + a^(3)_3n x_n = b^(3)_3
                    ·····················      (1.1.7)
                              a^(k)_kk x_k + . . . + a^(k)_kn x_n = b^(k)_k
                              ·····················
                                        a^(n)_nn x_n = b^(n)_n
Stage 2
In the second stage we solve the obtained system (1.1.7) by back substitution. From the last equation we obtain:

x_n = b^(n)_n / a^(n)_nn .      (1.1.8)

Introducing the computed x_n in equation n − 1 we obtain the unknown x_{n−1}. Introducing x_n and x_{n−1} in equation n − 2 we obtain the unknown x_{n−2}, etc.; the process is repeated until we are left with one equation and one unknown, x_1. The unknowns x_{n−1}, x_{n−2}, . . . , x_1 are given by the formula:

x_k = ( b^(k)_k − Σ_{j=k+1}^{n} a^(k)_kj · x_j ) / a^(k)_kk ,  k = n − 1, n − 2, . . . , 1.      (1.1.9)
Remark 1.1.2. The method described above (Gauss with trivial pivot) assumes that at each step the element a^(k)_kk from the main diagonal of the system (1.1.7) is different from zero. These coefficients are called pivots or pivotal elements. If a pivotal element is equal to zero then the procedure breaks down, since m_ik cannot be defined. To ensure that zero pivots are not used to compute the multipliers at step k, a search is made in column k for nonzero elements in rows k + 1, . . . , n. If some entry is nonzero, say in row i > k, then we switch rows i and k. This interchange of rows does not change the solution of the system. Such an interchange is always possible because A is non-singular (if all elements from the pivot down in column k, including the pivot, are zero, then the matrix is singular).
Remark 1.1.3. The purpose of the pivoting strategy is to move the entry of greatest magnitude to the main diagonal and then use it to eliminate the remaining entries in the column. This implies an interchange between row k and the row which contains the element of largest magnitude in column k. This modified method is called the Gauss method with partial pivot.

Remark 1.1.4. In the Gauss method with partial pivot we can further reduce the errors by choosing at each step a pivot given by the element of largest magnitude in the column and row simultaneously. This is called the Gauss method with total pivot.
Remark 1.1.5. The Gauss method with pivot has the following matrix representation:

A^(n−1) = M · A      (1.1.10)

where A^(n−1) is the matrix of the system (1.1.7) and M is the matrix:

M = [ 1     0     0     . . .  0
      m_21  1     0     . . .  0
      · · ·
      m_n1  m_n2  m_n3  . . .  1 ]      (1.1.11)

This shows that the Gauss method with pivot leads to the factorization A = L · U, where L and U are lower-triangular and upper-triangular matrices, respectively.
Exercises

Solve the following systems using the Gauss method:

a) with pivot

    2x − 4y + 2z = 20
     x − 4y −  z = 2
    −x +  y +  z = −2
The algorithm of the Gauss method is:

for k = 1 . . . n − 1
    for i = k + 1 . . . n
        m = a_ik / a_kk
        for j = k + 1 . . . n
            a_ij = a_ij − m · a_kj
        b_i = b_i − m · b_k

x_n = b_n / a_nn

for i = n − 1 . . . 1
    x_i = ( b_i − Σ_{j=i+1}^{n} a_ij · x_j ) / a_ii

Input data:
- n - dimension of the space
- (b_i)_{i=1,...,n} - the column vector from the right-hand side

Output data:
- (x_i)_{i=1,...,n} - the solution of the system Ax = b
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int imin, int imax);
void CitireMat(float **a, int n, int m);
void CitireVect(float *a, int init, int n);
void ScriereMat(float **a, int n, int m);
void ScriereVect(float *a, int init, int n);
void gauss(float **a, float *b, int n);

void main()
{
    float **a, *b;
    int n;
    printf("n= "); scanf("%d", &n);
    a=Matrice(1,n,1,n);
    b=Vector(0,n);
    CitireMat(a,n,n);
    ScriereMat(a,n,n);
    CitireVect(b,1,n);
    ScriereVect(b,1,n);
    printf("SOLUTIE\n");
    gauss(a,b,n);
}
#define Swap(a,b) {t=a; a=b; b=t;}

void gauss(float **a, float *b, int n)
{
    float amax, suma, m, t, *x;
    int i, imax, j, k;
    x=Vector(0,n);
    for(k=1;k<=n;k++)
    {
        /* partial pivoting: largest entry in column k, rows k..n */
        amax=0.0;
        for(i=k;i<=n;i++)
            if(amax<fabs(a[i][k]))
            {
                amax=fabs(a[i][k]);
                imax=i;
            }
        if(amax==0)
        {
            printf("matrice singulara!\n");
            return;
        }
        if(imax!=k)
        {
            for(j=k;j<=n;j++) Swap(a[imax][j], a[k][j]);
            Swap(b[imax], b[k]);
        }
        m=1.0/a[k][k];
        for(j=k;j<=n;j++) a[k][j]*=m;   /* normalize the pivot row */
        b[k]*=m;
        for(i=k+1;i<=n;i++)
        {
            m=a[i][k]/a[k][k];
            for(j=k;j<=n;j++) a[i][j]-=a[k][j]*m;
            b[i]-=b[k]*m;
        }
    }
    x[n]=b[n]/a[n][n];
    for(k=n-1;k>=1;k--)
    {
        suma=0.0;
        for(j=k+1;j<=n;j++)
            suma+=a[k][j]*x[j];
        x[k]=(b[k]-suma)/a[k][k];
    }
    ScriereVect(x,1,n);
}
1.2 LU Factorization

In the Gaussian elimination method, for solving a Cramer-type system

Ax = b,      (1.2.1)

the system was reduced to a triangular form and then solved by back substitution. It is much easier to solve triangular systems. Let us exploit this idea and assume that a given n × n matrix A can be written as a product of two matrices L and U, so that

A = L · U      (1.2.2)
L = [ λ_11  0     0     . . .  0
      λ_21  λ_22  0     . . .  0
      · · ·
      λ_n1  λ_n2  λ_n3  . . .  λ_nn ]      (1.2.3)

U = [ μ_11  μ_12  μ_13  . . .  μ_1n
      0     μ_22  μ_23  . . .  μ_2n
      · · ·
      0     0     0     . . .  μ_nn ]      (1.2.4)
Theorem 1.2.1. If the matrix A can be written in the form (1.2.2), then the system (1.2.1) is decomposed into two triangular systems:

Ly = b      (1.2.5)
Ux = y      (1.2.6)

which are equivalent with:

λ_11 y_1 = b_1
λ_21 y_1 + λ_22 y_2 = b_2      (1.2.7)
···················
λ_n1 y_1 + λ_n2 y_2 + . . . + λ_nn y_n = b_n

μ_11 x_1 + μ_12 x_2 + . . . + μ_1n x_n = y_1
          μ_22 x_2 + . . . + μ_2n x_n = y_2      (1.2.8)
          ···············
                    μ_nn x_n = y_n .

Proof: immediate.
Both systems are triangular and therefore easy to solve. What we need is a procedure to generate the factorization. For this we observe that L and U can be obtained by Gaussian elimination. In fact, we have the following theorem:

Proof: immediate.

Theorem 1.2.3. If A is a non-singular matrix and satisfies the conditions:

det(A^[i]) ≠ 0,  i = 1, 2, . . . , n − 1      (1.2.11)

For j = 1 we obtain:

hence, the first column of the matrix L is equal to the first column of the matrix A.

If in (1.2.12) we take i = 1 then we obtain:

hence,

μ_1j = a_1j / a_11 ,  j = 1, 2, . . . , n      (1.2.15)

which represents the elements of the first row of the matrix U.
Supposing that the first r − 1 columns of the matrix L and the first r − 1 rows of the matrix U are determined, from (1.2.12) we have:

a_ir = λ_ir + Σ_{k=1}^{r−1} λ_ik · μ_kr ,  i = r, . . . , n

a_rj = λ_rr · μ_rj + Σ_{k=1}^{r−1} λ_rk · μ_kj ,  j = r + 1, . . . , n

and hence λ_ir (column r of L) and μ_rj (row r of U) are obtained, in the case of the Crout factorization.
Exercises

Solve the following systems using LU factorization:

a)  4x − 2y +  z = 16
   −3x −  y + 3z = 3
     x −  y + 3z = 6

b)  2x + 3y + 3z = 2
         5y + 7z = 2
    6x + 9y + 8z = 5
The algorithm of implementation of the LU factorization is:

for k = 1 . . . n
    u_kk = 1
    l_kk = a_kk − Σ_{s=1}^{k−1} l_ks · u_sk
    for i = k + 1 . . . n
        u_ki = ( a_ki − Σ_{s=1}^{k−1} l_ks · u_si ) / l_kk
        l_ik = a_ik − Σ_{s=1}^{k−1} l_is · u_sk

// Determination of the y vector representing the solution of the system Ly = b:

y_1 = b_1 / l_11
for i = 2 . . . n
    y_i = ( b_i − Σ_{j=1}^{i−1} l_ij · y_j ) / l_ii

// Determination of the x vector representing the solution of the system Ux = y:

x_n = y_n / u_nn
for i = n − 1 . . . 1
    x_i = ( y_i − Σ_{j=i+1}^{n} u_ij · x_j ) / u_ii

Input data:
- n - dimension of the space

Output data:
- (x_i)_{i=1,...,n} - the solution of the system Ax = b
{
        for(j=1;j<=m;j++)
            printf("%g ",a[i][j]);
        printf("\n");
    }
}

void ScriereVect(float *a, int n)
{
    int i;
    for(i=1; i<=n; i++)
    {
        printf("%g",a[i]);
        printf("\n");
    }
}

float **Matrice(int imin, int imax, int jmin, int jmax)
{
    int i, ni=imax-imin+1, nj=jmax-jmin+1;
    float **p;
    p=(float **) malloc((size_t)(ni*sizeof(float *)));
    p-=imin;
    p[imin]=(float *) malloc((size_t)(ni*nj*sizeof(float)));
    p[imin]-=jmin;
    for(i=imin+1;i<=imax;i++) p[i]=p[i-1]+nj;
    return p;
}

float *Vector(int n)
{
    float *p;
    p=(float *)malloc((size_t)((n+1)*sizeof(float)));  /* indices 1..n are used */
    return p;
}
void LU(float **a, float *b, int n)
{
    float **u, **l, *y, *x, suma;
    int i, k, j;
    u=Matrice(1,n,1,n);
    l=Matrice(1,n,1,n);
    x=Vector(n);
    y=Vector(n);
    for(k=1;k<=n;k++)
    {
        u[k][k]=1;
        suma=0;
        for(j=1;j<=k-1;j++)
            suma=suma+l[k][j]*u[j][k];
        l[k][k]=a[k][k]-suma;
        for(i=k+1;i<=n;i++)
        {
            suma=0;
            for(j=1;j<=k-1;j++)
                suma=suma+l[k][j]*u[j][i];
            u[k][i]=(a[k][i]-suma)/l[k][k];
            suma=0;
            for(j=1;j<=k-1;j++)
                suma=suma+l[i][j]*u[j][k];
            l[i][k]=a[i][k]-suma;
        }
    }
    y[1]=b[1]/l[1][1];
    for(i=2;i<=n;i++)
    {
        suma=0;
        for(j=1;j<=i-1;j++)
            suma=suma+l[i][j]*y[j];
        y[i]=(b[i]-suma)/l[i][i];
    }
    x[n]=y[n]/u[n][n];
    for(i=n-1;i>=1;i--)
    {
        suma=0;
        for(j=i+1;j<=n;j++)
            suma=suma+u[i][j]*x[j];
        x[i]=(y[i]-suma)/u[i][i];
    }
    ScriereVect(x,n);
}
1.3 Tridiagonal Systems

Consider a system

Ax = d      (1.3.1)

in which the matrix A is tridiagonal (it has nonzero elements only on the main diagonal and in the positions adjacent to the diagonal), i.e., A has the form:
A = [ b_1  c_2  0    0    . . .  0
      a_2  b_2  c_3  0    . . .  0
      0    a_3  b_3  c_4  . . .  0
      · · ·
      0    . . .  a_{n−1}  b_{n−1}  c_n
      0    . . .  0        a_n      b_n ]      (1.3.2)

A tridiagonal system can be solved very efficiently by the factorization method. Thus, using the Crout factorization, we have the following matrices L and U corresponding to the given tridiagonal matrix:
L = [ β_1  0    0  . . .  0        0
      a_2  β_2  0  . . .  0        0
      · · ·
      0    0    0  . . .  β_{n−1}  0
      0    0    0  . . .  a_n      β_n ]      (1.3.3)

U = [ 1  ν_2  0    . . .  0  0
      0  1    ν_3  . . .  0  0
      · · ·
      0  0    0    . . .  1  ν_n
      0  0    0    . . .  0  1 ]      (1.3.4)

Theorem 1.3.1. The equality A = L · U is satisfied if and only if the following equalities take place:

β_1 = b_1      (1.3.5)
β_i · ν_{i+1} = c_{i+1} ,  i = 1, . . . , n − 1      (1.3.6)
a_i · ν_i + β_i = b_i ,  i = 2, . . . , n      (1.3.7)
Proof: If A = L · U then:

- multiplying row 1 of the matrix L with column 1 of the matrix U we obtain β_1 · 1; equating this with the element from the first row and the first column of the matrix A, i.e. b_1, we obtain β_1 = b_1.

- multiplying row i of the matrix L with column i of the matrix U we obtain a_i · ν_i + β_i; equating this with the element from row i and column i of the matrix A, i.e. b_i, we obtain a_i · ν_i + β_i = b_i.

Reciprocally, if the relations (1.3.5), (1.3.6) and (1.3.7) take place, then the equality A = L · U is satisfied. Here:

- λ_ij = 0 for i = 2, . . . , n if j ∉ {i, i − 1}, and λ_{i,i−1} = a_i , λ_ii = β_i ;

- μ_ij = 0 for i = 1, . . . , n − 1 if j ∉ {i, i + 1}, and μ_ii = 1, μ_{i,i+1} = ν_{i+1} .

For i = 1 and j ≤ 2 we have γ_11 = β_1 = b_1 and γ_12 = β_1 · ν_2 = c_2 ; the element γ_1j for j > 2 is:

γ_1j = 0 = a_1j

For i = 2, . . . , n − 1, γ_ij is:
Exercises

1. Solve the following system using the LU method:

    x + 2y           = 3
   3x + 2y + 4z      = 1
        y +  z + 2t  = −2
            2z + 3t  = 3
The algorithm of implementation of the LU factorization in the case of tridiagonal matrices is:

Input data:
- n - dimension of the space
- (d_i)_{i=1,...,n} - the column vector from the right-hand side

Output data:
- (x_i)_{i=1,...,n} - the solution of the system Ax = d
{
    int i;
    for(i=init; i<=n; i++)
    {
        printf("[%i]=",i);
        scanf("%f", &a[i]);
    }
}

void ScriereVect(float *a, int init, int n)
{
    int i;
    for(i=init; i<=n; i++)
    {
        printf("%g",a[i]);
        printf("\n");
    }
}

void Tridiag(float *a, float *b, float *c, float *d, int n)
{
    int i;
    float *beta, *gamma, *x, *y;
    beta=Vector(n);
    gamma=Vector(n);
    x=Vector(n);
    y=Vector(n);
    beta[1]=b[1];
    gamma[2]=c[2]/beta[1];
    for(i=2;i<=n-1;i++)
    {
        beta[i]=b[i]-a[i]*gamma[i];
        gamma[i+1]=c[i+1]/beta[i];
    }
    beta[n]=b[n]-a[n]*gamma[n];
    y[1]=d[1]/beta[1];
    for(i=2;i<=n;i++)
        y[i]=(d[i]-a[i]*y[i-1])/beta[i];
    x[n]=y[n];
    for(i=n-1;i>=1;i--)
        x[i]=y[i]-gamma[i+1]*x[i+1];
    ScriereVect(x,1,n);
}
Proof: We will prove that, if there exists a non-singular lower-triangular matrix L such that A = L · Lᵀ, then the matrix A is symmetric and positive definite.

Symmetry: A = L · Lᵀ ⇒ Aᵀ = (L · Lᵀ)ᵀ = (Lᵀ)ᵀ · Lᵀ = L · Lᵀ = A ⇒ A = Aᵀ.

Positive definiteness: for x ≠ 0 we have xᵀAx = xᵀL·Lᵀx = ‖Lᵀx‖² > 0, since L is non-singular and hence Lᵀx = 0 ⇔ x = 0.
Cholesky Factorization

In the following, we will show what the elements of the matrix L are. We consider:

L = [ λ_11  0     0     . . .  0
      λ_21  λ_22  0     . . .  0
      λ_31  λ_32  λ_33  . . .  0
      · · ·
      λ_n1  λ_n2  λ_n3  . . .  λ_nn ]      (1.4.6)

and

Lᵀ = [ λ_11  λ_21  λ_31  . . .  λ_n1
       0     λ_22  λ_32  . . .  λ_n2
       0     0     λ_33  . . .  λ_n3
       · · ·
       0     0     0     . . .  λ_nn ]      (1.4.7)

The element from row i and column j of the product L · Lᵀ is:

p_ij = Σ_{k=1}^{min(i,j)} λ_ik · λ_jk      (1.4.8)

λ_ij = ( a_ij − Σ_{k=1}^{j−1} λ_ik · λ_jk ) / λ_jj ,  i = j + 1, j + 2, . . . , n.      (1.4.12)
Thus, if the matrix A is symmetric and positive definite, then the formulas (1.4.10), (1.4.11) and (1.4.12) define a non-singular lower-triangular matrix L for which A = L · Lᵀ.

Theorem 1.4.3. If the matrix A is symmetric and positive definite, then the solution of the system

Ax = b      (1.4.13)

is given by the formulas:

x_n = y_n / λ_nn  and  x_i = ( y_i − Σ_{k=i+1}^{n} λ_ki · x_k ) / λ_ii ,  i = n − 1, n − 2, . . . , 1      (1.4.14)

in which y_1, . . . , y_n are given by:

y_1 = b_1 / λ_11  and  y_i = ( b_i − Σ_{k=1}^{i−1} λ_ik · y_k ) / λ_ii ,  i = 2, 3, . . . , n.      (1.4.15)
Exercises

1. Solve the following systems using Cholesky factorization:

a)  x1 +  x2 +   x3 = 2
    x1 + 5x2 +  5x3 = 4
    x1 + 5x2 + 14x3 = −5

b)   x1 + 2x2 +  3x3 = 0
    2x1 + 5x2 +   x3 = −12
    3x1 +  x2 + 35x3 = 59
x_n = y_n / l_nn
for i = n − 1 . . . 1
    x_i = ( y_i − Σ_{j=i+1}^{n} l_ji · x_j ) / l_ii

Input data:
- n - dimension of the space

Output data:
- (x_i)_{i=1,...,n} - the solution of the system Ax = b

#include<stdio.h>
#include<math.h>
#include<stdlib.h>
        suma=0.0;
        for(j=1;j<=i-1;j++)
            suma=suma+l[i][j]*y[j];
        y[i]=(b[i]-suma)/l[i][i];
    }
    x[n]=y[n]/l[n][n];
    for(i=n-1;i>=1;i--)
    {
        suma=0.0;
        for(j=i+1;j<=n;j++)
            suma=suma+l[j][i]*x[j];
        x[i]=(y[i]-suma)/l[i][i];
    }
    ScriereVect(x,n);
}
v = a_1 + α · e_1      (1.5.5)

we have:

α = ±‖a_1‖.      (1.5.9)

Choosing α = ±‖a_1‖ and v = a_1 + α · e_1, it is easy to see that the equality (1.5.4) is verified.

Remark 1.5.2. Proposition 1.5.1 establishes the fact that, for any symmetric matrix A, the components 2, 3, . . . , n of the vector v coincide with those of the vector a_1 (the first column of the matrix A), and hence the components 2, 3, . . . , n of the vector:

P_{n−1} · a_1

are null.
Householder Factorization

Proposition 1.5.3. The Householder matrix P_{n−1}, associated with the symmetric matrix A, is symmetric and has the property that the matrix U_1 defined by:

U_1 = [ 1  0 · · · 0
        0
        ·    P_{n−1}
        0 ]      (1.5.16)

is symmetric and verifies:

U_1 A = [ a_11  a_12      a_13      . . .  a_1n
          α_1   a^(1)_22  a^(1)_23  . . .  a^(1)_2n
          0     a^(1)_32  a^(1)_33  . . .  a^(1)_3n
          · · ·
          0     a^(1)_n2  a^(1)_n3  . . .  a^(1)_nn ]      (1.5.17)
A^(1) = U_1 A U_1 = [ a_11  α_1       0         . . .  0
                      α_1   a^(1)_22  a^(1)_23  . . .  a^(1)_2n
                      0     a^(1)_32  a^(1)_33  . . .  a^(1)_3n
                      · · ·
                      0     a^(1)_n2  a^(1)_n3  . . .  a^(1)_nn ]      (1.5.18)

Proof: The symmetry of the Householder matrix P_{n−1} results from the definition. From the symmetry of P_{n−1} we obtain the symmetry of U_1. Relations (1.5.17) and (1.5.18) are verified by simple computations.
Remark 1.5.3. The Householder matrix of order n − 2, denoted by P_{n−2}, is constructed with the help of the column vector formed with the last n − 2 elements of the second column of the matrix A^(1). The corresponding matrix U_2 is defined by:

U_2 = [ 1  0  0 · · · 0
        0  1  0 · · · 0
        0  0
        ·  ·    P_{n−2}
        0  0 ]      (1.5.19)

and it has the property:

A^(2) = U_2 A^(1) U_2 = [ a_11  α_1       0         0         . . .  0
                          α_1   a^(1)_22  α_2       0         . . .  0
                          0     α_2       a^(2)_33  a^(2)_34  . . .  a^(2)_3n
                          · · ·
                          0     0         a^(2)_n3  a^(2)_n4  . . .  a^(2)_nn ]      (1.5.20)
Using the matrix P_{n−2}, we obtained a new row and a new column of the tridiagonal matrix to which we will reduce the matrix A.

Continuing with n − 2 such transformations, we obtain the equality:

U A U = T      (1.5.21)

in which T is a tridiagonal matrix.

Theorem 1.5.1. If U A U = T then the solution of the system Ax = b is given by:

x = U y      (1.5.22)

where y represents the solution of the system

T y = U b.      (1.5.23)

Proof: Ax = b and U A U = T ⇒ U^{−1} T U^{−1} x = b ⇒ T U^{−1} x = U b. Denoting y = U^{−1} x ⇒ T y = U b, and we obtain x = U y.
Exercises

1. Determine the solutions of the following systems, using Householder factorization:

a)  2x1 + 2x2 +  x3 = √2
    2x1 −  x2 +  x3 = 5
     x1 +  x2 + 2x3 = 0

b)   x1 −  x2 + 2x3 + 2x4 = 0
    −x1 +  x2 + 3x3       = −1
    2x1 + 3x2 −  x3 + 2x4 = 2
    2x1       + 2x3       = 1
The algorithm for the Householder factorization is:

for i = 1 . . . n − 2
    for l = 1 . . . n
        for m = 1 . . . n
            if m = l then u_ml = 1
            if m ≠ l then u_ml = 0
    // We generate the vector v
    norm_a = sqrt( Σ_{j=i+1}^{n} a_ij² )
    e_{i+1} = 1
    for j = i + 2 . . . n
        e_j = 0
    for j = i + 1 . . . n
        v_j = a_ij + sign(a_{i,i+1}) · norm_a · e_j
    norm_v = Σ_{j=i+1}^{n} v_j²
    // We generate the matrix U
    for j = i + 1 . . . n
        for k = i + 1 . . . n
            u_jk = u_jk − 2 · v_j · v_k / norm_v
    // D = A U
    for m = 1 . . . n
        for l = 1 . . . n
            d_ml = Σ_{k=1}^{n} a_mk · u_kl
    // A = U D = U A U
    for m = 1 . . . n
        for l = 1 . . . n
            a_ml = Σ_{k=1}^{n} u_mk · d_kl

Using the above algorithm, we determine the solution y of the system UAUy = Ub, and after that we obtain the solution of the system Ax = b:

for i = 1 . . . n
    x_i = Σ_{j=1}^{n} u_ij · y_j

Input data:
- n - space dimension

Output data:
- (x_i)_{i=1,...,n} - solution of the system Ax = b
    CitireMat(a,n,n);
    ScriereMat(a,n,n);
    CitireVect(b,n);
    ScriereVect(b,n);
    printf("SOLUTIE\n");
    Householder(a,b,n);
}
void Householder(float **a, float *b, int n)
{
    int i, j, k, m;
    float **u, **c, xnorm, *e, vnorm, *v;
    int sign;
    c=Matrice(1,n,1,n);
    u=Matrice(1,n,1,n);
    e=Vector(n);
    v=Vector(n);
    for(i=1;i<=n-2;i++)
    {
        for(j=1;j<=n;j++)
            for(k=1;k<=n;k++)
            {
                if(k==j) u[j][k]=1;
                else u[j][k]=0;
            }
        xnorm=0;
        for(j=i+1;j<=n;j++)
        {
            e[j]=0;
            xnorm=xnorm+a[i][j]*a[i][j];
        }
        e[i+1]=1;
        xnorm=sqrt(xnorm);
        if(a[i][i+1]>0) sign=1;
        else
        {
            if(a[i][i+1]<0) sign=-1;
            else sign=0;
        }
        vnorm=0;
        for(j=i+1;j<=n;j++)
        {
            v[j]=a[i][j]+sign*xnorm*e[j];
            vnorm=vnorm+v[j]*v[j];
        }
        for(j=i+1;j<=n;j++)
            for(k=i+1;k<=n;k++)
                u[j][k]=u[j][k]-2*v[j]*v[k]/vnorm;
        /* apply the similarity transformation of this step: A = U*A*U */
        for(j=1;j<=n;j++)
            for(k=1;k<=n;k++)
            {
                c[j][k]=0.0;
                for(m=1;m<=n;m++)
                    c[j][k]+=a[j][m]*u[m][k];
            }
        for(j=1;j<=n;j++)
            for(k=1;k<=n;k++)
            {
                a[j][k]=0.0;
                for(m=1;m<=n;m++)
                    a[j][k]+=u[j][m]*c[m][k];
            }
    }
}
L = [ 0     0     0     . . .  0
      a_21  0     0     . . .  0
      a_31  a_32  0     . . .  0
      · · ·
      a_n1  a_n2  a_n3  . . .  0 ]      (1.6.3)

U = [ 0  a_12  a_13  . . .  a_1n
      0  0     a_23  . . .  a_2n
      · · ·
      0  0     0     . . .  a_{n−1,n}
      0  0     0     . . .  0 ]      (1.6.4)

Remark 1.6.1. The matrix A can be written as:

A = D + L + U.      (1.6.5)

Proof: If x^(∗) is a solution of the system (1.6.1), then Ax^(∗) = b. From this we obtain successively:

Dx^(∗) = b − (L + U)x^(∗)

and hence:

x^(∗) = D^{−1}[b − (L + U)x^(∗)].

Definition 1.6.1. Let x^(0) ∈ IRⁿ be a given vector. The sequence of vectors:

x^(k+1) = D^{−1}[b − (L + U)x^(k)] ,  k = 0, 1, 2, . . .      (1.6.7)

is called the Jacobi trajectory of the vector x^(0).
In the case in which the Jacobi trajectory of the vector x^(0) converges, this trajectory will be called the Jacobi sequence of the successive approximations.

Theorem 1.6.2. If the Jacobi trajectory of the vector x^(0) converges, then the limit of the Jacobi sequence of the successive approximations is a solution of the system (1.6.1).

Proof: We denote by x^(∗) the limit of the Jacobi sequence of the successive approximations. Passing to the limit for k → ∞ in the relation (1.6.7), we obtain the equality x^(∗) = D^{−1}[b − (L + U)x^(∗)]. On the basis of Theorem 1.6.1 we have that x^(∗) is a solution of the system (1.6.1).
Theorem 1.6.3. The Jacobi trajectory of the vector x^(0) converges if and only if the sequence y^(k) defined by:

y^(k+1) = −D^{−1}(L + U)y^(k) ,  k = 1, 2, . . .      (1.6.8)
y^(0) = x^(0) − x^(∗)

converges to zero, where x^(∗) represents the solution of the system (1.6.1). The matrix −D^{−1}(L + U) is called the Jacobi matrix.

Proof: We will prove that the vector x^(k+1) on the Jacobi trajectory of the vector x^(0) and the vector y^(k+1) defined by (1.6.8) verify:

y^(k+1) = x^(k+1) − x^(∗) ,  k = 0, 1, 2, . . .      (1.6.9)

For k = 0 we must prove the equality y^(1) = x^(1) − x^(∗). To this aim, using (1.6.8) we compute y^(1), and we obtain:

In this way, we prove that the relation (1.6.9) is true for any k = 0, 1, 2, . . ..

From (1.6.9) we obtain that if x^(k+1) converges then y^(k+1) converges, too. Moreover, according to Theorem 1.6.2 we have lim_{k→∞} x^(k+1) = x^(∗), and hence lim_{k→∞} y^(k+1) = 0. If y^(k+1) converges to zero, from (1.6.9) we obtain that x^(k+1) converges to x^(∗).
converges to zero for any y^(0) ∈ IRⁿ, if and only if the spectral radius ρ of the matrix −D^{−1}(L + U) is strictly sub-unitary.

A sufficient condition for convergence to zero of the sequence y^(k+1), for any y^(0), is given by the next theorem, in which the matrix norm is defined as follows:

‖A‖ = max { ‖Ax‖ / ‖x‖ : ‖x‖ ≠ 0 } ,

Remark 1.6.2. If the spectral radius ρ of the matrix −D^{−1}(L + U) is strictly sub-unitary, then for any x^(0) ∈ IRⁿ, the Jacobi sequence of the successive approximations converges to the solution of the system (1.6.1).
Exercises

1. Decide if the Jacobi method can be applied for solving the following system:

    5x1 − 2x2 + 3x3 = −1
   −3x1 + 9x2 +  x3 = 2
    2x1 −  x2 − 7x3 = 3
The algorithm of the Jacobi method:

for i = 1 . . . n
    x_i = 0
repeat
    for i = 1 . . . n
        x_i^{k+1} = ( b_i − Σ_{j=1, j≠i}^{n} a_ij · x_j^k ) / a_ii

Input data:
- (b_i)_{i=1,...,n} - the column vector from the right-hand side

Output data:
- (x_i)_{i=1,...,n} - solution of the system Ax = b
- k - number of steps
Ax = b, (1.7.1)
A=L+D+U (1.7.2)
The Gauss-Seidel Method

Definition 1.7.1. For a vector x^(0) ∈ IRⁿ, the sequence of vectors x^(k) defined by:

x^(k+1) = (L + D)^{−1}(b − U x^(k))      (1.7.4)

is called the Gauss-Seidel trajectory of the vector x^(0).

Definition 1.7.2. We say that the Gauss-Seidel trajectory of the vector x^(0) converges if the sequence x^(k+1) defined by (1.7.4) converges.
In the case in which the Gauss-Seidel trajectory of the vector x^(0) converges, it is called the Gauss-Seidel sequence of successive approximations.

Theorem 1.7.2. If the Gauss-Seidel trajectory of the vector x^(0) converges, then the limit of the Gauss-Seidel sequence of successive approximations is a solution of the system (1.7.1).

Proof: We denote by x^(∗) the limit of the Gauss-Seidel sequence of the successive approximations. Passing to the limit for k → ∞ in relation (1.7.4) we obtain the equality x^(∗) = (L + D)^{−1}(b − U x^(∗)). On the basis of Theorem 1.7.1, we have that x^(∗) is a solution of the system (1.7.1).
Theorem 1.7.3. The Gauss-Seidel trajectory of the vector x^(0) converges if and only if the sequence y^(k+1) defined by:

y^(k+1) = −(L + D)^{−1} U y^(k) ,  k = 1, 2, . . .      (1.7.5)
y^(0) = x^(0) − x^(∗)

converges to zero, where we denoted by x^(∗) the solution of the system (1.7.1). The matrix −(L + D)^{−1} U is called the Gauss-Seidel matrix.

Proof: We will prove that the vector x^(k+1) on the trajectory of x^(0) and the vector y^(k+1) given by (1.7.5) verify:

y^(k+1) = x^(k+1) − x^(∗) ,  k = 0, 1, 2, . . .

For k = 0, the equality y^(1) = x^(1) − x^(∗) must be shown. Thus, computing y^(1) with (1.7.5) we find:

converges to zero for any y^(0) ∈ IRⁿ if and only if the spectral radius ρ of the Gauss-Seidel matrix −(L + D)^{−1} U is strictly sub-unitary.
A sufficient condition for the sequence y^(k+1) to converge to zero, for any y^(0), is given by the next theorem, in which the matrix norm is defined as follows:

‖A‖ = max { ‖Ax‖ / ‖x‖ : ‖x‖ ≠ 0 } ,

Theorem 1.7.5. If the norm of the matrix −(L + D)^{−1} U is strictly sub-unitary, then for any y^(0) ∈ IRⁿ, the sequence y^(k+1) defined by (1.7.7) converges to zero.

Proof: Using mathematical induction, the following inequality is proved:
Remark 1.7.1. If the spectral radius ρ of the matrix −(L + D)^{−1} U is strictly sub-unitary, then for any x^(0) ∈ IRⁿ, the Gauss-Seidel sequence of the successive approximations converges to the solution of the system (1.7.1).

Remark 1.7.2. If the norm of the matrix −(L + D)^{−1} U is strictly sub-unitary, then for any x^(0) ∈ IRⁿ, the Gauss-Seidel sequence of successive approximations converges to the solution of the system (1.7.1).

Proposition 1.7.1. The points on the Gauss-Seidel trajectory of the vector x^(0) verify:

x^(k+1) = D^{−1}(b − L x^(k+1) − U x^(k)) ,  k = 0, 1, 2, . . .      (1.7.8)

(L + D)x^(k+1) = b − U x^(k) ⇒
⇒ L x^(k+1) + D x^(k+1) = b − U x^(k) ⇒
⇒ x^(k+1) = D^{−1}(b − L x^(k+1) − U x^(k)).
x_i^{(k+1)} = ( b_i − Σ_{j=1}^{i−1} a_ij · x_j^{(k+1)} − Σ_{j=i+1}^{n} a_ij · x_j^{(k)} ) / a_ii ,  i = 2, . . . , n;  k = 0, 1, . . .      (1.7.10)
Exercises

1. Decide whether the Jacobi and Gauss-Seidel methods can be applied for solving the following system:

    4x1 +  x2 = −1
    4x1 + 3x2 = −2
The algorithm of the Gauss-Seidel method:

for i = 1 . . . n
    x_i = 0
repeat
    for i = 1 . . . n
        x_i^{k+1} = ( b_i − Σ_{j=1}^{i−1} a_ij · x_j^{k+1} − Σ_{j=i+1}^{n} a_ij · x_j^k ) / a_ii
Output data:
- (xi )i=1,...,n - solution of the system Ax = b
- k - number of steps
Successive Over-relaxation (SOR) Method

Ax = b,      (1.8.1)

A = L + D + U,      (1.8.2)

Theorem 1.8.1. The vectors x^(k) on the trajectory of x^(0) obtained by successive over-relaxation verify:

x^(k+1) = ( L + (1/ω) D )^{−1} [ b − ( (1 − 1/ω) D + U ) x^(k) ] ,  k = 0, 1, 2, . . .      (1.8.8)
Definition 1.8.3. We say that the trajectory of the vector x^(0), obtained by successive over-relaxations, converges if the sequence x^(k+1) defined by (1.8.8) is convergent.

In the case in which the trajectory of the vector x^(0) obtained by successive over-relaxations converges, it is called the sequence of successive approximations.

L x^(∗) + (1/ω) D x^(∗) = b − D x^(∗) + (1/ω) D x^(∗) − U x^(∗) ,

(L + D + U) x^(∗) = b,

A x^(∗) = b.
Remark 1.8.3. If the norm of the matrix −( L + (1/ω) D )^{−1} ( (1 − 1/ω) D + U ) is strictly sub-unitary, then for any x^(0) ∈ IRⁿ, the successive approximations sequence, obtained by successive over-relaxations, converges to the solution of the system (1.8.1).

x_i^{(k+1)} = (1 − ω) · x_i^{(k)} + (ω / a_ii) [ b_i − Σ_{j=1}^{i−1} a_ij · x_j^{(k+1)} − Σ_{j=i+1}^{n} a_ij · x_j^{(k)} ]      (1.8.14)

i = 2, . . . , n;  k = 0, 1, . . .
Exercises

Decide if the Jacobi, Gauss-Seidel and successive over-relaxation methods can be applied for solving the following system:

    2x1 +  x2 = 3
    4x1 + 3x2 = −5
The algorithm for implementation of the successive over-relaxation:

for i = 1 . . . n
    x_i = 0
repeat
    for i = 1 . . . n
        x_i^{k+1} = (1 − ω) · x_i^k + (ω / a_ii) [ b_i − Σ_{j=1}^{i−1} a_ij · x_j^{k+1} − Σ_{j=i+1}^{n} a_ij · x_j^k ]

Input data:
- (b_i)_{i=1,...,n} - the column vector from the right-hand side
- ω

Output data:
- (x_i)_{i=1,...,n} - solution of the system Ax = b
- k - step number
        xi[i] = 0;
    do
    {
        for(i=1; i<=n; i++)
            z[i] = xi[i];
        for(i=1; i<=n; i++)
        {
            suma=0.0;
            for(j=1; j<=n; j++)
                if (i != j)
                    suma += a[i][j]*xi[j];
            xi[i] = 1.0/a[i][i]*(b[i] - suma);
        }
        k++;
    }while (max(z, xi,n) >= epsilon);
    return k;
}
int jacobi(float **a, float *b, int n, float *xi)
{
    float *z, suma;
    int k=0, i, j;
    z=Vector(n);
    for(i=1; i<=n; i++)
        xi[i] = 0;
    do
    {
        for(i=1; i<=n; i++)
            z[i] = xi[i];
        for(i=1; i<=n; i++)
        {
            suma=0.0;
            for(j=1; j<=n; j++)
                if (i != j)
                    suma += a[i][j]*z[j];
            xi[i] = 1.0/a[i][i]*(b[i] - suma);
        }
        k++;
    }while (max(z, xi,n) >= epsilon);
    return k;
}
int relaxare_succesiva(float **a, float *b, int n, float *xi)
{
    float *z, suma, omega;
    int k=0, i, j;
    printf("omega="); scanf("%f",&omega);
    z=Vector(n);
    for(i=1; i<=n; i++)
        xi[i] = 0;
    do
    {
        for(i=1; i<=n; i++)
            z[i] = xi[i];
        for(i=1; i<=n; i++)
        {
            suma=0.0;
            for(j=1; j<=n; j++)
                if (i != j)
                    suma += a[i][j]*xi[j];
            xi[i] = (1-omega)*z[i] + omega/a[i][i]*(b[i] - suma);
        }
        k++;
    }while (max(z, xi,n) >= epsilon);
    return k;
}
float max(float zi[], float xi[], int n)
/* returns the maximum of |zi - xi| */
{
    int i;
    float maxim = fabs(zi[1] - xi[1]);
    for(i=2; i<=n; i++)
        if(maxim < fabs(zi[i]-xi[i]))
            maxim = fabs(zi[i]-xi[i]);
    return maxim;
}
Chapter 2

Numerical Solutions of Equations and Systems of Nonlinear Equations

Definition 2.0.1. A nonlinear system of n algebraic equations with n unknowns is a system of the form:

f_1(x_1, x_2, . . . , x_n) = 0
. . . . . . . . . . . . . . .      (2.0.1)
f_n(x_1, x_2, . . . , x_n) = 0

or

F(x) = 0,      (2.0.2)

where F(x) represents the vector (f_1(x), . . . , f_n(x))ᵀ, in which the functions
where F′(x) represents the Jacobian matrix, which is supposed continuous and invertible. This formula can be deduced from the equivalences:

Definition 2.1.1. The vector x^(∗) ∈ D is called a fixed point for the operator G if:

G(x^(∗)) = x^(∗).      (2.1.4)

It is evident that x^(∗) is a fixed point for G if and only if x^(∗) is a solution of the system (2.1.1), equivalent with (2.0.1).

In the following, we suppose that the operator G has a fixed point x^(∗). To find the fixed point x^(∗) of the operator G (i.e. the solution x^(∗) of the system (2.0.1)), we use the algorithm:

x^(k+1) = G(x^(k)) ,  k = 0, 1, 2, . . .      (2.1.5)

with x^(0) given.
Definition 2.1.2. The point x^(∗) is called an attractor if there is an open sphere S(x^(∗), r) = {x ∈ IRⁿ | ‖x − x^(∗)‖ < r} having the properties:

1. S(x^(∗), r) ⊂ D and x^(k) obtained from (2.1.5) is well defined for all x^(0) ∈ S(x^(∗), r);

2. for all x^(0) ∈ S(x^(∗), r) the sequence x^(k) defined by (2.1.5) belongs to D and x^(k) → x^(∗) as k → ∞.

Fixed-Point Iterative Method

Remark 2.1.2. If x^(∗) is an attractor, then x^(k) is called the successive approximation sequence of the fixed point x^(∗).
Proof: For x^(0) ∈ S(x^(∗), r) we consider x^(1) = G(x^(0)). From the inequality

it results

‖x^(1) − x^(∗)‖ ≤ α ‖x^(0) − x^(∗)‖ < α · r < r,

from which we obtain:

x^(1) ∈ S(x^(∗), r) ⊂ D.

After that, we consider x^(2) = G(x^(1)), and from

we obtain

x^(2) ∈ S(x^(∗), r) ⊂ D.

In the following, we consider x^(3) = G(x^(2)), and using a similar evaluation we have:

‖x^(3) − x^(∗)‖ ≤ α³ ‖x^(0) − x^(∗)‖ < α³ · r < r.

By mathematical induction it is shown that x^(k+1) = G(x^(k)) is well defined and the terms of this sequence verify:

If there exists k₀ such that x^(k) = x^(∗) for k ≥ k₀, then lim sup_{k→∞} ‖x^(k+1) − x^(∗)‖ = 0 ≤ α.

If x^(k) ≠ x^(∗) for k ≥ k₀, then the inequality ‖x^(k+1) − x^(∗)‖ ≤ α ‖x^(k) − x^(∗)‖ shows that

‖x^(k+1) − x^(∗)‖ / ‖x^(k) − x^(∗)‖ ≤ α,

and hence

lim sup_{k→∞} ‖x^(k+1) − x^(∗)‖ / ‖x^(k) − x^(∗)‖ ≤ α.

From ‖x^(k) − x^(∗)‖ ≤ α^k ‖x^(0) − x^(∗)‖, the following inequality results:
Exercises
1. Using the Fixed-Point and the Newton methods, find the solutions of the following
systems:

a)  (x1^2 + x2)/6 − x1 = 0
    (x1 + x2^2)/8 − x2 = 0,    on the domain D = [0, 1] × [1, 2]

b)  x^3 − 20x − 1 = 0
    x^3 + xy − 10y + 10 = 0,   on the domain D = [0, 1] × [1, 2]
The algorithm of the classical Newton method in the case n = 2 is the following:
Input data:
- n = 2 - the dimension for the presented case
Output data:
- (xi)_{i=1,...,n} - the solution of the system
In what follows, we will show that the operator from the classical Newton
iteration is quasi-non-expansive in a certain sphere, from which we obtain the
local convergence of the Newton method.
‖[F′(x)]⁻¹ · R(x)‖ ≤ ‖[F′(x)]⁻¹‖ · ‖R(x)‖ ≤ β · γ · r · ‖x − x(∗)‖ < ‖x − x(∗)‖, ∀x ∈ S, x ≠ x(∗).
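The classical Newton iteration x(k+1) = x(k) − [F′(x(k))]⁻¹ F(x(k)) for n = 2 can be sketched in C as follows. The test system F(x, y) = (x² + y² − 4, x − y), with solution (√2, √2), and its hand-coded Jacobian are illustrative assumptions, not examples taken from the text; the 2×2 linear system at each step is solved by Cramer's rule.

```c
#include <math.h>

/* Hypothetical test system F(x, y) = (x^2 + y^2 - 4, x - y). */
static void F(double x, double y, double f[2])
{
    f[0] = x * x + y * y - 4.0;
    f[1] = x - y;
}

/* Its Jacobian, coded by hand for the case n = 2. */
static void JF(double x, double y, double j[2][2])
{
    j[0][0] = 2.0 * x; j[0][1] = 2.0 * y;
    j[1][0] = 1.0;     j[1][1] = -1.0;
}

/* Newton iteration: solve F'(x) d = -F(x) by Cramer's rule, update x += d. */
static void newton2(double *x, double *y, double eps, int kmax)
{
    for (int k = 0; k < kmax; k++) {
        double f[2], j[2][2];
        F(*x, *y, f);
        JF(*x, *y, j);
        double det = j[0][0] * j[1][1] - j[0][1] * j[1][0];
        double dx = (-f[0] * j[1][1] + f[1] * j[0][1]) / det;
        double dy = (-j[0][0] * f[1] + j[1][0] * f[0]) / det;
        *x += dx;
        *y += dy;
        if (fabs(dx) + fabs(dy) < eps)
            break;
    }
}
```

Started at (1, 1), the iterates converge quadratically to (√2, √2).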
Quasi-non-expansion Operators
converges to x(∗) for any x(0) ∈ S; these iterations are called the gradient
method.
Proof: As in the previous case we prove that the operator
‖R(x)‖ ≤ γ · r · ‖x − x(∗)‖, ∀x ∈ S.
‖R(x)‖ < (1/β) · ‖x − x(∗)‖.
Taking into account the inequality ‖F′(x)⁻¹‖ ≤ β we obtain:
Because:
it results:
‖x − x(∗)‖ · ‖[F′(x)]⁻¹‖⁻¹ ≤ ‖F′(x) · (x − x(∗))‖,
hence:
‖R(x)‖ < ‖F′(x) · (x − x(∗))‖, ∀x ∈ S.
Let us evaluate ‖G(x) − x(∗)‖² for x ∈ S and x ≠ x(∗). We have:
= ⟨x − x(∗) − ‖F′(x)‖⁻² · [F′(x)]^T · F(x), x − x(∗) − ‖F′(x)‖⁻² · [F′(x)]^T · F(x)⟩ =
= ⟨x − x(∗), x − x(∗)⟩ − 2 ⟨x − x(∗), ‖F′(x)‖⁻² · [F′(x)]^T · F(x)⟩ +
+ ‖F′(x)‖⁻² · ‖F(x)‖².
Taking into account the equality:
we have:
‖R(x)‖² = ‖F(x)‖² − 2 ⟨F(x), F′(x)(x − x(∗))⟩ + ‖F′(x)(x − x(∗))‖².
It follows:
−2 ⟨F(x), F′(x)(x − x(∗))⟩ = ‖R(x)‖² − ‖F(x)‖² − ‖F′(x)(x − x(∗))‖².
Interpolation, Polynomials Approximation, Spline Functions

In this way, the problem of determining the interpolating function is reduced
to the determination of the coefficients ai from the interpolating condition,
equivalent to:
Σ_{i=0}^{m} ai x_j^i = f(xj), j = 0, 1, . . . , m,
i.e.,
a0 + a1 x0 + a2 x0^2 + . . . + am x0^m = f(x0)
a0 + a1 x1 + a2 x1^2 + . . . + am x1^m = f(x1)
.................................
a0 + a1 xm + a2 xm^2 + . . . + am xm^m = f(xm).
This system has a unique solution because its determinant is different from zero
(it is a determinant of Vandermonde type and the xi are distinct).
We can conclude that the interpolating polynomial is unique for a given
function f and for given data points x0 < x1 < . . . < xm−1 < xm. This
approximation is also called global interpolation, due to the fact
that only one polynomial is used on the whole interval [a, b].
denoted by (f(xr+1) − f(xr)) / (xr+1 − xr), r = 0, 1, . . . , m − 1.
These divided differences constitute a set of m numbers attached to the points
x0 < x1 < . . . < xm−1:
x0 → (f(x1) − f(x0)) / (x1 − x0) = [x0, x1, f]
x1 → (f(x2) − f(x1)) / (x2 − x1) = [x1, x2, f]
...........................
xm−1 → (f(xm) − f(xm−1)) / (xm − xm−1) = [xm−1, xm, f].
f ↦ D¹f
and:
Definition 3.1.3. The second divided difference of the function f at the
point xr, r ≤ m − 2, is the number:
(D²f)(xr) = [(D¹f)(xr+1) − (D¹f)(xr)] / (xr+2 − xr) = ([xr+1, xr+2, f] − [xr, xr+1, f]) / (xr+2 − xr).

The Newton Divided Difference Formulas

Expanding, we obtain:
(D²f)(xr) = 1/(xr+2 − xr) · [ (f(xr+2) − f(xr+1))/(xr+2 − xr+1) − (f(xr+1) − f(xr))/(xr+1 − xr) ] =
= 1/(xr+2 − xr) · [ f(xr+2)(xr+1 − xr) − f(xr+1)(xr+1 − xr + xr+2 − xr+1) + f(xr)(xr+2 − xr+1) ] / [(xr+2 − xr+1)(xr+1 − xr)] =
= f(xr+2)/[(xr+2 − xr)(xr+2 − xr+1)] − f(xr+1)/[(xr+2 − xr+1)(xr+1 − xr)] + f(xr)/[(xr+2 − xr)(xr+1 − xr)] =
= f(xr)/[(xr − xr+1)(xr − xr+2)] + f(xr+1)/[(xr+1 − xr)(xr+1 − xr+2)] + f(xr+2)/[(xr+2 − xr)(xr+2 − xr+1)].
takes place, where it can be observed that the factor (xr+i − xr+i) is missing
from the denominator.
Remark 3.1.3. Considering the set of functions Fm−k = {f : {x0, x1, . . . , xm−k} → IR¹},
using the kth divided difference we can associate to every function
f ∈ Fm a function in Fm−k:
f ↦ D^k f
where D^k f is defined by (D^k f)(xr) = [xr, xr+1, . . . , xr+k, f] for r ≤ m − k.
The correspondence f ↦ D^k f will be denoted by D^k and will be called
the operator of the kth divided difference.
Remark 3.1.4. The operator D^k : Fm → Fm−k of the kth divided difference
is linear.
Remark 3.1.5. For k = m, the mth divided difference is defined only at x0
and it is given by:
(D^m f)(x0) = Σ_{i=0}^{m} f(xi) / [(xi − x0)(xi − x1) . . . (xi − xi−1)(xi − xi+1) . . . (xi − xm)].
Hence,
(Wf)(x0, x1, . . . , xm) / V(x0, x1, . . . , xm) =
= 1/∏_{i>j}(xi − xj) · Σ_{k=0}^{m} (−1)^{m+2+k} · f(xk) · ∏_{i>j, i,j≠k}(xi − xj) =
= Σ_{k=0}^{m} (−1)^{m+2+k} · f(xk) · [∏_{i>j, i,j≠k}(xi − xj)] / [∏_{i>j}(xi − xj)] =
= Σ_{k=0}^{m} (−1)^{m+2+k} · f(xk) / [(xk − x0) . . . (xk − xk−1)(xk+1 − xk) . . . (xm − xk)] =
= Σ_{k=0}^{m} (−1)^{m+2+k} · f(xk) / [(xk − x0) . . . (xk − xk−1)(−1)^{m−k}(xk − xk+1) . . . (xk − xm)] =
= Σ_{i=0}^{m} f(xi) / [(xi − x0)(xi − x1) . . . (xi − xi−1)(xi − xi+1) . . . (xi − xm)] =
= (D^m f)(x0).
In other words, the mth divided difference does not depend on the order of the
knots.
(D^m f)(x0) = Σ_{k=0}^{m−1} ak D^m(x^k)(x0).

D^m(x^k)(x0) = W(x^k)(x0, x1, . . . , xm) / V(x0, x1, . . . , xm)

with W(x^k)(x0, x1, . . . , xm) = | 1  x0  x0^2  . . .  x0^{m−1}  x0^k |
                                 | 1  x1  x1^2  . . .  x1^{m−1}  x1^k |
                                 | ·  ·   ·     . . .  ·          ·   |
                                 | 1  xm  xm^2  . . .  xm^{m−1}  xm^k |  = 0.

In this way, we find the equality from the statement.
[x0, x1, . . . , xm, f · g] = Σ_{k=0}^{m−1} [x0, x1, . . . , xk, f] · [xk, . . . , xm−1, g]

[x0, x1, . . . , xm, f · g] =
= 1/(xm − x0) · [ [x1, . . . , xm, f · g] − [x0, . . . , xm−1, f · g] ] =
= 1/(xm − x0) · [ Σ_{k=0}^{m−1} [x1, . . . , xk+1, f] · [xk+1, . . . , xm, g] −
− Σ_{k=0}^{m−1} [x0, . . . , xk, f] · [xk, . . . , xm−1, g] ] =
= 1/(xm − x0) · Σ_{k=0}^{m−1} [x0, . . . , xk, f] · { [xk+1, . . . , xm, g] − [xk, . . . , xm−1, g] } +
+ 1/(xm − x0) · Σ_{k=0}^{m−1} [xk+1, . . . , xm, g] · { [x1, . . . , xk+1, f] − [x0, . . . , xk, f] } =
= 1/(xm − x0) · Σ_{k=0}^{m−1} [x0, . . . , xk, f] · (xm − xk) · [xk, . . . , xm, g] +
+ 1/(xm − x0) · Σ_{k=1}^{m} [xk, . . . , xm, g] · (xk − x0) · [x0, . . . , xk, f] =
= 1/(xm − x0) · { (xm − x0) · [x0, f] · [x0, . . . , xm, g] +
+ Σ_{k=1}^{m−1} (xm − x0) · [x0, . . . , xk, f] · [xk, . . . , xm, g] +
which approximates the function f : X → IR¹ known only by its values at
the data points xi: yi = f(xi), i = 0, 1, . . . , m.
The approximating function ϕ(x) is a polynomial function of degree m, denoted
by pm(x), for which the coefficients ai will be computed using divided
differences:
ai = [x0, x1, . . . , xi, f], i = 0, 1, . . . , m.
Remark 3.1.7. The Newton polynomial with divided differences has the property
that its graph passes through the points (xi, yi) = (xi, f(xi)), i = 0, 1, . . . , m.
Using the Newton polynomial with divided differences, the function f :
X → IR¹ given by yi = f(xi), i = 0, 1, . . . , m, is written as:
From the Mean Value Theorem, we obtain the expression of the mth
divided difference as a function of the mth derivative of the function f. In
this way, the approximation error becomes:
Rm(x) = f^(m+1)(ξ)/(m + 1)! · (x − x0)(x − x1) . . . (x − xm−1)(x − xm),
where ξ ∈ (a, b).
The interpolating polynomial appears especially as a component of other
numerical algorithms (numerical integration or numerical differentiation). In
these applications, equal intervals are considered, given by equidistant knots
(i.e., the distance between two consecutive knots is equal to a constant h
called the step of the mesh):
xi+1 − xi = h, i = 0, 1, . . . , m − 1.
We introduce the forward difference operator △ and the backward difference
operator ▽ defined as follows:
△f(x) = f(x + h) − f(x),
[x0, x1, x2, f] = ([x1, x2, f] − [x0, x1, f]) / (x2 − x0) = △²f(x0) / (2!h²),
[xm−2, xm−1, xm, f] = ( ▽f(xm)/h − ▽f(xm−1)/h ) / (2h) = ▽²f(xm) / (2!h²).
By mathematical induction, the mth divided differences are obtained:
[x0, x1, x2, . . . , xm, f] = △^m f(x0) / (m!h^m),
[x0, x1, x2, . . . , xm, f] = ▽^m f(xm) / (m!h^m).
Based on the above formulas, the Newton polynomial with forward finite
differences is:
pm(x) = f(x0) + △f(x0)/h · (x − x0) + △²f(x0)/(2!h²) · (x − x0)(x − x1) + . . .
+ △^m f(x0)/(m!h^m) · (x − x0) . . . (x − xm−1),
and the Newton polynomial with backward finite differences is:
pm(x) = f(xm) + ▽f(xm)/h · (x − xm) + ▽²f(xm)/(2!h²) · (x − xm)(x − xm−1) + . . .
+ ▽^m f(xm)/(m!h^m) · (x − xm) . . . (x − x1).
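The Newton polynomial with forward finite differences can be evaluated directly from an in-place difference table; a minimal sketch in C (the fixed table size and the quadratic test data used below are illustrative assumptions).

```c
/* Evaluate the Newton forward-difference polynomial p_m at the point t,
   given equidistant knots x[0..m] with step h and values f[0..m]. */
static double newton_forward(const double *x, const double *f,
                             int m, double h, double t)
{
    double d[32];                 /* forward differences; assumes m < 32 */
    for (int i = 0; i <= m; i++)
        d[i] = f[i];

    double p = d[0];
    double prod = 1.0;            /* (t - x0)...(t - x_{j-1}) */
    double fact_h = 1.0;          /* j! * h^j */
    for (int j = 1; j <= m; j++) {
        for (int i = m; i >= j; i--)
            d[i] -= d[i - 1];     /* d[j] now holds delta^j f(x0) */
        prod *= t - x[j - 1];
        fact_h *= j * h;
        p += d[j] / fact_h * prod;
    }
    return p;
}
```

Since the polynomial reproduces f exactly when f is itself a polynomial of degree at most m, interpolating f(x) = x² at the knots 0, 1, 2 gives p2(1.5) = 2.25.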
Exercises
1. Approximate numerically the function f(x) = √x, knowing its values at
the knots: x0 = 1, x1 = 1.5, x2 = 2, x3 = 2.5, x4 = 3, as follows:
Input data:
- xi - knots, i = 0, . . . , n
Output data:
- the approximation of the derivative of order k at the knot xr
Implementation in Borland C:

#include<stdio.h>

float **Matrice(int imin, int imax, int jmin, int jmax);
float *Vector(int n);
void CitireVect(float *a, int n);
void ScriereVect(float *a, int n);
float dif_divizate(float *x, float *f, int n, int r, int k);

int main(void)
{
    float *f, *x;
    int n, r, k;
    printf("n= "); scanf("%d", &n);
    x = Vector(n);
    f = Vector(n);
    printf("Enter the knots: \n");
    CitireVect(x, n);
    printf("Enter the values of the function at the knots: \n");
    CitireVect(f, n);
    printf("Enter the index of the knot at which the derivative is taken: ");
    scanf("%d", &r);
    printf("Enter the order of the derivative: "); scanf("%d", &k);
    printf("The value of the derivative of order %d at x[%d] is: %g",
           k, r, dif_divizate(x, f, n, r, k));
    return 0;
}

float dif_divizate(float *x, float *f, int n, int r, int k)
{
    float suma = 0.0, produs;
    int i, j;
    if (r > n - k) {
        printf("The derivative of order %d cannot be computed at the point x[%d]!", k, r);
    }
    else {
        for (i = 0; i <= k; i++) {
            produs = f[r + i];
            for (j = 0; j <= k; j++)
                if (j != i) produs = produs / (x[r + i] - x[r + j]);
            suma = suma + produs;
        }
    }
    return suma;
}
The Lagrange Interpolating Polynomial
Pm(xi) = f(xi), i = 0, 1, . . . , m
a0 + a1 x0 + a2 x0^2 + . . . + am x0^m = f(x0)
a0 + a1 x1 + a2 x1^2 + . . . + am x1^m = f(x1)
.................................
a0 + a1 xm + a2 xm^2 + . . . + am xm^m = f(xm),
and it follows that the system has a unique solution. This means that
there exists a unique polynomial of degree m which satisfies the conditions
P(xi) = f(xi). The fact that there exists no polynomial of degree less than
m which verifies P(xi) = f(xi) can be proved by supposing the contrary
(case in which a system of m + 1 equations with at most m unknowns is
obtained, which does not have a solution for every f).
i.e.,
li(x) = V(x0, . . . , xi−1, x, xi+1, . . . , xm) / V(x0, . . . , xi−1, xi, xi+1, . . . , xm) =
= (x − x0) . . . (x − xi−1)(x − xi+1) . . . (x − xm) / [(xi − x0) . . . (xi − xi−1)(xi − xi+1) . . . (xi − xm)].
For proving that the operator is idempotent we consider the functions lk(x) =
x^k, k = 0, 1, . . . , m. We observe that Lm(lk)(x) = lk(x) = x^k, k = 0, 1, . . . , m,
and hence we obtain:
Lm(f)(x) = a0 + a1 x + . . . + am x^m ⇒
⇒ Lm(Lm f)(x) = a0 + a1 x + . . . + am x^m ⇒ (Lm² f)(x) = (Lm f)(x).
‖Lm‖ = max_{a≤x≤b} Σ_{i=0}^{m} |li(x)|.
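Evaluating (Lm f)(x) at a point amounts to computing each basis polynomial li(x) as a product and summing; a minimal sketch in C (the quadratic test data below are an illustrative assumption).

```c
/* Evaluate the Lagrange interpolating polynomial at the point t,
   given m+1 distinct knots x[0..m] and values f[0..m]. */
static double lagrange(const double *x, const double *f, int m, double t)
{
    double p = 0.0;
    for (int i = 0; i <= m; i++) {
        double li = 1.0;          /* basis polynomial l_i(t) */
        for (int j = 0; j <= m; j++)
            if (j != i)
                li *= (t - x[j]) / (x[i] - x[j]);
        p += li * f[i];
    }
    return p;
}
```

Since Lm reproduces every polynomial of degree at most m, interpolating f(x) = x² at the knots 0, 1, 3 returns exactly t² at any point t.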
Definition 3.2.4. The difference f(x) − (Lm f)(x) = (Rm f)(x) is called the
truncation error of order m, and the approximation formula f(x) =
(Lm f)(x) + (Rm f)(x) is called the Lagrange approximation formula.
Theorem 3.2.4. The truncation error (Rm f)(x) from the Lagrange approximation
formula is a linear and idempotent operator.
Proof: Linearity of (Rm f)(x) results from the linearity of (Lm f)(x). For
proving that the error operator is idempotent, we take into account that, if f is a
polynomial of degree m then Lm f = f. It results that, if f is a polynomial of
degree m, then Rm f = f − Lm f = 0. Thus, Rm(Rm f) = Rm(f − Lm f) = Rm f
for any f, i.e., Rm² f = Rm f.
where we used that (Rm f)^(m+1) = f^(m+1) − (Lm f)^(m+1) = f^(m+1). Computing
this determinant we obtain (Rm f)(x) from the statement.
Exercises
1. Using the Lagrange interpolating polynomial, approximate numerically
cos(0.12), in the case in which the following values are known:
xi       0.1    0.2      0.3      0.4
cos(xi)  0.995  0.98007  0.95534  0.92106
2. Using the Lagrange interpolating polynomial, compute the Australian
population for the years 1960, 1970 and 1975, if the following data are
given:
- xi - knots, i = 0, . . . , n
Output data:
- L(x)
at the ends be free to settle into the position that minimizes the oscillatory
behavior of the curve.
Iacob Bernoulli (1705) put forward the idea that "the elastica" can be obtained by
minimizing the integral of the squared curvature over a class of admissible
functions. Thus, the Euler-Bernoulli theory concerning the deformation
of thin beams was formulated (1742).
The expression which is minimized in this theory is:
Ep = ∫_0^l µ(s) · K²(s) ds
where
K(x) = f″(x) / {1 + [f′(x)]²}^(3/2) and ds = {1 + [f′(x)]²}^(1/2) dx.
Supposing the rod homogeneous, µ(s) = µ (i.e., constant density), and denoting
by (a, f(a)), (b, f(b)) the ends of the curve, the expression which should be
minimized becomes:
Ep = µ · ∫_a^b [f″(x)]² / {1 + [f′(x)]²}^(5/2) dx,
where c1 = µ / (1 + c²)^(5/2).
If Pi = (xi, yi), i = 1, 2, . . . , n are n knots which define the division
∆ : a ≤ x1 < x2 < . . . < xn ≤ b of the interval [a, b], then the considered
problem is reduced to the minimization of the integral:
∫_a^b [f″(x)]² dx
subject to the interpolating conditions:
f(xi) = yi, i = 1, 2, . . . , n.
Piecewise Polynomial Approximations: Spline Functions. Introduction
From this set we will choose those functions which pass through the points Pi:
U(y) = {f ∈ H^{2,2}_[a,b] | f(xi) = yi, i = 1, 2, . . . , n}
T f = f″,
hence
X = H^{m,2}_[a,b] = {f ∈ C^{m−1}_[a,b] | f^(m−1) absolutely continuous on [a, b] and f^(m) ∈ L²_[a,b]}.
The space X = H^{m,2}_[a,b] is a subspace of the vector space H^m_[a,b] and can
be organized as a Hilbert space with the scalar product defined by:
⟨f, g⟩_{m,2} = ∫_a^b f^(m)(x) · g^(m)(x) dx + Σ_{k=0}^{m−1} f^(k)(a) · g^(k)(a),
is non-empty, then the polynomial spline interpolation problem has at least
one solution.
Proof: We consider the set U^(m) = {v | v = u^(m), u ∈ U} ⊂ L²_[a,b] and
the problem which consists of the determination of v∗ ∈ U^(m) having the
property
‖v∗‖₂ = inf_{v ∈ U^(m)} ‖v‖₂.
If the polynomial spline interpolation problem has a solution, then the considered
problem has a solution; conversely, if the considered problem has a solution,
then the polynomial spline interpolation problem has at least one solution.
It follows that it suffices to show that the problem
‖v∗‖₂ = inf_{v ∈ U^(m)} ‖v‖₂
has a solution.
We remark that the set U^(m) is non-empty (because U is non-empty). From
the linearity of the functionals ϕi, for any u1, u2 ∈ U and α ∈ [0, 1] we have:
Because gk ∈ U^(m), there exists fk ∈ U such that gk = fk^(m). It follows:
fk(x) = pk(x) + ∫_a^x (x − t)^(m−1)/(m − 1)! · gk(t) dt
where the pk are Taylor polynomials of degree at most m − 1.
We suppose that from the n linear functionals ϕ1, ϕ2, . . . , ϕn, the first m
(m ≤ n) are linearly independent on the set of polynomials of degree at most
m − 1. In particular, we admit that ϕ1, ϕ2, . . . , ϕm are such that the matrix
A = (ϕi(vj)),
with vj(x) = (x − a)^(j−1) / (j − 1)!, is nonsingular.
In this case we have:
A · [pk(a), . . . , pk^(m−1)(a)]^T = [ϕ1(pk), . . . , ϕm(pk)]^T.
ϕi(fk) = yi being bounded, and ϕi( ∫_a^x (x − t)^(m−1)/(m − 1)! · gk(t) dt ) convergent to
ϕi( ∫_a^x (x − t)^(m−1)/(m − 1)! · g(t) dt ), i = 1, 2, . . . , n, it follows that the
sequences ϕi(pk), i = 1, . . . , m are bounded. Hence, every sequence pk^(j)(a),
j = 0, 1, . . . , m − 1 is bounded and so contains a convergent subsequence
(p_{kl}^(j)(a)).
Let p^(j)(a) = lim_{kl→∞} p_{kl}^(j)(a), j = 0, 1, . . . , m − 1. Using these values we define
a polynomial p of degree at most m − 1:
p(x) = p(a) + p′(a)/1! · (x − a) + . . . + p^(m−1)(a)/(m − 1)! · (x − a)^(m−1).
The sequence fk given by
fk(x) = pk(x) + ∫_a^x (x − t)^(m−1)/(m − 1)! · gk(t) dt
converges to:
f(x) = p(x) + ∫_a^x (x − t)^(m−1)/(m − 1)! · g(t) dt.
Because the set U is closed we obtain that f ∈ U.
The case d < m can be reduced to the previous case.
Because U^(m) is non-empty, convex and closed in L²_[a,b], based on a theorem
from functional analysis (the theorem of best approximation), it results that
the problem ‖v∗‖₂ = inf_{v ∈ U^(m)} ‖v‖₂ has at least one solution.
2. For proving the second affirmation, first we will show that if the polynomial
spline interpolation problem has a unique solution then U0 = U(0) does
not contain nonzero polynomials. For this we consider the solution u∗ of the
polynomial spline interpolation problem and we suppose the contrary, i.e.,
that the set U0 contains a polynomial p of degree less than or equal to m − 1.
Considering u∗∗ = p + u∗, because ϕi(p) = 0, i = 1, . . . , n, it results that
ϕi(u∗∗) = ϕi(u∗), i = 1, . . . , n. From here we obtain u∗∗ ∈ U(y), which together
with the equality u∗∗^(m) = u∗^(m) proves that u∗ and u∗∗ are two solutions of the
best approximation problem, which is impossible.
Similarly it can be shown that if the set U0 = U(0) does not contain nonzero
polynomials then the polynomial spline interpolation problem has a unique
solution.
and u∗^(m) ∈ U^(m) is a solution of the best approximation problem if and only
if it is orthogonal to U0^(m).
= α ∫_a^b f1^(m) · g^(m) + β ∫_a^b f2^(m) · g^(m) = 0,
For proving that S is closed we consider a sequence of functions (fk) from
S convergent to f ∈ H^{m,2}_[a,b]. We should prove that f ∈ S. From fk ∈ S we
have ∫_a^b fk^(m) · g^(m) = 0, and from the convergence condition fk → f in H^{m,2}_[a,b]
it results that ⟨fk − f, g⟩_{m,2} → 0 as k → ∞, hence ⟨(fk − f)^(m), g^(m)⟩_{L²} → 0 as k → ∞.
In this way we have:
∫_a^b f^(m) · g^(m) = lim_{k→∞} [ ∫_a^b (f^(m) − fk^(m)) · g^(m) + ∫_a^b fk^(m) · g^(m) ] = 0.
Theorem 3.4.5. S is the set of all solutions of the polynomial spline interpolation
problems for y ∈ IR^n, and S contains the set of polynomials of degree at most
m − 1.
Proof: Let u∗ be a solution of the polynomial spline interpolation problem. It
results that ∫_a^b u∗^(m) · g^(m) = 0 for any g ∈ U0, and hence u∗ ∈ S. If f ∈ S then
Theorem 3.4.6. Let {v1, . . . , vd} be a basis of the space P_{m−1} ∩ U0 and u∗i a
solution of the polynomial spline interpolation problem on the set Ui = {f ∈
H^{m,2}_[a,b] | ϕj(f) = δij, j = 1, 2, . . . , n}. The set {u∗1, u∗2, . . . , u∗n} ∪ {v1, v2, . . . , vd}
is a basis for S.
Proof: Let f ∈ S and h = f − Σ_{i=1}^{n} u∗i · ϕi(f). The function h belongs to
the set S and to U0; moreover, the function h verifies ∫_a^b [h^(m)]² = 0. From here
h^(m) = 0, i.e., h ∈ P_{m−1} and hence h ∈ P_{m−1} ∩ U0. Because the system of
vectors {v1, . . . , vd} is a basis in P_{m−1} ∩ U0 we have:
h = Σ_{j=1}^{d} cj · vj
For showing that the system of functions {u∗1, u∗2, . . . , u∗n, v1, v2, . . . , vd} is
linearly independent, we consider a relation of linear dependence
Σ_{i=1}^{n} ai · u∗i + Σ_{j=1}^{d} bj · vj = 0.
The Spline Polynomial Interpolation
ϕk(u∗i) = δki, k, i = 1, . . . , n,
the functions u∗i are called fundamental interpolating spline functions.
Theorem 3.4.7. Let y = (y1, . . . , yn) ∈ IR^n and u∗1, . . . , u∗n be fundamental
interpolating spline functions. The function u∗y defined by
u∗y = Σ_{i=1}^{n} u∗i · yi
is a spline function which interpolates the function f, i.e., ϕi(Sf) = ϕi(f).
The application S : H^{m,2}_[a,b] → S defined above is a linear operator, and if
the polynomial spline interpolation problem has a unique solution then S is
idempotent.
extreme intervals [a, x1) and (xk, b], and at the points xi the derivative of
order 2m − 1 − µ is continuous if the value of the µth derivative at xi does
not belong to Φ. The solution u is called a spline of degree 2m − 1 or natural
spline of degree 2m − 1.
2. u ∈ S ⇔ ∫_a^b u^(m) · g^(m) = 0, ∀g ∈ U0;
3. u ∈ S ⇔
u^(2m)(x) = 0, x ∈ [x1, xk]\{xi}_{i=1,...,k}
u^(m)(x) = 0, x ∈ [a, x1) ∪ (xk, b]
u^(2m−1−µ)(xi + 0) − u^(2m−1−µ)(xi − 0) = 0, µ ∈ {0, 1, . . . , m − 1}\Ii, i = 1, . . . , k.
becomes
u ∈ C^{2m−2}_[a,b],
and Theorem 3.4.8 permits writing u(x) as follows:
u(x) = Σ_{i=0}^{m−1} ai x^i + Σ_{k=1}^{n} bk (x − xk)^(2m−1),
• The polynomial spline function of first order S(x) (polygonal line) is
the piecewise polynomial function determined by n − 1 polynomials Si(x) of
first degree (segments of straight lines):
S(x) = Si(x) = si,0 + si,1(x − xi)    (3.4.1)
for x ∈ [xi−1, xi], i = 1, . . . , n − 1, with coefficients si,0, si,1 satisfying the
properties:
(i) the spline function passes through every point {(xi, yi)}_{i=0}^{n−1}, i.e.,
S(xi) = yi, i = 0, . . . , n − 1;
(ii) the spline function is continuous on the interval [a, b], i.e., Si(xi) =
Si+1(xi), i = 1, . . . , n − 2.
Imposing on the function S(x) the conditions (i)-(ii), the coefficients si,0
and si,1 are obtained, and the following formula for the polynomial spline
function of first order is found:
S(x) = Si(x) = yi + (yi − yi−1)/(xi − xi−1) · (x − xi),
with x ∈ [xi−1, xi], i = 1, . . . , n − 1.
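The first-order spline above amounts to locating the subinterval containing the evaluation point and applying one linear formula; a minimal sketch in C, assuming the knots are sorted and the point lies in [x0, x_{n−1}] (the test data below are an illustrative assumption).

```c
/* Evaluate the first-order (polygonal) spline through (x[i], y[i]),
   i = 0..n-1, at a point t in [x[0], x[n-1]]. */
static double spline1(const double *x, const double *y, int n, double t)
{
    int i = 1;
    while (i < n - 1 && t > x[i])   /* locate the subinterval [x_{i-1}, x_i] */
        i++;
    /* formula S(x) = y_i + (y_i - y_{i-1})/(x_i - x_{i-1}) * (x - x_i) */
    return y[i] + (y[i] - y[i - 1]) / (x[i] - x[i - 1]) * (t - x[i]);
}
```

For the knots (0, 0), (1, 2), (2, 0), the polygonal line gives S(0.5) = 1 and S(1.5) = 1.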
• The polynomial spline function of second order (quadratic spline) S(x) is the
piecewise polynomial function determined by n − 1 polynomials Si(x) of second
degree (segments of parabolas):
S(x) = Si(x) = si,0 + si,1(x − xi) + si,2(x − xi)²    (3.4.2)
for x ∈ [xi−1, xi], i = 1, . . . , n − 1, with coefficients si,0, si,1 and si,2 satisfying
the properties:
(i) the spline function passes through every point {(xi, yi)}_{i=0}^{n−1}, i.e.,
S(xi) = yi, i = 0, . . . , n − 1;
(ii) the spline function is continuous on the interval [a, b], i.e., Si(xi) =
Si+1(xi), i = 1, . . . , n − 2;
(iii) the spline function is smooth on the interval [a, b], i.e., Si′(xi) =
Si+1′(xi), i = 1, . . . , n − 2.
• The polynomial spline function of third order (cubic spline) S(x) is the
piecewise polynomial function determined by n − 1 polynomials Si(x) of third
degree (cubic segments):
S(x) = Si(x) = si,0 + si,1(x − xi) + si,2(x − xi)² + si,3(x − xi)³    (3.4.3)
for x ∈ [xi−1, xi], i = 1, . . . , n − 1, with coefficients si,0, si,1, si,2 and si,3
satisfying the properties:
(i) the spline function passes through every point {(xi, yi)}_{i=0}^{n−1}, i.e.,
S(xi) = yi, i = 0, . . . , n − 1;
(ii) the spline function is continuous on the interval [a, b], i.e., Si(xi) =
Si+1(xi), i = 1, . . . , n − 2;
(iii) the spline function is smooth on the interval [a, b], i.e., Si′(xi) =
Si+1′(xi), i = 1, . . . , n − 2;
(iv) the second derivative of the spline function is continuous on the
interval [a, b], i.e., Si″(xi) = Si+1″(xi), i = 1, . . . , n − 2.
Note that each cubic polynomial Si has si,0, si,1, si,2 and si,3 as unknowns;
therefore there are 4n − 4 unknowns corresponding to n knots
and hence n − 1 cubic polynomials, and 4n − 6 equations given by
(i)-(iv) (in the case in which we have n + 1 knots and hence n cubic
polynomials, the number of unknowns is 4n and the number of equations is
4n − 2). We still need two more equations, which can be obtained by
imposing boundary conditions at the endpoints x0 and xn−1. The most
common boundary conditions are
S1″(x0) = S″n−1(xn−1) = 0    (3.4.4)
or
S1′(x0) = y0′ and S′n−1(xn−1) = y′n−1.    (3.4.5)
The boundary conditions given by Eq. (3.4.4) are called free or natural
boundary conditions and the corresponding cubic spline is called a
natural cubic spline. The boundary conditions given by Eq. (3.4.5) are
called clamped boundary conditions.
For computing the coefficients si,0, si,1, si,2 and si,3 of the cubic spline
polynomial we impose on S(x) the conditions (i)-(iv). Thus,
from Eq. (3.4.3), we have:
and
Si″(x) = 2si,2 + 6si,3(x − xi).    (3.4.7)
From Si, we get:
Si(xi) = si,0 = yi    (3.4.8)
and
Si+1(xi) = si+1,0 + si+1,1 hi + si+1,2 hi² + si+1,3 hi³    (3.4.9)
for i = 0, . . . , n − 2, where
hi = xi − xi+1.    (3.4.10)
This can be rewritten as
for i = 0, . . . , n − 2.
Since Si″(xi) = 2si,2 = Si+1″(xi) and Eq. (3.4.7), we obtain:
si+1,3 = (si,2 − si+1,2) / (3hi).    (3.4.14)
Substituting si+1,3 from Eq. (3.4.14) in Eq. (3.4.11) and then
solving for si+1,1 gives
si+1,1 = (yi − yi+1)/hi − (hi/3) · (si,2 + 2si+1,2).    (3.4.15)
Substituting si+1,1 from Eq. (3.4.15) and si+1,3 from Eq. (3.4.14) in
Eq. (3.4.12) and then simplifying yields:
hi−1 si−1,2 + 2(hi + hi−1) si,2 + hi si+1,2 = 3(yi+1 − yi)/hi − 3(yi − yi−1)/hi−1,    (3.4.16)
for i = 1, . . . , n − 2.
For natural or free boundary conditions, from Eq. (3.4.7), we have
Exercises
1. Find the natural cubic spline function which interpolates the data:
xi  27.7  28   29   30
yi  4.1   4.3  4.1  3.0
Compute S(28.5).
The algorithm for the implementation of cubic spline interpolation:
Input data:
- xi - knots, i = 0, . . . , n
- yi - values of f at the knots
Output data:
- for i = 0, . . . , n − 1
Pi(x) = ai + bi(x − xi) + ci(x − xi)² + di(x − xi)³
a = Vector(n);
b = Vector(n);
c = Vector(n);
d = Vector(n);
subD = Vector(n);
diag = Vector(n);
supraD = Vector(n);
lib = Vector(n);
for (i = 0; i < n; i++)
{
    h[i] = x[i+1] - x[i];
}
c[0] = 0;
c[n] = 0;
for (i = 2; i < n; i++)
{
    subD[i] = h[i];
    supraD[i] = h[i];
}
for (i = 1; i < n; i++)
{
    diag[i] = 2 * (h[i-1] + h[i]);
    lib[i] = 3.0 * ((y[i+1] - y[i]) / h[i] - (y[i] - y[i-1]) / h[i-1]);
}
Tridiag(subD, diag, supraD, lib, c, n-1);
for (i = 0; i < n; i++)
{
    a[i] = y[i];
    b[i] = (y[i+1] - y[i]) / h[i] - (2.0 * c[i] + c[i+1]) * h[i] / 3.0;
    d[i] = (c[i+1] - c[i]) / (3.0 * h[i]);
}
for (i = 0; i < n; i++)
{
    printf("P%d (x)=%g +%g(x-%g)+ %g (x-%g)^2+ %g (x-%g)^3 \n",
           i, a[i], b[i], x[i], c[i], x[i], d[i], x[i]);
}
}
Proof:
[Bm(αf + βg)](x) = Σ_{k=0}^{m} C_m^k · x^k · (1 − x)^{m−k} · (αf + βg)(k/m) =
= Σ_{k=0}^{m} C_m^k · x^k · (1 − x)^{m−k} · [α f(k/m) + β g(k/m)] =
= Σ_{k=0}^{m} C_m^k · x^k · (1 − x)^{m−k} · α f(k/m) +
+ Σ_{k=0}^{m} C_m^k · x^k · (1 − x)^{m−k} · β g(k/m) =
= α Σ_{k=0}^{m} C_m^k · x^k · (1 − x)^{m−k} · f(k/m) +
+ β Σ_{k=0}^{m} C_m^k · x^k · (1 − x)^{m−k} · g(k/m) =
Theorem 3.5.2. Let f : [0, 1] → IR¹. If f(x) ≥ 0 for any x ∈ [0, 1], then:
(Bm f)(x) ≥ 0, ∀x ∈ [0, 1].
Proof: We consider the function g(x) = f(x) − m ≥ 0, ∀x ∈ [0, 1], where
m = min f. Applying Theorem 3.5.2 we obtain (Bm g)(x) ≥ 0. On the basis of
Theorem 3.5.1 we have (Bm g)(x) = (Bm f)(x) − m and hence (Bm f)(x) − m ≥ 0,
from where we obtain (Bm f)(x) ≥ m. Analogously the inequality (Bm f)(x) ≤ M
is obtained.
≤ ε/2 + 2M Σ_{k∈J} C_m^k x^k (1 − x)^{m−k}.
≤ (1/δ²) Σ_{k=0}^{m} (k/m − x)² C_m^k x^k (1 − x)^{m−k} ≤ (1/δ²) · x(1 − x)/m ≤ 1/(4mδ²).
It follows:
|f(x) − (Bm f)(x)| ≤ ε/2 + M/(2mδ²),
from where we have:
|f(x) − (Bm f)(x)| ≤ ε if m > M/(εδ²), x ∈ [0, 1].
Proof: We have:
(Rm f)(x) = ∫_0^1 ϕ(x, t) f″(t) dt
where
ϕ(x, t) = (x − t)₊ − Σ_{k=0}^{m} C_m^k x^k (1 − x)^{m−k} · (k/m − t)₊.
From here we obtain:
(Rm f)(x) = − x(1 − x)/(2m) · f″(ξ), 0 ≤ ξ ≤ 1.
The last inequality from the statement results from the inequality x(1 − x) ≤ 1/4,
∀x ∈ [0, 1].
= 1/(b − a)^m · Σ_{k=0}^{m} C_m^k · (y − a)^k · (b − y)^{m−k} · f(a + (b − a) · k/m).
Proof: The Bernstein polynomial of degree m is written for the function
g(x) = f(a + (b − a)x), x ∈ [0, 1].
Exercises
1. Using the Bernstein polynomial formula, determine the Bezier curve
associated to the points A(1,1), B(2,-1), C(3,2) and D(4,-1).
Remark: The coordinate functions x(t) and y(t) of the Bezier curve can
be written as linear combinations of the Bernstein polynomials:
x(t) = Σ_{i=0}^{n} C_n^i · t^i (1 − t)^{n−i} xi,
y(t) = Σ_{i=0}^{n} C_n^i · t^i (1 − t)^{n−i} yi.
The algorithm for the determination of the Bezier curve using the Bernstein
polynomial:
x(t) = Σ_{i=0}^{n} C_n^i t^i (1 − t)^{n−i} xi
y(t) = Σ_{i=0}^{n} C_n^i t^i (1 − t)^{n−i} yi
Input data:
- n
- xi - knots, i = 0, . . . , n
- yi - values of f in the knots
Output data:
- the expressions of x(t) and y(t)
for (k = 1; k < n-1; k++)
    printf("%g t^%d (1-t)^%d+", comb(n-1, k) * x[k], k, n-1-k);
printf("%g t^%d\n", x[n-1], n-1);
printf("y(t)= %g (1-t)^%d+", y[0], n-1);
for (k = 1; k < n-1; k++)
    printf("%g t^%d (1-t)^%d+", comb(n-1, k) * y[k], k, n-1-k);
printf("%g t^%d\n", y[n-1], n-1);
}

int fact(int n)
{
    int i, prod = 1;
    for (i = 2; i <= n; i++)
        prod *= i;
    return prod;
}

float comb(int n, int k)
{
    return (fact(n) / (fact(k) * fact(n-k)));
}
Chapter 4
Numerical Differentiation
In what follows, we present two methods for approximating the derivatives
of the function f at an arbitrary point. In the previous chapter, the Newton
polynomial with forward finite differences:
pm(x) = f(x0) + △f(x0)/h · (x − x0) + △²f(x0)/(2!h²) · (x − x0)(x − x1) + . . .
+ △^m f(x0)/(m!h^m) · (x − x0) . . . (x − xm−1),
and the Newton polynomial with backward finite differences:
pm(x) = f(xm) + ▽f(xm)/h · (x − xm) + ▽²f(xm)/(2!h²) · (x − xm)(x − xm−1) + . . .
+ ▽^m f(xm)/(m!h^m) · (x − xm) . . . (x − x1)
were given.
Hence, for a function f with derivatives up to a sufficiently high order, the
following approximation was obtained:
f(x) = pm(x) + f^(m+1)(ξ)/(m + 1)! · (x − x0)(x − x1) . . . (x − xm−1)(x − xm),
which (for m = 1, with backward differences) is equivalent to
f′(x) = (f(x) − f(x − h))/h + f″(ξ)/2 · h, ξ ∈ (x − h, x).
In similar ways the higher order derivatives can be obtained.
For example, we compute the second derivative by forward finite differences.
Thus, for m = 2 the approximation of the function is:
f(x) = f(x0) + △f(x0)/(1!h) · (x − x0) + △²f(x0)/(2!h²) · (x − x0)(x − x1) +
+ f^(3)(ξ)/3! · (x − x0)(x − x1)(x − x2).
Computing the first two derivatives with respect to x we obtain:
f′(x) = △f(x0)/(1!h) + △²f(x0)/(2!h²) · [(x − x0) + (x − x1)] +
+ f^(3)(ξ)/3! · [(x − x1)(x − x2) + (x − x0)(x − x2) + (x − x0)(x − x1)] +
+ f^(4)(η(x))/4! · (x − x0)(x − x1)(x − x2),
f″(x) = △²f(x0)/(2!h²) + f^(3)(ξ)/3 · [(x − x0) + (x − x1) + (x − x2)] +
f′(x) = d/dx ( f(x0) + (D¹f)(x0)(x − x0) + (D²f)(x0)(x − x0)(x − x1) + . . .
+ (D^m f)(x0)(x − x0)(x − x1) . . . (x − xm−1) + Rm(x) ),
or, using the Lagrange interpolating polynomial:
f′(x) = d/dx ( Σ_{i=0}^{m} (x − x0) . . . (x − xi−1)(x − xi+1) . . . (x − xm) / [(xi − x0) . . . (xi − xi−1)(xi − xi+1) . . . (xi − xm)] · f(xi) + (Rm f)(x) ).
The derivative of higher order f^(n)(x) of the function f is obtained by
differentiating the interpolating polynomial n times.
Exercises
1. Approximate the first order derivative of the function f(x) = √x, using
its values at the knots: x0 = 1, x1 = 1.5, x2 = 2, x3 = 2.5, x4 = 3.
2. Approximate the derivatives f′(0.1), f″(0.2) if the values of the function
f at the following knots are given:
∫_a^b f(x) dx

∫_a^b f(x) dx − Σ_{j=0}^{N} f(xj) · cj

Numerical Integration
where
Lj(x) = ∏_{i=0, i≠j}^{N} (x − xi) / (xj − xi),
The Newton-Cotes quadrature formulas approximate the value of the
integral ∫_a^b f(x)dx with the value of the integral ∫_a^b PN(x)dx of the
interpolating polynomial:
∫_a^b f(x)dx ≈ ∫_a^b PN(x)dx = Σ_{j=0}^{N} f(xj) · cj, where cj = ∫_a^b ∏_{i=0, i≠j}^{N} (x − xi)/(xj − xi) dx.
The Trapezoidal rule and the Simpson formula represent particular cases
of the Newton-Cotes quadrature formulas. Thus, for N = 1 we have:
∫_a^b f(x)dx ≈ (b − a)/2 · [f(b) + f(a)] = h/2 · [f(b) + f(a)].
For N = 4:
∫_a^b f(x)dx ≈ 2h/45 · [7f(a) + 32f(a + h) + 12f(a + 2h) + 32f(a + 3h) + 7f(b)].
For N = 5:
∫_a^b f(x)dx ≈ 5h/288 · [19f(a) + 75f(a + h) + 50f(a + 2h) + 50f(a + 3h) + 75f(a + 4h) + 19f(b)].
For N = 6:
∫_a^b f(x)dx ≈ h/140 · [41f(a) + 216f(a + h) + 27f(a + 2h) + 272f(a + 3h) +
+ 27f(a + 4h) + 216f(a + 5h) + 41f(b)].
For N = 7:
∫_a^b f(x)dx ≈ 7h/17280 · [751f(a) + 3577f(a + h) + 1323f(a + 2h) + 2989f(a + 3h) +
+ 2989f(a + 4h) + 1323f(a + 5h) + 3577f(a + 6h) + 751f(b)].
In these cases, the truncation errors are established by calculus on the basis
of the Mean Value Theorem for Definite Integrals:
∫_a^b f(x)w(x)dx = f(c) ∫_a^b w(x)dx.
f(x) = (x − b)/(a − b) · f(a) + (x − a)/(b − a) · f(b) + f″(ξ)/2! · (x − a)(x − b),
∫_a^b f(x)dx = (b − a)/2 · [f(a) + f(b)] + 1/2 ∫_a^b f″(ξ)(x − a)(x − b)dx.
On the basis of the Mean Value Theorem, the above error term becomes:
1/2 ∫_a^b f″(ξ)(x − a)(x − b)dx = 1/2 f″(c) ∫_a^b (x − a)(x − b)dx,
1/2 ∫_a^b f″(ξ)(x − a)(x − b)dx = 1/2 f″(c) · (−h³/6),
E1 = − h³/12 · f^(2)(c).
The Newton-Cotes Formula, Trapezoidal Rule, Simpson Formula
N = 3:  E3 = − 3h⁵/80 · f^(4)(c);
N = 4:  E4 = − 8h⁷/975 · f^(6)(c);
N = 5:  E5 = − 275h⁷/12096 · f^(6)(c);
N = 6:  E6 = − 9h⁹/1400 · f^(8)(c);
N = 7:  E7 = − 8183h⁹/518400 · f^(8)(c).
Exercises

1. Use the trapezoidal rule, the composite trapezoidal rule and the Simpson formula, respectively, to approximate the definite integral:

$$\int_0^1 \frac{1}{1+x^2}\,dx$$

The algorithms for the trapezoidal rule, composite trapezoidal rule and for the Simpson formula are:

// trapezoidal rule
$$h = b - a$$
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\cdot[f(a)+f(b)]$$

// composite trapezoidal rule
$$h = \frac{b-a}{n}$$
$$\int_a^b f(x)\,dx \approx \frac{h}{2}\cdot\Big[f(a) + 2\sum_{i=1}^{n-1} f(x_i) + f(b)\Big]$$

where x_i = a + i·h

Output data:
- value of the integral computed using those 3 methods
Implementation in Borland C:

#include<stdio.h>
#include<ctype.h>
#include<math.h>

float f(float x)
{
return 1/(1+x*x); /* integrand of the exercise */
}

float trapezgen(float a, float b, int n)
{
int i;
float h,s=0.0;
h=(b-a)/n;
for(i=1;i<n;i++)
  s+=f(a+i*h);
return (h*0.5*(f(a)+f(b))+h*s);
}

float simpson(float a, float b)
{
float h;
h=(b-a)/2;
return ((h/3)*(f(a)+4*f(a+h)+f(b)));
}

float trapez(float a, float b)
{
float h;
h=b-a;
return ((h/2)*(f(a)+f(b)));
}
void main(void){
int a,b,n;
char op;
printf("\n\nEnter a: "); scanf("%d",&a);
printf("\nEnter b: "); scanf("%d",&b);
printf("\nEnter n: "); scanf("%d",&n);
printf("\nMenu\n");
printf("Choose one of the options: \n\n simple (T)rapezoidal rule\n (G)eneral (composite) trapezoidal rule \n (S)impson rule \n e(X)it \n");
fflush(stdin);
scanf(" %c",&op);
op=toupper(op);
while (op !='X')
{
switch (op)
{
case 'T' : printf("Approximation of the integral by the trapezoidal rule: %g ", trapez(a,b));
break;
case 'G' : printf("Approximation of the integral by the composite trapezoidal rule: %g ", trapezgen(a,b,n));
break;
case 'S' : printf("Approximation of the integral by the Simpson rule: %g ", simpson(a,b));
break;
}
printf("\nMenu\n");
printf("\n Choose one of the options: \n\n simple (T)rapezoidal rule\n (G)eneral (composite) trapezoidal rule \n (S)impson rule \n e(X)it \n");
fflush(stdin);
scanf(" %c",&op);
op=toupper(op);
}
}
Using the same technique as the one presented for obtaining formula (5.2.3), we can determine formulas for a bigger number of terms x_i and a_i. The inconvenience consists of the difficulty of obtaining solutions of these nonlinear systems, so we present an alternative derivation of these formulas.
Since the x_i are unknowns, we use the Lagrange interpolating polynomial, which allows arbitrarily-spaced base points:

$$f(x) = \sum_{j=0}^{N} f(x_j)\cdot L_j(x) + \frac{f^{(N+1)}(\xi(x))}{(N+1)!}\prod_{j=0}^{N}(x-x_j) \qquad (5.2.4)$$

where L_j(x) is

$$L_j(x) = \prod_{\substack{i=0\\ i\neq j}}^{N}\frac{x-x_i}{x_j-x_i} \quad\text{and}\quad -1 < \xi(x) < 1.$$
$$q_N(x) = \frac{f^{(N+1)}(\xi(x))}{(N+1)!}. \qquad (5.2.5)$$

We want to select the x_j in such a way that the error term in Eq. (5.2.6) vanishes when f(x) is a polynomial of degree 2N+1 or less. It follows that we want:

$$\int_{-1}^{1} q_N(x)\cdot\prod_{j=0}^{N}(x-x_j)\,dx = 0. \qquad (5.2.7)$$

Because $\prod_{j=0}^{N}(x-x_j)$ is a polynomial of degree N+1 and q_N(x) is a polynomial of degree N or less, the equality (5.2.7) is verified if the polynomial $\prod_{j=0}^{N}(x-x_j)$ of degree N+1 is orthogonal to all polynomials of degree N or less on the interval [−1, 1].
The Legendre polynomials, defined by

$$P_0(x) = 1,\qquad P_1(x) = x,$$
$$P_i(x) = \frac{1}{i}\cdot[(2i-1)\cdot x\cdot P_{i-1}(x) - (i-1)\cdot P_{i-2}(x)],\quad i = 2, 3, \ldots$$

are orthogonal polynomials over [−1, 1] with respect to the weight function w(x) = 1:

$$\int_{-1}^{1} P_i(x)\cdot P_j(x)\,dx = 0,\quad i \neq j.$$
Gaussian Integration Formulas

The Legendre polynomials are also linearly independent and, therefore, q_N(x) can be written as a linear combination of Legendre polynomials P_i(x), i = 0, 1, 2, …, N. If in $\prod_{j=0}^{N}(x-x_j)$ we choose x_j, j = 0, 1, …, N as the zeros of the Legendre polynomial P_{N+1}(x), then the error term vanishes, and thus

$$\int_{-1}^{1} f(x)\,dx \approx \sum_{j=0}^{N} a_j f(x_j) \qquad (5.2.8)$$

where $a_j = \int_{-1}^{1} L_j(x)\,dx$ and x_j, j = 0, 1, …, N are the zeros of the Legendre polynomial P_{N+1}(x).
Exercises

1. Using the Gaussian formulas for N = 2 and N = 3, approximate the definite integral:

$$\int_0^1 e^{2x}\,dx.$$
Chapter 6

Differential Equations, Initial-Value Problems

Differential equations are divided into two classes, ordinary and partial, according to the number of independent variables present in the differential equation: one for ordinary and more than one for partial. The order of the differential equation is the order of its highest derivative. The general solution of an N-th-order ordinary differential equation contains N independent arbitrary constants. To determine these N arbitrary constants, we need N conditions. If these conditions are prescribed at one point, then they are called initial conditions. A differential equation with initial conditions is called an initial-value problem (IVP). If these N conditions are prescribed at different points, then they are called boundary conditions. A differential equation with boundary conditions is called a boundary-value problem (BVP).
where f : (α, β) × (γ, δ) → ℝ is a function of C¹-class and x0 ∈ (α, β), y0 ∈ (γ, δ).
We consider the points x_{i+1} = x_i + h = x_0 + (i+1)h for i = 0, 1, …, N−1, h > 0, and we admit that x_i ∈ (α, β) for i = 0, …, N−1; a = x_0 and b = x_N = x_0 + Nh.
If the maximal domain of the solution of the (IVP) (6.1.1) contains the points x_i, which are referred to as mesh points, i = 0, …, N−1, then at these points the solution y = y(x) of the (IVP) verifies:

$$y'(x_i) \approx \frac{y(x_i + h) - y(x_i)}{h}, \quad i = 0, \ldots, N-1.$$

In this way, the following equation with differences results:

1. convergence;
2. consistency;
3. stability.

Finite Difference Method for a Numerical Solution of Initial-Value Problems (IVP)

If y'(x_i) is replaced by the backward difference formula

$$y'(x_i) \approx \frac{y(x_i) - y(x_i - h)}{h}, \quad i = 1, \ldots, N,$$

then the following equation with differences is obtained. If y'(x_i) is replaced by the central difference formula

$$y'(x_i) \approx \frac{y(x_i + h) - y(x_i - h)}{2h}, \quad i = 1, \ldots, N,$$

then the following equation with differences is obtained, called the midpoint formula. For solving (6.1.5), the value y(x_i) should be found using another method.
Exercises

1. Solve the following (IVP):

$$y'(x) = x - 5y(x), \quad y(0) = 1, \quad x \in [0, 1]$$

- y_0 = y(x_0)

Output data:
- couples (x_i, y_i), i = 0, …, n

for(i=1;i<=n;i++)
{
y[i]=y[i-1]+h*f(x[i-1],y[i-1]);
x[i]=x[i-1]+h;
}
for(i=0;i<=n;i++)
{
printf("x[%d]=%f ", i,x[i]);
printf("y[%d]=%f\n", i,y[i]);
}
}

float f(float x, float y)
{
return (x-5*y);
}

float *Vector(int imin, int imax)
{
float *p;
p=(float *)malloc((size_t)((imax-imin+1) * sizeof(float)));
return (p-imin);
}
$$y(x) = y(x_0) + \frac{y'(x_0)}{1!}(x-x_0) + \frac{y''(x_0)}{2!}(x-x_0)^2 + \ldots + \frac{y^{(n)}(x_0)}{n!}(x-x_0)^n + \frac{y^{(n+1)}(\xi)}{(n+1)!}(x-x_0)^{n+1}, \qquad (6.2.2)$$

$$y'(x) = f(x, y)$$

$$y''(x) = \frac{d}{dx}[f(x, y(x))] = f'(x, y) = \frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}\cdot y'(x) = f_x + f_y\cdot f$$

$$y^{(3)}(x) = f''(x, y) = f_{xx}(x, y(x)) + 2f(x, y(x))\cdot f_{xy}(x, y(x)) + f^2(x, y(x))\cdot f_{yy}(x, y(x)) + f_y\cdot(f_x + f\cdot f_y)$$

$$\cdots$$

Since f^{(k)}(x_0, y_0) is complicated (otherwise, we would solve the equation analytically), evaluation of higher derivatives is time consuming. Therefore, instead of using a high degree Taylor series over a relatively large distance, we divide the interval [x_0, b] into small subintervals and use a lower-degree Taylor series over each subinterval. Let a = x_0 < x_1 < x_2 < … < x_N = b be a partition of the interval [a = x_0, b] into N equally-spaced subintervals of length h = (b−a)/N; x_i = x_0 + ih, i = 0, …, N.
The Taylor Method for a Numerical Solution of IVP

Exercises

1. Solve the following (IVP):

$$y'(x) = y(x) - x, \quad y(0) = 0, \quad x \in [0, 1]$$

using the Taylor Series method of order 1, 2 and 3, respectively, for h = 0.25.
yielding

$$Y_{i+1} = Y_i + \frac{h}{2}[f(x_i, Y_i) + f(x_{i+1}, Y_i + hf(x_i, Y_i))]. \qquad (6.3.5)$$

This formula is called the Runge-Kutta formula of second order.
Comparing it with the Taylor Series method of second order

$$Y_{i+1} = Y_i + h\cdot f(x_i, Y_i) + \frac{h^2}{2!}\,f'(x_i, Y_i) \qquad (6.3.6)$$

the difference is clear. In equation (6.3.5) we do not need f'(x_i, Y_i), and yet it has the same truncation order as equation (6.3.6).
Our objective is to develop a general procedure to derive Eq. (6.3.5) from Eq. (6.3.6) and other similar higher-order methods.
We are looking at the formula:

$$Y_{i+1} = Y_i + w_1 k_1 + w_2 k_2, \qquad (6.3.7)$$

where

$$k_1 = h\cdot f(x_i, Y_i),\qquad k_2 = h\cdot f(x_i + \alpha_2\cdot h,\; Y_i + \beta_{21}\cdot k_1)$$

and the constants w_1, w_2, α_2, and β_21 should be determined such that Eqs. (6.3.7) and (6.3.6) represent the same equation with differences.
For this aim, we write the left-hand side term of the equation (6.3.7) as follows:

$$Y_{i+1} = Y_i + h\cdot f(x_i, Y_i) + \frac{h^2}{2!}\cdot f'(x_i, Y_i) + \frac{h^3}{3!}\cdot f''(x_i, Y_i) + \text{h.o.t. (higher-order terms)}$$

in which, replacing the derivatives of f, we obtain

$$Y_{i+1} = Y_i + h\cdot f + \frac{h^2}{2!}\cdot[f_x + f\cdot f_y] + \frac{h^3}{3!}\cdot[f_{xx} + 2f\cdot f_{xy} + f^2\cdot f_{yy} + f_y(f_x + f\cdot f_y)] + \text{h.o.t.} \qquad (6.3.8)$$

For k_2 from the right-hand side of the equation with differences (6.3.7), we use the Taylor Series formula of second order of a function of two variables, which gives

$$k_2 = h\cdot f(x_i + \alpha_2 h, Y_i + \beta_{21} k_1) = h\cdot\Big\{f(x_i, Y_i) + \Big(\alpha_2 h\frac{\partial}{\partial x} + \beta_{21} k_1\frac{\partial}{\partial y}\Big) f(x_i, Y_i) + \frac{1}{2!}\Big(\alpha_2 h\frac{\partial}{\partial x} + \beta_{21} k_1\frac{\partial}{\partial y}\Big)^2 f(x_i, Y_i) + \text{h.o.t.}\Big\}$$

$$= h f + \alpha_2 h^2 f_x + \beta_{21} h^2 f\cdot f_y + \frac{h^3}{2}\,(\alpha_2^2 f_{xx} + 2\alpha_2\beta_{21} f\cdot f_{xy} + \beta_{21}^2 f^2 f_{yy}) + \text{h.o.t.}$$

Replacing k_1 and k_2 in the right-hand side of the equation (6.3.7), we obtain the expansion (6.3.9); matching its coefficients with those of (6.3.8) leads to the system

$$w_1 + w_2 = 1,\qquad w_2\,\alpha_2 = \frac{1}{2},\qquad w_2\,\beta_{21} = \frac{1}{2}. \qquad (6.3.10)$$

We have four unknowns and only three equations; therefore, we have one degree of freedom in the solution of Eq. (6.3.10).
Solving Eq. (6.3.10) in terms of α_2 we get

$$w_2 = \frac{1}{2\alpha_2},\qquad w_1 = 1 - w_2 = \frac{2\alpha_2 - 1}{2\alpha_2},\qquad \beta_{21} = \frac{1}{2w_2} = \alpha_2. \qquad (6.3.11)$$
We can use this one degree of freedom to make the truncation error, produced by the h³ terms in the expansions of Eqs. (6.3.8) and (6.3.9), as small as possible.

The Runge-Kutta Method of the Second Order

The asymptotic form of the error term is found by taking the difference between the h³ terms in Eqs. (6.3.8) and (6.3.9), and is given by:

$$\text{error} = h^3\cdot\Big\{\Big(\frac{1}{6} - \frac{\alpha_2}{4}\Big)\cdot[f_{xx} + 2f\cdot f_{xy} + f^2\cdot f_{yy}] + \frac{1}{6}\,f_y\cdot[f_x + f\cdot f_y]\Big\}. \qquad (6.3.12)$$

If M and L are such that

$$|f(x, y)| < M \quad\text{and}\quad \Big|\frac{\partial^{i+j} f}{\partial x^i\,\partial y^j}\Big| \le \frac{L^{i+j}}{M^{j-1}} \quad\text{for } i + j \le n,$$

then the error is bounded by:

$$|\text{error}| \le h^3\cdot M L^2\cdot\Big(4\cdot\Big|\frac{1}{6} - \frac{\alpha_2}{4}\Big| + \frac{1}{3}\Big) \qquad (6.3.13)$$

and, for the best choice of α_2,

$$|\text{error}| \le \frac{M\cdot L^2}{3}\cdot h^3. \qquad (6.3.14)$$
For α_2 = 2/3 we have β_21 = 2/3, w_1 = 1/4, w_2 = 3/4, and the equation with differences (6.3.7) becomes:

$$Y_{i+1} = Y_i + \frac{h}{4}\Big[f(x_i, Y_i) + 3 f\Big(x_i + \frac{2}{3}h,\; Y_i + \frac{2}{3}h\cdot f(x_i, Y_i)\Big)\Big] \qquad (6.3.15)$$

which can be written as follows:

$$\hat Y_{i+2/3} = Y_i + \frac{2}{3}h\cdot f(x_i, Y_i)$$
$$Y_{i+1} = Y_i + \frac{h}{4}\Big[f(x_i, Y_i) + 3 f\Big(x_i + \frac{2}{3}h,\; \hat Y_{i+2/3}\Big)\Big] \qquad (6.3.16)$$

This form shows that the method can be viewed as a predictor-corrector method. The first equation predicts Ŷ_{i+2/3}, a preliminary value, while the second equation gives the corrected value Y_{i+1} by using the preliminary value.
Other solutions of the system (6.3.10) which are used in the literature are:

a) α_2 = β_21 = 1, w_1 = w_2 = 1/2.
In this case the equation with differences is:

$$Y_{i+1} = Y_i + \frac{h}{2}[f(x_i, Y_i) + f(x_i + h,\; Y_i + h\cdot f(x_i, Y_i))] \qquad (6.3.17)$$

which can be written as

$$\hat Y_{i+1} = Y_i + h\cdot f(x_i, Y_i)$$
$$Y_{i+1} = Y_i + \frac{h}{2}[f(x_i, Y_i) + f(x_{i+1}, \hat Y_{i+1})]. \qquad (6.3.18)$$

The first equation predicts the preliminary value Ŷ_{i+1}, while the second equation gives the corrected value Y_{i+1} by using the preliminary value Ŷ_{i+1}. This can be viewed as a predictor-corrector method, too.

b) α_2 = β_21 = 1/2, w_1 = 0, w_2 = 1.
In this case the equation with differences is:

$$Y_{i+1} = Y_i + h\cdot f\Big(x_i + \frac{h}{2},\; Y_i + \frac{h}{2}\cdot f(x_i, Y_i)\Big) \qquad (6.3.19)$$
Exercises

1. Solve the (IVP):

$$y'(x) = x^3 + x\cdot y^2(x), \quad y(0) = 1, \quad x \in [0, 1]$$

The algorithm for solving numerically an (IVP) using the Runge-Kutta method of second order:

h = (b − x_0)/n
for i = 1 … n
  k_1 = h·f(x_{i−1}, y_{i−1})
  k_2 = h·f(x_{i−1} + α_2·h, y_{i−1} + β_21·k_1)
  y_i = y_{i−1} + w_1·k_1 + w_2·k_2
  x_i = x_{i−1} + h

- For the Improved Euler method: α_2 = β_21 = 1, w_1 = w_2 = 1/2
- For the Modified Euler method: α_2 = β_21 = 1/2, w_1 = 0, w_2 = 1
- For the Runge-Kutta method with the smallest error: α_2 = β_21 = 2/3, w_1 = 1/4, w_2 = 3/4

Input data:
- x_0, b - the ends of the interval for x
- y_0 = y(x_0)
- n - number of points

Output data:
- couples (x_i, y_i), i = 0, …, n
#include<stdio.h>
#include<stdlib.h>

float f(float x, float y)
{
return (x*x*x + x*y*y); /* right-hand side of the exercise (IVP) */
}

void rk2(float a2, float b21, float w1, float w2, float h, float x0, float y0, int n)
{ /* signature and step loop reconstructed from the call in main */
float h1, h2, x, y;
int i;
for(i=1;i<=n;i++)
{
h1=h*f(x0,y0);
h2=h*f(x0+a2*h,y0+b21*h1);
y=y0+w1*h1+w2*h2;
x=x0+h;
printf("(%g,%g)\n",x,y);
x0=x;
y0=y;
}
}

void main()
{
int n;
float b,h,x0,y0,a2,b21,w1,w2;
char c;
printf("Enter n: "); scanf("%d",&n);
printf("Enter b: "); scanf("%f",&b);
printf("Enter x0: "); scanf("%f",&x0);
printf("Enter y0: "); scanf("%f",&y0);
h=(b-x0)/(float)n;
do
{
printf("\n");
printf(" Runge-Kutta methods \n");
printf("1.Improved Euler method\n");
printf("2.Modified Euler method\n");
printf("3.Runge-Kutta method with the smallest error\n");
printf("4.Exit\n");
printf("Enter the option: "); scanf(" %c",&c);
printf("\n");
switch(c)
{
case '1': { // Improved Euler method
a2=1.0;
b21=1.0;
w1=0.5;
w2=0.5;
rk2(a2,b21,w1,w2,h,x0,y0,n);
break;
}
case '2': { // Modified Euler method
a2=0.5;
b21=0.5;
w1=0;
w2=1;
rk2(a2,b21,w1,w2,h,x0,y0,n);
break;
}
case '3': { // Runge-Kutta with the smallest error
a2=2.0/3.0;
b21=2.0/3.0;
w1=0.25;
w2=0.75;
rk2(a2,b21,w1,w2,h,x0,y0,n);
break;
}
case '4': exit(0);
default: {printf("Wrong option!"); break;}
}
}while (c != '4');
}
$$k_2 = h\cdot\Big\{f + h[\alpha_2 f_x + \beta_{21} f\cdot f_y] + \frac{h^2}{2}\cdot[\alpha_2^2 f_{xx} + 2\alpha_2\beta_{21} f\cdot f_{xy} + \beta_{21}^2 f^2\cdot f_{yy}] + \text{higher-order terms}\Big\}$$

and the expansion of k_3 contains, among others, the terms

$$\frac{h^2}{2}\cdot[2\beta_{32}(\alpha_2 f_x + \beta_{21} f\cdot f_y)\cdot f_y + \alpha_3^2 f_{xx} + \ldots]$$

so that

$$Y_{i+1} = Y_i + h\cdot(w_1 + w_2 + w_3)\cdot f + h^2\cdot\big\{(w_2\alpha_2 + w_3\alpha_3)\cdot f_x + [w_2\beta_{21} + w_3(\beta_{31} + \beta_{32})]\cdot f\cdot f_y\big\} + h^3\cdot\big\{\tfrac{1}{2}(w_2\alpha_2^2 + w_3\alpha_3^2)\cdot f_{xx} + \tfrac{1}{2}[w_2\beta_{21}^2 + w_3(\beta_{31} + \beta_{32})^2]\cdot f^2\cdot f_{yy} + w_3\alpha_2\beta_{32}\cdot f_x f_y + w_3\beta_{21}\beta_{32}\, f\cdot f_y^2\big\} + \text{higher-order terms}$$

which is to be compared with

$$Y_{i+1} = Y_i + h\cdot f + \frac{h^2}{2}\cdot(f_x + f\cdot f_y) + \frac{h^3}{3!}\cdot(f_{xx} + 2f\cdot f_{xy} + f^2\cdot f_{yy} + f_x\cdot f_y + f\cdot f_y^2) + \text{higher-order terms.}$$
The Runge-Kutta Method of the Third Order and Fourth Order

where

$$k_1 = h\cdot f(x_i, Y_i)$$
$$k_2 = h\cdot f\Big(x_i + \frac{h}{2},\; Y_i + \frac{k_1}{2}\Big)$$
$$k_3 = h\cdot f(x_i + h,\; Y_i - k_1 + 2k_2).$$

The higher-order Runge-Kutta formulas can be derived in the same way; however, as the order increases, the complexity increases very rapidly. The best known Runge-Kutta method of fourth stage and fourth order is given by:

$$Y_{i+1} = Y_i + \frac{1}{6}\cdot(k_1 + 2k_2 + 2k_3 + k_4) \qquad (6.4.6)$$

where

$$k_1 = h\cdot f(x_i, Y_i)$$
$$k_2 = h\cdot f\Big(x_i + \frac{h}{2},\; Y_i + \frac{k_1}{2}\Big)$$
$$k_3 = h\cdot f\Big(x_i + \frac{h}{2},\; Y_i + \frac{k_2}{2}\Big)$$
$$k_4 = h\cdot f(x_i + h,\; Y_i + k_3).$$

At first sight, these formulas seem complicated, but they are easy to program and they achieve a very good speed of convergence.
For any knots x_0 < x_1 < … < x_i < x_{i+1} < … < x_N from the maximal interval (α, β) on which the solution is defined, we have:

Since y(x) is not known and f(x, y(x)) cannot be integrated exactly, we approximate f(x, y(x)) by an interpolating polynomial that uses the previously obtained data points (x_i, f(x_i, y(x_i))), (x_{i−1}, f(x_{i−1}, y(x_{i−1}))), …, (x_{i−k}, f(x_{i−k}, y(x_{i−k}))).

Let k = 0. Then the equality:

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x))\,dx$$

becomes:

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} [f(x_i, y(x_i)) + (x - x_i)\cdot f'(\eta_i(x), y(\eta_i(x)))]\,dx = y(x_i) + h\cdot f(x_i, y(x_i)) + \frac{h^2}{2}\cdot f'(\xi_i, y(\xi_i)),$$

where h = x_{i+1} − x_i, x_i < ξ_i < x_{i+1} and f' = ∂f/∂x + ∂f/∂y · y'. This gives the one-step Euler method

$$Y_{i+1} = Y_i + h f(x_i, Y_i).$$

Let k = 1. Although any interpolating polynomial through (x_i, f(x_i, y(x_i))) and (x_{i−1}, f(x_{i−1}, y(x_{i−1}))) can be used, it is very convenient to use the Newton backward difference formula. Let h = x_{i+1} − x_i = x_i − x_{i−1}. Then the equality

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x))\,dx$$

becomes

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}}\Big[f(x_i, y(x_i)) + (x - x_i)\cdot\frac{\nabla f(x_i, y(x_i))}{h} + \frac{(x - x_i)(x - x_{i-1})}{2!}\cdot f''(\eta_i, y(\eta_i))\Big]\,dx$$

$$= y(x_i) + h\cdot f(x_i, y(x_i)) + \frac{h}{2}\cdot\nabla f(x_i, y(x_i)) + \frac{f''(\xi_i, y(\xi_i))}{2!}\int_{x_i}^{x_{i+1}} (x - x_i)(x - x_{i-1})\,dx$$

$$= y(x_i) + \frac{h}{2}\cdot\{3 f(x_i, y(x_i)) - f(x_{i-1}, y(x_{i-1}))\} + \frac{5}{12}\cdot h^3\cdot f''(\xi_i, y(\xi_i))$$

where x_i < η_i and ξ_i < x_{i+1}. This two-step method that uses the information at the points x_i and x_{i−1} is called the second-order Adams-Bashforth method and is given by

$$Y_{i+1} = Y_i + \frac{h}{2}\cdot[3 f(x_i, Y_i) - f(x_{i-1}, Y_{i-1})].$$
Similarly for k = 2, using the three points (x_i, f(x_i, y(x_i))), (x_{i−1}, f(x_{i−1}, y(x_{i−1}))) and (x_{i−2}, f(x_{i−2}, y(x_{i−2}))), we get

$$y(x_{i+1}) = y(x_i) + \frac{h}{12}\cdot\{23 f(x_i, y(x_i)) - 16 f(x_{i-1}, y(x_{i-1})) + 5 f(x_{i-2}, y(x_{i-2}))\} + \frac{3}{8}\cdot h^4\cdot f^{(3)}(\xi_i, y(\xi_i)),$$

and the corresponding equation with differences is

$$Y_{i+1} = Y_i + \frac{h}{12}\cdot\{23 f(x_i, Y_i) - 16 f(x_{i-1}, Y_{i-1}) + 5 f(x_{i-2}, Y_{i-2})\}.$$

For k = 3 we have:

$$y(x_{i+1}) = y(x_i) + \frac{h}{24}\cdot\{55 f(x_i, y(x_i)) - 59 f(x_{i-1}, y(x_{i-1})) + 37 f(x_{i-2}, y(x_{i-2})) - 9 f(x_{i-3}, y(x_{i-3}))\} + \frac{251}{720}\cdot h^5\cdot f^{(4)}(\xi_i, y(\xi_i))$$

and the corresponding equation with differences:

$$Y_{i+1} = Y_i + \frac{h}{24}\cdot\{55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})\}.$$

The Adams-Bashforth and Adams-Moulton Methods

For k = 4 we have:

$$y(x_{i+1}) = y(x_i) + \frac{h}{720}\cdot\{1901 f(x_i, y(x_i)) - 2774 f(x_{i-1}, y(x_{i-1})) + 2616 f(x_{i-2}, y(x_{i-2})) - 1274 f(x_{i-3}, y(x_{i-3})) + 251 f(x_{i-4}, y(x_{i-4}))\} + \frac{95}{288}\cdot h^6\cdot f^{(5)}(\xi_i, y(\xi_i))$$

and the corresponding equation with differences:

$$Y_{i+1} = Y_i + \frac{h}{720}\cdot\{1901 f(x_i, Y_i) - 2774 f(x_{i-1}, Y_{i-1}) + 2616 f(x_{i-2}, Y_{i-2}) - 1274 f(x_{i-3}, Y_{i-3}) + 251 f(x_{i-4}, Y_{i-4})\}.$$
In principle, the preceding procedure can be continued to obtain higher-order Adams-Bashforth formulas, but as k increases the formulas become complex.

Multi-step methods need help getting started. Generally, a k-step method must have starting values Y_0, Y_1, …, Y_{k−1}. These starting values must be computed by other methods. However, keep in mind that the obtained starting values must be as accurate as those produced by the final method. If a starting method is of lower order, then use a smaller step size to generate accurate starting values.

By using (x_i, f(x_i, y(x_i))), (x_{i−1}, f(x_{i−1}, y(x_{i−1}))), …, (x_{i−k}, f(x_{i−k}, y(x_{i−k}))), we derived the Adams-Bashforth methods. We can also use (x_{i+1}, f(x_{i+1}, y(x_{i+1}))), (x_{i+2}, f(x_{i+2}, y(x_{i+2}))), … to form an interpolating polynomial. An interpolating polynomial through (x_{i+1}, f(x_{i+1}, y(x_{i+1}))), (x_i, f(x_i, y(x_i))), …, (x_{i−k}, f(x_{i−k}, y(x_{i−k}))) that satisfies P(x_j) = f(x_j, y(x_j)) for j = i+1, i, …, i−k generates a class of methods known as the Adams-Moulton methods.
Let k = 0. Replacing f(x, y(x)) by the interpolating polynomial through (x_{i+1}, f(x_{i+1}, y(x_{i+1}))) and (x_i, f(x_i, y(x_i))) in the formula

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}} f(x, y(x))\,dx$$

we get

$$y(x_{i+1}) = y(x_i) + \int_{x_i}^{x_{i+1}}\Big[f(x_i, y(x_i))\cdot\frac{x - x_{i+1}}{x_i - x_{i+1}} + f(x_{i+1}, y(x_{i+1}))\cdot\frac{x - x_i}{x_{i+1} - x_i} + \frac{(x - x_i)(x - x_{i+1})}{2!}\cdot f''(\xi_i(x), y(\xi_i(x)))\Big]\,dx$$

$$= y(x_i) + \frac{h}{2}[f(x_i, y(x_i)) + f(x_{i+1}, y(x_{i+1}))] - \frac{h^3}{12}\cdot f''(\eta_i, y(\eta_i)).$$

The corresponding equation with differences is

$$Y_{i+1} = Y_i + \frac{h}{2}[f(x_i, Y_i) + f(x_{i+1}, Y_{i+1})].$$
For k = 1, using the quadratic interpolating polynomial through (x_{i+1}, f(x_{i+1}, y(x_{i+1}))), (x_i, f(x_i, y(x_i))) and (x_{i−1}, f(x_{i−1}, y(x_{i−1}))) we find

$$y(x_{i+1}) = y(x_i) + \frac{h}{12}[5 f(x_{i+1}, y(x_{i+1})) + 8 f(x_i, y(x_i)) - f(x_{i-1}, y(x_{i-1}))] - \frac{h^4}{24}\cdot f^{(3)}(\zeta_i, y(\zeta_i))$$

and

$$Y_{i+1} = Y_i + \frac{h}{12}[5 f(x_{i+1}, Y_{i+1}) + 8 f(x_i, Y_i) - f(x_{i-1}, Y_{i-1})].$$

For k = 2 we obtain:

$$y(x_{i+1}) = y(x_i) + \frac{h}{24}[9 f(x_{i+1}, y(x_{i+1})) + 19 f(x_i, y(x_i)) - 5 f(x_{i-1}, y(x_{i-1})) + f(x_{i-2}, y(x_{i-2}))] - \frac{19}{720}\cdot h^5\cdot f^{(4)}(\xi_i, y(\xi_i)),$$

and

$$Y_{i+1} = Y_i + \frac{h}{24}[9 f(x_{i+1}, Y_{i+1}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})].$$

For k = 3 we have:

$$y(x_{i+1}) = y(x_i) + \frac{h}{720}[251 f(x_{i+1}, y(x_{i+1})) + 646 f(x_i, y(x_i)) - 264 f(x_{i-1}, y(x_{i-1})) + 106 f(x_{i-2}, y(x_{i-2})) - 19 f(x_{i-3}, y(x_{i-3}))] - \frac{3}{160}\cdot h^6\cdot f^{(5)}(\alpha_i, y(\alpha_i)),$$

and

$$Y_{i+1} = Y_i + \frac{h}{720}[251 f(x_{i+1}, Y_{i+1}) + 646 f(x_i, Y_i) - 264 f(x_{i-1}, Y_{i-1}) + 106 f(x_{i-2}, Y_{i-2}) - 19 f(x_{i-3}, Y_{i-3})].$$

Note that these equations define Y_{i+1} implicitly, which is the reason the Adams-Moulton formulas are called implicit methods, while the Adams-Bashforth methods define Y_{i+1} explicitly.
The Predictor-Corrector Method

Hence, among Eqs. (6.6.1) and (6.6.2), Eq. (6.6.1) is preferable, but it is an implicit formula. If f(x, y) is nonlinear, then generally it is difficult to solve Eq. (6.6.1) explicitly for Y_{i+1}.
However, Eq. (6.6.1) is a nonlinear equation with root Y_{i+1} and can be solved by a successive approximation method. For fixed i, Y_{i+1} is the solution of:

$$y = g(y) \qquad (6.6.3)$$

where

$$g(y) = Y_i + \frac{h}{24}[9 f(x_{i+1}, y) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})].$$

To solve Eq. (6.6.3), it is very convenient to use the fixed-point iteration method

$$y^{(k+1)} = g(y^{(k)}),\quad k = 0, 1, 2, \ldots \qquad (6.6.4)$$

because Y_{i+1} is a fixed point of g.
If |g'(y)| < 1 for all y with |y − Y_{i+1}| < |y^{(0)} − Y_{i+1}|, then the sequence of iterations (6.6.4) converges. Since g'(y) = (9h/24)·(∂f/∂y), the sequence of iterations (6.6.4) converges if h < 24/(9·|∂f/∂y|) and y^{(0)} is sufficiently close to Y_{i+1}.
Thus, by properly selecting y^{(0)} (sufficiently close to Y_{i+1}), the sequence of iterations (6.6.4) converges without using many iterations.
For calculating y^{(0)} we use Eq. (6.6.2):

$$Y_{i+1}^{(0)} = Y_i + \frac{h}{24}[55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})] \qquad (6.6.5)$$

$$Y_{i+1}^{(1)} = Y_i + \frac{h}{24}[9 f(x_{i+1}, Y_{i+1}^{(0)}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})] \qquad (6.6.6)$$

We use Eq. (6.6.5) to predict a value of Y_{i+1} and therefore this equation is known as a predictor. The value Y_{i+1}^{(0)} given by the prediction is replaced in (6.6.6), and in this way a corrected value Y_{i+1}^{(1)} is obtained. For this reason, Eq. (6.6.6) is known as a corrector.
A combination of an explicit method to predict and an implicit method to correct is known as a predictor-corrector method.
It has been shown (Henrici 1962) that if the predictor method has at least the order of the corrector method, then one iteration is sufficient to preserve the asymptotic accuracy of the corrector method.
A commonly used predictor-corrector method is the combination of the fourth-order Adams-Bashforth formula as a predictor and the fourth-order Adams-Moulton formula as a corrector. Thus

$$Y_{i+1}^{(p)} = Y_i + \frac{h}{24}[55 f(x_i, Y_i) - 59 f(x_{i-1}, Y_{i-1}) + 37 f(x_{i-2}, Y_{i-2}) - 9 f(x_{i-3}, Y_{i-3})]$$

$$Y_{i+1} = Y_i + \frac{h}{24}[9 f(x_{i+1}, Y_{i+1}^{(p)}) + 19 f(x_i, Y_i) - 5 f(x_{i-1}, Y_{i-1}) + f(x_{i-2}, Y_{i-2})] \qquad (6.6.7)$$

The system (6.6.7) is widely used in combination with the fourth-order Runge-Kutta method as a starter. Like the fourth-order Runge-Kutta method, the predictor-corrector system (6.6.7) is one of the most reliable and widely used methods for the numerical solution of an initial-value problem.
Exercises

1. Approximate the solution of the following (IVP)

$$y'(x) = y^2(x) - x, \quad y(0) = 0, \quad x \in [0, 1]$$

Input data:
- x_0, b - the ends of the interval of x
- y_0 = y(x_0)
- n

Output data:
- the computed couples (x_i, y_i), i = 0, …, n

void main(void)
{
printf("b= "); scanf("%f", &b);
printf("x0= "); scanf("%f", &x0);
printf("y0= "); scanf("%f", &y0);
printf("n= "); scanf("%d", &n);
predcor();
}

float f(float a, float b)
{
return (b*b-a); /* f(x, y) = y*y - x for the exercise above */
}
void rk4(int m, float *x, float *y)
{
float h, k1, k2, k3, k4;
int i;
x[0]=x0;
y[0]=y0;
h=(b-x[0])/n;
for(i=1;i<=m;i++)
{
k1=h*f(x[i-1],y[i-1]);
k2=h*f(x[i-1]+h*0.5,y[i-1]+0.5*k1);
k3=h*f(x[i-1]+h*0.5,y[i-1]+0.5*k2);
k4=h*f(x[i-1]+h,y[i-1]+k3);
y[i]=y[i-1]+(k1+2*k2+2*k3+k4)/6;
x[i]=x[i-1]+h;
}
}

void predcor(void)
{
float *x, *y, h, P;
int i;
x=Vector(0,n);
y=Vector(0,n);
rk4(3,x,y);
h=(b-x0)/n;
for(i=4;i<=n;i++)
{
x[i]=x[i-1]+h;
P=y[i-1]+(h/24)*(55*f(x[i-1],y[i-1])-59*f(x[i-2],y[i-2])
                 +37*f(x[i-3],y[i-3])-9*f(x[i-4],y[i-4]));
y[i]=y[i-1]+(h/24)*(9*f(x[i],P)+19*f(x[i-1],y[i-1])
                    -5*f(x[i-2],y[i-2])+f(x[i-3],y[i-3]));
}
for(i=0;i<=n;i++)
{
printf("x[%d]=%g\t", i,x[i]);
printf("y[%d]=%g\n", i,y[i]);
}
}
and

$$y''(x_i) = \frac{y(x_i - h) - 2 y(x_i) + y(x_i + h)}{h^2} - \frac{h^2}{12}\cdot y^{(4)}(\eta_i). \qquad (6.7.5)$$

Substituting Eqs. (6.7.4) and (6.7.5) in (6.7.3), we get

$$\frac{y(x_i - h) - 2 y(x_i) + y(x_i + h)}{h^2} = p(x_i)\cdot\frac{y(x_i + h) - y(x_i - h)}{2h} + q(x_i)\cdot y(x_i) + r(x_i) + \frac{h^2}{12}\cdot y^{(4)}(\eta_i) - \frac{h^2}{6}\cdot y^{(3)}(\xi_i). \qquad (6.7.6)$$

Since ξ_i and η_i are not known and h² is small, we ignore the last two terms. Denoting the approximate value of y at x_i by Y_i (i.e., Y_i ≈ y(x_i)), the approximate value of y at x_i + h by Y_{i+1} (Y_{i+1} ≈ y(x_i + h)), and the approximate value of y at x_i − h by Y_{i−1} (Y_{i−1} ≈ y(x_i − h)), we get from Eq. (6.7.6) the following:

$$\frac{Y_{i-1} - 2 Y_i + Y_{i+1}}{h^2} = p(x_i)\cdot\frac{Y_{i+1} - Y_{i-1}}{2h} + q(x_i)\cdot Y_i + r(x_i),\quad i = 1, 2, \ldots, N, \qquad (6.7.7)$$

which is an algebraic linear system of N equations in N unknowns.
Collecting similar terms, Eq. (6.7.7) is rewritten as

$$Y_{i-1}\Big(1 + \frac{h}{2}\,p(x_i)\Big) - Y_i\,(2 + h^2 q(x_i)) + Y_{i+1}\Big(1 - \frac{h}{2}\,p(x_i)\Big) = h^2 r(x_i),\quad i = 1, 2, \ldots, N. \qquad (6.7.8)$$

Denoting:

$$a_i = 1 + \frac{h}{2}\,p(x_i),\qquad b_i = -(2 + h^2 q(x_i)),\qquad c_i = 1 - \frac{h}{2}\,p(x_i)$$

the equality (6.7.8) becomes

$$Ay = s \qquad (6.7.11)$$
Ay = s (6.7.11)
The Finite Dieren
es Method for a Numeri
al Solution of a Limit Linear Problem 161
where
b1 c 1 0 ... 0 0 Y1 h2 r(x1 ) − a1 α
a2 b 2 c 2 ... 0 0
Y2
h2 r(x2 )
0 a3 b 3
A=
... 0 0
Y =
..
s= .
.
. . . . . . . . . ... ... ...
. .
YN −1 2
h r(xN −1 )
0 0 0 . . . bN −1 cN −1
0 0 0 . . . aN bN YN h2 r(xN ) − cN β
in whi
h we have:
h
ai = 1 + · p(xi )
2
h
ci = 1 −· p(xi ).
2
If we replace y'(a) by the forward finite difference formula and y'(b) by the backward finite difference formula, the conditions (6.7.13) become

$$\gamma_1 Y_0 + \gamma_2\,\frac{Y_1 - Y_0}{h} = \alpha$$
$$\gamma_3 Y_{N+1} + \gamma_4\,\frac{Y_{N+1} - Y_N}{h} = \beta. \qquad (6.7.16)$$

Solving the first equation for Y_0 as a function of Y_1, and the second equation for Y_{N+1} as a function of Y_N, the system (6.7.14) reduces to a tridiagonal system. Unfortunately, the first derivative approximation is of the first order.

$$y = \begin{pmatrix} Y_1\\ Y_2\\ \vdots\\ Y_{N-1}\\ Y_N \end{pmatrix},\qquad s = \begin{pmatrix} h^2 r(x_1) - \dfrac{2h\alpha a_1}{2h\gamma_1 - 3\gamma_2}\\ h^2 r(x_2)\\ \vdots\\ h^2 r(x_{N-1})\\ h^2 r(x_N) - \dfrac{2h\beta c_N}{2h\gamma_3 + 3\gamma_4} \end{pmatrix}$$
Exercises

1. Approximate the following (BVP)

$$y'' = -y,\quad y(0) = 1,\quad y(\pi/2) = 0$$

Input data:
- a, b - the ends of the interval of x
- α = y(a)
- β = y(b)
- n

Output data:
- computed couples (x_i, y_i), i = 1, …, n

for(i=1;i<=n;i++)
{
b[i]=-(2+h*h*q(x[i]));
s[i]=h*h*r(x[i]);
}
s[1]=s[1]-alpha*(1+h*0.5*p(x[1]));
s[n]=s[n]-beta*(1-h*0.5*p(x[n]));
for(i=2;i<=n;i++)
{
a[i]=1+h*0.5*p(x[i]);
c[i]=1-h*0.5*p(x[i-1]);
}
tridiag(n, a, b, c, s, z);
for(i=1;i<=n;i++)
{
printf("x[%d]=%f\t", i,x[i]);
printf("y[%d]=%f\n", i,z[i]);
}
}

float p(float x)
{
return 0;
}

float q(float x)
{
return (-1);
}

float r(float x)
{
return 0;
}
i = 1, …, N.

The Collocation Method and the Least Squares Method

We search for an approximate solution of Eq. (6.8.1) of the form:

$$Y_N(x) = \Phi_0(x) + \sum_{i=1}^{N} c_i\cdot\Phi_i(x) \qquad (6.8.5)$$

The function R = h − f is called the residual function and indicates the measure in which Y_N verifies Eq. (6.8.1):

Generally, the solution is not exact, but if the number of functions Φ_i increases, then R becomes small. We try to make R(x; c_1, …, c_N) small by choosing c_1, …, c_N.
The collocation method requires that R(x; c_1, …, c_N) be zero at the given points x_1, x_2, …, x_N of [a, b]. Taking into account (6.8.6), it results that:

$$\sum_{i=1}^{N} c_i\,[\Phi_i''(x_k) + p(x_k)\cdot\Phi_i'(x_k) + q(x_k)\cdot\Phi_i(x_k)] = f(x_k) - \Phi_0''(x_k) - p(x_k)\cdot\Phi_0'(x_k) - q(x_k)\cdot\Phi_0(x_k) \qquad (6.8.7)$$

for k = 1, 2, …, N.
The system of relations (6.8.7) is a system of N linear algebraic equations in the N unknowns c_1, …, c_N and can be written in matrix form as follows

$$A\cdot c = b. \qquad (6.8.8)$$

We solve this equation and the obtained solution (c_1, …, c_N) will be used for the construction of Y_N(x) of the form (6.8.5) in order to obtain an approximate solution of the (BVP) (6.8.1)-(6.8.2).
Exercises

1. Approximate the solution of the following (BVP)

$$y'' + y = x,\quad y(0) = 0,\quad y(1) = 0$$

using:

[2] Beu Titus A., Calcul numeric in C, Editia a 2-a, Ed. Albastra, Cluj-Napoca, 2000.

[6] Kelley W., Peterson A., Difference Equations: An Introduction with Applications, Academic Press, Elsevier, 2000.

[7] Maruster Stefan, Metode numerice in rezolvarea ecuatiilor neliniare, Ed. Tehnica, Bucuresti, 1981.