
A collocation approach for solving high-order linear

Fredholm–Volterra integro-differential equations


Şuayip Yüzbaşı^a, Niyazi Şahin^a, Ahmet Yildirim^b,*, Nguyen Thai Duy^c, and Le Tran Phuong Anh^c

^a Department of Mathematics, Faculty of Science, Muğla University, Muğla, Turkey
^b Department of Mathematics, Faculty of Science, Ege University, İzmir, Turkey
^c The re-typing group J for the subject C02045 at TDTU: C1601140, email: c1601140@student.tdtu.edu.vn; C1601002, email: c1601002@student.tdtu.edu.vn

September 24, 2019

Contents
1 Introduction 3

2 Preliminaries and notations 4


2.1 Best approximation . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Polynomial interpolation . . . . . . . . . . . . . . . . . . . . . 5
2.3 Bessel polynomials of first kind . . . . . . . . . . . . . . . . . 6
2.4 Matrix norms . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

3 Fundamental relations 7
3.1 Matrix relation for the differential part D(x) . . . . . . . . . . 8
3.2 Matrix relation for the integral part I(x) . . . . . . . . . . . . 9
3.3 Matrix relation for the integral part V(x) . . . . . . . . . . . 10
3.4 Matrix relation for the conditions . . . . . . . . . . . . . . . . 12

4 Error bound and stability of solution 14

5 Numerical examples 18

6 Fractional order Chelyshkov wavelets and function approximation 22

7 A numerical method for variable order fractional differential equations 22

8 Conclusion 22

Abstract
In this study, a collocation method based on the Bessel polyno-
mials is introduced for the approximate solutions of high-order lin-
ear Fredholm–Volterra integro-differential equations (FVIDEs) under
mixed conditions. In addition, the method is presented with error and
stability analysis. Numerical examples are included to demonstrate the
validity and applicability of the technique and comparisons are made
with the existing results.
Keywords— Fredholm–Volterra integro-differential equations; Bessel polynomials and series; Bessel collocation method; Collocation points

1 Introduction
Mathematical modelling of real-life problems usually results in functional equa-
tions, e.g. partial differential equations, integral and integro-differential equations
(IDEs), stochastic equations and others. In particular, IDEs arise in fluid dynam-
ics, biological models and chemical kinetics. The analytical solutions of some IDEs
cannot be found, thus numerical methods are required. Many numerical methods
have been studied such as the Tau method [? ],[? ],[? ],[? ],[? ], the Legendre
wavelets method [? ], the finite difference method [? ], the Haar functions method
[? ],[? ], the numerical methods [? ], the CAS wavelet method [? ], the differential
transform method [? ], the Homotopy perturbation method [? ], the sine–cosine
wavelets method [? ],[? ], the Chebyshev and Taylor collocation methods [? ],[?
], the Hybrid function method [? ], the Adomian decomposition method [? ] and
numerical solution techniques [? ],[? ],[? ],[? ],[? ],[? ],[? ],[? ],[? ]. Recently,
Yüzbaşı et al. [? ], Yüzbaşı and Sezer [? ], Yüzbaşı et al. [? ] have worked on the
Bessel matrix and collocation methods for the numerical solutions of the neutral
delay differential equations, the pantograph equations and the Lane–Emden dif-
ferential equations. In this paper, by means of the Bessel collocation method [31]
used for solving pantograph equations, we consider the approximate solution of the
mth-order linear FVIDE.
$$\sum_{k=0}^{m} \beta_k(x)\, y^{(k)}(x) = g(x) + \lambda_f \int_a^b K_f(x,t)\, y(t)\, dt + \lambda_v \int_a^x K_v(x,t)\, y(t)\, dt, \quad 0 \le a \le x, t \le b \tag{1}$$

under the mixed conditions


$$\sum_{k=0}^{m-1} \left( a_{jk}\, y^{(k)}(a) + b_{jk}\, y^{(k)}(b) \right) = \lambda_j, \quad j = 0, 1, \dots, m-1 \tag{2}$$

where y^{(0)}(x) = y(x) is the unknown function, the known functions β_k(x), g(x), K_f(x,t) and K_v(x,t) are defined on the interval a ≤ x, t ≤ b, and a_{jk}, b_{jk}, λ_j, λ_f and λ_v are real or complex constants. Also, the functions K_f(x,t) and K_v(x,t) can be represented by their Maclaurin series. Our aim is to find an approximate solution of (1) expressed in the truncated Bessel series form
$$y(x) = \sum_{n=0}^{N} a_n J_n(x), \tag{3}$$

where a_n, n = 0, 1, 2, ..., N, are the unknown Bessel coefficients. Here, N is chosen to be any positive integer such that N ≥ m, and J_n(x), n = 0, 1, 2, ..., N, are the Bessel polynomials of first kind defined by
$$J_n(x) = \sum_{k=0}^{\lfloor (N-n)/2 \rfloor} \frac{(-1)^k}{k!\,(k+n)!} \left( \frac{x}{2} \right)^{2k+n}, \quad n \in \mathbb{N}, \ 0 \le x < \infty.$$
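As an illustration (not part of the original paper), the truncated J_n can be evaluated directly from this series:

```python
from math import factorial

def J(n, x, N):
    """Truncated Bessel polynomial of the first kind J_n(x), degree at most N."""
    return sum((-1) ** k / (factorial(k) * factorial(k + n)) * (x / 2) ** (2 * k + n)
               for k in range((N - n) // 2 + 1))

# At x = 0 only J_0 has a nonzero (constant) term: J_0(0) = 1, J_n(0) = 0 for n >= 1
print([J(n, 0.0, 5) for n in range(4)])
```

For small arguments the truncation converges quickly; for example, J(0, 1.0, 11) already agrees with the full Bessel series for J_0(1) to many digits.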

2 Preliminaries and notations


2.1 Best approximation
The set of continuous functions on a given closed interval [a, b], which we denote by C[a, b], is a linear space. A norm on C[a, b] can be defined by
$$\|f\|_\infty = \max_{a \le x \le b} |f(x)|.$$
This norm is called the uniform or Chebyshev norm. Let P_n be the (n+1)-dimensional subspace of C[a, b] spanned by the functions 1, x, x^2, ..., x^n. That is, P_n consists of all polynomials of degree at most n. Since P_n is a finite-dimensional subspace of C[a, b], it is closed and convex. Thus, for f ∈ C[a, b], there exists a unique p ∈ P_n such that
$$\|f - p\|_\infty \le \|f - q\|_\infty, \quad \forall q \in P_n.$$
Here, p is called the best approximation to f out of P_n. We denote the error of the best approximation to f from P_n by
$$E_n[f; [a, b]] = E_n(f) = \|f - p\|_\infty.$$

Definition 2.1.1 Let f(x) be defined on [a, b]. The modulus of continuity of f(x) on [a, b], w(δ), is defined for δ > 0 by
$$w(\delta) = \sup_{|x - y| < \delta} |f(x) - f(y)|.$$

Theorem 2.1.2 If f ∈ C[a, b], then
$$E_n[f; [a, b]] \le 6\, w\!\left( \frac{b - a}{2n} \right). \tag{4}$$

2.2 Polynomial interpolation
Let us consider n+1 pairs (x_i, y_i). The problem is to find a polynomial p_m, called the interpolating polynomial, such that
$$p_m(x_i) = c_0 + c_1 x_i + \dots + c_m x_i^m = y_i, \quad i = 0, 1, \dots, n.$$
The points x_i are called interpolation nodes. If n ≠ m, the problem is over- or under-determined.

Theorem 2.2.1 Given n+1 distinct nodes x_0, x_1, ..., x_n and n+1 corresponding values y_0, y_1, ..., y_n, there exists a unique polynomial p_n ∈ P_n such that p_n(x_i) = y_i for i = 0, 1, ..., n. If we define
$$l_i \in P_n: \quad l_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}, \quad i = 0, 1, \dots, n,$$
then l_i(x_j) = δ_{ij}. The polynomials l_i(x) are called the Lagrange characteristic polynomials. If y_i = f(x_i) for i = 0, 1, ..., n, f being a given function, the

interpolating polynomial p_n(x) will be denoted by p_n f(x). Let us introduce a lower triangular matrix X of infinite size, called the interpolation matrix on [a, b], whose entries x_{ij}, for i, j = 0, 1, ..., represent points of [a, b], with the assumption that on each row the entries are all distinct. Thus, for any n ≥ 0, the (n+1)-th row of X contains n+1 distinct values that can be identified as nodes, so that, for a given function f, we can uniquely define an interpolating polynomial p_n f of degree n at those nodes. Having fixed the function f and the interpolation matrix X, let us define the interpolation error by
$$G_n(X) = \|f - p_n f\|_\infty.$$
Then, we have the following comparison of G_n(X) and E_n[f; [a, b]].

Theorem 2.2.2 Let f ∈ C[a, b] and X be an interpolation matrix on [a, b]. Then
$$G_n(X) \le (1 + \Lambda_n(X))\, E_n(f; [a, b]) \tag{5}$$
where Λ_n(X) denotes the Lebesgue constant of X, defined as
$$\Lambda_n(X) = \left\| \sum_{i=0}^{n} \left| l_i^{(n)} \right| \right\|_\infty,$$
and where l_i^{(n)} ∈ P_n is the i-th characteristic polynomial associated with the (n+1)-th row of X.

Corollary 2.2.3 Using the above notation, if f ∈ C[a, b], we get from (4) and (5)
$$\|f - p_n f\|_\infty \le 6\,(1 + \Lambda_n(X))\, w\!\left( \frac{b - a}{2n} \right).$$
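To make the role of Λ_n(X) concrete, the following illustrative sketch (not from the paper) estimates the Lebesgue constant of equally spaced nodes on [0, 1] by maximizing Σ_i |l_i(x)| over a fine grid; its rapid growth anticipates the stability discussion in Section 4:

```python
import numpy as np

def lebesgue_const(nodes, grid):
    """Estimate Lambda_n = max_x sum_i |l_i(x)| over a fine grid."""
    total = np.zeros_like(grid)
    for i, xi in enumerate(nodes):
        li = np.ones_like(grid)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (grid - xj) / (xi - xj)   # Lagrange characteristic polynomial
        total += np.abs(li)
    return total.max()

grid = np.linspace(0.0, 1.0, 4001)
vals = [lebesgue_const(np.linspace(0.0, 1.0, n + 1), grid) for n in (5, 10, 15)]
print(vals)   # grows roughly like 2^(n+1) / (e n log n) on equispaced nodes
```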
Theorem 2.2.4 Let x and the abscissas x_0, x_1, ..., x_n be contained in an interval [a, b] on which f and its first n derivatives are continuous, and let f^{(n+1)} exist in the open interval (a, b). Then there exists ξ_x ∈ (a, b), which depends on x, such that
$$f(x) - p_n(x) = \frac{f^{(n+1)}(\xi_x)}{(n+1)!} \prod_{i=0}^{n} (x - x_i).$$

2.3 Bessel polynomials of first kind


The n-th degree truncated Bessel polynomials of first kind are defined by
$$J_n(x) = \sum_{k=0}^{\lfloor (N-n)/2 \rfloor} \frac{(-1)^k}{k!\,(k+n)!} \left( \frac{x}{2} \right)^{2k+n}, \quad n \in \mathbb{N}, \ 0 \le x < \infty.$$
Let us assume that f is a solution of (1). We would like to interpolate f by
$$p_N(x) = \sum_{n=0}^{N} a_n J_n(x), \quad N \ge m,$$
such that p_N and f are equal at the nodes a ≤ x_0 < x_1 < ... < x_N ≤ b. Since the values f(x_i) are unknown, we can use (1) to find the interpolation polynomial at the nodes x_0, x_1, ..., x_N without knowing the f(x_i) values. To do this, we put the interpolation polynomial p_N(x) = c_0 + c_1 x + ... + c_N x^N into (1). If p_N equals f at the nodes, then p_N satisfies (1) at the nodes. Hence, we can obtain a system of linear equations depending on c_0, c_1, ..., c_N. Therefore, we can find the solution of (1) with some errors, namely the interpolation and computational errors.

2.4 Matrix norms


Definition 2.4.1 A matrix norm on C^{m×n} is a function ‖·‖: C^{m×n} → R satisfying the following conditions:
1. X ≠ 0 ⇒ ‖X‖ > 0;
2. ‖αX‖ = |α| ‖X‖;
3. ‖X + Y‖ ≤ ‖X‖ + ‖Y‖.
All of the properties of vector norms hold equally for matrix norms. In particular, all matrix norms are equivalent and satisfy ‖XY‖ ≤ ‖X‖ ‖Y‖ whenever the multiplication is defined.

Definition 2.4.2 The Frobenius norm is the function ‖·‖_F defined for any matrix by
$$\|X\|_F = \sqrt{ \sum_{i,j} |x_{ij}|^2 }.$$

Definition 2.4.3 The ∞-norm and the 1-norm are defined by
$$\|X\|_\infty = \max_i \sum_j |x_{ij}|, \qquad \|X\|_1 = \max_j \sum_i |x_{ij}|,$$
respectively.
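As a quick illustration (not part of the paper), these three definitions can be checked against NumPy's built-in norms:

```python
import numpy as np

X = np.array([[1.0, -2.0],
              [3.0,  4.0]])
fro = np.sqrt((np.abs(X) ** 2).sum())       # Frobenius norm
inf_norm = np.abs(X).sum(axis=1).max()      # maximum absolute row sum
one_norm = np.abs(X).sum(axis=0).max()      # maximum absolute column sum
print(fro, inf_norm, one_norm)
```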

3 Fundamental relations
First, we can write J_n(x) in the matrix form as follows:
$$J^T(x) = D\, X^T(x) \quad \Leftrightarrow \quad J(x) = X(x)\, D^T \tag{8}$$
where
$$J(x) = \begin{bmatrix} J_0(x) & J_1(x) & \cdots & J_N(x) \end{bmatrix} \quad \text{and} \quad X(x) = \begin{bmatrix} 1 & x & x^2 & \cdots & x^N \end{bmatrix}.$$

If N is odd,
$$D = \begin{pmatrix}
\frac{1}{0!\,0!\,2^0} & 0 & \frac{-1}{1!\,1!\,2^2} & \cdots & \frac{(-1)^{\frac{N-1}{2}}}{(\frac{N-1}{2})!\,(\frac{N-1}{2})!\,2^{N-1}} & 0 \\[4pt]
0 & \frac{1}{0!\,1!\,2^1} & 0 & \cdots & 0 & \frac{(-1)^{\frac{N-1}{2}}}{(\frac{N-1}{2})!\,(\frac{N+1}{2})!\,2^{N}} \\[4pt]
0 & 0 & \frac{1}{0!\,2!\,2^2} & \cdots & \frac{(-1)^{\frac{N-3}{2}}}{(\frac{N-3}{2})!\,(\frac{N+1}{2})!\,2^{N-1}} & 0 \\[4pt]
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\[4pt]
0 & 0 & 0 & \cdots & \frac{1}{0!\,(N-1)!\,2^{N-1}} & 0 \\[4pt]
0 & 0 & 0 & \cdots & 0 & \frac{1}{0!\,N!\,2^{N}}
\end{pmatrix}_{(N+1)\times(N+1)}.$$
If N is even, D is built in the same way from the coefficients of the truncated series, the only difference being which of the last two columns carries the final nonzero entry of each row.
Let us write Eq. (1) in the form
$$D(x) = g(x) + \lambda_f I(x) + \lambda_v V(x), \tag{9}$$
where
$$D(x) = \sum_{k=0}^{m} \beta_k(x)\, y^{(k)}(x), \quad I(x) = \int_a^b K_f(x,t)\, y(t)\, dt, \quad V(x) = \int_a^x K_v(x,t)\, y(t)\, dt.$$
Now we convert the solution y(x) and its derivatives y^{(k)}(x), the parts D(x), I(x) and V(x), and the mixed conditions (2) to matrix forms.

3.1 Matrix relation for the differential part D(x)


We first consider the desired solution y(x) = p_N(x) of Eq. (1) defined by the truncated Bessel series. Then the function can be written in the matrix form
$$[y(x)] = J(x)\, A, \quad A = [a_0\ a_1\ \cdots\ a_N]^T, \tag{10}$$
or, from Eq. (8),
$$[y(x)] = X(x)\, D^T A. \tag{11}$$
Also, we write the relation between the matrix X(x) and its derivative X^{(1)}(x) as follows:
$$X^{(1)}(x) = X(x)\, B^T, \tag{12}$$
where
$$B^T = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & N \\ 0 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$

From Eq. (12), we have the recurrence relations
$$X^{(0)}(x) = X(x), \quad X^{(1)}(x) = X(x)\, B^T, \quad X^{(2)}(x) = X^{(1)}(x)\, B^T = X(x)\, (B^T)^2, \quad \dots, \quad X^{(k)}(x) = X^{(k-1)}(x)\, B^T = X(x)\, (B^T)^k. \tag{13}$$
By using the relations (11) and (13), we obtain the matrix relation
$$y^{(k)}(x) = X^{(k)}(x)\, D^T A = X(x)\, (B^T)^k D^T A, \quad k = 0, 1, 2, \dots, m. \tag{14}$$
By substituting expression (14) into (9), we have the matrix relation
$$[D(x)] = \sum_{k=0}^{m} \beta_k(x)\, X(x)\, (B^T)^k D^T A. \tag{15}$$
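The relation y^{(k)}(x) = X(x)(B^T)^k D^T A can be verified numerically; the sketch below (illustrative only, with randomly chosen coefficients A) compares the matrix form of y'(x) against a central finite difference of y(x):

```python
import numpy as np
from math import factorial

N = 6
# D[n, j]: coefficient of x^j in the truncated Bessel polynomial J_n
D = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    for k in range((N - n) // 2 + 1):
        D[n, n + 2 * k] = (-1) ** k / (factorial(k) * factorial(k + n) * 2.0 ** (2 * k + n))

BT = np.diag(np.arange(1.0, N + 1), 1)    # superdiagonal 1..N, so X'(x) = X(x) B^T
X = lambda x: x ** np.arange(N + 1.0)      # X(x) = [1 x ... x^N]
A = np.random.default_rng(0).standard_normal(N + 1)   # arbitrary Bessel coefficients

yk = lambda x, k: float(X(x) @ np.linalg.matrix_power(BT, k) @ D.T @ A)
x0, h = 0.4, 1e-6
diff = (yk(x0 + h, 0) - yk(x0 - h, 0)) / (2 * h)   # central difference of y(x)
print(abs(yk(x0, 1) - diff))                        # should be tiny
```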

3.2 Matrix relation for the integral part I(x)


The kernel function K_f(x,t) can be approximated by the truncated Maclaurin series and the truncated Bessel series, respectively,
$$K_f(x,t) = \sum_{m=0}^{N} \sum_{n=0}^{N} {}_t k^f_{mn}\, x^m t^n \quad \text{and} \quad K_f(x,t) = \sum_{m=0}^{N} \sum_{n=0}^{N} {}_b k^f_{mn}\, J_m(x)\, J_n(t), \tag{16}$$
where
$$ {}_t k^f_{mn} = \frac{1}{m!\, n!} \frac{\partial^{m+n} K_f(0,0)}{\partial x^m\, \partial t^n}, \quad m, n = 0, 1, \dots, N. $$
The relations given in Eq. (16) can be put in the matrix forms
$$K_f(x,t) = X(x)\, K^f_t\, X^T(t), \quad K^f_t = [\,{}_t k^f_{mn}\,], \quad m, n = 0, 1, 2, \dots, N \tag{17}$$
and
$$K_f(x,t) = J(x)\, K^f_b\, J^T(t), \quad K^f_b = [\,{}_b k^f_{mn}\,], \quad m, n = 0, 1, 2, \dots, N. \tag{18}$$
From Eqs. (17) and (18), we obtain the following relation:
$$X(x)\, K^f_t\, X^T(t) = J(x)\, K^f_b\, J^T(t) \;\Rightarrow\; X(x)\, K^f_t\, X^T(t) = X(x)\, D^T K^f_b D\, X^T(t), \quad K^f_t = D^T K^f_b D \ \text{ or } \ K^f_b = (D^T)^{-1} K^f_t D^{-1}. \tag{19}$$
By substituting the matrix forms (10) and (18) into the integral part I(x) in (9), we have the matrix relation
$$[I(x)] = \int_a^b J(x)\, K^f_b\, J^T(t)\, J(t)\, A\, dt = J(x)\, K^f_b\, Q_f\, A \tag{20}$$
so that
$$Q_f = \int_a^b J^T(t)\, J(t)\, dt = \int_a^b D\, X^T(t)\, X(t)\, D^T\, dt = D\, H_f\, D^T,$$
where
$$H_f = \int_a^b X^T(t)\, X(t)\, dt = [h^f_{ij}], \quad h^f_{ij} = \frac{b^{\,i+j+1} - a^{\,i+j+1}}{i+j+1}, \quad i, j = 0, 1, 2, \dots, N.$$
By substituting the matrix relation (8) into expression (20), we have the matrix form
$$[I(x)] = X(x)\, D^T K^f_b\, Q_f\, A. \tag{21}$$
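As an illustrative cross-check (a = 0, b = 1 and N = 4 chosen arbitrarily, not values from the paper), Q_f = D H_f D^T can be compared entry-wise with midpoint-rule quadrature of ∫_a^b J^T(t) J(t) dt:

```python
import numpy as np
from math import factorial

N, a, b = 4, 0.0, 1.0
D = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    for k in range((N - n) // 2 + 1):
        D[n, n + 2 * k] = (-1) ** k / (factorial(k) * factorial(k + n) * 2.0 ** (2 * k + n))

# H_f[i, j] = (b^(i+j+1) - a^(i+j+1)) / (i + j + 1)
Hf = np.array([[(b ** (i + j + 1) - a ** (i + j + 1)) / (i + j + 1)
                for j in range(N + 1)] for i in range(N + 1)])
Qf = D @ Hf @ D.T                                # Q_f = D H_f D^T

# midpoint-rule cross-check of one entry of int_a^b J^T(t) J(t) dt
M = 200000
mid = (np.arange(M) + 0.5) * (b - a) / M + a
Jm = (mid[:, None] ** np.arange(N + 1)) @ D.T    # rows are J(t) = X(t) D^T
q01 = (Jm[:, 0] * Jm[:, 1]).sum() * (b - a) / M
print(abs(q01 - Qf[0, 1]))                       # quadrature error only
```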

3.3 Matrix relation for the integral part V(x)


The kernel function K_v(x,t) can be approximated by the truncated Maclaurin series and the truncated Bessel series, respectively,
$$K_v(x,t) = \sum_{m=0}^{N} \sum_{n=0}^{N} {}_t k^v_{mn}\, x^m t^n \quad \text{and} \quad K_v(x,t) = \sum_{m=0}^{N} \sum_{n=0}^{N} {}_b k^v_{mn}\, J_m(x)\, J_n(t), \tag{22}$$
where
$$ {}_t k^v_{mn} = \frac{1}{m!\, n!} \frac{\partial^{m+n} K_v(0,0)}{\partial x^m\, \partial t^n}, \quad m, n = 0, 1, \dots, N. $$
The relations given in Eq. (22) can be written in the matrix forms
$$K_v(x,t) = X(x)\, K^v_t\, X^T(t), \quad K^v_t = [\,{}_t k^v_{mn}\,], \quad m, n = 0, 1, 2, \dots, N \tag{23}$$
and
$$K_v(x,t) = J(x)\, K^v_b\, J^T(t), \quad K^v_b = [\,{}_b k^v_{mn}\,], \quad m, n = 0, 1, 2, \dots, N. \tag{24}$$
From Eqs. (23) and (24), the following relation is obtained:
$$X(x)\, K^v_t\, X^T(t) = J(x)\, K^v_b\, J^T(t) \;\Rightarrow\; X(x)\, K^v_t\, X^T(t) = X(x)\, D^T K^v_b D\, X^T(t), \quad K^v_t = D^T K^v_b D \ \text{ or } \ K^v_b = (D^T)^{-1} K^v_t D^{-1}. \tag{25}$$
By substituting the matrix forms (10) and (24) into the integral part V(x) in (9), we have the matrix relation
$$[V(x)] = \int_a^x J(x)\, K^v_b\, J^T(t)\, J(t)\, A\, dt = J(x)\, K^v_b\, Q_v(x)\, A, \tag{26}$$

so that
$$Q_v(x) = \int_a^x J^T(t)\, J(t)\, dt = \int_a^x D\, X^T(t)\, X(t)\, D^T\, dt = D\, H_v(x)\, D^T,$$
where
$$H_v(x) = \int_a^x X^T(t)\, X(t)\, dt = [h^v_{ij}(x)], \quad h^v_{ij}(x) = \frac{x^{\,i+j+1} - a^{\,i+j+1}}{i+j+1}, \quad i, j = 0, 1, 2, \dots, N.$$
By substituting the matrix relation (8) into expression (26), we have the matrix form
$$[V(x)] = X(x)\, M\, H_v(x)\, D^T A, \quad M = D^T K^v_b D. \tag{27}$$

3.4 Matrix relation for the conditions


We can obtain the corresponding matrix forms for the conditions (2), by means of the relation (14), as
$$\sum_{k=0}^{m-1} [\,a_{jk}\, X(a) + b_{jk}\, X(b)\,]\, (B^T)^k D^T A = [\lambda_j], \quad j = 0, 1, 2, \dots, m-1. \tag{28}$$
We are now ready to construct the fundamental matrix equation corresponding to Eq. (1). For this purpose, we substitute the matrix relations (15), (21) and (27) into Eq. (9) and obtain the matrix equation
$$\sum_{k=0}^{m} \beta_k(x)\, X(x)\, (B^T)^k D^T A = g(x) + \lambda_f\, X(x)\, D^T K^f_b Q_f\, A + \lambda_v\, X(x)\, M\, H_v(x)\, D^T A. \tag{29}$$

By using in Eq. (29) the collocation points defined by
$$x_i = a + \frac{b - a}{N}\, i, \quad i = 0, 1, \dots, N,$$
the system of matrix equations is obtained as
$$\sum_{k=0}^{m} \beta_k(x_i)\, X(x_i)\, (B^T)^k D^T A = g(x_i) + \lambda_f\, X(x_i)\, D^T K^f_b Q_f\, A + \lambda_v\, X(x_i)\, M\, H_v(x_i)\, D^T A, \quad i = 0, 1, \dots, N,$$

or briefly the fundamental matrix equation is
$$\left\{ \sum_{k=0}^{m} \beta_k\, \mathbf{X}\, (B^T)^k D^T - \lambda_f\, \mathbf{X}\, D^T K^f_b Q_f - \lambda_v\, \bar{\mathbf{X}}\, \bar{M}\, \bar{H}\, \bar{D} \right\} A = G, \tag{30}$$
where
$$\beta_k = \begin{pmatrix} \beta_k(x_0) & 0 & \cdots & 0 \\ 0 & \beta_k(x_1) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \beta_k(x_N) \end{pmatrix}, \quad G = \begin{pmatrix} g(x_0) \\ g(x_1) \\ \vdots \\ g(x_N) \end{pmatrix}, \quad \mathbf{X} = \begin{pmatrix} X(x_0) \\ X(x_1) \\ \vdots \\ X(x_N) \end{pmatrix} = \begin{pmatrix} 1 & x_0 & x_0^2 & \cdots & x_0^N \\ 1 & x_1 & x_1^2 & \cdots & x_1^N \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_N & x_N^2 & \cdots & x_N^N \end{pmatrix},$$
$$\bar{\mathbf{X}} = \begin{pmatrix} X(x_0) & 0 & \cdots & 0 \\ 0 & X(x_1) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & X(x_N) \end{pmatrix}, \quad \bar{M} = \begin{pmatrix} M & 0 & \cdots & 0 \\ 0 & M & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & M \end{pmatrix},$$
$$\bar{H} = \begin{pmatrix} H_v(x_0) & 0 & \cdots & 0 \\ 0 & H_v(x_1) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & H_v(x_N) \end{pmatrix}, \quad \bar{D} = \begin{pmatrix} D^T \\ D^T \\ \vdots \\ D^T \end{pmatrix} \quad \text{and} \quad A = \begin{pmatrix} a_0 \\ a_1 \\ \vdots \\ a_N \end{pmatrix}.$$

When the matrices $\bar{\mathbf{X}}$, $\bar{M}$, $\bar{H}$ and $\bar{D}$ in Eq. (30) are written in full, it can be seen that their dimensions are, respectively, $(N+1) \times (N+1)^2$, $(N+1)^2 \times (N+1)^2$, $(N+1)^2 \times (N+1)^2$ and $(N+1)^2 \times (N+1)$. Hence, the fundamental matrix equation (30) corresponding to Eq. (1) can be written in the form
$$W A = G \quad \text{or} \quad [W; G], \quad W = \sum_{k=0}^{m} \beta_k\, \mathbf{X}\, (B^T)^k D^T - \lambda_f\, \mathbf{X}\, D^T K^f_b Q_f - \lambda_v\, \bar{\mathbf{X}}\, \bar{M}\, \bar{H}\, \bar{D}. \tag{31}$$

We note that Eq. (31) corresponds to a system of (N+1) linear algebraic equations with the unknown Bessel coefficients a_0, a_1, ..., a_N. On the other hand, the matrix form (28) for the conditions can be written as
$$U_j A = [\lambda_j] \quad \text{or} \quad [U_j; \lambda_j], \quad j = 0, 1, 2, \dots, m-1, \tag{32}$$
where
$$U_j = \sum_{k=0}^{m-1} [\,a_{jk}\, X(a) + b_{jk}\, X(b)\,]\, (B^T)^k D^T = \begin{bmatrix} u_{j0} & u_{j1} & u_{j2} & \cdots & u_{jN} \end{bmatrix}, \quad j = 0, 1, 2, \dots, m-1.$$

Consequently, to obtain the solution of Eq. (1) under the conditions (2), we replace the row matrices U_j and [λ_j] by some rows of the matrices W and G, respectively; we then have
$$\tilde{W} A = \tilde{G}. \tag{33}$$
For simplicity, if the last rows of the matrix (31) are replaced, the new augmented matrix of the above system is as follows [29-31,35,36]:
$$[\tilde{W}; \tilde{G}] = \begin{pmatrix}
w_{00} & w_{01} & w_{02} & \cdots & w_{0N} & ; & g(x_0) \\
w_{10} & w_{11} & w_{12} & \cdots & w_{1N} & ; & g(x_1) \\
w_{20} & w_{21} & w_{22} & \cdots & w_{2N} & ; & g(x_2) \\
\vdots & \vdots & \vdots & \ddots & \vdots & ; & \vdots \\
w_{N-m,0} & w_{N-m,1} & w_{N-m,2} & \cdots & w_{N-m,N} & ; & g(x_{N-m}) \\
u_{00} & u_{01} & u_{02} & \cdots & u_{0N} & ; & \lambda_0 \\
u_{10} & u_{11} & u_{12} & \cdots & u_{1N} & ; & \lambda_1 \\
u_{20} & u_{21} & u_{22} & \cdots & u_{2N} & ; & \lambda_2 \\
\vdots & \vdots & \vdots & \ddots & \vdots & ; & \vdots \\
u_{m-1,0} & u_{m-1,1} & u_{m-1,2} & \cdots & u_{m-1,N} & ; & \lambda_{m-1}
\end{pmatrix}.$$

Note that rank W̃ = rank [W̃; G̃] = N+1; otherwise, this would contradict Theorem 2.2.1. Thus, we can write A = W̃^{-1} G̃, and hence the elements a_0, a_1, ..., a_N of A are uniquely determined.
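As an illustration only (not code from the paper), the construction above can be sketched end-to-end in NumPy for the simplest case treated later, the Volterra problem y'(x) = 1 − ∫_0^x y(t) dt, y(0) = 0 (Example 4 below, exact solution sin x). The variable names mirror the matrices in the text; the shortcut M = K^v_t used here follows from M = D^T K^v_b D together with K^v_b = (D^T)^{-1} K^v_t D^{-1}:

```python
import numpy as np
from math import factorial, sin

N = 5
# D[n, j]: coefficient of x^j in the truncated Bessel polynomial J_n
D = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    for k in range((N - n) // 2 + 1):
        D[n, n + 2 * k] = (-1) ** k / (factorial(k) * factorial(k + n) * 2.0 ** (2 * k + n))

BT = np.diag(np.arange(1.0, N + 1), 1)                   # X'(x) = X(x) B^T
X = lambda x: x ** np.arange(N + 1.0)                     # X(x) = [1 x ... x^N]
Hv = lambda x: np.array([[x ** (i + j + 1) / (i + j + 1)  # H_v(x) with a = 0
                          for j in range(N + 1)] for i in range(N + 1)])

# Test problem: y'(x) = 1 - int_0^x y(t) dt, y(0) = 0 (beta_1 = 1, lambda_v = -1,
# K_v = 1, so K_t^v has a single nonzero Maclaurin coefficient and M = K_t^v)
Kvt = np.zeros((N + 1, N + 1)); Kvt[0, 0] = 1.0
xs = np.linspace(0.0, 1.0, N + 1)                         # collocation points
# rows of W; the + sign carries -lambda_v = +1 for the Volterra term
W = np.array([X(xi) @ BT @ D.T + X(xi) @ Kvt @ Hv(xi) @ D.T for xi in xs])
G = np.ones(N + 1)
W[-1] = X(0.0) @ D.T                                      # replace last row: y(0) = 0
G[-1] = 0.0
A = np.linalg.solve(W, G)                                 # Bessel coefficients
pN = lambda x: float(X(x) @ D.T @ A)
err = max(abs(pN(x) - sin(x)) for x in np.linspace(0, 1, 101))
print(err)
```

The computed p_N agrees closely with sin x on [0, 1], consistent with the error discussion in the next section.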

4 Error bound and stability of solution


Theorem 4.1 If P is a nonsingular matrix and
$$\|\delta P\| < \frac{1}{\|P^{-1}\|},$$
then P + δP is nonsingular.

Theorem 4.2 Let P be a nonsingular matrix, let b ≠ 0 and let x and x̃ = x + δx be solutions of Px = b and (P + δP) x̃ = b, respectively. Then
$$\|\delta x\| \le \|P^{-1}\|\, \|\delta P\|\, \|\tilde{x}\|.$$

Theorem 4.3 Let ‖·‖ be a consistent matrix norm on C^{n×n}. For any matrix P of order n, if ‖P‖ < 1, then I − P is nonsingular. Moreover,
$$\|(I - P)^{-1} X\| \le \frac{\|X\|}{1 - \|P\|}$$
and
$$\|(I - P)^{-1}\| \le \frac{1}{1 - \|P\|}.$$
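As a numerical sanity check of these bounds (illustrative values, not from the paper):

```python
import numpy as np

P = np.array([[0.10, -0.20],
              [0.05,  0.30]])
norm_inf = lambda M: np.abs(M).sum(axis=1).max()    # maximum absolute row sum
p = norm_inf(P)                                     # here p = 0.35 < 1
inv = np.linalg.inv(np.eye(2) - P)
Xm = np.array([[1.0, 2.0], [3.0, -1.0]])
print(norm_inf(inv), 1 / (1 - p))                   # bound on ||(I-P)^{-1}||
print(norm_inf(inv @ Xm), norm_inf(Xm) / (1 - p))   # bound on ||(I-P)^{-1} X||
```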
The following theorem has been proved in [39] for the Bernstein series solutions p_N(x) of the pantograph equations. A similar theorem for the Bessel series case has been proved in [31]. Now, we modify the theorem given in [31] for the linear Fredholm–Volterra integro-differential equations.

Theorem 4.4
Let p_N(x) be the Bessel series solution actually computed for the FVIDE, and let y = f(x) be the exact solution. Let the coefficient matrix of (33) be W̃_1 = W̃ + δW, where δW represents the computational error, and let X(x) and D be the matrices defined in (8). If ‖δW‖_∞ ‖W̃_1^{-1}‖_∞ < 1, then
$$\|f - p_N\|_\infty \le 6\,(1 + \Lambda_N(X))\, w\!\left( \frac{b-a}{2N} \right) + \frac{s\, \|\tilde W_1^{-1}\|_\infty\, \|\tilde A\|_\infty}{1 - s\, \|\tilde W_1^{-1}\|_\infty}\, \|D\|_\infty\, \|X^T(b-a)\|_\infty, \tag{35}$$
where s is the highest value of ‖δW‖_∞ and Ã is the solution of (33). In particular, if ‖δW‖_F ‖W̃_1^{-1}‖_F < 1 and f ∈ C^∞[a, b] or m > N, then
$$|f(x) - p_N(x)| \le \left| \frac{f^{(N+1)}(\xi_x)}{(N+1)!} \prod_{i=0}^{N} (x - x_i) \right| + \frac{s\, \|\tilde W_1^{-1}\|_F\, \|\tilde A\|_F}{1 - s\, \|\tilde W_1^{-1}\|_F}\, \|D\|_F\, \|X^T(b-a)\|_\infty. \tag{36}$$

Proof of Theorem 4.4. By the triangle inequality and Corollary 2.2.3,
$$\|y - p_N\|_\infty = \|y - p_N f + p_N f - p_N\|_\infty \le \|y - p_N f\|_\infty + \|p_N f - p_N\|_\infty \le 6\,(1 + \Lambda_N(X))\, w\!\left( \frac{b-a}{2N} \right) + \|p_N f - p_N\|_\infty.$$
For the second term, writing p_N f with the exact coefficients a_n and p_N with the computed coefficients ã_n,
$$\|p_N f - p_N\|_\infty = \Big\| \sum_{n=0}^{N} (a_n - \tilde a_n)\, J_n \Big\|_\infty \le \big\| [\,a_0 - \tilde a_0 \;\; a_1 - \tilde a_1 \;\; \cdots \;\; a_N - \tilde a_N\,] \big\|_\infty\, \big\| [\,J_0 \;\; J_1 \;\; \cdots \;\; J_N\,]^T \big\|_\infty.$$
By Theorem 4.2 applied to the systems W̃_1 Ã = G̃ and (W̃_1 − δW) A = G̃, together with Theorems 4.1 and 4.3,
$$\|A - \tilde A\|_\infty \le \|(\tilde W_1 - \delta W)^{-1}\|_\infty\, \|\delta W\|_\infty\, \|\tilde A\|_\infty \le \frac{\|\tilde W_1^{-1}\|_\infty\, \|\delta W\|_\infty}{1 - \|\tilde W_1^{-1}\|_\infty\, \|\delta W\|_\infty}\, \|\tilde A\|_\infty \le \frac{s\, \|\tilde W_1^{-1}\|_\infty}{1 - s\, \|\tilde W_1^{-1}\|_\infty}\, \|\tilde A\|_\infty,$$
provided ‖δW‖_∞ ‖W̃_1^{-1}‖_∞ < 1, where s is the highest value of ‖δW‖_∞. Bounding ‖[J_0 ⋯ J_N]^T‖_∞ ≤ ‖D‖_∞ ‖X^T(b − a)‖_∞ gives (35). For the pointwise bound, Theorem 2.2.4 yields
$$|y(x) - p_N(x)| \le |y(x) - p_N f(x)| + |p_N f(x) - p_N(x)| \le \Big| \frac{f^{(N+1)}(\xi_x)}{(N+1)!} \prod_{i=0}^{N} (x - x_i) \Big| + |p_N f(x) - p_N(x)|, \tag{42}$$
and by the Cauchy–Schwarz inequality
$$|p_N f(x) - p_N(x)| = \Big| \sum_{n=0}^{N} (a_n - \tilde a_n)\, J_n(x) \Big| \le \big\| [\,a_0 - \tilde a_0 \;\; \cdots \;\; a_N - \tilde a_N\,] \big\|_F\, \big\| [\,J_0(x) \;\; \cdots \;\; J_N(x)\,]^T \big\|_F.$$
If f ∈ C^∞[a, b] or m > N, then, from Theorems 2.2.4, 4.1 and 4.2, inequality (34) and the Cauchy–Schwarz inequality, we obtain (36). To complete the proof, we need to find s, the highest value of ‖δW‖. To do this, we will use the following lemma.

Lemma 4.5
If X_j + ΔX_j ∈ R^{n×n} satisfies ‖ΔX_j‖ ≤ σ_j ‖X_j‖ for all j for a consistent norm, then
$$\Big\| \prod_j (X_j + \Delta X_j) - \prod_j X_j \Big\| \le \Big( \prod_j (1 + \sigma_j) - 1 \Big) \prod_j \|X_j\|.$$
Using Lemma 4.5, we find the upper bound of ‖δW‖, where θ ≤ u, u is the unit roundoff, and r is the number of terms summed in W. This completes the proof of Theorem 4.4.

Let us consider a set of function values {f̃(x_i) : i = 0, 1, ..., N} which is a perturbation of the data {f(x_i) : i = 0, 1, ..., N} relative to the nodes {x_i : i = 0, 1, ..., N} ⊂ [a, b]. The perturbation may be due, for instance, to the effect of rounding errors, or may be caused by an error in the experimental measurement of the data. Denoting by p̃_N f(x) the interpolating polynomial on the set of values f̃(x_i), the resulting change in the Bessel series solution is controlled by the quantities δp_N(x) and δp̃_N(x), which represent the differences between the interpolating polynomial and the Bessel series solution. If the Lebesgue constant Λ_N(X) given in (5) and the difference between the interpolating polynomial and the Bessel series solution are small, small changes in the data give rise to small changes in the Bessel series solution. On equally spaced nodes, Λ_N(X) grows exponentially; that is, as N → ∞,
$$\Lambda_N(X) \approx \frac{2^{N+1}}{e\, N \log N},$$
as proved in [41]. This shows that, for large N and equally spaced nodes, the polynomial interpolation method can become unstable. On the other hand, by (36), the difference between the interpolating polynomial and the Bessel series solution grows as N increases. As a result, the Bessel series solution can become unstable for large N.

Let E_N(x) be the error function obtained by substituting the Bessel series solution into (1); that is, E_N : [a, b] → R is defined by
$$E_N(x) = \sum_{k=0}^{m} \beta_k(x)\, p_N^{(k)}(x) - g(x) - \lambda_f I(x) - \lambda_v V(x). \tag{43}$$
If p_N(x) is the Bessel series solution, thus the solution of (33), the computational errors are minimal when E_N(x) vanishes at the nodes x_0, x_1, ..., x_N. Therefore, E_N can be used for finding the computational errors at the nodes.

5 Numerical examples
In this section, several numerical examples are given to illustrate the properties of the method, and it is also shown that the absolute error and its upper bound are consistent. In this regard, we report in tables and figures the values of the exact solution y(x), the polynomial approximate solution y_N(x) and the absolute error function e_N(x) = |y(x) − y_N(x)| at selected points of the given interval. All the numerical computations have been done using Matlab.

Example 1
Let us now consider the linear second-order FVIDE given by
$$y''(x) + x\, y'(x) - x\, y(x) = e^x - \sin(x) + \frac{1}{2} x \cos(x) + \int_0^1 \sin(x)\, e^{-t}\, y(t)\, dt - \frac{1}{2} \int_0^x \cos(x)\, e^{-t}\, y(t)\, dt, \quad 0 \le x, t \le 1 \tag{37}$$
with the initial conditions
$$y(0) = 1 \quad \text{and} \quad y'(0) = 1,$$
whose exact solution is y(x) = e^x. Now, let us find the approximate solution given by the truncated Bessel series
$$y(x) \approx p_3(x) = \sum_{n=0}^{3} a_n J_n(x),$$
where m = 2, N = 3, β_0(x) = −x, β_1(x) = x, β_2(x) = 1, g(x) = e^x − sin(x) + (1/2) x cos(x), λ_f = 1, λ_v = −1/2, K_f(x,t) = sin(x) e^{−t} and K_v(x,t) = cos(x) e^{−t}. Hence, the set of collocation points for N = 3 is
$$\left\{ x_0 = 0,\ x_1 = \frac{1}{3},\ x_2 = \frac{2}{3},\ x_3 = 1 \right\}$$

and from Eq. (30) the fundamental matrix equation of the given FVIDE is written. The matrix forms for the initial conditions from Eq. (32) are
$$U_j A = [\lambda_j] \quad \text{or} \quad [U_j; \lambda_j], \quad j = 0, 1,$$
or clearly
$$[U_0; \lambda_0] = \begin{bmatrix} 1 & 0 & 0 & 0 & ; & 1 \end{bmatrix}$$
and
$$[U_1; \lambda_1] = \begin{bmatrix} 0 & \tfrac{1}{2} & 0 & 0 & ; & 1 \end{bmatrix}.$$
From system (33), the new augmented matrix based on the conditions is computed; by solving this system, the Bessel coefficient matrix is obtained as
$$A = \begin{bmatrix} 1 & 2 & 6 & \tfrac{4587}{299} \end{bmatrix}^T.$$
18
Hence, the approximate solution of the problem for N = 3 is found as
$$p_3(x) = 1 + x + 0.5 x^2 + 0.194606977755 x^3.$$
Similarly, we obtain the approximate solutions of the problem for N = 7 and N = 10. To find the upper bound of the absolute error using Theorem 4.4, let us calculate the norms of the matrices used in (35). For N = 7 and digits = 25, it is found that ‖δW‖_F ‖W̃_1^{-1}‖_F < 1.8930 × 10^{−9}; hence, from (35), the upper bound of the absolute error for N = 7 is obtained as 3.5646 × 10^{−5}. In a similar way, the upper bound of the absolute error for N = 10 is found to be 7.6050 × 10^{−7}. On the other hand, using ‖f − p_N‖_∞ = max |f(x) − p_N(x)|, 0 ≤ x ≤ 1, the maximum absolute errors for N = 3, N = 7 and N = 10 and digits = 25 are obtained as ‖f − p_3‖_∞ = 2.3675 × 10^{−2}, ‖f − p_7‖_∞ = 1.2063 × 10^{−6} and ‖f − p_{10}‖_∞ = 8.2249 × 10^{−10}.

Example 2
Consider the linear Fredholm integro-differential equation given by
$$y'(x) = x e^x + e^x - x + \int_0^1 x\, y(t)\, dt, \quad 0 \le x, t \le 1 \tag{39}$$
with the initial condition y(0) = 0 and the exact solution y(x) = x e^x, so that m = 1, β_1(x) = 1, λ_f = 1, λ_v = 0, g(x) = x e^x + e^x − x and K_f(x,t) = x. From Eq. (30), the fundamental matrix equation of the problem is formed and, following the method given in Section 3, we obtain the approximate solutions by Bessel polynomials of the problem for N = 5, 7, 10, respectively. Since ‖δW‖_F ‖W̃_1^{-1}‖_F < 1.0133 × 10^{−9}, from (35), the upper bound of the absolute error for N = 7 and digits = 25 is computed as 1.0198 × 10^{−5}. Similarly, the upper bound of the absolute error for N = 10 and digits = 25 is found as 3.7415 × 10^{−5}, and the upper bound of the absolute error for N = 10 and digits = 35 is calculated as 3.3988 × 10^{−8}.

Table 1: Numerical results of the absolute error functions e_N(x) of y(x) of Eq. (39): x_i, the CAS wavelet method, the differential transformation, the HPM and the present method (e_5(x_i), e_7(x_i), e_{10}(x_i)).

Figure 1: Absolute error of Example ?? with k = 1, M = 4, h = 1, α = 1.

Also, by using ‖f − p_N‖_∞ = max |f(x) − p_N(x)|, 0 ≤ x ≤ 1, we obtain the maximum absolute errors for N = 5, N = 7 and N = 10 and digits = 25 as follows: ‖f − p_5‖_∞ = 2.3540 × 10^{−4}, ‖f − p_7‖_∞ = 7.6356 × 10^{−7} and ‖f − p_{10}‖_∞ = 8.9800 × 10^{−10}. In addition, the numerical results of the absolute error functions obtained by the present method for N = 5, 7 and 10, the CAS wavelet method [12], the differential transformation [13] and the HPM [14] are compared in Table 1. Fig. 1 shows the absolute error functions obtained by the present method for N = 5, 7 and 10, the CAS wavelet method, the differential transformation and the HPM. It is seen from Table 1 and Fig. 1 that the results obtained by the present method are better than those obtained by the other methods.

Example 3
Suppose that the following linear FVIDE with variable coefficients
$$y^{(5)}(x) - x\, y^{(2)}(x) + x\, y(x) = -e^{-x} - \frac{1}{2} e^{2x} - x^2 + \frac{1}{2} \int_0^1 e^{2x+t}\, y(t)\, dt + \int_0^x x\, e^t\, y(t)\, dt, \quad 0 \le x, t \le 1 \tag{40}$$
$$y(0) = 1, \quad y'(0) = -1, \quad y''(0) = 1, \quad y'''(0) = -1, \quad y^{(4)}(0) = 1$$
has a solution. Then the Bessel series solution for N = 10 is obtained. Since the exact solution of this equation satisfies f ∈ C^4[0, 1] and f^{(5)}(x) exists, we can find the error of the Bessel polynomial solution using Theorem 4.4. Summing this error with |p_{10}(x) − p_4(x)|, we get the upper bound of the error for y(x) = p_{10}(x). Table 2 shows both the error function E_{10}(x) defined in Theorem 4.4 and the upper bound error of p_{10}(x). As seen from Table 2, the errors given in the table are consistent. Thus, even when the exact solution is not known explicitly, the Bessel series solution can be implemented.

Table 2: Comparison of |E_{10}(x)| and the upper bound error of p_{10}(x)

| x_i | |E_{10}(x_i)| |
| --- | --- |
| 0 | 4.50 × 10^{−20} |
| 0.2 | 6.719 × 10^{−4} |
| 0.4 | 2.153 × 10^{−4} |
| 0.6 | 6.755 × 10^{−3} |
| 0.8 | 1.96 × 10^{−2} |
| 1 | 1.45 × 10^{−2} |

Table 3: The numerical results of Eq. (41) for the x values.

| x_i | Exact solution | Taylor method | Present method |
| --- | --- | --- | --- |
| 0 | 0 | 0 | 0 |
| 0.1 | 0.0998 | 0.0998 | 0.2 |
| 0.2 | 0.1986 | 0.198 | 0.25 |
| 0.3 | 0.29 | 0.2955 | 0.43 |

Example 4
Finally, let us consider the linear Volterra integro-differential equation
$$y'(x) = 1 - \int_0^x y(t)\, dt, \quad 0 \le x, t \le 1 \tag{41}$$
with the initial condition y(0) = 0 and the exact solution y(x) = sin(x). Here, m = 1, N = 5, β_1(x) = 1, λ_f = 0, λ_v = −1, g(x) = 1 and K_v(x,t) = 1. We write the fundamental matrix equation of Example 4 from Eq. (30), and, by using the procedure given in Section 3, we find the approximate solution by the Bessel polynomials for N = 5:
$$y_5(x) = x + (0.519273964102 \times 10^{-4})\, x^2 - (0.167011007897)\, x^3 + (0.815190187759 \times 10^{-3})\, x^4 + (0.762454157655 \times 10^{-2})\, x^5.$$
The numerical results of the absolute error function and the approximate solution obtained by the present method are compared with the numerical results of the Taylor method given in [17] for N = 5 in Table 3. Fig. 2(a) displays the exact solution and the approximate solutions obtained by the present method and the Taylor method for N = 5, and in Fig. 2(b) we compare the absolute error functions obtained by the present method and the Taylor method for N = 5. It is seen from Fig. 2 and Table 3 that the results obtained by the present method are far superior to those obtained by the Taylor method.

Definition 5.0.1 The Riemann–Liouville fractional integral operator of order β(x) is defined as
$$I^{\beta(x)} u(x) = \frac{1}{\Gamma(\beta(x))} \int_0^x u(t)\, (x - t)^{\beta(x)-1}\, dt, \quad x \ge 0. \tag{49}$$

Definition 5.0.2 Caputo's derivative of order β(x) is defined as
$$D^{\beta(x)} u(x) = I^{\eta - \beta(x)} \left( \frac{d^\eta}{dx^\eta}\, u(x) \right) = \frac{1}{\Gamma(\eta - \beta(x))} \int_0^x u^{(\eta)}(t)\, (x - t)^{\eta - \beta(x) - 1}\, dt, \quad \eta - 1 < \beta(x) \le \eta, \ \eta \in \mathbb{N}. \tag{50}$$
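For polynomial inputs, which is the case relevant to wavelet bases, the Riemann–Liouville integral acts term by term through the power rule I^β x^p = Γ(p+1)/Γ(p+β+1) x^{p+β}. A minimal illustrative sketch (not from the paper), assuming a constant order β for simplicity:

```python
from math import gamma

def rl_integral_poly(coeffs, beta):
    """Riemann-Liouville integral I^beta of u(x) = sum_p coeffs[p] * x^p,
    using I^beta x^p = Gamma(p+1) / Gamma(p+beta+1) * x^(p+beta).
    Returns a list of (power, coefficient) pairs."""
    return [(p + beta, c * gamma(p + 1) / gamma(p + beta + 1))
            for p, c in enumerate(coeffs) if c != 0]

# beta = 1 reduces to ordinary integration: I^1 (1 + 2x) = x + x^2
print(rl_integral_poly([1.0, 2.0], 1.0))
```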

6 Fractional order Chelyshkov wavelets and function approximation
$$\psi_{n,m}(x) = \begin{cases} \sqrt{2m+1}\; 2^{k/2}\, \rho_{m,M}(2^k x - n), & \text{if } x \in \left[ \dfrac{n}{2^k}, \dfrac{n+1}{2^k} \right), \\[6pt] 0, & \text{otherwise.} \end{cases} \tag{51}$$

7 A numerical method for variable order fractional differential equations
$$D^\eta u(x) = C^T \Psi^{h,\alpha}_{k,M}(x) = \sum_{m=0}^{M} \sum_{n=0}^{2^k - 1} c^{h,\alpha}_{n,m}\, \psi_{n,m}(x), \tag{52}$$
where C and Ψ^{h,α}_{k,M}(x) are given in Eq. (??).

| M | k | Method in [?] | Present method |
| --- | --- | --- | --- |
| 4 | 0 | 1.361 · 10^{−4} | 2.22654 · 10^{−7} |
| 4 | 2 | | 2.38112 · 10^{−10} |
| 6 | 0 | 1.470 · 10^{−7} | 3.57691 · 10^{−10} |

Table 4: The maximal absolute errors of numerical solutions for Example ?? by using our method in comparison with those by using the Chebyshev collocation method in [?].

8 Conclusion
In this paper, a new form of wavelets was contributed, and the exact formulas of its Caputo derivative and Riemann–Liouville integration were investigated. We used them directly to construct a numerical method that effectively solves variable order differential equations. The suggested algorithms can also be used for solving other classes of VOFDEs, distributed order FDEs and systems of VOFDEs.

Figure 2: For N = 5, Eq. (41): (a) comparison of the solutions y(x) and (b) comparison of the absolute error functions e_N(x).

