Contents
1 Introduction
3 Fundamental relations
3.1 Matrix relation for the differential part D(x)
3.2 Matrix relation for the integral part I(x)
3.3 Matrix relation for the integral part V(x)
3.4 Matrix relation for the conditions
5 Numerical examples
6 Fractional order Chelyshkov wavelets and function approximation
8 Conclusion
Abstract
In this study, a collocation method based on the Bessel polynomials is introduced for the approximate solutions of high-order linear Fredholm–Volterra integro-differential equations (FVIDEs) under mixed conditions. In addition, the method is presented with error and stability analysis. Numerical examples are included to demonstrate the validity and applicability of the technique, and comparisons are made with existing results.

Keywords: Fredholm–Volterra integro-differential equations; Bessel polynomials and series; Bessel collocation method; collocation points
1 Introduction
Mathematical modelling of real-life problems usually results in functional equations, e.g. partial differential equations, integral and integro-differential equations (IDEs), stochastic equations and others. In particular, IDEs arise in fluid dynamics, biological models and chemical kinetics. Since the analytical solutions of many IDEs cannot be found, numerical methods are required. Many numerical methods have been studied, such as the Tau method [?],[?],[?],[?],[?], the Legendre wavelets method [?], the finite difference method [?], the Haar functions method [?],[?], other numerical methods [?], the CAS wavelet method [?], the differential transform method [?], the homotopy perturbation method [?], the sine–cosine wavelets method [?],[?], the Chebyshev and Taylor collocation methods [?],[?], the hybrid function method [?], the Adomian decomposition method [?] and further numerical solution techniques [?],[?],[?],[?],[?],[?],[?],[?],[?]. Recently, Yüzbaşı et al. [?], Yüzbaşı and Sezer [?] and Yüzbaşı et al. [?] have worked on the Bessel matrix and collocation methods for the numerical solutions of neutral delay differential equations, pantograph equations and Lane–Emden differential equations. In this paper, by means of the Bessel collocation method [31] used for solving pantograph equations, we consider the approximate solution of the mth-order linear FVIDE
Σ_{k=0}^{m} β_k(x) y^{(k)}(x) = g(x) + λ_f ∫_a^b K_f(x, t) y(t) dt + λ_v ∫_a^x K_v(x, t) y(t) dt,  a ≤ x, t ≤ b,   (1)
where y^{(0)}(x) = y(x) is the unknown function, the known functions β_k(x), g(x), K_f(x, t) and K_v(x, t) are defined on the interval a ≤ x, t ≤ b, and a_{jk}, b_{jk}, λ_j, λ_f and λ_v are real or complex constants. Also, the functions K_f(x, t) and K_v(x, t) can be represented by their Maclaurin series. Our aim is to find an approximate solution of (1) expressed in the truncated Bessel series form

y(x) = Σ_{n=0}^{N} a_n J_n(x).   (3)
This norm is called the uniform or Chebyshev norm. Let P_n be the (n + 1)-dimensional subspace of C[a, b] spanned by the functions 1, x, x^2, ..., x^n; that is, P_n consists of all polynomials of degree at most n. Since P_n is a finite-dimensional subspace of C[a, b], it is closed and convex. Thus, for f ∈ C[a, b], there exists a unique p ∈ P_n such that

||f − p||_∞ ≤ ||f − q||_∞,  ∀q ∈ P_n.

Here, p is called the best approximation to f out of P_n. We denote the error of the best approximation to f from P_n by

E_n[f; [a, b]] = E_n(f) = ||f − p||_∞.
Definition 2.1.1 Let f(x) be defined on [a, b]. The modulus of continuity of f(x) on [a, b], w(δ), is defined for δ > 0 by

w(δ) = sup_{|x−y|<δ} |f(x) − f(y)|.
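For a concrete feel for w(δ), it can be estimated numerically by a grid search; the Python sketch below (the function, interval and δ are illustrative choices, not taken from the paper) approximates the supremum on a uniform sample:

```python
import numpy as np

def modulus_of_continuity(f, a, b, delta, samples=2000):
    """Estimate w(delta) = sup_{|x-y|<delta} |f(x)-f(y)| on [a, b] by grid search."""
    x = np.linspace(a, b, samples)
    fx = f(x)
    w = 0.0
    for i, xi in enumerate(x):
        # only compare against sample points within distance delta of x[i]
        near = np.abs(x - xi) < delta
        w = max(w, np.max(np.abs(fx[near] - fx[i])))
    return w

# For f(x) = x**2 on [0, 1], w(delta) = 2*delta - delta**2 (attained near x = 1)
print(modulus_of_continuity(lambda x: x**2, 0.0, 1.0, 0.1))
```

The grid estimate approaches the true value 2δ − δ² = 0.19 from below as the sample density grows.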
2.2 Polynomial interpolation
Let us consider n + 1 pairs (x_i, y_i). The problem is to find a polynomial p_m, called the interpolating polynomial, such that p_m(x_i) = y_i for i = 0, 1, ..., n. The points x_i are called interpolation nodes. If n ≠ m, the problem is over- or under-determined.
Theorem 2.2.1 Given n + 1 distinct nodes x_0, x_1, ..., x_n and n + 1 corresponding values y_0, y_1, ..., y_n, there exists a unique polynomial p_n ∈ P_n such that p_n(x_i) = y_i for i = 0, 1, ..., n. If we define

l_i ∈ P_n :  l_i(x) = Π_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j),  i = 0, 1, ..., n,
then l_i(x_j) = δ_ij. The polynomials l_i(x) are called the Lagrange characteristic polynomials. If y_i = f(x_i) for i = 0, 1, ..., n, f being a given function, the interpolating polynomial p_n(x) will be denoted by p_n f(x). Let us introduce a lower triangular matrix X of infinite size, called the interpolation matrix on [a, b], whose entries x_{ij}, for i, j = 0, 1, ..., represent points of [a, b], with the assumption that on each row the entries are all distinct. Thus, for any n ≥ 0, the (n + 1)-th row of X contains n + 1 distinct values that can be identified as nodes, so that, for a given function f, we can uniquely define an interpolating polynomial p_n f of degree n at those nodes. Having fixed the function f and the interpolation matrix X, let us define the interpolation error by

E_n f(x) = f(x) − p_n f(x).
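The characteristic polynomials l_i and the interpolant p_n can be coded directly from the definition; a minimal Python illustration (the nodes and data below are hypothetical):

```python
import numpy as np

def lagrange_basis(nodes, i, x):
    """Evaluate the i-th Lagrange characteristic polynomial l_i at x."""
    xi = nodes[i]
    val = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            val *= (x - xj) / (xi - xj)
    return val

def interpolate(nodes, values, x):
    """Evaluate p_n(x) = sum_i y_i * l_i(x)."""
    return sum(y * lagrange_basis(nodes, i, x) for i, y in enumerate(values))

nodes = np.array([0.0, 0.5, 1.0])
values = nodes**2            # f(x) = x^2 is reproduced exactly by p_2
print(interpolate(nodes, values, 0.3))   # ≈ 0.09, since p_2 = x^2 here
```

Since f(x) = x² lies in P_2, the interpolant reproduces it exactly, which also checks l_i(x_j) = δ_ij.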
Then we have the following comparison, involving the Lebesgue constant Λ_n(X), with E_n[f; [a, b]].

Theorem 2.2.2 Let f ∈ C[a, b] and X be an interpolation matrix on [a, b]. Then

||f − p_n f||_∞ ≤ (1 + Λ_n(X)) E_n(f),   (4)

where Λ_n(X) = max_{x∈[a,b]} Σ_{i=0}^{n} |l_i^{(n)}(x)| and l_i^{(n)} ∈ P_n is the i-th characteristic polynomial associated with the (n + 1)-th row of X. Moreover, by Jackson's theorem,

E_n(f) ≤ 6 w((b − a)/(2n)).   (5)

Corollary 2.2.3 Using the above notation, if f ∈ C[a, b], we get from (4) and (5)

||f − p_n f||_∞ ≤ 6 (1 + Λ_n(X)) w((b − a)/(2n)).
Theorem 2.2.4 Let x and the abscissas x_0, x_1, ..., x_n be contained in an interval [a, b] on which f and its first n derivatives are continuous, and let f^{(n+1)} exist in the open interval (a, b). Then there exists ξ ∈ (a, b), which depends on x, such that

f(x) − p_n(x) = (f^{(n+1)}(ξ)/(n + 1)!) Π_{i=0}^{n} (x − x_i).
Let p_N be the interpolating polynomial such that p_N and f are equal on the nodes a ≤ x_0 < x_1 < ... < x_N ≤ b. Since all f(x_i) values are unknown, we can use (1) to find the interpolation polynomial at the nodes x_0, x_1, ..., x_N without knowing the f(x_i) values. To do this, we put the interpolation polynomial p_N(x) = c_0 + c_1 x + ... + c_N x^N into (1). If p_N equals f on the nodes, then p_N satisfies (1) on the nodes. Hence, we obtain a system of linear equations in c_0, c_1, ..., c_N. Therefore, we can find the solution of (1) with some errors, namely the interpolation and computational errors.
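The recipe just described, substituting p_N(x) = c_0 + c_1 x + ... + c_N x^N and enforcing the equation at the nodes, can be illustrated on a simpler problem. The sketch below applies the same idea to the test ODE y' = y, y(0) = 1 (a stand-in for (1), not the FVIDE itself):

```python
import numpy as np

# Collocation sketch: find p_N(x) = c0 + c1 x + ... + cN x^N with
# p_N'(x_i) - p_N(x_i) = 0 at the interior nodes and p_N(0) = 1,
# i.e. a polynomial approximation of y' = y, y(0) = 1 (exact: e^x).
N = 8
nodes = np.linspace(0.0, 1.0, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for r, x in enumerate(nodes[1:], start=1):
    for k in range(N + 1):
        dpk = k * x**(k - 1) if k >= 1 else 0.0   # derivative of x^k
        A[r, k] = dpk - x**k                      # p' - p at the node
# replace the first row with the condition p_N(0) = 1
A[0, 0] = 1.0
b[0] = 1.0
c = np.linalg.solve(A, b)

pN = lambda x: sum(ck * x**k for k, ck in enumerate(c))
print(abs(pN(1.0) - np.e))   # small residual against the exact value e
```

The matrix rows are exactly "equation evaluated at a node"; replacing one row by the condition mirrors the augmented-matrix construction used later for (33).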
Definition 2.4.2 The Frobenius norm is the function ||·||_F defined for any matrix X = [χ_ij] by

||X||_F = sqrt( Σ_{i,j} |χ_ij|^2 ).
3 Fundamental relations
First, we can write J_n(x) in the matrix form

J(x) = X(x) D^T,

where

J(x) = [J_0(x)  J_1(x)  ···  J_N(x)]   and   X(x) = [1  x  x^2  ···  x^N].
Here D is the (N + 1) × (N + 1) coefficient matrix of the Bessel polynomials, whose only nonzero entries are

d_{n, n+2k} = (−1)^k / (k! (k + n)! 2^{2k+n}),   n = 0, 1, ..., N,  0 ≤ n + 2k ≤ N;

for example, d_{00} = 1/(0! 0! 2^0), d_{02} = −1/(1! 1! 2^2), d_{11} = 1/(0! 1! 2^1), d_{N−1,N−1} = 1/(0! (N − 1)! 2^{N−1}) and d_{NN} = 1/(0! N! 2^N). The same formula covers both odd and even N; for odd N, the last nonzero entry of the first row is (−1)^{(N−1)/2} / (((N−1)/2)! ((N−1)/2)! 2^{N−1}).
Let us write Eq. (1) in the form

D(x) = g(x) + λ_f I(x) + λ_v V(x),

where

D(x) = Σ_{k=0}^{m} β_k(x) y^{(k)}(x),   I(x) = ∫_a^b K_f(x, t) y(t) dt,   V(x) = ∫_a^x K_v(x, t) y(t) dt.

Now we convert the solution y(x) and its derivatives y^{(k)}(x), the parts D(x), I(x) and V(x), and the mixed conditions (2) to matrix forms.
In particular, the derivatives satisfy y^{(k)}(x) = X(x) (B^T)^k D^T A, since X'(x) = X(x) B^T with

B^T =
[ 0  1  0  ···  0 ]
[ 0  0  2  ···  0 ]
[ ⋮  ⋮  ⋮   ⋱  ⋮ ]
[ 0  0  0  ···  N ]
[ 0  0  0  ···  0 ].
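The action of B^T can be checked numerically: X'(x) = X(x) B^T holds because B^T carries x^k to k x^{k−1}. A small Python verification (the point x = 0.7 and size N = 4 are arbitrary):

```python
import numpy as np

N = 4
# B^T maps the power-basis row X(x) = [1, x, ..., x^N] to its derivative
BT = np.zeros((N + 1, N + 1))
for k in range(1, N + 1):
    BT[k - 1, k] = k          # d/dx x^k = k x^{k-1} sits on the superdiagonal

x = 0.7
X  = np.array([x**k for k in range(N + 1)])
dX = np.array([k * x**(k - 1) for k in range(N + 1)])
dX[0] = 0.0                   # derivative of the constant term
print(np.allclose(X @ BT, dX))   # → True
```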
From Eq. (12), we have the truncated Maclaurin expansion of the Fredholm kernel, whose coefficients are

k^f_{mn} = (1/(m! n!)) ∂^{m+n} K_f(0, 0) / (∂x^m ∂t^n),   m, n = 0, 1, ..., N.   (15)
The relations given in Eq. (16) can be put in the matrix forms

K_f(x, t) = X(x) K_f^t X^T(t) = J(x) K_f^b J^T(t)  ⇒  X(x) K_f^t X^T(t) = X(x) D^T K_f^b D X^T(t),

so that

K_f^t = D^T K_f^b D   or   K_f^b = (D^T)^{−1} K_f^t D^{−1}.

By substituting the matrix forms (10) and (18) into the integral part I(x) in (9), we have the matrix relation

[I(x)] = ∫_a^b J(x) K_f^b J^T(t) J(t) A dt = J(x) K_f^b Q_f A,   (19)
so that

Q_f = ∫_a^b J^T(t) J(t) dt = ∫_a^b D X^T(t) X(t) D^T dt = D H_f D^T,

where

H_f = ∫_a^b X^T(t) X(t) dt = [h^f_{ij}],   h^f_{ij} = (b^{i+j+1} − a^{i+j+1})/(i + j + 1),   i, j = 0, 1, ..., N.
By substituting the matrix relation (8) into expression (20), we have the matrix form with the Maclaurin coefficients of the Volterra kernel

k^v_{mn} = (1/(m! n!)) ∂^{m+n} K_v(0, 0) / (∂x^m ∂t^n),   m, n = 0, 1, ..., N.

The relations given in Eq. (22) can be written in the matrix forms

K_v(x, t) = X(x) K_v^t X^T(t) = J(x) K_v^b J^T(t)  ⇒  X(x) K_v^t X^T(t) = X(x) D^T K_v^b D X^T(t),   (24)

K_v^t = D^T K_v^b D   or   K_v^b = (D^T)^{−1} K_v^t D^{−1}.   (25)
By substituting the matrix forms (10) and (24) into the integral part V(x) in (9), we have the matrix relation

[V(x)] = ∫_a^x J(x) K_v^b J^T(t) J(t) A dt = J(x) K_v^b Q_v(x) A,

so that

Q_v(x) = ∫_a^x J^T(t) J(t) dt = ∫_a^x D X^T(t) X(t) D^T dt = D H_v(x) D^T,
where

H_v(x) = ∫_a^x X^T(t) X(t) dt = [h^v_{ij}(x)],   h^v_{ij}(x) = (x^{i+j+1} − a^{i+j+1})/(i + j + 1),   i, j = 0, 1, 2, ..., N.

By substituting the matrix relation (8) into expression (26), we have the matrix form

[V(x)] = X(x) M H(x) D^T A,   M = D^T K_v^b D.

The collocation points are defined by

x_i = a + ((b − a)/N) i,   i = 0, 1, ..., N.
Substituting the collocation points, the system of matrix equations is obtained as

Σ_{k=0}^{m} β_k(x_i) X(x_i) (B^T)^k D^T A = g(x_i) + λ_f X(x_i) D^T K_f^b Q_f A + λ_v X(x_i) M H(x_i) D^T A,   i = 0, 1, ..., N,

where

β_k = diag(β_k(x_0), β_k(x_1), ..., β_k(x_N)),   G = [g(x_0)  g(x_1)  ···  g(x_N)]^T,

X = [X(x_0); X(x_1); ···; X(x_N)] =
[ 1  x_0  x_0^2  ···  x_0^N ]
[ 1  x_1  x_1^2  ···  x_1^N ]
[ ⋮   ⋮    ⋮     ⋱    ⋮   ]
[ 1  x_N  x_N^2  ···  x_N^N ],
and the remaining matrices are the block matrices

X = diag(X(x_0), X(x_1), ..., X(x_N)),   M = diag(M, M, ..., M),   H = diag(H(x_0), H(x_1), ..., H(x_N)),

D = [D^T; D^T; ···; D^T]   and   A = [a_0  a_1  ···  a_N]^T,

where each diagonal block is of size (N + 1) × (N + 1).
The matrix form (28) for the conditions can be written as

U_j A = [λ_j]   or   [U_j ; λ_j],   j = 0, 1, 2, ..., m − 1,

where

U_j = Σ_{k=0}^{m−1} [a_{jk} X(a) + b_{jk} X(b)] (B^T)^k D^T = [U_{j0}  U_{j1}  U_{j2}  ···  U_{jN}],   j = 0, 1, 2, ..., m − 1.

The fundamental matrix equation can then be written compactly as

W A = G.
For simplicity, if the last m rows of the matrix (31) are replaced, the new augmented matrix of the above system is as follows [29-31,35,36]:

[W; G] =
[ w_{00}      w_{01}      w_{02}      ···  w_{0N}      ;  g(x_0)    ]
[ w_{10}      w_{11}      w_{12}      ···  w_{1N}      ;  g(x_1)    ]
[ w_{20}      w_{21}      w_{22}      ···  w_{2N}      ;  g(x_2)    ]
[   ⋮           ⋮           ⋮          ⋱     ⋮              ⋮       ]
[ w_{N−m,0}   w_{N−m,1}   w_{N−m,2}   ···  w_{N−m,N}   ;  g(x_{N−m}) ]
[ u_{00}      u_{01}      u_{02}      ···  u_{0N}      ;  λ_0       ]
[ u_{10}      u_{11}      u_{12}      ···  u_{1N}      ;  λ_1       ]
[ u_{20}      u_{21}      u_{22}      ···  u_{2N}      ;  λ_2       ]
[   ⋮           ⋮           ⋮          ⋱     ⋮              ⋮       ]
[ u_{m−1,0}   u_{m−1,1}   u_{m−1,2}   ···  u_{m−1,N}   ;  λ_{m−1}   ].
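The row-replacement step, dropping the last m collocation rows in favour of the condition rows [U_j; λ_j], is mechanical; a Python sketch with made-up matrices (W, G, U and λ below are hypothetical placeholders, not values from the paper):

```python
import numpy as np

def augment_with_conditions(W, G, U, lam):
    """Replace the last m rows of the system W A = G by the condition
    rows U A = lam, as in forming the new augmented matrix [W; G]."""
    m = U.shape[0]
    W_new = W.copy()
    G_new = G.copy()
    W_new[-m:, :] = U
    G_new[-m:] = lam
    return W_new, G_new

# tiny illustration with made-up numbers
W = np.arange(16.0).reshape(4, 4)
G = np.ones(4)
U = np.array([[1.0, 0.0, 0.0, 0.0]])   # e.g. a single condition row
lam = np.array([2.0])
Wn, Gn = augment_with_conditions(W, G, U, lam)
print(Wn[-1], Gn[-1])   # → [1. 0. 0. 0.] 2.0
```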
Theorem 4.1 Let P be a nonsingular matrix. If ||δP|| < 1/||P^{−1}||, then P + δP is nonsingular.

Theorem 4.2 Let P be a nonsingular matrix, let b ≠ 0 and let x and x̃ = x + δx be the solutions of P x = b and (P + δP) x̃ = b, respectively. Then δx = −P^{−1} δP x̃, so that ||δx|| ≤ ||P^{−1}|| ||δP|| ||x̃||.
Theorem 4.3 Let ||·|| be a consistent matrix norm on C^{n×n}. For any matrix P of order n, if ||P|| < 1, then I − P is nonsingular,

||(I − P)^{−1} X|| ≤ ||X|| / (1 − ||P||),

and

||(I − P)^{−1}|| ≤ 1 / (1 − ||P||).
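The bound of Theorem 4.3 can be observed numerically; the sketch below draws a random P with ||P||_∞ < 1 and checks ||(I − P)^{−1}||_∞ ≤ 1/(1 − ||P||_∞) (the size and scaling are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
P *= 0.4 / np.linalg.norm(P, np.inf)      # rescale so that ||P||_inf = 0.4 < 1
inv = np.linalg.inv(np.eye(5) - P)
lhs = np.linalg.norm(inv, np.inf)
rhs = 1.0 / (1.0 - np.linalg.norm(P, np.inf))
print(lhs <= rhs + 1e-9)   # → True
```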
The following theorem has been proved in [39] for the Bernstein series solutions p_N(x) of the pantograph equations. A similar theorem for the Bessel series case has been proved in [31]. Now, we modify the theorem given in [31] for the linear Fredholm–Volterra integro-differential equations.
Theorem 4.4 Let p_N(x) be the Bessel series solution of the FVIDE actually computed and let y = f(x) be the exact solution. Let the coefficient matrix of (33) be W̃_1 = W̃ + δW, where δW represents the computational error, and let X(x) and D be the matrices defined in (8). If ||δW||_∞ ||W̃_1^{−1}||_∞ < 1, then

||y − p_N||_∞ ≤ 6 (1 + Λ_N(X)) w((b − a)/(2N)) + (s ||W̃_1^{−1}||_∞ ||Ã||_∞ ||D||_∞ ||X^T(b)||_∞) / (1 − s ||W̃_1^{−1}||_∞),

where s is an upper bound for ||δW||_∞.
Proof of Theorem 4.4. By the triangle inequality and Corollary 2.2.3,

||y − p_N||_∞ = ||y − p_N f + p_N f − p_N||_∞ ≤ ||y − p_N f||_∞ + ||p_N f − p_N||_∞
            ≤ 6 (1 + Λ_N(X)) w((b − a)/(2N)) + ||p_N f − p_N||_∞.

For the second term,

||p_N f − p_N||_∞ = ||Σ_{n=0}^{N} a_n J_n − Σ_{n=0}^{N} ã_n J_n||_∞ = ||Σ_{n=0}^{N} (a_n − ã_n) J_n||_∞
                 ≤ ||[a_0 − ã_0  a_1 − ã_1  ···  a_N − ã_N]||_∞ ||[J]^T||_∞.

Since the computed coefficient vector satisfies (W̃_1 − δW) Ã = G̃, applying Theorem 4.3 gives

||p_N f − p_N||_∞ ≤ ||(W̃_1 − δW)^{−1}||_∞ ||δW||_∞ ||Ã||_∞ ||D||_∞ ||X^T(b)||_∞
                 ≤ (||W̃_1^{−1}||_∞ / (1 − ||(δW) W̃_1^{−1}||_∞)) ||δW||_∞ ||Ã||_∞ ||D||_∞ ||X^T(b)||_∞
                 ≤ (s ||W̃_1^{−1}||_∞ ||Ã||_∞ ||D||_∞ ||X^T(b)||_∞) / (1 − s ||W̃_1^{−1}||_∞),

provided that ||δW||_∞ ||W̃_1^{−1}||_∞ < 1, where s is the highest value of ||δW||_∞. Pointwise, by Theorem 2.2.4,

|y − p_N(x)| = |y − p_N f(x) + p_N f(x) − p_N(x)| ≤ |y − p_N f(x)| + |p_N f(x) − p_N(x)|
            ≤ (1/(N + 1)!) |Π_{i=0}^{N} (x − x_i)| |f^{(N+1)}(ξ)| + |p_N f(x) − p_N(x)|,

and, by the Cauchy–Schwarz inequality,

|p_N f(x) − p_N(x)| = |Σ_{n=0}^{N} (a_n − ã_n) J_n(x)| ≤ ||[a_0 − ã_0  a_1 − ã_1  ···  a_N − ã_N]||_F ||[J_0(x)  J_1(x)  ···  J_N(x)]^T||_F.

If f ∈ C^∞[a, b] or m > N, then, from Theorems 2.2.4, 4.1 and 4.2, inequality (34) and the Cauchy–Schwarz inequality, we obtain

|y − p_N(x)| ≤ (1/(N + 1)!) |Π_{i=0}^{N} (x − x_i)| |f^{(N+1)}(ξ)| + (s ||W̃_1^{−1}||_∞ ||Ã||_∞ ||D||_∞ ||X^T(b)||_∞) / (1 − s ||W̃_1^{−1}||_∞).   (42)
To complete the proof, we need to find s, the highest value of ||δW||. To do this, we use the following lemma.

Lemma 4.5 If X_j + ΔX_j ∈ R^{n×n} satisfies ||ΔX_j|| ≤ σ_j ||X_j|| for all j for a consistent norm, then

||Π_j (X_j + ΔX_j) − Π_j X_j|| ≤ (Π_j (1 + σ_j) − 1) Π_j ||X_j||.

Using Lemma 4.5, we find the upper bound of ||δW||, where θ ≤ u, u is the unit roundoff and r is the number of terms summed in W. This completes the proof.

For equally spaced nodes,

Λ_N(X) ≈ 2^{N+1} / (e N log N),

as proved in [41]. This shows that, for large N and equally spaced nodes, the polynomial interpolation method can become unstable. On the other hand, by (36), the difference between the interpolating polynomial and the Bessel series solution grows as N increases. As a result, the Bessel series solution can become unstable for large N.

Let E_N(x) be the error function obtained by putting the Bessel series solution into (1); that is, E_N : [a, b] → R is defined by

E_N(x) = Σ_{k=0}^{m} β_k(x) p_N^{(k)}(x) − g(x) − λ_f I(x) − λ_v V(x).   (43)

If p_N(x) is the Bessel series solution, and thus the solution of (33), the computational errors are minimal when E_N(x) vanishes at the nodes x_0, x_1, ..., x_N. Therefore, E_N can be used for estimating the computational errors on the nodes.
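The growth of Λ_N(X) for equally spaced nodes, which drives the instability discussed above, can be computed directly from the definition Λ_N = max_x Σ_i |l_i(x)|; a Python sketch (the interval [−1, 1] and sample density are illustrative choices):

```python
import numpy as np

def lebesgue_constant(nodes, samples=5000):
    """Lambda_N(X) = max_x sum_i |l_i(x)| for the given nodes, by sampling."""
    x = np.linspace(nodes[0], nodes[-1], samples)
    total = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += np.abs(li)
    return total.max()

for N in (5, 10, 15):
    nodes = np.linspace(-1.0, 1.0, N + 1)
    # compare the sampled constant with the asymptotic estimate 2^{N+1}/(e N log N)
    print(N, lebesgue_constant(nodes), 2**(N + 1) / (np.e * N * np.log(N)))
```

The printed values grow rapidly with N, in rough agreement with the asymptotic estimate, which is the instability the text describes.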
5 Numerical examples
In this section, several numerical examples are given to illustrate the properties of the method, and it is also shown that the absolute error and its upper bound are consistent. In this regard, we report in tables and figures the values of the exact solution y(x), the polynomial approximate solution y_N(x) and the absolute error function e_N(x) = |y(x) − y_N(x)| at selected points of the given interval. All the numerical computations have been done using Matlab.
Example 1
Let us now consider the linear second order FVIDE given by

y''(x) + x y'(x) − x y(x) = e^x − sin(x) + x cos(x) + (1/2) ∫_0^1 sin(x) e^{−1} y(t) dt − (1/2) ∫_0^x x cos(x) e^{−t} y(t) dt,  0 ≤ x, t ≤ 1,   (44)

with the initial conditions

y(0) = 1   and   y'(0) = 1,

whose exact solution is y(x) = e^x. Now, let us find the approximate solution given by the truncated Bessel series

y(x) = p_3(x) = Σ_{n=0}^{3} a_n J_n(x),

and from Eq. (30) the fundamental matrix equation of the given FVIDE is written down together with its augmented matrix. The matrix forms for the initial conditions from Eq. (32) are

U_j A = [λ_j]   or   [U_j ; λ_j],   j = 0, 1,

or, explicitly,

[U_0 ; λ_0] = [1  0  0  0 ; 1]

and

[U_1 ; λ_1] = [0  1/2  0  0 ; 1].

From system (33), the new augmented matrix based on the conditions is computed. By solving this system, the Bessel coefficient matrix is obtained as

A = [1  2  6  4587/299]^T.

Hence, the approximate solution of the problem for N = 3 is found.
Example 2
Consider the linear Fredholm integro-differential equation given by

y'(x) = x e^x + e^x − x + ∫_0^1 x y(t) dt,  0 ≤ x, t ≤ 1,   (45)

with the initial condition y(0) = 0 and the exact solution y(x) = x e^x, so that m = 1, β_1 = 1, λ_f = 1, λ_v = 0, g(x) = x e^x + e^x − x and K_f(x, t) = x. From Eq. (30), the fundamental matrix equation of the problem is formed. Thus, following the method given in Section 3, we obtain the approximate solutions by the Bessel polynomials of the problem for N = 5, 7, 10, respectively. Since ||δW||_F ||W̃_1^{−1}||_F < 1.0133 × 10^{−9}, from (35), the upper bound of the absolute error for N = 7 and digits = 25 is computed as 1.0198 × 10^{−5}. Similarly, the upper bound of the absolute error for N = 10 and digits = 25 is found as 3.7415 × 10^{−5}, and the upper bound of the absolute error for N = 10 and digits = 35 is calculated as 3.3988 × 10^{−8}.
Figure 1: Absolute error functions for Example 2 obtained by the present method (N = 5, 7, 10), the CAS wavelet method, the differential transformation and the HPM.

In addition, the numerical results of the absolute error functions obtained by the present method for N = 5, 7 and 10, the CAS wavelet method [12], the differential transformation [13] and the HPM [14] are compared in Table 1. Fig. 1 shows the absolute error functions obtained by the present method for N = 5, 7 and 10, the CAS wavelet method, the differential transformation and the HPM. It is seen from Table 1 and Fig. 1 that the results obtained by the present method are better than those obtained by the other methods.
Example 3
Suppose the following linear FVIDE with variable coefficients:

y^{(5)}(x) − x y^{(2)}(x) + x y(x) = −e^{−x} − (1/2) e^{2x} − x^2 + (1/2) ∫_0^1 e^{2x+t} y(t) dt + ∫_0^x x e^t y(t) dt,  0 ≤ x, t ≤ 1,   (46)

with the initial conditions

y(0) = 1,  y'(0) = −1,  y''(0) = 1,  y'''(0) = −1,  y^{(4)}(0) = 1.   (47)
Table 2: Comparison of |E_10(x)| and the upper bound of p_10(x)

x_i    |E_10(x_i)|
0      4.50 × 10^{−20}
0.2    6.719 × 10^{−4}
0.4    2.153 × 10^{−4}
0.6    6.755 × 10^{−3}
0.8    1.96 × 10^{−2}
1      1.45 × 10^{−2}
Example 4
Finally, let us consider the linear Volterra integro-differential equation

y'(x) = 1 − ∫_0^x y(t) dt,  0 ≤ x, t ≤ 1,   (48)

with the initial condition y(0) = 0 and the exact solution y(x) = sin(x). Here m = 1, N = 5, β_1 = 1, λ_f = 0, λ_v = −1, g(x) = 1 and K_v(x, t) = 1. We write the fundamental matrix equation of Example 4 from Eq. (30); thus, by using the procedure given in Section 3, we find the approximate solution by the Bessel polynomials for N = 5. The numerical results of the absolute error function and the approximate solution obtained by the present method are compared with the numerical results of the Taylor method given in [17] for N = 5 in Table 3. Fig. 2(a) displays the exact solution and the approximate solutions obtained by the present method and the Taylor method for N = 5, and in Fig. 2(b) we compare the absolute error functions obtained by the present method and the Taylor method for N = 5. It is seen from Fig. 2 and Table 3 that the results obtained by the present method are superior to those obtained by the Taylor method.
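Example 4 is simple enough to reproduce the collocation recipe end to end. The sketch below uses a plain power basis instead of the Bessel basis (an illustrative simplification of the method, not the paper's Matlab code); the exact solution sin x is recovered to high accuracy:

```python
import numpy as np

# Collocation sketch for Example 4: y'(x) = 1 - ∫_0^x y(t) dt, y(0) = 0,
# with p(x) = sum_k c_k x^k in place of the Bessel basis.
N = 7
nodes = np.linspace(0.0, 1.0, N + 1)

A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
for r, x in enumerate(nodes[1:], start=1):
    for k in range(N + 1):
        dpk = k * x**(k - 1) if k >= 1 else 0.0   # (x^k)'
        ipk = x**(k + 1) / (k + 1)                # ∫_0^x t^k dt
        A[r, k] = dpk + ipk                       # p' + ∫ p at the node
    b[r] = 1.0                                    # right-hand side g(x) = 1
A[0, 0] = 1.0                                     # condition p(0) = 0 (b[0] = 0)
c = np.linalg.solve(A, b)

pN = lambda x: sum(ck * x**k for k, ck in enumerate(c))
err = max(abs(pN(x) - np.sin(x)) for x in np.linspace(0, 1, 50))
print(err)   # small maximum absolute error on [0, 1]
```

The first row enforces the initial condition, mirroring the augmented-matrix construction of system (33).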
6 Fractional order Chelyshkov wavelets and function approximation

Definition 5.0.1 The Riemann–Liouville fractional integral of variable order β(x) is defined as

I^{β(x)} u(x) = (1/Γ(β(x))) ∫_0^x u(t) (x − t)^{β(x)−1} dt,  x ≥ 0.   (49)

Definition 5.0.2 Caputo's derivative of order β(x) is defined as

D^{β(x)} u(x) = I^{η−β(x)} (d^η/dx^η) u(x) = (1/Γ(η − β(x))) ∫_0^x u^{(η)}(t) (x − t)^{η−β(x)−1} dt,  η − 1 < β(x) ≤ η,  η ∈ N.   (50)
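Definition (49) can also be evaluated numerically; the midpoint rule below avoids the integrable singularity at t = x (the choices of u, β and the quadrature rule are illustrative, not from the paper):

```python
import math
import numpy as np

def vo_rl_integral(u, beta, x, n=200000):
    """Variable-order Riemann-Liouville integral
    I^{beta(x)} u(x) = 1/Gamma(beta(x)) * ∫_0^x u(t) (x-t)^{beta(x)-1} dt,
    evaluated with the midpoint rule (midpoints avoid the endpoint t = x)."""
    bx = beta(x)
    t = (np.arange(n) + 0.5) * (x / n)
    integrand = u(t) * (x - t)**(bx - 1)
    return integrand.sum() * (x / n) / math.gamma(bx)

# check against the classical case beta = 1: I^1 u(x) = ∫_0^x u(t) dt
val = vo_rl_integral(lambda t: t**2, lambda x: 1.0, 1.0)
print(val)   # ≈ 1/3
```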
8 Conclusion
In this paper, a new form of wavelets was introduced, and exact formulas for its Riemann–Liouville integral and Caputo derivative were derived. We used them directly to construct a numerical method that solves variable order differential equations effectively. The suggested algorithms can also be used for solving other classes of VOFDEs, distributed order FDEs and systems of VOFDEs.
Figure 2: For N = 5: (a) comparison of the solutions y(x) and (b) comparison of the absolute error functions e_N(x).