
S. Ghorai
Lecture XVI
Sturm comparison theorem, Orthogonality of Bessel functions
1 Normal form of second order homogeneous linear ODE
Consider a second order linear ODE in the standard form
$$ y'' + p(x)y' + q(x)y = 0. \qquad (1) $$
By a change of dependent variable, (1) can be written as
$$ u'' + Q(x)u = 0, \qquad (2) $$
which is called the normal form of (1).
To find the transformation, let us put y(x) = u(x)v(x). When this is substituted in (1), we get
$$ v u'' + (2v' + pv)u' + (v'' + pv' + qv)u = 0. $$
Now we set the coefficient of $u'$ to zero. This gives
$$ 2v' + pv = 0 \;\Rightarrow\; v = e^{-\int p/2\, dx}. $$
Now the coefficient of $u$ becomes
$$ \left( q(x) - \frac{1}{4}p^2 - \frac{1}{2}p' \right) v = Q(x)v. $$
Since $v$ is nonzero, cancelling $v$ we get the required normal form (2) with $Q(x) = q(x) - p^2(x)/4 - p'(x)/2$. Also, since $v$ never vanishes, $u$ vanishes if and only if $y$ vanishes. Thus, the above transformation has no effect on the zeros of the solution.
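As a quick sanity check on this computation (an added illustration, not part of the original notes), the following sympy sketch substitutes y = u(x)v(x) into (1) and reads off the coefficients of u' and u symbolically; the names u, v, p, q are generic symbolic placeholders.

    import sympy as sp

    x = sp.symbols('x')
    u, v, p, q = (sp.Function(s) for s in ('u', 'v', 'p', 'q'))

    # substitute y = u*v into y'' + p y' + q y and expand
    y = u(x) * v(x)
    expr = sp.expand(sp.diff(y, x, 2) + p(x) * sp.diff(y, x) + q(x) * y)

    # coefficient of u': 2 v' + p v  (set to zero to determine v)
    print(expr.coeff(u(x).diff(x)))
    # coefficient of u: v'' + p v' + q v  (equals Q(x) v once 2v' + pv = 0)
    print(expr.coeff(u(x)))

Imposing 2v' + pv = 0 in the printed u-coefficient reproduces Q = q - p^2/4 - p'/2, as above.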
Example 1. Consider the Bessel equation of order $\nu \ge 0$:
$$ x^2 y'' + x y' + (x^2 - \nu^2) y = 0, \qquad x > 0. $$
Solution: Dividing by $x^2$ gives the standard form with $p(x) = 1/x$ and $q(x) = 1 - \nu^2/x^2$, so
$$ v = e^{-\int \frac{1}{2x}\, dx} = \frac{1}{\sqrt{x}}. $$
Now
$$ Q(x) = 1 - \frac{\nu^2}{x^2} - \frac{1}{4x^2} + \frac{1}{2x^2} = 1 + \frac{1/4 - \nu^2}{x^2}. $$
Thus, the Bessel equation in normal form becomes
$$ u'' + \left( 1 + \frac{1/4 - \nu^2}{x^2} \right) u = 0, \qquad (3) $$
where $u = \sqrt{x}\, y$.
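As a numerical illustration (added here, not in the original notes), one can check that $u(x) = \sqrt{x}\, J_\nu(x)$ satisfies (3) by evaluating the residual on a grid with finite differences; the order $\nu = 1.5$ and the grid are arbitrary choices.

    import numpy as np
    from scipy.special import jv

    nu = 1.5                               # arbitrary order for the check
    x = np.linspace(1.0, 20.0, 4001)
    u = np.sqrt(x) * jv(nu, x)             # u = sqrt(x) * J_nu(x)

    # second derivative by central differences
    h = x[1] - x[0]
    upp = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2

    # residual of u'' + (1 + (1/4 - nu^2)/x^2) u
    res = upp + (1.0 + (0.25 - nu**2) / x[1:-1]**2) * u[1:-1]
    print(np.max(np.abs(res)))             # small, at the finite-difference error level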
Theorem 1. (Sturm comparison theorem) Let $\varphi$ and $\psi$ be nontrivial solutions of
$$ y'' + p(x)y = 0, \qquad x \in I, $$
and
$$ y'' + q(x)y = 0, \qquad x \in I, $$
respectively, where $p$ and $q$ are continuous and $p \le q$ on $I$. Then between any two consecutive zeros $x_1$ and $x_2$ of $\varphi$, there exists at least one zero of $\psi$ unless $p \equiv q$ on $(x_1, x_2)$.
Proof: Consider two consecutive zeros $x_1$ and $x_2$ of $\varphi$ with $x_1 < x_2$. WLOG, assume that $\varphi > 0$ in $(x_1, x_2)$. Then $\varphi'(x_1) > 0$ and $\varphi'(x_2) < 0$. Further, suppose on the contrary that $\psi$ has no zero on $(x_1, x_2)$, and assume that $\psi > 0$ in $(x_1, x_2)$. Since $\varphi$ and $\psi$ are solutions of the above equations, we must have
$$ \varphi'' + p(x)\varphi = 0, \qquad \psi'' + q(x)\psi = 0. $$
Now multiply the first of these by $\psi$ and the second by $\varphi$; subtracting, we find
$$ \frac{dW}{dx} = (q - p)\varphi\psi, $$
where $W = \varphi'\psi - \varphi\psi'$ is the Wronskian of $\psi$ and $\varphi$. Integrating between $x_1$ and $x_2$, we find
$$ W(x_2) - W(x_1) = \int_{x_1}^{x_2} (q - p)\varphi\psi\, dx. $$
Now $W(x_2) = \varphi'(x_2)\psi(x_2) \le 0$ and $W(x_1) = \varphi'(x_1)\psi(x_1) \ge 0$. Hence, the left hand side $W(x_2) - W(x_1) \le 0$. On the other hand, the right hand side is strictly greater than zero unless $p \equiv q$ on $(x_1, x_2)$. This contradiction proves that between any two consecutive zeros $x_1$ and $x_2$ of $\varphi$, there exists at least one zero of $\psi$ unless $p \equiv q$ on $(x_1, x_2)$.
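For a concrete illustration (added here, not from the original notes): take $p \equiv 1$ and $q \equiv 4$, so that $\varphi(x) = \sin x$ solves $y'' + y = 0$ and $\psi(x) = \sin 2x$ solves $y'' + 4y = 0$. Between the consecutive zeros $0$ and $\pi$ of $\varphi$, the solution $\psi$ vanishes at $\pi/2$, exactly as the theorem requires since $p \le q$.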
Proposition 1. The Bessel function of the first kind $J_\nu$ $(\nu \ge 0)$ has infinitely many positive zeros.
Proof: The positive zeros of $J_\nu$ are the same as those of the nontrivial function $u = \sqrt{x}\, J_\nu(x)$, which satisfies (3), i.e.
$$ u'' + \left( 1 + \frac{1/4 - \nu^2}{x^2} \right) u = 0. \qquad (4) $$
Now for large enough $x$, say $x > x_0$, we have
$$ 1 + \frac{1/4 - \nu^2}{x^2} > \frac{1}{4}, \qquad x \in (x_0, \infty). \qquad (5) $$
Now compare (4) with
$$ v'' + \frac{1}{4} v = 0. \qquad (6) $$
Due to (5) and Theorem 1, between any two zeros of a nontrivial solution of (6) in $(x_0, \infty)$, there exists at least one zero of any nontrivial solution of (4). We know that $v = \sin(x/2)$ is a nontrivial solution of (6), which has an infinite number of zeros in $(x_0, \infty)$. Hence, any nontrivial solution of (4) has an infinite number of zeros in $(x_0, \infty)$. Thus, $J_\nu$ has an infinite number of zeros in $(x_0, \infty)$, i.e. $J_\nu$ has infinitely many positive zeros. We label the positive zeros of $J_\nu$ by $\lambda_n$, so that $J_\nu(\lambda_n) = 0$ for $n = 1, 2, 3, \ldots$
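Numerically, these zeros are readily available; for integer order, scipy tabulates them directly (an added illustration, not part of the proof):

    from scipy.special import jn_zeros, jv

    lam = jn_zeros(0, 5)        # first five positive zeros of J_0
    print(lam)                  # approx 2.4048, 5.5201, 8.6537, 11.7915, 14.9309
    print(jv(0, lam))           # J_0 at these points: zero to machine precision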
2 Orthogonality of Bessel functions $J_\nu$
Proposition 2. (Orthogonality) The Bessel functions $J_\nu$ $(\nu \ge 0)$ satisfy
$$ \int_0^1 x J_\nu(\lambda_m x) J_\nu(\lambda_n x)\, dx = \frac{1}{2} \left[ J_{\nu+1}(\lambda_n) \right]^2 \delta_{mn}, \qquad (7) $$
where $\lambda_i$ are the positive zeros of $J_\nu$, and $\delta_{mn} = 0$ for $m \ne n$ and $\delta_{mn} = 1$ for $m = n$.
Proof: We know that $J_\nu(x)$ satisfies
$$ y'' + \frac{1}{x} y' + \left( 1 - \frac{\nu^2}{x^2} \right) y = 0. $$
If $u = J_\nu(\lambda x)$ and $v = J_\nu(\mu x)$, then $u$ and $v$ satisfy
$$ u'' + \frac{1}{x} u' + \left( \lambda^2 - \frac{\nu^2}{x^2} \right) u = 0, \qquad (8) $$
and
$$ v'' + \frac{1}{x} v' + \left( \mu^2 - \frac{\nu^2}{x^2} \right) v = 0. \qquad (9) $$
Multiplying (8) by $v$ and (9) by $u$ and subtracting, we find
$$ \frac{d}{dx} \left[ x \left( u'v - uv' \right) \right] = \left( \mu^2 - \lambda^2 \right) x u v. \qquad (10) $$
Integrating from $x = 0$ to $x = 1$, we find
$$ \left( \mu^2 - \lambda^2 \right) \int_0^1 x u v\, dx = u'(1) v(1) - u(1) v'(1). $$
Now $u(1) = J_\nu(\lambda)$ and $v(1) = J_\nu(\mu)$. Let us choose $\lambda = \lambda_m$ and $\mu = \lambda_n$, where $\lambda_m$ and $\lambda_n$ are positive zeros of $J_\nu$. Then $u(1) = v(1) = 0$ and thus we find
$$ \left( \lambda_n^2 - \lambda_m^2 \right) \int_0^1 x J_\nu(\lambda_m x) J_\nu(\lambda_n x)\, dx = 0. $$
If $n \ne m$, then
$$ \int_0^1 x J_\nu(\lambda_m x) J_\nu(\lambda_n x)\, dx = 0. $$
Now from (10), we find [since $u'(x) = \lambda J_\nu'(\lambda x)$, etc.]
$$ \frac{d}{dx} \left[ x \left( \lambda J_\nu'(\lambda x) J_\nu(\mu x) - \mu J_\nu(\lambda x) J_\nu'(\mu x) \right) \right] = \left( \mu^2 - \lambda^2 \right) x J_\nu(\lambda x) J_\nu(\mu x). $$
We differentiate this with respect to $\mu$ and then put $\mu = \lambda$. This leads to
$$ 2 \lambda\, x J_\nu^2(\lambda x) = \frac{d}{dx} \left[ x \left( \lambda x \left( J_\nu'(\lambda x) \right)^2 - J_\nu(\lambda x) J_\nu'(\lambda x) - \lambda x J_\nu(\lambda x) J_\nu''(\lambda x) \right) \right]. $$
Integrating between $x = 0$ and $x = 1$, we find
$$ 2 \lambda \int_0^1 x J_\nu^2(\lambda x)\, dx = \lambda \left( J_\nu'(\lambda) \right)^2 - J_\nu(\lambda) J_\nu'(\lambda) - \lambda J_\nu(\lambda) J_\nu''(\lambda), $$
or
$$ \int_0^1 x J_\nu^2(\lambda x)\, dx = \frac{1}{2} \left( J_\nu'(\lambda) \right)^2 - \frac{J_\nu(\lambda)}{2\lambda} \left( J_\nu'(\lambda) + \lambda J_\nu''(\lambda) \right). $$
[This last relation can be written as (NOT needed for the proof!)
$$ \int_0^1 x J_\nu^2(\lambda x)\, dx = \frac{1}{2} \left( J_\nu'(\lambda) \right)^2 + \frac{1}{2} \left( 1 - \frac{\nu^2}{\lambda^2} \right) J_\nu^2(\lambda), $$
which follows on eliminating $J_\nu''(\lambda)$ using the Bessel equation evaluated at $x = \lambda$.]
Now if we take $\lambda = \lambda_n$, where $\lambda_n$ is a positive zero of $J_\nu$, then we find
$$ \int_0^1 x J_\nu^2(\lambda_n x)\, dx = \frac{1}{2} \left[ J_\nu'(\lambda_n) \right]^2. $$
Now, since
$$ \left[ x^{-\nu} J_\nu(x) \right]' = -x^{-\nu} J_{\nu+1}(x) \;\Rightarrow\; J_\nu'(x) - \frac{\nu}{x} J_\nu(x) = -J_{\nu+1}(x), $$
we find by substituting $x = \lambda_n$ that
$$ J_\nu'(\lambda_n) = -J_{\nu+1}(\lambda_n). $$
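A quick numerical check of this identity (an added illustration), again using scipy for integer order:

    from scipy.special import jn_zeros, jvp, jv

    nu = 2
    lam = jn_zeros(nu, 4)                      # first four positive zeros of J_nu
    print(jvp(nu, lam, 1) + jv(nu + 1, lam))   # J_nu'(lam_n) + J_{nu+1}(lam_n): ~ 0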
Thus, finally we get
$$ \int_0^1 x J_\nu^2(\lambda_n x)\, dx = \frac{1}{2} J_{\nu+1}^2(\lambda_n). $$
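Relation (7) can now be verified numerically, for instance with scipy (an added illustration; the order $\nu = 0$ and the indices used are arbitrary choices):

    import numpy as np
    from scipy.special import jn_zeros, jv
    from scipy.integrate import quad

    nu = 0
    lam = jn_zeros(nu, 3)                    # lambda_1, lambda_2, lambda_3

    def inner(m, n):
        # int_0^1 x J_nu(lam_m x) J_nu(lam_n x) dx
        f = lambda x: x * jv(nu, lam[m] * x) * jv(nu, lam[n] * x)
        return quad(f, 0.0, 1.0)[0]

    print(inner(0, 1), inner(1, 2))                   # ~ 0 for m != n
    print(inner(1, 1), 0.5 * jv(nu + 1, lam[1])**2)   # equal, as in (7)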
Theorem 2. (Fourier-Bessel series) Suppose a function $f$ is defined in the interval $0 \le x \le 1$ and that it has a Fourier-Bessel series expansion
$$ f(x) \sim \sum_{n=1}^{\infty} c_n J_\nu(\lambda_n x), $$
where $\lambda_n$ are the positive zeros of $J_\nu$. Using orthogonality, we find
$$ c_n = \frac{2}{J_{\nu+1}^2(\lambda_n)} \int_0^1 x f(x) J_\nu(\lambda_n x)\, dx. $$
Suppose that $f$ and $f'$ are piecewise continuous on the interval $0 \le x \le 1$. Then for $0 < x < 1$,
$$ \sum_{n=1}^{\infty} c_n J_\nu(\lambda_n x) = \begin{cases} f(x), & \text{where } f \text{ is continuous}, \\[1mm] \dfrac{f(x-) + f(x+)}{2}, & \text{where } f \text{ is discontinuous}. \end{cases} $$
At $x = 0$, the series converges to zero for $\nu > 0$ and to $f(0+)$ for $\nu = 0$. On the other hand, it converges to zero at $x = 1$.
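As a concrete (added) illustration of the coefficient formula, the sketch below expands the sample function $f(x) = 1$ in a Fourier-Bessel series with $\nu = 0$ and evaluates a partial sum at an interior point; the number of terms and the evaluation point are arbitrary choices.

    import numpy as np
    from scipy.special import jn_zeros, jv
    from scipy.integrate import quad

    nu, N = 0, 20
    lam = jn_zeros(nu, N)                      # first N positive zeros of J_nu

    def f(x):                                  # sample function on [0, 1]
        return 1.0

    # c_n = 2 / J_{nu+1}(lam_n)^2 * int_0^1 x f(x) J_nu(lam_n x) dx
    c = np.array([2.0 / jv(nu + 1, l)**2
                  * quad(lambda x: x * f(x) * jv(nu, l * x), 0.0, 1.0, limit=200)[0]
                  for l in lam])

    x0 = 0.5
    print(np.sum(c * jv(nu, lam * x0)), f(x0))  # partial sum is close to f(0.5) = 1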
