Differential Equations
M.T.Nair
Department of Mathematics, IIT Madras
CONTENTS
1.1 Introduction
1.2 Direction Field and Isoclines
1.3 Initial Value Problem
1.4 Linear ODE
1.5 Equations with Variables Separated
1.6 Homogeneous equations
1.7 Exact Equations
1.8 Equations reducible to homogeneous or variable separable or linear or exact form
6. References
1 First order ODE
1.1 Introduction
An Ordinary differential equation (ODE) is an equation involving an unknown function and its
derivatives with respect to an independent variable x:
Here, y is the unknown function, x is the independent variable and y (j) represents the j-th derivative
of y. We shall also denote
y 0 = y (1) , y 00 = y (2) , y 000 = y (3) .
Example 1.1.
y′ = x, whose solutions are given by y = x²/2 + C for an arbitrary constant C.
The above simple example shows that a DE can have more than one solution. In fact, we obtain a
family of parabolas as solution curves. But, if we require the solution curve to pass through certain
specified point then we may get a unique solution. In the above example, if we demand that
y(x0) = y0,
then, since y = x²/2 + C, we need
y0 = x0²/2 + C,
so that the constant C must be
C = y0 − x0²/2.
Thus, the solution, in this case, must be
y = x²/2 + y0 − x0²/2.
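As a quick numerical check of the computation above (a sketch, not part of the notes; the helper name is made up), the candidate solution y = x²/2 + (y0 − x0²/2) can be tested against the initial condition and the equation y′ = x:

```python
def solve_ivp_yprime_x(x0, y0):
    """Solution of y' = x with y(x0) = y0: y = x**2/2 + (y0 - x0**2/2)."""
    C = y0 - x0**2 / 2
    return lambda x: x**2 / 2 + C

y = solve_ivp_yprime_x(1.0, 3.0)
assert abs(y(1.0) - 3.0) < 1e-12       # initial condition holds

h = 1e-6
for x in [0.0, 0.5, 2.0]:
    deriv = (y(x + h) - y(x - h)) / (2 * h)  # central difference for y'
    assert abs(deriv - x) < 1e-6             # y' = x at each test point
```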
1.2 Direction Field and Isoclines
Suppose y = φ(x) is a solution of the DE (1). Then this curve is also called an integral curve of the DE. At each point on this curve, the tangent must have the slope f(x, y). Thus, the DE prescribes a direction at each point on the integral curve y = φ(x). Such directions can be represented by small line segments with arrows pointing in the direction. The set of all such directed line segments is called the direction field of the DE.
The set of all points in the plane where f(x, y) is a constant is called an isocline. Thus, the family of isoclines helps us locate integral curves geometrically.
1.3 Initial Value Problem
The equation
y′ = f(x, y) (1)
together with the condition
y(x0) = y0 (2)
is called an initial value problem. The condition (2) is called an initial condition.
THEOREM 1.2. Suppose f is defined in an open rectangle R = I × J, where I and J are open intervals, say I = (a, b), J = (c, d). If f and ∂f/∂y are continuous on R, then for every (x0, y0) ∈ R the initial value problem (1)-(2) has a unique solution in some neighbourhood of x0.
Remark 1.3. The conditions prescribed are sufficient conditions that guarantee the existence and
uniqueness of a solution for the initial value problem. They are not necessary conditions. A unique
solution for the initial value problem can exist without the prescribed conditions on f as in the above
theorem.
The condition (2) in Theorem 1.2 is called an initial condition, the equation (1) together with
(2) is called an initial value problem.
A solution y for a particular value of C is called a particular solution of (1). If the family of all solutions of (1) is represented as
u(x, y, C) = 0
with an arbitrary constant C, then the above equation is called the complete integral of (1).
Remark 1.4. Under the assumptions of Theorem 1.2, if x0 ∈ I, then existence of a solution y for (1) is guaranteed in some neighbourhood I0 ⊆ I of x0, and it satisfies the integral equation
y(x) = y0 + ∫_{x0}^x f(t, y(t)) dt.
Is the family of all solutions of (1) defined on I0 a one-parameter family, so that any two
solutions in that family differ only by a constant?
It is known that for a general nonlinear equation (1), the answer is not in the affirmative. However, for linear equations the answer is in the affirmative.
1.4 Linear ODE
If f depends on y in a linear fashion, then the equation (1) is called a linear DE. A general form of the linear first order DE is:
y 0 + p(x)y = q(x). (3)
Assume first that there is a solution for (3) and that, after multiplying both sides of (3) by a differentiable function μ(x), the LHS is of the form (μ(x)y)′. Then (3) will be converted into:
(μ(x)y)′ = μ(x)q(x),
so that
μ(x)y = ∫ μ(x)q(x) dx + C,
i.e.,
y = (1/μ(x)) [ ∫ μ(x)q(x) dx + C ]. (4)
The requirement on μ is
μ′y + μy′ = μ(y′ + py),
i.e., μ′ = pμ, which is satisfied by
μ(x) := e^{∫ p(x) dx}.
It can be easily seen that the function y defined by (4) satisfies the DE (3). Thus existence of a
solution for (3) is proved for continuous functions p and q.
Suppose there are two functions φ and ψ which satisfy (3). Then w(x) := φ(x) − ψ(x) would satisfy
w′(x) + p(x)w(x) = 0,
so that
w(x) = Cμ(x)^{−1}.
Now, if φ(x0) = y0 = ψ(x0), then we must have w(x0) = 0, so that Cμ(x0)^{−1} = 0. Hence, we obtain C = 0 and hence φ = ψ. Thus, we have proved the existence and uniqueness for the linear DE only by assuming that p and q are continuous.
Example 1.5. Consider the initial value problem
y′ = x + y, y(0) = 0.
Writing it as y′ − y = x, we have p(x) = −1, so that μ = e^{−∫dx} = e^{−x}, and hence
y = e^x [ ∫ e^{−x} x dx + C ] = e^x [ −xe^{−x} − e^{−x} + C ].
Thus,
y = −x − 1 + Ce^x.
y(0) = 0 ⟹ 0 = −1 + C ⟹ C = 1.
Hence,
y = −x − 1 + e^x.
Note that
y′ = −1 + e^x = −1 + (x + y + 1) = x + y.
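The same initial value problem can be checked numerically. The sketch below (my helper names, not from the notes) integrates y′ = x + y with y(0) = 0 by classical fourth-order Runge-Kutta and compares the result with the closed form y = e^x − x − 1:

```python
import math

def f(x, y):
    return x + y  # right hand side of the ODE

def rk4(f, x0, y0, x_end, n=1000):
    """Classical RK4 integration of y' = f(x, y) from x0 to x_end."""
    h = (x_end - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

numeric = rk4(f, 0.0, 0.0, 1.0)
exact = math.e - 2  # y(1) = e - 1 - 1 from y = e^x - x - 1
assert abs(numeric - exact) < 1e-8
```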
1.5 Equations with Variables Separated
If f(x, y) = f1(x)f2(y) for some functions f1, f2, then we say that (1) is an equation with variables separated. In this case (1) takes the form:
y′ = f1(x)f2(y);
equivalently,
y′/f2(y) = f1(x),
assuming that f2(y) is not zero at the points in the interval of interest. Hence, in this case, a general solution is given implicitly by
∫ dy/f2(y) = ∫ f1(x) dx + C.
Example 1.6. Consider
y′ = xy.
Equivalently,
dy/y = x dx.
Hence,
log|y| = x²/2 + C,
i.e.,
y = C1 e^{x²/2}.
Note that
y = C1 e^{x²/2} ⟹ y′ = C1 e^{x²/2} · x = xy.
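A numerical sanity check of Example 1.6 (a sketch; the choice C1 = 2 is arbitrary): the family y = C1 e^{x²/2} should satisfy y′ = xy at every point:

```python
import math

def y(x, C1=2.0):
    """One member of the solution family y = C1 * exp(x**2 / 2)."""
    return C1 * math.exp(x**2 / 2)

h = 1e-6
for x in [0.0, 0.7, 1.3]:
    deriv = (y(x + h) - y(x - h)) / (2 * h)  # central difference for y'
    assert abs(deriv - x * y(x)) < 1e-4      # y' = x*y holds numerically
```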
An equation of the form
M(x)dx + N(y)dy = 0 (5)
is also an equation with variables separated. An equation of the form
M1(x)N1(y)dx + M2(x)N2(y)dy = 0 (6)
can be brought to the form (5): after dividing (6) by N1(y)M2(x) we obtain
(M1(x)/M2(x))dx + (N2(y)/N1(y))dy = 0.
1.6 Homogeneous equations
A function f(x, y) is said to be homogeneous of degree n if
f(λx, λy) = λⁿ f(x, y) ∀λ ∈ R,
for some n ∈ N.
1.7 Exact Equations
Consider an equation
M(x, y)dx + N(x, y)dy = 0, (7)
where M and N are such that there exists u(x, y) with continuous first partial derivatives satisfying
M(x, y) = ∂u/∂x, N(x, y) = ∂u/∂y. (8)
In this case (7) takes the form du = 0, so that its complete integral is
u(x, y) = C.
Equation (7) with M and N satisfying (8) is called an exact differential equation.
Note that, in the above, if u(x, y) has continuous second partial derivatives ∂²u/∂x∂y and ∂²u/∂y∂x, then
∂M/∂y = ∂N/∂x.
THEOREM 1.7. Suppose M and N are continuous and have continuous first partial derivatives ∂M/∂y and ∂N/∂x in I × J, and
∂M/∂y = ∂N/∂x.
Then the equation (7) is exact, and in that case the complete integral of (7) is given by
∫_{x0}^x M(x, y)dx + ∫_{y0}^y N(x0, y)dy = C.
Proof. Suppose u satisfies ∂u/∂x = M(x, y), so that u(x, y) = ∫_{x0}^x M(x, y)dx + g(y) for some function g. Then
∂u/∂y = ∫_{x0}^x (∂M/∂y) dx + g′(y) = ∫_{x0}^x (∂N/∂x) dx + g′(y) = N(x, y) − N(x0, y) + g′(y).
Thus,
∂u/∂y = N ⟺ g′(y) = N(x0, y) ⟺ g(y) = ∫_{y0}^y N(x0, y)dy.
Thus, taking
g(y) = ∫_{y0}^y N(x0, y)dy and u(x, y) := ∫_{x0}^x M(x, y)dx + g(y),
we obtain a function u satisfying (8).
Example 1.8. Consider
y cos xy dx + x cos xy dy = 0.
Taking u(x, y) = sin xy, we have
∂u/∂x = y cos xy and ∂u/∂y = x cos xy.
Hence, sin xy = C. Also,
y cos xy dx + x cos xy dy = 0 ⟹ y′ = −y/x, i.e., dx/x + dy/y = 0.
Hence, log|xy| = C.
Example 1.9. Consider
(2x/y³)dx + ((y² − 3x²)/y⁴)dy = 0.
In this case
∂M/∂y = −6x/y⁴ = ∂N/∂x.
Hence, the given DE is exact, and u is given by
u(x, y) = ∫ M dx + ∫ N(0, y)dy = x²/y³ − 1/y,
so that the complete integral is given by u(x, y) = C.
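The exactness test and the potential u of Example 1.9 can be verified with finite differences (a sketch, not part of the notes):

```python
def M(x, y):
    return 2 * x / y**3

def N(x, y):
    return (y**2 - 3 * x**2) / y**4

def u(x, y):
    """Candidate potential from Example 1.9."""
    return x**2 / y**3 - 1 / y

h = 1e-5
x, y = 0.7, 1.3
My = (M(x, y + h) - M(x, y - h)) / (2 * h)
Nx = (N(x + h, y) - N(x - h, y)) / (2 * h)
assert abs(My - Nx) < 1e-6          # exactness: dM/dy == dN/dx

ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
assert abs(ux - M(x, y)) < 1e-6     # du/dx == M
assert abs(uy - N(x, y)) < 1e-6     # du/dy == N
```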
1.8.1 Reducible to homogeneous form
Consider the equation
dy/dx = (ax + by + c)/(a1x + b1y + c1). (1)
Case (i): det [a b; a1 b1] ≠ 0. Put x = X + h, y = Y + k, where h and k are to be determined. Then
ax + by + c = a(X + h) + b(Y + k) + c = aX + bY + (ah + bk + c),
a1x + b1y + c1 = a1(X + h) + b1(Y + k) + c1 = a1X + b1Y + (a1h + b1k + c1).
Choosing h and k such that
ah + bk + c = 0, (2)
a1h + b1k + c1 = 0, (3)
the equation (1) takes the form
dY/dX = (aX + bY)/(a1X + b1Y).
This is a homogeneous equation. If Y = φ(X) is a solution of this homogeneous equation, then a solution of (1) is given by
y = k + φ(x − h).
Case (ii): det [a b; a1 b1] = 0. In this case either
a1 = λa, b1 = λb for some λ ∈ R,
or
a = λa1, b = λb1 for some λ ∈ R.
Assume that a1 = λa and b1 = λb for some λ ∈ R. Then (1) takes the form
dy/dx = (ax + by + c)/(a1x + b1y + c1) = (ax + by + c)/(λ(ax + by) + c1).
Taking z = ax + by, we obtain
dz/dx = a + b dy/dx = a + b (z + c)/(λz + c1).
This is an equation in variable separable form.
Example 1.10. Consider
dy/dx = (2x + y − 1)/(4x + 2y + 5).
Taking z = 2x + y,
dz/dx = 2 + dy/dx = 2 + (z − 1)/(2z + 5) = (5z + 9)/(2z + 5),
i.e.,
((2z + 5)/(5z + 9)) dz = dx.
Note that
(2z + 5)/(5z + 9) = (1/5)(10z + 25)/(5z + 9) = (1/5)[2(5z + 9) + 7]/(5z + 9) = 2/5 + (7/5)·1/(5z + 9),
so that
∫ ((2z + 5)/(5z + 9)) dz = (2z)/5 + (7/25) log|5z + 9| = x + C,
i.e.,
(2(2x + y))/5 + (7/25) log|5(2x + y) + 9| = x + C.
Thus, the solution y is given implicitly by
(4x + 2y)/5 + (7/25) log|10x + 5y + 9| = x + C.
1.8.2 Reducible to linear form
Bernoulli's equation:
y′ + p(x)y = q(x)yⁿ.
Write it as
y^{−n}y′ + p(x)y^{−n+1} = q(x).
Taking z = y^{−n+1},
dz/dx = (−n + 1)y^{−n} dy/dx = (−n + 1)[q(x) − p(x)z],
i.e.,
dz/dx + (−n + 1)p(x)z = (−n + 1)q(x).
Hence,
z = (1/μ(x)) [ ∫ μ(x)(−n + 1)q(x)dx + C ], μ(x) = e^{(−n+1)∫p(x)dx}.
Example 1.11. Consider
dy/dx + xy = x³y³.
Here, n = 3 so that −n + 1 = −2 and
μ(x) = e^{(−n+1)∫p(x)dx} = e^{−2∫x dx} = e^{−x²}.
Hence,
z = (1/μ(x)) [ ∫ μ(x)(−n + 1)q(x)dx + C ] = e^{x²} [ −2 ∫ e^{−x²}x³ dx + C ].
Since ∫ e^{−x²}x³ dx = −(x² + 1)e^{−x²}/2, this gives z = x² + 1 + Ce^{x²}, and with z = y^{−2}:
(x² + 1 + Ce^{x²})y² = 1.
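The implicit solution of Example 1.11 can be checked numerically (a sketch; the constant C = 0.5 and the positive branch for y are arbitrary choices):

```python
import math

def y(x, C=0.5):
    """Positive branch of (x^2 + 1 + C*e^{x^2}) * y^2 = 1 solved for y."""
    return (x**2 + 1 + C * math.exp(x**2)) ** -0.5

h = 1e-6
for x in [0.1, 0.6, 1.1]:
    deriv = (y(x + h) - y(x - h)) / (2 * h)       # central difference
    residual = deriv + x * y(x) - x**3 * y(x)**3  # y' + x*y - x^3*y^3
    assert abs(residual) < 1e-4                   # Bernoulli equation holds
```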
1.8.3 Reducible to exact form
Suppose M(x, y) and N(x, y) are functions with continuous partial derivatives ∂M/∂x, ∂N/∂x, ∂M/∂y, ∂N/∂y. Consider the differential equation
M(x, y)dx + N(x, y)dy = 0, (∗)
and suppose it is not exact. We look for a function μ(x) such that
μ(x)[M(x, y)dx + N(x, y)dy] = 0 (∗∗)
is exact. So, the requirement on μ should be
∂(μM)/∂y = ∂(μN)/∂x, i.e., μ ∂M/∂y = μ ∂N/∂x + μ′N,
i.e.,
μ′/μ = (1/N)(∂M/∂y − ∂N/∂x).
Thus:
If g := (1/N)(∂M/∂y − ∂N/∂x) is a function of x alone, then the above differential equation for μ can be solved, and with the resulting μ := e^{∫g dx} the equation (∗∗) is an exact equation. Similarly, if h := (1/M)(∂N/∂x − ∂M/∂y) is a function of y alone, then μ := e^{∫h dy} serves the same purpose.
Example 1.13. Consider
(y + xy²)dx − x dy = 0.
Note that ∂M/∂y = 1 + 2xy, ∂N/∂x = −1, so that
(1/N)(∂M/∂y − ∂N/∂x) = ((1 + 2xy) + 1)/(−x) = −2(1 + xy)/x,
which is not a function of x alone. However,
(1/M)(∂N/∂x − ∂M/∂y) = (−1 − (1 + 2xy))/(y + xy²) = −2(1 + xy)/(y(1 + xy)) = −2/y.
Thus,
μ := e^{∫(−2/y)dy} = 1/y²
is an integrating factor, i.e.,
(1/y²)[(y + xy²)dx − x dy] = 0, i.e., (1/y + x)dx − (x/y²)dy = 0,
is an exact equation. Then
u = ∫ M dx + ∫ N(0, y)dy = ∫ (1/y + x)dx = x/y + x²/2.
Thus the complete integral is given by x/y + x²/2 = C.
2 Second and higher order linear ODE
Consider the equation
y″ + a(x)y′ + b(x)y = f(x), (1)
where a(x), b(x), f(x) are functions defined on some interval I. The equation (1) is said to be homogeneous if f = 0 on I, and nonhomogeneous otherwise; the corresponding homogeneous equation is
y″ + a(x)y′ + b(x)y = 0. (2)
THEOREM 2.1. (Existence and uniqueness) Suppose a(x), b(x), f(x) are continuous functions (defined on some interval I). Then for every x0 ∈ I, y0 ∈ R, z0 ∈ R, there exists a unique solution y for (1) such that
y(x0) = y0, y′(x0) = z0.
Note that:
If y1 and y2 are solutions of (2), then for any α, β ∈ R, the function αy1 + βy2 is also a solution of (2).
1. y1 and y2 are said to be linearly dependent if there exists λ ∈ R such that either y1(x) = λy2(x) for all x ∈ I or y2(x) = λy1(x) for all x ∈ I; equivalently, there exist α, β ∈ R, with at least one of them nonzero, such that
αy1(x) + βy2(x) = 0 ∀x ∈ I.
2. y1 and y2 are said to be linearly independent if they are not linearly dependent, i.e., for α, β ∈ R,
αy1(x) + βy2(x) = 0 ∀x ∈ I ⟹ α = 0, β = 0.
We shall prove:
1. The differential equation (2) has two linearly independent solutions.
2. If y1 and y2 are linearly independent solutions of (2), then every solution y of (2) can be expressed as
y = αy1 + βy2
for some α, β ∈ R.
Definition 2.4. Let y1 and y2 be differentiable functions (on an interval I). Then the function
W(y1, y2)(x) := det [ y1(x) y2(x) ; y1′(x) y2′(x) ] = y1(x)y2′(x) − y2(x)y1′(x)
is called the Wronskian of y1 and y2.
Once the functions y1, y2 are fixed, we shall denote W(y1, y2)(x) by W(x).
Note that:
Equivalently:
y1(x0) = a1, y2(x0) = b1,
y1′(x0) = a2, y2′(x0) = b2.
Proof. Since A = W(x0) and det(A) ≠ 0, the proof follows from the earlier observation.
Since y1 and y2 are solutions of (2),
y1″ + a(x)y1′ + b(x)y1 = 0,
y2″ + a(x)y2′ + b(x)y2 = 0.
Hence,
(y1y2″ − y2y1″) + a(x)(y1y2′ − y2y1′) = 0.
Note that
W = y1y2′ − y2y1′, W′ = y1y2″ − y2y1″.
Hence
W′ + a(x)W = 0.
Therefore,
W(x) = W(x0) e^{−∫_{x0}^x a(t)dt}.
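Abel's formula above is easy to test on a concrete equation. The sketch below (my example, not from the notes) uses y″ + y′ = 0, with a(x) = 1, whose solutions y1 = 1 and y2 = e^{−x} have Wronskian W = −e^{−x}:

```python
import math

def W(x):
    """Wronskian of y1 = 1 and y2 = e^{-x}: W = y1*y2' - y2*y1' = -e^{-x}."""
    return -math.exp(-x)

x0 = 0.0
for x in [0.3, 1.0, 2.5]:
    # Abel: W(x) = W(x0) * exp(-integral of a(t) from x0 to x), here a = 1
    abel = W(x0) * math.exp(-(x - x0))
    assert abs(W(x) - abel) < 1e-12
```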
Proof. We have already observed that if W(x0) ≠ 0 for some x0 ∈ I, then y1 and y2 are linearly independent. Hence, it remains to prove that if y1 and y2 are linearly independent, then W(x) ≠ 0 for every x ∈ I.
Suppose W(x0) = 0 for some x0 ∈ I. Then by the Lemma 2.6, W(x) = 0 for every x ∈ I, i.e.,
y1y2′ − y2y1′ = 0 on I.
Let I0 ⊆ I be an interval on which y1 does not vanish. Then
(y1y2′ − y2y1′)/y1² = 0 on I0,
i.e.,
(d/dx)(y2/y1) = 0 on I0.
Hence, there exists λ ∈ R such that
y2/y1 = λ on I0.
Hence, y2 = λy1 on I0, and by the uniqueness theorem y2 = λy1 on all of I, showing that y1 and y2 are linearly dependent.
THEOREM 2.8. Let y1 and y2 be linearly independent solutions of (2). Then every solution y of (2) can be expressed as
y = αy1 + βy2
for some α, β ∈ R.
Proof. Let y be a solution of (2), and for x0 ∈ I, let
y0 := y(x0), z0 := y′(x0).
Let W(x) be the Wronskian of y1, y2. Since y1 and y2 are linearly independent solutions of (2), by Theorem 2.5, W(x0) ≠ 0. Hence, there exists a unique pair (α, β) of real numbers such that
[ y1(x0) y2(x0) ; y1′(x0) y2′(x0) ] [α ; β] = [y0 ; z0].
Let
φ(x) = αy1(x) + βy2(x), x ∈ I.
Then φ is a solution of (2) with φ(x0) = y0 and φ′(x0) = z0. By the existence and uniqueness theorem, we obtain φ(x) = y(x) for all x ∈ I, i.e.,
y = αy1 + βy2.
Now, the question is how to get linearly independent solutions for (2). Suppose y1 is a known solution which is nonzero on I. Look for a second solution of the form y2 = vy1 for a function v to be determined. Then
y2′ = y1v′ + y1′v, y2″ = y1v″ + 2y1′v′ + y1″v.
Hence,
y2″ + ay2′ + by2 = y1v″ + (2y1′ + ay1)v′ + (y1″ + ay1′ + by1)v = y1v″ + (2y1′ + ay1)v′.
Thus, y2 is a solution of (2) provided
y1v″ + (2y1′ + ay1)v′ = 0,
which is satisfied when
v′ = C e^{−∫_{x0}^x a(t)dt} / y1².
Clearly, y1 and y2 are linearly independent, since v is nonconstant.
Hence,
y2 = y1 ∫ ( C e^{−∫_{x0}^x a(t)dt} / y1² ) dx.
Consider
y″ + py′ + qy = 0, (1)
where p, q are real constants. Let us look for a solution of (1) in the form y = e^{λx} for some λ, real or complex. Assuming that such a solution exists, from (1) we have
(λ² + pλ + q)e^{λx} = 0,
so that λ must satisfy the auxiliary equation
λ² + pλ + q = 0. (2)
In the case of a double root λ of (2), e^{λx} and xe^{λx} are linearly independent solutions.
Example 2.10. y″ + y′ − 2y = 0: the auxiliary equation λ² + λ − 2 = 0 has roots 1 and −2, so e^x and e^{−2x} are linearly independent solutions.
Example 2.11. y″ + 2y′ + 5y = 0: the auxiliary equation λ² + 2λ + 5 = 0 has roots −1 ± 2i, so e^{−x}cos 2x and e^{−x}sin 2x are linearly independent solutions.
Example 2.12. y″ − 4y′ + 4y = 0: the auxiliary equation λ² − 4λ + 4 = 0 has the double root 2, so e^{2x} and xe^{2x} are linearly independent solutions.
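Assuming the three auxiliary equations are λ² + λ − 2 = 0, λ² + 2λ + 5 = 0 and λ² − 4λ + 4 = 0, their roots can be computed with the quadratic formula (a sketch; `char_roots` is a made-up helper):

```python
import cmath

def char_roots(p, q):
    """Roots of the auxiliary equation lam^2 + p*lam + q = 0."""
    disc = cmath.sqrt(p * p - 4 * q)
    return (-p + disc) / 2, (-p - disc) / 2

# y'' + y' - 2y = 0  ->  roots 1 and -2 (distinct real)
r1, r2 = char_roots(1, -2)
assert abs(r1 - 1) < 1e-12 and abs(r2 + 2) < 1e-12

# y'' + 2y' + 5y = 0  ->  roots -1 +/- 2i (complex conjugates)
r1, r2 = char_roots(2, 5)
assert abs(r1 - (-1 + 2j)) < 1e-12 and abs(r2 - (-1 - 2j)) < 1e-12

# y'' - 4y' + 4y = 0  ->  double root 2
r1, r2 = char_roots(-4, 4)
assert abs(r1 - 2) < 1e-12 and abs(r2 - 2) < 1e-12
```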
2.3 Nonhomogeneous equations
Observe that if y∗ is a particular solution of the nonhomogeneous equation (1) and y is any solution of the homogeneous equation (2), then y∗ + y
is a solution of the nonhomogeneous equation (1). Also, if y∗ is a particular solution of the nonhomogeneous equation (1) and if y is any solution of the nonhomogeneous equation (1), then y − y∗ is a solution of the homogeneous equation (2). Thus, knowing a particular solution y∗ of the nonhomogeneous equation (1) and a general solution yh of the homogeneous equation (2), we obtain a general solution of the nonhomogeneous equation (1) as
y = y∗ + yh.
If the coefficients are constants, then we know a method of obtaining two linearly independent solutions
for the homogeneous equation (2), and thus we obtain a general solution for the homogeneous equation
(2).
2.3.1 Method of variation of parameters
Let y1, y2 be linearly independent solutions of (2). We look for a particular solution of (1) of the form
y = u1y1 + u2y2,
where u1 and u2 are functions to be determined. Assume for a moment that such a solution exists. Then
y′ = u1y1′ + u2y2′ + u1′y1 + u2′y2.
Imposing the condition
u1′y1 + u2′y2 = 0, (5)
we have
y′ = u1y1′ + u2y2′. (4)
Substituting into (1),
(u1y1″ + u2y2″ + u1′y1′ + u2′y2′) + a(x)(u1y1′ + u2y2′) + b(x)(u1y1 + u2y2) = f(x),
i.e.,
u1[y1″ + a(x)y1′ + b(x)y1] + u2[y2″ + a(x)y2′ + b(x)y2] + u1′y1′ + u2′y2′ = f(x),
i.e.,
u1′y1′ + u2′y2′ = f(x). (6)
THEOREM 2.13. If y1 , y2 are linearly independent solutions of the homogeneous equation (2), and
if W (x) is their Wronskian, then a general solution of the nonhomogeneous equation (1) is given by
y = u1 y1 + u2 y2 ,
where
u1 = −∫ (y2f/W) dx + C1, u2 = ∫ (y1f/W) dx + C2.
More generally, if y1, y2, …, yn are linearly independent solutions of an n-th order homogeneous linear equation whose coefficients a1, a2, …, an are continuous functions on an interval I, and if W(x) is their Wronskian, i.e.,
W(x) = det [ y1 y2 ⋯ yn ; y1′ y2′ ⋯ yn′ ; ⋯ ; y1^{(n−1)} y2^{(n−1)} ⋯ yn^{(n−1)} ],
then a general solution of the corresponding nonhomogeneous equation is given by
y = (u1 + C1)y1 + (u2 + C2)y2 + ⋯ + (un + Cn)yn,
where u1′, …, un′ solve the linear system analogous to (5) and (6).
Remark 2.15. Suppose the right hand side of (1) is of the form f(x) = f1(x) + f2(x). Then it can be easily seen that if y1∗ and y2∗ are particular solutions of (1) with right hand sides f1 and f2, respectively, then y1∗ + y2∗ is a particular solution of (1) with right hand side f.
2.3.2 Method of undetermined coefficients
This method is applicable when the coefficients of (1) are constants and f is of certain special forms. So, consider
y″ + py′ + qy = f. (1)
Case (i): f(x) = P(x)e^{αx}, where P is a polynomial of degree n. We look for a solution of the form
y = Q(x)e^{αx},
where Q is a polynomial of degree n. Substituting the above expression in the DE, we obtain an identity determining Q only if α² + pα + q ≠ 0, i.e., α is not a root of the auxiliary equation λ² + pλ + q = 0. In such a case, we can determine Q by comparing coefficients of powers of x^k for k = 0, 1, …, n.
If α is a (simple) root of the auxiliary equation λ² + pλ + q = 0, then we must look for a solution of the form
y = Q̃(x)e^{αx},
where Q̃ is a polynomial of degree n + 1, or we must look for a solution of the form
y = xQ(x)e^{αx}.
If α is a double root of the auxiliary equation λ² + pλ + q = 0, then we must look for a solution of the form
y = Q̂(x)e^{αx},
where Q̂ is a polynomial of degree n + 2, or we must look for a solution of the form
y = x²Q(x)e^{αx}.
Case (ii): f(x) = P1(x)e^{αx} cos βx + P2(x)e^{αx} sin βx, where P1 and P2 are polynomials and α, β are real numbers.
We look for a solution of the form
y = Q1(x)e^{αx} cos βx + Q2(x)e^{αx} sin βx,
where Q1 and Q2 are polynomials with deg Qj = max{deg P1, deg P2}, j ∈ {1, 2}. Substituting the above expression in the DE, we obtain the coefficients of Q1, Q2 if α + iβ is not a root of the auxiliary equation λ² + pλ + q = 0; if α + iβ is a root, we multiply the trial solution by x.
The following example illustrates the second part of case (ii) above:
Example 2.16. We find the general solution of
y″ + 4y = x sin 2x.²
The auxiliary equation is
λ² + 4 = 0.
Its solutions are λ = ±2i. Hence, the general solution of the homogeneous equation is A cos 2x + B sin 2x.
Note that the non-homogeneous term, f(x) = x sin 2x, is of the form P1(x)e^{αx}cos βx + P2(x)e^{αx}sin βx with α = 0, β = 2, where Q1 and Q2 are polynomials with deg Qj = max{deg P1, deg P2} = 1, and α + iβ = 2i is a root of the auxiliary equation. Thus, a particular solution is of the form
y = x[(A0 + A1x) cos 2x + (B0 + B1x) sin 2x].
Differentiating:
y′ = [A0 + (2A1 + 2B0)x + 2B1x²] cos 2x + [B0 + (2B1 − 2A0)x − 2A1x²] sin 2x.
2 This example is included in the notes on November 23, 2012 mtnair.
Differentiating again and collecting terms,
y″ + 4y = [(2A1 + 4B0) + 8B1x] cos 2x + [(2B1 − 4A0) − 8A1x] sin 2x.
Equating this to x sin 2x, we obtain
A0 = 0, A1 = −1/8, B0 = 1/16, B1 = 0,
so that
y = x[(A0 + A1x) cos 2x + (B0 + B1x) sin 2x] = −(x²/8) cos 2x + (x/16) sin 2x.
Thus, the general solution of the equation is:
y = A cos 2x + B sin 2x − (x²/8) cos 2x + (x/16) sin 2x.
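The particular solution found above can be verified numerically with a second-order central difference (a sketch, not part of the notes):

```python
import math

def yp(x):
    """Candidate particular solution -x^2/8 * cos(2x) + x/16 * sin(2x)."""
    return -(x**2) / 8 * math.cos(2 * x) + x / 16 * math.sin(2 * x)

h = 1e-4
for x in [0.4, 1.0, 2.2]:
    # second central difference approximates y''
    ypp = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2
    residual = ypp + 4 * yp(x) - x * math.sin(2 * x)  # y'' + 4y - x sin 2x
    assert abs(residual) < 1e-5
```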
Remark 2.17. The above method can be generalized, in a natural way, to higher order equations with constant coefficients.
A particular type of equation with non-constant coefficients can be reduced to one with constant coefficients. Here it is: Consider the Euler-Cauchy equation
xⁿy^{(n)} + a1x^{n−1}y^{(n−1)} + ⋯ + a_{n−1}xy′ + any = f(x).
Under the substitution x = e^z, it takes the form
Dⁿy + b1D^{n−1}y + ⋯ + b_{n−1}Dy + bny = f(e^z), D := d/dz,
where b1, b2, …, bn are constants. Let us consider the case of n = 2:
x²y″ + a1xy′ + a2y = f(x).
Taking x = e^z,
dy/dz = (dy/dx)(dx/dz) = y′x,
d²y/dz² = (d/dz)(y′x) = (dy′/dz)x + y′(dx/dz) = y″x² + y′x = y″x² + dy/dz.
Hence we have
x²y″ + a1xy′ + a2y = (d²y/dz² − dy/dz) + a1 dy/dz + a2y = d²y/dz² + (a1 − 1)dy/dz + a2y,
so that the equation takes the form
d²y/dz² + (a1 − 1)dy/dz + a2y = f(e^z).
Similarly, using x³y‴ = d³y/dz³ − 3 d²y/dz² + 2 dy/dz, we have
x³y‴ + ax²y″ + bxy′ + cy = d³y/dz³ + (a − 3) d²y/dz² + (b − a + 2) dy/dz + cy.
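The identity x²y″ + a1xy′ + a2y = d²y/dz² + (a1 − 1)dy/dz + a2y (with x = e^z) can be spot-checked on a test function, say y(x) = x³, for which Y(z) = e^{3z} (a sketch; the constants a1 = 2, a2 = 5 are arbitrary choices):

```python
import math

# test function y(x) = x^3 with derivatives written out by hand
y   = lambda x: x**3
dy  = lambda x: 3 * x**2
d2y = lambda x: 6 * x

a1, a2 = 2.0, 5.0
for z in [0.1, 0.5, 1.0]:
    x = math.exp(z)
    lhs = x**2 * d2y(x) + a1 * x * dy(x) + a2 * y(x)
    # in the variable z: Y(z) = e^{3z}, dY/dz = 3e^{3z}, d2Y/dz2 = 9e^{3z}
    Y, dY, d2Y = math.exp(3 * z), 3 * math.exp(3 * z), 9 * math.exp(3 * z)
    rhs = d2Y + (a1 - 1) * dY + a2 * Y
    assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```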
3 System of first order linear homogeneous ODE
Consider the system
x1′ = a11x1 + a12x2,
x2′ = a21x1 + a22x2, (1)
with real constants aij; in matrix form, X′ = AX with A = [a11 a12; a21 a22] and X = [x1; x2]. Looking for a solution of the form X = ξe^{λt} with a constant vector ξ = [ξ1; ξ2] leads to
(A − λI)ξ = 0, (2)
which has a nonzero solution ξ if and only if
det(A − λI) = 0. (3)
Definition 3.1. The equation (3) is called the auxiliary equation for the system (1).
Case (i): Suppose the roots of the auxiliary equation (3) are real and distinct, say λ1 and λ2. Suppose
ξ^{(1)} = [ξ1^{(1)}; ξ2^{(1)}] and ξ^{(2)} = [ξ1^{(2)}; ξ2^{(2)}]
are nonzero solutions of (2) corresponding to λ = λ1 and λ = λ2, respectively. Then, the vector valued functions
X1 = ξ^{(1)}e^{λ1t}, X2 = ξ^{(2)}e^{λ2t}
are solutions of (1), and they are linearly independent. In this case, the general solution of (1) is given by C1X1 + C2X2.
Case (ii): Suppose the roots of the auxiliary equation (3) are complex non-real. Since the entries of the matrix are real, these roots are conjugate to each other. Thus, they are of the form α + iβ and α − iβ with β ≠ 0. Suppose ξ = [ξ1; ξ2] is a nonzero solution of (2) corresponding to λ = α + iβ. The numbers ξ1 and ξ2 need not be real. Thus,
ξ = [ξ1^{(1)} + iξ1^{(2)}; ξ2^{(1)} + iξ2^{(2)}] =: ξ^{(1)} + iξ^{(2)}
with real vectors ξ^{(1)}, ξ^{(2)}. Then the real and imaginary parts of ξe^{(α+iβ)t},
X1 = e^{αt}[ξ^{(1)} cos βt − ξ^{(2)} sin βt], X2 = e^{αt}[ξ^{(1)} sin βt + ξ^{(2)} cos βt],
are linearly independent real solutions of (1).
Case (iii): Suppose λ0 is a double root of the auxiliary equation (3). In this case there are two subcases:
There are two linearly independent solutions of (2).
There is only one (up to scalar multiples) nonzero solution of (2).
In the first case, if
ξ^{(1)} = [ξ1^{(1)}; ξ2^{(1)}] and ξ^{(2)} = [ξ1^{(2)}; ξ2^{(2)}]
are the linearly independent solutions of (2) corresponding to λ = λ0, then the vector valued functions
X1 = ξ^{(1)}e^{λ0t}, X2 = ξ^{(2)}e^{λ0t}
are linearly independent solutions of (1), and the general solution is
C1X1 + C2X2.
" #
1
In the second case, let u := is a nonzero solution of (2) corresponding to = 0 , and let
2
" #
1
v := is such that
2
(A 0 I)v = u.
Then
X = C1 ue0 t + C2 [tu + v]e0 t
Remark 3.2. Another method of solving a system is to convert the given system into a second order equation for one of x1 and x2, solve it, and then obtain the other unknown.
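Case (i) can be illustrated on a concrete matrix (my example, not from the notes): A = [[1, 1], [4, 1]] has eigenvalues 3 and −1 with eigenvectors (1, 2) and (1, −2), and each mode ξe^{λt} should satisfy X′ = AX:

```python
import math

A = [[1.0, 1.0], [4.0, 1.0]]
# auxiliary equation (1 - lam)^2 - 4 = 0  ->  lam = 3 and lam = -1
modes = [(3.0, (1.0, 2.0)), (-1.0, (1.0, -2.0))]

for lam, (v1, v2) in modes:
    # check (A - lam*I) v = 0
    assert abs(A[0][0] * v1 + A[0][1] * v2 - lam * v1) < 1e-12
    assert abs(A[1][0] * v1 + A[1][1] * v2 - lam * v2) < 1e-12
    # check X(t) = v * e^{lam t} satisfies X' = A X at t = 0.7
    t, h = 0.7, 1e-6
    for i, vi in enumerate((v1, v2)):
        deriv = vi * (math.exp(lam * (t + h)) - math.exp(lam * (t - h))) / (2 * h)
        ax = (A[i][0] * v1 + A[i][1] * v2) * math.exp(lam * t)
        assert abs(deriv - ax) < 1e-3
```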
4 Power series method
Consider a second order linear equation
y″ + f(x)y′ + g(x)y = r(x). (1)
We would like to see if the above equation has a solution of the form
y = Σ_{n=0}^∞ cn(x − x0)ⁿ. (2)
Recall the following facts about a convergent power series Σ_{n=0}^∞ an(x − x0)ⁿ:
There exists ρ > 0 such that the series converges at every x with |x − x0| < ρ.
The series can be differentiated term by term in the interval (x0 − ρ, x0 + ρ) any number of times, i.e.,
(d^k/dx^k) Σ_{n=0}^∞ an(x − x0)ⁿ = Σ_{n=k}^∞ n(n − 1)⋯(n − k + 1) an (x − x0)^{n−k}.
Recall also that for polynomials
p(x) = a0 + a1x + ⋯ + anxⁿ, q(x) = b0 + b1x + ⋯ + bnxⁿ,
we have
p(x)q(x) = a0b0 + (a0b1 + a1b0)x + ⋯ + (a0bn + a1b_{n−1} + ⋯ + anb0)xⁿ + ⋯.
Motivated by this, for convergent power series Σ_{n=0}^∞ an(x − x0)ⁿ and Σ_{n=0}^∞ bn(x − x0)ⁿ, we define
( Σ_{n=0}^∞ an(x − x0)ⁿ )( Σ_{n=0}^∞ bn(x − x0)ⁿ ) = Σ_{n=0}^∞ cn(x − x0)ⁿ, cn := Σ_{k=0}^n ak b_{n−k}.
Now, it may be too much to expect a solution of the form (2) for a differential equation (1) with arbitrary continuous functions f, g, r. Note that the equation requires the solution to have only a second derivative, whereas we are looking for a solution having a series expansion, in particular, one differentiable infinitely many times. But it may not be too much to expect a solution of the form (2) if f, g, r also have power series expansions about x0. The power series method is based on such assumptions.
The idea is to consider those cases when f, g, r also have power series expansions about x0, say
f(x) = Σ_{n=0}^∞ an(x − x0)ⁿ, g(x) = Σ_{n=0}^∞ bn(x − x0)ⁿ, r(x) = Σ_{n=0}^∞ dn(x − x0)ⁿ.
Then substitute the expressions for f, g, r, y and obtain the coefficients cn, n ∈ N0, by comparing coefficients of (x − x0)^k for k = 0, 1, 2, ….
If any of the functions f, g, r is a rational function, i.e., a function of the form p(x)/q(x) where p(x) and q(x) are polynomials, then the point x0 should not be a zero of q(x).
Example 4.2. Consider
y″ + y = 0. (∗)
In this case, f = 0, g = 1, r = 0. So, we may assume that the equation has a power series solution around any point x0 ∈ R. For simplicity, let x0 = 0, and assume that the solution is of the form y = Σ_{n=0}^∞ cnxⁿ. Note that
(∗) ⟹ Σ_{n=2}^∞ n(n − 1)cnx^{n−2} + Σ_{n=0}^∞ cnxⁿ = 0 ⟹ Σ_{n=0}^∞ (n + 2)(n + 1)c_{n+2}xⁿ + Σ_{n=0}^∞ cnxⁿ = 0
⟹ Σ_{n=0}^∞ [(n + 2)(n + 1)c_{n+2} + cn]xⁿ = 0 ⟹ (n + 2)(n + 1)c_{n+2} + cn = 0 ∀n ∈ N0 := N ∪ {0},
i.e.,
c_{n+2} = −cn/((n + 2)(n + 1)) ∀n ∈ N0.
Hence,
c_{2n} = (−1)ⁿc0/(2n)!, c_{2n+1} = (−1)ⁿc1/(2n + 1)! ∀n ∈ N0.
Thus, if y = Σ_{n=0}^∞ cnxⁿ is a solution of (∗), then
y = Σ_{n=0}^∞ cnxⁿ = Σ_{n=0}^∞ c_{2n}x^{2n} + Σ_{n=0}^∞ c_{2n+1}x^{2n+1} = c0 cos x + c1 sin x.
The following theorem specifies conditions under which a power series solution is possible.
THEOREM 4.3. Let p, q, r be analytic at a point x0. Then every solution of the equation
y″ + p(x)y′ + q(x)y = r(x)
is analytic at x0.
4.2 Legendre equation
The equation
(1 − x²)y″ − 2xy′ + α(α + 1)y = 0 (∗)
is called the Legendre equation. Here, α is a real constant. Note that the above equation can also be written as
(d/dx)[(1 − x²)(dy/dx)] + α(α + 1)y = 0.
Note that (∗) can also be written as
y″ − (2x/(1 − x²))y′ + (α(α + 1)/(1 − x²))y = 0.
It is of the form (1) with
f(x) = −2x/(1 − x²), g(x) = α(α + 1)/(1 − x²), r(x) = 0.
Clearly, f, g, r are rational functions, and have power series expansions around the point x0 = 0. Let us assume that a solution of (∗) is of the form y = Σ_{n=0}^∞ cnxⁿ. Substituting the expressions for y, y′, y″ into (∗), we obtain
(1 − x²) Σ_{n=2}^∞ n(n − 1)cnx^{n−2} − 2x Σ_{n=1}^∞ ncnx^{n−1} + α(α + 1) Σ_{n=0}^∞ cnxⁿ = 0,
i.e.,
Σ_{n=2}^∞ n(n − 1)cnx^{n−2} − Σ_{n=2}^∞ n(n − 1)cnxⁿ − Σ_{n=1}^∞ 2ncnxⁿ + Σ_{n=0}^∞ α(α + 1)cnxⁿ = 0,
i.e.,
Σ_{n=0}^∞ (n + 2)(n + 1)c_{n+2}xⁿ − Σ_{n=2}^∞ n(n − 1)cnxⁿ − Σ_{n=1}^∞ 2ncnxⁿ + Σ_{n=0}^∞ α(α + 1)cnxⁿ = 0.
Equating coefficients of x^k to 0 for k ∈ N0, we obtain
2c2 + α(α + 1)c0 = 0, 6c3 + [α(α + 1) − 2]c1 = 0,
i.e.,
c2 = −(α(α + 1)/2)c0, c3 = −((α(α + 1) − 2)/6)c1, and in general c_{n+2} = −((α − n)(α + n + 1)/((n + 2)(n + 1)))cn.
Note that if α = k is a positive integer, then the recurrence gives c_{k+2} = 0 and hence
c_{k+2j} = 0 for every j ∈ N.
Thus, in this case we have y = y1(x) + y2(x), where:
If k is an even integer, then y1(x) is a polynomial of degree k with only even powers of x, and y2(x) is a power series with only odd powers of x.
If k is an odd integer, then y2(x) is a polynomial of degree k with only odd powers of x, and y1(x) is a power series with only even powers of x.
Thus, with α = k,
c_{k−2} = −(k(k − 1)/(2(2k − 1)))ck,
c_{k−4} = −((k − 2)(k − 3)/(4(2k − 3)))c_{k−2} = (−1)² (k(k − 1)(k − 2)(k − 3)/(2·4·(2k − 1)(2k − 3)))ck,
c_{k−6} = −((k − 4)(k − 5)/(6(2k − 5)))c_{k−4} = (−1)³ (k(k − 1)(k − 2)(k − 3)(k − 4)(k − 5)/(2·4·6·(2k − 1)(2k − 3)(2k − 5)))ck.
In general, for 2ℓ < k,
c_{k−2ℓ} = (−1)^ℓ (k(k − 1)(k − 2)⋯(k − 2ℓ + 1)/([2·4⋯(2ℓ)](2k − 1)(2k − 3)⋯(2k − 2ℓ + 1)))ck
= (−1)^ℓ (k!(k − 1)!(2k − 2ℓ − 1)!/((k − 2ℓ)! ℓ! (k − ℓ − 1)! (2k − 1)!))ck.
Taking
ck := (2k)!/(2^k (k!)²),
it follows that
c_{k−2ℓ} = (−1)^ℓ (2k − 2ℓ)!/(2^k ℓ!(k − ℓ)!(k − 2ℓ)!).
Definition 4.4. The polynomial
Pn(x) = Σ_{ℓ=0}^{Mn} (−1)^ℓ ((2n − 2ℓ)!/(2ⁿ ℓ!(n − ℓ)!(n − 2ℓ)!)) x^{n−2ℓ}
is called the Legendre polynomial of degree n. Here, Mn = n/2 if n is even and Mn = (n − 1)/2 if n is odd.
Recall
Pn(x) = Σ_{k=0}^{Mn} (−1)^k ((2n − 2k)!/(2ⁿ k!(n − k)!(n − 2k)!)) x^{n−2k}.
It can be seen that
P0(x) = 1, P1(x) = x, P2(x) = (3x² − 1)/2, P3(x) = (5x³ − 3x)/2,
P4(x) = (35x⁴ − 30x² + 3)/8, P5(x) = (63x⁵ − 70x³ + 15x)/8.
Also,
Pn(−x) = Σ_{k=0}^{Mn} (−1)^k ((2n − 2k)!/(2ⁿ k!(n − k)!(n − 2k)!)) (−x)^{n−2k} = (−1)ⁿPn(x).
k=0
Rodrigues formula: Pn(x) = (1/(n! 2ⁿ)) (dⁿ/dxⁿ)(x² − 1)ⁿ.
Let
f(x) = (x² − 1)ⁿ = Σ_{r=0}^n (−1)^r (nCr) x^{2n−2r}.
Then
f′(x) = Σ_r (−1)^r (nCr)(2n − 2r)x^{2n−2r−1},
f″(x) = Σ_r (−1)^r (nCr)(2n − 2r)(2n − 2r − 1)x^{2n−2r−2},
and, continuing,
f^{(n)}(x) = Σ_{r=0}^{Mn} (−1)^r (nCr)[(2n − 2r)(2n − 2r − 1)⋯(2n − 2r − n + 1)]x^{2n−2r−n}
= Σ_{r=0}^{Mn} (−1)^r (nCr)[(2n − 2r)(2n − 2r − 1)⋯(n − 2r + 1)]x^{n−2r}
= Σ_{r=0}^{Mn} (−1)^r (n!/(r!(n − r)!)) ((2n − 2r)!/(n − 2r)!) x^{n−2r}
= n! 2ⁿ Pn(x),
which proves the Rodrigues formula.
Generating function: 1/√(1 − 2xu + u²) = Σ_{n=0}^∞ Pn(x)uⁿ.
By the binomial series,
(1 − ρ)^{−1/2} = Σ_{n=0}^∞ an ρⁿ, an := (2n)!/(2^{2n}(n!)²).
Also,
(2xu − u²)ⁿ = Σ_{k=0}^n (n!/(k!(n − k)!)) (2xu)^{n−k}(−u²)^k = Σ_{k=0}^n (−1)^k 2^{n−k} (n!/(k!(n − k)!)) x^{n−k} u^{n+k}.
Thus,
(2xu − u²)ⁿ = Σ_{k=0}^n b_{n,k} x^{n−k} u^{n+k}, b_{n,k} = (−1)^k 2^{n−k} n!/(k!(n − k)!).
Taking ρ = 2xu − u², we have
(1 − 2xu + u²)^{−1/2} = Σ_{n=0}^∞ an [ Σ_{k=0}^n b_{n,k} x^{n−k} u^{n+k} ]
= a0 + a1b_{1,0}xu + (a1b_{1,1} + a2b_{2,0}x²)u²
+ (a2b_{2,1}x + a3b_{3,0}x³)u³
+ (a2b_{2,2} + a3b_{3,1}x² + a4b_{4,0}x⁴)u⁴ + ⋯
= f0(x) + f1(x)u + f2(x)u² + ⋯,
where
fn(x) = Σ_{k=0}^{Mn} a_{n−k} b_{n−k,k} x^{n−2k}.
Since
a_{n−k}b_{n−k,k} = (−1)^k ([2(n − k)]!/(2^{2(n−k)}[(n − k)!]²)) (2^{n−2k}(n − k)!/(k!(n − 2k)!)) = (−1)^k (2n − 2k)!/(2ⁿ k!(n − k)!(n − 2k)!),
we have
fn(x) = Pn(x).
Thus,
1/√(1 − 2xu + u²) = Σ_{n=0}^∞ Pn(x)uⁿ.
Recurrence formulae:
1. (n + 1)P_{n+1}(x) = (2n + 1)xPn(x) − nP_{n−1}(x).
2. nPn(x) = xP′n(x) − P′_{n−1}(x).
3. (2n + 1)Pn(x) = P′_{n+1}(x) − P′_{n−1}(x).
4. P′_{n+1}(x) = xP′n(x) + (n + 1)Pn(x).
Proofs. 1. Recall that the generating function for (Pn) is (1 − 2xt + t²)^{−1/2}, i.e.,
(1 − 2xt + t²)^{−1/2} = Σ_{n=0}^∞ Pn(x)tⁿ.
Differentiating with respect to t,
(x − t)(1 − 2xt + t²)^{−3/2} = Σ_{n=1}^∞ nPn(x)t^{n−1},
so that
(x − t)(1 − 2xt + t²)^{−1/2} = (1 − 2xt + t²) Σ_{n=1}^∞ nPn(x)t^{n−1},
i.e.,
(x − t) Σ_{n=0}^∞ Pn(x)tⁿ = (1 − 2xt + t²) Σ_{n=1}^∞ nPn(x)t^{n−1} = (1 − 2xt + t²) Σ_{n=0}^∞ (n + 1)P_{n+1}(x)tⁿ.
Equating the coefficients of tⁿ, we obtain
xPn(x) − P_{n−1}(x) = (n + 1)P_{n+1}(x) − 2nxPn(x) + (n − 1)P_{n−1}(x),
i.e.,
(n + 1)P_{n+1}(x) = (2n + 1)xPn(x) − nP_{n−1}(x).
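The recurrence (n + 1)P_{n+1}(x) = (2n + 1)xPn(x) − nP_{n−1}(x) gives a convenient way to evaluate Pn; the sketch below (hypothetical helper name, not from the notes) checks it against the standard explicit forms of P2, …, P5:

```python
def legendre(n, x):
    """Evaluate P_n(x) via (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x  # P_0 and P_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

x = 0.6
assert abs(legendre(2, x) - (3 * x**2 - 1) / 2) < 1e-12
assert abs(legendre(3, x) - (5 * x**3 - 3 * x) / 2) < 1e-12
assert abs(legendre(4, x) - (35 * x**4 - 30 * x**2 + 3) / 8) < 1e-12
assert abs(legendre(5, x) - (63 * x**5 - 70 * x**3 + 15 * x) / 8) < 1e-12
```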
2. Differentiating the generating function with respect to x,
t(1 − 2xt + t²)^{−3/2} = Σ_{n=0}^∞ P′n(x)tⁿ,
while differentiating with respect to t,
(x − t)(1 − 2xt + t²)^{−3/2} = Σ_{n=1}^∞ nPn(x)t^{n−1}.
Hence,
(x − t)t(1 − 2xt + t²)^{−3/2} = Σ_{n=1}^∞ nPn(x)tⁿ = Σ_{n=0}^∞ nPn(x)tⁿ.
Thus,
(x − t) Σ_{n=0}^∞ P′n(x)tⁿ = Σ_{n=0}^∞ nPn(x)tⁿ.
Equating the coefficients of tⁿ, we obtain nPn(x) = xP′n(x) − P′_{n−1}(x).
3. Differentiating the recurrence relation in (1) with respect to x and then using the expression
for xPn0 (x) from (2), we get the result in (3).
4. Differentiating the recurrence relation in (1) with respect to x gives
(n + 1)P′_{n+1}(x) = (2n + 1)Pn(x) + (n + 1)xP′n(x) + n[xP′n(x) − P′_{n−1}(x)],
and applying (2) to the bracketed term yields (4).
4. Prove that for every polynomial q(x) of degree n, there exists a unique (n + 1)-tuple (a0, a1, …, an) of real numbers such that q(x) = a0P0(x) + a1P1(x) + ⋯ + anPn(x).
(Hint: use induction on the degree.)
4.3 Power series solution around singular points
2. Consider the Cauchy equation: x²y″ + 2xy′ − 2y = 0. This takes the form (1) with
p(x) = 2/x, q(x) = −2/x².
Note that x = 0 is the only singular point of this DE.
Definition 4.8. A singular point x0 R of the DE (1) is called a regular singular point if
(x x0 )p(x) and (x x0 )2 q(x) are analytic at x0 . Otherwise, x0 is called an irregular singular
point of (1).
Example 4.9. Consider x²(x − 2)y″ + 2y′ + (x + 1)y = 0. This takes the form (1) with
p(x) = 2/(x²(x − 2)), q(x) = (x + 1)/(x²(x − 2)).
Note that
xp(x) = 2/(x(x − 2)), x²q(x) = (x + 1)/(x − 2),
(x − 2)p(x) = 2/x², (x − 2)²q(x) = (x + 1)(x − 2)/x².
We see that x = 0 is an irregular singular point, while x = 2 is a regular singular point.
Consider now an equation of the form
x²y″ + xb(x)y′ + c(x)y = 0, (2)
where b(x) and c(x) are analytic at 0; it reduces to the Euler-Cauchy equation when b(x) and c(x) are constant functions. By the Frobenius method, we look for a solution of the form
y = x^r Σ_{n=0}^∞ anxⁿ
for some real or complex number r and for some real numbers a0, a1, a2, … with a0 ≠ 0. Substituting into (2) gives
Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r} + b(x) Σ_{n=0}^∞ (n + r)anx^{n+r} + c(x) Σ_{n=0}^∞ anx^{n+r} = 0. (3)
Let
b(x) = Σ_{n=0}^∞ bnxⁿ, c(x) = Σ_{n=0}^∞ cnxⁿ.
Comparing coefficients of x^r, we get
[r(r − 1) + b0r + c0]a0 = 0.
Since a0 ≠ 0, the number r must satisfy the indicial equation r(r − 1) + b0r + c0 = 0.
Let r1, r2 be the roots of the indicial equation. Then one of the solutions is
y1(x) = x^{r1} Σ_{n=0}^∞ anxⁿ,
Example 4.11. Let us find linearly independent solutions for the Euler-Cauchy equation:
x²y″ + b0xy′ + c0y = 0.
Note that this is of the form (2) with b(x) = b0, c(x) = c0, constants. Assuming a solution is of the form y = x^r Σ_{n=0}^∞ anxⁿ, we obtain
Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r} + b0 Σ_{n=0}^∞ (n + r)anx^{n+r} + c0 Σ_{n=0}^∞ anx^{n+r} = 0.
Now, equating the coefficient of x^r to 0, we get the indicial equation as [r(r − 1) + b0r + c0]a0 = 0, a0 ≠ 0, so that
r² − (1 − b0)r + c0 = 0.
Equating the coefficient of x^{n+r} to 0 for n ∈ N gives
[(n + r)(n + r − 1) + b0(n + r) + c0]an = 0 ∀n ∈ N.
We can take an = 0 for all n ∈ N. Thus, y1(x) = x^r. The other solution is given by
y2(x) = y1(x) ∫ (e^{−∫p(x)dx}/[y1(x)]²) dx, p(x) := b0/x,
i.e.,
y2(x) = x^r ∫ (x^{−b0}/x^{2r}) dx = x^r ∫ dx/x^{2r+b0}.
If r is a double root, then 2r + b0 = 1, so that
y2(x) = x^r ln(x).
Example 4.12. Consider the DE:
x(x − 1)y″ + (3x − 1)y′ + y = 0. (∗)
This is of the form (2) with b(x) = (3x − 1)/(x − 1), c(x) = x/(x − 1). Now, taking y = x^r Σ_{n=0}^∞ anxⁿ, we obtain from (∗):
x(x − 1)y″ = (x² − x) Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r−2}
= Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r} − Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r−1},
(3x − 1)y′ = (3x − 1) Σ_{n=0}^∞ (n + r)anx^{n+r−1}
= Σ_{n=0}^∞ 3(n + r)anx^{n+r} − Σ_{n=0}^∞ (n + r)anx^{n+r−1}.
Hence, (∗) gives:
Σ_{n=0}^∞ [(n + r)(n + r − 1) + 3(n + r) + 1]anx^{n+r} − Σ_{n=0}^∞ [(n + r)(n + r − 1) + (n + r)]anx^{n+r−1} = 0.
Equating the coefficient of x^{r−1} to 0, we get the indicial equation as r(r − 1) + r = 0, i.e., r² = 0. Thus, r = 0 is a double root of the indicial equation. Hence, we obtain:
Σ_{n=0}^∞ [n(n − 1) + 3n + 1]anxⁿ − Σ_{n=1}^∞ [n(n − 1) + n]anx^{n−1} = 0,
i.e.,
Σ_{n=0}^∞ (n + 1)²anxⁿ − Σ_{n=1}^∞ n²anx^{n−1} = 0, i.e., Σ_{n=0}^∞ (n + 1)²anxⁿ − Σ_{n=0}^∞ (n + 1)²a_{n+1}xⁿ = 0.
Hence a_{n+1} = an for every n ∈ N0, and taking a0 = 1 we obtain the first solution
y1(x) = Σ_{n=0}^∞ xⁿ = 1/(1 − x) (|x| < 1).
Now,
y2(x) = y1(x) ∫ (e^{−∫p dx}/[y1(x)]²) dx, p(x) := (3x − 1)/(x(x − 1)).
Note that
∫p(x)dx = ∫ 3/(x − 1) dx − ∫ 1/(x(x − 1)) dx = ∫ 3/(x − 1) dx + ∫ 1/x dx − ∫ 1/(x − 1) dx
= 3 ln|x − 1| + ln|x| − ln|x − 1| = 2 ln|x − 1| + ln|x| = ln|(x − 1)²x|,
so that
e^{−∫p dx}/[y1(x)]² = 1/(|(x − 1)²x| [y1(x)]²) = 1/x (up to sign).
Thus,
y2(x) = ln(x)/(1 − x).
Example 4.13. Consider the DE:
x²(x² − 1)y″ − (x² + 1)xy′ + (x² + 1)y = 0. (∗)
This is of the form (2) with b(x) = −(x² + 1)/(x² − 1), c(x) = (x² + 1)/(x² − 1). Now, taking y = x^r Σ_{n=0}^∞ anxⁿ, we obtain from (∗):
(x² − 1)x²y″ = (x² − 1) Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r}
= Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r+2} − Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r},
(x² + 1)xy′ = (x² + 1) Σ_{n=0}^∞ (n + r)anx^{n+r}
= Σ_{n=0}^∞ (n + r)anx^{n+r+2} + Σ_{n=0}^∞ (n + r)anx^{n+r},
(x² + 1)y = Σ_{n=0}^∞ anx^{n+r+2} + Σ_{n=0}^∞ anx^{n+r}.
The coefficient of x^r gives the indicial equation r² − 1 = 0; take r = 1. Then the equation reduces to
Σ_{n=0}^∞ n²anx^{n+3} − Σ_{n=0}^∞ n(n + 2)anx^{n+1} = 0.
Hence, an = 0 for all n ∈ N, so that y(x) = a0x. Taking y1(x) = x, we obtain the second solution y2 as
y2(x) = y1 ∫ (e^{−∫p dx}/y1²) dx,
where
p(x) = −(x² + 1)/((x² − 1)x) = 1/x − 1/(x − 1) − 1/(x + 1).
Hence ∫p(x)dx = ln|x| − ln|x² − 1|, so that e^{−∫p dx} = (x² − 1)/x, and
y2(x) = y1 ∫ (e^{−∫p dx}/y1²) dx = x ∫ ((x² − 1)/x)(1/x²) dx = x ∫ (1/x − 1/x³) dx = x[ln(x) + 1/(2x²)] = x ln(x) + 1/(2x).
Thus,
y1 = x, y2 = x ln(x) + 1/(2x)
are linearly independent solutions.
Remark 4.14. It can be seen that if we take the solution as y = x^r Σ_{n=0}^∞ Anxⁿ with r = −1, then we arrive at An = 0 for all n, which violates our requirement a0 ≠ 0, and the resulting expression will not give a solution.
Consider the Bessel equation
x²y″ + xy′ + (x² − ν²)y = 0,
i.e.,
y″ + p(x)y′ + q(x)y = 0, p(x) = 1/x, q(x) = (x² − ν²)/x²,
where p, q are such that xp(x) and x²q(x) are analytic at 0, i.e., 0 is a regular singular point. Thus, the Frobenius method can be applied.
Taking a solution y of the form y = x^r Σ_{n=0}^∞ anxⁿ, we have
Σ_{n=0}^∞ (n + r)(n + r − 1)anx^{n+r} + Σ_{n=0}^∞ (n + r)anx^{n+r} + Σ_{n=2}^∞ a_{n−2}x^{n+r} − ν² Σ_{n=0}^∞ anx^{n+r} = 0.
The indicial equation is r² − ν² = 0. Taking r = ν, comparing coefficients gives a1 = 0 and a_n = −a_{n−2}/(n(n + 2ν)) for n ≥ 2, so all odd coefficients vanish. With the customary choice a0 := 1/(2^ν Γ(ν + 1)), and recalling that Γ(α + 1) = αΓ(α), we have
a2 = −a0/(2²(1 + ν)) = −1/(2^{2+ν}(ν + 1)Γ(ν + 1)) = −1/(2^{2+ν}Γ(ν + 2)),
a4 = −a2/(2²·2(2 + ν)) = (−1)²/(2^{4+ν}2!Γ(ν + 3)),
and in general
a_{2n} = (−1)ⁿ/(2^{2n+ν} n! Γ(ν + n + 1)).
The corresponding solution is
Jν(x) = Σ_{n=0}^∞ ((−1)ⁿ/(2^{2n+ν} n! Γ(n + ν + 1))) x^{2n+ν},
which is called the Bessel function of the first kind of order ν.
Observe:
If ν is not an integer, then Jν(x) and J_{−ν}(x) are linearly independent solutions.
Now, for an integer k, for obtaining a second solution of the Bessel equation which is linearly independent of Jk, we can use the general method, i.e., write the Bessel equation as
y″ + p(x)y′ + q(x)y = 0
and, knowing a solution y1, obtain y2 := y1(x) ∫ (e^{−∫p(x)dx}/y1²) dx. Note that
p(x) = 1/x, q(x) = (x² − k²)/x².
Thus, the second solution according to the above formula is
Yk(x) = Jk(x) ∫ dx/(x[Jk(x)]²).
Recurrence formulae:
1. (x^ν Jν(x))′ = x^ν J_{ν−1}(x).
2. (x^{−ν} Jν(x))′ = −x^{−ν} J_{ν+1}(x).
3. J_{ν−1}(x) + J_{ν+1}(x) = (2ν/x) Jν(x).
Proofs:
Note that

$$(x^{\nu}J_{\nu}(x))' = \sum_{n=0}^{\infty}(-1)^n\frac{(2n+2\nu)x^{2n+2\nu-1}}{2^{2n+\nu}\,n!\,\Gamma(n+\nu+1)}
= \sum_{n=0}^{\infty}(-1)^n\frac{2(n+\nu)x^{2n+2\nu-1}}{2^{2n+\nu}\,n!\,(n+\nu)\Gamma(n+\nu)}
= \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+2\nu-1}}{2^{2n+\nu-1}\,n!\,\Gamma(n+\nu)}
= x^{\nu}\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+\nu-1}}{2^{2n+\nu-1}\,n!\,\Gamma(n+\nu)}
= x^{\nu}J_{\nu-1}(x).$$
This proves (1). To prove (2), note that

$$(x^{-\nu}J_{\nu}(x))' = \sum_{n=1}^{\infty}(-1)^n\frac{2n\,x^{2n-1}}{2^{2n+\nu}\,n!\,\Gamma(n+\nu+1)}
= \sum_{n=0}^{\infty}(-1)^{n+1}\frac{2(n+1)x^{2n+1}}{2^{2n+\nu+2}\,(n+1)!\,\Gamma(n+\nu+2)}
= \sum_{n=0}^{\infty}(-1)^{n+1}\frac{x^{2n+1}}{2^{2n+\nu+1}\,n!\,\Gamma(n+\nu+2)}
= -x^{-\nu}\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+\nu+1}}{2^{2n+\nu+1}\,n!\,\Gamma(n+\nu+2)}
= -x^{-\nu}J_{\nu+1}(x).$$

Expanding the derivatives in (1) and (2) gives $J_{\nu-1}(x) = J_{\nu}'(x) + \frac{\nu}{x}J_{\nu}(x)$ and $J_{\nu+1}(x) = \frac{\nu}{x}J_{\nu}(x) - J_{\nu}'(x)$; adding these yields (3). Subtracting them gives the companion identity

$$J_{\nu-1}(x) - J_{\nu+1}(x) = x^{-\nu}(x^{\nu}J_{\nu}(x))' + x^{\nu}(x^{-\nu}J_{\nu}(x))'
= x^{-\nu}[x^{\nu}J_{\nu}'(x) + \nu x^{\nu-1}J_{\nu}(x)] + x^{\nu}[x^{-\nu}J_{\nu}'(x) - \nu x^{-\nu-1}J_{\nu}(x)]
= 2J_{\nu}'(x).$$
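Both recurrence identities can be checked numerically from the series definition of $J_{\nu}$. This sketch uses a truncated series and a finite-difference derivative; both truncations are assumptions of the check, not part of the notes.

```python
import math

def J(nu, x, terms=40):
    # Bessel function of the first kind via its (truncated) power series
    return sum((-1)**n * x**(2*n + nu)
               / (2**(2*n + nu) * math.factorial(n) * math.gamma(n + nu + 1))
               for n in range(terms))

def dJ(nu, x, h=1e-6):
    # central-difference approximation of J_nu'
    return (J(nu, x + h) - J(nu, x - h)) / (2 * h)

nu, x = 1.5, 2.3
# identity (3): J_{nu-1} + J_{nu+1} = (2 nu / x) J_nu
assert abs(J(nu - 1, x) + J(nu + 1, x) - (2 * nu / x) * J(nu, x)) < 1e-10
# companion identity: J_{nu-1} - J_{nu+1} = 2 J_nu'
assert abs(J(nu - 1, x) - J(nu + 1, x) - 2 * dJ(nu, x)) < 1e-6
```

The same checks pass for other choices of $\nu > 0$ and moderate $x > 0$, where the truncated series is accurate.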
Using the fact $\Gamma(\tfrac{1}{2}) = \sqrt{\pi}$, it can be shown (verify!) that

$$J_{1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\sin x, \qquad J_{-1/2}(x) = \sqrt{\frac{2}{\pi x}}\,\cos x.$$
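The two closed forms can be verified numerically against the series definition; this is a sketch using the same truncated series as above.

```python
import math

def J(nu, x, terms=40):
    # truncated power series for the Bessel function of the first kind
    return sum((-1)**n * x**(2*n + nu)
               / (2**(2*n + nu) * math.factorial(n) * math.gamma(n + nu + 1))
               for n in range(terms))

for x in (0.5, 1.0, 2.0, 5.0):
    assert abs(J(0.5, x) - math.sqrt(2 / (math.pi * x)) * math.sin(x)) < 1e-10
    assert abs(J(-0.5, x) - math.sqrt(2 / (math.pi * x)) * math.cos(x)) < 1e-10
```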
Definition 4.15. Functions $f$ and $g$ defined on an interval $[a, b]$ are said to be orthogonal with respect to a nonzero weight function $w$ if

$$\int_a^b f(x)g(x)w(x)\,dx = 0.$$

[Here, we assume that the above integral exists; that is the case if, for example, the functions are continuous, or bounded and piecewise continuous.]
Note that

$$\int_0^{2\pi}\sin(nx)\sin(mx)\,dx = \begin{cases}\pi & \text{if } n = m,\\ 0 & \text{if } n \ne m,\end{cases}
\qquad
\int_0^{2\pi}\cos(nx)\cos(mx)\,dx = \begin{cases}\pi & \text{if } n = m,\\ 0 & \text{if } n \ne m,\end{cases}$$

$$\int_0^{2\pi}\sin(nx)\cos(mx)\,dx = 0.$$

Thus, writing

$$f_{2n-2}(x) = \cos(nx), \quad f_{2n-1}(x) = \sin(nx) \quad\text{for } n \in \mathbb{N},$$

$(f_n)$ is an orthogonal sequence of functions with respect to $w = 1$.
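The trigonometric orthogonality relations above can be checked by numerical quadrature; this sketch uses a composite Simpson rule (the rule and the step count are assumptions of the check).

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def ip(f, g):
    # inner product on [0, 2*pi] with weight w = 1
    return simpson(lambda x: f(x) * g(x), 0.0, 2 * math.pi)

assert abs(ip(lambda x: math.sin(2 * x), lambda x: math.sin(3 * x))) < 1e-8
assert abs(ip(lambda x: math.cos(2 * x), lambda x: math.cos(3 * x))) < 1e-8
assert abs(ip(lambda x: math.sin(2 * x), lambda x: math.cos(2 * x))) < 1e-8
assert abs(ip(lambda x: math.sin(3 * x), lambda x: math.sin(3 * x)) - math.pi) < 1e-8
```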
Writing $\langle f, g\rangle_w := \int_a^b f(x)g(x)w(x)\,dx$, note that

$$\langle f, f\rangle_w \ge 0, \qquad \langle cf, g\rangle_w = c\,\langle f, g\rangle_w, \qquad \langle f, f\rangle_w = 0 \iff f = 0.$$
Exercise 4.16. Let $f_1, f_2, \ldots, f_n$ be linearly independent continuous functions. Let $g_1 = f_1$ and, for $j = 2, \ldots, n$, define $g_2, \ldots, g_n$ iteratively as follows:

$$g_j = f_j - \sum_{i=1}^{j-1}\frac{\langle f_j, g_i\rangle_w}{\langle g_i, g_i\rangle_w}\,g_i.$$

Show that $g_1, \ldots, g_n$ are orthogonal with respect to $w$.

Definition 4.18. A sequence $(f_n)$ on $[a, b]$ is said to be an orthonormal sequence of functions with respect to $w$ if $(f_n)$ is an orthogonal sequence with respect to $w$ and $\langle f_n, f_n\rangle_w = 1$ for every $n \in \mathbb{N}$.

Exercise 4.19. Let $f_j(x) = x^{j-1}$ for $j \in \mathbb{N}$. Find $g_1, g_2, \ldots$ as per the formula in Exercise 4.16 with $w(x) = 1$ and $[a, b] = [-1, 1]$. Observe that, for each $n \in \mathbb{N}$, $g_n$ is a scalar multiple of the Legendre polynomial $P_{n-1}$.
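A numerical sketch of Exercise 4.19 (the standard Gram–Schmidt formula is assumed here): applying the process to $1, x, x^2$ on $[-1, 1]$ with $w = 1$ reproduces scalar multiples of $P_0, P_1, P_2$, where $P_2(x) = (3x^2 - 1)/2$.

```python
def simpson(f, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def ip(f, g):
    # inner product on [-1, 1] with weight w = 1
    return simpson(lambda x: f(x) * g(x), -1.0, 1.0)

def gram_schmidt(fs):
    gs = []
    for f in fs:
        # g_j = f_j - sum_{i<j} <f_j, g_i>/<g_i, g_i> g_i
        coeffs = [(ip(f, g) / ip(g, g), g) for g in gs]
        gs.append(lambda x, f=f, cs=coeffs: f(x) - sum(c * g(x) for c, g in cs))
    return gs

g0, g1, g2 = gram_schmidt([lambda x: 1.0, lambda x: x, lambda x: x * x])
# pairwise orthogonality
assert abs(ip(g0, g1)) < 1e-8 and abs(ip(g0, g2)) < 1e-8 and abs(ip(g1, g2)) < 1e-8
# g2(x) = x^2 - 1/3 = (2/3) P2(x), a scalar multiple of the Legendre polynomial P2
assert abs(g2(0.5) - (0.25 - 1.0 / 3.0)) < 1e-8
```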
4.4.1 Orthogonality of Legendre polynomials

Recall that $P_n$ satisfies Legendre's equation $[(1-x^2)P_n']' + \lambda_nP_n = 0$ with $\lambda_n = n(n+1)$. Hence

$$[(1-x^2)P_n']'P_m + \lambda_nP_nP_m = 0, \qquad [(1-x^2)P_m']'P_n + \lambda_mP_mP_n = 0$$

$$\Longrightarrow\quad \{[(1-x^2)P_n']'P_m - [(1-x^2)P_m']'P_n\} + (\lambda_n - \lambda_m)P_nP_m = 0,$$

i.e.,

$$[(1-x^2)P_n'P_m]' - [(1-x^2)P_m'P_n]' + (\lambda_n - \lambda_m)P_nP_m = 0$$

$$\Longrightarrow\quad \int_{-1}^{1}\{[(1-x^2)P_n'P_m]' - [(1-x^2)P_m'P_n]'\}\,dx + (\lambda_n - \lambda_m)\int_{-1}^{1}P_nP_m\,dx = 0.$$

Since $1 - x^2$ vanishes at $x = \pm 1$, the first integral is 0, i.e.,

$$(\lambda_n - \lambda_m)\int_{-1}^{1}P_nP_m\,dx = 0.$$

Thus,

$$n \ne m \implies \lambda_n \ne \lambda_m \implies \int_{-1}^{1}P_nP_m\,dx = 0.$$
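The orthogonality just proved can be checked numerically. This sketch generates $P_n$ via the Bonnet recurrence $(k+1)P_{k+1} = (2k+1)xP_k - kP_{k-1}$ (a standard fact, assumed here, not proved in the notes) and integrates with Simpson's rule.

```python
def legendre(n, x):
    # Bonnet recurrence: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def ip(n, m):
    return simpson(lambda x: legendre(n, x) * legendre(m, x), -1.0, 1.0)

assert abs(ip(2, 3)) < 1e-10
assert abs(ip(1, 4)) < 1e-10
# for n = m the integral equals 2/(2n+1), another standard fact assumed here
assert abs(ip(3, 3) - 2.0 / 7.0) < 1e-10
```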
Remark 4.20. Recall that for $n \in \mathbb{N}_0$, the Legendre polynomial $P_n(x)$ is of degree $n$ and $P_0, P_1, P_2, \ldots$ are orthogonal. Hence $P_0, P_1, P_2, \ldots$ are linearly independent. We recall the following result from Linear Algebra:
If $q_0, q_1, \ldots, q_n$ are polynomials such that $\deg(q_j) = j$ for $j = 0, 1, \ldots, n$, then every polynomial $q$ of degree at most $n$ can be written as

$$q = c_0q_0 + c_1q_1 + \cdots + c_nq_n.$$

In the above, if $q_0, q_1, \ldots, q_n$ are also orthogonal, i.e., $\langle q_j, q_k\rangle = 0$ for $j \ne k$, then we obtain

$$c_j = \frac{\langle q, q_j\rangle}{\langle q_j, q_j\rangle}, \qquad j = 0, 1, \ldots, n.$$

Thus,

$$q = \sum_{j=0}^{n}c_jq_j = \sum_{j=0}^{n}\frac{\langle q, q_j\rangle}{\langle q_j, q_j\rangle}\,q_j.$$
In particular:

For every continuous function $f$ defined on a closed and bounded interval $[a, b]$, there exists a sequence $(q_n)$ of polynomials such that $(q_n)$ converges to $f$ uniformly on $[a, b]$, i.e., for every $\varepsilon > 0$ there exists a positive integer $N$ such that

$$\max_{x\in[a,b]}|f(x) - q_n(x)| < \varepsilon \quad\text{for all } n \ge N.$$

The above result is known as the Weierstrass approximation theorem. Using the above result it can be shown that:

If $q_0, q_1, \ldots$ are nonzero orthogonal polynomials on $[a, b]$ such that $\max_{0\le j\le n}\deg(q_j) \le n$ for every $n$, then every continuous function $f$ defined on $[a, b]$ can be represented as

$$f = \sum_{j=0}^{\infty}c_jq_j, \qquad c_j := \frac{\langle f, q_j\rangle}{\langle q_j, q_j\rangle}, \quad j \in \mathbb{N}_0. \tag{$*$}$$
The expansion in ($*$) above is called the Fourier expansion of $f$ with respect to the orthogonal polynomials $q_n$, $n \in \mathbb{N}_0$. If we take $P_0, P_1, P_2, \ldots$ on $[-1, 1]$, then the corresponding Fourier expansion is known as the Fourier–Legendre expansion.
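An illustrative sketch (the example function $x^3$ is chosen here, not taken from the notes): computing the coefficients $c_j = \langle f, P_j\rangle/\langle P_j, P_j\rangle$ for $f(x) = x^3$ recovers the finite Fourier–Legendre expansion $x^3 = \tfrac{3}{5}P_1(x) + \tfrac{2}{5}P_3(x)$.

```python
def legendre(n, x):
    # Bonnet recurrence for the Legendre polynomials
    p0, p1 = 1.0, x
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * x * p1 - k * p0) / (k + 1)
    return p1

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

def fourier_legendre_coeff(f, j):
    # c_j = <f, P_j> / <P_j, P_j> on [-1, 1]
    num = simpson(lambda x: f(x) * legendre(j, x), -1.0, 1.0)
    den = simpson(lambda x: legendre(j, x) ** 2, -1.0, 1.0)
    return num / den

f = lambda x: x ** 3
coeffs = [fourier_legendre_coeff(f, j) for j in range(5)]
assert abs(coeffs[1] - 3.0 / 5.0) < 1e-9
assert abs(coeffs[3] - 2.0 / 5.0) < 1e-9
assert all(abs(coeffs[j]) < 1e-9 for j in (0, 2, 4))
```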
Recall that for a positive integer $n \in \mathbb{N}$, the Bessel function of the first kind of order $n$ is given by

$$J_n(x) = \sum_{j=0}^{\infty}\frac{(-1)^j}{2^{2j+n}\,j!\,\Gamma(n+j+1)}\,x^{2j+n}.$$
THEOREM 4.21. If $\alpha$ and $\beta$ are zeros of $J_n$, then, on the interval $[0, 1]$,

$$\int_0^1 xJ_n(\alpha x)J_n(\beta x)\,dx = \begin{cases}0 & \text{if } \alpha \ne \beta,\\[2pt] \tfrac{1}{2}J_{n+1}^2(\alpha) & \text{if } \alpha = \beta.\end{cases}$$

Now, let

$$u(x) = J_n(\alpha x), \qquad v(x) = J_n(\beta x).$$

Thus, we have

$$x^2u'' + xu' + (\alpha^2x^2 - n^2)u = 0, \qquad x^2v'' + xv' + (\beta^2x^2 - n^2)v = 0,$$

i.e.,

$$xu'' + u' + \Big(\alpha^2x - \frac{n^2}{x}\Big)u = 0, \qquad xv'' + v' + \Big(\beta^2x - \frac{n^2}{x}\Big)v = 0$$

$$\Longrightarrow\quad v\Big[xu'' + u' + \Big(\alpha^2x - \frac{n^2}{x}\Big)u\Big] = 0, \qquad u\Big[xv'' + v' + \Big(\beta^2x - \frac{n^2}{x}\Big)v\Big] = 0$$

$$\Longrightarrow\quad x[vu'' - uv''] + [vu' - uv'] + (\alpha^2 - \beta^2)xuv = 0,$$

i.e.,

$$\frac{d}{dx}\big[x(vu' - uv')\big] + (\alpha^2 - \beta^2)xuv = 0$$

$$\Longrightarrow\quad \int_0^1\frac{d}{dx}\big[x(vu' - uv')\big]\,dx + (\alpha^2 - \beta^2)\int_0^1 xuv\,dx = 0.$$

Since $u(1) = J_n(\alpha) = 0$ and $v(1) = J_n(\beta) = 0$, the first integral vanishes. Hence,

$$\alpha \ne \beta \implies \int_0^1 xJ_n(\alpha x)J_n(\beta x)\,dx = 0.$$
Next, we consider the case $\alpha = \beta$. Multiplying $x^2u'' + xu' + (\alpha^2x^2 - n^2)u = 0$ by $2u'$, we get

$$2x^2u'u'' + 2x(u')^2 + 2(\alpha^2x^2 - n^2)u'u = 0,$$

i.e.,

$$[x^2(u')^2]' + 2(\alpha^2x^2 - n^2)u'u = 0.$$

Also,

$$[\alpha^2x^2u^2 - n^2u^2]' = \alpha^2(2x^2uu' + 2xu^2) - n^2(2uu') = 2(\alpha^2x^2 - n^2)u'u + 2\alpha^2xu^2.$$

Thus,

$$[x^2(u')^2]' + [\alpha^2x^2u^2 - n^2u^2]' - 2\alpha^2xu^2 = 0$$

$$\Longrightarrow\quad \int_0^1[x^2(u')^2]'\,dx + \int_0^1[\alpha^2x^2u^2 - n^2u^2]'\,dx - 2\alpha^2\int_0^1 xu^2\,dx = 0,$$

i.e.,

$$[x^2(u')^2]_0^1 + [\alpha^2x^2u^2 - n^2u^2]_0^1 - 2\alpha^2\int_0^1 xu^2\,dx = 0.$$

Since $u(1) = J_n(\alpha) = 0$, $u'(1) = \alpha J_n'(\alpha)$, and the boundary terms at 0 vanish, this gives

$$\int_0^1 x[J_n(\alpha x)]^2\,dx = \frac{1}{2}[J_n'(\alpha)]^2 = \frac{1}{2}J_{n+1}^2(\alpha).$$

The last equality follows from the identity $(x^{-n}J_n(x))' = -x^{-n}J_{n+1}(x)$, i.e., $J_n'(x) - \frac{n}{x}J_n(x) = -J_{n+1}(x)$, so that taking $x = \alpha$ (where $J_n(\alpha) = 0$) we get $J_n'(\alpha) = -J_{n+1}(\alpha)$.
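Theorem 4.21 can be verified numerically. The first two positive zeros of $J_0$ used below ($\approx 2.404826$ and $\approx 5.520078$) are standard tabulated values assumed here; the series and the quadrature are truncations.

```python
import math

def J(n, x, terms=40):
    # truncated power series for the Bessel function of the first kind
    return sum((-1)**k * x**(2*k + n)
               / (2**(2*k + n) * math.factorial(k) * math.gamma(n + k + 1))
               for k in range(terms))

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if j % 2 else 2) * f(a + j * h) for j in range(1, n))
    return s * h / 3

alpha, beta = 2.404825557695773, 5.520078110286311  # first two zeros of J0
# alpha != beta: the weighted integral vanishes
assert abs(simpson(lambda x: x * J(0, alpha * x) * J(0, beta * x), 0.0, 1.0)) < 1e-8
# alpha == beta: the integral equals J_1(alpha)^2 / 2
lhs = simpson(lambda x: x * J(0, alpha * x) ** 2, 0.0, 1.0)
assert abs(lhs - 0.5 * J(1, alpha) ** 2) < 1e-8
```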
5 Sturm–Liouville problem (SLP)

Definition 5.1. For continuous real valued functions $p, q, r$ defined on an interval $[a, b]$ such that $r'$ exists and is continuous and $p(x) > 0$ for all $x \in [a, b]$, consider the differential equation

$$[r(x)y']' + q(x)y + \lambda p(x)y = 0, \quad x \in [a, b], \tag{1}$$

together with the boundary conditions

$$a_1y(a) + a_2y'(a) = 0, \tag{2}$$
$$b_1y(b) + b_2y'(b) = 0, \tag{3}$$

where $(a_1, a_2) \ne (0, 0)$ and $(b_1, b_2) \ne (0, 0)$. The problem of determining a scalar $\lambda$ and a corresponding nonzero function $y$ satisfying (1)–(3) is called a Sturm–Liouville problem (SLP). A scalar $\lambda$ (real or complex number) for which there is a nonzero function $y$ satisfying (1)–(3) is called an eigenvalue of the SLP, and in that case the function $y$ is called the corresponding eigenfunction.
THEOREM 5.2. Under the assumptions on $p, q, r$ given in Definition 5.1, the set of all eigenvalues of the SLP is a countably infinite set.⁴
THEOREM 5.3. Eigenfunctions corresponding to distinct eigenvalues are orthogonal on [a, b] with
respect to the weight function p(x).
Proof. Suppose $\lambda_1$ and $\lambda_2$ are eigenvalues of the SLP with corresponding eigenfunctions $y_1$ and $y_2$, respectively. Let us denote

$$Ly := [r(x)y']' + q(x)y.$$

Then $Ly_1 = -\lambda_1py_1$ and $Ly_2 = -\lambda_2py_2$, so that

$$(Ly_1)y_2 - (Ly_2)y_1 = (\lambda_2 - \lambda_1)py_1y_2$$

$$\Longrightarrow\quad \int_a^b[(Ly_1)y_2 - (Ly_2)y_1]\,dx = (\lambda_2 - \lambda_1)\int_a^b py_1y_2\,dx.$$

Note that

$$(Ly_1)y_2 - (Ly_2)y_1 = [(ry_1')y_2 - (ry_2')y_1]'.$$

⁴A set $S$ is said to be countably infinite if it is in one-one correspondence with the set $\mathbb{N}$ of natural numbers. For example, other than $\mathbb{N}$ itself, the set $\mathbb{Z}$ of all integers and the set $\mathbb{Q}$ of all rational numbers are countably infinite. However, the set $\{x \in \mathbb{R} : 0 < x < 1\}$ is not countably infinite. An infinite set which is not countably infinite is called an uncountable set. For example, the set $\{x \in \mathbb{R} : 0 < x < 1\}$ is uncountable; so also is the set of all irrational numbers in $\{x \in \mathbb{R} : 0 < x < 1\}$.
Hence

$$\int_a^b[(Ly_1)y_2 - (Ly_2)y_1]\,dx = [(ry_1')y_2 - (ry_2')y_1](b) - [(ry_1')y_2 - (ry_2')y_1](a).$$

Using the boundary conditions, the last expression above can be shown to be 0. Thus, we obtain

$$(\lambda_2 - \lambda_1)\int_a^b py_1y_2\,dx = [(ry_1')y_2 - (ry_2')y_1](b) - [(ry_1')y_2 - (ry_2')y_1](a) = 0.$$

Therefore, if $\lambda_2 \ne \lambda_1$, we obtain $\displaystyle\int_a^b py_1y_2\,dx = 0$.
THEOREM 5.4. Every eigenvalue of the SLP is real.

Proof. Suppose $\lambda = \alpha + i\beta$ is an eigenvalue of the SLP with eigenfunction $y = u + iv$, where $u, v, \alpha, \beta$ are real. Then $Ly = -\lambda py$, i.e.,

$$Lu + iLv = -p(\alpha u - \beta v) - ip(\beta u + \alpha v).$$

Hence,

$$Lu = -p(\alpha u - \beta v), \qquad Lv = -p(\beta u + \alpha v)$$

$$\Longrightarrow\quad (Lu)v - (Lv)u = \beta p(v^2 + u^2)$$

$$\Longrightarrow\quad \int_a^b[(Lu)v - (Lv)u]\,dx = \beta\int_a^b p(v^2 + u^2)\,dx.$$

But,

$$(Lu)v - (Lv)u = [(ru')v - (rv')u]'.$$

Hence,

$$\int_a^b[(Lu)v - (Lv)u]\,dx = \int_a^b[(ru')v - (rv')u]'\,dx = [(ru')v - (rv')u](b) - [(ru')v - (rv')u](a).$$

Using the fact that $u$ and $v$ satisfy the boundary conditions (2)–(3), it can be shown that the right-hand side is 0. Consequently $\beta\int_a^b p(u^2 + v^2)\,dx = 0$; since $p > 0$ and $u^2 + v^2$ is not identically zero, $\beta = 0$, i.e., every eigenvalue of the SLP is real.
THEOREM 5.5. If $y_1$ and $y_2$ are eigenfunctions corresponding to the same eigenvalue $\lambda$ of the SLP, then $y_1, y_2$ are linearly dependent.

Proof. Suppose $y_1$ and $y_2$ are eigenfunctions corresponding to an eigenvalue $\lambda$ of the SLP. Then we have

$$Ly_1 = -\lambda py_1, \qquad Ly_2 = -\lambda py_2.$$

Hence,

$$(Ly_1)y_2 - (Ly_2)y_1 = 0.$$

But,

$$(Ly_1)y_2 - (Ly_2)y_1 = [(ry_1')y_2 - (ry_2')y_1]' = -[rW(y_1, y_2)]'.$$

Thus $[rW(y_1, y_2)]' = 0$, so that $rW(y_1, y_2)$ is a constant function, say

$$r(x)W(y_1, y_2)(x) = c.$$

But, by the boundary condition (2), both $y_1$ and $y_2$ satisfy $a_1y(a) + a_2y'(a) = 0$ with $(a_1, a_2) \ne (0, 0)$; hence $W(y_1, y_2)(a) = y_1(a)y_2'(a) - y_1'(a)y_2(a) = 0$, so that $c = 0$. Using the assumption that $r$ is not a zero function, it follows that the Wronskian $W(y_1, y_2)$ vanishes, and hence $y_1, y_2$ are linearly dependent.
Example 5.6. For $\lambda \in \mathbb{R}$, consider the SLP:

$$y'' + \lambda y = 0, \qquad y(0) = 0 = y(\pi).$$

Note that, for $\lambda = 0$, the problem has only the zero solution. Hence, 0 is not an eigenvalue of the problem. For $\lambda = -\mu^2 < 0$ ($\mu > 0$), the general solution is

$$y(x) = C_1e^{\mu x} + C_2e^{-\mu x};$$

the conditions $y(0) = 0 = y(\pi)$ force $C_1 = C_2 = 0$, so there are no negative eigenvalues. For $\lambda = \mu^2 > 0$, the general solution is $y(x) = C_1\cos(\mu x) + C_2\sin(\mu x)$. Note that $y(0) = 0$ implies $C_1 = 0$. Now, $y(\pi) = 0$ implies $C_2\sin(\mu\pi) = 0$. Hence, for those values of $\mu$ for which $\sin(\mu\pi) = 0$, we obtain a nonzero solution. Now,

$$\sin(\mu\pi) = 0 \iff \mu = n \ \text{for some } n \in \mathbb{Z}.$$

Thus, the eigenvalues and corresponding eigenfunctions are

$$\lambda_n := n^2, \qquad y_n(x) := \sin(nx), \quad n \in \mathbb{N}.$$
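A numerical sketch of Example 5.6 (the RK4 shooting approach is an illustration, not part of the notes): solving the initial value problem $y(0) = 0$, $y'(0) = 1$ and evaluating $y(\pi)$, the boundary condition $y(\pi) = 0$ is met exactly when $\lambda = n^2$ and missed otherwise.

```python
import math

def shoot(lam, steps=4000):
    # integrate y'' = -lam * y with y(0)=0, y'(0)=1 by classical RK4; return y(pi)
    h = math.pi / steps
    y, v = 0.0, 1.0
    f = lambda y, v: (v, -lam * y)
    for _ in range(steps):
        k1 = f(y, v)
        k2 = f(y + h/2 * k1[0], v + h/2 * k1[1])
        k3 = f(y + h/2 * k2[0], v + h/2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

assert abs(shoot(1.0)) < 1e-8   # lam = 1: solution sin(x) vanishes at pi
assert abs(shoot(4.0)) < 1e-8   # lam = 4: solution sin(2x)/2 vanishes at pi
assert abs(shoot(2.5)) > 1e-2   # lam between eigenvalues: boundary condition fails
```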
Example 5.7. For $\lambda \in \mathbb{R}$, consider the SLP:

$$y'' + \lambda y = 0, \qquad y'(0) = 0 = y'(\pi).$$

Note that, for $\lambda = 0$, $y(x) = \alpha + \beta x$ is the general solution of the DE, and $y'(0) = 0 = y'(\pi)$ imply $\beta = 0$. Hence, $y(x) = 1$ is an eigenfunction corresponding to the eigenvalue 0. For $\lambda = -\mu^2 < 0$ ($\mu > 0$), the general solution is

$$y(x) = C_1e^{\mu x} + C_2e^{-\mu x},$$

and

$$y'(0) = 0 = y'(\pi) \implies C_1 - C_2 = 0, \quad C_1e^{\mu\pi} - C_2e^{-\mu\pi} = 0.$$

Hence, $C_1 = C_2 = 0$, and hence the SLP does not have any negative eigenvalues. For $\lambda = \mu^2 > 0$, the general solution is $y(x) = C_1\cos(\mu x) + C_2\sin(\mu x)$. Then,

$$y'(x) = -C_1\mu\sin(\mu x) + C_2\mu\cos(\mu x).$$

Now, $y'(0) = 0$ implies $C_2 = 0$, and hence $y'(\pi) = 0$ implies $\sin(\mu\pi) = 0$. Note that

$$\sin(\mu\pi) = 0 \iff \mu = n \ \text{for some } n \in \mathbb{Z}.$$

Thus, the eigenvalues and corresponding eigenfunctions are

$$\lambda_n := n^2, \qquad y_n(x) := \cos(nx), \quad n \in \mathbb{N}_0.$$
Exercise 5.8. For $\lambda \in \mathbb{R}$, consider the SLP:

$$y'' + \lambda y = 0, \qquad y(0) = 0, \quad y'(\pi) = 0.$$

Show that the eigenvalues and the corresponding eigenfunctions for the above SLP are given by

$$\lambda_n = \Big(\frac{2n-1}{2}\Big)^2, \qquad y_n(x) = \sin\Big(\frac{2n-1}{2}\,x\Big), \quad n \in \mathbb{N}.$$
Exercise 5.9. Consider the Schrödinger equation

$$-\frac{\hbar^2}{2m}\,\psi''(x) = \lambda\,\psi(x), \qquad x \in [0, \ell],$$

along with the boundary condition

$$\psi(0) = 0 = \psi(\ell).$$

Show that the eigenvalues and the corresponding eigenfunctions for the above SLP are given by

$$\lambda_n = \frac{\hbar^2\pi^2n^2}{2m\ell^2}, \qquad \psi_n(x) = \sqrt{\frac{2}{\ell}}\,\sin\frac{n\pi x}{\ell}, \quad n \in \mathbb{N}.$$
Exercise 5.10. Let

$$Ly := [r(x)y']' + q(x)y.$$

Prove that

$$\langle Ly, z\rangle = \langle y, Lz\rangle \quad\text{for all } y, z \in C^2[a, b] \text{ satisfying the boundary conditions (2)–(3).}$$

If $f = \sum_n c_n\varphi_n$ is an expansion of $f$ in terms of orthogonal eigenfunctions $(\varphi_n)$ with respect to the weight $w$, then it can be seen that $c_n = \dfrac{\langle f, \varphi_n\rangle_w}{\langle \varphi_n, \varphi_n\rangle_w}$.
References

[1] William E. Boyce and Richard C. DiPrima (2012): Elementary Differential Equations, John Wiley & Sons, Inc.