
Mathematics-III 1

MATHEMATICS-III
MATH F211
Dr. Suresh Kumar
Note: Some concepts of Differential Equations are briefly described here to help the students.
The following study material is therefore expected to be useful but not exhaustive for the Mathematics-III
course. For detailed study, the students are advised to attend the lecture/tutorial classes regularly, and
consult the text book prescribed in the handout of the course.

Dr. Suresh Kumar, Department of Mathematics, BITS-Pilani, Pilani Campus


Contents

1 Preliminaries of Differential Equations 4


1.1 Differential equations and their classifications . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.1 Classification based on number of independent variables . . . . . . . . . . . . . . . 4
1.1.2 Classification based on degree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Solutions of DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.1 Explicit solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 Implicit solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Formal solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.4 General and particular solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.5 Singular solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.6 Initial and boundary value problems . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2 First Order Differential Equations 8


2.1 Exact methods for solving first order DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.1 Variable separable DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.2 DE reducible to variable separable . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Exact DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Integrating Factor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3.1 Existence and uniqueness of Integrating Factor . . . . . . . . . . . . . . . . . . . . 12
2.3.2 IF of first order linear DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.3 Bernoulli's DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.3.4 IF of homogeneous DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 Clairaut's DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.5 Existence and uniqueness of solution of IVP . . . . . . . . . . . . . . . . . . . . . . . . . . 14

3 Second Order DE 15
3.1 Second Order LDE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2 Use of known solution to find another . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3 Homogeneous LDE with Constant Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Method of Undetermined Coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.5 Method of Variation of Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.6 Operator Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

4 Qualitative Behavior of Solutions 27


4.1 Sturm Separation Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.2 Normal form of DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27


5 Power Series Solutions and Special Functions 32


5.1 Some Basics of Power Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
5.2 Power series solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
5.3 Gauss's Hypergeometric Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6 Fourier Series 43
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
6.2 Dirichlet's conditions for convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.3 Fourier series for even and odd functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
6.4 Fourier series on arbitrary intervals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

7 Boundary Value Problems 49


7.1 One dimensional wave equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
7.2 One dimensional heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
7.3 The Laplace equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
7.4 Sturm-Liouville Boundary Value Problem (SLBVP) . . . . . . . . . . . . . . . . . . . . . 55
7.4.1 Orthogonality of eigenfunctions . . . . . . . . . . . . . . . . . . . . . . . . . . 56

8 Some Special Functions 58


8.1 Legendre Polynomials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
8.2 Gamma Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
8.3 Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8.3.1 Second solution of Bessel's DE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
8.3.2 Properties of Bessel Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.4 Orthogonal properties of Bessel functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
8.4.1 Fourier-Bessel Series . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

9 Laplace Transforms 69
9.1 Definitions of Laplace and inverse Laplace transforms . . . . . . . . . . . . . . . . . . . . 69
9.2 Laplace transforms of some elementary functions . . . . . . . . . . . . . . . . . . . . . . . 69
9.3 Sufficient conditions for the existence of Laplace transform . . . . . . . . . . . . . . . . . . 70
9.4 Some more Laplace transform formulas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
9.4.1 Laplace transform of a function multiplied by e^{ax} . . . . . . . . . . . . . . . . . 71
9.4.2 Laplace transform of derivatives of a function . . . . . . . . . . . . . . . . . . . . . 71
9.4.3 Laplace transform of integral of a function . . . . . . . . . . . . . . . . . . . . . . . 72
9.4.4 Laplace transform of a function multiplied by x . . . . . . . . . . . . . . . . . . . . 72
9.4.5 Laplace transform of a function divided by x . . . . . . . . . . . . . . . . . . . . . 73
9.5 Solution of DE using Laplace transform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
9.6 Solution of integral equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
9.7 Heaviside or Unit Step Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
9.8 Dirac Delta Function or Unit Impulse Function . . . . . . . . . . . . . . . . . . . . . . . . 77
Chapter 1

Preliminaries of Differential Equations

1.1 Differential equations and their classifications


Differential equation
The mathematical description of any dynamical or physical phenomenon naturally introduces the inde-
pendent and dependent variables. Suppose we blow air into a balloon that inflates in a spherical shape.
Then the radius r of the spherical balloon depends on the amount of air blown into it, and is therefore at
our discretion. So we may treat the variable r as the independent variable. We know that the surface area
S of the spherical balloon depends on r via the relation S = 4πr^2. So, in this example, r is the independent
variable and S is the dependent variable. Also, the rate of change of the surface area S of the balloon with
respect to its radius r is given by the equation dS/dr = 8πr. It is a differential equation that gives us the
rate of change of S with respect to r for any given value of r.
A differential equation may involve more than one independent or dependent variable. For instance,
in the above balloon example, if we allow the variable r to depend on time t, then the time variable
t is independent while r and S both are dependent variables. Also, the governing differential equation
dS/dr = 8πr can be written as

dS/dt = 8πr dr/dt.
Formally, we define a differential equation as follows: any equation (non-identity) involving derivatives
of dependent variable(s) with respect to independent variable(s) is called a differential equation (DE).
Hereafter, we shall use the abbreviation DE for the phrase differential equation as well as for its plural,
differential equations.

Order and Degree


The order of the highest order derivative occurring in a DE is called its order. The power or exponent of
the highest order derivative occurring in the DE is called its degree provided the DE is made free from
radicals or fractions in its derivatives.

Ex. Order of (y'')^3 + 2y' + 3y = x is 2 and degree is 3.

Ex. Order of y^(4) + 2(y')^5 + 3y = 0 is 4 and degree is 1.

Ex. (y''')^(1/2) + y' = 0 can be rewritten as y''' - (y')^2 = 0. So its order is 3 and degree is 1.

1.1.1 Classification based on number of independent variables


DE are classified into two categories based on the number of independent variables.


Ordinary DE
A DE involving derivatives with respect to one independent variable is called ordinary DE.
An ordinary DE of order n, in general, can be expressed in the form

f(x, y, y', ..., y^(n)) = 0.

In particular, a first order ordinary DE is of the form

f(x, y, y') = 0,

while a second order ordinary DE is of the form

f(x, y, y', y'') = 0.

Ex. y = xy' + (y')^2 is a first order ordinary DE.

Ex. y' + xy + x^2 = 0 is a first order ordinary DE.
Ex. (y'')^3 + 2y' + 3y = x is a second order ordinary DE.

Partial DE
A DE involving partial derivatives with respect to two or more independent variables is called partial DE.
For example, the well known Laplace equation

∂^2u/∂x^2 + ∂^2u/∂y^2 = 0

is a partial DE, which carries the second order partial derivatives of the dependent variable u(x, y) with
respect to the independent variables x and y.

Note:
Hereafter, we shall talk about ordinary DE only. So DE shall mean ordinary DE unless otherwise stated.

1.1.2 Classification based on degree


DE are classified into two categories based on the degree.

Linear DE
A DE is said to be linear if the dependent variable and its derivatives occur in the first degree and are not
multiplied together.
A linear DE of order n can be expressed in the form

a_0(x)y^(n) + a_1(x)y^(n-1) + ... + a_{n-1}(x)y' + a_n(x)y = b(x),

where a_0(x) is not identically 0.


For example, y'' + 2y' + 3y = x is a second order linear DE.

Non-linear DE
If a DE is not linear, then it is said to be non-linear.

Ex. y = xy' + (y')^2 is a first order non-linear DE as y' occurs with degree 2.

Ex. yy'' + 4y = 3x^2 is a second order non-linear DE as y and y'' occur in product in the first term.
Ex. y'' + 2y' + 3y^2 = 0 is a second order non-linear DE as y occurs with degree 2.

1.2 Solutions of DE
Consider the nth order DE

f(x, y, y', ..., y^(n)) = 0. (1.1)

We define the following types of solutions of (1.1).

1.2.1 Explicit solution


A function g defined on an interval I is said to be an explicit solution of (1.1) on the interval I if
f(x, g, g', ..., g^(n)) = 0 for all x ∈ I.
For example, y = sin x is an explicit solution of the DE y'' + y = 0 on (-∞, ∞) since y = sin x implies
that y'' + y = -sin x + sin x = 0 for all x ∈ (-∞, ∞).

1.2.2 Implicit solution


A relation h(x, y) = 0 is said to be an implicit solution of (1.1) on an interval I if h(x, y) = 0 yields at
least one explicit solution g of (1.1) on I.
For example, x^2 + y^2 = 1 is an implicit solution of the DE yy' + x = 0 on (-1, 1). For, x^2 + y^2 = 1
yields two functions y = √(1 - x^2) and y = -√(1 - x^2), both of which can be verified to be explicit
solutions of yy' + x = 0 on (-1, 1).

1.2.3 Formal solution


A relation h(x, y) = 0 is said to be a formal solution of (1.1) on an interval I if h(x, y) = 0 does not yield
any explicit solution g of (1.1) on I but satisfies (1.1) on I.
For example, x^2 + y^2 + 1 = 0 is a formal solution of the DE yy' + x = 0. For, implicit differentiation
of the relation x^2 + y^2 + 1 = 0 with respect to x yields the DE yy' + x = 0. However, x^2 + y^2 + 1 = 0
gives y^2 = -1 - x^2. So y is not real for any real x. This in turn implies that x^2 + y^2 + 1 = 0 does not
yield any explicit solution of the given DE.

1.2.4 General and particular solutions


A relation h(x, y, c_1, c_2, ..., c_n) = 0, involving n arbitrary constants c_1, c_2, ..., c_n, is said to be a general
solution of (1.1) on an interval I if it satisfies (1.1) identically on I. Note that the number of arbitrary
constants in the general solution is equal to the order n of the DE (1.1). Further, a solution of (1.1)
obtained by choosing particular values of the arbitrary constants is called a particular solution of (1.1).
For example, y = c_1 sin x + c_2 cos x is the general solution of the DE y'' + y = 0 on (-∞, ∞) since
y = c_1 sin x + c_2 cos x leads to y'' + y = c_1(-sin x + sin x) + c_2(-cos x + cos x) = 0 for all x ∈ (-∞, ∞).
Also, y = sin x is a particular solution of this DE as it can be obtained from the general solution by
choosing c_1 = 1 and c_2 = 0.

1.2.5 Singular solution


A singular solution of (1.1) is a particular solution of (1.1) which can not be obtained from the general
solution h(x, y, c_1, c_2, ..., c_n) = 0 of (1.1) by choosing particular values of the arbitrary constants c_1,
c_2, ..., c_n.
For example, y = cx + c^2 is the general solution of the DE y = xy' + (y')^2. It is easy to verify that
y = -x^2/4 is also a solution of this DE. Further, y = -x^2/4 can not be retrieved from y = cx + c^2 for
any choice of the arbitrary constant c. Hence, y = -x^2/4 is a singular solution of the DE y = xy' + (y')^2.

Note: Considering the types of solutions discussed above, we can say that a solution of (1.1) is any relation
between x and y, explicit or implicit, that does not involve derivatives and satisfies (1.1) identically.

1.2.6 Initial and boundary value problems


Consider the nth order DE (1.1). We know that its general solution involves n arbitrary constants.
Therefore, in order to obtain a particular solution from the general solution of (1.1), we need to find the
values of the n arbitrary constants using n given conditions. If the n given conditions are specified at a
single point, say x_0, in the form

y(x_0) = b_0, y'(x_0) = b_1, ..., y^(n-1)(x_0) = b_{n-1},

then the DE (1.1) with these n conditions is said to be an initial value problem (IVP).
On the other hand, if k conditions are specified at one point, say x_0, while the remaining n - k
conditions are specified at some other point, say x_1, then the DE (1.1) with the given conditions at two
different points is said to be a boundary value problem (BVP).

Ex. y' - y = 0, y(0) = 1 is an IVP. The general solution of y' - y = 0 is y = c_1 e^x. So the condition
y(0) = 1 yields c_1 = 1. So the solution of the given IVP is y = e^x.

Ex. y'' - y = 0, y(0) = 1, y'(0) = 1 is an IVP. The general solution of y'' - y = 0 is y = c_1 e^x + c_2 e^{-x}. So
the conditions y(0) = 1 and y'(0) = 1 yield the relations c_1 + c_2 = 1 and c_1 - c_2 = 1, respectively. Solving
the two, we get c_1 = 1 and c_2 = 0. So the solution of the given IVP is y = e^x.

Ex. y'' + y = 0, y(0) = 1, y(π/2) = 0 is a BVP. The general solution of y'' + y = 0 is y = c_1 sin x + c_2 cos x.
So the conditions y(0) = 1 and y(π/2) = 0 yield c_2 = 1 and c_1 = 0, respectively. So the solution of the
given BVP is y = cos x.
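The BVP computation above (impose the two conditions on the general solution, then solve for the constants) can be checked with a computer algebra system. A minimal sketch using SymPy (SymPy is assumed available; it is not part of the prescribed course material):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')

# General solution of y'' + y = 0 is y = c1*sin(x) + c2*cos(x).
y = c1 * sp.sin(x) + c2 * sp.cos(x)

# Impose the boundary conditions y(0) = 1 and y(pi/2) = 0:
consts = sp.solve([y.subs(x, 0) - 1, y.subs(x, sp.pi / 2)], [c1, c2])
ybvp = y.subs(consts)
print(ybvp)  # cos(x)

# Check that it satisfies the DE identically:
assert sp.simplify(ybvp.diff(x, 2) + ybvp) == 0
```

The same pattern works for the IVPs: replace the two boundary equations by y(0) = 1 and y'(0) = 1.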
Chapter 2

First Order Differential Equations

In general, any first order DE is of the form

g(x, y, y') = 0. (2.1)

Sometimes, it is possible to write the first order DE (2.1) in the canonical form

y' = f(x, y). (2.2)

There is no general method for finding a solution of (2.1) or (2.2).

2.1 Exact methods for solving first order DE


In the following, we present some particular families of first order DE and exact methods to solve the
same.

2.1.1 Variable separable DE


A first order DE is said to be in variable separable form if it can be written as

y' = F(x)G(y), (2.3)

where F(x) is a function of x alone, and G(y) is a function of y alone. Equation (2.3) can be rewritten as

dy/G(y) = F(x)dx,

which, on integration, yields the solution

∫ dy/G(y) = ∫ F(x)dx + C,

where C is a constant of integration.

Ex. 2.1.1. Solve y' = y cos x.

Sol. 2.1.1. y = ce^{sin x}.
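As a quick check of Sol. 2.1.1 (not part of the original notes; SymPy assumed), substitute the claimed general solution back into the DE:

```python
import sympy as sp

# Verify that y = c*e^{sin x} solves y' = y*cos(x) identically.
x, c = sp.symbols('x c')
cand = c * sp.exp(sp.sin(x))                       # claimed general solution
residual = sp.simplify(cand.diff(x) - cand * sp.cos(x))
print(residual)  # 0, i.e. y' - y*cos(x) vanishes for every c
```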

2.1.2 DE reducible to variable separable


There are DE which are not directly variable separable but can be reduced to variable separable form by
using suitable transformation(s). In the following, we present some families of such DE.


DE of the form y' = f(ax + by + c)

Here a, b and c are constants. Such a DE can be reduced to variable separable form by using the
transformation ax + by + c = t.
For, we have a + by' = t', which transforms the DE y' = f(ax + by + c) into the variable separable DE

t' = bf(t) + a

with the general solution

∫ dt/(bf(t) + a) = x + C.

Ex. 2.1.2. Solve y' = sin(x - y).
Sol. 2.1.2. sec(x - y) + tan(x - y) = x + c.

Homogeneous DE
A function h(x, y) is said to be a homogeneous function of degree n if h(tx, ty) = t^n h(x, y). A DE of the
form M(x, y)dx + N(x, y)dy = 0 is said to be homogeneous if M(x, y) and N(x, y) are homogeneous
functions of the same degree. The DE M(x, y)dx + N(x, y)dy = 0 can be rewritten as
y' = -M(x, y)/N(x, y) = f(x, y) (say). Therefore, a DE expressed in the form y' = f(x, y) is homogeneous
if f(tx, ty) = f(x, y), so that f(x, y) = f(1, y/x).
To solve a homogeneous DE, we use the transformation y = vx, where v is a function of x. This gives
y' = v + xv'. So the DE y' = f(x, y) transforms to

v + xv' = f(1, v),

which can be rearranged in the variable separable form

dv/(f(1, v) - v) = dx/x.

Integrating both sides, we get the general solution

∫ dv/(f(1, v) - v) = ln x + C.
Ex. 2.1.3. Solve y' = (x + y)/(x - y).

Sol. 2.1.3. tan^{-1}(y/x) = log √(x^2 + y^2) + c.

DE of the form y' = (ax + by + c)/(px + qy + r)

Here a, b, c, p, q and r are constants. In case a/p = b/q = m (say), we have ax + by = m(px + qy). Then
the transformation px + qy = t transforms the given DE into variable separable form. Now, consider
that a/p ≠ b/q. In this case, we use the transformations x = X + h and y = Y + k, where h and k are
constants to be determined from the equations ah + bk + c = 0 and ph + qk + r = 0. The equation
y' = (ax + by + c)/(px + qy + r) then transforms to

dY/dX = (aX + bY)/(pX + qY),

which is a homogeneous DE in X and Y.

Ex. 2.1.4. Solve y' = (x + y + 4)/(x - y - 6).

Sol. 2.1.4. tan^{-1}((y + 5)/(x - 1)) = log √((x - 1)^2 + (y + 5)^2) + c.

2.2 Exact DE
The first order DE dy/dx = f(x, y) can be written in the canonical form M(x, y)dx + N(x, y)dy = 0, where
f(x, y) = -M(x, y)/N(x, y). It is said to be an exact DE if M dx + N dy is an exact differential of some
function, say F(x, y), that is, M dx + N dy = dF.

For example, y dx + x dy = 0 is an exact DE since y dx + x dy = d(xy).

The following theorem provides the necessary and sufficient condition for a DE to be exact.

Necessary and sufficient condition for exact DE: If M(x, y) and N(x, y) possess continuous first
order partial derivatives, then the DE M(x, y)dx + N(x, y)dy = 0 is exact if and only if ∂M/∂y = ∂N/∂x.
Proof. First assume that the DE M(x, y)dx + N(x, y)dy = 0 is exact. Then by definition, there exists
some function F(x, y) such that

M dx + N dy = dF. (2.4)

Also F(x, y) is a function of x and y. So from the theory of partial differentiation, we have

(∂F/∂x)dx + (∂F/∂y)dy = dF. (2.5)

From (2.4) and (2.5), we obtain

M = ∂F/∂x, N = ∂F/∂y. (2.6)

⇒ ∂M/∂y = ∂^2F/∂y∂x, ∂N/∂x = ∂^2F/∂x∂y. (2.7)

Given that M(x, y) and N(x, y) possess continuous first order partial derivatives, ∂^2F/∂y∂x and
∂^2F/∂x∂y are continuous functions, which in turn implies that ∂^2F/∂y∂x = ∂^2F/∂x∂y. Hence, (2.7) gives

∂M/∂y = ∂N/∂x. (2.8)
Conversely, assume that the condition (2.8) is satisfied. We shall prove that there exists a function
F(x, y) such that equation (2.4), and hence (2.6), is satisfied. Integrating the first of the equations in (2.6)
w.r.t. x, we get

F = ∫ M dx + g(y). (2.9)

⇒ ∂F/∂y = (∂/∂y) ∫ M dx + g'(y)

⇒ N = (∂/∂y) ∫ M dx + g'(y)

⇒ g(y) = ∫ [N - (∂/∂y) ∫ M dx] dy. (2.10)

For (2.10) to determine g(y), the integrand N - (∂/∂y) ∫ M dx must be a function of y only, that is,

(∂/∂x)[N - (∂/∂y) ∫ M dx] = 0

⇒ ∂N/∂x - (∂^2/∂x∂y) ∫ M dx = 0

⇒ ∂N/∂x - (∂/∂y)[(∂/∂x) ∫ M dx] = 0

⇒ ∂N/∂x - ∂M/∂y = 0,

which is true in view of (2.8). This completes the proof.

Note. If the DE M dx + N dy = 0 is exact, then in view of (2.9) and (2.10) the solution F(x, y) = c reads
as

∫ M dx + ∫ [N - (∂/∂y) ∫ M dx] dy = c.
Ex. Test the equation e^y dx + (xe^y + 2y)dy = 0 for exactness and solve it if it is exact.
Sol. Comparing the given equation with M dx + N dy = 0, we get

M = e^y, N = xe^y + 2y.

⇒ ∂M/∂y = e^y = ∂N/∂x.

This shows that the given DE is exact, and therefore its solution is given by

∫ M dx + ∫ [N - (∂/∂y) ∫ M dx] dy = c

⇒ ∫ e^y dx + ∫ [xe^y + 2y - (∂/∂y) ∫ e^y dx] dy = c

⇒ xe^y + ∫ [xe^y + 2y - (∂/∂y)(xe^y)] dy = c

⇒ xe^y + ∫ (xe^y + 2y - xe^y) dy = c

⇒ xe^y + y^2 = c.
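The exactness test and the construction of F(x, y) in this example can be reproduced symbolically. A sketch with SymPy (assumed available), following the formula F = ∫M dx + ∫[N - (∂/∂y)∫M dx] dy from the Note above:

```python
import sympy as sp

x, y = sp.symbols('x y')
M = sp.exp(y)
N = x * sp.exp(y) + 2 * y

# Exactness: dM/dy must equal dN/dx.
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Build the potential F via F = ∫M dx + ∫[N - d/dy ∫M dx] dy.
F = sp.integrate(M, x)
F += sp.integrate(N - sp.diff(F, y), y)
print(F)  # x*exp(y) + y**2, so the solution is x*e^y + y^2 = c

# F is indeed a potential: dF = M dx + N dy.
assert sp.simplify(sp.diff(F, x) - M) == 0
assert sp.simplify(sp.diff(F, y) - N) == 0
```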

2.3 Integrating Factor


If the DE M dx + N dy = 0 is not exact and there exists a function μ(x, y) such that the DE
μ(M dx + N dy) = 0 is exact, then μ(x, y) is called an integrating factor (IF) of the DE M dx + N dy = 0.

To solve a non-exact DE, we therefore need to determine an integrating factor.

2.3.1 Existence and uniqueness of Integrating Factor


Existence of an IF is ensured provided a general solution of the DE M dx + N dy = 0 exists. For, let
the general solution of M dx + N dy = 0 be f(x, y) = c. Then (∂f/∂x)dx + (∂f/∂y)dy = 0. Its comparison
with M dx + N dy = 0 gives

(∂f/∂x)/M = (∂f/∂y)/N = μ(x, y) (say).

Therefore, μM = ∂f/∂x and μN = ∂f/∂y. So μ(M dx + N dy) = (∂f/∂x)dx + (∂f/∂y)dy = df. It implies
that μ(M dx + N dy) = 0 is exact, that is, μ is an IF of M dx + N dy = 0.
Next, assume that μ is an IF and f(x, y) = c is a general solution of the exact DE μ(M dx + N dy) = 0,
so that μ(M dx + N dy) = df. Let F be any function of f. Then μF(f) is also an IF of M dx + N dy = 0, for

μF(f)(M dx + N dy) = F(f)df = d[∫ F(f)df].

This shows that the IF of a DE, if it exists, is not unique.


Now we determine the IF μ(x, y). Since μ(M dx + N dy) = 0 is an exact DE, by the condition of
exactness

∂(μM)/∂y = ∂(μN)/∂x

⇒ μ(∂M/∂y) + M(∂μ/∂y) = μ(∂N/∂x) + N(∂μ/∂x)

⇒ (1/μ)[N(∂μ/∂x) - M(∂μ/∂y)] = ∂M/∂y - ∂N/∂x. (2.11)

We can not determine μ in general from (2.11). If μ happens to be a function of x only, then (2.11)
reduces to

(1/μ)(dμ/dx) = (1/N)(∂M/∂y - ∂N/∂x) = h(x) (say)

⇒ dμ/μ = h(x)dx

⇒ μ = e^{∫h(x)dx}.

Thus, if (1/N)(∂M/∂y - ∂N/∂x) = h(x) is a function of x only, then the IF is μ = e^{∫h(x)dx}.

Similarly, if (1/M)(∂N/∂x - ∂M/∂y) = h(y) is a function of y only, then the IF is μ = e^{∫h(y)dy}.

Ex. Solve (x^2 + y^2 + x)dx + xy dy = 0.

Sol. 3x^4 + 4x^3 + 6x^2y^2 = c.
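For this example, (1/N)(∂M/∂y - ∂N/∂x) = (2y - y)/(xy) = 1/x depends on x alone, so μ = e^{∫dx/x} = x. A SymPy sketch of the whole computation (SymPy assumed; the factor 12 below just clears denominators to match the stated answer):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = x**2 + y**2 + x
N = x * y

# h(x) = (1/N)(My - Nx) is a function of x alone, so mu = exp(∫h dx).
h = sp.simplify((sp.diff(M, y) - sp.diff(N, x)) / N)
mu = sp.exp(sp.integrate(h, x))  # = x

# After multiplying by mu, the equation is exact:
assert sp.simplify(sp.diff(mu * M, y) - sp.diff(mu * N, x)) == 0

# Potential F with dF = mu*M dx + mu*N dy:
F = sp.integrate(mu * M, x)
F += sp.integrate(mu * N - sp.diff(F, y), y)
print(sp.expand(12 * F))  # 3x^4 + 4x^3 + 6x^2*y^2, as stated
```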



2.3.2 IF of first order linear DE


A linear DE (LDE) of first order is of the form

y' + p(x)y = q(x), (2.12)

which can be written in the canonical form

(p(x)y - q(x))dx + dy = 0.

Its comparison with M dx + N dy = 0 gives M = p(x)y - q(x) and N = 1. Here, we find that
(1/N)(∂M/∂y - ∂N/∂x) = p(x) is a function of x only. Therefore, the IF is μ = e^{∫p(x)dx}. Now
multiplying both sides of (2.12) by the IF, we obtain

y'e^{∫p(x)dx} + p(x)ye^{∫p(x)dx} = q(x)e^{∫p(x)dx}

⇒ (d/dx)[ye^{∫p(x)dx}] = q(x)e^{∫p(x)dx}

⇒ ye^{∫p(x)dx} = ∫ q(x)e^{∫p(x)dx} dx + c

is the general solution of the LDE.

Ex. Solve sec x · y' = y + sin x.

Sol. y = ce^{sin x} - 1 - sin x.
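Writing the equation in standard form y' - cos(x)·y = sin(x)cos(x), the IF recipe above gives μ = e^{-sin x}. A SymPy sketch (SymPy assumed) carrying out the recipe and checking the result against the original equation:

```python
import sympy as sp

x, c = sp.symbols('x c')

# Standard form: y' - cos(x)*y = sin(x)*cos(x), so p = -cos(x), q = sin(x)*cos(x).
p = -sp.cos(x)
q = sp.sin(x) * sp.cos(x)
mu = sp.exp(sp.integrate(p, x))          # IF: exp(-sin(x))

# General solution: y = (1/mu)*(∫ q*mu dx + c).
ysol = sp.expand(sp.simplify((sp.integrate(q * mu, x) + c) / mu))
print(ysol)  # mathematically equal to c*exp(sin(x)) - sin(x) - 1

# Residual against the original equation sec(x)*y' = y + sin(x):
res = sp.simplify(sp.sec(x) * ysol.diff(x) - ysol - sp.sin(x))
print(res)   # 0
```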

2.3.3 Bernoulli's DE
A non-linear DE of the form y' + p(x)y = q(x)y^n is called Bernoulli's DE. It can be reduced to an LDE
by dividing it by y^n and then substituting y^{1-n} = z.

Ex. Solve y' + xy = x^3y^3.

Sol. y^{-2} = 1 + x^2 + ce^{x^2}.

2.3.4 IF of homogeneous DE
If M(x, y)dx + N(x, y)dy = 0 is a homogeneous DE, then its IF is 1/(Mx + Ny) provided Mx + Ny ≠ 0.
In case Mx + Ny = 0, the IF is 1/x^2 or 1/y^2 or 1/(xy).

2.4 Clairaut's DE
A Clairaut's DE is of the form

y = xy' + f(y'), (2.13)

where f is any function of y'. Separating x from this equation, we get

x = y/p - f(p)/p, (p = y'). (2.14)

Differentiating with respect to y, we get

1/p = 1/p - (y/p^2)(dp/dy) + (f(p)/p^2)(dp/dy) - (f'(p)/p)(dp/dy) (2.15)

or

[y - f(p) + pf'(p)](dp/dy) = 0. (2.16)

It suggests that either dp/dy = 0 or y = f(p) - pf'(p).
If dp/dy = 0, then p = c (a constant) and we get the general solution of (2.13) given by

y = cx + f(c).

In case y = f(p) - pf'(p), from equation (2.13) we get x = -f'(p).

So the parametric equations x = -f'(t) and y = f(t) - tf'(t) define another solution of (2.13). It is
called the singular solution of (2.13).
It should be noted that the straight lines given by the general solution y = cx + f(c) are tangential
to the curve given by the singular solution x = -f'(t), y = f(t) - tf'(t). Hence, the singular solution is an
envelope of the family of straight lines of the general solution.

Note: In general, a given DE need not possess a solution. For example, |y'| + |y| + 1 = 0 has no
solution. The DE |y'| + |y| = 0 has only one solution, y = 0.

2.5 Existence and uniqueness of solution of IVP


In this section, we discuss some theorems which guarantee the existence/uniqueness of the solution of the
IVP y' = f(x, y), y(x_0) = y_0.

Existence Theorem: Let f(x, y) be continuous in a closed rectangular region R = {(x, y) : |x - x_0| ≤
a, |y - y_0| ≤ b}, and let there exist some constant M > 0 such that |f(x, y)| ≤ M for all (x, y) ∈ R.
Then there exists a solution of the IVP y' = f(x, y), y(x_0) = y_0 in the interval [x_0 - h, x_0 + h], where
h = min{a, b/M}.
The above theorem suggests that through any given point (x_0, y_0) of a closed rectangular region
R, there passes at least one solution curve of the DE y' = f(x, y) provided f(x, y) is continuous and
bounded in R. Also note that this theorem gives sufficient conditions for the existence of a solution,
but not necessary ones. For example, consider the IVP xy' = 3y, y(1) = 1 and the rectangular region
R = {(x, y) : |x| ≤ 2, |y| ≤ 3}. For the given IVP, we have f(x, y) = 3y/x, which is not continuous in R
as it is not continuous at the point (0, 0) in R. However, y = x^3 is a solution of the given IVP.

Uniqueness Theorem: Let f(x, y) be continuous in a closed rectangular region R = {(x, y) : |x - x_0| ≤
a, |y - y_0| ≤ b}, and let there exist some constant M > 0 such that |f(x, y)| ≤ M for all (x, y) ∈ R. Suppose
f(x, y) satisfies the Lipschitz condition in R with respect to y, that is, there exists a constant L such that
|f(x, y_1) - f(x, y_2)| ≤ L|y_1 - y_2| for all (x, y_1), (x, y_2) ∈ R. Then there exists a unique solution of the
IVP y' = f(x, y), y(x_0) = y_0 in the interval [x_0 - h, x_0 + h], where h = min{a, b/M}.

Picard's Theorem: Let f(x, y) and ∂f/∂y be continuous in a closed rectangular region R. If (x_0, y_0) is
any point in R, then there exists some constant h > 0 such that the IVP y' = f(x, y), y(x_0) = y_0 has a
unique solution in the interval [x_0 - h, x_0 + h].
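The proofs behind these theorems are constructive: they build the solution as the limit of the Picard iterates y_{n+1}(x) = y_0 + ∫_{x_0}^{x} f(t, y_n(t)) dt. A minimal SymPy sketch (SymPy assumed; the IVP y' = y, y(0) = 1 is chosen here purely for illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')
f = lambda t, y: y            # right-hand side f(x, y) = y of the sample IVP

# Picard iteration: y_{n+1}(x) = y0 + integral of f(t, y_n(t)) from x0=0 to x.
yn = sp.Integer(1)            # y_0(x) = y0 = 1
for _ in range(5):
    yn = 1 + sp.integrate(f(t, yn.subs(x, t)), (t, 0, x))
print(yn)  # 1 + x + x**2/2 + ...: the partial sums of e^x, the exact solution
```

Each iterate is a Taylor partial sum of e^x, illustrating the convergence the theorems guarantee on [x_0 - h, x_0 + h].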
Chapter 3

Second Order DE

Any second order DE is of the form

f(x, y, y', y'') = 0.

First we discuss the LDE of second order.

3.1 Second Order LDE


The general form of the second order LDE is

y'' + p(x)y' + q(x)y = r(x). (3.1)

If r(x) = 0, it is called homogeneous; otherwise, non-homogeneous. The following theorem guarantees
the existence and uniqueness of solutions of (3.1).

Theorem 3.1.1. (Existence and Uniqueness of Solution): If p(x), q(x) and r(x) are continuous
functions on [a, b] and x_0 is any point in [a, b], then the IVP y'' + p(x)y' + q(x)y = r(x), y(x_0) = y_0,
y'(x_0) = y_0' has a unique solution on [a, b].

Theorem 3.1.2. If p(x) and q(x) are continuous functions on [a, b] and x_0 is any point in [a, b], then the
IVP y'' + p(x)y' + q(x)y = 0, y(x_0) = 0, y'(x_0) = 0 has only the trivial solution y = 0 on [a, b].

Proof. We find that y(x) = 0 satisfies the homogeneous DE y'' + p(x)y' + q(x)y = 0 along with the initial
conditions y(x_0) = 0 and y'(x_0) = 0. So the required result follows from Theorem 3.1.1.

Theorem 3.1.3. (Linearity Principle) If y_1 and y_2 are any two solutions of the homogeneous LDE
y'' + p(x)y' + q(x)y = 0, then c_1y_1 + c_2y_2 is also a solution for any constants c_1 and c_2.

Proof. Since y_1 and y_2 are solutions of y'' + p(x)y' + q(x)y = 0, we have

y_1'' + p(x)y_1' + q(x)y_1 = 0, y_2'' + p(x)y_2' + q(x)y_2 = 0.

Now substituting c_1y_1 + c_2y_2 for y into the left hand side of the given homogeneous LDE, we obtain

c_1(y_1'' + p(x)y_1' + q(x)y_1) + c_2(y_2'' + p(x)y_2' + q(x)y_2) = c_1·0 + c_2·0 = 0.

Thus c_1y_1 + c_2y_2, a linear combination of the solutions y_1 and y_2, is also a solution of the homogeneous
LDE.

Remark 3.1.1. The above result need not be true for a non-homogeneous or non-linear DE.


Definition 3.1.1. (Linearly Independent and Linearly Dependent Functions) Two functions
f(x) and g(x) are said to be linearly independent (LI) on [a, b] if neither of them is a constant multiple of
the other on [a, b]. Functions which are not LI are known as linearly dependent (LD) functions.
For example, the functions x + 1 and x^2 are LI on [1, 5], while the functions x^2 + 1 and 3x^2 + 3 are LD
on [1, 5]. The functions sin x and cos x are LI on any interval.
Definition 3.1.2. (Wronskian): The Wronskian of two functions y_1(x) and y_2(x), denoted W(y_1, y_2),
is defined as the determinant of the matrix with rows (y_1, y_2) and (y_1', y_2'), that is,

W(y_1, y_2) = y_1y_2' - y_2y_1'.
Lemma 3.1.1. (Wronskian of Solutions of Homogeneous LDE) The Wronskian of two solutions
y_1(x) and y_2(x), defined on [a, b], of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 is either identically
zero or never zero.
Proof. Since y_1 and y_2 are solutions of y'' + p(x)y' + q(x)y = 0, we have

y_1'' + p(x)y_1' + q(x)y_1 = 0, (3.2)

y_2'' + p(x)y_2' + q(x)y_2 = 0. (3.3)

Multiplying (3.3) by y_1 and (3.2) by y_2, and subtracting, we get

y_1y_2'' - y_2y_1'' + p(x)(y_1y_2' - y_2y_1') = 0

⇒ dW/dx + p(x)W = 0 (since W = y_1y_2' - y_2y_1' and dW/dx = y_1y_2'' - y_2y_1'')

⇒ W = ce^{-∫p(x)dx}, where c is a constant of integration.

⇒ W is identically 0 if c = 0; otherwise, W never vanishes.

Lemma 3.1.2. (Wronskian of LD Solutions) Two solutions y_1 and y_2, defined on [a, b], of a homoge-
neous LDE y'' + p(x)y' + q(x)y = 0 are LD if and only if W(y_1, y_2) = 0 for all x ∈ [a, b].
Proof. If y_1 and y_2 are LD, then there exists some constant c such that y_1(x) = cy_2(x) for all x ∈ [a, b].
It follows that W(y_1, y_2) = y_1y_2' - y_2y_1' = cy_2y_2' - y_2·cy_2' = 0 for all x ∈ [a, b].

Conversely, let W(y_1, y_2) = y_1y_2' - y_2y_1' = 0 for all x ∈ [a, b]. Now, there are two possibilities for
y_1. First, y_1 = 0 for all x ∈ [a, b]. In this case, we have y_1 = 0 = 0·y_2 for all x ∈ [a, b], and consequently
y_1 and y_2 are LD. Next, if y_1 is not identically 0 on [a, b] and x_0 is any point in [a, b] such that y_1(x_0) ≠ 0,
then the continuity of y_1 ensures the existence of a subinterval [c, d] of [a, b] containing x_0 such that y_1 ≠ 0
for all x ∈ [c, d]. Dividing W(y_1, y_2) = y_1y_2' - y_2y_1' = 0 by y_1^2, we get (y_1y_2' - y_2y_1')/y_1^2 = (y_2/y_1)' = 0. So
we have y_2/y_1 = k for all x ∈ [c, d], where k is some constant. This shows that y_1 and y_2 are LD on [c, d].
This completes the proof.

Theorem 3.1.4. (General Solution of Homogeneous LDE) If y1(x) and y2(x) are two LI solutions of a homogeneous LDE y'' + p(x)y' + q(x)y = 0 on [a, b], then c1y1(x) + c2y2(x), where c1 and c2 are arbitrary constants, is the general solution of the homogeneous LDE.

Proof. Let y(x) be any solution of y'' + p(x)y' + q(x)y = 0. We shall prove that there exist unique constants c1 and c2 such that

c1y1(x) + c2y2(x) = y(x). (3.4)

Now differentiating both sides of (3.4) w.r.t. x, we get

c1y1'(x) + c2y2'(x) = y'(x). (3.5)

Given that y1(x) and y2(x) are two LI solutions of the given homogeneous LDE on [a, b], the Wronskian

W(y1(x), y2(x)) = y1(x)y2'(x) − y2(x)y1'(x)

is non-zero for all x ∈ [a, b]. This in turn implies that the system of equations (3.4) and (3.5) has a unique solution (c1, c2). This completes the proof.

For example, y'' + y = 0 has two LI solutions y1 = cos x and y2 = sin x. So its general solution is
c1 cos x + c2 sin x.
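This can be checked mechanically. The following sketch (assuming SymPy is available; it is not part of the prescribed course material) verifies that c1 cos x + c2 sin x satisfies y'' + y = 0 identically, and that the Wronskian of cos x and sin x never vanishes:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y1, y2 = sp.cos(x), sp.sin(x)
y = c1*y1 + c2*y2

# residual of y'' + y should vanish identically for all c1, c2
residual = sp.simplify(sp.diff(y, x, 2) + y)

# Wronskian W = y1*y2' - y2*y1'; a nonzero constant confirms LI
W = sp.simplify(y1*sp.diff(y2, x) - y2*sp.diff(y1, x))
```

Here the residual simplifies to 0 and the Wronskian to 1, in agreement with Theorem 3.1.4.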

3.2 Use of known solution to find another

Consider the homogeneous LDE

y'' + p(x)y' + q(x)y = 0. (3.6)

Let y1 be a non-zero known solution of (3.6). Therefore,

y1'' + p(x)y1' + q(x)y1 = 0. (3.7)

We assume that y2 = v y1 is a solution of (3.6). Therefore,

y2'' + p(x)y2' + q(x)y2 = 0.

⟹ v(y1'' + p(x)y1' + q(x)y1) + v''y1 + v'(2y1' + p(x)y1) = 0. (3.8)

Plugging (3.7) into (3.8), we get

v''y1 + v'(2y1' + p(x)y1) = 0.

⟹ v''/v' = −2y1'/y1 − p(x).

Integrating,

log v' = −2 log y1 − ∫p(x)dx.

⟹ v' = (1/y1²) e^(−∫p(x)dx).

⟹ v = ∫ (1/y1²) e^(−∫p(x)dx) dx.

∴ y2 = y1 ∫ (1/y1²) e^(−∫p(x)dx) dx.

Clearly y2 is not a constant multiple of y1. So y1 and y2 are LI solutions of (3.6). Hence, c1y1 + c2y2 is the general solution of (3.6).

Ex. 3.2.1. Find the general solution of x²y'' + xy' − y = 0, given that y1 = x is a solution.

Sol. 3.2.1. The given DE can be rewritten as

y'' + (1/x)y' − (1/x²)y = 0.

Comparing it with y'' + p(x)y' + q(x)y = 0, we find p(x) = 1/x. Also the given solution is y1 = x. So the second solution reads as

y2 = y1 ∫ (1/y1²) e^(−∫p(x)dx) dx = x ∫ (1/x²) e^(−∫(1/x)dx) dx = x ∫ (1/x²) e^(−log x) dx = x ∫ x⁻³ dx = −(1/2)x⁻¹.

∴ The general solution (absorbing the constant −1/2 into c2) is

y = c1x + c2x⁻¹.
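The reduction-of-order formula can also be evaluated symbolically. A sketch with SymPy (variable names are my own) reproduces the hand computation of Ex. 3.2.1:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p = 1 / x            # p(x) from the standard form of the DE
y1 = x               # the known solution

# reduction of order: y2 = y1 * Integral( exp(-Integral(p)) / y1**2 )
integrand = sp.exp(-sp.integrate(p, x)) / y1**2
y2 = sp.simplify(y1 * sp.integrate(integrand, x))   # a constant multiple of 1/x
```

SymPy returns y2 = −1/(2x), matching the computation above.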

3.3 Homogeneous LDE with Constant Coefficients


Consider the homogeneous LDE

y'' + py' + qy = 0, (3.9)

where p and q are constants. Let y = e^(mx) be a solution of (3.9). Then, we have

(m² + pm + q)e^(mx) = 0.

⟹ m² + pm + q = 0, (3.10)

since e^(mx) ≠ 0.
Equation (3.10) is called the auxiliary equation (AE), and its roots are

m1 = (−p + √(p² − 4q))/2 and m2 = (−p − √(p² − 4q))/2.

Now three different cases arise depending on the nature of the roots of the AE.

(i) If p² − 4q > 0, then m1 and m2 are real and distinct. So e^(m1x) and e^(m2x) are two particular solutions of (3.9). Also these are LI, being not constant multiples of each other. Therefore, the general solution of (3.9) is

y = c1e^(m1x) + c2e^(m2x).

(ii) If p² − 4q < 0, then m1 and m2 are conjugate complex numbers. Let m1 = a + ib and m2 = a − ib. Then we get the following solutions of (3.9):

e^((a+ib)x) = e^(ax)(cos bx + i sin bx), (3.11)

e^((a−ib)x) = e^(ax)(cos bx − i sin bx). (3.12)

As we are interested in real solutions of (3.9), adding (3.11) and (3.12) and then dividing by 2, we get a real solution e^(ax) cos bx.
Similarly, subtracting (3.12) from (3.11) and then dividing by 2i, we get another real solution of (3.9) given by e^(ax) sin bx.
Now, we see that the particular solutions e^(ax) cos bx and e^(ax) sin bx are LI. So the general solution of (3.9) is

y = e^(ax)(c1 cos bx + c2 sin bx).

(iii) If p² − 4q = 0, then m1 and m2 are real and equal with m1 = m2 = −p/2. Therefore, one solution of (3.9) is y1 = e^(−px/2). Another LI solution of (3.9) is given by

y2 = y1 ∫ (1/y1²) e^(−∫p dx) dx = e^(−px/2) ∫ (1/e^(−px)) e^(−px) dx = e^(−px/2) ∫ dx = x e^(−px/2).

So the general solution of (3.9) is

y = e^(−px/2)(c1 + c2x).

Ex. 3.3.1. Solve y'' + y' − 6y = 0.

Sol. 3.3.1. y = c1e^(−3x) + c2e^(2x).

Ex. 3.3.2. Solve y'' − 4y' + 4y = 0.

Sol. 3.3.2. y = e^(2x)(c1 + c2x).

Ex. 3.3.3. Solve y'' + y' + y = 0.

Sol. 3.3.3. y = e^(−x/2)(c1 cos(√3 x/2) + c2 sin(√3 x/2)).
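The three cases can be folded into one small helper that classifies the roots of the auxiliary equation. This is an illustrative sketch, not library code; the output strings are an arbitrary formatting choice of mine:

```python
def general_solution(p, q):
    """General solution of y'' + p y' + q y = 0 from the roots of
    the auxiliary equation m^2 + p m + q = 0."""
    disc = p*p - 4*q
    if disc > 0:                         # real distinct roots
        r = disc ** 0.5
        m1, m2 = (-p + r) / 2, (-p - r) / 2
        return f"c1*exp({m1}x) + c2*exp({m2}x)"
    if disc == 0:                        # repeated root m = -p/2
        return f"exp({-p/2}x)*(c1 + c2*x)"
    a, b = -p / 2, (-disc) ** 0.5 / 2    # complex roots a +- ib
    return f"exp({a}x)*(c1*cos({b}x) + c2*sin({b}x))"
```

For instance, `general_solution(1, -6)` reproduces Ex. 3.3.1 and `general_solution(-4, 4)` reproduces Ex. 3.3.2.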

Ex. 3.3.4. Show that a DE of the form x²y'' + pxy' + qy = 0, where p, q are constants, reduces to a homogeneous LDE with constant coefficients under the transformation x = e^z. Hence, solve the equation x²y'' + 2xy' − 6y = 0.

Sol. 3.3.4. We have x = e^z. So z = log x and z' = 1/x. Therefore,

xy' = x (dy/dz) z' = dy/dz.

⟹ x²y'' = x² d/dx[(1/x)(dy/dz)] = x²[−(1/x²)(dy/dz) + (1/x)(d²y/dz²)z'] = d²y/dz² − dy/dz.

Thus, the equation x²y'' + pxy' + qy = 0 becomes

d²y/dz² + (p − 1)(dy/dz) + qy = 0,

which is a homogeneous LDE with constant coefficients.
Hence, with x = e^z, the DE x²y'' + 2xy' − 6y = 0 reduces to

d²y/dz² + dy/dz − 6y = 0.

Its AE is m² + m − 6 = 0 with the roots m = −3, 2. So its solution is

y = c1e^(−3z) + c2e^(2z).

⟹ y = c1x⁻³ + c2x².

Remark 3.3.1. The DE of the form x²y'' + pxy' + qy = 0 is called Euler's or Cauchy's equidimensional equation. If we denote dy/dz by Dz y, then xy' = Dz y and x²y'' = Dz(Dz − 1)y. It can also be shown that x³y''' = Dz(Dz − 1)(Dz − 2)y and, in general, xⁿy⁽ⁿ⁾ = Dz(Dz − 1)...(Dz − n + 1)y. Thus, every Euler or Cauchy equidimensional equation reduces to a homogeneous LDE with constant coefficients under the transformation x = e^z.
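SymPy's `dsolve` handles Euler equations directly, so the answer of Ex. 3.3.4 can be cross-checked symbolically (assuming SymPy is available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Euler equation from Ex. 3.3.4: x^2 y'' + 2x y' - 6y = 0
ode = sp.Eq(x**2 * y(x).diff(x, 2) + 2 * x * y(x).diff(x) - 6 * y(x), 0)
sol = sp.dsolve(ode, y(x))       # expected form: C1/x**3 + C2*x**2
```

Substituting the returned solution back into the equation gives a zero residual, confirming the hand computation.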
Ex. 3.3.5. Solve x²y'' + 3xy' + 10y = 0.

Sol. 3.3.5. y = x⁻¹(c1 cos(3 log x) + c2 sin(3 log x)).

Ex. 3.3.6. Show that the general homogeneous LDE y'' + p(x)y' + q(x)y = 0 is reducible to a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^(3/2) is constant, under the transformation z = ∫√(q(x)) dx.

Sol. 3.3.6. We have z = ∫√(q(x)) dx and z' = √(q(x)). Therefore,

y' = (dy/dz) z' = √q (dy/dz).

⟹ y'' = √q (d²y/dz²) z' + [q'/(2√q)](dy/dz) = q (d²y/dz²) + [q'/(2√q)](dy/dz).

Plugging the values of y' and y'' into y'' + p(x)y' + q(x)y = 0 and dividing by q, we obtain

d²y/dz² + [(q' + 2pq)/(2q^(3/2))](dy/dz) + y = 0.

This is a homogeneous LDE with constant coefficients if and only if (q' + 2pq)/q^(3/2) is constant.

Ex. 3.3.7. Reduce xy'' + (x² − 1)y' + x³y = 0 to a homogeneous LDE with constant coefficients and hence solve it.

Sol. 3.3.7. The given DE can be rewritten as

y'' + (x − 1/x)y' + x²y = 0.

Comparing it with y'' + p(x)y' + q(x)y = 0, we get p(x) = x − 1/x and q(x) = x².

∴ (q' + 2pq)/q^(3/2) = [2x + 2(x − 1/x)x²]/x³ = 2x³/x³ = 2.

This shows that the given DE is reducible to a homogeneous LDE with constant coefficients given by

d²y/dz² + [(q' + 2pq)/(2q^(3/2))](dy/dz) + y = 0,

where z = ∫√(q(x)) dx = ∫x dx = x²/2.

⟹ d²y/dz² + dy/dz + y = 0.

Its AE is m² + m + 1 = 0 with roots m = −1/2 ± (√3/2)i. So the solution reads as

y = e^(−z/2)[c1 cos(√3 z/2) + c2 sin(√3 z/2)].

Substituting z = x²/2, we have

y = e^(−x²/4)[c1 cos(√3 x²/4) + c2 sin(√3 x²/4)].
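The reducibility test of Ex. 3.3.6 is easy to automate. A sketch with SymPy, applied to the DE of Ex. 3.3.7:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p = x - 1 / x        # from the standard form y'' + (x - 1/x) y' + x^2 y = 0
q = x**2

# reducible to constant coefficients iff this quantity is a constant
criterion = sp.simplify((sp.diff(q, x) + 2 * p * q) / q**sp.Rational(3, 2))
```

The criterion simplifies to the constant 2, as computed by hand above.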

Ex. 3.3.8. Solve y'' + 3xy' + x²y = 0.

Sol. 3.3.8. Not possible with the above method, since (q' + 2pq)/q^(3/2) = (2x + 6x³)/x³ = 6 + 2/x² is not constant.

Theorem 3.3.1. (General Solution of Non-Homogeneous LDE) If yp is a particular solution of a non-homogeneous LDE y'' + p(x)y' + q(x)y = r(x) and yh = c1y1 + c2y2 is the general solution of the corresponding homogeneous LDE y'' + p(x)y' + q(x)y = 0, then y = yh + yp is the general solution of the non-homogeneous LDE.

Proof. Let y be any solution of y'' + p(x)y' + q(x)y = r(x). Then y − yp is a solution of the homogeneous LDE y'' + p(x)y' + q(x)y = 0 since

(y − yp)'' + p(x)(y − yp)' + q(x)(y − yp) = (y'' + p(x)y' + q(x)y) − (yp'' + p(x)yp' + q(x)yp) = r(x) − r(x) = 0.

But yh = c1y1 + c2y2 is the general solution of y'' + p(x)y' + q(x)y = 0. So there exist suitable constants c1 and c2 such that

y − yp = c1y1 + c2y2 = yh, or y = yh + yp.

This completes the proof.

In the next three sections, we shall learn some methods to find the particular solution yp of the
non-homogeneous LDE, namely the method of undetermined coefficients, the method of variation of
parameters and the operator methods.

3.4 Method of Undetermined Coefficients


This method is used to find a particular solution yp for a DE of the form

y'' + py' + qy = r(x), (3.13)

where p, q are constants and r(x) is an exponential, sine, cosine, or polynomial function, or some combination of these functions. We assume yp equal to a linear combination of r(x) and all the different functions (except for constant multiples) arising from the derivatives of r(x). Finally, substituting yp for y in (3.13), we determine the unknown coefficients in yp by equating coefficients of like functions on both sides.

Ex. Find a particular solution of y'' − y' − 2y = 4x². Also determine the general solution.

Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = 4x². Therefore, the possible non-zero derivatives of r(x) are 8x and 8. Let yp = Ax² + Bx + C be a particular solution. Substituting yp for y into the given DE, we obtain

2A − (2Ax + B) − 2(Ax² + Bx + C) = 4x². (3.14)

Equating coefficients of x², x and x⁰ on both sides of (3.14), we have

−2A = 4, −2A − 2B = 0, 2A − B − 2C = 0.

⟹ A = −2, B = 2, C = −3.

Thus, the particular solution is

yp = −2x² + 2x − 3.

Next, we find the general solution yh of the corresponding homogeneous DE y'' − y' − 2y = 0. Here the AE is m² − m − 2 = 0 with roots m = 2, −1. Therefore, yh = c1e^(2x) + c2e^(−x).

Finally, the general solution of the given DE reads as

y = yh + yp = c1e^(2x) + c2e^(−x) − 2x² + 2x − 3.
Remark: It is possible that the assumed yp may satisfy the corresponding homogeneous DE. In such a case, we multiply the assumed yp by x.

Ex. Find a particular solution of y'' + y = sin x.

Sol. Comparing the given equation with y'' + py' + qy = r(x), we get r(x) = sin x. The only function arising from the derivatives of r(x) is cos x. Let yp = A sin x + B cos x be a particular solution. But yp'' + yp = 0, that is, the assumed yp satisfies the corresponding homogeneous DE y'' + y = 0. Therefore, we assume the revised particular solution yp = x(A sin x + B cos x). Substituting it for y into the given DE, we obtain

2A cos x − 2B sin x = sin x. (3.15)

Equating coefficients of sin x and cos x on both sides, we get

A = 0, B = −1/2. (3.16)

Thus, the particular solution is

yp = −(1/2) x cos x.
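The coefficient matching in these examples can be delegated to a solver. A sketch with SymPy for the equation y'' − y' − 2y = 4x² (symbol names are my own):

```python
import sympy as sp

x, A, B, C = sp.symbols('x A B C')
yp = A*x**2 + B*x + C                       # trial solution for r(x) = 4x^2

# substitute into y'' - y' - 2y - 4x^2 and force every coefficient to vanish
residual = sp.expand(yp.diff(x, 2) - yp.diff(x) - 2*yp - 4*x**2)
coeffs = sp.solve([residual.coeff(x, k) for k in range(3)], [A, B, C])
```

SymPy returns A = −2, B = 2, C = −3, matching the hand computation.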

3.5 Method of Variation of Parameters


This method is used to find a particular solution yp of the non-homogeneous DE

y'' + p(x)y' + q(x)y = r(x). (3.17)

Let

y = c1y1 + c2y2, (3.18)

be the general solution of the corresponding homogeneous DE y'' + p(x)y' + q(x)y = 0. We replace the constants by unknown functions v1(x) and v2(x), and attempt to determine these functions such that

y = v1y1 + v2y2, (3.19)

is a solution of (3.17), and

v1'y1 + v2'y2 = 0. (3.20)

Plugging (3.19) into (3.17), we get

v1(y1'' + p(x)y1' + q(x)y1) + v2(y2'' + p(x)y2' + q(x)y2) + p(x)(v1'y1 + v2'y2) + v1'y1' + v2'y2' = r(x). (3.21)

y1 and y2 being particular solutions of the corresponding homogeneous DE y'' + p(x)y' + q(x)y = 0, we have y1'' + p(x)y1' + q(x)y1 = 0 and y2'' + p(x)y2' + q(x)y2 = 0. Therefore, using (3.20), (3.21) reduces to

v1'y1' + v2'y2' = r(x). (3.22)

Solving (3.20) and (3.22) for v1' and v2', we get

v1' = −y2r(x)/W(y1, y2), v2' = y1r(x)/W(y1, y2).

Thus, (3.19) leads to

y = −y1 ∫ [y2r(x)/W(y1, y2)] dx + y2 ∫ [y1r(x)/W(y1, y2)] dx.

Ex. Find a particular solution of y'' + y = csc x.

Sol. Comparing the given equation with y'' + p(x)y' + q(x)y = r(x), we get r(x) = csc x. The general solution of the corresponding homogeneous equation y'' + y = 0 is y = c1 cos x + c2 sin x. Let y1 = cos x and y2 = sin x. Then W(y1, y2) = 1, and hence by the method of variation of parameters, the particular solution is obtained as

y = −y1 ∫ [y2r(x)/W(y1, y2)] dx + y2 ∫ [y1r(x)/W(y1, y2)] dx
= −cos x ∫ sin x csc x dx + sin x ∫ cos x csc x dx
= −x cos x + sin x log(sin x).
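The variation-of-parameters formula translates directly into a few lines of SymPy. The helper name below is my own; it simply evaluates the two integrals in the formula:

```python
import sympy as sp

x = sp.symbols('x')

def particular_solution(y1, y2, r):
    """Variation of parameters: yp = -y1*Int(y2 r/W) + y2*Int(y1 r/W)."""
    W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))
    return (-y1 * sp.integrate(y2 * r / W, x)
            + y2 * sp.integrate(y1 * r / W, x))

yp = particular_solution(sp.cos(x), sp.sin(x), 1 / sp.sin(x))  # r = csc x
```

For r = csc x this reproduces yp = −x cos x + sin x log(sin x).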

3.6 Operator Methods


Denoting the differential operator d/dx by D, so that y' = dy/dx = Dy and y'' = d²y/dx² = D²y, the DE y'' + py' + qy = r(x) in operator form can be written as (D² + pD + q)y = r(x), or f(D)y = r(x), where f(D) = D² + pD + q. We shall denote the inverse operator of f(D) by 1/f(D).
Operating 1/f(D) on both sides of the DE f(D)y = r(x), we obtain

y = [1/f(D)] r(x),

a particular solution of the DE. We can not operate 1/f(D) on r(x) in general; it depends on the forms of f(D) and r(x). So we discuss the following cases.

(i) If a is a constant and f(D) = D − a, then the particular solution is given by

y = [1/(D − a)] r(x).

Operating D − a on both sides, we get

(D − a)y = r(x)

⟹ dy/dx − ay = r(x),

which is a LDE with IF = e^(−ax) and solution

y = e^(ax) ∫ r(x)e^(−ax) dx.

Thus, [1/(D − a)] r(x) = e^(ax) ∫ r(x)e^(−ax) dx.

If a = 0, then (1/D) r(x) = ∫ r(x) dx. This shows that 1/D stands for the integral operator. Hence, the inverse operator of the differential operator is the integral operator.

Ex. Find a particular solution of y'' − y = e^(−x).

Sol. The given DE in operator form can be written as

(D² − 1)y = e^(−x).

⟹ (D − 1)(D + 1)y = e^(−x).

⟹ y = [1/((D − 1)(D + 1))] e^(−x).

⟹ y = [1/(D − 1)] [1/(D + 1)] e^(−x).

⟹ y = [1/(D − 1)] (e^(−x) ∫ e^(−x) e^(x) dx).

⟹ y = [1/(D − 1)] (xe^(−x)).

⟹ y = e^(x) ∫ xe^(−x) e^(−x) dx.

⟹ y = e^(−x)(−x/2 − 1/4).
1
Remark: In the above example, we applied the operators 1/(D + 1) and 1/(D − 1) successively. We could, however, also apply the operators after making partial fractions, as illustrated in the following.
We have

y = [1/((D − 1)(D + 1))] e^(−x)
= (1/2)[1/(D − 1) − 1/(D + 1)] e^(−x)
= (1/2)([1/(D − 1)] e^(−x) − [1/(D + 1)] e^(−x))
= (1/2)(e^(x) ∫ e^(−x) e^(−x) dx − e^(−x) ∫ e^(x) e^(−x) dx)
= e^(−x)(−1/4 − x/2).
1
(ii) If r(x) is some polynomial in x, then we write the series expansion of 1/f(D) in ascending powers of D, as illustrated in the following example.

Ex. Find a particular solution of y'' + y = x² + x + 3.

Sol. The given DE in operator form can be written as

(D² + 1)y = x² + x + 3.

⟹ y = [1/(D² + 1)] (x² + x + 3).

⟹ y = (1 + D²)⁻¹ (x² + x + 3).

⟹ y = (1 − D² + D⁴ − ......)(x² + x + 3).

⟹ y = x² + x + 3 − D²(x² + x + 3) + D⁴(x² + x + 3) − .......

⟹ y = x² + x + 3 − 2 + 0 − .......

⟹ y = x² + x + 1,

is the required particular solution of the given DE.


(iii) If k is a constant and r(x) = e^(kx) g(x), then y = [1/f(D)](e^(kx) g(x)) = e^(kx) [1/f(D + k)] g(x). This is called the exponential shift rule. It is justified as follows.
We have

D(e^(kx) g(x)) = e^(kx) Dg(x) + ke^(kx) g(x) = e^(kx)(D + k)g(x).

⟹ D²(e^(kx) g(x)) = D(e^(kx)(D + k)g(x)) = e^(kx) D(D + k)g(x) + ke^(kx)(D + k)g(x) = e^(kx)(D + k)²g(x).

Since one can express 1/f(D) in powers of D, we have in general

[1/f(D)](e^(kx) g(x)) = e^(kx) [1/f(D + k)] g(x).

Ex. Find a particular solution of (D² − 3D + 2)y = xe^(x).

Sol. We have

y = [1/(D² − 3D + 2)] (xe^(x)).

⟹ y = e^(x) [1/((D + 1)² − 3(D + 1) + 2)] x.

⟹ y = e^(x) [1/(D² − D)] x.

⟹ y = e^(x) [1/(D − 1) − 1/D] x.

⟹ y = e^(x) [−(1 + D + D² + ....)x − (1/D)x].

⟹ y = −e^(x)(x²/2 + x + 1).
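Whatever operator manipulations are used, the result is easy to verify by direct substitution. A sketch with SymPy for the particular solution just obtained:

```python
import sympy as sp

x = sp.symbols('x')
yp = -sp.exp(x) * (x**2 / 2 + x + 1)     # particular solution found above

# apply D^2 - 3D + 2 and compare with the right-hand side x e^x
lhs = sp.simplify(yp.diff(x, 2) - 3 * yp.diff(x) + 2 * yp)
```

The left-hand side simplifies to xe^(x), confirming the answer.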
Chapter 4

Qualitative Behavior of Solutions

In this chapter, we discuss the qualitative behavior of the solutions of the second order homogeneous LDE given by y'' + p(x)y' + q(x)y = 0.

4.1 Sturm Separation Theorem


Theorem 4.1.1. (Sturm Separation Theorem) If y1(x) and y2(x) are two LI solutions of y'' + p(x)y' + q(x)y = 0, then y1(x) vanishes exactly once between two successive zeros of y2(x), and vice versa.

Proof. Denoting the Wronskian W(y1, y2) by W(x), we have W(x) = y1(x)y2'(x) − y2(x)y1'(x). Since y1(x) and y2(x) are LI, W(x) does not vanish. Let x1 and x2 be any two successive zeros of y2. We shall prove that y1 vanishes exactly once between x1 and x2. Now x1 and x2 being zeros of y2, we have

W(x1) = y1(x1)y2'(x1), W(x2) = y1(x2)y2'(x2), (4.1)

which implies that y1(x1), y2'(x1), y1(x2), y2'(x2) are all non-zero since W(x) does not vanish. Now y2 is continuous and has successive zeros x1 and x2. Therefore, if y2 is increasing at x1, then it must be decreasing at x2, and vice versa. Mathematically speaking, y2'(x1) and y2'(x2) are of opposite sign. Also W(x), being a non-vanishing continuous function, retains the same sign. So in view of (4.1), it is easy to conclude that y1(x1) and y1(x2) must be of opposite sign. Therefore, y1 vanishes at least once between x1 and x2. Further, y1 can not vanish more than once between x1 and x2. For if it does, then applying the same argument as above, it can be proved that y2 has at least one zero between two zeros of y1 lying between x1 and x2. But this would contradict the assumption that x1 and x2 are successive zeros of y2. This completes the proof.

Ex. Two LI solutions of y'' + y = 0 are sin x and cos x. Also, between any two successive zeros of sin x, there is exactly one zero of cos x, and vice versa.
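The interlacing is easy to confirm numerically. A small check in plain Python (the range of 10π is an arbitrary choice) counts the zeros of cos x between successive zeros of sin x:

```python
import math

# zeros of sin x on (0, 10*pi): k*pi; zeros of cos x: (k + 1/2)*pi
sin_zeros = [k * math.pi for k in range(1, 10)]
cos_zeros = [(k + 0.5) * math.pi for k in range(0, 10)]

# between each pair of successive zeros of sin there should be
# exactly one zero of cos, as the separation theorem predicts
counts = [sum(1 for z in cos_zeros if a < z < b)
          for a, b in zip(sin_zeros, sin_zeros[1:])]
```

Every entry of `counts` equals 1.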

4.2 Normal form of DE


A second order linear homogeneous DE in the standard form is written as

y'' + p(x)y' + q(x)y = 0. (4.2)

Substituting y = u(x)v(x) into (4.2), we get

vu'' + (2v' + pv)u' + (v'' + pv' + qv)u = 0. (4.3)

On setting the coefficient of u' equal to 0 and solving, we get v = e^(−(1/2)∫p(x)dx). Then (4.3) reduces to

u'' + h(x)u = 0, (4.4)

where h(x) = q(x) − (1/4)p(x)² − (1/2)p'(x). The DE (4.4) is referred to as the normal form of the DE (4.2).

Remark: Since v = e^(−(1/2)∫p(x)dx) does not vanish and y = u(x)v(x), it follows that the solution y(x) of (4.2) and the solution u(x) of (4.4) have the same zeros.
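The computation of h(x) is mechanical and can be scripted. A sketch with SymPy (the helper name is my own), applied to Bessel's equation with parameter ν:

```python
import sympy as sp

x, nu = sp.symbols('x nu', positive=True)

def normal_form_h(p1, q):
    """h = q - p1**2/4 - p1'/2, from the reduction to normal form above."""
    return sp.simplify(q - p1**2 / 4 - sp.diff(p1, x) / 2)

# Bessel's equation x^2 y'' + x y' + (x^2 - nu^2) y = 0 in standard form
# has p(x) = 1/x and q(x) = 1 - nu^2/x^2
h = normal_form_h(1 / x, 1 - nu**2 / x**2)
```

This yields h(x) = 1 + (1 − 4ν²)/(4x²), the quantity used in the Bessel example below.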

Theorem 4.2.1. If h(x) < 0, and if u(x) is a non-trivial solution of u'' + h(x)u = 0, then u(x) has at most one zero.

Proof. Let x0 be a zero of u(x) so that u(x0) = 0. Then u'(x0) must be non-zero, otherwise u(x) would be a trivial solution of u'' + h(x)u = 0 by Theorem 3.1.2. Suppose u'(x0) > 0. Then by continuity, u'(x) is positive in some interval to the right of x0. So u(x) is an increasing function in that interval. We claim that u(x) does not vanish anywhere to the right of x0. In case u(x) vanishes at some point, say x2, to the right of x0, then u'(x) must vanish at some point x1 such that x0 < x1 < x2. Notice that x1 is a point of maxima of u(x). So u''(x1) < 0, by the second derivative test for maxima. But u''(x1) = −h(x1)u(x1) > 0 since h(x1) < 0 and u(x1) > 0. So u(x) can not vanish to the right of x0. Likewise, we can show that u(x) does not vanish to the left of x0. A similar argument holds when u'(x0) < 0. Hence, u(x) has at most one zero.

Theorem 4.2.2. If h(x) > 0 for all x > 0, and u(x) is a non-trivial solution of u'' + h(x)u = 0 such that ∫₁^∞ h(x)dx = ∞, then u(x) has infinitely many zeros on the positive X-axis.

Proof. Suppose u(x) has only a finite number of zeros on the positive X-axis, and let x0 > 1 be any number greater than the largest zero of u(x). Without loss of generality, assume that u(x) > 0 for all x > x0. Let g(x) = −u'(x)/u(x) so that

g'(x) = −u''(x)/u(x) + [u'(x)/u(x)]² = h(x) + g²(x).

Integrating from x0 to x, we get

g(x) − g(x0) = ∫ from x0 to x of h(x)dx + ∫ from x0 to x of g²(x)dx.

This gives g(x) > 0 for sufficiently large values of x since ∫₁^∞ h(x)dx = ∞. Then u(x) > 0 for all x > x0 in the relation g(x) = −u'(x)/u(x) implies that u'(x) < 0 for sufficiently large x. Also, u''(x) = −h(x)u(x) < 0 for x > x0. It follows that u(x) must vanish to the right of x0, which contradicts the assumption that x0 exceeds the largest zero of u(x). This completes the proof.

Ex. 4.2.1. Show that the zeros of the functions a sin x + b cos x and c sin x + d cos x are distinct and occur alternately whenever ad − bc ≠ 0.

Sol. 4.2.1. The functions a sin x + b cos x and c sin x + d cos x are solutions of the DE y'' + y = 0. Also, the Wronskian of a sin x + b cos x and c sin x + d cos x is non-zero if ad − bc ≠ 0, which in turn implies that the two solutions are LI. Thus, by Theorem 4.1.1, the zeros of these functions occur alternately whenever ad − bc ≠ 0.

Ex. 4.2.2. Find the normal form of Bessel's equation x²y'' + xy' + (x² − p²)y = 0, and use it to show that every non-trivial solution has infinitely many positive zeros.

Sol. 4.2.2. Comparing Bessel's equation with y'' + p(x)y' + q(x)y = 0, we obtain p(x) = 1/x and q(x) = (x² − p²)/x². Next, we evaluate

h(x) = q(x) − (1/4)p(x)² − (1/2)p'(x) = 1 + (1 − 4p²)/(4x²).

Therefore, the normal form of Bessel's equation reads as

u'' + h(x)u = 0, or u'' + [1 + (1 − 4p²)/(4x²)]u = 0. (4.5)

Now we shall prove that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.

Case (i) 0 ≤ p ≤ 1/2. In this case, we have

h(x) = 1 + (1 − 4p²)/(4x²) = 1 + (1/x²)(1/2 + p)(1/2 − p) ≥ 1 > 0 for all x > 0.

Also, we have

∫₁^∞ h(x)dx = ∫₁^∞ [1 + (1 − 4p²)/(4x²)] dx = ∞.

So by Theorem 4.2.2, every non-trivial solution u(x) has infinitely many positive zeros.

Case (ii) p > 1/2. In this case, we have

h(x) = 1 + (1 − 4p²)/(4x²) = (1/x²)[x + (1/2)√(4p² − 1)][x − (1/2)√(4p² − 1)] > 0 provided x > (1/2)√(4p² − 1).

Now let x0 = (1/2)√(4p² − 1) and x = t + x0. Then (4.5) becomes

d²u/dt² + h1(t)u = 0, (4.6)

where h1(t) = 1 + (1 − 4p²)/(4(t + x0)²). We see that h1(t) > 0 for all t > 0, and

∫₁^∞ h1(t)dt = ∫₁^∞ [1 + (1 − 4p²)/(4(t + x0)²)] dt = ∞.

So by Theorem 4.2.2, every non-trivial solution u(t) of (4.6) has infinitely many positive zeros. Since x = t + x0, the zeros of (4.5) and (4.6) differ only by x0. Also x0 is a positive number. Therefore, every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.

From case (i) and case (ii), we conclude that every non-trivial solution u(x) of (4.5) has infinitely many positive zeros.
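The "infinitely many zeros" conclusion can be observed numerically by integrating the normal form (4.5) and counting sign changes. The following is a self-contained sketch (classical RK4; the step size, interval endpoints, and initial data are illustrative choices, and a sign-change count on a finite interval is only evidence, not a proof):

```python
def count_sign_changes(p=0.0, x0=1.0, x1=60.0, n=60000):
    """Integrate u'' + h(x) u = 0 with h(x) = 1 + (1 - 4p^2)/(4x^2)
    using classical RK4, and count the sign changes of u on [x0, x1]."""
    def h(x):
        return 1.0 + (1.0 - 4.0 * p * p) / (4.0 * x * x)

    def f(x, u, v):               # first-order system: u' = v, v' = -h(x) u
        return v, -h(x) * u

    dx = (x1 - x0) / n
    x, u, v = x0, 1.0, 0.0        # arbitrary non-trivial initial data
    changes = 0
    for _ in range(n):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + dx / 2, u + dx * k1u / 2, v + dx * k1v / 2)
        k3u, k3v = f(x + dx / 2, u + dx * k2u / 2, v + dx * k2v / 2)
        k4u, k4v = f(x + dx, u + dx * k3u, v + dx * k3v)
        u_new = u + dx * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v_new = v + dx * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        if (u > 0) != (u_new > 0):
            changes += 1
        x, u, v = x + dx, u_new, v_new
    return changes
```

Since h(x) ≈ 1 for large x, the zeros are spaced roughly π apart, so on [1, 60] one expects close to (60 − 1)/π ≈ 19 sign changes.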

Ex. 4.2.3. The hypothesis of Theorem 4.2.2 is false for the Euler equation x²y'' + ky = 0, but the conclusion is sometimes true and sometimes false, depending on the magnitude of the constant k. Show that every non-trivial solution has infinitely many positive zeros if k > 1/4, and only a finite number if k ≤ 1/4.

Sol. 4.2.3. Comparing the given equation with y'' + h(x)y = 0, we get h(x) = k/x². Therefore,

∫₁^∞ h(x)dx = [−k/x] evaluated from x = 1 to x = ∞ = k, which is a finite number.

∴ The hypothesis of Theorem 4.2.2 is false.

Now with the transformation x = e^z, the DE x²y'' + ky = 0 transforms to

Dz(Dz − 1)y + ky = 0, or d²y/dz² − dy/dz + ky = 0.

Its AE is

m² − m + k = 0, with the roots m1 = 1/2 + √(1/4 − k) and m2 = 1/2 − √(1/4 − k).

Now three cases arise depending on the value of k.

(i) If k < 1/4, then the non-trivial solutions are given by

y = c1e^(m1z) + c2e^(m2z) = c1x^(m1) + c2x^(m2).

(ii) If k = 1/4, then the non-trivial solutions are given by

y = (c1 + c2z)e^(z/2) = (c1 + c2 log x)x^(1/2).

(iii) If k > 1/4, then the non-trivial solutions are given by

y = e^(z/2)[c1 cos(√(k − 1/4) z) + c2 sin(√(k − 1/4) z)] = x^(1/2)[c1 cos(√(k − 1/4) log x) + c2 sin(√(k − 1/4) log x)].

In each case, c1 and c2 are not both zero. In case (i) and case (ii), the solutions are non-oscillatory and therefore possess at most a finite number of positive zeros. In case (iii), the solutions, being oscillatory in log x, possess infinitely many positive zeros.

Theorem 4.2.3. If u(x) is a non-trivial solution of u'' + h(x)u = 0 on a closed interval [a, b], then u(x) has at most a finite number of zeros in this interval.

Proof. Assume that u(x) has infinitely many zeros in the interval [a, b]. Then the infinite set of zeros of u(x) is bounded. So by the Bolzano-Weierstrass theorem of advanced calculus, there exists some x0 in [a, b] and a sequence {xn ≠ x0} of zeros of u(x) such that xn → x0 as n → ∞. Since u(x) is continuous and differentiable, we have

u(x0) = lim as n → ∞ of u(xn) = 0,

u'(x0) = lim as n → ∞ of [u(xn) − u(x0)]/(xn − x0) = 0.

By Theorem 3.1.2, it follows that u(x) is a trivial solution of u'' + h(x)u = 0, which is not true as per the given hypothesis. Hence, u(x) can not have infinitely many zeros in the interval [a, b].

Theorem 4.2.4. (Sturm Comparison Theorem) If y(x) and z(x) are non-trivial solutions of y'' + q(x)y = 0 and z'' + r(x)z = 0 respectively, where q(x) and r(x) are positive functions such that q(x) > r(x), then y(x) vanishes at least once between two successive zeros of z(x).

Proof. Let x1 and x2 be two successive zeros of z(x) with x1 < x2. Let us assume that y(x) does not vanish on the interval (x1, x2). We shall prove the theorem by deducing a contradiction. Without loss of generality, we assume that y(x) and z(x) are both positive on (x1, x2), for either function can be replaced by its negative if necessary. Now, denoting the Wronskian W(y, z) by W(x), we have

W(x) = y(x)z'(x) − z(x)y'(x). (4.7)

⟹ W'(x) = yz'' − zy'' = y(−rz) − z(−qy) = (q − r)yz > 0 on (x1, x2).

Integrating over (x1, x2), we obtain

W(x2) − W(x1) > 0, or W(x2) > W(x1). (4.8)

Since z(x) vanishes at x1 and x2, (4.7) yields

W(x1) = y(x1)z'(x1), W(x2) = y(x2)z'(x2). (4.9)

Now y(x) being continuous and positive on (x1, x2), we have y(x1) ≥ 0 and y(x2) ≥ 0. Also z'(x1) > 0 and z'(x2) < 0 since z(x) is continuous and positive on (x1, x2), and x1, x2 are successive zeros of z(x). Hence, (4.9) leads to

W(x1) ≥ 0 and W(x2) ≤ 0.

⟹ W(x2) ≤ W(x1). (4.10)

We see that (4.8) and (4.10) are contradictory. This completes the proof.

Ex. Solutions of y'' + 4y = 0 oscillate more rapidly than those of y'' + y = 0.

Ex. 4.2.4. Use the Sturm Comparison Theorem to solve Example 4.2.2.

Sol. 4.2.4. In Example 4.2.2, we have

lim as x → ∞ of h(x) = lim as x → ∞ of [1 + (1 − 4p²)/(4x²)] = 1.

So given ε > 0, there exists a number x₁ > 0 such that h(x) ∈ (1 − ε, 1 + ε) for all x > x₁. Choosing ε = 1/4, we have h(x) > 3/4 > 1/4 for all x > x₁. So by Theorem 4.2.4, every solution of u'' + h(x)u = 0 vanishes at least once between any two zeros of a solution of v'' + (1/4)v = 0. Also every non-trivial solution of v'' + (1/4)v = 0 has infinitely many zeros. It follows that every non-trivial solution of u'' + h(x)u = 0 has infinitely many zeros.
Ex. 4.2.5. Let yp(x) be a non-trivial solution of the Bessel's equation. Show that every interval of length π contains at least one zero of yp(x) if 0 ≤ p < 1/2, and at most one zero if p > 1/2.

Sol. 4.2.5. Let [x0, x0 + π] be any interval of length π. The non-trivial solution sin(x − x0) of the DE v'' + v = 0 vanishes at the end points x0 and x0 + π. Also, for 0 ≤ p < 1/2, yp(x) vanishes at least once between two successive zeros of any non-trivial solution of v'' + v = 0, by the Sturm comparison theorem, since 1 + (1 − 4p²)/(4x²) > 1. So [x0, x0 + π] contains at least one zero of yp(x). Next, if p > 1/2, then again by the Sturm comparison theorem, at least one zero of sin(x − x0) lies between two successive zeros of yp(x). Now, the interval [x0, x0 + π] can contain at most one zero of yp(x). For, if there were two zeros of yp(x) in the interval [x0, x0 + π], then sin(x − x0) would have to vanish at some point between x0 and x0 + π, which is not possible.
Chapter 5

Power Series Solutions and Special Functions

5.1 Some Basics of Power Series


An infinite series of the form

Σ_{n=0}^{∞} a_n(x − x0)ⁿ = a0 + a1(x − x0) + a2(x − x0)² + ........ (5.1)

is called a power series in x − x0.

The power series (5.1) is said to converge at a point x if lim_{m→∞} Σ_{n=0}^{m} a_n(x − x0)ⁿ exists finitely, and the sum of the series is defined as the value of the limit. Obviously the power series (5.1) converges at x = x0, and in this case its sum is a0. If R is the largest positive real number such that the power series (5.1) converges for all x with |x − x0| < R, then R is called the radius of convergence of the power series, and (x0 − R, x0 + R) is called the interval of convergence. If the power series converges only for x = x0, then R = 0. If the power series converges for every real value of x, then R = ∞.
We can derive a formula for R by using the ratio test. For, by the ratio test the power series (5.1) converges if

lim_{n→∞} |a_{n+1}/a_n| |x − x0| < 1, that is, if |x − x0| < R, where R = lim_{n→∞} |a_n/a_{n+1}|.

Similarly, by Cauchy's root test the power series (5.1) converges if lim_{n→∞} |a_n|^(1/n) |x − x0| < 1, that is, if |x − x0| < R, where R = 1/lim_{n→∞} |a_n|^(1/n).

Ex. Σ_{n=0}^{∞} xⁿ (R = 1. So the power series converges for −1 < x < 1.)

Ex. Σ_{n=0}^{∞} xⁿ/n! (R = ∞. So the power series converges for all x.)

Ex. Σ_{n=0}^{∞} n! xⁿ (R = 0. So the power series converges only for x = 0.)
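The ratio formula for R can be probed numerically on the three examples above. A sketch using exact rational arithmetic (evaluating the ratio at one large n only suggests the limit, it does not prove it):

```python
from fractions import Fraction
from math import factorial

def ratio_R(a, n=200):
    """|a_n / a_{n+1}| at a single large n -- an illustrative estimate
    of the radius of convergence when the limit exists."""
    return abs(a(n) / a(n + 1))

R_geom = ratio_R(lambda n: Fraction(1))                 # sum x^n
R_exp  = ratio_R(lambda n: Fraction(1, factorial(n)))   # sum x^n / n!
R_zero = ratio_R(lambda n: Fraction(factorial(n)))      # sum n! x^n
```

`R_geom` stays at 1 (R = 1), `R_exp` equals n + 1 and grows without bound (R = ∞), and `R_zero` equals 1/(n + 1) and shrinks to 0 (R = 0).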
Now suppose that the power series (5.1) converges to f(x) for |x − x0| < R, that is,

f(x) = Σ_{n=0}^{∞} a_n(x − x0)ⁿ = a0 + a1(x − x0) + a2(x − x0)² + a3(x − x0)³ + ........ (5.2)


Then it can be proved that f(x) possesses derivatives of all orders in |x − x0| < R. Also, the series can be differentiated termwise in the sense that

f'(x) = Σ_{n=1}^{∞} n a_n(x − x0)^(n−1) = a1 + 2a2(x − x0) + 3a3(x − x0)² + ........,

f''(x) = Σ_{n=2}^{∞} n(n − 1)a_n(x − x0)^(n−2) = 2a2 + 3·2a3(x − x0) + ........,

and so on, and each of the resulting series converges for |x − x0| < R. The successive differentiated series suggest that a_n = f⁽ⁿ⁾(x0)/n!. Also, the power series (5.2) can be integrated termwise provided the limits of integration lie inside the interval of convergence.
of integration lie inside the interval of convergence.
1
X
If we have another power series bn (x x0 )n converging to g(x) for |x x0 | < R, that is,
n=0

1
X
g(x) = bn (x x0 )n = b0 + b1 (x x0 ) + b2 (x x0 )2 + b3 (x x0 )3 + ........, (5.3)
n=0

then (5.2) and (5.3) can be added or subtracted termwise, that is,
1
X
f (x) g(x) = (an bn )(x x0 )n = (a0 b0 ) + (a1 b1 )(x x0 ) + (a2 b2 )(x x0 )2 + ........
n=0

The two series can be multiplied also in the sense that


1
X
f (x)g(x) = (a0 bn + a1 bn 1 + ....... + an b0 )(x x0 ) n
n=0
= a0 b0 + (a0 b1 + a1 b0 )(x x0 ) + (a0 b2 + a1 b1 + a2 b0 )(x x0 )2 + ......

If f(x) possesses derivatives of all orders in |x − x0| < R, then by Taylor's formula

f(x) = f(x0) + f'(x0)(x − x0) + [f''(x0)/2!](x − x0)² + ........ + [f⁽ⁿ⁾(x0)/n!](x − x0)ⁿ + Rn,

where Rn = [f⁽ⁿ⁺¹⁾(ξ)/(n + 1)!](x − x0)^(n+1), and ξ is some number between x0 and x. Obviously the power series Σ_{n=0}^{∞} [f⁽ⁿ⁾(x0)/n!](x − x0)ⁿ converges to f(x) for those values of x ∈ (x0 − R, x0 + R) for which Rn → 0 as n → ∞. Thus for a given function f(x), Taylor's formula enables us to find the power series that converges to f(x). On the other hand, if a convergent power series is given, then it is not always possible to find/recognize its sum function. In fact, very few power series have sums that are elementary functions.

If the power series Σ_{n=0}^{∞} [f⁽ⁿ⁾(x0)/n!](x − x0)ⁿ converges to f(x) for all values of x in some neighbourhood of x0 (an open interval containing x0), then f(x) is said to be analytic at x0, and the power series is called the Taylor series of f(x) at x0. Notice that f(x) is analytic at each point in the interval of convergence (x0 − R, x0 + R) of the power series.

5.2 Power series solution


The exact methods that we have learned in Chapter 2 and Chapter 3 are applicable only to selected classes of DE. There are DEs, such as the Bessel DE, which can not be solved by exact methods. Solutions of such DEs can be found in the form of power series. We start with a simple example.

Ex. 5.2.1. Find a power series solution of y' − y = 0 about x = 0.

Sol. 5.2.1. Assume that

y = Σ_{n=0}^{∞} a_n xⁿ = a0 + a1x + a2x² + a3x³ + ........, (5.4)

is a power series solution of the given DE. So

y' = Σ_{n=1}^{∞} n a_n x^(n−1) = a1 + 2a2x + 3a3x² + ........ (5.5)

Substituting y and y' into the given DE, we get

Σ_{n=1}^{∞} n a_n x^(n−1) − Σ_{n=0}^{∞} a_n xⁿ = 0, (5.6)

which must be an identity in x since (5.4) is, by assumption, a solution of the given DE. So the coefficients of all powers of x must be zero. Equating to 0 the coefficient of x^(n−1), we obtain

n a_n − a_{n−1} = 0, or a_n = (1/n) a_{n−1}.

Substituting n = 1, 2, 3, ...., we get

a1 = a0,
a2 = (1/2)a1 = (1/2!)a0,
a3 = (1/3)a2 = (1/3!)a0,

and so on. Plugging the values of a1, a2, ..... into (5.4), we get

y = a0 + a0x + (1/2!)a0x² + (1/3!)a0x³ + ........
= a0(1 + x + x²/2! + x³/3! + ..............).
2 3
Let us examine the validity of this solution. We know that the power series 1 + x + x²/2! + x³/3! + .............. converges for all x. It implies that the term by term differentiation carried out in (5.5) is valid for all x. Similarly, the difference of the two series (5.4) and (5.5) considered in (5.6) is valid for all x. It follows that y = a0(1 + x + x²/2! + x³/3! + ..............) is a valid solution of the given DE for all x. Also, we know that eˣ = 1 + x + x²/2! + x³/3! + ............... So y = a0eˣ is the general solution of the DE y' − y = 0, as expected.
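The recurrence a_n = a_{n−1}/n can also be run numerically. A short sketch builds the first 30 coefficients and compares the partial sum at x = 1 with e:

```python
import math

# a_0 = 1 (taking a0 = 1); a_n = a_{n-1} / n, the recurrence derived above
a = [1.0]
for n in range(1, 30):
    a.append(a[-1] / n)

partial = sum(a)        # partial sum of the series evaluated at x = 1
```

The partial sum agrees with math.exp(1) to machine precision, as the closed form y = a0·eˣ predicts.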

Ordinary and regular singular points

Consider a second order homogeneous LDE

y'' + p(x)y' + q(x)y = 0. (5.7)

If the functions p(x) and q(x) are analytic at x = x0, then x0 is called an ordinary point of the DE (5.7). If p(x) and/or q(x) fail to be analytic at x0, but (x − x0)p(x) and (x − x0)²q(x) are analytic at x0, then we say that x0 is a regular singular point of (5.7); otherwise we call x0 an irregular singular point of (5.7). For example, x = 0 is a regular singular point of the DE x²y'' + xy' + 2y = 0, and every non-zero real number is an ordinary point of the same DE. Further, x = 0 is an irregular singular point of the DE x³y'' + xy' + y = 0.
The following theorem gives a criterion for the existence of a power series solution near an ordinary point.

Theorem 5.2.1. If a_0, a_1 are arbitrary constants, and x_0 is an ordinary point of a DE y'' + p(x)y' + q(x)y = 0, then there exists a unique solution y(x) of the DE that is analytic at x_0 such that y(x_0) = a_0 and y'(x_0) = a_1. Furthermore, the power series expansion of y(x) is valid in |x - x_0| < R provided the power series expansions of p(x) and q(x) are valid in this interval.

The above theorem asserts that there exists a unique power series solution of the form

y(x) = \sum_{n=0}^{\infty} a_n(x - x_0)^n = a_0 + a_1(x - x_0) + a_2(x - x_0)^2 + a_3(x - x_0)^3 + \cdots,

about the ordinary point x_0 satisfying the initial conditions y(x_0) = a_0 and y'(x_0) = a_1. The constants a_2, a_3 and so on are determined in terms of a_0 or a_1 as illustrated in the following examples.

Ex. 5.2.2. Find power series solution of y'' - y = 0 about x = 0.

Sol. 5.2.2. Here p(x) = 0 and q(x) = -1, both analytic at x = 0. So x = 0 is an ordinary point of the given DE, and there exists a power series solution

y = \sum_{n=0}^{\infty} a_nx^n = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots,   (5.8)

where the constants a_2, a_3, ... are to be determined.

Substituting the power series solution into the given DE, we get

\sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} - \sum_{n=0}^{\infty} a_nx^n = 0.

Equating to zero the coefficient of x^{n-2}, we obtain

n(n-1)a_n - a_{n-2} = 0   or   a_n = \frac{1}{n(n-1)}a_{n-2}.

Substituting n = 2, 3, ..., we get

a_2 = \frac{1}{2}a_0 = \frac{1}{2!}a_0,
a_3 = \frac{1}{3.2}a_1 = \frac{1}{3!}a_1,
a_4 = \frac{1}{4.3}a_2 = \frac{1}{4!}a_0,
a_5 = \frac{1}{5.4}a_3 = \frac{1}{5!}a_1,

and so on. Plugging the values of a_2, a_3, a_4, a_5, ... into (5.8), we get

y = a_0 + a_1x + \frac{1}{2!}a_0x^2 + \frac{1}{3!}a_1x^3 + \frac{1}{4!}a_0x^4 + \frac{1}{5!}a_1x^5 + \cdots
  = a_0\left(1 + \frac{1}{2!}x^2 + \frac{1}{4!}x^4 + \cdots\right) + a_1\left(x + \frac{1}{3!}x^3 + \frac{1}{5!}x^5 + \cdots\right),

the required power series solution of the given DE. We know that (e^x + e^{-x})/2 = 1 + \frac{1}{2!}x^2 + \frac{1}{4!}x^4 + \cdots and (e^x - e^{-x})/2 = x + \frac{1}{3!}x^3 + \frac{1}{5!}x^5 + \cdots. So the power series solution becomes y = c_1e^x + c_2e^{-x}, where c_1 = (a_0 + a_1)/2 and c_2 = (a_0 - a_1)/2, which is the same solution of y'' - y = 0 as we obtain by the exact method.
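The two-term recurrence a_n = \frac{1}{n(n-1)}a_{n-2} can likewise be checked numerically (an illustrative sketch; the function name is ours): taking (a_0, a_1) = (1, 0) should reproduce \cosh x, and (0, 1) should reproduce \sinh x.

```python
import math

def series_solution(x, a0, a1, terms=30):
    # iterate a_n = a_{n-2} / (n(n-1)) for n >= 2
    coeffs = [a0, a1]
    for n in range(2, terms):
        coeffs.append(coeffs[n - 2] / (n * (n - 1)))
    return sum(c * x**k for k, c in enumerate(coeffs))

assert abs(series_solution(1.0, 1.0, 0.0) - math.cosh(1.0)) < 1e-12
assert abs(series_solution(1.0, 0.0, 1.0) - math.sinh(1.0)) < 1e-12
```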
Ex. 5.2.3. Find power series solution of (1 + x^2)y'' + xy' - y = 0 about x = 0.

Sol. 5.2.3. Here x = 0 is an ordinary point of the given DE. So there exists a power series solution

y = \sum_{n=0}^{\infty} a_nx^n = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots.   (5.9)

Substituting the power series solution (5.9) into the given DE, we get

(1 + x^2)\sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} + x\sum_{n=1}^{\infty} na_nx^{n-1} - \sum_{n=0}^{\infty} a_nx^n = 0
\Longrightarrow \sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} + \sum_{n=0}^{\infty} [n(n-1) + n - 1]a_nx^n = 0
\Longrightarrow \sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} + \sum_{n=0}^{\infty} (n-1)(n+1)a_nx^n = 0.

Equating to zero the coefficient of x^{n-2}, we obtain

n(n-1)a_n + (n-3)(n-1)a_{n-2} = 0   \Longrightarrow   a_n = \frac{3-n}{n}a_{n-2} provided n \neq 1.

Substituting n = 2, 3, ..., we get

a_2 = \frac{1}{2}a_0,
a_3 = 0,
a_4 = -\frac{1}{4}a_2 = -\frac{1}{4.2}a_0,
a_5 = 0,
a_6 = -\frac{3}{6}a_4 = \frac{3}{6.4.2}a_0,

and so on. Plugging the values of a_2, a_3, a_4, a_5, a_6 and so on into (5.9), we get

y = a_0 + a_1x + \frac{1}{2}a_0x^2 + 0.x^3 - \frac{1}{4.2}a_0x^4 + 0.x^5 + \frac{3}{6.4.2}a_0x^6 + \cdots
  = a_0\left(1 + \frac{1}{2}x^2 - \frac{1}{4.2}x^4 + \frac{3}{6.4.2}x^6 - \cdots\right) + a_1x,

the required power series solution of the given differential equation.
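For this example too, the recurrence can be verified numerically (a hedged sketch; the function name is ours). As a check only, note that the a_0-series is the binomial series of \sqrt{1 + x^2} for |x| < 1, a closed form not needed in the text but convenient for testing:

```python
import math

def series_solution(x, a0, a1, terms=40):
    # iterate a_n = (3 - n)/n * a_{n-2} for n >= 2 (so a_3 = a_5 = ... = 0)
    coeffs = [a0, a1]
    for n in range(2, terms):
        coeffs.append((3 - n) / n * coeffs[n - 2])
    return sum(c * x**k for k, c in enumerate(coeffs))

x = 0.4
# a0-series matches sqrt(1 + x^2); a1-series is just x
assert abs(series_solution(x, 1.0, 0.0) - math.sqrt(1 + x * x)) < 1e-9
assert abs(series_solution(x, 0.0, 1.0) - x) < 1e-12
```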
Mathematics-III 37

The following theorem, due to Frobenius, gives a criterion for the existence of a power series solution near a regular singular point.

Theorem 5.2.2. If x_0 is a regular singular point of a DE y'' + p(x)y' + q(x)y = 0, then there exists at least one power series solution of the form y = \sum_{n=0}^{\infty} a_n(x - x_0)^{n+r} (a_0 \neq 0), where r is some root of the quadratic equation (known as the indicial equation) obtained by equating to zero the coefficient of the lowest degree term in x of the equation that arises on substituting y = \sum_{n=0}^{\infty} a_n(x - x_0)^{n+r} into the given DE.

Remark 5.2.1. The above theorem by Frobenius guarantees at least one power series solution of the form \sum_{n=0}^{\infty} a_n(x - x_0)^{n+r} (a_0 \neq 0) of the DE y'' + p(x)y' + q(x)y = 0, which we call a Frobenius solution. If the roots of the indicial equation do not differ by an integer, we get two LI Frobenius solutions. In case there exists only one Frobenius solution, it corresponds to the larger root of the indicial equation. The other LI solution depends on the nature of the roots of the indicial equation, as illustrated in the following examples.

Ex. 5.2.4. Find power series solutions of 2x^2y'' + xy' - (x^2 + 1)y = 0 about x = 0.

Sol. 5.2.4. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius solution of the form

y = \sum_{n=0}^{\infty} a_nx^{n+r} = x^r(a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots).   (5.10)

Substituting (5.10) into the given DE, we obtain

\sum_{n=0}^{\infty} a_n(n+r-1)(2n+2r+1)x^{n+r} - \sum_{n=0}^{\infty} a_nx^{n+r+2} = 0.   (5.11)

Equating to zero the coefficient of x^r, the lowest degree term in x, we obtain

a_0(r-1)(2r+1) = 0   or   (r-1)(2r+1) = 0.

Therefore, the roots of the indicial equation are r = 1, -1/2, which do not differ by an integer. So we shall get two LI Frobenius solutions.
Next, equating to zero the coefficient of x^{r+1}, we find

a_1r(2r+3) = 0   or   a_1 = 0 for r = 1, -1/2.

Now, equating to zero the coefficient of x^{n+r}, we have the recurrence relation

a_n(n+r-1)(2n+2r+1) - a_{n-2} = 0   or   a_n = \frac{1}{(n+r-1)(2n+2r+1)}a_{n-2},

where n = 2, 3, 4, ....
For r = 1, we have

a_n = \frac{1}{n(2n+3)}a_{n-2},  a_2 = \frac{1}{2.7}a_0,  a_3 = \frac{1}{3.9}a_1 = 0,  a_4 = \frac{1}{4.11}a_2 = \frac{1}{2.7.4.11}a_0, ....

For r = -1/2, we have

a_n = \frac{1}{n(2n-3)}a_{n-2},  a_2 = \frac{1}{2.1}a_0,  a_3 = \frac{1}{3.3}a_1 = 0,  a_4 = \frac{1}{4.5}a_2 = \frac{1}{2.1.4.5}a_0, ....

Thus, two LI Frobenius solutions of the given DE are

y_1 = a_0x\left(1 + \frac{x^2}{2.7} + \frac{x^4}{2.7.4.11} + \cdots\right),
y_2 = a_0x^{-1/2}\left(1 + \frac{x^2}{2.1} + \frac{x^4}{2.1.4.5} + \cdots\right).
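The coefficients listed above can be regenerated from the recurrence with exact rational arithmetic (a small sketch using Python's `fractions` module; the helper name is ours):

```python
from fractions import Fraction

def frobenius_coeffs(r, terms=6):
    # recurrence from the text: a_n = a_{n-2} / ((n+r-1)(2n+2r+1)), a_0 = 1, a_1 = 0
    a = [Fraction(1), Fraction(0)]
    for n in range(2, terms):
        a.append(a[n - 2] / ((n + r - 1) * (2 * n + 2 * r + 1)))
    return a

a = frobenius_coeffs(Fraction(1))         # larger root r = 1
assert a[2] == Fraction(1, 2 * 7) and a[4] == Fraction(1, 2 * 7 * 4 * 11)

b = frobenius_coeffs(Fraction(-1, 2))     # smaller root r = -1/2
assert b[2] == Fraction(1, 2 * 1) and b[4] == Fraction(1, 2 * 1 * 4 * 5)
```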
Ex. 5.2.5. Find power series solutions of xy'' + y' - xy = 0 about x = 0.

Sol. 5.2.5. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius solution of the form

y = \sum_{n=0}^{\infty} a_nx^{n+r} = x^r(a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots).   (5.12)

Substituting (5.12) into the given DE, we obtain

\sum_{n=0}^{\infty} a_n(n+r)^2x^{n+r-1} - \sum_{n=0}^{\infty} a_nx^{n+r+1} = 0.   (5.13)

Equating to zero the coefficient of x^{r-1}, the lowest degree term in x, we obtain

a_0r^2 = 0   or   r^2 = 0.

Therefore, the roots of the indicial equation are r = 0, 0, which are equal. So we shall get only one Frobenius series solution.
Next, equating to zero the coefficient of x^r, we find

a_1(r+1)^2 = 0   or   a_1 = 0 for r = 0.

Now, equating to zero the coefficient of x^{n+r-1}, we have the recurrence relation

a_n(n+r)^2 - a_{n-2} = 0   or   a_n = \frac{1}{(n+r)^2}a_{n-2},

where n = 2, 3, 4, ....
Therefore, we have

a_2 = \frac{1}{(r+2)^2}a_0,  a_3 = \frac{1}{(r+3)^2}a_1 = 0,  a_4 = \frac{1}{(r+4)^2}a_2 = \frac{1}{(r+2)^2(r+4)^2}a_0, ....

Plugging these values into (5.12), we get

y = a_0x^r\left(1 + \frac{x^2}{(r+2)^2} + \frac{x^4}{(r+2)^2(r+4)^2} + \cdots\right).   (5.14)

Taking r = 0, we get the following Frobenius solution:

y_1 = a_0\left(1 + \frac{x^2}{2^2} + \frac{x^4}{2^2.4^2} + \cdots\right).

To get another LI solution, we substitute (5.14) into the given DE. Then we have

xy'' + y' - xy = a_0r^2x^{r-1}   or   (xD^2 + D - x)y = a_0r^2x^{r-1}.   (5.15)

Note that substitution of (5.14) into the given DE leaves only the lowest degree term in x. Obviously (y)_{r=0} = y_1 satisfies (5.15) and hence the given DE. Now, differentiating (5.15) partially w.r.t. r, we obtain

(xD^2 + D - x)\frac{\partial y}{\partial r} = a_0(2rx^{r-1} + r^2x^{r-1}\ln x).   (5.16)

The right hand side vanishes at r = 0, which shows that \left(\frac{\partial y}{\partial r}\right)_{r=0} is a solution of the given DE. Thus, the second LI solution of the given DE is

y_2 = \left(\frac{\partial y}{\partial r}\right)_{r=0} = y_1\ln x - a_0\left(\frac{x^2}{4} + \frac{3x^4}{128} + \cdots\right).
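As a check on y_1 (a sketch; stated only for reference, not used in the text), iterating a_n = a_{n-2}/n^2 at r = 0 gives even coefficients 1/(2^k k!)^2, which are the series coefficients of the modified Bessel function I_0(x):

```python
from fractions import Fraction
import math

# Recurrence at r = 0 for xy'' + y' - xy = 0: a_n = a_{n-2}/n^2, a_0 = 1, a_1 = 0.
a = [Fraction(1), Fraction(0)]
for n in range(2, 9):
    a.append(a[n - 2] / n**2)

# Even coefficients come out as 1/(2^k k!)^2.
for k in range(5):
    assert a[2 * k] == Fraction(1, (2**k * math.factorial(k)) ** 2)
```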

Ex. 5.2.6. Find power series solutions of x(1 + x)y'' + 3xy' + y = 0 about x = 0.

Sol. 5.2.6. Here x = 0 is a regular singular point of the given DE. So there exists at least one Frobenius solution of the form

y = \sum_{n=0}^{\infty} a_nx^{n+r} = x^r(a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots).   (5.17)

Substituting (5.17) into the given DE, we obtain

\sum_{n=0}^{\infty} a_n(n+r)(n+r-1)x^{n+r-1} + \sum_{n=0}^{\infty} a_n[(n+r)(n+r+2) + 1]x^{n+r} = 0.   (5.18)

Equating to zero the coefficient of x^{r-1}, the lowest degree term in x, we obtain

a_0r(r-1) = 0   or   r(r-1) = 0.

Therefore, the roots of the indicial equation are r = 0, 1, which differ by an integer. So we shall get only one Frobenius solution, and it corresponds to the larger root r = 1.
Now, equating to zero the coefficient of x^{n+r-1}, we have the recurrence relation

a_n(n+r-1) + a_{n-1}(n+r) = 0   or   a_n = -\frac{n+r}{n+r-1}a_{n-1},

where n = 1, 2, 3, 4, ....
Therefore, we have

a_1 = -\frac{r+1}{r}a_0,  a_2 = \frac{r+2}{r}a_0,  a_3 = -\frac{r+3}{r}a_0, ....

For r = 1, we get a_1 = -2a_0, a_2 = 3a_0, a_3 = -4a_0, .... So the Frobenius series solution is

y_1 = x^r(a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots) = a_0(x - 2x^2 + 3x^3 - 4x^4 + \cdots).   (5.19)

Now we find the other LI solution. Since a_1, a_2, ... are not defined at r = 0, we replace a_0 by b_0r in (5.17). Thus the modified series solution reads as

y = x^r(b_0r + a_1x + a_2x^2 + a_3x^3 + \cdots),

which on substitution into the given DE yields

x(1 + x)y'' + 3xy' + y = b_0r^2(r-1)x^{r-1}.   (5.20)

Obviously (y)_{r=0} and (y)_{r=1} satisfy the given DE. But we find that the solutions

(y)_{r=0} = -b_0(x - 2x^2 + \cdots),
(y)_{r=1} = b_0(x - 2x^2 + \cdots),

are not LI from the Frobenius solution (5.19). So we partially differentiate (5.20) w.r.t. r and find that \left(\frac{\partial y}{\partial r}\right)_{r=0} is a solution of the given DE. Thus the other LI solution of the given DE reads as

y_2 = \left(\frac{\partial y}{\partial r}\right)_{r=0} = y_1\ln x + b_0(1 - x + x^2 - x^3 + \cdots).

Ex. 5.2.7. Find power series solutions of x^2y'' + x^3y' + (x^2 - 2)y = 0 about x = 0.

Sol. 5.2.7. The roots of the indicial equation are r = 2, -1, and the two LI solutions are

y_1 = a_0x^2\left(1 - \frac{3}{10}x^2 + \frac{3}{56}x^4 - \cdots\right),
y_2 = a_0x^{-1}.

Ex. 5.2.8. Find power series solutions of x^2y'' + 6xy' + (x^2 + 6)y = 0 about x = 0.

Sol. 5.2.8. The roots of the indicial equation are r = -2, -3, and for r = -3 the recurrence relation is

a_n = -\frac{1}{n(n-1)}a_{n-2}.

For r = -3, we find that a_1 is arbitrary. In this case, r = -3 provides the general solution y = a_0y_1 + a_1y_2, where

y_1 = x^{-3}\left(1 - \frac{1}{2!}x^2 + \frac{1}{4!}x^4 - \cdots\right),
y_2 = x^{-3}\left(x - \frac{1}{3!}x^3 + \frac{1}{5!}x^5 - \cdots\right).

Note that corresponding to the larger root r = -2, you will get a Frobenius solution that is a constant multiple of y_2. (Find and see!)

5.3 Gauss's Hypergeometric Equation

A DE of the form

x(1-x)y'' + [c - (a+b+1)x]y' - aby = 0,   (5.21)

where a, b and c are constants, is called the hypergeometric equation. We observe that x = 0 and x = 1 are regular singular points of (5.21). So there exists at least one Frobenius solution of the form

y = \sum_{n=0}^{\infty} a_nx^{n+r} = x^r(a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots).   (5.22)

Substituting (5.22) into (5.21), we obtain

\sum_{n=0}^{\infty} a_n(n+r)(c+n+r-1)x^{n+r-1} - \sum_{n=0}^{\infty} a_n(n+r+a)(n+r+b)x^{n+r} = 0.   (5.23)

Comparing coefficients of x^{r-1}, the lowest degree term in x, we obtain

a_0r(c+r-1) = 0   or   r(c+r-1) = 0.

Therefore, the roots of the indicial equation are r = 0, 1-c. Now, comparing the coefficients of x^{n+r-1}, we have the recurrence relation

a_n(n+r)(c+n+r-1) - a_{n-1}(n-1+r+a)(n-1+r+b) = 0   or   a_n = \frac{(a+n-1+r)(b+n-1+r)}{(n+r)(c+n-1+r)}a_{n-1},

where n = 1, 2, 3, 4, ....
For r = 0, we have

a_n = \frac{(a+n-1)(b+n-1)}{n(c+n-1)}a_{n-1},  a_1 = \frac{a.b}{1.c}a_0,  a_2 = \frac{(a+1)(b+1)}{2(c+1)}a_1 = \frac{a(a+1)b(b+1)}{1.2.c(c+1)}a_0, ....

So the Frobenius solution corresponding to r = 0 reads as

y = a_0\left(1 + \frac{a.b}{1.c}x + \frac{a(a+1)b(b+1)}{1.2.c(c+1)}x^2 + \cdots\right).

This series with a_0 = 1 is called the hypergeometric series and is denoted by F(a, b, c, x). Thus,

F(a, b, c, x) = 1 + \sum_{n=1}^{\infty}\frac{a(a+1)\cdots(a+n-1)\,b(b+1)\cdots(b+n-1)}{n!\,c(c+1)\cdots(c+n-1)}x^n.

In case a = 1 and b = c, we get

F(1, b, b, x) = 1 + x + x^2 + \cdots,

the familiar geometric series. Thus, F(a, b, c, x) generalizes the geometric series; that is why it is named the hypergeometric series. Further, we find

\lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right||x| = \lim_{n\to\infty}\frac{(a+n)(b+n)}{(n+1)(c+n)}|x| = |x|,

provided c is not zero or a negative integer. Therefore, F(a, b, c, x) is an analytic function, called the hypergeometric function, on the interval |x| < 1. It is the simplest particular solution of the hypergeometric equation.
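A direct numerical sketch of the partial sums (the function name is ours) illustrates the geometric-series case F(1, b, b, x) = 1/(1-x), and also the binomial case (1+x)^n = F(-n, b, b, -x) listed in Remark 5.3.2:

```python
def F(a, b, c, x, terms=80):
    # partial sum of the hypergeometric series (c not zero or a negative integer)
    total, term = 1.0, 1.0
    for n in range(terms):
        term *= (a + n) * (b + n) / ((n + 1) * (c + n)) * x
        total += term
    return total

x = 0.3
assert abs(F(1, 5, 5, x) - 1 / (1 - x)) < 1e-12          # geometric series
assert abs(F(-3, 7, 7, -x) - (1 + x) ** 3) < 1e-12       # binomial case, n = 3
```

When a is a negative integer -n, the factor (a + n) eventually vanishes and the series terminates, which is why the binomial case is a polynomial.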
Next, we find the series solution corresponding to the indicial root r = 1-c. The series solution in this case is given by

y = x^{1-c}(a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots),

where the constants a_1, a_2 and so on can be determined using the recurrence relation. Alternatively, we substitute y = x^{1-c}z into the given DE (5.21) and obtain

x(1-x)z'' + [(2-c) - ((a-c+1) + (b-c+1) + 1)x]z' - (a-c+1)(b-c+1)z = 0,   (5.24)

which is the hypergeometric equation with the constants a, b and c replaced by a-c+1, b-c+1 and 2-c. Therefore, (5.24) has the power series solution

z = F(a-c+1, b-c+1, 2-c, x).

So the second power series solution of (5.21) is

y = x^{1-c}F(a-c+1, b-c+1, 2-c, x).

Thus, the general solution of (5.21) near the regular singular point x = 0 is

y = c_1F(a, b, c, x) + c_2x^{1-c}F(a-c+1, b-c+1, 2-c, x),   (5.25)

provided c is not an integer.
Next, we solve the DE (5.21) near the regular singular point x = 1. Putting t = 1-x, we find that x = 1 corresponds to t = 0 and (5.21) transforms to

t(1-t)y'' + [(a+b-c+1) - (a+b+1)t]y' - aby = 0,

where the prime denotes the derivative with respect to t. It is a hypergeometric equation with c replaced by a+b-c+1. So its solution, with t replaced by 1-x, in view of (5.25) reads as

y = c_1F(a, b, a+b-c+1, 1-x) + c_2(1-x)^{c-a-b}F(c-b, c-a, c-a-b+1, 1-x),

provided c-a-b is not an integer.

Remark 5.3.1. Any DE of the form

(x-A)(x-B)y'' + (C+Dx)y' + Ey = 0,   (5.26)

where A, B, C, D and E are constants with A \neq B and D \neq 0, can be transformed to the hypergeometric equation

t(1-t)y'' + (F+Gt)y' + Hy = 0,   (5.27)

where

t = \frac{x-A}{B-A}

and F, G, H are certain combinations of the constants in (5.26). The primes in (5.27) denote derivatives with respect to t. This is a hypergeometric equation with a, b and c defined by F = c, G = -(a+b+1) and H = -ab. Therefore, (5.27) can be solved in terms of the hypergeometric function near t = 0 and t = 1. It follows that (5.26) can be solved in terms of the same function near x = A and x = B.

Remark 5.3.2. Most of the familiar functions of elementary analysis can be expressed in terms of the hypergeometric function:
(i) (1+x)^n = F(-n, b, b, -x)
(ii) \log(1+x) = xF(1, 1, 2, -x)
(iii) \sin^{-1}x = xF(1/2, 1/2, 3/2, x^2)
(iv) e^x = \lim_{b\to\infty}F(a, b, a, x/b)

Ex. 5.3.1. Solve (x^2 - x - 6)y'' + (5+3x)y' + y = 0 near x = 3.

Sol. 5.3.1. The given equation can be rewritten as

(x-3)(x+2)y'' + (5+3x)y' + y = 0.   (5.28)

Here A = 3, B = -2. Therefore,

t = \frac{x-A}{B-A} = \frac{x-3}{-2-3} = \frac{3-x}{5},   so that   x = -5t+3.

So the given equation becomes

t(1-t)y'' + (14/5 - 3t)y' - y = 0,

a hypergeometric equation with c = 14/5, a+b+1 = 3, ab = 1. This implies a = b = 1. Therefore, the solution near t = 0, that is, x = 3, is

y = c_1F(1, 1, 14/5, (3-x)/5) + c_2((3-x)/5)^{-9/5}F(-4/5, -4/5, -4/5, (3-x)/5).
Chapter 6

Fourier Series

6.1 Introduction

We are familiar with the power series representation of a function f(x). The representation of f(x) in the form of a trigonometric series given by

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}(a_n\cos nx + b_n\sin nx),   (6.1)

is required in the treatment of many physical problems such as heat conduction, electromagnetic waves, mechanical vibrations etc. An important advantage of the series (6.1) over a usual power series in x is that it can represent f(x) even if f(x) possesses many discontinuities (e.g. the discontinuous impulse functions of electrical engineering). On the other hand, a power series can represent f(x) only when f(x) is continuous and possesses derivatives of all orders.
Let m and n be positive integers such that m \neq n. Then we have

\int_{-\pi}^{\pi}\cos nx\,dx = 0,  \int_{-\pi}^{\pi}\sin nx\,dx = 0,  \int_{-\pi}^{\pi}\cos mx\sin nx\,dx = 0,
\int_{-\pi}^{\pi}\cos mx\cos nx\,dx = 0,  \int_{-\pi}^{\pi}\sin mx\sin nx\,dx = 0.

Further, \int_{-\pi}^{\pi}\cos^2 nx\,dx = \pi = \int_{-\pi}^{\pi}\sin^2 nx\,dx.

Now, we do some classical calculations that were first done by Euler. We assume that the function f(x) in (6.1) is defined on [-\pi, \pi]. Also, we assume that the series in (6.1) is uniformly convergent so that term-by-term integration is possible.
Integrating both sides of (6.1) over [-\pi, \pi], we get

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx.   (6.2)

Multiplying both sides of (6.1) by \cos nx, and then integrating over [-\pi, \pi], we get

a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\,dx.   (6.3)

Note that this formula, for n = 0, gives the value of a_0 as given in (6.2). That is why a_0 is divided by 2 in (6.1).

Next, multiplying both sides of (6.1) by \sin nx, and then integrating over [-\pi, \pi], we get

b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\,dx.   (6.4)

These calculations show that the coefficients a_n and b_n can be obtained from the sum f(x) in (6.1) by means of the formulas (6.3) and (6.4), provided the series (6.1) is uniformly convergent. However, this situation is too restricted to be of much practical use, because first we have to ensure that the given function f(x) admits an expansion as a uniformly convergent trigonometric series. For this reason, we set aside the idea of finding the coefficients a_n and b_n in the expansion (6.1), which may or may not exist. Instead, we use the formulas (6.3) and (6.4) to define some numbers a_n and b_n, and then use these to construct a series of the form (6.1). When we follow this approach, the numbers a_n and b_n are called the Fourier coefficients of the function f(x), and the series (6.1) is called the Fourier series of f(x). Obviously, the function f(x) must be integrable in order to construct its Fourier series. Note that a discontinuous function may be integrable.
We hope that the Fourier series of f(x) will converge to f(x) so that (6.1) is a valid representation or expansion of f(x). However, this is not always true. There exist integrable functions whose Fourier series diverge at one or more points. That is why some advanced texts on Fourier series write (6.1) in the form

f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}(a_n\cos nx + b_n\sin nx),   (6.5)

where the sign \sim is used in order to emphasize that the Fourier series on the right is not necessarily convergent to f(x).
Just as a Fourier series need not converge, a convergent trigonometric series need not be the Fourier series of some function. For example, it is known that the trigonometric series

\sum_{n=1}^{\infty}\frac{\sin nx}{\ln(1+n)}

converges for all x. But it is not a Fourier series, since 1/\ln(1+n) can not be obtained from the formula (6.4) for any choice of integrable function f(x). In fact, this series fails to be a Fourier series because it fails to satisfy a remarkable theorem, which states that the term-by-term integral of any Fourier series (whether convergent or not) must converge for all x.
Thus, the fundamental problem of the subject of Fourier series is clearly to discover the properties of an integrable function that guarantee that its Fourier series not only converges but also converges to the function. Before this, let us see some examples.
Ex. 6.1.1. Find the Fourier series of the function f(x) = x, -\pi \le x \le \pi.

Sol. 6.1.1. We find

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx = 0,
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\,dx = 0,
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\,dx = -\frac{2}{n}(-1)^n.

So the Fourier series of f(x) = x reads as

x = 2\left(\sin x - \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x - \cdots\right).   (6.6)

Here the equals sign is an expression of hope rather than definite knowledge. It can be proved that the Fourier series in (6.6) converges to x in -\pi < x < \pi. At x = -\pi or x = \pi, the Fourier series converges to 0, and hence does not converge to f(x) = x at x = -\pi or x = \pi. Further, each term on the right hand side in (6.6) has period 2\pi, so the entire expression on the right hand side of (6.6) has period 2\pi. It follows that the Fourier series in (6.6) does not converge to f(x) = x outside the interval -\pi < x < \pi. But if f(x) = x is given to be a periodic function of period 2\pi, then the Fourier series in (6.6) converges to f(x) for all real values of x except the points of discontinuity x = \pm\pi, \pm 3\pi, \pm 5\pi, .... In the left panel of Figure 6.1, we show the plots of x, 2\sin x, 2\sin x - \sin 2x and 2\sin x - \sin 2x + (2/3)\sin 3x in the range -\pi < x < \pi. We see that as we consider more and more terms of the Fourier series in (6.6), it approximates the function f(x) = x better and better, as expected.
Figure 6.1: Left panel: plots of x (black line), 2\sin x (green curve), 2\sin x - \sin 2x (red curve) and 2\sin x - \sin 2x + (2/3)\sin 3x (blue curve) in the range -\pi < x < \pi.
Right panel: plots of f(x) (black lines), \pi/2 (green line), \pi/2 + 2\sin x (red curve), \pi/2 + 2\sin x + \frac{2}{3}\sin 3x (blue curve) and \pi/2 + 2\sin x + \frac{2}{3}\sin 3x + \frac{2}{5}\sin 5x (purple curve) in the range -\pi < x < \pi.
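The coefficient formula b_n = -\frac{2}{n}(-1)^n for f(x) = x can also be confirmed by direct numerical integration (a midpoint-rule sketch, not production quadrature; names are ours):

```python
import math

# Midpoint rule for b_n = (1/pi) * integral_{-pi}^{pi} x sin(nx) dx.
def b_n(n, steps=20000):
    h = 2 * math.pi / steps
    s = 0.0
    for i in range(steps):
        x = -math.pi + (i + 0.5) * h
        s += x * math.sin(n * x) * h
    return s / math.pi

for n in range(1, 6):
    assert abs(b_n(n) + (2 / n) * (-1)**n) < 1e-5
```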

Ex. 6.1.2. Find the Fourier series of the function

f(x) = \begin{cases} 0, & -\pi \le x < 0, \\ \pi, & 0 \le x \le \pi. \end{cases}

Sol. 6.1.2. We find

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx = \pi,
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\,dx = 0,
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\,dx = \frac{1}{n}[1-(-1)^n].

So the Fourier series of f(x) reads as

f(x) = \frac{\pi}{2} + 2\left(\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \cdots\right).   (6.7)

The Fourier series in (6.7) converges to f(x) in -\pi < x < \pi except at x = 0. At x = 0, the value of f(x) is \pi while the Fourier series converges to \pi/2. In the right panel of Figure 6.1, we show the plots of f(x), \pi/2, \pi/2 + 2\sin x, \pi/2 + 2\sin x + \frac{2}{3}\sin 3x and \pi/2 + 2\sin x + \frac{2}{3}\sin 3x + \frac{2}{5}\sin 5x in the range -\pi < x < \pi. We see that as we consider more and more terms of the Fourier series in (6.7), it approximates the function f(x) better and better, as expected.

6.2 Dirichlet's conditions for convergence

Let f(x) be a function defined and bounded on -\pi \le x < \pi such that it has only a finite number of discontinuities, and only a finite number of maxima and minima, on this interval. Let f(x) be defined by f(x + 2\pi) = f(x) for other values of x. Then the Fourier series of f(x) converges to \frac{1}{2}[f(x-) + f(x+)] at every point x. Thus, the Fourier series of f(x) converges to f(x) at every point x of continuity. If we redefine the function as the average of the two one-sided limits at each point of discontinuity, that is, f(x) = \frac{1}{2}[f(x-) + f(x+)], then the Fourier series represents f(x) everywhere.

Ex. 6.2.1. Find the Fourier series of the function

f(x) = \begin{cases} 0, & -\pi \le x < 0, \\ x, & 0 \le x \le \pi. \end{cases}

Hence show that \frac{\pi^2}{8} = 1 + \frac{1}{3^2} + \frac{1}{5^2} + \frac{1}{7^2} + \cdots.

Sol. 6.2.1. We find

a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\,dx = \frac{\pi}{2},
a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\cos nx\,dx = \frac{1}{\pi n^2}[(-1)^n - 1],
b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}f(x)\sin nx\,dx = \frac{(-1)^{n+1}}{n}.

So the Fourier series of f(x) is

f(x) = \frac{\pi}{4} + \sum_{n=1}^{\infty}\frac{1}{\pi n^2}[(-1)^n - 1]\cos nx + \sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin nx.   (6.8)

At x = \pi, f(x) is discontinuous (in the periodic extension). So its Fourier series converges there to \frac{1}{2}[f(\pi-) + f(\pi+)] = \frac{1}{2}(\pi + 0) = \frac{\pi}{2}. So at x = \pi, (6.8) gives

\frac{\pi}{2} = \frac{\pi}{4} + \sum_{n=1}^{\infty}\frac{1}{\pi n^2}[(-1)^n - 1](-1)^n.

It can be rearranged to write

\frac{\pi^2}{8} = 1 + \frac{1}{3^2} + \frac{1}{5^2} + \frac{1}{7^2} + \cdots.
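The identity can be checked numerically from partial sums (a quick sketch):

```python
import math

# Partial sum of 1 + 1/3^2 + 1/5^2 + ... over odd k, compared with pi^2/8.
s = sum(1.0 / k**2 for k in range(1, 200001, 2))
assert abs(s - math.pi**2 / 8) < 1e-5
```

The tail of the sum decays like 1/(2N), so a large cutoff is needed for even modest accuracy.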

6.3 Fourier series for even and odd functions

Let f(x) be an integrable function defined on -\pi \le x \le \pi. If f(x) is even, then its Fourier series carries only cosine terms, and the Fourier coefficients are given by

a_n = \frac{2}{\pi}\int_0^{\pi}f(x)\cos nx\,dx,  b_n = 0.

If f(x) is odd, then its Fourier series carries only sine terms, and the Fourier coefficients are given by

a_n = 0,  b_n = \frac{2}{\pi}\int_0^{\pi}f(x)\sin nx\,dx.

For example, the Fourier coefficients of the odd function f(x) = x, -\pi \le x \le \pi, are a_n = 0 and b_n = \frac{2(-1)^{n-1}}{n}. So the Fourier series of x is given by

x = 2\left(\sin x - \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x - \cdots\right).   (6.9)

Note that the Fourier series converges to x for -\pi < x < \pi and not at the end points x = \pm\pi.
Similarly, the Fourier coefficients of the even function f(x) = |x|, -\pi \le x \le \pi, are a_0 = \pi, a_n = \frac{2}{\pi n^2}[(-1)^n - 1] and b_n = 0. So we have

|x| = \frac{\pi}{2} - \frac{4}{\pi}\left(\cos x + \frac{1}{3^2}\cos 3x + \frac{1}{5^2}\cos 5x + \cdots\right).   (6.10)

It is interesting to observe that the two series (6.9) and (6.10) both represent the same function f(x) = x on 0 \le x \le \pi, since |x| = x for x \ge 0. The series (6.9) is called the Fourier sine series of x, and the series (6.10) is called the Fourier cosine series of x. Similarly, any function f(x) satisfying the Dirichlet conditions on 0 \le x \le \pi can be expanded in both a sine series and a cosine series on this interval, subject to the caveat that the sine series does not converge to f(x) at the end points x = 0 and x = \pi unless f(x) = 0 at these points. Thus, to obtain the sine series of a function, we redefine the function (if necessary) to have the value 0 at x = 0, and then extend it over the interval -\pi \le x < 0 such that f(-x) = -f(x) for all x lying in -\pi \le x \le \pi. This is called the odd extension of f(x). Similarly, the even extension of f(x) can be carried out in order to obtain the Fourier cosine series.

Ex. 6.3.1. Find the Fourier sine and cosine series of f(x) = \cos x, 0 \le x \le \pi.

Sol. 6.3.1. For the sine series, we find

b_n = \frac{2}{\pi}\int_0^{\pi}f(x)\sin nx\,dx = \frac{2}{\pi}\int_0^{\pi}\cos x\sin nx\,dx = \frac{2n[1+(-1)^n]}{\pi(n^2-1)},  n \neq 1,
b_1 = \frac{2}{\pi}\int_0^{\pi}\cos x\sin x\,dx = 0.

So the Fourier sine series of \cos x is given by

\cos x = \sum_{n=2}^{\infty}\frac{2n[1+(-1)^n]}{\pi(n^2-1)}\sin nx.

For the cosine series, we find

a_n = \frac{2}{\pi}\int_0^{\pi}f(x)\cos nx\,dx = \frac{2}{\pi}\int_0^{\pi}\cos x\cos nx\,dx = 0,  n \neq 1,
a_1 = \frac{2}{\pi}\int_0^{\pi}\cos x\cos x\,dx = 1.

So the Fourier cosine series of \cos x is given by

\cos x = \cos x.

6.4 Fourier series on arbitrary intervals

Let f(x) be defined on an interval -L \le x \le L. If we let t = \frac{\pi x}{L}, then we have

f(x) = f\left(\frac{Lt}{\pi}\right) = g(t),  -\pi \le t \le \pi.

So the Fourier series of g(t) is given by

g(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}(a_n\cos nt + b_n\sin nt),

where a_n = \frac{1}{\pi}\int_{-\pi}^{\pi}g(t)\cos nt\,dt,  b_n = \frac{1}{\pi}\int_{-\pi}^{\pi}g(t)\sin nt\,dt.

Since t = \frac{\pi x}{L}, it follows that

f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n\cos\frac{n\pi x}{L} + b_n\sin\frac{n\pi x}{L}\right),

where a_n = \frac{1}{L}\int_{-L}^{L}f(x)\cos\frac{n\pi x}{L}\,dx,  b_n = \frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{n\pi x}{L}\,dx.
Ex. 6.4.1. Find the Fourier coefficients of the function

f(x) = \begin{cases} 0, & -2 \le x < 0, \\ 1, & 0 \le x \le 2. \end{cases}

Sol. 6.4.1. Here L = 2. So we find

a_0 = \frac{1}{L}\int_{-L}^{L}f(x)\,dx = \frac{1}{2}\int_{-2}^{2}f(x)\,dx = 1,
a_n = \frac{1}{L}\int_{-L}^{L}f(x)\cos\frac{n\pi x}{L}\,dx = \frac{1}{2}\int_{-2}^{2}f(x)\cos\frac{n\pi x}{2}\,dx = 0,
b_n = \frac{1}{L}\int_{-L}^{L}f(x)\sin\frac{n\pi x}{L}\,dx = \frac{1}{2}\int_{-2}^{2}f(x)\sin\frac{n\pi x}{2}\,dx = \frac{1}{n\pi}[1-(-1)^n].
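These coefficients can be cross-checked by numerical integration on [-L, L] (a midpoint-rule sketch; function names are ours):

```python
import math

L = 2.0

def f(x):
    # the step function of Ex. 6.4.1
    return 0.0 if x < 0 else 1.0

def coeff(kind, n, steps=20000):
    # midpoint rule for (1/L) * integral_{-L}^{L} f(x) trig(n pi x / L) dx
    h = 2 * L / steps
    trig = math.cos if kind == 'a' else math.sin
    s = 0.0
    for i in range(steps):
        x = -L + (i + 0.5) * h
        s += f(x) * trig(n * math.pi * x / L) * h
    return s / L

assert abs(coeff('a', 0) - 1.0) < 1e-6
assert abs(coeff('a', 3)) < 1e-6
assert abs(coeff('b', 3) - (1 - (-1)**3) / (3 * math.pi)) < 1e-6
```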
Chapter 7

Boundary Value Problems

In this chapter, we shall discuss the solution of some boundary value problems.

7.1 One dimensional wave equation

Consider an elastic string of negligible mass and length \pi tied at the two ends along the x-axis at the points (0, 0) and (\pi, 0). Suppose the string is pulled into the shape y = f(x) in the xy-plane and released. Then it can be shown that the vibrations of the string in the xy-plane are governed by the one dimensional wave equation

\frac{\partial^2 y}{\partial x^2} = \frac{1}{a^2}\frac{\partial^2 y}{\partial t^2},   (7.1)

where a is some positive constant, and y(x, t) is the displacement of the string along the y-axis. The wave equation is subjected to the following four conditions.
The first condition is

y(0, t) = 0,   (7.2)

since the left end of the string is tied at (0, 0) for all time, and hence it can not have displacement along the y-axis.
The second condition is

y(\pi, t) = 0,   (7.3)

since the right end of the string is tied at (\pi, 0) for all time, and hence it can not have displacement along the y-axis.
The third condition is

\frac{\partial y}{\partial t} = 0 at t = 0,   (7.4)

since the string is at rest at t = 0.
The fourth condition is

y(x, 0) = f(x),   (7.5)

since the string is in the shape y = f(x) at t = 0.

Once the string is released from the initial shape y(x, 0) = f(x), we are interested in finding the displacement of the string from the x-axis at any time t. Equivalently, we wish to solve (7.1) for y(x, t) subject to the four conditions (7.2)-(7.5).
Assume that (7.1) possesses a solution of the form

y(x, t) = u(x)v(t),   (7.6)

where u(x) and v(t) are to be determined. Plugging (7.6) into (7.1), we get

\frac{u''(x)}{u(x)} = \frac{1}{a^2}\frac{v''(t)}{v(t)} = \lambda,   (7.7)

where \lambda is some constant. This yields the following two equations:

u''(x) - \lambda u(x) = 0,   (7.8)
v''(t) - \lambda a^2v(t) = 0.   (7.9)

Now, let us first solve (7.8); later, we shall look for the solution of (7.9). Considering (7.6), the condition y(0, t) = 0 in (7.2) gives u(0)v(t) = 0 or u(0) = 0. Similarly, y(\pi, t) = 0 in (7.3) gives u(\pi) = 0. Further, we see that the nature of the solution of (7.8) depends on the value of \lambda.

(i) When \lambda > 0, the solution reads as u(x) = c_1e^{\sqrt{\lambda}x} + c_2e^{-\sqrt{\lambda}x}. Using the conditions u(0) = 0 and u(\pi) = 0, we get c_1 = 0 = c_2, and hence u(x) = 0. This leads to the trivial solution y(x, t) = u(x)v(t) = 0, which is not of our interest.

(ii) When \lambda = 0, the solution reads as u(x) = c_1x + c_2. Again, using the conditions u(0) = 0 and u(\pi) = 0, we get c_1 = 0 = c_2, which leads to the trivial solution y(x, t) = u(x)v(t) = 0.

(iii) When \lambda < 0, say \lambda = -n^2, the solution reads as u(x) = c_1\sin nx + c_2\cos nx. Applying the condition u(0) = 0, we get c_2 = 0. The condition u(\pi) = 0 then implies that c_1\sin n\pi = 0. Obviously, for a non-trivial solution we must have c_1 \neq 0. Then the condition c_1\sin n\pi = 0 forces n to be a positive integer. Thus,

u_n(x) = \sin nx   (7.10)

is a non-trivial solution of (7.8) for each positive integer n.

Now, the solution of (7.9) with \lambda = -n^2 reads as v(t) = c_1\sin nat + c_2\cos nat. The condition in (7.4) leads to u(x)v'(0) = 0 or v'(0) = 0, which in turn gives c_1 = 0. So

v_n(t) = \cos nat   (7.11)

is a non-trivial solution of (7.9).

In view of (7.6), (7.10) and (7.11), we can say that

y_n(x, t) = u_n(x)v_n(t) = \sin nx\cos nat   (7.12)

is a solution of (7.1) for each positive integer n. It follows that

y(x, t) = \sum_{n=1}^{\infty}b_ny_n(x, t) = \sum_{n=1}^{\infty}b_n\sin nx\cos nat   (7.13)

is also a solution of (7.1). To determine b_n, we use the fourth condition y(x, 0) = f(x) given in (7.5). Then (7.13) gives

f(x) = \sum_{n=1}^{\infty}b_n\sin nx.   (7.14)

Notice that the series on the right hand side in (7.14) is the Fourier sine series of f(x) in the interval [0, \pi]. So we have

b_n = \frac{2}{\pi}\int_0^{\pi}f(x)\sin nx\,dx.   (7.15)

Hence,

y(x, t) = \sum_{n=1}^{\infty}b_n\sin nx\cos nat,   (7.16)

with b_n from (7.15), is the solution of (7.1) subject to the four conditions (7.2)-(7.5).

7.2 One dimensional heat equation

Consider a uniform rod of length \pi aligned along the x-axis from (0, 0) to (\pi, 0). Suppose that the two ends of the rod are kept at zero temperature all the time, and f(x) represents the temperature function at time t = 0. Then it can be shown that the temperature w(x, t) of the rod is governed by the one dimensional heat equation

\frac{\partial^2 w}{\partial x^2} = \frac{1}{a^2}\frac{\partial w}{\partial t},   (7.17)

where a is some positive constant. The heat equation is subjected to the following three conditions.
The first condition is

w(0, t) = 0,   (7.18)

since the left end of the rod is kept at zero temperature for all t.
The second condition is

w(\pi, t) = 0,   (7.19)

since the right end of the rod is kept at zero temperature for all t.
The third condition is

w(x, 0) = f(x),   (7.20)

since the temperature of the rod is given by f(x) at t = 0.

Having known the temperature of the rod at t = 0, we are interested in finding the temperature of the rod at any time t. Equivalently, we wish to solve (7.17) for w(x, t) subject to the three conditions (7.18)-(7.20).
Assume that (7.17) possesses a solution of the form

w(x, t) = u(x)v(t),   (7.21)

where u(x) and v(t) are to be determined. Plugging (7.21) into (7.17), we get

\frac{u''(x)}{u(x)} = \frac{1}{a^2}\frac{v'(t)}{v(t)} = \lambda,   (7.22)

where \lambda is some constant. This yields the following two equations:

u''(x) - \lambda u(x) = 0,   (7.23)
v'(t) - \lambda a^2v(t) = 0.   (7.24)

Following the strategy discussed in the previous section, the non-trivial solution of (7.23) subject to the conditions (7.18) and (7.19) reads as

u_n(x) = \sin nx,   (7.25)

where n is a positive integer and \lambda = -n^2.

Now, the solution of (7.24) with \lambda = -n^2 reads as v(t) = c_1e^{-n^2a^2t}. So

v_n(t) = e^{-n^2a^2t}   (7.26)

is a non-trivial solution of (7.24).

In view of (7.21), (7.25) and (7.26),

w_n(x, t) = u_n(x)v_n(t) = \sin nx\,e^{-n^2a^2t}   (7.27)

is a solution of (7.17) for each positive integer n. It follows that

w(x, t) = \sum_{n=1}^{\infty}b_nw_n(x, t) = \sum_{n=1}^{\infty}b_n\sin nx\,e^{-n^2a^2t}   (7.28)

is also a solution of (7.17). To determine b_n, we use the third condition w(x, 0) = f(x) given in (7.20). Then (7.28) gives

f(x) = \sum_{n=1}^{\infty}b_n\sin nx.   (7.29)

Notice that the series on the right hand side in (7.29) is the Fourier sine series of f(x) in the interval [0, \pi]. So we have

b_n = \frac{2}{\pi}\int_0^{\pi}f(x)\sin nx\,dx.   (7.30)

Hence,

w(x, t) = \sum_{n=1}^{\infty}b_n\sin nx\,e^{-n^2a^2t},   (7.31)

with b_n from (7.30), is the solution of (7.17) subject to the three conditions (7.18)-(7.20).
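Each separated mode w_n(x, t) = \sin nx\,e^{-n^2a^2t} of the heat equation can be checked against the PDE by finite differences (a numerical sketch; the constants and step sizes are chosen ad hoc):

```python
import math

a, n = 0.7, 3   # arbitrary positive constant and mode number (our choices)

def w(x, t):
    # separated mode w_n(x, t) = sin(nx) exp(-n^2 a^2 t)
    return math.sin(n * x) * math.exp(-n**2 * a**2 * t)

# w should vanish at the ends of the rod for all t
assert abs(w(0.0, 0.1)) < 1e-12 and abs(w(math.pi, 0.1)) < 1e-12

# finite-difference check of w_xx = (1/a^2) w_t at an interior point
x0, t0, h = 1.1, 0.2, 1e-5
w_xx = (w(x0 + h, t0) - 2 * w(x0, t0) + w(x0 - h, t0)) / h**2
w_t = (w(x0, t0 + h) - w(x0, t0 - h)) / (2 * h)
assert abs(w_xx - w_t / a**2) < 1e-4
```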

7.3 The Laplace equation


The steady state temperature w(x, y) (independent of time) in the two dimensional xy-plane is governed
by

∂²w/∂x² + ∂²w/∂y² = 0, (7.32)

known as the Laplace equation. With the transformations x = r cos θ and y = r sin θ, the polar form of
(7.32) reads as

∂²w/∂r² + (1/r) ∂w/∂r + (1/r²) ∂²w/∂θ² = 0. (7.33)

For,

∂w/∂r = (∂w/∂x)(∂x/∂r) + (∂w/∂y)(∂y/∂r) = cos θ ∂w/∂x + sin θ ∂w/∂y,

∂²w/∂r² = cos²θ ∂²w/∂x² + 2 sin θ cos θ ∂²w/∂x∂y + sin²θ ∂²w/∂y²,

∂w/∂θ = (∂w/∂x)(∂x/∂θ) + (∂w/∂y)(∂y/∂θ) = −r sin θ ∂w/∂x + r cos θ ∂w/∂y,

∂²w/∂θ² = r² sin²θ ∂²w/∂x² − 2r² sin θ cos θ ∂²w/∂x∂y + r² cos²θ ∂²w/∂y² − r cos θ ∂w/∂x − r sin θ ∂w/∂y.

Substituting the values of ∂²w/∂r², ∂w/∂r and ∂²w/∂θ² into (7.33), we get (7.32).
Suppose the steady state temperature is given on the boundary of the unit circle r = 1, say w(1, θ) =
f(θ). Then the problem of finding the temperature at any point (r, θ) inside the circle is a Dirichlet
problem for the circle. Now we shall solve (7.33) subject to the condition

w(1, θ) = f(θ). (7.34)

Assume that (7.33) possesses a solution of the form

w(r, θ) = u(r)v(θ), (7.35)

where u(r) and v(θ) are to be determined. Plugging (7.35) into (7.33), we get

[r²u''(r) + ru'(r)]/u(r) = −v''(θ)/v(θ) = λ, (7.36)
where λ is some constant. This yields the following two equations:

v''(θ) + λv(θ) = 0, (7.37)

r²u''(r) + ru'(r) − λu(r) = 0. (7.38)

The non-trivial solution of (7.37) reads as

v_n(θ) = a_n cos nθ + b_n sin nθ, (7.39)

where λ = n²; a_n, b_n are constants such that both the terms on the right hand side of (7.39) do not vanish
together for n = 1, 2, 3, .... Let a_0/2 be the solution corresponding to n = 0.

Notice that (7.38) is a Cauchy-Euler DE with λ = n². So the substitution r = e^z transforms it to

d²u/dz² − n²u = 0. (7.40)

Solutions of this equation are

u(z) = c_1 + c_2 z for n = 0

and

u(z) = c_1 e^(nz) + c_2 e^(−nz) for n = 1, 2, 3, ...,

where c_1 and c_2 are constants. In terms of r, the solutions read as

u(r) = c_1 + c_2 ln r for n = 0

and

u(r) = c_1 r^n + c_2 r^(−n) for n = 1, 2, 3, ....

Since we are interested in solutions which are well defined inside the circle r = 1, we discard the term
c_2 ln r in the first solution because ln r is not finite at r = 0. Similarly, the second solution is acceptable
after discarding the term carrying r^(−n). Thus, for n = 1, 2, 3, ..., the solution of our interest is

u_n(r) = r^n. (7.41)

In view of (7.35), (7.39) and (7.41),

w_n(r, θ) = u_n(r)v_n(θ) = r^n (a_n cos nθ + b_n sin nθ), (7.42)

is a solution of (7.33) for n = 1, 2, .... It follows that

w(r, θ) = ∑_{n=1}^∞ r^n (a_n cos nθ + b_n sin nθ), (7.43)

is also a solution of (7.33). Since a_0/2 is also a solution of (7.33),

w(r, θ) = a_0/2 + ∑_{n=1}^∞ r^n (a_n cos nθ + b_n sin nθ), (7.44)

is also a solution of (7.33).


To determine a_0, a_n and b_n, we use the condition w(1, θ) = f(θ) given in (7.34). Then (7.44)
gives

f(θ) = a_0/2 + ∑_{n=1}^∞ (a_n cos nθ + b_n sin nθ). (7.45)

Notice that the series on the right hand side in (7.45) is the Fourier series of f(θ) in the interval [−π, π]. So
we have

a_n = (1/π) ∫_{−π}^{π} f(φ) cos nφ dφ, (n = 0, 1, 2, ...) (7.46)

b_n = (1/π) ∫_{−π}^{π} f(φ) sin nφ dφ, (n = 1, 2, 3, ...). (7.47)

Thus, (7.44) with a_n from (7.46) and b_n from (7.47) is the solution of (7.33) subject to the condition
(7.34), and the Dirichlet problem for the unit circle is solved.
Now substituting a_n from (7.46) and b_n from (7.47) into (7.44), we get

w(r, θ) = (1/π) ∫_{−π}^{π} f(φ) [1/2 + ∑_{n=1}^∞ r^n cos n(θ − φ)] dφ. (7.48)

Let α = θ − φ and z = re^(iα) = r(cos α + i sin α). Then we have

1/2 + ∑_{n=1}^∞ r^n cos nα = Re[1/2 + ∑_{n=1}^∞ z^n]

                           = Re[1/2 + z/(1 − z)]

                           = Re[(1 + z)/(2(1 − z))]

                           = Re[(1 + z)(1 − z̄)/(2|1 − z|²)]

                           = (1 − |z|²)/(2|1 − z|²)

                           = (1 − r²)/(2(1 − 2r cos α + r²)).
So (7.48) becomes
w(r, θ) = (1/2π) ∫_{−π}^{π} [(1 − r²)/(1 − 2r cos(θ − φ) + r²)] f(φ) dφ, (7.49)

known as the Poisson integral. It expresses the value of the harmonic function w(r, θ) at all points inside
the circle r = 1 in terms of its values on the circumference of the circle. In particular, at r = 0, we have
w(0, θ) = (1/2π) ∫_{−π}^{π} f(φ) dφ, (7.50)
2
which shows that the value of the harmonic function w at the center of the circle is the average of its
values on the circumference.
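The Poisson integral (7.49) can be tested numerically. The sketch below (illustrative; the midpoint quadrature and the boundary data f(φ) = cos φ are my choices) checks that the formula reproduces the known harmonic extension r cos θ, and that r = 0 gives the boundary average:

```python
import math

def poisson(r, theta, f, n=4000):
    # w(r,theta) = (1/(2 pi)) * int_{-pi}^{pi} (1 - r^2) /
    #              (1 - 2 r cos(theta - phi) + r^2) * f(phi) dphi
    h = 2.0 * math.pi / n
    total = 0.0
    for k in range(n):
        phi = -math.pi + (k + 0.5) * h          # midpoint rule
        kernel = (1.0 - r * r) / (1.0 - 2.0 * r * math.cos(theta - phi) + r * r)
        total += kernel * f(phi)
    return total * h / (2.0 * math.pi)

# f(phi) = cos(phi) on the boundary extends harmonically to w = r cos(theta)
print(round(poisson(0.5, 1.0, math.cos), 6))    # ≈ 0.5*cos(1) ≈ 0.270151
```

The midpoint rule converges very rapidly here because the integrand is smooth and 2π-periodic in φ.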

7.4 Sturm-Liouville Boundary Value Problem (SLBVP)


Let p(x) ≠ 0, p'(x), q(x) and r(x) be continuous functions on [a, b]. Then the DE

(d/dx)[p(x)y'] + [λq(x) + r(x)]y = 0, (7.51)

with the boundary conditions

c_1 y(a) + c_2 y'(a) = 0, (7.52)


Mathematics-III 56

and

d_1 y(b) + d_2 y'(b) = 0, (7.53)

where neither both c_1 and c_2 nor both d_1 and d_2 are zero, is called an SLBVP. We see that y = 0 is the trivial
solution of (7.51). The values of λ for which (7.51) has non-trivial solutions are known as its eigenvalues,
while the corresponding non-trivial solutions are known as eigenfunctions.

Ex. 7.4.1. Find the eigenvalues and eigenfunctions of the SLBVP

y'' + λy = 0, y(0) = 0, y(π) = 0.

Sol. 7.4.1. Eigenvalues are λ = n², where n is a positive integer.


Eigenfunctions are y_n = sin nx.
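For this SLBVP we have p(x) = 1 and q(x) = 1 on [0, π], so the orthogonality result proved next predicts ∫_0^π sin mx sin nx dx = 0 for m ≠ n. A quick numerical check (illustrative Python, trapezoidal rule):

```python
import math

def inner(m, n, n_pts=2000):
    # int_0^pi sin(m x) sin(n x) dx, trapezoidal rule
    # (endpoint contributions vanish, since sin 0 = sin(m*pi) = 0)
    h = math.pi / n_pts
    return h * sum(math.sin(m * k * h) * math.sin(n * k * h)
                   for k in range(1, n_pts))

print(inner(2, 3))   # distinct eigenfunctions: ≈ 0
print(inner(2, 2))   # same eigenfunction: ≈ pi/2
```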

7.4.1 Orthogonality of eigenfunctions


Consider the SLBVP given by (7.51), (7.52) and (7.53). If y_m and y_n are any two distinct eigenfunctions
corresponding to the eigenvalues λ_m and λ_n, then

∫_a^b q(x)y_m(x)y_n(x) dx = 0.

In other words, any two distinct eigenfunctions y_m and y_n of the SLBVP are orthogonal with respect to
the weight function q(x). Let us prove this result.
Since y_m and y_n are eigenfunctions corresponding to the eigenvalues λ_m and λ_n, we have

(p y_m')' + (λ_m q + r)y_m = 0 (7.54)

and

(p y_n')' + (λ_n q + r)y_n = 0. (7.55)

Multiplying (7.54) by y_n and (7.55) by y_m, and subtracting, we get

y_n(p y_m')' − y_m(p y_n')' + (λ_m − λ_n)q y_m y_n = 0. (7.56)

Moving the first two terms to the right hand side, and then integrating from a to b, we have

(λ_m − λ_n) ∫_a^b q y_m y_n dx = ∫_a^b y_m(p y_n')' dx − ∫_a^b y_n(p y_m')' dx

  = [y_m(p y_n')]_a^b − ∫_a^b y_m'(p y_n') dx − [y_n(p y_m')]_a^b + ∫_a^b y_n'(p y_m') dx

  = p(b)[y_m(b)y_n'(b) − y_n(b)y_m'(b)] − p(a)[y_m(a)y_n'(a) − y_n(a)y_m'(a)]

  = p(b)W(b) − p(a)W(a),

where W(x) = y_m(x)y_n'(x) − y_n(x)y_m'(x) is the Wronskian of y_m and y_n. Thus,

(λ_m − λ_n) ∫_a^b q y_m y_n dx = p(b)W(b) − p(a)W(a). (7.57)
a

Notice that the eigenfunctions y_m and y_n are particular solutions of the SLBVP given by (7.51), (7.52)
and (7.53). So we have

c_1 y_m(a) + c_2 y_m'(a) = 0, (7.58)

c_1 y_n(a) + c_2 y_n'(a) = 0, (7.59)

d_1 y_m(b) + d_2 y_m'(b) = 0, (7.60)

d_1 y_n(b) + d_2 y_n'(b) = 0. (7.61)

Now, by hypothesis, c_1 and c_2 are not both zero. So the homogeneous system given by (7.58) and (7.59)
has a non-trivial solution. It follows that y_m(a)y_n'(a) − y_n(a)y_m'(a) = W(a) must be zero. Likewise, (7.60)
and (7.61) lead to y_m(b)y_n'(b) − y_n(b)y_m'(b) = W(b) = 0. So (7.57) becomes

(λ_m − λ_n) ∫_a^b q y_m y_n dx = 0. (7.62)

Also, λ_m ≠ λ_n. So we get

∫_a^b q y_m y_n dx = 0, (7.63)

the desired result.

Remark 7.4.1. The orthogonality property of eigenfunctions can be used to write a given function as
a series expansion of eigenfunctions.

Remark 7.4.2. A DE in the form

(d/dx)[p(x)y'] + [λq(x) + r(x)]y = 0

is said to be in self-adjoint form.
Chapter 8

Some Special Functions

8.1 Legendre Polynomials


A DE of the form

(1 − x²)y'' − 2xy' + n(n + 1)y = 0, (8.1)

where n is a constant, is called Legendre's equation. We observe that x = 0 is an ordinary point of (8.1).
So there exists a series solution of the form
y = ∑_{k=0}^∞ a_k x^k = a_0 + a_1 x + a_2 x² + a_3 x³ + .... (8.2)

Substituting (8.2) into (8.1), we obtain


∑_{k=0}^∞ a_k k(k − 1)x^(k−2) + ∑_{k=0}^∞ a_k (n − k)(n + k + 1)x^k = 0. (8.3)

Comparing the coefficients of x^(k−2), we obtain

a_k k(k − 1) + a_(k−2)(n − k + 2)(n + k − 1) = 0,   or   a_k = −[(n − k + 2)(n + k − 1)/(k(k − 1))] a_(k−2).

∴ a_2 = −[n(n+1)/2!]a_0,  a_3 = −[(n−1)(n+2)/3!]a_1,  a_4 = [(n−2)n(n+1)(n+3)/4!]a_0,  a_5 = [(n−3)(n−1)(n+2)(n+4)/5!]a_1, ....

Substituting these values into (8.2), we obtain the general solution of (8.1) as

y = c_1 y_1 + c_2 y_2,

where

y_1 = a_0 [1 − (n(n + 1)/2!) x² + ((n − 2)n(n + 1)(n + 3)/4!) x⁴ − ...],

y_2 = a_1 [x − ((n − 1)(n + 2)/3!) x³ + ((n − 3)(n − 1)(n + 2)(n + 4)/5!) x⁵ − ...].

We observe that y_1 and y_2 are LI solutions of the Legendre equation (8.1), and these are analytic in the
range −1 < x < 1. However, the solutions most useful in applications are those bounded near x = 1.
Notice that x = 1 is a regular singular point of the Legendre equation (8.1). We use the transformation
t = (1 − x)/2, so that x = 1 corresponds to t = 0, and (8.1) transforms to the hypergeometric DE

t(1 − t)y'' + (1 − 2t)y' + n(n + 1)y = 0, (8.4)


where the primes denote derivatives with respect to t. Here, a = −n, b = n + 1 and c = 1. So the solution
of (8.4) in the neighbourhood of t = 0 is given by

y_1 = F(−n, n + 1, 1, t). (8.5)

The other LI solution can be found as

y_2 = y_1 ∫ (1/y_1²) e^(−∫P dt) dt = y_1 (ln t + a_1 t + ...). (8.6)

However, this solution is not bounded near t = 0. So any solution of (8.4) bounded near t = 0 is a constant
multiple of y_1. Consequently, the constant multiples of F(−n, n + 1, 1, (1 − x)/2) are the solutions of (8.1)
which are bounded near x = 1.
If n is a non-negative integer, then F(−n, n + 1, 1, (1 − x)/2) defines a polynomial of degree n known
as the Legendre polynomial, denoted by P_n(x). Therefore,

P_n(x) = F(−n, n + 1, 1, (1 − x)/2)
       = 1 + [n(n + 1)/(1!)²]·(x − 1)/2 + [(n − 1)n(n + 1)(n + 2)/(2!)²]·(x − 1)²/2² + ... + [(2n)!/(n!)²]·(x − 1)^n/2^n.

Notice that P_n(1) = 1 for all n. Next, after a sequence of algebraic manipulations, we can obtain

P_n(x) = (1/(2^n n!)) d^n/dx^n [(x² − 1)^n],
known as Rodrigues formula. The following theorem provides the alternative approach to obtain the
Rodrigues formula.

Theorem 8.1.1. (Rodrigues Formula)

Prove that P_n(x) = (1/(2^n n!)) d^n/dx^n [(x² − 1)^n].
Proof. Let v = (x² − 1)^n. Then we have

v_1 = 2nx(x² − 1)^(n−1), where v_1 = dv/dx.

⟹ (x² − 1)v_1 = 2nx(x² − 1)^n.
⟹ (1 − x²)v_1 + 2nxv = 0.

Differentiating n + 1 times with respect to x using the Leibnitz theorem, we get

(1 − x²)v_(n+2) + (n + 1)(−2x)v_(n+1) + ((n + 1)n/2!)(−2)v_n + 2n[xv_(n+1) + (n + 1)v_n] = 0.

⟹ (1 − x²)v_n'' − 2xv_n' + n(n + 1)v_n = 0.
This shows that cv_n (c an arbitrary constant) is a solution of the Legendre equation (8.1). Also, cv_n
is a polynomial of degree n. But we know that the nth degree polynomial P_n(x) is a solution of the
Legendre equation. It follows that

P_n(x) = cv_n = c d^n/dx^n [(x² − 1)^n]. (8.7)

To find c, we put x = 1 into (8.7) to get

P_n(1) = c [d^n/dx^n (x² − 1)^n]_(x=1).


⟹ 1 = c [d^n/dx^n ((x − 1)^n (x + 1)^n)]_(x=1) = c[n!(x + 1)^n + terms containing the factor (x − 1)]_(x=1).

⟹ 1 = c·n!·2^n,   or   c = 1/(n!·2^n).

Thus, (8.7) becomes

P_n(x) = (1/(2^n n!)) d^n/dx^n [(x² − 1)^n].
This completes the proof.

Remark 8.1.1. Using the Rodrigues formula, we get

P_0(x) = 1, P_1(x) = x, P_2(x) = (3x² − 1)/2, P_3(x) = (5x³ − 3x)/2, P_4(x) = (35x⁴ − 30x² + 3)/8, etc.

Ex. 8.1.1. Express the polynomial x⁴ + 3x³ − x² + 5x − 2 in terms of Legendre polynomials.

Sol. 8.1.1. Since P_4(x) = (35x⁴ − 30x² + 3)/8, we have x⁴ = (8/35)P_4(x) + (6/7)x² − 3/35. Similarly, x³ = (2/5)P_3(x) + (3/5)x,
x² = (2/3)P_2(x) + 1/3, x = P_1(x), 1 = P_0(x). Using all these, we get

x⁴ + 3x³ − x² + 5x − 2 = (8/35)P_4(x) + (6/5)P_3(x) − (2/21)P_2(x) + (34/5)P_1(x) − (224/105)P_0(x).

Orthogonality of Legendre Polynomials


Ex. 8.1.2. Show that ∫_{−1}^{1} P_m(x)P_n(x) dx = 0 for m ≠ n.

Sol. 8.1.2. We know that y = Pm (x) is a solution of

(1 − x²)y'' − 2xy' + m(m + 1)y = 0, (8.8)

and z = P_n(x) is a solution of

(1 − x²)z'' − 2xz' + n(n + 1)z = 0. (8.9)

Multiplying (8.8) by z and (8.9) by y, and subtracting, we get

(1 − x²)(y''z − yz'') − 2x(y'z − yz') + [m(m + 1) − n(n + 1)]yz = 0.

⟹ (d/dx)[(1 − x²)(y'z − yz')] + (m − n)(m + n + 1)yz = 0. (8.10)

Integrating (8.10) from −1 to 1, we have

(m − n)(m + n + 1) ∫_{−1}^{1} yz dx = 0.

Also, m ≠ n. So it gives

∫_{−1}^{1} P_m(x)P_n(x) dx = 0.

This is known as the orthogonality property of Legendre polynomials.


Ex. 8.1.3. Show that ∫_{−1}^{1} P_n²(x) dx = 2/(2n + 1).

Sol. 8.1.3. The Rodrigues formula is

P_n(x) = (1/(2^n n!)) d^n/dx^n [(x² − 1)^n] = (1/(2^n n!)) D^n(x² − 1)^n.

Therefore, we have

(2^n n!)² ∫_{−1}^{1} P_n²(x) dx = ∫_{−1}^{1} D^n(x² − 1)^n D^n(x² − 1)^n dx

  = [D^n(x² − 1)^n D^(n−1)(x² − 1)^n]_{x=−1}^{x=1} − ∫_{−1}^{1} D^(n+1)(x² − 1)^n D^(n−1)(x² − 1)^n dx

  = 0 − ∫_{−1}^{1} D^(n+1)(x² − 1)^n D^(n−1)(x² − 1)^n dx

  = (−1)^n ∫_{−1}^{1} D^(2n)(x² − 1)^n (x² − 1)^n dx   (integrating (n − 1) times more)

  = (−1)^n (2n)! ∫_{−1}^{1} (x² − 1)^n dx   (put x = sin θ)

  = 2(2n)! ∫_0^{π/2} cos^(2n+1) θ dθ

  = 2(2n)! · [2n(2n − 2)···4·2]/[(2n + 1)(2n − 1)···3·1]

  = 2(2n)! · [2n(2n − 2)···4·2]²/(2n + 1)!

  = (2^n n!)² · 2/(2n + 1).

∴ ∫_{−1}^{1} P_n²(x) dx = 2/(2n + 1).
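Both results (Ex. 8.1.2 and Ex. 8.1.3) can be verified numerically. The sketch below builds P_n from Bonnet's recurrence nP_n = (2n − 1)xP_(n−1) − (n − 1)P_(n−2) (Ex. 8.1.5 below) and applies Simpson's rule; all names are illustrative:

```python
import math

def legendre(n, x):
    # Bonnet recurrence: k*P_k = (2k-1)*x*P_{k-1} - (k-1)*P_{k-2}
    p_prev, p_curr = 1.0, x
    if n == 0:
        return p_prev
    for k in range(2, n + 1):
        p_prev, p_curr = p_curr, ((2 * k - 1) * x * p_curr - (k - 1) * p_prev) / k
    return p_curr

def pm_pn_integral(m, n, steps=1000):
    # Simpson's rule for int_{-1}^{1} P_m(x) P_n(x) dx (steps must be even)
    h = 2.0 / steps
    s = 0.0
    for k in range(steps + 1):
        x = -1.0 + k * h
        w = 1 if k in (0, steps) else (4 if k % 2 else 2)
        s += w * legendre(m, x) * legendre(n, x)
    return s * h / 3.0

print(pm_pn_integral(2, 3))   # orthogonality: ≈ 0
print(pm_pn_integral(3, 3))   # ≈ 2/(2*3+1) = 2/7
```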

Legendre Series
Let f(x) be a function defined from x = −1 to x = 1. Then we can write

f(x) = ∑_{n=0}^∞ c_n P_n(x), (8.11)

where the c_n's are constants to be determined. Multiplying both sides of (8.11) by P_n(x) and integrating
from −1 to 1, we get

∫_{−1}^{1} f(x)P_n(x) dx = c_n ∫_{−1}^{1} P_n²(x) dx = c_n · 2/(2n + 1).

⟹ c_n = ((2n + 1)/2) ∫_{−1}^{1} f(x)P_n(x) dx.

Using the values of c_n in (8.11), we get the expansion of f(x) in terms of Legendre polynomials, known
as the Legendre series of f(x).
Ex. 8.1.4. If f(x) = x for 0 < x < 1 and 0 otherwise, then show that f(x) = (1/4)P_0(x) + (1/2)P_1(x) + (5/16)P_2(x) + ....

Sol. 8.1.4. c_0 = (1/2) ∫_{−1}^{1} f(x)P_0(x) dx = (1/2) ∫_0^1 x·1 dx = 1/4, etc.

Ex. 8.1.5. Prove that (1 − 2xt + t²)^(−1/2) = ∑_{n=0}^∞ t^n P_n(x), and
hence prove the recurrence relation nP_n(x) = (2n − 1)xP_(n−1)(x) − (n − 1)P_(n−2)(x).

Sol. 8.1.5. Please try yourself.

Note: The function (1 − 2xt + t²)^(−1/2) is called the generating function of the Legendre polynomials. Note
that the Legendre polynomial P_n(x) appears as the coefficient of t^n in the expansion of the function
(1 − 2xt + t²)^(−1/2).

8.2 Gamma Function


The gamma function is defined as

Γ(n) = ∫_0^∞ e^(−x) x^(n−1) dx, (n > 0). (8.12)

The condition n > 0 is necessary in order to guarantee the convergence of the integral.
Note that Γ(1) = ∫_0^∞ e^(−x) dx = 1.
Next, we have
Γ(n + 1) = ∫_0^∞ e^(−x) x^n dx = [−x^n e^(−x)]_0^∞ + n ∫_0^∞ e^(−x) x^(n−1) dx = nΓ(n).

∴ Γ(n + 1) = nΓ(n).
It is the recurrence relation for gamma function. Using this relation recursively, we have

Γ(2) = 1·Γ(1) = 1,

Γ(3) = 2·Γ(2) = 2·1 = 2!,

Γ(4) = 3·Γ(3) = 3·2! = 3!,
......
Γ(n + 1) = n·Γ(n) = n·(n − 1)! = n!.

Thus, Γ(n + 1) takes the factorial values for positive integer values of n. It can be proved that Γ(1/2) = √π.
For,

Γ(1/2) = ∫_0^∞ e^(−t) t^(−1/2) dt = 2 ∫_0^∞ e^(−x²) dx, where t^(1/2) = x.

[Γ(1/2)]² = [2 ∫_0^∞ e^(−x²) dx][2 ∫_0^∞ e^(−y²) dy]

          = 4 ∫_0^∞ ∫_0^∞ e^(−(x²+y²)) dx dy

          = 4 ∫_0^{π/2} ∫_0^∞ e^(−r²) r dr dθ,   (x = r cos θ, y = r sin θ)

          = π.

Having known the precise value of Γ(1/2), we can calculate the values of the gamma function at positive
fractions with denominator 2. For instance,

Γ(7/2) = (5/2)·(3/2)·(1/2)·Γ(1/2) = (5/2)·(3/2)·(1/2)·√π.

For values of the gamma function at positive fractions with denominator different from 2, we have to rely
upon numerically approximated values of the integral defining the gamma function.
Note that Γ(n) given by (8.12) is not defined for n ≤ 0. We extend the definition of the gamma function
by the relation

Γ(n) = Γ(n + 1)/n. (8.13)

Then Γ(n) is defined for all n except when n is a non-positive integer. If we agree Γ(n) to be ∞ for
non-positive integer values of n, then 1/Γ(n) is defined for all n. Such an agreement is useful while dealing
with Bessel functions. The gamma function is, thus, defined as

Γ(n) = ∫_0^∞ e^(−x) x^(n−1) dx,   n > 0,

Γ(n) = Γ(n + 1)/n,   n < 0 but not an integer,

Γ(n) = ∞,   n = 0, −1, −2, ....

Note that the gamma function generalizes the concept of factorial from non-negative integers to real
numbers via the formula

n! = Γ(n + 1).
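Python's standard library exposes this function as `math.gamma`, which makes the values derived above easy to confirm:

```python
import math

print(math.gamma(0.5))   # sqrt(pi) ≈ 1.7724538509
print(math.gamma(3.5))   # Γ(7/2) = (5/2)(3/2)(1/2)*sqrt(pi) ≈ 3.3233509704
print(math.gamma(6))     # Γ(n+1) = n!, here 5! = 120
# at the poles n = 0, -1, -2, ... math.gamma raises ValueError,
# consistent with the convention Γ(n) = ∞ there
```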

8.3 Bessel Functions


The DE

x²y'' + xy' + (x² − p²)y = 0, (8.14)

where p is a non-negative constant, is called Bessel's DE. We see that x = 0 is a regular singular point of
(8.14). So there exists at least one Frobenius series solution of the form

y = ∑_{n=0}^∞ a_n x^(n+r), (a_0 ≠ 0). (8.15)

Using (8.15) in (8.14), we get

∑_{n=0}^∞ a_n[(n + r)² − p²]x^(n+r) + ∑_{n=0}^∞ a_n x^(n+r+2) = 0. (8.16)

Equating to 0 the coefficient of x^r, the lowest degree term in x, we obtain

a_0(r² − p²) = 0   or   r² − p² = 0.

Therefore, the roots of the indicial equation are r = p, −p.



Next, equating to 0 the coefficient of x^(r+1), we find

a_1[(r + 1)² − p²] = 0   or   a_1 = 0 for r = p.

Now equating to 0 the coefficient of x^(n+r), we have the recurrence relation

a_n = −a_(n−2)/[(n + r)² − p²], (8.17)

where n = 2, 3, 4, ....
For r = p, we get the solution in the form

y = a_0 x^p ∑_{n=0}^∞ (−1)^n (x/2)^(2n)/[n!(p + 1)(p + 2)···(p + n)]. (8.18)

The Bessel function of the first kind of order p, denoted by J_p(x), is defined by putting a_0 = 1/(2^p Γ(p + 1)) in (8.18),
so that

J_p(x) = ∑_{n=0}^∞ (−1)^n (x/2)^(2n+p)/[n! Γ(n + p + 1)], (8.19)
n=0

which is well defined for all real values of p in accordance with the definition of gamma function.
Figure 8.1: Plots of J_0(x) (blue curve) and J_1(x) (red curve) on 0 ≤ x ≤ 10.

From the applications point of view, the most useful Bessel functions are those of order 0 and 1, given by

J_0(x) = 1 − x²/2² + x⁴/(2²·4²) − x⁶/(2²·4²·6²) + ...,

J_1(x) = x/2 − (1/(1!2!))(x/2)³ + (1/(2!3!))(x/2)⁵ − ....

Plots of J_0(x) (blue curve) and J_1(x) (red curve) are shown in Figure 8.1. It may be seen that J_0(x) and
J_1(x) vanish alternately, and each has infinitely many zeros on the positive x-axis. Later, we shall show that J_0'(x) = −J_1(x).
Thus, J_0(x) and J_1(x) behave just like cos x and sin x. This analogy may also be observed from the fact
that the normal form of Bessel's DE (8.14), given by

u'' + (1 + (1 − 4p²)/(4x²)) u = 0,

behaves as

u'' + u = 0

for large values of x, with solutions cos x and sin x. It means J_0(x) and J_1(x) behave more and more like
cos x and sin x for larger values of x.

8.3.1 Second solution of Bessel's DE

To obtain a second solution, it is natural to try the second root r = −p of the indicial equation. We assume
that p is not an integer; otherwise the difference of the indicial roots p and −p would be the even integer 2p.
For r = −p, the equation a_1[(r + 1)² − p²] = 0 becomes a_1(1 − 2p) = 0, which leaves a_1 arbitrary
for p = 1/2. So there is no compulsion to choose a_1 = 0. However, we fix a_1 = 0; after all, we are interested
in a particular solution. Also, for r = −p, the recurrence relation (8.17) reduces to

a_n = −a_(n−2)/[(n − p)² − p²] = −a_(n−2)/[n(n − 2p)],

where n = 2, 3, 4, ....
For n = 3, we get 3(3 − 2p)a_3 = −a_1 = 0. This leaves a_3 arbitrary for p = 3/2. We choose a_3 = 0. Likewise, we
choose a_5 = 0, a_7 = 0, ... for the sake of a particular solution, and thus obtain the following particular
solution of (8.14):

J_(−p)(x) = ∑_{n=0}^∞ (−1)^n (x/2)^(2n−p)/[n! Γ(n − p + 1)], (8.20)

which is the same as (8.19) with p replaced by −p.
Notice that J_p(x) and J_(−p)(x) are LI, since J_p(x) is bounded near x = 0 while J_(−p)(x) is not. Thus,
when p is not an integer, the general solution of Bessel's equation (8.14) is

y = c_1 J_p(x) + c_2 J_(−p)(x). (8.21)
Now let us see what happens when p is a non-negative integer, say m. We have

J_(−m)(x) = ∑_{n=0}^∞ (−1)^n (x/2)^(2n−m)/[n!(−m + n)!]

          = ∑_{n=m}^∞ (−1)^n (x/2)^(2n−m)/[n!(−m + n)!],   since 1/(−m + n)! = 0 for n = 0, 1, 2, ..., m − 1,

          = ∑_{n=0}^∞ (−1)^(n+m) (x/2)^(2n+m)/[(n + m)! n!]   (replacing the dummy variable n by n + m)

          = (−1)^m ∑_{n=0}^∞ (−1)^n (x/2)^(2n+m)/[n!(m + n)!]

          = (−1)^m J_m(x).

This shows that J_p(x) and J_(−p)(x) are not LI when p is an integer.
When p is not an integer, any function of the form (8.21) with c_2 ≠ 0 is a Bessel function of the second
kind. The standard Bessel function of the second kind is defined as

Y_p(x) = [J_p(x) cos pπ − J_(−p)(x)]/sin pπ. (8.22)

One can write (8.21) in the equivalent form

y = c_1 J_p(x) + c_2 Y_p(x), (8.23)

which is the general solution of (8.14) when p is not an integer. One may observe that Y_p(x) is not defined
when p is an integer, say m. However, it can be shown that

Y_m(x) = lim_{p→m} Y_p(x)

exists, and it is taken as the Bessel function of the second kind. Thus, it follows that (8.23) is the general solution
of Bessel's equation (8.14) in all cases. It is found that Y_p(x) is not bounded near x = 0 for p ≥ 0.
Accordingly, if we are interested in solutions of Bessel's equation near x = 0, which is often the case in
applications, then we must take c_2 = 0 in (8.23).

8.3.2 Properties of Bessel Functions


It is easy to prove the following:

(1) (d/dx)[x^p J_p(x)] = x^p J_(p−1)(x).

(2) (d/dx)[x^(−p) J_p(x)] = −x^(−p) J_(p+1)(x).

(3) J_p'(x) = (1/2)[J_(p−1)(x) − J_(p+1)(x)].

(4) J_(p+1)(x) = (2p/x)J_p(x) − J_(p−1)(x).

From (1), we have ∫ x^p J_(p−1)(x) dx = x^p J_p(x) + C.
Similarly, (2) gives ∫ x^(−p) J_(p+1)(x) dx = −x^(−p) J_p(x) + C.
Also, notice that (4) is the recurrence relation for the Bessel functions. From the definition of the Bessel function, it
can be shown that

J_(1/2)(x) = √(2/(πx)) sin x,   J_(−1/2)(x) = √(2/(πx)) cos x.

So by property (4),

J_(3/2)(x) = (1/x)J_(1/2)(x) − J_(−1/2)(x) = √(2/(πx)) (sin x/x − cos x).

Again, by property (4),

J_(−3/2)(x) = −(1/x)J_(−1/2)(x) − J_(1/2)(x) = √(2/(πx)) (−cos x/x − sin x).

Thus, every Bessel function J_(m+1/2)(x), where m is any integer, is elementary, as it is expressible in terms
of elementary functions.
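The series (8.19) converges fast enough to evaluate directly, which lets us check the closed forms for J_(1/2) and J_(−1/2) and the recurrence (4) numerically. A sketch (names and the truncation level are my choices):

```python
import math

def bessel_j(p, x, terms=40):
    # J_p(x) = sum_{n>=0} (-1)^n (x/2)^(2n+p) / (n! * Γ(n+p+1)), for x > 0
    return sum((-1) ** n * (x / 2.0) ** (2 * n + p)
               / (math.factorial(n) * math.gamma(n + p + 1))
               for n in range(terms))

x = 1.0
print(bessel_j(0.5, x))                               # series value
print(math.sqrt(2.0 / (math.pi * x)) * math.sin(x))   # closed form sqrt(2/(pi x)) sin x
```

The two printed values agree to machine precision, and the recurrence J_(p+1) = (2p/x)J_p − J_(p−1) holds between any three consecutive half-integer orders.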

8.4 Orthogonal properties of Bessel functions


If λ_m and λ_n are positive zeros of J_p(x), then

∫_0^1 x J_p(λ_m x) J_p(λ_n x) dx = 0 for m ≠ n,   and   = (1/2)J²_(p+1)(λ_m) for m = n.

Proof. Since y = J_p(x) is a solution of

y'' + (1/x)y' + (1 − p²/x²)y = 0,

it follows that u(x) = J_p(λ_m x) and v(x) = J_p(λ_n x) satisfy the equations

u'' + (1/x)u' + (λ_m² − p²/x²)u = 0, (8.24)

v'' + (1/x)v' + (λ_n² − p²/x²)v = 0. (8.25)

Multiplying (8.24) by v and (8.25) by u, and subtracting the resulting equations, we obtain

(d/dx)(u'v − v'u) + (1/x)(u'v − v'u) = (λ_n² − λ_m²)uv.

After multiplication by x, it becomes

(d/dx)[x(u'v − v'u)] = (λ_n² − λ_m²)xuv.

Now, integrating with respect to x from 0 to 1, we have

(λ_n² − λ_m²) ∫_0^1 xuv dx = [x(u'v − v'u)]_0^1 = 0,

since u(1) = J_p(λ_m) = 0 and v(1) = J_p(λ_n) = 0.

∴ ∫_0^1 x J_p(λ_m x) J_p(λ_n x) dx = 0, (m ≠ n).

Next, we consider the case m = n. Multiplying (8.24) by 2x²u', we get

2x²u'u'' + 2xu'² + 2λ_m²x²uu' − 2p²uu' = 0.

⟹ (d/dx)[x²u'² + λ_m²x²u² − p²u²] = 2λ_m²xu².

Integrating from 0 to 1 with respect to x, we get

2λ_m² ∫_0^1 xu² dx = [x²u'² + λ_m²x²u² − p²u²]_0^1 = λ_m²J_p'²(λ_m) + (λ_m² − p²)J_p²(λ_m) = λ_m²J_p'²(λ_m),

since J_p(λ_m) = 0. (Notice that u(0) = J_p(0) = 0 for p > 0, so p²u²(0) = 0 for p ≥ 0.)

∴ ∫_0^1 x J_p²(λ_m x) dx = (1/2)J_p'²(λ_m) = (1/2)J²_(p+1)(λ_m).

For, (d/dx)[x^(−p) J_p(x)] = −x^(−p) J_(p+1)(x) leads to

J_p'(x) = (p/x)J_p(x) − J_(p+1)(x),

∴ J_p'(λ_m) = (p/λ_m)J_p(λ_m) − J_(p+1)(λ_m) = −J_(p+1)(λ_m).

8.4.1 Fourier-Bessel Series


In mathematical physics, it is often necessary to expand a given function in terms of Bessel functions.
The simplest and most useful expansions are of the form

f(x) = ∑_{n=1}^∞ a_n J_p(λ_n x) = a_1 J_p(λ_1 x) + a_2 J_p(λ_2 x) + ..., (8.26)

where f(x) is defined on the interval 0 ≤ x ≤ 1 and the λ_n are the positive zeros of some fixed Bessel function
J_p(x) with p ≥ 0. Now multiplying (8.26) by xJ_p(λ_n x) and integrating from x = 0 to x = 1, we get

∫_0^1 x f(x) J_p(λ_n x) dx = (1/2) a_n J²_(p+1)(λ_n),

which gives

a_n = (2/J²_(p+1)(λ_n)) ∫_0^1 x f(x) J_p(λ_n x) dx.

Ex. 8.4.1. Express f(x) = 1 in terms of the functions J_0(λ_n x).

Sol. 8.4.1. We have

a_n = (2/J_1²(λ_n)) ∫_0^1 x f(x) J_0(λ_n x) dx

    = (2/J_1²(λ_n)) [(1/λ_n) x J_1(λ_n x)]_0^1,   ∵ (d/dx)[xJ_1(x)] = xJ_0(x)

    = (2/J_1²(λ_n)) · J_1(λ_n)/λ_n

    = 2/(λ_n J_1(λ_n)).

So the required Fourier-Bessel series is

1 = ∑_{n=1}^∞ [2/(λ_n J_1(λ_n))] J_0(λ_n x).

Convergence of Fourier-Bessel Series


Assume that f(x) and f'(x) have at most a finite number of jump discontinuities on [0, 1]. If x ∈ (0, 1), then
the Bessel series converges to f(x) when x is a point of continuity of f(x), and converges to (1/2)[f(x−) +
f(x+)] when x is a point of discontinuity. At x = 1, the series converges to 0 regardless of the nature of
f(x), because every J_p(λ_n) = 0. At x = 0, the series converges to 0 if p > 0 and to f(0+) if p = 0.
Chapter 9

Laplace Transforms

9.1 Definitions of Laplace and inverse Laplace transforms


Let f(x) be a function defined on a finite or infinite interval a ≤ x ≤ b. If we choose a fixed function
K(p, x) of the variable x and a parameter p, then the general integral transformation is defined as

T[f(x)] = ∫_a^b K(p, x)f(x) dx. (9.1)

The function K(p, x) is called the kernel of T. In particular, if a = 0, b = ∞ and K(p, x) = e^(−px), then (9.1)
is called the Laplace transform of f(x) and is denoted by L[f(x)].

∴ L[f(x)] = ∫_0^∞ e^(−px) f(x) dx = F(p).

It may be noted that L is linear. For,

L[f(x) + g(x)] = ∫_0^∞ e^(−px)[f(x) + g(x)] dx
               = ∫_0^∞ e^(−px) f(x) dx + ∫_0^∞ e^(−px) g(x) dx
               = L[f(x)] + L[g(x)].

Further, if L[f(x)] = F(p), then f(x) is called the inverse Laplace transform of F(p) and is denoted by L⁻¹[F(p)].

∴ L⁻¹[F(p)] = f(x).
Remark: The Laplace transform of a(x), that is, ∫_0^∞ e^(−px) a(x) dx, is the integral analog of the power
series ∑_{n=0}^∞ a(n)x^n with x = e^(−p).

9.2 Laplace transforms of some elementary functions


It would be useful to memorize the following formulas related to Laplace and inverse Laplace transforms
of elementary functions.
(1) L[1] = ∫_0^∞ e^(−px)·1 dx = 1/p (p > 0).                   L⁻¹[1/p] = 1.

(2) L[e^(ax)] = ∫_0^∞ e^(−px) e^(ax) dx = 1/(p − a) (p > a).   L⁻¹[1/(p − a)] = e^(ax).

(3) L[x^n] = ∫_0^∞ e^(−px) x^n dx = Γ(n + 1)/p^(n+1) (p > 0).  L⁻¹[1/p^(n+1)] = x^n/Γ(n + 1).

(4) L[sin ax] = L[(e^(iax) − e^(−iax))/2i] = a/(p² + a²).      L⁻¹[1/(p² + a²)] = (1/a) sin ax.

(5) L[cos ax] = L[(e^(iax) + e^(−iax))/2] = p/(p² + a²).       L⁻¹[p/(p² + a²)] = cos ax.

(6) L[sinh ax] = L[(e^(ax) − e^(−ax))/2] = a/(p² − a²).        L⁻¹[1/(p² − a²)] = (1/a) sinh ax.

(7) L[cosh ax] = L[(e^(ax) + e^(−ax))/2] = p/(p² − a²).        L⁻¹[p/(p² − a²)] = cosh ax.

Ex. 9.2.1. Find L[sin² x] and L[4 sin x cos x + e^(−x)].

Sol. 9.2.1.

L[sin² x] = L[(1 − cos 2x)/2] = (1/2)[1/p − p/(p² + 4)].

L[4 sin x cos x + e^(−x)] = L[2 sin 2x + e^(−x)] = 4/(p² + 4) + 1/(p + 1).

Ex. 9.2.2. Find L⁻¹[1/(p² + 2)] and L⁻¹[1/(p⁴ + p²)].

Sol. 9.2.2.

L⁻¹[1/(p² + 2)] = (1/√2) sin √2 x.

L⁻¹[1/(p⁴ + p²)] = L⁻¹[1/p² − 1/(p² + 1)] = x − sin x.

9.3 Sufficient conditions for the existence of Laplace transform


Let f(x) be a piecewise continuous function for x ≥ 0, and let there exist constants M and c such that
|f(x)| ≤ Me^(cx). Then L[f(x)] exists for p > c.
For, we have

|F(p)| = |∫_0^∞ e^(−px) f(x) dx| ≤ ∫_0^∞ e^(−px)|f(x)| dx ≤ M ∫_0^∞ e^(−(p−c)x) dx = M/(p − c), (p > c). (9.2)

The above conditions are not necessary. Consider the function f(x) = x^(−1/2). This function is not
piecewise continuous on [0, b] for any positive real number b, since it has an infinite discontinuity at x = 0.
But L[x^(−1/2)] = Γ(1/2)/p^(1/2) = √(π/p) exists for p > 0.
Further, from (9.2), we see that lim_{p→∞} F(p) = 0. It is true even if the function is not piecewise continuous
or of exponential order. So if lim_{p→∞} φ(p) ≠ 0, then φ(p) can not be the Laplace transform of any function. For
example, L⁻¹[p], L⁻¹[cos p], L⁻¹[log p], etc. do not exist.

9.4 Some more Laplace transform formulas


9.4.1 Laplace transform of a function multiplied by e^(ax)
If L[f(x)] = F(p), then L[e^(ax) f(x)] = F(p − a). (Shifting formula)
For, L[e^(ax) f(x)] = ∫_0^∞ e^(−px) e^(ax) f(x) dx = ∫_0^∞ e^(−(p−a)x) f(x) dx = F(p − a).

Ex. 9.4.1. Use the shifting formula to evaluate L[e^(2x) sin x].

Sol. 9.4.1. Since L[sin x] = 1/(p² + 1), by the shifting formula

L[e^(2x) sin x] = 1/((p − 2)² + 1).

9.4.2 Laplace transform of derivatives of a function


If L[f(x)] = F(p), then

L[f'(x)] = pF(p) − f(0).

For, L[f'(x)] = ∫_0^∞ e^(−px) f'(x) dx = [f(x)e^(−px)]_0^∞ + p ∫_0^∞ e^(−px) f(x) dx = pF(p) − f(0).

Likewise, we can show that

L[f''(x)] = p²F(p) − pf(0) − f'(0).

In general,

L[f^(n)(x)] = p^n F(p) − p^(n−1) f(0) − p^(n−2) f'(0) − ... − f^(n−1)(0).

Ex. 9.4.2. Find the Laplace transform of cos x considering that it is the derivative of sin x.

Sol. 9.4.2. Here f(x) = sin x and F(p) = L[sin x] = 1/(p² + 1).

∴ L[cos x] = pF(p) − f(0) = p/(p² + 1).

9.4.3 Laplace transform of integral of a function


If L[f(x)] = F(p), then L[∫_0^x f(t) dt] = F(p)/p.
For, let g(x) = ∫_0^x f(t) dt, so that g'(x) = f(x) and pL[g(x)] − g(0) = F(p), where g(0) = 0.

The above result is quite useful in the form

L⁻¹[F(p)/p] = ∫_0^x f(t) dt.

Ex. 9.4.3. Find L⁻¹[1/(p(p² + 1))].

Sol. 9.4.3. Since L⁻¹[1/(p² + 1)] = sin x, we have

L⁻¹[1/(p(p² + 1))] = ∫_0^x sin t dt = 1 − cos x.

9.4.4 Laplace transform of a function multiplied by x


If L[f(x)] = F(p), then L[xf(x)] = (−1)F'(p), where the prime stands for derivative with respect to p.
For, ∫_0^∞ e^(−px) f(x) dx = F(p).

Differentiating both sides with respect to p, we get

∫_0^∞ (−x)e^(−px) f(x) dx = F'(p),

or ∫_0^∞ e^(−px)[xf(x)] dx = (−1)F'(p).

Likewise, we can show that

L[x²f(x)] = (−1)²F''(p).

In general,

L[x^n f(x)] = (−1)^n F^(n)(p).

Ex. 9.4.4. Find L[x sin x].

Sol. 9.4.4. We know L[sin x] = 1/(p² + 1).

∴ L[x sin x] = (−1) d/dp [1/(p² + 1)] = 2p/(p² + 1)².

9.4.5 Laplace transform of a function divided by x


If L[f(x)] = F(p), then L[f(x)/x] = ∫_p^∞ F(t) dt.

For, let g(x) = f(x)/x, so that xg(x) = f(x) and (−1)G'(p) = F(p), which on integrating from p to ∞ gives
the desired result, noting that G(∞) = 0, G(p) being the Laplace transform of g(x).

Ex. 9.4.5. Find L[sin x / x] and hence show that ∫_0^∞ (sin x / x) dx = π/2.

Sol. 9.4.5. We know L[sin x] = 1/(p² + 1).

∴ L[sin x / x] = ∫_p^∞ dt/(t² + 1) = [tan⁻¹ t]_p^∞ = π/2 − tan⁻¹ p = cot⁻¹ p.

Now,

∫_0^∞ e^(−px) (sin x / x) dx = L[sin x / x] = π/2 − tan⁻¹ p.

Choosing p = 0, we get ∫_0^∞ (sin x / x) dx = π/2.

Ex. 9.4.6. Show that L[cos x / x] does not exist.

Sol. 9.4.6. Please try yourself.


Ex. 9.4.7. Find L⁻¹[(p + 7)/(p² + 2p + 5)].

Sol. 9.4.7. Please try yourself by completing the square in the denominator.

Ex. 9.4.8. Find L⁻¹[(2p² − 6p + 5)/(p³ − 6p² + 11p − 6)].

Sol. 9.4.8. Please try yourself by making partial fractions.

Ex. 9.4.9. Find L⁻¹[log((p + 1)/(p − 1))].

Sol. 9.4.9. Please try yourself by letting

L[f(x)] = log((p + 1)/(p − 1))

so that

L[xf(x)] = 2/(p² − 1).

Ex. 9.4.10. Show that L⁻¹[p/(p² − a²)²] = (1/(2a)) x sinh ax.

Sol. 9.4.10. Please try yourself.

Ex. 9.4.11. Find L⁻¹[p/(p⁴ + p² + 1)].

Sol. 9.4.11. Please try yourself by using

p/(p⁴ + p² + 1) = (1/2)[1/(p² − p + 1) − 1/(p² + p + 1)].

9.5 Solution of DE using Laplace transform


To solve a DE, first take Laplace transform of both sides, find L[y] and finally take inverse Laplace
transform to obtain the solution y, as illustrated in the following examples.

Ex. 9.5.1. Solve y' − y = 0.

Sol. 9.5.1. Taking the Laplace transform of both sides, we get

pL[y] − y(0) − L[y] = 0.

Letting y(0) = c and solving for L[y], we have

L[y] = c/(p − 1).

Now taking the inverse Laplace transform, we get

y = cL⁻¹[1/(p − 1)] = ce^x.

Ex. 9.5.2. Solve y'' + y = 0.



Sol. 9.5.2. Taking the Laplace transform of both sides, we get

p²L[y] − py(0) − y'(0) + L[y] = 0.

Letting y(0) = c_1, y'(0) = c_2 and solving for L[y], we have

L[y] = c_1 p/(p² + 1) + c_2/(p² + 1).

Now taking the inverse Laplace transform, we get

y = c_1 cos x + c_2 sin x.

Ex. 9.5.3. Solve y' + y = 3e^(2x), y(0) = 0.

Sol. 9.5.3. Taking the Laplace transform of both sides, we get

pL[y] − y(0) + L[y] = 3/(p − 2).

Using y(0) = 0 and solving for L[y], we have

L[y] = 3/((p + 1)(p − 2)) = 1/(p − 2) − 1/(p + 1).

Now taking the inverse Laplace transform, we get

y = e^(2x) − e^(−x).

Ex. 9.5.4. Solve Bessel's DE of order 0 given by

xy'' + y' + xy = 0, subject to y(0) = 1, y'(0) = 0.

Sol. 9.5.4. Taking the Laplace transform of both sides, we get

(−1) d/dp [p²L[y] − py(0) − y'(0)] + pL[y] − y(0) − d/dp (L[y]) = 0.

Using y(0) = 1, y'(0) = 0 and rearranging, we have

d(L[y])/L[y] = −p/(p² + 1) dp,

which on integrating leads to

L[y] = c(p² + 1)^(−1/2) = (c/p)(1 + 1/p²)^(−1/2) = c[1/p − (1/2)(1/p³) + (1/2!)(1/2)(3/2)(1/p⁵) − ...].

Now taking the inverse Laplace transform, we get

y = c[1 − x²/2² + x⁴/(2²·4²) − ...] = cJ_0(x).

Using y(0) = 1, we get c = 1. Thus, the required solution is

y = J_0(x).

Remark 9.5.1. From the above example, notice that L[J_0(x)] = 1/√(p² + 1).
Theorem 9.5.1. (Convolution Theorem) Prove that L[f(x)]·L[g(x)] = L[∫_0^x f(x − t)g(t) dt].

Proof. We have

L[f(x)]·L[g(x)] = ∫_0^∞ e^(−ps) f(s) ds · ∫_0^∞ e^(−pt) g(t) dt

  = ∫_0^∞ ∫_0^∞ e^(−p(s+t)) f(s)g(t) ds dt

  = ∫_0^∞ ∫_t^∞ e^(−px) f(x − t)g(t) dx dt   (s + t = x)

  = ∫_0^∞ ∫_0^x e^(−px) f(x − t)g(t) dt dx   (change of order of integration)

  = ∫_0^∞ e^(−px) [∫_0^x f(x − t)g(t) dt] dx

  = L[∫_0^x f(x − t)g(t) dt].

Remark 9.5.2. If L[f(x)] = F(p) and L[g(x)] = G(p), then by the convolution theorem

L⁻¹[F(p)G(p)] = ∫_0^x f(x − t)g(t) dt.

Ex. 9.5.5. Use the convolution theorem to find L⁻¹[1/(p²(p² + 1))].

Sol. 9.5.5. We know that

L⁻¹[1/p²] = x,   L⁻¹[1/(p² + 1)] = sin x.

So in view of the convolution theorem, we have

L⁻¹[1/(p²(p² + 1))] = L⁻¹[(1/p²)·(1/(p² + 1))] = ∫_0^x (x − t) sin t dt = x − sin x.
Remark 9.5.3. The integral ∫_0^x f(x - t) g(t) dt is called the convolution of the functions f(x) and g(x), and is denoted by f(x) * g(x). So by convolution theorem, we have

L[f(x) * g(x)] = L[f(x)] L[g(x)].

9.6 Solution of integral equations

If f(x) and K(x) are given functions, then the equation

f(x) = y(x) + ∫_0^x K(x - t) y(t) dt,    (9.3)

where the unknown function y(x) appears under the integral sign, is called an integral equation. Taking Laplace transform of both sides of (9.3), we get

L[f(x)] = L[y(x)] + L[K(x)] L[y(x)].

So we have

L[y(x)] = L[f(x)]/(1 + L[K(x)]).
Ex. 9.6.1. Solve y(x) = x^3 + ∫_0^x sin(x - t) y(t) dt.

Sol. 9.6.1. Taking Laplace transform of both sides, we get

L[y(x)] = L[x^3] + L[sin x] L[y(x)].

So we have

L[y(x)] = L[x^3]/(1 - L[sin x]) = 6/p^4 + 6/p^6.

Taking inverse Laplace transform, we have

y(x) = x^3 + x^5/20.
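The solution can be substituted back into the integral equation as a numerical check; a small sketch (illustrative only, not part of the notes):

```python
import math

def y(x):
    # candidate solution from the worked example
    return x ** 3 + x ** 5 / 20.0

def rhs(x, n=2000):
    # trapezoidal approximation of x^3 + ∫_0^x sin(x - t) y(t) dt
    h = x / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.sin(x - t) * y(t)
    return x ** 3 + s * h

# both sides of the integral equation agree up to quadrature error
max_err = max(abs(y(x) - rhs(x)) for x in (0.5, 1.0, 2.0))
```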

9.7 Heaviside or Unit Step Function

It is denoted by H(t) or u(t), and is defined as

H(t) = u(t) = { 0, t < 0;  1, t ≥ 0 }.

It has a jump discontinuity at t = 0. If the discontinuity happens to be at t = a ≥ 0, we define

H_a(t) = u_a(t) = { 0, t < a;  1, t ≥ a }.

The Laplace transform of u_a(t) is given by

L[u_a(t)] = ∫_0^∞ e^{-pt} u_a(t) dt = ∫_a^∞ e^{-pt} dt = e^{-ap}/p.

In particular, L[u(t)] = 1/p.

Further, if L[f(t)] = F(p), then we have

L[f(t - a) u_a(t)] = ∫_a^∞ e^{-pt} f(t - a) dt = ∫_0^∞ e^{-p(a+z)} f(z) dz = e^{-ap} F(p).

It gives

L^{-1}[e^{-ap} F(p)] = f(t - a) u_a(t).

Ex. 9.7.1. Find L^{-1}[e^{-3p}/(p^2 + 1)].

Sol. 9.7.1. We know L^{-1}[1/(p^2 + 1)] = sin t.

⟹ L^{-1}[e^{-3p}/(p^2 + 1)] = sin(t - 3) u_3(t) = { 0, t < 3;  sin(t - 3), t ≥ 3 }.
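The formula L[u_a(t)] = e^{-ap}/p can be confirmed by truncated numerical integration; a minimal sketch (not part of the notes; the truncation point T = 40 is an arbitrary choice):

```python
import math

def u_a(t, a):
    # unit step shifted to t = a
    return 1.0 if t >= a else 0.0

def laplace(f, p, T=40.0, n=8000):
    # trapezoidal approximation of the truncated integral ∫_0^T e^{-pt} f(t) dt
    h = T / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-p * t) * f(t)
    return s * h

a, p = 3.0, 1.5
approx = laplace(lambda t: u_a(t, a), p)
exact = math.exp(-a * p) / p
```

The jump at t = a limits the accuracy of the trapezoidal rule to about one grid cell, which is still far inside the tolerance used here.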

9.8 Dirac Delta Function or Unit Impulse Function

A large force acting on a body for a very short duration of time is called an impulse; for instance, hammering a nail into wood, or a bat hitting a cricket ball. An impulse is modelled by a function known as the Dirac delta function.

Let ε > 0 be any real number, and consider the function

f_ε(t) = { 0, t < 0;  1/ε, 0 ≤ t ≤ ε;  0, t ≥ ε }.

Its limit as ε → 0+ defines the Dirac delta function, denoted by δ(t). So lim_{ε→0+} f_ε(t) = δ(t), and we may interpret that δ(t) = 0 for t ≠ 0 and δ(t) = ∞ at t = 0. The delta function can be made to act at any point, say a ≥ 0. Then we define

δ_a(t) = { 0, t ≠ a;  ∞, t = a }.

The function f_ε(t) can be written in terms of the unit step function as

f_ε(t) = (1/ε)[u(t) - u_ε(t)].

It implies that

δ(t) = u'(t).

Here it should be noted that the ordinary derivative of u(t) does not exist at t = 0, u(t) being discontinuous at t = 0. So it is to be understood as a generalized function or quasi function. Similarly, the function

f_ε(t) = { 0, t < a;  1/ε, a ≤ t ≤ a + ε;  0, t ≥ a + ε }    (9.4)

can be written as

f_ε(t) = (1/ε)[u_a(t) - u_{a+ε}(t)].

⟹ δ_a(t) = u_a'(t).
Now, let g(t) be any continuous function for t ≥ 0. Then using (9.4), we have

∫_0^∞ g(t) f_ε(t) dt = (1/ε) ∫_a^{a+ε} g(t) dt = g(t_0),

where a < t_0 < a + ε, by the mean value theorem of integral calculus. So in the limit ε → 0, we get

∫_0^∞ g(t) δ_a(t) dt = g(a).

In particular, if we choose g(t) = e^{-pt}, then we get

∫_0^∞ e^{-pt} δ_a(t) dt = e^{-pa}.

It means

L[δ_a(t)] = e^{-pa} and L[δ(t)] = 1.

⟹ L^{-1}[e^{-pa}] = δ_a(t) and L^{-1}[1] = δ(t).
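The sifting property ∫ g(t) δ_a(t) dt = g(a) can be seen numerically by shrinking ε in f_ε; a short sketch (illustrative, not from the notes):

```python
import math

def sift(g, a, eps, n=2000):
    # (1/eps) ∫_a^{a+eps} g(t) dt by the trapezoidal rule,
    # i.e. ∫_0^∞ g(t) f_eps(t) dt for the pulse f_eps centred past a
    h = eps / n
    s = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * g(t)
    return s * h / eps

a = 1.0
# the error against g(a) = cos(1) shrinks roughly linearly with eps
errors = [abs(sift(math.cos, a, eps) - math.cos(a)) for eps in (1e-1, 1e-2, 1e-3)]
```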

Examples

Suppose the LDE

y'' + a y' + b y = f(t),    y(0) = y'(0) = 0,    (9.5)

describes a mechanical or electrical system at rest in its state of equilibrium. Here f(t) can be an impressed external force F or an electromotive force E that begins to act at t = 0. If A(t) is the solution (output or indicial response) for the input f(t) = u(t) (the unit step function), then

A'' + a A' + b A = u(t).

Taking Laplace transform of both sides, we get

p^2 L[A] - p A(0) - A'(0) + a[p L[A] - A(0)] + b L[A] = 1/p.

Using A(0) = A'(0) = 0 and solving for L[A], we get

L[A] = 1/(p(p^2 + ap + b)) = 1/(p Z(p)),    (9.6)

where Z(p) = p^2 + ap + b.

Similarly, taking Laplace transform of (9.5), we get

L[y] = L[f(t)]/Z(p) = p L[A] L[f(t)] = p L[∫_0^t A(t - τ) f(τ) dτ] = L[(d/dt) ∫_0^t A(t - τ) f(τ) dτ].    (9.7)

Taking inverse Laplace transform, we have

y(t) = (d/dt) ∫_0^t A(t - τ) f(τ) dτ = ∫_0^t A'(t - τ) f(τ) dτ    (∵ A(0) = 0).    (9.8)

Since L[A] L[f(t)] = L[f(t)] L[A], (9.7) gives

L[y] = p L[∫_0^t f(t - τ) A(τ) dτ].

Taking inverse Laplace transform, we get

y(t) = ∫_0^t f'(t - τ) A(τ) dτ + f(0) A(t).

Thus, finally the solution of (9.5) for the general input f(t) is given by the following two formulas:

y(t) = ∫_0^t A'(t - τ) f(τ) dτ,    (9.9)

y(t) = ∫_0^t f'(t - τ) A(τ) dτ + f(0) A(t).    (9.10)

In case the input is f(t) = δ(t), the unit impulse function, let us denote the solution (output or impulsive response) of (9.5) by h(t) so that L[h(t)] = 1/Z(p) and

L[A(t)] = 1/(p Z(p)) = L[h(t)]/p.

So A'(t) = h(t), and formula (9.9) becomes

y(t) = ∫_0^t h(t - τ) f(τ) dτ.    (9.11)
0

Ex. 9.8.1. Use formula (9.10) to solve y'' + y' - 6y = 2e^{3t}, y(0) = y'(0) = 0.

Sol. 9.8.1. Here L[A(t)] = 1/(p(p^2 + p - 6)). So A(t) = -1/6 + (1/15)e^{-3t} + (1/10)e^{2t}.

Also, f(t) = 2e^{3t}, f'(t) = 6e^{3t} and f(0) = 2. So formula (9.10) gives

y(t) = (1/3)e^{3t} + (1/15)e^{-3t} - (2/5)e^{2t}.

Formula (9.11) can also be used for the solution, where h(t) = L^{-1}[1/(p^2 + p - 6)] = (1/5)(e^{2t} - e^{-3t}).
