
Introduction to Differential Equations

Ryan Lok-Wing Pang


lwpang@ust.hk
August 1, 2014
Contents

1 First-Order Differential Equations
1.1 Preliminaries
1.2 First Order Ordinary Differential Equations
1.2.1 1st Order Linear ODEs
1.2.2 1st Order Separable Equations
1.2.3 1st Order Homogeneous Equations
1.2.4 1st Order Exact Equations
1.3 Existence and Uniqueness of Solutions
1.4 Modeling with First-Order ODEs
1.5 Introduction to Numerical Methods
1.5.1 Euler Method

2 Second-Order Linear Ordinary Differential Equations
2.1 Introduction
2.2 General Solution of a Homogeneous Linear 2nd-Order ODE
2.3 High Order ODEs with Constant Coefficients
2.4 Reduction of Order
2.5 Nonhomogeneous Equations
2.6 Method of Undetermined Coefficients
2.7 Variation of Parameters
2.8 Mechanical Vibrations
2.8.1 Free Undamped Oscillation
2.8.2 Free Damped Oscillation
2.8.3 Forced Undamped Oscillation
2.8.4 Forced Damped Oscillation

3 Series Solutions for Second-Order Linear Equations
3.1 Series Solutions for Linear ODEs
3.2 Ordinary and Singular Points of an ODE
3.3 Series Solution about an Ordinary Point
3.4 Series Solution about a Regular Singular Point
3.4.1 Euler Equations
3.4.2 The Method of Frobenius

4 Laplace Transform
4.1 Introduction
4.2 Laplace Transform of Elementary Functions
4.3 Laplace Transform of Derivatives
4.4 Inverse Laplace Transform
4.5 Solving Initial Value Problems by Laplace Transform
4.6 Unit Step Function
4.7 Initial Value Problems with Discontinuous Functions
4.8 Impulse Functions
4.9 Convolution Integral

5 Systems of First-Order ODEs
5.1 Systems of Differential Equations
5.2 Solution of a General First-Order System
5.3 Homogeneous Systems of 1st-Order Linear ODEs with Constant Coefficients
5.4 Non-Homogeneous Systems
5.4.1 Diagonalization
5.4.2 Variation of Parameters
Chapter 1
First-Order Differential Equations
1.1 Preliminaries

An Ordinary Differential Equation (ODE) is an equation involving derivatives with respect to a single variable only.

Definition 1.1.1 (Order). The order of an ODE is the order of the highest derivative that appears in the ODE.

Example 1.1.2. ty''' + 2ty^2 + e^t = 1 is an ODE of order 3.
The general form of an ODE is F(y^(n), y^(n-1), ..., y', y, t) = 0, where y = y(t). An ODE is called linear if the coefficients of y^(n), ..., y', y are not functions of y, and is called non-linear otherwise.

Example 1.1.3. y'' + t^2 y' + y^2 = e^t is a non-linear ODE.

Example 1.1.4. t^2 y'' + sin(t) y = t^2 is a linear ODE.
1.2 First Order Ordinary Differential Equations

In general, an ODE y' = f(y, t) is NOT solvable in terms of elementary functions (e.g. e^x, sin x, log x). However, if an ODE is in one of the following three categories, we can solve the equation and obtain an analytical solution.

1. Linear Equations: y' + p(t)y = q(t).
2. Separable Equations: P(t) + Q(y)y' = 0.
3. Exact Equations.
1.2.1 1st Order Linear ODEs

For a 1st-order ODE of the form
dy/dt + p(t)y = q(t),
we may solve it by the method of Integrating Factor. First, we find an antiderivative P(t) of p(t), then multiply both sides of the ODE by the integrating factor exp(P(t)):

e^{P(t)} y' + e^{P(t)} p(t) y = q(t) e^{P(t)}
d/dt ( e^{P(t)} y ) = q(t) e^{P(t)}
e^{P(t)} y = ∫ q(t) e^{P(t)} dt
y = e^{-P(t)} ∫ q(t) e^{P(t)} dt.
Example 1.2.1. Solve ty' + 2y = t^5 for t > 0.

Proof. Rewrite the equation in the form y' + (2/t)y = t^4. Then an integrating factor is
e^{2 ln t} = t^2.
Multiplying both sides of the ODE by t^2 yields

t^2 y' + 2ty = t^6
d/dt ( t^2 y ) = t^6
t^2 y = t^7/7 + C
y = t^5/7 + C t^{-2}.
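The solution can be sanity-checked by substituting it back into the original equation; a minimal sketch, assuming the sympy library is available:

```python
import sympy as sp

t, C = sp.symbols('t C', positive=True)
y = t**5 / 7 + C / t**2  # solution found above: y = t^5/7 + C t^(-2)

# Substituting back into the left-hand side t*y' + 2*y should return t^5,
# for every value of the constant C.
lhs = sp.simplify(t * sp.diff(y, t) + 2 * y)
print(lhs)
```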
1.2.2 1st Order Separable Equations

If a 1st order ODE can be written in the form
dy/dx = P(x)Q(y),
then the ODE is said to be in separable form. In general, such an equation can be solved whether or not it is linear, but the solution may be implicit, i.e. it may not be expressible in the form y = f(x).

Example 1.2.2. Solve (t^2 + 1)y' + t e^y = 0.

Proof. We have
e^{-y} dy = -t/(t^2 + 1) dt.
Integrating both sides yields
e^{-y} = (1/2) ln(t^2 + 1) + C.
Hence
y = -ln | ln √(t^2 + 1) + C |.
Example 1.2.3 (Romanian MO 1971). Find all continuous functions f : R → R satisfying the equation
f(x) = λ(1 + x^2) ( 1 + ∫_0^x f(t)/(1 + t^2) dt ),
for all x ∈ R. Here λ is a fixed real number.

Proof. Since f is continuous, the R.H.S. of the equation is a differentiable function, so is f. Rewrite the equation as
f(x)/(1 + x^2) = λ ( 1 + ∫_0^x f(t)/(1 + t^2) dt ),
and then differentiate w.r.t. x to obtain
( f'(x)(1 + x^2) - 2x f(x) ) / (1 + x^2)^2 = λ f(x)/(1 + x^2).
Separating variables and integrating yields
∫ f'(x)/f(x) dx = ∫ ( λ + 2x/(1 + x^2) ) dx.
Hence ln f(x) = λx + ln(1 + x^2) + C. Hence f(x) = a(1 + x^2)e^{λx} for some constant a. Substituting in the original relation, we obtain a = λ. Hence, the unique solution is f(x) = λ(1 + x^2)e^{λx}.
1.2.3 1st Order Homogeneous Equations

A function f(x, y) is homogeneous of degree n if f(kx, ky) = k^n f(x, y).

A 1st order ODE P(x, y) + Q(x, y) dy/dx = 0 is homogeneous if both P(x, y) and Q(x, y) are homogeneous functions of the same degree n.

The general form of a 1st-order homogeneous equation is
dy/dx = F(y/x).
They can be solved by transforming to separable equations.

Let u = y/t; then dy/dt = t(du/dt) + u. The homogeneous ODE then becomes separable:

t du/dt + u = F(u)
t du/dt = F(u) - u.

Hence homogeneous equations can be considered as a class of separable equations.
Example 1.2.4. Solve
dy/dt = (t^2 + ty + y^2)/t^2.

Proof. Since
dy/dt = 1 + (y/t) + (y/t)^2,
the ODE is homogeneous and can be solved by transforming to a separable equation.
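Carrying the substitution u = y/t through gives t du/dt = 1 + u^2, hence arctan(y/t) = ln t + C and y = t tan(ln t + C). A quick check of this candidate solution (a sketch, assuming sympy is available):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
C = sp.symbols('C')
y = t * sp.tan(sp.log(t) + C)  # candidate solution from the substitution u = y/t

# The residual dy/dt - (t^2 + t*y + y^2)/t^2 should vanish identically.
residual = sp.simplify(sp.diff(y, t) - (t**2 + t*y + y**2) / t**2)
print(residual)
```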
1.2.4 1st Order Exact Equations

Recall from multivariable calculus that for a continuous vector field F : R^2 → R^2, F(x, y) = (P(x, y), Q(x, y)), we call F conservative if
∂f/∂x = P(x, y) and ∂f/∂y = Q(x, y) for some function f(x, y),
i.e. F = ∇f. The function f is called the potential function for F. If F is defined on a simply connected region D ⊆ R^2, then the condition is equivalent to
∂P/∂y = ∂Q/∂x.
There are two ways to find a potential function.

Example 1.2.5. Prove that the vector field F(x, y) = (2x + 3y, 3x - 2y) is conservative and find a potential function f for F.

Proof. We claim that f(x, y) = x^2 - y^2 + 3xy is a potential function for F. If F = ∇f, then we have
∂f/∂x = 2x + 3y.
Integrating w.r.t. x gives f(x, y) = x^2 + 3xy + g(y) for some g(y) which only depends on y but not on x. Differentiating w.r.t. y gives
∂f/∂y = 3x + g'(y) = 3x - 2y.
Hence g'(y) = -2y and g(y) = -y^2 + C. In particular, we see that f(x, y) = x^2 - y^2 + 3xy is a potential function for F.

Alternatively, we can find f by performing a line integral along the segments (0, 0) → (x, 0) and (x, 0) → (x, y):
f(x, y) = ∫_0^x (2t + 0) dt + ∫_0^y (3x - 2t) dt = x^2 - y^2 + 3xy + C.
A first order ODE of the form
P(x, y) + Q(x, y) dy/dx = 0
is called exact if
∂f/∂x = P(x, y) and ∂f/∂y = Q(x, y) for some function f(x, y).
If P and Q are defined on a simply connected region, then the definition of exactness is equivalent to
∂P/∂y = ∂Q/∂x.
If the ODE is exact with potential function f(x, y), then along any solution curve y = y(x) the composite f(x, y(x)) is a function of x alone, and by the chain rule we have
df/dx = ∂f/∂x · dx/dx + ∂f/∂y · dy/dx = ∂f/∂x + ∂f/∂y · dy/dx,
i.e. the ODE can be written in the form
df/dx = 0,
so that f(x, y) = C for a constant C.
Example 1.2.6. Solve
dy/dx = (x^3 - y)/(x + y^3).

Proof. We easily check that this ODE is exact and an associated potential function is f(x, y) = xy - (x^4/4) + (y^4/4). Hence the solution is given implicitly by xy - (x^4/4) + (y^4/4) = C.
For the ODE P(x, y) + Q(x, y)y' = 0, sometimes there exists an integrating factor μ(x, y) such that the new ODE μP + μQy' = 0 is exact, so that
∂(μP)/∂y = ∂(μQ)/∂x,
which implies
μ(∂P/∂y - ∂Q/∂x) + P ∂μ/∂y - Q ∂μ/∂x = 0.  (*)
This is a partial differential equation for μ(x, y) and is generally very hard to solve.

However, an integrating factor can be found in some special cases. This happens when μ is a single variable function of x or y.

When μ(x, y) = μ(x), we have ∂μ/∂y = 0, so that (*) becomes
dμ/dx = μ (P_y - Q_x)/Q.
If (P_y - Q_x)/Q is a function of x only, μ can be solved for.

When μ(x, y) = μ(y), we have ∂μ/∂x = 0, so that (*) becomes
dμ/dy = μ (Q_x - P_y)/P.
If (Q_x - P_y)/P is a function of y only, μ can be solved for.

Therefore, to determine whether an integrating factor exists, we check (P_y - Q_x)/Q and (Q_x - P_y)/P to see if they are single variable functions of x and y respectively.
Example 1.2.7. Solve (3xy + y^2) + (x^2 + xy)y' = 0.

Proof. Let P(x, y) = 3xy + y^2 and Q(x, y) = x^2 + xy; then P_y = 3x + 2y and Q_x = 2x + y. Hence the ODE is not exact. Now (P_y - Q_x)/Q = (x + y)/(x^2 + xy) = 1/x is a function of x only. Hence we can find an integrating factor μ(x) such that
μP + μQ dy/dx = 0
is exact. Hence

∂(μP)/∂y = ∂(μQ)/∂x
μ P_y = μ_x Q + μ Q_x
μ_x/μ = (P_y - Q_x)/Q = 1/x
∫ dμ/μ = ∫ dx/x
ln μ = ln x + C.

Take μ = x. There exists a potential function f(x, y) such that
∂f/∂x = μP = 3x^2 y + xy^2 and ∂f/∂y = μQ = x^3 + x^2 y.
Solving for f gives f(x, y) = x^3 y + (x^2 y^2)/2. Hence the solution of the ODE is given by x^3 y + (x^2 y^2)/2 = C, where C is a constant.

Remark. In fact, the ODE is homogeneous.
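The effect of the integrating factor can be verified directly: before multiplying by μ = x the mixed partials disagree, afterwards they match. A small check, assuming sympy is available:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = 3*x*y + y**2
Q = x**2 + x*y
mu = x  # integrating factor found above

# Original equation: P_y - Q_x = x + y, nonzero, so the equation is not exact.
print(sp.expand(sp.diff(P, y) - sp.diff(Q, x)))
# After multiplying by mu, the mixed partials agree, so the new equation is exact.
print(sp.simplify(sp.diff(mu*P, y) - sp.diff(mu*Q, x)))
```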
Example 1.2.8 (Razvan Gelca). Let f and g be differentiable functions on the real line satisfying the equation
(f^2 + g^2)f' + (fg)g' = 0.
Prove that f is bounded.

Proof. The idea is to integrate the equation using an integrating factor. If instead we had the 1st-order ODE (x^2 + y^2)dx + xy dy = 0, then the standard method as in the above example finds x as an integrating factor. So if we multiply the equation by f, then we have (f^3 + fg^2)f' + f^2 g g' = 0, which is equivalent to
( (1/4)f^4 + (1/2)f^2 g^2 )' = 0.
Hence f^4 + 2f^2 g^2 = C for some real constant C. In particular, f is bounded.
1.3 Existence and Uniqueness of Solutions

Theorem 1.3.1. Consider the initial value problem
dy/dt = f(t, y), y(t_0) = y_0.
If f and ∂f/∂y are continuous on some rectangle α < t < β, γ < y < δ containing the point (t_0, y_0), then there exists a unique solution y = φ(t) to the initial value problem defined on some interval (a, b) satisfying a < t_0 < b.
Corollary 1.3.2 (Existence and Uniqueness Theorem for 1st-Order Linear ODEs). Consider the 1st-order linear ODE initial value problem
dy/dt + p(t)y = q(t), y(t_0) = y_0.
If p(t) and q(t) are continuous functions on an open interval (α, β) containing t_0, then there exists a unique solution to the initial value problem defined on the whole interval (α, β).

There are a few things to notice here. First, unlike Corollary 1.3.2, Theorem 1.3.1 does not tell us the interval of validity for the unique solution it guarantees. It only tells us the largest possible interval in which the solution can exist; we would need to actually solve the initial value problem to get the interval of validity.

Secondly, for nonlinear differential equations, the value of y_0 may affect the interval of validity, as we will see in a later example.
Example 1.3.3. Does the initial value problem y' = 1/x, y(0) = 0 have a unique solution?

Proof. The ODE is linear. Since 1/x is not continuous at x = 0 and the initial value is given at x = 0, a unique solution does not exist in this case.
Example 1.3.4. For ty' - 2y = 5t^2, y(-1) = 2, what is the interval for a unique solution to exist?

Proof. The ODE is linear. p(t) = -2/t and q(t) = 5t are both continuous for all t except at t = 0. Hence the desired interval is t ∈ (-∞, 0). Solving the ODE gives y(t) = 5t^2 ln|t| + Ct^2. Finally y(-1) = 2 implies C = 2.
Example 1.3.5. For (t^2 + t - 6)y' + (t - 3)y = 1/(sin t), y(4) = 1, what is the interval for a unique solution to exist?

Proof. The ODE is linear. p(t) = (t - 3)/((t + 3)(t - 2)) is not continuous at t = -3, 2, and q(t) = 1/((sin t)(t + 3)(t - 2)) is not continuous at t = -3, 2, kπ for k ∈ Z. Hence the desired interval containing t_0 = 4 is t ∈ (π, 2π).
Example 1.3.6. Does the initial value problem y' = 2√y, y(0) = 0 have a unique solution?

Proof. The ODE is non-linear. f(t, y) = 2√y is continuous for y ≥ 0, but ∂f/∂y = 1/√y is continuous only for y > 0. However, the initial condition y(0) = 0 is at y = 0. Hence Theorem 1.3.1 does not guarantee a unique solution, and indeed uniqueness fails: both y = 0 and y = t^2 solve the initial value problem.
Example 1.3.7. For y' = (2 cos(2x))/(3 + 2y), y(0) = -1, what is the interval for a unique solution to exist?

Proof. The ODE is non-linear. f(x, y) = (2 cos(2x))/(3 + 2y) and f_y = -(4 cos(2x))/(3 + 2y)^2 are continuous for all y except at y = -3/2. To determine the interval, we first solve the ODE. For y ≠ -3/2, separating variables and integrating gives
3y + y^2 = sin(2x) + C.
y(0) = -1 implies C = -2. Hence y^2 + 3y + 2 - sin(2x) = 0 is the implicit solution to the initial value problem. By the quadratic formula, we have y = -(3/2) ± (1/2)√(1 + 4 sin(2x)). The initial condition implies y = -(3/2) + (1/2)√(1 + 4 sin(2x)). This solution is valid iff 1 + 4 sin(2x) > 0, iff sin(2x) > -1/4, iff -arcsin(1/4) < 2x < π + arcsin(1/4), iff -(1/2) arcsin(1/4) < x < (π + arcsin(1/4))/2. Therefore the desired interval is x ∈ (-(1/2) arcsin(1/4), (π + arcsin(1/4))/2).
Example 1.3.8. For y' = y^2, y(0) = 1, what is the interval for a unique solution to exist?

Proof. The ODE is non-linear. f(t, y) = y^2 and f_y = 2y are continuous for all y and t. By Theorem 1.3.1, there exists a unique solution to the initial value problem. To determine the interval, we first solve the ODE. Solving gives y = -1/(t + C). The initial condition y(0) = 1 implies C = -1. Hence y = 1/(1 - t) is the solution to the initial value problem. Therefore the desired interval is t ∈ (-∞, 1).
Example 1.3.9 (Putnam 1988). A common calculus mistake is to believe that the product rule for derivatives says that (fg)' = f'g'. If f(x) = e^{x^2}, determine, with proof, whether there exists an open interval (a, b) and a nonzero function g defined on (a, b) such that this wrong product rule is true for x in (a, b).

Proof. We want to find a solution g to fg' + f'g = f'g', which is equivalent to g' + (f'/(f - f'))g = 0. Now f - f' = (1 - 2x)e^{x^2}. If x_0 ≠ 1/2 and y_0 ∈ R, then by the existence and uniqueness theorem for 1st order ODEs, there exists a unique solution g(x), defined in some open neighborhood (a, b) of x_0, with g(x_0) = y_0. By taking y_0 nonzero, we obtain a nonzero solution g.

One can solve the ODE by separation of variables. If g is nonzero, the ODE is equivalent to
g'/g = f'/(f' - f) = 2x e^{x^2} / ((2x - 1)e^{x^2}) = 1 + 1/(2x - 1),
ln |g(x)| = x + (1/2) ln |2x - 1| + c,
from which one finds that the nonzero solutions are of the form g(x) = Ce^x |2x - 1|^{1/2} for any nonzero number C, on any interval not containing 1/2.
1.4 Modeling with First-Order ODEs

One of the main applications of ODEs is to model processes in physical phenomena, as well as problems from economics, social science, finance, etc. In this section, we discuss some processes that can be modeled by 1st-order ODEs.

Example 1.4.1 (Radioactive Decay). Let Q(t) be the amount of radioactive substance at time t. Assume the rate of decay of Q is proportional to the current amount. Then the process is governed by the equation
dQ/dt = -λQ
for some constant λ > 0. Solving gives Q = ce^{-λt}, which means Q decays exponentially.

Example 1.4.2 (Compound Interest). Let P(t) be the principal and r be the interest rate (a constant); then
dP/dt = rP, P(0) = P_0.
Solving the initial value problem gives P(t) = P_0 e^{rt}.
1.5 Introduction to Numerical Methods

1.5.1 Euler Method

Consider the initial value problem
dy/dt = f(t, y), y(t_0) = y_0.
If f and ∂f/∂y are continuous, then the initial value problem has a unique solution y = φ(t) in some interval containing the initial point t = t_0. It is usually impossible to find the solution exactly. In that case, we use numerical methods to approximate the solution. One of the oldest methods is the Euler method, or the tangent line method.

We know that the solution passes through the initial point (t_0, y_0) and the slope at this point is f(t_0, y_0). Hence consider the tangent to the solution curve at (t_0, y_0), namely
y = y_0 + f(t_0, y_0)(t - t_0).
Thus if t_1 is close enough to t_0, then we can approximate φ(t_1) by substituting t = t_1 into the tangent line:
y_1 = y_0 + f(t_0, y_0)(t_1 - t_0).
We can repeat the process. But we do not know the value φ(t_1) of the solution at t_1. The best we can do is to use the approximate value y_1 instead. Thus we construct the line through (t_1, y_1) with slope f(t_1, y_1),
y = y_1 + f(t_1, y_1)(t - t_1).
If t_2 is close enough to t_1, then substituting into the above equation gives
y_2 = y_1 + f(t_1, y_1)(t_2 - t_1).
In general, we have
y_{n+1} = y_n + f(t_n, y_n)(t_{n+1} - t_n), n = 0, 1, 2, ...
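The recursion above translates directly into a few lines of code. A minimal sketch with uniform step size (the function name `euler` and the test problem y' = y are illustrative choices, not from the text):

```python
import math

def euler(f, t0, y0, t_end, n):
    """Approximate y(t_end) for y' = f(t, y), y(t0) = y0, using n Euler steps."""
    h = (t_end - t0) / n          # uniform step size t_{k+1} - t_k
    t, y = t0, y0
    for _ in range(n):
        y = y + f(t, y) * h       # y_{n+1} = y_n + f(t_n, y_n)(t_{n+1} - t_n)
        t = t + h
    return y

# For y' = y, y(0) = 1, the exact value is y(1) = e; the estimate improves as n grows.
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 10))
print(euler(lambda t, y: y, 0.0, 1.0, 1.0, 10000), "vs", math.e)
```

Halving the step size roughly halves the error, reflecting the first-order accuracy of the method.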
Chapter 2
Second-Order Linear Ordinary Differential Equations
2.1 Introduction

The general form of a 2nd-order ODE is
y'' = f(t, y, y').
It is linear if it can be written as
y'' + p(t)y' + q(t)y = g(t).
Otherwise it is non-linear. If g(t) = 0, then the ODE is called homogeneous. Otherwise it is called non-homogeneous.

Theorem 2.1.1 (Principle of Superposition). For any homogeneous linear ODE, the set of all solutions forms a vector space over C.

Theorem 2.1.2 (Dimension Theorem). The dimension of the vector space V of solutions to an n-th order homogeneous linear ODE is dim(V) = n.

Definition 2.1.3 (Wronskian). The Wronskian of two functions f(t) and g(t), denoted by W(f, g), is defined as
W(f, g) = det [ f(t) g(t) ; f'(t) g'(t) ] = f(t)g'(t) - f'(t)g(t).

The Wronskian is used to test the linear independence of functions on an interval.

Theorem 2.1.4. If W(f, g) ≠ 0 on an interval I, then f(t) and g(t) are linearly independent on I.
Example 2.1.5. W(e^t, e^{2t}) = e^{3t} ≠ 0 for all t ∈ R. Hence e^t, e^{2t} are linearly independent on R.

Example 2.1.6. W(e^t, 2e^t) = 0 for all t ∈ R. Hence e^t, 2e^t are linearly dependent on R. Alternatively, 2e^t/e^t = 2 is a constant for all t ∈ R.
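Both Wronskians are easy to reproduce symbolically; a minimal sketch, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')

def wronskian(f, g):
    # W(f, g) = f g' - f' g, the 2x2 determinant from Definition 2.1.3
    return sp.simplify(f * sp.diff(g, t) - sp.diff(f, t) * g)

print(wronskian(sp.exp(t), sp.exp(2*t)))   # nonzero everywhere: independent
print(wronskian(sp.exp(t), 2*sp.exp(t)))   # identically zero: dependent
```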
2.2 General Solution of a Homogeneous Linear 2nd-Order ODE

By the principle of superposition and the dimension theorem, if y_1(t) and y_2(t) are two linearly independent solutions of the homogeneous equation
y'' + p(t)y' + q(t)y = 0,
then the general solution of this equation is given by y = c_1 y_1 + c_2 y_2 for arbitrary constants c_1, c_2 ∈ C.
2.3 High Order ODEs with Constant Coefficients

Given
a_n y^(n) + a_{n-1} y^(n-1) + ... + a_1 y' + a_0 y = 0, a_i ∈ C,  (*)
we define the characteristic polynomial
p(r) = a_n r^n + a_{n-1} r^{n-1} + ... + a_1 r + a_0.
By the fundamental theorem of algebra, we can factor p(r) into n factors:
p(r) = a_n (r - r_1)(r - r_2) ... (r - r_n).
If r_1, ..., r_n are distinct, then the functions e^{r_1 t}, ..., e^{r_n t} form a basis for the space of solutions to (*). In other words, the general solution is
C_1 e^{r_1 t} + C_2 e^{r_2 t} + ... + C_n e^{r_n t}.
If r_1, ..., r_n are not distinct, consider a particular root r with multiplicity m; then replace
e^{rt}, e^{rt}, ..., e^{rt}  (m copies)
by
e^{rt}, t e^{rt}, ..., t^{m-1} e^{rt}.
Example 2.3.1. Solve y^(6) + 6y^(5) + 9y^(4) = 0.

Proof. The associated characteristic polynomial is p(r) = r^6 + 6r^5 + 9r^4 = r^4(r + 3)^2, whose roots are 0, 0, 0, 0, -3, -3 (counting multiplicities). Hence the general solution is
C_1 + C_2 t + C_3 t^2 + C_4 t^3 + C_5 e^{-3t} + C_6 t e^{-3t}.
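For higher-order constant-coefficient equations, the only computational step is root-finding, which can also be done numerically; a sketch using numpy (an assumption, not part of the text — note that repeated roots are only recovered up to rounding error):

```python
import numpy as np

# p(r) = r^6 + 6 r^5 + 9 r^4, coefficients listed from highest to lowest degree.
coeffs = [1, 6, 9, 0, 0, 0, 0]
roots = np.roots(coeffs)
print(np.sort(roots.real))  # -3 twice and 0 four times, up to rounding error
```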
Example 2.3.2 (Putnam). Solve the system of differential equations
x'' - y' + x = 0,
y'' + x' + y = 0
in real-valued functions x(t) and y(t).

Proof. Multiply the second equation by i and add it to the first to obtain
(x + iy)'' + i(x + iy)' + (x + iy) = 0.
Substituting z = x + iy, we have z'' + iz' + z = 0. The characteristic equation is r^2 + ir + 1 = 0, with solutions r_1 = iω_1 and r_2 = iω_2, where ω_1 = (-1 + √5)/2 and ω_2 = (-1 - √5)/2. Hence the general solution is
z(t) = (a + ib)e^{iω_1 t} + (c + id)e^{iω_2 t}
for arbitrary real numbers a, b, c, d. Comparing real and imaginary parts gives
x(t) = a cos ω_1 t - b sin ω_1 t + c cos ω_2 t - d sin ω_2 t,
y(t) = a sin ω_1 t + b cos ω_1 t + c sin ω_2 t + d cos ω_2 t.
2.4 Reduction of Order

For a general homogeneous linear 2nd-order ODE
y'' + p(t)y' + q(t)y = 0,
suppose one particular solution y_1 is known. A second linearly independent particular solution y_2 can be found by the method of reduction of order.

The idea is to let y_2(t) = v(t)y_1(t), put y_2 into the ODE, and then solve for v(t).
Example 2.4.1. Find the general solution of x^2 y'' + xy' - 9y = 0, x > 0, given a particular solution y_1 = x^3.

Proof. Let y_2(x) = v(x)x^3. Then we have y_2' = x^3 v' + 3x^2 v and y_2'' = x^3 v'' + 6x^2 v' + 6xv. Substituting into the ODE gives
v'' + (7/x) v' = 0.
Let u = v'; then the equation becomes u' + (7/x)u = 0, and solving gives u = C_1 x^{-7}. Hence v(x) = -(C_1/6) x^{-6} + C_2. Take v(x) = x^{-6}. Then y_2 = x^{-6} · x^3 = x^{-3}. Hence the general solution is
y = c_1 y_1 + c_2 y_2 = c_1 x^3 + c_2 / x^3.
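Both basis solutions can be checked against the original equation; a minimal symbolic check, assuming sympy is available:

```python
import sympy as sp

x = sp.symbols('x', positive=True)

def L(y):
    # Left-hand side of the ODE: x^2 y'' + x y' - 9 y
    return x**2 * sp.diff(y, x, 2) + x * sp.diff(y, x) - 9*y

print(sp.simplify(L(x**3)))    # the given solution y1
print(sp.simplify(L(x**-3)))   # the solution y2 found by reduction of order
```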
2.5 Nonhomogeneous Equations

Definition 2.5.1 (Differential Operator). For a function y(t), define the differential operator L by L[y] = y'' + p(t)y' + q(t)y.
Theorem 2.5.2. L is a linear operator, i.e. L[y_1 + y_2] = L[y_1] + L[y_2] and L[cy] = cL[y]. Moreover, if y_p is a particular solution of the non-homogeneous linear ODE L[y] = g(t), and y_h = c_1 y_1 + c_2 y_2 is the general solution to the corresponding homogeneous linear equation L[y] = 0, then
φ(t) = y_p + y_h = y_p + c_1 y_1 + c_2 y_2
is the general solution of the non-homogeneous linear equation L[y] = g(t).

Proof. The fact that L is linear is clear. Let Y(t) be another solution to L[y] = g(t), so L[Y(t)] = g(t). Hence L[Y(t)] - L[y_p] = 0, i.e. L[Y(t) - y_p] = 0 since L is linear. Therefore, Y(t) - y_p = y_h by assumption. Hence Y(t) = y_p + y_h.

The problem is how to find a particular solution y_p. One common method is called the Method of Undetermined Coefficients, which is the subject of the next section.
2.6 Method of Undetermined Coefficients

This method is suitable for non-homogeneous linear ODEs with constant coefficients, i.e. equations of the form ay'' + by' + cy = g(t), where g(t) is an elementary function.

The basic principle is to seek a particular solution y_p (a trial function) based on the form of g(t) and the form of y_h (solutions of the associated homogeneous equation). Below are five typical cases of g(t).

(a) g(t) is a polynomial, i.e. g(t) = a_n t^n + a_{n-1} t^{n-1} + ... + a_0.

Case 1. c = 0 but b ≠ 0: try y_p = t(A_n t^n + A_{n-1} t^{n-1} + ... + A_0).
Case 2. b = c = 0: try y_p = t^2 (A_n t^n + A_{n-1} t^{n-1} + ... + A_0).
Case 3. Otherwise, try y_p = A_n t^n + A_{n-1} t^{n-1} + ... + A_0.
Example 2.6.1. Solve y'' + 4y = 3x^3.

Proof. Solving the corresponding homogeneous ODE gives y_h = c_1 cos(2x) + c_2 sin(2x). Let y_p = Ax^3 + Bx^2 + Cx + D; then y_p' = 3Ax^2 + 2Bx + C and y_p'' = 6Ax + 2B. Substituting y_p into the ODE gives
(6Ax + 2B) + 4(Ax^3 + Bx^2 + Cx + D) = 3x^3.
Comparing coefficients gives 4A = 3, 4B = 0, 4C + 6A = 0, 2B + 4D = 0. Solving gives A = 3/4, B = 0, C = -9/8, D = 0. Hence
y_p = (3/4)x^3 - (9/8)x.
(b) g(t) = cos(ωt) or sin(ωt): let y_p = t^s (A cos(ωt) + B sin(ωt)), where s = 0 if ωi is not a root of the characteristic equation, and s = 1 if it is a root.
Example 2.6.2. Find a particular solution of 3y'' + y' - 2y = 2 cos t.

Proof. The characteristic equation is 3r^2 + r - 2 = 0; solving gives r = -1, 2/3. Let y_p = A cos t + B sin t; then y_p' = -A sin t + B cos t and y_p'' = -A cos t - B sin t. Substituting y_p into the ODE and simplifying gives
(-5A + B - 2) cos t + (-A - 5B) sin t = 0,
which implies -5A + B - 2 = 0 and -A - 5B = 0, since sin t and cos t are linearly independent. Solving gives A = -5/13, B = 1/13.
(c) g(t) = (a_n t^n + a_{n-1} t^{n-1} + ... + a_0)e^{αt}: let y_p = t^s (A_n t^n + A_{n-1} t^{n-1} + ... + A_0)e^{αt}, with

s = 0 if α is not a root of the characteristic equation,
s = 1 if α is a simple root of the characteristic equation,
s = 2 if α is a double root of the characteristic equation.
Example 2.6.3. Find a particular solution of y'' - 4y = 2e^{2t}.

Proof. Solving the characteristic equation r^2 - 4 = 0 gives r = ±2. Let y_p = Ate^{2t}. Then y_p' = A(2te^{2t} + e^{2t}) and y_p'' = 4A(e^{2t} + te^{2t}). Substituting y_p into the ODE gives 4A(e^{2t} + te^{2t}) - 4Ate^{2t} = 2e^{2t}, which gives A = 1/2.
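Substituting y_p = (1/2)te^{2t} back into the equation confirms the coefficient; a small check, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
yp = sp.Rational(1, 2) * t * sp.exp(2*t)  # particular solution found above

# y_p'' - 4 y_p should equal the forcing term 2 e^{2t}.
print(sp.simplify(sp.diff(yp, t, 2) - 4*yp))
```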
(d) g(t) = p_n(t)e^{αt} cos(βt) or p_n(t)e^{αt} sin(βt), with p_n(t) = a_n t^n + ... + a_1 t + a_0: let y_p = t^s (A_n t^n + ... + A_0)e^{αt} cos(βt) + t^s (B_n t^n + ... + B_0)e^{αt} sin(βt), with

s = 0 if α + βi is not a root of the characteristic equation,
s = 1 if α + βi is a simple root of the characteristic equation,
s = 2 if α + βi is a double root of the characteristic equation.
Example 2.6.4. Find a particular solution of y'' + 6y' + 13y = e^{-3t} cos(2t).

Proof. The characteristic equation is r^2 + 6r + 13 = 0; solving gives r = -3 ± 2i. Let
y_p = Ate^{-3t} cos 2t + Bte^{-3t} sin 2t.
(e) g(t) = g_1(t) + g_2(t) + ... + g_n(t): let y_p = y_{p_1} + y_{p_2} + ... + y_{p_n}, i.e. a linear combination of the forms of particular solutions suitable for g_1(t), g_2(t), ..., g_n(t).
Example 2.6.5. Solve y'' - 3y' + 2y = 3e^{-t} - 10 cos(3t).

Proof. Solving the characteristic equation r^2 - 3r + 2 = 0 gives r = 1, 2. Let
y_p = Ae^{-t} + B cos(3t) + C sin(3t).
2.7 Variation of Parameters

The method of Variation of Parameters can be applied to a general non-homogeneous 2nd-order ODE
L[y] = y'' + p(t)y' + q(t)y = g(t).
Unlike the method of undetermined coefficients, g(t) can be any function.

If y_1(t) and y_2(t) are two linearly independent solutions to the corresponding homogeneous equation L[y] = 0, we want to find a particular solution y_p(t) for the ODE L[y] = g(t) of the form
y_p = u_1 y_1 + u_2 y_2,
where u_1(t) and u_2(t) are unknown functions to be determined.
Theorem 2.7.1. With the notation as above, we have
u_1(t) = -∫ y_2(t)g(t)/W(y_1, y_2) dt,  u_2(t) = ∫ y_1(t)g(t)/W(y_1, y_2) dt,
where W(y_1, y_2) ≠ 0 is the Wronskian.
Example 2.7.2. Find the general solution of y'' + y = tan t.

Proof. Solving the characteristic equation r^2 + 1 = 0 gives r = ±i. Hence y_1 = cos t, y_2 = sin t, and y_h = c_1 y_1 + c_2 y_2 with c_1, c_2 ∈ C. Let y_p = u_1(t)y_1 + u_2(t)y_2; then by Theorem 2.7.1, we have

u_1 = -∫ (sin t tan t)/1 dt = sin t - ln |sec t + tan t|,
u_2 = ∫ (cos t tan t)/1 dt = -cos t + C.

Hence y_p = (sin t - ln |sec t + tan t|) cos t + sin t (-cos t) = -cos t ln |sec t + tan t|, and the general solution is y(t) = y_h + y_p.
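The particular solution can be verified by substituting it back into y'' + y = tan t; here the residual is sampled at two points rather than simplified symbolically, which is more robust. A sketch, assuming sympy is available:

```python
import sympy as sp

t = sp.symbols('t')
yp = -sp.cos(t) * sp.log(sp.sec(t) + sp.tan(t))  # particular solution found above

# Residual of y'' + y - tan t; it vanishes identically, sampled here pointwise.
residual = sp.diff(yp, t, 2) + yp - sp.tan(t)
for t0 in (sp.Rational(1, 2), 1):
    print(residual.subs(t, t0).evalf())
```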
Example 2.7.3 (Putnam 1987). For all real x, the real-valued function y = f(x) satisfies
y'' - 2y' + y = 2e^x.
(a) If f(x) > 0 for all real x, must f'(x) > 0 for all real x? Explain.
(b) If f'(x) > 0 for all real x, must f(x) > 0 for all real x? Explain.

Proof. Solving the characteristic equation r^2 - 2r + 1 = 0 gives r = 1 (a double root). Hence y_h = c_1 e^x + c_2 xe^x for c_1, c_2 ∈ R. Let y_p = u_1(x)e^x + u_2(x)xe^x; then

u_1 = -∫ xe^x · 2e^x / W(e^x, xe^x) dx = -x^2 + C,
u_2 = ∫ e^x · 2e^x / W(e^x, xe^x) dx = 2x + C.

Hence y_p = -x^2 e^x + 2x^2 e^x = x^2 e^x, and the general solution is f(x) = y_h + y_p = (x^2 + c_2 x + c_1)e^x.

Hence f(x) > 0 for all x ∈ R iff x^2 + c_2 x + c_1 > 0 for all x ∈ R, iff c_2^2 - 4c_1 < 0.

Similarly f'(x) > 0 for all x ∈ R iff (c_2 + 2)^2 - 4(c_1 + c_2) < 0, iff c_2^2 - 4c_1 + 4 < 0.

Clearly c_2^2 - 4c_1 < 0 does not imply c_2^2 - 4c_1 + 4 < 0 (take c_2 = 1, c_1 = 1 for instance), but c_2^2 - 4c_1 + 4 < 0 does imply c_2^2 - 4c_1 < 0. Hence the answer is NO for (a) and YES for (b).
2.8 Mechanical Vibrations

The spring-mass system is a typical example to illustrate the behavior of a dynamical system modeled by a linear 2nd order ODE.

Let
l = natural length of the spring,
L = stretching length of the spring after attaching the object,
u = displacement from the equilibrium position (take downward as +ve),
m = mass of the object,
k = spring constant; recall the spring force F_s = -k(L + u) (Hooke's Law),
γ = damping coefficient of the system; damping force F_d = -γu'(t),
F(t) = external force applied to the object at time t,
F_g = gravitational force, F_g = mg.

By Newton's Second Law, we have

F = ma = mu''
F(t) + F_g + F_d + F_s = mu''
mu'' + γu' + ku + kL - mg = F(t).

But mg - kL = 0 (by considering the equilibrium state), hence the spring-mass system is governed by the equation
u'' + (γ/m)u' + (k/m)u = F(t)/m,
which is a linear non-homogeneous 2nd-order ODE with constant coefficients.

The oscillation is free if F(t) = 0; it is forced if F(t) ≠ 0.
The motion is undamped if γ = 0 and is called damped otherwise.
2.8.1 Free Undamped Oscillation

In this case, γ = 0 and F(t) = 0. The governing equation becomes
u'' + (k/m)u = 0,
which is a homogeneous 2nd-order ODE with constant coefficients. The characteristic equation is r^2 + k/m = 0. Solving gives r = ±i√(k/m). Hence the general solution is
u = c_1 cos ω_0 t + c_2 sin ω_0 t,
where ω_0 = √(k/m) is called the natural frequency of the oscillation. Rewrite the solution as

u = √(c_1^2 + c_2^2) ( (c_1/√(c_1^2 + c_2^2)) cos ω_0 t + (c_2/√(c_1^2 + c_2^2)) sin ω_0 t )
  = A_0 (cos δ cos ω_0 t + sin δ sin ω_0 t)
  = A_0 cos(ω_0 t - δ),

where A_0 = √(c_1^2 + c_2^2), ω_0 = √(k/m), δ = arctan(c_2/c_1) are the amplitude, natural frequency and phase shift respectively, all determined by the initial conditions. Note that T = 2π/ω_0 = 2π√(m/k) is the period.
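One practical caveat: δ = arctan(c_2/c_1) only covers the case c_1 > 0; in code, the quadrant-correct atan2 is the safer choice. A small sketch (the function name is an illustrative choice):

```python
import math

def amplitude_phase(c1, c2):
    """Convert c1*cos(w0*t) + c2*sin(w0*t) into A0*cos(w0*t - delta)."""
    A0 = math.hypot(c1, c2)        # A0 = sqrt(c1^2 + c2^2)
    delta = math.atan2(c2, c1)     # quadrant-correct version of arctan(c2/c1)
    return A0, delta

# Spot-check the identity for sample coefficients at a sample time.
c1, c2, w0, t = 3.0, -4.0, 2.0, 0.7
A0, delta = amplitude_phase(c1, c2)
print(A0, c1*math.cos(w0*t) + c2*math.sin(w0*t), A0*math.cos(w0*t - delta))
```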
2.8.2 Free Damped Oscillation

In this case, F(t) = 0, but γ ≠ 0. The governing equation becomes
u'' + (γ/m)u' + (k/m)u = 0.
This is a homogeneous 2nd-order ODE with constant coefficients. Solving the characteristic equation gives
r = -γ/(2m) ± √( (γ/(2m))^2 - k/m ).
Hence we have three cases:

(Case 1) (Under Damping) (γ/2m)^2 < k/m. Then we have two complex roots and
u(t) = exp(-γt/(2m)) (c_1 cos μt + c_2 sin μt),
where μ = √( (k/m) - (γ/2m)^2 ). It is an oscillating solution.

(Case 2) (Critical Damping) (γ/2m)^2 = k/m. Then we have a double root and
u(t) = c_1 exp(-γt/(2m)) + c_2 t exp(-γt/(2m)).

(Case 3) (Over Damping) (γ/2m)^2 > k/m. Then we have two distinct real roots and
u(t) = c_1 e^{r_1 t} + c_2 e^{r_2 t},
where r_1 and r_2 are the two distinct real roots of the characteristic equation.

In all three cases, we have lim_{t→∞} u(t) = 0.

The damping effect depends on the ratio γ^2/(4mk): oscillation occurs exactly when it is less than 1. Therefore, for large m and k, oscillation is more likely to occur.
2.8.3 Forced Undamped Oscillation

In this case, γ = 0 but F(t) ≠ 0. The governing equation is a non-homogeneous 2nd-order ODE with constant coefficients,
u'' + (k/m)u = F(t)/m.
If the force F(t) is a sinusoidal function with F(t) = F_0 cos ωt, then the governing equation becomes
u'' + ω_0^2 u = (F_0/m) cos ωt,
where ω_0 = √(k/m) is the natural frequency. The solution u_h of the corresponding homogeneous equation is the same as in the case of free undamped oscillation,
u_h = c_1 cos ω_0 t + c_2 sin ω_0 t.
A particular solution u_p of the non-homogeneous equation can be obtained by the method of undetermined coefficients. Depending on ω, we have the following two cases.
(Case 1) $\omega \neq \omega_0$. Then $\cos\omega t$ is not a solution of the homogeneous equation. Let $u_p = B\cos\omega t + D\sin\omega t$. Then solving for $B$, $D$ gives
\[
D = 0, \qquad B = \frac{F_0}{m(\omega_0^2 - \omega^2)}.
\]
Hence the general solution of the ODE is
\[
u(t) = u_h + u_p = c_1\cos\omega_0 t + c_2\sin\omega_0 t + \frac{F_0}{m(\omega_0^2 - \omega^2)}\cos\omega t
= A_0\cos(\omega_0 t - \phi) + \frac{F_0}{m(\omega_0^2 - \omega^2)}\cos\omega t,
\]
with $A_0$ and $\phi$ as in the case of free undamped oscillation.
(Case 2) $\omega = \omega_0$. Then resonance occurs: $\cos\omega t$ is a solution of the homogeneous equation. Let $u_p = Bt\cos\omega_0 t + Dt\sin\omega_0 t$. Solving for $B$, $D$ gives
\[
B = 0, \qquad D = \frac{F_0}{2m\omega_0}.
\]
Hence the general solution is
\[
u(t) = u_h + u_p = c_1\cos\omega_0 t + c_2\sin\omega_0 t + \frac{F_0}{2m\omega_0}t\sin\omega_0 t.
\]
For the initial conditions $u(0) = u'(0) = 0$, we get $c_1 = c_2 = 0$, so
\[
u(t) = \frac{F_0}{2m\omega_0}t\sin\omega_0 t.
\]
The amplitude of $u$ increases with time $t$.
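A quick numerical sanity check (a sketch; the parameter values $F_0 = 1$, $m = 1$, $k = 4$ are arbitrary illustrations) confirms both that the resonant solution satisfies the ODE and that its envelope grows with $t$:

```python
import math

F0, m, k = 1.0, 1.0, 4.0
w0 = math.sqrt(k / m)                      # natural frequency

def u(t):
    """Resonant solution u(t) = (F0/(2 m w0)) t sin(w0 t)."""
    return F0 / (2 * m * w0) * t * math.sin(w0 * t)

def residual(t, h=1e-4):
    """u'' + w0^2 u - (F0/m) cos(w0 t), with u'' by central differences;
    should be ~0 at every t."""
    upp = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
    return upp + w0**2 * u(t) - (F0 / m) * math.cos(w0 * t)
```

The residual is only approximately zero because of the finite-difference step, but it is many orders of magnitude smaller than the solution itself.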
2.8.4 Forced Damped Oscillation
In this case, $F(t) \neq 0$ and $\gamma \neq 0$. Let $F(t) = F_0\cos\omega t$; the governing equation becomes
\[
u'' + \frac{\gamma}{m}u' + \frac{k}{m}u = \frac{F_0}{m}\cos\omega t,
\]
so that
\[
u(t) = u_h + u_p.
\]
Note that the homogeneous solution is the solution of the free damped case, and recall that in the free damped case $u \to 0$ as $t \to \infty$. Hence, back in the forced damped case, we have
\[
\lim_{t\to\infty} u_h(t) = 0,
\]
and $u_h$ is called the transient solution. On the other hand, we have $u \to u_p$ as $t \to \infty$, and hence the particular solution $u_p$ is often called the steady-state solution.
Chapter 3
Series Solutions for Second-Order Linear Equations
3.1 Series Solutions for Linear ODEs
Example 3.1.1. Find a power series solution to the differential equation $y'(t) = y(t) + 2$ with initial condition $y(0) = 6$.

Proof. Let $y(t) = \sum_{k=0}^{\infty} a_k t^k$; then
\[
y'(t) = \sum_{k=1}^{\infty} k a_k t^{k-1}.
\]
Substituting these into the ODE gives
\[
\sum_{k=1}^{\infty} k a_k t^{k-1} = \sum_{k=0}^{\infty} a_k t^k + 2 = \sum_{k=1}^{\infty} a_{k-1} t^{k-1} + 2,
\]
\[
\sum_{k=1}^{\infty} [k a_k - a_{k-1}] t^{k-1} = 2.
\]
Comparing coefficients of the constant term gives $a_1 - a_0 = 2$. Comparing coefficients of $t^{k-1}$ gives $k a_k - a_{k-1} = 0$ for $k = 2, 3, 4, \ldots$, hence $a_k = a_{k-1}/k$, so $a_k = a_1/k!$ for $k \geq 1$. Therefore
\[
y(t) = \sum_{k=0}^{\infty} a_k t^k = a_0 + a_1 t + a_2 t^2 + \cdots
= a_0 + a_1\left(t + \frac{t^2}{2!} + \frac{t^3}{3!} + \cdots\right)
= a_0 + a_1(e^t - 1) = a_0 + (a_0 + 2)(e^t - 1).
\]
Hence $y(t) = 8e^t - 2$ by the initial condition.
The above example illustrates the typical way to obtain the series solution of a linear ODE. In general, we have the following procedure:

1. Let $y(t) = \sum_{k=0}^{\infty} a_k t^k$.

2. Take the corresponding derivatives of the power series and substitute them into the equation.

3. Shift the indices in the summations so that they start from the same index.

4. Compare the coefficients of the terms $t^k$ to obtain a relation between the $a_k$. Usually the coefficients are related by recurrence relations.

5. If an initial value problem is to be solved, the values of the first few coefficients can be evaluated from the initial conditions.
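As a concrete check of this procedure, the recurrence from Example 3.1.1 can be iterated numerically; summing twenty terms reproduces the closed form $8e^t - 2$. A sketch in plain Python (the helper names are illustrative):

```python
import math

def series_coeffs(a0, n=20):
    """Coefficients for y' = y + 2: a_1 = a_0 + 2 and a_k = a_{k-1}/k, k >= 2."""
    a = [a0, a0 + 2]
    for k in range(2, n):
        a.append(a[-1] / k)
    return a

def eval_series(a, t):
    """Evaluate the truncated power series sum a_k t^k."""
    return sum(c * t**k for k, c in enumerate(a))
```

With $a_0 = y(0) = 6$ the truncated series already matches the exact solution to high accuracy for moderate $t$.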
3.2 Ordinary and Singular Points of an ODE
Consider the homogeneous linear 2nd-order ODE of the general form
\[
\frac{d^2 y}{dx^2} + p(x)\frac{dy}{dx} + q(x)y = 0. \qquad (1)
\]
We do not discuss the corresponding non-homogeneous ODE since, once the general solution of the homogeneous ODE has been found, we can obtain a particular solution by variation of parameters.

Definition 3.2.1 (Analytic Function). A function $f(x)$ is said to be real analytic, or analytic at the point $x = a$, if $f(x)$ can be represented by a Taylor series centered about the point $x = a$ with radius of convergence $R > 0$.

Definition 3.2.2 (Ordinary and Singular Points). For the equation in (1), the point $x = x_0$ is called an ordinary point for the ODE if the functions $p(x)$ and $q(x)$ are analytic at $x_0$. Otherwise, it is called a singular point for the ODE.
Definition 3.2.3 (Regular Singular Point). Let $x = x_0$ be a singular point for the ODE in (1). This point is called a regular singular point for the ODE if the functions $(x - x_0)p(x)$ and $(x - x_0)^2 q(x)$ are analytic at $x_0$. Equivalently, if both the limits $\lim_{x\to x_0}(x - x_0)p(x)$ and $\lim_{x\to x_0}(x - x_0)^2 q(x)$ have finite values, then the point $x_0$ is a regular singular point.
Example 3.2.4. The Legendre equation $(1 - x^2)y'' - 2xy' + \alpha(\alpha + 1)y = 0$ has regular singular points at $x = 1, -1$.

Proof. Rewrite the equation as $y'' + p(x)y' + q(x)y = 0$, where $p(x) = -2x/(1 - x^2)$ and $q(x) = \alpha(\alpha + 1)/(1 - x^2)$. Then $p(x), q(x)$ are not continuous at $x = \pm 1$, hence $x = \pm 1$ are singular points. For $x = 1$, $(x - 1)p(x) = 2x/(x + 1)$ and $(x - 1)^2 q(x) = \alpha(\alpha + 1)(1 - x)/(1 + x)$ are continuous at $x = 1$, hence $x = 1$ is a regular singular point. Likewise for $x = -1$.
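The finiteness of the two weighted limits can also be checked numerically near $x = 1$ (a sketch; $\alpha = 2$ is an arbitrary illustrative choice of the Legendre parameter):

```python
def p(x):
    """p(x) = -2x/(1 - x^2) for the Legendre equation."""
    return -2.0 * x / (1.0 - x**2)

def q(x, a=2.0):
    """q(x) = a(a+1)/(1 - x^2); a = alpha = 2 is illustrative."""
    return a * (a + 1.0) / (1.0 - x**2)

def weighted(x):
    """((x-1) p(x), (x-1)^2 q(x)) -- both stay finite as x -> 1."""
    return ((x - 1.0) * p(x), (x - 1.0)**2 * q(x))
```

Evaluating close to the singular point shows $(x-1)p(x) \to 1$ and $(x-1)^2 q(x) \to 0$, confirming that $x = 1$ is regular.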
3.3 Series Solution about an Ordinary Point
Theorem 3.3.1 (Existence of Power Series Solution). Let $I$ be an interval containing an ordinary point $x_0$. If $p(x), q(x)$ in (1) are analytic at $x_0$ and their Taylor series both converge in $I$, then a series solution of $y(x)$ about $x_0$ can be obtained and converges at least in the interval $I$.
Recall that the general solution of a second-order ODE has the form $y(x) = c_1 y_1(x) + c_2 y_2(x)$, with two arbitrary constants $c_1$ and $c_2$. The series solution of $y(x)$ is a linear combination of two power series solutions $y_1(x)$ and $y_2(x)$.
3.4 Series Solution about a Regular Singular Point
3.4.1 Euler Equations
The Euler equation is a linear ODE with the general form
\[
x^2 y'' + \alpha x y' + \beta y = 0,
\]
where $\alpha$ and $\beta$ are real constants. It is routine to check that the Euler equation has its only regular singular point at $x = 0$.

In any interval not including $x = 0$, the Euler equation has a general solution of the form $y = c_1 y_1 + c_2 y_2$, where $y_1$ and $y_2$ are linearly independent. For convenience we first consider the interval $x > 0$, extending our results later to the interval $x < 0$.
First note that $(x^r)' = r x^{r-1}$ and $(x^r)'' = r(r - 1)x^{r-2}$. Hence if we assume that the Euler equation has a solution of the form $y = x^r$, then
\[
x^2 (x^r)'' + \alpha x (x^r)' + \beta x^r = 0,
\]
\[
x^r\left(r(r - 1) + \alpha r + \beta\right) = 0.
\]
Hence if $r$ is a root of $F(r) = r(r - 1) + \alpha r + \beta = 0$, then $y = x^r$ is a solution of the Euler equation. Denote the roots of $F(r) = 0$ by $r_1, r_2$. Then we have the following cases:

Case 1, $r_1, r_2$ real and distinct: Then $y_1(x) = x^{r_1}$, $y_2(x) = x^{r_2}$ are solutions of the Euler equation. Hence the general solution of the Euler equation is $y = c_1 x^{r_1} + c_2 x^{r_2}$, $x > 0$.
Case 2, $r_1 = r_2 = r$ real: Then $y_1(x) = x^r$ is a solution. The second solution can be found by the method of reduction of order. Let $y_2 = x^r v(x)$; then $y_2' = r x^{r-1} v + x^r v'$ and $y_2'' = r(r - 1)x^{r-2} v + 2r x^{r-1} v' + x^r v''$. Substituting these into the equation gives
\[
x^2\left(r(r - 1)x^{r-2} v + 2r x^{r-1} v' + x^r v''\right) + \alpha x\left(r x^{r-1} v + x^r v'\right) + \beta x^r v = 0.
\]
Simplifying gives $x^2 v'' + x v' = 0$, since $r(r - 1) + \alpha r + \beta = 0$ and $2r + \alpha = 1$. Let $u = v'$; then separating variables gives $du/u = -dx/x$. Integrating w.r.t. $x$ and setting the integration constant to zero gives $u = x^{-1}$, hence $v = \ln x$ and $y_2 = x^r \ln x$, $x > 0$.
Case 3, $r_1 = a + ib$, $r_2 = a - ib$ complex and distinct: Then
\[
y_1(x) = x^{r_1} = e^{r_1 \ln x} = e^{(a + ib)\ln x} = x^a\left(\cos(b\ln x) + i\sin(b\ln x)\right),
\]
and similarly $y_2(x) = x^a(\cos(b\ln x) - i\sin(b\ln x))$. To obtain a general solution without $i$, we form the real solutions $\tfrac{1}{2}(y_1 + y_2)$ and $\tfrac{1}{2i}(y_1 - y_2)$, so that
\[
y(x) = c_1 x^a \cos(b\ln x) + c_2 x^a \sin(b\ln x).
\]
The solution of the Euler equation for $x < 0$ is similar to the case $x > 0$. We can make the substitution $x = -u$ with $u > 0$, so that $y = y(u)$, $dy/dx = -dy/du$ and $d^2 y/dx^2 = d^2 y/du^2$. Therefore the Euler equation becomes $u^2 y''(u) + \alpha u y'(u) + \beta y(u) = 0$, and we have the following theorem for the general solution of the Euler equation for $x \neq 0$.
Theorem 3.4.1. The general solution of the Euler equation $x^2 y'' + \alpha x y' + \beta y = 0$ in any interval not containing $x = 0$ is determined by the roots $r_1, r_2$ of the equation
\[
F(r) = r(r - 1) + \alpha r + \beta = 0.
\]
If the roots are real and distinct, then
\[
y = c_1 |x|^{r_1} + c_2 |x|^{r_2}.
\]
If the roots are real and equal, then
\[
y = (c_1 + c_2 \ln|x|)\,|x|^{r_1}.
\]
If the roots are complex, $r_1 = a + ib$, $r_2 = a - ib$, then
\[
y = c_1 |x|^a \cos(b\ln|x|) + c_2 |x|^a \sin(b\ln|x|).
\]
3.4.2 The Method of Frobenius
We are ready to solve the general 2nd-order linear equation in the neighborhood of a regular singular point $x = x_0$. Without loss of generality, we may assume $x_0 = 0$.

Theorem 3.4.2 (Method of Frobenius). Suppose the equation $x^2 y'' + x p(x) y' + q(x) y = 0$ has a regular singular point at $x = 0$. If the functions $p(x), q(x)$ are analytic at $x = 0$, then at least one solution of the equation can be represented by a series of the form
\[
y(x) = |x|^r \sum_{k=0}^{\infty} a_k x^k,
\]
where $r$ can be a real or complex number (chosen so that $a_0 \neq 0$).
Remark. The Euler equation has a solution of the form $y = |x|^r$, which is a special case of the above theorem.

Remark. We will discuss the solution for $x > 0$, i.e. we seek a solution of the form $y(x) = x^r \sum_{k=0}^{\infty} a_k x^k$; the corresponding solution for $x < 0$ can be found by the substitution $x = -u$, $u > 0$.
Consider the equation $x^2 y'' + x p(x) y' + q(x) y = 0$, $x > 0$. Then
\[
y(x) = x^r \sum_{k=0}^{\infty} a_k x^k, \quad
y' = \sum_{k=0}^{\infty} (r + k) a_k x^{r+k-1}, \quad
y'' = \sum_{k=0}^{\infty} (r + k)(r + k - 1) a_k x^{r+k-2}.
\]
Also, since $p(x), q(x)$ are analytic at $x = 0$, we can write
\[
p(x) = \sum_{k=0}^{\infty} p_k x^k, \qquad q(x) = \sum_{k=0}^{\infty} q_k x^k.
\]
Substituting all these into the equation gives
\[
\sum_{k=0}^{\infty} (r + k)(r + k - 1) a_k x^{r+k}
+ \sum_{k=0}^{\infty} p_k x^k \sum_{k=0}^{\infty} (r + k) a_k x^{r+k}
+ \sum_{k=0}^{\infty} q_k x^k \sum_{k=0}^{\infty} a_k x^{r+k} = 0.
\]
In the above equation, the smallest power of $x$ is $x^r$. Equating the coefficients of $x^r$ (i.e. the terms for $k = 0$) gives $r(r - 1)a_0 + r p_0 a_0 + q_0 a_0 = 0$. Since $a_0$ is supposed to be nonzero, we have
\[
r(r - 1) + r p_0 + q_0 = 0.
\]
The above equation is called the indicial equation of the ODE. The roots of the indicial equation determine the nature of the solutions of the ODE. We have the following cases:
Case 1, Distinct Roots (real or complex), $r = r_1, r_2$: Then the two solutions are given by
\[
y_1(x) = x^{r_1} \sum_{k=0}^{\infty} a_k x^k, \qquad
y_2(x) = x^{r_2} \sum_{k=0}^{\infty} b_k x^k.
\]
Case 2, Double Root $r_1$: Then the two solutions are given by
\[
y_1(x) = x^{r_1} \sum_{k=0}^{\infty} a_k x^k, \qquad
y_2(x) = y_1(x)\ln x + x^{r_1} \sum_{k=1}^{\infty} b_k x^k.
\]
Case 3, Real roots differing by an integer, $r = r_1, r_2$ with $r_1 - r_2 \in \mathbb{Z}$: Then the two solutions are given by
\[
y_1 = x^{r_1} \sum_{k=0}^{\infty} a_k x^k, \qquad
y_2 = m\, y_1 \ln x + x^{r_2} \sum_{k=0}^{\infty} c_k x^k,
\]
where $m$ may or may not be zero, so that $y_2$ may or may not involve the term with $\ln x$.
Example 3.4.3 (Distinct Roots, real or complex). Solve $2x^2 y'' - x y' + (1 + x)y = 0$ for $x > 0$.

Proof. It is routine to check that $x = 0$ is a regular singular point. Let $y = x^r \sum_{k=0}^{\infty} c_k x^k$; then $y' = \sum_{k=0}^{\infty} (k + r) c_k x^{k+r-1}$ and $y'' = \sum_{k=0}^{\infty} (k + r)(k + r - 1) c_k x^{k+r-2}$. Substituting these into the equation gives
\[
\sum_{k=0}^{\infty} \left\{\left[2(k + r)(k + r - 1)c_k - (k + r)c_k + c_k\right] x^{k+r} + c_k x^{k+r+1}\right\} = 0.
\]
Comparing coefficients of $x^r$ yields the indicial equation $2r(r - 1) - r + 1 = 0$. Solving gives $r = 1/2, 1$. Equating coefficients of $x^{r+k}$ for $k \geq 1$ implies
\[
\left[2(k + r)(k + r - 1) - (k + r) + 1\right]c_k + c_{k-1} = 0.
\]
For $r = 1/2$, we have $c_k = -c_{k-1}/(k(2k - 1))$. For $r = 1$, we have $c_k = -c_{k-1}/(k(2k + 1))$. Therefore
\[
y = A x^{1/2}\left(1 - x + \frac{1}{6}x^2 - \cdots\right) + B x\left(1 - \frac{1}{3}x + \frac{1}{30}x^2 - \cdots\right)
\]
for arbitrary constants $A$ and $B$.
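The recurrence in this example is easy to iterate exactly with rational arithmetic, which confirms the coefficients above. A sketch using `fractions.Fraction` to avoid roundoff:

```python
from fractions import Fraction

def frobenius_coeffs(r, n):
    """c_k for 2x^2 y'' - x y' + (1+x) y = 0 with c_0 = 1, via the recurrence
    [2(k+r)(k+r-1) - (k+r) + 1] c_k = -c_{k-1}."""
    c = [Fraction(1)]
    for k in range(1, n):
        m = k + r
        c.append(-c[-1] / (2 * m * (m - 1) - m + 1))
    return c
```

Running it with $r = 1/2$ gives $1, -1, \tfrac{1}{6}, \ldots$ and with $r = 1$ gives $1, -\tfrac{1}{3}, \tfrac{1}{30}, \ldots$, exactly the series written above.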
Chapter 4
Laplace Transform
4.1 Introduction
For an ODE like $ay'' + by' + cy = f(t)$, we may solve the equation by an analytical method such as the method of undetermined coefficients. Most methods require that $f(t)$ be a continuous function.

However, if $f(t)$ is a piecewise continuous function like
\[
f(t) = \begin{cases} t, & 0 \leq t < a \\ \cos t, & a \leq t < b \\ \ldots, & t \geq b \end{cases},
\]
the method of undetermined coefficients does not work. Instead, the method of Laplace transform is a suitable technique for these kinds of problems.

Definition 4.1.1 (Laplace Transform). For a function $f(t)$, define
\[
\mathcal{L}\{f(t)\} = \int_0^{\infty} e^{-st} f(t)\,dt = F(s).
\]
Theorem 4.1.2. The Laplace transform operator is a linear operator, i.e.
\[
\mathcal{L}\{af(t) + bg(t)\} = a\mathcal{L}\{f(t)\} + b\mathcal{L}\{g(t)\}
\]
for constants $a, b$.

Theorem 4.1.3 (Conditions for Existence). If

1. $f(t)$ is a piecewise continuous function on every interval $0 \leq t \leq A$ for $A > 0$, and

2. $|f(t)| \leq k e^{at}$ when $t \geq T$, for some constants $a, k, T$ with $k, T > 0$,

then $\mathcal{L}\{f(t)\} = F(s)$ exists for $s > a$.
4.2 Laplace Transform of Elementary Functions
The following examples will come in handy later.
Example 4.2.1. $\mathcal{L}\{1\} = \dfrac{1}{s}$ for $s > 0$.

Example 4.2.2. $\mathcal{L}\{t\} = \dfrac{1}{s^2}$ for $s > 0$.

Example 4.2.3. $\mathcal{L}\{e^{at}\} = \dfrac{1}{s - a}$ for $s > a$.

Example 4.2.4. $\mathcal{L}\{\sin at\} = \dfrac{a}{s^2 + a^2}$ for $s > 0$.

Example 4.2.5. $\mathcal{L}\{\cos at\} = \dfrac{s}{s^2 + a^2}$ for $s > 0$.
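These table entries can be verified by direct numerical integration of the defining integral. A sketch (the truncation point $T = 60$ is an arbitrary choice that makes the neglected tail negligible for $s$ well inside the region of convergence):

```python
import math

def laplace_numeric(f, s, T=60.0, n=50000):
    """Approximate L{f}(s) = integral_0^T e^{-st} f(t) dt with the
    trapezoid rule; the tail beyond T is dropped."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for i in range(1, n):
        t = i * h
        total += math.exp(-s * t) * f(t)
    return total * h
```

At $s = 2$, for example, the rule reproduces $\mathcal{L}\{1\} = 1/2$, $\mathcal{L}\{t\} = 1/4$, $\mathcal{L}\{\sin t\} = 1/5$ and $\mathcal{L}\{e^t\} = 1$ to several digits.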
4.3 Laplace Transform of Derivatives
Theorem 4.3.1. $\mathcal{L}\{f^{(n)}(t)\} = s^n \mathcal{L}\{f(t)\} - \sum_{k=0}^{n-1} s^k f^{(n-1-k)}(0)$.

Corollary 4.3.2. $\mathcal{L}\{f'(t)\} = s\mathcal{L}\{f(t)\} - f(0) = sF(s) - f(0)$.

Corollary 4.3.3. $\mathcal{L}\{f''(t)\} = s^2\mathcal{L}\{f(t)\} - sf(0) - f'(0) = s^2 F(s) - sf(0) - f'(0)$.
4.4 Inverse Laplace Transform
Definition 4.4.1 (Inverse Laplace Transform). If $\mathcal{L}\{f(t)\} = F(s)$, then we define the inverse Laplace transform of $F(s)$ to be $f(t) = \mathcal{L}^{-1}\{F(s)\}$. It is easy to see that $\mathcal{L}^{-1}$ is also a linear operator.

Theorem 4.4.2 (Conditions for Existence). The inverse Laplace transform of a function $F(s)$ exists iff

1. $\lim_{s\to\infty} F(s) = 0$, and

2. $\lim_{s\to\infty} sF(s) = L < +\infty$.
Example 4.4.3. Find $\mathcal{L}^{-1}\{(4s + 1)/(s^2 + 9)\}$.

Proof.
\[
\mathcal{L}^{-1}\left\{\frac{4s + 1}{s^2 + 9}\right\}
= 4\,\mathcal{L}^{-1}\left\{\frac{s}{s^2 + 9}\right\} + \frac{1}{3}\,\mathcal{L}^{-1}\left\{\frac{3}{s^2 + 9}\right\}
= 4\cos 3t + \frac{1}{3}\sin 3t.
\]
Example 4.4.4. Find $\mathcal{L}^{-1}\{s/(s^2 + s - 2)\}$.

Proof. Since
\[
\frac{s}{s^2 + s - 2} = \frac{1}{3}\cdot\frac{1}{s - 1} + \frac{2}{3}\cdot\frac{1}{s + 2},
\]
we have
\[
\mathcal{L}^{-1}\left\{\frac{s}{s^2 + s - 2}\right\}
= \frac{1}{3}\,\mathcal{L}^{-1}\left\{\frac{1}{s - 1}\right\} + \frac{2}{3}\,\mathcal{L}^{-1}\left\{\frac{1}{s + 2}\right\}
= \frac{1}{3}e^t + \frac{2}{3}e^{-2t}.
\]
Example 4.4.5. Find $\mathcal{L}^{-1}\{1/((s - 2)(s + 2)(s^2 + 1))\}$.

Proof. The inverse transform equals
\[
\frac{1}{5}\,\mathcal{L}^{-1}\left\{\frac{(s^2 + 1) - (s^2 - 4)}{(s^2 - 4)(s^2 + 1)}\right\}
= \frac{1}{10}\,\mathcal{L}^{-1}\left\{\frac{2}{s^2 - 2^2}\right\} - \frac{1}{5}\,\mathcal{L}^{-1}\left\{\frac{1}{s^2 + 1}\right\}
= \frac{1}{10}\sinh 2t - \frac{1}{5}\sin t.
\]
Theorem 4.4.6. The Laplace transform is one-to-one (so $\mathcal{L}^{-1}$ is well defined), i.e. if $\mathcal{L}\{f(t)\} = F(s)$ and $\mathcal{L}\{g(t)\} = F(s)$, then $f(t) = g(t)$.
4.5 Solving Initial Value Problems by Laplace Transform
The basic idea is to transform a homogeneous or non-homogeneous ODE for $y(t)$ with constant coefficients into an algebraic equation in functions of $s$ (that is, $y(t) \to Y(s)$); we then apply the inverse Laplace transform $\mathcal{L}^{-1}$ to $Y(s)$ to obtain the solution $y(t)$.

Example 4.5.1. Solve the initial value problem $y'' + y = t$, $y(0) = 0$, $y'(0) = 1$.

Proof. Applying the Laplace transform to the equation yields $\mathcal{L}\{y'' + y\} = \mathcal{L}\{t\}$. Hence $s^2 Y(s) - sy(0) - y'(0) + Y(s) = 1/s^2$. Simplifying gives $Y(s) = 1/s^2$. Hence $y(t) = \mathcal{L}^{-1}\{Y(s)\} = \mathcal{L}^{-1}\{1/s^2\} = t$.
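As an independent check of the transform-domain answer, integrating the same IVP with a classical RK4 stepper reproduces $y(t) = t$ (a numerical sketch, not part of the Laplace method itself):

```python
def rk4_ivp(t_end, n=1000):
    """Integrate y'' + y = t as the system (y, v)' = (v, t - y),
    with y(0) = 0, v(0) = 1, using classical RK4; returns y(t_end)."""
    h = t_end / n
    t, y, v = 0.0, 0.0, 1.0
    f = lambda t, y, v: (v, t - y)
    for _ in range(n):
        k1 = f(t, y, v)
        k2 = f(t + h/2, y + h/2 * k1[0], v + h/2 * k1[1])
        k3 = f(t + h/2, y + h/2 * k2[0], v + h/2 * k2[1])
        k4 = f(t + h, y + h * k3[0], v + h * k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        t += h
    return y
```

The numerical trajectory agrees with $y(t) = t$ to within the integrator's tolerance.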
4.6 Unit Step Function
To describe discontinuous or piecewise continuous functions efficiently, it is convenient to introduce the unit step function.

Definition 4.6.1 (Unit Step Function). Define the unit step function as
\[
u_c(t) = \begin{cases} 0, & t < c \\ 1, & t \geq c \end{cases}
\]
for $c > 0$.

Example 4.6.2. If
\[
f(t) = \begin{cases} 1, & a \leq t < b \\ 0, & \text{otherwise} \end{cases},
\]
then $f(t) = u_a(t) - u_b(t)$.

Example 4.6.3. If
\[
f(t) = \begin{cases} \sin t, & 0 \leq t < \pi/4 \\ \sin t + \cos(t - \pi/4), & t \geq \pi/4 \end{cases},
\]
then $f(t) = (u_0 - u_{\pi/4})\sin t + u_{\pi/4}(\sin t + \cos(t - \pi/4)) = u_0 \sin t + u_{\pi/4}\cos(t - \pi/4)$.
Example 4.6.4. Write the following function in terms of the unit step function:
\[
f(t) = \begin{cases} 2, & t < 3 \\ -4, & 3 \leq t < 6 \\ 8, & 6 \leq t < 25 \\ -16, & t \geq 25 \end{cases}
\]
Proof. $f(t) = 2(1 - u_3) - 4(u_3 - u_6) + 8(u_6 - u_{25}) - 16u_{25} = 2 - 6u_3 + 12u_6 - 24u_{25}$.
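The unit-step representation can be tested directly in code (a sketch re-using the decomposition just derived):

```python
def u(c):
    """Unit step u_c as a function of t."""
    return lambda t: 1.0 if t >= c else 0.0

def f(t):
    """f from Example 4.6.4 via unit steps: 2 - 6 u_3 + 12 u_6 - 24 u_25."""
    return 2.0 - 6.0 * u(3)(t) + 12.0 * u(6)(t) - 24.0 * u(25)(t)
```

Evaluating on each interval recovers the piecewise values $2, -4, 8, -16$.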
Theorem 4.6.5.
\[
\mathcal{L}\{u_c(t)\} = \int_0^{\infty} e^{-st} u_c(t)\,dt = \int_c^{\infty} e^{-st}\,dt = \frac{e^{-cs}}{s} \quad (s > 0).
\]
Theorem 4.6.6. If $\mathcal{L}\{f(t)\} = F(s)$, then $\mathcal{L}\{u_c(t)f(t - c)\} = e^{-cs}\mathcal{L}\{f(t)\} = e^{-cs}F(s)$.

Theorem 4.6.7. If $\mathcal{L}\{f(t)\} = F(s)$, then $\mathcal{L}\{e^{ct}f(t)\} = F(s - c)$.
Example 4.6.8. If
\[
f(t) = \begin{cases} 0, & t < \pi \\ \sin 2t, & t \geq \pi \end{cases},
\]
find $\mathcal{L}\{f(t)\}$.

Proof. We have $f(t) = u_{\pi}(t)\sin 2t = u_{\pi}\sin(2t - 2\pi) = u_{\pi}\sin(2(t - \pi))$. Hence
\[
\mathcal{L}\{f(t)\} = \mathcal{L}\{u_{\pi}\sin(2(t - \pi))\} = e^{-\pi s}\mathcal{L}\{\sin 2t\} = \frac{2e^{-\pi s}}{s^2 + 4}.
\]
Example 4.6.9. If $f(t) = u_{\pi}(t)\,t$, find $\mathcal{L}\{f(t)\}$.

Proof. $f(t) = u_{\pi}(t)\left((t - \pi) + \pi\right) = u_{\pi}\,(t - \pi) + \pi u_{\pi}$, hence
\[
\mathcal{L}\{f(t)\} = \mathcal{L}\{u_{\pi}(t - \pi)\} + \pi\,\mathcal{L}\{u_{\pi}\} = \frac{e^{-\pi s}}{s^2} + \frac{\pi e^{-\pi s}}{s}.
\]
Example 4.6.10. Find $\mathcal{L}^{-1}\{(1 - e^{-2s})/(s^2 + 1)\}$.

Proof. Let $F(s) = 1/(s^2 + 1)$; then $(1 - e^{-2s})/(s^2 + 1) = F(s) - e^{-2s}F(s)$. Now $\mathcal{L}^{-1}\{F(s)\} = \sin t$, hence
\[
\mathcal{L}^{-1}\left\{\frac{1 - e^{-2s}}{s^2 + 1}\right\} = \mathcal{L}^{-1}\{F(s)\} - \mathcal{L}^{-1}\{e^{-2s}F(s)\} = \sin t - u_2(t)\sin(t - 2).
\]
Example 4.6.11. Find $\mathcal{L}^{-1}\{1/(s^2 - 4s + 5)\}$.

Proof. $1/(s^2 - 4s + 5) = 1/((s - 2)^2 + 1) = F(s - 2)$, where $F(s) = 1/(s^2 + 1)$. Hence
\[
\mathcal{L}^{-1}\left\{\frac{1}{s^2 - 4s + 5}\right\} = \mathcal{L}^{-1}\{F(s - 2)\} = e^{2t}\,\mathcal{L}^{-1}\{F(s)\} = e^{2t}\sin t.
\]
4.7 Initial Value Problems with Discontinuous Functions
Example 4.7.1. Solve $y'' + 4y = g(t)$, $y(0) = y'(0) = 0$, where
\[
g(t) = \begin{cases} 1, & \pi \leq t < 3\pi \\ 0, & 0 \leq t < \pi \text{ or } t \geq 3\pi \end{cases}
\]
Proof. Applying the Laplace transform to both sides of the equation gives $\mathcal{L}\{y'' + 4y\} = \mathcal{L}\{g(t)\}$, with $g(t) = u_{\pi} - u_{3\pi}$. Hence $s^2 Y(s) - s\cdot 0 - 0 + 4Y(s) = (e^{-\pi s} - e^{-3\pi s})/s$. Solving for $Y(s)$ yields
\[
Y(s) = (e^{-\pi s} - e^{-3\pi s})\,\frac{1}{s(s^2 + 4)}
= \frac{1}{4}\cdot\frac{e^{-\pi s}}{s} - \frac{1}{4}\cdot\frac{se^{-\pi s}}{s^2 + 4}
- \frac{1}{4}\cdot\frac{e^{-3\pi s}}{s} + \frac{1}{4}\cdot\frac{se^{-3\pi s}}{s^2 + 4}.
\]
Therefore
\[
y(t) = \mathcal{L}^{-1}\{Y(s)\} = (u_{\pi} - u_{3\pi})\,\frac{1 - \cos 2t}{4}.
\]
4.8 Impulse Functions
An impulse function is a very large magnitude signal applied over a very short period. An example is the impulse of a voltage applied to an electric circuit.

If $g(t)$ is an impulse function with non-zero value over a short time interval $t_0 - \tau < t < t_0 + \tau$ with $\tau \to 0$, and $g(t) = 0$ otherwise, then the total impulse of $g(t)$ is $I(\tau)$, given by
\[
I(\tau) = \int_{-\infty}^{\infty} g(t)\,dt = \int_{t_0 - \tau}^{t_0 + \tau} g(t)\,dt.
\]
Example 4.8.1. Let
\[
d_{\tau}(t) = \begin{cases} 1/(2\tau), & -\tau < t < \tau \\ 0, & \text{otherwise} \end{cases},
\]
which represents an impulse at $t = 0$. Then the total impulse is
\[
I(\tau) = \int_{-\infty}^{\infty} d_{\tau}(t)\,dt = \int_{-\tau}^{\tau} \frac{1}{2\tau}\,dt = 1.
\]
Hence the total impulse of $d_{\tau}(t)$ is always 1 and is independent of $\tau$.

As $\tau \to 0$, the limiting function is called a Dirac delta function and is denoted by $\delta(t)$. It is an impulse at $t = 0$ and has the following properties:

1. $\delta(t) = \infty$ at $t = 0$.

2. $\delta(t) = 0$ for $t \neq 0$.

3. $\int_{-\infty}^{\infty} \delta(t)\,dt = 1$.
The delta function at $t = t_0$ is represented by $\delta(t - t_0)$.

Theorem 4.8.2. $\mathcal{L}\{\delta(t - t_0)\} = e^{-st_0}$.

Theorem 4.8.3. $\int_{-\infty}^{\infty} \delta(t - t_0)f(t)\,dt = f(t_0)$.

Theorem 4.8.4. $\mathcal{L}\{\delta(t - t_0)f(t)\} = e^{-st_0}f(t_0)$.
4.9 Convolution Integral
To evaluate $\mathcal{L}^{-1}\{Y(s)\}$, sometimes we may see that $Y(s)$ can be decomposed into a product of functions (i.e. $Y(s) = F(s)G(s)$) whose inverse Laplace transforms are known, i.e. $\mathcal{L}^{-1}\{F(s)\} = f(t)$ and $\mathcal{L}^{-1}\{G(s)\} = g(t)$ are known.

To take advantage of this, we have the following theorem:

Theorem 4.9.1. If $Y(s) = F(s)G(s)$ and $\mathcal{L}^{-1}\{F(s)\} = f(t)$, $\mathcal{L}^{-1}\{G(s)\} = g(t)$, then
\[
\mathcal{L}^{-1}\{Y(s)\} = f * g(t),
\]
where
\[
f * g(t) = \int_0^t f(\tau)g(t - \tau)\,d\tau = \int_0^t f(t - \tau)g(\tau)\,d\tau
\]
is the convolution integral.

Theorem 4.9.2 (Basic Properties of Convolution).

1. $f * g = g * f$.

2. $f * (g_1 + g_2) = f * g_1 + f * g_2$.

3. $(f * g) * h = f * (g * h)$.

4. In general $f * 1 \neq f(t)$; indeed $f * 1 = \int_0^t f(\tau)\,d\tau$.

5. $\mathcal{L}\{f * g(t)\} = \mathcal{L}\{f(t)\}\,\mathcal{L}\{g(t)\}$.
Example 4.9.3. Evaluate $\mathcal{L}^{-1}\{1/(s(s^2 + 2s + 2))\}$.

Proof. We have $1/(s(s^2 + 2s + 2)) = (1/s)\cdot(1/(s^2 + 2s + 2)) = F(s)G(s)$. Now $\mathcal{L}^{-1}\{1/s\} = f(t) = 1$ and $\mathcal{L}^{-1}\{1/(s^2 + 2s + 2)\} = g(t) = e^{-t}\sin t$. Hence $\mathcal{L}^{-1}\{1/(s(s^2 + 2s + 2))\} = f * g(t)$. It is not hard to find that $f * g(t) = \frac{1}{2}(1 - e^{-t}\sin t - e^{-t}\cos t)$ and the result follows.
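The closed form can be confirmed by evaluating the convolution integral numerically (a sketch using the trapezoid rule):

```python
import math

def convolve(f, g, t, n=4000):
    """(f*g)(t) = integral_0^t f(tau) g(t - tau) d tau, trapezoid rule."""
    h = t / n
    total = 0.5 * (f(0.0) * g(t) + f(t) * g(0.0))
    for i in range(1, n):
        tau = i * h
        total += f(tau) * g(t - tau)
    return total * h

f = lambda t: 1.0                              # L^{-1}{1/s}
g = lambda t: math.exp(-t) * math.sin(t)       # L^{-1}{1/(s^2 + 2s + 2)}

def closed(t):
    """Closed form from Example 4.9.3."""
    return 0.5 * (1 - math.exp(-t) * math.sin(t) - math.exp(-t) * math.cos(t))
```

The two agree at every sampled time, as expected from Theorem 4.9.1.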
Chapter 5
Systems of First-Order ODEs
5.1 System of Differential Equations
Systems of 1st-order ODEs are important because many linear ODEs of higher order can be transformed into a system of 1st-order linear ODEs.

Example 5.1.1. For $y'' + py' + qy = 0$, where $p, q$ are constants, we may let $x_1 = y$, $x_2 = y'$, so that $x_2 = x_1'$ and the equation becomes
\[
\begin{cases} x_1' = x_2 \\ x_2' = -qx_1 - px_2 \end{cases},
\]
i.e.
\[
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix}' =
\begin{pmatrix} 0 & 1 \\ -q & -p \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \end{pmatrix},
\qquad \mathbf{x}' = A\mathbf{x}.
\]
Example 5.1.2. For $y''' + 3y'' + 2y' - 5y = \sin 2t$, let $x_1 = y$, $x_2 = y' = x_1'$, $x_3 = y'' = x_2'$; then the equation becomes
\[
\begin{cases} x_1' = x_2 \\ x_2' = x_3 \\ x_3' = 5x_1 - 2x_2 - 3x_3 + \sin 2t \end{cases},
\]
i.e.
\[
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}' =
\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 5 & -2 & -3 \end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} +
\begin{pmatrix} 0 \\ 0 \\ \sin 2t \end{pmatrix},
\qquad \mathbf{x}' = A\mathbf{x} + \mathbf{F}(t).
\]
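The reduction to first order always produces a companion matrix: a superdiagonal of ones with the negated coefficients in the last row. A small helper (a sketch; the function name is illustrative) makes the pattern explicit:

```python
def companion(coeffs):
    """Companion matrix for y^(n) + c_{n-1} y^(n-1) + ... + c_0 y = 0,
    with coeffs = [c_0, ..., c_{n-1}], matching the construction above."""
    n = len(coeffs)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n - 1):
        A[i][i + 1] = 1.0          # x_i' = x_{i+1}
    A[n - 1] = [-c for c in coeffs]  # x_n' = -c_0 x_1 - ... - c_{n-1} x_n
    return A
```

For $y''' + 3y'' + 2y' - 5y = 0$ the coefficient list is $[-5, 2, 3]$ and the helper reproduces the matrix of Example 5.1.2.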
5.2 Solution of a General First-Order System
A general system of 1st-order linear ODEs
\[
\begin{cases}
x_1' = p_{11}(t)x_1 + p_{12}(t)x_2 + \cdots + p_{1n}(t)x_n + g_1(t) \\
\quad\vdots \\
x_n' = p_{n1}(t)x_1 + p_{n2}(t)x_2 + \cdots + p_{nn}(t)x_n + g_n(t)
\end{cases}
\]
can be written as the matrix equation $\mathbf{X}' = P(t)\mathbf{X} + \mathbf{G}(t)$, where
\[
\mathbf{X} = \begin{pmatrix} x_1(t) \\ \vdots \\ x_n(t) \end{pmatrix}, \quad
\mathbf{G}(t) = \begin{pmatrix} g_1(t) \\ \vdots \\ g_n(t) \end{pmatrix}, \quad
P(t) = \begin{pmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & \ddots & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{pmatrix}.
\]
If $\mathbf{G}(t) = \mathbf{0}$, then the system is called a homogeneous system.

Theorem 5.2.1. The general solution of the homogeneous differential system $\mathbf{X}' = P(t)\mathbf{X}$, where $\mathbf{X} = (x_1(t), \ldots, x_n(t))^T$, is
\[
\mathbf{X} = c_1\mathbf{X}_1 + \cdots + c_n\mathbf{X}_n,
\]
where $\mathbf{X}_1, \ldots, \mathbf{X}_n$ are linearly independent solutions and the $c_i$ are arbitrary constants.

Theorem 5.2.2. Consider a non-homogeneous system of ODEs $\mathbf{X}' = P(t)\mathbf{X} + \mathbf{G}(t)$. If $\mathbf{X}_h$ is the general solution of the associated homogeneous system $\mathbf{X}' = P(t)\mathbf{X}$, and $\mathbf{X}_p$ is a particular solution of the non-homogeneous system, then the general solution of the non-homogeneous system is given by
\[
\mathbf{X} = \mathbf{X}_h + \mathbf{X}_p = c_1\mathbf{X}_1 + \cdots + c_n\mathbf{X}_n + \mathbf{X}_p.
\]
5.3 Homogeneous Systems of 1st-Order Linear ODEs with Constant Coefficients
We want to find the solution of a homogeneous system $\mathbf{X}' = A\mathbf{X}$, where $A$ is an $n \times n$ real matrix and $\mathbf{X} = (x_1(t), \ldots, x_n(t))^T$.

Suppose $\mathbf{X} = \mathbf{v}e^{\lambda t}$, with $\mathbf{v} = (v_1, \ldots, v_n)^T$ and the $v_i$ constant scalars. Then $\mathbf{X}' = \lambda\mathbf{v}e^{\lambda t}$, and the system becomes $\lambda\mathbf{v}e^{\lambda t} = A\mathbf{v}e^{\lambda t}$, i.e. $A\mathbf{v} = \lambda\mathbf{v}$. Therefore $\lambda$ and $\mathbf{v}$ are an eigenvalue and eigenvector of the matrix $A$ by definition. Then we have the following three cases:

Case 1: Eigenvalues all real and distinct. In this case, the $n$ eigenvectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$ corresponding to the $n$ distinct eigenvalues $\lambda_1, \ldots, \lambda_n$ are linearly independent, so the general solution is
\[
\mathbf{X} = c_1 e^{\lambda_1 t}\mathbf{v}_1 + \cdots + c_n e^{\lambda_n t}\mathbf{v}_n.
\]

Case 2: Complex eigenvalues. If $\lambda_1 = a + ib$, $\lambda_2 = a - ib$, then the corresponding eigenvectors are also a conjugate pair, $\mathbf{v}_1 = \mathbf{u} + i\mathbf{v}$, $\mathbf{v}_2 = \mathbf{u} - i\mathbf{v}$, and the corresponding real solutions are
\[
\mathbf{X}_1 = e^{at}(\mathbf{u}\cos bt - \mathbf{v}\sin bt), \qquad
\mathbf{X}_2 = e^{at}(\mathbf{u}\sin bt + \mathbf{v}\cos bt).
\]

Case 3: Repeated eigenvalues. If an eigenvalue $\lambda$ is repeated (with algebraic multiplicity $m$), then compute a basis for the eigenspace $\ker(A - \lambda I)$ to get up to $m$ linearly independent eigenvectors.

Remark. If $A$ is diagonalizable, then the method described in this section produces $n$ linearly independent solutions. If $A$ is non-diagonalizable, the general solution can still be found using the Jordan canonical form of $A$, but this is beyond the scope of these notes.
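For the $2 \times 2$ case with distinct real eigenvalues, Case 1 can be carried out end-to-end in a few lines. A sketch in plain Python (the eigenvector formulas assume the off-diagonal entries are not both zero):

```python
import math

def solve_2x2(A, x0, t):
    """x(t) for x' = A x with x(0) = x0, for a 2x2 real matrix A with
    distinct real eigenvalues: x = c1 e^{l1 t} v1 + c2 e^{l2 t} v2."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)      # > 0 for distinct real eigenvalues
    l1, l2 = (tr + disc) / 2, (tr - disc) / 2
    v1 = (b, l1 - a) if b != 0 else (l1 - d, c)   # eigenvector for l1
    v2 = (b, l2 - a) if b != 0 else (l2 - d, c)   # eigenvector for l2
    # Solve c1*v1 + c2*v2 = x0 by Cramer's rule.
    D = v1[0] * v2[1] - v1[1] * v2[0]
    c1 = (x0[0] * v2[1] - x0[1] * v2[0]) / D
    c2 = (v1[0] * x0[1] - v1[1] * x0[0]) / D
    e1, e2 = math.exp(l1 * t), math.exp(l2 * t)
    return (c1 * e1 * v1[0] + c2 * e2 * v2[0],
            c1 * e1 * v1[1] + c2 * e2 * v2[1])
```

For $A = \begin{pmatrix} 0 & 1 \\ 2 & 1 \end{pmatrix}$ (eigenvalues $2$ and $-1$) and $x(0) = (1, 0)$, this returns $x_1(t) = \tfrac{1}{3}e^{2t} + \tfrac{2}{3}e^{-t}$.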
5.4 Non-Homogeneous System
Consider the non-homogeneous system
\[
\mathbf{X}' = A\mathbf{X} + \mathbf{G}(t);
\]
the general solution $\mathbf{X} = \mathbf{X}_h + \mathbf{X}_p$ can be obtained by two methods.
5.4.1 Diagonalization
If $A$ is diagonalizable, i.e. $A = PDP^{-1}$ for some diagonal matrix $D$ and invertible matrix $P$, then $\mathbf{X}' = PDP^{-1}\mathbf{X} + \mathbf{G}(t)$, or $(P^{-1}\mathbf{X})' = D(P^{-1}\mathbf{X}) + P^{-1}\mathbf{G}(t)$. Let $\tilde{\mathbf{X}} = P^{-1}\mathbf{X}$ and $\mathbf{F} = P^{-1}\mathbf{G}(t)$; then the system becomes
\[
\tilde{\mathbf{X}}' = D\tilde{\mathbf{X}} + \mathbf{F}.
\]
This new system is a system of $n$ decoupled linear 1st-order ODEs, $\tilde{x}_i'(t) = \lambda_i \tilde{x}_i(t) + F_i(t)$, $i = 1, \ldots, n$. Therefore
\[
\tilde{x}_i(t) = e^{\lambda_i t}\int e^{-\lambda_i t} F_i(t)\,dt + c_i e^{\lambda_i t}
\]
for $i = 1, \ldots, n$, where the $c_i$ are arbitrary constants. The general solution of the system $\mathbf{X}' = A\mathbf{X} + \mathbf{G}(t)$ is then given by $\mathbf{X} = P\tilde{\mathbf{X}}$.
5.4.2 Variation of Parameters
Definition 5.4.1 (Fundamental Solution Matrix). Consider the non-homogeneous system
\[
\mathbf{X}' = A\mathbf{X} + \mathbf{G}(t).
\]
If $\mathbf{X}_h = c_1\mathbf{X}_1 + \cdots + c_n\mathbf{X}_n$ is the general solution of the associated homogeneous system, then the fundamental solution matrix $\Psi$ is defined as the matrix whose columns are the $n$ linearly independent homogeneous solutions, i.e.
\[
\Psi = (\mathbf{X}_1\ \mathbf{X}_2\ \cdots\ \mathbf{X}_n).
\]
Then we have $\Psi' = A\Psi$. Also, we have $\mathbf{X}_h = \Psi\mathbf{C}$, where $\mathbf{C} = (c_1, \ldots, c_n)^T$.

Let $\mathbf{X}_p = \Psi\mathbf{V}(t)$, where $\mathbf{V}(t) = (v_1(t), \ldots, v_n(t))^T$; then $\mathbf{X}_p' = \Psi'\mathbf{V}(t) + \Psi\mathbf{V}'(t)$. Substituting these into the system gives $\Psi'\mathbf{V}(t) + \Psi\mathbf{V}'(t) = A\Psi\mathbf{V}(t) + \mathbf{G}(t)$, which implies $\Psi\mathbf{V}'(t) = \mathbf{G}(t)$ since $\Psi' = A\Psi$. $\Psi$ is invertible (since its columns are linearly independent), hence $\mathbf{V}'(t) = \Psi^{-1}\mathbf{G}(t)$ and $\mathbf{V}(t) = \int \Psi^{-1}\mathbf{G}(t)\,dt$. Hence
\[
\mathbf{X}_p = \Psi\int \Psi^{-1}\mathbf{G}(t)\,dt,
\]
and the general solution is given by
\[
\mathbf{X} = \mathbf{X}_h + \mathbf{X}_p = \Psi\mathbf{C} + \Psi\int \Psi^{-1}\mathbf{G}(t)\,dt.
\]
Bibliography

[1] B. Poonen, Differential Equations Lecture Notes at MIT, 2014.

[2] J. R. Chansnov, Differential Equations Lecture Notes at HKUST, 2014.

[3] W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems, 8th Ed., Wiley, 2005.