Dr. A. N. Njah,
Department of Physics,
University of Agriculture,
Abeokuta.
PHS 471: Linear Algebra: Transformation in linear vector spaces and matrix theory.
Functional analysis: Hilbert space, complete sets of orthogonal functions; linear operators.
Special functions: Gamma, hypergeometric, Legendre, Bessel, Hermite and Laguerre functions. The Dirac delta function.
Integral transform and Fourier series: Fourier series and Fourier transform; application of transform methods to the solution of elementary differential equations in Physics and Engineering.
Suggested reading:
1. Advanced Engineering Mathematics by E. Kreyszig.
2. Further Engineering Mathematics by K.A. Stroud
3. Mathematical Physics by B.D. Gupta
4. Advanced Mathematics for Engineers and Scientists (Schaum series) by Spiegel
Contents

1 LINEAR ALGEBRA
  1.1 Vector Space or Linear Space
    1.1.1 Algebraic Operations on Vectors
    1.1.2 Linearly Dependent and Independent Sets of Vectors
  1.2 Matrix Theory
    1.2.1 Determinant of a Matrix
    1.2.2 General Properties of Determinants
    1.2.3 Adjugate Matrix or Adjoint of a Matrix
    1.2.4 Reciprocal Matrix or Inverse of a Matrix
    1.2.5 The Transpose of a Matrix
    1.2.6 Symmetric, Skew-Symmetric and Orthogonal Matrices
  1.3 Complex Matrices
    1.3.1 The Conjugate of a Matrix
    1.3.2 The Conjugate Transpose or Hermitian Conjugate of a Matrix
    1.3.3 Hermitian, Skew-Hermitian and Unitary Matrices
  1.4 Matrix Algebra
    1.4.1 Rank of a Matrix
  1.5 Consistency of Equations
    1.5.1 Homogeneous and Non-Homogeneous Linear Equations
    1.5.2 Uniqueness of Solutions
  1.6 Solution of Sets of Equations
    1.6.1 Inverse Method
    1.6.2 Row Transformation Method
    1.6.3 Gaussian Elimination Method
    1.6.4 Triangular Decomposition Method: LU-Decomposition
    1.6.5 Cramer's Rule
  1.7 Eigenvalues and Eigenvectors of a Matrix
    1.7.1 Nature of the Eigenvalues and Eigenvectors of Special Types of Matrices
    1.7.2 Diagonalisation of a Matrix
  1.8 Transformation
    1.8.1 Transformation
    1.8.2 Resultant of Two Linear Transformations
    1.8.3 Similarity Transformation
    1.8.4 Unitary Transformation
    1.8.5 Orthogonal Transformation
    1.8.6 Orthogonal Set
  1.9 Bases and Dimension
    1.9.1 Linear Maps

2 FUNCTIONAL ANALYSIS
  2.1 Normed Spaces
    2.1.1 Cauchy Sequences
    2.1.2 Completeness
    2.1.3 Pre-Hilbert Spaces
    2.1.4 Hilbert Spaces
    2.1.5 Geometry of Hilbert Space

3 SPECIAL FUNCTIONS
  3.1 The Gamma and Beta Functions
    3.1.1 The Gamma Function
    3.1.2 The Beta Function
    3.1.3 Application of Gamma and Beta Functions
  3.2 Bessel's Functions
  3.3 Legendre's Polynomials
  3.4 Hermite Polynomials
  3.5 Laguerre Polynomials
    3.5.1 Hypergeometric Function

4 INTEGRAL TRANSFORM AND FOURIER SERIES
  4.1 Laplace Transform
    4.1.1 Inverse Transform
    4.1.2 Solution of Differential Equations by Laplace Transform
  4.2 The Dirac Delta Function (the Impulse Function)
    4.2.1 Integration Involving the Impulse Function
    4.2.2 Differential Equations Involving the Impulse Function
  4.3 Fourier Series
    4.3.1 Fourier Series of Functions of Period 2π
    4.3.2 Half-Range Series
    4.3.3 Functions with Arbitrary Period T
    4.3.4 Sum of a Fourier Series at a Point of Finite Discontinuity
  4.4 Fourier Integrals
    4.4.1 The Fourier Integral
Chapter 1
LINEAR ALGEBRA
1.1 Vector Space or Linear Space

A vector space V over a field F is a set of elements called vectors which may be combined by two operations, addition and scalar multiplication, such that

a) if the vectors a and b belong to V, then

(i) a + b = b + a (commutative law)
(ii) (a + b) + c = a + (b + c) (associative law)
(iii) there exists a zero vector 0 in V such that a + 0 = a
(iv) to every vector a in V, there corresponds a vector −a, known as the additive inverse vector, such that a + (−a) = 0

b) if m, n (elements of F) are any two scalars and a is a vector in V, then

(i) (m + n)a = ma + na (distributive law)
(ii) m(a + b) = ma + mb (distributive law)
(iii) m(na) = (mn)a = n(ma) (associative law of multiplication)
(iv) there corresponds a multiplicative identity scalar 1 such that 1a = a

Conditions for a physical quantity to be representable by a vector:

- It must obey the parallelogram law of addition (|a + b| ≤ |a| + |b|).
- It must have magnitude as well as direction independent of any choice of coordinate axes.
1.1.1 Algebraic Operations on Vectors

If A and B are two vectors with components (a_1, a_2, ..., a_n) and (b_1, b_2, ..., b_n) respectively, then

(i) A ± B = (a_1 ± b_1, a_2 ± b_2, ..., a_n ± b_n)
(ii) kA = (ka_1, ka_2, ..., ka_n), k a scalar
(iii) A · B = a_1 b_1 + a_2 b_2 + ... + a_n b_n
(iv) A vector is a unit vector if its magnitude |A| = 1
(v) The vectors A and B are orthogonal if A · B = 0
1.1.2 Linearly Dependent and Independent Sets of Vectors

A set of vectors A_1, A_2, ..., A_n is said to be linearly dependent if there exists a set of n scalars k_1, k_2, ..., k_n (of which at least one is non-zero) such that

k_1 A_1 + k_2 A_2 + ... + k_n A_n = 0

If k_1 = k_2 = ... = k_n = 0 is the only choice for which k_1 A_1 + k_2 A_2 + ... + k_n A_n = 0, the set of n vectors A_1, A_2, ..., A_n is said to be linearly independent.

A vector A is known as a linear combination of the set A_1, A_2, ..., A_n if it is expressible as

A = k_1 A_1 + k_2 A_2 + ... + k_n A_n

Examples:

1. The set of vectors (1, 2, 3), (2, −2, 0) is linearly independent since k_1(1, 2, 3) + k_2(2, −2, 0) = (0, 0, 0) is equivalent to the set of equations k_1 + 2k_2 = 0, 2k_1 − 2k_2 = 0 and 3k_1 = 0, which gives k_1 = k_2 = 0.

2. The set of vectors (2, 4, 10), (3, 6, 15) is linearly dependent since k_1(2, 4, 10) + k_2(3, 6, 15) = (0, 0, 0) gives the system 2k_1 + 3k_2 = 0, 4k_1 + 6k_2 = 0, 10k_1 + 15k_2 = 0, which is satisfied by the non-trivial choice k_1 = 3, k_2 = −2.
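The two examples above can be checked numerically. A minimal sketch (assuming numpy is available; the rank of the matrix whose rows are the vectors equals the number of linearly independent vectors):

```python
import numpy as np

# Example 1: (1,2,3) and (2,-2,0) -- rank 2 means linearly independent
A = np.array([[1, 2, 3], [2, -2, 0]])
print(np.linalg.matrix_rank(A))   # 2 -> independent

# Example 2: (2,4,10) and (3,6,15) -- rank 1 means linearly dependent
B = np.array([[2, 4, 10], [3, 6, 15]])
print(np.linalg.matrix_rank(B))   # 1 -> dependent

# k1 = 3, k2 = -2 is a non-trivial combination giving the zero vector
assert np.all(3 * B[0] - 2 * B[1] == 0)
```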
1.2 Matrix Theory

A set of numbers arranged in a rectangular array of m rows and n columns such as

    | a_11  a_12  ...  a_1n |
    | a_21  a_22  ...  a_2n |
    | ...   ...   ...  ...  |
    | a_m1  a_m2  ...  a_mn |

is called a matrix of order m × n. Special square matrices include the upper triangular matrix U, the lower triangular matrix L, the diagonal matrix D, the null matrix S and the identity (unit) matrix I, e.g. for order 3:

U = | u_11 u_12 u_13 |    L = | l_11  0    0   |
    | 0    u_22 u_23 |        | l_21 l_22  0   |
    | 0    0    u_33 |        | l_31 l_32 l_33 |

D = | d_11 0    0    |   S = | 0 0 0 |   I = | 1 0 0 |
    | 0    d_22 0    |       | 0 0 0 |       | 0 1 0 |
    | 0    0    d_33 |       | 0 0 0 |       | 0 0 1 |

1.2.1 Determinant of a Matrix

The determinant of a 2 × 2 matrix A is a second-order determinant, defined by

det A = | a_11 a_12 | = a_11 a_22 − a_12 a_21
        | a_21 a_22 |

Similarly the determinant of a 3 × 3 matrix A is referred to as a third-order determinant and is defined by

det A = | a_11 a_12 a_13 |
        | a_21 a_22 a_23 |
        | a_31 a_32 a_33 |

      = a_11 | a_22 a_23 | − a_21 | a_12 a_13 | + a_31 | a_12 a_13 |
             | a_32 a_33 |        | a_32 a_33 |        | a_22 a_23 |

      = a_11 M_11 − a_21 M_21 + a_31 M_31

Here we have expanded the determinant by the first column. A determinant can be expanded by any row or column.

Minors and cofactors of a third-order determinant:

M_11 = | a_22 a_23 |,   M_21 = | a_12 a_13 |,   M_31 = | a_12 a_13 |
       | a_32 a_33 |           | a_32 a_33 |           | a_22 a_23 |
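The first-column cofactor expansion can be written out directly and compared against a library determinant; a small sketch (assuming numpy is available):

```python
import numpy as np

def det3_by_first_column(a):
    """Expand a 3x3 determinant by its first column:
    det A = a11*M11 - a21*M21 + a31*M31."""
    def minor(i):
        # delete row i, keep columns 1 and 2, take the 2x2 determinant
        rows = [r for r in range(3) if r != i]
        sub = a[np.ix_(rows, [1, 2])]
        return sub[0, 0] * sub[1, 1] - sub[0, 1] * sub[1, 0]
    return a[0, 0] * minor(0) - a[1, 0] * minor(1) + a[2, 0] * minor(2)

A = np.array([[1.0, 2, 8], [4, 7, 6], [9, 5, 3]])
print(det3_by_first_column(A))                     # -269.0
assert np.isclose(det3_by_first_column(A), np.linalg.det(A))
```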
1.2.2 General Properties of Determinants

Theorem 1 (behaviour of an nth-order determinant under elementary row operations):

(a) Interchange of two rows multiplies the value of the determinant by −1.
(b) Addition of a multiple of a row to another row does not alter the value of the determinant.
(c) Multiplication of a row by c multiplies the value of the determinant by c.

Theorem 2 (further properties of the nth-order determinant):

(a)-(c) in Theorem 1 hold also for columns.
(d) Transposition leaves the value of a determinant unaltered.
(e) A zero row or column renders the value of a determinant zero.
(f) Proportional rows or columns render the value of a determinant zero. In particular, a determinant with two identical rows or columns has the value zero.

Singular and non-singular matrices:
A square matrix A is known as a singular matrix if its determinant |A| = 0. In case |A| ≠ 0, A is known as a non-singular matrix.
1.2.3 Adjugate Matrix or Adjoint of a Matrix

Let A = [a_ij] be a square matrix of order n and let c_ij represent the cofactor of the element a_ij in the determinant |A|. Then the transpose of the matrix of cofactors (C = [c_ij]), denoted by C' = [c_ji], is called the adjugate or adjoint of A and is written adj A.

Properties:

- A(adj A) = (adj A)A = |A| I .....(*)
  i.e. multiplication of A by adj A is commutative and their product is a scalar matrix having every diagonal element equal to |A|.
- |adj A| = |A|^(n−1), where n is the order of the matrix.
- If |A| = 0 then A(adj A) = (adj A)A = 0.
- adj(AB) = (adj B)(adj A), and similarly adj(ABC) = (adj C)(adj B)(adj A) (prove!!)
1.2.4 Reciprocal Matrix or Inverse of a Matrix

If AB = BA = I (the identity matrix) then B is said to be the inverse of A and vice versa, and is denoted by A^(−1). That is, AA^(−1) = A^(−1)A = I.
When the inverse of A exists, A is said to be invertible. The necessary and sufficient condition for a square matrix to be invertible is that it is non-singular. From eq.(*), A^(−1) is given by

A^(−1) = (1/|A|) adj A
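The adjugate route to the inverse, and property (*) above, can be checked numerically on a sample 3 × 3 matrix. A sketch (assuming numpy is available; production code would of course call a solver directly):

```python
import numpy as np

def adjugate(a):
    """Transpose of the matrix of cofactors of a square matrix a."""
    n = a.shape[0]
    c = np.empty((n, n), dtype=float)
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(a, i, axis=0), j, axis=1)
            c[i, j] = (-1) ** (i + j) * np.linalg.det(sub)  # cofactor c_ij
    return c.T

A = np.array([[3.0, 2, -1], [2, -1, 2], [1, -3, -4]])
# Property (*): A (adj A) = |A| I
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
inv = adjugate(A) / np.linalg.det(A)        # A^{-1} = adj A / |A|
assert np.allclose(inv @ A, np.eye(3))
```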
1.2.5 The Transpose of a Matrix

If A is a matrix of order m × n, the transpose of A, denoted by A' or A^T, is the matrix of order n × m obtained by interchanging its rows and columns.

Example:

If A = | a_11 a_12 a_13 |   then   A' = | a_11 a_21 |
       | a_21 a_22 a_23 |               | a_12 a_22 |
                                        | a_13 a_23 |

Properties:

- (A')' = A
- |A'| = |A|
- (kA)' = kA' for k a scalar
- (A + B)' = A' + B'
- (AB)' = B'A'
1.2.6 Symmetric, Skew-Symmetric and Orthogonal Matrices

A square matrix A is said to be:

(i) symmetric if A' = A, i.e. if a_ij = a_ji for all i and j, i.e. if transposition leaves it unchanged;
(ii) skew-symmetric if A' = −A, i.e. if transposition gives the negative of A;
(iii) orthogonal if A' = A^(−1), i.e. if transposition gives the inverse of A.

Every square matrix can be written A = P + Q, where P = ½(A + A') is a symmetric matrix and Q = ½(A − A') is a skew-symmetric matrix.
1.3 Complex Matrices

1.3.1 The Conjugate of a Matrix

If the elements of a matrix A are complex quantities, then the matrix obtained from A on replacing its elements by the corresponding complex conjugates is said to be the conjugate matrix of A, and is denoted by A*.

Properties: (A*)* = A; (A + B)* = A* + B*; (AB)* = A* B*

1.3.2 The Conjugate Transpose or Hermitian Conjugate of a Matrix

The transpose of the conjugate, (A*)', is called the conjugate transpose or Hermitian conjugate of A and is denoted A† (called "A dagger").

Example:

A = | 2+3i  3−4i  i   |,  A* = | 2−3i  3+4i  −i  |,  A† = | 2−3i  −5i  |
    | 5i    5−3i  4+i |        | −5i   5+3i  4−i |        | 3+4i  5+3i |
                                                          | −i    4−i  |

Properties:

- (A†)† = A
- (A + B)† = A† + B†
- (AB)† = B†A† (prove!!)
1.3.3 Hermitian, Skew-Hermitian and Unitary Matrices

A matrix A is said to be

(i) Hermitian if A† = A
(ii) skew-Hermitian if A† = −A
(iii) unitary if A† = A^(−1)

Examples:

A = | 5     2+3i  i    |,  B = | 3i    2+i |,  C = | i/2   √3/2 |
    | 2−3i  3     3−4i |      | −2+i  −i  |       | √3/2  i/2  |
    | −i    3+4i  0    |

Here A is Hermitian, B is skew-Hermitian and C is unitary.

Every square matrix can be written A = P + Q, where P = ½(A + A†) is a Hermitian matrix and Q = ½(A − A†) is a skew-Hermitian matrix.
1.4 Matrix Algebra

1.4.1 Rank of a Matrix

The rank of a matrix is the order of the largest non-vanishing determinant that can be formed from the elements of the matrix. A matrix A is said to have rank r if it contains at least one square submatrix of r rows with a non-zero determinant, while all square submatrices of (r + 1) rows, or more, have zero determinants.

Examples:

1. A = | 4 2 | gives |A| = | 4 2 | = 18 ≠ 0, therefore A is of rank 2.
       | 1 5 |             | 1 5 |

2. B = | 6 3 | gives |B| = | 6 3 | = 0.
       | 8 4 |             | 8 4 |

Therefore B is not of rank 2. It is, however, of rank 1, since the submatrices (6), (3), (8), (4) are not zero. This implies that only a null matrix is of rank zero.

3. Determine the ranks of (a) A = | 1 2 8 |  and (b) B = | 3 4 5 |
                                  | 4 7 6 |              | 1 2 3 |
                                  | 9 5 3 |              | 4 5 6 |

(a) Since |A| = −269 ≠ 0, A is of rank 3.
(b) Since |B| = 0, B is not of rank 3; we then test the possible second-order minors to find one that is not zero, e.g.

| 3 4 | = 2 ≠ 0,
| 1 2 |

so B is of rank 2.

For a rectangular matrix of order m × n the rank is given by the order of the largest square submatrix with a non-vanishing determinant formed by the elements.

Example:

For the 3 × 4 matrix B = | 2 2 3 1 |
                         | 0 8 2 4 |
                         | 1 7 3 2 |

| 2 2 3 |
| 0 8 2 | = 0
| 1 7 3 |

But we must also try other 3 × 3 submatrices, e.g.

| 2 3 1 |
| 8 2 4 | = 30 ≠ 0, therefore B is of rank 3.
| 7 3 2 |
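The ranks found above by testing minors can be cross-checked with a library routine. A sketch (assuming numpy is available; `matrix_rank` uses the SVD rather than minors, but gives the same answer):

```python
import numpy as np

r1 = np.linalg.matrix_rank(np.array([[4, 2], [1, 5]]))              # example 1
r2 = np.linalg.matrix_rank(np.array([[6, 3], [8, 4]]))              # example 2
r3 = np.linalg.matrix_rank(np.array([[3, 4, 5], [1, 2, 3], [4, 5, 6]]))  # example 3(b)
print(r1, r2, r3)   # 2 1 2
```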
1.5 Consistency of Equations

1.5.1 Homogeneous and Non-Homogeneous Linear Equations

A set of m simultaneous linear equations in n unknowns

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
..................................
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

can be written in matrix form as follows:

| a_11 a_12 ... a_1n | | x_1 |   | b_1 |
| a_21 a_22 ... a_2n | | x_2 | = | b_2 |
| ...  ...  ... ...  | | ... |   | ... |
| a_m1 a_m2 ... a_mn | | x_n |   | b_m |

i.e. Ax = b.

This set of equations is homogeneous if b = 0, i.e. (b_1, b_2, ..., b_m) = (0, 0, ..., 0); otherwise it is said to be non-homogeneous. The set of equations is said to be consistent if solutions for x_1, x_2, ..., x_n exist, and inconsistent if no such solutions can be found.

The augmented coefficient matrix A_b of A is

A_b = | a_11 a_12 ... a_1n  b_1 |
      | a_21 a_22 ... a_2n  b_2 |
      | ...  ...  ... ...   ... |
      | a_m1 a_m2 ... a_mn  b_m |

Example: For the system

| 1 3 | | x_1 |   | 4 |
| 2 6 | | x_2 | = | 5 |

we have A = | 1 3 |  and  A_b = | 1 3 4 |
            | 2 6 |             | 2 6 5 |

Rank of A: | 1 3 | = 0, so rank of A = 1.
           | 2 6 |

Rank of A_b: | 1 3 | = 0 as before, but | 3 4 | = −9 ≠ 0, so rank of A_b = 2.
             | 2 6 |                    | 6 5 |

In this case the rank of A is less than the rank of A_b, so no solution exists.
1.5.2 Uniqueness of Solutions

1. With a set of n equations in n unknowns, the equations are consistent if the coefficient matrix A and the augmented matrix A_b are each of rank n. There is then a unique solution for the n equations.

2. If the rank of A and that of A_b is m, where m < n, then the matrix A is singular, i.e. |A| = 0, and there will be an infinite number of solutions for the equations.

3. If the rank of A is less than the rank of A_b, then no solution exists.

That is, with a set of n equations in n unknowns:

(i) a unique solution exists if rank A = rank A_b = n
(ii) an infinite number of solutions exists if rank A = rank A_b = m < n
(iii) no solution exists if rank A < rank A_b

Example: Show that the system

| 4 5  | | x_1 |   | 3 |
| 8 10 | | x_2 | = | 6 |

has an infinite number of solutions. Here

A = | 4 5  |  and  A_b = | 4 5  3 |
    | 8 10 |             | 8 10 6 |

Rank of A: | 4 5; 8 10 | = 0, so rank A = 1.
Rank of A_b: | 4 5; 8 10 | = 0, | 5 3; 10 6 | = 0, | 4 3; 8 6 | = 0, so rank of A_b = 1.
Therefore rank of A = rank of A_b = 1 < n = 2, and an infinite number of solutions exists.
For the homogeneous linear equations

| a_11 a_12 ... a_1n | | x_1 |   | 0 |
| a_21 a_22 ... a_2n | | x_2 | = | 0 |
| ...  ...  ... ...  | | ... |   | ... |
| a_m1 a_m2 ... a_mn | | x_n |   | 0 |

i.e. Ax = 0 .....(*)

let r be the rank of the matrix A of order m × n. We have the following results:

1. A system of homogeneous linear equations always has one or more solutions. The two cases are r = n and r < n. For r = n, eq.(*) has no linearly independent solutions, for in that case the trivial (zero) solution is the only solution, while in the case r < n there will be (n − r) independent solutions, and therefore eq.(*) will have more than one solution.

2. The number of linearly independent solutions of Ax = 0 is (n − r), i.e. if we assign arbitrary values to (n − r) of the variables, then the values of the others can be uniquely determined. (Since the rank of A is r, it has r linearly independent columns.)

3. If the number of equations is less than the number of variables, then a solution other than x_1 = x_2 = ... = x_n = 0 always exists (i.e. a non-trivial solution always exists).

4. If the number of equations is equal to the number of variables, a necessary and sufficient condition for solutions other than x_1 = x_2 = ... = x_n = 0 is that the determinant of the coefficients be zero.
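Result 2 above, that Ax = 0 has (n − r) independent solutions, can be illustrated numerically. A sketch (assuming numpy is available; the hypothetical helper `nullspace_dim` simply computes n − rank A):

```python
import numpy as np

def nullspace_dim(a):
    """Number of linearly independent solutions of Ax = 0, i.e. n - rank(A)."""
    return a.shape[1] - np.linalg.matrix_rank(a)

# 2 equations in 3 unknowns (fewer equations than variables):
# rank 2, so 3 - 2 = 1 independent non-trivial solution, as in result 3.
A = np.array([[1.0, 2, 3], [4, 5, 6]])
print(nullspace_dim(A))   # 1
```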
1.6 Solution of Sets of Equations

1.6.1 Inverse Method

To solve Ax = b we use x = A^(−1)b. To find A^(−1), proceed as follows:

(i) Evaluate |A|. If |A| = 0 then stop (no unique solution); else proceed.
(ii) Form C, the matrix of cofactors of A (the cofactor of any element is its minor together with its place sign).
(iii) Write C', the transpose of C.
(iv) Then A^(−1) = (1/|A|) C'.
Example: To solve the system

3x_1 + 2x_2 − x_3 = 4
2x_1 − x_2 + 2x_3 = 10
x_1 − 3x_2 − 4x_3 = 5

we rewrite it in matrix form as

| 3  2 −1 | | x_1 |   | 4  |
| 2 −1  2 | | x_2 | = | 10 |     i.e. Ax = b
| 1 −3 −4 | | x_3 |   | 5  |

(i) |A| = | 3  2 −1 |
          | 2 −1  2 | = 55
          | 1 −3 −4 |

(ii) C = | c_11 c_12 c_13 |
         | c_21 c_22 c_23 |
         | c_31 c_32 c_33 |

where

c_11 = | −1  2 | = 10,   c_12 = −| 2  2 | = 10,   c_13 = | 2 −1 | = −5,
       | −3 −4 |                | 1 −4 |                 | 1 −3 |

c_21 = −| 2 −1 | = 11,   c_22 = | 3 −1 | = −11,   c_23 = −| 3  2 | = 11,
        | −3 −4 |               | 1 −4 |                  | 1 −3 |

c_31 = | 2 −1 | = 3,   c_32 = −| 3 −1 | = −8,   c_33 = | 3  2 | = −7
       | −1  2 |               | 2  2 |                | 2 −1 |

So C = | 10  10  −5 |
       | 11 −11  11 |
       |  3  −8  −7 |

(iii) C' = | 10  11   3 |
           | 10 −11  −8 |
           | −5  11  −7 |

(iv) So

| x_1 |          | b_1 |          | 10  11   3 | |  4 |          |  40 + 110 + 15 |   |  3 |
| x_2 | = A^(−1) | b_2 | = (1/55) | 10 −11  −8 | | 10 | = (1/55) |  40 − 110 − 40 | = | −2 |
| x_3 |          | b_3 |          | −5  11  −7 | |  5 |          | −20 + 110 − 35 |   |  1 |

Therefore x_1 = 3, x_2 = −2, x_3 = 1.
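The worked example can be verified with a library solver; a short check (assuming numpy is available; `solve` uses LU factorisation internally rather than the explicit inverse):

```python
import numpy as np

A = np.array([[3.0, 2, -1], [2, -1, 2], [1, -3, -4]])
b = np.array([4.0, 10, 5])
x = np.linalg.solve(A, b)       # same answer as x = A^{-1} b
print(x)                        # [ 3. -2.  1.]
assert np.allclose(A @ x, b)    # the solution satisfies all three equations
```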
1.6.2 Row Transformation Method

Elementary row transformations are:

(a) interchange any two rows;
(b) multiply (or divide) every element in a row by a non-zero scalar (constant) k;
(c) add to (or subtract from) all the elements of any row k times the corresponding elements of any other row.

Equivalent matrices: Two matrices A and B are said to be equivalent if B can be obtained from A by a sequence of elementary transformations.

Theorem: Elementary operations do not change the rank of a matrix.
Corollary 1: Equivalent matrices have the same rank.
Corollary 2: The rank of A' is equal to the rank of A.

Solution of equations:

Example: The system

|  2 1 1 | | x_1 |   | 5 |
|  1 3 2 | | x_2 | = | 1 |
| −3 2 4 | | x_3 |   | 4 |

is written as

|  2 1 1 | | x_1 |   | 1 0 0 | | 5 |
|  1 3 2 | | x_2 | = | 0 1 0 | | 1 |
| −3 2 4 | | x_3 |   | 0 0 1 | | 4 |

and

|  2 1 1 : 1 0 0 |
|  1 3 2 : 0 1 0 |
| −3 2 4 : 0 0 1 |

is transformed by elementary row operations to

| 1 0 0 :   8/17  −2/17  −1/17 |
| 0 1 0 : −10/17  11/17  −3/17 |
| 0 0 1 :  11/17  −7/17   5/17 |

We now have

| 1 0 0 | | x_1 |          |   8  −2  −1 | | 5 |
| 0 1 0 | | x_2 | = (1/17) | −10  11  −3 | | 1 |
| 0 0 1 | | x_3 |          |  11  −7   5 | | 4 |

which gives x_1 = 2, x_2 = −3, x_3 = 4.
1.6.3 Gaussian Elimination Method

Example: For the system

| 2  3  2 | | x_1 |   | 9 |
| 3 −2 −1 | | x_2 | = | 4 |
| 1  4  2 | | x_3 |   | 6 |

the augmented matrix

| 2  3  2 : 9 |
| 3 −2 −1 : 4 |
| 1  4  2 : 6 |

is reduced by elementary row operations to the triangular form

| 1 4 2   : 6   |
| 0 1 2/5 : 3/5 |
| 0 0 1   : 4   |

i.e.

| 1 4 2   | | x_1 |   | 6   |
| 0 1 2/5 | | x_2 | = | 3/5 |
| 0 0 1   | | x_3 |   | 4   |

By backward substitution, x_3 = 4, x_2 = 3/5 − (2/5)(4) = −1, x_1 = 6 − 4(−1) − 2(4) = 2.
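The elimination and back-substitution steps above can be sketched in a few lines. A teaching sketch, not production code (assuming numpy is available; partial pivoting is added for numerical safety, which the hand computation did implicitly by choosing row 3 as the first pivot row):

```python
import numpy as np

def gauss_solve(a, b):
    """Gaussian elimination with partial pivoting, then backward substitution."""
    a = a.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(a[k:, k]))                   # pivot row
        a[[k, p]], b[[k, p]] = a[[p, k]].copy(), b[[p, k]].copy()
        for i in range(k + 1, n):
            m = a[i, k] / a[k, k]                             # multiplier
            a[i, k:] -= m * a[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):                            # back substitution
        x[i] = (b[i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
    return x

A = np.array([[2, 3, 2], [3, -2, -1], [1, 4, 2]])
b = np.array([9, 4, 6])
print(gauss_solve(A, b))   # [ 2. -1.  4.]
```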
1.6.4 Triangular Decomposition method: LU-decomposition
1.6.5 Cramer's Rule
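Cramer's rule expresses each unknown as a ratio of determinants: x_i = |A_i|/|A|, where A_i is A with its i-th column replaced by b, valid when |A| ≠ 0. A minimal sketch (assuming numpy is available), applied to the system solved by the inverse method above:

```python
import numpy as np

def cramer(a, b):
    """Cramer's rule: x_i = det(A_i)/det(A); requires det(A) != 0."""
    d = np.linalg.det(a)
    x = np.empty(len(b))
    for i in range(len(b)):
        ai = a.astype(float).copy()
        ai[:, i] = b                      # replace column i by b
        x[i] = np.linalg.det(ai) / d
    return x

A = np.array([[3.0, 2, -1], [2, -1, 2], [1, -3, -4]])
b = np.array([4.0, 10, 5])
print(cramer(A, b))   # [ 3. -2.  1.]
```

Cramer's rule is mainly of theoretical interest; for large systems elimination is far cheaper than computing n + 1 determinants.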
1.7 Eigenvalues and Eigenvectors of a Matrix

A non-zero vector X is called an eigenvector or characteristic vector of a matrix A if there is a number λ, called the eigenvalue (or characteristic value, characteristic root, or latent root), such that AX = λX,
i.e. AX = λIX, where I is a unit matrix,
or (A − λI)X = 0.

Since X ≠ 0, the matrix (A − λI) is singular, so that its determinant |A − λI|, which is known as the characteristic determinant of A, is zero. This leads to the characteristic equation of A:

|A − λI| = 0 .....(*)

It follows that every characteristic root of a matrix A is a root of its characteristic equation, eq.(*).
Example: If A = | 5 4 |, the characteristic equation is
                | 1 2 |

|A − λI| = | 5−λ   4   | = 0
           | 1     2−λ |

i.e. λ² − 7λ + 6 = 0, or (λ − 1)(λ − 6) = 0.

Therefore λ_1 = 1 and λ_2 = 6 are the eigenvalues of A.

Now the eigenvector X_1 = | x_1 | of A corresponding to λ_1 = 1 is given by (A − 1I)X_1 = 0, i.e.
                          | x_2 |

| 4 4 | | x_1 |   | 0 |
| 1 1 | | x_2 | = | 0 |

i.e. 4x_1 + 4x_2 = 0 and x_1 + x_2 = 0, which yield x_1 = −x_2. If we take x_1 = 1, then x_2 = −1. Therefore

X_1 = | x_1 | = |  1 |
      | x_2 |   | −1 |

Similarly the eigenvector X_2 = | x_1 | of A corresponding to λ_2 = 6 is given by (A − 6I)X_2 = 0, i.e.
                                | x_2 |

| −1  4 | | x_1 |   | 0 |
|  1 −4 | | x_2 | = | 0 |

i.e. −x_1 + 4x_2 = 0 and x_1 − 4x_2 = 0, which yield x_1 = 4x_2. If we take x_1 = 4, then x_2 = 1. Therefore

X_2 = | x_1 | = | 4 |
      | x_2 |   | 1 |

In general we write

X_1 = C_1 |  1 |  and  X_2 = C_2 | 4 |
          | −1 |                 | 1 |

where C_1 and C_2 are non-zero scalars called the normalization constants.

Normalization: The constants C_1 and C_2 can be fixed by normalization as follows:

X_1† X_1 = 1, i.e. C_1 ( 1  −1 ) C_1 |  1 | = 1, or 2C_1² = 1, so C_1 = 1/√2; hence X_1 = (1/√2) |  1 |
                                     | −1 |                                                      | −1 |

X_2† X_2 = 1, i.e. C_2 ( 4  1 ) C_2 | 4 | = 1, or 17C_2² = 1, so C_2 = 1/√17; hence X_2 = (1/√17) | 4 |
                                    | 1 |                                                         | 1 |
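The eigenvalues and eigenvectors of the example can be checked with a library routine; a short sketch (assuming numpy is available; `eig` returns eigenvectors already normalized to unit length, as columns):

```python
import numpy as np

A = np.array([[5.0, 4], [1, 2]])
vals, vecs = np.linalg.eig(A)
print(sorted(vals))                 # [1.0, 6.0]

# Each pair satisfies the defining relation A X = lambda X
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```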
1.7.1 Nature of the Eigenvalues and Eigenvectors of Special Types of Matrices

Theorem 1: The eigenvalues of a Hermitian matrix are real.
Proof: Let AX = λX. Premultiplying by X† gives X†AX = λX†X, so that λ = X†AX / X†X. But (X†AX)† = X†A†X = X†AX for a Hermitian matrix, so the scalar X†AX is real; X†X is real and positive; hence λ is real.

Theorem: Any two eigenvectors corresponding to two distinct eigenvalues of a Hermitian matrix are orthogonal.
Proof: Let

A X_1 = λ_1 X_1 ...............................(1)
A X_2 = λ_2 X_2 ...............................(2)

From Theorem 1, λ_1 and λ_2 are real. Premultiplying (1) and (2) by X_2† and X_1† respectively,

X_2† A X_1 = λ_1 X_2† X_1 ...............................(3)
X_1† A X_2 = λ_2 X_1† X_2 ...............................(4)

But (X_2† A X_1)† = X_1† A† X_2. For a Hermitian matrix A† = A, and also (λ_1 X_2† X_1)† = λ_1 X_1† X_2 since λ_1 is real; therefore, from (3) and (4),

λ_1 X_1† X_2 = λ_2 X_1† X_2,  or  (λ_1 − λ_2) X_1† X_2 = 0.

Since λ_1 − λ_2 ≠ 0 for distinct roots, we have X_1† X_2 = 0, i.e. X_1 and X_2 are orthogonal.
Corollary: Any two eigenvectors corresponding to two distinct eigenvalues of
a real symmetric matrix are orthogonal.
Theorem 5: Any two eigenvectors corresponding to two distinct eigenvalues of a unitary matrix are orthogonal.
Theorem 6: The eigenvectors corresponding to distinct eigenvalues of a matrix are linearly independent.
Theorem 7: The characteristic polynomial, and hence the eigenvalues, of similar matrices are the same. Also, if X is an eigenvector of A corresponding to the eigenvalue λ, then P^(−1)X is an eigenvector of B corresponding to the eigenvalue λ, where B = P^(−1)AP.

Proof: Let A and B be two similar matrices. Then there exists an invertible matrix P such that B = P^(−1)AP. Consider

B − λI = P^(−1)AP − λI
       = P^(−1)(A − λI)P

since P^(−1)(λI)P = λP^(−1)P = λI. Therefore

|B − λI| = |P^(−1)| · |A − λI| · |P|
         = |P^(−1)| |P| · |A − λI|   since scalar quantities commute
         = |P^(−1)P| · |A − λI|      since |CD| = |C||D|
         = |A − λI|                  since |P^(−1)P| = |I| = 1

It follows that A and B have the same characteristic polynomial and so they have the same eigenvalues.
Corollary 1: The eigenvalues of a matrix are invariant under a similarity transformation.
Corollary 2: If A is similar to a diagonal matrix D then the diagonal elements of D are the eigenvalues of A.
1.7.2 Diagonalisation of a Matrix

Modal matrix: If the eigenvectors of a matrix A are arranged as columns of a square matrix, the modal matrix of A, denoted by M, is formed.

Example: For A = | 1 0  4 |
                 | 0 2  0 |
                 | 3 1 −3 |

the eigenvalues are λ_1 = 2, λ_2 = 3, λ_3 = −5, and the corresponding eigenvectors are

X_1 = |  4 |,   X_2 = | 2 |,   X_3 = |  2 |
      | −7 |          | 0 |          |  0 |
      |  1 |          | 1 |          | −3 |

so that

M = |  4 2  2 |   and   S = | 2 0  0 |
    | −7 0  0 |             | 0 3  0 |
    |  1 1 −3 |             | 0 0 −5 |

Note that the eigenvalues of S and A are the same, as indicated in Corollary 2 above.

Exercise: Show that AM = MS, and hence M^(−1)AM = S (i.e. a similarity transformation of A to S, which implies that S and A are similar matrices).

Note:

1. M^(−1)AM transforms the square matrix A into a diagonal matrix S.
2. A square matrix A of order n can be so transformed if the matrix has n independent eigenvectors.
3. A matrix A always has n linearly independent eigenvectors if it has n distinct eigenvalues or if it is a symmetric matrix.
4. If the matrix has repeated eigenvalues, it may or may not have linearly independent eigenvectors.
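The similarity transformation M^(−1)AM = S for the example above can be verified directly; a sketch (assuming numpy is available; in practice one would use `eig` rather than inverting M explicitly):

```python
import numpy as np

A = np.array([[1.0, 0, 4], [0, 2, 0], [3, 1, -3]])
# Modal matrix: eigenvectors X1, X2, X3 from the example as columns
M = np.array([[4.0, 2, 2], [-7, 0, 0], [1, 1, -3]])
S = np.linalg.inv(M) @ A @ M        # similarity transformation of A
print(np.round(S, 10))              # diag(2, 3, -5)
```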
1.8 Transformation

Linear form: An expression of the form Σ_{j=1}^{n} a_ij x_j is said to be a linear form of the variables x_j.

1.8.1 Transformation

If the a_ij are given constants and the x_j variables, then the set of equations

y_i = Σ_{j=1}^{n} a_ij x_j   (for i = 1, 2, 3, ..., n) .....(1)

is called a linear transformation connecting the variables x_j and the variables y_i. The square matrix

A = | a_11 a_12 ... a_1n |
    | a_21 a_22 ... a_2n |
    | ...  ...  ... ...  |
    | a_n1 a_n2 ... a_nn |

is called the matrix of the transformation.

1.8.2 Resultant of Two Linear Transformations

1.8.3 Similarity Transformation

If P is a non-singular matrix, the transformation

B = P^(−1)AP .....(2)

is called a similarity transformation of A (cf. Theorem 7 above).

1.8.4 Unitary Transformation

A transformation y = Ax is unitary if the matrix A is unitary, i.e. A†A = AA† = I. Therefore

y†y = (Ax)†(Ax) = x†A†Ax = x†x,

i.e. the norm of vectors is invariant under a unitary transformation. In eq.(2), if P is unitary, i.e. P†P = PP† = I, i.e. P† = P^(−1), then the transformation B = P^(−1)AP is also a unitary transformation.
1.8.5 Orthogonal Transformation

Any transformation y = Ax that transforms Σ y_i² into Σ x_i² is said to be an orthogonal transformation, and the matrix A is known as an orthogonal matrix. The necessary and sufficient condition for a square matrix to be orthogonal is AA' = I. In eq.(2), if P is orthogonal, then P^(−1) = P' and eq.(2) is an orthogonal transformation. The product of two orthogonal transformations is an orthogonal transformation. Two n-vectors x and y are orthogonal to each other if x·y = ⟨x, y⟩ = 0, i.e. if x'y = 0.

1.8.6 Orthogonal Set

A set of vectors is said to be orthonormal if:

- each vector of the set is a normalized (unit) vector;
- any two vectors of the set are orthogonal.
1.9 Bases and Dimension

Let X be a linear space over K. Every subset of a linearly independent subset of X is automatically linearly independent. X possesses a maximal linearly independent subset, called a basis of X. The cardinality of a basis is an invariant of X, since any two bases possess the same cardinality. This invariant is called the dimension of X. If the dimension of X is finite, then X is called a finite-dimensional linear space; otherwise X is an infinite-dimensional linear space.

1.9.1 Linear Maps

Let X, Y be two linear spaces over K and D a subspace of X. A transformation T : D → Y is called a linear map or a linear operator if

T(αx_1 + βx_2) = αT(x_1) + βT(x_2)

where x_1, x_2 ∈ D and α, β ∈ K. The set D is called the domain of T and is often written D(T). The set R(T) = {Tx : x ∈ D(T)} is the range or codomain of T. R(T) is automatically a linear space over K.
Chapter 2
FUNCTIONAL ANALYSIS
2.1 Normed Spaces

A normed space over a field K is a pair (X, ||·||) consisting of a linear space X over K and a map ||·|| : X → ℝ with the following properties:

(i) ||x|| ≥ 0 for all x ∈ X, i.e. ||·|| is non-negative;
(ii) ||x|| = 0 implies x = 0 (i.e. ker(||·||) = {0});
(iii) ||αx|| = |α| ||x|| for all x ∈ X, α ∈ K (i.e. ||·|| is positively homogeneous);
(iv) ||x + y|| ≤ ||x|| + ||y|| for all x, y ∈ X (i.e. ||·|| satisfies the triangle inequality).

For x ∈ X, the number ||x|| is called the norm of x.

2.1.1 Cauchy Sequences

Let (X, ||·||) be a normed space and (x_n) a sequence of members of X. Then (x_n) is a Cauchy sequence if, given any ε > 0, there is a natural number N(ε) such that ||x_n − x_m|| < ε whenever n, m > N(ε).
2.1.2 Completeness
A normed space in which every Cauchy sequence has a limit is said to be
complete.
2.1.3 Pre-Hilbert Spaces

An inner product space or a pre-Hilbert space is a pair (H, ⟨·,·⟩) consisting of a linear space H over K and a functional ⟨·,·⟩ : H × H → K, called the inner product on H, with the following properties:

(i) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩ for all x, y, z ∈ H
(ii) ⟨αx, y⟩ = α⟨x, y⟩ for all x, y ∈ H, α ∈ K
(iii) ⟨x, y⟩ = ⟨y, x⟩* (the complex conjugate) for all x, y ∈ H
(iv) ⟨x, x⟩ ≥ 0 for all x ∈ H, and ⟨x, x⟩ = 0 iff x = 0.

Remark

1. For x, y ∈ H the number ⟨x, y⟩ is called the inner product of x and y.
2. For x ∈ H, define ||x|| by ||x|| = √⟨x, x⟩. Then ||·|| is a norm on H, whence (H, ||·||) is a normed space. ||·|| is called the norm induced by the inner product ⟨·,·⟩.
3. With ||·|| as in 2., one can show that

||x + y||² + ||x − y||² = 2||x||² + 2||y||²   for all x, y ∈ H.

This result is called the parallelogram law and is a characterizing property of pre-Hilbert spaces, i.e. if a norm does not satisfy the parallelogram law, then it is not induced by an inner product.
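The parallelogram law is easy to probe numerically. A sketch (assuming numpy is available): the Euclidean norm, induced by the dot product, satisfies the law; the sup-norm, which is not induced by any inner product, fails it.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)
n = np.linalg.norm                      # Euclidean norm (induced by dot product)
lhs = n(x + y) ** 2 + n(x - y) ** 2
rhs = 2 * n(x) ** 2 + 2 * n(y) ** 2
assert np.isclose(lhs, rhs)             # parallelogram law holds

# Sup-norm counterexample with x = (1,0), y = (0,1):
m = lambda v: np.max(np.abs(v))
u, w = np.array([1.0, 0]), np.array([0, 1.0])
print(m(u + w) ** 2 + m(u - w) ** 2)    # 2.0, but 2||u||^2 + 2||w||^2 = 4
```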
2.1.4 Hilbert Spaces

A Hilbert space is a pre-Hilbert space (H, ⟨·,·⟩) such that the pair (H, ||·||), where ||·|| is the norm induced by ⟨·,·⟩, is a complete normed space.

Example of a Hilbert space: Define ⟨·,·⟩ : K^n × K^n → K by

⟨x, y⟩ = Σ_{j=1}^{n} x_j y_j*

where x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_n). Then (K^n, ⟨·,·⟩) is a Hilbert space of finite dimension.
2.1.5 Geometry of Hilbert space
Chapter 3
SPECIAL FUNCTIONS
3.1 The Gamma and Beta Functions

3.1.1 The Gamma Function

The gamma function Γ(x) is defined by the integral

Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt .....(1)

and is convergent for x > 0. It follows from eq.(1) that

Γ(x + 1) = ∫₀^∞ t^x e^(−t) dt

Integrating by parts,

Γ(x + 1) = [ t^x (−e^(−t)) ]₀^∞ + x ∫₀^∞ t^(x−1) e^(−t) dt

so that

Γ(x + 1) = x Γ(x) .....(2)

This is the fundamental recurrence relation for gamma functions. It can also be written as Γ(x) = (x − 1)Γ(x − 1).

A number of other results can be derived from this as follows. If x = n, a positive integer (n ≥ 1), then

Γ(n + 1) = n Γ(n)
         = n(n − 1) Γ(n − 1)          since Γ(n) = (n − 1)Γ(n − 1)
         = n(n − 1)(n − 2) Γ(n − 2)   since Γ(n − 1) = (n − 2)Γ(n − 2)
         = .......................................
         = n(n − 1)(n − 2)(n − 3) ... 1 · Γ(1)
         = n! Γ(1)

But Γ(1) = ∫₀^∞ t⁰ e^(−t) dt = [−e^(−t)]₀^∞ = 1, so

Γ(n + 1) = n! .....(3)

Examples:

Γ(7) = 6! = 720,  Γ(8) = 7! = 5040,  Γ(9) = 8! = 40320

We can also use the recurrence relation in reverse:

Γ(x + 1) = x Γ(x)  ⟹  Γ(x) = Γ(x + 1)/x

Example: If Γ(7) = 720 then Γ(6) = Γ(6 + 1)/6 = Γ(7)/6 = 720/6 = 120.
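The recurrence and the factorial results can be checked with the standard library's gamma function; a short sketch:

```python
import math

# Gamma(n + 1) = n! for positive integers
assert math.isclose(math.gamma(7), 720)     # 6!
assert math.isclose(math.gamma(8), 5040)    # 7!
assert math.isclose(math.gamma(9), 40320)   # 8!

# Recurrence in reverse: Gamma(6) = Gamma(7)/6
assert math.isclose(math.gamma(6), math.gamma(7) / 6)

# The recurrence also holds at non-integer arguments
print(math.gamma(4.5) / math.gamma(3.5))    # 3.5, by Gamma(x+1) = x Gamma(x)
```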
If x = 1/2, it can be shown that Γ(1/2) = √π (F.E.M. 150).

Using the recurrence relation Γ(x + 1) = xΓ(x) we can obtain the following:

Γ(3/2) = (1/2)Γ(1/2) = (1/2)√π,  so  Γ(3/2) = √π/2
Γ(5/2) = (3/2)Γ(3/2) = (3/2)(√π/2),  so  Γ(5/2) = (3/4)√π

Negative values of x:

Since Γ(x) = Γ(x + 1)/x, then as x → 0, Γ(x) → Γ(0) = ∞.
The same result occurs for all negative integral values of x.

Examples:

At x = −1, Γ(−1) = Γ(0)/(−1) = ∞
At x = −2, Γ(−2) = Γ(−1)/(−2) = ∞, etc.
Also at x = −1/2, Γ(−1/2) = Γ(1/2)/(−1/2) = −2√π
and at x = −3/2, Γ(−3/2) = Γ(−1/2)/(−3/2) = (4/3)√π

(Graph of y = Γ(x) omitted.)
Examples:

1. Evaluate ∫₀^∞ x⁷ e^(−x) dx.
Comparing with Γ(v) = ∫₀^∞ x^(v−1) e^(−x) dx gives v = 8,
i.e. ∫₀^∞ x⁷ e^(−x) dx = Γ(8) = 7! = 5040.

2. Evaluate ∫₀^∞ x³ e^(−4x) dx.
Since Γ(v) = ∫₀^∞ x^(v−1) e^(−x) dx, we use the substitution y = 4x, dy = 4 dx:

I = ∫₀^∞ (y/4)³ e^(−y) (dy/4) = (1/4⁴) ∫₀^∞ y³ e^(−y) dy = (1/4⁴) Γ(v), where v = 4,

so I = Γ(4)/256 = 6/256 = 3/128.

3. Evaluate ∫₀^∞ x^(1/2) e^(−x²) dx.
Use y = x², so dy = 2x dx. Limits: x = 0 → y = 0 and x = ∞ → y = ∞. Also x = y^(1/2), so x^(1/2) = y^(1/4) and dx = dy/(2y^(1/2)). Then

I = ∫₀^∞ y^(1/4) e^(−y) dy/(2y^(1/2)) = (1/2) ∫₀^∞ y^(−1/4) e^(−y) dy = (1/2) ∫₀^∞ y^(v−1) e^(−y) dy, where v = 3/4,

so I = (1/2)Γ(3/4).
From tables, Γ(0.75) = 1.2254, so I ≈ 0.613.
3.1.2 The Beta Function

The beta function β(m, n) is defined by

β(m, n) = ∫₀¹ x^(m−1) (1 − x)^(n−1) dx

It can be shown that the beta function and the gamma function are related by

β(m, n) = Γ(m)Γ(n)/Γ(m + n) = (m − 1)!(n − 1)!/(m + n − 1)!

the factorial form holding for positive integers m and n.

3.1.3 Application of Gamma and Beta Functions

Examples:

1. Evaluate I = ∫₀¹ x⁵(1 − x)⁴ dx.
Comparing this with β(m, n) = ∫₀¹ x^(m−1)(1 − x)^(n−1) dx, then m − 1 = 5 ⟹ m = 6 and n − 1 = 4 ⟹ n = 5, so

I = β(6, 5) = 5! 4!/10! = 1/1260
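The gamma-function representation of the beta function makes these evaluations a one-liner; a sketch using only the standard library:

```python
import math

def beta(m, n):
    """Beta function via its gamma-function representation."""
    return math.gamma(m) * math.gamma(n) / math.gamma(m + n)

# Example 1 above: integral of x^5 (1-x)^4 over [0,1] = B(6,5) = 1/1260
assert math.isclose(beta(6, 5), 1 / 1260)

# Since Gamma(1/2) = sqrt(pi), half-integer arguments bring in pi:
print(beta(0.5, 0.5))   # pi
```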
2. Evaluate I = ∫₀¹ x⁴ √(1 − x²) dx.
Comparing this with β(m, n) = ∫₀¹ x^(m−1)(1 − x)^(n−1) dx, we see that we have x² in the root instead of a single x. Therefore put x² = y, so x = y^(1/2) and dx = (1/2) y^(−1/2) dy. The limits remain unchanged. Then

I = ∫₀¹ y² (1 − y)^(1/2) (1/2) y^(−1/2) dy = (1/2) ∫₀¹ y^(3/2) (1 − y)^(1/2) dy

so m − 1 = 3/2 ⟹ m = 5/2 and n − 1 = 1/2 ⟹ n = 3/2. Therefore

I = (1/2) β(5/2, 3/2) = (1/2) Γ(5/2)Γ(3/2)/Γ(5/2 + 3/2) = (1/2) · ((3/4)√π)((1/2)√π)/3! = π/32

3. Evaluate I = ∫₀³ x³ dx/√(3 − x). (F.E.M. 170)
3.2 Bessel's Functions

Bessel's functions are solutions of Bessel's differential equation

x² d²y/dx² + x dy/dx + (x² − v²)y = 0 .....(1)

where v is a real constant.

By the Frobenius method we assume a series solution of the form

y = x^c (a_0 + a_1 x + a_2 x² + a_3 x³ + ... + a_r x^r + ...),  i.e.  y = Σ_{r=0}^{∞} a_r x^(c+r) .....(2)

where c, a_0, a_1, a_2, ..., a_r, ... are constants, a_0 is the first non-zero coefficient, and c is called the indicial constant. Then

dy/dx = a_0 c x^(c−1) + a_1 (c + 1) x^c + a_2 (c + 2) x^(c+1) + ... + a_r (c + r) x^(c+r−1) + ... .....(3)

d²y/dx² = a_0 c(c − 1) x^(c−2) + a_1 c(c + 1) x^(c−1) + a_2 (c + 1)(c + 2) x^c + ... + a_r (c + r − 1)(c + r) x^(c+r−2) + ... .....(4)

Substituting eqs.(2), (3) and (4) into (1) and equating coefficients of equal powers of x, we obtain c = ±v and a_1 = 0.

The recurrence relation is

a_r = a_(r−2) / [v² − (c + r)²]   for r ≥ 2.
It follows that $a_1 = a_3 = a_5 = a_7 = \ldots = 0$, so that when $c = +v$:
$a_2 = \dfrac{-a_0}{2^2(v+1)}$, $\quad a_4 = \dfrac{a_0}{2^4\,2!\,(v+1)(v+2)}$, $\quad a_6 = \dfrac{-a_0}{2^6\,3!\,(v+1)(v+2)(v+3)}$,
and in general $a_r = \dfrac{(-1)^{r/2}\,a_0}{2^r\,(\frac{r}{2})!\,(v+1)(v+2)\cdots(v+\frac{r}{2})}$ for $r$ even. The resulting solution is
$y_1 = a_0 x^{v}\left[1 - \frac{x^2}{2^2(v+1)} + \frac{x^4}{2^4\,2!\,(v+1)(v+2)} - \frac{x^6}{2^6\,3!\,(v+1)(v+2)(v+3)} + \ldots\right]$
Similarly, when $c = -v$:
$y_2 = a_0 x^{-v}\left[1 + \frac{x^2}{2^2(v-1)} + \frac{x^4}{2^4\,2!\,(v-1)(v-2)} + \frac{x^6}{2^6\,3!\,(v-1)(v-2)(v-3)} + \ldots\right]$
When $v = n$, a non-negative integer, and $a_0$ is chosen as $\frac{1}{2^n\,\Gamma(n+1)}$, the solution for $c = +v = n$ is called the Bessel function of the first kind of order $n$, denoted $J_n(x)$:
$J_n(x) = \left(\frac{x}{2}\right)^{n}\left[\frac{1}{\Gamma(n+1)} - \frac{x^2}{2^2\,(1!)\,\Gamma(n+2)} + \frac{x^4}{2^4\,(2!)\,\Gamma(n+3)} - \ldots\right]$
$= \left(\frac{x}{2}\right)^{n}\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k}}{2^{2k}\,(k!)\,\Gamma(n+k+1)} = \left(\frac{x}{2}\right)^{n}\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k}}{2^{2k}\,(k!)\,(n+k)!}$
Similarly, for $c = -v = -n$ (a negative integer),
$J_{-n}(x) = \left(\frac{x}{2}\right)^{-n}\left[\frac{1}{\Gamma(1-n)} - \frac{x^2}{2^2\,(1!)\,\Gamma(2-n)} + \frac{x^4}{2^4\,(2!)\,\Gamma(3-n)} - \ldots\right]$
$= \left(\frac{x}{2}\right)^{-n}\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k}}{2^{2k}\,(k!)\,\Gamma(k-n+1)} = (-1)^n\left(\frac{x}{2}\right)^{n}\sum_{k=0}^{\infty}\frac{(-1)^k x^{2k}}{2^{2k}\,(k!)\,(n+k)!}$
(for details see F.E.M. 247)
$= (-1)^n J_n(x)$
The two solutions $J_n(x)$ and $J_{-n}(x)$ are therefore linearly dependent on each other. Furthermore, the series for $J_n(x)$ is
$J_n(x) = \left(\frac{x}{2}\right)^{n}\left[\frac{1}{n!} - \frac{1}{(1!)(n+1)!}\left(\frac{x}{2}\right)^2 + \frac{1}{(2!)(n+2)!}\left(\frac{x}{2}\right)^4 - \ldots\right]$
In particular,
$J_0(x) = 1 - \left(\frac{x}{2}\right)^2 + \frac{1}{(2!)^2}\left(\frac{x}{2}\right)^4 - \frac{1}{(3!)^2}\left(\frac{x}{2}\right)^6 + \ldots$
$J_1(x) = \frac{x}{2}\left[1 - \frac{1}{(1!)(2!)}\left(\frac{x}{2}\right)^2 + \frac{1}{(2!)(3!)}\left(\frac{x}{2}\right)^4 - \ldots\right]$
Generating function: for a sequence of functions $f_n(x)$, if $G(x, t) = \sum_n f_n(x)t^n$ we may obtain the properties of $f_n(x)$ from those of $G$, which generates this sequence and is called a generating function of it.
The generating function for $J_n(x)$ is $e^{\frac{x}{2}(t - \frac{1}{t})} = \sum_{n=-\infty}^{\infty} J_n(x)\,t^n$
Recurrence formula: $J_n(x)$ can also be obtained from the recurrence formula
$J_{n+1}(x) = \frac{2n}{x}J_n(x) - J_{n-1}(x)$
For $0 < x < 1$ the $J_n(x)$ are orthogonal.
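A short sketch (plain Python; the helper name `J` is illustrative) that sums the series for $J_n(x)$ and checks the recurrence formula numerically:

```python
import math

def J(n, x, terms=30):
    # Bessel function of the first kind from its power series:
    # J_n(x) = (x/2)^n * sum_k (-1)^k x^(2k) / (2^(2k) k! (n+k)!)
    s = sum((-1) ** k * x ** (2 * k)
            / (4 ** k * math.factorial(k) * math.factorial(n + k))
            for k in range(terms))
    return (x / 2) ** n * s

# Check J_{n+1}(x) = (2n/x) J_n(x) - J_{n-1}(x) at n = 1, x = 1.5
x = 1.5
print(abs(J(2, x) - ((2 / x) * J(1, x) - J(0, x))))  # ~ 0
```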
3.3 Legendre's Polynomials
These are solutions of Legendre's differential equation
$(1 - x^2)\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + k(k+1)y = 0$
where $k$ is a real constant. Solving it by the Frobenius method as before we obtain $c = 0$ and $c = 1$, and the corresponding solutions are
a) $c = 0$: $\;y = a_0\left[1 - \frac{k(k+1)}{2!}x^2 + \frac{k(k-2)(k+1)(k+3)}{4!}x^4 - \ldots\right]$
b) $c = 1$: $\;y = a_1\left[x - \frac{(k-1)(k+2)}{3!}x^3 + \frac{(k-1)(k-3)(k+2)(k+4)}{5!}x^5 - \ldots\right]$
where $a_0$ and $a_1$ are the usual arbitrary constants. When $k$ is an integer $n$, one of the solution series terminates after a finite number of terms. The resulting polynomial in $x$, denoted by $P_n(x)$, is called a Legendre polynomial, with $a_0$ and $a_1$ being chosen so that the polynomial has unit value when $x = 1$. The polynomials are orthogonal on $-1 < x < 1$.
e.g. $P_0(x) = a_0[1 - 0 + 0 - \ldots] = a_0$. We choose $a_0 = 1$ so that $P_0(x) = 1$.
$P_1(x) = a_1[x - 0 + 0 - \ldots] = a_1 x$
$a_1$ is then chosen to make $P_1(x) = 1$ when $x = 1$, so $a_1 = 1$ and $P_1(x) = x$.
$P_2(x) = a_0\left[1 - \frac{2\cdot 3}{2!}x^2 + 0 + 0 + \ldots\right] = a_0(1 - 3x^2)$
If $P_2(x) = 1$ when $x = 1$, then $a_0 = -\frac{1}{2}$, so $P_2(x) = \frac{1}{2}(3x^2 - 1)$.
Using the same procedure, obtain:
$P_3(x) = \frac{1}{2}(5x^3 - 3x)$
$P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)$
$P_5(x) = \frac{1}{8}(63x^5 - 70x^3 + 15x)$ etc.
Legendre polynomials can also be expressed by Rodrigues' formula, given by
$P_n(x) = \frac{1}{2^n\,n!}\,\frac{d^n}{dx^n}(x^2 - 1)^n$
(Use this formula to obtain $P_0(x), P_1(x), P_2(x), P_3(x)$, etc.)
The generating function is
$\frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^{\infty} P_n(x)\,t^n$
To show this, start from the binomial expansion of $\frac{1}{\sqrt{1-v}}$ where $v = 2xt - t^2$, multiply the powers of $2xt - t^2$ out, collect all the terms involving $t^n$, and verify that the sum of these terms is $P_n(x)t^n$.
The recurrence formula for Legendre polynomials is
$P_{n+1}(x) = \frac{2n+1}{n+1}\,x P_n(x) - \frac{n}{n+1}\,P_{n-1}(x)$
This means that if we know $P_{n-1}(x)$ and $P_n(x)$ we can calculate $P_{n+1}(x)$. For example, given that $P_0(x) = 1$ and $P_1(x) = x$, we can calculate $P_2(x)$ using the recurrence formula by taking $P_{n-1} = P_0$, $P_n = P_1$ and $P_{n+1} = P_2$, i.e. $n = 1$.
Substituting these in the formula,
$P_2(x) = \frac{2\cdot 1 + 1}{1+1}\,x P_1(x) - \frac{1}{1+1}\,P_0(x) = \frac{1}{2}(3x^2 - 1)$
Similarly, to find $P_3(x)$ we set $P_{n-1} = P_1$, $P_n = P_2$ and $P_{n+1} = P_3$, where $n = 2$. Substituting these in the formula we have
$P_3(x) = \frac{2\cdot 2 + 1}{2+1}\,x P_2(x) - \frac{2}{2+1}\,P_1(x) = \frac{5}{3}x\cdot\frac{1}{2}(3x^2 - 1) - \frac{2}{3}x = \frac{1}{2}(5x^3 - 3x)$
(Using the recurrence formula, obtain $P_4(x)$ and $P_5(x)$.)
3.4 Hermite Polynomials
They are solutions of the Hermite differential equation
$\frac{d^2y}{dx^2} - 2x\frac{dy}{dx} + 2vy = 0 \qquad (*)$
where $v$ is a parameter. Using the Frobenius method, the solution is
$y = \sum_{r=0}^{\infty} a_r x^{c+r}$, where $a_0 \neq 0$,
$\frac{dy}{dx} = \sum_{r=0}^{\infty} a_r(c+r)x^{c+r-1}$ and $\frac{d^2y}{dx^2} = \sum_{r=0}^{\infty} a_r(c+r)(c+r-1)x^{c+r-2}$
Substituting these in eq. (*) and equating coefficients of like terms we have
$a_0\,c(c-1) = 0 \Rightarrow c = 0$ or $c = 1$
and $a_{r+2} = \frac{2(c+r-v)\,a_r}{(c+r+2)(c+r+1)}$, with $a_1 = a_3 = a_5 = \ldots = 0$.
When $c = 0$ (M.P. by Gupta 8.94),
$y_1 = a_0\left[1 - \frac{2v}{2!}x^2 + \frac{2^2 v(v-2)}{4!}x^4 - \ldots + \frac{(-1)^r 2^r v(v-2)\cdots(v-2r+2)}{(2r)!}x^{2r} + \ldots\right]$
(where $a_1 = 0$). When $c = 1$,
$y_2 = a_0 x\left[1 - \frac{2(v-1)}{3!}x^2 + \frac{2^2(v-1)(v-3)}{5!}x^4 - \ldots + \frac{(-1)^r 2^r (v-1)(v-3)\cdots(v-2r+1)}{(2r+1)!}x^{2r} + \ldots\right]$
The complete solution of eq. (*) is then given by $y = Ay_1 + By_2$, i.e.
$y = A\left[1 - \frac{2v}{2!}x^2 + \frac{2^2 v(v-2)}{4!}x^4 - \ldots\right] + Bx\left[1 - \frac{2(v-1)}{3!}x^2 + \frac{2^2(v-1)(v-3)}{5!}x^4 - \ldots\right]$
where $A$ and $B$ are arbitrary constants. When $v = n$, an integer, the series terminates after a few terms. The resulting polynomials $H_n(x)$ are called Hermite polynomials. The first six of them are:
$H_0(x) = 1$
$H_1(x) = 2x$
$H_2(x) = 4x^2 - 2$
$H_3(x) = 8x^3 - 12x$
$H_4(x) = 16x^4 - 48x^2 + 12$
$H_5(x) = 32x^5 - 160x^3 + 120x$
They can also be given by a corresponding Rodrigues formula,
$H_n(x) = (-1)^n\,e^{x^2}\,\frac{d^n}{dx^n}\!\left(e^{-x^2}\right)$
The generating function is given by
$e^{2tx - t^2} = \sum_{n=0}^{\infty}\frac{H_n(x)}{n!}\,t^n$
This can be proved using the formula for the coefficients of a Maclaurin series and noting that $2tx - t^2 = x^2 - (x - t)^2$.
Hermite polynomials satisfy the recursion formula
$H_{n+1}(x) = 2xH_n(x) - 2nH_{n-1}(x)$
(Given that $H_0 = 1$ and $H_1 = 2x$, use this formula to obtain $H_2, H_3, H_4$ and $H_5$.)
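A minimal sketch of the recursion on coefficient lists (the helper name `hermite` is illustrative, not from the text):

```python
def hermite(n):
    # H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x); coefficients stored as [c0, c1, ...]
    prev, cur = [1], [0, 2]                   # H0 = 1, H1 = 2x
    if n == 0:
        return prev
    for m in range(1, n):
        nxt = [0] + [2 * c for c in cur]      # 2x * H_m
        for i, c in enumerate(prev):
            nxt[i] -= 2 * m * c               # - 2m * H_{m-1}
        prev, cur = cur, nxt
    return cur

print(hermite(4))  # [12, 0, -48, 0, 16], i.e. 16x^4 - 48x^2 + 12
print(hermite(5))  # [0, 120, 0, -160, 0, 32]
```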
3.5 Laguerre Polynomials
They are solutions of the Laguerre differential equation
$x\frac{d^2y}{dx^2} + (1 - x)\frac{dy}{dx} + vy = 0 \qquad (*1)$
Using the Frobenius method again we have
$y = \sum_{r=0}^{\infty} a_r x^{c+r}$, where $a_0 \neq 0$,
$\frac{dy}{dx} = \sum_{r=0}^{\infty} a_r(c+r)x^{c+r-1}$ and $\frac{d^2y}{dx^2} = \sum_{r=0}^{\infty} a_r(c+r)(c+r-1)x^{c+r-2}$
Substituting these in eq. (*1) and equating coefficients of like terms we have
$c^2 = 0 \Rightarrow c = 0$
and $a_{r+1} = \frac{c+r-v}{(c+r+1)^2}\,a_r = \frac{r-v}{(r+1)^2}\,a_r$
$y = a_0\left[1 - vx + \frac{v(v-1)}{(2!)^2}x^2 - \ldots + (-1)^r\,\frac{v(v-1)\cdots(v-r+1)}{(r!)^2}x^r + \ldots\right] \qquad (*2)$
In case $v = n$ (a positive integer) and $a_0 = n!$, the solution (*2) is said to be the Laguerre polynomial of degree $n$ and is denoted by $L_n(x)$, i.e.
$L_n(x) = (-1)^n\left[x^n - \frac{n^2}{1!}x^{n-1} + \frac{n^2(n-1)^2}{2!}x^{n-2} - \ldots + (-1)^n\,n!\right] \qquad (*3)$
Then the solution of the Laguerre equation for $v$ a positive integer is
$y = AL_n(x)$
From eq. (*3) it is easy to show that
$L_0(x) = 1$
$L_1(x) = 1 - x$
$L_2(x) = x^2 - 4x + 2$
$L_3(x) = -x^3 + 9x^2 - 18x + 6$
$L_4(x) = x^4 - 16x^3 + 72x^2 - 96x + 24$
They can also be given by the Rodrigues formula
$L_n(x) = e^{x}\,\frac{d^n}{dx^n}\!\left(x^n e^{-x}\right)$
Their generating function is
$\frac{e^{-xt/(1-t)}}{1-t} = \sum_{n=0}^{\infty}\frac{L_n(x)}{n!}\,t^n$
They satisfy the recursion formula
$L_{n+1}(x) = (2n + 1 - x)L_n(x) - n^2 L_{n-1}(x)$
They are orthogonal for $0 < x < \infty$.
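The same coefficient-list trick generates the polynomials from the recursion (sketch; `laguerre` is an assumed helper name):

```python
def laguerre(n):
    # L_{n+1}(x) = (2n + 1 - x) L_n(x) - n^2 L_{n-1}(x), with L0 = 1, L1 = 1 - x
    prev, cur = [1], [1, -1]
    if n == 0:
        return prev
    for m in range(1, n):
        nxt = [(2 * m + 1) * c for c in cur] + [0]   # (2m+1) L_m, padded one degree
        for i, c in enumerate(cur):
            nxt[i + 1] -= c                          # subtract x * L_m
        for i, c in enumerate(prev):
            nxt[i] -= m * m * c                      # subtract m^2 * L_{m-1}
        prev, cur = cur, nxt
    return cur

print(laguerre(3))  # [6, -18, 9, -1], i.e. -x^3 + 9x^2 - 18x + 6
print(laguerre(4))  # [24, -96, 72, -16, 1]
```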
3.5.1 Hypergeometric Function
These are the solutions of Gauss's hypergeometric differential equation
$x(1 - x)y'' + [\gamma - (\alpha + \beta + 1)x]y' - \alpha\beta y = 0 \qquad (*1)$
where $\alpha$, $\beta$ and $\gamma$ are parametric constants; by choosing specific values of these we can obtain a large number of elementary and higher functions as solutions of (*1). This accounts for the practical importance of (*1).
a) Hypergeometric series
Using the Frobenius method, the indicial equation of (*1) has roots $c = 0$ and $c = 1 - \gamma$; $\;a_{r+1} = \frac{(c+r+\alpha)(c+r+\beta)}{(c+r+1)(c+r+\gamma)}\,a_r$
For $c_1 = 0$, $a_{r+1} = \frac{(\alpha+r)(\beta+r)}{(r+1)(r+\gamma)}\,a_r$ and the Frobenius method gives
$y_1(x) = a_0\left[1 + \frac{\alpha\beta}{1!\,\gamma}x + \frac{\alpha(\alpha+1)\beta(\beta+1)}{2!\,\gamma(\gamma+1)}x^2 + \frac{\alpha(\alpha+1)(\alpha+2)\beta(\beta+1)(\beta+2)}{3!\,\gamma(\gamma+1)(\gamma+2)}x^3 + \ldots\right] \qquad (*2)$
where $\gamma \neq 0, -1, -2, \ldots$. This series is called the hypergeometric series. Its sum $y_1(x)$ is denoted by $F(\alpha, \beta, \gamma; x)$ and is called the hypergeometric function.
b) Special cases: It can be shown that
(i) $\frac{1}{1-x} = F(1, 1, 1; x) = F(1, \beta, \beta; x) = F(\alpha, 1, \alpha; x)$
(ii) $(1 + x)^n = F(-n, \beta, \beta; -x)$, e.g. $F(-n, 1, 1; -x)$
(iii) $(1 - x)^n = 1 - nxF(1 - n, 1, 2; x)$
(iv) $\arctan x = xF(\frac{1}{2}, 1, \frac{3}{2}; -x^2)$
(v) $\arcsin x = xF(\frac{1}{2}, \frac{1}{2}, \frac{3}{2}; x^2)$
(vi) $e^x = \lim_{n\to\infty} F(n, 1, 1; \frac{x}{n})$
(vii) $\ln(1 + x) = xF(1, 1, 2; -x)$
(viii) $\ln\frac{1+x}{1-x} = 2xF(\frac{1}{2}, 1, \frac{3}{2}; x^2)$ etc.
c) Second solution
For $c_2 = 1 - \gamma$, $a_{r+1} = \frac{(\alpha - \gamma + r + 1)(\beta - \gamma + r + 1)}{(2 - \gamma + r)(r + 1)}\,a_r$ and the Frobenius method yields the following solution (where $\gamma \neq 2, 3, 4, \ldots$):
$y_2(x) = a_0\,x^{1-\gamma}\left[1 + \frac{(\alpha-\gamma+1)(\beta-\gamma+1)}{1!\,(2-\gamma)}x + \frac{(\alpha-\gamma+1)(\alpha-\gamma+2)(\beta-\gamma+1)(\beta-\gamma+2)}{2!\,(2-\gamma)(3-\gamma)}x^2 + \ldots\right]$
which gives $y_2(x) = x^{1-\gamma}F(\alpha - \gamma + 1, \beta - \gamma + 1, 2 - \gamma; x)$. The complete solution is
$y = Ay_1(x) + By_2(x) = AF(\alpha, \beta, \gamma; x) + Bx^{1-\gamma}F(\alpha - \gamma + 1, \beta - \gamma + 1, 2 - \gamma; x)$
Chapter 4
INTEGRAL TRANSFORM AND
FOURIER SERIES
4.1 Laplace Transform
The Laplace transform (LT) is used in solving ordinary differential equations (ODEs). It has the following advantages:
- The solution of the ODE is obtained by algebraic processes.
- The initial conditions are involved from the early stages, so that the determination of the particular solution is considerably shortened.
- The method enables us to deal with situations where the function is discontinuous.
The LT of a function $f(t)$ is denoted by $\mathcal{L}\{f(t)\}$ or $F(s)$ and is defined by the integral $\int_0^\infty f(t)e^{-st}\,dt$, i.e.
$\mathcal{L}\{f(t)\} = F(s) = \int_0^\infty f(t)e^{-st}\,dt$
where $s$ is a positive constant such that $f(t)e^{-st}$ converges as $t \to \infty$.
Examples
1. To find the LT of a constant function $f(t) = a$:
$\mathcal{L}\{a\} = \int_0^\infty a e^{-st}\,dt = a\left[\frac{e^{-st}}{-s}\right]_0^\infty = -\frac{a}{s}\left[e^{-st}\right]_0^\infty = -\frac{a}{s}[0 - 1] = \frac{a}{s}$
$\mathcal{L}\{a\} = \frac{a}{s} \qquad (1)$
e.g. for $a = 1$, $\mathcal{L}\{1\} = \frac{1}{s}$
2. If $f(t) = e^{at}$:
$\mathcal{L}\{e^{at}\} = \int_0^\infty e^{at}e^{-st}\,dt = \int_0^\infty e^{-(s-a)t}\,dt = \left[\frac{e^{-(s-a)t}}{-(s-a)}\right]_0^\infty = -\frac{1}{s-a}[0 - 1] = \frac{1}{s-a}$
$\mathcal{L}\{e^{at}\} = \frac{1}{s-a} \qquad (2)$
Similarly,
$\mathcal{L}\{e^{-at}\} = \frac{1}{s+a} \qquad (3)$
3. If $f(t) = \sin at$:
$\mathcal{L}\{\sin at\} = \int_0^\infty \sin(at)\,e^{-st}\,dt = \int_0^\infty \frac{e^{iat} - e^{-iat}}{2i}\,e^{-st}\,dt$
$= \frac{1}{2i}\left[\int_0^\infty e^{-(s-ia)t}\,dt - \int_0^\infty e^{-(s+ia)t}\,dt\right] = \frac{1}{2i}\left[\frac{1}{s-ia} - \frac{1}{s+ia}\right]$
$= \frac{1}{2}\left[\frac{1}{a+is} + \frac{1}{a-is}\right] = \frac{1}{2}\,\frac{a - is + a + is}{(a+is)(a-is)} = \frac{a}{s^2 + a^2}$
$\mathcal{L}\{\sin at\} = \frac{a}{s^2 + a^2} \qquad (4)$
e.g. $\mathcal{L}\{\sin 2t\} = \frac{2}{s^2 + 4}$
Similarly (show that):
4. If $f(t) = \cos at$,
$\mathcal{L}\{\cos at\} = \frac{s}{s^2 + a^2} \qquad (5)$
e.g. $\mathcal{L}\{\cos 4t\} = \frac{s}{s^2 + 16}$
5. If $f(t) = t^n$,
$\mathcal{L}\{t^n\} = \frac{n!}{s^{n+1}} \qquad (6)$
e.g. $\mathcal{L}\{t^3\} = \frac{3!}{s^{3+1}} = \frac{6}{s^4}$.
6. If $f(t) = \sinh at$:
$\mathcal{L}\{\sinh at\} = \int_0^\infty \sinh(at)\,e^{-st}\,dt = \int_0^\infty \frac{e^{at} - e^{-at}}{2}\,e^{-st}\,dt$
$= \frac{1}{2}\left[\int_0^\infty e^{-(s-a)t}\,dt - \int_0^\infty e^{-(s+a)t}\,dt\right] = \frac{1}{2}\left[\frac{1}{s-a} - \frac{1}{s+a}\right] = \frac{a}{s^2 - a^2}$
$\mathcal{L}\{\sinh at\} = \frac{a}{s^2 - a^2} \qquad (7)$
e.g. $\mathcal{L}\{\sinh 2t\} = \frac{2}{s^2 - 4}$
Similarly (show that):
7. If $f(t) = \cosh at = \frac{1}{2}(e^{at} + e^{-at})$,
$\mathcal{L}\{\cosh at\} = \frac{s}{s^2 - a^2} \qquad (8)$
e.g. $\mathcal{L}\{4\cosh 3t\} = 4\,\frac{s}{s^2 - 3^2} = \frac{4s}{s^2 - 9}$
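These table entries can be sanity-checked by direct numerical integration of the defining integral; a rough sketch (midpoint rule with an arbitrary truncated upper limit):

```python
import math

def laplace_num(f, s, upper=40.0, n=400_000):
    # Midpoint-rule approximation of the Laplace integral of f, truncated at `upper`
    h = upper / n
    return sum(f((i + 0.5) * h) * math.exp(-s * (i + 0.5) * h) for i in range(n)) * h

s = 3.0
print(laplace_num(lambda t: math.sin(2 * t), s), 2 / (s * s + 4))   # both ~ 0.153846
print(laplace_num(lambda t: math.cosh(2 * t), s), s / (s * s - 4))  # both ~ 0.6
```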
Existence theorem for Laplace transforms
Let $f(t)$ be a function that is piecewise continuous on every finite interval in the range $t \ge 0$ and satisfies $|f(t)| \le Me^{kt}$ for all $t \ge 0$ and for some constants $k$ and $M$. Then the LT of $f(t)$ exists for all $s > k$.
A function $f(t)$ is said to be piecewise continuous in an interval $(a, b)$ if
(i) the interval can be divided into a finite number of subintervals in each of which $f(t)$ is continuous;
(ii) the limits of $f(t)$ as $t$ approaches the endpoints of each subinterval are finite.
In other words, a piecewise continuous function is one that has a finite number of finite discontinuities (see fig.).
4.1.1 Inverse Transform
Given a LT $F(s)$, one can find the function $f(t)$ by the inverse transform
$f(t) = \mathcal{L}^{-1}\{F(s)\}$, where $\mathcal{L}^{-1}$ indicates the inverse transform.
e.g. $\mathcal{L}^{-1}\left\{\frac{1}{s-2}\right\} = e^{2t}$, $\quad\mathcal{L}^{-1}\left\{\frac{4}{s}\right\} = 4$, $\quad\mathcal{L}^{-1}\left\{\frac{s}{s^2+25}\right\} = \cos 5t$
$\mathcal{L}^{-1}\left\{\frac{3s+1}{s^2 - s - 6}\right\} = \mathcal{L}^{-1}\left\{\frac{1}{s+2} + \frac{2}{s-3}\right\}$ (by partial fractions)
$= \mathcal{L}^{-1}\left\{\frac{1}{s+2}\right\} + \mathcal{L}^{-1}\left\{\frac{2}{s-3}\right\}$
(Note: $\mathcal{L}$ and $\mathcal{L}^{-1}$ are linear operators. Prove it.)
$= e^{-2t} + 2e^{3t}$
Rules of Partial Fractions
1. The numerator must be of lower degree than the denominator. If it is not, then we first divide out.
2. Factorize the denominator into its prime factors. These determine the shapes of the partial fractions.
3. A linear factor $(s+a)$ gives a partial fraction $\frac{A}{s+a}$, where $A$ is a constant to be determined.
4. A repeated factor $(s+a)^2$ gives $\frac{A}{s+a} + \frac{B}{(s+a)^2}$.
5. Similarly, $(s+a)^3$ gives $\frac{A}{s+a} + \frac{B}{(s+a)^2} + \frac{C}{(s+a)^3}$.
6. A quadratic factor $(s^2 + ps + q)$ gives $\frac{As+B}{s^2+ps+q}$.
7. A repeated quadratic factor $(s^2 + ps + q)^2$ gives $\frac{As+B}{s^2+ps+q} + \frac{Cs+D}{(s^2+ps+q)^2}$.
Examples
1. $\frac{s^2 - 15s + 41}{(s+2)(s-3)^2} = \frac{3}{s+2} - \frac{2}{s-3} + \frac{1}{(s-3)^2}$
2. $\mathcal{L}^{-1}\left\{\frac{4s^2 - 5s + 6}{(s+1)(s^2+4)}\right\}$: but
$\frac{4s^2 - 5s + 6}{(s+1)(s^2+4)} = \frac{A}{s+1} + \frac{Bs+C}{s^2+4} = \frac{3}{s+1} + \frac{s-6}{s^2+4} = \frac{3}{s+1} + \frac{s}{s^2+4} - \frac{6}{s^2+4}$
$f(t) = \mathcal{L}^{-1}\left\{\frac{4s^2 - 5s + 6}{(s+1)(s^2+4)}\right\} = \mathcal{L}^{-1}\left\{\frac{3}{s+1} + \frac{s}{s^2+4} - \frac{6}{s^2+4}\right\}$
$f(t) = 3e^{-t} + \cos 2t - 3\sin 2t$
PROPERTIES OF LAPLACE TRANSFORM
Linearity: $\mathcal{L}\{af(t) + bg(t)\} = a\mathcal{L}\{f(t)\} + b\mathcal{L}\{g(t)\}$ (Prove!)
(1) The first shift theorem (or s-shifting):
It states that if $\mathcal{L}\{f(t)\} = F(s)$ then
$\mathcal{L}\{e^{-at}f(t)\} = F(s+a) \qquad (9)$
i.e. $\mathcal{L}\{e^{-at}f(t)\}$ is the same as $\mathcal{L}\{f(t)\}$ with $s$ replaced by $(s+a)$.
Examples
1. If $\mathcal{L}\{\sin 2t\} = \frac{2}{s^2+4}$, then $\mathcal{L}\{e^{-3t}\sin 2t\} = \frac{2}{(s+3)^2+4} = \frac{2}{s^2+6s+13}$
2. If $\mathcal{L}\{t^2\} = \frac{2}{s^3}$, then $\mathcal{L}\{t^2 e^{4t}\} = \frac{2}{(s-4)^3}$
(2) Theorem 2: Multiplying by $t$ (or derivative of LT):
If $\mathcal{L}\{f(t)\} = F(s)$ then
$\mathcal{L}\{tf(t)\} = -\frac{d}{ds}F(s) \qquad (10)$
e.g. if $\mathcal{L}\{\sin 2t\} = \frac{2}{s^2+4}$, then $\mathcal{L}\{t\sin 2t\} = -\frac{d}{ds}\left[\frac{2}{s^2+4}\right] = \frac{4s}{(s^2+4)^2}$
(3) Theorem 3: Dividing by $t$: If $\mathcal{L}\{f(t)\} = F(s)$ then
$\mathcal{L}\left\{\frac{f(t)}{t}\right\} = \int_s^\infty F(s)\,ds \qquad (11)$
provided the limit of $\frac{f(t)}{t}$ as $t \to 0$ exists; we use l'Hopital's rule to find out if it does.
e.g. $\mathcal{L}\left\{\frac{\sin at}{t}\right\}$: here $\lim_{t\to 0}\frac{\sin at}{t} = \frac{0}{0}$ (undefined).
By l'Hopital's rule, we differentiate top and bottom separately and substitute $t = 0$ in the result to ascertain the limit of the new function:
$\lim_{t\to 0}\frac{\sin at}{t} = \lim_{t\to 0}\frac{a\cos at}{1} = a$,
i.e. the limit exists. The theorem can, therefore, be applied.
Since $\mathcal{L}\{\sin at\} = \frac{a}{s^2+a^2}$, then
$\mathcal{L}\left\{\frac{\sin at}{t}\right\} = \int_s^\infty \frac{a}{s^2+a^2}\,ds = \left[\arctan\frac{s}{a}\right]_s^\infty = \frac{\pi}{2} - \arctan\frac{s}{a} = \arctan\frac{a}{s}$
(4) Transform of derivatives
Let $\frac{df(t)}{dt} = f'(t)$ and $\frac{d^2 f(t)}{dt^2} = f''(t)$.
Then $\mathcal{L}\{f'(t)\} = \int_0^\infty e^{-st}f'(t)\,dt$ by definition. Integrating by parts,
$\mathcal{L}\{f'(t)\} = \left[e^{-st}f(t)\right]_0^\infty + \int_0^\infty f(t)\,s e^{-st}\,dt$
i.e. $\mathcal{L}\{f'(t)\} = -f(0) + s\mathcal{L}\{f(t)\}$
$\mathcal{L}\{f'(t)\} = sF(s) - f(0) \qquad (12)$
Similarly, $\mathcal{L}\{f''(t)\} = -f'(0) + s\mathcal{L}\{f'(t)\} = -f'(0) + s[-f(0) + s\mathcal{L}\{f(t)\}]$
$\mathcal{L}\{f''(t)\} = s^2 F(s) - sf(0) - f'(0) \qquad (13)$
Similarly, $\mathcal{L}\{f'''(t)\} = s^3 F(s) - s^2 f(0) - sf'(0) - f''(0) \qquad (14)$
Alternative notation:
Let $x = f(t)$, $f(0) = x_0$, $f'(0) = x_1$, $f''(0) = x_2, \ldots, f^{(n)}(0) = x_n$, and $\bar{x} = \mathcal{L}\{x\} = \mathcal{L}\{f(t)\} = F(s)$. We now have
$\mathcal{L}\{x\} = \bar{x}$
$\mathcal{L}\{\dot{x}\} = s\bar{x} - x_0$
$\mathcal{L}\{\ddot{x}\} = s^2\bar{x} - sx_0 - x_1$
$\mathcal{L}\{\dddot{x}\} = s^3\bar{x} - s^2 x_0 - sx_1 - x_2$
4.1.2 Solution of Differential Equations by Laplace Transform
Procedure:
a) Rewrite the equation in terms of LT.
b) Insert the given initial conditions.
c) Rearrange the equation algebraically to give the transform of the solution.
d) Determine the inverse transform to obtain the particular solution.
Solution of first order differential equations
Example
Solve the equation $\frac{dx}{dt} - 2x = 4$, given that at $t = 0$, $x = 1$.
We go through the four stages as follows:
a) $\mathcal{L}\{x\} = \bar{x}$, $\mathcal{L}\{\dot{x}\} = s\bar{x} - x_0$, $\mathcal{L}\{4\} = \frac{4}{s}$.
Then the equation becomes $(s\bar{x} - x_0) - 2\bar{x} = \frac{4}{s}$
b) Insert the initial condition that at $t = 0$, $x = 1$, i.e. $x_0 = 1$:
$s\bar{x} - 1 - 2\bar{x} = \frac{4}{s}$
c) Now we rearrange this to give an expression for $\bar{x}$:
$\bar{x} = \frac{s+4}{s(s-2)}$
d) Finally, we take the inverse transform to obtain $x$:
$\frac{s+4}{s(s-2)} = \frac{A}{s} + \frac{B}{s-2}$, so $s + 4 = A(s-2) + Bs$
(i) Put $(s - 2) = 0$, i.e. $s = 2$: $6 = 2B$, so $B = 3$
(ii) Put $s = 0$: $4 = -2A$, so $A = -2$
$\bar{x} = \frac{s+4}{s(s-2)} = \frac{3}{s-2} - \frac{2}{s}$, hence $x = 3e^{2t} - 2$
Solve the following equations:
1. $\frac{dx}{dt} + 2x = 10e^{3t}$, given that at $t = 0$, $x = 6$
2. $\frac{dx}{dt} - 4x = 2e^{2t} + e^{4t}$, given that at $t = 0$, $x = 0$
Solution of second order differential equations
Example
Solve the equation $\frac{d^2x}{dt^2} - 3\frac{dx}{dt} + 2x = 2e^{3t}$, given that at $t = 0$, $x = 5$ and $\frac{dx}{dt} = 7$.
a) $\mathcal{L}\{x\} = \bar{x}$, $\mathcal{L}\{\dot{x}\} = s\bar{x} - x_0$, $\mathcal{L}\{\ddot{x}\} = s^2\bar{x} - sx_0 - x_1$.
The equation becomes $(s^2\bar{x} - sx_0 - x_1) - 3(s\bar{x} - x_0) + 2\bar{x} = \frac{2}{s-3}$
b) Insert the initial conditions. In this case $x_0 = 5$ and $x_1 = 7$:
$(s^2\bar{x} - 5s - 7) - 3(s\bar{x} - 5) + 2\bar{x} = \frac{2}{s-3}$
c) Rearrange to obtain $\bar{x}$ as $\bar{x} = \frac{5s^2 - 23s + 26}{(s-1)(s-2)(s-3)}$
d) Now for partial fractions:
$\frac{5s^2 - 23s + 26}{(s-1)(s-2)(s-3)} = \frac{A}{s-1} + \frac{B}{s-2} + \frac{C}{s-3}$
$5s^2 - 23s + 26 = A(s-2)(s-3) + B(s-1)(s-3) + C(s-1)(s-2)$
giving $A = 4$, $B = 0$, $C = 1$.
$\bar{x} = \frac{4}{s-1} + \frac{1}{s-3}$, hence $x = 4e^{t} + e^{3t}$
Solve $\frac{d^2x}{dt^2} - 4x = 24\cos 2t$, given that at $t = 0$, $x = 3$ and $\frac{dx}{dt} = 4$.
Solution of simultaneous differential equations
Example
Solve the pair of simultaneous equations
$\dot{y} - x = e^{t}$
$\dot{x} + y = e^{-t}$
given that at $t = 0$, $x = 0$ and $y = 0$.
a) $(s\bar{y} - y_0) - \bar{x} = \frac{1}{s-1}$; $\quad(s\bar{x} - x_0) + \bar{y} = \frac{1}{s+1}$
b) Insert the initial conditions $x_0 = 0$ and $y_0 = 0$:
$s\bar{y} - \bar{x} = \frac{1}{s-1}$
$s\bar{x} + \bar{y} = \frac{1}{s+1}$
c) Eliminating $\bar{y}$ (multiply the second equation by $s$ and subtract the first):
$s\bar{y} - \bar{x} = \frac{1}{s-1}$; $\quad s\bar{y} + s^2\bar{x} = \frac{s}{s+1}$
$\bar{x} = \frac{s^2 - 2s - 1}{(s-1)(s+1)(s^2+1)} = -\frac{1}{2}\cdot\frac{1}{s-1} - \frac{1}{2}\cdot\frac{1}{s+1} + \frac{s}{s^2+1} + \frac{1}{s^2+1}$
d) $x = -\frac{1}{2}e^{t} - \frac{1}{2}e^{-t} + \cos t + \sin t = \sin t + \cos t - \cosh t$
Eliminating $\bar{x}$ in (b) we have
$\bar{y} = \frac{s^2 + 2s - 1}{(s-1)(s+1)(s^2+1)} = \frac{1}{2}\cdot\frac{1}{s-1} + \frac{1}{2}\cdot\frac{1}{s+1} - \frac{s}{s^2+1} + \frac{1}{s^2+1}$
$y = \frac{1}{2}e^{t} + \frac{1}{2}e^{-t} - \cos t + \sin t = \sin t - \cos t + \cosh t$
So the results are:
$x = \sin t + \cos t - \cosh t$
$y = \sin t - \cos t + \cosh t$
4.2 The Dirac Delta Function (the impulse function)
It represents an extremely large force acting for a minutely small interval of time. Consider a single rectangular pulse of width $b$ and height $\frac{1}{b}$ occurring at $t = a$, as shown in Figs. 1(a) and (b).
If we reduce the width of the pulse to $\frac{b}{2}$ and keep the area of the pulse constant (1 unit), the height of the pulse will be $\frac{2}{b}$. If we continue reducing the width of the pulse while maintaining an area of unity, then as $b \to 0$ the height $\frac{1}{b} \to \infty$, and we have the Dirac delta function. It is denoted by $\delta(t-a)$. Graphically it is represented by a rectangular pulse of zero width and infinite height, Fig. 2.
If the Dirac delta function is at the origin, $a = 0$ and so it is denoted by $\delta(t)$.
4.2.1 Integration involving the impulse function
From the definition of $\delta(t-a)$:
$\int_p^q \delta(t-a)\,dt = 1$ for $p < a < q$, and $\int_0^\infty f(t)\,\delta(t-a)\,dt = f(a)$
Hence, if $f(t) = e^{-st}$, this becomes
$\int_0^\infty e^{-st}\,\delta(t-a)\,dt = \mathcal{L}\{\delta(t-a)\} = e^{-as}$
Similarly, $\mathcal{L}\{f(t)\,\delta(t-a)\} = \int_0^\infty e^{-st}f(t)\,\delta(t-a)\,dt = f(a)\,e^{-as}$
4.2.2 Differential equations involving the impulse function
Example
Solve the equation $\ddot{x} + 4\dot{x} + 13x = 2\delta(t)$, where, at $t = 0$, $x = 2$ and $\dot{x} = 0$.
a) Expressing in LT, we have $(s^2\bar{x} - sx_0 - x_1) + 4(s\bar{x} - x_0) + 13\bar{x} = 2\cdot 1$
b) Inserting the initial conditions and simplifying, we have $\bar{x} = \frac{2s+10}{s^2+4s+13}$
c) Rearranging the denominator by completing the square, this can be written as
$\bar{x} = \frac{2(s+2)}{(s+2)^2+9} + \frac{6}{(s+2)^2+9}$
d) The inverse LT is $x = 2e^{-2t}\cos 3t + 2e^{-2t}\sin 3t = 2e^{-2t}(\cos 3t + \sin 3t)$
4.3 Fourier Series
You have seen from Maclaurin's and Taylor's series that an infinitely differentiable function can be expressed in the form of an infinite series in $x$. A Fourier series, on the other hand, enables us to represent a periodic function as an infinite trigonometrical series in sine and cosine terms. We can use a Fourier series to represent a function containing discontinuities, unlike Maclaurin's and Taylor's series.
Periodic function: A function $f(t)$ is periodic if
$f(t) = f(t + nT)$, $n = 0, 1, 2, \ldots$
$T$ is called the period. For sine and cosine the period is $T = 2\pi$, so that $\sin t = \sin(t + 2n\pi)$ and $\cos t = \cos(t + 2n\pi)$.
Analytical description of a periodic function
Many periodic functions are non-sinusoidal.
Examples
1. $f(t) = 3$, $0 < t < 4$; $f(t) = 0$, $4 < t < 6$; $f(t) = f(t+6)$, i.e. the period is 6.
2. $f(t) = \frac{5}{8}t$, $0 < t < 8$; $f(t) = f(t+8)$
Sketch the following periodic functions:
1. $f(t) = 4$, $0 < t < 5$; $f(t) = 0$, $5 < t < 8$; $f(t) = f(t+8)$
2. $f(t) = 3t - t^2$, $0 < t < 3$; $f(t) = f(t+3)$
4.3.1 Fourier series of functions of period 2π
Any periodic function $f(x) = f(x + 2n\pi)$ can be written as a Fourier series
$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}(a_n\cos nx + b_n\sin nx)$
$= \frac{1}{2}a_0 + a_1\cos x + a_2\cos 2x + \ldots + b_1\sin x + b_2\sin 2x + \ldots$
(where $a_0, a_n, b_n$, $n = 1, 2, 3, \ldots$ are Fourier coefficients) or as
$f(x) = \frac{1}{2}a_0 + c_1\sin(x + \phi_1) + c_2\sin(2x + \phi_2) + \ldots$
where $c_i = \sqrt{a_i^2 + b_i^2}$ and $\phi_i = \arctan(\frac{a_i}{b_i})$.
$c_1\sin(x + \phi_1)$ is the first harmonic or fundamental;
$c_2\sin(2x + \phi_2)$ is the second harmonic;
$c_n\sin(nx + \phi_n)$ is the $n$th harmonic.
For the Fourier series to accurately represent $f(x)$ it should be such that if we put $x = x_1$ in the series, the answer should be approximately equal to the value of $f(x_1)$, i.e. the value should converge to $f(x_1)$ as more and more terms of the series are evaluated. For this to happen $f(x)$ must satisfy the following Dirichlet conditions:
a) $f(x)$ must be defined and single-valued.
b) $f(x)$ must be continuous or have a finite number of discontinuities within a periodic interval.
c) $f(x)$ and $f'(x)$ must be piecewise continuous in the periodic interval.
If these conditions are met, the series converges fairly quickly to $f(x_1)$ if $x = x_1$, and only the first few terms are required to give a good approximation of the function $f(x)$.
Fourier coefficients: The Fourier coefficients above are given by
$a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx$
$a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx$
$b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx$
Odd and even functions
a) Even functions: A function $f(x)$ is said to be even if $f(-x) = f(x)$. The graph of an even function is, therefore, symmetrical about the $y$-axis, e.g. $f(x) = x^2$, $f(x) = \cos x$.
b) Odd functions: A function $f(x)$ is said to be odd if $f(-x) = -f(x)$; the graph of an odd function is thus symmetrical about the origin, e.g. $f(x) = x^3$, $f(x) = \sin x$.
Products of odd and even functions:
(even)(even) = even
(odd)(odd) = even
(odd)(even) = odd
(neither)(odd) = neither
(neither)(even) = neither
Theorem 1: If $f(x)$ is defined over the interval $-\pi < x < \pi$ and $f(x)$ is even, then the Fourier series for $f(x)$ contains cosine terms only. Here $a_0$ is included.
Example
$f(x) = 0$, $\;-\pi < x < -\frac{\pi}{2}$
$f(x) = 4$, $\;-\frac{\pi}{2} < x < \frac{\pi}{2}$
$f(x) = 0$, $\;\frac{\pi}{2} < x < \pi$
$f(x) = f(x + 2\pi)$
The waveform is symmetrical about the $y$-axis; therefore it is even.
$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty} a_n\cos nx$
a) $a_0 = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\,dx = \frac{2}{\pi}\int_0^{\pi} f(x)\,dx = \frac{2}{\pi}\int_0^{\pi/2} 4\,dx = \frac{2}{\pi}[4x]_0^{\pi/2} = 4$
b) $a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos nx\,dx = \frac{2}{\pi}\int_0^{\pi/2} 4\cos nx\,dx = \frac{8}{\pi}\left[\frac{\sin nx}{n}\right]_0^{\pi/2} = \frac{8}{n\pi}\sin\frac{n\pi}{2}$
But $\sin\frac{n\pi}{2} = 0$ for $n$ even, $1$ for $n = 1, 5, 9, \ldots$, and $-1$ for $n = 3, 7, 11, \ldots$
$f(x) = 2 + \frac{8}{\pi}\left[\cos x - \frac{1}{3}\cos 3x + \frac{1}{5}\cos 5x - \frac{1}{7}\cos 7x + \ldots\right]$
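A quick partial-sum check of this series (plain Python sketch): inside the pulse the sums should approach 4, outside they should approach 0.

```python
import math

def f_series(x, terms=2000):
    # Partial sum of f(x) = 2 + (8/pi)(cos x - cos 3x / 3 + cos 5x / 5 - ...)
    s = 2.0
    for i in range(terms):
        n = 2 * i + 1
        s += (-1) ** i * (8 / math.pi) * math.cos(n * x) / n
    return s

print(f_series(0.0))  # close to 4 (inside |x| < pi/2)
print(f_series(2.5))  # close to 0 (outside the pulse)
```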
Theorem 2: If $f(x)$ is an odd function defined over the interval $-\pi < x < \pi$, then the Fourier series for $f(x)$ contains sine terms only. Here $a_0 = a_n = 0$.
Example
$f(x) = -6$, $\;-\pi < x < 0$
$f(x) = 6$, $\;0 < x < \pi$
$f(x) = f(x + 2\pi)$
This is an odd function, so $f(x)$ contains only the sine terms,
i.e. $f(x) = \sum_{n=1}^{\infty} b_n\sin nx$, and $b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin nx\,dx$
$f(x)\sin nx$ is even, since it is a product of two odd functions.
$b_n = \frac{2}{\pi}\int_0^{\pi} f(x)\sin nx\,dx = \frac{2}{\pi}\int_0^{\pi} 6\sin nx\,dx = \frac{12}{\pi}\left[\frac{-\cos nx}{n}\right]_0^{\pi} = \frac{12}{n\pi}(1 - \cos n\pi)$
$f(x) = \frac{24}{\pi}\left[\sin x + \frac{1}{3}\sin 3x + \frac{1}{5}\sin 5x + \ldots\right]$
If $f(x)$ is neither even nor odd, we must obtain expressions for $a_0$, $a_n$ and $b_n$ in full.
Examples
Determine the Fourier series of the function shown:
$f(x) = \frac{2x}{\pi}$, $\;0 < x < \pi$
$f(x) = 2$, $\;\pi < x < 2\pi$
$f(x) = f(x + 2\pi)$
This is neither odd nor even, so
$f(x) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}(a_n\cos nx + b_n\sin nx)$
a) $a_0 = \frac{1}{\pi}\int_0^{2\pi} f(x)\,dx = \frac{1}{\pi}\left[\int_0^{\pi}\frac{2x}{\pi}\,dx + \int_{\pi}^{2\pi} 2\,dx\right] = \frac{1}{\pi}\left[\frac{x^2}{\pi}\right]_0^{\pi} + 4 - 2 = 1 + 4 - 2 = 3$
$a_0 = 3$
b) $a_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\cos nx\,dx = \frac{1}{\pi}\left[\int_0^{\pi}\frac{2x}{\pi}\cos nx\,dx + \int_{\pi}^{2\pi} 2\cos nx\,dx\right]$
Now $\int_0^{\pi}\frac{2x}{\pi}\cos nx\,dx = \frac{2}{\pi}\left\{\left[\frac{x\sin nx}{n}\right]_0^{\pi} + \left[\frac{\cos nx}{n^2}\right]_0^{\pi}\right\} = \frac{2}{n^2\pi}(\cos n\pi - 1)$
and $\int_{\pi}^{2\pi} 2\cos nx\,dx = \frac{2}{n}[\sin nx]_{\pi}^{2\pi} = 0$, so
$a_n = \frac{2}{n^2\pi^2}(\cos n\pi - 1)$
$a_n = 0$ ($n$ even); $\;a_n = -\frac{4}{\pi^2 n^2}$ ($n$ odd)
c) $b_n = \frac{1}{\pi}\int_0^{2\pi} f(x)\sin nx\,dx = \frac{1}{\pi}\left[\int_0^{\pi}\frac{2x}{\pi}\sin nx\,dx + \int_{\pi}^{2\pi} 2\sin nx\,dx\right]$
Now $\int_0^{\pi}\frac{2x}{\pi}\sin nx\,dx = \frac{2}{\pi}\left\{\left[\frac{-x\cos nx}{n}\right]_0^{\pi} + \frac{1}{n}\int_0^{\pi}\cos nx\,dx\right\} = -\frac{2}{n}\cos n\pi$
and $\int_{\pi}^{2\pi} 2\sin nx\,dx = \frac{2}{n}[-\cos nx]_{\pi}^{2\pi} = -\frac{2}{n}(1 - \cos n\pi)$, since $\cos 2n\pi = 1$.
$b_n = \frac{1}{\pi}\left[-\frac{2}{n}\cos n\pi - \frac{2}{n}(1 - \cos n\pi)\right] = -\frac{2}{n\pi}$
$f(x) = \frac{3}{2} - \frac{4}{\pi^2}\left[\cos x + \frac{1}{9}\cos 3x + \frac{1}{25}\cos 5x + \ldots\right] - \frac{2}{\pi}\left[\sin x + \frac{1}{2}\sin 2x + \frac{1}{3}\sin 3x + \frac{1}{4}\sin 4x + \ldots\right]$
4.3.2 Half-range series
Sometimes a function of period $2\pi$ is defined over the range $0$ to $\pi$ instead of the normal $-\pi$ to $\pi$, or $0$ to $2\pi$. In this case one can choose to obtain a half-range cosine series, by assuming that the function is part of an even function, or a sine series, by assuming that the function is part of an odd function.
Example
$f(x) = 2x$, $\;0 < x < \pi$; $\;f(x) = f(x + 2\pi)$ (see fig. 1).
To obtain a half-range cosine series we assume an even function as in fig. 2:
$a_0 = \frac{2}{\pi}\int_0^{\pi} f(x)\,dx = \frac{2}{\pi}\int_0^{\pi} 2x\,dx = \frac{2}{\pi}\left[x^2\right]_0^{\pi} = 2\pi$
$a_n = \frac{2}{\pi}\int_0^{\pi} 2x\cos nx\,dx = \frac{4}{\pi}\left\{\left[\frac{x\sin nx}{n}\right]_0^{\pi} - \frac{1}{n}\int_0^{\pi}\sin nx\,dx\right\}$
Simplifying, $a_n = 0$ for $n$ even and $a_n = -\frac{8}{\pi n^2}$ for $n$ odd. In this series $b_n = 0$, and so
$f(x) = \pi - \frac{8}{\pi}\left[\cos x + \frac{1}{9}\cos 3x + \frac{1}{25}\cos 5x + \ldots\right]$
Obtain a half-range sine series for $f(x)$.
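A quick partial-sum check of the cosine series at an interior point (plain Python sketch):

```python
import math

def f_series(x, terms=1000):
    # Partial sum of f(x) = pi - (8/pi)(cos x + cos 3x / 9 + cos 5x / 25 + ...)
    s = math.pi
    for i in range(terms):
        n = 2 * i + 1
        s -= (8 / math.pi) * math.cos(n * x) / (n * n)
    return s

x = math.pi / 4
print(f_series(x), 2 * x)  # both close to pi/2 = 1.5708
```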
4.3.3 Functions with arbitrary period T
Here $f(t) = f(t + T)$, the frequency is $f = \frac{1}{T}$, and the angular frequency is $\omega = 2\pi f = \frac{2\pi}{T}$, so that $T = \frac{2\pi}{\omega}$.
$f(t) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left(a_n\cos n\omega t + b_n\sin n\omega t\right) = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left(a_n\cos\frac{2n\pi t}{T} + b_n\sin\frac{2n\pi t}{T}\right)$
where
$a_0 = \frac{2}{T}\int_0^{T} f(t)\,dt = \frac{\omega}{\pi}\int_0^{2\pi/\omega} f(t)\,dt$
$a_n = \frac{2}{T}\int_0^{T} f(t)\cos n\omega t\,dt = \frac{\omega}{\pi}\int_0^{2\pi/\omega} f(t)\cos n\omega t\,dt$
$b_n = \frac{2}{T}\int_0^{T} f(t)\sin n\omega t\,dt = \frac{\omega}{\pi}\int_0^{2\pi/\omega} f(t)\sin n\omega t\,dt$
Example
Determine the Fourier series for a periodic function defined by
$f(t) = 2(1 + t)$, $\;-1 < t < 0$
$f(t) = 0$, $\;0 < t < 1$
$f(t) = f(t + 2)$
Answer:
$f(t) = \frac{1}{2} + \frac{4}{\pi^2}\left[\cos\pi t + \frac{1}{9}\cos 3\pi t + \frac{1}{25}\cos 5\pi t + \ldots\right] - \frac{2}{\pi}\left[\sin\pi t + \frac{1}{2}\sin 2\pi t + \frac{1}{3}\sin 3\pi t + \frac{1}{4}\sin 4\pi t + \ldots\right]$
4.3.4 Sum of a Fourier series at a point of finite discontinuity
At $x = x_1$ the series converges to the value $f(x_1)$ as the number of terms included increases to infinity. But suppose there is a jump at $x_1$, as shown in the figure:
$f(x_1 - 0) = y_1$ (approaching $x_1$ from below); $\;f(x_1 + 0) = y_2$ (approaching $x_1$ from above).
If we substitute $x = x_1$ in the Fourier series for $f(x)$, it can be shown that the series converges to the value $\frac{1}{2}[f(x_1 - 0) + f(x_1 + 0)]$, i.e. $\frac{1}{2}(y_1 + y_2)$, the average of $y_1$ and $y_2$.
4.4 Fourier Integrals
4.4.1 The Fourier integral
While the Fourier series is for periodic functions, the Fourier integral is for non-periodic functions. If a non-periodic $f(x)$ (i) satisfies the Dirichlet conditions in every finite interval $(-a, a)$ and (ii) is absolutely integrable on $(-\infty, \infty)$, i.e.
$\int_{-\infty}^{\infty} |f(x)|\,dx$ exists,
then $f(x)$ can be represented by a Fourier integral,
$f(x) = \int_0^{\infty}\left[A(k)\cos kx + B(k)\sin kx\right]dk \qquad (1)$
with
$A(k) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\cos kv\,dv \qquad (2)$
$B(k) = \frac{1}{\pi}\int_{-\infty}^{\infty} f(v)\sin kv\,dv \qquad (3)$
At a point where $f(x)$ is discontinuous the value of the Fourier integral equals the average of the left- and right-hand limits of $f(x)$ at that point.
Examples
Find the Fourier integral representation of the function in the figure below:
$f(x) = 1$ if $|x| < 1$; $\;f(x) = 0$ if $|x| > 1$
Solution: From (2) and (3) we have
$A(k) = \frac{1}{\pi}\int_{-1}^{1} f(x)\cos kx\,dx = \frac{1}{\pi}\left[\frac{\sin kx}{k}\right]_{-1}^{1} = \frac{2\sin k}{\pi k}$
$B(k) = \frac{1}{\pi}\int_{-1}^{1}\sin kx\,dx = 0$
and (1) gives the answer
$f(x) = \frac{2}{\pi}\int_0^{\infty}\frac{\cos kx\,\sin k}{k}\,dk \qquad (4)$
The average of the left- and right-hand limits of $f(x)$ at $x = 1$ is equal to $(1 + 0)/2$, that is, $1/2$.
Furthermore, from (4) and Theorem 1 we obtain
$\int_0^{\infty}\frac{\cos kx\,\sin k}{k}\,dk = \frac{\pi}{2}f(x) = \frac{\pi}{2}$ if $0 \le x < 1$; $\;\frac{\pi}{4}$ if $x = 1$; $\;0$ if $x > 1$.
This integral is called Dirichlet's discontinuous factor. If $x = 0$, then
$\int_0^{\infty}\frac{\sin k}{k}\,dk = \frac{\pi}{2}$.
This integral is the limit of the so-called sine integral
$\mathrm{Si}(u) = \int_0^{u}\frac{\sin k}{k}\,dk$
as $u \to \infty$.
In the case of a Fourier series the graphs of the partial sums are approximation curves of the periodic function represented by the series. Similarly, in the case of the Fourier integral, approximations are obtained by replacing $\infty$ by numbers $a$. Hence the integral
$\int_0^{a}\frac{\cos kx\,\sin k}{k}\,dk$
approximates the integral in (4) and therefore $f(x)$.
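The limit of the sine integral can be seen numerically; a rough midpoint-rule sketch (the helper name `Si` mirrors the notation above):

```python
import math

def Si(u, n=200_000):
    # Si(u) = integral of sin k / k, k from 0 to u, by the midpoint rule
    h = u / n
    return sum(math.sin((i + 0.5) * h) / ((i + 0.5) * h) for i in range(n)) * h

for u in (10.0, 50.0, 200.0):
    print(u, Si(u))   # approaches pi/2 = 1.5708 as u grows
```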