
Useful Formulae of Trigonometry

DIFFERENTIATION, INTEGRATION AND FORMULAE USED CHAPTERWISE


sin 2θ = 2 sin θ cos θ,  cos 2θ = 2 cos²θ − 1 = 1 − 2 sin²θ

Angle   0°     30°     45°     60°     90°    180°
sin     0      1/2     1/√2    √3/2    1      0
cos     1      √3/2    1/√2    1/2     0      −1
tan     0      1/√3    1       √3      ∞      0

sin(−θ) = −sin θ,  cos(−θ) = cos θ
sin(90° + θ) = cos θ    (function changes)
sin(π − θ) = sin θ      (no change)

HYPERBOLIC FUNCTIONS
sinh x = (eˣ − e⁻ˣ)/2,  cosh x = (eˣ + e⁻ˣ)/2,  cosh²x − sinh²x = 1
d/dx (sinh x) = cosh x,  d/dx (cosh x) = sinh x
sin x = (e^{ix} − e^{−ix})/2i,  cos x = (e^{ix} + e^{−ix})/2
sinh ix = i sin x,  sin ix = i sinh x,  cosh ix = cos x,  cos ix = cosh x
Binomial Theorem: (1 + x)ⁿ = 1 + nx + [n(n − 1)/2!] x² + [n(n − 1)(n − 2)/3!] x³ + ...
Polar coordinates: x = r cos θ, y = r sin θ, r² = x² + y², θ = tan⁻¹(y/x)
Spherical coordinates: x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ
Median is the line joining a vertex to the midpoint of the opposite side of a triangle.
Centroid or C.G. is the point of intersection of the medians of a triangle.
Incentre is the point of intersection of the bisectors of the angles of a triangle.
Circumcentre is the point of intersection of the perpendicular bisectors of the sides of a triangle.
Orthocentre is the point of intersection of the perpendiculars drawn from the vertices to the opposite sides of a triangle.
Asymptote is the tangent to a curve at infinity.

DIFFERENTIAL CALCULUS
d/dx (cos x) = −sin x        d/dx (tan x) = sec²x          d/dx (cot x) = −cosec²x
d/dx (sec x) = sec x tan x   d/dx (cosec x) = −cosec x cot x   d/dx (aˣ) = aˣ log_e a
d/dx (sin⁻¹x) = 1/√(1 − x²)  d/dx (cos⁻¹x) = −1/√(1 − x²)  d/dx (tan⁻¹x) = 1/(1 + x²)
d/dx (cot⁻¹x) = −1/(1 + x²)  d/dx (sec⁻¹x) = 1/(x√(x² − 1))  d/dx (cosec⁻¹x) = −1/(x√(x² − 1))
d/dx (sinh x) = cosh x       d/dx (cosh x) = sinh x        d/dx (tanh x) = sech²x
d/dx (coth x) = −cosech²x    d/dx (sech x) = −sech x tanh x    d/dx (cosech x) = −cosech x coth x
INTEGRAL CALCULUS

∫ aˣ dx = aˣ log_a e      ∫ sin x dx = −cos x      ∫ cos x dx = sin x
∫ tan x dx = log sec x    ∫ cot x dx = log sin x   ∫ dx/(a² − x²) = (1/2a) log[(a + x)/(a − x)]
∫ sec x dx = log tan(x/2 + π/4) = log(sec x + tan x)      ∫ sinh x dx = cosh x
∫ cosec x dx = log tan(x/2) = log(cosec x − cot x)        ∫ cosech²x dx = −coth x
∫ sec x tan x dx = sec x      ∫ cosec x cot x dx = −cosec x
∫ dx/√(a² − x²) = sin⁻¹(x/a)  ∫ dx/√(a² + x²) = sinh⁻¹(x/a)  ∫ dx/√(x² − a²) = cosh⁻¹(x/a)
∫ dx/(a² + x²) = (1/a) tan⁻¹(x/a)   ∫ −dx/(x² + a²) = (1/a) cot⁻¹(x/a)   ∫ dx/(x√(x² − a²)) = (1/a) sec⁻¹(x/a)
∫ −dx/(x√(x² − a²)) = (1/a) cosec⁻¹(x/a)   ∫ cosh x dx = sinh x   ∫ sech²x dx = tanh x
∫ sech x tanh x dx = −sech x   ∫ cosech x coth x dx = −cosech x
∫ √(a² − x²) dx = (x/2)√(a² − x²) + (a²/2) sin⁻¹(x/a)
∫ √(a² + x²) dx = (x/2)√(a² + x²) + (a²/2) sinh⁻¹(x/a)
∫ √(x² − a²) dx = (x/2)√(x² − a²) − (a²/2) cosh⁻¹(x/a)
1. ∫_a^b f(x) dx = ∫_a^b f(t) dt      2. ∫_a^b f(x) dx = −∫_b^a f(x) dx
3. ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx, c ∈ (a, b)      4. ∫_a^b f(x) dx = ∫_a^b f(a + b − x) dx
5. ∫_0^a f(x) dx = ∫_0^a f(a − x) dx      6. ∫_0^{2a} f(x) dx = ∫_0^a f(x) dx + ∫_0^a f(2a − x) dx
7. ∫_0^{2a} f(x) dx = 2∫_0^a f(x) dx, if f(2a − x) = f(x);  = 0, if f(2a − x) = −f(x)

8. ∫_{−a}^{a} f(x) dx = 2∫_0^a f(x) dx, if f is an even function, i.e., f(−x) = f(x);
∫_{−a}^{a} f(x) dx = 0, if f is an odd function, i.e., f(−x) = −f(x).

CHAPTER–1 (REVIEW OF VECTOR ALGEBRA)


If r = x î + y ĵ + z k̂, then |r| = √(x² + y² + z²)
AB→ = position vector of B − position vector of A;  Ratio formula: c = (mb + na)/(m + n)
Scalar Product: a·b = |a||b| cos θ;  Work done = ∫ F·dr;  θ = cos⁻¹[a·b/(|a||b|)]
Vector Product: a × b = |a||b| sin θ · n̂
Area of parallelogram = |a × b|;  Moment of a force = r × F;  v = ω × r
a·(b × c) = |a₁ a₂ a₃; b₁ b₂ b₃; c₁ c₂ c₃|,   a × (b × c) = (a·c)b − (a·b)c
(a × b)·(c × d) = |a·c  a·d; b·c  b·d|
Volume of parallelopiped = a·(b × c); if a·(b × c) = 0, then a, b, c are coplanar.
(a × b) × (c × d) = [a b d]c − [a b c]d = −[b c d]a + [a c d]b

CHAPTER–2 (DIFFERENTIATION OF VECTORS)


Velocity = dr/dt,  Acceleration = d²r/dt²,  Tangent vector = dr/dt,  Normal vector = ∇φ
Gradient: grad φ = ∇φ;  Directional derivative of φ along the unit vector n̂ = ∇φ·n̂
Divergence: div f = ∇·f; if div f = 0, then f is called a solenoidal vector.
Curl: curl f = ∇ × f; if curl f = 0, f is called an irrotational vector.

CHAPTER–3 (INTEGRATION OF VECTORS)


Green's Theorem: ∮_C (φ dx + ψ dy) = ∬_S (∂ψ/∂x − ∂φ/∂y) dx dy,  Stokes' Theorem: ∮_C F·dr = ∬_S curl F·n̂ dS
Gauss Theorem of Divergence: ∬_S F·n̂ dS = ∭_V div F dx dy dz
CHAPTER–4 (ORTHOGONAL CURVILINEAR COORDINATES)
Orthogonal curvilinear coordinates. Let the rectangular Cartesian coordinates of a point P in space be (x, y, z). Now we introduce one more system of coordinates (u₁, u₂, u₃).
Let x = X(u₁, u₂, u₃), y = Y(u₁, u₂, u₃), z = Z(u₁, u₂, u₃)
h₁ = |∂r/∂u₁|,  h₂ = |∂r/∂u₂|,  h₃ = |∂r/∂u₃|
dS₁ = h₁ du₁,  dS₂ = h₂ du₂,  dS₃ = h₃ du₃

CHAPTER–5 (DOUBLE INTEGRALS)
First Method: ∬_A f(x, y) dy dx = ∫_a^b ∫_{y₁}^{y₂} f(x, y) dy dx

Second Method: ∬_A f(x, y) dx dy = ∫_c^d ∫_{x₁}^{x₂} f(x, y) dx dy

Change of Order of Integration


On changing the order of integration, the limits of integration change. To find new limits, we draw
the rough sketch of the region of integration.
Some of the problems connected with double integrals, which seem to be complicated, can be made
easy to handle by a change in the order of integration.
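As a quick check of this idea, the sketch below evaluates an assumed double integral (integrand xy over the triangle 0 ≤ x ≤ y ≤ 1) in both orders with SymPy; the example is illustrative, not one of the book's problems.

```python
# Sketch: verify that changing the order of integration preserves the value.
# The integrand x*y and the triangular region 0 <= x <= y <= 1 are assumptions.
from sympy import symbols, integrate

x, y = symbols('x y')
f = x * y

# Original order: y runs from x to 1, then x from 0 to 1.
I1 = integrate(integrate(f, (y, x, 1)), (x, 0, 1))

# Reversed order: for a fixed y, x runs from 0 to y; then y from 0 to 1.
I2 = integrate(integrate(f, (x, 0, y)), (y, 0, 1))

assert I1 == I2   # both equal 1/8
print(I1, I2)
```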

CHAPTER–6 (APPLICATION OF THE DOUBLE INTEGRALS)


Area, Volume and Surface
Area in Polar Co-ordinates: Area = ∫∫ r dθ dr
Surface Area
Let z = f(x, y) be the surface S, and let its projection on the xy-plane be the region A. Consider an element δx·δy in the region A and erect on it a cylinder having its generators parallel to OZ and meeting the surface S in an element of area δS.

The projection relation is δx·δy = δS cos γ (γ being the angle between the surface normal and OZ), so that

S = ∬_A √(1 + (∂z/∂x)² + (∂z/∂y)²) dx dy

CHAPTER–7 (TRIPLE INTEGRATION)

Triple Integration
It can be calculated as ∫_{x₁}^{x₂} ∫_{y₁}^{y₂} ∫_{z₁}^{z₂} f(x, y, z) dz dy dx. First we integrate with respect to z, treating x, y as constants, between the limits z₁ and z₂. The resulting expression (a function of x and y) is integrated with respect to y, keeping x constant, between the limits y₁ and y₂. Finally we integrate the resulting expression (a function of x only) between the limits x₁ and x₂.

Integration by Change of Cartesian Coordinates into Spherical Coordinates


Sometimes it becomes easier to integrate by changing the cartesian coordinates into spherical coordinates. The relations between the cartesian and spherical polar co-ordinates of a point are
x = r sin θ cos φ,  y = r sin θ sin φ,  z = r cos θ
dx dy dz = |J| dr dθ dφ = r² sin θ dr dθ dφ
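The Jacobian factor r² sin θ can be verified symbolically; a minimal SymPy sketch:

```python
# Sketch: compute the Jacobian of the spherical-coordinate map and confirm
# |J| = r**2 * sin(theta), as used in dx dy dz = |J| dr dtheta dphi.
from sympy import symbols, sin, cos, Matrix, simplify

r, theta, phi = symbols('r theta phi', positive=True)
X = r * sin(theta) * cos(phi)
Y = r * sin(theta) * sin(phi)
Z = r * cos(theta)

J = Matrix([X, Y, Z]).jacobian(Matrix([r, theta, phi]))
print(simplify(J.det()))   # r**2*sin(theta)
```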

CHAPTER–8 (APPLICATION OF TRIPLE INTEGRATION)

Volume of the solids


Consider an elementary cuboid, whose volume is dx dy dz. Then the volume of the whole solid is
obtained by evaluating the triple integral.
If the equation of the surface of the solid be given in cartesian coordinates, then
V = ∫∫∫ dx dy dz   (dV = dx dy dz)
dx dy = r dθ dr,   dx dy dz = r² sin θ dr dθ dφ
Area = ∫_{x₁}^{x₂} ∫_{y₁}^{y₂} dx dy,  Volume = ∫_{x₁}^{x₂} ∫_{y₁}^{y₂} ∫_{z₁}^{z₂} dx dy dz,  dx dy = |∂(x, y)/∂(r, θ)| dr dθ

Centre of gravity

x̄ = ∫∫∫ x ρ dx dy dz / ∫∫∫ ρ dx dy dz,  ȳ = ∫∫∫ y ρ dx dy dz / ∫∫∫ ρ dx dy dz,  z̄ = ∫∫∫ z ρ dx dy dz / ∫∫∫ ρ dx dy dz

Moment of Inertia about x-axis = ∫∫∫ ρ(y² + z²) dx dy dz
Moment of Inertia about y-axis = ∫∫∫ ρ(x² + z²) dx dy dz
Moment of Inertia about z-axis = ∫∫∫ ρ(x² + y²) dx dy dz

Centre of pressure:  x̄ = ∬_A x ρ dx dy / ∬_A ρ dx dy,  ȳ = ∬_A y ρ dx dy / ∬_A ρ dx dy
CHAPTER–9 (GAMMA, BETA FUNCTION)

Γ(n + 1) = n!,  Γ(n + 1) = n Γ(n),  Γ(1/2) = √π

Gamma Function: Γ(n) = ∫_0^∞ e⁻ˣ xⁿ⁻¹ dx

Beta Function: B(l, m) = ∫_0^1 x^{l−1}(1 − x)^{m−1} dx = Γ(l)Γ(m)/Γ(l + m)

∫_0^{π/2} sin^m θ cos^n θ dθ = Γ((m + 1)/2) Γ((n + 1)/2) / [2 Γ((m + n + 2)/2)]

Dirichlet's Integral: ∫∫∫ x^{l−1} y^{m−1} z^{n−1} dx dy dz = Γ(l)Γ(m)Γ(n)/Γ(l + m + n + 1), taken over the region x, y, z ≥ 0, x + y + z ≤ 1.

Liouville's Extension of Dirichlet's theorem:
∫∫∫ f(x + y + z) x^{l−1} y^{m−1} z^{n−1} dx dy dz = [Γ(l)Γ(m)Γ(n)/Γ(l + m + n)] ∫_{h₁}^{h₂} f(u) u^{l+m+n−1} du
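A short SymPy check of the Gamma-Beta relations (the arguments l = 3, m = 5 are an arbitrary illustrative choice):

```python
# Sketch: check Gamma(1/2) = sqrt(pi) and B(l, m) = Gamma(l)Gamma(m)/Gamma(l+m).
from sympy import gamma, beta, sqrt, pi, Rational, simplify

assert gamma(Rational(1, 2)) == sqrt(pi)

l, m = 3, 5
assert simplify(beta(l, m) - gamma(l)*gamma(m)/gamma(l + m)) == 0
print(beta(l, m))   # 1/105
```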

CHAPTER–10 (THEORY OF ERRORS)


Error function: erf x = (2/√π) ∫_0^x e^{−t²} dt

Differentiation under the integral sign (Leibniz rule with variable limits):
d/dα ∫_{φ(α)}^{ψ(α)} f(x, α) dx = ∫_{φ(α)}^{ψ(α)} [∂f(x, α)/∂α] dx + f[ψ(α), α] dψ/dα − f[φ(α), α] dφ/dα

CHAPTER–11 (FOURIER SERIES)
f(x) = a₀/2 + a₁ cos x + a₂ cos 2x + .... + aₙ cos nx + ... + b₁ sin x + b₂ sin 2x + ... + bₙ sin nx + ...

where a₀ = (1/π) ∫_0^{2π} f(x) dx,  aₙ = (1/π) ∫_0^{2π} f(x) cos nx dx,  bₙ = (1/π) ∫_0^{2π} f(x) sin nx dx

For an even function: a₀ = (2/π) ∫_0^π f(x) dx,  aₙ = (2/π) ∫_0^π f(x) cos nx dx,  bₙ = 0

For an odd function: a₀ = 0,  aₙ = 0,  bₙ = (2/π) ∫_0^π f(x) sin nx dx

For an arbitrary interval (0, 2c):
a₀ = (1/c) ∫_0^{2c} f(x) dx,  aₙ = (1/c) ∫_0^{2c} f(x) cos(nπx/c) dx,  bₙ = (1/c) ∫_0^{2c} f(x) sin(nπx/c) dx

sin nπ = 0,  cos nπ = (−1)ⁿ
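The coefficient formulas can be applied directly; a sketch with SymPy, taking f(x) = x on (0, 2π) as an assumed example:

```python
# Sketch: Fourier coefficients over (0, 2*pi) from the formulas above, f(x) = x.
from sympy import symbols, integrate, pi, cos, sin

x = symbols('x')
n = symbols('n', positive=True, integer=True)
f = x

a0 = integrate(f, (x, 0, 2*pi)) / pi             # 2*pi
an = integrate(f*cos(n*x), (x, 0, 2*pi)) / pi    # 0
bn = integrate(f*sin(n*x), (x, 0, 2*pi)) / pi    # -2/n
print(a0, an, bn)
```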

CHAPTER–12 (DIFFERENTIAL EQUATIONS OF FIRST ORDER)

(i) Variables separable: f(y) dy = φ(x) dx,  ∫ f(y) dy = ∫ φ(x) dx + c

(ii) Homogeneous Equation: dy/dx = f(x, y)/φ(x, y), where every term of f(x, y) and φ(x, y) is of the same degree.
Put y = vx, so that dy/dx = v + x dv/dx.

(iii) Reducible to homogeneous: dy/dx = (ax + by + c)/(Ax + By + C)
Put x = X + h, y = Y + k if a/A ≠ b/B;  put ax + by = z if a/A = b/B.

Linear differential equation: dy/dx + Py = Q, where P and Q are functions of x alone (not of y).

Integrating factor = e^{∫P dx}, and then y·e^{∫P dx} = ∫ Q e^{∫P dx} dx + c
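A sketch of the integrating-factor solution with SymPy's dsolve; the equation dy/dx + y/x = x² is an assumed example (here I.F. = e^{∫dx/x} = x):

```python
# Sketch: solve the linear equation dy/dx + (1/x)*y = x**2 with dsolve.
# The integrating factor is x, giving x*y = x**4/4 + C1.
from sympy import symbols, Function, Eq, dsolve, Derivative

x = symbols('x')
y = Function('y')
ode = Eq(Derivative(y(x), x) + y(x)/x, x**2)
print(dsolve(ode, y(x)))   # y(x) = C1/x + x**3/4
```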

Exact Differential Equation

If ∂M/∂y = ∂N/∂x, then M dx + N dy = 0 is an exact differential equation; its solution is
∫ M dx (treating y as constant) + ∫ (terms of N not containing x) dy = c.

If ∂M/∂y ≠ ∂N/∂x, then the given differential equation is not exact, but it can be reduced to an exact differential equation by an integrating factor:

1. If (∂M/∂y − ∂N/∂x)/N is a function of x alone, say f(x), then
I.F. = e^{∫f(x) dx}; on multiplying by the integrating factor, the differential equation becomes exact.

2. If (∂N/∂x − ∂M/∂y)/M is a function of y alone, say f(y), then
I.F. = e^{∫f(y) dy}; on multiplying the given differential equation by the integrating factor, it becomes exact.

3. If M is of the form M = y f₁(xy) and N is of the form N = x f₂(xy), then I.F. = 1/(Mx − Ny).

4. For the type xᵐyⁿ(ay dx + bx dy) + x^{m′}y^{n′}(a′y dx + b′x dy) = 0, the I.F. is xʰyᵏ,
where (m + h + 1)/a = (n + k + 1)/b and (m′ + h + 1)/a′ = (n′ + k + 1)/b′.
On solving these equations we get the values of h and k.

CHAPTER–13 (LINEAR DIFFERENTIAL EQUATIONS OF SECOND ORDER)

Rules to find the Complementary Function

1. When the roots of the A.E. are m₁, m₂ (distinct): C.F. = c₁e^{m₁x} + c₂e^{m₂x}
2. When the roots are equal (m): C.F. = (c₁ + c₂x)e^{mx}
3. When the roots are complex, a ± ib: C.F. = e^{ax}(c₁ cos bx + c₂ sin bx)

Rules to find the Particular Integral

(i) [1/f(D)] e^{ax} = e^{ax}/f(a), if f(a) ≠ 0;  [1/f(D)] e^{ax} = x e^{ax}/f′(a), if f(a) = 0
(ii) [1/f(D)] xⁿ = [f(D)]⁻¹ xⁿ; expand [f(D)]⁻¹ and then operate.
(iii) [1/f(D²)] sin ax = sin ax / f(−a²);  [1/f(D²)] cos ax = cos ax / f(−a²)
If f(−a²) = 0, then [1/f(D²)] sin ax = x sin ax / f′(−a²)
(iv) [1/f(D)] e^{ax} φ(x) = e^{ax} [1/f(D + a)] φ(x)
(v) [1/f(D)] x φ(x) = [x − f′(D)/f(D)] [1/f(D)] φ(x)
(vi) [1/(D + a)] φ(x) = e^{−ax} ∫ e^{ax} φ(x) dx

CHAPTER–14 (CAUCHY – EULER EQUATIONS, METHOD OF VARIATION


OF PARAMETERS)

Variation of Parameters: if C.F. = A y₁ + B y₂, then P.I. = u y₁ + v y₂,

where u = −∫ y₂ X dx / (y₁y₂′ − y₁′y₂),  v = ∫ y₁ X dx / (y₁y₂′ − y₁′y₂),

X being the right-hand side of the equation and y₁y₂′ − y₁′y₂ the Wronskian.

A homogeneous (Cauchy-Euler) differential equation is of the form x² d²y/dx² − 2x dy/dx + 4y = x⁴.
Put x = e^z; then x dy/dx = Dy and x² d²y/dx² = D(D − 1)y, where D ≡ d/dz.
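The worked Cauchy-Euler example above can be checked with SymPy, which applies the same x = e^z reduction internally:

```python
# Sketch: solve x**2 y'' - 2x y' + 4y = x**4 (the example above) with dsolve.
# A.E. m(m-1) - 2m + 4 = 0 gives complex roots; the P.I. works out to x**4/8.
from sympy import symbols, Function, Eq, dsolve

x = symbols('x', positive=True)
y = Function('y')
ode = Eq(x**2*y(x).diff(x, 2) - 2*x*y(x).diff(x) + 4*y(x), x**4)
print(dsolve(ode, y(x)))
# C.F. involves x**(3/2)*cos/sin(sqrt(7)/2 * log(x)); P.I. = x**4/8
```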

CHAPTER–15 (LINEAR DIFFERENTIAL EQUATIONS OF SECOND ORDER)

Equation of the type d²y/dx² = f(x)

Equation of the type d²y/dx² = f(y)

Equations which do not Contain 'y' Directly
An equation which does not contain y directly can be written as
f(dⁿy/dxⁿ, d^{n−1}y/dx^{n−1}, ....., dy/dx, x) = 0    ...(1)
On substituting dy/dx = P, i.e. d²y/dx² = dP/dx, d³y/dx³ = d²P/dx², and so on, in (1), we get
f(d^{n−1}P/dx^{n−1}, ......., P, x) = 0

Equations that do not Contain 'x' Directly
The equations that do not contain x directly are of the form
f(dⁿy/dxⁿ, d^{n−1}y/dx^{n−1}, ......, dy/dx, y) = 0    ...(1)
On substituting dy/dx = P, d²y/dx² = dP/dx = (dP/dy)·(dy/dx) = P dP/dy in equation (1), we get
f(d^{n−1}P/dy^{n−1}, ..... P, y) = 0    ...(2)
Equation (2) is solved for P. Let
P = f₁(y)  ⇒  dy/dx = f₁(y)  ⇒  dy/f₁(y) = dx  ⇒  ∫ dy/f₁(y) = x + c
Equation whose one Solution is known

If y = u is a given solution belonging to the complementary function of the differential equation, let the other part be y = v; then y = u·v is the complete solution of the differential equation.
Let d²y/dx² + P dy/dx + Qy = R    ...(1)
be the differential equation, u being the solution included in the complementary function of (1), so that
d²u/dx² + P du/dx + Qu = 0    ...(2)
Put y = u·v:
dy/dx = v du/dx + u dv/dx
d²y/dx² = v d²u/dx² + 2 (du/dx)(dv/dx) + u d²v/dx²
Substituting the values of y, dy/dx, d²y/dx² in (1), we get
v d²u/dx² + 2 (du/dx)(dv/dx) + u d²v/dx² + P[v du/dx + u dv/dx] + Q u·v = R
On arranging,
⇒ v[d²u/dx² + P du/dx + Qu] + u[d²v/dx² + P dv/dx] + 2 (du/dx)(dv/dx) = R
The first bracket is zero by virtue of relation (2), and the remainder is divided by u:
d²v/dx² + P dv/dx + (2/u)(du/dx)(dv/dx) = R/u
⇒ d²v/dx² + [P + (2/u) du/dx] dv/dx = R/u    ...(3)

Let dv/dx = z, so that d²v/dx² = dz/dx. Equation (3) becomes
dz/dx + [P + (2/u) du/dx] z = R/u
This is a linear differential equation of first order and can be solved (z can be found); its solution will contain one constant. On integrating z = dv/dx, we get v. Having found v, the solution is y = uv.

Normal Form (Removal of First Derivative)
Consider the differential equation d²y/dx² + P dy/dx + Qy = R    ...(1)
Put y = uv, where v is not an integral included in the C.F.
dy/dx = v du/dx + u dv/dx
d²y/dx² = u d²v/dx² + 2 (du/dx)(dv/dx) + v d²u/dx²
On putting the values of y, dy/dx, d²y/dx² in (1) we get
[u d²v/dx² + 2 (du/dx)(dv/dx) + v d²u/dx²] + P[u dv/dx + v du/dx] + Q·uv = R
⇒ v d²u/dx² + [Pv + 2 dv/dx] du/dx + u[d²v/dx² + P dv/dx + Q·v] = R
⇒ d²u/dx² + [P + (2/v) dv/dx] du/dx + (u/v)[d²v/dx² + P dv/dx + Q·v] = R/v    ...(2)
Here the last bracket on the L.H.S. is not zero, since y = v is not a part of the C.F.
Here we shall remove the first derivative. Set
P + (2/v) dv/dx = 0  or  dv/v = −(1/2) P dx  or  log v = −(1/2) ∫ P dx
v = e^{−(1/2)∫P dx}
In (2) we have to find the value of the last bracket, i.e. d²v/dx² + P dv/dx + Qv:
dv/dx = −(P/2) e^{−(1/2)∫P dx} = −(1/2) Pv        [since v = e^{−(1/2)∫P dx}]
d²v/dx² = −(1/2)(dP/dx) v − (P/2) dv/dx = −(1/2)(dP/dx) v + (1/4) P²v
∴ d²v/dx² + P dv/dx + Qv = −(1/2)(dP/dx) v + (1/4) P²v − (1/2) P²v + Qv = v[Q − (1/2) dP/dx − (1/4) P²]
Equation (1) is transformed into
d²u/dx² + (u/v)·v[Q − (1/2) dP/dx − P²/4] = R/v
⇒ d²u/dx² + u[Q − (1/2) dP/dx − P²/4] = R e^{(1/2)∫P dx}
i.e. d²u/dx² + Q₁u = R₁, where Q₁ = Q − (1/2) dP/dx − P²/4 and R₁ = R e^{(1/2)∫P dx} = R/v.
Finally y = uv, with v = e^{−(1/2)∫P dx}.

Note: Rule to find an integral belonging to the complementary function:

Rule    Condition                        u
1       1 + P + Q = 0                    eˣ
2       1 − P + Q = 0                    e⁻ˣ
3       1 + P/a + Q/a² = 0               e^{ax}
4       P + Qx = 0                       x
5       2 + 2Px + Qx² = 0                x²
6       n(n − 1) + Pnx + Qx² = 0         xⁿ

CHAPTER–17 (APPLICATIONS TO DIFFERENTIAL EQUATIONS)


Element                                      Symbol    Unit
1. Charge                                    q         coulomb
2. Current                                   i         ampere
3. Resistance                                R         ohm
4. Inductance                                L         henry
5. Capacitance                               C         farad
6. Electromotive force or voltage (constant) E         volt
7. Variable voltage                          E         volt

The formation of the differential equation for an electric circuit depends upon the following laws:
(i) i = dq/dt
(ii) Voltage drop across resistance R = Ri
(iii) Voltage drop across inductance L = L di/dt
(iv) Voltage drop across capacitance C = q/C

R-L circuit with constant e.m.f. E:
Ri + L di/dt = E  ⇒  di/dt + (R/L) i = E/L
With alternating e.m.f. E₀ sin ωt:
di/dt + (R/L) i = (E₀/L) sin ωt
R-C circuit (discharge): Ri + q/C = 0  ⇒  R dq/dt + q/C = 0
R-C circuit with e.m.f. E₀ sin ωt: Ri + (1/C) ∫ i dt = E₀ sin ωt  ⇒  R di/dt + i/C = E₀ω cos ωt
L-C-R circuit: L d²q/dt² + R dq/dt + q/C = 0
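A sketch solving the R-L equation for a constant e.m.f. with SymPy; the initial condition i(0) = 0 is an assumption added for illustration:

```python
# Sketch: di/dt + (R/L) i = E/L with constant E and assumed i(0) = 0.
from sympy import symbols, Function, Eq, dsolve

t, R, L, E = symbols('t R L E', positive=True)
i = Function('i')
ode = Eq(i(t).diff(t) + (R/L)*i(t), E/L)
sol = dsolve(ode, i(t), ics={i(0): 0})
print(sol)   # i(t) = E/R - (E/R)*exp(-R*t/L)
```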

Mechanical Engineering problems

v = dx/dt
a = d²x/dt²,  a = v dv/dx
F = ma,  F = m d²x/dt²,  F = m v dv/dx

Vertical motion (resistance proportional to velocity)
x = (g/k) t − (g/k²)(1 − e^{−kt})

Vertical Elastic String
Stress (T)/Strain = constant = Modulus of Elasticity (E)
Strain = extension in length / original length
T = Ea/l  or  mg = Ea/l

Horizontal Elastic String
Stress/Strain = constant of elasticity
T = Ex/l
Equation of motion: m d²x/dt² = −Ex/l

Simple Harmonic Motion (S.H.M.)
d²x/dt² = −μ²x

The simple pendulum
Restoring force = m d²x/dt² = −mg sin θ
d²x/dt² = −g sin θ

Oscillations of a Spring
m d²x/dt² = mg − k(s + x)

Damped Free Oscillations
d²x/dt² + 2λ dx/dt + μ²x = 0

Forced oscillations (without damping)
m d²x/dt² = mg − k(s + x) + q cos nt

Forced oscillations (with damping)
d²x/dt² = −(k/m) x − (k₁/m) dx/dt + (q/m) cos nt

Projectile
y = x tan α − gx²/(2u² cos²α)

CHAPTER–18 (CALCULUS OF VARIATIONS)
Functional
A functional is an integral such as the arc-length functional
I[y] = ∫_{x₁}^{x₂} √(1 + (dy/dx)²) dx

Euler's equation
∂f/∂y − d/dx (∂f/∂y′) = 0

CHAPTER–19 (MAXIMA AND MINIMA OF FUNCTIONS)

Working rule to find Extremum Values

(i) Differentiate f(x, y) and find
∂f/∂x, ∂f/∂y, ∂²f/∂x², ∂²f/∂x∂y, ∂²f/∂y²
(ii) Put ∂f/∂x = 0 and ∂f/∂y = 0 and solve these equations for x and y. Let (a, b) be the values of (x, y).
(iii) Evaluate r = ∂²f/∂x², s = ∂²f/∂x∂y, t = ∂²f/∂y² for these values (a, b).
(iv) If rt − s² > 0 and
(a) r < 0, then f(x, y) has a maximum value;
(b) r > 0, then f(x, y) has a minimum value.
(v) If rt − s² < 0, then f(x, y) has no extremum value at the point (a, b).
(vi) If rt − s² = 0, then the case is doubtful and needs further investigation.
Note: The points (a, b) which are the roots of ∂f/∂x = 0 and ∂f/∂y = 0 are called stationary points.
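The working rule translated into a SymPy sketch; f = x³ + y³ − 3xy is an assumed example (stationary points (0, 0) and (1, 1)):

```python
# Sketch: classify stationary points of f = x**3 + y**3 - 3xy via r, s, t.
from sympy import symbols, diff, solve

x, y = symbols('x y', real=True)
f = x**3 + y**3 - 3*x*y

for p in solve([diff(f, x), diff(f, y)], [x, y], dict=True):
    if not all(v.is_real for v in p.values()):
        continue                      # keep only real stationary points
    r = diff(f, x, 2).subs(p)
    s = diff(f, x, y).subs(p)
    t = diff(f, y, 2).subs(p)
    disc = r*t - s**2
    kind = ('maximum' if r < 0 else 'minimum') if disc > 0 else \
           ('no extremum' if disc < 0 else 'doubtful')
    print(p, kind)   # (0,0): no extremum; (1,1): minimum
```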

Lagrange Method of Undetermined Multipliers

Let f(x, y, z) be a function of three variables x, y, z, and let the variables be connected by the relation
φ(x, y, z) = 0    ...(1)
For f(x, y, z) to have a stationary value, df = 0, i.e.
(∂f/∂x) dx + (∂f/∂y) dy + (∂f/∂z) dz = 0    ...(2)
By total differentiation of (1), we get
(∂φ/∂x) dx + (∂φ/∂y) dy + (∂φ/∂z) dz = 0    ...(3)
Multiplying (3) by λ and adding to (2), we get
[∂f/∂x + λ ∂φ/∂x] dx + [∂f/∂y + λ ∂φ/∂y] dy + [∂f/∂z + λ ∂φ/∂z] dz = 0
This equation will hold good if
∂f/∂x + λ ∂φ/∂x = 0    ...(4)
∂f/∂y + λ ∂φ/∂y = 0    ...(5)
∂f/∂z + λ ∂φ/∂z = 0    ...(6)
On solving (1), (4), (5), (6), we can find the values of x, y, z and λ for which f(x, y, z) has a stationary value.
Drawback: in the Lagrange method the nature of the stationary point cannot be determined.
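A sketch solving equations (1), (4), (5), (6) with SymPy for an assumed problem, f = x² + y² + z² subject to φ = x + y + z − 3 = 0:

```python
# Sketch: Lagrange multipliers for an assumed constrained problem.
from sympy import symbols, diff, solve

x, y, z, lam = symbols('x y z lambda', real=True)
f = x**2 + y**2 + z**2
phi = x + y + z - 3

# Equations (4), (5), (6) plus the constraint (1).
eqs = [diff(f, v) + lam*diff(phi, v) for v in (x, y, z)] + [phi]
print(solve(eqs, [x, y, z, lam], dict=True))
# [{x: 1, y: 1, z: 1, lambda: -2}]  -> stationary point (1, 1, 1)
```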

CHAPTER–22 (COMPLEX VARIABLE FUNCTION)


Analytic function: a single-valued function which is differentiable at every point of a neighbourhood of z = z₀ is said to be analytic at the point z = z₀.
C-R Equations: ∂u/∂x = ∂v/∂y, ∂u/∂y = −∂v/∂x;  in polars, ∂u/∂r = (1/r) ∂v/∂θ, ∂u/∂θ = −r ∂v/∂r.
To find the conjugate function: dv = (∂v/∂x) dx + (∂v/∂y) dy = −(∂u/∂y) dx + (∂u/∂x) dy, if u is given.
Milne-Thomson Method: f(z) = ∫ φ₁(z, 0) dz − i ∫ φ₂(z, 0) dz, where φ₁(x, y) = ∂u/∂x, φ₂(x, y) = ∂u/∂y;
f(z) = ∫ ψ₁(z, 0) dz + i ∫ ψ₂(z, 0) dz, where ψ₁(x, y) = ∂v/∂y, ψ₂(x, y) = ∂v/∂x.

CHAPTER–23 (TRANSFORMATION)
For every point (x, y) in the z-plane, the relation w = f(z) defines a corresponding point (u, v) in the w-plane. We call this the transformation or mapping of the z-plane into the w-plane; z₀ maps into the point w₀, and w₀ is also known as the image of z₀.
If the point P(x, y) moves along a curve C in the z-plane, the point P′(u, v) will move along a corresponding curve C′ in the w-plane; we then say that the curve C in the z-plane is mapped onto the corresponding curve C′ in the w-plane by the relation w = f(z).
Conformal Transformation
Let two curves C, C₁ in the z-plane intersect at the point P, and the corresponding curves C′, C′₁ in the w-plane intersect at P′. If the angle of intersection of the curves at P in the z-plane is the same as the angle of intersection of the curves of the w-plane at P′, in magnitude and sense, then the transformation is called conformal.
Conditions: (i) f(z) is analytic; (ii) f′(z) ≠ 0.   Or:
If the sense of the rotation as well as the magnitude of the angle is preserved, the transformation is said to be conformal.
If only the magnitude of the angle is preserved, the transformation is isogonal.
Translation: w = z + c
Rotation: w = z e^{iθ}
Magnification: w = c·z
Bilinear Transformation: w = (az + b)/(cz + d)
Invariant points: the points where w = (az + b)/(cz + d) satisfies w = z.

CHAPTER–24 (COMPLEX INTEGRATION)

Cauchy's Integral Theorem: ∮_C f(z) dz = 0, if f(z) is an analytic function within and on C.

Cauchy's Integral Formula: ∮_C f(z)/(z − a) dz = 2πi f(a), if f(z) is analytic in C and a is a point within C.

Residue: (i) Res f(a) = lim_{z→a} (z − a) f(z);   (ii) Res(a) = φ(a)/ψ′(a)  [for f(z) = φ(z)/ψ(z) with a simple pole at a];
(iii) Res(a) = [1/(n − 1)!] [d^{n−1}/dz^{n−1} {(z − a)ⁿ f(z)}]_{z=a}  [pole of order n];
(iv) Res(a) = coefficient of 1/t, where t = z − a.

Residue Theorem: ∮_C f(z) dz = 2πi (sum of the residues at the poles within C)

For ∫_0^{2π} f(sin θ, cos θ) dθ, put z = e^{iθ}: sin θ = (1/2i)(z − 1/z), cos θ = (1/2)(z + 1/z), dθ = dz/(iz); C is the circle of radius one.

For ∫_{−∞}^{∞} f₁(x)/f₂(x) dx, consider ∮_C f(z) dz, where f(z) = f₁(z)/f₂(z) and C is the semicircle in the upper half-plane together with the real axis.
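A minimal SymPy sketch of the residue machinery, using the assumed integrand f(z) = 1/(z² + 1) with a simple pole at z = i:

```python
# Sketch: residue of 1/(z**2+1) at z = i, reproducing
# integral of dx/(1+x**2) over the real line = 2*pi*i * (residue) = pi.
from sympy import symbols, residue, I, pi, integrate, oo

z, x = symbols('z x')
f = 1/(z**2 + 1)
r = residue(f, z, I)
print(r)            # -I/2
print(2*pi*I*r)     # pi
print(integrate(1/(1 + x**2), (x, -oo, oo)))   # pi
```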

CHAPTER–25 (TAYLOR’S AND LAURENT’S SERIES)


Taylor's series
f(z) = f(a) + f′(a)(z − a) + [f″(a)/2!](z − a)² + ........ + [fⁿ(a)/n!](z − a)ⁿ + ........

Radius of Convergence: 1/R = lim_{n→∞} |a_{n+1}/aₙ|

Laurent's theorem
If we are required to expand f(z) about a point where f(z) is not analytic, then it is expanded by Laurent's series and not by Taylor's series.
Statement. If f(z) is analytic on c₁ and c₂ and in the annular region R bounded by the two concentric circles c₁ and c₂ of radii r₁ and r₂ (r₂ < r₁) with centre at a, then for all z in R
f(z) = a₀ + a₁(z − a) + a₂(z − a)² + ....... + b₁/(z − a) + b₂/(z − a)² + ......
where aₙ = (1/2πi) ∮_{c₁} f(w)/(w − a)^{n+1} dw,   bₙ = (1/2πi) ∮_{c₂} f(w)/(w − a)^{−n+1} dw

CHAPTER–28 (LEGENDRE'S FUNCTIONS)
Legendre's Equation: (1 − x²) d²y/dx² − 2x dy/dx + n(n + 1) y = 0
Pₙ(x) = [1·3·5...(2n − 1)/n!] [xⁿ − n(n − 1)/(2(2n − 1)) x^{n−2} + n(n − 1)(n − 2)(n − 3)/(2·4·(2n − 1)(2n − 3)) x^{n−4} − ...]
Rodrigues' formula: Pₙ(x) = [1/(2ⁿ n!)] dⁿ/dxⁿ (x² − 1)ⁿ
Generating Function: (1 − 2xz + z²)^{−1/2} = Σ Pₙ(x) zⁿ
Orthogonality Property: ∫_{−1}^{+1} Pₙ(x)·Pₘ(x) dx = 0 if m ≠ n, and ∫_{−1}^{+1} Pₙ²(x) dx = 2/(2n + 1)
Recurrence Formulae: (i) (n + 1)P_{n+1} = (2n + 1)x Pₙ − nP_{n−1}  (ii) nPₙ = xPₙ′ − P′_{n−1}
(iii) (2n + 1)Pₙ = P′_{n+1} − P′_{n−1}  (iv) Pₙ′ − xP′_{n−1} = n P_{n−1}
(v) (x² − 1)Pₙ′ = n[xPₙ − P_{n−1}]  (vi) (x² − 1)Pₙ′ = (n + 1)(P_{n+1} − x Pₙ)
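Rodrigues' formula and the orthogonality property can be verified for small n with SymPy's built-in Legendre polynomials:

```python
# Sketch: check Rodrigues' formula and orthogonality for small n.
from sympy import symbols, legendre, diff, factorial, integrate, simplify, Rational

x = symbols('x')
for n in range(4):
    rodrigues = diff((x**2 - 1)**n, x, n) / (2**n * factorial(n))
    assert simplify(rodrigues - legendre(n, x)) == 0

assert integrate(legendre(2, x)*legendre(3, x), (x, -1, 1)) == 0
assert integrate(legendre(2, x)**2, (x, -1, 1)) == Rational(2, 5)  # 2/(2n+1)
```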

CHAPTER–29 (BESSEL'S FUNCTIONS)

Bessel's Equation: x² d²y/dx² + x dy/dx + (x² − n²) y = 0,  Jₙ(x) = Σ_{r=0}^{∞} [(−1)ʳ / (r! Γ(n + r + 1))] (x/2)^{n+2r}
Recurrence Formulae
(i) xJₙ′ = nJₙ − xJ_{n+1}  (ii) xJₙ′ = −nJₙ + xJ_{n−1}  (iii) 2Jₙ′ = J_{n−1} − J_{n+1}
(iv) 2nJₙ = x(J_{n−1} + J_{n+1})  (v) d/dx (x⁻ⁿ Jₙ) = −x⁻ⁿ J_{n+1}  (vi) d/dx (xⁿ Jₙ) = xⁿ J_{n−1}

CHAPTER–30 (HERMITE FUNCTION)


Hermite's equation
d²y/dx² − 2x dy/dx + 2ny = 0

Generating function of Hermite polynomials
e^{2tx − t²} = Σ_{n=0}^{∞} Hₙ(x) tⁿ/n!
(equivalently, by Rodrigues' formula, Hₙ(x) = (−1)ⁿ e^{x²} dⁿ/dxⁿ e^{−x²})

Orthogonal property
∫_{−∞}^{∞} e^{−x²} Hₘ(x) Hₙ(x) dx = 0 for m ≠ n;  = 2ⁿ n! √π for m = n

Recurrence Formulae for Hₙ(x) of Hermite's Equation
Four recurrence relations:
1. 2n H_{n−1}(x) = Hₙ′(x)
2. 2x Hₙ(x) = 2n H_{n−1}(x) + H_{n+1}(x)
3. Hₙ′(x) = 2x Hₙ(x) − H_{n+1}(x)
4. Hₙ″(x) − 2x Hₙ′(x) + 2n Hₙ(x) = 0
CHAPTER–31 (LAGUERRE'S FUNCTIONS)
Laguerre's equation: x d²y/dx² + (1 − x) dy/dx + ny = 0
Laguerre polynomials for different values of n:
Lₙ(0) = n!
L₀(x) = 1
L₁(x) = 1 − x
L₂(x) = x² − 4x + 2
L₃(x) = −x³ + 9x² − 18x + 6
L₄(x) = x⁴ − 16x³ + 72x² − 96x + 48
and so on.

Generating Function of the Laguerre Polynomials

(1 − t)⁻¹ e^{−xt/(1−t)} = Σ_{n=0}^{∞} Lₙ(x) tⁿ/n!

Recurrence Relations
I. e^{−xt/(1−t)} = (1 − t) Σ_{n=0}^{∞} Lₙ(x) tⁿ/n!
II. Lₙ′(x) − n L′_{n−1}(x) + n L_{n−1}(x) = 0
III. L_{n+1}(x) + (x − 2n − 1) Lₙ(x) + n² L_{n−1}(x) = 0
IV. (1 − t)⁻¹ e^{x[1 − 1/(1−t)]} = Σ_{n=0}^{∞} Lₙ(x) tⁿ/n!

Orthogonal Property
Let fₙ(x) = (1/n!) e^{−x/2} Lₙ(x)    ...(1)
∫_0^∞ fₘ(x) fₙ(x) dx = ∫_0^∞ e^{−x} [Lₘ(x)/m!][Lₙ(x)/n!] dx = δ_{m,n}
over the interval 0 ≤ x < ∞, where δ_{m,n} = 0 for m ≠ n and 1 for m = n.

CHAPTER–34 (LINEAR TRANSFORMATIONS)


(i) T(a + b) = T(a) + T(b) ∀ a, b ∈ U, and
(ii) T(αa) = αT(a) ∀ α ∈ F, a ∈ U.
Zero Transformation: if U(F) and V(F) are two vector spaces over the same field F, then the mapping Ô : U → V with Ô(a) = 0 for every a ∈ U is the zero transformation.

Matrix of a Linear Transformation

Consider the simultaneous equations given below:
2x₁ + 3x₂ − x₃ = 2
5x₁ − 6x₂ − 3x₃ = 10
x₁ + x₂ + x₃ = 8
The left hand side of the equations can be considered as the linear transformation T:
T(x₁, x₂, x₃)ᵀ = (2x₁ + 3x₂ − x₃, 5x₁ − 6x₂ − 3x₃, x₁ + x₂ + x₃)ᵀ = A (x₁, x₂, x₃)ᵀ,
where A = [2 3 −1; 5 −6 −3; 1 1 1],
so we can write the transformation as T_A(X) = AX.

Change of Basis
S₁ = {(1, 0), (0, 1)}
S₂ = {(0, 1), (1, 0)}
Path 1: (4, 5) = 4(1, 0) + 5(0, 1)
Path 2: (4, 5) = 5(0, 1) + 4(1, 0)

Codomain of a linear transformation
If T : A → B is a transformation, then the set A is called the domain and the set B is called the codomain of T.

Rank and Nullity of a linear Transformation


Let T : V → W be a linear transformation. We know that T(V) is a subspace of the vector space W. The dimension of this subspace T(V) is called the rank of T.
Nullity of T = dim [ker (T)]

Similarity of Matrices
If A and B are square matrices of order n over the field F, then B is said to be similar to A if there exists an n × n invertible matrix P with elements in F such that
B = P⁻¹AP

CHAPTER–35 (BASIS OF NULL SPACE, ROW SPACE AND COLUMN SPACE)

Row Vectors
Here we have an m × n matrix

A = [a₁₁ a₁₂ ... a₁ₙ; a₂₁ a₂₂ ... a₂ₙ; ...; a_{m1} a_{m2} ... a_{mn}]

The elements a₁₁, a₁₂, ...... are known as entries. If a₁₁, a₁₂, ........, a₁ₙ are real, then the entries are in R.
The rows of A, described as vectors in Rⁿ, are called the row vectors of A:
r₁ = (a₁₁, a₁₂, ------------ a₁ₙ)
r₂ = (a₂₁, a₂₂, ------------ a₂ₙ)
-------------------------------------
r_m = (a_{m1}, a_{m2}, ----------- a_{mn})

Column Vectors
The column vectors of A are
C₁ = (a₁₁, a₂₁, ...., a_{m1})ᵀ,  C₂ = (a₁₂, a₂₂, ...., a_{m2})ᵀ, --------------- Cₙ = (a₁ₙ, a₂ₙ, ...., a_{mn})ᵀ

Column space
col(A) = span{C₁, C₂ ............. Cₙ}

Row space
row(A) = span{r₁, r₂ ............. r_m}

Null space
The solution set of Ax = 0 in Rⁿ is called the null space of A. It is denoted by null (A).

Dimension of a Vector space


The number of vectors present in a basis of vector space V is called the dimension of V. It is denoted
by dim (V).

Nullity
The dimension of the null space of the matrix A is called the nullity of A and is denoted by nullity (A)
or the number of free variables in the solution of AX = 0.

Rank of a Matrix
The row rank of a matrix is equal to the dimension of the row space of the matrix.
The column rank of matrix is equal to the dimension of the column space of the given matrix.

Rank Nullity Theorem

For a matrix A, rank (A) + nullity (A) = number of columns of A.
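A sketch verifying the theorem with SymPy on an assumed 3 × 4 matrix:

```python
# Sketch: rank-nullity theorem checked on an assumed 3x4 matrix.
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [2, 4, 1, 3],
            [3, 6, 1, 4]])   # row3 = row1 + row2, so the rank is 2

rank = A.rank()
nullity = len(A.nullspace())       # dimension of the null space
assert rank + nullity == A.cols    # 2 + 2 == 4
print(rank, nullity)
```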

CHAPTER–36 (REAL INNER PRODUCT SPACES)

Inner Product spaces


A vector space together with an inner product defined on it is called an inner product space. Generalising the scalar product of vectors a and b in Rⁿ, we can define an inner product of two column vectors a and b as (a, b) = aᵀb.
A real vector space V is called a real inner product space if the inner product has the following properties:
1. Symmetry. (X, Y) = (Y, X)
2. Additivity. (X + Y, Z) = (X, Z) + (Y, Z)
3. Linearity. c(X, Y) = (cX, Y) = (X, cY)
4. Positivity. (X, X) ≥ 0, and (X, X) = 0 if and only if X = 0.

The length or norm of a vector X in V is defined by ||X|| = √(X, X).

Orthogonal Vectors (Perpendicular Vectors)
According to Cauchy Schwarz inequality
| ( X , Y )| ≤ || X || || Y ||
Scalar product of two vectors
( X , Y ) = || X || || Y || cos θ
Where θ is the angle between two vectors X and Y
If cos θ = 0, then
(X, Y) = 0
These two vectors X and Y are known as orthogonal vectors.
Gram-Schmidt orthogonalisation process
Y₁ = X₁
Y₂ = X₂ − [(X₂, Y₁)/||Y₁||²] Y₁
Y₃ = X₃ − [(X₃, Y₁)/||Y₁||²] Y₁ − [(X₃, Y₂)/||Y₂||²] Y₂
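The same steps in NumPy, for three assumed vectors of R³:

```python
# Sketch: Gram-Schmidt orthogonalisation of three assumed vectors in R^3.
import numpy as np

X = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]

Y = []
for Xk in X:
    Yk = Xk.copy()
    for Yj in Y:                         # subtract projections on earlier Y's
        Yk -= (Xk @ Yj) / (Yj @ Yj) * Yj
    Y.append(Yk)

# Pairwise inner products of distinct Y's should vanish.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(Y[i] @ Y[j]) < 1e-12
print(np.round(Y, 3))
```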

Unitary Transformation
A linear transformation Y = AX, where A is unitary (i.e., A^θ A = A A^θ = Iₙ), is called a unitary transformation.
Theorem 1. The necessary and sufficient condition for a linear transformation Y = AX of Vₙ(C) to preserve lengths is that A is unitary.

Orthogonal Transformation
A transformation Y = AX is said to be orthogonal if its matrix is orthogonal.
Orthogonal projections
Projection of a along b = [a·b / |b|²] b

Linear Transformation of Matrices


Let X and Y be two vectors such that
X = (x₁, x₂, ...., xₙ)ᵀ,  Y = (y₁, y₂, ...., yₙ)ᵀ
Then, for the first component,
y₁ = [a₁₁, a₁₂, ...., a₁ₙ] (x₁, x₂, ...., xₙ)ᵀ
y₁ = a₁₁x₁ + a₁₂x₂ + .... + a₁ₙxₙ
CHAPTER–37 (DETERMINANTS)

Minor
The minor of an element is defined as a determinant obtained by deleting the row and
column containing the element.
Thus the minors of a₁, b₁ and c₁ are respectively
|b₂ c₂; b₃ c₃|,  |a₂ c₂; a₃ c₃|  and  |a₂ b₂; a₃ b₃|
Thus |a₁ b₁ c₁; a₂ b₂ c₂; a₃ b₃ c₃| = a₁(minor of a₁) − b₁(minor of b₁) + c₁(minor of c₁).
Cofactor
Cofactor = (– 1)r+c Minor

Properties of Determinants
Property (i). The value of a determinant remains unaltered; if the rows are interchanged into columns
(or the columns into rows).
Property (ii). If two rows (or two columns) of a determinant are interchanged, the sign of the value
of the determinant changes.
Property (iii). If two rows (or columns) of a determinant are identical, the value of the determinant
is zero.
Property (iv). If the elements of any row (or column) of a determinant be each multiplied by the same
number, the determinant is multiplied by that number.
Property (v). The value of the determinant remains unaltered if to the elements of one row (or column)
be added any constant multiple of the corresponding elements of any other row (or column) respectively.

Factor Theorem
If the elements of a determinant are polynomials in a variable x and if the substitution x = a makes two rows (or columns) identical, then (x − a) is a factor of the determinant.
When two rows are identical, the value of the determinant is zero; the expansion of the determinant, being a polynomial in x, vanishes on putting x = a, so x − a is a factor of it by the Remainder theorem.

CHAPTER–38 (ALGEBRA OF MATRICES)


Types of equations: AX = B; augmented matrix C = [A : B]
(1) Consistent equations: if Rank of C = Rank of A
    (a) Unique solution: if Rank of C = Rank of A = number of unknowns
    (b) Infinite solutions: if Rank of C = Rank of A < number of unknowns
(2) Inconsistent equations: if Rank of C ≠ Rank of A.
Eigen values are the roots of the characteristic equation |A − λI| = 0.
Cayley-Hamilton Theorem. Every square matrix satisfies its own characteristic equation.
Diagonalisation: P⁻¹AP = D, where P is the modal matrix containing the eigen vectors and D is the diagonal matrix containing the eigen values.

Determinants: Cramer's rule. To solve the equations
a₁x + b₁y + c₁z = d₁
a₂x + b₂y + c₂z = d₂
a₃x + b₃y + c₃z = d₃
Δ = |a₁ b₁ c₁; a₂ b₂ c₂; a₃ b₃ c₃|,  Δ₁ = |d₁ b₁ c₁; d₂ b₂ c₂; d₃ b₃ c₃|,  Δ₂ = |a₁ d₁ c₁; a₂ d₂ c₂; a₃ d₃ c₃|,  Δ₃ = |a₁ b₁ d₁; a₂ b₂ d₂; a₃ b₃ d₃|
x = Δ₁/Δ,  y = Δ₂/Δ,  z = Δ₃/Δ
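Cramer's rule as a NumPy sketch; the 3 × 3 system is an assumed example, checked against numpy.linalg.solve:

```python
# Sketch: Cramer's rule for an assumed 3x3 system.
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
d = np.array([4.0, 5.0, 6.0])

D = np.linalg.det(A)
xs = []
for k in range(3):
    Ak = A.copy()
    Ak[:, k] = d                 # replace the k-th column by the constants
    xs.append(np.linalg.det(Ak) / D)

assert np.allclose(xs, np.linalg.solve(A, d))
print(np.round(xs, 6))
```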

CHAPTER–39 (RANK OF MATRIX)

Rank of a Matrix
The rank of a matrix is said to be r if
(a) it has at least one non-zero minor of order r;
(b) every minor of A of order higher than r is zero.
Note: (i) A non-zero row is a row in which not all the elements are zero.
(ii) The rank of the product matrix AB of two matrices A and B cannot exceed the rank of either factor.
(iii) Corresponding to every matrix A of rank r, there exist non-singular matrices P and Q such that
PAQ = [Iᵣ 0; 0 0]

Normal form (Canonical form)

By performing elementary transformations, any non-zero matrix A can be reduced to one of the following four forms, called the Normal form of A:
(i) Iᵣ   (ii) [Iᵣ 0]   (iii) [Iᵣ; 0]   (iv) [Iᵣ 0; 0 0]
The number r so obtained is called the rank of A and we write ρ(A) = r.

CHAPTER–40 (CONSISTENCY OF LINEAR SYSTEM OF EQUATIONS AND


THEIR SOLUTION)

Homogeneous Equations
For a system of homogeneous linear equations AX = O
(i) X = O is always a solution. This solution in which each unknown has the value zero is called
the Null Solution or the Trivial solution. Thus a homogeneous system is always consistent.
A system of homogeneous linear equations has either the trivial solution or an infinite number of
solutions.
(ii) If R (A) = number of unknowns, the system has only the trivial solution.
(iii) If R (A) < number of unknowns, the system has an infinite number of non-trivial solutions.

A system of homogeneous linear equations AX = O always has a solution.
Find R(A):
— If R(A) = n (number of unknowns): unique, trivial solution (each unknown equal to zero).
— If R(A) < n (number of unknowns): infinite number of non-trivial solutions.

Linear Dependence and Independence of Vectors


Vectors (matrices) X1, X2, .... Xn are said to be dependent if
(1) all the vectors (row or column matrices) are of the same order.
(2) n scalars λ1, λ2, ... λn (not all zero) exist such that
λ1 X1 + λ2 X2 + λ3 X3 + ..... + λn Xn = 0
Otherwise they are linearly independent.
Remember: If in a set of vectors, any vector of the set is the combination of the remaining  vectors,
then the vectors are called dependent vectors.

For a system of non-homogeneous linear equations AX = B:
Find R(A) and R(C).
— If R(A) ≠ R(C): no solution; the system is inconsistent.
— If R(A) = R(C): a solution exists; the system is consistent.
   — If R(A) = R(C) = n (number of unknowns): unique solution.
   — If R(A) = R(C) < n (number of unknowns): infinite number of solutions.

Partitioning of Matrices
Sub matrix. A matrix obtained by deleting some of the rows and columns of a matrix A is said to be
sub matrix.

 4 1 0
  4 1   5 2  1 0
For example, A =  5 2 1  , then  , , are the sub matrices.
5 2 6 3  2 1 
 6 3 4
Partitioning: A matrix may be subdivided into sub matrices by drawing lines parallel to its rows and
columns. These sub matrices may be considered as the elements of the original matrix.

(xxiii)
 2 1 : 0 4 1
 1 0 : 2 3 4
For example, A =  
.... .... : .... .... ....
 
 4 5 : 1 6 5

CHAPTER–41 (EIGEN VALUES, EIGEN VECTOR, CAYLEY HAMILTON


THEOREM, DIAGONALISATION)
Eigen Values
Let X be a vector which the linear transformation Y = AX transforms into a scalar multiple of itself, i.e. λX. Then
AX = Y = λX
AX − λIX = 0
(A − λI) X = 0    ...(2)
The scalar λ is known as an eigen value of the matrix A and the corresponding non-zero vector X as an eigen vector.
(b) Characteristic Polynomial: the determinant |A − λI|, when expanded, gives a polynomial which we call the characteristic polynomial of the matrix A.
For example,
|2−λ 2 1; 1 3−λ 1; 1 2 2−λ|
= (2 − λ)(6 − 5λ + λ² − 2) − 2(2 − λ − 1) + 1(2 − 3 + λ)
= −λ³ + 7λ² − 11λ + 5
(c) Characteristic Equation: the equation |A − λI| = 0 is called the characteristic equation of the matrix A, e.g.
λ³ − 7λ² + 11λ − 5 = 0
(d) Characteristic Roots or Eigen Values: the roots of the characteristic equation |A − λI| = 0 are called the characteristic roots of the matrix A, e.g.
λ³ − 7λ² + 11λ − 5 = 0
⇒ (λ − 1)(λ − 1)(λ − 5) = 0  ∴ λ = 1, 1, 5
The characteristic roots are 1, 1, 5.
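The characteristic polynomial and roots of this example reproduced with SymPy:

```python
# Sketch: characteristic polynomial and eigen values of the example matrix.
from sympy import Matrix, symbols

lam = symbols('lamda')
A = Matrix([[2, 2, 1],
            [1, 3, 1],
            [1, 2, 2]])

print(A.charpoly(lam).as_expr())   # lamda**3 - 7*lamda**2 + 11*lamda - 5
print(A.eigenvals())               # {1: 2, 5: 1}
```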

Cayley-Hamilton Theorem
Statement. Every square matrix satisfies its own characteristic equation.
If |A − λI| = (−1)ⁿ(λⁿ + a₁λⁿ⁻¹ + a₂λⁿ⁻² + ... + aₙ) is the characteristic polynomial of the n × n matrix A = (a_{ij}), then the matrix equation
Xⁿ + a₁Xⁿ⁻¹ + a₂Xⁿ⁻² + ... + aₙI = 0 is satisfied by X = A, i.e.
Aⁿ + a₁Aⁿ⁻¹ + a₂Aⁿ⁻² + ... + aₙI = 0
Power of a matrix
Aⁿ = PDⁿP⁻¹, where Dⁿ = diag(λ₁ⁿ, λ₂ⁿ, λ₃ⁿ)
Working procedure
(i) Find the eigen values of the square matrix A.
(ii) Find the corresponding eigen vectors and write the modal matrix P.
(iii) Find the diagonal matrix D from D = P⁻¹AP.
(iv) Obtain Aⁿ from Aⁿ = PDⁿP⁻¹.

Characteristic Vectors or Eigen Vectors


AX = λX
X is called eigen vector.

Properties oF Eigen Vectors


1. The eigen vector X of a matrix A is not unique.
2. If λ1, λ2, ...., λn be distinct eigen values of an n × n matrix then corresponding eigen vectors X1, X2,
......., Xn form a linearly independent set.
3. If two or more eigen values are equal it may or may not be possible to get linearly
independent eigen vectors corresponding to the equal roots.
4. Two eigen vectors X1 and X2 are called orthogonal vectors if X1′ X2= 0.
5. Eigen vectors of a symmetric matrix corresponding to different eigen values are orthogonal.

Orthogonal Vectors
Two vectors X₁ and X₂ are said to be orthogonal if X₁ᵀX₂ = X₂ᵀX₁ = 0.

Algebraic Multiplicity
The algebraic multiplicity of an eigen value is the number of times the eigen value is repeated. For example, the eigen values of
[−2 2 −3; 2 1 −6; −1 −2 0] are −3, −3, 5,
so the algebraic multiplicity of −3 is 2.

Geometric Multiplicity
The geometric multiplicity of an eigen value λ is the number of linearly independent eigen vectors corresponding to λ. It is denoted by multg(λ).
For the matrix above, the eigen vectors for λ = −3 are (0, 3, 2)ᵀ and (3, 0, 1)ᵀ, so multg(−3) = 2.

Similarity Transformation
Let A and B be two square matrices of order n. Then B is said to be similar to A if there exists a non-
singular matrix P such that
B = P–1 AP ...(1)
Equation (1) is called a similarity transformation.

Diagonalisation of a Matrix
Diagonalisation of a matrix A is the process of reduction of A to a diagonal form ‘D’. If A is related
to D by a similarity transformation such that D = P–1 AP then A is reduced to the diagonal matrix D through
modal matrix P. D is also called spectral matrix of A.
Powers of a Matrix (By Diagonalisation)
We can obtain powers of a matrix by using diagonalisation.
We know that D = P–1 AP
Where A is the square matrix and P is a non-singular matrix.
D2 = (P–1 AP) (P–1 AP) = P–1 A (P P–1) AP = P–1 A2 P
Similarly D3 = P–1 A3 P
In general Dn = P–1 An P ...(1)
Pre-multiply (1) by P and post-multiply by P–1
P Dn P–1 = P (P–1 An P) P–1
= (P P–1) An (P P–1)
= An
Procedure: (1) Find eigen values for a square matrix A.
(2) Find eigen vectors to get the modal matrix P.
(3) Find the diagonal matrix D, by the formula D = P–1 AP
(4) Obtain An by the formula An = P Dn P–1.
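A NumPy sketch of the procedure; the 2 × 2 matrix is an assumed example, checked against direct powering:

```python
# Sketch: A**n via A = P D P^{-1}, checked against numpy's matrix_power.
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])     # eigen values 2 and 5

w, P = np.linalg.eig(A)        # eigen values w, modal matrix P
n = 5
Dn = np.diag(w**n)
An = P @ Dn @ np.linalg.inv(P)

assert np.allclose(An, np.linalg.matrix_power(A, n))
print(np.round(An, 6))
```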

Hermitian Matrix
Definition. A square matrix A = [a_{ij}] is said to be Hermitian if the (i, j)th element of A equals the conjugate complex of the (j, i)th element, i.e. a_{ij} = ā_{ji} for all i and j.
For example, [2, 3 + 4i; 3 − 4i, 1] and [a, b − id; b + id, c].
Skew-Hermitian Matrix
Definition. A square matrix A = (a_{ij}) is said to be a Skew-Hermitian matrix if the (i, j)th element of A is equal to the negative of the conjugate complex of the (j, i)th element of A, i.e. a_{ij} = −ā_{ji} for all i and j.
Periodic Matrix
A square matrix is said to be periodic if A^{k+1} = A, where k is a positive integer. If k is the least positive integer for which A^{k+1} = A, then A is said to be of period k.
Idempotent Matrix
A square matrix is said to be idempotent provided A² = A.
Unitary Matrix
A square matrix A is said to be a unitary matrix if A·A^θ = A^θ·A = I.

CHAPTER–42 (PARTIAL DIFFERENTIAL EQUATIONS)


Lagrange's equation Pp + Qq = R  ⇒  dx/P = dy/Q = dz/R; we can also use multipliers.

Homogeneous equations: a₀ ∂²z/∂x² + a₁ ∂²z/∂x∂y + a₂ ∂²z/∂y² = 0  ⇒  A.E. is a₀m² + a₁m + a₂ = 0
Case I. If m = m₁, m = m₂ (distinct): C.F. = f₁(y + m₁x) + f₂(y + m₂x)
Case II. If m₁ = m₂: C.F. = f₁(y + m₁x) + x f₂(y + m₁x)

(i) Particular Integral: [1/f(D, D′)] e^{ax+by} = e^{ax+by}/f(a, b)
(ii) P.I. = [1/f(D², DD′, D′²)] sin(ax + by) = sin(ax + by)/f(−a², −ab, −b²)
(iii) P.I. = [1/f(D, D′)] f(x, y): use the Binomial Theorem
(iv) P.I. = [1/(D + mD′)] f(x, y) = ∫ f(x, c + mx) dx, c being replaced by y − mx after integration

CHAPTER–43 (LINEAR AND NON-LINEAR PARTIAL


DIFFERENTIAL EQUATIONS)
Non-Homogeneous Equations: Monge's Method for Rr + Ss + Tt = V    ...(i)
r = (dp − s dy)/dx,  t = (dq − s dx)/dy; substitute these values of r and t in (i) to obtain
R dy² − S dx dy + T dx² = 0

CHAPTER–44 (APPLICATIONS OF PARTIAL DIFFERENTIAL EQUATIONS)


(i) ∂²u/∂t² = c² ∂²u/∂x² (wave equation)   (ii) ∂u/∂t = c² ∂²u/∂x² (one-dimensional heat flow)
(iii) ∂²u/∂x² + ∂²u/∂y² = 0 (two-dimensional heat flow, steady state)
Classification
A ∂²z/∂x² + B ∂²z/∂x∂y + C ∂²z/∂y² + F(x, y, z, p, q) = 0
1. Parabolic if B² − 4AC = 0
2. Elliptic if B² − 4AC < 0
3. Hyperbolic if B² − 4AC > 0

CHAPTER–45 (INTEGRAL TRANSFORM)


Fourier Transform

F(s) = (1/√(2π)) ∫_{−∞}^{∞} f(t) e^{ist} dt
f(x) = (1/√(2π)) ∫_{−∞}^{∞} F(s) e^{−isx} ds

Fourier Sine Transform

F_s(s) = √(2/π) ∫_0^∞ f(t) sin st dt
f(x) = √(2/π) ∫_0^∞ F_s(s) sin sx ds

Fourier Cosine Transform

F_c(s) = √(2/π) ∫_0^∞ f(t) cos st dt
f(x) = √(2/π) ∫_0^∞ F_c(s) cos sx ds

CHAPTER–46 (LAPLACE TRANSFORM)


1. L(1) = 1/s
2. L(tⁿ) = n!/s^{n+1}
3. L(e^{at}) = 1/(s − a)
4. L(cosh at) = s/(s² − a²)
5. L(sinh at) = a/(s² − a²)
6. L(sin at) = a/(s² + a²)
7. L(cos at) = s/(s² + a²)
8. L[e^{at} f(t)] = F(s − a)
9. L[f′(t)] = s L[f(t)] − f(0)
10. L[f″(t)] = s² L[f(t)] − s f(0) − f′(0)
11. L[∫_0^t f(t) dt] = F(s)/s
12. L[tⁿ f(t)] = (−1)ⁿ dⁿ/dsⁿ [F(s)]
13. L[f(t)/t] = ∫_s^∞ F(s) ds
14. u(t − a) = 0 when t < a;  1 when t > a
15. L[u(t − a)] = e^{−as}/s
16. L[f(t − a)·u(t − a)] = e^{−as} F(s)
17. L[δ(t)] = 1
18. L[δ(t − a)] = e^{−as}
19. For f(t) periodic with period T: L[f(t)] = [1/(1 − e^{−sT})] ∫_0^T e^{−st} f(t) dt
20. L[t sin at] = 2as/(s² + a²)²
21. L[t cos at] = (s² − a²)/(s² + a²)²
22. L[(1/2a³)(sin at − at cos at)] = 1/(s² + a²)²
23. L[(1/2a)(sin at + at cos at)] = s²/(s² + a²)²

Convolution Theorem: L[∫_0^t f₁(x) f₂(t − x) dx] = F₁(s)·F₂(s)

CHAPTER–47 (INVERSE LAPLACE TRANSFORMS)

1. L⁻¹[1/s] = 1
2. L⁻¹[1/sⁿ] = t^{n−1}/(n − 1)!
3. L⁻¹[1/(s − a)] = e^{at}
4. L⁻¹[s/(s² − a²)] = cosh at
5. L⁻¹[1/(s² − a²)] = (1/a) sinh at
6. L⁻¹[1/(s² + a²)] = (1/a) sin at
7. L⁻¹[s/(s² + a²)] = cos at
8. L⁻¹[F(s − a)] = e^{at} f(t)
9. L⁻¹[1/(s² + a²)²] = (1/2a³)(sin at − at cos at)
10. L⁻¹[s/(s² + a²)²] = (1/2a) t sin at
11. L⁻¹[(s² − a²)/(s² + a²)²] = t cos at
12. L⁻¹[s²/(s² + a²)²] = (1/2a)(sin at + at cos at)
13. L⁻¹[sF(s)] = d f(t)/dt, provided f(0) = 0
14. L⁻¹[F(s)/s] = ∫_0^t f(t) dt
15. L⁻¹[F(s + a)] = e^{−at} f(t)
16. L⁻¹[e^{−as} F(s)] = f(t − a) u(t − a)
17. L⁻¹[(d/ds) F(s)] = −t f(t)
18. L⁻¹[∫_s^∞ F(s) ds] = f(t)/t
19. L⁻¹[F₁(s)·F₂(s)] = ∫_0^t f₁(x) f₂(t − x) dx
20. f(t) = sum of the residues of e^{st} F(s) at the poles of F(s)
21. L⁻¹[F(s)/G(s)] = Σ_{i=1}^{n} [F(αᵢ)/G′(αᵢ)] e^{αᵢt}, where α₁, ...., αₙ are the simple roots of G(s)

