
UECM2623/UCCM2623

NUMERICAL METHODS & STATISTICS

UECM1693
MATHEMATICS FOR PHYSICS II

UNIVERSITI TUNKU ABDUL RAHMAN


Contents

1 Preliminaries 3
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Error Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

2 Numerical Differentiation 5

3 Numerical Integration 7
3.1 The Trapezoidal Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 Simpson's Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

4 Roots Of Equations 10
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2 Bracketing Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
4.2.1 The Bisection Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
4.2.2 The Method of False-Position . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
4.3 Open Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3.1 Fixed-Point Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.3.2 Newton-Raphson Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
4.3.3 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

5 Some Topics In Linear Algebra 20


5.1 Iterative Methods For Solving Linear Systems . . . . . . . . . . . . . . . . . . . . . . . 20
5.2 A Review On Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.3 Approximation of Eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.3.1 The Power Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5.3.2 Power Method with Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3.3 Inverse Power Method with Scaling . . . . . . . . . . . . . . . . . . . . . . . . . 29

5.3.4 Shifted Inverse Power Method with Scaling . . . . . . . . . . . . . . . . . . . . . 30

6 Optimization 32
6.1 Direct Search Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.1.1 Golden Section Search (Maximum) . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.1.2 Gradient Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

7 Numerical Methods For Ordinary Differential Equations 36


7.1 First-Order Initial-Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.1.1 Euler's Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
7.1.2 Heun's Method/Improved Euler's Method . . . . . . . . . . . . . . . . . . . . . 38
7.1.3 Taylor Series Method of Order p . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
7.1.4 Runge-Kutta Method of Order p . . . . . . . . . . . . . . . . . . . . . . . . . . 40
7.1.5 Multi-step Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
7.1.6 Adams-Bashforth/Adams-Moulton Method . . . . . . . . . . . . . . . . . . . . . 42
7.1.7 Summary: Orders Of Errors For Different Methods . . . . . . . . . . . . . . . . . 43
7.2 Higher-Order Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
7.2.1 The Linear Shooting Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
7.2.2 Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

8 Numerical Methods For Partial Differential Equations 48


8.1 Second Order Linear Partial Differential Equations . . . . . . . . . . . . . . . . . . . . 48
8.2 Numerical Approximation To Derivatives: 1-Variable Functions . . . . . . . . . . . . . 49
8.3 Numerical Approximation To Derivatives: 2-Variable Functions . . . . . . . . . . . . . 49
8.4 Methods for Parabolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
8.4.1 FTCS Explicit Scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
8.4.2 Crank-Nicolson Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
8.5 A Numerical Method for Elliptic Equations . . . . . . . . . . . . . . . . . . . . . . . . . 54
8.6 CTCS scheme for Hyperbolic Equations . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Chapter 1

Preliminaries

1.1 Introduction
Numerical methods are methods for solving problems on a computer or a pocket calculator. Such
methods are needed for many real-life problems that do not have analytic solutions, or whose analytic
solutions are practically useless.

1.2 Error Analysis


Definition 1.2.1.

(a) The error in a computed quantity is defined as

Error = True value - Approximation.

(b) The absolute error is defined as

Absolute error = | True value - Approximation |.

(c) The relative error is defined as

    Relative error = (True value - Approximation) / True value.

(d) The percentage relative error is defined as

    Percentage relative error = [(True value - Approximation) / True value] x 100%.

Definition 1.2.2. (Errors In Numerical Methods)
There are two major sources of errors in numerical computations.
(a) Round-off error occurs when a computer or calculator is used to perform real-number calcula-
tions.
Remark. The error arises because for machine computation, the number must be represented
by a number with a finite number of digits. These errors become important as the number of
computations gets very large. To understand the nature of round-off errors, it is necessary to
learn the ways numbers are stored and arithmetic operations are performed in a computer. The
effect of the round-off errors can be illustrated by the following example.
Example 1. (The effect of round-off error)
Let f(x) = (x^2 - 1/9)/(x - 1/3).
(i) If a 4-digit calculator is used to find f(0.3334), we will obtain
    (0.1112 - 0.1111)/(0.3334 - 0.3333) = 1.
(ii) On the other hand, by using the formula f(x) = x + 1/3, we will obtain 0.3334 + 0.3333 = 0.6667.
(b) Truncation errors are those that result from using an approximation in place of an exact
mathematical procedure.
Example 2. Recall that for all x,

    e^x = 1 + x + x^2/2! + x^3/3! + ... .

In particular, if we let x = 1, then

    e = 1 + 1 + 1/2! + 1/3! + ... .

If we use 1 + 1 + 1/2! + ... + 1/10! to approximate e, then the truncation error is 1/11! + 1/12! + ... .
Remark. (Truncation Errors)
Many numerical schemes are derived from the Taylor series
    y(x + h) = y(x) + h y'(x) + (h^2/2!) y''(x) + ... .

If the truncated Taylor series used to approximate y(x + h) is the order n Taylor polynomial

    p_n(x) = y(x) + h y'(x) + (h^2/2!) y''(x) + ... + (h^n/n!) y^(n)(x),

then the approximation is called an nth order method since it is accurate to the terms of order h^n.
The neglected remainder term

    (h^(n+1)/(n+1)!) y^(n+1)(x) + (h^(n+2)/(n+2)!) y^(n+2)(x) + ...

is called the (local) truncation error (TE).
We say the TE per step is of order h^(n+1), i.e. TE = O(h^(n+1)).

Chapter 2

Numerical Differentiation

We replace the derivatives of a function f by their corresponding difference quotients based on the
Taylor series:
(i)  f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (h^3/3!) f'''(x) + ...

(ii) f(x - h) = f(x) - h f'(x) + (h^2/2!) f''(x) - (h^3/3!) f'''(x) + ...

(iii) From (i):  f'(x) = [f(x + h) - f(x)]/h - (h/2!) f''(x) - ...
      i.e. f'(x) = [f(x + h) - f(x)]/h + O(h)
      (the forward difference formula for f')

(iv) From (ii):  f'(x) = [f(x) - f(x - h)]/h + O(h)
     (the backward difference formula for f')

(v) (i) - (ii):  f'(x) = [f(x + h) - f(x - h)]/(2h) + O(h^2)
    (the central difference formula for f')

(vi) (i) + (ii):  f''(x) = [f(x + h) - 2f(x) + f(x - h)]/h^2 + O(h^2)
     (the central difference formula for f'')
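To make these formulas concrete, here is a minimal sketch (not part of the original notes) that compares the three first-derivative quotients against the exact derivative; the test function f(x) = e^x and the step size h = 0.01 are illustrative choices, not taken from the notes.

```python
import math

def forward_diff(f, x, h):
    # f'(x) ~ [f(x+h) - f(x)] / h, error O(h)
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    # f'(x) ~ [f(x) - f(x-h)] / h, error O(h)
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    # f'(x) ~ [f(x+h) - f(x-h)] / (2h), error O(h^2)
    return (f(x + h) - f(x - h)) / (2 * h)

def central_second_diff(f, x, h):
    # f''(x) ~ [f(x+h) - 2f(x) + f(x-h)] / h^2, error O(h^2)
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

if __name__ == "__main__":
    f, x, h = math.exp, 1.0, 0.01          # illustrative choices
    exact = math.exp(1.0)                  # f'(x) = e^x for this test function
    for name, approx in [("forward", forward_diff(f, x, h)),
                         ("backward", backward_diff(f, x, h)),
                         ("central", central_diff(f, x, h))]:
        print(f"{name:8s}: {approx:.8f}  abs error = {abs(approx - exact):.2e}")
```

Running it shows the central difference error shrinking like h^2 while the one-sided errors shrink like h, as stated above.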

Example 3. Given the following table of data:

    x      1.00   1.01   1.02   1.03
    f(x)   5      6.01   7.04   8.09

(a) Use forward and backward difference approximations of O(h) and a central difference approximation
    of O(h^2) to estimate f'(1.02) using a step size h = 0.01.
(b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.02) = 104.

Reading Assignment 2.0.1. Given the following table of data:

    x      1.00   1.25          1.50       1.75           2.00
    f(x)   -8     -9.93359375   -11.4375   -11.99609375   -11.0000

(a) Use forward and backward difference approximations of O(h) and a central difference approximation
    of O(h^2) to estimate the first derivative of f(x) at x = 1.5 using a step size h = 0.25.
(b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.5) = -4.5.

Answer.

(a) For h = 0.25,

    (i) using the forward difference formula
        f'(1.5) ≈ [f(1.5 + h) - f(1.5)]/h = [f(1.75) - f(1.5)]/0.25
                = [-11.99609375 - (-11.4375)]/0.25 = -2.234375

    (ii) using the backward difference formula
        f'(1.5) ≈ [f(1.5) - f(1.5 - h)]/h = [f(1.5) - f(1.25)]/0.25
                = [-11.4375 - (-9.93359375)]/0.25 = -6.015625

    (iii) using the central difference formula
        f'(1.5) ≈ [f(1.5 + h) - f(1.5 - h)]/(2h) = [f(1.75) - f(1.25)]/0.5
                = [-11.99609375 - (-9.93359375)]/0.5 = -4.125

(b) percentage error = |(actual value - approximation)/actual value| x 100%

    (i) For the forward difference approximation:
        percentage error = |(-4.5 - (-2.234375))/(-4.5)| x 100% ≈ 50.35%

    (ii) For the backward difference approximation:
        percentage error = |(-4.5 - (-6.015625))/(-4.5)| x 100% ≈ 33.68%

    (iii) For the central difference approximation:
        percentage error = |(-4.5 - (-4.125))/(-4.5)| x 100% ≈ 8.33%

Chapter 3

Numerical Integration

The ideal way to evaluate a definite integral ∫_a^b f(x) dx is, of course, to find a formula F(x) for an
anti-derivative of f. But some anti-derivatives are difficult or impossible to find. For example, there
are no elementary formulas for the anti-derivatives of sin(x)/x, sqrt(1 + x^4) and e^(x^2). When we cannot evaluate
a definite integral with an anti-derivative, we turn to numerical methods such as the Trapezoidal Rule
and Simpson's Rule. The problem of numerical integration is the numerical evaluation of integrals

    I = ∫_a^b f(x) dx

where a and b are constants and f is a function given analytically or empirically by a table of values.
Geometrically, if f(x) ≥ 0 for all x in [a, b], then I is equal to the area under the curve of f between
a and b.

3.1 The Trapezoidal Rule


Used to approximate a definite integral by adding up the areas of trapezoids.

Partition [a, b] into n subintervals of equal length Δx = (b - a)/n, with x_i = a + iΔx = a + i(b - a)/n.
Recalling the area formula for a trapezoid, the area of the ith strip is

    A_i = (Δx/2) (f(x_(i-1)) + f(x_i)).

An approximation for the area under the curve y = f(x) from x = a to x = b is

    T = Σ_(i=1)^n A_i
      = Σ_(i=1)^n (Δx/2) (f(x_(i-1)) + f(x_i))
      = (Δx/2) [f(x_0) + 2f(x_1) + ... + 2f(x_(n-1)) + f(x_n)].

Hence

    ∫_a^b f(x) dx ≈ (h/2) [f(x_0) + 2f(x_1) + ... + 2f(x_(n-1)) + f(x_n)], where h = (b - a)/n.

Remark. The absolute error incurred by the Trapezoidal approximation is given by
E_T = |∫_a^b f(x) dx - T|. This error will decrease as the step size Δx decreases, because the trapezoids
fit the curve better as their number increases.
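The composite rule above translates directly into a few lines of code. The sketch below is an illustration and not part of the original notes; it reproduces the value T = 75/32 of the worked example that follows (∫_1^2 x^2 dx with n = 4).

```python
def trapezoid(f, a, b, n):
    """Composite Trapezoidal Rule with n subintervals of equal width h = (b - a)/n."""
    h = (b - a) / n
    total = f(a) + f(b)                      # end points have weight 1
    for i in range(1, n):
        total += 2 * f(a + i * h)            # interior points have weight 2
    return h / 2 * total

# Illustrative check: integral of x^2 on [1, 2] with n = 4.
print(trapezoid(lambda x: x ** 2, 1.0, 2.0, 4))   # 2.34375 = 75/32
```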

Theorem 3.1.1 (Error Bound for Trapezoidal Rule). If f'' is continuous and |f''(x)| ≤ M for
all x in [a, b], then E_T ≤ (b - a)^3 M / (12 n^2).

Example 4. Estimate ∫_1^2 x^2 dx by the Trapezoidal Rule with n = 4.

Answer. f(x) = x^2, Δx = 1/4, x_0 = 1, x_1 = 5/4, x_2 = 6/4, x_3 = 7/4, x_4 = 8/4 = 2.

    T = (1/8)[f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(x_4)]
      = (1/8)[(x_0)^2 + 2(x_1)^2 + 2(x_2)^2 + 2(x_3)^2 + (x_4)^2]
      = (1/8)[1 + 2(25/16) + 2(36/16) + 2(49/16) + 4] = 75/32.

Example 5. How many subdivisions should be used in the Trapezoidal Rule to approximate ∫_1^2 (1/x) dx
with an error less than 10^(-4)?

Answer. Since |f''(x)| = |2/x^3| ≤ 2 for x in [1, 2], we have from Theorem 3.1.1, E_T ≤ 1/(6n^2).
We choose n so that 1/(6n^2) < 10^(-4):

    1/(6n^2) < 10^(-4)  ⇒  n > sqrt(10^4/6) ≈ 40.82.

So we can use any n ≥ 41. In particular, n = 41 will ensure the desired accuracy.

3.2 Simpson's Rule

Used to approximate a definite integral by adding up areas under parabolas.

Partition [a, b] into n subintervals of equal length Δx = (b - a)/n, but this time with n an even number.
The area under the curve y = Ax^2 + Bx + C on [-h, h] is

    A_p = ∫_(-h)^h (Ax^2 + Bx + C) dx = (h/3)(2Ah^2 + 6C).

To write A_p in terms of y_0, y_1, y_2:
Since the curve passes through (-h, y_0), (0, y_1), and (h, y_2), we get

    y_0 = Ah^2 - Bh + C     (3.1)
    y_1 = C                 (3.2)
    y_2 = Ah^2 + Bh + C     (3.3)

Solving these simultaneous equations, we obtain C = y_1, 2Ah^2 = y_0 + y_2 - 2y_1, and A_p = (h/3)(y_0 + 4y_1 + y_2).
An approximation for the area under the curve y = f(x) from x = a to x = b is

    S = (h/3)[(f(x_0) + 4f(x_1) + f(x_2)) + (f(x_2) + 4f(x_3) + f(x_4)) + ... + (f(x_(n-2)) + 4f(x_(n-1)) + f(x_n))].

Hence

    ∫_a^b f(x) dx ≈ (h/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ... + 2f(x_(n-2)) + 4f(x_(n-1)) + f(x_n)], where h = (b - a)/n.

Theorem 3.2.1 (Error Estimate for Simpson's Rule). If f^(4) is continuous and |f^(4)(x)| ≤ M
for all x in [a, b], then the Simpson's rule error satisfies |E_S| ≤ (b - a)^5 M / (180 n^4).

Example 6. Approximate ∫_0^1 4x^3 dx by Simpson's Rule with n = 4.

Answer. f(x) = 4x^3, h = 1/4, x_i = ih = i/4.

    S = (h/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + f(x_4)]
      = (1/12)[0 + 4(1/16) + 2(8/16) + 4(27/16) + 4] = 1.

Example 7. Use Simpson's Rule with n = 10 to approximate the integral ∫_0^1 e^(x^2) dx.
Estimate the error involved in this approximation.

Answer. (i) f(x) = e^(x^2), h = 0.1, x_i = 0.1i.

    S = (0.1/3)[f(x_0) + 4f(x_1) + 2f(x_2) + ... + 2f(x_8) + 4f(x_9) + f(x_10)]
      = (0.1/3)[f(0) + 4f(0.1) + 2f(0.2) + ... + 2f(0.8) + 4f(0.9) + f(1)]
      = (0.1/3)[e^0 + 4e^0.01 + 2e^0.04 + 4e^0.09 + 2e^0.16 + 4e^0.25 + 2e^0.36 + 4e^0.49 + 2e^0.64 + 4e^0.81 + e^1]
      ≈ 1.462681

(ii) For 0 ≤ x ≤ 1,

    0 ≤ f^(4)(x) = (12 + 48x^2 + 16x^4) e^(x^2)  ⇒  max over [0, 1] of f^(4)(x) = f^(4)(1) = 76e.

    Hence |E_S| ≤ 76e (1 - 0)^5 / (180 (10)^4) ≈ 0.000115.

Exercise 1. Estimate ∫_(-2)^4 170/(1 + x^2) dx by using
(a) the Trapezoidal Rule
(b) Simpson's Rule
with n = 6 subintervals. Calculate the absolute error in each case.
Answer. (a) 413, 0.6043  (b) 400, 13.6043

Exercise 2. Suppose the following data were obtained from an experiment:

    x   3.0   3.25   3.5   3.75   4.0    4.25   4.5    4.75   5.0
    y   6.7   7.4    8.2   9.2    10.4   11.6   12.5   13.3   14.0

Use Simpson's Rule to approximate ∫_3^5 y dx.

Answer. 20.7.

Chapter 4

Roots Of Equations

4.1 Introduction
Definition 4.1.1. Any number r for which f (r) = 0 is called a solution or a root of that equation
or a zero of f.
Example 8. The root of the linear equation ax + b = 0 is x = -b/a, a ≠ 0.

Example 9. The roots of the quadratic equation ax^2 + bx + c = 0 are given by

    x = [-b ± sqrt(b^2 - 4ac)] / (2a).
Example 10. Find the roots of the equation x^3 - 4x = 0.

4.2 Bracketing Methods


These methods are used to solve the equation f(x) = 0, where f is a continuous function. They are
based on the Intermediate Value Theorem that says that
if f is a continuous function on [a, b] that has values of opposite signs at a and b, then f has at least
one root in the interval (a, b).
They are called bracketing methods because 2 initial guesses bracketing the root are required to
start the procedure. The solution is found by systematically reducing the width of the bracket.
Two examples of bracketing methods are :
(a) The Bisection Method
(b) The Method of False-Position

Example 11. Show that the equation x = cos x has at least one solution in the interval (0, π/2).

Reading Assignment 4.2.1. Show that the equation x^3 - 4x + 1 = 0 has at least one solution in
the interval (1, 2).

Answer. (a) Let f(x) = x^3 - 4x + 1. Being a polynomial, f is continuous on [1, 2].

(b) f(1) = -2, f(2) = 1, so f(1)f(2) is negative.
    Hence, by the Intermediate Value Theorem, the equation has a solution in (1, 2).

4.2.1 The Bisection Method


The method calls for a repeated halving of subintervals of [a, b], and at each step, picking the half
where f changes sign.
The basic algorithm is as follows:
Step 1. Choose lower xl and upper xu guesses for the root such that f (xl )f (xu ) < 0.
Step 2. An estimate of the root is determined by

    x_r = (x_l + x_u)/2.
Step 3. Determine which subinterval the root lies as follows:
(a) If f (xl )f (xr ) < 0, the root lies in (xl , xr ). set xu = xr , and return to Step 2.
(b) If f (xl )f (xr ) > 0, the root lies in (xr , xu ). set xl = xr , and return to Step 2.
(c) If f (xl )f (xr ) = 0, then the root equals xr . Stop.

Termination Criteria and Error Estimates

We must have a stopping criterion ε_s to terminate the computation. In practice, we require an error
estimate that is not contingent on foreknowledge of the root. In particular, an approximate percentage
relative error ε_a can be calculated as

    |ε_a| = |(present approximation - previous approximation)/(present approximation)| x 100%
          = |(x_r^new - x_r^old)/x_r^new| x 100%
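The sketch below is an illustration (not part of the original notes) of the bisection algorithm with the percentage-error stopping test just described; the function and bracket of Example 12 below are used as a check.

```python
def bisection(f, xl, xu, es=1e-4, max_iter=100):
    """Bisection method: f(xl) and f(xu) must have opposite signs.
    Stops when the estimated percentage error falls below es (in %)."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must bracket a root")
    xr_old = xl
    for _ in range(max_iter):
        xr = (xl + xu) / 2                       # Step 2: midpoint estimate
        ea = abs((xr - xr_old) / xr) * 100 if xr != 0 else float("inf")
        test = f(xl) * f(xr)
        if test < 0:                             # root in (xl, xr)
            xu = xr
        elif test > 0:                           # root in (xr, xu)
            xl = xr
        else:                                    # f(xr) == 0: exact root found
            return xr
        if ea < es:
            return xr
        xr_old = xr
    return xr

# Example 12 below: f(x) = x^10 - 1 on [0, 1.3] with es = 8% stops at about 1.05625.
print(bisection(lambda x: x ** 10 - 1, 0.0, 1.3, es=8))
```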

Example 12. Use bisection to find the root of f(x) = x^10 - 1. Employ initial guesses of x_l = 0 and
x_u = 1.3 and iterate until the estimated percentage error ε_a falls below a stopping criterion of ε_s = 8%.

Answer.

(a) f(0)f(1.3) < 0, so the initial estimate of the root is

        x_r^1 = (0 + 1.3)/2 = 0.65.

(b) f(0)f(0.65) > 0, so the root is in (0.65, 1.3). Set x_l = 0.65, x_u = 1.3. Hence

        x_r^2 = (0.65 + 1.3)/2 = 0.975.

    The estimated error is

        |ε_a| = |(x_r^new - x_r^old)/x_r^new| x 100% = |(0.975 - 0.65)/0.975| x 100% = 33.3%.

(c) f(0.65)f(0.975) > 0, so the root is in (0.975, 1.3). Set x_l = 0.975, x_u = 1.3. Hence

        x_r^3 = (0.975 + 1.3)/2 = 1.1375.

    The estimated error is

        |ε_a| = |(1.1375 - 0.975)/1.1375| x 100% = 14.3%.

(d) f(0.975)f(1.1375) < 0, so the root is in (0.975, 1.1375). Set x_l = 0.975, x_u = 1.1375. Hence

        x_r^4 = (0.975 + 1.1375)/2 = 1.05625.

    The estimated error is

        |ε_a| = |(1.05625 - 1.1375)/1.05625| x 100% = 7.7%.

After 4 iterations, the estimated percentage error is reduced to less than 8%.

Advantages & Disadvantages of Bisection Method


(a) Advantages
(i) This method is guaranteed to converge to the root.
(ii) The error bound is guaranteed to decrease by one half with each iteration.
(b) Disadvantages
(i) It generally converges more slowly than most other methods.
(ii) It requires two initial estimates at which f has opposite signs to start the procedure.
(iii) If f is not continuous, the method may converge to a point that is not a root, so the computed
      values of f(x) should be checked.
(iv) If f does not change sign over any interval, then the method will not work.

4.2.2 The Method of False-Position
In this method, we use the graphical insight that
if f (c) is closer to zero than f (d), then c is closer to the root than d (which is not true in general).

By using this idea, we obtain

    x_r = x_u - f(x_u) (x_l - x_u) / (f(x_l) - f(x_u)).

By replacing this formula in Step 2 of the Bisection method, we obtain the algorithm for the
method of false-position as follows:
Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l)f(x_u) < 0.
Step 2. An estimate of the root is determined by

    x_r = x_u - f(x_u) (x_l - x_u) / (f(x_l) - f(x_u)).

Step 3. Determine which subinterval the root lies as follows:


(a) If f (xl )f (xr ) < 0, the root lies in (xl , xr ). set xu = xr , and return to Step 2.
(b) If f (xl )f (xr ) > 0, the root lies in (xr , xu ). set xl = xr , and return to Step 2.
(c) If f (xl )f (xr ) = 0, then the root equals xr . Stop.
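As an illustration (not part of the original notes), the following sketch is the bisection routine with Step 2 replaced by the false-position formula; it reproduces the root ≈ 0.56714 of f(x) = x - e^(-x) from Example 13 below.

```python
import math

def false_position(f, xl, xu, tol=1e-5, max_iter=100):
    """Method of False-Position: the bracket endpoints are joined by a straight line
    and its x-intercept is used as the new root estimate."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must bracket a root")
    xr_old = xl
    for _ in range(max_iter):
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))   # Step 2: false-position estimate
        if abs(xr - xr_old) < tol:
            return xr
        if f(xl) * f(xr) < 0:
            xu = xr                                     # root in (xl, xr)
        elif f(xl) * f(xr) > 0:
            xl = xr                                     # root in (xr, xu)
        else:
            return xr                                   # exact root found
        xr_old = xr
    return xr

print(false_position(lambda x: x - math.exp(-x), 0.0, 1.0))   # about 0.56714
```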

Example 13. Use the Method of False-Position to find the zero of f(x) = x - e^(-x). Use initial guesses
of 0 and 1.

Answer.

(i) First iteration:
    x_l = 0, f(x_l) = -1
    x_u = 1, f(x_u) = 0.63212
    x_r = 1 - 0.63212 (0 - 1)/(-1 - 0.63212) = 0.61270,  f(x_r) = 0.07081
    f(x_l)f(x_r) < 0  ⇒  the root r lies in (x_l, x_r)

(ii) Second iteration:
    x_l = 0, f(x_l) = -1
    x_u = 0.61270, f(x_u) = 0.07081
    x_r = 0.61270 - 0.07081 (0 - 0.61270)/(-1 - 0.07081) = 0.57218,  f(x_r) = 0.00789

(iii) The calculations are summarized in the following table:

    x_l       x_u       x_r       f(x_l)     f(x_r)    f(x_l)f(x_r)
    0.00000   1.00000   0.61270   -1.00000   0.07081   -0.07081
    0.00000   0.61270   0.57218   -1.00000   0.00789   -0.00789
    0.00000   0.57218   0.56770   -1.00000   0.00087   -0.00087
    0.00000   0.56770   0.56721   -1.00000   0.00010   -0.00010
    0.00000   0.56721   0.56715   -1.00000   0.00001   -0.00001
    0.00000   0.56715   0.56714   -1.00000   0.00000   0.00000

Hence, the approximate root is 0.56714.

Reading Assignment 4.2.2. Use the Method of False-Position to find the zero of f(x) = x - e^(-x).
Use initial guesses of 0 and 1. Iterate until two successive approximations differ by less than 0.01.

Answer. Let e_a = |x_r^new - x_r^old|.

(i) First iteration:
    x_l = 0, f(x_l) = -1
    x_u = 1, f(x_u) = 0.63212
    x_r = 1 - 0.63212 (0 - 1)/(-1 - 0.63212) = 0.61270
    f(x_l)f(x_r) < 0  ⇒  the root r lies in (x_l, x_r)

(ii) Second iteration:
    x_l = 0, f(x_l) = -1
    x_u = 0.61270, f(x_u) = 0.07081
    x_r = 0.61270 - 0.07081 (0 - 0.61270)/(-1 - 0.07081) = 0.57218
    e_a = |0.57218 - 0.61270| = 0.04052

(iii) Third iteration:
    x_l = 0, f(x_l) = -1
    x_u = 0.57218, f(x_u) = 0.00789
    x_r = x_u - f(x_u)(x_l - x_u)/(f(x_l) - f(x_u)) = 0.57218 - 0.00789 (0 - 0.57218)/(-1 - 0.00789) = 0.56770
    e_a = |0.56770 - 0.57218| = 0.00448 < 0.01 (tolerance satisfied).

Hence, the approximate root is 0.56770.

4.3 Open Methods
In contrast to the bracketing methods, the open methods are based on formulas that require a single
starting value or two starting values that do not necessarily bracket the root. Hence, they sometimes
diverge from the true root. However, when they converge, they tend to converge much faster than the
bracketing methods.
Examples of open methods:
(a) Fixed-Point Method
(b) Newton-Raphson Method
(c) Secant Method

4.3.1 Fixed-Point Method


To solve f (x) = 0, rearrange f (x) = 0 into the form x = g(x). Then the scheme is given by

xn+1 = g(xn ), n = 0, 1, 2, . . .

Remark. This method is also called the successive substitution method , or one-point iteration.
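A minimal sketch of the scheme x_(n+1) = g(x_n), added here for illustration and not taken from the original notes; the rearrangement g(x) = (x^2 + e^x - 2)/3 used in Example 14 below serves as the test case, and the tolerance is an assumption.

```python
import math

def fixed_point(g, x0, tol=1e-9, max_iter=200):
    """Fixed-point (successive substitution) iteration x_{n+1} = g(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Rearrangement of x^2 - 3x + e^x - 2 = 0 used in the example that follows.
g = lambda x: (x ** 2 + math.exp(x) - 2) / 3
print(fixed_point(g, -0.5))        # about -0.390271686
```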

Example 14. The function f(x) = x^2 - 3x + e^x - 2 is known to have two roots, one negative and one
positive. Find the smaller root by using the fixed-point method.

Answer. The smaller root is the negative root.
f(-1) = 2 + e^(-1) > 0 and f(0) = -1 < 0, so f(-1)f(0) < 0 and the negative root lies in (-1, 0).
f(x) = 0 can be written as x = (x^2 + e^x - 2)/3 = g(x), so the algorithm is

    x_(n+1) = g(x_n) = (x_n^2 + e^(x_n) - 2)/3,  n = 0, 1, 2, . . .

If we choose the initial guess x_0 = -0.5, we obtain

    x_1 = (x_0^2 + e^(x_0) - 2)/3 = ((-0.5)^2 + e^(-0.5) - 2)/3 ≈ -0.3811564468, and so on.

The results are summarized in the following table:

    k   x_k
    0   -0.5 (initial guess)
    1   -0.381156446
    2   -0.390549582
    3   -0.390262048
    4   -0.390272019
    5   -0.390271674
    6   -0.390271686
    7   -0.390271686

Hence the root is approximately -0.390271686.

Note 1:
(a) There are many ways to change the equation f(x) = 0 to the form x = g(x), and the speed of
    convergence of the corresponding iterative sequences {x_n} may differ accordingly. For instance,
    if we use the arrangement x = sqrt(3x - e^x + 2) = g(x) in the above example, the sequence might
    not converge at all.
(b) It is simple to implement but in this case slow to converge.
(c) Even in the case where convergence is possible, divergence can occur if the initial guess is not
    sufficiently close to the root.

Example 15. If we solve x^3 + 6x - 3 = 0 using the fixed-point algorithm

    x_(n+1) = (3 - x_n^3)/6

we obtain the following results with x_0 = 0.5, x_0 = 1.5, x_0 = 2.5 and x_0 = 5.5:

    n    x_n (x_0 = 0.5)   n    x_n (x_0 = 1.5)   n    x_n (x_0 = 2.5)   n    x_n (x_0 = 5.5)
    0    0.5               0    1.5               0    2.5               0    5.5
    1    0.4791667         1    0.0625            1    -2.1041667        1    -27.2291667
    2    0.4816638         2    0.5000407         2    2.0527057         2    3365.242242
    3    0.4813757         3    0.4791616         ...                    3    -6351813600
    4    0.4814091         ...                    ...                    4    0.4271122072 x 10^29
    5    0.4814052         ...                    ...                    ...  DIVERGES!!
    6    0.4814057         ...                    ...
    7    0.4814056         ...                    ...
    8    0.4814056         8    0.4814057         ...
    9    0.4814056                                13   0.4814013
    10   0.4814056                                14   0.4814055
                                                  16   0.4814056
                                                  17   0.4814056
Note that when the initial guess x0 is close enough to the fixed point r, then the method will converge,
but if it is too far away from r, the method will diverge.
Theorem 4.3.1 (Convergence of Fixed-Point Method). Let g be a continuous function on [a, b]
with a ≤ g(x) ≤ b for all x in [a, b]. Then g(x) has at least one fixed point r in (a, b).

If, in addition, g is differentiable and satisfies |g'(x)| ≤ M < 1 for all x in [a, b], M a constant, then
the fixed point is unique and the method converges for any choice of initial point x_0 in (a, b).

4.3.2 Newton-Raphson Method
This is a method used to approximate a root of an equation f(x) = 0 assuming that f has a continuous
derivative f'. It consists of the following steps:
Step 1. Guess a first approximation x_0 to the root.
(A graph may be helpful.)
Step 2. Use the first approximation to get the second, the second to get the third, and so on, using
the formula

    x_(n+1) = x_n - f(x_n)/f'(x_n),  n = 0, 1, 2, 3, . . .

where x_n is the nth approximation.
Stop when |x_(n+1) - x_n| < ε_s, the pre-specified stopping criterion.

Remark. You may also stop when |(x_(n+1) - x_n)/x_(n+1)| x 100% < ε_s%.

Note 2: The underlying idea is that we approximate the graph of f by suitable tangents.
If you are writing a program for this method, don't forget also to include an upper limit on the number
of iterations in the procedure.
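A short sketch of the scheme (illustrative, not part of the original notes); the derivative is supplied explicitly by the caller, and the iteration cap mentioned in Note 2 is included. The tolerance and the test problem (Example 16 below) are illustrative.

```python
import math

def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    """Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):                 # upper limit on the number of iterations
        fx, dfx = f(x), df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) vanished; choose a different starting point")
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:              # stop when successive iterates agree
            return x_new
        x = x_new
    raise RuntimeError("Newton-Raphson did not converge within max_iter iterations")

# Example 16 below: root of f(x) = x - e^{-x} starting from x0 = 1.
print(newton_raphson(lambda x: x - math.exp(-x),
                     lambda x: 1 + math.exp(-x), 1.0))   # about 0.56714329
```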

Example 16. Use the Newton-Raphson Method to approximate the root of f(x) = x - e^(-x) = 0 that lies
between 0 and 2. Continue the iterations until two successive approximations differ by less than 10^(-8).

Answer. The iteration is given by

    x_(n+1) = x_n - (x_n - e^(-x_n))/(1 + e^(-x_n)) = (x_n + 1)/(e^(x_n) + 1).

Starting with x_0 = 1, we obtain

    x_1 = (x_0 + 1)/(e^(x_0) + 1) = (1 + 1)/(e + 1) = 0.537882842
    x_2 = (x_1 + 1)/(e^(x_1) + 1) ≈ 0.566986991
    x_3 ≈ 0.567143286
    x_4 ≈ 0.56714329

Since |x_4 - x_3| = 0.000000004 = 0.4 x 10^(-8) < 10^(-8), we stop the process and take x_4 ≈ 0.56714329 as
the required root.

Reading Assignment 4.3.2. The equation 2x^3 + x^2 - x + 1 = 0 has only one real root. Use the
Newton-Raphson Method to find this root. Continue the iterations until two successive approximations
differ by less than 0.0001.

Answer. Let f(x) = 2x^3 + x^2 - x + 1.

(a) First, use the Intermediate Value Theorem to locate the root.
    Since f(-2)f(-1) = (-9)(1) = -9 < 0, the root is in (-2, -1).

(b) Using the iterative formula

        x_(n+1) = x_n - f(x_n)/f'(x_n) = x_n - (2x_n^3 + x_n^2 - x_n + 1)/(6x_n^2 + 2x_n - 1)

    with the initial approximation x_0 = [(-2) + (-1)]/2 = -1.5, we obtain

        x_1 = x_0 - (2x_0^3 + x_0^2 - x_0 + 1)/(6x_0^2 + 2x_0 - 1) = -1.289473684,  |x_1 - x_0| > 0.0001
        x_2 = -1.236967446,  |x_2 - x_1| > 0.0001
        x_3 = -1.233763552,  |x_3 - x_2| > 0.0001
        x_4 = -1.233751929

Since |x_4 - x_3| = 0.000011623 < 0.0001, the required approximation to the root is x_4 = -1.233751929.


Exercise 3. Use the Newton-Raphson method to approximate 22^(1/4) (the fourth root of 22) correct to six decimal places.
Answer. 2.165737

Advantages
(a) It needs only 1 initial guess.
(b) It converges very rapidly .

Disadvantages
(a) It may not converge if the initial guess is not sufficiently close to the true root.
(b) The calculations of f 0 (xn ) may be very complicated.
(c) Difficulties may arise if |f 0 (xn )| is very small near a solution of f (x) = 0.

4.3.3 Finite Difference Method
Consider a second order boundary value problem (BVP)

    y'' + P(x) y' + Q(x) y = f(x),  y(a) = α,  y(b) = β.

Suppose a = x_0 < x_1 < ... < x_(n-1) < x_n = b with x_i - x_(i-1) = h for all i = 1, 2, . . . , n. Let
y_i = y(x_i), P_i = P(x_i), Q_i = Q(x_i) and f_i = f(x_i). Then by replacing y' and y'' with their central
difference approximations in the BVP, we get

    (y_(i+1) - 2y_i + y_(i-1))/h^2 + P_i (y_(i+1) - y_(i-1))/(2h) + Q_i y_i = f_i,  i = 1, 2, . . . , n - 1,

or, after simplifying,

    (1 + (h/2) P_i) y_(i+1) + (h^2 Q_i - 2) y_i + (1 - (h/2) P_i) y_(i-1) = h^2 f_i.

The last equation, known as a finite difference equation, is an approximation to the differential
equation. It enables us to approximate the solution at x_1, . . . , x_(n-1).

Example 17. Solving BVPs Using the Finite Difference Method

Use the above difference equation with n = 4 to approximate the solution of the BVP

    y'' - 4y = 0,  y(0) = 0,  y(1) = 5.

Answer. Here h = (1 - 0)/4 = 0.25, x_i = 0.25i, i = 0, 1, 2, 3, 4. Hence, the difference equation is

    y_(i+1) - 2.25 y_i + y_(i-1) = 0,  i = 1, 2, 3.

That is,

    y_2 - 2.25 y_1 + y_0 = 0
    y_3 - 2.25 y_2 + y_1 = 0
    y_4 - 2.25 y_3 + y_2 = 0

With the BCs y_0 = 0 and y_4 = 5, the above system becomes

    y_2 - 2.25 y_1         = 0
    y_3 - 2.25 y_2 + y_1   = 0
    -2.25 y_3 + y_2        = -5

Solving the system gives y_1 = 0.7256, y_2 = 1.6327, and y_3 = 2.9479.

Notes : We can improve the accuracy by using smaller h. But for that we have to pay a price, i.e.
we have to solve a larger system of equations.
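For illustration (not from the original notes), the sketch below assembles the interior finite difference equations and solves the resulting linear system with numpy; applied to Example 17 it reproduces the values 0.7256, 1.6327, 2.9479. The function name and interface are my own choices.

```python
import numpy as np

def solve_bvp_fd(P, Q, F, a, b, alpha, beta, n):
    """Finite difference solution of y'' + P(x)y' + Q(x)y = F(x), y(a)=alpha, y(b)=beta,
    using n subintervals (interior unknowns y_1, ..., y_{n-1})."""
    h = (b - a) / n
    x = [a + i * h for i in range(n + 1)]
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):
        Pi, Qi, Fi = P(x[i]), Q(x[i]), F(x[i])
        A[i - 1, i - 1] = h ** 2 * Qi - 2                      # coefficient of y_i
        rhs[i - 1] = h ** 2 * Fi
        if i > 1:
            A[i - 1, i - 2] = 1 - h / 2 * Pi                   # coefficient of y_{i-1}
        else:
            rhs[i - 1] -= (1 - h / 2 * Pi) * alpha             # boundary value y_0
        if i < n - 1:
            A[i - 1, i] = 1 + h / 2 * Pi                       # coefficient of y_{i+1}
        else:
            rhs[i - 1] -= (1 + h / 2 * Pi) * beta              # boundary value y_n
    return np.linalg.solve(A, rhs)

# Example 17: y'' - 4y = 0, y(0) = 0, y(1) = 5, n = 4  =>  about [0.7256, 1.6327, 2.9479]
print(solve_bvp_fd(lambda x: 0.0, lambda x: -4.0, lambda x: 0.0, 0.0, 1.0, 0.0, 5.0, 4))
```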

Chapter 5

Some Topics In Linear Algebra

5.1 Iterative Methods For Solving Linear Systems


The iterative methods start with an initial approximation to a solution and then generate a succes-
sion of better and better approximations that (may) tend toward an exact solution. We shall study
the following two iterative methods:
(a) Jacobi iteration : The order in which the equations are examined is irrelevant, since the Jacobi
method treats them independently. For this reason, the Jacobi method is also known as the
method of simultaneous corrections, since the updates could in principle be done simultane-
ously.
(b) Gauss-Seidel iteration : this is a method of successive corrections. It is very similar to the
Jacobi technique except it replaces approximations by corresponding new ones as soon as the
latter are available.

Definition 5.1.1. An n x n matrix A = [a_ij] is strictly diagonally dominant if

    |a_kk| > Σ_(i=1, i≠k)^n |a_ki|,   k = 1, 2, . . . , n.

That is, A is strictly diagonally dominant if the absolute value of each diagonal entry is greater than
the sum of the absolute values of the remaining entries in the same row.


Example 18. Show that

        [ 2  7  4 ]
    A = [ 8  1  6 ]
        [ 3  5  9 ]

is not strictly diagonally dominant.

Answer. Since in the first row, |a_11| = 2 is not greater than |a_12| + |a_13| = 7 + 4 = 11, and in the second
row, |a_22| = 1 is not greater than |a_21| + |a_23| = 8 + 6 = 14, matrix A is not strictly diagonally dominant.

However, if we interchange the first and second rows, the resulting matrix

        [ 8  1  6 ]
    B = [ 2  7  4 ]
        [ 3  5  9 ]

is strictly diagonally dominant since

    |b_11| = 8 > |b_12| + |b_13| = 1 + 6 = 7,
    |b_22| = 7 > |b_21| + |b_23| = 2 + 4 = 6,
    |b_33| = 9 > |b_31| + |b_32| = 3 + 5 = 8.

Theorem 5.1.1 ( Convergence of the Iterative Methods). If the square matrix A is strictly
diagonally dominant, then the Gauss-Seidel and Jacobi approximations to the solution of the linear
system Ax = b both converge to the exact solution for all choices of the initial approximation.

Remark (Termination Criterion). We can stop the computation when

    |x_i^[p+1] - x_i^[p]| < ε

for all i, where ε is the pre-specified stopping criterion.

Example 19. Use the Jacobi Iteration Technique to solve

    x_1 - 10x_2 + x_3  = 13
    20x_1 + x_2 - x_3  = 17
    -x_1 + x_2 + 10x_3 = 18

Iterate until |x_i^[p+1] - x_i^[p]| < 0.0002 for all i. Carry out all computations to 5 decimal places.

Answer.

(i) To ensure the convergence of this method, we rearrange the equations to obtain a strictly
    diagonally dominant system:

    20x_1 + x_2 - x_3  = 17
    x_1 - 10x_2 + x_3  = 13
    -x_1 + x_2 + 10x_3 = 18

(ii) Make each diagonal element the subject of its equation:

    x_1 = (1/20)(17 - x_2 + x_3)
    x_2 = (1/10)(x_1 + x_3 - 13)          (*)
    x_3 = (1/10)(18 + x_1 - x_2)

(iii) Start with a reasonable initial approximation to the solution:
    e.g. x_1^[0] = 0, x_2^[0] = 0, x_3^[0] = 0.

(iv) Substitute this initial approximation into the RHS of (*), and calculate the new approximation

    x_1^[p+1] = (1/20)(17 - x_2^[p] + x_3^[p])
    x_2^[p+1] = (1/10)(x_1^[p] + x_3^[p] - 13)
    x_3^[p+1] = (1/10)(18 + x_1^[p] - x_2^[p])

    where x_i^[p] is the pth iterate of the approximation to x_i.

    That is, the new approximation is

    x_1^[1] = (1/20)(17 - x_2^[0] + x_3^[0]) = 0.850
    x_2^[1] = (1/10)(x_1^[0] + x_3^[0] - 13) = -1.3
    x_3^[1] = (1/10)(18 + x_1^[0] - x_2^[0]) = 1.8

(v) To improve the approximation, we can repeat the substitution process. The next approximation is

    x_1^[2] = (1/20)(17 - (-1.3) + 1.8) = 1.005
    x_2^[2] = (1/10)(0.85 + 1.8 - 13) = -1.035
    x_3^[2] = (1/10)(18 + 0.85 - (-1.3)) = 2.015

(vi) As |x_i^[6] - x_i^[5]| < 0.0002 for all i, we stop the computation.
    The results obtained are summarized in the following table:

    m          0    1        2        3         4         5          6
    x_1^[m]    0    0.850    1.005    1.0025    1.0001    0.99997    1.00000
    x_2^[m]    0   -1.3     -1.035   -0.9980   -0.99935  -0.99999   -1.00000
    x_3^[m]    0    1.8      2.015    2.004     2.00005   1.99995    2.00000
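A compact sketch of Jacobi iteration (illustrative, not part of the notes); applied to the rearranged system above it converges to the solution (1, -1, 2). The tolerance matches the 0.0002 criterion of the example.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=2e-4, max_iter=100):
    """Jacobi iteration for Ax = b (A should be strictly diagonally dominant)."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.zeros_like(b) if x0 is None else np.asarray(x0, float)
    D = np.diag(A)                       # diagonal entries
    R = A - np.diagflat(D)               # off-diagonal part
    for _ in range(max_iter):
        x_new = (b - R @ x) / D          # all components updated simultaneously
        if np.all(np.abs(x_new - x) < tol):
            return x_new
        x = x_new
    return x

A = [[20, 1, -1], [1, -10, 1], [-1, 1, 10]]     # rearranged, strictly diagonally dominant
b = [17, 13, 18]
print(jacobi(A, b))                              # approximately [1, -1, 2]
```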

Example 20. Use the Gauss-Seidel Method to solve

    x_1 - 10x_2 + x_3  = 13
    20x_1 + x_2 - x_3  = 17
    -x_1 + x_2 + 10x_3 = 18

Iterate until |x_i^[p+1] - x_i^[p]| < 0.0002 for all i. Carry out all computations to 5 decimal places.

Answer.

(i) Make sure the coefficient matrix is strictly diagonally dominant (rearrange it if necessary).
    Rearranging the equations leads to

    20x_1 + x_2 - x_3  = 17
    x_1 - 10x_2 + x_3  = 13
    -x_1 + x_2 + 10x_3 = 18

    which is strictly diagonally dominant.

(ii) Make each diagonal element the subject of its equation:

    x_1 = (1/20)(17 - x_2 + x_3)
    x_2 = (1/10)(x_1 + x_3 - 13)          (*)
    x_3 = (1/10)(18 + x_1 - x_2)

(iii) Start with a reasonable initial approximation to the solution:
    e.g. x_1^[0] = 0, x_2^[0] = 0, x_3^[0] = 0.

(iv) Substitute this initial approximation into the RHS of (*), and calculate the new approximation,
    using each new value as soon as it is available:

    x_1^[p+1] = (1/20)(17 - x_2^[p] + x_3^[p])
    x_2^[p+1] = (1/10)(x_1^[p+1] + x_3^[p] - 13)
    x_3^[p+1] = (1/10)(18 + x_1^[p+1] - x_2^[p+1])

    where x_i^[p] is the pth iterate of the approximation to x_i.

    That is, the new approximation is

    x_1^[1] = (1/20)(17 - x_2^[0] + x_3^[0]) = 0.850
    x_2^[1] = (1/10)(x_1^[1] + x_3^[0] - 13) = -1.215
    x_3^[1] = (1/10)(18 + x_1^[1] - x_2^[1]) = 2.0065

(v) To improve the approximation, we can repeat the substitution process. The next approximation is

    x_1^[2] = (1/20)(17 - (-1.215) + 2.0065) = 1.01108
    x_2^[2] = (1/10)(1.01108 + 2.0065 - 13) = -0.99824
    x_3^[2] = (1/10)(18 + 1.01108 - (-0.99824)) = 2.00093

    We summarize the results obtained in the following table:

    m          0    1         2          3          4
    x_1^[m]    0    0.850     1.01108    0.99996    1.00000
    x_2^[m]    0   -1.215    -0.99824   -0.99991   -1.00000
    x_3^[m]    0    2.0065    2.00093    1.99999    2.00000

Note 3:

(a) The Gauss-Seidel technique is not well suited to vector (parallel) computers, as the equations
    must be updated in sequence.
(b) The Gauss-Seidel technique requires less storage than the Jacobi technique and typically converges
    almost twice as fast.
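For comparison with the Jacobi sketch above, here is a Gauss-Seidel version (again an illustration, not from the notes) that overwrites each component as soon as its new value is available.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=2e-4, max_iter=100):
    """Gauss-Seidel iteration for Ax = b: new values are used immediately."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]   # uses updated entries x[:i]
            x[i] = (b[i] - s) / A[i, i]
        if np.all(np.abs(x - x_old) < tol):
            return x
    return x

A = [[20, 1, -1], [1, -10, 1], [-1, 1, 10]]
b = [17, 13, 18]
print(gauss_seidel(A, b))      # about [1, -1, 2], in fewer sweeps than Jacobi
```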

5.2 A Review On Eigenvalues


Definition 5.2.1. Let A be an n x n matrix. A scalar λ is called an eigenvalue of A if there exists
a nonzero vector x in R^n such that

    Ax = λx.

In this case, the nonzero vector x is called an eigenvector of A corresponding to λ.

Example 21. Let A = [ 4 1 ; 3 2 ]. Verify that v_1 = [ 1 ; 1 ] is an eigenvector of A and find the corresponding
eigenvalue.

Answer. Av_1 = [ 4 1 ; 3 2 ][ 1 ; 1 ] = [ 5 ; 5 ] = 5v_1, so v_1 is an eigenvector of A corresponding to the
eigenvalue λ = 5.

Note 4:

(a) To find an eigenvalue of A, we solve the characteristic equation


    |A - λI| = 0

for λ.

(b) To find the eigenvectors corresponding to λ, we solve the linear system

    (A - λI)x = 0

    for nonzero x.

Exercise 4. Find the eigenvalues and eigenvectors of A = [ 4 1 ; 3 2 ].

Answer. λ_1 = 1, λ_2 = 5; v_1 = [ 1 ; -3 ], v_2 = [ 1 ; 1 ]

Definition 5.2.2. A square matrix A is called diagonalizable if there is an invertible matrix P such
that P 1 AP is a diagonal matrix; the matrix P is said to diagonalize A.

   
Exercise 5. Let A = [ 2 0 ; 3 -1 ] and P = [ 0 1 ; 1 1 ]. Verify that P diagonalizes A.

Answer. P^(-1) A P = [ -1 1 ; 1 0 ][ 2 0 ; 3 -1 ][ 0 1 ; 1 1 ] = [ -1 0 ; 0 2 ], a diagonal matrix, so P diagonalizes A.

Theorem 5.2.1.

Let A be an n n matrix.
(a) A is diagonalizable if and only if it has n linearly independent eigenvectors.
(b) These n linearly independent eigenvectors form a basis of Rn .

5.3 Approximation of Eigenvalues
Definition 5.3.1. An eigenvalue of an n x n matrix A is called the dominant eigenvalue of A if its
absolute value is larger than the absolute values of the remaining n - 1 eigenvalues. An eigenvector
corresponding to the dominant eigenvalue is called a dominant eigenvector of A.

Example 22.

(a) If a 4 x 4 matrix A has eigenvalues

        λ_1 = 5, λ_2 = 4, λ_3 = 2, λ_4 = 3,

    then λ_1 = 5 is the dominant eigenvalue since

        |λ_1| > |λ_i| for all i ≠ 1.

(b) A 3 x 3 matrix B with eigenvalues

        λ_1 = 2, λ_2 = -5, λ_3 = 5

    has no dominant eigenvalue.

5.3.1 The Power Method


Theorem 5.3.1 (The Power Method (or Direct Iteration Method)). Let A be a diagonalizable
n x n matrix with eigenvalues

    |λ_1| > |λ_2| ≥ ... ≥ |λ_(n-1)| ≥ |λ_n|.

Assume that v_1, . . . , v_n are unit eigenvectors of A associated with λ_1, λ_2, . . . , λ_n respectively. Let x_0
be a nonzero vector in R^n that is an initial guess for the dominant eigenvector v_1. Then the vector
x_k = A^k x_0 is a good approximation to a dominant eigenvector v_1 of A when the exponent k is
sufficiently large.

Note 5:

(a) x_0 is an initial guess or approximation for the dominant eigenvector.

(b) If x_k = A^k x_0 is an approximation to the dominant eigenvector, then the dominant eigenvalue λ_1
    can be approximated by the Rayleigh quotient

        λ_1 ≈ (Ax_k · x_k)/(x_k · x_k).

(c) The rate of convergence depends on the ratios

        |λ_2/λ_1|, . . . , |λ_n/λ_1|.

    The smaller the ratios, the faster the rate of convergence.

 
Example 23. Let A = [ 4 -2 ; 3 -1 ]. Use the power method to approximate the dominant eigenvalue and
a corresponding eigenvector, starting with x_0 = [ 1 ; 0 ]. Perform 6 iterations.

Answer.

We compute x_(m+1) = A x_m for m = 0, 1, 2, . . . .

    x_1 = Ax_0 = [ 4 -2 ; 3 -1 ][ 1 ; 0 ] = [ 4 ; 3 ] = 4 [ 1 ; 0.75 ],
    x_2 = Ax_1 = [ 10 ; 9 ] = 10 [ 1 ; 0.9 ],
    x_3 = Ax_2 = [ 22 ; 21 ] = 22 [ 1 ; 0.9545 ],
    x_4 = Ax_3 = [ 46 ; 45 ] = 46 [ 1 ; 0.9783 ],
    x_5 = Ax_4 = [ 94 ; 93 ] = 94 [ 1 ; 0.9894 ],
    x_6 = Ax_5 = [ 190 ; 189 ] = 190 [ 1 ; 0.9947 ].

So x_6 = [ 190 ; 189 ] is an approximation to a dominant eigenvector, and the dominant eigenvalue

    λ_1 ≈ (Ax_6 · x_6)/(x_6 · x_6) = ([ 382 ; 381 ] · [ 190 ; 189 ])/([ 190 ; 189 ] · [ 190 ; 189 ]) ≈ 2.0132.

Remark.

(a) From the above calculations, it is clear that the vectors x_m are getting closer and closer to scalar
    multiples of [ 1 ; 1 ], which is a dominant eigenvector of A. It can also be checked that λ_1 = 2.

(b) The convergence is rather slow.

(c) Termination Criteria: Let λ(i) denote the approximation to the eigenvalue at the ith step.
    We can stop the computation once the relative error

        |(λ - λ(i))/λ|

    is less than the pre-specified error criterion ε. Unfortunately, the actual value λ is usually unknown.
    So instead, we will stop the computation at the ith step if the estimated relative error

        |(λ(i) - λ(i-1))/λ(i)| < ε.

    The value obtained by multiplying the estimated relative error by 100% is called the estimated
    percentage error.
5.3.2 Power Method with Scaling
Theorem 5.3.2. The power method often generates a sequence of vectors {A^k x_0} that have incon-
veniently large entries. This can be avoided by scaling the iterative vector at each step. That is, we
multiply Ax_0 = [ x_1 ; ... ; x_n ] by 1/max{|x_1|, |x_2|, . . . , |x_n|} and label the resulting vector as x_1.
We repeat the process with the vector Ax_1 to obtain the scaled-down vector x_2, and so on.

The algorithm for the Power Method with Scaling is as follows:

1. Compute y_k = A x_(k-1)
2. Set x_k = y_k / max{|y_k|}
3. Dominant eigenvalue of matrix A:  λ ≈ (Ax_k · x_k)/(x_k · x_k)
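A sketch of the scaled power iteration (illustrative, not from the notes), using the matrix of Examples 23 and 24; numpy's largest absolute entry plays the role of max{|y_k|}, and the number of iterations is an assumption.

```python
import numpy as np

def power_method_scaled(A, x0, num_iter=6):
    """Power method with scaling: x_k = A x_{k-1} / max|entry|, Rayleigh quotient for lambda."""
    A = np.asarray(A, float)
    x = np.asarray(x0, float)
    for _ in range(num_iter):
        y = A @ x
        x = y / np.max(np.abs(y))            # scale by the largest absolute entry
    Ax = A @ x
    lam = (Ax @ x) / (x @ x)                 # Rayleigh quotient estimate of the dominant eigenvalue
    return lam, x

lam, v = power_method_scaled([[4, -2], [3, -1]], [1, 0], num_iter=6)
print(lam, v)     # about 2.013 and an eigenvector close to [1, 1]
```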

Example 24. Repeat the iterations of Example 23 using the power method with scaling.

Answer.

(i) y_1 = Ax_0 = [ 4 ; 3 ], and we define x_1 = (1/4) y_1 = [ 1 ; 0.75 ].
(ii) y_2 = Ax_1 = [ 4 -2 ; 3 -1 ][ 1 ; 0.75 ] = [ 2.5 ; 2.25 ], and we define x_2 = (1/2.5) y_2 = [ 1 ; 0.9 ].

Continuing in this manner, we obtain

    x_3 ≈ [ 1 ; 0.9545 ],  x_4 ≈ [ 1 ; 0.9782 ],  x_5 ≈ [ 1 ; 0.9894 ],  x_6 ≈ [ 1 ; 0.9947 ].

Finally, the dominant eigenvalue

    λ_1 ≈ (Ax_6 · x_6)/(x_6 · x_6) = ([ 2.0106 ; 2.0053 ] · [ 1 ; 0.9947 ])/([ 1 ; 0.9947 ] · [ 1 ; 0.9947 ])
        ≈ 4.0053/1.9894 ≈ 2.0133.

Observe that the sequence of vectors {x_k} is approaching the eigenvector [ 1 ; 1 ].

5.3.3 Inverse Power Method with Scaling
Theorem 5.3.3. If, in addition, A is an invertible matrix, then

    |λ_1| > |λ_2| ≥ ... ≥ |λ_n| > 0.

Hence 1/λ_n will be the dominant eigenvalue of A^(-1). We could apply the power method to A^(-1) to find
1/λ_n and hence λ_n. This is called the inverse power method.

The algorithm for the Inverse Power Method with Scaling is as follows:

1. Compute B = A^(-1)
2. Compute y_k = B x_(k-1)
3. Set x_k = y_k / max{|y_k|}
4. Dominant eigenvalue of matrix A^(-1):  μ ≈ (Bx_k · x_k)/(x_k · x_k)
5. Smallest eigenvalue of A:  λ_n = 1/μ

 
Example 25. Let λ_1 and λ_2 be the eigenvalues of A = [ 8 -8 ; 1 2 ] such that λ_1 > λ_2 > 0.

(a) Use an appropriate power method with scaling to approximate an eigenvector corresponding to
    λ_2. Start with the initial approximation x_0 = [ 1 ; 1 ]. Round off all computations to four decimal
    places, and stop after two iterations.
(b) Use the result of part (a) to approximate λ_2.
(c) Find the estimated percentage error in the approximation of λ_2.

Answer.

(a) We use the inverse power method with scaling to find λ_2, the smallest eigenvalue of A.

        B = A^(-1) = [ 0.0833 0.3333 ; -0.0417 0.3333 ]

    Iteration 1: y_1 = Bx_0 = [ 0.4166 ; 0.2916 ],  x_1 = y_1/0.4166 = [ 1.0000 ; 0.7000 ]
    Iteration 2: y_2 = Bx_1 = [ 0.3166 ; 0.1916 ],  x_2 = y_2/0.3166 = [ 1.0000 ; 0.6052 ], which is an
    approximation to the required eigenvector.

(b) An approximation to λ_2 is the second approximation  λ_2(2) = (x_2 · x_2)/(Bx_2 · x_2) = 3.5782.

(c) The first approximation of λ_2 is  λ_2(1) = (x_1 · x_1)/(Bx_1 · x_1) = 3.3058.
    The estimated percentage error is  |(λ_2(2) - λ_2(1))/λ_2(2)| x 100% = 7.6128%.

5.3.4 Shifted Inverse Power Method with Scaling
Theorem 5.3.4. Let A be a diagonalizable n x n matrix with eigenvalues λ_1, . . . , λ_n and assume that

    λ_1 > λ_2 > λ_3 > ... > λ_n > 0.

This method can be used to find any eigenvalue and eigenvector of A. Let a be any number and let λ_k
be the eigenvalue of A closest to a. The inverse power iteration with A - aI will converge to |λ_k - a|^(-1)
and a multiple of v_k.

The algorithm for the Shifted Inverse Power Method with Scaling is as follows:

1. Compute C = (A - aI)^(-1)
2. Compute y_k = C x_(k-1)
3. Set x_k = y_k / max{|y_k|}
4. Dominant eigenvalue of matrix (A - aI)^(-1):  μ ≈ (Cx_k · x_k)/(x_k · x_k)
5. Eigenvalue closest to a:  λ = 1/μ + a

 
Example 26. Apply the shifted inverse power method with scaling (2 steps) to A = [ 7 -2 ; 3 -1 ] to find
the eigenvalue nearest to a = 6. Start with the initial guess x_0 = [ 1 ; 0 ].

Answer. A - aI = A - 6I = [ 1 -2 ; 3 -7 ].

    C = (A - aI)^(-1) = [ 7 -2 ; 3 -1 ].

Iteration 1:
    y_1 = Cx_0 = [ 7 ; 3 ],  x_1 = (1/7) y_1 = [ 1 ; 0.4286 ].
Iteration 2:
    y_2 = Cx_1 = [ 6.1428 ; 2.5714 ],  x_2 = (1/6.1428) y_2 = [ 1 ; 0.4186 ], which is an approximation to an
    eigenvector corresponding to the required eigenvalue, say λ.

Let μ be the dominant eigenvalue of C.
Then μ ≈ (Cx_2 · x_2)/(x_2 · x_2) = 7.2433/1.1752 = 6.1634, since Cx_2 = [ 6.1628 ; 2.5814 ].
Hence, λ = a + 1/μ = 6 + 1/6.1634 = 6.1622.

Remark.

(a) The Power Method converges rather slowly. Shifting can improve the rate of convergence.
(b) If we have some knowledge of what the eigenvalues of A are, then this method can be used to find
    any eigenvalue and eigenvector of A.
(c) We can estimate the eigenvalues of A by using Gerschgorin's Theorem.

Theorem 5.3.5 (Gerschgorin's Theorem). Let λ be an eigenvalue of an n x n matrix A. Then for
some integer i (1 ≤ i ≤ n), we have

    |λ - a_ii| ≤ r_i = |a_i1| + ... + |a_i,i-1| + |a_i,i+1| + ... + |a_in|.

That is, the eigenvalues of A lie in the union of the n discs with radius r_i centered at a_ii.
Furthermore, if a union of k of these n discs forms a connected region that is disjoint from all the
remaining n - k discs, then there are precisely k eigenvalues of A in this region.

Example 27. Draw the Gerschgorin discs corresponding to

        [ 7   4   4 ]
    A = [ 4  -8   1 ]
        [ 4   1  -8 ]

What can be concluded about the eigenvalues of A?

Answer. The Gerschgorin discs are

    D_1: center 7, radius 8
    D_2 = D_3: center -8, radius 5

(i) The eigenvalues of A must lie in D_1 ∪ D_2.
(ii) A is symmetric, so all the eigenvalues are real numbers.
(iii) Hence, -1 ≤ λ_1 ≤ 15 and -13 ≤ λ_2, λ_3 ≤ -3.

Chapter 6

Optimization

6.1 Direct Search Methods


Direct search methods apply primarily to strictly unimodal 1-variable functions. The idea of these
methods is to identify the interval of uncertainty that contains the optimal solution point. The
procedure locates the optimum by iteratively narrowing the interval of uncertainty to any desired level
of accuracy. We will discuss only the golden section method. This method is used to find the maximum
value of a unimodal function f (x) over a given interval [a, b].

Definition 6.1.1. A function f (x) is unimodal on [a, b] if it has exactly one maximum (or minimum)
on [a, b].

6.1.1 Golden Section Search (Maximum)


The general steps for this method are as follows:
Let f(x) be a unimodal function over a given interval [a, b] with the optimal point x*. Let I_(k-1) =
(x_L, x_R) be the current interval of uncertainty, with I_0 = [a, b]. Define

    x_1 = x_R - r(x_R - x_L),    x_2 = x_L + r(x_R - x_L),

where r = (sqrt(5) - 1)/2, the golden ratio. (Clearly, x_L < x_1 < x_2 < x_R.) The next interval of uncertainty
I_k is determined in the following way:

(i) If f(x_1) > f(x_2), then x_L < x* < x_2. Set x_R = x_2 and I_k = [x_L, x_2].
(ii) If f(x_1) < f(x_2), then x_1 < x* < x_R. Set x_L = x_1 and I_k = [x_1, x_R].
(iii) If f(x_1) = f(x_2), then x_1 < x* < x_2. Set x_L = x_1, x_R = x_2, and I_k = [x_1, x_2].

Remark. (a) The choice of x_1 and x_2 ensures that I_k ⊂ I_(k-1).
(b) Let L_k = |I_k|, the length of I_k. Then the algorithm terminates at iteration k if L_k < ε, the user-
specified level of accuracy.
(c) It can be seen that L_k = r L_(k-1) and L_k = r^k (b - a). Thus, the algorithm will terminate after k
iterations where L_k = r^k (b - a) < ε.
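The following sketch (illustrative, not part of the notes) carries out the interval-shrinking loop above; applied to f(x) = 3 + 6x - 4x^2 on [0, 1] with ε = 0.25 it reproduces the interval found in Example 28 below.

```python
import math

def golden_section_max(f, a, b, eps):
    """Golden section search for the maximum of a unimodal f on [a, b].
    Returns the final interval of uncertainty (xL, xR) with xR - xL < eps."""
    r = (math.sqrt(5) - 1) / 2
    xL, xR = a, b
    while xR - xL >= eps:
        x1 = xR - r * (xR - xL)
        x2 = xL + r * (xR - xL)
        if f(x1) > f(x2):
            xR = x2                 # maximum lies in (xL, x2)
        elif f(x1) < f(x2):
            xL = x1                 # maximum lies in (x1, xR)
        else:
            xL, xR = x1, x2         # maximum lies in (x1, x2)
    return xL, xR

print(golden_section_max(lambda x: 3 + 6 * x - 4 * x ** 2, 0.0, 1.0, 0.25))
# about (0.618034, 0.854102)
```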

Example 28. Find the maximum value of f(x) = 3 + 6x - 4x^2 on the interval [0, 1] using the golden
section search method, with the final interval of uncertainty having a length less than 0.25.

Answer. Solving

    r^k (b - a) < 0.25

for the number k of iterations that must be performed, we obtain k > 2.88.
Thus 3 iterations of the golden section search must be performed.

Iteration 1: x_L = 0, x_R = 1
    x_1 = x_R - [(sqrt(5) - 1)/2](x_R - x_L) = 0.381966,
    x_2 = x_L + [(sqrt(5) - 1)/2](x_R - x_L) = 0.618034
    f(x_1) = 4.708204 < f(x_2) = 5.180340, so take x_L = x_1 = 0.381966 with the same x_R.

Iteration 2: x_L = 0.381966, x_R = 1
    x_1 = x_R - [(sqrt(5) - 1)/2](x_R - x_L) = 0.618034,
    x_2 = x_L + [(sqrt(5) - 1)/2](x_R - x_L) = 0.763932
    f(x_1) = 5.180340 < f(x_2) = 5.249224, so take x_L = x_1 = 0.618034 with the same x_R, i.e. x_R = 1.

Iteration 3: x_L = 0.618034, x_R = 1
    x_1 = x_R - [(sqrt(5) - 1)/2](x_R - x_L) = 0.763932,
    x_2 = x_L + [(sqrt(5) - 1)/2](x_R - x_L) = 0.854102
    f(x_1) = 5.249224 > f(x_2) = 5.206651, so x_L remains the same and x_R = x_2 = 0.854102,
    i.e. x_L = 0.618034, x_R = 0.854102.

Thus, I_3 = [0.618034, 0.854102] and L_3 = 0.854102 - 0.618034 = 0.236068 < 0.25 as wanted.
So the required maximum point lies within I_3 = [0.618034, 0.854102].

6.1.2 Gradient Method
Review. Recall that the gradient vector of a function f at a point x,

    ∇f(x),

points in the direction of maximum increase (the direction of steepest ascent), and

    -∇f(x)

is the direction of maximum decrease (the direction of steepest descent).

Theorem 6.1.1 (Method of Steepest Ascent / Gradient Method). This is an algorithm for finding
the nearest local maximum of a twice continuously differentiable function f, which presupposes that
the gradient of the function can be computed. The method of steepest ascent, also called the gradient
method, starts at a point x_0 and, as many times as needed, moves from x_k to x_(k+1) by maximizing
along the line extending from x_k in the direction of ∇f(x_k), the local uphill gradient. That is, we
determine the value of t and the corresponding point

    x_(k+1) = x_k + t ∇f(x_k)

at which the function

    g(t) = f(x_(k+1)(t))

has a maximum. We take x_(k+1) as the next approximation after x_k.

Remark: This method has the severe drawback of requiring a great many iterations (hence slow
convergence) for functions which have long, narrow valley structures.
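A small sketch of one steepest-ascent step (not from the original notes). One implementation choice to flag: the one-dimensional maximization of g(t) is done here numerically with a golden-section line search over an assumed interval [0, t_max], rather than analytically as in the worked examples.

```python
import math

def grad_ascent_step(grad_f, f, x, t_max=1.0, tol=1e-6):
    """One steepest-ascent step: move from x along grad f(x), choosing the step length t
    in [0, t_max] that maximizes g(t) = f(x + t*grad) via a golden-section line search."""
    g = grad_f(*x)
    phi = lambda t: f(*[xi + t * gi for xi, gi in zip(x, g)])
    r = (math.sqrt(5) - 1) / 2
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        t1, t2 = hi - r * (hi - lo), lo + r * (hi - lo)
        if phi(t1) > phi(t2):
            hi = t2
        else:
            lo = t1
    t = (lo + hi) / 2
    return tuple(xi + t * gi for xi, gi in zip(x, g))

# Example 29 below: f(x, y) = -x^2 - y^2 starting at (1, 1); one step reaches about (0, 0).
f = lambda x, y: -x ** 2 - y ** 2
grad_f = lambda x, y: (-2 * x, -2 * y)
print(grad_ascent_step(grad_f, f, (1.0, 1.0)))
```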

Example 29. Use the method of steepest ascent to determine a maximum of f(x, y) = -x^2 - y^2, starting
from the point x_0 = (1, 1).

Answer. ∇f(x, y) = (-2x, -2y)

(i) At x_0 = (1, 1), ∇f = (-2, -2).
    The new approximation is

        x_1 = x_0 + t ∇f(x_0)

    where t is a value that maximizes the function

        g(t) = f(x_0 + t ∇f(x_0)) = f(1 - 2t, 1 - 2t) = -(1 - 2t)^2 - (1 - 2t)^2.

    Solving g'(t) = 0, we obtain 4(1 - 2t) + 4(1 - 2t) = 0, so t = 1/2.

    Hence x_1 = (1, 1) + (1/2)(-2, -2) = (0, 0).
    Now ∇f(x_1) = ∇f(0, 0) = (0, 0), and we terminate the algorithm.
    So the maximum point is x_1 = (0, 0), with maximum value f(0, 0) = 0.

Reading Assignment 6.1.2. Starting at the point (0, 0), use one iteration of the steepest-descent
algorithm to approximate the minimum value of the function

    f(x, y) = 2(x + y)^2 + (x - y)^2 + 3x + 2y.

Answer.

    f_x = 4(x + y) + 2(x - y) + 3 = 6x + 2y + 3
    f_y = 4(x + y) - 2(x - y) + 2 = 2x + 6y + 2
    ∇f(x, y) = (f_x, f_y) = (6x + 2y + 3, 2x + 6y + 2)

Start with the initial approximation x_0 = (0, 0). We obtain the next approximation

    x_1 = x_0 - t ∇f(x_0) = (0, 0) - t(3, 2) = (-3t, -2t)

since ∇f(x_0) = ∇f(0, 0) = (3, 2).
Find t that minimizes g(t) = f(x_1) = f(-3t, -2t) = 2(-5t)^2 + (-t)^2 + 3(-3t) + 2(-2t) = 51t^2 - 13t.
Solve g'(t) = 102t - 13 = 0, giving t = 13/102 ≈ 0.127.

Hence x_1 = (-0.382, -0.255) and f(x_1) = f(-0.382, -0.255) = -0.828, which approximates the minimum value of f.

Chapter 7

Numerical Methods For Ordinary


Differential Equations

7.1 First-Order Initial-Value Problems

7.1.1 Euler's Method


Given the initial-value problem

    dy/dx = f(x, y),   y(x_0) = y_0,

Euler's method with step size h consists in using the iterative formula

yn+1 = yn + hf (xn , yn )

to approximate the solution y(x) of the IVP at the mesh points

xn+1 = xn + h = x0 + (n + 1)h, n = 0, 1, 2, . . . .

Note 6 (Truncation error in Euler's Method):

(a) As the truncation is performed after the first term of the Taylor series of y(x), Euler's method is
    a first order method with TE = O(h^2). This error is a local error as it applies at each and every
    step as the solution develops.
(b) The global error, which applies to the final solution, is O(h), since the number of operations
    is proportional to 1/h.

Example 30. Consider the IVP
    y' = x + y,  y(0) = 1.

(a) Use Euler's method to obtain a five-decimal approximation to y(0.5) using the step size h = 0.1.
(b) (i) Estimate the truncation error in your approximation to y(0.1) using the next two terms in
the corresponding Taylor series.
(ii) The exact value for y(0.1) is 1.11034 (to 5 decimal places) . Calculate the error between
the actual value y(0.1) and your approximation y1 . How does this error compare with the
truncation error you obtained in (i)?
(c) The exact value for y(0.5) is 1.79744 (to 5 decimal places) . Calculate the absolute error between
the actual value y(0.5) and your approximation y5 .

Answer. Here f(x, y) = x + y, x_0 = 0 and y_0 = 1, so that the Euler scheme becomes

yn+1 = yn + 0.1(xn + yn ) = 0.1xn + 1.1yn .

(a) (i) y1 = 0.1x0 + 1.1y0 = 0.1(0) + 1.1(1) = 1.1


(ii) y2 = 0.1x1 + 1.1y1 = 0.1(0.1) + 1.1(1.1) = 1.22
(iii) Rounding values to 5 decimal places, the remaining values obtained in this manner are

y3 = 1.36200, y4 = 1.52820, y5 = 1.72102.

(iv) Therefore y(0.5) y5 = 1.72102.


(b) (i) The truncation error is 0.01033.
(ii) The actual error is 0.01034.
(c) 0.07642

Example 31. Given that the exact solution of the initial-value problem
    y' = x + y,  y(0) = 1
is y(x) = 2e^x - x - 1. The following table shows
(a) the approximate values obtained by using Euler's method with step size h = 0.1
(b) the approximate values with h = 0.05, and the actual values of the solution at the points
x = 0.1, 0.2, 0.3, 0.4, 0.5 :

    x_n    y_n (h = 0.1)    y_n (h = 0.05)    exact value    abs. error (h = 0.1)    abs. error (h = 0.05)
0.0 1.00000 1.00000 1.00000 0.00000 0.00000
0.1 1.10000 1.10500 1.11034 0.01034 0.00534
0.2 1.22000 1.23101 1.24281 0.02281 0.01180
0.3 1.36200 1.38019 1.39972 0.03772 0.01953
0.4 1.52820 1.55491 1.58365 0.05545 0.02874
0.5 1.72102 1.75779 1.79744 0.07642 0.03965

Based on the above information, give some comments.


Answer .
Comments
(a) By comparing the absolute errors, we see that Euler's method is more accurate for the smaller h.
(b) The column of data for h = 0.1 requires only 5 steps, whereas 10 steps are required to reach x = 0.5
with h = 0.05. In general, more calculations are required for smaller h. As the consequence, a
larger roundoff error is expected.
(c) The global error for Euler's method is O(h). It follows that if the step size h is halved, this error
would be approximately halved as well. This can be seen from the above table that when the step
size is halved from h = 0.1 to h = 0.05, the global error at x = 0.5 is reduced from 0.07642 to
0.03965. This is a reduction of approximately one half.

7.1.2 Heun's Method/Improved Euler's Method


In each step of this method, we compute first the auxiliary (predictor) value

    y*_(n+1) = y_n + h f(x_n, y_n)        (1)

and then the new (corrector) value

    y_(n+1) = y_n + (h/2) (f(x_n, y_n) + f(x_(n+1), y*_(n+1))).        (2)

Note 7:
(a) This is an example of a predictor-corrector method: it uses (1) to predict a value of y(x_(n+1)) and
    then uses (2) to correct this value.
(b) The local TE for Heun's method is O(h^3).
(c) The global TE for Heun's method is O(h^2).
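A sketch of the predictor-corrector step (illustrative, not from the notes); the check value refers to Example 32 below.

```python
def heun(f, x0, y0, h, n_steps):
    """Heun's (improved Euler) method: Euler predictor followed by trapezoidal corrector."""
    x, y = x0, y0
    for _ in range(n_steps):
        y_pred = y + h * f(x, y)                          # predictor (Euler step)
        y = y + h / 2 * (f(x, y) + f(x + h, y_pred))      # corrector
        x = x + h
    return y

print(heun(lambda x, y: x + y, 0.0, 1.0, 0.1, 5))   # about 1.79490, cf. Example 32
```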

Example 32. Use the Improved Euler's method with step size h = 0.1 to approximate the solution of the
IVP
    y' = x + y,  y(0) = 1
on the interval [0, 1]. The exact solution is y(x) = 2e^x - x - 1. Make a table showing the approximate
values, the actual values and the absolute errors.

Answer. Here f(x, y) = x + y, x_0 = 0 and y_0 = 1, so that

(a) y*_(n+1) = y_n + 0.1(x_n + y_n) = 0.1x_n + 1.1y_n
    y_(n+1) = y_n + 0.05[(x_n + y_n) + (x_(n+1) + y*_(n+1))]

(b) y*_1 = 0.1x_0 + 1.1y_0 = 0.1(0) + 1.1(1) = 1.1
    y_1 = y_0 + 0.05[(x_0 + y_0) + (x_1 + y*_1)] = 1 + 0.05[1 + (0.1 + 1.1)] = 1.11

(c) y*_2 = 1.11 + 0.1(0.1 + 1.11) = 1.231
    y_2 = 1.11 + 0.05[(0.1 + 1.11) + (0.2 + 1.231)] = 1.24205

The remaining calculations are summarized in the following table:


n xn yn exact absolute error
0 0.0 1.00000 1.00000 0.00000
1 0.1 1.11000 1.11034 0.00034
2 0.2 1.24205 1.24281 0.00076
3 0.3 1.39847 1.39972 0.00125
4 0.4 1.58181 1.58365 0.00184
5 0.5 1.79490 1.79744 0.00254
6 0.6 2.04086 2.04424 0.00338
7 0.7 2.32315 2.32751 0.00436
8 0.8 2.64558 2.65108 0.00550
9 0.9 3.01237 3.01921 0.00684
10 1.0 3.42817 3.43656 0.00839
Remark.

(a) Considerable improvement in accuracy over Euler's method.

(b) For the case h = 0.1, the TE in the approximation to y(0.1) is (h^3/6) y'''_0 = ((0.1)^3/6)(2) ≈ 0.00033, very
    close to the actual error 0.00034.

7.1.3 Taylor Series Method of Order p

    y_(n+1) = y_n + h y'_n + (h^2/2!) y''_n + (h^3/3!) y'''_n + ... + (h^p/p!) y_n^(p)

Example 33. Consider the IVP

    y' = 2xy,  y(1) = 1.

Use the Taylor series method of order 2 to approximate y(1.2) using the step size h = 0.1.

Answer. Here f(x, y) = 2xy, x_0 = 1, and y_0 = 1.

The method is y_(n+1) = y_n + h y'_n + (h^2/2!) y''_n = y_n + 0.1 y'_n + 0.005 y''_n, n = 0, 1, 2, . . ., with
x_(n+1) = x_n + h = x_n + 0.1.

(a) When n = 0: x_0 = 1
    (i) y' = 2xy  ⇒  y'_0 = 2x_0 y_0 = 2(1)(1) = 2
    (ii) y'' = 2y + 2xy'  ⇒  y''_0 = 2y_0 + 2x_0 y'_0 = 2(1) + 2(1)(2) = 6
    y_1 = y_0 + 0.1 y'_0 + 0.005 y''_0 = 1 + 0.1(2) + 0.005(6) = 1.23

(b) When n = 1: x_1 = x_0 + 0.1 = 1.1
    (i) y' = 2xy  ⇒  y'_1 = 2x_1 y_1 = 2(1.1)(1.23) = 2.706
    (ii) y'' = 2y + 2xy'  ⇒  y''_1 = 2y_1 + 2x_1 y'_1 = 2(1.23) + 2(1.1)(2.706) = 8.4132
    y_2 = y_1 + 0.1 y'_1 + 0.005 y''_1 = 1.23 + 0.1(2.706) + 0.005(8.4132) = 1.542666 ≈ y(x_2) = y(1.2)

7.1.4 Runge-Kutta Method of Order p


(a) These methods are all of the form

        y_(n+1) = y_n + (a_1 k_1 + a_2 k_2 + ... + a_p k_p)

    where the weights satisfy Σ a_i = 1.

(b) The Fourth-Order Runge-Kutta Method is

        y_(n+1) = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

    where

        k_1 = h f(x_n, y_n)
        k_2 = h f(x_n + h/2, y_n + k_1/2)
        k_3 = h f(x_n + h/2, y_n + k_2/2)
        k_4 = h f(x_n + h, y_n + k_3)

    This method is a 4th order method, so its TE is O(h^5).
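A sketch of the classical RK4 step (not from the notes); it reproduces y(0.2) ≈ 1.242805 from Example 34 below for y' = x + y, y(0) = 1.

```python
def rk4(f, x0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return y

print(rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 2))   # about 1.242805142
```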

Example 34. Consider y' = x + y, y(0) = 1. Use the Runge-Kutta method of order 4 to obtain an
approximation to y(0.2) using the step size h = 0.1.

Answer. Here f(x, y) = x + y, x_0 = 0, y_0 = 1, h = 0.1, x_(n+1) = x_n + 0.1.

    k_1 = 0.1(x_n + y_n)
    k_2 = 0.1(x_n + 0.05 + y_n + k_1/2)
    k_3 = 0.1(x_n + 0.05 + y_n + k_2/2)
    k_4 = 0.1(x_n + 0.1 + y_n + k_3)

(a) For y_1:
    (i) k_1 = 0.1(x_0 + y_0) = 0.1
    (ii) k_2 = 0.1(x_0 + 0.05 + y_0 + k_1/2) = 0.1(0 + 0.05 + 1 + 0.05) = 0.11
    (iii) k_3 = 0.1(x_0 + 0.05 + y_0 + k_2/2) = 0.1(0 + 0.05 + 1 + 0.055) = 0.1105
    (iv) k_4 = 0.1(x_0 + 0.1 + y_0 + k_3) = 0.1(0 + 0.1 + 1 + 0.1105) = 0.12105
    y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1.110341667

(b) For y_2:
    (i) k_1 = 0.1(x_1 + y_1) = 0.1(0.1 + 1.110341667) = 0.121034166
    (ii) k_2 = 0.1(x_1 + 0.05 + y_1 + k_1/2) = 0.132085875
    (iii) k_3 = 0.1(x_1 + 0.05 + y_1 + k_2/2) = 0.132638460
    (iv) k_4 = 0.1(x_1 + 0.1 + y_1 + k_3) = 0.144298012
    y_2 = y_1 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1.242805142 ≈ y(x_2) = y(0.2)

7.1.5 Multi-step Methods
The methods of Euler, Heun, and RK are called one-step methods because the approximation for
the mesh point xn+1 involves information from only the previous mesh point, xn . That is, only the
initial point (x0 , y0 ) is used to compute (x1 , y1 ) and in general, yn is needed to compute yn+1 .

Methods which use more than one previous mesh points to find the next approximation are called
multi-step methods.

7.1.6 Adams-Bashforth/Adams-Moulton Method

This method uses the Adams-Bashforth formula

    y*_(n+1) = y_n + (h/24)(55f_n - 59f_(n-1) + 37f_(n-2) - 9f_(n-3)),  n ≥ 3,

where f_j = f(x_j, y_j), as a predictor, and the Adams-Moulton formula

    y_(n+1) = y_n + (h/24)(9f*_(n+1) + 19f_n - 5f_(n-1) + f_(n-2)),

where f*_(n+1) = f(x_(n+1), y*_(n+1)), as a corrector.

Note 8:
(a) This is an order 4 method with TE = O(h^5).
(b) It is not self-starting. To get started, y_1, y_2, y_3 must be computed by using a method of the same
    accuracy or better, such as the Runge-Kutta order 4 method.
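A sketch of the predictor-corrector pair (not from the notes); the first three values are generated internally with RK4 steps, and the printed check refers to Example 35 below.

```python
def adams_bashforth_moulton(f, x0, y0, h, n_steps):
    """4th-order Adams-Bashforth predictor / Adams-Moulton corrector.
    The first three steps are generated with classical RK4."""
    def rk4_step(x, y):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

    xs = [x0 + i * h for i in range(n_steps + 1)]
    ys = [y0]
    for i in range(min(3, n_steps)):                    # starter values y1, y2, y3
        ys.append(rk4_step(xs[i], ys[i]))
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for n in range(3, n_steps):
        y_pred = ys[n] + h / 24 * (55 * fs[n] - 59 * fs[n - 1] + 37 * fs[n - 2] - 9 * fs[n - 3])
        f_pred = f(xs[n + 1], y_pred)                   # f evaluated at the predicted value
        y_corr = ys[n] + h / 24 * (9 * f_pred + 19 * fs[n] - 5 * fs[n - 1] + fs[n - 2])
        ys.append(y_corr)
        fs.append(f(xs[n + 1], y_corr))
    return ys

print(adams_bashforth_moulton(lambda x, y: x + y, 0.0, 1.0, 0.2, 4)[-1])   # about 2.651056
```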

Example 35. Use the Adams-Bashforth/Adams-Moulton Method with h = 0.2 to obtain an approx-
imation to y(0.8) for the solution of
    y' = x + y,  y(0) = 1.

Answer. With h = 0.2, y(0.8) ≈ y_4. We use the RK4 method as the starter method, with x_0 = 0, y_0 = 1,
to obtain

(a) y_1 = 1.242800000
(b) y_2 = 1.583635920
(c) y_3 = 2.044212913
(d) f_n = f(x_n, y_n) = x_n + y_n
    f_0 = x_0 + y_0 = 0 + 1 = 1
    f_1 = x_1 + y_1 = 0.2 + 1.242800000 = 1.442800000
    f_2 = 0.4 + 1.583635920 = 1.983635920
    f_3 = 0.6 + 2.044212913 = 2.644212913
(e) y*_4 = y_3 + (0.2/24)(55f_3 - 59f_2 + 37f_1 - 9f_0)
        = 2.044212913 + (0.2/24)(55(2.644212913) - 59(1.983635920) + 37(1.442800000) - 9(1)) = 2.650719504
(f) f*_4 = f(x_4, y*_4) = x_4 + y*_4 = 0.8 + 2.650719504 = 3.450719504
(g) y_4 = y_3 + (0.2/24)(9f*_4 + 19f_3 - 5f_2 + f_1) = 2.651055757 ≈ y(0.8)

7.1.7 Summary: Orders Of Errors For Different Methods
Method order Local TE Global TE
Euler 1 O(h2 ) O(h)
Heun 2 O(h3 ) O(h2 )
Second-Order Taylor Series 2 O(h3 ) O(h2 )
Third-Order Taylor Series 3 O(h4 ) O(h3 )
Fourth-Order Runge-Kutta 4 O(h5 ) O(h4 )
Adams-Bashforth/Adams-Moulton 4 O(h5 ) O(h4 )

7.2 Higher-Order Initial Value Problems
The methods used to solve first-order IVPs can be applied to higher-order IVPs because a higher-order
IVP can be replaced by a system of first-order IVPs. For example, the second-order initial value problem

y'' = f(x, y, y'),   y(x_0) = y_0,   y'(x_0) = y'_0

can be decomposed into a system of two first-order initial value problems by using the substitution
y' = u :
    y' = u ,            y(x_0) = y_0
    u' = f(x, y, u) ,   u(x_0) = y'_0

Each equation can be solved by the numerical techniques presented earlier. For example, the Euler
method for this system would be

yn+1 = yn + hun
un+1 = un + hf (xn , yn , un )

Remark. The Euler method for a general system of two first order differential equations,

    u' = f(x, y, u) ,   u(x_0) = u_0
    y' = g(x, y, u) ,   y(x_0) = y_0

is given as follows
un+1 = un + hf (xn , yn , un )
yn+1 = yn + hg(xn , yn , un )

Reading Assignment 7.2.1. Use the Euler method to approximate y(0.2) and y'(0.2) in two steps,
where y(x) is the solution of the IVP

y'' + xy' + y = 0,   y(0) = 1,   y'(0) = 2.

Answer. Let y' = u. Then the equation is equivalent to the system

    y' = u
    u' = −xu − y

Using the step size h = 0.1, the Euler method is given by

    y_{n+1} = y_n + 0.1u_n
    u_{n+1} = u_n + 0.1(−x_n u_n − y_n)

With x_0 = 0, y_0 = 1, u_0 = y'_0 = 2, we get


(a) y_1 = y_0 + 0.1u_0 = 1 + 0.1(2) = 1.2
    u_1 = u_0 + 0.1(−x_0 u_0 − y_0) = 2 + 0.1[−(0)(2) − 1] = 1.9
(b) y_2 = y_1 + 0.1u_1 = 1.2 + 0.1(1.9) = 1.39
    u_2 = u_1 + 0.1(−x_1 u_1 − y_1) = 1.9 + 0.1[−(0.1)(1.9) − 1.2] = 1.761
That is, y(0.2) ≈ y_2 = 1.39 and y'(0.2) ≈ u_2 = 1.761.
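The two update formulas are easy to code. The following Python sketch (illustrative names, not from the notes) repeats the computation above for y'' + xy' + y = 0, y(0) = 1, y'(0) = 2 with h = 0.1.

# Euler's method for the system y' = u, u' = -x*u - y (from y'' + x y' + y = 0).
def euler_system(x0, y0, u0, h, steps):
    x, y, u = x0, y0, u0
    for _ in range(steps):
        y, u = y + h * u, u + h * (-x * u - y)   # both updates use level-n values
        x = x + h
    return y, u

print(euler_system(0.0, 1.0, 2.0, 0.1, 2))       # approx (1.39, 1.761)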

7.2.1 The Linear Shooting Method
Consider the linear second order BVP

y'' = p(x)y' + q(x)y + r(x),   a ≤ x ≤ b

with boundary conditions y(a) = α, y(b) = β. The shooting method replaces the BVP by two IVPs
in the following way:
(a) Let y1 be the solution of the IVP

y'' = p(x)y' + q(x)y + r(x),   y(a) = α,   y'(a) = 0.

Replace this second-order IVP by a system of two first-order IVPs. Then solve each of the two
first-order IVPs using, say, the fourth-order Runge-Kutta method to obtain y1 .
(b) Let y2 be the solution of the IVP

y'' = p(x)y' + q(x)y,   y(a) = 0,   y'(a) = 1.

Find y2 as in Step (a).


(c) Find c_1 and c_2 so that y = c_1 y_1 + c_2 y_2 is the solution of the BVP:
(i) p(x)y' + q(x)y + r(x) = p(c_1 y'_1 + c_2 y'_2) + q(c_1 y_1 + c_2 y_2) + r = c_1(p y'_1 + q y_1) + c_2(p y'_2 + q y_2) + r
(ii) y'' = c_1 y''_1 + c_2 y''_2 = c_1(p y'_1 + q y_1 + r) + c_2(p y'_2 + q y_2) = c_1(p y'_1 + q y_1) + c_2(p y'_2 + q y_2) + c_1 r
(iii) (i) = (ii) ⇒ r = c_1 r ⇒ c_1 = 1
(iv) y(a) = c_1 y_1(a) + c_2 y_2(a) = 1·y_1(a) + c_2·0 = α
(v) Choose c_2 so that y(b) = β :
    y(b) = c_1 y_1(b) + c_2 y_2(b) = y_1(b) + c_2 y_2(b) = β  ⇒  c_2 = (β − y_1(b))/y_2(b), if y_2(b) ≠ 0
(d) Therefore
    y(x) = y_1(x) + [(β − y_1(b))/y_2(b)] y_2(x)
is the solution to the BVP, provided y_2(b) ≠ 0 (see the sketch below).
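Steps (a)-(d) translate directly into code. The Python sketch below is a minimal illustration, assuming p, q, r are given functions and using RK4 for the two auxiliary systems; the helper names are ours, and the demonstration values correspond to the BVP of the next example, y'' = 4(y − x), y(0) = 0, y(1) = 2.

import numpy as np

# Linear shooting sketch for y'' = p(x) y' + q(x) y + r(x), y(a) = alpha, y(b) = beta.
def rk4_system(F, t0, Y0, h, steps):
    t, Y = t0, np.array(Y0, dtype=float)
    for _ in range(steps):
        k1 = h * F(t, Y)
        k2 = h * F(t + h / 2, Y + k1 / 2)
        k3 = h * F(t + h / 2, Y + k2 / 2)
        k4 = h * F(t + h, Y + k3)
        Y = Y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t = t + h
    return Y

def shoot(p, q, r, a, b, alpha, beta, n):
    h = (b - a) / n
    F1 = lambda x, Y: np.array([Y[1], p(x) * Y[1] + q(x) * Y[0] + r(x)])   # for y1
    F2 = lambda x, Y: np.array([Y[1], p(x) * Y[1] + q(x) * Y[0]])          # for y2
    y1b = rk4_system(F1, a, [alpha, 0.0], h, n)[0]    # y1(b)
    y2b = rk4_system(F2, a, [0.0, 1.0], h, n)[0]      # y2(b)
    return (beta - y1b) / y2b                         # c2, so y(x) = y1(x) + c2*y2(x)

# y'' = 4(y - x): p = 0, q = 4, r(x) = -4x;  prints c2 = (2 - y1(1)) / y2(1)
print(shoot(lambda x: 0.0, lambda x: 4.0, lambda x: -4 * x, 0.0, 1.0, 0.0, 2.0, 3))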

Example 36. Use the shooting method (together with the fourth-order Runge-Kutta method with
h = 1/3) to solve the BVP
    y'' = 4(y − x),   0 ≤ x ≤ 1,   y(0) = 0,   y(1) = 2.

Answer.
(a) y'' = 4(y − x), y(0) = 0, y'(0) = 0  ⇔
        y' = u ,          y(0) = 0   [1]
        u' = 4(y − x) ,   u(0) = 0   [2]
    (i) Use RK4 to solve [1] and [2] : we obtain
        y_1(1/3) = −0.02469136, y_1(2/3) = −0.21439821 and y_1(1) = −0.80973178
(b) y'' = 4y, y(0) = 0, y'(0) = 1  ⇔
        y' = u ,    y(0) = 0   [3]
        u' = 4y ,   u(0) = 1   [4]
    (i) Use RK4 to solve [3] and [4] : we obtain
        y_2(1/3) = 0.35802469, y_2(2/3) = 0.88106488 and y_2(1) = 1.80973178
(c) y(x) = y_1(x) + [(2 − y_1(1))/y_2(1)] y_2(x) = y_1(x) + [(2 + 0.80973178)/1.80973178] y_2(x)
    y(1/3) = 0.53116634, y(2/3) = 1.15351499, y(1) = 2

Note 9:

An advantage of the shooting method is that the existing programs for initial value problems
may be used.

However, the shooting method is sensitive to round-off errors and it becomes rather difficult to
use when there are more than two boundary conditions. For these reasons, we may want to use
alternative methods. This brings us to the next topic.

7.2.2 Finite Difference Method
Consider a second order BVP

y'' + P(x)y' + Q(x)y = f(x),   y(a) = α,   y(b) = β.

Suppose a = x_0 < x_1 < ⋯ < x_{n−1} < x_n = b with x_i − x_{i−1} = h for all i = 1, 2, . . . , n. Let
y_i = y(x_i), P_i = P(x_i), Q_i = Q(x_i), and f_i = f(x_i). Then by replacing y' and y'' with their central
difference approximations in the BVP, we get
    (y_{i+1} − 2y_i + y_{i−1})/h² + P_i (y_{i+1} − y_{i−1})/(2h) + Q_i y_i = f_i ,   i = 1, 2, . . . , n − 1,
or, after simplifying,
    (1 + (h/2)P_i) y_{i+1} + (h²Q_i − 2) y_i + (1 − (h/2)P_i) y_{i−1} = h² f_i .
The last equation, known as a finite difference equation, is an approximation to the DE. It enables
us to approximate the solution at x_1, . . . , x_{n−1}.

Example 37. Solving BVPs Using Finite Difference Method


Use the Finite Difference Method with h = 1 to approximate the solution of the BVP
    y'' − (1 − x/5) y = x,   y(1) = 2,   y(3) = −1.
Answer. Here P_i = 0, Q_i = −1 + x_i/5, f_i = x_i. Hence, the difference equation is
    y_{i+1} + (−3 + x_i/5) y_i + y_{i−1} = x_i ,   i = 1
That is,
    y_2 + (−3 + x_1/5) y_1 + y_0 = x_1
With x_1 = 2 and the boundary conditions y_0 = 2 and y_2 = −1, solving the above equation gives
y_1 = −0.3846

Note: We can improve the accuracy by using a smaller h, but at a price: we must then solve a larger
system of equations. A computational sketch for a general n is given below.
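For a general n the difference equations form a tridiagonal linear system in the unknowns y_1, . . . , y_{n−1}. The Python sketch below is illustrative (it uses numpy's dense solver rather than a dedicated tridiagonal routine) and reproduces y_1 ≈ −0.3846 for Example 37.

import numpy as np

# Finite-difference sketch for y'' + P(x) y' + Q(x) y = f(x), y(a) = alpha, y(b) = beta.
def fd_bvp(P, Q, f, a, b, alpha, beta, n):
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):                            # interior nodes x_1 .. x_{n-1}
        A[i - 1, i - 1] = h ** 2 * Q(x[i]) - 2
        if i > 1:
            A[i - 1, i - 2] = 1 - h / 2 * P(x[i])    # coefficient of y_{i-1}
        if i < n - 1:
            A[i - 1, i] = 1 + h / 2 * P(x[i])        # coefficient of y_{i+1}
        rhs[i - 1] = h ** 2 * f(x[i])
    rhs[0] -= (1 - h / 2 * P(x[1])) * alpha          # known boundary values go to the RHS
    rhs[-1] -= (1 + h / 2 * P(x[n - 1])) * beta
    return x, np.linalg.solve(A, rhs)

# Example 37: y'' - (1 - x/5) y = x, y(1) = 2, y(3) = -1, h = 1 (one interior node)
x, y = fd_bvp(lambda x: 0.0, lambda x: -1 + x / 5, lambda x: x, 1.0, 3.0, 2.0, -1.0, 2)
print(y)                                             # approx [-0.3846]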

Chapter 8

Numerical Methods For Partial Differential Equations

8.1 Second Order Linear Partial Differential Equations


Definition 8.1.1 (Three Basic Types of Second Order Linear Equations). The linear PDE

A u_xx + B u_xy + C u_yy + D u_x + E u_y + F u = G

(where A, B, C, D, E, F, G are given functions of x and y) is called


(a) parabolic if B² − 4AC = 0. Parabolic equations often describe heat flow and diffusion phenomena,
such as heat flow through the earth's surface.
(b) hyperbolic if B² − 4AC > 0. Hyperbolic equations often describe wave motion and vibrating
phenomena, such as violin strings and drum heads.
(c) elliptic if B² − 4AC < 0. Elliptic equations are often used to describe steady-state phenomena
and thus do not depend on time. Elliptic equations are important in the study of electricity and
magnetism.

Example 38. Some classical examples of PDEs.


(a) The 1-D wave equation u_tt = c²u_xx is a hyperbolic equation since
    B² − 4AC = 0² − 4(c²)(−1) = 4c² > 0.
(b) The 1-D heat equation u_t = c²u_xx is a parabolic equation since
    B² − 4AC = 0² − 4(c²)(0) = 0.
(c) The 2-D Laplace equation u_xx + u_yy = 0 is an elliptic equation since
    B² − 4AC = 0² − 4(1)(1) = −4 < 0.

8.2 Numerical Approximation To Derivatives: 1-Variable Functions
We replace derivatives by their corresponding difference quotients based on the Taylor series :
(i) u(x + h) = u(x) + hu'(x) + (1/2!)h²u''(x) + (1/3!)h³u'''(x) + ⋯
(ii) u(x − h) = u(x) − hu'(x) + (1/2!)h²u''(x) − (1/3!)h³u'''(x) + ⋯
(iii) From (i): u'(x) = [u(x + h) − u(x)]/h − (1/2!)hu''(x) − ⋯
      i.e. u'(x) = [u(x + h) − u(x)]/h + O(h)   (the forward difference formula for u')
(iv) From (ii): u'(x) = [u(x) − u(x − h)]/h + O(h)   (the backward difference formula for u')
(v) (i) − (ii): u'(x) = [u(x + h) − u(x − h)]/(2h) + O(h²)   (the central difference formula for u')
(vi) (i) + (ii): u''(x) = [u(x + h) − 2u(x) + u(x − h)]/h² + O(h²)
     (the central difference formula for u'')
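A quick numerical experiment (illustrative only) confirms these orders: halving h roughly halves the forward-difference error but quarters the central-difference error.

import math

# Orders of the forward and central difference quotients for u(x) = e^x at x = 1.
u, exact = math.exp, math.exp(1)                    # u'(1) = e
for h in [0.1, 0.05, 0.025]:
    fwd = (u(1 + h) - u(1)) / h                     # O(h) accurate
    cen = (u(1 + h) - u(1 - h)) / (2 * h)           # O(h^2) accurate
    print(h, abs(fwd - exact), abs(cen - exact))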

8.3 Numerical Approximation To Derivatives: 2-Variable Functions
Consider a function u(x, y) over a 2-dimensional grid with Δx = h and Δy = k. At a typical point
P(x, y), where x = ih and y = jk, write

    u_P = u(ih, jk) = u_{i,j} .

Then
    ∂u/∂x |_P = (u_{i+1,j} − u_{i,j})/h + O(h)        (the FD for u_x at P)
    ∂u/∂y |_P = (u_{i,j+1} − u_{i,j})/k + O(k)        (the FD for u_y at P)
    ∂u/∂x |_P = (u_{i+1,j} − u_{i−1,j})/(2h) + O(h²)  (the CD for u_x at P)
    ∂²u/∂x² |_P = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² + O(h²)   (the CD for u_xx at P)

8.4 Methods for Parabolic Equations

8.4.1 FTCS Explicit Scheme


Example 39. Using F.D for ut and C.D for uxx , find the finite difference approximation for the 1-D
heat equation
    ∂u/∂t = ∂²u/∂x² ,   0 < x < a,   t > 0.
Answer .

Let ui,j = u(ih, jk) = u(xi , tj ).


Then the finite difference approximation is
    (u_{i,j+1} − u_{i,j})/k = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² ,   i = 1, . . . , M − 1,   j = 0, . . . , N.
Solving for u_{i,j+1}, we obtain

    u_{i,j+1} = (1 − 2r)u_{i,j} + r(u_{i+1,j} + u_{i−1,j})

where r = k/h².

Note 10: We use the above formula to estimate the values of u at time level j + 1 using the values at
level j. This is known as the FTCS explicit finite difference method.

Example 40. Consider the one-dimensional heat equation

    ∂u/∂t = ∂²u/∂x² ,   0 ≤ x ≤ 1,   t ≥ 0
subject to the initial condition
u(x, 0) = sin πx,   0 ≤ x ≤ 1
and boundary conditions
u(0, t) = u(1, t) = 0,   t ≥ 0.
Using an x step of h = 0.1 and a t step of k = 0.0005, use the FTCS scheme to estimate u(0.1, 0.001).

Answer .
    r = k/h² = 0.0005/(0.1)² = 0.05
(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{10,j} = 0
(b) u(x, 0) = sin πx ⇒ u_{i,0} = sin 0.1iπ
(c) u_{i,j+1} = (1 − 2r)u_{i,j} + r(u_{i+1,j} + u_{i−1,j})
    ⇒ u_{i,j+1} = 0.9u_{i,j} + 0.05(u_{i+1,j} + u_{i−1,j})

(d) j = 0 : u_{i,1} = 0.9u_{i,0} + 0.05(u_{i+1,0} + u_{i−1,0})
    (i) i = 1 : u_{1,1} = 0.9u_{1,0} + 0.05(u_{2,0} + u_{0,0}) = 0.3075
        since u_{i,0} = sin 0.1iπ ⇒ u_{0,0} = 0, u_{1,0} = sin 0.1π = 0.3090, u_{2,0} = sin 0.2π = 0.5878
    (ii) i = 2 : u_{2,1} = 0.9u_{2,0} + 0.05(u_{3,0} + u_{1,0}) = 0.5849
        since u_{3,0} = sin 0.3π = 0.8090
(e) j = 1 : u_{i,2} = 0.9u_{i,1} + 0.05(u_{i+1,1} + u_{i−1,1})
    (i) i = 1 : u_{1,2} = 0.9u_{1,1} + 0.05(u_{2,1} + u_{0,1}) = 0.3060 ≈ u(0.1, 0.001).
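The whole FTCS march can be written in a few lines. The Python sketch below is illustrative and reproduces the estimate u(0.1, 0.001) ≈ 0.3060 obtained above.

import numpy as np

# FTCS explicit scheme for u_t = u_xx on 0 <= x <= 1 with u(0,t) = u(1,t) = 0.
h, k, steps = 0.1, 0.0005, 2                 # two time steps reach t = 0.001
r = k / h ** 2                               # r = 0.05 (stable, since r <= 1/2)
x = np.linspace(0.0, 1.0, 11)
u = np.sin(np.pi * x)                        # initial condition u(x, 0) = sin(pi x)
for _ in range(steps):
    u[1:-1] = (1 - 2 * r) * u[1:-1] + r * (u[2:] + u[:-2])
    u[0] = u[-1] = 0.0                       # boundary conditions
print(u[1])                                  # approx 0.3060 ≈ u(0.1, 0.001)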

Definition 8.4.1 (Stability of Numerical Methods). A method is unstable if round-off errors
or any other errors grow too rapidly as the computations proceed.

Note 11: The FTCS scheme is stable for a given space step h provided the time step k is restricted
by the condition
    0 < r = k/h² ≤ 1/2.
The restriction means that we cannot move too fast in the t-direction.
Note 12: Suppose there is an initial condition : u(x, 0) = f (x). Then the scheme starts with (i, j) =
(0, 0), the left-hand corner of the 2D grid. Along the horizontal line j = 0, where t = 0, we have

ui,0 = u(xi , 0) = f (xi ).

Note 13: Suppose we include the boundary conditions : u(0, t) = u(a, t) = 0. Then we have

u0,j = uM,j = 0, j > 0.

8.4.2 Crank-Nicolson Method


Example 41 (Crank-Nicolson Implicit Scheme for the Heat Equation). This CNI scheme
replaces uxx by an average of two CD quotients, one at the time level j and another at j + 1:
    (u_{i,j+1} − u_{i,j})/k + O(k) = ½[ (u_{i+1,j+1} − 2u_{i,j+1} + u_{i−1,j+1})/h² + O(h²) + (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² + O(h²) ]
After simplifying, we obtain

    −r u_{i−1,j+1} + 2(1 + r)u_{i,j+1} − r u_{i+1,j+1} = r u_{i−1,j} + 2(1 − r)u_{i,j} + r u_{i+1,j} + k·O(k, h²)

where r = k/h².
Note 14: For each time level j, we obtain an (M − 1) × (M − 1) tridiagonal system which can be
solved using iterative methods.
Note 15: The CNI scheme has no stability restriction. It is more accurate than the explicit scheme.

Example 42. Consider the one-dimensional heat equation

    ∂u/∂t = ∂²u/∂x² ,   0 ≤ x ≤ 1,   t ≥ 0
subject to the initial condition
u(x, 0) = sin πx,   0 ≤ x ≤ 1
and boundary conditions
u(0, t) = u(1, t) = 0,   t ≥ 0.
Using an x step of h = 0.2 and a t step of k = 0.001, use the Crank-Nicolson method to estimate
u(0.4, 0.001).

Answer .

Since the initial temperature distribution is symmetric with respect to x = 0.5, we only need to
consider the grid points over 0 ≤ x ≤ 0.5.

u(0.4, 0.001) ≈ u_{2,1}


    r = k/h² = 0.001/(0.2)² = 0.025
(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{5,j} = 0
(b) u(x, 0) = sin πx ⇒ u_{i,0} = sin 0.2iπ
(c) −r u_{i−1,j+1} + 2(1 + r)u_{i,j+1} − r u_{i+1,j+1} = r u_{i−1,j} + 2(1 − r)u_{i,j} + r u_{i+1,j}
    ⇒ −0.025u_{i−1,j+1} + 2.05u_{i,j+1} − 0.025u_{i+1,j+1} = 0.025u_{i−1,j} + 1.95u_{i,j} + 0.025u_{i+1,j}
(d) j = 0 : −0.025u_{i−1,1} + 2.05u_{i,1} − 0.025u_{i+1,1} = 0.025u_{i−1,0} + 1.95u_{i,0} + 0.025u_{i+1,0}
    (i) i = 1 : −0.025u_{0,1} + 2.05u_{1,1} − 0.025u_{2,1} = 0.025u_{0,0} + 1.95u_{1,0} + 0.025u_{2,0}
        since u_{0,j} = 0 and u_{i,0} = sin 0.2iπ ⇒
        u_{0,0} = 0, u_{1,0} = sin 0.2π = 0.58778525, u_{2,0} = sin 0.4π = 0.95105652
        2.05u_{1,1} − 0.025u_{2,1} = 1.95(0.58778525) + 0.025(0.95105652)
        2.05u_{1,1} − 0.025u_{2,1} = 1.1699576505   (A)
    (ii) i = 2 : −0.025u_{1,1} + 2.05u_{2,1} − 0.025u_{3,1} = 0.025u_{1,0} + 1.95u_{2,0} + 0.025u_{3,0}
        since u_{3,0} = sin 0.6π = 0.95105652
        −0.025u_{1,1} + 2.05u_{2,1} − 0.025u_{3,1} = 0.025(0.58778525) + 1.95(0.95105652) + 0.025(0.95105652)
        −0.025u_{1,1} + 2.05u_{2,1} − 0.025u_{3,1} = 1.89303126
        −0.025u_{1,1} + 2.025u_{2,1} = 1.89303126   (B)
        since u_{3,1} = u_{2,1} by symmetry
    (iii) Solving (A) and (B) by Gauss elimination, Cramer's Rule, or the Gauss-Seidel method, we
        obtain u_{1,1} = 0.58219907, u_{2,1} = 0.94201789
Therefore u(0.4, 0.001) ≈ u_{2,1} = 0.94201789
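For larger grids the same computation is done by assembling and solving the tridiagonal system at every time level. The sketch below is illustrative (it uses a dense solver instead of a specialised tridiagonal routine) and repeats this example without exploiting symmetry.

import numpy as np

# Crank-Nicolson sketch for u_t = u_xx, u(0,t) = u(1,t) = 0, u(x,0) = sin(pi x).
h, k = 0.2, 0.001
r = k / h ** 2                                   # r = 0.025
M = 5                                            # 5 intervals, interior nodes i = 1..4
x = np.linspace(0.0, 1.0, M + 1)
u = np.sin(np.pi * x)

# Left-hand side: -r u_{i-1,j+1} + 2(1+r) u_{i,j+1} - r u_{i+1,j+1}
A = (np.diag([2 * (1 + r)] * (M - 1))
     + np.diag([-r] * (M - 2), 1)
     + np.diag([-r] * (M - 2), -1))

b = r * u[:-2] + 2 * (1 - r) * u[1:-1] + r * u[2:]   # right-hand side from level j = 0
u[1:-1] = np.linalg.solve(A, b)                      # values at level j = 1; ends stay 0
print(u[2])                                          # approx 0.9420 ≈ u(0.4, 0.001)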

Example 43.
The exact solution of the 1-D heat equation
    u_t = u_xx
    u(0, t) = u(1, t) = 0 ,   t > 0
    u(x, 0) = f(x) = sin πx ,   0 < x < 1
is u(x, t) = e^{−π²t} sin(πx).

(a) By using the FTCS explicit scheme with h = 0.2 and k = 0.008 , we obtain the following table

t x=0 x=0.2 x=0.4


0.000 Exact 0.000 0.5878 0.9511
FTCS 0.000 0.5878 0.9511
0.008 Exact 0.000 0.5432 0.8789
FTCS 0.000 0.5429 0.8784
0.016 Exact 0.000 0.5019 0.8121
FTCS 0.000 0.5014 0.8113
0.024 Exact 0.000 0.4638 0.7505
FTCS 0.000 0.4631 0.7493
0.032 Exact 0.000 0.4286 0.6935
FTCS 0.000 0.4277 0.6921
0.040 Exact 0.000 0.3961 0.6408
FTCS 0.000 0.3951 0.6392

(b) By using the FTCS and the CNI scheme with h = 0.2 and k = 0.04 , we obtain the following table

t x=0 x=0.2 x=0.4


0.00 Exact 0.000 0.5878 0.9511
CNI 0.000 0.5878 0.9511
FTCS 0.000 0.5878 0.9511
0.04 Exact 0.000 0.3961 0.6408
CNI 0.000 0.3993 0.6460
FTCS 0.000 0.3633 0.5878
0.08 Exact 0.000 0.2669 0.4318
CNI 0.000 0.2712 0.4388
FTCS 0.000 0.2245 0.3633
0.12 Exact 0.000 0.1798 0.2910
CNI 0.000 0.1842 0.2981
FTCS 0.000 0.1388 0.2245
0.16 Exact 0.000 0.1212 0.1961
CNI 0.000 0.1251 0.2025
FTCS 0.000 0.0858 0.1388
0.20 Exact 0.000 0.0817 0.1321
CNI 0.000 0.0850 0.1376
FTCS 0.000 0.0530 0.0858

8.5 A Numerical Method for Elliptic Equations
Example 44 (Difference Equation for the Laplace Equation). Using C.D for both uxx and uyy ,
the finite difference approximation for the 2-D Laplace equation uxx + uyy = 0 is

    (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h² + (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/k² = 0
For a uniform space grid with h = k, this becomes
    u_{i,j} = ¼(u_{i+1,j} + u_{i−1,j} + u_{i,j+1} + u_{i,j−1}) = ¼(u_E + u_W + u_N + u_S)
i.e. u_{i,j} is the average of its nearest neighbors.
Equivalently, u_E + u_N + u_W + u_S − 4u_{i,j} = 0.

Example 45. The four sides of a square plate of side 12 cm made of uniform material are kept at
constant temperatures such that

    LHS = RHS = Bottom = 100°C and Top = 0°C.

Using a mesh size of 4 cm, calculate the temperature u(x, y) at the internal mesh points A(4, 4), B(8, 4),
C(8, 8) and D(4, 8) by using the Gauss-Seidel method, given that u(x, y) satisfies the Laplace equation
u_xx + u_yy = 0. Start with the initial guess u_A = u_B = 80, u_C = u_D = 50, and continue iterating until
all |u_i^{[p+1]} − u_i^{[p]}| < 10^{−3}, where u_i^{[p]} is the pth iterate of u_i, i = A, B, C, D.

Answer .

Applying the equation

    u_{i,j} = ¼[u_E + u_N + u_W + u_S]

to the 4 internal mesh points, we obtain
    u_A = ¼(u_B + u_D + 200)
    u_B = ¼(u_A + u_C + 200)
    u_C = ¼(u_B + u_D + 100)
    u_D = ¼(u_A + u_C + 100)
This system is strictly diagonally dominant, so we can proceed with the Gauss-Seidel iteration starting
with the initial guess u_A = u_B = 80, u_C = u_D = 50. Continuing to iterate until all |u_i^{[p+1]} − u_i^{[p]}| < 10^{−3},
we obtain u_A = u_B = 87.5, u_C = u_D = 62.5.
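A small Gauss-Seidel loop (illustrative only) reproduces these values.

# Gauss-Seidel iteration for the four interior mesh points of Example 45.
uA = uB = 80.0
uC = uD = 50.0
while True:
    old = (uA, uB, uC, uD)
    uA = (uB + uD + 200) / 4
    uB = (uA + uC + 200) / 4        # uses the newly updated uA
    uC = (uB + uD + 100) / 4
    uD = (uA + uC + 100) / 4
    if max(abs(new - prev) for new, prev in zip((uA, uB, uC, uD), old)) < 1e-3:
        break
print(uA, uB, uC, uD)               # converges to 87.5, 87.5, 62.5, 62.5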

8.6 CTCS scheme for Hyperbolic Equations
Example 46 (CTCS scheme for Hyperbolic Equations). Consider the BVP involving the 1-D
wave equation

    ∂²u/∂t² = ∂²u/∂x²
    u(0, t) = u(1, t) = 0 ,   t > 0
    u(x, 0) = f(x) ,   0 ≤ x ≤ 1
    u_t(x, 0) = g(x) ,   0 ≤ x ≤ 1

Use a 2-D grid with Δx = h and Δt = k. By replacing both u_xx and u_tt with the central difference
quotients, we obtain a CTCS scheme
    (u_{i,j+1} − 2u_{i,j} + u_{i,j−1})/k² = (u_{i+1,j} − 2u_{i,j} + u_{i−1,j})/h²
i.e. u_{i,j+1} = λu_{i+1,j} + 2(1 − λ)u_{i,j} + λu_{i−1,j} − u_{i,j−1}
where λ = (k/h)².

Note 16:

(a) This scheme is stable for 0 < λ ≤ 1, i.e. k ≤ h.

(b) u(x, 0) = f(x) ⇒ u(ih, 0) = f(ih), or u_{i,0} = f_i.
(c) Approximating u_t with a CD quotient, the IC u_t(x, 0) = g(x) becomes
    (u_{i,1} − u_{i,−1})/(2k) = g_i ⇒ u_{i,−1} = u_{i,1} − 2kg_i
    where (i, −1) is a fictitious grid point.
(d) To calculate u_{i,1} :
    u_{i,1} = λu_{i+1,0} + 2(1 − λ)u_{i,0} + λu_{i−1,0} − u_{i,−1}
            = λu_{i+1,0} + 2(1 − λ)u_{i,0} + λu_{i−1,0} − u_{i,1} + 2kg_i
    ⇒ u_{i,1} = (λ/2)(u_{i+1,0} + u_{i−1,0}) + (1 − λ)u_{i,0} + kg_i
    i.e. u_{i,1} = (λ/2)(f_{i+1} + f_{i−1}) + (1 − λ)f_i + kg_i

Example 47. A string (with fixed ends at x = 0 and x = 1) governed by the hyperbolic partial
differential equation
    u_tt = u_xx ,   0 ≤ x ≤ 1,   t ≥ 0
with boundary conditions
    u(0, t) = u(1, t) = 0
starts from its equilibrium position with initial velocity

    g(x) = sin πx

and initial displacement u(x, 0) = 0.

What is its displacement u at time t = 0.4 and x = 0.2, 0.4, 0.6, 0.8? (Use the CTCS explicit scheme
with Δx = Δt = 0.2.)

Answer.

(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{5,j} = 0
(b) u(x, 0) = f(x) = 0 ⇒ u_{i,0} = f_i = 0
(c) g(x) = sin πx ⇒ g_i = sin 0.2iπ
(d) With h = Δx = Δt = k = 0.2 ⇒ λ = (k/h)² = 1, we have the following CTCS scheme:
    u_{i,j+1} = u_{i+1,j} + u_{i−1,j} − u_{i,j−1}
    with u_{i,1} = 0.2g_i = 0.2 sin 0.2iπ
(e) u_{i,1} = 0.2g_i = 0.2 sin 0.2iπ
    (i) i = 1 : u_{1,1} = 0.2 sin 0.2π = 0.1176
    (ii) i = 2 : u_{2,1} = 0.2 sin 0.4π = 0.1902
    (iii) i = 3 : u_{3,1} = 0.2 sin 0.6π = 0.1902
    (iv) u_{4,1} = u_{1,1} = 0.1176
    (v) u_{5,1} = u_{0,1} = 0
(f) u_{i,j+1} = u_{i+1,j} + u_{i−1,j} − u_{i,j−1}
    (i) i = 1, j = 1 : u_{1,2} = u_{2,1} + u_{0,1} − u_{1,0} = 0.1902 + 0 − 0 = 0.1902
    (ii) i = 2, j = 1 : u_{2,2} = u_{3,1} + u_{1,1} − u_{2,0} = 0.1902 + 0.1176 − 0 = 0.3078
    (iii) i = 3, j = 1 : u_{3,2} = u_{4,1} + u_{2,1} − u_{3,0} = 0.1176 + 0.1902 − 0 = 0.3078
    (iv) i = 4, j = 1 : u_{4,2} = u_{5,1} + u_{3,1} − u_{4,0} = 0 + 0.1902 − 0 = 0.1902
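The CTCS march for this example can also be coded compactly. The sketch below (illustrative only) uses the special starting formula for the first time level and then the three-level recurrence; it reproduces the displacements at t = 0.4.

import numpy as np

# CTCS scheme for u_tt = u_xx, fixed ends, u(x,0) = 0, u_t(x,0) = sin(pi x), h = k = 0.2.
h = k = 0.2
lam = (k / h) ** 2                                   # lambda = 1
x = np.linspace(0.0, 1.0, 6)
f = np.zeros_like(x)                                 # initial displacement
g = np.sin(np.pi * x)                                # initial velocity

u_prev = f.copy()                                    # level j = 0
u_curr = np.zeros_like(x)                            # level j = 1 (special start)
u_curr[1:-1] = lam / 2 * (f[2:] + f[:-2]) + (1 - lam) * f[1:-1] + k * g[1:-1]

u_next = np.zeros_like(x)                            # level j = 2, i.e. t = 0.4
u_next[1:-1] = lam * (u_curr[2:] + u_curr[:-2]) + 2 * (1 - lam) * u_curr[1:-1] - u_prev[1:-1]
print(u_next[1:-1])                                  # approx [0.1902, 0.3078, 0.3078, 0.1902]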


