
UECM2623/UCCM2623
NUMERICAL METHODS & STATISTICS

UECM1693
MATHEMATICS FOR PHYSICS II

UNIVERSITI TUNKU ABDUL RAHMAN

LECTURER: YAP LEE KEN

Contents

1 Preliminaries
  1.1 Introduction
  1.2 Error Analysis
2 Numerical Differentiation
3 Numerical Integration
  3.1 The Trapezoidal Rule
  3.2 Simpson's Rule
4 Roots Of Equations
  4.1 Introduction
  4.2 Bracketing Methods
    4.2.1 The Bisection Method
    4.2.2 The Method of False-Position
  4.3 Open Methods
    4.3.1 Fixed-Point Method
    4.3.2 Newton-Raphson Method
    4.3.3 Finite Difference Method
5 Some Topics In Linear Algebra
  5.1 Iterative Methods For Solving Linear Systems
  5.2 A Review On Eigenvalues
  5.3 Approximation of Eigenvalues
    5.3.1 The Power Method
    5.3.2 Power Method with Scaling
    5.3.3 Inverse Power Method with Scaling
    5.3.4 Shifted Inverse Power Method with Scaling
6 Optimization
  6.1 Direct Search Methods
    6.1.1 Golden Section Search (Maximum)
    6.1.2 Gradient Method
7 Numerical Methods For Ordinary Differential Equations
  7.1 First-Order Initial-Value Problems
    7.1.1 Euler's Method
    7.1.2 Heun's Method/Improved Euler's Method
    7.1.3 Taylor Series Method of Order p
    7.1.4 Runge-Kutta Method of Order p
    7.1.5 Multi-step Methods
    7.1.6 Adams-Bashforth/Adams-Moulton Method
    7.1.7 Summary: Orders Of Errors For Different Methods
  7.2 Higher-Order Initial-Value Problems
    7.2.1 The Linear Shooting Method
    7.2.2 Finite Difference Method
8 Numerical Methods For Partial Differential Equations
  8.1 Second Order Linear Partial Differential Equations
  8.2 Numerical Approximation To Derivatives: 1-Variable Functions
  8.3 Numerical Approximation To Derivatives: 2-Variable Functions
  8.4 Methods for Parabolic Equations
    8.4.1 FTCS Explicit Scheme
    8.4.2 Crank-Nicolson Method
  8.5 A Numerical Method for Elliptic Equations
  8.6 CTCS Scheme for Hyperbolic Equations

Chapter 1
Preliminaries
1.1 Introduction

Numerical methods are methods for solving problems on a computer or a pocket calculator. Such methods are needed for many real-life problems that do not have analytic solutions, or whose analytic solutions are practically useless.

1.2 Error Analysis

Definition 1.2.1.
(a) The error in a computed quantity is defined as

    Error = True value - Approximation.

(b) The absolute error is defined as

    Absolute error = |True value - Approximation|.

(c) The relative error is defined as

    Relative error = (True value - Approximation) / True value.

(d) The percentage relative error is defined as

    Percentage relative error = (True value - Approximation) / True value × 100%.

Definition 1.2.2. (Errors In Numerical Methods)

There are two major sources of errors in numerical computations.
(a) Round-off error occurs when a computer or calculator is used to perform real-number calculations.
Remark. The error arises because in machine computation a number must be represented by a number with a finite number of digits. These errors become important as the number of computations gets very large. To understand the nature of round-off errors, it is necessary to learn the ways numbers are stored and arithmetic operations are performed in a computer. The effect of round-off errors can be illustrated by the following example.
Example 1. (The effect of round-off error)
Let f(x) = (x^2 - 1/9)/(x - 1/3).
(i) If a 4-digit calculator is used to find f(0.3334), we will obtain

    (0.1112 - 0.1111)/(0.3334 - 0.3333) ≈ 1.

(ii) On the other hand, by using the formula f(x) = x + 1/3, we will obtain 0.3334 + 1/3 ≈ 0.6667.

(b) Truncation errors are those that result from using an approximation in place of an exact mathematical procedure.
Example 2. Recall that for all x,

    e^x = 1 + x + x^2/2! + x^3/3! + ....

In particular, if we let x = 1, then

    e = 1 + 1 + 1/2! + 1/3! + ....

If we use 1 + 1 + 1/2! + 1/3! + ... + 1/10! to approximate e, then the truncation error is 1/11! + 1/12! + ....
Remark. (Truncation Errors)
Many numerical schemes are derived from the Taylor series

    y(x + h) = y(x) + h y'(x) + (h^2/2!) y''(x) + ....

If the truncated Taylor series used to approximate y(x + h) is the order-n Taylor polynomial

    P_n(x) = y(x) + h y'(x) + (h^2/2!) y''(x) + ... + (h^n/n!) y^(n)(x),

then the approximation is called an nth-order method, since it is accurate to the terms of order h^n. The neglected remainder term

    (h^(n+1)/(n+1)!) y^(n+1)(x) + (h^(n+2)/(n+2)!) y^(n+2)(x) + ...

is called the (local) truncation error (TE). We say the TE per step is of order h^(n+1), i.e. TE = O(h^(n+1)).

Chapter 2
Numerical Differentiation
We replace the derivatives of a function f by their corresponding difference quotients based on the Taylor series:

(i) f(x + h) = f(x) + h f'(x) + (h^2/2!) f''(x) + (h^3/3!) f'''(x) + ...

(ii) f(x - h) = f(x) - h f'(x) + (h^2/2!) f''(x) - (h^3/3!) f'''(x) + ...

(iii) From (i), f'(x) = [f(x + h) - f(x)]/h - (h/2!) f''(x) - ..., i.e.

    f'(x) = [f(x + h) - f(x)]/h + O(h)    (the forward difference formula for f')

(iv) From (ii),

    f'(x) = [f(x) - f(x - h)]/h + O(h)    (the backward difference formula for f')

(v) (i) - (ii) ⇒

    f'(x) = [f(x + h) - f(x - h)]/(2h) + O(h^2)    (the central difference formula for f')

(vi) (i) + (ii) ⇒

    f''(x) = [f(x + h) - 2f(x) + f(x - h)]/h^2 + O(h^2)    (the central difference formula for f'')
Example 3. Given the following table of data:

    x    | 1.00 | 1.01 | 1.02 | 1.03
    f(x) | 5.00 | 6.01 | 7.04 | 8.09

(a) Use forward and backward difference approximations of O(h) and a central difference approximation of O(h^2) to estimate f'(1.02) using a step size h = 0.01.

(b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.02) = 104.
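For readers who want to check such estimates mechanically, here is a minimal Python sketch (the variable names and layout are my own, not part of the original notes) that reproduces the three approximations of part (a) from the tabulated data:

    # Difference-quotient estimates of f'(1.02) from the table in Example 3.
    x = [1.00, 1.01, 1.02, 1.03]
    f = [5.00, 6.01, 7.04, 8.09]
    h = 0.01

    forward  = (f[3] - f[2]) / h        # O(h) forward difference at x = 1.02
    backward = (f[2] - f[1]) / h        # O(h) backward difference at x = 1.02
    central  = (f[3] - f[1]) / (2 * h)  # O(h^2) central difference at x = 1.02
    print(forward, backward, central)   # approximately 105, 103, 104 (actual value: 104)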
Reading Assignment 2.0.1. Given the following table of data:

    x    | 1.00 | 1.25        | 1.50     | 1.75         | 2.00
    f(x) | -8   | -9.93359375 | -11.4375 | -11.99609375 | -11.0000

(a) Use forward and backward difference approximations of O(h) and a central difference approximation of O(h^2) to estimate the first derivative of f(x) at x = 1.5 using a step size h = 0.25.

(b) Calculate the percentage errors for the approximations in part (a) if the actual value is f'(1.5) = -4.5.

Answer.
(a) For h = 0.25,
(i) using the forward difference formula:

    f'(1.5) ≈ [f(1.5 + h) - f(1.5)]/h = [f(1.75) - f(1.5)]/0.25 = [-11.99609375 - (-11.4375)]/0.25 = -2.234375

(ii) using the backward difference formula:

    f'(1.5) ≈ [f(1.5) - f(1.5 - h)]/h = [f(1.5) - f(1.25)]/0.25 = [-11.4375 - (-9.93359375)]/0.25 = -6.015625

(iii) using the central difference formula:

    f'(1.5) ≈ [f(1.5 + h) - f(1.5 - h)]/(2h) = [f(1.75) - f(1.25)]/0.5 = [-11.99609375 - (-9.93359375)]/0.5 = -4.125

(b) percentage error = (actual value - approximation)/(actual value) × 100%

(i) For the forward difference approximation:

    percentage error = [-4.5 - (-2.234375)]/(-4.5) × 100% ≈ 50.35%

(ii) For the backward difference approximation:

    percentage error = [-4.5 - (-6.015625)]/(-4.5) × 100% ≈ -33.68%

(iii) For the central difference approximation:

    percentage error = [-4.5 - (-4.125)]/(-4.5) × 100% ≈ 8.33%

Chapter 3
Numerical Integration

The ideal way to evaluate a definite integral ∫_a^b f(x) dx is, of course, to find a formula F(x) for an anti-derivative of f. But some anti-derivatives are difficult or impossible to find. For example, there are no elementary formulas for the anti-derivatives of (sin x)/x, √(1 + x^4) and e^(x^2). When we cannot evaluate a definite integral with an anti-derivative, we turn to numerical methods such as the Trapezoidal Rule and Simpson's Rule. The problem of numerical integration is the numerical evaluation of integrals

    I = ∫_a^b f(x) dx,

where a and b are constants and f is a function given analytically or empirically by a table of values. Geometrically, if f(x) ≥ 0 for all x ∈ [a, b], then I is equal to the area under the curve of f between a and b.

3.1 The Trapezoidal Rule

Used to approximate a definite integral by adding up the areas of trapezoids.

Recall the area formula for the trapezoid. Partition [a, b] into n subintervals of equal length Δx = (b - a)/n. Then the area of the ith trapezoid is A_i = (Δx/2)[f(x_{i-1}) + f(x_i)], where x_i = a + iΔx = a + i(b - a)/n.
An approximation for the area under the curve y = f(x) from x = a to x = b is

    T = Σ_{i=1}^{n} A_i = Σ_{i=1}^{n} (Δx/2)[f(x_{i-1}) + f(x_i)]
      = (Δx/2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n-1}) + f(x_n)].

    ∴ ∫_a^b f(x) dx ≈ (h/2)[f(x_0) + 2f(x_1) + ... + 2f(x_{n-1}) + f(x_n)], where h = (b - a)/n.

Remark. The absolute error incurred by the Trapezoidal approximation is E_T = |∫_a^b f(x) dx - T|. This error will decrease as the step size Δx decreases, because the trapezoids fit the curve better as their number increases.

Theorem 3.1.1 (Error Bound for Trapezoidal Rule). If f'' is continuous and |f''(x)| ≤ M for all x ∈ [a, b], then

    |E_T| ≤ (b - a)^3 M / (12n^2).

Example 4. Estimate ∫_1^2 x^2 dx by the Trapezoidal Rule with n = 4.

Answer. f(x) = x^2, Δx = 1/4, x_0 = 1, x_1 = 5/4, x_2 = 3/2, x_3 = 7/4, x_4 = 2.

    T = (Δx/2)[f(x_0) + 2f(x_1) + 2f(x_2) + 2f(x_3) + f(x_4)]
      = (1/8)[(x_0)^2 + 2(x_1)^2 + 2(x_2)^2 + 2(x_3)^2 + (x_4)^2]
      = (1/8)[1 + 2(25/16) + 2(9/4) + 2(49/16) + 4] = 75/32 = 2.34375.

Example 5. How many subdivisions should be used in the Trapezoidal Rule to approximate ∫_1^2 (1/x) dx with an error less than 10^(-4)?

Answer. Since |f''(x)| = |2/x^3| ≤ 2 for x ∈ [1, 2], we have from Theorem 3.1.1,

    |E_T| ≤ (2 - 1)^3 (2) / (12n^2) = 1/(6n^2).

We choose n so that 1/(6n^2) < 10^(-4):

    1/(6n^2) < 10^(-4) ⇒ n^2 > 10^4/6 ⇒ n > 40.82.

So we can use any n ≥ 41. In particular, n = 41 will ensure the desired accuracy.
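The composite Trapezoidal Rule is straightforward to program. The following Python sketch (the function name trapezoid is my own) implements the formula above and reproduces Examples 4 and 5:

    def trapezoid(f, a, b, n):
        # Composite Trapezoidal Rule with n subintervals of width h = (b - a)/n.
        h = (b - a) / n
        s = f(a) + f(b)
        for i in range(1, n):
            s += 2 * f(a + i * h)
        return h * s / 2

    print(trapezoid(lambda x: x**2, 1, 2, 4))    # 2.34375, as in Example 4
    print(trapezoid(lambda x: 1 / x, 1, 2, 41))  # ~0.693157; ln 2 = 0.693147..., error < 1e-4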

3.2 Simpson's Rule

Used to approximate a definite integral by adding up areas under parabolas.

Partition [a, b] into n subintervals of equal length Δx = (b - a)/n, but this time with n an even number. The area under the curve y = Ax^2 + Bx + C over [-h, h] is

    A_p = ∫_{-h}^{h} (Ax^2 + Bx + C) dx = (h/3)(2Ah^2 + 6C).

To write A_p in terms of y_0, y_1, y_2: since the curve passes through (-h, y_0), (0, y_1), and (h, y_2), we get

    y_0 = Ah^2 - Bh + C    (3.1)
    y_1 = C                (3.2)
    y_2 = Ah^2 + Bh + C    (3.3)

Solving these simultaneous equations, we obtain C = y_1, 2Ah^2 = y_0 + y_2 - 2y_1, and A_p = (h/3)(y_0 + 4y_1 + y_2).
An approximation for the area under the curve y = f(x) from x = a to x = b is

    S = (h/3)[(f(x_0) + 4f(x_1) + f(x_2)) + (f(x_2) + 4f(x_3) + f(x_4)) + ... + (f(x_{n-2}) + 4f(x_{n-1}) + f(x_n))].

    ∴ ∫_a^b f(x) dx ≈ (h/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + ... + 2f(x_{n-2}) + 4f(x_{n-1}) + f(x_n)], where h = (b - a)/n.

Theorem 3.2.1 (Error Bound for Simpson's Rule). If f^(4) is continuous and |f^(4)(x)| ≤ M for all x ∈ [a, b], then the Simpson's rule error satisfies

    |E_S| ≤ (b - a)^5 M / (180n^4).
Example 6. Approximate ∫_0^1 4x^3 dx by Simpson's Rule with n = 4.

Answer. f(x) = 4x^3, h = 1/4, x_i = ih = i/4.

    S = (h/3)[f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + f(x_4)]
      = (1/12)[0 + 4(1/16) + 2(8/16) + 4(27/16) + 4]
      = 1.

Example 7. Use Simpson's Rule with n = 10 to approximate the integral ∫_0^1 e^(x^2) dx. Estimate the error involved in this approximation.

Answer. (i) f(x) = e^(x^2), h = 0.1, x_i = 0.1i.

    S = (0.1/3)[f(x_0) + 4f(x_1) + 2f(x_2) + ... + 2f(x_8) + 4f(x_9) + f(x_10)]
      = (0.1/3)[f(0) + 4f(0.1) + 2f(0.2) + ... + 2f(0.8) + 4f(0.9) + f(1)]
      = (0.1/3)[e^0 + 4e^0.01 + 2e^0.04 + 4e^0.09 + 2e^0.16 + 4e^0.25 + 2e^0.36 + 4e^0.49 + 2e^0.64 + 4e^0.81 + e^1]
      ≈ 1.462681

(ii) For 0 ≤ x ≤ 1,

    0 < f^(4)(x) = (12 + 48x^2 + 16x^4)e^(x^2) ≤ max_{0≤x≤1} f^(4)(x) = f^(4)(1) = 76e

    ⇒ E_S ≤ 76e(1 - 0)^5 / (180(10)^4) ≈ 0.000115.
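A matching sketch for the composite Simpson's Rule (again with a hypothetical function name) reproduces Examples 6 and 7:

    import math

    def simpson(f, a, b, n):
        # Composite Simpson's Rule; n must be even.
        if n % 2 != 0:
            raise ValueError("n must be even")
        h = (b - a) / n
        s = f(a) + f(b)
        for i in range(1, n):
            s += (4 if i % 2 == 1 else 2) * f(a + i * h)
        return h * s / 3

    print(simpson(lambda x: 4 * x**3, 0, 1, 4))          # 1.0, as in Example 6
    print(simpson(lambda x: math.exp(x * x), 0, 1, 10))  # ~1.462681, as in Example 7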

Exercise 1. Estimate ∫_{-2}^{4} 170/(1 + x^2) dx using

(a) the Trapezoidal Rule
(b) Simpson's Rule

with n = 6 subintervals. Calculate the absolute error in each case.

Answer. (a) 413, 0.6043 (b) 400, 13.6043


Exercise 2. Suppose the following data were obtained from an experiment:

    x | 3.0 | 3.25 | 3.5 | 3.75 | 4.0  | 4.25 | 4.5  | 4.75 | 5.0
    y | 6.7 | 7.4  | 8.2 | 9.2  | 10.4 | 11.6 | 12.5 | 13.3 | 14.0

Use Simpson's Rule to approximate ∫_3^5 y dx.

Answer. 20.7.

Chapter 4
Roots Of Equations

4.1 Introduction

Definition 4.1.1. Any number r for which f(r) = 0 is called a solution or a root of that equation, or a zero of f.

Example 8. The root of the linear equation ax + b = 0 is x = -b/a, a ≠ 0.

Example 9. The roots of the quadratic equation ax^2 + bx + c = 0 are given by

    x = [-b ± √(b^2 - 4ac)] / (2a).

Example 10. Find the roots of the equation x^3 - 4x = 0.

4.2 Bracketing Methods

These methods are used to solve the equation f(x) = 0 where f is a continuous function. They are based on the Intermediate Value Theorem, which says that

    if f is a continuous function on [a, b] that has values of opposite signs at a and b, then f has at least one root in the interval (a, b).

They are called bracketing methods because 2 initial guesses "bracketing" the root are required to start the procedure. The solution is found by systematically reducing the width of the bracket.
Two examples of bracketing methods are:
(a) The Bisection Method
(b) The Method of False-Position

Example 11. Show that the equation x = cos x has at least one solution in the interval (0, π/2).

Reading Assignment 4.2.1. Show that the equation x^3 - 4x + 1 = 0 has at least one solution in the interval (1, 2).

Answer. (a) Let f(x) = x^3 - 4x + 1. Being a polynomial, f is continuous on [1, 2].
(b) f(1) = -2, f(2) = 1 ⇒ f(1)f(2) is negative.
Hence, by the Intermediate Value Theorem, the equation has a solution in (1, 2).

4.2.1 The Bisection Method

The method calls for a repeated halving of subintervals of [a, b], at each step picking the half where f changes sign.
The basic algorithm is as follows:
Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l)f(x_u) < 0.
Step 2. An estimate of the root is determined by

    x_r = (x_l + x_u)/2.

Step 3. Determine in which subinterval the root lies, as follows:
(a) If f(x_l)f(x_r) < 0, the root lies in (x_l, x_r). ∴ set x_u = x_r, and return to Step 2.
(b) If f(x_l)f(x_r) > 0, the root lies in (x_r, x_u). ∴ set x_l = x_r, and return to Step 2.
(c) If f(x_l)f(x_r) = 0, then the root equals x_r. Stop.

Termination Criteria and Error Estimates

We must have a stopping criterion ε_s to terminate the computation. In practice, we require an error estimate that is not contingent on foreknowledge of the root. In particular, an approximate percentage error ε_a can be calculated as

    |ε_a| = |(present approximation - previous approximation)/(present approximation)| × 100% = |(x_r^new - x_r^old)/x_r^new| × 100%.

Example 12. Use bisection to find the root of f(x) = x^10 - 1. Employ initial guesses of x_l = 0 and x_u = 1.3, and iterate until the estimated percentage error ε_a falls below a stopping criterion of ε_s = 8%.

Answer.
(a) f(0)f(1.3) < 0 ⇒ the initial estimate of the root is

    x_r^1 = (0 + 1.3)/2 = 0.65.

(b) f(0)f(0.65) > 0 ⇒ the root is in (0.65, 1.3). So set x_l = 0.65, x_u = 1.3. Hence

    x_r^2 = (0.65 + 1.3)/2 = 0.975.

The estimated error is

    |ε_a| = |(x_r^new - x_r^old)/x_r^new| × 100% = |(0.975 - 0.65)/0.975| × 100% = 33.3%.

(c) f(0.65)f(0.975) > 0 ⇒ the root is in (0.975, 1.3). So set x_l = 0.975, x_u = 1.3. Hence

    x_r^3 = (0.975 + 1.3)/2 = 1.1375.

The estimated error is

    |ε_a| = |(1.1375 - 0.975)/1.1375| × 100% = 14.3%.

(d) f(0.975)f(1.1375) < 0 ⇒ the root is in (0.975, 1.1375). So set x_l = 0.975, x_u = 1.1375. Hence

    x_r^4 = (0.975 + 1.1375)/2 = 1.05625.

The estimated error is

    |ε_a| = |(1.05625 - 1.1375)/1.05625| × 100% = 7.7%.

After 4 iterations, the estimated percentage error is reduced to less than 8%.
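The whole procedure of Example 12 can be automated. Below is a minimal Python sketch (the names are my own) of the bisection algorithm with the estimated-percentage-error stopping rule:

    def bisection(f, xl, xu, es=8.0, max_iter=100):
        # Bisection with a stopping criterion on the estimated percentage error ea.
        xr_old = xl
        for _ in range(max_iter):
            xr = (xl + xu) / 2
            ea = abs((xr - xr_old) / xr) * 100   # estimated percentage error
            test = f(xl) * f(xr)
            if test < 0:
                xu = xr      # root lies in (xl, xr)
            elif test > 0:
                xl = xr      # root lies in (xr, xu)
            else:
                return xr    # exact root found
            if ea < es:
                return xr
            xr_old = xr
        return xr

    print(bisection(lambda x: x**10 - 1, 0, 1.3))  # 1.05625 after 4 iterations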

Advantages & Disadvantages of the Bisection Method

(a) Advantages
(i) This method is guaranteed to converge to the root.
(ii) The error bound is guaranteed to decrease by one half with each iteration.
(b) Disadvantages
(i) It generally converges more slowly than most other methods.
(ii) It requires two initial estimates, at which f has opposite signs, to start the procedure.
(iii) If f is not continuous, the method may converge to a point that is not a root; one should check the value of f(x) at the computed point.
(iv) If f does not change sign over any interval, then the method will not work.

4.2.2 The Method of False-Position

In this method, we use the graphical insight that "if f(c) is closer to zero than f(d), then c is closer to the root than d" (which is not true in general). By using this idea, we obtain

    x_r = x_u - f(x_u)(x_l - x_u) / [f(x_l) - f(x_u)].

By replacing the formula in Step 2 of the Bisection method with this one, we obtain the algorithm for the method of false-position as follows:
Step 1. Choose lower x_l and upper x_u guesses for the root such that f(x_l)f(x_u) < 0.
Step 2. An estimate of the root is determined by

    x_r = x_u - f(x_u)(x_l - x_u) / [f(x_l) - f(x_u)].

Step 3. Determine in which subinterval the root lies, as follows:
(a) If f(x_l)f(x_r) < 0, the root lies in (x_l, x_r). ∴ set x_u = x_r and return to Step 2.
(b) If f(x_l)f(x_r) > 0, the root lies in (x_r, x_u). ∴ set x_l = x_r and return to Step 2.
(c) If f(x_l)f(x_r) = 0, then the root equals x_r. Stop.

Example 13. Use the Method of False-Position to find the zero of f(x) = x - e^(-x). Use initial guesses of 0 and 1.

Answer.
(i) First iteration:
x_l = 0, f(x_l) = -1
x_u = 1, f(x_u) = 0.63212

    x_r = 1 - [(0 - 1)/(-1 - 0.63212)](0.63212) = 0.61270, f(x_r) = 0.07081

f(x_l)f(x_r) < 0 ⇒ the root lies in (x_l, x_r).
(ii) Second iteration:
x_l = 0, f(x_l) = -1
x_u = 0.61270, f(x_u) = 0.07081

    x_r = 0.61270 - [(0 - 0.61270)/(-1 - 0.07081)](0.07081) = 0.57218, f(x_r) = 0.00789

(iii) The calculations are summarized in the following table:

    x_l     | x_u     | x_r     | f(x_l)   | f(x_r)  | f(x_l)f(x_r)
    0.00000 | 1.00000 | 0.61270 | -1.00000 | 0.07081 | -0.07081
    0.00000 | 0.61270 | 0.57218 | -1.00000 | 0.00789 | -0.00789
    0.00000 | 0.57218 | 0.56770 | -1.00000 | 0.00087 | -0.00087
    0.00000 | 0.56770 | 0.56721 | -1.00000 | 0.00010 | -0.00010
    0.00000 | 0.56721 | 0.56715 | -1.00000 | 0.00001 | -0.00001
    0.00000 | 0.56715 | 0.56714 | -1.00000 | 0.00000 | 0.00000

Hence, the approximate root is 0.56714.
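The same bracketing loop with the false-position formula in Step 2 gives the following sketch (the function name is my own); it reproduces the iterates of Example 13:

    import math

    def false_position(f, xl, xu, tol=1e-5, max_iter=100):
        # Method of False-Position; stops when successive estimates differ by < tol.
        xr_old = None
        for _ in range(max_iter):
            xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
            if xr_old is not None and abs(xr - xr_old) < tol:
                return xr
            if f(xl) * f(xr) < 0:
                xu = xr      # root lies in (xl, xr)
            else:
                xl = xr      # root lies in (xr, xu)
            xr_old = xr
        return xr

    print(false_position(lambda x: x - math.exp(-x), 0, 1))  # ~0.56714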

Reading Assignment 4.2.2. Use the Method of False-Position to find the zero of f(x) = x - e^(-x). Use initial guesses of 0 and 1. Iterate until two successive approximations differ by less than 0.01.

Answer. Let ε_a = |x_r^new - x_r^old|.
(i) First iteration:
x_l = 0, f(x_l) = -1
x_u = 1, f(x_u) = 0.63212

    x_r = 1 - [(0 - 1)/(-1 - 0.63212)](0.63212) = 0.61270

f(x_l)f(x_r) < 0 ⇒ the root lies in (x_l, x_r).
(ii) Second iteration:
x_l = 0, f(x_l) = -1
x_u = 0.61270, f(x_u) = 0.07081

    x_r = 0.61270 - [(0 - 0.61270)/(-1 - 0.07081)](0.07081) = 0.57218

ε_a = |0.57218 - 0.61270| = 0.04052
(iii) Third iteration:
x_l = 0, f(x_l) = -1
x_u = 0.57218, f(x_u) = 0.00789

    x_r = x_u - f(x_u)(x_l - x_u)/[f(x_l) - f(x_u)] = 0.57218 - [(0 - 0.57218)/(-1 - 0.00789)](0.00789) = 0.56770

ε_a = |0.56770 - 0.57218| = 0.00448 < 0.01 (tolerance satisfied).
Hence, the approximate root is 0.56770.

4.3 Open Methods

In contrast to the bracketing methods, the open methods are based on formulas that require a single starting value, or two starting values that do not necessarily bracket the root. Hence, they sometimes diverge from the true root. However, when they converge, they tend to converge much faster than the bracketing methods.
Examples of open methods:
(a) Fixed-Point Method
(b) Newton-Raphson Method
(c) Secant Method

4.3.1 Fixed-Point Method

To solve f(x) = 0, rearrange f(x) = 0 into the form x = g(x). Then the scheme is given by

    x_{n+1} = g(x_n), n = 0, 1, 2, ...

Remark. This method is also called the successive substitution method, or one-point iteration.

Example 14. The function f(x) = x^2 - 3x + e^x - 2 is known to have two roots, one negative and one positive. Find the smaller root by using the fixed-point method.

Answer. The smaller root is the negative root.
f(-1) = 2 + e^(-1) > 0 and f(0) = -1 < 0 ⇒ f(-1)f(0) < 0 ⇒ the negative root lies in (-1, 0).
f(x) = 0 can be written as x = (x^2 + e^x - 2)/3 = g(x), so the algorithm is

    x_{n+1} = g(x_n) = (x_n^2 + e^(x_n) - 2)/3, n = 0, 1, 2, ...

If we choose the initial guess x_0 = -0.5, we obtain

    x_1 = (x_0^2 + e^(x_0) - 2)/3 = ((-0.5)^2 + e^(-0.5) - 2)/3 ≈ -0.3811564468

and so on. The results are summarized in the following table:

    n | x_n
    0 | -0.5 (initial guess)
    1 | -0.381156446
    2 | -0.390549582
    3 | -0.390262048
    4 | -0.390272019
    5 | -0.390271674
    6 | -0.390271686
    7 | -0.390271686

Hence the root is approximately -0.390271686.
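Example 14 is easy to replicate in a few lines of Python (a sketch with hypothetical names, not the authors' code):

    import math

    def fixed_point(g, x0, tol=1e-9, max_iter=100):
        # Fixed-point iteration x_{n+1} = g(x_n).
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("iteration did not converge")

    g = lambda x: (x * x + math.exp(x) - 2) / 3   # rearrangement used in Example 14
    print(fixed_point(g, -0.5))                   # ~ -0.390271686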

Note 1:
(a) There are many ways to change the equation f(x) = 0 to the form x = g(x), and the speed of convergence of the corresponding iterative sequences {x_n} may differ accordingly. For instance, if we use the arrangement x = -√(3x - e^x + 2) = g(x) in the above example, the sequence might not converge at all.
(b) It is simple to implement, but in this case slow to converge.
(c) Even in the case where convergence is possible, divergence can occur if the initial guess is not sufficiently close to the root.

Example 15. If we solve x^3 + 6x - 3 = 0 using the fixed-point algorithm

    x_{n+1} = (3 - x_n^3)/6,

we obtain the following results with x_0 = 0.5, x_0 = 1.5, x_0 = 2.5 and x_0 = 5.5:

With x_0 = 0.5: 0.4791667, 0.4816638, 0.4813757, 0.4814091, 0.4814052, 0.4814057, 0.4814056, ...; the iterates settle at 0.4814056 by n = 8.
With x_0 = 1.5: -0.0625, 0.5000407, 0.4791616, ...; the iterates again settle at 0.4814056.
With x_0 = 2.5: -2.1041667, 2.0527057, ...; after wandering, the iterates reach 0.4814013 at n = 13 and settle at 0.4814056 by n = 16.
With x_0 = 5.5: -27.2291667, 3365.242242, -6351813600, ...; the sequence DIVERGES.

Note that when the initial guess x_0 is close enough to the fixed point r, the method will converge; but if it is too far away from r, the method will diverge.

Theorem 4.3.1 (Convergence of Fixed-Point Method). Let g be a continuous function on [a, b] with a ≤ g(x) ≤ b for all x ∈ [a, b]. Then g(x) has at least one fixed point r in (a, b).
If, in addition, g is differentiable and satisfies |g'(x)| ≤ M < 1 for all x in [a, b], M a constant, then the fixed point is unique and the method converges for any choice of initial point x_0 in (a, b).

4.3.2 Newton-Raphson Method

This is a method used to approximate a root of an equation f(x) = 0, assuming that f has a continuous derivative f'. It consists of the following steps:
Step 1. Guess a first approximation x_0 to the root. (A graph may be helpful.)
Step 2. Use the first approximation to get the second, the second to get the third, and so on, using the formula

    x_{n+1} = x_n - f(x_n)/f'(x_n), n = 0, 1, 2, 3, ...

where x_n is the nth approximation. Stop when |x_{n+1} - x_n| < ε_s, the pre-specified stopping criterion.

Remark. You may also stop when |(x_{n+1} - x_n)/x_{n+1}| × 100% < ε_s.

Note 2: The underlying idea is that we approximate the graph of f by suitable tangents.
If you are writing a program for this method, don't forget to include an upper limit on the number of iterations in the procedure.
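Following the advice in Note 2, here is a short Python sketch of the scheme with an iteration cap (the names are my own):

    import math

    def newton(f, df, x0, tol=1e-8, max_iter=50):
        # Newton-Raphson iteration with an upper limit on the number of iterations.
        x = x0
        for _ in range(max_iter):
            x_new = x - f(x) / df(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("iteration did not converge")

    print(newton(lambda x: x - math.exp(-x),
                 lambda x: 1 + math.exp(-x), 1.0))  # ~0.56714329, as in Example 16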

Example 16. Use the Newton-Raphson Method to approximate the root of f(x) = x - e^(-x) = 0 that lies between 0 and 2. Continue the iterations until two successive approximations differ by less than 10^(-8).

Answer. The iteration is given by

    x_{n+1} = x_n - (x_n - e^(-x_n))/(1 + e^(-x_n)) = (x_n + 1)/(e^(x_n) + 1).

Starting with x_0 = 1, we obtain

    x_1 = (x_0 + 1)/(e^(x_0) + 1) = 2/(e + 1) ≈ 0.537882842
    x_2 = (x_1 + 1)/(e^(x_1) + 1) ≈ 0.566986991
    x_3 ≈ 0.567143286
    x_4 ≈ 0.56714329

Since |x_4 - x_3| = 0.000000004 = 0.4 × 10^(-8) < 10^(-8), we stop the process and take x_4 ≈ 0.56714329 as the required root.

Reading Assignment 4.3.2. The equation 2x^3 + x^2 - x + 1 = 0 has only one real root. Use the Newton-Raphson Method to find this root. Continue the iterations until two successive approximations differ by less than 0.0001.

Answer. Let f(x) = 2x^3 + x^2 - x + 1.
(a) First, use the Intermediate Value Theorem to locate the root.
Since f(-2)f(-1) = (-9)(1) = -9 < 0, the root is in (-2, -1).
(b) Using the iterative formula

    x_{n+1} = x_n - f(x_n)/f'(x_n) = x_n - (2x_n^3 + x_n^2 - x_n + 1)/(6x_n^2 + 2x_n - 1)

with the initial approximation x_0 = [(-2) + (-1)]/2 = -1.5, we obtain

    x_1 = -1.289473684, |x_1 - x_0| > 0.0001
    x_2 = -1.236967446, |x_2 - x_1| > 0.0001
    x_3 = -1.233763552, |x_3 - x_2| > 0.0001
    x_4 = -1.233751929

Since |x_4 - x_3| = 0.000011623 < 0.0001, the required approximation to the root is x_4 = -1.233751929.

Exercise 3. Use the Newton-Raphson method to approximate 22^(1/4) correct to six decimal places.

Answer. 2.165737

Advantages
(a) It needs only 1 initial guess.
(b) It converges very rapidly.
Disadvantages
(a) It may not converge if the initial guess is not sufficiently close to the true root.
(b) The calculation of f'(x_n) may be very complicated.
(c) Difficulties may arise if |f'(x_n)| is very small near a solution of f(x) = 0.

4.3.3 Finite Difference Method

Consider a second-order boundary value problem (BVP)

    y'' + P(x)y' + Q(x)y = f(x), y(a) = α, y(b) = β.

Suppose a = x_0 < x_1 < ... < x_{n-1} < x_n = b with x_i - x_{i-1} = h for all i = 1, 2, ..., n. Let y_i = y(x_i), P_i = P(x_i), Q_i = Q(x_i) and f_i = f(x_i). Then by replacing y' and y'' with their central difference approximations in the BVP, we get

    (y_{i+1} - 2y_i + y_{i-1})/h^2 + P_i (y_{i+1} - y_{i-1})/(2h) + Q_i y_i = f_i, i = 1, 2, ..., n - 1,

or, after simplifying,

    (1 + (h/2)P_i) y_{i+1} + (h^2 Q_i - 2) y_i + (1 - (h/2)P_i) y_{i-1} = h^2 f_i.

The last equation, known as a finite difference equation, is an approximation to the differential equation. It enables us to approximate the solution at x_1, ..., x_{n-1}.

Example 17. Solving BVPs Using the Finite Difference Method
Use the above difference equation with n = 4 to approximate the solution of the BVP

    y'' - 4y = 0, y(0) = 0, y(1) = 5.

Answer. Here h = (1 - 0)/4 = 0.25, x_i = 0.25i, i = 0, 1, 2, 3, 4. Hence, the difference equation is

    y_{i+1} - 2.25y_i + y_{i-1} = 0, i = 1, 2, 3.

That is,

    y_2 - 2.25y_1 + y_0 = 0
    y_3 - 2.25y_2 + y_1 = 0
    y_4 - 2.25y_3 + y_2 = 0

With the BCs y_0 = 0 and y_4 = 5, the above system becomes

    y_2 - 2.25y_1 = 0
    y_3 - 2.25y_2 + y_1 = 0
    -2.25y_3 + y_2 = -5

Solving the system gives y_1 = 0.7256, y_2 = 1.6327, and y_3 = 2.9479.

Notes: We can improve the accuracy by using smaller h. But for that we have to pay a price, i.e. we have to solve a larger system of equations.
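The linear system of Example 17 is tridiagonal and can be solved with any linear-algebra routine. A minimal sketch using NumPy (assuming NumPy is available; the variable names are my own):

    import numpy as np

    # Interior equations of Example 17: y_{i+1} - 2.25 y_i + y_{i-1} = 0, i = 1, 2, 3,
    # with the boundary values y0 = 0 and y4 = 5 moved to the right-hand side.
    A = np.array([[-2.25,  1.00,  0.00],
                  [ 1.00, -2.25,  1.00],
                  [ 0.00,  1.00, -2.25]])
    b = np.array([0.0, 0.0, -5.0])
    print(np.linalg.solve(A, b))   # [0.7256, 1.6327, 2.9479]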

Chapter 5
Some Topics In Linear Algebra

5.1 Iterative Methods For Solving Linear Systems

The iterative methods start with an initial approximation to a solution and then generate a succession of better and better approximations that (may) tend toward an exact solution. We shall study the following two iterative methods:

(a) Jacobi iteration: The order in which the equations are examined is irrelevant, since the Jacobi method treats them independently. For this reason, the Jacobi method is also known as the method of simultaneous corrections, since the updates could in principle be done simultaneously.

(b) Gauss-Seidel iteration: this is a method of successive corrections. It is very similar to the Jacobi technique, except that it replaces approximations by corresponding new ones as soon as the latter are available.

Definition 5.1.1. An n × n matrix A = [a_ij] is strictly diagonally dominant if

    |a_kk| > Σ_{j≠k} |a_kj|, for all k = 1, 2, ..., n.

That is, A is strictly diagonally dominant if the absolute value of each diagonal entry is greater than the sum of the absolute values of the remaining entries in the same row.

Example 18. Show that

    A = [  2  -7   4 ]
        [  8   1  -6 ]
        [ -3   5   9 ]

is not strictly diagonally dominant.

Answer. Since in the first row, |a_11| = 2 < |a_12| + |a_13| = 7 + 4 = 11, and in the second row, |a_22| = 1 < |a_21| + |a_23| = 8 + 6 = 14, matrix A is not strictly diagonally dominant.

However, if we interchange the first and second rows, the resulting matrix

    B = [  8   1  -6 ]
        [  2  -7   4 ]
        [ -3   5   9 ]

is strictly diagonally dominant, since

    |b_11| = 8 > |b_12| + |b_13| = 1 + 6 = 7,
    |b_22| = 7 > |b_21| + |b_23| = 2 + 4 = 6,
    |b_33| = 9 > |b_31| + |b_32| = 3 + 5 = 8.

Theorem 5.1.1 (Convergence of the Iterative Methods). If the square matrix A is strictly diagonally dominant, then the Gauss-Seidel and Jacobi approximations to the solution of the linear system Ax = b both converge to the exact solution for all choices of the initial approximation.

Remark (Termination Criterion). We can stop the computation when

    |x_i^(p+1) - x_i^(p)| < ε

for all i, where ε is the pre-specified stopping criterion.

Example 19. Use the Jacobi Iteration Technique to solve

    x_1 - 10x_2 + x_3 = 13
    20x_1 + x_2 - x_3 = 17
    -x_1 + x_2 + 10x_3 = 18

Iterate until |x_i^(p+1) - x_i^(p)| < 0.0002 for all i. Perform all the computations to 5 decimal places.

Answer.
(i) To ensure the convergence of this method, we rearrange the equations to obtain a strictly diagonally dominant system:

    20x_1 + x_2 - x_3 = 17
    x_1 - 10x_2 + x_3 = 13
    -x_1 + x_2 + 10x_3 = 18

(ii) Make each diagonal element the subject of its equation:

    x_1 = (1/20)(17 - x_2 + x_3)
    x_2 = (1/10)(-13 + x_1 + x_3)
    x_3 = (1/10)(18 + x_1 - x_2)    (*)

(iii) Start with a reasonable initial approximation to the solution, e.g. x_1^(0) = 0, x_2^(0) = 0, x_3^(0) = 0.
(iv) Substitute this initial approximation into the RHS of (*), and calculate the new approximation

    x_1^(p+1) = (1/20)(17 - x_2^(p) + x_3^(p))
    x_2^(p+1) = (1/10)(-13 + x_1^(p) + x_3^(p))
    x_3^(p+1) = (1/10)(18 + x_1^(p) - x_2^(p))

where x_i^(p) is the pth iteration of the approximation to x_i. That is, the new approximation is

    x_1^(1) = (1/20)(17 - x_2^(0) + x_3^(0)) = 0.850
    x_2^(1) = (1/10)(-13 + x_1^(0) + x_3^(0)) = -1.3
    x_3^(1) = (1/10)(18 + x_1^(0) - x_2^(0)) = 1.8

(v) To improve the approximation, we can repeat the substitution process. The next approximation is

    x_1^(2) = (1/20)(17 - (-1.3) + 1.8) = 1.005
    x_2^(2) = (1/10)(-13 + 0.85 + 1.8) = -1.035
    x_3^(2) = (1/10)(18 + 0.85 - (-1.3)) = 2.015

(vi) As |x_i^(6) - x_i^(5)| < 0.0002 for all i, we stop the computation.
The results obtained are summarized in the following table:

    m       | 0 | 1     | 2      | 3       | 4        | 5        | 6
    x_1^(m) | 0 | 0.850 | 1.005  | 1.0025  | 1.0001   | 0.99997  | 1.00000
    x_2^(m) | 0 | -1.3  | -1.035 | -0.9980 | -0.99935 | -0.99999 | -1.00000
    x_3^(m) | 0 | 1.8   | 2.015  | 2.004   | 2.00005  | 1.99995  | 2.00000

Example 20. Use the Gauss-Seidel Method to solve

    x_1 - 10x_2 + x_3 = 13
    20x_1 + x_2 - x_3 = 17
    -x_1 + x_2 + 10x_3 = 18

Iterate until |x_i^(p+1) - x_i^(p)| < 0.0002 for all i. Perform all the computations to 5 decimal places.

Answer.
(i) Make sure the matrix is strictly diagonally dominant (rearrange it if necessary). Rearranging the equations leads to

    20x_1 + x_2 - x_3 = 17
    x_1 - 10x_2 + x_3 = 13
    -x_1 + x_2 + 10x_3 = 18

which is strictly diagonally dominant.
(ii) Make each diagonal element the subject of its equation:

    x_1 = (1/20)(17 - x_2 + x_3)
    x_2 = (1/10)(-13 + x_1 + x_3)
    x_3 = (1/10)(18 + x_1 - x_2)    (*)

(iii) Start with a reasonable initial approximation to the solution, e.g. x_1^(0) = 0, x_2^(0) = 0, x_3^(0) = 0.
(iv) Substitute this initial approximation into the RHS of (*), using each new component as soon as it is available:

    x_1^(p+1) = (1/20)(17 - x_2^(p) + x_3^(p))
    x_2^(p+1) = (1/10)(-13 + x_1^(p+1) + x_3^(p))
    x_3^(p+1) = (1/10)(18 + x_1^(p+1) - x_2^(p+1))

where x_i^(p) is the pth iteration of the approximation to x_i. That is, the new approximation is

    x_1^(1) = (1/20)(17 - x_2^(0) + x_3^(0)) = 0.850
    x_2^(1) = (1/10)(-13 + x_1^(1) + x_3^(0)) = -1.215
    x_3^(1) = (1/10)(18 + x_1^(1) - x_2^(1)) = 2.0065

(v) To improve the approximation, we can repeat the substitution process. The next approximation is

    x_1^(2) = (1/20)(17 - (-1.215) + 2.0065) = 1.01108
    x_2^(2) = (1/10)(-13 + 1.01108 + 2.0065) = -0.99824
    x_3^(2) = (1/10)(18 + 1.01108 - (-0.99824)) = 2.00093

The results obtained are summarized in the following table:

    m       | 0 | 1      | 2        | 3        | 4
    x_1^(m) | 0 | 0.850  | 1.01108  | 0.99996  | 1.00000
    x_2^(m) | 0 | -1.215 | -0.99824 | -0.99991 | -1.00000
    x_3^(m) | 0 | 2.0065 | 2.00093  | 1.99999  | 2.00000

Note 3:
(a) The Gauss-Seidel technique is not appropriate for use on vector computers, as the set of equations must be solved in series.
(b) The Gauss-Seidel technique requires less storage than the Jacobi technique and leads to a convergent solution almost twice as fast.
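Both iterations are easy to express in code. The sketch below (hypothetical names; NumPy assumed) implements Gauss-Seidel; replacing x[j] by x_old[j] in the inner sum turns it into the Jacobi iteration:

    import numpy as np

    def gauss_seidel(A, b, x0, tol=2e-4, max_iter=100):
        # Gauss-Seidel iteration; A should be strictly diagonally dominant
        # (rearrange the equations first if necessary).
        n = len(b)
        x = np.array(x0, dtype=float)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(n):
                s = sum(A[i][j] * x[j] for j in range(n) if j != i)
                x[i] = (b[i] - s) / A[i][i]
            if np.max(np.abs(x - x_old)) < tol:
                return x
        raise RuntimeError("iteration did not converge")

    A = [[20, 1, -1], [1, -10, 1], [-1, 1, 10]]
    b = [17, 13, 18]
    print(gauss_seidel(A, b, [0, 0, 0]))  # -> [1, -1, 2], as in Example 20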

5.2 A Review On Eigenvalues

Definition 5.2.1. Let A be an n × n matrix. A scalar λ is called an eigenvalue of A if there exists a nonzero vector x ∈ R^n such that

    Ax = λx.

In this case, the nonzero vector x is called an eigenvector of A corresponding to λ.

Example 21. Let A = [[4, 1], [3, 2]]. Verify that v_1 = [1, 1]^T is an eigenvector of A and find the corresponding eigenvalue.

Answer.

    Av_1 = [[4, 1], [3, 2]][1, 1]^T = [5, 5]^T = 5v_1

⇒ [1, 1]^T is an eigenvector of A corresponding to the eigenvalue λ = 5.

Note 4:
(a) To find an eigenvalue of A, we solve the characteristic equation

    |A - λI| = 0

for λ.
(b) To find the eigenvectors corresponding to λ, we solve the linear system

    (A - λI)x = 0

for nonzero x.

Exercise 4. Find the eigenvalues and eigenvectors of A = [[4, 1], [3, 2]].

Answer. λ_1 = 1, λ_2 = 5; v_1 = [1, -3]^T, v_2 = [1, 1]^T.

Definition 5.2.2. A square matrix A is called diagonalizable if there is an invertible matrix P such that P^(-1)AP is a diagonal matrix; the matrix P is said to diagonalize A.

Exercise 5. For the given 2 × 2 matrices A and P, verify that P diagonalizes A.

Answer. Computing the product P^(-1)AP gives a diagonal matrix, so P diagonalizes A.

Theorem 5.2.1. Let A be an n × n matrix.
(a) A is diagonalizable if and only if it has n linearly independent eigenvectors.
(b) These n linearly independent eigenvectors form a basis of R^n.

5.3 Approximation of Eigenvalues

Definition 5.3.1. An eigenvalue of an n × n matrix A is called the dominant eigenvalue of A if its absolute value is larger than the absolute values of the remaining n - 1 eigenvalues. An eigenvector corresponding to the dominant eigenvalue is called a dominant eigenvector of A.

Example 22.
(a) If a 4 × 4 matrix A has eigenvalues λ_1 = -5, λ_2, λ_3, λ_4 with |λ_1| > |λ_i| for all i ≠ 1, then λ_1 = -5 is the dominant eigenvalue.
(b) A 3 × 3 matrix B with eigenvalues

    λ_1 = 2, λ_2 = -5, λ_3 = 5

has no dominant eigenvalue (since |λ_2| = |λ_3|).

5.3.1 The Power Method

Theorem 5.3.1 (The Power Method (or Direct Iteration Method)). Let A be a diagonalizable n × n matrix with eigenvalues

    |λ_1| > |λ_2| ≥ ... ≥ |λ_n|.

Assume that v_1, ..., v_n are unit eigenvectors of A associated with λ_1, λ_2, ..., λ_n respectively. Let x_0 be a nonzero vector in R^n that is an initial guess for the dominant eigenvector v_1. Then the vector x_k = A^k x_0 is a good approximation to a dominant eigenvector of A when the exponent k is sufficiently large.

Note 5:
(a) x_0 is an initial guess or approximation for the dominant eigenvector.
(b) If x_k = A^k x_0 is an approximation to the dominant eigenvector, then the dominant eigenvalue λ_1 can be approximated by the Rayleigh quotient

    λ_1 ≈ (Ax_k · x_k)/(x_k · x_k).

(c) The rate of convergence depends on the ratios

    |λ_2/λ_1|, ..., |λ_n/λ_1|.

The smaller the ratios, the faster the rate of convergence.

Example 23. Let A = [[4, -2], [3, -1]]. Use the power method to approximate the dominant eigenvalue and a corresponding eigenvector with x_0 = [1, 0]^T. Perform 6 iterations.

Answer. We compute x_{m+1} = Ax_m for m = 0, 1, 2, ....

    x_1 = Ax_0 = [[4, -2], [3, -1]][1, 0]^T = [4, 3]^T = 4[1, 0.75]^T
    x_2 = Ax_1 = [10, 9]^T = 10[1, 0.9]^T
    x_3 = Ax_2 = [22, 21]^T = 22[1, 0.9545]^T
    x_4 = Ax_3 = [46, 45]^T = 46[1, 0.9783]^T
    x_5 = Ax_4 = [94, 93]^T = 94[1, 0.9894]^T
    x_6 = Ax_5 = [190, 189]^T = 190[1, 0.9947]^T

So x_6 = [190, 189]^T is an approximation to a dominant eigenvector, and the dominant eigenvalue

    λ_1 ≈ (Ax_6 · x_6)/(x_6 · x_6) = ([382, 381] · [190, 189])/([190, 189] · [190, 189]) ≈ 2.0132.

Remark.
(a) From the above calculations, it is clear that the vectors x_m are getting closer and closer to scalar multiples of [1, 1]^T, which is a dominant eigenvector of A. It can also be checked that λ_1 = 2.
(b) The convergence is rather slow.
(c) Termination Criteria. Let λ^(i) denote the approximation to the eigenvalue λ at the ith step. We can stop the computation once the relative error

    |(λ^(i) - λ)/λ|

is less than the pre-specified error criterion ε. Unfortunately, the actual value λ is usually unknown. So instead, we will stop the computation at the ith step if the estimated relative error

    |(λ^(i) - λ^(i-1))/λ^(i)| < ε.

The value obtained by multiplying the estimated relative error by 100% is called the estimated percentage error.

5.3.2 Power Method with Scaling

Theorem 5.3.2. The power method often generates a sequence of vectors {A^k x_0} that have inconveniently large entries. This can be avoided by scaling the iterated vector at each step. That is, we divide Ax_0 by its entry of largest absolute value and label the resulting vector x_1. We repeat the process with the vector Ax_1 to obtain the scaled-down vector x_2, and so on.

The algorithm for the Power Method with Scaling is as follows:
1. Compute y_k = Ax_{k-1}.
2. Set x_k = y_k / max{|y_k|}.
3. Dominant eigenvalue of matrix A ≈ (Ax_k · x_k)/(x_k · x_k).

Example 24. Repeat the iterations of Example 23 using the power method with scaling.

Answer.
(i) y_1 = Ax_0 = [4, 3]^T. We define x_1 = (1/4)y_1 = [1, 0.75]^T.
(ii) y_2 = Ax_1 = [[4, -2], [3, -1]][1, 0.75]^T = [2.5, 2.25]^T. We define x_2 = (1/2.5)y_2 = [1, 0.9]^T.
Continuing in this manner, we will obtain x_6 ≈ [1, 0.99474]^T. Finally, the dominant eigenvalue

    λ_1 ≈ (Ax_6 · x_6)/(x_6 · x_6) = ([2.0105, 2.0053] · [1, 0.99474])/([1, 0.99474] · [1, 0.99474]) ≈ 4.0052/1.9895 ≈ 2.0132.

Observe that the sequence of vectors {x_k} is approaching the dominant eigenvector [1, 1]^T.
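A compact NumPy sketch of the power method with scaling (the function name is my own) reproduces Example 24:

    import numpy as np

    def power_method(A, x0, k=6):
        # Power method with scaling: divide each iterate by its entry of
        # largest absolute value; estimate lambda_1 by the Rayleigh quotient.
        x = np.array(x0, dtype=float)
        for _ in range(k):
            y = A @ x
            x = y / np.max(np.abs(y))
        lam = (A @ x) @ x / (x @ x)
        return lam, x

    A = np.array([[4.0, -2.0], [3.0, -1.0]])
    lam, v = power_method(A, [1.0, 0.0])
    print(lam, v)   # ~2.0132, [1, 0.9947]; the exact dominant eigenvalue is 2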

5.3.3 Inverse Power Method with Scaling

Theorem 5.3.3. If, in addition, A is an invertible matrix, then

    |λ_1| > |λ_2| ≥ |λ_3| ≥ ... ≥ |λ_n| > 0.

Hence 1/λ_n will be the dominant eigenvalue of A^(-1). We could apply the power method to A^(-1) to find 1/λ_n, and hence λ_n. This is called the inverse power method.

The algorithm for the Inverse Power Method with Scaling is as follows:
1. Compute B = A^(-1).
2. Compute y_k = Bx_{k-1}.
3. Set x_k = y_k / max{|y_k|}.
4. Dominant eigenvalue of matrix B: 1/λ_n ≈ (Bx_k · x_k)/(x_k · x_k) = μ.
5. Smallest eigenvalue of A: λ_n ≈ 1/μ.

Example 25. Let λ_1 and λ_2 be the eigenvalues of A = [[2, 3], [4, 1]] such that λ_1 > λ_2.

(a) Use an appropriate power method with scaling to approximate an eigenvector corresponding to λ_2. Start with the initial approximation x_0 = [-0.7, 1]^T. Round off all computations to four decimal places, and stop after two iterations.
(b) Use the result of part (a) to approximate λ_2.
(c) Find the estimated percentage error in the approximation of λ_2.

Answer.
(a) We use the inverse power method with scaling to find λ_2, the smallest eigenvalue of A in absolute value.

    B = A^(-1) = [[-0.1, 0.3], [0.4, -0.2]]

Iteration 1: y_1 = Bx_0 = [0.37, -0.48]^T, x_1 = y_1/(-0.48) = [-0.7708, 1]^T.
Iteration 2: y_2 = Bx_1 = [0.3771, -0.5083]^T, x_2 = y_2/(-0.5083) = [-0.7419, 1]^T is an approximation to the required eigenvector.

(b) An approximation to λ_2 is the second approximation

    λ_2^(2) = (x_2 · x_2)/(Bx_2 · x_2) = -2.0020.

(c) The first approximation of λ_2 is λ_2^(1) = (x_1 · x_1)/(Bx_1 · x_1) = -1.9952.
The percentage error is

    |(λ_2^(2) - λ_2^(1))/λ_2^(2)| × 100% ≈ 0.3397%.

5.3.4 Shifted Inverse Power Method with Scaling

Theorem 5.3.4. Let A be a diagonalizable n × n matrix with eigenvalues λ_1, ..., λ_n. This method can be used to find any eigenvalue and eigenvector of A. Let α be any number and let λ_k be the eigenvalue of A closest to α. The inverse power iteration with A - αI will converge to 1/(λ_k - α) and a multiple of v_k.

The algorithm for the Shifted Inverse Power Method with Scaling is as follows:
1. Compute C = (A - αI)^(-1).
2. Compute y_k = Cx_{k-1}.
3. Set x_k = y_k / max{|y_k|}.
4. Dominant eigenvalue of matrix (A - αI)^(-1): β ≈ (Cx_k · x_k)/(x_k · x_k).
5. Eigenvalue of A closest to α ≈ 1/β + α.

Example 26. Apply the shifted inverse power method with scaling (2 steps) to A = [[7, -2], [3, -1]] to find the eigenvalue nearest to α = 6. Start with the initial guess x_0 = [1, 0]^T.

Answer.

    A - αI = A - 6I = [[1, -2], [3, -7]]
    C = (A - 6I)^(-1) = [[7, -2], [3, -1]]

Iteration 1: y_1 = Cx_0 = [7, 3]^T, x_1 = (1/7)y_1 = [1, 0.4286]^T.
Iteration 2: y_2 = Cx_1 = [6.1428, 2.5714]^T, x_2 = (1/6.1428)y_2 = [1, 0.4186]^T is an approximation to an eigenvector corresponding to the required eigenvalue, say λ.
Let β be the dominant eigenvalue of C. Then

    β ≈ (Cx_2 · x_2)/(x_2 · x_2) = 7.2433/1.1752 = 6.1634, since Cx_2 = [6.1628, 2.5814]^T.

Hence, λ ≈ α + 1/β = 6 + 1/6.1634 = 6.1622.

Remark.
(a) The Power Method converges rather slowly. Shifting can improve the rate of convergence.
(b) If we have some knowledge of what the eigenvalues of A are, then this method can be used to find any eigenvalue and eigenvector of A.
(c) We can estimate the eigenvalues of A by using Gerschgorin's Theorem.

Theorem 5.3.5 (Gerschgorin's Theorem). Let λ be an eigenvalue of an n × n matrix A. Then for some integer i (1 ≤ i ≤ n), we have

    |λ - a_ii| ≤ r_i = |a_i1| + ... + |a_i,i-1| + |a_i,i+1| + ... + |a_in|.

That is, the eigenvalues of A lie in the union of the n discs with radius r_i centered at a_ii.
Furthermore, if a union of k of these n discs forms a connected region that is disjoint from all the remaining n - k discs, then there are precisely k eigenvalues of A in this region.

Example 27. Draw the Gerschgorin discs corresponding to

    A = [  7   4  -4 ]
        [  4  -8  -1 ]
        [ -4  -1  -8 ]

What can be concluded about the eigenvalues of A?

Answer. The Gerschgorin discs are

    D_1: center 7, radius 8
    D_2 = D_3: center -8, radius 5

(i) The eigenvalues of A must lie in D_1 ∪ D_2.
(ii) A is symmetric, so all the eigenvalues are real numbers.
(iii) Hence, -1 ≤ λ_1 ≤ 15 and -13 ≤ λ_2, λ_3 ≤ -3.

Chapter 6
Optimization

6.1 Direct Search Methods

Direct search methods apply primarily to strictly unimodal 1-variable functions. The idea of these methods is to identify the interval of uncertainty that contains the optimal solution point. The procedure locates the optimum by iteratively narrowing the interval of uncertainty to any desired level of accuracy. We will discuss only the golden section method. This method is used to find the maximum value of a unimodal function f(x) over a given interval [a, b].

Definition 6.1.1. A function f(x) is unimodal on [a, b] if it has exactly one maximum (or minimum) on [a, b].

6.1.1 Golden Section Search (Maximum)

The general steps for this method are as follows:

Let f(x) be a unimodal function over a given interval [a, b] with the optimal point x*. Let (x_L, x_R) be the current interval of uncertainty, with I_0 = [a, b]. Define

    x_1 = x_R - r(x_R - x_L),    x_2 = x_L + r(x_R - x_L),

where r = (√5 - 1)/2, the golden ratio. (Clearly, x_L < x_1 < x_2 < x_R.) The next interval of uncertainty I_k is determined in the following way:
(i) If f(x_1) > f(x_2), then x_L < x* < x_2. Set x_R = x_2 and I_k = [x_L, x_2].
(ii) If f(x_1) < f(x_2), then x_1 < x* < x_R. Set x_L = x_1 and I_k = [x_1, x_R].
(iii) If f(x_1) = f(x_2), then x_1 < x* < x_2. Set x_L = x_1, x_R = x_2, and I_k = [x_1, x_2].

Remark. (a) The choice of x_1 and x_2 ensures that I_k ⊂ I_{k-1}.
(b) Let L_k = ||I_k||, the length of I_k. Then the algorithm terminates at iteration k if L_k < ε, the user-specified level of accuracy.
(c) It can be seen that L_k = rL_{k-1} and L_k = r^k(b - a). Thus, the algorithm will terminate at k iterations where L_k = r^k(b - a) < ε.

Example 28. Find the maximum value of f(x) = 3 + 6x - 4x^2 on the interval [0, 1] using the golden section search method, with the final interval of uncertainty having a length less than 0.25.

Answer. Solving

    r^k(b - a) < 0.25

for the number k of iterations that must be performed, we obtain k > 2.88. Thus 3 iterations of golden section search must be performed.

Iteration 1: x_L = 0, x_R = 1

    ⇒ x_1 = x_R - r(x_R - x_L) = 0.381966, x_2 = x_L + r(x_R - x_L) = 0.618034
    f(x_1) = 4.708204 < f(x_2) = 5.180340 ⇒ take x_L = x_1 = 0.381966 with the same x_R,
    i.e., x_L = 0.381966, x_R = 1.

Iteration 2: x_L = 0.381966, x_R = 1

    ⇒ x_1 = x_R - r(x_R - x_L) = 0.618034, x_2 = x_L + r(x_R - x_L) = 0.763932
    f(x_1) = 5.180340 < f(x_2) = 5.249224 ⇒ take x_L = x_1 = 0.618034 with the same x_R,
    i.e., x_L = 0.618034, x_R = 1.

Iteration 3: x_L = 0.618034, x_R = 1

    ⇒ x_1 = x_R - r(x_R - x_L) = 0.763932, x_2 = x_L + r(x_R - x_L) = 0.854102
    f(x_1) = 5.249224 > f(x_2) = 5.206651 ⇒ take x_R = x_2 = 0.854102 with the same x_L,
    i.e., x_L = 0.618034, x_R = 0.854102.

Thus, I_3 = [0.618034, 0.854102] and L_3 = 0.854102 - 0.618034 = 0.236068 < 0.25 as wanted.
So the required maximum point lies within I_3 = [0.618034, 0.854102].
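Example 28 can be scripted directly. The sketch below (hypothetical function name) keeps halving the interval of uncertainty by the golden ratio until it is shorter than the prescribed tolerance; the tie case f(x1) = f(x2) is folded into the else branch:

    import math

    def golden_max(f, a, b, eps=0.25):
        # Golden Section Search for the maximum of a unimodal f on [a, b].
        r = (math.sqrt(5) - 1) / 2
        while b - a >= eps:
            x1 = b - r * (b - a)
            x2 = a + r * (b - a)
            if f(x1) > f(x2):
                b = x2       # maximum lies in [a, x2]
            else:
                a = x1       # maximum lies in [x1, b]
        return a, b

    print(golden_max(lambda x: 3 + 6 * x - 4 * x * x, 0, 1))
    # -> (0.618034, 0.854102), the interval I_3 of Example 28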

6.1.2 Gradient Method

Review. Recall that the gradient vector of a function f at a point x,

    ∇f(x),

points in the direction of maximum increase (the direction of steepest ascent), and

    -∇f(x)

is the direction of maximum decrease (the direction of steepest descent).

Theorem 6.1.1 (Method of Steepest Ascent / Gradient Method). An algorithm for finding the nearest local maximum of a twice continuously differentiable function f(x), which presupposes that the gradient of the function can be computed. The method of steepest ascent, also called the gradient method, starts at a point x_0 and, as many times as needed, moves from x_k to x_{k+1} by maximizing along the line extending from x_k in the direction of ∇f(x_k), the local uphill gradient. That is, we determine the value of t and the corresponding point

    x_{k+1}(t) = x_k + t∇f(x_k)

at which the function

    g(t) = f(x_{k+1}(t))

has a maximum. We take x_{k+1} as the next approximation after x_k.

Remark: This method has the severe drawback of requiring a great many iterations (hence a slow convergence) for functions which have long, narrow valley structures.

Example 29. Use the method of steepest ascent to determine a maximum of f(x, y) = -x^2 - y^2, starting from the point x_0 = (1, 1).

Answer. ∇f(x, y) = (-2x, -2y).
(i) At x_0 = (1, 1), ∇f = (-2, -2).
The new approximation is

    x_1 = x_0 + t∇f(x_0),

where t is a value that will maximize the function

    g(t) = f(x_0 + t∇f(x_0)) = f(1 - 2t, 1 - 2t) = -(1 - 2t)^2 - (1 - 2t)^2.

Solving g'(t) = 0, we obtain 4(1 - 2t) + 4(1 - 2t) = 0 ⇒ t = 1/2.
Hence x_1 = (1, 1) + (1/2)(-2, -2) = (0, 0).
Now ∇f(x_1) = ∇f(0, 0) = (0, 0), and we terminate the algorithm.
So the maximum point is x_1 = (0, 0), with maximum value f(0, 0) = 0.

Reading Assignment 6.1.2. Starting at the point (0, 0), use one iteration of the steepest-descent algorithm to approximate the minimum value of the function

    f(x, y) = 2(x + y)^2 + (x - y)^2 + 3x + 2y.

Answer.

    f_x = 4(x + y) + 2(x - y) + 3 = 6x + 2y + 3
    f_y = 4(x + y) - 2(x - y) + 2 = 2x + 6y + 2
    ∇f(x, y) = (f_x, f_y) = (6x + 2y + 3, 2x + 6y + 2)

Starting with the initial approximation x_0 = (0, 0), we obtain the next approximation

    x_1 = x_0 + t∇f(x_0) = (0, 0) + t(3, 2) = (3t, 2t),

since ∇f(x_0) = ∇f(0, 0) = (3, 2).
Find t that minimizes

    g(t) = f(x_1) = f(3t, 2t) = 2(5t)^2 + t^2 + 3(3t) + 2(2t) = 51t^2 + 13t.

Solve g'(t) = 102t + 13 = 0 ⇒ t = -13/102 ≈ -0.127.
Hence x_1 = (-0.382, -0.255) and f(x_1) = f(-0.382, -0.255) = -0.828 ≈ minimum value of f.

Chapter 7
Numerical Methods For Ordinary Differential Equations

7.1 First-Order Initial-Value Problems

7.1.1 Euler's Method

Given the initial-value problem

    dy/dx = f(x, y), y(x_0) = y_0,

Euler's method with step size h consists in using the iterative formula

    y_{n+1} = y_n + hf(x_n, y_n)

to approximate the solution y(x) of the IVP at the mesh points

    x_{n+1} = x_n + h = x_0 + (n + 1)h, n = 0, 1, 2, ....

Note 6 (Truncation error in Euler's Method):
(a) As the truncation is performed after the first term of the Taylor series of y(x), Euler's method is a first-order method with TE = O(h^2). This error is a local error, as it applies at each and every step as the solution develops.
(b) The global error, which applies to the final solution, is O(h), since the number of steps would be proportional to 1/h.

Example 30. Consider the IVP

    y' = x + y, y(0) = 1.

(a) Use Euler's method to obtain a five-decimal approximation to y(0.5) using the step size h = 0.1.
(b) (i) Estimate the truncation error in your approximation to y(0.1) using the next two terms in the corresponding Taylor series.
    (ii) The exact value of y(0.1) is 1.11034 (to 5 decimal places). Calculate the error between the actual value y(0.1) and your approximation y_1. How does this error compare with the truncation error you obtained in (i)?
(c) The exact value of y(0.5) is 1.79744 (to 5 decimal places). Calculate the absolute error between the actual value y(0.5) and your approximation y_5.

Answer. Here f(x, y) = x + y, x_0 = 0 and y_0 = 1, so that Euler's scheme becomes

    y_{n+1} = y_n + 0.1(x_n + y_n) = 0.1x_n + 1.1y_n.

(a) (i) y_1 = 0.1x_0 + 1.1y_0 = 0.1(0) + 1.1(1) = 1.1
    (ii) y_2 = 0.1x_1 + 1.1y_1 = 0.1(0.1) + 1.1(1.1) = 1.22
    (iii) Rounding values to 5 decimal places, the remaining values obtained in this manner are y_3 = 1.36200, y_4 = 1.52820, y_5 = 1.72102.
    (iv) Therefore y(0.5) ≈ y_5 = 1.72102.
(b) (i) The truncation error is 0.01033.
    (ii) The actual error is 0.01034.
(c) 0.07642
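Part (a) takes only a few lines of Python (a sketch; the function name is my own):

    def euler(f, x0, y0, h, steps):
        # Euler's method: y_{n+1} = y_n + h f(x_n, y_n).
        x, y = x0, y0
        for _ in range(steps):
            y = y + h * f(x, y)
            x = x + h
        return y

    print(euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 5))  # 1.72102 ~ y(0.5)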

Example 31. Given that the exact solution of the initial-value problem

    y' = x + y, y(0) = 1

is y(x) = 2e^x - x - 1, the following table shows (a) the approximate values obtained by using Euler's method with step size h = 0.1, (b) the approximate values with h = 0.05, and the actual values of the solution at the points x = 0.1, 0.2, 0.3, 0.4, 0.5:

    x_n | y_n (h = 0.1) | y_n (h = 0.05) | exact value | abs. error (h = 0.1) | abs. error (h = 0.05)
    0.0 | 1.00000       | 1.00000        | 1.00000     | 0.00000              | 0.00000
    0.1 | 1.10000       | 1.10500        | 1.11034     | 0.01034              | 0.00534
    0.2 | 1.22000       | 1.23101        | 1.24281     | 0.02281              | 0.01180
    0.3 | 1.36200       | 1.38019        | 1.39972     | 0.03772              | 0.01953
    0.4 | 1.52820       | 1.55491        | 1.58365     | 0.05545              | 0.02874
    0.5 | 1.72102       | 1.75779        | 1.79744     | 0.07642              | 0.03965

Based on the above information, give some comments.

Answer. Comments:
(a) By comparing the absolute errors, we see that Euler's method is more accurate for the smaller h.
(b) The column of data for h = 0.1 requires only 5 steps, whereas 10 steps are required to reach x = 0.5 with h = 0.05. In general, more calculations are required for smaller h. As a consequence, a larger round-off error is expected.
(c) The global error for Euler's method is O(h). It follows that if the step size h is halved, this error should be approximately halved as well. This can be seen from the above table: when the step size is halved from h = 0.1 to h = 0.05, the global error at x = 0.5 is reduced from 0.07642 to 0.03965, a reduction of approximately one half.

7.1.2 Heun's Method / Improved Euler's Method

In each step of this method, we compute first the auxiliary value

    y*_{n+1} = y_n + hf(x_n, y_n)    (1)

and then the new value

    y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y*_{n+1})].    (2)

Note 7:
(a) This is an example of a predictor-corrector method: it uses (1) to predict a value of y(x_{n+1}) and then uses (2) to correct this value.
(b) The local TE for Heun's method is O(h^3).
(c) The global TE for Heun's method is O(h^2).

Example 32. Use the Improved Euler's method with step size h = 0.1 to approximate the solution of the IVP

    y' = x + y, y(0) = 1

on the interval [0, 1]. The exact solution is y(x) = 2e^x - x - 1. Make a table showing the approximate values and the actual values together with the absolute errors.

Answer. Here f(x, y) = x + y, x_0 = 0 and y_0 = 1, so that

(a) y*_{n+1} = y_n + 0.1(x_n + y_n) = 0.1x_n + 1.1y_n
    y_{n+1} = y_n + 0.05[(x_n + y_n) + (x_{n+1} + y*_{n+1})]
(b) y_1* = 0.1x_0 + 1.1y_0 = 0.1(0) + 1.1(1) = 1.1
    y_1 = y_0 + 0.05[(x_0 + y_0) + (x_1 + y_1*)] = 1 + 0.05[1 + (0.1 + 1.1)] = 1.11
(c) y_2* = 1.11 + 0.1(0.1 + 1.11) = 1.231
    y_2 = 1.11 + 0.05[(0.1 + 1.11) + (0.2 + 1.231)] = 1.24205

The remaining calculations are summarized in the following table:

    n  | x_n | y_n     | exact   | absolute error
    0  | 0.0 | 1.00000 | 1.00000 | 0.00000
    1  | 0.1 | 1.11000 | 1.11034 | 0.00034
    2  | 0.2 | 1.24205 | 1.24281 | 0.00076
    3  | 0.3 | 1.39847 | 1.39972 | 0.00125
    4  | 0.4 | 1.58181 | 1.58365 | 0.00184
    5  | 0.5 | 1.79490 | 1.79744 | 0.00254
    6  | 0.6 | 2.04086 | 2.04424 | 0.00338
    7  | 0.7 | 2.32315 | 2.32751 | 0.00436
    8  | 0.8 | 2.64558 | 2.65108 | 0.00550
    9  | 0.9 | 3.01237 | 3.01921 | 0.00684
    10 | 1.0 | 3.42817 | 3.43656 | 0.00839

Remark.
(a) Considerable improvement in accuracy over Euler's method.
(b) For the case h = 0.1, the TE in the approximation to y(0.1) is approximately (h^3/3!)y'''(0) = (0.1)^3(2)/6 ≈ 0.00033, very close to the actual error 0.00034.

7.1.3 Taylor Series Method of Order p

    y_{n+1} = y_n + hy'_n + (h^2/2!)y''_n + (h^3/3!)y_n^(3) + ... + (h^p/p!)y_n^(p)

Example 33. Consider the IVP

    y' = 2xy, y(1) = 1.

Use the Taylor series method of order 2 to approximate y(1.2) using the step size h = 0.1.

Answer. Here f(x, y) = 2xy, x_0 = 1, and y_0 = 1. The method is

    y_{n+1} = y_n + hy'_n + (h^2/2!)y''_n = y_n + 0.1y'_n + 0.005y''_n, n = 0, 1, 2, ...,

with x_{n+1} = x_n + 0.1.

(a) When n = 0: x_0 = 1.
    (i) y' = 2xy ⇒ y'_0 = 2x_0y_0 = 2(1)(1) = 2
    (ii) y'' = 2y + 2xy' ⇒ y''_0 = 2y_0 + 2x_0y'_0 = 2(1) + 2(1)(2) = 6
    ∴ y_1 = y_0 + 0.1y'_0 + 0.005y''_0 = 1 + 0.1(2) + 0.005(6) = 1.23
(b) When n = 1: x_1 = x_0 + 0.1 = 1.1.
    (i) y' = 2xy ⇒ y'_1 = 2x_1y_1 = 2(1.1)(1.23) = 2.706
    (ii) y'' = 2y + 2xy' ⇒ y''_1 = 2y_1 + 2x_1y'_1 = 2(1.23) + 2(1.1)(2.706) = 8.4132
    ∴ y_2 = y_1 + 0.1y'_1 + 0.005y''_1 = 1.23 + 0.1(2.706) + 0.005(8.4132) = 1.542666 ≈ y(x_2) = y(1.2)

7.1.4 Runge-Kutta Method of Order p

(a) These methods are all of the form

    y_{n+1} = y_n + (a weighted average of slope estimates k_i), where the weights sum to 1.

(b) The Fourth-Order Runge-Kutta Method is

    y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

where

    k_1 = hf(x_n, y_n)
    k_2 = hf(x_n + h/2, y_n + k_1/2)
    k_3 = hf(x_n + h/2, y_n + k_2/2)
    k_4 = hf(x_n + h, y_n + k_3)

This method is a 4th-order method, so its global TE is O(h^4).

Example 34. Consider y' = x + y, y(0) = 1. Use the Runge-Kutta method of order 4 to obtain an approximation to y(0.2) using the step size h = 0.1.

Answer. Here f(x, y) = x + y, x_0 = 0, y_0 = 1, h = 0.1, x_{n+1} = x_n + 0.1, and

    k_1 = 0.1(x_n + y_n)
    k_2 = 0.1(x_n + 0.05 + y_n + k_1/2)
    k_3 = 0.1(x_n + 0.05 + y_n + k_2/2)
    k_4 = 0.1(x_n + 0.1 + y_n + k_3)

(a) For y_1:
    (i) k_1 = 0.1(x_0 + y_0) = 0.1
    (ii) k_2 = 0.1(x_0 + 0.05 + y_0 + k_1/2) = 0.1(0 + 0.05 + 1 + 0.05) = 0.11
    (iii) k_3 = 0.1(x_0 + 0.05 + y_0 + k_2/2) = 0.1(0 + 0.05 + 1 + 0.055) = 0.1105
    (iv) k_4 = 0.1(x_0 + 0.1 + y_0 + k_3) = 0.1(0 + 0.1 + 1 + 0.1105) = 0.12105
    ∴ y_1 = y_0 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1.110341667
(b) For y_2:
    (i) k_1 = 0.1(x_1 + y_1) = 0.1(0.1 + 1.110341667) = 0.121034166
    (ii) k_2 = 0.1(x_1 + 0.05 + y_1 + k_1/2) = 0.132085875
    (iii) k_3 = 0.1(x_1 + 0.05 + y_1 + k_2/2) = 0.132638460
    (iv) k_4 = 0.1(x_1 + 0.1 + y_1 + k_3) = 0.144298012
    ∴ y_2 = y_1 + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) = 1.242805142 ≈ y(x_2) = y(0.2)
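The classical fourth-order scheme in code (a Python sketch with my own naming) reproduces Example 34:

    def rk4(f, x0, y0, h, steps):
        # Classical fourth-order Runge-Kutta method.
        x, y = x0, y0
        for _ in range(steps):
            k1 = h * f(x, y)
            k2 = h * f(x + h / 2, y + k1 / 2)
            k3 = h * f(x + h / 2, y + k2 / 2)
            k4 = h * f(x + h, y + k3)
            y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
            x = x + h
        return y

    print(rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 2))  # 1.242805142 ~ y(0.2)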

7.1.5 Multi-step Methods

The methods of Euler, Heun, and RK are called one-step methods because the approximation at the mesh point x_{n+1} involves information from only the previous mesh point, x_n. That is, only the initial point (x_0, y_0) is used to compute (x_1, y_1), and in general, y_n is needed to compute y_{n+1}.
Methods which use more than one previous mesh point to find the next approximation are called multi-step methods.

7.1.6 Adams-Bashforth/Adams-Moulton Method

This method uses the Adams-Bashforth formula

    y*_{n+1} = y_n + (h/24)(55f_n - 59f_{n-1} + 37f_{n-2} - 9f_{n-3}), n ≥ 3,

where f_j = f(x_j, y_j), as a predictor, and the Adams-Moulton formula

    y_{n+1} = y_n + (h/24)(9f*_{n+1} + 19f_n - 5f_{n-1} + f_{n-2}),

where f*_{n+1} = f(x_{n+1}, y*_{n+1}), as a corrector.

Note 8:
(a) This is an order-4 method with global TE = O(h^4).
(b) It is not self-starting. To get started, y_1, y_2, y_3 must be computed by using a method of the same accuracy or better, such as the Runge-Kutta order 4 method.

Example 35. Use the Adams-Bashforth/Adams-Moulton Method with h = 0.2 to obtain an approximation to y(0.8) for the solution of

    y' = x + y, y(0) = 1.

Answer. With h = 0.2, y(0.8) ≈ y_4. We use the RK4 method as the starter method, with x_0 = 0, y_0 = 1, to obtain

(a) y_1 = 1.242800000
(b) y_2 = 1.583635920
(c) y_3 = 2.044212913
(d) f_n = f(x_n, y_n) = x_n + y_n
    ⇒ f_0 = x_0 + y_0 = 0 + 1 = 1
    ⇒ f_1 = x_1 + y_1 = 0.2 + 1.242800000 = 1.442800000
    ⇒ f_2 = 0.4 + 1.583635920 = 1.983635920
    ⇒ f_3 = 0.6 + 2.044212913 = 2.644212913
(e) y_4* = y_3 + (h/24)(55f_3 - 59f_2 + 37f_1 - 9f_0)
         = 2.044212913 + (0.2/24)(55(2.644212913) - 59(1.983635920) + 37(1.442800000) - 9(1))
         = 2.650719504
(f) f_4* = f(x_4, y_4*) = x_4 + y_4* = 0.8 + 2.650719504 = 3.450719504
(g) y_4 = y_3 + (h/24)(9f_4* + 19f_3 - 5f_2 + f_1) = 2.651055757 ≈ y(0.8)
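The predictor-corrector pair, started with RK4, looks as follows in Python (a sketch reusing the rk4 function from the previous listing; all names are my own):

    def abm4(f, x0, y0, h, steps):
        # Adams-Bashforth predictor / Adams-Moulton corrector, RK4-started.
        xs, ys = [x0], [y0]
        for _ in range(3):                       # starter values y1, y2, y3
            ys.append(rk4(f, xs[-1], ys[-1], h, 1))
            xs.append(xs[-1] + h)
        fs = [f(x, y) for x, y in zip(xs, ys)]
        for n in range(3, steps):
            yp = ys[n] + h / 24 * (55 * fs[n] - 59 * fs[n - 1]
                                   + 37 * fs[n - 2] - 9 * fs[n - 3])   # predictor
            x1 = xs[n] + h
            yc = ys[n] + h / 24 * (9 * f(x1, yp) + 19 * fs[n]
                                   - 5 * fs[n - 1] + fs[n - 2])        # corrector
            xs.append(x1); ys.append(yc); fs.append(f(x1, yc))
        return ys[-1]

    print(abm4(lambda x, y: x + y, 0.0, 1.0, 0.2, 4))  # 2.651056 ~ y(0.8)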

7.1.7 Summary: Orders Of Errors For Different Methods

    Method                        | Order | Local TE | Global TE
    Euler                         | 1     | O(h^2)   | O(h)
    Heun                          | 2     | O(h^3)   | O(h^2)
    Second-Order Taylor Series    | 2     | O(h^3)   | O(h^2)
    Third-Order Taylor Series     | 3     | O(h^4)   | O(h^3)
    Fourth-Order Runge-Kutta      | 4     | O(h^5)   | O(h^4)
    Adams-Bashforth/Adams-Moulton | 4     | O(h^5)   | O(h^4)

7.2 Higher-Order Initial-Value Problems

The methods used to solve first-order IVPs can be applied to higher-order IVPs, because a higher-order IVP can be replaced by a system of first-order IVPs. For example, the second-order initial-value problem

    y'' = f(x, y, y'), y(x_0) = y_0, y'(x_0) = y'_0

can be decomposed into a system of two first-order initial-value problems by using the substitution u = y':

    y' = u,          y(x_0) = y_0
    u' = f(x, y, u), u(x_0) = y'_0

Each equation can be solved by the numerical techniques presented earlier. For example, the Euler method for this system would be

    y_{n+1} = y_n + hu_n
    u_{n+1} = u_n + hf(x_n, y_n, u_n)

Remark. The Euler method for a general system of two first-order differential equations,

    y' = f(x, y, u), y(x_0) = y_0
    u' = g(x, y, u), u(x_0) = u_0,

is given as follows:

    y_{n+1} = y_n + hf(x_n, y_n, u_n)
    u_{n+1} = u_n + hg(x_n, y_n, u_n)

Reading Assignment 7.2.1. Use the Euler method to approximate y(0.2) and y'(0.2) in two steps, where y(x) is the solution of the IVP

y'' + xy' + y = 0, y(0) = 1, y'(0) = 2.

Answer. Let y' = u. Then the equation is equivalent to the system

y' = u
u' = -xu - y

Using the step size h = 0.1, the Euler method is given by

y_{n+1} = y_n + 0.1u_n
u_{n+1} = u_n + 0.1(-x_n u_n - y_n)

With x_0 = 0, y_0 = 1, u_0 = 2, we get

(a) y_1 = y_0 + 0.1u_0 = 1 + 0.1(2) = 1.2
    u_1 = u_0 + 0.1(-x_0 u_0 - y_0) = 2 + 0.1[-(0)(2) - 1] = 1.9

(b) y_2 = y_1 + 0.1u_1 = 1.2 + 0.1(1.9) = 1.39
    u_2 = u_1 + 0.1(-x_1 u_1 - y_1) = 1.9 + 0.1[-(0.1)(1.9) - 1.2] = 1.761

That is, y(0.2) ≈ y_2 = 1.39 and y'(0.2) ≈ u_2 = 1.761.
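The two-step hand computation above maps directly onto a few lines of Python. This is a minimal sketch (the helper name euler_system is ours) applying Euler's method to the system y' = u, u' = -xu - y:

```python
def euler_system(f, g, x0, y0, u0, h, n_steps):
    # Euler's method for the system y' = f(x, y, u), u' = g(x, y, u)
    x, y, u = x0, y0, u0
    for _ in range(n_steps):
        # update y and u simultaneously from the old values
        y, u = y + h * f(x, y, u), u + h * g(x, y, u)
        x += h
    return y, u

# y'' + x y' + y = 0 with y(0) = 1, y'(0) = 2  ->  y' = u, u' = -x*u - y
y, u = euler_system(lambda x, y, u: u,
                    lambda x, y, u: -x*u - y,
                    0.0, 1.0, 2.0, 0.1, 2)
print(y, u)   # 1.39 and 1.761, as in the worked answer
```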

7.2.1 The Linear Shooting Method

Consider the linear second order BVP

y'' = p(x)y' + q(x)y + r(x), a ≤ x ≤ b

with boundary conditions y(a) = α, y(b) = β. The shooting method replaces the BVP by two IVPs in the following way:

(a) Let y_1 be the solution of the IVP

y'' = p(x)y' + q(x)y + r(x), y(a) = α, y'(a) = 0.

Replace this second-order IVP by a system of two first-order IVPs. Then solve each of the two first-order IVPs using, say, the fourth-order Runge-Kutta method to obtain y_1.

(b) Let y_2 be the solution of the IVP

y'' = p(x)y' + q(x)y, y(a) = 0, y'(a) = 1.

Find y_2 as in Step (a).

(c) Find c_1 and c_2 so that y = c_1 y_1 + c_2 y_2 is the solution of the BVP:

(i) p(x)y' + q(x)y + r(x) = p(c_1 y_1' + c_2 y_2') + q(c_1 y_1 + c_2 y_2) + r = c_1(py_1' + qy_1) + c_2(py_2' + qy_2) + r

(ii) y'' = c_1 y_1'' + c_2 y_2'' = c_1(py_1' + qy_1 + r) + c_2(py_2' + qy_2) = c_1(py_1' + qy_1) + c_2(py_2' + qy_2) + c_1 r

(iii) (i) = (ii) ⇒ r = c_1 r ⇒ c_1 = 1

(iv) y(a) = c_1 y_1(a) + c_2 y_2(a) = 1·y_1(a) + c_2·0 = α ⇒ y(a) = α

(v) Choose c_2 so that y(b) = β:

y(b) = c_1 y_1(b) + c_2 y_2(b) = y_1(b) + c_2 y_2(b) = β ⇒ c_2 = (β - y_1(b))/y_2(b), if y_2(b) ≠ 0

(d) Therefore

y(x) = y_1(x) + [(β - y_1(b))/y_2(b)] y_2(x)

is the solution to the BVP, provided y_2(b) ≠ 0.

Example 36. Use the shooting method (together with the fourth-order Runge-Kutta method with h = 1/3) to solve the BVP

y'' = 4(y - x), 0 ≤ x ≤ 1, y(0) = 0, y(1) = 2.

Answer.

(a) y'' = 4(y - x), y(0) = 0, y'(0) = 0 ⇒
    y' = u, y(0) = 0   [1]
    u' = 4(y - x), u(0) = 0   [2]
(i) Use RK4 to solve [1] and [2]: we obtain
    y_1(1/3) = -0.02469136, y_1(2/3) = -0.21439821 and y_1(1) = -0.80973178

(b) y'' = 4y, y(0) = 0, y'(0) = 1 ⇒
    y' = u, y(0) = 0   [3]
    u' = 4y, u(0) = 1   [4]
(i) Use RK4 to solve [3] and [4]: we obtain
    y_2(1/3) = 0.35802469, y_2(2/3) = 0.88106488 and y_2(1) = 1.80973178

(c) y(x) = y_1(x) + [(2 - y_1(1))/y_2(1)] y_2(x) = y_1(x) + [(2 + 0.80973178)/1.80973178] y_2(x)

⇒ y(1/3) = 0.53116634, y(2/3) = 1.15351499, y(1) = 2
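A sketch of the same computation in Python, assuming a small RK4 driver for first-order systems (the helper rk4_system is ours, not part of the notes):

```python
def rk4_system(F, x0, Y0, h, n_steps):
    # RK4 for a first-order system Y' = F(x, Y); Y is a list of floats
    x, Y = x0, list(Y0)
    for _ in range(n_steps):
        k1 = F(x, Y)
        k2 = F(x + h/2, [y + h/2*k for y, k in zip(Y, k1)])
        k3 = F(x + h/2, [y + h/2*k for y, k in zip(Y, k2)])
        k4 = F(x + h,   [y + h*k   for y, k in zip(Y, k3)])
        Y = [y + h/6*(a + 2*b + 2*c + d)
             for y, a, b, c, d in zip(Y, k1, k2, k3, k4)]
        x += h
    return Y

h, n = 1/3, 3
# IVP (a): y'' = 4(y - x), y(0) = 0, y'(0) = 0
y1b, _ = rk4_system(lambda x, Y: [Y[1], 4*(Y[0] - x)], 0.0, [0.0, 0.0], h, n)
# IVP (b): y'' = 4y, y(0) = 0, y'(0) = 1
y2b, _ = rk4_system(lambda x, Y: [Y[1], 4*Y[0]], 0.0, [0.0, 1.0], h, n)
c2 = (2 - y1b) / y2b       # enforce the right boundary condition y(1) = 2
print(y1b, y2b, c2)        # then y(x) = y1(x) + c2*y2(x)
```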


Note 9:
An advantage of the shooting method is that existing programs for initial value problems may be used.
However, the shooting method is sensitive to round-off errors, and it becomes rather difficult to use when there are more than two boundary conditions. For these reasons, we may want to use alternative methods. This brings us to the next topic.


7.2.2 Finite Difference Method

Consider a second order BVP

y'' + P(x)y' + Q(x)y = f(x), y(a) = α, y(b) = β.

Suppose a = x_0 < x_1 < ... < x_{n-1} < x_n = b with x_i - x_{i-1} = h for all i = 1, 2, ..., n. Let y_i = y(x_i), P_i = P(x_i), Q_i = Q(x_i), and f_i = f(x_i). Then by replacing y' and y'' with their central difference approximations in the BVP, we get

(y_{i+1} - 2y_i + y_{i-1})/h^2 + P_i (y_{i+1} - y_{i-1})/(2h) + Q_i y_i = f_i, i = 1, 2, ..., n-1,

or, after simplifying,

(1 + (h/2)P_i)y_{i+1} + (h^2 Q_i - 2)y_i + (1 - (h/2)P_i)y_{i-1} = h^2 f_i.

The last equation, known as a finite difference equation, is an approximation to the DE. It enables us to approximate the solution at x_1, ..., x_{n-1}.

Example 37. Solving BVPs Using the Finite Difference Method

Use the Finite Difference Method with h = 1 to approximate the solution of the BVP

y'' - (1 - x/5)y = x, y(1) = 2, y(3) = -1.

Answer. Here P_i = 0, Q_i = -1 + x_i/5, and f_i = x_i. Hence, the difference equation is

y_{i+1} + (-3 + x_i/5)y_i + y_{i-1} = x_i.

With i = 1, x_1 = 2 and the boundary conditions y_0 = 2 and y_2 = -1, solving the above equation gives y_1 = -0.3846.

Notes: We can improve the accuracy by using a smaller h. But for that we have to pay a price, i.e. we have to solve a larger system of equations.
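For larger n the finite difference equations form a tridiagonal linear system. A minimal Python sketch (using numpy's dense solver for brevity; a dedicated tridiagonal solver would be more efficient, and the function name fd_bvp is ours) that reproduces Example 37:

```python
import numpy as np

def fd_bvp(P, Q, f, a, b, alpha, beta, n):
    # set up and solve the finite difference equations
    # (1 + h/2 P_i) y_{i+1} + (h^2 Q_i - 2) y_i + (1 - h/2 P_i) y_{i-1} = h^2 f_i
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = np.array([h**2 * f(xi) for xi in x[1:-1]])
    for i in range(1, n):
        A[i-1, i-1] = h**2 * Q(x[i]) - 2
        if i > 1:
            A[i-1, i-2] = 1 - h/2 * P(x[i])
        if i < n - 1:
            A[i-1, i] = 1 + h/2 * P(x[i])
    # fold the known boundary values into the right-hand side
    rhs[0] -= (1 - h/2 * P(x[1])) * alpha
    rhs[-1] -= (1 + h/2 * P(x[n-1])) * beta
    return np.linalg.solve(A, rhs)

# Example 37: y'' - (1 - x/5) y = x, y(1) = 2, y(3) = -1, h = 1
print(fd_bvp(lambda x: 0.0, lambda x: -1 + x/5, lambda x: x,
             1.0, 3.0, 2.0, -1.0, n=2))   # [-0.3846...]
```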


Chapter 8
Numerical Methods For Partial
Differential Equations
8.1

Second Order Linear Partial Differential Equations

Definition 8.1.1 (Three Basic Types of Second Order Linear Equations). The linear PDE

Au_xx + Bu_xy + Cu_yy + Du_x + Eu_y + Fu = G

(where A, B, C, D, E, F, G are given functions of x and y) is called

(a) parabolic if B^2 - 4AC = 0. Parabolic equations often describe heat flow and diffusion phenomena, such as heat flow through the earth's surface.

(b) hyperbolic if B^2 - 4AC > 0. Hyperbolic equations often describe wave motion and vibrating phenomena, such as violin strings and drum heads.

(c) elliptic if B^2 - 4AC < 0. Elliptic equations are often used to describe steady state phenomena and thus do not depend on time. Elliptic equations are important in the study of electricity and magnetism.

Example 38. Some classical examples of PDEs.

(a) The 1-D wave equation u_tt = c^2 u_xx is a hyperbolic equation since
B^2 - 4AC = 0^2 - 4(c^2)(-1) = 4c^2 > 0.

(b) The 1-D heat equation u_t = c^2 u_xx is a parabolic equation since
B^2 - 4AC = 0^2 - 4(c^2)(0) = 0.

(c) The 2-D Laplace equation u_xx + u_yy = 0 is an elliptic equation since
B^2 - 4AC = 0^2 - 4(1)(1) = -4 < 0.


8.2 Numerical Approximation To Derivatives: 1-Variable Functions

We replace derivatives by their corresponding difference quotients based on the Taylor series:

(i) u(x+h) = u(x) + hu'(x) + (1/2!)h^2 u''(x) + (1/3!)h^3 u'''(x) + ...

(ii) u(x-h) = u(x) - hu'(x) + (1/2!)h^2 u''(x) - (1/3!)h^3 u'''(x) + ...

(iii) (i) ⇒ u'(x) = [u(x+h) - u(x)]/h - (1/2!)hu''(x) - ...
i.e. u'(x) = [u(x+h) - u(x)]/h + O(h) (the forward difference formula for u')

(iv) (ii) ⇒ u'(x) = [u(x) - u(x-h)]/h + O(h) (the backward difference formula for u')

(v) (i) - (ii) ⇒ u'(x) = [u(x+h) - u(x-h)]/(2h) + O(h^2) (the central difference formula for u')

(vi) (i) + (ii) ⇒ u''(x) = [u(x+h) - 2u(x) + u(x-h)]/h^2 + O(h^2) (the central difference formula for u'')
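A quick numerical check of these orders, as a sketch using the test function u(x) = sin x, for which u'(1) = cos 1 is known exactly:

```python
import math

u, du = math.sin, math.cos(1.0)   # test function u = sin x, exact u'(1)
for h in (0.1, 0.05, 0.025):
    fwd = (u(1 + h) - u(1)) / h             # forward difference, O(h)
    ctr = (u(1 + h) - u(1 - h)) / (2 * h)   # central difference, O(h^2)
    print(h, abs(fwd - du), abs(ctr - du))
# halving h roughly halves the forward error but quarters the central error
```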

8.3 Numerical Approximation To Derivatives: 2-Variable Functions

Consider a function u(x, y) over a 2-dimensional grid with Δx = h and Δy = k. At a typical point P(x, y), where x = ih and y = jk, write u_P = u(ih, jk) = u_{i,j}. Then

∂u/∂x at P = (u_{i+1,j} - u_{i,j})/h + O(h), the FD for u_x at P

∂u/∂y at P = (u_{i,j+1} - u_{i,j})/k + O(k), the FD for u_y at P

∂u/∂x at P = (u_{i+1,j} - u_{i-1,j})/(2h) + O(h^2), the CD for u_x at P

∂²u/∂x² at P = (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h^2 + O(h^2), the CD for u_xx at P

8.4 Methods for Parabolic Equations

8.4.1 FTCS Explicit Scheme

Example 39. Using FD for u_t and CD for u_xx, find the finite difference approximation for the 1-D heat equation

∂u/∂t = ∂²u/∂x², 0 < x < a, t > 0.

Answer.
Let u_{i,j} = u(ih, jk) = u(x_i, t_j). Then the finite difference approximation is

(u_{i,j+1} - u_{i,j})/k = (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h^2, i = 1, ..., M-1, j = 0, ..., N.

Solving for u_{i,j+1}, we obtain

u_{i,j+1} = (1 - 2r)u_{i,j} + r(u_{i+1,j} + u_{i-1,j})

where r = k/h^2.

Note 10: We use the above formula to estimate the values of u at time level j+1 using the values at level j. This is known as the FTCS explicit finite difference method.

Example 40. Consider the one-dimensional heat equation

∂u/∂t = ∂²u/∂x², t ≥ 0

subject to the initial condition

u(x, 0) = sin πx, 0 ≤ x ≤ 1

and boundary conditions

u(0, t) = u(1, t) = 0, t ≥ 0.

Using an x step of h = 0.1 and a t step of k = 0.0005, use the FTCS scheme to estimate u(0.1, 0.001).

Answer.
r = k/h^2 = 0.0005/(0.1)^2 = 0.05

(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{10,j} = 0

(b) u(x, 0) = sin πx ⇒ u_{i,0} = sin 0.1πi

(c) u_{i,j+1} = (1 - 2r)u_{i,j} + r(u_{i+1,j} + u_{i-1,j})
    u_{i,j+1} = 0.9u_{i,j} + 0.05(u_{i+1,j} + u_{i-1,j})

(d) j = 0: u_{i,1} = 0.9u_{i,0} + 0.05(u_{i+1,0} + u_{i-1,0})

(i) i = 1: u_{1,1} = 0.9u_{1,0} + 0.05(u_{2,0} + u_{0,0}) = 0.3075
since u_{i,0} = sin 0.1πi ⇒ u_{0,0} = 0, u_{1,0} = sin 0.1π = 0.3090, u_{2,0} = sin 0.2π = 0.5878

(ii) i = 2: u_{2,1} = 0.9u_{2,0} + 0.05(u_{3,0} + u_{1,0}) = 0.5849
since u_{3,0} = sin 0.3π = 0.8090

(e) j = 1: u_{i,2} = 0.9u_{i,1} + 0.05(u_{i+1,1} + u_{i-1,1})

(i) i = 1: u_{1,2} = 0.9u_{1,1} + 0.05(u_{2,1} + u_{0,1}) = 0.3060 ≈ u(0.1, 0.001).
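The FTCS update is a single sweep per time level. A minimal Python sketch (the helper name ftcs_heat is ours) reproducing this example:

```python
import math

def ftcs_heat(f, h, k, n_steps):
    # FTCS explicit scheme for u_t = u_xx on 0 <= x <= 1, u(0,t) = u(1,t) = 0;
    # stability requires r = k/h^2 <= 1/2
    M = round(1 / h)
    r = k / h**2
    u = [f(i * h) for i in range(M + 1)]        # initial level j = 0
    for _ in range(n_steps):
        u = [0.0] + [(1 - 2*r)*u[i] + r*(u[i+1] + u[i-1])
                     for i in range(1, M)] + [0.0]
    return u

u = ftcs_heat(lambda x: math.sin(math.pi * x), h=0.1, k=0.0005, n_steps=2)
print(u[1])   # about 0.3060 = u(0.1, 0.001), as in Example 40
```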
Definition 8.4.1 (Stability of Numerical Methods). A method is unstable if round-off errors or any other errors grow too rapidly as the computations proceed.

Note 11: The FTCS scheme is stable for a given space step h provided the time step k is restricted by the condition k ≤ h^2/2 (i.e. r = k/h^2 ≤ 1/2). The restriction recommends that we do not move too fast in the t-direction.

Note 12: Suppose there is an initial condition u(x, 0) = f(x). Then the scheme starts with (i, j) = (0, 0), the left-hand corner of the 2-D grid. Along the horizontal line j = 0, where t = 0, we have u_{i,0} = u(x_i, 0) = f(x_i).

Note 13: Suppose we include the boundary conditions u(0, t) = 0, u(a, t) = 0. Then we have u_{0,j} = u_{M,j} = 0, j > 0.

8.4.2 Crank-Nicolson Method

Example 41 (Crank-Nicolson Implicit Scheme for the Heat Equation). This CNI scheme replaces u_xx by an average of two CD quotients, one at the time level j and another at j+1:

(u_{i,j+1} - u_{i,j})/k + O(k) = (1/2)[(u_{i+1,j+1} - 2u_{i,j+1} + u_{i-1,j+1})/h^2 + O(h^2) + (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h^2 + O(h^2)]

After simplifying, we obtain

-ru_{i-1,j+1} + 2(1 + r)u_{i,j+1} - ru_{i+1,j+1} = ru_{i-1,j} + 2(1 - r)u_{i,j} + ru_{i+1,j} + kO(k, h^2)

where r = k/h^2.

Note 14: For each time level j, we obtain an (M - 1) × (M - 1) tridiagonal system which can be solved using iterative methods.

Note 15: The CNI scheme has no stability restriction. It is more accurate than the explicit scheme.


Example 42. Consider the one-dimensional heat equation

∂u/∂t = ∂²u/∂x², t ≥ 0

subject to the initial condition

u(x, 0) = sin πx, 0 ≤ x ≤ 1

and boundary conditions

u(0, t) = u(1, t) = 0, t ≥ 0.

Using an x step of h = 0.2 and a t step of k = 0.001, use the Crank-Nicolson method to estimate u(0.4, 0.001).

Answer.
Since the initial temperature distribution is symmetric with respect to x = 0.5, we only need to consider the grid points over 0 ≤ x ≤ 0.5.

u(0.4, 0.001) ≈ u_{2,1}

r = k/h^2 = 0.001/(0.2)^2 = 0.025

(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{5,j} = 0

(b) u(x, 0) = sin πx ⇒ u_{i,0} = sin 0.2πi

(c) -ru_{i-1,j+1} + 2(1 + r)u_{i,j+1} - ru_{i+1,j+1} = ru_{i-1,j} + 2(1 - r)u_{i,j} + ru_{i+1,j}
    -0.025u_{i-1,j+1} + 2.05u_{i,j+1} - 0.025u_{i+1,j+1} = 0.025u_{i-1,j} + 1.95u_{i,j} + 0.025u_{i+1,j}

(d) j = 0: -0.025u_{i-1,1} + 2.05u_{i,1} - 0.025u_{i+1,1} = 0.025u_{i-1,0} + 1.95u_{i,0} + 0.025u_{i+1,0}

(i) i = 1: -0.025u_{0,1} + 2.05u_{1,1} - 0.025u_{2,1} = 0.025u_{0,0} + 1.95u_{1,0} + 0.025u_{2,0}
since u_{0,j} = 0 and u_{i,0} = sin 0.2πi
⇒ u_{0,0} = 0, u_{1,0} = sin 0.2π = 0.58778525, u_{2,0} = sin 0.4π = 0.95105652
2.05u_{1,1} - 0.025u_{2,1} = 1.95(0.58778525) + 0.025(0.95105652)
2.05u_{1,1} - 0.025u_{2,1} = 1.1699576505   (A)

(ii) i = 2: -0.025u_{1,1} + 2.05u_{2,1} - 0.025u_{3,1} = 0.025u_{1,0} + 1.95u_{2,0} + 0.025u_{3,0}
since u_{3,0} = sin 0.6π = 0.95105652
-0.025u_{1,1} + 2.05u_{2,1} - 0.025u_{3,1} = 0.025(0.58778525) + 1.95(0.95105652) + 0.025(0.95105652)
-0.025u_{1,1} + 2.05u_{2,1} - 0.025u_{3,1} = 1.89303126
-0.025u_{1,1} + 2.025u_{2,1} = 1.89303126   (B)
since u_{3,1} = u_{2,1} by symmetry

(iii) Solving (A) and (B) by Gauss elimination or Cramer's Rule or the Gauss-Seidel method, we obtain u_{1,1} = 0.58219907, u_{2,1} = 0.94201789.
Therefore u(0.4, 0.001) ≈ u_{2,1} = 0.94201789.
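Each Crank-Nicolson step amounts to solving one small linear system. The following Python sketch (using numpy's dense solver on the full grid rather than the symmetry shortcut used above; the helper name is ours) reproduces the example:

```python
import numpy as np

def crank_nicolson_step(u, r):
    # advance one time level by solving the tridiagonal system
    # -r u_{i-1,j+1} + 2(1+r) u_{i,j+1} - r u_{i+1,j+1}
    #     = r u_{i-1,j} + 2(1-r) u_{i,j} + r u_{i+1,j},  with u[0] = u[-1] = 0
    M = len(u) - 1
    A = np.zeros((M - 1, M - 1))
    b = np.zeros(M - 1)
    for i in range(1, M):
        A[i-1, i-1] = 2 * (1 + r)
        if i > 1:
            A[i-1, i-2] = -r
        if i < M - 1:
            A[i-1, i] = -r
        b[i-1] = r*u[i-1] + 2*(1 - r)*u[i] + r*u[i+1]
    u_new = np.zeros_like(u)
    u_new[1:M] = np.linalg.solve(A, b)
    return u_new

h, k = 0.2, 0.001
x = np.linspace(0, 1, round(1/h) + 1)
u = np.sin(np.pi * x)                     # initial condition
u = crank_nicolson_step(u, r=k/h**2)
print(u[2])   # about 0.9420 = u(0.4, 0.001), as in Example 42
```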


Example 43.
The exact solution of the 1-D heat equation

u_t = u_xx, 0 < x < 1, t > 0
u(0, t) = u(1, t) = 0
u(x, 0) = f(x) = sin πx

is u(x, t) = e^{-π²t} sin(πx).

(a) By using the FTCS explicit scheme with h = 0.2 and k = 0.008, we obtain the following table:

t               x = 0    x = 0.2   x = 0.4
0.000  Exact    0.000    0.5878    0.9511
       FTCS     0.000    0.5878    0.9511
0.008  Exact    0.000    0.5432    0.8789
       FTCS     0.000    0.5429    0.8784
0.016  Exact    0.000    0.5019    0.8121
       FTCS     0.000    0.5014    0.8113
0.024  Exact    0.000    0.4638    0.7505
       FTCS     0.000    0.4631    0.7493
0.032  Exact    0.000    0.4286    0.6935
       FTCS     0.000    0.4277    0.6921
0.040  Exact    0.000    0.3961    0.6408
       FTCS     0.000    0.3951    0.6392

(b) By using the FTCS and the CNI scheme with h = 0.2 and k = 0.04, we obtain the following table (note that here r = k/h^2 = 1 violates the FTCS stability restriction, which explains the poor FTCS values):

t              x = 0    x = 0.2   x = 0.4
0.00  Exact    0.000    0.5878    0.9511
      CNI      0.000    0.5878    0.9511
      FTCS     0.000    0.5878    0.9511
0.04  Exact    0.000    0.3961    0.6408
      CNI      0.000    0.3993    0.6460
      FTCS     0.000    0.3633    0.5878
0.08  Exact    0.000    0.2669    0.4318
      CNI      0.000    0.2712    0.4388
      FTCS     0.000    0.2245    0.3633
0.12  Exact    0.000    0.1798    0.2910
      CNI      0.000    0.1842    0.2981
      FTCS     0.000    0.1388    0.2245
0.16  Exact    0.000    0.1212    0.1961
      CNI      0.000    0.1251    0.2025
      FTCS     0.000    0.0858    0.1388
0.20  Exact    0.000    0.0817    0.1321
      CNI      0.000    0.0850    0.1376
      FTCS     0.000    0.0530    0.0858

8.5 A Numerical Method for Elliptic Equations

Example 44 (Difference Equation for the Laplace Equation). Using CD for both u_xx and u_yy, the finite difference approximation for the 2-D Laplace equation u_xx + u_yy = 0 is

(u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h^2 + (u_{i,j+1} - 2u_{i,j} + u_{i,j-1})/k^2 = 0

For a uniform space grid with h = k, this becomes

u_{i,j} = (1/4)(u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1}) = (1/4)(u_E + u_W + u_N + u_S)

i.e. u_{i,j} is the average of its nearest neighbors. Equivalently, u_E + u_N + u_W + u_S - 4u_{i,j} = 0.

Example 45. The four sides of a square plate of side 12 cm made of uniform material are kept at constant temperatures such that

LHS = RHS = Bottom = 100°C and Top = 0°C.

Using a mesh size of 4 cm, calculate the temperature u(x, y) at the internal mesh points A(4,4), B(8,4), C(8,8) and D(4,8) by using the Gauss-Seidel method, if u(x, y) satisfies the Laplace equation u_xx + u_yy = 0. Start with the initial guess u_A = u_B = 80, u_C = u_D = 50. Continue iterating until all |u_i^(p+1) - u_i^(p)| < 10^-3, where u_i^(p) are the pth iterates for u_i, i = A, B, C, D.

Answer.
Applying the equation

u_{i,j} = (1/4)[u_E + u_N + u_W + u_S]

to the 4 internal mesh points, we obtain

u_A = (1/4)(u_B + u_D + 200)
u_B = (1/4)(u_A + u_C + 200)
u_C = (1/4)(u_B + u_D + 100)
u_D = (1/4)(u_A + u_C + 100)

This system is strictly diagonally dominant, so we can proceed with the Gauss-Seidel iteration starting with the initial guess u_A = u_B = 80, u_C = u_D = 50. Iterating until all |u_i^(p+1) - u_i^(p)| < 10^-3, we obtain u_A = u_B = 87.5, u_C = u_D = 62.5.
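The iteration is easy to carry out in Python. This sketch (the function name is ours) hard-codes the four update equations of this example and iterates to the stated tolerance:

```python
def gauss_seidel_plate(tol=1e-3):
    # Gauss-Seidel for the four interior points of Example 45;
    # each new value is used immediately in the later updates
    uA = uB = 80.0
    uC = uD = 50.0
    while True:
        uA_new = (uB + uD + 200) / 4
        uB_new = (uA_new + uC + 200) / 4
        uC_new = (uB_new + uD + 100) / 4
        uD_new = (uA_new + uC_new + 100) / 4
        done = max(abs(uA_new - uA), abs(uB_new - uB),
                   abs(uC_new - uC), abs(uD_new - uD)) < tol
        uA, uB, uC, uD = uA_new, uB_new, uC_new, uD_new
        if done:
            return uA, uB, uC, uD

print(gauss_seidel_plate())   # converges to (87.5, 87.5, 62.5, 62.5)
```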


8.6 CTCS Scheme for Hyperbolic Equations

Example 46 (CTCS scheme for Hyperbolic Equations). Consider the BVP involving the 1-D wave equation

∂²u/∂t² = ∂²u/∂x², t > 0
u(0, t) = u(1, t) = 0
u(x, 0) = f(x), 0 ≤ x ≤ 1
u_t(x, 0) = g(x), 0 ≤ x ≤ 1

Use a 2-D grid with Δx = h and Δt = k. By replacing both u_tt and u_xx with the central difference quotients, we obtain a CTCS scheme

(u_{i,j+1} - 2u_{i,j} + u_{i,j-1})/k^2 = (u_{i+1,j} - 2u_{i,j} + u_{i-1,j})/h^2

i.e.

u_{i,j+1} = pu_{i+1,j} + 2(1 - p)u_{i,j} + pu_{i-1,j} - u_{i,j-1}

where p = (k/h)^2.

Note 16:
(a) This scheme is stable for 0 ≤ p ≤ 1, or k ≤ h.

(b) u(x, 0) = f(x) ⇒ u(ih, 0) = f(ih), or u_{i,0} = f_i.

(c) Approximating u_t with a CD quotient, the IC u_t(x, 0) = g(x) becomes

(u_{i,1} - u_{i,-1})/(2k) = g_i ⇒ u_{i,-1} = u_{i,1} - 2kg_i

where (i, -1) is a fictitious grid point.

(d) To calculate u_{i,1}:

u_{i,1} = pu_{i+1,0} + 2(1 - p)u_{i,0} + pu_{i-1,0} - u_{i,-1}
        = pu_{i+1,0} + 2(1 - p)u_{i,0} + pu_{i-1,0} - u_{i,1} + 2kg_i

⇒ u_{i,1} = (p/2)(f_{i+1} + f_{i-1}) + (1 - p)f_i + kg_i

Example 47. If the string (with fixed ends at x = 0 and x = 1) governed by the hyperbolic partial differential equation

u_tt = u_xx, 0 ≤ x ≤ 1, t ≥ 0

with boundary conditions

u(0, t) = u(1, t) = 0

starts from its equilibrium position with initial velocity

g(x) = sin πx

and initial displacement u(x, 0) = 0, what is its displacement u at time t = 0.4 and x = 0.2, 0.4, 0.6, 0.8? (Use the CTCS explicit scheme with Δx = Δt = 0.2.)

Answer.

(a) u(0, t) = u(1, t) = 0 ⇒ u_{0,j} = u_{5,j} = 0

(b) u(x, 0) = f(x) = 0 ⇒ u_{i,0} = f_i = 0

(c) g(x) = sin πx ⇒ g_i = sin 0.2iπ

(d) With h = Δx = Δt = k = 0.2 ⇒ p = (k/h)^2 = 1, we have the following CTCS scheme:

u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}

with u_{i,1} = 0.2g_i = 0.2 sin 0.2iπ

(e) u_{i,1} = 0.2g_i = 0.2 sin 0.2iπ
(i) i = 1: u_{1,1} = 0.2 sin 0.2π = 0.1176
(ii) i = 2: u_{2,1} = 0.2 sin 0.4π = 0.1902
(iii) i = 3: u_{3,1} = 0.2 sin 0.6π = 0.1902
(iv) i = 4: u_{4,1} = u_{1,1} = 0.1176
(v) u_{0,1} = u_{5,1} = 0

(f) u_{i,j+1} = u_{i+1,j} + u_{i-1,j} - u_{i,j-1}
(i) i = 1, j = 1: u_{1,2} = u_{2,1} + u_{0,1} - u_{1,0} = 0.1902 + 0 - 0 = 0.1902
(ii) i = 2, j = 1: u_{2,2} = u_{3,1} + u_{1,1} - u_{2,0} = 0.1902 + 0.1176 - 0 = 0.3078
(iii) i = 3, j = 1: u_{3,2} = u_{4,1} + u_{2,1} - u_{3,0} = 0.1176 + 0.1902 - 0 = 0.3078
(iv) i = 4, j = 1: u_{4,2} = u_{5,1} + u_{3,1} - u_{4,0} = 0 + 0.1902 - 0 = 0.1902

That is, at t = 0.4: u(0.2, 0.4) ≈ 0.1902, u(0.4, 0.4) ≈ 0.3078, u(0.6, 0.4) ≈ 0.3078, u(0.8, 0.4) ≈ 0.1902.
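A Python sketch of the CTCS scheme (the helper name is ours), using the special first-level formula from Note 16(d), reproduces these values:

```python
import math

def ctcs_wave(f, g, h, k, n_levels):
    # CTCS scheme for u_tt = u_xx on 0 <= x <= 1 with fixed ends;
    # stable for p = (k/h)^2 <= 1
    M = round(1 / h)
    p = (k / h) ** 2
    u_prev = [f(i * h) for i in range(M + 1)]   # level j = 0
    # first level: u_{i,1} = p/2 (f_{i+1} + f_{i-1}) + (1-p) f_i + k g_i
    u = [0.0] + [p/2*(u_prev[i+1] + u_prev[i-1]) + (1 - p)*u_prev[i]
                 + k*g(i * h) for i in range(1, M)] + [0.0]
    for _ in range(n_levels - 1):
        u, u_prev = ([0.0] + [p*u[i+1] + 2*(1 - p)*u[i] + p*u[i-1] - u_prev[i]
                              for i in range(1, M)] + [0.0]), u
    return u

u = ctcs_wave(lambda x: 0.0, lambda x: math.sin(math.pi * x),
              h=0.2, k=0.2, n_levels=2)
print(u[1:5])   # about [0.1902, 0.3078, 0.3078, 0.1902] at t = 0.4
```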
