Computational Engineering
Numerical Methods
by
A. Avudainayagam
Department of Mathematics
revised by
Timothy Gonsalves
Dept of Computer Science & Engg
Lecture 27: Root Finding
Used for: approximation of functions
Source of Errors
π ≈ 22/7 = 3.14285714 (approximation/truncation error)
2.000 + 0.77 × 10⁻⁶ = 2.00000077 → 2.000 (round-off in 4-digit arithmetic)
Reduction of Error
Efficiency of convergence
We can fix the number of iterations so that the root lies within
an interval of chosen length ε (error tolerance):

    n ≥ [ln(x₁ − x₀) − ln ε] / ln 2

If n satisfies this, the computed midpoint lies within a distance ε/2
of the actual root.
Though the root lies in a small interval, |f(x)| may not be
small if f(x) has a large slope.
Conversely, if |f(x)| is small, x may not be close to the root if
f(x) has a small slope.
So we use both facts in the termination criterion.
We first choose an error tolerance on f(x): |f(x)| < ε,
and K, the maximum number of iterations.
Pseudo code (Bisection Method)
1. Input ε > 0, K > 0, x₁ > x₀ so that f(x₀) f(x₁) < 0.
   Compute f₀ = f(x₀).
   k = 1 (iteration count)
2. Do
   {
   (a) Compute m = (x₀ + x₁)/2, and f = f(m)
   (b) If f·f₀ < 0, set x₁ = m
       otherwise set x₀ = m
   (c) Set k = k + 1
   } while |f| > ε and k ≤ K
3. Set root = m
Bisection Method Example
f(x) = x³ − 2 = 0, ε = 10⁻⁴, x₀ = 1, x₁ = 2

k   x₀      x₁      m       f(m)
1   1       2       1.5      1.375
2   1       1.5     1.25    −0.0469
3   1.25    1.5     1.375    0.5996
4   1.25    1.375   1.3125   0.2610
5   1.25    1.3125  1.2813   0.1033
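The bisection pseudocode can be sketched in Python and checked against the worked example above (a minimal sketch; the function and tolerance follow the example, and the names are illustrative):

```python
def bisect(f, x0, x1, eps=1e-4, K=100):
    """Bisection: halve the bracketing interval until |f(m)| <= eps."""
    f0 = f(x0)
    if f0 * f(x1) >= 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for k in range(1, K + 1):
        m = (x0 + x1) / 2          # midpoint of the current bracket
        fm = f(m)
        if fm * f0 < 0:            # root lies between x0 and m
            x1 = m
        else:                      # root lies between m and x1
            x0 = m
        if abs(fm) <= eps:
            break
    return m

# Worked example: f(x) = x^3 - 2 on [1, 2]; the exact root is 2**(1/3)
root = bisect(lambda x: x ** 3 - 2, 1.0, 2.0)
```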
Using n ≥ [ln(x₁ − x₀) − ln ε] / ln 2: n ≥ 13.29, so 14 iterations suffice.
False Position Method (Regula Falsi)
Instead of the midpoint, use the point where the chord joining
(x₀, f₀) and (x₁, f₁) crosses the x-axis.

New end point w:

    w = x₁ − f₁ (x₁ − x₀) / (f₁ − f₀)

False Position Method (Pseudo Code)
1. Choose ε > 0 (tolerance on |f(x)|)
   K > 0 (maximum number of iterations)
   k = 1 (iteration count)
   x₀, x₁ (so that f₀ f₁ < 0)
2. Do {
   a. Compute w = x₁ − f₁ (x₁ − x₀) / (f₁ − f₀), and f = f(w)
   b. If f₀ f < 0 set x₁ = w, f₁ = f
      else set x₀ = w, f₀ = f
   c. k = k + 1
   } while (|f| > ε) and (k ≤ K)
3. x = w, the root
Regula Falsi Example
f(x) = x³ − 2 = 0, ε = 10⁻⁴, x₀ = 1, x₁ = 2

k    x₀      x₁      f(x₀)    f(x₁)   w       f(w)
0    1.0     2.0     −1.0     6.0     1.1429  −0.5071
1    1.1429  2.0     −0.5071  6.0     1.2097  −0.2298
2    1.2097  2.0     −0.2298  6.0     1.2389  −0.0987
3    1.2389  2.0     −0.0987  6.0     1.2512  −0.0412
4    1.2512  2.0     −0.0412  6.0     1.2563  −0.0172
...
9    1.2598  2.0     −0.0003  6.0     1.2607   0.0039
10   1.2598  1.2607  −0.0003  0.0039  1.2599  −0.0003
11   1.2599  1.2607  −0.0003  0.0039  1.2600   0.0002
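The same sketch adapted to the false-position update reproduces this table (illustrative Python, mirroring the pseudocode):

```python
def false_position(f, x0, x1, eps=1e-4, K=100):
    """Regula falsi: replace the midpoint by the chord's x-intercept."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for k in range(1, K + 1):
        w = x1 - f1 * (x1 - x0) / (f1 - f0)   # chord crosses x-axis at w
        fw = f(w)
        if f0 * fw < 0:
            x1, f1 = w, fw
        else:
            x0, f0 = w, fw
        if abs(fw) <= eps:
            break
    return w

# Worked example: f(x) = x^3 - 2 on [1, 2]
root = false_position(lambda x: x ** 3 - 2, 1.0, 2.0)
```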
Newton-Raphson or Newton's Method
At an approximation xₖ to the root, the curve is approximated by the
tangent to the curve at xₖ, and the next approximation xₖ₊₁ is the
point where the tangent meets the x-axis.
[Figure: curve y = f(x) with tangents at x₀, x₁ showing successive
approximations x₁, x₂ approaching the root]
    xₖ₊₁ = xₖ − f(xₖ) / f′(xₖ)

Warning: if f′(xₖ) is very small, the method fails.
Example: f(x) = x² − a = 0 (square root of a), a = 5,
initial approximation x₀ = 2.
x₁ = 2.25
x₂ = 2.236111111
x₃ = 2.236067978
x₄ = 2.236067978
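A minimal Python sketch of Newton-Raphson, run on the square-root example above (names are illustrative; the derivative is supplied explicitly):

```python
def newton(f, df, x, eps=1e-10, K=50):
    """Newton-Raphson: follow the tangent at x_k down to the x-axis."""
    for _ in range(K):
        d = df(x)
        if abs(d) < 1e-14:
            # the warning case from the notes: near-zero slope
            raise ZeroDivisionError("f'(x_k) too small")
        x = x - f(x) / d
        if abs(f(x)) <= eps:
            break
    return x

# Square root of a = 5 via f(x) = x^2 - a, starting from x0 = 2:
a = 5.0
r = newton(lambda x: x * x - a, lambda x: 2 * x, 2.0)
# iterates: 2.25, 2.236111111, 2.236067978, ... -> sqrt(5)
```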
CS110 Lecture 28:
Root Finding: The Secant Method
[Figure: secant line through (x₀, f₀) and (x₁, f₁) crossing the x-axis]
The Secant Method (Pseudo Code)
1. Choose ε > 0 (function tolerance: |f(x)| ≤ ε)
   m > 0 (maximum number of iterations)
   x₀, x₁ (two initial points near the root)
   f₀ = f(x₀)
   f₁ = f(x₁)
   k = 1 (iteration count)
2. Do {
   x₂ = x₁ − f₁ (x₁ − x₀) / (f₁ − f₀)
   x₀ = x₁
   f₀ = f₁
   x₁ = x₂
   f₁ = f(x₂)
   k = k + 1
   } while (|f₁| > ε) and (k ≤ m)
3. x₁ is the root
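The secant pseudocode translates almost line for line into Python (a minimal sketch; function and bracket follow the earlier examples):

```python
def secant(f, x0, x1, eps=1e-10, m=50):
    """Secant method: Newton's update with a finite-difference slope."""
    f0, f1 = f(x0), f(x1)
    for _ in range(m):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)  # secant crosses x-axis at x2
        x0, f0 = x1, f1                       # shift the two points forward
        x1, f1 = x2, f(x2)
        if abs(f1) <= eps:
            break
    return x1

# f(x) = x^3 - 2 with starting points 1 and 2
root = secant(lambda x: x ** 3 - 2, 1.0, 2.0)
```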
On Convergence
- The false position method in general converges faster than
  the bisection method.
- But not always, as shown by counterexamples.
- The bisection method and the false position method are
  guaranteed to converge (the root stays bracketed).
- The secant method and the Newton-Raphson method are
  not guaranteed to converge.
Order of Convergence
- A measure of how fast an algorithm converges.
Let α be the actual root: f(α) = 0.
Let xₖ be the approximate root at the k-th iteration. Error
at the k-th iteration: eₖ = |xₖ − α|.
The algorithm converges with order p if there exists a constant C
such that

    eₖ₊₁ ≤ C eₖ^p
Order of Convergence of the Methods
- Bisection method: p = 1 (linear convergence)
- False position: generally superlinear (1 < p < 2)
- Secant method: superlinear, p = (1 + √5)/2 ≈ 1.618
- Newton-Raphson method: p = 2 (quadratic)
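The order p can be estimated numerically from three successive iterates, since eₖ₊₁ ≈ C eₖ^p gives p ≈ ln(eₖ₊₁/eₖ) / ln(eₖ/eₖ₋₁). A sketch using the Newton iterates for √5 from the earlier example (the helper name is illustrative):

```python
import math

def estimate_order(x_prev, x_curr, x_next, alpha):
    """Estimate p from e_{k+1} ~ C e_k^p using three successive iterates."""
    e0, e1, e2 = (abs(x - alpha) for x in (x_prev, x_curr, x_next))
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton-Raphson iterates for f(x) = x^2 - 5, starting at x0 = 2:
alpha = math.sqrt(5)
xs = [2.0]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 5) / (2 * x))

p = estimate_order(xs[0], xs[1], xs[2], alpha)   # close to 2 (quadratic)
```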
Machine Precision
- The smallest positive float ε_M that can be added to
  one and produce a sum that is greater than one.

Pseudo code to find Machine Epsilon
1. Set ε_M = 1
2. Do
   {
   ε_M = ε_M / 2
   x = 1 + ε_M
   }
3. While (x > 1)
4. ε_M = 2 ε_M
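The epsilon pseudocode runs directly in Python; for IEEE 754 doubles the result agrees with the value the standard library reports:

```python
import sys

def machine_epsilon():
    """Halve eps until 1 + eps no longer exceeds 1, then back off one step."""
    eps = 1.0
    while True:
        eps = eps / 2
        x = 1.0 + eps
        if not x > 1.0:     # 1 + eps rounded back to exactly 1
            break
    return 2 * eps

eps_m = machine_epsilon()   # 2.220446049250313e-16 for IEEE 754 doubles
```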
CS110 Lecture 29:
Approximation & Interpolation
[Figure: data points P₁, P₂, P₃, P₄ in the x-y plane]
Given the data:
(xₖ, yₖ), k = 1, 2, 3, ..., n,
find a function f which we can use to predict the value of y at points
other than the samples.
1. f(x) may pass through all the data points (interpolation):
   f(xₖ) = yₖ, 1 ≤ k ≤ n
2. f(x) need not pass through any of the data points (approximation):
   need to control the error |f(xₖ) − yₖ|
The simplest choice joins successive data points by straight lines
(piecewise-linear interpolation):

    f(x) = yₖ + (x − xₖ) / (xₖ₊₁ − xₖ) · (yₖ₊₁ − yₖ),   xₖ ≤ x ≤ xₖ₊₁, 1 ≤ k ≤ n − 1

Simple and computationally efficient.
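The formula above can be sketched directly (illustrative Python; assumes the xₖ are sorted in ascending order):

```python
def piecewise_linear(xs, ys, x):
    """Evaluate the piecewise-linear interpolant at x (xs sorted ascending)."""
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            slope = (ys[k + 1] - ys[k]) / (xs[k + 1] - xs[k])
            return ys[k] + slope * (x - xs[k])
    raise ValueError("x outside the data range")

# midpoint of the segment from (1, 2) to (3, 6) is (2, 4):
y = piecewise_linear([0.0, 1.0, 3.0], [0.0, 2.0, 6.0], 2.0)   # -> 4.0
```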
1. Polynomial Interpolation
For the data set (xₖ, yₖ), k = 1, ..., n,
we find the unique polynomial of degree (n − 1) subject to the n
interpolation constraints f(xₖ) = yₖ:

    f(x) = Σₖ₌₁ⁿ aₖ x^(k−1)
The constraints give a linear (Vandermonde) system for the coefficients:

    | 1  x₁  x₁²  ...  x₁ⁿ⁻¹ | | a₁ |   | y₁ |
    | 1  x₂  x₂²  ...  x₂ⁿ⁻¹ | | a₂ | = | y₂ |
    | .  .   .    ...  .     | | .. |   | .. |
    | 1  xₙ  xₙ²  ...  xₙⁿ⁻¹ | | aₙ |   | yₙ |
An alternative is the Lagrange form. Define the basis polynomials

    Qₖ(x) = [(x − x₁)(x − x₂)...(x − xₖ₋₁)(x − xₖ₊₁)...(x − xₙ)]
          / [(xₖ − x₁)(xₖ − x₂)...(xₖ − xₖ₋₁)(xₖ − xₖ₊₁)...(xₖ − xₙ)]

or, in product notation,

    Qₖ(x) = ∏ᵢ≠ₖ (x − xᵢ) / ∏ᵢ≠ₖ (xₖ − xᵢ)
Each Qₖ(x) is a polynomial of degree (n − 1) with
Qₖ(xⱼ) = 1, j = k
       = 0, j ≠ k
The polynomial curve that passes through the data set (xₖ, yₖ),
k = 1, 2, ..., n is
f(x) = y₁Q₁(x) + y₂Q₂(x) + ... + yₙQₙ(x)
Example:
Q₁(x) = −0.0625
Q₂(x) = 0.5625
Q₃(x) = 0.5625
Q₄(x) = −0.0625
f(x) = y₁Q₁(x) + y₂Q₂(x) + y₃Q₃(x) + y₄Q₄(x)
     = (0.34202)(−0.0625) +
       (0.37461)(0.5625) +
       (0.40674)(0.5625) +
       (0.43837)(−0.0625)
     = 0.39074
True value: 0.39073; the discrepancy is due to
accumulation of round-off error.
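A sketch of the evaluation in Python. The slide giving the nodes for this example is not in these notes; the y-values are consistent with sin(x) at x = 20°, 22°, 24°, 26° evaluated at x = 23°, so those nodes are assumed here:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolant f(x) = sum_k y_k * Q_k(x)."""
    total = 0.0
    for k in range(len(xs)):
        Qk = 1.0
        for i in range(len(xs)):
            if i != k:
                Qk *= (x - xs[i]) / (xs[k] - xs[i])  # basis factor
        total += ys[k] * Qk
    return total

# Assumed setup (not stated on the surviving slide): y-values are sin(x)
# at x = 20, 22, 24, 26 degrees, evaluated at x = 23 degrees.
xs = [20.0, 22.0, 24.0, 26.0]
ys = [0.34202, 0.37461, 0.40674, 0.43837]
value = lagrange(xs, ys, 23.0)   # 0.390735, matching the slide's 0.39074
```

At the midpoint of four equally spaced nodes the basis values are exactly −1/16, 9/16, 9/16, −1/16, i.e. the −0.0625 and 0.5625 in the example.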
CS110 Lecture 30:
Curve Fitting
2. Minimax
Least squares approximation gives a good fit overall, but
may have a large deviation at one point.
Minimize the maximum deviation from the yₖ:

    E = maxₖ |f(xₖ) − yₖ|

Also known as Chebyshev or optimal polynomial approximation.
Straight Line Fit or Linear Regression
To fit a straight line through the n points (x₁, y₁), (x₂, y₂), ..., (xₙ, yₙ),
assume f(x) = a₁ + a₂x.
Error:

    E = Σₖ₌₁ⁿ [f(xₖ) − yₖ]² = Σₖ₌₁ⁿ [a₁ + a₂xₖ − yₖ]²

Find a₁, a₂ which minimize E.
Setting the partial derivatives of E to zero:

    ∂E/∂a₁ = 2 Σₖ₌₁ⁿ (a₁ + a₂xₖ − yₖ) = 0
    ∂E/∂a₂ = 2 Σₖ₌₁ⁿ (a₁ + a₂xₖ − yₖ) xₖ = 0

Solve the normal equations:

    | n    Σxₖ  | | a₁ |   | Σyₖ   |
    | Σxₖ  Σxₖ² | | a₂ | = | Σxₖyₖ |
Straight Line Fit (example)
Fit a straight line through the five points
(0, 2.10), (1, 2.85), (2, 1.10), (3, 3.20), (4, 3.90)

a₁₁ = n = 5
a₂₂ = Σxₖ² = 0 + 1 + 4 + 9 + 16 = 30
a₁₂ = Σxₖ = 0 + 1 + 2 + 3 + 4 = 10
a₂₁ = a₁₂ = 10
b₁ = Σyₖ = 2.10 + 2.85 + 1.10 + 3.20 + 3.90 = 13.15
b₂ = Σxₖyₖ = 0 + 2.85 + 2(1.10) + 3(3.20) + 4(3.90) = 30.25

    | 5   10 | | a₁ |   | 13.15 |
    | 10  30 | | a₂ | = | 30.25 |

a₁ = 1.84, a₂ = 0.395, so f(x) = 1.84 + 0.395x
Straight Line Fit (example)
[Figure: plot of the five data points and the fitted line]
Data points: (0, 2.10), (1, 2.85), (2, 1.10), (3, 3.20), (4, 3.90)
Linear fit: f(x) = 1.84 + 0.395x
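The normal equations for the straight-line fit can be solved in a few lines (a minimal sketch using Cramer's rule on the 2×2 system; names are illustrative):

```python
def linear_fit(xs, ys):
    """Solve the 2x2 normal equations for f(x) = a1 + a2*x."""
    n = len(xs)
    Sx = sum(xs)                                # sum of x_k
    Sxx = sum(x * x for x in xs)                # sum of x_k^2
    Sy = sum(ys)                                # sum of y_k
    Sxy = sum(x * y for x, y in zip(xs, ys))    # sum of x_k * y_k
    det = n * Sxx - Sx * Sx
    a1 = (Sy * Sxx - Sx * Sxy) / det
    a2 = (n * Sxy - Sx * Sy) / det
    return a1, a2

# Worked example: five points -> a1 = 1.84, a2 = 0.395
a1, a2 = linear_fit([0, 1, 2, 3, 4], [2.10, 2.85, 1.10, 3.20, 3.90])
```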
Data Representation
Integers - Fixed Point Numbers
Decimal system - base 10, uses digits 0, 1, 2, ..., 9:
(396)₁₀ = (6 × 10⁰) + (9 × 10¹) + (3 × 10²)
Binary system - base 2, uses digits 0, 1:
(11001)₂ = (1 × 2⁰) + (0 × 2¹) + (0 × 2²) + (1 × 2³) + (1 × 2⁴) = (25)₁₀
Decimal to Binary Conversion
Convert (39)₁₀ to binary form (base = 2) by repeated division:

39 ÷ 2 = 19, remainder 1
19 ÷ 2 =  9, remainder 1
 9 ÷ 2 =  4, remainder 1
 4 ÷ 2 =  2, remainder 0
 2 ÷ 2 =  1, remainder 0
 1 ÷ 2 =  0, remainder 1

Read the remainders in reverse order:
(100111)₂ = (1 × 2⁰) + (1 × 2¹) + (1 × 2²) + (0 × 2³) + (0 × 2⁴) + (1 × 2⁵) = (39)₁₀
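The repeated-division procedure above can be sketched directly (illustrative Python for non-negative integers):

```python
def to_binary(n):
    """Repeated division by 2; the remainders, reversed, are the bits."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        n, r = divmod(n, 2)   # quotient and remainder in one step
        bits.append(str(r))
    return "".join(reversed(bits))

b = to_binary(39)   # "100111", as in the worked example
```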
Largest number that can be stored in m digits:
base 10: (99...9) = 10ᵐ − 1
base 2:  (11...1) = 2ᵐ − 1
m = 3:   (999) = 10³ − 1 = 999
         (111) = 2³ − 1 = 7
Limitation: memory cells consist of multiples of 8 bits (1 byte), each
position containing one binary digit.