
CS-110
Computational Engineering: Numerical Methods
by A. Avudainayagam, Department of Mathematics
revised by Timothy Gonsalves, Dept of Computer Science & Engg

Lecture 27: Root Finding

Tutors: Shailesh Vaya (vaya@cse.iitm.ac.in), Anurag Mittal (amittal@cse.iitm.ac.in)
Numerical Methods

Used for:
- Solution of algebraic equations
- Approximation of functions
- Differentiation and integration of functions
- Solution of differential equations
- Statistical analysis of data

Numerical Methods

Used because:
- Analytical solution may be extremely difficult for a complex function
- Analytical solution may require evaluation of esoteric functions
- Mathematical functions may not be analytical
- The function may be available only as pairs of data points

Example: given experimental data for pillar design:

    Pillar radius (m):   1.2   1.5   1.8   2.0   3.0
    Max load (tons):    10.3  15.6  20.3  28.7  43.5

What is the pillar radius for 35 tons?

Numerical Errors

Sources of error:
- Approximate evaluation of functions, e.g. pi ≈ 22/7 = 3.14285714...
- Representation of numbers in a finite number of bits
- Round-off error, e.g. correct to 3 decimals: 2.000 + 0.77 × 10^-6 = 2.00000077 ≈ 2.000

Reduction of error:
- Iterative solutions: repeat until error < ε
- Efficiency of convergence
- Stability: the iteration may never converge

Fundamental Motifs

- Several techniques exist for a given problem
- The technique of choice depends on the nature of the data
- One technique, with modifications, may be used to solve several different problems
- Error analysis is essential to determine the reliability of the computed results
Root Finding: f(x) = 0

Method 1: The Bisection Method

Theorem: If f(x) is continuous in [a, b] and f(a)f(b) < 0, then there is
at least one root of f(x) = 0 in (a, b).
[Figure: two y-x plots. Left: a single root x1 in (a, b), with f(a) < 0 and f(b) > 0. Right: multiple roots x1, ..., x5 in (a, b).]
The Method

- Find an interval [x0, x1] such that f(x0)f(x1) < 0. This may not be easy: use your knowledge of the physical phenomenon which the equation represents.
- In each iteration, cut the interval in half and examine the sign of the function at the midpoint m = (x0 + x1)/2:
    - If f(m) = 0, m is the root.
    - If f(m) ≠ 0 and f(x0)f(m) < 0, the root lies in [x0, m].
    - Otherwise the root lies in [m, x1].
- Repeat the process until convergence (length of interval < ε).
Number of Iterations and Error Tolerance

Length of the interval (where the root lies) after n iterations:

    e_n = (x1 - x0) / 2^n

We can fix the number of iterations so that the root lies within an interval of chosen length ε (the error tolerance). Requiring e_n ≤ ε gives

    n ≥ [ln(x1 - x0) - ln ε] / ln 2

If n satisfies this, the midpoint of the final interval lies within a distance ε/2 of the actual root.

Though the root lies in a small interval, |f(x)| may not be small if f(x) has a large slope. Conversely, if |f(x)| is small, x may not be close to the root if f(x) has a small slope. So we use both facts for the termination criterion: we choose an error tolerance ε on f(x), i.e. |f(x)| < ε, and K, the maximum number of iterations.
Pseudo code (Bisection Method)

1. Input ε > 0, K > 0, and x1 > x0 such that f(x0)f(x1) < 0.
   Compute f0 = f(x0).
   k = 1 (iteration count)
2. Do {
     (a) Compute m = (x0 + x1)/2 and f = f(m)
     (b) If f·f0 < 0, set x1 = m; otherwise set x0 = m
     (c) Set k = k + 1
   } while |f| > ε and k ≤ K
3. Set root = m
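A minimal runnable sketch of this pseudo-code in Python (the function name and the sample f below are illustrative, not part of the slides):

    def bisect(f, x0, x1, eps=1e-4, K=100):
        """Bisection: assumes f is continuous and f(x0)*f(x1) < 0."""
        f0 = f(x0)
        m = (x0 + x1) / 2
        for k in range(K):
            m = (x0 + x1) / 2
            fm = f(m)
            if abs(fm) <= eps:        # terminate on |f(m)| <= eps
                break
            if f0 * fm < 0:           # root lies in [x0, m]
                x1 = m
            else:                     # root lies in [m, x1]
                x0, f0 = m, fm
        return m

    # Example from the next slide: x^3 - 2 = 0 on [1, 2]
    print(bisect(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.2599, the cube root of 2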
Bisection Method Example

f(x) = x^3 - 2 = 0, ε = 10^-4, x0 = 1, x1 = 2

    k   x0      x1      m       f(m)
    1   1       2       1.5      1.375
    2   1       1.5     1.25    -0.0469
    3   1.25    1.5     1.375    0.5996
    4   1.25    1.375   1.3125   0.2610
    5   1.25    1.3125  1.2813   0.1033

After 13 iterations, m = 1.2599 (the cube root of 2, correct to 4 decimal places).

Using the bound above with the midpoint as the estimate (error ≤ (x1 - x0)/2^(n+1) ≤ ε), n ≥ log2(10^4) - 1 ≈ 12.29, i.e. 13 iterations.
False Position Method (Regula Falsi)

- The root may lie near the end of the interval with the smaller value of |f|.
- Instead of bisecting the interval [x0, x1], we choose as w the point where the straight line through the end points meets the x-axis, and bracket the root with [x0, w] or [w, x1] depending on the sign of f(w).
False Position Method

[Figure: curve y = f(x) with the chord from (x0, f0) to (x1, f1) crossing the x-axis at w, giving the point (w, f2).]

Straight line through (x0, f0) and (x1, f1):

    y = f1 + [(f1 - f0)/(x1 - x0)] (x - x1)

Setting y = 0 gives the new end point w:

    w = x1 - [(x1 - x0)/(f1 - f0)] f1
False Position Method (Pseudo Code)

1. Choose ε > 0 (tolerance on |f(x)|),
   K > 0 (maximum number of iterations),
   k = 1 (iteration count),
   x0, x1 such that f0·f1 < 0.
2. Do {
     (a) Compute w = x1 - [(x1 - x0)/(f1 - f0)] f1, and f = f(w)
     (b) If f0·f < 0, set x1 = w, f1 = f;
         else set x0 = w, f0 = f
     (c) k = k + 1
   }
3. While (|f| ≥ ε) and (k ≤ K)
4. x = w, the root.
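A hedged Python sketch of the same loop (names illustrative):

    def regula_falsi(f, x0, x1, eps=1e-4, K=100):
        """False position: assumes f(x0)*f(x1) < 0 brackets a root."""
        f0, f1 = f(x0), f(x1)
        w = x1
        for k in range(K):
            w = x1 - (x1 - x0) / (f1 - f0) * f1   # x-intercept of the chord
            fw = f(w)
            if abs(fw) <= eps:
                break
            if f0 * fw < 0:                       # root lies in [x0, w]
                x1, f1 = w, fw
            else:                                 # root lies in [w, x1]
                x0, f0 = w, fw
        return w

    print(regula_falsi(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.2599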
Regula Falsi Example

f(x) = x^3 - 2 = 0, ε = 10^-4, x0 = 1, x1 = 2

    k    x0      x1      f(x0)    f(x1)   w       f(w)
    0    1.0     2.0     -1.0     6.0     1.1429  -0.5071
    1    1.1429  2.0     -0.5071  6.0     1.2097  -0.2298
    2    1.2097  2.0     -0.2298  6.0     1.2389  -0.0987
    3    1.2389  2.0     -0.0987  6.0     1.2512  -0.0412
    4    1.2512  2.0     -0.0412  6.0     1.2563  -0.0172
    ...
    9    1.2598  2.0     -0.0003  6.0     1.2607   0.0039
    10   1.2598  1.2607  -0.0003  0.0039  1.2599  -0.0003
    11   1.2599  1.2607  -0.0003  0.0039  1.2600   0.0002
Newton-Raphson or Newton's Method

At an approximation x_k to the root, the curve is approximated by the tangent to the curve at x_k, and the next approximation x_{k+1} is the point where the tangent meets the x-axis.

[Figure: curve y = f(x) with tangents at successive iterates x0, x1, x2 approaching the root.]

Tangent at (x_k, f_k):

    y = f(x_k) + f'(x_k)(x - x_k)

This tangent cuts the x-axis at x_{k+1}:

    x_{k+1} = x_k - f(x_k) / f'(x_k)

Warning: if f'(x_k) is very small, the method fails.

Two function evaluations per iteration: f(x_k) and f'(x_k).
Newton's Method (Pseudo Code)

1. Choose ε > 0 (function tolerance: |f(x)| < ε),
   m > 0 (maximum number of iterations),
   x0 (initial approximation),
   k = 1 (iteration count).
   Compute f0 = f(x0).
2. Do {
     q = f'(x0)   (evaluate the derivative at x0)
     x1 = x0 - f0/q
     x0 = x1
     f0 = f(x0)
     k = k + 1
   }
3. While (|f0| ≥ ε) and (k ≤ m)
4. x = x1, the root.
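The same in Python, as a minimal sketch (the derivative is passed in explicitly; names illustrative):

    def newton(f, fprime, x0, eps=1e-4, m=50):
        """Newton-Raphson; may fail if f'(x_k) is near zero or the iterates cycle."""
        for k in range(m):
            fx = f(x0)
            if abs(fx) <= eps:
                break
            x0 = x0 - fx / fprime(x0)     # x_{k+1} = x_k - f(x_k)/f'(x_k)
        return x0

    print(newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0))   # ~1.2599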
Getting Caught in a Cycle of Newton's Method

Alternate iterations fall at the same point; no convergence.

[Figure: tangent steps alternating between x_k and x_{k+1} forever.]
Newton's Method for Finding the Square Root of a Number, x = √a

f(x) = x^2 - a = 0, giving the iteration

    x_{k+1} = (x_k^2 + a) / (2 x_k)

Example: a = 5, initial approximation x0 = 2.

    x1 = 2.25
    x2 = 2.236111111
    x3 = 2.236067978
    x4 = 2.236067978
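The same iteration in plain Python reproduces the values above:

    a, x = 5.0, 2.0
    for _ in range(4):
        x = (x * x + a) / (2 * x)   # x_{k+1} = (x_k^2 + a) / (2 x_k)
        print(x)                    # 2.25, 2.2361111..., 2.2360679..., 2.2360679...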
CS110 Lecture 28: Root Finding, the Secant Method

Tutors: Shailesh Vaya (vaya@cse.iitm.ac.in), Anurag Mittal (amittal@cse.iitm.ac.in)
Problems with Newton's Method

- If |f'(x)| is very small, accuracy is difficult to obtain.
- Depending on the initial estimate, any one of the roots may be found (the answer may not have physical significance). Remedy: use bisection to get close to the desired root, then Newton's method for fast convergence.
- May get caught in an infinite cycle.
The Secant Method

- Newton's Method requires 2 function evaluations per iteration (f and f').
- The Secant Method requires only 1 function evaluation per iteration and converges almost as fast as Newton's Method at a simple root.
- Start with two points x0, x1 near the root (no need to bracket the root as in the Bisection or Regula Falsi methods).
- x_{k-1} is dropped once x_{k+1} is obtained.
The Secant Method (Geometrical Construction)

- Two initial points x0, x1 are chosen.
- The next approximation x2 is the point where the straight line joining (x0, f0) and (x1, f1) meets the x-axis.
- Take (x1, x2) and repeat.

[Figure: curve y = f(x) with the secant through (x0, f0) and (x1, f1) crossing the x-axis at x2.]
The Secant Method (Pseudo Code)

1. Choose ε > 0 (tolerance on |f(x)|),
   m > 0 (maximum number of iterations),
   x0, x1 (two initial points near the root),
   f0 = f(x0), f1 = f(x1),
   k = 1 (iteration count).
2. Do {
     x2 = x1 - [(x1 - x0)/(f1 - f0)] f1
     x0 = x1
     f0 = f1
     x1 = x2
     f1 = f(x2)
     k = k + 1
   }
3. While (|f1| ≥ ε) and (k ≤ m)
4. x = x1, the root.
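A minimal Python sketch of this loop (names illustrative):

    def secant(f, x0, x1, eps=1e-4, m=50):
        """Secant method: one new f-evaluation per iteration, no bracketing."""
        f0, f1 = f(x0), f(x1)
        for k in range(m):
            if abs(f1) <= eps:
                break
            x2 = x1 - (x1 - x0) / (f1 - f0) * f1
            x0, f0 = x1, f1          # x_{k-1} is dropped
            x1, f1 = x2, f(x2)
        return x1

    print(secant(lambda x: x**3 - 2, 1.0, 2.0))   # ~1.2599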
On Convergence

- The false position method in general converges faster than the bisection method, but not always, as counterexamples show.
- The bisection method and the false position method are guaranteed to converge (given a bracketing interval).
- The secant method and the Newton-Raphson method are not guaranteed to converge.
Order of Convergence

A measure of how fast an algorithm converges.

Let ξ be the actual root, f(ξ) = 0, and let x_k be the approximate root at the k-th iteration. The error at the k-th iteration is e_k = |x_k - ξ|.

The algorithm converges with order p if there exists a constant C > 0 such that

    e_{k+1} ≤ C · e_k^p
Order of Convergence of the Methods

- Bisection method: p = 1 (linear convergence)
- False position: generally superlinear (1 < p < 2)
- Secant method: superlinear, p = (1 + √5)/2 ≈ 1.618
- Newton-Raphson method: p = 2 (quadratic)
Machine Precision

The machine epsilon ε_M is the smallest positive float that can be added to one and produce a sum that is greater than one.

Pseudo code to find the machine epsilon:

1. Set ε_M = 1
2. Do {
     ε_M = ε_M / 2
     x = 1 + ε_M
   }
3. While (x > 1)
4. ε_M = 2 ε_M
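The same loop in Python; on IEEE 754 double-precision hardware it should print 2^-52 (the library value sys.float_info.epsilon is shown only as a check):

    import sys

    eps_m = 1.0
    x = 2.0                  # any value > 1, to enter the loop
    while x > 1.0:
        eps_m = eps_m / 2
        x = 1.0 + eps_m
    eps_m = 2 * eps_m        # the loop overshoots by one halving
    print(eps_m)                    # 2.220446049250313e-16
    print(sys.float_info.epsilon)   # same value, from the library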


CS110 Lecture 29: Approximation & Interpolation

Tutors: Shailesh Vaya (vaya@cse.iitm.ac.in), Anurag Mittal (amittal@cse.iitm.ac.in)
Approximation & Interpolation

Reasons to approximate the value of a function:
- Difficult or impossible to evaluate the function analytically, e.g. sine, log, etc.
- Have only a table of values and must interpolate.
- Faster to compute an approximating function than the original.
- Function defined implicitly rather than by an equation.
Interpolation

[Figure: data points P1, P2, P3, P4 in the x-y plane.]

Given the data (x_k, y_k), k = 1, 2, 3, ..., n, find a function f which we can use to predict the value of y at points other than the samples.

1. f(x) may pass through all the data points: f(x_k) = y_k, 1 ≤ k ≤ n.
2. f(x) need not pass through any of the data points: then we need to control the error |f(x_k) - y_k|.
1. Piecewise Linear Interpolation

A straight line segment is used between each adjacent pair of data points (x_k, y_k), (x_{k+1}, y_{k+1}):

    f(x) = y_k + [(y_{k+1} - y_k)/(x_{k+1} - x_k)] (x - x_k),   x_k ≤ x ≤ x_{k+1},  1 ≤ k ≤ n-1

Simple and computationally efficient.
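A small Python sketch (function name illustrative); as a hypothetical use, it answers the pillar question from the opening slides by interpolating radius as a function of load:

    def piecewise_linear(x, xs, ys):
        """Linear interpolation between adjacent samples; xs must be increasing."""
        for k in range(len(xs) - 1):
            if xs[k] <= x <= xs[k + 1]:
                t = (x - xs[k]) / (xs[k + 1] - xs[k])
                return ys[k] + t * (ys[k + 1] - ys[k])
        raise ValueError("x outside the data range")

    radius = [1.2, 1.5, 1.8, 2.0, 3.0]
    load = [10.3, 15.6, 20.3, 28.7, 43.5]
    print(piecewise_linear(35.0, load, radius))   # ~2.43 m for a 35-ton load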
1. Polynomial Interpolation

For the data set (x_k, y_k), k = 1, ..., n, we find the one polynomial of degree (n - 1) subject to the n interpolation constraints f(x_k) = y_k:

    f(x) = Σ_{k=1}^{n} a_k x^(k-1)

The constraints give the linear system (a Vandermonde matrix):

    | 1  x1  x1^2  ...  x1^(n-1) | | a1 |   | y1 |
    | 1  x2  x2^2  ...  x2^(n-1) | | a2 | = | y2 |
    | ...                        | | .. |   | .. |
    | 1  xn  xn^2  ...  xn^(n-1) | | an |   | yn |

Not feasible for large data sets, since the condition number of this matrix increases rapidly with increasing n.
Lagrange Interpolating Polynomial

The Lagrange interpolating polynomial of degree (n - 1), f(x), is constructed as follows:

1. Calculate the Lagrange multipliers Q_k(x), each of which is a polynomial of degree (n - 1) that is non-zero at only the one base point x_k; normalise by Q_k(x_k):

    Q_k(x) = [(x - x1)(x - x2)...(x - x_{k-1})(x - x_{k+1})...(x - xn)]
           / [(x_k - x1)(x_k - x2)...(x_k - x_{k-1})(x_k - x_{k+1})...(x_k - xn)]

i.e.

    Q_k(x) = Π_{i≠k} (x - x_i) / Π_{i≠k} (x_k - x_i)

Each Q_k(x) is a polynomial of degree (n - 1) with

    Q_k(x_j) = 1,  j = k
             = 0,  j ≠ k

2. The polynomial curve that passes through the data set (x_k, y_k), k = 1, 2, ..., n is

    f(x) = y1 Q1(x) + y2 Q2(x) + ... + yn Qn(x)

The polynomial is written down directly, without having to solve a system of equations.
Lagrange Interpolation (Pseudo Code)

Choose x, the point where the function value is required.

    y = 0
    for i = 1 to n
        p = y_i
        for j = 1 to n
            if (i ≠ j)
                p = p * (x - x_j) / (x_i - x_j)
        end for
        y = y + p
    end for
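A direct Python transcription of this pseudo-code, checked against the example on the next slide (names illustrative):

    def lagrange(x, xs, ys):
        """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
        n = len(xs)
        y = 0.0
        for i in range(n):
            p = ys[i]                 # build y_i * Q_i(x)
            for j in range(n):
                if i != j:
                    p *= (x - xs[j]) / (xs[i] - xs[j])
            y += p
        return y

    xs = [20, 22, 24, 26]
    ys = [0.34202, 0.37461, 0.40674, 0.43837]   # sine table from the next slide
    print(lagrange(23.0, xs, ys))               # ~0.390735, i.e. sin 23 degrees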
Lagrange Interpolation Example

Given the following base points, estimate sin 23° to five decimal places:

    i    x_i (deg)    y_i = sin x_i
    0    20           0.34202
    1    22           0.37461
    2    24           0.40674
    3    26           0.43837

Using Q_k(x) = Π_{i≠k}(x - x_i) / Π_{i≠k}(x_k - x_i) at x = 23:

    Q0(23) = -0.0625
    Q1(23) =  0.5625
    Q2(23) =  0.5625
    Q3(23) = -0.0625

    f(23) = y0 Q0 + y1 Q1 + y2 Q2 + y3 Q3
          = (0.34202)(-0.0625) + (0.37461)(0.5625)
            + (0.40674)(0.5625) + (0.43837)(-0.0625)
          = 0.39074

True value: 0.39073; the discrepancy is due to accumulation of round-off error.
CS110 Lecture 30: Curve Fitting

Tutors: Shailesh Vaya (vaya@cse.iitm.ac.in), Anurag Mittal (amittal@cse.iitm.ac.in)
2. Least Squares Fit

If the number of samples is large, or if the dependent variable contains measurement noise, it is often better to find a function which minimizes an error criterion such as

    E = Σ_{k=1}^{n} [f(x_k) - y_k]^2

A function that minimizes E is called the least squares fit. Depending on the nature of the function we have: linear regression, polynomial regression (quadratic, cubic, ...), exponential regression, etc.
2. Minimax

Least squares approximation gives a good fit overall, but may have a large deviation at one point. The minimax criterion instead minimizes the maximum deviation from the y_k:

    E = max_k |f(x_k) - y_k|

Also known as Chebyshev or optimal polynomial approximation.
Straight Line Fit or Linear Regression

To fit a straight line through the n points (x1, y1), (x2, y2), ..., (xn, yn), assume

    f(x) = a1 + a2 x

Error:

    E = Σ_{k=1}^{n} [f(x_k) - y_k]^2 = Σ_{k=1}^{n} [a1 + a2 x_k - y_k]^2

Find a1, a2 which minimize E:

    ∂E/∂a1 = 2 Σ_{k=1}^{n} (a1 + a2 x_k - y_k) = 0
    ∂E/∂a2 = 2 Σ_{k=1}^{n} (a1 + a2 x_k - y_k) x_k = 0

Solve the resulting normal equations:

    | n      Σx_k   | | a1 |   | Σy_k     |
    | Σx_k   Σx_k^2 | | a2 | = | Σx_k y_k |
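A sketch that solves the 2x2 normal equations directly by Cramer's rule (names illustrative; checked against the example on the next slide):

    def line_fit(xs, ys):
        """Least squares straight line f(x) = a1 + a2*x via the normal equations."""
        n = len(xs)
        sx = sum(xs)
        sxx = sum(x * x for x in xs)
        sy = sum(ys)
        sxy = sum(x * y for x, y in zip(xs, ys))
        det = n * sxx - sx * sx
        a1 = (sxx * sy - sx * sxy) / det   # Cramer's rule
        a2 = (n * sxy - sx * sy) / det
        return a1, a2

    print(line_fit([0, 1, 2, 3, 4], [2.10, 2.85, 1.10, 3.20, 3.90]))  # (1.84, 0.395)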
Straight Line Fit (Example)

Fit a straight line through the five points (0, 2.10), (1, 2.85), (2, 1.10), (3, 3.20), (4, 3.90).

    a11 = n = 5
    a12 = a21 = Σx_k = 0 + 1 + 2 + 3 + 4 = 10
    a22 = Σx_k^2 = 1 + 4 + 9 + 16 = 30
    b1 = Σy_k = 2.10 + 2.85 + 1.10 + 3.20 + 3.90 = 13.15
    b2 = Σx_k y_k = 2.85 + 2(1.10) + 3(3.20) + 4(3.90) = 30.25

    | 5   10 | | a1 |   | 13.15 |
    | 10  30 | | a2 | = | 30.25 |

a1 = 1.84, a2 = 0.395, so f(x) = 1.84 + 0.395x
Straight Line Fit (Example)

[Figure: the five data points plotted together with the fitted line.]

Data points: (0, 2.10), (1, 2.85), (2, 1.10), (3, 3.20), (4, 3.90)
Linear fit: f(x) = 1.84 + 0.395x
Two Kinds of Curve Fitting

1. Interpolation: the fitted curve passes through every data point (piecewise linear, polynomial, Lagrange).
2. Approximation: the curve need not pass through any data point (least squares, minimax).
Data Representation

Integers: Fixed Point Numbers

Decimal system (base 10) uses the digits 0, 1, 2, ..., 9:

    (396)_10 = (6 × 10^0) + (9 × 10^1) + (3 × 10^2)

Binary system (base 2) uses the digits 0, 1:

    (11001)_2 = (1 × 2^0) + (0 × 2^1) + (0 × 2^2) + (1 × 2^3) + (1 × 2^4) = (25)_10
Decimal to Binary Conversion

Convert (39)_10 to binary form (base = 2) by repeated division, keeping the remainders:

    39 = 2 × 19 + remainder 1
    19 = 2 ×  9 + remainder 1
     9 = 2 ×  4 + remainder 1
     4 = 2 ×  2 + remainder 0
     2 = 2 ×  1 + remainder 0
     1 = 2 ×  0 + remainder 1

Read the remainders in reverse order:

    (100111)_2 = (1 × 2^0) + (1 × 2^1) + (1 × 2^2) + (0 × 2^3) + (0 × 2^4) + (1 × 2^5) = (39)_10
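The repeated-division scheme in Python (the built-in bin() is shown only as a check; the function name is illustrative):

    def to_binary(n):
        """Convert a non-negative integer by repeated division by 2."""
        digits = []
        while n > 0:
            digits.append(n % 2)   # remainder
            n //= 2                # quotient
        return ''.join(str(d) for d in reversed(digits)) or '0'

    print(to_binary(39))   # '100111'
    print(bin(39))         # '0b100111', built-in check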

Largest number that can be stored in m-digits
base - 10 : (999999) = 10
m
- 1
base - 2 : (111111) = 2
m
- 1
m = 3 (999) = 10
3
- 1
(111) = 2
3
- 1
Limitation: Memory cells consist of 8 bits (1 byte) multiples, each
position containing 1 binary digit

Common cell lengths for integers : k = 16 bits


k = 32 bits
Sign-Magnitude Notation

The first bit is used for the sign: 0 for a non-negative number, 1 for a negative number. The remaining bits store the binary magnitude of the number.

    Limit of a 16-bit cell: (32767)_10 = 2^15 - 1
    Limit of a 32-bit cell: (2 147 483 647)_10 = 2^31 - 1
Two's Complement Notation

Definition: the two's complement of a negative integer I in a k-bit cell is 2^k + I.

Example: the two's complement of (-3)_10 in a 3-bit cell is (2^3 - 3)_10 = (5)_10 = (101)_2, so (-3)_10 will be stored as 101.

Storage scheme for an integer I in a k-bit cell in two's complement notation:

    Stored value C = I,        if I ≥ 0 (first bit = 0)
                   = 2^k + I,  if I < 0

Two's complement notation admits one more negative number than sign-magnitude notation.

Example: take a 2-bit cell (k = 2).

Range in sign-magnitude notation: ±(2^1 - 1) = ±1, with -1 = 11 and 1 = 01.

Range in two's complement notation:

    two's complement of -1 = 2^2 - 1 = (3)_10 = (11)_2
    two's complement of -2 = 2^2 - 2 = (2)_10 = (10)_2
    two's complement of -3 = 2^2 - 3 = (1)_10 = (01)_2, leading bit 0: not possible

So a 2-bit cell holds -2 to +1.
Floating Point Numbers

Integer part + fractional part.

Decimal system (base 10), e.g. 235.7846; the fractional part is

    (0.7846)_10 = 7 × 10^-1 + 8 × 10^-2 + 4 × 10^-3 + 6 × 10^-4

Binary system (base 2), e.g. 10011.11101; the fractional part is

    (0.11101)_2 = 1 × 2^-1 + 1 × 2^-2 + 1 × 2^-3 + 0 × 2^-4 + 1 × 2^-5
Binary Fraction to Decimal Fraction

(10.11)_2:

    integer part (10)_2 = 0 × 2^0 + 1 × 2^1 = 2
    fractional part (0.11)_2 = 1/2 + 1/4 = 0.5 + 0.25 = 0.75

Decimal value = (2.75)_10
Decimal Fraction to Binary Fraction

Convert (0.9)_10 to a binary fraction by repeated multiplication by 2, keeping the integer parts:

    0.9 × 2 = 1.8 → 0.8 + integer part 1
    0.8 × 2 = 1.6 → 0.6 + integer part 1
    0.6 × 2 = 1.2 → 0.2 + integer part 1
    0.2 × 2 = 0.4 → 0.4 + integer part 0
    0.4 × 2 = 0.8 → 0.8 + integer part 0
    (0.8 repeats)

    (0.9)_10 = (0.1 1100 1100 1100 ...)_2, with the block 1100 repeating
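The repeated-multiplication scheme in Python (digit count is a free choice; since 0.9 itself is stored as a binary float, very late digits may eventually deviate):

    def frac_to_binary(f, digits=13):
        """Binary digits of a fraction 0 <= f < 1 by repeated doubling."""
        bits = []
        for _ in range(digits):
            f *= 2
            bits.append(int(f))   # integer part is the next binary digit
            f -= int(f)
        return '0.' + ''.join(map(str, bits))

    print(frac_to_binary(0.9))    # 0.1110011001100 (the block 1100 repeating)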
Scientific Notation (Decimal)

    0.0000747     = 7.47 × 10^-5
    31.4159265    = 3.14159265 × 10^1
    9,700,000,000 = 9.7 × 10^9

Binary:

    (10.01)_2 = (1.001)_2 × 2^1
    (0.110)_2 = (1.10)_2 × 2^-1

The computer stores a binary approximation to x:

    x ≈ ±q × 2^n,   q the mantissa, n the exponent

32 bits: first bit for the sign, next 8 bits for the exponent, 23 bits for the mantissa.

    (-39.9)_10 = (-100111.111001100 1100 ...)_2 = (-1.00111111 0011001100 ...)_2 × (2^5)_10

With the mantissa rounded to 23 bits, the decimal value of the stored number is

    -39.90000152587890625
Round-off Errors Can Be Reduced by Efficient Programming Practice

The number of operations (multiplications and additions) must be kept to a minimum. (Complexity theory.)

An Example of Efficient Programming

Problem: evaluate the value of the polynomial

    P(x) = a4 x^4 + a3 x^3 + a2 x^2 + a1 x + a0

at a given x. Term-by-term evaluation requires 13 multiplications and 4 additions. Bad program!
An Efficient Method (Horner's Method)

    P(x) = a4 x^4 + a3 x^3 + a2 x^2 + a1 x + a0
         = a0 + x(a1 + x(a2 + x(a3 + x a4)))

Requires only 4 multiplications and 4 additions.

Pseudo-code for an n-th degree polynomial:

    Input as, n, x
    p = 0
    for i = n, n-1, ..., 0
    {
        p = x * p + a[i]
    }
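In Python, with the coefficient list ordered a[0]..a[n] as in the pseudo-code (the example polynomial is illustrative):

    def horner(a, x):
        """Evaluate a[n]*x^n + ... + a[1]*x + a[0] with n mults and n adds."""
        p = 0.0
        for coeff in reversed(a):   # i = n, n-1, ..., 0
            p = x * p + coeff
        return p

    # P(x) = 2x^3 - x + 5 at x = 3 (coefficients a0..a3)
    print(horner([5, -1, 0, 2], 3.0))   # 56.0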
Summary

- Finding the root(s) of an equation: bisection, regula falsi, Newton's, secant.
- Fitting curves to data:
    - Exact: Lagrange interpolation.
    - Least squares fit: linear regression (also polynomial, exponential, etc.).
    - OpenOffice functions: LINEST, LOGEST, TREND.
- Several techniques exist for a given problem; the technique of choice depends on the nature of the data.
- Error analysis is essential to determine the reliability of the computed results.
