One of the most common problems encountered in engineering analysis is the following: given a function f(x), find the values of x for which f(x) = 0. The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeroes of the function f(x).
The roots of equations may be real or complex. In general, an equation may have any number of (real) roots, or no roots at all. For example, sin x - x = 0 has a single root, namely x = 0, whereas tan x - x = 0 has an infinite number of roots (x = 0, ±4.493, ±7.725, ...). There are two types of methods available to find the roots of algebraic and transcendental equations of the form f(x) = 0.
1. Direct Methods: Direct methods give the exact values of the roots in a finite number of steps (assuming no round-off errors). Direct methods determine all the roots at the same time.
2. Indirect or Iterative Methods: Indirect or iterative methods are based on the concept of successive approximations. The general procedure is to start with one or more initial approximations to the root and obtain a sequence of iterates (xk) which in the limit converges to the actual or true root. Indirect or iterative methods determine one or two roots at a time.
The indirect or iterative methods are further divided into two categories: bracketing and open methods. The bracketing methods require the limits between which the root lies, whereas the open methods require only an initial estimate of the solution. The Bisection and False Position methods are two well-known examples of the bracketing methods. Among the open methods, the Newton-Raphson method and the method of successive approximation are most commonly used. The most popular method for solving a non-linear equation is the Newton-Raphson method, which has a high rate of convergence to a solution.
In this chapter, we present the following indirect or iterative methods with illustrative examples:
1. Bisection Method
2. Method of False Position (Regula Falsi Method)
3. Secant (Successive Approximation) Method.
4. Newton-Raphson Method (Newton’s method)
2.1. Bisection Method of Solving a Nonlinear Equation
After reading this topic, you should be able to:
1. follow the algorithm of the bisection method of solving a nonlinear equation,
2. use the bisection method to solve examples of finding roots of a nonlinear equation, and
3. enumerate the advantages and disadvantages of the bisection method.
What is the bisection method and what is it based on?
One of the first numerical methods developed to find the root of a nonlinear equation f(x) = 0 was the bisection method (also called the binary-search method). The method is based on the following theorem.
Theorem
Let f(x) = 0 be an equation, where f(x) is a real continuous function.
If f(xℓ) f(xu) < 0, then there is at least one root between xℓ and xu (see Figure 1).
If f(xℓ) f(xu) > 0, there may or may not be any root between xℓ and xu (Figures 2 and 3).
If f(xℓ) f(xu) < 0, then there may be more than one root between xℓ and xu (Figure 4).
So the theorem only guarantees one root between xℓ and xu.
[Figure 1: At least one root exists between the two points xℓ and xu if the function is real, continuous, and changes sign.]
[Figure 2: The function does not change sign between the two points; roots of the equation may still exist between the two points.]
[Figure 3: Plot of f(x) between xℓ and xu; caption not recovered.]
[Figure 4: If the function f(x) changes sign between the two points, more than one root for the equation f(x) = 0 may exist between the two points.]
Since the method is based on finding the root between two points, the method falls under the category of bracketing methods.
Since the root is bracketed between two points, xℓ and xu, one can find the mid-point, xm = (xℓ + xu)/2, between xℓ and xu. This gives us two new intervals
1. xℓ and xm, and
2. xm and xu.
Is the root now between xℓ and xm, or between xm and xu? Well, one can find the sign of f(xℓ) f(xm): if f(xℓ) f(xm) < 0, then the new bracket is between xℓ and xm; otherwise, it is between xm and xu. So, you can see that you are literally halving the interval. As one repeats this process, the width of the interval [xℓ, xu] becomes smaller and smaller, and you can zero in on the root of the equation f(x) = 0.
The algorithm for the bisection method is given as follows.
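In code form, the repeated-halving procedure just described can be sketched as follows (a minimal Python illustration, not the textbook's algorithm listing; the function name, tolerance default, and the initial bracket [50, 100], inferred from the worked example, are my own choices):

```python
def bisection(f, xl, xu, tol_pct=0.5e-2, max_iter=100):
    """Bisection: repeatedly halve [xl, xu], keeping the half where f changes sign."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xm_old = xl
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0                    # mid-point of the current bracket
        if f(xl) * f(xm) < 0:
            xu = xm                             # root lies in [xl, xm]
        else:
            xl = xm                             # root lies in [xm, xu]
        ea = abs((xm - xm_old) / xm) * 100      # absolute relative approximate error, %
        if ea < tol_pct:
            break
        xm_old = xm
    return xm

# Worked example below: f(n) = 40 n^1.5 - 875 n + 35000, bracket [50, 100]
root = bisection(lambda n: 40 * n**1.5 - 875 * n + 35000, 50.0, 100.0)
```

The first few mid-points (75, 62.5, 68.75, ...) match the hand iterations in this section.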
Iteration 2
The estimate of the root is
nm = (nℓ + nu)/2 = (50 + 75)/2 = 62.5
f(nm) = f(62.5) = 40(62.5)^1.5 - 875(62.5) + 35000 = 76.735
f(nℓ) f(nm) = f(50) f(62.5) = (5392.1)(76.735) > 0
Hence, the root is bracketed between nm and nu, that is, between 62.5 and 75. So the lower and upper limits of the new bracket are
nℓ = 62.5, nu = 75
The absolute relative approximate error, |εa|, at the end of Iteration 2 is
|εa| = |(nm_new - nm_old) / nm_new| × 100 = |(62.5 - 75)/62.5| × 100 = 20%
None of the significant digits are at least correct in the estimated root nm = 62.5, as the absolute relative approximate error is greater than 5%.
Iteration 3
The estimate of the root is
nm = (nℓ + nu)/2 = (62.5 + 75)/2 = 68.75
f(nm) = f(68.75) = 40(68.75)^1.5 - 875(68.75) + 35000 = -2.3545 × 10^3
f(nℓ) f(nm) = f(62.5) f(68.75) = (76.735)(-2.3545 × 10^3) < 0
Hence, the root is bracketed between nℓ and nm, that is, between 62.5 and 68.75. So the lower and upper limits of the new bracket are
nℓ = 62.5, nu = 68.75
The absolute relative approximate error, |εa|, at the end of Iteration 3 is
|εa| = |(68.75 - 62.5)/68.75| × 100 = 9.0909%
At the end of the 10th iteration,
|εa| = 0.077942%
Hence the number of significant digits at least correct is given by the largest value of m for which
|εa| ≤ 0.5 × 10^(2-m)
0.077942 ≤ 0.5 × 10^(2-m)
0.15588 ≤ 10^(2-m)
log(0.15588) ≤ 2 - m
m ≤ 2 - log(0.15588) = 2.8072
So
m = 2
The number of significant digits at least correct in the estimated root 62.646 is 2.
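The significant-digit criterion used above, |εa| ≤ 0.5 × 10^(2-m), can be solved for m directly. A small sketch (the function name is my own):

```python
import math

def sig_digits_at_least_correct(ea_pct):
    """Largest integer m with |ea| <= 0.5 * 10**(2 - m), for ea given in percent."""
    m = 2 - math.log10(ea_pct / 0.5)   # solve the inequality for m
    return max(0, math.floor(m))       # at most floor(m) digits; never negative

sig_digits_at_least_correct(0.077942)  # -> 2, matching the 10th-iteration result above
```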
[Figure 1: False-Position Method, showing f(xL), f(xU), the estimate xr, and the exact root on the x-axis.]
Hence
f(xL) f(xU) = f(0) f(0.11) = (3.993 × 10^-4)(-2.662 × 10^-4) < 0
Therefore, there is at least one root between xL and xU, that is, between 0 and 0.11.
Iteration 1
The estimate of the root is
xr = [xU f(xL) - xL f(xU)] / [f(xL) - f(xU)]
   = [0.11 × (3.993 × 10^-4) - 0 × (-2.662 × 10^-4)] / [3.993 × 10^-4 - (-2.662 × 10^-4)]
   = 0.0660
f(xr) = f(0.0660) = (0.0660)^3 - 0.165(0.0660)^2 + 3.993 × 10^-4 = -3.1944 × 10^-5
f(xL) f(xr) = f(0) f(0.0660) < 0
Hence, the root is bracketed between xL and xr, that is, between 0 and 0.0660. So, the lower and upper limits of the new bracket are xL = 0, xU = 0.0660, respectively.
Iteration 2
The estimate of the root is
xr = [xU f(xL) - xL f(xU)] / [f(xL) - f(xU)]
   = [0.0660 × (3.993 × 10^-4) - 0 × (-3.1944 × 10^-5)] / [3.993 × 10^-4 - (-3.1944 × 10^-5)]
   = 0.0611
The absolute relative approximate error for this iteration is
|εa| = |(0.0611 - 0.0660)/0.0611| × 100 = 8.00%
f(xr) = f(0.0611) = (0.0611)^3 - 0.165(0.0611)^2 + 3.993 × 10^-4 = 1.1320 × 10^-5
f(xL) f(xr) = f(0) f(0.0611) > 0
Hence, the root is bracketed between xr and xU, that is, between 0.0611 and 0.0660. So the lower and upper limits of the new bracket are xL = 0.0611, xU = 0.0660.
Iteration 3
The estimate of the root is
xr = [xU f(xL) - xL f(xU)] / [f(xL) - f(xU)]
   = [0.0660 × (1.1320 × 10^-5) - 0.0611 × (-3.1944 × 10^-5)] / [1.1320 × 10^-5 - (-3.1944 × 10^-5)]
   = 0.0624
The absolute relative approximate error for this iteration is
|εa| = |(0.0624 - 0.0611)/0.0624| × 100 = 2.05%
f(xr) = f(0.0624) = -1.1313 × 10^-7
f(xL) f(xr) = f(0.0611) f(0.0624) < 0
Hence, the lower and upper limits of the new bracket are xL = 0.0611, xU = 0.0624.
All iteration results are summarized in Table 1. To find how many significant digits are at least correct in the last iterative value:
|εa| ≤ 0.5 × 10^(2-m)
2.05 ≤ 0.5 × 10^(2-m)
m ≤ 1.387
The number of significant digits at least correct in the estimated root of 0.0624 at the end of the 3rd iteration is 1.
Table 1 Root of f(x) = x^3 - 0.165x^2 + 3.993 × 10^-4 = 0 for the false-position method.

Iteration    xL        xU        xr        |εa| %    f(xr)
1            0.0000    0.1100    0.0660    ----      -3.1944 × 10^-5
2            0.0000    0.0660    0.0611    8.00       1.1320 × 10^-5
3            0.0611    0.0660    0.0624    2.05      -1.1313 × 10^-7
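Table 1 can be reproduced with a short sketch of the false-position update (a minimal Python illustration; the function and starting bracket come from the example above, while the helper name and iteration count are my own):

```python
def false_position(f, xl, xu, iters=3):
    """Regula falsi: replace the mid-point of bisection with the x-intercept
    of the straight line joining (xl, f(xl)) and (xu, f(xu))."""
    for _ in range(iters):
        xr = (xu * f(xl) - xl * f(xu)) / (f(xl) - f(xu))  # x-intercept of the chord
        if f(xl) * f(xr) < 0:
            xu = xr          # root lies in [xl, xr]
        else:
            xl = xr          # root lies in [xr, xu]
    return xr

f = lambda x: x**3 - 0.165 * x**2 + 3.993e-4
root = false_position(f, 0.0, 0.11)   # successive xr: 0.0660, 0.0611, then about 0.0624
```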
Example 2
Find the root of f(x) = x^4 - x - 2 = 0, using the initial guesses of xL = 2.5 and xU = 1.0, and a pre-specified relative error tolerance.
In the Newton-Raphson method, the root is not bracketed. In fact, only one initial guess of the root is
needed to get the iterative process started to find the root of an equation. The method hence falls in the category of
open methods. Convergence in open methods is not guaranteed but if the method does converge, it does so much
faster than the bracketing methods.
Derivation
The Newton-Raphson method is based on the principle that if the initial guess of the root of f(x) = 0 is at xi, and one draws the tangent to the curve at (xi, f(xi)), then the point xi+1 where the tangent crosses the x-axis is an improved estimate of the root (Figure 1).
Using the definition of the slope of a function, at x = xi
f'(xi) = tan θ = [f(xi) - 0] / [xi - xi+1]
This gives
xi+1 = xi - f(xi)/f'(xi)    (1)
Equation (1) is called the Newton-Raphson formula for solving nonlinear equations of the form f(x) = 0. So, starting with an initial guess, xi, one can find the next guess, xi+1, by using Equation (1). One can repeat this process until one finds the root within a desirable tolerance.
Algorithm
The steps of the Newton-Raphson method to find the root of an equation f(x) = 0 are:
1. Evaluate f'(x) symbolically.
2. Use an initial guess of the root, xi, to estimate the new value of the root, xi+1, as
   xi+1 = xi - f(xi)/f'(xi)
3. Find the absolute relative approximate error as
   |εa| = |(xi+1 - xi)/xi+1| × 100
4. Compare the absolute relative approximate error with the pre-specified relative error tolerance, εs. If |εa| > εs, then go to Step 2; else stop the algorithm. Also, check whether the number of iterations has exceeded the maximum number of iterations allowed. If so, terminate the algorithm and notify the user.
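The four steps above can be sketched in Python (a minimal illustration, not the textbook's code; the function name and the tolerance default are my own, and the usage line borrows the equation from the worked example in this section):

```python
def newton_raphson(f, fprime, x0, tol_pct=0.05, max_iter=50):
    """Newton-Raphson: repeatedly apply x_{i+1} = x_i - f(x_i)/f'(x_i), Equation (1)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)           # Equation (1)
        ea = abs((x_new - x) / x_new) * 100    # absolute relative approximate error, %
        x = x_new
        if ea < tol_pct:                       # stop once within tolerance
            break
    return x

# Worked example below: f(n) = 40 n^1.5 - 875 n + 35000, f'(n) = 60 n^0.5 - 875
f  = lambda n: 40 * n**1.5 - 875 * n + 35000
fp = lambda n: 60 * n**0.5 - 875
root = newton_raphson(f, fp, 50.0)   # iterates 61.963, 62.689, 62.692, ...
```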
[Figure 1: Geometrical illustration of the Newton-Raphson method; the tangent at (xi, f(xi)) crosses the x-axis at xi+1, and the next tangent gives xi+2.]
Solution
f(n) = 40n^1.5 - 875n + 35000 = 0
f'(n) = 60n^0.5 - 875
Let us take the initial guess of the root of f(n) = 0 as n0 = 50.
Iteration 1
The estimate of the root is
n1 = n0 - f(n0)/f'(n0)
   = 50 - [40(50)^1.5 - 875(50) + 35000] / [60(50)^0.5 - 875]
   = 50 - 5392.1/(-450.74)
   = 50 + 11.963
   = 61.963
The absolute relative approximate error |εa| at the end of Iteration 1 is
|εa| = |(n1 - n0)/n1| × 100 = |(61.963 - 50)/61.963| × 100 = 19.307%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of less than 5% for one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
n2 = n1 - f(n1)/f'(n1)
   = 61.963 - [40(61.963)^1.5 - 875(61.963) + 35000] / [60(61.963)^0.5 - 875]
   = 61.963 - 292.45/(-402.70)
   = 61.963 + 0.72623
   = 62.689
The absolute relative approximate error |εa| at the end of Iteration 2 is
|εa| = |(n2 - n1)/n2| × 100 = |(62.689 - 61.963)/62.689| × 100 = 1.1585%
The number of significant digits at least correct is 1, because the absolute relative approximate error is less than 5%.
Iteration 3
The estimate of the root is
n3 = n2 - f(n2)/f'(n2)
   = 62.689 - [40(62.689)^1.5 - 875(62.689) + 35000] / [60(62.689)^0.5 - 875]
   = 62.689 - 1.0031/(-399.94)
   = 62.689 + 2.5080 × 10^-3
   = 62.692
The absolute relative approximate error |εa| at the end of Iteration 3 is
|εa| = |(62.692 - 62.689)/62.692| × 100 = 0.0048%
For x0 = 0 or x0 = 0.02, division by zero occurs (Figure 4). For an initial guess close to 0.02, such as x0 = 0.01999, one may avoid division by zero, but the denominator in the formula is then a small number. In this case, as given in Table 2, even after 9 iterations the Newton-Raphson method does not converge.
Table 2 Division by near zero in Newton-Raphson method

Iteration    xi           f(xi)             |εa| %
0             0.019990    -1.6000 × 10^-6   ----
1            -2.6480     -18.778            100.75
2            -1.7620      -5.5638            50.282
3            -1.1714      -1.6485            50.422
4            -0.77765     -0.48842           50.632
5            -0.51518     -0.14470           50.946
6            -0.34025     -0.042862          51.413
7            -0.22369     -0.012692          52.107
8            -0.14608     -0.0037553         53.127
9            -0.094490    -0.0011091         54.602

[Figure 4: Plot of f(x) near the division-by-zero points; x from -0.03 to 0.04, y from -1.00 × 10^-5 to 1.00 × 10^-5.]
Iteration    xi          f(xi)     |εa| %
0            -1.0000      3.00     ----
1             0.5         2.25     300.00
2            -1.75        5.063    128.571
3            -0.30357     2.092    476.47
4             3.1423     11.874    109.66
5             1.2529      3.570    150.80
6            -0.17166     2.029    829.88
7             5.7395     34.942    102.99
8             2.6955      9.266    112.93
9             0.97678     2.954    175.96
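The iterates in this table are consistent with applying the Newton-Raphson formula to f(x) = x^2 + 2 = 0, which has no real root, so the method oscillates without converging. A short sketch, assuming that function (the helper name is my own), reproduces the first rows:

```python
def newton_iterates(f, fprime, x0, n):
    """Return [x0, x1, ..., xn] from repeated application of Equation (1)."""
    xs = [x0]
    for _ in range(n):
        x0 = x0 - f(x0) / fprime(x0)   # Equation (1)
        xs.append(x0)
    return xs

# f(x) = x^2 + 2 has no real root; the iterates bounce around indefinitely
xs = newton_iterates(lambda x: x**2 + 2, lambda x: 2 * x, -1.0, 3)
# xs[1:] -> 0.5, -1.75, -0.30357... (matching rows 1-3 of the table)
```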
[Figure: Geometrical representation of the secant method; the secant line through (xi-1, f(xi-1)) and (xi, f(xi)) crosses the x-axis at xi+1.]
Iteration 1
The estimate of the root is
n1 = n0 - f(n0)(n0 - n-1) / [f(n0) - f(n-1)]
   = 50 - [40(50)^1.5 - 875(50) + 35000](50 - 25) / {[40(50)^1.5 - 875(50) + 35000] - [40(25)^1.5 - 875(25) + 35000]}
   = 60.587
The absolute relative approximate error |εa| at the end of Iteration 1 is
|εa| = |(n1 - n0)/n1| × 100 = |(60.587 - 50)/60.587| × 100 = 17.474%
The number of significant digits at least correct is 0, as you need an absolute relative approximate error of less than 5% for one significant digit to be correct in your result.
Iteration 2
The estimate of the root is
n2 = n1 - f(n1)(n1 - n0) / [f(n1) - f(n0)]
   = 60.587 - [40(60.587)^1.5 - 875(60.587) + 35000](60.587 - 50) / {[40(60.587)^1.5 - 875(60.587) + 35000] - [40(50)^1.5 - 875(50) + 35000]}
   = 62.569
The absolute relative approximate error |εa| at the end of Iteration 2 is
|εa| = |(n2 - n1)/n2| × 100 = |(62.569 - 60.587)/62.569| × 100 = 3.1677%
Iteration 3
The estimate of the root is
n3 = n2 - f(n2)(n2 - n1) / [f(n2) - f(n1)]
   = 62.569 - [40(62.569)^1.5 - 875(62.569) + 35000](62.569 - 60.587) / {[40(62.569)^1.5 - 875(62.569) + 35000] - [40(60.587)^1.5 - 875(60.587) + 35000]}
   = 62.690
The absolute relative approximate error |εa| at the end of Iteration 3 is
|εa| = |(n3 - n2)/n3| × 100 = |(62.690 - 62.569)/62.690| × 100 = 0.19425%
The number of significant digits at least correct is 2, because the absolute relative approximate error is less than 0.5%.
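The full secant iteration for this example can be sketched as follows (a minimal Python illustration; the function name, tolerance default, and stopping rule are my own choices):

```python
def secant(f, x_prev, x_curr, tol_pct=0.5, max_iter=50):
    """Secant method: replace f'(x) in the Newton-Raphson formula with the
    slope of the secant line through the last two iterates."""
    for _ in range(max_iter):
        x_next = x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))
        ea = abs((x_next - x_curr) / x_next) * 100   # absolute relative approximate error, %
        x_prev, x_curr = x_curr, x_next              # shift the two-point window
        if ea < tol_pct:
            break
    return x_curr

# Worked example above: initial guesses n-1 = 25, n0 = 50
f = lambda n: 40 * n**1.5 - 875 * n + 35000
root = secant(f, 25.0, 50.0)   # iterates 60.587, 62.569, 62.690, ...
```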