Program: BSChE
x = (−b ± √(b² − 4ac)) / (2a)    (1.1)

whenever we cannot find the factors of f(x). We can observe from Eq. 1.0 that f(x) = 0, which is the reason the x values can be calculated through Eq. 1.1. The "roots" of Eq. 1.0 are the values of x that satisfy it, also referred to as the zeroes of the equation.
Nevertheless, these roots are not limited to the quadratic formula: in engineering and the sciences, root-finding applications range from heat, mass, force, and energy balances to Newton's laws of motion and Kirchhoff's laws. By studying root location, we gain a better understanding of how to apply it to real-world problems.
Graphically, the roots are portrayed in Fig. 1.0
There are two main approaches to root location. The first focuses on bracketing
methods for finding roots. These methods start with guesses that bracket, or contain, the root
and then systematically reduce the width of the bracket. Two specific methods are covered:
bisection and false position. Graphical methods are used to provide visual insight into the
techniques. Error formulations are developed to help you determine how much computational
effort is required to estimate the root to a prespecified level of precision. The second covers
open methods. These methods also involve systematic trial-and-error iterations but do not
require that the initial guesses bracket the root. We will discover that these methods are usually
more computationally efficient than bracketing methods but that they do not always work. We
illustrate several open methods including the fixed-point iteration, Newton-Raphson, and
secant methods.
A. Bisection Method
1. Theoretical Background
The bisection method, which is alternatively called binary chopping, interval halving,
or Bolzano’s method, is one type of incremental search method in which the interval is always
divided in half. If a function changes sign over an interval, the function value at the midpoint
is evaluated. The location of the root is then determined as lying at the midpoint of the
subinterval within which the sign change occurs. The process is repeated to obtain refined
estimates.
We attempt to find by inspection, or trial and error, two values of x, call them xl and
xu, such that f(xl) and f(xu) have different signs, i.e. f(xl)f(xu) < 0. If we can find two such
values, the root must lie somewhere in the interval between them, since f(x) changes sign on
this interval (Fig. 2A).
In the Bisection method, we estimate the root by xr, where xr is the mid-point of the
interval [xl,xu],
xr = (xl + xu)/2 (2A.1)
Then if f(xr) has the same sign as f(xl), as drawn in the figure, the root clearly lies
between xr and xu. We must then redefine the left-hand end of the interval as having the value
of xr, so we let the new value of xl be xr. Otherwise, if f(xr) and f(xl) have different signs, we let
the new value of xu be xr, since the root must lie between xl and xr in that case. Having
redefined xl or xu, as the case may be, we bisect the new interval again according to Equation
(2A.1) and repeat the process until the distance between xl and xu is as small as we please.
We can calculate how many bisections are needed to obtain a certain accuracy, given
initial values of xl and xu. Suppose we start with xl=a, and xu=b. After the first bisection the
worst possible error (E1) in xr is E1=|a−b|/2, since we are estimating the root as being at the
mid-point of the interval [a,b]. The worst that can happen is that the root is actually at xl or xu,
in which case the error is E1. Carrying on like this, after n bisections the worst possible error
En is given by En = |a − b|/2^n. If we want to be sure that this is less than some specified error E,
we must see to it that n satisfies the inequality |a − b|/2^n < E.
n > log(|a − b|/E) / log(2)    (2A.2)
Since n is the number of bisections, it must be an integer. The smallest integer n that
exceeds the right-hand side of Inequality (Eq. 2A.2) will do as the maximum number of
bisections required to guarantee the given accuracy E.
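Inequality (2A.2) can be turned into a one-line calculation. The sketch below, a Python stand-in for the report's MATLAB (the function name is illustrative), returns the smallest integer n that strictly exceeds the right-hand side:

```python
import math

def bisections_needed(a, b, E):
    """Smallest integer n with |a - b|/2**n < E (Inequality 2A.2)."""
    return math.floor(math.log(abs(a - b) / E) / math.log(2)) + 1

# e.g. bracketing a root in [0, 1] to within E = 0.001:
n = bisections_needed(0.0, 1.0, 1e-3)   # 1/2**10 = 0.0009766 < 0.001
```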
2. Numerical Analysis
General Algorithm:
Step 1. Start
Step 2. Input the values a, b and maxerr.
*a and b are the lower and upper boundaries, respectively, and maxerr is the absolute
maximum error used as the stopping criterion.*
Step 3. Compute c = (a + b)/2
Step 4. Test for the accuracy of c
if |f(c)| > maxerr
test = f(a)*f(c)
if test < 0, then b = c;
elseif test > 0, then a= c;
else maxerr = 0,
Step 5. Display the required root as c
Step 6. Stop
General Flowchart:
START: defines the starting point of the program.
a = 0, b = 0, maxerr = 0: preparation or initialization of memory space for data processing; it also represents instructions that will modify the program's course of execution.
Input a, b, maxerr and other parameters: putting in the parameters and the stopping criteria.
c = (a + b)/2: computing the root c using the bisection method's general equation.
Print c as solution: display of the output c.

General M-file:
function y = BisectionMfile(f,a,b,maxerr)
c = (a+b)/2;
disp('    xl        xu        xr');
disp([a b c]);
iter = 0;                       % iteration counter
while abs(f(c)) > maxerr
    test = f(a)*f(c);
    if test < 0                 % root lies in [a, c]
        b = c;
    elseif test > 0             % root lies in [c, b]
        a = c;
    else                        % f(c) is exactly zero
        maxerr = 0;
    end
    c = (a+b)/2;
    disp([a b c]);
    iter = iter + 1;
    if iter == 50               % safeguard against non-termination
        break;
    end
end
disp(['Root is x = ' num2str(c)]);
y = c;
B. False Position
1. Theoretical Background
A shortcoming of the bisection method is that, in dividing the interval from xl to xu
into equal halves, no account is taken of the magnitudes of f(xl) and f(xu). For example, if f(xl)
is much closer to zero than f(xu), it is likely that the root is closer to xl than to xu (Figure 2B).
An alternative method that exploits this graphical insight is to join f(xl) and f(xu) by a straight
line. The intersection of this line with the x axis represents an improved estimate of the root.
The fact that the replacement of the curve by a straight line gives a “false position” of the root
is the origin of the name, method of false position, or in Latin, regula falsi. It is also called the
linear interpolation method.
2. Numerical Analysis
General Algorithm:
Step 1. Start
Step 2. Input the values xl, xu and ea.
*xl and xu are the lower and upper boundaries of x, respectively, and ea is the absolute
maximum error used as the stopping criterion.*
Step 3. Compute xr = xu − f(xu)(xl − xu)/(f(xl) − f(xu))
Step 4. Test for the accuracy of xr
if |f(xr)| > ea
test = f(xl)*f(xr)
if test < 0, then xu = xr;
elseif test > 0, then xl = xr;
else ea = 0,
Step 5. Display the required root as xr
Step 6. Stop
General Flowchart:
General M-file:
% uses the false position method to find the root of the function func
%You would require to run it in command window as
% FalsePos(@(x)("Enter the Function"),xl,xu,ea);
% xl, xu = lower and upper guesses
% ea = approximate relative error (%)
function y = FalsePos(f,xl,xu,ea)
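The algorithm above can also be sketched in Python as a stand-in for the MATLAB M-file (the function name, default tolerance, and iteration cap are illustrative choices):

```python
def false_position(f, xl, xu, ea=1e-6, max_iter=100):
    """Regula falsi: replace the curve by the chord through
    (xl, f(xl)) and (xu, f(xu)) and take its x-intercept as xr."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(max_iter):
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if abs(f(xr)) <= ea:          # stopping criterion on |f(xr)|
            break
        if f(xl) * f(xr) < 0:         # root lies in [xl, xr]
            xu = xr
        else:                         # root lies in [xr, xu]
            xl = xr
    return xr
```

For example, `false_position(lambda x: x**2 - 2, 1.0, 2.0)` homes in on √2.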
III. OPEN METHODS
Open methods described in this chapter require only a single starting value or two
starting values that do not necessarily bracket the root. As such, they sometimes diverge or
move away from the true root as the computation progresses. However, when the open methods
converge they usually do so much more quickly than the bracketing methods. We will begin
our discussion of open techniques with a simple approach that is useful for illustrating their
general form.
A. Fixed-Point Iteration
1. Theoretical Background
The iteration method, also known as the fixed-point iteration method, is one of the most
popular approaches for finding the real roots of a nonlinear function. It requires just one initial
guess, and its rate of convergence is linear.
As just mentioned, open methods employ a formula to predict the root. Such a formula can be
developed for simple fixed-point iteration (or, as it is also called, one-point iteration or
successive substitution) by rearranging the function f(x) = 0 so that x is on the left-hand side
of the equation:

x = g(x)    (3A.1)

This transformation can be accomplished either by algebraic manipulation or by simply adding
x to both sides of the original equation. The utility of Eq. 3A.1 is that it provides a formula to
predict a new value of x as a function of an old value of x. Thus, given an initial guess at the
root x(i), Eq. 3A.1 can be used to compute a new estimate x(i+1) as expressed by the iterative
formula:

x(i+1) = g(x(i))    (3A.2)
As with many other iterative formulas in this book, the approximate error for this equation can
be determined using the error estimator:

ea = |(x(i+1) − x(i)) / x(i+1)| × 100%    (3A.3)
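The iterative formula and the error estimator can be combined into a short Python sketch (the function name, tolerance, and the example equation x = cos(x) are illustrative, not from the text):

```python
import math

def fixed_point(g, x0, tol=1e-4, max_iter=50):
    """Iterate x(i+1) = g(x(i)); stop when the approximate percent
    error of Eq. 3A.3 drops below tol (given in percent)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        ea = abs((x_new - x) / x_new) * 100   # Eq. 3A.3
        x = x_new
        if ea < tol:
            break
    return x

# x = cos(x) has a fixed point near 0.739085; |g'(x)| < 1 there,
# so the iteration converges
root = fixed_point(math.cos, 1.0)
```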
2. Numerical Analysis
General Algorithm:
Step 1. Start
Step 2. Input x(1), n and other given parameters.
*Where x(1) is the value of the initial guess and n is the number of iterations.*
Step 3. Compute x(n+1) = g(x(n))
General Flowchart:
General M-file:
B. Newton-Raphson Method
1. Theoretical Background
Figure 3B
In Figure 3B, the Newton-Raphson method can be derived by using the slope (tangent)
of the function f(x) at the current iterative solution x(i) to find the solution x(i+1) in the next
iteration. The slope at (x(i), f(x(i))) is given by

f'(x(i)) = (f(x(i)) − 0) / (x(i) − x(i+1))    (3B.1)

and rearranged to

x(i+1) = x(i) − f(x(i))/f'(x(i)),  i = 0, 1, ...    (3B.2)
This is called the Newton-Raphson formula.
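Eq. 3B.2 translates directly into code. The Python sketch below is a stand-in for the MATLAB M-file that follows (names, tolerance, and the x² − 2 example are illustrative):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Eq. 3B.2: x(i+1) = x(i) - f(x(i))/f'(x(i))."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:   # stop when the step is tiny
            return x_new
        x = x_new
    return x

# illustrative example: the positive root of x^2 - 2
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.5)
```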
2. Numerical Analysis
General Algorithm:
Step 1. Start
Step 2. Input the values of a, maxerr and other given parameters.
*Here, a is the value of the initial guess while maxerr is the stopping criterion,
i.e. the absolute error or the desired degree of accuracy.*
Step 3. Compute f(a) and df(a)
Step 4. Compute c = a − f(a)/df(a)
Step 5. Test for the accuracy of c
If |f(c)| > maxerr,
then assign a = c
go to step 4
Else,
go to step 6
Step 6. Display the required root as c.
Step 7. Stop.
General Flowchart:

General M-file:
function y = NewtraphMfile(f,df,a,maxerr)
c = a - f(a)/df(a);
disp('    Xn       f(Xn)     df(Xn)    Xn+1');
disp([a f(a) df(a) c]);
iter = 0;                       % iteration counter
while abs(f(c)) > maxerr
    a = c;
    c = a - f(a)/df(a);
    disp([a f(a) df(a) c]);
    iter = iter + 1;
    if iter == 100              % safeguard against non-termination
        break;
    end
end
y = c;

C. Secant Method
1. Theoretical Background
A potential problem in implementing the Newton-Raphson method is the evaluation of
the derivative. Although this is not inconvenient for polynomials and many other functions,
there are certain functions whose derivatives may be difficult or inconvenient to evaluate. For
these cases, the derivative can be approximated by a backward finite divided difference:
f'(x(i)) ≈ (f(x(i−1)) − f(x(i))) / (x(i−1) − x(i))    (3C.1)

This approximation can be substituted into the Newton-Raphson formula (Eq. 3B.2) to yield the following iterative equation:

x(i+1) = x(i) − f(x(i))(x(i−1) − x(i)) / (f(x(i−1)) − f(x(i)))    (3C.2)
Equation (3C.2) is the formula for the secant method. Notice that the approach requires two
initial estimates of x. However, because f (x) is not required to change signs between the
estimates, it is not classified as a bracketing method.
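Eq. 3C.2 can be sketched in Python as follows (names, tolerance, and the x² − 2 example are illustrative, not from the text):

```python
def secant(f, x_prev, x_curr, tol=1e-10, max_iter=100):
    """Eq. 3C.2: Newton-Raphson with f' replaced by a backward
    finite divided difference through the last two iterates."""
    for _ in range(max_iter):
        f_prev, f_curr = f(x_prev), f(x_curr)
        if f_prev == f_curr:       # flat chord: cannot divide
            break
        x_next = x_curr - f_curr * (x_prev - x_curr) / (f_prev - f_curr)
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

# two initial estimates that need not bracket the root
root = secant(lambda x: x**2 - 2, 1.0, 2.0)
```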
2. Numerical Analysis
General Algorithm:
Step 1. Start
Step 2. Input the values a, b, maxerr and other parameters.
*Here a and b are the two initial guesses, maxerr is the stopping criteria,
absolute error or the desired degree of accuracy.*
Step 3. Compute f(a) and f(b).
Step 4. Compute c = (a·f(b) − b·f(a)) / (f(b) − f(a))
Step 5. Test for the accuracy of c
If |f(c)| > maxerr,
then assign a = b and b = c
go to step 4
Else,
go to step 6
Step 6. Display the required root as c.
Step 7. Stop.
General Flowchart:
START: defines the starting point of the program.
a = 0, b = 0, c = 0, maxerr = 0: preparation or initialization of memory space for data processing; it also represents instructions that will modify the program's course of execution.
STOP: defines the end point of the program.

General M-file:
function y = SecantMfile(f,a,b,maxerr)
c = (a*f(b) - b*f(a))/(f(b) - f(a));
iter = 0;                       % iteration counter
while abs(f(c)) > maxerr
    a = b;
    b = c;
    c = (a*f(b) - b*f(a))/(f(b) - f(a));
    iter = iter + 1;
    if iter == 100              % safeguard against non-termination
        break;
    end
end
y = c;
IV. GENERALIZATION
The two main approaches to determining the roots of nonlinear equations are the
bracketing methods and the open methods. The bracketing methods require two initial guesses
that bracket the root and then systematically reduce the width of the bracket; convergence is
guaranteed. Graphical methods are usually used to provide visual insight into the techniques.
Two specific bracketing methods are the bisection method and false position. The bisection
method converges slowly compared with the other methods and is unsuitable for roots of even
multiplicity. False position converges faster than bisection in most cases, but it exhibits very
slow convergence when the function has significant curvature, and like bisection it is
unsuitable for roots of even multiplicity.
Open methods require only a single starting value or two starting values that do not
necessarily bracket the root. As such, they sometimes diverge or move away from the true root
as the computation progresses. However, when the open methods converge, they usually do so
much more quickly than the bracketing methods. Among these are the Single-Fixed Point
Iteration, Newton-Raphson, and Secant methods.
The Single-Fixed Point Iteration converges only when |g'(x)| < 1, hence it is
unsuitable for functions with |g'(x)| ≥ 1 near the root. The Newton-Raphson method is the most
commonly used method. When it converges, it does so faster than the previous methods.
However, the derivative f' is sometimes not easy to find and needs individually designed
program code, and in some cases the method may diverge or exhibit poor convergence. The
Newton-Raphson method is unsuitable for functions where the root lies at an inflection point,
when x(i) lies near a local optimum, or when the tangent at x(i) has zero or near-zero slope. The
secant method, on the other hand, was devised to ease the difficulties of the Newton-Raphson
method by replacing the derivative f' with a difference quotient. It uses two initial guesses with
no restriction on the sign of the function at these guesses, so it does not belong to the bracketing
methods. When this method converges, it does so relatively fast; generally, it is the fastest
among the five methods. Nonetheless, similar to Newton-Raphson, it may diverge or exhibit
poor convergence when the two initial guesses are not selected carefully, and the functions for
which the secant method is unsuitable are the same as for Newton-Raphson.
However computationally efficient these five methods are, they do not always work.
By comparing their results and their true percentage relative errors, we can estimate which
computed root is nearest the true root of a nonlinear equation.
V. NUMERICAL EXAMPLES
Problem 1
Use bisection method and false position to determine the drag coefficient needed so that an 80-
kg bungee jumper has a velocity of 36 m/s after 4s of free fall. Note: The acceleration of gravity
is 9.81m/s2. Start with initial guesses of xl =0.1 and xu =0.2 and iterate until the approximate
relative error falls below 2%.
(a) Mathematical Model
Where:
cd = drag coefficient
g = acceleration due to gravity
v(t) = velocity
t = time of free fall
m = mass of the bungee jumper

f(cd) = sqrt(g·m/cd) · tanh(sqrt(g·cd/m) · t) − v(t)

Substituting the given values m = 80 kg, t = 4 s and v(t) = 36 m/s,

f(cd) = sqrt(9.81(80)/cd) · tanh(sqrt(9.81·cd/80) · (4)) − 36
b) Graph
As shown in the graph above, when mass is 80 kg, the cd is approximately 0.14.
21
(b) Bisection Method
Solution:
The function to evaluate is

f(cd) = sqrt(g·m/cd) · tanh(sqrt(g·cd/m) · t) − v(t)

Substituting the given values,

f(cd) = sqrt(9.81(80)/cd) · tanh(sqrt(9.81·cd/80) · (4)) − 36
Given the general formula of the Bisection Method
xr = (xl + xu)/2
function y = Bisection_Prob1(f,a,b,maxerr)
c=(a+b)/2;
disp(' xl xu xr ');
disp([a b c ]);
iter = 0;
while abs(f(c)) > maxerr
    test = f(a)*f(c);
    if test < 0
        b = c;
    elseif test > 0
        a = c;
    else
        maxerr = 0;
    end
    c = (a+b)/2;
    disp([a b c]);
    iter = iter + 1;
if(iter==50)
break;
end
end
display(['Root is x=' num2str(c)]);
y=c
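As a cross-check on the bracket and the root, the same computation can be sketched in Python (a stand-in for the MATLAB run; the fixed loop count is an illustrative choice):

```python
import math

# f(cd) = sqrt(g*m/cd)*tanh(sqrt(g*cd/m)*t) - v, with m = 80, t = 4, v = 36
def f(cd):
    return math.sqrt(9.81*80/cd) * math.tanh(math.sqrt(9.81*cd/80)*4) - 36

# bisect the bracket [0.1, 0.2] from the problem statement
xl, xu = 0.1, 0.2
for _ in range(50):
    xr = (xl + xu) / 2
    if f(xl) * f(xr) < 0:   # root lies in [xl, xr]
        xu = xr
    else:                   # root lies in [xr, xu]
        xl = xr
```

The bracket is valid because f(0.1) is positive and f(0.2) is negative, and the midpoint estimate settles near cd ≈ 0.14.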
MATLAB® Output:
Summary:
After five iterations, we obtain a root estimate cd of 0.14063 with an approximate
relative percent error still above 2%. Since it is above the stopping criterion, we perform an
additional iteration to bring the approximate relative percent error below 2%.
Hence, for the sixth iteration,

xr = (0.1375 + 0.140625)/2 = 0.1390625

ea = |(0.1390625 − 0.140625)/0.1390625| × 100% = 1.12%

At the 6th iteration the required approximate relative percent error, below 2%, is
finally achieved. The true percentage relative error from the theoretical root is 0.67%.
(c) False Position
Solution:
The first iteration is

xr = 0.2 − (−1.19738)(0.1 − 0.2)/(0.86029 − (−1.19738)) = 0.141809

f(0.1)·f(0.141809) = (0.86029)(−0.03521) = −0.030292

Therefore, the root is in the first subinterval and the upper guess is redefined as xu = 0.141809.
The second iteration is

xr = 0.141809 − (−0.03521)(0.1 − 0.141809)/(0.86029 − (−0.03521)) = 0.140165

ea = |(0.140165 − 0.141809)/0.140165| × 100% = 1.17%
M-file 5.1d: FP_Prob1.m
% uses the false position method to find the root of the function func
% You would require to run it in the command window as
% FP_Prob1(@(x)(sqrt(9.81*80/x)*tanh(sqrt(9.81*x/80)*4)-36),0.1,0.2,0.02);
% xl, xu = lower and upper guesses(0.1,0.2)
% ea = approximate relative error (%)
function y = FP_Prob1(f,xl,xu,ea)
MATLAB® output:
Summary:
After only two iterations we obtain a root estimate of 0.140165 with an approximate
error of 1.17% which is below the stopping criterion of 2%. The true percentage relative error
is equal to 0.12%.
(d) Conclusion
Using graph1.m, the theoretical value of cd is approximately 0.14 at a mass of 80 kg. In
the bisection method, cd is computed at the 6th iteration with a value of 0.1390625, an
approximate error of 1.12% and a true percentage relative error of 0.67%. In false position,
cd is obtained after only two iterations with a value of 0.140165, an approximate error of
1.17% and a true percentage relative error of 0.12%. This shows that the cd value calculated
by false position is more accurate.
Problem 2
Determine the highest real root of f(x) = x³ − 6x² + 11x − 6.1:
(a) Using the Single-Fixed Point Iteration (three iterations, x(i) = 3.5).
(b) Using the Newton-Raphson method (three iterations, x(i) = 3.5).
(c) Using the Secant method (three iterations, x(i−1) = 2.5 and x(i) = 3.5).
(b) Graph:
M-file 5.2b: graph2.m
Figure 2.1: The roots are located approximately at 1.05, 1.9 and 3.05. Within the range
2.5 ≤ x ≤ 3.5, the root is approximately 3.05.
x = (x³ − 6x² − 6.1)/(−11)
M-file 5.2c: FixedPtIter_prob2.m
%% MATLAB m-file for Fixed-Point Iteration %%
%% To find the root of f(x) = x^3-6*x^2+11*x-6.1 near x = 3.5 %%
%% MATLAB's default short format shows 4 digits after the decimal %%
format short;
%% set initial guess - MATLAB requires indices to start at 1 %%
x(1) = 3.5;
disp('    n       x(n)');
for n = 1:3
    x(n+1) = (x(n)^3 - 6*x(n)^2 - 6.1)/(-11);
    disp([n x(n)]);
end
MATLAB® Output:
Summary:
After the third iteration, the calculated root was 3.2514, which has a true percent
relative error of 6.3%.
Applying the Newton-Raphson formula,

x(i+1) = x(i) − (x(i)³ − 6x(i)² + 11x(i) − 6.1)/(3x(i)² − 12x(i) + 11)

which simplifies to

x(i+1) = (2x(i)³ − 6x(i)² + 6.1)/(3x(i)² − 12x(i) + 11)

Starting with an initial value of x(1) = 3.5.
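The iteration can be replayed with a short Python sketch (a stand-in for the MATLAB M-file; the fixed step count is an illustrative choice):

```python
# f and its derivative for Problem 2, as given in the text
f  = lambda x: x**3 - 6*x**2 + 11*x - 6.1
df = lambda x: 3*x**2 - 12*x + 11

x = 3.5                      # initial guess from the problem statement
for _ in range(10):          # a few Newton-Raphson steps
    x = x - f(x) / df(x)
```

The iterates settle on the highest real root near 3.0467, consistent with the output summarized below.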
M-file 5.2d: Newtraph_prob2
c = (a)-(f(a)/df(a));
disp(' Xn f(Xn) df(Xn) Xn+1 ');
disp([a f(a) df(a) c]);
iter = 0;
while abs(f(c)) > maxerr
    a = c;
    c = a - f(a)/df(a);
    disp([a f(a) df(a) c]);
    iter = iter + 1;
if(iter == 100)
break;
end
end
y = c;
MATLAB® Output:
Summary:
After the fourth iteration, the calculated root was 3.0467, which has a true percent
relative error of 0.0001%.
x(i+1) = x(i) − f(x(i))(x(i−1) − x(i)) / (f(x(i−1)) − f(x(i))),  where f(x) = x³ − 6x² + 11x − 6.1
function y = Secant_prob2(f,a,b,maxerr)
if(iter == 100)
break;
end
end
MATLAB® Output:
Summary:
The calculated root, 3.0467, has a true percent relative error of 0.0001% after 7 iterations.
(f) Conclusion
After the third iteration, the calculated root from the Single-Fixed Point Iteration was 3.2514,
which has a true percent relative error of 6.3%. In the Newton-Raphson method, the calculated
root was 3.0467 after the fourth iteration, with a true percent error of 0.0001%. Lastly, the
Secant method reached a calculated root of 3.0467 with a true percent relative error of
0.0001% after 7 iterations. Comparing the three methods, the Single-Fixed Point Iteration has
the least accuracy, while the Newton-Raphson method is the most efficient and accurate.
Problem 3
An oscillating current in an electric circuit is described by I = 9e^(−t) sin(2πt), where t is in
seconds. Determine all values of t such that I = 3.5.
(b) Graph:
M-file 5.3c:graph3.m
(c) Bisection Method:
Solution:
Given the function
f(t) = 9e^(−t) sin(2πt) − 3.5
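Both crossings of I = 3.5 can be located with a quick Python sketch (a stand-in for the MATLAB runs; the brackets [0, 0.2] and [0.2, 0.8] and the loop count are illustrative choices matching the guesses used below):

```python
import math

def f(t):
    # I(t) - 3.5 with I(t) = 9*exp(-t)*sin(2*pi*t)
    return 9*math.exp(-t)*math.sin(2*math.pi*t) - 3.5

def bisect(func, lo, hi, n=60):
    """Plain bisection: assumes func changes sign on [lo, hi]."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if func(lo) * func(mid) < 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

t1 = bisect(f, 0.0, 0.2)   # bracket around the first crossing
t2 = bisect(f, 0.2, 0.8)   # bracket around the second crossing
```

The two estimates land near t ≈ 0.0684 s and t ≈ 0.4013 s, the roots reported below.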
M-file 5.3c: Bisect_prob3a.m
function y = Bisection_Prob3a(f,a,b,maxerr)
c=(a+b)/2;
disp(' xl xu xr ');
disp([a b c ]);
iter = 0;
while abs(f(c)) > maxerr
    test = f(a)*f(c);
    if test < 0
        b = c;
    elseif test > 0
        a = c;
    else
        maxerr = 0;
    end
    c = (a+b)/2;
    disp([a b c]);
    iter = iter + 1;
if(iter==50)
break;
end
end
display(['Root is x=' num2str(c)]);
y=c
M-file 5.3c: Bisect_prob3b.m
function y = Bisection_Prob3b(f,a,b,maxerr)
c=(a+b)/2;
disp(' xl xu xr ');
disp([a b c ]);
iter = 0;
while abs(f(c)) > maxerr
    test = f(a)*f(c);
    if test < 0
        b = c;
    elseif test > 0
        a = c;
    else
        maxerr = 0;
    end
    c = (a+b)/2;
    disp([a b c]);
    iter = iter + 1;
if(iter==50)
break;
end
end
display(['Root is x=' num2str(c)]);
y=c
MATLAB® Outputs:
1st root
2nd root
Summary:
At the 16th iteration the first root is t = 0.0684 seconds, and at the 13th iteration the
2nd root is t = 0.4013 seconds. The computed 1st and 2nd roots are approximately equal to the
theoretical roots, 0.0684 seconds and 0.4013 seconds; both have a true percentage relative
error of 0.0001%.
(d) False Position
Solution:
Given the function
f(t) = 9e^(−t) sin(2πt) − 3.5
M-File 5.3d: FP_Prob3a
%uses the false position method to find the root of the function func
%You would require to run it in command window as
% FP_Prob3a(@(x)(9*exp(-x)*sin(2*pi*x)-3.5),0,0.2,0.0001);
% xl, xu = lower and upper guesses(0,0.2)
% ea = approximate relative error (%)
function y = FP_Prob3a(f,xl,xu,ea)
M-File 5.3d: FP_Prob3b.m
%uses the false position method to find the root of the function func
%You would require to run it in command window as
% FP_Prob3b(@(x)(9*exp(-x)*sin(2*pi*x)-3.5),0.2,0.8,0.0001);
% xl, xu = lower and upper guesses(0.2,0.8)
% ea = approximate relative error (%)
function y = FP_Prob3b(f,xl,xu,ea)
MATLAB® Outputs:
1st root
2nd root
Summary:
The computed 1st and 2nd roots, 0.0690 and 0.4013, have approximate errors of
−5.9618% and 1.9159%, respectively.
sin(2πt) = 3.5/(9e^(−t))

2πt = sin⁻¹(3.5/(9e^(−t)))

t = sin⁻¹(3.5/(9e^(−t)))/(2π)
M-file 5.3e: FixedPtIter_prob3.m
MATLAB® Output:
Summary:
The true percent relative error is 2.12%. The first theoretical root, 0.0684, did not appear
while the second root has an error of 2.18%.
(f) Newton-Raphson Method
Solution:
c = (a)-(f(a)/df(a));
disp(' Xn f(Xn) df(Xn) Xn+1 ');
disp([a f(a) df(a) c]);
iter = 0;
while abs(f(c)) > maxerr
    a = c;
    c = a - f(a)/df(a);
    disp([a f(a) df(a) c]);
    iter = iter + 1;
if(iter == 100)
break;
end
end
y = c;
M-file 5.3f: Newtraph_prob3b.m
c = (a)-(f(a)/df(a));
disp(' Xn f(Xn) df(Xn) Xn+1 ');
disp([a f(a) df(a) c]);
iter = 0;
while abs(f(c)) > maxerr
    a = c;
    c = a - f(a)/df(a);
    disp([a f(a) df(a) c]);
    iter = iter + 1;
if(iter == 100)
break;
end
end
y = c;
MATLAB® Outputs:
1st root
2nd root
Summary:
The calculated 1st and 2nd roots are approximately equal to the theoretical roots, with
a true percent relative error of 0.0001%.
t(i+1) = t(i) − f(t(i))(t(i−1) − t(i)) / (f(t(i−1)) − f(t(i)))

where f(t(i)) = 9e^(−t(i)) sin(2πt(i)) − 3.5
M-file 5.3g: Secant_prob3a.m
function y = Secant_prob3a(f,a,b,maxerr)
if(iter == 100)
break;
end
end
M-file 5.3h: Secant_prob3b.m
function y = Secant_prob3b(f,a,b,maxerr)
if(iter == 100)
break;
end
end
MATLAB® Outputs:
1st root
2nd root
Summary:
The calculated 1st and 2nd roots are approximately equal to the theoretical roots, with
a true percent relative error of 0.0001%.
(h) Conclusion
The values of t at I = 3.5 A are t = 0.0684 seconds and t = 0.4013 seconds. The
Newton-Raphson and Secant methods are the most accurate in determining the roots of t.
VI. REFERENCES
Chapra, S. C. (2012). Applied Numerical Methods with MATLAB® for Engineers and
Scientists. In Open Methods (Third ed., pp. 151-181). McGraw-Hill Companies,Inc.
Chapra, S. C. (2012). Applied Numerical Methods with MATLAB® for Engineers and
Scientists. In Bracketing Method (Third ed., pp. 126-150). McGraw-Hill Companies, Inc.
Chapra, S. C. (2012). Applied Numerical Methods with MATLAB® for Engineers and
Scientists. In Roots and Optimization (Third ed., pp. 123-125). McGraw-Hill Companies, Inc.
Steven C. Chapra and Raymond Canale. (2002). Numerical Methods for Engineers. McGraw-
Hill Companies, Inc.