
Root-finding Algorithm

What is a root-finding
algorithm?
A root-finding algorithm is a
numerical method, or algorithm,
for finding a value x such that
f(x) = 0, for a given function f.
Such an x is called a root of the
function f.

General Methods
1. Bracketing Methods
2. Open Methods

Bracketing Methods
Bracketing methods track the end points of an
interval containing a root. This allows them to
provide absolute error bounds on a root's
location when the function is known to be
continuous. Bracketing methods require two
initial conditions, one on either side of the root.
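
As a minimal Python sketch of that bracketing condition (an illustrative addition, not part of the slides): the interval [a, b] brackets a root whenever f(a) and f(b) have opposite signs.

def brackets_root(f, a, b):
    # True if f changes sign over [a, b], so the interval is guaranteed to contain a root
    return f(a) * f(b) < 0

# Example: x**2 - 2 changes sign over [1, 2], so that interval brackets a root
print(brackets_root(lambda x: x**2 - 2, 1, 2))   # True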

Bracketing Method
Illustration of a number of general ways that a root may occur in an interval prescribed by a lower bound xl and an upper bound xu. Parts (a) and (c) indicate that if both f(xl) and f(xu) have the same sign, either there will be no roots or there will be an even number of roots within the interval. Parts (b) and (d) indicate that if the function has different signs at the end points, there will be an odd number of roots in the interval.


Bracketing Methods
1. Bisection method
2. False position (regula falsi)

Open Methods
Open methods use a single initial guess (or two guesses that need not bracket the root) and a formula that is applied repeatedly to compute successively better estimates of the root. As a result, they sometimes diverge, or move away from the true root, as the computation progresses.

Open Methods
Graphical depiction of the fundamental difference between the (a) bracketing and (b) and (c) open methods for root location. In (a), which is bisection, the root is constrained within the interval prescribed by xl and xu. In contrast, for the open method depicted in (b) and (c), which is Newton-Raphson, a formula is used to project from xi to xi+1 in an iterative fashion. Thus the method can either (b) diverge or (c) converge rapidly, depending on the shape of the function and the value of the initial guess.
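
The slides do not spell out the Newton-Raphson update itself; the standard formula is xi+1 = xi - f(xi)/f'(xi). A minimal Python sketch using that formula (the function names here are illustrative, not from the slides):

def newton_raphson(f, df, x0, tol=1e-8, max_iter=50):
    # Project from xi to xi+1 using the tangent line at xi
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:              # converged: f(x) is close enough to zero
            return x
        dfx = df(x)
        if dfx == 0:                   # flat tangent line: the update is undefined
            raise ZeroDivisionError("zero derivative encountered")
        x = x - fx / dfx               # Newton-Raphson update
    raise RuntimeError("did not converge; the iteration may be diverging")

# Example: root of x**2 - 2 starting from x0 = 1.5
print(newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.5))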

Open Methods
1. Simple Fixed-Point Iteration
2. Newton-Raphson
3. Secant Method
4. Brent's Method
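
Of these, the secant method works like Newton-Raphson but approximates the derivative from the two most recent estimates; the standard update is xi+1 = xi - f(xi)(xi - xi-1)/(f(xi) - f(xi-1)). A minimal sketch (an addition, not the slides' own code):

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    # Replace the derivative in Newton-Raphson with a finite difference through the last two points
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:                         # horizontal secant line: the update is undefined
            raise ZeroDivisionError("zero denominator in secant update")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)     # x-intercept of the secant line
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    raise RuntimeError("did not converge")

# Example: root of x**2 - 2 from the starting pair 1, 2
print(secant(lambda x: x**2 - 2, 1, 2))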

Bisection Method

Bisection Method
The simplest root-finding algorithm is the bisection method. It works when f is a continuous function and requires prior knowledge of two initial guesses, a and b, such that f(a) and f(b) have opposite signs. Although it is reliable, it converges slowly, gaining one bit of accuracy with each iteration.
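
Because each iteration halves the bracketing interval, the number of iterations needed to guarantee a given tolerance can be computed in advance: after n iterations the interval has width (b - a)/2**n. A small sketch of that bound (an addition for reference):

import math

def bisection_iterations(a, b, tol):
    # Smallest n such that the remaining interval width (b - a) / 2**n falls below tol
    return math.ceil(math.log2((b - a) / tol))

# Example: bracketing [1, 2] to within 1e-6 takes about 20 iterations
print(bisection_iterations(1, 2, 1e-6))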

Bisection Method
The bisection method is a variation of the
incremental search method in which the interval
is always divided in half.
If a function changes sign over an interval, the
function value at the midpoint is evaluated. The
location of the root is then determined as lying
within the subinterval where the sign change
occurs. The subinterval then becomes the
interval for the next iteration. The process is
repeated until the root is known to the required
precision.

Bisection Method
The input for the method is a continuous function f, an
interval [a, b], and the function values f(a) and f(b). The
function values are of opposite sign (there is at least one
zero crossing within the interval). Each iteration performs
these steps:
1. Calculate c, the midpoint of the interval: c = (a + b)/2.
2. Calculate the function value at the midpoint, f(c).
3. If convergence is satisfactory (that is, c − a is sufficiently small, or |f(c)| is sufficiently small), return c and stop iterating.
4. Examine the sign of f(c) and replace either (a, f(a)) or (b, f(b)) with (c, f(c)) so that there is a zero crossing within the new interval.

Bisection Method
Pseudocode
INPUT: Function f, endpoint values a, b, tolerance TOL, maximum iterations NMAX
CONDITIONS: a < b, either f(a) < 0 and f(b) > 0 or f(a) > 0 and f(b) < 0
OUTPUT: value which differs from a root of f(x)=0 by less than TOL

N ← 1
While N ≤ NMAX do # limit iterations to prevent infinite loop
    c ← (a + b)/2 # new midpoint
    If f(c) = 0 or (b − a)/2 < TOL then # solution found
        Output(c)
        Stop
    EndIf
    N ← N + 1 # increment step counter
    If sign(f(c)) = sign(f(a)) then a ← c else b ← c # new interval
EndWhile
Output("Method failed.") # max number of steps exceeded

Example
x1=1, x2=2

Example
Determine if the sign of the initial assumptions are
different:

Because the function is continuous, there must be a root


within the interval [1, 2].
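
The specific function is not shown in this extract, so as an illustration only, take the hypothetical choice f(x) = x**2 - 2, whose true root on [1, 2] is sqrt(2) ≈ 1.414:

f = lambda x: x**2 - 2       # hypothetical example function; the slides' own function is not shown

x1, x2 = 1, 2
print(f(x1), f(x2))          # -1 and 2: opposite signs, so a root lies in [1, 2]
c = (x1 + x2) / 2            # first bisection midpoint
print(c, f(c))               # 1.5 and 0.25: the sign change is now bracketed by [1, 1.5]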

False Position Method

False Position
False position (also called the linear interpolation method) is another well-known bracketing method. It is very similar to bisection, except that it uses a different strategy to come up with its new root estimate. Rather than bisecting the interval, it locates the root estimate by joining f(xl) and f(xu) with a straight line and taking the point where that line crosses the x axis.

False Position
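
The original slide presumably illustrates this graphically; for reference, the intersection of that straight line with the x axis gives the standard false-position estimate xr = xu - f(xu)(xl - xu)/(f(xl) - f(xu)). A minimal sketch (an addition, not the slides' own code):

def false_position(f, xl, xu, tol=1e-8, nmax=100):
    # Like bisection, but the new estimate is the x-intercept of the chord through the endpoints
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(nmax):
        xr_old = xr
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))   # x-intercept of the straight line
        if f(xr) == 0 or abs(xr - xr_old) < tol:
            return xr
        if f(xl) * f(xr) < 0:     # sign change in [xl, xr]
            xu = xr
        else:                     # sign change in [xr, xu]
            xl = xr
    raise RuntimeError("Method failed: maximum number of iterations exceeded")

# Example: root of x**2 - 2 on [1, 2]
print(false_position(lambda x: x**2 - 2, 1, 2))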

Seatwork (page 147)


Short bond paper
