
COMPUTER SCIENCE & ENGINEERING DEPARTMENT
OBAFEMI AWOLOWO UNIVERSITY, ILE-IFE, OSUN STATE, NIGERIA

Harmattan Semester, 2011-2012 Session
PRACTICAL LAB III
CSC 307: Numerical Computations I

May 7, 2012

Contents
1 Solution of Nonlinear Equations
  1.1 Convergence of Iteration Process
2 Two Initial Points Methods
  2.1 Bisection Method
  2.2 Task 1
  2.3 Strength of the Bisection Method
  2.4 Weakness of the Bisection Method
  2.5 The Regula Falsi Method
  2.6 Task 2
3 Single Initial Point Methods
  3.1 The Newton-Raphson Method
  3.2 Task 3
  3.3 The Fixed-Point Method
  3.4 Task 4
4 Practical problem solving

1 Solution of Nonlinear Equations


One of the most frequently encountered problems in engineering and scientific computing is the evaluation of the root, or zero, of an equation [2, 1]. Such equations are expressed in the form f(x) = 0.0. In many engineering models, however, a number of factors militate against obtaining an exact representation of the function f(x). An example is the situation where f(x) is not known explicitly; only a rule for evaluating f(x) at any argument is available. Usually f(x), the ideal function, is not known definitively, and an approximate representation p_n(x) is used instead. You saw in the previous laboratory that a function f(x) can be approximately represented by a polynomial p_n(x) of degree n. When n >= 2, the function and its associated polynomial are regarded as non-linear. So the problem of solving Equation 1 is reduced to that of solving Equation 2:

f(x) = 0.0        (1)
p_n(x) = 0.0      (2)

The root of p(x) is the value x_r which, when substituted into p(x), yields zero. At the point x_r, the system described by p(x) is said to be stable, or at equilibrium (see Figure 1). The computation of p_n(x) = 0.0 is not trivial, in the sense that the concept of zero on a computer is vague. The value zero on a computer system is a function of many factors, including the word length of the machine and the real number representation scheme in use. These relate to the machine epsilon as well. For convenience, we shall assume that f(x) = p_n(x) in this laboratory.
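The role of the machine epsilon can be demonstrated directly. The sketch below uses Python purely for illustration (the laboratory itself uses Octave or Plato):

```python
import sys

eps = sys.float_info.epsilon  # machine epsilon for IEEE-754 double precision
print(eps)                    # about 2.22e-16

# Any increment smaller than eps is lost when added to 1.0, which is why
# a computed f(x) is compared against a tolerance in practice rather
# than tested for exact equality with 0.0.
print(1.0 + eps > 1.0)        # True: eps is just representable
print(1.0 + eps / 2 == 1.0)   # True: the smaller increment vanishes
```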

Figure 1: Root of f(x). The curve f(x) crosses the X axis at x = x_r, where f(x_r) = 0.0.

In many cases, the computation of the exact value of x_r is impossible; only an approximate value, say x_a, can be computed. When solving a model represented by a non-linear function, therefore, the computation of x_a is always in error. Let this error be represented as x_e. The general aim, in the implementation of a solution, is to obtain x_a such that the absolute error x_e in the computation is as small as possible. That is:

|x_r - x_a| = x_e        (3)

In engineering, an acceptable value for x_e is one that will not compromise the accuracy of the target model. The machine epsilon is an important factor in determining this. There are generally two approaches to obtaining a root of this kind of equation:

Analytical Approach: The analytical approach uses direct techniques to obtain the solution to the equation. The equation is manipulated using some abstract technique which produces a result. This approach is applicable when the function is factorable. Examples of techniques in this category include: (i) the almighty (quadratic) formula; (ii) factorisation.

Computational Approach: The computational approach uses iterative techniques to obtain a solution. Such a technique starts with some initial guess and applies a repetitive process to move closer to the root. In this course we will focus on the computational approaches, as they are best suited to the solution of engineering models. A number of methods are used in the computational approach; they include those listed in Table 1.

Table 1: Computational Techniques in Non-linear Equation Solution

Two Initial Points Methods:
1. Bisection (Half Interval)
2. Regula Falsi (False Position)
3. Modified Regula Falsi
4. Secant

Single Initial Point Methods:
1. Newton-Raphson (Gradient)
2. Fixed-Point Iteration

1.1 Convergence of Iteration Process


When computational techniques are applied to the solution of non-linear equations with an initial guess, it is expected that successive applications of the process will move closer to the solution. If the successive computations approach x_r, then the process is said to be converging to the root (see Figure 2).

Figure 2: Iteration Process. An iterative step maps x_i to x_{i+1} (with i = i + 1 at each pass); the iterates may approach x_r from the left or from the right, and the process converges when x_i = x_{i+1} = x_r.

A way to know that a process is converging to the root is that the error x_e gets smaller as the computation progresses. That is, if x_i, x_{i+1}, x_{i+2} are the values computed at the i, i+1 and i+2 applications of the method, then |x_i - x_{i+1}| > |x_{i+1} - x_{i+2}| (see Figure 2). If the process has converged to a root, subsequent computations of x will be equal, that is x_j = x_{j+1} = ... = x_n. In essence, successive computation of x from iterative step j yields the same result; the process is said to have converged to the root at the j-th iterative step.

The above scenario is the ideal situation when using iterative methods. In some situations, particularly when the initial guess is not properly selected or when the function is ill-defined, the iteration process may show other kinds of behaviour. Some of these are discussed as follows:

1. Divergence from the root: In this case, the iteration process moves away from the root being computed in a run-away manner. Subsequent computations of x incur a greater absolute error than the preceding value, that is |x_i - x_{i+1}| < |x_{i+1} - x_{i+2}|.

2. Convergence to a wrong root: In this case, the process is actually converging, but not to the intended root.

3. Oscillation: In this case, the process moves about around the root without reaching it.

4. Stationary: In this case, the process remains at its initial position; the value does not progress away from the initial guess given. This situation can occur when the initial value suggested is actually a root of the equation.
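The convergence and divergence tests described above can be expressed as a small driver that compares successive absolute differences. This is an illustrative Python sketch; the function name and tolerance are our own assumptions, not fixed by the laboratory:

```python
def iterate_with_monitor(step, x0, tol=1e-12, max_iter=100):
    """Drive an iteration x_{i+1} = step(x_i), classifying its behaviour
    by comparing successive differences |x_i - x_{i+1}|."""
    x_prev = x0
    x = step(x_prev)
    d_prev = abs(x - x_prev)
    for _ in range(max_iter):
        x_prev, x = x, step(x)
        d = abs(x - x_prev)
        if d < tol:
            return x, "converged"    # x_j = x_{j+1} to within tol
        if d > d_prev:
            return x, "diverging"    # error growing: run-away behaviour
        d_prev = d
    return x, "undecided"

# Newton-style step for f(x) = x^2 - 2.0 (root: sqrt(2)), starting at 1.0
root, status = iterate_with_monitor(lambda x: x - (x * x - 2.0) / (2.0 * x), 1.0)
```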

2 Two Initial Points Methods


In the two initial points methods, the value of x_r is assumed to lie between two points [a, b]. The computation process therefore starts by selecting (or guessing) two points [x_0, x_n] (see Figure 2). A proper analysis of the problem to be solved, and experience with the problem, can assist in selecting these values and an appropriate technique for the solution. The root x_r is assumed to lie between these points. A test is made to confirm this assumption, and then the technique is applied repeatedly until a satisfactory result, x_a, is obtained. x_a is such that, if x_e is the acceptable error in the result, then Equation 3 is satisfied. The solution technique involves the computation of x_m over a succession of intervals, producing a sequence x_{m0}, x_{m1}, ..., x_{mn} that converges towards x_r.

2.1 Bisection Method


Given a function f(x) which is continuous on the interval [a, b] such that f(a)f(b) < 0.0, a root of f(x) lies between a and b. This is illustrated in Figure 3. This root can be obtained iteratively using the bisection, or half-interval, method. At each step of the process, the root is assumed to be at the centre of [a, b], that is:

m = (a + b)/2.0        (4)

If the values obtained by evaluating the function at a and m, i.e. f(a) and f(m), have different signs (i.e. one is positive and the other is negative), then there is a root between a and m, i.e. inside the interval [a, m]. Otherwise, the root is inside the interval [m, b]. At the point of convergence, f(m) = 0.0, and m is taken to be a root of f(x).

Figure 3: Bisection Method. With f(a) < 0.0 and f(b) > 0.0, the product f(a) * f(b) < 0.0 and a root x_r lies in [a, b]; if f(a) * f(b) > 0.0, no root is guaranteed in [a, b].

Table 2: Algorithm: Bisection Method

START:
    INTEGER n
    REAL x[n], R, m, a, b, tol
    READ a, b, tol
    Function f(x) := { }
    R = f(a) * f(b)
    IF (R > 0.0) THEN
        WRITE "No Root in This Interval"
        EXIT
    ENDIF
5:  m = (a + b)/2.0
    R = f(a) * f(m)
    IF (R < 0.0) THEN
        b = m
    ELSE
        a = m
    ENDIF
    WRITE m
    IF (|b - a| > tol) GO TO 5
END:

2.2 Task 1
You may use either Octave or Plato to implement this and subsequent tasks in this laboratory. However, a graphical plot of your results and the model equations is required.

1. Develop a program to implement the Bisection method based on the pseudo-code in Table 2.
2. Using your program, obtain the solution to the problems listed in Table 5.
3. Discuss the solutions you obtained above.
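As a minimal sketch of the Table 2 pseudo-code, a Python rendering might look like the following; Python is used here only for illustration (the task itself asks for Octave or Plato), and the tolerance and iteration cap are assumptions:

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Half-interval search for a root of f on [a, b]."""
    if f(a) * f(b) > 0.0:
        raise ValueError("no sign change: no root guaranteed in [a, b]")
    while max_iter > 0 and (b - a) > tol:
        m = (a + b) / 2.0
        if f(a) * f(m) < 0.0:
            b = m        # root lies in [a, m]
        else:
            a = m        # root lies in [m, b]
        max_iter -= 1
    return (a + b) / 2.0

# Problem 1 of Table 5: x^3 - x - 1.0 = 0.0 on [1.0, 2.0]
root = bisection(lambda x: x**3 - x - 1.0, 1.0, 2.0)
```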

2.3 Strength of the Bisection Method


The following are the strengths of the Bisection method:

1. It is easy to understand, as the computation process underlying it is intuitive.
2. It is easy to implement; the algorithm is straightforward.
3. If there is a root in an interval, the bisection method has a good chance of finding it.
4. The method does not involve further manipulation of the function, e.g. finding its derivatives.

2.4 Weakness of the Bisection Method


The following are the weaknesses of the Bisection method:

1. It converges slowly (it converges linearly), hence its computation takes a longer time to execute when implemented on a computer.
2. At least two points bounding an interval must be provided before the process can begin.
3. The bisection method may encounter serious problems if there is more than one root in the initial interval selected.

2.5 The Regula Falsi Method


To address the first weakness listed above, i.e. to increase the rate of convergence, the method for computing the next value, m, can be improved. The Regula Falsi method uses the idea that it often makes sense to assume that the function being evaluated is locally linear. Therefore, instead of using the midpoint of the interval [a, b] to compute the new estimate of the root, a weighted average is used, and the computation of m is replaced by the computation of w, which is given as:

w = (f(b) * a - f(a) * b) / (f(b) - f(a))        (5)

2.6 Task 2
1. Modify the pseudo-code in Table 2 to convert it to a pseudo-code for the Regula Falsi method.
2. Develop the program for your pseudo-code and obtain the solution to the problems listed in Table 5.
3. Discuss the solutions you obtained above.

A major weakness of the Regula Falsi method is that it has a high tendency of converging to the wrong root. To address this and other limitations of the two initial points methods, the single initial point methods have been found to be useful.
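One possible Python sketch of the modified algorithm follows, again purely as an illustration; the stopping test on |f(w)| is our own assumption:

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """False-position method: the next estimate w is the weighted
    average of Equation 5 rather than the interval midpoint."""
    if f(a) * f(b) > 0.0:
        raise ValueError("no sign change: no root guaranteed in [a, b]")
    w = a
    for _ in range(max_iter):
        w = (f(b) * a - f(a) * b) / (f(b) - f(a))
        if abs(f(w)) < tol:
            break
        if f(a) * f(w) < 0.0:
            b = w        # root lies in [a, w]
        else:
            a = w        # root lies in [w, b]
    return w

# Problem 1 of Table 5: x^3 - x - 1.0 = 0.0 on [1.0, 2.0]
root = regula_falsi(lambda x: x**3 - x - 1.0, 1.0, 2.0)
```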

3 Single Initial Point Methods


The single initial point methods require the selection of an initial point, say x_0. The successive values x_i at points i > 0 are then computed using the selected technique. Methods in this approach are more frequently used in engineering problems for the following reasons:

1. Only one initial point is required.
2. The convergence rate is faster (Newton-Raphson, for example, converges quadratically).
3. They utilise less computer memory.
4. The techniques are easy to implement.

3.1 The Newton-Raphson Method


The Newton-Raphson method exploits information about the gradient of the function whose root is being computed to move closer to the root. If f(x) is continuously differentiable, and given an initial guess x_n, then it is possible to obtain an approximate root of f(x) iteratively by using the following formula:

x_{n+1} = x_n - f(x_n)/f'(x_n)        (6)

Note that f'(x_n) is the first derivative of f(x), i.e. df(x)/dx, at the point x = x_n. If n = 0, for instance, and x_0 is given, then x_1 is computed by substituting n = 0 into Equation 6 to obtain:

x_1 = x_0 - f(x_0)/f'(x_0)        (7)

This process is repeated for the next and subsequent values of x. The solution process is illustrated in Figure 4. The pseudo-code in Table 3 represents the algorithmic design of this process.

Figure 4: A graphical illustration of the Newton-Raphson Method. Successive iterates x_0, x_1, x_2 follow tangents to f(x) towards the root x_r.

Table 3: Algorithm: Newton-Raphson Method

START:
    INTEGER i, n       ! n is the number of iterations desired
    REAL x[n]
    Function f(x) := { }
    Function fprime(x) := { }
    READ x0, n
    FOR i = 0, n
        x[i+1] = x[i] - f(x[i])/fprime(x[i])
        WRITE x[i+1]
    NEXT i
END:

3.2 Task 3
1. Develop a program to implement the Newton-Raphson method based on the pseudo-code in Table 3.
2. Using your program, obtain the solution to the problems listed in Table 5. Make a reasonable assumption for the initial value; for example, you could assume x_0 = (a + b)/2.0.
3. Discuss the solutions you obtained in relation to the two initial points methods above.

There are two major weaknesses of the Newton-Raphson method:

1. It can only be applied to functions that are continuously differentiable in the interval within which the root is being sought.
2. Its iteration process is susceptible to serious instability, as the problems of (i) oscillation (see Figure 5a); (ii) divergence (see Figure 5b); and (iii) convergence to a wrong root (see Figure 5c) are rampant.
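The Table 3 pseudo-code translates almost directly to Python; as before, this is an illustrative sketch only, with the derivative supplied explicitly and a fixed iteration count as in the pseudo-code:

```python
def newton_raphson(f, fprime, x0, n):
    """Apply Equation 6 n times starting from the initial guess x0."""
    x = x0
    for _ in range(n):
        x = x - f(x) / fprime(x)
    return x

# Problem 1 of Table 5 with x0 = (a + b)/2.0 = 1.5, as the task suggests
root = newton_raphson(lambda x: x**3 - x - 1.0,
                      lambda x: 3.0 * x**2 - 1.0,
                      1.5, 20)
```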

3.3 The Fixed-Point Method


The fixed-point iteration process is primarily meant to address the second weakness of the Newton-Raphson method. Given a non-linear function f(x) = 0.0, it is possible to express f(x) in the form:

x = g(x)        (8)

In this case, x is said to be made the subject of the function f(x) instead of 0.0. The function g(x) is called an iteration function. Graphically, g(x) can be interpreted by constructing a linear function, y = mx + c, of gradient m = 1.0 and intercept c = 0.0; this line runs through the origin of the x-y plane. Since the linear function is of gradient 1 and the intercept is 0.0, the line y = x meets g(x) at one of the roots of f(x).

Figure 5: Problems with the Newton-Raphson Method: (a) oscillation between x_0 and x_1; (b) divergence away from x_0; (c) convergence to a wrong root via x_0, x_1, x_2.

The function g(x) in Equation 9 can take many forms. Most of these forms will not produce an iteration that facilitates convergence to a root. For the iteration to converge to a root, two conditions must be satisfied:

i. The first derivative of g(x), i.e. g'(x), must be computable;
ii. The absolute value of g'(x) at the initial point must be less than or equal to 1.0, that is |g'(x_k)| <= 1.0. The farther this value is below 1.0, the better.

Given an iteration function g(x) that satisfies the above conditions and an initial value x_0, the subsequent approximate values of x are then computed as:

x_{n+1} = g(x_n)        (9)

This process is repeated for the next and subsequent values of x (see Figure 6). The pseudo-code in Table 4 represents the algorithmic design of this process.

Figure 6: Graphical Illustration of the Fixed-Point Iteration Method. The iterates x_0, x_1, x_2 step between the curves y = g(x) and y = x towards the root x_r.

Table 4: Algorithm: Fixed-Point Method

START:
    INTEGER i, n       ! n is the number of iterations desired
    REAL x[n]
    Function g(x) := { }
    READ x0, n
    FOR i = 0, n
        x[i+1] = g(x[i])
        WRITE x[i+1]
    NEXT i
END:

3.4 Task 4
1. Develop a program to implement the Fixed-Point method based on the pseudo-code in Table 4.
2. Using your program, obtain the solution to the problems listed in Table 5. Make a reasonable assumption for the initial value; for example, you could assume x_0 = (a + b)/2.0.
3. Discuss the solutions you obtained in relation to the two initial points methods above.

Note that the Fixed-Point iteration method will fail in situations where g'(x) cannot be computed or estimated accurately.
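A Python sketch of the Table 4 pseudo-code follows, again for illustration only; the rearrangement g(x) chosen here for Problem 1 of Table 5 is one convergent choice among many:

```python
def fixed_point(g, x0, n):
    """Apply the iteration x_{i+1} = g(x_i) n times (Equation 9)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# For x^3 - x - 1.0 = 0.0, rearranging gives x = (x + 1.0)^(1/3);
# |g'(x)| = (1/3)(x + 1.0)^(-2/3) is well below 1.0 near the root,
# so this iteration function satisfies the convergence condition.
root = fixed_point(lambda x: (x + 1.0) ** (1.0 / 3.0), 1.5, 60)
```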

4 Practical problem solving


As seen in Tasks 1 through 4, all the approaches discussed above have one or more limitations that militate against their exclusive use in problem solving. For practical engineering problem solving, the following strategy can assist in obtaining a useful solution:

1. Explore the behaviour of the function graphically in order to make a more informed guess at the possible location of a root.
2. Use one of the two initial points methods at the initial stages of an iteration and, when you are sure that the process will converge, switch to a single initial point method.
3. Monitor the behaviour of the error in the iteration process and be careful not to mistake oscillation for convergence.
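The second point of this strategy can be sketched as a hybrid driver: a few bisection steps to localise the root safely, then Newton-Raphson for fast final convergence. The step counts below are illustrative assumptions, and Python is again used only for demonstration:

```python
def hybrid_root(f, fprime, a, b, bisect_steps=5, newton_steps=10):
    """Bisection to shrink [a, b], then Newton-Raphson from its midpoint."""
    if f(a) * f(b) > 0.0:
        raise ValueError("no sign change: no root guaranteed in [a, b]")
    for _ in range(bisect_steps):      # safe but slow: halve the interval
        m = (a + b) / 2.0
        if f(a) * f(m) < 0.0:
            b = m
        else:
            a = m
    x = (a + b) / 2.0
    for _ in range(newton_steps):      # fast: quadratic convergence
        x = x - f(x) / fprime(x)
    return x

# Problem 1 of Table 5: x^3 - x - 1.0 = 0.0 on [1.0, 2.0]
root = hybrid_root(lambda x: x**3 - x - 1.0,
                   lambda x: 3.0 * x**2 - 1.0,
                   1.0, 2.0)
```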

References
[1] Y. Jaluria. Computer Methods for Engineers. Allyn and Bacon, Inc., Canada, 1st edition, 1988.
[2] F. Scheid. Schaum's Outline of Theory and Problems of Numerical Analysis. McGraw-Hill Publishing, New Delhi, 2nd edition, 2006.


Table 5: Non-Linear Models

Ser. No. | Interval      | Equation expression
1        | [1.0, 2.0]    | x^3 - x - 1.0 = 0.0
2        | [0.0, 1.0]    | e^(-x) - x = 0.0
3        | [1.0, 3.0]    | x^4 - 8.0x^3 + 23.0x^2 - 28.0x + 11.6875 = 0.0
4        | [0.0, 1.0]    | 2.0x^3 + x^2 - 20.0x + 12.0 = 0.0
5        | [0.0, 0.5]    | 2.0x^2 - x = 0.0
6        | [1.0, 2.5]    | 2.0 sin x - x = 0.0
7        | [0.0, 2.0]    | 0.5 + 0.5e^(-x/2) - sin x = 0.0
8        | [0.0, 1.0]    | 3.0x - (1.0 - sin x)^(1/2) = 0.0
9        | [0.0, pi/2.0] | cos x - x e^x = 0.0
