Harmattan Semester, 2011-2012 Session
PRACTICAL LAB III
CSC 307: Numerical Computations I
May 7, 2012
Contents
1 Solution of Nonlinear Equations
  1.1 Convergence of Iteration Process
2 Two Initial Points Methods
  2.1 Bisection Method
  2.2 Task 1
  2.3 Strength of the Bisection Method
  2.4 Weakness of the Bisection Method
  2.5 The Regula Falsi Method
  2.6 Task 2
3 Single Initial Point Methods
  3.1 The Newton-Raphson Method
  3.2 Task 3
  3.3 The Fixed-Point Method
  3.4 Task 4
4 Practical problem solving
The root of p(x) is the value xr which, when substituted into p(x), yields zero. At the point xr, the system described by p(x) is said to be stable or at equilibrium (see Figure 1). The computation of pn(x) = 0.0 is not trivial, in the sense that the concept of zero is vague: the value zero on a computer system is a function of many factors, including the word length of the machine and the real-number representation scheme in use. These relate to the machine epsilon as well. For convenience, we shall assume that f(x) = pn(x) in this laboratory.
[Figure 1: the function crossing the x-axis at the root xr]
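The machine epsilon mentioned above can be estimated directly on any machine. A small Python sketch (Python is used here purely for illustration; the lab itself works in Octave or Plato):

```python
import sys

# Estimate the machine epsilon: halve eps until 1.0 + eps/2 is
# indistinguishable from 1.0 in the machine's number representation.
eps = 1.0
while 1.0 + eps / 2.0 != 1.0:
    eps = eps / 2.0

print("computed epsilon:", eps)
print("reported epsilon:", sys.float_info.epsilon)
```

For IEEE 754 double precision, both lines print 2^-52, about 2.22e-16; this is the scale at which "zero" must be judged in the tasks below.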
In many cases, the computation of the exact value of xr is impossible; only an approximate value, say xa, can be computed. When solving a model represented by a non-linear function, therefore, the computation of xa is always in error. Let the error be represented as xe. The general aim in the implementation of a solution is to obtain xa such that the absolute error xe in the computation is as small as possible. That is:

|xr - xa| = xe        (3)
In engineering, an acceptable value for xe is one which will not compromise the accuracy of the target model. The machine epsilon is an important factor in determining this. There are generally two approaches to obtaining a root of this kind of equation:

Analytical Approach: The analytical approach uses direct techniques to obtain the solution to the equation. During that process, the equation is manipulated using some abstract technique which produces a result. This approach is applicable when the function is factorable. Examples of techniques in this category include: (i) the almighty (quadratic) formula; (ii) factorisation.

Computational Approach: The computational approach uses iterative techniques to obtain a solution. Such a technique starts with some initial guess and applies a repetitive process that moves closer to the root. In this course we will focus on the computational approaches, as they are best suited to the solution of engineering models. A number of methods are used in the computational approach; they include those listed in Table 1.

Table 1: Computational Techniques in Non-linear Equation Solution

Two Initial Points Methods:
  1. Bisection (Half Interval)
  2. Regula Falsi (False Position)
  3. Modified Regula Falsi
  4. Secant

Single Initial Point Methods:
  1. Newton-Raphson (Gradient)
  2. Fixed-Point Iteration
[Flowchart: xi is fed into the iterative process, producing xi+1; i = i + 1; the iterates converge to the root xr from the left]
Figure 2: Iteration Process

A converging iteration process draws closer to the root as the computation progresses. That is, if xi, xi+1 and xi+2 are the values computed at the i-th, (i+1)-th and (i+2)-th applications of the method, then |xi - xi+1| > |xi+1 - xi+2| (see Figure 2). If the process has converged to a root, subsequent computations of x will be equal, that is xj = xj+1 = ... = xn. In essence, successive computations of x from iterative step j yield the same result; the process is said to have converged to the root at the j-th iterative step.

The above scenario is the ideal situation when using iterative methods. In some situations, particularly when the initial guess is not properly selected or when the function is ill-defined, the iteration process may show other kinds of behaviour. Some of these are discussed as follows:

1. Divergence from the root: the iteration process moves away from the root being computed in a run-away manner. In such situations, subsequent computations of x incur a greater absolute error than the preceding value, that is |xi - xi+1| < |xi+1 - xi+2|.

2. Convergence to a wrong root: the process is actually converging, but not to the intended root.

3. Oscillation: the process moves about around the root without ever reaching it.

4. Stationary: the process remains at its initial position; the value does not progress away from the initial guess given. This situation can occur when the initial value suggested is actually the root of the equation.
These methods begin from two initial points bounding an interval [x0, xn] (see Fig. 2.2). A proper analysis of the problem to be solved, and experience with the problem, can assist in selecting these values and an appropriate computer for the solution. The root xr is assumed to lie between these points. A test is made to confirm this assumption, and the technique is then applied repeatedly until a satisfactory result, xa, is obtained. xa is such that, if xe is the acceptable error in the result, then Equation 3 will be satisfied. The solution technique involves the computation of xm over a number of intervals, producing a sequence xm0, xm1, ..., xmn that converges towards xr.
If the values obtained by evaluating the function at a and m, i.e. f(a) and f(m), have different signs (i.e. one is positive and the other is negative), then there is a root between a and m, i.e. inside the interval [a, m]. Otherwise, the root is in the interval [m, b]. At the point of convergence, f(m) = 0.0, and m is then taken to be a root of f(x).
[Figure: f(x) crossing zero at the root xr inside the interval ending at b]
Table 2: Algorithm: Bisection Method

START:
    INTEGER n
    REAL x[n], R, m, a, b
    READ a, b
    Function f(x) := { }
    R = f(a) * f(b)
    IF (R > 0.0) THEN
        WRITE "No Root in This Interval"
        EXIT
    ENDIF
5:  m = (a + b)/2.0
    R = f(a) * f(m)
    IF (R < 0.0) THEN
        b = m
    ELSE
        a = m
    ENDIF
    WRITE m
    GO TO 5
END:
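The pseudo-code above can be turned into a short program. A minimal Python sketch (the lab itself suggests Octave or Plato; the function and parameter names here are illustrative assumptions, and a tolerance test replaces the endless GO TO loop):

```python
def bisection(f, a, b, tol=1e-10, max_iter=100):
    """Find a root of f in [a, b] by repeatedly halving the interval."""
    if f(a) * f(b) > 0.0:
        raise ValueError("No root guaranteed in this interval")
    for _ in range(max_iter):
        m = (a + b) / 2.0
        if f(a) * f(m) < 0.0:
            b = m          # root lies in [a, m]
        else:
            a = m          # root lies in [m, b]
        if (b - a) < tol:  # interval small enough: stop
            break
    return (a + b) / 2.0

# Problem 1 from Table 5: x^3 - x - 1 = 0 on [1.0, 2.0]
root = bisection(lambda x: x**3 - x - 1.0, 1.0, 2.0)
print(root)
```

Each pass halves the interval, so roughly log2((b - a)/tol) iterations are needed; for this problem the root is near 1.3247.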
2.2 Task 1
You may use either Octave or Plato to implement this and subsequent tasks in this laboratory. However, a graphical plot of your results and the model equations is required.

1. Develop the program to implement the Bisection method based on the pseudo-code in Table 2.
2. Using your program, obtain the solution to the problems listed in Table 5.
3. Discuss the solution you obtained above.
2.6 Task 2
1. Modify the pseudo-code in Table 2 to convert it to a pseudo-code for the Regula Falsi method.
2. Develop the program for your pseudo-code and obtain the solution to the problems listed in Table 5.
3. Discuss the solution you obtained above.

A major weakness of the Regula Falsi method is that it has a high tendency of converging to the wrong root. To address this and other limitations of the Two Initial Points methods, the Single Initial Point methods have been found to be useful.
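For step 1, the only change Regula Falsi requires is the splitting point: instead of the midpoint m = (a + b)/2.0, it uses the x-intercept of the chord joining (a, f(a)) and (b, f(b)). A minimal Python sketch (Python rather than Octave/Plato, and the names are illustrative assumptions):

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=100):
    """False position: like bisection, but split at the chord's x-intercept."""
    if f(a) * f(b) > 0.0:
        raise ValueError("No root guaranteed in this interval")
    m = a
    for _ in range(max_iter):
        # x-intercept of the straight line through (a, f(a)) and (b, f(b))
        m = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(m)) < tol:    # close enough to a zero of f
            break
        if f(a) * f(m) < 0.0:
            b = m
        else:
            a = m
    return m

# Problem 1 from Table 5: x^3 - x - 1 = 0 on [1.0, 2.0]
root = regula_falsi(lambda x: x**3 - x - 1.0, 1.0, 2.0)
print(root)
```

Note that one endpoint typically stays fixed, so the interval width does not shrink to zero; the stopping test is on |f(m)| rather than on (b - a).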
3. They utilise less computer memory.
4. The techniques are easy to implement.
Note that f'(xn) is the first derivative of f(x), i.e. df(x)/dx, at the point x = xn. If n = 0, for instance, and x0 is given, then x1 is computed by substituting n = 0 into Equation 6 to obtain:
x1 = x0 - f(x0)/f'(x0)        (7)
This process is repeated for the next and subsequent values of x. This solution process is illustrated in Figure 4. The pseudo-code in Table 3 represents the algorithmic design of this process.
[Figure 4: Newton-Raphson iteration, with successive iterates x0, x1, x2 approaching the root xr]
Table 3: Algorithm: Newton-Raphson Method

START:
    INTEGER i, n            ! n is the number of iterations desired
    REAL x[n]
    Function f(x) := { }
    Function fprime(x) := { }
    READ x0, n
    FOR i = 0, n
        x[i+1] = x[i] - f(x[i]) / fprime(x[i])
        WRITE x[i+1]
    NEXT i
END:
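A runnable sketch of the Table 3 pseudo-code, again in Python for illustration (the names and the iteration count are assumptions):

```python
def newton_raphson(f, fprime, x0, n=20):
    """Iterate x_{i+1} = x_i - f(x_i)/f'(x_i), n times, from x0."""
    x = x0
    for _ in range(n):
        x = x - f(x) / fprime(x)
    return x

# Problem 1 from Table 5, with x0 = (a + b)/2.0 as Task 3 suggests
f = lambda x: x**3 - x - 1.0
fprime = lambda x: 3.0 * x**2 - 1.0
root = newton_raphson(f, fprime, 1.5)
print(root)
```

When it converges, the method converges quadratically, so far fewer iterations are needed than with bisection; here a handful of steps already reach machine precision.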
3.2 Task 3
1. Develop the program to implement the Newton-Raphson method based on the pseudo-code in Table 3.
2. Using your program, obtain the solution to the problems listed in Table 5. Make a reasonable assumption for the initial value; for example, you could assume x0 = (a + b)/2.0.
3. Discuss the solution you obtained in relation to the two-point methods above.

There are two major weaknesses of the Newton-Raphson method:

1. It can only be applied to functions that are continuously differentiable in the interval within which the root is being sought.
2. Its iteration process is susceptible to serious instability, as the problems of (i) oscillation (see Figure 5a); (ii) divergence (see Figure 5b); and (iii) convergence to the wrong root (see Figure 5c) are rampant.
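The oscillation problem is easy to reproduce. A standard illustration (this function is not one of the lab's set problems): for f(x) = x^3 - 2x + 2 with x0 = 0, the Newton-Raphson iterates cycle between 0 and 1 forever, never approaching the real root near x = -1.77:

```python
f = lambda x: x**3 - 2.0 * x + 2.0
fprime = lambda x: 3.0 * x**2 - 2.0

x = 0.0
seq = [x]
for _ in range(6):
    x = x - f(x) / fprime(x)   # Newton-Raphson step
    seq.append(x)

print(seq)   # the iterates alternate between 0.0 and 1.0
```

From x = 0 the step gives 0 - 2/(-2) = 1, and from x = 1 it gives 1 - 1/1 = 0, so the process is trapped in a two-cycle. A different x0 breaks the cycle.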
In this case, x is said to be made the subject of the function f(x) instead of 0.0. The function g(x) is called an iteration function. Mathematically, g(x) can be obtained by constructing a linear function, y = mx + c, of gradient m = 1.0 and intercept c = 0.0. This line runs through the origin of the x-y plane.
[Three panels: (a) Oscillation, iterates bouncing between x0 and x1; (b) Divergence, iterates running away from x0; (c) Convergence to a wrong root, iterates x0, x1, x2 approaching a root other than the intended xr]
Figure 5: Problems with the Newton-Raphson Method

Since the linear function is of gradient 1 and intercept 0.0, the line y = x meets g(x) precisely where x = g(x), i.e. at a root of f(x). In Equation 9, the function g(x) can take many forms. Most of these forms will not produce an iteration that facilitates convergence to a root. For the iteration to converge to a root, two conditions must be satisfied:

i. The first derivative of g(x), i.e. g'(x), must be computable;
ii. The absolute value of g'(x) at the initial point must be less than or equal to 1.0, that is |g'(x0)| <= 1.0. The farther this value is below 1.0, the better.

Given an iteration function g(x) that satisfies the above conditions, and an initial value x0, the subsequent approximate values of the root of f(x) are then computed from:

xn+1 = g(xn)        (9)
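Condition (ii) can be checked before iterating. Taking Problem 1 of Table 5, x^3 - x - 1 = 0, two possible rearrangements into the form x = g(x) behave very differently (the rearrangements are chosen here purely for illustration):

```python
# Rearrangement A:  x = x**3 - 1        so g'(x) = 3*x**2
# Rearrangement B:  x = (x + 1)**(1/3)  so g'(x) = (1/3)*(x + 1)**(-2/3)
x0 = 1.5   # initial point, (a + b)/2 for the interval [1.0, 2.0]

gA_prime = 3.0 * x0**2                          # = 6.75 > 1 : divergence expected
gB_prime = (1.0/3.0) * (x0 + 1.0)**(-2.0/3.0)   # ~ 0.18 < 1 : convergence expected

print(gA_prime, gB_prime)
```

Rearrangement A violates the condition and runs away from the root, while rearrangement B satisfies it comfortably and is the one used in the sketch below Table 4.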
This process is repeated for the next and subsequent values of x (see Figure 6). The pseudo-code in Table 4 represents the algorithmic design of this process.
[Figure 6: Fixed-point iteration: g(x) and the line y = x intersecting at the root xr, with successive iterates x0, x1, x2]
Table 4: Algorithm: Fixed-Point Method

START:
    INTEGER i, n            ! n is the number of iterations desired
    REAL x[n]
    Function g(x) := { }
    READ x0, n
    FOR i = 0, n
        x[i+1] = g(x[i])
        WRITE x[i+1]
    NEXT i
END:
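A runnable sketch of the Table 4 pseudo-code in Python, using the rearrangement g(x) = (x + 1)^(1/3) for Problem 1 of Table 5 (this choice of g, like the names, is an illustrative assumption):

```python
def fixed_point(g, x0, n=50):
    """Iterate x_{i+1} = g(x_i), n times, from x0."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# x^3 - x - 1 = 0 rearranged as x = (x + 1)**(1/3); |g'(x)| < 1 near the root
g = lambda x: (x + 1.0) ** (1.0 / 3.0)
root = fixed_point(g, 1.5)
print(root)
```

Because |g'(x)| is about 0.18 near the root, the error shrinks by roughly that factor each step, so 50 iterations are far more than enough to reach machine precision.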
3.4 Task 4
1. Develop the program to implement the method based on the pseudo-code in Table 4.
2. Using your program, obtain the solution to the problems listed in Table 5. Make a reasonable assumption for the initial value; for example, you could assume x0 = (a + b)/2.0.
3. Discuss the solution you obtained in relation to the two-point methods above.

Note that the Fixed-Point Iteration method will fail in situations where g'(x) cannot be computed or estimated accurately.
Table 5: Problems

Ser. No.    Equation                 Interval
1           x^3 - x - 1.0 = 0.0      [1.0, 2.0]