
Int. J. Emerg. Sci., 3(2), 119-130, June 2013. ISSN: 2222-4254. IJES

A Comparison of Iterative Methods for the Solution of Non-Linear Systems of Equations


Noreen Jamil
Department of Computer Science, The University of Auckland, New Zealand
njam031@aucklanduni.ac.nz

Abstract. This paper presents different techniques for solving a set of non-linear equations under the assumption that a solution exists, with particular attention to the Gauss-Seidel Method for non-linear systems of equations. It examines how many computations each method requires before convergence is achieved. The Gauss-Seidel Method is computationally simpler than other non-linear solvers. Several methods for the solution of non-linear equations are discussed and compared.

Keywords: Iterative Methods, Non-linear Equations.

1 INTRODUCTION
Consider the system of non-linear equations

f_1(x_1, x_2, ..., x_n) = 0
f_2(x_1, x_2, ..., x_n) = 0
...
f_n(x_1, x_2, ..., x_n) = 0

We can write it in compact form as F(X) = 0, where the functions f_1, f_2, ..., f_n are the coordinate functions of F. Non-linear systems of equations appear in many disciplines such as engineering, mathematics, robotics and computer science, because the majority of physical systems are non-linear in nature. Non-linear systems of equations are not easy to solve. Some of the equations may be linear, but not all of them, and the polynomial equations contained in such a system may have more than one solution. Many linear and non-linear problems are sparse, i.e. most coefficients in the corresponding matrix are zero, so that the number of non-zero coefficients is O(n), with n being the number of variables [7]. Iterative methods do not spend processing time on coefficients that are zero. Direct methods, in contrast, usually lead to fill-in, i.e. coefficients change from an initial zero to a non-zero value during the execution of the algorithm. Direct methods may therefore weaken the sparsity and have to deal with more coefficients, which increases the processing time. Consequently, iterative, indirect methods are


often faster than naive direct methods in such cases. Iterative methods start with an initial guess and repeatedly iterate through the constraints of the specification, refining the solution until a sufficient precision is reached. Over the years various methods have been designed to solve non-linear systems of equations. Some of the iterative methods are as follows:

1. Bisection Method
2. Newton-Raphson Method
3. Secant Method
4. False Position Method
5. Gauss-Seidel Method

One limitation of Newton's Method is that it is impractical for large-scale problems because of its large memory requirements.

1.1 Bisection Method

The Bisection Method [15] is the most primitive method for finding real roots of an equation f(x) = 0, where f is a continuous function. It is also known as the Binary-Search Method and the Bolzano Method. Two initial guesses are required to start the procedure. The method is based on the intermediate value theorem: if f is continuous on [x_l, x_u] and f(x_l) and f(x_u) have opposite signs, then there is at least one root in the interval. We first have to bracket the root. The method then divides the interval in two at the midpoint x_{m}=\frac{x_{l}+x_{u}}{2} and evaluates the sign of f(x_l)f(x_m) to decide which half contains the root. This is illustrated in Figure 1. An advantage of the Bisection Method [14] is that it always converges, which makes it very useful for computer-based solvers. As iterations proceed, the interval is halved, so the error can be controlled. Since the method brackets the root, convergence is guaranteed. In contrast, there are some pitfalls as well: the Bisection Method is very slow because it converges only linearly, and if the initial guesses are not close to the root it will take a larger number of iterations to find it.

1.1.1 Algorithm

1. Choose x_{l} and x_{u} as the initial guesses such that f(x_{l})f(x_{u})<0.
2. Compute the midpoint x_{m}=\frac{x_{l}+x_{u}}{2}.
3. Evaluate f(x_{l})f(x_{m}).
4. If f(x_{l})f(x_{m})<0, set x_{u}=x_{m} (the root lies in the first half).
5. If f(x_{l})f(x_{m})>0, set x_{l}=x_{m} (the root lies in the second half).
6. If f(x_{l})f(x_{m})=0, then x_{m} is the exact root.
7. Compute the error estimate |\epsilon_{i}|=|x_{i}-x_{i+1}| and stop if |\epsilon_{i}|\leq\epsilon; otherwise repeat from step 2.

Figure 1: Bisection Method [15]
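The steps above can be sketched in a short program. The concrete test function and bracket below are illustrative assumptions, not taken from the paper:

```python
def bisection(f, xl, xu, eps=1e-6, max_iter=100):
    """Bracketing root finder following the algorithm above."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    for _ in range(max_iter):
        xm = (xl + xu) / 2.0           # midpoint of the current bracket
        if f(xl) * f(xm) < 0:
            xu = xm                    # root lies in the first half [xl, xm]
        elif f(xl) * f(xm) > 0:
            xl = xm                    # root lies in the second half [xm, xu]
        else:
            return xm                  # exact root found
        if (xu - xl) / 2.0 <= eps:     # interval halved below tolerance
            break
    return (xl + xu) / 2.0

# Example: the root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisection(lambda x: x * x - 2, 1.0, 2.0)
```

Note how the bracket shrinks by a factor of two per iteration, which is exactly the linear convergence discussed above.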

1.2 Newton-Raphson Method

The Newton-Raphson Method [15] is the most popular method for finding the roots of non-linear equations. The function is approximated by its tangent line, as shown in Figure 2. The method starts with an initial guess that is close to the root. The basic difference between Newton's Method and the other methods is that only one initial guess is required. Newton's Method is efficient if the guess is close to the root; on the other hand, it works slowly or may diverge if the initial guess is not very close to the root. One advantage of Newton's Method is that it converges fast, if it converges at all. It typically needs less time and fewer iterations to find the root than the Bisection and False Position Methods, but it has the drawback of more complicated calculations. It resembles the Bisection Method in approximating the root in a consistent fashion, so the user can estimate how many iterations are required to arrive at the root.

Figure 2: Newton Raphson Method [15]

Newton's Method [5] uses both the function and the derivative of that function to approximate the root. The process starts with an initial guess close to the root. The function is then approximated by its tangent line, and the x-intercept of this tangent line is taken as the next estimate of the root. This x-intercept is a better approximation than the original guess, and the process is repeated until the desired root is found. Error calculation is another interesting possibility with Newton's Method: the error is simply the difference between the approximated value and the actual value. The Newton-Raphson Method is faster than the Bisection Method. However, it requires the derivative of the function, which is sometimes a difficult and laborious task, and there are some functions that cannot be solved by the Newton-Raphson Method; for such functions the False Position Method is the better choice. To meet the convergence criterion, the function should fulfill the condition

|f(x)\,f''(x)| < [f'(x)]^{2}

However, there is no guarantee of convergence: if the derivative is too small or zero, the method will diverge.

1.2.1 Algorithm

1. Write out f(x) and calculate the derivative f'(x).
2. Choose an initial guess x_{0}.
3. Compute x_{i+1}=x_{i}-\frac{f(x_{i})}{f'(x_{i})}.
4. Continue iterating, computing |\epsilon_{i}|=|x_{i}-x_{i+1}| at each step, and stop when |\epsilon_{i}|\leq\epsilon.
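These steps translate directly into code. The test function, its derivative, and the starting guess below are illustrative assumptions:

```python
def newton_raphson(f, df, x0, eps=1e-8, max_iter=50):
    """Newton-Raphson iteration: x_{i+1} = x_i - f(x_i)/f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_next = x - f(x) / df(x)      # tangent-line update
        if abs(x_next - x) <= eps:     # error estimate |x_i - x_{i+1}|
            return x_next
        x = x_next
    return x

# Example: root of x^2 - 2 with derivative 2x, starting close to the root
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

Starting "close to the root" matters here: a poor initial guess (e.g. near x = 0, where the derivative vanishes) illustrates the divergence warning above.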

1.3 Secant Method

The Secant Method [15] is a root-finding algorithm. Unlike the Newton-Raphson Method, it does not need the derivative of the function: the derivative is replaced by a finite difference formula. Two starting values are needed for the Secant Method to proceed. The values of the function at these starting points give two points on the curve; in contrast to the Bisection Method, the initial values do not need to have opposite signs. A new point is obtained where the straight line through the two points crosses the x-axis, as illustrated geometrically in Figure 3. This new point replaces the oldest point being used in the calculation, and the process is continued to obtain an approximate root. It is an advantage that we do not need to bracket the root. Because the method uses two previous guesses, its convergence is fast compared to the Bisection and False Position Methods, if it converges. To compute zeros of continuous functions in standard computer programs [1], this method is often combined with a method whose convergence is assured, for example the False Position Method. On the contrary, the Secant Method can fail if the function is flat: the slope of the secant line can become very small, which can cause the iterates to move far away from the points at hand. For the convergence of the Secant Method, the initial guesses should be close to the root. The order of convergence is the golden ratio

\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618

1.3.1 Algorithm

From the Newton-Raphson Method we have

x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}

We replace the derivative with the finite difference formula

f'(x_{n}) \approx \frac{f(x_{n})-f(x_{n-1})}{x_{n}-x_{n-1}}

After substituting this into Newton's formula, we obtain

Figure 3: Secant Method [15]

x_{n+1}=x_{n}-f(x_{n})\times\frac{x_{n}-x_{n-1}}{f(x_{n})-f(x_{n-1})}

x_{n+1}=\frac{x_{n-1}f(x_{n})-x_{n}f(x_{n-1})}{f(x_{n})-f(x_{n-1})}
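The derivation above can be sketched as a short program. The test function and the two starting values are illustrative assumptions; note that, as discussed, they do not need to bracket the root:

```python
def secant(f, x0, x1, eps=1e-10, max_iter=50):
    """Secant iteration: Newton's update with the derivative replaced by
    the finite difference (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})."""
    for _ in range(max_iter):
        denom = f(x1) - f(x0)
        if denom == 0:                 # flat secant line: the method fails
            raise ZeroDivisionError("secant slope became zero")
        x2 = x1 - f(x1) * (x1 - x0) / denom
        if abs(x2 - x1) <= eps:
            return x2
        x0, x1 = x1, x2                # the new point replaces the oldest one
    return x1

# Example: root of x^2 - 2 from two nearby starting values
root = secant(lambda x: x * x - 2, 1.0, 2.0)
```

The explicit zero-denominator check corresponds to the flat-function failure mode mentioned above.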

1.4 False Position Method

The method [15] which combines the features of the Bisection Method and the Secant Method is known as the False Position Method. It is also called Regula Falsi and the Interpolation Method. Two initial approximations are needed to start this method, chosen such that the product of the function values at the two initial points is less than zero, i.e. the function values must have opposite signs. The new value is found as the intersection of the chord joining the function values at the initial approximations with the x-axis, as illustrated in Figure 4. The formula used for the False Position Method is as follows:

x_{n+1}=\frac{x_{n-1}f(x_{n})-x_{n}f(x_{n-1})}{f(x_{n})-f(x_{n-1})}   (1)

The method [5] was devised because the Bisection Method converges slowly; the False Position Method is therefore more efficient than the Bisection Method.

1.4.1 Algorithm

Figure 4: False Position Method [15]

For the interval [x_{l}, x_{u}], the first value x_{n+1} is calculated using the formula:

x_{n+1}=\frac{x_{n-1}f(x_{n})-x_{n}f(x_{n-1})}{f(x_{n})-f(x_{n-1})}

Evaluate the product f(x_{l})f(x_{n+1}). If f(x_{l})f(x_{n+1})<0, take the first half as the new interval; if f(x_{l})f(x_{n+1})>0, take the second half as the new interval.

Compute |\epsilon_{i}|=|x_{i}-x_{i+1}| and check whether |\epsilon_{i}|\leq\epsilon.

Repeat the process until the root is found.

Figure 5: Gauss Seidel Method [6]
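The False Position algorithm above can be sketched in code. The test function and bracket are illustrative assumptions; by construction the iterate always stays bracketed, which is the method's advantage over the plain Secant Method:

```python
def false_position(f, xl, xu, eps=1e-8, max_iter=100):
    """Regula falsi: bisection-style bracketing with the secant formula."""
    if f(xl) * f(xu) >= 0:
        raise ValueError("initial guesses must bracket the root")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        # intersection of the chord through (xl, f(xl)) and (xu, f(xu))
        # with the x-axis, i.e. formula (1) above
        xr = (xl * f(xu) - xu * f(xl)) / (f(xu) - f(xl))
        if f(xl) * f(xr) < 0:
            xu = xr                    # root lies in the first half [xl, xr]
        elif f(xl) * f(xr) > 0:
            xl = xr                    # root lies in the second half [xr, xu]
        else:
            return xr                  # exact root found
        if abs(xr - xr_old) <= eps:    # error estimate |x_i - x_{i+1}|
            break
    return xr

# Example: root of x^2 - 2 on the bracket [1, 2]
root = false_position(lambda x: x * x - 2, 1.0, 2.0)
```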

1.5 Gauss-Seidel Method

The Gauss-Seidel Method [15] is an iterative method designed to find the solution of non-linear systems of equations through a series of iterations. A graphical illustration of the Gauss-Seidel Method is shown in Figure 5. First, we assume an initial guess to start the method. Each equation is then solved for one variable in turn, and the most recent values of the previously computed components x_{1}^{k+1}, x_{2}^{k+1}, ..., x_{i}^{k+1} are used in the update of the next component, such that

x_{i+1}^{k+1} = g_{i+1}(x_{1}^{k+1}, x_{2}^{k+1}, ..., x_{i}^{k+1}, x_{i+1}^{k}, ..., x_{n}^{k})

The iteration is continued until the relative approximate error is less than a pre-specified tolerance. The Gauss-Seidel Method has linear convergence. The method is simple to program, exploits sparsity naturally, and each iteration requires little execution time. It is a robust method compared to the Newton-Raphson Method. A hard-coded example is shown below.

1.5.1 Program

% This program applies N iterations of the Gauss-Seidel method
% to a non-linear system of equations. x0 is the initial value.
tol = 0.01;
x0 = [0 0 0];
N = 100;
x = x0;
X = [x0];
for k = 1:N  % for N iterations
    % Consider the non-linear system of equations as follows.
    x(1) = 1/4*(11 - x0(2)^2 - x0(3));
    x(2) = 1/4*(18 - x(1) - x0(3)^2);
    x(3) = 1/4*(15 - x(2) - x(1)^2);
    X = [X; x];
    % break the loop if the tolerance condition is met
    if norm(x - x0, inf) < tol
        disp('TOLERANCE met in k iterations, where')
        k = k
        break;
    end
    x0 = x;
end
if (k == N)
    disp('TOLERANCE not met in maximum number of iterations')
end

The above system has the solution (0.999, 1.999, 2.999).
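The component-wise update scheme of the listing above can also be written generically, with one update function per equation. The following Python sketch is not from the paper; the two-equation test system and its fixed point (1, 1) are illustrative assumptions:

```python
def gauss_seidel(updates, x0, tol=0.01, max_iter=100):
    """Generic Gauss-Seidel fixed-point iteration: each component update
    immediately sees the most recently computed values of the others."""
    x = list(x0)
    for k in range(1, max_iter + 1):
        old = list(x)
        for i, g in enumerate(updates):
            x[i] = g(x)                # x already holds updated entries 0..i-1
        if max(abs(a - b) for a, b in zip(x, old)) < tol:
            return x, k                # tolerance met after k iterations
    return x, max_iter

# Hypothetical test system with fixed point (1, 1):
#   x = (x^2 + y^2 + 8)/10,  y = (x*y^2 + x + 8)/10
updates = [
    lambda v: (v[0] ** 2 + v[1] ** 2 + 8) / 10,
    lambda v: (v[0] * v[1] ** 2 + v[0] + 8) / 10,
]
solution, iterations = gauss_seidel(updates, [0.0, 0.0], tol=1e-6)
```

Passing the update functions as a list keeps the solver independent of any particular system, while preserving the defining feature of Gauss-Seidel: within one sweep, later updates already use the newest values.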

1.6 Related Work

Finding solutions of non-linear systems of equations is one of the most difficult problems in numerical analysis [5]. The methods used for solving non-linear systems of equations are iterative in nature, because they [14] guarantee a solution with a predetermined accuracy. Specific methods are discussed in [1], and a comparison of the methods in [1] and [8] with several others has been carried out in [3]. Significant success in solving some of the non-linear equations related to space trajectory optimization has been achieved with one of the quasi-Newton methods in [4]. Methods that do not require derivatives, i.e. methods that do not require evaluation of the Jacobian matrix [8] at each iteration, have received significant attention in the past eight years. In that work, two new iterative methods for solving non-linear systems of equations were proposed and compared with existing approaches; the authors showed that their new iterative methods are more efficient than existing methods such as the Newton-Raphson Method, the Method of Cordero and Torregrosa [9], and the Method of Darvishi and Barati [11]. A Bisection Method in higher dimensions has been devised by [16]. A generalized Bisection Method [2] has been proposed for the constrained minimization problem when the feasible region is an n-dimensional simplex; it was then extended [10] to a multidimensional Bisection Method for solving the unconstrained minimization problem. Various computational methods have been proposed for solving non-linear systems of equations, one of which is the Gauss-Seidel Method. The Gauss-Seidel Method for non-linear systems of equations is presented in [15] and has been discussed in multiple dimensions in [13]. A non-linear Gauss-Seidel Method for network problems was presented in [12], where a comparison between the Jacobi and Gauss-Seidel Methods for these problems showed that the non-linear Gauss-Seidel Method is more efficient than the Jacobi Method. Motivated and inspired by the ongoing research in this area, we suggest the Gauss-Seidel Method for solving non-linear systems of equations.

1.7 Comparison between Non-Linear Solvers

Newton's Method is a popular method for finding solutions of systems of non-linear equations, but it requires initial estimates close to the solution, and the derivative is required at each iteration; for non-linear problems where the derivative of the function is not known, Newton's Method is a poor choice. There is no doubt that Newton's Method is efficient, but it is not a stable method. The Secant Method is a modified form of Newton's Method that does not need the derivative of the function, but one of its drawbacks is that its convergence is not guaranteed: it can fail if the function is flat. The Bisection and False Position Methods are simpler methods for finding the roots of non-linear equations in one dimension, but extending them to multiple non-linear equations in several dimensions is quite complicated. One of the best methods for solving non-linear systems of equations is the Gauss-Seidel Method. It is simple to use and efficient, and its convergence is linear. Moreover, there is no need for a derivative as in Newton's Method, and we do not have to bracket the root as in the Bisection Method.

1.8 Conclusion

It is worth mentioning that many methods have been proposed for solving non-linear equations, and it has been shown that these methods can also be extended to multiple non-linear equations. Which particular method provides the best solution, however, depends on the specific problem. From the above discussion it is concluded that the Gauss-Seidel algorithm is the best choice because of its simplicity and its reliable convergence in practice. It can therefore be said that the Gauss-Seidel Method is a suitable method for solving non-linear systems of equations.

References

[1] J. G. P. Barnes. An algorithm for solving non-linear equations based on the secant method. 8(1):66-72, 1965.
[2] M. E. Baushev A.N. A multidimensional bisection method for minimizing a function over a simplex. pages 801-803, 2007.
[3] M. J. Box. A comparison of several current optimization methods, and the use of transformations in constrained problems. 9(1):67-77, 1966.
[4] M. J. Box. Parameter hunting techniques. 1966.
[5] H. M. Antia. Numerical Methods for Scientists and Engineers. Birkhäuser Verlag, 2002.
[6] Hornberger and Wiberg. Numerical Methods in the Hydrological Sciences. 2005.
[7] S. Kunis and H. Rauhut. Random sampling of sparse trigonometric polynomials, II. Orthogonal matching pursuit versus basis pursuit. Foundations of Computational Mathematics, 8(6):737-763, Nov. 2008.
[8] M. Kuo. Solution of nonlinear equations. IEEE Transactions on Computers, C-17(9):897-898, Sep. 1968.
[9] A. Cordero and J. R. Torregrosa. Variants of Newton's method using fifth-order quadrature formulas. pages 686-698, 2007.
[10] E. Morozova. A multidimensional bisection method for the unconstrained minimization problem. 2008.
[11] M. T. Darvishi and A. Barati. A third-order Newton-type method to solve systems of nonlinear equations. pages 630-635, 2007.
[12] T. A. Porsching. Jacobi and Gauss-Seidel methods for nonlinear network problems. 6(3), 1969.
[13] R. L. Burden and J. D. Faires. Numerical Analysis. 8th edition, 2005.
[14] W. H. Robert. Numerical Methods. Quantum, 1975.
[15] N. A. Saeed and A. Bhatti. Numerical Analysis. Shahryar, 2008.
[16] G. Wood. The bisection method in higher dimensions. 1989.
