
American Journal of Scientific Research

ISSN 1450-223X Issue 34 (2011), pp. 52-59
© EuroJournals Publishing, Inc. 2011
http://www.eurojournals.com/ajsr.htm

A Novel Fourth-Order Two-Step Optimal Class


F. Soleymani
Young Researchers Club, Islamic Azad University
Zahedan Branch, Zahedan, Iran
E-mail: fazlollah.soleymani@gmail.com
Tel: +98-9151401695

S. Karimi Vanani
Department of Mathematics, Islamic Azad University
Zahedan Branch, Zahedan, Iran

M. Khan
Department of Sciences and Humanities
National University of Computer and Emerging Sciences, Islamabad, Pakistan


Abstract

This paper proposes a two-step class of iterations without memory for finding simple
roots of nonlinear equations. Kung and Traub (1974) conjectured that any multipoint
method without memory using n evaluations per full cycle attains at most the convergence
order 2^(n-1). Taking this conjecture into account, together with the theoretical results for the
contributed class, we conclude that the suggested scheme is optimal and possesses the
optimal efficiency index 4^(1/3) ≈ 1.587. We finally present many numerical experiments to
support the theory developed in this paper.


Keywords: Two-step methods; simple root; optimal order; efficiency index; order of
convergence; solution of nonlinear equations.

1. Introduction
Finding simple roots of a nonlinear equation f(x) = 0 is a common and important problem in
numerical analysis and the applied sciences. In recent years, many modified iterative methods have been
developed to improve the local order of convergence of some classical methods; see, e.g., (Sargolzaei
and Soleymani, 2011; Soleymani, 2011a; Noor et al., 2011a; Noor et al., 2011b). As the order of an
iterative method increases, so does the number of functional evaluations per step. In this situation, the
efficiency index measures the balance between these quantities according to the formula p^(1/n),
where p is the order of convergence of the method and n is the number of evaluations per full cycle.
Kung and Traub conjectured in (Kung and Traub, 1974) that the order of convergence of any
multipoint method without memory cannot exceed the bound 2^(n-1), called the optimal order. Thus, the
optimal order for a method with 3 evaluations per full iteration would be 4; for instance, such an optimal
fourth-order method has efficiency index 4^(1/3) ≈ 1.587, compared with 2^(1/2) ≈ 1.414 for Newton's method.
The Jarratt method (Jarratt, 1969) is an example of an optimal fourth-order method, because it performs
only three evaluations per full step. We again remark that, to improve the local order of convergence and
the efficiency index, many modified methods have been proposed; see (Kyurkchiev and Iliev, 2009;
Soleymani and Sharifi, 2011a; Soleymani and Sharifi, 2011b; Phiri and Makinde, 2010; Frontini and
Sormani, 2003) and the references therein.

On the other hand, solving such nonlinear equations has many applications in engineering and
physical problems. For example, thermistors are temperature-measuring devices based on the principle
that the thermistor material exhibits a change in electrical resistance with a change in temperature. By
measuring the resistance of the thermistor material, one can then determine the temperature; see Figure 1.
For a 10K3A Betatherm thermistor,

Figure 1: A typical thermistor (thermally conductive epoxy coating; tin-plated copper alloy lead wires).
The relationship between the resistance R of the thermistor and the temperature is given by
\[
\frac{1}{T} = 1.129241\times 10^{-3} + 2.341077\times 10^{-4}\,\ln(R) + 8.775468\times 10^{-8}\,\bigl(\ln(R)\bigr)^{3}, \qquad (1)
\]
where T is in Kelvin and R is in Ohms. A thermistor error of no more than ±0.01 °C is acceptable.
To find the range of the resistance that stays within this acceptable limit at 19 °C (for instance), we need to
solve
\[
\frac{1}{19.01 + 273.15} = 1.129241\times 10^{-3} + 2.341077\times 10^{-4}\,\ln(R) + 8.775468\times 10^{-8}\,\bigl(\ln(R)\bigr)^{3}, \qquad (2)
\]
and
\[
\frac{1}{18.99 + 273.15} = 1.129241\times 10^{-3} + 2.341077\times 10^{-4}\,\ln(R) + 8.775468\times 10^{-8}\,\bigl(\ln(R)\bigr)^{3}. \qquad (3)
\]
As can be seen, the resulting equations have simple roots as their solutions, which draws attention to
nonlinear equation solvers that deliver high precision in low computational time. To see more on this
field, we refer the reader to (Soleymani and Mousavi, 2011; Soleymani, 2011b).
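To illustrate how such a calibration equation is treated in practice, equation (2) can be solved for R with a few Newton steps. The following Python sketch is only indicative and is not taken from the paper; the starting guess, the iteration count and the printed value are assumptions, and any of the higher-order solvers studied below could be substituted for the Newton update.

import math

# Thermistor calibration constants of Eq. (1) (1/T in 1/K, R in Ohms).
A, B, C = 1.129241e-3, 2.341077e-4, 8.775468e-8

def f(R):
    # Residual of Eq. (2): zero at the resistance corresponding to 19.01 C.
    return A + B * math.log(R) + C * math.log(R) ** 3 - 1.0 / (19.01 + 273.15)

def fprime(R):
    # Derivative of the residual with respect to R.
    return (B + 3.0 * C * math.log(R) ** 2) / R

R = 15000.0           # illustrative starting guess (Ohms)
for _ in range(10):   # plain Newton iteration
    R -= f(R) / fprime(R)
print(R)              # resistance at 19.01 C, roughly 13 kOhm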
In this paper, we present a new class of two-step methods with optimal order of convergence
four by using the weight function approach. The rest of the paper is organized as follows: in the second
section, we describe our general class of iterations without memory and subsequently establish its optimal
order of convergence through a mathematical proof. In the third section, we review some of the
existing methods that include two evaluations of the first-order derivative and one evaluation of the
function. The last section contains different numerical tests that confirm the theoretical results and allow us
to compare the methods; moreover, we draw some conclusions.


2. Main Result
Consider a scalar function f : D ⊆ R → R that has a simple zero α in the open domain D. We
assume the following two-step cycle

\[
\begin{cases}
y_n = x_n - \dfrac{f(x_n)}{f'(x_n)},\\[1mm]
x_{n+1} = y_n - \dfrac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)},
\end{cases} \qquad (4)
\]
in which three evaluations are required per full iteration. The iteration (4) is the two-step, two-point
method given in (Soleymani and Sharifi, 2010), which is of third order of convergence. To obtain a novel
modification of Newton's method, and also of Soleymani and Sharifi's scheme, with better order of
convergence and efficiency index, we use the weight function approach as follows:

\[
\begin{cases}
y_n = x_n - \theta\,\dfrac{f(x_n)}{f'(x_n)},\\[1mm]
x_{n+1} = y_n - \dfrac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)}\,\bigl(G(t_n) + H(\varphi_n)\bigr),
\end{cases} \qquad (5)
\]
where θ ∈ R and G(t_n), H(φ_n) are two real-valued weight functions with
\[
t_n = \frac{f'(y_n)}{f'(x_n)}, \qquad \varphi_n = \frac{f(x_n)}{f'(y_n)}.
\]
The main challenge is now to select θ and G(t_n), H(φ_n) such that the order becomes optimal. Toward this
end, we choose θ = 2/3 and suggest the following iteration without memory, with the same number of
evaluations as (4):

\[
\begin{cases}
y_n = x_n - \dfrac{2f(x_n)}{3f'(x_n)},\\[1mm]
x_{n+1} = y_n - \dfrac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)}\,\bigl(G(t_n) + H(\varphi_n)\bigr).
\end{cases} \qquad (6)
\]
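To make the construction concrete, one full cycle of the class (6) can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation; the helper name class6_step and its interface are assumed for the purpose of the example.

def class6_step(f, df, x, G, H):
    """One cycle of the class (6): returns x_{n+1} from x_n.

    f, df : the function and its first derivative
    G, H  : weight functions of t_n = f'(y_n)/f'(x_n) and
            phi_n = f(x_n)/f'(y_n)
    """
    fx, dfx = f(x), df(x)
    y = x - 2.0 * fx / (3.0 * dfx)            # first (Jarratt-type) step
    dfy = df(y)
    t, phi = dfy / dfx, fx / dfy              # arguments of the weights
    base = (2.0 * fx + (y - x) * (dfy + dfx)) / (2.0 * dfy)
    return y - base * (G(t) + H(phi))         # second step of (6)

Any pair of weights G, H satisfying the conditions of Theorem 1 below can be passed in; each call spends exactly the three evaluations f(x_n), f'(x_n) and f'(y_n).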
Theorem 1 shows under what conditions on G(t_n) and H(φ_n) the convergence order arrives at the
quartic level.
Theorem 1. Let the sufficiently smooth function f : D ⊆ R → R have a simple root α in the open
interval D. Then the class of methods (6) without memory is of optimal fourth-order convergence when
\[
G(1)=1,\quad G'(1)=-\tfrac{1}{4},\quad G''(1)=\tfrac{7}{4},\quad \bigl|G^{(3)}(1)\bigr|<\infty, \qquad
H(0)=H'(0)=H''(0)=0,\quad \bigl|H^{(3)}(0)\bigr|<\infty,
\]
and it satisfies the error equation
\[
e_{n+1} = \frac{1}{486}\Bigl[-486\,c_2c_3 + 54\,c_4 + \bigl(798 + 64\,G^{(3)}(1)\bigr)c_2^{3} - 27\,H^{(3)}(0)\Bigr]e_n^{4} + O(e_n^{5}). \qquad (7)
\]
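Before turning to the proof, the error equation (7) can be checked independently by symbolic computation. The following sympy sketch is a minimal illustration and not part of the paper: it models f by a quartic with a simple root at 0, keeps g3 = G'''(1) and h3 = H'''(0) as free parameters, and expands one cycle of (6) in the error e.

import sympy as sp

e, c2, c3, c4, g3, h3, x = sp.symbols('e c2 c3 c4 g3 h3 x')

fx  = x + c2*x**2 + c3*x**3 + c4*x**4      # model f with simple root 0 and f'(0) = 1
dfx = sp.diff(fx, x)

F  = lambda z: fx.subs(x, z)
dF = lambda z: dfx.subs(x, z)

# Weights satisfying Theorem 1; g3 = G'''(1) and h3 = H'''(0) remain free.
G = lambda t: 1 - sp.Rational(1, 4)*(t - 1) + sp.Rational(7, 8)*(t - 1)**2 + g3*(t - 1)**3/6
H = lambda p: h3*p**3/6

y    = e - sp.Rational(2, 3)*F(e)/dF(e)                    # first step of (6)
base = (2*F(e) + (y - e)*(dF(y) + dF(e))) / (2*dF(y))      # common factor of the second step
x1   = y - base*(G(dF(y)/dF(e)) + H(F(e)/dF(y)))           # x_{n+1}

err = sp.expand(sp.series(x1, e, 0, 5).removeO())          # may take a little while
print(sp.simplify(err.coeff(e, 3)))   # should print 0 (no cubic error term)
print(sp.simplify(err.coeff(e, 4)))   # should match the bracket of (7) divided by 486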
Proof. Let e_n = x_n − α be the error in the nth iterate. By using symbolic computation and
writing the Taylor series expansion of each term of (6), we attain
\[
f(x_n) = f'(\alpha)\bigl[e_n + c_2 e_n^{2} + c_3 e_n^{3} + c_4 e_n^{4} + O(e_n^{5})\bigr], \qquad (8)
\]
wherein
\[
c_k = \frac{1}{k!}\,\frac{f^{(k)}(\alpha)}{f'(\alpha)}, \qquad k \ge 2.
\]
Also for the first derivative of the function in the first step of our
cycle, we have
\[
f'(x_n) = f'(\alpha)\bigl[1 + 2c_2 e_n + 3c_3 e_n^{2} + 4c_4 e_n^{3} + O(e_n^{4})\bigr]. \qquad (9)
\]
Using (8), (9) and the first step of (6) gives us
\[
y_n = \alpha + \tfrac{1}{3}e_n + \tfrac{2}{3}c_2 e_n^{2} + \tfrac{4}{3}\bigl(c_3 - c_2^{2}\bigr)e_n^{3} + \tfrac{2}{3}\bigl(4c_2^{3} - 7c_2c_3 + 3c_4\bigr)e_n^{4} + O(e_n^{5}). \qquad (10)
\]
In the same way, for the second step of (6), we attain
\[
2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr] = \tfrac{2}{3}f'(\alpha)e_n + \tfrac{14}{9}c_2 f'(\alpha)e_n^{2} + \tfrac{2}{9}\bigl(-8c_2^{2} + 11c_3\bigr)f'(\alpha)e_n^{3} + \tfrac{2}{81}\bigl(180c_2^{3} - 252c_2c_3 + 131c_4\bigr)f'(\alpha)e_n^{4} + O(e_n^{5}). \qquad (11)
\]
Using (11) and Taylor series expanding around the simple root, we obtain
\[
\frac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)} = \tfrac{1}{3}e_n + \tfrac{5}{9}c_2 e_n^{2} + \Bigl(-\tfrac{46}{27}c_2^{2} + \tfrac{10}{9}c_3\Bigr)e_n^{3} + \tfrac{1}{81}\bigl(284c_2^{3} - 435c_2c_3 + 127c_4\bigr)e_n^{4} + O(e_n^{5}). \qquad (12)
\]
Furthermore, by imposing G(1) = 1, G'(1) = −1/4, G''(1) = 7/4, |G^(3)(1)| < ∞, together with
H(0) = H'(0) = H''(0) = 0 and |H^(3)(0)| < ∞, we
have
\[
\frac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)}\,\bigl(G(t_n) + H(\varphi_n)\bigr)
= \tfrac{1}{3}e_n + \tfrac{2}{3}c_2 e_n^{2} + \tfrac{4}{3}\bigl(c_3 - c_2^{2}\bigr)e_n^{3}
+ \tfrac{1}{486}\Bigl[\bigl(498 - 64\,G^{(3)}(1)\bigr)c_2^{3} - 1782\,c_2c_3 + 27\bigl(34c_4 + H^{(3)}(0)\bigr)\Bigr]e_n^{4} + O(e_n^{5}). \qquad (13)
\]
Finally, using (12) and (13) in the last step of (6) yields
\[
e_{n+1} = x_{n+1} - \alpha = \frac{1}{486}\Bigl[-486\,c_2c_3 + 54\,c_4 + \bigl(798 + 64\,G^{(3)}(1)\bigr)c_2^{3} - 27\,H^{(3)}(0)\Bigr]e_n^{4} + O(e_n^{5}). \qquad (14)
\]
This shows that our novel class of iterations (6) reaches the optimal order four by using only
three pieces of information per full cycle. In terms of computational cost, the developed methods of
the class require only three evaluations. Therefore, they have the efficiency index 4^(1/3) ≈ 1.587; that is,
the new class of methods reaches the optimal order of convergence four conjectured by Kung and
Traub. This completes the proof.
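The fourth order can also be observed numerically. The sketch below, again only illustrative, estimates the computational order of convergence with mpmath for one admissible choice of weights (the simplest quadratic G satisfying Theorem 1 and H = 0); the precision, test function and starting point are assumptions.

from mpmath import mp, mpf, sin, log, fabs

mp.dps = 400                                  # assumed working precision for this check

f  = lambda x: sin(x)**2 - x**2 + 1           # test function f3 of Section 4
df = lambda x: sin(2*x) - 2*x

def G(t):                                     # simplest quadratic weight meeting Theorem 1
    return 1 - (t - 1)/4 + mpf(7)/8*(t - 1)**2

def step(x):                                  # one cycle of (6) with H = 0
    fx, dfx = f(x), df(x)
    y   = x - 2*fx/(3*dfx)
    dfy = df(y)
    return y - (2*fx + (y - x)*(dfy + dfx))/(2*dfy) * G(dfy/dfx)

xs = [mpf('1.3')]
for _ in range(5):
    xs.append(step(xs[-1]))

err = [fabs(v - xs[-1]) for v in xs[:-1]]     # the last iterate serves as reference root
for k in range(2, len(err)):                  # computational order of convergence
    print(log(err[k]/err[k-1]) / log(err[k-1]/err[k-2]))   # tends to 4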
A typical member from our contributed powerful class (6) is as follows

\[
\begin{cases}
y_n = x_n - \dfrac{2f(x_n)}{3f'(x_n)},\\[1mm]
x_{n+1} = y_n - \dfrac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)}
\left(\dfrac{17}{8} - 2\,\dfrac{f'(y_n)}{f'(x_n)} + \dfrac{7}{8}\left(\dfrac{f'(y_n)}{f'(x_n)}\right)^{2} + \left(\dfrac{f(x_n)}{f'(y_n)}\right)^{3}\right),
\end{cases} \qquad (15)
\]
where its error equation reads
\[
e_{n+1} = \Bigl(-\tfrac{1}{3} + \tfrac{133}{81}c_2^{3} - c_2c_3 + \tfrac{1}{9}c_4\Bigr)e_n^{4} + O(e_n^{5}). \qquad (16)
\]
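As a usage illustration, the weights of (15), as reconstructed above, can be plugged into the hypothetical class6_step helper sketched after (6); the sample function, precision and starting guess below are assumptions.

from mpmath import mp, mpf, exp

mp.dps = 100                                       # illustrative precision

G15 = lambda t: mpf(17)/8 - 2*t + mpf(7)/8*t**2    # G(1)=1, G'(1)=-1/4, G''(1)=7/4
H15 = lambda p: p**3                               # H(0)=H'(0)=H''(0)=0

f  = lambda x: x*exp(-x) - mpf('0.1')              # sample function x e^{-x} - 0.1
df = lambda x: exp(-x)*(1 - x)

x = mpf('0.3')
for _ in range(4):
    x = class6_step(f, df, x, G15, H15)            # class6_step: sketch given after (6)
print(x)                                           # approx. 0.1118325591589...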
Clearly, choosing different weight functions satisfying Theorem 1 in (6) results in a new optimal
two-point iteration without memory. This reveals the generality of our technique. As other examples,
we can produce the following methods from the proposed class:
\[
\begin{cases}
y_n = x_n - \dfrac{2f(x_n)}{3f'(x_n)},\\[1mm]
x_{n+1} = y_n - \dfrac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)}
\left(1 - \dfrac{f'(y_n) - f'(x_n)}{4f'(x_n)} + \dfrac{7\bigl[f'(y_n) - f'(x_n)\bigr]^{2}}{8\bigl[f'(x_n)\bigr]^{2}}\right),
\end{cases} \qquad (17)
\]
with
\[
e_{n+1} = \Bigl(\tfrac{133}{81}c_2^{3} - c_2c_3 + \tfrac{1}{9}c_4\Bigr)e_n^{4} + O(e_n^{5})
\]
as its error equation; and also
\[
\begin{cases}
y_n = x_n - \dfrac{2f(x_n)}{3f'(x_n)},\\[1mm]
x_{n+1} = y_n - \dfrac{2f(x_n) + (y_n - x_n)\bigl[f'(y_n) + f'(x_n)\bigr]}{2f'(y_n)}
\left(1 - \dfrac{f'(y_n) - f'(x_n)}{4f'(x_n)} + \dfrac{7\bigl[f'(y_n) - f'(x_n)\bigr]^{2}}{8\bigl[f'(x_n)\bigr]^{2}} - \dfrac{5\bigl[f'(y_n) - f'(x_n)\bigr]^{3}}{2\bigl[f'(x_n)\bigr]^{3}}\right),
\end{cases} \qquad (18)
\]
with the following error relation:
\[
e_{n+1} = \tfrac{1}{9}\bigl(-3c_2^{3} - 9c_2c_3 + c_4\bigr)e_n^{4} + O(e_n^{5}).
\]
+



3. Short Review on Literature
This section includes a short review of the existing methods in the literature that share the structure of
(6), that is to say, two evaluations of the first-order derivative and one evaluation of the function per
computing step.
A cubically convergent method was developed in (Weerakoon and Fernando, 2000) as follows
\[
x_{n+1} = x_n - \frac{2f(x_n)}{f'(x_n) + f'(y_n)}, \qquad (19)
\]
wherein
\[
y_n = x_n - \frac{f(x_n)}{f'(x_n)}.
\]
In (Homeier, 2005), Homeier derived the following cubically convergent iteration scheme:
\[
x_{n+1} = x_n - \frac{f(x_n)}{2}\left(\frac{1}{f'(x_n)} + \frac{1}{f'(y_n)}\right). \qquad (20)
\]
The fourth-order Jarratt method (Jarratt, 1969), which uses one evaluation of the function
and two evaluations of the first derivative (the same evaluations as (19) and (20)), is defined by
\[
\begin{cases}
y_n = x_n - \dfrac{2f(x_n)}{3f'(x_n)},\\[1mm]
x_{n+1} = x_n - \dfrac{3f'(y_n) + f'(x_n)}{6f'(y_n) - 2f'(x_n)}\,f(x_n).
\end{cases} \qquad (21)
\]
Recently, an efficient fourth-order technique, which likewise uses two evaluations of the first
derivative and one evaluation of the function, was presented by Khattri and Abbasbandy in
(Khattri and Abbasbandy, 2011) as follows:
\[
\begin{cases}
y_n = x_n - \dfrac{2f(x_n)}{3f'(x_n)},\\[1mm]
x_{n+1} = x_n - \left[1 + \dfrac{21}{8}\,\dfrac{f'(y_n)}{f'(x_n)} - \dfrac{9}{2}\left(\dfrac{f'(y_n)}{f'(x_n)}\right)^{2} + \dfrac{15}{8}\left(\dfrac{f'(y_n)}{f'(x_n)}\right)^{3}\right]\dfrac{f(x_n)}{f'(x_n)}.
\end{cases} \qquad (22)
\]
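For completeness, the reviewed schemes (19)-(22) can be written as one-step helpers in Python. These are minimal sketches based on the formulas displayed above, not reference implementations; for brevity each helper re-evaluates f and f' more often than a production code would.

def weerakoon_fernando(f, df, x):             # Eq. (19), third order
    y = x - f(x)/df(x)
    return x - 2*f(x)/(df(x) + df(y))

def homeier(f, df, x):                        # Eq. (20), third order
    y = x - f(x)/df(x)
    return x - f(x)/2*(1/df(x) + 1/df(y))

def jarratt(f, df, x):                        # Eq. (21), optimal fourth order
    y = x - 2*f(x)/(3*df(x))
    return x - (3*df(y) + df(x))*f(x)/(6*df(y) - 2*df(x))

def khattri_abbasbandy(f, df, x):             # Eq. (22), optimal fourth order
    y = x - 2*f(x)/(3*df(x))
    t = df(y)/df(x)
    return x - (1 + 21*t/8 - 9*t**2/2 + 15*t**3/8)*f(x)/df(x)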


4. Numerical Examples and Discussion
The main objective of this section is to provide a robust comparison between the presented class and
the already known methods in the literature. For the numerical reports here, we have used the cubically
convergent methods (19) and (20), the quartically convergent method (22), and our proposed optimal
fourth-order method (15) from the class (6). The considered nonlinear test functions and their simple roots
are given below; the graphs of some of them are shown in Figures 1-2.
f_1(x) = sin^2(x) + x,  α = 0,
,
27
3 7 2 9 ( 2
1
2
cos ) 1 (
2 3
2
+
+
|

\
|
+ = x
x
x f

, 3 / 1 =
f_3(x) = sin^2(x) - x^2 + 1,  α ≈ 1.404491648215341226035086,
f_4(x) = sin(x) + e^(-x) - 1,  α ≈ 2.076831274533112613070044,
f_5(x) = x e^(-x) - 0.1,  α ≈ 0.111832559158962964833569,
f_6(x) = sin^2(x) + x^2 + x,  α = 0,
f_7(x) = sin(2 cos(x)) - 1 - x^2 + e^(sin(x^3)),  α ≈ 1.3061752018468278250148429,
), sin( 2
8
3
1 )) cos( 2 sin(
x
e x x f + = , 4856 6121253522 7848959876 . 0
2 2 14 3
9
1
cos( ) sin(2 ) 1 sin( )
2
f x x x x x x
x
= + + + + +
, , 6931 2756142332 9257722498 . 0
), 2 /( 1 ) cos( ) tan(ln
3
10
x x x f + = , 3472 5676707351 4432607835 . 0
f_11(x) = tan^(-1)(x),  α = 0,
f_12(x) = x^6 - 10x^3 + x^2 - x + 3,  α ≈ 0.658604847118140436763860,
, 7 11
3 4
13
+ = x x x f , 8137 1077768897 8035111991 . 0
f_14(x) = x^3 - cos(x) + 2,  α ≈ -1.1725779647539700126733327,
f_15(x) = cos(x) - sqrt(x),  α ≈ 0.6417143708728826583985653,
f_16(x) = ln(x) + 2 sin(x) - x^3,  α ≈ 1.2979977432803718471644792,
We use "Div." when an iteration diverges for the considered starting point. The results after four full
iterations are summarized in Table 1, in which "F." stands for failure of the method to find the root after
four full cycles. As the results show, the novel scheme is competitive with all of the methods. All
numerical instances were performed in MATLAB 7.6 using 800-digit floating-point arithmetic
(VPA := 800). We computed the root of each test function for the initial guess x_0, and the
iterative schemes were stopped when |f(x_n)| ≤ 10^(-800). As can be seen, the results reported in Table 1
are in harmony with the analytical development given in the Main Result section.
One should be aware that no iterative method always shows the best accuracy for all test
functions. It is important to review the convergence proof of our proposed class before
implementing it; specifically, one should revisit the assumptions made in the proof whenever the
iterations diverge. For the situations in Table 1 in which a method fails to converge, it is because
the assumptions underlying the proofs of the methods used there are not met.
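The experimental protocol can be mimicked outside MATLAB as well, for instance with mpmath; the driver below is a minimal sketch whose name and interface are illustrative assumptions rather than the code used for Table 1.

from mpmath import mp, mpf, fabs

mp.dps = 800                               # 800-digit arithmetic, as in the experiments

def run(step, f, x0, itmax=4, tol=mpf('1e-800')):
    # `step` maps x_n to x_{n+1}; four cycles of a three-evaluation
    # method give the TNE = 12 budget reported in Table 1.
    x = mpf(x0)
    for _ in range(itmax):
        x = step(x)
        if fabs(f(x)) < tol:
            break
    return x, fabs(f(x))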

Table 1: Comparison of different methods with the same Total Number of Evaluations (TNE = 12)

Function  Guess   (17)       (18)       (20)       (15)
f1        0.3     0.4e-54    0.5e-198   0.2e-107   0.1e-207
f1        0.2     0.2e-64    0.7e-224   0.6e-136   0.7e-210
f1        0.6     0.7e-45    0.1e-143   0.2e-73    0.4e-94
f2        0.3     0.1e-109   0.6e-170   0.4e-260   0.2e-332
f2        0.4     0.1e-87    0.1e-122   0.7e-209   0.3e-274
f2        0.2     0.3e-56    0.6e-87    0.1e-63    0.1e-161
f3        1.3     0.2e-83    0.1e-157   0.2e-176   0.1e-271
f3        1.1     0.7e-41    0.2e-79    0.7e-24    0.1e-158
f3        1.9     0.2e-44    0.8e-77    0.1e-86    0.1e-125
f4        2       0.3e-107   0.8e-138   0.9e-242   0.2e-343
f4        2.6     0.2e-59    0.3e-85    0.1e-109   0.1e-121
f4        1.7     0.4e-42    0.8e-63    0.1e-22    0.3e-177
f5        0.3     0.1e-48    0.2e-93    0.2e-60    0.5e-168
f5        0.6     0.8e-10    0.1e-35    Div.       0.6e-53
f5        -0.5    0.9e-24    0.3e-39    0.4e-42    0.8e-59
f6        0.3     0.6e-76    0.4e-113   0.1e-161   0.1e-189
f6        0.1     0.4e-110   0.6e-141   0.2e-261   0.5e-325
f6        1       0.1e-46    0.1e-104   0.4e-82    0.3e-37
f7        1.35    0.7e-112   0.1e-88    0.2e-234   0.7e-263
f7        1.3     0.8e-211   0.3e-157   0.7e-408   0.6e-455
f7        1.4     0.7e-85    0.1e-60    0.1e-201   0.3e-219
f8        -0.8    0.2e-168   0.1e-172   0.5e-418   0.5e-458
f8        -0.5    0.7e-48    0.1e-55    Div.       0.3e-143
f8        -0.9    0.7e-108   0.1e-102   0.3e-216   0.5e-240
f9        -0.9    0.4e-55    0.1e-81    0.2e-96    0.6e-206
f9        -0.85   0.3e-15    0.1e-55    Div.       0.5e-53
f9        -0.99   0.8e-27    0.5e-40    0.5e-52    0.5e-102
f10       0.5     0.5e-42    0.5e-65    0.1e-57    0.8e-190
f10       0.4     0.6e-53    0.4e-69    0.7e-123   0.1e-193
f10       0.3     0.3e-13    0.1e-22    0.3e-21    0.8e-56
f11       0.1     0.5e-112   0.7e-112   0.3e-740   0.2e-295
f11       0.2     0.4e-88    0.2e-87    0.5e-534   0.1e-218
f11       -0.5    0.1e-57    0.4e-54    0.3e-219   0.4e-146
f12       0.7     0.9e-101   0.1e-138   0.1e-249   0.1e-317
f12       1.1     0.3e-36    0.6e-70    0.1e-64    0.9e-142
f12       0.4     0.5e-25    0.4e-56    F.         0.4e-62
f13       0.65    0.8e-72    0.7e-151   0.7e-140   0.4e-247
f13       0.5     0.1e-44    0.1e-81    0.5e-37    0.4e-165
f13       1.1     0.2e-56    0.1e-83    0.8e-125   0.9e-199
f14       -1      0.1e-52    0.2e-147   0.2e-70    0.3e-171
f14       -2      0.1e-20    0.9e-37    0.8e-35    0.1e-36
f14       -0.8    0.1e-20    0.9e-50    F.         0.1e-37
f15       0.9     0.2e-110   0.9e-138   0.6e-326   0.3e-187
f15       0.2     0.1e-61    0.4e-64    0.5e-214   0.2e-124
f15       1.6     0.2e-58    0.1e-58    0.2e-250   0.2e-41
f16       1.5     0.5e-51    0.7e-82    0.5e-106   0.2e-163
f16       0.3     0.1e-54    0.4e-75    0.2e-123   0.8e-176
f16       2       0.1e-21    0.3e-41    0.1e-35    0.4e-108

It is well known that a wide class of problems arising in several branches of pure and
applied science can be studied in the general framework of the nonlinear equation f(x) = 0. Due to
their importance, several numerical methods have been suggested and analyzed under certain
conditions, and these methods have been constructed using different techniques. Herein, we
have developed a general class of two-point, two-step methods requiring two evaluations of the first-order
derivative and one evaluation of the function per full cycle. The suggested class reaches the optimal
efficiency index 4^(1/3) ≈ 1.587.

Figure 1: Graph of f_9.    Figure 2: Graph of f_7.

We end this paper by mentioning a still-open problem in the root-finding topic: the construction of an
optimal three-step eighth-order method that uses only two evaluations of the function and two evaluations
of the first-order derivative has not yet been contributed.


References
[1] Frontini M, Sormani E, (2003). Some variants of Newton's method with third-order
convergence, Appl. Math. Comput., 140: 419-426.
[2] Homeier HHH, (2005). On Newton-type methods with cubic convergence, J. Comput. Appl.
Math., 176: 425-432.
[3] Jarratt P, (1969). Some efficient fourth order multipoint methods for solving equations, BIT, 9:
119-124.
[4] Khattri SK, Abbasbandy S, (2011). Optimal fourth order family of iterative methods,
Matematicki Vesnik, 63: 67-72.
[5] Kung HT, Traub JF, (1974). Optimal order of one-point and multipoint iteration, J. ACM, 21:
643-651.
[6] Kyurkchiev N, Iliev A, (2009). A note on the constructing of nonstationary methods for solving
nonlinear equations with raised speed of convergence, Serdica J. Comput., 3: 47-74.
[7] Sargolzaei P, Soleymani F, (2011). Accurate fourteenth-order methods for solving nonlinear
equations, Numer. Algorithms, 58: 513-527.
[8] Soleymani F, (2011a). Regarding the accuracy of optimal eighth-order methods, Math. Comput.
Modelling, 53: 1351-1357.
[9] Soleymani F, Mousavi BS, (2011). A novel computational technique for finding simple roots of
nonlinear equations, Int. J. Math. Analysis, 5: 1813-1819.
[10] Soleymani F, (2011b). A novel and precise sixth-order method for solving nonlinear equations,
Int. J. Math. Models Meth. App. Sci., 5: 730-737.
[11] Soleymani F, Sharifi M, (2011a). On a general efficient class of four-step root-finding methods,
Int. J. Math. Comput. Simul., 5: 181-189.
[12] Soleymani F, Sharifi M, (2011b). On a class of fifteenth-order iterative formulas for simple
roots, Int. Elec. J. Pure Appl. Math., 3: 245-252.
[13] Soleymani F, Sharifi M, (2010). On a cubically iterative scheme for solving non-linear
equations, Far East J. Appl. Math., 43: 137-143.
[14] Noor MA, Khan WA, Noor KI, Al-Said E, (2011a). Higher-order iterative methods free from
second derivative for solving nonlinear equations, Int. J. Physical Sci., 6: 1887-1893.
[15] Noor MA, Shah FA, Noor KI, Al-Said E, (2011b). Variational iteration technique for finding
multiple roots of nonlinear equations, Sci. Res. Essays, 6: 1344-1350.
[16] Phiri PA, Makinde OD, (2010). A new derivative-free method for solving nonlinear equations,
Int. J. Phys. Sci., 5: 935-939.
[17] Weerakoon S, Fernando GI, (2000). A variant of Newton's method with accelerated third-order
convergence, Appl. Math. Lett., 13: 87-93.
