
Open Journal of Mathematical Analysis

New iterative methods using variational iteration technique and their dynamical behavior

Muhammad Nawaz\(^1\), Amir Naseem, Waqas Nazeer
Department of Mathematics, Govt. Post Graduate College, Sahiwal, Pakistan (M.N.)
Department of Mathematics, Lahore Leeds University, Lahore, Pakistan (A.N.)
Division of Science and Technology, University of Education, Lahore, 54000, Pakistan (W.N.)
\(^{1}\)Corresponding author; mathvision204@gmail.com

Copyright © 2018 Muhammad Nawaz, Amir Naseem, Waqas Nazeer. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The aim of this paper is to present new sixth-order iterative methods for solving non-linear equations. The derivation of these methods is based purely on the variational iteration technique. Our methods are verified by means of various test examples, and the numerical results show that our developed methods are more effective than previously well-known methods.

Keywords:

Non-linear equations; Newton’s method

1. Introduction

One of the most important problems in numerical analysis is to find the values of \(x\) which satisfy the equation $$ f(x)=0.$$ The solution of such problems has many applications in the applied sciences. In order to solve these problems, various numerical methods have been developed using different techniques such as Adomian decomposition, Taylor series, perturbation methods, quadrature formulas and the variational iteration technique; see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] and the references therein. One of the oldest and most famous methods for solving non-linear equations is the classical Newton's method, which can be written as:
\begin{equation} x_{n+1}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}, n=0,1,2,... \end{equation}
(1)
This is an important and basic method, which converges quadratically [12]. Modifications of Newton's method have produced various iterative methods with better convergence order; some of them are given in [3, 8, 9, 10, 11, 19, 20] and the references therein. In this paper, we develop three new iterative methods using the variational iteration technique, which was developed by He [14]. Using this technique, Noor and Shah [18] suggested and analyzed some iterative methods for solving nonlinear equations, and the technique has been applied to a variety of diverse problems [14, 15, 16]. Here we apply it to obtain higher-order iterative methods. We also discuss the convergence criteria of these new iterative methods. Several examples are given to show the performance of our proposed methods compared with other similar existing methods. Polynomiographs of different complex polynomials generated by our methods are also presented; these are quite new and reflect the dynamical behavior of our methods.
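For the reader's convenience, a minimal sketch of the classical Newton iteration (1) in Python follows; the function names, tolerance, and iteration cap are our own illustrative choices, not part of the original text.

```python
import math

def newton(f, fprime, x0, eps=1e-15, max_iter=100):
    """Classical Newton's method (1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < eps:          # stop when |f(x_n)| is small enough
            return x
        x = x - fx / fprime(x)     # Newton step
    return x

# Example: root of x*exp(x) - 1 = 0 (Example 4.6 below)
root = newton(lambda x: x * math.exp(x) - 1,
              lambda x: math.exp(x) * (x + 1), 1.0)
print(root)  # ≈ 0.567143290409784
```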

2. Construction of iterative methods using variational technique

In this section, we develop some new sixth-order iterative methods for solving non-linear equations. Using the variational iteration technique, we derive a main recurrence relation from which the new iterative methods are obtained by considering some special cases of the auxiliary function \(g\). These are multi-step methods consisting of predictor and corrector steps, and their convergence is better than that of one-step methods. Now consider the non-linear equation of the form
\begin{equation} f(x)=0. \end{equation}
(2)
Suppose that \(\alpha\) is a simple root of (2) and \(\gamma\) is an initial guess sufficiently close to \(\alpha \). Let \(g(x)\) be an arbitrary function and \(\lambda\) a parameter, usually called the Lagrange multiplier, which can be identified by the optimality condition. Consider the auxiliary function
\begin{equation} H(x)=\psi(x)+\lambda[f(\psi(x))g(\psi(x))], \end{equation}
(3)
where \(\psi(x)\) is an arbitrary auxiliary iteration function of order \(p\) with \(p\geq{1}\). Using the optimality criterion, we can obtain the value of \(\lambda\) from (3) as:
\begin{equation} \lambda=-\frac{1}{g^{\prime }(\psi(x))f(\psi(x))+g(\psi(x))f^{\prime }(\psi(x))}. \end{equation}
(4)
From (3) and (4), we get
\begin{equation} H(x)=\psi(x)-\frac{f(\psi(x))g(\psi(x))}{[f^{\prime }(\psi(x))g(\psi(x))+ f(\psi(x))g^{\prime }(\psi(x))]}. \end{equation}
(5)
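As a sanity check, the value of \(\lambda\) in (4), and hence the form (5), can be recovered symbolically. Below is a sketch using sympy (package assumed available); the factor \(\psi^{\prime}(x)\) cancels from the optimality condition.

```python
import sympy as sp

x, lam = sp.symbols('x lambda')
f, g, psi = sp.Function('f'), sp.Function('g'), sp.Function('psi')

# Auxiliary function (3): H(x) = psi(x) + lambda * f(psi(x)) * g(psi(x))
H = psi(x) + lam * f(psi(x)) * g(psi(x))

# Optimality condition H'(x) = 0, solved for lambda
lam_sol = sp.simplify(sp.solve(sp.Eq(sp.diff(H, x), 0), lam)[0])
print(lam_sol)
# simplifies to -1/(f(psi)*g'(psi) + g(psi)*f'(psi)), i.e. (4)
```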
Now we apply Eq. (5) to construct a general iterative scheme. For this, suppose that
\begin{equation} \psi(x)=y=x-\frac{2f(x)f^{\prime }(x)}{2f^{\prime 2}(x)-f(x)f^{\prime \prime }(x)}, \end{equation}
(6)
which is the well-known Halley's method with third-order convergence. With the help of (5) and (6), we can write
\begin{equation} H(x)=y-\frac{f(y)g(y)}{[f^{\prime }(y)g(y)+ f(y)g^{\prime }(y)]}. \end{equation}
(7)
If \(\alpha\) is a root of \(f(x)\), then for \(x=\alpha\) we have \(f(\alpha)=0\), so that \(y=\alpha\) and we can write:
\begin{equation} \frac{g(y)}{g^{\prime }(y)}=\frac{g[\alpha-\frac{2f(\alpha)f^{\prime }(\alpha)}{2f^{\prime 2}(\alpha)-f(\alpha)f^{\prime \prime }(\alpha)}]}{g^{\prime }[\alpha-\frac{2f(\alpha)f^{\prime }(\alpha)}{2f^{\prime 2}(\alpha)-f(\alpha)f^{\prime \prime }(\alpha)}]} =\frac{g(\alpha)}{g^{\prime }(\alpha)}. \end{equation}
(8)
Also, since \(x\) is sufficiently close to \(\alpha\), we take
\begin{equation} \frac{g(x)}{g^{\prime }(x)}=\frac{g(\alpha)}{g^{\prime }(\alpha)}. \end{equation}
(9)
With the help of (8) and (9), we get
\begin{equation} \frac{g(y)}{g^{\prime }(y)}=\frac{g(x)}{g^{\prime }(x)}. \end{equation}
(10)
Using (10) in (7), we obtain
\begin{equation} H(x)=y-\frac{f(y)g(x)}{[f^{\prime }(y)g(x)+ f(y)g^{\prime }(x)]}. \end{equation}
(11)
This enables us to define the following iterative scheme:
\begin{equation} x_{n+1}=y_n-\frac{f(y_n)g(x_n)}{[f^{\prime }(y_n)g(x_n)+f(y_n)g^{\prime }(x_n)]}. \end{equation}
(12)
where \(y_n=x_n-\frac{2f(x_n)f^{\prime }(x_n)}{2f^{\prime2}(x_n)-f(x_n)f^{\prime \prime }(x_n)}\). Relation (12) is the main and general iterative scheme, from which we deduce iterative methods for solving non-linear equations by considering some special cases of the auxiliary function \(g\); a compact code sketch is given below.
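The following is a minimal Python sketch of scheme (12) with the Halley predictor (6) and a pluggable auxiliary function \(g\); the names, tolerance, and iteration cap are our own assumptions.

```python
def general_scheme(f, df, d2f, g, dg, x0, eps=1e-15, max_iter=50):
    """Sketch of the general iterative scheme (12):
    Halley predictor (6) followed by the variational corrector."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        y = x - 2 * fx * dfx / (2 * dfx**2 - fx * d2fx)   # predictor step (6)
        fy, dfy, gx, dgx = f(y), df(y), g(x), dg(x)
        x_new = y - fy * gx / (dfy * gx + fy * dgx)       # corrector step (12)
        if abs(x_new - x) < eps or abs(f(x_new)) < eps:   # stopping criteria (Sec. 4)
            return x_new
        x = x_new
    return x
```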

Case 1. Let \(g(x_n)=\exp(\beta x_n^{2})\); then \(g^{\prime}(x_n)= 2\beta x_{n} g(x_n)\). Using these values in (12), we obtain the following algorithm.

Algorithm 2.1. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},n=0,1,2,..., \\ x_{n+1}&=& y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+2\beta x_n f(y_n)]} \end{eqnarray*}
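Algorithm 2.1 is then the call below, a sketch reusing the general_scheme function above with \(g(x)=\exp(\beta x^{2})\) and \(\beta=1\); the test function and starting point are those of Example 4.1.

```python
import math

beta = 1.0
root = general_scheme(
    f=lambda x: x**10 - 1,                        # Example 4.1: f_1 = x^10 - 1
    df=lambda x: 10 * x**9,
    d2f=lambda x: 90 * x**8,
    g=lambda x: math.exp(beta * x**2),            # Case 1 auxiliary function
    dg=lambda x: 2 * beta * x * math.exp(beta * x**2),
    x0=0.7)
print(root)  # expected ≈ 1.0
```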

Case 2. Let \(g(x_n)=\exp(-\beta f(x_n))\); then \(g^{\prime}(x_n)= -\beta f^{\prime }(x_{n})g(x_n)\). Using these values in (12), we obtain the following algorithm.

Algorithm 2.2. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},n=0,1,2,..., \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)-\beta f(y_n)f^{\prime }(x_n)]}. \end{eqnarray*}
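Similarly, Algorithm 2.2 corresponds to \(g(x)=\exp(-\beta f(x))\); a sketch with \(\beta=1\) on Example 4.2 (again with our own wrapper names):

```python
import math

beta = 1.0
f = lambda x: (x - 1)**3 - 1                      # Example 4.2: f_2 = (x-1)^3 - 1
df = lambda x: 3 * (x - 1)**2
root = general_scheme(
    f=f, df=df,
    d2f=lambda x: 6 * (x - 1),
    g=lambda x: math.exp(-beta * f(x)),           # Case 2 auxiliary function
    dg=lambda x: -beta * df(x) * math.exp(-beta * f(x)),
    x0=-0.5)
print(root)  # expected ≈ 2.0
```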

Case 3. Let \(g(x_n)=\exp(-\beta x_n)\); then \(g^{\prime}(x_n)= -\beta g(x_n)\). Using these values in (12), we obtain the following algorithm.

Algorithm 2.3. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative schemes: \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},n=0,1,2,..., \\ x_{n+1}&=& y_n-\frac{f(y_n)}{[f^{\prime }(y_n)-\beta f(y_n)]}. \end{eqnarray*}
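And Algorithm 2.3 corresponds to \(g(x)=\exp(-\beta x)\), for which \(g^{\prime}/g=-\beta\) is constant, so the corrector simplifies the most; a sketch with \(\beta=1\) on Example 4.6:

```python
import math

beta = 1.0
root = general_scheme(
    f=lambda x: x * math.exp(x) - 1,              # Example 4.6: f_6 = x e^x - 1
    df=lambda x: math.exp(x) * (x + 1),
    d2f=lambda x: math.exp(x) * (x + 2),
    g=lambda x: math.exp(-beta * x),              # Case 3 auxiliary function
    dg=lambda x: -beta * math.exp(-beta * x),
    x0=1.0)
print(root)  # expected ≈ 0.567143290409784
```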

By taking different values of \(\beta\), we can obtain different iterative methods. To obtain the best results from the above algorithms, \(\beta\) should be chosen so that the denominator is neither zero nor small in magnitude.

3. Convergence Analysis

In this section, we discuss the convergence order of the main and general iteration scheme (12).

Theorem 3.1. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\). If \(f(x)\) is sufficiently smooth in a neighborhood of \(\alpha \), then the convergence order of the main and general iteration scheme described in relation (12) is at least six.

Proof. To analyze the convergence of the main and general iteration scheme described in relation (12), suppose that \(\alpha \) is a root of the equation \(f(x)=0\) and let \(e_n=x_n-\alpha\) be the error at the \(n\)th iteration. Using the Taylor series expansion of \(f\) about \(\alpha\), we have \begin{eqnarray*} f(x_n)&=&{f^{\prime }(\alpha)e_n}+\frac{1}{2!}{f^{\prime \prime }(\alpha)e_n^2}+\frac{1}{3!}{f^{\prime \prime \prime }(\alpha)e_n^3}+\frac{1}{4!}{f^{(iv) }(\alpha)e_n^4}+\frac{1}{5!}{f^{(v) }(\alpha)e_n^5}\\ &+&\frac{1}{6!}{f^{(vi) }(\alpha)e_n^6}+O(e_n^{7}), \end{eqnarray*}

\begin{eqnarray} f(x_n)={f^{\prime }(\alpha)}[e_n+c_2e_n^2+c_3e_n^3+c_4e_n^4+c_5e_n^5+c_6e_n^6+O(e_n^{7})], \end{eqnarray}
(13)
\begin{eqnarray} {f^{\prime }(x_n)}&=&{f^{\prime }(\alpha)}[1+2c_2e_n+3c_3e_n^2+4c_4e_n^3+5c_5e_n^4+6c_6e_n^5+7c_7e_n^6\notag\\ &+&O(e_n^{7})]\label{15}, \end{eqnarray}
(14)
\begin{eqnarray} {f^{\prime \prime}(x_n)}&=&{f^{\prime}(\alpha)}[2c_2+6c_3e_n+12c_4e_n^2+20c_5e_n^3+30c_6e_n^4+42c_7e_n^5+56c_8e_n^6\notag\\ &+&O(e_n^{7})]. \end{eqnarray}
(15)
where $$c_k=\frac{1}{k!}\frac{{f^{(k) }(\alpha)}}{{f^{\prime }(\alpha)}}, \quad k=2,3,\ldots.$$ With the help of (13), (14) and (15), we get
\begin{eqnarray} y_n&=& \alpha+(-c_3+c_2^2)e_n^3+(6c_3c_2-3c_4-3c_2^3)e_n^4\notag\\ &+&(6c_3^2+12c_2c_4-6c_5+6c_2^4-18c_3c_2^2)e_n^5\notag\\ &+&(20c_2c_5+19c_4c_3-10c_6-28c_3^2c_2+37c_2^3c_3-29c_2^2c_4-9c_2^5)e_n^6+O(e_n^{7}), \label{16} \end{eqnarray}
(16)
\begin{eqnarray} f(y_n)&=&{f^{\prime}(\alpha)}[(-c_3+c_2^2)e_n^3+(6c_3c_2-3c_4-3c_2^3)e_n^4+(6c_3^2+12c_2c_4-6c_5+6c_2^4\notag\\ &&-18c_3c_2^2)e_n^5+(20c_2c_5+19c_4c_3-10c_6-27c_3^2c_2+35c_2^3c_3-29c_2^2c_4-8c_2^5)e_n^6+O(e_n^{7})],\label{17} \end{eqnarray}
(17)
\begin{eqnarray} {f^{\prime}(y_n)}&=&{f^{\prime}(\alpha)}[1+(-2c_3c_2+2c_2^3)e_n^3+(12c_3c_2^2-6c_2c_4-6c_2^4)e_n^4+(12c_3^2c_2+24c_2^2c_4\notag\\ &&-12c_2c_5+12c_2^5-36c_2^3c_3)e_n^5+(40c_5c_2^2+38c_3c_2c_4-20c_2c_6-62c_2^2c_3^2+77c_2^4c_3-58c_2^3c_4-18c_2^6\notag\\ &&+3c_3^3)e_n^6+O(e_n^{7})],\label{18} \end{eqnarray}
(18)
\begin{eqnarray} g(x_n)&=&g(\alpha)+g^{\prime}(\alpha)e_n+\frac{g^{\prime \prime}(\alpha)}{2!}e_n^2+\frac{g^{\prime\prime\prime}(\alpha)}{3!}e_n^3+\frac{g^{(iv)}(\alpha)}{4!}e_n^4+\frac{g^{(v)}(\alpha)}{5!}e_n^5+ \frac{g^{(vi)}(\alpha)}{6!}e_n^6\notag\\ &&+ O(e_n^{7}),\label{20} \end{eqnarray}
(19)
\begin{eqnarray} g^{\prime}(x_n)&=& g^{\prime}(\alpha)+g^{\prime \prime}(\alpha)e_n+\frac{g^{\prime\prime\prime}(\alpha)}{2!}e_n^2+\frac{g^{(iv)}(\alpha)}{3!}e_n^3+\frac{g^{(v)}(\alpha)}{4!}e_n^4+ \frac{g^{(vi)}(\alpha)}{5!}e_n^5\notag\\ &&+\frac{g^{(vii)}(\alpha)}{6!}e_n^6+O(e_n^{7}). \end{eqnarray}
(20)
Using equations (13)-(20) in the general iteration scheme (12), we get: \begin{eqnarray*} x_{n+1}&=&\alpha+\frac{(-c_3+c_2^2)^2[g(\alpha)c_2+g^{\prime}(\alpha)]}{g(\alpha)}e_n^6+O(e_n^{7}), \end{eqnarray*} which implies that \begin{eqnarray*} e_{n+1}&=&\frac{(-c_3+c_2^2)^2[g(\alpha)c_2+g^{\prime}(\alpha)]}{g(\alpha)}e_n^6+O(e_n^{7}). \end{eqnarray*} The above relation shows that the main and general iteration scheme (12) has sixth-order convergence, and hence all iterative methods deduced from it also converge with order six.
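The sixth order can also be observed numerically: the computational order of convergence \(\log e_{n+1}/\log e_n\) should tend to 6. Below is a sketch using mpmath (assumed available) for Algorithm 2.3 on Example 4.6, where the reference root is the omega constant \(\alpha\approx 0.5671432904\ldots\).

```python
from mpmath import mp, mpf, exp, log

mp.dps = 200                                   # enough digits to watch order ~6
f   = lambda x: x * exp(x) - 1                 # Example 4.6
df  = lambda x: exp(x) * (x + 1)
d2f = lambda x: exp(x) * (x + 2)
beta = mpf(1)
alpha = mpf('0.5671432904097838729999686622103555497538')  # root of x e^x = 1

x, errs = mpf(1), []
for _ in range(3):
    y = x - 2*f(x)*df(x) / (2*df(x)**2 - f(x)*d2f(x))   # Halley predictor (6)
    x = y - f(y) / (df(y) - beta * f(y))                # Algorithm 2.3 corrector
    errs.append(abs(x - alpha))
for e0, e1 in zip(errs, errs[1:]):
    print(log(e1) / log(e0))                   # computational order, tends to ~6
```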

4. Applications

In this section we include some nonlinear functions to illustrate the efficiency of our developed algorithms for \(\beta = 1\). We compare our developed algorithms with Newton's method (NM) [12], the Ostrowski method (OM) [7], Traub's method (TM) [12], and the modified Halley's method (MHM) [15]. We take \(\varepsilon =10^{-15}\), and the following stopping criteria are used in the computer programs (a sketch for counting \(N\) and \(N_{f}\) follows the list):
  1. \(|x_{n+1}-x_{n}|< \varepsilon.\)
  2. \(|f(x_{n+1})|< \varepsilon.\)
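For reproducibility of the \(N\) and \(N_{f}\) columns in the tables, one convenient bookkeeping trick is to wrap each callable in a counter; this small helper is our own, not part of the original programs.

```python
class Counted:
    """Wrap a callable and count its evaluations (for the N_f column)."""
    def __init__(self, fn):
        self.fn, self.calls = fn, 0
    def __call__(self, x):
        self.calls += 1
        return self.fn(x)

f, df, d2f = (Counted(fn) for fn in
              (lambda x: x**10 - 1, lambda x: 10*x**9, lambda x: 90*x**8))
# ... run any of the algorithms with f, df, d2f, then:
# N_f = f.calls + df.calls + d2f.calls
```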

Example 4.1. \(f_{1}=x^{10}-1\)

Table 1. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(17\) \(34\) \(0.7\) \(1.020203e-25\)
OM \(6\) \(18\) \(0.7\) \(3.610418e-20\)
TM \(9\) \(27\) \(0.7\) \(2.903022e-44\)
MHM \(5\) \(15\) \(0.7\) \(7.301483e-68\) \(1.0000000000000000000000000\)
Algorithm 2.1 \(5\) \(15\) \(0.7\) \(5.714130e-50\)
Algorithm 2.2 \(4\) \(12\) \(0.7\) \(3.873651e-18\)
Algorithm 2.3 \(3\) \(9\) \(0.7\) \(1.569018e-18\)

Example 4.2. \(f_{2}=(x-1)^3-1\)

Table 2. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(16\) \(32\) \(-0.5\) \(3.692382e-21\)
OM \(9\) \(27\) \(-0.5\) \(3.319738e-43\)
TM \(8\) \(24\) \(-0.5\) \(3.692382e-21\)
MHM \(7\) \(21\) \(-0.5\) \(2.178093e-15\) \(2.000000000000000000000000000000\)
Algorithm 2.1 \(5\) \(15\) \(-0.5\) \(1.647894e-50\)
Algorithm 2.2 \(4\) \(12\) \(-0.5\) \(1.279762e-65\)
Algorithm 2.3 \(3\) \(9\) \(-0.5\) \(2.042477e-19\)

Example 4.3. \(f_{3}=xe^{x^2}-\sin^{2}(x)+3\cos(x)+5\)

Table 3. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(11\) \(22\) \(-2.5\) \(5.818711e-27\)
OM \(5\) \(15\) \(-2.5\) \(5.818711e-27\)
TM \(6\) \(18\) \(-2.5\) \(2.504418e-54\)
MHM \(11\) \(33\) \(-2.5\) \(1.111535e-38\) \(-1.207647827130918927009416758360 \)
Algorithm 2.1 \(4\) \(12\) \(-2.5\) \(4.908730e-22\)
Algorithm 2.2 \(5\) \(15\) \(-2.5\) \(9.011081e-41\)
Algorithm 2.3 \(4\) \(12\) \(-2.5\) \(4.225070e-35\)

Example 4.4. \(f_{4}=\sin^{2}(x)-x^{2}+1\)

Table 4. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(15\) \(30\) \(0.1\) \(3.032691e-22\)
OM \(8\) \(24\) \(0.1\) \(2.481593e-57\)
TM \(8\) \(24\) \(0.1\) \(2.903022e-44\)
MHM \(7\) \(21\) \(0.1\) \(7.771503e-47\) \(1.404491648215341226035086817790 \)
Algorithm 2.1 \(4\) \(12\) \(0.1\) \(3.145402e-73\)
Algorithm 2.2 \(6\) \(18\) \(0.1\) \(1.208844e-45\)
Algorithm 2.3 \(3\) \(9\) \(0.1\) \(1.401860e-29\)

Example 4.5. \(f_{5}=e^{(x^2+7x-30)}-1\)

Table 5. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(16\) \(32\) \(2.8\) \(1.277036e-16\)
OM \(5\) \(15\) \(2.8\) \(3.837830e-24\)
TM \(8\) \(24\) \(2.8\) \(1.277036e-16\)
MHM \(5\) \(15\) \(2.8\) \(8.373329e-70\) \(3.000000000000000000000000000000 \)
Algorithm 2.1 \(4\) \(16\) \(2.8\) \(4.131496e-34\)
Algorithm 2.2 \(3\) \(9\) \(2.8\) \(9.157220e-32\)
Algorithm 2.3 \(3\) \(9\) \(2.8\) \(4.858181e-34\)

Example 4.6. \(f_{6}=xe^{x}-1\)

Table 6. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(5\) \(10\) \(1\) \(8.478184e-17\)
OM \(3\) \(9\) \(1\) \(8.984315e-40\)
TM \(3\) \(9\) \(1\) \(2.130596e-33\)
MHM \(3\) \(9\) \(1\) \(1.116440e-68\) \(0.567143290409783872999968662210\)
Algorithm 2.1 \(2\) \(6\) \(1\) \(2.910938e-19\)
Algorithm 2.2 \(2\) \(6\) \(1\) \(1.292777e-17\)
Algorithm 2.3 \(2\) \(6\) \(1\) \(2.468437e-27\)

Example 4.7. \(f_{7}=x^{3}+4x^{2}-15\)

Table 7. Comparison of various iterative methods
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
NM \(38\) \(76\) \(-0.3\) \(1.688878e-22\)
OM \(6\) \(18\) \(-0.3\) \(1.173790e-16\)
TM \(19\) \(57\) \(-0.3\) \(1.688878e-22\)
MHM \(16\) \(48\) \(-0.3\) \(3.742527e-16\) \(1.631980805566063517522106445540 \)
Algorithm 2.1 \(4\) \(12\) \(-0.3\) \(1.663804e-29\)
Algorithm 2.2 \(6\) \(18\) \(-0.3\) \(1.744284e-38\)
Algorithm 2.3 \(4\) \(12\) \(-0.3\) \(1.639561e-72\)

Tables 1-7 show the numerical comparisons of Newton's method, the Ostrowski method, Traub's method, the modified Halley's method and our developed methods. The columns report the number of iterations \(N\) and the number of function and derivative evaluations \(N_{f}\) required to meet the stopping criteria, the starting point \(x_{0}\), and the magnitude of \(f\) at the final estimate \(x_{n+1}\), together with the approximate root.

5. Conclusions

We have established three new sixth-order methods for solving non-linear equations. Our developed methods have efficiency index \(6^{\frac{1}{3}}\approx1.8171\). We compared our methods with other well-known iterative methods, and the comparison tables (1-7) show that our methods are quite fast and efficient compared with other similar methods.

Competing Interests

The authors do not have any competing interests in the manuscript.

References

  1. Nazeer, W., Naseem, A., Kang, S. M., & Kwun, Y. C. (2016). Generalized Newton Raphson's method free from second derivative. Journal of Nonlinear Sciences and Applications, 9, 2823-2831.
  2. Nazeer, W., Tanveer, M., Kang, S. M., & Naseem, A. (2016). A new Householder's method free from second derivatives for solving nonlinear equations and polynomiography. Journal of Nonlinear Sciences and Applications, 9, 998-1007.
  3. Chun, C. (2006). Construction of Newton-like iteration methods for solving nonlinear equations. Numerische Mathematik, 104(3), 297-315.
  4. Burden, R. L., & Faires, J. D. (2010). Numerical Analysis (9th ed.). Cengage Learning.
  5. Stoer, J., & Bulirsch, R. (2013). Introduction to Numerical Analysis (Vol. 12). Springer Science & Business Media.
  6. Quarteroni, A., Sacco, R., & Saleri, F. (2010). Numerical Mathematics (Vol. 37). Springer Science & Business Media.
  7. Chen, D., Argyros, I. K., & Qian, Q. S. (1993). A note on the Halley method in Banach spaces. Applied Mathematics and Computation, 58(2-3), 215-224.
  8. Frontini, M., & Sormani, E. (2003). Some variant of Newton's method with third-order convergence. Applied Mathematics and Computation, 140(2-3), 419-426.
  9. Gutiérrez, J. M., & Hernández, M. A. (1997). A family of Chebyshev-Halley type methods in Banach spaces. Bulletin of the Australian Mathematical Society, 55(1), 113-130.
  10. Householder, A. S. (1970). The Numerical Treatment of a Single Nonlinear Equation. McGraw-Hill, New York.
  11. Sebah, P., & Gourdon, X. (2001). Newton's method and high order iterations. Numbers, Computation, 1-10.
  12. Traub, J. F. (1964). Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, NJ.
  13. Inokuti, M., Sekine, H., & Mura, T. (1978). General use of the Lagrange multiplier in nonlinear mathematical physics. Variational Method in the Mechanics of Solids, 33(5), 156-162.
  14. He, J. H. (2007). Variational iteration method: some recent results and new interpretations. Journal of Computational and Applied Mathematics, 207(1), 3-17.
  15. Noor, K. I., & Noor, M. A. (2007). Predictor-corrector Halley method for nonlinear equations. Applied Mathematics and Computation, 188(2), 1587-1591.
  16. He, J. H. (1999). Variational iteration method: a kind of non-linear analytical technique: some examples. International Journal of Non-Linear Mechanics, 34(4), 699-708.
  17. Noor, M. A. (2007). New classes of iterative methods for nonlinear equations. Applied Mathematics and Computation, 191(1), 128-131.
  18. Noor, M. A., & Shah, F. A. (2009). Variational iteration technique for solving nonlinear equations. Journal of Applied Mathematics and Computing, 31(1-2), 247-254.
  19. Kou, J. (2007). The improvements of modified Newton's method. Applied Mathematics and Computation, 189(1), 602-609.
  20. Abbasbandy, S. (2003). Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 145(2-3), 887-893.