
Open Journal of Mathematical Analysis

A new third-order iteration method for solving nonlinear equations

Muhammad Saqib, Zain Majeed, Muhammad Quraish, Waqas Nazeer\(^1\)
Department of Mathematics, Govt. Degree College, Kharian, Pakistan; (M.S.)
Department of Mathematics and Statistics, The University of Lahore, Lahore 54000, Pakistan; (Z.M.)
Department of Mathematics, The University of Lahore (Pakpattan Campus), Lahore, Pakistan; (M.Q.)
Division of Science and Technology, University of Education, Lahore 54000, Pakistan; (W.N.)
\(^{1}\)Corresponding Author; nazeer.waqas@ue.edu.pk

Copyright © 2018 Muhammad Saqib, Zain Majeed, Muhammad Quraish, Waqas Nazeer. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we establish a two-step third-order iteration method for solving nonlinear equations. The efficiency index of the method is 1.442, which is greater than that of the Newton-Raphson method (1.414). It is important to note that our method performs very well in comparison to the fixed point method and the method discussed by Kang et al. (Abstract and Applied Analysis, Volume 2013, Article ID 487060).

Keywords:

Nonlinear equations; Iterative methods; Order of convergence; Efficiency index.

1. Introduction

Solving equations in one variable is among the most discussed problems in numerical analysis. There are several numerical techniques for solving nonlinear equations (see for example [1, 2, 3, 4, 5, 6, 7, 8] and the references therein). For a given function \(f\), we have to find at least one solution of the equation \(f(x)=0\). Note that, a priori, we put no restrictions on the function \(f\). In order to check whether a candidate \(\alpha\) is indeed a solution, we need to be able to evaluate the function, that is, to verify that \(f(\alpha) = 0\). In reality, the mere ability to evaluate the function does not suffice: we need to assume some kind of "good behavior". The more we assume, the more potential we have to develop fast algorithms for finding the root. At the same time, stronger assumptions reduce the class of functions that satisfies them. This is a fundamental paradigm in numerical analysis.

We present a new iteration method that approximates the root of a nonlinear equation in one variable using the values of the function and its derivative. Our method converges to the root cubically. In this study, we suggest an improvement of the iteration method of Kang et al. [10] at the expense of one additional first-derivative evaluation. It is shown that the suggested method converges to the root with order at least three in a neighborhood of the root, whenever the first and higher-order derivatives of the function exist in that neighborhood. This means that our method approximately triples the number of significant digits after each iteration. Numerical examples support this theory, and the computational order of convergence is even more than three for certain functions.

Recall that the fixed point iteration method [9] is the fundamental algorithm for solving nonlinear equations in one variable. In this method, the equation is rewritten as
\begin{equation} x = g(x), \tag{1} \end{equation}
where the following hold:
  1. there exists an interval \([a,b]\) such that \(g(x)\in [a,b]\) for all \(x\in [a,b]\);
  2. there exists a constant \(L\) such that \(\left\vert g^{\prime}(x)\right\vert \leq L<1\) for all \(x\in [a,b]\).
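For concreteness, the following minimal Python sketch (our illustration, not part of the original paper) implements this fixed point iteration; the tolerance and iteration cap are illustrative assumptions.

```python
import math

def fixed_point(g, x0, tol=1e-15, max_iter=1000):
    """Fixed point iteration x_{n+1} = g(x_n) for solving x = g(x).
    Stops when successive iterates agree to within tol (illustrative rule)."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Example 4.2 below uses g(x) = cos(x); its fixed point is 0.7390851332...
root, n = fixed_point(math.cos, 1.0)
print(root, n)
```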

Definition 1.1. Suppose that \(\{x_n\} \to \alpha\) with the property that \begin{equation*} \lim\limits_{n \to \infty} \left\vert \frac{x_{n+1}-\alpha }{(x_{n}-\alpha )^{q}}\right\vert = D, \end{equation*} where \(D \in \mathbb{R}^+\) and \(q \in \mathbb{Z}^+\). Then \(D\) is called the constant of convergence and \(q\) is called the order of convergence.
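Definition 1.1 also suggests how \(q\) can be estimated numerically from consecutive iterates; the helper below (our addition, not part of the paper) implements the usual computational order of convergence \(q_{n} \approx \ln\vert e_{n+1}/e_{n}\vert \,/\, \ln\vert e_{n}/e_{n-1}\vert\), assuming the limit \(\alpha\) is known.

```python
import math

def computational_order(iterates, alpha):
    """Estimate the order q of Definition 1.1 from a list of iterates
    converging to alpha, via q_n ~ ln|e_{n+1}/e_n| / ln|e_n/e_{n-1}|."""
    e = [abs(x - alpha) for x in iterates]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]
```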

Definition 1.2. [2] Let \(g \in C^{p}[a,b]\) have a fixed point \(\alpha\). If \(g^{(k)}(\alpha)=0\) for \(k = 1, 2, \ldots, p-1\) and \(g^{(p)}(\alpha) \neq 0\), then the sequence \(\{x_{n}\}\) generated by \(x_{n+1} = g(x_{n})\) is of order \(p\).

Algorithm 1.3. [10] For a given \(x_{0}\), Kang et al. compute the approximate solution \(x_{n+1}\) by the iteration scheme \begin{equation*} x_{n+1} = \frac{g\left( x_{n}\right) - x_{n} g^{\prime}(x_{n})}{1 - g^{\prime}(x_{n})}, \end{equation*} where \(g^{\prime}(x_{n}) \neq 1\). This scheme has convergence of order 2.
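A direct Python transcription of Algorithm 1.3 might read as follows (our sketch; the stopping rule is an assumption, since [10] does not fix one):

```python
import math

def kang_iteration(g, gprime, x0, tol=1e-15, max_iter=100):
    """Algorithm 1.3 (Kang et al. [10]): second-order iteration for x = g(x)."""
    x = x0
    for n in range(1, max_iter + 1):
        gp = gprime(x)                       # the scheme requires g'(x) != 1
        x_new = (g(x) - x * gp) / (1 - gp)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# e.g. for Example 4.2 below: kang_iteration(math.cos, lambda x: -math.sin(x), 1.0)
```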

2. New Iteration Method

In the fixed point iteration method, solving the nonlinear equation \(f\left( x\right) = 0\) is recast as finding \(x \in \mathbb{R}\) with \begin{equation*} x = g(x). \end{equation*} Let \(\alpha\) be the root of \(f\left( x \right) = 0\). We can write the functional equation of Algorithm 1.3 as, \begin{equation*} H_{g}(x) = \frac{g\left( x\right) - x g^{\prime}(x)}{1 - g^{\prime}(x)}, \hspace{5mm} g^{\prime}(x) \neq 1, \end{equation*} or, \begin{equation*} H_{g}(x) = x - \frac{x - g(x)}{1 - g^{\prime}(x)}. \end{equation*}
To get higher-order convergence, we introduce \(h(x)\) in the above, as follows. \begin{equation}\label{a1} H_{h}(x)= x - \frac{x - g(x)}{1 - g^{\prime}\big(x + (x - g(x))h(x)\big)}. \tag{2} \end{equation}
Then \(H_{h}(\alpha ) = \alpha\) and \(H_{h}^{\prime}(\alpha) = 0\). In order to make (2) more efficient, we choose \(h(x)\) such that \(H_{h}^{\prime \prime}(\alpha) = 0\). By using Mathematica we have, \begin{equation*} H_{h}^{\prime \prime}(\alpha ) = \frac{\big(1 - 2h(\alpha )(g^{\prime}(\alpha) - 1)\big)g^{\prime \prime}(\alpha)}{g^{\prime}(\alpha) - 1}. \end{equation*} Then, \(H_{h}^{\prime \prime}(\alpha) = 0\) gives, \begin{equation*} h(\alpha)=\frac{-1}{2(1-g^{\prime }(\alpha ))}. \end{equation*} So, if we take \(h\left(x\right) = \frac{-1}{2(1-g^{\prime }(x))}\) and substitute it in (2), we get \begin{equation*} H_{h}(x) = x - \frac{x - g(x)}{1 - g^{\prime}\!\left(x - \frac{x - g(x)}{2(1 - g^{\prime}(x))}\right)}. \end{equation*} This formulation allows us to suggest the following two-step iteration method for solving nonlinear equations, for a given \(x_{0}\).
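This choice of \(h\) can be checked symbolically. The SymPy sketch below (our addition) models \(g\) near its fixed point by a cubic Taylor polynomial with undetermined coefficients and confirms that the \(e\) and \(e^{2}\) terms of \(H_{h}(\alpha + e) - \alpha\) vanish, as the derivation above requires.

```python
import sympy as sp

alpha, e, t, g1, g2, g3 = sp.symbols('alpha e t g1 g2 g3')

# Cubic Taylor model of g about its fixed point: g(alpha) = alpha.
g = alpha + g1*(t - alpha) + g2*(t - alpha)**2/2 + g3*(t - alpha)**3/6
gp = sp.diff(g, t)                                    # g'

x = alpha + e                                         # x = alpha + e
y = x - (x - g.subs(t, x)) / (2*(1 - gp.subs(t, x)))  # inner point of H_h
H = x - (x - g.subs(t, x)) / (1 - gp.subs(t, y))      # H_h(x)

err = sp.series(H - alpha, e, 0, 3).removeO()
print(sp.simplify(err.coeff(e, 1)))                   # expect 0: H_h'(alpha) = 0
print(sp.simplify(err.coeff(e, 2)))                   # expect 0: H_h''(alpha) = 0
```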

Algorithm 2.1

\begin{equation} x_{n+1} = x_{n} - \frac{x_{n} - g(x_{n})}{1 - g^{\prime}(y_{n})}, \quad n = 0, 1, 2, \ldots, \tag{3} \end{equation}
where
\begin{equation} y_{n} = x_{n} - \frac{x_{n} - g(x_{n})}{2(1 - g^{\prime}(x_{n}))}, \quad g^{\prime}(x_{n}) \neq 1. \tag{4} \end{equation}
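In code, Algorithm 2.1 is a direct transcription of (3) and (4); the sketch below (ours, with an assumed stopping rule) evaluates \(g\) once and \(g^{\prime}\) twice per step.

```python
import math

def nim(g, gprime, x0, tol=1e-15, max_iter=100):
    """Algorithm 2.1: two-step iteration for x = g(x), order three."""
    x = x0
    for n in range(1, max_iter + 1):
        gx = g(x)                                  # one g evaluation per step
        y = x - (x - gx) / (2 * (1 - gprime(x)))   # predictor, eq. (4)
        x_new = x - (x - gx) / (1 - gprime(y))     # corrector, eq. (3)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Example 4.2: g(x) = cos(x), g'(x) = -sin(x)
print(nim(math.cos, lambda x: -math.sin(x), 1.0))
```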

3. Convergence Analysis of Algorithm

Theorem 3.1. Let \(f : D \subset \mathbb{R} \rightarrow \mathbb{R}\) be a nonlinear function on an open interval \(D\) such that \(f\left(x\right) = 0\) (or equivalently \(x = g\left(x\right)\)) has a simple root \(\alpha \in D\), where \(g : D \subset \mathbb{R} \rightarrow \mathbb{R}\) is sufficiently smooth in a neighborhood of the root \(\alpha\). Then the order of convergence of Algorithm 2.1 is at least \(3\), where \(c_{k} = \frac{g^{(k)}(\alpha)}{k!\,(1 - g^{\prime}(\alpha))}\), \(k = 2, 3, \ldots\).

Proof. As \(g\left(\alpha\right) = \alpha\), let \(x_{n} = \alpha + e_{n}\) and \(x_{n + 1} = \alpha + e_{n+1}\). By Taylor's expansion, we have, \begin{equation*} g(x_{n}) = g\left(\alpha\right) + e_{n}g^{\prime}(\alpha) + \frac{e_{n}^{2}}{2!}g^{\prime\prime}(\alpha) + \frac{e_{n}^{3}}{3!} g^{\prime \prime \prime}(\alpha ) + O(e_{n}^{4}). \end{equation*} This implies that,

\begin{equation}\label{eq:3} x_{n} - g(x_{n}) = (1 - g^{\prime }(\alpha))\left(e_{n}-c_{2}e_{n}^{2}-c_{3}e_{n}^{3}+O(e_{n}^{4})\right). \tag{5} \end{equation}
Similarly, \begin{equation*} 1 - g^{\prime}(x_{n}) = (1 - g^{\prime}(\alpha))\left(1 - 2c_{2}e_{n} - 3c_{3}e_{n}^{2} - 4c_{4}e_{n}^{3} + O(e_{n}^{4})\right). \end{equation*} Substituting these values in (4), we obtain \begin{eqnarray} y_{n} &=& x_{n}-\frac{(1 - g^{\prime }(\alpha))\left(e_{n}-c_{2}e_{n}^{2}-c_{3}e_{n}^{3}+O(e_{n}^{4})\right)}{2(1 - g^{\prime}(\alpha))\left(1 - 2c_{2}e_{n} - 3c_{3}e_{n}^{2} - 4c_{4}e_{n}^{3} + O(e_{n}^{4})\right)} \notag \\ &=& \alpha + e_{n} - \frac{1}{2}\cdot\frac{e_{n}-c_{2}e_{n}^{2}-c_{3}e_{n}^{3}+O(e_{n}^{4})}{1-\left(2c_{2}e_{n}+3c_{3}e_{n}^{2}+4c_{4}e_{n}^{3}+O(e_{n}^{4})\right)}. \notag \end{eqnarray} Expanding the last factor as a geometric series, we get \begin{eqnarray*} y_{n} &=& \alpha + e_{n} - \frac{1}{2}\left(e_{n}-c_{2}e_{n}^{2}-c_{3}e_{n}^{3}+O(e_{n}^{4})\right)\left(1+2c_{2}e_{n}+(3c_{3}+4c_{2}^{2})e_{n}^{2}+O(e_{n}^{3})\right) \\ &=& \alpha + \frac{1}{2}\left(e_{n} - c_{2}e_{n}^{2} - 2(c_{2}^{2} + c_{3})e_{n}^{3}\right) + O(e_{n}^{4}). \end{eqnarray*} This implies that,
\begin{eqnarray*} g^{\prime }(y_{n}) &=& g^{\prime}\!\left(\alpha + \frac{1}{2}\left(e_{n}-c_{2}e_{n}^{2}-2(c_{2}^{2}+c_{3})e_{n}^{3}+O(e_{n}^{4})\right)\right) \\ &=& g^{\prime}(\alpha ) + \frac{1}{2}\left(e_{n} - c_{2}e_{n}^{2} - 2(c_{2}^{2} + c_{3})e_{n}^{3} + O(e_{n}^{4})\right) g^{\prime\prime}(\alpha) \\ && + \frac{1}{2}\left[\frac{1}{2}\left(e_{n} - c_{2}e_{n}^{2} - 2(c_{2}^{2} + c_{3})e_{n}^{3} + O(e_{n}^{4})\right)\right]^{2}g^{\prime\prime\prime}(\alpha ) + \cdots. \end{eqnarray*}
Thus, we get,
\begin{equation}\label{4} 1 - g^{\prime}(y_{n}) = (1 - g^{\prime}(\alpha))\left(1 - c_{2}e_{n} + \left(c_{2}^{2} - \frac{3}{4}c_{3}\right)e_{n}^{2} + O(e_{n}^{3})\right). \tag{6} \end{equation}
Using (5) and (6) in (3), we have,
\begin{equation*} e_{n+1} = e_{n} - \frac{e_{n} - c_{2}e_{n}^{2} - c_{3}e_{n}^{3} + O(e_{n}^{4})}{1 - c_{2}e_{n} + \left(c_{2}^{2} - \frac{3}{4}c_{3}\right)e_{n}^{2} + O(e_{n}^{3})}. \end{equation*}
Expanding the denominator as a geometric series, \(\frac{1}{1-w} = 1 + w + w^{2} + \cdots\) with \(w = c_{2}e_{n} - (c_{2}^{2} - \frac{3}{4}c_{3})e_{n}^{2} + O(e_{n}^{3})\), we get,
\begin{eqnarray*} e_{n+1} &=& e_{n} - \left(e_{n} - c_{2}e_{n}^{2} - c_{3}e_{n}^{3} + O(e_{n}^{4})\right)\left(1 + c_{2}e_{n} + \frac{3}{4}c_{3}e_{n}^{2} + O(e_{n}^{3})\right) \\ &=& \left(c_{2}^{2} + \frac{1}{4}c_{3}\right)e_{n}^{3} + O(e_{n}^{4}), \end{eqnarray*}
which shows that Algorithm 2.1 converges with order at least three.
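As a high-precision sanity check on this error equation (our addition, using the mpmath package), one can track the ratio \(e_{n+1}/e_{n}^{3}\) for Example 4.2 below, \(g(x) = \cos x\), and compare it with the predicted constant \(c_{2}^{2} + \frac{1}{4}c_{3}\):

```python
from mpmath import mp, mpf, cos, sin, findroot

mp.dps = 60                                           # 60 significant digits
alpha = findroot(lambda x: x - cos(x), mpf('0.74'))   # fixed point of cos

# c_k = g^(k)(alpha) / (k! (1 - g'(alpha))) with g = cos, so 1 - g' = 1 + sin
c2 = -cos(alpha) / (2 * (1 + sin(alpha)))
c3 = sin(alpha) / (6 * (1 + sin(alpha)))
print('predicted constant:', c2**2 + c3/4)

x = mpf('1.0')
for n in range(3):
    e = x - alpha
    y = x - (x - cos(x)) / (2 * (1 + sin(x)))         # eq. (4): g'(x) = -sin x
    x = x - (x - cos(x)) / (1 + sin(y))               # eq. (3)
    print('e_{n+1} / e_n^3 =', (x - alpha) / e**3)    # approaches the constant
```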

4. Comparison

Comparison of the Fixed Point Method (FPM), the Kang Iteration Method (KIM) of Algorithm 1.3, and our New Iteration Method (NIM) of Algorithm 2.1 is shown in the following tables. Here \(N\) denotes the number of iterations, \(N_{f}\) the number of functional evaluations, and \(x_{0}\) the initial guess; the roots are corrected up to seventeen decimal places.

Example 4.1 \(f(x)=x^{2}-23x-135\), \(g(x)=23+\frac{135}{x}\)

Table 1. Comparison of FPM, NIM, KIM
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
FPM \(24\) \(24\) \(2\) \(2.420536e-15\) \(27.84778272427181476 \)
NIM \(7\) \(14\) \(2\) \(2.781849e-20\) \(27.84778272427181484 \)
KIM \(4\) \(12\) \(2\) \(2.647181e-16\) \(27.84778272427181483 \)
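The entries of these tables can be reproduced qualitatively with a small driver like the one below (our reconstruction; the paper does not state its stopping rule or its convention for counting \(N_{f}\), so the counts need not match exactly). It is shown here for Example 4.1.

```python
def fpm_step(g, gp, x):                  # fixed point method
    return g(x)

def kim_step(g, gp, x):                  # Algorithm 1.3
    return x - (x - g(x)) / (1 - gp(x))

def nim_step(g, gp, x):                  # Algorithm 2.1
    y = x - (x - g(x)) / (2 * (1 - gp(x)))
    return x - (x - g(x)) / (1 - gp(y))

def run(step, g, gp, x0, tol=1e-13, max_iter=1000):
    """Iterate until successive approximations agree to within tol."""
    x = x0
    for n in range(1, max_iter + 1):
        x_new = step(g, gp, x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# Example 4.1: g(x) = 23 + 135/x, g'(x) = -135/x**2, x0 = 2
g, gp = (lambda x: 23 + 135 / x), (lambda x: -135 / x**2)
for name, step in [('FPM', fpm_step), ('KIM', kim_step), ('NIM', nim_step)]:
    root, n = run(step, g, gp, 2.0)
    print(name, n, root)
```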

Example 4.2 \(f(x)=x-\cos x,\) \(g(x)=\cos x\)

Table 2. Comparison of FPM, NIM, KIM
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
FPM \(91\) \(91\) \(6\) \(1.376330e-16\) \(0.73908513321516064\)
NIM \(10\) \(20\) \(6\) \(1.083243e-29\) \(0.73908513321516064\)
KIM \(5\) \(15\) \(6\) \(1.809632e-36\) \(0.73908513321516064 \)

Example 4.3 \(f(x)=x^{3}+4x^{2}+8x+8\), \(g(x)=-1-\frac{1}{2}x^{2}-\frac{1}{8}x^{3}\)

Table 3. Comparison of FPM, NIM, KIM
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
FPM \(50\) \(50\) \(-1.7\) \(1.416263e-15\) \(-1.99999999999999965 \)
NIM \(5\) \(10\) \(-1.7\) \(7.836283e-27\) \(-2.00000000000000000 \)
KIM \(3\) \(9\) \(-1.7\) \(4.983507e-25\) \(-2.00000000000000000 \)

Example 4.4 \(f(x)=\ln (x-2)+x\), \(g(x)=2+e^{-x}\)

Table 4. Comparison of FPM, NIM, KIM
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
FPM \(18\) \(18\) \(0.1\) \(1.163802e-15\) \(2.12002823898764110 \)
NIM \(5\) \(10\) \(0.1\) \(5.972968e-22\) \(2.12002823898764123\)
KIM \(3\) \(9\) \(0.1\) \(8.594812e-22\) \(2.12002823898764123 \)

Example 4.5 \(f(x)=x^{2}\sin x-\cos x\), \(g(x)=\sqrt{\frac{1}{\tan x}}\)

Table 5. Comparison of FPM, NIM, KIM
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
FPM \(417\) \(247\) \(2\) \(2.668900e-16\) \(0.89520604538423175 \)
NIM \(7\) \(14\) \(2\) \(3.195785e-31\) \(0.89520604538423175 \)
KIM \(4\) \(12\) \(2\) \(1.746045e-23\) \(0.89520604538423175 \)

Example 4.6 \(f(x)=x^{2}-5x-16\), \(g(x)=5+\frac{16}{x}\)

Table 6. Comparison of FPM, NIM, KIM
Methods \(N\) \(N_{f}\) \(x_{0}\) \(f(x_{n+1})\) \(x_{n+1}\)
FPM \(34\) \(34\) \(1\) \(6.417745e-16\) \(7.21699056602830184 \)
NIM \(7\) \(14\) \(1\) \(2.984439e-27\) \(7.21699056602830191 \)
KIM \(4\) \(12\) \(1\) \(2.797394e-26\) \(7.21699056602830191 \)

5. Conclusions

A new third-order iteration method for solving nonlinear equations has been introduced. The performance of NIM has been illustrated on several examples; it is much better in comparison to the fixed point method and the method presented in [10].

Competing Interests

The authors do not have any competing interests in the manuscript.

References

  1. Biazar, J., & Amirteimoori, A. (2006). An improvement to the fixed point iterative method. Applied Mathematics and Computation, 182(1), 567-571.
  2. Babolian, E., & Biazar, J. (2002). Solution of nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 132(1), 167-172.
  3. Kou, J. (2007). The improvements of modified Newton's method. Applied Mathematics and Computation, 189(1), 602-609.
  4. Gutiérrez, J. M., & Hernández, M. A. (2001). An acceleration of Newton's method: Super-Halley method. Applied Mathematics and Computation, 117(2-3), 223-239.
  5. Abbasbandy, S. (2003). Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method. Applied Mathematics and Computation, 145(2-3), 887-893.
  6. Weerakoon, S., & Fernando, T. G. I. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87-93.
  7. Homeier, H. H. H. (2003). A modified Newton method for rootfinding with cubic convergence. Journal of Computational and Applied Mathematics, 157(1), 227-230.
  8. Frontini, M., & Sormani, E. (2003). Some variant of Newton's method with third-order convergence. Applied Mathematics and Computation, 140(2-3), 419-426.
  9. Isaacson, E., & Keller, H. B. (2012). Analysis of Numerical Methods. Courier Corporation.
  10. Kang, S. M., Rafiq, A., & Kwun, Y. C. (2013). A new second-order iteration method for solving nonlinear equations. Abstract and Applied Analysis, Volume 2013, Article ID 487060. Hindawi.