1. Introduction
Finding a root of a nonlinear equation of the form \(f(x)=0\) is one of the oldest
and most basic problems in numerical analysis. Newton’s method is among the oldest and
most powerful formulas for approximating such a root; it has
second-order convergence. Many modifications of Newton’s method have been made
to increase its order of convergence using various techniques. Recently, many
iterative methods with higher-order convergence have been established using
techniques such as Taylor series, Adomian decomposition, homotopy and
modified homotopy, decomposition and modified decomposition, interpolation,
and quadrature rules; see [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
and the references therein. Chun [3] introduced some multi-step iterative
methods using Adomian decomposition; these methods require higher-order
derivatives. Later, Noor [4, 5] used a different technique to establish
multi-step iterative methods that do not require higher-order derivatives.
In this paper, some numerical methods based on a decomposition technique are
proposed for solving nonlinear algebraic equations. For this purpose, we
write the nonlinear equation as an equivalent coupled system of equations and use
the technique introduced by Gejji and Jafari [9]. Recently, several iterative
methods have been suggested by writing nonlinear equations as equivalent
systems of equations and then applying different techniques. In Section 3, we
give a detailed proof of the convergence of our newly established iterative
methods. In Section 4, numerical results are given to compare
these algorithms with some classical methods.
2. Iterative Methods
Consider the nonlinear equation
\begin{equation}
f(x)=0;\;\; x\in\mathbb{R},
\label{2.1}
\end{equation}
where we assume that \(\alpha\) is a simple root of Eq. (1) and \(\gamma\) is an
initial guess sufficiently close to \(\alpha\). We can write Eq. (1) as the
following coupled system:
\begin{equation}
f(\gamma )+(x-\gamma )f^{\prime }(\frac{x+\gamma }{2})+g(x)=0 ,
\end{equation}
\begin{equation}
g(x)=f(x)-f(\gamma )-(x-\gamma )f^{\prime }(\frac{x+\gamma }{2}).
\end{equation}
From Eq. (2), we have
\[
x=\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}-\frac{g(x)}{%
f^{\prime }(\frac{x+\gamma }{2})}.
\]
Let
\begin{equation}
x=c+N(x),
\end{equation}
where
\begin{equation}
c=\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})},
\end{equation}
and
\begin{equation}
N(x)=-\frac{g(x)}{f^{\prime }(\frac{x+\gamma }{2})}.
\end{equation}
Now, we construct a sequence of higher-order iterative methods by applying the
modified decomposition technique introduced by Gejji and Jafari [8]. In this
technique, the solution of Eq. (4) is written as an infinite series:
\begin{equation}
x=\sum\limits_{i=0}^{\infty }x_{i},
\end{equation}
\noindent and, following Gejji and Jafari [8], we decompose the nonlinear
function \(N\) as
\begin{equation}
N(\sum\limits_{i=0}^{\infty }x_{i})=N(x_{0})-\sum_{i=1}^{\infty
}\{N(\sum\limits_{j=0}^{i}x_{j})-N(\sum\limits_{j=0}^{i-1}x_{j})\}. \label{2.8}
\end{equation}
Thus from Eqs. (4), (7) and (8) we have
\[
\sum\limits_{i=0}^{\infty }x_{i}=x_{0}+N(x_{0})+\sum_{i=1}^{\infty
}\{N(\sum\limits_{j=0}^{i}x_{j})-N(\sum\limits_{j=0}^{i-1}x_{j})\}.
\]
So we have the iteration scheme
\begin{eqnarray*}
x_{0} &=&c, \\
x_{1} &=&N(x_{0}), \\
x_{2} &=&N(x_{0}+x_{1})-N(x_{0}), \\
&&\vdots \\
x_{n+1} &=&N(x_{0}+x_{1}+\cdots +x_{n})-N(x_{0}+x_{1}+\cdots +x_{n-1}),\qquad n=1,2,\ldots
\end{eqnarray*}
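To make the scheme concrete, the following is a minimal Python sketch of the
decomposition iteration, assuming a generic map \(N\) and constant \(c\) are
available in closed form; the fixed-point problem \(x=1+0.2\sin x\) used in the
example is purely illustrative and is not one of the methods derived here.
\begin{verbatim}
import math

# Sketch of the decomposition scheme: returns the terms x_0, ..., x_{n-1}
# whose partial sums approximate the solution of x = c + N(x).
def decomposition_terms(c, N, n_terms):
    terms = [c, N(c)]                   # x_0 = c, x_1 = N(x_0)
    s_prev, s = c, c + N(c)             # successive partial sums
    for _ in range(n_terms - 2):
        terms.append(N(s) - N(s_prev))  # x_{k+1} = N(s_k) - N(s_{k-1})
        s_prev, s = s, s + terms[-1]
    return terms

# Illustrative fixed-point problem x = 1 + 0.2*sin(x):
approx = sum(decomposition_terms(1.0, lambda t: 0.2 * math.sin(t), 8))
# approx now satisfies approx = 1 + 0.2*sin(approx) to high accuracy
\end{verbatim}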
For the one-term approximation,
\begin{eqnarray*}
x &\approx &x_{0} \\
&=&c \\
&=&\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}.
\end{eqnarray*}
From the above relation, approximating the unknown \(x\) in the argument of
\(f^{\prime }\) by the Newton predictor, we can write the algorithm as follows.
Algorithm 2.1: For any initial value \(x_{0}\), we compute the approximate
solution \(x_{n+1}\) by the iteration scheme
\begin{eqnarray*}
y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})} \\
x_{n+1} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})}.
\end{eqnarray*}
This algorithm has third-order convergence and was established by
Frontini and Sormani [1].
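A minimal Python sketch of Algorithm 2.1 follows; it assumes the caller
supplies \(f\) and its derivative \(fp\), and the tolerance and iteration cap
are illustrative choices, not taken from the paper.
\begin{verbatim}
# Sketch of Algorithm 2.1 (midpoint Newton method).
def alg_2_1(f, fp, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / fp(x)                # Newton predictor
        x_new = x - f(x) / fp((x + y) / 2)  # midpoint corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the root of cos(x) - x = 0 (f_3 of Section 4):
# import math
# alg_2_1(lambda t: math.cos(t) - t, lambda t: -math.sin(t) - 1, 1.7)
# returns approximately 0.739085133215161
\end{verbatim}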
When
\begin{eqnarray*}
x &\approx &x_{0}+x_{1} \\
&=&\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}-\frac{g(x_{0})}{f^{\prime }(\frac{x+\gamma }{2})},
\end{eqnarray*}
we note from Eq. (3) that
\[
g(x_{0})=f(x_{0}),
\]
since \(x_{0}-\gamma =-f(\gamma )/f^{\prime }(\frac{x_{0}+\gamma }{2})\), so the
last two terms of \(g(x_{0})\) cancel. Substituting this in the above relation,
we get
\[
x=\gamma -\frac{f(\gamma )}{f^{\prime }(\frac{x+\gamma }{2})}-\frac{f(x_{0})}{f^{\prime }(\frac{x_{0}+\gamma }{2})}.
\]
Using this relation, we suggest the following three-step iterative method for
solving Eq. (1).
Algorithm 2.2: For any initial value \(x_{0}\), we compute the approximate
solution \(x_{n+1}\) by the iteration scheme
\begin{eqnarray*}
\text{Predictor steps:}\;\;\;\; y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})} \\
z_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})}
\end{eqnarray*}
\[
\text{Corrector step:}\;\;\;\; x_{n+1}=z_{n}-\frac{f(z_{n})}{f^{\prime }(\frac{x_{n}+z_{n}}{2})}
\]
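The same pattern extends directly to Algorithm 2.2; the sketch below again
assumes user-supplied \(f\) and \(fp\) and illustrative stopping parameters.
\begin{verbatim}
# Sketch of Algorithm 2.2 (two predictor steps and one corrector).
def alg_2_2(f, fp, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / fp(x)                # predictor: Newton step
        z = x - f(x) / fp((x + y) / 2)      # predictor: midpoint step
        x_new = z - f(z) / fp((x + z) / 2)  # corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
\end{verbatim}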
When
\begin{eqnarray*}
x &\approx &x_{0}+x_{1}+x_{2} \\
&=&x_{0}+N(x_{0}+x_{1}) \\
&=&x_{0}-\frac{g(x_{0}+x_{1})}{f^{\prime }(\frac{x_{0}+x_{1}+\gamma }{2})} \\
&=&x_{0}-\frac{f(x_{0}+x_{1})+f(x_{0})}{f^{\prime }(\frac{x_{0}+x_{1}+\gamma }{2})},
\end{eqnarray*}
where the last step uses \(g(x_{0}+x_{1})=f(x_{0}+x_{1})+f(x_{0})\), which
follows from Eq. (3) in the same way as before. From this relation, we
formulate a four-step iterative method for solving Eq. (1) as follows.
Algorithm 2.3: For any initial value \(x_{0}\), we compute the approximate
solution \(x_{n+1}\) by the iteration scheme
\begin{eqnarray*}
\text{Predictor steps:}\;\;\;\; y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})} \\
z_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})} \\
u_{n} &=&z_{n}-\frac{f(z_{n})}{f^{\prime }(\frac{x_{n}+z_{n}}{2})}
\end{eqnarray*}
\[
\text{Corrector step:}\;\;\;\; x_{n+1}=z_{n}-\frac{f(u_{n})+f(z_{n})}{f^{\prime }(\frac{x_{n}+u_{n}}{2})}
\]
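Algorithm 2.3 adds one more function evaluation per iteration; as before, the
Python sketch below uses assumed defaults and is not a definitive
implementation.
\begin{verbatim}
# Sketch of Algorithm 2.3 (three predictor steps and one corrector).
def alg_2_3(f, fp, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / fp(x)                         # Newton step
        z = x - f(x) / fp((x + y) / 2)               # midpoint step
        u = z - f(z) / fp((x + z) / 2)               # third predictor
        x_new = z - (f(u) + f(z)) / fp((x + u) / 2)  # corrector
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
\end{verbatim}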
3. Convergence Analysis
In this section, we discuss in detail the convergence of Algorithms 2.2 and
2.3 established above.
Theorem 3.1.
Let \(I\subset \mathbb{R}\) be an open interval, let
\(f:I\rightarrow \mathbb{R}\) be sufficiently differentiable, and let
\(\alpha \in I\) be a simple zero of \(f(x)=0\). If \(x_{0}\) is sufficiently
close to \(\alpha\), then the three-step iterative method defined by
Algorithm 2.2 has fourth-order convergence.
Proof.
Expanding \(f(x_{n})\) by Taylor’s series about \(\alpha\), we have
\begin{equation}
f(x_{n})=f^{\prime }(\alpha )(e_{n}+c_{2}e_{n}^{2}+c_{3}e_{n}^{3}+c_{4}e_{n}^{4}+c_{5}e_{n}^{5}+\cdots ), \label{3.1}
\end{equation}
where \(c_{k}=\frac{1}{k!}\frac{f^{(k)}(\alpha )}{f^{\prime }(\alpha )}\) and \(e_{n}=x_{n}-\alpha\). Similarly,
\begin{equation}
f^{\prime }(x_{n})=f^{\prime }(\alpha )(1+2c_{2}e_{n}+3c_{3}e_{n}^{2}+4c_{4}e_{n}^{3}+5c_{5}e_{n}^{4}+6c_{6}e_{n}^{5}+\cdots ). \label{3.2}
\end{equation}
For the first predictor step,
\[
y_{n}=x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})}.
\]
From the above equations, we get
\begin{align}
y_{n} &=\alpha +c_{2}e_{n}^{2}+(2c_{3}-2c_{2}^{2})e_{n}^{3}+(3c_{4}-7c_{2}c_{3}+4c_{2}^{3})e_{n}^{4}+(4c_{5}-10c_{2}c_{4} \nonumber \\
&\quad -6c_{3}^{2}+20c_{3}c_{2}^{2}-8c_{2}^{4})e_{n}^{5}+\cdots \label{3.3}
\end{align}
Now, expanding \(f^{\prime }(\frac{x_{n}+y_{n}}{2})\) by Taylor’s series about
\(\alpha\),
\begin{align}
f^{\prime }(\frac{x_{n}+y_{n}}{2}) &=f^{\prime }(\alpha )\{1+c_{2}e_{n}+(c_{2}^{2}+\frac{3}{4}c_{3})e_{n}^{2}+(\frac{7}{2}c_{2}c_{3}-2c_{2}^{3}+\frac{1}{2}c_{4})e_{n}^{3}\nonumber \\
&\quad +(3c_{3}^{2}-\frac{37}{4}c_{3}c_{2}^{2}+\frac{9}{2}c_{2}c_{4}+4c_{2}^{4}+\frac{5}{16}c_{5})e_{n}^{4}+\cdots \} \label{3.4}
\end{align}
Dividing Eq. (9) by Eq. (12), we get
\begin{align}
\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})} &=e_{n}+(-c_{2}^{2}+\frac{1}{4}c_{3})e_{n}^{3}+(-\frac{15}{4}c_{2}c_{3}+3c_{2}^{3}+\frac{1}{2}c_{4})e_{n}^{4}\nonumber \\
&\quad +(-\frac{51}{16}c_{3}^{2}+\frac{27}{2}c_{3}c_{2}^{2}-5c_{2}c_{4}-6c_{2}^{4}+\frac{11}{16}c_{5})e_{n}^{5}+\cdots \label{3.5}
\end{align}
Now, for the second predictor step,
\[
z_{n}=x_{n}-\frac{f(x_{n})}{f^{\prime }(\frac{x_{n}+y_{n}}{2})}.
\]
Substituting the above expansion, we get
\begin{align}
z_{n} &=\alpha +(c_{2}^{2}-\frac{1}{4}c_{3})e_{n}^{3}+(\frac{15}{4}c_{2}c_{3}-3c_{2}^{3}-\frac{1}{2}c_{4})e_{n}^{4}\nonumber \\
&\quad +(\frac{51}{16}c_{3}^{2}-\frac{27}{2}c_{3}c_{2}^{2}+5c_{2}c_{4}+6c_{2}^{4}-\frac{11}{16}c_{5})e_{n}^{5}+\cdots \label{3.6}
\end{align}
Expanding \(f(z_{n})\) and \(f^{\prime }(\frac{x_{n}+z_{n}}{2})\) by Taylor’s
series about \(\alpha,\)
\begin{align}
f(z_{n}) &=f^{\prime }(\alpha )\{(c_{2}^{2}-\frac{1}{4}c_{3})e_{n}^{3}+(\frac{15}{4}c_{2}c_{3}-3c_{2}^{3}-\frac{1}{2}c_{4})e_{n}^{4}\nonumber \\
&\quad +(\frac{51}{16}c_{3}^{2}-\frac{27}{2}c_{3}c_{2}^{2}+5c_{2}c_{4}+6c_{2}^{4}-\frac{11}{16}c_{5})e_{n}^{5}+\cdots \} \label{3.7}
\end{align}
\begin{align}
f^{\prime }(\frac{x_{n}+z_{n}}{2}) &=f^{\prime }(\alpha )\{1+c_{2}e_{n}+\frac{3}{4}c_{3}e_{n}^{2}+(c_{2}^{3}-\frac{1}{4}c_{2}c_{3}+\frac{1}{2}c_{4})e_{n}^{3}+(\frac{21}{4}c_{3}c_{2}^{2}-3c_{2}^{4}\nonumber \\
&\quad -\frac{1}{2}c_{2}c_{4}+\frac{5}{16}c_{5}-\frac{3}{8}c_{3}^{2})e_{n}^{4}+\cdots \} \label{3.8}
\end{align}
Dividing Eq. (15) by Eq. (16), we have
\begin{align*}
\frac{f(z_{n})}{f^{\prime }(\frac{x_{n}+z_{n}}{2})} &=(c_{2}^{2}-\frac{1}{4}c_{3})e_{n}^{3}+(4c_{2}c_{3}-4c_{2}^{3}-\frac{1}{2}c_{4})e_{n}^{4}+(\frac{27}{8}c_{3}^{2}-\frac{73}{4}c_{3}c_{2}^{2}\\
&\quad +\frac{11}{2}c_{2}c_{4}+10c_{2}^{4}-\frac{11}{16}c_{5})e_{n}^{5}+\cdots
\end{align*}
The corrector step of Algorithm 2.2 is
\[
x_{n+1}=z_{n}-\frac{f(z_{n})}{f^{\prime }(\frac{x_{n}+z_{n}}{2})}.
\]
By substitution and simplification, we have
\begin{equation}
x_{n+1}=\alpha +(c_{2}^{3}-\frac{1}{4}c_{2}c_{3})e_{n}^{4}+(\frac{19}{4}c_{3}c_{2}^{2}-\frac{1}{2}c_{2}c_{4}-4c_{2}^{4}-\frac{3}{16}c_{3}^{2})e_{n}^{5}+\cdots \label{3.9}
\end{equation}
Hence Algorithm 2.2 has fourth-order convergence.
Theorem 3.2.
Let \(f:I\subset \mathbb{R}\rightarrow \mathbb{R}\) for an open interval \(I\),
and let \(\alpha \in I\) be a simple zero of \(f(x)=0\). If \(f\) is
sufficiently differentiable and \(x_{0}\) is sufficiently close to \(\alpha\),
then the four-step iterative method defined by Algorithm 2.3 has fifth-order
convergence and satisfies the error equation
\[
x_{n+1}=\alpha +(-\frac{1}{4}c_{3}c_{2}^{2}+c_{2}^{4})e_{n}^{5}+O(e_{n}^{6}),\text{ where }e_{n}=x_{n}-\alpha .
\]
Proof.
From Eq. (17), we have
\[
u_{n}=\alpha +(c_{2}^{3}-\frac{1}{4}c_{2}c_{3})e_{n}^{4}+(\frac{19}{4}c_{3}c_{2}^{2}-\frac{1}{2}c_{2}c_{4}-4c_{2}^{4}-\frac{3}{16}c_{3}^{2})e_{n}^{5}+\cdots
\]
Expanding \(f(u_{n})\) and \(f^{\prime }(\frac{x_{n}+u_{n}}{2})\) by Taylor’s
series about \(\alpha\),
\begin{equation}
f(u_{n})=f^{\prime }(\alpha )\{(c_{2}^{3}-\frac{1}{4}c_{2}c_{3})e_{n}^{4}+(\frac{19}{4}c_{3}c_{2}^{2}-\frac{1}{2}c_{2}c_{4}-4c_{2}^{4}-\frac{3}{16}c_{3}^{2})e_{n}^{5}+\cdots \}, \label{3.10}
\end{equation}
\begin{equation}
f^{\prime }(\frac{x_{n}+u_{n}}{2})=f^{\prime }(\alpha )\{1+c_{2}e_{n}+\frac{3}{4}c_{3}e_{n}^{2}+\frac{1}{2}c_{4}e_{n}^{3}+(c_{2}^{4}-\frac{1}{4}c_{3}c_{2}^{2}+\frac{5}{16}c_{5})e_{n}^{4}+\cdots \}. \label{3.11}
\end{equation}
Now, the corrector step gives
\[
x_{n+1}=z_{n}-\frac{f(u_{n})+f(z_{n})}{f^{\prime }(\frac{x_{n}+u_{n}}{2})},
\]
and substituting the above values and simplifying, we have
\[
x_{n+1}=\alpha +(-\frac{1}{4}c_{3}c_{2}^{2}+c_{2}^{4})e_{n}^{5}+O(e_{n}^{6}).
\]
Hence Algorithm 2.3 has fifth-order convergence.
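The predicted orders can be checked numerically. The sketch below estimates
the observed order as \(p\approx \log |e_{n+1}|/\log |e_{n}|\) for Algorithm
2.3 applied to \(f_{1}\) of Section 4; the use of the mpmath library, the
precision, and the iteration count are our own illustrative choices, and
swapping in the Algorithm 2.2 step should give estimates near 4 instead of 5.
\begin{verbatim}
from mpmath import mp, mpf, sin, cos, log, findroot

mp.dps = 500  # enough digits that rounding does not mask the order

f  = lambda x: sin(x)**2 - x**2 + 1   # f_1 from Section 4
fp = lambda x: 2*sin(x)*cos(x) - 2*x

alpha = findroot(f, mpf('1.4'))       # high-precision reference root

def alg_2_3_step(x):
    y = x - f(x)/fp(x)
    z = x - f(x)/fp((x + y)/2)
    u = z - f(z)/fp((x + z)/2)
    return z - (f(u) + f(z))/fp((x + u)/2)

x, errors = mpf(1), []
for _ in range(4):
    x = alg_2_3_step(x)
    errors.append(abs(x - alpha))

# Once e_n is small, log e_{n+1} / log e_n approaches the order p.
for e0, e1 in zip(errors, errors[1:]):
    print(float(log(e1) / log(e0)))   # estimates approaching 5
\end{verbatim}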
4. Applications
In this section, we present some numerical examples to examine the validity
and efficiency of our newly developed algorithms, namely Algorithm 2.2 and
Algorithm 2.3. We also compare these algorithms with Newton’s
method (NM), Abbasbandy’s method (AM), Householder’s method (HHM), Halley’s
method (HM), Chun’s method (CM) [3], and Noor’s method (NR) [4]. We use
\(\epsilon =10^{-25}\) and the following stopping criteria:
\begin{eqnarray*}
(i)\;\;\;\; |x_{n}-x_{n-1}| &<&\epsilon , \\
(ii)\;\;\;\; |f(x_{n})| &<&\epsilon .
\end{eqnarray*}
We consider the following nonlinear scalar equations for comparison:
\begin{eqnarray*}
f_{1}(x) &=&\sin ^{2}x-x^{2}+1=0 \\
f_{2}(x) &=&x^{2}-e^{x}-3x+2=0 \\
f_{3}(x) &=&\cos x-x=0 \\
f_{4}(x) &=&(x-1)^{3}-1=0 \\
f_{5}(x) &=&x^{3}-10=0 \\
f_{6}(x) &=&e^{x^{2}+7x-30}-1=0.
\end{eqnarray*}
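For reference, the following Python sketch reproduces this experimental setup
for Algorithm 2.2, assuming the mpmath library for extended precision; the
hand-coded derivatives and the 100-iteration safety cap are our own choices.
\begin{verbatim}
from mpmath import mp, mpf, sin, cos, exp

mp.dps = 60                # working precision (illustrative)
eps = mpf('1e-25')         # stopping tolerance from this section

# (f, f', x_0) triples for f_1, ..., f_6, derivatives worked out by hand.
tests = [
    (lambda x: sin(x)**2 - x**2 + 1,
     lambda x: 2*sin(x)*cos(x) - 2*x, mpf(1)),
    (lambda x: x**2 - exp(x) - 3*x + 2,
     lambda x: 2*x - exp(x) - 3, mpf(2)),
    (lambda x: cos(x) - x, lambda x: -sin(x) - 1, mpf('1.7')),
    (lambda x: (x - 1)**3 - 1, lambda x: 3*(x - 1)**2, mpf('1.5')),
    (lambda x: x**3 - 10, lambda x: 3*x**2, mpf('1.5')),
    (lambda x: exp(x**2 + 7*x - 30) - 1,
     lambda x: (2*x + 7)*exp(x**2 + 7*x - 30), mpf('3.5')),
]

def alg_2_2_step(f, fp, x):
    y = x - f(x)/fp(x)
    z = x - f(x)/fp((x + y)/2)
    return z - f(z)/fp((x + z)/2)

for k, (f, fp, x0) in enumerate(tests, start=1):
    x, n = x0, 0
    while n < 100:         # safety cap on the iteration count
        x_new = alg_2_2_step(f, fp, x)
        n += 1
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            break
        x = x_new
    print(f"f_{k}: {n} iterations, x_n = {x_new}, f(x_n) = {f(x_new)}")
\end{verbatim}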
Comparison Table
\[
\begin{array}{ccccc}
\text{Examples} & & \text{Iterations} & x_{n} & f(x_{n}) \\
f_{1},\text{ }x_{0}=1 & & & & \\
NM & & 7 & 1.40449164831534122635086 & -1.05e^{-50} \\
AM & & 5 & 1.40449164831534122635086 & -5.82e^{-54} \\
HM & & 4 & 1.40449164831534122635086 & -5.45e^{-63} \\
CM & & 5 & 1.40449164831534122635086 & -2.01e^{-64} \\
HHM & & 6 & 1.40449164821534122603508 & 1.82e^{-25} \\
NR & & 5 & 1.40449164821534122603508 & 9.23e^{-26} \\
\text{Alg. }2.2 & & 3 & 1.40449164821534122603508 & 3.40e^{-25} \\
\text{Alg. }2.3 & & 3 & 1.40449164821534122603509 & 2.32e^{-44} \\
f_{2},\text{ }x_{0}=2 & & & & \\
NM & & 6 & 0.257530285439860760455367 & 2.94e^{-55} \\
AM & & 5 & 0.257530285439860760455367 & 1.03e^{-63} \\
HM & & 5 & 0.257530285439860760455367 & 0 \\
CM & & 4 & 0.257530285439860760455367 & 1.05e^{-62} \\
HHM & & 5 & 0.257530285439860760455367 & -6.01e^{-25} \\
NR & & 5 & 0.257530285439860760455367 & 1.08e^{-23} \\
\text{Alg. }2.2 & & 3 & 0.257530285439860934975933 & 1.72e^{-69} \\
\text{Alg. }2.3 & & 2 & 0.257530285439860934975933 & 1.01e^{-29}
\end{array}
\]
\[
\begin{array}{ccccc}
\text{Examples} & & \text{Iterations} & x_{n} & f(x_{n}) \\
f_{3},\text{ }x_{0}=1.7 & & & & \\
NM & & 5 & 0.739085133215160641655372 & -2.04e^{-31} \\
AM & & 4 & 0.739085133215160641655372 & -7.13e^{-47} \\
HM & & 4 & 0.739085133215160641655372 & -5.04e^{-58} \\
CM & & 4 & 0.739085133215160641655372 & 0 \\
HHM & & 4 & 0.739085133215160641655372 & 3.91e^{-17} \\
NR & & 4 & 0.739085133215160645628855 & 6.66e^{-18} \\
\text{Alg. }2.2 & & 3 & 0.739085133215160641655312 & 3.21e^{-50} \\
\text{Alg. }2.3 & & 3 & 0.739085133215160641655312 & 1.21e^{-100} \\
f_{4},\text{ }x_{0}=1.5 & & & & \\
NM & & 8 & 2.0000000000000000000001234 & 2.05e^{-42} \\
AM & & 5 & 2 & 0 \\
HM & & 5 & 2 & 0 \\
CM & & 5 & 2 & 0 \\
HHM & & 7 & 2.000000000000000000000828 & 2.47e^{-22} \\
NR & & 5 & 2.000000000000000000000302 & 1.12e^{-24} \\
\text{Alg. }2.2 & & 4 & 2.000000000000000000000007 & 1.83e^{-44} \\
\text{Alg. }2.3 & & 4 & 2.000000000000000000000000 & 5.07e^{-27} \\
f_{5},\text{ }x_{0}=1.5 & & & & \\
NM & & 7 & 2.154434690031883721759235 & 2.05e^{-54} \\
AM & & 5 & 2.154434690031883721759235 & -5.01e^{-63} \\
HM & & 5 & 2.154434690031883721759235 & -5.03e^{-62} \\
CM & & 5 & 2.154434690031883721759235 & -5.01e^{-64} \\
HHM & & 6 & 2.154434690031883721759293 & 7.86e^{-27} \\
NR & & 4 & 2.154434690031883721759292 & 8.98e^{-24} \\
\text{Alg. }2.2 & & 3 & 2.154434690031883721759294 & 2.00e^{-27} \\
\text{Alg. }2.3 & & 3 & 2.154434690031883721759293 & 1.33e^{-50} \\
f_{6},\text{ }x_{0}=3.5 & & & & \\
NM & & 13 & 3.000000000000000000000000 & 1.52e^{-48} \\
AM & & 7 & 3.000000000000000000000000 & -4.32e^{-48} \\
HM & & 8 & 3.000000000000000000000000 & 2.01e^{-62} \\
CM & & 8 & 3.000000000000000000000000 & 2.05e^{-62} \\
HHM & & 12 & 3.000000000000000000000000 & 5.47e^{-25} \\
NR & & 6 & 3.00000010285594873990664 & 1.34e^{-06} \\
\text{Alg. }2.2 & & 4 & 3.00000000000000000000000 & 4.76e^{-30} \\
\text{Alg. }2.3 & & 4 & 3.00000000000000000000000 & 1.99e^{-102}
\end{array}
\]
5. Conclusion
We have established two new algorithms with fourth- and fifth-order
convergence by applying a modified decomposition technique to a coupled
system. We have discussed the convergence analysis of these newly established
algorithms, and a computational comparison with some well-known methods for
solving nonlinear equations has been made. From the numerical tables, we see
that our new methods converge quickly to the true solution of the equations
and are comparable with some classical methods.
Competing Interests
The author(s) do not have any competing interests in the manuscript.