The aim of this paper is to present new sixth order iterative methods for solving non-linear equations. The derivation of these methods is purely based on variational iteration technique. Our methods are verified by means of various test examples and numerical results show that our developed methods are more effective with respect to the previously well known methods.
Case 1. Let \(g(x_n)=\exp(\beta x_n^{2})\); then \(g^{\prime}(x_n)= 2\beta x_{n} g(x_n)\). Using these values in (12), we obtain the following algorithm.
Algorithm 2.1. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme: \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},\quad n=0,1,2,\ldots, \\ x_{n+1}&=& y_n-\frac{f(y_n)}{[f^{\prime }(y_n)+2\beta x_n f(y_n)]}. \end{eqnarray*}
Case 2. Let \(g(x_n)=\exp(-\beta f(x_n))\); then \(g^{\prime}(x_n)= -\beta f^{\prime }(x_{n})g(x_n)\). Using these values in (12), we obtain the following algorithm.
Algorithm 2.2. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme: \begin{eqnarray*} y_{n} &=&x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},\quad n=0,1,2,\ldots, \\ x_{n+1}&=&y_n-\frac{f(y_n)}{[f^{\prime }(y_n)-\beta f(y_n)f^{\prime }(x_n)]}. \end{eqnarray*}
Case 3. Let \(g(x_n)=\exp(-\beta x_n)\); then \(g^{\prime}(x_n)= -\beta g(x_n)\). Using these values in (12), we obtain the following algorithm.
Algorithm 2.3. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme: \begin{eqnarray*} y_{n} &=& x_{n}-\frac{2f(x_{n})f^{\prime }(x_{n})}{2f^{\prime2}(x_{n})-f(x_{n})f^{\prime \prime }(x_{n})},\quad n=0,1,2,\ldots, \\ x_{n+1}&=& y_n-\frac{f(y_n)}{[f^{\prime }(y_n)-\beta f(y_n)]}. \end{eqnarray*}
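The three algorithms differ only in the correction term added to \(f^{\prime}(y_n)\) in the denominator of the second step. The following Python sketch illustrates the family: the predictor is the Halley-type step shared by all three algorithms, and the corrector takes the correction term as a pluggable function. The function names, the choice \(\beta=10^{-4}\), the tolerance, and the test function (Example 4.6) are illustrative assumptions, not prescribed by the paper.

```python
import math

def solve(f, df, d2f, x0, correction, beta=1e-4, tol=1e-12, max_iter=50):
    """Two-step scheme: Halley-type predictor, corrected Newton-like step.
    Returns (approximate root, number of iterations used)."""
    x = x0
    for n in range(max_iter):
        fx, dfx = f(x), df(x)
        # Predictor: y_n = x_n - 2 f f' / (2 f'^2 - f f'')
        y = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2f(x))
        fy = f(y)
        # Corrector: x_{n+1} = y_n - f(y_n) / (f'(y_n) + correction)
        x = y - fy / (df(y) + correction(x, y, fy, dfx, beta))
        if abs(f(x)) < tol:
            return x, n + 1
    return x, max_iter

# Correction terms of Algorithms 2.1, 2.2 and 2.3, respectively.
corr_21 = lambda x, y, fy, dfx, b: 2.0 * b * x * fy
corr_22 = lambda x, y, fy, dfx, b: -b * fy * dfx
corr_23 = lambda x, y, fy, dfx, b: -b * fy

# Test on Example 4.6: f(x) = x e^x - 1, starting from x0 = 1.
f   = lambda x: x * math.exp(x) - 1
df  = lambda x: (x + 1) * math.exp(x)
d2f = lambda x: (x + 2) * math.exp(x)

root, iters = solve(f, df, d2f, 1.0, corr_23)
```

Because the \(\beta\)-dependent term is proportional to \(f(y_n)\), it vanishes at the root, so all three corrections leave the fixed point of the iteration unchanged.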
By taking different values of \(\beta\), we can obtain different iterative methods. To obtain the best results from the above algorithms, \(\beta\) should be chosen so that the denominator is neither zero nor small in magnitude.

Theorem 3.1. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\). If \(f(x)\) is sufficiently smooth in a neighborhood of \(\alpha \), then the convergence order of the main and general iteration scheme described in relation (12) is at least six.
Proof. To analyze the convergence of the main and general iteration scheme described in relation (12), suppose that \(\alpha \) is a root of the equation \(f(x)=0\) and let \(e_n\) denote the error at the \(n\)th iteration, so that \(e_n=x_n-\alpha\). Using the Taylor series expansion of \(f\) about \(\alpha\) (and noting \(f(\alpha)=0\)), we have \begin{eqnarray*} f(x_n)&=&{f^{\prime }(\alpha)e_n}+\frac{1}{2!}{f^{\prime \prime }(\alpha)e_n^2}+\frac{1}{3!}{f^{\prime \prime \prime }(\alpha)e_n^3}+\frac{1}{4!}{f^{(iv) }(\alpha)e_n^4}+\frac{1}{5!}{f^{(v) }(\alpha)e_n^5}\\ &+&\frac{1}{6!}{f^{(vi) }(\alpha)e_n^6}+O(e_n^{7}), \end{eqnarray*}
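The sixth-order claim of Theorem 3.1 can be checked numerically. The sketch below runs Algorithm 2.3 on the hypothetical test problem \(f(x)=x^2-4\) (chosen so the root \(\alpha=2\) is exact) with the illustrative value \(\beta=1/100\), using exact rational arithmetic so that the rapidly shrinking errors are not lost to floating-point rounding; the estimated order \(p_n=\ln(e_{n+1}/e_n)/\ln(e_n/e_{n-1})\) should approach six.

```python
from fractions import Fraction
import math

beta = Fraction(1, 100)           # illustrative beta for Algorithm 2.3
f   = lambda x: x * x - 4         # root alpha = 2 is exact
df  = lambda x: 2 * x
d2f = lambda x: Fraction(2)

x = Fraction(5, 2)                # x0 = 2.5, so e0 = 1/2
errors = [abs(x - 2)]
for _ in range(3):
    # Predictor (Halley-type step) and corrector of Algorithm 2.3.
    y = x - 2 * f(x) * df(x) / (2 * df(x)**2 - f(x) * d2f(x))
    x = y - f(y) / (df(y) - beta * f(y))
    errors.append(abs(x - 2))

def log_frac(e):
    # Logarithm of a positive Fraction whose value underflows a float.
    return math.log(e.numerator) - math.log(e.denominator)

# Estimated convergence orders p_n = log(e_{n+1}/e_n) / log(e_n/e_{n-1}).
orders = [(log_frac(errors[n + 1]) - log_frac(errors[n]))
          / (log_frac(errors[n]) - log_frac(errors[n - 1]))
          for n in (1, 2)]
```

With these choices the errors fall roughly from \(10^{-1}\) to \(10^{-5}\), \(10^{-34}\), and \(10^{-207}\), and both order estimates come out close to six, consistent with the theorem.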
Example 4.1. \(f_{1}=x^{10}-1\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(17\) | \(34\) | \(0.7\) | \(1.020203\times10^{-25}\) | |
OM | \(6\) | \(18\) | \(0.7\) | \(3.610418\times10^{-20}\) | |
TM | \(9\) | \(27\) | \(0.7\) | \(2.903022\times10^{-44}\) | |
MHM | \(5\) | \(15\) | \(0.7\) | \(7.301483\times10^{-68}\) | \(1.0000000000000000000000000\) |
Algorithm 2.1 | \(5\) | \(15\) | \(0.7\) | \(5.714130\times10^{-50}\) | |
Algorithm 2.2 | \(4\) | \(12\) | \(0.7\) | \(3.873651\times10^{-18}\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(0.7\) | \(1.569018\times10^{-18}\) | |
Example 4.2. \(f_{2}=(x-1)^3-1\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(16\) | \(32\) | \(-0.5\) | \(3.692382\times10^{-21}\) | |
OM | \(9\) | \(27\) | \(-0.5\) | \(3.319738\times10^{-43}\) | |
TM | \(8\) | \(24\) | \(-0.5\) | \(3.692382\times10^{-21}\) | |
MHM | \(7\) | \(21\) | \(-0.5\) | \(2.178093\times10^{-15}\) | \(2.000000000000000000000000000000\) |
Algorithm 2.1 | \(5\) | \(15\) | \(-0.5\) | \(1.647894\times10^{-50}\) | |
Algorithm 2.2 | \(4\) | \(12\) | \(-0.5\) | \(1.279762\times10^{-65}\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(-0.5\) | \(2.042477\times10^{-19}\) | |
Example 4.3. \(f_{3}=xe^{x^2}-\sin^{2}(x)+3\cos(x)+5\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(11\) | \(22\) | \(-2.5\) | \(5.818711\times10^{-27}\) | |
OM | \(5\) | \(15\) | \(-2.5\) | \(5.818711\times10^{-27}\) | |
TM | \(6\) | \(18\) | \(-2.5\) | \(2.504418\times10^{-54}\) | |
MHM | \(11\) | \(33\) | \(-2.5\) | \(1.111535\times10^{-38}\) | \(-1.207647827130918927009416758360 \) |
Algorithm 2.1 | \(4\) | \(12\) | \(-2.5\) | \(4.908730\times10^{-22}\) | |
Algorithm 2.2 | \(5\) | \(15\) | \(-2.5\) | \(9.011081\times10^{-41}\) | |
Algorithm 2.3 | \(4\) | \(12\) | \(-2.5\) | \(4.225070\times10^{-35}\) | |
Example 4.4. \(f_{4}=\sin^{2}(x)-x^{2}+1\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(15\) | \(30\) | \(0.1\) | \(3.032691\times10^{-22}\) | |
OM | \(8\) | \(24\) | \(0.1\) | \(2.481593\times10^{-57}\) | |
TM | \(8\) | \(24\) | \(0.1\) | \(2.903022\times10^{-44}\) | |
MHM | \(7\) | \(21\) | \(0.1\) | \(7.771503\times10^{-47}\) | \(1.404491648215341226035086817790 \) |
Algorithm 2.1 | \(4\) | \(12\) | \(0.1\) | \(3.145402\times10^{-73}\) | |
Algorithm 2.2 | \(6\) | \(18\) | \(0.1\) | \(1.208844\times10^{-45}\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(0.1\) | \(1.401860\times10^{-29}\) | |
Example 4.5. \(f_{5}=e^{(x^2+7x-30)}-1\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(16\) | \(32\) | \(2.8\) | \(1.277036\times10^{-16}\) | |
OM | \(5\) | \(15\) | \(2.8\) | \(3.837830\times10^{-24}\) | |
TM | \(8\) | \(24\) | \(2.8\) | \(1.277036\times10^{-16}\) | |
MHM | \(5\) | \(15\) | \(2.8\) | \(8.373329\times10^{-70}\) | \(3.000000000000000000000000000000 \) |
Algorithm 2.1 | \(4\) | \(16\) | \(2.8\) | \(4.131496\times10^{-34}\) | |
Algorithm 2.2 | \(3\) | \(9\) | \(2.8\) | \(9.157220\times10^{-32}\) | |
Algorithm 2.3 | \(3\) | \(9\) | \(2.8\) | \(4.858181\times10^{-34}\) | |
Example 4.6. \(f_{6}=xe^{x}-1\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(5\) | \(10\) | \(1\) | \(8.478184\times10^{-17}\) | |
OM | \(3\) | \(9\) | \(1\) | \(8.984315\times10^{-40}\) | |
TM | \(3\) | \(9\) | \(1\) | \(2.130596\times10^{-33}\) | |
MHM | \(3\) | \(9\) | \(1\) | \(1.116440\times10^{-68}\) | \(0.567143290409783872999968662210\) |
Algorithm 2.1 | \(2\) | \(6\) | \(1\) | \(2.910938\times10^{-19}\) | |
Algorithm 2.2 | \(2\) | \(6\) | \(1\) | \(1.292777\times10^{-17}\) | |
Algorithm 2.3 | \(2\) | \(6\) | \(1\) | \(2.468437\times10^{-27}\) | |
Example 4.7. \(f_{7}=x^{3}+4x^{2}-15\)
Methods | \(N\) | \(N_{f}\) | \(x_{0}\) | \(f(x_{n+1})\) | \(x_{n+1}\) |
---|---|---|---|---|---|
NM | \(38\) | \(76\) | \(-0.3\) | \(1.688878\times10^{-22}\) | |
OM | \(6\) | \(18\) | \(-0.3\) | \(1.173790\times10^{-16}\) | |
TM | \(19\) | \(57\) | \(-0.3\) | \(1.688878\times10^{-22}\) | |
MHM | \(16\) | \(48\) | \(-0.3\) | \(3.742527\times10^{-16}\) | \(1.631980805566063517522106445540 \) |
Algorithm 2.1 | \(4\) | \(12\) | \(-0.3\) | \(1.663804\times10^{-29}\) | |
Algorithm 2.2 | \(6\) | \(18\) | \(-0.3\) | \(1.744284\times10^{-38}\) | |
Algorithm 2.3 | \(4\) | \(12\) | \(-0.3\) | \(1.639561\times10^{-72}\) | |
Table 2 shows the numerical comparison of Newton's method (NM), Ostrowski's method (OM), Traub's method (TM), the modified Halley's method (MHM) and our developed methods. The columns report the number of iterations \(N\) and the number of function and derivative evaluations \(N_{f}\) required to meet the stopping criterion, together with the magnitude \(|f(x_{n+1})|\) at the final estimate \(x_{n+1}\).