The purpose of this paper is to introduce and evaluate new iterative methods for approximating solutions of nonlinear equations, derived from the variational iteration technique. We present a convergence analysis of the proposed methods and demonstrate their effectiveness on several test problems. A comparison with existing methods shows that the new methods are a competitive alternative, and that the approach can generate a diverse family of iterative methods for solving nonlinear equations. This study therefore contributes to ongoing efforts to improve the efficiency and accuracy of techniques for solving nonlinear equations.
Finding roots of nonlinear equations efficiently has widespread applications in numerical analysis. Owing to this importance and to significant applications in various branches of science, several methods have been developed for solving \(f(x)=0,\) using techniques such as Taylor series, quadrature formulas, the homotopy perturbation method, Adomian decomposition and the variational iteration technique [1-23]. Newton's method for a single nonlinear equation is written as \[x_{n+1} =x_{n} -\frac{f(x_{n} )}{f'(x_{n} )} ,\qquad n=0,1,2,3,\cdots.\] This is an important and basic method [21], which converges quadratically. To improve the local order of convergence, many modified methods have been proposed; see [2,3] and [9-14]. In this paper, we use a technique suggested by He [6] for the development of iterative schemes for linear and nonlinear problems. We implement He’s variational iteration technique to suggest and analyze some new iterative methods for solving nonlinear equations. The variational iteration technique was developed by He [6] and has been used to solve a wide class of problems arising in various branches of pure and applied sciences; it is a very reliable and efficient technique. See also Noor and Mohyud-Din [15] and the references therein. Essentially using the idea and technique of He [6], Noor and Shah [16] suggested and analyzed some iterative methods for solving nonlinear equations. Here we use this technique to obtain higher-order convergent iterative methods. An approximation technique that removes the higher derivatives of the function is also introduced; we show that the new methods involve only the first derivative of the function and are free from higher-order derivatives. Several examples are given to illustrate the efficiency and performance of the new methods and their comparison with other iterative methods. These new methods can be considered as alternatives to the existing higher-order methods.
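For concreteness, the following minimal Python sketch (our own illustration, not code from the paper, whose computations are done in MAPLE) implements the Newton iteration quoted above; the function names and the tolerance are illustrative choices.

```python
# Minimal sketch of Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n);
# f and df are callables for f and f', x0 is the initial guess.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the simple cubic x^3 - 10 = 0 from the test problems below.
print(newton(lambda x: x**3 - 10, lambda x: 3 * x**2, 2.0))  # ~2.154434690...
```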
In this section, we use a special relation for the implementation of He’s variational iteration technique [6]. We develop the main recurrence relation which generates efficient iterative schemes for the approximate solution of nonlinear equations. Consider a nonlinear equation of the type \[\label{GrindEQ__1_} f(x)=0. \tag{1}\] We assume that \(p\) is a simple root and \(\gamma\) is an initial guess sufficiently close to \(p.\) We consider the approximate solution \(x_{n}\) of (1) such that \(f(x_{n} )\ne 0.\)
Let \(g(x_{n} )\) be an arbitrary function and \(\lambda\) a parameter, usually called the Lagrange multiplier, which can be identified from the optimality condition. Consider the following iterative relation
\[\label{GrindEQ__2_} x_{n+1} =\phi (x_{n} )+\lambda \, [f(\phi (x_{n} ))g(\phi (x_{n} ))], \tag{2}\] where \(\phi (x_{n} )\) is an arbitrary auxiliary iteration function of order \(p\ge 1.\) Relation (2) is a generalized relation. We note that, if \(\phi\) is the identity map, that is \(\phi (x_{n} )=x_{n} ,\) and \(p=1,\) then (2) reduces to the following iterative relation
\[\label{GrindEQ__3_} x_{n+1} =x_{n} +\lambda \, [f(x_{n} )g(x_{n} )], \tag{3}\] which was considered and analyzed by He [6]; see also Noor [11,13]. Thus our scheme (2) includes He’s scheme as a special case. In this paper, our aim is to analyze relation (2) in order to obtain higher-order methods; to this end, we study the arbitrary auxiliary function for \(p=2,\) and the generated methods will be of fourth order. Using the optimality criterion, we obtain the value of \(\lambda\) from (2) as \[\label{GrindEQ__4_} \lambda =-\frac{\phi '(x_{n} )}{[g'(\phi (x_{n} ))f(\phi (x_{n} ))+g(\phi (x_{n} ))f'(\phi (x_{n} ))]} . \tag{4}\] From (2) and (4), we have \[\label{GrindEQ__5_} x_{n+1} =\phi (x_{n} )-\frac{f(\phi (x_{n} ))g(\phi (x_{n} ))}{[g'(\phi (x_{n} ))f(\phi (x_{n} ))+g(\phi (x_{n} ))f'(\phi (x_{n} ))]} . \tag{5}\] Let us consider \[\label{GrindEQ__6_} \phi (x_{n} )=y_{n} =x_{n} -\frac{f(x_{n} )}{f'(x_{n} )} . \tag{6}\] Using (6) in (5), we obtain the following iterative relation for solving nonlinear equations: \[\label{GrindEQ__7_} x_{n+1} =y_{n} -\frac{f(y_{n} )g(y_{n} )}{[f'(y_{n} )g(y_{n} )+f(y_{n} )g'(y_{n} )]} , \tag{7}\] where \(g(y_{n} )\) is the auxiliary function. We observe that, if \(p\) is the root of \(f(x),\) then for \(x=p\) we have \(f(p)=0\) and \[\label{GrindEQ__8_} \frac{g'(y)}{g(y)} \approx \frac{g'(p)}{g(p)} . \tag{8}\] Also we have \[\label{GrindEQ__9_} \frac{g'(x)}{g(x)} \approx \frac{g'(p)}{g(p)} . \tag{9}\] Combining (8) and (9) and substituting in (7), we obtain the following iterative scheme \[\label{GrindEQ__10_} x_{n+1} =y_{n} -\frac{f(y_{n} )g(x_{n} )}{[f'(y_{n} )g(x_{n} )+f(y_{n} )g'(x_{n} )]} . \tag{10}\] From the above scheme, different choices of the auxiliary function \(g(x_{n} )\) yield several iterative methods of fourth-order convergence for solving nonlinear equations. Our aim here is to improve the efficiency of this scheme by removing \(f'(y_{n} )\) from it. Using the Taylor series technique, we have \[\label{GrindEQ__11_} f(y_{n} )\simeq f(x_{n} )+(y_{n} -x_{n} )f'(x_{n} )+\frac{(y_{n} -x_{n} )^{2} }{2} f''(x_{n} )=\frac{[f(x_{n} )]^{2} f''(x_{n} )}{2[f'(x_{n} )]^{2} } . \tag{11}\] Let us approximate \[\label{GrindEQ__12_} f''(x_{n} )\simeq \frac{\left[f'(y_{n} )-f'(x_{n} )\right]}{y_{n} -x_{n} } . \tag{12}\] Using (12) in (11) and simplifying, we obtain
\[\label{GrindEQ__13_} f'(y_{n} )\approx \frac{f'(x_{n} )}{f(x_{n} )} \left[f(x_{n} )-2f(y_{n} )\right]. \tag{13}\]
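As a quick sanity check, the following sympy sketch (our own illustration; the symbol names are arbitrary) substitutes (12) into (11) with \(y_{n} -x_{n} =-f(x_{n} )/f'(x_{n} )\) and solves for \(f'(y_{n} ),\) recovering (13).

```python
# Symbolic verification of the approximation (13) obtained from (11) and (12).
import sympy as sp

fx, dfx, fy, dfy = sp.symbols('f_x fprime_x f_y fprime_y')  # f(x_n), f'(x_n), f(y_n), f'(y_n)
h = -fx / dfx                                               # y_n - x_n, from (6)
fpp = (dfy - dfx) / h                                       # approximation (12) of f''(x_n)
eq = sp.Eq(fy, fx**2 * fpp / (2 * dfx**2))                  # relation (11) with (12) inserted

sol = sp.solve(eq, dfy)[0]
print(sp.simplify(sol - dfx * (fx - 2 * fy) / fx))          # prints 0, i.e. sol equals (13)
```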
Using (13) in (10), we get the relation
\[\label{GrindEQ__14_} x_{n+1} =y_{n} -\frac{f(x_{n} )f(y_{n} )g(x_{n} )}{f'(x_{n} )[f(x_{n} )-2f(y_{n} )]g(x_{n} )+f(x_{n} )f(y_{n} )g'(x_{n} )} . \tag{14}\]
This is the main iterative scheme for generating fourth-order convergent methods. Choosing particular values of the auxiliary function \(g(x_{n} )\) yields the iterative methods below; before specializing \(g,\) we give a computational sketch of (14).
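The following Python sketch (our own illustration; the paper’s computations are carried out in MAPLE) implements the general predictor–corrector pair (6) and (14) for an arbitrary auxiliary function \(g\) with derivative \(g'\):

```python
# One possible implementation of the general two-step scheme (6)+(14).
def scheme14(f, df, g, dg, x0, tol=1e-14, max_iter=50):
    """Iterate (14) until |x_{n+1} - x_n| < tol or max_iter is reached."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                          # predictor, Eq. (6)
        fy, gx, dgx = f(y), g(x), dg(x)
        denom = dfx * (fx - 2 * fy) * gx + fx * fy * dgx
        x_new = y - fx * fy * gx / denom          # corrector, Eq. (14)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For example, Algorithm 1 below corresponds to the choice \(g(x)=e^{-\alpha x}\), \(g'(x)=-\alpha e^{-\alpha x}\).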
I. Let \(g(x_{n} )=e^{-\alpha x_{n} } .\) Then from (14), we obtain the following iterative method for solving the nonlinear equation (1).
Algorithm 1. For a given \(x_{0} ,\) find the approximate solution \(x_{n+1}\) by the iterative scheme \[y_{n} =x_{n} -\frac{f(x_{n} )}{f'(x_{n} )} ,\]
\[x_{n+1} =y_{n} -\frac{f(x_{n} )f(y_{n} )}{f'(x_{n} )[f(x_{n} )-2f(y_{n} )]-\alpha f(y_{n} )f(x_{n} )} ,\qquad n=0,1,2,\cdots.\]
If \(\alpha =0,\) then Algorithm 1 reduces to the well known Ostrowski method [17].
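A hedged Python sketch of Algorithm 1 (our own illustration; the choice \(\alpha =1\) and the test function are arbitrary):

```python
# Algorithm 1: auxiliary function g(x) = exp(-alpha*x); alpha = 0 recovers Ostrowski's method.
import math

def algorithm1(f, df, x0, alpha=1.0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx
        fy = f(y)
        x_new = y - fx * fy / (dfx * (fx - 2 * fy) - alpha * fx * fy)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f  = lambda x: x**2 - math.exp(x) - 3 * x + 2     # test problem f_2 below
df = lambda x: 2 * x - math.exp(x) - 3
print(algorithm1(f, df, x0=1.0))                  # ~0.2575302854398...
```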
II. Let \(g(x_{n} )=e^{-\alpha f(x_{n} )} .\) Then from (14), we have the following iterative scheme for solving the nonlinear Equation (1).
Algorithm 2. For a given \(x_{0}\) , find the approximate solution \(x_{n+1}\) by the iterative scheme \[y_{n} =x_{n} -\frac{f(x_{n} )}{f'(x_{n} )} ,\] \[x_{n+1} =y_{n} -\frac{f(x_{n} )f(y_{n} )}{f'(x_{n} )\left([f(x_{n} )-2f(y_{n} )]-\alpha f(y_{n} )f(x_{n} )\right)} ,\]
If \(\alpha =0,\) then Algorithm 2 reduces to the well known Ostrowski method [17].
III. Let \(g(x_{n} )=e^{\frac{\alpha }{f'(x_{n} )} } .\) Then \(g'(x_{n} )=e^{\frac{\alpha }{f'(x_{n} )} } \left(-\frac{\alpha f''(x_{n} )}{[f'(x_{n} )]^{2} } \right).\)
Now from (14), after combining with (11), we get the following iterative method for solving the nonlinear equation (1).
Algorithm 3. For a given \(x_{0} ,\) find the approximate solution \(x_{n+1}\) by the iterative scheme \[y_{n} =x_{n} -\frac{f(x_{n} )}{f'(x_{n} )} ,\] \[x_{n+1} =y_{n} -\frac{f(y_{n} )[f(x_{n} )]^{2} }{f'(x_{n} )f(x_{n} )[f(x_{n} )-2f(y_{n} )]-2\alpha [f(y_{n} )]^{2} } ,\]
If \(\alpha =0,\) then Algorithm 3 reduces to the well known Ostrowski method [17].
IV. Let \(g(x_{n} )=e^{-\frac{\alpha f(x_{n} )}{f'(x_{n} )} } .\) Then from (14), after combining with (11), we have the following iterative scheme for solving the nonlinear Equation (1).
Algorithm 4. For a given \(x_{0}\) , find the approximate solution \(x_{n+1}\) by the iterative scheme \[y_{n} =x_{n} -\frac{f(x_{n} )}{f'(x_{n} )} ,\] \[x_{n+1} =y_{n} -\frac{f(x_{n} )f(y_{n} )}{[f(x_{n} )-2f(y_{n} )]\left[f'(x_{n} )-\alpha f(y_{n} )\right]} .\]
If \(\alpha =0,\) then Algorithm 4 reduces to the well known Ostrowski method [17].
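For reference, the correctors of Algorithms 2–4 can be written as single-step updates; the sketch below (our own, using the quantities \(y_{n} ,f(x_{n} ),f(y_{n} ),f'(x_{n} )\) already supplied by the predictor step) mirrors the displayed formulas, and \(\alpha =0\) reduces each update to Ostrowski’s method.

```python
# One corrector step of Algorithms 2, 3 and 4 (y = y_n, fx = f(x_n),
# fy = f(y_n), dfx = f'(x_n)); plug any of these into the predictor loop above.
def step_alg2(y, fx, fy, dfx, alpha):
    return y - fx * fy / (dfx * ((fx - 2 * fy) - alpha * fx * fy))

def step_alg3(y, fx, fy, dfx, alpha):
    return y - fy * fx**2 / (dfx * fx * (fx - 2 * fy) - 2 * alpha * fy**2)

def step_alg4(y, fx, fy, dfx, alpha):
    return y - fx * fy / ((fx - 2 * fy) * (dfx - alpha * fy))
```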
In this section, we establish the convergence of the main iterative scheme (14) developed in Section 2.
Theorem 1. Assume that the function \(f:D\subset R\to R\), defined on an open interval \(D\), has a simple root \(p\in D.\) If \(f(x)\) is sufficiently smooth in some neighborhood of the root, then the scheme (14) has fourth-order convergence.
Proof. Let \(p\) be a simple root of \(f(x)\). Since \(f\) is sufficiently differentiable, expanding \(f(x_{n} )\) and \(f'(x_{n} )\) in Taylor series about \(p,\) we get
\[\label{GrindEQ__15_} f(x_{n} )=f'(p)\left[e_{n} +c_{2} e_{n}^{2} +c_{3} e_{n}^{3} +c_{4} e_{n}^{4} +c_{5} e_{n}^{5} +c_{6} e_{n}^{6} +O(e_{n}^{7} )\right], \tag{15}\] and \[\label{GrindEQ__16_} f'(x_{n} )=f'(p)\left[1+2c_{2} e_{n} +3c_{3} e_{n}^{2} +4c_{4} e_{n}^{3} +5c_{5} e_{n}^{4} +6c_{6} e_{n}^{5} +O(e_{n}^{6} )\right], \tag{16}\] where
\(c_{k} =\frac{1}{k!} \frac{f^{(k)} (p)}{f'(p)} ,\; k=2,3,\cdots,\) and \(e_{n} =x_{n} -p.\)
From (15) and (16), we get \[\label{GrindEQ__17_} \begin{array}{l} {\frac{f(x_{n} )}{f'(x_{n} )} =e_{n} -c_{2} e_{n}^{2} +2(c_{2}^{2} -c_{3} )e_{n}^{3} +(7c_{2} c_{3} -4c_{2}^{3} -3c_{4} )e_{n}^{4} +(8c_{2}^{4} -20c_{3} c_{2}^{2} } \\ {\qquad +6c_{3}^{2} +10c_{2} c_{4} -4c_{5} )e_{n}^{5} +(13c_{2} c_{5} -28c_{2}^{2} c_{4} -5c_{6} -16c_{2}^{5} +52c_{2}^{3} c_{3} } \\ {\qquad +17c_{3} c_{4} -33c_{2} c_{3}^{2} )e_{n}^{6} +O(e_{n}^{7} ).} \end{array} \tag{17}\]
Using (17), we have
\[\label{GrindEQ__18_} \begin{array}{l} {y_{n} =p+c_{2} e_{n}^{2} -2(c_{2}^{2} -c_{3} )e_{n}^{3} -(7c_{2} c_{3} -4c_{2}^{3} -3c_{4} )e_{n}^{4} -(8c_{2}^{4} -20c_{3} c_{2}^{2} +6c_{3}^{2} } \\ {\qquad +10c_{2} c_{4} -4c_{5} )e_{n}^{5} +(13c_{2} c_{5} -28c_{2}^{2} c_{4} -5c_{6} -16c_{2}^{5} +52c_{2}^{3} c_{3} +17c_{3} c_{4} } \\ {\qquad -33c_{2} c_{3}^{2} )e_{n}^{6} +O(e_{n}^{7} ).} \end{array} \tag{18}\] From (18), we obtain
\[\label{GrindEQ__19_} \begin{array}{l} {f(y_{n} )=f'(p)[c_{2} e_{n}^{2} -2(c_{2}^{2} -c_{3} )e_{n}^{3} -(7c_{2} c_{3} -5c_{2}^{3} -3c_{4} )e_{n}^{4} -(12c_{2}^{4} -24c_{3} c_{2}^{2} } \\ {\qquad +6c_{3}^{2} +10c_{2} c_{4} -4c_{5} )e_{n}^{5} +O(e_{n}^{6} )].} \end{array} \tag{19}\] Now from (15) and (19), we get
\[\label{GrindEQ__20_} f(x_{n} )-2f(y_{n} )=f'(p)\left[e_{n} -c_{2} e_{n}^{2} +(4c_{2}^{2} -3c_{3} )e_{n}^{3} +(14c_{2} c_{3} -10c_{2}^{3} -5c_{4} )e_{n}^{4} +O(e_{n}^{5} )\right], \tag{20}\] and \[\label{GrindEQ__21_} f(x_{n} )f(y_{n} )g(x_{n} )=[f'(p)]^{2} \left[g(p)c_{2} e_{n}^{3} +\left\{g'(p)c_{2} -g(p)c_{2}^{2} +2g(p)c_{3} \right\}e_{n}^{4} +O(e_{n}^{5} )\right]. \tag{21}\] From (15), (20) and (21), we obtain \[\label{GrindEQ__22_} \begin{array}{l} {f'(x_{n} )[f(x_{n} )-2f(y_{n} )]g(x_{n} )+f(x_{n} )f(y_{n} )g'(x_{n} )} \\ {\qquad =[f'(p)]^{2} \Big[g(p)e_{n} +\big(g(p)c_{2} +g'(p)\big)e_{n}^{2} +\Big(\frac{1}{2} g''(p)+2g(p)c_{2}^{2} \Big)e_{n}^{3} } \\ {\qquad \qquad +\Big(-\frac{1}{2} g''(p)c_{2} -2g'(p)c_{3} +\frac{1}{6} g'''(p)+3g'(p)c_{2}^{2} -2g(p)c_{2}^{3} -g(p)c_{4} +5g(p)c_{2} c_{3} \Big)e_{n}^{4} +O(e_{n}^{5} )\Big].} \end{array} \tag{22}\] Now with the help of (21) and (22), we get \[\label{GrindEQ__23_} \begin{array}{l} {\frac{f(x_{n} )f(y_{n} )g(x_{n} )}{f'(x_{n} )[f(x_{n} )-2f(y_{n} )]g(x_{n} )+f(x_{n} )f(y_{n} )g'(x_{n} )} } \\ {\qquad =c_{2} e_{n}^{2} +(2c_{3} -2c_{2}^{2} )e_{n}^{3} +\left(\frac{g'(p)}{g(p)} c_{2}^{2} +3c_{2}^{3} +c_{4} -6c_{2} c_{3} \right)e_{n}^{4} +O(e_{n}^{5} ).} \end{array} \tag{23}\] Now from (18) and (23), we get
\[\label{GrindEQ__24_} x_{n+1} =p+\left(\frac{g'(p)}{g(p)} c_{2}^{2} +3c_{2}^{3} +c_{4} -6c_{2} c_{3} \right)e_{n}^{4} +O(e_{n}^{5} ). \tag{24}\] Finally, the error equation is \[\label{GrindEQ__25_} e_{n+1} =\left(\frac{g'(p)}{g(p)} c_{2}^{2} +3c_{2}^{3} +c_{4} -6c_{2} c_{3} \right)e_{n}^{4} +O(e_{n}^{5} ). \tag{25}\]
Thus we conclude that (14) has fourth-order convergence, and consequently all the methods derived from (14) also have fourth-order convergence. ◻
We now present some examples to illustrate the efficiency of the newly developed two-step iterative methods (see Tables 1–6). We compare Newton's method (NM), Traub's method (TM), Ostrowski's method (OM), and Algorithm 1 (NR1), Algorithm 2 (NR2), Algorithm 3 (NR3) and Algorithm 4 (NR4), which are introduced in this paper. We also note that these methods do not require the computation of the second derivative to carry out the iterations. All computations are done in MAPLE using 60-digit floating-point arithmetic (Digits := 60).
We use \(\varepsilon =10^{-32}\). The following stopping criteria are used in the computer programs:
(i) \(|x_{n+1} -x_{n} |<\varepsilon\); (ii) \(|f(x_{n+1} )|<\varepsilon\).
The computational order of convergence \(\rho\) is approximated for all the examples in Tables 1–6 (see [20]) by means of \[\rho \approx \frac{\ln \left(|x_{n+1} -x_{n} |/|x_{n} -x_{n-1} |\right)}{\ln \left(|x_{n} -x_{n-1} |/|x_{n-1} -x_{n-2} |\right)} ,\] and we also report the total number of functional evaluations (TNFE) required for the iterations. We consider the following nonlinear equations as test problems, which are the same as those in Noor and Shah [16].
\[f_{1} (x)=\sin ^{2} x-x^{2} +1, f_{2} (x)=x^{2} -e^{x} -3x+2,\]
\[f_{3} (x)=(x-1)^{3} -1, f_{4} (x)=x^{3} -10,\]
\[f_{5} (x)=xe^{x^{2} } -\sin ^{2} x+3\cos x+5, f_{6} (x)=e^{x^{2} +7x-30} -1.\]
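As an illustration of the experimental setup (a sketch on our part, not the authors' MAPLE code), the following Python fragment runs Algorithm 1 on one test problem with 60-digit arithmetic via mpmath, applies the stopping criteria (i)–(ii), counts functional evaluations, and estimates \(\rho\) from the last four iterates; the initial guess and the value of \(\alpha\) are illustrative choices.

```python
# Hedged test driver: Algorithm 1 with high-precision arithmetic and the
# computational order of convergence rho from the formula above.
from mpmath import mp, mpf, sin, cos, fabs, log

mp.dps = 60
EPS = mpf('1e-32')

def run_algorithm1(f, df, x0, alpha=mpf(1), max_iter=100):
    xs = [mpf(x0)]
    tnfe = 0
    for _ in range(max_iter):
        x = xs[-1]
        fx, dfx = f(x), df(x)
        y = x - fx / dfx
        fy = f(y)
        tnfe += 3                                          # f(x_n), f'(x_n), f(y_n)
        x_new = y - fx * fy / (dfx * (fx - 2 * fy) - alpha * fx * fy)
        xs.append(x_new)
        if fabs(x_new - x) < EPS or fabs(f(x_new)) < EPS:  # criteria (i) and (ii)
            break
    rho = None
    if len(xs) >= 4:
        d1, d2, d3 = [fabs(xs[-i] - xs[-i - 1]) for i in (1, 2, 3)]
        if d1 > 0 and d2 > 0 and d3 > 0:
            rho = log(d1 / d2) / log(d2 / d3)
    return xs[-1], len(xs) - 1, tnfe, rho

f1  = lambda x: sin(x)**2 - x**2 + 1                       # test problem f_1
df1 = lambda x: 2 * sin(x) * cos(x) - 2 * x
root, iters, tnfe, rho = run_algorithm1(f1, df1, x0='1.3')
print(root, iters, tnfe, rho)
```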
Table 1: Numerical comparison of the methods for \(f_{1} (x)=\sin ^{2} x-x^{2} +1\).

| Methods | IT | TNFE | \(x_n\) | \(\lvert f(x_n)\rvert\) | \(\delta\) | \(\rho\) |
|---|---|---|---|---|---|---|
NM | 7 | 14 | 1.404491648215341226 | 1.04e-50 | 7.33e-26 | 2.00003 |
TM | 4 | 16 | 1.404491648215341226 | 0.00e-01 | 7.33e-26 | 4.29576 |
OM | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 5.64e-28 | 4.24367 |
Algorithm 1 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 2.14e-27 | 4.27401 |
Algorithm 2 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 4.20e-18 | 3.97200 |
Algorithm 3 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 3.48e-17 | 4.29898 |
Algorithm 4 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 2.00e-16 | 4.52494 |
NM | 7 | 14 | 1.404491648215341226 | 1.04e-50 | 7.33e-26 | 2.00003 |
TM | 4 | 16 | 1.404491648215341226 | 0.00e-01 | 7.33e-26 | 4.29576 |
OM | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 5.64e-28 | 4.24367 |
Algorithm 1 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 9.57e-23 | 4.25801 |
Algorithm 2 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 1.80e-43 | 3.87150 |
Algorithm 3 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 2.33e-22 | 4.34817 |
Algorithm 4 | 4 | 12 | 1.404491648215341226 | 0.00e-01 | 2.83e-24 | 4.09265 |
Table 2: Numerical comparison of the methods for \(f_{2} (x)=x^{2} -e^{x} -3x+2\).

| Methods | IT | TNFE | \(x_n\) | \(\lvert f(x_n)\rvert\) | \(\delta\) | \(\rho\) |
|---|---|---|---|---|---|---|
NM | 6 | 12 | 0.2575302854398607 | 2.93e-55 | 9.10e-28 | 2.00050 |
TM | 4 | 16 | 0.2575302854398607 | 1.00e-59 | 7.74e-56 | 3.86670 |
OM | 4 | 12 | 0.2575302854398607 | 0.00e-01 | 2.70e-23 | 4.15500 |
Algorithm 1 | 4 | 12 | 0.2575302854398607 | 2.00e-59 | 4.23e-24 | 4.29911 |
Algorithm 2 | 4 | 12 | 0.2575302854398607 | 1.00e-59 | 2.26e-16 | 4.10939 |
Algorithm 3 | 4 | 12 | 0.2575302854398607 | 0.00e-01 | 2.60e-25 | 4.49624 |
Algorithm 4 | 4 | 12 | 0.2575302854398607 | 0.00e-01 | 1.35e-40 | 3.92927 |
NM | 6 | 12 | 0.2575302854398607 | 2.93e-55 | 9.10e-28 | 2.00050 |
TM | 4 | 16 | 0.2575302854398607 | 1.00e-59 | 7.74e-56 | 3.86670 |
OM | 4 | 12 | 0.2575302854398607 | 0.00e-01 | 2.70e-23 | 4.55500 |
Algorithm 1 | 4 | 12 | 0.2575302854398607 | 0.00e-01 | 3.37e-40 | 3.85293 |
Algorithm 2 | 4 | 12 | 0.2575302854398607 | 0.00e-01 | 5.62e-18 | 4.91925 |
Algorithm 3 | 4 | 12 | 0.2575302854398607 | 1.00e-59 | 2.83e-24 | 4.52741 |
Algorithm 4 | 4 | 12 | 0.2575302854398607 | 1.00e-59 | 5.76e-27 | 3.96131 |
Table 3: Numerical comparison of the methods for \(f_{3} (x)=(x-1)^{3} -1\).

| Methods | IT | TNFE | \(x_n\) | \(\lvert f(x_n)\rvert\) | \(\delta\) | \(\rho\) |
|---|---|---|---|---|---|---|
NM | 8 | 16 | 2.0000000000000000 | 2.06e-42 | 8.28e-22 | 2.00025 |
TM | 5 | 20 | 2.0000000000000000 | 0.00e-01 | 6.86e-43 | 3.86708 |
OM | 5 | 15 | 2.0000000000000000 | 0.00e-01 | 2.21e-49 | 3.90897 |
Algorithm 1 | 5 | 15 | 2.0000000000000000 | 0.00e-01 | 1.00e-40 | 4.28367 |
Algorithm 2 | 6 | 18 | 2.0000000000000000 | 0.00e-01 | 1.40e-41 | 3.95934 |
Algorithm 3 | 4 | 12 | 2.0000000000000000 | 0.00e-01 | 4.30e-17 | 3.96235 |
Algorithm 4 | 4 | 12 | 2.0000000000000000 | 0.00e-01 | 1.43e-21 | 3.84029 |
NM | 8 | 16 | 2.0000000000000000 | 2.06e-42 | 8.28e-22 | 2.00025 |
TM | 5 | 20 | 2.0000000000000000 | 0.00e-01 | 6.86e-43 | 3.86708 |
OM | 5 | 15 | 2.0000000000000000 | 0.00e-01 | 2.21e-49 | 3.90897 |
Algorithm 1 | 4 | 12 | 2.0000000000000000 | 0.00e-01 | 5.48e-24 | 3.85377 |
Algorithm 2 | 6 | 18 | 2.0000000000000000 | 0.00e-01 | 8.02e-57 | 3.97842 |
Algorithm 3 | 5 | 15 | 2.0000000000000000 | 0.00e-01 | 2.43e-53 | 3.98698 |
Algorithm 4 | 4 | 12 | 2.0000000000000000 | 0.00e-01 | 9.19e-17 | 3.83639 |
Table 4: Numerical comparison of the methods for \(f_{4} (x)=x^{3} -10\).

| Methods | IT | TNFE | \(x_n\) | \(\lvert f(x_n)\rvert\) | \(\delta\) | \(\rho\) |
|---|---|---|---|---|---|---|
NM | 7 | 14 | 2.15443469003188 | 2.06e-54 | 5.64e-28 | 2.00003 |
TM | 4 | 16 | 2.15443469003188 | 1.00e-58 | 5.64e-28 | 4.21798 |
OM | 4 | 12 | 2.15443469003188 | 8.00e-59 | 3.73e-32 | 4.18546 |
Algorithm 1 | 4 | 12 | 2.15443469003188 | 8.00e-59 | 2.72e-20 | 5.14348 |
Algorithm 2 | 5 | 15 | 2.15443469003188 | 1.00e-58 | 2.82e-34 | 3.92450 |
Algorithm 3 | 4 | 12 | 2.15443469003188 | 1.00e-58 | 1.04e-50 | 3.92059 |
Algorithm 4 | 4 | 12 | 2.15443469003188 | 1.00e-58 | 8.17e-17 | 4.33019 |
NM | 7 | 14 | 2.15443469003188 | 2.06e-54 | 5.64e-28 | 2.00003 |
TM | 4 | 16 | 2.15443469003188 | 1.00e-58 | 5.64e-28 | 4.21798 |
OM | 4 | 12 | 2.15443469003188 | 8.00e-59 | 3.73e-32 | 4.18546 |
Algorithm 1 | 4 | 12 | 2.15443469003188 | 8.00e-59 | 2.04e-32 | 4.46637 |
Algorithm 2 | 5 | 15 | 2.15443469003188 | 8.00e-59 | 8.44e-46 | 3.97295 |
Algorithm 3 | 4 | 12 | 2.15443469003188 | 8.00e-59 | 1.79e-36 | 4.04813 |
Algorithm 4 | 4 | 12 | 2.15443469003188 | 8.00e-59 | 3.63e-20 | 4.41414 |
Table 5: Numerical comparison of the methods for \(f_{5} (x)=xe^{x^{2} } -\sin ^{2} x+3\cos x+5\).

| Methods | IT | TNFE | \(x_n\) | \(\lvert f(x_n)\rvert\) | \(\delta\) | \(\rho\) |
|---|---|---|---|---|---|---|
NM | 9 | 18 | -1.207647827130918 | 2.27e-40 | 2.73e-21 | 2.000851 |
TM | 5 | 20 | -1.207647827130918 | 1.10e-58 | 2.73e-21 | 4.004843 |
OM | 5 | 15 | -1.207647827130918 | 8.00e-59 | 3.71e-43 | 4.136952 |
Algorithm 1 | 5 | 15 | -1.207647827130918 | 8.00e-59 | 3.06e-24 | 4.012770 |
Algorithm 2 | 6 | 18 | -1.207647827130918 | 1.10e-58 | 2.00e-22 | 4.028080 |
Algorithm 3 | 5 | 15 | -1.207647827130918 | 8.00e-59 | 1.08e-48 | 4.520711 |
Algorithm 4 | 5 | 15 | -1.207647827130918 | 8.00e-59 | 3.29e-48 | 3.975131 |
NM | 9 | 18 | -1.207647827130918 | 2.27e-40 | 2.73e-21 | 2.00085 |
TM | 5 | 20 | -1.207647827130918 | 1.10e-58 | 2.73e-21 | 4.00484 |
OM | 5 | 15 | -1.207647827130918 | 8.00e-59 | 3.71e-43 | 4.13695 |
Algorithm 1 | 5 | 15 | -1.207647827130918 | 8.00e-59 | 1.36e-30 | 4.03378 |
Algorithm 2 | 7 | 21 | -1.207647827130918 | 8.00e-59 | 4.13e-30 | 4.05656 |
Algorithm 3 | 5 | 15 | -1.207647827130918 | 8.00e-59 | 5.78e-45 | 4.24036 |
Algorithm 4 | 5 | 15 | -1.207647827130918 | 8.00e-59 | 4.57e-51 | 3.98835 |
Table 6: Numerical comparison of the methods for \(f_{6} (x)=e^{x^{2} +7x-30} -1\).

| Methods | IT | TNFE | \(x_n\) | \(\lvert f(x_n)\rvert\) | \(\delta\) | \(\rho\) |
|---|---|---|---|---|---|---|
NM | 13 | 26 | 3.00000000000000 | 1.52e-47 | 4.21e-25 | 2.00023 |
TM | 7 | 28 | 3.00000000000000 | 0.00e-01 | 4.21e-25 | 3.83827 |
OM | 6 | 18 | 3.00000000000000 | 0.00e-01 | 6.93e-17 | 3.97578 |
Algorithm 1 | 6 | 18 | 3.00000000000000 | 0.00e-01 | 5.25e-23 | 4.03116 |
Algorithm 2 | 6 | 18 | 3.00000000000000 | 0.00e-01 | 5.14e-19 | 4.04841 |
Algorithm 3 | 6 | 18 | 3.00000000000000 | 0.00e-01 | 7.08e-18 | 4.15183 |
Algorithm 4 | 6 | 18 | 3.00000000000000 | 0.00e-01 | 5.14e-19 | 4.12008 |
NM | 13 | 26 | 3.00000000000000 | 1.52e-47 | 4.21e-25 | 2.00023 |
TM | 7 | 28 | 3.00000000000000 | 0.00e-01 | 4.21e-25 | 3.83827 |
OM | 6 | 18 | 3.00000000000000 | 0.00e-01 | 6.93e-17 | 3.97578 |
Algorithm 1 | 6 | 18 | 3.00000000000000 | 0.00e-01 | 1.54e-19 | 3.98918 |
Algorithm 2 | 10 | 18 | 3.00000000000000 | 2.00e-58 | 1.13e-30 | 4.04841 |
Algorithm 3 | 6 | 18 | 3.00000000000000 | 2.00e-58 | 2.47e-17 | 4.05576 |
Algorithm 4 | 6 | 18 | 3.00000000000000 | 2.00e-58 | 4.96e-16 | 3.92802 |
The focus of this study is to present new fourth-order convergent methods for solving nonlinear equations. Notably, all of these methods are free from second derivatives. To evaluate their effectiveness, we compare the new methods with the standard Newton method, and our findings indicate that the proposed methods generally exhibit superior performance. Furthermore, to assess efficiency we use the efficiency index [4,5], defined as \(p^{\frac{1}{m} } ,\) where \(p\) is the order of the method and \(m\) is the number of functional evaluations per iteration required by the method. All of the methods obtained here have efficiency index \(4^{\frac{1}{3} } \approx 1.5874,\) which is better than that of Newton's method, \(2^{\frac{1}{2} } \approx 1.4142.\) Overall, our results suggest that these new methods can provide valuable contributions to the field of numerical mathematics and enable more accurate and efficient solutions of nonlinear equations.
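For completeness, the efficiency-index comparison quoted above amounts to the following small computation (three functional evaluations per iteration for the new methods versus two for Newton's method):

```python
# Efficiency index p**(1/m): order p, m functional evaluations per iteration.
print(4 ** (1 / 3))   # ~1.5874, proposed fourth-order methods (m = 3)
print(2 ** (1 / 2))   # ~1.4142, Newton's method (m = 2)
```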
Alharbi, A. R., Faisal, M. I., Shah, F. A., Waseem, M., Ullah, R., & Sherbaz, S. (2019). Higher order numerical approaches for nonlinear equations by decomposition technique. IEEE Access, 7, 44329-44337.
Chun, C. (2007). A method for obtaining iterative formulas of order three. Applied Mathematics Letters, 20, 1103-1109.
Chun, C. (2007). On the construction of iterative methods with at least cubic convergence. Applied Mathematics and Computation, 189, 1384-1392.
Chun, C. (2007). Some variants of Chebyshev-Halley method free from second derivative. Applied Mathematics and Computation, 191, 193-198.
Gautschi, W. (1997). Numerical Analysis: An Introduction. Birkhauser.
He, J. H. (2007). Variational iteration method-some recent results and new interpretations. Journal of Computational and Applied Mathematics, 207, 3-17.
He, J. H. (1999). Variational iteration method-a kind of nonlinear analytical technique: some examples. International Journal of Non-Linear Mechanics, 34(4), 699-708.
Kou, J. (2007). The improvement of modified Newton’s method. Applied Mathematics and Computation, 189, 602-609.
Nonlaopon, K., Shah, F. A., Ahmed, K., Farid, G. (2023). A generalized iterative scheme with computational results concerning the systems of linear equations. AIMS Mathematics, 8(3), 6504-6519.
Noor, K. I., & Noor, M. A. (2007). Predictor-corrector Halley method for nonlinear equations. Applied Mathematics and Computation, 188, 1587-1591.
Noor, K. I., & Noor, M. A. (2007). Iterative methods with fourth-order convergence for nonlinear equations. Applied Mathematics and Computation, 189, 221-227.
Noor, K. I., Noor, M. A., & Momani, S. (2007). Modified Householder Iterative methods for nonlinear equations. Applied Mathematics and Computation, 190, 1534-1539.
Noor, M. A. (2007). Some iterative schemes for nonlinear equations. Applied Mathematics and Computation, 187, 937-943.
Noor, M. A., & Mohyud-Din, S. T. (2007). Variational iteration techniques for solving higher-order boundary value problems. Applied Mathematics and Computation, 189, 1929-1942.
Noor, M. A., & Shah, F. A. (2009). Variational iteration technique for solving nonlinear equations. Journal of Applied Mathematics and Computing, 31, 247-254.
Noor, M. A., Shah, F. A., Noor, K. I., & Al-said, E. (2011). Variational iteration technique for finding multiple roots of nonlinear equations. Scientific Research and Essays, 6(6), 1344-1350.
Shah, F. A. (2014). Modified homotopy perturbation technique for the approximate solution of nonlinear equations. Chinese Journal of Mathematics, 1-10.
Shah, F. A., & Noor, M. A. (2015). Some numerical methods for solving nonlinear equations by using decomposition technique. Applied Mathematics and Computation, 251, 378-386.
Shah, F. A., Noor, M. A., & Waseem, M. (2020). Some Steffensen-type iterative schemes for the approximate solution of nonlinear equations. Miskolc Mathematical Notes, 22(2), 939-949.
Shah, F. A., & Haq, E. U. (2020). Some new multi-step derivative-free iterative methods for solving nonlinear equations. TWMS Journal of Applied and Engineering Mathematics, 10(4), 951-963.
Shah, F. A., Noor, M. A., Waseem, M., & UlHaq, E. (2021). Some Steffensen-type Iterative Schemes for the Approximate Solution of Nonlinear Equations. Miskolc Mathematical Notes, 22(2), 939-949.
Traub, J. F. (1964). Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, NJ.
Weerakoon, S., & Fernando, T. G. I. (2000). A variant of Newton’s method with accelerated third-order convergence. Applied Mathematics Letters, 13, 87-93.