Boundary value problems in the kinetic theory of gases, elasticity and other applied areas are often reduced to solving single-variable nonlinear equations. Hence, the problem of approximating a solution of a nonlinear equation is important. Numerical methods for finding roots of such equations are called iterative methods. Two types of iterative methods appear in the literature: those involving higher derivatives and those free from higher derivatives. Methods that do not require higher derivatives have a lower order of convergence, while methods of high convergence order require higher derivatives. The aim of the present report is to develop an iterative method having a high order of convergence without involving higher derivatives. We propose three new methods to solve nonlinear equations and solve test examples to check the validity and efficiency of our iterative methods.
Algorithm 1. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following three-step iterative scheme: \begin{eqnarray*}y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\quad n=0,1,2,\ldots, \\ w_{n}&=&y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})}, \\ x_{n+1}&=&w_{n}-\frac{f(w_{n})}{f^{\prime }(w_{n})}-\frac{f^{2}(w_{n})f^{\prime \prime }(w_{n})}{2f^{\prime 3}(w_{n})}-\frac{f^{3}(w_{n})f^{\prime \prime \prime }(w_{n})}{6f^{\prime 4}(w_{n})}. \end{eqnarray*}
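The following is a minimal Python sketch of Algorithm 1, assuming user-supplied callables `f`, `df`, `d2f` and `d3f` for \(f\), \(f^{\prime}\), \(f^{\prime\prime}\) and \(f^{\prime\prime\prime}\); the stopping test, tolerance and iteration cap are illustrative choices and not part of the scheme itself.

```python
def algorithm1(f, df, d2f, d3f, x0, tol=1e-30, max_iter=100):
    """Sketch of the three-step scheme of Algorithm 1 (assumes nonzero f' at the iterates)."""
    x = x0
    for n in range(1, max_iter + 1):
        y = x - f(x) / df(x)            # first Newton sub-step
        w = y - f(y) / df(y)            # second Newton sub-step
        fw, dfw = f(w), df(w)
        # third sub-step with second- and third-derivative correction terms
        x = (w - fw / dfw
               - fw**2 * d2f(w) / (2 * dfw**3)
               - fw**3 * d3f(w) / (6 * dfw**4))
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter
```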
By following the finite difference scheme, we develop the following algorithms:

Algorithm 2. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme: \begin{eqnarray*}y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\quad n=0,1,2,\ldots, \\ w_{n}&=&y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})}, \\ x_{n+1}&=&w_{n}-\frac{f(w_{n})}{f^{\prime }(w_{n})}-\frac{f^{2}(w_{n})f^{\prime \prime }(w_{n})}{2f^{\prime 3}(w_{n})}+\frac{f^{3}(w_{n})f^{\prime }(y_{n})\left[ f^{\prime \prime }(w_{n})-f^{\prime \prime }(y_{n})\right] }{6f(y_{n})f^{\prime 4}(w_{n})}. \end{eqnarray*}
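A corresponding Python sketch of Algorithm 2, which requires only \(f\), \(f^{\prime}\) and \(f^{\prime\prime}\) (supplied as `f`, `df`, `d2f`); the tolerance and iteration cap are again illustrative.

```python
def algorithm2(f, df, d2f, x0, tol=1e-30, max_iter=100):
    """Sketch of the three-step scheme of Algorithm 2 (assumes nonzero f, f' at the iterates)."""
    x = x0
    for n in range(1, max_iter + 1):
        y = x - f(x) / df(x)            # first Newton sub-step
        fy, dfy = f(y), df(y)
        w = y - fy / dfy                # second Newton sub-step
        fw, dfw = f(w), df(w)
        # the third-derivative term of Algorithm 1 is replaced by the bracketed difference term
        x = (w - fw / dfw
               - fw**2 * d2f(w) / (2 * dfw**3)
               + fw**3 * dfy * (d2f(w) - d2f(y)) / (6 * fy * dfw**4))
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter
```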
Algorithm 3. For a given \(x_{0}\), compute the approximate solution \(x_{n+1}\) by the following iterative scheme: \begin{eqnarray*}y_{n} &=&x_{n}-\frac{f(x_{n})}{f^{\prime }(x_{n})},\quad n=0,1,2,\ldots, \\ w_{n}&=&y_{n}-\frac{f(y_{n})}{f^{\prime }(y_{n})}, \\ x_{n+1}&=&w_{n}-\frac{f(w_{n})}{f^{\prime }(w_{n})}-\frac{f^{\prime }(y_{n})f^{2}(w_{n})}{2f^{\prime 3}(w_{n})}\left[ \frac{f^{\prime }(y_{n})-f^{\prime }(w_{n})}{f(y_{n})}\left( 1-\frac{f^{\prime }(y_{n})f(w_{n})}{3f(y_{n})f^{\prime }(w_{n})}\right) +\frac{f^{\prime }(x_{n})f(w_{n})\left( f^{\prime }(x_{n})-f^{\prime }(y_{n})\right) }{3f(x_{n})f(y_{n})f^{\prime }(w_{n})}\right] . \end{eqnarray*}
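A Python sketch of Algorithm 3, which uses only \(f\) and \(f^{\prime}\) (supplied as `f` and `df`); the squared factor is read as \(f^{2}(w_{n})\) as in the scheme above, and the tolerance and iteration cap are illustrative.

```python
def algorithm3(f, df, x0, tol=1e-30, max_iter=100):
    """Sketch of the three-step scheme of Algorithm 3 (assumes nonzero f, f' at the iterates)."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                # first Newton sub-step
        fy, dfy = f(y), df(y)
        w = y - fy / dfy                # second Newton sub-step
        fw, dfw = f(w), df(w)
        # bracketed factor built only from f and f' values at x, y, w
        bracket = ((dfy - dfw) / fy * (1 - dfy * fw / (3 * fy * dfw))
                   + dfx * fw * (dfx - dfy) / (3 * fx * fy * dfw))
        x = w - fw / dfw - dfy * fw**2 / (2 * dfw**3) * bracket
        if abs(f(x)) < tol:
            return x, n
    return x, max_iter
```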
Theorem 3.1. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\). If \(f(x)\) is sufficiently smooth in a neighborhood of \(\alpha \), then the convergence orders of Algorithm 1, Algorithm 2 and Algorithm 3 are at least twelve, twelve and ten, respectively.
Proof. Suppose that \(\alpha \) is a root of the equation \(f(x)=0\) and let \(e_{n}=x_{n}-\alpha \) denote the error at the \(n\)th iteration. Using the Taylor series expansion of \(f\) about \(\alpha \), we have \begin{eqnarray*} f(x_{n})&=&f^{\prime }(\alpha )e_{n}+\frac{1}{2!}f^{\prime \prime }(\alpha )e_{n}^{2}+\frac{1}{3!}f^{\prime \prime \prime }(\alpha )e_{n}^{3}+\frac{1}{4!}f^{(iv)}(\alpha )e_{n}^{4}+\frac{1}{5!}f^{(v)}(\alpha )e_{n}^{5}+\frac{1}{6!}f^{(vi)}(\alpha )e_{n}^{6}+\ldots \end{eqnarray*}
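Forming the analogous expansion of \(f^{\prime }(x_{n})\) and dividing the two series, the first sub-step satisfies the standard Newton error relation, stated here only as a sketch of the usual intermediate step in such proofs: \begin{eqnarray*} y_{n}-\alpha &=&c_{2}e_{n}^{2}-2\left( c_{2}^{2}-c_{3}\right) e_{n}^{3}+O(e_{n}^{4}),\qquad c_{k}=\frac{f^{(k)}(\alpha )}{k!\,f^{\prime }(\alpha )}, \end{eqnarray*} and the same expansions are then carried through the remaining sub-steps of each algorithm.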
Example 1. In this example we solve \(f(x)=x^{3}+4x^{2}-10\) by taking \(x_{0}=-0.8\). It can be observed from Table 1 that NM takes 35 iterations, HM takes 36, AM takes 13, and our Algorithms 1, 2 and 3 take 12, 5 and 5 iterations, respectively, to reach the root; a small driver for this example is sketched after the table.
Table 1. Numerical results for Example 1.

Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(35\) | \(70\) | \(1.105260\times 10^{-24}\) | |
HM | \(36\) | \(108\) | \(2.995246\times 10^{-17}\) | \(1.365230013414096845760806828980\) |
AM | \(13\) | \(52\) | \(6.423767\times 10^{-20}\) | |
Algorithm 1 | \(12\) | \(72\) | \(2.738493\times 10^{-48}\) | |
Algorithm 2 | \(5\) | \(25\) | \(2.812883\times 10^{-25}\) | |
Algorithm 3 | \(5\) | \(20\) | \(3.108248\times 10^{-83}\) | |
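As a usage illustration, the `algorithm1` sketch given earlier can be driven with hand-coded derivatives of \(f(x)=x^{3}+4x^{2}-10\) and the starting value \(x_{0}=-0.8\) from this example. Double-precision arithmetic is used for simplicity, so the residual stalls near machine precision rather than at the \(10^{-48}\) level reported in the table; reaching those figures would require multiple-precision arithmetic (for example Python's mpmath), which is omitted here.

```python
f   = lambda x: x**3 + 4*x**2 - 10    # test function of Example 1
df  = lambda x: 3*x**2 + 8*x          # f'(x)
d2f = lambda x: 6*x + 8               # f''(x)
d3f = lambda x: 6.0                   # f'''(x)

root, iters = algorithm1(f, df, d2f, d3f, x0=-0.8, tol=1e-12)
print(root, iters, abs(f(root)))
```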
Example 2. In this example we solve \(f(x)=x^{3}+x^{2}-2\) by taking \(x_{0}=-0.1\). It can be observed from Table 2 that NM takes 13 iterations, HM takes 17, AM takes 19, and our Algorithms 1, 2 and 3 take 5, 4 and 5 iterations, respectively, to reach the root.
Table 2. Numerical results for Example 2.

Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(13\) | \(26\) | \(2.203086\times 10^{-19}\) | |
HM | \(17\) | \(51\) | \(4.338982\times 10^{-22}\) | \(1.000000000000000000000000000000\) |
AM | \(19\) | \(76\) | \(2.239715\times 10^{-27}\) | |
Algorithm 1 | \(5\) | \(30\) | \(2.338056\times 10^{-31}\) | |
Algorithm 2 | \(4\) | \(20\) | \(5.192250\times 10^{-45}\) | |
Algorithm 3 | \(5\) | \(20\) | \(6.607058\times 10^{-83}\) | |
Example 3. In this example we solve \(f(x)=e^{x^{2}+7x-30}-1\) by taking \(x_{0}=4.5\). It can be observed from Table 3 that NM takes 27 iterations, HM takes 14, AM takes 16, and our Algorithms 1, 2 and 3 take 8, 7 and 7 iterations, respectively, to reach the root.
Table 3. Numerical results for Example 3.

Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(27\) | \(54\) | \(6.454129\times 10^{-23}\) | |
HM | \(14\) | \(42\) | \(1.217550\times 10^{-25}\) | \(3.000000000000000000000000000000\) |
AM | \(16\) | \(64\) | \(1.136732\times 10^{-17}\) | |
Algorithm 1 | \(8\) | \(48\) | \(1.261140\times 10^{-22}\) | |
Algorithm 2 | \(7\) | \(35\) | \(6.546702\times 10^{-15}\) | |
Algorithm 3 | \(7\) | \(28\) | \(9.047215\times 10^{-71}\) | |
Example 4. In this example we solve \(f(x)=x^{2}-e^{x}-3x+2\) by taking \(x_{0}=3.5\). It can be observed from Table 4 that NM takes 6 iterations, HM takes 5, AM takes 5, and our Algorithms 1, 2 and 3 take 2, 3 and 3 iterations, respectively, to reach the root.
Table 4. Numerical results for Example 4.

Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(6\) | \(12\) | \(4.925534\times 10^{-15}\) | |
HM | \(5\) | \(15\) | \(1.463064\times 10^{-40}\) | \(0.257530285439860760455367304937\) |
AM | \(5\) | \(20\) | \(1.120893\times 10^{-28}\) | |
Algorithm 1 | \(2\) | \(12\) | \(8.978612\times 10^{-19}\) | |
Algorithm 2 | \(3\) | \(15\) | \(0\) | |
Algorithm 3 | \(3\) | \(12\) | \(4.980111\times 10^{-66}\) | |
Example 5. In this example we solve \(f(x)=xe^{x^{2}}-\sin ^{2}x+3\cos x+5\) by taking \(x_{0}=1.1\). It can be observed from Table 5 that NM takes 45 iterations, HM takes 44, AM takes 50, and our Algorithms 1, 2 and 3 take 14, 12 and 12 iterations, respectively, to reach the root.
Table 5. Numerical results for Example 5.

Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(45\) | \(90\) | \(1.268546\times 10^{-15}\) | |
HM | \(44\) | \(132\) | \(1.169824\times 10^{-26}\) | \(-1.207647827130918927009416758360\) |
AM | \(50\) | \(200\) | \(2.868208\times 10^{-29}\) | |
Algorithm 1 | \(14\) | \(84\) | \(1.935782\times 10^{-64}\) | |
Algorithm 2 | \(12\) | \(60\) | \(4.515078\times 10^{-97}\) | |
Algorithm 3 | \(12\) | \(48\) | \(4.515078\times 10^{-97}\) | |
Example 6. In this example we solve \(f(x)=x^{2}+\sin \left( \frac{x}{5}\right) -\frac{1}{4}\) by taking \(x_{0}=2.2\). It can be observed from Table 6 that NM takes 7 iterations, HM takes 5, AM takes 7, and our Algorithms 1, 2 and 3 take 2, 2 and 2 iterations, respectively, to reach the root.
Table 6. Numerical results for Example 6.

Method | \(N\) | \(N_{f}\) | \(|f(x_{n+1})|\) | \(x_{n+1}\) |
---|---|---|---|---|
NM | \(7\) | \(14\) | \(7.777907\times 10^{-23}\) | |
HM | \(5\) | \(15\) | \(1.210132\times 10^{-42}\) | \(0.409992017989137131621258376499\) |
AM | \(7\) | \(28\) | \(2.132547\times 10^{-32}\) | |
Algorithm 1 | \(2\) | \(12\) | \(5.800844\times 10^{-23}\) | |
Algorithm 2 | \(2\) | \(10\) | \(5.897018\times 10^{-23}\) | |
Algorithm 3 | \(2\) | \(8\) | \(4.106937\times 10^{-22}\) | |