Open Journal of Mathematical Sciences

On some iterative methods with frozen derivatives for solving equations

Samundra Regmi, Christopher Argyros, Ioannis K. Argyros\(^1\), Santhosh George
Learning Commons, University of North Texas at Dallas, TX 75038, USA.; (S.R)
Department of Computing Science, University of Oklahoma, Norman, OK 73071, USA.;(C.A)
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA.; (I.K.A)
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India.; (S.G)
\(^{1}\)Corresponding Author: iargyros@cameron.edu

Abstract

We determine a radius of convergence for an efficient iterative method with frozen derivatives for solving equations defined on Banach spaces. Our convergence analysis uses \(\omega\)-continuity conditions only on the first derivative. Earlier studies have used hypotheses involving derivatives up to order seven, limiting the applicability of the method. Numerical examples complete the article.

Keywords:

Iterative method with frozen derivative; Banach space; Convergence order.

1. Introduction

We consider the problem of solving the equation

\begin{equation} \label{1.1} F(x)=0, \end{equation}
(1)
where \(F:D\subset X\longrightarrow Y\) is continuously Fréchet differentiable, \(X, Y\) are Banach spaces, and \(D\) is a nonempty convex set.

Iterative methods are used to generate a sequence converging to a solution \(x_*\) of Equation (1) under certain conditions [1,2,3,4,5,6,7,8,9,10,11,12]. Recently, a surge has been noticed in the development of efficient iterative methods with frozen derivatives. The convergence order is obtained using Taylor expansions and conditions on high-order derivatives that do not appear in the method. These conditions limit the applicability of the methods. For example, let \( X=Y=\mathbb{R}, \,D= \left[-\frac{1}{2}, \frac{3}{2}\right].\) Define \(f\) on \(D\) by

\[f(t)=\left\{\begin{array}{cc} t^3\log t^2+t^5-t^4& \text{if}\,\,t\neq0,\\ 0& \text{if}\,\, t=0. \end{array}\right. \] Then, we have \(t_*=1,\) and \begin{align*}f'(t)&= 3t^2\log t^2 + 5t^4- 4t^3+ 2t^2 ,\\ f''(t)&= 6t\log t^2 + 20t^3 -12t^2 + 10t,\\ f'''(t) &= 6\log t^2 + 60t^2-24t + 22.\end{align*} Obviously, \(f'''(t)\) is not bounded on \(D.\) So, the convergence of these methods is not guaranteed by the analysis in these papers. Moreover, no comparable error estimates on the distances involved, nor uniqueness-of-solution results, are given in [6,8,10,11]. That is why we develop a general technique that applies to iterative methods in general and addresses these problems using only the first derivative, which is the only derivative appearing in these methods.
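These derivatives, and the fact that \(f'''\) blows up as \(t\rightarrow 0,\) can be checked symbolically; the following is a minimal sketch (ours, using SymPy, which is not part of the paper):

```python
import sympy as sp

t = sp.symbols('t')
f = t**3 * sp.log(t**2) + t**5 - t**4

f3 = sp.diff(f, t, 3)            # third derivative; equivalent to 6*log(t**2) + 60*t**2 - 24*t + 22
print(sp.simplify(f3))
print(sp.limit(f3, t, 0, '+'))   # -oo: f''' is unbounded on D near t = 0
```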

We demonstrate this technique on the method of convergence order \(3(i-1)\), defined for all \(n=0,1,2,\ldots\) by

\begin{equation} \label{1.2} \begin{cases} y_n^{\left(1\right)}=x_n-F'\left(x_n\right)^{-1}F\left(x_n\right)\\ y_n^{\left(2\right)}=x_n-2\left(F'\left(x_n\right)+F'\left(y_n^{\left(1\right)}\right)\right)^{-1}F\left(x_n\right)\\ \vdots\\ y_n^{\left(i\right)}= x_{n+1}=y_n^{\left(i-1\right)}-\alpha F\left(y_n^{\left(i-1\right)}\right) \end{cases} \end{equation}
(2)
\(i=3,4,\ldots, k,\) where \(k\) is a fixed natural number and \(\alpha=\left(3F'\left(y_n^{\left(1\right)}\right)-F'\left(x_n\right)\right)^{-1}\left(F'\left(x_n\right)+F'\left(y_n^{\left(1\right)}\right)\right)F'\left(x_n\right)^{-1}.\)
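To make the structure of (2) concrete, the following is a minimal numerical sketch (ours, not part of the original presentation) for finite-dimensional systems \(F:\mathbb{R}^m\longrightarrow\mathbb{R}^m;\) NumPy is assumed, and the names F, Fprime, k and frozen_derivative_step are illustrative. Note that only the two Jacobians \(F'(x_n)\) and \(F'(y_n^{(1)})\) are evaluated per outer step; they stay frozen across the inner sub-steps.

```python
import numpy as np

def frozen_derivative_step(F, Fprime, x, k=4):
    """One outer step x_n -> x_{n+1} of method (2) with k >= 3 sub-steps."""
    Fx, Jx = F(x), Fprime(x)
    y1 = x - np.linalg.solve(Jx, Fx)              # y_n^(1): Newton sub-step
    Jy1 = Fprime(y1)
    y = x - 2.0 * np.linalg.solve(Jx + Jy1, Fx)   # y_n^(2)
    # alpha = (3F'(y1) - F'(x))^{-1} (F'(x) + F'(y1)) F'(x)^{-1}, formed once and reused
    A = np.linalg.solve(3.0 * Jy1 - Jx, Jx + Jy1)
    for _ in range(3, k + 1):                     # y_n^(3), ..., y_n^(k) = x_{n+1}
        y = y - A @ np.linalg.solve(Jx, F(y))
    return y
```

An outer loop would then repeat frozen_derivative_step until \(\|x_{n+1}-x_n\|\) falls below a prescribed tolerance.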

The efficiency, convergence order and comparisons with other methods using similar information were given in [6,8,10,11] for \(X=Y=\mathbb{R}^k.\) The convergence was shown using the seventh derivative. We include computable error bounds on \(\|x_n-x_*\|\) and uniqueness results that are not given in [6,8,10,11]. Our technique is general enough to extend the applicability of other methods as well [1,2,3,4,5,6,7,8,9,10,11,12]. The method was developed in [10], where detailed comparisons with other methods were also given.

The motivation of this paper is not to repeat that work, but to introduce a technique that expands the applicability of this and other methods that rely on high-order derivatives not appearing in the methods themselves. Only the first derivative is used in our convergence hypotheses; notice that this is the only derivative appearing in the method. We also provide a computable radius of convergence, which is not given in [10]. This way we locate a set of initial points that guarantee convergence of the method. The numerical examples are chosen to show how the theoretically predicted radii are computed. In particular, the last example shows that earlier results cannot be used to show convergence of the method. Our results significantly extend the applicability of these methods and provide a new way of looking at iterative methods. The article contains the local convergence analysis in Section 2 and the numerical examples in Section 3.

2. Convergence analysis

Let \(\omega_0:T\longrightarrow T\) be a continuous and nondecreasing function, where \(T=[0, \infty),\) and suppose that the equation
\begin{equation} \label{2.1} \omega_0(t)-1=0 \end{equation}
(3)
has a least positive solution \(\rho_0.\) Set \(T_0=[0, \rho_0)\) and let \(\omega,\ \ \omega_1:T_0\longrightarrow T\) be continuous and nondecreasing functions.

Define functions \(g_1\) and \(\bar{g}_1\) on interval \(T_0\) by

\[g_1\left(t\right)=\frac{\int_0^1\omega\left(\left(1-\theta\right)t\right)d\theta}{1-\omega_0\left(t\right)}\] and \[\bar{g}_1(t)=g_1(t)-1.\] Suppose that the equation
\begin{equation} \label{2.2} \bar{g}_1(t)=0 \end{equation}
(4)
has a least solution \(r_1\in (0, \rho_0)\) and the equation
\begin{equation} \label{2.3} p(t)-1=0 \end{equation}
(5)
has a least solution \(\rho_p\in (0,\rho_0),\) where \(p(t)=\frac{1}{2}(\omega_0(t)+\omega_0(g_1(t)t)).\) Set \(\rho_1=\min\{\rho_0, \rho_p\}\) and \(T_1=[0,\rho_1).\)

Moreover, define functions \(g_2\) and \(\bar{g}_2\) on \(T_0\) by

\[g_2\left(t\right)=g_1\left(t\right)+\frac{\left(\omega_0\left(g_1\left(t\right)t\right)+\omega_0\left(t\right)\right)\int_0^1\omega_1\left(\theta t\right)d\theta}{2\left(1-\omega_0\left(t\right)\right)\left(1-p\left(t\right)\right)}\] and \[\bar{g}_2(t)=g_2(t)-1.\] Suppose that the equation
\begin{equation} \label{2.4} \bar{g}_2(t)=0 \end{equation}
(6)
has a least solution \(r_2\in (0, \rho_1)\) and the equation
\begin{equation} \label{2.5} q(t)-1=0 \end{equation}
(7)
has a least solution \(\rho_q\in(0,\rho_1),\) where \(q(t)=\frac{1}{2}\left(\omega_0(t)+3\omega_0(g_1(t)t)\right).\) Set \(\rho_2=\min\{r_2,\rho_q\}.\)

Define functions \(h\) and \(\psi\) on \(T_2=[0,\rho_2)\) by

\[h(t)=\left(1+\frac{\omega_1(g_2(t)t)}{2(1-q(t))}\right)(\omega_0(t)+\omega_0(g_2(t)t))\] and \[\psi(t)=g_1(g_2(t)t)+\frac{h(t)}{(1-\omega_0(g_2(t)t))(1-\omega_0(t))}.\] Suppose that the equation
\begin{equation} \label{2.6} \psi(t)-1=0 \end{equation}
(8)
has a least solution \(r_3\in (0,\rho_2).\)

We shall show that \(r\) is a radius of convergence, where

\begin{equation} \label{2.7} r=\min\{r_1,r_2, r_3\}. \end{equation}
(9)
It follows that for each \(t\in [0,r)\)
\begin{equation} \label{2.8} 0\leq \omega_0(t) < 1, \end{equation}
(10)
\begin{equation} \label{2.9} 0\leq p(t) < 1, \end{equation}
(11)
\begin{equation} \label{2.10} 0\leq q(t) < 1, \end{equation}
(12)
\begin{equation} \label{2.11} 0\leq g_1(t) < 1, \end{equation}
(13)
\begin{equation} \label{2.12} 0\leq g_2(t) < 1, \end{equation}
(14)
and
\begin{equation} \label{2.13} 0\leq \psi(t) < 1. \end{equation}
(15)
Let \(U(x,\beta), \bar{U}(x,\beta)\) denote the open and closed balls in \(X,\) respectively, with center \(x\in X\) and radius \(\beta > 0.\) The following hypotheses (A) shall be used:
  • (A1) \(F:D\subset X\longrightarrow Y\) is Fréchet continuously differentiable; there exists \(x_*\in D\) such that \(F(x_*)=0\) and \(F'(x_*)^{-1}\in L(Y,X).\)
  • (A2) There exists a continuous and nondecreasing function \(\omega_0:T\longrightarrow T\) such that for each \(x\in D\), \[\left\|F'\left(x_*\right)^{-1}\left(F'\left(x\right)-F'\left(x_*\right)\right)\right\|\leq \omega_0\left(\left\|x-x_*\right\|\right).\] Set \(D_0=D\cap U(x_*,\rho_0).\)
  • (A3) There exist continuous and nondecreasing functions \(\omega:T_0\longrightarrow T, \omega_1:T_0\longrightarrow T\) such that for each \(x,y\in D_0\) \[\left\|F'(x_*)^{-1}(F'(y)-F'(x))\right\|\leq \omega(\|y-x\|)\] and \[\left\|F'(x_*)^{-1}F'(x)\right\|\leq \omega_1\left(\left\|x-x_*\right\|\right).\]
  • (A4) \(\bar{U}(x_*,r)\subset D.\)
  • (A5) There exists \(r_*\geq r\) such that \[\int_0^1\omega_0(\theta r_*)d\theta < 1.\] Set \(D_1=D\cap \bar{U}(x_*, r_*).\)
In the next theorem, the local convergence of method (2) is given using the hypotheses (A) and the preceding notation.

Theorem 1. Suppose that the hypotheses (A) hold. Then, for any starting point \(x_0\in U(x_*,r)-\{x_*\},\) the sequence \(\{x_n\}\) generated by method (2) is well defined in \(U(x_*,r),\) remains in \(U(x_*,r)\) and converges to \(x_*.\) Moreover, the following items hold for all \(i=3,4,\ldots, k\) and \(n=0,1,2,\ldots:\)

\begin{equation} \label{2.14} \left\|y_n^{(1)}-x_*\right\|\leq g_1(\|x_n-x_*\|)\|x_n-x_*\|\leq \|x_n-x_*\| < r, \end{equation}
(16)
\begin{equation} \label{2.15} \left\|y_n^{(2)}-x_*\right\|\leq g_2(\|x_n-x_*\|)\|x_n-x_*\|\leq \|x_n-x_*\|, \end{equation}
(17)
\begin{eqnarray} \nonumber \left\|y_n^{(i)}-x_*\right\|&\leq &\psi\left(\left\|x_n-x_*\right\|\right)\left(\left\|y_n^{(i-1)}-x_*\right\|\right)\\ \nonumber &\leq&\psi^{i-2}\left(\|x_0-x_*\|\right)g_2^{i-2}(\|x_0-x_*\|)\|x_0-x_*\|\\\label{2.16} &\leq &\|x_0-x_*\| \end{eqnarray}
(18)
and
\begin{equation} \label{2.17} \|x_{n+1}-x_*\|\leq \left\|y_n^{(k)}-x_*\right\|\leq c\|x_n-x_*\|, \end{equation}
(19)
where \(c=(\psi(\|x_0-x_*\|)g_2(\|x_0-x_*\|))^k\in [0,1).\) Furthermore, \(x_*\) is the only solution of equation \(F(x)=0\) in the set \(D_1\) given in (A5).

Proof. We shall use mathematical induction to show that the iterates \(\{x_n\}\) exist, remain in \(U(x_*,r)\) and satisfy (16)-(19). Let \(u\in U(x_*,r)-\{x_*\}.\) Using (A1), (A2) and (9), we get in turn

\begin{equation} \label{2.18} \left\|F'\left(x_*\right)^{-1}\left(F'(u)-F'(x_*)\right)\right\|\leq \omega_0\left(\left\|u-x_*\right\|\right)\leq \omega_0(r) < 1. \end{equation}
(20)
So the Banach Lemma on invertible operators [2,8] together with (20) implies that \(F'(u)^{-1}\in L(Y,X),\) and
\begin{equation} \label{2.19} \left\|F'(u)^{-1}F'(x_*)\right\|\leq \frac{1}{1-\omega_0(\|u-x_*\|)}. \end{equation}
(21)
In particular, for \(u=x_0,\) the iterate \(y_0^{(1)}\) exists. Then, by (9), (13), (A3) and (21) we have in turn that
\begin{eqnarray} \nonumber \left\|y_0^{(1)}-x_*\right\|&=&\left\|x_0-x_*-F'(x_0)^{-1}F(x_0)\right\|\\\nonumber &=&\left\|F'(x_0)^{-1}\int_0^1(F'(x_*+\theta(x_0-x_*))-F'(x_0))d\theta (x_0-x_*)\right\|\\\nonumber &\leq&\frac{\int_0^1\omega((1-\theta)\|x_0-x_*\|)d\theta\|x_0-x_*\|}{1-\omega_0(\|x_0-x_*\|)}\\\label{2.20} &\leq&g_1(\|x_0-x_*\|)\|x_0-x_*\|\leq \|x_0-x_*\| < r, \end{eqnarray}
(22)
so (16) holds for \(n=0\) and \(y_0^{(1)}\in U(x_*,r).\) We also have
\begin{align} \nonumber \left\|(2F'(x_*))^{-1}\right.&\left.\left(F'(x_0)+F'(y_0^{(1)})-2F'(x_*)\right)\right\|\notag\\ \notag&\leq\frac{1}{2}\left(\left\|F'(x_*)^{-1}(F'(x_0)-F'(x_*))\right\|+\left\|F'(x_*)^{-1}\left(F'(y_0^{(1)})-F'(x_*)\right)\right\|\right)\\ \nonumber &\leq\frac{1}{2}\left[\omega_0\left(\|x_0-x_*\|\right)+\omega_0\left(\left\|y_0^{(1)}-x_*\right\|\right)\right]\\ &\leq p\left(\|x_0-x_*\|\right)\leq p(r) < 1, \end{align}
(23)
so \(\left(F'(x_0)+F'\left(y_0^{(1)}\right)\right)^{-1}\in L(Y,X),\)
\begin{equation} \label{2.21} \left\|\left(F'(x_0)+F'\left(y_0^{(1)}\right)\right)^{-1}F'(x_*)\right\|\leq \frac{1}{2\left(1-p\left(\|x_0-x_*\|\right)\right)}, \end{equation}
(24)
and \(y_0^{(2)}\) exists. Then, we can write in turn by method (2)
\begin{eqnarray} \nonumber y_0^{\left(2\right)}-x_*&=&x_0-x_*-F'\left(x_0\right)^{-1}F\left(x_0\right)+F'\left(x_0\right)^{-1}F\left(x_0\right)-2\left(F'\left(x_0\right)+F'\left(y_0^{\left(1\right)}\right)\right)^{-1}F\left(x_0\right)\\\nonumber &=&y_0^{\left(1\right)}-x_*+\left(F'\left(x_0\right)^{-1}-2\left(F'\left(x_0\right)+F'\left(y_0^{\left(1\right)}\right)\right)^{-1}\right)F\left(x_0\right)\\ &=&y_0^{\left(1\right)}-x_*+F'\left(x_0\right)^{-1}\left(F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)\left(F'\left(y_0^{\left(1\right)}\right)+F'\left(x_0\right)\right)^{-1}F\left(x_0\right). \label{2.22} \end{eqnarray}
(25)
Then, by (9), (14), (21)-(25), we obtain in turn that
\begin{eqnarray} \nonumber \left\|y_0^{\left(2\right)}-x_*\right\|&\leq&\left[g_1\left(\|x_0-x_*\|\right)+\frac{\left(\omega_0\left(\|x_0-x_*\|\right)+\omega_0\left(\left\|y_0^{\left(1\right)}-x_*\right\|\right)\right)\int_0^1\omega_1\left(\theta\|x_0-x_*\|\right)d\theta}{2\left(1-\omega_0\left(\|x_0-x_*\|\right)\right)\left(1-p\left(\|x_0-x_*\|\right)\right)}\right]\|x_0-x_*\|\\\label{2.23} &\leq&g_2\left(\|x_0-x_*\|\right)\|x_0-x_*\|\leq \|x_0-x_*\|, \end{eqnarray}
(26)
so (17) holds for \(n=0\) and \(\, y_0^{(2)}\in U(x_*,r).\) By (9), (11), (21) and (22), we have in turn that
\begin{align} \nonumber \left\|\left(2F'\left(x_*\right)\right)^{-1}\right.&\left.\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)-2F'\left(x_*\right)\right)\right\|\\ \notag&\leq\frac{1}{2}\left(3\left\|F'\left(x_*\right)^{-1}\left(F'\left(y_0^{\left(1\right)}\right)-F'\left(x_*\right)\right)\right\|+\left\|F'\left(x_*\right)^{-1}\left(F'\left(x_0\right)-F'\left(x_*\right)\right)\right\|\right)\\\nonumber &\leq\frac{1}{2}\left(3\omega_0\left(\left\|y_0^{\left(1\right)}-x_*\right\|\right)+\omega_0\left(\|x_0-x_*\|\right)\right)\\\label{2.24} &\leq q\left(\|x_0-x_*\|\right)\leq q\left(r\right) < 1, \end{align}
(27)
so \(\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}\in L\left(Y,X\right),\)
\begin{equation} \label{2.25} \left\|\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}F'\left(x_*\right)\right\|\leq \frac{1}{2\left(1-q\left(\left\|x_0-x_*\right\|\right)\right)} \end{equation}
(28)
and \(y_0^{(3)}\) exists. Then, we can write in turn that
\begin{equation} \label{2.26} y_0^{\left(3\right)}-x_*=y_0^{\left(2\right)}-x_*-F'\left(y_0^{\left(2\right)}\right)^{-1}F\left(y_0^{\left(2\right)}\right) +\left(F'\left(y_0^{\left(2\right)}\right)^{-1}-\alpha\right)F\left(y_0^{\left(2\right)}\right). \end{equation}
(29)
But \[F'\left(y_0^{\left(2\right)}\right)^{-1}-\alpha=F'\left(y_0^{\left(2\right)}\right)^{-1}\gamma F'\left(x_0\right)^{-1},\] where \[\gamma=F'\left(x_0\right)-F'\left(y_0^{\left(2\right)}\right)\delta,\] with \begin{align*} \delta:=&\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}\left(F'\left(x_0\right)+F'\left(y_0^{\left(1\right)}\right)\right)\\ =&\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)+F'\left(x_0\right)-3F'\left(y_0^{\left(1\right)}\right)+F'\left(x_0\right)+F'\left(y_0^{\left(1\right)}\right)\right)\\ =&\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}\left[\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)+2F'\left(x_0\right)-2F'\left(y_0^{\left(1\right)}\right)\right]\\ =&I+2\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}\left(F'\left(x_0\right)-F'\left(y_0^{\left(1\right)}\right)\right). \end{align*} Hence, we have \(\gamma=F'\left(x_0\right)-F'\left(y_0^{\left(2\right)}\right)-2F'\left(y_0^{\left(2\right)}\right)\left(3F'\left(y_0^{\left(1\right)}\right)-F'\left(x_0\right)\right)^{-1}\left(F'\left(x_0\right)-F'\left(y_0^{\left(1\right)}\right)\right),\) so
\begin{align} \nonumber \left\|F'\left(x_*\right)^{-1}\gamma\right\|\leq&\omega_0\left(\left\|x_0-x_*\right\|\right)+\omega_0\left(\left\|y_0^{\left(2\right)}-x_*\right\|\right) +\frac{\omega_1\left(\left\|y_0^{\left(2\right)}-x_*\right\|\right)\left(\omega_0\left(\left\|x_0-x_*\right\|\right)+\omega_0\left(\left\|y_0^{\left(2\right)}-x_*\right\|\right)\right)}{2\left(1-q\left(\left\|x_0-x_*\right\|\right)\right)}\\\label{2.27} \leq&h\left(\|x_0-x_*\|\right). \end{align}
(30)
In view of (9), (15), (21), (22), (29) and (30), we get in turn that
\begin{align} \nonumber \left\|y_0^{\left(3\right)}-x_*\right\|\leq&\left\|y_0^{\left(2\right)}-x_*-F'\left(y_0^{\left(2\right)}\right)^{-1}F\left(y_0^{\left(2\right)}\right)\right\| +\left\|\left(F'\left(y_0^{\left(2\right)}\right)^{-1}-\alpha\right)F'\left(x_*\right)\right\|\left\|F'\left(x_*\right)^{-1}F\left(y_0^{\left(2\right)}\right)\right\|\\\nonumber \leq&\left[g_1\left(\left\|y_0^{\left(2\right)}-x_*\right\|\right)+\frac{h\left(\left\|x_0-x_*\right\|\right)}{\left(1-\omega_0\left(\left\|y_0^{\left(2\right)}-x_*\right\|\right)\right)\left(1-\omega_0\left(\left\|x_0-x_*\right\|\right)\right)}\right]\left\|y_0^{\left(2\right)}-x_*\right\|\\\label{2.28} \leq&\psi\left(\left\|x_0-x_*\right\|\right)\left\|y_0^{\left(2\right)}-x_*\right\|\leq \left\|y_0^{\left(2\right)}-x_*\right\| < r, \end{align}
(31)
so (18) holds for \(n=0, i=3\) and \(y_0^{(3)}\in U(x_*,r).\) By replacing \(x_0, y_0^{(1)}, \ldots, y_0^{(i)}, x_1\) by \(x_m, y_m^{(1)},\ldots, y_m^{(i)}, x_{m+1}\) in the preceding estimates, we complete the induction for (16)-(19). Then, in view of the estimate
\begin{equation} \label{2.29} \|x_{m+1}-x_*\|\leq c\|x_m-x_*\|\leq c^{m+1}\|x_0-x_*\|, \end{equation}
(32)
we conclude that \(\lim_{m\longrightarrow\infty}x_m=x_*\) and \(x_{m+1}\in U(x_*,r).\)

Finally, let \(x_{**}\in D_1\) with \(F(x_{**})=0.\) Setting \(Q=\int_0^1F'(x_{**}+\theta(x_*-x_{**}))d\theta\) and using (A2), (9) and (A5), we get

\[\left\|F'\left(x_*\right)^{-1}\left(Q-F'\left(x_*\right)\right)\right\|\leq \int_0^1\omega_0\left(\theta\left\|x_*-x_{**}\right\|\right)d\theta \leq \int_0^1\omega_0\left(\theta r_*\right)d\theta < 1,\] so \(Q^{-1}\in L(Y,X).\) Consequently, from \(0=F(x_{**})-F(x_*)=Q(x_{**}-x_*),\) we obtain \(x_{**}=x_*.\)

Remark 1.

  • 1. In view of (A2) and the estimate \begin{align*} \left\|F'\left(x^\ast\right)^{-1}F'\left(x\right)\right\|&=\left\|F'\left(x^\ast\right)^{-1}\left(F'\left(x\right)-F'\left(x^\ast\right)\right)+I\right\|\\ &\leq 1+\left\|F'\left(x^\ast\right)^{-1}\left(F'\left(x\right)-F'\left(x^\ast\right)\right)\right\| \\ &\leq 1+\omega_0\left(\left\|x-x^\ast\right\|\right), \end{align*} the second condition in (A3) can be dropped and \(\omega_1\) can be replaced by \(\omega_1(t)=1+\omega_0(t)\) or, since \(\omega_0(t) < 1\) for \(t\in T_0,\) by \(\omega_1(t)=2.\)
  • 2. The results obtained here can be used for operators \(F\) satisfying autonomous differential equations [2] of the form \[F'(x)=P(F(x))\] where \(P\) is a continuous operator. Then, since \(F'(x^\ast)=P(F(x^\ast))=P(0),\) we can apply the results without actually knowing \(x^\ast.\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\)
  • 3. Let \(\omega_0(t)=L_0t,\) and \(\omega(t)=Lt.\) In [2,3] we showed that \(r_A=\frac{2}{2L_0+L}\) is the convergence radius of Newton's method:
    \begin{equation} \label{2.30} x_{n+1}=x_n-F'\left(x_n\right)^{-1}F(x_n)\, \  \  for \  each \  n=0,1,2,\cdots \end{equation}
    (33)
    under the conditions (A1)-(A3) with these choices. It follows from the definition of \(r\) in (9) that the convergence radius \(r\) of the method (2) cannot be larger than the convergence radius \(r_A\) of the second order Newton's method (33). As already noted in [2,3], \(r_A\) is at least as large as the convergence radius given by Rheinboldt [9]
    \begin{equation} \label{2.31}r_R=\frac{2}{3L_1},\end{equation}
    (34)
    where \(L_1\) is the Lipschitz constant on \(D.\) The same value for \(r_R\) was given by Traub [10]. In particular, for \(L_0 < L_1\) we have that \[r_R < r_A\] and \[\frac{r_R}{r_A}\rightarrow \frac{1}{3}\,\,\, \text{as}\,\,\, \frac{L_0}{L_1}\rightarrow 0.\] That is, the convergence radius \(r_A\) can be at most three times larger than Rheinboldt's radius \(r_R.\)
  • 4. It is worth noticing that method (2) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in [6,8,11]. Moreover, we can compute the computational order of convergence (COC) defined by \[\xi= \frac{\ln\left(\frac{\|x_{n+1}-x^\ast\|}{\|x_n-x^\ast\|}\right)}{\ln\left(\frac{\|x_{n}-x^\ast\|}{\|x_{n-1}-x^\ast\|}\right)} \,,\] or the approximate computational order of convergence \[\xi_1= \frac{\ln\left(\frac{\|x_{n+1}-x_n\|}{\|x_n-x_{n-1}\|}\right)}{\ln\left(\frac{\|x_{n}-x_{n-1}\|}{\|x_{n-1}-x_{n-2}\|}\right)} \] (see the sketch after this remark). This way we obtain in practice the order of convergence in a way that avoids bounds involving derivatives of \(F\) higher than the first. Note also that the computation of \(\xi_1\) does not require knowledge of the solution \(x^\ast.\)
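As an illustration of the last item, the quantities \(\xi\) and \(\xi_1\) can be evaluated from a stored history of iterates; the following minimal sketch (ours, with hypothetical names coc and acoc) assumes the iterates are NumPy arrays.

```python
import numpy as np

def coc(xs, x_star):
    """Computational order of convergence xi; uses the known solution x_star."""
    e = [np.linalg.norm(x - x_star) for x in xs]
    return [np.log(e[n + 1] / e[n]) / np.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

def acoc(xs):
    """Approximate computational order of convergence xi_1; does not require x_star."""
    d = [np.linalg.norm(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
    return [np.log(d[n + 1] / d[n]) / np.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]
```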

3. Numerical Examples

Example 1. Consider the kinematic system \[\begin{cases} F_1'(x)=e^x,\\ F_2'(y)=(e-1)y+1,\\ F_3'(z)=1.\end{cases}\] with \(F_1(0)=F_2(0)=F_3(0)=0.\) Letting \(F=(F_1,F_2,F_3),\) \(X=Y=\mathbb{R}^3,\ \ D=\bar{U}(0,1),\ \ x_*=(0, 0, 0)^T,\) and defining the function \(F\) on \(D\) for \(w=(x,y, z)^T\) by \[ F(w)=\left(e^x-1, \frac{e-1}{2}y^2+y, z\right)^T, \] we get \[F'(w)=\left[ \begin{array}{ccc} e^x&0&0\\ 0&(e-1)y+1&0\\ 0&0&1 \end{array}\right], \] so \( \omega_0(t)=(e-1)t,\ \ \omega(t)=e^{\frac{1}{e-1}}t,\ \ \text{and}\ \ \omega_1(t)=e^{\frac{1}{e-1}}.\) Then, the radii are \(r_1=0.382692,\, r_2=0.196552,\) and \(r_3=0.126761.\)
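For instance, the value of \(r_1\) can be verified by hand: with the above choices, \(g_1(t)=\dfrac{e^{\frac{1}{e-1}}t/2}{1-(e-1)t},\) so \(r_1\) solves \(\frac{1}{2}e^{\frac{1}{e-1}}t=1-(e-1)t,\) that is, \[r_1=\frac{1}{\frac{1}{2}e^{\frac{1}{e-1}}+e-1}\approx 0.382692.\]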

Example 2. Considering \(X=Y=C[0,1],\) \(D=\overline{U}(0,1)\) and \(F:D\longrightarrow Y\) defined by

\begin{equation} \label{3.3} F(\phi)(x)=\phi(x)-5\int_0^1x\theta\phi(\theta)^3d\theta, \end{equation}
(35)
we have that \[F'(\phi)(\xi)(x)=\xi(x)-15\int_0^1x\theta\phi(\theta)^2\xi(\theta)d\theta,\,\,\, for \  each \,\,\, \xi \in D.\] Then, we get that \(x^* =0,\) so \( \omega_0(t)=7.5t, \ \ \omega(t)=15t\) and \(\omega_1(t)=2.\) Then, the radii are \(r_1=0.066667,\, r_2=0.0361715,\) and \(r_3=0.0251157.\)

Example 3. For the academic example given in the introduction, we have \(\omega_0(t)=\omega(t)=96.6629073 t\) and \(\omega_1(t) =2.\) Then, the radii are \(r_1=0.00689682,\ r_2=0.00338133,\) and \(r_3=0.00217133.\)

Example 4. Let \(X=Y=C[0,1],\ \ D=\bar{U}(x^*, 1)\) and consider the nonlinear integral equation of the mixed Hammerstein-type [1,2,6,7,8,9,12] defined by \[x(s)=\int_0^1G(s,t)\left(x(t)^{3/2}+\frac{x(t)^2}{2}\right)dt,\] where the kernel \(G\) is the Green's function defined on the interval \([0,1]\times [0,1]\) by \[ G(s,t)=\left\{\begin{array}{cc} (1-s)t,& \,\,\,t\leq s,\\ s(1-t),&\,\,\,s\leq t. \end{array}\right. \] The solution \(x^*(s)=0\) is the same as the solution of Equation (1), where \(F:C[0,1]\longrightarrow C[0,1]\) is defined by \[F(x)(s)=x(s)-\int_0^1G(s,t)\left(x(t)^{3/2}+\frac{x(t)^2}{2}\right)dt.\] Notice that \[\left\|\int_0^1G(s,t)dt\right\|\leq \frac{1}{8}.\] Then, we have that \[F'(x)y(s)=y(s)-\int_0^1G(s,t)\left(\frac{3}{2}x(t)^{1/2}+x(t)\right)y(t)dt,\] so, since \(F'(x^*(s))=I,\) \[\left\|F'(x^*)^{-1}(F'(x)-F'(y))\right\|\leq \frac{1}{8}\left(\frac{3}{2}\|x-y\|^{1/2}+\|x-y\|\right).\] Then, we get that \(\omega_0(t)= \omega(t)=\frac{1}{8}\left(\frac{3}{2}t^{1/2}+t\right)\) and \(\omega_1(t)=1+\omega_0(t).\) The radii are \(r_1= 2.6303,\) \(r_2=1.20504\) and \(r_3=1.302.\) Since \(D=\bar{U}(x^*,1),\) condition (A4) gives \(r=1.\)
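The bound \(\left\|\int_0^1G(s,t)dt\right\|\leq\frac{1}{8}\) used above follows from the direct computation \[\int_0^1G(s,t)\,dt=\int_0^s(1-s)t\,dt+\int_s^1s(1-t)\,dt=\frac{(1-s)s^2}{2}+\frac{s(1-s)^2}{2}=\frac{s(1-s)}{2}\leq\frac{1}{8},\] with the maximum attained at \(s=\frac{1}{2}.\)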

Acknowledgments

The authors are grateful to the editor and the anonymous reviewers for their constructive comments. They would also like to thank Kokou Essiomle, Tchilabalo E. Patchali and Essodina Takouda for their help during the preparation of the manuscript.

Author Contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Amat, S., Hernández, M. A., & Romero, N. (2012). Semilocal convergence of a sixth order iterative method for quadratic equations. Applied Numerical Mathematics, 62(7), 833-841.
  2. Argyros, I. K. (2007). Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publ. Company, New York.
  3. Argyros, I. K., & Magreñán, A. A. (2017). Iterative Methods and their Dynamics with Applications. CRC Press, New York, USA.
  4. Behl, R., Cordero, A., Motsa, S. S., & Torregrosa, J. R. (2017). Stable high-order iterative methods for solving nonlinear models. Applied Mathematics and Computation, 303, 70-88.
  5. Behl, R., Bhalla, S., Magreñán, A. A., & Kumar, S. (2020). An efficient high order iterative scheme for large nonlinear systems with dynamics. Computational and Applied Mathematics, 113249. https://doi.org/10.1016/j.cam.2020.113249.
  6. Cordero, A., Hueso, J. L., Martínez, E., & Torregrosa, J. R. (2010). A modified Newton-Jarratt's composition. Numerical Algorithms, 55(1), 87-99.
  7. Magreñán, A. A. (2014). Different anomalies in a Jarratt family of iterative root-finding methods. Applied Mathematics and Computation, 233, 29-38.
  8. Noor, M. A., & Wassem, M. (2009). Some iterative methods for solving a system of nonlinear equations. Applied Mathematics and Computation, 57, 101-106.
  9. Rheinboldt, W. C. (1977). An adaptive continuation process for solving systems of nonlinear equations. In: Mathematical Models and Numerical Methods (A. N. Tikhonov et al., eds.), pub. 3, 129-142, Banach Center, Warsaw, Poland.
  10. Traub, J. F. (1964). Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs.
  11. Sharma, J. R., & Arora, H. (2017). Improved Newton-like methods for solving systems of nonlinear equations. SeMA Journal, 74(2), 147-163.
  12. Weerakoon, S., & Fernando, T. G. I. (2000). A variant of Newton's method with accelerated third-order convergence. Applied Mathematics Letters, 13(8), 87-93.