Local convergence for a family of sixth order methods with parameters

Author(s): Christopher I. Argyros1, Michael Argyros1, Ioannis K. Argyros2, Santhosh George3
1Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA.
2Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA.
3Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025.
Copyright © Christopher I. Argyros, Michael Argyros, Ioannis K. Argyros, Santhosh George. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Local convergence of a family of sixth order methods for solving Banach space valued equations is considered in this article. The local convergence analysis is provided using only the first derivative in contrast to earlier works on the real line using the seventh derivative. This way the applicability is expanded for these methods. Numerical examples complete the article.

Keywords: Local convergence; Banach space; Convergence order.

1. Introduction

Consider the problem of solving equation

\begin{equation} F(x)=0, \end{equation}
(1)
where \(F:\Omega \subset B_1\longrightarrow B_2\) is continuously Fréchet differentiable, \(B_1, B_2\) are Banach spaces and \(\Omega\) is a nonempty convex set.

In this paper we study the local convergence of a family of sixth order iterative methods using assumptions only on the first derivative of \(F.\) Usually the convergence order is obtained using Taylor expansions and conditions on high order derivatives not appearing on the methods [1,2,3,4,5,6,7,8,9,10,11,12,13]. These conditions limit the applicability of the methods.

For example, let \(B_1=B_2=\mathbb{R},\, D= [-\frac{1}{2}, \frac{3}{2}].\) Define \(f\) on \(D\) by

\[f(s)=\left\{\begin{array}{cc} s^3\log s^2+s^5-s^4& if\,\,s\neq0\\ 0& if\,\, s=0. \end{array}\right. \] Then, we have \(x_*=1,\) and \[f'(s)= 3s^2\log s^2 + 5s^4- 4s^3+ 2s^2 ,\] \[f''(s)= 6s\log s^2 + 20s^3 -12s^2 + 10s,\] \[f'''(s) = 6\log s^2 + 60s^2-24s + 22.\] Obviously \(f'''(s)\) is not bounded on \(D,\) since \(6\log s^2\longrightarrow -\infty\) as \(s\longrightarrow 0.\) So, the convergence of these methods is not guaranteed by the analysis in these papers.
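A quick numerical check (a minimal sketch in Python; it simply evaluates the formula for \(f'''\) near the origin) illustrates the blow-up:

```python
import numpy as np

# f'''(s) = 6 log(s^2) + 60 s^2 - 24 s + 22 on D = [-1/2, 3/2]
f3 = lambda s: 6 * np.log(s**2) + 60 * s**2 - 24 * s + 22

for s in (1e-1, 1e-3, 1e-5):
    print(s, f3(s))   # about -7.4, -60.9, -116.2: decreasing without bound
```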

The family of methods we are interested in is:

\begin{equation} \begin{cases} y_n=x_n-\gamma F'(x_n)^{-1}F(x_n)\\ z_n=x_n-A_nF'(x_n)^{-1}F(x_n)\\ x_{n+1}=z_n-B_nF'(y_n)^{-1}F(z_n),\\ A_n=a_1I+a_2C(y_n,x_n)+a_3C(x_n,y_n)+a_4C(y_n,x_n)^2,\\ C(x_n,y_n)=F'(x_n)^{-1}F'(y_n), \end{cases}\end{equation}
(2)
where \(B_n=b_1I+b_2C(x_n,y_n)+b_3C(y_n,x_n)+b_4C(x_n,y_n)^2,\) \(\gamma=\frac{2}{3},\) \(a_1=\frac{5-8a_2}{8},\) \(a_3=\frac{a_2}{3},\) \(a_4=\frac{9-8a_2}{24},\) \(b_2=\frac{3+8b_1}{8},\) \(b_3=\frac{15-8b_1}{14},\) \(b_4=\frac{9+4b_1}{12},\) with \(a_2\) and \(b_1\) free.
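For readers who wish to experiment with the family (2), the following Python sketch performs one iteration for a system \(F:\mathbb{R}^k\longrightarrow\mathbb{R}^k.\) It is only an illustration of the formulas above: the function names, the use of a dense Jacobian and the sample values \(a_2=b_1=0\) are our own choices, not part of the analysis.

```python
import numpy as np

def sixth_order_step(F, dF, x, a2=0.0, b1=0.0, gamma=2.0/3.0):
    """One step of method (2); a2 and b1 are the free parameters."""
    # parameter relations quoted in the text
    a1 = (5.0 - 8.0 * a2) / 8.0
    a3 = a2 / 3.0
    a4 = (9.0 - 8.0 * a2) / 24.0
    b2 = (3.0 + 8.0 * b1) / 8.0
    b3 = (15.0 - 8.0 * b1) / 14.0
    b4 = (9.0 + 4.0 * b1) / 12.0

    I = np.eye(len(x))
    Jx = dF(x)
    step = np.linalg.solve(Jx, F(x))           # F'(x)^{-1} F(x)
    y = x - gamma * step                       # first substep

    Jy = dF(y)
    Cxy = np.linalg.solve(Jx, Jy)              # C(x, y) = F'(x)^{-1} F'(y)
    Cyx = np.linalg.solve(Jy, Jx)              # C(y, x) = F'(y)^{-1} F'(x)

    A = a1 * I + a2 * Cyx + a3 * Cxy + a4 * Cyx @ Cyx
    z = x - A @ step                           # second substep

    B = b1 * I + b2 * Cxy + b3 * Cyx + b4 * Cxy @ Cxy
    return z - B @ np.linalg.solve(Jy, F(z))   # third substep
```

Iterating this step until \(\|F(x_n)\|\) is sufficiently small produces the sequence \(\{x_n\}\) whose error bounds are studied below.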

The efficiency and the convergence order were established in [14] in the case \(B_1=B_2=\mathbb{R}^k.\) The convergence was shown using the seventh derivative. We include error bounds on \(\|x_n-x_*\|\) and uniqueness results not given in [14]. Our technique is so general that it can be used to extend the usage of other methods [1,2,3,4,5,6,7,8,9,10,11,12,13].

The article contains local convergence analysis in Section 2 and the numerical examples in Section 3.

2. Local convergence

We introduce some real parameters and functions. Set \(S=[0, \infty).\) Suppose that:
  • (i) \( \omega_0(t)-1 \) has a least zero \(R_0\in S-\{0\}\) for some function \(\omega_0:S\longrightarrow S\) continuous and nondecreasing. Set \(S_0=[0, R_0).\)
  • (ii) \( \varphi_{1}(t)-1 \) has a least zero \(r_1\in S_0-\{0\}\) for some functions \(\omega:S_0\longrightarrow S, \omega_1:S_0\longrightarrow S\) continuous and nondecreasing with \(\varphi_1:S_0\longrightarrow S\) defined by \[\varphi_1(t)=\frac{\int_0^1\omega((1-\theta)t)d\theta+|1-\gamma|\int_0^1\omega_1(\theta t)d\theta}{1-\omega_0(t)}.\]
  • (iii) \(\varphi_2(t)-1\) has a least zero \(r_2\in S_0-\{0\}\) for some function \(\zeta:S_0\longrightarrow S\) with \(\varphi_2:S_0\longrightarrow S\) defined by \[\varphi_{2}(t)=\frac{\int_0^1\omega((1-\theta)t)d\theta+\zeta(t)\int_0^1\omega_1(\theta t)d\theta}{1-\omega_0(t)},\] where \(\zeta(t)=|a_1-1|+\frac{|a_2|\omega_1(t)}{1-\omega_0(\varphi_1(t)t)}+\frac{|a_3|\omega_1(\varphi_{1}(t)t)}{1-\omega_0(t)}+|a_4|\left(\frac{\omega_1(t)}{1-\omega_0(\varphi_1(t)t)}\right)^2.\)
  • (iv) \(\omega_0(\varphi_1(t)t)-1\) has a least zero \(R_1\in S_0-\{0\}.\) Set \(R=\min\{R_0, R_1\}\) and \(S_1=[0, R).\)
  • (v) \(\varphi_3(t)-1\) has a least zero \(r_3\in S_1-\{0\}\) for some function \(\psi:S_1\longrightarrow S,\) with \(\varphi_3:S_1\longrightarrow S\) defined by \begin{align*} \varphi_3(t)=&\left[\frac{\int_0^1\omega((1-\theta)\varphi_2(t)t)d\theta}{1-\omega_0(\varphi_2(t)t)}\right.+\left.\frac{(\omega_0(\varphi_{2}(t)t)+\omega_0(\varphi_1(t)t))\int_0^1\omega_1(\theta \varphi_2(t)t)d\theta}{(1-\omega_0(\varphi_2(t)t))(1-\omega_0(\varphi_1(t)t))}\right.\\&+\left.\frac{\psi(t)\int_0^1\omega_1(\theta\varphi_2(t)t)d\theta}{1-\omega_0(\varphi_1(t)t)}\right]\varphi_2(t), \end{align*} where \(\psi(t)=|b_1-1|+|b_2|\frac{\omega_1(\varphi_1(t)t)}{1-\omega_0(t)}+|b_3|\frac{\omega_1(t)}{1-\omega_0(\varphi_1(t)t)}+|b_4|\left(\frac{\omega_1(\varphi_1(t)t)}{1-\omega_0(t)}\right)^2.\)

Define parameter \(r\) by

\begin{equation} r=\min\{r_m\},\,\, m=1,2,3. \end{equation}
(3)
It shall be shown that \(r\) is a convergence radius for method (2). Set \(S_2=[0,r).\) Notice that for each \(t\in S_2\) the following hold
\begin{equation} 0\leq \omega_0(t) < 1, \end{equation}
(4)
\begin{equation} 0\leq \omega_0(\varphi_2(t)t) < 1, \end{equation}
(5)
and
\begin{equation} 0\leq \varphi_m(t) < 1. \end{equation}
(6)
By \( \bar{T}(x,\delta)\) we denote the closure of the open ball \(T(x,\delta)\) with center \(x\in B_1\) and of radius \(\delta > 0.\)

Our local convergence analysis uses the hypotheses (H), where the functions \(\omega_0, \omega, \omega_1\) are as previously given and \(x_*\) is a simple zero of \(F.\) Suppose:

  • (H1) \(\|F'(x_*)^{-1}(F'(u)-F'(x_*))\|\leq \omega_0(\|u-x_*\|)\) for each \(u\in \Omega.\) Set \(\Omega_0=\Omega\cap T(x_*,R_0)\);
  • (H2) \(\|F'(x_*)^{-1}(F'(u)-F'(v))\|\leq \omega(\|u-v\|)\) and \(\|F'(x_*)^{-1}F'(u)\|\leq \omega_1(\|u-x_*\|)\) for each \(u,v\in \Omega_0\);
  • (H3) \(\bar{T}(x_*,r)\subset \Omega;\) and
  • (H4) There exists \(\beta\geq r\) satisfying \(\int_0^1\omega_0(\theta \beta)d\theta < 1.\) Set \(\Omega_1=\Omega\cap \bar{T}(x_*,\beta).\)
Next, the local convergence analysis follows for method (2) utilizing hypotheses (H).

Theorem 1. Under hypotheses (H), choose a starting point \(x_0\in T(x_*,r)-\{x_*\}.\) Then, the sequence \(\{x_n\}\) generated by method (2) is well defined in \(T(x_*,r),\) remains in \(T(x_*,r)\) for each \(n=0,1,2,\ldots,\) and \(\lim_{n\longrightarrow \infty}x_n=x_*,\) which is the only zero of \(F\) in the set \(\Omega_1\) given in (H4).

Proof. The following assertions shall be shown using induction

\begin{equation} \|y_k-x_*\|\leq \varphi_{1}(\|x_k-x_*\|)\|x_k-x_*\|\leq \|x_k-x_*\| < r, \end{equation}
(7)
\begin{equation} \|z_k-x_*\|\leq \varphi_2(\|x_k-x_*\|)\|x_k-x_*\|\leq \|x_k-x_*\|, \end{equation}
(8)
and
\begin{equation} \|x_{k+1}-x_*\|\leq \varphi_3(\|x_k-x_*\|)\|x_k-x_*\|\leq \|x_k-x_*\|, \end{equation}
(9)
where the radius \(r\) is defined in (3) and the \(\varphi_m\) functions are as previously given. Let \(x\in T(x_*,r)-\{x_*\}.\) Using (3), (4), and (H1), we get
\begin{equation} \|F'(x_*)^{-1}(F'(x)-F'(x_*))\|\leq \omega_0(\|x-x_*\|)\leq \omega_0(r) < 1, \end{equation}
(10)
so, by the Banach lemma on invertible operators [15,16,17,18,19], \(F'(x)\) is invertible and
\begin{equation} \|F'(x)^{-1}F'(x_*)\|\leq \frac{1}{1-\omega_0(\|x-x_*\|)}. \end{equation}
(11)
Notice also that \(y_0\) exists by the first substep of method (2), from which we can write
\begin{align}\nonumber y_0-x_*=&x_0-x_*-F'(x_0)^{-1}F(x_0)+(1-\gamma)F'(x_0)^{-1}F(x_0)\\\nonumber =&(F'(x_0)^{-1}F'(x_*))\int_0^1F'(x_*)^{-1}\left(F'(x_0)-F'(x_*+\theta(x_0-x_*))\right)d\theta(x_0-x_*)\\ &+(1-\gamma)(F'(x_0)^{-1}F'(x_*))\int_0^1F'(x_*)^{-1}F'(x_*+\theta(x_0-x_*))d\theta(x_0-x_*).\label{2.10} \end{align}
(12)
By (3), (6) (for \( m=1\)), (11) (for \(x=x_0\)), (H2) and (12), we have
\begin{align}\nonumber \|y_0-x_*\| &\leq\frac{\int_0^1\omega((1-\theta)\|x_0-x_*\|)d\theta+|1-\gamma|\int_0^1\omega_1(\theta\|x_0-x_*\|)d\theta}{1-\omega_0(\|x_0-x_*\|)}\|x_0-x_*\|\\\label{2.11} &\leq\varphi_{1}(\|x_0-x_*\|)\|x_0-x_*\|\leq \|x_0-x_*\| < r, \end{align}
(13)
showing (7) for \(n=0\) and \(y_0\in T(x_*,r).\) Then, we also have that (11) holds for \(x=y_0\) and \(F'(y_0)\) is invertible. Hence, \(z_0\) exists by the second substep of method (2) from which we can also write
\begin{equation} z_0-x_*=x_0-x_*-F'(x_0)^{-1}F(x_0)+(I-A_0)F'(x_0)^{-1}F(x_0). \end{equation}
(14)
By (3), (6) (for \(m=2\)), (11) (for \(x=x_0,y_0\)), (13) and (14), we have
\begin{align}\nonumber \|z_0-x_*\|\leq&\left[\frac{\int_0^1\omega((1-\theta)\|x_0-x_*\|)d\theta}{1-\omega_0(\|x_0-x_*\|)}\right. +\left.\frac{\zeta(\|x_0-x_*\|)\int_0^1\omega_1(\theta\|x_0-x_*\|)d\theta}{1-\omega_0(\|x_0-x_*\|)}\right]\|x_0-x_*\|\\\label{2.13} \leq&\varphi_2(\|x_0-x_*\|)\|x_0-x_*\|\leq \|x_0-x_*\|, \end{align}
(15)
showing (8) for \(n=0\) and \(z_0\in T(x_*,r),\) where we also used the estimate
\begin{align}\nonumber \|I-A_0\|\leq&|a_1-1|+|a_2|\frac{\omega_1(\|x_0-x_*\|)}{1-\omega_0(\|y_0-x_*\|)} +|a_3|\frac{\omega_1(\|y_0-x_*\|)}{1-\omega_0(\|x_0-x_*\|)} +|a_4|\left(\frac{\omega_1(\|x_0-x_*\|)}{1-\omega_0(\|y_0-x_*\|)}\right)^2\\ \leq&\zeta(\|x_0-x_*\|)\quad (\text{by the definition of }A_0). \end{align}
(16)
Similarly, we have that \(x_1\) exists and we can write by the third substep of method (2)
\begin{align}\label{2.15} x_1-x_*=&z_0-x_*-F'(z_0)^{-1}F(z_0) +F'(z_0)^{-1}(F'(y_0)-F'(z_0))F'(y_0)^{-1}F(z_0)+(I-B_0)F'(y_0)^{-1}F(z_0). \end{align}
(17)
Then, by (3), (6)( for \(m=3\)), (11) (for \(x=z_0, y_0\)), (13), (15) and (17), we get
\begin{align}\nonumber \|x_1-x_*\|\leq&\left[\frac{\int_0^1\omega((1-\theta)\|z_0-x_*\|)d\theta}{1-\omega_0(\|z_0-x_*\|)}\right. +\frac{(\omega_0(\|z_0-x_*\|)+\omega_0(\|y_0-x_*\|))\int_0^1\omega_1(\theta\|z_0-x_*\|)d\theta}{(1-\omega_0(\|z_0-x_*\|))(1-\omega_0(\|y_0-x_*\|))}\\\nonumber &+\left.\frac{\psi(\|x_0-x_*\|)\int_0^1\omega_1(\theta\|z_0-x_*\|)d\theta}{1-\omega_0(\|y_0-x_*\|)}\right]\|z_0-x_*\|\\\label{2.16} \leq&\varphi_3(\|x_0-x_*\|)\|x_0-x_*\|\leq \|x_0-x_*\|, \end{align}
(18)
showing (9) for \(n=0\) and \(x_1\in T(x_*,r),\) where we also used
\begin{align}\nonumber \|I-B_0\|\leq&|b_1-1|+|b_2|\frac{\omega_1(\|y_0-x_*\|)}{1-\omega_0(\|x_0-x_*\|)} +|b_3|\frac{\omega_1(\|x_0-x_*\|)}{1-\omega_0(\|y_0-x_*\|)} +|b_4|\left(\frac{\omega_1(\|y_0-x_*\|)}{1-\omega_0(\|x_0-x_*\|)}\right)^2\\\label{2.17} \leq&\psi(\|x_0-x_*\|)\quad (\text{by the definition of }B_0). \end{align}
(19)
Replace \(x_0, y_0, z_0, x_1\) by \(x_k, y_k, z_k, x_{k+1}\) in the preceding estimates to complete the induction for (7)-(9). Then, from the estimate
\begin{equation} \|x_{n+1}-x_*\|\leq p\|x_n-x_*\|, \end{equation}
(20)
where \(p=\varphi_3(\|x_0-x_*\|)\in [0,1),\) we get \(\lim_{n\longrightarrow\infty}x_n=x_*,\) and \(x_{n+1}\in T(x_*,r).\)

Set \(M=\int_0^1F'(x_*+\theta(q-x_*))d\theta\) for some \(q\in \Omega_1\) with \(F(q)=0.\) Using (H1) and (H4), we get \[\|F'(x_*)^{-1}(M-F'(x_*))\|\leq \int_0^1\omega_0(\theta\|q-x_*\|)d\theta \leq \int_0^1\omega_0(\theta \beta)d\theta < 1,\] so \(q=x_*\) is implied by the identity \(0=F(q)-F(x_*)=M(q-x_*)\) and the invertibility of \(M.\)

Remark 1.

  • 1. In view of (H2) and the estimate \begin{eqnarray*} \|F'(x^\ast)^{-1}F'(x)\|&=&\|F'(x^\ast)^{-1}(F'(x)-F'(x^\ast))+I\|\\ &\leq& 1+\|F'(x^\ast)^{-1}(F'(x)-F'(x^\ast))\| \leq 1+\omega_0(\|x-x^\ast\|), \end{eqnarray*} the second condition in (H2) can be dropped and \(\omega_1\) can be replaced by \(\omega_1(t)=1+\omega_0(t)\) or \(\omega_1(t)=1+\omega_0(R_0),\) since \(t\in [0, R_0).\)
  • 2. The results obtained here can be used for operators \(F\) satisfying autonomous differential equations [15] of the form \(F'(x)=P(F(x))\) where \(P\) is a continuous operator. Then, since \(F'(x^\ast)=P(F(x^\ast))=P(0),\) we can apply the results without actually knowing \(x^\ast.\) For example, let \(F(x)=e^x-1.\) Then, we can choose: \(P(x)=x+1.\)
  • 3. Let \(\omega_0(t)=L_0t\) and \(\omega(t)=Lt.\) In [15,16] we showed that \(r_A=\frac{2}{2L_0+L}\) is the convergence radius of Newton's method:
    \begin{equation} x_{n+1}=x_n-F'(x_n)^{-1}F(x_n)\,\,\,\, for \, each \, \,\,\,n=0,1,2,\cdots \end{equation}
    (21)
    under the conditions (H1) – (H3). It follows from the definition of \(r\) in (3) that the convergence radius \(r\) of the method (2) cannot be larger than the convergence radius \(r_A\) of the second order Newton's method (21). As already noted in [15,16], \(r_A\) is at least as large as the convergence radius given by Rheinboldt [10]
    \begin{equation} r_R=\frac{2}{3L_1},\end{equation}
    (22)
    where \(L_1\) is the Lipschitz constant on \(D.\) The same value for \(r_R\) was given by Traub [12]. In particular, for \(L_0 < L_1\) we have that \(r_R < r_A\) and \(\frac{r_R}{r_A}\rightarrow \frac{1}{3}\,\,\, as\,\,\, \frac{L_0}{L_1}\rightarrow 0.\) That is, the convergence radius \(r_A\) can be at most three times larger than Rheinboldt's.
  • 4. We can compute the computational order of convergence (COC) defined by \(\xi= \frac{\ln\left(\frac{d_{n+1}}{d_n}\right)}{\ln\left(\frac{d_n}{d_{n-1}}\right)}, \) where \(d_n=\|x_n-x^\ast\|,\) or the approximate computational order of convergence (ACOC) \(\xi_1= \frac{\ln\left(\frac{e_{n+1}}{e_n}\right)}{\ln\left(\frac{e_n}{e_{n-1}}\right)}, \) where \(e_n=\|x_n-x_{n-1}\|\) (a small computational sketch is given below).
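Both indicators are straightforward to evaluate once the iterates are stored; a minimal Python sketch (the function names are our own) is:

```python
import numpy as np

def coc(d):
    """COC from the errors d_n = ||x_n - x_*|| (exact solution known)."""
    d = np.asarray(d, dtype=float)
    return np.log(d[2:] / d[1:-1]) / np.log(d[1:-1] / d[:-2])

def acoc(xs):
    """ACOC from the iterates alone, using e_n = ||x_n - x_{n-1}||."""
    e = np.array([np.linalg.norm(np.atleast_1d(xs[n]) - np.atleast_1d(xs[n - 1]))
                  for n in range(1, len(xs))])
    return np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])
```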

3. Numerical Examples

Example 1. Consider the kinematic system \[F_1'(x)=e^x,\, F_2'(y)=(e-1)y+1,\, F_3'(z)=1\] with \(F_1(0)=F_2(0)=F_3(0)=0.\) Let \(F=(F_1,F_2,F_3).\) Let \({B}_1={B}_2=\mathbb{R}^3,\, \Omega=\bar{T}(x_*,1),\, x_*=(0, 0, 0)^t.\) Define function \(F\) on \(\Omega\) for \(w=(x,y, z)^t\) by \[ F(w)=(e^x-1, \frac{e-1}{2}y^2+y, z)^t. \] Then, we get \[F'(w)=\left[ \begin{array}{ccc} e^x&0&0\\ 0&(e-1)y+1&0\\ 0&0&1 \end{array}\right], \] so \( \omega_0(t)=(e-1)t, \omega(t)=e^{\frac{1}{e-1}}t, \omega_1(t)=e^{\frac{1}{e-1}}.\) Then, the radii are \[r_{1}=0.154407,\, r_2=0.367385,\, r_3=0.323842.\]
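To indicate how such radii can be reproduced, the following Python sketch (our own illustration, not the authors' code) recovers \(r_1\) for this example by bisection. The radii \(r_2\) and \(r_3\) additionally require a choice of the free parameters \(a_2\) and \(b_1\) through \(\zeta\) and \(\psi\), so they are not computed here.

```python
import numpy as np

E = np.e
gamma = 2.0 / 3.0

# majorant functions quoted for Example 1
w0 = lambda t: (E - 1.0) * t
w  = lambda t: np.exp(1.0 / (E - 1.0)) * t
w1 = lambda t: np.exp(1.0 / (E - 1.0)) + 0.0 * t      # constant

def integral01(f, n=2000):
    """Midpoint-rule approximation of the integral of f over [0, 1]."""
    theta = (np.arange(n) + 0.5) / n
    return float(np.mean(f(theta)))

def phi1(t):
    """The function varphi_1 of condition (ii)."""
    num = integral01(lambda th: w((1.0 - th) * t)) \
        + abs(1.0 - gamma) * integral01(lambda th: w1(th * t))
    return num / (1.0 - w0(t))

def least_zero(g, lo=1e-12, hi=1.0 / (E - 1.0) - 1e-12, iters=200):
    """Bisection for the least positive solution of g(t) = 1 below R_0."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(least_zero(phi1))   # approximately 0.154407, the value of r_1 above
```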

Example 2. Consider \({B}_1={B}_2=C[0,1]\) equipped with the max norm, \(\Omega=\bar{T}(0,1)\) and \(F:\Omega\longrightarrow B_2\) defined by

\begin{equation} F(\phi)(x)=\phi(x)-5\int_0^1x\theta\phi(\theta)^3d\theta. \end{equation}
(23)
We have that \[F'(\phi)(\xi)(x)=\xi(x)-15\int_0^1x\theta\phi(\theta)^2\xi(\theta)d\theta,\,\,\,\, for \, each \, \,\,\, \xi \in \Omega.\] Then, we get that \(x_* =0,\) so \( \omega_0(t)=7.5t, \omega(t)=15t\) and \(\omega_1(t)=2.\) Then, the radii are \[r_{1}=0.02222,\, r_2=0.091401,\, r_3=0.0656309.\]
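A short justification of these choices (our reading of how the constants arise): since \(x_*=0\) and \(F'(0)=I,\) for all \(u, v\in \Omega\) and any \(\xi,\) \[\|F'(x_*)^{-1}(F'(u)-F'(0))\xi\|\leq 15\int_0^1\theta d\theta\, \|u\|^2\|\xi\|=7.5\|u\|^2\|\xi\|\leq 7.5\|u\|\,\|\xi\|,\] \[\|F'(x_*)^{-1}(F'(u)-F'(v))\xi\|\leq 7.5\|u+v\|\,\|u-v\|\,\|\xi\|\leq 15\|u-v\|\,\|\xi\|,\] so \(\omega_0(t)=7.5t\) and \(\omega(t)=15t,\) while \(\omega_1(t)=2\) corresponds to the choice \(\omega_1(t)=1+\omega_0(R_0)\) of Remark 1, since here \(R_0=\frac{2}{15}.\)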

Example 3. For the academic example of the introduction, we have \(\omega_0(t)=\omega(t)=96.6629073 t\) and \(\omega_1(t) =2.\) Then, the radii are \[r_{1}=0.00229894,\, r_2=0.0065021,\, r_3=0.0905654.\]

Author Contributions

All authors contributed equally.

Conflicts of Interest

The authors declare no conflict of interest.

References:

  1. Amat, S., Busquier, S., Grau, Á., & Grau-Sánchez, M. (2013). Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Applied Mathematics and Computation, 219(15), 7954-7963.
  2. Cordero, A., Torregrosa, J. R., & Vassileva, M. P. (2013). Increasing the order of convergence of iterative schemes for solving nonlinear systems. Journal of Computational and Applied Mathematics, 252, 86-94.
  3. Cordero, A., Martínez, E., & Torregrosa, J. R. (2009). Iterative methods of order four and five for systems of nonlinear equations. Journal of Computational and Applied Mathematics, 231(2), 541-551.
  4. Cordero, A., Hueso, J. L., Martínez, E., & Torregrosa, J. R. (2012). Increasing the convergence order of an iterative method for nonlinear systems. Applied Mathematics Letters, 25(12), 2369-2374.
  5. Chicharro, F., Cordero, A., Gutiérrez, J. M., & Torregrosa, J. R. (2013). Complex dynamics of derivative-free methods for nonlinear equations. Applied Mathematics and Computation, 219(12), 7023-7035.
  6. Darvishi, M. T., & Barati, A. (2007). A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Applied Mathematics and Computation, 188(1), 257-261.
  7. Grau-Sánchez, M., Grau, Á., & Noguera, M. (2011). On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. Journal of Computational and Applied Mathematics, 236(6), 1259-1266.
  8. Gutiérrez, J. M., Hernández, M. A., & Romero, N. (2010). Dynamics of a new family of iterative processes for quadratic polynomials. Journal of Computational and Applied Mathematics, 233(10), 2688-2695.
  9. Neta, B., & Petkovic, M. S. (2010). Construction of optimal order nonlinear solvers using inverse interpolation. Applied Mathematics and Computation, 217(6), 2448-2455.
  10. Rheinboldt, W. C. (1975). An Adaptive Continuation Process for Solving Systems of Nonlinear Equations. University of Maryland.
  11. Sharma, J. R., Guha, R. K., & Sharma, R. (2013). An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numerical Algorithms, 62(2), 307-323.
  12. Traub, J. F. (1982). Iterative Methods for the Solution of Equations (Vol. 312). American Mathematical Society.
  13. Wang, X., Kou, J., & Li, Y. (2009). Modified Jarratt method with sixth-order convergence. Applied Mathematics Letters, 22(12), 1798-1802.
  14. Hueso, J. L., Martínez, E., & Teruel, C. (2015). Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. Journal of Computational and Applied Mathematics, 275, 412-420.
  15. Argyros, I. K. (2007). Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, 15, Editors: Chui C. K. and Wuytack L. Elsevier Publishing Company, New York.
  16. Argyros, I. K., & Magreñán, A. A. (2017). Iterative Methods and their Dynamics with Applications. CRC Press, New York, USA.
  17. Argyros, I. K., & George, S. (2019). Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume III. Nova Publishers, NY.
  18. Argyros, I. K., & George, S. (2019). Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume IV. Nova Publishers, NY.
  19. Argyros, I. K., George, S., & Magreñán, A. A. (2015). Local convergence for multi-point-parametric Chebyshev–Halley-type methods of high convergence order. Journal of Computational and Applied Mathematics, 282, 215-224.