
Open Journal of Mathematical Analysis

An implicit viscosity technique of nonexpansive mapping in CAT(0) spaces

Iftikhar Ahmad\(^{1}\), Maqbool Ahmad
Department of Mathematics and Statistics, University of Lahore, Lahore, Pakistan. (I.A.)
Department of Mathematics and Statistics, The University of Lahore, Lahore, Pakistan. (M.A.)
\(^{1}\)Corresponding Author;  iftikharcheema1122@gmail.com
Copyright © 2017 Iftikhar Ahmad and Maqbool Ahmad. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

In this paper, we present a new viscosity technique for nonexpansive mappings in the framework of CAT(0) spaces. Strong convergence of the proposed technique is proved under certain assumptions imposed on the sequences of parameters. The results presented in this paper extend and improve some recent results announced in the current literature.

Keywords:

Viscosity rule; CAT(0) space; Nonexpansive mapping; Variational inequality.

1. Introduction

The study of spaces of nonpositive curvature originated with the discovery of hyperbolic spaces and flourished through the pioneering works of J. Hadamard and E. Cartan in the first decades of the twentieth century. The idea of nonpositive curvature in geodesic metric spaces can be traced back to the work of H. Busemann and A. D. Alexandrov in the 1950s. Later on, M. Gromov restated some features of global Riemannian geometry solely in terms of the so-called CAT(0) inequality (here the letters C, A and T stand for Cartan, Alexandrov and Toponogov, respectively). For a thorough discussion of CAT(0) spaces and of the fundamental role they play in geometry, we refer the reader to Bridson and Haefliger [1]. As is well known, iterative methods for finding fixed points of nonexpansive mappings have received vast investigation due to their extensive applications in a variety of applied areas such as inverse problems, partial differential equations, image recovery, and signal processing; see [2, 3, 4, 5, 6, 7, 8, 9, 10] and the references therein. One of the difficulties in carrying results over from Banach spaces to the complete CAT(0) space setting lies in the heavy use of the linear structure of Banach spaces. Berg and Nikolaev [4] introduced an inner-product-like notion (quasilinearization) in complete CAT(0) spaces to resolve these difficulties. Fixed-point theory in CAT(0) spaces was first studied by Kirk [11, 12, 13]. He showed that every nonexpansive (single-valued) mapping defined on a bounded closed convex subset of a complete CAT(0) space always has a fixed point. Since then, the fixed-point theory for single-valued and multivalued mappings in CAT(0) spaces has been rapidly developed. In 2000, Moudafi [14] introduced viscosity approximation methods as follows:

Theorem 1.1. Let \(C\) be a nonempty closed convex subset of a real Hilbert space \(H\). Let \(T\) be a nonexpansive mapping of \(C\) into itself such that \(Fix(T)\) is nonempty. Let \(f\) be a contraction of \(C\) into itself with coefficient \(\theta\in [0,1)\). Pick any \(x_{0}\in C\) and let \(\{x_{n}\}\) be the sequence generated by $$x_{n+1}=\frac{\gamma_{n}}{1+\gamma_{n}}f(x_{n})+\frac{1}{1+\gamma_{n}}T(x_{n}),\;\;\;n\geq 0,$$ where \(\{\gamma_{n}\}\) is a sequence in \((0,1)\) satisfying the following conditions:

  1. \(\lim\limits_{n\rightarrow \infty}\gamma_{n}=0, \)
  2. \(\sum\limits_{n=0}^{\infty}\gamma_{n}=\infty, \)
  3. \(\lim\limits_{n\rightarrow \infty}\left|\frac{1}{\gamma_{n+1}}-\frac{1}{\gamma_{n}}\right|=0. \)
Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^\ast\) of the mapping \(T\), which is also the unique solution of the variational inequality $$\langle x^{\ast}-f(x^{\ast}),\, y-x^{\ast}\rangle\geq 0, \;\;\; \forall \;y\in \textrm{Fix}(T).$$ In other words, \(x^{\ast}\) is the unique fixed point of the contraction \(P_{Fix(T)}f\), that is, \(P_{Fix(T)}f(x^\ast)=x^\ast\).
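To make the scheme concrete, the following is a minimal numerical sketch of Moudafi's iteration in \(\mathbb{R}^2\) (viewed as a Hilbert space). The choices of \(T\) (metric projection onto the horizontal axis), \(f\) and \(\gamma_{n}=(n+1)^{-1/2}\) are illustrative assumptions of ours, made only so that the hypotheses above are satisfied.

```python
import numpy as np

# Illustrative (assumed) nonexpansive map: metric projection onto the x-axis,
# so Fix(T) is the whole axis {(s, 0) : s real}.
def T(x):
    return np.array([x[0], 0.0])

def f(x):                           # assumed contraction with coefficient theta = 0.5
    return 0.5 * x + np.array([1.0, 2.0])

x = np.array([5.0, -3.0])           # x_0, an arbitrary starting point
for n in range(200000):
    gamma = 1.0 / np.sqrt(n + 1)    # gamma_n -> 0, sum gamma_n = infinity,
                                    # and |1/gamma_{n+1} - 1/gamma_n| -> 0
    x = (gamma / (1 + gamma)) * f(x) + (1 / (1 + gamma)) * T(x)

# The limit solves x* = P_{Fix(T)} f(x*): here x*_1 = 0.5 x*_1 + 1, so x* = (2, 0).
print(x)    # approximately (2, 0)
```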

Shi and Chen [15] studied the convergence of the following viscosity iterations of Moudafi type for a nonexpansive mapping in CAT(0) spaces:
\begin{equation}\label{fc0} x_{t}=tf(x_{t})\oplus (1-t)T(x_{t}), \end{equation}
(1)
\begin{equation}\label{fc1} x_{n+1}=\alpha_{n}f(x_{n})\oplus (1-\alpha_{n})T(x_{n}). \end{equation}
(2)
They proved that \(\{x_{t}\}\) defined by (1) converges strongly as \(t\rightarrow 0\) and \(\{x_{n}\}\) defined by (2) converges strongly to a fixed point of \(T\) in the framework of CAT(0) spaces. In 2017, Zhao et al. [16] applied viscosity approximation methods to the implicit midpoint rule for nonexpansive mappings, $$x_{n+1}=\alpha_nf(x_n)\oplus(1-\alpha_n)T\left(\frac{x_n\oplus x_{n+1}}{2}\right),\;\;\forall n\geq 0.$$ Ke and Ma [17] proposed two generalized viscosity implicit rules:
\begin{equation} x_{n+1}=\alpha_nf(x_n)\oplus(1-\alpha_n)T\left(s_nx_n\oplus(1-s_n)x_{n+1}\right), \end{equation}
(3)
\begin{equation} x_{n+1}=\alpha_nx_n\oplus\beta_n f(x_n)\oplus\gamma_nT(s_nx_n\oplus(1-s_n)x_{n+1}). \end{equation}
(4)
Motivated and inspired by the work of Ke and Ma [17], in this paper we extend and study the following implicit viscosity rule for nonexpansive mappings in the framework of CAT(0) spaces: $$\left\{ \begin{array}{ll} x_{n+1}=T(y_{n}),\\ y_{n}=\alpha_{n}w_{n}\oplus\beta_{n}f(w_{n})\oplus\gamma_{n}T(w_{n}), \\ w_{n}=\frac{x_{n}\oplus x_{n+1}}{2}. \end{array} \right.$$
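To see how the implicit rule can be carried out in practice, here is a minimal numerical sketch in the Euclidean plane \(\mathbb{R}^2\) (a flat CAT(0) space, where \(\oplus\) reduces to an ordinary convex combination). The mappings \(T\), \(f\) and the parameter sequences are illustrative assumptions of ours, not prescribed by the paper; the implicit step is resolved by an inner fixed-point loop.

```python
import numpy as np

# Illustrative (assumed) maps on R^2, viewed as a flat CAT(0) space where
# geodesic combinations reduce to ordinary convex combinations.
def T(x):                        # nonexpansive: metric projection onto the closed unit disc
    r = np.linalg.norm(x)
    return x if r <= 1.0 else x / r

def f(x):                        # contraction with coefficient theta = 0.3
    return 0.3 * x + np.array([0.5, 0.0])

x = np.array([3.0, 4.0])         # x_0
for n in range(2000):
    a = 1.0 / (n + 3)            # alpha_n -> 0
    b = 1.0 / (n + 3)            # beta_n  -> 0
    g = 1.0 - a - b              # gamma_n -> 1, and alpha_n + beta_n + gamma_n = 1
    # The step is implicit: x_{n+1} appears inside w_n = (x_n (+) x_{n+1})/2.
    # Resolve it by an inner fixed-point loop z -> T(a*w + b*f(w) + g*T(w)).
    z = x.copy()
    for _ in range(50):
        w = 0.5 * (x + z)
        y = a * w + b * f(w) + g * T(w)
        z_next = T(y)
        if np.linalg.norm(z_next - z) < 1e-12:
            z = z_next
            break
        z = z_next
    x = z

print(x)    # close to (5/7, 0), the unique point with x* = P_{Fix(T)} f(x*)
```

The inner loop converges because, with \(T\) nonexpansive and \(f\) a \(\theta\)-contraction, the map it iterates has Lipschitz constant at most \(\frac{1}{2}(\alpha_{n}+\theta\beta_{n}+\gamma_{n})\leq\frac{1}{2}\).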

2. Preliminaries

Let \((X,d)\) be a metric space. A geodesic path joining \(x\in X\) to \(y\in X\) (or, more briefly, a geodesic from \(x\) to \(y\)) is a map \(c\) from a closed interval \([0,l]\subset \mathbb{R}\) to \(X\) such that \(c(0)=x\), \(c(l)=y\), and \(d(c(t),c(t'))=|t-t'|\) for all \(t,t'\in [0, l]\). In particular, \(c\) is an isometry and \(d(x,y)=l\). The image of \(c\) is called a geodesic (or metric) segment joining \(x\) and \(y\). When it is unique, this geodesic segment is denoted by \([x, y]\). The space \((X,d)\) is said to be a geodesic space if every two points of \(X\) are joined by a geodesic, and \(X\) is said to be uniquely geodesic if there is exactly one geodesic joining \(x\) and \(y\) for each \(x,y \in X\). A subset \(Y \subset X\) is said to be convex if \(Y\) includes every geodesic segment joining any two of its points. A geodesic triangle \(\triangle(x_{1}, x_{2}, x_{3})\) in a geodesic metric space \((X,d)\) consists of three points \(x_{1},x_{2}\), and \(x_{3}\) in \(X\) (the vertices of \(\triangle\)) and a geodesic segment between each pair of vertices (the edges of \(\triangle\)). A comparison triangle for the geodesic triangle \(\triangle (x_{1},x_{2},x_{3})\) in \((X,d)\) is a triangle \(\overline{\triangle}(x_{1},x_{2},x_{3}) :=\triangle (\overline{x_{1}},\overline{ x_{2}}, \overline{x_{3}})\) in the Euclidean plane \(\mathbb{E}^2\) such that \(d_{\mathbb{E}^2}(\overline{x_{i}},\overline{x_{j}})=d(x_{i},x_{j})\) for \(i,j=1,2,3.\) A geodesic space is said to be a CAT(0) space if all geodesic triangles satisfy the following comparison axiom. CAT(0): let \(\triangle\) be a geodesic triangle in \(X\), and let \(\overline{\triangle}\) be a comparison triangle for \(\triangle\). Then \(\triangle\) is said to satisfy the CAT(0) inequality if for all \(x, y\in \triangle\) and all comparison points \(\overline{x},\overline{y}\in \overline{\triangle}\),
\begin{equation} d(x,y)\leq d_{\mathbb{E}^2}(\overline{x},\overline{y}). \end{equation}
(5)
Let \(x,y\in X\). By Lemma 2.1(iv) of [18], for each \(t\in [0,1]\) there exists a unique point \(z\in [x,y]\) such that
\begin{equation} d(x,z)=td(x,y),\;\;\;\; d(y,z)=(1-t)d(x,y). \end{equation}
(6)
From now on, we will use the notation \((1-t)x\oplus ty\) for the unique point \(z\) satisfying the above equation.
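As a quick sanity check, the following minimal sketch (our own illustration, not part of the paper) realizes \((1-t)x\oplus ty\) in the Euclidean plane, where geodesic segments are straight line segments, and verifies the two identities in (6).

```python
import numpy as np

def combine(x, y, t):
    """The point z = (1-t)x (+) ty on the geodesic segment [x, y]; in the
    Euclidean model this is just the ordinary convex combination."""
    return (1 - t) * x + t * y

x, y, t = np.array([0.0, 1.0]), np.array([4.0, -2.0]), 0.3
z = combine(x, y, t)
d = np.linalg.norm

# The identities in (6): d(x,z) = t d(x,y) and d(y,z) = (1-t) d(x,y).
print(np.isclose(d(x - z), t * d(x - y)))        # True
print(np.isclose(d(y - z), (1 - t) * d(x - y)))  # True
```

We now collect some elementary facts about CAT(0) spaces which will be used in the proofs of our main results.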

Lemma 2.1. Let \(X\) be a CAT(0) space.

  1. For any \(x,y,z\in X\) and \(t\in [0,1]\),
    \begin{equation} d((1-t)x\oplus ty,z)\leq (1-t)d(x,z)+td(y,z) \end{equation}
    (7)
  2. For any \(x,y,z\in X\) and \(t\in [0,1]\),
    \begin{equation} d^2((1-t)x\oplus ty,z)\leq (1-t)d^2(x,z)+td^2(y,z)-t(1-t)d^2(x,y). \end{equation}
    (8)

Complete CAT(0) spaces are often called Hadamard spaces (see [1]). If \(x,y_{1},y_{2}\) are points of a CAT(0) space and \(y_{0}\) is the midpoint of the segment \([y_{1},y_{2}]\), which we will denote by \(\frac{y_{1}\oplus y_{2}}{2}\), then the CAT(0) inequality implies
\begin{equation} d^{2}\left(x, \frac{y_{1}\oplus y_{2}}{2}\right)\leq \frac{1}{2}d^2(x,y_{1})+\frac{1}{2}d^2(x,y_{2})-\frac{1}{4}d^2(y_{1},y_{2}). \end{equation}
(9)
This inequality is the (CN) inequality of Bruhat and Tits [19]. In fact, a geodesic space is a CAT(0) space if and only if it satisfies the (CN) inequality (cf. [1], page 163).
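The following short sketch (an illustration of ours) checks inequality (8) and the (CN) inequality (9) for random points of the Euclidean plane, where both in fact hold with equality.

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.linalg.norm

ok = True
for _ in range(1000):
    x, y, z = rng.normal(size=(3, 2))      # random points in the Euclidean plane
    t = rng.uniform()
    p = (1 - t) * x + t * y                # the point (1-t)x (+) ty

    # Inequality (8); in the flat Euclidean model it holds with equality.
    lhs8 = d(p - z) ** 2
    rhs8 = (1 - t) * d(x - z) ** 2 + t * d(y - z) ** 2 - t * (1 - t) * d(x - y) ** 2

    # (CN) inequality (9) with y1 = x, y2 = y and midpoint m.
    m = 0.5 * (x + y)
    lhs9 = d(z - m) ** 2
    rhs9 = 0.5 * d(z - x) ** 2 + 0.5 * d(z - y) ** 2 - 0.25 * d(x - y) ** 2

    ok &= lhs8 <= rhs8 + 1e-9 and lhs9 <= rhs9 + 1e-9

print(ok)   # True: both inequalities hold in the Euclidean plane
```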

Definition 2.2. Let \(X\) be a CAT(0) space and \(T: X\rightarrow X\) be a mapping. Then \(T\) is called nonexpansive if $$d(T(x), T(y))\leq d(x,y), \;\;\; \forall\, x,y\in X.$$

Definition 2.3. Let \(X\) be a CAT(0) space and \(T: X\rightarrow X\) be a mapping. Then \(T\) is called a contraction with coefficient \(\theta\in [0,1)\) if $$d(T(x), T(y))\leq \theta d(x,y), \;\;\; \forall\, x,y\in X.$$

Berg and Nikolaev [4] introduced the concept of quasilinearization as follows. Let us denote the pair \((a,b)\in X\times X\) by \(\overrightarrow{ab}\) and call it a vector. Then quasilinearization is defined as the map $$\langle \cdot,\cdot\rangle: (X\times X)\times (X\times X) \longrightarrow\mathbb{R}$$ given by
\begin{equation} \langle \overrightarrow{ab},\overrightarrow{cd}\rangle=\frac{1}{2}(d^{2}(a,d)+d^{2}(b,c)-d^{2}(a,c)-d^{2}(b,d)) \end{equation}
(10)
It is easy to see that \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle=\langle \overrightarrow{cd},\overrightarrow{ab}\rangle\), \(\langle \overrightarrow{ab},\overrightarrow{cd}\rangle=-\langle \overrightarrow{ba},\overrightarrow{cd}\rangle\) and \(\langle \overrightarrow{ax},\overrightarrow{cd}\rangle+\langle \overrightarrow{xb},\overrightarrow{cd}\rangle=\langle \overrightarrow{ab},\overrightarrow{cd}\rangle\) for all \(a,b,c,d,x\in X\). We say that \(X\) satisfies the Cauchy-Schwarz inequality if $$\langle \overrightarrow{ab},\overrightarrow{cd}\rangle\leq d(a,b)d(c,d)$$ for all \(a,b,c,d\in X\). It is well known [4] that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality. Let \(C\) be a nonempty closed convex subset of a complete CAT(0) space \(X\). The metric projection \(P_{c}: X\rightarrow C\) is defined by $$u=P_{c}(x)\Longleftrightarrow d(u,x)=\inf\{d(y,x):y\in C\},\;\;\; \forall x\in X.$$
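In the Euclidean plane the quasilinearization (10) reduces to the ordinary inner product \(\langle b-a,\, d-c\rangle\); the sketch below (an illustrative computation of ours) checks this identity and the Cauchy-Schwarz inequality numerically.

```python
import numpy as np

d = np.linalg.norm

def quasi(a, b, c, e):
    """Quasilinearization <ab, ce> from (10); the point e plays the role of d
    in the formula, since d is reserved here for the norm."""
    return 0.5 * (d(a - e)**2 + d(b - c)**2 - d(a - c)**2 - d(b - e)**2)

rng = np.random.default_rng(1)
ok = True
for _ in range(1000):
    a, b, c, e = rng.normal(size=(4, 2))
    q = quasi(a, b, c, e)
    ok &= np.isclose(q, np.dot(b - a, e - c))      # agrees with <b - a, e - c>
    ok &= q <= d(a - b) * d(c - e) + 1e-9          # Cauchy-Schwarz inequality

print(ok)   # True
```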

Definition 2.4. A mapping \(P_{c}: X\rightarrow C\) is called the metric projection if for every \(x\in X\) there exists a unique nearest point in \(C\), denoted by \(P_{c}x\), such that $$d(x, P_{c}x)\leq d(x,y), \;\;\; \forall\, y\in C.$$

The following theorem gives conditions under which a projection mapping is nonexpansive.

Theorem 2.5. Let \(C\) be a non-empty closed convex subset of a real CAT(0) space \(X\) and \(P_{c}: X\rightarrow C\) the metric projection. Then

  1. \(d^{2}(P_{c}x, P_{c}y)\leq \langle \overrightarrow{xy}, \overrightarrow{P_{c}xP_{c}y}\rangle\) for all \(x,y\in X\),
  2. \(P_{c}\) is a nonexpansive mapping, that is, \(d(P_{c}x,P_{c}y)\leq d(x,y)\) for all \(x,y\in X\),
  3. \(\langle \overrightarrow{xP_{c}x}, \overrightarrow{yP_{c}x}\rangle\leq 0\) for all \(x\in X\) and \(y\in C\).

Recall also that if \(T:C\rightarrow C\) is nonexpansive and, in addition, \(C\) is bounded, then \(Fix(T)\) is nonempty.
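As an illustration of Theorem 2.5, the sketch below (our own example) takes the closed unit disc of the Euclidean plane as the assumed convex set \(C\) and checks properties (1)-(3) for its metric projection.

```python
import numpy as np

d = np.linalg.norm

def P(x):
    """Metric projection of R^2 onto the closed unit disc C (an assumed,
    illustrative closed convex set)."""
    r = d(x)
    return x if r <= 1.0 else x / r

def quasi(a, b, c, e):
    # quasilinearization <ab, ce>, as in (10)
    return 0.5 * (d(a - e)**2 + d(b - c)**2 - d(a - c)**2 - d(b - e)**2)

rng = np.random.default_rng(2)
ok = True
for _ in range(1000):
    x, y = rng.normal(scale=3.0, size=(2, 2))
    yC = P(y)                                          # some point of C
    ok &= d(P(x) - P(y))**2 <= quasi(x, y, P(x), P(y)) + 1e-9   # property (1)
    ok &= d(P(x) - P(y)) <= d(x - y) + 1e-9                     # property (2)
    ok &= quasi(x, P(x), yC, P(x)) <= 1e-9                      # property (3)

print(ok)   # True
```

The following lemmas are very useful for proving our main results.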

Lemma 2.6. (The demiclosedness principle) Let \(C\) be a nonempty closed convex subset of the real CAT(0) space \(X\) and \(T:C\rightarrow C\) a nonexpansive mapping such that $$x_n\rightharpoonup x^\ast \in C\,\, \mbox{and}\,\, (I-T)x_n \rightarrow 0.$$ Then \(x^\ast=Tx^\ast\). (Here \(\rightarrow\) (respectively \(\rightharpoonup\)) denotes strong (respectively weak) convergence.) Moreover, the following result gives conditions for the convergence of nonnegative real sequences.

Lemma 2.7. Assume that \(\{a_n\}\) is a sequence of nonnegative real numbers such that \(a_{n+1}\leq(1-\beta_n)a_n+\delta_n, \forall n\geq0\), where \(\{\beta_n\}\) is a sequence in \((0,1)\) and \(\{\delta_n\}\) is a sequence with

  1. \(\sum_{n=0}^\infty\beta_n=\infty\)
  2. \(\limsup\limits_{n\rightarrow\infty}\frac{\delta_n}{\beta_n}\leq0\) or \(\sum_{n=0}^{\infty}|\delta_n|<\infty\).
Then \(\lim\limits_{n\rightarrow \infty} a_n=0\).
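A quick numerical illustration of Lemma 2.7, with the assumed choices \(\beta_n=\frac{1}{n}\) (so that \(\sum\beta_n=\infty\)) and \(\delta_n=\frac{\beta_n}{n}\) (so that \(\frac{\delta_n}{\beta_n}\rightarrow 0\)):

```python
# Illustrative sequences satisfying the hypotheses of Lemma 2.7.
a = 1.0
for n in range(1, 10**6 + 1):
    beta = 1.0 / n
    delta = beta / n
    a = (1 - beta) * a + delta     # a_{n+1} <= (1 - beta_n) a_n + delta_n

print(a)   # close to 0, as the lemma predicts
```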

3. The Main Result

Theorem 3.1. Let \(C\) be a non-empty closed convex subset of a complete CAT(0) space \(X\) and \(T:C\longrightarrow C\) be a nonexpansive mapping with \(\textrm{Fix}(T)\neq\emptyset\). Let \(f:C\longrightarrow C\) be a contraction with coefficient \(\theta\in [0,1)\). For an arbitrary initial point \(x_{0}\in C\), let \(\{x_{n}\}\) be the sequence generated by $$\left\{ \begin{array}{ll} x_{n+1}=T(y_{n}),\\ y_{n}=\alpha_{n}w_{n}\oplus\beta_{n}f(w_{n})\oplus\gamma_{n}T(w_{n}), \\ w_{n}=\frac{x_{n}\oplus x_{n+1}}{2}, \end{array} \right.$$ where \(\{\alpha_{n}\},\{\beta_{n}\}\) and \(\{\gamma_{n}\}\) are sequences in \((0,1)\) satisfying the following conditions:

  1. \(\alpha_{n}+\beta_{n}+\gamma_{n}=1\)
  2. \(\lim_{n\longrightarrow \infty}\alpha_{n}=0=\lim_{n\longrightarrow \infty}\beta_{n}\), \(\sum_{n=0}^{\infty}\beta_{n}=\infty\) and \(\lim_{n\longrightarrow \infty}\gamma_{n}=1\),
  3. \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\),
  4. \(\sum_{n=0}^{\infty}|\beta_{n+1}-\beta_{n}|<\infty\),
  5. \(\lim_{n\longrightarrow \infty}d(x_{n},T(x_{n}))=0 \).
Then \(\{x_{n}\}\) converges strongly to a fixed point \(x^\ast\) of the mapping \(T\), which is also the unique solution of the variational inequality $$\langle \overrightarrow{x^{\ast}f(x^{\ast})}, \overrightarrow{yx^{\ast}}\rangle\geq 0, \;\;\; \forall \;y\in \textrm{Fix}(T).$$ In other words, \(x^{\ast}\) is the unique fixed point of the contraction \(P_{Fix(T)}f\), that is, \(P_{Fix(T)}f(x^\ast)=x^\ast\).
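To illustrate the variational-inequality characterization, the sketch below (an assumed, illustrative example in the Euclidean plane) computes \(x^{\ast}=P_{Fix(T)}f(x^{\ast})\) for \(T\) the projection onto the closed unit disc (so that \(Fix(T)\) is the disc itself) and an assumed contraction \(f\), and then checks \(\langle \overrightarrow{x^{\ast}f(x^{\ast})},\overrightarrow{yx^{\ast}}\rangle\geq 0\) for random \(y\in Fix(T)\).

```python
import numpy as np

d = np.linalg.norm

def proj_disc(x):                 # T = metric projection onto the closed unit disc,
    r = d(x)                      # so Fix(T) is the disc and P_{Fix(T)} = T
    return x if r <= 1.0 else x / r

def f(x):                         # assumed contraction, theta = 0.4
    return 0.4 * x + np.array([0.9, 0.0])

def quasi(a, b, c, e):            # quasilinearization <ab, ce> from (10)
    return 0.5 * (d(a - e)**2 + d(b - c)**2 - d(a - c)**2 - d(b - e)**2)

# x* is the unique fixed point of the contraction P_{Fix(T)} f: iterate z <- P(f(z)).
z = np.zeros(2)
for _ in range(200):
    z = proj_disc(f(z))

rng = np.random.default_rng(3)
ys = [proj_disc(rng.normal(scale=2.0, size=2)) for _ in range(1000)]
ok = all(quasi(z, f(z), y, z) >= -1e-9 for y in ys)   # <x* f(x*), y x*> >= 0
print(z, ok)   # x* is approximately (1, 0); the inequality holds for every sampled y
```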

Proof. We divide the proof into four steps:

Step 1: First we show that the sequence \(\{x_{n}\}\) is bounded. Indeed, take \(p\in \textrm{Fix}(T)\) arbitrarily. We have \begin{eqnarray*} d(x_{n+1},p)&=& d(T(y_{n}),p)\\ &\leq& d(y_{n},p)\\ &=& d(\alpha_{n}w_{n}\oplus\beta_{n}f(w_{n})\oplus\gamma_{n}T(w_{n}), p)\\ &\leq & \alpha_{n}d(w_{n},p)+\beta_{n}d(f(w_{n}),p)+\gamma_{n}d(T(w_{n}), p)\\ &\leq & \alpha_{n}d(w_{n},p)+\beta_{n}d(f(w_{n}),f(p))+\beta_{n}d(f(p),p)+\gamma_{n}d(w_{n}, p)\\ &\leq & (\alpha_{n}+\theta\beta_{n}+\gamma_{n})d(w_{n},p)+\beta_{n}d(f(p),p)\\ &\leq & \left(\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n},p)+\left(\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n+1},p)+\beta_{n}d(f(p),p), \end{eqnarray*} where we used \(d(w_{n},p)\leq \frac{1}{2}d(x_{n},p)+\frac{1}{2}d(x_{n+1},p)\) and \(\alpha_{n}+\theta\beta_{n}+\gamma_{n}=1-\beta_{n}(1-\theta)\). It follows that \begin{eqnarray*} \left(1-\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n+1},p)&\leq&\left(\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n},p)+\beta_{n}d(f(p),p), \end{eqnarray*} which implies that

\begin{equation}\label{a1} (1+\beta_{n}(1-\theta))d(x_{n+1},p)\leq (1-\beta_{n}(1-\theta))d(x_{n},p)+2\beta_{n}d(f(p),p). \end{equation}
(11)
Since \(\beta_{n}, \theta\in (0,1)\), we have \(1-\beta_{n}(1-\theta)\geq 0\). Moreover, by (11) we get \begin{eqnarray*} d(x_{n+1},p)&\leq&\frac{1-\beta_{n}(1-\theta)}{1+\beta_{n}(1-\theta)}d(x_{n},p)+\frac{2\beta_{n}}{1+\beta_{n}(1-\theta)}d(f(p),p)\\ &\leq &\left[1-\frac{2\beta_{n}(1-\theta)}{1+\beta_{n}(1-\theta)}\right]d(x_{n},p)+\left[\frac{2\beta_{n}(1-\theta)}{1+\beta_{n}(1-\theta)}\right]\left(\frac{1}{1-\theta}d(f(p),p)\right). \end{eqnarray*} Thus we have $$d(x_{n+1},p)\leq \max\left\{d(x_{n},p),\frac{1}{1-\theta} d(f(p),p)\right\}.$$ By induction, we obtain $$d(x_{n+1},p)\leq \max\left\{d(x_{0},p),\frac{1}{1-\theta} d(f(p),p)\right\}.$$ Hence, we conclude that \(\{x_{n}\}\) is bounded. Consequently, \(\{w_{n}\}\), \(\{f(w_{n})\}\) and \(\{T(w_{n})\}\) are bounded as well.

Step 2: Now, we prove that \(\lim\limits_{n\rightarrow \infty}d(x_{n+1},x_{n})=0\). We have \begin{eqnarray*} d(x_{n+1},x_{n})&=& d(T(y_{n}),T(y_{n-1}))\\ &\leq& d(y_{n},y_{n-1})\\ &=& d(\alpha_{n}w_{n}\oplus\beta_{n}f(w_{n})\oplus\gamma_{n}T(w_{n}),\; \alpha_{n-1}w_{n-1}\oplus\beta_{n-1}f(w_{n-1})\oplus\gamma_{n-1}T(w_{n-1}))\\ &\leq &\alpha_{n}d(w_{n},w_{n-1})+\beta_{n}d(f(w_{n}),f(w_{n-1}))+\gamma_{n}d(T(w_{n}),T(w_{n-1}))\\ &&+\,|\alpha_{n}-\alpha_{n-1}|\,d(w_{n-1},T(w_{n-1}))+|\beta_{n}-\beta_{n-1}|\,d(f(w_{n-1}),T(w_{n-1}))\\ &\leq & (\alpha_{n}+\theta\beta_{n}+\gamma_{n})d(w_{n},w_{n-1})+\left(|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\right)M\\ &\leq & \left(\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n+1},x_{n})+\left(\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n},x_{n-1})\\ &&+ \left(|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\right)M, \end{eqnarray*} where \(M>0\) is a constant such that $$M\geq \max\left\{\sup_{n\geq 1}d(w_{n-1},T(w_{n-1})),\;\sup_{n\geq 1}d(f(w_{n-1}),T(w_{n-1}))\right\},$$ and where we used \(\gamma_{n}-\gamma_{n-1}=-(\alpha_{n}-\alpha_{n-1})-(\beta_{n}-\beta_{n-1})\) together with \(d(w_{n},w_{n-1})\leq \frac{1}{2}d(x_{n+1},x_{n})+\frac{1}{2}d(x_{n},x_{n-1})\). It follows that \begin{eqnarray*} \left(1-\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n+1},x_{n})&\leq&\left(\frac{1-\beta_{n}+\theta\beta_{n}}{2}\right)d(x_{n},x_{n-1})\\ &&+ \left(|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\right)M, \end{eqnarray*} that is, \begin{eqnarray*} (1+\beta_{n}(1-\theta))d(x_{n+1},x_{n})&\leq &(1-\beta_{n}(1-\theta))d(x_{n},x_{n-1})\\ &&+ 2\left(|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\right)M. \end{eqnarray*} Thus, we have \begin{eqnarray*} d(x_{n+1},x_{n})&\leq&\left(\frac{1-\beta_{n}(1-\theta)}{1+\beta_{n}(1-\theta)}\right)d(x_{n},x_{n-1})+\frac{2M}{1+\beta_{n}(1-\theta)} \left(|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\right). \end{eqnarray*} Since \(\beta_{n}, \theta\in (0,1)\), we have \(1+\beta_{n}(1-\theta)\geq 1\) and \(\frac{1-\beta_{n}(1-\theta)}{1+\beta_{n}(1-\theta)}\leq 1-\beta_{n}(1-\theta)\). Thus \begin{eqnarray*} d(x_{n+1},x_{n})&\leq&[1-\beta_{n}(1-\theta)]d(x_{n},x_{n-1})+2M \left(|\alpha_{n}-\alpha_{n-1}|+|\beta_{n}-\beta_{n-1}|\right). \end{eqnarray*} Since \(\sum_{n=0}^{\infty}\beta_{n}=\infty\), \(\sum_{n=0}^{\infty}|\alpha_{n+1}-\alpha_{n}|<\infty\) and \(\sum_{n=0}^{\infty}|\beta_{n+1}-\beta_{n}|<\infty\), Lemma 2.7 gives \(\lim\limits_{n\rightarrow \infty}d(x_{n+1},x_{n})=0\).

Step 3: In this step, we claim that $$\limsup\limits_{n\rightarrow\infty}\langle\overrightarrow{x^{\ast}f(x^\ast)},\overrightarrow{x^{\ast}x_{n}}\rangle\leq 0,$$ where \(x^{\ast}=P_{Fix(T)}f(x^{\ast})\). Indeed, since \(\{x_{n}\}\) is bounded, we can take a subsequence \(\{x_{n_{i}}\}\) of \(\{x_{n}\}\) which attains the above limit superior and converges weakly to some point \(p\in C\), that is, \(x_{n_{i}}\rightharpoonup p\). From \(\lim\limits_{n\rightarrow \infty}d(x_{n},T(x_{n}))=0\) and Lemma 2.6 we have \(p=Tp\). This, together with the property of the metric projection, implies that \begin{eqnarray*} \limsup\limits_{n\rightarrow\infty}\langle\overrightarrow{x^{\ast}f(x^\ast)},\overrightarrow{x^{\ast}x_{n}}\rangle &=&\lim\limits_{i\rightarrow\infty}\langle\overrightarrow{x^{\ast}f(x^\ast)},\overrightarrow{x^{\ast}x_{n_{i}}}\rangle\\ &=&\langle\overrightarrow{x^{\ast}f(x^\ast)},\overrightarrow{x^{\ast}p}\rangle\\ &\leq & 0. \end{eqnarray*}

Step 4: Finally, we show that \(x_{n}\rightarrow x^{\ast}\) as \(n\rightarrow \infty\), where \(x^{\ast}\in \textrm{Fix}(T)\) is the unique fixed point of the contraction \(P_{\textrm{Fix}(T)}f\). Consider \begin{eqnarray*} d^2(x_{n+1},x^{\ast})&=& d^2(T(y_{n}),x^{\ast})\\ &\leq& d^{2}(y_{n},x^{\ast})\\ &=& d^{2}(\alpha_{n}w_{n}\oplus\beta_{n}f(w_{n})\oplus\gamma_{n}T(w_{n}),x^{\ast})\\ &\leq& \alpha^{2}_{n}d^2(w_{n},x^{\ast})+\beta^{2}_{n}d^2(f(w_{n}),x^{\ast})+\gamma^{2}_{n}d^2(T(w_{n}),x^{\ast})\\ &&+2\alpha_{n}\beta_{n}\langle\overrightarrow{x^{\ast}w_{n}},\overrightarrow{x^{\ast}f(w_{n})}\rangle+2\alpha_{n}\gamma_{n}\langle\overrightarrow{x^{\ast}w_{n}},\overrightarrow{x^{\ast}T(w_{n})}\rangle\\ &&+ 2\beta_{n}\gamma_{n}\langle\overrightarrow{x^{\ast}f(w_{n})},\overrightarrow{x^{\ast}T(w_{n})}\rangle\\ &\leq& \alpha^{2}_{n}d^2(w_{n},x^{\ast})+\gamma^{2}_{n}d^2(w_{n},x^{\ast})+2\alpha_{n}\gamma_{n}d(w_{n},x^{\ast})d(T(w_{n}),x^{\ast})\\ &&+2\beta_{n}\gamma_{n}d(f(w_{n}),f(x^{\ast}))d(T(w_{n}),x^{\ast})+K_{n}\\ &\leq& (\alpha^{2}_{n}+\gamma^{2}_{n}+2\alpha_{n}\gamma_{n})d^{2}(w_{n},x^{\ast})+2\theta\beta_{n}\gamma_{n}d^{2}(w_{n},x^{\ast})+K_{n}\\ &=&\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)d^{2}(w_{n},x^{\ast})+K_{n}, \end{eqnarray*} where we used the Cauchy-Schwarz inequality, the nonexpansiveness of \(T\) and the fact that \(f\) is a contraction, and where \begin{eqnarray*} K_{n}&=&\beta^{2}_{n}d^2(f(w_{n}),x^{\ast})+2\alpha_{n}\beta_{n}\langle\overrightarrow{x^{\ast}w_{n}},\overrightarrow{x^{\ast}f(w_{n})}\rangle+2\beta_{n}\gamma_{n}\langle\overrightarrow{x^{\ast}f(x^{\ast})},\overrightarrow{x^{\ast}T(w_{n})}\rangle. \end{eqnarray*} By the (CN) inequality, \(d^{2}(w_{n},x^{\ast})\leq \frac{1}{2}d^{2}(x_{n},x^{\ast})+\frac{1}{2}d^{2}(x_{n+1},x^{\ast})\), so that $$d^{2}(x_{n+1},x^{\ast})\leq \frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)\left(d^{2}(x_{n},x^{\ast})+d^{2}(x_{n+1},x^{\ast})\right)+K_{n},$$ which implies $$\left[1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)\right]d^{2}(x_{n+1},x^{\ast})\leq \left[\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)\right]d^{2}(x_{n},x^{\ast})+K_{n}.$$ Thus, we have \begin{eqnarray*} d^2(x_{n+1},x^{\ast})&\leq& \frac{\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}d^{2}(x_{n},x^{\ast})+\frac{K_{n}}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}\\ &=&\left[1-\frac{1-\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}\right]d^{2}(x_{n},x^{\ast})+\frac{K_{n}}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}. \end{eqnarray*} Note that for all sufficiently large \(n\) we have $$0<1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)<1,$$ which implies $$\frac{1-\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}\geq 1-\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right). $$ Thus, we have \begin{eqnarray*} d^2(x_{n+1},x^{\ast})&\leq&\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)d^{2}(x_{n},x^{\ast})+\frac{K_{n}}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}. \end{eqnarray*} Since \(\beta_{n}\rightarrow 0\) and \(\gamma_{n}\leq 1\), we have \((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\leq 1-2(1-\theta)\beta_{n}+\beta_{n}^{2}\leq 1-(1-\theta)\beta_{n}\) for all sufficiently large \(n\), and therefore

\begin{equation}\label{aab} d^2(x_{n+1},x^{\ast})\leq \left(1-(1-\theta)\beta_{n}\right)d^{2}(x_{n},x^{\ast})+\frac{K_{n}}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}. \end{equation}
(12)
By \(\lim\limits_{n\rightarrow \infty}\alpha_{n}=\lim\limits_{ n\rightarrow \infty}\beta_{n}=0\), \(\lim\limits_{ n\rightarrow \infty}\gamma_{n}=1\), the boundedness of \(\{w_{n}\}\) and \(\{f(w_{n})\}\), the fact that \(\lim\limits_{n\rightarrow\infty}d(T(w_{n}),x_{n})=0\) (which follows from Step 2 and condition (5)), and Step 3, we have
\begin{equation}\label{ab} \begin{split} \limsup\limits_{n\rightarrow\infty}\frac{K_{n}}{\beta_{n}\left(1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)\right)}\\ =\limsup\limits_{n\rightarrow\infty}\left(\frac{\beta_{n}d^2(f(w_{n}),x^{\ast})+2\alpha_{n}\langle\overrightarrow{x^{\ast}w_{n}},\overrightarrow{x^{\ast}f(w_{n})}\rangle}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}\right.\\ \left.+\frac{2\gamma_{n}\langle\overrightarrow{x^{\ast}f(x^{\ast})},\overrightarrow{x^{\ast}T(w_{n})}\rangle}{1-\frac{1}{2}\left((1-\beta_{n})^2+2\theta\beta_{n}\gamma_{n}\right)}\right)\\ \leq 0. \end{split} \end{equation}
(13)
From (12), (13) and Lemma 2.7, we have $$\lim\limits_{n\rightarrow \infty}d(x_{n},x^{\ast})=0.$$ This implies that \(x_{n}\rightarrow x^{\ast}\) as \(n\longrightarrow \infty\). This completes the proof.

Competing Interests

The authors do not have any competing interests in the manuscript.

References

  1. Bridson, M. R., & Haefliger, A. (2013). Metric Spaces of Non-positive Curvature (Vol. 319). Springer Science & Business Media.
  2. Alghamdi, M. A., Alghamdi, M. A., Shahzad, N., & Xu, H. K. (2014). The implicit midpoint rule for nonexpansive mappings. Fixed Point Theory and Applications, 2014(1), 96.
  3. Attouch, H. (1996). Viscosity solutions of minimization problems. SIAM Journal on Optimization, 6(3), 769-806.
  4. Berg, I. D., & Nikolaev, I. G. (2008). Quasilinearization and curvature of Aleksandrov spaces. Geometriae Dedicata, 133(1), 195-218.
  5. Auzinger, W., & Frank, R. (1989). Asymptotic error expansions for stiff equations: an analysis for the implicit midpoint and trapezoidal rules in the strongly stiff case. Numerische Mathematik, 56(5), 469-499.
  6. Petrusel, A., & Yao, J. C. (2008). Viscosity approximation to common fixed points of families of nonexpansive mappings with generalized contractions mappings. Nonlinear Analysis: Theory, Methods & Applications, 69(4), 1100-1111.
  7. Shimoji, K., & Takahashi, W. (2001). Strong convergence to common fixed points of infinite nonexpansive mappings and applications. Taiwanese Journal of Mathematics, 387-404.
  8. Wu, D., Chang, S. S., & Yuan, G. X. (2005). Approximation of common fixed points for a family of finite nonexpansive mappings in Banach space. Nonlinear Analysis: Theory, Methods & Applications, 63(5), 987-999.
  9. Xu, H. K. (2004). Viscosity approximation methods for nonexpansive mappings. Journal of Mathematical Analysis and Applications, 298(1), 279-291.
  10. Yao, Y., & Shahzad, N. (2011). New methods with perturbations for non-expansive mappings in Hilbert spaces. Fixed Point Theory and Applications, 2011(1), 79.
  11. Dhompongsa, S., Kirk, W. A., & Panyanak, B. (2007). Nonexpansive set-valued mappings in metric and Banach spaces. Journal of Nonlinear and Convex Analysis, 8(1), 35.
  12. Kirk, W. A. (2003). Geodesic geometry and fixed point theory. In Seminar of Mathematical Analysis (Malaga/Seville, 2002/2003) (Vol. 64, pp. 195-225).
  13. Kirk, W. A. (2004). Geodesic geometry and fixed point theory II. Fixed Point Theory and Applications.
  14. Moudafi, A. (2000). Viscosity approximation methods for fixed-points problems. Journal of Mathematical Analysis and Applications, 241(1), 46-55.
  15. Shi, L. Y., & Chen, R. D. (2012). Strong convergence of viscosity approximation methods for nonexpansive mappings in CAT(0) spaces. Journal of Applied Mathematics, 2012.
  16. Zhao, L. C., Chang, S. S., Wang, L., & Wang, G. (2017). Viscosity approximation methods for the implicit midpoint rule of nonexpansive mappings in CAT(0) spaces. Journal of Nonlinear Sciences & Applications, 10(2).
  17. Ke, Y., & Ma, C. (2015). The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces. Fixed Point Theory and Applications, 2015(1), 190.
  18. Dhompongsa, S., & Panyanak, B. (2008). On \(\triangle\)-convergence theorems in CAT(0) spaces. Computers & Mathematics with Applications, 56(10), 2572-2579.
  19. Bruhat, F., & Tits, J. (1972). Groupes réductifs sur un corps local. Publications Mathématiques de l'Institut des Hautes Études Scientifiques, 41(1), 5-251.