1. Introduction
Special mappings with fixed points, such as contractive, nonexpansive and asymptotically nonexpansive mappings, have become a field of interest in their own right and have a variety of applications in related fields such as signal processing, image recovery and the geometry of objects [5], as well as in IMRT optimization, to pre-compute the dose-deposition coefficient (DDC) matrix; see [6]. In almost all branches of mathematics, we find some version of a theorem relating to fixed points of mappings of a special nature. Because of the vast range of applications in almost all areas of everyday life, research in this field is moving rapidly and an immense literature now exists.
Any equation that can be written as \(T(x) = x\), for some map \(T\) that is contractive with respect to some complete metric on \(X\), gives rise to such a fixed point iteration. The Mann iteration method [7] was the stepping stone in this regard and is invariably used in most of these problems, but it ensures only weak convergence; see [8]. Real-world problems in Hilbert spaces require strong convergence [9]. A large amount of research is devoted to modifying the Mann process so as to ensure strong convergence (see [10, 11, 12, 13, 14, 15, 16]). The first modification of the Mann process was proposed by Nakajo et al. in 2003 [10]. They introduced this modification for a single nonexpansive mapping, whereas Kim et al. introduced a variant for asymptotically nonexpansive mappings in Hilbert spaces in 2006 [12]. In the same year, Martinez et al. introduced an Ishikawa-type iterative scheme for nonexpansive mappings in Hilbert spaces [13], giving a variant of the Halpern method. Su et al. [14] gave a hybrid iteration process for monotone nonexpansive mappings. Liu et al. gave a novel method for a finite family of quasi-asymptotically pseudo-contractive mappings [16]. Others have also worked on this problem; for more detail, see [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27].
In this paper, by using viscosity approximation methods for asymptotically nonexpansive mappings, we obtain the fixed point of an iterative sequence, which is the unique solution of a variational inequality, under certain necessary and sufficient conditions. The results presented in this paper extend and improve mainly the results in [28], which in turn are improvements and extensions of the results in [1, 2, 3, 4].
2. Preliminaries
Throughout this paper, we will assume \(E\) to be a real Banach space and \(M \neq \emptyset\) to be a closed, bounded and convex subset of \(E\). Also, \(T\) will be a mapping from \(M\) to itself and \(F(T)\) will denote the set of fixed points of \(T\). \(T\) is said to be nonexpansive if, for all \(y\), \(z \in M\), \(\|T(y) - T(z)\| \leq \|y - z\|\). It is called asymptotically nonexpansive if there exists a sequence \((l_{m})\) in \([1,\infty)\) with \(\lim\limits_{m \to \infty}l_{m} = 1\), such that for all \(y\), \(z \in M\) and \(m \geq 0\), \(\|T^{m}(y) - T^{m}(z)\| \leq l_{m}\|y - z\|\). Similarly, \(T\) is called uniformly \(L\)-Lipschitzian if there exists \(L > 0\) such that, for all \(y\), \(z \in M\) and \(m \geq 0\), \(\|T^{m}(y)-T^{m}(z)\|\leq L\|y - z\|\).
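These definitions can be checked numerically on a toy example. The Python sketch below uses the illustrative choice \(T = \sin\) on the real line, which is nonexpansive (since \(|\cos| \leq 1\) bounds the derivative) and hence asymptotically nonexpansive with \(l_m = 1\); it is only an illustration, not part of the theory above.

```python
import math
import random

def T(x):
    # |sin(y) - sin(z)| <= |y - z|, so T is nonexpansive on the reals
    return math.sin(x)

def iterate(f, x, m):
    # the m-th iterate f^m(x)
    for _ in range(m):
        x = f(x)
    return x

random.seed(0)
ok = True
for _ in range(1000):
    y, z = random.uniform(-1, 1), random.uniform(-1, 1)
    for m in range(1, 6):
        # nonexpansive implies asymptotically nonexpansive with l_m = 1
        if abs(iterate(T, y, m) - iterate(T, z, m)) > abs(y - z) + 1e-12:
            ok = False
print(ok)  # True
```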
Remark 2.1. Every contractive mapping is nonexpansive, every nonexpansive mapping is asymptotically nonexpansive, and every asymptotically nonexpansive mapping is uniformly \(L\)-Lipschitzian with an appropriate constant. In general, the converses of these statements do not hold. Asymptotically nonexpansive mappings are an important generalization of nonexpansive mappings. For further details, see [29].
Let \(f \in E^*\), where \(E^*\) is the dual of \(E\). The pairing of \(f \in E^*\) and \(x \in E\), denoted by \(\langle f, x \rangle\), is called the duality pairing on \(E\). Let \(P(E^{*})\) denote the power set of \(E^*\). Define \(J : E \to P(E^{*})\), for any \(y \in E\), as \(J(y) = \{ j \in E^{*} \colon \langle y, j \rangle = \|y\|^2 = \|j\|^2\}\). This \(J\) is called the normalized duality mapping of \(E\). We will use \(j\) to denote a single-valued selection of the normalized duality mapping \(J\). Let \(S\) be the unit sphere of a Banach space \(E\), i.e., \(S = \left\{ y \in E \colon \|y\| =1 \right\}\). \(E\) is said to have a Gâteaux differentiable norm if, for every \(y\), \(z \in S\), the limit \(\lim\limits_{h \to 0}\frac{\|y + hz\| - \|y\|}{h}\) exists. If, for each \(z \in S\), this limit exists uniformly for \(y \in S\), then \(E\) is said to have a uniformly Gâteaux differentiable norm.
Remark 2.2. It is well known that, if \(E\) has a uniformly Gâteaux differentiable norm, then the normalized duality mapping \(J : E \to P(E^*)\) is uniformly continuous from the norm topology of \(E\) to the weak\(^*\) topology of \(E^*\) on bounded subsets of \(E\).
The normal structure coefficient is defined as \(N(E) = \inf\limits_{M \subset E} \left\{\frac{d(M)}{r(M)}\right\}\), where \(d(M)\) and \(r(M)\) are the diameter and the Chebyshev radius of \(M\), respectively [28]. If \(N(E) > 1\), then \(E\) is said to have uniform normal structure. Every space with uniform normal structure is reflexive.
The variational inequality problem is the problem of finding \(y \in M\) such that \(\left\langle F(y), z - y \right\rangle \geq 0\) for all \(z \in M\), where \(F : M \to E^{*}\).
A linear continuous functional \(v \in (l^{\infty})^*\) is called a Banach limit [30] if \(\|v\| = 1\), \(v_m(\xi_m) = v_m(\xi_{m+1})\) and \(\liminf\limits_{m \to \infty} \xi_m \leq v_m(\xi_m) \leq \limsup\limits_{m \to \infty} \xi_m\), for all \(x = \{\xi_m\} \in l^{\infty}\). Here, it is common to write \(v_m(\xi_m)\) instead of \(v(x)\).
In order to prove our main theorem, we will need the following results.
Lemma 2.3. [21] Let \(E\) be a Banach space having uniform normal structure, \(M\) a non-empty bounded subset of \(E\) and \(T : M \longrightarrow M\) a uniformly \(L\)-Lipschitzian mapping with \(L < \sqrt{N(E)}\). Suppose also that there exists a nonempty, bounded, convex subset \(A\) of \(M\) with the property that, for every \(x \in A\), the weak \(\omega\)-limit set of \(T\) at \(x\), denoted by \(\omega_{w}(x)\), is a subset of \(A\), i.e., for some \(m_{i} \to \infty\), \begin{equation*} \omega_{w}(x) := \{ y \in E \colon y = \text{weak-}\lim\limits_{i} T^{m_{i}}(x), x \in A \} \subset A. \end{equation*} Then, \(T\) has a fixed point in \(M\).
Lemma 2.4. [31] Let \(\{\xi_m\}\), \(\{\eta_m\}\) and \(\{\gamma_m\}\) be three non-negative real sequences, with \(\eta_m = o(\mu_m)\), \(\sum\limits_{m = 0}^{\infty} \gamma_m < \infty\) and \(\xi_{m+1} \leq (1 - \mu_m)\xi_m + \eta_m + \gamma_m\), for all \(m \geq m_0\), \(m_0 \in \mathbb{Z}^+\), where \(\{\mu_m\} \subset (0, 1)\), with \(\sum\limits_{m = 0}^{\infty} \mu_m = \infty\). Then, \(\lim\limits_{m \to \infty} \xi_m = 0\).
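The mechanism of Lemma 2.4 can be observed numerically. In the Python sketch below, the illustrative choices \(\mu_m = 1/(m+1)\), \(\eta_m = 1/(m+1)^2 = o(\mu_m)\) and \(\gamma_m = 2^{-m}\) satisfy the hypotheses, and iterating the recursion (with equality, the worst case) drives \(\xi_m\) toward \(0\); these particular sequences are our own choices, not taken from the lemma.

```python
# Worst case of the recursion in Lemma 2.4: take equality instead of <=.
xi = 1.0
for m in range(1, 200001):
    mu = 1.0 / (m + 1)        # mu_m in (0, 1) with divergent sum
    eta = 1.0 / (m + 1) ** 2  # eta_m = o(mu_m)
    gamma = 0.5 ** m          # gamma_m with convergent sum
    xi = (1 - mu) * xi + eta + gamma
print(xi < 1e-3)  # True: xi_m tends to 0
```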
Lemma 2.5.[ 23] Let \(E\) be a real Banach space and \(J\) be a normalized duality mapping on \(E\). For any \(y\), \(z \in E\), \(j(y + z) \in J(y + z)\) and \(j(y) \in J(y)\), the following statements are true.
- \(\|y + z\|^{2} \leq \|y\|^{2} + 2\langle z, j(y + z) \rangle\),
- \(\|y + z\|^{2} \geq \|y\|^{2} + 2\langle z, j(y) \rangle\).
Lemma 2.6. [28] Let \((t_{m})\) be a sequence in \((0,1)\), such that \(\lim \limits_{m \rightarrow \infty}t_{m} = 1\). Also, let \((l_{m})\) be a sequence in \([1, \infty)\) with \(\lim \limits_{m \rightarrow \infty}l_{m} = 1\). Then, for any \(\alpha \in (0, 1)\), the following are true for all \(m \geq 0\):
- \(0 < t_{m} < \dfrac{(1 - \alpha) l_{m}}{l_{m} - \alpha}\),
- \((l_{m}^{2} - 1) < \left( 1 - \dfrac{t_{m}}{l_{m}} \right)^{2}\),
- \(\dfrac{l_{m} - 1}{l_{m} - t_{m}} < \dfrac{l_{m} - t_{m}}{(l_{m} + 1) l^{2}_{m}} \to 0\).
3. Main Result
We have the well-known Noor iterative process [32]. If \((\alpha_{m})\), \((\beta_{m})\) and \((\gamma_{m})\) are sequences in \([0, 1]\), then \begin{eqnarray*} x_{m + 1} &=& \alpha_{m}x_{m} + (1 - \alpha_{m})T(y_{m}) \\ y_{m} &=& \beta_{m}x_{m} + (1 - \beta_{m})T(z_{m}) \\ z_{m} &=& \gamma_{m}x_{m} + (1 - \gamma_{m}) T(x_{m}). \end{eqnarray*} Corresponding to the above, we have the following three-step viscosity approximation method for asymptotically nonexpansive mappings in Banach spaces, whose strong convergence is also proved below.
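As a quick sanity check, the Noor process can be run on a toy example. The Python sketch below applies it to the nonexpansive map \(T = \cos\) on the real line with the illustrative constant choices \(\alpha_m = \beta_m = \gamma_m = 1/2\); the iterates approach the unique fixed point \(x^* = \cos(x^*) \approx 0.739085\). This is only an illustration of the scheme, not part of the convergence theory.

```python
import math

def T(x):
    # nonexpansive toy map; its unique fixed point is the Dottie number
    return math.cos(x)

x = 0.0
a = b = c = 0.5  # constant choices for (alpha_m), (beta_m), (gamma_m)
for _ in range(200):
    z = c * x + (1 - c) * T(x)  # z_m
    y = b * x + (1 - b) * T(z)  # y_m
    x = a * x + (1 - a) * T(y)  # x_{m+1}
print(round(x, 6))  # 0.739085
```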
Theorem 3.1. Let \(E\) be a real Banach space with uniform normal structure whose norm is uniformly Gâteaux differentiable. Let \(M \neq \emptyset\) be a bounded, closed and convex subset of \(E\) and let \(k : M \to M\) be a contraction with contractive constant \(\alpha \in (0, 1)\). Also, let \(T : M \to M\) be an asymptotically nonexpansive mapping and \((l_{m})\) a sequence in \([1, \infty)\), such that \(\sum\limits^{\infty}_{m = 0}(l_{m} - 1) < \infty\) and \(\lim\limits_{m \to \infty}l_{m} = 1\). Also, let \((t_{m})\) be a sequence in \((0, 1)\) such that, for all \(m \geq 0\), \(t_{m} \in (0, \eta_{m})\), where \(\eta_{m} = \min\left\{\frac{(1 - \alpha)l_{m}}{l_{m} - \alpha}, \hspace{1mm}l_{m}\left(1 - \sqrt{l^{2}_{m} - 1}\right)\right\}\) and \(\lim\limits_{m \to \infty} t_{m} = 1\). Given any \(x_{0} \in M\) and sequences \((\alpha_{m})\), \((\beta_{m})\) and \((\gamma_{m})\) in \([0, 1]\), define a sequence \((x_{m})\) as follows.
\begin{eqnarray}\label{eq2a} x_{m + 1} &=& \alpha_{m}k(x_{m}) + (1 - \alpha_{m})T^{m}(y_{m})\nonumber\\ y_{m} &=& \beta_{m} x_{m} + (1 - \beta_{m})T^{m}(z_{m})\nonumber\\ z_{m} &=& \gamma_{m} x_{m} + (1 - \gamma_{m})T^{m}(x_{m}). \end{eqnarray}
(1)
Then, for every \(m \geq 0\), \(\exists\) \(N_{m} \in M\), such that
\begin{equation}\label{eq3a} N_{m} = \left(1 - \frac{t_{m}}{l_{m}}\right)k(N_{m}) + \frac {t_{m}}{l_{m}}T^{m}(N_{m}). \end{equation}
(2)
Also, \((N_{m})\) and \((x_{m})\) converge strongly to some \(h \in F(T)\) which, for all \(u \in F(T)\), is the unique solution of the variational inequality \(\langle(I - k)h, \hspace{1mm} j(h - u)\rangle \leq 0\), if and only if the following conditions hold.
- \(\lim\limits_{m \to \infty} \|N_{m} - T(N_{m})\| = 0\) and \(\lim\limits_{m \to \infty} \|x_{m} - T(x_{m})\| = 0\),
- As \(m \to \infty\), \(\alpha_{m} \to 0\) and \(\sum\limits_{m = 0}^{\infty} \alpha_{m} = \infty\).
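To make the scheme (1) concrete, here is a small Python simulation under illustrative assumptions: \(E = \mathbb{R}\), \(T = \sin\) (nonexpansive, so \(l_m = 1\) and \(F(T) = \{0\}\)), the contraction \(k(x) = x/4\), and \(\alpha_m = 1/(m+2)\), which satisfies the second condition above. The iterates approach the fixed point \(h = 0\); this is a numerical sketch only, not a substitute for the proof.

```python
import math

def T_iter(x, m):
    # T^m(x) for T = sin; T is nonexpansive with F(T) = {0}
    for _ in range(m):
        x = math.sin(x)
    return x

def k(x):
    # contraction with constant alpha = 1/4
    return 0.25 * x

x = 1.0
beta = gamma = 0.5
for m in range(1500):
    alpha = 1.0 / (m + 2)  # alpha_m -> 0 and sum alpha_m = infinity
    z = gamma * x + (1 - gamma) * T_iter(x, m)
    y = beta * x + (1 - beta) * T_iter(z, m)
    x = alpha * k(x) + (1 - alpha) * T_iter(y, m)
print(abs(x) < 0.1)  # True: x_m is approaching h = 0
```

The sin iterates decay slowly (like \(\sqrt{3/m}\)), so the tolerance is deliberately loose.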
Proof. It follows from Lemma 2.6 that, for each \(m \geq 0\), \(t_{m} \in \left(0, \hspace{1mm}\frac{(1 - \alpha)l_{m}}{l_{m} - \alpha}\right)\). Define \(U_{m} : M \to M\) as \begin{equation*} U_{m}(x) = \left(1 - \dfrac{t_{m}}{l_{m}}\right)k(x) + \dfrac {t_{m}}{l_{m}}T^{m}(x). \end{equation*} It can easily be checked that \(U_m\) is a contraction. By the Banach fixed point theorem, there exists a unique fixed point \(N_{m} \in M\), for all \(m \geq 0\), such that \begin{equation*} U_{m}(N_m) = \left(1 - \dfrac{t_{m}}{l_{m}}\right)k(N_{m}) + \dfrac {t_{m}}{l_{m}}T^{m}(N_{m}) = N_m. \end{equation*} Next, we want to show that \((N_{m})\) converges strongly to some \(h \in F(T)\), which is the unique solution of the variational inequality \(\langle(I - k)h, \hspace{1mm} j(h - u) \rangle \leq 0\). To show this, let \(h \in F(T)\). Then, using Lemma 2.5, we get \begin{eqnarray*} \|N_{m} - h\|^{2} &\leq& 2\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - h) \rangle + 2\langle k(N_{m}) - k(h), \hspace{1mm} j(N_{m} - h) \rangle \\ &&+ 2\langle k(h) - h, \hspace{1mm} j(N_{m} - h) \rangle \\ &\leq& 2\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - h) \rangle + (\alpha^2 + 2 \alpha) \|N_{m} - h\|^{2}\\ && + 2\langle k(h) - h, \hspace{1mm} j(N_{m} - h) \rangle. \end{eqnarray*} This means that
\begin{eqnarray}\label{eq:000} (1 - \alpha^2 - 2\alpha) \|N_{m} - h\|^{2} &\leq& 2\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - h) \rangle \nonumber\\ &&+ 2\langle k(h) - h, \hspace{1mm} j(N_{m} - h) \rangle. \end{eqnarray}
(3)
For the second term on the right in (3), define \(\psi : M \longrightarrow \mathbb{R}\) by \(\psi(x) = \nu_{m} \|N_{m} - x\|^{2}\), where \(\nu\) is a Banach limit. Since \(E\) is a Banach space having uniform normal structure, it is reflexive. Also, \(\psi\) is continuous and convex. By the Mazur and Schauder theorem, there exists \(x' \in M\) such that \(\psi(x') = \inf\limits_{x \in M} \psi(x)\), and the set \(A = \{y \in M :\psi(y) = \inf\limits_{x \in M} \psi(x)\} \neq \emptyset\). It is also closed, bounded and convex. Since \(\|N_{m} - T(N_{m})\| \to 0\) by assumption, it can readily be seen that \(\bigcup\limits_{x \in M} \omega_w(x) \subset A\). By Lemma 2.3, \(T\) has a fixed point \(h \in A\). Since \(M\) is convex, for any \(x \in M\) and \(t \in [0, 1]\), we have \((1 - t)h + tx \in M\). As established above, \(\psi\) is continuous, so \(\psi(h) \leq \psi((1 - t)h + tx)\). This, with Lemma 2.5(1), can be written as \begin{eqnarray*} 0 &\leq& \dfrac{\psi((1 - t)h + tx) - \psi (h)}{t} \\ &=& \dfrac{\nu_{m}}{t} \left( \|\left( N_{m} - h \right) + t(h - x)\|^{2} - \|N_{m} - h\|^{2} \right) \\ &\leq& \dfrac{\nu_{m}}{t} \left( \left(\|N_{m} - h\|^{2} + 2 \langle t(h - x), \hspace{1mm} j(N_{m} - h + t(h - x)) \rangle \right) - \|N_{m} - h\|^{2} \right) \\ &=& 2\nu_{m} \langle h - x, \hspace{1mm} j(N_{m} - h + t(h - x)) \rangle. \end{eqnarray*} This implies that \begin{equation*} 2\nu_{m} \langle x - h, \hspace{1mm} j(N_{m} - h + t(h - x)) \rangle \leq 0. \end{equation*} Since \(M\) is bounded and \(j\) is norm-to-weak\(^*\) uniformly continuous, letting \(t \to 0\), for all \(x \in M\) we have \begin{equation*} 2\nu_{m} \langle x - h, \hspace{1mm} j(N_{m} - h) \rangle \leq 0. \end{equation*} In particular, since \(h \in M\) and \(k: M \to M\), there exists \(x \in M\) such that \(k(h) = x\). Thus, the above inequality can be written as
\begin{equation}\label{eq5a} 2\nu_{m} \langle k(h) - h, \hspace{1mm} j(N_{m} - h) \rangle \leq 0. \end{equation}
(4)
For the first term on the right in (3), for all \(u \in F(T)\), we can write \(\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - u) \rangle\), by using (2), as follows: \begin{eqnarray*} &&\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - u) \rangle\\ &=& \left\langle\left(1 - \dfrac{t_{m}}{l_{m}}\right)k(N_{m}) + \dfrac{t_{m}}{l_{m}}T^{m}(N_{m}) - k(N_{m}), \hspace{1mm} j(N_{m} - u) \right\rangle \\ &=& \dfrac{t_{m}}{l_{m}} \left\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - u) \right\rangle + \dfrac{t_{m}}{l_{m}} \left\langle T^{m}(N_{m}) - N_{m}, \hspace{1mm} j(N_{m} - u) \right\rangle. \end{eqnarray*} This implies that \begin{eqnarray*} \left\langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - u) \right\rangle &=& \dfrac{t_m}{l_m - t_m} \left\langle T^{m}(N_{m}) - N_{m}, \hspace{1mm} j(N_{m} - u) \right\rangle. \end{eqnarray*} Note that \begin{eqnarray*} \langle N_{m} - T^{m}(N_{m}), \hspace{1mm} j(N_{m} - u) \rangle &=& \|N_{m} - u\|^{2} - \langle T^{m}(N_m) - u, \hspace{1mm} j(N_{m} - u) \rangle. \end{eqnarray*} By Lemma 2.5(2), we have \begin{eqnarray*} \|N_{m} - u\|^{2} - \|(N_{m} - u) + (T^{m}(N_m) - u)\|^{2} &\leq& - 2\langle T^{m}(N_m) - u, j(N_{m} - u) \rangle. \end{eqnarray*} Also, \begin{eqnarray*} \|(N_{m} - u) + (T^{m}(N_m) - u)\|^{2} &\leq& \|N_{m} - u\|^2 + \|T^{m}(N_m) - u\|^2 + 2\|N_{m} - u\|\|T^{m}(N_m) - u\| \\ &\leq& (1 + l_m^2 + 2l_m)\|N_{m} - u\|^2 = (l_m + 1)^2\|N_{m} - u\|^2. \end{eqnarray*} This implies that \begin{eqnarray*} - l_m(l_m + 2)\|N_{m} - u\|^2 &\leq& \|N_{m} - u\|^2 - \|(N_{m} - u) + (T^{m}(N_m) - u)\|^{2} \\ &\leq& - 2\langle T^{m}(N_m) - u, j(N_{m} - u) \rangle. \end{eqnarray*} This means that \begin{eqnarray*} - \dfrac{l_m(l_m + 2)}{2}\|N_{m} - u\|^2 &\leq& - \langle T^{m}(N_m) - u, j(N_{m} - u) \rangle. \end{eqnarray*} Thus, our equation becomes \begin{eqnarray*} \dfrac{2 - l_m^2 - 2l_m}{2}\|N_{m} - u\|^2 &=& \|N_{m} - u\|^{2} - \dfrac{l_m(l_m + 2)}{2}\|N_{m} - u\|^2 \\ &\leq& \|N_{m} - u\|^{2} - \langle T^{m}(N_m) - u, j(N_{m} - u) \rangle \\ &=& \langle N_{m} - T^{m}(N_{m}), \hspace{1mm} j(N_{m} - u) \rangle. \end{eqnarray*} The above can be written as \begin{eqnarray*} \langle T^{m}(N_{m}) - N_m, \hspace{1mm} j(N_{m} - u) \rangle &\leq& \dfrac{l_m^2 + 2l_m - 2}{2}\|N_{m} - u\|^2. \end{eqnarray*} This means that \begin{eqnarray*} \langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - u) \rangle &\leq& \dfrac{t_m}{l_m - t_m}\cdot\dfrac{l_m^2 + 2l_m - 2}{2}\|N_{m} - u\|^2. \end{eqnarray*} Note that \(\dfrac{l_m^2 + 2l_m - 2}{2} \to \dfrac{1}{2}\) as \(m \to \infty\). Similarly, since \(t_m > 0\) and \(l_m \geq 1\), for all \(m > 0\), \(\dfrac{t_m}{l_m - t_m} > 0\). Thus, \(\lim\limits_{m \to \infty} \dfrac{t_m}{l_m - t_m} \geq 0\). Since \(M\) is bounded, \(\|N_{m} - u\|\) is bounded, for all \(u \in F(T)\). This shows that
\begin{equation}\label{eq7a} \limsup_{m \longrightarrow \infty} \langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - u) \rangle \leq 0. \end{equation}
(5)
Since \(h \in F(T)\) and (5) holds for all \(u \in F(T)\), we have
\begin{equation}\label{eq8a} \limsup_{m \longrightarrow \infty} \langle N_{m} - k(N_{m}), \hspace{1mm} j(N_{m} - h) \rangle \leq 0. \end{equation}
(6)
Using (4) and (6), it can be seen from (3) that \(\lim\limits_{m \to \infty} \|N_{m} - h\|^{2} = 0\). In particular, there is a subsequence \(\{N_{m_c}\} \subset \{N_{m}\}\) such that \(N_{m_c} \to h\) as \(c \to \infty\). For the uniqueness of \(h\), suppose there exists another subsequence \(\{N_{m_i}\} \subset (N_{m})\) such that \(N_{m_i} \to s\) as \(i \to \infty\), where \(s \in F(T)\). Since \(N_{m_c} \to h\), taking \(u = s\) in (5), we get \(\langle h - k(h), \hspace{1mm} j(h - s) \rangle \leq 0\). Similarly, since \(N_{m_i} \to s\), taking \(u = h\) in (5), we get \(\langle s - k(s), \hspace{1mm} j(s - h) \rangle \leq 0\). Adding these two gives \begin{equation*} \langle h - s - k(h) + k(s), \hspace{1mm} j(h - s) \rangle \leq 0. \end{equation*} Therefore, we have \begin{equation*} \|h - s\|^{2}\leq \langle k(h) - k(s), \hspace{1mm} j(h - s) \rangle \leq \alpha \|h - s\|^{2}. \end{equation*} Since \(\alpha < 1\), this implies that \(h = s\). Thus, \(N_{m} \to h\) and \(h \in F(T)\) is unique. From (6), for all \(u \in F(T)\), we have \begin{equation*} \langle h - k(h), \hspace{1mm} j(h - u) \rangle \leq 0. \end{equation*} Hence, \(h \in F(T)\) is the unique solution of the variational inequality \begin{equation*} \langle(I - k)h, \hspace{1mm} j(h - u)\rangle \leq 0. \end{equation*} In order to show that \((x_m)\) converges strongly to \(h\), we first need to show that \(\limsup\limits_{m \to\infty} \langle k(h) - h, \hspace{1mm} j(x_{m + 1} - h) \rangle \leq 0\). For simplicity, let \(C_{r} = \dfrac{t_{r}}{l_{r}}\), for each \(r \geq 0\). By using (1) and (2), for any \(m\), \(r \geq 0\), we can write \begin{eqnarray*} x_{m} - N_{r} &=& (1 - C_{r})(x_{m} - k(N_{r})) + C_{r}(x_{m} - T^{r}(N_{r})). \end{eqnarray*} Rearranging the above equation gives \begin{eqnarray*} C_{r}(x_{m} - T^r(N_{r})) &=& x_{m} - N_{r} - (1 - C_{r})(x_{m} - k(N_{r})). \end{eqnarray*} Taking the squared norm on both sides and using Lemma 2.5(2) gives \begin{eqnarray*} C^{2}_{r}\|x_{m} - T^{r}(N_{r})\|^{2} &\geq& \|x_{m} - N_{r}\|^{2} - 2(1 - C_{r}) \langle x_{m} - k(N_{r}), \hspace{1mm} j(x_{m} - N_{r}) \rangle \\ &\geq& \|x_{m} - N_{r}\|^{2} - 2(1 - C_{r})\|x_{m} - N_{r}\|^{2} \\ &&+ 2(1 - C_{r})\langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle \\ &=& (2C_{r} - 1)\|x_{m} - N_{r}\|^{2} + 2(1 - C_{r})\langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle. \end{eqnarray*} Rearranging the above inequality gives
\begin{eqnarray}\label{eq1c} && \langle k(N_{r})- N_r, \hspace{1mm} j(x_{m} - N_{r}) \rangle \nonumber \\ &\leq& \dfrac{(1 - 2C_{r})\|x_{m} - N_{r}\|^{2} + C^{2}_{r}\|x_{m} - T^{r}(N_{r})\|^{2}}{2(1 - C_{r})} \nonumber \\ &=& \dfrac{2C_{r} - 1}{2(1 - C_{r})} \left(\|T^{r}(N_{r}) - x_{m}\|^{2} - \|x_{m} - N_{r}\|^{2}\right) + \dfrac{(C_{r} - 1)^{2}}{2(1 - C_{r})} \|T^{r}(N_{r}) - x_{m}\|^{2} \nonumber \\ &\leq& \dfrac{2C_{r} - 1}{2(1 - C_{r})}\left(\left(\|T^{r}(N_{r}) - T^{r}(x_{m})\| + \|T^{r}(x_{m}) - x_{m}\|\right)^{2} - \|x_{m} - N_{r}\|^{2}\right)\nonumber\\ &&+ \dfrac{(C_{r} - 1)^{2}}{2(1 - C_{r})} \|T^{r}(N_{r}) - x_{m}\|^{2} \nonumber \\ &\leq& \dfrac{2C_{r} - 1}{2(1 - C_{r})}\left((l^{2}_{r} - 1) \|N_{r} - x_{m}\|^{2} + \|T^{r}(x_{m}) - x_{m}\|^{2} + 2l_{r} \|N_{r} - x_{m}\| \|T^{r}(x_{m}) - x_{m}\|\right) \nonumber \\ && + \dfrac{(C_{r} - 1)^{2}}{2(1 - C_{r})} \|T^{r}(N_{r}) - x_{m}\|^{2}. \end{eqnarray}
(7)
Since \((x_{m})\) and \((N_{r})\) are sequences in \(M\), they are bounded. Similarly, since \(T : M \to M\), \(\|T^{r}(N_{r}) - x_{m}\|\) is also bounded. So, let \begin{equation*} G_{1} = \sup\limits_{m, r \geq 0} \{\|T^{r}(N_{r}) - x_{m}\|, \|T^{r}(N_{r}) - x_{m}\|^{2}, \|N_{r} - x_{m}\|, \|N_{r} - x_{m}\|^{2}, \|x_{m} - h\|\}. \end{equation*} Note that \(G_{1} < \infty\). Hence, (7) can be written as
\begin{align}\label{eq:001} \langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle \leq \dfrac{2C_{r} - 1}{2(1 - C_{r})}\{(l^{2}_{r} - 1)G_{1}\nonumber\\ + 2 l_{r}G_{1} \|T^{r}(x_{m}) - x_{m}\| + \|T^{r}(x_{m}) - x_{m}\|^{2}\} + \dfrac{(C_{r} - 1)^{2}}{2(1 - C_{r})}G_{1}. \end{align}
(8)
It follows from Lemma 2.6(2) that \((l^{2}_{r} - 1) < (1 - C_{r})^{2}\), which shows that
\begin{equation}\label{eq3c} \dfrac{(2C_{r} - 1)}{2(1 - C_{r})}(l^{2}_{r} - 1) \leq \dfrac{(2C_{r} - 1)}{2(1 - C_{r})}(1 - C_{r})^{2} \leq \dfrac{(2C_{r} - 1)}{2}(1 - C_{r}). \end{equation}
(9)
Substituting (9) into (8), we get
\begin{eqnarray}\label{eq2c} \langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle &\leq& C_r(1 - C_{r})G_{1} + \dfrac{2C_{r} - 1}{2(1- C_{r})}\{2 l_{r}G_{1}\|T^{r}(x_{m}) - x_{m}\|\nonumber\\ &&+ \|T^{r}(x_{m}) - x_{m}\|^{2}\}. \end{eqnarray}
(10)
Further, by assumption, \(\lim\limits_{m \to \infty} \|x_{m} - T(x_{m})\| = 0\). Hence, for any \(r \geq 1\),
\begin{eqnarray}\label{eq4c} \|T^{r}(x_{m}) - x_{m}\| &\leq& (1 + l_{1} + l_{2} + \ldots + l_{r-1})\|T(x_{m}) - x_{m}\| \to 0 \end{eqnarray}
(11)
as \(m\rightarrow \infty\). From (10) and (11), for all \(r \geq 0\), we have \begin{eqnarray*} \limsup_{m \rightarrow \infty } \langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle &\leq& C_{r}(1 - C_{r})G_{1}. \end{eqnarray*} Since \(\lim\limits_{r \to \infty} t_r = 1\) and \(\lim\limits_{r \to \infty} l_r = 1\), we have \(\lim\limits_{r \to \infty} C_r = 1\). Thus,
\begin{equation}\label{eq6c} \limsup_{r \rightarrow \infty} \limsup_{m \rightarrow \infty} \langle k(N_{r}) - N_{r},\hspace{1mm} j(x_{m} - N_{r}) \rangle \leq 0. \end{equation}
(12)
Since \(N_{r} \to h \in F(T)\) and \(k\) is a contraction, \(k(N_{r}) \to k(h)\). Also, \(J\) is uniformly continuous from the norm topology of \(E\) to the weak\(^{\ast}\) topology of \(E^{\ast}\) on bounded subsets of \(E\); hence, for any given \(\varepsilon > 0\), there exists a positive integer \(m_{0}\), such that for any \(m, r \geq m_{0}\), we have \begin{eqnarray*} |\langle h - N_{r}, \hspace{1mm}j(x_{m} - N_{r}) \rangle| &<& \dfrac {\varepsilon}{3}\\ |\langle k(N_{r}) - k(h), \hspace{1mm} j(x_{m} - N_{r}) \rangle| &<& \dfrac {\varepsilon}{3}\\ |\langle k(h) - h, \hspace{1mm} j(x_{m} - N_{r}) - j(x_{m} - h) \rangle| &<& \dfrac {\varepsilon}{3}. \end{eqnarray*} Hence, for any \(m, r \geq m_{0}\), we have
\begin{eqnarray}\label{eq7c} && |\langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle - \langle k(h) - h, \hspace{1mm} j(x_{m} - h) \rangle| \nonumber \\ &\leq& |\langle k(N_{r}) - k(h),\hspace{1mm} j(x_{m} - N_{r}) \rangle| + |\langle k(h) - h, \hspace{1mm} j(x_{m} - N_{r}) - j(x_{m} - h) \rangle| + |\langle h - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle| \nonumber \\ & < & \dfrac {\varepsilon}{3} + \dfrac {\varepsilon}{3} + \dfrac {\varepsilon}{3} = \varepsilon. \end{eqnarray}
(13)
From (12) and (13), we have \begin{eqnarray*} \limsup_{m \rightarrow \infty}\langle k(h) - h, \hspace{1mm} j(x_{m} - h) \rangle &\leq& \limsup_{r \rightarrow \infty} \limsup_{m \rightarrow \infty} \langle k(N_{r}) - N_{r}, \hspace{1mm} j(x_{m} - N_{r}) \rangle + \varepsilon \hspace{1mm} \leq \hspace{1mm} \varepsilon. \end{eqnarray*} By the arbitrariness of \(\varepsilon > 0\), this becomes \begin{equation*} \limsup_{m \rightarrow \infty} \langle k(h) - h, \hspace{1mm}j(x_{m} - h) \rangle \leq 0. \end{equation*} In order to prove that \(x_{m} \rightarrow h\), consider the following.
\begin{eqnarray}\label{eq3b} \|x_{m + 1} - h\|^{2} &\leq& (1 - \alpha_{m})^{2} \|T^{m}(y_{m}) - h\|^{2} + 2 \alpha_{m} \langle k(x_{m}) - h, \hspace{1mm} j(x_{m + 1} - h) \rangle \nonumber \\ &\leq& (1 - \alpha_{m})^{2}l^{2}_{m} \|y_{m} - h\|^{2} + 2 \alpha_{m} \langle k(x_{m}) - k(h), \hspace{1mm} j(x_{m + 1} - h) \rangle\nonumber\\ &&+ 2 \alpha_{m} \langle k(h) - h, \hspace{1mm} j(x_{m + 1} - h) \rangle \nonumber\\ &\leq& (1 - \alpha_{m})^{2}l^{2}_{m} \|y_{m} - h\|^{2} + 2 \alpha_{m} \alpha \|x_{m} - h\| \|x_{m + 1} - h\| \nonumber\\ &&+ 2 \alpha_{m} \langle k(h) - h, \hspace{1mm} j(x_{m + 1} - h) \rangle. \end{eqnarray}
(14)
Using (1), we have \(\|z_{m} - h\| \leq l_{m} \|x_{m} - h\|\) and \(\|y_{m} - h\| \leq \beta_{m} \|x_{m} - h\| + (1 - \beta_{m})l_{m} \|z_{m} - h\|\). This gives us
\begin{equation}\label{eq6b} \|y_{m} - h\| \leq \beta_{m} \|x_{m} - h\| + (1 - \beta_{m})l^{2}_{m} \|x_{m} - h\| \leq l^{2}_{m} \|x_{m} - h\|. \end{equation}
(15)
Now consider the second term of (14):
\begin{equation}\label{eq7b} 2 \alpha_{m} \alpha \|x_{m} - h\| \|x_{m + 1} - h\| \leq \alpha_{m} \alpha \{\|x_{m} - h\|^{2} + \|x_{m + 1} - h\|^{2}\}. \end{equation}
(16)
Substituting (15) and (16) into (14) and simplifying gives \begin{align*} \|x_{m + 1} - h\|^{2} \leq (1 - \alpha_{m})^{2}l^{6}_{m} \|x_{m} - h\|^{2} + \alpha_{m}\alpha \|x_{m} - h\|^{2} + \alpha_{m}\alpha \|x_{m + 1} - h\|^{2}\\ + 2\alpha_{m} \langle k(h) - h, \hspace{1mm} j(x_{m + 1} - h) \rangle. \end{align*} If we define \(d_{m+1} := \max\left\{ \langle k(h) - h, \hspace{1mm} j(x_{m + 1} - h) \rangle, 0 \right\} \geq 0\), the above inequality can be written as \begin{eqnarray*} (1 -\alpha_{m} \alpha)\|x_{m + 1} - h\|^{2} &\leq& (1 - \alpha_{m})^{2}l^{6}_{m} \|x_{m} - h\|^{2} + \alpha_{m}\alpha \|x_{m} - h\|^{2} + 2\alpha_{m} d_{m+1} \\ &=& (1 - \alpha_{m})^{2}(l^{6}_{m} - 1) \|x_{m} - h\|^{2} + (1 - \alpha_{m})^{2}\|x_{m} - h\|^{2}\\ &&+ \alpha_{m}\alpha \|x_{m} - h\|^{2} + 2\alpha_{m} d_{m+1} \\ &=& (1-\alpha_{m})^{2}(l_{m} - 1)(l^{5}_{m} + l^{4}_{m} + l^{3}_{m} + l^{2}_{m} + l_{m} + 1) \|x_{m} - h\|^{2} \\ &&+ \left(1 - \alpha_m(2 - \alpha)\right)\|x_{m} - h\|^{2} + \alpha_{m}^{2}\|x_{m} - h\|^{2} + 2\alpha_{m} d_{m+1}. \end{eqnarray*} If we define \(G_2 := \sup\limits_{m \geq 1} \left\{(l^{5}_{m} + l^{4}_{m} + l^{3}_{m} + l^{2}_{m} + l_{m} + 1) \|x_{m} - h\|^{2}\right\}\), we can write the above inequality as
\begin{eqnarray}\label{eq:002} \|x_{m + 1} - h\|^{2} &\leq& \dfrac{(1 - \alpha_{m})^{2}}{1 - \alpha_{m} \alpha}(l_{m} - 1)G_2 + \dfrac{1 - \alpha_m(2 - \alpha)}{1 - \alpha_{m} \alpha}\|x_{m} - h\|^{2}\nonumber\\ &&+ \dfrac{\alpha_{m}^{2}}{1 - \alpha_{m} \alpha}G_2 + \dfrac{2\alpha_{m}}{1 - \alpha_{m} \alpha} d_{m+1}. \end{eqnarray}
(17)
By assumption, \(\lim\limits_{m \to \infty} \alpha_m = 0\). Thus, there exists \(m_0 \in \mathbb{Z}^+\) such that \(1 - \alpha_m \alpha > \frac{1}{2}\), for all \(m \geq m_0\). This means that \begin{equation*} \dfrac{1 - \alpha_m(2 - \alpha)}{1 - \alpha_{m} \alpha} = 1 - \dfrac{2\alpha_m(1 - \alpha)}{1 - \alpha_{m} \alpha} \leq 1 - 2\alpha_m(1 - \alpha). \end{equation*} Thus, inequality (17) implies \begin{eqnarray*} \|x_{m + 1} - h\|^{2} &\leq& 2(1 - \alpha_{m})^{2}(l_{m} - 1)G_2 + \left(1 - 2\alpha_m(1 - \alpha)\right)\|x_{m} - h\|^{2} + 2\alpha_{m}^{2}G_2\\ && + \dfrac{2\alpha_{m}}{1 - \alpha_{m} \alpha} d_{m+1}. \end{eqnarray*} Note that \(d_{m} \rightarrow 0\) [28]. It is not difficult to show that the assumptions of Lemma 2.4 are satisfied if we take \(\xi_m = \|x_{m} - h\|^{2}\), \(\mu_m = 2(1 - \alpha)\alpha_m\), \(\eta_m = 2G_2 \alpha_m^2 + \frac{2\alpha_{m}}{1 - \alpha_{m} \alpha} d_{m+1}\) and \(\gamma_m = 2(1 - \alpha_{m})^{2}(l_{m} - 1)G_2\). Thus, by Lemma 2.4, \(\lim\limits_{m \to \infty}\|x_{m} - h\|^{2} = 0\). Conversely, suppose that \(\lim\limits_{m \to \infty}x_{m} = h\) and \(\lim\limits_{m \to \infty}N_{m} = h\), where \(h \in F(T)\) is, for all \(u \in F(T)\), the unique solution of the variational inequality \(\langle(I - k)h, \hspace{1mm} j(h - u)\rangle \leq 0\), and \(T\) is an asymptotically nonexpansive mapping as given in the theorem. It is straightforward to see that, as \(m \to \infty\), \begin{eqnarray*} \|x_{m} - T(x_{m})\| &\leq& (1 + l_{1})\|x_{m} - h\| \to 0. \end{eqnarray*} Thus, \(\lim\limits_{m \to \infty} \|x_{m} - T(x_{m})\| = 0\). Using the same argument, we can show that \(\lim\limits_{m \to \infty} \|N_{m} - T(N_{m})\| = 0\). Next, for any \(m \geq 0\), let \(\beta_{m} = 1\) and \(k(x_m) = u\) in (1), where \(u \in M\) and \(u \neq h\). Then, (1) can be written as \begin{equation*} x_{m + 1} - T^{m}(x_{m}) = \alpha_{m}(u - T^{m}(x_{m})). \end{equation*} Since \(T\) is asymptotically nonexpansive and \(h \in F(T)\), \(\|T^{m}(x_{m}) - T^m(h)\| = \|T^{m}(x_{m}) - h\| \leq l_{m} \|x_{m} - h\|\). But \(x_m \to h\), which implies that \(\|T^{m}(x_{m}) - h\| \to 0\), i.e., \(T^{m}(x_{m}) \to h\), as \(m \to \infty\). This implies \begin{equation*} \limsup_{m \to \infty} \alpha_{m} \|u - T^{m}(x_{m})\| = \limsup_{m \to \infty} \|x_{m + 1} - T^{m}(x_{m})\| = 0. \end{equation*} Since \(T^{m}(x_{m}) \to h\) and \(u \neq h\), \(\|u - T^{m}(x_{m})\| \nrightarrow 0\) as \(m \to \infty\). Thus, for the above to be true, the only possibility is that \(\alpha_{m} \to 0\), as \(m \to \infty\).
Finally, for any \(m \geq 0\), let \(M = \{x \in E: \|x\| \leq 1\}\), \(T = -I\), where \(I\) is the identity mapping on \(M\), \(k = 0\) and \(\beta_{m} = \gamma_{m} = 1\). Then, the sequence in (1) can be written as follows. \begin{eqnarray*} x_{m + 1} &=& (1 - \alpha_{m})T^{m}(x_{m}) = (1 - \alpha_{m})(-I)^{m}x_{m} = (-1)^{m}(1 - \alpha_{m})x_{m}\\ &=& (-1)^{m + (m - 1)}(1 - \alpha_{m})(1 - \alpha_{m - 1})x_{m - 1} \\ &\vdots& \\ &=& (-1)^{m + (m - 1) + \cdots + 1}(1 - \alpha_{m})(1 - \alpha_{m - 1}) \cdots (1 - \alpha_{0}) x_{0}. \end{eqnarray*} Since \(T = -I\) has the unique fixed point \(0 \in M\), \(\lim\limits_{m \to \infty} \|x_{m + 1} - 0\| = \lim\limits_{m \to \infty}\prod_{j = 0}^{m}(1 - \alpha_{j}) \|x_{0}\| = 0\). For \(x_{0} \neq 0\), this implies that \(\prod_{j = 0}^{\infty}(1 - \alpha_{j}) = 0\), which means that \(\sum^{\infty}_{j = 0}\alpha_{j} = \infty\). As already mentioned in Remark 2.1, every nonexpansive mapping is a particular case of an asymptotically nonexpansive mapping. This means that Theorem 3.1 holds for any nonexpansive mapping as well. This is specifically interesting because, if \(T\) is nonexpansive, then we can remove the boundedness requirement on \(M\) in Theorem 3.1; for further details, see [1]. Again, since the sequence \((l_m)\) is then the constant sequence with value \(1\), in this scenario we get \(\eta_m = 1\), for all \(m \geq 0\). Hence, from Theorem 3.1, we can obtain the following theorem.
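The necessity argument with \(T = -I\) can also be observed numerically. The Python sketch below compares the illustrative choice \(\alpha_j = 1/(j+2)\) (divergent sum, so \(\prod(1-\alpha_j) \to 0\)) with the summable choice \(\alpha_j = 2^{-(j+1)}\), for which the product stays bounded away from \(0\) and \(x_m\) cannot reach the fixed point \(0\); the specific sequences are our own.

```python
def final_abs(alphas, x0=1.0):
    # runs x_{m+1} = (1 - alpha_m) * T^m(x_m) with T = -I, i.e.
    # x_{m+1} = (-1)^m (1 - alpha_m) x_m, and returns |x_last|
    x = x0
    for m, a in enumerate(alphas):
        x = ((-1) ** m) * (1 - a) * x
    return abs(x)

# sum alpha_j = infinity: |x_m| = prod (1 - alpha_j) -> 0
divergent = final_abs([1.0 / (j + 2) for j in range(20000)])
# sum alpha_j < infinity: the product converges to a positive limit
summable = final_abs([0.5 ** (j + 1) for j in range(20000)])
print(divergent < 1e-3, summable > 0.25)  # True True
```

Here the divergent case gives \(\prod_{j=0}^{N}(1-\alpha_j) = 1/(N+2)\), while the summable case converges to roughly \(0.289\).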
Theorem 3.2. Let \(E\) be a real Banach space with uniform normal structure whose norm is uniformly Gâteaux differentiable. Let \(M \neq \emptyset\) be a closed and convex subset of \(E\) and let \(k : M \to M\) be a contraction with contractive constant \(\alpha \in (0, 1)\). Also, let \(T : M \to M\) be a nonexpansive mapping with \(F(T) \neq \emptyset\). Also, assume \((t_{m})\) to be a sequence in \((0, 1)\) with \(\lim\limits_{m \to \infty} t_{m} = 1\). Given any \(x_{0} \in M\) and sequences \((\alpha_{m})\), \((\beta_{m})\) and \((\gamma_{m})\) in \([0, 1]\), define a sequence \((x_{m})\) as follows. \begin{eqnarray*} x_{m + 1} &=& \alpha_{m}k(x_{m}) + (1 - \alpha_{m})T(y_{m}) \\ y_{m} &=& \beta_{m} x_{m} + (1 - \beta_{m})T(z_{m}) \\ z_{m} &=& \gamma_{m} x_{m} + (1 - \gamma_{m})T(x_{m}). \end{eqnarray*} Then, for every \(m \geq 0\), there exists \(N_{m} \in M\), such that \begin{equation*} N_{m} = (1 - t_{m})k(N_{m}) + t_{m}T(N_{m}). \end{equation*} Also, \((N_{m})\) and \((x_{m})\) converge strongly to some \(h \in F(T)\) which, for all \(u \in F(T)\), is the unique solution of the variational inequality \(\langle(I - k)h, \hspace{1mm} j(h - u)\rangle \leq 0\), if and only if the following conditions hold.
- \(\lim\limits_{m \to \infty} \|N_{m} - T(N_{m})\| = 0\) and \(\lim\limits_{m \to \infty} \|x_{m} - T(x_{m})\| = 0\),
- As \(m \to \infty\), \(\alpha_{m} \to 0\) and \(\sum\limits_{m = 0}^{\infty} \alpha_{m} = \infty\).
4. Conclusion
In this paper, we introduced a new viscosity approximation method. Strong convergence of the proposed method is proved under certain assumptions. In uniformly smooth Banach spaces, Theorem 3.2 extends and improves the corresponding results of Xu [2], which themselves were an extension of the results of Moudafi [25]. Theorem 3.1 extends and improves the results presented by Chidume et al. [3], the scheme presented by Shahzad and Udomene [4], the theorem proved by Lim and Xu [1] and the corresponding results of Schu [17, 18]. Our result is also a direct extension, as well as an improvement, of the work done by Chang et al. [28].
Competing Interests
The authors declare that they have no competing interests.