Some new aspects of nonconvex inverse variational inequalities

Author(s): Muhammad Aslam Noor1, Khalida Inayat Noor2
1Department of Mathematics, COMSATS University Islamabad, Islamabad, Pakistan
2Department of Mathematics, COMSATS Institute of Information Technology, Park Road, Islamabad, Pakistan
Copyright © Muhammad Aslam Noor, Khalida Inayat Noor. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Some new classes of nonconvex inverse variational inequalities are considered and studied. Using the projection technique, we establish the equivalence between the nonconvex inverse variational inequalities and fixed point problems. This alternative equivalent formulation is used to study the existence of a solution of the nonconvex inverse variational inequalities. Several techniques, including projection, the auxiliary principle, dynamical systems and nonexpansive mappings, are explored for computing approximate solutions of nonconvex inverse variational inequalities. Convergence criteria of the proposed hybrid multi-step methods are investigated under suitable conditions. Our method of proof is very simple compared with other techniques. Some special cases are pointed out as applications of the results. It is an open problem to explore the applications of the nonconvex inverse variational inequalities in various fields of the mathematical and engineering sciences.

Keywords: inverse variational inequalities, nonconvex, iterative methods, auxiliary principle, dynamical systems, globally stable, fixed-point, convergence

1. Introduction

Variational inequality theory, introduced by Stampacchia [1] and Lions et al. [2], can be viewed as a novel generalization and extension of the variational principles. Since then, variational inequalities have played an important, fundamental and significant part as a unifying influence and as a guide in the mathematical interpretation of many physical phenomena. In fact, it has been shown that variational inequalities provide the most natural, direct, simple and efficient framework for the general treatment of a wide range of problems. Much attention has been given to developing numerical methods for solving variational inequalities and related optimization problems; see [1–42] and the references therein.

It is worth mentioning that almost all the results regarding the existence of solutions and iterative schemes for variational inequalities have been investigated under the assumption that the underlying set is convex. This is because the techniques are based on properties of the projection operator over convex sets, which may not hold in general when the sets are nonconvex. Clarke et al. [43] introduced and studied a new class of nonconvex sets, called uniformly prox-regular sets. This class has played an important part in many nonconvex applications such as optimization, dynamic systems and differential inclusions, and it includes the convex sets as a special case; see [22, 33, 43]. Bounkhel et al. [3], Noor [19–23] and Noor et al. [25, 28–30, 33] have established the equivalence between the nonconvex variational inequalities and the fixed point problems by applying the projection operator technique. This fixed point formulation has played a fundamental role in proving the existence of solutions and in developing numerical techniques for solving variational inequalities. Noor [15, 18] proposed three-step iterative methods for finding approximate solutions of general variational inequalities using the updating technique of the solution and the auxiliary principle. These forward-backward splitting algorithms are similar to the scheme of Glowinski et al. [6], which was suggested by using the Lagrangian technique, and are a natural generalization of the splitting methods. Three-step Noor iterations contain the Mann (one-step) and Ishikawa (two-step) iterations as special cases.
Inspired and motivated by the usefulness and applications of the Noor (three-step) splitting methods, several classes of three-step approximation schemes for solving variational inequalities, fixed point and related problems have been investigated. It has been established that the three-step method, known as the Noor (three-step) iteration, performs better than the Ishikawa (two-step) and Mann (one-step) iterations. For applications, generalizations and modifications of the Noor iterations, see [44–47]. Polyak [48] introduced inertial-type iterations for speeding up the convergence of iterative methods. In recent years, Alvarez [5], Noor [18] and Noor et al. [28, 33, 34] have suggested inertial methods for solving various classes of variational inequalities by applying the projection, fixed point and auxiliary principle approaches.

Exploiting the applications of the fixed point formulation, Dupuis and Nagurney [4] introduced and studied the projected dynamical systems associated with variational inequalities. The novel feature of a projected dynamical system is that its set of stationary points corresponds to the set of solutions of the corresponding variational inequality problem. Thus the equilibrium and nonlinear programming problems, which can be formulated in the setting of variational inequalities, can now be studied in the more general framework of dynamical systems. Xia et al. [41, 42] studied the stability of the dynamical systems and applied the neural network approach to solving the linear projection equations. Noor et al. [28, 32] have used the finite difference approach to suggest and investigate some iterative schemes for solving variational inequalities and related optimization problems.

It is well known that several techniques, such as projection, resolvent and descent methods, for solving variational inequalities and their variant forms cannot be applied for finding approximate solutions of certain types of variational inequalities. In such cases, the auxiliary principle technique is usually used to study the existence of a solution as well as to solve the variational inequalities approximately. This technique was used by Glowinski et al. [49] to prove the existence of a solution of mixed variational inequalities. For applications and developments of the auxiliary principle technique, see Noor [14, 18], Noor et al. [33–35], Patriksson [37], Glowinski et al. [6, 49] and the references therein.

Related to the variational inequalities, we have the problem of finding the fixed points of nonexpansive mappings, which is a subject of current interest in fixed point theory. It is natural to consider a unified approach to these two different problems. Combining these techniques, one can suggest and analyze classes of multi-step inertial iterative methods for finding a common element of the set of fixed points of a nonexpansive mapping and the set of solutions of the variational inequalities. In this direction, the multi-step methods include the Mann (one-step), Ishikawa (two-step) and Noor (three-step) iterations for solving variational inequalities and nonexpansive mappings.

Motivated and inspired by the ongoing research in these dynamic interlinked areas, we consider some new classes of general nonconvex variational inequalities. These new classes are equivalent to the nonconvex complementarity problems, which have not been studied previously. For different suitable choices of the operators and nonconvex sets, we point out some special cases as applications. It is shown that the nonconvex variational inequalities are equivalent to the fixed point problems using the projection approach. This alternative equivalent formulation plays the main role in proving the existence of a solution and in considering the dynamical systems, sensitivity and other aspects of general nonconvex variational inequalities, which are considered in §2, 3, 4, 5 and 6. The auxiliary principle technique is applied in Section 5 to propose and investigate some inertial iterative methods for solving the nonconvex variational inequalities. Our method of proof is very simple compared with other techniques. The results obtained in this paper continue to hold for the special cases and can be viewed as a significant refinement of the previously known results.

2. Basic concepts and formulation

Let \(H\) be a real Hilbert space whose inner product and norm are denoted by \(\langle\cdot, \cdot\rangle\) and \(\|\cdot\|,\) respectively. Let \(K\) be a nonempty and convex set in \(H\). We now recall some known basic facts and results from convex analysis and nonlinear optimization [43, 50, 51].

Definition 1. [43] The proximal normal cone of \(K\) at \(u \in H\) is given by \[\begin{aligned} N^{P}_{K}(u):= \{\xi \in H : u \in P_K[u+\alpha \xi ] \}, \end{aligned}\] where \(\alpha > 0\) is a constant and \[\begin{aligned} P_K[u]= \{ u^* \in K: d_K(u)=\|u-u^*\|\}. \end{aligned}\]

Here \(d_K(\cdot)\) is the usual distance function to the subset \(K,\) that is, \[\begin{aligned} d_K(u)= \inf _{v \in K }\|v-u\|. \end{aligned}\]
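The distance function and the (possibly multivalued) closest-point map can be computed directly for simple subsets of the real line. The following sketch (the sets and test points are illustrative choices, not taken from the paper) contrasts the unique projection onto a convex interval with the multivalued projection onto a nonconvex union of two intervals:

```python
# A minimal numerical sketch of d_K and P_K for subsets of the real line,
# each set represented as a list of closed intervals [(a, b), ...].

def d_K(u, pieces):
    """Distance from u to a union of closed intervals."""
    return min(max(a - u, 0.0, u - b) for (a, b) in pieces)

def P_K(u, pieces):
    """All closest points of u in the union (may be multivalued)."""
    best = d_K(u, pieces)
    cands = [min(max(u, a), b) for (a, b) in pieces]
    return sorted({c for c in cands if abs(abs(u - c) - best) < 1e-12})

interval = [(1.0, 3.0)]            # convex set K = [1, 3]
union = [(0.0, 1.0), (2.0, 3.0)]   # nonconvex set [0,1] U [2,3]

print(d_K(0.5, interval))   # 0.5
print(P_K(0.5, interval))   # [1.0] -- unique projection onto a convex set
print(P_K(1.5, union))      # [1.0, 2.0] -- multivalued at the midpoint of the gap
```

The multivalued output at the midpoint of the gap is exactly the phenomenon that fails for convex sets and motivates the prox-regularity restriction below.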

The proximal normal cone \(N^{P}_{K}(u)\) has the following characterization.

Lemma 1. [43] Let \(K\) be a nonempty, closed and convex subset in \(H.\) Then \(\zeta \in N^{P}_{K}(u),\) if and only if, there exists a constant \(\alpha > 0\) such that \[\begin{aligned} \langle\zeta, v-u \rangle\leq \alpha \|v-u\|^2, \quad \forall v \in K. \end{aligned}\]

Definition 2. [43] The Clarke normal cone, denoted by \(N^{C}_{K}(u)\), is defined as \[\begin{aligned} N^{C}_{K}(u) = \overline{co}[N^{P}_{K}(u)], \end{aligned}\] where \(\overline{co}\) means the closure of the convex hull.

Clearly \(N^{P}_{K}(u) \subset N^{C}_{K}(u),\) but the converse is not true. Note that \(N^{P}_{K}(u)\) is always closed and convex, whereas \(N^{C}_{K}(u)\) is convex, but may not be closed [43].

Clarke et al. [43] have introduced and studied a new class of nonconvex sets, which are called uniformly prox-regular sets. This class of uniformly prox-regular sets has played an important part in many nonconvex applications such as optimization, dynamic systems and differential inclusions.

Definition 3. [43] For a given \(r \in (0, \infty ],\) a subset \(K_r\) is said to be normalized uniformly \(r\)-prox-regular if and only if every nonzero proximal normal to \(K_r\) can be realized by an \(r\)-ball; that is, \(\forall u \in K_r\) and \(0 \neq \xi \in N^{P}_{K_r}(u),\) one has \[\begin{aligned} \left\langle \frac{\xi}{\|\xi \|},v-u \right\rangle\leq \frac{1}{2r}\|v-u\|^2, \quad \forall v \in K_r. \end{aligned}\]

It is clear that the class of normalized uniformly prox-regular sets is sufficiently large to include the class of convex sets, \(p\)-convex sets, \(C^{1,1}\) submanifolds (possibly with boundary) of \(H,\) the images under a \(C^{1,1}\) diffeomorphism of convex sets and many other nonconvex sets; see [6, 32]. If \(r = \infty,\) then the uniform prox-regularity of \(K_r\) is equivalent to the convexity of \(K_r.\) It is known that if \(K_r\) is a uniformly prox-regular set, then the proximal normal cone \(N^{P}_{K_r}(u)\) is closed as a set-valued mapping; thus \(N^{P}_{K_r}(u) = N^{C}_{K_r}(u).\) It is also known that the union of two disjoint intervals \([a,b]\) and \([c,d]\) with \(b < c\) is a prox-regular set with \(r = (c-b)/2;\) a two-dimensional analogue is given in Example 2.
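The prox-regularity inequality of Definition 3 can be checked numerically for a union of intervals. The sketch below (our own illustrative choice of set and radius, not from the paper) takes \(K = [0,1]\cup[2,3]\), so \(b=1,\ c=2\), and tests the inequality at the gap endpoint \(u = 1\), where the unit proximal normal points into the gap, with \(r = (c-b)/2 = 0.5\):

```python
# Numerical check of the prox-regularity inequality of Definition 3 for
# K = [0,1] U [2,3] at the boundary point u = 1 with proximal normal xi = +1.

r = 0.5            # r = (c - b)/2 with b = 1, c = 2
u, xi = 1.0, 1.0   # gap endpoint and unit proximal normal pointing right

# Sample v over both pieces of K and check <xi, v-u> <= (1/2r) |v-u|^2.
ok = True
for k in range(601):
    v = k / 100.0          # v in [0, 6]; keep only the points of K
    if 0.0 <= v <= 1.0 or 2.0 <= v <= 3.0:
        lhs = xi * (v - u)
        rhs = (1.0 / (2.0 * r)) * (v - u) ** 2
        ok = ok and (lhs <= rhs + 1e-12)
print(ok)  # True: this proximal normal is realized by an r-ball
```

For \(v\) in the left piece the left-hand side is nonpositive, and for \(v\) in the right piece \(v - u \geq 1\), so the quadratic right-hand side dominates.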

We also consider the following simple examples to give an idea of the importance of the nonconvex sets.

Example 1. [23] Let \(u = (x,y)\) and \(v = (t,z)\) belong to the real Euclidean plane and consider \(Tu = (2x, 2(y-1)).\) Let \[K = \{(t,z): t^2 + (z-2)^2 \geq 4,\quad -2 \leq t \leq 2,\quad z \geq -2\}\] be a subset of the Euclidean plane. Then one can easily show that the set \(K\) is a prox-regular set \(K_r.\) It is clear that the nonconvex variational inequality (3) has no solution.

Example 2. [23] Let \(u = (x,y) \in R^2,\ v=(t,z) \in R^2\) and let \(Tu = (-x,\,1-y).\) Let the set \(K\) be the union of two disjoint squares, say \(A\) and \(B,\) having the vertices \((0, 1),(2,1),(2, 3),(0, 3)\) and \((4, 1),(5,2),(4, 3),(3,2),\) respectively. The fact that \(K\) can be written in the form \[K= \{(t,z) \in R^2 : \max\{|t-1|,|z-2|\} \leq 1\} \cup \{(t,z) \in R^2 : |t-4|+|z-2| \leq 1\},\] shows that it is a prox-regular set in \(R^2\) and the nonconvex variational inequality (3) has a solution on the square \(B.\) We note that the operator \(T\) is the gradient of a strictly concave function. This shows that the square \(A\) is redundant.

We now introduce the general nonconvex inverse variational inequality problem. For given nonlinear operators \(T, h: H \longrightarrow H\) and a constant \(\rho > 0,\) the problem of finding \(u \in K_r\) such that \[\begin{aligned} \label{eq2.1n} \langle \rho Tu, v-h(u) \rangle \geq 0, \quad \forall v \in K_r, \end{aligned} \tag{1}\] is called the general nonconvex variational inequality.

For \(T=I,\) problem (1) is known as the nonconvex inverse variational inequality. To be more precise, for a given nonlinear operator \(h: H \longrightarrow H,\) we consider the problem of finding \(u \in K_r\) such that \[\begin{aligned} \label{eq2.1} \langle \rho u, v-h(u) \rangle \geq 0, \quad \forall v \in K_r, \end{aligned} \tag{2}\] where \(\rho > 0\) is a constant, which is called the nonconvex inverse variational inequality.

Interchanging the roles of the operators \(h, T: H \longrightarrow H,\) problem (1) is equivalent to finding \(u \in K_r\) such that \[\begin{aligned} \label{eq2.1a} \langle \rho h(u), v-Tu \rangle \geq 0, \quad \forall v \in K_r, \end{aligned}\] which is also called the general nonconvex variational inequality. Note the symmetry between the operators \(T\) and \(h:\) both problems are exactly the same.

Special cases

We now discuss some special cases of the nonconvex inverse variational inequality (2).

(I). If \(h =I,\) the identity operator, then problem (1) reduces to finding \(u\in K_r\) such that \[\label{eq2.2} \langle \rho Tu, v-u \rangle \geq 0, \qquad \forall v \in K_r, \tag{3}\] which is called the nonconvex variational inequality, studied by Bounkhel et al. [3], Noor [20–27], Noor et al. [28, 30–33] and Pang et al. [52].

(II). If \(K_r \equiv K,\) the convex set in \(H,\) then problem (1) reduces to finding \(u \in K\) such that \[\label{eq2.3} \langle \rho Tu, v-h(u) \rangle \geq 0, \qquad \forall v \in K, \tag{4}\] which is called the general variational inequality, introduced and studied by Noor [24].

(III). If \(K^{*}_{r}= \{ u\in H: \langle u,v \rangle \geq 0, \quad \forall v \in K_r \}\) is the polar (dual) cone, then problem (2) is equivalent to finding \(u \in H\) such that \[\begin{aligned} \label{eq2.1h} h(u)\in K_r, \quad u\in K^{*}_{r}, \quad \langle u, h(u)\rangle=0, \end{aligned} \tag{5}\] which is called the nonconvex inverse complementarity problem.

(IV). If \(K^{*}_{r}= \{ u\in H: \langle u,v \rangle \geq 0, \quad \forall v \in K_r \}\) is the polar (dual) cone, then problem (1) is equivalent to finding \(u \in H\) such that \[\begin{aligned} \label{eq2.1g} h(u)\in K_r, \quad Tu\in K^{*}_{r}, \quad \langle Tu, h(u)\rangle=0, \end{aligned}\] which is called the general nonconvex complementarity problem.

(V). If \(h \equiv I,\) the identity operator, then problem (4) reduces to finding \(u \in K\) such that \[\begin{aligned} \label{eq2.4} \langle Tu, v-u \rangle\geq 0, \quad \forall v \in K, \end{aligned} \tag{6}\] which is known as the classical variational inequality, introduced and studied by Stampacchia [1] in 1964. It turned out that a number of unrelated obstacle, free, moving, unilateral and equilibrium problems arising in various branches of pure and applied sciences can be studied via variational inequalities; see [1–42] and the references therein.

It is well-known [43] that problem (6) is equivalent to finding \(u \in K\) such that \[\begin{aligned} \label{eq2.5} 0 \in \rho Tu + \rho N_{K}(u), \end{aligned} \tag{7}\] where \(N_{K}(u)\) denotes the normal cone of \(K\) at \(u\) in the sense of convex analysis. Problem (7) is called the variational inclusion associated with variational inequality (6).

Similarly, if \(K_r\) is a nonconvex (uniformly prox-regular) set, then problem (2) is equivalent to finding \(u \in K_r\) such that \[\begin{aligned} \label{eq2.6} 0 \in \rho u + \rho N^{P}_{K_r}(h(u))\quad \Longleftrightarrow \quad 0 \in \rho u +h(u)-h(u)+ \rho N^{P}_{K_r}(h(u)), \end{aligned} \tag{8}\] where \(N^{P}_{K_r}\) denotes the normal cone of \(K_r\) at \(h(u)\) in the sense of nonconvex analysis. Problem (8) is called the general nonconvex inverse variational inclusion problem associated with the general nonconvex inverse variational inequality (2). This implies that the general nonconvex inverse variational inequality (2) is equivalent to finding a zero of the sum of two monotone operators (8). This equivalent formulation plays a crucial and basic part in this paper. We would like to point out that this equivalent formulation allows us to use the projection operator technique for solving the general nonconvex inverse variational inequalities (2).

We now recall the well-known result which summarizes some important properties of the uniformly prox-regular sets.

Lemma 2. [43] Let \(K\) be a nonempty closed subset of \(H\) and \(r\in (0,\infty ],\) and set \(K_{r} = \{ u \in H: d(u,K) < r \}.\) If \(K_r\) is uniformly prox-regular, then

i. \(\forall u \in K_r, P_{K_r}(u) \neq \emptyset.\)

ii. \(\forall r^{'} \in (0,r),\) \(P_{K_r }\) is Lipschitz continuous with constant \(\frac{r}{r-r^{'}}\ne 0\) on \(K_{r^{'}}.\)

iii. The proximal normal cone is closed as a set-valued mapping.

We assume that the projection operator \(P_{K_r}\) satisfies the following condition.

Assumption 1. The projection operator \(P_{K_r}\) is Lipschitz continuous; that is, there exists a constant \(\delta >0\) such that \[\begin{aligned} \|P_{K_r}(u)- P_{K_r}(v)\| \leq \delta \|u-v\|, \quad \forall u,v \in K_{r}. \end{aligned}\]

For the sake of simplicity, we denote \(\delta =\frac{r}{r-r^{'}}\ne 0,\) unless otherwise specified.

Definition 4. [6] An operator \(T: H \rightarrow H\) is said to be:

(i) strongly monotone, if and only if, there exists a constant \(\alpha > 0\) such that \[\langle Tu - Tv, u-v \rangle\geq \alpha ||u-v||^2, \quad \forall u,v \in H.\]

(ii) Lipschitz continuous, if and only if, there exists a constant \(\beta > 0\) such that \[||Tu-Tv|| \leq \beta ||u-v||, \quad \forall u, v \in H.\]

3. Projection method and convergence criteria

In this section, we establish the equivalence between the nonconvex inverse variational inequality (2) and the fixed point problem using the projection operator technique. This alternative formulation is used to discuss the existence of a solution of problem (2) and to suggest some new iterative methods for solving the nonconvex inverse variational inequality (2).

Lemma 3. \(u \in K_r\) is a solution of the nonconvex inverse variational inequality (2), if and only if, \(u \in K_r\) satisfies the relation \[\label{eq3.1} h(u) = P_{K_r }[h(u) – \rho u], \tag{9}\] where \(P_{K_r }\) is the projection of \(H\) onto the uniformly prox-regular set \(K_r.\)

Proof. Let \(u \in H: h(u) \in K_r\) be a solution of (2). Then, for a constant \(\rho >0,\) \[\begin{aligned} 0 \in & h(u)+ \rho N^{P}_{K_r}(h(u)) -(h(u)-\rho u) = (I+\rho N^{P}_{K_r})(h(u))-(h(u)-\rho u)\\ \Longleftrightarrow & \\ h(u) =&(I+\rho N^{P}_{K_r})^{-1}[h(u)-\rho u] =P_{K_r}[h(u)- \rho u], \end{aligned}\] where we have used the well-known fact that \(P_{K_r} \equiv (I+ \rho N^{P}_{K_r})^{-1}.\) ◻

Lemma 3 implies that the nonconvex inverse variational inequality (2) is equivalent to the fixed point problem (9). This alternative equivalent formulation is very useful from both the numerical and theoretical points of view.
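Lemma 3 can be illustrated on a one-dimensional toy problem. In the sketch below, the set, the operator \(h\) and the constant \(\rho\) are our own illustrative choices (a convex interval, which is prox-regular with \(r = \infty\)), not taken from the paper; the point \(u^* = 0\) satisfies the fixed-point relation (9) and hence solves (2):

```python
# Toy verification of the fixed-point characterization (9):
# K_r = [1, 3], h(u) = 0.5*u + 1 (strongly monotone and Lipschitz), rho = 0.25.

def P(x, a=1.0, b=3.0):
    return min(max(x, a), b)   # projection onto the interval [a, b]

def h(u):
    return 0.5 * u + 1.0

rho = 0.25
u_star = 0.0                   # candidate solution, with h(u_star) = 1 in K_r

# relation (9): h(u*) = P[h(u*) - rho*u*]
residual = abs(h(u_star) - P(h(u_star) - rho * u_star))
print(residual)  # 0.0

# inequality (2) itself: <rho*u*, v - h(u*)> = 0 >= 0 for sampled v in K_r
assert all(rho * u_star * (v / 10.0 - h(u_star)) >= 0 for v in range(10, 31))
```

The same projection `P` and operator `h` are reused in the iteration sketches below.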

We rewrite the relation (9) in the following form \[\begin{aligned} \label{eq3.2} F(u)=u-h(u)+ P_{K_r}[h(u)-\rho u], \end{aligned} \tag{10}\] which is used to study the existence of a solution and to propose the iterative methods for solving the nonconvex inverse variational inequality (2).

We now study those conditions under which the problem (2) has a unique solution and this is the main motivation of our next result.

Theorem 1. Let \(P_{K_r}\) be a Lipschitz continuous operator with constant \(\delta > 0.\) Let the operator \(h\) be strongly monotone with constant \(\sigma > 0\) and Lipschitz continuous with constant \(\zeta> 0.\) If there exists a constant \(\rho >0\) such that \[\begin{aligned} \label{eq3.3as} \rho <\frac{1-k}{\delta }, \quad k= \sqrt{1- 2\sigma +\zeta ^2}+ \delta \zeta <1, \end{aligned} \tag{11}\] then the problem (2) has a unique solution.

Proof. From Lemma 3, it follows that problems (9) and (2) are equivalent. Thus it is enough to show that the map \(F(u),\) defined by (10) has a fixed point.

For all \(u\neq v \in K_r,\) we have \[\begin{aligned} \label{eq3.4} \|F(u)-F(v)\| \leq & \|u-v-(h(u)-h(v))\|+ \|P_{K_r}[h(u)- \rho u]- P_{K_r}[h(v)-\rho v]\| \nonumber \\ \leq & \|u-v-(h(u)-h(v))\|+ \delta \|h(u)-h(v)-\rho(u-v)\|\nonumber \\ \leq & \|u-v-(h(u)-h(v))\|+ \delta \{ \|h(u)-h(v)\|+\rho \|u-v\|\}, \end{aligned} \tag{12}\] where we have used the fact that the operator \(P_{K_r}\) is Lipschitz continuous with constant \(\delta.\)

Since the operator \(h\) is strongly monotone with constant \(\sigma > 0\) and Lipschitz continuous with constant \(\zeta > 0,\) it follows that \[\begin{aligned} \label{eq3.5} \|u-v-(h(u)-h(v))\|^2 \leq & \|u-v\|^2 -2 \langle h(u)-h(v),u-v \rangle +\|h(u)-h(v)\|^2 \nonumber \\ \leq & (1-2\sigma + \zeta ^2 )\|u-v\|^2. \end{aligned} \tag{13}\]

From (12), (13) and Lipschitz continuity of the operator \(h,\) we have \[\begin{aligned} ||F(u)-F(v)|| \leq &\left\{\sqrt{ (1-2\sigma + \zeta ^2 )}+\delta \zeta +\delta \rho \right\}||u-v|| \\ =& \theta ||u-v||, \end{aligned}\] where \[\theta = \delta \rho + k \label{eq3.6},\ \tag{14}\] \[k = \sqrt{1- 2\sigma +\zeta ^2}+ \delta \zeta. \label{eq3.7} \tag{15}\] From (11), it follows that \(\theta < 1,\) which implies that the map \(F(u)\) defined by (10) has a fixed point, which is the unique solution of (2). ◻

The fixed point formulation (9) is explored to suggest the following iterative methods for solving the nonconvex inverse variational inequality (2).

Algorithm 1. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} \label{eq3.8} u_{n+1} =(1-\alpha _n)u_n+\alpha _n\{u_n-h(u_n) + P_{K_r}[h(u_n)-\rho u_n]\}, \quad n= 0,1,2,\ldots, \end{aligned} \tag{16}\]

where \(\alpha _n \in [0,1]\) for all \(n \geq 0.\) Algorithm 1 is also called the Mann iteration process.
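The scheme (16) can be run directly on a toy problem. The data below (set, \(h\), \(\rho\), \(\alpha_n\)) are illustrative assumptions, not taken from the paper; the iterates are written as convex combinations, consistent with the fixed-point relation (9):

```python
# Sketch of the Mann-type scheme (16):
#   u_{n+1} = (1 - a_n) u_n + a_n { u_n - h(u_n) + P[h(u_n) - rho*u_n] },
# with the toy data K_r = [1, 3], h(u) = 0.5*u + 1, rho = 0.25; solution u* = 0.

def P(x, a=1.0, b=3.0):
    return min(max(x, a), b)

def h(u):
    return 0.5 * u + 1.0

rho, alpha = 0.25, 0.8   # constant alpha_n, so sum alpha_n diverges
u = 6.0                  # arbitrary starting point
for n in range(200):
    u = (1 - alpha) * u + alpha * (u - h(u) + P(h(u) - rho * u))

print(abs(u))                          # approaches 0, i.e. u* = 0
print(abs(h(u) - P(h(u) - rho * u)))   # fixed-point residual of (9) tends to 0
```

For this linear toy data the iteration contracts geometrically even though the worst-case bound of Theorem 1 is conservative.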

We again use the fixed point formulation to suggest and analyze an implicit iterative method for solving the nonconvex inverse variational inequalities (2):

Algorithm 2. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} u_{n+1} = (1-\alpha _n)u_n+\alpha _n\{u_n-h(u_n) + P_{K_r}[h(u_{n+1})-\rho u_{n+1}]\}, \quad n= 0,1,2,\ldots \end{aligned}\]

Algorithm 2 is an implicit iterative method, which is difficult to implement. To implement it, we use the predictor-corrector technique: Algorithm 1 serves as a predictor and Algorithm 2 as a corrector. Consequently, we have the following iterative method.

Algorithm 3. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes \[\begin{aligned} y_n =& (1-\beta _n)u_n+\beta _n\{u_n-h(u_n) + P_{K_r}[h(u_n)-\rho u_n]\}, \\ u_{n+1} =& (1-\alpha _n)u_n+\alpha _n\{u_n-h(u_n) + P_{K_r}[h(y_{n})-\rho y_{n}]\}, \end{aligned}\]

which is called the two-step or splitting-type iterative method for solving problem (2). It is worth mentioning that Algorithm 3 can also be suggested by using the updating technique of the solution.
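The predictor-corrector idea can be sketched on the same illustrative one-dimensional data (our own toy choices, not from the paper): the predictor step projects at \(u_n\), and the corrector step projects at the predicted point \(y_n\).

```python
# Sketch of the two-step predictor-corrector scheme (Algorithm 3 pattern)
# on the toy data K_r = [1, 3], h(u) = 0.5*u + 1, rho = 0.25; solution u* = 0.

def P(x, a=1.0, b=3.0):
    return min(max(x, a), b)

def h(u):
    return 0.5 * u + 1.0

rho, alpha, beta = 0.25, 0.7, 0.7
u = 6.0
for n in range(300):
    # predictor: one step of the Mann-type scheme
    y = (1 - beta) * u + beta * (u - h(u) + P(h(u) - rho * u))
    # corrector: evaluate the projection at the predicted point y
    u = (1 - alpha) * u + alpha * (u - h(u) + P(h(y) - rho * y))

print(abs(u))  # approaches the solution u* = 0
```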

In a similar way, we have the following two-step iterative method.

Algorithm 4. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes \[\begin{aligned} y_n =& (1-\beta _n)u_n+\beta _n\{u_n-h(u_n) + P_{K_r}[h(u_n)-\rho u_n]\}, \\ u_{n+1} =& (1-\alpha _n)u_n+\alpha _n\{u_n-h(u_n) + P_{K_r}[h(u_{n})-\rho y_{n}]\}. \end{aligned}\]

Algorithm 4 can be viewed as an extragradient method of Korpelevich type for solving problem (2).

We now consider the convergence analysis of Algorithm 1 and this is the main motivation of our next result. In a similar way, one can consider the convergence criteria of other Algorithms.

Theorem 2. Let \(P_{K_r}\) be a Lipschitz continuous operator with constant \(\delta > 0.\) Let the operator \(h: H \longrightarrow H\) be strongly monotone with constant \(\sigma > 0\) and Lipschitz continuous with constant \(\zeta > 0.\) If \[\begin{aligned} \label{eq3.10} \rho < \frac{1-k}{\delta}, \end{aligned} \tag{17}\] where \(k\) is given by (15), and \(\sum ^{\infty}_{n=0} \alpha _n = \infty ,\) then the approximate solution \(u_{n}\) obtained from Algorithm 1 converges to a solution \(u \in K_r\) of the nonconvex inverse variational inequality (2).

Proof. Let \(u \in K_r\) be a solution of problem (2). Then, using Lemma 3, we have \[\begin{aligned} \label{eq3.11} u = (1-\alpha _n)u + \alpha_n\{u-h(u)+ P_{K_r}[h(u)-\rho u]\}, \end{aligned} \tag{18}\] where \(0 \leq \alpha _n \leq 1.\)

From (13)–(18) and the Lipschitz continuity of the projection \(P_{K_r}\) with constant \(\delta,\) we have \[\begin{aligned} \|u_{n+1}-u\| \leq & \|(1-\alpha _n)(u_n-u)\|+ \alpha _n \|P_{K_r}[h(u_n)-\rho u_n]- P_{K_r}[h(u)-\rho u] \| \nonumber \\ & + \alpha_n\|u_n-u-(h(u_n)-h(u))\| \nonumber \\ \leq & (1-\alpha_n)\|u_n-u\|+ \alpha_n \delta \|h(u_n)-h(u)-\rho(u_n-u)\| \nonumber \\ & +\alpha_n \sqrt{1-2\sigma +\zeta^2}\|u_n-u\|\nonumber \\ \leq &(1-\alpha _n)\|u_n-u\|+ \alpha_n\{\sqrt{1-2\sigma +\zeta^2}+\delta( \zeta+ \rho )\}\|u_n-u\| \nonumber \\ =& (1-\alpha _n)\|u_n-u\|+ \alpha_n(k+ \delta \rho)\|u_n-u\| \nonumber \\ =& (1-\alpha _n)\|u_n-u\|+\alpha_n \theta \|u_n-u\| \nonumber \\ =& \left[1-\alpha _n(1-\theta)\right]\|u_n-u\| \nonumber \\ \leq & \prod ^{n}_{i=0}\left[1-\alpha _i(1-\theta )\right]\|u_0-u\|, \end{aligned}\] where \[k= \sqrt{1-2\sigma +\zeta^2} +\delta \zeta \quad \mbox{and} \quad \theta = k+ \delta \rho .\]

From (17), it follows that \(\theta < 1.\) Since \(\sum_{n=0}^{\infty}\alpha _n\) diverges and \(1-\theta > 0,\) we have \[\lim _{n \rightarrow \infty} \prod_{i=0}^{n}[1-(1-\theta )\alpha _i] = 0.\]

Consequently, the sequence \(\{u_n \}\) converges strongly to \(u.\) This completes the proof. ◻

We again use the fixed point formulation (9) to suggest and analyze a three-step iterative method for solving the nonconvex inverse variational inequalities (2):

Algorithm 5. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes: \[\begin{aligned} y_n =& (1-\gamma_n)u_n+ \gamma _n\{u_n-h(u_n)+ P_{K_r}[h(u_n)-\rho u_n]\}, \\ w_n =& (1-\beta_n)u_n+ \beta _n\{ u_n-h(u_n)+P_{K_r}[h(y_n)-\rho y_n]\}, \\ u_{n+1} =& (1-\alpha _n)u_n+ \alpha_n\{u_n-h(u_n)+ P_{K_r}[h(w_n)-\rho w_n]\}, \quad n=0,1, \ldots , \end{aligned}\] where \(\alpha_n, \beta _n, \gamma_n \in [0,1]\) are constants.

Algorithm 5 is called the Noor(three-step) iterations.

We would like to mention that three-step iterative methods are also known as Noor iteration for solving the variational inequalities and equilibrium problems. Note that for different and suitable choice of the constants \(\alpha _n, \beta _n\) and \(\gamma _n,\) one can easily show that the Noor iterations include the Mann and Ishikawa iterations as special cases. Thus we conclude that Noor (three-step) iterations are more general and unifying ones. One can easily consider the convergence criteria of Algorithm 5 using the technique of this paper.
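The reduction of the Noor (three-step) iteration to the Ishikawa and Mann iterations can be seen directly in code. The sketch below reuses the illustrative one-dimensional data from before (our own choices, not from the paper) and writes the projection argument in the form \(h(\cdot)-\rho(\cdot)\) of the fixed-point relation (9); setting \(\beta_n = \gamma_n = 0\) recovers the Mann iteration of Algorithm 1:

```python
# Sketch of the three-step Noor iteration on the toy data
# K_r = [1, 3], h(u) = 0.5*u + 1, rho = 0.25; solution u* = 0.

def P(x, a=1.0, b=3.0):
    return min(max(x, a), b)

def h(u):
    return 0.5 * u + 1.0

def step(u, v, rho=0.25):
    """The update u - h(u) + P[h(v) - rho*v], with the projection taken at v."""
    return u - h(u) + P(h(v) - rho * v)

def noor(u, iters=300, alpha=0.8, beta=0.6, gamma=0.4):
    for n in range(iters):
        y = (1 - gamma) * u + gamma * step(u, u)
        w = (1 - beta) * u + beta * step(u, y)
        u = (1 - alpha) * u + alpha * step(u, w)
    return u

print(abs(noor(6.0)))                       # three-step: approaches 0
print(abs(noor(6.0, beta=0.0, gamma=0.0)))  # beta = gamma = 0: Mann iteration
```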

It is worth mentioning that, if \(r= \infty,\) then the nonconvex set \(K_r\) reduces to the convex set \(K.\) Consequently, Algorithm 5 collapses to the following algorithm for solving the inverse variational inequality on the convex set \(K.\)

Algorithm 6. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes: \[\begin{aligned} y_n =& (1-\gamma_n)u_n+ \gamma _n\{u_n-h(u_n)+ P_{K}[h(u_n)-\rho u_n]\}, \\ w_n =& (1-\beta_n)u_n+ \beta _n\{ u_n-h(u_n)+P_{K}[h(y_n)-\rho y_n]\}, \\ u_{n+1} =& (1-\alpha _n)u_n+ \alpha_n\{u_n-h(u_n)+ P_{K}[h(w_n)-\rho w_n]\}, \quad n=0,1, \ldots , \end{aligned}\] where \(\alpha_n, \beta _n, \gamma_n \in [0,1]\) are constants. Algorithm 6 appears to be a new method for solving the inverse variational inequalities.

Polyak [48] suggested the inertial methods for speeding up the convergence of iterative methods. Alvarez [5] investigated the weak convergence of the inertial proximal method for maximal monotone operators. For applications of the inertial methods to variational inequalities and related problems, see [5, 20, 21, 29, 31, 36]. We now suggest some inertial-type iterative methods for solving problem (2). From (9), we have:

Algorithm 7. For given \(u_0, u_1,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme: \[\begin{aligned} u_{n+1} = (1- a_n )u_n + a_n\{ u_n-h(u_n)+ P_{K_r} [h( u_n+ \lambda (u_{n-1}-u_n)) - \rho (u_n+ \lambda (u_{n-1}-u_n)) ]\}, \end{aligned}\]

which is an inertial iterative scheme and is equivalent to:

Algorithm 8. For given \(u_0, u_1,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes: \[\begin{aligned} y_n =& u_n+ \lambda (u_{n-1}-u_n), \quad \lambda \in[0,1], \\ u_{n+1}=& (1- a_n ) u_n +a_n\{ u_n-h(u_n)+ P_{K_r}[h(y_{n}) - \rho y_{n} ]\}, \end{aligned}\] which is a two-step inertial iterative projection method.
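The two-step inertial scheme can be sketched on the same illustrative data (toy choices of \(K_r\), \(h\), \(\rho\), \(\lambda\), not from the paper): the extrapolation point \(y_n\) is formed first, and the projection step, written as in (10), is then evaluated at \(y_n\).

```python
# Sketch of the two-step inertial iteration (Algorithm 8 pattern) on the toy
# data K_r = [1, 3], h(u) = 0.5*u + 1, rho = 0.25; solution u* = 0.

def P(x, a=1.0, b=3.0):
    return min(max(x, a), b)

def h(u):
    return 0.5 * u + 1.0

rho, a, lam = 0.25, 0.7, 0.3
u_prev, u = 6.0, 5.0           # two starting points u_0, u_1
for n in range(300):
    y = u + lam * (u_prev - u)  # inertial extrapolation step
    u_prev, u = u, (1 - a) * u + a * (u - h(u) + P(h(y) - rho * y))

print(abs(u))  # approaches the solution u* = 0
```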

In a similar way, we can suggest and propose the following four-step inertial iterative methods for solving the nonconvex inverse variational inequalities (2).

Algorithm 9. For given \(u_0, u_1 \in K_r,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes \[\begin{aligned} y_n =& u_n+ \lambda (u_{n-1}-u_n), \quad \lambda \in[0,1], \\ z_n =& (1-c_n)u_n + c_n \{u_n- h(u_n)+ P_{K_r}[h(y_n)-\rho y_n]\}, \\ w_n =& (1-b_n)u_n + b_n \{ u_n-h(u_n)+ P_{K_r}[h(z_n) - \rho z_n ]\}, \\ u_{n+1} =& (1- a_n )u_n + a_n\{u_n -h(u_n)+ P_{K_r}[h(w_n) - \rho w_n ]\}, \end{aligned}\] where \(a_n, b_n , c_n, \lambda \in [0,1]\) for all \(n \geq 0.\)

Remark 1. For suitable and appropriate choice of the parameters, operators and the spaces, one can obtain a wide class of hybrid inertial iterative schemes for solving the nonconvex inverse variational inequality and variant forms. The interested readers may explore the numerical implementation of these proposed algorithms.

Taking \(z = h(u)- \rho u\) in (9), we have \[h(u) =P_{K_r}z, \label{eq4.1} \tag{19}\] \[z = h(u)- \rho u \label{eq4.2} \tag{20}\] \[= h(u)-\rho h^{-1}P_{K_r}[h(u)- \rho u]. \label{eq4.3} \tag{21}\]

From (20) and (21), we have \[\begin{aligned} z= h(u)-\rho u = P_{K_r}z- \rho h^{-1}P_{K_r}z. \end{aligned}\]

Thus \[\begin{aligned} h(u)=\rho h^{-1}P_{K_r}z-P_{K_r}z, \end{aligned}\] which implies, using (20), that \[\begin{aligned} \label{eq4.4} u =& (1-\eta )u+ \eta \{h(u)-\rho u -\rho\{h^{-1}P_{K_r}z - \rho P_{K_r}z \}\}\nonumber \\ =& (1-\eta )u+ \eta \{u - \rho h(u) \}. \end{aligned} \tag{22}\]

This fixed point formulation enables us to suggest the following iterative method for solving the nonconvex inverse variational inequality (2).

Algorithm 10. For a given \(z_0,\) compute \(u_{n+1}\) by the iterative schemes \[\begin{aligned} \label{eq4.5} u_{n+1} =& (1-\eta _n)u_n+ \eta _n\{ u_n-\rho h(u_n)\}, \end{aligned} \tag{23}\]

where \(0 \leq \eta _n \leq 1,\) for all \(n \geq 0.\)
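The scheme (23) is simple enough to run directly. As before, the affine operator \(h(u)=0.9u+b\) and the frozen parameters are illustrative assumptions; with this choice the iteration drives \(h(u_n)\) toward \(0\), the fixed point of the formulation (22).

```python
import numpy as np

# Sketch of Algorithm 10: u_{n+1} = (1 - eta)*u_n + eta*(u_n - rho*h(u_n)).
# h(u) = 0.9*u + b is an illustrative assumption, not from the paper.
b = np.array([0.3, -0.4])
rho, eta = 0.5, 0.5

def h(u):
    return 0.9 * u + b

u = np.array([1.0, 1.0])
for _ in range(300):
    u = (1 - eta) * u + eta * (u - rho * h(u))

print(np.linalg.norm(h(u)))          # approaches 0: u converges to the zero of h
```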

We would like to point out that one can obtain a number of iterative methods for solving the nonconvex inverse variational inequality (2) for suitable and appropriate choices of the operators \(h\) and the space \(H.\) This shows that iterative methods suggested in this paper are more general and unifying ones.

4. Dynamical systems

In this section, we suggest some iterative approximation schemes for solving the nonconvex inverse variational inequality (2) using the dynamical systems techniques. Dupuis and Nagurney [4] introduced and studied the projected dynamical systems associated with variational inequalities using the equivalent fixed point formulation. It has been shown [4, 16–18, 26, 27, 32–34, 40–42] that these dynamical systems are useful in developing efficient and powerful numerical techniques for solving variational inequalities.

We use the equivalent fixed point formulation (9) to suggest and analyze the projected dynamical system associated with the nonconvex inverse variational inequalities (2). \[\begin{aligned} \label{5.1d} \frac{du}{dt} = \lambda \{P_{K_r }[h(u)-\rho u]-h(u) \}, \quad u(t_0) = u_0 \in H, \end{aligned} \tag{24}\] where \(\lambda\) is a parameter. The system of type (24) is called the projection dynamical system. Here the right hand side is related to the projection operator and is discontinuous on the boundary. It is clear from the definition that the solution to (24) always stays in the constraint set. This implies that the qualitative results such as the existence, uniqueness and continuous dependence of the solution on the given data can be studied.
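A forward-Euler integration of (24) gives a first feel for its trajectories. The data below (unit ball \(K_r\), affine \(h(u)=0.9u+b\), step size, horizon) are illustrative assumptions only.

```python
import numpy as np

# Forward-Euler sketch of the projection dynamical system (24):
#   du/dt = lam * (P_{K_r}[h(u) - rho*u] - h(u)).
# K_r = closed unit ball and h(u) = 0.9*u + b are illustrative assumptions.
r, rho, lam, dt = 1.0, 0.5, 1.0, 0.1
b = np.array([0.3, -0.4])

def proj(z):
    nz = np.linalg.norm(z)
    return z if nz <= r else (r / nz) * z

def h(u):
    return 0.9 * u + b

u = np.array([1.5, -0.5])
for _ in range(2000):
    u = u + dt * lam * (proj(h(u) - rho * u) - h(u))

residual = np.linalg.norm(proj(h(u) - rho * u) - h(u))
print(residual)                      # ~0: the trajectory settles at an equilibrium
```

The vanishing right-hand side at the limit illustrates Definition 5: the trajectory converges to an equilibrium point of (24), i.e., a solution of (2).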

The equilibrium points of the dynamical system (24) are naturally defined as follows.

Definition 5. An element \(u \in H,\) is an equilibrium point of the dynamical system (24), if \(\frac{du}{dt}=0,\) that is, \[\begin{aligned} P_{K_r }[h(u)-\rho u ]-h(u) = 0. \end{aligned}\]

Thus it is clear that \(u \in H\) is a solution of the nonconvex inverse variational inequality (2), if and only if, \(u \in H\) is an equilibrium point.

Definition 6. [4] The dynamical system is said to converge to the solution set \(S^*\) of (5), if, irrespective of the initial point, the trajectory of the dynamical system satisfies \[\begin{aligned} \label{5.4d} \lim _{t \rightarrow \infty }\mbox{dist}(u(t),S^*) = 0, \end{aligned} \tag{25}\] where \[\begin{aligned} \mbox{dist}(u,S^*) = \mbox{inf}_{v \in S^*}\|u-v\|. \end{aligned}\]

It is easy to see, if the set \(S^*\) has a unique point \(u^*,\) then (25) implies that \[\begin{aligned} \lim _{t \rightarrow \infty }u(t)=u^*. \end{aligned}\]

If the dynamical system is also stable at \(u^*\) in the Lyapunov sense, then the dynamical system is globally asymptotically stable at \(u^*.\)

Definition 7. The dynamical system is said to be globally exponentially stable with degree \(\eta\) at \(u^*,\) if, irrespective of the initial point, the trajectory of the system satisfies \[\begin{aligned} \| u(t)-u^*\| \leq u _1\|u(t_0)-u^*\|\exp(-\eta (t-t_0)), \quad \forall t \geq t_0, \end{aligned}\] where \(u _1\) and \(\eta\) are positive constants independent of the initial point.

It is clear that global exponential stability necessarily implies global asymptotic stability, and the dynamical system converges arbitrarily fast.

Lemma 4 (Gronwall Lemma [4]). Let \(\hat{u}\) and \(\hat{v}\) be real-valued nonnegative continuous functions with domain \(\{t : t \geq t_0\}\) and let \(\alpha (t)= \alpha _0(|t-t_0|),\) where \(\alpha _0\) is a monotone increasing function. If, for \(t \geq t_0,\) \[\begin{aligned} \hat{u}(t) \leq \alpha (t) + \int^{t}_{t_0}\hat{u}(s)\hat{v}(s)ds, \end{aligned}\] then \[\begin{aligned} \hat{u}(t) \leq \alpha (t)\exp\left\{\int ^{t}_{t_0} \hat{v}(s)ds \right\}. \end{aligned}\]
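A scalar computation illustrates the lemma: for \(\hat u'(t)=\hat v(t)\hat u(t)\) with \(\hat u(t_0)=\alpha\), the integral inequality holds with equality, so the exponential bound must dominate the trajectory. The function \(\hat v\) and the interval below are arbitrary illustrative choices.

```python
import math

# Scalar illustration of the Gronwall Lemma: integrate u' = v(t)*u by forward
# Euler and compare with the bound alpha * exp(int v). v(t) and the interval
# are illustrative assumptions.
t0, T, n = 0.0, 1.0, 20000
dt = (T - t0) / n
v = lambda t: 0.5 + 0.1 * t          # a nonnegative continuous v(t)
alpha = 2.0                          # initial value u(t0)

u, integral = alpha, 0.0
for i in range(n):
    t = t0 + i * dt
    u += dt * v(t) * u               # Euler step for u' = v*u
    integral += dt * v(t)            # left-endpoint rule for the integral of v

bound = alpha * math.exp(integral)
print(u <= bound)                    # True: the Gronwall bound holds
```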

We now show that the trajectory of the solution of the projection dynamical system (24) converges to the unique solution of the nonconvex inverse variational inequality (2). The analysis is in the spirit of Noor [18] and Xia and Wang [41, 42].

Theorem 3. Let the operator \(h: H \longrightarrow H\) be Lipschitz continuous with constant \(\zeta > 0,\) and let Assumption 1 hold. If \(\lambda\{(1+ \delta)\zeta +\delta \rho \} <1,\) then, for each \(u_0 \in K_r,\) there exists a unique continuous solution \(u(t)\) of the dynamical system (24) with \(u(t_0) = u_0\) over \([t_0, \infty ).\)

Proof. Let \[\begin{aligned} G(u) = \lambda \{P_{K_r }[h(u)-\rho u]-h(u)\}, \end{aligned}\] where \(\lambda > 0\) is a constant and \(G(u)= \frac{du}{dt}.\) For all \(u,v \in H,\) we have \[\begin{aligned} \|G(u)-G(v)\| \leq & \lambda \{\|P_{K_r }[h(u)-\rho u]-P_{K_r }[h(v)-\rho v]\| + \|h(u)-h(v)\|\} \\ \leq & \lambda \|h(u)-h(v)\| + \lambda \delta\{\|h(u)-h(v)\|+ \rho \|u-v\|\}\\ \leq & \lambda\{(1+ \delta)\zeta +\delta \rho \}\|u-v\|. \end{aligned}\]

This implies that the operator \(G(u)\) is Lipschitz continuous with constant \(\lambda\{(1+ \delta)\zeta +\delta \rho \} <1\) and for each \(u \in K_r,\) there exists a unique and continuous solution \(u(t)\) of the dynamical system (24), defined on an interval \(t_0 \leq t < T_1\) with the initial condition \(u(t_0) = u_0.\) Let \([t_0,T_1)\) be its maximal interval of existence. We show that \(T_1 = \infty .\) Consider, for any \(u \in K_r,\) \[\begin{aligned} \|G(u)\| = \|\frac{du}{dt}\| =& \lambda \|P_{K_r }[h(u)-\rho u]-h(u)\| \\ \leq & \lambda \{\|P_{K_r }[h(u)-\rho u]-P_{K_r }[0]\|+\|P_{K_r }[0]-h(u)\|\} \\ \leq & \lambda \{\delta \|h(u)-\rho u\|+\|P_{K_r }[u]-P_{K_r }[0]\|+\|P_{K_r }[0]-h(u)\|\}\\ \leq & \lambda \delta \{(\rho +1+2\zeta )\|u\|+\|P_{K_r }[0]\|\}. \end{aligned}\]

Then \[\begin{aligned} \|u(t)\|\leq & \|u_0\|+ \int _{t_0}^{t}\|G(u(s))\|ds \\ \leq & (\|u_0\|+k_1(t-t_0))+k_2\int _{t_0}^{t}\|u(s)\|ds, \end{aligned}\] where \(k_1= \delta \lambda \|P_{K_r }[0]\|\) and \(k_2 = \delta \lambda (\rho +1+ 2\zeta ).\) Hence, by the Gronwall Lemma 4, we have \[\begin{aligned} \|u(t)\| \leq \{\|u_0\| +k_1(t-t_0)\}e^{k_2(t-t_0)}, \quad t \in [t_0,T_1). \end{aligned}\]

This shows that the solution is bounded on \([t_0, T_1).\) So \(T_1 =\infty .\) ◻

Theorem 4. Let the operator \(h : H \longrightarrow H\) be strongly monotone with constant \(\sigma > 0\) and Lipschitz continuous with constant \(\zeta > 0,\) and let Assumption 1 hold. Then the dynamical system (24) converges globally exponentially to the unique solution of the nonconvex inverse variational inequality (2).

Proof. Since the operator \(h\) is Lipschitz continuous, it follows from Theorem 3 that the dynamical system (24) has a unique solution \(u(t)\) over \([t_0,T_1)\) for any fixed \(u_0 \in H.\) Let \(u(t)\) be a solution of the initial value problem (24). For a given \(u^* \in H\) satisfying (2), consider the Lyapunov function \[\begin{aligned} \label{5.5d} L(u) = \lambda \|u(t)-u^*\|^2, \quad u(t) \in K_r. \end{aligned} \tag{26}\]

From (24) and (26), we have \[\begin{aligned} \label{5.6d} \frac{dL}{dt} =& 2\lambda \langle u(t)-u^*,\frac{du}{dt} \rangle \nonumber \\ =& 2\lambda \langle u(t)-u^*,P_{K_r }[h(u(t))-\rho u(t)]-h(u(t)) \rangle \nonumber \\ =& 2\lambda \langle u(t)-u^*,P_{K_r }[h(u(t))-\rho u(t)]-h(u^*)+h(u^*)-h(u(t)) \rangle \nonumber \\ =& -2\lambda \langle u(t)-u^*,h(u(t))-h(u^*) \rangle \nonumber \\ & +2\lambda \langle u(t)-u^*,P_{K_r }[h(u(t))-\rho u(t)]-h(u^*) \rangle \nonumber \\ \leq & -2\lambda \langle u(t)-u^*,h(u(t))-h(u^*) \rangle \nonumber \\ & +2\lambda \langle u(t)-u^*,P_{K_r }[h(u(t))-\rho u(t)]-P_{K_r }[h(u^*(t))-\rho u^*(t)]\rangle ,\nonumber \\ \leq &-2\lambda \sigma\|u(t)-u^*\|^2+\lambda\|u(t)-u^{*}\|^2\nonumber \\ &+\lambda \|P_{K_r }[h(u(t))-\rho u(t)]-P_{K_r }[h(u^*(t))-\rho u^*(t)]\|^2. \end{aligned} \tag{27}\]

Using the Lipschitz continuity of the operator \(h ,\) we have \[\begin{aligned} \label{5.7d} \|P_{K_r }[h(u)-\rho u]-P_{K_r }[h(u^*)-\rho u^*]\| \leq & \delta \|h(u)-h(u^*)-\rho(u-u^*)\| \nonumber \\ \leq & \delta(\zeta +\rho )\|u-u^*\|. \end{aligned} \tag{28}\]

From (27) and (28), we have \[\begin{aligned} \frac{d}{dt}\|u(t)-u^*\| \leq 2\xi \lambda \|u(t)-u^*\|, \end{aligned}\] where \[\begin{aligned} \xi = (\delta(\zeta+\rho )-2\sigma ) . \end{aligned}\]

Thus, for \(\lambda = -\lambda _1,\) where \(\lambda _1\) is a positive constant, we have \[\begin{aligned} \|u(t)-u^*\| \leq \|u(t_0)-u^*\|e^{-\xi \lambda _1(t-t_0)}, \end{aligned}\] which shows that the trajectory of the solution of the dynamical system (24) converges globally exponentially to the unique solution of the nonconvex inverse variational inequality (2). ◻

We use the projection dynamical system (24) to suggest some iterative methods for solving nonconvex inverse variational inequalities (2). These methods can be viewed in the sense of Korpelevich [53] and Noor [18] involving the double projection operator.

For simplicity, we take \(\lambda =1.\) Thus the dynamical system (24) becomes \[\label{eq5.2b} \frac{du}{dt}+h(u) =P_{K_r }[h(u)-\rho u],\quad u(t_{0})=\alpha. \tag{29}\]

We construct the implicit iterative method using the forward difference scheme. Discretizing (29), we have \[\label{eq5.3d} \frac{u_{n+1}-u_{n}}{h_1}+h(u_{n+1}) = P_{K_r }[h(u_{n+1})-\rho u_{n+1}], \tag{30}\] where \(h_1>0\) is the step size. Now, for \(h_1=1,\) we can suggest the following implicit iterative method for solving the nonconvex inverse variational inequality (2).

Algorithm 11. For a given \(u_{0}\in K_r\), compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} u_{n+1}= u_n-h(u_{n+1})+ P_{K_r }\bigg[h(u_{n+1})-\rho u_{n+1}-(u_{n+1}-u_{n})\bigg],\quad n=0,1,2,\ldots. \end{aligned}\]

Discretizing (29), we now suggest another implicit iterative method for solving (2). \[\label{eq5.10d} \frac{u_{n+1}-u_{n}}{h_1}+h(u_{n})=P_{K_r }[h(u_{n})-\rho u_{n+1}], \tag{31}\] where \(h_1\) is the step size.

For \(h_1=1,\) this formulation enables us to suggest the following iterative method.

Algorithm 12. For a given \(u_{0}\in K_r,\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} u_{n+1}= u_n -h(u_n)+ P_{K_r }\big[h(u_{n})-\rho u_{n+1}\big],\quad n=0,1,2,\ldots, \end{aligned}\]

which is equivalent to the following two-step method.

Algorithm 13. For a given \(u_{0}\in K_r,\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} y_n =& u_n -h(u_n)+ P_{K_r }\big[h(u_{n})-\rho u_{n}\big], \nonumber \\ u_{n+1}=& u_n -h(u_n)+ P_{K_r }\big[h(u_{n})-\rho y_{n}\big],\quad n=0,1,2,\ldots. \end{aligned}\]
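The predictor-corrector structure of Algorithm 13 can be run directly. As in the earlier sketches, the unit-ball \(K_r\), the affine operator \(h(u)=0.9u+b\) and \(\rho\) are illustrative assumptions only.

```python
import numpy as np

# Extragradient-type sketch of Algorithm 13: a predictor y_n followed by a
# corrector, both built from the projection P_{K_r}. Illustrative data:
# K_r = closed unit ball, h(u) = 0.9*u + b.
r, rho = 1.0, 0.5
b = np.array([0.3, -0.4])

def proj(z):
    nz = np.linalg.norm(z)
    return z if nz <= r else (r / nz) * z

def h(u):
    return 0.9 * u + b

u = np.array([1.0, 2.0])
for _ in range(300):
    y = u - h(u) + proj(h(u) - rho * u)      # predictor step
    u = u - h(u) + proj(h(u) - rho * y)      # corrector step

residual = np.linalg.norm(h(u) - proj(h(u) - rho * u))
print(residual)
```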

Algorithm 13 is the analogue of the extragradient method of Korpelevich [53] for solving nonconvex inverse variational inequalities.
We now rewrite the dynamical system (24), for constants \(\eta, \zeta\) in the following form \[\label{eq5.8d} \frac{du}{dt}+h(u) =P_{K_r }[h((1-\eta) u+\eta u)-\rho ((1-\zeta) u+\zeta u) ],\quad u(t_{0})=\alpha. \tag{32}\]

Discretizing the dynamical system (32) and using a finite difference scheme, we suggest the proximal method.

Algorithm 14. For given \(u_{0}, u_{1},\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} \frac{u_{n+1}-u_n}{h_1}= -h(u_n)+P_{K_r } [h((1-\eta) u_n+\eta u_{n+1})-\rho ((1-\zeta) u_n+\zeta u_{n+1})]. \end{aligned}\]

This is called the inertial iterative method. For \(h_1=1\) and applying the predictor-corrector approach, Algorithm 14 is equivalent to:

Algorithm 15. For given \(u_{0}, u_{1},\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} y_n =& (1-\xi)u_n+ \xi u_{n-1} \nonumber \\ w_{n}=& u_n-h(u_n)+P_{K_r } [h((1-\eta) u_n+\eta y_{n})-\rho ((1-\zeta) u_n+\zeta y_{n})] \nonumber \\ z_{n}=& u_n-h(u_n)+P_{K_r } [h((1-\eta) y_n+\eta w_{n})-\rho ((1-\zeta) y_n+\zeta w_{n})] \nonumber \\ u_{n+1}=& u_n-h(u_n)+ P_{K_r }[h((1-\eta) w_n+\eta z_{n})-\rho ((1-\zeta) w_n+\zeta z_{n})] \nonumber \end{aligned}\] which is called the hybrid four-step inertial mid-point proximal method for solving the nonconvex inverse variational inequalities.

For \(\eta=\frac{1}{2}\) and \(\zeta=\frac{1}{2},\) Algorithm 15 reduces to:

Algorithm 16. For given \(u_{0}, u_{1},\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} y_n =& (1-\xi)u_n+ \xi u_{n-1} \nonumber \\ w_{n}=& u_n-h(u_n)+P_{K_r } [h(\frac{ u_n+ y_{n}}{2})-\rho (\frac{u_n+y_{n}}{2})] \nonumber \\ z_{n}=& u_n-h(u_n)+P_{K_r } [h(\frac{ y_n+ w_{n}}{2})-\rho (\frac{y_n+w_{n}}{2})] \nonumber \\ u_{n+1}=& u_n-h(u_n)+ P_{K_r }[h(\frac{ z_n+ w_{n}}{2})-\rho (\frac{z_n+w_{n}}{2})], \nonumber \end{aligned}\] which is called the hybrid four-step inertial method for solving the nonconvex inverse variational inequalities.
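The hybrid four-step inertial mid-point scheme of Algorithm 16 is easy to prototype. The concrete data below (unit ball \(K_r\), affine \(h(u)=0.9u+b\), frozen \(\xi\)) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of Algorithm 16: inertial extrapolation followed by three mid-point
# projection steps. Illustrative data: K_r = closed unit ball, h(u) = 0.9*u + b.
r, rho, xi = 1.0, 0.5, 0.5
b = np.array([0.3, -0.4])

def proj(z):
    nz = np.linalg.norm(z)
    return z if nz <= r else (r / nz) * z

def h(u):
    return 0.9 * u + b

def mid_step(u, p, q):               # u_n - h(u_n) + P[h(m) - rho*m], m = (p+q)/2
    m = 0.5 * (p + q)
    return u - h(u) + proj(h(m) - rho * m)

u_prev = np.array([1.0, -1.0])
u = np.array([0.5, 1.5])
for _ in range(200):
    y = (1 - xi) * u + xi * u_prev   # inertial step
    w = mid_step(u, u, y)
    z = mid_step(u, y, w)
    u_prev, u = u, mid_step(u, z, w)

residual = np.linalg.norm(h(u) - proj(h(u) - rho * u))
print(residual)
```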

For \(K_r= K,\) the convex set, Algorithm 15 reduces to the following methods for solving inverse variational inequalities.

Algorithm 17. For given \(u_{0}, u_{1},\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} y_n =& (1-\xi)u_n+ \xi u_{n-1} \nonumber \\ w_{n}=& u_n-h(u_n)+P_{K } [h((1-\eta) u_n+\eta y_{n})-\rho ((1-\zeta) u_n+\zeta y_{n})] \nonumber \\ z_{n}=& u_n-h(u_n)+P_{K } [h((1-\eta) y_n+\eta w_{n})-\rho ((1-\zeta) y_n+\zeta w_{n})] \nonumber \\ u_{n+1}=& u_n-h(u_n)+ P_{K}[h((1-\eta) w_n+\eta z_{n})-\rho ((1-\zeta) w_n+\zeta z_{n})], \nonumber \end{aligned}\] which is called the hybrid four-step inertial iterative method for solving the inverse variational inequalities.

We now consider the third order dynamical systems associated with the nonconvex inverse variational inequalities of the type (24). To be more precise, we consider the problem of finding \(u\in H,\) such that \[\begin{aligned} \label{5.3d} \gamma\frac{d^3u}{dt^3}+\zeta\frac{d^2u}{dt^2}+\xi\frac{du}{dt}+h(u)=& P_{K_r}[h(u)-\rho u],\nonumber \\ \quad & u(a)=\alpha,\dot{u}(a)=\beta,\dot{u}(b)=0, \end{aligned} \tag{33}\] where \(\gamma>0, \zeta, \xi\) and \(\rho>0\) are constants. Problem (33) is called the third order dynamical system associated with the nonconvex inverse variational inequalities (2).

The equilibrium point of the dynamical system (33) is defined as follows.

Definition 8. An element \(u \in H,\) is an equilibrium point of the dynamical system (33), if, \[\gamma\frac{d^3u}{dt^3}+\zeta\frac{d^2u}{dt^2}+\xi\frac{du}{dt}=0.\]

Thus it is clear that \(u \in H\) is a solution of the nonconvex inverse variational inequality (2), if and only if, \(u \in H\) is an equilibrium point.

Consequently, the problem (33) can be equivalently written as \[\begin{aligned} \label{eq5.3nn} h(u)= P_{K_r}\bigg[h(u)- \rho u+ \gamma\frac{d^3u}{dt^3}+\zeta\frac{d^2u}{dt^2}+\xi\frac{du}{dt}\bigg]. \end{aligned} \tag{34}\]

We discretize the third-order dynamical system (33) using central finite difference and backward difference schemes to obtain

\[\begin{aligned} \label{eq5.2g} &\gamma \frac{u_{n+2}-2u_{n+1}+2u_{n-1}-u_{n-2}}{2h^3} +\zeta\frac{u_{n+1}-2u_{n}+u_{n-1}}{h^2}\nonumber \\ &+\xi\frac{3u_{n}-4u_{n-1}+u_{n-2}}{2h}+h(u_{n})= P_{K_r}[h(u_{n})-\rho (u_{n+1})], \end{aligned} \tag{35}\] where \(h\) is the step size.

If \(\gamma =1, h=1, \zeta=1, \xi=1,\) then, from equation (35) after adjustment, we have

Algorithm 18. For a given \(u_{0},u_{1},\) compute \({u_{n+1}}\) by the iterative scheme \[\begin{aligned} u_{n+1}=&u_n-h(u_n)+ P_{K_r}\big[h(u_{n})- \rho u_{n+1}+ \frac{u_{n-1}-3u_{n}}{2} \big],\quad n=0,1,2,\ldots, \end{aligned}\] which is an inertial-type hybrid iterative method for solving the nonconvex inverse variational inequalities (2).
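Since \(u_{n+1}\) appears inside the projection, each outer step of Algorithm 18 is implicit; a short inner fixed-point loop resolves it. The inner loop, the unit-ball \(K_r\), and \(h(u)=0.9u+b\) are illustrative assumptions for this sketch.

```python
import numpy as np

# Sketch of the implicit inertial scheme (Algorithm 18). Each outer update is
# resolved by an inner fixed-point loop for u_{n+1}. Illustrative data:
# K_r = closed unit ball, h(u) = 0.9*u + b.
r, rho = 1.0, 0.5
b = np.array([0.3, -0.4])

def proj(z):
    nz = np.linalg.norm(z)
    return z if nz <= r else (r / nz) * z

def h(u):
    return 0.9 * u + b

u_prev = np.array([1.0, 0.0])
u = np.array([0.5, 0.5])
for _ in range(400):
    v = u.copy()                     # inner iterate approximating u_{n+1}
    for _ in range(50):              # the inner map is a contraction in v
        v = u - h(u) + proj(h(u) - rho * v + 0.5 * (u_prev - 3 * u))
    u_prev, u = u, v

residual = np.linalg.norm(h(u) - proj(h(u) - rho * u))
print(residual)
```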

Remark 2. In this section, we have tried to emphasize that the projected dynamical systems associated with nonconvex inverse variational inequalities can be used to investigate the existence of the solution as well as to compute the approximate solution. For suitable and appropriate choices of the convex sets \(K_r, K,\) parameters \((\alpha, \eta,\zeta ),\) operator \(h\) and the spaces, one can obtain several known and new results for solving nonconvex inverse variational inequalities and related programming problems as special cases of these results. All the results of this section continue to hold for the inverse variational inequalities and represent significant refinement of the known results. It is an interesting problem to explore the applications of these results.

5. Auxiliary principle technique

There are several techniques, such as projection, resolvent and descent methods, for solving the variational inequalities and their variant forms. None of these techniques can be applied for suggesting iterative methods for solving the nonconvex inverse variational inequalities. To overcome these drawbacks, one usually applies the auxiliary principle technique, which is mainly due to Glowinski et al. [49] as developed in [14, 18, 33–35], to suggest and analyze some inertial iterative methods for solving general nonconvex variational inequalities (2). We again apply the auxiliary principle technique involving an arbitrary operator for finding the approximate solution of the problem (2).

For a given \(u \in K_r\) satisfying (2), find \(w \in K_r\) such that \[\begin{aligned} \label{eq6.1} \langle \rho (w+\eta(u-w)), v – h(w) \rangle +\langle M(w)-M(u), v-w \rangle \geq 0, \quad\forall v \in K_r, \end{aligned} \tag{36}\] where \(\rho > 0 , \eta \in [0,1]\) are constants and \(M\) is an arbitrary operator. Inequality of type (36) is called the auxiliary nonconvex inverse variational inequality.

If \(w = u,\) then \(w\) is a solution of (2). This simple observation enables us to suggest the following iterative method for solving (2).

Algorithm 19. For a given \(u_0 \in K_r,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} \label{eq6.2} \langle \rho (u_{n+1}+\eta(u_n-u_{n+1})), v – h(u_{n+1}) \rangle +\langle M(u_{n+1})-M(u_n), v-u_{n+1} \rangle \geq 0, \forall v \in K_r. \end{aligned} \tag{37}\]

Algorithm 19 is called the hybrid proximal point algorithm for solving the nonconvex inverse variational inequalities (2).

Special Cases

Some special cases are discussed.

(I) For \(\eta =0,\) Algorithm 19 reduces to

Algorithm 20. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} \label{eq6.3} \langle \rho (u_{n+1}), v – h(u_{n+1}) \rangle +\langle M(u_{n+1})-M(u_n), v-u_{n+1} \rangle \geq 0, \quad\forall v \in K_r, \end{aligned} \tag{38}\]

is called the implicit iterative method for solving the problem (2).

(II) If \(\eta=1,\) then Algorithm 19 collapses to

Algorithm 21. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} \langle \rho (u_n), v – h(u_{n+1}) \rangle +\langle M(u_{n+1})-M(u_n), v-u_{n+1} \rangle \geq 0, \quad\forall v \in K_r, \end{aligned}\]

is called the explicit iterative method.

(III) For \(\eta =\frac{1}{2},\) Algorithm 19 becomes:

Algorithm 22. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} \langle \rho (\frac{u_{n+1}+u_n}{2}), v – h(u_{n+1}) \rangle +\langle M(u_{n+1})-M(u_n), v-u_{n+1} \rangle \geq 0, \quad\forall v \in K_r, \end{aligned}\]

is known as the mid-point proximal method for solving the problem (2).

For the convergence analysis of Algorithm 20, we need the following concepts.

Definition 9. An operator \(h\) is said to be pseudomonotone, if \[\begin{aligned} \langle u, v-h(u) \rangle \geq 0, \quad \forall v\in K_r \implies -\langle v, u-h(v) \rangle \geq 0, \quad \forall v\in K_r. \end{aligned}\]

Theorem 5. Let the operator \(h\) be pseudomonotone. Then the approximate solution \(u_{n+1}\) obtained from Algorithm 20 converges to the exact solution \(u\in H\) of the problem (2). If the operator \(M\) is strongly monotone with constant \(\xi > 0\) and Lipschitz continuous with constant \(\zeta > 0,\) then \[\begin{aligned} \label{eq6.4} \xi \|u_{n+1}- u_n\| \leq \zeta \|u-u_n\|. \end{aligned} \tag{39}\]

Proof. Let \(u \in K_r\) be a solution of the problem (2). Then, \[\begin{aligned} \label{eq6.5} -\langle \rho v, u – h(v) \rangle \geq 0, \quad \forall v \in K_r, \end{aligned} \tag{40}\] since the operator \(h\) is a pseudo-monotone.

Taking \(v=u_{n+1}\) in (40), we obtain \[\begin{aligned} \label{eq6.6} -\langle \rho (u_{n+1}), u – h(u_{n+1}) \rangle \geq 0. \end{aligned} \tag{41}\]

Setting \(v= u\) in (38), we have \[\begin{aligned} \label{eq6.7} &\langle \rho (u_{n+1}), u – h(u_{n+1})\rangle +\langle M(u_{n+1})-M(u_n), u-u_{n+1} \rangle \geq 0. \end{aligned} \tag{42}\]

Combining (40), (41) and (42), we have \[\begin{aligned} \label{eq6.8} \langle M(u_{n+1})-M(u_n), u-u_{n+1} \rangle \geq & -\langle \rho u_{n+1}, u - h(u_{n+1})\rangle \geq 0. \end{aligned} \tag{43}\]

From the equation (43), we have \[\begin{aligned} 0\leq & \langle M(u_{n+1})-M(u_n) , u- u_{n+1}\rangle \nonumber \\ =& \langle M(u_{n+1})-M(u_n) , u- u_n+u_n- u_{n+1} \rangle \nonumber \\ =& \langle M(u_{n+1})-M(u_n) , u- u_n \rangle + \langle M(u_{n+1})-M(u_n), u_n- u_{n+1} \rangle, \end{aligned}\] which implies that \[\begin{aligned} \langle M(u_{n+1})-M(u_n), u_{n+1}-u_n \rangle \leq \langle M(u_{n+1})-M(u_n) , u- u_n \rangle. \end{aligned}\]

Now using the strongly monotonicity with constant \(\xi >0\) and Lipschitz continuity with constant \(\zeta\) of the operator \(M,\) we obtain \[\begin{aligned} \xi \|u_{n+1}-u_n\|^2 \leq \zeta \|u_{n+1}-u_n\|\|u_n-u\|. \end{aligned}\]

Thus \[\begin{aligned} \xi \|u_n-u_{n+1}\| \leq \zeta \|u_n-u\|, \end{aligned}\] the required result (39). ◻

Theorem 6. Let \(H\) be a finite dimensional space and all the assumptions of Theorem 5 hold. Then the sequence \(\{u_n \}_{0}^{\infty}\) given by Algorithm 20 converges to the exact solution \(u\) of (2).

Proof. Let \(u \in H\) be a solution of (2). From (39), it follows that the sequence \(\{\|u-u_n\| \}\) is nonincreasing and consequently \(\{u_n\}\) is bounded. Furthermore, we have \[\xi \sum_{n=0}^{\infty} \|u_{n+1}- u_n\|\leq \zeta \|u_{0} - u\|,\] which implies that \[\begin{aligned} \label{eq6.10} \lim_{n \rightarrow \infty} \|u_{n+1}- u_n\| = 0. \end{aligned} \tag{44}\]

Let \(\hat{u}\) be a limit point of \(\{u_n \}_{0}^{\infty}\); a subsequence \(\{ u_{n_{j}} \}_{1}^{\infty}\) of \(\{u_n\}_{0}^{\infty}\) converges to \(\hat{u} \in H\). Replacing \(u_{n+1}\) by \(u_{n_j}\) in (38), taking the limit \(n_j \longrightarrow \infty\) and using (44), we have \[\begin{aligned} \label{eq6.11} \langle \rho \hat{u}, v – h(\hat{u}) \rangle \geq 0, \qquad \forall v \in K_r, \end{aligned}\] which implies that \(\hat{u}\) solves the problem (2) and \[\| u_{n+1} -u \| \leq \| u_{n} -u \|.\]

Thus it follows from the above inequality that \(\{ u_{n} \}_{1}^{\infty}\) has exactly one limit point \(\hat{u}\) and \[\lim_{n \rightarrow \infty} u_{n} = \hat{u},\] the required result. ◻

In recent years, inertial-type iterative methods have been applied to find the approximate solutions of variational inequalities and related optimization problems. We again apply the modified auxiliary principle approach [14, 37] to suggest some hybrid inertial proximal point schemes for solving the nonconvex inverse variational inequality (2).

For a given \(u \in K_r\) satisfying (2), find \(w \in K_r\) such that \[\begin{aligned} \label{eq6.12} \langle \rho (w+\eta(u-w)), v – h(w) \rangle +\langle M(w)-M(u)+\alpha(u-u), v-w \rangle \geq 0, \quad\forall v \in K_r, \end{aligned} \tag{45}\] where \(\rho > 0 , \eta , \alpha \in [0,1]\) are constants and \(M\) is a nonlinear operator.

Clearly \(w = u\) implies that \(w\) is a solution of (2). This simple observation enables us to suggest the following iterative method for solving (2).

Algorithm 23. For a given \(u_0, u_1 \in K_r,\) compute the approximate solution \(u_{n+1}\) by the iterative scheme \[\begin{aligned} \label{eq6.13} \langle \rho (u_{n+1}+\eta(u_n-u_{n+1})), v – h(u_{n+1}) \rangle +\langle M(u_{n+1})-M(u_n)+\alpha (u_n-u_{n-1}), v-u_{n+1} \rangle \geq 0, \quad\forall v \in K_r. \end{aligned}\]

Algorithm 23 is called the hybrid inertial proximal point algorithm for solving the nonconvex inverse variational inequalities (2). For \(\alpha =0,\) Algorithm 23 is exactly Algorithm 19. Using the techniques and ideas of Theorem 5 and Theorem 6, one can analyze the convergence of Algorithm 23 and its special cases.

6. Nonexpansive mappings

It is well known that the solution of the nonconvex variational inequalities can be computed using the iterative projection method, the convergence of which requires the strong monotonicity and Lipschitz continuity of the involved operator. These strict conditions rule out its applications to important problems. To overcome these drawbacks, we use the relaxed co-coercivity concept, which is weaker than strong monotonicity. In this respect, our results represent a refinement of the previously known results. Noor [15, 18] suggested and analyzed several three-step iterative methods for solving different classes of variational inequalities. It has been shown that three-step schemes are numerically better than two-step and one-step methods. Related to the nonconvex inverse variational inequalities is the problem of finding the fixed points of nonexpansive mappings, which is a subject of current interest in functional analysis. Motivated by the research going on in these fields, we suggest and analyze several new three-step iterative methods for finding the common solution of these problems. We also prove the convergence criteria of these new iterative schemes under some mild conditions. Since the inverse variational inequalities include the variational inequalities and inverse complementarity problems as special cases, the results obtained in this paper continue to hold for these problems. Results proved in this section may be viewed as a significant improvement of the previously known results.

Lemma 3 implies that nonconvex inverse variational inequalities (2) and the inverse fixed point problems (9) are equivalent. This alternative equivalent formulation has played a significant role in the studies of the inverse variational inequalities and related optimization problems.

Let \(S\) be a nonexpansive mapping. We denote the set of the fixed points of \(S\) by \(F(S)\) and the set of the solutions of the general nonconvex inverse variational inequalities (2) by \(GNVI(K_r).\) We can characterize the problem. If \(u^{\ast } \in F(S)\cap GNVI(K_r),\) then \(u^{\ast } \in F(S)\) and \(u^{\ast } \in GNVI(K_r).\) Thus from Lemma 3, it follows that \[\begin{aligned} u^{\ast} = Su^{\ast} =S\{u^{\ast}-h(u^{\ast})+ P_{K_r} [h(u^{\ast}) – \rho u^{\ast}]\}, \end{aligned}\] where \(\rho > 0\) is a constant.

This fixed point formulation is used to suggest the following multi-step iterative methods for finding a common element of two different sets of solutions of the inverse fixed points of the nonexpansive mappings \(S\) and the nonconvex inverse variational inequality (2).

Algorithm 24. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes \[z_n = (1-c_n)u_n + c_nS\{ u_n-h(u_n)+P_{K_r}[h(u_n)-\rho u_n]\}, \label{8.1}\ \tag{46}\] \[y_n = (1-b_n)u_n + b_n S\{ u_n-h(u_n)+P_{K_r}[h(z_n)-\rho z_n]\},\label{8.2} \ \tag{47}\] \[u_{n+1} = (1- a_n ) u_n + a_n S \{ u_n-h(u_n)+P_{K_r}[h(y_n)-\rho y_n]\}, \label{8.3} \tag{48}\] where \(a_n, b_n , c_n \in [0,1]\) for all \(n \geq 0\) and \(S\) is the nonexpansive operator.

Algorithm 24 is a three-step iterative method.
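A small experiment illustrates the three-step scheme. The choices below are illustrative assumptions, not from the paper: \(S\) is a plane rotation (an isometry, hence nonexpansive, with fixed point \(0\)), \(K_r\) is the closed unit ball, and \(h(u)=0.9u+b\), chosen so that \(u^{\ast}=0\) lies in \(F(S)\cap GNVI(K_r)\).

```python
import numpy as np

# Sketch of Algorithm 24: three relaxed steps combining the projection map
# with a nonexpansive mapping S (here a rotation). Illustrative data only.
r, rho = 1.0, 0.5
b = np.array([0.3, -0.4])
th = 0.3
S = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])   # rotation: nonexpansive, S(0) = 0

def proj(z):
    nz = np.linalg.norm(z)
    return z if nz <= r else (r / nz) * z

def h(u):
    return 0.9 * u + b

def T(u, v):                        # u_n - h(u_n) + P[h(v) - rho*v]
    return u - h(u) + proj(h(v) - rho * v)

a_n, b_n, c_n = 0.8, 0.8, 0.8
u = np.array([1.0, -1.0])
for _ in range(300):
    z = (1 - c_n) * u + c_n * (S @ T(u, u))
    y = (1 - b_n) * u + b_n * (S @ T(u, z))
    u = (1 - a_n) * u + a_n * (S @ T(u, y))

print(np.linalg.norm(u))            # approaches the common solution u* = 0
```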

Note that for \(c_n \equiv 0,\) Algorithm 24 reduces to:

Algorithm 25. For an arbitrarily chosen initial point \(u_0,\) compute the approximate solution \(\{u_n\}\) by the iterative schemes \[\begin{aligned} y_n =& (1-b_n) u_n + b_n S \{ u_n-h(u_n)+P_{K_r}[h(u_n)-\rho u_n]\}, \\ u_{n+1} =& (1- a_n ) u_n + a_n S \{ u_n-h(u_n)+P_{K_r}[h(y_n)-\rho y_n]\}, \end{aligned}\] where \(a_n, b_n \in [0,1]\) for all \(n \geq 0\) and \(S\) is the nonexpansive operator.

Algorithm 25 is called the two-step (Ishikawa) iteration. For \(b_n \equiv 1, a_n \equiv 1,\) Algorithm 25 reduces to:

Algorithm 26. For an arbitrarily chosen initial point \(u_0,\) compute the sequence \(\{ u_n \}\) by the iterative schemes \[\begin{aligned} y_n =& S\{ u_n-h(u_n)+P_{K_r}[h(u_n)-\rho u_n]\}, \\ u_{n+1} =& S \{ u_n-h(u_n)+P_{K_r}[h(y_n)-\rho y_n]\}. \end{aligned}\]

For \(b_n \equiv 0, c_n \equiv 0,\) Algorithm 24 collapses to the following iterative method.

Algorithm 27. For a given \(u_0,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes: \[\begin{aligned} u_{n+1} = (1- a_n ) u_n + a_n S\{ u_n-h(u_n)+P_{K_r}[h(u_n)-\rho u_n]\}, \end{aligned}\] which is known as the Mann (one-step) iteration.

For \(K_r \equiv K,\) Algorithm 24 reduces to the following three-step iterative methods for solving the problem \(F(S)\cap GNVI(K).\)

Algorithm 28. For a given \(u_0 \in K,\) compute the approximate solution \(u_{n+1}\) by the iterative schemes \[\begin{aligned} z_n =& (1-c_n)u_n + c_nS\{ u_n-h(u_n)+P_K[h(u_n)-\rho u_n]\}, \\ y_n =& (1-b_n)u_n + b_n S\{ u_n-h(u_n)+P_K[h(z_n)-\rho z_n]\}, \\ u_{n+1} =& (1- a_n ) u_n + a_n S \{ u_n-h(u_n)+P_K[h(y_n)-\rho y_n]\}, \end{aligned}\] where \(a_n, b_n , c_n \in [0,1]\) for all \(n \geq 0\) and \(S\) is the nonexpansive operator.

Algorithm 28 is the Noor (three-step) iteration. Clearly, the Noor (three-step) iterations include the Mann and Ishikawa iterations as special cases. In particular, the three-step methods suggested in this paper are quite general and include several new and previously known algorithms for solving variational inequalities and nonexpansive mappings.

In this section, we investigate the strong convergence of Algorithms 24, 25 and 27 in finding the common element of two sets of solutions of the nonconvex inverse variational inequalities (2) and \(F(S)\) and this is the main motivation of this section.

Definition 10. A mapping \(T: H \rightarrow H\) is called inverse strongly monotone (or co-coercive) with constant \(\alpha > 0,\) if \[\langle Tu – Tv,u-v \rangle \geq \alpha \|Tu – Tv\|^2, \quad \forall u,v \in H.\]

Lemma 5. Suppose \(\{ \delta_k \}_{k=0}^{\infty}\) is a nonnegative sequence satisfying the following inequality: \[\delta_{k+1} \leq (1- \lambda_k ) \delta_k + \sigma_k, \ k \geq 0\] with \(\lambda_k \in [0,1]\), \(\sum_{k=0}^{\infty} \lambda_k = \infty\), and \(\sigma_k = o (\lambda_k )\). Then \(\lim_{k \rightarrow \infty} \delta_k =0\).
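The recursion in Lemma 5 is easy to check numerically. The choices \(\lambda_k = 1/(k+2)\) (so that \(\sum \lambda_k = \infty\)) and \(\sigma_k = \lambda_k^2 = o(\lambda_k)\) below are illustrative; the lemma guarantees \(\delta_k \to 0\).

```python
# Numeric check of Lemma 5 with the illustrative choices lambda_k = 1/(k+2)
# (divergent sum) and sigma_k = lambda_k**2 = o(lambda_k).
delta = 1.0
for k in range(200000):
    lam = 1.0 / (k + 2)
    delta = (1 - lam) * delta + lam ** 2
print(delta)                         # tends to 0 as k grows
```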

We now consider the convergence criteria of Algorithm 24.

Theorem 7. Let \(h\) be strongly monotone with constant \(\sigma >0\) and Lipschitz continuous with constant \(\zeta>0,\) and let \(S\) be a nonexpansive mapping such that \(F(S) \cap GNVI(K_r) \neq \emptyset\). Let \(\{u_n \}\) be a sequence defined by Algorithm 24, for any initial point \(u_0 \in K_r\), with the conditions \[\begin{aligned} \label{8.5} \rho < \frac{1-k-\delta}{\delta}, \quad k+\delta < 1, \end{aligned} \tag{49}\] where \(k= (1+\delta)\sqrt{1-2\sigma+\zeta^2},\) \(\quad a_n, b_n, c_n \in [0,1]\) and \(\sum_{n=0}^{\infty} a_n = \infty.\) If Assumption 1 holds, then \(u_n\) obtained from Algorithm 24 converges strongly to \(u^{\ast} \in F(S) \cap GNVI(K_r).\)

Proof. Let \(u^{\ast} \in K_r\) be the solution of \(F(S)\cap GNVI(K_r).\) Then \[u ^{\ast} = (1-c_n)u^{\ast } + c_nS\{ u^{\ast}-h(u^{\ast})+ P_{K_r}[h(u^{\ast })-\rho u^{\ast}]\} \label{8.7} \ \tag{50}\] \[ = (1-b_n)u^{\ast } + b_nS\{ u^{\ast}-h(u^{\ast})+ P_{K_r}[h(u^{\ast })-\rho u^{\ast}] \}\label{8.8} \ \tag{51}\] \[ = (1-a_n)u^{\ast } + a_nS\{u^{\ast}-h(u^{\ast})+ P_{K_r}[h(u^{\ast })-\rho u^{\ast}]\}, \label{8.9} \tag{52}\] where \(a_n, b_n, c_n \in [0,1]\) are some constants.

From (48) and (52), Assumption 1 and the nonexpansivity of \(S\), we have \[\begin{aligned} \label{8.10} \|u_{n+1} - u^{\ast}\| \leq & (1- a_n ) \|u_n - u^{\ast}\|+ a_n \|y_n-u^{\ast}-(h(y_n)-h(u^{\ast}))\|\nonumber \\ & +a_n \| P_{K_r}[h(y_n)-\rho y_n]- P_{K_r}[h(u^{\ast })-\rho u^{\ast}]\| \nonumber\\ \leq & (1- a_n ) \|u_n - u^{\ast}\|+ a_n(1+\delta) \|y_n-u^{\ast}-(h(y_n)-h(u^{\ast}))\|\nonumber \\ &+ a_n\delta \|y_n-u^{\ast}-\rho (y_n-u^{\ast})\|. \end{aligned} \tag{53}\]

From the strong monotonicity and Lipschitz continuity of the operator \(h\) with constants \(\sigma >0\) and \(\zeta > 0,\) respectively, we have \[\begin{aligned} \label{8.11} \| y_n - u^{\ast} - ( h(y_n) - h(u^{\ast}))\|^2&= \langle y_n - u^{\ast} - ( h(y_n) - h(u^{\ast}) ), y_n - u^{\ast} - ( h(y_n) - h(u^{\ast}) ) \rangle \nonumber\\ & = \|y_n - u^{\ast}\|^2 - 2 \langle h(y_n) - h(u^{\ast}),y_n- u^{\ast} \rangle + \|h(y_n) -h( u^{\ast})\|^2 \nonumber\\ &\leq (1-2\sigma + \zeta^2)\|y_n -u^{\ast}\|^2. \end{aligned} \tag{54}\]

Combining (53) and (54), we have \[\begin{aligned} \label{8.12} \|u_{n+1}-u^{\ast}\| \leq & (1-a_n)\|u_n-u^{\ast}\| +a_n(1+\delta)\sqrt{1- 2 \sigma + \zeta^2 } \|y_n-u^{\ast}\| + a_n \delta (1+\rho)\|y_n-u^{\ast}\| \nonumber \\ =& (1-a_n)\|u_n-u^{\ast}\| + a_n\theta \|y_n-u^{\ast}\|, \end{aligned} \tag{55}\] where \[\begin{aligned} \label{8.13} \theta = \delta (1+\rho) + (1+\delta)\sqrt{1-2\sigma+\zeta^2}= \delta(1+\rho)+k , \end{aligned} \tag{56}\] and \[\begin{aligned} \label{8.13a} k = (1+\delta)\sqrt{1-2\sigma+\zeta^2}. \end{aligned} \tag{57}\]

It follows from (49) that \(\theta < 1.\)
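As a quick sanity check on this threshold, one can evaluate \(\theta = \delta(1+\rho) + k\) with \(k = (1+\delta)\sqrt{1-2\sigma+\zeta^2}\) for sample parameter values; the numbers below are hypothetical, chosen only to satisfy the hypotheses of Theorem 7.

```python
from math import sqrt

def theta(delta: float, rho: float, sigma: float, zeta: float) -> float:
    """Contraction constant of (56): theta = delta*(1+rho) + k."""
    k = (1 + delta) * sqrt(1 - 2 * sigma + zeta ** 2)  # constant k of (57)
    return delta * (1 + rho) + k

# Illustrative values: sigma = 0.5, zeta = 0.6 give sqrt(1-2*sigma+zeta^2) = 0.6,
# so with delta = 0.1 and rho = 1 we get theta = 0.2 + 0.66 = 0.86 < 1.
print(theta(delta=0.1, rho=1.0, sigma=0.5, zeta=0.6))
```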

From (47) and (51), the nonexpansivity of \(S\), (54) and (56), we have \[\begin{aligned} \label{8.14} \|y_n-u^{\ast}\| &\leq (1-b_n)\|u_n-u^{\ast}\| +b_n\theta\|z_n-u^{\ast}\|. \end{aligned} \tag{58}\]

In a similar way, from (46) and (50), it follows that \[\begin{aligned} \label{8.17} \|z_n - u^{\ast}\| \leq & (1-c_n)\|u_n-u^{\ast}\| + c_n\theta \|u_n-u^{\ast}\| \nonumber \\ =& [1-c_n(1-\theta )]\|u_n-u^{\ast}\| \nonumber \\ \leq & \|u_n-u^{\ast}\|. \end{aligned} \tag{59}\]

From (55), (58) and (59), and using \(\theta < 1\), we obtain \[\begin{aligned} \label{8.18} \|u_{n+1} - u^{\ast}\| \leq & (1- a_n ) \|u_n - u^{\ast}\|+ a_n \theta \| y_n -u^{\ast}\| \nonumber\\ \leq & (1- a_n ) \|u_n - u^{\ast}\|+ a_n \theta \| z_n - u^{\ast}\| \nonumber\\ \leq & (1-a_n)\|u_n-u^{\ast}\| + a_n \theta \|u_n-u^{\ast }\| \nonumber\\ =& [1- a_n (1- \theta)] \|u_n - u^{\ast}\|. \end{aligned} \tag{60}\] Since \(\sum_{n=0}^{\infty} a_n = \infty\) and \(\theta < 1\), Lemma 5 (applied with \(\lambda_n = a_n(1-\theta)\) and \(\sigma_n = 0\)) gives \(\lim_{n \rightarrow \infty} \|u_n- u^{\ast}\| =0\), the required result. ◻

All the results proved in this paper can be extended for solving more general nonconvex inverse variational inequalities.

For given nonlinear operators \(h,g,\) consider the problem of finding \(u\in K_r\) such that \[\begin{aligned} \label{eq9.1} \langle u, g(v)-h(u) \rangle \geq 0, \quad \forall v\in K_r, \end{aligned} \tag{61}\] which is called the nonconvex inverse variational inequality. For different choices of the operators, one obtains new classes of nonconvex inverse variational inequalities. Using Lemma 3, problem (61) is equivalent to finding \(u\in K_r\) such that \[\begin{aligned} \label{eq9.2} h(u)= P_{K_r}[g(u)- \rho u], \end{aligned} \tag{62}\] which can be written as \[\begin{aligned} u= u-h(u)+P_{K_r}[g(u)- \rho u]. \end{aligned}\]

Thus one can consider the mapping \(\Phi\) associated with the problem (61) as \[\begin{aligned} \label{eq3.3} \Phi(u)= u-h(u)+P_{K_r}[g(u)- \rho u], \end{aligned}\] which can be used to discuss the uniqueness of the solution of the problem (61). The equivalent fixed point formulation can be used to consider iterative methods, dynamical systems, sensitivity analysis, nonexpansive mappings and other related optimization problems. We would like to point out that the ideas and techniques of this paper can be used to suggest and analyze iterative methods for solving systems of (multivalued) general nonconvex inverse variational inequalities involving several different operators, with appropriate modifications. We hope that the results proved in this paper may inspire interested readers to discover novel applications of these general nonconvex inverse variational inequalities in different branches of pure and applied science.
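To make the fixed-point formulation concrete, here is a minimal one-dimensional sketch of the iteration \(u_{k+1} = \Phi(u_k)\). The interval \(K = [1,2]\), the operators \(h(u)=u\), \(g(u)=u\) and the parameter \(\rho = 0.4\) are hypothetical choices made only for illustration (on a convex interval the projection reduces to a clip); they are not taken from the algorithms above.

```python
# Toy 1-D illustration of the fixed-point map
#   Phi(u) = u - h(u) + P_K[g(u) - rho*u]
# on K = [1, 2].  All operator choices below are hypothetical.

def project(u: float, lo: float = 1.0, hi: float = 2.0) -> float:
    """Projection onto the interval K = [lo, hi]."""
    return min(max(u, lo), hi)

def phi(u: float, rho: float = 0.4) -> float:
    h = lambda x: x   # strongly monotone and Lipschitz (sigma = zeta = 1)
    g = lambda x: x
    return u - h(u) + project(g(u) - rho * u)

u = 1.9
for _ in range(50):   # iterate u_{k+1} = Phi(u_k)
    u = phi(u)

print(u)  # settles at u* = 1, where h(u*) = P_K[g(u*) - rho*u*]
```

With these choices \(\Phi(u)=\max(1,\,0.6u)\) on the relevant range, so the iterates settle at the fixed point \(u^{\ast}=1\), which indeed satisfies the characterization \(h(u^{\ast})= P_{K}[g(u^{\ast})-\rho u^{\ast}]\).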

Using the dynamical systems approach, one can associate with problem (61) the second-order initial value problem of finding \(u\in H\) such that \[\begin{aligned} \gamma \frac{d^{2}u}{dt^2}+\zeta \frac{du}{dt}= \lambda\{P_{K_r}[g(u)-\rho u]-h(u)\},\quad u(t_0)=\alpha , \dot{u}(t_0)=\beta, \end{aligned} \tag{63}\] where \(\lambda, \gamma, \zeta, \alpha, \beta\) are constants. This shows that the dynamical system is a second-order initial value problem, which falls within the theory of differential equations. Consequently, the Euler method, the shooting method and Runge-Kutta methods can be used to find approximate solutions of the nonconvex inverse variational inequalities and related optimization problems. To the best of our knowledge, no numerical results are available.
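As an illustration of this approach, the sketch below discretizes a system of the form (63) with the forward Euler method, rewriting it as a first-order system \(u' = v\), \(\gamma v' = \lambda\{P_{K_r}[g(u)-\rho u]-h(u)\} - \zeta v\). All concrete choices (the interval \(K = [1,2]\), \(h(u)=u\), \(g(u)=u\), and the constants below) are hypothetical, made only so the example runs; they are not values from the paper.

```python
# Forward Euler discretization of a system of the form (63):
#   gamma*u'' + zeta*u' = lambda*(P_K[g(u) - rho*u] - h(u)),
# rewritten as u' = v, gamma*v' = lambda*residual - zeta*v.

def project(u: float, lo: float = 1.0, hi: float = 2.0) -> float:
    """Projection onto K = [lo, hi]."""
    return min(max(u, lo), hi)

def simulate(u0: float, v0: float, gamma: float = 1.0, zeta: float = 2.0,
             lam: float = 1.0, rho: float = 0.4,
             dt: float = 0.01, steps: int = 5000) -> float:
    u, v = u0, v0
    for _ in range(steps):
        residual = project(u - rho * u) - u   # P_K[g(u) - rho*u] - h(u)
        # simultaneous Euler update of (u, v); old values used on the right
        u, v = u + dt * v, v + dt * (lam * residual - zeta * v) / gamma
    return u

print(simulate(u0=1.9, v0=0.0))  # trajectory settles near u* = 1
```

The equilibrium of the discretized system is exactly the point where \(P_{K}[g(u)-\rho u]=h(u)\), i.e., a solution of (62); the damping term \(\zeta\,du/dt\) drives the trajectory toward it.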

7. Conclusion

Some new classes of nonconvex inverse variational inequalities are introduced and studied. Using the projection technique, we have established that the nonconvex inverse variational inequalities are equivalent to fixed point problems. This equivalent formulation is applied to discuss the existence and uniqueness of the solution as well as to develop multi-step inertial iterative methods. Under suitable conditions, the convergence analysis of the proposed methods is discussed. Applying the equivalent fixed point formulation, a projected dynamical system associated with the nonconvex inverse variational inequalities is considered. Stability of the solution is proved using a Lyapunov function. Since the dynamical system is an initial value problem, finite difference schemes are used to propose some iterative methods for approximating the solution. If the prox-regular set \(K_r\) reduces to the convex set \(K\), then our results represent a significant refinement of the known results for general variational inequalities. Applying the techniques and ideas of Ashish et al. [44, 45], Han et al. [7], He et al. [8, 9] and Natarajan et al. [46], one can explore the Julia and Mandelbrot sets in the Noor orbit using the Noor (three-step) iterations in fixed point theory; such studies will continue to inspire further research in fractal geometry, chaos theory, coding, number theory, spectral geometry, dynamical systems, complex analysis, nonlinear programming, graphics and computer-aided design. This is an open problem, which deserves further research efforts. Applications of fuzzy set theory, stochastic, quantum calculus, fractal, logistic map [47], fractional and random traffic equilibrium problems can be found in many branches of the mathematical and engineering sciences, including artificial intelligence, computer science, control engineering, management science, operations research, green energy [46] and variational inequalities [54, 55].
One may explore these aspects of the nonconvex inverse variational inequalities and their variant forms. We have considered only the theoretical aspects of the approximate methods. One can develop efficient numerical methods for solving the nonconvex inverse variational inequalities, which may be a starting point for further research.

References

  1. Stampacchia, G. (1964). Formes bilineaires coercitives sur les ensembles convexes. Comptes Rendus Hebdomadaires Des Seances De L Academie Des Sciences, 258(18), 4413-4416.

  2. Lions, J. L., & Stampacchia, G. (1967). Variational inequalities. Communications on Pure and Applied Mathematics, 20, 393-519.

  3. Bounkhel, M., Tadj, M. L., & Hamdi, A. (2003) Iterative schemes to solve nonconvex variational problems. Journal of Inequalities in Pure and Applied Mathematics, 4(1), p14.

  4. Dupuis, P., & Nagurney, A. (1993). Dynamical systems and variational inequalities. Annals of Operations Research, 44, 19-42.

  5. Alvarez, F. (2003). Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM Journal on Optimization, 14, 773–782.

  6. Glowinski, R., & Le Tallec, P. (1989). Augmented Lagrangian and Operator Splitting Methods in Nonlinear Mechanics, SIAM, Philadelphia, Pennsylvania, USA.

  7. Han, Y., Huang, N., Lu, J., & Xiao, Y. (2017). Existence and stability of solutions to inverse variational inequality problems. Applied Mathematics and Mechanics, 38(5), 749-764.

  8. He, B., He, X.,& Liu, H. X. (2010). Solving a class of constrained ’black-box’ inverse variational inequalities. European Journal of Operational Research, 204, 391-401.

  9. He, S., & Dong, Q. L. (2018). An existence-uniqueness theorem and alternating contraction projection methods for inverse variational inequalities. Journal of Inequalities and Applications, 2018, p351.

  10. Kinderlehrer, D.,& Stampacchia, G. (2000). An Introduction to Variational Inequalities and Their Applications, SIAM, Philadelphia, Pennsylvania, USA.

  11. Noor, M. A. (1975). On Variational Inequalities, PhD Thesis, Brunel University, London, UK.

  12. Noor, M. A. (1988). General variational inequalities. Applied Mathematics Letters, 1, 119-121.

  13. Noor, M. A. (1988). Quasi variational inequalities. Applied Mathematics Letters, 1(4), 367-370.

  14. Noor, M. A. (1992). General algorithm for variational inequalities. Journal of Optimization Theory and Applications, 73, 409-413.

  15. Noor, M. A. (2000). New approximation schemes for general variational inequalities. Journal of Mathematical Analysis and Applications 251, 217-229.

  16. Noor, M. A. (2002). A Wiener-Hopf dynamical system for variational inequalities. New Zealand Journal of Mathematics 31, 173-182.

  17. Noor, M. A. (2002). Stability of the modified projected dynamical systems. Computers & Mathematics with Applications 44, 1-5.

  18. Noor, M. A. (2004). Some developments in general variational inequalities. Applied Mathematics and Computation, 152(1), 199-277.

  19. Noor, M. A. (2004). Iterative schemes for nonconvex variational inequalities. Journal of Optimization Theory and Applications, 121, 385-395.

  20. Noor, M. A. (2009). Some classes of general nonconvex variational inequalities. Albanian Journal of Mathematics, 3, 175-188.

  21. Noor, M. A. (2010). Nonconvex quasi variational inequalities. Journal of Advanced Mathematical Studies, 3, 59-72.

  22. Noor, M. A. (2009). Implicit iterative methods for nonconvex variational inequalities. Journal of Optimization Theory and Applications, 143, 619-624.

  23. Noor, M. A. (2009). Projection methods for nonconvex variational inequalities. Optimization Letters, 3, 411-418.

  24. Noor, M. A. (2009). Extended general variational inequalities. Applied Mathematics Letters, 22, 182-186.

  25. Noor, M. A. (2011). Some iterative methods for general nonconvex variational inequalities. Mathematical and Computer Modelling, 54, 2955-2961.

  26. Noor, M. A., & Noor, K. I. (2022). Dynamical system technique for solving quasi variational inequalities. UPB Scientific Bulletin, Series A: Applied Mathematics and Physics, 84(4), 55-66.

  27. Noor, M. A., & Noor, K. I. (2022). New inertial approximation schemes for general quasi variational inclusions. Filomat, 36(18), 6071-6084.

  28. Noor, M. A., Noor, K. I., Huang, Z., & Al-Said, E. (2012). Implicit schemes for solving extended general nonconvex variational inequalities. Journal of Applied Mathematics, 2012(1), 646259.

  29. Noor, M. A., & Noor, K. I. (2023). Some aspects of exponentially general nonconvex variational inequalities. Journal of Advanced Mathematical Studies 16(4), 257-376.

  30. Noor, M. A., & Noor, K. I. (2023). New classes of exponentially general nonconvex variational inequalities. Applied Engineering Technology, 2(2), 93-119.

  31. Noor, M. A., & Noor, K. I. (2024). Some novel aspects and applications of Noor iterations and Noor orbits. Journal of Advanced Mathematical Studies 17(3), 276-284.

  32. Noor, M. A., & Noor, K. I. (2022). New iterative methods and sensitivity analysis for inverse quasi variational inequalities. Earthline Journal of Mathematical Sciences, 15(4), 495-539.

  33. Noor, M. A., Noor, K. I., Alshejari, A., & Rassias, M. Th. (2025). Some new developments in general nonconvex variational inequalities. In: Trends in Applied Mathematical Analysis (Ed.: Themistocles M. Rassias), Springer.

  34. Noor, M. A., Noor, K. I.,& Rassias, M. Th. (2020). New trends in general variational inequalities. Acta Applicandae Mathematicae, 170(1), 981–1064.

  35. Noor, M. A., Noor, K. I.,& Rassias, Th. M. (1993). Some aspects of variational inequalities. Journal of Computational and Applied Mathematics, 47, 285-312.

  36. Noor, M. A.,& Oettli, W. (1994). On general nonlinear complementarity problems and quasi equilibria. Le Mathematiche, 49, 313-331.

  37. Patriksson, M. (1998). Nonlinear Programming and Variational Inequalities: A Unified Approach, Kluwer Academic Publishers, Dordrecht, Holland.

  38. Suantai, S., Noor, M. A., Kankam, K., & Cholamjiak, P. (2021). Novel forward–backward algorithms for optimization and applications to compressive sensing and image inpainting. Advances in Difference Equations, 2021(1), 265.

  39. Trinh, T. Q., & Vuong, P. T. (2024). The projection algorithm for inverse quasi-variational inequalities with applications to traffic assignment and network equilibrium control problems. Optimization, 2024, 1-25.

  40. Vuong, P. T., He, X., & Thong, D. V. (2021). Global exponential stability of a neural network for inverse variational inequalities. Journal of Optimization Theory and Applications, 190, 915-930.

  41. Xia, Y. S., & Wang, J. (2000). A recurrent neural network for solving linear projection equations. Neural Network, 13, 337-350.

  42. Xia, Y. S., & Wang, J. (2000). On the stability of globally projected dynamical systems. Journal of Optimization Theory and Applications, 106, 129-150.

  43. Clarke, F. H., Ledyaev, Y. S.,& Wolenski, P. R. (1998). Nonsmooth Analysis and Control Theory, Springer-Verlag, Berlin, Germany.

  44. Ashish, K., Rani, M., & Chugh, R. (2014). Julia sets and Mandelbrot sets in Noor orbit. Applied Mathematics and Computation, 228(1), 615-631.

  45. Ashish, Chugh, R., & Rani, M. (2021). Fractals and Chaos in Noor Orbit: A Four-Step Feedback Approach, Lap Lambert Academic Publishing, Saarbrucken, Germany.

  46. Natarajan, S. K., & Negi, D. (2024). Green Innovations Uniting Fractals and Power for Solar Panel Optimization. In Green Innovations for Industrial Development and Business Sustainability (pp. 146-152). CRC Press.

  47. Yadav, A., & Jha, K. (2016). Parrondo’s paradox in the Noor logistic map. International Journal of Advanced Research in Engineering and Technology, 7(5), 01-06.

  48. Polyak, B. T. (1964). Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5), 1-17.

  49. Glowinski, R., Lions, J. L., & Tremolieres, R. (1981). Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam, Holland.

  50. Cristescu, G., & Lupsa, L. (2002). Non-Connected Convexities and Applications, Kluwer Academic Publishers, Dordrecht, Holland.

  51. Niculescu, C. P., & Persson, L. E. (2018). Convex Functions and Their Applications, Springer-Verlag, New York.

  52. Pang, L.P., Shen, J., & Song, H. S. (2007). A modified predictor-corrector algorithm for solving nonconvex generalized variational inequalities. Computers and Mathematics with Applications, 54, 319-325.

  53. Korpelevich, G. M. (1976). The extragradient method for finding saddle points and other problems. Ekonomika i Matematiceskie Metody, 12, 747-756.

  54. Noor, M. A.,& Noor, K. I. (2024). Harmonic nonconvex variational inequalities. Transylvanian Journal Mathematics and Mechanics, 16(1-2), 71-82

  55. Noor, M. A., Noor, K. I., & Rassias, M. T. (2023). General variational inequalities and optimization. Geometry and Nonconvex Optimization (Edited: Themistocles M. Rassias), Springer.