Inverse extended general variational inequalities

Muhammad Aslam Noor1, Khalida Inayat Noor1
1Department of Mathematics, University of Wah, Wah Cantt., Pakistan
Copyright © Muhammad Aslam Noor, Khalida Inayat Noor. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Some new classes of inverse variational inequalities, which can be viewed as a novel and important special case of general variational inequalities, are investigated. The projection method, the auxiliary principle and dynamical systems coupled with a finite difference approach are used to suggest and analyze a number of new and known numerical techniques for solving inverse variational inequalities. Convergence analysis of these methods is investigated under suitable conditions. One can obtain a number of new classes of inverse variational inequalities by interchanging the roles of the operators. Some important special cases are highlighted. Several open problems are suggested for future research.

Keywords: inverse variational inequalities, projection method, iterative methods, auxiliary principle technique, dynamical system, convergence

1. Introduction

Variational inequality theory contains a wealth of new ideas and techniques. Introduced by Lions and Stampacchia [1] in the early sixties, it can be viewed as a novel generalization of the variational principles. Variational principles have played a leading role in the study of complicated and complex problems arising in game theory, mechanics, geometrical optics, general relativity theory, economics, transportation, differential geometry and related areas. It is a well known fact that variational inequalities are equivalent to fixed point problems. This equivalent formulation has played an important role in studying the existence of solutions and in developing efficient numerical methods for solving variational inequalities and related optimization problems. Noor [2, 3] proposed and suggested three-step (multi-step) forward-backward iterative methods for finding the approximate solution of general variational inequalities using the technique of updating the solution and the auxiliary principle. These Noor (three-step) schemes are a natural generalization of the splitting methods for solving partial differential equations. Noor (three-step) iterations contain Mann (one-step) and Ishikawa (two-step) iterations as special cases. It has been established [24] that Noor (three-step) iterations perform better than the two-step (Ishikawa) and one-step (Mann) iterations. Ashish et al. [5, 6], Cho et al. [7] and Kwuni et al. [8] explored the Julia set and the Mandelbrot set in the Noor orbit using Noor (three-step) iterations. We would like to point out that Noor (three-step) iterations have influenced research in fixed point theory and optimization, and will continue to inspire further research in compressive sensing, image inpainting, fractal geometry, chaos theory, coding, number theory, spectral geometry, dynamical systems, complex analysis, nonlinear programming, graphics and computer aided design.
For recent developments and applications of the variational inequalities and their variant forms, see [1–4, 7, 9, 50] and the references therein.

Variational inequality theory has been generalized and extended in several directions using novel and innovative ideas to tackle complex and complicated problems. In 1988, Noor [32, 33] considered two new classes of variational inequalities involving two arbitrary operators, which are known as general variational inequalities and have applications in oceanography and in the theory of non-positive and non-symmetric differential equations. An important special case of these general variational inequalities, known as the inverse variational inequality, has been considered in [11, 14, 17–21, 46, 47, 51].

Motivated and inspired by ongoing recent research in variational inequalities, we consider the inverse variational inequalities, which are a special case of quasi variational inequalities involving two arbitrary operators, see Noor [33]. Several special cases are discussed as applications of the inverse variational inequalities in §2. In §3, we establish the equivalence with the fixed point problem, which is used to discuss the unique existence of the solution as well as to suggest several inertial iterative methods along with their convergence analysis. We also apply the auxiliary principle technique involving an arbitrary operator to consider some iterative schemes for solving the inverse variational inequalities in §4. In §5, the dynamical systems approach is applied to study the stability of the solution and to suggest some iterative methods for solving the inverse variational inequalities exploiting the finite difference idea. Our results in this paper can be viewed as a significant refinement and corrected form of the results in [11, 14, 17–21, 46, 47, 51] and the references therein.

We have given only a brief introduction to this fast growing field. The interested reader is advised to explore this field further and discover novel fascinating applications of inverse variational inequalities in other areas of the sciences such as machine learning, artificial intelligence, data analysis, fuzzy systems, stochastic analysis, financial analysis and related optimization problems.

2. Formulations and basic facts

Let \(\Omega\) be a nonempty closed convex set in a real Hilbert space \(\mathcal{H}.\) We denote by \(\langle \cdot,\cdot\rangle\) and \(\|\cdot\|\) the inner product and norm, respectively.

For given nonlinear operators \(g,h: \mathcal{H}\longrightarrow \mathcal{H},\) we consider the problem of finding \(\mu \in \Omega\) such that \[\label{eq2.6in} \langle \rho (\mu)+ g(\mu)-h(\mu), h(\nu)-g(\mu) \rangle \geq 0, \quad \forall \nu \in \Omega, \tag{1}\] which is called the inverse extended general variational inequality and is the main motivation of our investigation. We would like to mention that the inverse variational inequalities considered in [11, 14, 17–21, 37, 45, 46, 51] are quite different from the problem (1).

Consequently, it is evident that all the known results for extended general variational inequalities are also valid for both types of inverse variational inequalities.

Special cases

We now point out some very important and interesting problems, which can be obtained as special cases of the problem (1).

  1. For \(h=I,\) the problem (1) reduces to finding \(\mu \in \Omega,\) such that \[\begin{aligned} \label{2.1} \langle \rho \mu+g(\mu)-\mu, \nu-g(\mu) \rangle \geq 0, \quad \forall \nu \in \Omega, \end{aligned} \tag{2}\] which is called the inverse general variational inequality.

  2. For \(g= I,\) the problem (1) reduces to finding \(\mu \in \Omega,\) such that \[\begin{aligned} \label{eq2.6inn} \langle \rho (\mu)+ \mu-h(\mu), h(\nu)-\mu \rangle \geq 0, \quad \forall \nu \in \Omega, \end{aligned} \tag{3}\] which is the inverse general variational inequality involving the operator \(h.\)

  3. If \(g=h,\) then problem (1) is equivalent to finding \(\mu \in \Omega\) such that \[\begin{aligned} \label{eq2.6ing} \langle \rho (\mu), g(\nu)-g(\mu) \rangle \geq 0, \quad \forall \nu \in \Omega, \end{aligned} \tag{4}\] which is the inverse variational inequality, considered and studied in [11–14, 18–21, 45, 46, 51] with applications in various areas of the mathematical and engineering sciences.

  4. If \(\Omega^{*}= \{ \mu\in \mathcal{H}: \langle \mu,\nu \rangle \geq 0, \quad \forall \nu \in \Omega\}\) is the polar (dual) cone of a convex cone \(\Omega\) in \(\mathcal{H},\) then the problem (4) is equivalent to finding \(\mu \in \mathcal{H},\) such that \[\begin{aligned} \label{2.7} g(\mu) \in \Omega , \quad \mu \in \Omega^{*} \quad \mbox{and} \quad \langle \mu, g(\mu) \rangle =0, \end{aligned} \tag{5}\] which is known as the inverse quasi complementarity problem and appears to be a new one.

Obviously inverse quasi complementarity problems include the inverse complementarity problems and linear complementarity problems as special cases. The complementarity problems were introduced and studied by Cottle et al. [52], Noor [25, 26] and Noor et al. [31, 39, 53].

Remark 1. It is worth mentioning that for appropriate and suitable choices of the operators \(g,h,\) the convex set and the spaces, one can obtain several classes of variational inequalities, complementarity problems and optimization problems as special cases of the inverse extended general variational inequality (1). This shows that the problem (1) is quite general and unifying. It is an interesting problem to develop efficient and implementable numerical methods for solving the inverse variational inequalities and their variants.

We also need the following result, known as the projection Lemma (best approximation).

Lemma 1. [22] Let \(\Omega\) be a closed and convex set in \(\mathcal{H}.\) Then, for a given \(z\in \mathcal{H}\), \(\mu\in \Omega\) satisfies the inequality \[\label{2.16} \langle \mu-z,\nu-\mu \rangle \geq 0,\quad\forall \nu\in \Omega, \tag{6}\] if and only if, \[\mu=\Pi_{\Omega}(z),\] where \(\Pi_{\Omega}\) is the projection of \(\mathcal{H}\) onto the closed convex set \(\Omega.\)

It is well known that the projection operator \(\Pi_{\Omega}\) is nonexpansive, that is, \[\begin{aligned} \label{2.17} \|\Pi_{\Omega}(\mu)-\Pi_{\Omega}(\nu) \|\leq \|\mu-\nu\|,\quad \forall \mu,\nu\in \mathcal{H}. \end{aligned} \tag{7}\]
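To make Lemma 1 and the nonexpansiveness property (7) concrete, the following sketch projects onto a box and checks both inequalities numerically. The set \(\Omega=[1,2]^5\), the random data and all numerical values are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

# Illustrative choice: Omega = [1, 2]^n, whose projection is a componentwise clip.
def project_box(z, lo=1.0, hi=2.0):
    """Closed-form projection of z onto the box [lo, hi]^n."""
    return np.clip(z, lo, hi)

rng = np.random.default_rng(0)
z = 3.0 * rng.normal(size=5)
mu = project_box(z)

# Characterization (6): <mu - z, nu - mu> >= 0 for every nu in Omega.
for _ in range(100):
    nu = rng.uniform(1.0, 2.0, size=5)
    assert np.dot(mu - z, nu - mu) >= -1e-12

# Nonexpansiveness (7): ||Pi(z1) - Pi(z2)|| <= ||z1 - z2||.
z2 = 3.0 * rng.normal(size=5)
assert np.linalg.norm(project_box(z) - project_box(z2)) <= np.linalg.norm(z - z2) + 1e-12
```

The same box projection is reused in the sketches below, since its closed form makes the fixed point formulations directly computable.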

Definition 1. An operator \(g:\mathcal{H} \rightarrow \mathcal{H}\) is said to be:

  1. Strongly monotone, if there exists a constant \(\alpha>0\), such that \[\langle g(\mu)-g(\nu),\mu-\nu \rangle\geq\alpha\|\mu-\nu\|^{2},\quad\forall \mu,\nu \in \mathcal{H}.\]

  2. Lipschitz continuous, if there exists a constant \(\beta>0\), such that \[\|g(\mu)-g(\nu)\|\leq\beta\|\mu-\nu\|,\quad\forall \mu,\nu\in \mathcal{H}.\]

3. Projection method

In this section, we use the fixed point formulation to suggest and analyze some new implicit methods for solving the inverse variational inequalities.

Using Lemma 1, one can show that the inverse variational inequalities are equivalent to the fixed point problems.

Lemma 2. The function \(\mu\in \Omega\) is a solution of the inverse extended general variational inequality (1), if and only if, \(\mu\in \Omega\) satisfies the relation \[\begin{aligned} \label{eq3.1} g(\mu)= \Pi_{\Omega}[h(\mu)-\rho \mu ], \end{aligned} \tag{8}\] where \(\Pi_{\Omega}\) is the projection operator and \(\rho>0\) is a constant.

Proof. Let \(\mu\in \Omega\) be a solution of the inverse extended general variational inequality (1). Then \[\begin{aligned} \langle g(\mu)- (h(\mu)- \rho (\mu) ), h(\nu)-g(\mu) \rangle \geq 0, \quad \forall \;\nu \in \Omega. \end{aligned}\]

Now using Lemma 1 with \(z= h(\mu)- \rho (\mu),\) we obtain \[\begin{aligned} g(\mu)= \Pi_{\Omega}[h(\mu)-\rho \mu ], \end{aligned}\] which is the required result (8). ◻

Lemma 2 implies that the inverse extended general variational inequality (1) is equivalent to the fixed point problem (8). From Eq. (8), we have \[\begin{aligned} \label{3.1e} \mu=\mu - g(\mu) + \Pi_{\Omega}[h(\mu)-\rho \mu]. \end{aligned} \tag{9}\]

We define the function \(\Phi\) associated with (8) as

\[\label{eq3.1aa} \Phi(\mu)= \mu-g(\mu)+ \Pi_{\Omega}[h(\mu)-\rho \mu]. \tag{10}\]

To prove the unique existence of the solution of the problem (1), it is enough to show that the map \(\Phi\) defined by (10) has a fixed point.

Theorem 1. Let the operator \(g\) be strongly monotone with constant \(\sigma >0\) and Lipschitz continuous with constant \(\zeta> 0.\) If the operator \(h\) is Lipschitz continuous with constant \(\sigma_1 > 0\) and there exists a parameter \(\rho>0,\) such that \[\begin{aligned} \label{3.3a} \rho < 1-k, \quad k<1,\quad \zeta^2 < 2\sigma, \end{aligned} \tag{11}\] where \[ \theta = \rho+ k, \label{3.3b}\ \tag{12}\] \[k = \sqrt{1- 2\sigma +\zeta ^2}+\sigma_1 ,\label{3.3c} \tag{13}\] then there exists a unique solution of the problem (1).

Proof. From Lemma 2, it follows that problems (8) and (1) are equivalent. Thus it is enough to show that the map \(\Phi(\mu),\) defined by (10) has a fixed point.

For all \(\nu\neq \mu \in \Omega,\) we have \[\begin{aligned} \label{eq3.4} \|\Phi(\mu)-\Phi(\nu)\| \leq & \|\mu-\nu-(g(\mu)-g(\nu))\|+ \|\Pi_{\Omega}[h(\mu)- \rho \mu]- \Pi_{\Omega}[h(\nu)-\rho \nu]\| \nonumber \\ \leq& \|\mu-\nu-(g(\mu)-g(\nu))\|+ \|h(\mu)-h(\nu)-\rho (\mu-\nu)\|\nonumber \\ \leq& \|\mu-\nu-(g(\mu)-g(\nu))\|+ \|h(\mu)-h(\nu)\|+\rho \|\mu-\nu\|\nonumber \\ \leq& \|\mu-\nu-(g(\mu)-g(\nu))\|+(\rho+\sigma_1)\|\mu-\nu\|, \end{aligned} \tag{14}\] where we have used the facts that the projection operator \(\Pi_{\Omega}\) is nonexpansive and the operator \(h\) is Lipschitz continuous with constant \(\sigma_1>0.\)

Since the operator \(g\) is strongly monotone with constant \(\sigma > 0\) and Lipschitz continuous with constant \(\zeta > 0,\) it follows that \[\begin{aligned} \label{eq3.5} \|\mu-\nu-(g(\mu)-g(\nu))\|^2 = & \|\mu-\nu\|^2 -2 \langle g(\mu)-g(\nu),\mu-\nu \rangle + \|g(\mu)-g(\nu)\|^2 \nonumber \\ \leq & (1-2\sigma + \zeta ^2 )\|\mu-\nu\|^2. \end{aligned} \tag{15}\]

From (14) and (15), we have \[\begin{aligned} \|\Phi(\mu)-\Phi(\nu)\| \leq \left\{\sqrt{ (1-2\sigma + \zeta ^2 )}+ \rho +\sigma_1 \right\}\|\mu-\nu\| = \theta \|\mu-\nu\|, \end{aligned}\] where \(\theta\) and \(k\) are defined by the relations (12) and (13), respectively.

From (11), it follows that \(\theta < 1,\) which implies that the map \(\Phi(\mu)\) defined by (10) has a fixed point, which is the unique solution of (1). ◻
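To make the contraction condition of Theorem 1 concrete, the following sketch evaluates \(\theta\) from the relations (12)-(13) for sample constants. The numerical values of \(\sigma,\zeta,\sigma_1\) are our own illustrative assumptions, not values from the paper.

```python
import math

# theta from the relations (12)-(13); the constants below are illustrative.
def contraction_theta(rho, sigma, zeta, sigma1):
    k = math.sqrt(1.0 - 2.0 * sigma + zeta ** 2) + sigma1   # relation (13)
    return rho + k                                          # relation (12)

# g = I is strongly monotone with sigma = 1 and Lipschitz with zeta = 1;
# h = 0.9*I is Lipschitz with sigma1 = 0.9.  Then k = 0.9, and any rho < 0.1
# makes the map Phi of (10) a contraction (theta < 1).
assert contraction_theta(0.05, 1.0, 1.0, 0.9) < 1.0
assert contraction_theta(0.20, 1.0, 1.0, 0.9) >= 1.0
```

This toy pair \(g=I\), \(h=0.9I\) with \(\rho=0.05\) is reused in the iterative sketches below.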

This alternative equivalent formulation (8) is used to suggest the following three-step iterative methods for solving the problem (1).

Algorithm 1. For a given \(\mu_0,\) compute the approximate solution \(\{ \mu_{n+1} \}\) by the iterative schemes \[ y_n = (1-\gamma _n)\mu_n+\gamma _n \{\mu_n-g(\mu_n)+\Pi_{\Omega}[h(\mu_n)-\rho \mu_n] \}, \label{3.9ab}\ \tag{16}\] \[w_n = (1-\beta _n)\mu_n+\beta _n \{y_n-g(y_n)+\Pi_{\Omega}[h(y_n)-\rho y_n] \},\label{3.10a}\ \tag{17}\] \[\mu_{n+1} = (1- \alpha _n)\mu_n+\alpha _n \{w_n-g(w_n)+\Pi_{\Omega}[h(w_n)-\rho w_n] \},\label{3.11a} \tag{18}\] which are known as the modified Noor iterations and contain the Ishikawa (two-step) and Mann (one-step) iterations as special cases.
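A minimal numerical sketch of Algorithm 1 follows, on a toy instance of our own choosing (\(\mathcal{H}=\mathbb{R}^5\), \(\Omega=[1,2]^5\), \(g=I\), \(h=0.9I\), \(\rho=0.05\), constant step parameters), which satisfies the hypotheses of Theorem 1 and whose unique solution is the vector of ones.

```python
import numpy as np

# Toy data (our own illustrative choices): Omega = [1,2]^5, g = I, h = 0.9*I.
project = lambda z: np.clip(z, 1.0, 2.0)
g = lambda u: u
h = lambda u: 0.9 * u
rho = 0.05

def phi(u):
    """The fixed-point map (10): u - g(u) + Pi_Omega[h(u) - rho*u]."""
    return u - g(u) + project(h(u) - rho * u)

alpha = beta = gamma = 0.8          # constant parameters in (0, 1]
mu = np.full(5, 1.7)                # starting point mu_0
for _ in range(300):
    y = (1 - gamma) * mu + gamma * phi(mu)   # step (16)
    w = (1 - beta) * mu + beta * phi(y)      # step (17)
    mu = (1 - alpha) * mu + alpha * phi(w)   # step (18)

# The iterates converge to the unique solution mu* = (1,...,1), the only
# point of the box satisfying mu* = project(0.85 * mu*).
assert np.allclose(mu, 1.0)
```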

We now study the convergence analysis of Algorithm 1, which is the main motivation of our next result.

Theorem 2. Let the operators \(g,h\) satisfy all the assumptions of Theorem 1. Then the approximate solution \(\{ \mu_n \}\) obtained from Algorithm 1 converges to the exact solution \(\mu \in \Omega\) of the inverse extended general variational inequality (1) strongly in \(\mathcal{H}.\)

Proof. From Theorem 1, the problem (1) has a unique solution \(\mu \in \Omega.\) Then, using Lemma 2, we have \[ \mu = (1-\alpha _n)\mu+\alpha _n\{\mu-g(\mu)+ \Pi_{\Omega}[h(\mu)-\rho \mu] \} \label{3.12a}\ \tag{19}\] \[= (1-\beta _n )\mu + \beta _n \{\mu-g(\mu)+\Pi_{\Omega}[h(\mu)-\rho \mu] \} \label{3.13a} \ \tag{20}\] \[= (1-\gamma _n)\mu+ \gamma _n \{\mu-g(\mu)+\Pi_{\Omega}[h(\mu)-\rho \mu] \}. \label{3.14a} \tag{21}\]

From (18) and (19), we have \[\begin{aligned} \label{3.15a} \|\mu_{n+1}-\mu\| = & \|(1-\alpha _n)(\mu_n-\mu)+ \alpha _n (w_n-\mu-(g(w_n)-g(\mu))) \nonumber \\ &+ \alpha _n (\Pi_{\Omega}[h(w_n)-\rho w_n]-\Pi_{\Omega}[h(\mu)-\rho \mu])\| \nonumber \\ \leq & (1-\alpha _n)\|\mu_n-\mu\| +\alpha _n\|w_n-\mu-(g(w_n)-g(\mu))\| \nonumber \\ &+ \alpha _n \|\Pi_{\Omega}[h(w_n)-\rho w_n]- \Pi_{\Omega}[h(\mu)-\rho \mu ]\| \nonumber \\ \leq & (1-\alpha _n)\|\mu_n-\mu\| +\alpha _n\|w_n-\mu-(g(w_n)-g(\mu))\| \nonumber \\ &+ \alpha_n\|h(w_n)-h(\mu)-\rho( w_n-\mu)\| \nonumber \\ \leq & (1-\alpha _n)\|\mu_n-\mu\|+\alpha _n (k+\rho )\|w_n-\mu\| \nonumber \\ = & (1-\alpha _n)\|\mu_n-\mu\|+ \alpha _n \theta \|w_n-\mu\|, \end{aligned} \tag{22}\] where \(\theta\) is defined by (12).

In a similar way, from (17) and (20), we have \[\begin{aligned} \label{3.16a} \|w_n-\mu\| \leq &(1-\beta _n)\|\mu_n-\mu\|+ \beta _n \|y_n-\mu-(g(y_n)-g(\mu))\| \nonumber \\ &+ \beta _n\|h(y_n)-h(\mu)-\rho (y_n-\mu)\| \nonumber \\ \leq & (1-\beta _n)\|\mu_n-\mu\|+\beta _n \theta \|y_n-\mu\|, \end{aligned} \tag{23}\] where \(\theta\) is defined by (12).

From (16) and (21), we obtain \[\begin{aligned} \label{3.17a} \|y_n-\mu\| \leq & (1-\gamma _n)\|\mu_n-\mu\|+ \gamma _n \theta \|\mu_n-\mu\| \nonumber \\ = & (1-(1-\theta )\gamma _n)\|\mu_n-\mu\| \leq \|\mu_n-\mu\|. \end{aligned} \tag{24}\]

From (23) and (24), we obtain \[\begin{aligned} \label{3.18a} \|w_n-\mu\| \leq & (1-\beta _n)\|\mu_n-\mu\| + \beta _n \theta \|\mu_n-\mu\| \nonumber \\ = & (1-(1-\theta )\beta _n)\|\mu_n-\mu\| \leq \|\mu_n-\mu\|. \end{aligned} \tag{25}\]

From the above estimates, we have \[\begin{aligned} \|\mu_{n+1}-\mu\| \leq & (1-\alpha _n)\|\mu_n-\mu\|+ \alpha _n \theta \|\mu_n-\mu\|\\ = & [1-(1-\theta )\alpha _n]\|\mu_n-\mu\| \leq \prod_{i=0}^{n}[1-(1-\theta )\alpha _i]\|\mu_0-\mu\|. \end{aligned}\]

Since \(\sum\limits_{n=0}^{\infty}\alpha _n\) diverges and \(1-\theta > 0,\) we have \(\lim\limits_{n \rightarrow \infty}\prod_{i=0}^{n}[1-(1-\theta )\alpha _i]= 0.\) Consequently the sequence \(\{\mu_n \}\) converges strongly to \(\mu\). From (24) and (25), it follows that the sequences \(\{y_n \}\) and \(\{w_n \}\) also converge to \(\mu\) strongly in \(\mathcal{H}.\) This completes the proof. ◻

We now use the equivalent fixed point formulation (8) to suggest some iterative methods. For a parameter \(\xi,\) we rewrite the problem (8) as \[\begin{aligned} \label{3.12} \mu = \mu-g(\mu)+\Pi_{\Omega}\bigg[h\bigg((1 -\xi )\mu+\xi \mu\bigg)-\rho \bigg((1 -\xi )\mu+\xi \mu\bigg)\bigg]. \end{aligned}\]

This equivalent fixed point formulation enables us to suggest the following inertial method for solving the problem (1).

Algorithm 2. For a given \(\mu_{0}, \mu_1\), compute \({\mu_{n+1}}\) by the iterative scheme \[\begin{aligned} \mu_{n+1} =& \mu_n-g(\mu_n)+ \Pi_{\Omega}\bigg[h\bigg ( (1-\xi )\mu_n+ \xi \mu_{n-1}\bigg)- \rho \bigg((1-\xi )\mu_n+ \xi \mu_{n-1}\bigg)\bigg ], \quad n=1,2,\ldots \end{aligned}\]
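A minimal sketch of the inertial scheme of Algorithm 2 follows, on the same toy instance as before (\(\Omega=[1,2]^5\), \(g=I\), \(h=0.9I\), \(\rho=0.05\)); the inertial parameter \(\xi=0.3\) and the starting points are our own illustrative assumptions.

```python
import numpy as np

# Toy instance (our own choices): Omega = [1,2]^5, g = I, h = 0.9*I.
project = lambda z: np.clip(z, 1.0, 2.0)
g = lambda u: u
h = lambda u: 0.9 * u
rho, xi = 0.05, 0.3

mu_prev = np.full(5, 1.7)           # mu_0
mu = np.full(5, 1.5)                # mu_1
for _ in range(200):
    v = (1 - xi) * mu + xi * mu_prev                      # inertial combination
    mu_prev, mu = mu, mu - g(mu) + project(h(v) - rho * v)

assert np.allclose(mu, 1.0)         # converges to the solution (1,...,1)
```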

We now suggest some multi-step inertial methods for solving the inverse variational inequalities (1).

Algorithm 3. For given \(\mu_{0},\mu_{1},\) compute \(\mu_{n+1}\) by the recurrence relation \[\begin{aligned} \omega_{n}= &\mu_{n}-\theta _{n}\left(\mu_{n}-\mu_{n-1}\right), \quad n=1,2,\ldots,\\ y_{n}=&(1-\gamma_{n})\omega_{n} + \gamma_{n} \bigg\{\omega_{n}-g(\omega_{n})+\Pi_{\Omega} \bigg[h\bigg(\frac{\omega_{n}+\mu_{n}}{2}\bigg)-\rho\bigg(\frac{\omega_{n}+\mu_{n}}{2}\bigg)\bigg]\bigg\},\\ z_{n}=&(1-\beta_{n})y_{n}+\beta_{n}\bigg\{y_{n}-g(y_{n}) +\Pi_{\Omega}\bigg[h\bigg(\frac{y_n+ \omega_{n}+\mu_n}{3}\bigg)-\rho\bigg(\frac{y_n+\omega_n+\mu_n}{3}\bigg)\bigg]\bigg\},\\ \mu_{n+1}=&(1-\alpha_{n})z_{n}+\alpha_{n}\bigg\{z_{n}-g(z_{n})+\Pi_{\Omega}\bigg[h\bigg(\frac{z_n+y_{n}+\omega_{n}+\mu_n}{4}\bigg)-\rho\bigg(\frac{z_n+y_{n}+\omega_n+\mu_n}{4}\bigg)\bigg]\bigg\}, \end{aligned}\] where \(\alpha_{n},\beta_{n},\gamma_{n},\theta _{n}\in[0,1], \; \forall\; n\geq1.\)
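The multi-step inertial scheme of Algorithm 3 can be sketched on the same toy instance; the constant parameter values below are our own illustrative assumptions, not values prescribed by the algorithm.

```python
import numpy as np

# Toy instance (our own choices): Omega = [1,2]^5, g = I, h = 0.9*I.
project = lambda z: np.clip(z, 1.0, 2.0)
g = lambda u: u
h = lambda u: 0.9 * u
rho = 0.05
alpha = beta = gamma = 0.8
theta_n = 0.2                        # constant inertial parameter

# One bracketed update of Algorithm 3: base - g(base) + Pi[h(avg) - rho*avg].
step = lambda base, avg: base - g(base) + project(h(avg) - rho * avg)

mu_prev = np.full(5, 1.7)            # mu_0
mu = np.full(5, 1.6)                 # mu_1
for _ in range(400):
    w = mu - theta_n * (mu - mu_prev)
    y = (1 - gamma) * w + gamma * step(w, (w + mu) / 2)
    z = (1 - beta) * y + beta * step(y, (y + w + mu) / 3)
    mu_prev, mu = mu, (1 - alpha) * z + alpha * step(z, (z + y + w + mu) / 4)

assert np.allclose(mu, 1.0)          # converges to the solution (1,...,1)
```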

Using the technique of Noor et al. [36, 37], one can investigate the convergence analysis of these inertial projection methods. Similar multi-step hybrid iterative methods can be proposed and analyzed for solving system of inverse variational inequalities, which is an interesting problem.

4. Auxiliary principle technique

There are several techniques such as projection, resolvent and descent methods for solving variational inequalities and their variant forms. However, these techniques cannot be applied to suggest iterative methods for several classes of nonlinear variational inequalities and equilibrium problems. To overcome these drawbacks, one usually applies the auxiliary principle technique, which is mainly due to Glowinski et al. [16] as developed in [28, 30, 44, 45, 48], to suggest and analyze some proximal point methods for solving the inverse extended general variational inequalities (1). Noor [28] modified the auxiliary principle technique by involving an arbitrary operator. For the properties and applications of the modified technique, see Patricksson [41].

For a strongly monotone operator \(M,\) we define the distance function as

\[\begin{aligned} %\label{5.1} \mathcal{M} (\nu, \mu) = &\langle M(\nu)-M(\mu), \nu-\mu \rangle \nonumber \\ \geq & \zeta \|\nu-\mu\|^2, \quad \forall \mu,\nu \in \Omega, \end{aligned} \tag{26}\] where \(\zeta\) is the strong monotonicity constant. It is important to emphasize that different choices of the function \(M\) give different modified distance functions.

We apply the auxiliary principle technique involving an arbitrary operator for finding the approximate solution of the problem (1).

For a given \(\mu \in \Omega\) satisfying (1), find \(w \in \Omega\) such that \[\begin{aligned} \label{eq9.1} &\langle \rho (w+\eta(\mu-w))+g(w)-h(w), h(\nu) – g(w) \rangle +\langle M(w)-M(\mu), \nu-w \rangle \geq 0, \quad\forall \nu \in \Omega, \end{aligned} \tag{27}\] where \(\rho > 0 , \eta \in [0,1]\) are constants and \(M\) is an arbitrary operator. The inequality (27) is called the auxiliary inverse extended general variational inequality.

If \(w = \mu,\) then \(w\) is a solution of (1). This simple observation enables us to suggest the following iterative method for solving (1):

Algorithm 4. For a given \(\mu_0 \in \Omega,\) compute the approximate solution \(\mu_{n+1}\) by the iterative scheme \[\begin{aligned} \label{eq9.2} &\langle \rho (\mu_{n+1}+\eta(\mu_n-\mu_{n+1}))+g(\mu_{n+1})-h(\mu_{n+1}), h(\nu) - g(\mu_{n+1}) \rangle \nonumber \\ &\qquad +\langle M(\mu_{n+1})-M(\mu_n), \nu-\mu_{n+1} \rangle \geq 0, \quad\forall \nu \in \Omega. \end{aligned} \tag{28}\]

Algorithm 4 is called the hybrid proximal point algorithm for solving the inverse extended general variational inequality (1).

Special Case. For \(\eta =0,\) Algorithm 4 reduces to:

Algorithm 5. For a given \(\mu_0,\) compute the approximate solution \(\mu_{n+1}\) by the iterative scheme \[\begin{aligned} \label{eq9.2a} &\langle \rho \mu_{n+1}+g(\mu_{n+1})-h(\mu_{n+1}), h(\nu) - g(\mu_{n+1}) \rangle +\langle M(\mu_{n+1})-M(\mu_n), \nu-\mu_{n+1} \rangle \geq 0, \quad\forall \nu \in \Omega, \end{aligned} \tag{29}\] which is called the implicit iterative method for solving the problem (1).
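For intuition, consider the illustrative special case \(g=h=I\) and \(M=I\) (a toy reduction of ours, not the general scheme): then (29) reads \(\langle (1+\rho)\mu_{n+1}-\mu_n, \nu-\mu_{n+1}\rangle \geq 0\) for all \(\nu\in\Omega\), and Lemma 1 gives the closed-form proximal step \(\mu_{n+1}=\Pi_{\Omega}[\mu_n/(1+\rho)]\).

```python
import numpy as np

# Illustrative reduction of Algorithm 5: with g = h = I and M = I (toy
# assumptions), the implicit step (29) collapses via Lemma 1 to the
# closed-form proximal update mu_{n+1} = Pi_Omega[mu_n / (1 + rho)].
project = lambda z: np.clip(z, 1.0, 2.0)     # Omega = [1,2]^n (toy set)
rho = 0.05

mu = np.full(4, 2.0)
for _ in range(100):
    mu = project(mu / (1.0 + rho))

assert np.allclose(mu, 1.0)    # proximal iterates reach the solution (1,...,1)
```

In the general case the implicit step (29) has no closed form and an inner solver is needed at each iteration, which is the usual price of proximal point methods.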

For the convergence analysis of Algorithm 5, we need the following concepts:

Definition 2. An operator \(g\) is said to be \(h\)-pseudomonotone, if \[\begin{aligned} \langle \rho \mu+g(\mu)-h(\mu),h(\nu)-g(\mu) \rangle \geq 0, \quad \forall \nu\in \Omega, \end{aligned}\] implies that \[\begin{aligned} -\langle \rho \nu+g(\nu)-h(\nu), g(\mu)-h(\nu) \rangle \geq 0, \quad \forall \nu \in \Omega. \end{aligned}\]

Theorem 3. Let the operator \(g\) be \(h\)-pseudomonotone. Let \(\mu_{n+1}\) be the approximate solution obtained from Algorithm 5 and let \(\mu\in \Omega\) be the exact solution of the problem (1). If the operator \(M\) is strongly monotone with constant \(\xi > 0\) and Lipschitz continuous with constant \(\zeta > 0,\) then \[\begin{aligned} \label{eq9.3} \xi \|\mu_{n+1}- \mu_n\| \leq \zeta \|\mu-\mu_n\|. \end{aligned} \tag{30}\]

Proof. Let \(\mu \in \Omega\) be a solution of the problem (1). Since the operator \(g\) is \(h\)-pseudomonotone, we have \[\begin{aligned} \label{eq9.4} -\langle \rho \nu +g(\nu)-h(\nu), g(\mu) - h(\nu) \rangle \geq 0, \quad \forall \nu \in \Omega. \end{aligned} \tag{31}\]

Taking \(\nu= \mu_{n+1}\) in (31), we obtain \[\begin{aligned} \label{eq9.5} -\langle \rho \mu_{n+1} +g(\mu_{n+1})-h(\mu_{n+1}), g(\mu) - h(\mu_{n+1}) \rangle \geq 0. \end{aligned} \tag{32}\]

Setting \(\nu = \mu\) in (29), we have \[\begin{aligned} \label{eq9.6} &\langle \rho \mu_{n+1}+g(\mu_{n+1})-h(\mu_{n+1}), g(\mu) - h(\mu_{n+1}) \rangle +\langle M(\mu_{n+1})-M(\mu_n), \mu-\mu_{n+1} \rangle \geq 0. \end{aligned} \tag{33}\]

Combining (32) and (33), we have \[\begin{aligned} \label{eq9.7} \langle M(\mu_{n+1})-M(\mu_n), \mu-\mu_{n+1} \rangle& \geq -\langle \rho \mu_{n+1}+g(\mu_{n+1})-h(\mu_{n+1}), g(\mu) - h(\mu_{n+1})\rangle \geq 0. \end{aligned} \tag{34}\]

From Eq. (34), we have \[\begin{aligned} 0\leq& \langle M(\mu_{n+1})-M(\mu_n) , \mu- \mu_{n+1}\rangle = \langle M(\mu_{n+1})-M(\mu_n) , \mu- \mu_n+\mu_n- \mu_{n+1} \rangle \nonumber \\ =& \langle M(\mu_{n+1})-M(\mu_n) , \mu- \mu_n \rangle + \langle M(\mu_{n+1})-M(\mu_n), \mu_n- \mu_{n+1} \rangle, \end{aligned}\] which implies that \[\begin{aligned} \langle M(\mu_{n+1})-M(\mu_n), \mu_{n+1}-\mu_n \rangle \leq \langle M(\mu_{n+1})-M(\mu_n) , \mu- \mu_n \rangle. \end{aligned}\]

Now using the strong monotonicity with constant \(\xi >0\) and the Lipschitz continuity with constant \(\zeta>0\) of the operator \(M,\) we obtain \[\begin{aligned} \xi \|\mu_{n+1}-\mu_n\|^2 \leq \zeta \|\mu_{n+1}-\mu_n\|\|\mu_n-\mu\|. \end{aligned}\]

Thus \[\begin{aligned} \xi \|\mu_n-\mu_{n+1}\| \leq \zeta \|\mu_n-\mu\|, \end{aligned}\] which is the required result (30). ◻

Theorem 4. Let \(\mathcal{H}\) be a finite dimensional space and all the assumptions of Theorem 3 hold. Then the sequence \(\{\mu_n \}^{^{\infty } }_{_{0}}\) given by Algorithm 5 converges to the exact solution \(\mu \in \Omega\) of (1).

Proof. Let \(\mu \in \Omega\) be a solution of (1). From (30), it follows that the sequence \(\{\|\mu-\mu_n\| \}\) is nonincreasing and consequently \(\{\mu_n\}\) is bounded. Furthermore, we have \[\xi \sum\limits_{n=0}^{\infty} \|\mu_{n+1}- \mu_n\|\leq \zeta \|\mu_{0} - \mu\|,\] which implies that \[\begin{aligned} \label{eq9.9} \lim_{n \rightarrow \infty} \|\mu_{n+1}- \mu_n\| = 0. \end{aligned} \tag{35}\]

Let \(\hat{\mu}\) be a limit point of \(\{\mu_n \}_{_{0} }^{^{\infty } }\) and let the subsequence \(\{ \mu_{n_{j}} \}^{^{\infty }}_{_{1}}\) of \(\{\mu_n\}^{^{\infty }}_{_{0}}\) converge to \(\hat{\mu} \in \Omega\). Replacing \(\mu_n\) by \(\mu_{n_j}\) in (29), taking the limit \(n_j \longrightarrow \infty\) and using (35), we have \[\begin{aligned} \langle \rho \hat{\mu}+g(\hat{\mu})-h(\hat{\mu}), h(\nu) - g(\hat{\mu}) \rangle \geq 0, \qquad \forall \nu \in \Omega, \end{aligned}\] which implies that \(\hat{\mu}\) solves the problem (1) and \[\| \mu_{n+1} -\mu \| \leq \| \mu_{n} -\mu \|.\]

Thus, it follows from the above inequality that the sequence \(\{ \mu_{n} \}^{^{\infty }}_{_{1}}\) has exactly one limit point \(\hat{\mu}\) and \[\lim_{n \rightarrow \infty} \mu_{n} = \hat{\mu},\] which is the required result. ◻

Remark 2. For different and suitable choices of the parameters \(\rho, \eta, \alpha,\) the operators \(g,h, M\) and the convex sets, one can recover new and known iterative methods for solving inverse variational inequalities, inverse complementarity problems and related optimization problems. Using the techniques and ideas of Theorem 3 and Theorem 4, one can analyze the convergence of Algorithm 4 and its special cases.


5. Dynamical systems technique

In this section, we consider the dynamical systems technique for solving the inverse variational inequalities. The projected dynamical systems associated with variational inequalities were considered by Dupuis and Nagurney [15]. It is worth mentioning that the dynamical system is a first order initial value problem. Consequently, variational inequalities and nonlinear problems arising in various branches of pure and applied sciences can now be studied via differential equations. It has been shown that dynamical systems are useful in developing efficient numerical techniques for solving variational inequalities and related optimization problems. For more details, see [3, 10, 14, 15, 23, 30, 34–36, 47, 48]. We consider some new iterative methods for solving the inverse extended general variational inequalities.
We now define the residue vector \(R(\mu)\) by the relation \[\label{5.1a} R(\mu)= g(\mu)-\Pi_{\Omega}[h(\mu)-\rho (\mu) ]. \tag{36}\]

Invoking Lemma 2, one can easily conclude that \(\mu\in \Omega\) is a solution of the problem (1), if and only if, \(\mu\in \Omega\) is a zero of the equation \[\label{5.1b} R(\mu)=0. \tag{37}\]
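The residue vector (36) gives a practical stopping test for any of the iterative methods above; the following sketch evaluates it on the toy instance used earlier (\(\Omega=[1,2]^n\), \(g=I\), \(h=0.9I\), \(\rho=0.05\), our own illustrative data).

```python
import numpy as np

# Residue vector (36): R(mu) = g(mu) - Pi_Omega[h(mu) - rho*mu],
# on the toy instance Omega = [1,2]^n, g = I, h = 0.9*I (our own data).
def residue(mu, rho=0.05):
    return mu - np.clip(0.9 * mu - rho * mu, 1.0, 2.0)

mu_star = np.ones(3)                          # solution of the toy instance
assert np.allclose(residue(mu_star), 0.0)     # (37): R(mu*) = 0
assert np.linalg.norm(residue(np.full(3, 1.5))) > 0.1   # nonsolution detected
```

In practice one would stop an iteration once \(\|R(\mu_n)\|\) falls below a prescribed tolerance.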

We now consider a dynamical system associated with the inverse variational inequalities. Using the fixed point formulation (8), we suggest a class of projection dynamical systems as \[\label{eq5.1} \frac{d\mu}{dt}=\lambda \{ \Pi_{\Omega}[h(\mu)-\rho (\mu) ]-g(\mu)\},\quad \mu(t_{0})=\alpha, \tag{38}\] where \(\lambda\) is a parameter. The system of type (38) is called the projection dynamical system associated with the problem (1). Here the right-hand side is related to the projection operator and is discontinuous on the boundary of \(\Omega\). From the definition, it is clear that the solution of the dynamical system always stays in \(\mathcal{H}\). This implies that qualitative results such as the existence, uniqueness and continuous dependence of the solution of (1) can be studied.

The equilibrium point of the dynamical system (38) is defined as follows:

Definition 3. An element \(\mu \in \Omega\) is an equilibrium point of the dynamical system (38), if \(\frac{d\mu}{dt}=0.\)

Thus it is clear that \(\mu \in \Omega\) is a solution of the inverse extended general variational inequality (1), if and only if, \(\mu \in \Omega\) is an equilibrium point of the dynamical system (38).

Definition 4. [15] The dynamical system is said to converge to the solution set \(S^*\) of (38), if, irrespective of the initial point, the trajectory of the dynamical system satisfies \[\begin{aligned} \label{5.4d} \lim _{t \rightarrow \infty }\mbox{dist}(\mu(t),S^*) = 0, \end{aligned} \tag{39}\] where \[\begin{aligned} \mbox{dist}(\mu,S^*) = \inf_{\nu \in S^*}\|\mu-\nu\|. \end{aligned}\]

It is easy to see, if the set \(S^*\) has a unique point \(\mu^*,\) then (39) implies that \[\begin{aligned} \lim _{t \rightarrow \infty }\mu(t)=\mu^*. \end{aligned}\]

If the dynamical system is stable at \(\mu^*\) in the Lyapunov sense, then the dynamical system is globally asymptotically stable at \(\mu^*.\)

Definition 5. The dynamical system is said to be globally exponentially stable with degree \(\eta\) at \(\mu^*,\) if, irrespective of the initial point, the trajectory of the system satisfies \[\begin{aligned} \| \mu(t)-\mu^*\| \leq u _1\|\mu(t_0)-\mu^*\|\exp(-\eta (t-t_0)), \quad \forall t \geq t_0, \end{aligned}\] where \(u _1\) and \(\eta\) are positive constants independent of the initial point.

It is clear that global exponential stability implies global asymptotic stability, and that the trajectory then converges exponentially fast.

Lemma 3 (Gronwall Lemma [15]). Let \(\hat{\mu}\) and \(\hat{\nu}\) be real-valued nonnegative continuous functions with domain \(\{t : t \geq t_0\}\) and let \(\alpha (t)= \alpha _0(|t-t_0|),\) where \(\alpha _0\) is a monotone increasing function. If, for \(t \geq t_0,\) \[\begin{aligned} \hat{\mu}(t) \leq \alpha (t) + \int^{t}_{t_0}\hat{\mu}(s)\hat{\nu}(s)\,ds, \end{aligned}\] then \[\begin{aligned} \hat{\mu}(t) \leq \alpha (t)\exp\bigg\{\int ^{t}_{t_0} \hat{\nu}(s)\,ds \bigg\}. \end{aligned}\]

One can establish that the trajectory of the solution of the projection dynamical system (38) converges to the unique solution of the inverse variational inequality (1) following the techniques and ideas of Noor [3, 30] and Xia and Wang [47, 48]. We state the main results without proof.

Theorem 5. Let the operators \(g,h: \mathcal{H} \longrightarrow \mathcal{H}\) be Lipschitz continuous with constants \(\zeta > 0\) and \(\sigma_1 > 0,\) respectively. If \(\lambda(\zeta + \rho +\sigma_1) <1,\) then, for each \(\mu_0 \in \Omega,\) there exists a unique continuous solution \(\mu(t)\) of the dynamical system (38) with \(\mu(t_0) = \mu_0\) over \([t_0, \infty ).\)

We use the dynamical system (38) to suggest some iterative methods for solving the inverse variational inequalities (1).

For simplicity, we take \(\lambda =1.\) Thus the dynamical system (38) becomes \[\label{eq5.2a} \frac{d\mu}{dt}+g(\mu) = \Pi_{\Omega} \big[h(\mu)-\rho( \mu)\big],\quad \mu(t_{0})=\alpha, \tag{40}\] which is an initial value problem.
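The initial value problem (40) can be integrated by an explicit scheme. The following sketch uses illustrative choices, not from the text: \(H=\mathbb{R}^2\), \(\Omega=[-1,1]^2\) (so \(\Pi_\Omega\) is a componentwise clip), \(g\) the identity, \(h(\mu)=\mu/2\), and \(\rho=0.2\), for which the equilibrium is \(\mu^*=0\):

```python
# Explicit Euler sketch for the dynamical system (40).  Illustrative
# choices (not from the text): H = R^2, Omega = [-1, 1]^2 (so Pi_Omega
# is a componentwise clip), g(mu) = mu, h(mu) = mu / 2, rho = 0.2;
# the unique equilibrium is mu* = 0.
def proj_box(v, lo=-1.0, hi=1.0):
    """Projection onto the box Omega = [lo, hi]^n."""
    return [min(hi, max(lo, x)) for x in v]

g = lambda mu: mu[:]
h = lambda mu: [0.5 * x for x in mu]
rho = 0.2

def euler_trajectory(mu0, dt=0.05, n_steps=400):
    """Integrate d(mu)/dt = Pi_Omega[h(mu) - rho*mu] - g(mu) by explicit Euler."""
    mu = mu0[:]
    for _ in range(n_steps):
        p = proj_box([h(mu)[i] - rho * mu[i] for i in range(len(mu))])
        mu = [mu[i] + dt * (p[i] - g(mu)[i]) for i in range(len(mu))]
    return mu

mu_T = euler_trajectory([0.9, -0.7])
assert max(abs(x) for x in mu_T) < 1e-3   # trajectory approaches mu* = 0
```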

The forward difference scheme is used to construct an implicit iterative method. Discretizing (40), we have \[\label{eq5.3a} \frac{\mu_{n+1}-\mu_{n}}{h_1}+g(\mu_{n})= \Pi_{\Omega} [h(\mu_{n})-\rho (\mu_{n+1}) ], \tag{41}\] where \(h_1\) is the step size.

Now, we can suggest the following implicit iterative method for solving the inverse variational inequality (1).

Algorithm 6. For a given \(\mu_{0},\) compute \({\mu_{n+1}}\) by the iterative scheme \[\mu_{n+1}= \mu_n- g(\mu_n)+\Pi_{\Omega} \bigg[h(\mu_{n})-\rho (\mu_{n+1}) - \frac{\mu_{n+1}-\mu_{n}}{h_1}\bigg].\]
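Since \(\mu_{n+1}\) appears on both sides, each step of Algorithm 6 requires solving an implicit equation, for instance by an inner fixed-point loop. The sketch below uses an illustrative instance (not from the text): \(g\) the identity, \(h(\mu)=\mu/2\), \(\rho=0.2\), \(\Omega=[-1,1]^2\), and step size \(h_1=2\), chosen so that \(\rho+1/h_1<1\) and the inner map is a contraction:

```python
# Sketch of the implicit scheme of Algorithm 6.  Illustrative instance
# (not from the text): g(mu) = mu, h(mu) = mu/2, rho = 0.2,
# Omega = [-1, 1]^2, step size h1 = 2 (so rho + 1/h1 < 1 and the inner
# fixed-point map resolving the implicit mu_{n+1} is a contraction).
def proj_box(v, lo=-1.0, hi=1.0):
    return [min(hi, max(lo, x)) for x in v]

g = lambda mu: mu[:]
h = lambda mu: [0.5 * x for x in mu]
rho, h1 = 0.2, 2.0

def implicit_step(mu, inner_iters=60):
    w = mu[:]                                      # guess for mu_{n+1}
    for _ in range(inner_iters):                   # inner fixed-point loop
        p = proj_box([h(mu)[i] - rho * w[i] - (w[i] - mu[i]) / h1
                      for i in range(len(mu))])
        w = [mu[i] - g(mu)[i] + p[i] for i in range(len(mu))]
    return w

mu = [0.8, -0.5]
for _ in range(40):                                # outer iterations
    mu = implicit_step(mu)
assert max(abs(x) for x in mu) < 1e-6              # converges to mu* = 0
```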

Algorithm 6 is an implicit method, which is equivalent to the following two-step method.

Algorithm 7. For a given \(\mu_{0},\) compute \({\mu_{n+1}}\) by the iterative scheme \[\begin{aligned} \omega_n =& \mu_n- g(\mu_n)+\Pi_{\Omega}[h(\mu_{n})-\rho (\mu_{n}) ],\\ \mu_{n+1} =& \mu_n- g(\mu_n)+ \Pi_{\Omega} \big[h(\mu_{n})-\rho( \omega_{n})- \frac{\omega_{n}-\mu_{n}}{h_1}\big]. \end{aligned}\]
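The two-step form is fully explicit and avoids the inner loop. A sketch on an illustrative instance (not from the text): \(g\) the identity, \(h(\mu)=\mu/2\), \(\rho=0.2\), \(\Omega=[-1,1]^2\), step size \(h_1=2\):

```python
# Sketch of the explicit two-step form (Algorithm 7).  Illustrative
# instance (not from the text): g(mu) = mu, h(mu) = mu/2, rho = 0.2,
# Omega = [-1, 1]^2, step size h1 = 2.
def proj_box(v, lo=-1.0, hi=1.0):
    return [min(hi, max(lo, x)) for x in v]

g = lambda mu: mu[:]
h = lambda mu: [0.5 * x for x in mu]
rho, h1 = 0.2, 2.0

def two_step(mu):
    n = len(mu)
    # predictor: omega_n = mu_n - g(mu_n) + Pi_Omega[h(mu_n) - rho*mu_n]
    p = proj_box([h(mu)[i] - rho * mu[i] for i in range(n)])
    w = [mu[i] - g(mu)[i] + p[i] for i in range(n)]
    # corrector uses the predictor omega_n inside the projection
    q = proj_box([h(mu)[i] - rho * w[i] - (w[i] - mu[i]) / h1
                  for i in range(n)])
    return [mu[i] - g(mu)[i] + q[i] for i in range(n)]

mu = [0.8, -0.5]
for _ in range(120):
    mu = two_step(mu)
assert max(abs(x) for x in mu) < 1e-6              # converges to mu* = 0
```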

We now introduce the second order dynamical system associated with the inverse extended general variational inequality (1). To be more precise, we consider the problem of finding \(\mu\in \Omega\) such that \[\begin{aligned} \label{eq5.1b} \gamma\frac{d^2\mu}{dx^2}+\frac{d\mu}{dx}= \lambda \{\Pi_{\Omega}[h(\mu)-\rho (\mu)]-g(\mu)\},\quad \mu(a)=\alpha,\quad \mu(b)=\beta, \end{aligned} \tag{42}\] where \(\gamma>0, \lambda > 0\) and \(\rho>0\) are constants. We would like to emphasize that the problem (42) is indeed a second order boundary value problem. In a similar way, we can define the second order initial value problem associated with the dynamical system.

The equilibrium point of the dynamical system (42) is defined as follows:

Definition 6. An element \(\mu \in \Omega ,\) is an equilibrium point of the dynamical system (42), if, \(\gamma\frac{d^2\mu}{dx^2}+\frac{d\mu}{dx}=0.\)

Thus it is clear that \(\mu \in \Omega\) is a solution of the inverse variational inequality (1), if and only if, \(\mu \in \Omega\) is an equilibrium point.

We can rewrite (42) as follows: \[\begin{aligned} \label{eq5.1d} g(\mu)= \Pi_{\Omega}\big[h(\mu)-\rho (\mu) + \gamma\frac{d^2\mu}{dx^2}+\frac{d\mu}{dx}\big]. \end{aligned} \tag{43}\]

For \(\lambda =1,\) the problem (42) is equivalent to finding \(\mu \in \Omega\) such that \[\begin{aligned} \label{eq5.1c} \gamma\frac{d^2\mu}{dx^2}+\frac{d\mu}{dx}+g(\mu) = \Pi_{\Omega}\big[h(\mu)-\rho (\mu)\big],\quad \mu(a)=\alpha,\quad \mu(b)=\beta. \end{aligned} \tag{44}\]

The problem (44) is called the second order dynamical system, which is in fact a second order boundary value problem. This interplay among various areas is fruitful from the numerical analysis point of view for developing implementable numerical methods for finding approximate solutions of variational inequalities. Consequently, we can exploit the ideas and techniques of differential equations to suggest and propose hybrid proximal point methods for solving the inverse extended general variational inequalities and related optimization problems.

We discretize the second-order dynamical system (44) using central finite difference and backward difference schemes to have \[\label{eq5.1ab} \gamma\frac{\mu_{n+1}-2\mu_{n}+\mu_{n-1}}{h_{1}^{2}}+\frac{\mu_{n}-\mu_{n-1}}{h_1}+g(\mu_{n})= \Pi_{\Omega}[h(\mu_{n})-\rho (\mu_{n+1})], \tag{45}\] where \(h_1\) is the step size.

If \(\gamma =1, h_1=1,\) then, from Eq. (45), we have

Algorithm 8. For a given \(\mu_{0},\) compute \({\mu_{n+1}}\) by the iterative scheme \[\mu_{n+1}= \mu_n- g(\mu_n)+ \Pi_{\Omega}[h(\mu_{n})-\rho (\mu_{n+1})],\] which is an extragradient-type method and is equivalent to:

Algorithm 9. For given \(\mu_{0}, \mu_1,\) compute \({\mu_{n+1}}\) by the iterative scheme \[\begin{aligned} y_n =& (1-\theta_n)\mu_n+ \theta_n \mu_{n-1},\quad n=1,2,\ldots \nonumber \\ \mu_{n+1} =& \mu_n- g(\mu_n)+ \Pi_{\Omega}[h(\mu_{n})-\rho (y_{n})], \end{aligned}\] which is called the two-step inertial iterative method, where \(\theta_n \in [0,1]\) is a parameter.
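A sketch of the two-step inertial iteration on an illustrative instance (not from the text): \(g\) the identity, \(h(\mu)=\mu/2\), \(\rho=0.2\), \(\Omega=[-1,1]^2\), with \(\theta_n\) held at the constant value \(0.5\):

```python
# Sketch of the two-step inertial iteration (Algorithm 9).  Illustrative
# instance (not from the text): g(mu) = mu, h(mu) = mu/2, rho = 0.2,
# Omega = [-1, 1]^2; theta_n is held at the constant value 0.5.
def proj_box(v, lo=-1.0, hi=1.0):
    return [min(hi, max(lo, x)) for x in v]

g = lambda mu: mu[:]
h = lambda mu: [0.5 * x for x in mu]
rho, theta = 0.2, 0.5

mu_prev, mu = [0.9, -0.6], [0.8, -0.5]             # mu_0, mu_1
for _ in range(80):
    # inertial extrapolation: y_n = (1 - theta)*mu_n + theta*mu_{n-1}
    y = [(1 - theta) * mu[i] + theta * mu_prev[i] for i in range(2)]
    p = proj_box([h(mu)[i] - rho * y[i] for i in range(2)])
    mu_prev, mu = mu, [mu[i] - g(mu)[i] + p[i] for i in range(2)]
assert max(abs(x) for x in mu) < 1e-8              # converges to mu* = 0
```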

We discretize the second-order dynamical system (44) using central finite difference and backward difference schemes to suggest the following iterative method for solving the inverse extended general variational inequalities (1).

Algorithm 10. For given \(\mu_{0}, \mu_1,\) compute \({\mu_{n+1}}\) by the iterative scheme \[\begin{aligned} &\mu_{n+1}=\mu_n- g(\mu_{n+1})+ \Pi_{\Omega}\big[h(\mu_{n+1})-\rho (\mu_{n+1})+\gamma\frac{\mu_{n+1}-2\mu_{n}+\mu_{n-1}}{h_{1}^{2}}+\frac{\mu_{n}-\mu_{n-1}}{h_1}\big],\quad n=1,2,\ldots \end{aligned}\]

Algorithm 10, called the hybrid inertial proximal method, is a newly proposed method for solving the inverse extended general variational inequalities and related optimization problems.

We now consider the third order dynamical systems associated with the inverse extended general variational inequalities of the type (1). To be more precise, we consider the problem of finding \(\mu\in \Omega,\) such that \[\begin{aligned} \label{eq5.3} \gamma\frac{d^3\mu}{dt^3}+\zeta\frac{d^2\mu}{dt^2}+\xi\frac{d\mu}{dt}+g(\mu)= \Pi_{\Omega}[h(\mu)-\rho (\mu)], \qquad \mu(a)=\alpha,\dot{\mu}(a)=\beta,\dot{\mu}(b)=0, \end{aligned} \tag{46}\] where \(\gamma>0, \zeta, \xi\) and \(\rho>0\) are constants. Problem (46) is called the third order dynamical system associated with the inverse extended general variational inequalities (1).

The equilibrium point of the dynamical system (46) is defined as follows:

Definition 7. An element \(\mu \in \mathcal{H},\) is an equilibrium point of the dynamical system (46), if, \[\gamma\frac{d^3\mu}{dt^3}+\zeta\frac{d^2\mu}{dt^2}+\xi\frac{d\mu}{dt}=0.\]

Thus it is clear that \(\mu \in \Omega\) is a solution of the inverse extended general variational inequality (1), if and only if, \(\mu \in \Omega\) is an equilibrium point.

Consequently, the problem (46) can be equivalently written as \[\begin{aligned} \label{eq5.3nn} g(\mu)= \Pi_{\Omega}\bigg[h(\mu)- \rho (\mu)+ \gamma\frac{d^3\mu}{dt^3}+\zeta\frac{d^2\mu}{dt^2}+\xi\frac{d\mu}{dt}\bigg]. \end{aligned} \tag{47}\]

We discretize the third-order dynamical system (46) using central finite difference and backward difference schemes to have \[\begin{aligned} \label{eq5.2g} \gamma& \frac{\mu_{n+2}-2\mu_{n+1}+2\mu_{n-1}-\mu_{n-2}}{2h_{1}^3} +\zeta\frac{\mu_{n+1}-2\mu_{n}+\mu_{n-1}}{h_{1}^2}+\xi\frac{3\mu_{n}-4\mu_{n-1}+\mu_{n-2}}{2h_1}+g(\mu_{n})\notag\\ &= \Pi_{\Omega}[h(\mu_{n})-\rho (\mu_{n+1})], \end{aligned} \tag{48}\] where \(h_1\) is the step size.

If \(\gamma =1, h_1=1, \zeta=1, \xi=1,\) then, from Eq. (48) after adjustment, we have:

Algorithm 11. For given \(\mu_{0},\mu_{1},\) compute \({\mu_{n+1}}\) by the iterative scheme \[\begin{aligned} \mu_{n+1}=&\mu_n-g(\mu_n)+ \Pi_{\Omega}\bigg[ h(\mu_n)- \rho (\mu_{n+1})+ \frac{2\mu_n+\mu_{n-1}-3\mu_{n}}{2} \bigg],\quad n=1,2,\ldots \end{aligned}\] which is an inertial type hybrid iterative method for solving the inverse extended general variational inequalities (1).

Using the predictor-corrector technique, we now suggest a multi-step inertial iterative method for solving the inverse extended general variational inequalities (1).

Algorithm 12. For given \(\mu_{0},\mu_{1},\) compute \({\mu_{n+1}}\) by the iterative scheme \[\begin{aligned} z_n=& \mu_{n}-\theta _{n}\left(\mu_{n}-\mu_{n-1}\right),\quad n=1,2,\ldots,\\ y_n =& (1-\gamma_{n})z_{n} + \gamma_{n} \bigg\{z_{n}-g(z_{n})+ \Pi_{\Omega}\left[h(z_{n})-\rho(z_n)\right]\bigg\},\\ t_{n}=&(1-\beta_{n})y_{n}+\beta_n\bigg\{y_n-g(y_n)+ \Pi_{\Omega}\bigg[h(y_n)-\rho(y_n)\bigg]\bigg\},\\ w_n=& (1-\zeta_n)t_n+ \zeta_n \bigg\{t_n-g(t_n)+\Pi_{\Omega}\bigg[h(t_n)-\rho(t_n)\bigg]\bigg\},\\ \mu_{n+1}=&(1-\alpha_n)w_n+ \alpha_n\bigg\{w_n-g(\mu_n)+ \Pi_{\Omega}\bigg[h(\mu_{n})- \rho (w_n)\bigg]\bigg\}, \end{aligned}\] where \(\alpha_{n},\beta_{n},\gamma_{n},\zeta_n, \theta _{n}\in[0,1], \quad \forall n\geq1.\)
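A sketch of the multi-step inertial scheme on an illustrative instance (not from the text): \(g\) the identity, \(h(\mu)=\mu/2\), \(\rho=0.2\), \(\Omega=[-1,1]^2\); the projection \(\Pi_\Omega\) is applied in each bracket, as in the earlier algorithms, and all parameters \(\alpha_n,\beta_n,\gamma_n,\zeta_n,\theta_n\) are held at \(0.5\):

```python
# Sketch of the multi-step inertial scheme (Algorithm 12).  Illustrative
# instance (not from the text): g(mu) = mu, h(mu) = mu/2, rho = 0.2,
# Omega = [-1, 1]^2; the projection Pi_Omega is applied in each bracket,
# and all parameters alpha_n, beta_n, gamma_n, zeta_n, theta_n are 0.5.
def proj_box(v, lo=-1.0, hi=1.0):
    return [min(hi, max(lo, x)) for x in v]

g = lambda mu: mu[:]
h = lambda mu: [0.5 * x for x in mu]
rho = 0.2
theta = alpha = beta = gamma = zeta = 0.5

def relaxed_step(base, lam):
    """(1 - lam)*base + lam*{ base - g(base) + Pi_Omega[h(base) - rho*base] }"""
    n = len(base)
    p = proj_box([h(base)[i] - rho * base[i] for i in range(n)])
    return [(1 - lam) * base[i] + lam * (base[i] - g(base)[i] + p[i])
            for i in range(n)]

mu_prev, mu = [0.9, -0.6], [0.8, -0.5]             # mu_0, mu_1
for _ in range(80):
    z = [mu[i] - theta * (mu[i] - mu_prev[i]) for i in range(2)]
    y = relaxed_step(z, gamma)                      # predictor steps
    t = relaxed_step(y, beta)
    w = relaxed_step(t, zeta)
    p = proj_box([h(mu)[i] - rho * w[i] for i in range(2)])
    nxt = [(1 - alpha) * w[i] + alpha * (w[i] - g(mu)[i] + p[i])
           for i in range(2)]
    mu_prev, mu = mu, nxt                           # corrector
assert max(abs(x) for x in mu) < 1e-8              # converges to mu* = 0
```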

These algorithms contain Noor (three-step) iterations, Ishikawa (two-step) iterations, Mann iteration and modified Noor iterations as special cases.

Remark 3. For appropriate and suitable choices of the operators \(g,h,\) the convex set, parameters and the spaces, one can suggest a wide class of implicit, explicit and inertial type methods for solving inverse variational inequalities and related optimization problems. Using the techniques and ideas of Noor et al. [36-38], one can discuss the convergence analysis of the proposed methods.

6. Conclusion

In this paper, we have used the equivalence between the inverse variational inequalities and fixed point problems to suggest some new multi-step iterative methods for solving the inverse variational inequalities. These new methods include extragradient methods, modified double projection methods and multi-step inertial methods, which are suggested using the techniques of the projection method, the auxiliary principle and dynamical systems. Convergence analysis of the proposed methods is discussed under suitable weak conditions. It is an open problem to compare these proposed methods with other methods. Applying the techniques and ideas discussed in [5, 6, 12, 20, 41], one may explore the Julia set and Mandelbrot set in the Noor orbit using the Noor (three-step) iterations in fixed point theory. It is an interesting open problem to discuss the applications of the inverse variational inequality and its variant forms in fuzzy set theory, stochastic and quantum calculus, fractal and fractional calculus, random traffic equilibrium, artificial intelligence, computer science, control engineering, management science and operations research.

Author Contributions

All authors contributed equally to the conception, design of the work, analysis, interpretation of data, reviewing it critically and final approval of the version for publication.

Conflicts of Interest

The authors have no conflict of interest.

Acknowledgments

The authors sincerely thank their respected Professors, teachers, students, colleagues, collaborators, editors, reviewers, referees and friends, who have contributed, directly or indirectly to this research.

References

  1. Lions, J. L., & Stampacchia, G. (1967). Variational inequalities. Communications on Pure and Applied Mathematics, 20, 493-519.

  2. Noor, M. A. (2000). New approximation schemes for general variational inequalities. Journal of Mathematical Analysis and Applications, 251(1), 217-229.

  3. Noor, M. A. (2004). Some developments in general variational inequalities. Applied Mathematics and Computation, 152(1), 199-277.

  4. Glowinski, R., & Le Tallec, P. (1989). Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics. Society for Industrial and Applied Mathematics.

  5. Rani, M., & Chugh, R. (2014). Julia sets and Mandelbrot sets in Noor orbit. Applied Mathematics and Computation, 228, 615-631.

  6. Ashish, R. C., & Rani, M. (2021). Fractals and Chaos in Noor Orbit: A Four-Step Feedback Approach. Lap Lambert Academic Publishing, Saarbrucken, Germany.

  7. Cho, S. Y., Shahid, A. A., Nazeer, W., & Kang, S. M. (2016). Fixed point results for fractal generation in Noor orbit and s-convexity. SpringerPlus, 5(1), 1843.

  8. Kwun, Y. C., Shahid, A. A., Nazeer, W., Butt, S. I., Abbas, M., & Kang, S. M. (2019). Tricorns and multicorns in Noor orbit with s-convexity. IEEE Access, 7, 95297-95304.

  9. Alvarez, F. (2004). Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM Journal on Optimization, 14(3), 773-782.

  10. AlShejari, A. A., Noor, M. A., & Noor, K. I. (2024). Recent developments in general quasi variational inequalities. International Journal of Analysis and Applications, 22, 84-84.

  11. Aussel, D., Gupta, R., & Mehra, A. (2013). Gap functions and error bounds for inverse quasi-variational inequality problems. Journal of Mathematical Analysis and Applications, 407(2), 270-280.

  12. Barbagallo, A., & Bianco, S. G. L. (2024). A random elastic traffic equilibrium problem via stochastic quasi-variational inequalities. Communications in Nonlinear Science and Numerical Simulation, 131, 107798.

  13. Bnouhachem, A., Noor, M. A., Khalfaoui, M., & Zhaohan, S. (2011). A self–adaptive projection method for a class of variant variational inequalities. Journal of Mathematical Inequalities, 5(1), 117-129.

  14. Dey, S., & Reich, S. (2024). A dynamical system for solving inverse quasi-variational inequalities. Optimization, 73(6), 1681-1701.

  15. Dupuis, P., & Nagurney, A. (1993). Dynamical systems and variational inequalities. Annals of Operations Research, 44(1), 7-42.

  16. Trémolieres, R., Lions, J. L., & Glowinski, R. (2011). Numerical Analysis of Variational Inequalities (Vol. 8). Elsevier.

  17. Han, Y., Huang, N., Lu, J., & Xiao, Y. (2017). Existence and stability of solutions to inverse variational inequality problems. Applied Mathematics and Mechanics, 38(5), 749-764.

  18. He, B. S. (1999). A Goldstein’s type projection method for a class of variant variational inequalities. Journal of Computational Mathematics, 425-434.

  19. He, X., & Liu, H. X. (2011). Inverse variational inequalities with projection-based solution methods. European Journal of Operational Research, 208(1), 12-18.

  20. He, S., & Dong, Q. L. (2018). An existence-uniqueness theorem and alternating contraction projection methods for inverse variational inequalities. Journal of Inequalities and Applications, 2018(1), 351.

  21. He, B. S., Liu, H. X., Li, M., & He, X. Z. (2006). PPA-based methods for monotone inverse variational inequalities. Sciencepaper Online, 1. (http://www.paper.edu.cn).

  22. Kinderlehrer, D., & Stampacchia, G. (2000). An Introduction to Variational Inequalities and Their Applications. Society for Industrial and Applied Mathematics.

  23. Nagurney, A., & Zhang, D. (2012). Projected Dynamical Systems and Variational Inequalities With Applications (Vol. 2). Springer Science & Business Media.

  24. Noor, M. A. (1975). On Variational Inequalities (Doctoral dissertation, Brunel University).

  25. Noor, M. A. (1986). Generalized quasi complementarity problems. Journal of Mathematical Analysis and Applications, 120(1), 321-327.

  26. Noor, M. A. (1988). General variational inequalities. Applied Mathematics Letters, 1(2), 119-122.

  27. Noor, M. A. (1988). Quasi variational inequalities. Applied Mathematics Letters, 1(4), 367-370.

  28. Noor, M. A. (1992). General algorithm for variational inequalities. Journal of Optimization Theory and Applications, 73(2), 409-413.

  29. Noor, M. A. (2002). A Wiener-Hopf dynamical system for variational inequalities. New Zealand Journal of Mathematics, 31, 173-182.

  30. Noor, M. A. (2002). Stability of the modified projected dynamical systems. Computers & Mathematics With Applications, 44(1-2), 1-5.

  31. Noor, M. A. (2008). Differentiable non-convex functions and general variational inequalities. Applied Mathematics and Computation, 199(2), 623-630.

  32. Noor, M. A. (2009). Extended general variational inequalities. Applied Mathematics Letters, 22(2), 182-186.

  33. Noor, M. A. (2012). On general quasi-variational inequalities. Journal of King Saud University-Science, 24(1), 81-88.

  34. Noor, M. A., & Noor, K. I. (2022). Dynamical system technique for solving quasi variational inequalities. University Politehnica of Bucharest Scientific Bulletin-Series A-Applied Mathematics and Physics, 84(4), 55-66.

  35. Noor, M. A., & Noor, K. (2024). Some new iterative schemes for solving general quasi variational inequalities. Le Matematiche, 79(2), 327-370.

  36. Noor, M. A., & Noor, K. I. (2025). New iterative methods and sensitivity analysis for inverse quasi variational inequalities. Earthline Journal of Mathematical Sciences, 15(4), 495-539.

  37. Noor, M. A., & Noor, K. I. (2025). Some new aspects of nonconvex inverse variational inequalities. Open Journal of Mathematical Sciences, 9, 116-140.

  38. Noor, M. A., Noor, K. I., & Khan, A. G. (2015). Dynamical systems for quasi variational inequalities. Annals of Functional Analysis, 6(1), 193-209.

  39. Noor, M. A., & Oettli, W. (1994). On general nonlinear complementarity problems and quasi-equilibria. Le Matematiche, 49(2), 313-331.

  40. Patriksson, M. (2013). Nonlinear Programming and Variational Inequality Problems: A Unified Approach (Vol. 23). Springer Science & Business Media.

  41. Rattanaseeha, K., Imnang, S., Inkrong, P., & Thianwan, T. (2023). Novel Noor iterative methods for mixed-type asymptotically nonexpansive mappings from the perspective of convex programming in hyperbolic spaces. International Journal of Innovative Computing Information and Control, 19(6), 1717-1734.

  42. Shehu, Y., Gibali, A., & Sagratella, S. (2020). Inertial projection-type methods for solving quasi-variational inequalities in real Hilbert spaces. Journal of Optimization Theory and Applications, 184(3), 877-894.

  43. Stampacchia, G. (1964). Formes bilineaires coercitives sur les ensembles convexes. Comptes Rendus Hebdomadaires Des Seances De L Academie Des Sciences, 258(18), 4413.

  44. Suantai, S., Noor, M. A., Kankam, K., & Cholamjiak, P. (2021). Novel forward–backward algorithms for optimization and applications to compressive sensing and image inpainting. Advances in Difference Equations, 2021, 265.

  45. Trinh, T. Q., & Vuong, P. T. (2025). The projection algorithm for inverse quasi-variational inequalities with applications to traffic assignment and network equilibrium control problems. Optimization, 74(8), 1819-1842.

  46. Vuong, P. T., He, X., & Thong, D. V. (2021). Global exponential stability of a neural network for inverse variational inequalities. Journal of Optimization Theory and Applications, 190(3), 915-930.

  47. Xia, Y., & Wang, J. (2000). A recurrent neural network for solving linear projection equations. Neural Networks, 13(3), 337-350.

  48. Xia, Y. S., & Wang, J. (2000). On the stability of globally projected dynamical systems. Journal of Optimization Theory and Applications, 106(1), 129-150.

  49. Youness, E. A. (1999). E-convex sets, E-convex functions, and E-convex programming. Journal of Optimization Theory and Applications, 102(2), 439-450.

  50. Zhang, Y., & Yu, G. (2022). Error bounds for inverse mixed quasi-variational inequality via generalized residual gap functions. Asia-Pacific Journal of Operational Research, 39(02), 2150017.

  51. He, B., He, X. Z., & Liu, H. X. (2010). Solving a class of constrained ‘black-box’ inverse variational inequalities. European Journal of Operational Research, 204(3), 391-401.

  52. Cottle, R. W., Pang, J. S., & Stone, R. E. (2009). The Linear Complementarity Problem. Society for Industrial and Applied Mathematics.

  53. Noor, M. A. (1988). Fixed point approach for complementarity problems. Journal of Mathematical Analysis and Applications, 133(2), 437-448.