Self Adaptive Viscosity-Type Inertial Extragradient Algorithms for Solving Variational Inequalities with Applications

In this paper, we introduce two new inertial extragradient algorithms with non-monotonic stepsizes for solving monotone and Lipschitz continuous variational inequality problems in real Hilbert spaces. Strong convergence theorems for the suggested iterative schemes are established without prior knowledge of the Lipschitz constant of the mapping. Finally, some numerical examples are provided to illustrate the efficiency and advantages of the proposed algorithms and to compare them with some related ones.


Introduction
Our interest in this paper is to investigate self-adaptive fast iterative algorithms for solving variational inequality problems in real Hilbert spaces. Recall that the variational inequality problem (in short, VIP) is to find a point x* ∈ C such that

⟨Ax*, x − x*⟩ ≥ 0, ∀x ∈ C, (VIP)

where A : H → H is a nonlinear mapping and C is a nonempty closed convex set in a real Hilbert space H endowed with the inner product ⟨·, ·⟩ and the induced norm ‖·‖. We denote the set of all such x* by VI(C, A) in this paper. Let us recall some nonlinear mappings in functional analysis. For all x, y ∈ H, a mapping A : H → H is said to be (i) L-Lipschitz continuous with L > 0 iff ‖Ax − Ay‖ ≤ L‖x − y‖; (ii) η-strongly monotone if there exists η > 0 such that ⟨Ax − Ay, x − y⟩ ≥ η‖x − y‖²; (iii) monotone if ⟨Ax − Ay, x − y⟩ ≥ 0.
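For intuition, these three properties can be checked numerically for an affine operator A(x) = Mx, whose symmetric part governs (strong) monotonicity. The matrix M, the sample count, and the tolerances below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Affine operator A(x) = Mx; (M + M^T)/2 positive definite => strongly monotone
M = np.array([[2.0, 1.0],
              [-1.0, 2.0]])
A = lambda x: M @ x

L = np.linalg.norm(M, 2)                       # Lipschitz constant of A
eta = np.linalg.eigvalsh((M + M.T) / 2).min()  # strong-monotonicity modulus

rng = np.random.default_rng(1)
ok_lipschitz = all(
    np.linalg.norm(A(x) - A(y)) <= L * np.linalg.norm(x - y) + 1e-9
    for x, y in (rng.normal(size=(2, 2)) for _ in range(200))
)
ok_strongly_monotone = all(
    np.dot(A(x) - A(y), x - y) >= eta * np.linalg.norm(x - y) ** 2 - 1e-9
    for x, y in (rng.normal(size=(2, 2)) for _ in range(200))
)
```

Here the symmetric part of M is 2I, so this particular A is η-strongly monotone with η = 2 and L-Lipschitz with L = √5.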
The theory of variational inequalities has become a suitable model for many practical problems in various areas, such as medical imaging, machine learning, economics, and optimal control; see, e.g., [4,6,13]. Recently, many iterative methods for solving variational inequality problems have been proposed and studied; see, e.g., [3,9,12,17,24,27,29] and the references therein. Among these methods, the projection-based methods and their variants play an important role. The simplest and oldest projection-type algorithm is the classical projected-gradient method, which performs only one projection onto the feasible set per iteration. However, its convergence requires a strong hypothesis: strong monotonicity or inverse strong monotonicity of the mapping A. To avoid this strong assumption, Korpelevich [14] proposed the following extragradient method (EGM):

y_n = P_C(x_n − ϑAx_n),
x_{n+1} = P_C(x_n − ϑAy_n), (EGM)

where ϑ ∈ (0, 1/L), the mapping A is monotone and L-Lipschitz continuous and P_C is the metric projection onto C. It is known that the sequence generated by (EGM) converges weakly to a solution of (VIP) provided that VI(C, A) ≠ ∅. It is worth noting that the extragradient method needs to compute two orthogonal projections onto the feasible set in each iteration. This method is particularly useful when the feasible set is simple enough that the projection can be evaluated easily. However, if the feasible set is a general closed convex set, then a minimum distance problem must be solved twice to obtain the next iterate. This may increase the computational burden and seriously affect the efficiency of the extragradient method. Next, we introduce two notable methods that overcome this difficulty. The first is Tseng's extragradient method proposed by Tseng [31]:

y_n = P_C(x_n − ϑAx_n),
x_{n+1} = y_n − ϑ(Ay_n − Ax_n), (TEGM)

where ϑ ∈ (0, 1/L) and the mapping A is monotone and L-Lipschitz continuous.
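The two-projection iteration of (EGM) is short to implement. A minimal sketch (not the paper's numerical setup) on an illustrative problem: a skew-symmetric, hence monotone, operator with a Euclidean-ball constraint, whose unique VIP solution is the origin:

```python
import numpy as np

def proj_ball(x, r):
    # Metric projection onto C = {x : ||x|| <= r}
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def extragradient(A, proj_C, x0, step, iters=500):
    # Korpelevich's EGM: y_n = P_C(x_n - v*A(x_n)), x_{n+1} = P_C(x_n - v*A(y_n))
    x = x0
    for _ in range(iters):
        y = proj_C(x - step * A(x))
        x = proj_C(x - step * A(y))
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric => monotone, L = 1
A = lambda x: M @ x
sol = extragradient(A, lambda v: proj_ball(v, 2.0), np.array([0.9, -0.5]), step=0.5)
# sol approximates the unique solution x* = 0 of this VIP
```

Note that plain projected gradients fail on this rotation-type operator, while the extra "look-ahead" evaluation of A makes the iteration contract toward the solution.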
Another well-known method is the subgradient extragradient method (SEGM) proposed by Censor, Gibali and Reich [5], which can be regarded as an improvement of the EGM. Their method is of the form:

y_n = P_C(x_n − ϑAx_n),
T_n = {x ∈ H | ⟨x_n − ϑAx_n − y_n, x − y_n⟩ ≤ 0}, (SEGM)
x_{n+1} = P_{T_n}(x_n − ϑAy_n),

where ϑ ∈ (0, 1/L) and the mapping A is monotone and L-Lipschitz continuous. They replaced the second projection of EGM, onto the feasible set, by a projection onto a specific constructible half-space. Note that both methods (TEGM) and (SEGM) have been proven to converge weakly in real Hilbert spaces. It is known that strong convergence is preferable to weak convergence in infinite-dimensional spaces. Recently, Kraikaew and Saejung [15] proposed a Halpern subgradient extragradient method (HSEGM) to solve (VIP) in real Hilbert spaces. The method is inspired by the Halpern method and the (SEGM). Indeed, their method is of the form:

y_n = P_C(x_n − ϑAx_n),
z_n = P_{T_n}(x_n − ϑAy_n), (1.1)
x_{n+1} = ϕ_n x_0 + (1 − ϕ_n)z_n,

where ϑ ∈ (0, 1/L), {ϕ_n} ⊂ (0, 1), lim_{n→∞} ϕ_n = 0, ∑_{n=1}^∞ ϕ_n = +∞ and the mapping A is monotone and L-Lipschitz continuous. They proved that the iterative sequence {x_n} defined by (1.1) converges to a solution of (VIP) in norm. Note that all the above methods require knowledge of the Lipschitz constant of the mapping A. In 2017, Shehu and Iyiola [23] proposed a modification of SEGM with an Armijo-like step size rule for solving (VIP), that is,

y_n = P_C(x_n − ϑ_n Ax_n),
T_n = {x ∈ H | ⟨x_n − ϑ_n Ax_n − y_n, x − y_n⟩ ≤ 0}, (1.2)
z_n = P_{T_n}(x_n − ϑ_n Ay_n),
x_{n+1} = ϕ_n f(x_n) + (1 − ϕ_n)z_n,

where the mapping A is monotone and L-Lipschitz continuous, the mapping f : H → H is a k-contraction, {ϕ_n} ⊂ (0, 1), lim_{n→∞} ϕ_n = 0, ∑_{n=1}^∞ ϕ_n = +∞, ϑ_n = ρ^{l_n} and l_n is the smallest nonnegative integer such that ϑ_n‖Ax_n − Ay_n‖ ≤ µ‖x_n − y_n‖, with ρ ∈ (0, 1) and µ ∈ (0, 1). Inspired by the work of [23], Thong and Hieu in their work [30] introduced a modification of Tseng's extragradient method, that is,

y_n = P_C(x_n − ϑ_n Ax_n),
z_n = y_n − ϑ_n(Ay_n − Ax_n), (1.3)
x_{n+1} = ϕ_n f(x_n) + (1 − ϕ_n)z_n,

where the mapping A is monotone and L-Lipschitz continuous, the mapping f : H → H is a k-contraction, {ϕ_n} ⊂ (0, 1), lim_{n→∞} ϕ_n = 0, ∑_{n=1}^∞ ϕ_n = +∞, and ϑ_n is chosen as the largest ϑ ∈ {γ, γl, γl², …} satisfying ϑ‖Ax_n − Ay_n‖ ≤ µ‖x_n − y_n‖, with γ > 0, l ∈ (0, 1) and µ ∈ (0, 1).
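The point of the half-space construction in (SEGM) is that projecting onto T_n has a closed form, so only one genuine projection onto C is needed per iteration. A minimal sketch on an illustrative skew-symmetric operator with a ball constraint (these choices are not from the paper):

```python
import numpy as np

def proj_ball(x, r):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def proj_halfspace(u, w, y):
    # Closed-form projection onto T = {x : <w, x - y> <= 0}
    ww = np.dot(w, w)
    if ww == 0.0:
        return u                      # T is the whole space
    return u - (max(0.0, np.dot(w, u - y)) / ww) * w

def segm(A, proj_C, x0, step, iters=500):
    # Censor-Gibali-Reich SEGM: the second projection of EGM is replaced
    # by the cheap projection onto the constructed half-space T_n
    x = x0
    for _ in range(iters):
        y = proj_C(x - step * A(x))
        w = x - step * A(x) - y       # outward normal of T_n at y
        x = proj_halfspace(x - step * A(y), w, y)
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # monotone, Lipschitz with L = 1
A = lambda x: M @ x
sol = segm(A, lambda v: proj_ball(v, 2.0), np.array([0.9, -0.5]), step=0.5)
```

When the first projection is inactive, w = 0 and T_n is the whole space, so the scheme reduces to the unconstrained extragradient step, as expected.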
Both methods (1.2) and (1.3) can work without prior knowledge of the Lipschitz constant of the mapping, but the Armijo-like step size may require additional evaluations of the mapping A. Recently, Yang et al. [33,34] introduced two self-adaptive extragradient algorithms for solving variational inequalities. The algorithms are inspired by the subgradient extragradient method, Tseng's extragradient method, the viscosity method and new simple step size rules. Indeed, their subgradient-extragradient-type scheme generates {x_n} by

y_n = P_C(x_n − ϑ_n Ax_n),
T_n = {x ∈ H | ⟨x_n − ϑ_n Ax_n − y_n, x − y_n⟩ ≤ 0}, (1.4)
z_n = P_{T_n}(x_n − ϑ_n Ay_n),
x_{n+1} = ϕ_n f(x_n) + (1 − ϕ_n)z_n,

and their Tseng-type scheme generates {x_n} by

y_n = P_C(x_n − ϑ_n Ax_n),
z_n = y_n − ϑ_n(Ay_n − Ax_n), (1.5)
x_{n+1} = ϕ_n f(x_n) + (1 − ϕ_n)z_n,

where in both cases {ϑ_n} is updated by

ϑ_{n+1} = min{ µ‖x_n − y_n‖ / ‖Ax_n − Ay_n‖, ϑ_n } if Ax_n − Ay_n ≠ 0, and ϑ_{n+1} = ϑ_n otherwise.
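A step-size rule of this Yang et al. kind needs only quantities the iteration has already computed, so no line search and no Lipschitz constant are required. A hedged sketch of the update (the operator and the numerical values below are illustrative):

```python
import numpy as np

def update_step(step, mu, x, y, Ax, Ay):
    # theta_{n+1} = min(mu*||x_n - y_n|| / ||Ax_n - Ay_n||, theta_n) when
    # Ax_n != Ay_n, else theta_n: no Lipschitz constant is ever needed
    d = np.linalg.norm(Ax - Ay)
    if d > 0.0:
        return min(mu * np.linalg.norm(x - y) / d, step)
    return step

A = lambda v: 2.0 * v                          # Lipschitz continuous with L = 2
x, y = np.array([1.0, 0.0]), np.zeros(2)
s1 = update_step(1.0, 0.9, x, y, A(x), A(y))   # shrinks to mu/L = 0.45
s2 = update_step(0.3, 0.9, x, y, A(x), A(y))   # already small enough: unchanged
```

For an L-Lipschitz mapping the ratio µ‖x − y‖/‖Ax − Ay‖ is bounded below by µ/L, so the generated step sizes stay away from zero; the drawback is that this rule is non-increasing, which the non-monotonic rule proposed later in the paper addresses.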
On the other hand, in recent years there has been tremendous interest in developing fast iterative algorithms. Many authors have used inertial techniques to construct a large number of iterative algorithms with improved convergence speed; see, e.g., [2,7,10,11,21,22,25] and the references therein. A common feature of these algorithms is that the next iterate depends on a combination of the previous two iterates. Note that this minor change greatly improves the performance of the algorithms.
Motivated and inspired by the above work, in this paper, we present two inertial extragradient algorithms with non-monotonic stepsizes for solving the monotone variational inequality problem in real Hilbert spaces. Our algorithms do not require the prior knowledge of the Lipschitz constant of the mapping. Strong convergence theorems of our algorithms are established under some suitable conditions. Finally, we provide some numerical experiments to support the theoretical results. The two algorithms obtained in this paper improve and extend some related results in this field [15,23,30,33,34].
The structure of the paper is as follows. In Section 2, we present some preliminaries that will be needed in the sequel. In Section 3, we propose two algorithms and analyze their convergence. In Section 4, some numerical examples are provided to illustrate the numerical behavior of the proposed algorithms. Finally, we conclude this paper with a brief summary in Section 5.

Preliminaries
Let C be a nonempty closed and convex subset of a real Hilbert space H. The weak convergence and strong convergence of {x_n}_{n=1}^∞ to x are denoted by x_n ⇀ x and x_n → x, respectively. For each x, y ∈ H and δ ∈ ℝ, we have the following facts: For every point x ∈ H, there exists a unique nearest point in C, denoted by P_C(x), such that P_C(x) := argmin{‖x − y‖ : y ∈ C}. P_C is called the metric projection of H onto C. It is known that P_C has the following basic properties: We state the following well-known lemmas, which will be used in the sequel.
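One basic property of the metric projection is its variational characterization: p = P_C(x) iff ⟨x − p, y − p⟩ ≤ 0 for all y ∈ C. This is easy to sanity-check numerically; a sketch for a box constraint, where the projection is a componentwise clip (the set, dimension and sample counts are illustrative):

```python
import numpy as np

def proj_box(x, lo, hi):
    # Metric projection onto the box C = [lo, hi]^n is a componentwise clip
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = 3.0 * rng.normal(size=5)
p = proj_box(x, -1.0, 1.0)
# Characterization of P_C: <x - p, y - p> <= 0 must hold for every y in C
worst = max(np.dot(x - p, rng.uniform(-1.0, 1.0, size=5) - p) for _ in range(100))
```

Each coordinate contributes a nonpositive term to the inner product, so `worst` never exceeds zero (up to rounding).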

Lemma 1. ([26])
Let C be a nonempty, convex and closed subset of a real Hilbert space H and A : C → H be a continuous and monotone mapping.

Lemma 3. ([20]) Let {a_n} be a sequence of nonnegative real numbers, {τ_n} be a sequence of real numbers in (0, 1) with ∑_{n=1}^∞ τ_n = ∞, and {b_n} be a sequence of real numbers. Assume that

a_{n+1} ≤ (1 − τ_n)a_n + τ_n b_n, ∀n ≥ 1.

If lim sup_{k→∞} b_{n_k} ≤ 0 for every subsequence {a_{n_k}} of {a_n} satisfying lim inf_{k→∞}(a_{n_k+1} − a_{n_k}) ≥ 0, then lim_{n→∞} a_n = 0.
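The mechanism of Lemma 3 is easy to observe numerically. A sketch with the illustrative choices τ_n ≡ 0.1 (which lies in (0, 1) and has a divergent sum) and b_n = 0.9^n (so every subsequential lim sup of {b_n} is ≤ 0), starting from a_1 = 5:

```python
# Recursion of Lemma 3: a_{n+1} <= (1 - tau_n) a_n + tau_n b_n
a = 5.0
tau = 0.1                 # constant tau_n in (0,1): sum tau_n diverges
for n in range(1, 301):
    b = 0.9 ** n          # b_n -> 0, so limsup along any subsequence is <= 0
    a = (1.0 - tau) * a + tau * b
# after 300 steps a is essentially zero, as Lemma 3 predicts
```

Taking the recursion with equality gives the worst case allowed by the hypothesis, and even then a_n is driven to zero.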

Main results
In this section, we propose two new inertial algorithms for monotone variational inequalities, which are based on the subgradient extragradient method, Tseng's extragradient method and the viscosity method. First, we assume that our algorithms satisfy the following conditions.
Indeed, we have θ_n‖x_n − x_{n−1}‖ ≤ ϵ_n for all n, which together with lim_{n→∞} ϵ_n/ϕ_n = 0 implies that lim_{n→∞} (θ_n/ϕ_n)‖x_n − x_{n−1}‖ = 0. The following lemmas are quite helpful for analyzing the convergence of the algorithm.
Remark 2. The idea of the step size ϑ_n defined in (3.2) is derived from [16]. It is worth noting that the step size ϑ_n generated in Algorithm 3.1 is allowed to increase as the iterations progress. Therefore, the use of this type of step size reduces the dependence on the initial step size ϑ_1. On the other hand, since ∑_{n=1}^∞ ξ_n < +∞, we have lim_{n→∞} ξ_n = 0. Consequently, the step size ϑ_n may not increase when n is large enough. If ξ_n ≡ 0, then the step size ϑ_n in Algorithm 3.1 is similar to the approaches in [1,2,33].
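A sketch of a non-monotonic update of this kind: the cap ϑ_n + ξ_n, with ξ_n summable, is what permits occasional growth of the step size. The operator and numerical values below are illustrative, not from the paper:

```python
import numpy as np

def next_step(step, xi, mu, x, y, Ax, Ay):
    # Unlike purely non-increasing rules, the cap step + xi lets the step
    # size recover from an overly small initial guess theta_1
    d = np.linalg.norm(Ax - Ay)
    if d > 0.0:
        return min(mu * np.linalg.norm(x - y) / d, step + xi)
    return step + xi

A = lambda v: 2.0 * v                     # Lipschitz continuous with L = 2
x, y = np.array([1.0, 0.0]), np.zeros(2)
grown = next_step(0.2, 0.05, 0.9, x, x, A(x), A(x))   # Ax == Ay: step grows
capped = next_step(1.0, 0.05, 0.9, x, y, A(x), A(y))  # local ratio binds: 0.45
```

Because ∑ ξ_n < +∞, the total possible growth is bounded, so the step sizes remain bounded and the increase eventually dies out, matching the remark above.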
The following Lemma 5 plays an important role in the convergence analysis of Algorithm 3.1; it can be obtained by the same argument as Lemma 3.2 in [28].
Using the definition of x_{n+1} and (3.7), we deduce that {x_n} is bounded. Hence {u_n}, {z_n} and {f(x_n)} are also bounded. Claim 2.
for some M_4 > 0. Indeed, in view of (3.7), one sees that the estimate holds for some M_2 > 0. Combining Lemma 5 and (3.8), we obtain the claimed bound, where M_4 := M_2 + M_3. The desired result follows by a simple rearrangement. Claim 3.
where M := sup_{n∈ℕ}{‖x_n − p‖, ‖x_n − x_{n−1}‖} > 0. Using (3.4) and (3.9), we obtain the claimed estimate. Claim 4. {‖x_n − p‖²} converges to zero. Indeed, from Lemma 3 and Remark 1, it suffices to show that lim sup_{k→∞} ⟨f(p) − p, x_{n_k+1} − p⟩ ≤ 0 for every subsequence {‖x_{n_k} − p‖} of {‖x_n − p‖} satisfying lim inf_{k→∞}(‖x_{n_k+1} − p‖ − ‖x_{n_k} − p‖) ≥ 0. For this purpose, assume that such a subsequence has been chosen. It follows from Claim 2 and Condition (C5) that the residual terms vanish, which implies that lim_{k→∞}‖y_{n_k} − u_{n_k}‖ = 0 and lim_{k→∞}‖z_{n_k} − y_{n_k}‖ = 0.
Therefore, we obtain

lim_{k→∞}‖z_{n_k} − u_{n_k}‖ ≤ lim_{k→∞}‖z_{n_k} − y_{n_k}‖ + lim_{k→∞}‖y_{n_k} − u_{n_k}‖ = 0. (3.10)

Moreover, using Remark 1 and Condition (C5), we have lim_{k→∞}‖x_{n_k} − u_{n_k}‖ = 0. Thus, we conclude that u_{n_k} ⇀ z and hence x_{n_k} ⇀ z, since ‖x_{n_k} − u_{n_k}‖ → 0. This, together with lim_{k→∞}‖u_{n_k} − y_{n_k}‖ = 0 and Lemma 2, yields z ∈ VI(C, A). From the definition of p and (3.12), one sees that lim sup_{k→∞} ⟨f(p) − p, x_{n_k+1} − p⟩ ≤ 0. Thus, from Remark 1, (3.14), Claim 3 and Lemma 3, we conclude that x_n → p as n → ∞. That is the desired result.

The self adaptive viscosity-type inertial Tseng extragradient algorithm
In this subsection, we introduce a self adaptive viscosity-type inertial Tseng extragradient algorithm for solving (VIP). The Algorithm 3.2 is stated as follows.
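Stripped of the inertial and viscosity layers, the core Tseng step underlying Algorithm 3.2 needs only one projection per iteration. A minimal sketch on an illustrative problem (fixed step size, skew-symmetric operator, ball constraint; none of these choices are from the paper):

```python
import numpy as np

def proj_ball(x, r):
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def tseng(A, proj_C, x0, step, iters=500):
    # Forward-backward-forward step: the correction y_n - v*(A(y_n) - A(x_n))
    # replaces the second projection of the extragradient method
    x = x0
    for _ in range(iters):
        y = proj_C(x - step * A(x))
        x = y - step * (A(y) - A(x))
    return x

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # monotone, Lipschitz with L = 1
A = lambda x: M @ x
sol = tseng(A, lambda v: proj_ball(v, 2.0), np.array([0.9, -0.5]), step=0.5)
# sol approximates the unique solution x* = 0 of this VIP
```

The design choice here is the trade-off discussed in the introduction: compared with the subgradient extragradient method, the second projection disappears entirely, at the price of one extra evaluation of A per iteration.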
The following Lemma 6 can be obtained by the same argument as Lemma 3.3 in [28].
The desired result can be obtained by using the same arguments as in Claim 3 of Theorem 1. Claim 4. The sequence {‖x_n − p‖²} converges to zero. As in Claim 4 of Theorem 1, we suppose that {‖x_{n_k} − p‖} is a subsequence of {‖x_n − p‖} satisfying lim inf_{k→∞}(‖x_{n_k+1} − p‖ − ‖x_{n_k} − p‖) ≥ 0. From Claim 2 and Condition (C5), one obtains (3.15). By (3.15), it follows that lim_{k→∞}‖y_{n_k} − u_{n_k}‖ = 0. According to Lemma 6, one has lim_{k→∞}‖y_{n_k} − z_{n_k}‖ = 0. Using the same facts as (3.10)–(3.13), we obtain (3.16). Therefore, using Claim 3, Condition (C5), (3.16) and Lemma 3, one concludes that lim_{n→∞}‖x_n − p‖ = 0. The proof is completed.
Remark 3. We have the following observations for the Algorithms 3.1 and 3.2.
(i) Notice that the algorithm proposed by Kraikaew and Saejung [15] is a fixed-step method, i.e., the update of the step size requires prior information on the Lipschitz constant of the mapping. The algorithms suggested in [23,30] apply an Armijo-type criterion to update the step size, which increases the computational burden because each iteration may spend considerable work searching for a suitable step size. By contrast, the step size of our two iterative schemes is updated adaptively without any line search process. In other words, our algorithms do not require the Lipschitz constant as an input parameter. In addition, our Algorithms 3.1 and 3.2 embed a new non-monotonic stepsize criterion that overcomes the drawback of the non-increasing stepsize sequences generated by the algorithms offered in [1,2,33].
(ii) The algorithms presented in this paper obtain strong convergence in real Hilbert spaces by applying the viscosity-type method, whereas the strongly convergent methods presented in [12,29] rely on projection-type (hybrid) constructions. Such projection-based constructions are not easy to implement, since additional sets must be built and projected onto in each iteration. Therefore, the iterative schemes provided in this paper are more useful in practice.

Numerical examples
This section reports some numerical results illustrating the effectiveness of the proposed algorithms in comparison with the known Algorithms (1.1)–(1.5). All programs were implemented in MATLAB 2018a on an Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz computer with 8.00 GB of RAM. Our parameters are set as follows. For all algorithms, we set ϕ_n = 1/(n + 1) and f(x) = 0.9x. In our proposed algorithms, we choose ϑ_1 = 1, µ = 0.9, ξ_n = 1/(n + 1)1.
It is known that A is monotone and L-Lipschitz continuous with L = 2, and that x*(t) = 0 is the solution of the corresponding variational inequality problem. Note that the projection onto C can be computed explicitly. We choose a maximum of 50 iterations as a common stopping criterion. Figure 2 shows the numerical behavior of all the algorithms for two starting points. Remark 4. From Example 1 and Example 2, we see that our proposed algorithms outperform some existing algorithms. The judicious use of inertial terms and the new step size greatly improves the computational performance of our algorithms. Note that Algorithms (1.2) and (1.3) require more execution time because they use an Armijo-like line search to adaptively compute the step size.
Next, we use the proposed Algorithms 3.1 and 3.2 to solve variational inequalities that arise in optimal control problems. Recently, many scholars have proposed different methods for solving such problems. We refer the reader to [8,19,32] for the algorithms and a detailed description of the problem.
We now consider an example in which the terminal function is not linear. In this example, the parameters of our algorithms are set as in Example 3. Algorithm 3.1 ran 644 iterations and Algorithm 3.2 ran 1000 iterations, taking 0.28734 seconds and 0.39556 seconds, respectively. The approximate optimal control and the corresponding trajectories of Algorithm 3.2 are plotted in Figure 4. Remark 5. As can be seen from Examples 3 and 4, the algorithms proposed in this paper work well on optimal control problems. It is worth noting that our suggested algorithms perform better when the objective function is linear rather than nonlinear (cf. Figures 3 and 4).

Conclusions
In this paper, we proposed two new algorithms for solving the variational inequality problem with a monotone and Lipschitz continuous mapping whose Lipschitz constant is unknown. The algorithms are inspired by the inertial method, the subgradient extragradient method, Tseng's extragradient method and the viscosity method. Strong convergence theorems for the proposed algorithms were obtained under mild and standard conditions. Finally, some numerical experiments arising in finite- and infinite-dimensional spaces were performed to show the efficiency and advantages of our suggested iterative schemes over existing ones.