Absolute Value Variational Inclusions

In this paper, we consider a new system of absolute value variational inclusions. Some interesting and extensively studied problems, such as absolute value equations, differences of monotone operators, absolute value complementarity problems and hemivariational inequalities, are included as special cases. It is shown that variational inclusions are equivalent to fixed point problems. This alternative formulation is used to study the existence of a solution of the system of absolute value inclusions. New iterative methods are suggested and investigated using resolvent equations, dynamical systems and nonexpansive mapping techniques. Convergence analysis of these methods is carried out under monotonicity assumptions. Some special cases are discussed as applications of the main results.


Introduction
Variational inclusions contain a wealth of new ideas and techniques and can be viewed as a novel extension and generalization of variational inequalities and variational principles. Variational inclusion theory has applications in industrial, physical, regional, social, pure and applied sciences. One of the most difficult and important problems in variational inclusions is the development of efficient numerical methods. Several numerical methods have been developed for solving variational inclusions and their variant forms, and these methods have been extended and modified in numerous ways. The fixed point formulation has allowed us to consider the existence of a solution, iterative schemes, sensitivity analysis, merit functions and other aspects of variational inclusions.
Equally important is the area of the resolvent equations, which is mainly due to Noor [21]. Using the resolvent operator method, it can be shown that the variational inclusions are equivalent to the resolvent equations. It is well known [23,24,25,26,27] that the resolvent equations technique can be used effectively to develop some powerful iterative algorithms for various classes of variational inclusions (inequalities) as well as to study the sensitivity analysis of variational inclusions. It is also well known that the resolvent equations include the Wiener-Hopf equations as a special case. The Wiener-Hopf equations were introduced and studied by Shi [47] and Robinson [45] in relation to the classical variational inequalities. This technique has been used to study the existence of a solution as well as to develop various inertial iterative methods for solving the variational inclusions, see [21,23]. It is worth mentioning that the inertial methods were introduced by Polyak [44]. Alvarez [1], Noor et al. [32,33,34] and Shehu et al. [48] have developed these inertial type methods for variational inequalities and related optimization problems.
Noor [20,21] proved that variational inequalities are equivalent to dynamical systems. This equivalence has been used to study the existence and stability of solutions of variational inequalities. Noor et al. [35] have shown that the dynamical system can be used to suggest implicit iterative methods for solving variational inclusions using forward-backward finite differences. For applications and numerical methods for dynamical systems, see [25,35,36] and the references therein.
The classical variational inclusion problem is to find µ ∈ H such that

0 ∈ T µ, (1.1)

where T : H → 2^H is a monotone operator, see [46]. Problem (1.1) appears in different fields of applied mathematics and optimization such as signal processing, structured optimization, composite convex optimization, saddle point problems, and inverse problems.
To develop efficient methods, it is often assumed that the operator T can be decomposed as the sum of two operators, T = M + A. In this case, the problem (1.1) is to find µ ∈ H such that

0 ∈ Mµ + Aµ, (1.2)

which is known as the problem of finding zeros of the sum of two monotone operators, see [23,24,43]. Here the operator M is strongly monotone and the operator A is maximal monotone. Such problems have been studied extensively in recent years, see [1,6,16,23,24,26,31,34,38,42,43].
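A classical scheme for the splitting (1.2) is the forward-backward iteration µ_{n+1} = J_A[µ_n − ρMµ_n], where J_A = (I + ρA)^{-1} is the resolvent of A. The following minimal sketch applies it to a hypothetical one-dimensional problem (not taken from the paper): M(u) = 2u − 1 is strongly monotone, and A is the normal cone to [0, ∞), whose resolvent is the projection max(0, ·); the unique zero is u = 0.5.

```python
# Forward-backward splitting for 0 in M(u) + A(u): a minimal 1-D sketch.
# Hypothetical data: M(u) = 2u - 1 (strongly monotone), A = normal cone
# to [0, inf), whose resolvent is projection onto [0, inf).

def M(u):
    return 2.0 * u - 1.0

def resolvent_A(u, rho):
    # J_A = (I + rho*A)^{-1}; for the normal cone to [0, inf) this is
    # simply the projection max(0, u), independent of rho.
    return max(0.0, u)

def forward_backward(u0, rho=0.25, n_iter=100):
    u = u0
    for _ in range(n_iter):
        u = resolvent_A(u - rho * M(u), rho)  # backward step after forward step
    return u

u_star = forward_backward(u0=5.0)
print(u_star)  # approaches the zero u = 0.5
```

For this choice of ρ the update map is a contraction, so Banach's fixed point theorem guarantees convergence regardless of the starting point.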
Motivated and inspired by the ongoing research in these active areas, we consider a new system of absolute value variational inclusions involving three monotone operators. It is shown that some interesting problems such as variational inclusions, systems of absolute value equations, absolute value variational inequalities, absolute value complementarity problems and absolute value hemivariational inequalities are special cases of absolute value variational inclusions. It is shown that this system of absolute value variational inclusions is equivalent to a fixed point problem. This alternative formulation is used to consider the existence of a solution as well as to suggest and investigate some new implicit and explicit iterative methods for solving variational inclusions.
The dynamical system and nonexpansive mapping approaches for solving the absolute value inclusions are also investigated. The convergence criteria of the proposed implicit methods are discussed under some mild conditions. Several important and significant special cases are discussed as applications of our results. It is expected that the techniques and ideas of this paper may be a starting point for further research.

Formulations and Basic Facts
Let H be a real Hilbert space whose inner product and norm are denoted by ⟨·, ·⟩ and ∥·∥, respectively. Let T, B : H → H be nonlinear operators and let A : H → 2^H be a set-valued operator.
We consider the problem of finding µ ∈ H such that

0 ∈ T µ − B|µ| + A(µ). (2.1)

An inclusion of type (2.1) is called an absolute value variational inclusion. We would like to emphasize that the operator T is strongly monotone, the operator B is Lipschitz continuous and A is a maximal monotone operator. Several important problems arising in pure and applied sciences can be studied in the framework of (2.1). For example, see [6,11,13,16,23,24,31,34,40,43,46] and the references therein.
We now discuss several important and interesting problems, which can be deduced from the problem (2.1).

Special Cases
(I). For B = 0, the problem (2.1) collapses to finding µ ∈ H such that

0 ∈ T µ + A(µ), (2.2)

which is known as the problem of finding zeros of the sum of two monotone operators and has been studied extensively in recent years.
(II). If A(µ) = 0, the problem (2.1) collapses to finding µ ∈ H such that

0 ∈ T µ − B|µ|, (2.3)

which is called the problem of finding zeros of absolute value inclusions. Problem (2.3) can be interpreted as finding zeros of the difference of two monotone operators, which is itself a very difficult problem. It can also be viewed as the problem of minimizing the difference of two convex functions, known as the DC problem [38]. Such problems have applications in optimization theory and in image processing in the medical sciences and earthquake analysis.
The problem of type (2.4) is called the mixed absolute value variational inequality problem, which has many important and significant applications in regional, physical, mathematical, pure and applied sciences.
Obviously, absolute value complementarity problems include the complementarity problems, introduced by Lemke [10], Cottle et al. [2] and Noor [18] in game theory, management sciences and quadratic programming, as special cases.
(VI). If Ω = H, then problem (2.5) reduces to finding µ ∈ H such that

T µ − B|µ| = b,

which is called the absolute value equation, where b is given data. This problem was rediscovered by Mangasarian [14] and Noor et al. [36,37]. Clearly, the system of absolute value equations is a very important special case of nonlinear variational inequalities, which were introduced by Noor [17] in 1975. See also [15,29,36,37,38,50,51].
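To make this special case concrete, consider the scalar absolute value equation aµ − |µ| = b. Assuming |a| > 1, the rearranged map µ → (|µ| + b)/a is a contraction, so simple fixed point iteration converges; the data below (a = 3, b = 5, with exact solution µ = 2.5) are purely illustrative.

```python
# A fixed-point sketch for the scalar absolute value equation a*x - |x| = b.
# Assuming |a| > 1, the map x -> (|x| + b) / a is a contraction with
# constant 1/|a|, so Banach's theorem applies.

def solve_ave(a, b, x0=0.0, n_iter=60):
    x = x0
    for _ in range(n_iter):
        x = (abs(x) + b) / a  # rearranged form of a*x - |x| = b
    return x

x = solve_ave(3.0, 5.0)
print(x)                 # close to the exact solution 2.5
print(3.0 * x - abs(x))  # residual check: close to b = 5
```

The same rearrangement underlies iterative schemes for the matrix equation Ax − |x| = b when the singular values of A exceed 1.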
Definition 2.1. An operator T : H → H is said to be:
(i) strongly monotone, if there exists a constant α > 0, such that
⟨T µ − T ν, µ − ν⟩ ≥ α∥µ − ν∥², ∀µ, ν ∈ H;
(ii) Lipschitz continuous, if there exists a constant β > 0, such that
∥T µ − T ν∥ ≤ β∥µ − ν∥, ∀µ, ν ∈ H.
Remark 2.2. Every strongly monotone operator is a monotone operator and every monotone operator is a pseudomonotone operator, but the converse is not true.

Iterative Resolvent Methods
In this section, we prove that the problem (2.1) is equivalent to a fixed point problem using the resolvent operator technique. We use this alternative fixed point formulation to study the existence of a solution as well as to suggest and analyze some new implicit methods for solving the absolute value variational inclusions (2.1).
Lemma 3.1. The function µ ∈ H is a solution of the absolute value variational inclusion (2.1), if and only if, µ ∈ H satisfies the relation

µ = J_A[µ − ρ(T µ − B|µ|)], (3.1)

where J_A = (I + ρA)^{−1} is the resolvent operator and ρ > 0 is a constant.
Lemma 3.1 implies that the variational inclusion (2.1) is equivalent to the fixed point problem (3.1).
We use this fixed point formulation to study the existence of a solution of the problem (2.1). We define the mapping Φ associated with (3.1) as

Φ(µ) = J_A[µ − ρ(T µ − B|µ|)]. (3.2)

To prove the existence of a solution of problem (2.1), it is enough to show that the mapping Φ defined by (3.2) is a contraction mapping.
Theorem 3.1. Let the operator T be strongly monotone with constant α > 0 and Lipschitz continuous with constant β > 0, respectively. If the operator B is Lipschitz continuous with constant γ and there exists a constant ρ > 0, such that

√(1 − 2ρα + ρ²β²) + ργ < 1,

then there exists a solution µ ∈ H satisfying problem (2.1).
Proof. For any µ, ν ∈ H, from (3.2) and the nonexpansiveness of the resolvent operator, we have

∥Φ(µ) − Φ(ν)∥ ≤ ∥µ − ν − ρ(T µ − T ν)∥ + ρ∥B|µ| − B|ν|∥. (3.4)

Since the operator T is strongly monotone with constant α > 0 and Lipschitz continuous with constant β > 0, we have

∥µ − ν − ρ(T µ − T ν)∥² ≤ (1 − 2ρα + ρ²β²)∥µ − ν∥². (3.5)

From the Lipschitz continuity of the operator B with constant γ > 0, we have

∥B|µ| − B|ν|∥ ≤ γ∥|µ| − |ν|∥ ≤ γ∥µ − ν∥. (3.6)

Combining (3.4), (3.5) and (3.6), we have

∥Φ(µ) − Φ(ν)∥ ≤ (√(1 − 2ρα + ρ²β²) + ργ)∥µ − ν∥ = θ∥µ − ν∥,

where θ = √(1 − 2ρα + ρ²β²) + ργ. From the assumption on ρ, it follows that θ < 1. Thus the mapping Φ defined by (3.2) is a contraction mapping and consequently has a fixed point Φ(µ) = µ ∈ H satisfying (2.1), the required result.
We now use the alternative equivalent formulation (3.1) to suggest some iterative methods for solving the problem (2.1).
Algorithm 3.1. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

µ_{n+1} = J_A[µ_n − ρ(T µ_n − B|µ_n|)], n = 0, 1, 2, . . . ,

which is known as the resolvent method and has been studied extensively.
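The explicit resolvent iteration µ_{n+1} = J_A[µ_n − ρ(Tµ_n − B|µ_n|)] can be sketched numerically. The data below are hypothetical, chosen only so the inclusion has a known solution: T(u) = 2u − 3 (strongly monotone), B(v) = 0.5v (Lipschitz continuous), and A the normal cone to [0, ∞), so that J_A is the projection max(0, ·). The inclusion 0 ∈ T(u) − B|u| + A(u) then has the solution u = 2.

```python
# A minimal sketch of the explicit resolvent method on a hypothetical
# 1-D problem: T(u) = 2u - 3, B(v) = 0.5*v, A = normal cone to [0, inf),
# so J_A is projection onto [0, inf).  The solution is u = 2.

def resolvent_method(u0, rho=0.3, n_iter=100):
    u = u0
    for _ in range(n_iter):
        g = (2.0 * u - 3.0) - 0.5 * abs(u)  # T(u) - B|u|
        u = max(0.0, u - rho * g)           # resolvent J_A applied to the step
    return u

u = resolvent_method(u0=0.0)
print(u)  # approaches the solution u = 2.0
```

With this step size ρ the positive branch of the update is u → 0.55u + 0.9, a contraction with fixed point 2, which is consistent with Theorem 3.1.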
Algorithm 3.2. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme which is known as the implicit resolvent method and is equivalent to the following two-step method.
Algorithm 3.4. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme which is known as the modified resolvent method and is equivalent to the following iterative method.
Algorithm 3.5. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme which is a two-step predictor-corrector method for solving the problem (2.1).
We can rewrite the equation (3.1) as: This fixed point formulation was used to suggest the following implicit method.
Algorithm 3.6. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

Algorithm 3.7. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

The fixed point formulation (3.9) is used to suggest the following implicit method for solving the problem (2.1).
Algorithm 3.8. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

We can use the predictor-corrector technique to rewrite Algorithm 3.8 as:
Algorithm 3.9. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

which is known as the mid-point implicit method for solving the problem (2.1).
We again use the above fixed point formulation to suggest the following implicit iterative method.
Algorithm 3.10. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

Using the predictor-corrector technique, Algorithm 3.10 can be written as:
Algorithm 3.11. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

which appears to be a new one.
It is obvious that Algorithm 3.3 and Algorithm 3.4 have been suggested using different variants of the fixed point formulation (3.1). It is natural to combine these fixed point formulations to suggest a hybrid implicit method for solving the problem (2.1) and related optimization problems, which is the main motivation of this paper.
One can rewrite (3.1) as This equivalent fixed point formulation enables us to suggest the following implicit method for solving the problem (2.1).
Algorithm 3.12. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

To implement this implicit method, one uses the predictor-corrector technique. We use Algorithm 3.4 as the predictor and Algorithm 3.12 as the corrector. Thus, we obtain a new two-step method for solving the problem (2.1).
Algorithm 3.13. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

For a parameter ξ, one can rewrite (3.1) as: This equivalent fixed point formulation enables us to suggest the following inertial method for solving the problem (2.1).
Algorithm 3.14. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme It is noted that Algorithm 3.14 is equivalent to the following two-step method.
Using this idea, we can suggest the following iterative methods for solving variational inclusions.
Algorithm 3.16. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme
Algorithm 3.17. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme
Using the techniques of Noor et al. [33,34], Jabeen et al. [8] and Shehu et al. [48], one can investigate the convergence analysis of these inertial resolvent methods.

Resolvent Equations Technique
In this section, we discuss the resolvent equations associated with the absolute value variational inclusion (2.1). It is worth mentioning that the resolvent equations associated with variational inclusions were introduced and studied by Noor [23,24]. Noor and Noor [26] proved that quasi variational inclusions are equivalent to implicit resolvent equations and used this equivalence to study sensitivity analysis.
Related to the absolute value variational inclusion (2.1), we consider the problem of finding z, µ ∈ H such that

T J_A z − B|J_A z| + ρ^{−1} R_A z = 0, (4.1)

where ρ > 0 is a constant and R_A = I − J_A. Here I is the identity operator and J_A = (I + ρA)^{−1} is the resolvent operator. Equations of the type (4.1) are called the absolute value resolvent equations.
We now prove that the absolute value variational inclusion (2.1) is equivalent to the absolute value resolvent equations (4.1).  Proof. Let µ ∈ H be a solution of (2.1). Then, by Lemma 3.1, we have which is the required (4.2). Thus which implies the required (4.1).
Lemma 4.1 implies that the variational inclusion (2.1) and the resolvent equations (4.1) are equivalent. This alternative equivalent formulation has been used to suggest and analyze a wide class of efficient and robust iterative methods for solving the absolute value variational inclusions and related optimization problems.
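The equivalence can be checked numerically on a hypothetical one-dimensional example (not from the paper): T(u) = 2u − 3, B(v) = 0.5v, and A the normal cone to [0, ∞), so J_A is the projection max(0, ·). Assuming the standard form of the resolvent equation, T(J_A z) − B|J_A z| + ρ^{−1}(I − J_A)z = 0, the sketch below iterates z_{n+1} = J_A z_n − ρ(T J_A z_n − B|J_A z_n|) and verifies that the limit z has zero residual and that µ = J_A z solves the inclusion (here µ = 2).

```python
# Numerical check of the resolvent equation reformulation, assuming its
# standard form: T(J_A z) - B|J_A z| + (1/rho)*(I - J_A) z = 0.
# Hypothetical 1-D data: T(u) = 2u - 3, B(v) = 0.5*v,
# J_A = projection onto [0, inf); the inclusion's solution is u = 2.

RHO = 0.3

def J_A(z):
    return max(0.0, z)          # resolvent of the normal cone to [0, inf)

def G(u):
    return (2.0 * u - 3.0) - 0.5 * abs(u)   # T(u) - B|u|

def resolvent_residual(z):
    return G(J_A(z)) + (z - J_A(z)) / RHO

def resolvent_equation_method(z0, n_iter=100):
    # iterate z_{n+1} = J_A z_n - rho * (T(J_A z_n) - B|J_A z_n|)
    z = z0
    for _ in range(n_iter):
        u = J_A(z)
        z = u - RHO * G(u)
    return z

z = resolvent_equation_method(z0=-4.0)
print(z, resolvent_residual(z))  # z near 2.0, residual near 0
```

At the limit point, z and µ = J_A z coincide, reflecting the fact that the resolvent equation and the fixed point formulation (3.1) encode the same solution.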
We use the resolvent equations (4.1) to suggest some new iterative methods for solving the absolute value variational inclusions. From (4.2) and (4.3), we have Thus, we have Consequently, for a constant α_n > 0, we have where which appears to be a new one.
In a similar way, we can suggest and analyze a predictor-corrector inertial method for solving the absolute value variational inclusion (2.1), which involves only one resolvent.
Algorithm 4.3. For given µ_0, µ_1 ∈ H, compute µ_{n+1} by the iterative scheme
One can study the convergence of Algorithm 4.3 using the technique of Jabeen et al. [8].

Dynamical Systems Technique
In this section, we consider the dynamical systems technique for solving the absolute value variational inclusions. Dupuis and Nagurney [5] introduced and studied dynamical systems associated with variational inequalities using fixed point problems. Thus the variational inequalities are equivalent to a first order initial value problem. Consequently, equilibrium and nonlinear problems arising in various branches of pure and applied sciences can now be studied in the setting of dynamical systems. It has been shown that dynamical systems are useful in developing efficient numerical techniques for solving variational inequalities and related optimization problems. We consider some iterative methods for solving the variational inclusions and investigate the convergence analysis of these new methods under only the monotonicity of the operator.
We now define the residue vector R(µ) by the relation

R(µ) = µ − J_A[µ − ρ(T µ − B|µ|)].

Invoking Lemma 3.1, one can easily conclude that µ ∈ H is a solution of the problem (2.1), if and only if, µ ∈ H is a zero of the equation R(µ) = 0. We now consider a dynamical system associated with the variational inclusions. Using the equivalent formulation (3.1), we suggest a class of resolvent dynamical systems as

dµ/dt = λ{J_A[µ − ρ(T µ − B|µ|)] − µ}, µ(t_0) = µ_0, (5.1)

where λ is a parameter. The system of type (5.1) is called the resolvent dynamical system associated with the problem (2.1). Here the right-hand side is related to the resolvent and may be discontinuous on the boundary. From the definition, it is clear that the solution of the dynamical system always stays in H. This implies that qualitative results such as the existence, uniqueness and continuous dependence of the solution of (5.1) can be studied.
We use the resolvent dynamical system (5.1) to suggest some iterative methods for solving the variational inclusion (2.1). These methods can be viewed in the sense of Korpelevich [13] and Noor [25] as involving the double resolvent.
For simplicity, we take λ = 1. Thus the dynamical system (5.1) becomes

dµ/dt = J_A[µ − ρ(T µ − B|µ|)] − µ, µ(t_0) = µ_0.

The forward difference scheme is used to construct the implicit iterative method.
where h is the step size.
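An explicit forward-Euler discretization of the system with λ = 1, µ_{n+1} = µ_n + h{J_A[µ_n − ρ(Tµ_n − B|µ_n|)] − µ_n}, can be sketched on a hypothetical one-dimensional problem (not from the paper): T(u) = 2u − 3, B(v) = 0.5v, and A the normal cone to [0, ∞), so J_A is the projection max(0, ·). The equilibrium of the system is the solution of the inclusion, u = 2.

```python
# A forward-Euler sketch of the resolvent dynamical system
# du/dt = J_A[u - rho*(T u - B|u|)] - u, with hypothetical 1-D data:
# T(u) = 2u - 3, B(v) = 0.5*v, J_A = projection onto [0, inf).
# The equilibrium solves the inclusion (u = 2).

def dynamical_system(u0, rho=0.3, h=0.5, n_steps=200):
    u = u0
    for _ in range(n_steps):
        residue = max(0.0, u - rho * ((2.0 * u - 3.0) - 0.5 * abs(u))) - u
        u = u + h * residue   # explicit Euler step with step size h
    return u

print(dynamical_system(u0=5.0))  # settles near the equilibrium u = 2.0
```

The trajectory converges because, for this data and step size, each Euler step is a convex combination of the current point and a contraction applied to it.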
Now, we can suggest the following implicit iterative method for solving the variational inclusion (2.1).
Algorithm 5.1. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme This is an implicit method, which is quite different from the implicit method of [4].
Algorithm 5.1 is equivalent to the following two-step method.
where h is the step size.
This formulation enables us to suggest the two-step iterative method.
Algorithm 5.3. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme Again using the resolvent dynamical system, we can suggest some iterative methods for solving the variational inclusions and related optimization problems.

or equivalently
Algorithm 5.5. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative scheme

Discretizing (5.3), we have
where h is the step size.
This helps us to suggest the following implicit iterative method for solving the problem (2.1).
where h is the step size.
For h = 1, we can suggest an implicit iterative method for solving the problem (2.1).
Discretizing (5.7) and taking h = 1, we obtain an inertial type iterative method for solving the variational inclusion (2.1). Using the predictor-corrector technique, we have
Algorithm 5.8. For a given µ_0 ∈ H, compute µ_{n+1} by the iterative schemes
which is known as the inertial two-step iterative method.
Remark 5.1. For appropriate and suitable choice of the operators T , B, A, convex set, parameter α and the spaces, one can propose a wide class of implicit, explicit and inertial type methods for solving variational inclusions and related nonlinear optimization problems. Using the techniques and ideas of Noor et al. [36], one can discuss the convergence analysis of the proposed methods.

Nonexpansive Mappings
In this section, we consider the nonexpansive mapping technique to suggest some iterative methods for solving the variational inclusion (2.1). First of all, we recall the following fact.
Let S be a nonexpansive mapping. We denote the set of fixed points of S by F(S) and the set of solutions of the variational inclusion (2.1) by RI(H, T, B). If µ* ∈ F(S) ∩ RI(H, T, B), then µ* ∈ F(S) and µ* ∈ RI(H, T, B). Thus, from Lemma 3.1, it follows that

µ* = Sµ* = SJ_A[µ* − ρ(T µ* − B|µ*|)],

where ρ > 0 is a constant.
This fixed point formulation is used to suggest the following iterative method for finding a common element of two different sets of solutions of the fixed points of the nonexpansive mappings and the variational inclusions.
Algorithm 6.1. For a given µ_0 ∈ H, compute the approximate solution µ_{n+1} by the iterative scheme

µ_{n+1} = (1 − a_n)µ_n + a_n SJ_A[µ_n − ρ(T µ_n − B|µ_n|)],

where a_n ∈ [0, 1] for all n ≥ 0 and S is the nonexpansive operator. Algorithm 6.1 is also known as a Mann iteration. Using the technique of Noor [25], one can discuss the convergence analysis of Algorithm 6.1.
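A Mann-type iteration of this kind, µ_{n+1} = (1 − a_n)µ_n + a_n S(J_A[µ_n − ρ(Tµ_n − B|µ_n|)]), can be sketched with hypothetical one-dimensional data (not from the paper): T(u) = 2u − 3, B(v) = 0.5v, J_A the projection onto [0, ∞), and S chosen as the projection onto [0, 4], a nonexpansive map whose fixed-point set contains the inclusion's solution u = 2, so the common element is u = 2.

```python
# A Mann-iteration sketch for finding a common element of F(S) and the
# solution set of the inclusion.  Hypothetical 1-D data: T(u) = 2u - 3,
# B(v) = 0.5*v, J_A = projection onto [0, inf), S = projection onto [0, 4].
# The common element is u = 2.

def S(u):
    return min(4.0, max(0.0, u))   # nonexpansive: projection onto [0, 4]

def mann_iteration(u0, rho=0.3, a=0.5, n_iter=200):
    u = u0
    for _ in range(n_iter):
        step = max(0.0, u - rho * ((2.0 * u - 3.0) - 0.5 * abs(u)))  # J_A[...]
        u = (1.0 - a) * u + a * S(step)   # Mann averaging with constant a_n = a
    return u

print(mann_iteration(u0=10.0))  # approaches the common element u = 2.0
```

The constant averaging parameter a_n = 0.5 used here is an illustrative choice; the convergence theory typically requires conditions such as Σ a_n = ∞.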
Related to the variational inclusions, we have the problem of solving the resolvent equations (4.1) involving the nonexpansive mapping S. To be more precise, let R_A = I − SJ_A, where J_A is the resolvent, I is the identity operator and S is the nonexpansive operator. We consider the problem of finding z ∈ H such that Remark 6.1. Clearly an r-strongly monotone operator or a γ-inverse strongly monotone operator must be a relaxed (γ, r)-cocoercive operator, but the converse is not true. Therefore the class of relaxed (γ, r)-cocoercive operators is the most general class, and Definition 2.4 includes both Definition 2.2 and Definition 2.3 as special cases.
Remark 6.2. From Definition 6.2, it follows that if T is α-inverse strongly monotone (or cocoercive), then T is also Lipschitz continuous with constant 1/α.
In this section, we use the resolvent equations to suggest and analyze an iterative method for finding a common element of the set of fixed points of the nonexpansive mapping and the set of solutions of the variational inclusion (2.1). For this purpose, we need the following result, which can be proved using Lemma 2.2.
where ρ > 0 is a constant.
From Lemma 6.2, it follows that the variational inclusion (2.1) and the resolvent equation (6.2) are equivalent. This alternative equivalent formulation has been used to suggest and analyze a wide class of efficient and robust iterative methods for solving variational inclusions and related optimization problems. We denote the set of the solutions of the resolvent equations by IRE(H,T,S).
Using Lemma 6.2 and Remark 6.1, we now suggest and analyze a new iterative algorithm for finding a common element of the solution sets of the absolute value variational inclusions and the nonexpansive mapping S, which is the main motivation of this paper.
Algorithm 6.2. For a given z_0 ∈ H, compute the approximate solution z_{n+1} by the iterative schemes

µ_n = SJ_A z_n, (6.4)
z_{n+1} = (1 − a_n)z_n + a_n{µ_n − ρT µ_n + ρB|µ_n|}, (6.5)

where a_n ∈ [0, 1] for all n ≥ 0 and S is a nonexpansive operator.
For S = I, the identity operator, Algorithm 6.2 reduces to the following iterative method for solving the variational inclusion (2.1), which appears to be a new one.
Algorithm 6.3. For a given z_0 ∈ H, compute the approximate solution z_{n+1} by the iterative schemes

µ_n = J_A z_n,
z_{n+1} = (1 − a_n)z_n + a_n{µ_n − ρT µ_n + ρB|µ_n|}.
We now study the convergence of Algorithm 6.2.
We now prove the strong convergence of Algorithm 6.2 under α-inverse strong monotonicity.
Theorem 6.2. Let T be an α-inverse strongly monotone mapping with constant α > 0 and S be a nonexpansive mapping such that F(S) ∩ IRE(H, T) ≠ ∅. If the operator B is Lipschitz continuous with constant ξ and

ρ < 2α/(1 + αξ), (6.15)

then the approximate solution obtained from Algorithm 6.2 converges strongly to z* ∈ F(S) ∩ IRE(H, T).
Therefore, it follows from Lemma 6.1 that lim_{n→∞} ∥z_n − z*∥ = 0, completing the proof.

Conclusion
We have introduced and investigated absolute value variational inclusions. It has been shown that some interesting and important problems, such as absolute value equations, complementarity problems, differences of two operators and absolute value variational inequalities, are special cases of absolute value variational inclusions. This shows that absolute value variational inclusions provide a general unified framework for studying these seemingly unrelated problems in a unified manner. We have used the equivalence between the absolute value variational inclusion and the fixed point formulation to suggest some new iterative methods for solving the variational inclusions. These new methods include the extra-resolvent method, modified double resolvent methods and inertial type iterative methods, which are suggested using resolvent equations, dynamical systems and nonexpansive mappings. Convergence analysis of the proposed methods is discussed for monotone operators. It is an open problem to compare the proposed methods with other methods; despite recent research activities, very few results are available. The development of efficient implementable numerical methods requires further effort.