Trifunction Bihemivariational Inequalities

In this paper, we consider a new class of hemivariational inequalities, called the trifunction bihemivariational inequality. We suggest and analyze some iterative methods for solving the trifunction bihemivariational inequality using the auxiliary principle technique. The convergence analysis of these iterative methods is also considered under some mild conditions. Several special cases are also discussed. Results proved in this paper can be viewed as a refinement and improvement of the known results.
Keywords: structural analysis; nonconvex optimization.


Introduction
Variational inequality theory, introduced by Stampacchia [31] in 1964, can be viewed as a novel and significant generalization of the variational principles, whose origin can be traced back to Euler, Newton, Lagrange and the Bernoulli brothers. These variational principles have emerged as a powerful tool to investigate and study a wide class of unrelated problems arising in industrial, regional, physical, pure and applied sciences in a unified and general framework. Variational inequalities have been extended and generalized in several directions using novel and new techniques. Panagiotopoulos [28] introduced the hemivariational inequalities by using the concept of the generalized directional derivative of nonconvex and nondifferentiable functions. Closely related are nonconvex sets and nonconvex functions defined with respect to an arbitrary bifunction: such a nonconvex set is called a biconvex set, and such a nonconvex function is called a biconvex function. Noor et al. [19,21,22,23,24,26,27] have studied some basic properties of the biconvex functions. It has been shown that biconvex functions enjoy characterizations analogous to those of convex functions. In particular, it has been shown that the optimality conditions of the differentiable biconvex functions are characterized by a class of variational inequalities, called the bivariational inequalities; see [19,21,22,23,24,26,27] and the references therein.
Variational inequalities and hemivariational inequalities have witnessed an explosive growth in theoretical advances, algorithmic developments and applications across almost all disciplines of engineering, pure and applied sciences. There are several methods for solving variational inequalities and bivariational inequalities. Due to the nature of hemivariational inequalities, projection and resolvent methods cannot be applied to solve them. In recent years, the auxiliary principle technique has been used to suggest and analyze iterative methods for solving variational inequalities and equilibrium problems. Glowinski, Lions and Tremolieres [5] used this technique to study the existence problem for mixed variational inequalities, whereas Noor [8,11,12,13,14] and Zhu et al. [32] have used this approach to suggest and analyze iterative methods for solving various classes of variational inequalities and equilibrium problems. In this paper, we again use the auxiliary principle technique to suggest several new iterative schemes for trifunction bihemivariational inequalities. We also prove that the convergence of these methods requires only pseudomonotonicity or partially relaxed strong monotonicity, which are weaker conditions than monotonicity. As special cases, we obtain new iterative schemes for solving bihemivariational inequalities, variational inequalities and optimization problems. The comparison of these methods with other methods is a subject of future research.
Let H be a real Hilbert space, whose inner product and norm are denoted by ⟨·, ·⟩ and ‖·‖, respectively. Let K be a nonempty set in H.
We now recall some concepts of biconvex sets and biconvex functions, which are mainly due to Noor et al. [21,22,23,24].
Definition 2.1. The set K_β in H is said to be a biconvex set with respect to an arbitrary bifunction β(· − ·), if
u + λβ(v − u) ∈ K_β, for all u, v ∈ K_β, λ ∈ [0, 1].
The biconvex set K_β is also called a β-connected set. If β(v − u) = v − u, then the biconvex set K_β is a convex set, but the converse is not true: one can construct biconvex sets K_β that are clearly not convex.
Consequently, for β(v − u) = v − u the biconvex set reduces to the convex set K; thus K_β ⊂ K. This implies that every convex set is a biconvex set, but the converse is not true.
Definition 2.2. The function F on the biconvex set K_β is said to be strongly biconvex, if there exists a constant ν > 0 such that
F(u + λβ(v − u)) ≤ (1 − λ)F(u) + λF(v) − νλ(1 − λ)‖β(v − u)‖², for all u, v ∈ K_β, λ ∈ [0, 1].
Note that every convex function is biconvex, but the converse is not true.
If λ = 1/2, then the function F satisfies
F(u + β(v − u)/2) ≤ (F(u) + F(v))/2 − (ν/4)‖β(v − u)‖², for all u, v ∈ K_β,
which is called a Jensen (mid-point) strongly biconvex function.
If ν = 0, then Definition 2.2 reduces to:
Definition 2.3. The function F on the biconvex set K_β is said to be biconvex, if
F(u + λβ(v − u)) ≤ (1 − λ)F(u) + λF(v), for all u, v ∈ K_β, λ ∈ [0, 1].
Biconvex functions on an interval can be characterized in a similar way. One can easily show that the following are equivalent: 1. F is a biconvex function.
To derive the main results, we need the following assumption regarding the bifunction β(· − ·).
Let f : H −→ R be a locally Lipschitz continuous function. Let Ω be an open bounded subset of R n . First of all, we recall the following concepts and results from nonsmooth analysis [2].
If β(v − u) = v, then Definition (2.5) reduces to the following concepts which are mainly due to Clarke [2].
The generalized gradient of f at x, denoted ∂f(x), is defined to be the subdifferential of the function f⁰(x; ·) at 0, that is,
∂f(x) = {ξ ∈ H : f⁰(x; v) ≥ ⟨ξ, v⟩, for all v ∈ H}.
If f is convex on K and locally Lipschitz continuous at x ∈ K, then ∂f(x) coincides with the subdifferential of f at x in the sense of convex analysis, and
f⁰(x; v) = max{⟨ξ, v⟩ : ξ ∈ ∂f(x)}.
For a given nonlinear trifunction F(·, ·, ·) : K_β × K_β × K_β → H and a nonlinear continuous operator T : K_β → H, consider the problem of finding u ∈ K_β such that
F(u, Tu, β(v − u)) + ∫_Ω f⁰(u; β(v − u)) dΩ ≥ 0, for all v ∈ K_β, (2.1)
which is called the trifunction bihemivariational inequality. We now discuss some special cases of the trifunction bihemivariational inequality (2.1).
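As a purely numerical illustration of Clarke's generalized directional derivative (not part of the paper's analysis), the following sketch estimates f⁰(x; v) = limsup of (f(y + tv) − f(y))/t as y → x, t ↓ 0, for the convex, nondifferentiable function f(x) = |x|; there ∂f(0) = [−1, 1], so f⁰(0; v) = max{ξv : ξ ∈ [−1, 1]} = |v|.

```python
import numpy as np

def clarke_dd(f, x, v, eps=1e-3, n=200):
    """Numerically estimate Clarke's generalized directional derivative
    f0(x; v) = limsup_{y -> x, t -> 0+} (f(y + t*v) - f(y)) / t
    by maximizing the difference quotient over y near x and small t > 0."""
    ys = x + np.linspace(-eps, eps, n)
    ts = np.linspace(1e-6, eps, n)
    Y, T = np.meshgrid(ys, ts)
    return np.max((f(Y + T * v) - f(Y)) / T)

# f(x) = |x| is convex and locally Lipschitz but not differentiable at 0
print(clarke_dd(np.abs, 0.0, 1.0))   # approximately 1.0 = max over xi in [-1,1] of xi*1
print(clarke_dd(np.abs, 0.0, -1.0))  # approximately 1.0 = max over xi in [-1,1] of xi*(-1)
```

For convex f, this recovers the classical one-sided directional derivative's upper envelope, in line with the coincidence of ∂f(x) with the convex subdifferential noted above.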
which is called the bifunction bihemivariational inequality and appears to be a new one.
If F(u, Tu, β(v − u)) = ⟨Au, β(v − u)⟩, where A is a nonlinear operator, then problem (2.1) is equivalent to finding u ∈ K_β such that
⟨Au, β(v − u)⟩ + ∫_Ω f⁰(u; β(v − u)) dΩ ≥ 0, for all v ∈ K_β,
which is known as the bihemivariational inequality.
which is known as the hemivariational inequality, introduced and studied by Panagiotopoulos [28,29] in order to formulate variational principles connected with energy functions which are neither convex nor smooth. It has been shown that the technique of hemivariational inequalities is very efficient for describing the behaviour of complex structures arising in engineering and industrial sciences.
(IV). If f is a differentiable convex function, then problem (2.1) is equivalent to finding u ∈ K_β such that which is known as the mildly nonlinear trifunction bihemivariational inequality and appears to be new.
which is called the trifunction bivariational inequality.
In brief, for suitable and appropriate choices of the trifunction, one can obtain several classes of bihemivariational and bivariational inequalities. This clearly shows that problem (2.1) is more general and flexible and includes the previous ones as special cases.
Definition 2.7. The trifunction F(·, ·, ·) and the operator T are said to be: (c) partially relaxed strongly jointly bimonotone, if there exists a constant γ > 0 such that Note that for z = u, partially relaxed strong joint bimonotonicity reduces to joint bimonotonicity. This shows that partially relaxed strong joint bimonotonicity implies joint bimonotonicity, but the converse is not true.
Definition 2.8. The function ∫_Ω f⁰(u; β(v − u)) dΩ is said to be partially relaxed strongly bimonotone, if there exists a constant α > 0 such that Note that for z = v, partially relaxed strong bimonotonicity reduces to relaxed strong bimonotonicity.
For the reader's convenience, we recall some basic properties of Bregman convex functions [2]. For a strongly convex function f, we define the Bregman distance function as
B(u, v) = f(u) − f(v) − ⟨f′(v), u − v⟩.
It is important to emphasize that different choices of the function f give different Bregman distance functions. We give the following important examples of practically important functions f and their corresponding Bregman distances.
Examples. (i) If f(u) = Σᵢ uᵢ log uᵢ, which is known as the Shannon entropy, then its corresponding Bregman distance is given as
B(u, v) = Σᵢ uᵢ log(uᵢ/vᵢ) − Σᵢ (uᵢ − vᵢ).
This distance is called the Kullback–Leibler (KL) distance and has become a very important tool in several areas of applied mathematics such as machine learning.
(ii) If f(u) = −Σᵢ log uᵢ, which is called the Burg entropy, then its corresponding Bregman distance is given as
B(u, v) = Σᵢ (uᵢ/vᵢ − log(uᵢ/vᵢ) − 1).
This is called the Itakura–Saito (IS) distance, which is very important in information theory, data analysis and machine learning.
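The two closed forms above can be checked directly from the definition B(u, v) = f(u) − f(v) − ⟨f′(v), u − v⟩; the following sketch (an illustration, not part of the paper) does so numerically for sample vectors:

```python
import numpy as np

def bregman(f, grad_f, u, v):
    """Bregman distance B(u, v) = f(u) - f(v) - <f'(v), u - v>."""
    return f(u) - f(v) - np.dot(grad_f(v), u - v)

# Shannon entropy f(u) = sum_i u_i log u_i  ->  Kullback-Leibler distance
shannon = lambda u: np.sum(u * np.log(u))
shannon_grad = lambda u: np.log(u) + 1.0

# Burg entropy f(u) = -sum_i log u_i  ->  Itakura-Saito distance
burg = lambda u: -np.sum(np.log(u))
burg_grad = lambda u: -1.0 / u

u = np.array([0.2, 0.5, 0.3])
v = np.array([0.3, 0.4, 0.3])

kl = bregman(shannon, shannon_grad, u, v)   # equals sum u_i log(u_i/v_i) - sum (u_i - v_i)
is_d = bregman(burg, burg_grad, u, v)       # equals sum (u_i/v_i - log(u_i/v_i) - 1)
```

Both distances are nonnegative and vanish at u = v, as expected from the strict convexity of the generating entropies.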
It is a challenging problem to explore the applications of Bregman distance function for other types of nonconvex functions such as biconvex, k-convex functions, preinvex functions and harmonic functions.
For a given u ∈ K_β satisfying (2.1), consider the auxiliary problem of finding w ∈ K_β such that where ρ > 0 is a constant and E′(u) is the differential of a strongly biconvex function E(u) at u ∈ K_β.
We note that, if w = u, then clearly w is a solution of problem (2.1). This observation enables us to suggest and analyze the following iterative method for solving (2.1).
Algorithm 3.1. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme Algorithm 3.1 is called the proximal method for solving problem (2.1). In passing, we remark that the proximal point method was suggested by Martinet [6] in the context of convex programming problems as a regularization technique. For recent developments and applications of proximal point algorithms, see [11,12,13,14,15,19,32] and the references therein.
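Each step of a proximal method requires solving a regularized subproblem. As a toy illustration (the classical convex special case, not the trifunction setting), for f(x) = |x| the subproblem has the closed-form soft-thresholding solution, and iterating it drives the iterates to the minimizer:

```python
def prox_abs(x, rho):
    """Proximal map of f(w) = |w|: argmin_w |w| + (w - x)^2 / (2*rho),
    i.e. the soft-thresholding operator."""
    if x > rho:
        return x - rho
    if x < -rho:
        return x + rho
    return 0.0

# Proximal point iteration u_{n+1} = prox_{rho f}(u_n)
u, rho = 5.0, 1.0
for _ in range(10):
    u = prox_abs(u, rho)
# u has reached 0.0, the minimizer of |x|
```

In the hemivariational setting the subproblem generally has no closed form, which is exactly the drawback addressed later by the predictor-corrector schemes.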
Algorithm 3.2. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme which is called the proximal point method for solving bihemivariational inequalities (2.3) and appears to be new.
If f (x, u) = 0, then Algorithm 3.1 collapses to: Algorithm 3.4. For a given u 0 ∈ H, compute the approximate solution u n+1 by the iterative scheme In brief, for suitable and appropriate choice of the operators and the spaces, one can obtain a number of known and new algorithms for solving variational-hemivariational inequalities and related problems.
We now study the convergence analysis of Algorithm 3.1, which is the main motivation of our next result.
If u_{n+1} = u_n, then clearly u_n is a solution of the trifunction bihemivariational inequality (2.1). Otherwise, it follows that B(u, u_n) − B(u, u_{n+1}) is nonnegative, and we must have
lim_{n→∞} ‖u_{n+1} − u_n‖ = 0.
Now using the technique of Zhu and Marcotte [20], it can be shown that the entire sequence {u_n} converges to the cluster point u satisfying the trifunction bihemivariational inequality (2.1).
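The convergence argument follows the standard Bregman-descent pattern; schematically (a sketch only, with u a fixed solution and B the Bregman distance associated with E):

```latex
\sum_{n=0}^{N}\bigl[B(u,u_n)-B(u,u_{n+1})\bigr]
  \;=\; B(u,u_0)-B(u,u_{N+1}) \;\le\; B(u,u_0),
```

so the nonnegative differences are summable and B(u, u_n) − B(u, u_{n+1}) → 0; the strong biconvexity of E then yields that the successive iterates approach each other, which is the key step before invoking the technique of Zhu and Marcotte [20].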
It is well-known that to implement the proximal point methods, one has to find the approximate solution implicitly, which is itself a difficult problem. To overcome this drawback, we now consider another method for solving (2.1) using the auxiliary principle technique.
For a given u ∈ K_β satisfying (2.1), find w ∈ K_β such that where E′(u) is the differential of a strongly biconvex function E(u) at u ∈ K_β.
Note that problems (3.2) and (3.8) are quite different. It is clear that for w = u, w is a solution of (2.1). This fact allows us to suggest and analyze another iterative method for solving the trifunction bihemivariational inequality (2.1).
Algorithm 3.5. For a given u 0 ∈ H, compute the approximate solution u n+1 by the iterative scheme Note that for F (u, T u, β(v − u)) = W (u, β(v − u)), Algorithm 3.5 reduces to: Algorithm 3.6. For a given u 0 ∈ H, compute the approximate solution u n+1 by the iterative scheme which is called the predictor-corrector method for solving the bifunction bihemivariational inequality (2.3).
For F(u, Tu, β(v − u)) = ⟨Au, β(v − u)⟩, Algorithm 3.5 collapses to a method for solving the bihemivariational inequalities (2.2). Algorithm 3.7. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme which is called the predictor-corrector method for solving the bihemivariational inequalities (2.2).
Algorithm 3.8. For a given u 0 ∈ H, compute the approximate solution u n+1 by the iterative scheme Similarly for suitable and appropriate choice of the operators and the spaces, one can obtain various known and new algorithms for solving hemivariational and variational inequalities.
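To give a concrete feel for the predictor-corrector structure, consider the classical variational-inequality special case over a convex set K, where the projection is available in closed form; there the scheme becomes the well-known extragradient method (an illustration only, not the trifunction setting):

```python
def project(u, lo=0.0, hi=2.0):
    """Projection onto the convex set K = [0, 2]."""
    return min(max(u, lo), hi)

T = lambda u: u - 1.0   # a monotone operator; the VI solution is u* = 1

u, rho = 2.0, 0.5
for _ in range(100):
    w = project(u - rho * T(u))   # predictor step
    u = project(u - rho * T(w))   # corrector step
# u is now essentially the solution u* = 1
```

The predictor step produces a trial point w; the corrector step re-evaluates the operator at w before updating, which is what relaxes the monotonicity requirements in the convergence analysis.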
We now consider the convergence analysis of Algorithm 3.5 using essentially the technique of Theorem 3.1. For the sake of completeness and to convey an idea of the technique, we sketch the main points.
Now using the technique of Zhu and Marcotte [20], it can be shown that the entire sequence {u n } converges to the cluster point u satisfying the trifunction bihemivariational inequality (2.1).
For a given u ∈ K_β satisfying (2.1), find w ∈ K_β such that where ρ > 0 is a constant. Problem (3.15) is known as the auxiliary trifunction bihemivariational inequality. We note that if w = u, then clearly w is a solution of (2.1). This observation enables us to suggest and analyze the following iterative method for solving (2.1). Algorithm 3.9. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme (3.17), where ρ > 0 and η > 0 are constants. Algorithm 3.9 is called the predictor-corrector method for solving the trifunction bihemivariational inequality (2.1).
We now study the convergence analysis of Algorithm 3.9.
which implies that Let û be a cluster point of {u_n}, and let the subsequence {u_{n_j}} of the sequence {u_n} converge to û ∈ H. Replacing w_n by u_{n_j} in (3.15) and (3.16), taking the limit n_j → ∞ and using (3.29), we have which implies that û solves the trifunction bihemivariational inequality (2.1), and Thus it follows from the above inequality that the sequence {u_n} has exactly one cluster point û and lim_{n→∞} u_n = û, the required result.
In recent years, inertial proximal methods [1] have been suggested and analyzed for maximal monotone operators associated with time discretizations of differential equations, whereas Noor [12] has used the auxiliary principle technique to suggest an inertial method for variational inequalities, the convergence of which requires only pseudomonotonicity, a weaker condition than monotonicity. This clearly improves the convergence criteria of the inertial proximal method. We again use the auxiliary principle technique to suggest and analyze an inertial proximal method for solving the trifunction bihemivariational inequality (2.1).
For a given u ∈ K β satisfying (2.1), consider the problem of finding w ∈ K β such that where ρ > 0 and α > 0 are constants.
It is clear that, if w = u, then u is a solution of (2.1). This fact allows us to suggest and analyze an iterative method for solving the trifunction bihemivariational inequality (2.1).
Algorithm 3.10. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme (3.31). For α_n = 0, Algorithm 3.10 reduces to: Algorithm 3.11. For a given u_0 ∈ H, compute the approximate solution u_{n+1} by the iterative scheme which is known as the proximal method for solving the trifunction bihemivariational inequality (2.1).
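The inertial step extrapolates from the two most recent iterates before applying the proximal map. As a toy illustration (again the classical convex special case, with hypothetical parameter values, not the trifunction setting), take f(x) = x²/2, whose proximal map is y ↦ y/(1 + ρ):

```python
def prox_quad(y, rho):
    """Proximal map of f(w) = w^2/2: argmin_w w^2/2 + (w - y)^2 / (2*rho)."""
    return y / (1.0 + rho)

rho, alpha = 1.0, 0.3
u_prev = u = 4.0
for _ in range(100):
    y = u + alpha * (u - u_prev)    # inertial (extrapolation) step
    u_prev, u = u, prox_quad(y, rho)
# u is now essentially the minimizer 0
```

The extrapolation term α(u_n − u_{n−1}) adds momentum: setting α = 0 recovers the plain proximal iteration, mirroring the reduction of Algorithm 3.10 to Algorithm 3.11.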
In a similar way, for F(u, Tu, v − u) = ⟨Au, v − u⟩, one can obtain a number of new and known proximal methods from Algorithm 3.11 for solving bihemivariational inequalities (2.2) and their special cases. This shows that the new methods suggested in this paper are unifying and more general than the previous ones.
For the convergence analysis of Algorithm 3.11, we need the following result.
Proof. Let û ∈ K_β be a solution of (2.1). First, we consider the case α_n = 0. Using the technique of Theorem 3.3, we can prove that lim_{n→∞} u_n = û.
Repeating the arguments as in Theorem 3.3, one can easily show that lim_{n→∞} u_n = û, the required result.

Conclusion
In this paper, we have introduced and studied the trifunction bihemivariational inequalities. Several special cases are discussed as applications of the trifunction bihemivariational inequalities. The auxiliary principle technique is used to suggest several implicit and explicit iterative methods for solving the trifunction bihemivariational inequalities. Convergence criteria of the proposed methods are discussed under suitable mild conditions. Results obtained in this paper continue to hold for the special cases. Comparison of the proposed methods with other methods needs further effort. The ideas and techniques of this paper may stimulate further research in these dynamic fields.