A New Conjugate Gradient Method with Sufficient Descent Property

In this paper, by linearly combining the numerator and denominator terms of the Dai-Liao (DL) and Bamigbola-Ali-Nwaeze (BAN) conjugate gradient methods (CGMs), a general form of the DL-BAN method is proposed. From this general form, a new hybrid CGM, which is shown to possess the sufficient descent property, is generated. A numerical experiment was carried out on the new CGM in comparison with four existing CGMs, using a set of large-scale unconstrained optimization problems. The results showed a superior performance of the new method over the majority of the existing methods.


Introduction
No investor wants to go for an investment without returns or with high risk; hence the need for decision making. Optimization is central to any problem involving decision making, and such problems arise in Engineering, Economics, Science, etc. It entails choosing the best out of various alternatives (Chong and Zak [3]). A way to handle this kind of problem involves solving an unconstrained optimization problem of the form:

min f(x), x ∈ R^n.  (1)

Problems of the form (1) arise in many theoretical fields because most optimization problems can be reduced to an unconstrained optimization problem. CGMs generate a sequence of iterates by

x_{k+1} = x_k + α_k d_k,  (2)

where x_k is the kth solution iterate to (1), α_k > 0 denotes the step size, usually obtained by a line search, and d_k, given by

d_k = -g_k + β_k d_{k-1},  d_0 = -g_0,  (3)

is the search direction; g_k = ∇f(x_k) is the gradient and β_k is a scalar known as the conjugate update parameter. Different choices of β_k have resulted in different CGMs. Some well-known classical CGMs, developed by Hestenes and Stiefel [9], Fletcher and Reeves [7], Fletcher [8], Dai and Liao [4] and Bamigbola et al. [2], are:

β_k^HS = g_k^T y_{k-1} / (d_{k-1}^T y_{k-1}),  β_k^FR = ||g_k||^2 / ||g_{k-1}||^2,  β_k^CD = -||g_k||^2 / (d_{k-1}^T g_{k-1}),  β_k^DL = g_k^T (y_{k-1} - t s_{k-1}) / (d_{k-1}^T y_{k-1}), t ≥ 0,

with the BAN update parameter as given in [2]. In these classical methods, y_{k-1} = g_k - g_{k-1}, s_{k-1} = x_k - x_{k-1}, and ||·|| stands for the Euclidean norm. In any CGM, the determination of the search direction d_k and the step size α_k is very important. A careful choice of the line search strategy is needed to obtain a descent direction (Nocedal [13]). Basically, two types of line search are used in computing α_k, namely the exact and inexact line search rules. By the exact line search, α_k is computed such that:

f(x_k + α_k d_k) = min_{α > 0} f(x_k + α d_k).

This approach is expensive in terms of function and gradient evaluations. The limitation of the exact line search led researchers to the use of inexact line searches, where α_k is computed numerically by ensuring a reasonable reduction in the value of the objective function at minimal cost. One of the most popular inexact line searches is the strong Wolfe line search, given by:

f(x_k + α_k d_k) ≤ f(x_k) + δ α_k g_k^T d_k  and  |g(x_k + α_k d_k)^T d_k| ≤ σ |g_k^T d_k|,

with 0 < δ < σ < 1.
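The iteration (2)-(3) can be sketched in code. The following is a minimal illustrative sketch, not the paper's MATLAB implementation: it runs a Fletcher-Reeves CG iteration on a hypothetical two-variable quadratic test function, using a simple backtracking Armijo line search as a stand-in for the strong Wolfe rule.

```python
import math

def f(x):
    # Hypothetical test objective: f(x, y) = 0.5*x^2 + 5*y^2
    return 0.5 * x[0] ** 2 + 5.0 * x[1] ** 2

def grad(x):
    # Gradient of the test objective
    return [x[0], 10.0 * x[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def fr_cg(x, tol=1e-6, max_iter=500):
    """Fletcher-Reeves CG: x_{k+1} = x_k + alpha_k d_k, d_k = -g_k + beta_k d_{k-1}."""
    g = grad(x)
    d = [-c for c in g]                       # d_0 = -g_0
    for _ in range(max_iter):
        gnorm = math.sqrt(dot(g, g))
        if gnorm <= tol:                      # stopping rule ||g_k|| <= tol
            break
        gd = dot(g, d)
        if gd >= 0:                           # safeguard: restart if not a descent direction
            d = [-c for c in g]
            gd = -gnorm ** 2
        alpha = 1.0                           # backtracking Armijo line search
        while f([xi + alpha * di for xi, di in zip(x, d)]) > f(x) + 1e-4 * alpha * gd:
            alpha *= 0.5
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        beta = dot(g_new, g_new) / dot(g, g)  # Fletcher-Reeves update parameter
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x, math.sqrt(dot(g, g))
```

Starting from, say, x = [10.0, 1.0], the iterates converge to the minimizer of the test function at the origin.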
It is required that the search direction d_k satisfy

g_k^T d_k < 0,

which guarantees that d_k is a descent direction of f(x) at x_k.
A class of CGMs known as hybrid CGMs, which are modifications of the classical CGMs, has been proposed by various authors. This is due to the role hybridization plays in achieving better computational performance while retaining the strong global convergence of the methods involved (Li and Zhao [10]). By taking into consideration the convex combination of the numerators and denominators of the update parameters of the Fletcher-Reeves and Hestenes-Stiefel methods, Nazareth [12] proposed a two-parameter family of CGMs. Dai and Liao [4] extended this by adding one more parameter, where their three-parameter family included six standard CGMs. By forming a linear combination of the update parameters of the Dai-Yuan and Hestenes-Stiefel methods and that of the Fletcher-Reeves and Polak-Ribiere-Polyak methods, Xu and Kong [15] proposed two new hybrid CGMs, with the aid of the generalized Wolfe line search. Recently, Osinuga and Olofin [14] presented an extended hybrid CGM which was proved to be globally convergent with an Armijo-type line search, while Djordjevic [5] proposed another new CGM by a convex combination of the update parameters of the Liu-Storey and Fletcher-Reeves methods.
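As an illustration of the hybridization idea discussed above (and not of the specific families cited), the sketch below forms a convex combination of the Fletcher-Reeves and Hestenes-Stiefel update parameters; the function name and the weight theta are hypothetical.

```python
def beta_hybrid(g_new, g_old, d_old, theta):
    """Convex combination theta*beta_FR + (1 - theta)*beta_HS, 0 <= theta <= 1.

    Illustrative sketch only -- not the DL-BAN family proposed in this paper.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    y = [a - b for a, b in zip(g_new, g_old)]          # y_{k-1} = g_k - g_{k-1}
    beta_fr = dot(g_new, g_new) / dot(g_old, g_old)    # Fletcher-Reeves
    beta_hs = dot(g_new, y) / dot(d_old, y)            # Hestenes-Stiefel
    return theta * beta_fr + (1.0 - theta) * beta_hs
```

Setting theta = 1 recovers the Fletcher-Reeves parameter, theta = 0 the Hestenes-Stiefel one; intermediate values interpolate between the two.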
Motivated by the possibility of generating many methods by varying the coefficients in a linear combination of the numerator and denominator terms of the DL and BAN update parameters, this paper presents a new CGM.

Method
The sufficient descent analysis of the new update parameter β_k^NM shall be carried out based on the following lemma.

Lemma 3.1. In the conjugate gradient method,

Proof. By (3) and (11), let the second term be expressed in the required form; applying Lemma 3.1, we have the stated bound. Therefore, the β_k^NM method satisfies (13) with c = 3/5.
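Since the formula for β_k^NM is not reproduced in this excerpt, the sketch below only illustrates how the sufficient descent condition g_k^T d_k ≤ -c ||g_k||^2 can be checked numerically for any candidate direction; the function name and test values are hypothetical.

```python
def satisfies_sufficient_descent(g, d, c):
    # Checks the sufficient descent condition g^T d <= -c * ||g||^2
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return dot(g, d) <= -c * dot(g, g)
```

For example, the steepest descent direction d = -g satisfies the condition with c = 1; the analysis above establishes c = 3/5 for the new method.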

Numerical Consideration
In this section, a report of the numerical experiment carried out on the new CGM, using a set of large-scale unconstrained minimization problems, taken from Andrei [1], is presented.

Computational details
A total of 27 unconstrained optimization problems, each of dimensions 5000 and 10000, were solved using the strong Wolfe line search. The iterations were terminated when ||g_k|| ≤ 10^-6, and a failure was declared if this condition was not satisfied after 2000 iterations. The nonlinear conjugate gradient algorithm (CGA) was written in MATLAB and run on a PC with a 2.16 GHz processor, 4 GB RAM and the Windows 10 operating system.

Presentation of numerical results
The numerical results obtained for the new method in comparison with four existing CGMs are presented in the tables. The performance profile of Dolan and Moré [6] was adopted to compare the new method with the four existing CGMs. For each method, the fraction P(τ) of the problems for which the method is within a factor τ of the best time is plotted, as shown in Figures 1 and 2. The figures indicate that, based on the number of iterations and the CPU time, the NM method is next in performance after the DL method, with the CD method as the least performer. This shows that the new method is ranked second out of the five methods, thus competing favourably with the existing ones.
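The Dolan-Moré profile used above can be computed as follows. This is a generic sketch with made-up timings, not the paper's data; the function name is hypothetical.

```python
def performance_profiles(times, taus):
    """Dolan-More performance profiles.

    times[p][s]: cost (e.g. CPU time or iterations) of solver s on problem p;
    use float("inf") for a failure. Assumes each problem is solved by some solver.

    Returns profiles[s][i] = P(tau_i): the fraction of problems on which solver s
    is within a factor tau_i of the best solver, i.e. t_{p,s} / min_s t_{p,s} <= tau_i.
    """
    n_prob, n_solv = len(times), len(times[0])
    # Performance ratio of each solver on each problem, relative to the best
    ratios = [[t / min(row) for t in row] for row in times]
    return [[sum(1 for p in range(n_prob) if ratios[p][s] <= tau) / n_prob
             for tau in taus]
            for s in range(n_solv)]
```

P(1) is the fraction of problems on which a solver is the fastest, and P(τ) for large τ approaches the fraction of problems the solver eventually solves, which is why the curves in Figures 1 and 2 are read both at the left edge and in the tail.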

Conclusion
In this paper, a general form of the DL-BAN CGM was proposed by forming a linear combination of the numerator and denominator terms of two existing classical CGMs. This approach is capable of producing many new methods, obtained through different choices of the coefficients in the DL-BAN update parameter. A new hybrid CGM has been generated from this general form and shown to satisfy the sufficient descent condition, which is vital to the global convergence of the method.
A numerical test of the new method against four existing classical CGMs confirmed that the new CGM is capable of superior computational performance over the majority of the existing methods, with respect to both the number of iterations and the CPU time. This result indicates that other computationally efficient CGMs can probably be generated from (10). Hence, there is a need to explore the DL-BAN family for the best possible classical CGM.