A FREE REDUCED GRADIENT SCHEME
MOUNIR EL MAGHRI *
Department of Mathematics, Faculty of Sciences, Chouaïb Doukkali University, B.P. 20, El Jadida, Morocco.
*Author to whom correspondence should be addressed.
A free variant of the Wolfe reduced gradient method for minimizing linearly constrained nonlinear functionals is presented. The descent directions obtained are not explicitly expressed in terms of reduced gradients; they depend only on arbitrary parameters that may be controlled by a decision maker. This scheme extends both of Wolfe's schemes (the continuous and the discontinuous one). The resulting algorithm is proved to be globally convergent under the standard Armijo line search condition. Experimental results on test problems, including large-scale instances, show a clear performance advantage for the new scheme using only simple variations of its parameters. They also reveal its ability to explore several KKT stationary points of nonconvex functions.
Keywords: Linear programming, nonlinear optimization, descent directions, reduced gradient method, convergence acceleration
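To fix ideas, the following is a minimal sketch of the *classical* Wolfe reduced-gradient step with Armijo backtracking for a problem min f(x) subject to Ax = b, i.e. the baseline that the free scheme above generalizes. It is not the paper's parameterized variant; the function names, the fixed basis argument, and the Armijo constants are illustrative assumptions.

```python
import numpy as np

def reduced_gradient_step(f, grad, A, x, basis, alpha0=1.0, beta=0.5, c=1e-4):
    """One classical Wolfe reduced-gradient step for min f(x) s.t. A x = b.

    `basis` indexes m linearly independent columns of A (the basic variables).
    Illustrative sketch only; not the paper's free (parameterized) variant.
    """
    m, n = A.shape
    nonbasis = [j for j in range(n) if j not in basis]
    B = A[:, basis]                    # basic columns (invertible m x m)
    N = A[:, nonbasis]                 # nonbasic columns
    g = grad(x)
    # Reduced gradient: r = g_N - N^T B^{-T} g_B
    r = g[nonbasis] - N.T @ np.linalg.solve(B.T, g[basis])
    # Descent direction: move nonbasic variables along -r and
    # adjust the basic ones so that A d = 0 (feasibility is preserved).
    d = np.zeros(n)
    d[nonbasis] = -r
    d[basis] = -np.linalg.solve(B, N @ d[nonbasis])
    # Armijo backtracking line search: f(x + t d) <= f(x) + c t g.d
    t, fx, slope = alpha0, f(x), g @ d
    while f(x + t * d) > fx + c * t * slope:
        t *= beta
    return x + t * d
```

Since g·d = -||r||² < 0 whenever r is nonzero, d is a genuine descent direction and the backtracking loop terminates for smooth f; the free scheme of the paper replaces this explicit reduced-gradient construction of d by directions governed by user-controlled parameters.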