Applied Mathematics and Optimization, Vol. 62, No. 1, 27-46, 2010
Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization
The regularized Newton method (RNM) is one of the efficient solution methods for unconstrained convex optimization. It is well known that the RNM has good convergence properties compared with the steepest descent method and the pure Newton's method. For example, Li, Fukushima, Qi and Yamashita showed that the RNM has a quadratic rate of convergence under the local error bound condition. Recently, Polyak showed that the global complexity bound of the RNM, which is the first iteration k such that ‖∇f(x_k)‖ ≤ ε, is O(ε⁻⁴), where f is the objective function and ε is a given positive constant. In this paper, we consider an RNM extended to unconstrained "nonconvex" optimization. We show that the extended RNM (E-RNM) has the following properties. (a) The E-RNM has a global convergence property under appropriate conditions. (b) The global complexity bound of the E-RNM is O(ε⁻²) if ∇²f is Lipschitz continuous on a certain compact set. (c) The E-RNM has a superlinear rate of convergence under the local error bound condition.
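To make the iteration concrete, the following Python sketch shows one standard form of a regularized Newton step for nonconvex problems: it solves (∇²f(x_k) + μ_k I) d = -∇f(x_k), where μ_k combines a gradient-norm term with a shift that offsets negative curvature so the regularized Hessian stays positive definite. This is a minimal illustration under assumed parameter choices; the regularization rule, the constants c and delta, and the name regularized_newton are expository assumptions, not the exact scheme analyzed in the paper.

import numpy as np

def regularized_newton(grad, hess, x0, eps=1e-6, c=1.0, delta=1.0,
                       max_iter=1000):
    # Iterate x <- x - (H + mu*I)^{-1} g until ||g|| <= eps.
    # For nonconvex f, mu also absorbs the most negative Hessian
    # eigenvalue, one common way to keep H + mu*I positive definite.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:    # complexity bounds count the
            return x, k                  # iterations to reach this test
        H = hess(x)
        lam_min = np.linalg.eigvalsh(H)[0]   # smallest eigenvalue of H
        mu = c * np.linalg.norm(g) ** delta + max(0.0, -lam_min)
        x = x - np.linalg.solve(H + mu * np.eye(x.size), g)
    return x, max_iter

# Example on the nonconvex function f(x) = x1^4 - 2*x1^2 + x2^2,
# which has stationary points at x1 in {-1, 0, 1}, x2 = 0.
grad = lambda x: np.array([4*x[0]**3 - 4*x[0], 2*x[1]])
hess = lambda x: np.array([[12*x[0]**2 - 4, 0.0], [0.0, 2.0]])
x_star, iters = regularized_newton(grad, hess, [2.0, 1.0])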
Keywords: Regularized Newton methods; Global convergence; Global complexity bound; Local error bound; Superlinear convergence