A Randomized Nonlinear Rescaling Method in Large-Scale Constrained Convex Optimization

24 Mar 2020  ·  Bo Wei, William B. Haskell, Sixiang Zhao

We propose a new randomized algorithm for solving, with high probability, convex optimization problems that have a large number of constraints. Existing methods such as interior-point and Newton-type algorithms are hard to apply to such problems because of the expensive computation and storage they require for Hessians and matrix inversions. Our algorithm is based on nonlinear rescaling (NLR), a primal-dual-type algorithm by Griva and Polyak [Math. Program., 106(2):237–259, 2006]. NLR introduces an equivalent problem through a transformation of the constraint functions, minimizes the corresponding augmented Lagrangian for given dual variables, and then uses this minimizer to update the dual variables for the next iteration. The primal update at each iteration is the solution of an unconstrained finite-sum minimization problem whose terms are weighted by the current dual variables; such problems are especially well suited to randomized first-order algorithms, which we use for the primal updates. In particular, we use the scaled dual variables as the sampling distribution for each primal update, and we show that this distribution is optimal among all probability distributions. We conclude by demonstrating the favorable numerical performance of our algorithm.
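As a rough illustration of the loop the abstract describes, here is a minimal Python sketch of an NLR outer iteration with a randomized first-order primal solver, for a problem of the form min f(x) subject to c_i(x) >= 0. The choice of transform psi (a modified log with psi(0)=0, psi'(0)=1), the penalty parameter k, the step size eta, the iteration counts, and the callback signatures f_grad, c, c_grad are all illustrative assumptions made here for the sketch, not the authors' exact scheme; the paper gives the precise transformation and convergence conditions.

```python
import numpy as np

# Sketch of nonlinear rescaling (NLR) with a randomized primal solver.
# Problem: min f(x)  s.t.  c_i(x) >= 0, i = 1..m.
# Augmented Lagrangian: L(x, lam, k) = f(x) - (1/k) * sum_i lam_i * psi(k * c_i(x)).
# All parameter values and function signatures here are illustrative assumptions.

def psi(t):
    # Concave transform with psi(0) = 0, psi'(0) = 1 (valid for t > -1).
    return np.log(1.0 + t)

def dpsi(t):
    # Derivative psi'(t).
    return 1.0 / (1.0 + t)

def nlr(f_grad, c, c_grad, x0, m, k=10.0, outer=50, inner=2000, eta=1e-2,
        rng=np.random.default_rng(0)):
    """f_grad(x): gradient of f; c(x, i): value of c_i; c_grad(x, i): gradient of c_i."""
    x, lam = x0.copy(), np.ones(m)  # primal iterate and dual variables
    for _ in range(outer):
        # Primal update: stochastic gradient steps on L(x, lam, k),
        # sampling constraint i with probability proportional to lam_i
        # (the scaled dual variables, as in the abstract).
        p = lam / lam.sum()
        for _ in range(inner):
            i = rng.choice(m, p=p)
            # Unbiased estimator of grad_x L: the 1/k and chain-rule k cancel,
            # and dividing by p[i] corrects for the sampling probability.
            g = f_grad(x) - (lam[i] / p[i]) * dpsi(k * c(x, i)) * c_grad(x, i)
            x -= eta * g
        # Dual update: lam_i <- lam_i * psi'(k * c_i(x)) at the new primal point.
        lam = lam * dpsi(k * np.array([c(x, j) for j in range(m)]))
    return x, lam
```

The sampling distribution p is recomputed once per outer iteration from the current duals; intuitively, constraints with large multipliers (those near-active at the optimum) are sampled more often, which is the distribution the paper argues is optimal.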


Categories

Optimization and Control