Linear convergence and error bounds are equivalent for optimization algorithms


Speaker

Juan C. Vera Lizcano (Tilburg University)

Abstract

Several optimization algorithms, including the alternating direction method of multipliers (ADMM) and the Douglas-Rachford algorithm (DR), can be cast as the fixed-point iteration (FPI) of an averaged operator. In general, however, these iterations converge slowly, at a merely sublinear rate. We show that linear convergence of the FPI of an averaged operator is equivalent to an error-bound condition on the operator. This equivalence recovers several existing results on the linear convergence of ADMM and DR under weaker assumptions. As a byproduct, we bound the convergence rate of DR applied to linear and quadratic optimization problems.
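For readers less familiar with the terminology, the following standard definitions (supplied here for context, not taken from the talk itself; the constants α, c, ρ, and κ are generic) make the equivalence statement precise. An operator T is averaged if T = (1-α)I + αR for some α ∈ (0,1) and some nonexpansive R, and its FPI is

\[
x^{k+1} = T x^k .
\]

Linear convergence means there exist $c > 0$ and $\rho \in (0,1)$ such that

\[
\| x^k - x^\star \| \le c \, \rho^k \quad \text{for some } x^\star \in \operatorname{Fix} T ,
\]

while an error-bound condition on T asks for a constant $\kappa > 0$ with

\[
\operatorname{dist}(x, \operatorname{Fix} T) \le \kappa \, \| x - Tx \| \quad \text{for all } x \text{ in a relevant region.}
\]

As a concrete illustration of the FPI framing only (a minimal sketch, not the speaker's analysis; the toy problem, data b, and step size gamma are all assumptions made for this example), here is DR applied to a small quadratic problem in Python:

```python
import numpy as np

# Toy problem: minimize 0.5*||x - b||^2 subject to x >= 0,
# split as f(x) = 0.5*||x - b||^2 plus g = indicator of {x >= 0}.
# The Douglas-Rachford operator T below is firmly nonexpansive
# (1/2-averaged), so z_{k+1} = T(z_k) is an FPI of an averaged operator.

rng = np.random.default_rng(0)
b = rng.standard_normal(50)   # arbitrary toy data
gamma = 1.0                   # arbitrary step-size parameter

def prox_f(z):
    # Closed-form prox of 0.5*||x - b||^2 with parameter gamma.
    return (z + gamma * b) / (1.0 + gamma)

def prox_g(z):
    # Prox of the indicator of {x >= 0}: projection onto the orthant.
    return np.maximum(z, 0.0)

z = np.zeros_like(b)
for k in range(60):
    x = prox_f(z)
    z_new = z + prox_g(2.0 * x - z) - x      # one DR step: z_{k+1} = T(z_k)
    residual = np.linalg.norm(z_new - z)     # fixed-point residual ||T(z_k) - z_k||
    z = z_new
    if k % 10 == 0:
        print(f"iter {k:3d}   residual {residual:.3e}")
```

On this strongly convex toy problem the fixed-point residual decays linearly; the talk's result characterizes exactly when such linear decay occurs for general averaged operators.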

About Juan

Juan C. Vera Lizcano is a professor of optimization in the Department of Econometrics and Operations Research at Tilburg University. His expertise includes polynomial and conic optimization and their applications to combinatorial optimization; the relation between error bounds, condition numbers, and the convergence of optimization algorithms; and sparse methods for machine learning.