Operator Splitting Methods for Convex Optimization: Analysis and Implementation

G. Banjac
PhD thesis, University of Oxford, 2018.

@phdthesis{banjac_thesis,
  author = {G. Banjac},
  title = {Operator Splitting Methods for Convex Optimization: Analysis and Implementation},
  school = {University of Oxford},
  year = {2018},
  url = {https://ora.ox.ac.uk/objects/uuid:17ac73af-9fdf-4cf6-a946-3048da3fc9c2}
}

Convex optimization problems are a class of mathematical problems which arise in numerous applications. Although interior-point methods can in principle solve these problems efficiently, they may become intractable for large-scale problems or unsuitable for real-time embedded applications.

Iterations of operator splitting methods are relatively simple and computationally inexpensive, which makes them suitable for these applications. However, their known limitations include slow asymptotic convergence, sensitivity to ill-conditioning, and an inability to detect infeasible problems.

The aim of this thesis is to better understand operator splitting methods and to develop reliable software tools for convex optimization. The main analytical tool in our investigation of these methods is their characterization as the fixed-point iteration of a nonexpansive operator. The fixed-point theory of nonexpansive operators has been studied for several decades.
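As a toy illustration of this viewpoint (not an example from the thesis), the snippet below runs the fixed-point iteration x_{k+1} = T(x_k) where T is the composition of projections onto two intersecting convex sets; such compositions are averaged, hence nonexpansive, and the iteration converges to a fixed point of T, i.e. a point in the intersection:

```python
import numpy as np

# Fixed-point iteration of a nonexpansive operator T, here the composition
# of projections onto two intersecting convex sets in R^2:
# the halfspace H = {x : x1 + x2 >= 1} and the unit Euclidean ball B.

def proj_halfspace(x):
    # Projection onto H = {x : a^T x >= 1} with a = (1, 1).
    a = np.array([1.0, 1.0])
    gap = 1.0 - a @ x
    return x + max(gap, 0.0) / (a @ a) * a

def proj_ball(x):
    # Projection onto the unit Euclidean ball.
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x = np.array([5.0, -3.0])
for _ in range(500):
    x = proj_ball(proj_halfspace(x))   # x_{k+1} = T(x_k)

# The limit lies in both sets (up to numerical tolerance).
print(x, np.linalg.norm(x) <= 1 + 1e-9, x.sum() >= 1 - 1e-6)
```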

By exploiting the properties of such an operator, it is possible to show that the alternating direction method of multipliers (ADMM) can detect infeasible problems. Although the ADMM iterates diverge when the problem at hand is unsolvable, the differences between successive iterates converge to a constant vector which is also a certificate of primal and/or dual infeasibility. Reliable termination criteria for detecting infeasibility are proposed based on this result.
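The phenomenon can be observed in a one-dimensional toy problem (this is an illustration, not the thesis's derivation): running Douglas-Rachford splitting, a method closely related to ADMM, on the infeasible feasibility problem "find x with x >= 1 and x <= 0", the iterates diverge but their successive differences become a constant whose magnitude equals the distance between the two sets:

```python
# Douglas-Rachford splitting on two disjoint convex sets C1 = [1, +inf)
# and C2 = (-inf, 0]. No fixed point exists, so the iterates z_k diverge,
# but z_{k+1} - z_k converges to a constant (the minimal displacement
# vector), whose nonzero value certifies infeasibility.

def proj_ge1(z):   # projection onto C1 = [1, +inf)
    return max(z, 1.0)

def proj_le0(z):   # projection onto C2 = (-inf, 0]
    return min(z, 0.0)

def dr_step(z):
    # One Douglas-Rachford iteration: z+ = z + P2(2 P1(z) - z) - P1(z)
    x = proj_ge1(z)
    y = proj_le0(2.0 * x - z)
    return z + y - x

z = 0.5
diffs = []
for _ in range(50):
    z_next = dr_step(z)
    diffs.append(z_next - z)
    z = z_next

print(diffs[-1])  # -> -1.0: constant difference, |diff| = dist(C1, C2) = 1
```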

Similar ideas are used to derive necessary and sufficient conditions for linear (geometric) convergence of an operator splitting method and a bound on the achievable convergence rate. The new bound turns out to be tight for the class of averaged operators.
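To make the notion of linear (geometric) convergence concrete (a generic illustration, not the thesis's bound), consider the gradient-step operator T(x) = x - gamma * A x for a positive definite matrix A: it is a contraction with factor rho = max_i |1 - gamma * lambda_i|, and the distance of the iterates to the fixed point 0 shrinks by roughly rho per step:

```python
import numpy as np

# Linear convergence of a fixed-point iteration: the gradient-step operator
# T(x) = x - gamma * A x for a positive definite diagonal A is a contraction
# with factor rho = max_i |1 - gamma * lam_i|, so ||x_k|| <= rho^k ||x_0||.

lam = np.array([1.0, 2.0, 10.0])        # eigenvalues of A
gamma = 2.0 / (lam.min() + lam.max())   # step size minimizing the rate
rho = np.max(np.abs(1.0 - gamma * lam)) # theoretical contraction factor

x = np.array([1.0, -1.0, 1.0])
ratios = []
for _ in range(30):
    x_next = x - gamma * lam * x        # x_{k+1} = T(x_k)
    ratios.append(np.linalg.norm(x_next) / np.linalg.norm(x))
    x = x_next

# Observed per-step contraction approaches the theoretical rate rho.
print(rho, ratios[-1])
```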

Next, the OSQP solver is presented. OSQP is a novel general-purpose solver for quadratic programs based on ADMM. The solver is very robust, is able to detect infeasible problems, and has been extensively tested on many problem instances from a wide variety of application areas.
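The core idea behind an ADMM-based QP solver can be sketched in a few lines. The toy below is not OSQP's implementation (it omits the general linear constraints l <= Ax <= u, sparse factorization caching, preconditioning, and infeasibility detection) and solves only a box-constrained QP, but it shows the three-step ADMM loop such a solver is built on:

```python
import numpy as np

# Minimal ADMM sketch for a box-constrained QP
#   minimize 0.5 x'Px + q'x  subject to  l <= x <= u.
# This toy illustrates the splitting only; it is not OSQP's algorithm.

def admm_box_qp(P, q, l, u, rho=1.0, iters=200):
    n = len(q)
    x = z = w = np.zeros(n)            # w is the scaled dual variable
    K = P + rho * np.eye(n)            # a real solver would factor K once
    for _ in range(iters):
        x = np.linalg.solve(K, rho * (z - w) - q)   # x-minimization
        z = np.clip(x + w, l, u)                    # projection onto the box
        w = w + x - z                               # dual ascent step
    return z

P = np.array([[1.0]])
q = np.array([-3.0])
sol = admm_box_qp(P, q, l=np.array([0.0]), u=np.array([1.0]))
print(sol)  # the unconstrained minimizer is 3, so the box clips it to 1
```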

Finally, it is shown that operator splitting methods can also be effective in nonconvex optimization: the algorithm developed in the thesis significantly outperforms a common approach based on convex relaxation of the original nonconvex problem.