GPU acceleration of ADMM for large-scale quadratic programming

M. Schubiger, G. Banjac and J. Lygeros

Journal of Parallel and Distributed Computing, vol. 144, pp. 55-67, October 2020.

@article{schubiger2020gpu,
  author  = {M. Schubiger and G. Banjac and J. Lygeros},
  title   = {{GPU} acceleration of {ADMM} for large-scale quadratic programming},
  journal = {Journal of Parallel and Distributed Computing},
  year    = {2020},
  volume  = {144},
  pages   = {55--67},
  doi     = {10.1016/j.jpdc.2020.05.021}
}

The alternating direction method of multipliers (ADMM) is a powerful operator splitting technique for solving structured convex optimization problems. Due to its relatively low per-iteration computational cost and ability to exploit sparsity in the problem data, it is particularly suitable for large-scale optimization. However, the method may still take prohibitively long to compute solutions to very large problem instances. Although ADMM is known to be parallelizable, this feature is rarely exploited in real implementations. In this paper we exploit the parallel computing architecture of a graphics processing unit (GPU) to accelerate ADMM. We build our solver on top of OSQP, a state-of-the-art implementation of ADMM for quadratic programming. Our open-source CUDA C implementation has been tested on many large-scale problems and was shown to be up to two orders of magnitude faster than the CPU implementation.
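The ADMM splitting referred to above can be illustrated with a small numerical sketch. The snippet below is a simplified ADMM loop for a QP in the form used by OSQP (minimize (1/2)xᵀPx + qᵀx subject to l ≤ Ax ≤ u), written in NumPy for clarity; it omits the relaxation step, adaptive step-size, scaling, and sparse linear algebra of the actual solver, and the function name and parameter defaults are illustrative, not part of the OSQP API.

```python
import numpy as np

def admm_qp(P, q, A, l, u, rho=1.0, sigma=1e-6, iters=500):
    """Simplified ADMM sketch for: min 0.5 x'Px + q'x  s.t. l <= Ax <= u.
    Illustrates the OSQP-style splitting; NOT the exact OSQP algorithm
    (no relaxation, adaptive rho, scaling, or sparse factorization)."""
    n, m = P.shape[0], A.shape[0]
    x, z, y = np.zeros(n), np.zeros(m), np.zeros(m)
    # The coefficient matrix is formed (and could be factored) once;
    # each iteration then costs only a solve and a few matrix-vector products,
    # which is what makes the per-iteration work cheap and parallelizable.
    K = P + sigma * np.eye(n) + rho * A.T @ A
    for _ in range(iters):
        # x-update: equality-constrained quadratic subproblem
        rhs = sigma * x - q + A.T @ (rho * z - y)
        x = np.linalg.solve(K, rhs)
        # z-update: projection onto the box [l, u]
        z = np.clip(A @ x + y / rho, l, u)
        # y-update: dual ascent on the consensus constraint Ax = z
        y = y + rho * (A @ x - z)
    return x

# Tiny example: min 0.5 x^2 - x subject to 0 <= x <= 0.4
# (unconstrained minimizer is x = 1, so the bound u = 0.4 is active)
P = np.array([[1.0]]); q = np.array([-1.0])
A = np.array([[1.0]]); l = np.array([0.0]); u = np.array([0.4])
x = admm_qp(P, q, A, l, u)
```

Every step in the loop is built from matrix-vector products, elementwise clipping, and a linear solve with a fixed matrix, which is precisely the structure that maps well onto a GPU.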