On the convergence of a regularized Jacobi algorithm for convex optimization

G. Banjac, K. Margellos and P. Goulart

IEEE Transactions on Automatic Control, vol. 63, no. 4, pp. 1113-1119, April 2018.

@article{Banjac2018,
  author = {G. Banjac and K. Margellos and P. Goulart},
  title = {On the convergence of a regularized {J}acobi algorithm for convex optimization},
  journal = {IEEE Transactions on Automatic Control},
  year = {2018},
  volume = {63},
  number = {4},
  pages = {1113--1119},
  url = {https://doi.org/10.1109/TAC.2017.2737319},
  doi = {10.1109/TAC.2017.2737319}
}

In this paper we consider a regularized version of the Jacobi algorithm, a block coordinate-descent method for convex optimization problems whose objective is the sum of a differentiable function and a block-separable function. Under certain regularity assumptions on the objective, this algorithm has been shown to satisfy the so-called sufficient decrease condition and, consequently, to converge in objective function value. We revisit the convergence analysis of the regularized Jacobi algorithm and show that it also converges in iterates under very mild conditions on the objective function. Moreover, we establish conditions under which the algorithm achieves a linear convergence rate.
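To illustrate the flavor of such a scheme, the following is a minimal sketch of a regularized Jacobi-type iteration on an unconstrained convex quadratic, min_x 0.5 x'Qx + q'x, with scalar blocks. All names, the step constant `c`, and the problem data are illustrative assumptions, not taken from the paper: every block minimizes its subproblem with the other coordinates frozen at the current iterate, plus a proximal regularization term (c/2)(x_i - x_i^k)^2, and all blocks update in parallel.

```python
import numpy as np

def regularized_jacobi(Q, q, c=1.0, iters=500):
    """Sketch of a regularized Jacobi iteration for min 0.5 x'Qx + q'x.

    Each scalar block i solves
        min_{x_i} 0.5 x'Qx + q_i x_i + (c/2)(x_i - x_i^k)^2
    with the other coordinates fixed at x^k; the closed-form solution
    gives the parallel update below. (Hypothetical demo, not the
    paper's exact algorithm or parameters.)
    """
    n = len(q)
    x = np.zeros(n)
    d = np.diag(Q)
    for _ in range(iters):
        # off-diagonal coupling, evaluated at the current iterate x^k
        coupling = Q @ x - d * x
        # parallel block update; the +c terms come from regularization
        x = (c * x - q - coupling) / (d + c)
    return x

# small strictly convex test problem
Q = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.7],
              [0.5, 0.7, 2.0]])
q = np.array([1.0, -2.0, 0.5])

x = regularized_jacobi(Q, q)
x_star = -np.linalg.solve(Q, q)  # exact minimizer, for comparison
```

Any fixed point of the update satisfies Qx = -q, i.e. it is the minimizer; the regularization constant c damps the parallel updates, which is what restores convergence when the unregularized Jacobi iteration would oscillate or diverge.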