Randomized Primal-Dual Methods with Adaptive Step Sizes

Research output: Contribution to journal › Conference article › peer-review


In this paper we propose a class of randomized primal-dual methods incorporating line search to contend with large-scale saddle point (SP) problems defined by a convex-concave function L(x, y) ≜ ∑_{i=1}^{M} f_i(x_i) + Φ(x, y) − h(y). We analyze the convergence rate of the proposed method under mere convexity and strong convexity assumptions on L in the x-variable. In particular, assuming ∇_yΦ(·, ·) is Lipschitz and ∇_xΦ(·, y) is coordinate-wise Lipschitz for any fixed y, the ergodic sequence generated by the algorithm achieves an O(M/k) convergence rate in the expected primal-dual gap. Furthermore, assuming that L(·, y) is strongly convex for any y and that Φ(x, ·) is affine for any x, the scheme enjoys a faster rate of O(M/k²) in terms of primal solution suboptimality. We implemented the proposed algorithmic framework to solve the kernel matrix learning problem and tested it against other state-of-the-art first-order methods.
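To illustrate the problem class, the following is a minimal sketch of a randomized block-coordinate primal-dual iteration on a small instance with f_i(x_i) = ½x_i², h(y) = ½‖y‖², and a bilinear coupling Φ(x, y) = ⟨y, Ax − b⟩ (so Φ(x, ·) is affine, as in the strongly convex regime above). This is a generic illustration in the spirit of randomized primal-dual hybrid gradient methods, not the paper's exact algorithm: the adaptive line search is omitted and fixed step sizes tau and sigma are assumed instead, and the problem data (A, b, step sizes) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
M, n = 8, 6                         # M primal blocks (scalar coordinates here), n dual dims
A = rng.standard_normal((n, M))     # hypothetical coupling matrix for Phi(x, y) = <y, Ax - b>
b = rng.standard_normal(n)

tau, sigma = 0.05, 0.05             # illustrative fixed step sizes (no line search here)
x, y = np.zeros(M), np.zeros(n)
x_bar = x.copy()                    # extrapolated primal point used by the dual step

for k in range(4000):
    # dual prox-ascent step on h(y) = 0.5*||y||^2, using the extrapolated primal point
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
    # sample one primal block uniformly at random; update only that block
    i = rng.integers(M)
    g_i = A[:, i] @ y                            # i-th partial gradient of Phi in x
    x_new_i = (x[i] - tau * g_i) / (1.0 + tau)   # prox step on f_i(x_i) = 0.5*x_i^2
    x_bar = x.copy()
    x_bar[i] = 2.0 * x_new_i - x[i]              # extrapolate only the updated block
    x[i] = x_new_i

# this instance has a closed-form saddle point: x* solves (I + A^T A) x = A^T b
x_star = np.linalg.solve(np.eye(M) + A.T @ A, A.T @ b)
print(np.linalg.norm(x - x_star))
```

The design point being illustrated is that only one primal block is touched per iteration, so each step costs O(n) rather than O(Mn); the convergence rates quoted in the abstract are stated in expectation over the random block choices.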

Original language: English (US)
Pages (from-to): 11185-11212
Number of pages: 28
Journal: Proceedings of Machine Learning Research
State: Published - 2023
Event: 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023 - Valencia, Spain
Duration: Apr 25 2023 - Apr 27 2023

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability


