no code implementations • 26 Oct 2018 • Majid Jahani, Xi He, Chenxin Ma, Aryan Mokhtari, Dheevatsa Mudigere, Alejandro Ribeiro, Martin Takáč
In this paper, we propose a Distributed Accumulated Newton Conjugate gradiEnt (DANCE) method in which the sample size is gradually increased in order to quickly obtain a solution whose empirical loss is within a satisfactory statistical accuracy.
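For intuition, here is a hedged single-machine sketch of the growing-sample idea in Python (the function names, tolerances, and doubling schedule are illustrative assumptions, not the authors' DANCE implementation, which additionally distributes the Newton-CG computation):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

def logistic_loss_grad(w, X, y, lam=1e-3):
    """Regularized logistic loss and gradient (labels in {-1, +1})."""
    z = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -z)) + 0.5 * lam * (w @ w)
    grad = -(X.T @ (y * expit(-z))) / len(y) + lam * w
    return loss, grad

def growing_sample_newton_cg(X, y, n0=128, growth=2):
    """Solve on a small subsample to a tolerance matched to its size,
    then enlarge the sample and warm-start from the previous solution."""
    n, d = X.shape
    w, m = np.zeros(d), n0
    while True:
        sub = slice(0, min(m, n))
        res = minimize(logistic_loss_grad, w, args=(X[sub], y[sub]),
                       jac=True, method="Newton-CG",
                       options={"xtol": 1.0 / m})  # looser on small samples
        w = res.x
        if m >= n:
            return w
        m *= growth  # double the sample and warm-start
```

The intuition: a solution accurate to within the statistical error of a small subsample is a cheap, good warm start for the next, larger subsample.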
1 code implementation • 14 Nov 2017 • Chenxin Ma, Martin Jaggi, Frank E. Curtis, Nathan Srebro, Martin Takáč
In this paper, an accelerated variant of CoCoA+ is proposed and shown to achieve a convergence rate of $\mathcal{O}(1/t^2)$ in terms of suboptimality.
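The flavor of such acceleration can be sketched as a Nesterov-style extrapolation wrapped around an abstract communication round; everything below is an illustrative assumption (in particular `inner_step`, which stands in for one round of a CoCoA+-like solver), not the paper's construction:

```python
import numpy as np

def accelerated_outer_loop(inner_step, alpha0, T):
    """Nesterov-style momentum around an abstract inner solver.
    The t_k recursion below is the classical one that yields
    O(1/t^2) rates for accelerated first-order schemes."""
    alpha = alpha0.copy()
    alpha_prev = alpha0.copy()
    t = 1.0
    for _ in range(T):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = alpha + ((t - 1.0) / t_next) * (alpha - alpha_prev)  # extrapolate
        alpha_prev = alpha
        alpha = inner_step(y)  # one (distributed) round, started from y
        t = t_next
    return alpha
```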
no code implementations • 10 Oct 2017 • Majid Jahani, Naga Venkata C. Gudapati, Chenxin Ma, Rachael Tappenden, Martin Takáč
In this work we introduce the concept of an Underestimate Sequence (UES), which is motivated by Nesterov's estimate sequence.
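For orientation, one natural way to formalize the contrast is sketched below (hedged: the paper's exact conditions may differ). Where the models $\phi_k$ in Nesterov's estimate sequence need not lower-bound $f$, an underestimate sequence keeps every model a global lower bound, so its minimum yields a computable optimality certificate for the current iterate $x_k$.

```latex
% Hedged sketch; the paper's precise definition may differ.
\[
\text{estimate sequence: }\quad
\phi_k(x) \le (1-\lambda_k)\, f(x) + \lambda_k\, \phi_0(x)
\quad \forall x, \qquad \lambda_k \to 0,
\]
\[
\text{underestimate sequence: }\quad
\phi_k(x) \le f(x) \quad \forall x, \qquad
f(x_k) - \min_x \phi_k(x) \to 0 .
\]
```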
2 code implementations • 7 Nov 2016 • Virginia Smith, Simone Forte, Chenxin Ma, Martin Takáč, Michael I. Jordan, Martin Jaggi
The scale of modern datasets necessitates the development of efficient distributed optimization methods for machine learning.
no code implementations • 16 Mar 2016 • Chenxin Ma, Martin Takáč
In this paper we study an inexact damped Newton method implemented in a distributed environment.
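For concreteness, a minimal single-machine sketch of one such step (a hedged illustration: the damping by the Newton decrement follows the standard recipe, truncated CG supplies the inexactness, and none of this is the paper's distributed implementation):

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def inexact_damped_newton_step(w, grad, hess_vec, max_cg_iters=50):
    """One inexact damped Newton step: solve H v = g approximately with
    truncated CG (only Hessian-vector products are needed), then damp the
    step by 1/(1 + delta), where delta ~ sqrt(g^T v) estimates the Newton
    decrement."""
    d = w.shape[0]
    H = LinearOperator((d, d), matvec=hess_vec)
    v, _ = cg(H, grad, maxiter=max_cg_iters)  # truncated => inexact solve
    delta = np.sqrt(max(grad @ v, 0.0))       # Newton decrement estimate
    return w - v / (1.0 + delta)
```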
1 code implementation • 13 Dec 2015 • Chenxin Ma, Jakub Konečný, Martin Jaggi, Virginia Smith, Michael I. Jordan, Peter Richtárik, Martin Takáč
To this end, we present a framework for distributed optimization that allows arbitrary solvers to be used locally on each machine, yet maintains competitive performance against other state-of-the-art special-purpose distributed methods.
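The communication pattern such a framework implies can be sketched as follows (a hedged toy: `local_solver`, the block structure, and the aggregation parameter `gamma` are illustrative stand-ins, not the paper's API):

```python
import numpy as np

def framework_outer_loop(local_solver, blocks, alpha0, gamma, T):
    """Each machine improves only its own block of variables with an
    arbitrary approximate local solver; the updates are then combined
    with aggregation parameter gamma (gamma = 1/len(blocks) averages,
    gamma = 1 adds)."""
    alpha = alpha0.copy()
    for _ in range(T):
        deltas = np.zeros_like(alpha)
        for blk in blocks:                       # in practice: in parallel
            deltas[blk] += local_solver(alpha, blk)
        alpha += gamma * deltas                  # one communication round
    return alpha
```

Because the local solver is a black box that only needs to make some progress on its subproblem, any single-machine method (SDCA, coordinate descent, and so on) can be plugged in.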
no code implementations • 22 Oct 2015 • Chenxin Ma, Martin Takáč
In this paper we study how the way in which the data is partitioned across machines affects distributed optimization.
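The two natural schemes being compared can be shown in a few lines (an illustrative helper with assumed names, not code from the paper):

```python
import numpy as np

def partition(X, K, by="samples", seed=0):
    """Split a data matrix across K machines by rows (samples) or by
    columns (features); which choice works better depends on the shape
    of the data and on the optimization method used."""
    axis = 0 if by == "samples" else 1
    idx = np.random.default_rng(seed).permutation(X.shape[axis])
    return [np.take(X, part, axis=axis) for part in np.array_split(idx, K)]
```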
no code implementations • 8 Jun 2015 • Chenxin Ma, Rachael Tappenden, Martin Takáč
We show that the famous SDCA algorithm for optimizing the SVM dual problem, or the stochastic coordinate descent method for the LASSO problem, fits into the framework of RC-FDM.
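As a concrete member of that family, here is a minimal stochastic coordinate descent for the LASSO, with the exact single-coordinate minimization done by soft-thresholding (an illustration of the kind of method the framework covers, not the paper's code):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def scd_lasso(X, y, lam, iters=10_000, seed=0):
    """Stochastic coordinate descent for
    min_w (1/2n)||Xw - y||^2 + lam*||w||_1 :
    pick a coordinate uniformly at random and minimize over it exactly.
    Assumes no all-zero columns in X."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    r = X @ w - y                        # residual, maintained incrementally
    L = (X ** 2).sum(axis=0) / n         # per-coordinate curvature
    for _ in range(iters):
        j = rng.integers(d)
        g = X[:, j] @ r / n              # partial gradient at coordinate j
        w_new = soft_threshold(w[j] - g / L[j], lam / L[j])
        r += (w_new - w[j]) * X[:, j]
        w[j] = w_new
    return w
```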
1 code implementation • 12 Feb 2015 • Chenxin Ma, Virginia Smith, Martin Jaggi, Michael I. Jordan, Peter Richtárik, Martin Takáč
Distributed optimization methods for large-scale machine learning suffer from a communication bottleneck.