TY - JOUR
T1 - Near Optimal Coded Data Shuffling for Distributed Learning
AU - Attia, Mohamed Adel
AU - Tandon, Ravi
N1 - Funding Information:
Manuscript received January 7, 2018; revised October 26, 2018 and February 22, 2019; accepted June 8, 2019. Date of publication July 3, 2019; date of current version October 18, 2019. This work was supported by the NSF under Grant CAREER-1651492. This paper was presented in part at the 2018 IEEE International Symposium on Information Theory (ISIT), and in part at the 2016 IEEE Global Communications Conference (GLOBECOM).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/11
Y1 - 2019/11
N2 - Data shuffling between a distributed cluster of nodes is one of the critical steps in implementing large-scale learning algorithms. Randomly shuffling the dataset among a cluster of workers allows different nodes to obtain fresh data assignments at each learning epoch. This process has been shown to provide improvements in the learning process (via testing and training error). However, the statistical benefits of distributed data shuffling come at the cost of extra communication overhead from the master node to the worker nodes, which can act as one of the major bottlenecks in the overall computation time. There has been significant recent interest in devising approaches to minimize this communication overhead. One approach is to provision for extra storage at the computing nodes. Another emerging approach is to leverage coded communication to minimize the overall communication overhead. The focus of this work is to understand the fundamental tradeoff between the amount of storage and the communication overhead for distributed data shuffling. In this paper, we first present an information-theoretic formulation for the data shuffling problem, accounting for the underlying problem parameters (the number of workers K, the number of data points N, and the available storage S per node). We then present an information-theoretic lower bound on the communication overhead for data shuffling as a function of these parameters. We next present a novel coded communication scheme and show that the resulting communication overhead of the proposed scheme is within a multiplicative factor of at most K/(K-1) from the lower bound (which is upper bounded by 2 for K ≥ 2). Furthermore, we present new results towards closing this gap through a novel coded communication scheme, which we call aligned coded shuffling. This scheme is inspired by the ideas of coded shuffling and interference alignment. In particular, we show that the aligned scheme achieves the optimal storage vs. communication trade-off for K < 5, and further reduces the maximum multiplicative gap down to (K - 1/3)/(K - 1) for K ≥ 5.
AB - Data shuffling between a distributed cluster of nodes is one of the critical steps in implementing large-scale learning algorithms. Randomly shuffling the dataset among a cluster of workers allows different nodes to obtain fresh data assignments at each learning epoch. This process has been shown to provide improvements in the learning process (via testing and training error). However, the statistical benefits of distributed data shuffling come at the cost of extra communication overhead from the master node to the worker nodes, which can act as one of the major bottlenecks in the overall computation time. There has been significant recent interest in devising approaches to minimize this communication overhead. One approach is to provision for extra storage at the computing nodes. Another emerging approach is to leverage coded communication to minimize the overall communication overhead. The focus of this work is to understand the fundamental tradeoff between the amount of storage and the communication overhead for distributed data shuffling. In this paper, we first present an information-theoretic formulation for the data shuffling problem, accounting for the underlying problem parameters (the number of workers K, the number of data points N, and the available storage S per node). We then present an information-theoretic lower bound on the communication overhead for data shuffling as a function of these parameters. We next present a novel coded communication scheme and show that the resulting communication overhead of the proposed scheme is within a multiplicative factor of at most K/(K-1) from the lower bound (which is upper bounded by 2 for K ≥ 2). Furthermore, we present new results towards closing this gap through a novel coded communication scheme, which we call aligned coded shuffling. This scheme is inspired by the ideas of coded shuffling and interference alignment. In particular, we show that the aligned scheme achieves the optimal storage vs. communication trade-off for K < 5, and further reduces the maximum multiplicative gap down to (K - 1/3)/(K - 1) for K ≥ 5.
KW - Coded data shuffling
KW - coded multi-casting
KW - distributed computing
KW - distributed learning
UR - http://www.scopus.com/inward/record.url?scp=85076340246&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85076340246&partnerID=8YFLogxK
U2 - 10.1109/TIT.2019.2926704
DO - 10.1109/TIT.2019.2926704
M3 - Article
AN - SCOPUS:85076340246
SN - 0018-9448
VL - 65
SP - 7325
EP - 7349
JO - IEEE Transactions on Information Theory
JF - IEEE Transactions on Information Theory
IS - 11
M1 - 8754795
ER -