Abstract
Data shuffling is one of the fundamental building blocks of distributed learning algorithms: reshuffling the data between iterations increases the statistical gains of each step of the learning process. In each iteration, a central node assigns a different shuffled batch of data points to each worker in a distributed set of workers for local computation, and delivering these new batches creates a communication bottleneck. The focus of this paper is on formalizing and understanding the fundamental information-theoretic tradeoff between storage (per worker) and the worst-case communication overhead for the data shuffling problem. We completely characterize this tradeoff for K = 2 and K = 3 workers, for any value of storage capacity, and show that increasing the storage across workers can reduce the communication overhead by leveraging coding. We propose a novel and systematic data delivery and storage update strategy for each data shuffling iteration, which preserves the structural properties of the storage across the workers and aids in minimizing the communication overhead in subsequent data shuffling iterations.
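To make the shuffling model concrete, below is a minimal Python sketch of the setup described in the abstract: in each iteration, a random permutation partitions the data points into equal batches, one per worker, and the central node must deliver the points a worker needs but does not already store. The function names (`shuffle_assignments`, `uncoded_load`) and the simulation parameters are illustrative assumptions, not part of the paper; the sketch only shows the baseline uncoded communication load that the paper's coded delivery and storage update schemes aim to beat when workers have excess storage.

```python
import random

def shuffle_assignments(num_points, num_workers, seed=None):
    """One data-shuffling iteration: randomly partition the data point
    indices into equal-size batches, one batch per worker.
    (Hypothetical helper for illustration; assumes K divides N.)"""
    rng = random.Random(seed)
    indices = list(range(num_points))
    rng.shuffle(indices)
    batch = num_points // num_workers
    return [set(indices[w * batch:(w + 1) * batch]) for w in range(num_workers)]

def uncoded_load(new_batches, caches):
    """Baseline (uncoded) communication overhead: the total number of
    points each worker needs for the new iteration but does not store."""
    return sum(len(batch - cache) for batch, cache in zip(new_batches, caches))

# Two consecutive shuffles with K = 3 workers and N = 6 data points.
K, N = 3, 6
caches = shuffle_assignments(N, K, seed=0)  # each worker stores its current batch
new = shuffle_assignments(N, K, seed=1)     # the next iteration's assignment
print("uncoded overhead:", uncoded_load(new, caches), "points")
```

With only enough storage for its own batch, each worker must receive every new point uncoded; the paper's results quantify how much this overhead can shrink as per-worker storage grows and coded multicast transmissions become possible.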
| Original language | English (US) |
|---|---|
| Article number | 7841903 |
| Journal | Proceedings - IEEE Global Communications Conference, GLOBECOM |
| State | Published - 2016 |
| Event | 59th IEEE Global Communications Conference, GLOBECOM 2016, Washington, United States (Dec 4 2016 - Dec 8 2016) |
ASJC Scopus subject areas
- Signal Processing
- Hardware and Architecture
- Computer Networks and Communications
- Artificial Intelligence