Publications of Torsten Hoefler
Copyright Notice:

The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

Shigang Li, Tal Ben-Nun, Salvatore Di Girolamo, Dan Alistarh, Torsten Hoefler:

 Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations

(In Proceedings of the 25th Symposium on Principles and Practice of Parallel Programming (PPoPP'20), Feb. 2020)

Abstract

Load imbalance is pervasive in distributed deep learning training systems, caused either by the inherent imbalance in the learned tasks or by the system itself. Traditional synchronous Stochastic Gradient Descent (SGD) achieves good accuracy for a wide variety of tasks, but relies on global synchronization to accumulate the gradients at every training step. In this paper, we propose eager-SGD, which relaxes the global synchronization in favor of decentralized accumulation. To implement eager-SGD, we propose two partial collective operations: solo and majority allreduce. With solo allreduce, the faster processes contribute their gradients eagerly without waiting for the slower processes, whereas with majority allreduce at least half of the participants must contribute gradients before continuing, all without using a central parameter server. We theoretically prove the convergence of the algorithms and describe the partial collectives in detail. Experiments are conducted on a variety of neural networks and datasets. The results on load-imbalanced environments show that eager-SGD achieves a 2.64× speedup (ResNet-50 on ImageNet) over asynchronous centralized SGD, and a 1.29× speedup (ResNet-50 on ImageNet) and a 1.27× speedup (LSTM on UCF101) over state-of-the-art synchronous decentralized SGD variants, without losing accuracy.
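To make the semantics of the two partial collectives concrete, the following is a minimal single-process NumPy sketch of when solo and majority allreduce trigger. The function names and the boolean arrival model are illustrative assumptions for this sketch, not the paper's actual API; the real collectives run across distributed processes.

import numpy as np

def partial_allreduce(grads, arrived, quorum):
    """Average the gradients of the processes that have arrived, but only
    once at least `quorum` of them have contributed; otherwise signal that
    the collective would still be waiting (illustrative model only)."""
    ready = [g for g, a in zip(grads, arrived) if a]
    if len(ready) < quorum:
        return None  # collective has not triggered yet
    return np.mean(ready, axis=0)

def solo_allreduce(grads, arrived):
    # solo: a single fast process suffices to trigger the reduction;
    # stragglers' gradients are folded in at a later step
    return partial_allreduce(grads, arrived, quorum=1)

def majority_allreduce(grads, arrived):
    # majority: at least half of the participants must contribute
    return partial_allreduce(grads, arrived, quorum=(len(grads) + 1) // 2)

rng = np.random.default_rng(0)
grads = [rng.normal(size=4) for _ in range(4)]  # gradients of 4 workers
arrived = [True, False, False, False]           # only worker 0 is on time

print("solo:    ", solo_allreduce(grads, arrived))      # proceeds with 1 of 4
print("majority:", majority_allreduce(grads, arrived))  # None: waits for 2 of 4

With only one of four workers arrived, solo allreduce proceeds immediately with that worker's gradient, while majority allreduce (a quorum of two of four) would still wait; this illustrates the trade-off between gradient staleness and synchronization cost described in the abstract.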

BibTeX

@inproceedings{li2020taming,
  author={Shigang Li and Tal Ben-Nun and Salvatore Di Girolamo and Dan Alistarh and Torsten Hoefler},
  title={{Taming Unbalanced Training Workloads in Deep Learning with Partial Collective Operations}},
  year={2020},
  month={Feb.},
  booktitle={Proceedings of the 25th Symposium on Principles and Practice of Parallel Programming (PPoPP'20)},
  source={http://www.unixer.de/~htor/publications/},
}

© Torsten Hoefler