Publications of Torsten Hoefler
Torsten Hoefler:
Scalable and Efficient AI: From Supercomputers to Smartphones
(Presentation - presented in Salt Lake City, UT, USA, Aug. 2023)
Keynote talk at the 52nd International Conference on Parallel Processing

Abstract

Billion-parameter artificial intelligence models have shown exceptional performance in a wide variety of tasks, ranging from natural language processing, computer vision, and image generation to mathematical reasoning and algorithm generation. These models usually require large parallel computing systems, often called 'AI Supercomputers', for their initial training. We will outline several techniques, ranging from data ingestion and parallelization to accelerator optimization, that improve the efficiency of such training systems. Yet, training large models is only a small fraction of practical artificial intelligence computations. Efficient inference is even more challenging: models with hundreds of billions of parameters are expensive to use. We continue by discussing model compression and optimization techniques, such as fine-grained sparsity and quantization, that reduce model size and significantly improve efficiency during inference. These techniques may eventually enable inference with powerful models on hand-held devices.
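To illustrate one of the compression techniques the abstract mentions, here is a minimal sketch of symmetric per-tensor int8 post-training quantization. This is a generic illustration, not the specific method from the talk; the function names and the toy weight matrix are assumptions for the example.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: map float weights to int8 in [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # one scale factor for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# Toy example: a small random weight matrix (stand-in for a model layer)
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; per-weight rounding error is at most scale/2
max_err = np.max(np.abs(w - w_hat))
```

Real deployments typically use per-channel scales, calibration data, and quantized matrix-multiply kernels, but the size/accuracy trade-off shown here is the core idea.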

Documents

download slides:
Recorded talk (best effort)

BibTeX

@misc{hoefler-icpp,
  author={Torsten Hoefler},
  title={{Scalable and Efficient AI: From Supercomputers to Smartphones}},
  year={2023},
  month={Aug.},
  location={Salt Lake City, UT, USA},
  source={http://www.unixer.de/~htor/publications/},
}


© Torsten Hoefler