Progress in automatic GPU compilation and why you want to run MPI on your GPU.
(Presentation - presented in Lyon, France, Oct. 2016; invited talk at the CCDSC meeting)
Abstract
Auto-parallelization of programs that have not been developed
with parallelism in mind is one of the holy grails in computer science.
It requires understanding the source code's data flow to automatically
distribute the data, parallelize the computations, and infer
synchronizations where necessary. We will discuss our new LLVM-based
research compiler Polly-ACC that enables automatic compilation to
accelerator devices such as GPUs. Unfortunately, its applicability is
limited to codes for which the iteration space and all accesses can be
described as affine functions. In the second part of the talk, we will
discuss dCUDA, a way to express parallel codes using MPI-RMA, the well-known
one-sided communication interface, and map them automatically to GPU clusters. The
dCUDA approach enables simple and portable programming across
heterogeneous devices due to programmer-specified locality. Furthermore,
dCUDA enables hardware-supported overlap of computation and
communication and is applicable to next-generation technologies such as
NVLINK. We will demonstrate encouraging initial results and show
limitations of current devices in order to start a discussion.
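To make the affine restriction concrete, here is a minimal C sketch (not taken from the talk) of the kind of loop nest a polyhedral compiler such as Polly-ACC can model: the loop bounds and array subscripts are linear functions of the loop indices, whereas an indirect access such as A[idx[i]] would fall outside the affine model.

    /* Affine loop nest: bounds and subscripts are linear in i and j,
     * so the iteration space and accesses can be analyzed exactly.
     * An access like A[idx[i]] would not be affine. */
    void stencil(int n, double A[n][n], double B[n][n]) {
      for (int i = 1; i < n - 1; i++)
        for (int j = 1; j < n - 1; j++)
          B[i][j] = 0.25 * (A[i-1][j] + A[i+1][j]
                          + A[i][j-1] + A[i][j+1]);
    }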
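To convey the flavor of the MPI-RMA style that dCUDA builds on, the following hypothetical host-side C example uses standard one-sided MPI calls (MPI_Put between window fences). It is only an illustration of the one-sided semantics; dCUDA moves this style of communication into GPU kernels, which is not shown here.

    #include <mpi.h>

    /* Illustrative halo exchange with standard host-side MPI-RMA:
     * each rank puts its boundary value into the right neighbor's
     * window; no matching receive is posted on the target, and
     * independent computation can overlap the transfer. */
    int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      double halo[2] = {0.0, 0.0};      /* exposed via the window */
      double boundary = (double)rank;   /* value sent to the neighbor */

      MPI_Win win;
      MPI_Win_create(halo, 2 * sizeof(double), sizeof(double),
                     MPI_INFO_NULL, MPI_COMM_WORLD, &win);

      int right = (rank + 1) % size;
      MPI_Win_fence(0, win);
      MPI_Put(&boundary, 1, MPI_DOUBLE, right, 0, 1, MPI_DOUBLE, win);
      /* ... independent local work could run here, overlapping the put ... */
      MPI_Win_fence(0, win);

      MPI_Win_free(&win);
      MPI_Finalize();
      return 0;
    }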
Documents
Download slides:
Recorded talk (best effort)
BibTeX
@misc{hoefler-ccdsc16,
  author={Torsten Hoefler},
  title={{Progress in automatic GPU compilation and why you want to run MPI on your GPU.}},
  year={2016},
  month={Oct.},
  location={Lyon, France},
  note={Invited talk at the CCDSC meeting},
  source={http://www.unixer.de/~htor/publications/},
}