Many-core systems with rapidly increasing core counts make it challenging for parallel applications to use their complex memory hierarchies efficiently. Many such applications rely on collective communication in performance-critical phases, which becomes a bottleneck if left unoptimized. We address this issue by proposing cache-oblivious algorithms for MPI_Alltoall,
MPI_Allgather, and the MPI neighborhood collectives that exploit data locality. To implement the cache-oblivious algorithms, we
allocate the send and receive buffers on a shared heap and use Morton order to guide the memory copies. Our analysis shows that our
algorithm for MPI_Alltoall is asymptotically optimal. We present an extension of our algorithms that minimizes the communication distance
on NUMA systems while maintaining optimality within each socket. We further demonstrate how the cache-oblivious algorithms can be
applied to multi-node machines. Experiments are conducted on different many-core architectures. For MPI_Alltoall, our implementation
achieves an average speedup of 1.40x over the naive shared-heap implementation for small and medium block sizes (less than
16 KB) on a Xeon Phi KNC, an average speedup of 3.03x over MVAPICH2 on a Xeon E7-8890, and an average speedup of 2.23x
over MVAPICH2 on a 256-node Xeon E5-2680 cluster for block sizes less than 1 KB.
@article{li2018morton,
  author    = {Shigang Li and Yunquan Zhang and Torsten Hoefler},
  title     = {{Cache-Oblivious MPI All-to-All Communications Based on Morton Order}},
  journal   = {IEEE Transactions on Parallel and Distributed Systems},
  year      = {2018},
  month     = mar,
  volume    = {29},
  number    = {3},
  pages     = {542--555},
  publisher = {IEEE},
  source    = {http://www.unixer.de/~htor/publications/},
}