Benefits of MPI Sessions for GPU MPI applications
Abstract
Heterogeneous supercomputers are now considered the most viable solution for reaching Exascale. Compute nodes frequently host more than one GPU accelerator, and programming such architectures efficiently is challenging. MPI is the de facto standard for distributed computing, and CUDA-aware libraries were introduced to ease inter-node GPU communication. However, they induce overhead that can degrade overall performance. The MPI 4.0 Specification draft introduces the MPI Sessions model, which offers the ability to initialize specific resources for a specific component of the application.
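For context, the listing below is a minimal sketch of the MPI Sessions initialization flow as defined by the MPI 4.0 API, in which a component builds its own communicator from a process set instead of relying on a global MPI_Init and MPI_COMM_WORLD. The process set "mpi://WORLD" is a standard built-in set; the string tag passed to MPI_Comm_create_from_group is an arbitrary illustrative value, and error handling is omitted for brevity.

    /* Minimal MPI Sessions sketch (MPI 4.0 API). */
    #include <mpi.h>
    #include <stdio.h>

    int main(void)
    {
        MPI_Session session;
        MPI_Group   group;
        MPI_Comm    comm;

        /* Initialize only the resources this component needs. */
        MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, &session);

        /* Derive a group from the built-in "mpi://WORLD" process set,
           then create a communicator restricted to this component. */
        MPI_Group_from_session_pset(session, "mpi://WORLD", &group);
        MPI_Comm_create_from_group(group, "org.example.gpu-component",
                                   MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);

        int rank;
        MPI_Comm_rank(comm, &rank);
        printf("rank %d initialized via MPI Sessions\n", rank);

        MPI_Comm_free(&comm);
        MPI_Group_free(&group);
        MPI_Session_finalize(&session);
        return 0;
    }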
In this paper, we present a way to reduce the overhead induced by CUDA-aware libraries with a solution inspired by MPI Sessions. In this way, we minimize the overhead induced by GPUs in an MPI context and improve the efficiency of CPU + GPU programs. We evaluate our approach on various micro-benchmarks and on proxy applications such as Lulesh, MiniFE, Quicksilver, and Cloverleaf, and demonstrate that it can provide up to a 7x speedup compared to the standard MPI model.