MPIs
Message Passing Interface (MPI) is a standardized and portable specification for message passing, designed to function on parallel computing architectures.
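On the clusters listed below, each MPI implementation is provided as an environment module. As a quick sketch (the module names come from the sections that follow), the installed builds can be listed before loading one:
module avail openmpi    # list the Open MPI builds installed on the current cluster
module avail impi       # list the Intel MPI builds installed on the current cluster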
impi
Description
Intel MPI
Versions
- Bell: 2019.5.281
- Scholar: 2019.3.199
- Gilbreth: 2019.5.281
- Negishi: 2019.9.304
- Anvil: 2019.5.281
Module
You can load the modules by running:
module load intel
module load impi
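As a quick sketch of using the loaded modules (mpi_hello.c and the rank count are placeholders, not part of this page), Intel MPI ships compiler wrappers for the Intel compilers:
# Intel MPI wrappers: mpiicc (C), mpiicpc (C++), mpiifort (Fortran)
mpiicc -o mpi_hello mpi_hello.c
# launch 4 MPI ranks
mpirun -n 4 ./mpi_hello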
intel-oneapi-mpi
Description
Intel MPI Library is a multifabric message-passing library that implements the open-source MPICH specification. Use the library to create, maintain, and test advanced, complex applications that perform better on high-performance computing (HPC) clusters based on Intel processors.
Versions
- Negishi: 2021.8.0
- Gautschi: 2024.1
Module
You can load the module by running:
module load intel-oneapi-mpi
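To confirm which MPI library the loaded module provides, the launcher and wrappers can report their configuration (a sketch; the exact output varies by version):
mpirun --version    # should identify the Intel MPI Library build
mpicc -show         # print the compiler and link line the wrapper invokes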
mvapich2
Description
MVAPICH2 is a high-performance MPI library for clusters with diverse networks (InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE) and computing platforms (x86 Intel and AMD, ARM, and OpenPOWER).
Versions
- Negishi: 2.3.7
- Anvil: 2.3.6
- Gautschi: 2.3.7-1
Module
You can load the module by running:
module load mvapich2
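A minimal compile-and-run sketch (mpi_hello.c is a hypothetical source file, and the srun launch is an assumption; the preferred launch method can vary by cluster):
# MVAPICH2 provides MPICH-style wrappers: mpicc, mpicxx, mpif90
mpicc -o mpi_hello mpi_hello.c
# launch 4 MPI ranks under Slurm
srun -n 4 ./mpi_hello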
openmpi
Description
An open source Message Passing Interface implementation.
Versions
- Bell: 3.1.4, 3.1.6, 4.0.5, 4.1.3, 4.1.5
- Scholar: 2.1.6, 3.1.6, 4.1.5
- Gilbreth: 3.1.6-gpu-cuda10, 3.1.6-gpu-cuda11, 4.1.5-gpu-cuda11, 4.1.5-gpu-cuda12
- Negishi: 4.1.4
- Anvil: 3.1.6, 4.0.6, 4.1.6
- Gautschi: 4.1.6, 5.0.5
Module
You can load the module by running:
module load openmpi
Compile MPI Code
Language | Command |
---|---|
Fortran 77 | mpif77 |
Fortran 90 | mpif90 |
Fortran 95 | mpif90 |
C | mpicc |
C++ | mpicxx |
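For example, a hypothetical C source file mpi_hello.c could be built with the wrapper from the table above (a sketch):
mpicc -o mpi_hello mpi_hello.c
# the Open MPI wrappers can also print the underlying compiler and flags they invoke
mpicc --showme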
Run MPI Executables
Create a job submission file:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:01:00
#SBATCH -A XXXX
srun -n 256 ./mpi_hello
SLURM can run an MPI program with the srun command. The number of processes is requested with the -n option. If you do not specify the -n option, it will default to the total number of processor cores you request from SLURM.
To run MPI executables, users can also use mpirun or mpiexec from openmpi. Note that mpiexec and mpirun are synonymous in openmpi.
mpirun -n number-of-processes [options] executable
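For example, inside the job script above, mpirun could be used in place of srun (a sketch; Open MPI's mpirun detects the Slurm allocation, and 256 matches the tasks requested there):
mpirun -n 256 ./mpi_hello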