
MPIs

The Message Passing Interface (MPI) is a standardized, portable message-passing specification designed to function on a wide variety of parallel computing architectures.

impi

Description

Intel MPI

Versions

  • Bell: 2019.5.281
  • Brown: 2019.3.199
  • Scholar: 2019.3.199
  • Gilbreth: 2019.5.281
  • Negishi: 2019.9.304
  • Anvil: 2019.5.281

Module

You can load the modules with:

module load intel
module load impi
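
Intel MPI also ships its own compiler wrapper scripts (mpiicc for C, mpiicpc for C++, and mpiifort for Fortran). As a sketch, assuming the impi module places these wrappers on your PATH, a C program could be compiled with:

$ mpiicc program.c -o program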

intel-oneapi-mpi

Description

Intel MPI Library is a multifabric message-passing library that implements the open-source MPICH specification. Use the library to create, maintain, and test advanced, complex applications that perform better on high-performance computing (HPC) clusters based on Intel processors.

Versions

  • Negishi: 2021.8.0

Module

You can load the module with:

module load intel-oneapi-mpi

mvapich2

Description

MVAPICH2 is a high-performance MPI library for clusters with diverse networks (InfiniBand, Omni-Path, Ethernet/iWARP, and RoCE) and computing platforms (x86 Intel and AMD, ARM, and OpenPOWER).

Versions

  • Negishi: 2.3.7
  • Anvil: 2.3.6

Module

You can load the module with:

module load mvapich2

openmpi

Description

An open source Message Passing Interface implementation.

Versions

  • Bell: 2.1.6, 3.1.6, 4.0.5, 4.1.3
  • Brown: 1.10.7, 2.1.6, 3.1.4
  • Scholar: 2.1.6, 3.1.6
  • Gilbreth: 3.1.6-gpu-cuda11
  • Negishi: 4.1.4
  • Anvil: 3.1.6, 4.0.6

Module

You can load the module with:

module load openmpi

Compile MPI Code

The following table illustrates how to compile your MPI program.

Language     Command
Fortran 77   $ mpif77 program.f -o program
Fortran 90   $ mpif90 program.f90 -o program
Fortran 95   $ mpif90 program.f95 -o program
C            $ mpicc program.c -o program
C++          $ mpiCC program.C -o program
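
For reference, below is a minimal MPI "hello world" program in C (an illustrative sketch; the file name mpi_hello.c is assumed here so that the resulting binary matches the mpi_hello executable used in the job script below):

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;

    /* Initialize the MPI environment */
    MPI_Init(&argc, &argv);

    /* Get the rank of this process and the total number of processes */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    /* Shut down the MPI environment */
    MPI_Finalize();
    return 0;
}

It can be compiled with the C command from the table above:

$ mpicc mpi_hello.c -o mpi_hello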

Run MPI Executables

Create a job submission file:

#!/bin/bash

#SBATCH  --nodes=2
#SBATCH  --ntasks-per-node=128
#SBATCH  --time=00:01:00
#SBATCH  -A XXXX

srun -n 256 ./mpi_hello

SLURM can run an MPI program with the srun command. The number of processes is requested with the -n option. If you do not specify the -n option, it will default to the total number of processor cores you request from SLURM.
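
Assuming the job submission file above is saved as, for example, mpi_hello.sub (the file name is illustrative), it can be submitted with:

$ sbatch mpi_hello.sub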

To run MPI executables, users can also use mpirun or mpiexec from Open MPI. Note that mpirun and mpiexec are synonymous in Open MPI.

mpirun -n number-of-processes [options] executable
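
For example, to launch the mpi_hello program from above on 256 processes inside the job allocation:

mpirun -n 256 ./mpi_hello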