openmpi
Description
An open source Message Passing Interface implementation.
Versions
- Bell: 3.1.4, 3.1.6, 4.0.5, 4.1.3, 4.1.5
- Scholar: 2.1.6, 3.1.6, 4.1.5
- Gilbreth: 3.1.6-gpu-cuda10, 3.1.6-gpu-cuda11, 4.1.5-gpu-cuda11, 4.1.5-gpu-cuda12
- Negishi: 4.1.4
- Anvil: 3.1.6, 4.0.6, 4.1.6
- Gautschi: 4.1.6, 5.0.5
Module
You can load the module with:
module load openmpi
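To load a specific version, append it to the module name. The version shown here is one of those listed above; availability varies by cluster:
module load openmpi/4.1.5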
Compile MPI Code
Language | Command |
---|---|
Fortran 77 | mpif77 |
Fortran 90 | mpif90 |
Fortran 95 | mpif90 |
C | mpicc |
C++ | mpiCC |
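For reference, below is a minimal MPI program that these wrapper compilers can build. The file name mpi_hello.c and the program itself are illustrative, chosen to match the mpi_hello executable used in the job script below:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int rank, size;

    MPI_Init(&argc, &argv);                 /* initialize the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}

Compile it with the C wrapper from the table:
mpicc -o mpi_hello mpi_hello.c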
Run MPI Executables
Create a job submission file:
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=00:01:00
#SBATCH -A XXXX
srun -n 256 ./mpi_hello
SLURM can run an MPI program with the srun command. The number of processes is requested with the -n option. If you do not specify -n, it defaults to the total number of tasks requested from SLURM (here, 2 nodes × 128 tasks per node = 256).
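Submit the script with sbatch. The file name mpi_hello.sub is illustrative; use whatever name you saved the script under:
sbatch mpi_hello.sub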
To run MPI executables, users can also use mpirun or mpiexec from openmpi. Note that mpiexec and mpirun are synonymous in openmpi.
mpirun -n number-of-processes [options] executable
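For example, to launch the same 256-process run as the srun example above:
mpirun -n 256 ./mpi_hello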