
Optimizing MPI performance on Hansen


To help you get the best performance from the new Hansen cluster, ITaP would like to offer some tips on using Hansen most effectively.

Hansen uses Mellanox 10 Gbps Ethernet adapters for its network interconnect. In addition to handling Hansen's TCP/IP traffic, these adapters support RoCE (RDMA over Converged Ethernet, pronounced "Rocky"), a protocol that provides hardware acceleration for MPI communication. In practice, enabling RoCE increases available bandwidth by up to 50% and decreases latency by a factor of three.

You can read more about RoCE here: http://www.mellanox.com/related-docs/prod_software/ConnectX-2_RDMA_RoCE.pdf

To take full advantage of RoCE, MPI applications should be recompiled with OpenMPI (version 1.4.4 or greater) or MVAPICH-2. You can list the compiler versions OpenMPI is built against with the command: module avail openmpi

To use the recommended compiler and MPI combination on Hansen, simply run the command: module load devel
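
As a rough sketch of the workflow (the source file name myapp.c and the rank count below are placeholders for illustration, not Hansen defaults), recompiling and running an application against the recommended stack looks something like:

    module load devel
    mpicc -O2 -o myapp myapp.c
    mpirun -np 8 ./myapp

Recompiling against the RoCE-enabled OpenMPI is what lets your application pick up the hardware acceleration; binaries built against the TCP/IP-only stack will not benefit.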

The following graphs illustrate the latency, bandwidth, and message rate improvements achieved simply by switching from the TCP/IP-based MPICH-2 1.4.0 to a RoCE-enabled OpenMPI. (Of course, actual results with your application may vary.)
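
If you would like to get a feel for these numbers on your own jobs, a minimal ping-pong microbenchmark along the following lines will report approximate round-trip latency and bandwidth between two ranks. This is only an illustrative sketch; the file name pingpong.c, message size, and iteration count are arbitrary choices and this is not the code used to produce the graphs above.

    /* pingpong.c: minimal MPI ping-pong sketch for estimating
       point-to-point round-trip latency and bandwidth. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size < 2) {
            if (rank == 0)
                fprintf(stderr, "Run with at least 2 ranks, e.g. mpirun -np 2 ./pingpong\n");
            MPI_Finalize();
            return 1;
        }

        const int iters = 1000;
        const int nbytes = 1 << 20;              /* 1 MiB messages */
        char *buf = malloc((size_t)nbytes);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        /* Ranks 0 and 1 bounce a message back and forth; other ranks idle. */
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        double elapsed = MPI_Wtime() - t0;

        if (rank == 0) {
            /* Each iteration is one round trip moving 2 * nbytes of data. */
            double rtt = elapsed / iters;
            double bw  = (2.0 * nbytes) / rtt / 1e6;   /* MB/s */
            printf("avg round-trip: %.3f us, approx bandwidth: %.1f MB/s\n",
                   rtt * 1e6, bw);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Compile it with mpicc after module load devel, and make sure the two ranks land on different nodes (for example via your job's host file) so the traffic actually crosses the interconnect rather than staying in shared memory.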

Please give OpenMPI 1.4.4 a try with your MPI application and see if it can help improve your time to solution.

If you have any questions or issues using OpenMPI 1.4.4, please don't hesitate to contact us at rcac-help@purdue.edu.
