
Overview of Rice

Rice is a Purdue Community Cluster, optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Rice was built through a partnership with HP and Intel in April 2015. Rice consists of HP compute nodes with two 10-core Intel Xeon E5 processors (20 cores per node) and 64 GB of memory. All nodes have a 56 Gbps FDR InfiniBand interconnect and a 5-year warranty.

Rice Namesake

Rice is named in honor of John R. Rice, the W. Brooks Fortune Distinguished Professor Emeritus of Computer Science. More information about his life and impact on Purdue is available in an ITaP Biography of Rice.

Rice Specifications

All Rice nodes have 20 processor cores, 64 GB of RAM, and a 56 Gbps FDR InfiniBand interconnect.
Front-Ends:
    Number of Nodes:     4
    Processors per Node: Two Haswell CPUs @ 2.60GHz
    Cores per Node:      20
    Memory per Node:     64 GB
    Retires in:          2020

Sub-Cluster A:
    Number of Nodes:     576
    Processors per Node: Two Haswell CPUs @ 2.60GHz
    Cores per Node:      20
    Memory per Node:     64 GB
    Retires in:          2020

Rice nodes run CentOS 7 and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. The application of operating system patches occurs as security needs dictate. All nodes allow for unlimited stack usage, as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
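The Moab/TORQUE setup described above can be illustrated with a minimal job script. This is a hedged sketch using standard TORQUE/PBS directives rather than Rice-specific documentation; `hello` is a hypothetical MPI executable and the walltime is arbitrary:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=20       # request one whole Rice node (20 cores)
#PBS -l walltime=00:30:00    # wall-clock limit (arbitrary example value)
#PBS -N rice_example         # job name (arbitrary)

cd "$PBS_O_WORKDIR"          # run from the directory the job was submitted in
module load rcac             # recommended Intel / MKL / Intel MPI stack
mpirun -np 20 ./hello        # hypothetical MPI executable
```

A script like this would be submitted with the standard TORQUE commands, e.g. `$ qsub rice_example.sub`, and monitored with `$ qstat`.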

On Rice, ITaP recommends the following compiler, math library, and message-passing library for parallel code:

  • Intel
  • MKL
  • Intel MPI

This compiler and these libraries are loaded by default. To load the recommended set again:

$ module load rcac

To verify what you loaded:

$ module list
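With the recommended modules loaded, building a parallel program might look like the session below. This is a sketch assuming the compiler wrappers that ship with Intel MPI (`mpiicc` for C); `hello.c` is a hypothetical source file:

```shell
$ module load rcac              # Intel compiler, MKL, Intel MPI
$ mpiicc -O2 hello.c -o hello   # Intel MPI's C compiler wrapper
$ mpirun -np 4 ./hello          # brief smoke test with 4 ranks
```

Long or full-node runs should generally go through the batch system rather than being run directly on a front-end.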
