
Overview of Halstead

Halstead is a Purdue Community Cluster, optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Halstead was built through a partnership with HP and Intel in November 2016. Halstead consists of HP compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node) and 128 GB of memory. All nodes have 100 Gbps EDR Infiniband interconnect and a 5-year warranty.

Halstead Namesake

Halstead is named in honor of Maurice H. Halstead, Professor of Computer Science and Software Science pioneer. More information about his life and impact on Purdue is available in an ITaP Biography of Halstead.

Halstead Specifications

All Halstead nodes have 20 processor cores, 128 GB of RAM, and 100 Gbps Infiniband interconnects.
Front-Ends | Number of Nodes | Processors per Node         | Cores per Node | Memory per Node | Retires in
No GPU     | 4               | Two Haswell CPUs @ 2.60 GHz | 20             | 128 GB          | 2021

Sub-Cluster | Number of Nodes | Processors per Node         | Cores per Node | Memory per Node | Retires in
A           | 508             | Two Haswell CPUs @ 2.60 GHz | 20             | 128 GB          | 2021

Halstead nodes run CentOS 7 and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. The application of operating system patches occurs as security needs dictate. All nodes allow for unlimited stack usage, as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
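Since jobs on Halstead are managed through TORQUE/Moab's PBS interface, a batch job is described by a job script of PBS directives followed by the commands to run. The sketch below is a minimal example, not an official template: the queue name, job name, and walltime are assumptions and should be replaced with values appropriate to your allocation.

```shell
#!/bin/bash
#PBS -q standby            # queue name is an assumption; use your own queue/account
#PBS -l nodes=1:ppn=20     # one full Halstead node (20 cores per node)
#PBS -l walltime=00:30:00  # requested walltime; adjust as needed
#PBS -N example-job        # job name (hypothetical)

cd "$PBS_O_WORKDIR"        # start in the directory qsub was invoked from
module load rcac           # recommended Intel / MKL / Intel MPI stack

mpirun -np 20 ./my_mpi_program   # my_mpi_program is a placeholder executable
```

Submit the script with qsub and monitor it with qstat -u $USER.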

On Halstead, ITaP recommends the following compiler, math library, and message-passing (MPI) library for building and running parallel code:

  • Intel
  • MKL
  • Intel MPI

This compiler and these libraries are loaded by default. To load the recommended set again:

$ module load rcac

To verify what you loaded:

$ module list
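With the recommended stack loaded, Intel MPI provides compiler wrapper scripts for building MPI programs. The session below is a sketch under assumptions: hello.c is a hypothetical source file, and the -mkl flag (an Intel compiler option) links against MKL.

```shell
# Build an MPI program with the Intel MPI C wrapper, linking MKL.
# hello.c is a hypothetical source file used here for illustration.
mpiicc -O2 -mkl hello.c -o hello

# Run on 20 ranks (one full Halstead node), either interactively
# on a front-end for short tests or inside a batch job:
mpirun -np 20 ./hello
```

The wrappers (mpiicc for C, mpiicpc for C++, mpiifort for Fortran) pass the correct Intel MPI include and library paths to the underlying Intel compilers.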


© 2017 Purdue University | An equal access/equal opportunity university | Copyright Complaints | Maintained by ITaP Research Computing
