
Overview of Snyder

Snyder is a Purdue Community Cluster, continually expanded and refreshed, and optimized for data-intensive applications that require large amounts of shared memory per node, such as those in the life sciences. Snyder was originally built through a partnership with HP and Intel in April 2015, and has most recently been expanded with nodes from Dell. Snyder consists of a variety of compute node configurations, as shown in the table below. All nodes have 40 Gbps Ethernet connections and a 5-year warranty. Snyder is expanded annually, and each year's purchase of nodes remains in production for 5 years from the initial purchase.

To purchase access to Snyder today, go to the Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed about the latest purchasing developments, or contact us via email if you have any questions.

Snyder Namesake

Snyder is named in honor of James C. Snyder, a Professor of Agricultural Economics and a pioneer in applying computer models to agribusiness. More information about his life and impact on Purdue is available in an ITaP Biography of Snyder.

Snyder Specifications

Snyder compute node hardware varies by sub-cluster, as shown below.

Front-Ends:
Number of Nodes | Processors per Node         | Cores per Node | Memory per Node | Retires in
2               | Two Haswell CPUs @ 2.60 GHz | 20             | 64 GB           | 2020

Sub-Clusters:
Sub-Cluster | Number of Nodes | Processors per Node         | Cores per Node | Memory per Node | Retires in
A           | 52              | Two Haswell CPUs @ 2.60 GHz | 20             | 256 GB          | 2020
B           | 7               | Two Haswell CPUs @ 2.60 GHz | 20             | 512 GB          | 2020
C           | 10              | Two Haswell CPUs @ 2.60 GHz | 20             | 512 GB          | 2021
D           | 2               | Two Haswell CPUs @ 2.60 GHz | 20             | 1 TB            | 2021
E           | 8               | Two Skylake CPUs @ 2.60 GHz | 24             | 384 GB          | 2022

Snyder nodes run CentOS 7 and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. The application of operating system patches occurs as security needs dictate. All nodes allow for unlimited stack usage, as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
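As a minimal sketch, a TORQUE/PBS batch script for Snyder might look like the following. The queue name, job name, and executable name are placeholders, not actual site defaults; substitute your own group's queue and program.

```shell
#!/bin/bash
# Hypothetical example job script -- queue, job name, and program are placeholders.
#PBS -q standby            # queue to submit to (placeholder; use your group's queue)
#PBS -l nodes=1:ppn=20     # one Haswell node, all 20 cores
#PBS -l walltime=01:00:00  # one hour of wall-clock time
#PBS -N example_job        # job name (placeholder)

cd "$PBS_O_WORKDIR"        # run from the directory the job was submitted in

module load rcac           # load the recommended compiler/library set

mpiexec ./my_program       # launch the MPI executable (placeholder name)
```

A script like this would typically be submitted with `qsub myjob.sub`, and job status checked with `qstat -u $USER`.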

On Snyder, ITaP recommends the following compiler, math library, and message-passing library for parallel code:

  • Intel
  • MKL
  • Intel MPI

This compiler and these libraries are loaded by default. To load the recommended set again:

$ module load rcac

To verify what you loaded:

$ module list


© 2017 Purdue University | An equal access/equal opportunity university | Copyright Complaints | Maintained by ITaP Research Computing
