Overview of Snyder
To purchase access to Snyder today, go to the 2015 Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at firstname.lastname@example.org if you have any questions.
Snyder is named in honor of James C. Snyder, a Professor of Agricultural Economics and a pioneer in applying computer models to agribusiness. More information about his life and impact on Purdue is available in an ITaP Biography of James C. Snyder.
Most Snyder nodes consist of similar hardware. All Snyder nodes have 20 processor cores and 40 Gbps Ethernet interconnects; memory per node ranges from 256 GB to 1 TB, depending on the sub-cluster.
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | TeraFLOPS |
|---|---|---|---|---|---|---|
| Snyder-2015 (A) | 52 | Two 10-Core Intel Xeon-E5 | 20 | 256 GB | 40 Gbps Ethernet | TBD |
| Snyder-2015 (B) | 7 | Two 10-Core Intel Xeon-E5 | 20 | 512 GB | 40 Gbps Ethernet | TBD |
| Snyder-2016 (C) | 10 | Two 10-Core Intel Xeon-E5 | 20 | 512 GB | 40 Gbps Ethernet | TBD |
| Snyder-2016 (D) | 2 | Two 10-Core Intel Xeon-E5 | 20 | 1 TB | 40 Gbps Ethernet | TBD |
Snyder nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. The batch system is optimized for jobs that use one or more entire nodes and for maximum job throughput. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be limiting factors).
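Jobs are submitted to the batch system with `qsub`. As a sketch of a job script requesting one whole node (the queue name `snyder`, the walltime, and the executable name are assumptions for illustration; check which queues your account can use before submitting):

```bash
#!/bin/bash
#PBS -q snyder              # assumed queue name for this cluster
#PBS -l nodes=1:ppn=20      # one whole Snyder node (20 cores)
#PBS -l walltime=01:00:00   # one hour of wall-clock time
#PBS -N example-job         # job name shown in qstat

cd "$PBS_O_WORKDIR"         # start in the directory qsub was run from
module load devel           # load the recommended development stack
./my_program                # hypothetical executable
```

Submit the script with `qsub myjob.sub`; `qstat -u $USER` then shows the job's status in the queue.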
On Snyder, ITaP recommends the following set of compiler and message-passing library for parallel code:
- Intel compilers
- Intel MPI
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list
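With the recommended stack loaded, a parallel program can be compiled with the Intel MPI compiler wrappers (`mpiicc` for C, `mpiifort` for Fortran) and, inside a batch job, launched with `mpirun`. The source and executable names below are illustrative:

```bash
$ mpiicc hello_mpi.c -o hello_mpi    # compile C source with the Intel MPI wrapper
$ mpirun -np 20 ./hello_mpi          # launch 20 MPI ranks, one per core on a node
```

Running `mpirun` on the front-end nodes is generally discouraged; place these commands in a job script so the ranks run on compute nodes allocated by the batch system.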