Computational Resources

ITaP maintains a range of computational resources. The list below gives a brief introduction to each current resource; more information and detailed documentation are available for each one.


  • Scholar

    The Scholar cluster is open to Purdue instructors from any field whose classes include assignments that could make use of supercomputing, from high-end graphics rendering and weather modeling to simulating millions of molecules and exploring masses of data to understand the dynamics of social networks.

  • Data Workbench

    The Data Workbench is an interactive compute environment for non-batch big data analysis and simulation, and is part of Purdue's Community Cluster Program. The Data Workbench consists of HP compute nodes with two 8-core Intel Xeon processors (16 cores per node) and 128 GB of memory. All nodes are interconnected with 10 Gigabit Ethernet. The Data Workbench entered production on October 1, 2017.

  • Brown

    Brown is Purdue's newest Community Cluster and is optimized for communities running traditional, tightly-coupled science and engineering applications (see the MPI sketch after this list). Brown was built through a partnership with Dell and Intel in October 2017. Brown consists of Dell compute nodes with two 12-core Intel Xeon Gold "Skylake" processors (24 cores per node) and 96 GB of memory. All nodes have a 100 Gbps EDR InfiniBand interconnect and a 5-year warranty.

  • Brown-GPU

    Brown-GPU is a new type of addition to Purdue's Community Clusters, designed specifically for applications that can take advantage of GPU accelerators. While applications must be specially crafted to use GPUs, a GPU-enabled application can often run many times faster than the same application on general-purpose CPUs (see the GPU sketch after this list). Due to the increased cost of GPU-equipped nodes, Brown-GPU is offered with new purchase options that allow shared access at a lower price point than the full cost of a node.

  • Halstead

    Halstead is optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Halstead was built through a partnership with HP and Intel in November 2016. Halstead consists of HP compute nodes with two 10-core Intel Xeon E5 processors (20 cores per node) and 128 GB of memory. All nodes have a 100 Gbps EDR InfiniBand interconnect and a 5-year warranty.

  • Halstead-GPU

    Halstead-GPU is a new type of addition to Purdue's Community Clusters, designed specifically for applications that can take advantage of GPU accelerators. While applications must be specially crafted to use GPUs, a GPU-enabled application can often run many times faster than the same application on general-purpose CPUs. Due to the increased cost of GPU-equipped nodes, Halstead-GPU is offered with new purchase options that allow shared access at a lower price point than the full cost of a node.

  • Rice

    Rice is optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Rice was built through a partnership with HP and Intel in April 2015. Rice consists of HP compute nodes with two 10-core Intel Xeon E5 processors (20 cores per node) and 64 GB of memory. All nodes have a 56 Gbps FDR InfiniBand interconnect and a 5-year warranty.

  • Snyder

    Snyder is a Purdue Community Cluster that is continually expanded and refreshed, optimized for data-intensive applications requiring large amounts of shared memory per node, such as those in the life sciences. Snyder was originally built through a partnership with HP and Intel in April 2015 and has most recently been expanded with nodes from Dell. Snyder consists of a variety of compute node configurations, detailed in its documentation. All nodes have 40 Gbps Ethernet connections and a 5-year warranty. Snyder is expanded annually, with each year's purchase of nodes remaining in production for 5 years from its initial purchase.

  • Hammer

    Hammer is optimized for Purdue's communities utilizing loosely-coupled, high-throughput computing (see the high-throughput sketch after this list). Hammer was initially built through a partnership with HP and Intel in April 2015 and was expanded in late 2016. Hammer will be expanded annually, with each year's purchase of nodes remaining in production for 5 years from its initial purchase.

  • REED

    REED is designed for working with data encumbered with federal security regulations. It is implemented as an expandable set of instances within Amazon's AWS GovCloud facility, with access portals in the Purdue academic domain.
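
As an illustration of the tightly-coupled workloads that Brown, Halstead, and Rice target, the sketch below uses mpi4py to split a sum across MPI ranks and combine the partial results with a collective reduction; ranks placed on different nodes exchange data over the cluster interconnect. This is a minimal, generic sketch rather than Purdue-specific documentation: module environments, batch schedulers, and launch commands vary by cluster and are assumptions here.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the MPI job
    size = comm.Get_size()   # total number of MPI processes

    # Each rank computes a partial sum over a strided slice of the range,
    # then the partial sums are combined across ranks (and nodes).
    local = sum(range(rank, 10_000_000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print("total =", total)

Launched with something like "mpirun -n 40 python sum.py" (a hypothetical script name), the 40 ranks would span two 20-core Halstead or Rice nodes; exactly how processes are requested depends on the cluster's batch scheduler.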

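To show what "specially crafted to use GPUs" means in practice for Brown-GPU and Halstead-GPU, here is a minimal sketch, assuming a CUDA-capable GPU and the CuPy package (neither of which this page guarantees is available): the same array expression is written once against NumPy on the CPU and once against CuPy on the GPU, and only the module prefix changes, but the second version runs on the accelerator.

    import numpy as np
    import cupy as cp   # assumption: CUDA-capable GPU and CuPy installed

    n = 10_000_000

    # CPU version: ordinary NumPy arrays, computed on the host cores.
    x_cpu = np.random.rand(n)
    y_cpu = np.sqrt(x_cpu) * np.sin(x_cpu)

    # GPU version: the same expression on CuPy arrays, computed on the GPU.
    x_gpu = cp.random.rand(n)
    y_gpu = cp.sqrt(x_gpu) * cp.sin(x_gpu)
    cp.cuda.Stream.null.synchronize()   # wait for the GPU kernels to finish

Whether such a port pays off depends on the application; the "many times faster" figure above is typical of dense, data-parallel kernels and is not guaranteed in general.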
 
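Hammer's loosely-coupled, high-throughput workloads consist of many independent tasks that never communicate with one another. The sketch below shows that pattern on a single node using Python's standard multiprocessing module; on Hammer itself the same idea would more commonly be expressed as many independent batch jobs, and the score() function and its inputs here are purely hypothetical placeholders.

    from multiprocessing import Pool

    def score(sample_id):
        # Stand-in for one independent task (e.g. processing one input file);
        # tasks share no state and never talk to each other.
        return sample_id * sample_id

    if __name__ == "__main__":
        with Pool() as pool:   # one worker process per available core
            results = pool.map(score, range(1000))
        print("completed", len(results), "independent tasks")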

Information about many retired computational resources is available on the Retired Resources page.
