Computational Resources

ITaP maintains a variety of computational resources. Below is a brief introduction to each current resource; more detailed information and documentation are available for every resource listed.


  • Brown

    Brown is Purdue's newest Community Cluster and is optimized for communities running traditional, tightly-coupled science and engineering applications; a brief MPI example illustrating this style of computation appears after this list. Brown was built through a partnership with Dell and Intel in October 2017. Brown consists of Dell compute nodes with two 12-core Intel Xeon Gold "Skylake" processors (24 cores per node) and 96 GB of memory. All nodes have 100 Gbps EDR Infiniband interconnect and a 5-year warranty.

  • Halstead

    Halstead is optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Halstead was built through a partnership with HP and Intel in November 2016. Halstead consists of HP compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node) and 128 GB of memory. All nodes have 100 Gbps EDR Infiniband interconnect and a 5-year warranty.

  • Halstead-GPU

    Halstead-GPU is a new addition to Purdue's Community Clusters, designed specifically for applications that can take advantage of GPU accelerators. While applications must be specially written to use GPUs, a GPU-enabled application can often run many times faster than the same application on general-purpose CPUs; a short sketch of a GPU-offloaded kernel appears after this list. Because GPU-equipped nodes cost more, Halstead-GPU is offered with new purchase options that allow shared access at a lower price point than the full cost of a node.

  • Rice

    Rice is optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Rice was built through a partnership with HP and Intel in April 2015. Rice consists of HP compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node) and 64 GB of memory. All nodes have 56 Gbps FDR Infiniband interconnect and a 5-year warranty.

  • Snyder

    Snyder is a Purdue Community Cluster that is continually expanded and refreshed, optimized for data-intensive applications requiring large amounts of shared memory per node, such as those in the life sciences. Snyder was originally built through a partnership with HP and Intel in April 2015, though it has most recently been expanded with nodes from Dell. Snyder consists of a variety of compute node configurations. All nodes have 40 Gbps Ethernet connections and a 5-year warranty. Snyder is expanded annually, and each year's purchase of nodes remains in production for 5 years from its initial purchase.

  • Conte

    Conte was built through a partnership with HP and Intel in June 2013, and is the largest of Purdue's flagship community clusters. Conte consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 64 GB of memory. Each node is also equipped with two 60-core Xeon Phi coprocessors. All nodes have 40 Gbps FDR10 Infiniband connections and a 5-year warranty. Conte is planned to be decommissioned on November 30, 2018.

  • Hammer

    Hammer is optimized for Purdue's communities utilizing loosely-coupled, high-throughput computing. Hammer was initially built through a partnership with HP and Intel in April 2015 and was expanded in late 2016. Hammer will be expanded annually, with each year's purchase of nodes remaining in production for 5 years from its initial purchase.

  • Scholar

    The Scholar cluster is open to Purdue instructors from any field whose classes include assignments that could make use of supercomputing, from high-end graphics rendering and weather modeling to simulating millions of molecules and exploring masses of data to understand the dynamics of social networks.

  • REED

    REED is designed for working with data subject to federal security regulations. It is implemented as an expandable set of instances within Amazon's AWS GovCloud facility, with access portals in the Purdue academic domain.

  • BoilerGrid

    BoilerGrid is a large, high-throughput, distributed computing system operated by ITaP that uses the HTCondor software developed by the HTCondor Project at the University of Wisconsin. BoilerGrid provides a way for you to run programs on large numbers of otherwise idle computers in various locations, including any temporarily under-utilized high-performance cluster resources as well as some desktop machines not currently in use; a sketch of an HTCondor job submission appears after this list.

  • Radon

    Radon is a compute cluster operated by ITaP for general campus use. Radon consists of 45 HP Moonshot compute nodes, each with 32 GB of memory, connected by 10 Gigabit Ethernet (10GigE).

  • Hathi

    Hathi is a Hadoop cluster operated by ITaP and available as a shared resource to partners in Purdue's Community Cluster Program; a sketch of a simple Hadoop Streaming job appears after this list. Hathi went into production on September 8, 2014. Hathi consists of 6 Dell compute nodes, each with two 8-core Intel Xeon E5-2650 v2 processors, 32 GB of memory, and 48 TB of local storage, for a total cluster capacity of 288 TB. All nodes have 40 Gigabit Ethernet interconnects and a 5-year warranty.

  • Data Workbench

    The Data Workbench is an interactive compute environment for non-batch big data analysis and simulation, and is part of Purdue's Community Cluster Program. The Data Workbench consists of HP compute nodes with two 8-core Intel Xeon processors (16 cores per node) and 128 GB of memory. All nodes are interconnected with 10 Gigabit Ethernet. The Data Workbench entered production on October 1, 2017.
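
On the tightly-coupled clusters above (Brown, Halstead, Rice, and Conte), parallel applications typically coordinate many cores across nodes with MPI. The following is an illustrative sketch only, assuming Python and the mpi4py package are available in your environment; production codes are just as often written in C, C++, or Fortran against an MPI library directly.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's ID within the job
    size = comm.Get_size()   # total number of MPI ranks, e.g. 24 per Brown node

    # Each rank computes a partial sum; a reduction combines the results.
    # Collective communication like this is what the Infiniband fabric accelerates.
    local = sum(range(rank, 1_000_000, size))
    total = comm.reduce(local, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"Sum computed by {size} ranks: {total}")

Launched with an MPI launcher such as mpiexec (for example, mpiexec -n 24 python partial_sum.py), every rank runs the same script and differs only in its rank number.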
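
For Halstead-GPU, "specially written" means the compute-heavy inner loops are expressed as GPU kernels. The sketch below assumes Python with the numba package purely for illustration; GPU applications are just as commonly written in CUDA C/C++ or built on GPU-enabled libraries.

    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy(a, x, y, out):
        i = cuda.grid(1)            # global thread index on the GPU
        if i < x.size:
            out[i] = a * x[i] + y[i]

    n = 1_000_000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)
    out = np.zeros_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    # numba copies the NumPy arrays to and from the device around the launch
    saxpy[blocks, threads](np.float32(2.0), x, y, out)

Each of the n elements is handled by its own GPU thread, which is why such kernels can outrun a CPU loop by a large factor when the problem is big enough to keep the accelerator busy.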
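
BoilerGrid jobs are described to HTCondor in a submit description file and queued with condor_submit. The sketch below is hypothetical: my_analysis and the input_*.dat files are placeholder names, and the Python wrapper is only one convenient way to generate and submit the description.

    import subprocess
    import textwrap
    from pathlib import Path

    # One submit description queues many independent jobs; $(Process)
    # expands to 0..99, giving each job its own input and output files.
    submit_description = textwrap.dedent("""\
        executable = my_analysis
        arguments  = input_$(Process).dat
        output     = out_$(Process).txt
        error      = err_$(Process).txt
        log        = jobs.log
        queue 100
    """)

    Path("jobs.sub").write_text(submit_description)
    subprocess.run(["condor_submit", "jobs.sub"], check=True)

HTCondor then matches each queued job to an idle machine in the pool as one becomes available, which is what makes the system well suited to large numbers of independent, serial tasks.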
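
On a Hadoop cluster such as Hathi, an analysis is usually expressed as a map step and a reduce step that the framework runs in parallel across the nodes' local storage. The word-count sketch below uses Hadoop Streaming with two small Python scripts; the script names and the choice of Streaming over native Java MapReduce are illustrative assumptions, not a description of Hathi's configuration.

    # mapper.py: emit "word <tab> 1" for every word on standard input.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

and the matching reducer:

    # reducer.py: input arrives grouped and sorted by key; sum the counts.
    import sys

    current, count = None, 0
    for line in sys.stdin:
        word, value = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(value)
    if current is not None:
        print(f"{current}\t{count}")

The pair is handed to the cluster with the Hadoop Streaming launcher (roughly: hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input ... -output ...), with the exact jar path and options depending on the installation.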


Information about many retired computational resources is available on the Retired Resources page.
