Computational Resources

ITaP maintains many different resources for computation. Below is a brief introduction to the current computational resources; more detailed information and documentation are available for each resource listed.

  • Rice

    Rice is one of the newest of Purdue's Community Clusters, optimized for Purdue's communities running traditional, tightly-coupled science and engineering applications. Rice was built through a partnership with HP and Intel in April 2015. Rice consists of HP compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node) and 64 GB of memory. All nodes have 56 Gbps FDR Infiniband connections and a 5-year warranty.

    To purchase access to Rice today, go to the 2015 Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

  • Snyder

    Snyder is one of the newest of Purdue's Community Clusters, optimized for data-intensive applications that require large amounts of shared memory per node, such as those in the life sciences. Snyder was built through a partnership with HP and Intel in April 2015. Snyder consists of HP compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node) and 256 GB of memory. All nodes have 40 Gbps Ethernet connections and a 5-year warranty. Snyder will be expanded annually, with each year's purchase of nodes to remain in production for 5 years from their initial purchase.

    To purchase access to Snyder today, go to the 2015 Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

  • Conte

    Conte was built through a partnership with HP and Intel in June 2013, and is the largest of Purdue's flagship community clusters. Conte consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 64 GB of memory. Each node is also equipped with two 60-core Xeon Phi coprocessors. All nodes have 40 Gbps FDR10 Infiniband connections and a 5-year warranty. Conte is planned to be decommissioned on November 30, 2018.

    To purchase access to Conte today, go to the Conte Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

  • Carter

    Carter was launched through an ITaP partnership with Intel in November 2011 and is a member of Purdue's Community Cluster Program. Carter primarily consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and between 32 GB and 256 GB of memory. A few NVIDIA GPU-accelerated nodes are also available. All nodes have 56 Gbps FDR Infiniband connections and a 5-year warranty. Carter is planned to be decommissioned on April 30, 2017.

    To purchase access to Carter today, go to the Carter Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

  • Hansen

    Hansen is a compute cluster operated by ITaP and is a member of Purdue's Community Cluster Program. Hansen went into production on September 15, 2011. Hansen consists of Dell compute nodes with four 12-core AMD Opteron 6176 processors (48 cores per node) and 96 GB, 192 GB, or 512 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty. Hansen is planned to be decommissioned in 2016.

  • Rossmann

    Rossmann is a compute cluster operated by ITaP and is a member of Purdue's Community Cluster Program. Rossmann went into production on September 1, 2010. It consists of HP (Hewlett Packard) ProLiant DL165 G7 nodes with 64-bit, dual 12-core AMD Opteron 6172 processors (24 cores per node) and 48 GB, 96 GB, or 192 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty. Rossmann is planned to be decommissioned in 2015.

  • Peregrine 1

    Peregrine 1 is a state-of-the-art cluster located at the Purdue Calumet campus and operated by ITaP from the West Lafayette campus. Installed on June 26, 2012, Peregrine 1 is the second major research cluster to have been hosted on the Calumet campus. Peregrine 1 consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and either 32 GB or 64 GB of memory. All nodes also feature 56 Gbps FDR Infiniband connections.

  • Hammer

    Hammer is optimized for Purdue's communities utilizing loosely-coupled, high-throughput computing. Hammer was initially built through a partnership with HP and Intel in April 2015. Hammer consists of HP DL60 compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node) and 64 GB of memory. All nodes have 10 Gbps Ethernet connections and a 5-year warranty. Hammer will be expanded annually, with each year's purchase of nodes to remain in production for 5 years from their initial purchase.

    To purchase access to Hammer today, go to the 2015 Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

  • Scholar

    The Scholar cluster is open to Purdue instructors from any field whose classes include assignments that could make use of supercomputing, from high-end graphics rendering and weather modeling to simulating millions of molecules and exploring masses of data to understand the dynamics of social networks.

    The hardware supporting Scholar consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 32 GB or 64 GB of memory. All nodes have 56 Gbps FDR Infiniband connections and a 5-year warranty.

  • BoilerGrid (HTCondor Pool)

    BoilerGrid is a large, high-throughput, distributed computing system operated by ITaP and built on the HTCondor system developed by the HTCondor Project at the University of Wisconsin. BoilerGrid provides a way to run programs on large numbers of otherwise idle computers in various locations, including temporarily under-utilized high-performance cluster resources as well as desktop machines not currently in use.

    Whenever a local user or scheduled job needs a machine back, HTCondor stops the BoilerGrid job running there and re-queues it on another HTCondor node as soon as possible. Because this model limits the ability to do parallel processing and inter-process communication, BoilerGrid is only appropriate for relatively quick serial jobs.
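
    As an illustration of what a BoilerGrid-style HTCondor job looks like, the sketch below submits a single serial job through the HTCondor Python bindings (the htcondor module). The executable, file names, and memory request are hypothetical, and the submission interface differs between HTCondor releases, so consult the BoilerGrid documentation for the supported workflow.

        # Sketch only: submit one short serial job through the HTCondor
        # Python bindings. Assumes the htcondor module is installed and a
        # schedd is reachable; the executable and file names are placeholders.
        import htcondor

        job = htcondor.Submit({
            "executable": "my_serial_job.sh",   # hypothetical serial program
            "arguments": "input.dat",
            "output": "job.out",
            "error": "job.err",
            "log": "job.log",
            "request_memory": "1GB",
        })

        schedd = htcondor.Schedd()              # local HTCondor scheduler
        with schedd.transaction() as txn:       # classic (pre-9.0) submit API
            cluster_id = job.queue(txn)
        print("Submitted as cluster %d" % cluster_id)

    Because HTCondor may stop this job and restart it elsewhere whenever a machine's owner reclaims it, the program should either finish quickly or be able to checkpoint its own progress.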

  • Radon

    Radon is a compute cluster operated by ITaP for general campus use. Radon consists of 45 HP Moonshot compute nodes, each with 32 GB of memory, connected by 10 Gigabit Ethernet (10GigE).

  • Hathi

    Hathi is a Hadoop cluster operated by ITaP and available as a shared resource to partners in Purdue's Community Cluster Program. Hathi went into production on September 8, 2014. Hathi consists of 6 Dell compute nodes, each with two 8-core Intel Xeon E5-2650v2 processors, 32 GB of memory, and 48 TB of local storage, for a total cluster capacity of 288 TB. All nodes have 40 Gigabit Ethernet interconnects and a 5-year warranty.

    The Hadoop environment on Hathi has two main components: the Hadoop Distributed File System (HDFS) and a MapReduce framework for job and task tracking.

    The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems, but the differences are significant: HDFS is highly fault-tolerant, is designed to be deployed on low-cost hardware, and provides high-throughput access to application data, which makes it suitable for applications with large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data.
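
    Because HDFS relaxes POSIX semantics, files in HDFS are normally accessed through Hadoop's own tools rather than ordinary file operations. The sketch below, which assumes the Hadoop client tools are on the user's PATH and uses placeholder file and directory names, copies a local file into HDFS and streams it back out by wrapping the standard hdfs dfs commands in Python.

        # Sketch only: basic HDFS access by wrapping the "hdfs dfs" client,
        # assuming the Hadoop client tools are installed and on the PATH.
        # The local file name and HDFS directory are placeholders.
        import subprocess

        hdfs_dir = "/user/myusername/example"   # hypothetical HDFS path

        # Create a directory in HDFS and copy a local file into it.
        subprocess.check_call(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir])
        subprocess.check_call(["hdfs", "dfs", "-put", "-f", "data.txt", hdfs_dir])

        # Stream the file back out of HDFS and print its contents.
        contents = subprocess.check_output(["hdfs", "dfs", "-cat", hdfs_dir + "/data.txt"])
        print(contents.decode())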

    A Hadoop cluster has several components:

    • Name Node
    • Resource Manager
    • Data Node
    • Node Manager
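
    As a brief illustration of the MapReduce model, the sketch below shows a word-count job written for Hadoop Streaming with a Python mapper and reducer. The script names and data paths are placeholders, so treat this only as an outline of the workflow.

        # Sketch only: mapper.py -- read text from standard input and emit
        # one "word<TAB>1" pair per word.
        import sys

        for line in sys.stdin:
            for word in line.split():
                print("%s\t%d" % (word, 1))

        # Sketch only: reducer.py -- sum the counts for each word; Hadoop
        # Streaming delivers the mapper output to the reducer sorted by key.
        import sys

        current_word, count = None, 0
        for line in sys.stdin:
            if not line.strip():
                continue
            word, n = line.rstrip("\n").split("\t", 1)
            if word == current_word:
                count += int(n)
            else:
                if current_word is not None:
                    print("%s\t%d" % (current_word, count))
                current_word, count = word, int(n)
        if current_word is not None:
            print("%s\t%d" % (current_word, count))

    A job built from these scripts would typically be launched with something along the lines of hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper "python mapper.py" -reducer "python reducer.py" -input <HDFS input> -output <HDFS output>, after the input data has been copied into HDFS; the exact jar path and recommended options on Hathi may differ, so check the cluster documentation before running anything at scale.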

    To request access to Hathi today, please email rcac-help@purdue.edu. Subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments, or email us at rcac-help@purdue.edu if you have any questions.

  • WinHPC

    WinHPC is a compute cluster operated by ITaP, and is a member of Purdue's Community Cluster Program. WinHPC went into production on December 1, 2011. WinHPC consists of HP compute nodes with two 12-core AMD Opteron 6172 processors (24 cores per node) and 48 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty.

    To purchase access to WinHPC today, please email rcac-help@purdue.edu. Subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments, or email us at rcac-help@purdue.edu if you have any questions.

Information is available about retired computational resources on the Retired Resources page.