Computational Resources

ITaP maintains a variety of computational resources. Below is a brief introduction to each currently available resource; more information and detailed documentation are available for every resource listed.

  • Conte

    Conte is the newest of Purdue's Community Clusters, and was built through a partnership with HP and Intel in June 2013. Conte consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 64 GB of memory. Each node is also equipped with two 60-core Xeon Phi coprocessors. All nodes have 40 Gbps FDR10 Infiniband connections and a 5-year warranty. Conte is planned to be decommissioned on November 30, 2018.

  • Carter

    Carter was launched through an ITaP partnership with Intel in November 2011 and is a member of Purdue's Community Cluster Program. Carter primarily consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and between 32 GB and 256 GB of memory. A few NVIDIA GPU-accelerated nodes are also available. All nodes have 56 Gbps FDR Infiniband connections and a 5-year warranty. Carter is planned to be decommissioned on April 30, 2017.

  • Hansen

    Hansen is a compute cluster operated by ITaP and is a member of Purdue's Community Cluster Program. Hansen went into production on September 15, 2011. Hansen consists of Dell compute nodes with four 12-core AMD Opteron 6176 processors (48 cores per node) and either 96 GB, 192 GB, or 512 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty. Hansen is planned to be decommissioned in 2016.

  • Rossmann

    Rossmann is a compute cluster operated by ITaP and is a member of Purdue's Community Cluster Program. Rossmann went into production on September 1, 2010. It consists of HP (Hewlett Packard) ProLiant DL165 G7 nodes with 64-bit, dual 12-core AMD Opteron 6172 processors (24 cores per node) and 48 GB, 96 GB, or 192 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty. Rossmann is planned to be decommissioned in 2015.

  • Coates

    Coates is a compute cluster operated by ITaP and is a member of Purdue's Community Cluster Program. ITaP installed Coates on July 21, 2009. At installation, it was the largest entirely 10 Gigabit Ethernet (10GigE) academic cluster in the world. Coates consists of 982 64-bit, 8-core Hewlett-Packard ProLiant systems and 11 64-bit, 16-core Hewlett-Packard ProLiant DL585 G5 systems with between 16 GB and 128 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty. Coates is planned to be decommissioned on September 30, 2014.

  • Peregrine 1

    Peregrine 1 is a new, state-of-the-art cluster at the Purdue Calumet campus, operated by ITaP from the West Lafayette campus. Installed on June 26, 2012, Peregrine 1 is the second major research cluster on the Calumet campus. Peregrine 1 consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and either 32 GB or 64 GB of memory. All nodes also feature 56 Gbps FDR Infiniband connections.

  • Scholar

    The Scholar cluster is open to Purdue instructors from any field whose classes include assignments that could make use of supercomputing, from high-end graphics rendering and weather modeling to simulating millions of molecules and exploring masses of data to understand the dynamics of social networks.

    The hardware supporting Scholar consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 32 GB of memory. All nodes have 56 Gbps FDR Infiniband connections and a 5-year warranty.

  • BoilerGrid (HTCondor Pool)

    BoilerGrid (HTCondor Pool) is a large, high-throughput, distributed computing system operated by ITaP, using the HTCondor software developed by the HTCondor Project at the University of Wisconsin. BoilerGrid provides a way for you to run programs on large numbers of otherwise idle computers in various locations, including any temporarily under-utilized high-performance cluster resources as well as some desktop machines not currently in use. Whenever a local user or a scheduled job needs a machine back, HTCondor stops its job there and moves it to another HTCondor node as soon as one is available. Because this model limits the ability to do parallel processing and inter-process communication, BoilerGrid is appropriate only for relatively quick serial jobs.
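    As a sketch of how serial work is described for an HTCondor pool such as BoilerGrid, a minimal submit description file might look like the following. The executable and file names here are placeholders for illustration, not part of BoilerGrid's documentation:

    ```
    # job.sub -- minimal HTCondor submit description for one serial job
    universe   = vanilla        # ordinary (non-checkpointed) serial job
    executable = my_analysis    # placeholder: your program
    arguments  = input.dat      # placeholder: its command-line arguments
    output     = job.out        # stdout captured here
    error      = job.err        # stderr captured here
    log        = job.log        # HTCondor's record of the job's lifecycle
    queue                       # submit one instance of the job
    ```

    Such a file would typically be submitted with `condor_submit job.sub`; consult the BoilerGrid documentation for the site-specific details.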

  • Radon

    Radon is a compute cluster operated by ITaP for general campus use. Radon consists of 24 64-bit, 8-core Dell 1950 systems with 16 GB RAM and 1 Gigabit Ethernet (1GigE) local to each node.

  • WinHPC

    WinHPC is a compute cluster operated by ITaP, and is a member of Purdue's Community Cluster Program. WinHPC went into production on December 1, 2011. WinHPC consists of HP compute nodes with two 12-core AMD Opteron 6172 processors (24 cores per node) and 48 GB of memory. All nodes have 10 Gigabit Ethernet interconnects and a 5-year warranty.


Information is available about many retired computational resources on the Retired Resources page.