Retired Computational Resources

ITaP retires and deactivates many systems over time as we continue to bring newer systems online. Here are the names, descriptions, and service terms of some of the major resources we have retired.

  • Hansen

    Hansen was a compute cluster operated by ITaP and a member of Purdue's Community Cluster Program. Hansen went into production on September 15, 2011. Hansen consisted of Dell compute nodes with four 12-core AMD Opteron 6176 processors (48 cores per node) and either 96 GB, 192 GB, or 512 GB of memory. All nodes had 10 Gigabit Ethernet interconnects and a 5-year warranty. Hansen was decommissioned on October 1, 2016.


  • Rossmann

    Rossmann was a compute cluster operated by ITaP and a member of Purdue's Community Cluster Program. Rossmann went into production on September 1, 2010. It consisted of HP (Hewlett Packard) ProLiant DL165 G7 nodes with 64-bit, dual 12-core AMD Opteron 6172 processors (24 cores per node) and 48 GB, 96 GB, or 192 GB of memory. All nodes had 10 Gigabit Ethernet interconnects and a 5-year warranty. Rossmann was decommissioned on November 2, 2015.


  • Coates

    Coates was a compute cluster operated by ITaP and a member of Purdue's Community Cluster Program. ITaP installed Coates on July 21, 2009, and at the time it was the largest entirely 10 Gigabit Ethernet (10GigE) academic cluster in the world. Coates consisted of 982 64-bit, 8-core Hewlett-Packard ProLiant and 11 64-bit, 16-core Hewlett-Packard ProLiant DL585 G5 systems with between 16 GB and 128 GB of memory. All nodes had 10 Gigabit Ethernet interconnects and a 5-year warranty. Coates was decommissioned on September 30, 2014.

    Coates Installation Day Video


  • Steele

    Steele was a compute cluster operated by ITaP and the first system built under Purdue's Community Cluster Program. ITaP installed Steele in May 2008 in an unprecedented single-day installation. It replaced and expanded upon ITaP research resources retired at the same time, including the Hamlet, Lear, and Macbeth clusters. Steele consisted of 852 64-bit, 8-core Dell 1950 and 9 64-bit, 8-core Dell 2950 systems with various combinations of 16 to 32 GB of RAM and 160 GB to 2 TB of disk, plus 1 Gigabit Ethernet (1GigE) and InfiniBand local to each node.

    Faculty Talk about the Steele Community Cluster
    Time-Lapse Video of the Steele Installation
    Preview of Steele Installation Day


  • Moffett

    Moffett was a SiCortex 5832 system. It consisted of 28 modules, each containing 27 six-processor SMP nodes for a total of 4536 processor cores. The SiCortex design was highly unusual; it paired relatively slow individual processor cores (633 MHz) with an extraordinarily fast custom interconnect fabric, and provided these in very large numbers. In addition, the SiCortex design used very little power and thereby generated very little heat.


  • Miner

    Miner was a compute cluster installed at the Purdue Calumet campus on December 25, 2009, and operated by ITaP. It was the first major research cluster on the Calumet campus and represented a great step forward in Purdue Calumet's ongoing plan to foster more local, cutting-edge research. Miner consisted of 512 2-core Intel Xeon systems with either 4 or 6 GB RAM, 50 GB of disk, and 1 Gigabit Ethernet (1GigE) local to each node.


  • Black

    The Black cluster was Purdue's portion of the Indiana Economic Development Corporation (IEDC) machine at Indiana University, the IU portion of which was known as "Big Red". Black consisted of 256 IBM JS21 Blades, each a Dual-Processor 2.5 GHz Dual-Core PowerPC 970 MP with 8 GB of RAM and PCI-X Myrinet 2000 interconnects. The large amount of shared memory in this system provided very fast communication between processor cores and made the system ideal for large parallel jobs.


  • Gold

    Gold was a small IBM Power5 system consisting of one front-end node and one compute node. The compute node was a Dual-Processor 1.5 GHz Dual-Core Power5 520 with 8 GB of RAM. This system was designed to be used only by users with legacy IBM architecture-optimized or AIX-specific code. Gold was intended to help facilitate the porting of any code designed for the IBM SP system at Purdue, which was retired in 2008.


  • Venice

    Venice was a small cluster of Sun x4600 systems consisting of two front-end nodes and three compute nodes. Each front-end node was a Quad-Processor Dual-Core AMD Opteron 2216 system. Each compute node was an Eight-Processor Dual-Core AMD Opteron 8220 system with 128 GB of RAM. The large amount of shared memory in this system made it ideal for large parallel jobs, with shared memory providing fast communication between processor cores.


  • Prospero

    The Prospero community cluster consisted of 19 Dell Quad-Processor 2.33 GHz Intel Xeon systems with 8 GB RAM and both Gigabit Ethernet and InfiniBand interconnects. Each node had enough memory to run most jobs, and the high-speed InfiniBand interconnect helped with many communication-bound parallel jobs.


  • Caesar

    Caesar was an SGI Altix 4700 system. This large-memory SMP design featured 128 processors and 512 GB of RAM connected via SGI's high-bandwidth, low-latency NUMAlink shared-memory interface. The extremely large amount of shared memory in this system made it ideal both for jobs where many processors must share a large amount of in-memory data and for large parallel jobs that use shared memory for fast communication between processors (see the sketch after this list).


  • Brutus

    Brutus was an experimental FPGA resource provided by the Northwest Indiana Computational Grid (NWICG) through ITaP. Brutus consisted of an SGI Altix 450 with two SGI RC100 blades with two FPGAs each, for a total of 4 FPGAs. Using Brutus effectively required careful code development in either VHDL or Mitrion-C, but did result in significant performance increases. BLAST was benchmarked on Brutus at 70x typical general-purpose CPU performance.


  • Pete

    The Pete cluster was composed of two parts, one owned by Earth, Atmospheric, and Planetary Science (EAPS) and the other by the Network for Computational Nanotechnology (NCN). Pete consisted of 166 HP Dual-Processor Dual-Core DL40 systems with either 8 or 16 GB RAM and Gigabit Ethernet. For its time, the large amount of memory in this system made it well suited for larger-memory parallel jobs.


  • Gray

    The Gray cluster was solely a development platform to be used alongside the Indiana Economic Development Corporation (IEDC) machine Black. Gray was a place to compile code (mostly serial Condor applications) that was to be run on Black. Black was housed with Indiana University's "Big Red" system in Bloomington, Indiana, while Gray was located on Purdue's West Lafayette campus. Gray included a front-end server, several worker-node blades, and extra front-end hosts for campus and TeraGrid Condor users.


  • Peregrine 1

    Peregrine 1 was a state-of-the-art cluster for the Purdue Calumet campus operated by ITaP from the West Lafayette campus. Installed on June 26, 2012, Peregrine 1 was the second major research cluster to have been hosted on the Calumet campus. The cluster was subsequently relocated to the West Lafayette campus on August 19, 2015. Peregrine 1 consisted of HP compute nodes with two 8-core Intel Xeon E5 processors (16 cores per node) and either 32 GB or 64 GB of memory. All nodes also featured 56 Gbps FDR InfiniBand connections. Peregrine 1 was retired on October 12, 2016.


  • WinHPC

    WinHPC was a compute cluster operated by ITaP, and a member of Purdue's Community Cluster Program. WinHPC went into production on December 1, 2011. WinHPC consisted of HP compute nodes with two 12-core AMD Opteron 6172 processors (24 cores per node) and 48 GB of memory. All nodes had 10 Gigabit Ethernet interconnects and a 5-year warranty. WinHPC was decommissioned on October 1, 2016.
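
Several of the retired systems above (Black, Venice, and especially Caesar) were described as well suited to parallel jobs that communicate through shared memory rather than over a network. The short OpenMP program below is a purely illustrative sketch of that style of workload: many threads operate on a single large in-memory array and combine their results through a reduction. It is not code written for, or tuned to, any of these retired systems.

    /* Illustrative only: a shared-memory parallel reduction in OpenMP.
       All threads read and write one array that lives in shared memory,
       the usage pattern large SMP systems such as Caesar favored. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const long n = 100000000L;              /* one large shared array */
        double *data = malloc(n * sizeof *data);
        if (!data) return 1;

        double sum = 0.0;
        /* Each thread works on its own slice of the shared array; the
           reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++) {
            data[i] = (double)i;
            sum += data[i];
        }

        printf("max OpenMP threads: %d, sum = %g\n",
               omp_get_max_threads(), sum);
        free(data);
        return 0;
    }

Compiled with an OpenMP-capable compiler (for example, gcc -fopenmp), the same program can spread its threads across however many cores a shared-memory machine provides, with no explicit message passing between processes.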

