ITaP Retired Research Resources

ITaP retires and deactivates many systems over time as we continue to bring newer systems online. Here are the names, descriptions, and service terms of some of the major resources we have retired.

  • Steele

    Steele was a compute cluster operated by ITaP and the first system built under Purdue's Community Cluster Program. ITaP installed Steele in May 2008 in an unprecedented single-day installation. It replaced and expanded upon ITaP research resources retired at the same time, including the Hamlet, Lear, and Macbeth clusters. Steele consisted of 852 64-bit, 8-core Dell 1950 and 9 64-bit, 8-core Dell 2950 systems with various combinations of 16-32 GB RAM, 160 GB to 2 TB of disk, and 1 Gigabit Ethernet (1GigE) and InfiniBand local to each node.

  • Moffett

    Moffett was a SiCortex 5832 system. It consisted of 28 modules, each containing 27 six-processor SMP nodes for a total of 4536 processor cores. The SiCortex design was highly unusual; it paired relatively slow individual processor cores (633 MHz) with an extraordinarily fast custom interconnect fabric, and provided these in very large numbers. In addition, the SiCortex design used very little power and thereby generated very little heat.

  • Miner

    Miner was a compute cluster installed at the Purdue Calumet campus on December 25, 2009, and operated by ITaP. It was the first major research cluster on the Calumet campus and represented a great step forward in Purdue Calumet's ongoing plan to foster more local, cutting-edge research. Miner consisted of 512 2-core Intel Xeon systems with either 4 or 6 GB RAM, 50 GB of disk, and 1 Gigabit Ethernet (1GigE) local to each node.

  • Black

    The Black cluster was Purdue's portion of the Indiana Economic Development Corporation (IEDC) machine at Indiana University, the IU portion of which was known as "Big Red". Black consisted of 256 IBM JS21 Blades, each a Dual-Processor 2.5 GHz Dual-Core PowerPC 970 MP with 8 GB of RAM and PCI-X Myrinet 2000 interconnects. The large amount of shared memory in this system provided very fast communication between processor cores and made the system ideal for large parallel jobs.

  • Gold

    Gold was a small IBM Power5 system consisting of one front-end node and one compute node. The compute node was a Dual-Processor 1.5 GHz Dual-Core Power5 520 with 8 GB of RAM. This system was designed to be used only by users with legacy IBM architecture-optimized or AIX-specific code. Gold was intended to facilitate the porting of code written for the IBM SP system at Purdue, which was retired in 2008.

  • Venice

    Venice was a small cluster of Sun x4600 systems consisting of two front-end nodes and three compute nodes. The front-end nodes were each a Quad-Processor Dual-Core AMD Opteron 2216. The compute nodes were each an Eight-Processor Dual-Core AMD Opteron 8220 with 128 GB of RAM. The large amount of shared memory in this system made it ideal for large parallel jobs that use shared memory for fast communication between processor cores.

  • Prospero

    The Prospero community cluster consisted of 19 Dell Quad-Processor 2.33 GHz Intel Xeon systems with 8 GB RAM and both Gigabit Ethernet and InfiniBand interconnects. Each node had enough memory to run most jobs, and the high-speed InfiniBand interconnect helped with many communication-bound parallel jobs.

  • Caesar

    Caesar was an SGI Altix 4700 system. This large-memory SMP design featured 128 processors and 512 GB of RAM connected via SGI's high-bandwidth, low-latency NUMAlink shared-memory interface. The extremely large amount of shared memory in this system made it ideal for jobs in which many processors must all share a large amount of in-memory data, and for large parallel jobs that use shared memory for fast communication between processors.

  • Brutus (FPGA)

    Brutus (FPGA) was an experimental FPGA resource provided by the Northwest Indiana Computational Grid (NWICG) through ITaP. Brutus (FPGA) consisted of an SGI Altix 450 with two SGI RC100 blades with two FPGAs each, for a total of 4 FPGAs. Using Brutus (FPGA) effectively required careful code development in either VHDL or Mitrion-C, but did result in significant performance increases. BLAST was benchmarked on Brutus (FPGA) at 70x typical general-purpose CPU performance.

  • Pete

    The Pete cluster was composed of two parts, one owned by Earth, Atmospheric, and Planetary Science (EAPS) and the other by the Network for Computational Nanotechnology (NCN). Pete consisted of 166 HP Dual-Processor Dual-Core DL40 systems with either 8 or 16 GB RAM and Gigabit Ethernet. For its time, the large amount of memory in these nodes made the system well suited for larger-memory parallel jobs.

  • Gray

    The Gray cluster was solely a development platform to be used alongside the Indiana Economic Development Corporation (IEDC) machine Black. Gray was a place to compile code (mostly serial Condor applications) that was to be run on Black. Black was housed with Indiana University's "Big Red" system in Bloomington, Indiana, while Gray was located on Purdue's West Lafayette campus. Gray included a front-end server, several worker-node blades, and extra front-end hosts for campus and TeraGrid Condor users.