Steele

Overview of Steele

Steele was a compute cluster operated by ITaP and the first system built under Purdue's Community Cluster Program. ITaP installed Steele in May 2008 in an unprecedented single-day installation. It replaced and expanded upon ITaP research resources retired at the same time, including the Hamlet, Lear, and Macbeth clusters. Steele consisted of 852 64-bit, 8-core Dell 1950 systems and 9 64-bit, 8-core Dell 2950 systems. Each node had some combination of 16-32 GB of RAM, 160 GB to 2 TB of disk, and Gigabit Ethernet (1 GigE) and/or InfiniBand interconnects.


Detailed Hardware Specification

Steele consisted of five logical sub-clusters, each with a different combination of memory and interconnect. Steele-B nodes had 16 GB RAM and InfiniBand/Gigabit Ethernet; Steele-C, 32 GB RAM and Gigabit Ethernet; Steele-D, 32 GB RAM and InfiniBand/Gigabit Ethernet; Steele-E, 32 GB RAM and Gigabit Ethernet; Steele-Z, 16 GB RAM and Gigabit Ethernet.

Sub-Cluster  Nodes  Processors per Node                 Cores per Node  Memory per Node  Interconnect                       Disk
Steele-B     180    Two 2.33 GHz Quad-Core Intel E5410  8               16 GB            10 Gbps SDR InfiniBand and 1 GigE  160 GB
Steele-C     48     Two 2.33 GHz Quad-Core Intel E5410  8               32 GB            1 GigE                             160 GB
Steele-D     41     Two 2.33 GHz Quad-Core Intel E5410  8               32 GB            10 Gbps SDR InfiniBand and 1 GigE  160 GB
Steele-E     9      Two 3.00 GHz Quad-Core Intel E5450  8               32 GB            1 GigE                             2 TB
Steele-Z     48     Two 2.33 GHz Quad-Core Intel E5410  8               16 GB            1 GigE                             160 GB

At the time of retirement, Steele nodes ran Red Hat Enterprise Linux 5 (RHEL5) and used Moab Workload Manager 7 and TORQUE Resource Manager 4 as the portable batch system (PBS) for resource and job management. Steele also ran jobs for BoilerGrid whenever its processor cores would otherwise have sat idle.
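Work was submitted to TORQUE/Moab through standard PBS batch scripts. A minimal sketch of such a submission script follows; the queue name and application binary are hypothetical, not taken from Steele's actual configuration:

```shell
#!/bin/bash
#PBS -N example_job             # job name shown in the queue
#PBS -l nodes=2:ppn=8           # request two nodes, all 8 cores each
#PBS -l walltime=01:00:00       # one-hour wall-clock limit
#PBS -q standby                 # hypothetical queue name

# TORQUE starts the job in the home directory, so change to
# the directory the job was submitted from.
cd "$PBS_O_WORKDIR"

# Launch a (hypothetical) MPI application across the 16 requested cores.
mpiexec -n 16 ./my_mpi_app
```

A script like this would be submitted with `qsub job.sh`, and its progress checked with `qstat`.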


Service Lifetime

May 5, 2008 — Nov 30, 2013

Major Projects & Users