Steele is named in honor of John Steele, former professor of Computer Science and former director of the Purdue University Computing Center. More information about his life and impact on Purdue is available in an ITaP Biography of John Steele.
Steele consisted of five logical sub-clusters, each with its own combination of processors, memory, interconnect, and disk, as summarized in the table below. Steele-B nodes had 16 GB RAM and InfiniBand plus Gigabit Ethernet; Steele-C, 32 GB RAM and Gigabit Ethernet; Steele-D, 32 GB RAM and InfiniBand plus Gigabit Ethernet; Steele-E, 32 GB RAM and Gigabit Ethernet; and Steele-Z, 16 GB RAM and Gigabit Ethernet.
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Disk |
|-------------|-----------------|---------------------|----------------|-----------------|--------------|------|
| Steele-B | 180 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 16 GB | 10 Gbps SDR InfiniBand and 1 GigE | 160 GB |
| Steele-C | 48 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 32 GB | 1 GigE | 160 GB |
| Steele-D | 41 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 32 GB | 10 Gbps SDR InfiniBand and 1 GigE | 160 GB |
| Steele-E | 9 | Two 3.00 GHz Quad-Core Intel E5450 | 8 | 32 GB | 1 GigE | 2 TB |
| Steele-Z | 48 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 16 GB | 1 GigE | 160 GB |
At the time of retirement, Steele nodes ran Red Hat Enterprise Linux 5 (RHEL5) and used TORQUE Resource Manager 4 as the portable batch system (PBS), with Moab Workload Manager 7 as the job scheduler, for resource and job management. Steele also ran jobs for BoilerGrid whenever its processor cores would otherwise have been idle.
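Jobs on Steele were submitted through TORQUE's `qsub` command. As a minimal sketch, a submission script for a parallel job on a system like Steele might have looked like the following; the queue name, executable, and core counts are illustrative assumptions, not Steele's actual configuration:

```bash
#!/bin/bash
#PBS -N example_job           # job name shown in the queue
#PBS -q standby               # queue name (hypothetical; Steele's actual queues may have differed)
#PBS -l nodes=2:ppn=8         # request 2 nodes, using all 8 cores per node
#PBS -l walltime=01:00:00     # one-hour wall-clock limit
#PBS -j oe                    # merge stdout and stderr into a single output file

cd $PBS_O_WORKDIR             # start in the directory from which qsub was invoked
mpiexec -n 16 ./my_mpi_app    # launch 16 MPI ranks across the 16 requested cores
```

A script like this would be submitted with `qsub`, after which Moab scheduled the job onto free nodes and TORQUE launched and tracked it.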