Steele is a compute cluster operated by ITaP and is a member of Purdue's Community Cluster Program. ITaP installed Steele in May 2008 in an unprecedented single-day installation. It replaces and expands upon ITaP research resources retired at the same time, including the Hamlet, Lear, and Macbeth clusters. Steele consists of 852 64-bit, 8-core Dell 1950 and 9 64-bit, 8-core Dell 2950 systems with various combinations of 16-32 GB RAM, 160 GB to 2 TB of disk, and 1 Gigabit Ethernet (1GigE) and InfiniBand local to each node.
Steele is named in honor of John Steele, former professor of Computer Science and former director of the Purdue University Computing Center. More information about his life and impact on Purdue is available in an ITaP Biography of John Steele.
Steele consists of five logical sub-clusters, each with a different combination of memory, interconnect, and local disk. Steele-A nodes have 16 GB RAM and Gigabit Ethernet; Steele-B, 16 GB RAM and InfiniBand/Gigabit Ethernet; Steele-C, 32 GB RAM and Gigabit Ethernet; Steele-D, 32 GB RAM and InfiniBand/Gigabit Ethernet; and Steele-E, 32 GB RAM, Gigabit Ethernet, faster processors, and 2 TB of local disk.
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Disk | TeraFLOPS |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Steele-A | 529 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 16 GB | 1 GigE | 160 GB | 39.45 |
| Steele-B | 180 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 16 GB | 10 Gbps SDR InfiniBand and 1 GigE | 160 GB | 13.42 |
| Steele-C | 48 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 32 GB | 1 GigE | 160 GB | 3.58 |
| Steele-D | 41 | Two 2.33 GHz Quad-Core Intel E5410 | 8 | 32 GB | 10 Gbps SDR InfiniBand and 1 GigE | 160 GB | 3.14 |
| Steele-E | 9 | Two 3.00 GHz Quad-Core Intel E5450 | 8 | 32 GB | 1 GigE | 2 TB | 0.84 |
Steele nodes run Red Hat Enterprise Linux 5 (RHEL5) and use Moab Workload Manager 6 and TORQUE Resource Manager 3 as the portable batch system (PBS) for resource and job management. Steele also runs jobs for BoilerGrid whenever its processor cores would otherwise be idle. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be a limiting factor).
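You can verify the stack and core dump limits mentioned above from a shell on any node. This is a minimal sketch using the standard `ulimit` shell builtin; on Steele both commands are expected to report `unlimited`, while on other systems you may see numeric values instead.

```shell
# Query the per-process soft limits for this shell session.
# -s reports the maximum stack size (in KB when not unlimited);
# -c reports the maximum core dump size (in 512-byte blocks).
ulimit -s
ulimit -c
```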
For more information about the TORQUE Resource Manager, see its official documentation.
On Steele, ITaP recommends the following combination of compiler, math library, and message-passing (MPI) library for parallel code:
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list
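Once the recommended `devel` module set is loaded, work is typically run through the TORQUE batch system rather than on the login node. The following is a minimal sketch of a TORQUE job script; the walltime, job name, and `./my_mpi_program` executable are illustrative assumptions, and the available queue names depend on your allocation, so check with ITaP before submitting.

```shell
#!/bin/bash
#PBS -l nodes=2:ppn=8        # request two full Steele nodes (8 cores each)
#PBS -l walltime=01:00:00    # illustrative one-hour wall-clock limit
#PBS -N example_job          # hypothetical job name

# TORQUE sets PBS_O_WORKDIR to the directory where qsub was invoked.
cd "$PBS_O_WORKDIR"

# Load the recommended compiler/math/MPI set described above.
module load devel

# Launch the (hypothetical) MPI executable across the allocated cores.
mpiexec ./my_mpi_program
```

The script would be submitted with `qsub job.sh`, and `qstat -u $USER` shows its status in the queue.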