Rossmann is named in honor of Michael Rossmann, Purdue's Hanley Distinguished Professor of Biological Sciences. More information about his life and impact on Purdue is available in an ITaP Biography of Michael Rossmann.
Rossmann consists of four logical sub-clusters, each with a different memory/storage configuration. All nodes in the cluster have dual 12-core AMD Opteron 6172 processors and 10 Gigabit Ethernet (10GigE).
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | Disk |
| --- | --- | --- | --- | --- | --- | --- |
| Rossmann-A | 392 | Two 2.1 GHz 12-core AMD Opteron 6172 | 24 | 48 GB | 10 GigE | 250 GB |
| Rossmann-B | 40 | Two 2.1 GHz 12-core AMD Opteron 6172 | 24 | 96 GB | 10 GigE | 250 GB |
| Rossmann-C | 2 | Two 2.1 GHz 12-core AMD Opteron 6172 | 24 | 192 GB | 10 GigE | 1 TB |
| Rossmann-D | 4 | Two 2.1 GHz 12-core AMD Opteron 6172 | 24 | 192 GB | 10 GigE | 2 TB |
Rossmann nodes run Red Hat Enterprise Linux 5 (RHEL5) and use Moab Workload Manager 7 and TORQUE Resource Manager 4 as the portable batch system (PBS) for resource and job management. Rossmann also runs jobs for BoilerGrid whenever its processor cores would otherwise be idle. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be a limiting factor).
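Jobs are submitted to TORQUE as shell scripts containing `#PBS` directives. The following is a minimal sketch only; the queue name, resource requests, and program name are placeholders for illustration, not Rossmann defaults.

```shell
#!/bin/sh
# Illustrative TORQUE/PBS job script. The queue name, walltime,
# and program name below are placeholders, not site defaults.
#PBS -q standby              # queue to submit to (placeholder)
#PBS -l nodes=1:ppn=24       # one node, all 24 cores
#PBS -l walltime=01:00:00    # one hour of wall-clock time
#PBS -N example-job          # job name shown in qstat

cd "$PBS_O_WORKDIR"          # start in the submission directory
module load devel            # load the recommended toolchain
./my_program                 # replace with your application
```

A script like this would be submitted with `qsub` and monitored with `qstat -u $USER`.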
For more information about the TORQUE Resource Manager:
On Rossmann, ITaP recommends the following compiler, math library, and message-passing library for parallel code:
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list
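With the recommended set loaded, MPI code is typically built with the wrapper compilers the toolchain provides. A sketch, assuming the `devel` module supplies an `mpicc` wrapper and that the program is launched from within a PBS job (wrapper and launcher names may differ on your system):

```shell
# Compile an MPI program with the wrapper compiler supplied by the
# loaded toolchain (mpicc is an assumption; use the wrapper your
# module set actually provides).
mpicc -O2 -o hello hello.c

# Inside a PBS job, launch the program across the allocated cores.
mpiexec ./hello
```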