To purchase access to Hammer today, go to the 2015 Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at email@example.com if you have any questions.
Hammer consists of several sub-clusters of hardware, added as the cluster is expanded annually.
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect |
|-------------|-----------------|---------------------|----------------|-----------------|--------------|
| Hammer-A | 198 | Two 10-Core Intel Xeon E5-2660 v3 | 20 | 64 GB | 10 Gbps Ethernet |
| Hammer-B | 40 | Two Hyper-Threaded 10-Core Intel Xeon E5-2660 v3 | 40 (Logical) | 128 GB | 25 Gbps Ethernet |
Hammer nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management, optimized for jobs of 8 cores or fewer and for maximum job throughput. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be a limiting factor).
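As a sketch of the workflow described above, a minimal TORQUE job script for an 8-core job (the size Hammer is optimized for) might look like the following. The job name, walltime, and program name are illustrative assumptions, not site defaults:

```shell
#!/bin/bash
#PBS -l nodes=1:ppn=8         # one node, 8 cores: matches Hammer's throughput profile
#PBS -l walltime=01:00:00     # requested wall-clock limit (illustrative)
#PBS -N example-job           # job name (illustrative)

cd "$PBS_O_WORKDIR"           # TORQUE starts jobs in $HOME; return to the submit directory
./my_program                  # placeholder for your own executable
```

The script would be submitted with `$ qsub example-job.sub` and monitored with `$ qstat -u $USER`.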
For more information about the TORQUE Resource Manager:
On Hammer, ITaP recommends the following set of compiler, math library, and message-passing (MPI) library for building parallel code:
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list
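With the recommended set loaded, the toolchain can be used to build and run parallel code. The exact compiler wrapper depends on which MPI implementation the `devel` module provides; `mpicc` and `mpiexec` are common wrappers and are assumptions here:

```shell
$ mpicc -O2 -o hello_mpi hello_mpi.c   # compile an MPI C program (mpicc assumed provided by devel)
$ mpiexec -n 8 ./hello_mpi             # run on 8 cores, typically from within a batch job
```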