To purchase access to Conte today, go to the Conte Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed about the latest purchasing developments, or contact us by email at firstname.lastname@example.org if you have any questions.
Conte is named in honor of Samuel D. Conte, who helped establish the nation's first computer science program at Purdue in 1962 and served as department head for 17 years. More information about his life and impact on Purdue is available in an ITaP Biography of Samuel D. Conte.
Conte nodes consist largely of identical hardware. All Conte nodes have 16 processor cores, 64 GB of RAM, and 40 Gbps InfiniBand interconnects. Each Conte node is also equipped with two 60-core Xeon Phi coprocessors, which may be used to further accelerate work tailored to them.
| Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect | TeraFLOPS |
|-------------|-----------------|---------------------|----------------|-----------------|--------------|-----------|
| Conte-A | 580 | Two 8-Core Intel Xeon-E5 + Two 60-Core Xeon Phi | 16 | 64 GB | 40 Gbps FDR10 InfiniBand | 943.4 |
Conte nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 7 and TORQUE Resource Manager 4 as the portable batch system (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be limiting factors).
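A minimal TORQUE/PBS job script for one whole Conte node might look like the sketch below. The job name, walltime, and program name are placeholders, not site defaults; adjust them to your own work.

```shell
#!/bin/bash
#PBS -N myjob              # job name (placeholder)
#PBS -l nodes=1:ppn=16     # one whole node: all 16 cores
#PBS -l walltime=01:00:00  # one-hour wall-clock limit

cd "$PBS_O_WORKDIR"        # start in the directory you submitted from
module load devel          # recommended compiler/MPI stack

mpiexec -n 16 ./my_mpi_program   # hypothetical program name
```

Submit the script with `qsub` and monitor it with `qstat -u $USER`.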
For more information about the TORQUE Resource Manager:
On Conte, ITaP recommends the following set of compiler, math library, and message-passing library modules for parallel code:
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list