Conte

Overview of Conte

Conte is the newest of Purdue's Community Clusters, and was built through a partnership with HP and Intel in June 2013. Conte consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 64 GB of memory. Each node is also equipped with two 60-core Xeon Phi coprocessors. All nodes have 40 Gbps FDR10 Infiniband connections and a 5-year warranty. Conte is planned to be decommissioned on November 30, 2018.

To purchase access to Conte today, go to the Conte Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any questions.

Namesake

Conte is named in honor of Samuel D. Conte, who helped establish the nation's first computer science program at Purdue in 1962 and served as department head for 17 years. More information about his life and impact on Purdue is available in an ITaP Biography of Samuel D. Conte.

Detailed Hardware Specification

Conte nodes consist of identical hardware: every node has 16 processor cores, 64 GB of RAM, and a 40 Gbps Infiniband interconnect. Each node is also equipped with two 60-core Xeon Phi coprocessors that can be used to accelerate work tailored to the Phi architecture.

Sub-Cluster:           Conte-A
Number of Nodes:       580
Processors per Node:   Two 8-Core Intel Xeon-E5 + Two 60-Core Xeon Phi
Cores per Node:        16
Memory per Node:       64 GB
Interconnect:          40 Gbps FDR10 Infiniband
TeraFLOPS:             943.4

Conte nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 7 with TORQUE Resource Manager 4 as the portable batch system (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be a limiting factor).
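
As a quick sanity check, these limits can be inspected from a shell on a Conte node with the standard ulimit builtin (a minimal sketch; exact output may vary):

$ ulimit -s    # stack size; expected to report "unlimited"
$ ulimit -c    # core dump size; expected to report "unlimited"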

For more information, see the TORQUE Resource Manager documentation.
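
As an illustrative sketch, a minimal TORQUE/PBS job script requesting one full 16-core Conte node might look like the following; the job name, queue, walltime, and program name are placeholders, not Conte-specific requirements:

#!/bin/bash
#PBS -N example_job          # job name (placeholder)
#PBS -q standby              # queue name (assumed; substitute your own queue)
#PBS -l nodes=1:ppn=16       # one node, all 16 cores
#PBS -l walltime=01:00:00    # one hour of walltime (adjust as needed)

cd $PBS_O_WORKDIR            # run from the directory the job was submitted from
module load devel            # load the recommended compiler/MPI/MKL set (see below)
mpirun -np 16 ./my_program   # launch 16 MPI ranks; my_program is a placeholder

Such a script, saved for example as myjob.sub, would be submitted with qsub:

$ qsub myjob.sub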

On Conte, ITaP recommends the following compiler, math library, and message-passing library for parallel code:

  • Intel 13.1.1.163
  • MKL
  • Intel MPI

To load the recommended set:

$ module load devel

To verify what you loaded:

$ module list
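
With the recommended set loaded, a simple MPI program could be compiled against Intel MPI and MKL roughly as follows (a sketch: mpiicc is Intel MPI's wrapper for the Intel C compiler, -mkl links against MKL, and hello_mpi.c is a placeholder source file):

$ mpiicc -O2 -mkl hello_mpi.c -o hello_mpi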