
Overview of Caesar

Caesar was an SGI Altix 4700 system. This large-memory SMP design featured 128 processors and 512 GB of RAM connected via SGI's high-bandwidth, low-latency NUMAlink shared-memory interface. The extremely large amount of shared memory in this system made it ideal for jobs in which many processors must share a large amount of in-memory data, and for large parallel jobs that use shared memory for fast communication between processors.


Overview of Coates

Coates was a compute cluster operated by ITaP and was a member of Purdue's Community Cluster Program. ITaP installed Coates on July 21, 2009, and at the time it was the largest entirely 10 Gigabit Ethernet (10GigE) academic cluster in the world. Coates consisted of 982 64-bit, 8-core Hewlett-Packard ProLiant systems and 11 64-bit, 16-core Hewlett-Packard ProLiant DL585 G5 systems with between 16 GB and 128 GB of memory. All nodes had 10 Gigabit Ethernet interconnects and a 5-year warranty. Coates was decommissioned on September 30, 2014.


Overview of Conte

Conte was built through a partnership with HP and Intel in June 2013 and is the largest of Purdue's flagship community clusters. Conte consists of HP compute nodes with two 8-core Intel Xeon-E5 processors (16 cores per node) and 64 GB of memory. Each node is also equipped with two 60-core Xeon Phi coprocessors. All nodes have 40 Gbps FDR10 Infiniband connections and a 5-year warranty. Conte is planned to be decommissioned on November 30, 2018.

Conte Namesake

Conte is named in honor of Samuel D. Conte, who helped establish the nation's first computer science program at Purdue in 1962 and served as department head for 17 years. More information about his life and impact on Purdue is available in an ITaP Biography of Conte.

Conte Detailed Hardware Specification

Most Conte nodes consist of identical hardware. All Conte nodes have 16 processor cores, 64 GB RAM, and 40 Gbps Infiniband interconnects. Each Conte node is also equipped with two 60-core Xeon Phi coprocessors that may be used to further accelerate work tailored to them.
Sub-Cluster | Number of Nodes | Processors per Node                             | Cores per Node | Memory per Node | Interconnect             | TeraFLOPS
Conte-A     | 580             | Two 8-Core Intel Xeon-E5 + Two 60-Core Xeon Phi | 16             | 64 GB           | 40 Gbps FDR10 Infiniband | 943.4

Conte nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. The application of operating system patches occurs as security needs dictate. All nodes allow for unlimited stack usage, as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
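As an illustrative sketch of submitting work through TORQUE/PBS, a minimal job script for one Conte node might look like the following. The queue name, job name, walltime, and program name are placeholder assumptions, not documented site defaults:

```shell
#!/bin/bash
# Hypothetical Conte job script; queue name, walltime, and program
# name below are illustrative assumptions, not site defaults.
#PBS -N example_job
#PBS -q standby
#PBS -l nodes=1:ppn=16       # one node, all 16 cores
#PBS -l walltime=00:30:00

cd "$PBS_O_WORKDIR"          # run from the directory qsub was called in
module load rcac             # recommended Intel compiler/MKL/Intel MPI set
mpirun -np 16 ./my_program   # one MPI rank per core
```

The script would then be submitted with `qsub myjob.sub`, and its status checked with `qstat -u $USER`.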

On Conte, ITaP recommends the following combination of compiler, math library, and message-passing library for parallel code:

  • Intel compilers
  • Intel Math Kernel Library (MKL)
  • Intel MPI

This compiler and these libraries are loaded by default. To load the recommended set again:

$ module load rcac

To verify what you loaded:

$ module list
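With the recommended set loaded, a typical build-and-run sequence for an MPI code might look like the following sketch. The source file name and process count are placeholders, not site conventions:

```shell
# Compile an MPI program with the Intel MPI compiler wrapper for C
# (hello_mpi.c is a hypothetical source file used for illustration)
$ mpiicc -O2 hello_mpi.c -o hello_mpi

# Launch 16 MPI ranks, one per core on a single Conte node
$ mpirun -np 16 ./hello_mpi
```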

Purdue University, 610 Purdue Mall, West Lafayette, IN 47907, (765) 494-4600

© 2017 Purdue University | An equal access/equal opportunity university | Copyright Complaints | Maintained by ITaP Research Computing
