
Storage and Network Upgrades for Carter Cluster

ITaP is pleased to announce several upgrades to the Carter cluster to better enable data-intensive science.

Network

To relieve potential bottlenecks in IP network traffic, Carter will receive an upgrade to its InfiniBand-to-IP gateway. This gateway translates the high-speed InfiniBand communications between compute nodes into the common protocol that allows those nodes to talk to external resources, such as the Research Data Depot, Fortress, other clusters, or even your laptop. The first-generation gateway in use on Carter could transmit at most about 11 Gb/second from Carter to these external resources.

This limited Carter's high-speed access to the Research Data Depot, slowed research groups downloading data from external sources, and constrained transfers to and from Fortress. In certain situations, this bottleneck also caused performance problems for the Research Data Depot as a whole.

Following the upgrade, Carter will have a total bandwidth of 160 Gb/second to the outside world.

Scratch

Since its construction in 2012, Carter has shared a scratch filesystem with the Hansen cluster, but demand for storage capacity and performance has continued to grow. During scheduled maintenance on September 22, 2015, ITaP engineers will replace the storage system providing Carter's scratch space (/scratch/carter) with a new, high-performance storage system offering an aggregate capacity of 1.4 PB.

This new, dedicated scratch system will allow Carter to join Conte and Rice in offering very large filesystem quotas, enabling community cluster partners to address even larger research problems than were previously possible.
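
Once the new scratch is in place, Lustre's standard quota tool can report your usage against these limits. A minimal sketch, assuming the lfs utility is available on the login nodes:

    lfs quota -u $USER /scratch/carter    # show your current usage and limits on Carter scratch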

During the course of this upgrade, the Hansen and Carter scratch filesystems will be separated; each cluster will have its own dedicated scratch. Because the clusters will run incompatible versions of the Lustre filesystem, each cluster's login nodes will no longer be able to mount the other cluster's scratch filesystem. The following table illustrates the resulting mounts:
    Login nodes          Mounted scratch filesystems
    -------------------  ------------------------------------------------------
    Carter and Scholar   New Carter scratch only
    Hansen               All currently mounted filesystems except the new
                         Carter scratch and Rice/Snyder scratch
    Other clusters       All currently mounted filesystems except the new
                         Carter scratch and Rice/Snyder scratch
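
To confirm which scratch filesystems a particular login node mounts after the change, standard tools suffice; the commands below are illustrative, not an official procedure:

    df -h -t lustre          # list all Lustre filesystems mounted on this node
    df -h /scratch/carter    # verify a specific scratch mount point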

Users of both Carter and Hansen should be aware that while ITaP will initially populate the new Carter scratch filesystem with a copy of current data, going forward any new data will need to be transferred between the two systems using rsync or the Globus service, as sketched below. Please contact us at rcac-help@purdue.edu if you need help organizing your data flow.
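
For instance, a one-time copy from Hansen scratch to the new Carter scratch with rsync might look like the following. The source path under /scratch/hansen and the hostname carter.rcac.purdue.edu are illustrative assumptions; adjust them for your own directories and the actual login host:

    # copy a project directory from Hansen scratch to Carter scratch
    # -a preserves permissions/timestamps, -v is verbose, -P shows progress and keeps partial transfers
    rsync -avP /scratch/hansen/$USER/project/ carter.rcac.purdue.edu:/scratch/carter/$USER/project/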
