Fast and furious: Purdue building its newest research supercomputer in less than a day

More than 100 staff members and volunteers will build Purdue's latest high-performance computing cluster in a fast and furious race against the clock Friday, May 8, that should culminate in the machine running science computations by afternoon.

It will be the eighth system built by ITaP and faculty in as many years under Purdue's award-winning Community Cluster Program, which has given Purdue the nation's best high-performance computing resources available to researchers on a single campus.

The new research supercomputer, named Rice, should make the June TOP500 list, joining Purdue’s Conte and Carter clusters. ITaP Research Computing and its faculty partners have built six TOP500-class supercomputers at Purdue since 2008.

Rice will be the seventh; ITaP also built a major research data storage cluster in 2014. More than 150 Purdue research labs and hundreds of faculty and students use these clusters to develop new treatments for cancer, improve crop yields to better feed the planet, engineer quieter aircraft, study global climate change and probe the origins of the universe, among many other topics.

Purdue partnered with HP and Intel on Rice. The new cluster consists of HP compute nodes with two 10-core Intel Xeon E5 processors (20 cores per node) and 64 GB of memory. The cluster features a Mellanox 56 Gbps FDR InfiniBand interconnect and a Lustre parallel file system built on Data Direct Networks' SFA12KX EXAScaler storage platform.

Rice is designed for tightly coupled science and engineering applications, the largest portion of the high-performance computing work done at Purdue’s West Lafayette campus. At the same time, ITaP Research Computing is adding two smaller clusters designed for memory-intensive applications and high-throughput serial work.

Snyder, the big-memory system, consists of HP compute nodes with two 10-core Intel Xeon E5 processors and 256 GB of memory, and has 40 Gbps Ethernet connections. The Snyder cluster is built for expansion, and the plan is to add nodes each year as demand grows, particularly for the life sciences research emphasized in President Mitch Daniels' Purdue Moves initiative.

Hammer, the high-throughput cluster, consists of HP DL60 compute nodes with two 10-core Intel Xeon E5 processors, 64 GB of memory and 10 Gbps Ethernet connections. The Hammer cluster is also built with annual expansion in mind.
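
The three machines target different workload shapes. "Tightly coupled" applications of the kind Rice is built for run as many cooperating processes, typically communicating with MPI, that must exchange data constantly, which is why Rice pairs 20-core nodes with a fast FDR InfiniBand interconnect; high-throughput work like Hammer's, by contrast, consists of many independent serial jobs. The short C sketch below is purely illustrative, is not code from any Purdue project, and assumes only a standard MPI installation; it shows the kind of collective synchronization a tightly coupled job performs.

/* Illustrative only: a minimal "tightly coupled" MPI program.
 * Every rank computes a local value, then all ranks must synchronize
 * in MPI_Allreduce before any of them can continue. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank owns a slice of the work; here the "work" is just its rank index. */
    double local = (double)rank;
    double global = 0.0;

    /* The collective step: no rank proceeds until every rank has contributed,
     * which is why a low-latency interconnect matters for this class of job. */
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum across %d ranks: %g\n", size, global);

    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with, for example, mpirun -np 40, such a job's 40 ranks would span two 20-core nodes and spend part of their time communicating over the interconnect rather than computing.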

Purdue has a tradition of naming its clusters after the many computing pioneers in the University’s history.

Rice is named for John R. Rice, the W. Brooks Fortune Distinguished Professor Emeritus of Computer Science at Purdue. Professor Rice, one of the earliest faculty members of Purdue's first-in-the-nation computer science program, is known for his research on numerical methods and problem-solving environments for scientific computing, as well as performance evaluation of mathematical software.

Snyder is named for the late Purdue agricultural economics Professor James Snyder, a pioneer in applying quantitative methods and computer modeling to agribusiness decision making.

The story behind Hammer’s name is a little different: it refers to the versatile tool Purdue Pete holds in his hands.

A video about the cluster build event is available at:

http://youtu.be/Ol7NPpSo2Ao
