
New Rice community cluster research supercomputer ready for faculty

  • Science Highlights

Carlo Scalo’s research relies on large-scale computer simulations to support modeling and fundamental investigations of complex fluid dynamic systems with a wide range of applications including heat and mass transfer, acoustics and high-speed aerodynamics.

His three-dimensional computations can involve billions of grid points in space, along with a fourth dimension, time, broken into millions, or even up to a billion, time steps. That naturally makes for big problems, and it takes a big computer to solve them.
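
For a sense of scale, a back-of-envelope estimate shows why such problems demand large machines. The grid size and field count below are illustrative assumptions, not figures from Scalo's runs: even a handful of double-precision fields on a billion-point grid occupies tens of gigabytes before a single time step is taken.

```c
#include <stdio.h>

/* Back-of-envelope memory estimate for a 3D simulation grid.
 * The grid dimensions and field count are illustrative assumptions,
 * not taken from Scalo's actual simulations. */
int main(void) {
    long long nx = 1000, ny = 1000, nz = 1000;   /* ~1 billion grid points */
    long long points = nx * ny * nz;
    int fields = 5;   /* e.g., density, three velocity components, energy */

    double bytes = (double)points * fields * sizeof(double);
    printf("Grid points: %lld\n", points);
    printf("Memory for %d double-precision fields: %.1f GB\n",
           fields, bytes / 1e9);
    return 0;
}
```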

“You need capacity and speed,” says Scalo, an assistant professor of mechanical engineering whose work involves highly nonlinear partial differential equations that must be discretized, or transformed into a form a computer can solve.
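
As a minimal sketch of what discretization means in practice, the generic example below replaces the one-dimensional heat equation with finite-difference arithmetic on grid values. It is a textbook scheme chosen for brevity, not the high-order methods Scalo's group actually uses.

```c
#include <stdio.h>

#define N 101   /* grid points on [0, 1] */

/* Minimal illustration of discretization: the 1D heat equation
 * u_t = alpha * u_xx replaced by finite differences on a grid.
 * A generic textbook scheme, not Scalo's research code. */
int main(void) {
    double u[N], unew[N];
    double dx = 1.0 / (N - 1);
    double alpha = 1.0;
    double dt = 0.4 * dx * dx / alpha;   /* stable explicit time step */

    /* initial condition: a spike in the middle, zero at the ends */
    for (int i = 0; i < N; i++) u[i] = 0.0;
    u[N / 2] = 1.0;

    for (int step = 0; step < 1000; step++) {
        for (int i = 1; i < N - 1; i++)
            unew[i] = u[i] + alpha * dt / (dx * dx)
                      * (u[i + 1] - 2.0 * u[i] + u[i - 1]);
        unew[0] = 0.0;
        unew[N - 1] = 0.0;
        for (int i = 0; i < N; i++) u[i] = unew[i];
    }

    printf("u at the midpoint after 1000 steps: %f\n", u[N / 2]);
    return 0;
}
```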

Purdue’s new Rice community cluster supercomputer gives Scalo the capacity and speed he needs. Rice, built by ITaP in May 2015, made the latest TOP500 list of the world’s most powerful supercomputers. At the same time, ITaP added two smaller community clusters: Snyder, designed for memory-intensive applications, especially in the life sciences, and Hammer, for high-throughput serial work.

Faculty can now buy capacity in the Rice, Snyder and Hammer clusters at ITaP’s cluster orders website.

Like the other machines in Purdue’s award-winning Community Cluster Program, Rice is designed for tightly coupled science and engineering applications and parallel computation, the largest portion of the high-performance computing work done on the West Lafayette campus.

Scalo wants fast processors and plenty of them (he currently has 800 cores on Rice fully dedicated to his group), along with ample memory and speedy connections between nodes for message passing. The Rice cluster fits his high-performance computing needs, with little to no effort required for code porting and optimization thanks to the technical support ITaP Research Computing offers.
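
The sketch below, a hypothetical example rather than code from Scalo's group, shows the neighbor-to-neighbor message passing that tightly coupled solvers depend on: each MPI rank owns a slab of the grid and trades boundary values with its neighbors every time step, which is where fast node-to-node connections pay off.

```c
#include <stdio.h>
#include <mpi.h>

/* Minimal sketch of the node-to-node message passing a tightly coupled
 * solver relies on: each rank exchanges one boundary value with its
 * neighbors. An illustrative pattern, not code from Scalo's group. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double my_boundary = (double)rank;   /* stand-in for real boundary data */
    double from_left = 0.0, from_right = 0.0;
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* exchange boundary values with both neighbors */
    MPI_Sendrecv(&my_boundary, 1, MPI_DOUBLE, right, 0,
                 &from_left,   1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&my_boundary, 1, MPI_DOUBLE, left,  1,
                 &from_right,  1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %.0f from left, %.0f from right\n",
           rank, from_left, from_right);

    MPI_Finalize();
    return 0;
}
```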

“They've pushed the envelope on every aspect, which is golden,” Scalo says. “It allows me to push my envelope, or to do what I was doing quicker and better without having to worry about any technical problems.”

ITaP Research Computing, working with faculty partners, has built seven TOP500-class high-performance computing clusters at Purdue since 2008, along with a major research data storage cluster, the Research Data Depot, in 2014. Purdue has three machines on the current TOP500 list, more than any other U.S. school, giving the University the best collection of high-performance computing systems for use by faculty researchers on any single campus in the country.

For more information on Rice, the Community Cluster Program, the Research Data Depot and other research computing services, email rcac-help@purdue.edu or contact Preston Smith, ITaP’s director of research services and support, psmith@purdue.edu or 49-49729.

Besides the computing power, Scalo lauds the support he receives from ITaP Research Computing staff, who helped him realize a 50 percent speedup in his codes on Rice, among other things. He says he never saw any reason to install his own cluster or to look elsewhere for computational resources after he joined Purdue’s faculty in 2014.

“It's impossible for me as new faculty to do a better job with my own cluster, my own students, my own resources than this team of organized people who have been doing it for years,” Scalo says. “It's impossible for me to do it with less money. It's impossible for me to beat this.”

There are now 165 faculty partners from all of Purdue’s primary colleges and schools using the community clusters for research spanning more than 30 science, engineering and social science disciplines.

Purdue partnered with HP and Intel on Rice. The new cluster consists of HP compute nodes with two 10-core Intel Xeon E5 processors (20 cores per node) and 64 GB of memory. The cluster features a Mellanox 56 Gbps FDR InfiniBand interconnect and a Lustre parallel file system built on DataDirect Networks' SFA12KX EXAScaler storage platform.
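
One common way to map a simulation code onto nodes like these (an assumption about usage, not a description of how Rice is configured or scheduled) is one MPI rank per node with 20 OpenMP threads filling the two 10-core processors. The hedged sketch below simply prints that layout.

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

/* Hedged sketch of one possible hybrid layout on 20-core nodes:
 * one MPI rank per node, 20 OpenMP threads per rank. The launcher
 * options that pin ranks and threads vary by site and are omitted. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel num_threads(20)
    {
        #pragma omp single
        printf("rank %d running %d threads on this node\n",
               rank, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```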

Snyder, the big-memory system, consists of HP compute nodes with two 10-core Intel Xeon E5 processors and 256 GB of memory, and has 40 Gbps Ethernet connections. The Snyder cluster is built for expansion, and the plan is to add nodes each year as demand grows, particularly for the life sciences research emphasized in President Mitch Daniels' Purdue Moves initiative.

Hammer, the high-throughput cluster, consists of HP DL60 compute nodes with two 10-core Intel Xeon E5 processors, 64 GB of memory and 10 Gbps Ethernet connections. The Hammer cluster is also built with annual expansion in mind.
