Faculty praise new Conte cluster, capacity still available to Purdue researchers
March 25, 2014
Erica Carlson and Keith Cherkauer are aiming at very different moving targets, but the two Purdue faculty members are taking aim with the same tool — Purdue’s new Conte cluster supercomputer.
Conte, which ITaP Research Computing (RCAC) brought online during the fall semester, ranked 33rd on November’s TOP500 list of the world's most powerful supercomputers. It is the most powerful supercomputer for use by researchers on a single U.S. campus.
For Cherkauer, associate professor of agricultural and biological engineering, the target of that computing power is water moving through the hydrologic cycle and the landscape, its role in issues such as erosion and flooding, and how it is influenced by factors such as land use and climate change.
For physics Professor Carlson, the targets are electrons and the phase transitions they undergo inside materials that could be, among other things, high-temperature superconductors, which could radically enhance the efficiency of electrical systems, or “memristors,” resistors with memory that are seen as components of a next generation of more capable electronics.
“With Conte, we can do our simulations faster and we can simulate larger systems,” says Carlson, whose research is breaking ground in interpreting images from the growing number of ways to probe what’s happening inside materials at an atomic level. “We're discovering fundamentally new physics and we want to make our predictions concrete. The larger the system you can simulate, the closer you get to the fundamental laws.”
Meanwhile, Cherkauer’s hydrologic models tend to be on a large — river basin, state, even continental — scale and to simulate long periods of time.
“Most of our simulations are 100 to 150 years,” Cherkauer says. “We just need a lot of CPU time. Each CPU on Conte is faster and I have more of them.”
Conte's 580 nodes include Intel’s new Xeon Phi accelerators and a total of 77,520 processing cores, by far the most of any Purdue research supercomputer to date. Researchers are welcome to test and refine their codes on the acceleration hardware without having to pay for it up front. They can then purchase the capability later if it proves useful.
Cherkauer was also attracted by the 500 gigabytes of research group storage space ITaP Research Computing now makes available at no additional cost to faculty using Purdue’s community clusters.
“Group storage allows us to put the model input files that we tend to share in one location instead of everybody having a copy of them,” Cherkauer says.
Conte is the latest research computing system offered to Purdue faculty under the Community Cluster Program. Through community clustering, faculty partners and ITaP make more computing power available for Purdue research projects than researchers and campus units could individually afford.
ITaP Research Computing provides the infrastructure for the community clusters and installs, administers and maintains them — including security, software installation and user support — so researchers can concentrate on doing research rather than on running a high-performance computing system.
“I think it's a fantastic service,” Carlson says. “I can work just on the physics and on my code and have somebody else worry about installing scripts, keeping compilers working, the cooling system, where to house the thing, updating it. Buying access on Conte also gives me much better computing facilities than I could purchase on my own.”
Community clustering maximizes use by sharing computing power among the faculty partners whenever it is idle. Researchers always have ready access to the capacity they purchase, and potentially to much more if they need it.