Early faculty users praise new Conte community cluster, now open to campus researchers

Purdue Professor Charles Bouman says his lab’s move to powerful supercomputers like Purdue’s new Conte community cluster is driven by need.

That need prompted Bouman’s lab to be an active tester of the Conte cluster this fall, as ITaP Research Computing (RCAC) shook down the new supercomputer in preparation for offering it to all Purdue researchers.

“We've been running things that would have taken months to run in a day,” Bouman says. “It's been a huge enabling technology for us.”

Conte is now in full production and ready for use by campus researchers. Capacity in the cluster can be ordered from the Conte cluster orders website. Email questions to rcac-cluster-purchase@lists.purdue.edu. More details are available on the Conte cluster information website.

Bouman’s lab focuses on new and improved ways to create images of fundamental processes captured by instruments ranging from CT, or computed tomography, scanners for medical purposes to synchrotron X-rays used, among other things, by materials scientists to explore how metals transition from liquid to solid.

“It's not even just better images,” says Bouman, Showalter Professor of Electrical and Computer Engineering and Biomedical Engineering. “I would say we're trying to get images where people couldn't previously.”

Constructing those images — in four dimensions no less, three of space and one of time — requires processing huge amounts of data generated by the instruments, hence the need for high-performance computing.

Conte certainly qualifies as high performance. The cluster ranked 28th on the June TOP500 list of the world’s most powerful supercomputers and was the most powerful available to researchers on a single U.S. campus. Conte’s 580 nodes include Intel’s new Xeon Phi accelerators and a total of 77,520 processing cores, by far the most in any Purdue research supercomputer yet.

Researchers whose codes can’t yet take advantage of Phi acceleration don’t have to pay for the accelerators, but they have the option to purchase the capability later if it becomes useful to them.

ITaP’s tiered pricing structure made Conte, already well suited for his research without the accelerators, an even more attractive proposition, says Peter Bermel, an assistant professor of electrical and computer engineering. His research focuses on nanophotonics and optics with an emphasis on new energy applications, particularly in solar power.

Bermel will have a doctoral student working to take advantage of the Phis. If the accelerators prove valuable, that’s great. If not, they won’t cost him extra. It’s a win-win situation.

“If we can get it working as well as, theoretically, it should work, then it's definitely a good investment,” Bermel says.

The Phis provide Conte with a six-fold increase in peak processing power over Purdue’s Carter community cluster, built in 2011. Phi-optimized code can take advantage of hardware support for accelerated matrix computation and dense floating-point math, offering possibilities for considerable performance improvement in mathematically intense research computation, says Michael Shuey, manager of high-performance compute systems for ITaP.
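
As a rough illustration (a generic sketch, not code from any Purdue project), the C kernel below shows the sort of dense floating-point work Shuey describes: a matrix-vector product whose inner loop a compiler can spread across the Phi’s wide vector lanes using standard OpenMP directives.

    #include <stddef.h>

    /* Dense matrix-vector product y = A*x, with A an n-by-n matrix in
     * row-major order. A generic example of the floating-point-heavy
     * kernels that benefit from the Phi's vector hardware; not taken
     * from any Purdue research code. */
    void matvec(size_t n, const double *A, const double *x, double *y)
    {
        #pragma omp parallel for
        for (size_t i = 0; i < n; i++) {
            double sum = 0.0;
            /* Contiguous dot product the compiler can vectorize
             * across the accelerator's SIMD lanes. */
            #pragma omp simd reduction(+:sum)
            for (size_t j = 0; j < n; j++)
                sum += A[i * n + j] * x[j];
            y[i] = sum;
        }
    }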

Moreover, Phis are based on Intel’s longstanding x86 technology and use standard programming models like OpenMP and MPI, which can make it simpler to start porting code than with other accelerator technologies, says Preston Smith, ITaP research support manager.
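
As a minimal sketch of what Smith means (again a generic example, not Purdue code), the hybrid MPI-plus-OpenMP program below uses only the standard models he names; the same source that runs on an ordinary x86 cluster node can be recompiled for the Phi as a starting point for porting.

    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    /* Hybrid MPI + OpenMP "hello": each MPI rank spawns a team of
     * OpenMP threads and reports its identity. Because both models
     * are standard, the identical source builds for conventional
     * x86 nodes and for the x86-based Phi. */
    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        #pragma omp parallel
        printf("MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());

        MPI_Finalize();
        return 0;
    }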

Conte is the latest research computing system offered to Purdue faculty under the Community Cluster Program. Through community clustering, faculty partners and ITaP make more computing power available for Purdue research projects than faculty and campus units could individually afford.

ITaP Research Computing (RCAC) provides the infrastructure for, installs, administers and maintains the community clusters, including security, software installation and user support, so researchers can concentrate on doing research rather than on running a high-performance computing system.

“ITaP was able to get a good deal on hardware, as good as I could possibly have bargained these companies down to,” Bermel says. “Having professionals maintain the system is also a huge plus. I know personally, from when I was in graduate school, that trying to maintain clusters was a good educational experience but also very time consuming. Not having to deal with that is very nice.”

Community clustering also maximizes use by sharing computing power among the faculty partners whenever it is idle. Researchers always have ready access to the capacity they purchase, and potentially to much more if they need it.
