Steele supercomputer at Purdue makes list of world's most powerful systems
December 2, 2008
Purdue’s Steele supercomputing cluster—made possible by the collaborative efforts of faculty, staff and volunteers—is among the most powerful high performance computing systems in the world, according to rankings released at the Supercomputing ’08 (SC08) conference in Austin, Texas, Tuesday (Nov. 18).
The Top 500 Supercomputer Sites project has been ranking the 500 most powerful known computer systems twice a year since 1993 as a way of detecting and tracking trends in high performance computing. Steele placed 105th on the latest list. Purdue ranked 319th in November 2007.
Steele ranked first among the Big Ten universities with systems on the list. Indiana’s Big Red cluster was at 148, and Minnesota had two entries, at 268 and 356. The Steele cluster is operated by Purdue’s Rosen Center for Advanced Computing, the research and discovery arm of Information Technology at Purdue, the university’s central information technology organization.
Gerry McCartney, Purdue’s vice president for information technology and chief information officer, said Steele’s showing was important not so much for where it puts Purdue on the Top 500 list as for the trend it indicates.
“Our new supercomputer, Steele, is another indication that Purdue is once again one of the leaders in high performance computing,” McCartney said. “But of course we don’t do this to see how high we can score on lists such as the Top 500. We do this to enable our scientists and engineers to stay at the forefront of discovery in crucial topics such as cancer, global warming and the lack of affordable energy.”
Purdue is determined to continue enhancing the high performance computing resources it provides for research and economic development purposes across the state, McCartney said. Between 2006 and 2008, ITaP’s Rosen Center increased its computing resources—used by researchers around campus, on Purdue’s satellite campuses and elsewhere—from 14 teraflops, or 14 trillion calculations per second, to 100 teraflops.
A lot of people on the Purdue campus can take some of the credit for Steele’s placement on the list announced at the premier international gathering for high performance computing, networking, storage and analysis. Steele is a “community cluster,” funded by combining faculty grant and lab startup funds and money from institutional sources.
Each “owner” gets a share of the computing power in the machine proportional to their investment, plus the opportunity to tap additional capacity when they need it from the shares of other owners that happen to be idle at the time.
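The share model works roughly like a fair-share allocation: every owner is guaranteed capacity in proportion to what they paid, and unused capacity is lent out to owners whose demand exceeds their baseline. The sketch below is a hypothetical illustration of that idea only; the article does not describe Steele’s actual scheduler, and all names and numbers here are invented.

```python
# Hypothetical sketch of a community-cluster share model.
# Owner names, core counts and policies are illustrative only;
# they are not the actual Steele scheduler or its settings.

def allocate(total_cores, investments, demands):
    """Split cores by investment, then lend idle baseline
    capacity to owners whose demand exceeds their share."""
    total_invested = sum(investments.values())
    # Baseline: each owner's guaranteed cores, by investment.
    baseline = {o: total_cores * inv // total_invested
                for o, inv in investments.items()}
    # Idle cores: guaranteed capacity an owner isn't using now.
    idle = sum(max(0, baseline[o] - demands.get(o, 0))
               for o in baseline)
    # First pass: everyone gets up to their own baseline.
    alloc = {o: min(demands.get(o, 0), baseline[o])
             for o in baseline}
    # Second pass: lend idle cores to owners who want more.
    for o in baseline:
        extra = demands.get(o, 0) - alloc[o]
        lent = min(extra, idle)
        alloc[o] += lent
        idle -= lent
    return alloc

shares = allocate(
    total_cores=100,
    investments={"labA": 60, "labB": 40},
    demands={"labA": 80, "labB": 10},
)
# labA borrows 20 idle cores from labB's unused baseline.
```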
“We built a top 500 machine by working collaboratively with the faculty,” said John Campbell, associate vice president for information technology, who heads the Rosen Center. “This machine is all about pulling together a diverse set of people, utilizing a variety of funding and sharing resources.”
Resources like Steele are integral to the research of Purdue faculty members who helped pay for the cluster, like Gerhard Klimeck, an electrical and computer engineering professor who models the next two or three generations of nanoscale electronic devices, allowing their properties to be understood long before they’re ever fabricated.
Atoms, electrons—the things Klimeck looks at are very small. But there are millions, maybe billions, of them moving and interacting in myriad ways. That makes simulating them as complicated as simulating something very big, the cosmos for instance.
“We’re compute driven,” Klimeck said.
Building Steele was a community effort, too. More than 250 staff members and volunteers assembled the cluster in a single Monday morning in May. Some of them even came from Purdue’s diehard in-state athletic rival Indiana, attracted by the idea of a high-tech barn raising: assembling an 18-wheeler-sized supercomputer in a day, a process that normally takes weeks.
Campbell noted that Steele recently averaged 87 percent owner utilization and more than 98 percent utilization overall.
That’s one reason the Rosen Center already is planning Purdue’s next community cluster, to be built in the spring of 2009. Faculty and campus organizations interested in participating in the new cluster, to be called Coates, can find more information at: http://www.rcac.purdue.edu/userinfo/resources/coates/.
The Top 500 list generally captures the most powerful academic and commercial research supercomputers. Information about classified systems, used largely for defense purposes, isn’t released for the list.
Steele, made from 893 Dell 1950 systems, also prompted the appearance of a Purdue logo on a slide highlighting key Dell customers in the SC08 conference keynote talk Tuesday morning by Michael Dell, chairman of the board and chief executive officer of the world's second largest computer company.
As with Steele, the Rosen Center is naming its new cluster after an important figure in the history of computing at Purdue. Clarence L. “Ben” Coates headed the School of Electrical Engineering (now Electrical and Computer Engineering) and was a driving force behind creating a high performance computing network for Purdue’s engineering schools. John Steele was instrumental in founding and served as the director of the Purdue University Computing Center, the high performance computing unit at Purdue prior to the Rosen Center.
Testing Steele for supercomputing’s Top 500 list required running the High-Performance LINPACK (HPL) benchmarking suite, software designed to stress three big computationally oriented components of a cluster: its processing power, its memory and the wiring that links its processors together for working in concert.
The Rosen Center’s Andy Howard, who volunteered for the benchmarking job, said it is possible simply to download the code and run it on most machines, “but it’s not going to be your best score.”
Like a lot of software, many scientific applications included, the benchmarking suite runs best with some relatively minor tuning for conditions on a specific machine, the number of processors for instance. Howard, a senior in electrical and computer engineering technology from West Lafayette who works as an assistant research programmer for the Rosen Center, tweaked the program in six test sessions spread over three weeks.
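The tuning Howard describes typically happens in HPL’s input file, conventionally named HPL.dat, where parameters such as the problem size and the process grid are matched to a specific machine. The fragment below is an illustrative example of that file’s format with placeholder values; it is not the configuration used on Steele.

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout, 7=stderr, file)
1            # of problems sizes (N)
100000       Ns   -- problem size, sized to fill most of memory
1            # of NBs
192          NBs  -- block size, tuned to the CPU and BLAS library
0            PMAP process mapping (0=Row-major, 1=Column-major)
1            # of process grids (P x Q)
16           Ps   -- P x Q should equal the number of MPI processes
32           Qs
(remaining tuning lines omitted)
```

Choosing the problem size N and the P×Q grid shape is exactly the kind of machine-specific adjustment that separates a quick out-of-the-box run from a competitive score.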
Steele is strong from the perspective of its processing power and memory, Howard said. Where it comes up a little short is in the connective wiring, which doesn’t include a lot of expensive low-latency hardware because that isn’t a feature much in demand by the researchers who use, and who paid for, the system.
Writer: Greg Kline, (765) 494-8167, firstname.lastname@example.org