"Green" scales well, according to Purdue's early users

Early “tire kicking” by Purdue researchers is winning the University’s new “green” supercomputer some fans, although the results have been mixed for others and they all caution that more refining and testing need to be done. “It allows us to efficiently perform simulations much larger than we could do with other machines,” materials engineering Professor Alejandro Strachan said of the computer from Massachusetts-based SiCortex. “I think it’s a great machine to do consistently large systems.”

Purdue’s latest supercomputer won’t make any lists of the fastest supercomputers in the world, the country, the Midwest or the state. It’s not even the fastest on campus.

But the machine from SiCortex has advantages that could make it a boon for some researchers because it can tackle big problems by breaking them into pieces and processing those simultaneously with a high rate of efficiency.

Moreover, the new supercomputer—which has more than 3,000 processors, the chips that serve as a computer’s brains—isn’t “green” only in the sense that it could run climate simulations, one potential use at Purdue’s Rosen Center for Advanced Computing. SiCortex designed it to consume less power than conventional supercomputers.

Purdue is the first university to test SiCortex’s top-of-the-line model. The Rosen Center, the high-performance, research-focused computing arm of Information Technology at Purdue (ITaP), is leasing the experimental machine for more than a year of testing, with an option to buy if it proves useful to Purdue researchers.

The project is co-funded by the Purdue Provost’s Office under a grant request supported by more than 50 faculty members, according to Gerry McCartney, Purdue’s vice president for information technology and chief information officer, who heads ITaP.

A machine costing about a million dollars, the SiCortex is low power in large part because its processors run at 500 megahertz, a quarter to a sixth the speed of conventional processors, which reduces both power demand and heat, potentially lowering the energy consumed for cooling, and cooling costs, as well. Its processors draw 600 milliwatts of power each, about the same as a cell phone or small flashlight. A standard supercomputer contains thousands of processors that require about 25 watts (25,000 milliwatts) each.
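As a rough, illustrative comparison using the figures above (the totals cover only the processors themselves, not memory, interconnect or cooling):

```python
# Back-of-the-envelope comparison of processor power draw, using the figures
# quoted above. Illustrative only; it ignores memory, interconnect and cooling.
sicortex_cores = 3240            # SC5832 processor count
sicortex_watts_per_core = 0.6    # 600 milliwatts each

conventional_cores = 3000        # "thousands" of processors, per the article
conventional_watts_per_core = 25 # about 25 watts each

print(f"SiCortex processors:     {sicortex_cores * sicortex_watts_per_core / 1000:.1f} kW")
print(f"Conventional processors: {conventional_cores * conventional_watts_per_core / 1000:.1f} kW")
# Roughly 1.9 kW versus 75 kW for the processors alone.
```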

SiCortex makes up for some of the chip performance deficit with very fast communications among its thousands of processors. The machine also is built to move data quickly from storage to processing. That means its processors don’t, in effect, stand around waiting for something to do, like a painting crew out of paint.

So far, about 20 researchers and a dozen Rosen Center staff members have tried the machine and the verdict, in general, has been that some interesting things can be done with it, said William Whitson, ITaP and Rosen Center research computing manager.

Since its arrival in June, the SiCortex SC5832 at the Rosen Center has shown good scalability, which is to be expected given the balance of communication and computation speed permitted by its fast “interconnect fabric”—the communications pathways among its processors—and its large processor count, said Rudolf Eigenmann, professor of electrical and computer engineering and technical director of Purdue’s Computing Research Institute. That means if it takes an hour to simulate a sample of a certain size, researchers can double the size of the sample, double the number of processors in use, and still finish in an hour, or very nearly, with only a slight loss to overhead.
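That behavior is what computing researchers call weak scaling, and its quality is usually summarized as an efficiency: the baseline run time divided by the run time after the problem and the processor count have grown together. A minimal sketch of that bookkeeping, with hypothetical timings rather than Rosen Center benchmarks:

```python
# Weak-scaling efficiency: the problem grows with the processor count, so an
# ideal machine keeps the run time flat. Efficiency = T(baseline) / T(scaled).
def weak_scaling_efficiency(baseline_seconds, scaled_seconds):
    return baseline_seconds / scaled_seconds

# Hypothetical numbers for illustration only (not measured on the SC5832):
# 1 hour on N processors, 1.05 hours on 2N processors with twice the sample.
print(f"{weak_scaling_efficiency(3600.0, 3780.0):.0%}")  # about 95% efficient
```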

What its testers haven’t seen yet is an improvement in “absolute execution time,” which is to say the case where the SiCortex is significantly faster than a more conventional machine, Eigenmann said. But that could come with more exploration.

“Every new machine needs to be explored in how it can deliver performance the best,” said Eigenmann, one of the drivers, with McCartney, in bringing the new supercomputer to Purdue. The Computing Research Institute partnered with the Rosen Center on the funded proposal that purchased the machine and their staffs have worked together in helping researchers run and improve their computations on it.

In addition, Whitson said the Rosen Center eventually plans to use a power-monitoring system on the SiCortex to examine its energy efficiency, since that’s one of the machine’s selling points.

Mechanical engineering Professor John Abraham focuses on improving efficiency in a different arena—internal combustion engines. His lab is on the lookout for things the industry could use to get gas and diesel engines to burn less fuel, improving mileage and also reducing emissions of climate-impacting greenhouse gases like carbon dioxide.

Abraham and colleagues also look for ways to reduce engine emissions of toxic pollutants, such as carbon monoxide, particulate matter and nitric oxide.

In addition, they study after-treatment systems, to catch and filter emissions over and above improvements in the combustion process, and fuel reformers, used to separate hydrogen from primary fuels for use in after-treatment devices and fuel cells.

All of this involves examining the fundamentals of flows, of fuel and oxygen moving turbulently through an engine system and reacting to generate the explosive power that drives a car or truck. Abraham’s lab looks at elements of the physical and chemical processes involved at micron scales in engines where the geometry scales are on the order of centimeters, which makes for a lot of microns to track. The researchers also resolve time scales on the order of nanoseconds or microseconds during an engine cycle that lasts for several seconds, once more a lot of individual elements to follow.
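To get a feel for the counts involved, here is a purely illustrative, back-of-the-envelope tally; the resolutions and durations are round numbers chosen for illustration, not Abraham’s actual grid, and real engine codes use far more economical discretizations:

```python
# Illustrative only: what a naive "resolve every micron, every nanosecond"
# grid would imply for a centimeter-scale engine geometry.
domain_cm = 1.0                # geometry on the order of centimeters
cell_micron = 1.0              # features resolved at micron scales
cells_per_side = domain_cm * 1e4 / cell_micron   # 10,000 microns per centimeter
total_cells = cells_per_side ** 3                # three-dimensional grid

cycle_seconds = 1.0            # a cycle lasting on the order of a second
step_seconds = 1e-9            # time resolved at nanosecond scales
total_steps = cycle_seconds / step_seconds

print(f"{total_cells:.0e} cells x {total_steps:.0e} time steps")  # 1e+12 x 1e+09
```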

The complexity in the length and time scales, and the inherently multiscale nature of the problem, makes for models that require high performance computing. Abraham routinely uses resources at the National Center for Supercomputing Applications in Urbana, Ill., among others.

Meanwhile, Strachan specializes in molecular- and atomic-level simulation of materials in which he tracks what each and every atom in a sample of material is doing under various conditions.

While that sounds quite different from what Abraham does, the two Purdue research efforts share some important characteristics where their computing needs, and the Rosen Center’s SiCortex, are concerned.

The problems in both cases can be highly parallelized; that is, they can be broken up into many smaller pieces and parceled out to a lot of computer processors. The processors solve the pieces of the puzzle, and the answers are then reassembled into a master answer. This can allow problems to be solved faster, larger problems to be tackled in a reasonable amount of time, or a combination of both.
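A minimal sketch of that split-solve-reassemble pattern, written with MPI (via the mpi4py package) because message passing is the standard way work is spread across machines like these; the “piece of work” here is a trivial sum standing in for whatever physics each processor would really compute:

```python
# Toy MPI example of the split/solve/reassemble pattern described above.
# Launch with an MPI runner, for example:  mpirun -np 64 python split_work.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this processor's index
size = comm.Get_size()   # total number of processors

# Each rank solves its own slice of a stand-in problem...
my_piece = sum(x * x for x in range(rank * 1000, (rank + 1) * 1000))

# ...and the partial answers are reassembled into one master answer on rank 0.
total = comm.reduce(my_piece, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} processors combined their pieces: total = {total}")
```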

With more than 3,000 processors and the specialized interconnect fabric the company has developed, the SiCortex naturally interested Abraham’s and Strachan’s labs. Interprocessor communications are important in what Abraham and Strachan do since they spread so many pieces among so many processors, which must communicate to coordinate their activity.

“If you don’t have very fast communications between the processors it ends up killing you,” Strachan said. “That made it very attractive to us.”

Strachan and colleagues employ LAMMPS, widely used molecular dynamics simulation software developed at Sandia National Laboratories. In simulations that follow a polymer material to its breaking point, looking at its mechanical response and the underlying molecular dynamics that govern it, their results were encouraging.
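For readers unfamiliar with how such a simulation is typically driven on thousands of processors, here is a generic, hedged sketch using LAMMPS’s Python wrapper under MPI; it is not Strachan’s polymer model, and the input-file name is hypothetical:

```python
# Generic sketch: driving LAMMPS from its Python wrapper across many MPI ranks.
# Launched with an MPI runner, for example:  mpirun -np 1024 python run_md.py
# "in.polymer" is a hypothetical input script, not the group's actual model.
from mpi4py import MPI
from lammps import lammps

comm = MPI.COMM_WORLD
lmp = lammps()              # every rank participates in one parallel simulation
lmp.file("in.polymer")      # read the model: units, atoms, force field, fixes
lmp.command("run 100000")   # advance the molecular dynamics trajectory

if comm.Get_rank() == 0:
    print(f"finished on {comm.Get_size()} processors")
```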

Strachan said they found that their code scaled close to perfectly up to the machine’s full complement of 3,240 processors. Other supercomputing resources available at Purdue can certainly handle the problems, Strachan said, and perhaps do it faster, but not as efficiently. They lose more time to overhead as the problem scales up.

Abraham’s students have run models of flow and chemical reactions on the SiCortex using 128, 256 and 1,024 processors. Their code ported easily to the new machine and scaled well at the 128- and 256-processor levels.

It didn’t scale as well on 1,024 processors, but Abraham said that’s probably more a matter of refining the code to use that many processors than a problem with the SiCortex.

“I’d like to use four times that many,” he said. “SiCortex is something attractive for us and promising so far.”

Mechanical engineering Professor Steve Frankel experienced similar scaling results—and better at the high end—in testing the SiCortex. Frankel is a computational fluid dynamics expert who focuses on turbulent flows involving everything from jet engines to human speech using computer modeling and simulation.

He primarily uses direct numerical simulation, or DNS, and large eddy simulation, or LES, to study turbulent flows; Abraham employs the same methods. The former provides essentially a complete rendering of a flow, while the latter resolves only the flow’s larger, most important features and relies on modeling to fill in the gaps.

In either case, the number of points in a flow and number of time steps Frankel’s work incorporates require high performance computing to avoid waiting days, weeks, months, or years for results.

He and his lab work to parallelize their code and make it scalable for runs on their own and Purdue clusters, along with external resources like IBM’s Blue Gene supercomputers, on which Frankel in one test made use of 16,000 processing cores.

Frankel and colleagues also have used more than 2,000 cores for a human blood flow simulation on the IBM Blue Gene/L system at Argonne National Lab. His partnership with the national lab led him to try the SiCortex computer when it arrived at Purdue. Argonne acquired a SiCortex SC5832 in October 2007 and tests there showed similar performance under many conditions to the lab’s Blue Gene.

Frankel’s code runs best on a lot of cores—his lab used 1,024 on the SiCortex—and with dedicated space to run continually. Under those conditions, they got scaling performance comparable to a Blue Gene, he said, just as Argonne did.

Purdue aeronautical and astronautical engineering Professor Greg Blaisdell’s experience with the SiCortex is more mixed. Blaisdell also looks at turbulent flows, as they pertain to issues like the noise from jet aircraft and the drag and heat on aircraft wings, especially at supersonic speeds. Blaisdell’s lab doesn’t do it, but he noted that similar methods are employed to study gas flows on a galactic scale.

Blaisdell regularly makes use of Big Ben at the Pittsburgh Supercomputing Center, a more than 21-teraflop Cray XT3 system. He was intrigued by the 3,240-core SiCortex, and he and his students wanted to test a reworked version of their code on as many of those processors as possible.

“He was really pleased at how easy it was to get things up and running,” Blaisdell said of his student Phoi-Tack (Charlie) Lew, who had a job running the same day he got access to the system.

Blaisdell, Lew, aeronautical and astronautical engineering Professor Tasos Lyrintzis and graduate student Shih-Chieh Lo compared two large eddy simulations of a turbulent jet running on the SiCortex and Big Ben.

The SiCortex was slightly better as they scaled up from a minimum of 16 cores to 128, the limit for the test problem involved. Big Ben performed better in testing a second problem on up to 1,024 cores.

Blaisdell said testing on the SiCortex should help refine their code to better use the thousands of processors set to become standard with the advent of petascale computing, which Eigenmann said was one of the primary motivations behind obtaining the new machine.

The Purdue researchers agreed that being able to take advantage of more—rather than faster, as has been the case up to now—processors is likely to become the path to doing bigger science and engineering.

“That, I think, is the future,” Abraham said. “Individual processor speeds are not necessarily going up as fast as we want them to.”

Writer: Greg Kline, 765-494-8167

Sources: Greg Blaisdell, 765-494-1490, blaisdel@purdue.edu

Steve Frankel, 765-494-1507, frankel@purdue.edu

Alejandro Strachan, 765-496-3551, strachan@purdue.edu

John Abraham, 765-494-1505, jabraham@purdue.edu

William Whitson, 765-496-8227, whitson@purdue.edu
