Modeling network interactions can illuminate things ranging from how the brain works to irrigation’s impact on groundwater
Networks work via nodes interacting, whether it’s proteins interacting in our cells, brain cells signaling each other, internet users passing along the latest hit meme, or money and sophisticated financial products moving between banks and markets.
“Certainly, the dynamics of how people communicate and how brain cells act are completely different, but there are some questions that can be asked about these complex systems that are common to different disciplines,” says Christopher Quinn, an assistant professor in Purdue's School of Industrial Engineering.
To ask those kinds of questions, Quinn's lab uses observational data, applied mathematics, statistical modeling, and high-performance computing like Purdue's Brown community cluster research supercomputer.
“The end goal is to understand how these systems function,” Quinn says. “This could be false news transmitting on a social network, this could be genes in our body expressing proteins, which then lead either to proper regulation of cell life or to cancer if it malfunctions. Trying to understand the circuitry and all the processes that are mediated by the circuitry is important to understand and control complex systems in every domain.”
Quinn started with, and maintains, a strong interest in neuroscience and in better understanding the trillions of interactions among billions of cells in the brain. But his research also has touched on such widely divergent topics as online interactions on Twitter and the interplay among farmers, pumping regulations and the condition of groundwater supplies. He works with colleagues from an array of disciplines.
While controlled experiments remain the gold standard in research, computer modeling can do things that would be difficult, or impossible, to do experimentally. It can generally do them faster, too, and makes it feasible to repeat them over and over again.
“If you can come up with an accurate model, then you can run millions of simulations with all different types of parameters and see how those affect the systems,” Quinn says.
The number of calculations involved quickly adds up, and that's where high-performance computing comes into play. The community clusters can greatly speed up the work.
“For us, the most limiting factor is the time, because there are so many calculations to do, and it can be slow even for small problems,” Quinn says. “I find it very important to utilize high-performance resources. It's fantastic that Purdue has the resources it does.”
Before Brown, the latest research supercomputer in Purdue's Community Cluster Program, operated by ITaP Research Computing, Quinn also used the Conte and Rice clusters. Faculty partners always have access to the processing capacity they purchase in a community cluster and also can tap capacity that isn't being used at the moment by their peers. Quinn finds this pooling of resources advantageous.
“On my laptop it might take months, and if I just use the few nodes that I've purchased on Brown it would take several days, but being able to throw a lot of jobs into the general queue on a short-term basis greatly accelerates it,” he says.
He also appreciates not having to maintain his own system and the expert support staff ITaP Research Computing provides.
“They've been very responsive, which has made me much more inclined to stick with using these resources for the long term,” says Quinn, who began using the community clusters when he arrived at Purdue three years ago.
To learn more about the Brown cluster and Purdue’s Community Cluster Program, as well as research data storage and other services available from ITaP Research Computing, contact Preston Smith, director of research services and support for ITaP, at 49-49729 or email@example.com.