New research supercomputer offers improved performance, flexibility for diverse computational needs
September 26, 2017
Purdue’s latest research supercomputer will improve on the processing power of its predecessor while offering different configuration options that make it suitable for researchers in a variety of fields.
The high-performance computing cluster’s basic configuration will include nodes with two 12-core Intel Xeon Gold “Skylake” CPUs, 96 GB of RAM and EDR InfiniBand interconnects, and will be approximately 50 percent faster than Halstead, the 2016 cluster. The new cluster is priced at $5,599 per node for five years of service.
For researchers in the life sciences and others who need more memory, configurations with larger amounts of RAM are available in the Snyder cluster.
The new cluster will also include a 16-node graphics processing unit (GPU) partition, with three NVIDIA Tesla P100 GPUs per node. GPU nodes are available for purchase, and researchers can also obtain access to them through a subscription-based model for $2,500 a year.
Purdue has a tradition of naming its research supercomputers after notable figures in computing history at the University. The new cluster will be named for Herbert C. Brown, the late Purdue professor who received the 1979 Nobel Prize in chemistry for his work on boron compounds.
Brown is the ninth research computing system offered to Purdue faculty in as many years through the Community Cluster Program. There are now more than 190 faculty partners from all of Purdue’s primary colleges and schools using the community clusters and other services operated by ITaP for research, spanning more than 35 science, engineering, life sciences and social science disciplines.
Community clustering makes more computing power available for Purdue research than faculty and campus units could individually afford. ITaP Research Computing installs, administers and maintains the community clusters, including security, software installation and expert user support.
Partners always have ready access to the cluster capacity they purchase, but they can also share capacity that fellow researchers aren’t using. This gives users access to substantially more computational power when needed and keeps the machines busy.