Rosen Center now has 14 times the computing capacity it had two years ago

July 12, 2006

The Rosen Center for Advanced Computing (RCAC) now has 14 times the computing capacity it had two years ago, thanks to a new supercomputer and the creation of a community cluster model. RCAC's computing resources formerly totaled 1 TFlop, or one trillion floating-point operations per second.

Rick Smith, associate vice president and chief technology officer for Information Technology at Purdue (ITaP), says, "Roughly two years ago, RCAC maintained only two computational research systems: one focused on high-performance shared-memory parallel computing, the IBM SP installed in 1996, and another newly created recycled cluster that used machines retired from ITaP labs to form 48-node clusters. Through some central investment and the creation of the community cluster model, which allows researchers to pool their funds to purchase larger computational resources, we've grown our capacity to 14.5 TFlops."

RCAC worked with the National Science Foundation and the San Diego Supercomputer Center to acquire the IBM SP Blue Horizon for no cost other than shipping and reconfiguration. RCAC maintains a diverse set of computational resources, summarized below.

Shared Memory Machines

  • SP – IBM SP system consisting of 320 Power3 processors, divided among 64 "thin nodes" (quad-processor systems with 4.5 GB of memory) and 4 "high nodes" (16-processor systems with 64 GB of memory).

  • Gold Sunset – IBM SP system consisting of 576 Power3 processors, divided among 36 high nodes with a total of 576 GB of memory.

  • 2 IBM p690 Regattas, each with 16 Power4+ processors and 128 GB of memory; one is dedicated to the Network for Computational Nanotechnology and the other to life sciences research.

  • 5 Sun F6800 multiprocessor systems. Each F6800 system contains 24 1.2 GHz UltraSPARC III processors and 192 GB of memory. All five systems share a 3.5 TB Fibre Channel storage array for scratch space and are interconnected with Sun's proprietary FireLink, a low-latency, high-speed network fabric used by Sun's MPI tools.

  • 2 Sun E10000 systems (commonly referred to as E10k systems). Each system has 56 UltraSPARC II processors at 400 MHz and 56 GB of memory, and supports High Performance Classroom activities in addition to general research.

Clusters

  • Copper – 7 dual-processor HP Itanium systems. Each HP rx2600 server has two 900 MHz Intel Itanium processors, 8 GB of memory, and two 73 GB 15,000 RPM SCSI disks.

  • General purpose Linux clusters

  • Five 48-node clusters composed of laboratory workstations retired from ITaP labs (the recycled cluster), upgraded in summer 2005 to 1.6 GHz Pentium 4 systems with 512 MB of memory

  • 60 dual-processor 1.2 GHz AMD Athlon systems with 1 GB of memory

  • 70 Dell PowerEdge 1425SC systems with 3.2 GHz Xeon (Nocona) processors and 2 GB of memory

  • Hamlet community cluster (32-bit Intel Xeon, 618 processors) – Hamlet comprises several node types due to expansion over time. All nodes use 3.06 or 3.2 GHz Intel Xeon (Nocona/Irwindale) processors. They include:

  • 180 Dell PowerEdge 1750s with 2 GB of memory and Gigabit Ethernet interconnect

  • 64 Dell PowerEdge 1750s with 4 GB of memory and InfiniBand interconnect

  • 65 Dell PowerEdge 1425SCs with 4 GB of memory and InfiniBand interconnect

  • Macbeth community cluster (64-bit AMD Opteron, 196 processors):

  • 98 dual-processor HP DL145 systems with 1.8 GHz AMD Opteron processors, 4 GB of memory, and InfiniBand interconnect

  • Lear community cluster (64-bit Intel Irwindale, 1024 processors):

  • 96 dual-processor Dell PowerEdge 1425SCs at 3.2 GHz with 6 GB of memory and Gigabit Ethernet interconnect

  • 416 dual-processor Dell PowerEdge 1425SCs at 3.2 GHz with 4 GB of memory and Gigabit Ethernet interconnect
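As a rough check on aggregate figures like the ones above, a cluster's theoretical peak performance can be estimated as nodes × processors per node × clock rate × floating-point operations per cycle. The sketch below applies this to Lear's 512 dual-processor 3.2 GHz nodes; the assumption of 2 double-precision FLOPs per cycle per processor is illustrative (typical for SSE2-era Xeons), not a figure from the article.

```python
def peak_tflops(nodes, procs_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak in TFLOPS: nodes x procs x GHz x FLOPs/cycle / 1000."""
    return nodes * procs_per_node * clock_ghz * flops_per_cycle / 1000.0

# Lear: 96 + 416 = 512 dual-processor nodes at 3.2 GHz.
# 2 FLOPs/cycle is an assumed figure for SSE2-era processors.
lear = peak_tflops(512, 2, 3.2, 2)
print(f"Lear theoretical peak: {lear:.1f} TFLOPS")  # prints: Lear theoretical peak: 6.6 TFLOPS
```

Delivered performance on real workloads is typically well below this theoretical ceiling, which is why such estimates are only a sanity check.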

RCAC's recent enhancements also include the diversification of computational resources, the creation of a multi-tiered storage model, and the initial framework of a research-focused, high-speed campus network with connectivity to national networks such as the TeraGrid and StarLight.

To find out about computing resources provided by RCAC or Information Technology at Purdue (ITaP), contact rcac-info@purdue.edu.
