
Overview of Scholar

The Scholar cluster is open to Purdue instructors from any field whose classes include assignments that could make use of supercomputing, from high-end graphics rendering and weather modeling to simulating millions of molecules and exploring masses of data to understand the dynamics of social networks.

Scholar Detailed Hardware Specification

The Scholar cluster consists of several queues with access to nodes on the Rice cluster. Each node has 20 processor cores, 64 GB of RAM, and a 56 Gbps InfiniBand interconnect.

Sub-Cluster | Number of Nodes | Processors per Node       | Cores per Node | Memory per Node | Interconnect           | TeraFLOPS
scholar     | 16              | Two 10-Core Intel Xeon-E5 | 20             | 64 GB           | 56 Gbps FDR InfiniBand | N/A

Scholar nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 8 with TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be a limiting factor).
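
Jobs are submitted to TORQUE with qsub. As a minimal sketch (the queue name, resource requests, script name, and program name below are placeholders, not Scholar-specific defaults), a batch script might look like:

#!/bin/bash
#PBS -q scholar
#PBS -l nodes=1:ppn=20
#PBS -l walltime=00:30:00
#PBS -N example_job

cd $PBS_O_WORKDIR
module load rcac
./my_program

and would be submitted from the command line with:

$ qsub myjob.sub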

On Scholar, ITaP recommends the following compiler, math library, and message-passing (MPI) library for parallel code:

  • Intel 16.0.1.150
  • MKL
  • Intel MPI

This compiler and these libraries are loaded by default. To load the recommended set again:

$ module load rcac

To verify what you loaded:

$ module list
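
With the recommended modules loaded, an MPI program can be compiled with the Intel compiler wrappers and linked against MKL. As an illustrative sketch (the source and executable names here are placeholders), a C program could be built and then, inside a batch job, launched along these lines:

$ mpiicc -O2 -mkl -o hello_mpi hello_mpi.c
$ mpirun -np 20 ./hello_mpi

mpiicc is the Intel MPI wrapper for the Intel C compiler; the corresponding Fortran wrapper is mpiifort.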
