The hardware supporting Scholar consists of HP compute nodes with two 10-core Intel Xeon-E5 processors (20 cores per node), 64 GB of memory, and 56 Gbps FDR Infiniband interconnects. All nodes carry a 5-year warranty. The Scholar cluster consists of several queues that have access to multiple nodes on Rice.
| Queue | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Interconnect |
| ----- | --------------- | ------------------- | -------------- | --------------- | ------------ |
| scholar | 16 | Two 10-core Intel Xeon-E5 | 20 | 64 GB | 56 Gbps FDR Infiniband |
These nodes run Red Hat Enterprise Linux 6 (RHEL6) and use Moab Workload Manager 8 and TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage as well as unlimited core dump size (though disk space and server quotas may still be a limiting factor).
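As a minimal sketch of how jobs are submitted through TORQUE/PBS, a job script might look like the following (the walltime, job name, and program name are illustrative; the queue name comes from the table above):

```shell
#!/bin/bash
#PBS -q scholar            # submit to the scholar queue
#PBS -l nodes=1:ppn=20     # request one node with all 20 cores
#PBS -l walltime=00:30:00  # 30-minute limit (illustrative)
#PBS -N example_job        # job name (illustrative)

cd $PBS_O_WORKDIR          # TORQUE starts jobs in $HOME by default
./my_program               # hypothetical executable
```

The script would then be submitted with `$ qsub jobscript.sh` and monitored with `$ qstat`.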
For more information, refer to the TORQUE Resource Manager documentation.
The Scholar Hadoop cluster is a portion of Hathi. It uses MapReduce task tracking with the Hadoop Distributed File System (HDFS).
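To illustrate the MapReduce model the Hadoop cluster runs, here is a hedged sketch of a word count, the canonical MapReduce example. The function names and sample input are illustrative, not part of the Scholar environment; in practice the map and reduce steps would be packaged for Hadoop (for example via Hadoop Streaming), with HDFS supplying the input splits.

```python
from itertools import groupby

def mapper(lines):
    """Map step: emit a (word, 1) pair for every word seen."""
    for line in lines:
        for word in line.split():
            yield word, 1

def reducer(pairs):
    """Reduce step: sum counts per word.
    Assumes pairs arrive sorted by key, as Hadoop's shuffle guarantees."""
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

# Local demo (outside Hadoop): map, shuffle (sort), then reduce.
pairs = sorted(mapper(["hello scholar", "hello hathi"]))
counts = dict(reducer(pairs))
print(counts)  # {'hathi': 1, 'hello': 2, 'scholar': 1}
```

On the real cluster, Hadoop distributes the map tasks across nodes and the sort between map and reduce is performed by the framework's shuffle phase rather than an in-memory `sorted()` call.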
To request access to Scholar today, please email email@example.com and provide the semester and CRN of the class in which Scholar will be used. All students in that class will be added once the request is fulfilled.
On Scholar, ITaP recommends the following set of compilers, math libraries, and message-passing libraries for parallel code:
To load the recommended set:
$ module load devel
To verify what you loaded:
$ module list
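With the `devel` module set loaded, a parallel program can be built with the MPI compiler wrapper it provides. The wrapper name `mpicc`, the rank count, and the file names below are illustrative; a quick interactive test like this should use only a few ranks, with full-size runs going through the batch system:

```shell
$ mpicc -O2 hello_mpi.c -o hello_mpi   # compile with the MPI wrapper
$ mpirun -np 4 ./hello_mpi             # quick 4-rank test run
```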
All Purdue faculty may request access to Scholar for use in the classroom. Please use the Accounts for Classes tool to create accounts for your class. You will need to select the semester and CRN of the class. All students registered in that class will be added once the request is fulfilled. You may add additional instructors or TAs from the same tool.