
Overview of Scholar

Scholar is a small computer cluster, suitable for classroom learning about high-performance computing (HPC). It consists of 7 interactive login servers, 20 batch worker nodes, 4 GPU nodes, and 3 worker nodes dedicated to Open OnDemand.

It can be accessed as a typical cluster, with a job scheduler distributing batch jobs onto its worker nodes, or as an interactive resource, with software packages available through a desktop-like environment on its login servers.
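To illustrate the batch-job workflow, the script below is a minimal sketch of a submission. It assumes the cluster's scheduler is Slurm; the module name and script name are hypothetical and will differ in practice.

```bash
#!/bin/bash
# Minimal batch job sketch (assumes Slurm; module and script names are hypothetical)
#SBATCH --job-name=hello-scholar
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

# Load a software package provided by the cluster's module system
module load python

# Run the actual work on the assigned worker node
python my_script.py
```

Under Slurm, such a script would be submitted with `sbatch myjob.sh` and monitored with `squeue -u $USER`.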

If you have a class that you think will benefit from the use of Scholar, you can schedule it through our web page at: Class Account Request. You only need to register the class itself; all students who register for the class will automatically get login privileges on the Scholar cluster. As a batch resource, the cluster has access to typical HPC software packages and toolchains; as an interactive resource, Scholar provides a Linux remote desktop, a Jupyter notebook server, or an RStudio server. Jupyter and RStudio can be used by students without any reliance on Linux knowledge or experience.

Scholar Specifications

Scholar's standard CPU compute nodes have 20 processor cores and 64 GB of RAM. GPU nodes have 16 processor cores, 192 GB RAM, and 1 Tesla V100 32GB GPU. The Open OnDemand compute nodes have 64 AMD processor cores and 256 GB of RAM.

Scholar Front-Ends
Front-Ends | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Retires in
No GPU | 4 | Two Haswell CPUs @ 2.60GHz | 20 | 512 GB | 2023
With GPU | 3 | Two Skylake CPUs @ 2.60GHz with one NVIDIA Tesla V100 | 20 | 768 GB | 2023
Scholar Sub-Clusters
Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Retires in
A | 20 | Two Haswell CPUs @ 2.60GHz | 20 | 64 GB | 2023
B | 3 | One AMD Rome CPU @ 2.00GHz | 64 | 256 GB | 2025
G | 4 | Two Skylake CPUs @ 2.10GHz with one NVIDIA Tesla V100 32GB | 16 | 192 GB | 2023
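For jobs that need one of the sub-cluster G GPU nodes, a resource request along the following lines could be used. This is a sketch assuming a Slurm scheduler with GPUs exposed as a generic resource; the exact option names and the script name are assumptions.

```bash
# Request one node with all 16 cores and one V100 GPU for one hour
# (assumes Slurm; the GRES name "gpu" and train.sh are hypothetical)
sbatch --nodes=1 --ntasks=16 --gres=gpu:1 --time=01:00:00 train.sh
```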

Faculty who would like to know more about Scholar should read the Faculty Guide.