Overview of Weber

Weber is Purdue's specialty high-performance computing cluster, deployed in 2019 for data, applications, and research governed by export control regulations such as EAR or ITAR, or requiring compliance with NIST SP 800-171.

For questions about purchasing access, please contact the Export Controls office at exportcontrols@purdue.edu.

For technical questions, please contact RCAC at rcac-help@purdue.edu.

Weber Namesake

Weber is named in honor of Mary Ellen Weber, scientist and former astronaut. More information about her life and impact on Purdue is available in the Biography of Weber.

Weber Specifications

Weber consists of Dell compute nodes with two 64-core AMD EPYC 7713 processors (128 cores per node) and Dell GPU nodes with two 8-core Intel Xeon 4110 processors (16 cores per node) and a Tesla V100 GPU. All nodes are connected by a 56 Gbps EDR InfiniBand interconnect.

Weber Front-Ends

Number of Nodes   Processors per Node             Cores per Node   Memory per Node   Retires in
2                 Two AMD EPYC 7702P @ 2.00GHz    128              256 GB            2024

Weber Sub-Clusters

Sub-Cluster   Number of Nodes   Processors per Node             Cores per Node   Memory per Node   Retires in
A             15                Two AMD EPYC 7713 @ 2.00GHz     128              256 GB            2027
G             2                 Two Intel Xeon 4110 @ 2.10GHz   16               196 GB            2024

Weber nodes run CentOS 7 and use SLURM as the batch system for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be a limiting factor).
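As an illustration only, the sketch below shows a minimal SLURM job script that prints the stack and core dump limits described above; the account and partition names are hypothetical placeholders, not actual Weber queue names.

#!/bin/bash
# Minimal SLURM job script sketch. "myallocation" and "normal" are
# placeholders; substitute your own account and queue.
#SBATCH --account=myallocation
#SBATCH --partition=normal
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:05:00

# Confirm the unlimited stack and core dump sizes noted above.
ulimit -s   # expected: unlimited
ulimit -c   # expected: unlimited

Submit the script with sbatch and monitor it with squeue:

$ sbatch myjob.sub
$ squeue -u $USER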

On Weber, the following compiler, math library, and message-passing library are recommended for parallel code:

  • Intel
  • MKL
  • Intel MPI

This compiler and these libraries are loaded by default. To load the recommended set again:

$ module load rcac

To verify what you loaded:

$ module list
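As a sketch of putting this stack to use, the commands below compile and run a hypothetical MPI source file hello.c with the recommended set; mpiicc is Intel MPI's wrapper for the classic Intel C compiler, and the -mkl flag links against MKL.

$ module load rcac                 # load the recommended Intel, MKL, and Intel MPI modules
$ mpiicc -mkl -o hello hello.c     # compile hello.c and link against MKL
$ mpirun -np 4 ./hello             # launch with 4 MPI ranks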