Weber

Overview of Weber
Weber is Purdue's new specialty high-performance computing cluster for data, applications, and research that are covered by export control regulations such as EAR or ITAR, or that require compliance with NIST SP 800-171. Weber was built through a partnership with HP and AMD in August 2019. Weber consists of HP compute nodes with two 10-core Intel Xeon-E5 "Haswell" processors (20 cores per node) and 64 GB of memory. All nodes have 56 Gbps InfiniBand interconnects.
To purchase access to Weber today, please contact the Export Controls office at exportcontrols@purdue.edu, or contact us via email at rcac-cluster-purchase@lists.purdue.edu if you have any technical questions.
Weber Namesake
Weber is named in honor of Mary Ellen Weber, scientist and former astronaut. More information about her life and impact on Purdue is available in a Biography of Weber.
Weber Specifications
Most Weber nodes have 20 processor cores and 64 GB of RAM; all nodes have 56 Gbps InfiniBand interconnects.
Front-Ends | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Retires in |
---|---|---|---|---|---|
Interim | 2 | Two Sky Lake CPUs @ 2.10GHz | 16 | 192 GB | 2020 |
Coming | 4 | AMD Rome CPUs | 64 | 256 GB | 2023 |

Sub-Cluster | Number of Nodes | Processors per Node | Cores per Node | Memory per Node | Retires in |
---|---|---|---|---|---|
A | 26 | Two Haswell CPUs @ 2.60GHz | 20 | 64 GB | TBA |
B | 6 | Two Haswell CPUs @ 2.60GHz | 20 | 512 GB | TBA |
G | 2 | Two Haswell CPUs @ 2.60GHz; one Tesla V100 GPU | 16 | 64 GB | TBA |
Weber nodes run CentOS 7 and use SLURM as the batch system for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be limiting factors).
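For reference, here is a minimal sketch of what a SLURM batch script for a single Weber node might look like; the job name, account name, and executable are placeholders rather than actual Weber settings, and the account or queue to use depends on your purchased access.

#!/bin/bash
#SBATCH --job-name=hello_mpi      # name shown in the queue
#SBATCH --account=myaccount       # placeholder; use the account/queue you purchased
#SBATCH --nodes=1                 # one compute node
#SBATCH --ntasks=20               # all 20 cores of a sub-cluster A node
#SBATCH --time=01:00:00           # one hour of walltime

module load rcac                  # load the recommended compiler and library set
srun ./hello_mpi                  # launch the MPI executable across the allocated cores

Submit the script with sbatch and check its status with squeue:

$ sbatch hello_mpi.sub
$ squeue -u $USER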
On Weber, the following compiler, math library, and message-passing library are recommended for parallel code:
- Intel
- MKL
- Intel MPI
This compiler and these libraries are loaded by default. To load the recommended set again:
$ module load rcac
To verify what you loaded:
$ module list
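With this stack loaded, a parallel program can be compiled with the Intel MPI compiler wrapper and linked against MKL. A small illustrative example (the source file name is a placeholder, and exact flags can vary with the Intel compiler version):

$ mpiicc -mkl hello_mpi.c -o hello_mpi

The resulting executable can then be launched inside a batch job with srun, as in the SLURM script sketch above.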