Overview of Gilbreth
Gilbreth is Purdue's newest Community Cluster and is optimized for communities running GPU-intensive applications such as machine learning. Gilbreth consists of Dell compute nodes with Intel Xeon processors and NVIDIA Tesla GPUs.
To purchase access to Gilbreth today, go to the Cluster Access Purchase page. Please subscribe to our Community Cluster Program Mailing List to stay informed on the latest purchasing developments or contact us via email at firstname.lastname@example.org if you have any questions.
Gilbreth is named in honor of Lillian Moller Gilbreth, Purdue's first female engineering professor. More information about her life and impact on Purdue is available in an ITaP Biography of Lillian Moller Gilbreth.
Gilbreth Detailed Hardware Specification
Gilbreth compute nodes have at least 192 GB of RAM and 100 Gbps InfiniBand interconnects.
| Front-Ends | Number of Nodes | Cores per Node | Memory per Node | GPUs per Node | Retires in |
| ---------- | --------------- | -------------- | --------------- | ------------- | ---------- |
| With GPU   | 2               | 20             | 96 GB           | 1 P100        | 2024       |
| Sub-Cluster | Number of Nodes | Cores per Node | Memory per Node | GPUs per Node | Retires in |
| ----------- | --------------- | -------------- | --------------- | ------------- | ---------- |
| A           | 4               | 20             | 256 GB          | 2 P100        | 2022       |
| B           | 16              | 24             | 192 GB          | 2 P100        | 2023       |
| C           | 3               | 20             | 768 GB          | 4 V100        | 2024       |
| D           | 8               | 16             | 192 GB          | 2 P100        | 2024       |
| E           | 16              | 16             | 192 GB          | 2 V100        | 2024       |
Gilbreth nodes run CentOS 7 and use Moab Workload Manager 8 with TORQUE Resource Manager 5 as the portable batch system (PBS) for resource and job management. Operating system patches are applied as security needs dictate. All nodes allow unlimited stack usage and unlimited core dump size (though disk space and server quotas may still be limiting factors).
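As a sketch of how a job might be submitted under this TORQUE/PBS setup, the script below requests a single GPU node. The queue name (standby), resource counts, script filename, and application binary (my_gpu_app) are placeholders rather than Gilbreth-specific values, so substitute your group's queue and your own program:

#!/bin/bash
#PBS -q standby                  # hypothetical queue name; use your group's queue
#PBS -l nodes=1:ppn=16:gpus=1    # one node, 16 cores, 1 GPU (adjust to the sub-cluster)
#PBS -l walltime=00:30:00        # 30-minute wall-clock limit
cd $PBS_O_WORKDIR                # TORQUE starts jobs in $HOME; return to the submit directory
module load rcac                 # load the recommended compiler and MPI stack
./my_gpu_app                     # placeholder for your GPU application

Submit and monitor the job with the standard TORQUE commands:

$ qsub myjob.sub
$ qstat -u $USER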
On Gilbreth, ITaP recommends the following compiler and message-passing library for parallel code:
- Intel
- Intel MPI
This compiler and library are loaded by default. To load the recommended set again:
$ module load rcac
To verify what you loaded:
$ module list
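With the recommended set loaded, a parallel code can be built with Intel MPI's compiler wrappers and launched with mpirun. This is a generic sketch rather than a Gilbreth-specific recipe, and hello.c stands in for your own source file:

$ mpiicc -O2 hello.c -o hello
$ mpirun -np 4 ./hello

mpiicc is Intel MPI's wrapper around the Intel C compiler; mpiifort is the corresponding wrapper for Fortran code.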