[RCAC Workshop] NCCL and Distributed GPU Operations

📅 Date: December 1, 2025 ⏰ Time: 11:00 AM - 12:00 PM 💻 Location: Virtual 🏫 Instructor: Jacob Verburgt

Who Should Attend

Students and researchers with basic programming experience who are interested in learning how to effectively leverage multiple GPUs for scientific computing and machine learning. Familiarity with Python or C++ programming is recommended, but no prior GPU programming experience is required.

What You’ll Learn

This training will cover the fundamentals of distributed training with NVIDIA's NCCL (NVIDIA Collective Communications Library), as well as its applications in AI. The session will first cover the foundations of NCCL, complete with examples in C++. This will be followed by distributed training demonstrations in PyTorch, focusing specifically on using NCCL with Distributed Data Parallel (DDP) and Fully Sharded Data Parallel (FSDP). For a taste of the PyTorch portion, see the sketch below.
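The following is a minimal sketch (not workshop material) of a single training step with PyTorch's DistributedDataParallel on the NCCL backend; the toy linear model, random data, and the torchrun launch method are illustrative assumptions.

```python
# Minimal DDP sketch using the NCCL backend (illustrative only).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and data, one replica per GPU.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    inputs = torch.randn(32, 1024, device=local_rank)
    loss = ddp_model(inputs).sum()
    loss.backward()   # gradients are all-reduced across GPUs via NCCL
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with, for example, `torchrun --nproc_per_node=4 train.py`, each process drives one GPU while NCCL handles the gradient all-reduce behind the scenes.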

By the end of the session, you'll understand distributed GPU operations and be able to apply them to PyTorch-based machine learning workflows.

Level

Beginner to Intermediate

🔗 Register now: REGISTER HERE
