RCAC partnered with Intel to offer a training series on their oneAPI toolkit in Fall 2023. Recordings of sessions have been included below, divided by topic. This training was provided by Intel experts and facilitated by the RCAC support team. These short lectures will prepare you with everything you need to know to leverage oneAPI in your research!
- SYCL & Compilers
- Libraries & MPI
- Profiling & Debugging
- AI Analytics Toolkit & oneDNN
SYCL & Compilers
SYCL is an open alternative to single-architecture proprietary languages. It allows developers to reuse code across hardware targets (CPUs and accelerators such as GPUs and FPGAs) while still permitting custom tuning for a specific accelerator. These code walkthroughs introduce the basic principles and practices of SYCL programming and show how Intel has integrated this programming language into the oneAPI Toolkits.
Intel Fortran and C++ Compilers
Create code that takes advantage of more cores and built-in technologies on platforms based on Intel® processors. Compile and generate applications for Windows*, Linux*, and macOS*. The compiler integrates seamlessly with popular third-party compilers, development environments, and operating systems. Build high-performance applications by generating optimized code for Intel® Xeon® Scalable and Intel® Core™ processors, and boost Single Instruction Multiple Data (SIMD) vectorization and threading (including for Intel® Advanced Vector Extensions 512 instructions) using the latest OpenMP* parallel programming model.

IFX, the Intel® Fortran Compiler, provides CPU and GPU offload support. Features:
- Improves development productivity by targeting CPUs and GPUs through single-source code while permitting custom tuning.
- Supports broad Fortran language standards.
- Incorporates industry-standard OpenMP* 4.5 support, with initial OpenMP 5.0 and 5.1 support for GPU offload.
- Uses well-proven LLVM compiler technology and Intel's history of compiler leadership.
- Takes advantage of multicore, Single Instruction Multiple Data (SIMD) vectorization, and multiprocessor systems with OpenMP, automatic parallelism, and coarrays.
- Optimizes code with an automatic processor dispatch feature.
The oneAPI Initiative and Intel oneAPI Tools
Transition of Intel C/C++ Compilers
Transition to the Intel Fortran Compiler
Intel Fortran Compiler 2023
Libraries & MPI
Intel MPI Library
Intel® MPI Library is a multifabric message-passing library that implements the open-source MPICH specification. Use the library to create, maintain, and test advanced, complex applications that perform better on high-performance computing (HPC) clusters based on Intel® processors.
- Develop applications that can run on multiple cluster interconnects that you choose at run time.
- Quickly deliver maximum end-user performance without having to change the software or operating environment.
- Achieve the best latency, bandwidth, and scalability through automatic tuning for the latest Intel® platforms.
- Reduce the time to market by linking to one library and deploying on the latest optimized fabrics.
Math Kernel Library (oneMKL)
Intel® Math Kernel Library (oneMKL) accelerates math processing routines, increases application performance, and reduces development time. oneMKL includes linear algebra, fast Fourier transforms (FFT), vector statistics and data fitting, vector math, and miscellaneous solvers.
Profiling & Debugging
Intel Advisor
Intel® Advisor is a design and analysis tool for achieving high application performance through efficient threading, vectorization, memory use, and GPU offload on current and future Intel® hardware. The tool supports C, C++, Fortran, Data Parallel C++ (DPC++), OpenMP*, and Python*.
Intel VTune Profiler
Intel® VTune™ Profiler optimizes application performance, system performance, and system configuration for HPC, cloud, IoT, media, storage, and more.
- CPU, GPU, and FPGA: Tune the entire application's performance, not just the accelerated portion.
- Multilingual: Profile Data Parallel C++ (DPC++), C, C++, C#, Fortran*, OpenCL™, Python*, Google Go* programming language, Java*, Assembly, or any combination.
- System or Application: Get coarse-grained system data for an extended period or detailed results mapped to source code.
- Power: Optimize performance while avoiding power- and thermal-related throttling.
AI Analytics Toolkit
Intel® AI Analytics Toolkit: Provides data scientists, AI developers, and researchers with familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architectures. The components are built using oneAPI libraries for low-level compute optimizations. This maximizes performance from preprocessing through machine learning.
- Deliver high-performance deep learning (DL) training and inference.
- Achieve drop-in acceleration for data analytics and machine learning workflows with compute-intensive Python* packages: Modin*, NumPy, Numba, scikit-learn*, and XGBoost* optimized for Intel.
- Gain direct access to Intel analytics and AI optimizations to ensure that your software works together seamlessly.
Intel oneAPI IoT Toolkit
The Intel® oneAPI IoT Toolkit provides a common set of libraries and tools dedicated to IoT application development and optimization. The toolkit offers software developers several advantages:
- Includes a set of tools and libraries that can be used for the development of high-performance workloads deployed on CPUs, GPUs, FPGAs, and other accelerators.
- Enables software developers to build innovative embedded and IoT platform solutions more efficiently.
- Accelerates development of smart connected devices with performance differentiation for Intel platforms.
- Supports the breadth of Intel platforms for optimizing memory and threading performance.
- Provides developer tools and libraries for IoT application development and optimization.
- Delivers an easy-to-use and consistent developer experience within an integrated IDE.