
Compiling Source Code

Documentation on compiling source code on Gautschi.

Compiling GPU Programs

Gautschi's GPU-enabled compute nodes contain 6 GPUs that support CUDA and OpenCL. See the detailed hardware overview for specifics on the GPUs in Gautschi. This section focuses on using CUDA.

A simple CUDA program has a basic workflow:

  • Initialize an array on the host (CPU).
  • Copy array from host memory to GPU memory.
  • Apply an operation to array on GPU.
  • Copy array from GPU memory to host memory.

Here is a sample CUDA program:
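
The sketch below follows the four steps above. The file name gpu_hello.cu matches the compile command later in this section, but the kernel and the encoded message are only illustrative.

// gpu_hello.cu -- a minimal sketch of the workflow above (illustrative).
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread shifts one character back by one,
// turning the encoded buffer into "hello, world".
__global__ void decode(char *text, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        text[i] -= 1;
}

int main(int argc, char *argv[])
{
    // Select a GPU; fall back to device 0 if none is given on the command line.
    if (argc > 1) {
        cudaSetDevice(atoi(argv[1]));
    } else {
        printf("No GPU specified, using first GPU\n");
        cudaSetDevice(0);
    }

    // 1. Initialize an array on the host (CPU).
    char host[] = "ifmmp-!xpsme";          // "hello, world" with every byte +1
    const int n = (int)sizeof(host) - 1;   // exclude the trailing '\0'

    // 2. Copy the array from host memory to GPU memory.
    char *dev = NULL;
    cudaMalloc((void **)&dev, n);
    cudaMemcpy(dev, host, n, cudaMemcpyHostToDevice);

    // 3. Apply an operation to the array on the GPU.
    decode<<<1, n>>>(dev, n);

    // 4. Copy the array from GPU memory back to host memory.
    cudaMemcpy(host, dev, n, cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("%s\n", host);
    return 0;
}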

Both the front-ends and the GPU-enabled compute nodes have the CUDA tools and libraries available for compiling CUDA programs. To compile a CUDA program, load the CUDA module and use nvcc:

$ module load gcc/11.4.1 cuda/12.6.0
$ nvcc gpu_hello.cu -o gpu_hello
$ ./gpu_hello
No GPU specified, using first GPU
hello, world

The example above only illustrates how to copy an array between the CPU and its GPU; it does not perform a serious computation.

The following program times three square matrix multiplications on a CPU and on the global and shared memory of a GPU:
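
The heart of such a program is a pair of kernels: a naive one that reads the matrices directly from global memory, and a tiled one that stages sub-blocks in fast on-chip shared memory. Below is a minimal sketch of the two kernels; the tile width and the assumption that the matrix dimension n is a multiple of the tile width are illustrative.

#define TILE 16   /* illustrative tile width */

/* Naive kernel: every operand is read from global memory. */
__global__ void mm_global(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; k++)
            sum += A[row * n + k] * B[k * n + col];
        C[row * n + col] = sum;
    }
}

/* Tiled kernel: each block stages TILE x TILE sub-blocks of A and B
   in shared memory and reuses each staged value TILE times.
   Assumes n is a multiple of TILE and the grid is n/TILE x n/TILE blocks. */
__global__ void mm_shared(const float *A, const float *B, float *C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float sum = 0.0f;

    for (int t = 0; t < n / TILE; t++) {
        As[threadIdx.y][threadIdx.x] = A[row * n + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();
        for (int k = 0; k < TILE; k++)
            sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = sum;
}

A launch such as mm_shared<<<dim3(n/TILE, n/TILE), dim3(TILE, TILE)>>>(dA, dB, dC, n), where dA, dB, and dC are device buffers, is what produces the additional speedup of the shared-memory version over the global-memory kernel in the timings below.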

$ module load cuda
$ nvcc mm.cu -o mm
$ ./mm 0
                                                            speedup
                                                            -------
Elapsed time in CPU:                    6555.2 milliseconds
Elapsed time in GPU (global memory):      32.9 milliseconds  199.1
Elapsed time in GPU (shared memory):       3.0 milliseconds  2191.8

For best performance, the input array or matrix must be sufficiently large to overcome the overhead in copying the input and output data to and from the GPU.

For more information about NVIDIA, CUDA, and GPUs:

Compiling Hybrid Programs

A hybrid program combines MPI and shared-memory (OpenMP) parallelism to take advantage of compute clusters with multi-core compute nodes. Libraries for OpenMPI and Intel MPI (IMPI), as well as compilers with OpenMP support for C, C++, and Fortran, are available.

Hybrid programs require including header files:
Language     Header Files
Fortran 77   INCLUDE 'omp_lib.h'
             INCLUDE 'mpif.h'
Fortran 90   use omp_lib
             INCLUDE 'mpif.h'
Fortran 95   use omp_lib
             INCLUDE 'mpif.h'
C            #include <mpi.h>
             #include <omp.h>
C++          #include <mpi.h>
             #include <omp.h>

A few examples illustrate hybrid programs with task parallelism of OpenMP:
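
For instance, each MPI rank can spawn a small team of OpenMP threads and use omp sections so that different threads perform different pieces of work. The sketch below in C is illustrative; the file name and printed labels are not from an installed sample.

/* hybrid_sections.c -- illustrative sketch: MPI across nodes,
   OpenMP task parallelism (sections) within each rank. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nranks;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* Each MPI rank runs two independent pieces of work on two threads. */
    #pragma omp parallel sections num_threads(2)
    {
        #pragma omp section
        printf("rank %d of %d: section A on thread %d\n",
               rank, nranks, omp_get_thread_num());

        #pragma omp section
        printf("rank %d of %d: section B on thread %d\n",
               rank, nranks, omp_get_thread_num());
    }

    MPI_Finalize();
    return 0;
}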

This example illustrates a hybrid program with loop-level (data) parallelism of OpenMP:
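
In the loop-level case, MPI distributes blocks of data across ranks while OpenMP splits each rank's local loop across its threads, roughly as in this sketch (the array size and the trivial sum are illustrative assumptions):

/* hybrid_loop.c -- illustrative sketch: each MPI rank sums its block
   of an array, parallelizing the local loop with OpenMP, then the
   partial sums are combined with MPI_Reduce. */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000   /* elements per rank (illustrative) */

int main(int argc, char *argv[])
{
    static double x[N];
    double local = 0.0, total = 0.0;
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < N; i++)          /* initialize this rank's block */
        x[i] = 1.0;

    /* Loop-level (data) parallelism: threads share the local loop. */
    #pragma omp parallel for reduction(+:local)
    for (i = 0; i < N; i++)
        local += x[i];

    /* Combine the per-rank partial sums on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total = %f\n", total);

    MPI_Finalize();
    return 0;
}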

To see the available MPI libraries:

$ module avail impi
$ module avail openmpi

The following tables illustrate how to compile your hybrid (MPI/OpenMP) program. Any compiler flags accepted by the underlying Intel compilers are also accepted by their respective MPI compiler wrappers.

Intel MPI (IMPI) with Intel Compiler
Language     Command
Fortran 77   $ mpiifort -qopenmp myprogram.f -o myprogram
Fortran 90   $ mpiifort -qopenmp myprogram.f90 -o myprogram
Fortran 95   $ mpiifort -qopenmp myprogram.f90 -o myprogram
C            $ mpiicc -qopenmp myprogram.c -o myprogram
C++          $ mpiicpc -qopenmp myprogram.cpp -o myprogram
 
OpenMPI with GNU Compiler
Language     Command
Fortran 77   $ mpif77 -fopenmp myprogram.f -o myprogram
Fortran 90   $ mpif90 -fopenmp myprogram.f90 -o myprogram
Fortran 95   $ mpif90 -fopenmp myprogram.f95 -o myprogram
C            $ mpicc -fopenmp myprogram.c -o myprogram
C++          $ mpiCC -fopenmp myprogram.cpp -o myprogram

The Intel and GNU compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix .f95.

Compiling Serial Programs

A serial program is a single process which executes as a sequential stream of instructions on one processor core. Compilers for serial C, C++, and Fortran programs are available.

Here are a few sample serial programs:
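
For example, a small C program like the following can serve as a test case; the numerical integration it performs is purely illustrative.

/* serial_pi.c -- illustrative serial example: approximate pi by
   integrating 4/(1+x*x) over [0,1] with the midpoint rule. */
#include <stdio.h>

int main(void)
{
    const int n = 10000000;
    const double h = 1.0 / n;
    double sum = 0.0;
    int i;

    for (i = 0; i < n; i++) {
        double x = h * (i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    printf("pi is approximately %.15f\n", h * sum);
    return 0;
}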

To load a compiler, enter one of the following:

$ module load intel
$ module load gcc

The following table illustrates how to compile your serial program:
Language     Intel Compiler                       GNU Compiler
Fortran 77   $ ifx myprogram.f -o myprogram       $ gfortran myprogram.f -o myprogram
Fortran 90   $ ifx myprogram.f90 -o myprogram     $ gfortran myprogram.f90 -o myprogram
Fortran 95   $ ifx myprogram.f90 -o myprogram     $ gfortran myprogram.f95 -o myprogram
C            $ icx myprogram.c -o myprogram       $ gcc myprogram.c -o myprogram
C++          $ icpx myprogram.cpp -o myprogram    $ g++ myprogram.cpp -o myprogram

The Intel and GNU compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95".

Compiling MPI Programs

OpenMPI and Intel MPI (IMPI) are implementations of the Message-Passing Interface (MPI) standard. Libraries for these MPI implementations and compilers for C, C++, and Fortran are available on all clusters.

MPI programs require including a header file:
Language     Header Files
Fortran 77   INCLUDE 'mpif.h'
Fortran 90   INCLUDE 'mpif.h'
Fortran 95   INCLUDE 'mpif.h'
C            #include <mpi.h>
C++          #include <mpi.h>

Here are a few sample programs using MPI:
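
A minimal MPI program in C might look like the following; the file name and message are illustrative.

/* mpi_hello.c -- illustrative MPI example: every rank reports itself. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, nranks, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    MPI_Get_processor_name(name, &len);

    printf("Hello from rank %d of %d on %s\n", rank, nranks, name);

    MPI_Finalize();
    return 0;
}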

To see the available MPI libraries:

$ module avail openmpi
$ module avail impi

The following table illustrates how to compile your MPI program. Any compiler flags accepted by the underlying Intel compilers are also accepted by their respective MPI compiler wrappers.
Language     Intel MPI                            OpenMPI
Fortran 77   $ mpiifort program.f -o program      $ mpif77 program.f -o program
Fortran 90   $ mpiifort program.f90 -o program    $ mpif90 program.f90 -o program
Fortran 95   $ mpiifort program.f95 -o program    $ mpif90 program.f95 -o program
C            $ mpiicc program.c -o program        $ mpicc program.c -o program
C++          $ mpiicpx program.cpp -o program     $ mpiCC program.cpp -o program

The Intel and GNU compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95".

Here is some more documentation from other sources on the MPI libraries:

Compiling OpenMP Programs

All compilers installed on Gautschi include OpenMP functionality for C, C++, and Fortran. An OpenMP program is a single process that takes advantage of a multi-core processor and its shared memory to achieve a form of parallel computing called multithreading. It distributes the work of a process over processor cores in a single compute node without the need for MPI communications.

OpenMP programs require including a header file:
Language     Header Files
Fortran 77   INCLUDE 'omp_lib.h'
Fortran 90   use omp_lib
Fortran 95   use omp_lib
C            #include <omp.h>
C++          #include <omp.h>

Sample programs illustrate task parallelism of OpenMP:
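
With task parallelism, omp sections assigns different blocks of code to different threads. The sketch below in C is illustrative.

/* omp_sections.c -- illustrative OpenMP task parallelism:
   different threads execute different sections concurrently. */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    #pragma omp parallel sections
    {
        #pragma omp section
        printf("section A handled by thread %d\n", omp_get_thread_num());

        #pragma omp section
        printf("section B handled by thread %d\n", omp_get_thread_num());

        #pragma omp section
        printf("section C handled by thread %d\n", omp_get_thread_num());
    }
    return 0;
}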

A sample program illustrates loop-level (data) parallelism of OpenMP:
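
With loop-level parallelism, the iterations of a single loop are divided among the threads, roughly as follows (the array size and vector addition are illustrative):

/* omp_loop.c -- illustrative OpenMP loop-level (data) parallelism:
   iterations of the loop are divided among the threads. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N], b[N], c[N];
    int i;

    for (i = 0; i < N; i++) {        /* serial initialization */
        a[i] = i;
        b[i] = 2.0 * i;
    }

    /* Each thread handles a chunk of the iterations. */
    #pragma omp parallel for
    for (i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[N-1] = %f (up to %d threads available)\n",
           c[N - 1], omp_get_max_threads());
    return 0;
}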

To load a compiler, enter one of the following:

$ module load intel
$ module load gcc

The following table illustrates how to compile your shared-memory program. Any compiler flags accepted by the Intel compilers are compatible with OpenMP.
Language     Intel Compiler                               GNU Compiler
Fortran 77   $ ifx -qopenmp myprogram.f -o myprogram      $ gfortran -fopenmp myprogram.f -o myprogram
Fortran 90   $ ifx -qopenmp myprogram.f90 -o myprogram    $ gfortran -fopenmp myprogram.f90 -o myprogram
Fortran 95   $ ifx -qopenmp myprogram.f90 -o myprogram    $ gfortran -fopenmp myprogram.f95 -o myprogram
C            $ icx -qopenmp myprogram.c -o myprogram      $ gcc -fopenmp myprogram.c -o myprogram
C++          $ icpx -qopenmp myprogram.cpp -o myprogram   $ g++ -fopenmp myprogram.cpp -o myprogram

The Intel and GNU compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95".

Here is some more documentation from other sources on OpenMP:
