Software

Anvil provides a number of software packages to users of the system via the module command. To check the list of applications installed as modules on Anvil and their user guides, please go to the Scientific Applications on ACCESS Anvil page. For some common applications such as Python, Singularity, Horovod and R, we also provide detailed instructions and examples on the Specific Applications page.

Module System

The Anvil cluster uses Lmod to manage the user environment, so users have access to the necessary software packages and versions to conduct their research activities. The associated module command can be used to load applications and compilers, making the corresponding libraries and environment variables automatically available in the user environment.

Lmod is a hierarchical module system, meaning a module can only be loaded after loading the necessary compilers and MPI libraries that it depends on. This helps avoid conflicting libraries and dependencies being loaded at the same time. A list of all available modules on the system can be found with the module spider command:

$ module spider # list all modules, even those not available due to incompatibility with currently loaded modules

-----------------------------------------------------------------------------------
The following is a list of the modules and extensions currently available:
-----------------------------------------------------------------------------------
  amdblis: amdblis/3.0
  amdfftw: amdfftw/3.0
  amdlibflame: amdlibflame/3.0
  amdlibm: amdlibm/3.0
  amdscalapack: amdscalapack/3.0
  anaconda: anaconda/2021.05-py38
  aocc: aocc/3.0

  ...

The module spider command can also be used to search for specific module names.

$ module spider intel # all modules with names containing 'intel'
-----------------------------------------------------------------------------------
  intel:
-----------------------------------------------------------------------------------
     Versions:
        intel/19.0.5.281
        intel/19.1.3.304
     Other possible modules matches:
        intel-mkl
-----------------------------------------------------------------------------------
$ module spider intel/19.1.3.304 # additional details on a specific module
-----------------------------------------------------------------------------------
  intel: intel/19.1.3.304
-----------------------------------------------------------------------------------

    This module can be loaded directly: module load intel/19.1.3.304

    Help:
      Intel Parallel Studio.

When users log into Anvil, a default compiler (GCC), MPI libraries (OpenMPI), and runtime environments (e.g., CUDA on GPU nodes) are automatically loaded into the user environment. It is recommended that users explicitly specify which modules and which versions are needed to run their codes in their job scripts via the module load command. Users are advised not to insert module load commands in their bash profiles, as this can cause issues during initialization of certain software (e.g. ThinLinc).

When users load a module, the module system will automatically replace or deactivate modules to ensure the packages you have loaded are compatible with each other. The following example shows the module system automatically replacing the default Intel compiler version with a user-specified version:

$ module load intel # load default version of Intel compiler
$ module list # see currently loaded modules

Currently Loaded Modules:
  1) intel/19.0.5.281

$ module load intel/19.1.3.304 # load a specific version of Intel compiler
$ module list # see currently loaded modules

The following have been reloaded with a version change:
  1) intel/19.0.5.281 => intel/19.1.3.304

Most modules on Anvil include extensive help messages, so users can take advantage of the module help APPNAME command to find information about a particular application or module. Every module also contains two environment variables named $RCAC_APPNAME_ROOT and $RCAC_APPNAME_VERSION, identifying its installation prefix and its version. This information can be displayed with module show APPNAME. Users are encouraged to use the generic environment variables such as CC, CXX, FC, MPICC, and MPICXX made available through the compiler and MPI modules when compiling their code.
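
For example (an illustrative session; the output depends on the installed versions, and the variable names follow the $RCAC_APPNAME_ROOT pattern described above):

$ module help gcc    # print the help message of the gcc module
$ module show gcc    # show environment changes, including $RCAC_GCC_ROOT and $RCAC_GCC_VERSION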

Some other common module commands:

To unload a module

$ module unload mymodulename

To unload all loaded modules and reset everything to the original state.

$ module purge

To see all available modules that are compatible with currently loaded modules

$ module avail

To display information about a specified module, including environment changes, dependencies, software version and path.

$ module show mymodulename

Compiling, performance, and optimization on Anvil

Anvil CPU nodes have GNU, Intel, and AOCC (AMD) compilers available along with multiple MPI implementations (OpenMPI, Intel MPI (IMPI), and MVAPICH2). Anvil GPU nodes also provide the PGI compiler. Users may want to note the following AMD Milan specific optimization options that can help improve code performance on Anvil:

  1. The majority of the applications on Anvil are built using GCC 11.2.0 which features an AMD Milan specific optimization flag (-march=znver3).
  2. AMD Milan CPUs support the Advanced Vector Extensions 2 (AVX2) vector instructions set. GNU, Intel, and AOCC compilers all have flags to support AVX2. Using AVX2, up to eight floating point operations can be executed per cycle per core, potentially doubling the performance relative to non-AVX2 processors running at the same clock speed.
  3. In order to enable AVX2 support, when compiling your code, use the -march=znver3 flag (for GCC 11.2 and newer, Clang, and AOCC compilers), the -march=znver2 flag (for GCC 10.2), or -march=core-avx2 (for Intel compilers and GCC prior to 9.3); see the example after this list.
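
For instance, a serial C code could be compiled with AVX2 enabled as follows (an illustrative sketch; myprogram.c is a placeholder file name):

$ module load gcc/11.2.0
$ gcc -O3 -march=znver3 myprogram.c -o myprogram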

Other Software Usage Notes:

  1. Use the same environment to run your executables as the one in which you compiled the code. When switching between compilers for different applications, make sure that you load the appropriate modules before running your executables.
  2. Explicitly set the optimization level in your makefiles or compilation scripts. Most well written codes can safely use the highest optimization level (-O3), but many compilers set lower default levels (e.g. GNU compilers use the default -O0, which turns off all optimizations).
  3. Turn off debugging, profiling, and bounds checking when building executables intended for production runs as these can seriously impact performance. These options are all disabled by default. The flag used for bounds checking is compiler dependent, but the debugging (-g) and profiling (-pg) flags tend to be the same for all major compilers.
  4. Some compiler options are the same for all available compilers on Anvil (e.g. -o), while others are different. Many options are available in one compiler suite but not the other. For example, Intel, PGI, and GNU compilers use the -qopenmp, -mp, and -fopenmp flags, respectively, for building OpenMP applications.
  5. MPI compiler wrappers (e.g. mpicc, mpif90) all call the appropriate compilers and load the correct MPI libraries depending on the loaded modules. While the same names may be used for different compilers, keep in mind that these are completely independent scripts; see the example after this list.
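
For example, the OpenMPI wrappers can show which underlying compiler and flags they will invoke (a brief illustration; the output depends on the loaded modules):

$ module load gcc openmpi
$ mpicc --showme    # print the underlying compiler command line and MPI link flags without compiling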

For Python users, Anvil provides two Python distributions: 1) a natively compiled Python module with a small subset of essential numerical libraries which are optimized for the AMD Milan architecture and 2) binaries distributed through Anaconda. Users are encouraged to use virtual environments for installing and using additional Python packages.
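
For example, one common approach is to create a personal environment on top of the Anaconda module (a sketch; the environment path and package name are placeholders):

$ module load anaconda
$ python -m venv $HOME/my-python-env         # create a personal virtual environment
$ source $HOME/my-python-env/bin/activate    # activate it
$ pip install package-name                   # install additional packages into the environment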

A broad range of application modules from various science and engineering domains are installed on Anvil, including mathematics and statistical modeling tools, visualization software, computational fluid dynamics codes, molecular modeling packages, and debugging tools.

In addition, Singularity is supported on Anvil, and NVIDIA GPU Cloud containers are available on Anvil GPU nodes.

Compiling Source code

This section provides some examples of compiling source code on Anvil.

Compiling Serial Programs

A serial program is a single process which executes as a sequential stream of instructions on one processor core. Compilers capable of serial programming are available for C, C++, and versions of Fortran.

Here are a few sample serial programs:
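
For illustration, a minimal serial C program (serial_hello.c is a placeholder file name) might look like this:

/* serial_hello.c: a single process printing a message */
#include <stdio.h>

int main(void)
{
    printf("hello, world\n");
    return 0;
}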

To load a compiler, enter one of the following:

$ module load intel
$ module load gcc
$ module load aocc
The following table illustrates how to compile your serial program:

Fortran 77
  Intel: $ ifort myprogram.f -o myprogram
  GNU:   $ gfortran myprogram.f -o myprogram
  AOCC:  $ flang program.f -o program

Fortran 90
  Intel: $ ifort myprogram.f90 -o myprogram
  GNU:   $ gfortran myprogram.f90 -o myprogram
  AOCC:  $ flang program.f90 -o program

Fortran 95
  Intel: $ ifort myprogram.f90 -o myprogram
  GNU:   $ gfortran myprogram.f95 -o myprogram
  AOCC:  $ flang program.f90 -o program

C
  Intel: $ icc myprogram.c -o myprogram
  GNU:   $ gcc myprogram.c -o myprogram
  AOCC:  $ clang program.c -o program

C++
  Intel: $ icc myprogram.cpp -o myprogram
  GNU:   $ g++ myprogram.cpp -o myprogram
  AOCC:  $ clang++ program.C -o program

The Intel, GNU and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any free-format Fortran code regardless of standard version.

Compiling MPI Programs

OpenMPI, Intel MPI (IMPI) and MVAPICH2 are implementations of the Message-Passing Interface (MPI) standard. Libraries for these MPI implementations and compilers for C, C++, and Fortran are available on Anvil.

MPI programs require including a header file:
Language Header Files
Fortran 77
INCLUDE 'mpif.h'
Fortran 90
INCLUDE 'mpif.h'
Fortran 95
INCLUDE 'mpif.h'
C
#include <mpi.h>
C++
#include <mpi.h>

Here are a few sample programs using MPI:
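
For illustration, a minimal MPI program in C (mpi_hello.c is a placeholder file name) might look like this:

/* mpi_hello.c: each MPI rank prints its rank and the total number of ranks */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}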

To see the available MPI libraries:

$ module avail openmpi 
$ module avail impi
$ module avail mvapich2
The following table illustrates how to compile your MPI program. Any compiler flags accepted by the Intel ifort/icc compilers are compatible with their respective MPI compiler.

Fortran 77
  Intel compiler with Intel MPI (IMPI):           $ mpiifort program.f -o program
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpif77 program.f -o program

Fortran 90
  Intel compiler with Intel MPI (IMPI):           $ mpiifort program.f90 -o program
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpif90 program.f90 -o program

Fortran 95
  Intel compiler with Intel MPI (IMPI):           $ mpiifort program.f90 -o program
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpif90 program.f90 -o program

C
  Intel compiler with Intel MPI (IMPI):           $ mpiicc program.c -o program
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpicc program.c -o program

C++
  Intel compiler with Intel MPI (IMPI):           $ mpiicpc program.C -o program
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpicxx program.C -o program

The Intel, GNU and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any free-format Fortran code regardless of standard version.

Additional documentation on these MPI libraries is available from their respective projects.

Compiling OpenMP Programs

All compilers installed on Anvil include OpenMP functionality for C, C++, and Fortran. An OpenMP program is a single process that takes advantage of a multi-core processor and its shared memory to achieve a form of parallel computing called multithreading. It distributes the work of a process over processor cores in a single compute node without the need for MPI communications.

OpenMP programs require including a header file:
Language Header Files
Fortran 77
INCLUDE 'omp_lib.h'
Fortran 90
use omp_lib
Fortran 95
use omp_lib
C
#include <omp.h>
C++
#include <omp.h>

OpenMP can express both task parallelism and loop-level (data) parallelism; the example below illustrates loop-level parallelism.
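
A minimal sketch in C (omp_loop.c is a placeholder file name):

/* omp_loop.c: fill an array in parallel using an OpenMP parallel for loop */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double a[N];
    int i;

    #pragma omp parallel for
    for (i = 0; i < N; i++)
        a[i] = 2.0 * i;

    printf("max OpenMP threads: %d, a[N-1] = %f\n", omp_get_max_threads(), a[N - 1]);
    return 0;
}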

To load a compiler, enter one of the following:

$ module load intel
$ module load gcc
$ module load aocc
The following table illustrates how to compile your shared-memory program. Any compiler flags accepted by the ifort/icc compilers are compatible with OpenMP.

Fortran 77
  Intel: $ ifort -qopenmp myprogram.f -o myprogram
  GNU:   $ gfortran -fopenmp myprogram.f -o myprogram
  AOCC:  $ flang -fopenmp program.f -o program

Fortran 90
  Intel: $ ifort -qopenmp myprogram.f90 -o myprogram
  GNU:   $ gfortran -fopenmp myprogram.f90 -o myprogram
  AOCC:  $ flang -fopenmp program.f90 -o program

Fortran 95
  Intel: $ ifort -qopenmp myprogram.f90 -o myprogram
  GNU:   $ gfortran -fopenmp myprogram.f90 -o myprogram
  AOCC:  $ flang -fopenmp program.f90 -o program

C
  Intel: $ icc -qopenmp myprogram.c -o myprogram
  GNU:   $ gcc -fopenmp myprogram.c -o myprogram
  AOCC:  $ clang -fopenmp program.c -o program

C++
  Intel: $ icc -qopenmp myprogram.cpp -o myprogram
  GNU:   $ g++ -fopenmp myprogram.cpp -o myprogram
  AOCC:  $ clang++ -fopenmp program.cpp -o program

The Intel, GNU and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any free-format Fortran code regardless of standard version.

More documentation on OpenMP is available from the OpenMP project and the compiler vendors.

Compiling Hybrid Programs

A hybrid program combines both MPI and shared-memory to take advantage of compute clusters with multi-core compute nodes. Libraries for OpenMPI, Intel MPI (IMPI) and MVAPICH2 and compilers which include OpenMP for C, C++, and Fortran are available.

Hybrid programs require including header files:
Language Header Files
Fortran 77
INCLUDE 'omp_lib.h'
INCLUDE 'mpif.h'
Fortran 90
use omp_lib
INCLUDE 'mpif.h'
Fortran 95
use omp_lib
INCLUDE 'mpif.h'
C
#include <mpi.h>
#include <omp.h>
C++
#include <mpi.h>
#include <omp.h>

A hybrid program can combine MPI with either task parallelism or loop-level (data) parallelism of OpenMP; the example below uses OpenMP threads inside each MPI task.
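
A minimal sketch in C (hybrid_hello.c is a placeholder file name):

/* hybrid_hello.c: MPI ranks across tasks, OpenMP threads within each task */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, provided;

    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    #pragma omp parallel
    printf("rank %d, thread %d of %d\n",
           rank, omp_get_thread_num(), omp_get_num_threads());

    MPI_Finalize();
    return 0;
}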

To see the available MPI libraries:

$ module avail impi
$ module avail openmpi
$ module avail mvapich2
The following table illustrates how to compile your hybrid (MPI/OpenMP) program. Any compiler flags accepted by the Intel ifort/icc compilers are compatible with their respective MPI compiler.

Fortran 77
  Intel compiler with Intel MPI (IMPI):           $ mpiifort -qopenmp myprogram.f -o myprogram
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpif77 -fopenmp myprogram.f -o myprogram

Fortran 90
  Intel compiler with Intel MPI (IMPI):           $ mpiifort -qopenmp myprogram.f90 -o myprogram
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpif90 -fopenmp myprogram.f90 -o myprogram

Fortran 95
  Intel compiler with Intel MPI (IMPI):           $ mpiifort -qopenmp myprogram.f90 -o myprogram
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpif90 -fopenmp myprogram.f90 -o myprogram

C
  Intel compiler with Intel MPI (IMPI):           $ mpiicc -qopenmp myprogram.c -o myprogram
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpicc -fopenmp myprogram.c -o myprogram

C++
  Intel compiler with Intel MPI (IMPI):           $ mpiicpc -qopenmp myprogram.C -o myprogram
  Intel/GNU/AOCC compiler with OpenMPI/MVAPICH2:  $ mpicxx -fopenmp myprogram.C -o myprogram

The Intel, GNU and AOCC compilers will not output anything for a successful compilation. Also, the Intel compiler does not recognize the suffix ".f95"; you may use ".f90" for any free-format Fortran code regardless of standard version.

Compiling NVIDIA GPU Programs

The Anvil cluster contains GPU nodes that support CUDA and OpenCL. See the detailed hardware overview for the specifics on the GPUs in Anvil. This section focuses on using CUDA.

A simple CUDA program has a basic workflow:

  • Initialize an array on the host (CPU).
  • Copy array from host memory to GPU memory.
  • Apply an operation to array on GPU.
  • Copy array from GPU memory to host memory.

Here is a sample CUDA program:
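
The original downloadable sample is not reproduced here, but a minimal sketch following the workflow above might look like this (add_one.cu is a placeholder file name):

/* add_one.cu: copy an array to the GPU, add one to each element, copy it back */
#include <stdio.h>

__global__ void add_one(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] += 1.0f;
}

int main(void)
{
    const int n = 1024;
    float host_a[n];
    float *dev_a;

    for (int i = 0; i < n; i++)        /* initialize the array on the host (CPU) */
        host_a[i] = (float)i;

    cudaMalloc((void **)&dev_a, n * sizeof(float));
    cudaMemcpy(dev_a, host_a, n * sizeof(float), cudaMemcpyHostToDevice);  /* host -> GPU */

    add_one<<<(n + 255) / 256, 256>>>(dev_a, n);                           /* operate on the GPU */

    cudaMemcpy(host_a, dev_a, n * sizeof(float), cudaMemcpyDeviceToHost);  /* GPU -> host */
    cudaFree(dev_a);

    printf("host_a[%d] = %f\n", n - 1, host_a[n - 1]);
    return 0;
}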

ModuleTree or modtree helps users navigate between the CPU and GPU software stacks and sets up a default compiler and MPI environment. For the Anvil cluster, our team makes a recommendation regarding the CUDA version, compiler, and MPI library. This is a proven stable CUDA, compiler, and MPI library combination that is recommended if you have no specific requirements. To load the recommended set:

$ module load modtree/gpu
$ module list
# you will have all following modules
Currently Loaded Modules:
  1) gcc/8.4.1   2) numactl/2.0.14   3) zlib/1.2.11   4) openmpi/4.0.6   5) cuda/11.2.2   6) modtree/gpu

Both login and GPU-enabled compute nodes have the CUDA tools and libraries available to compile CUDA programs. For complex compilations, submit an interactive job to get to the GPU-enabled compute nodes. The gpu-debug queue is ideal for this case. To compile a CUDA program, load modtree/gpu, and use nvcc to compile the program:

$ module load modtree/gpu
$ nvcc gpu_hello.cu -o gpu_hello
$ ./gpu_hello
No GPU specified, using first GPU
hello, world

The example illustrates only how to copy an array between a CPU and its GPU but does not perform a serious computation.

The following program times three square matrix multiplications on a CPU and on the global and shared memory of a GPU:

$ module load modtree/gpu
$ nvcc mm.cu -o mm
$ ./mm 0
                                                            speedup
                                                            -------
Elapsed time in CPU:                    7810.1 milliseconds
Elapsed time in GPU (global memory):      19.8 milliseconds  393.9
Elapsed time in GPU (shared memory):       9.2 milliseconds  846.8

For best performance, the input array or matrix must be sufficiently large to overcome the overhead in copying the input and output data to and from the GPU.

For more information about NVIDIA, CUDA, and GPUs, see NVIDIA's documentation.

Provided Software

The Anvil team provides a suite of broadly useful software for users of research computing resources. This suite of software includes compilers, debuggers, visualization libraries, development environments, and other commonly used software libraries. Additionally, some widely-used application software is provided.

ModuleTree or modtree helps users navigate between the CPU and GPU software stacks and sets up a default compiler and MPI environment. For the Anvil cluster, our team makes recommendations for both the CPU and GPU stacks regarding the CUDA version, compiler, math library, and MPI library. These are proven stable combinations of CUDA version, compiler, math library, and MPI library that are recommended if you have no specific requirements. To load the recommended set:

$ module load modtree/cpu # for CPU
$ module load modtree/gpu # for GPU

GCC Compiler

The GNU Compiler Collection (GCC) is provided via the module command on Anvil and will be maintained at a common version compatible across all clusters. Third-party software built with GCC will use this GCC version, rather than the GCC provided by the operating system vendor. To see the GCC compiler versions available from the module command:

$ module avail gcc

Toolchain

The Anvil team will build and maintain an integrated, tested, and supported toolchain of compilers, MPI libraries, data format libraries, and other common libraries. This toolchain will consist of:

  • Compiler suite (C, C++, Fortran) (Intel, GCC and AOCC)
  • BLAS and LAPACK
  • MPI libraries (OpenMPI, MVAPICH, Intel MPI)
  • FFTW
  • HDF5
  • NetCDF

Each of these software packages will be combined with the stable "modtree/cpu" compiler, the latest available Intel compiler, and the common GCC compiler. The goal of these toolchains is to provide a range of compatible compiler and library suites that can be selected to build a wide variety of applications. At the same time, the number of compiler and library combinations is limited to keep the selection easy to navigate and understand. Generally, the toolchain built with the latest Intel compiler will be updated at major releases of the compiler.

Commonly Used Applications

The Anvil team will make every effort to provide a broadly useful set of popular software packages for research cluster users. Software packages such as MATLAB, Python (Anaconda), NAMD, GROMACS, R, VASP, LAMMPS, and others that are useful to a wide range of cluster users are provided via the module command.

Changes to Provided Software

Changes to available software, such as the introduction of new compilers and libraries or the retirement of older toolchains, will be scheduled in advance and coordinated with system maintenances. This is done to minimize impact and provide a predictable time for changes. Advance notice of changes will be given with regular maintenance announcements and through notices printed by "module load". Be sure to check maintenance announcements and job output for any upcoming changes.

Long Term Support

The Anvil team understands the need for a stable and unchanging suite of compilers and libraries. Research projects are often tied to specific compiler versions throughout their lifetime. The Anvil team will make every effort to provide the "modtree/cpu" or "modtree/gpu" environment and the common GCC compiler as a long-term supported environment. These suites will stay unchanged for longer periods than the toolchain built with the latest available Intel compiler.

Installing applications

This section provides some instructions for installing and compiling some common applications on Anvil.

VASP

The Vienna Ab initio Simulation Package (VASP) is a computer program for atomic scale materials modelling, e.g. electronic structure calculations and quantum-mechanical molecular dynamics, from first principles.

VASP License

The VASP team allows only registered users who have purchased their own license to use the software, and access is only given to the VASP release covered by the license of the respective research group. If you are interested in using VASP on Anvil, please send a ticket to the ACCESS Help Desk to request access and provide your license for our verification. Once confirmed, approved users will be given access to the vasp5 or vasp6 unix groups.

Prospective users can use the command below to check their unix groups on the system.

$ id $USER 

If you are interested in purchasing a VASP license, please visit the VASP website for more information.

VASP 5 and VASP 6 Installations

The Anvil team provides VASP 5.4.4 and VASP 6.3.0 installations and modulefiles built with our default environment compiler gcc/11.2.0 and MPI library openmpi/4.1.6. Note that only license-approved users can load the VASP modulefiles as shown below.

You can use the VASP 5.4.4 module by:

$ module load gcc/11.2.0  openmpi/4.1.6
$ module load vasp/5.4.4.pl2

You can use the VASP 6.3.0 module by:

$ module load gcc/11.2.0  openmpi/4.1.6
$ module load vasp/6.3.0

Once a VASP module is loaded, you can choose one of the VASP executables to run your code: vasp_std, vasp_gam, and vasp_ncl.
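
For example, inside a batch or interactive job you might launch the standard executable as follows (a sketch; the working directory and task count are placeholders):

$ cd /path/to/your/vasp-calculation    # folder containing INCAR, POSCAR, KPOINTS, and POTCAR
$ module load gcc/11.2.0 openmpi/4.1.6 vasp/5.4.4.pl2
$ mpirun -np $SLURM_NTASKS vasp_std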

The VASP pseudopotential files are not provided on Anvil; you will need to bring your own POTCAR files.

Build your own VASP 5 and VASP 6

If you would like to use your own VASP on Anvil, please follow the instructions for Installing VASP.6.X.X and Installing VASP.5.X.X.

In the following sections, we provide some instructions about how to install VASP 5 and VASP 6 on Anvil and also the installation scripts:

Build your own VASP 5

For VASP 5.X.X, VASP provides several makefile.include templates in the arch folder, which contain information such as precompiler options, compiler options, and how to link libraries. You can pick one based on your system and preferred features. Here we provide some examples of how to install the vasp.5.4.4.pl2 version on Anvil with different module environments. We also prepared two versions of VASP 5 installation scripts at the end of this page.

Step 1: Download

As a license holder, you can download the VASP source code from the VASP Portal; we will not check your license in this case.

Copy the VASP source file vasp.5.4.4.pl2.tgz to the desired location and untar it with tar zxvf vasp.5.4.4.pl2.tgz to obtain the folder /path/to/vasp-build-folder/vasp.5.4.4.pl2 and reveal its contents.
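
For example (the build path is a placeholder):

$ cd /path/to/vasp-build-folder
$ tar zxvf vasp.5.4.4.pl2.tgz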

Step 2: Prepare makefile.include

  • For GNU compilers parallelized using OpenMPI, combined with MKL

    We modified the makefile.include.linux_gnu file to adapt it to the Anvil system. Download it to your VASP build folder /path/to/vasp-build-folder/vasp.5.4.4.pl2:

    $ cd /path/to/vasp-build-folder/vasp.5.4.4.pl2
    $ wget https://www.rcac.purdue.edu/files/knowledge/compile/src/makefile.include.linux_gnu
    $ cp makefile.include.linux_gnu makefile.include

    If you would like to include the Wannier90 interface, you may also need to include the following lines to the end of your makefile.include file:

    # For the interface to Wannier90 (optional)
    LLIBS += $(WANNIER90_HOME)/libwannier.a

    Load the required modules:

    $ module purge 
    $ module load gcc/11.2.0 openmpi/4.1.6
    $ module load intel-mkl
    # If you would like to include the Wannier90 interface, also load the following module:
    # $ module load wannier90/3.1.0
  • For Intel compilers parallelized using IMPI, combined with MKL

    Copy the makefile.include.linux_intel template from the arch folder to your VASP build folder /path/to/vasp-build-folder/vasp.5.4.4.pl2:

    $ cd /path/to/vasp-build-folder/vasp.5.4.4.pl2
    $ cp arch/makefile.include.linux_intel makefile.include
    

    For better performance, you may add the following line to the end of your makefile.include file (above the GPU section):

    FFLAGS += -march=core-avx2

    If you would like to include the Wannier90 interface, you may also need to include the following lines to the end of your makefile.include file (above the GPU section):

    # For the interface to Wannier90 (optional)
    LLIBS += $(WANNIER90_HOME)/libwannier.a

    Load the required modules:

    $ module purge 
    $ module load intel/19.0.5.281  impi/2019.5.281
    $ module load intel-mkl
    # If you would like to include the Wannier90 interface, also load this module:
    # $ module load wannier90/3.1.0

Step 3: Make

Build VASP with the command make all to install all three executables vasp_std, vasp_gam, and vasp_ncl, or use make std to install only the vasp_std executable. Use make veryclean to remove the build folder if you would like to start the installation process over.

Step 4: Test

You can open an Interactive session to test the installed VASP; you may bring your own VASP test files:

$ cd /path/to/vasp-test-folder/
$ module purge 
$ module load gcc/11.2.0 openmpi/4.1.6 intel-mkl
# If you included the Wannier90 interface, also load this module:
# $ module load wannier90/3.1.0
$ mpirun /path/to/vasp-build-folder/vasp.5.4.4.pl2/bin/vasp_std 


Build your own VASP 6

For VASP 6.X.X, VASP provides several makefile.include templates, which contain information such as precompiler options, compiler options, and how to link libraries. You can pick one based on your system and preferred features. Here we provide some examples of how to install VASP 6.3.0 on Anvil with different module environments. We also prepared two versions of VASP 6 installation scripts at the end of this page.

Step 1: Download

As a license holder, you can download the VASP source code from the VASP Portal; we will not check your license in this case.

Copy the VASP source file vasp.6.3.0.tgz to the desired location and untar it with tar zxvf vasp.6.3.0.tgz to obtain the folder /path/to/vasp-build-folder/vasp.6.3.0 and reveal its contents.
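
For example (the build path is a placeholder):

$ cd /path/to/vasp-build-folder
$ tar zxvf vasp.6.3.0.tgz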

Step 2: Prepare makefile.include

  • For GNU compilers parallelized using OpenMPI + OpenMP, combined with MKL

    We modified the makefile.include.gnu_ompi_mkl_omp file to adapt it to the Anvil system. Download it to your VASP build folder /path/to/vasp-build-folder/vasp.6.3.0:

    $ cd /path/to/vasp-build-folder/vasp.6.3.0
    $ wget https://www.rcac.purdue.edu/files/knowledge/compile/src/makefile.include.gnu_ompi_mkl_omp
    $ cp makefile.include.gnu_ompi_mkl_omp makefile.include

    If you would like to include the Wannier90 interface, you may also need to include the following lines to the end of your makefile.include file:

    # For the VASP-2-Wannier90 interface (optional)
    CPP_OPTIONS    += -DVASP2WANNIER90
    WANNIER90_ROOT ?=$(WANNIER90_HOME)
    LLIBS          += -L$(WANNIER90_ROOT) -lwannier

    Then, load the required modules:

    $ module purge 
    $ module load gcc/11.2.0  openmpi/4.1.6
    $ module load intel-mkl hdf5 
    # If you would like to include the Wannier90 interface, also load the following module:
    # $ module load wannier90/3.1.0
  • For Intel compilers parallelized using IMPI + OpenMP, combined with MKL

    We modified the makefile.include.intel_omp file to adapt it to the Anvil system. Download it to your VASP build folder /path/to/vasp-build-folder/vasp.6.3.0:

    $ cd /path/to/vasp-build-folder/vasp.6.3.0
    $ wget https://www.rcac.purdue.edu/files/knowledge/compile/src/makefile.include.intel_omp
    $ cp makefile.include.intel_omp makefile.include

    If you would like to include the Wannier90 interface, you may also need to include the following lines to the end of your makefile.include file:

    # For the VASP-2-Wannier90 interface (optional)
    CPP_OPTIONS    += -DVASP2WANNIER90
    WANNIER90_ROOT ?=$(WANNIER90_HOME)
    LLIBS          += -L$(WANNIER90_ROOT) -lwannier

    Then, load the required modules:

    $ module purge 
    $ module load intel/19.0.5.281  impi/2019.5.281
    $ module load intel-mkl hdf5 
    # If you would like to include the Wannier90 interface, also load the following module:
    # $ module load wannier90/3.1.0

Step 3: Make

Open the makefile and make sure the first line is VERSIONS = std gam ncl.

Build VASP with the command make all to install all three executables vasp_std, vasp_gam, and vasp_ncl, or use make std to install only the vasp_std executable. Use make veryclean to remove the build folder if you would like to start the installation process over.

Step 4: Test

You can open an Interactive session to test the installed VASP 6. Here is an example of testing the VASP 6.3.0 installation built above with GNU compilers and OpenMPI:

$ cd /path/to/vasp-build-folder/vasp.6.3.0/testsuite
$ module purge 
$ module load gcc/11.2.0 openmpi/4.1.6 intel-mkl hdf5
# If you included the Wannier90 interface, also load the following module:
# $ module load wannier90/3.1.0
$ ./runtest


LAMMPS

Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a molecular dynamics program from Sandia National Laboratories. LAMMPS makes use of Message Passing Interface for parallel communication and is a free and open-source software, distributed under the terms of the GNU General Public License.

Provided LAMMPS module

LAMMPS modules

The Anvil team provides a LAMMPS module built with our default module environment, gcc/11.2.0 and openmpi/4.0.6, to all users. It can be accessed by:

$ module load gcc/11.2.0 openmpi/4.0.6
$ module load lammps/20210310

The LAMMPS executable is lmp and the LAMMPS potential files are installed at $LAMMPS_HOME/share/lammps/potentials, where the value of $LAMMPS_HOME is the path to the LAMMPS installation folder. Use this variable in any scripts. Your actual LAMMPS folder path may change without warning, but this variable will remain current. The current path is:

$ echo $LAMMPS_HOME
/apps/spack/anvil/apps/lammps/20210310-gcc-11.2.0-jzfe7x3

LAMMPS Job Submit Script

This is an example of a job submission file for running parallel LAMMPS jobs using the LAMMPS module installed on Anvil.

#!/bin/bash
# FILENAME:  myjobsubmissionfile

#SBATCH -A myallocation # Allocation name
#SBATCH --nodes=2       # Total # of nodes 
#SBATCH --ntasks=256    # Total # of MPI tasks
#SBATCH --time=1:30:00  # Total run time limit (hh:mm:ss)
#SBATCH -J myjobname    # Job name
#SBATCH -o myjob.o%j    # Name of stdout output file
#SBATCH -e myjob.e%j    # Name of stderr error file
#SBATCH -p wholenode    # Queue (partition) name

# Manage processing environment, load compilers and applications.
module purge
module load gcc/11.2.0 openmpi/4.0.6
module load lammps/20210310
module list

# Launch MPI code
srun -n $SLURM_NTASKS lmp

Build your own LAMMPS


LAMMPS provides very detailed Build LAMMPS instructions with many customization options. In the following sections, we provide basic instructions on how to install LAMMPS on Anvil, as well as a LAMMPS Installation Script for users who would like to build their own LAMMPS on Anvil:

Step 1: Download

LAMMPS is open-source code; you can download LAMMPS as a tarball from the LAMMPS download page. There are several versions available on the LAMMPS webpage; we strongly recommend downloading the latest stable release and untarring it, which will create a LAMMPS directory:

$ wget https://download.lammps.org/tars/lammps-stable.tar.gz
$ tar -xzvf lammps-stable.tar.gz
$ ls 
lammps-23Jun2022 lammps-stable.tar.gz

Step 2: Build source code

LAMMPS provides two ways to build the source code: the traditional make method and the CMake method. These are two independent approaches and users should not mix them together. You can choose the one you are more familiar with.

Build LAMMPS with Make

The traditional make method requires a Makefile appropriate for your system in either the src/MAKE, src/MAKE/MACHINES, src/MAKE/OPTIONS, or src/MAKE/MINE directory. It provides various options to customize your LAMMPS build. If you would like to build your own LAMMPS on Anvil with make, please follow the instructions for Build LAMMPS with make. In the following sections, we provide some instructions on how to install LAMMPS on Anvil with make.

Include LAMMPS Packages

In LAMMPS, a package is a group of files that enable a specific set of features. For example, force fields for molecular systems or rigid-body constraints are in packages. Usually, you can include only the packages you plan to use, but it doesn't hurt to run LAMMPS with additional packages.

To use the make command to see the make options and package status, you first need to change to the src subdirectory. Here we will continue to use lammps-23Jun2022 as an example:

$ cd lammps-23Jun2022/src     # change to main LAMMPS source folder
$ make                        # see a variety of make options
$ make ps                     # check which packages are currently installed

For most LAMMPS packages, you can include them by:

$ make yes-PKG_NAME      # install a package by name; the default is "no", which means the package is excluded
# For example:
$ make yes-MOLECULE

A few packages require additional steps to include libraries or set variables, as explained on Packages with extra build options. If a package requires external libraries, you must configure and build those libraries before building LAMMPS and especially before enabling such a package.

If you have issues with installing external libraries, please contact us at Help Desk.

Instead of specifying all the package options via the command line, LAMMPS provides some Make shortcuts for installing many packages, such as make yes-most, which will install most LAMMPS packages w/o libs. You can pick up one of the shortcuts based on your needs.

Compilation

Once the desired packages are included, you can compile LAMMPS with our default environment, compiler gcc/11.2.0 and MPI library openmpi/4.0.6; you can load both at once with module load modtree/cpu. The corresponding make option is then make g++_openmpi for OpenMPI with the compiler set to GNU g++.

Then the LAMMPS executable lmp_g++_openmpi will be generated in the build folder.

LAMMPS supports parallel compilation, so you may submit an Interactive job to compile in parallel.

If you get error messages and would like to start the installation process over, you can delete compiled objects, libraries, and executables with make clean-all.

Examples

Here is an example of how to install the lammps-23Jun2022 version on Anvil with most packages enabled:

# Setup module environments
$ module purge
$ module load modtree/cpu
$ module load hdf5 fftw gsl netlib-lapack
$ module list

$ cd lammps-23Jun2022/src  # change to main LAMMPS source folder
$ make yes-most            # install most LAMMPS packages w/o libs
$ make ps                  # check which packages are currently installed

# compilation
$ make g++_openmpi        # or "make -j 12 g++_openmpi" to do parallel compiling if you open an interactive session with 12 cores.

Tips

When you run LAMMPS and get an error like "command or style is unknown", it is likely because you did not include the required package for that command or style. If the command or style is available in a package included in the LAMMPS distribution, the error message will indicate which package is needed.

For more information about LAMMPS build options, please refer to the LAMMPS documentation.

Build LAMMPS with CMake

CMake is an alternative to the traditional make method for compiling LAMMPS. CMake has several advantages, and might be helpful for people with limited experience in compiling software or for those who want to modify or extend LAMMPS. If you prefer using CMake, please follow the instructions for Build LAMMPS with CMake. In the following sections, we provide some instructions on how to install LAMMPS on Anvil with CMake, as well as the LAMMPS Installation Script:

Use CMake to generate a build environment

  1. First, go to your LAMMPS directory and create a new folder build for the build environment. Here we will continue to use lammps-23Jun2022 as an example:

    $ cd lammps-23Jun2022
    $ mkdir build; cd build    # create and change to a build directory
  2. To use CMake features, you need to module load cmake first.

  3. For basic LAMMPS installation with no add-on packages enabled and no customization, you can generate a build environment by:

    $ cmake ../cmake         # configuration reading CMake scripts from ../cmake
  4. You can also choose to include or exclude packages in the build.

    In LAMMPS, a package is a group of files that enable a specific set of features. For example, force fields for molecular systems or rigid-body constraints are in packages. Usually, you can include only the packages you plan to use, but it doesn't hurt to run LAMMPS with additional packages.

    For most LAMMPS packages, you can include them by adding the following flag to the cmake command:

    -D PKG_NAME=yes   # default value is "no", which means the package is excluded

    For example:

    $ cmake -D PKG_MOLECULE=yes -D PKG_RIGID=yes -D PKG_MISC=yes ../cmake

    A few packages require additional steps to include libraries or set variables, as explained on Packages with extra build options. If you have issue with installing external libraries, please contact us at Help Desk.

  5. Instead of specifying all the package options via the command line, LAMMPS provides some CMake setting scripts in the cmake/presets folder. You can pick one of them or customize it based on your needs.

  6. If you get error messages after the cmake ../cmake step and would like to start over, you can delete the whole build folder and create a new one:

    $ cd lammps-23Jun2022
    $ rm -rf build
    $ mkdir build && cd build

Compilation

  1. Once the build files are generated by the cmake command, you can compile LAMMPS with our default environment, compiler gcc/11.2.0 and MPI library openmpi/4.0.6; you can load both at once with module load modtree/cpu.

  2. Then compile LAMMPS with make or cmake --build .; upon completion, the LAMMPS executable lmp will be generated in the build folder.

  3. LAMMPS supports parallel compiling, so you may submit an Interactive job to do parallel compilation.

  4. If you get errors while compiling, you can delete compiled objects, libraries, and executables with make clean or cmake --build . --target clean.

Examples

Here is an example of how to install the lammps-23Jun2022 version on Anvil with most packages enabled:

# Setup module environments
$ module purge
$ module load modtree/cpu
$ module load hdf5 fftw gsl netlib-lapack
$ module load cmake anaconda
$ module list

$ cd lammps-23Jun2022      # change to the LAMMPS distribution directory
$ mkdir build; cd build;   # create and change to a build directory

# enable most packages and setup Python package library path
$ cmake -C ../cmake/presets/most.cmake -D PYTHON_EXECUTABLE=$CONDA_PYTHON_EXE ../cmake
# If everything works well, you will see
# -- Build files have been written to: /path-to-lammps/lammps-23Jun2022/build

# compilation
$ make      # or "make -j 12" to do parallel compiling if you open an interactive session with 12 cores.
# If everything works well, you will see
# [100%] Built target lmp

The CMake preset script cmake/presets/most.cmake used in the example above includes the following 57 common packages:

ASPHERE BOCS BODY BROWNIAN CG-DNA CG-SDK CLASS2 COLLOID COLVARS COMPRESS CORESHELL DIELECTRIC DIFFRACTION DIPOLE DPD-BASIC DPD-MESO DPD-REACT DPD-SMOOTH DRUDE EFF EXTRA-COMPUTE EXTRA-DUMP EXTRA-FIX EXTRA-MOLECULE EXTRA-PAIR FEP GRANULAR INTERLAYER KSPACE MACHDYN MANYBODY MC MEAM MISC ML-IAP ML-SNAP MOFFF MOLECULE OPENMP OPT ORIENT PERI PLUGIN POEMS QEQ REACTION REAXFF REPLICA RIGID SHOCK SPH SPIN SRD TALLY UEF VORONOI YAFF

Tips

When you run LAMMPS and get an error like "command or style is unknown", it is likely because you did not include the required package for that command or style. If the command or style is available in a package included in the LAMMPS distribution, the error message will indicate which package is needed.

After the initial build, whenever you edit LAMMPS source files, enable or disable packages, change compiler flags or build options, you must recompile LAMMPS with make.

For more information about LAMMPS build options, please refer to the LAMMPS website.

LAMMPS Installation Script

Here we provide a lammps-23Jun2022 installation script using cmake. It covers the procedure from downloading the source code through the steps described in the Build LAMMPS with CMake Examples section. Start by making an empty folder, then download the installation script install-lammps.sh to this folder. Since parallel compilation with 12 cores is used in the script, you may submit an Interactive job to request 12 cores:

$ mkdir lammps; cd lammps;   # create and change to a lammps directory
$ wget https://www.rcac.purdue.edu/files/knowledge/compile/src/install-lammps.sh
$ ls
install-lammps.sh
$ sinteractive -N 1 -n 12 -A oneofyourallocations -p shared -t 1:00:00
$ bash install-lammps.sh