
NVIDIA NGC containers

What is NGC?

NVIDIA GPU Cloud (NGC) is a GPU-accelerated cloud platform optimized for deep learning and scientific computing. NGC offers a comprehensive catalog of GPU-accelerated containers so that applications run quickly and reliably in high-performance computing environments. NGC containers are deployed here to extend the clusters' capabilities, enable powerful software, and deliver fast results. By using Singularity and NGC, users can focus on building lean models, producing optimal solutions, and gathering faster insights. For more information, please visit https://www.nvidia.com/en-us/gpu-cloud and the NGC software catalog.

Getting Started

Users can download containers from the NGC software catalog and run them directly with Singularity, following the instructions on the corresponding container's catalog page.
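
For example, an NGC image can be pulled and run with Singularity roughly as follows (a hedged sketch: the image path and tag are placeholders; use the exact commands shown on the container's catalog page):

$ singularity pull gromacs.sif docker://nvcr.io/hpc/gromacs:2021
$ singularity exec --nv gromacs.sif gmx --version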

In addition, we provide a subset of pre-downloaded NGC containers wrapped in convenient software modules. These modules hide the underlying complexity and provide the same commands you would expect from non-containerized versions of each application.

On clusters equipped with NVIDIA GPUs, run the commands below to see the list of deployed NGC containers.

$ module load ngc 
$ module avail 

Deployed Applications

autodock

Description

The AutoDock Suite is a growing collection of methods for computational docking and virtual screening, for use in structure-based drug discovery and exploration of the basic mechanisms of biomolecular structure and function.

Versions

  • Scholar: 2020.06
  • Gilbreth: 2020.06
  • Anvil: 2020.06

Module

You can load the modules with:

module load ngc
module load autodock

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run autodock on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=autodock
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc autodock
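
# The module only sets up the environment; an AutoDock-GPU run would typically follow.
# Hedged sketch -- the binary name and input files are placeholders; check the container's NGC catalog page for exact usage:
autodock_gpu_128wi --ffile protein.maps.fld --lfile ligand.pdbqt --nrun 20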

chroma

Description

The Chroma package provides a toolbox and executables to carry out calculations in lattice quantum chromodynamics (LQCD). It is built on top of the QDP++ (QCD Data Parallel) layer, which provides an abstract data-parallel view of the lattice and supplies lattice-wide types and expressions, using expression templates, to allow straightforward encoding of LQCD equations.

Versions

  • Scholar: 2018-cuda9.0-ubuntu16.04-volta-openmpi, 2020.06, 2021.04
  • Gilbreth: 2018-cuda9.0-ubuntu16.04-volta-openmpi, 2020.06, 2021.04

Module

You can load the modules with:

module load ngc
module load chroma

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run chroma on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=chroma
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc chroma
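
# Hedged sketch of a typical run -- input.xml is a placeholder for your own LQCD input:
chroma -i input.xml -o output.xml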

gamess

Description

The General Atomic and Molecular Electronic Structure System (GAMESS) program simulates molecular quantum chemistry, allowing users to calculate various molecular properties and dynamics.

Versions

  • Scholar: 17.09-r2-libcchem
  • Gilbreth: 17.09-r2-libcchem
  • Anvil: 17.09-r2-libcchem

Module

You can load the modules with:

module load ngc
module load gamess

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run gamess on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=gamess
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc gamess
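
# Hedged sketch -- launch details differ between GAMESS builds; the input deck name is a placeholder (see the container's NGC catalog page):
rungms myjob.inp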

gromacs

Description

GROMACS (GROningen MAchine for Chemical Simulations) is a molecular dynamics package primarily designed for simulations of proteins, lipids, and nucleic acids. It was originally developed in the Biophysical Chemistry department of the University of Groningen and is now maintained by contributors in universities and research centers across the world.

Versions

  • Scholar: 2018.2, 2020.2, 2021, 2021.3
  • Gilbreth: 2018.2, 2020.2, 2021, 2021.3
  • Anvil: 2018.2, 2020.2, 2021, 2021.3

Module

You can load the modules with:

module load ngc
module load gromacs

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run gromacs on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=gromacs
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc gromacs
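
# Hedged sketch -- md_run.tpr is a placeholder run input prepared beforehand with gmx grompp:
gmx mdrun -deffnm md_run -ntomp $SLURM_CPUS_PER_TASK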

julia

Description

The Julia programming language is a flexible dynamic language, appropriate for scientific and numerical computing, with performance comparable to traditional statically-typed languages.

Versions

  • Scholar: v1.5.0, v2.4.2
  • Gilbreth: v1.5.0, v2.4.2
  • Anvil: v1.5.0, v2.4.2

Module

You can load the modules with:

module load ngc
module load julia

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run julia on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=julia
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc julia
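
# my_script.jl is a placeholder for your own Julia program:
julia my_script.jl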

lammps

Description

The Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) is a software application designed for molecular dynamics simulations. It has potentials for solid-state materials (metals, semiconductors), soft matter (biomolecules, polymers), and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale.

Versions

  • Scholar: 10Feb2021, 15Jun2020, 24Oct2018, 29Oct2020
  • Gilbreth: 10Feb2021, 15Jun2020, 24Oct2018, 29Oct2020
  • Anvil: 10Feb2021, 15Jun2020, 24Oct2018, 29Oct2020

Module

You can load the modules with:

module load ngc
module load lammps

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run lammps on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=lammps
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc lammps
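
# Hedged sketch -- in.lj is a placeholder LAMMPS input script:
lmp -in in.lj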

milc

Description

MILC represents part of a set of codes written by the MIMD Lattice Computation (MILC) collaboration and used to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics. It performs simulations of four-dimensional SU(3) lattice gauge theory on MIMD parallel machines. "Strong interactions" are responsible for binding quarks into protons and neutrons and holding them all together in the atomic nucleus.

Versions

  • Scholar: quda0.8-patch4Oct2017
  • Gilbreth: quda0.8-patch4Oct2017

Module

You can load the modules with:

module load ngc
module load milc

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run milc on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=milc
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc milc
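
# Hedged sketch -- the executable and input file vary by MILC application (su3_rhmd_hisq and my_input.in are placeholders):
su3_rhmd_hisq my_input.in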

namd

Description

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR.

Versions

  • Scholar: 2.13-multinode, 2.13-singlenode, 3.0-alpha3-singlenode
  • Gilbreth: 2.13-multinode, 2.13-singlenode, 3.0-alpha3-singlenode
  • Anvil: 2.13-multinode, 2.13-singlenode, 3.0-alpha3-singlenode

Module

You can load the modules with:

module load ngc
module load namd

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run namd on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=namd
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc namd
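
# Hedged sketch -- my_config.namd is a placeholder NAMD configuration file:
namd2 +p$SLURM_CPUS_PER_TASK my_config.namd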

nvhpc

Description

The NVIDIA HPC SDK C, C++, and Fortran compilers support GPU acceleration of HPC modeling and simulation applications with standard C++ and Fortran, OpenACC® directives, and CUDA®. GPU-accelerated math libraries maximize performance on common HPC algorithms, and optimized communications libraries enable standards-based multi-GPU and scalable systems programming.

Versions

  • Scholar: 20.7, 20.9, 20.11, 21.5, 21.9
  • Gilbreth: 20.7, 20.9, 20.11, 21.5, 21.9
  • Anvil: 20.7, 20.9, 20.11, 21.5, 21.9

Module

You can load the modules with:

module load ngc
module load nvhpc

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run nvhpc on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=nvhpc
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc nvhpc
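
# Hedged sketch -- my_app.f90 is a placeholder source file compiled with OpenACC GPU offload:
nvfortran -acc -o my_app my_app.f90
./my_app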

parabricks

Description

NVIDIA's Clara Parabricks brings next-generation sequencing to GPUs, accelerating an array of gold-standard tooling such as BWA-MEM, GATK4, Google's DeepVariant, and many more. Users can achieve a 30-60x acceleration and 99.99% accuracy for variant calling when compared against CPU-only BWA-GATK4 pipelines, meaning a single server can process up to 60 whole genomes per day. These tools can be easily integrated into current pipelines with drop-in replacement commands to quickly bring speed and data-center scale to a range of applications, including germline, somatic, and RNA workflows.

Versions

  • Scholar: 4.0.0-1
  • Gilbreth: 4.0.0-1
  • Anvil: 4.0.0-1

Module

You can load the modules with:

module load ngc
module load parabricks

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run parabricks on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=parabricks
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc parabricks
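
# Hedged sketch of a germline alignment -- the reference and FASTQ file names are placeholders:
pbrun fq2bam --ref Ref.fa --in-fq sample_1.fq.gz sample_2.fq.gz --out-bam sample.bam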

paraview

Description

There is no ParaView client GUI in this container, but the ParaView Web application is included.

Versions

  • Scholar: 5.9.0
  • Gilbreth: 5.9.0
  • Anvil: 5.9.0

Module

You can load the modules with:

module load ngc
module load paraview

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run paraview on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=paraview
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc paraview
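
# Hedged sketch -- my_visualization.py is a placeholder ParaView Python script for batch rendering:
pvbatch my_visualization.py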

pytorch

Description

PyTorch is a GPU accelerated tensor computational framework with a Python front end. Functionality can be easily extended with common Python libraries such as NumPy, SciPy, and Cython. Automatic differentiation is done with a tape-based system at both a functional and neural network layer level. This functionality brings a high level of flexibility and speed as a deep learning framework and provides accelerated NumPy-like functionality.

Versions

  • Scholar: 20.02-py3, 20.03-py3, 20.06-py3, 20.11-py3, 20.12-py3, 21.06-py3, 21.09-py3
  • Gilbreth: 20.02-py3, 20.03-py3, 20.06-py3, 20.11-py3, 20.12-py3, 21.06-py3, 21.09-py3
  • Anvil: 20.02-py3, 20.03-py3, 20.06-py3, 20.11-py3, 20.12-py3, 21.06-py3, 21.09-py3

Module

You can load the modules with:

module load ngc
module load pytorch

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run pytorch on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=pytorch
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc pytorch
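
# Quick GPU visibility check, then run your own code (train.py is a placeholder):
python -c "import torch; print(torch.cuda.is_available())"
python train.py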

qmcpack

Description

QMCPACK is an open-source, high-performance electronic structure code that implements numerous quantum Monte Carlo algorithms. Its main applications are electronic structure calculations of molecular, periodic 2D, and periodic 3D solid-state systems. Variational Monte Carlo (VMC), diffusion Monte Carlo (DMC), and a number of other advanced QMC algorithms are implemented. By directly solving the Schrödinger equation, QMC methods offer greater accuracy than methods such as density functional theory, but at the cost of much greater computational expense. Distinct from many other correlated many-body methods, QMC methods are readily applicable to both bulk periodic and isolated molecular systems.

Versions

  • Scholar: v3.5.0
  • Gilbreth: v3.5.0
  • Anvil: v3.5.0

Module

You can load the modules with:

module load ngc
module load qmcpack

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run qmcpack on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=qmcpack
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc qmcpack
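
# Hedged sketch -- my_input.xml is a placeholder QMCPACK input file:
qmcpack my_input.xml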

quantum_espresso

Description

Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale, based on density-functional theory, plane waves, and pseudopotentials.

Versions

  • Scholar: v6.6a1, v6.7
  • Gilbreth: v6.6a1, v6.7
  • Anvil: v6.6a1, v6.7

Module

You can load the modules with:

module load ngc
module load quantum_espresso

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run quantum_espresso on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=quantum_espresso
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc quantum_espresso
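
# Hedged sketch of a PWscf run -- my_scf.in is a placeholder input file:
pw.x -input my_scf.in > my_scf.out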

rapidsai

Description

The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.

Versions

  • Scholar: 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 21.06, 21.10
  • Gilbreth: 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 21.06, 21.10
  • Anvil: 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 21.06, 21.10

Module

You can load the modules with:

module load ngc
module load rapidsai

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run rapidsai on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=rapidsai
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc rapidsai
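
# Quick cuDF sanity check, then run your own analysis (my_analysis.py is a placeholder):
python -c "import cudf; print(cudf.Series([1, 2, 3]).sum())"
python my_analysis.py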

relion

Description

RELION (for REgularized LIkelihood OptimizatioN) implements an empirical Bayesian approach to the analysis of electron cryo-microscopy (cryo-EM) data. Specifically, it provides methods of refinement of singular or multiple 3D reconstructions as well as 2D class averages. RELION is an important tool in the study of living cells.

Versions

  • Scholar: 2.1.b1, 3.0.8, 3.1.0, 3.1.2, 3.1.3
  • Gilbreth: 2.1.b1, 3.0.8, 3.1.0, 3.1.2, 3.1.3
  • Anvil: 2.1.b1, 3.1.0, 3.1.2, 3.1.3

Module

You can load the modules with:

module load ngc
module load relion

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run relion on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=relion
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc relion
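
# Hedged sketch -- the STAR file and options are placeholders; a real refinement needs the full set of RELION options:
relion_refine --i particles.star --o Refine3D/run1 --gpu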

tensorflow

Description

TensorFlow is an open-source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) that flow between them. This flexible architecture lets you deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device without rewriting code.

Versions

  • Scholar: 20.02-tf1-py3, 20.02-tf2-py3, 20.03-tf1-py3, 20.03-tf2-py3, 20.06-tf1-py3, 20.06-tf2-py3, 20.11-tf1-py3, 20.11-tf2-py3, 20.12-tf1-py3, 20.12-tf2-py3, 21.06-tf1-py3, 21.06-tf2-py3, 21.09-tf1-py3, 21.09-tf2-py3
  • Gilbreth: 20.02-tf1-py3, 20.02-tf2-py3, 20.03-tf1-py3, 20.03-tf2-py3, 20.06-tf1-py3, 20.06-tf2-py3, 20.11-tf1-py3, 20.11-tf2-py3, 20.12-tf1-py3, 20.12-tf2-py3, 21.06-tf1-py3, 21.06-tf2-py3, 21.09-tf1-py3, 21.09-tf2-py3
  • Anvil: 20.02-tf1-py3, 20.02-tf2-py3, 20.03-tf1-py3, 20.03-tf2-py3, 20.06-tf1-py3, 20.06-tf2-py3, 20.11-tf1-py3, 20.11-tf2-py3, 20.12-tf1-py3, 20.12-tf2-py3, 21.06-tf1-py3, 21.06-tf2-py3, 21.09-tf1-py3, 21.09-tf2-py3

Module

You can load the modules with:

module load ngc
module load tensorflow

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run tensorflow on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=tensorflow
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc tensorflow
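
# Quick GPU visibility check for TF2 tags, then run your own code (train.py is a placeholder):
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
python train.py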

torchani

Description

TorchANI is a PyTorch-based program for training/inference of ANI (ANAKIN-ME) deep learning models to obtain potential energy surfaces and other physical properties of molecular systems.

Versions

  • Scholar: 2021.04
  • Gilbreth: 2021.04
  • Anvil: 2021.04

Module

You can load the modules with:

module load ngc
module load torchani

Example job

Using #!/bin/sh -l as the shebang in your Slurm job script can cause some container-based modules to fail. Please use #!/bin/bash instead.

To run torchani on our clusters:

#!/bin/bash
#SBATCH -A myallocation     # Allocation name
#SBATCH -t 1:00:00
#SBATCH -N 1
#SBATCH -n 1
#SBATCH -c 8
#SBATCH --gpus-per-node=1
#SBATCH --job-name=torchani
#SBATCH --mail-type=FAIL,BEGIN,END
#SBATCH --error=%x-%J-%u.err
#SBATCH --output=%x-%J-%u.out

module --force purge
ml ngc torchani
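
# Hedged sketch -- loads a built-in pretrained ANI model to confirm the environment works:
python -c "import torchani; print(torchani.models.ANI2x())"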