
Large memory jobs

One benefit of purchasing regular nodes on HPC clusters is access to large memory nodes with 1 TB of RAM. Because memory is distributed evenly among all available cores, each core is tied to about 8 GB of memory (1024 GB / 128 cores); a quick way to size a core request from a memory requirement is sketched after the example output below. This can be very useful for jobs with large data sets. Using the command slist, users can see that the queue name of the large memory nodes is highmem and the maximum walltime is 24 hours. Note that the large memory nodes are shared by all users, so walltime extensions will not be granted.

user@cluster-fe04:~ $ slist

                      Current Number of Cores                       Node
Account           Total    Queue     Run    Free    Max Walltime    Type
==============  =================================  ==============  ======
debug               256      249       0     256        00:30:00       A
gpu                 512      128     256     256                       G
highmem            1024      368     590     434      1-00:00:00       B
multigpu             48        0       0      48                       G
rcac-b             1024        0       0    1024     14-00:00:00       B
standby           46976     1600    4563   11552        04:00:00       A

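Since memory on these nodes is allocated in proportion to the cores requested, a quick way to size a job is to divide the memory it needs by the roughly 8 GB available per core. The snippet below is a minimal sketch of that arithmetic; the 96 GB figure and the variable names are illustrative only, not part of the output above.

#!/bin/bash
# Estimate how many cores to request so that the per-core memory allocation
# covers the job's needs (assumes ~8 GB per core, i.e. 1024 GB / 128 cores).
MEM_NEEDED_GB=96        # hypothetical memory requirement of your job
GB_PER_CORE=8
CORES=$(( (MEM_NEEDED_GB + GB_PER_CORE - 1) / GB_PER_CORE ))   # round up
echo "Request at least ${CORES} cores, e.g. #SBATCH -n ${CORES}"
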
Example script for large memory jobs

#!/bin/bash
#SBATCH -A highmem                    # submit to the highmem account/queue
#SBATCH -t 12:00:00                   # walltime (highmem maximum is 24 hours)
#SBATCH -N 1                          # number of nodes
#SBATCH -n 12                         # number of cores (~8 GB of memory per core)
#SBATCH --job-name=alphafold
#SBATCH --mail-type=FAIL,BEGIN,END    # email on job start, end, and failure
#SBATCH --error=%x-%J-%u.err          # stderr file: jobname-jobid-username.err
#SBATCH --output=%x-%J-%u.out         # stdout file: jobname-jobid-username.out

module --force purge
ml biocontainers alphafold/2.3.1

run_alphafold.sh --flagfile=full_db_20230311.ff  \
    --fasta_paths=sample.fasta --max_template_date=2022-02-01 \
    --output_dir=af2_full_out --model_preset=monomer \
    --use_gpu_relax=False
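
Assuming the script above is saved as alphafold_highmem.sub (the filename here is just an example), it can be submitted and monitored with the standard Slurm commands:

sbatch alphafold_highmem.sub
squeue -u $USER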
