Anvil Scheduler Changes
During scheduled maintenance on June 2, 2022, two changes will be made to the current behavior of Slurm on Anvil.
The standard queue will be renamed to wholenode. This change seeks to alleviate confusion regarding the default behavior of the standard partition and to make the name more descriptive. To resemble other XSEDE systems, the default behavior of this queue is to allocate all of the resources on the requested nodes to the user. Jobs submitted to this partition will consume all 128 cores on a node even if a user requests only one task, i.e., they will consume 128 SUs per node per hour. Note that this partition remains the scheduler's default: jobs that do not request an explicit partition will be placed in wholenode. Users can use the shared partition to request partial nodes, which consume fewer SUs.
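As a sketch, a job script for the shared partition might look like the following; the account name and program are placeholders, and only the partition names (wholenode, shared) come from this announcement.

```shell
#!/bin/bash
#SBATCH --account=myallocation   # placeholder allocation name
#SBATCH --partition=shared       # request a partial node instead of the default wholenode
#SBATCH --ntasks=16              # only these 16 cores are allocated and charged, not all 128
#SBATCH --time=01:00:00

srun ./my_program                # placeholder executable
```

Submitting the same script without the --partition line would land the job in wholenode and charge 128 SUs per node per hour regardless of the task count.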
The --mem=0 option will be disabled. This option explicitly requests all of the memory on the nodes used by your job; however, Slurm does not count the cores allocated to the job properly in this case, leading to incorrect SU calculation. If necessary, users must explicitly specify the amount of memory they want to use, or use the --exclusive option if requesting an entire node's memory.
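The two supported alternatives to --mem=0 can be sketched as follows; the memory amount and task count here are illustrative placeholders, not recommendations.

```shell
#!/bin/bash
#SBATCH --partition=shared
#SBATCH --ntasks=8
#SBATCH --mem=64G        # alternative 1: state the memory your job actually needs
#SBATCH --time=01:00:00

## Alternative 2: uncomment the line below (and drop --mem) to claim a
## whole node, including all of its memory and cores:
##SBATCH --exclusive

srun ./my_program        # placeholder executable
```

Specifying memory explicitly keeps SU accounting correct on the shared partition, while --exclusive charges for the full node, matching the wholenode behavior described above.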
Please email email@example.com if you have any questions.