Anvil Scheduler Changes

  • Announcements
  • Anvil

During Anvil's scheduled maintenance on June 2, 2022, two changes will be made to the current behavior of Slurm on Anvil.

  1. The standard queue will be renamed to wholenode. This change seeks to alleviate confusion regarding the default behavior of the standard partition and to make the naming more descriptive. To align with other XSEDE systems, the default behavior of this queue is to allocate all of the resources on the requested nodes to the user. Jobs submitted to this partition will consume all 128 cores on a node even if a user requests only one task, i.e. they will consume 128 SUs per node per hour. Note that this partition remains the scheduler's default (i.e. jobs not requesting an explicit partition will be placed in wholenode). Users can use the shared partition to request partial nodes, which consume fewer SUs.
  2. The --mem=0 option will be disabled. This option is used to explicitly request all of the memory on the nodes used by your job. However, Slurm does not properly count the cores allocated to such a job, leading to incorrect SU accounting. Users who need a node's full memory must instead explicitly specify the amount of memory they want, or use the --exclusive option to request the entire node.
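The two changes above can be illustrated with sketch job scripts (the account name and memory amounts below are placeholders, not values from this announcement):

```
#!/bin/bash
# Full-node job: wholenode is the default partition, so -p may be omitted.
# This job is charged for all 128 cores, i.e. 128 SUs per node per hour,
# even if it launches only a single task.
#SBATCH -p wholenode
#SBATCH --nodes=1
#SBATCH -A myallocation        # placeholder allocation/account name

# Partial-node job: use the shared partition to be charged only for the
# cores actually requested.
#SBATCH -p shared
#SBATCH --ntasks=16

# Instead of the disabled --mem=0, either request a specific amount of
# memory per node, e.g.:
#SBATCH --mem=64G              # placeholder amount
# or take the whole node (all cores and all memory) explicitly:
#SBATCH --exclusive
```

Note that these directives are alternatives for separate jobs, not a single script; a real submission would pick one partition and one memory strategy.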

Please email help@xsede.org if you have any questions.

Originally posted: