CPU affinity appears in `gres.conf` and is mapped based on the hardware architecture. Affinity means that, for performance reasons, only certain physical cores are associated with each physical GPU on a GPU node: cores mapped to a GPU have faster access to that GPU than cores that are not. This mapping cannot be changed, as it is part of the physical layout of the devices. Slurm cannot determine it on its own, so it must be told the mapping in `gres.conf`.
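As a rough illustration, a `gres.conf` entry ties each GPU device file to a core range with the `Cores=` parameter. The node names, device paths, and core ranges below are made up for the sketch and are not taken from the actual cluster configuration:

```
# Hypothetical gres.conf entries: each GPU is associated with the cores
# physically closest to it. Node names, device files, and core ranges are
# illustrative only.
NodeName=pascal-node01 Name=gpu File=/dev/nvidia0 Cores=0-13
NodeName=pascal-node01 Name=gpu File=/dev/nvidia1 Cores=14-27
NodeName=ampere-node01 Name=gpu File=/dev/nvidia0 Cores=0-63
NodeName=ampere-node01 Name=gpu File=/dev/nvidia1 Cores=64-127
```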
In practice, CPU affinity limits the ratio of cores to GPUs when requesting GPUs for jobs. Setting aside QoS, if a researcher requests a single GPU and more cores than the table below allows, the job will potentially be spread over multiple nodes. If they try to force a higher core count onto a single node with `--nodes=1`, the job will sit in the queue with `ReqNodeNotAvail`.
The table below ignores QoS limits.
| partition | max cores:GPU from affinity | max cores for 1 GPU |
| --------- | --------------------------- | ------------------- |
| pascal*   | 14:1                        | 14                  |
| ampere*   | 64:1                        | 64                  |
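For example, a single-GPU pascal job that asks for more than 14 cores while pinned to one node would sit in the queue for the reason described above. The partition name and script header below are only a sketch, not a real submission script from the cluster:

```bash
#!/bin/bash
# Hypothetical job script: 1 GPU on the pascal partition but 16 cores.
# With the 14:1 affinity in the table and --nodes=1, Slurm cannot place
# this job and it waits with reason ReqNodeNotAvail.
#SBATCH --partition=pascal
#SBATCH --nodes=1
#SBATCH --gres=gpu:1
#SBATCH --cpus-per-task=16
#SBATCH --time=01:00:00

srun ./my_gpu_program
```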
This is correct to my understanding. Requesting more than the specified number of cores causes the job to span two different nodes, which means some of the requested resources are unavailable to the job but still allocated to it. Someone can request all of the cores on a single pascal node by requesting at least 2 GPUs. This isn't important for the A100s right now because the per-user QoS limits any one person to 64 cores in the first place.
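Assuming a pascal node carries two GPUs with 14 affinity cores each (28 cores total — an assumption about the node layout, not something stated above), the workaround mentioned in the comment would look roughly like this:

```bash
#!/bin/bash
# Hypothetical: request both GPUs so that all 28 cores on one pascal node
# fall within the affinity mapping. Partition name and core count are
# assumptions about the node layout, not confirmed values.
#SBATCH --partition=pascal
#SBATCH --nodes=1
#SBATCH --gres=gpu:2
#SBATCH --cpus-per-task=28

srun ./my_gpu_program
```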