NumCPUs shows 2 when --cpus-per-task=1

When I submit a job requesting a single CPU core, the job information shows NumCPUs=2.

Why does it show NumCPUs=2 instead of NumCPUs=1? Is this correct?

sub.sh

#!/bin/bash

# Parameters
#SBATCH --error=%j_0_log.err
#SBATCH --job-name=test
#SBATCH -c 1
#SBATCH --output=%j_0_log.out
#SBATCH --partition=dev
#SBATCH --time=4

# command
srun echo "hello world"

The job information shows the following:

scontrol show job 915246

...
...
JobId=915246 JobName=test
...
NumNodes=1 NumCPUs=2 NumTasks=0 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
TRES=cpu=2,node=1,billing=2
...

Solution 1:

The compute nodes in your cluster are most probably configured with hyperthreading (multithreading) enabled. In that case, Slurm allocates entire physical cores to jobs and does not allow two distinct jobs to share the same physical core.

So although you request one core here, you are given a full physical core, hence two hardware threads, i.e. two CPUs in Slurm terminology.
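You can verify this by checking how Slurm sees the node hardware (the node name is a placeholder; use any node from your dev partition):

scontrol show node <nodename> | grep -E 'CoresPerSocket|ThreadsPerCore|CPUTot'

If the output shows ThreadsPerCore=2, each physical core counts as two CPUs, which is consistent with the NumCPUs=2 and TRES=cpu=2 you see for your one-core job.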

Slurm can be configured to behave as if all CPUs were physical cores, though. See the Slurm documentation on CPU management for more information.
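For reference, a minimal sketch of what that can look like on the admin side, if I understand the slurm.conf documentation correctly (the node name and counts below are purely illustrative, and only the cluster administrators can change this): set CPUs in the node definition to the number of physical cores rather than the number of hardware threads.

# Hypothetical node with 2 sockets x 8 cores x 2 threads = 32 hardware threads.
# Setting CPUs=16 (the core count) makes Slurm count each physical core as one CPU,
# so a job submitted with -c 1 would report NumCPUs=1.
NodeName=node001 Sockets=2 CoresPerSocket=8 ThreadsPerCore=2 CPUs=16 RealMemory=64000 State=UNKNOWN

With such a configuration, as far as I understand, the reported CPU counts and accounting (TRES=cpu=...) refer to cores rather than hardware threads.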