On the Supek cluster, jobs are submitted to one of several queues, which differ in the resources they provide and in the maximum job execution time (example submission scripts are given below the table):

| Name | Description | Default values | Available resources | Limits | Priority | TMPDIR |
|------|-------------|----------------|---------------------|--------|----------|--------|
| cpu | Standard queue for CPU nodes | ncpus=2, select=1, mem=1800MB, walltime=48h | ncpus=6656 total; per node: ncpus=128, mem=241GB | max 7 days | 1 | /lustre/scratch |
| gpu | Standard queue for GPU nodes | ncpus=1, ngpus=1, select=1, mem=120GB, walltime=24h | ncpus=1280 and ngpus=80 total; per node: ncpus=64, ngpus=4, mem=493GB | max 7 days | 1 | /lustre/scratch |
| bigmem | Standard queue for nodes with large memory capacity | ncpus=1, select=1, mem=30GB, walltime=24h | ncpus=256 and mem=8038GB total; per node: ncpus=128, mem=4019GB | max 7 days | 1 | /scratch (1.7 TB) |
| cpu-test | Queue for testing on CPU nodes | ncpus=2, select=1, mem=1800MB, walltime=1h | ncpus=6656 total; per node: ncpus=128, mem=241GB | max 1 hour; max 256 cores and 480 GB RAM per user | 2 | /lustre/scratch |
| gpu-test | Queue for testing on GPU nodes | ncpus=1, ngpus=1, select=1, mem=125GB, walltime=1h | ncpus=256 total, ngpus=16 total | max 1 hour; max 4 GPUs and 493 GB RAM per user | 2 | /lustre/scratch |
| cpu-single | Queue for serial CPU jobs | ncpus=1, select=1, mem=1800MB, walltime=48h | ncpus=640 total | max 7 days | -1 | /lustre/scratch |
| login-cpu | Queue for testing on the CPU login node | ncpus=1, select=1, walltime=1h | ncpus=100 total | max 1 hour | 1 | /scratch (1.6 TB) |
| login-gpu | Queue for testing on the GPU login node | ncpus=1, select=1, ngpus=1, walltime=1h | ncpus=40 total, ngpus=1 total | max 1 hour | 1 | /scratch (3.4 TB) |
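As an illustration of how these queues are used, below is a minimal sketch of a batch script for the `cpu` queue, written with standard PBS Pro directives (`select`, `ncpus`, `mem`, and `walltime` are the same resources listed in the table). The job name, the exact resource values, and the `./my_app` executable are placeholders, not a prescribed setup.

```bash
#!/bin/bash
#PBS -N cpu-example                   # job name (placeholder)
#PBS -q cpu                           # standard CPU queue
#PBS -l select=1:ncpus=128:mem=200GB  # one full CPU node (128 cores, within the 241 GB per-node limit)
#PBS -l walltime=24:00:00             # must stay within the queue's 7-day limit

cd "$PBS_O_WORKDIR"                   # run from the directory the job was submitted from

# Placeholder application; replace with the real executable and its arguments.
./my_app
```

The script is submitted with `qsub`; any resource that is not requested explicitly falls back to the queue's default values from the table, and inside the job the `TMPDIR` variable points to the scratch area listed in the last column.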

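A GPU job on the `gpu` queue additionally requests `ngpus`. The sketch below assumes one GPU with a proportional share of the node's 64 cores and 493 GB of memory; the values and the `./my_gpu_app` executable are again placeholders.

```bash
#!/bin/bash
#PBS -N gpu-example                          # job name (placeholder)
#PBS -q gpu                                  # standard GPU queue
#PBS -l select=1:ncpus=16:ngpus=1:mem=120GB  # 1 of 4 GPUs and roughly a quarter of the node's cores/memory
#PBS -l walltime=12:00:00                    # within the queue's 7-day limit

cd "$PBS_O_WORKDIR"

# Placeholder GPU application; replace with the real executable and its arguments.
./my_gpu_app
```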

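For short tests, the `cpu-test` and `gpu-test` queues can also be used interactively with PBS Pro's `qsub -I` option; the resource values below are illustrative and must respect the per-user limits from the table.

```bash
# Interactive shell on gpu-test (max 1 hour, max 4 GPUs and 493 GB RAM per user)
qsub -I -q gpu-test -l select=1:ncpus=16:ngpus=1:mem=100GB -l walltime=01:00:00

# CPU-only variant on cpu-test (max 1 hour, max 256 cores and 480 GB RAM per user)
qsub -I -q cpu-test -l select=1:ncpus=32:mem=50GB -l walltime=00:30:00
```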
