On the Supek cluster, jobs are submitted to one of several queues, which differ in the resources they make available, their run-time limits, and their priority:

| Name | Description | Available resources | Limits | Priority | TMPDIR |
| --- | --- | --- | --- | --- | --- |
| cpu | Standard queue for CPU nodes | ncpus=6656 total | ncpus=128 per node, mem=256GB per node, max 7 days | 1 | /lustre/scratch |
| gpu | Standard queue for GPU nodes | ncpus=1280 total, ngpus=80 total | ncpus=64 per node, ngpus=4 per node, mem=512GB per node, max 7 days | 1 | /lustre/scratch |
| bigmem | Standard queue for nodes with large memory capacity | ncpus=256 total, mem=8192GB total | ncpus=128 per node, mem=4096GB per node, max 7 days | 1 | /scratch (1.7 TB) |
| cpu-test | Queue for testing on CPU nodes | ncpus=6656 total | ncpus=128 per node, mem=256GB per node, max 1 hour, max 256 cores and 500 GB RAM per user | 2 | /lustre/scratch |
| gpu-test | Queue for testing on GPU nodes | ncpus=256, ngpus=16 | max 1 hour, max 4 GPUs and 500 GB RAM per user | 2 | /lustre/scratch |
| login-cpu | Queue for testing on the CPU login node | ncpus=100 | max 1 hour | 1 | /scratch (1.6 TB) |
| login-gpu | Queue for testing on the GPU login node | ngpus=1, ncpus=40 | max 1 hour | 1 | /scratch (3.4 TB) |
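As a minimal sketch of how these queues are used, the batch script below requests resources on the standard `gpu` queue, staying within the per-node limits from the table. It assumes a PBS Pro scheduler (the `ncpus`/`ngpus` resource names above follow PBS conventions); the job name, the exact resource amounts, and the application binary `my_gpu_app` are placeholders, not values prescribed by the cluster:

```bash
#!/bin/bash
#PBS -q gpu                                   # standard GPU queue from the table above
#PBS -N gpu-example                           # placeholder job name
#PBS -l select=1:ncpus=32:ngpus=2:mem=128GB   # within the 64-CPU / 4-GPU / 512GB per-node limits
#PBS -l walltime=24:00:00                     # must not exceed the 7-day queue maximum

cd "$PBS_O_WORKDIR"                 # run from the directory the job was submitted from
echo "Job-local scratch: $TMPDIR"   # resolves to /lustre/scratch on the gpu queue

./my_gpu_app                        # placeholder for the actual application
```

The script would be submitted with `qsub job.sh`. For a short trial before a production run, the same request could instead target the higher-priority `gpu-test` queue, keeping within its 1-hour wall-time and 4-GPU / 500 GB RAM per-user limits.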


