On the Supek cluster, jobs are submitted to one of several queues, which differ in the resources they make available and the maximum job execution time:
| Name | Description | Default values | Available resources | Limits | Priority | TMPDIR |
|---|---|---|---|---|---|---|
| cpu | Standard queue for CPU nodes | ncpus=2, select=1, mem=1800MB, walltime=48h | ncpus=6656 total; per node: ncpus=128, mem=241GB | max 7 days | 1 | /lustre/scratch |
| gpu | Standard queue for GPU nodes | ncpus=1, ngpus=1, select=1, mem=120GB, walltime=24h | ncpus=1280 total, ngpus=80 total; per node: ncpus=64, ngpus=4, mem=493GB | max 7 days | 1 | /lustre/scratch |
| bigmem | Standard queue for nodes with large memory capacity | ncpus=1, select=1, mem=30GB, walltime=24h | ncpus=256 total, mem=8038GB total; per node: ncpus=128, mem=4019GB | max 7 days | 1 | /scratch (1.7TB) |
| cpu-test | Queue for testing on CPU nodes | ncpus=2, select=1, mem=1800MB, walltime=1h | ncpus=6656 total; per node: ncpus=128, mem=241GB | max 1 hour; max 256 cores and 480GB RAM per user | 2 | /lustre/scratch |
| gpu-test | Queue for testing on GPU nodes | ncpus=1, ngpus=1, select=1, mem=125GB, walltime=1h | ncpus=256, ngpus=16 | max 1 hour; max 4 GPUs and 493GB RAM per user | 2 | /lustre/scratch |
| cpu-single | Queue for serial CPU jobs | ncpus=1, select=1, mem=1800MB, walltime=48h | ncpus=640 | max 7 days | -1 | /lustre/scratch |
| login-cpu | Queue for testing on the CPU login node | ncpus=1, select=1, walltime=1h | ncpus=100 | max 1 hour | 1 | /scratch (1.6TB) |
| login-gpu | Queue for testing on the GPU login node | ncpus=1, select=1, ngpus=1, walltime=1h | ngpus=1 | max 1 hour | 1 | /scratch (3.4TB) |
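As an illustration, a batch job targets one of the queues above with the `-q` directive and requests resources within that queue's limits. A minimal sketch of a PBS script for the `cpu` queue follows; the specific resource values (4 cores, 8 GB, 2 hours) are example choices for this sketch, not queue defaults:

```shell
#!/bin/bash
#PBS -q cpu                       # queue name from the table above
#PBS -l select=1:ncpus=4:mem=8GB  # one chunk: 4 cores, 8 GB (within the 128-core / 241 GB per-node limits)
#PBS -l walltime=02:00:00         # must stay under the queue's 7-day limit

# Change to the directory the job was submitted from; on the cpu queue
# the scheduler points TMPDIR at /lustre/scratch.
cd "${PBS_O_WORKDIR:-$PWD}"
echo "Job running on $(hostname), TMPDIR=${TMPDIR:-unset}"
```

The script would then be submitted with `qsub job.sh`; omitting `select`, `ncpus`, or `mem` makes the scheduler fall back to the queue's default values listed in the table.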