...
Code Block (bash): my_job.pbs
#!/bin/bash
#PBS -<parameter1> <value>
#PBS -<parameter2> <value>
<command>
Basic PBS parameters
Option | Argument | Meaning |
-N | name | Job name |
-q | destination | Destination queue or node for the job |
-l | list_of_resources | Resources required for the job |
-M | list_of_users | List of users (e-mail addresses) that receive notifications |
-m | email_options | Types of e-mail notifications |
-o | path/to/directory | Directory for the standard output file |
-e | path/to/directory | Directory for the standard error file |
-j | oe | Join the output and error files into a single file |
-W | group_list=project_code | Project code for the job |
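For illustration, a job header combining several of these options might look like the sketch below; the queue name, directory path, job name, and project code are placeholders, not actual values on the system.
Code Block (bash): header_example.pbs (illustrative sketch)
#!/bin/bash
#PBS -N test_job                        # job name (placeholder)
#PBS -q cpu                             # destination queue (placeholder)
#PBS -l select=1:ncpus=4                # requested resources
#PBS -o /path/to/output_dir             # directory for the output file (placeholder)
#PBS -j oe                              # join output and error into one file, written to the -o directory
#PBS -W group_list=project_code         # project code (placeholder)
<command>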
Options for e-mail notifications with the -m option:
a | Mail is sent when the batch system aborts the job |
b | Mail is sent when the job begins execution |
e | Mail is sent when the job ends |
j | Mail is sent for subjobs; must be combined with one or more of the a, b, or e sub-options |
Code Block (bash): Example mail
#!/bin/bash
#PBS -q cpu
#PBS -l select=1:ncpus=2
#PBS -M <name>@srce.hr,<name2>@srce.hr
#PBS -m be
echo $PBS_JOBNAME > out
echo $PBS_O_HOST
...
Options for resources with the -l option:
-l select=3:ncpus=2 | Requests 3 chunks of a node with 2 cores each (6 cores in total) |
-l select=1:ncpus=10:mem=20GB | Requests 1 chunk of a node with 10 cores and 20 GB of RAM |
-l ngpus=2 | Requests 2 GPUs |
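As an illustration, a job requesting GPUs together with CPU cores and memory might look like the sketch below; the gpu queue name and the application command are assumptions, not taken from the documentation above.
Code Block (bash): gpu_example.pbs (illustrative sketch)
#!/bin/bash
#PBS -q gpu                                  # assumed GPU queue name
#PBS -l select=1:ncpus=4:ngpus=2:mem=20GB    # 1 chunk: 4 cores, 2 GPUs, 20 GB RAM
<gpu_command>                                # placeholder for the GPU application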
PBS environment variables
NCPUS | Number of requested cores. Matches the value of the ncpus option in the PBS script header. |
OMP_NUM_THREADS | OpenMP variable exported to the environment by PBS, set to the value of the ncpus option in the PBS script header. |
PBS_JOBID | Job identifier assigned by PBS when the job is submitted, i.e. after the qsub command is executed. |
PBS_JOBNAME | Name of the job, as given with the -N option (by default the name of the PBS script). |
PBS_NODEFILE | File containing the list of compute nodes (processor cores) on which the job runs. |
PBS_O_WORKDIR | Working directory from which the job was submitted, i.e. in which the qsub command was invoked. |
TMPDIR | Path to the scratch directory. |
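A minimal sketch that prints some of these variables at run time (the cpu queue name is reused from the examples above):
Code Block (bash): env_example.pbs (illustrative sketch)
#!/bin/bash
#PBS -q cpu
#PBS -l select=1:ncpus=2
echo "Job ID:      ${PBS_JOBID}"
echo "Job name:    ${PBS_JOBNAME}"
echo "Cores:       ${NCPUS}"
echo "Submit dir:  ${PBS_O_WORKDIR}"
echo "Scratch dir: ${TMPDIR}"
cat ${PBS_NODEFILE}          # list of nodes/cores allocated to the job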
Tip: Setting up the working directory
In PBS Pro the output and error files are written to the directory from which the job was submitted, but the program's own input and output files are by default read from and written to the $HOME directory. PBS Pro has no option to start the job in the current directory, so the directory has to be changed manually. After the header, add:
cd $PBS_O_WORKDIR
This redirects job execution to the directory from which the script was submitted.
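A minimal sketch of this pattern, with placeholders for the program and its input file:
Code Block (bash): workdir_example.pbs (illustrative sketch)
#!/bin/bash
#PBS -q cpu
#PBS -l ncpus=1
cd $PBS_O_WORKDIR            # switch to the directory the job was submitted from
<program> <input_file>       # input and output now resolve relative to that directory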
Parallel jobs
OpenMP parallelization
If your application is parallelized exclusively at the level of OpenMP threads and cannot scale beyond a single compute node (that is, it uses shared memory), you can submit the job as shown in the xTB application example below.
Tip: OpenMP applications require the OMP_NUM_THREADS variable to be defined. The PBS system takes care of this for you and assigns it the value of the ncpus option defined in the PBS script header.
Code Block (bash)
#!/bin/bash
#PBS -q cpu
#PBS -l ncpus=8
cd ${PBS_O_WORKDIR}
xtb C2H4BrCl.xyz --chrg 0 --uhf 0 --opt vtight
MPI parallelization
If your application supports hybrid parallelization, i.e. it can split its MPI processes into OpenMP threads, you can submit the job as shown in the GROMACS application example below:
Tip: OpenMP applications require the OMP_NUM_THREADS variable to be defined; the PBS system automatically assigns it the value of the ncpus option defined in the PBS script header. The value of the select option in the PBS script header corresponds to the number of MPI processes, but PBS does not export a corresponding variable to the environment. To avoid hard-coding the value, the example below defines the MPI_NUM_PROCESSES variable, which corresponds to the value of the select option.
Code Block (bash)
#!/bin/bash
#PBS -q cpu
#PBS -l select=8:ncpus=4
#PBS -l place=scatter
MPI_NUM_PROCESSES=$(cat ${PBS_NODEFILE} | wc -l)
cd ${PBS_O_WORKDIR}
mpiexec -n ${MPI_NUM_PROCESSES} --ppn 1 -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -v -deffnm md
cray-pals
Running applications that use MPI parallelization (or hybrid MPI+OpenMP) requires the cray-pals module to be loaded before calling the mpiexec command; this ensures proper integration of the application with the PBS Pro job submission system and with Cray's mpiexec, which is based on the MPICH MPI implementation.
An example of loading this module and running a parallel application on two processor cores:
Code Block (bash)
#!/bin/bash
#PBS -l ncpus=2
module load cray-pals
mpiexec -np 2 moja_aplikacija_MPI
The mpiexec command sets the following environment variables on each MPI rank:
Environment variable | Description |
PALS_RANKID | Global rank of the MPI process |
PALS_NODEID | Index of the local node (when the job runs on multiple nodes) |
PALS_SPOOL_DIR | Temporary directory |
PALS_LOCAL_RANKID | Local rank of the MPI process on its node (when the job runs on multiple nodes) |
PALS_APID | Unique identifier of the executed application |
PALS_DEPTH | Number of processor cores per rank |
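As a quick check, the sketch below prints a few of these variables from every rank; the two-rank layout and the inline bash command are only an illustration, not an official example.
Code Block (bash): pals_example.pbs (illustrative sketch)
#!/bin/bash
#PBS -l select=2:ncpus=1
module load cray-pals
cd ${PBS_O_WORKDIR}
# every rank reports its global rank, local rank and node index
mpiexec -np 2 bash -c 'echo "rank ${PALS_RANKID} (local rank ${PALS_LOCAL_RANKID}) on node ${PALS_NODEID}"'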
Note: Scientific applications on Supek and cray-pals
Scientific applications available on Supek via the modulefiles tool already load this module, so it does not need to be loaded again.