...

Code Block
languagebash
titlemy_job.pbs
#!/bin/bash
 
#PBS -<parameter1> <value>
#PBS -<parameter2> <value>
 
<command>
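
The finished script is submitted to the queueing system with the qsub command. A minimal illustration, assuming the script above is saved as my_job.pbs (the file name is only a placeholder):

Code Block
languagebash
titleSubmitting the job
# Submit the job script; PBS replies with the job identifier
qsub my_job.pbs
 
# List the current status of your own jobs
qstat -u $USER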

Basic PBS parameters

Option | Argument | Meaning
-N | name | Naming the job
-q | destination | Specifying the job queue or node
-l | list_of_resources | Amount of resources required for the job
-M | list_of_users | List of users to receive e-mail
-m | email_options | Types of mail notifications
-o | path/to/directory | Path to the directory for the output file
-e | path/to/directory | Path to the directory for the error file
-j | oe | Combining the output and error files
-W | group_list=<project_code> | Project code for the job
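
For illustration, a job header that combines several of the options above might look as follows (a sketch only; the job name and the output directory are placeholders, while the cpu queue is the one used in the examples below):

Code Block
languagebash
titleExample header
#!/bin/bash
 
# Name the job and choose the queue
#PBS -N test_job
#PBS -q cpu
 
# Request one chunk with 4 cores, merge output and error into a single file
# and place it in the (pre-existing) directory out/
#PBS -l select=1:ncpus=4
#PBS -j oe
#PBS -o out/
 
<command>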


Options for sending notifications by mail with the -m option:

a | Mail is sent when the batch system terminates the job
b | Mail is sent when the job starts executing
e | Mail is sent when the job is done
j | Mail is sent for subjobs. Must be combined with one or more of the sub-options a, b or e


Code Block
languagebash
titleExample mail
#!/bin/bash
 
#PBS -q cpu
#PBS -l select=1:ncpus=2
#PBS -M <name>@srce.hr,<name2>@srce.hr
#PBS -m be
 
echo $PBS_JOBNAME > out
echo $PBS_O_HOST

...

Options for resources with the -l option:

-l select=3:ncpus=2 | Requests 3 chunks of a node with 2 cores each (6 cores in total)
-l select=1:ncpus=10:mem=20GB | Requests 1 chunk of a node with 10 cores and 20 GB of RAM
-l ngpus=2 | Requests 2 GPUs
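
As an illustration of combining these resource options, a hedged sketch of a header requesting cores, memory and GPUs (the gpu queue name is an assumption here; check which queues exist on the system you are using):

Code Block
languagebash
titleExample GPU request
#!/bin/bash
 
# One chunk with 4 cores and 10 GB of RAM, plus 2 GPUs
#PBS -q gpu
#PBS -l select=1:ncpus=4:mem=10GB
#PBS -l ngpus=2
 
<command>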


PBS environment variables

NCPUS | Number of cores requested. Matches the value of the ncpus option from the PBS script header.
OMP_NUM_THREADS | An OpenMP variable exported by PBS to the environment, equal to the value of the ncpus option from the PBS script header.
PBS_JOBID | Job identifier assigned by PBS when the job is submitted. Created after executing the qsub command.
PBS_JOBNAME | Name of the job, as given with the -N option (by default the name of the submitted script).
PBS_NODEFILE | Path to the file listing the nodes (one entry per allocated chunk) on which the job executes.
PBS_O_WORKDIR | The working directory from which the job was submitted, i.e. in which the qsub command was invoked.
TMPDIR | The path to the scratch directory.
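
These variables can be inspected directly from a job. A minimal sketch that only prints them to the output file, useful for checking what the job environment looks like:

Code Block
languagebash
titleExample environment
#!/bin/bash
 
#PBS -q cpu
#PBS -l select=1:ncpus=2
 
# Print the most important PBS variables
echo "Job ID:        ${PBS_JOBID}"
echo "Job name:      ${PBS_JOBNAME}"
echo "Cores (NCPUS): ${NCPUS}"
echo "Submit dir:    ${PBS_O_WORKDIR}"
echo "Scratch dir:   ${TMPDIR}"
 
# List of nodes assigned to the job
cat ${PBS_NODEFILE}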


Tip
titleSetting up working directory

In PBS Pro, the output and error files are by default placed in the directory from which the job was submitted, but the job itself starts in the $HOME directory, so the program's own input and output files are by default read from and written to $HOME. PBS Pro has no option for telling the job to run in the current directory, so the directory has to be changed manually.

After the header it is necessary to write:

cd $PBS_O_WORKDIR

This redirects the job execution to the directory from which the script was submitted.


Parallel jobs

OpenMP parallelization

If your application uses parallelization exclusively at the level of OpenMP threads and cannot scale beyond a single compute node (that is, it works with shared memory), you can submit the job as shown in the xTB application example below.

Tip

OpenMP applications require the OMP_NUM_THREADS variable to be defined.

The PBS system takes care of this for you and assigns it the value of the ncpus option defined in the header of the PBS script.

Code Block
languagebash
#!/bin/bash
 
#PBS -q cpu
#PBS -l ncpus=8
 
cd ${PBS_O_WORKDIR}
 
xtb C2H4BrCl.xyz --chrg 0 --uhf 0 --opt vtight

MPI parallelization

If your application supports hybrid parallelization, i.e. it can split its MPI processes into OpenMP threads, you can submit the job as shown in the GROMACS application example below:

Tip

OpenMP applications require the OMP_NUM_THREADS variable to be defined. The PBS system automatically assigns it the value of the ncpus option defined in the header of the PBS script.

The value of the select option from the PBS script header corresponds to the number of MPI processes; however, it has no corresponding variable that the PBS system exports to the environment. To avoid copying the value by hand, the example below defines the variable MPI_NUM_PROCESSES, which corresponds to the value of select.

Code Block
languagebash
#!/bin/bash
 
#PBS -q cpu
#PBS -l select=8:ncpus=4
#PBS -l place=scatter
 
MPI_NUM_PROCESSES=$(cat ${PBS_NODEFILE} | wc -l)
 
cd ${PBS_O_WORKDIR}
 
mpiexec -n ${MPI_NUM_PROCESSES} --ppn 1 -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -v -deffnm md


cray-pals
Running applications that use MPI parallelization (or hybrid MPI+OpenMP) requires the cray-pals module to be loaded before calling the mpiexec command. This ensures proper integration of the application with the PBS Pro job submission system and with Cray's version of the mpiexec command, which is based on the MPICH implementation.

An example of loading this module and executing a parallel application on two processor cores:

Code Block
languagebash
#!/bin/bash
#PBS -l ncpus=2
module load cray-pals
mpiexec -np 2 moja_aplikacija_MPI

The environment variables that the mpiexec command will set on each of the MPI ranks are:

Environment variable | Description
PALS_RANKID | Global rank of the MPI process
PALS_NODEID | Index of the local node (if the job runs on several nodes)
PALS_SPOOL_DIR | Temporary directory
PALS_LOCAL_RANKID | Local rank of the MPI process on its node (if the job runs on several nodes)
PALS_APID | Unique identifier of the application that was executed
PALS_DEPTH | Number of processor cores per rank
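
As an illustration (a sketch, not taken from the official documentation), these variables can be inspected by running a simple shell command under mpiexec, so that every rank reports its own values:

Code Block
languagebash
#!/bin/bash
#PBS -l ncpus=2
 
module load cray-pals
 
# Every rank prints its global rank, local rank and node index
mpiexec -np 2 bash -c 'echo "rank ${PALS_RANKID} (local ${PALS_LOCAL_RANKID}) on node ${PALS_NODEID}"'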


Note

Scientific applications on Supek and cray-pals

Scientific applications that are available on Supek via the modulefiles tool already load this module, so it is not necessary to load it again.