...
```bash
#!/bin/bash

#PBS -P test_example
#PBS -e /home/my_directory
#PBS -q cpu
#PBS -l walltime=00:01:00
#PBS -l select=1:ncpus=10

module load mpi/openmpi-x86_64

mpicc --version
```
Basic PBS parameters
Option | Option argument | Meaning of the option |
---|---|---|
-N | name | Sets the job name |
-q | destination | Specifies the job queue and/or server |
-l | resource_list | Specifies the resources required to run the job |
-M | user_list | Sets the list of mail recipients |
-m | mail_options | Sets the email notification type |
-o | path/to/desired/directory | Sets the name/path where standard output is saved |
-e | path/to/desired/directory | Sets the name/path where standard error is saved |
-j | oe | Joins standard output and standard error in the same file |
-W group_list | project_code | Selects the project under which the job will run |
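The options above can be combined in a single job script header. A minimal sketch, in which the job name and the output/error paths are placeholders rather than values taken from this documentation:

```bash
#!/bin/bash

#PBS -N my_job
#PBS -q cpu
#PBS -l select=1:ncpus=4
#PBS -l walltime=01:00:00
#PBS -o /home/my_directory/out.log
#PBS -e /home/my_directory/err.log
#PBS -j oe

# To the shell the #PBS lines are plain comments; PBS reads them as directives.
echo "job ${PBS_JOBNAME:-my_job} running"
```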
Suboptions of the -m option for sending mail notifications:

Suboption | Meaning |
---|---|
a | Mail is sent when the batch system terminates the job |
b | Mail is sent when the job starts executing |
e | Mail is sent when the job finishes |
j | Mail is sent for subjobs; must be combined with one or more of the suboptions a, b or e |
```bash
#!/bin/bash

#PBS -q cpu
#PBS -l walltime=00:01:00
#PBS -l select=1:ncpus=2
#PBS -M <name>@srce.hr,<name2>@srce.hr
#PBS -m be

echo $PBS_JOBNAME > out
echo $PBS_O_HOST
```
...
Options for requesting resources with the -l option
Option | Meaning |
---|---|
-l select=3:ncpus=2 | Requests 3 chunks with 2 cores each (6 cores in total) |
-l select=1:ncpus=10:mem=20GB | Requests 1 chunk with 10 cores and 20 GB of working memory |
-l ngpus=2 | Requests 2 GPUs |
-l walltime=00:10:00 | Maximum job execution time |
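As a sanity check when sizing a request: the total number of cores is the product of the chunk count (select) and the cores per chunk (ncpus). A minimal sketch of that arithmetic, using the first row of the table above:

```bash
# -l select=3:ncpus=2 requests 3 chunks of 2 cores each;
# the total core count is their product.
select_chunks=3
ncpus_per_chunk=2
total_cores=$((select_chunks * ncpus_per_chunk))
echo "total cores requested: $total_cores"   # prints: total cores requested: 6
```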
PBS environment variables
Name | Description |
---|---|
PBS_JOBID | Job identifier provided by PBS when a job is submitted. Created after executing the qsub command |
PBS_JOBNAME | The name of the job provided by the user. The default name is the name of the submitted script |
PBS_NODEFILE | Path to a file containing the list of worker nodes, i.e. processor cores, on which the job is executed |
PBS_O_WORKDIR | The working directory in which the job was submitted, i.e. in which qsub command was called |
OMP_NUM_THREADS | An OpenMP variable that PBS exports to the environment, which is equal to the value of the ncpus option from the PBS script header |
NCPUS | Number of cores requested. Matches the value from the ncpus option from the PBS script header |
TMPDIR | Path to temporary directory |
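These variables only exist inside a running job. A sketch that prints a few of them; the `:-` fallbacks are an addition for illustration so the snippet also runs outside the batch system:

```bash
#!/bin/bash
# Inside a job these are set by PBS; the fallbacks apply only outside it.
jobid="${PBS_JOBID:-not-in-a-job}"
jobname="${PBS_JOBNAME:-interactive}"
workdir="${PBS_O_WORKDIR:-$PWD}"
cores="${NCPUS:-1}"

echo "job id:   $jobid"
echo "job name: $jobname"
echo "workdir:  $workdir"
echo "cores:    $cores"
```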
Tip |
---|
In PBS, the output and error files are saved by default in the directory from which the job was submitted, but the input and output files of the program itself are loaded from and saved to the $HOME directory by default. PBS has no option to run a job in the current working directory, so the directory must be changed manually. If you want to switch to the directory from which the script was submitted, add the following after the header: cd $PBS_O_WORKDIR. Jobs with a high storage load (I/O-intensive) should not be run from $PBS_O_WORKDIR but from the $TMPDIR location, which uses fast storage. Read more about using fast storage and temporary results below. |
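The tip above as a complete minimal script; the fallback to $PWD is only so the sketch also runs outside PBS:

```bash
#!/bin/bash

#PBS -q cpu
#PBS -l walltime=00:01:00

# Switch from the default directory to the one the job was submitted from.
cd "${PBS_O_WORKDIR:-$PWD}"
pwd
```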
...
If some of the parameters are not defined, the default value will be used:
Parameter | Default value |
---|---|
select | 1 |
ncpus | 1 |
mpiprocs | 1 |
mem | 3500 MB |
walltime | 48:00:00 |
place | pack |
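In other words, a script whose header requests no resources behaves as if it had requested the following. A sketch of the implied defaults, combining the rows of the table above:

```bash
#PBS -l select=1:ncpus=1:mpiprocs=1:mem=3500MB
#PBS -l walltime=48:00:00
#PBS -l place=pack
```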
Memory control using cgroups
...
Warning |
---|
The temporary directory is automatically deleted when the job is done! |
Usage examples
- Example of simple use of the $TMPDIR variable:
```bash
#!/bin/bash

#PBS -q cpu
#PBS -l walltime=00:00:05

cd $TMPDIR
pwd > test
cp test $PBS_O_WORKDIR
```
- An example of copying the input data to $TMPDIR, running the application, and copying the results back to the working directory:
```bash
#!/bin/bash

#PBS -q cpu
#PBS -l walltime=00:00:05

# Create the directory for input data in the temporary directory
mkdir -p $TMPDIR/data

# Copy all required inputs to the temporary directory
cp -r $HOME/data/* $TMPDIR/data

# Run the application and redirect the outputs to the "current" (temporary) directory
cd $TMPDIR
<application executable command> 1>output.log 2>error.log

# Copy the desired output to the working directory
cp -r $TMPDIR/output $PBS_O_WORKDIR
```
...
Parallel jobs
OpenMP
...
parallelization
If your application uses parallelization exclusively at the level of OpenMP threads and cannot scale beyond a single worker node (that is, it works with shared memory), you can submit the job as shown in the xTB application example below.
Tip |
---|
OpenMP applications require the OMP_NUM_THREADS variable to be defined. The PBS system takes care of this for you and sets it to the value of the ncpus option defined in the header of the PBS script. If you define jobs using ncpus without the select option, it is advisable to also define the amount of memory, because otherwise the available working memory will be 3500 MB (select x mem → 1 x 3500 MB). |
...
```bash
#!/bin/bash

#PBS -q cpu
#PBS -l walltime=10:00:00
#PBS -l ncpus=8,mem=28GB

cd ${PBS_O_WORKDIR}

xtb C2H4BrCl.xyz --chrg 0 --uhf 0 --opt vtight
```
...
MPI parallelization
If your application uses parallelization exclusively at the level of MPI processes and can scale beyond a single worker node (that is, it works with distributed memory), you can submit the job as shown in the Quantum ESPRESSO application example below. To run applications that use MPI (or hybrid MPI+OMP) parallelization, the mpi module must be loaded before calling the mpiexec or mpirun command.
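Regardless of the specific application, such MPI job scripts share a common shape. A sketch: the module name follows the earlier example on this page, and `<application command>` is a placeholder for the actual executable:

```bash
#!/bin/bash

#PBS -q cpu
#PBS -l select=2:ncpus=4:mpiprocs=4
#PBS -l walltime=01:00:00

cd ${PBS_O_WORKDIR}

# Load an MPI module before calling mpiexec/mpirun
module load mpi/openmpi-x86_64

# Launch the MPI ranks requested via mpiprocs
mpiexec <application command>
```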
...