
Intro

The Isabella computer cluster uses the Sun Grid Engine (SGE) job management system for deploying and managing jobs. This document describes the use of SGE version 9.


Running jobs

User applications (hereafter: jobs) run under the SGE system must be described with a startup shell script. In the startup script, alongside the usual commands, SGE parameters are stated. The same parameters can also be specified outside the startup script, at job submission time.

A job is submitted with the qsub command:

qsub <SGE_parameters> <name_of_starting_script>
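
For example, parameters can be given on the command line instead of in the script (the job and script names are illustrative):

qsub -N my_job -cwd my_job.sge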

SGE also has a graphical user interface (GUI) that provides access to all of the system's functionality. The GUI is started with the qmon command. Use of the GUI is not described here, as it contains its own instruction manual (Help button).

Describing jobs

The SGE system language is used to describe jobs, and the job description file (startup script) is a standard shell script. The header of each script lists the SGE parameters that describe the job in detail, followed by the normal commands to execute the user application.

Startup script structure:

my_job.sge
#!/bin/bash

#$ -<parameter1> <value1>
#$ -<parameter2> <value2>

<command1>
<command2>


The job described by this start script is submitted with the command:

qsub my_job.sge

The qsub command returns a job ID that is used to monitor the job's status later:

Your job <JobID> ("my_job") has been submitted
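
The returned identifier can later be used to check on the job, for example with the qstat command (12345 is an illustrative job ID):

qstat -j 12345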

Basic SGE parameters

-N <job_name> : the job name that will be displayed when retrieving job information
-P <project_name> : the name of the project to which the job belongs
-cwd : defines that the directory from which the job was submitted is the working directory of the job
  • The default working directory of the job is $HOME.

-o <file_name> : the name of the file where the standard output is saved
-e <file_name> : the name of the file where the standard error is saved
-j y|n : allows merging standard output and standard error into the same file (default value is n)
  • If standard output and error are not explicitly specified, they will be saved to the files:

    1. If the job name is not defined:
      <working_directory>/<script_name>.o<job_id>
      <working_directory>/<script_name>.e<job_id>
    2. If the job name is defined:
      <working_directory>/<job_name>.o<job_id>
      <working_directory>/<job_name>.e<job_id>


    The -o and -e parameters can point to a directory:

    #$ -o outputDir/
    #$ -e outputDir/

    In this case, SGE will create the standard output and standard error files in the outputDir directory, named <job_name>.o<JobID> and <job_name>.e<JobID>.

    Important: outputDir must be created manually before submitting the job.
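
    For example (a minimal sketch, reusing the startup script my_job.sge from above):

    mkdir -p outputDir
    qsub my_job.sge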

-M <emailAddress>[,<emailAddress>...] : list of email addresses to which job notifications are sent

-m [a][b][e][n] : defines in which cases mail notifications are sent:
    b - start of job execution,
    a - job execution error,
    e - completion of job execution,
    n - do not send notifications (default option)
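
For example, to receive a notification at the start and at the end of a job (the address is illustrative):

#$ -M user@example.com
#$ -m be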

-now y|n : the value y defines that the job must start executing immediately. For interactive jobs this is the default value.
		   If SGE cannot find free resources, the job is not queued but ends immediately.

-r y|n : whether the job should be restarted in case of a runtime error (default value is n)

-R y|n : the value y defines that SGE will reserve nodes when scheduling the job (important for multiprocessor jobs)

-l <resource>=<value>[,<resource>=<value>...] : defines the resources that the job requires. See Resources for details.

-pe <parallel_environment> <range> : the parameter is used for parallel jobs.
     The first value defines the parallel environment that runs the requested form of parallel job.
     The second value defines the number of processors the parallel job requires, as a specific count or a range in the form <N>[,<Ni>,...][,<S>-<E>[,<Si>-<Ei>,...]]. For more details see Parallel jobs.
    
-q <queue_name>[,<queue_name>...] : the job queue in which the job should run. This option can also be used to request a specific node, for example by requesting a node-local job queue (e.g. a12.q@sl250s-gen8-08-01.isabella).

-t <start>:<end>:<step> : the parameter defines a job array. For details, see Job arrays.

-v <variable>[=<value>][,<variable>[=<value>]...] : the option defines an environment variable that SGE sets when executing the job. This parameter is useful when the application uses special environment variables, because SGE does not set them by default when starting the job.

-V : SGE passes all current environment variables to the job.
  • Note: spaces are not allowed when listing parameter values (e.g. for -l or -q).

  • A detailed list of the parameters and further information can be obtained with the command man qsub.
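
For example, an environment variable can be set at submission time, and multiple resources can be requested as one comma-separated list without spaces (the variable name and values are illustrative):

qsub -v INPUT_FILE=input.txt -l vmem=4G,scratch=10 my_job.sge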

SGE environment variables

Within the startup script it is possible to use SGE variables. Some of them are:

$TMPDIR : the name of the directory where temporary files can be saved (/scratch)
$JOB_ID : SGE job identifier
$SGE_TASK_ID : task identifier of the job queue
$SGE_O_HOST : address of the computer from which the job was started
$SGE_O_PATH : the original value of the PATH environment variable when starting the job
$SGE_O_WORKDIR : the directory from which the job was started
$SGE_STDOUT_PATH : file where standard output is saved
$SGE_STDERR_PATH : file where standard error is saved
$HOSTNAME : the address of the computer on which the script is executed
$JOB_NAME : job name
$PE_HOSTFILE : the name of the file in which the addresses of the computers allocated to the parallel job are listed
$QUEUE : the name of the queue in which the job is executed 
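
A short startup script that records some of these variables in the job's standard output (a minimal sketch; all variables used are listed above):

#!/bin/bash

#$ -N env_info
#$ -cwd

echo "Job $JOB_ID ($JOB_NAME) is running on $HOSTNAME in queue $QUEUE"
echo "Submitted from $SGE_O_HOST, directory $SGE_O_WORKDIR"
echo "Temporary directory: $TMPDIR"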


Types of jobs

Serial jobs

The simplest form of SGE job is the batch job that requires only one processor core to run. For such jobs it is usually not necessary to specify any special parameters; only the program to be run is specified.

Examples of use:

  1.  An example script without additional parameters:

    #!/bin/bash 
    
    date
  2.  Example of a simple script with parameters:

    #!/bin/bash 
    
    #$ -N Date_SGE_script 
    #$ -o Date_SGE.out 
    #$ -e Date_SGE.err 
    
    date
  3. Example of running a program from the current directory:

    my_program.sge
    #!/bin/bash
    
    #$ -N myprog
    #$ -P local
    #$ -o myprog.out
    #$ -e myprog.err
    #$ -cwd
    
    myprog

Parallel jobs

To start parallel jobs, it is necessary to specify the desired parallel environment and the number of processor cores required to perform the job.

The syntax is:

#$ -pe <type_of_parallel_job> <N>[,<Ni>,...][,<S>-<E>[,<Si>-<Ei>,...]]

Examples of use:

  1.  The job requires 14 processor cores to run:

    #$ -pe *mpi 14
  2.  The number of allocated processor cores can be between 2 and 4:

    #$ -pe *mpi 2-4
  3. The number of allocated processor cores can be 5 or 10:

    #$ -pe *mpi 5,10
  4. The number of allocated processor cores can be 1 or between 2 and 4:

    #$ -pe *mpi 1,2-4

More information about the parallel environments available on Isabella can be found on the page Redovi poslova i paralelne okoline (Job queues and parallel environments).

Since the user does not necessarily know in advance how many processor cores will be allocated, SGE sets the variable $NSLOTS to the number of allocated cores.

Running parallel jobs is specific in that the tools that launch the sub-jobs (e.g. mpirun) schedule the sub-jobs onto the nodes themselves. When SGE assigns nodes to a parallel job, it saves the list of nodes in the file $TMPDIR/machines, which is passed as a parameter to the parallel launcher (e.g. mpirun, mpiexec, pvm...) inside the job description script.


An example of a start script for starting one type of parallel job:

#!/bin/bash

#$ -N parallel-job
#$ -cwd
#$ -pe *mpi 14

mpirun_rsh -np $NSLOTS -hostfile $TMPDIR/machines ...


Job arrays

SGE enables running the same job multiple times through so-called job arrays. Sub-jobs within an array are called tasks, and each task gets its own identifier. When submitting a job, the user specifies a range of task identifier values using the -t parameter:

#$ -t <start>:<end>:<step>


The value <start> is the identifier of the first task, <end> is the identifier of the last task, and <step> is the value by which each subsequent identifier between <start> and <end> is incremented. SGE stores the identifier of each task in the variable $SGE_TASK_ID, which users can use to assign different parameters to each task. Tasks can be serial or parallel jobs.

Examples of use

  1. Example script to run a job array of 10 batch jobs:

    #!/bin/bash
    
    #$ -N job_array_serial
    #$ -cwd
    #$ -o output/
    #$ -j y
    #$ -t 1-10
    
    ./starSeeker starCluster.$SGE_TASK_ID
  2. Example script to run a job array of 10 parallel jobs:

    #!/bin/bash
    
    #$ -N job_array_parallel
    #$ -cwd
    #$ -o output/
    #$ -j y
    #$ -t 1-10
    
    mpiexec -machinefile $TMPDIR/machines ./starSeeker starCluster.$SGE_TASK_ID

Interactive jobs

SGE enables the launch of interactive jobs. The qrsh command is used to run jobs interactively.

This form of job is recommended when applications need to be compiled or debugged on the nodes.

Unlike logging in with ssh, using qrsh lets SGE know that the nodes are busy, so it does not run other jobs on them. When executing a command interactively, the full path to the command must be specified. If SGE currently has no free resources and the job should be left waiting in the queue, the "-now n" parameter must be specified. Otherwise, SGE immediately ends the job execution with the message:

Your "qrsh" request could not be scheduled, try again later.

Examples of use:

  1. Direct access to the command line of the test node:

    qrsh
  2. Interactive command execution:

    qrsh /home/user/my_script
  3. Interactive application execution with graphical interface:

    qrsh -v DISPLAY=10.1.1.1:0.0 <my_script>

Advanced job descriptions

Saving temporary results

It is not recommended to use the $HOME directory for saving temporary results generated during job execution: doing so reduces the efficiency of the application and burdens the front end and the cluster network.

SGE creates a directory on the local disk of the worker nodes (/scratch) for each individual job, of the form /scratch/<jobID>.<taskID>.<queue>. SGE saves the path of this directory in the variable $TMPDIR.

For higher execution speed, using a temporary directory on the /scratch disk is also recommended for jobs that often require random access to data on disk, such as TensorFlow and PyTorch jobs.

If there are indications that the created temporary files will exceed 500 GB, the temporary data should be saved to the /shared disk (described below).

The temporary directory on the /scratch disk is deleted automatically at the end of the job execution.


Examples of use:

  1. An example of simple use of the $TMPDIR variable:

    #!/bin/bash 
    
    #$ -N scratch_1 
    #$ -cwd 
    #$ -o output/scratch.out 
    #$ -j y
    #$ -l scratch=50 
    
    cd $TMPDIR 
    pwd > test 
    cp test $SGE_O_WORKDIR
  2. An example of copying data to a scratch disk:

    #!/bin/bash 
    
    #$ -N scratch_2 
    #$ -cwd 
    #$ -o output/scratch.out 
    #$ -j y
    #$ -l scratch=50 
    
    mkdir -p $TMPDIR/data 
    cp -r $HOME/data/* $TMPDIR/data 
    
    python3.5 main.py $TMPDIR/data

If the temporary data exceeds 500 GB, the /shared disk must be used. Unlike /scratch, the directory on /shared must be created manually, and it is not removed automatically (see the cleanup sketch after the example below).

Example of use:

#!/bin/bash 

#$ -N shared
#$ -cwd 
#$ -o output/shared.out 
#$ -j y 

mkdir -p /shared/$USER/$TMPDIR
cd /shared/$USER/$TMPDIR

pwd > test
cp test $SGE_O_WORKDIR
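
Since directories on /shared are not removed automatically, a cleanup step can be added at the end of the script (a sketch following the example above; adjust the path to your own workflow):

cd $SGE_O_WORKDIR
rm -rf /shared/$USER/$TMPDIR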


Resources

When starting jobs, the user can describe in more detail which conditions must be met for the job. For example, it is possible to require only a certain architecture of the worker node, the amount of memory or the execution time. Specifying required resources allows for better job scheduling and gives jobs a higher priority (more on the Priorities page on Isabella).

The required resources are specified using the -l parameter:

#$ -l <resource>=<value>

Resources which define job requirements:

arch : node architecture (eg. lx26-x86, lx25-amd64)
hostname : node address (eg. c4140-01.isabella)

Resources that place real limits on jobs:

vmem : amount of virtual memory (format: <num>K|M|G)
rss : amount of real memory
stack : stack size
data : total amount of memory (without stack)
fsize : total file size
cput : processor time (format: [<hours>:<min>:]<sec>)
rt : real time
scratch : space on the scratch disk expressed in GB

The values of these resources should be defined carefully (e.g. specify values about 50% higher than expected). If a limit is exceeded, the job is stopped with the "segmentation fault" signal.

The values cannot be changed for active jobs.

The resource values defined in the job startup script are set per process. For example, if a user requests 3 processor cores on one node, the values of all requested resources will be multiplied by 3.

Example of use:

  1. Example of a job that requires 20 CPU cores and 10 GB of RAM per process (the job will be allocated a total of 200 GB of RAM):

    #$ -pe *mpi 20
    #$ -l memory=10
  2. The job requires 100 GB of scratch space in total (4 processes × 25 GB each):

    #$ -pe *mpisingle 4
    #$ -l scratch=25

