...
...
Examples
Tip: When you define the value of the variable in the PBS script header
CPU
MPI + OpenMP
Since the application supports hybrid parallelization, you can split MPI processes into OpenMP threads.
Tip: GROMACS recommends between 2 and 8 threads per MPI process.

In the example below, the application will create 8 MPI processes, each divided into 4 OpenMP threads.
```shell
#PBS -q cpu
#PBS -l select=8:ncpus=4

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-gnu

mpiexec -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -pin on -v -deffnm md
```
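As a quick sanity check (a sketch, not part of the site's script), the `select=8:ncpus=4` request reserves 8 chunks of 4 cores each:

```shell
# Sketch: total cores reserved = MPI ranks (select) x OpenMP threads (ncpus).
ranks=8             # chunks requested via select=8
threads_per_rank=4  # cores per chunk requested via ncpus=4
echo "total cores: $((ranks * threads_per_rank))"  # prints "total cores: 32"
```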
MPI
If you do not want to divide the application into OpenMP threads, you can use parallelization exclusively at the MPI process level.

In the example below, the application will launch 32 MPI processes.
```shell
#PBS -q cpu
#PBS -l select=32

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-gnu

mpiexec gmx mdrun -pin on -v -deffnm md
```
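A brief note on the resource request (an illustrative sketch; the per-chunk default of one core is an assumption about the PBS Pro configuration): with `select=32`, each chunk supplies one core, so `mpiexec` starts one MPI process per reserved core:

```shell
# Sketch: "-l select=32" requests 32 chunks; assuming each chunk defaults to
# ncpus=1, this yields one MPI process per reserved core.
chunks=32
ncpus_per_chunk=1
echo "MPI processes: $((chunks * ncpus_per_chunk))"  # prints "MPI processes: 32"
```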
OpenMP
If you want to split the application exclusively into OpenMP threads, you must request a single compute node, since in this case the application works with shared memory.
Tip: GROMACS will get the OMP_NUM_THREADS value upon defining

In the example below, the application will run with 32 OpenMP threads.
```shell
#PBS -q cpu
#PBS -l ncpus=32

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-gnu

gmx mdrun -pin on -v -deffnm md
```
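If needed, the thread count can also be set explicitly in the script rather than relying on the scheduler (a minimal sketch; the value 32 mirrors the `ncpus=32` request above):

```shell
# Sketch: explicitly export the OpenMP thread count for a shared-memory run.
export OMP_NUM_THREADS=32
echo "OpenMP threads: ${OMP_NUM_THREADS}"  # prints "OpenMP threads: 32"
```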
GPU
The application supports running on a graphics processor, as well as on multiple graphics processors.
Warning: In a large number of tested examples, GPU utilization when using multiple graphics processors reaches at most 50% (when using 2 GPUs). For this reason, consider using a single GPU and multiple CPU cores: with 1 GPU + 8 CPU cores, GPU utilization reaches up to 85%.
Single GPU
In the example below, the application will use one graphics processor and eight processor cores.
```shell
#PBS -q gpu
#PBS -l select=1:ngpus=1:ncpus=8

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-cuda

gmx mdrun -pin on -v -deffnm md
```
Multi GPU
When calling the application with mpiexec, each MPI process is assigned to one graphics processor (via the CUDA_VISIBLE_DEVICES variable), and each OpenMP thread (via the OMP_NUM_THREADS variable) to one processor core.
In the example below, the application will use two graphics processors and four processor cores for each graphics processor.
```shell
#PBS -q gpu
#PBS -l select=2:ngpus=1:ncpus=4

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-cuda

mpiexec -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -pin on -v -deffnm md
```
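To illustrate the binding described above (a hypothetical sketch, not part of the site's scripts; the concrete values are assumed for demonstration), each MPI rank ends up with an environment along these lines:

```shell
# Sketch: the environment one MPI rank could see under the binding above.
# Values are assumed for illustration; on the cluster they are set per rank.
CUDA_VISIBLE_DEVICES=0   # this rank is pinned to a single GPU
OMP_NUM_THREADS=4        # one OpenMP thread per reserved core
export CUDA_VISIBLE_DEVICES OMP_NUM_THREADS
echo "GPU(s): ${CUDA_VISIBLE_DEVICES}, OpenMP threads: ${OMP_NUM_THREADS}"
```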
...