Description

GROMACS is a computational chemistry application for molecular dynamics (MD). It is primarily used for simulations of macromolecules and can be considered a free alternative to the commercial Amber package.

GROMACS is primarily based on classical mechanics, meaning that it uses the classical equations of motion to compute the movement of molecules and their segments. MD simulations can provide information about the behaviour and properties of molecules, such as their conformations, energies and interactions with other molecules.

GROMACS is an open-source application. It supports hybrid MPI + OpenMP parallelization as well as the use of GPUs, which significantly speed up MD calculations.

Versions

CPU

version | module                        | parallelization | includes PLUMED
2022.5  | scientific/gromacs/2022.5-gnu | MPI + OpenMP    | v2.8.2
2023.1  | scientific/gromacs/2023.1-gnu | MPI + OpenMP    | v2.9.0

GPU + CPU

version | module                         | parallelization | includes PLUMED
2022.5  | scientific/gromacs/2022.5-cuda | MPI + OpenMP    | -
2023.1  | scientific/gromacs/2023.1-cuda | MPI + OpenMP    | -

Official documentation

Examples

When you define the value of ncpus in the PBS script header, the OMP_NUM_THREADS environment variable is automatically set to the same value in the job's environment.
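
For example, you can confirm this by printing the variable at the beginning of the job script (a minimal check; the echo line is purely illustrative):

echo "OMP_NUM_THREADS = ${OMP_NUM_THREADS}"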

CPU

MPI + OpenMP

Since the application supports hybrid parallelization, you can split MPI processes into OpenMP threads.

GROMACS recommends between 2 and 8 threads per MPI process.

PBS_script
#PBS -q cpu
#PBS -l select=8:ncpus=4

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-gnu

mpiexec -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -pin on -v -deffnm md
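
In the example above this gives 8 MPI processes with 4 OpenMP threads each, i.e. 32 cores in total. If you want to fix the thread count explicitly, the same launch line can also pass it to mdrun via the -ntomp option (an optional variant; -ntomp is a standard gmx mdrun option, not something required by the module):

mpiexec -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -ntomp ${OMP_NUM_THREADS} -pin on -v -deffnm md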

MPI

If you do not want to divide the application into OpenMP threads, you can use parallelization exclusively at the MPI process level.

In the example below, the application will launch 32 MPI processes.

PBS_script
#PBS -q cpu
#PBS -l select=32

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-gnu

mpiexec gmx mdrun -pin on -v -deffnm md
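
If you want to be sure that each MPI process really runs with a single thread, you can additionally fix the OpenMP thread count (an optional variant; -ntomp is a standard gmx mdrun option):

mpiexec gmx mdrun -ntomp 1 -pin on -v -deffnm md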

OpenMP

If you want to split the application exclusively into OpenMP threads, you must request a single compute node, since in that case the application relies on shared memory.

GROMACS picks up the number of threads from OMP_NUM_THREADS, which is set automatically from the ncpus value defined in the script header.

In the example below, the application will run with 32 OpenMP threads.

PBS_script
#PBS -q cpu
#PBS -l ncpus=32

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-gnu

gmx mdrun -pin on -v -deffnm md
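
When mdrun is started without mpiexec, it uses its built-in thread-MPI parallelization by default. If you want to force a single rank with pure OpenMP parallelization, the thread counts can also be set explicitly (an optional variant; -ntmpi and -ntomp are standard gmx mdrun options):

gmx mdrun -ntmpi 1 -ntomp 32 -pin on -v -deffnm md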

GPU

Since the application supports hybrid parallelization, you can split MPI processes into OpenMP threads.

In a large number of tested examples, GPU utilization with multiple GPUs reaches a maximum of about 50% (when using 2 GPUs).

For this reason, consider using a single GPU with multiple CPU cores: GPU utilization with 1 GPU + 8 CPU cores reaches up to 85%.
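
You can check the actual GPU utilization of a running job with nvidia-smi on the allocated node, for example (a minimal sketch; the 10-second sampling interval is arbitrary):

nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv -l 10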

Single GPU

In the example below, the application will use one graphics processor and eight processor cores.

PBS_script
#PBS -q gpu
#PBS -l select=1:ngpus=1:ncpus=8

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-cuda

gmx mdrun -pin on -v -deffnm md
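
Depending on the simulated system, it may help to offload additional work to the GPU explicitly (optional tuning; -nb, -pme, -bonded and -update are standard gmx mdrun options, and not every simulation setup supports -update gpu):

gmx mdrun -nb gpu -pme gpu -bonded gpu -update gpu -pin on -v -deffnm md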

Multi GPU

When the application is launched with mpiexec, each MPI process is assigned one GPU (via the CUDA_VISIBLE_DEVICES variable), and each OpenMP thread (their number is taken from the OMP_NUM_THREADS variable) is bound to one CPU core.

In the example below, the application will use two graphics processors, with four processor cores per graphics processor.

PBS_script
#PBS -q gpu
#PBS -l select=2:ngpus=1:ncpus=4

cd ${PBS_O_WORKDIR}

module load scientific/gromacs/2022.5-cuda

mpiexec -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -pin on -v -deffnm md
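
With more than one MPI process, offloading PME to the GPU requires a single dedicated PME rank. A possible variant of the launch line above (optional tuning; -npme, -nb and -pme are standard gmx mdrun options):

mpiexec -d ${OMP_NUM_THREADS} --cpu-bind depth gmx mdrun -npme 1 -nb gpu -pme gpu -pin on -v -deffnm md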