...

Code Block
languagebash
titletest.sh
linenumberstrue
collapsetrue
[korisnik@x3000c0s25b0n0] $ module load scientific/pytorch/1.14.0-ngc
[korisnik@x3000c0s25b0n0] $ run-command.sh pip3 list
INFO:    underlay of /etc/localtime required more than 50 (95) bind mounts
INFO:    underlay of /usr/bin/nvidia-smi required more than 50 (474) bind mounts
13:4: not a valid test operator: (
13:4: not a valid test operator: 510.47.03
Package                 Version
----------------------- -------------------------------
absl-py                 1.3.0
accelerate              0.19.0
apex                    0.1
appdirs                 1.4.4
argon2-cffi             21.3.0
argon2-cffi-bindings    21.2.0
asttokens               2.2.1
...

torchrun/distributed

Note
titleTorchrun & distributed

The torchrun-*.sh and distributed-*.sh wrappers are interchangeable when the PyTorch code is distributed using the torch.distributed module.
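
Both launch styles expect the Python code itself to join a torch.distributed process group. The sketch below shows a minimal entry point of this kind; it assumes the launcher (torchrun, or whatever the distributed-*.sh wrapper invokes) exports the usual RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR and MASTER_PORT environment variables, so check the module documentation for the exact variables the wrappers set.

Code Block
languagepy
titledistributed_init_sketch.py
linenumberstrue
collapsetrue
# Illustrative sketch only - assumes the launcher exports RANK, WORLD_SIZE,
# LOCAL_RANK, MASTER_ADDR and MASTER_PORT (as torchrun does).

import os

import torch
import torch.distributed as dist


def main():
    # process layout provided by the launcher
    rank = int(os.environ["RANK"])
    world_size = int(os.environ["WORLD_SIZE"])
    local_rank = int(os.environ["LOCAL_RANK"])

    # one process per GPU: bind this process to its local device
    torch.cuda.set_device(local_rank)

    # join the process group over NCCL
    dist.init_process_group(backend="nccl", rank=rank, world_size=world_size)

    # ... build the model, wrap it in
    # torch.nn.parallel.DistributedDataParallel, and train as usual ...

    dist.destroy_process_group()


if __name__ == "__main__":
    main()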

...

Running PyTorch code on a single GPU

Code Block
languagepy
titlesinglegpu.py
linenumberstrue
collapsetrue
# source
# - https://github.com/horovod/horovod/blob/master/examples/pytorch/pytorch_synthetic_benchmark.py

import os
import time

import torch
import torch.nn as nn
import torch.optim as optim

from torch.utils.data import DataLoader

from torchvision.models import resnet50
from torchvision.datasets import FakeData
from torchvision.transforms import ToTensor

def main():

    # vars
    batch = 256
    samples = 256*100
    epochs = 1

    # model
    model = resnet50(weights=None)
    model.cuda()
    optimizer = optim.SGD(model.parameters(), lr=0.001)
    loss_fn = nn.CrossEntropyLoss()

    # data
    dataset = FakeData(samples,
                       num_classes=1000,
                       transform=ToTensor())
    loader = DataLoader(dataset,
                        batch_size=batch,
                        shuffle=False,
                        num_workers=1,
                        pin_memory=True)

    # train
    for epoch in range(epochs):
        start = time.time()
        for batch_idx, (images, labels) in enumerate(loader):
            images = images.cuda()
            labels = labels.cuda()
            outputs = model(images)
            classes = torch.argmax(outputs, dim=1)
            loss = loss_fn(outputs, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if (batch_idx % 10 == 0):
                print('--- Epoch %i, Batch %3i / %3i, Loss = %0.2f ---' % (epoch,
                                                                           batch_idx,
                                                                           len(loader),
                                                                           loss.item()))
        elapsed = time.time()-start
        imgsec = samples/elapsed
        print('--- Epoch %i finished: %0.2f img/sec ---' % (epoch,
                                                            imgsec))

if __name__ == "__main__":
    main()
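
Note that FakeData generates random images in memory, so the reported img/sec figure reflects model and GPU throughput rather than the input pipeline or disk I/O.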
Code Block
languagebash
titlesinglegpu.sh
linenumberstrue
collapsetrue
#!/bin/bash

#PBS -q gpu
#PBS -l ngpus=1

# load the module
module load scientific/pytorch/2.0.0-ngc

# change to the directory containing the script
cd ${PBS_O_WORKDIR:-""}

# run the script using run-singlegpu.sh
run-singlegpu.sh singlegpu.py
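
Assuming the usual PBS workflow, the job is submitted with qsub singlegpu.sh; the ngpus=1 resource request reserves a single GPU, and the module's run-singlegpu.sh wrapper is expected to start the script on it.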

torchrun/distributed

Note
titleTorchrun & distributed

The torchrun-*.sh and distributed-*.sh wrappers are interchangeable when the PyTorch code is distributed using the torch.distributed module.

Application on multiple GPUs and a single node

...