...
On the Isabella computing cluster, the nodes equipped with NVIDIA Tesla V100-SXM2-16GB GPUs have Horovod installed, a Python library for the distributed execution of TensorFlow and PyTorch jobs.
...
Warning |
---|
Using the Horovod library is mandatory for all TensorFlow and PyTorch jobs that require more than one GPU! |
Example of using the Horovod library:
Code Block |
---|
import tensorflow as tf
import horovod.tensorflow as hvd

# Initialize Horovod
hvd.init()

# Pin GPU to be used to process local rank (one GPU per process)
config = tf.ConfigProto()
config.gpu_options.visible_device_list = str(hvd.local_rank())

# Build model...
loss = ...
opt = tf.train.AdagradOptimizer(0.01 * hvd.size())

# Add Horovod Distributed Optimizer
opt = hvd.DistributedOptimizer(opt)

# Add hook to broadcast variables from rank 0 to all other processes during
# initialization.
hooks = [hvd.BroadcastGlobalVariablesHook(0)]

# Make training operation
train_op = opt.minimize(loss)

# Save checkpoints only on worker 0 to prevent other workers from corrupting them.
checkpoint_dir = '/tmp/train_logs' if hvd.rank() == 0 else None

# The MonitoredTrainingSession takes care of session initialization,
# restoring from a checkpoint, saving to a checkpoint, and closing when done
# or an error occurs.
with tf.train.MonitoredTrainingSession(checkpoint_dir=checkpoint_dir,
                                       config=config,
                                       hooks=hooks) as mon_sess:
    while not mon_sess.should_stop():
        # Perform synchronous training.
        mon_sess.run(train_op) |
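The same pattern (initialize, pin one GPU per process, scale the learning rate, wrap the optimizer, broadcast from rank 0, checkpoint only on rank 0) applies to PyTorch. A minimal sketch, assuming Horovod was built with PyTorch support; the model, learning rate, paths, and `train_loader` are placeholders to replace with your own:

```python
import torch
import horovod.torch as hvd

# Initialize Horovod and pin each process to its own GPU
hvd.init()
torch.cuda.set_device(hvd.local_rank())

# Build model... (placeholder: any torch.nn.Module)
model = torch.nn.Linear(10, 1).cuda()

# Scale the learning rate by the number of workers
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across all workers
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters())

# Broadcast initial parameters and optimizer state from rank 0
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for epoch in range(10):
    for data, target in train_loader:  # train_loader: your DataLoader
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(data.cuda()), target.cuda())
        loss.backward()
        optimizer.step()

    # Save checkpoints only on worker 0 to prevent corruption
    if hvd.rank() == 0:
        torch.save(model.state_dict(), '/tmp/train_logs/checkpoint.pt')
```

Like the TensorFlow example, each process drives one GPU, so the job is launched with one process per GPU (e.g. via `horovodrun`).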
...
Info |
---|
More information on using the Horovod library, as well as complete example TensorFlow and PyTorch scripts, can be found in the Horovod GitHub repository. |
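The `0.01 * hvd.size()` factor in the example above follows the common linear learning-rate scaling rule: with N synchronous workers the effective batch size grows N-fold, so the learning rate is scaled by the same factor. A plain-Python illustration (the base rate of 0.01 and the worker counts are illustrative, not cluster defaults):

```python
def scaled_lr(base_lr, num_workers):
    """Linear scaling rule: learning rate grows with the number of workers."""
    return base_lr * num_workers

# Single GPU: the base rate is used unchanged
print(scaled_lr(0.01, 1))  # 0.01

# Four workers (e.g. four V100 GPUs): the rate is scaled four-fold
print(scaled_lr(0.01, 4))  # 0.04
```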
...