Distributed Deep Learning with Horovod Training Course

Duration

7 hours (usually 1 day including breaks)


Requirements

  • An understanding of machine learning, specifically deep learning
  • Familiarity with machine learning libraries (TensorFlow, Keras, PyTorch, Apache MXNet)
  • Python programming experience


Audience

  • Developers
  • Data scientists


Horovod is an open-source software framework for fast, efficient distributed deep learning with TensorFlow, Keras, PyTorch, and Apache MXNet. It can scale a single-GPU training script to run on multiple GPUs or hosts with minimal code changes.
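As an illustration of that workflow, the commands below sketch how a Horovod job is typically installed and launched, based on the Horovod documentation (the script name train.py and the GPU count are placeholders):

```shell
# Install Horovod with TensorFlow support (assumes TensorFlow is already installed)
pip install horovod[tensorflow]

# Launch the adapted training script on 4 GPUs on the local host
horovodrun -np 4 -H localhost:4 python train.py
```

The same `horovodrun` launcher also accepts multiple hosts (e.g. `-H server1:4,server2:4`) to spread the same script across machines.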

This instructor-led, live training (online or onsite) is aimed at developers and data scientists who wish to use Horovod to run distributed deep learning training jobs and scale them to run across multiple GPUs in parallel.

By the end of this training, participants will be able to:

  • Set up the necessary development environment to start running deep learning training.
  • Install and configure Horovod to train models with TensorFlow, Keras, PyTorch, and Apache MXNet.
  • Scale deep learning training with Horovod to run on multiple GPUs.

Format of the Course

  • Interactive lecture and discussion.
  • Lots of exercises and practice.
  • Hands-on implementation in a live-lab environment.

Course Customization Options

  • This course is focused on Horovod, but other software tools and frameworks such as TensorFlow, Keras, PyTorch, and Apache MXNet may be required. Please let us know if you have specific requirements or preferences.
  • To request a customized training for this course, please contact us to arrange it.

Course Outline


Introduction

  • Overview of Horovod features and concepts
  • Understanding the supported frameworks

Installing and Configuring Horovod

  • Preparing the hosting environment
  • Building Horovod for TensorFlow, Keras, PyTorch, and Apache MXNet
  • Running Horovod
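Building Horovod for a particular framework is controlled through environment variables at install time. A minimal sketch, using the `HOROVOD_WITH_*` flags described in the Horovod install documentation:

```shell
# Build Horovod from source with support for the chosen frameworks;
# each HOROVOD_WITH_* flag makes the build fail loudly if that
# framework's support cannot be compiled in.
HOROVOD_WITH_TENSORFLOW=1 HOROVOD_WITH_PYTORCH=1 HOROVOD_WITH_MXNET=1 \
    pip install --no-cache-dir horovod
```

`--no-cache-dir` forces a fresh build rather than reusing a wheel compiled with different flags.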

Running Distributed Training

  • Modifying and running training examples with TensorFlow
  • Modifying and running training examples with Keras
  • Modifying and running training examples with PyTorch
  • Modifying and running training examples with Apache MXNet
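The modifications above follow the same pattern in every framework. Below is a sketch of the standard changes to a Keras training script, following the Horovod documentation; it requires `horovod` and `tensorflow` installed with one or more GPUs, and the model is a placeholder:

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

# Initialize Horovod (one process per GPU, started by horovodrun)
hvd.init()

# Pin each process to its own GPU
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

# Placeholder model for illustration
model = tf.keras.Sequential([tf.keras.layers.Dense(10)])

# Scale the learning rate by the number of workers, then wrap the
# optimizer so gradients are averaged across workers each step
opt = tf.keras.optimizers.SGD(0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(loss='sparse_categorical_crossentropy', optimizer=opt)

callbacks = [
    # Broadcast initial variables from rank 0 so all workers start in sync
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# Only rank 0 prints progress; all ranks train:
# model.fit(dataset, callbacks=callbacks,
#           verbose=1 if hvd.rank() == 0 else 0)
```

The PyTorch and MXNet versions differ only in the framework-specific import (`horovod.torch`, `horovod.mxnet`) and optimizer wrapper.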

Optimizing Distributed Training Processes

  • Running concurrent operations on multiple GPUs
  • Tuning hyperparameters
  • Enabling performance autotuning
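Autotuning is enabled through environment variables at launch time; per the Horovod autotuning documentation, a run might look like this (script name and process count are placeholders):

```shell
# Enable Horovod's autotuning, which searches for efficient tensor-fusion
# and cycle-time settings during training, and log the results so the
# best parameters can be reused in later runs
HOROVOD_AUTOTUNE=1 HOROVOD_AUTOTUNE_LOG=autotune_log.csv \
    horovodrun -np 4 python train.py
```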


Summary and Conclusion


