How to Actually Utilize a GPU to Train an ML Model on Your Windows Machine

Muhammad Fauzan
3 min read · May 9, 2023


Photo by Nana Dua on Unsplash

Keeping up with the latest state-of-the-art tech is a passion for many tech enthusiasts, including myself. That’s why I recently purchased a device with an Nvidia GPU, but I struggled to configure it so that TensorFlow would actually use the GPU. In this article, I would like to share tips and a guide to prepare your device to train an ML model with TensorFlow.

What is a GPU?

A graphics processing unit (GPU) is a specialized electronic circuit designed to manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device. Nowadays, GPU usage is expanding beyond processing images for photo/video editing and gaming into running vector calculations for data science and machine learning.

But why use a GPU for machine learning? The answer lies in processing speed. GPUs can compute vector, matrix, and tensor products much faster than CPUs by parallelizing the work across their many cores, so a large matrix product can finish almost as quickly as a simple scalar product. This acceleration allows for the development of more complex ML models and faster training times, enabling sophisticated innovation in the industry.

I have spent a few days figuring out what does and doesn’t work when setting up my hardware to run TensorFlow on the GPU. Here are a few key takeaways:

  • With TensorFlow 2, you no longer need to install the separate tensorflow-gpu package. Simply install tensorflow and specify the version you need.
  • Keep in mind that TensorFlow 2.10 was the last version with native Windows support for GPU utilization. Any later version needs to run under the Windows Subsystem for Linux (WSL2) to use the GPU, as discussed in the TensorFlow developer forum.
  • You need to add the directories of your Python and CUDA installations to the PATH environment variable (multiple paths are separated by semicolons).

And here is the step-by-step guide on how to do it. Note that this tutorial was tested on Windows with an Nvidia GeForce RTX 3050 GPU.

  1. Before beginning the installation process, it is important to ensure that your CUDA, cuDNN, Python, and TensorFlow versions are all compatible with each other. You can check their compatibility in the tested build configurations table of the TensorFlow install documentation.
  2. Next, install your preferred Python version. During setup, check the option to add Python to the PATH.
  3. Then, proceed to install CUDA.
  4. After that, download cuDNN. Unzip the archive, then copy the folders named bin, include, and lib and paste them into C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\[your installed CUDA version]
  5. Then add the directory mentioned above to the PATH.
  6. Lastly, install TensorFlow and Jupyter Notebook. Don’t forget to specify the TensorFlow version that you want to install.

You can verify if the GPU is properly configured by using this block of code.
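The original code cell did not survive this copy of the article; a minimal check along these lines (my reconstruction, not necessarily the author’s exact code) does the job:

```python
import tensorflow as tf

# Show which TensorFlow build is installed and whether it was
# compiled with CUDA support at all.
print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())

# List every GPU that TensorFlow can see.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs detected:", gpus)
```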

If TensorFlow can access your GPU, the list of detected devices in the output will include an entry such as PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU').

But how much faster is it actually to run a process on a GPU compared to running it on a CPU? You can run the code below (adapted from an article by Analytics Vidhya) to compare the two. First, you need to download the dataset you want to train your ML model with: the CIFAR-10 dataset, which consists of 60,000 32x32 color images in 10 classes.
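The download-and-preprocess cell is also missing from this copy; a sketch using the Keras loader that ships with TensorFlow would be:

```python
import tensorflow as tf

# Download CIFAR-10: 50,000 training and 10,000 test images,
# each a 32x32 colour image labelled with one of 10 classes.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Scale pixel values from [0, 255] to [0, 1].
x_train, x_test = x_train / 255.0, x_test / 255.0

print("train:", x_train.shape, "test:", x_test.shape)
```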

Define a simple model to be trained with the CIFAR-10 dataset.
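The model-definition cell is not preserved either; a small convolutional network of the kind the article describes might look like this (the exact architecture is my assumption):

```python
import tensorflow as tf

def build_model():
    # One conv/pool stage followed by a small dense classifier;
    # deliberately simple so a single epoch finishes quickly.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                               input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10),  # one logit per CIFAR-10 class
    ])

model = build_model()
model.compile(
    optimizer='adam',
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
model.summary()
```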

Check how much time your CPU needs to go through an epoch of training a simple model by running this block of code.
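The timing cell is missing from this copy; a self-contained sketch that pins the work to the CPU with tf.device('/CPU:0') and times one epoch could be:

```python
import time
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train / 255.0

# Build, compile, and train entirely on the CPU so the GPU
# (if present) is not used for this measurement.
with tf.device('/CPU:0'):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                               input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'],
    )
    start = time.perf_counter()
    hist = model.fit(x_train, y_train, epochs=1, batch_size=64)
    elapsed = time.perf_counter() - start

print(f"CPU epoch time: {elapsed:.1f}s")
```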

Now, compare that with how much time your GPU needs to go through the same epoch of training.
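And the GPU counterpart, again a reconstruction; the guard makes the snippet fall back to the CPU when no GPU is visible, so it still runs on an unconfigured machine:

```python
import time
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train / 255.0

# Use the first GPU if TensorFlow can see one, otherwise the CPU.
device = '/GPU:0' if tf.config.list_physical_devices('GPU') else '/CPU:0'

with tf.device(device):
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, (3, 3), activation='relu',
                               input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer='adam',
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=['accuracy'],
    )
    start = time.perf_counter()
    hist = model.fit(x_train, y_train, epochs=1, batch_size=64)
    elapsed = time.perf_counter() - start

print(f"Epoch time on {device}: {elapsed:.1f}s")
```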

On my machine, an epoch of training took as long as 2 min 13 s on the CPU, while the GPU needed only 13.6 s to complete it. That is roughly ten times faster!
