How do I make Jupyter Notebook run on a GPU?

I am answering my own question. The easiest way is to connect to a Local Runtime (https://research.google.com/colaboratory/local-runtimes.html) and then select GPU as the hardware accelerator, as shown in this tutorial (https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d).
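For the local-runtime route, the linked Google page boils down to the commands below (taken from that guide; the exact extension and flags may change as Colab evolves):

```shell
# Install and enable the WebSocket extension Colab uses to talk
# to a local Jupyter server
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws

# Start Jupyter so that colab.research.google.com is allowed to connect
jupyter notebook \
  --NotebookApp.allow_origin='https://colab.research.google.com' \
  --port=8888 \
  --NotebookApp.port_retries=0
```

Copy the URL (with token) that Jupyter prints and paste it into Colab's "Connect to local runtime" dialog.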


  1. Install Miniconda/Anaconda

  2. Download the CUDA Toolkit (according to your OS)

    Follow these steps (Linux CUDA Toolkit, Ubuntu 20.04 example):

     a. wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
    
     b. sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
    
     c. wget https://developer.download.nvidia.com/compute/cuda/11.0.3/local_installers/cuda-repo-ubuntu2004-11-0-local_11.0.3-450.51.06-1_amd64.deb
    
     d. sudo dpkg -i cuda-repo-ubuntu2004-11-0-local_11.0.3-450.51.06-1_amd64.deb
    
     e. sudo apt-key add /var/cuda-repo-ubuntu2004-11-0-local/7fa2af80.pub
    
     f. sudo apt-get update
    
     g. sudo apt-get -y install cuda
    
  3. Download and install cuDNN (requires a free NVIDIA developer account)

     a. Copy the cuDNN files (bin, include, lib) into the CUDA Toolkit folder.

  4. Add the CUDA path to your environment variables (see a tutorial if needed).

  5. Create an environment in miniconda/anaconda

      conda create -n tf-gpu

      conda activate tf-gpu

      pip install tensorflow-gpu
    
  6. Install Jupyter Notebook (JN)

     pip install jupyter notebook
    
  7. DONE! Now you can use tf-gpu in JN.
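After the steps above, you can verify from a notebook cell (or a plain Python shell in the `tf-gpu` environment) that TensorFlow actually sees the GPU. A minimal check; an empty device list usually means the CUDA/cuDNN setup was not picked up:

```python
import tensorflow as tf

# List the physical GPU devices TensorFlow can see
gpus = tf.config.list_physical_devices('GPU')
print("Num GPUs available:", len(gpus))

# Confirm the installed TensorFlow build has CUDA support compiled in
print("Built with CUDA:", tf.test.is_built_with_cuda())
```

If `Num GPUs available` is 0, recheck the CUDA path in your environment variables and the driver/toolkit version match.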


I've written a Medium article about how to set up JupyterLab in Docker (and Docker Swarm) with access to the GPU via CUDA in PyTorch or TensorFlow.

Set up your own GPU-based Jupyter

I'm aware that you're not looking for a Docker-based solution; however, it can save you a lot of time to use an existing Dockerfile that already includes the many packages required for statistics and ML.
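If you do try the Docker route, a ready-made GPU image gets you a running Jupyter server in one command. A sketch, assuming the NVIDIA driver and the NVIDIA Container Toolkit are installed on the host; `tensorflow/tensorflow:latest-gpu-jupyter` is an official Docker Hub image, not the one from the article:

```shell
# Expose Jupyter on http://localhost:8888 with GPU access
# (--gpus all requires the NVIDIA Container Toolkit)
docker run --gpus all -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
```

Inside the container, `tf.config.list_physical_devices('GPU')` should then report the host GPUs.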