TensorFlow: set CUDA_VISIBLE_DEVICES within Jupyter

I have two GPUs and would like to run two different networks via ipynb simultaneously; however, the first notebook always allocates both GPUs.

Using CUDA_VISIBLE_DEVICES, I can hide devices for Python scripts; however, I am unsure how to do so within a notebook.

Is there any way to expose different GPUs to different notebooks running on the same server?


Solution 1:

You can set environment variables in the notebook using os.environ. Do the following before initializing TensorFlow to limit it to the first GPU:

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"   # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"]="0"

You can double-check which devices are visible to TF:

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

I tend to use this from a utility module such as notebook_util (a sketch of what such a helper might look like follows the snippet below):

import notebook_util
notebook_util.pick_gpu_lowest_memory()
import tensorflow as tf
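
pick_gpu_lowest_memory is not part of any standard library; a minimal sketch of what such a helper could look like is below. It assumes nvidia-smi is on the PATH, and the module and function names simply mirror the snippet above rather than any official API:

import os
import subprocess

def list_gpu_memory_used():
    """Memory used (MiB) per GPU, in nvidia-smi order (matches PCI_BUS_ID)."""
    output = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        encoding="utf-8")
    return [int(line) for line in output.strip().splitlines()]

def pick_gpu_lowest_memory():
    """Hide all GPUs except the one currently using the least memory."""
    os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
    usage = list_gpu_memory_used()
    best_gpu = min(range(len(usage)), key=usage.__getitem__)
    os.environ["CUDA_VISIBLE_DEVICES"] = str(best_gpu)
    return best_gpu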

Solution 2:

You can do this faster, without any imports, just by using magics:

%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0

Note that all environment variables are strings, so there is no need for quotes. You can verify that the variable is set by running %env <name_of_var>, or inspect all of them with %env.
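
For example, to confirm the value in a later cell (assuming the variable was set as above):

%env CUDA_VISIBLE_DEVICES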

Solution 3:

You can also make multiple GPUs visible, like so:

import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0,2,3,4"
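
Keep in mind that CUDA re-indexes whatever remains visible, so inside the process these four physical GPUs appear as devices 0 through 3. A minimal sketch of pinning an op to one of them, assuming the TensorFlow 1.x graph/session API used elsewhere in this post:

import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0,2,3,4"

import tensorflow as tf

# Physical GPU 2 is the second visible device, so it is addressed as /gpu:1.
with tf.device("/gpu:1"):
    c = tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])

with tf.Session() as sess:
    print(sess.run(c))  # the add runs on physical GPU 2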