How to tell PyTorch to not use the GPU?

I want to do some timing comparisons between CPU & GPU as well as some profiling and would like to know if there's a way to tell pytorch to not use the GPU and instead use the CPU only? I realize I could install another CPU-only pytorch, but hoping there's an easier way.


Solution 1:

I just wanted to add that it is also possible to do this from within your PyTorch code:

Here is a small example taken from the PyTorch Migration Guide for 0.4.0:

# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)

I think the example is pretty self-explanatory, but if there are any questions, just ask!
One big advantage of this syntax is that you can write code which runs on the CPU when no GPU is available, and on the GPU otherwise, without changing a single line.
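Here is a minimal, self-contained sketch of that pattern; `nn.Linear` just stands in for your `MyModule(...)`:

```python
import torch
import torch.nn as nn

# Pick the device once; everything below runs unchanged on CPU or GPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(5, 2).to(device)   # stand-in for MyModule(...)
data = torch.rand(3, 5)              # a batch of 3 samples

input = data.to(device)              # no copy if already on `device`
output = model(input)
print(output.shape)  # torch.Size([3, 2])
```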

Instead of using the if-statement with torch.cuda.is_available(), you can also hard-code the device to the CPU like this:

device = torch.device("cpu")

Furthermore, you can create tensors directly on the desired device using the device argument:

mytensor = torch.rand(5, 5, device=device)

This creates the tensor directly on the device you specified previously, instead of creating it on the CPU and copying it over.

I want to point out that this syntax lets you switch not only between CPU and GPU, but also between different GPUs.
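For example, a sketch of moving a tensor between devices (the second GPU is only touched if one exists):

```python
import torch

t = torch.rand(2, 2)                      # created on the CPU
if torch.cuda.is_available():
    t = t.to(torch.device("cuda:0"))      # move to the first GPU
    if torch.cuda.device_count() > 1:
        t = t.to(torch.device("cuda:1"))  # ...or to a second GPU
t = t.to(torch.device("cpu"))             # and back to the CPU
print(t.device)  # cpu
```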

I hope this is helpful!

Solution 2:

You can just set the CUDA_VISIBLE_DEVICES environment variable to an empty string in your shell before running your torch code.

export CUDA_VISIBLE_DEVICES=""

This tells torch that there are no GPUs.

export CUDA_VISIBLE_DEVICES="0" will tell it to use only the GPU with ID 0, and so on.

Solution 3:

The simplest way from within Python is:

  import os
  os.environ["CUDA_VISIBLE_DEVICES"] = ""
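One caveat: this only works if it runs before torch initializes CUDA, so set it before `import torch` (or at least before any CUDA call). A small sketch that also verifies the effect:

```python
import os

# Must be set before torch initializes CUDA, hence before `import torch`.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import torch

print(torch.cuda.is_available())   # False: no devices are visible
print(torch.cuda.device_count())   # 0
```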

Solution 4:

General

As previous answers showed, you can make your PyTorch code run on the CPU using:

device = torch.device("cpu")

Comparing Trained Models

I would like to add how you can load a previously trained model on the CPU (examples taken from the PyTorch docs).

Note: make sure that all the data fed into the model is also on the CPU.

Recommended loading

model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=torch.device("cpu")))

Loading entire model

model = torch.load(PATH, map_location=torch.device("cpu"))
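Here is a runnable sketch of the recommended (state_dict) variant; `nn.Linear` stands in for `TheModelClass(*args, **kwargs)`, and the checkpoint path is made up for the example:

```python
import torch
import torch.nn as nn

# Stand-in for TheModelClass(*args, **kwargs); any nn.Module works the same way.
model = nn.Linear(4, 2)
PATH = "checkpoint.pt"

# Save on whatever device training happened on...
torch.save(model.state_dict(), PATH)

# ...then load the weights onto the CPU, even if they were saved from a GPU.
cpu_model = nn.Linear(4, 2)
cpu_model.load_state_dict(torch.load(PATH, map_location=torch.device("cpu")))

# All inputs must live on the CPU as well.
out = cpu_model(torch.rand(1, 4))
print(out.device)  # cpu
```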