How to configure iGPU for xserver and nvidia GPU for CUDA work
Solution 1:
I first installed the NVIDIA drivers and CUDA packages following this guide. However, after a reboot I ended up with /usr/lib/xorg/Xorg showing up in the output of nvidia-smi, which wasn't good, since I needed all of the NVIDIA GPU's RAM available for my work.
After some research I found a solution: I created /etc/X11/xorg.conf with the following content:
Section "Device"
Identifier "intel"
Driver "intel"
BusId "PCI:0:2:0"
EndSection
Section "Screen"
Identifier "intel"
Device "intel"
EndSection
(If you try to do the same, make sure to check where your GPU is. Mine was at 00:02.0, which translates to PCI:0:2:0.)
% lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Device 3e92
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 (rev a1)
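One detail worth spelling out: lspci prints the bus address in hexadecimal, while the BusID in xorg.conf expects decimal numbers (they only happen to coincide here because 0, 2, and 0 are the same in both bases). A small sketch of the conversion; the to_busid helper is hypothetical, not part of any tool:

```shell
# Convert an lspci bus address (hex bus:device.function) to the
# decimal "PCI:bus:device:function" form that xorg.conf's BusID expects.
# to_busid is a hypothetical helper, not a standard command.
to_busid() {
    local addr=$1
    local bus=$((16#${addr%%:*}))    # hex bus -> decimal
    local rest=${addr#*:}
    local dev=$((16#${rest%%.*}))    # hex device -> decimal
    local fn=$((16#${addr##*.}))     # hex function -> decimal
    printf 'PCI:%d:%d:%d\n' "$bus" "$dev" "$fn"
}

to_busid 00:02.0   # -> PCI:0:2:0
```

The difference only shows up on machines where the bus number goes above 9; for example, a GPU at 82:00.0 would need BusID "PCI:130:0:0", not "PCI:82:0:0".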
After rebooting, Xorg and other programs no longer appeared in the output of nvidia-smi, and I was able to use PyTorch with CUDA 10.0.
Note that I still have all the NVIDIA drivers installed; they just don't interfere.
Update: for Ubuntu 20.04, some extra changes are needed for this to work; you will find the full details here.
Solution 2:
I would like to add another way in which I am currently preventing the Nvidia card from handling my display: I simply boot into GNOME by selecting Wayland instead of Xorg. Since Nvidia does not support Wayland, after logging in, nvidia-smi shows no processes running.
However, I can still use Nvidia for stuff like Tensorflow.
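A quick way to confirm which kind of session you actually landed in is the XDG_SESSION_TYPE variable that logind sets at login. A minimal sketch; the session_status helper is hypothetical, just wrapping that check:

```shell
# Print a hint based on the session type (argument, or $XDG_SESSION_TYPE).
# session_status is a hypothetical helper, not a standard command.
session_status() {
    case "${1:-${XDG_SESSION_TYPE:-unknown}}" in
        wayland) echo "Wayland session: Xorg is not claiming the NVIDIA GPU" ;;
        x11)     echo "Xorg session: check nvidia-smi for Xorg processes" ;;
        *)       echo "Unknown session type" ;;
    esac
}

session_status
```

If it reports x11 even though you picked the Wayland entry on the login screen, GDM has silently fallen back to Xorg, which is exactly the situation Solution 3 below ran into.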
Solution 3:
Let me share my recipe, which helped me on a Razer Blade 15 laptop with Arch Linux and the GNOME desktop environment.
Initially I started GNOME with a Wayland session, which at that time was incompatible with the NVIDIA driver, so naturally I had the integrated graphics adapter for display and the NVIDIA GPU for deep learning. But after a recent update, the GDM session started to fall back to Xorg with the NVIDIA GPU as the primary GPU. The problem was that:
- it reduced available GPU RAM
- it bogged down the whole system during a neural network training
- it increased power consumption (= less battery life)
I ran nvidia-smi after startup. I expected to see No running processes found, but instead I saw a list of Xorg processes using my NVIDIA GPU. That meant GNOME Display Manager was using an Xorg session with the NVIDIA GPU as the primary GPU.
I examined /var/log/Xorg.0.log:
(II) xfree86: Adding drm device (/dev/dri/card1)
(II) systemd-logind: got fd for /dev/dri/card1 226:1 fd 11 paused 0
(II) xfree86: Adding drm device (/dev/dri/card0)
(II) systemd-logind: got fd for /dev/dri/card0 226:0 fd 12 paused 0
(**) OutputClass "nvidia" ModulePath extended to "/usr/lib/nvidia/xorg,/usr/lib/xorg/modules,/usr/lib/xorg/modules"
(**) OutputClass "nvidia" setting /dev/dri/card1 as PrimaryGPU
The (**) marker means that the setting was read from a config file! I found out that the config file was /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf. I changed it to make the Intel integrated graphics adapter the primary GPU:
Section "OutputClass"
Identifier "intel"
MatchDriver "i915"
Driver "modesetting"
Option "PrimaryGPU" "yes" # <<<<<< add this line
EndSection
Section "OutputClass"
Identifier "nvidia"
MatchDriver "nvidia-drm"
Driver "nvidia"
Option "AllowEmptyInitialConfiguration"
# Option "PrimaryGPU" "yes" # <<<<<< comment out this line
ModulePath "/usr/lib/nvidia/xorg"
ModulePath "/usr/lib/xorg/modules"
EndSection
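After rebooting, you can double-check which device Xorg actually picked by filtering the log for those (**) lines again. A minimal sketch; find_primary_gpu is a hypothetical helper, and the path shown is just Xorg's usual default log location:

```shell
# Show only the settings Xorg read from config files (the "(**)" marker)
# that mention PrimaryGPU. find_primary_gpu is a hypothetical helper.
find_primary_gpu() {
    grep -F '(**)' "$1" | grep PrimaryGPU
}

# On a real system:
# find_primary_gpu /var/log/Xorg.0.log
```

If the config change took effect, the surviving PrimaryGPU line should now name the Intel card's DRM device rather than /dev/dri/card1.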