Avoid using the NVIDIA card for Xorg with Plasma

Solution 1:

After discovering the questions and answers here: How to configure iGPU for xserver and nvidia GPU for CUDA work, notably the answer of user890178, and studying the syslog, I finally found that it is not Plasma that does anything specific: the problem is the same for GNOME Shell and plasmashell when using Xorg. With Xorg, the gpu-manager service

/lib/systemd/system/gpu-manager.service

is triggered by the display manager via the symlink

/etc/systemd/system/display-manager.service.wants/gpu-manager.service

and gpu-manager detects the NVIDIA card and writes the file

/usr/share/X11/xorg.conf.d/11-nvidia-prime.conf

which contains

# DO NOT EDIT. AUTOMATICALLY GENERATED BY gpu-manager

Section "OutputClass"
    Identifier "Nvidia Prime"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "IgnoreDisplayDevices" "CRT"
    Option "PrimaryGPU" "Yes"
    ModulePath "/x86_64-linux-gnu/nvidia/xorg"
EndSection
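
This chain can be verified on your own system; the following is a sketch, assuming a systemd-based Ubuntu with gpu-manager from ubuntu-drivers-common (unit names and log locations may differ between releases):

# show the unit file and confirm how it is pulled in
systemctl cat gpu-manager.service
ls -l /etc/systemd/system/display-manager.service.wants/

# inspect what gpu-manager generated and what it logged at boot
cat /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf
cat /var/log/gpu-manager.log          # if present
journalctl -b -u gpu-manager.service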

This file is not used by Wayland, so under Wayland the NVIDIA card is not used. It is, however, used by Xorg sessions of both GNOME Shell on Ubuntu and Plasma, so in fact both will use the NVIDIA card under Xorg.
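
To check which session type is actually running, the session can be inspected from a terminal inside the graphical session; a minimal sketch, assuming the usual environment variables set by pam_systemd:

# prints "x11" for an Xorg session, "wayland" for a Wayland session
echo $XDG_SESSION_TYPE

# the same information via logind ($XDG_SESSION_ID may not be set in every terminal)
loginctl show-session "$XDG_SESSION_ID" -p Type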

The solution is then a variation of the answer of Maksym Ganenko in the same question above, which means replacing /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf with

# DO NOT EDIT. AUTOMATICALLY GENERATED BY gpu-manager

Section "OutputClass"
    Identifier "Nvidia Prime"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "IgnoreDisplayDevices" "CRT"
    # Option "PrimaryGPU" "Yes"   <<< commented out
    ModulePath "/x86_64-linux-gnu/nvidia/xorg"
EndSection


# added 
Section "OutputClass"
    Identifier "intel"
    MatchDriver "i915"
    Driver "modesetting"
    Option "PrimaryGPU" "yes"    
EndSection
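
After a reboot (or a restart of the display manager), one can check from inside the Xorg session that the Intel/modesetting provider is now the primary one; a sketch, assuming glxinfo is available (mesa-utils package):

# the provider list should show the modesetting (Intel) provider as provider 0
xrandr --listproviders

# the renderer string should name the Intel GPU, not the NVIDIA one
glxinfo | grep "OpenGL renderer"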

Additionally, to prevent gpu-manager from overwriting these changes when the next session starts, follow the advice of Oren in the question gpu-manager overwrites xorg.conf and protect the file against changes by running

chattr +i /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf
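
The immutable flag can be checked with lsattr, and it has to be removed again with chattr -i before the file can be edited (for example after a driver update):

# an 'i' in the attribute list confirms the file is immutable
lsattr /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf

# remove the flag again when the file needs to change
chattr -i /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf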

It seems that the screen remained black after adding the two files I mentioned in the question to /etc/X11/xorg.conf.d because, together with the files in /usr/share/X11/xorg.conf.d, the configuration contained contradictory information.
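
Such contradictions can be spotted in the Xorg log, which records every configuration file and directory that was parsed. A sketch; the log location depends on how Xorg is started (/var/log/Xorg.0.log for a root Xorg, ~/.local/share/xorg/Xorg.*.log or the journal for a rootless one):

# which config files/directories Xorg actually read
grep -i "using.*config" /var/log/Xorg.0.log ~/.local/share/xorg/Xorg.*.log 2>/dev/null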

Given the comment of GabrielaGarcia, who astonishingly claimed that what I ask cannot work on a laptop, I feel the need to provide proof that what I asked can work, and that the answer I provided is indeed a means to make it work.

Here is the output of lspci, proving the existence of two graphics cards:

(base) m3088: (~) 505> lspci | egrep "VGA|NVIDIA"
00:02.0 VGA compatible controller: Intel Corporation Device 3e9b
01:00.0 3D controller: NVIDIA Corporation GP107M [GeForce GTX 1050 Ti Mobile] (rev a1)
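
The same command can be extended to show which kernel driver is bound to each of the two devices; on this setup it should report i915 for the Intel controller and nvidia for the 3D controller:

# -k adds the kernel driver in use and the available modules per device
lspci -k | egrep -A 3 "VGA|3D controller"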

Here is the output of ps aux, filtered for Xorg, plasmashell, and the Anaconda Python process running a TensorFlow session. This shows that all of them run happily together, while Plasma and Xorg do not use the NVIDIA card, as desired (see nvidia-smi below):

(base) m3088: (~) 511> ps aux  | egrep "Xorg|plasmashell|anaconda"
roebel   13139  0.9  5.1 17315584 819236 pts/1 Sl+  00:23   0:10 /data/anasynth/anaconda3/bin/python /data/anasynth/anaconda3/bin/ipython
roebel   16198  0.0  0.0  21540  1068 pts/5    S+   00:42   0:00 grep -E Xorg|plasmashell|anaconda
roebel   18886  1.5  1.3 628292 210572 tty2    Sl+  juil.14  24:22 /usr/lib/xorg/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -background none -noreset -keeptty -verbose 3
roebel   19171  2.0  3.4 6576588 561212 ?      Sl   juil.14  33:16 /usr/bin/plasmashell

Here is the output of nvidia-smi, proving that Xorg is not using the NVIDIA card, but the TensorFlow session in Anaconda Python is using it.

(base) m3088: (~) 506> nvidia-smi
Tue Jul 16 00:34:51 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.26       Driver Version: 430.26       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 105...  Off  | 00000000:01:00.0 Off |                  N/A |
| N/A   47C    P8    N/A /  N/A |    123MiB /  4042MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0     13139      C   /data/anasynth/anaconda3/bin/python          109MiB |
+-----------------------------------------------------------------------------+
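
For a continuous check while logging sessions in and out, nvidia-smi can simply be left running in a loop; Xorg and plasmashell should never appear in the process list, while the CUDA process does:

watch -n 1 nvidia-smi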

I am ready to provide screenshots to show that all this happens on a laptop.