What does "sink output, source output, sink offload, source offload" mean for GPUs?

I'm reading this page to learn how to set up a hybrid GPU configuration. But what do the terms listed in the title mean?


Ah, PRIME, also known as "why does this never work". A lengthy explanation follows...

So, I assume that you have acquired (as I did) a computer with at least two graphics cards. On some systems, these share the computer's graphics outputs and are connected to them via a switching mechanism, usually called a MUX (short for multiplexer), so you can set a parameter in the BIOS (or UEFI) that determines which graphics card is switched to which output.

Most systems, however, have adopted the cheaper and much more complicated alternative of sharing their graphics buffers. This means that, for example, your standard low-power integrated card is always connected to your display, and all the pixels that get drawn on that output have to pass through this card somehow.

If all you are running are lightweight applications, you will probably just use the internal card. But sometimes you want your powerful external card to calculate all the 3D stuff for you, so you have to tell xrandr that

  • The external card is doing some of the work (everything started with the environment variable DRI_PRIME=1 set gets rendered on it).
  • The internal card has to fiddle the data from the external card together with everything that was NOT calculated on the external card, and draw it all on the screen.

This is called "GPU offloading"; in this context the external card is the "offload source" and the internal card is the "offload sink" (so the stream of data goes from source to sink), and you can enable this (if it is not enabled by default) with

xrandr --setprovideroffloadsink source_provider sink_provider

where source_provider and sink_provider are the provider names (or indices) reported by xrandr --listproviders, for example nouveau and Intel.
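
A minimal sketch of what that looks like in practice, assuming the provider names nouveau and Intel from the example above (check xrandr --listproviders for the names on your machine; glxinfo comes from your distribution's mesa-demos/mesa-utils package):

# list the available providers and what they can do
xrandr --listproviders

# make the discrete card (nouveau) render for the integrated
# card (Intel), which stays connected to the display
xrandr --setprovideroffloadsink nouveau Intel

# verify: with DRI_PRIME=1 set, the renderer string should now
# name the discrete card instead of the integrated one
DRI_PRIME=1 glxinfo | grep "OpenGL renderer"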

However, on some systems some of the video outputs are connected to the external card. This means that the internal card, which does all the fiddling-together of the different programs' screen space, has to somehow send its output to the external card, which then just draws the pixels on its outputs. In this context, the data goes from the source on one card to the output on the other, and to enable it you have to use

xrandr --setprovideroutputsource output_provider source_provider

where output_provider is the name of the external card that has the previously inaccessible outputs connected to it, and source_provider is the integrated card, which does the data-fiddling but can't draw to the outputs connected to the external card.
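
Sticking with the example names from above (again, take the real names from xrandr --listproviders; the output name HDMI-1 is just a placeholder for whatever your xrandr reports):

# let the external card (nouveau) scan out what the integrated
# card (Intel) renders
xrandr --setprovideroutputsource nouveau Intel

# the external card's outputs should now appear in plain xrandr;
# enable one of them (replace HDMI-1 with your actual output name)
xrandr --output HDMI-1 --auto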

One last thing: if you have both methods enabled at the same time (which can happen), the stuff for the graphics-heavy applications gets calculated on the "good" card, sent over to the "lame" card, fiddled together with the rest of the screen space, and sometimes sent back to the "good" card, where it is finally drawn on the screen. The drawback of all of this is that all of the "screen-space-fiddling", also known as rendering, is done on the "lame" card, which can be slow.

To get around that, you have to change the card that does all the rendering (called the primary GPU) from the integrated one to the external one, which (to my knowledge) can't be done without restarting the X server, and it requires you to fiddle with the Xorg configuration files.
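
As a sketch of what that fiddling can look like, assuming the nouveau driver from the examples above (the file name is arbitrary, and you have to restart the X server for it to take effect):

# /etc/X11/xorg.conf.d/10-nouveau-primary.conf
Section "OutputClass"
    Identifier "nouveau primary"
    MatchDriver "nouveau"
    Driver "nouveau"
    # mark this card as the primary GPU, so it does the rendering
    Option "PrimaryGPU" "yes"
EndSection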

If you want, I can give you a lot of information on how I did my setup (Arch Linux, Lenovo W530 with the Intel and nouveau drivers, i3wm); otherwise, I would recommend reading man xrandr.