Why are videos rendered by the CPU instead of the GPU?

Well, I know this might sound like a pretty stupid question, but I couldn't find an answer using Google, so yeah ...
So, I know there are technologies like OpenCL and CUDA, but why is the CPU used by default to render, e.g., a video file out of video editing software? It seems counterintuitive to me that the graphics processing unit is not used to process, well, graphics. When I'm playing a video game, the GPU is in charge of producing the image on my screen too, isn't it?

Again, I know this may sound stupid to you. Please be gentle °A°

Edit: I was talking specifically about the video output of NLE software like Premiere Pro


Before HD was a thing, CPUs could handle video decoding easily. When HD became popular about 8 years ago, GPU manufacturers started to implement accelerated video decoding in their chips. You could easily find graphics cards marketed as supporting HD video, among other slogans. Today practically any GPU supports accelerated video decoding, even integrated GPUs like Intel HD Graphics or their predecessor, Intel GMA. Without that addition, your CPU would have a hard time trying to digest 1080p video at an acceptable framerate, not to mention the increased energy consumption. So you're already using accelerated video every day.

Now that GPUs have more and more general-purpose computational power, they are widely used to accelerate video processing too. This trend started around the same time accelerated decoding was introduced. Programs like Badaboom started to gain popularity as it turned out that GPUs are much better at (re)encoding video than CPUs. It couldn't be done earlier, though, because GPUs lacked the necessary general-purpose computational abilities.
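To give a rough idea of what that general-purpose power buys you, here is a minimal CUDA sketch of the kind of per-pixel work an encoder can offload to the GPU: converting an RGB frame to luma, one thread per pixel. The kernel and its name are purely illustrative, not taken from any real encoder.

```cuda
// Illustrative kernel: convert an RGB frame to luma (Y), one thread per pixel.
__global__ void rgb_to_luma(const unsigned char *rgb, unsigned char *luma, int num_pixels)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_pixels) {
        float r = rgb[3 * i + 0];
        float g = rgb[3 * i + 1];
        float b = rgb[3 * i + 2];
        // BT.601 luma weights; every pixel is independent, so thousands of
        // GPU threads can chew through a frame at once.
        luma[i] = (unsigned char)(0.299f * r + 0.587f * g + 0.114f * b);
    }
}
```

Each pixel is independent of its neighbours, which is exactly the kind of workload a GPU eats for breakfast and a CPU has to grind through a handful of pixels at a time.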

But GPUs have been able to scale, rotate and transform pictures since the Middle Ages, so why couldn't we use those features for video processing? Well, those features were never implemented to be used that way, so they were suboptimal for various reasons.

When you program a game, you first upload all the graphics, effects and so on to the GPU, and then you just render polygons and map the appropriate textures onto them. You don't have to send textures every time they're needed; you can load them once and reuse them. When it comes to video processing, you have to constantly feed frames to the GPU, process them and fetch them back to re-encode them on the CPU (remember, we're talking about pre-GPGPU times). This wasn't how GPUs were supposed to work, so performance wasn't great.
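Here's a rough CUDA sketch of that round trip, reusing the rgb_to_luma kernel from the previous snippet. The decode_next_frame and encode_frame helpers are hypothetical stand-ins for a CPU-side decoder and encoder, not functions from any real library; the point is simply that every frame crosses the bus twice.

```cuda
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical CPU-side helpers standing in for a real decoder/encoder.
int  decode_next_frame(unsigned char *rgb_out);   // returns 0 when no frames are left
void encode_frame(const unsigned char *luma_in);

// Kernel from the previous sketch.
__global__ void rgb_to_luma(const unsigned char *rgb, unsigned char *luma, int num_pixels);

void process_video(int width, int height)
{
    size_t rgb_bytes  = (size_t)width * height * 3;
    size_t luma_bytes = (size_t)width * height;

    // Device buffers can at least be reused, but the pixel data itself can't:
    // every single frame has to be shipped to the GPU and hauled back.
    unsigned char *d_rgb, *d_luma;
    cudaMalloc((void **)&d_rgb,  rgb_bytes);
    cudaMalloc((void **)&d_luma, luma_bytes);

    unsigned char *h_rgb  = (unsigned char *)malloc(rgb_bytes);
    unsigned char *h_luma = (unsigned char *)malloc(luma_bytes);

    int threads = 256;
    int blocks  = (width * height + threads - 1) / threads;

    while (decode_next_frame(h_rgb)) {
        cudaMemcpy(d_rgb, h_rgb, rgb_bytes, cudaMemcpyHostToDevice);      // upload the frame
        rgb_to_luma<<<blocks, threads>>>(d_rgb, d_luma, width * height);  // process on the GPU
        cudaMemcpy(h_luma, d_luma, luma_bytes, cudaMemcpyDeviceToHost);   // fetch it back
        encode_frame(h_luma);                                             // re-encode on the CPU
    }

    cudaFree(d_rgb);
    cudaFree(d_luma);
    free(h_rgb);
    free(h_luma);
}
```

Today you can hide part of that transfer cost with pinned memory and asynchronous copies, but the constant back-and-forth itself is exactly what the fixed-function pipelines of that era were never designed for.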

Another thing is that GPUs aren't quality-oriented when it comes to image transformations. When you're playing a game at 40+ fps, you won't really notice slight pixel inaccuracies, and even if you did, game graphics weren't detailed enough for people to care. There are various hacks and tricks used to speed up rendering that can slightly affect quality. Videos are played at fairly high framerates too, so scaling them dynamically at playback is acceptable, but re-encoding or rendering has to produce results that are pixel-perfect, or at least as close to it as possible at a reasonable cost. You can't achieve that without the proper features implemented directly in the GPU.
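To make that trade-off concrete, here's a small CUDA sketch of the kind of cheap filtering a real-time path gets away with: a plain bilinear scale on a single-channel frame, blending only the four nearest source pixels. An offline scaler that cares about pixel accuracy would typically spend many more taps per output pixel (a Lanczos filter, for instance); this is an illustration of the speed/quality trade-off, not anybody's actual implementation.

```cuda
// Cheap real-time scaling: bilinear filtering, i.e. blending just the four
// nearest source pixels. Fast, but softer than the multi-tap filters an
// offline encoder would use when it wants pixel-accurate output.
// Single-channel (grayscale) frame for brevity.
__global__ void scale_bilinear(const unsigned char *src, int src_w, int src_h,
                               unsigned char *dst, int dst_w, int dst_h)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= dst_w || y >= dst_h) return;

    // Map the output pixel back into source coordinates.
    float sx = x * (float)src_w / dst_w;
    float sy = y * (float)src_h / dst_h;
    int x0 = (int)sx, x1 = min(x0 + 1, src_w - 1);
    int y0 = (int)sy, y1 = min(y0 + 1, src_h - 1);
    float fx = sx - x0, fy = sy - y0;

    // Two horizontal blends, then one vertical blend -- four taps in total.
    float top = src[y0 * src_w + x0] * (1.0f - fx) + src[y0 * src_w + x1] * fx;
    float bot = src[y1 * src_w + x0] * (1.0f - fx) + src[y1 * src_w + x1] * fx;
    dst[y * dst_w + x] = (unsigned char)(top * (1.0f - fy) + bot * fy + 0.5f);
}
```

Four taps per pixel is plenty for a game or for playback, but when the result is going to be re-encoded and inspected frame by frame, the shortcuts start to show.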

Nowadays, using GPUs to process video is quite common because we have the required technology in place. Why it's not the default choice is really a question for the program's publisher, not for us; it's their call. Maybe they believe their clients' hardware is better suited to processing video on the CPU, so switching to the GPU would hurt performance, but that's just my guess. Another possibility is that they still treat GPU rendering as an experimental feature that isn't stable enough to be the default yet. You don't want to waste hours rendering your video only to realize something got screwed up by a GPU rendering bug. If you decide to use it anyway, you can't blame the software publisher; it was your decision.