The difference between GPU and CPU

The GPU is the graphics processing unit. It is the CPU of the video card.

Traditionally, the CPU did all of the graphics processing, until video card manufacturers began including GPUs on their display adapters. The GPU was a dedicated piece of hardware that could perform common graphics routines really fast, similar to the old FPU (a floating-point unit dedicated to performing advanced math routines faster than the CPU could). (In fact, GPUs arrived as video card manufacturers moved to incorporate graphics acceleration into the card itself instead of requiring a separate board, just as the FPU's function was eventually incorporated directly into CPUs.)

Eventually, GPUs outpaced CPUs: a graphics processor could actually have more transistors and run faster (and hotter) than the CPU it served. Graphics card manufacturers realized that the GPU was now a really powerful piece of hardware that often sits idle (for example, when browsing the Internet or editing documents). So, starting with the X1300, ATI's cards included AVIVO, which allowed the user to run video conversion software on the video card's processor instead of only on the slower CPU. Nvidia responded with CUDA, one of the first true GPGPU platforms: basically, a way to use the GPU(s) on a video card as general-purpose supplemental processors for any workload, not just graphics or video.
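To make the GPGPU idea concrete, here is a rough sketch of the programming model CUDA popularized: you write a small "kernel" function that handles one data element, and the GPU runs thousands of copies of it in parallel. This is plain CPU-side Python standing in for GPU code; the function names are illustrative, not CUDA's actual API.

```python
def saxpy_kernel(i, a, x, y, out):
    # One "thread" of work: computes a single element of a*x + y.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # On a real GPU, CUDA would spread these n invocations across
    # thousands of hardware threads; here we just loop on the CPU.
    for i in range(n):
        kernel(i, *args)

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
out = [0.0] * 3
launch(saxpy_kernel, 3, 2.0, x, y, out)
print(out)  # [12.0, 24.0, 36.0]
```

Because each element's result is independent of the others, the work parallelizes trivially, which is exactly the kind of job a GPU's many simple cores are built for.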

Because a GPU is highly optimized for advanced calculations such as floating-point and matrix arithmetic, it can perform tasks like video conversion and post-processing, as well as distributed-computing workloads like BOINC or Folding@Home, much faster than a CPU alone.
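Matrix arithmetic is a good illustration of why these workloads suit a GPU: every cell of a matrix product depends only on one row of the first matrix and one column of the second, so each cell can in principle be computed by its own GPU thread. A minimal pure-Python version (sequential here, but the independence of each cell is what a GPU would exploit):

```python
def matmul(A, B):
    # Each output cell out[i][j] is a dot product of row i of A and
    # column j of B -- independent of every other cell, so on a GPU
    # all cells could be computed simultaneously.
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```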

A modern computer can be quite the powerhouse: with a multi-core CPU and multi-GPU video card(s) that can act as super CPUs, the processing power of today's computers is truly incredible. Even better, manufacturers are making the chips more power-efficient, so that they are really powerful but also draw as little power and generate as little heat as possible when full performance is not needed, giving us the best of both worlds!


The GPU is the Graphics Processing Unit. It is essentially the CPU of your video card (CPU is Central Processing Unit, the "brain", or processor, of a computer). Video cards these days are so complex that they are basically computers in themselves, with their own memory, buses, and processors.

GPUs historically have been relatively special-purpose, designed for maximum power when performing a discrete set of graphics operations on particular types of data primitives (vertices, pixels, etc.). However, companies like Intel, Nvidia, and ATI are starting to push the envelope with more general-purpose GPU components, making it easier than ever before for software developers to utilize the extra processing power available on the video card to perform non-graphics operations. Combined with things like CUDA and other specialized GPU languages, these new chips open up a lot of possibilities.

GPGPU is sort of the headquarters for general-purpose GPU computing. As a user, rather than a programmer, the whole "offload to the GPU" thing doesn't really concern you at this point, aside from situations in which you would use software designed in that manner (not very many pieces of end-user software exist at the current time).