What's the difference between load average and CPU load?

This site does a good job of explaining it. Basically, load average is the amount of demand on your CPU(s), averaged over the past 1, 5, and 15 minutes. Generally you want this number to be below the number of CPUs/cores you have. A load of 1.0 on a single-core machine means the CPU is being used to its maximum, and anything above that means tasks are queueing for it.
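As a quick sketch of that rule of thumb, you can read the three load averages and compare them against your core count (this uses Python's `os.getloadavg()`, which is available on Unix-like systems; the thresholds are just the heuristic described above, not a hard rule):

```python
import os

# Compare the 1-, 5-, and 15-minute load averages against the number
# of CPU cores to see whether runnable tasks are likely queueing.
load1, load5, load15 = os.getloadavg()
cores = os.cpu_count()

print(f"load averages: {load1:.2f} {load5:.2f} {load15:.2f} over {cores} core(s)")
if load1 > cores:
    print("1-minute load exceeds core count: tasks are queueing for CPU")
else:
    print("1-minute load is within CPU capacity")
```

On a 4-core machine, a load average of 4.0 is the analogue of 1.0 on a single core: fully busy, but nothing waiting.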

The CPU line in your top output is the current usage, broken down by type of time: user, system, nice, idle, I/O wait, and so on.
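To see where those percentages come from, here is a rough sketch (Linux-only, since it reads `/proc/stat`): top samples the kernel's cumulative per-category tick counters twice and reports each category's share of the delta. The one-second sample interval and the label names are illustrative:

```python
import time

def read_cpu_ticks():
    """Read the cumulative tick counters from the aggregate 'cpu' line
    of /proc/stat: user, nice, system, idle, iowait, irq, softirq, steal."""
    with open("/proc/stat") as f:
        fields = f.readline().split()
    return [int(x) for x in fields[1:]]

before = read_cpu_ticks()
time.sleep(1)  # sample interval; top uses its refresh delay here
after = read_cpu_ticks()

deltas = [b - a for a, b in zip(before, after)]
total = sum(deltas) or 1  # avoid division by zero on a 0-tick interval

# Labels in /proc/stat field order, using top's abbreviations.
names = ["us", "ni", "sy", "id", "wa", "hi", "si", "st"]
for name, d in zip(names, deltas):
    print(f"{name}: {100.0 * d / total:.1f}%")
```

On an idle machine most of the time lands in `id`; a CPU-bound process shifts it into `us` or `sy`.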


What Inigoesdr and the site he/she points to write is more or less correct, but remember that the "load average" isn't really a "regular" mathematical average; it's an exponentially damped/weighted moving average.
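Concretely, the Linux kernel samples the number of active tasks every few seconds and decays the running value toward that sample. A simplified floating-point sketch of that update (the real kernel uses fixed-point arithmetic, and the 5-second interval and one-busy-task workload here are just for illustration):

```python
import math

SAMPLE_INTERVAL = 5.0  # seconds between samples

def update_load(old_load: float, active_tasks: int, period: float) -> float:
    """One exponentially damped update step for the `period`-second
    load average (period is 60, 300, or 900 for the 1/5/15-minute values)."""
    decay = math.exp(-SAMPLE_INTERVAL / period)
    return old_load * decay + active_tasks * (1.0 - decay)

# Simulate one continuously runnable task for 60 seconds, starting from
# an idle machine, and track the 1-minute average.
load = 0.0
for _ in range(12):  # 12 samples * 5 s = 60 s
    load = update_load(load, active_tasks=1, period=60.0)
print(f"1-minute load after 60 s of one busy task: {load:.2f}")  # ≈ 0.63
```

Note the consequence: after a full minute of 100% single-task load, the 1-minute average reads about 0.63 (that is, 1 - e⁻¹), not 1.0. The damping means the number approaches the true level asymptotically rather than being a plain window average.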

This is a very good and in-depth article on the topic of CPU percentage and load average, and how they are calculated in Linux. Wikipedia also has a good article on it (explaining, for example, some differences between load average on Linux and on most UNIX systems).