explain server load
The actual calculation for server load is a little complex, but for a basic understanding it can be simplified. The best way to think about server load is as the average number of processes that are running (or waiting to run) over a given interval. Traditionally, server load is reported as three numbers: averages over the last one, five, and fifteen minutes. Load can come from many sources - actual CPU time, waiting on network buffers, waiting on the disk, and so on.
In the example you gave, your one-minute load is 1.27, your five-minute load is 0.87, and your fifteen-minute load is 0.78 - indicating that whatever the machine has been doing over the last minute is somewhat more intensive than what it was doing over the last five and fifteen minutes. Simple.
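If you want to read those numbers programmatically, here is a minimal Python sketch (assuming a Unix-like system, where os.getloadavg() exposes the same three values that uptime and top display):

```python
import os

# 1-, 5-, and 15-minute load averages (Unix-like systems only).
one_min, five_min, fifteen_min = os.getloadavg()

print(f"1-minute load:  {one_min:.2f}")
print(f"5-minute load:  {five_min:.2f}")
print(f"15-minute load: {fifteen_min:.2f}")

# A short-term average above the longer windows means activity has
# recently picked up, as in the 1.27 / 0.87 / 0.78 example.
if one_min > five_min > fifteen_min:
    print("Load is trending upward.")
elif one_min < five_min < fifteen_min:
    print("Load is trending downward.")
```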
It gets more complicated when multi-core servers are taken into consideration, however. When you have only one processor/core, a load greater than 1 means that you have processes waiting, for one reason or another, instead of being actively run. That's generally a bad thing, as it means that whatever you are doing will take longer than it could. When you have multiple cores, however, you can run more than one process at a time. If you have a two-core server, the load can go up to two before you start having processes waiting for things, up to three for three cores, and so on.
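One way to account for that in code is to divide the load by the core count, so that anything above 1.0 means processes are waiting no matter how many cores the machine has. A rough sketch, again assuming Python on a Unix-like system:

```python
import os

cores = os.cpu_count() or 1          # number of logical cores
one_min, _, _ = os.getloadavg()      # 1-minute load average

# Per-core load: above 1.0 means that, on average, at least one
# process was waiting (for CPU, disk, etc.) during the last minute.
per_core = one_min / cores
print(f"{one_min:.2f} load across {cores} cores = {per_core:.2f} per core")
```

Note that os.cpu_count() reports logical cores, so on machines with hyper-threading the per-core figure will look a bit more optimistic than the physical hardware warrants.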
Most systems should be run at about a half to two-thirds of their total capacity, as a rule of thumb. Below that, the hardware is underutilized. Above that, it may not be able to handle the sudden spikes in activity that most applications see. There are exceptions, of course - some systems should be kept at lower loads, and some can be run at full capacity or higher.
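As a rough illustration of that rule of thumb, you could flag a machine like this (the 0.5 and 0.67 thresholds are just the figures from the paragraph above, not universal constants):

```python
import os

cores = os.cpu_count() or 1
one_min, _, _ = os.getloadavg()
utilization = one_min / cores   # rough fraction of total capacity in use

# Thresholds follow the half-to-two-thirds rule of thumb; tune them
# for systems that need more (or less) headroom for spikes.
if utilization < 0.5:
    print("Hardware is likely underutilized.")
elif utilization <= 0.67:
    print("Load is in the comfortable range.")
else:
    print("Little headroom left for sudden spikes in activity.")
```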
None of these rules are hard and fast, and calculating system load can get quite complicated in real-life situations. But hopefully this gives you a general idea of what those numbers mean.