Why are both little- and big-endian in use?

When adding two numbers (on paper or in a machine), you start with the least significant digits and work towards the most significant digits. (The same goes for many other operations.)
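
To make the "start at the least significant end" point concrete, here is a minimal C sketch (the function name and the byte-array representation are my own choices, not anything standard) of adding two multi-byte numbers stored with the least significant byte first; the carry has to ripple from the low byte upward, so the loop walks memory in increasing address order:

```c
#include <stdint.h>
#include <stddef.h>

/* Add two n-byte unsigned integers stored little-endian (byte 0 is the
 * least significant byte).  The carry propagates from low to high, so we
 * simply walk the arrays in increasing address order. */
void add_le(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
{
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned sum = a[i] + b[i] + carry;
        dst[i] = (uint8_t)sum;   /* keep the low 8 bits of the partial sum */
        carry  = sum >> 8;       /* carry into the next, more significant byte */
    }
}
```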

On the Intel 8088, which had 16-bit registers but an 8-bit data bus, being little-endian allowed such instructions to start operating after the first memory cycle, since the low byte arrived first. (Of course, it would have been possible to do the memory fetches of a word in decreasing rather than increasing address order, but I suspect that would have complicated the design a little.)

On most processors the bus width matches the register width, so this no longer confers an advantage.

Big-endian numbers, on the other hand, can be compared starting with the MSB (although many compare instructions actually do a subtract, which needs to start with the LSB anyway). The sign bit is also very easy to get at, since it sits in the first byte.
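
To illustrate the comparison point with a small sketch (assuming fixed-width unsigned values; `cmp_be_u64` is my own name): when values are stored big-endian, a plain left-to-right byte comparison such as memcmp already gives numeric order, which would not work for little-endian storage.

```c
#include <stdint.h>
#include <string.h>

/* Compare two 64-bit unsigned values stored as big-endian byte arrays.
 * The most significant byte comes first, so memcmp's left-to-right byte
 * comparison matches numeric order (returns <0, 0 or >0). */
int cmp_be_u64(const uint8_t a[8], const uint8_t b[8])
{
    return memcmp(a, b, 8);
}
```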

Are there algorithms or storage formats that work better with one and much worse with the other?

No. There are small advantages here and there but nothing major.

I actually think little-endian is more natural and consistent: the significance of a bit is 2^(bit_pos + 8 * byte_pos), whereas with big-endian the significance of a bit is 2^(bit_pos + 8 * (word_size - byte_pos - 1)).
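
Spelled out as code (the helper names are mine, purely illustrative): the little-endian weight of a bit needs only its byte offset, while the big-endian weight also needs the word size.

```c
/* Power-of-two exponent of a bit, given its position inside a byte
 * (bit_pos) and the byte's offset within the word (byte_pos), for a
 * word of word_size bytes. */

/* Little-endian: grows uniformly with the byte offset, independent of
 * the word size. */
unsigned le_exponent(unsigned bit_pos, unsigned byte_pos)
{
    return bit_pos + 8 * byte_pos;
}

/* Big-endian: also depends on the total word size. */
unsigned be_exponent(unsigned bit_pos, unsigned byte_pos, unsigned word_size)
{
    return bit_pos + 8 * (word_size - byte_pos - 1);
}
```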

Wouldn't it be better if we all switched to one and stuck with it?

Due to the dominance of x86, we've definitely gravitated towards little-endian. The ARM chips in many mobile devices have configurable endianness but are often set to LE to be more compatible with the x86 world. Which is fine by me.


Little Endian makes typecasts easier. For example, if you have a 16-bit number you can simply treat the same memory address as a pointer to an 8-bit number, as it contains the lowest 8 bits. So you do not need to know the exact data type you are dealing with (although in most cases you do know anyway).
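
A minimal sketch of that, assuming a little-endian machine (inspecting an object through an unsigned-char-sized pointer is well-defined in C; the value 0x1234 is just an example):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t value = 0x1234;

    /* On a little-endian machine the first byte of the 16-bit object is
     * its least significant byte, so this reads 0x34; on a big-endian
     * machine it would read 0x12 instead. */
    uint8_t low = *(const uint8_t *)&value;

    printf("first byte: 0x%02x\n", low);
    return 0;
}
```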

Big Endian is a bit more human-readable. Bytes are stored in memory in the order they are written (most significant first), just like in any human-used number system.
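
One way to see the difference for yourself is to dump the bytes of a constant and compare them with how you would write the number on paper (a small sketch; what it prints depends on the machine it runs on):

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t value = 0x12345678;
    const uint8_t *bytes = (const uint8_t *)&value;

    /* On a big-endian machine this prints "12 34 56 78", matching the
     * written form; on a little-endian machine it prints "78 56 34 12". */
    for (int i = 0; i < 4; i++)
        printf("%02x ", bytes[i]);
    printf("\n");
    return 0;
}
```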

In times of many, many abstraction layers, these arguments don't really count anymore, though. I think the main reason we still have both is that nobody wants to switch. There is no compelling reason to prefer either system, so why change anything if your old system works perfectly well?