Why are both little- and big-endian in use?


Computer Science Problem Overview


Why are both little- and big-endian still in use today, after ~40 years of binary computer science? Are there algorithms or storage formats that work better with one and much worse with the other? Wouldn't it be better if we all switched to one and stuck with it?

Computer Science Solutions


Solution 1 - Computer Science

When adding two numbers (on paper or in a machine), you start with the least significant digits and work towards the most significant digits. (The same goes for many other operations.)
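To illustrate (a minimal C sketch; the function name and limb layout are chosen just for this example): multi-word addition has to propagate carries from the least significant end, so a little-endian limb order lets you walk memory in increasing address order.

```c
#include <stdint.h>
#include <stddef.h>

/* Add two multi-word numbers stored little-endian (limb 0 = least significant).
 * The carry naturally flows from low addresses towards high addresses. */
void add_limbs(uint32_t *dst, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint64_t carry = 0;
    for (size_t i = 0; i < n; i++) {      /* walk memory in increasing order */
        uint64_t sum = (uint64_t)a[i] + b[i] + carry;
        dst[i] = (uint32_t)sum;           /* low 32 bits of the partial sum */
        carry = sum >> 32;                /* carry into the next, more significant limb */
    }
}
```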

On the Intel 8088, which had 16-bit registers but an 8-bit data bus, being little-endian allowed such instructions to start operating after the first memory cycle. (Of course, the memory fetches of a word could be done in decreasing rather than increasing address order, but I suspect this would have complicated the design a little.)

On most processors the bus width matches the register width so this no longer confers an advantage.

Big-endian numbers, on the other hand, can be compared starting with the MSB (although many compare instructions actually do a subtraction, which needs to start with the LSB anyway). The sign bit is also very easy to get at.
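As a hedged illustration (the little encoder below is written just for this example): when unsigned values are stored big-endian, a plain byte-wise comparison such as memcmp already yields numeric order.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Store a 32-bit value big-endian (MSB first), regardless of host byte order. */
static void put_be32(uint8_t out[4], uint32_t v)
{
    out[0] = (uint8_t)(v >> 24);
    out[1] = (uint8_t)(v >> 16);
    out[2] = (uint8_t)(v >> 8);
    out[3] = (uint8_t)v;
}

int main(void)
{
    uint8_t a[4], b[4];
    put_be32(a, 0x00010000u);
    put_be32(b, 0x000000FFu);

    /* Byte-wise comparison of the big-endian encodings matches numeric order. */
    printf("%d\n", memcmp(a, b, 4) > 0);  /* prints 1: 0x10000 > 0xFF */
    return 0;
}
```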

> Are there algorithms or storage formats that work better with one and much worse with the other?

No. There are small advantages here and there but nothing major.

I actually think little-endian is more natural and consistent: the significance of a bit is 2 ^ (bit_pos + 8 * byte_pos), whereas with big-endian the significance of a bit is 2 ^ (bit_pos + 8 * (word_size - byte_pos - 1)).
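As a quick check of the little-endian formula (a sketch, with the byte values chosen for illustration): weighting the byte at offset byte_pos by 2^(8 * byte_pos) reconstructs the value, and the weight of each byte depends only on its own position, not on the word size.

```c
#include <stdint.h>
#include <assert.h>

int main(void)
{
    /* The bytes of 0x12345678 laid out little-endian (byte 0 = least significant). */
    uint8_t bytes[4] = { 0x78, 0x56, 0x34, 0x12 };

    /* Little-endian: byte at offset byte_pos contributes bytes[byte_pos] * 2^(8 * byte_pos). */
    uint32_t rebuilt = 0;
    for (int byte_pos = 0; byte_pos < 4; byte_pos++)
        rebuilt |= (uint32_t)bytes[byte_pos] << (8 * byte_pos);

    assert(rebuilt == 0x12345678u);  /* each byte's weight is independent of the word size */
    return 0;
}
```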

> Wouldn't it be better if we all switched to one and stick with it?

Due to the dominance of x86, we've definitely gravitated towards little-endian. The ARM chips in many mobile devices have configurable endianness but are often set to LE to be more compatible with the x86 world. Which is fine by me.

Solution 2 - Computer Science

Little Endian makes typecasts easier. For example, if you have a 16-bit number you can simply treat the same memory address as a pointer to an 8-bit number, as it contains the lowest 8 bits. So you do not need to know the exact data type you are dealing with (although in most cases you do know anyway).
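A minimal sketch of this point (the value and variable names are just illustrative): on a little-endian machine, reading the 16-bit value through a pointer to its first byte yields the least significant 8 bits.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t wide = 0x1234;

    /* Reinterpret the same address as a pointer to its first byte.
     * (Accessing an object through an unsigned char pointer is well-defined in C.) */
    unsigned char *low = (unsigned char *)&wide;

    /* On a little-endian machine this prints 0x34, the least significant byte. */
    printf("0x%02x\n", *low);
    return 0;
}
```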

Big Endian is a bit more human-readable. Bytes are stored in memory in their logical order (most significant first), just as in any human-used number system.

With today's many layers of abstraction, these arguments don't really count for much anymore, though. I think the main reason we still have both is that nobody wants to switch. There is no decisive advantage to either system, so why change anything if your old system works perfectly well?

Solution 3 - Computer Science

Both big and little endian have their advantages and disadvantages. Even if one were clearly superior (which is not the case), there is no way that any legacy architecture would ever be able to switch endianness, so I'm afraid you're just going to have to learn to live with it.

Attributions

All content on this page is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | orlp | View Question on Stackoverflow |
| Solution 1 - Computer Science | Artelius | View Answer on Stackoverflow |
| Solution 2 - Computer Science | Daniel Stelter | View Answer on Stackoverflow |
| Solution 3 - Computer Science | Paul R | View Answer on Stackoverflow |