Why is network-byte-order defined to be big-endian?

Networking, Network Protocols, Endianness, Tcp Ip

Networking Problem Overview


As the title says: why does TCP/IP use big-endian encoding when transmitting data, rather than the alternative little-endian scheme?

Networking Solutions


Solution 1 - Networking

RFC 1700 stated it must be so (and defined network byte order as big-endian).

> The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.

The reference they make is to

Cohen, D., "On Holy Wars and a Plea for Peace", Computer.

The abstract can be found at IEN-137 or on this IEEE page.


Summary:

> Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.

The paper concludes that either scheme would have worked. Neither big-endian nor little-endian is inherently better or worse; either can be used in place of the other, as long as it is applied consistently across the system/protocol.
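In practice, the consequence for application code is that multi-byte fields are converted to network (big-endian) order before being written to the wire and converted back on receipt, typically with the standard `htonl`/`htons`/`ntohl`/`ntohs` functions. Below is a minimal C sketch (the example value is arbitrary) showing that after `htonl()` the most significant octet comes first in memory, exactly as the RFC describes:

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>  /* htonl, ntohl */

int main(void) {
    uint32_t host_value = 0x0A0B0C0D;         /* value in host byte order */
    uint32_t net_value  = htonl(host_value);  /* big-endian, ready for the wire */

    /* Inspect the bytes: after htonl(), the most significant octet (0x0A)
       sits first in memory, i.e. leftmost on the wire. */
    unsigned char *bytes = (unsigned char *)&net_value;
    printf("wire order: %02X %02X %02X %02X\n",
           bytes[0], bytes[1], bytes[2], bytes[3]);

    /* Converting back with ntohl() recovers the original host value,
       regardless of whether the host itself is big- or little-endian. */
    printf("round trip ok: %s\n",
           ntohl(net_value) == host_value ? "yes" : "no");
    return 0;
}
```

On a big-endian host these conversions are no-ops; on a little-endian host they swap the bytes. Either way, the bytes that appear on the network are identical, which is the whole point of agreeing on a single order.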

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Neji | View Question on Stackoverflow |
| Solution 1 - Networking | Anirudh Ramanathan | View Answer on Stackoverflow |