Why is network-byte-order defined to be big-endian?
Networking Problem Overview
As written in the heading, my question is, why does TCP/IP use big endian encoding when transmitting data and not the alternative little-endian scheme?
Networking Solutions
Solution 1 - Networking
RFC 1700 stated that it must be so, and defined network byte order as big-endian:
> The convention in the documentation of Internet Protocols is to express numbers in decimal and to picture data in "big-endian" order [COHEN]. That is, fields are described left to right, with the most significant octet on the left and the least significant octet on the right.
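To make that convention concrete, here is a minimal C sketch (assuming a POSIX-style `arpa/inet.h` providing `htons()`) that shows how a 16-bit port number is laid out as octets in network byte order, most significant octet first:

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htons() on POSIX systems */

int main(void)
{
    /* A 16-bit TCP port number: 8080 decimal == 0x1F90 hexadecimal. */
    uint16_t port = 8080;

    /* htons() converts from host byte order to network byte order
     * (big-endian), so the most significant octet comes first. */
    uint16_t wire = htons(port);

    /* Inspect the octets exactly as they would appear on the wire. */
    const unsigned char *octets = (const unsigned char *)&wire;
    printf("on the wire: 0x%02X 0x%02X\n", octets[0], octets[1]);
    /* Prints "on the wire: 0x1F 0x90" on both little- and big-endian hosts. */

    return 0;
}
```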
The [COHEN] reference they make is to Cohen, D., "On Holy Wars and a Plea for Peace", Computer. The abstract can be found at IEN-137 or on this IEEE page.
Summary:
> Which way is chosen does not make too much difference. It is more important to agree upon an order than which order is agreed upon.
The paper concludes that either the big-endian or the little-endian scheme would have been possible. Neither is inherently better or worse, and either can be used in place of the other, as long as the choice is consistent across the whole system/protocol.
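As an illustration of that point, the sketch below (again assuming POSIX `htonl()`/`ntohl()`) shows how a sender and a receiver stay consistent by always converting to and from the agreed network order, regardless of each host's native endianness:

```c
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>   /* htonl()/ntohl() on POSIX systems */

int main(void)
{
    uint32_t value = 0x0A0B0C0D;

    /* Sender: convert from host order to the agreed network order
     * before writing the value onto the wire. */
    uint32_t on_wire = htonl(value);

    /* Receiver: convert from network order back to its own host order
     * after reading the value off the wire. */
    uint32_t received = ntohl(on_wire);

    /* The round trip is lossless on any pair of hosts, whatever their
     * native endianness, because both ends agree on the wire order. */
    printf("sent 0x%08X, received 0x%08X\n",
           (unsigned)value, (unsigned)received);

    return 0;
}
```

The same pattern would work just as well if the agreed order were little-endian; what matters is that both ends convert to and from the same wire order.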