Why is unsigned integer overflow defined behavior but signed integer overflow isn't?

Tags: C++, C, Undefined Behavior, Integer Overflow

C++ Problem Overview


Unsigned integer overflow is well defined by both the C and C++ standards. For example, the C99 standard (§6.2.5/9) states

> A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.

However, both standards state that signed integer overflow is undefined behavior. Again, from the C99 standard (§3.4.3/1)

> An example of undefined behavior is the behavior on integer overflow

Is there an historical or (even better!) a technical reason for this discrepancy?

C++ Solutions


Solution 1 - C++

The historical reason is that most C implementations (compilers) just used whatever overflow behaviour was easiest to implement with the integer representation they used. C implementations usually used the same representation as the CPU, so the overflow behavior followed from the integer representation used by the CPU.

In practice, it is only the representations for signed values that may differ according to the implementation: one's complement, two's complement, sign-magnitude. For an unsigned type there is no reason for the standard to allow variation because there is only one obvious binary representation (the standard only allows binary representation).

Relevant quotes:

C99 6.2.6.1:3:

> Values stored in unsigned bit-fields and objects of type unsigned char shall be represented using a pure binary notation.

C99 6.2.6.2:2:

> If the sign bit is one, the value shall be modified in one of the following ways:
>
> — the corresponding value with sign bit 0 is negated (sign and magnitude);
>
> — the sign bit has the value −(2^N) (two's complement);
>
> — the sign bit has the value −(2^N − 1) (one's complement).
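
As a concrete illustration (mine, not the standard's): in an 8-bit signed type the bit pattern 10000001 evaluates to −(2^7) + 1 = −127 in two's complement, −(2^7 − 1) + 1 = −126 in one's complement, and −1 in sign and magnitude. Same bits, three different values, and consequently three different overflow behaviors.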


Nowadays, all processors use two's complement representation, but signed arithmetic overflow remains undefined and compiler makers want it to remain undefined because they use this undefinedness to help with optimization. See for instance this blog post by Ian Lance Taylor or this complaint by Agner Fog, and the answers to his bug report.
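
As a rough sketch of the kind of optimization this enables (my example, not taken from the linked posts): because signed overflow is undefined, the optimizer may assume it never happens, and mainstream compilers typically fold the first function below to return true at normal optimization levels.

// May be folded to `return true;`: i + 1 > i can only be false if
// i + 1 overflows, and the optimizer assumes signed overflow never happens.
bool always_true(int i) {
    return i + 1 > i;
}

// Unsigned arithmetic wraps, so u + 1 > u is genuinely false when
// u == UINT_MAX; here the comparison must be kept.
bool not_always_true(unsigned u) {
    return u + 1 > u;
}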

Solution 2 - C++

Aside from Pascal's good answer (which I'm sure is the main motivation), it is also possible that some processors cause an exception on signed integer overflow, which of course would cause problems if the compiler had to "arrange for another behaviour" (e.g. use extra instructions to check for potential overflow and calculate differently in that case).

It is also worth noting that "undefined behaviour" doesn't mean "doesn't work". It means that the implementation is allowed to do whatever it likes in that situation. This includes doing "the right thing" as well as "calling the police" or "crashing". Most compilers, when possible, will choose "do the right thing", assuming that is relatively easy to define (in this case, it is). However, if your calculations overflow, it is important to understand what that actually results in, and that the compiler MAY do something other than what you expect (and that this may vary depending on compiler version, optimisation settings, etc.).
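
A small sketch of how the same source can legitimately behave differently (my example; the outcome depends on compiler and flags):

#include <limits.h>
#include <stdio.h>

int main(void) {
    int i = INT_MAX;
    // Undefined behavior: this may wrap to INT_MIN, abort the program
    // (e.g. gcc/clang with -ftrapv or -fsanitize=signed-integer-overflow),
    // or be transformed on the assumption that the overflow never happens.
    printf("%d\n", i + 1);
    return 0;
}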

Solution 3 - C++

First of all, please note that C11 3.4.3, like all examples and footnotes, is not normative text and is therefore not relevant to cite!

The relevant text that states that overflow of integers and floats is undefined behavior is this:

C11 6.5/5:

> If an exceptional condition occurs during the evaluation of an expression (that is, if the result is not mathematically defined or not in the range of representable values for its type), the behavior is undefined.

A clarification regarding the behavior of unsigned integer types specifically can be found here:

C11 6.2.5/9:

> The range of nonnegative values of a signed integer type is a subrange of the corresponding unsigned integer type, and the representation of the same value in each type is the same. A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.

This makes unsigned integer types a special case.
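
A minimal sketch of that modulo reduction in practice (my example, not from the standard):

#include <limits.h>
#include <stdio.h>

int main(void) {
    unsigned int u = UINT_MAX;
    // Reduced modulo UINT_MAX + 1: the result wraps to 0, and this is
    // fully defined behavior, not "overflow" in the standard's sense.
    printf("%u\n", u + 1u);  // prints 0
    return 0;
}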

Also note that there is a special case when any type is converted to a signed type and the old value can no longer be represented. The behavior is then merely implementation-defined, although a signal may be raised.

C11 6.3.1.3:

> 6.3.1.3 Signed and unsigned integers
>
> When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
>
> Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
>
> Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
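
A small sketch of that last clause (my example; the exact result is up to the implementation):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t big = 4294967295u;  // UINT32_MAX, not representable in int32_t
    // Implementation-defined result (or an implementation-defined signal);
    // on the usual two's complement platforms this yields -1.
    int32_t s = (int32_t)big;
    printf("%d\n", s);
    return 0;
}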

Solution 4 - C++

In addition to the other issues mentioned, having unsigned math wrap makes the unsigned integer types behave as abstract algebraic groups (meaning that, among other things, for any pair of values X and Y, there will exist some other value Z such that X+Z will, if properly cast, equal Y, and Y-Z will, if properly cast, equal X). If unsigned values were merely storage-location types and not intermediate-expression types (e.g. if there were no unsigned equivalent of the largest integer type, and arithmetic operations on unsigned types behaved as though they were first converted to larger signed types), then there wouldn't be as much need for defined wrapping behavior; but it's difficult to do calculations in a type which doesn't have, e.g., an additive inverse.

This helps in situations where wrap-around behavior is actually useful - for example with TCP sequence numbers or certain algorithms, such as hash calculation. It may also help in situations where it's necessary to detect overflow, since performing calculations and checking whether they overflowed is often easier than checking in advance whether they would overflow, especially if the calculations involve the largest available integer type.
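
A minimal sketch of the "compute first, check afterwards" idea (my example): with unsigned addition the wrapped sum is smaller than an operand exactly when the mathematical result did not fit.

#include <stdint.h>
#include <stdio.h>

// Returns 1 and stores the wrapped sum if x + y overflowed, else 0.
// This is well defined for unsigned types; the same trick on signed
// integers would itself be undefined behavior.
int add_overflows(uint32_t x, uint32_t y, uint32_t *out) {
    uint32_t sum = x + y;  // wraps modulo 2^32, fully defined
    *out = sum;
    return sum < x;
}

int main(void) {
    uint32_t r;
    printf("%d\n", add_overflows(4000000000u, 400000000u, &r));  // prints 1
    return 0;
}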

Solution 5 - C++

Perhaps another reason why unsigned arithmetic is defined is that unsigned numbers form the integers modulo 2^n, where n is the bit width of the unsigned number. Unsigned numbers are simply integers represented in binary rather than decimal. Performing the standard operations in a modular system is well understood.
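
In other words (my sketch, not from the answer), every unsigned operation agrees with arithmetic modulo 2^n:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    // 8-bit example: 200 + 100 = 300, and 300 mod 256 = 44.
    uint8_t a = 200, b = 100;
    uint8_t sum = (uint8_t)(a + b);  // cast needed: a and b promote to int
    printf("%u\n", (unsigned)sum);   // prints 44
    return 0;
}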

The OP's quote refers to this fact, but also highlights the fact that there is only one unambiguous, logical way to represent unsigned integers in binary. By contrast, signed numbers are most often represented using two's complement, but other choices are possible, as described in the standard (section 6.2.6.2).

Two's complement representation allows certain operations to make more sense in binary format. E.g., incrementing negative numbers is the same as for positive numbers (except under overflow conditions). Some operations at the machine level can be the same for signed and unsigned numbers. However, when interpreting the results of those operations, some cases don't make sense - positive and negative overflow. Furthermore, the overflow results differ depending on the underlying signed representation.
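
A small illustration of that machine-level sharing (my example, assuming a two's complement target, which covers essentially every modern CPU): the same addition produces the same bit pattern under both interpretations.

#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t  s = -5;
    uint32_t u = (uint32_t)s;  // same bit pattern: 0xFFFFFFFB
    // One ADD instruction serves both views; the resulting bits agree:
    // both lines print 0xFFFFFFFC (-4 signed, 4294967292 unsigned).
    printf("0x%08" PRIX32 "\n", (uint32_t)(s + 1));
    printf("0x%08" PRIX32 "\n", u + 1u);
    return 0;
}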

Solution 6 - C++

The most technical reason of all is simply that trying to capture overflow in an unsigned integer requires more moving parts from you (exception handling) and from the processor (exception throwing).

C and C++ won't make you pay for that unless you ask for it by using a signed integer. This isn't a hard-and-fast rule, as you'll see near the end, but it is how they proceed for unsigned integers. In my opinion, this makes signed integers the odd one out, not unsigned, but it's fine that they offer this fundamental difference, as the programmer can still perform well-defined signed operations with overflow. But to do so, you must cast for it.

Because:

  • unsigned integers have well-defined overflow and underflow
  • casts from signed -> unsigned int are well defined: [uint's name]_MAX + 1 is conceptually added to negative values, to map them onto the extended positive number range
  • casts from unsigned -> signed int are well defined: [uint's name]_MAX + 1 is conceptually deducted from positive values beyond the signed type's max, to map them onto negative numbers

You can always perform arithmetic operations with well-defined overflow and underflow behavior, where signed integers are your starting point, albeit in a round-about way, by casting to unsigned integer first then back once finished.

int32_t x = 10;
int32_t y = -50;  

// writes -60 into z: uint32_t(y) is 2^32 - 50; subtracting 10 gives
// 2^32 - 60, which converts back to int32_t as -60 (well defined)
int32_t z = int32_t(uint32_t(y) - uint32_t(x));

Casts between signed and unsigned integer types of the same width are free if the CPU uses two's complement (nearly all do). If for some reason the platform you're targeting doesn't use two's complement for signed integers, you will pay a small conversion price when casting between uint32 and int32.

But be wary when using bit widths smaller than int

Usually, if you are relying on unsigned overflow, you are using a smaller word width, 8-bit or 16-bit. These will promote to signed int at the drop of a hat (C has absolutely insane implicit integer conversion rules; this is one of C's biggest hidden gotchas). Consider:

unsigned char a = 0;  
unsigned char b = 1;
printf("%i", a - b);  // outputs -1, not 255 as you'd expect

To avoid this, you should always cast to the type you want when you are relying on that type's width, even in the middle of an operation where you think it's unnecessary. This casts the intermediate result, giving you both the signedness and the truncation you expected. Casting is almost always free, and in fact your compiler might thank you for doing so, as it can then optimize on your intentions more aggressively.

unsigned char a = 0;  
unsigned char b = 1;
printf("%i", (unsigned char)(a - b));  // cast turns -1 to 255, outputs 255

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | anthonyvd | View Question on Stackoverflow |
| Solution 1 - C++ | Pascal Cuoq | View Answer on Stackoverflow |
| Solution 2 - C++ | Mats Petersson | View Answer on Stackoverflow |
| Solution 3 - C++ | Lundin | View Answer on Stackoverflow |
| Solution 4 - C++ | supercat | View Answer on Stackoverflow |
| Solution 5 - C++ | yth | View Answer on Stackoverflow |
| Solution 6 - C++ | Anne Quinn | View Answer on Stackoverflow |