The real difference between "int" and "unsigned int"

C

C Problem Overview


int:

The 32-bit int data type can hold integer values in the range of −2,147,483,648 to 2,147,483,647. You may also refer to this data type as signed int or signed.

unsigned int:

The 32-bit unsigned int data type can hold integer values in the range of 0 to 4,294,967,295. You may also refer to this data type simply as unsigned.
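You can verify these limits on your own machine (a minimal sketch; the exact values assume a 32-bit int, which is common but not guaranteed by the standard):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* the ranges quoted above, as reported by <limits.h> */
    printf("int:          %d to %d\n", INT_MIN, INT_MAX);
    printf("unsigned int: 0 to %u\n", UINT_MAX);
    return 0;
}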

Ok, but, in practice:

int x = 0xFFFFFFFF;
unsigned int y = 0xFFFFFFFF;
printf("%d, %d, %u, %u", x, y, x, y);
// -1, -1, 4294967295, 4294967295

no difference, O.o. I'm a bit confused.

C Solutions


Solution 1 - C

Hehe. You don't see a difference here because you're telling printf what type to expect, and it simply reinterprets the same bits accordingly.

Try this on for size instead:

unsigned int x = 0xFFFFFFFF;
int y = 0xFFFFFFFF;   /* out of range for int: implementation-defined, -1 on two's complement */

if (x < 0)            /* always false: an unsigned value can never be negative */
    printf("one\n");
else
    printf("two\n");
if (y < 0)            /* true here, since y holds -1 */
    printf("three\n");
else
    printf("four\n");

Solution 2 - C

Yes, because in your case they use the same representation.

The bit pattern 0xFFFFFFFF happens to look like -1 when interpreted as a 32-bit signed integer and as 4294967295 when interpreted as a 32-bit unsigned integer.

It's the same as char c = 65. If you interpret it as an integer, it's 65. If you interpret it as a character, it's A.
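
To see that reinterpretation directly (a minimal sketch, assuming an ASCII character set):

#include <stdio.h>

int main(void)
{
    char c = 65;
    printf("%d\n", c);  /* read as an integer: prints 65 */
    printf("%c\n", c);  /* read as a character: prints A */
    return 0;
}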


As R and pmg point out, technically it's undefined behavior to pass arguments that don't match the format specifiers. So the program could do anything (from printing random values to crashing, to printing the "right" thing, etc).

The standard points it out in 7.19.6.1-9

> If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined.

Solution 3 - C

There is no difference between the two in how they are stored in memory or in registers. There is no signed or unsigned version of an int register, and no sign information is stored alongside the value. The difference only becomes relevant when you perform math operations: the CPU provides signed and unsigned versions of its math instructions, and the signedness of the type tells the compiler which version to use.
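
For example, division compiles to different machine instructions depending on signedness, so the same bit pattern can produce different answers (a sketch assuming 32-bit two's complement int, as on most current hardware):

#include <stdio.h>

int main(void)
{
    unsigned int u = 0xFFFFFFFFu;   /* all bits set */
    int s = -1;                     /* the same bit pattern on two's complement */
    printf("%u\n", u / 2);          /* unsigned divide: prints 2147483647 */
    printf("%d\n", s / 2);          /* signed divide: prints 0 */
    return 0;
}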

Solution 4 - C

The problem is that you invoked Undefined Behaviour.


When you invoke UB anything can happen.

The assignments are ok; there is an implicit conversion in the first line

int x = 0xFFFFFFFF;
unsigned int y = 0xFFFFFFFF;

However, the call to printf is not ok:

printf("%d, %d, %u, %u", x, y, x, y);

It is UB to mismatch the % specifier and the type of the argument.
In your case you specify 2 ints then 2 unsigned ints, in that order, but provide 1 int, 1 unsigned int, 1 int, and 1 unsigned int.
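
With the variables declared as above, a matching call would be well defined; one correct sketch is:

printf("%d, %u, %d, %u", x, y, x, y);

Each specifier now names the actual type of its argument.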


Don't do UB!

Solution 5 - C

The internal representation of int and unsigned int is the same.

Therefore, when you pass the same format string to printf it will be printed as the same.

However, there are differences when you compare them. Consider:

int x = 0x7FFFFFFF;   /* INT_MAX */
int y = 0xFFFFFFFF;   /* -1 on two's complement */
x < y // false
x > y // true
(unsigned int) x < (unsigned int) y // true
(unsigned int) x > (unsigned int) y // false

This can also be a caveat, because when comparing a signed and an unsigned integer, one of them will be implicitly converted to match the other's type.
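
That caveat bites in code like this (a minimal sketch; the "wrong" branch runs because the signed operand is converted to unsigned):

#include <stdio.h>

int main(void)
{
    int s = -1;
    unsigned int u = 1;
    if (s < u)                /* s is converted to unsigned and becomes UINT_MAX */
        printf("s < u\n");
    else
        printf("s >= u\n");   /* this branch runs */
    return 0;
}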

Solution 6 - C

He is asking about the real difference. When you are talking about undefined behavior, you are on the level of guarantees provided by the language specification, which is far from day-to-day reality. To understand the real difference, check this snippet (strictly speaking, the signed right shift here is implementation-defined rather than portable, but it is perfectly predictable on your favorite compiler):

#include <stdio.h>

int main()
{
    int i1 = ~0;            /* all bits set: -1 */
    int i2 = i1 >> 1;       /* arithmetic shift: the sign bit is copied in */
    unsigned u1 = ~0;       /* all bits set: UINT_MAX */
    unsigned u2 = u1 >> 1;  /* logical shift: a zero bit is shifted in */
    printf("int         : %X -> %X\n", i1, i2); /* typically FFFFFFFF -> FFFFFFFF */
    printf("unsigned int: %X -> %X\n", u1, u2); /* typically FFFFFFFF -> 7FFFFFFF */
}

Solution 7 - C

The binary representation is the key. An example, unsigned int in hex:

 0xFFFFFFFF = 1111 1111 1111 1111 1111 1111 1111 1111

which represents 4,294,967,295, a positive base-ten number. But we also need a way to represent negative numbers, so the brains decided on two's complement. In short, they took the leftmost bit and decided that when it is 1 the number is negative, and when it is 0 the number is non-negative. Now let's look at what happens:

0000 0000 0000 0000 0000 0000 0000 0011 = 3

Adding to that number, we eventually reach

0111 1111 1111 1111 1111 1111 1111 1111 = 2,147,483,647

the highest positive number a signed integer can hold. Adding 1 more overflows into the sign bit:

1000 0000 0000 0000 0000 0000 0000 0000 = -2,147,483,648

the most negative value. Counting up from there, the all-ones pattern comes out as

1111 1111 1111 1111 1111 1111 1111 1111 = -1

So I guess in short we could say the difference is that the one allows for negative numbers and the other does not, because of the sign bit: the leftmost, or most significant, bit.
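
You can check those boundary values directly (a sketch assuming the 32-bit two's complement int described above):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("%d\n", INT_MAX);   /* 2147483647, bit pattern 0x7FFFFFFF */
    printf("%d\n", INT_MIN);   /* -2147483648, bit pattern 0x80000000 */
    printf("%d\n", -1);        /* bit pattern 0xFFFFFFFF */
    return 0;
}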

Solution 8 - C

The type just tells you what the bit pattern is supposed to represent. The bits are only what you make of them: the same sequence can be interpreted in different ways.

Solution 9 - C

The printf function interprets the value that you pass it according to the format specifier at the matching position. If you tell printf that you are passing an int, but pass an unsigned instead, printf will re-interpret one as the other and print the results that you see.
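
If you actually want to view the same bits both ways, an explicit conversion is well defined, unlike the specifier mismatch (a minimal sketch, assuming 32-bit int):

#include <stdio.h>

int main(void)
{
    int v = -1;
    printf("%d\n", v);                /* -1 */
    printf("%u\n", (unsigned int)v);  /* 4294967295: same bits, explicit conversion */
    return 0;
}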

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Fabricio (View Question on Stackoverflow)
Solution 1 - C: cha0site (View Answer on Stackoverflow)
Solution 2 - C: cnicutar (View Answer on Stackoverflow)
Solution 3 - C: Nathan Day (View Answer on Stackoverflow)
Solution 4 - C: pmg (View Answer on Stackoverflow)
Solution 5 - C: Rafał Rawicki (View Answer on Stackoverflow)
Solution 6 - C: AlexT (View Answer on Stackoverflow)
Solution 7 - C: pgreinhard (View Answer on Stackoverflow)
Solution 8 - C: maiwald (View Answer on Stackoverflow)
Solution 9 - C: Sergey Kalinichenko (View Answer on Stackoverflow)