# What is the difference between signed and unsigned int

## C Problem Overview

What is the difference between signed and unsigned int?

## C Solutions

## Solution 1 - C

As you are probably aware, `int`s are stored internally in binary. Typically an `int` contains 32 bits, but in some environments it might contain 16 or 64 bits (or even a different number, usually but not necessarily a power of two).

But for this example, let's look at 4-bit integers. Tiny, but useful for illustration purposes.

Since there are four bits in such an integer, it can assume one of 16 values; 16 is two to the fourth power, or 2 times 2 times 2 times 2. What are those values? The answer depends on whether this integer is a `signed int` or an `unsigned int`. With an `unsigned int`, the value is never negative; there is no sign associated with the value. Here are the 16 possible values of a four-bit `unsigned int`:

```
bits value
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9
1010 10
1011 11
1100 12
1101 13
1110 14
1111 15
```

... and here are the 16 possible values of a four-bit `signed int`:

```
bits value
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 -8
1001 -7
1010 -6
1011 -5
1100 -4
1101 -3
1110 -2
1111 -1
```

As you can see, for `signed int`s the most significant bit is `1` if and only if the number is negative. That is why, for `signed int`s, this bit is known as the "sign bit".

## Solution 2 - C

In layman's terms, an unsigned int is an integer that cannot be negative, and thus has a higher range of positive values it can assume. A signed int is an integer that can be negative, but has a lower positive range in exchange for the negative values it can assume.

## Solution 3 - C

`int` and `unsigned int` are two distinct integer types. (`int` can also be referred to as `signed int`, or just `signed`; `unsigned int` can also be referred to as `unsigned`.)

As the names imply, `int` is a *signed* integer type, and `unsigned int` is an *unsigned* integer type. That means that `int` is able to represent negative values, and `unsigned int` can represent only non-negative values.

The C language imposes some requirements on the ranges of these types. The range of `int` must be at least `-32767` .. `+32767`, and the range of `unsigned int` must be at least `0` .. `65535`. This implies that both types must be at least 16 bits. They're 32 bits on many systems, or even 64 bits on some. `int` typically has an extra negative value due to the two's-complement representation used by most modern systems.

Perhaps the most important difference is the behavior of signed vs. unsigned arithmetic. For signed `int`, overflow has undefined behavior. For `unsigned int`, there is no overflow; any operation that yields a value outside the range of the type wraps around, so for example `UINT_MAX + 1U == 0U`.

Any integer type, either signed or unsigned, models a subrange of the infinite set of mathematical integers. As long as you're working with values within the range of a type, everything works. When you approach the lower or upper bound of a type, you encounter a discontinuity, and you can get unexpected results. For signed integer types, the problems occur only for very large negative and positive values, below `INT_MIN` and above `INT_MAX`. For unsigned integer types, problems occur for very large positive values **and at zero**. This can be a source of bugs. For example, this is an infinite loop:

```
for (unsigned int i = 10; i >= 0; i--) {
    printf("%u\n", i);
}
```

because `i` is *always* greater than or equal to zero; that's the nature of unsigned types. (Inside the loop, when `i` is zero, `i--` sets its value to `UINT_MAX`.)

## Solution 4 - C

Sometimes we know in advance that the value stored in a given integer variable will always be positive, for example when it is being used only to count things. In such a case we can declare the variable to be unsigned, as in `unsigned int num_students;`. With such a declaration, the range of permissible integer values (for a 32-bit `int`) shifts from -2147483648 .. +2147483647 to 0 .. 4294967295. Thus, declaring an integer as unsigned almost doubles the size of the largest value it can otherwise hold.

## Solution 5 - C

In practice, there are two differences:

- **printing** (e.g. with `cout` in C++ or `printf` in C): the unsigned integer bit representation is interpreted as a non-negative integer by print functions.
- **ordering**: the ordering depends on the signed or unsigned specification.

This code can identify which one a type is, using the ordering criterion:

```
#include <stdio.h>

int main(void) {
    char a = 0;
    a--;
    if (0 < a)
        printf("unsigned");
    else
        printf("signed");
    return 0;
}
```

`char` is considered `signed` in some compilers and `unsigned` in others. The code above determines which one your compiler uses, via the ordering criterion. If `a` is unsigned, after `a--` it will be greater than `0`, but if it is `signed` it will be less than zero. In both cases, however, the bit representation of `a` is the same; that is, in both cases `a--` makes the same change to the bit representation.