C/C++: Why use unsigned char for binary data?

C++ · C · Character Encoding · Bytebuffer · Rawbytestring

C++ Problem Overview


Is it really necessary to use unsigned char to hold binary data, as some libraries which work on character encoding or binary buffers do? To make sense of my question, have a look at the code below:

#include <stdio.h>
#include <string.h>

int main(void) {
    char c[5], d[5];
    c[0] = 0xF0;
    c[1] = 0xA4;
    c[2] = 0xAD;
    c[3] = 0xA2;
    c[4] = '\0';

    printf("%s\n", c);
    memcpy(d, c, 5);
    printf("%s\n", d);
    return 0;
}

Both printf calls output 𤭢 correctly, where f0 a4 ad a2 is the UTF-8 encoding of the Unicode code point U+24B62 (𤭢) in hex.

Even memcpy correctly copied the bits held by the chars.

What reasoning could possibly advocate the use of unsigned char instead of a plain char?

In other related questions unsigned char is highlighted because it is the only (byte/smallest) data type which is guaranteed by the C specification to have no padding. But as the above example shows, the output doesn't seem to be affected by any such padding.

I have used VC++ Express 2010 and MinGW to compile the above. Although VC gave the warning

warning C4309: '=' : truncation of constant value

the output doesn't seem to reflect that.

P.S. This could be marked a possible duplicate of https://stackoverflow.com/questions/653336/should-a-buffer-of-bytes-be-signed-or-unsigned-char-buffer?rq=1 but my intent is different. I am asking why something which seems to work just fine with char should instead be typed as unsigned char?

Update: To quote from N3337,

Section 3.9 Types

> 2 For any object (other than a base-class subobject) of trivially copyable type T, whether or not the object holds a valid value of type T, the underlying bytes (1.7) making up the object can be copied into an array of char or unsigned char. If the content of the array of char or unsigned char is copied back into the object, the object shall subsequently hold its original value.

In view of the above fact, and given that my original example was on an Intel machine where char defaults to signed char, I am still not convinced that unsigned char should be preferred over char.

Anything else?

C++ Solutions


Solution 1 - C++

In C the unsigned char data type is the only data type that has all of the following three properties simultaneously:

  • it has no padding bits; that is, all storage bits contribute to the value of the data
  • no bitwise operation starting from a value of that type, when converted back into that type, can produce overflow, trap representations or undefined behavior
  • it may alias other data types without violating the "aliasing rules"; that is, access to the same data through a pointer that is typed differently is guaranteed to see all modifications

If these are the properties of a "binary" data type you are looking for, you definitely should use unsigned char.

For the second property we need a type that is unsigned. For unsigned types, all conversions are defined with modulo arithmetic, here modulo UCHAR_MAX+1, which is 256 on the vast majority of architectures. Conversion of any wider value to unsigned char thereby just corresponds to truncation to the least significant byte.

The two other character types generally don't work the same way. signed char is signed, so conversion of values that don't fit into it is not well defined. char is not fixed to be signed or unsigned, so on a particular platform to which your code is ported it might be signed even if it is unsigned on yours.

Solution 2 - C++

You'll get most of your problems when comparing the contents of individual bytes:

char c[5];
c[0] = 0xff;
/*blah blah*/
if (c[0] == 0xff)
{
    printf("good\n");
}
else
{
    printf("bad\n");
}

can print "bad", because, depending on your compiler, c[0] will be sign extended to -1, which is not any way the same as 0xff

Solution 3 - C++

The plain char type is problematic and shouldn't be used for anything but strings. The main problem with char is that you can't know whether it is signed or unsigned: this is implementation-defined. That makes char different from int etc.; int is always guaranteed to be signed.

> Although VC gave the warning ... truncation of constant value

It is telling you that you are trying to store int literals inside char variables. This might be related to the signedness: if you try to store an integer with value > 0x7F inside a signed character, unexpected things might happen. Formally, this conversion is implementation-defined in C; practically, you'd just get a weird output if attempting to print the result as an integer value stored inside a (signed) char.

In this specific case, the warning shouldn't matter.

EDIT :

> In other related questions unsigned char is highlighted because it is the only (byte/smallest) data type which is guaranteed to have no padding by the C-specification.

In theory, all integer types except unsigned char and signed char are allowed to contain "padding bits", as per C11 6.2.6.2:

> "For unsigned integer types other than unsigned char, the bits of the > object representation shall be divided into two groups: value bits and > padding bits (there need not be any of the latter)." > > "For signed integer types, the bits of the object representation shall > be divided into three groups: value bits, padding bits, and the sign > bit. There need not be any padding bits; signed char shall not have > any padding bits."

The C standard is intentionally vague and fuzzy, allowing these theoretical padding bits because:

  • It allows different symbol tables than the standard 8-bit ones.
  • It allows implementation-defined signedness and weird signed integer formats such as one's complement or "sign and magnitude".
  • An integer may not necessarily use all bits allocated.

However, in the real world outside the C standard, the following applies:

  • Symbol tables are almost certainly 8 bits (UTF-8 or ASCII). Some weird exceptions exist, but clean implementations use the standard type wchar_t when implementing symbol tables larger than 8 bits.
  • Signedness is always two's complement.
  • An integer always uses all bits allocated.

So there is no real reason to use unsigned char or signed char just to dodge some theoretical scenario in the C standard.

Solution 4 - C++

Bytes are usually intended as unsigned 8-bit-wide integers.

Now, char doesn't specify the sign of the integer: on some compilers char could be signed, on others it may be unsigned.

If I add a bit-shift operation to the code you wrote, the result is implementation-defined when char is signed. The added comparison will also have an unexpected result.

char c[5], d[5];
c[0] = 0xF0;
c[1] = 0xA4;
c[2] = 0xAD;
c[3] = 0xA2;
c[4] = '\0';
c[0] >>= 1; // If char is signed, will the top (sign) bit go to 0 or stay set?

bool isBiggerThan0 = c[0] > 0; // false if char is signed!

printf("%s\n", c);
memcpy(d, c, 5);
printf("%s\n", d);

Regarding the warning during compilation: if char is signed then you are trying to assign the value 0xF0, which cannot be represented in a signed char (range -128 to +127), so it will be converted to a signed value (-16).

Declaring the char as unsigned will remove the warning, and it is always good to have a clean build without any warnings.

Solution 5 - C++

The signed-ness of the plain char type is implementation defined, so unless you're actually dealing with character data (a string using the platform's character set - usually ASCII), it's usually better to specify the signed-ness explicitly by either using signed char or unsigned char.

For binary data, the best choice is most probably unsigned char, especially if bitwise operations will be performed on the data (specifically bit shifting, which doesn't behave the same for signed types as for unsigned types).

Solution 6 - C++

> I am asking why something which seems to be working as fine with char should be typed unsigned char?

If you do things which are not "correct" in the sense of the standard, you rely on undefined behaviour. Your compiler might do what you want today, but you don't know what it will do tomorrow, what GCC or VC++ 2012 will do, or whether the behaviour depends on external factors or on Debug/Release builds. As soon as you leave the safe path of the standard, you might run into trouble.

Solution 7 - C++

Well, what do you call "binary data"? This is a bunch of bits, without any meaning assigned to them by that specific part of software that calls them "binary data". What's the closest primitive data type, which conveys the idea of the lack of any specific meaning to any one of these bits? I think unsigned char.

Solution 8 - C++

> Is it really necessary to use unsigned char to hold binary data as in some libraries which work on character encoding or binary buffers?

"really" necessary? No.

It is a very good idea though, and there are many reasons for this.

Your example uses printf, which is not type-safe. That is, printf takes its formatting cues from the format string and not from the data type. You could just as easily have tried:

printf("%s\n", (void*)c);

... and the result would have been the same. If you try the same thing with C++ iostreams, the result will be different (depending on the signedness of c).

> What reasoning could possibly advocate the use of unsigned char instead of a plain char?

Signed specifies that the most significant bit of the data (bit 7 on an 8-bit char) represents the sign. Since you obviously do not need a sign here, you should specify your data as unsigned (so the "sign" bit holds data, just like the other bits).

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | nightlytrails | View Question on Stackoverflow |
| Solution 1 - C++ | Jens Gustedt | View Answer on Stackoverflow |
| Solution 2 - C++ | Tom Tanner | View Answer on Stackoverflow |
| Solution 3 - C++ | Lundin | View Answer on Stackoverflow |
| Solution 4 - C++ | Paolo Brandoli | View Answer on Stackoverflow |
| Solution 5 - C++ | Sander De Dycker | View Answer on Stackoverflow |
| Solution 6 - C++ | Philipp | View Answer on Stackoverflow |
| Solution 7 - C++ | chill | View Answer on Stackoverflow |
| Solution 8 - C++ | utnapistim | View Answer on Stackoverflow |