Why are types always a certain size no matter their value?

C++

C++ Problem Overview


Implementations may differ in the actual sizes of types, but on most, types like unsigned int and float are always 4 bytes. But why does a type always occupy a certain amount of memory no matter its value? For example, suppose I created the following integer with the value of 255

int myInt = 255;

Then myInt would occupy 4 bytes with my compiler. However, the actual value, 255, can be represented with only 1 byte, so why would myInt not just occupy 1 byte of memory? Or, to ask it in a more generalized way: why does a type have only one size associated with it when the space required to represent the value might be smaller than that size?
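A minimal sketch that makes the observation concrete (the printed sizes are implementation-defined; 4 is typical for int):

#include <iostream>

int main() {
    int small = 255;
    int large = 2000000000;   // assumes int is at least 32 bits wide
    // sizeof depends only on the type, never on the stored value
    std::cout << sizeof small << ' ' << sizeof large << '\n';   // e.g. "4 4"
}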

C++ Solutions


Solution 1 - C++

Because types fundamentally represent storage, and they are defined in terms of the maximum value they can hold, not the current value.

A very simple analogy would be a house - a house has a fixed size, regardless of how many people live in it, and there is also a building code which stipulates the maximum number of people who can live in a house of a certain size.

However, even if a single person is living in a house which can accommodate 10, the size of the house is not going to be affected by the current number of occupants.

Solution 2 - C++

The compiler is supposed to produce assembler (and ultimately machine code) for some machine, and generally C++ tries to be sympathetic to that machine.

Being sympathetic to the underlying machine means roughly: making it easy to write C++ code which will map efficiently onto the operations the machine can execute quickly. So, we want to provide access to the data types and operations that are fast and "natural" on our hardware platform.

Concretely, consider a specific machine architecture. Let's take the current Intel x86 family.

The Intel® 64 and IA-32 Architectures Software Developer’s Manual vol 1, section 3.4.1 says:

> The 32-bit general-purpose registers EAX, EBX, ECX, EDX, ESI, EDI, EBP, and ESP are provided for holding the following items:

> • Operands for logical and arithmetic operations

> • Operands for address calculations

> • Memory pointers

So, we want the compiler to use these EAX, EBX etc. registers when it compiles simple C++ integer arithmetic. This means that when I declare an int, it should be something compatible with these registers, so that I can use them efficiently.

The registers are always the same size (here, 32 bits), so my int variables will always be 32 bits as well. I'll use the same layout (little-endian) so that I don't have to do a conversion every time I load a variable value into a register, or store a register back into a variable.

Using godbolt we can see exactly what the compiler does for some trivial code:

int square(int num) {
    return num * num;
}

compiles (with GCC 8.1 and -fomit-frame-pointer -O3 for simplicity) to:

square(int):
  imul edi, edi
  mov eax, edi
  ret

this means:

  1. the int num parameter was passed in register EDI, meaning it's exactly the size and layout Intel expects for a native register. The function doesn't have to convert anything
  2. the multiplication is a single instruction (imul), which is very fast
  3. returning the result is simply a matter of copying it to another register (the caller expects the result to be put in EAX)

Edit: we can add a relevant comparison to show the difference that using a non-native layout makes. The simplest case is storing values in something other than native width.

Using godbolt again, we can compare a simple native multiplication

unsigned mult (unsigned x, unsigned y)
{
    return x*y;
}

mult(unsigned int, unsigned int):
  mov eax, edi
  imul eax, esi
  ret

with the equivalent code for a non-standard width

struct pair {
    unsigned x : 31;
    unsigned y : 31;
};

unsigned mult (pair p)
{
    return p.x*p.y;
}

mult(pair):
  mov eax, edi
  shr rdi, 32
  and eax, 2147483647
  and edi, 2147483647
  imul eax, edi
  ret

All the extra instructions are concerned with converting the input format (two 31-bit unsigned integers) into the format the processor can handle natively. If we wanted to store the result back into a 31-bit value, there would be another one or two instructions to do this.

This extra complexity means you'd only bother with this when the space saving is very important. In this case we're only saving two bits compared to using the native unsigned or uint32_t type, which would have generated much simpler code.


A note on dynamic sizes:

The example above is still fixed-width values rather than variable-width, but the width (and alignment) no longer match the native registers.

The x86 platform has several native sizes, including 8-bit and 16-bit in addition to the main 32-bit (I'm glossing over 64-bit mode and various other things for simplicity).

These types (char, int8_t, uint8_t, int16_t etc.) are also directly supported by the architecture - partly for backwards compatibility with the older 8086/286/386 etc. instruction sets.

It's certainly the case that choosing the smallest natural fixed-size type that will suffice can be good practice - they're still quick, single-instruction loads and stores, you still get full-speed native arithmetic, and you can even improve performance by reducing cache misses.
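As a hedged sketch of that practice (the buffer name and dimensions are just illustrative): if the values are known to fit in 8 bits, a narrow fixed-width type keeps the data about four times smaller than int while each element is still a single native load or store.

#include <cstdint>
#include <vector>

// Brightness values fit in 0..255, so uint8_t suffices: the buffer is about
// 4x smaller than a std::vector<int> of the same length (better cache use),
// and arithmetic on each element still uses ordinary native instructions.
std::vector<std::uint8_t> make_brightness_buffer() {
    return std::vector<std::uint8_t>(1920 * 1080, 0);
}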

This is very different to variable-length encoding - I've worked with some of these, and they're horrible. Every load becomes a loop instead of a single instruction. Every store is also a loop. Every structure is variable-length, so you can't use arrays naturally.


A further note on efficiency

In subsequent comments, you've been using the word "efficient", as far as I can tell with respect to storage size. We do sometimes choose to minimize storage size - it can be important when we're saving very large numbers of values to files, or sending them over a network. The trade-off is that we need to load those values into registers to do anything with them, and performing the conversion isn't free.

When we discuss efficiency, we need to know what we're optimizing, and what the trade-offs are. Using non-native storage types is one way to trade processing speed for space, and sometimes makes sense. Using variable-length storage (for arithmetic types at least), trades more processing speed (and code complexity and developer time) for an often-minimal further saving of space.

The speed penalty you pay for this means it's only worthwhile when you need to absolutely minimize bandwidth or long-term storage, and for those cases it's usually easier to use a simple and natural format - and then just compress it with a general-purpose system (like zip, gzip, bzip2, xz or whatever).


tl;dr

Each platform has one architecture, but you can come up with an essentially unlimited number of different ways to represent data. It's not reasonable for any language to provide an unlimited number of built-in data types. So, C++ provides implicit access to the platform's native, natural set of data types, and allows you to code any other (non-native) representation yourself.

Solution 3 - C++

It is an optimization and simplification.

You can either have fixed-size objects, storing just the value,
or variable-size objects, storing both the value and its size.

Fixed-size objects

The code that manipulates numbers does not need to worry about size. You assume that you always use 4 bytes, which makes the code very simple.

Dynamic-size objects

The code that manipulates numbers must understand, when reading a variable, that it has to read both the value and its size, and use the size to make sure all the high bits are zeroed out in the register.

When placing the value back in memory, if the value has not exceeded its current size, then simply write it back. But if the value has shrunk or grown, you need to move the object to another location in memory to make sure it does not overflow its neighbours. Now you have to track the position of that number (as it can move if it grows too large for its size). You also need to track all the unused locations so they can potentially be reused.
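To make the cost concrete, here is a hedged sketch (not anyone's real implementation) of what every read would look like under a hypothetical length-prefixed layout: one length byte followed by that many little-endian value bytes.

#include <cstddef>
#include <cstdint>

// Hypothetical layout: p[0] holds the number of value bytes (1..4),
// p[1..len] hold the value, least significant byte first.
std::uint32_t read_dynamic(const unsigned char* p) {
    std::size_t len = p[0];                               // first, learn how wide the value is
    std::uint32_t value = 0;
    for (std::size_t i = 0; i < len; ++i)
        value |= std::uint32_t(p[1 + i]) << (8 * i);      // assemble it byte by byte
    return value;                                         // bits above 8*len are already zero
}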

Summary

The code generated for fixed size objects is a lot simpler.

Note

Compression uses the fact that 255 will fit into one byte. There are compression schemes for storing large data sets that will actively use different-size values for different numbers. But since this is not live data, you don't have the complexities described above. You use less space to store the data at the cost of compressing/decompressing it for storage.

Solution 4 - C++

Because in a language like C++, a design goal is that simple operations compile down to simple machine instructions.

All mainstream CPU instruction sets work with fixed-width types, and if you want to do variable-width types, you have to do multiple machine instructions to handle them.

As for why the underlying computer hardware is that way: It's because it's simpler, and more efficient for many cases (but not all).

Imagine the computer as a piece of tape:

| xx | xx | xx | xx | xx | xx | xx | xx | xx | xx | xx | xx | xx | ...

If you simply tell the computer to look at the first byte on the tape, xx, how does it know whether or not the type stops there, or proceeds on to the next byte? If you have a number like 255 (hexadecimal FF) or a number like 65535 (hexadecimal FFFF) the first byte is always FF.

So how do you know? You either just pick a size and stick to it, or you have to add additional logic and "overload" the meaning of at least one bit or byte value to indicate that the value continues into the next byte. That logic is never "free": either you emulate it in software, or you add a bunch of additional transistors to the CPU to do it.

The fixed-width types of languages like C and C++ reflect that.

It doesn't have to be this way, and more abstract languages which are less concerned with mapping to maximally efficient code are free to use variable-width encodings (also known as "Variable Length Quantities" or VLQ) for numeric types.

Further reading: If you search for "variable length quantity" you can find some examples of where that kind of encoding is actually efficient and worth the additional logic. It's usually when you need to store a huge amount of values which might be anywhere within a large range, but most values tend towards some small sub-range.
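A hedged sketch of one such encoding (the little-endian, continuation-bit form also used by protobuf-style varints; classic MIDI-style VLQ orders the groups the other way) shows the extra per-read logic described above:

#include <cstddef>
#include <cstdint>

// Each byte carries 7 payload bits; the top bit says "more bytes follow".
std::uint32_t vlq_decode(const unsigned char* p, std::size_t& consumed) {
    std::uint32_t value = 0;
    std::size_t i = 0;
    unsigned shift = 0;
    unsigned char byte;
    do {
        byte = p[i++];
        value |= std::uint32_t(byte & 0x7Fu) << shift;   // add 7 more payload bits
        shift += 7;
    } while (byte & 0x80u);                              // continuation bit still set?
    consumed = i;   // a plain fixed-width int never needs to report this
    return value;
}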


Note that if a compiler can prove that it can get away with storing the value in a smaller amount of space without breaking any code (for example, a variable only visible internally within a single translation unit), and its optimization heuristics suggest that doing so will be more efficient on the target hardware, it's entirely allowed to optimize accordingly and store it in a smaller amount of space, so long as the rest of the code works "as if" it did the standard thing.

But when the code has to interoperate with other code that might be compiled separately, sizes have to stay consistent, or every piece of code has to follow the same convention.

Because if it's not consistent, there's this complication: What if I have int x = 255; but then later in the code I do x = y? If int could be variable-width, the compiler would have to know ahead of time to pre-allocate the maximum amount of space it'll need. That's not always possible, because what if y is an argument passed in from another piece of code that's compiled separately?

Solution 5 - C++

Java uses classes called "BigInteger" and "BigDecimal" to do exactly this, as does GMP's C++ class interface, apparently (thanks Digital Trauma). You can easily do it yourself in pretty much any language if you want.
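A minimal sketch of such arbitrary-size arithmetic using GMP's C++ class interface (mpz_class), assuming the gmpxx headers are available and the program is linked with -lgmpxx -lgmp:

#include <gmpxx.h>
#include <iostream>

int main() {
    mpz_class a("123456789012345678901234567890");  // far too big for any fixed-width int
    mpz_class b = a * a;   // the object grows its own heap storage as needed
    std::cout << b << '\n';
}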

CPUs have always had the ability to use BCD (Binary Coded Decimal), which is designed to support operations of any length (but you tend to manually operate on one byte at a time, which would be SLOW by today's GPU standards).

The reason we don't use these or other similar solutions? Performance. Your most highly performant languages can't afford to go expanding a variable in the middle of some tight loop operation--it would be very non-deterministic.

In mass storage and transport situations, packed values are often the ONLY type of value you would use. For example, a music/video packet being streamed to your computer might spend a bit to specify if the next value is 2 bytes or 4 bytes as a size optimization.

Once it's on your computer where it can be used, though, memory is cheap, but the speed and complication of resizable variables is not. That's really the only reason.

Solution 6 - C++

Because it would be very complicated and computation-heavy to have simple types with dynamic sizes. I'm not sure if this would even be possible.
The computer would have to check how many bits a number takes after every change of its value. That would be quite a lot of additional operations. And it would be much harder to perform calculations when you don't know the sizes of variables at compile time.

To support dynamic sizes of variables, the computer would actually have to remember how many bytes a variable has right now, which would require additional memory to store that information. And this information would have to be analyzed before every operation on the variable to choose the right processor instruction.

To better understand how a computer works and why variables have constant sizes, learn the basics of assembly language.

I suppose it would be possible to achieve something like that with constexpr values. However, this would make the code less predictable for the programmer. Some compiler optimizations may do something like this, but they hide it from the programmer to keep things simple.

I have described here only the problems that concern the performance of a program. I omitted all the problems that would have to be solved to save memory by reducing the sizes of variables. Honestly, I don't think it is even possible.


In conclusion, using variables smaller than declared makes sense only if their values are known at compile time. It is quite probable that modern compilers do that. In other cases, it would cause too many hard or even unsolvable problems.

Solution 7 - C++

Computer memory is subdivided into consecutively-addressed chunks of a certain size (often 8 bits, and referred to as bytes), and most computers are designed to efficiently access sequences of bytes that have consecutive addresses.

If an object's address never changes within the object's lifetime, then code given its address can quickly access the object in question. An essential limitation of this approach, however, is that if an object is assigned address X, and then another object is assigned address Y, which is N bytes away, then X will not be able to grow larger than N bytes within the lifetime of Y unless either X or Y is moved. In order for X to move, it would be necessary that everything in the universe that holds X's address be updated to reflect the new one, and likewise for Y to move. While it's possible to design a system to facilitate such updates (both Java and .NET manage it pretty well), it's much more efficient to work with objects that will stay in the same location throughout their lifetime, which in turn generally requires that their size remain constant.

Solution 8 - C++

> Then myInt would occupy 4 bytes with my compiler. However, the actual value, 255 can be represented with only 1 byte, so why would myInt not just occupy 1 byte of memory?

This is known as variable-length encoding, there are various encodings defined, for example VLQ. One of the most famous, however, is probably UTF-8: UTF-8 encodes code points on a variable number of bytes, from 1 to 4.

> Or the more generalized way of asking: Why does a type have only one size associated with it when the space required to represent the value might be smaller than that size?

As always in engineering, it's all about trade-offs. There is no solution which has only advantages, so you have to balance advantages and trade-offs when designing your solution.

The design that was settled on was to use fixed-size fundamental types, and the hardware and languages simply flowed from there.

So, what is the fundamental weakness of variable encoding, which caused it to be rejected in favor of more memory-hungry schemes? No Random Addressing.

What is the index of the byte at which the 4th code point starts in a UTF-8 string?

It depends on the values of the previous code points; a linear scan is required.

Surely there are variable-length encoding schemes which are better at random-addressing?

Yes, but they are also more complicated. If there's an ideal one, I've never seen it yet.

Does Random Addressing really matter anyway?

Oh YES!

The thing is, any kind of aggregate/array relies on fixed-size types:

  • Accessing the 3rd field of a struct? Random Addressing!
  • Accessing the 3rd element of an array? Random Addressing!

Which means you essentially have the following trade-off:

Fixed size types OR Linear memory scans
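A minimal sketch of the fixed-size half of that trade-off: because every element has the same size, indexing is a single address computation rather than a scan.

#include <cstdint>

// With fixed-size elements the compiler turns a[i] into
// "load from address a + sizeof(std::int32_t) * i", which is constant time
// no matter how large i is.
std::int32_t third_element(const std::int32_t* a) {
    return a[2];   // address = a + 4 * 2 on a typical implementation
}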

Solution 9 - C++

The short answer is: Because the C++ standard says so.

The long answer is: What you can do on a computer is ultimately limited by hardware. It is, of course, possible to encode an integer into a variable number of bytes for storage, but then reading it would either require special CPU instructions to be performant, or you could implement it in software, but then it would be awfully slow. Fixed-size operations are available in the CPU for loading values of predefined widths; there are none for variable widths.

Another point to consider is how computer memory works. Let's say your integer type could take up anywhere from 1 to 4 bytes of storage. Suppose you store the value 42 into your integer: it takes up 1 byte, and you place it at memory address X. Then you store your next variable at location X+1 (I'm not considering alignment at this point) and so on. Later you decide to change your value to 6424.

But this doesn't fit into a single byte! So what do you do? Where do you put the rest? You already have something at X+1, so can't place it there. Somewhere else? How will you know later where? Computer memory does not support insert semantics: you can't just place something at a location and push everything after it aside to make room!

Aside: What you're talking about is really the area of data compression. Compression algorithms exist to pack everything tighter, so at least some of them will consider not using more space for your integer than it needs. However, compressed data is not easy to modify (if possible at all) and just ends up being recompressed every time you make any changes to it.

Solution 10 - C++

There are pretty substantial runtime performance benefits from doing this. If you were to operate on variable size types, you would have to decode each number before doing the operation (machine code instructions are typically fixed width), do the operation, then find a space in memory big enough to hold the result. Those are very difficult operations. It's much easier to simply store all of the data slightly inefficiently.

This is not always how it is done. Consider Google's Protobuf protocol. Protobufs are designed to transmit data very efficiently. Decreasing the number of bytes transmitted is worth the cost of additional instructions when operating on the data. Accordingly, protobufs use an encoding which encodes integers in 1, 2, 3, 4, or 5 bytes, and smaller integers take fewer bytes. Once the message is received, however, it is unpacked into a more traditional fixed-size integer format which is easier to operate on. It's only during network transmission that they use such a space-efficient variable length integer.
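A hedged sketch of that wire format (protobuf's base-128 varint: 7 payload bits per byte, high bit set while more bytes follow), shown for unsigned 32-bit values:

#include <cstdint>
#include <vector>

// Small values take one byte on the wire; a full 32-bit value takes five.
std::vector<unsigned char> encode_varint(std::uint32_t value) {
    std::vector<unsigned char> out;
    while (value >= 0x80u) {
        out.push_back(static_cast<unsigned char>((value & 0x7Fu) | 0x80u));  // more bytes follow
        value >>= 7;
    }
    out.push_back(static_cast<unsigned char>(value));   // last byte: high bit clear
    return out;
}

For example, encode_varint(255) produces two bytes (0xFF, 0x01), while encode_varint(100) fits in a single byte.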

Solution 11 - C++

I like Sergey's house analogy, but I think a car analogy would be better.

Imagine variable types as types of cars and people as data. When we're looking for a new car, we choose the one that fits our purpose best. Do we want a small smart car that can only fit one or two people? Or a limousine to carry more people? Both have their benefits and drawbacks like speed and gas mileage (think speed and memory usage).

If you have a limousine and you're driving alone, it's not going to shrink to fit only you. To do that, you'd have to sell the car (read: deallocate) and buy a new smaller one for yourself.

Continuing the analogy, you can think of memory as a huge parking lot filled with cars, and when you go to read, a specialized chauffeur trained solely for your type of car goes to fetch it for you. If your car could change types depending on the people inside it, you would need to bring a whole host of chauffeurs every time you wanted to get your car since they would never know what kind of car will be sitting in the spot.

In other words, trying to determine how much memory you need to read at run time would be hugely inefficient and outweigh the fact that you could maybe fit a few more cars in your parking lot.

Solution 12 - C++

There are a few reasons. One is the added complexity for handling arbitrary-sized numbers and the performance hit this gives because the compiler can no longer optimize based on the assumption that every int is exactly X bytes long.

A second one is that storing simple types this way means they need an additional byte to hold the length. So, a value of 255 or less actually needs two bytes in this new system, not one, and in the worst case you now need 5 bytes instead of 4. This means that the performance win in terms of memory used is less than you might think and in some edge cases might actually be a net loss.

A third reason is that computer memory is generally addressable in words, not bytes. (But see footnote). Words are a multiple of bytes, usually 4 on 32-bit systems and 8 on 64-bit systems. You usually can't read an individual byte; you read a word and extract the nth byte from that word. This means both that extracting individual bytes from a word takes a bit more effort than just reading the entire word, and that it is very efficient if the entire memory is evenly divided in word-sized (i.e., 4-byte) chunks. Because, if you have arbitrary-sized integers floating around, you might end up with one part of the integer being in one word, and another in the next word, necessitating two reads to get the full integer.

Footnote: To be more precise, while you address in bytes, most systems ignore the 'uneven' bytes; i.e., addresses 0, 1, 2 and 3 all read the same word, 4, 5, 6 and 7 read the next word, and so on.
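A hedged sketch of the "read the word, extract the nth byte" step (assuming little-endian byte order within the word):

#include <cstdint>

// Fetching one byte out of an already-loaded 32-bit word costs an extra
// shift and truncation on top of the word load itself.
std::uint8_t nth_byte(std::uint32_t word, unsigned n) {   // n in 0..3
    return static_cast<std::uint8_t>(word >> (8 * n));
}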

On an unrelated note, this is also why 32-bit systems had a max of 4 GB of memory. The registers used to address locations in memory are usually large enough to hold a word, i.e. 4 bytes, which has a max value of (2^32)-1 = 4294967295. 4294967296 bytes is 4 GB.

Solution 13 - C++

There are objects that in some sense have variable size, in the C++ standard library, such as std::vector. However, these all dynamically allocate the extra memory they will need. If you take sizeof(std::vector<int>), you will get a constant that has nothing to do with the memory managed by the object, and if you allocate an array or structure containing std::vector<int>, it will reserve this base size rather than putting the extra storage in the same array or structure. There are a few pieces of C syntax that support something like this, notably variable-length arrays and structures, but C++ did not choose to support them.
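A minimal illustration of that point (the exact number printed is implementation-specific; 24 is common on 64-bit implementations):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    std::cout << sizeof v << '\n';   // e.g. 24: just the fixed-size handle
    v.resize(1000000);               // a million ints go into a separate heap block
    std::cout << sizeof v << '\n';   // still the same constant
}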

The language standard defines object size that way so that compilers can generate efficient code. For example, if int happens to be 4 bytes long on some implementation, and you declare a as a pointer to or array of int values, then a[i] translates into the pseudocode, “dereference the address a + 4×i.” This can be done in constant time, and is such a common and important operation that many instruction-set architectures, including x86 and the DEC PDP machines on which C was originally developed, can do it in a single machine instruction.

One common real-world example of data stored consecutively as variable-length units is strings encoded as UTF-8. (However, the underlying type of a UTF-8 string to the compiler is still char and has width 1. This allows ASCII strings to be interpreted as valid UTF-8, and a lot of library code such as strlen() and strncpy() to continue to work.) The encoding of any UTF-8 codepoint can be one to four bytes long, and therefore, if you want the fifth UTF-8 codepoint in a string, it could begin anywhere from the fifth byte to the seventeenth byte of the data. The only way to find it is to scan from the beginning of the string and check the size of each codepoint. If you want to find the fifth grapheme, you also need to check the character classes. If you wanted to find the millionth UTF-8 character in a string, you’d need to run this loop a million times! If you know you will need to work with indices often, you can traverse the string once and build an index of it—or you can convert to a fixed-width encoding, such as UCS-4. Finding the millionth UCS-4 character in a string is just a matter of adding four million to the address of the array.
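A hedged sketch of that linear scan, relying only on the fact that UTF-8 continuation bytes have the bit pattern 10xxxxxx:

#include <cstddef>
#include <string>

// Returns the byte offset where the n-th code point (0-based) starts,
// or s.size() if the string has fewer code points than that.
std::size_t byte_index_of_codepoint(const std::string& s, std::size_t n) {
    std::size_t seen = 0;
    for (std::size_t i = 0; i < s.size(); ++i) {
        bool is_continuation = (static_cast<unsigned char>(s[i]) & 0xC0u) == 0x80u;
        if (!is_continuation) {          // this byte starts a new code point
            if (seen == n) return i;
            ++seen;
        }
    }
    return s.size();
}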

Another complication with variable-length data is that, when you allocate it, you either need to allocate as much memory as it could ever possibly use, or else dynamically reallocate as needed. Allocating for the worst case could be extremely wasteful. If you need a consecutive block of memory, reallocating could force you to copy all the data over to a different location, but allowing the memory to be stored in non-consecutive chunks complicates the program logic.

So, it’s possible to have variable-length bignums instead of fixed-width short int, int, long int and long long int, but it would be inefficient to allocate and use them. Additionally, all mainstream CPUs are designed to do arithmetic on fixed-width registers, and none have instructions that directly operate on some kind of variable-length bignum. Those would need to be implemented in software, much more slowly.

In the real world, most (but not all) programmers have decided that the benefits of UTF-8 encoding, especially compatibility, are important, and that we so rarely care about anything other than scanning a string from front to back or copying blocks of memory that the drawbacks of variable width are acceptable. We could use packed, variable-width elements similar to UTF-8 for other things. But we very rarely do, and they aren't in the standard library.

Solution 14 - C++

> Why does a type have only one size associated with it when the space required to represent the value might be smaller than that size?

Primarily because of alignment requirements.

As per basic.align/1:

> Object types have alignment requirements which place restrictions on the addresses at which an object of that type may be allocated.

Think of a building that has many floors, and each floor has many rooms.
Each room is a fixed size, capable of holding N people or objects.
With the room size known beforehand, the structural components of the building are well-structured.

If the rooms are not aligned, then the building skeleton won't be well-structured.
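For a concrete look at those requirements, alignof reports them directly (the exact values are implementation-defined; these are typical):

#include <cstdint>
#include <iostream>

int main() {
    std::cout << alignof(char) << ' '          // 1
              << alignof(std::int32_t) << ' '  // typically 4
              << alignof(double) << '\n';      // typically 8
}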

Solution 15 - C++

It can be less. Consider the function:

int foo()
{
    int bar = 1;
    int baz = 42;
    return bar+baz;
}

it compiles to assembly code (g++, x64, details stripped)

mov $43, %eax
ret

Here, bar and baz end up using zero bytes of storage: the compiler folded the sum into the instruction at compile time.

Solution 16 - C++

> so why would myInt not just occupy 1 byte of memory?

Because you told it to use that much. When you use an unsigned int, some standards dictate that 4 bytes will be used and that its available range will be from 0 to 4,294,967,295. If you were to use an unsigned char instead, you would probably only be using the 1 byte that you're looking for (depending on the standard; C++ normally follows these standards).
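A small sketch of "asking for less" in the way this answer suggests (the printed sizes are implementation-defined; 1 and 4 are typical):

#include <iostream>

int main() {
    unsigned char c = 255;   // explicitly request a 1-byte type
    unsigned int  i = 255;   // request an unsigned int, typically 4 bytes
    std::cout << sizeof c << ' ' << sizeof i << '\n';   // e.g. "1 4"
}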

If it weren't for these standards, you'd have to keep this in mind: how is the compiler or CPU supposed to know to only use 1 byte instead of 4? Later on in your program you might add or multiply that value, which would require more space. Whenever you make a memory allocation, the OS has to find, map, and give you that space (potentially swapping memory to virtual RAM as well); this can take a long time. If you allocate the memory beforehand, you won't have to wait for another allocation to be completed.

As for the reason why we use 8 bits per byte, you can take a look at this: What is the history of why bytes are eight bits?

On a side note, you could allow the integer to overflow; but should you use a signed integer, the C/C++ standards state that integer overflow results in undefined behavior (see: integer overflow).

Solution 17 - C++

Something simple which most answers seem to miss:

because it suits the design goals of C++.

Being able to work out a type's size at compile time allows a huge number of simplifying assumptions to be made by the compiler and the programmer, which bring a lot of benefits, particularly with regards to performance. Of course, fixed-size types have concomitant pitfalls like integer overflow. This is why different languages make different design decisions. (For instance, Python integers are essentially variable-size.)

Probably the main reason C++ leans so strongly to fixed-size types is its goal of C compatibility. However, since C++ is a statically-typed language which tries to generate very efficient code, and avoids adding things not explicitly specified by the programmer, fixed-size types still make a lot of sense.

So why did C opt for fixed-size types in the first place? Simple. It was designed to write '70s-era operating systems, server software, and utilities; things which provided infrastructure (such as memory management) for other software. At such a low level, performance is critical, and so is the compiler doing precisely what you tell it to.

Solution 18 - C++

To change the size of a variable would require reallocation and this is usually not worth the additional CPU cycles compared to wasting a few more bytes of memory.

Local variables go on a stack which is very fast to manipulate when those variables do not change in size. If you decided you want to expand the size of a variable from 1 byte to 2 bytes then you have to move everything on the stack by one byte to make that space for it. That can potentially cost a lot of CPU cycles depending on how many things need to be moved.

Another way you could do it is by making every variable a pointer to a heap location, but you would waste even more CPU cycles and memory this way, actually. Pointers are 4 bytes (32 bit addressing) or 8 bytes (64 bit addressing), so you are already using 4 or 8 for the pointer, then the actual size of the data on the heap. There is still a cost to reallocation in this case. If you need to reallocate heap data, you could get lucky and have room to expand it inline, but sometimes you have to move it somewhere else on the heap to have the contiguous block of memory of the size you want.

It's always faster to decide how much memory to use beforehand. If you can avoid dynamic sizing you gain performance. Wasting memory is usually worth the performance gain. That's why computers have tons of memory. :)

Solution 19 - C++

The compiler is allowed to make a lot of changes to your code, as long as things still work (the "as-if" rule).

It would be possible to use an 8-bit literal move instruction instead of the longer (32/64-bit) one required to move a full int. However, you would need two instructions to complete the load, since you would have to set the register to zero before doing the load.

It is simply more efficient (at least according to the main compilers) to handle the value as 32 bits. Actually, I've yet to see an x86/x86_64 compiler that would do an 8-bit load without inline assembly.

However, things are different when it comes to 64 bits. When designing the previous extension (from 16 to 32 bits) of their processors, Intel made a mistake. Here is a good representation of what the registers look like. The main takeaway is that when you write to AL or AH, the other is not affected (fair enough, that was the point and it made sense back then). But it gets interesting when they expanded this to 32 bits. If you write the bottom bits (AL, AH or AX), nothing happens to the upper 16 bits of EAX, which means that if you want to promote a char into an int, you need to clear those upper bits first, but you have no way of actually using only these top 16 bits, making this "feature" more of a pain than anything.

Now with 64 bits, AMD did a much better job. If you touch anything in the lower 32 bits, the upper 32 bits are simply set to 0. This leads to some actual optimizations that you can see in this godbolt. You can see that loading something of 8 bits or 32 bits is done the same way, but when you use 64-bit variables, the compiler uses a different instruction depending on the actual size of your literal.

So, as you can see here, compilers can totally change the actual size of your variable inside the CPU if doing so produces the same result, but it makes no sense to do so for smaller types.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Ukula Udan | View Question on Stackoverflow
Solution 1 - C++ | SergeyA | View Answer on Stackoverflow
Solution 2 - C++ | Useless | View Answer on Stackoverflow
Solution 3 - C++ | Martin York | View Answer on Stackoverflow
Solution 4 - C++ | mtraceur | View Answer on Stackoverflow
Solution 5 - C++ | Bill K | View Answer on Stackoverflow
Solution 6 - C++ | NO_NAME | View Answer on Stackoverflow
Solution 7 - C++ | supercat | View Answer on Stackoverflow
Solution 8 - C++ | Matthieu M. | View Answer on Stackoverflow
Solution 9 - C++ | John Doe the Righteous | View Answer on Stackoverflow
Solution 10 - C++ | Cort Ammon | View Answer on Stackoverflow
Solution 11 - C++ | scohe001 | View Answer on Stackoverflow
Solution 12 - C++ | Buurman | View Answer on Stackoverflow
Solution 13 - C++ | Davislor | View Answer on Stackoverflow
Solution 14 - C++ | Joseph D. | View Answer on Stackoverflow
Solution 15 - C++ | max630 | View Answer on Stackoverflow
Solution 16 - C++ | Blerg | View Answer on Stackoverflow
Solution 17 - C++ | Artelius | View Answer on Stackoverflow
Solution 18 - C++ | Chris Rollins | View Answer on Stackoverflow
Solution 19 - C++ | meneldal | View Answer on Stackoverflow