Why should I use int instead of a byte or short in C#

Tags: C#, ASP.NET, SQL Server, Types

C# Problem Overview


I have found a few threads regarding this issue. Most people appear to favor using int in their C# code across the board, even when a byte or smallint would handle the data, unless it is a mobile app. I don't understand why. Doesn't it make more sense to define your C# datatype as the same datatype that would be in your data storage solution?

My Premise: Whether I am using a typed DataSet, LINQ to SQL classes, or POCOs, one way or another I will run into compiler datatype-conversion issues if I don't keep my datatypes in sync across my tiers. I don't really like doing System.Convert all the time just because it was easier to use int across the board in C# code. I have always used the smallest datatype needed to handle the data, in the database as well as in code, to keep my interface to the database clean. So I would bet 75% of my C# code uses byte or short as opposed to int, because that is what is in the database.
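
For instance, here is a minimal sketch of that friction, assuming a hypothetical Product class mapped to a table whose column 0 is a tinyint (SQL Server's tinyint surfaces as System.Byte in ADO.NET):

using System;
using System.Data.SqlClient;

class Product
{
    public int ReorderLevel { get; set; }   // int on the C# side...
}

class Mapping
{
    static void Load(SqlDataReader reader, Product p)
    {
        // ...but the column is tinyint, so ADO.NET hands back a byte.
        // Widening byte -> int is implicit, so reading is painless:
        p.ReorderLevel = reader.GetByte(0);

        // Going the other way requires a narrowing conversion: exactly
        // the System.Convert clutter described above.
        byte forDb = Convert.ToByte(p.ReorderLevel);  // throws if > 255
    }
}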

Possibilities: Does this mean that most people who just use int for everything in code also use the int datatype for their SQL storage, and couldn't care less about the overall size of their database? Or do they use System.Convert in code wherever applicable?

Why I care: I have worked on my own forever and I just want to be familiar with best practices and standard coding conventions.

C# Solutions


Solution 1 - C#

Performance-wise, an int is faster in almost all cases. The CPU is designed to work efficiently with 32-bit values.

Shorter values are complicated to deal with. To read a single byte, say, the CPU has to read the 32-bit block that contains it, and then mask out the upper 24 bits.

To write a byte, it has to read the destination 32-bit block, overwrite the lower 8 bits with the desired byte value, and write the entire 32-bit block back again.

Space-wise, of course, you save a few bytes by using smaller datatypes. So if you're building a table with a few million rows, then shorter datatypes may be worth considering. (And the same might be a good reason to use smaller datatypes in your database.)

And correctness-wise, an int doesn't overflow easily. What if you think your value is going to fit within a byte, and then at some point in the future some harmless-looking change to the code means larger values get stored into it?
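
For example, in C#'s default unchecked context, byte arithmetic silently wraps around, as this small sketch shows:

byte count = 250;
count += 10;    // silently wraps around to 4; no exception in C#'s
                // default unchecked context

checked
{
    byte c = 250;
    c += 10;    // now throws System.OverflowException at run time
}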

Those are some of the reasons why int should be your default datatype for all integral data. Only use byte if you actually want to store machine bytes. Only use shorts if you're dealing with a file format or protocol or similar that actually specifies 16-bit integer values. If you're just dealing with integers in general, make them ints.
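
For instance, reading a made-up file header whose format actually specifies fixed-width fields is the one place where short and byte naturally belong:

using System;
using System.IO;

class HeaderReader
{
    // Reads a hypothetical header: 1 version byte, then a 16-bit count.
    static void Read(Stream s)
    {
        using (var br = new BinaryReader(s))
        {
            byte version = br.ReadByte();        // the format specifies 1 byte
            short recordCount = br.ReadInt16();  // ...and a 16-bit little-endian field
            Console.WriteLine($"v{version}, {recordCount} records");
        }
    }
}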

Solution 2 - C#

I am only 6 years late, but maybe I can help someone else.

Here are some guidelines I would use:

  • If there is a possibility the data will not fit in the future, then use the larger int type.
  • If the variable is used as a struct/class field, then by default it will be padded to take up a full 32 bits anyway, so using byte/Int16 will not save memory.
  • If the variable is short-lived (like inside a function), then the smaller data types will not help much.
  • "byte" or "char" can sometimes describe the data better, and compile-time checking will make sure larger constant values are not assigned to it by accident. E.g., if you store the day of the month (1-31) in a byte and try to assign 1000 to it, it will cause an error (see the sketch after this list).
  • If the variable is used in an array of roughly 100 elements or more, I would use the smaller data type as long as it makes sense.
  • byte and Int16 arrays are not as thread-safe as an int (a primitive), whose aligned reads and writes are guaranteed to be atomic.
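
A quick sketch of the compile-time range check from the fourth bullet (the values are arbitrary):

byte day = 28;       // fine: the constant fits in byte's 0-255 range
// byte bad = 1000;  // compile-time error CS0031: constant value '1000'
                     // cannot be converted to a 'byte'

// Note the check is against byte's full 0-255 range, not 1-31; run-time
// values still need an explicit, checked conversion:
int input = 1000;
byte b = System.Convert.ToByte(input);  // throws OverflowException at run time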

One topic that no one brought up is the limited CPU cache. Smaller programs execute faster than larger ones because the CPU can fit more of the program in the faster L1/L2/L3 caches.

Using the int type can result in fewer CPU instructions; however, it also forces a higher percentage of the data to fall out of the CPU cache. Instructions are cheap to execute: modern CPU cores can execute 3-7 instructions per clock cycle. A single cache miss, on the other hand, can cost 1000-2000 clock cycles, because it has to go all the way to RAM.

Conserving memory also makes the rest of the application perform better, because it is not squeezed out of the cache.

I did a quick sum test, accessing the data in random order, using both a byte array and an int array.

using System;
using System.Linq;

Random r = new Random();

const int SIZE = 10000000, LOOPS = 80000;
// Swap byte[] for int[] to produce the second row of results below.
byte[] array = Enumerable.Repeat(0, SIZE).Select(i => (byte)r.Next(10)).ToArray();
int[] visitOrder = Enumerable.Repeat(0, LOOPS).Select(i => r.Next(SIZE)).ToArray();

System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Start();
int sum = 0;
foreach (int v in visitOrder)   // random-order access defeats the prefetcher
    sum += array[v];
sw.Stop();
Console.WriteLine($"{sw.ElapsedTicks} ticks (sum = {sum})");
Here are the results, in ticks (x86, release mode, without debugger, .NET 4.5, i7-3930K; smaller is better):

Array size:    10   100    1K   10K  100K     1M    10M
byte:         549   559   552   552   568    632   3041
int:          549   566   552   562   590   1803   4206
  • Accessing 1M items randomly, byte was roughly 2.85× faster than int on my CPU (632 vs. 1803 ticks)!
  • Anything under 10,000 elements was hardly noticeable.
  • int was never faster than byte for this basic sum test.
  • These values will vary across CPUs with different cache sizes.

One final note: sometimes I look at the now open-source .NET Framework to see what Microsoft's experts do. The framework uses byte/Int16 surprisingly little; in fact, I could not find any uses.

Solution 3 - C#

You would have to be dealing with a few BILLION rows before this makes any significant difference in terms of storage capacity. Let's say you have three columns, and instead of using a byte-equivalent database type, you use an int-equivalent one.

That gives us 3 (columns) x 3 (bytes extra) per row, or 9 bytes per row.

This means, for "a few million rows" (let's say three million), you are consuming a whole extra 27 megabytes of disk space! Fortunately, as we are no longer living in the 1970s, you shouldn't have to worry about this :)

As said above, stop micro-optimising: the performance hit of converting to and from different integer-like numeric types will hurt you much, much harder than the bandwidth/disk-space costs, unless you are dealing with very, very large datasets.

Solution 4 - C#

For the most part, 'No'.

Unless you know up front that you are going to be dealing with hundreds of millions of rows, it's a micro-optimisation.

Do what fits the domain model best. Later, if you have performance problems, benchmark and profile to pinpoint where they are occurring.

Solution 5 - C#

Not that I didn't believe Jon Grant and others, but I had to see for myself with our "million row table". The table has 1,018,000 rows. I converted 11 tinyint columns and 6 smallint columns into int; there were already 5 int and 3 smalldatetime columns. Four different indexes used a combination of the various data types, but obviously the new indexes now all use int columns.

Making the changes cost me only 40 MB of base-table disk usage, with no indexes. When I added the indexes back, the overall difference was only 30 MB. That surprised me, because I thought the index size would grow more.

So is 30 MB worth the hassle of using all the different data types? No way! I am off to INT land. Thanks, everyone, for setting this anal-retentive programmer back on the straight path to the happy, blissful life of no more integer conversions... yippee!

Solution 6 - C#

If int is used everywhere, no casting or conversions are required. That is a bigger bang for the buck than the memory you will save by using multiple integer sizes.

It just makes life simpler.
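
As a minimal sketch of the conversions being avoided: C# defines arithmetic on int and wider, so smaller operands are promoted to int and the result must be cast back down.

short a = 1, b = 2;
// short bad = a + b;          // CS0266: '+' promotes both operands to int
short sum = (short)(a + b);    // the cast back down is mandatory

int x = 1, y = 2;
int z = x + y;                 // with int everywhere, no casts at all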

Solution 7 - C#

The .NET runtime is optimised for Int32. See previous discussion at https://stackoverflow.com/questions/129023/net-integer-vs-int16
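
One visible consequence, as a small sketch: the base class library itself speaks Int32, with array lengths and indexers declared as int, so smaller index types are widened anyway.

var data = new byte[10];
short i = 5;
int len = data.Length;   // Array.Length is declared as int
byte b = data[i];        // the short index is implicitly widened to int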

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type       Original Author    Original Content on Stackoverflow
Question           Breadtruck         View Question on Stackoverflow
Solution 1 - C#    jalf               View Answer on Stackoverflow
Solution 2 - C#    SunsetQuest        View Answer on Stackoverflow
Solution 3 - C#    Jon Grant          View Answer on Stackoverflow
Solution 4 - C#    Mitch Wheat        View Answer on Stackoverflow
Solution 5 - C#    Breadtruck         View Answer on Stackoverflow
Solution 6 - C#    Robert Harvey      View Answer on Stackoverflow
Solution 7 - C#    Dan Diplo          View Answer on Stackoverflow