What is the difference between intXX_t and int_fastXX_t?

Tags: c, optimization, types, integer, c99

C Problem Overview


I have recently discovered the existence of the standard "fastest" types, mainly int_fast32_t and int_fast64_t.

I was always told that, for normal use on a mainstream architecture, one is better off using the classical int and long, which should always match the processor's natural word size and so avoid useless numeric conversions.

In the C99 Standard, §7.18.1.3p2 says:

> "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."

And there is also a note about it in footnote 225, attached to §7.18.1.3p1:

> "The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."

It's unclear to me what "fastest" really means. I do not understand when I should use these types and when I should not.

I have searched a little and found that some open-source projects have switched some of their functions to these types, but not all of them, without really explaining why they changed only a part of their code.

Do you know the specific cases/usages in which int_fastXX_t is really faster than the classical types?

C Solutions


Solution 1 - C

In the C99 Standard, 7.18.1.3 Fastest minimum-width integer types.

>(7.18.1.3p1) "Each of the following types designates an integer type that is usually fastest^225) to operate with among all integer types that have at least the specified width."

>225) "The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."

and

>(7.18.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."

The types int_fastN_t and uint_fastN_t are counterparts to the exact-width integer types intN_t and uintN_t. The implementation only guarantees that they are at least N bits wide; it is free to use a wider type if operating on the larger type is an optimization.

For example, on a 32-bit machine, uint_fast16_t could be defined as unsigned int rather than as unsigned short, because working with types of the machine word size is more efficient.
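
As a quick illustration (a sketch; the sizes printed depend entirely on the platform, ABI, and C library), a small program can reveal which underlying types the implementation actually chose:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* On a typical 32-bit or 64-bit desktop target, uint16_t is 2 bytes,
     * while uint_fast16_t is often widened to the machine word size.
     * The exact output depends on the platform/ABI. */
    printf("uint16_t      : %zu bytes\n", sizeof(uint16_t));
    printf("uint_least16_t: %zu bytes\n", sizeof(uint_least16_t));
    printf("uint_fast16_t : %zu bytes\n", sizeof(uint_fast16_t));
    return 0;
}
```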

Another reason for their existence is that the exact-width integer types are optional in C, while the fastest minimum-width integer types and the minimum-width integer types (int_leastN_t and uint_leastN_t) are required.
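
For example (a sketch; the typedef name my_counter_t is made up for illustration, and the test relies on the standard rule that <stdint.h> defines INT32_MAX exactly when int32_t exists), portable code can fall back to the always-available minimum-width type:

```c
#include <stdint.h>

/* int32_t is optional: it exists only on implementations that have a
 * 32-bit two's-complement type with no padding bits. When it exists,
 * <stdint.h> also defines INT32_MAX, so that macro works as a test. */
#ifdef INT32_MAX
typedef int32_t my_counter_t;        /* exactly 32 bits */
#else
typedef int_least32_t my_counter_t;  /* at least 32 bits, always provided */
#endif
```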

Solution 2 - C

GNU libc defines {int,uint}_fast{16,32}_t as 64-bit when compiling for 64-bit CPUs and 32-bit otherwise. Operations on 64-bit integers are faster on Intel and AMD 64-bit x86 CPUs than the same operations on 32-bit integers.
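
One place where this choice shows up in ordinary code is a loop counter: declaring it as a fast type lets glibc hand the compiler a full-register-width integer on x86-64 while still promising at least 32 bits everywhere (a sketch; the function and its parameters are hypothetical, and any actual speed difference depends on the compiler and the surrounding code):

```c
#include <stdint.h>

/* Sum an array using a counter that is "at least 32 bits, but whatever is
 * fastest here": 64-bit under glibc on x86-64, 32-bit on 32-bit targets. */
int64_t sum32(const int32_t *a, uint_fast32_t n)
{
    int64_t total = 0;
    for (uint_fast32_t i = 0; i < n; ++i)
        total += a[i];
    return total;
}
```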

Solution 3 - C

There will probably not be a difference except on exotic hardware where int32_t and int16_t don't even exist.

In that case you might use int_least16_t to get the smallest type that can hold 16 bits, which could be important if you want to conserve space.

On the other hand, using int_fast16_t might get you another type, larger than int_least16_t but possibly faster for "typical" integer use. The implementation has to decide what is faster and what is typical; perhaps that is obvious for some special-purpose hardware.
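
A common way to split the two roles (a sketch under the assumption that storage size and arithmetic speed are the respective concerns; the names samples and average are made up for illustration) is to use the least type where many values are stored and a fast type for the working variables:

```c
#include <stdint.h>
#include <stddef.h>

#define N_SAMPLES 10000

/* Storage: int_least16_t keeps the array as small as the platform allows. */
static int_least16_t samples[N_SAMPLES];

/* Computation: the fast types let the implementation pick whatever width it
 * considers quickest; int_fast32_t also gives the running total enough room. */
int_fast32_t average(void)
{
    int_fast32_t total = 0;
    for (size_t i = 0; i < N_SAMPLES; ++i)
        total += samples[i];
    return total / N_SAMPLES;
}
```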

On most common machines these 16-bit types will all be a typedef for short, and you don't have to bother.

Solution 4 - C

IMO they are pretty pointless.

The compiler doesn't care what you call a type, only what size it is and what rules apply to it. So if int, int32_t and int_fast32_t are all 32 bits on your platform, they will almost certainly all perform the same.

The theory is that implementers of the language should choose based on what is fastest on their hardware, but the standard writers never pinned down a clear definition of "fastest". Add to that the fact that platform maintainers are reluctant to change the definition of such types (because it would be an ABI break), and the definitions end up arbitrarily picked at the start of a platform's life (or inherited from other platforms the C library was ported from) and never touched again.

If you are at a level of micro-optimisation where you think variable size may make a significant difference, then benchmark the different options with your code on your processor. Otherwise don't worry about it. The "fast" types don't add anything useful IMO.
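
If you do want to measure it, a rough benchmark sketch in plain standard C (clock() resolution and compiler optimisations can easily dominate the result, so treat the numbers with suspicion) might look like this; unsigned types are used so the accumulator's deliberate wraparound stays well defined:

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 100000000u

/* volatile accumulators keep the compiler from folding the loops away. */
static double time_u32(void)
{
    volatile uint32_t acc = 0;
    clock_t start = clock();
    for (uint32_t i = 0; i < ITERS; ++i)
        acc += i & 0xFFu;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

static double time_fast32(void)
{
    volatile uint_fast32_t acc = 0;
    clock_t start = clock();
    for (uint_fast32_t i = 0; i < ITERS; ++i)
        acc += i & 0xFFu;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    printf("uint32_t      loop: %.3f s\n", time_u32());
    printf("uint_fast32_t loop: %.3f s\n", time_fast32());
    return 0;
}
```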

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
|---|---|---|
| Question | Coren | View Question on Stackoverflow |
| Solution 1 - C | ouah | View Answer on Stackoverflow |
| Solution 2 - C | Pr0methean | View Answer on Stackoverflow |
| Solution 3 - C | Bo Persson | View Answer on Stackoverflow |
| Solution 4 - C | plugwash | View Answer on Stackoverflow |