Is there a 128 bit integer in C++?

C++ | Performance | Types | Cross Platform | Uuid

C++ Problem Overview


I need to store a 128-bit UUID in a variable. Is there a 128-bit data type in C++? I do not need arithmetic operations; I just want to store and read the value very fast.

A new feature from C++11 would be fine, too.

C++ Solutions


Solution 1 - C++

Although GCC does provide __int128, it is supported only for targets (processors) that have an integer mode wide enough to hold 128 bits. On a given system, sizeof(intmax_t) and sizeof(uintmax_t) tell you the widest integer types that the compiler and the platform support.
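As a quick check (a minimal sketch, assuming a GCC or Clang toolchain, which predefine __SIZEOF_INT128__ when the extension is available):

#include <cstdint>
#include <iostream>

int main() {
    // intmax_t is typically still 64 bits even where __int128 exists,
    // because __int128 is an extension rather than a standard integer type.
    std::cout << "sizeof(intmax_t):  " << sizeof(std::intmax_t) << " bytes\n";
    std::cout << "sizeof(uintmax_t): " << sizeof(std::uintmax_t) << " bytes\n";

#ifdef __SIZEOF_INT128__
    std::cout << "sizeof(__int128):  " << sizeof(__int128) << " bytes\n";
#else
    std::cout << "__int128 is not available on this target\n";
#endif
}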

Solution 2 - C++

GCC and Clang support __int128.
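A minimal sketch of packing and unpacking a UUID with the extension type, assuming a 64-bit GCC or Clang target (the helper name make_uuid is just for illustration):

#include <cstdint>

// Hypothetical helper: combine two 64-bit halves into one 128-bit value.
unsigned __int128 make_uuid(std::uint64_t hi, std::uint64_t lo) {
    return (static_cast<unsigned __int128>(hi) << 64) | lo;
}

int main() {
    unsigned __int128 uuid = make_uuid(0x0123456789abcdefULL, 0xfedcba9876543210ULL);

    // Split back into two 64-bit words for printing or serialization.
    std::uint64_t hi = static_cast<std::uint64_t>(uuid >> 64);
    std::uint64_t lo = static_cast<std::uint64_t>(uuid);
    (void)hi;
    (void)lo;
}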

Solution 3 - C++

Check out Boost's implementation:

#include <boost/multiprecision/cpp_int.hpp>

using namespace boost::multiprecision;

int128_t v = 1;

This is better than strings or arrays, especially if you need to do arithmetic operations with it.
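A self-contained sketch, assuming Boost.Multiprecision is installed (it is header-only, so no linking is required):

#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main() {
    using boost::multiprecision::int128_t;

    int128_t v = 1;
    v <<= 100;              // shifts and arithmetic work like a built-in integer
    v += 42;
    std::cout << v << '\n'; // streaming is supported too
}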

Solution 4 - C++

Your question has two parts.

1. 128-bit integer. As suggested by @PatrikBeck, boost::multiprecision is a good way to handle really big integers.

2. A variable to store a UUID / GUID / CLSID or whatever you call it. In this case boost::multiprecision is not a good idea. You need a GUID structure, which is designed for that purpose. Since the question is tagged cross-platform, you can simply copy that structure into your code and make it like:

struct GUID
{
    uint32_t Data1;
    uint16_t Data2;
    uint16_t Data3;
    uint8_t  Data4[8];
};

This format is defined by Microsoft for its own internal reasons; you can even simplify it to:

struct GUID
{
    uint8_t Data[16];
};

You will get better performance with a simple structure than with an object that can handle a bunch of different things. In any case, you don't need to do math with GUIDs, so you don't need any fancy object.
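A small sketch of how cheap the simplified layout is to work with; the equality operator below is my own addition for illustration, not part of any platform GUID API:

#include <cstdint>
#include <cstring>

struct GUID
{
    uint8_t Data[16];
};

// Hypothetical helper: two GUIDs are equal when all 16 bytes match.
inline bool operator==(const GUID& a, const GUID& b)
{
    return std::memcmp(a.Data, b.Data, sizeof a.Data) == 0;
}

int main()
{
    GUID a = {};  // all-zero ("nil") UUID
    GUID b = {};
    bool same = (a == b);
    (void)same;
}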

Solution 5 - C++

I would recommend using std::bitset<128> (you can always do something like using UUID = std::bitset<128>;). It will probably have a memory layout similar to the custom struct proposed in the other answers, but you won't need to define your own comparison operators, hash, etc.
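A minimal sketch of that approach (the UUID alias name is just for illustration):

#include <bitset>
#include <cstddef>
#include <functional>
#include <unordered_set>

using UUID = std::bitset<128>;

int main() {
    UUID a;        // all bits zero
    UUID b;
    b.set(0);      // set the lowest bit

    bool equal = (a == b);                  // operator== comes with std::bitset
    std::size_t h = std::hash<UUID>{}(a);   // std::hash is specialized for bitset

    std::unordered_set<UUID> seen;          // usable as a hash-set key out of the box
    seen.insert(a);

    (void)equal;
    (void)h;
}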

Solution 6 - C++

There is no 128-bit integer in Visual C++ because the Microsoft calling convention only allows returning two 32-bit values in the EDX:EAX pair. This is a constant headache because when you multiply two integers together, the result is a two-word integer. Most load-and-store machines support working with two CPU-word-sized integers, but working with four requires a software hack, so a 32-bit CPU cannot process 128-bit integers, and 8-bit and 16-bit CPUs can't do 64-bit integers without a rather costly software hack. 64-bit CPUs can and regularly do work with 128-bit integers, because multiplying two 64-bit integers yields a 128-bit result, and GCC has supported 128-bit integers since version 4.6.

This presents a problem for writing portable code, because you have to do an ugly hack where you return one 64-bit word in the return register and pass the other out through a reference. For example, in order to print a floating-point number fast with Grisu, we use 128-bit unsigned multiplication as follows:

#include <cstdint>
#if defined(_MSC_VER) && defined(_M_AMD64)
#define USING_VISUAL_CPP_X64 1
#include <intrin.h>
#include <intrin0.h>
#pragma intrinsic(_umul128)
#elif (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6))
#define USING_GCC 1
#if defined(__x86_64__)
#define COMPILER_SUPPORTS_128_BIT_INTEGERS 1
#endif
#endif

#if USING_VISUAL_CPP_X64
    // This snippet is the body of a member function from the author's codebase:
    // UI8 is assumed to be a 64-bit unsigned integer (e.g. uint64_t), TBinary the
    // author's binary floating-point type, and f, rhs_f, e, rhs_e class members.
    UI8 h;
    UI8 l = _umul128(f, rhs_f, &h);  // 64x64 -> 128-bit multiply via the MSVC intrinsic
    if (l & (UI8(1) << 63))  // rounding
      h++;
    return TBinary(h, e + rhs_e + 64);
#elif USING_GCC
    // UIH is assumed to be unsigned __int128, available when
    // COMPILER_SUPPORTS_128_BIT_INTEGERS is defined above.
    UIH p = static_cast<UIH>(f) * static_cast<UIH>(rhs_f);
    UI8 h = p >> 64;
    UI8 l = static_cast<UI8>(p);
    if (l & (UI8(1) << 63))  // rounding
      h++;
    return TBinary(h, e + rhs_e + 64);
#else
    // Portable fallback: build the 64x64 -> 128-bit product from 32-bit halves.
    const UI8 M32 = 0xFFFFFFFF;
    const UI8 a = f >> 32;
    const UI8 b = f & M32;
    const UI8 c = rhs_f >> 32;
    const UI8 d = rhs_f & M32;
    const UI8 ac = a * c;
    const UI8 bc = b * c;
    const UI8 ad = a * d;
    const UI8 bd = b * d;
    UI8 tmp = (bd >> 32) + (ad & M32) + (bc & M32);
    tmp += 1U << 31;  /// mult_round
    return TBinary(ac + (ad >> 32) + (bc >> 32) + (tmp >> 32), e + rhs_e + 64);
#endif
  }  // end of the member function this fragment belongs to

Solution 7 - C++

Use the TBigInt template and pick the bit width in the template parameters: TBigInt<128, true> for a signed 128-bit integer, or TBigInt<128, false> for an unsigned 128-bit integer.

Hope that helps; this may be a late reply, and someone else may already have found this method.

TBigInt is a struct template defined by Unreal Engine. It provides a multi-word integer of arbitrary bit width.

Basic usage (as far as I can tell):

#include <Math/BigInt.h>

void foo() {
    TBigInt<128, true> signed128bInt = 0;
    TBigInt<128, false> unsigned128bInt = 0;
}

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | danijar | View Question on Stackoverflow
Solution 1 - C++ | Onkar | View Answer on Stackoverflow
Solution 2 - C++ | doron | View Answer on Stackoverflow
Solution 3 - C++ | Patrik Beck | View Answer on Stackoverflow
Solution 4 - C++ | ST3 | View Answer on Stackoverflow
Solution 5 - C++ | Mikhail Vasilyev | View Answer on Stackoverflow
Solution 6 - C++ | Cale McCollough | View Answer on Stackoverflow
Solution 7 - C++ | Silanus | View Answer on Stackoverflow