# Why does rand() repeat numbers far more often on Linux than Mac?

## C Problem Overview

I was implementing a hashmap in C as part of a project I'm working on and using random inserts to test it. I noticed that `rand()` on Linux seems to repeat numbers far more often than on Mac. `RAND_MAX` is `2147483647/0x7FFFFFFF` on both platforms. I've reduced it to this test program that makes a byte array `RAND_MAX+1` bytes long, generates `RAND_MAX` random numbers, notes if each is a duplicate, and checks it off the list as seen.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main() {
    size_t size = ((size_t)RAND_MAX) + 1;
    char *randoms = calloc(size, sizeof(char));
    int dups = 0;
    srand(time(0));
    for (int i = 0; i < RAND_MAX; i++) {
        int r = rand();
        if (randoms[r]) {
            // printf("duplicate at %d\n", r);
            dups++;
        }
        randoms[r] = 1;
    }
    printf("duplicates: %d\n", dups);
}
```

Linux consistently generates around 790 million duplicates. Mac consistently generates only one, so it loops through every random number it can generate *almost* without repeating. Can anyone please explain to me how this works? I can't tell anything different from the `man` pages, can't tell which RNG each is using, and can't find anything online. Thanks!

## C Solutions

## Solution 1 - C

While at first it may sound like the macOS `rand()` is somehow better for not repeating any numbers, one should note that with this number of values generated it is expected to see plenty of duplicates (in fact, around 790 million, or (2^31 − 1)/e). Likewise, iterating through the numbers in sequence would also produce no duplicates, but wouldn't be considered very random. So the Linux `rand()` implementation is *in this test* indistinguishable from a true random source, whereas the macOS `rand()` is not.

Another thing that appears surprising at first glance is how well the macOS `rand()` manages to avoid duplicates. Looking at its source code, we find the implementation to be as follows:

```c
/*
 * Compute x = (7^5 * x) mod (2^31 - 1)
 * without overflowing 31 bits:
 *      (2^31 - 1) = 127773 * (7^5) + 2836
 * From "Random number generators: good ones are hard to find",
 * Park and Miller, Communications of the ACM, vol. 31, no. 10,
 * October 1988, p. 1195.
 */
long hi, lo, x;

/* Can't be initialized with 0, so use another value. */
if (*ctx == 0)
    *ctx = 123459876;
hi = *ctx / 127773;
lo = *ctx % 127773;
x = 16807 * lo - 2836 * hi;
if (x < 0)
    x += 0x7fffffff;
return ((*ctx = x) % ((unsigned long) RAND_MAX + 1));
```

This does indeed produce every number between 1 and `RAND_MAX` − 1, inclusive, exactly once before the sequence repeats (the state is computed modulo 2^31 − 1, so `RAND_MAX` = 2^31 − 1 itself is congruent to zero and can never occur). Since the next state is based on multiplication, the state can never be zero (or all future states would also be zero). Thus the repeated number you see is the first one, and zero is the one that is never returned.

Apple has been promoting the use of better random number generators in their documentation and examples for at least as long as macOS (or OS X) has existed, so the quality of `rand()` is probably not deemed important, and they've just stuck with one of the simplest pseudorandom generators available. (As you noted, their `rand()` is even commented with a recommendation to use `arc4random()` instead.)

On a related note, the simplest pseudorandom number generator I could find that produces decent results in this (and many other) tests for randomness is xorshift*:

```c
uint64_t x = *ctx;
x ^= x >> 12;
x ^= x << 25;
x ^= x >> 27;
*ctx = x;
return (x * 0x2545F4914F6CDD1DULL) >> 33;
```

This implementation results in almost exactly 790 million duplicates in your test.

## Solution 2 - C

macOS provides an undocumented `rand()` implementation in its stdlib. If you leave it unseeded, the first values it outputs are 16807, 282475249, 1622650073, 984943658 and 1144108930. A quick search will show that this sequence corresponds to a very basic LCG random number generator that iterates the following formula:

> x_{n+1} = 7^5 · x_n (mod 2^31 − 1)

Since the state of this RNG is described entirely by the value of a single 32-bit integer, its period is not very long. To be precise, it repeats itself every 2^31 − 2 iterations, outputting every value from 1 to 2^31 − 2.

I don't think there's a *standard* implementation of rand() for all versions of Linux, but there is a glibc rand() function that is often used. Instead of a single 32-bit state variable, this uses a pool of over 1000 bits, which to all intents and purposes will never produce a fully repeating sequence. Again, you can probably find out what version you have by printing the first few outputs from this RNG without seeding it first. (The glibc rand() function produces the numbers 1804289383, 846930886, 1681692777, 1714636915 and 1957747793.)

So the reason you're getting more collisions in Linux (and hardly any in MacOS) is that the Linux version of rand() is basically more random.

## Solution 3 - C

`rand()` is defined by the C standard, and the C standard does not specify which algorithm to use. Obviously, Apple is using an inferior algorithm compared to your GNU/Linux implementation: the Linux one is indistinguishable from a true random source in your test, while the Apple implementation just shuffles the numbers around.

If you want random numbers of any quality, either use a better PRNG that gives at least some guarantees on the quality of the numbers it returns, or simply read from `/dev/urandom` or similar. The latter gives you cryptographic-quality numbers, but is slow. Even if it is too slow by itself, `/dev/urandom` can provide excellent seeds to some other, faster PRNG.

## Solution 4 - C

In general, the rand/srand pair has been considered sort of deprecated for a long time due to low-order bits displaying less randomness than high-order bits in the results. This may or may not have anything to do with your results, but I think this is still a good opportunity to remember that even though some rand/srand implementations are now more up to date, older implementations persist and it's better to use random(3). On my Arch Linux box, the following note is still in the man page for rand(3):

> The versions of rand() and srand() in the Linux C Library use the same random number generator as random(3) and srandom(3), so the lower-order bits should be as random as the higher-order bits. However, on older rand() implementations, and on current implementations on different systems, the lower-order bits are much less random than the higher-order bits. Do not use this function in applications intended to be portable when good randomness is needed. (Use random(3) instead.)

Just below that, the man page actually gives very short, very simple example implementations of rand and srand that are about the simplest LC RNGs you've ever seen, with a small RAND_MAX. I don't think they match what's in the C standard library, if they ever did. Or at least I hope not.

In general, if you're going to use something from the standard library, use random if you can (the man page lists it as POSIX standard back to POSIX.1-2001, but rand goes back to before C was even standardized). Or better yet, crack open Numerical Recipes (or look for it online) or Knuth and implement one. They're really easy, and you only really need to do it once to have a general-purpose RNG with the attributes you most often need and which is of known quality.