Did I understand /dev/urandom?

Tags: Linux, Unix, Random, Cryptography

Linux Problem Overview


I have been reading about /dev/urandom, and as far as I can tell, /dev/random creates cryptographically random numbers by taking advantage of several events like network packet timings, etc. However, did I understand right that /dev/urandom uses a PRNG, seeded with a number from /dev/random? Or does it just use /dev/random as long as there are bits -- and when they run out it falls back to some PRNG with a seed gathered from where?

Linux Solutions


Solution 1 - Linux

From the urandom manpage:

> The random number generator gathers environmental noise from device drivers and other sources into an entropy pool. The generator also keeps an estimate of the number of bits of noise in the entropy pool. From this entropy pool random numbers are created.
>
> When read, the /dev/random device will only return random bytes within the estimated number of bits of noise in the entropy pool. /dev/random should be suitable for uses that need very high quality randomness such as one-time pad or key generation. When the entropy pool is empty, reads from /dev/random will block until additional environmental noise is gathered.
>
> A read from the /dev/urandom device will not block waiting for more entropy. As a result, if there is not sufficient entropy in the entropy pool, the returned values are theoretically vulnerable to a cryptographic attack on the algorithms used by the driver. Knowledge of how to do this is not available in the current unclassified literature, but it is theoretically possible that such an attack may exist. If this is a concern in your application, use /dev/random instead.

Both use a PRNG, though mixing environmental data into the entropy pool makes cracking the PRNG astronomically more difficult, and impossible without also gathering the exact same environmental data.

As a rule of thumb, without specialized (and expensive) hardware that gathers data from, say, quantum events, there is no such thing as a true random number generator (i.e. an RNG that generates truly unpredictable numbers); for cryptographic purposes, though, /dev/random or /dev/urandom will suffice, since the mechanism behind them is a CSPRNG (cryptographically secure pseudo-random number generator).
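In application code you rarely open /dev/urandom by hand; most languages expose the kernel CSPRNG directly. As a minimal Python sketch (`os.urandom` is documented to draw from the same kernel source that backs /dev/urandom on Linux):

```python
import os

# Draw 32 bytes (256 bits) from the kernel CSPRNG -- on Linux this is
# the same generator that backs /dev/urandom, with no Python-level PRNG
# in between.
key = os.urandom(32)
print(len(key))  # 32
```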

The entropy pool and the blocking reads of /dev/random are a safeguard intended to keep the output unpredictable. If, for example, an attacker exhausted a system's entropy pool, it is possible, though highly unlikely with today's technology, that they could predict the output of a /dev/urandom that has not been reseeded for a long time (doing so would also require preventing the system from collecting more entropy, which is itself astronomically improbable).
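The kernel's running estimate of the pool mentioned above is visible under /proc on Linux. A small sketch for inspecting it (the /proc path is Linux-specific, so this helper returns None elsewhere):

```python
def entropy_estimate():
    """Return the kernel's current entropy estimate in bits, or None
    if the /proc interface is unavailable (non-Linux systems, some
    containers, etc.)."""
    path = "/proc/sys/kernel/random/entropy_avail"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return None
```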

Solution 2 - Linux

Actually what you need in practice is what FreeBSD's /dev/urandom provides: it will read an initial seed of sufficient length from /dev/random, then use a PRNG. Thus, it may block initially (just after system boot) but once it has gathered enough entropy, it never blocks. This provides the level of randomness needed by most cryptographic protocols, while not unduly blocking.
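The design described here (one strong seed read once, then a fast deterministic generator that never blocks again) can be sketched with a toy hash-in-counter-mode construction. This is an illustration only, with made-up names; real kernels use vetted stream ciphers such as ChaCha20, not raw SHA-256 in counter mode:

```python
import hashlib
import os

class SeededCSPRNG:
    """Toy sketch of the seed-once design: take one strong seed, then
    stretch it deterministically by hashing seed||counter. Illustration
    only -- do not use this in place of the OS generator."""

    def __init__(self, seed: bytes):
        if len(seed) < 32:
            raise ValueError("need at least 256 bits of seed")
        self._seed = seed
        self._counter = 0

    def read(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            block = hashlib.sha256(
                self._seed + self._counter.to_bytes(8, "big")
            ).digest()
            out.extend(block)
            self._counter += 1
        return bytes(out[:n])

# One blocking-quality seed at "boot", then unlimited non-blocking output.
rng = SeededCSPRNG(os.urandom(32))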

Linux's /dev/urandom is similar except that it will never block, and thus may risk returning low-quality randomness if used just after boot. On the other hand, /dev/random may block even long after boot-time, which is also a problem. I have often seen servers stall mysteriously, because some software was insisting on using /dev/random, and the keyboard-less server was not getting enough entropy.

Usual Linux distributions save, at shutdown, a random seed obtained from /dev/urandom, and inject it back upon the next boot, thus preserving the quality of the randomness provided by /dev/urandom. Cryptographic quality becomes an issue only during OS installation, and usually not even then, because installation involves a number of interactions with the human being who performs it, yielding hordes of entropy.
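The save-and-restore step distributions perform can be sketched as follows. The path is illustrative (real distributions use mechanisms like systemd-random-seed), and note the caveat in the comment about entropy crediting:

```python
import os

SEED_FILE = "/var/lib/random-seed"  # illustrative path; varies by distro

def save_seed(path: str = SEED_FILE, nbytes: int = 512) -> None:
    # At shutdown: persist output of the kernel CSPRNG for the next boot.
    with open(path, "wb") as f:
        f.write(os.urandom(nbytes))

def restore_seed(path: str = SEED_FILE) -> None:
    # At boot: writing to /dev/urandom mixes the bytes back into the pool.
    # This does not increase the kernel's entropy *estimate*; crediting
    # entropy requires the privileged RNDADDENTROPY ioctl.
    with open(path, "rb") as src, open("/dev/urandom", "wb") as dst:
        dst.write(src.read())
```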

To sum up, under both Linux and FreeBSD, you should use /dev/urandom, not /dev/random.

Solution 3 - Linux

Quoting here

> /dev/random will block after the entropy pool is exhausted. It will remain blocked until additional data has been collected from the sources of entropy that are available. This can slow down random data generation.
>
> /dev/urandom will not block. Instead it will reuse the internal pool to produce more pseudo-random bits.


/dev/urandom is best used when:

  • You just want a large file with random data for some kind of testing.
  • You are using the dd command to wipe data off a disk by replacing it with random data.
  • Almost everywhere else where you don’t have a really good reason to use /dev/random instead.
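The first two bullets amount to streaming many bytes from the non-blocking source. A chunked Python sketch (function name and chunk size are my own choices), so the whole file never sits in memory:

```python
import os

def random_file(path: str, size: int, chunk: int = 1 << 20) -> None:
    """Fill `path` with `size` bytes of random data from the kernel
    CSPRNG, writing in chunks of at most `chunk` bytes."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n
```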

/dev/random is likely to be the better choice when:

  • Randomness is critical to the security of cryptography in your application – one-time pads, key generation.
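For the key-generation case, high-level wrappers usually exist so you never touch the device files directly; Python's `secrets` module, for instance, draws from the most secure source the OS provides:

```python
import secrets

aes_key = secrets.token_bytes(32)      # 256-bit symmetric key material
api_token = secrets.token_urlsafe(24)  # URL-safe bearer-style token
```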

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

  • Question: Tower (View Question on Stackoverflow)
  • Solution 1 - Linux: Lie Ryan (View Answer on Stackoverflow)
  • Solution 2 - Linux: Thomas Pornin (View Answer on Stackoverflow)
  • Solution 3 - Linux: zangw (View Answer on Stackoverflow)