Question

I'd like to get a uniform distribution in the range [0.0, 1.0).

If possible, please let the implementation make use of random bytes from /dev/urandom.

It would also be nice if your solution was thread-safe. If you're not sure, please indicate that.

See Solution 2 below for the solution I thought of after reading the other answers.


Solution 2

This seems to be a pretty good way:

#include <math.h>  // ldexp

unsigned short int r1, r2, r3;
// let r1, r2 and r3 hold random 16-bit values
double result = ldexp(r1, -48) + ldexp(r2, -32) + ldexp(r3, -16);

This is based on NetBSD's drand48 implementation.
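To make the snippet above self-contained, here is a minimal sketch that fills the three 16-bit halves from /dev/urandom before combining them. The helper name `urandom_drand` is my own invention, and the sketch silently returns 0.0 if the device can't be opened, which real code should handle properly:

```c
#include <math.h>    /* ldexp */
#include <stdio.h>
#include <stdint.h>

/* Hypothetical helper: read 6 random bytes from /dev/urandom into three
 * 16-bit values and combine them exactly as in the snippet above.
 * The result is always in [0, 1); the maximum possible value is 1 - 2^-48. */
double urandom_drand(void) {
    uint16_t r[3] = {0, 0, 0};
    FILE *f = fopen("/dev/urandom", "rb");
    if (f) {
        fread(r, sizeof r[0], 3, f);
        fclose(f);
    }
    return ldexp(r[0], -48) + ldexp(r[1], -32) + ldexp(r[2], -16);
}
```

Because each call opens and closes the stream itself, concurrent calls don't share any state, which addresses the thread-safety part of the question (at the cost of an open/close per call).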

OTHER TIPS

Simple: A double has 52 mantissa bits, assuming IEEE. So generate a 52-bit (or larger) unsigned random integer (for example by reading bytes from /dev/urandom), convert it into a double, and divide it by 2^(number of bits it was).

This gives a numerically uniform distribution (in that the probability of a value being in a given range is proportional to the range) down to the 52nd binary digit.
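The simple method can be sketched in a few lines. This is my own minimal version (the function name is made up), assuming /dev/urandom exists and falling back to 0 if it doesn't:

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of the simple method: read 64 random bits from /dev/urandom,
 * keep 52 of them, and divide by 2^52 to land in [0, 1). */
double urandom_uniform52(void) {
    uint64_t bits = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f) {
        fread(&bits, sizeof bits, 1, f);
        fclose(f);
    }
    bits >>= 12;                       /* keep the top 52 bits */
    return bits / 4503599627370496.0;  /* 2^52 */
}
```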

Complicated: However, there are a lot of double values in the range [0,1) which the above cannot generate. To be specific, half the values in the range [0,0.5) (the ones that have their least significant bit set) can't occur. Three quarters of the values in the range [0,0.25) (the ones that have either of their least 2 bits set) can't occur, etc, all the way to only one positive value less than 2^-51 being possible, despite a double being capable of representing squillions of such values. So it can't be said to be truly uniform across the specified range to full precision.

Of course we don't want to choose one of those doubles with equal probability, because then the resulting number will on average be too small. We still need the probability of the result being in a given range to be proportional to the range, but with a higher precision on what ranges that works for.

I think the following works. I haven't particularly studied or tested this algorithm (as you can probably tell by the way there's no code), and personally I wouldn't use it without finding proper references indicating it's valid. But here goes:

  • Start the exponent off at 52 and choose a 52-bit random unsigned integer (assuming 52 bits of mantissa).
  • If the most significant bit of the integer is 0, increase the exponent by one, shift the integer left by one, and fill the least significant bit in with a new random bit.
  • Repeat until either you hit a 1 in the most significant place, or else the exponent gets too big for your double (1023. Or possibly 1022).
  • If you found a 1, divide your value by 2^exponent. If you got all zeroes, return 0. (I know, that's not actually a special case, but it bears emphasizing how very unlikely a 0 return is.) [Edit: actually it might be a special case - it depends on whether or not you want to generate denormals. If not, then once you have enough 0s in a row you discard anything left and return 0. But in practice this is so unlikely as to be negligible, unless the random source isn't random.]
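The steps above can be sketched as follows. Like the answer itself, this is untested reasoning rather than a vetted implementation; I've substituted a small xorshift PRNG for the random source so the sketch is self-contained, and the function name is made up:

```c
#include <stdint.h>
#include <math.h>

/* Placeholder 64-bit PRNG (xorshift64), standing in for /dev/urandom. */
static uint64_t rng_state = 0x9e3779b97f4a7c15ULL;
static uint64_t next_bits(void) {
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

/* Sketch of the variable-exponent algorithm above: keep pulling random
 * bits into a 52-bit mantissa until its top bit is 1, increasing the
 * (negative) exponent each time a leading 0 is shifted out. */
double full_precision_uniform(void) {
    int exponent = 52;
    uint64_t mantissa = next_bits() >> 12;        /* 52 random bits */
    while ((mantissa & (1ULL << 51)) == 0) {
        if (exponent >= 1074)                     /* smallest subnormal reached */
            return 0.0;
        mantissa = ((mantissa << 1) | (next_bits() & 1)) & ((1ULL << 52) - 1);
        exponent++;
    }
    return ldexp((double)mantissa, -exponent);    /* mantissa / 2^exponent */
}
```

When the loop exits, the mantissa lies in [2^51, 2^52), so the result lies in [2^(51-exponent), 2^(52-exponent)) — the exponent grows exactly as often as a fair coin comes up tails, which is what makes the small ranges get their proportional share of probability.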

I don't know whether there's actually any practical use for such a random double, mind you. Your definition of random should depend to an extent what it's for. But if you can benefit from all 52 of its significant bits being random, this might actually be helpful.

Reading from files via stdio is thread-safe AFAIK (each fread() locks the stream), so using fopen() to read from /dev/urandom will yield "truly random" bytes even from multiple threads.

Although there might be potential gotchas, methinks any such set of bytes, accessed as an integer and divided by the maximum integer of that size, will yield a floating-point value between 0 and 1 with approximately that distribution.

Eg:

#include <stdio.h>
#include <stdint.h>

FILE* f = fopen("/dev/urandom", "r");
uint32_t value;  // unsigned, and `int` is a keyword so it can't be a variable name
fread(&value, sizeof(uint32_t), 1, f);
fclose(f);
double theRandomValue = value / 4294967295.0;  // 2^32 - 1; divide by 4294967296.0 for [0, 1)

The trick is that you need a 52-bit randomizer that meets your requirements. A few lines of code with a union to stick those 52 bits into the mantissa, and you have your number. The trick is not the double; the trick is your desired randomizer.
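The union trick being alluded to can be sketched like this (the helper name is mine; `random_bits` stands in for whatever randomizer you choose). Setting the biased exponent field to 1023 makes the double fall in [1, 2), and subtracting 1 maps it to [0, 1):

```c
#include <stdint.h>

/* Sketch of the mantissa bit-trick: place 52 random bits in the mantissa
 * of a double whose exponent field encodes 2^0, giving a value in [1, 2),
 * then subtract 1 to land in [0, 1). */
double bits_to_unit_double(uint64_t random_bits) {
    union { uint64_t u; double d; } conv;
    conv.u = (random_bits & ((1ULL << 52) - 1))  /* 52 mantissa bits */
           | (1023ULL << 52);                    /* biased exponent for [1, 2) */
    return conv.d - 1.0;
}
```

Type-punning through a union is well-defined in C (though not in C++, where `memcpy` is the portable spelling).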

#include <stdio.h>
#include <stdlib.h>
printf("%f\n", drand48());

/dev/random:

#include <fcntl.h>   // open
#include <unistd.h>  // read, close

double c;
int fd = open("/dev/random", O_RDONLY);
unsigned int a, b;
read(fd, &a, sizeof(a));
read(fd, &b, sizeof(b));
close(fd);
if (a > b)
    c = (double)b / (double)a;  // fabs() is unnecessary: both operands are unsigned
else
    c = (double)a / (double)b;  // note: divides 0/0 in the rare case a == b == 0

c is your random value (in [0, 1]; note it equals exactly 1.0 whenever a == b).

/dev/urandom is not POSIX, and is not generally available.

The standard way of generating a double uniformly in [0,1) is to generate an integer in the range [0,2^N) and divide by 2^N. So pick your favorite random number generator and use it. For simulations, mine is the Mersenne Twister, as it is extremely quick, yet its outputs show very little correlation. It can even do this division for you, and has a variant that gives more precision for the smaller numbers. Typically you give it a seed to start with, which helps with repeatability when debugging or showing others your results. Of course, you can have your code grab a random number from /dev/urandom as the seed if one isn't specified.
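The seeding idea in the last sentence can be sketched as below. The helper name and its two-argument interface are my own invention, chosen so a caller can either pass an explicit seed for repeatable runs or let /dev/urandom pick one:

```c
#include <stdio.h>

/* Sketch: return the user's seed when one was given (repeatable runs for
 * debugging), otherwise pull a seed from /dev/urandom. Falls back to 0 if
 * the device can't be opened, which real code should report. */
long get_seed(long user_seed, int have_user_seed) {
    if (have_user_seed)
        return user_seed;
    long seed = 0;
    FILE *f = fopen("/dev/urandom", "rb");
    if (f) {
        fread(&seed, sizeof seed, 1, f);
        fclose(f);
    }
    return seed;
}
```

You would then pass the result to whatever generator you chose, e.g. `srand48(get_seed(0, 0));`.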

For cryptographic purposes you should use one of the standard cryptographic libraries out there instead, such as OpenSSL, which will indeed use /dev/urandom when available.

As to thread safety, most won't be, at least with the standard interfaces, so you'll need to build a layer on top, or only use them in one thread. The ones that are thread safe have you supply a state that they modify, so that instead you are effectively running multiple non-interacting random number generators, which may not be what you are looking for.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow