Question

In this StackOverflow question:

Generating random integer from a range

the accepted answer suggests the following formula for generating a random integer between given min and max values, with both min and max included in the range:

output = min + (rand() % (int)(max - min + 1))

But it also says that

This is still slightly biased towards lower numbers ... It's also possible to extend it so that it removes the bias.

But it doesn't explain why it's biased towards lower numbers or how to remove the bias. So, the question is: is this the optimal approach to generating a random integer within a (signed) range while relying on nothing fancier than the rand() function, and if it is optimal, how can the bias be removed?

EDIT:

I've just tested the while-loop algorithm suggested by @Joey against floating-point extrapolation:

static const double s_invRandMax = 1.0/((double)RAND_MAX + 1.0);
return min + (int)(((double)(max + 1 - min))*rand()*s_invRandMax);

to see how uniformly "balls" "fall" into and are distributed among a number of "buckets", one test for the floating-point extrapolation and another for the while-loop algorithm. But the results varied depending on the number of "balls" (and "buckets"), so I couldn't easily pick a winner. The working code can be found at this Ideone page. For example, with 10 buckets and 100 balls the maximum deviation from the ideal probability among buckets is smaller for the floating-point extrapolation than for the while-loop algorithm (0.04 and 0.05 respectively), but with 1000 balls the maximum deviation of the while-loop algorithm is smaller (0.024 and 0.011), and with 10000 balls the floating-point extrapolation again does better (0.0034 and 0.0053), and so on, without much consistency. The possibility that neither algorithm consistently produces a more uniform distribution than the other makes me lean towards the floating-point extrapolation, since it appears to run faster than the while-loop algorithm. So is it fine to choose the floating-point extrapolation algorithm, or are my tests/conclusions not entirely correct?

Solution

The problem occurs when the number of outputs from the random number generator (RAND_MAX+1) is not evenly divisible by the desired range (max-min+1). Since there will be a consistent mapping from a random number to an output, some outputs will be mapped to more random numbers than others. This is true regardless of how the mapping is done - whether you use modulo, division, conversion to floating point, or whatever voodoo you can come up with, the basic problem remains.

The magnitude of the problem is very small, and undemanding applications can generally get away with ignoring it. The smaller the range and the larger RAND_MAX is, the less pronounced the effect will be.
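
For a concrete sense of scale: with RAND_MAX = 32767 and a range of 10 outputs, 32768 = 3276*10 + 8, so eight of the ten outputs are each produced by 3277 raw values of rand() while the other two are produced by only 3276, a relative difference of roughly 0.03%.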

I took your example program and tweaked it a bit. First I created a special version of rand that only has a range of 0-255, to better demonstrate the effect. I made a few tweaks to rangeRandomAlg2. Finally I changed the number of "balls" to 1000000 to improve the consistency. You can see the results here: http://ideone.com/4P4HY

Notice that the floating-point version produces two tightly grouped probabilities, near either 0.101 or 0.097, nothing in between. This is the bias in action.

I think calling this "Java's algorithm" is a bit misleading - I'm sure it's much older than Java.

int rangeRandomAlg2 (int min, int max)
{
    int n = max - min + 1;                    /* size of the target range */
    int remainder = RAND_MAX % n;
    int x;
    do
    {
        x = rand();
    } while (x >= RAND_MAX - remainder);      /* reject the incomplete top slice so each output stays equally likely */
    return min + x % n;
}

OTHER TIPS

The problem is that you're doing a modulo operation. This would be no problem if RAND_MAX would be evenly divisible by your modulus, but usually that is not the case. As a very contrived example, assume RAND_MAX to be 11 and your modulus to be 3. You'll get the following possible random numbers and the following resulting remainders:

0 1 2 3 4 5 6 7 8 9 10
0 1 2 0 1 2 0 1 2 0 1

As you can see, 0 and 1 are slightly more probable than 2.

One option to solve this is rejection sampling: By disallowing the numbers 9 and 10 above you can cause the resulting distribution to be uniform again. The tricky part is figuring out how to do so efficiently. A very nice example (one that took me two days to understand why it works) can be found in Java's java.util.Random.nextInt(int) method.

The reason why Java's algorithm is a little tricky is that it avoids slow operations like multiplication and division for the check. If you don't care too much you can also do it the naïve way:

int n = (int)(max - min + 1);
int remainder = RAND_MAX % n;
int x, output;
do {
  x = rand();
  output = x % n;
} while (x >= RAND_MAX - remainder);
return min + output;

EDIT: Corrected a fencepost error in the above code; now it works as it should. I also created a little sample program (C#; taking a uniform PRNG for numbers between 0 and 15 and constructing a PRNG for numbers between 0 and 6 from it in various ways):

using System;

class Rand {
    static Random r = new Random();

    static int Rand16() {
        return r.Next(16);
    }

    static int Rand7Naive() {
        return Rand16() % 7;
    }

    static int Rand7Float() {
        return (int)(Rand16() / 16.0 * 7);
    }

    // corrected
    static int Rand7RejectionNaive() {
        int n = 7, remainder = 16 % n, x, output;
        do {
            x = Rand16();
            output = x % n;
        } while (x >= 16 - remainder);
        return output;
    }

    // adapted to fit the constraints of this example
    static int Rand7RejectionJava() {
        int n = 7, x, output;
        do {
            x = Rand16();
            output = x % n;
        } while (x - output + 6 > 15);
        return output;
    }

    static void Test(Func<int> rand, string name) {
        var buckets = new int[7];
        for (int i = 0; i < 10000000; i++) buckets[rand()]++;
        Console.WriteLine(name);
        for (int i = 0; i < 7; i++) Console.WriteLine("{0}\t{1}", i, buckets[i]);
    }

    static void Main() {
        Test(Rand7Naive, "Rand7Naive");
        Test(Rand7Float, "Rand7Float");
        Test(Rand7RejectionNaive, "Rand7RejectionNaive");
    }
}

The result is as follows (pasted into Excel and added conditional coloring of cells so that differences are more apparent):

[Image: the resulting bucket counts for each method, with conditional coloring]

Now that I've fixed my mistake in the rejection sampling above, it works as it should (before, it was biased towards 0). As you can see, the float method isn't perfect either; it just distributes the biased numbers differently.

It's easy to see why this algorithm produces a biased sample. Suppose your rand() function returns uniform integers from the set {0, 1, 2, 3, 4}. If I want to use this to generate a random bit 0 or 1, I would say rand() % 2. The set {0, 2, 4} gives me 0, and the set {1, 3} gives me 1 -- so clearly I sample 0 with 60% and 1 with 40% likelihood, not uniform at all!

To fix this you have to either make sure that your desired range divides the range of the random number generator, or otherwise discard the result whenever the random number generator returns a number that's larger than the largest possible multiple of the target range.

In the above example, the target range is 2, the largest multiple that fits into the random generation range is 4, so we discard any sample that is not in the set {0, 1, 2, 3} and roll again.
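
A minimal sketch of that rejection step, assuming a hypothetical rand5() that returns uniform values in {0, 1, 2, 3, 4}:

int randomBit (void)
{
    int x;
    do {
        x = rand5();   /* hypothetical source with 5 equally likely outcomes 0..4 */
    } while (x > 3);   /* 4 lies beyond the largest multiple of 2, so reject it */
    return x % 2;      /* 0 and 1 are now equally likely */
}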

By far the easiest solution is std::uniform_int_distribution<int>(min, max).
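
A minimal C++11 sketch of that approach (the engine choice and the range here are just for illustration):

#include <iostream>
#include <random>

int main()
{
    std::random_device rd;                           // nondeterministic seed source
    std::mt19937 gen(rd());                          // Mersenne Twister engine
    std::uniform_int_distribution<int> dist(1, 6);   // inclusive range [1, 6]

    for (int i = 0; i < 10; ++i)
        std::cout << dist(gen) << ' ';
    std::cout << '\n';
    return 0;
}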

Without loss of generality, the problem of generating random integers on [a, b] can be reduced to the problem of generating random integers on [0, s). The state of the art for generating random integers on a bounded range from a uniform PRNG is represented by the following recent publication:

Daniel Lemire,"Fast Random Integer Generation in an Interval." ACM Trans. Model. Comput. Simul. 29, 1, Article 3 (January 2019) (ArXiv draft)

Lemire shows that his algorithm provides unbiased results and, motivated by the growing popularity of very fast high-quality PRNGs such as Melissa O'Neill's PCG generators, shows how the results can be computed quickly, avoiding slow division operations almost all of the time.

An exemplary ISO-C implementation of his algorithm is shown in randint() below. Here I demonstrate it in conjunction with George Marsaglia's older KISS64 PRNG. For performance reasons, the required 64×64→128 bit unsigned multiplication is typically best implemented via machine-specific intrinsics or inline assembly that map directly to appropriate hardware instructions.

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* PRNG state */
typedef struct Prng_T *Prng_T;
/* Returns uniformly distributed integers in [0, 2**64-1] */
uint64_t random64 (Prng_T);
/* Multiplies two 64-bit factors into a 128-bit product */
void umul64wide (uint64_t, uint64_t, uint64_t *, uint64_t *);

/* Generate in bias-free manner a random integer in [0, s) with Lemire's fast
   algorithm that uses integer division only rarely. s must be in [1, 2**64-1].

   Daniel Lemire, "Fast Random Integer Generation in an Interval," ACM Trans.
   Model. Comput. Simul. 29, 1, Article 3 (January 2019)
*/
uint64_t randint (Prng_T prng, uint64_t s) 
{
    uint64_t x, h, l, t;
    x = random64 (prng);
    umul64wide (x, s, &h, &l);
    if (l < s) {
        t = (0 - s) % s;
        while (l < t) {
            x = random64 (prng);
            umul64wide (x, s, &h, &l);
        }
    }
    return h;
}

#define X86_INLINE_ASM (0)

/* Multiply two 64-bit unsigned integers into a 128-bit unsigned product. Return
   the least significant 64 bits of the product to the location pointed to by
   lo, and the most significant 64 bits of the product to the location pointed
   to by hi.
*/
void umul64wide (uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
#if X86_INLINE_ASM
    uint64_t l, h;
    __asm__ (
        "movq  %2, %%rax;\n\t"  // rax = a
        "mulq  %3;\n\t"         // rdx:rax = a * b
        "movq  %%rax, %0;\n\t"  // l = (a * b)<31:0>
        "movq  %%rdx, %1;\n\t"  // h = (a * b)<63:32>
        : "=r"(l), "=r"(h)
        : "r"(a), "r"(b)
        : "%rax", "%rdx");
    *lo = l;
    *hi = h;
#else // X86_INLINE_ASM
    uint64_t a_lo = (uint64_t)(uint32_t)a;
    uint64_t a_hi = a >> 32;
    uint64_t b_lo = (uint64_t)(uint32_t)b;
    uint64_t b_hi = b >> 32;

    uint64_t p0 = a_lo * b_lo;
    uint64_t p1 = a_lo * b_hi;
    uint64_t p2 = a_hi * b_lo;
    uint64_t p3 = a_hi * b_hi;

    uint32_t cy = (uint32_t)(((p0 >> 32) + (uint32_t)p1 + (uint32_t)p2) >> 32);

    *lo = p0 + (p1 << 32) + (p2 << 32);
    *hi = p3 + (p1 >> 32) + (p2 >> 32) + cy;
#endif // X86_INLINE_ASM
}

/* George Marsaglia's KISS64 generator, posted to comp.lang.c on 28 Feb 2009
   https://groups.google.com/forum/#!original/comp.lang.c/qFv18ql_WlU/IK8KGZZFJx4J
*/
struct Prng_T {
    uint64_t x, c, y, z, t;
};

struct Prng_T kiss64 = {1234567890987654321ULL, 123456123456123456ULL,
                        362436362436362436ULL, 1066149217761810ULL, 0ULL};

/* KISS64 state equations */
#define MWC64 (kiss64->t = (kiss64->x << 58) + kiss64->c,            \
               kiss64->c = (kiss64->x >> 6), kiss64->x += kiss64->t, \
               kiss64->c += (kiss64->x < kiss64->t), kiss64->x)
#define XSH64 (kiss64->y ^= (kiss64->y << 13), kiss64->y ^= (kiss64->y >> 17), \
               kiss64->y ^= (kiss64->y << 43))
#define CNG64 (kiss64->z = 6906969069ULL * kiss64->z + 1234567ULL)
#define KISS64 (MWC64 + XSH64 + CNG64)
uint64_t random64 (Prng_T kiss64)
{
    return KISS64;
}

int main (void)
{
    int i;
    Prng_T state = &kiss64;

    for (i = 0; i < 1000; i++) {
        printf ("%llu\n", randint (state, 10));
    }
    return EXIT_SUCCESS;
}
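
As a side note, on compilers that provide a 128-bit integer extension (for example, unsigned __int128 on 64-bit GCC and Clang targets), an equivalent of umul64wide can be sketched without inline assembly; this variant is an illustration, not part of the code above:

void umul64wide_u128 (uint64_t a, uint64_t b, uint64_t *hi, uint64_t *lo)
{
    unsigned __int128 p = (unsigned __int128)a * b;  /* full 128-bit product */
    *lo = (uint64_t)p;                               /* least significant 64 bits */
    *hi = (uint64_t)(p >> 64);                       /* most significant 64 bits */
}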

If you really want a perfect generator, assuming the rand() function you have is perfect, you need to apply the method explained below.

We will create a random number, r, from 0 to max-min = b-1, which is then easy to shift into the range you want: just take r+min.

We will create a random number where b < RAND_MAX, but the procedure can easily be adapted to produce a random number for any base.

PROCEDURE:

  1. Take a random number r in its original RAND_MAX size, without any truncation
  2. Write this number in base b
  3. Take the first m = floor(log_b(RAND_MAX)) digits of this number; these are m random numbers from 0 to b-1
  4. Shift each by min (i.e. r+min) to get them into the range [min, max] as you wanted

Since log_b(RAND_MAX) is not necessarily an integer, the last digit in the representation is wasted.
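
A rough sketch of the digit-extraction step described above (the helper name is just for illustration):

#include <cmath>
#include <cstdlib>
#include <vector>

/* Extract m = floor(log_b(RAND_MAX)) base-b digits from a single rand() call;
   each digit is a random number in [0, b-1] and can be shifted by min afterwards. */
std::vector<int> baseBDigits (int b)
{
    int m = (int) std::floor(std::log((double) RAND_MAX) / std::log((double) b));
    std::vector<int> digits;
    int r = std::rand();
    for (int i = 0; i < m; ++i) {
        digits.push_back(r % b);   // next base-b digit
        r /= b;
    }
    return digits;
}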

The original approach of just using mod (%) is off by exactly

(log_b(RAND_MAX) - floor(log_b(RAND_MAX)))/ceil(log_b(RAND_MAX)) 

which you might agree is not that much, but if you insist on being precise, that is the procedure.

You have touched on two points involving a random integer algorithm: Is it optimal, and is it unbiased?

Optimal

There are many ways to define an "optimal" algorithm. Here we look at "optimal" algorithms in terms of the number of random bits they use on average. In this sense, rand is a poor method to use for random numbers, in part because it need not produce random bits (because RAND_MAX is not exactly specified)*. Instead, we will assume we have a "true" random generator that can produce unbiased and independent random bits.

In 1976, D. E. Knuth and A. C. Yao showed that any algorithm that produces random integers with a given probability, using only random bits, can be represented as a binary tree, where random bits indicate which way to traverse the tree and each leaf (endpoint) corresponds to an outcome. They also gave lower bounds on the number of bits a given algorithm will need on average for this task. In this case, an optimal algorithm to generate integers in [0, n) uniformly will need at most log2(n) + 2 bits on average. There are many examples of optimal algorithms in this sense. One of them is the Fast Dice Roller by J. Lumbroso (2013); the following shows an implementation in JavaScript, not in C or C++, but it's easy to adapt to either language (a C++ adaptation is sketched after the code), and the idea is to show that it's possible to generate integers from bits in an optimal way. In the code, (Math.random() < 0.5 ? 0 : 1) is JavaScript's way to generate an unbiased random bit.

function randomInt(minInclusive, maxExclusive) {
  var maxInclusive = (maxExclusive - minInclusive) - 1
  var x = 1
  var y = 0
  while(true) {
    x = x * 2
    var randomBit = (Math.random() < 0.5 ? 0 : 1)
    y = y * 2 + randomBit
    if(x > maxInclusive) {
      if (y <= maxInclusive) { return y + minInclusive }
      // Rejection
      x = x - maxInclusive - 1
      y = y - maxInclusive - 1
    }
  }
}
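
For example, a C++ adaptation might look like the following sketch, where the bit source (the low bit of a std::mt19937 draw) is an assumption rather than part of the original:

#include <cstdint>
#include <random>

// Fast Dice Roller adapted to C++: returns a uniform integer in
// [minInclusive, maxExclusive), consuming one random bit per iteration.
int randomInt (int minInclusive, int maxExclusive, std::mt19937 &gen)
{
    std::uint32_t maxIncl = (std::uint32_t)(maxExclusive - minInclusive) - 1;
    std::uint32_t x = 1, y = 0;
    for (;;) {
        x = x * 2;
        y = y * 2 + (std::uint32_t)(gen() & 1u);   // one unbiased random bit
        if (x > maxIncl) {
            if (y <= maxIncl) return (int) y + minInclusive;
            x = x - maxIncl - 1;                   // rejection: fold back and retry
            y = y - maxIncl - 1;
        }
    }
}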

Unbiased

However, any optimal integer generator that is also unbiased will, in general, run forever in the worst case, as also shown by Knuth and Yao. Going back to the binary tree, each one of the n outcomes labels leaves in the binary tree so that each integer in [0, n) can occur with probability 1/n. But if 1/n has a non-terminating binary expansion (which will be the case if n is not a power of 2), this binary tree will necessarily either—

  • have an "infinite" depth, or
  • include "rejection" leaves at the end of the tree,

and in either case, the algorithm won't run in constant time and will run forever in the worst case. (On the other hand, when n is a power of 2, the optimal binary tree will have a finite depth and no rejection nodes.) The Fast Dice Roller is an example of an algorithm that uses "rejection" events to ensure it's unbiased; see the comment in the code above.

And for general n, there is no way to "fix" this worst case time complexity without introducing bias. For instance, modulo reductions (including the min + (rand() % (int)(max - min + 1)) in your question) are equivalent to a binary tree in which rejection leaves are replaced with labeled outcomes — but since there are more possible outcomes than rejection leaves, only some of the outcomes can take the place of the rejection leaves, introducing bias. The same kind of binary tree — and the same kind of bias — results if you stop rejecting after a set number of iterations. (However, this bias may be negligible depending on the application. There are also security aspects to random integer generation, which are too complicated to discuss in this answer.)

Note

* There are other problems with rand() as well. Perhaps the most serious here is the fact that the C standard does not specify a particular distribution for the numbers returned by rand().

Licensed under: CC-BY-SA with attribution