Question

I have a device which generates some noise that I want to add to the entropy pool for the /dev/random device in an embedded Linux system.

I'm reading the man page on /dev/random and I don't really understand the structure that you pass into the RNDADDENTROPY ioctl call.

   RNDADDENTROPY
          Add some additional entropy to the input pool, incrementing
          the entropy count.  This differs from writing to /dev/random
          or /dev/urandom, which only adds some data but does not
          increment the entropy count.  The following structure is used:

              struct rand_pool_info {
                  int    entropy_count;
                  int    buf_size;
                  __u32  buf[0];
              };

          Here entropy_count is the value added to (or subtracted from)
          the entropy count, and buf is the buffer of size buf_size
          which gets added to the entropy pool.

Is entropy_count in this structure the number of bits that I am adding? Why wouldn't this just always be buf_size * 8 (assuming that buf_size is in terms of bytes)?

Additionally, why is buf a zero-size array? How am I supposed to assign a value to it?

Thanks for any help here!


Solution

I am using a hardware RNG to stock my entropy pool. My struct is a static size and looks like this (my kernel has a slightly different random.h; just copy what you find in yours and increase the array size to whatever you want):

#define BUFSIZE 256
/* WARNING - this struct must match random.h's struct rand_pool_info */
typedef struct {
    int bit_count;               /* number of bits of entropy in data */
    int byte_count;              /* number of bytes of data in array */
    unsigned char buf[BUFSIZE];
} entropy_t;

Whatever you pass in buf will be hashed and will stir the entropy pool. If you are using /dev/urandom, it does not much matter what you pass for bit_count, because /dev/urandom ignores the entropy estimate and just keeps on going.

What bit_count does is push out the point at which /dev/random will block and wait for a physical RNG source to add more entropy. Thus, it's okay to guesstimate bit_count. If you guess low, the worst that will happen is that /dev/random blocks sooner than it otherwise would have. If you guess high, /dev/random operates like /dev/urandom a little longer than it otherwise would have before it blocks.
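
If you want to watch a credit take effect, the kernel exposes its running estimate through procfs. Here is a minimal sketch, assuming the standard /proc/sys/kernel/random/entropy_avail knob; reading it before and after the RNDADDENTROPY ioctl shows the credit being applied:

#include <stdio.h>

/* Minimal sketch: read the kernel's current entropy estimate (in
 * bits) from the standard procfs knob.  Returns -1 on any failure. */
static int read_entropy_avail(void)
{
    int bits = -1;
    FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");

    if (f != NULL) {
        if (fscanf(f, "%d", &bits) != 1)
            bits = -1;
        fclose(f);
    }
    return bits;
}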

You can guesstimate based on the "quality" of your entropy source. If it's low, like characters typed by humans, you can set it to 1 or 2 bits per byte. If it's high, like values read from a dedicated hardware RNG, you can set it to 8 bits per byte.
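
Putting it together, here is a minimal sketch of feeding the pool with that struct. The /dev/hwrng path is an assumption (many hardware RNG drivers expose it; substitute whatever your device provides), and note that RNDADDENTROPY requires CAP_SYS_ADMIN, so this typically runs as root:

#include <fcntl.h>
#include <linux/random.h>   /* RNDADDENTROPY */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

#define BUFSIZE 256

/* WARNING - this struct must match random.h's struct rand_pool_info */
typedef struct {
    int bit_count;               /* number of bits of entropy in data */
    int byte_count;              /* number of bytes of data in array */
    unsigned char buf[BUFSIZE];
} entropy_t;

/* Assumed noise source: read len bytes from the hardware RNG device. */
static ssize_t read_hwrng(unsigned char *dst, size_t len)
{
    int fd = open("/dev/hwrng", O_RDONLY);
    ssize_t n;

    if (fd < 0)
        return -1;
    n = read(fd, dst, len);
    close(fd);
    return n;
}

int main(void)
{
    entropy_t e;
    ssize_t n;
    int fd = open("/dev/random", O_WRONLY);

    if (fd < 0) {
        perror("open /dev/random");
        return 1;
    }

    n = read_hwrng(e.buf, sizeof(e.buf));
    if (n <= 0) {
        close(fd);
        return 1;
    }

    e.byte_count = (int)n;
    e.bit_count  = (int)n * 8;   /* trusted hardware RNG: 8 bits/byte */

    if (ioctl(fd, RNDADDENTROPY, &e) < 0) {
        perror("ioctl RNDADDENTROPY");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}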

OTHER TIPS

If your data is perfectly random, then I believe it would be appropriate for entropy_count to be the number of bits in the buffer you provide. However, many (most?) sources of randomness aren't perfect, and so it makes sense for the buffer size and amount of entropy to be kept as separate parameters.

buf being declared to be size zero is a standard C idiom. The deal is that when you actually allocate a rand_pool_info, you do malloc(sizeof(struct rand_pool_info) + size_of_desired_buf), and then you refer to the buffer using the buf member. Note: in C99 and later you can declare buf[] (a flexible array member) instead of buf[0] to be explicit that in reality buf is "stretchy".
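
For example, here is a sketch of that allocation idiom using the kernel's own struct (field names as in linux/random.h); make_pool_info() is just a hypothetical helper name, and the 2-bits-per-byte credit in the usage comment is only an illustration:

#include <linux/random.h>   /* struct rand_pool_info */
#include <stdlib.h>
#include <string.h>

/* Allocate a rand_pool_info with room for nbytes of data after the
 * header, then refer to that space through the buf member. */
static struct rand_pool_info *make_pool_info(const void *data,
                                             size_t nbytes,
                                             int entropy_bits)
{
    struct rand_pool_info *info = malloc(sizeof(*info) + nbytes);

    if (info == NULL)
        return NULL;
    info->entropy_count = entropy_bits;   /* bits, not bytes */
    info->buf_size = (int)nbytes;         /* bytes of data in buf */
    memcpy(info->buf, data, nbytes);
    return info;
}

/* Usage: make_pool_info(keystrokes, 256, 256 * 2) credits only
 * 2 bits of entropy per byte for low-quality human input. */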

The number of bytes you have in the buffer correlates with the entropy of the data, but the entropy cannot be calculated from the data or its length alone.

Sure, if the data came from a good, unpredictable, uniformly distributed hardware random noise generator, the entropy (in bits) is 8 × the size of the buffer (in bytes).

But if the bits are not uniformly distributed, or are somehow predictable, the entropy is less.

See https://en.wikipedia.org/wiki/Entropy_(information_theory)
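
To make that concrete, here is a rough byte-frequency (Shannon) estimate; the function name is just illustrative, and keep the caveat above in mind: this can badly over-estimate data that merely looks uniform but is predictable (e.g. a counter fed through a hash):

#include <math.h>
#include <stddef.h>

/* Rough Shannon estimate in bits per byte, from byte frequencies
 * alone.  Illustrative only: it says nothing about predictability. */
static double shannon_bits_per_byte(const unsigned char *buf, size_t len)
{
    size_t counts[256] = {0};
    double h = 0.0;

    for (size_t i = 0; i < len; i++)
        counts[buf[i]]++;

    for (int i = 0; i < 256; i++) {
        if (counts[i] == 0)
            continue;
        double p = (double)counts[i] / (double)len;
        h -= p * log2(p);   /* sum of -p*log2(p), range 0.0 .. 8.0 */
    }
    return h;
}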

I hope that helps.
