Question

I'm writing a programming language, and when I came across this question, my immediate thought was that languages should optimize booleans into bit flags for the programmer. That would keep all the benefits of speed and efficient memory usage while removing the maintenance burden and the possibility of errors that come with hand-written bit manipulation.

The one case where you might not want this optimization is when only a single set of booleans is being stored. Basically, 8 bits for the flags + 8 bit masks * 8 bits per mask = 72 bits, versus 8 booleans * 8 bits per boolean = 64 bits. But as soon as you have even two copies of the booleans, it becomes 2 copies * 8 bits for flags + 8 bit masks * 8 bits per mask = 80 bits, versus 2 copies * 8 booleans * 8 bits per boolean = 128 bits. The few cases where plain booleans would be more storage-efficient seem easy to detect, so the compiler could simply not apply the optimization there.
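
To make the comparison concrete, here is a rough sketch of the two layouts in C (the type names are just illustrative; the masks would be fixed constants shared by every copy of the flags):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Packed layout: one byte holds 8 flags; the bit masks are
   constants, a one-time cost shared by every copy. */
typedef uint8_t packed_flags_t;

/* Unpacked layout: eight separate booleans, typically one byte each. */
typedef struct {
    bool b0, b1, b2, b3, b4, b5, b6, b7;
} unpacked_flags_t;

int main(void) {
    printf("packed:   %zu byte(s) per copy\n", sizeof(packed_flags_t));
    printf("unpacked: %zu byte(s) per copy\n", sizeof(unpacked_flags_t));
    return 0;
}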

Is there any reason that languages don't support this optimization? I looked around and it doesn't seem that any languages do (I may just not be looking in the right places).


Solution

I've seen people do this in assembly language, where they pack booleans into words to save space, and then write lots of instructions to get/set the bits.
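
The same hand-packing in a higher-level language looks roughly like this (a sketch in C; the flag names are made up):

#include <stdio.h>

/* Hand-packed flags: each boolean gets one bit and a mask constant. */
#define FLAG_VISIBLE  (1u << 0)
#define FLAG_ENABLED  (1u << 1)
#define FLAG_DIRTY    (1u << 2)

int main(void) {
    unsigned char flags = 0;

    flags |= FLAG_ENABLED;        /* set a bit   */
    flags &= ~FLAG_DIRTY;         /* clear a bit */
    if (flags & FLAG_ENABLED)     /* test a bit  */
        printf("enabled\n");

    return 0;
}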

Clearly there is a tradeoff in speed and memory between packing booleans and not packing them, and personally I'm shy of a compiler trying to decide that for me.

OTHER TIPS

C does, via bit-fields ...

#include <stdio.h>

/* Each field is declared with a bit width; the compiler packs them. */
typedef struct foo_s {
    unsigned char field1 :1;
    unsigned char field2 :1;
    unsigned char field3 :4;
    unsigned char field4 :2;
} foo_t;

int main(void) {
    printf("%zu\n", sizeof(foo_t));   /* sizeof yields size_t, so %zu */
    return 0;
}

When run, those four fields get packed into one byte:

$ gcc -Wall -o bittest bittest.c 
$ ./bittest 
1
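
And the packed fields are read and written like ordinary struct members, with the compiler generating the mask/shift instructions. A small usage sketch with the same foo_t:

#include <stdio.h>

typedef struct foo_s {
    unsigned char field1 :1;
    unsigned char field2 :1;
    unsigned char field3 :4;
    unsigned char field4 :2;
} foo_t;

int main(void) {
    foo_t f = {0};
    f.field1 = 1;      /* the compiler emits the mask/shift for you */
    f.field3 = 9;      /* 4-bit field, holds 0..15 */
    if (f.field1)
        printf("field3 = %d\n", f.field3);
    return 0;
}
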
Licensed under: CC-BY-SA with attribution