Question

Is there any advantage to using int vs. varbinary for storing bit masks, in terms of performance or flexibility?

For my purposes, I will always be doing reads on these bit masks (no writes or updates).


Solution

You should definitely use an INT (if you need up to 32 flags) or BIGINT (up to 64 flags). If you need more flags than that, you could use BINARY (but you should probably also ask yourself why your application needs so many flags).

Besides, with an integral type you can apply the standard bitwise operators directly, without first converting a byte array to an integral type.
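For example, a read-only flag check against an INT mask is a single bitwise test. A minimal sketch (the table, column, and flag names below are made up for illustration):

```sql
-- Hypothetical flag values; each flag occupies one bit of the INT mask.
DECLARE @IsActive INT = 1;  -- bit 0
DECLARE @IsAdmin  INT = 2;  -- bit 1
DECLARE @IsLocked INT = 4;  -- bit 2

-- Rows where both IsActive and IsAdmin are set, regardless of other bits.
SELECT UserId
FROM   dbo.UserFlags
WHERE  Flags & (@IsActive | @IsAdmin) = (@IsActive | @IsAdmin);
```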

If you do need more flags and have to use BINARY, you lose native support for bitwise operators and, with it, easy flag checks. I would probably move flag checking to a client application, but if you're comfortable programming in T-SQL, that's an option as well. In C# you have the BitArray class with the necessary operations, and in Java there is the BitSet class.
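If you do end up checking a wide BINARY mask in T-SQL, one possible approach (a sketch under assumed names and values, not the only way) is to extract the byte that holds the flag with SUBSTRING and test it with a bitwise AND:

```sql
-- A sketch: testing bit 70 (0-based, counted from the least significant
-- end) of a 16-byte mask. The mask value and flag number are illustrative.
DECLARE @mask BINARY(16) = 0x00000000000000400000000000000000;
DECLARE @flag INT = 70;

-- SUBSTRING position 1 is the most significant byte, so count from the right.
DECLARE @byte TINYINT =
    CAST(SUBSTRING(@mask, DATALENGTH(@mask) - (@flag / 8), 1) AS TINYINT);

SELECT CASE WHEN @byte & POWER(2, @flag % 8) > 0
            THEN 'set' ELSE 'not set' END AS FlagState;
```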

OTHER TIPS

It is generally considered preferable to use a set of BIT columns instead of a bit mask. SQL Server packs them together in the page, so they won't take any more room. Admittedly, I too tend to reach for an int or bigint column to avoid typing all the column names, but with IntelliSense I would probably go with the BIT columns.
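A sketch of that alternative (the table and column names are just examples); SQL Server stores up to eight BIT columns in a single byte of the row:

```sql
-- Illustrative only: each named flag becomes its own BIT column.
CREATE TABLE dbo.UserSettings
(
    UserId   INT NOT NULL PRIMARY KEY,
    IsActive BIT NOT NULL DEFAULT 0,
    IsAdmin  BIT NOT NULL DEFAULT 0,
    IsLocked BIT NOT NULL DEFAULT 0
);
```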

Well, considering an int takes less storage space and is generally a little easier to work with, I'm not sure why you'd use a varbinary.

I usually agree with @hainstech's answer of using bit fields, because you can explicitly name each field to indicate what it stores. However, I haven't seen a practical approach to doing bitmask comparisons with bit fields. With SQL Server's bitwise operators (&, |, etc.) it's easy to find out whether a combination of flags is set; it's a lot more work to do the same with equality operators against a large number of bit fields.
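To make the trade-off concrete, here is the same "all three flags set" check written both ways, reusing the hypothetical tables from the sketches above:

```sql
-- One bitwise test against an INT mask (bits 0, 1 and 2 all set)...
SELECT UserId
FROM   dbo.UserFlags
WHERE  Flags & 7 = 7;

-- ...versus one equality test per BIT column.
SELECT UserId
FROM   dbo.UserSettings
WHERE  IsActive = 1 AND IsAdmin = 1 AND IsLocked = 1;
```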

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow