Consider your struct as a whole to be a string of bytes (7, to be precise). You may use any acceptably general string hash function upon those 7 bytes. Here is the FNV (Fowler/Noll/Vo) general bit-string hash function applied to your example (within the given hash functor class):
inline std::size_t operator()(const exemple& obj) const
{
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&obj);
    std::size_t h = 2166136261;
    for (unsigned int i = 0; i < sizeof(obj); ++i)
        h = (h * 16777619) ^ p[i];
    return h;
}
Note how I converted the reference to the exemple structure (obj) to a pointer to const unsigned char so that I could access the bytes of the structure one by one, treating it as an opaque binary object. Note that sizeof(obj) may actually be 8 rather than 7 depending upon the compiler's padding (which would mean there's a garbage padding byte somewhere in the structure, probably between c and n). If you wanted, you could rewrite the hash function to iterate over a, b, and c and then the bytes of n in order (or any order), which would eliminate the influence of any padding bytes (which may or may not exist) upon the hash of your struct.
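For illustration, here is a sketch of that padding-insensitive variant. The exact layout of exemple isn't shown in the question, so the members (char a, b, c and int n, giving 7 data bytes) are an assumption:

```cpp
#include <cstddef>

// Assumed layout: three chars plus an int gives 7 data bytes,
// though sizeof(exemple) will typically be 8 because of padding.
struct exemple {
    char a;
    char b;
    char c;
    int  n;
};

// FNV-1 applied member-by-member, so compiler-inserted padding
// bytes never influence the result.
struct exemple_hash {
    std::size_t operator()(const exemple& obj) const {
        std::size_t h = 2166136261u;
        auto mix = [&h](unsigned char byte) { h = (h * 16777619u) ^ byte; };
        mix(static_cast<unsigned char>(obj.a));
        mix(static_cast<unsigned char>(obj.b));
        mix(static_cast<unsigned char>(obj.c));
        // Hash the int's bytes individually, in memory order.
        const unsigned char* p =
            reinterpret_cast<const unsigned char*>(&obj.n);
        for (std::size_t i = 0; i < sizeof obj.n; ++i)
            mix(p[i]);
        return h;
    }
};
```

Because only named members are fed into the hash, two objects that compare equal member-by-member always hash to the same value, even if their padding bytes happen to differ.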
Yes, a bad hash function can make unordered_map slower than an ordered map (std::map). This isn't always discussed, because generalized, fast algorithms like the FNV hash given above are assumed to be used by those using unordered_map, and in those cases an unordered_map is generally faster than a std::map, at the expense of the ability to iterate over the container's elements in order. However, yes, you must be using a good hash function for your data, and usually one of these well-known hashes is good enough. Ultimately, however, every hash function has its weaknesses depending upon the distribution of the input data (here, the contents of the exemple structure).
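To tie it together, here is a minimal sketch of plugging such a hash functor into std::unordered_map. The exemple layout and the equality operator are assumptions, since the question doesn't show them; unordered_map needs both a hasher and an operator== (or a custom KeyEqual):

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>

// Assumed layout of the struct from the question.
struct exemple {
    char a, b, c;
    int  n;
};

// unordered_map also needs an equality predicate; member-wise
// comparison matches the member-wise notion of key identity.
bool operator==(const exemple& lhs, const exemple& rhs) {
    return lhs.a == rhs.a && lhs.b == rhs.b &&
           lhs.c == rhs.c && lhs.n == rhs.n;
}

// The FNV functor from above, hashing the raw bytes of the object.
struct exemple_hash {
    std::size_t operator()(const exemple& obj) const {
        const unsigned char* p =
            reinterpret_cast<const unsigned char*>(&obj);
        std::size_t h = 2166136261u;
        for (unsigned int i = 0; i < sizeof obj; ++i)
            h = (h * 16777619u) ^ p[i];
        return h;
    }
};

// The third template parameter selects the custom hasher.
using exemple_map = std::unordered_map<exemple, std::string, exemple_hash>;
```

Note that because this raw-byte version also hashes any padding bytes, keys built in different ways could hash differently even when their members are equal; value-initializing keys (exemple k{};) or using the member-wise variant avoids that pitfall.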
A good discussion of generalized hashing and example hashing functions can be found at Eternally Confuzzled, including a C-style FNV hash similar to the one which I've given you.