I may be wrong, but it seems to me that none of the existing answers really gets to the point of this question, so here goes.
When you compile a 64-bit binary on an LP64 platform (Linux, macOS, most Unixes), a long is a 64-bit value (8 bytes), whereas in a 32-bit binary a long is the same size as an int: 32 bits, or 4 bytes. (Beware that 64-bit Windows uses the LLP64 model, where long stays 32 bits even in a 64-bit build.)
There are several solutions to the problem:
1. Redefine the parameter as a long long or an int64_t, as suggested in other answers.
2. Add a preprocessor conditional so the offending shift operations are only compiled when long really is 64 bits. Such as...
#ifdef __LP64__
buffer[4] = static_cast<uint8_t>((v >> 32) & max_byte);
buffer[5] = static_cast<uint8_t>((v >> 40) & max_byte);
buffer[6] = static_cast<uint8_t>((v >> 48) & max_byte);
buffer[7] = static_cast<uint8_t>((v >> 56) & max_byte);
#endif
This ensures that a long is processed according to the processor's architecture, instead of forcing a long to always be 64 bits. (Note that __LP64__ is defined by GCC and Clang on LP64 targets; MSVC does not define it.)
3. Using a union would accomplish the same end result for the code provided:
void func(unsigned long v)
{
    union {
        unsigned long long ival;
        unsigned char cval[8];
    } a;
    a.ival = v;
    // a.cval[0..7] now hold the bytes of v, in host byte order
}
Of course, if you use C++ you can store any basic data type in a similar fashion by modifying the above as follows:
template<class I>
void func(I v) {
    union {
        unsigned long long ival;  // forces the union to be 8 bytes
        unsigned char cval[8];
        I val;
    } a;
    a.val = v;
    // a.cval now holds the bytes of v; when sizeof(I) < 8 the
    // remaining bytes are left indeterminate
}