Is there any construct that behaves differently (except for some warnings disappearing, that's why I want to change it) when using unsigned long literals instead of int literals?
Here's the relevant part in the C11 (draft) standard 6.3.1.3 (emphasis mine):
> **6.3.1.3 Signed and unsigned integers**
>
> - When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
> - Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
> - Otherwise, the new type is signed and the value cannot be represented in it; **either the result is implementation-defined or an implementation-defined signal is raised**.
If there's a point in the code where `65535U` is, inadvertently, assigned to a variable of a signed type that cannot represent the value, the final value is implementation-defined. So assigning `65535U` to an `int16_t` produces `-1` with GCC and VC++, but this result is not guaranteed by the standard and depends on the compiler implementation. Also, neither GCC (`-Wall`) nor VC++ (`/W4`) warns about this by default; passing `-pedantic` does the trick in GCC.
That said, the issue would exist even with the plain decimal literal `65535`, since it's larger than the maximum positive value a 16-bit signed two's-complement type can hold. So my recommendation would be to go ahead and change it. Once done, enable the maximum warning level and build the project to verify.