Question

Is there any disadvantage of converting size_t to long? I am writing a program that maintains a linked list in a file, so I traverse to another node based on a size_t offset, and I also keep track of the total number of lists as a size_t. Hence, obviously there is going to be some conversion or addition of long and size_t. Is there any disadvantage to this? If there is, I will make everything long instead of size_t, even the sizes. Please advise.


Solution

It's not a problem now, but it may become one in the future, depending on where you port your app. size_t is defined to be large enough to hold the size of any object (in practice it matches the pointer width), so if you have 64-bit pointers, size_t will be 64 bits too. long, on the other hand, may or may not be 64 bits, because the size rules for the fundamental types in C and C++ leave room for variation.
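A quick way to see that variation on your own platform is to print the widths directly (a minimal sketch, not part of the original answer):

    #include <cstddef>
    #include <cstdio>

    int main() {
        // Typical results on LP64 (Linux, macOS):    long = 8, size_t = 8, void* = 8
        // Typical results on LLP64 (64-bit Windows): long = 4, size_t = 8, void* = 8
        std::printf("sizeof(long)   = %zu\n", sizeof(long));
        std::printf("sizeof(size_t) = %zu\n", sizeof(std::size_t));
        std::printf("sizeof(void*)  = %zu\n", sizeof(void*));
        return 0;
    }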

But if you're going to write these values to a file, you have to commit to a specific size anyway, so there's no option other than to convert to long (or long long, if needed). Better yet, use one of the fixed-width types from <cstdint>, like int32_t.
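As a sketch of that conversion (the file name and offset value are invented for illustration, and byte order is ignored for brevity):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::size_t next_node = 1024;  // hypothetical in-memory offset of the next node

        // Narrow to a fixed-width type before writing, so the on-disk
        // format does not depend on the platform's size_t width.
        std::uint64_t on_disk = static_cast<std::uint64_t>(next_node);

        std::FILE* f = std::fopen("list.bin", "wb");
        if (!f) return 1;
        std::fwrite(&on_disk, sizeof on_disk, 1, f);
        std::fclose(f);
        return 0;
    }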

My advice: somewhere in the header of your file, store the sizeof of the type you converted the size_t to. That way, if you decide to use a larger type in the future, you can still support the old size; and the current version of the program can check whether the recorded size is supported, and issue an error if it is not.
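A minimal sketch of such a header (the magic string, field names, and supported width are all invented for illustration):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Hypothetical file header: records how wide the stored offsets are.
    struct ListFileHeader {
        char         magic[4];      // e.g. "LST1"
        std::uint8_t offset_width;  // sizeof the on-disk offset type, in bytes
    };

    // Returns true if this version of the program can read the file.
    bool header_supported(std::FILE* f) {
        ListFileHeader h;
        if (std::fread(&h, sizeof h, 1, f) != 1) return false;
        if (std::memcmp(h.magic, "LST1", 4) != 0) return false;
        return h.offset_width == 8;  // this version only handles 8-byte offsets
    }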

OTHER TIPS

The "long" type, unfortunately, doesn't have a good theoretical basis. Originally it was introduced on 32 bit unix ports to differentiate it from the 16 bit "int" assumed by the existing PDP11 software. Then later "int" was changed to 32 bits on those platforms (and "short" was introduced) and "long" and "int" became synonyms, which they were for a very long time.

Now, on 64-bit Unix-like platforms (Linux, the BSDs, OS X, iOS and whatever proprietary Unixes people might still care about), "long" is a 64-bit quantity. But, sadly, not on Windows: there was too much legacy code in the existing headers that made the sizeof(int) == sizeof(long) assumption, so they went with an abomination called "LLP64" and left long at 32 bits. Sigh.

But "size_t" isn't like that. It has always meant precisely one thing: it's the unsigned type that stores the native pointer size in the address space. If you have an unsigned (! -- use ssize_t or ptrdiff_t if you need signed arithmetic) pointer that needs an integer representation (i.e. you need to store the memory size of an object), this is what you use.

Is there any disadvantage of converting size_t to long?

Theoretically, long can be smaller than size_t. Also, long is signed while size_t is unsigned, so if you start using both in the same expression, a compiler like g++ will complain about it. A lot. These signed-to-unsigned conversions can also produce genuinely unexpected results, as the sketch below shows.
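A minimal sketch of that kind of surprise (the values are invented for illustration):

    #include <cstddef>

    int main() {
        std::size_t count  = 10;
        long        offset = -1;

        // With g++ -Wall -Wextra the second comparison triggers -Wsign-compare:
        // `offset` is converted to the unsigned type, -1 becomes SIZE_MAX,
        // and the condition is false even though -1 < 10 mathematically.
        if (offset < static_cast<long>(count)) { /* intended: taken */ }
        if (offset < count)                    { /* surprising: never taken */ }
        return 0;
    }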

obviously there is going to be some conversion or addition of long

I don't see why there has to be any conversion or addition involving long. You can keep using size_t for all arithmetic operations. You can typedef it as "ListIndex" or whatever and use that throughout the code, as sketched below. If you mix the types (long and size_t), g++/MinGW will nag you to death about it.
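Something along these lines (the alias name is, of course, just an example):

    #include <cstddef>

    // One name for both node offsets and node counts, so the whole
    // program stays in a single unsigned type and never mixes signedness.
    typedef std::size_t ListIndex;

    ListIndex advance(ListIndex current, ListIndex step) {
        return current + step;  // pure size_t arithmetic: no sign-compare warnings
    }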

Alternatively, you could pick a specific type with a guaranteed size. Newer compilers ship the <cstdint> header, which provides types like uint64_t (it is extremely unlikely that you will encounter a file larger than 2^64 bytes, for example). If your compiler doesn't have that header, the same types are available in Boost.
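For example (a sketch; the alias is hypothetical):

    #include <cstdint>

    // Fixed-width on-disk offsets: identical layout on every platform.
    typedef std::uint64_t FileOffset;

    // Fails at compile time if the assumption is ever violated.
    static_assert(sizeof(FileOffset) == 8, "on-disk offsets must be 8 bytes");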

Licensed under: CC-BY-SA with attribution