In my project I'm using struct timespec as follows

struct timespec start, end;
clock_gettime(CLOCK_REALTIME,&start);
/* Do something */
clock_gettime(CLOCK_REALTIME,&end);

This returns a value computed as

(((unsigned64)start.tv_sec) * ((unsigned64)1000000000L)) + ((unsigned64)start.tv_nsec)

Can anyone tell me why the unsigned64 type is used here, and help me understand this structure in detail? I'm using this code to study measuring code execution time with nanosecond precision.

Solution
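
First, the structure itself: struct timespec is defined by POSIX in <time.h> and simply splits a point in time into whole seconds plus the nanoseconds within that second. It looks roughly like this (exact field types can vary slightly by platform):

struct timespec {
    time_t tv_sec;   /* whole seconds */
    long   tv_nsec;  /* nanoseconds within the second, 0..999999999 */
};

clock_gettime() fills in both fields; multiplying tv_sec by one billion and adding tv_nsec combines them into a single absolute nanosecond count, which is what the expression in the question does.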

An unsigned 32-bit type (like unsigned int on most modern platforms) has a maximum value of a little over four billion. If you take 5 and multiply it by one billion (as the code in the question does), you get five billion, which is larger than a 32-bit unsigned type can hold. Enter 64-bit types, which can hold much larger values (18446744073709551615, to be precise, compared with the unsigned 32-bit maximum of only 4294967295).
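
To make the wrap-around concrete, here is a minimal sketch (assuming a platform where unsigned int is 32 bits wide):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint32_t sec = 5;

    /* 32-bit arithmetic: 5 * 1000000000 exceeds UINT32_MAX and wraps
       modulo 2^32, yielding 705032704 instead of 5000000000. */
    uint32_t wrapped = sec * 1000000000U;

    /* 64-bit arithmetic: the ULL literal promotes the multiplication
       to unsigned long long, so the full value fits. */
    uint64_t correct = sec * 1000000000ULL;

    printf("32-bit result: %u\n", wrapped);                       /* 705032704 */
    printf("64-bit result: %llu\n", (unsigned long long)correct); /* 5000000000 */
    return 0;
}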


By the way, the expression can be simplified to

start.tv_sec * 1000000000ULL + start.tv_nsec

This simplification works because of the usual arithmetic conversions: since one operand is an unsigned long long literal (that's what the ULL suffix means), the other operands in the expression are converted to unsigned long long as needed, and the result is of type unsigned long long.
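
Putting it all together, here is a minimal self-contained sketch of the whole measurement. The busy loop is just a hypothetical stand-in for the work being timed; CLOCK_REALTIME is kept to match the question, though CLOCK_MONOTONIC is usually preferable for intervals because it never jumps when the system clock is adjusted:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_REALTIME, &start);
    /* Do something -- placeholder workload. */
    for (volatile long i = 0; i < 10000000L; i++)
        ;
    clock_gettime(CLOCK_REALTIME, &end);

    /* Convert each timestamp to nanoseconds in 64-bit arithmetic,
       then subtract to get the elapsed time. */
    unsigned long long start_ns = start.tv_sec * 1000000000ULL + start.tv_nsec;
    unsigned long long end_ns   = end.tv_sec   * 1000000000ULL + end.tv_nsec;

    printf("elapsed: %llu ns\n", end_ns - start_ns);
    return 0;
}

Note that on older glibc versions you may need to link with -lrt for clock_gettime.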
