Question

I am running some local experiments on different database systems. I collect (sum up) CPU information from /proc/status before and after I execute a query. The difference should tell me the number of jiffies (USER_HZ ticks) consumed during query runtime. But the difference is zero whenever (according to clock_gettime()) a query has a runtime somewhere below 0.001 seconds. Is this too fast to register in the CPU accounting, or am I missing something else?


Solution

A jiffy, as of Linux kernel 2.6.0, is 1/250 of a second, or 0.004 seconds [see time(7)]. Counters expressed in jiffies can never resolve anything shorter than that, which is why a sub-millisecond query shows a difference of zero.

I recommend you use the rdtsc instruction instead, which is likely available as a compiler intrinsic. The time-stamp counter is incremented every CPU cycle, so dividing the tick delta by the CPU frequency gives you the elapsed time. You can also implement it with inline assembly.

It's actually a bit counterproductive to be checking /proc/status for this, because there's a good chance that opening the file descriptor and reading the contents takes longer than your query did to execute. rdtsc is far cheaper and much more precise.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow