I have a program that calculates the latency of an object in a pub-sub model. I use the following function to get a timestamp:

#include <stdint.h>
#include <sys/time.h>

// Combine the tv_sec and tv_usec fields from gettimeofday() into one 64-bit value.
uint64_t GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec * (uint64_t)1000000 + tv.tv_usec;
}

The latency is measured as the difference between the publisher's timestamp and the subscriber's timestamp, so I'm concerned about the unit of the measured latency. Is it in seconds or microseconds?

Solution

The timeval structure has tv_sec, which gives you the whole number of seconds, and tv_usec, which gives you the remaining fraction in microseconds.

So the timestamp you get has microsecond resolution, and the difference between two such timestamps is in microseconds.

For more information, see http://www.gnu.org/software/libc/manual/html_node/Elapsed-Time.html
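To make the unit concrete, here is a minimal sketch (the published_at/received_at values are made up for illustration) showing that subtracting two GetTimeStamp() values yields microseconds, which you can then convert to milliseconds:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    // Hypothetical values captured by GetTimeStamp() at publish and receive time,
    // expressed in microseconds since the Unix epoch.
    uint64_t published_at = 1700000000123456ULL;
    uint64_t received_at  = 1700000000125956ULL;

    uint64_t latency_us = received_at - published_at;   // 2500 microseconds
    printf("latency: %llu us (%.3f ms)\n",
           (unsigned long long)latency_us, latency_us / 1000.0);
    return 0;
}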

Other tips

tv.tv_sec gives the whole-second count; tv.tv_usec gives the remaining microseconds.
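As an illustration (not from the original answer; diff_us is a hypothetical helper), the same difference can be computed from the struct timeval fields directly, borrowing a second when the microsecond part would go negative:

#include <stdio.h>
#include <sys/time.h>

// Sketch: subtract two timestamps field by field; assumes end is not earlier than start.
long long diff_us(struct timeval start, struct timeval end) {
    long long sec  = end.tv_sec  - start.tv_sec;
    long long usec = end.tv_usec - start.tv_usec;
    if (usec < 0) {            // borrow one second if needed
        sec  -= 1;
        usec += 1000000;
    }
    return sec * 1000000LL + usec;   // result is in microseconds
}

int main(void) {
    struct timeval t0 = { .tv_sec = 10, .tv_usec = 900000 };
    struct timeval t1 = { .tv_sec = 12, .tv_usec = 100000 };
    printf("%lld us\n", diff_us(t0, t1));   // prints 1200000 (1.2 seconds)
    return 0;
}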

For how gettimeofday() works and how accurate it is, see:

How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?

Is gettimeofday() guaranteed to be of microsecond resolution?
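As a quick, informal way to see the granularity on a given system (illustrative only; the linked questions cover the details), one can call gettimeofday() repeatedly and report the smallest nonzero step observed:

#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

int main(void) {
    struct timeval tv;
    gettimeofday(&tv, NULL);
    uint64_t prev = tv.tv_sec * (uint64_t)1000000 + tv.tv_usec;
    uint64_t min_step = UINT64_MAX;

    for (int i = 0; i < 1000000; i++) {
        gettimeofday(&tv, NULL);
        uint64_t now = tv.tv_sec * (uint64_t)1000000 + tv.tv_usec;
        if (now > prev && now - prev < min_step)
            min_step = now - prev;   // track the smallest forward jump seen
        prev = now;
    }
    printf("smallest observed step: %llu us\n", (unsigned long long)min_step);
    return 0;
}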
