Question

I have some code to convert a time value returned from QueryPerformanceCounter to a double value in milliseconds, as this is more convenient to work with.

The function looks like this:

#include <windows.h>

// Returns the current performance counter converted to milliseconds (0.0 on failure).
double timeGetExactTime() {
    LARGE_INTEGER timerPerformanceCounter, timerPerformanceFrequency;
    QueryPerformanceCounter(&timerPerformanceCounter);
    if (QueryPerformanceFrequency(&timerPerformanceFrequency)) {
        return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
    }
    return 0.0;
}

The problem I've been having recently (I don't think I had it before, and no changes have been made to the code) is that the result is not very accurate: it contains no decimals, and it is even less accurate than 1 millisecond.

When I enter the expression in the debugger, the result is as accurate as I would expect.

I understand that a double cannot hold the full accuracy of a 64-bit integer, but at this time the performance counter only requires 45 bits (and a double should be able to store 52 bits without loss). Furthermore, it seems odd that the debugger would use a different format to do the division.
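For illustration (a minimal check I added, not part of the original code), the two sizes can be compared directly: std::numeric_limits reports the double's 53-bit significand, and the counter value quoted below needs 45 bits.

#include <iostream>
#include <limits>

int main() {
    const unsigned long long counter = 30270310439445ULL; // timerPerformanceCounter.QuadPart from below

    // Count how many bits are needed to represent the counter value.
    int bitsNeeded = 0;
    for (unsigned long long v = counter; v != 0; v >>= 1) ++bitsNeeded;

    std::cout << "double significand bits: " << std::numeric_limits<double>::digits << "\n"; // prints 53
    std::cout << "bits needed by counter:  " << bitsNeeded << "\n";                          // prints 45
    return 0;
}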

Here are some results I got. The program was compiled in Debug mode; Floating Point mode in the C++ options was set to the default, Precise (/fp:precise).

timerPerformanceCounter.QuadPart: 30270310439445
timerPerformanceFrequency.QuadPart: 14318180
double perfCounter = (double)timerPerformanceCounter.QuadPart;
30270310439445.000

double perfFrequency = (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
14318.179687500000

double result = perfCounter / perfFrequency;
2114117248.0000000

return (double)timerPerformanceCounter.QuadPart / (((double)timerPerformanceFrequency.QuadPart) / 1000.0);
2114117248.0000000

Result with same expression in debugger:
2114117188.0396111

Result of perfTimerCount / perfTimerFreq in debugger:
2114117234.1810646

Result of 30270310439445 / 14318180 in calculator:
2114117188.0396111796331656677036
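For reference, the intermediate values can also be printed at full precision from inside the program to compare with the debugger and calculator results above (a minimal sketch using the standard <iostream>/<iomanip> headers, not the code from my project):

#include <windows.h>
#include <iomanip>
#include <iostream>

int main() {
    LARGE_INTEGER counter, frequency;
    QueryPerformanceCounter(&counter);
    QueryPerformanceFrequency(&frequency);

    double perfCounter = (double)counter.QuadPart;
    double perfFrequency = ((double)frequency.QuadPart) / 1000.0;

    // Print with enough significant digits to see any precision loss.
    std::cout << std::setprecision(17)
              << perfCounter << "\n"
              << perfFrequency << "\n"
              << (perfCounter / perfFrequency) << std::endl;
    return 0;
}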

Does anyone know why the accuracy is different in the debugger's Watch compared to the result in my program?

Update: I tried subtracting 30270310439445 from timerPerformanceCounter.QuadPart before doing the conversion and division, and it does appear to be accurate in all cases now. Maybe the reason I'm only seeing this behavior now is that my computer's uptime is 16 days, so the value is larger than I'm used to. So it does appear to be a division accuracy issue with large numbers, but that still doesn't explain why the division was still correct in the Watch window. Does it use a higher-precision type than double for its results?
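For a rough sense of scale (my own back-of-the-envelope arithmetic, assuming the 14,318,180 Hz frequency shown above), the raw counter grows by more than 10^12 counts per day:

#include <iostream>

int main() {
    const long long frequency = 14318180;            // counts per second (value from above)
    const long long perDay    = frequency * 86400LL; // ~1.24e12 counts per day
    const long long perWeek   = perDay * 7;          // ~8.7e12 counts per week
    std::cout << perDay << " counts/day, " << perWeek << " counts/week" << std::endl;
    return 0;
}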


Solution 2

Thanks, using decimal would probably be a solution too. For now I've taken a slightly different approach, which also works well, at least as long as my program doesn't run for longer than a week or so without restarting: I just remember the performance counter value from when my program started, and subtract it from the current counter before converting to double and doing the division.

I'm not sure which solution would be faster; I guess I'd have to benchmark that first.

#include <windows.h>   // QueryPerformanceCounter / QueryPerformanceFrequency
#include <mmsystem.h>  // timeGetTime (link with winmm.lib)

bool perfTimerInitialized = false;
double timerPerformanceFrequencyDbl;        // counts per millisecond
LARGE_INTEGER timerPerformanceFrequency;
LARGE_INTEGER timerPerformanceCounterStart; // counter value at program start

double timeGetExactTime()
{
    // On the first call, cache the frequency (converted to counts per
    // millisecond) and remember the counter value at startup.
    if (!perfTimerInitialized) {
        QueryPerformanceFrequency(&timerPerformanceFrequency);
        timerPerformanceFrequencyDbl = ((double)timerPerformanceFrequency.QuadPart) / 1000.0;
        QueryPerformanceCounter(&timerPerformanceCounterStart);
        perfTimerInitialized = true;
    }

    LARGE_INTEGER timerPerformanceCounter;
    if (QueryPerformanceCounter(&timerPerformanceCounter)) {
        // Subtract the start value so the number converted to double stays small.
        timerPerformanceCounter.QuadPart -= timerPerformanceCounterStart.QuadPart;
        return ((double)timerPerformanceCounter.QuadPart) / timerPerformanceFrequencyDbl;
    }

    // Fall back to the lower-resolution timer if QueryPerformanceCounter fails.
    return (double)timeGetTime();
}
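A minimal usage sketch of the function above (assuming <windows.h> for Sleep and <cstdio> for printf; Sleep just stands in for the work being timed):

// Measure an interval in milliseconds with timeGetExactTime().
double start = timeGetExactTime();
Sleep(100);                                    // ... the work to be measured ...
double elapsedMs = timeGetExactTime() - start;
printf("elapsed: %.3f ms\n", elapsedMs);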

OTHER TIPS

Adion,

If you don't mind the performance hit, cast your QuadPart numbers to decimal instead of double before performing the division. Then cast the resulting number back to double.
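Since native C++ has no decimal type, a comparable idea (my sketch, not necessarily what the answer intends) is to do the large part of the division in 64-bit integer arithmetic, which is exact, and only convert the small remainder to double:

// Sketch: split the division so the big numbers stay in exact integer math.
double counterToMilliseconds(LONGLONG counter, LONGLONG frequency) {
    LONGLONG wholeSeconds = counter / frequency;  // exact integer division
    LONGLONG remainder    = counter % frequency;  // always smaller than frequency
    return (double)wholeSeconds * 1000.0
         + ((double)remainder * 1000.0) / (double)frequency;
}

The values that end up in doubles are then far smaller than the raw counter.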

You are correct about the size of the numbers. It throws off the accuracy of the floating point calculations.

For more about this than you probably ever wanted to know, see:

What Every Computer Scientist Should Know About Floating-Point Arithmetic http://docs.sun.com/source/806-3568/ncg_goldberg.html

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow