I needed a way to get the current second of the day, so I experimented with fmod() and gettimeofday() (Mac OS X). Along the way I ran into some odd results:

#include <iostream>

#include <cstdio>      // getchar()
#include <cmath>       // fmod(), fmodf()
#include <sys/time.h>  // gettimeofday()
#include <unistd.h>    // sleep()

class A {
public:
    static float secondOfDayFmodF()
    {
        timeval t;
        gettimeofday(&t, NULL);

        return fmodf((t.tv_sec) + (t.tv_usec / 1000000.0), 86400);
    }

    static float secondOfDayFmod()
    {
        timeval t;
        gettimeofday(&t, NULL);

        return fmod(((t.tv_sec) + (t.tv_usec / 1000000.0)), 86400);
    }
};

using namespace std;

int main(int argc, const char *argv[])
{    
    for (int i = 0; i < 100; i++)
    {
        cout << "fmodf:\t" << A::secondOfDayFmodF() << endl;
        cout << "fmod:\t"  << A::secondOfDayFmod()  << endl;

        // sleep for 1 s
        sleep(1);
    }

    getchar();
}

Output:

fmodf: 5760
fmod: 5699.17
fmodf: 5760
fmod: 5700.17
fmodf: 5760
fmod: 5701.17
fmodf: 5760
fmod: 5702.17
fmodf: 5760
fmod: 5703.17
fmodf: 5760
fmod: 5704.17
fmodf: 5760
fmod: 5705.18
...

So, why does the fmodf() version print the same value every time, while the fmod() version gives the expected result (increasing after each sleep() call)? Am I missing something in the documentation?


Solution

Single-precision floats don't have enough precision to store all the bits of (t.tv_sec) + (t.tv_usec / 1000000.0). A float has only a 24-bit significand, while a Unix timestamp is around 2^30, so adjacent representable floats at that magnitude are 128 apart: every timestamp within a roughly 2-minute window rounds to the same float, and fmodf() returns the same result for all of them. fmod() works in double, whose 53-bit significand holds the full value. If you wait long enough (about 2 minutes), you'll see the fmodf() output make a big jump.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow