Question

TimeSpan.FromSeconds takes a double, and TimeSpan can represent values down to 100 nanoseconds (one tick); however, this method inexplicably rounds the time to whole milliseconds.

Given that I've just spent half an hour pinpointing this (documented!) behaviour, knowing why it might be the case would make it easier to put up with the wasted time.

Can anyone suggest why this seemingly counter-productive behaviour is implemented?

TimeSpan.FromSeconds(0.12345678).TotalSeconds
    // 0.123
TimeSpan.FromTicks((long)(TimeSpan.TicksPerSecond * 0.12345678)).TotalSeconds
    // 0.1234567

Solution

As you've found out yourself, it's a documented feature. It's described in the documentation of TimeSpan:

Parameters

value Type: System.Double

A number of seconds, accurate to the nearest millisecond.

The reason for this is probably that a double simply isn't that accurate. It is always a good idea to do some rounding when comparing doubles, because the value might be a tiny bit larger or smaller than you expect. That behaviour could hand you some unexpected stray ticks when you try to put in whole milliseconds. I think that is why they chose to round the value to whole milliseconds and discard the smaller digits.
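A minimal illustration of the underlying issue (plain IEEE 754 behaviour, nothing specific to TimeSpan):

double a = 0.1 + 0.2;   // actually 0.30000000000000004
double b = 0.3;
Console.WriteLine(a == b);                                 // False
Console.WriteLine(Math.Round(a, 9) == Math.Round(b, 9));  // True: rounding absorbs the noise

Rounding to whole milliseconds inside FromSeconds plays the same role: it swallows the representation error before it can surface as stray ticks.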

OTHER TIPS

Purely as speculation:

  1. TimeSpan.MaxValue.TotalMilliseconds equals 922337203685477, a number with 15 digits.
  2. A double is precise to about 15 significant digits.
  3. TimeSpan.FromSeconds, TimeSpan.FromMinutes, etc. all go through a conversion to milliseconds expressed as a double (then to ticks, then to a TimeSpan, which is not interesting here).

So when you create a TimeSpan close to TimeSpan.MaxValue (or MinValue), the conversion can be precise to milliseconds only.
The probable answer to the question "why" is therefore: to have the same precision at all times.
A further thing to think about is whether the job could have been done better by first converting the value to ticks expressed as a long; a sketch of that idea follows.
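A minimal sketch of that alternative, using a hypothetical helper name of our own (this is not a BCL API); it keeps the full 100-nanosecond resolution, though unlike the real Interval method it performs no range check:

static TimeSpan FromSecondsPrecise(double seconds)
{
    // Convert straight to ticks (1 tick == 100 ns) instead of milliseconds,
    // rounding to the nearest tick rather than the nearest millisecond.
    return TimeSpan.FromTicks((long)Math.Round(seconds * TimeSpan.TicksPerSecond));
}

FromSecondsPrecise(0.12345678).TotalSeconds
    // 0.1234568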

Imagine you're the developer responsible for designing the TimeSpan type. You've got all the basic functionality in place; it all seems to be working great. Then one day some beta tester comes along and shows you this code:

double x = 100000000000000;
double y = 0.5;
TimeSpan t1 = TimeSpan.FromMilliseconds(x + y);
TimeSpan t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);

"Why does that output False?" the tester asks you. Even though you understand why this happened (the loss of precision in adding x and y together), you have to admit it does seem a bit strange from a client's perspective. Then he throws this one at you:

x = 10.0;
y = 0.5;
t1 = TimeSpan.FromMilliseconds(x + y);
t2 = TimeSpan.FromMilliseconds(x) + TimeSpan.FromMilliseconds(y);
Console.WriteLine(t1 == t2);

That one outputs True! The tester is understandably skeptical.

At this point you have a decision to make. Either you can allow an arithmetic operation between TimeSpan values that have been constructed from double values to yield a result whose precision exceeds the accuracy of the double type itself—e.g., 100000000000000.5 (16 significant figures)—or you can, you know, not allow that.

So you decide: you know what, I'll just make it so that any method that uses a double to construct a TimeSpan rounds to the nearest millisecond. That way it is explicitly documented that converting from a double to a TimeSpan is a lossy operation, which absolves me whenever a client sees weird behaviour like the above after converting from double to TimeSpan and hoping for an accurate result.

I'm not necessarily arguing that this is the "right" decision here; clearly, this approach causes some confusion on its own. I'm just saying that a decision needed to be made one way or the other, and this is what was apparently decided.

I think the explanation is here: "TimeSpan structure incorrectly handles values close to min and max value"

And it looks like it's not going to change any time soon :-)

FromSeconds uses the private method Interval:

public static TimeSpan FromSeconds(double value)
{
    return Interval(value, 0x3e8);
}

0x3e8 == 1000

The Interval method multiplies the value by that constant and then casts the result to long (see the last line):

private static TimeSpan Interval(double value, int scale)
{
    if (double.IsNaN(value))
    {
        throw new ArgumentException(Environment.GetResourceString("Arg_CannotBeNaN"));
    }
    double num = value * scale;                         // scale the value to milliseconds
    double num2 = num + ((value >= 0.0) ? 0.5 : -0.5);  // offset so the truncating cast below rounds to the nearest millisecond
    if ((num2 > 922337203685477) || (num2 < -922337203685477))
    {
        throw new OverflowException(Environment.GetResourceString("Overflow_TimeSpanTooLong"));
    }
    return new TimeSpan(((long) num2) * 0x2710L);       // cast to long (whole milliseconds), then to ticks (0x2710 == 10000 ticks per millisecond)
}

As a result we get millisecond precision: three decimal places of seconds, hence the ×1000 scale factor. (Use a decompiler such as Reflector to investigate.)
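Tracing that arithmetic by hand for the value from the question shows exactly where the precision goes:

double num   = 0.12345678 * 1000;  // 123.45678 milliseconds
double num2  = num + 0.5;          // 123.95678
long   ms    = (long)num2;         // 123: the truncating cast completes the round-to-nearest
long   ticks = ms * 0x2710L;       // 1230000 ticks
    // new TimeSpan(1230000).TotalSeconds == 0.123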

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow