Question

I tried this code with these values:

float timeStamp;
a = 1338526801
b = 113678

timeStamp = a + (b / 1000000);

Then I changed b to 113680 and recalculated the timeStamp:

timeStamp = a + (b / 1000000);

The timeStamp really should change because b has changed, but when I print it with Console.WriteLine(), the value stays the same. I think this is related to the precision of C#'s floating-point types, but I don't know how to resolve it.
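For reference, a minimal snippet that reproduces this behaviour (assuming a and b are also declared as float, which the code above does not show):

float a = 1338526801f;
float b = 113678f;                           // changing this to 113680f makes no difference
float timeStamp = a + (b / 1000000);
Console.WriteLine(timeStamp.ToString("F6")); // prints the same value for both b values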


Solution

You should take a look at the Floating-Point Types Table (C# Reference), which gives the following info:

> Type       Approximate range      Precision 
> float      ±1.5e−45 to ±3.4e38    7 digits
> double     ±5.0e−324 to ±1.7e308  15-16 digits

Your combination of 338526801 + 113678/1000000 needs roughly 15-16 significant digits, so it fits better in a double than in a float.

A float, which carries only about 7 significant digits, gets you accuracy to 338526800.000000 and no more:

float f = 338526801 + 113678f/1000000;
System.Diagnostics.Debug.Print(f.ToString("F6")); // results in 338526800.000000

However, a double, with its 15-16 digits, can actually store the data at your precision:

double d = 338526801d + 113678d/1000000;
System.Diagnostics.Debug.Print(d.ToString("F6")); // results in 338526801.113678

You could also look at TimeSpan and DateTime, which give you accuracy in 100-nanosecond units (ticks). Since there are 10 ticks in a microsecond (µs), the same value as a TimeSpan would be:

TimeSpan time = new TimeSpan(3385268011136780);
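A sketch of building that TimeSpan directly from the values in the question, assuming a is whole seconds and b is microseconds (the variable names and the FromTicks approach here are illustrative, not from the original answer):

long a = 1338526801;   // whole seconds, from the question
long b = 113678;       // microseconds, from the question
// 10,000,000 ticks per second and 10 ticks per microsecond
TimeSpan span = TimeSpan.FromTicks(a * 10000000L + b * 10L);
// span.TotalSeconds is a double, so the microsecond part is preserved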

One of the comments suggested you might be trying to convert Unix time. If so, you can add the TimeSpan to a DateTime representing 1/1/1970 (the Unix epoch), as sketched below.
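For example, a minimal sketch assuming the value is a UTC Unix timestamp of seconds plus microseconds:

DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc);
TimeSpan span = TimeSpan.FromTicks(13385268011136780L); // 1338526801 s + 113678 us expressed in ticks
DateTime utc = epoch + span;                            // the original timestamp, microseconds intact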

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow