You should take a look at the Floating-Point Types Table (C# Reference), which gives the following info:
> | Type | Approximate range | Precision |
> |------|-------------------|-----------|
> | `float` | ±1.5e−45 to ±3.4e38 | 7 digits |
> | `double` | ±5.0e−324 to ±1.7e308 | 15-16 digits |
Your combination of `338526801 + 113678/1000000` needs about 15-16 significant digits and would fit better in a double.
A float, which holds only 7 significant digits, gets you accuracy to 338526800.000000 and no further:
float f = 338526801 + 113678f/1000000;
System.Diagnostics.Debug.Print(f.ToString("F6")); // results in 338526800.000000
However, a double, which gets 15-16 digits, can actually store the data at your full precision:
double d = 338526801d + 113678d/1000000;
System.Diagnostics.Debug.Print(d.ToString("F6")); // results in 338526801.113678
You could also look at TimeSpan and DateTime, which give you accuracy in 100-nanosecond units called ticks. Since there are 10 ticks in a microsecond (µs), the same value as a TimeSpan would be:
TimeSpan time = new TimeSpan(3385268011136780);
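As a quick sanity check (just a sketch, using the `time` variable above), `TotalSeconds` returns the value as a double and should round-trip your precision:

System.Diagnostics.Debug.Print(time.TotalSeconds.ToString("F6")); // results in 338526801.113678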
One of the comments suggested you might be trying to convert Unix time. If so, you can add the TimeSpan to the DateTime representing 1/1/1970.
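A minimal sketch of that conversion (assuming your value really is Unix seconds, with the microseconds folded into the ticks as above; `epoch` and `moment` are illustrative names):

DateTime epoch = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc); // Unix epoch, 1/1/1970 UTC
DateTime moment = epoch + new TimeSpan(3385268011136780); // 338526801.113678 s after the epoch
System.Diagnostics.Debug.Print(moment.ToString("o")); // results in 1980-09-23T03:13:21.1136780Z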