Question

Can anyone explain why this happens?

static void Main()
{
    const float xScaleStart = 0.5f;
    const float xScaleStop = 4.0f;
    const float xScaleInterval = 0.1f;
    const float xScaleAmplitude = xScaleStop - xScaleStart;

    const float xScaleSizeC = xScaleAmplitude / xScaleInterval;

    float xScaleSize = xScaleAmplitude / xScaleInterval;

    Console.WriteLine(">const float {0}, (int){1}", xScaleSizeC, (int)xScaleSizeC);

    Console.WriteLine(">      float {0}, (int){1}", xScaleSize, (int)xScaleSize);

    Console.ReadLine();
}

Output:

>const float 35, (int)34
>      float 35, (int)35

I know that 0.1 cannot be represented exactly in binary floating point (0.1f actually stores approximately 0.100000001490116), but why does this happen using 'const float' and not with 'float'? Is this considered a compiler bug?
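
For reference, a quick way to see the value that 0.1f actually stores is to widen it to double before printing (this snippet is an illustration added here, not part of the original question):

Console.WriteLine((double)0.1f);   // ~0.100000001490116, i.e. slightly more than 0.1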

For the record, the code compiles into:

private static void Main(string[] args)
{
    float xScaleSize = 35f;
    Console.WriteLine(">const float {0}, (int){1}", 35f, 34);
    Console.WriteLine(">      float {0}, (int){1}", xScaleSize, (int)xScaleSize);
    Console.ReadLine();
}

Solution

The "Why" of this will basically boil down to the fact that frequently, when working with float data, an internal representation may be used that has more precision than is specified for float or double. This is explicitly catered for in the Virtual Execution System (VES) Spec (section 12 of Partition I):

floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either float32 or float64, but its value can be represented internally with additional range and/or precision

And then later we have:

The use of an internal representation that is wider than float32 or float64 can cause differences in computational results when a developer makes seemingly unrelated modifications to their code, the result of which can be that a value is spilled from the internal representation (e.g., in a register) to a location on the stack.
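
Here is a minimal sketch of that effect (the variable names are mine, and the run-time behaviour assumes a typical JIT that performs the float division in single precision, as in the question's output):

float amplitude = 4.0f - 0.5f;                 // 3.5, exactly representable
float interval  = 0.1f;                        // actually ~0.100000001490116

float  narrow = amplitude / interval;          // rounded to float32: exactly 35
double wide   = (double)amplitude / interval;  // kept wider: ~34.99999948

Console.WriteLine((int)narrow);                // 35
Console.WriteLine((int)wide);                  // 34 - truncation exposes the extra precision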

Now, according to the C# language specification:

The compile-time evaluation of constant expressions uses the same rules as run-time evaluation of non-constant expressions, except that where run-time evaluation would have thrown an exception, compile-time evaluation causes a compile-time error to occur.

But as we see above, the rules actually allow more precision to be used at times, and exactly when that extra precision is used isn't under our direct control.


And obviously, in different circumstances, the results could have been precisely the opposite of what you observed - the compiler may have dropped to lower precision and the runtime could have maintained higher precision instead.
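
If the code depends on the truncated value, one way to make it insensitive to these precision differences (a suggested workaround, not part of the original answer) is to round before converting:

const float xScaleStart    = 0.5f;
const float xScaleStop     = 4.0f;
const float xScaleInterval = 0.1f;

// Math.Round snaps ~34.99999948 (or exactly 35) to 35 before the int conversion,
// so the result no longer depends on where, or at what precision, the division was evaluated.
int steps = (int)Math.Round((xScaleStop - xScaleStart) / xScaleInterval);
Console.WriteLine(steps);   // 35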

OTHER TIPS

I can't say this is a duplicate question, but a comment by Eric Postpischil on a related question explained something very similar regarding ints and const ints.

The main idea is that the division of the two constants is calculated by the compiler before generating the code, not at run-time; but in this specific case the compiler performs the calculation in double precision. Thus xScaleSizeC is effectively equal to 34.9999..., so when it is cast to int it becomes 34 (and, as the decompiled code shows, that cast is also folded at compile time, hence the 34 literal).
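
A quick way to reproduce what the compiler effectively evaluates (a reconstruction for illustration, assuming the folding is done in double precision as described above):

double folded = ((double)4.0f - (double)0.5f) / (double)0.1f;  // the constant division carried out in double
Console.WriteLine(folded);        // ~34.99999948, not 35
Console.WriteLine((int)folded);   // 34, matching the literal in the decompiled code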

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow