Question

I know you can't rely on equality between double or decimal type values normally, but I'm wondering if 0 is a special case.

While I can understand imprecision between 0.00000000000001 and 0.00000000000002, 0 itself seems pretty hard to mess up, since it's just nothing. If you're imprecise on nothing, it's not nothing anymore.

But I don't know much about this topic so it's not for me to say.

double x = 0.0;
return (x == 0.0) ? true : false;

Will that always return true?


Solution

It is safe to expect that the comparison will return true if and only if the double variable has a value of exactly 0.0 (which, in your original code snippet, is of course the case). This is consistent with the semantics of the == operator: a == b means "a is equal to b".

It is not safe (because it is not correct) to expect that the result of some calculation will be zero in double (or, more generally, floating-point) arithmetic whenever the result of the same calculation in pure mathematics is zero. This is because as soon as calculations are involved, floating-point rounding error appears, a concept which does not exist in real-number arithmetic.
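
A quick illustration with standard IEEE 754 doubles:

double a = 0.0;
bool b1 = (a == 0.0);           // true: the value was assigned exactly

double c = 0.1 + 0.2 - 0.3;     // zero in pure mathematics
bool b2 = (c == 0.0);           // false: c is about 5.55e-17 after rounding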

OTHER TIPS

If you need to do a lot of "equality" comparisons, it might be a good idea to write a little helper function (or, in .NET 3.5, an extension method) for comparing:

public static class DoubleExtensions
{
    // Returns true when the two values differ by no more than the given tolerance.
    public static bool AlmostEquals(this double double1, double double2, double precision)
    {
        return Math.Abs(double1 - double2) <= precision;
    }
}

This could be used the following way:

double d1 = 10.0 * .1 - 1.0;                   // mathematically zero
bool equals = d1.AlmostEquals(0.0, 0.0000001); // true: within the tolerance of 0.0
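
One caveat worth noting: a fixed absolute tolerance such as 0.0000001 does not scale with the magnitude of the operands. For large values, a relative variant along these lines may fit better (a sketch to be placed in the same static class as AlmostEquals above; the name RelativelyEquals is my own):

public static bool RelativelyEquals(this double a, double b, double relativeTolerance)
{
    // scale the allowed difference by the larger magnitude of the two operands
    double scale = Math.Max(Math.Abs(a), Math.Abs(b));
    return Math.Abs(a - b) <= relativeTolerance * scale;
}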

For your simple sample, that test is okay. But what about this:

bool b = ( 10.0 * .1 - 1.0 == 0.0 );

Remember that .1 is a repeating fraction in binary and can't be represented exactly. Then compare that to this code:

double d1 = 10.0 * .1; // make sure the compiler hasn't optimized the .1 issue away
bool b = ( d1 - 1.0 == 0.0 );

I'll leave you to run a test to see the actual results: you're more likely to remember it that way.
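
If you want a ready-made harness, a minimal console program for the experiment looks something like this (a sketch; it prints the two results without giving them away):

using System;

class FloatingPointTest
{
    static void Main()
    {
        bool b1 = (10.0 * .1 - 1.0 == 0.0);   // all-constant expression

        double d1 = 10.0 * .1;                // computed through a variable
        bool b2 = (d1 - 1.0 == 0.0);

        Console.WriteLine("constant expression: " + b1);
        Console.WriteLine("via variable:        " + b2);
    }
}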

From the MSDN entry for Double.Equals:

Precision in Comparisons

The Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values. The following example reports that the Double value .33333 and the Double returned by dividing 1 by 3 are unequal.

...

Rather than comparing for equality, one recommended technique involves defining an acceptable margin of difference between two values (such as .01% of one of the values). If the absolute value of the difference between the two values is less than or equal to that margin, the difference is likely to be due to differences in precision and, therefore, the values are likely to be equal. The following example uses this technique to compare .33333 and 1/3, the two Double values that the previous code example found to be unequal.

Also, see Double.Epsilon.
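
A short sketch of the documentation's margin technique (the variable names and the .01% figure follow the quoted description, not the docs' code verbatim):

double value1 = .33333;
double value2 = 1.0 / 3.0;

// accept a difference of up to .01% of the first value
double margin = Math.Abs(value1) * .0001;
bool probablyEqual = Math.Abs(value1 - value2) <= margin;   // true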

The problem comes when you compare values of different floating-point types, e.g. a float with a double. With the same type, it shouldn't be a problem:

float f = 0.1F;
bool b1 = (f == 0.1); //returns false
bool b2 = (f == 0.1F); //returns true

The problem is that the programmer sometimes forgets that an implicit conversion (float to double) is happening for the comparison, and it results in a bug.
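
You can watch the widening happen by printing the converted value with the round-trip format (a sketch; on my understanding, the output shows the value single precision actually stores for 0.1):

float f = 0.1F;
double d = f;                        // implicit float-to-double conversion
Console.WriteLine(d.ToString("R"));  // roughly 0.10000000149011612, not 0.1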

If the number was directly assigned to the float or double then it is safe to test against zero or any whole number that can be represented in 53 bits for a double or 24 bits for a float.

Or, to put it another way, you can always assign an integer value to a double and then compare the double back to the same integer and be guaranteed it will be equal.

You can also start out by assigning a whole number and have simple comparisons continue to work by sticking to adding, subtracting, or multiplying by whole numbers (assuming the result is less than 24 bits for a float and 53 bits for a double). So you can treat floats and doubles as integers under certain controlled conditions.
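
For example, this kind of whole-number round trip is exact (a minimal sketch of the guarantee just described):

double d = 42;              // any integer up to 2^53 is stored exactly in a double
d = d * 2 + 6;              // whole-number arithmetic, result still well under 2^53
bool exact = (d == 90);     // true: no rounding occurred anywhere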

No, it is not OK. So-called denormalized (subnormal) values compare as non-zero against 0.0, but when used in arithmetic they can be flushed to zero (for example on hardware running in flush-to-zero mode). Thus, using a comparison against 0.0 as a mechanism to avoid a divide-by-zero is not safe. Instead, add 1.0 and compare to 1.0: this ensures that all subnormals are treated as zero.
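
A sketch of that guard, using double.Epsilon (the smallest positive subnormal) to show the effect:

double x = double.Epsilon;                // a subnormal value
bool comparesAsZero = (x == 0.0);         // false: the subnormal is not equal to 0.0
bool effectivelyZero = (x + 1.0 == 1.0);  // true: x vanishes next to 1.0
// so guard divisions with (x + 1.0 == 1.0) rather than (x == 0.0)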

Try this, and you will find that == is not reliable for double/float:

double d = 0.1 + 0.2;
bool b = (d == 0.3); // false


Actually, I think it is better to use the following code to compare a double value against 0.0:

double x = 0.0;
return Math.Abs(x) < double.Epsilon;

Same for float:

float x = 0.0f;
return Math.Abs(x) < float.Epsilon;
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow