# Why would I use 2's complement to compare two doubles instead of comparing their differences against an epsilon value?

###### https://stackoverflow.com/questions/96233

### Full question

floating-point | double | c++

### Question

Referenced here and here... Why would I use two's complement over an epsilon method? It seems like the epsilon method would be good enough for most cases.

**Update:** I'm purely looking for a theoretical reason why you'd use one over the other. I've always used the epsilon method.

Has anyone used the 2's complement comparison successfully? Why? Why not?

### Solution

The second link you reference mentions an article that has quite a long description of the issue:

http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm

But unless you are tweaking for performance, I would stick with epsilon so people can debug your code.

### OTHER TIPS

The bits method might be faster. I say might because on modern (multicore, highly pipelined) processors it is often impossible to guess what is really faster. Code the simplest, most obviously correct implementation, then measure, then optimise.

In short, when comparing two floats with unknown origins, picking an epsilon that is valid is almost impossible.

For example:

What is a good epsilon when comparing distance in miles between Atlanta GA, Dallas TX and some place in Ohio?

What is a good epsilon when comparing distance in miles between my left foot, my right foot and the computer under my desk?

**EDIT:**

OK, I'm getting a fair number of people who don't understand why you wouldn't know what your epsilon is.

Back in the old days of lore, I wrote two programs that worked with Neverwinter Nights (a game made by BioWare). One program took a binary model and converted it to ASCII; the other took an ASCII model and compiled it into binary. One of the tests I wrote was to take all of BioWare's binary models, decompile them to ASCII, and then compile them back to binary. Then I compared my binary version with the original one from BioWare. One of the problems during the comparison was dealing with slight variances in the floating point values. So instead of coming up with a bunch of different EPSILONs for each type of floating point number (vertex, normal, etc.), I wanted to use something like this two's complement compare, thus avoiding the whole multiple-EPSILON issue.

The same type of issue can apply to any software that processes third-party data and then needs to validate its results against the original. In those cases you might not even know what the floating point values represent; you just have to compare them. We ran into this issue with our industrial automation software.

**EDIT:**

LOL, this has been voted up and down by different people.

I'll boil the problem down to this: given two **arbitrary** floating point numbers, how do you decide what epsilon to use? You can't.

How can you compare 1e23 and 1.0001e23 with an epsilon and still compare 1e-23 and 5.2e-23 using the same epsilon? Sure, you can do some dynamic epsilon tricks, but that is the whole point of the integer compare (which does NOT require the integers to be exact).

The integer compare is able to compare two floats using an epsilon relative to the magnitude of the numbers.

**EDIT**

Steve, let's look at what you said in the comments:

"But you know what equality means to you... Hence, you should be able to find an appropriate epsilon".

Turn this statement around to say:

"If you know what equality means to you, then you should be able to find an appropriate epsilon."

The whole point of what I am trying to say is that there are applications where we don't know what equality means in the absolute sense, so we have to resort to a relative compare, which is what the integer version does.

When it comes to speed, follow these rules:

- If you're not a very experienced developer, don't optimize.
- If you are an experienced developer, don't optimize yet.

Do the easiest method.

Alex

Oskar's right. Don't screw with this unless you really, really need that performance.

And you don't. If you were in a situation that did, you wouldn't have needed to ask the question -- you'd already know. If you think you do, then you don't. Your performance problems lie elsewhere. Just use the readable version.

Using any method that compares bitwise will run into trouble when values are approximations. All floating point values whose fractions do not have power-of-two denominators (1/2, 1/4, 1/8, 1/65536, etc.) are approximated. So, of course, are all irrational numbers.

```cpp
#include <cstdio>

int main() {
    float third = 1.0f / 3.0f;        // 1/3 has no exact binary representation
    double two = 2.0;
    double another_two = third * 6.0; // carries the representation error along
    if (two != another_two)
        std::printf("Approximation!\n");
}
```

The only time a bitwise compare would work is when you derive the floating point numbers in exactly the same way, or when they are exact representations (whole numbers, fractions with power-of-two denominators). Even then, some numbers can have multiple representations, though I have never seen this in a working system.