Assume that two variables x and y can be represented exactly in double precision. Would it be better to write x^2 - y^2 or (x + y)(x - y)?

I have thought about this problem, and I think x^2 - y^2 should be more accurate, because of the irrationality of certain roots of numbers (square roots).

I would really appreciate your answer!

Solution

Addition and subtraction of doubles is problematic whenever their magnitudes differ greatly. Say you have a precision of 10 decimal digits and want to compute

1234567890 + 0.05

Then the addition gets it wrong, because the mantissa cannot represent the trailing 05. (Actually, with 64-bit IEEE doubles, the precision is more like 15 or 16 decimal digits.)
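
To make the absorption concrete, here is a minimal Python sketch, assuming 64-bit IEEE doubles (which Python's float is on common platforms):

    # When the magnitudes differ by more than ~16 decimal digits,
    # the smaller addend is absorbed completely by the rounding.
    big = 1.0e16
    small = 1.0
    print(big + small == big)                      # True: 1.0 falls below the last mantissa bit

    # The 10-digit example above, scaled up to double precision:
    print(1.23456789e18 + 0.05 == 1.23456789e18)   # True: the 0.05 is lost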

From this we can conclude that the form (x + y)(x - y) should do better: when x is big and y is small (< 1), the magnitude difference between x² and y² is even greater than between x and y, which makes an accurate result from the subtraction even less likely.
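
As an illustration, here is a small Python sketch (again assuming 64-bit IEEE doubles) in which the two forms visibly disagree; the values are chosen so that the square no longer fits in the 53-bit mantissa, while every intermediate of (x + y)(x - y) stays exact:

    x = 134217729.0           # 2**27 + 1, exactly representable
    y = 134217728.0           # 2**27
    # The exact value of x**2 - y**2 is 2**28 + 1 = 268435457.

    print(x * x - y * y)      # 268435456.0 -- x*x needs 55 mantissa bits, so the +1 is rounded away
    print((x + y) * (x - y))  # 268435457.0 -- sum, difference, and product are all exact here

Note that this disagreement is easiest to provoke with nearly equal x and y, where the rounding error of the squaring is exposed by the cancellation in the subtraction.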
