Question

I am writing code to evaluate the following expression: $$ \frac{(a+b+c)!}{a! b! c!} $$ where $a$, $b$, and $c$ are in the range $10$ to $500$. The result will be a floating-point number. I could use a big-number package, but then the code would run slowly, so I am using 64-bit floating-point numbers.

I claim that by doing as much of the computation as possible in integer arithmetic (perhaps 64-bit), I will minimize the floating-point round-off error. Specifically, I claim that if I put the integer factors to be multiplied into arrays, cancel the factors common to numerator and denominator, and only then do the final computation in floating point, I will minimize round-off error.
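
For concreteness, here is a rough sketch (in C) of what I have in mind. The function name `multinomial`, the swap that makes $c$ the largest of the three, and the fixed array bound are just illustrative choices, not settled design:

```c
#include <stdio.h>
#include <stdint.h>

/* Greatest common divisor, used to cancel integer factors exactly. */
static uint64_t gcd(uint64_t x, uint64_t y) {
    while (y != 0) {
        uint64_t t = x % y;
        x = y;
        y = t;
    }
    return x;
}

/* Sketch: (a+b+c)!/(a! b! c!) with integer cancellation first.
 * Assumes 10 <= a, b, c <= 500, so a+b+c <= 1500 and every
 * individual factor fits easily in 64 bits. */
double multinomial(int a, int b, int c) {
    enum { MAX = 1024 };        /* a+b <= 1000 numerator factors */
    uint64_t num[MAX], den[MAX];
    int nn = 0, nd = 0;

    /* Make c the largest so the biggest factorial cancels exactly:
     * (a+b+c)!/c! = (c+1)(c+2)...(a+b+c). */
    if (a > c) { int t = a; a = c; c = t; }
    if (b > c) { int t = b; b = c; c = t; }

    for (int k = c + 1; k <= a + b + c; k++) num[nn++] = (uint64_t)k;
    for (int k = 2; k <= a; k++) den[nd++] = (uint64_t)k;
    for (int k = 2; k <= b; k++) den[nd++] = (uint64_t)k;

    /* Cancel common factors while everything is still an integer. */
    for (int i = 0; i < nd; i++)
        for (int j = 0; j < nn && den[i] > 1; j++) {
            uint64_t g = gcd(den[i], num[j]);
            den[i] /= g;
            num[j] /= g;
        }

    /* Final pass in 64-bit floating point.  Note: near the top of
     * the 10..500 range the true value exceeds what a double holds. */
    double result = 1.0;
    for (int j = 0; j < nn; j++) result *= (double)num[j];
    for (int i = 0; i < nd; i++) result /= (double)den[i];
    return result;
}

int main(void) {
    /* 30!/(10! 10! 10!) = 5550996791340 */
    printf("%.17g\n", multinomial(10, 10, 10));
    return 0;
}
```

The point of the swap is that the largest factorial cancels exactly against $(a+b+c)!$, leaving only the factors of the two smaller factorials to divide out. Since the quotient is an integer, the GCD pass reduces every denominator entry to $1$, so the final loop multiplies exact integers and the only rounding happens in those multiplications.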

Do I have this right?

No correct solution
