Question

I am trying to analyze how parallel reduction can be used to add a large array of floating-point numbers, and the precision loss involved. Reduction should give better precision than serial addition. I would be grateful if you could direct me to a detailed source or provide some insight for this analysis. Thanks.


OTHER TIPS

Every primitive floating-point operation has a rounding error: if the result is x, the rounding error is at most c * abs(x) for a small constant c > 0 (the unit roundoff, about 1.1e-16 for IEEE-754 double precision).
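A minimal Python sketch of this per-operation error bound (Python floats are IEEE-754 doubles):

```python
import sys

# The unit roundoff u = 2**-53 bounds the relative error of one rounded
# double-precision operation. sys.float_info.epsilon is the gap between
# 1.0 and the next representable float, i.e. 2 * u.
u = 2.0 ** -53
assert sys.float_info.epsilon == 2 * u

# 0.1 and 0.2 have no exact binary representation, so their rounded sum
# is not exactly 0.3. The observed error is a few u's worth, because the
# operands themselves were already rounded on conversion from decimal.
s = 0.1 + 0.2
print(s == 0.3)       # False
print(abs(s - 0.3))   # about 5.6e-17, on the order of u * |0.3|
```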

If you add 1000 numbers, that takes 999 additions, and each addition contributes a rounding error proportional to the magnitude of its result. So you want to order the additions so that the average absolute value of the intermediate results is as small as possible. A binary-tree (pairwise) reduction is one method. Sorting the values, then repeatedly adding the two smallest numbers and inserting the result back into the sorted list, is also quite reasonable. Both methods keep the intermediate results small, and therefore keep the accumulated rounding error small.
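The binary-tree idea can be sketched in Python. This compares a naive left-to-right sum against a pairwise (tree) sum, using `math.fsum` as a correctly rounded reference; the array of uniform random values is just an illustrative input, not from the original question.

```python
import math
import random

def pairwise_sum(xs):
    """Binary-tree (pairwise) summation: recursively sum each half.

    Intermediate results stay small relative to naive left-to-right
    accumulation, so the error bound grows like O(log n) rather than O(n).
    """
    n = len(xs)
    if n <= 2:
        return sum(xs)
    mid = n // 2
    return pairwise_sum(xs[:mid]) + pairwise_sum(xs[mid:])

random.seed(0)
data = [random.uniform(0.0, 1.0) for _ in range(100_000)]

exact = math.fsum(data)                    # correctly rounded reference
naive_err = abs(sum(data) - exact)         # left-to-right accumulation
tree_err = abs(pairwise_sum(data) - exact) # binary-tree reduction
print(naive_err, tree_err)
```

On typical inputs the tree sum is noticeably closer to the reference; this is also why a parallel reduction, which is structurally a binary tree, tends to be more accurate than a serial loop.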

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow