At each step in a sequence of floating-point arithmetic operations, the system has to produce a result that is representable in the floating-point format. Rounding the exact result to the nearest representable value can lose information; that loss is rounding error.
When adding two numbers of different magnitudes, the magnitude of the result determines which bits have to be dropped. If you add a large and a small number, many of the small number's low-order bits are lost to rounding, because representable values near the large result are coarsely spaced. That effect is reduced when the operands have similar magnitudes. Adding the small numbers first, leaving the large-magnitude numbers until the end, lets the small contributions accumulate before they meet that coarse rounding.
For example, consider { 1e17, 21.0, 21.0, 21.0, 21.0, 21.0, 21.0, 21.0, -1e17 }. The exact sum, without any rounding, would be 147. Adding in the order shown above gives 112: each addition of a 21.0 has to be rounded to fit in a number with magnitude around 1e17. Adding in ascending order of absolute magnitude gives 144, much closer to the exact answer. The partial sum of the seven small numbers is exactly 147, which is then rounded only once when it is added to 1e17.
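The example above can be reproduced directly. This is a minimal sketch assuming IEEE 754 double precision (Python's `float`), where representable values near 1e17 are spaced 16 apart, so every intermediate sum near 1e17 is rounded to a multiple of 16:

```python
# Same sequence as in the text: one large number, seven small ones,
# then the large number cancelled out again.
values = [1e17] + [21.0] * 7 + [-1e17]

# Adding in the given order: each 21.0 is absorbed into a running sum
# near 1e17, so each partial result is rounded to a multiple of 16.
total_in_order = 0.0
for v in values:
    total_in_order += v
print(total_in_order)  # 112.0

# Adding in ascending order of absolute magnitude: the seven 21.0s sum
# exactly to 147, which is rounded only once when 1e17 joins the sum.
total_ascending = 0.0
for v in sorted(values, key=abs):
    total_ascending += v
print(total_ascending)  # 144.0
```

Sorting by absolute value before summing is only a heuristic; it reduces, but does not eliminate, the rounding taken when the small accumulated total finally meets the large magnitude.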