Question

Has arbitrary-precision arithmetic affected numerical analysis software?

I feel that most numerical analysis software keeps on using the same floats and doubles.

If I'm right, I'd love to know the reason, as in my opinion there are some calculations that can benefit from the use of arbitrary-precision arithmetic, particularly when it is combined with a rational number representation, as is done in the GNU Multiple Precision Arithmetic Library (GMP).

If I'm wrong, examples would be nice.

Solution

Arbitrary precision is slow. Very slow. And the moment you use a function that produces an irrational value (as most trig functions do), you lose your arbitrary-precision advantage: the result has to be truncated back to finitely many digits.
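A hypothetical illustration of that point, using Python's standard-library Fraction as a stand-in for GMP's rational type:

```python
from fractions import Fraction
import math

# Binary doubles cannot represent 0.1 exactly, so rounding creeps in:
assert 0.1 + 0.2 != 0.3

# Exact rational arithmetic has no such error:
a = Fraction(1, 10) + Fraction(2, 10)
assert a == Fraction(3, 10)

# But the first irrational result forces an approximation again;
# sqrt(2) has no finite rational representation:
s = math.sqrt(Fraction(2))   # math.sqrt converts to a 64-bit double
assert isinstance(s, float)
assert s * s != 2            # the exactness is already gone
```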

So if you don't need, or can't use that precision, why spend all that CPU time on it?

OTHER TIPS

Has arbitrary-precision arithmetic affected numerical analysis software? I feel that most numerical analysis software keeps on using the same floats and doubles.

There are several unfortunate reasons that arbitrary-precision (ap) is not used more extensively.

  • Lack of support for important features: no NaN or infinities, no complex numbers or special functions, missing or buggy rounding modes (round-half-even is not implemented in GMP), and no handlers for important events such as loss of significant digits, overflow, or underflow (admittedly, these aren't implemented in most standard libraries either). Why is this important? Because without them you must invest a great deal of energy to formulate your problem in arbitrary precision (ever written a complex-number library or special functions in ap?), and you can't reproduce your double results because ap lacks the features you'd need to track down the differences.

  • 99.9% of all programmers aren't interested in numerics at all. One of the most frequently asked questions here is: "Why is 0.1 + 0.2 NOT 0.3???? HELP!!!" So why should programmers invest time to learn a specific ap implementation and reformulate their problem in it? If your ap results diverge from the double results and you have no knowledge of numerics, how do you find the bug? Is double precision too inexact? Does the ap library have a bug? What is going on?! Who knows....

  • Many numeric experts who do know how to compute discourage the use of ap. Frustrated by the hardware implementations of floating point, they insist that reproducibility is "impossible" to achieve anyway, and that input data almost always has only a few significant digits. So they mostly analyze the precision loss and rewrite the critical routines to minimize it.

  • Benchmark addiction. Wow, my computer is FASTER than others. As the other commentators rightly remarked, ap is much slower than hardware-supported floating-point datatypes because you must implement it by hand on top of the integer datatypes. One of the imminent dangers of this attitude is that programmers, totally unaware of the problems, choose solutions that spit out impressive-looking nonsense numbers. I am very cautious about GPGPU. Sure, graphics cards are much, much faster than the processor, but part of the reason is reduced precision and accuracy. If you use floats (32-bit) instead of doubles (64-bit), you have far fewer bits to compute and to transfer. The human eye is very fault-tolerant, so it does not matter if one or two results are off. Heck, as a hardware designer you can use imprecise, badly rounded computations to speed things up (which really is fine for graphics), and throw out those pesky subnormals and rounding modes. There is a very good reason why processors aren't as fast as GPUs.
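The precision cost of the float-versus-double trade-off mentioned above can be sketched in a few lines; `to_float32` below is a hypothetical helper that round-trips a double through IEEE 754 single precision, as a float-only GPU path would:

```python
import struct

def to_float32(x):
    # Round a Python double to the nearest IEEE 754 single-precision value
    # by packing and unpacking it as a 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

# 1 + 2**-30 is exactly representable in a 53-bit double significand...
x = 1.0 + 2**-30
assert x != 1.0

# ...but a 24-bit single-precision significand cannot hold it:
assert to_float32(x) == 1.0   # the perturbation is silently lost
```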

I can recommend William Kahan's page for some information about the problems in numerics.

Wolfram Research put a huge amount of effort into getting arbitrary-precision interval arithmetic into the core of Mathematica in a pragmatic way, and they did an excellent job. Mathematica will transparently do almost any computation to arbitrary precision.

If you look at programs like Mathematica, I strongly suspect you'd find that they do not use floats and doubles for their work. If you look at cryptography, you will definitely find that they do not use floats and doubles (but they are mainly working with integers anyway).
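The cryptography point is easy to demonstrate, since Python's built-in integers are already arbitrary precision; this sketch (values chosen for illustration, not from the original answer) shows crypto-style modular arithmetic done without any floats:

```python
# Arbitrary-precision *integer* arithmetic, the bread and butter of
# cryptographic code, needs no floating point at all.
p = 2**127 - 1          # M127, a Mersenne prime
g = 3
x = 12345               # an arbitrary exponent

y = pow(g, x, p)        # fast modular exponentiation on big integers
assert 0 <= y < p

# Sanity check via Fermat's little theorem: g**(p-1) == 1 (mod p)
assert pow(g, p - 1, p) == 1
```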

It is basically a judgement call. The people who feel that their product will benefit from increased accuracy and precision use extended-precision or arbitrary-precision arithmetic software. Those who don't think the precision is needed won't use it.

Arbitrary precision doesn't work well with irrational values. I think flipping everything upside down would help numerical analysis software: instead of figuring out what precision is needed for the calculation, you should tell the software what you want the final precision to be and let it figure everything out.

This way it can use a finite precision type just large enough for the calculation.
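That "work backwards from the requested precision" idea can be sketched with Python's standard-library `decimal` module; `eval_to` below is a hypothetical helper (not an existing API) that raises the working precision until the rounded answer stabilises:

```python
from decimal import Decimal, getcontext

def eval_to(digits, compute):
    """Evaluate compute() with enough working precision that the
    result, rounded to `digits` significant digits, has stabilised.
    A naive sketch: real systems bound the error analytically."""
    prev = None
    for prec in (digits + 5, digits + 10, digits + 20, digits + 40):
        getcontext().prec = prec
        cur = compute()              # computed at the higher precision
        getcontext().prec = digits
        rounded = +cur               # unary + rounds to current context
        if rounded == prev:          # two precisions agree: accept
            return rounded
        prev = rounded
    return prev

# e.g. 1/7 to 20 significant digits, without the caller choosing
# any intermediate working precision:
r = eval_to(20, lambda: Decimal(1) / Decimal(7))
assert str(r) == "0.14285714285714285714"
```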

It's very rare that you need an exact answer to a numerical problem - it's almost always the case that you need the result to some given accuracy. It's also the case that operations are most efficient if performed by dedicated hardware. Taken together that means that there is pressure on hardware to provide implementations that have sufficient accuracy for most common problems.

So economic pressure has created an efficient (ie hardware based) solution for the common cases.

This paper by Dirk Laurie presents a cautionary tale on the use of variable precision.

Although not directly related to your question, you might also want to look at this paper by L. N. Trefethen.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow