Question

Many modern languages support arbitrary-precision numbers: Java has BigInteger, Haskell has Integer, and Python's integers are arbitrary precision by default. But in many of these languages arbitrary precision isn't the default; it lives in a separate type prefixed with 'Big'.

Why isn't arbitrary precision the default in all modern languages? Is there a particular reason to use fixed-precision numbers over arbitrary-precision ones?

If I had to guess, it would be because fixed precision maps directly onto machine instructions, so it's faster, which would be a worthwhile trade-off if you knew the numeric ranges beforehand and didn't have to worry about overflow. Are there any particular use cases where fixed precision wins over arbitrary precision?
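As a rough illustration of that guess, here is a crude Java sketch (not a rigorous benchmark; JIT warm-up and allocation behaviour are ignored, and the class name is just for illustration) comparing a native long sum with the same sum done through BigInteger:

```java
import java.math.BigInteger;

public class FixedVsArbitrary {
    public static void main(String[] args) {
        final int n = 10_000_000;

        // Fixed precision: essentially one machine add per iteration.
        long t0 = System.nanoTime();
        long fixedSum = 0;
        for (int i = 0; i < n; i++) {
            fixedSum += i;
        }
        long t1 = System.nanoTime();

        // Arbitrary precision: each addition allocates a new BigInteger.
        BigInteger bigSum = BigInteger.ZERO;
        for (int i = 0; i < n; i++) {
            bigSum = bigSum.add(BigInteger.valueOf(i));
        }
        long t2 = System.nanoTime();

        System.out.printf("long:       %d in %d ms%n", fixedSum, (t1 - t0) / 1_000_000);
        System.out.printf("BigInteger: %s in %d ms%n", bigSum, (t2 - t1) / 1_000_000);
    }
}
```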


Solution

It is a trade-off between performance and features/safety. I cannot think of any reason to prefer overflowing integers other than performance. Also, I could easily emulate overflowing semantics with non-overflowing types if I ever needed to.
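As a minimal sketch of that last point, here is one way to emulate Java's wrapping 32-bit addition on top of the non-overflowing BigInteger type (the class and method names are hypothetical, just for illustration):

```java
import java.math.BigInteger;

// Emulate wrapping ("overflowing") 32-bit addition using exact arithmetic:
// compute the exact sum, reduce modulo 2^32, and reinterpret the low bits.
public class WrapEmulation {
    private static final BigInteger TWO_32 = BigInteger.ONE.shiftLeft(32);

    static int wrappingAdd(int a, int b) {
        BigInteger exact = BigInteger.valueOf(a).add(BigInteger.valueOf(b));
        return exact.mod(TWO_32).intValue(); // intValue() keeps the low 32 bits
    }

    public static void main(String[] args) {
        System.out.println(wrappingAdd(Integer.MAX_VALUE, 1)); // -2147483648
        System.out.println(Integer.MAX_VALUE + 1);             // same wrapped result
    }
}
```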

Also, overflowing a signed int is a very rare occurrence in practice; it almost never happens. I wish modern CPUs supported raising an exception on overflow without a performance cost.
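In the meantime, such checks can be done in software. For example, Java's Math.addExact throws an ArithmeticException on overflow instead of silently wrapping:

```java
public class CheckedAdd {
    public static void main(String[] args) {
        System.out.println(Math.addExact(1, 2)); // 3
        try {
            Math.addExact(Integer.MAX_VALUE, 1); // would wrap silently with plain '+'
        } catch (ArithmeticException e) {
            System.out.println("overflow detected: " + e.getMessage());
        }
    }
}
```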

Different languages emphasize different features (where performance is a feature as well). That's good.

OTHER TIPS

Fixed precision is likely to be faster than arbitrary precision; however, don't confuse fixed precision with the machine's native precision. In some cases you can use an extended (but still fixed) precision. I have often used the excellent qd library by D.H. Bailey (see http://crd-legacy.lbl.gov/~dhbailey/mpdist/), which is easily installed on a Linux system, for instance. It provides two fixed-precision types with greater precision than native double precision, yet they are far faster than the better-known arbitrary-precision libraries.
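qd itself is a C++ library, so to keep the examples in one language, here is a minimal Java sketch of the underlying idea: double-double arithmetic built on Knuth's error-free two-sum. This only illustrates "extended but fixed" precision; it is not qd's actual API, and the class is simplified to addition.

```java
// Minimal double-double sketch (the idea behind libraries like qd): a value
// is stored as an unevaluated sum hi + lo of two doubles, giving roughly
// twice the precision of a plain double at a fixed, predictable cost.
public final class DoubleDouble {
    final double hi, lo;

    DoubleDouble(double hi, double lo) { this.hi = hi; this.lo = lo; }

    static DoubleDouble of(double x) { return new DoubleDouble(x, 0.0); }

    // Knuth's two-sum: returns s and err such that a + b == s + err exactly.
    static DoubleDouble twoSum(double a, double b) {
        double s = a + b;
        double bb = s - a;
        double err = (a - (s - bb)) + (b - bb);
        return new DoubleDouble(s, err);
    }

    DoubleDouble add(DoubleDouble other) {
        DoubleDouble s = twoSum(this.hi, other.hi);
        // Fold in the low parts, then renormalize so hi carries the leading
        // bits and lo the residue.
        return twoSum(s.hi, s.lo + this.lo + other.lo);
    }

    public static void main(String[] args) {
        // In plain double arithmetic the small term is lost entirely.
        System.out.println(1.0 + 1e-17);             // 1.0
        // The double-double sum keeps it in the low component.
        DoubleDouble sum = of(1.0).add(of(1e-17));
        System.out.println(sum.hi + " + " + sum.lo); // 1.0 + 1.0E-17
    }
}
```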

Licensed under: CC-BY-SA with attribution