Question

This question asks for programming languages which accept numeric constants for assignment to arbitrary-precision variables, or for use in arbitrary-precision expressions, without first converting them to an IEEE floating-point representation. For example, consider the following pseudo-language assignment:

BigNum x = 0.1;

Many languages provide, or have access to, libraries which can construct such BigNum objects from a text string. I am looking for programming languages which can convert a numeric token like 0.1 directly into a BigNum, without requiring the programmer to write a string which must then be parsed at runtime and may throw an exception or flag an error there. Instead, I am interested in languages where the compiler or tokenizer reports syntax errors for incorrectly formatted numbers or invalid expressions, and then processes the numeric literal into an arbitrary-precision decimal or integer-ratio representation.
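To make the distinction concrete, here is a minimal sketch in Haskell (a language discussed further below): the literal route is handled entirely by the compiler, while the string route defers parsing -- and any failure -- to run time.

    -- Literal route: the compiler itself converts the token 0.1; a malformed
    -- token such as 0..1 is rejected as a syntax error before the program runs.
    fromLiteral :: Rational
    fromLiteral = 0.1                 -- exactly one tenth, no IEEE float involved

    -- String route: the text is parsed at run time via the Read instance,
    -- and a malformed string only fails once the program is executing.
    fromString :: Rational
    fromString = read "1 % 10"

    main :: IO ()
    main = print (fromLiteral == fromString)   -- True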

From Literals/Floating point on the Rosetta Code web site, it looks like J, Maple, and Maxima provide a literal syntax for arbitrary precision floating point numbers. Do any other more widely used languages provide the same or something similar to the pseudo-example I provided above?

As a concrete example, Julia provides built-in support for rational numbers. These numbers have a literal representation which can be used in source code. For example:

x = 1//10

Now 1//10 and 0.1 are the same number mathematically -- in base 10. However, most programming languages will convert a literal decimal number in source code into an IEEE floating-point number. Often that is exactly what is wanted. However, more than a few people unfamiliar with IEEE floating-point representations -- or the similar floating-point representations which have largely faded into history -- are surprised to learn that one-tenth isn't exactly one-tenth once it is converted into a binary fraction. Moreover, this surprise usually arises after code which works "most of the time" produces a surprising result when floating-point "errors" accumulate rather than average or cancel out. Of course, that is the nature of floating-point representations and arithmetic operations, which are, just the same, very useful in practice. Caveat emptor: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
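The surprise is easy to reproduce. The snippet below uses Haskell's Double, but any language whose decimal literals become IEEE doubles behaves the same way:

    main :: IO ()
    main = do
      -- 0.1, 0.2 and 0.3 are each rounded to the nearest binary fraction,
      -- so the mathematical identity fails:
      print (0.1 + 0.2 == (0.3 :: Double))         -- False
      -- and small rounding errors accumulate instead of cancelling out:
      print (sum (replicate 10 (0.1 :: Double)))   -- 0.9999999999999999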

Still, I find there are times when integers are insufficient and floating-point numbers introduce unnecessary issues in otherwise exact calculations. For that, rational number and arbitrary precision libraries fit the bill. Great. However, I would still like to know if there are any languages which support direct representation of rational and arbitrary precision literals in the language itself. After all, I do not want to use a language which only has string literals which must then be parsed into numbers at run-time.

So far, Julia is a good answer for rational numbers, but far from the only language with support for rational number literals. However, it does not have arbitrary precision literals. For that, J, Maple, and Maxima seem to have what I am seeking. Perhaps that is very nearly the complete list. Still, if anyone knows of another candidate or two, I would appreciate a pointer...

The Answer So Far...

The best answer to date is Haskell. It provides a rich set of numeric types and operations, as well as a numeric literal notation which includes rational number expressions and which appears to treat decimal numbers with a fractional part as rational numbers rather than floating-point literals in all cases. At least, that is what I gather from a quick reading of the Haskell documentation and a blog post I came across, Overloading Haskell numbers, part 3, Fixed Precision, in which the author states:

...notice that what looks like a floating point literal is actually a rational number; one of the very clever decisions in the original Haskell design.
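That design is easy to see in a few lines. A fractional literal such as 0.1 stands for fromRational applied to the exact rational 1 % 10, so the exact value reaches whatever type the context asks for; here is a minimal sketch using the standard Data.Ratio module:

    import Data.Ratio ((%))

    asRational :: Rational
    asRational = 0.1                 -- stays exactly 1 % 10

    asDouble :: Double
    asDouble = 0.1                   -- only here is it rounded to an IEEE double

    main :: IO ()
    main = do
      print asRational               -- 1 % 10
      print (asRational == 1 % 10)   -- True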

For many programmers, Julia will be more approachable while offering excellent support for a variety of mathematical types and operations, as well as usually excellent performance. However, Python also has a very capable syntax, many natively compiled packages which match or exceed those available to Julia today, and unquestionably enjoys far greater adoption in commercial, open-source, and academic projects -- still, my personal preference is for Julia if I have a choice.

For myself, I will be spending more time researching Haskell and revisiting OCaml/F#, which may be viable intermediate choices between Julia/Python-like languages and a language like Haskell -- how these programming languages fall across some sort of spectrum is left as an exercise for the reader. If OCaml/F# offer comparable expressive power to Haskell in the cases in which I have an interest, they may be a better choice just on the basis of current and likely future adoption rates. But for now, Haskell seems to be the best answer to my original question.


The Solution

Haskell has Rationals backed by arbitrary precision Integers, and it has overloaded numeric literals, so that most often your literals have exactly the type you want.
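For instance, a minimal sketch of both points: the overloaded literal lands at whatever numeric type is required, and the numerator and denominator of a Rational are arbitrary-precision Integers, so arithmetic stays exact.

    import Data.Ratio (denominator, numerator)

    x :: Rational
    x = 0.1                          -- the overloaded literal lands at type Rational

    main :: IO ()
    main = do
      print (x * 10 == 1)                      -- True: arithmetic is exact
      print (numerator x, denominator x)       -- (1,10)
      print (denominator (x ^ (100 :: Int)))   -- 10^100, a 101-digit Integer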

Other Tips

In Smalltalk, VisualWorks extended the Smalltalk-80 syntax to introduce 1.23s, which is an instance of FixedPoint (not that good a name...), internally represented as the rational Fraction 123/100.

The number behaves exactly like a Fraction, which can have an arbitrarily long numerator and denominator, and performs exact arithmetic, except that it has some specific rules for printing: it rounds to a fixed number of digits after the decimal separator. 1.23s4 rounds to 4 decimals -> '1.2300', while 1.23s rounds to the number of fraction digits provided in the literal, here 2.

After VisualWorks, most Smalltalk dialects added an equivalent class and the same literal syntax. In Squeak, the class is named ScaledDecimal, but only the 1.23s2 syntax has been implemented in the compiler. Pharo is also going to accept 1.23s in release 3.0.
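For comparison, Haskell's standard Data.Fixed module offers a partial analogue: a Fixed value is backed by an arbitrary-precision Integer and prints with a fixed number of decimals, although, unlike ScaledDecimal, its arithmetic truncates to the declared resolution rather than staying exact.

    import Data.Fixed (Centi)

    price :: Centi          -- Fixed E2: an Integer count of hundredths
    price = 1.23            -- the literal becomes exactly 123 hundredths

    main :: IO ()
    main = do
      print price           -- 1.23
      print (price + 0.01)  -- 1.24, still exact at two decimals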

In Scheme, there are two kinds of numbers, exact and inexact, a distinction which is orthogonal to numeric types (integer, rational, real, complex). Operations on inexact numbers produce inexact results, and inexact numbers are typically implemented as IEEE floats. For convenience, a numeric literal with a decimal point or exponent is assumed to be inexact, and one without is assumed to be exact, so 1 and 1/10 and 1+2i are exact, whereas 1.0 and 0.1 and 1.0+2.0i are inexact. But you can override this by preceding the literal with #e to make it exact or #i to make it inexact, so #e3.0 is the same as 3, #e0.1 is the same as 1/10, and #i1/10 is the same as 0.1.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow