Question

I am a graduate student in mathematics working on a research project that involves calculating the norms of algebraic integers in very high degree number fields. Essentially, that involves multiplying 100 or more floating-point numbers together to get an integer. I am programming in C.

The problem I am encountering is that I am beginning to need to calculate norms which exceed 1e20 = 10^20, which is beyond both the exact-integer range and the floating-point precision of my hardware (a MacBook).

I could attempt to use some kind of software emulation to do extended precision, but my understanding is this is about 300x slower than hardware floating point calculation. This would mean that the calculations I do that currently take several days would instead take several years, which isn't acceptable.

I have two questions. Q1: Is my estimate of a 300x slowdown for software emulation of extended precision (say to 10^36, about twice the precision of hardware) unduly pessimistic?

Q2: If I had many thousands of dollars to throw at the problem, what kind of hardware solutions might be available?


Solution

With the information you have given, I do not see any solution other than MPFR. Yes, hardware floating point gives a result sooner, but it is the wrong result.

You need to pick a precision at which to do the computations. Your question makes it sound as if you are only concerned with the precision of the final result, but in fact accuracy is lost at each operation, and the cumulative effect of these losses must stay below 0.5 in absolute terms if you want to be able to decide which integer the eventual answer is. You may need to pick a precision higher than what would suffice merely to represent the final result.

Two approaches to estimating the required precision are before-the-fact numerical analysis and interval arithmetic. Without going deeply into the numerical analysis, the basics are these: each irrational factor is represented by a multi-precision floating-point number that we can assume is less than 0.5 ULP away from the real value, and each multiplication may be up to 0.5 ULP away from the real product (of the actual floating-point operands, not of the values the operands are supposed to represent). People find it simpler to reason in terms of relative precision.

Interval arithmetic doubles the number of operations but provides guaranteed bounds for the real result. If there is only one integer in the computed interval, well done! Otherwise, start again with higher precision.

Finally, you ask about speed. If you really have only multiplications, it is trivial to parallelize the computation, because multiplication is associative.
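A sketch of what that looks like: split the factors into independent chunks, form partial products, and multiply the partials at the end. `chunked_product` is a hypothetical serial stand-in; each `partial` could be computed by its own thread or machine, with only the cheap combine step sequential:

```c
/* Because multiplication is associative, the factor array can be split
   into nchunks independent slices whose partial products are combined
   at the end. */
double chunked_product(const double *f, int n, int nchunks) {
    double total = 1.0;
    for (int c = 0; c < nchunks; c++) {
        int start = c * n / nchunks;       /* slice boundaries */
        int end = (c + 1) * n / nchunks;
        double partial = 1.0;              /* independent work item */
        for (int i = start; i < end; i++)
            partial *= f[i];
        total *= partial;                  /* cheap combine step */
    }
    return total;
}
```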

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow