The title is self-explanatory. What is going on here? How can I stop this from happening? Do I really have to change all of my units (it's a physics problem) just to get a difference big enough that Python doesn't round 1 - x to 1?

code:

import numpy as np
import math

vel=np.array([5e-30,5e-30,5e-30])

c=9.7156e-12

def mag(V):
    # Euclidean magnitude of a 3-vector
    return math.sqrt(V[0]**2+V[1]**2+V[2]**2)

# Lorentz factor: gamma = (1 - (|v|/c)**2)**(-1/2)
gam=(1-(mag(vel)/c)**2)**(-1/2)

print(mag(vel))
print(mag(vel)**2)
print(mag(vel)**2/(c**2))
print(1-mag(vel)**2/(c**2))
print(gam)

output:

>>> (executing lines 1 to 17 of "<tmp 1>")
8.660254037844386e-30
7.499999999999998e-59
7.945514251743055e-37
1.0
1.0
>>> 

Solution

In Python the decimal module may work, and maybe mpmath, as is discussed in this SO article.
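For instance, here is a minimal sketch using mpmath (assuming it is installed, e.g. via pip install mpmath) that redoes the questioner's calculation with 60 decimal digits of working precision, so that 1 - (v/c)**2 no longer rounds to 1:

import mpmath as mp

mp.mp.dps = 60                      # 60 decimal digits of working precision

v = mp.sqrt(3) * mp.mpf('5e-30')    # |vel| for vel = (5e-30, 5e-30, 5e-30)
c = mp.mpf('9.7156e-12')            # the questioner's value of c, in their units

beta2 = (v / c) ** 2
gam = (1 - beta2) ** mp.mpf('-0.5')

print(beta2)        # ~7.9455e-37, no longer swallowed by the 1
print(gam)          # 1.000000000000000000000000000000000000397...
print(gam - 1)      # ~3.9728e-37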

If you are willing to use Java (instead of Python), you might be able to use BigDecimal, Apfloat, or JScience.

8.66e-30 only uses 3 significant figures, but representing 1 minus that value would require more than 30. Beyond about 16 significant figures you will need to represent the digits using something else, like very long strings. But it's difficult to do math with long strings. You could also perform binary computations on very long arrays of byte values. The byte values could be made to represent a very large integer value modified by a scale factor of your choice. So if you can support an integer larger than 1E60, then you can alternately scale the value so that you can represent 1E-60 with a maximum value of 1. You can probably do that with about 200 bits, or 25 bytes, and with 400 bits you should be able to precisely represent the entire range from 1E60 down to 1E-60. There may already be utilities out there that perform calculations of this type, used by people who work in math or security, since they may want to represent pi to a thousand places, for instance, which you can't do with a double.
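Python's built-in int type is already an unbounded integer, so a rough sketch of that scaled-integer idea (the scale factor and helper names here are made up for illustration) could look like this:

from decimal import Decimal

SCALE = 10 ** 60                 # every value is stored as round(value * SCALE)

def to_fixed(x):
    # parse a decimal string into the scaled-integer representation
    return int(Decimal(x) * SCALE)

def fmul(a, b):
    # products of scaled integers have to be rescaled back down
    return a * b // SCALE

one  = to_fixed('1')
tiny = to_fixed('1e-37')         # a value a double would swallow next to 1

print(one - tiny)                # exactly 10**60 - 10**23, every digit kept
print(fmul(tiny, tiny))          # (1e-37)**2 = 1e-74 is below the 1e-60
                                 # resolution of this scale, so it comes out 0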

The other useful trick is to use scale factors. That is, in your original coordinate space you cannot do the subtraction, because the digits are not able to represent the values. But if you assume that while you are making small adjustments you do not simultaneously care about large adjustments, then you can perform a transform on the data. For instance, you subtract 1 from your numbers; then you can represent 1 - 1E-60 as -1E-60. You can do as many operations as you like, very precisely, in your transformed space, knowing full well that if you convert the results back from the transformed space the small adjustments will be lost as irrelevant. This sort of tactic is useful when zooming in on a map: making adjustments on the scale of micrometers in units of latitude and longitude for your single-precision floating-point DirectX calculations won't work, but you can temporarily change your scale while you are zoomed in so that the operations behave normally.
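In the questioner's case the transform is simply "subtract 1": gamma itself rounds to 1.0, but gamma - 1 is perfectly representable as a double. Using the standard low-velocity expansion (1-x)**(-1/2) ≈ 1 + x/2 + 3x²/8, a sketch with the original numbers might be:

import numpy as np

vel = np.array([5e-30, 5e-30, 5e-30])
c = 9.7156e-12

beta2 = np.dot(vel, vel) / c**2            # (|v|/c)**2, comfortably inside double range
gamma_minus_1 = 0.5*beta2 + 0.375*beta2**2 # Taylor series of (1-x)**(-1/2), minus the leading 1

print(beta2)            # ~7.9455e-37
print(gamma_minus_1)    # ~3.9728e-37 -- the information that the printed "1.0" could not hold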

Complicated values can then be represented by a big number plus a second number that carries the small-scale adjustment. For instance, if you have 16 digits in a double, you can use the first number to represent the large portion of the value, say from 1 to 1E16, and a second double to represent the additional small portion. Except that using all 16 digits might be flirting with errors in the double's ability to represent the big value accurately, so you might use only 15 or 14 or so just to be safe.

1234567890.1234567890

becomes

1.234567890E9 + 1.23456789E-1.

Basically, the more precision you need, the more terms your composite number gets. While this sort of thing works pretty well when each term is more or less mathematically independent, in cases where you have to do lots of rigorous calculations that operate across the scales, the book-keeping between these values would likely be more of a pain than it is worth.
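As a toy version of that split in Python (the splitting rule and names are arbitrary, and real double-double libraries renormalize the two parts much more carefully):

from decimal import Decimal

def split(x):
    # integer part in one double, fractional part in a second double
    d = Decimal(x)
    return float(int(d)), float(d - int(d))

a = split('1234567890.1234567890')
b = split('0.0000000001')

print(a)                          # (1234567890.0, 0.123456789)
total = (a[0] + b[0], a[1] + b[1])
print(total)                      # (1234567890.0, roughly 0.1234567891) -- the tiny
                                  # addition survives instead of being absorbed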

Other tips

I think you won't get the result you are expecting because you are dealing with the limits of computer arithmetic. The thing about this kind of calculation is that nobody can avoid the error, unless you make or find a model that has (theoretically) infinite decimals and can operate with them. If that is too much for the problem you are trying to solve, maybe you just have to be careful and do whatever you need while trying to handle these errors in the calculations.

There is a lot of literature out there with many different approaches for handling errors in calculations; they help not to avoid these errors but to minimize them.

Hope my answer helps and doesn't disappoint you.
