I have the following python code:

In [1]: import decimal

In [2]: decimal.getcontext().prec = 80

In [3]: (1-decimal.Decimal('0.002'))**5
Out[3]: Decimal('0.990039920079968')

Shouldn't it match 0.99003992007996799440405766290496103465557098388671875 according to this http://www.wolframalpha.com/input/?i=SetPrecision%5B%281+-+0.002%29%5E5%2C+80%5D ?


Solution 2

Wolfram Alpha is actually wrong here.

(1 - 0.002) ** 5

is exactly 0.990039920079968.

You can verify that by simply observing that there are 15 digits after the decimal point, which matches 5 * 3, 3 being the number of digits after the decimal point in 0.998 (the value of 1 - 0.002). By definition there cannot be any nonzero digit after the 15th.
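This is easy to check directly in Python (a quick sketch using the standard `decimal` module):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80
result = (1 - Decimal('0.002')) ** 5
print(result)  # 0.990039920079968

# 0.998 has 3 digits after the point, so 0.998**5 has at most 5 * 3 = 15
print(len(str(result).partition('.')[2]))  # 15
```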

Edit

A little more digging got me something interesting:

The notation Decimal('0.002') creates a Decimal with that exact value. With Decimal(0.002) the Decimal is built from a float rather than a string, which introduces an imprecision. Using that notation in the original formula:

(1-decimal.Decimal(0.002))**5

Returns Decimal('0.99003992007996799979349352807411754897106595345737537649055432859002826694496107'), which is indeed 80 digits long after the decimal point, but different from the Wolfram Alpha value.

This is probably caused by a difference in floating-point representation between Python and Wolfram Alpha, and is a further indication that Wolfram Alpha is using floats when SetPrecision is used.
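The difference between the two constructors is easy to see on its own (a minimal sketch; the exact digits printed depend on the IEEE 754 double representation of the literal 0.002):

```python
from decimal import Decimal

# From a string: exactly the decimal value 0.002
print(Decimal('0.002'))

# From a float: the exact binary value that the double 0.002 actually
# stores, which is slightly above 0.002
print(Decimal(0.002))
```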

Note: directly asking for the result returns the correct value (see http://www.wolframalpha.com/input/?i=%281+-+0.002%29%5E5).
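The exact value can also be cross-checked independently with rational arithmetic (a sketch using Python's standard `fractions` module):

```python
from fractions import Fraction

# (1 - 2/1000)**5 computed with exact rational arithmetic
exact = (1 - Fraction(2, 1000)) ** 5
print(exact)  # 30938747502499/31250000000000

# The rational result equals the terminating decimal exactly
print(exact == Fraction('0.990039920079968'))  # True
```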

Other tips

Here's what's happening: because it looks like syntax from the Mathematica programming language, WolframAlpha is interpreting the input SetPrecision[(1 - 0.002)^5, 80] as Mathematica source code, which it proceeds to evaluate. In Mathematica, as others have surmised in other answers, 0.002 is a machine precision floating point literal value. Roundoff error ensues. Finally, the resulting machine precision value is cast by SetPrecision to the nearest 80-precision value.

To get around this, you have a couple of options.

  1. You could try to make WolframAlpha not think you are entering code from the Mathematica programming language, so that it will do its own magic. As njzk2 mentioned, entering (1 - 0.002)^5 will do this.
  2. In Mathematica code that you ask WolframAlpha to evaluate, you could enter an infinite-precision literal instead of the machine precision literal 0.002. There are several ways, but here is one: SetPrecision[(1 - 2*^-3)^5, 80].

Finally, I want to point out that in Mathematica, and by extension in a WolframAlpha query consisting of Mathematica code, you usually want N (documentation) rather than SetPrecision. They are often similar (identical in this case), but there is a subtle difference:

  • SetPrecision[..., n] first sets all enclosed numbers to precision n, then evaluates everything (roundoff error will ensue)
  • N[..., n] essentially repeatedly tries SetPrecision at higher and higher precision until the final roundoff error is almost certainly less than n.

N works slightly harder but gets you the right number of correct digits (assuming the input is sufficiently precise).
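There is no direct Python counterpart to N, but its retry strategy can be imitated with the `decimal` module. The helper below, `n_like`, is a rough, hypothetical sketch of that idea, not how Mathematica actually implements N:

```python
from decimal import Decimal, getcontext

def n_like(compute, digits, step=10, max_extra=100):
    """Re-evaluate at increasing working precision until the first
    `digits` significant digits stop changing (a rough analogue of N)."""
    prev = None
    for extra in range(0, max_extra + 1, step):
        getcontext().prec = digits + extra
        raw = compute()
        getcontext().prec = digits
        cur = +raw  # unary + rounds to the current (target) precision
        if cur == prev:
            return cur
        prev = cur
    return prev

val = n_like(lambda: (1 - Decimal('0.002')) ** 5, 80)
print(val)  # 0.990039920079968
```

Here the input is exact, so the very first retry already stabilizes; the extra precision only matters when the enclosed computation itself loses digits.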

So my final suggestion for using WolframAlpha to do this calculation via Mathematica Code is N[(1 - 2*^-3)^5, 80].

Wolfram Alpha is wrong: raise it to the power of one and you get 0.9979999999999999982236431605997495353221893310546875 instead of 0.998. They are likely using floating-point numbers.
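The same power-of-one experiment can be reproduced in Python by deliberately building the Decimal from a float (a sketch):

```python
from decimal import Decimal, getcontext

getcontext().prec = 80

# String literal: exact, so the answer is exactly 0.998
print(1 - Decimal('0.002'))  # 0.998

# Float literal: the binary roundoff of the double 0.002 leaks through
print(1 - Decimal(0.002))
```

The second printed value is close to, but not identical with, the Wolfram Alpha figure quoted above, because here only the literal 0.002 is a double while the subtraction itself is done in decimal.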

Following Andrew's answer, this is a result of the precision of the entered literal being taken as machine precision before the SetPrecision directive gets to it.

Another fix, which is nice in that it retains your basic input notation, is to specify the precision of the literal directly with the backtick notation:

SetPrecision[(1-.002`80)^5, 80]

Produces the desired result.

For anyone who still doesn't follow, you could also key in all the zeros:

 SetPrecision[(1-.0020000000000000000000000...0000)^5, 80]

These work in both Wolfram Alpha and Mathematica.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow