Question

Python is supposed to have "arbitrary precision integers," according to the answer in Python integer ranges. But this result is plainly not arbitrary precision:

$ python -c 'print("%d" % (999999999999999999999999/3))'
333333333333333327740928

According to PEP 237, bignum is arbitrarily large (not just the size of C's long type). And Wikipedia says Python's bignum is arbitrary precision.

So why the incorrect result from the above line of code?


Solution

In Python 3, dividing two ints with / always produces a float (true division). The // operator performs integer (floor) division:

 >>> 999999999999999999999999/3
 3.333333333333333e+23
 >>> 999999999999999999999999//3
 333333333333333333333333

 >>> type(999999999999999999999999/3)
 <class 'float'>
 >>> type(999999999999999999999999//3)
 <class 'int'>

This does give the correct arbitrary-precision output:

 python -c 'print("%d" % (999999999999999999999999//3))' 
 333333333333333333333333

How to write code compatible with both Python 2.2+ and 3.x

This is simple; just add:

 >>> from __future__ import division 

This enables Python 3 style (true) division in Python 2.2+ code, so / returns a float and // is used for integer division:

>>> from sys import version 
>>> version
'2.7.6 (default, Dec 30 2013, 14:37:40) \n[GCC 4.8.2]'
>>> from __future__ import division 
>>> type(999999999999999999999999//3)
<type 'long'>
>>> type(999999999999999999999999/3)
<type 'float'>