Question

Python provides a "bignum" type called "long" which can represent arbitrarily large numbers. What is the internal representation of this type?

I ask in part because I am curious what operations might be particularly fast or slow on these numbers. For example, is bit shifting particularly fast compared to multiplication or division (as it is for "regular" ints)?

Solution

CPython's arbitrary-precision integers are stored as an array of "digits", where each digit holds either 15 or 30 bits of the value (i.e. the number is kept in base 2**15 or 2**30, chosen at build time). Addition, subtraction, and bit shifts are all O(n). Multiplication of large enough operands uses Karatsuba's algorithm, which is O(n**1.585). Division still uses the classical O(n**2) schoolbook algorithm.
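
If you want to see that layout from the interpreter, here is a minimal sketch, assuming CPython 3, where the digit layout is exposed as sys.int_info (on Python 2.7 the equivalent attribute is sys.long_info):

import sys

# Report how many bits each internal digit holds and how many bytes it occupies.
print(sys.int_info)   # e.g. bits_per_digit=30, sizeof_digit=4

# The object size grows by one digit (sizeof_digit bytes) for roughly every
# bits_per_digit bits of magnitude.
for bits in (0, 30, 60, 90, 120):
    print(bits, sys.getsizeof(1 << bits))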

OTHER TIPS

Well, I wrote this. I'm not sure how good this is, but you can use it as a starting point for a more refined and complete benchmark.

import timeit

def timef(f, *args):
    # Time one million calls of f(*args).
    return timeit.timeit(lambda: f(*args), number=1000000)


tests = {
    'addition'      : lambda x: x + 17,
    'subtraction'   : lambda x: x - 17,
    'multiplication': lambda x: x * 17,
    'division'      : lambda x: x / 17,
    'float division': lambda x: x / 17.0
}


# Run each operation once on a small int and once on a 50-digit long (Python 2).
for name, f in tests.items():
    print 'int {0}'.format(name).ljust(20), timef(f, 0)
    print 'long {0}'.format(name).ljust(20), timef(f, 10 ** 50)
    print

What I'm seeing is that plain int operations are indeed faster than long operations, but not dramatically so.
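
To get at the asymptotic question about shifting versus multiplication and division, the same idea can be repeated with operands of growing bit length. Here is a rough sketch (Python 3 syntax; the operand sizes and constants are arbitrary choices, not anything special):

import timeit

# Compare a shift, a full multiply, and a divide on operands of increasing
# size. Shifts and additions should scale roughly linearly with the number
# of digits, while multiplication and division grow faster for huge values.
for bits in (1000, 10000, 100000):
    x = (1 << bits) - 1
    for name, stmt in (('shift', 'x << 8'),
                       ('mul', 'x * x'),
                       ('div', 'x // 12345')):
        t = timeit.timeit(stmt, globals={'x': x}, number=1000)
        print('%7d bits  %-5s  %.4fs' % (bits, name, t))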

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow