Question

I'm in need of an algorithm faster than the current normal Python long multiplication.

I tried to find a decent Karatsuba implementation, but I couldn't find one.

def main():
    # Read a (possibly very large) integer and print its triangular number.
    a = long(raw_input())
    if a < 0:
        a = a * -1
        a = (a * (a + 1) / 2) - 1  # a*(a+1) is always even, so / is exact here
        print(-a)
    else:
        a = a * (a + 1) / 2
        print(a)

main()

As you can see, it's nothing complicated, just a few multiplications. But it has to handle numbers with up to 100000 digits in under 2.5 sec.

I'd appreciate a snippet of a faster multiplication function, a link to an implementation, or anything else that helps.


Solution

You could have a look at the implementation of the DecInt module: a pure-Python version (using Toom-Cook) is available, although it will probably be fastest when used together with gmpy.

OTHER TIPS

I'm the author of the DecInt (Decimal Integer) library so I'll make a few comments.

The DecInt library was specifically designed to work with very large integers that need to be converted to decimal format. The problem is that most arbitrary-precision libraries store values in binary. That is the fastest and most memory-efficient choice, but converting from binary to decimal is usually slow. Python's binary-to-decimal conversion uses an O(n^2) algorithm and gets slow very quickly.

DecInt uses a large decimal radix (usually 10^250) and stores the very large number in blocks of 250 digits. Converting a very large number to decimal format now runs in O(n).
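To illustrate the idea, here is a sketch of a base-10^250 block representation. This is only illustrative, not DecInt's actual internals, and note that the int-to-blocks split shown below is itself not O(n); the point is that once the number is stored in decimal blocks, producing the decimal string is linear in the number of digits:

```python
def to_blocks(n, block_digits=250):
    """Split a non-negative integer into base 10**block_digits blocks,
    least-significant block first. (Sketch of the radix idea only;
    DecInt's real internal layout may differ.)"""
    radix = 10 ** block_digits
    blocks = []
    while n:
        n, rem = divmod(n, radix)
        blocks.append(rem)
    return blocks or [0]

def blocks_to_str(blocks, block_digits=250):
    """Rebuild the decimal string: each block maps directly to a fixed
    run of digits, so this step is linear in the number of digits."""
    head = str(blocks[-1])  # top block is printed without zero padding
    tail = [str(b).zfill(block_digits) for b in reversed(blocks[:-1])]
    return head + "".join(tail)
```
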

Naive, or grade-school, multiplication has a running time of O(n^2). Python uses Karatsuba multiplication, which has a running time of O(n^1.585). DecInt uses a combination of Karatsuba, Toom-Cook, and Nussbaumer convolution to get a running time of O(n*ln(n)).
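Since the question asked for a Karatsuba implementation, here is a minimal pure-Python sketch of the algorithm. The `cutoff` threshold is an arbitrary illustrative choice, and a tuned library version would look quite different (in CPython the builtin `*` already uses Karatsuba, so this sketch is for understanding, not speed):

```python
def karatsuba(x, y, cutoff=64):
    """Multiply two non-negative integers by Karatsuba's divide-and-conquer:
    three half-size multiplications instead of four."""
    if x.bit_length() <= cutoff or y.bit_length() <= cutoff:
        return x * y  # small operands: fall back to the builtin multiply
    half = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask   # x == x_hi * 2**half + x_lo
    y_hi, y_lo = y >> half, y & mask
    a = karatsuba(x_hi, y_hi, cutoff)                 # high * high
    b = karatsuba(x_lo, y_lo, cutoff)                 # low * low
    c = karatsuba(x_hi + x_lo, y_hi + y_lo, cutoff)   # cross terms
    # x*y == a*2**(2*half) + (c - a - b)*2**half + b
    return (a << (2 * half)) + ((c - a - b) << half) + b
```
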

Even though DecInt has much higher overhead, the combination of O(n*ln(n)) multiplication and O(n) conversion will eventually be faster than Python's O(n^1.585) multiplication and O(n^2) conversion.

Since most computations don't require every result to be displayed in decimal format, almost every arbitrary-precision library uses binary since that makes the computations easier. DecInt targets a very small niche. For large enough numbers, DecInt will be faster for multiplication and division than native Python. But if you are after pure performance, a library like GMPY will be the fastest.

I'm glad you found DecInt helpful.

15.9 ms on my slow notebook. It is the print that is slowing you down: converting binary numbers to decimal is quite slow, and it is a required step in printing the result. If you need to output the number, try the DecInt library ChristopheD already mentioned.

DecInt will be slower at the multiply but will make the print much faster.

In [34]: a=2**333000

In [35]: len(str(a))
Out[35]: 100243

In [36]: b=2**333001

In [37]: len(str(b))
Out[37]: 100244

In [38]: timeit c=a*b
10 loops, best of 3: 15.9 ms per loop

Here is an example with a slightly modified version of your code. Note that converting a 100000-digit string to a long already takes ~1 sec on this computer.

In [1]: def f(a):
   ...:     if(a<0):
   ...:         a=a*-1
   ...:         a=((a*(a+1)/2)-1)
   ...:     else:
   ...:         a=(a*(a+1))/2
   ...:     return a
   ...: 

In [2]: a=3**200000

In [3]: len(str(a))
Out[3]: 95425

In [4]: timeit f(a)
10 loops, best of 3: 417 ms per loop
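For anyone on Python 3 (where `raw_input` and `long` are gone, and `/` returns a float, which loses precision or overflows on integers this large), the same function needs integer division. A minimal port:

```python
def f(a):
    # Triangular number 1 + 2 + ... + a, matching the function above;
    # // keeps everything in exact integer arithmetic on Python 3.
    if a < 0:
        a = -a
        return -((a * (a + 1) // 2) - 1)
    return a * (a + 1) // 2
```
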

I suggest you get the Sage math tool, which bundles just about every Python math tool ever made into one package. See if Sage includes a fast arbitrary-precision math tool that meets your needs.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow