Question

How can I increase the performance of Java's BigInteger?

For example, this factorial program:

import java.math.*;
class Fac {
  public static void main(String[] args) {
    BigInteger i = BigInteger.ONE;
    for(BigInteger z=BigInteger.valueOf(2);z.compareTo(BigInteger.valueOf(99999)) != 0;) {
      i = i.multiply(z);
      z = z.add(BigInteger.ONE);
    }
    System.out.println( i );
  }
}

That program completed in 31.5s

Whereas in C++:

#include <iostream>
#include <gmpxx.h>
using namespace std;
int main() {
  mpz_class r;
  r = 1;
  for(int z=2;z<99999;++z) {
    r *= mpz_class(z);
  }
  cout << r << endl;
}

completed in 1.0s

And Ruby (for comparison):

puts (2...99999).inject(:*)

completed in 4.4s (Ruby) and 32.2s in JRuby

And also Go (for comparison):

package main
import (
 "fmt"
 "math/big"
)
func main() {
  i := big.NewInt(1);
  one := big.NewInt(1)
  for z := big.NewInt(2); z.Cmp(big.NewInt(99999)) < 0;  {
      i.Mul(i,z);
      z.Add(z,one)
  }
  fmt.Println( i );
}

completed in 1.6s and 0.7s for MulRange

EDIT As requested:

import java.math.*;
class F2 {
  public static void main(String[] args) {
    BigInteger i = BigInteger.ONE, r = BigInteger.valueOf(2);
    for(int z=2; z<99999 ; ++z) {
      i = i.multiply(r);
      r = r.add(BigInteger.ONE);
    }
    System.out.println( i );
  }
}

runtime duration: 31.4 s

EDIT 2 For those who still think the first and second Java versions are an unfair comparison:

import java.math.*;
class F3 {
  public static void main(String[] args) {
    BigInteger i = BigInteger.ONE;
    for(int z=2; z<99999 ; ++z) {
      i = i.multiply(BigInteger.valueOf(z));
    }
    System.out.println( i );
  }
}

completed in 31.1s

EDIT 3 Following @OldCurmudgeon's comment:

import java.math.*;
import java.lang.reflect.*;
class F4 {
  public static void main(String[] args) {
    try {
      Constructor<?> Bignum = Class.forName("java.math.MutableBigInteger").getDeclaredConstructor(int.class);
      Bignum.setAccessible(true);
      Object i = Bignum.newInstance(1);
      Method m = i.getClass().getDeclaredMethod("mul", new Class[] { int.class, i.getClass()});
      m.setAccessible(true);
      for(int z=2; z<99999 ; ++z) {
        m.invoke(i, z, i);
      }
      System.out.println( i );
    } catch(Exception e) { System.err.println(e); } 
  }
}

completed in 23.7s

EDIT 4 As @Marco13 pointed out, the biggest cost is in the string creation, not in the BigInteger computation itself:

  • BigInteger: 3.0s
  • MutableBigInteger hack: 10.1s
  • String creation: ~20s
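That split can be checked with a plain-BigInteger sketch like the following (a sketch, not from the original post; timings vary by machine and JDK, and the class name is mine):

```java
import java.math.BigInteger;

public class SplitTiming {
    public static void main(String[] args) {
        // Phase 1: the actual factorial computation.
        long t0 = System.nanoTime();
        BigInteger r = BigInteger.ONE;
        for (int z = 2; z < 99999; ++z) {
            r = r.multiply(BigInteger.valueOf(z));
        }
        long t1 = System.nanoTime();

        // Phase 2: decimal conversion, which is the expensive part here.
        String s = r.toString();
        long t2 = System.nanoTime();

        System.out.println("multiply: " + (t1 - t0) / 1e9 + " s");
        System.out.println("toString: " + (t2 - t1) / 1e9 + " s");
        System.out.println("digits:   " + s.length());
    }
}
```

On older JDKs the toString phase dominates, which matches the numbers above.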

Solution 2

The computation itself should not take so long. The string creation may take a while, however.

This program (kudos to OldCurmudgeon and https://stackoverflow.com/a/8583188/823393 ) takes approximately 3.9 seconds on a 3 GHz Core i7 with Java 7/21, when started with -Xmx1000m -server:

import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public class FastBigInteger
{
    public static void main(String[] args)
    {
        try
        {
            Class<?> c = Class.forName("java.math.MutableBigInteger");
            Constructor<?> con = c.getDeclaredConstructor(int.class);
            con.setAccessible(true);
            Object i = con.newInstance(1);
            Method m = c.getDeclaredMethod("mul", new Class[] { int.class, c });
            m.setAccessible(true);
            long before = System.nanoTime();
            for (int z = 2; z < 99999; ++z)
            {
                m.invoke(i, z, i);
            }
            long after = System.nanoTime();
            System.out.println("Duration "+(after-before)/1e9);

            String s = i.toString();
            int n = s.length();
            int lineWidth = 200;
            for (int j=0; j<n; j+=lineWidth)
            {
                int j0 = j;
                int j1 = Math.min(s.length(), j+lineWidth);
                System.out.println(s.substring(j0, j1));
            }
        }
        catch (Exception e)
        {
            System.err.println(e);
        }
    }
}

After printing the duration of the actual computation, it takes quite a while until it finishes creating the string, but that should hardly be counted here.

This is still not a sensible benchmark, but shows that there is at least no problem with the computation itself.

But admittedly, when using only BigInteger instead of this MutableBigInteger hack, it takes approx. 15 seconds, which is rather poor compared to the C++ implementation.
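Not part of the original answer, but one common way to close much of that gap with plain BigInteger is to multiply in a balanced product tree instead of strictly left to right: the left-to-right loop always multiplies a huge number by a tiny one, while balanced operands let BigInteger's sub-quadratic multiplication (Karatsuba/Toom-Cook, available since Java 8) actually pay off. A minimal sketch (class and method names are mine):

```java
import java.math.BigInteger;

public class FactorialTree {
    // Multiply the integers in [lo, hi) with a balanced recursion, so that
    // the two operands of each multiply are of similar size.
    static BigInteger product(int lo, int hi) {
        if (hi - lo == 1) {
            return BigInteger.valueOf(lo);
        }
        int mid = lo + (hi - lo) / 2;
        return product(lo, mid).multiply(product(mid, hi));
    }

    public static void main(String[] args) {
        System.out.println(product(2, 11)); // 10! = 3628800
        // product(2, 99999) computes the same value as the benchmarks above.
    }
}
```

This is essentially the same idea as Go's big.Int MulRange, which the question measured at 0.7s.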

Other tips

Start with:

import java.math.*;
class Fac {
  public static void main(String[] args) {
    BigInteger i = BigInteger.ONE;
    BigInteger maxValue = BigInteger.valueOf(99999);

    for(BigInteger z=BigInteger.valueOf(2); z.compareTo(maxValue) != 0;) {
      i = i.multiply(z);
      z = z.add(BigInteger.ONE);
    }

    System.out.println( i );
  }
}

The valueOf source:

public static BigInteger valueOf(long val) {
    // If -MAX_CONSTANT < val < MAX_CONSTANT, return stashed constant
    if (val == 0)
        return ZERO;
    if (val > 0 && val <= MAX_CONSTANT)
        return posConst[(int) val];
    else if (val < 0 && val >= -MAX_CONSTANT)
        return negConst[(int) -val];

    return new BigInteger(val);
}

It will create a new BigInteger every time, since MAX_CONSTANT is only 16.
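The caching boundary is easy to observe directly, since cached constants are returned as the same object (a small illustration; the class name is mine):

```java
import java.math.BigInteger;

public class ValueOfCache {
    public static void main(String[] args) {
        // Values in [-16, 16] come from a static cache: same object each call.
        System.out.println(BigInteger.valueOf(16) == BigInteger.valueOf(16));
        // Larger values allocate a fresh BigInteger on every call.
        System.out.println(BigInteger.valueOf(17) == BigInteger.valueOf(17));
    }
}
```

So every iteration of the loop above allocates at least one new object, feeding the GC pressure discussed next.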


I think it could also slow down because the GC starts collecting older BigInteger instances. In any case, you should use int or long for the loop counter; a BigInteger is not really needed there.

After your last test, I think we can be fairly confident the GC is a factor.

I have some Clojure code that calculates the 100,000th Fibonacci number using big integers. This thread is not about Clojure, but since Clojure runs on the JVM and I benchmarked several existing big-integer implementations, a comment here might be valuable.

The algorithm, when using the JVM BigInteger class (denoted by the N literal suffix in Clojure), looks as follows:

(defn fibo [n]
  (loop [i n a 1N b 1N]
    (if (> i 0)
      (recur (dec i) b (+ a b))
      a)))
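For readers who don't read Clojure, a roughly equivalent plain-Java loop looks like this (a sketch; class and method names are mine, and the indexing matches the Clojure loop, which starts from a = b = 1):

```java
import java.math.BigInteger;

public class Fibo {
    // Same iteration as the Clojure loop above, using plain BigInteger:
    // n additions, returning the value that 'a' holds after the loop.
    static BigInteger fibo(int n) {
        BigInteger a = BigInteger.ONE;
        BigInteger b = BigInteger.ONE;
        for (int i = n; i > 0; i--) {
            BigInteger next = a.add(b);
            a = b;
            b = next;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(fibo(10)); // 89 with this loop's 1, 1, 2, 3, ... indexing
    }
}
```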

I re-implemented this using four big-integer implementations and ran benchmarks with the Clojure criterium library, which does warm-ups and some statistical analysis to arrive at reasonably reliable numbers.

Results on my 2.8 GHz Intel Core i7 MacBook:

Now, I realize this is all anecdotal and that we are only measuring addition here, but I would say that the huldra catchphrase "Outperforming BigInteger since 2015" seems pretty accurate in this case.

Any comments with pointers to potential candidates for faster big int addition algorithms are much appreciated.

The other answers relate to tuning the performance in code.

If you are using a Java version older than 1.8.0_151, you can improve BigInteger performance with the following command-line options:

-XX:+UseMontgomerySquareIntrinsic
-XX:+UseMontgomeryMultiplyIntrinsic
-XX:+UseSquareToLenIntrinsic
-XX:+UseMultiplyToLenIntrinsic

From 1.8.0_151 onward, these options are turned on by default.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow