Question

I've been working to optimize the Lucas-Lehmer primality test using C# (yes, I'm doing something with Mersenne primes to calculate perfect numbers). I was wondering whether it is possible to make further speed improvements to the current code. I use the System.Numerics.BigInteger class to hold the numbers; perhaps it is not the wisest choice, but we'll see.

This code is actually based on the information found at: http://en.wikipedia.org/wiki/Lucas%E2%80%93Lehmer_primality_test

On that page, a proof is given showing how the expensive division can be optimized away and replaced with shifts and additions.

The code for the LucasTest is:

public bool LucasLehmerTest(int num)
{
  if (num % 2 == 0)
     return num == 2;
  else
  {
     BigInteger ss = new BigInteger(4);
     for (int i = 3; i <= num; i++)
     {
        ss = KaratsubaSquare(ss) - 2;
        ss = LucasLehmerMod(ss, num);
     }
     return ss == BigInteger.Zero;
   }
}

Edit: this turns out to be faster than using ModPow from the BigInteger class, as suggested by Mare Infinitus below. That implementation is:

public bool LucasLehmerTest(int num)
{
  if (num % 2 == 0)
     return num == 2;
  else
  {
     BigInteger m = (BigInteger.One << num) - 1;
     BigInteger ss = new BigInteger(4);
     for (int i = 3; i <= num; i++)
       ss = (BigInteger.ModPow(ss, 2, m) - 2) % m;
     return ss == BigInteger.Zero;
   }
}

The LucasLehmerMod method is implemented as follows:

public BigInteger LucasLehmerMod(BigInteger dividend, int divisor)
{
   BigInteger mask = (BigInteger.One << divisor) - 1;  //The Mersenne number 2^divisor - 1
   BigInteger remainder = BigInteger.Zero;
   BigInteger temporaryResult = dividend;

   do
   {
      remainder = temporaryResult & mask;               //Low 'divisor' bits
      temporaryResult >>= divisor;                      //High bits
      temporaryResult += remainder;                     //Fold the high bits back in
   } while ( (temporaryResult >> divisor ) != 0 );

   return (temporaryResult == mask ? BigInteger.Zero : temporaryResult);
}
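The fold works because 2^p leaves a remainder of 1 when divided by 2^p - 1, so the bits above position p can simply be added back onto the low p bits. A throwaway sanity check against the plain % operator (just a sketch, not part of my actual code) would look like this:

// Sketch only: verify LucasLehmerMod agrees with an ordinary modulo by the
// Mersenne number m = 2^p - 1 for a bunch of random non-negative inputs.
int p = 13;
BigInteger m = (BigInteger.One << p) - 1;
var rnd = new Random(1);
for (int t = 0; t < 10000; t++)
{
   BigInteger x = new BigInteger(rnd.Next()) * rnd.Next();   // something typically larger than m
   if (LucasLehmerMod(x, p) != x % m)
      Console.WriteLine("Mismatch for x = " + x);
}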

What I am afraid of is that when using the BigInteger class from the .NET Framework, I am bound to its implementation of the calculations. Would that mean I have to create my own BigInteger class to improve things? Or can I get by with a KaratsubaSquare (derived from the Karatsuba algorithm) like the one below, which I found in Optimizing Karatsuba Implementation:

public BigInteger KaratsubaSquare(BigInteger x)
{
   int n = BitLength(x);

   if (n <= LOW_DIGITS) return BigInteger.Pow(x, 2);       //Standard square for small inputs

   n = (n + 1) / 2;                                        //Split point: half the bit length
   BigInteger b = x >> n;                                  //Higher half
   BigInteger a = x - (b << n);                            //Lower half
   BigInteger ac = KaratsubaSquare(a);                     // lower half * lower half
   BigInteger bd = KaratsubaSquare(b);                     // higher half * higher half
   BigInteger c = Karatsuba(a, b);                         // lower half * higher half

   return ac + (c << (n + 1)) + (bd << (2 * n));           // a*a + 2ab*2^n + b*b*2^(2n)
}
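This snippet assumes a BitLength helper, a LOW_DIGITS cut-over constant, and the general Karatsuba(a, b) multiply from that same question. For completeness, the two small pieces could look roughly like this (the threshold is just a placeholder value to be tuned by measurement):

// Assumed helpers, sketched for completeness -- not taken from the original code.
private const int LOW_DIGITS = 1024;     // below this many bits, plain squaring wins (tune this)

private static int BitLength(BigInteger x)
{
   // Number of significant bits of a non-negative BigInteger.
   // (On newer runtimes BigInteger.GetBitLength() could be used instead.)
   byte[] bytes = x.ToByteArray();                 // little-endian two's complement
   int bits = (bytes.Length - 1) * 8;
   int msb = bytes[bytes.Length - 1];
   while (msb != 0) { msb >>= 1; bits++; }
   return bits;
}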

So basically, I want to see whether it is possible to improve the Lucas-Lehmer test method by optimizing the for loop. However, I am a bit stuck there... Is it even possible?

Any thoughts are welcome of course.

Some extra thoughts:

I could use several threads to speed up the calculation for finding perfect numbers. However, I have no experience (yet) with good partitioning. I'll try to explain my thoughts (only rough sketches, no real code yet):

First I'll be generating a prime table using the sieve of Eratosthenes. It takes about 25 ms to find the primes in the range of 2 to 1 million, single-threaded.
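Nothing fancy there; a minimal sketch of what I mean (names are placeholders, not my exact code):

// Sieve of Eratosthenes sketch: collects the prime exponents to feed into
// the Lucas-Lehmer test. Requires System.Collections.Generic.
static List<int> SievePrimes(int limit)
{
   var isComposite = new bool[limit + 1];
   var primes = new List<int>();
   for (int i = 2; i <= limit; i++)
   {
      if (isComposite[i]) continue;
      primes.Add(i);
      for (long j = (long)i * i; j <= limit; j += i)
         isComposite[j] = true;
   }
   return primes;
}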

What C# offers is quite astonishing. Using PLINQ or the Parallel.For method, I could run several calculations almost simultaneously; however, it chunks the primeTable array into contiguous parts that do not suit the search well.

I already figured out that the automatic load balancing of the threads is not sufficient for this task. Hence I need to try a different approach, dividing the work based on the size of the Mersenne numbers that have to be tested and used to calculate a perfect number. Does anyone have experience with this? This page seems to be a bit helpful: http://www.drdobbs.com/windows/custom-parallel-partitioning-with-net-4/224600406
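What I have in mind is roughly the following (only a sketch, not working code from my project). Partitioner.Create with loadBalance: true hands out elements on demand instead of in fixed chunks, so threads that drew small exponents don't sit idle while one thread grinds through the large ones:

// Rough sketch. Requires System.Collections.Concurrent, System.Threading.Tasks
// and System.Numerics; primeTable is the sieve output, LucasLehmerTest is above.
var partitioner = Partitioner.Create(primeTable, loadBalance: true);

Parallel.ForEach(partitioner, p =>
{
   if (LucasLehmerTest(p))
   {
      BigInteger mersenne = (BigInteger.One << p) - 1;
      BigInteger perfect = (BigInteger.One << (p - 1)) * mersenne;   // Euclid-Euler
      Console.WriteLine("p = " + p + ": perfect number has " + perfect.ToString().Length + " digits");
   }
});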

I'll be looking into it further.

As for now, my results are as follows. My current algorithm (using the standard BigInteger class from C#) can find the first 17 perfect numbers (see http://en.wikipedia.org/wiki/List_of_perfect_numbers) within 5 seconds on my laptop (an Intel i5 with 4 cores and 8 GB of RAM). However, after that it gets stuck and finds nothing within 10 minutes.

This is something I cannot get past yet... My gut feeling (and common sense) tells me that I should look into the Lucas-Lehmer test, since the for loop calculating the 18th perfect number (using the Mersenne prime exponent 3217) would run 3215 times. There is room for improvement, I guess...

What Dinony posted below is a suggestion to rewrite it completely in C. I agree that would boost performance; however, I chose C# to find out its limitations and benefits. Since it is widely used and allows rapid application development, it seemed worth trying.

Could unsafe code provide benefits here as well?


Solution

One possible optimization is to use BigInteger.ModPow.

It really increases performance significantly.

OTHER TIPS

Just a note for info... In Python, this

ss = KaratsubaSquare(ss) - 2

has worse performance than this:

ss = ss*ss - 2
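The C# equivalent is simply ss = ss * ss - 2. CPython's built-in integers already use Karatsuba internally, which is likely why the hand-rolled version loses there; whether the same holds for .NET's BigInteger is easiest to check with a quick Stopwatch comparison (a rough sketch reusing the methods from the question):

// Rough timing sketch (requires System.Diagnostics); p = 3217 matches the
// exponent mentioned in the question.
int p = 3217;
var sw = Stopwatch.StartNew();
BigInteger s1 = 4;
for (int i = 3; i <= p; i++) s1 = LucasLehmerMod(s1 * s1 - 2, p);
sw.Stop();
Console.WriteLine("plain square:    " + sw.ElapsedMilliseconds + " ms");

sw.Restart();
BigInteger s2 = 4;
for (int i = 3; i <= p; i++) s2 = LucasLehmerMod(KaratsubaSquare(s2) - 2, p);
sw.Stop();
Console.WriteLine("KaratsubaSquare: " + sw.ElapsedMilliseconds + " ms");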

What about adapting the code to C? I have no idea about the algorithm, but it is not that much code... so the biggest run-time improvement could come from adapting it to C.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow