Question

I have written programs in C++, Python and Java for matrix multiplication and tested their speed for multiplying two 2000 x 2000 matrices (see post). The standard ikj implementation took:

  • C++: 15 seconds (Source)
  • Python: 6 minutes 13 seconds (Source)

Now I have implemented the Strassen algorithm for matrix multiplication in Python and C++, following the description on Wikipedia. These are the times I got:

  • C++: 45 minutes (Source)
  • Python: Killed after 10 hours (Source)

Why is Strassen matrix multiplication so much slower than standard matrix multiplication?


Ideas:

  • Some cache effects
  • Implementation:
    • error (the resulting 2000 x 2000 matrix is correct)
    • multiplications by the zero padding (shouldn't be that important for 2000 x 2000 -> 2048 x 2048)

This is especially astonishing, as it seems to contradict the experiences of others.


Edit: The reasons why Strassen matrix multiplication was slower in my case were:

  • I made it fully recursive (recursing all the way down to tam == 1)
  • I had two functions, strassen and strassenRecursive. The first resized the matrix to a power of two if required, then called the second. But strassenRecursive didn't call itself recursively; it called strassen, so the matrix was resized again at every level of the recursion.
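To make the second bug concrete, here is a hypothetical reconstruction of the corrected call structure. The names strassen and strassenRecursive come from the post; the padding counter and the elided multiplication work are illustration only, not the original code.

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

int padCount = 0;  // instrumentation, only to show how often padding runs

// Copies M into a zero-padded matrix whose size is the next power of two.
Matrix padToPowerOfTwo(const Matrix& M) {
    ++padCount;
    int n = static_cast<int>(M.size()), p = 1;
    while (p < n) p *= 2;
    Matrix P(p, std::vector<double>(p, 0.0));
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            P[i][j] = M[i][j];
    return P;
}

// The actual multiplication work is elided; only the call structure matters.
void strassenRecursive(const Matrix& A, int tam) {
    if (tam <= 1) return;              // leaf
    // The bug: this call used to go back through strassen(), so every
    // level of the recursion resized and copied the matrix again.
    strassenRecursive(A, tam / 2);
}

void strassen(const Matrix& A) {
    Matrix P = padToPowerOfTwo(A);     // resize once, at the top level
    strassenRecursive(P, static_cast<int>(P.size()));
}
```

With the corrected structure, the padding/copying happens exactly once per top-level multiplication instead of once per recursion level.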

Solution

The basic problem is that you're recursing down to a leaf size of 1 in your Strassen implementation. Strassen's algorithm has a better Big O complexity, but constants do matter in reality, which means that for smaller problem sizes you're better off with a standard n^3 matrix multiplication.

So to greatly improve your program, instead of doing:

if (tam == 1) {
    C[0][0] = A[0][0] * B[0][0];
    return;
}

use if (tam <= LEAF_SIZE) and run the iterative solution there. LEAF_SIZE should be a constant that you determine experimentally for your given architecture. Depending on the architecture it may be larger or smaller; there are architectures where the constant factors for Strassen are so large that it's basically always worse than a simpler n^3 implementation for sensible matrix sizes. It all depends.
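A possible sketch of that advice in C++, assuming square matrices whose size is a power of two (as in the question's padded 2048 x 2048 case). LEAF_SIZE = 64 is only a placeholder value; you would tune it by measurement, and this is not the asker's original code:

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Hypothetical cutoff; the best value must be measured on your machine.
constexpr int LEAF_SIZE = 64;

// Standard ikj kernel: C += A * B (C must be zero-initialized).
void multiplyIkj(const Matrix& A, const Matrix& B, Matrix& C, int n) {
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            double a = A[i][k];
            for (int j = 0; j < n; ++j)
                C[i][j] += a * B[k][j];
        }
}

// Returns A + sign * B.
Matrix add(const Matrix& A, const Matrix& B, double sign = 1.0) {
    int h = static_cast<int>(A.size());
    Matrix C(h, std::vector<double>(h));
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < h; ++j)
            C[i][j] = A[i][j] + sign * B[i][j];
    return C;
}

// Strassen for square matrices whose size tam is a power of two.
void strassen(const Matrix& A, const Matrix& B, Matrix& C, int tam) {
    if (tam <= LEAF_SIZE) {          // iterative solution at the leaves
        multiplyIkj(A, B, C, tam);
        return;
    }
    int h = tam / 2;
    auto quad = [h](const Matrix& M, int r, int c) {   // copy one quadrant
        Matrix Q(h, std::vector<double>(h));
        for (int i = 0; i < h; ++i)
            for (int j = 0; j < h; ++j)
                Q[i][j] = M[r + i][c + j];
        return Q;
    };
    auto mul = [h](const Matrix& X, const Matrix& Y) { // recursive product
        Matrix Z(h, std::vector<double>(h, 0.0));
        strassen(X, Y, Z, h);
        return Z;
    };
    Matrix a11 = quad(A, 0, 0), a12 = quad(A, 0, h),
           a21 = quad(A, h, 0), a22 = quad(A, h, h);
    Matrix b11 = quad(B, 0, 0), b12 = quad(B, 0, h),
           b21 = quad(B, h, 0), b22 = quad(B, h, h);

    // The seven Strassen products.
    Matrix m1 = mul(add(a11, a22), add(b11, b22));
    Matrix m2 = mul(add(a21, a22), b11);
    Matrix m3 = mul(a11, add(b12, b22, -1.0));
    Matrix m4 = mul(a22, add(b21, b11, -1.0));
    Matrix m5 = mul(add(a11, a12), b22);
    Matrix m6 = mul(add(a21, a11, -1.0), add(b11, b12));
    Matrix m7 = mul(add(a12, a22, -1.0), add(b21, b22));

    // Recombine into the four quadrants of C.
    for (int i = 0; i < h; ++i)
        for (int j = 0; j < h; ++j) {
            C[i][j]         = m1[i][j] + m4[i][j] - m5[i][j] + m7[i][j];
            C[i][j + h]     = m3[i][j] + m5[i][j];
            C[i + h][j]     = m2[i][j] + m4[i][j];
            C[i + h][j + h] = m1[i][j] - m2[i][j] + m3[i][j] + m6[i][j];
        }
}
```

The only structural change from the question's version is the base case: recursion stops at LEAF_SIZE and hands the remaining work to the cache-friendly ikj kernel instead of descending to 1 x 1 blocks.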

OTHER TIPS

Well, "arithmetic operations" are not the only things that count. It's not like everything else is free.

My naive guess would be that all this memory-allocating and copying beats the gain from having fewer arithmetic operations...

Memory access, in particular, can be quite expensive when it falls out of the cache. In comparison, arithmetic operations could be considered free :-)

Although the Strassen algorithm has a smaller Big O complexity, to take advantage of it you would need to multiply two matrices that are too big to handle on most standard machines, and even supercomputers.

Think of it this way: one problem costs x^3, the other costs roughly c * x^2.807 (Strassen's exponent is log2(7) ≈ 2.807), where the constant c and the lower-order terms are much larger.

I remember that I did the same thing back in college. My implementation was in Java. I also wrote a script to test the code; I had more than 10000 test cases with random matrices of different sizes, from (2 x 2) up to (8192 x 8192). I did not let the recursion go down to the scalar level; I used powers of 2 as stopping points. I found a range where Strassen's is more efficient and a range where it is worse than the naive algorithm.

I did not investigate cache, memory, or JVM (garbage collection).

I attributed the findings, when I presented in front of the class, to the fact that Strassen's algorithm's asymptotic complexity is measured in terms of the number of multiplications. It was devised at a time when computers did additions faster than multiplications.

Nowadays CPUs multiply as fast as they add (in number of cycles). If you examine both algorithms, you will find that Strassen's has fewer arithmetic operations than the naive algorithm only if the size is larger than 2^10 (if I remember correctly).
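That recollection can be checked against textbook operation counts. The sketch below uses the standard formulation of fully recursive Strassen (7 half-size products plus 18 half-size matrix additions per step); these are counting formulas of my own, not measurements from the programs in the question:

```cpp
#include <cstdint>

// Total arithmetic operations (multiplications + additions) for an
// n x n matrix product, n a power of two.

// Naive algorithm: n^3 multiplications and n^2 * (n - 1) additions.
std::uint64_t naiveOps(std::uint64_t n) {
    return 2 * n * n * n - n * n;
}

// Fully recursive Strassen: 7 recursive half-size products plus
// 18 additions/subtractions of half-size matrices per step.
std::uint64_t strassenOps(std::uint64_t n) {
    if (n == 1) return 1;              // a single scalar multiplication
    std::uint64_t h = n / 2;
    return 7 * strassenOps(h) + 18 * h * h;
}
```

With these counts, the naive algorithm performs fewer total operations up through n = 512, and Strassen pulls ahead at n = 1024, which matches the 2^10 figure recalled above.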

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow