Question

I was testing out the mmultP function from repa-algorithms-3.2.1.1 with the following code (a tad condensed here for brevity):

import Data.Array.Repa hiding            (map)
import Data.Array.Repa.Algorithms.Matrix (mmultP)

import Control.Monad                     (replicateM)
import Control.Arrow                     ((&&&))
import System.Random.MWC                 (initialize, uniformR)
import Control.Monad.ST                  (runST)
import Data.Vector.Unboxed               (singleton)
import Data.Word                         (Word32)

-- Create a couple of dense matrices
genRnds :: Word32 -> [Double]
genRnds seed = runST $ do
    gen <- initialize (singleton seed)
    replicateM (1000 ^ 2) (uniformR (0, 1) gen)

(arr, brr) = head &&& last $ map (fromListUnboxed (Z :. 1000 :. 1000 :: DIM2) . genRnds) [1, 100000]

-- mmultP test
main :: IO ()
main = mmultP arr brr >>= print

and, as recommended in the Repa documentation, compiled using

ghc mmultTest.hs -Odph -rtsopts -threaded -fno-liberate-case -funfolding-use-threshold1000 -funfolding-keeness-factor1000 -fllvm -optlo-O3 -fforce-recomp

Here's the sequential run in the threaded runtime:

$ time ./mmultTest +RTS -K100M > /dev/null
real    0m10.962s
user    0m10.790s
sys     0m0.161s

and here's one using 4 cores (running on a four-core MacBook Air):

$ time ./mmultTest +RTS -N4 -K100M > /dev/null
real    0m13.008s
user    0m18.591s
sys     0m2.067s

Does anyone have any intuition as to what's happening here? I also get slower-than-sequential performance for -N2 and -N3; each additional core seems to add extra time.

Note that I do observe some minor gains from parallelism on some hand-rolled Repa matrix multiply code.
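
For reference, a hand-rolled multiply of the kind I mean might look roughly like this (a sketch rather than the exact code; mmultHand and its transpose-then-slice structure are illustrative):

-- Illustrative hand-rolled Repa matrix multiply; assumes
--   import Data.Array.Repa as R hiding (map)
-- Transposing b first means the inner reduction walks both
-- operands along their rows.
mmultHand :: Monad m
          => Array U DIM2 Double -> Array U DIM2 Double
          -> m (Array U DIM2 Double)
mmultHand a b = do
    bT <- computeUnboxedP (transpose b)   -- rows of bT are columns of b
    computeUnboxedP $ fromFunction (Z :. h :. w) $ \(Z :. r :. c) ->
        sumAllS $ R.zipWith (*) (slice a  (Any :. r :. All))
                                (slice bT (Any :. c :. All))
  where
    (Z :. h :. _) = extent a
    (Z :. _ :. w) = extent b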

UPDATE:

Puzzling; I replaced main with

mmultBench :: IO ()
mmultBench = do
    results <- mmultP arr brr
    let reduced = sumAllS results
    print reduced

and removed the dependency on mwc-random:

(arr, brr) = head &&& last $ map (fromListUnboxed (Z :. 1000 :. 1000 :: DIM2)) (replicate 2 [1..1000000])
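
The Criterion harness was the usual setup, roughly along these lines (a sketch; the actual benchmark code isn't shown here, and the bench name is made up):

import Criterion.Main (bench, defaultMain, nfIO)

-- Benchmark the multiply and reduce the result to a single Double,
-- so Criterion forces the work without printing the whole matrix.
main :: IO ()
main = defaultMain
    [ bench "mmultP 1000x1000" $ nfIO $ do
        results <- mmultP arr brr
        return (sumAllS results)
    ]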

A Criterion benchmark with the runtime options -N1 -K100M yields:

mean: 1.361450 s, lb 1.360514 s, ub 1.362915 s, ci 0.950
std dev: 5.914850 ms, lb 3.870615 ms, ub 9.183472 ms, ci 0.950

and -N4 -K100M gives me:

mean: 556.8201 ms, lb 547.5370 ms, ub 573.5012 ms, ci 0.950
std dev: 61.82764 ms, lb 40.15479 ms, ub 102.5329 ms, ci 0.950

That's a lovely speedup. I would almost think the previous behaviour was due to writing the resulting 1000x1000 array to stdout, but, as I mentioned, I do observe parallelism gains in that setting when I swap in my own matrix multiply code. Still scratching my head.


Solution

1) Printing the matrix to stdout makes the program IO-bound. Any speedup figures recorded in this situation will be lies.

2) There are no 4-core MacBook Airs. They are all 2-core, with 2 hyper-threads per core. Only 2 threads can actually run at a time. Any speedup beyond -N2 will be due to latency hiding: the second hyper-thread on a core can run while the first is stalled on a cache miss.

OTHER TIPS

This does seem strange, but maybe you are just paying the usual cost of parallelism without reaping the benefits? So it's akin to parallelizing an absurdly unbalanced load?

It seems like something more must be wrong. What struck me, though (and it might give a partial explanation of your results), is that you are only using one repa combinator, mmultP. The framework hardly gets a chance! If I complicate matters with a sprinkling of zipWith, foldAllP, etc., e.g.

-- assumes the question's imports, plus: import Data.Array.Repa as R hiding (map)
-- and import Control.Monad ((>=>)) (plus Control.Applicative on older GHCs)
main :: IO ()
main = arr `xxx` brr >>= foldAllP (+) 0 >>= print
  where
    xxx a b     = R.zipWith (+) <$> complicated a <*> complicated b
    complicated = mmultP brr >=> mmultP arr >=> mmultP brr >=> mmultP arr

Then with my two-core jalopy, I get exactly the two-core-parallelizer's dream:

 $ time ./mmmult +RTS -K200M  -N2
 6.2713897715510016e16

 real   0m8.742s
 user   0m16.176s
 sys    0m0.444s

 $ time ./mmmult +RTS -K200M  
 6.2713897715512584e16

 real   0m15.214s
 user   0m14.970s
 sys    0m0.239s
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow