I'm starting down the exciting road of GPU programming, and if I'm going to do some heavyweight number-crunching, I'd like to use the best libraries out there. In particular, I would like to use cuBLAS from an F# environment. CUDAfy offers the full set of drivers in its solution, and I have also been looking at Alea.cuBase, which has raised a few questions.

The Alea.cuSamples project on GitHub makes a cryptic reference to an Examples solution: "For more advanced test, please go to the MatrixMul projects in the Examples solution." However, I can't find any trace of these mysterious projects.

  1. Does anyone know the location of the elusive "MatrixMul projects in the Examples solution"?
  2. Given that cuSamples performs a straightforward matrix multiplication, would the more advanced version, wherever it lives, use cuBLAS?
  3. If not, is there a way to access cuBLAS from Alea.cuBase a la CUDAfy?

Solution 2

The matrixMulCUBLAS project is a C++ project that ships with the CUDA SDK (https://developer.nvidia.com/cuda-downloads). It uses cuBLAS to achieve astonishingly fast matrix multiplication (139 GFLOPS on my home laptop).
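For context, the heart of what matrixMulCUBLAS does is a single SGEMM call on the GPU. Here is a minimal, hedged host-side sketch of that pattern (matrix sizes are illustrative and error checking is omitted; the full SDK sample adds timing, validation, and tuned parameters):

```cuda
// Minimal cuBLAS SGEMM sketch: C = alpha*A*B + beta*C (column-major).
// Illustrative only -- real code should check every return status.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>
#include <cstdio>

int main() {
    const int n = 4;  // small square matrices, just for illustration
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n, 0.0f);

    // Allocate device buffers and copy the inputs over.
    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    // One library call does the whole multiplication.
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("C[0] = %f\n", hC[0]);  // every entry should be n * 1.0 * 2.0

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

The speed comes from cuBLAS choosing a tuned kernel for the hardware; the caller only supplies device pointers, leading dimensions, and the alpha/beta scalars.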

Other tips

With Alea GPU V2, the new version, there are now two options:

  1. The Alea Unbound library provides optimized matrix multiplication implementations: http://quantalea.com/static/app/tutorial/examples/unbound/matrixmult.html
  2. Alea GPU has cuBLAS integrated; see the tutorial at http://quantalea.com/static/app/tutorial/examples/cublas/index.html
Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow