Question

I have a neural network written in Erlang, and I just bought a GeForce GTX 260 card with a 240 core GPU on it. Is it trivial to use CUDA as glue to run this on the graphics card?

Solution

No, using CUDA is not a trivial matter.

The CUDA programming model is essentially C (with some additions), but to get the most out of the GPU's capabilities you would have to ensure that your algorithms follow the CUDA guidelines (see the NVIDIA CUDA Programming Guide).
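
To give a feel for the model, here is a minimal sketch of a complete CUDA program: a SAXPY kernel plus the explicit host/device memory management the platform requires. The kernel name and sizes are illustrative only, not from the question's code:

```cuda
#include <stdio.h>
#include <cuda_runtime.h>

// The kernel body is plain C; __global__ and the built-in
// thread/block indices are the CUDA additions.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];  // y = a*x + y, one element per thread
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes), *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // The GPU has its own memory: allocate on the device and copy explicitly.
    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch: 256 threads per block, enough blocks to cover all n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);  // expect 4.0

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```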

For example, to get the best memory performance (somewhere around 70 GB/s) you need to access memory in streaming mode with coalescing; branches are also very costly on GPUs, so you should avoid conditionals as much as possible. Check out the guide and the samples provided with the SDK; they make an excellent starting point.
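
As a rough illustration of those two guidelines, the hypothetical kernels below contrast a coalesced access pattern with a strided one, and show a branch-free alternative to a conditional (the names and the ReLU example are my own, not from the SDK):

```cuda
// Coalesced: consecutive threads read consecutive addresses, so the
// hardware merges each warp's loads into a few wide transactions.
__global__ void scale_coalesced(int n, float a, const float *in, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a * in[i];
}

// Strided: thread i touches in[i * stride], so a warp's accesses are
// scattered and each load becomes its own transaction -- much slower.
__global__ void scale_strided(int n, int stride, float a,
                              const float *in, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i * stride < n) out[i * stride] = a * in[i * stride];
}

// Branch avoidance: when threads in a warp take different sides of an
// 'if', both sides execute serially (divergence). Branch-free arithmetic
// such as fmaxf keeps the whole warp on one code path.
__global__ void relu_branchless(int n, const float *in, float *out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = fmaxf(in[i], 0.0f);  // instead of: if (in[i] > 0) ...
}
```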

OTHER TIPS

I wish I could tell you how to do this with Erlang... ;-) But at least Satnam Singh at MS Research has done some very interesting work with Haskell (Lava) and F#. Perhaps his papers can give you some intuition for how it could be done:

http://research.microsoft.com/en-us/people/satnams/
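
That said, one plausible way to glue Erlang to CUDA is through Erlang's native C interface: keep the network logic in Erlang and push the heavy numeric kernels into a natively implemented function (NIF) compiled with nvcc. The sketch below assumes an Erlang/OTP release with NIF support; the module name gpu_math and the saxpy function are hypothetical, not an existing library:

```cuda
// Hypothetical Erlang NIF (a .cu file compiled with nvcc) exposing one
// CUDA-backed function to Erlang.
#include <erl_nif.h>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

// Called from Erlang as gpu_math:saxpy(A, X, Y), where X and Y are
// binaries holding packed 32-bit floats of equal length.
static ERL_NIF_TERM saxpy_nif(ErlNifEnv *env, int argc,
                              const ERL_NIF_TERM argv[]) {
    double a;
    ErlNifBinary x, y;
    if (!enif_get_double(env, argv[0], &a) ||
        !enif_inspect_binary(env, argv[1], &x) ||
        !enif_inspect_binary(env, argv[2], &y) ||
        x.size != y.size)
        return enif_make_badarg(env);

    int n = (int)(x.size / sizeof(float));
    float *dx, *dy;
    cudaMalloc(&dx, x.size);
    cudaMalloc(&dy, y.size);
    cudaMemcpy(dx, x.data, x.size, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y.data, y.size, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, (float)a, dx, dy);

    // Copy the result straight into a new Erlang binary.
    ERL_NIF_TERM result;
    unsigned char *out = enif_make_new_binary(env, y.size, &result);
    cudaMemcpy(out, dy, y.size, cudaMemcpyDeviceToHost);
    cudaFree(dx);
    cudaFree(dy);
    return result;
}

static ErlNifFunc nif_funcs[] = {{"saxpy", 3, saxpy_nif}};
ERL_NIF_INIT(gpu_math, nif_funcs, NULL, NULL, NULL, NULL)
```

On the Erlang side you would pair this with a gpu_math module that calls erlang:load_nif/2 and passes the float data as binaries. Note that a long-running NIF call blocks an Erlang scheduler thread, so for big kernels a separate port program may be the safer design.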
