How can I use the GPU as a second processor in .Net?
19-08-2019
Question
The question says it all, really. I'm hoping that I don't have to write the code in a C++ DLL and then call it from managed code.
Solution
A quick Google search turned up something called Accelerator. It's a Microsoft Research project, found here.
OTHER TIPS
You can write code for the GPU only using a shading language. Shading languages are proper languages in their own right (not C++, .NET languages, or anything else), but they are very similar to C++.
The most common shading languages are:
GLSL (for OpenGL)
Cg (NVIDIA's shading language, compatible with both DirectX and OpenGL)
HLSL (Microsoft's shading language, for DirectX)
You need to write the code you want to run on the GPU in your shading language, then dynamically compile it and load it onto the GPU. The compiling and loading is done from any code that supports DirectX/OpenGL (there are Java, .NET, and C++ APIs as far as I know; most people use C++, though, so most of the samples you'll find will be written in C++).
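To give a flavor of what shading-language code looks like, here is a minimal GLSL fragment shader sketch. The variable names (`uv`, `time`) and the animated gradient are illustrative assumptions, not from any particular framework:

```glsl
// Minimal GLSL fragment shader: runs once per pixel, in parallel, on the GPU.
// `uv` and `time` are illustrative names chosen for this sketch.
#version 330 core

in vec2 uv;            // interpolated coordinate from the vertex stage
uniform float time;    // set each frame by the host (C++/.NET) application
out vec4 fragColor;

void main()
{
    // A simple animated gradient, evaluated independently for every pixel
    float r = 0.5 + 0.5 * sin(time + uv.x * 6.2831);
    fragColor = vec4(r, uv.y, 1.0 - uv.x, 1.0);
}
```

The host application passes this source to the graphics API as a plain string (for example via `glShaderSource` and `glCompileShader` in OpenGL), and the driver compiles it for the GPU at run time.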
One thing I can tell you for sure: as of December 2008, NVIDIA and ATI did not have managed classes for their GPUs. I don't know if they have plans for this any time soon.
This allows you to code in C#; it is translated into CUDA C++ for you, statically or dynamically. It works well most of the time, but more complicated tasks can be difficult without making direct changes to the library. It may have improved since I used it. I was also under the impression that OpenCL support was planned at some point.