Dirac mentions Fermi GPUs on its banner. If you are on a node with Fermi GPUs, your compile command is incorrect: -arch=sm_30 is for Kepler GPUs. Try -arch=sm_20 instead.
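For example, as a minimal sketch (the source file name mycode.cu here is made up for illustration), the Fermi build would look like:

nvcc -arch=sm_20 -o mycode mycode.cu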
I was confused by the fact that cuda-memcheck was reporting no errors, but the type of error you are encountering is one that cuda-memcheck will not necessarily catch. There is a category of launch failure errors that can only be trapped by the proper CUDA error checking that @talonmies suggested; note in particular the error checking that is required immediately after a kernel launch.
When you compile for -arch=sm_30 and try to run it on a Fermi (sm_20) machine, the kernel launch will immediately fail, but all other subsequent CUDA API calls will report no failure.
The detail page for Dirac does mention a couple of Kepler nodes/GPUs:
• 1 node: Tesla K20Xm
• 1 node: Tesla K40c
I believe your code compiled with -arch=sm_35 should run correctly on those nodes.
And I also note that there are even some older ("Tesla" family) GPUs/nodes:
• 4 nodes: 1 C1060 NVIDIA Tesla GPU with 4GB of memory and 240 parallel CUDA processor cores.
• 1 node: 4 C1060 NVIDIA Tesla GPUs, each with 4GB of memory and 240 parallel CUDA processor cores.
For those nodes, you would need to compile with -arch=sm_13, but don't forget to use proper CUDA error checking any time you are having difficulty with a CUDA code.
Or you could use nvcc extended notation to build a single binary/executable containing device code for all 3 types. Using extended notation, for the 3 different GPU architectures on that cluster (that I can see):
nvcc -gencode arch=compute_13,code=sm_13 -gencode arch=compute_20,code=sm_20 -gencode arch=compute_35,code=sm_35 ...
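The runtime then selects the matching embedded binary for whichever GPU it finds. If you want to confirm which architectures ended up in the resulting fat binary, cuobjdump can list the embedded cubins (a.out here is just nvcc's default output name, not anything specific to your project):

cuobjdump --list-elf a.out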