Question

Hi, I just wanted to know whether it is possible to do the following inside an NVIDIA CUDA kernel:

__global__ void compute(long *c1, long size, ...)
{
  ...
  long d[1000];
  ...
}

or the following:

__global__ void compute(long *c1, long size, ...)
{
  ...
  long d[size];
  ...
}

Solution

You can do the first example; I haven't tried the second.

However, if you can help it, you might want to redesign your program to avoid this. An array of 1000 longs is 4000-8000 bytes per thread (depending on your platform's sizeof(long)); that will not fit into registers, so the compiler will place it in CUDA local memory, which is slow (on the order of 400 cycles of memory latency).
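If the buffer can be shared across a block rather than owned per thread (an assumption about your algorithm, not something stated in the question), a minimal sketch of one such redesign keeps a single copy in fast on-chip shared memory:

__global__ void compute(long *c1, long size)
{
    // One copy of the buffer per BLOCK in on-chip shared memory,
    // instead of one copy per thread in slow local memory.
    __shared__ long d[1000];

    // Threads cooperatively fill the buffer.
    for (long i = threadIdx.x; i < 1000 && i < size; i += blockDim.x)
        d[i] = c1[i];
    __syncthreads();   // buffer is now visible to every thread in the block

    // ... work on d[] here ...
}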

OTHER TIPS

You can do #1, but beware that the array will be allocated in EVERY thread!

Your second snippet won't work as written: the size of an in-kernel array must be a compile-time constant, so you cannot declare long d[size].

You can allocate shared memory dynamically when you launch the kernel.

__global__ void compute(long *c1, long size, ...)
{
    ...
    extern __shared__ float shared[];
    ...
}

compute <<< dimGrid, dimBlock, sharedMemSize >>>( blah blah );

CUDA programming guide:

the size of the array is determined at launch time (see Section 4.2.3).
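For concreteness, a minimal sketch of computing that third launch parameter (the block size of 256 is made up for illustration, and c1/size are assumed to be the host-side variables from the question):

int threadsPerBlock = 256;   // hypothetical block size
dim3 dimBlock(threadsPerBlock);
dim3 dimGrid((unsigned int)((size + threadsPerBlock - 1) / threadsPerBlock));

// The third launch parameter is the number of bytes of dynamic shared
// memory per block; here, one float per thread for shared[] above.
size_t sharedMemSize = threadsPerBlock * sizeof(float);

compute<<<dimGrid, dimBlock, sharedMemSize>>>(c1, size);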

Dynamic memory allocation at kernel runtime IS supported on devices of compute capability 2.0 and later; check the SDK sample on in-kernel new/delete.
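For reference, a minimal sketch of in-kernel allocation (device-side malloc/free requires compute capability 2.0 or later; the error handling is illustrative):

__global__ void compute(long *c1, long size)
{
    // Each thread allocates from the device heap at run time (CC >= 2.0).
    long *d = (long *)malloc(size * sizeof(long));
    if (d == NULL)
        return;        // the device heap can be exhausted; handle failure

    // ... use d[] here ...

    free(d);           // a thread that allocates must also free
}

The device heap defaults to 8 MB; it can be enlarged before launch with cudaDeviceSetLimit(cudaLimitMallocHeapSize, bytes).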

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow