Question

I am running CUBLAS v2.0 in different streams on a single GPU (Tesla C2050) by subdividing the input matrices (A[x/num_of_streams * y] B[x * y] = C[x/num_of_streams * y]), but somehow the program takes more time when I use CUDA streams. Here is the code snippet:

            //plan is a struct containing the matrix dimensions and the stream for each chunk
            //the loops below should run in parallel across nstreams - at most 16 streams can run concurrently
            //Copy A - cudaMemcpyAsync
            for(i = 0; i < nstreams; i++)
                    cudgemm_copyA_in_streams (&plan[i]);
            //Copy B - cudaMemcpyAsync
            for(i = 0; i < nstreams; i++)
                    cudgemm_copyB_in_streams (&plan[i]);

            //Create handles - serial
            for(i = 0; i < nstreams; i++)
                    handle[i] = create_handle();

            //Run kernels - first doing a cublasSetStream(handle, plan->stream) before running cublasDgemm... 
            for(i = 0; i < nstreams; i++)
                    cudgemm_kernel_in_streams (&plan[i], handle[i], 1.0f, 1.0f);

            //Destroy handles - serial
            for(i = 0; i < nstreams; i++)
                    destroy_handle (handle[i]);

            //Copy C - cudaMemcpyAsync
            for(i = 0; i < nstreams; i++)
                    cudgemm_copyC_in_streams (&plan[i]);

            //EDIT: Function body

            //The other two copy functions are exactly the same as this
            void cudgemm_copyA_in_streams(TGPUplan *plan)
           {
                 cudasafe(cudaMemcpyAsync(plan->Ad_Data, plan->Ah_Data, (plan->Acols * plan->Arows * sizeof(double)), cudaMemcpyHostToDevice, plan->stream) );

            }

            //Create handle
            cublasHandle_t create_handle ()
            {
                   cublasHandle_t handle;
                   checkError(cublasCreate(&handle), "cublasCreate() error!\n");
                   return handle;
             }

             //Destroy handle
             void destroy_handle (cublasHandle_t handle)
             {
                  checkError(cublasDestroy(handle), "cublasDestroy() error!\n");
             }

             //Kernel
             void cudgemm_kernel_in_streams(TGPUplan *plan, cublasHandle_t handle, const double alpha, const double beta)
             {
                   cublasStatus_t ret;
                   cublasSetStream(handle, plan->stream);

                   ret = cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, plan->Arows, plan->Ccols, plan->Acols, &alpha, plan->Ad_Data, plan->Arows, plan->Bd_Data, plan->Brows, &beta, plan->Cd_Data, plan->Crows);
                   checkError(ret, "cublas Dgemm returned an error!\n");
              }

So I am bouncing back and forth between streams and assigning work, expecting a better execution time, but I notice that the more streams I use, the longer the program takes compared to the version that does not use streams. Where am I going wrong? Cross-posted to the Nvidia forums - http://forums.nvidia.com/index.php?showtopic=209420
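For context, a minimal sketch of the kind of per-stream setup the code above assumes (field names beyond those already used are hypothetical; note that cudaMemcpyAsync overlaps with computation only when the host buffers are page-locked, e.g. allocated with cudaMallocHost):

            //Sketch of the assumed per-stream setup (illustrative, not the exact code)
            typedef struct
            {
                int Arows, Acols, Brows, Bcols, Crows, Ccols;   //matrix dimensions of this chunk
                double *Ah_Data, *Bh_Data, *Ch_Data;            //host buffers (page-locked)
                double *Ad_Data, *Bd_Data, *Cd_Data;            //device buffers
                cudaStream_t stream;                            //stream this chunk is bound to
            } TGPUplan;

            for(i = 0; i < nstreams; i++)
            {
                    cudasafe( cudaStreamCreate(&plan[i].stream) );
                    //page-locked host memory is required for the async copies to overlap
                    cudasafe( cudaMallocHost((void**)&plan[i].Ah_Data, plan[i].Arows * plan[i].Acols * sizeof(double)) );
                    cudasafe( cudaMalloc((void**)&plan[i].Ad_Data, plan[i].Arows * plan[i].Acols * sizeof(double)) );
                    //... same allocations for B and C ...
            }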

EDIT:

I modified my program as follows:

            //copy data
            for(i = 0; i < nstreams; i++)
            {
                    cudgemm_copyA_in_streams (&plan[i]);
                    cudgemm_copyB_in_streams (&plan[i]);
            }

            //Run kernel and copy back
            for(i = 0; i < nstreams; i++)
            {
                    cudgemm_kernel_in_streams (&plan[i], handle[i], 1.0f, 1.0f);
                    cudgemm_copyC_in_streams (&plan[i]);
            }

When I profile my program for a matrix order of 6144, I get the following info:

Kernel time = 42.75 % of total GPU time 
Memory copy time = 28.9 % of total GPU time
Kernel taking maximum time = fermiDgemm_v2_kernel_val (42.8% of total GPU time)
Memory copy taking maximum time = memcpyHtoDasync (21.7% of total GPU time)
Total overlap time in GPU = 65268.3 micro sec. (3.6% of total GPU time)

(Profiler timeline screenshot: blue = kernel, green = cudaMemcpyAsync in 2 streams)

When I time the above loop, I get a time of 0.000284 s, vs. 1.703289 s for the version that does not use streams (in that version I also time the two sequential memory copies, the kernel invocation and the remaining memcpy). I think that since I am not using any synchronization constructs, I may be printing the time before the computation actually finishes (I find it difficult to believe that there is a near-100% improvement).
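To verify this I would have to synchronize the streams before stopping the timer, e.g. (a sketch; the timing helpers are placeholders):

            //Sketch: synchronize before reading the timer, otherwise only the launch overhead is measured
            start_timer();                                          //placeholder timing helper
            for(i = 0; i < nstreams; i++)
            {
                    cudgemm_kernel_in_streams (&plan[i], handle[i], 1.0, 1.0);
                    cudgemm_copyC_in_streams (&plan[i]);
            }
            //wait for every stream to finish before stopping the timer
            for(i = 0; i < nstreams; i++)
                    cudasafe( cudaStreamSynchronize(plan[i].stream) );
            //or simply: cudasafe( cudaDeviceSynchronize() );
            stop_timer();                                           //placeholder timing helper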


Solution

I suggest two changes:

1) Move your cuBLAS handle creation/destruction outside of the copies and kernel invocations. It is possible that it is breaking concurrency by doing an unneeded context synchronization (see the combined sketch after the code below).

2) Do the memcpys together in one loop over the streams. That way, the B copy in stream 0 does not do any extra synchronization waiting for the A memcpy to complete, i.e. do this:

        for(i = 0; i < nstreams; i++) {
                cudgemm_copyA_in_streams (&plan[i]);
                cudgemm_copyB_in_streams (&plan[i]);
        }

not this:

        for(i = 0; i < nstreams; i++)
                cudgemm_copyA_in_streams (&plan[i]);
        for(i = 0; i < nstreams; i++)
                cudgemm_copyB_in_streams (&plan[i]);
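Putting the two changes together, the outer flow might look roughly like this (a sketch only, reusing the helper names from the question and synchronizing before the handles are destroyed):

        //Sketch combining both changes: handles created once, per-stream copies batched together
        for(i = 0; i < nstreams; i++)
                handle[i] = create_handle();            //handle creation outside the streamed section

        for(i = 0; i < nstreams; i++) {
                cudgemm_copyA_in_streams (&plan[i]);    //async H2D copies issued back to back
                cudgemm_copyB_in_streams (&plan[i]);
        }

        for(i = 0; i < nstreams; i++) {
                cudgemm_kernel_in_streams (&plan[i], handle[i], 1.0, 1.0);
                cudgemm_copyC_in_streams (&plan[i]);    //async D2H copy in the same stream
        }

        for(i = 0; i < nstreams; i++)
                cudasafe( cudaStreamSynchronize(plan[i].stream) );

        for(i = 0; i < nstreams; i++)
                destroy_handle (handle[i]);             //destroy only after all work has completed

Creating and destroying the handles outside the streamed section keeps any implicit synchronization in cublasCreate/cublasDestroy from serializing the copies and kernels.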

Don't be surprised if you are unable to get a speedup of more than 40% or so from overlapping transfers and computation. Streams deliver the biggest benefits on workloads that spend equal time transferring and processing data, and very few workloads fall into that category.
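For example, with transfer time t and compute time c per chunk, the serial version takes roughly t + c while perfect overlap takes about max(t, c); even in the ideal case t = c that only halves the runtime, and when one side dominates the achievable gain shrinks accordingly.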

OTHER TIPS

I would also suggest checking the SIZE of the copies: you should start using different streams only when the time to transfer one block of memory is comparable to the time needed to compute on it. If the transfer time is small compared to the computation time, adding streams just adds more management overhead. Use the Visual Profiler to see how long the various steps take, and make a graph with different memory inputs.
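A rough way to compare the two times for one block is with CUDA events (a sketch only; buffer, handle and size variables such as dA, dB, dC, hA, rows and cols are hypothetical, and their setup is omitted):

        //Sketch: time one H2D transfer vs one DGEMM on the same block with CUDA events
        cudaEvent_t start, stop;
        float copy_ms, gemm_ms;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(dA, hA, rows * cols * sizeof(double), cudaMemcpyHostToDevice);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&copy_ms, start, stop);

        cudaEventRecord(start, 0);
        cublasDgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, rows, cols, cols,
                    &alpha, dA, rows, dB, cols, &beta, dC, rows);
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&gemm_ms, start, stop);

        //streams only pay off when copy_ms is comparable to gemm_ms
        printf("copy: %.3f ms, dgemm: %.3f ms\n", copy_ms, gemm_ms);

Repeating this for several matrix sizes gives the graph suggested above.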

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow