Question

This piece of CUDA code reports lots of bank conflicts when analysed with Nsight. The first snippet contains the constant definitions and the kernel launch:

// Front update related constants
#define NDEQUES 6
#define FRONT_UPDATE_THREADS 480
#define BVTT_DEQUE_SIZE 500000
#define FRONT_DEQUE_SIZE 5000000
#define FRONT_UPDATE_SHARED_SIZE (FRONT_UPDATE_THREADS*2)

updateFront<OBBNode , OBB , BVTT_DEQUE_SIZE , FRONT_DEQUE_SIZE , FRONT_UPDATE_THREADS>
    <<<NDEQUES, FRONT_UPDATE_THREADS>>>
    (d_aFront , d_aOutputFront , d_aiFrontCounts , d_aWorkQueues , d_aiWorkQueueCounts , d_collisionPairs ,
    d_collisionPairIndex , obbTree1 , d_triIndices1);

The second snippet has the kernel code:

template<typename TreeNode , typename BV , unsigned int uiGlobalWorkQueueCapacity , unsigned int uiGlobalFrontCapacity ,
unsigned int uiNThreads>
void __global__ updateFront(Int2Array *aFront , Int2Array *aOutputFront , int *aiFrontIdx , Int2Array *aWork_queues ,
int* aiWork_queue_counts , int2 *auiCollisionPairs , unsigned int *uiCollisionPairsIdx , const TreeNode* tree ,
uint3 *aTriIndices)
{
__shared__ unsigned int uiInputFrontIdx;
__shared__ unsigned int uiOutputFrontIdx;
__shared__ unsigned int uiWorkQueueIdx;

__shared__ int          iLeafLeafOffset;
__shared__ int          iNode0GreaterOffset;
__shared__ int          iNode1GreaterOffset;

__shared__ int          aiLeafLeafFrontX[FRONT_UPDATE_SHARED_SIZE];
__shared__ int          aiLeafLeafFrontY[FRONT_UPDATE_SHARED_SIZE];

__shared__ int          aiNode0GreaterFrontX[FRONT_UPDATE_SHARED_SIZE];
__shared__ int          aiNode0GreaterFrontY[FRONT_UPDATE_SHARED_SIZE];

__shared__ int          aiNode1GreaterFrontX[FRONT_UPDATE_SHARED_SIZE];
__shared__ int          aiNode1GreaterFrontY[FRONT_UPDATE_SHARED_SIZE];

if(threadIdx.x == 0)
{
    uiInputFrontIdx = aiFrontIdx[blockIdx.x];
    uiOutputFrontIdx = 0;
    uiWorkQueueIdx = aiWork_queue_counts[blockIdx.x];

    iLeafLeafOffset = 0;
    iNode0GreaterOffset = 0;
    iNode1GreaterOffset = 0;
}
__syncthreads();

unsigned int uiThreadOffset = threadIdx.x;

while(uiThreadOffset < uiInputFrontIdx + FRONT_UPDATE_THREADS - (uiInputFrontIdx % FRONT_UPDATE_THREADS))
{
    if(uiThreadOffset < uiInputFrontIdx)
    {
        int2 bvttNode;

        aFront->getElement(bvttNode , blockIdx.x*FRONT_DEQUE_SIZE + uiThreadOffset);

        TreeNode node0 = tree[bvttNode.x];
        TreeNode node1 = tree[bvttNode.y];

        if(node0.isLeaf() && node1.isLeaf())
        {
            int iOffset = atomicAdd(&iLeafLeafOffset , 1);

            //Bank conflict source
            aiLeafLeafFrontX[iOffset] = bvttNode.x;
            aiLeafLeafFrontY[iOffset] = bvttNode.y;
            //End of bank conflict source
        }
        else if(node1.isLeaf() || (!node0.isLeaf() && (node0.bbox.getSize() > node1.bbox.getSize())))
        { // node0 is bigger. Subdivide it.
            int iOffset = atomicAdd(&iNode0GreaterOffset , 1);

            //Bank conflict source
            aiNode0GreaterFrontX[iOffset] = bvttNode.x;
            aiNode0GreaterFrontY[iOffset] = bvttNode.y;
            //End of bank conflict source
        }
        else
        { // node1 is bigger. Subdivide it.
            int iOffset = atomicAdd(&iNode1GreaterOffset , 1);

            //Bank conflict source
            aiNode1GreaterFrontX[iOffset] = bvttNode.x;
            aiNode1GreaterFrontY[iOffset] = bvttNode.y;
            //End of bank conflict source
        }
    }

    __syncthreads();

    /* ... */

    uiThreadOffset += uiNThreads;
    __syncthreads();
}
}

I want to know why the bank conflicts are occurring. The only way I can see conflicts happening is if accesses to different arrays that map to the same bank were being serialized.
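For reference, shared memory on compute capability 2.x devices is organized as 32 banks of consecutive 32-bit words (16 banks on 1.x), and a conflict only occurs between threads of the same warp within a single instruction. A small illustration of the mapping (the helper name is mine, not from the code above):

// Bank of a 32-bit shared-memory word on a 32-bank device (CC 2.x+).
// Two threads of the same warp conflict only when the same instruction
// makes them address different 32-bit words in the same bank.
__device__ unsigned int bankOf(unsigned int uiWordIndex)
{
    return uiWordIndex % 32;  // 16 banks on compute capability 1.x
}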

Solution

I see two possibilities; further testing is required to determine which one is the culprit:

  • The bank conflicts are not coming from the location you marked, but from the atomicAdd operations, which also operate on shared memory. I believe atomics on shared memory can increment the hardware's conflict counters as well (I have not tested this belief; a minimal isolation kernel is sketched after this list).

  • You hit a situation where two or more warps are atomically incrementing the same value. This may be possible on newer hardware, which runs 2 or 4 warps at the same time (testing is required to confirm or refute this as well). As a result, threads within one warp may end up with quite distant iOffset values, producing seemingly random bank conflicts; a warp-aggregated alternative is sketched further below.
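One way to separate these effects is to profile a kernel whose only shared-memory traffic (apart from the one-time initialization) is the atomicAdd itself. A diagnostic sketch, with names of my own choosing:

__global__ void atomicOnlyTest(int *piOut)
{
    __shared__ int iCounter;

    if (threadIdx.x == 0)
        iCounter = 0;
    __syncthreads();

    // Apart from the initialization above, this atomic is the kernel's
    // only shared-memory access. If Nsight still reports shared bank
    // conflicts here, the atomics are what the counter is measuring.
    int iOffset = atomicAdd(&iCounter, 1);

    // Store the result so the compiler cannot optimize the atomic away.
    piOut[blockIdx.x * blockDim.x + threadIdx.x] = iOffset;
}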

However, if either of the above is true, I wouldn't worry about the conflicts much. In the first case, atomicAdd hurts your performance anyway. In the latter case, I wouldn't expect more than 2-way bank conflicts to occur often, unless you hit some really rare corner case.
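If the scattered iOffset values turn out to be the cause, a warp-aggregated counter avoids both problems: one lane per warp performs the atomicAdd, and the lanes receive consecutive offsets, so the subsequent stores from a warp land in consecutive banks. A minimal sketch, assuming warp shuffle support (compute capability 3.0+ with the CUDA 9+ *_sync intrinsics; on the Fermi-era hardware of the question the base offset would have to be broadcast through a per-warp shared slot instead):

__device__ int warpAggregatedAdd(int *piCounter)
{
    unsigned int uiActive = __activemask();           // lanes taking this branch
    int iLane   = threadIdx.x & 31;
    int iLeader = __ffs(uiActive) - 1;                // lowest active lane
    int iRank   = __popc(uiActive & ((1u << iLane) - 1));

    int iBase = 0;
    if (iLane == iLeader)
        iBase = atomicAdd(piCounter, __popc(uiActive));  // one atomic per warp
    iBase = __shfl_sync(uiActive, iBase, iLeader);       // broadcast the base

    return iBase + iRank;  // consecutive offsets across the warp
}

With this helper, int iOffset = warpAggregatedAdd(&iLeafLeafOffset); would replace the per-thread atomicAdd in the kernel above, and the writes to aiLeafLeafFrontX/aiLeafLeafFrontY from one warp would hit consecutive indices.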
