Question

I need to find information about how the Unified Shader Array accesses GPU memory so that I can use it effectively. The architecture diagram of my graphics card doesn't show it clearly.


I need to load a big image into GPU memory using C++ AMP and divide it into small pieces (for example, 4x4 pixels), so that every piece is computed by a different thread. I don't know how the threads share access to the image.


Is there any way of doing this so that the threads don't block each other while accessing the image? Perhaps each thread has its own memory that can be accessed exclusively?


Or is access to unified memory fast enough that I shouldn't care about it (I doubt that, though)? This really matters, because I need to compute about 10k subsets for every image.


Solution

For C++ AMP you want to load the data that each thread within a tile uses into tile_static memory before starting your convolution calculation. Because each thread reads pixels that are also read by other threads, this lets you perform a single read per pixel from (slow) global memory and cache it in (fast) tile_static memory, so that all subsequent reads are fast.

You can see an example of tiling for convolution here. The DetectEdgeTiled method loads all the data it requires and then calls idx.barrier.wait() to ensure that all threads have finished writing into tile_static memory. It then executes the edge detection code, taking advantage of tile_static memory. There are many other examples of this pattern in the samples. Note that the loading code in DetectEdgeTiled is complex only because it must account for the additional halo pixels around the edge of the tile being written, and it is essentially an unrolled loop, hence its length.

I'm not sure you are thinking about the problem in quite the right way. There are two levels of partitioning here. To calculate the new value for each pixel, the thread doing that work reads the block of surrounding pixels. In addition, blocks (tiles) of threads load larger blocks of pixel data into tile_static memory. Each thread in the tile then calculates the result for one pixel within the block.

void ApplyEdgeDetectionTiledHelper(const array<ArgbPackedPixel, 2>& srcFrame, 
                                   array<ArgbPackedPixel, 2>& destFrame)
{    
    tiled_extent<tileSize, tileSize> computeDomain = GetTiledExtent(srcFrame.extent);
    parallel_for_each(computeDomain.tile<tileSize, tileSize>(), [=, &srcFrame, &destFrame](tiled_index<tileSize, tileSize> idx) restrict(amp) 
    {
        DetectEdgeTiled(idx, srcFrame, destFrame);
    });
}

void DetectEdgeTiled(
    tiled_index<tileSize, tileSize> idx, 
    const array<ArgbPackedPixel, 2>& srcFrame, 
    array<ArgbPackedPixel, 2>& destFrame) restrict(amp)
{
    const UINT shift = imageBorderWidth / 2;
    const UINT startHeight = 0;
    const UINT startWidth = 0;
    const UINT endHeight = srcFrame.extent[0];    
    const UINT endWidth = srcFrame.extent[1];

    tile_static RgbPixel localSrc[tileSize + imageBorderWidth]
        [tileSize + imageBorderWidth];

    const UINT global_idxY = idx.global[0];
    const UINT global_idxX = idx.global[1];
    const UINT local_idxY = idx.local[0];
    const UINT local_idxX = idx.local[1];

    const UINT local_idx_tsY = local_idxY + shift;
    const UINT local_idx_tsX = local_idxX + shift;

    // Copy image data to tile_static memory. The if clauses are required to deal with threads that own a
    // pixel close to the edge of the tile and need to copy additional halo data.

    // This pixel
    index<2> gNew = index<2>(global_idxY, global_idxX);
    localSrc[local_idx_tsY][local_idx_tsX] = UnpackPixel(srcFrame[gNew]);

    // Left edge
    if (local_idxX < shift)
    {
        index<2> gNew = index<2>(global_idxY, global_idxX - shift);
        localSrc[local_idx_tsY][local_idx_tsX-shift] = UnpackPixel(srcFrame[gNew]);
    }
    // Right edge
    // Top edge
    // Bottom edge
    // Top Left corner
    // Bottom Left corner
    // Bottom Right corner
    // Top Right corner

    // Synchronize all threads so that none of them start calculation before 
    // all data is copied onto the current tile.

    idx.barrier.wait();

    // Make sure that the thread is not referring to a border pixel 
    // for which the filter cannot be applied.
    if ((global_idxY >= startHeight + 1 && global_idxY <= endHeight - 1) && 
        (global_idxX >= startWidth + 1 && global_idxX <= endWidth - 1))
    {
        RgbPixel result = Convolution(localSrc, index<2>(local_idx_tsY, local_idx_tsX));
        destFrame[index<2>(global_idxY, global_idxX)] = result;
    }
}

This code was taken from CodePlex; I stripped out much of the real implementation to make it clearer.

WRT @sharpneli's answer, you can use texture<> in C++ AMP to achieve the same result as OpenCL images. There is also an example of this on CodePlex.

OTHER TIPS

In this particular case you do not have to worry. Just use OpenCL images. GPUs are extremely good at simply reading images (thanks to texturing hardware). However, this method requires writing the result into a separate image, because you cannot read and write the same image in a single kernel. Use this approach if you can perform the computation as a single pass (no need to iterate).

Another way is to access the image as a normal memory buffer, load the parts needed by a wavefront (a group of threads running in lockstep) into local memory (which is blazingly fast), perform the computation, and write the complete result back to unified memory afterwards. Use this approach if you need to read and write values in the same image while computing. If you are not memory bound, you can still read the original values from a texture, iterate in local memory, and write the end results to a separate image.

Reads from unified memory are slow only if the pointer is not const * restrict and multiple threads read the same location. In general, if subsequent thread IDs read subsequent locations, reads are quite fast. However, if your threads both read from and write to unified memory, it will be slow.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow