Question

I'm teaching myself OpenCL by trying to optimize the mpeg4dst reference audio encoder. I achieved a 3x speedup by using vector instructions on CPU but I figured the GPU could probably do better.

I'm focusing on computing auto-correlation vectors in OpenCL as my first area of improvement. The CPU code is:

for (int i = 0; i < NrOfChannels; i++) {
    for (int shift = 0; shift <= PredOrder[ChannelFilter[i]]; shift++)
        vDSP_dotpr(Signal[i] + shift, 1, Signal[i], 1, &out, NrOfChannelBits - shift);
}
where, on my test input:

NrOfChannels = 6
PredOrder[ChannelFilter[i]] = 129
NrOfChannelBits = 150528

On my test file, this function takes approximately 188ms to complete.

Here's my OpenCL method:

kernel void calculateAutocorrelation(size_t offset,
                                     global const float *input,
                                     global float *output,
                                     size_t size) {
    size_t index = get_global_id(0);
    size_t end = size - index;
    float sum = 0.0f;

    for (size_t i = 0; i < end; i++)
        sum += input[i + offset] * input[i + offset + index];

    output[index] = sum;
}

This is how it is called:

gcl_memcpy(gpu_signal_in, Signal, sizeof(float) * NrOfChannels * MAXCHBITS);

for (int i = 0; i < NrOfChannels; i++) {
    size_t sz = PredOrder[ChannelFilter[i]] + 1;
    cl_ndrange range = { 1, { 0, 0, 0 }, { sz, 0, 0 }, { 0, 0, 0 } };

    calculateAutocorrelation_kernel(&range, i * MAXCHBITS, (cl_float *)gpu_signal_in, (cl_float *)gpu_out, NrOfChannelBits);
    gcl_memcpy(out, gpu_out, sizeof(float) * sz);
}

According to Instruments, my OpenCL implementation seems to take about 13ms, with about 54ms of memory copy overhead (gcl_memcpy).

When I use a much larger test file (1 minute of 2-channel music vs. 1 second of 6-channel), the measured performance of the OpenCL code stays about the same, but CPU usage falls to about 50% and the whole program takes about 2x longer to run.

I can't find a cause for this in Instruments and I haven't read anything yet that suggests that I should expect very heavy overhead switching in and out of OpenCL.


Solution

If I'm reading your kernel code correctly, each work item iterates over all of the data from its own location to the end of the buffer. This isn't going to be efficient. First (and this is the primary performance concern), the memory accesses won't be coalesced, so you won't get full memory bandwidth. Second, because each work item has a different amount of work, there will be branch divergence within a work group, leaving some threads idle while they wait for the others.

This has a lot in common with a reduction problem, and I'd suggest reading up on "parallel reduction" for hints on doing an operation like this in parallel.

To see how memory is being read, work out how 16 work items (say, global_id 0 to 15) will read data at each step of the loop.

Note that if every work item in a work group accesses the same memory location, there is a "broadcast" optimization the hardware can make. So just reversing the order of your loop could improve things.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow