Question

I tried to port some code from the FANN library (a neural network library written in C) to SSE2, but the SSE2 version performs worse than the plain code: one run takes 5.50 min with my SSE2 implementation versus 5.20 min without it.

How can the SSE2 code be slower than the normal code? Could it be because of the _mm_set_ps? I use the Apple LLVM compiler (Xcode 4) to compile the code (all SSE extension flags are on, optimization level is -Os).

Code without SSE2

                neuron_sum +=
                fann_mult(weights[i], neurons[i].value) +
                fann_mult(weights[i + 1], neurons[i + 1].value) +
                fann_mult(weights[i + 2], neurons[i + 2].value) +
                fann_mult(weights[i + 3], neurons[i + 3].value);

SSE2 code

                __m128 a_line = _mm_loadu_ps(&weights[i]);  /* unaligned load of 4 weights */
                /* 4 scalar loads gathered into one vector */
                __m128 b_line = _mm_set_ps(neurons[i+3].value, neurons[i+2].value, neurons[i+1].value, neurons[i].value);
                __m128 c_line = _mm_mul_ps(a_line, b_line);
                /* horizontal sum performed on every iteration */
                neuron_sum += c_line[0] + c_line[1] + c_line[2] + c_line[3];

Solution

To have any chance of seeing a speedup here you need to do the following:

  • make sure weights[i] is 16-byte aligned and then use _mm_load_ps instead of _mm_loadu_ps
  • reorganise neurons[] so that it is SoA (structure-of-arrays) instead of AoS (array-of-structures), i.e. pull the .value fields out into their own contiguous, 16-byte-aligned float array, and then use _mm_load_ps to load 4 values at a time
  • move the horizontal sum out of the loop (there is a loop, right?) - just keep 4 partial sums in a vector vneuron_sum and then do one final horizontal sum on this vector after the loop (see the sketch below)

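Putting those three points together, a minimal sketch might look like this (assuming n is a multiple of 4, both arrays are 16-byte aligned, and the value fields have been copied out of the neuron structs into their own contiguous float array; the names here are illustrative, not FANN's actual ones):

    #include <emmintrin.h>  /* SSE2 intrinsics */

    /* Sketch only: assumes `weights` and `neuron_values` are 16-byte
       aligned and `n` is a multiple of 4. `neuron_values` is the SoA
       layout: the .value fields copied out of the neuron structs into
       one contiguous float array. */
    static float dot_product_sse(const float *weights,
                                 const float *neuron_values,
                                 unsigned n)
    {
        __m128 vneuron_sum = _mm_setzero_ps();  /* 4 partial sums */

        for (unsigned i = 0; i < n; i += 4)
        {
            __m128 a_line = _mm_load_ps(&weights[i]);        /* aligned load */
            __m128 b_line = _mm_load_ps(&neuron_values[i]);  /* aligned load */
            vneuron_sum = _mm_add_ps(vneuron_sum, _mm_mul_ps(a_line, b_line));
        }

        /* One horizontal sum, after the loop: (s0+s2) + (s1+s3). */
        __m128 t = _mm_add_ps(vneuron_sum,
                              _mm_movehl_ps(vneuron_sum, vneuron_sum));
        t = _mm_add_ss(t, _mm_shuffle_ps(t, t, 1));
        return _mm_cvtss_f32(t);
    }

For the alignment, posix_memalign (available on OS X and Linux) is one way to get a 16-byte-aligned allocation, e.g. posix_memalign((void **)&weights, 16, n * sizeof(float)).
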
Even then, you won't see a huge speed-up, as you're only doing one arithmetic operation for every 2 loads and 1 store. Since most modern x86 CPUs have two scalar FPUs anyway, you probably won't get close to the theoretical 4x speed-up for 128-bit float SIMD; I'd expect no more than, say, a 50% speed-up relative to scalar code.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow