Question

I am trying to improve this code with the SSE4 dot product, but I am having a hard time finding a solution. This function gets the parameters qi and tj, which each contain a float array with 80 cells, and then calculates the dot products. The return value is a vector with four dot products. So what I'm trying to do is calculate four dot products of twenty values each in parallel.

Have you any idea how to improve this code?

inline __m128 ScalarProd20Vec(__m128* qi, __m128* tj)
{
    __m128 res=_mm_add_ps(_mm_mul_ps(tj[0],qi[0]),_mm_mul_ps(tj[1],qi[1]));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[2],qi[2]),_mm_mul_ps(tj[3],qi[3])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[4],qi[4]),_mm_mul_ps(tj[5],qi[5])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[6],qi[6]),_mm_mul_ps(tj[7],qi[7])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[8],qi[8]),_mm_mul_ps(tj[9],qi[9])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[10],qi[10]),_mm_mul_ps(tj[11],qi[11])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[12],qi[12]),_mm_mul_ps(tj[13],qi[13])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[14],qi[14]),_mm_mul_ps(tj[15],qi[15])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[16],qi[16]),_mm_mul_ps(tj[17],qi[17])));
    res=_mm_add_ps(res,_mm_add_ps(_mm_mul_ps(tj[18],qi[18]),_mm_mul_ps(tj[19],qi[19])));
    return res;
}
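
For reference, here is a plain scalar sketch of what the function computes, assuming qi and tj each point to 20 __m128 values (80 floats) with the four vectors interleaved lane by lane (ScalarProd20Ref is just an illustrative name, not part of the original code):

// Illustrative scalar equivalent: four independent 20-element dot
// products, one per SSE lane of the interleaved data.
inline void ScalarProd20Ref(const float* qi, const float* tj, float out[4])
{
    for (int lane = 0; lane < 4; ++lane) {
        float sum = 0.0f;
        for (int i = 0; i < 20; ++i)
            sum += qi[4 * i + lane] * tj[4 * i + lane];
        out[lane] = sum;
    }
}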

Solution

Of the hundreds of SSE examples I've seen on SO, your code is one of the few that's already in pretty good shape from the start. You don't need the SSE4 dot-product instruction. (You can do better!)

However, there is one thing you can try: (I say try because I haven't timed it yet.)

Currently you have a data-dependency chain on res. Packed floating-point addition (addps) has a latency of 3-4 cycles on most machines today, so your code will take a minimum of about 30 cycles to run, since you have:

(10 additions on critical path) * (3 cycles addps latency) = 30 cycles

What you can do is node-split the res variable, i.e. accumulate into two independent registers, as follows:

__m128 res0 = _mm_add_ps(_mm_mul_ps(tj[ 0],qi[ 0]),_mm_mul_ps(tj[ 1],qi[ 1]));
__m128 res1 = _mm_add_ps(_mm_mul_ps(tj[ 2],qi[ 2]),_mm_mul_ps(tj[ 3],qi[ 3]));

res0 = _mm_add_ps(res0,_mm_add_ps(_mm_mul_ps(tj[ 4],qi[ 4]),_mm_mul_ps(tj[ 5],qi[ 5]))); 
res1 = _mm_add_ps(res1,_mm_add_ps(_mm_mul_ps(tj[ 6],qi[ 6]),_mm_mul_ps(tj[ 7],qi[ 7])));

res0 = _mm_add_ps(res0,_mm_add_ps(_mm_mul_ps(tj[ 8],qi[ 8]),_mm_mul_ps(tj[ 9],qi[ 9])));
res1 = _mm_add_ps(res1,_mm_add_ps(_mm_mul_ps(tj[10],qi[10]),_mm_mul_ps(tj[11],qi[11])));

res0 = _mm_add_ps(res0,_mm_add_ps(_mm_mul_ps(tj[12],qi[12]),_mm_mul_ps(tj[13],qi[13])));
res1 = _mm_add_ps(res1,_mm_add_ps(_mm_mul_ps(tj[14],qi[14]),_mm_mul_ps(tj[15],qi[15])));

res0 = _mm_add_ps(res0,_mm_add_ps(_mm_mul_ps(tj[16],qi[16]),_mm_mul_ps(tj[17],qi[17])));
res1 = _mm_add_ps(res1,_mm_add_ps(_mm_mul_ps(tj[18],qi[18]),_mm_mul_ps(tj[19],qi[19])));

return _mm_add_ps(res0,res1);

This almost cuts your critical path in half. Note that because floating-point addition is not associative, compilers are not allowed to perform this optimization on their own (unless you opt in with something like -ffast-math).


Here's an alternative version using 4-way node-splitting and the AMD FMA4 instructions. If you can't use the fused multiply-adds, feel free to split them back into separate multiplies and adds (a plain-SSE sketch follows the code below). It might still be better than the first version above.

__m128 res0 = _mm_mul_ps(tj[ 0],qi[ 0]);
__m128 res1 = _mm_mul_ps(tj[ 1],qi[ 1]);
__m128 res2 = _mm_mul_ps(tj[ 2],qi[ 2]);
__m128 res3 = _mm_mul_ps(tj[ 3],qi[ 3]);

res0 = _mm_macc_ps(tj[ 4],qi[ 4],res0);
res1 = _mm_macc_ps(tj[ 5],qi[ 5],res1);
res2 = _mm_macc_ps(tj[ 6],qi[ 6],res2);
res3 = _mm_macc_ps(tj[ 7],qi[ 7],res3);

res0 = _mm_macc_ps(tj[ 8],qi[ 8],res0);
res1 = _mm_macc_ps(tj[ 9],qi[ 9],res1);
res2 = _mm_macc_ps(tj[10],qi[10],res2);
res3 = _mm_macc_ps(tj[11],qi[11],res3);

res0 = _mm_macc_ps(tj[12],qi[12],res0);
res1 = _mm_macc_ps(tj[13],qi[13],res1);
res2 = _mm_macc_ps(tj[14],qi[14],res2);
res3 = _mm_macc_ps(tj[15],qi[15],res3);

res0 = _mm_macc_ps(tj[16],qi[16],res0);
res1 = _mm_macc_ps(tj[17],qi[17],res1);
res2 = _mm_macc_ps(tj[18],qi[18],res2);
res3 = _mm_macc_ps(tj[19],qi[19],res3);

res0 = _mm_add_ps(res0,res1);
res2 = _mm_add_ps(res2,res3);

return _mm_add_ps(res0,res2);
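
If FMA4 isn't available, the same 4-way split can be written with plain SSE multiplies and adds; here is a rough, untimed sketch (the loop should be unrolled by the compiler):

__m128 res0 = _mm_mul_ps(tj[0], qi[0]);
__m128 res1 = _mm_mul_ps(tj[1], qi[1]);
__m128 res2 = _mm_mul_ps(tj[2], qi[2]);
__m128 res3 = _mm_mul_ps(tj[3], qi[3]);

for (int i = 4; i < 20; i += 4) {
    // Four independent accumulators keep the addps dependency chains short.
    res0 = _mm_add_ps(res0, _mm_mul_ps(tj[i    ], qi[i    ]));
    res1 = _mm_add_ps(res1, _mm_mul_ps(tj[i + 1], qi[i + 1]));
    res2 = _mm_add_ps(res2, _mm_mul_ps(tj[i + 2], qi[i + 2]));
    res3 = _mm_add_ps(res3, _mm_mul_ps(tj[i + 3], qi[i + 3]));
}

return _mm_add_ps(_mm_add_ps(res0, res1), _mm_add_ps(res2, res3));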

OTHER TIPS

Firstly, the most important optimisation you can do is to make sure your compiler has all its optimisation settings turned on (with GCC, for example, -O3 and an appropriate target flag such as -msse4.1).


Compilers are pretty smart, so if you write it as a loop, the compiler is likely to unroll it:

__m128 res = _mm_setzero_ps();
for (int i = 0; i < 10; i++) {
  res = _mm_add_ps(res, _mm_add_ps(_mm_mul_ps(tj[2*i], qi[2*i]), _mm_mul_ps(tj[2*i+1], qi[2*i+1])));
}
return res;

(With GCC you need to pass -funroll-loops, and then it'll unroll it to do 5 iterations at a time.)

You could also define a macro and unroll it by hand, if the loop version is slower, e.g.:

__m128 res = _mm_setzero_ps();

#define STEP(i) res = _mm_add_ps(res, _mm_add_ps(_mm_mul_ps(tj[2*i], qi[2*i]), _mm_mul_ps(tj[2*i+1], qi[2*i+1])))

STEP(0); STEP(1); STEP(2); STEP(3); STEP(4);
STEP(5); STEP(6); STEP(7); STEP(8); STEP(9);

#undef STEP

return res;

You could even run the loop from 0 to 20 (or do the same with the macro version), i.e.:

__m128 res = _mm_setzero_ps();
for (int i = 0; i < 20; i++) {
  res = _mm_add_ps(res, _mm_mul_ps(tj[i], qi[i]));
}
return res;

(With GCC and -funroll-loops this is unrolled to do 10 iterations at a time, i.e. the same as the two-at-a-time loop above.)

Your data isn't arranged in memory in a suitable format for the specialised SSE4 dot-product instruction (dpps). That instruction expects the dimensions of a single vector to be adjacent, like this:

| dim0 | dim1 | dim2 | ... | dim19 |

whereas your data appears to have the vectors interleaved with each other:

| v0-dim0 | v1-dim0 | v2-dim0 | v3-dim0 | v0-dim1 | ...

Your current general approach seems appropriate. You might be able to improve things by reordering instructions so that the results of multiplications aren't used immediately after they're generated, but really the compiler ought to be able to figure that out on its own.
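
For comparison, if each vector's 20 floats were stored contiguously, a dpps-based version might look roughly like this (a sketch only, not timed; it needs SSE4.1, and you would still have to call it four times, once per vector, which is why your current layout is actually the better fit):

#include <smmintrin.h>  // SSE4.1 for _mm_dp_ps

// Hypothetical contiguous layout: q and t each point to 20 adjacent floats.
inline float ScalarProd20Contiguous(const float* q, const float* t)
{
    __m128 acc = _mm_setzero_ps();
    for (int i = 0; i < 20; i += 4) {
        // 0xF1: multiply all four lanes, put the sum in lane 0.
        acc = _mm_add_ss(acc, _mm_dp_ps(_mm_loadu_ps(q + i),
                                        _mm_loadu_ps(t + i), 0xF1));
    }
    return _mm_cvtss_f32(acc);
}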

Licensed under: CC-BY-SA with attribution