Question

I am doing conditional computations on a Xeon Phi using intrinsic functions. I have to use double values, so I need a __mmask8. As long as I only use the compare functions there is no problem, but as soon as I want to modify those masks I run into type conflicts. While the documentation gives me plenty of functions for modifying the __mmask16 used for single precision, there is not a single one for double precision.

I want to do something like the following:

int i, tmp = 0;
for (i = 0; i < 8; i++) {
    tmp = (tmp << 1) | index[i];   /* collect one bit per element */
}
__mmask8 something = _mm512_int2mask(tmp);

The documentation provides that function only for __mmask16, and the same holds for all of the manipulation functions in the Vector Mask Intrinsics chapter of the documentation.

Can I use those functions for __mmask8 as well?

Is there a convention like "use every second bit of a __mmask16"?

Thanks in advance


Solution

According to http://software.intel.com/en-us/articles/intel-xeon-phi-coprocessor-vector-microarchitecture

Each VPU has 128 entry 512-bit vector registers divided up among the threads, thus getting 32 entries per thread. These are hard-partitioned. There are eight 16-bit mask registers per thread which are part of the vector register file. The mask registers act as a filter per element for the 16 elements and thus allows one to control which of the 16 32-bit elements are active during a computation. For double precision the mask bits are the bottom 8 bits.
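In other words, a comparison on a vector of 8 doubles only ever sets the bottom 8 bits of the 16-bit mask register. A minimal sketch of what that looks like (untested here, and assuming the KNC compare intrinsic _mm512_cmplt_pd_mask and the conversion _mm512_mask2int behave as documented):

#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    /* 8 doubles per 512-bit vector, so a double compare yields at most 8 mask bits. */
    __m512d a = _mm512_set1_pd(1.0);
    __m512d b = _mm512_set1_pd(2.0);

    __mmask8 k = _mm512_cmplt_pd_mask(a, b);    /* all 8 lanes satisfy a < b */

    /* Only bits 0..7 can be set; this prints 0xff, never anything above. */
    printf("mask = 0x%x\n", _mm512_mask2int(k));
    return 0;
}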

Intel doesn't provide any intrinsics for operating on __mmask8 types; all of the mask-manipulation intrinsics take __mmask16. Therefore I assume that we're expected to just use the __mmask16 intrinsics for manipulating __mmask8 values. This seems to work, but I've had very little experience with these so far.
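For example, the following sketch (untested, and assuming the usual KNC mask intrinsics _mm512_kand, _mm512_knot, _mm512_int2mask and _mm512_mask2int) passes __mmask8 values straight to the __mmask16 intrinsics and then uses the result to mask a double-precision operation:

#include <immintrin.h>

/* Combine double-precision comparison masks with the __mmask16 intrinsics
   and use the result to gate an addition. a, b and out must be 64-byte aligned. */
void masked_add(const double *a, const double *b, double *out)
{
    __m512d va = _mm512_load_pd(a);
    __m512d vb = _mm512_load_pd(b);

    __mmask8 lt = _mm512_cmplt_pd_mask(va, vb);   /* a[i] < b[i] */

    /* _mm512_knot also sets bits 8..15, but those are ignored for doubles. */
    __mmask8 ge = _mm512_knot(lt);

    /* Mask arithmetic through the 16-bit intrinsics; only bits 0..7 matter. */
    __mmask8 sel = _mm512_kand(lt, _mm512_int2mask(0x0F)); /* keep lanes 0..3 only */

    /* Lanes where sel is set get a+b, the rest keep a. */
    __m512d sum = _mm512_mask_add_pd(va, sel, va, vb);
    _mm512_store_pd(out, sum);

    (void)ge; /* kept only to show _mm512_knot on a __mmask8 */
}

The implicit conversions between __mmask8 and __mmask16 compile without warnings because both are plain integer typedefs; the hardware simply ignores the upper 8 mask bits for double-precision operations.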

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow