Question

Okay, I've been banging my head against this for the last day, and I'm sure it's something simple, so here goes: why does this code not work? I'm using Xcode 3.2.5 with LLVM, and when I try to compile something like this:

uint16x8_t          testUnsigned = {1,2,3,4,5,6,7,8};
int16x8_t           testSigned;

testSigned = vreinterpretq_s16_u16(testUnsigned);

I get the error: "Assigning to 'int16x8_t' from incompatible type 'int'". All my other intrinsics work fine, but for some reason I can't reinterpret a vector. Any ideas? Thanks in advance.


Solution

As Hiroshi points out, there appears to be a bug with this particular call. However, since a vreinterpret is just a compile-time cast under the hood, you can go by way of any other vector type without any runtime penalty. For example, I tested this and it works:

testSigned = vreinterpretq_s16_f32(vreinterpretq_f32_u16(testUnsigned));
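
For reference, here is the workaround as a minimal self-contained sketch (the wrapper name is hypothetical), assuming arm_neon.h and a NEON-enabled ARM target:

#include <arm_neon.h>

/* Hypothetical helper: reinterpret a uint16x8_t as int16x8_t by
   round-tripping through float32x4_t. Both vreinterprets are pure
   type casts, so no instructions are emitted for them. */
static inline int16x8_t reinterpret_u16_as_s16(uint16x8_t v)
{
    return vreinterpretq_s16_f32(vreinterpretq_f32_u16(v));
}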

OTHER TIPS

/Developer/Platforms/iPhoneOS.platform/Developer/usr/llvm-gcc-4.2/lib/gcc/arm-apple-darwin10/4.2.1/include/arm_neon_gcc.h:6947

#define vreinterpretq_s16_u16(__a) \
  (int16x8_t)__builtin_neon_vreinterpretv8hiv8hi ((int16x8_t) __a)

It looks like the macro casts its argument to a signed type before handing it to the builtin, which smells like a bug. I'm not sure, but you could try:

testSigned = vreinterpretq_s16_u16((int16x8_t)testUnsigned);
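
For completeness, the same cast-based idea as a hypothetical helper. Note that this relies on the macro definition shown above and on GCC's vector extensions accepting a C-style cast between vector types of the same size; a compiler that defines the intrinsic as a proper function may reject the mismatched argument type:

#include <arm_neon.h>

/* Hypothetical helper: cast the argument to the signed type first,
   so the buggy macro's internal (int16x8_t) cast becomes a no-op.
   The vector-to-vector cast emits no instructions. */
static inline int16x8_t reinterpret_u16_as_s16_cast(uint16x8_t v)
{
    return vreinterpretq_s16_u16((int16x8_t)v);
}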