Question

I want to XOR a uint16_t with a floating-point number, like the following:

uint16_t a = 20000;
uint16_t c;
double r, x, xo;
r = 3.8;
xo = 0.1;
x = (int) r * xo * (1 - xo);
c = a ^ x;

When I compile this, the following error occurs:

invalid operand to binary ^

How can I convert x to a 16-bit integer value?


Solution

The problem is that x is still a double. In

x = (int) r * xo * (1 - xo);

the cast has higher precedence than *, so it applies only to r: the expression is evaluated as ((int) r) * xo * (1 - xo), which is a double. And even a cast around the whole expression would not help here, because assigning the result to the double variable x converts it back to a double.

To do what you want, declare x as an integer type, or cast it right before the XOR:

c = a ^ ((int) x);
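
For instance, a minimal complete sketch (assuming c is meant to be a uint16_t like a, which the question doesn't show):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint16_t a = 20000;
    double r = 3.8, xo = 0.1;
    double x = r * xo * (1 - xo);    /* 0.342 */
    uint16_t c = a ^ (uint16_t) x;   /* truncate to an integer, then XOR */
    printf("c = %u\n", (unsigned) c); /* prints 20000, since (uint16_t) 0.342 == 0 */
    return 0;
}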

OTHER TIPS

nesC is an extension of C, so you can convert a floating-point number to an integer just as you would in C: with a cast. For example, to round to the nearest integer:

(int)(x+0.5)

Note that this approach has limitations; see more details here: http://www.cs.tut.fi/~jkorpela/round.html
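
In particular, (int)(x + 0.5) only rounds correctly for non-negative values; for negative x it rounds the wrong way. A short sketch, assuming C99's lround from math.h is available on your platform (link with -lm):

#include <math.h>
#include <stdio.h>

int main(void) {
    printf("%d\n", (int)(2.7 + 0.5));   /* 3:  fine for non-negative values   */
    printf("%d\n", (int)(-2.7 + 0.5));  /* -2: wrong; -2.7 should round to -3 */
    printf("%ld\n", lround(-2.7));      /* -3: rounds halfway away from zero  */
    return 0;
}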

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow