Question

In C, sometimes we cast data type for arithmetic operations, for example:

int a = (int)b + (int)c;

What does a type cast look like to CPU? What instructions does type cast compile into (x86)? Does type cast harm CPU pipelines?


Solution

It depends on the types involved, obviously. Some casts are purely a matter of reinterpreting the same bits and compile to no instructions at all, e.g. unsigned int to int.

Others require 'widening' the data, propagating the sign bit into the high-order bits, for example signed char to int.

The x86 instructions used for this are cbw or cwde (or, between arbitrary registers, movsx). http://www.fermimn.gov.it/linux/quarta/x86/cbw.htm

ex: signed char 0b10000000 (-128) must become int 0b1111111110000000 (0xFF80, for a 16-bit int)

OTHER TIPS

If b is, say, a float, the compiler will generate code to convert it to an integer. On x86 this is a single instruction (cvttss2si with SSE, or fistp on the x87 FPU); on targets without hardware floating point, the compiler instead calls a library subroutine of the nature of convert_float_to_int(), which might be inlined if the routine is fairly short.

That's very architecture- and datatype-specific. Static casts like that could be a register-to-register move, a no-op, they could set or clear CPU flags, logically mask bytes, etc.

If b is a float, for example, then the temporary will have to be filled with whatever the CPU's float-to-integer conversion mechanism yields. If it's a signed char then it will be the two's-complement (sign-extended) value. If it's an unsigned char then the temporary will contain the value of b in its least significant byte and zeros in the more significant bytes.

Really the only way to tell is to look at the generated code (with gcc, the -S option). A floating-point conversion could certainly cause a pipeline stall or bubble. These days you have the complication that the value might even end up in a GPU.

Licensed under: CC-BY-SA with attribution