This cast has no effect; when you use + on a float and a char, the integer value of the char is converted to the closest float value. C has an implicit conversion between these two types. For example:
char ch = -5;
float f = ch; // f is now -5.0 (or the closest representable value to that)
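Putting that into a complete program (the names here are just for illustration, and this assumes plain char is signed on your platform, which is itself implementation-defined):

#include <stdio.h>

int main(void)
{
    char ch = -5;        /* assumes plain char is signed here */
    float f = ch;        /* implicit conversion: char -> float */
    float sum = f + ch;  /* ch is promoted and converted to float before the + */
    printf("f = %f, sum = %f\n", f, sum);  /* -5.000000 and -10.000000 */
    return 0;
}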
When you write n = (char)f;, the value of f is truncated towards zero, e.g.:
float f = -44.3;
char ch = f; // ch is now -44
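If you want to convince yourself that the truncation really is towards zero rather than flooring, a quick sketch (again assuming plain char is signed):

#include <stdio.h>

int main(void)
{
    char cp = (char)44.7f;    /* truncates towards zero: 44, not 45 */
    char cn = (char)-44.7f;   /* truncates towards zero: -44, not -45 */
    printf("%d %d\n", cp, cn);  /* prints: 44 -44 */
    return 0;
}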
However, if the value you are converting is outside the range of char, there is a problem: the behaviour is undefined (C11 6.3.1.4p1 says so for out-of-range floating-to-integer conversions), which means your compiler may do anything at all, with no obligation to document it. To be safe, you should check something like:
if (f < CHAR_MIN || f > CHAR_MAX)  /* CHAR_MIN and CHAR_MAX come from <limits.h> */
    ch = 0;  /* and maybe output a message */
else
    ch = f;
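As a rough sketch, that check could be wrapped into a helper; float_to_char and the fallback value 0 are just illustrative choices, not anything standard:

#include <limits.h>
#include <stdio.h>

/* Illustrative helper: returns 0 for out-of-range input so that the
   float-to-char conversion itself is always in range. */
char float_to_char(float f)
{
    if (f < CHAR_MIN || f > CHAR_MAX)
        return 0;      /* or signal an error, as suits your program */
    return (char)f;    /* in range: the fraction is simply discarded */
}

int main(void)
{
    printf("%d\n", float_to_char(-44.3f));  /* -44 */
    printf("%d\n", float_to_char(1e6f));    /* out of range: 0 */
    return 0;
}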
Regarding your claim about 16-bit chars: systems with 16-bit chars do exist, but they are very specialized (typically DSPs), which is why some people are doubting the claim.
To verify this, examine CHAR_BIT, defined in limits.h. Alternatively (if your implementation does not provide CHAR_BIT, CHAR_MIN and CHAR_MAX, which would be non-conforming, by the way) do this:
unsigned int u = (unsigned char)-1;
and output the value of u somehow, or perform a test on it. If you get 255 then char is 8-bit; if you get 65535 then it is 16-bit.
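Both checks together, as a runnable sketch:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("CHAR_BIT = %d\n", CHAR_BIT);

    /* Fallback: (unsigned char)-1 is the largest value an unsigned char
       can hold, i.e. 2^CHAR_BIT - 1. */
    unsigned int u = (unsigned char)-1;
    printf("max unsigned char = %u\n", u);  /* 255 -> 8-bit, 65535 -> 16-bit */
    return 0;
}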