Question

I would like to know one thing about type casting: as far as I know, a variable can be cast to another type in certain operations.

int c;
char i;
i = (char)c; 

This casts c to char and assigns the result to i, whereas c = (int)i; casts i to int and assigns it to c.

Is there any difference between the two operations mentioned above? What happens when a character is assigned as integer using type cast?


Solution 2

In C, char is a small integer type, usually 8 bits wide (whether plain char is signed or unsigned is implementation-defined, though it is signed on many platforms); int is a signed integer, usually 16 or 32 bits.

Doing

char c = X;
int i = (int)c;

copies the 8 bits of c into i and sign-extends the value, so -10 in c also yields -10 in i.

But doing

int i = X;
char c = (char) i;

will copy only the 8 least significant bits of i into c. The remaining bits of i are lost (using gcc -Wall gives a warning).

Other tips

Basically, when you cast from a wider type to a narrower type, truncation happens, i.e. data can be lost.

#include <stdio.h>

int main(void)
{
    char c;
    int i = 2000;

    c = (char)i;
    printf("%d\n", c);
    return 0;
}

Bit representation for i=2000

---> (MSB)0000011111010000(LSB)

So here char is 8 bits: when you cast int to char, only the 8 bits at the LSB end are stored and the remaining bits are truncated. I.e. 11010000 is the value that typically gets stored in c.

Take the two's complement of the above to get the value:

---> 00110000, i.e. 48; since the MSB was 1 and it is a signed char, the final value is

---> -48

#include <stdio.h>

int main(void)
{
    char c = 200; /* out of range for signed char; typically stores -56 */
    int i;

    i = (int)c;
    printf("%d\n", i);
    return 0;
}

Bit representation for c=200 is 11001000

So here int is 32 bits: when you cast char to int, the MSB gets extended. This is machine-dependent: if the MSB is 1, sign extension happens; if it is 0, the remaining bits are filled with zeros.

From K&R

There is one subtle point about the conversion of characters to integers. The language does not specify whether variables of type char are signed or unsigned quantities. When a char is converted to an int, can it ever produce a negative integer? The answer varies from machine to machine, reflecting differences in architecture. On some machines a char whose leftmost bit is 1 will be converted to a negative integer ("sign extension"). On others, a char is promoted to an int by adding zeros at the left end, and thus is always positive.

It makes no difference whether you write a cast or not, in this situation. The issue is conversion of a char to an int. In C, there is implicit conversion between these two types: you can just assign one to the other without a cast, and a conversion happens. A cast is an explicit conversion. In other words, i = c; and i = (int)c; are exactly the same.

To understand the conversion, it seems easier to me to think in terms of values, rather than representations, sign extension, etc. etc.

When you write i = c;, it means that the value of i should be the same as the value of c. If c was -4 then i will also be -4, regardless of which bits are set in memory to represent this.

This always works, because all possible char values are also valid int values.

However when you go c = i; you may find that i has a value that is not a valid char. For example if your compiler gives char a range of [-128, 127] (this is common, but not the only possibility), and i had a value of 150, then it is out of range.

When you assign an out-of-range value, what happens depends on whether the target type is signed or not.

  • If your chars are unsigned, then the value is adjusted modulo CHAR_MAX+1 until it is in range
  • If your chars are signed then the behaviour is implementation-defined.

The latter means that the compiler must document what happens. It is also permitted to raise a signal (similar to a segfault occurring, if you're not familiar with signals).

On typical systems, the compiler will take the lower 8 bits of 2's complement representation, but this is certainly not something you should rely on; in order to write robust code, you should avoid triggering this operation.

You can inspect the ranges for your types by doing #include <limits.h> and looking at CHAR_MIN and CHAR_MAX. (Those can be output via printf("%d", CHAR_MIN); etc.)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow