Question

I don't understand why the following code prints out 7 2 3 0. I expected it to print 1 9 7 1. Can anyone explain why it is printing 7 2 3 0?

unsigned int e = 197127;
unsigned char *f = (char *) &e;

printf("%ld\n", sizeof(e));
printf("%d ", *f);
f++;
printf("%d ", *f);
f++;
printf("%d ", *f);
f++;
printf("%d\n", *f);

Solution

Computers work in binary, not decimal, so 197127 is stored as a single binary number, not as a series of separate decimal digits.

197127 (decimal) = 0x00030207 (hex) = 0000 0000 0000 0011 0000 0010 0000 0111 (binary)

Assuming your system is little-endian, 0x00030207 is stored in memory as the byte sequence 0x07 0x02 0x03 0x00, which prints as 7 2 3 0, exactly as you observed, when you print out each byte.
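For reference, here is a minimal corrected sketch of the question's program (with the cast and the sizeof format specifier fixed, as discussed further below). Assuming a 4-byte unsigned int on a little-endian machine, it prints 4 followed by 7 2 3 0:

#include <stdio.h>

int main(void) {
    unsigned int e = 197127;
    unsigned char *f = (unsigned char *) &e;  /* matching cast */

    printf("%zu\n", sizeof(e));   /* sizeof yields size_t: use %zu */
    for (size_t i = 0; i < sizeof(e); i++)
        printf("%d ", f[i]);      /* one byte of the representation at a time */
    printf("\n");
    return 0;
}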

OTHER TIPS

Because with your method you print out the internal representation of the unsigned int, not its decimal representation.

Integers, like any other data, are represented as bytes internally. unsigned char is just another term for "byte" in this context. If you had represented your integer as decimal digits inside a string

char E[] = "197127";

and then done an analogous walk through the bytes, you would have seen the representation of the characters as numbers.

The binary representation of 197127 is 0000 0000 0000 0011 0000 0010 0000 0111. The bytes look like 00000111 (7 decimal), 00000010 (2), and 00000011 (3); the rest is 0.
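To illustrate that point, here is a minimal sketch (assuming an ASCII character set) that walks the bytes of the string: it prints the character codes of the digits, not the digits themselves.

#include <stdio.h>

int main(void) {
    char E[] = "197127";

    /* Each byte holds the character code of one decimal digit. */
    for (size_t i = 0; E[i] != '\0'; i++)
        printf("%d ", E[i]);   /* prints 49 57 55 49 50 55 in ASCII */
    printf("\n");
    return 0;
}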

Why did you expect 1 9 7 1? The hex representation of 197127 is 0x00030207, so on a little-endian architecture, the first byte will be 0x07, the second 0x02, the third 0x03, and the fourth 0x00, which is exactly what you're getting.

The value of e, 197127, is not a string representation. It is stored as a 16- or 32-bit integer (depending on the platform). So, in memory, e is allocated, say, 4 bytes on the stack, and is represented as 0x30207 (hex) at that memory location. In binary it looks like 11 0000 0010 0000 0111. Note that on a little-endian machine the byte order in memory is actually backwards. So, when you point f at &e, you are referencing the first byte of the numeric value. If you want to represent the number as a string, you should have

 char *e = "197127";

This has to do with the way the integer is stored, more specifically with byte ordering. Your system happens to use little-endian byte ordering, i.e. the first byte of a multi-byte integer is the least significant, while the last byte is the most significant.

You can try this:

printf("%d\n", 7 + (2 << 8) + (3 << 16) + (0 << 24));

This will print 197127.

Read more about byte order and endianness for the full picture.
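Going the other way, here is a minimal sketch (assuming a little-endian machine, so the byte at the lowest address is least significant) that reassembles the value from its bytes using the same shifts:

#include <stdio.h>

int main(void) {
    unsigned int e = 197127;
    unsigned char *p = (unsigned char *) &e;
    unsigned int v = 0;

    /* Shift each byte back into place, least significant first. */
    for (unsigned int i = 0; i < sizeof(e); i++)
        v |= (unsigned int) p[i] << (8 * i);

    printf("%u\n", v);   /* prints 197127 on a little-endian machine */
    return 0;
}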

The byte layout for the unsigned integer 197127 is [0x07, 0x02, 0x03, 0x00], and your code prints the four bytes.

If you want the decimal digits, then you need to break the number down into digits:

int digits[100];
int c = 0;
while (e > 0) { digits[c++] = e % 10; e /= 10; }   /* peel off digits, least significant first */
while (c > 0) { printf("%d\n", digits[--c]); }     /* print them back, most significant first */
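With e = 197127, this loop prints the digits 1 9 7 1 2 7, one per line (note that it prints nothing if e starts out as 0).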

As you know, the type int often occupies four bytes. That means 197127 is represented as 00000000 00000011 00000010 00000111 in memory. From your result, your machine is little-endian, which means the low byte 00000111 is stored at the lowest address, followed by 00000010 and 00000011, and finally 00000000. So when you first print *f, you obtain 7. After f++, f points to 00000010, and the output is 2. The rest can be deduced by analogy.
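If you want to inspect those bytes without casting pointers at all, a minimal sketch using memcpy (a well-defined way to copy an object's representation) could look like this:

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int e = 197127;
    unsigned char bytes[sizeof e];

    memcpy(bytes, &e, sizeof e);   /* copy the object representation */
    for (size_t i = 0; i < sizeof e; i++)
        printf("%d ", bytes[i]);   /* 7 2 3 0 on a little-endian machine */
    printf("\n");
    return 0;
}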

The underlying representation of the number e is binary, and if we convert the value to hex we can see that it would be (assuming a 32-bit unsigned int):

0x00030207

so when you iterate over the contents you are reading byte by byte through the unsigned char *. Each byte contains two 4-bit hex digits, and the byte order (endianness) of the number is little-endian, since the least significant byte (0x07) comes first, so in memory the contents are laid out like so:

0x07020300
  ^ ^ ^ ^- Fourth byte
  | | |-Third byte
  | |-Second byte
  |-First byte

Note that sizeof returns size_t and the correct format specifier is %zu, otherwise you have undefined behavior.
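If you want to check which byte order your own machine uses at runtime, a minimal sketch along the same lines:

#include <stdio.h>

int main(void) {
    unsigned int x = 1;

    /* On a little-endian machine the least significant byte
       (0x01 here) sits at the lowest address. */
    if (*(unsigned char *) &x == 1)
        printf("little-endian\n");
    else
        printf("big-endian\n");
    return 0;
}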

You also need to fix this line:

unsigned char *f = (char *) &e;

to:

unsigned char *f = (unsigned char *) &e;
                    ^^^^^^^^

Because e is an integer value (probably 4 bytes) and not a string (1 byte per character).

To get the result you expected, you should change the declaration and assignment of e to the following (and print each byte with %c, or as *f - '0', so you see the digits rather than their character codes):

unsigned char *e = (unsigned char *) "197127";
unsigned char *f = e;

(The cast avoids an incompatible-pointer-type warning, since string literals have type char[].)

Or, convert the integer value to a string (using sprintf()) and have f point to that instead (see the runnable sketch after this list):

char s[1000];
sprintf(s, "%u", e);   /* %u matches unsigned int */
char *f = s;

Or, use mathematical operations to extract single digits from your integer and print those out (as in the digit-extraction loop shown earlier).

Or, ...
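For completeness, a runnable sketch of the sprintf() route (a minimal example, assuming a 32-bit unsigned int):

#include <stdio.h>

int main(void) {
    unsigned int e = 197127;
    char s[12];                      /* room for any 32-bit value */

    sprintf(s, "%u", e);             /* s now holds "197127" */
    for (char *f = s; *f != '\0'; f++)
        printf("%c ", *f);           /* prints: 1 9 7 1 2 7 */
    printf("\n");
    return 0;
}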

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow