Question

I have tried implementing the sizeof operator. I did it this way:

#define my_sizeof(x) ((&x + 1) - &x)

But it always ends up giving the result as '1', whatever the data type.

I have then googled it, and I found the following code:

#define my_size(x) ((char *)(&x + 1) - (char *)&x)

And this code works when the pointers are cast, though I don't understand why. It even handles structure padding correctly.

It is also working for:

#define my_sizeof(x) (unsigned int)(&x + 1) - (unsigned int)(&x)

Can anyone please explain how it works when the pointers are cast?


Solution

The result of pointer subtraction is in elements and not in bytes. Thus the first expression evaluates to 1 by definition.
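For example (a minimal sketch; the variable d is just a placeholder, and the 8 assumes an 8-byte double):

#include <stdio.h>

int main(void)
{
    double d = 0.0;

    /* Pointer subtraction counts elements, so this is always 1. */
    printf("%td\n", (&d + 1) - &d);

    /* After casting to char *, the same subtraction counts bytes. */
    printf("%td\n", (char *)(&d + 1) - (char *)&d);   /* typically 8 */
    return 0;
}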

This aside, you really ought to use parentheses in macros:

#define my_sizeof(x) ((&x + 1) - &x)
#define my_sizeof(x) ((char *)(&x + 1) - (char *)&x)

Otherwise attempting to use my_sizeof() in an expression can lead to errors.
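For instance, a hypothetical version without the outer parentheses (called my_sizeof_bad below purely for illustration) breaks as soon as the macro appears inside a larger expression:

#define my_sizeof_bad(x)  (char *)(&x + 1) - (char *)&x    /* no outer parentheses */
#define my_sizeof_good(x) ((char *)(&x + 1) - (char *)&x)

int main(void)
{
    int a = 0;

    /* 2 * my_sizeof_bad(a) expands to 2 * (char *)(&a + 1) - (char *)&a,
       and "2 * pointer" is not valid C, so that line would not compile: */
    /* long bad = 2 * my_sizeof_bad(a); */

    long good = 2 * my_sizeof_good(a);   /* 8 on a platform with 4-byte int */
    return (int)good;
}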

OTHER TIPS

The sizeof operator is part of the C (and C++) language specification and is implemented inside the compiler (the front end). There is no way to implement it with other C constructs (unless you use GCC extensions like typeof), because it accepts either a type or an expression as its operand and never evaluates that operand (e.g. sizeof((i>1)?i:(1/i)) won't crash when i==0, whereas a my_sizeof replacement that evaluates its argument would crash with a division by zero). See also C coding guidelines, and Wikipedia.
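A small sketch of that difference (i is just an illustrative variable):

#include <stdio.h>

#define my_sizeof(x) ((char *)(&x + 1) - (char *)&x)

int main(void)
{
    int i = 0;

    /* sizeof never evaluates its operand, so 1/i is never computed. */
    printf("%zu\n", sizeof((i > 1) ? i : (1 / i)));   /* prints sizeof(int) */

    /* The macro needs the address of its argument; a conditional expression
       is not an lvalue, so the following would not even compile: */
    /* printf("%td\n", my_sizeof((i > 1) ? i : (1 / i))); */
    return 0;
}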

You should understand C pointer arithmetic. See e.g. this question. Pointer difference is expressed in elements not bytes.

#define my_sizeof(x) ((char *)(&x + 1) - (char *)&x)

This my_sizeof() macro will not work in the following cases:

  1. sizeof 1 - 4 bytes (on a platform with 4-byte int)
    my_sizeof(1) - won't compile at all.

  2. sizeof (int) - 4 bytes (on a platform with 4-byte int)
    my_sizeof(int) - won't compile at all.

It will work only for variables. It won't work for data types like int, float, or char, for literals like 2, 3.4, or 'A', nor for rvalue expressions like a+b or foo().
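A quick compile-test sketch of those limitations (variable names are arbitrary):

#include <stdio.h>

#define my_sizeof(x) ((char *)(&x + 1) - (char *)&x)

int main(void)
{
    int a = 0;
    double d = 0.0;

    printf("%td\n", my_sizeof(a));   /* fine: a is a variable (an lvalue) */
    printf("%td\n", my_sizeof(d));   /* fine */

    /* None of the following compile, because & needs an lvalue: */
    /* my_sizeof(1);     */
    /* my_sizeof(int);   */
    /* my_sizeof(a + 1); */
    return 0;
}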

#define my_sizeof(x) ((&x + 1) - &x)

&x gives the address of the variable (let's say double x) declared in the program, and incrementing it by 1 gives the address where the next object of x's type could be stored (here &x + 8, since a double is 8 bytes).

The difference tells you how many objects of x's type fit in that amount of memory, which is obviously 1 (since all we did was increment by 1 and take the difference).

#define my_size(x) ((char *)(&x + 1) - (char *)&x)

Casting both pointers to char * before taking the difference tells us how many objects of type char fit in that memory space. Since each char occupies exactly 1 byte, the result is the number of bytes between two successive objects of the argument's type, and hence the amount of memory that a variable of type x requires.

But you won't be able to pass a literal to this macro and learn its size.
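Since the question also mentions structure padding, here is a small sketch showing that the char * version reports the padded size of a struct (the exact padding is implementation-defined, but 8 is typical here):

#include <stdio.h>

#define my_size(x) ((char *)(&x + 1) - (char *)&x)

struct s {
    char c;   /* 1 byte */
    int  i;   /* typically 4 bytes, usually preceded by 3 bytes of padding */
};

int main(void)
{
    struct s v;

    /* Reports the full size including any padding, typically 8 here. */
    printf("%td\n", my_size(v));
    printf("%zu\n", sizeof v);   /* same value */
    return 0;
}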

But it always ended up giving the result as '1', whatever the data type

Yes, that's how pointer arithmetic works. It works in units of the type being pointed to. So casting to char * works in units of char, which is what you want.

This will work for both literals and variables (it relies on the GCC __typeof__ extension):

#define my_sizeof(x) ((char *)(&(((__typeof__(x) *)0)[1])) - (char *)(&(((__typeof__(x) *)0)[0])))
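For example (this depends on the GCC/Clang __typeof__ extension, and the null-pointer arithmetic it relies on is not strictly portable, so treat it as a sketch):

#include <stdio.h>

#define my_sizeof(x) ((char *)(&(((__typeof__(x) *)0)[1])) - (char *)(&(((__typeof__(x) *)0)[0])))

int main(void)
{
    int a = 0;

    printf("%td\n", my_sizeof(a));      /* variable: typically 4 */
    printf("%td\n", my_sizeof(3.14));   /* literal:  typically 8 */
    printf("%td\n", my_sizeof(a + 1));  /* expression; __typeof__ does not evaluate it */
    return 0;
}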

I searched this yesterday, and I found this macro:

#define mysizeof(X)  ((X*)0+1)

It expands X only once (so there is no double-evaluation problem of the kind you would get from an expression like x++), and it has worked fine for me so far.
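Note that X has to be a type name here, and the macro yields a pointer rather than an integer, so it still needs a cast at the point of use; strictly speaking, arithmetic on a null pointer is undefined behaviour, so this is more of a curiosity than a portable solution. A usage sketch:

#include <stdio.h>

#define mysizeof(X)  ((X*)0+1)

int main(void)
{
    /* The macro yields a pointer whose numeric value equals sizeof(X),
       so it is converted to an integer before printing. */
    printf("%zu\n", (size_t)mysizeof(int));     /* typically 4 */
    printf("%zu\n", (size_t)mysizeof(double));  /* typically 8 */
    return 0;
}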

#define my_sizeof(x) ((&x + 1) - &x)
  • This is basically (difference of two addresses) / (size of the data type).

  • It gives you the number of elements of x's type that fit in that memory, which is 1: you can fit exactly one full x in that space.

  • When we cast the pointers to some other data type, the difference tells you how many elements of that data type fit in the same memory space.

#define my_size(x) ((char *)(&x + 1) - (char *)&x)
  • Casting to (char *) gives you the exact number of bytes, because a char occupies exactly one byte.
#define my_sizeof(x) (unsigned int)(&x + 1) - (unsigned int)(&x)
  • Many compilers will reject this or at least warn about it, because the cast converts a pointer to an integer type that may be too narrow to hold it (on a typical 64-bit platform unsigned int is only 32 bits wide).
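If an integer-based variant is really wanted, one sketch (under the assumption that the platform provides uintptr_t, which is optional in the standard but widely available) is to go through uintptr_t from <stdint.h>, which is wide enough to hold a pointer:

#include <stdint.h>
#include <stdio.h>

#define my_sizeof(x) ((uintptr_t)(&x + 1) - (uintptr_t)&x)

int main(void)
{
    double d = 0.0;
    printf("%ju\n", (uintmax_t)my_sizeof(d));   /* typically 8 */
    return 0;
}

Even so, the char * version above remains the more idiomatic way to do this.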

# define my_sizeof(x) ((&x + 1) - &x)

&x gives the address of your variable, and incrementing it by one, (&x + 1), gives the address where another variable of x's type could be stored. If we now do arithmetic on these addresses, ((&x + 1) - &x), the result tells us that exactly 1 variable of x's type fits in that address range.

If we instead cast both addresses to (char *) [because a char is 1 byte, so incrementing a char * moves by exactly one byte], we get the number of bytes that type x occupies.

#include <iostream>
#include <cstddef>

// Macro version, for comparison:
// #define mySizeOf(T) (char*)(&T + 1) - (char*)(&T)

// Template version: T is deduced from the argument; the parameter value itself is unused.
template<class T>
std::size_t mySizeOf(T)
{
        T temp1;
        // Byte distance between two consecutive objects of type T.
        return (char*)(&temp1 + 1) - (char*)(&temp1);
}

int main()
{
        int num = 5;
        long numl = 10;
        long long numll = 100;
        unsigned int num_un_sz = 500;

        std::cout << "size of int=" << mySizeOf(num) << std::endl;
        std::cout << "size of long=" << mySizeOf(numl) << std::endl;
        std::cout << "size of long long =" << mySizeOf(numll) << std::endl;
        std::cout << "size of unsigned int=" << mySizeOf(num_un_sz) << std::endl;
        return 0;
}
Licensed under: CC-BY-SA with attribution