First, of course: the pointer is guaranteed to be aligned in the
first case (by §5.3.4/10 and §3.7.4.1/2), and may be correctly
aligned in both cases. (Obviously it is if sizeof(int) == 1, but
even when this is not the case, an implementation doesn't
necessarily have alignment requirements.)
And to make things clear: your casts are all reinterpret_cast.
Beyond that, this is an interesting question, because as far as
I can tell, there is no difference between the two casts as far
as the standard is concerned. The results of the conversion are
unspecified (according to §5.2.10/7); you're not even guaranteed
that converting the pointer back into a char* will result in the
original value. (It obviously won't, for example, on machines
where an int* is smaller than a char*.)
In practice, of course: the standard requires that the return
value of new char[N] be sufficiently aligned for any value
which may fit into it, so you are guaranteed to be able to do:

intPtr = new (charPtr) int;

which has exactly the same effect as your cast, given that the
default constructor for int is a no-op (and assuming that
sizeof(int) <= 42). So it's hard to imagine an implementation
in which the first part fails. You should be able to use the
intPtr just like any other legally obtained int*. And the idea
that converting it back to a char* would somehow result in a
different value from the original char* seems preposterous.
In the second part, all bets are off: you definitely can't
dereference the pointer (unless your implementation guarantees
otherwise), and it's also quite possible that converting it back
to char* results in something different. (Imagine a
word-addressed machine, for example, where converting a char* to
an int* rounds up. Converting it back would then result in a
char* which was sizeof(int) higher than the original. Or imagine
an implementation where an attempt to convert a misaligned
pointer always results in a null pointer.)