In the following:

unsigned int value1 = (a2 - a1) * double(0.1);

1: (a2 - a1) has the unsigned value of 4294967100 - 4294967290 mod 4294967296, or 4294967106.
2: It is modded by 4294967296 because on your platform, UINT_MAX is 4294967295.
3: 4294967106 * 0.1 --> (double) 429496710.6
4: Assigning (double) 429496710.6 to value1 results in (unsigned) 429496710.
With the below, it is more complicated:

unsigned int value2 = int(a2 - a1) * double(0.1);

1: a2 - a1 has the unsigned value of 4294967106. This value is cast to int and results in unspecified/undefined behavior, as @Yakk suggested. A typical result is 4294967106 - 4294967296, or -190.
2: -190 * 0.1 --> (double) -19.0
3: Assigning (double) -19.0 to value2 is a problem, as it is a negative number for an unsigned; again UB is encountered.
4: value2 was assigned the value of (unsigned) 4294967277, which is the same bit pattern as (int) -19 on your platform.
unsigned int and double conversion order

28-09-2022

Question
I'm confused by unsigned int and double conversion order. I thought that when evaluating an expression, the intermediate type is the one with the biggest cardinality of the representing set, but here in the code
unsigned int a1 = 4294967290, a2 = 4294967100;
unsigned int value1 = (a2 - a1) * double(0.1);
std::cout << value1 << std::endl;
unsigned int value2 = int(a2 - a1)* double(0.1);
std::cout << value2 << std::endl;
When compiling with Microsoft compiler, I receive these results:
value1 = 429496710
value2 = 4294967277
Whereas I thought that the intermediate type of the answer should be double, and therefore value1 and value2 should be equal.
Where am I wrong?
Solution
OTHER TIPS
You subtract two unsigned int. This does arithmetic mod 2^k for some k (probably 32).
In one case you convert this to int. If it is greater than max int, the result is unspecified at least (and may be undefined behaviour: I forget). This is probably the case here. In practice this will generate a negative number on many systems, but trusting that is often a bad idea.
Then the int or unsigned is converted to a double arithmetically, multiplied by 0.1, and then converted to an unsigned int arithmetically mod 2^k for the same k (with possibly strange rounding going on: towards zero prior to conversion to unsigned?).
There is little reason to think these will result in the same value.