Integer 0 is exactly zero; there is no representation error. And since division by zero is mathematically undefined, it makes sense that integer division by 0 raises an error.
On the other hand, float 0.0 does not necessarily represent exactly zero. It might originate from a number whose absolute value was small enough to be rounded to zero. In that case the mathematical division is still defined, so it would not make sense for the operation to suddenly raise an error just because the divisor's absolute value became small. However, the true quotient cannot be recovered, since the divisor's value was lost to rounding; the best that can be done is to return a special non-finite value, such as NaN (or infinity, when the dividend is nonzero).