Question

Can we consider value type conversions like int to float conversion as upcasting and float to int as downcasting? I believe when we talk about upcasting and downcasting, we specifically mean reference conversions.


The solution

No, conversions between value types cannot be considered upcasting or downcasting, because value types have no hierarchical relationship with each other (they do not inherit from one another in any way).

Upcasting specifically means converting a subtype reference into a supertype reference.

Downcasting is the reverse: converting a supertype reference back into a subtype reference.
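The two directions can be sketched as follows, in Java for illustration (the same rules apply in C#); the `Animal`/`Dog` hierarchy is a made-up example, not from the question:

```java
// Hypothetical class hierarchy to illustrate reference casts.
class Animal { }
class Dog extends Animal { }

public class CastDemo {
    public static void main(String[] args) {
        Dog dog = new Dog();

        // Upcast: subtype reference -> supertype reference, always safe, implicit.
        Animal animal = dog;

        // Downcast: supertype reference -> subtype reference, requires an explicit cast
        // (and throws at runtime if the object is not actually a Dog).
        Dog backToDog = (Dog) animal;

        // No object was converted; only the static type of the reference changed.
        System.out.println(backToDog == dog); // prints "true"
    }
}
```

Note that neither cast touches the object itself; this is why the distinction only makes sense for reference types in an inheritance hierarchy, not for value types.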

Other tips

No, a float is not a special case of (i.e., a subclass of) an int. They are entirely different types.

No, up- and downcasting are only valid in the context of inheritance. When you convert between value types that can hold more or less data, you are using plain type conversions, of which there are two kinds:

1. Implicit conversions. These can be made without a cast because the conversion is "safe", that is, no data will be lost. For example, you can safely convert an int value to a long.
2. Explicit conversions. These can result in the loss of data, so you must use an explicit cast to perform the conversion. For example, converting a long value to an int is unsafe.
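Both kinds of conversion can be shown in a few lines, here in Java for illustration (C# behaves the same way for these types):

```java
public class ConversionDemo {
    public static void main(String[] args) {
        int i = 123;

        // Implicit (widening) conversion: every int fits in a long, so no cast is needed.
        long l = i;

        // int j = l;      // does not compile: possible loss of data
        int j = (int) l;   // explicit (narrowing) conversion: the cast is required

        System.out.println(j); // prints "123" — this particular value happens to fit

        // When the value does not fit, the narrowing cast silently discards the
        // high-order bits, so data really is lost.
        System.out.println((int) 4_000_000_000L); // prints "-294967296"
    }
}
```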

For more information, see: http://msdn.microsoft.com/en-us/library/ms173105.aspx

These are not examples of upcasting and downcasting, as others have described. However, we can talk about widening and narrowing primitive type conversions (for instance, going from int to long is a widening conversion and can therefore be done implicitly). In this particular case (float/int), however, the conversion is a bit different.

There is an implicit conversion from int to float, but the conversion can still lose precision, because a float cannot represent every integer exactly. This is worth knowing, as it bends the general rule that implicit conversions should never lose information. It is really only noticeable if you convert integers to floats, do a lot of accumulating arithmetic with them, and then convert back to int: the approximation error can build up and produce incorrect results if you assumed the float values were exact representations of the original integers.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow