Introduction:

I am a little confused by your example and I might have misunderstood your question. I have a feeling there is some kind of type recursion between `S` and `Tx` that I am not getting from your question (because if not, `S#Tx` could be anything, and I don't understand the problem with the `anySer`).
Tentative Answer:

At compile time, for any instance of `Ser[T]` there will be a well-defined type parameter `T`. Since you want to save it on instantiation, you will have a single `anySer: Ser[A]` for one given specific type `A`. What you are saying, in some way, is that a `Ser[A]` will work as a `Ser[S]` for any `S`. This can be explained in two ways, according to the relationship between the types `A` and `S`:
1. If this conversion is possible for every `A <: S`, then your serializer is COVARIANT and you can initialize your `anySer` as a `Ser[Nothing]`. Since `Nothing` is a subclass of every class in Scala, your `anySer` will always work as a `Ser[Whatever]`.
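A minimal sketch of the covariant case (the `Ser` trait and its `describe` member here are illustrative, not your actual API):

```scala
// Hypothetical covariant serializer (illustrative, not the real trait).
// The +T annotation means Ser[Nothing] <: Ser[S] for every S.
trait Ser[+T] {
  def describe: String
}

object Ser {
  // One shared instance, typed at Nothing, the bottom type of Scala's hierarchy
  val anySer: Ser[Nothing] = new Ser[Nothing] {
    def describe: String = "shared serializer"
  }

  // No cast needed: Ser[Nothing] conforms to Ser[S] by covariance
  def serializer[S]: Ser[S] = anySer
}
```

Here `Ser.serializer[Int]` and `Ser.serializer[String]` both return the one shared instance, and the compiler accepts it without any cast.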
2. If this conversion is possible for every `S <: A`, then your serializer is CONTRAVARIANT and you can initialize your `anySer` as a `Ser[Any]`. Since `Any` is a superclass of every class in Scala, your `anySer` will always work as a `Ser[Whatever]`.
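And a sketch of the contravariant case (again with a made-up `serialize` method, since serializers typically consume their type parameter):

```scala
// Hypothetical contravariant serializer (the serialize method is made up).
// The -T annotation means Ser[Any] <: Ser[S] for every S.
trait Ser[-T] {
  def serialize(value: T): String
}

object Ser {
  // One shared instance, typed at Any, the top type of Scala's hierarchy
  val anySer: Ser[Any] = new Ser[Any] {
    def serialize(value: Any): String = value.toString
  }

  // No cast needed: Ser[Any] conforms to Ser[S] by contravariance
  def serializer[S]: Ser[S] = anySer
}
```

Contravariance is the natural variance for a serializer, because it only ever consumes values of type `T`.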
3. If it is neither of the previous cases, then it means that

```scala
def serializer[S <: Sys[S]]: Serializer[S, Test[S]] = anySer.asInstanceOf[Ser[S]]
```

could produce a horrible failure at runtime, because there will be some `S` for which the serializer won't work. If there is no such `S` for which this could happen, then your class falls into either case 1 or case 2.
Comment post-edit:
If your types are really invariant, the conversion through a cast breaks the invariance relation. You are basically forcing the type system to perform an unnatural conversion because you know that nothing wrong will happen, on the basis of your own knowledge of the code you have written. If this is the case, then casting is the right way to go: you are forcing a different type from the one the compiler can check formally, and you are making this explicit. I would even put a big comment explaining why you know that the operation is legal even though the compiler can't verify it, and possibly attach a unit test to check that the "informal" relation always holds.
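A sketch of that approach, with invariant types and the cast made explicit (all names here are illustrative):

```scala
// Invariant serializer: without a variance annotation, Ser[A] and Ser[S]
// are unrelated types, so sharing one instance requires a cast.
trait Ser[T] {
  def serialize(value: T): String
}

object Ser {
  private val anySer: Ser[Any] = new Ser[Any] {
    def serialize(value: Any): String = value.toString
  }

  // UNSAFE CAST, but intentional: the single instance only ever calls
  // toString, so it behaves correctly for every T even though the compiler
  // cannot verify this. If the implementation ever starts depending on T,
  // this becomes a runtime time bomb. (Due to erasure the cast itself never
  // fails; a mismatch would only surface later, at a call site.)
  def serializer[T]: Ser[T] = anySer.asInstanceOf[Ser[T]]
}
```

A unit test such as `assert(Ser.serializer[Int].serialize(42) == "42")` is one way to pin down the informal contract that the compiler cannot check for you.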
In general, I believe this practice should be used with extreme care. One of the benefits of strongly typed languages is that the compiler performs formal type checking that helps you catch errors early. If you intentionally bypass it, you give away this big benefit.