Question

I used to think C++ was the "weird" one, with all its ambiguities around < and >, but after trying to implement a parser I think I've found an example that breaks just about every language that uses < and > for generic types:

f(g<h, i>(j));

Syntactically, this could be interpreted either as a call to a generic method g, or as a call to f with the results of two comparisons (g < h and i > (j)).
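
For concreteness, here is a minimal compilable sketch (using the hypothetical names from the snippet above) showing that the two-comparisons reading is a perfectly legal call:

class Ambiguity {
    static void f(boolean a, boolean b) {
        System.out.println(a + " " + b);
    }

    public static void main(String[] args) {
        int g = 1, h = 2, i = 3, j = 4;
        // Parsed as f(g < h, i > (j)): two boolean arguments.
        f(g < h, i > (j));   // prints "true false"
    }
}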

How do such languages (especially Java, which I thought was supposed to be LALR(1)-parsable) get around this syntactic ambiguity?

I just can't imagine any non-hacky/context-free way of dealing with this, and I'm baffled at how any such language can be context-free, let alone LALR(1)-parsable...

(It's worth noting that, with no context, even a GLR parser can't return a single parse for this statement; it would have to return both!)

Solution

A generic method call with explicit type arguments in Java has to be qualified, e.g. this.<h, i>g(j), so there is no ambiguity :)
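
As a hedged sketch (the names mirror the question; none of this is from the original answer), here is what an explicit type-argument call actually looks like; note the mandatory qualifier before the <:

class Demo {
    // A generic method with two type parameters, echoing g<h, i>(j).
    static <H, I> I g(I j) {
        return j;
    }

    public static void main(String[] args) {
        // Explicit type arguments must follow a dot (JLS 15.12), so a '<'
        // after a bare identifier can never begin a method call:
        String s = Demo.<Integer, String>g("hello");
        System.out.println(s);   // prints "hello"
    }
}

Because the parser only ever meets type arguments after a dot, the < in f(g<h, i>(j)) must be the less-than operator.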

OTHER TIPS

"I just can't imagine any non-hacky/context-free way of dealing with this, and I'm baffled at how any such language can be context-free, let alone LALR(1)-parsable..."

The answer is that they aren't (at least not Java and C++; I know very little about C#). The Java grammar you link to dates back to 1996, well before generics were introduced in Java 5.
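
(An illustrative aside, not from the original answer: generics strain conventional parsing in other ways too. Nested type arguments end in >>, which a naive lexer tokenizes as a single shift operator, so Java parsers have to split that token back into two closing angle brackets:)

import java.util.List;
import java.util.Map;

class TokenSplit {
    public static void main(String[] args) {
        // The ">>" closing the type below is not a shift operator;
        // the parser must treat it as two separate '>' tokens.
        Map<String, List<Integer>> m = Map.of("a", List.of(1));
        System.out.println(m);   // prints {a=[1]}
    }
}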

For further discussion, see Are C# and Java Grammars LALR(x)?

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow