Okay, so the two specific questions you've given:
10² - how would you expect to type this? Programming languages tend to stick to ASCII for all but identifiers. Note that you can use
double x = 10e2;
in Java and C#... but the `e` form is only valid for floating point literals, not integers.

As noted in comments, exponentiation is supported in some languages - but I suspect it just wasn't deemed sufficiently useful to be worth the extra complexity in most.
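A small sketch of that restriction (class name is mine, purely for illustration): the `e` notation compiles as a `double` literal, so an `int` version of "ten to some power" needs an explicit `Math.pow` call and cast.

```java
public class ExponentLiterals {
    public static void main(String[] args) {
        double x = 10e2;               // fine: 10 * 10^2 = 1000.0, a double literal
        // int bad = 10e2;             // compile error: lossy conversion from double to int
        int y = (int) Math.pow(10, 2); // integer exponentiation needs a method call and a cast
        System.out.println(x);
        System.out.println(y);
    }
}
```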
An identifier with a `-` in it leads to obvious ambiguity in languages with infix operators:

int x = 10;
int y = 4;
int x-y = 3;
int z = x-y;
Is `z` equal to 3 (the value of the `x-y` variable) or is it equal to 6 (the value of subtracting `y` from `x`)? Obviously you could come up with rules about what would happen, but by removing `-` from the list of valid characters in an identifier, this ambiguity is removed. Using `_` or just casing (`nameSize`) is simpler than providing extra rules in the language. Where would you stop, anyway? What about `.` as part of an identifier, or `+`?
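To make the contrast concrete, here is the same snippet rewritten with the two unambiguous spellings suggested above (the identifier names are just illustrative):

```java
public class IdentifierStyles {
    public static void main(String[] args) {
        int x = 10;
        int y = 4;
        int x_y = 3;              // underscore: a single, unambiguous identifier
        int nameSize = 3;         // casing: also unambiguous
        int z = x - y;            // '-' can now only ever mean subtraction
        System.out.println(x_y);  // the variable, 3
        System.out.println(z);    // the subtraction, 6
    }
}
```

Because `-` can never appear inside an identifier, the parser never has to guess which reading was intended.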
In general, you should be aware that languages can easily suffer from too many features. The C# team in particular have been quite open about how high the bar is for a new feature to make it into the language. Every new feature must be designed, specified, implemented, tested, documented, and then developers have to learn about it if they're going to understand code using it. This is not cheap, so good language designers are naturally conservative.