Question

There are some things I never see in any programming language, and I would like to know why. I believe these things could be useful. Maybe the explanation will be obvious once someone points it out, but here we go.

Why isn't 10² valid syntax? Sometimes we want to express a value with such notation (just as on paper) instead of writing the pre-computed value (which is sometimes a big number that is hard to read at first glance; I believe readability is the purpose of _ in the D and Java programming languages), or else we have to call math functions for it. Of course, I mean that the compiler should replace the expression with the computed value, not leave it to run time.
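As an aside, the underscore separator mentioned here is not limited to D and Java; a small Ruby sketch showing the same convenience:

```ruby
# Underscores in numeric literals, as in D and Java, are also legal Ruby:
big = 1_000_000   # far easier to scan than 1000000
puts big          # => 1000000
```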

The - in an identifier. Why is - not acceptable the way _ is? (Only Lisp dialects allow it.) To me, int name-size = 14; does not seem unreadable. Or is this "limitation" due to the computer's character set?

I will be very happy if someone answers my questions. Also, if you have another point to raise, just edit my post with a note about the edit, or post it as a comment.


Solution

Okay, so the two specific questions you've given:

  1. 10² - how would you expect to type this? Programming languages tend to stick to ASCII for all but identifiers. Note that you can use double x = 10e2; in Java and C#... but the e form is only valid for floating point literals, not integers.

    As noted in comments, exponentiation is supported in some languages - but I suspect it just wasn't deemed sufficiently useful to be worth the extra complexity in most.

  2. An identifier with a - in it leads to obvious ambiguity in languages with infix operators:

    int x = 10;
    int y = 4;
    int x-y = 3;
    int z = x-y;
    

    Is z equal to 3 (the value of the x-y variable) or is it equal to 6 (the value of subtracting y from x)? Obviously you could come up with rules about what would happen, but by removing - from the list of valid characters in an identifier, this ambiguity is removed. Using _ or just casing (nameSize) is simpler than providing extra rules in the language. Where would you stop, anyway? What about . as part of an identifier, or +?
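Both points can be illustrated concretely. This is a small Ruby sketch, chosen because Ruby has a built-in ** exponentiation operator and, like most infix-operator languages, forbids - inside identifiers, so a hyphen always parses as subtraction:

```ruby
# Point 1: some languages do expose exponentiation directly.
hundred = 10 ** 2   # => 100; no library call, folded by the reader at a glance

# Point 2: since '-' cannot appear in an identifier,
# 'x-y' (with or without spaces) can only ever mean subtraction.
x = 10
y = 4
z = x-y             # => 6, never a lookup of a variable named "x-y"

puts hundred        # => 100
puts z              # => 6
```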

In general, you should be aware that languages can easily suffer from too many features. The C# team in particular have been quite open about how high the bar is for a new feature to make it into the language. Every new feature must be designed, specified, implemented, tested, documented, and then developers have to learn about it if they're going to understand code using it. This is not cheap, so good language designers are naturally conservative.

Other tips

Can it be done?

    2.⁷
    1.617 * 10.ⁿ(13)

Apparently yes. You can modify languages such as Ruby (define UTF-8-named functions and monkey-patch numeric classes) or create user-defined literals in C++ to achieve additional expressiveness.
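A minimal sketch of the monkey-patching idea in Ruby; the method name to_power is an invention for this example, not a standard API:

```ruby
# Reopen Integer and add a hypothetical power helper, so that
# 10.to_power(13) reads a little like the 10.ⁿ(13) notation above.
class Integer
  def to_power(n)   # 'to_power' is an invented name for this sketch
    self ** n
  end
end

puts 10.to_power(2)          # => 100
puts 1.617 * 10.to_power(13)
```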

Should it be done?

How would you type those characters?

Which Unicode code point would you use for, say, Euler's constant? U+2107?

I'd say we stick to code we can type and agree on.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow