Question

Across Stack Overflow I have seen many questions about why the unary plus operator exists in a specific language, such as:
Why does F# have a unary plus operator?
What is the purpose of Java's unary plus operator?
What's the point of unary plus operator in Ruby?
Use of unary plus operator

And they all say that '+' exists only for overloading if you need it,
or for performing some rare or unnecessary language-specific task.
But why isn't there any actual use for it?

It would make sense (at least to me) that a unary '+' should act as an absolute value, like:

int x = -5; //x is -5
x = +x;     //x is now 5

Is there any specific reason why this hasn't been done,
or is it because this only makes sense to me?


Solution

It's not implemented that way because math doesn't work that way.

In math:

note: the following is math on a blackboard, not a programming language

+ -5 = -5

- -5 = 5

Doing it any other way would confuse anyone who's ever finished high school.

Obviously, the above is a fairly weak answer, since you can indeed make the ASCII character '+' behave differently in code than it does in math. In Lisp, for example, the + operator doesn't use the infix notation of regular high-school math:

;;; Adding two numbers (prefix notation):
(+ 1 2)
And on RPN calculators (such as classic HP models), the order is even reversed:

1 2 +

You could argue that implementing + the way you describe makes sense in your programming language. There are a number of arguments for why it should not be done, including the principle of least surprise, that a language should model the problem domain, that it's what we've been taught at school, etc. All of them are essentially weak arguments, because none provides a fundamental reason why a machine must treat + the same way math does.

But here's the thing: humans are emotional creatures. We like what's familiar and cringe at anything that contradicts what we've been taught. Programmers don't like being surprised by + behaving differently from the norm, and programmers are the ones who create programming languages, so it makes sense that they implement + the way they do.

One programmer, or a small group, may not think that way, and they are free to create a programming language that implements + differently. But they'll have a hard time convincing the programming community at large that what they did was right.

So it doesn't matter much that all the arguments against + acting as abs() are essentially weak. As long as the majority feels it's right, + will behave the same as regular math.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow