Question

Recently I learned that standard Fortran does not support unsigned integers. Fortran is a language with a very long history. I suppose that when Fortran was first designed there was simply no notion of unsigned integers, and that they were never introduced afterwards.

Do signed integers predate unsigned integers?

If yes, when were unsigned integers first introduced as a dedicated type, directly accessible in practice in programming languages, and when did they become widespread?

I read this question on the rationale for unsigned integers existing at all, but the history is not really addressed in the answers. PieterB's answer comes closest with "Microprocessors are inherently unsigned. The signed numbers are the thing that's implemented, not the other way around." This implies that the answer to the first part of my question would be "no, they don't", but such an answer is not actually stated there.

Solution

According to Wikipedia, the concept of signed numbers appeared during the Han dynasty in China (~200 BC to ~200 AD), and wasn't accepted in the west until the 17th century.

While that may seem like a snarky comment, it isn't: it's meant to highlight the fact that numbers are inherently tied to the domain in which they're used. Western mathematics was based on counting and geometry (and, having taken a college-level class on Greek mathematics, I can tell you that it's quite remarkable what 27 letters, a straight-edge, and a compass can accomplish).

Similarly, the numbers used by computer languages are tied to the domain where they are used. Fortran was developed for numerical analysis; Cobol, its contemporary, was designed for business math. Both of those domains require signed numbers, and have little use for unsigned numbers. Lisp was the other "big" late-50s high-level language; I have no idea what the original version used for numbers.

Unsigned numbers are generally appropriate only in the domain of machine control, where you work with every bit of the machine word and don't want behaviors such as sign extension. However, machine-level programming didn't use "higher-level" languages, because programmers distrusted compilers. That changed with C, and perhaps other languages of its era, which introduced unsigned integer types.
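
To make the sign-extension point concrete, here is a minimal C sketch (C being the language the paragraph credits with introducing unsigned types). Strictly speaking, the value a signed char takes from 0xFF is implementation-defined, but on the two's-complement machines in common use it behaves as the comments describe:

    #include <stdio.h>

    int main(void)
    {
        /* The same byte, 0xFF, stored as signed and as unsigned. */
        signed char   s = (signed char)0xFF;  /* typically -1 (two's complement) */
        unsigned char u = 0xFFu;              /* always 255 */

        /* Widening the signed byte sign-extends; the unsigned byte zero-extends. */
        printf("signed   0xFF -> %4d (bits 0x%08X)\n", (int)s, (unsigned)(int)s);
        printf("unsigned 0xFF -> %4d (bits 0x%08X)\n", (int)u, (unsigned)(int)u);
        return 0;
    }

On a typical machine with 32-bit int this prints -1 (0xFFFFFFFF) for the signed byte and 255 (0x000000FF) for the unsigned one; the former is exactly the behavior a machine-control programmer usually does not want.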

So that brings us to machine languages, which represent the basic operations of the machine. And here things get a little strange, because signed numbers are simply a way of looking at bits.

The adder circuit knows nothing of signs: a one-bit adder simply takes two operand bits (plus a carry in) and produces a sum bit and a carry out. By chaining a row of these adders together, you can take two words as input and produce one word plus a carry bit as output. If you add 0x7FFF and 0x0003 on a 16-bit architecture, you get 0x8002. How you interpret that result is up to you: maybe you added two positive numbers and got another positive number, maybe you had an overflow to a negative number.
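
As a small C illustration of that point: the bit pattern produced by the addition is the same either way, and only the reinterpretation differs (the cast back to a signed 16-bit type gives the conventional two's-complement reading on today's hardware):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* The addition from the text: 0x7FFF + 0x0003 in a 16-bit word. */
        uint16_t a = 0x7FFF;
        uint16_t b = 0x0003;
        uint16_t sum = (uint16_t)(a + b);   /* bit pattern 0x8002 either way */

        /* One bit pattern, two readings. */
        printf("bits:             0x%04X\n", (unsigned)sum);
        printf("read as unsigned: %u\n", (unsigned)sum);      /* 32770 */
        printf("read as signed:   %d\n", (int)(int16_t)sum);  /* -32766 */
        return 0;
    }

The adder produced one answer, 0x8002; whether that means 32770 or -32766 is decided entirely by how the program chooses to read the bits.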

All of which is to say that the question itself is invalid: signed versus unsigned is simply a domain-specific way of looking at the same bits.

Licensed under: CC-BY-SA with attribution