Question

We all know 0/0 is undefined and returns an error if I put it into a calculator, and if I create a program (in C at least) the OS will terminate it when I try to divide by zero.

But what I've been wondering is if the computer even attempts to divide by zero, or does it just have "built in protection", so that when it "sees" 0/0 it returns an error even before attempting to compute it?


The solution

The CPU has built-in detection. Most instruction set architectures specify that the CPU will trap to an exception handler for integer divide by zero (I don't think it cares whether the dividend is zero).

It is possible that the check for a zero divisor happens in parallel in hardware along with the attempt to do the division; however, the detection of the offending condition effectively cancels the division and traps instead, so we can't really tell whether some part of it attempted the division or not.

(Hardware often works like that, doing multiple things in parallel and then choosing the appropriate result afterwards, because then all of the operations can get started right away instead of serializing on the choice of appropriate operation.)
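As a concrete illustration of the integer case in C: on a typical x86-64 Linux system the divide instruction traps and the process dies with SIGFPE. Note that integer division by zero is undefined behaviour in C, so other platforms and compilers are free to do something else; this is only a sketch of the common outcome.

    #include <stdio.h>

    int main(void) {
        volatile int numerator = 1;   /* volatile so the compiler cannot fold */
        volatile int divisor   = 0;   /* the division away at compile time    */

        /* On x86-64 the idiv instruction raises a #DE exception here; the OS
           usually delivers SIGFPE to the process, which terminates it.       */
        int quotient = numerator / divisor;

        printf("never reached: %d\n", quotient);
        return 0;
    }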

The same trap-to-exception mechanism is also used when overflow detection is turned on, which you usually ask for by using different add/sub/mul instructions (or a flag on those instructions).
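In C there is no standard trapping add, but as a rough software analogue, GCC and Clang provide checked-arithmetic builtins (and GCC's -ftrapv flag makes signed overflow abort at run time). A small sketch using the builtin:

    #include <stdio.h>
    #include <limits.h>

    int main(void) {
        int a = INT_MAX, b = 1, sum;

        /* GCC/Clang builtin: performs the addition and reports whether it
           overflowed, instead of silently wrapping or trapping. */
        if (__builtin_add_overflow(a, b, &sum))
            puts("addition overflowed");
        else
            printf("sum = %d\n", sum);

        return 0;
    }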

Floating point division also has built-in detection for a zero divisor, but instead of trapping to an exception handler it returns a well-defined value: IEEE 754 specifies ±Infinity for x/0 with x ≠ 0 and NaN for 0/0, and sets the corresponding status flag.
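You can observe those IEEE 754 status flags from C via <fenv.h>; a small sketch (strictly, #pragma STDC FENV_ACCESS ON should be in effect, and compiler support for that pragma varies):

    #include <stdio.h>
    #include <fenv.h>

    int main(void) {
        volatile double zero = 0.0, one = 1.0;  /* volatile: keep divisions at run time */

        feclearexcept(FE_ALL_EXCEPT);

        double a = one / zero;    /* +infinity, raises the divide-by-zero flag    */
        double b = zero / zero;   /* NaN,       raises the invalid-operation flag */

        printf("1.0/0.0 = %g   0.0/0.0 = %g\n", a, b);
        printf("FE_DIVBYZERO raised: %d\n", fetestexcept(FE_DIVBYZERO) != 0);
        printf("FE_INVALID raised:   %d\n", fetestexcept(FE_INVALID) != 0);
        return 0;
    }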


Hypothetically speaking, if the CPU omitted any detection for an attempt to divide by zero, the problems could include:

  • hanging the CPU (e.g. in an infinite loop): this might happen if the CPU uses a division algorithm that stops when the numerator is less than the divisor (in absolute value). A hang like this would pretty much count as crashing the CPU.
  • a (possibly predictable) garbage answer, if the CPU uses a counter to terminate division at the maximum possible number of divide steps (e.g. 31 or 32 on a 32-bit machine); a software sketch of this counter-based case follows this list.
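Here is a hypothetical C model of that second case: a 32-step shift-and-subtract (restoring) divider that never checks its divisor, loosely modelled on how a simple counter-driven hardware divider steps through the bits. With a divisor of zero the compare never fails, so every quotient bit gets set and the "answer" is 0xFFFFFFFF. (The first case, a subtract-until-smaller loop, would simply never terminate.)

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical 32-step restoring divider with no zero check. */
    static uint32_t div_no_zero_check(uint32_t num, uint32_t den, uint32_t *rem_out) {
        uint32_t rem = 0, quot = 0;
        for (int i = 31; i >= 0; i--) {
            rem = (rem << 1) | ((num >> i) & 1u);  /* bring down the next bit   */
            if (rem >= den) {                      /* always true when den == 0 */
                rem -= den;
                quot |= (1u << i);
            }
        }
        *rem_out = rem;
        return quot;
    }

    int main(void) {
        uint32_t rem, quot;

        quot = div_no_zero_check(7, 3, &rem);
        printf("7 / 3 -> quotient %u, remainder %u\n", (unsigned)quot, (unsigned)rem);

        quot = div_no_zero_check(7, 0, &rem);
        printf("7 / 0 -> quotient 0x%08X (garbage: all bits set)\n", (unsigned)quot);
        return 0;
    }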

Other tips

It depends on the language, on the compiler, on whether you are using integers or floating point numbers, and so on.

For floating point numbers, most implementations use the IEEE 754 standard, where division by 0 is well defined: 0 / 0 gives the well defined result NaN (not-a-number), and x / 0 for x ≠ 0 gives either +Infinity or -Infinity, depending on the sign of x.
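A small C sketch of those IEEE 754 results (assuming the platform's double is IEEE 754, which is true almost everywhere):

    #include <stdio.h>

    int main(void) {
        volatile double zero = 0.0;   /* volatile: keep the divisions at run time */

        printf(" 1.0 / 0.0 = %g\n",  1.0 / zero);   /* inf           */
        printf("-1.0 / 0.0 = %g\n", -1.0 / zero);   /* -inf          */
        printf(" 0.0 / 0.0 = %g\n",  zero / zero);  /* nan (or -nan) */
        return 0;
    }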

In languages like C and C++, division by zero invokes undefined behaviour. So according to the language definition, anything can happen, especially things that you don't want to happen, like everything working perfectly fine while you write the code and then destroying data when your customer uses it. So from the language point of view, don't do this. Some languages guarantee that your application will crash; it's up to them how this is implemented, but for those languages, division by zero will crash.

Many processors have some kind of built-in "divide" instruction, which will behave differently depending on the processor. On Intel 32-bit and 64-bit processors, the "divide" instructions raise an exception when you try to divide by zero, which typically crashes your application. Other processors may behave differently.
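On Linux/x86, for instance, that hardware exception reaches your program as the signal SIGFPE, which you can observe with a handler before the process dies. A POSIX-specific sketch (returning from the handler would re-run the faulting instruction, so it exits instead):

    #define _POSIX_C_SOURCE 200809L
    #include <stdio.h>
    #include <signal.h>
    #include <unistd.h>

    /* The divide instruction traps to the OS on a zero divisor; on POSIX
       systems the OS typically turns that trap into SIGFPE. */
    static void on_sigfpe(int sig) {
        (void)sig;
        static const char msg[] = "caught SIGFPE (divide error)\n";
        write(STDERR_FILENO, msg, sizeof msg - 1);
        _exit(1);
    }

    int main(void) {
        struct sigaction sa = {0};
        sa.sa_handler = on_sigfpe;
        sigaction(SIGFPE, &sa, NULL);

        volatile int zero = 0;          /* volatile: keep the division at run time */
        int q = 1 / zero;               /* idiv raises #DE -> SIGFPE on Linux/x86   */

        printf("unreachable: %d\n", q); /* never printed */
        return 0;
    }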

If a compiler detects that a division by zero will happen when some code executes, and the compiler is nice to its users, it will likely give you a warning and still generate the built-in "divide" instruction so that the run-time behaviour is the same.
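For example, GCC has a -Wdiv-by-zero diagnostic (enabled by default) that fires when the divisor is a literal zero, and Clang warns similarly; exactly what machine code is then emitted for such undefined code still varies by compiler:

    /* divbyzero.c
     *   $ gcc -c divbyzero.c
     *   warning: division by zero [-Wdiv-by-zero]      (exact wording varies) */
    int always_bad(int a) {
        return a / 0;   /* the zero divisor is visible at compile time */
    }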

Seems like you're wondering what would happen if someone made a CPU that doesn't explicitly check for zero before dividing. What would happen depends entirely on the implementation of the division. Without going into details, one kind of implementation would produce a result that has all bits set, e.g. 65535 on a 16-bit CPU. Another might hang.

But what I've been wondering is if the computer even attempts to divide by zero, or does it just have "built in protection", so that when it "sees" 0/0 it returns an error even before attempting to compute it?

Since x/0 makes no sense, period, computers must always check for division by zero. There's a problem here: programmers want to compute (a+b)/c without having to bother to check whether that calculation even makes sense. The under-the-hood response to division by zero from the CPU + number type + operating system + language is either to do something rather drastic (e.g., crash the program) or to do something overly benign (e.g., create a value that makes no sense, such as the IEEE floating point NaN, a number that is "Not a Number").

In an ordinary setting, a programmer is expected to know whether (a+b)/c makes sense. In this context, there's no reason to check for division by zero. If division by zero does happen, and if the machine language + implementation language + data type + operating system response to this is to make the program crash, that's okay. If the response is to create a value that might eventually pollute every number in the program, that's okay, too.

Neither "something drastic" or "overly benign" is is the right thing to do in the world of high reliability computing. Those default responses might kill a patient, crash an airliner, or make a bomb explode in the wrong place. In a high reliability environment, a programmer who writes (a+b)/c will be picked to death during code review, or in modern times, perhaps picked to death automatically by a tool that checks for verboten constructs. In this environment, that programmer should instead have written something along the lines of div(add(a,b),c) (and possibly some checking for error status). Underneath the hood, the div (and also the add) functions/macros protects against division by zero (or overflow in the case of add). What that protection entails is very implementation specific.

We know by now that x/0 and 0/0 do not have well defined answers. What happens if you attempt to calculate 0/0 anyway?

On a modern system, the calculation is passed to the FPU within the CPU; 0/0 is flagged as an invalid operation and NaN is returned (an integer 0/0, by contrast, traps as described in the other answers).

On a much older system, such as '80s home computers that had no on-chip division, the calculation was done by whatever software was running. There are a few possible choices:

  • Subtract smaller and smaller copies of the divisor until the value reaches zero and keep track of which sized copies were subtracted (a simplified C sketch of this approach follows this list)
    • If it checks for zero before the first subtraction, it will exit quickly and the result will be 0
    • If it assumes it must be able to subtract at least once, the result will be 1
  • Calculate the logarithms of both numbers, subtract them and raise e to the power of the result. A very inefficient method compared to the above subtraction method, but mathematically valid
    • An overflow might occur trying to calculate log(0) and the software would either use its error handling routines, or crash
    • The software might assume that all logarithms can be calculated in a fixed number of steps and return a large, but incorrect value. Since both logarithms would be the same, the difference would be 0 and e^0 = 1, giving a result of 1
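Here is that simplified C model of the subtraction approach. The only difference between the two variants is whether the value is tested before the first subtraction, which is exactly what separates the 0 and 1 results above (and x/0 with x ≠ 0 would loop forever in either variant):

    #include <stdio.h>

    /* Toy software division by repeated subtraction, roughly how a routine
       without a hardware divide might work. */

    /* Variant A: test before subtracting.  For 0/0 the value is already
       zero, so it exits immediately and returns 0. */
    static unsigned div_check_first(unsigned num, unsigned den) {
        unsigned q = 0;
        while (num != 0 && num >= den) {
            num -= den;
            q++;
        }
        return q;
    }

    /* Variant B: assume at least one subtraction is always possible.
       For 0/0 it subtracts once (0 - 0), sees the value is now zero,
       and returns 1. */
    static unsigned div_assume_one(unsigned num, unsigned den) {
        unsigned q = 0;
        do {
            num -= den;
            q++;
        } while (num != 0 && num >= den);
        return q;
    }

    int main(void) {
        printf("6/3, check first: %u\n", div_check_first(6, 3));  /* 2 */
        printf("0/0, check first: %u\n", div_check_first(0, 0));  /* 0 */
        printf("0/0, assume one:  %u\n", div_assume_one(0, 0));   /* 1 */
        return 0;
    }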

In other words, what would happen would be implementation dependent, and it would be possible to write software that produces correct and predictable results for every value but seemingly strange values for 0/0 that are nonetheless still internally consistent.

Licensed under: CC-BY-SA with attribution