Question

I am currently reading a chapter in a textbook on Processor Architecture and saw the following statement:

The less precision there is, the less space is occupied by a program variable in memory. Further, there is often a time advantage, both in ferrying the operands back and forth between the processor and memory, and for arithmetic and logic operations that need less precision. This is particularly true for floating-point arithmetic operations.

Why are less precise data types like float sometimes faster than larger, more precise types like double? Can somebody expand on this explanation and maybe give an example?

Solution

Intuitively, for the same reason that it's faster to calculate 2 + 2 by hand than it is to calculate 3685 + 2193: there is simply less data to work through.

Other tips

Single precision floating point format compared to double precision:

  1. uses less memory, so it can be transferred into a register faster (usually in a single machine instruction); see the sketch after this list
  2. has less accuracy, so cheaper approximations can be used for faster calculations (at the software level this means fewer machine instructions per call; at the hardware level, fewer CPU clock cycles per instruction)
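
The memory footprint difference is easy to see directly. Below is a minimal C++ sketch; the array size, the timed_sum helper, and the expectation of a measurable gap are my illustrative assumptions, and actual timings depend on the compiler, optimization flags, and whether the loop gets vectorized. As a bonus it also shows the accuracy trade-off: the float total stops growing once it reaches 2^24, because adding 1.0f to a float that large no longer changes it.

    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Sum all elements of a vector and report how long it took.
    // (Illustrative micro-benchmark only; serious measurements need
    // care with optimization levels, warm-up, and repeated runs.)
    template <typename T>
    double timed_sum(const std::vector<T>& data, T& result) {
        auto start = std::chrono::steady_clock::now();
        T sum = 0;
        for (T x : data) sum += x;
        auto stop = std::chrono::steady_clock::now();
        result = sum;
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        // Half the bytes per element: the float array occupies half the
        // memory (and half the cache lines) of the double array.
        std::printf("sizeof(float)  = %zu bytes\n", sizeof(float));
        std::printf("sizeof(double) = %zu bytes\n", sizeof(double));

        const std::size_t n = 20'000'000;
        std::vector<float>  f(n, 1.0f);
        std::vector<double> d(n, 1.0);

        float  fsum = 0.0f;
        double dsum = 0.0;
        double t_f = timed_sum(f, fsum);
        double t_d = timed_sum(d, dsum);

        // Note: the float total tops out at 16,777,216 (2^24), a visible
        // effect of the reduced precision; the double total reaches 20,000,000.
        std::printf("float  sum: %.1f in %.1f ms\n", static_cast<double>(fsum), t_f);
        std::printf("double sum: %.1f in %.1f ms\n", dsum, t_d);
    }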

The size of double-word types (double, long) also shows up in higher-level language specifications: for example, Java does not guarantee that access to a variable of such a type is atomic (i.e., performed in one step as seen by an external observer), because on a 32-bit implementation a 64-bit value may be read or written as two separate 32-bit halves.
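
That guarantee (or lack of it) is specific to the Java language specification. Purely as an illustration of the underlying hardware concern, here is a small C++ sketch that asks, at compile time, whether operations on a 4-byte float versus an 8-byte double can be done without a lock on the current target; using std::atomic and C++17's is_always_lock_free is my choice for the demonstration, not how Java implements its rule.

    #include <atomic>
    #include <cstdio>

    int main() {
        // Is std::atomic<T> guaranteed lock-free on this target?
        // On common 64-bit desktop CPUs both print 1; on some 32-bit targets
        // the 8-byte double may not be lock-free while the 4-byte float is.
        std::printf("atomic<float>  always lock-free: %d\n",
                    static_cast<int>(std::atomic<float>::is_always_lock_free));
        std::printf("atomic<double> always lock-free: %d\n",
                    static_cast<int>(std::atomic<double>::is_always_lock_free));
    }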

An FPU or GPU can (sometimes) perform more 32-bit (float) FP operations in parallel than 64-bit (double) FP operations. That is, if it can add 2 doubles in parallel, it can add 4 floats in parallel.

For highly optimized tight loops this can have a dramatic effect, especially on a GPU, where the processing units are less constrained by memory bandwidth.
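
One concrete, CPU-side way to see the width difference is SSE intrinsics: a 128-bit vector register holds four floats but only two doubles, so a single vector add instruction performs twice as many float additions. This is only an illustrative sketch of the same idea, assuming an x86/x86-64 compiler; the principle extends to wider AVX registers and to GPU hardware.

    #include <immintrin.h>  // SSE/SSE2 intrinsics (x86/x86-64 only)
    #include <cstdio>

    int main() {
        // One 128-bit SSE register holds 4 floats but only 2 doubles,
        // so one vector add processes twice as many float lanes.
        alignas(16) float  fa[4] = {1, 2, 3, 4};
        alignas(16) float  fb[4] = {10, 20, 30, 40};
        alignas(16) float  fr[4];

        alignas(16) double da[2] = {1, 2};
        alignas(16) double db[2] = {10, 20};
        alignas(16) double dr[2];

        __m128  vf = _mm_add_ps(_mm_load_ps(fa), _mm_load_ps(fb));  // 4 float adds
        __m128d vd = _mm_add_pd(_mm_load_pd(da), _mm_load_pd(db));  // 2 double adds
        _mm_store_ps(fr, vf);
        _mm_store_pd(dr, vd);

        std::printf("float  lanes: %.0f %.0f %.0f %.0f\n", fr[0], fr[1], fr[2], fr[3]);
        std::printf("double lanes: %.0f %.0f\n", dr[0], dr[1]);
    }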

Licensed under: CC-BY-SA with attribution