Question

I understand a fraction to be a quotient of two integers whose denominator isn't 0, but after coming across the term "float" in various programming languages (such as JavaScript) I don't understand why it is even needed and why we don't just say "fraction" instead.

What is the difference between a fraction and a float?

Solution

Computers usually deal with floating-point numbers rather than with fractions. The main difference is that floating-point numbers have limited precision, but arithmetic with them is much faster (and they are the only kind of non-integer number supported natively in hardware).
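To make that concrete, here is a minimal Python sketch. Python is used here only because its standard `fractions` module provides exact rational arithmetic to compare against; the float behaviour shown is the same IEEE 754 double precision that JavaScript numbers use.

```python
from fractions import Fraction

# Floats: 0.1 and 0.2 have no exact binary representation,
# so their sum is only an approximation of 0.3.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Exact rational arithmetic: no rounding happens at all.
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```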

Floating-point numbers are stored in "scientific notation" with a fixed accuracy, which depends on the datatype. Roughly speaking, they are stored in the form $\alpha \cdot 2^\beta$, where $1 \leq \alpha < 2$, $\beta$ is an integer, and both are stored in a fixed number of bits. This limits the accuracy of $\alpha$ and the range of $\beta$: if $\alpha$ is stored using $a$ bits (as $1.x_1\ldots x_a$) then it always expresses a fraction whose denominator is $2^a$, and if $\beta$ is stored using $b$ bits then it is always in the range $-2^{b-1},\ldots,2^{b-1}-1$.
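As an illustration of this representation, the sketch below uses Python's built-in `float.hex()`, `math.frexp` and `float.as_integer_ratio()` to expose the stored $\alpha \cdot 2^\beta$ form of the 64-bit double `0.1`; the exact numerals shown are specific to IEEE 754 double precision.

```python
import math

x = 0.1

# float.hex() shows the stored form directly: the hexadecimal mantissa
# (alpha, of the form 1.xxx...) and the power of two (beta, after the 'p').
print(x.hex())            # 0x1.999999999999ap-4   i.e. alpha ~ 1.6, beta = -4

# math.frexp gives a similar decomposition, normalised to 0.5 <= m < 1 instead.
m, e = math.frexp(x)      # x == m * 2**e
print(m, e)               # 0.8 -3

# The value actually stored is a fraction whose denominator is a power of two:
num, den = x.as_integer_ratio()
print(num, den == 2**55)  # 3602879701896397 True  (0.1 is really num / 2**55)
```

In other words, the float `0.1` is not one tenth but the nearest representable fraction with a power-of-two denominator, which is exactly the limitation described above.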

Due to the limited accuracy of floating-point numbers, arithmetic on these numbers is only approximate, leading to numerical inaccuracies. When developing algorithms, you have to keep that in mind. There is actually an entire area in computer science, numerical analysis, devoted to such issues.
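For example, repeatedly adding `0.1` never lands exactly on `1.0`, because every intermediate result is rounded to the nearest representable double. A short Python sketch, again purely illustrative:

```python
from fractions import Fraction

# Summing 0.1 ten times with floats drifts away from 1.0 ...
total = 0.0
for _ in range(10):
    total += 0.1
print(total)              # 0.9999999999999999
print(total == 1.0)       # False

# ... while the same sum with exact fractions is exactly 1.
exact = sum(Fraction(1, 10) for _ in range(10))
print(exact == 1)         # True
```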

Licensed under: CC-BY-SA with attribution