Question

I like radians just as much as the next guy, and typically prefer to use them over degrees, but why do we use radians in programming?

To rotate something 180 degrees, you need to rotate it by 3.14159265.... Sure, most languages have some kind of constant for pi, but why do we ever want to use irrational numbers like pi when we can instead use integers, especially for simple programs?

We're relying on the computer to say that 3.14159265 is close enough to pi that functions like sine and cosine return the proper values, but the more precisely the computer evaluates them, the more that small difference from pi shows up (sin(3.14159265) = 0.00000000358979303). This isn't an issue when using 180 degrees.
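A quick Python check illustrates the residual described above (the exact digits depend on the platform's math library, so treat them as approximate):

```python
import math

# 3.14159265 is a truncation of pi, so sin() of it is small but nonzero:
# it differs from pi by about 3.59e-9, and sin(pi - x) is approximately x.
residual = math.sin(3.14159265)
print(residual)  # on the order of 3.6e-9, not 0.0
```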


Solution

It actually is an issue; it just shows up in different ways, especially if you don't stick to 90-degree increments.
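A minimal Python sketch of the point: the same representation error exists in radians too, and degrees don't escape it once you leave the special angles. (The printed magnitudes are platform-dependent approximations.)

```python
import math

# Even the language's own pi constant is only the nearest double to pi,
# so sin(pi) is tiny but nonzero:
print(math.sin(math.pi))  # on the order of 1e-16, not 0.0

# Off the 90-degree increments, degree inputs hit the same problem:
# 60 degrees converts to a radian value that isn't exactly representable,
# so the result is only approximately 0.5.
print(math.cos(math.radians(60)))  # close to, but not exactly, 0.5
```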

Ultimately, it comes down to this: the mechanisms used to compute trig functions are defined in terms of radians (even when implemented in a CPU's microcode; a numerical methods text covers the details, but they really are done in radians). Working in degrees therefore requires constant conversions between the two, leading to cumulative error. Since floating point (and transcendental functions in particular) already has plenty of error built in, adding that conversion on top both slows things down and adds even more avoidable error.
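The cumulative-conversion effect can be sketched like this (a contrived example: rotating in 1-degree steps and converting on every step, versus converting the total angle once):

```python
import math

DEG2RAD = math.pi / 180.0  # the conversion factor, itself already rounded

# Convert on every step: 360 separate additions, each carrying rounding error.
stepped = 0.0
for _ in range(360):
    stepped += 1.0 * DEG2RAD

# Convert once at the end.
once = 360.0 * DEG2RAD

# Both should equal 2*pi; the per-step conversions accumulate extra error.
print(abs(stepped - once))  # small, but illustrates the drift
```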

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow