Question

Are there any situations where it would make more sense to use a single datatype instead of a double? From my searching, the disadvantage to a double is that it requires more space, which isn't a problem for most applications. In that case, should all floating point numbers be doubles?

A little background info: I'm working with an application that deals with a lot of data about coordinates and chemicals. A few customers have noticed that when importing spreadsheets of data, some values with high precision are rounded down to the precision of a single.
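To make the rounding concrete, here is a minimal C sketch (the coordinate value is made up purely for illustration):

    #include <stdio.h>

    int main(void) {
        /* A hypothetical high-precision coordinate from a spreadsheet. */
        double imported = 51.4778123456789;
        float  stored   = (float)imported;  /* what a single-typed column keeps */

        printf("as double: %.13f\n", imported);
        printf("as single: %.13f\n", stored);  /* only ~7 significant digits survive */
        return 0;
    }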


Solution

For most desktop applications, yes.

Though if you have a huge array of them, halving the size of that array can be significant enough to be worthwhile if you don't need the precision.

Especially given that pretty much all consumer desktops have double-precision floating-point arithmetic done in hardware.
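As a rough illustration of that halving, a C sketch (the array length is an arbitrary example):

    #include <stdio.h>

    #define N 1000000  /* a million values; an arbitrary example size */

    int main(void) {
        printf("float  array: %zu bytes\n", N * sizeof(float));   /* ~4 MB */
        printf("double array: %zu bytes\n", N * sizeof(double));  /* ~8 MB */
        return 0;
    }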

OTHER TIPS

From this .NET article:

Data Type Width

The most efficient data types are those that use the native data width of the run-time platform. On current platforms, the data width is 32 bits, for both the computer and the operating system.

Consequently, Integer is currently the most efficient data type in Visual Basic .NET. Next best are Long, Short, and Byte, in that order of efficiency. You can improve the performance of Short and Byte by turning off integer overflow checking, for example by setting the RemoveIntegerChecks property, but this incurs the risk of incorrect calculations due to undetected overflows. You cannot toggle this checking on and off during run time; you can only set its value for the next build of your application.

If you need fractional values, the best choice is Double, because the floating-point processors of current platforms perform all operations in double precision. Next best are Single and Decimal, in that order of efficiency.

As Mark says in his comment, space can be an issue on memory-constrained systems. You may also want to index or sort a list, and why do that on doubles if you can store your values in singles?

On some hardware, arithmetic involving double values may take longer than that involving single values, but most recent FPUs have a single native data type (e.g., the 80-bit extended-precision format of the x87 FPU on x86) that is used internally for calculations regardless of the in-memory data type. In other words, "FPU calculations will be faster with single precision" is generally not a reason to use single precision on modern hardware.
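One way to check what precision your platform actually evaluates in is the standard FLT_EVAL_METHOD macro from C99; a minimal sketch:

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        /* FLT_EVAL_METHOD (C99): 0 = evaluate in the declared type
           (typical of SSE-based x86-64), 2 = evaluate in long double
           (typical of the 80-bit x87 FPU), -1 = indeterminate. */
        printf("FLT_EVAL_METHOD = %d\n", (int)FLT_EVAL_METHOD);
        printf("long double mantissa: %d bits\n", LDBL_MANT_DIG);
        return 0;
    }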

That said, in addition to the "uses less memory" reasons elaborated on in the other answers, there is a very practical reason when it comes to SIMD vector instructions like SSE and AltiVec: single precision is likely to be twice as fast as double precision, since the instructions operate on vectors of fixed size, and you can pack twice as many single-precision values into one vector while the processing time typically remains the same.

For example, with a 128-bit vector unit that completes a vector multiplication in 2 clock cycles, you could get a throughput of 2 single-precision multiplications per clock versus 1 double-precision multiplication, since you can fit four singles in a vector versus two doubles.
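As a sketch of that difference using SSE intrinsics (assumes an x86 compiler with SSE2 available; the function names are illustrative):

    #include <emmintrin.h>  /* SSE2: provides both _mm_mul_ps and _mm_mul_pd */

    /* One 128-bit multiply covers FOUR single-precision values... */
    void mul4_singles(const float *a, const float *b, float *out) {
        _mm_storeu_ps(out, _mm_mul_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
    }

    /* ...but only TWO double-precision values per instruction. */
    void mul2_doubles(const double *a, const double *b, double *out) {
        _mm_storeu_pd(out, _mm_mul_pd(_mm_loadu_pd(a), _mm_loadu_pd(b)));
    }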

A similar effect occurs with memory bandwidth, and it is not specific to vector processing: if you have large arrays of doubles, they will not only take twice the space but may also take up to twice as long to process when your algorithm is bandwidth-constrained (which is increasingly likely, given the growing sizes and decreasing latencies of vector processing units).
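A rough way to observe this is to stream comparable float and double arrays and time both passes; a sketch (both loops accumulate in double so that only the memory traffic differs, and the timings are entirely machine-dependent):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 25)  /* 32M elements: large enough to spill out of cache */

    int main(void) {
        float  *f = malloc(N * sizeof *f);
        double *d = malloc(N * sizeof *d);
        if (!f || !d) return 1;
        for (size_t i = 0; i < N; i++) { f[i] = 0.5f; d[i] = 0.5; }

        clock_t t0 = clock();
        double fsum = 0.0;
        for (size_t i = 0; i < N; i++) fsum += f[i];  /* streams 4 bytes/element */
        clock_t t1 = clock();
        double dsum = 0.0;
        for (size_t i = 0; i < N; i++) dsum += d[i];  /* streams 8 bytes/element */
        clock_t t2 = clock();

        printf("float  pass: sum=%.1f, %ld clocks\n", fsum, (long)(t1 - t0));
        printf("double pass: sum=%.1f, %ld clocks\n", dsum, (long)(t2 - t1));
        free(f);
        free(d);
        return 0;
    }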

Doubles take more space but the extra precision may or may not be necessary. I have done a lot of programming in the scientific world where floating point arithmetic is very common and have found that often you can do the calculations in double or higher precision but store the results as singles without ill effect.

Keep in mind that once the numbers are sucked into the FPU, they are expanded to very high precision anyway. That being said, it would be best to try whatever you are doing in both precisions and see if the results are comparable.
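A sketch of that experiment: compute the same statistic over single-precision data with float accumulators and again with double accumulators, then compare (the sample values are illustrative, chosen so the deviations are small relative to the magnitudes):

    #include <stdio.h>

    /* Naive variance with float accumulators. */
    static float variance_f(const float *x, int n) {
        float mean = 0.0f, ss = 0.0f;
        for (int i = 0; i < n; i++) mean += x[i];
        mean /= n;
        for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
        return ss / n;
    }

    /* Same data, but calculations done in double; the result can still
       be stored as a single if the two pipelines agree. */
    static double variance_d(const float *x, int n) {
        double mean = 0.0, ss = 0.0;
        for (int i = 0; i < n; i++) mean += x[i];
        mean /= n;
        for (int i = 0; i < n; i++) ss += (x[i] - mean) * (x[i] - mean);
        return ss / n;
    }

    int main(void) {
        float data[] = {10000.01f, 10000.02f, 10000.03f, 10000.04f};
        double vd = variance_d(data, 4);
        printf("float pipeline:  %g\n", variance_f(data, 4));
        printf("double pipeline: %g (stored as single: %g)\n", vd, (float)vd);
        return 0;
    }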

Unfortunately, computing is still an experimental science.

If you're coding OpenGL then it's normal to use GLSingle (i.e., single) rather than GLDouble. In nearly all circumstances single precision is more than enough for graphics applications and should be faster, although I confess I'm not certain of this on the latest generation of GPUs.
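In C the same type is spelled GLfloat; a minimal fixed-function sketch (assumes a current desktop OpenGL context; legacy client-state arrays are used for brevity):

    #include <GL/gl.h>

    /* A triangle specified with GLfloat (single precision), the usual
       choice for vertex data. */
    static const GLfloat triangle[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    void draw_triangle(void) {
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, triangle);  /* GL_FLOAT, not GL_DOUBLE */
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDisableClientState(GL_VERTEX_ARRAY);
    }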

My favourite citation on this is that single precision was sufficient to navigate to the moon and back, so in practice it's unusual for it to cause a real issue. That said, in most circumstances I'd reach for a double nowadays, as storage is cheap and there's less chance of any odd binary-to-decimal issues.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow