Question

Very large integers are often stored in memory as variable-length arrays of digits, rather than in the straightforward fixed-width binary representation used by primitive 'int' or 'long' types in languages such as Java or C. With this in mind, I would be interested to know algorithm(s) that can compute:

  1. How large an integer must become before it is more efficient to store it as a BigInteger (or equivalent arbitrary-precision arithmetic construct) with a given radix for the integer's digits;

  2. Which radix would be most efficient to store the digits of this large integer.

I have mentioned 'efficiency'; by this I mean mainly the amount of space such a BigInteger would consume, though I would also be interested in any comments on processing speed or time complexity.


Solution

An integer consumes the least space when stored in raw binary form (unless the integer is small and the data type far too wide for it, e.g. storing the value 1 in a 128-bit long long). Other representations do not save any memory; they exist to make working with such integers easier.

Stored byte by byte, this amounts to radix 256: each byte holds one of 256 possible values, as much as a byte can represent.
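To illustrate the space difference, here is a small sketch comparing one-byte-per-decimal-digit storage against Java's own base-256 representation via `BigInteger.toByteArray()` (the class name `RadixSize` and the 100-digit test value are illustrative choices, not from the original post):

```java
import java.math.BigInteger;

public class RadixSize {
    public static void main(String[] args) {
        // A 100-digit decimal number: 10^100 - 1, i.e. one hundred nines
        BigInteger n = BigInteger.TEN.pow(100).subtract(BigInteger.ONE);

        // One byte per decimal digit (e.g. storing it as a char array)
        int decimalDigits = n.toString(10).length();

        // Raw two's-complement bytes: effectively radix 256
        int rawBytes = n.toByteArray().length;

        System.out.println("base-10 digits: " + decimalDigits); // 100
        System.out.println("base-256 bytes: " + rawBytes);      // 42
    }
}
```

The ratio is roughly log10(256) ≈ 2.4: each byte of the raw form carries about 2.4 decimal digits' worth of information, which is why the binary form wins.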

OTHER TIPS

  1. A BigInt is never more efficient than one of the integer types supported directly by hardware. If the value fits in such a type, use it.
  2. The radix the hardware handles most efficiently is a power of 2, which is, in effect, binary.
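The first tip can be sketched as a small helper that prefers a primitive `long` whenever the value fits in 64 bits and falls back to `BigInteger` only when it does not (the helper name `compactForm` is a hypothetical example, not an API from the original answer):

```java
import java.math.BigInteger;

public class FitsInLong {
    // Hypothetical helper: prefer the hardware-supported long when the value fits.
    static Object compactForm(BigInteger n) {
        // bitLength() excludes the sign bit, so values with bitLength <= 63
        // fit in a signed 64-bit long.
        if (n.bitLength() <= 63) {
            return n.longValueExact(); // fixed 8 bytes, direct hardware arithmetic
        }
        return n; // too big: keep the arbitrary-precision form
    }

    public static void main(String[] args) {
        System.out.println(compactForm(BigInteger.valueOf(42)).getClass());
        System.out.println(compactForm(BigInteger.TEN.pow(30)).getClass());
    }
}
```

A `long` occupies a fixed 8 bytes and its arithmetic is a single machine instruction, whereas a `BigInteger` carries object and array overhead on top of its digit storage, so the primitive wins whenever it suffices.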
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow