Question

I am looking for a streaming algorithm for converting a decimal number to a hexadecimal number. I am assuming the input is a string representation of a decimal number (i.e., each character is a decimal digit) in digit-reversed order, and I'd like the output to be a string representation of a hexadecimal number (i.e., each character is a hex digit) in digit-reversed order.

By streaming, I mean that I'd like the algorithm to start producing hexadecimal digits without having to look at the entire decimal-string input first (i.e. up to some fixed number of digits of input can be looked at before producing digits of output); and ideally with only a constant amount of memory.

Is there a way to do this?

I know one can convert from hexadecimal to binary in a streaming way. For single-digit numbers, we can use the following table:

hex -> bin
0 -> 0000
1 -> 0001
2 -> 0010
3 -> 0011
4 -> 0100
5 -> 0101
6 -> 0110
7 -> 0111
8 -> 1000
9 -> 1001
a -> 1010
b -> 1011
c -> 1100
d -> 1101
e -> 1110
f -> 1111

For multiple-digit numbers, we can do the same by applying the mapping above to each hexadecimal digit individually.

1234 -> 1, 2, 3, 4 -> 0001, 0010, 0011, 0100 -> 0001001000110100
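The digit-by-digit mapping can be sketched as a tiny generator (a sketch of my own; the function name is not from the question). Because each hex digit maps to its 4-bit group independently, one digit of lookahead and constant memory suffice:

```python
# Streaming hex -> binary: each hex digit expands to 4 bits on its own,
# so we can emit output as soon as each input digit arrives.
def hex_to_bin_stream(hex_digits):
    for d in hex_digits:                 # consume one hex digit at a time
        yield format(int(d, 16), "04b")  # emit its 4-bit group immediately

print("".join(hex_to_bin_stream("1234")))  # -> 0001001000110100
```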

Is there a way to do something like this, converting from decimal to hexadecimal?


Solution

I don't think this is possible. Changing a high-order digit in a base-10 number -- say, changing 5000000 to 6000000 -- can change bits in its binary equivalent all the way from high-order bits to fairly-low-order bits:

  • 5000000 = 10011000100101101000000
  • 6000000 = 10110111000110110000000

It doesn't change the very lowest bits: since $10 = 5\times 2$, the digit in position $k$ of a decimal number contributes a multiple of $10^{k-1} = (5\times 2)^{k-1} = 5^{k-1}2^{k-1}$, so changing it can never change the lowest $k-1$ bits of the binary representation. But it can change the $k$-th bit, and bits in positions up to $\log_2(9\times 10^{k-1}) = (\log_2 10)(k-1) + \log_2 9 \approx 3.322(k-1) + 3.170$. This holds for every $k$, so the range of affected bits grows without bound as $k$ increases: each decimal digit read pins down only about one more low-order bit, while the number gains about $3.32$ bits of value per digit, so no fixed amount of lookahead lets the output keep pace with the input.
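The claim can be checked numerically (an illustration of my own): the $5 \to 6$ change above is in the millions digit, i.e. decimal position $k = 7$, so the lowest $k-1 = 6$ bits must survive, and the lowest bit that actually changes is exactly bit 6:

```python
# Which bits differ between 5000000 and 6000000?
a, b = 5000000, 6000000
diff = a ^ b                                      # bits where they disagree
lowest_changed = (diff & -diff).bit_length() - 1  # index of lowest set bit
print(bin(a))
print(bin(b))
print(lowest_changed)  # -> 6
```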

If you were, say, trying to convert between base 4 and base 8, you could group each block of 3 base-4 digits into a base-64 "superdigit", which you could then convert back down to 2 base-8 digits since 8^2 = 64.
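That trick can be sketched as follows (my own sketch, with digits in the question's least-significant-first order; the function name is invented). Every block of 3 base-4 digits is one base-64 "superdigit" ($4^3 = 64$), which expands to exactly 2 base-8 digits ($8^2 = 64$), so 3 digits of lookahead and constant memory suffice:

```python
# Streaming base-4 -> base-8 via base-64 superdigits, digits LSB-first.
def base4_to_base8_stream(digits4):
    block = []
    for d in digits4:
        block.append(d)
        if len(block) == 3:                # one full superdigit gathered
            s = block[0] + 4 * block[1] + 16 * block[2]  # value in 0..63
            yield s % 8                    # low base-8 digit first
            yield s // 8
            block = []
    if block:                              # pad a final partial block with zeros
        s = sum(d * 4 ** i for i, d in enumerate(block))
        yield s % 8
        yield s // 8
```

For example, 100 is 1210 in base 4, read least-significant-first as [0, 1, 2, 1]; the stream yields [4, 4, 1, 0], i.e. 0144 in base 8 read least-significant-first, and indeed 100 = 144 in base 8.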

But there is no "right" superdigit base for the bases 10 and 2, since 10 contains 5 as a factor and 2 does not: every positive integer power of 10 contains 5 as a prime factor, and no positive integer power of 2 does, so by the uniqueness of prime factorisations there is no integer that is a power of both 2 and 10.

OTHER TIPS

You can't do this: with the digits fed most-significant-first, if you don't know whether the input has k digits or k+1 digits, the additional decimal digit changes all or most of the hexadecimal digits. For example 100 -> 64, but 1000 -> 3e8.

Also, consider the numbers $16^k-1$ and $16^k$. In hexadecimal, one is $k$ f's, the other is a 1 followed by $k$ zeroes. Since $16^k$ doesn't end in a zero in decimal, the two numbers agree in every decimal digit except the last (units) digit. So if the digits are fed most-significant-first, all of the decimal input must be processed before we can output even the first hexadecimal digit.

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange