Question

Consider the problem of representing numbers from the range $\{1,\ldots,n\}$ in memory.

Obviously, exact representation of such a number requires $\lceil\log_2(n)\rceil$ bits.

In contrast, assume we are allowed to use compressed representations such that reading the stored value yields a 2-approximation of the original number. Now we can encode each number using $O(\log\log n)$ bits. For example, given an integer $x\in\{1,\ldots,n\}$, we can store $z = \text{Round}(\log_2 x)$. When asked to reconstruct an approximation of $x$, we compute $\widetilde{x} = 2^z$. Since $|z - \log_2 x| \le \tfrac{1}{2}$, the reconstruction satisfies $\widetilde{x}/x \in [2^{-1/2}, 2^{1/2}]$, which is in particular a 2-approximation. Moreover, $z$ lies in the range $\{0,1,\ldots,\lceil\log_2 n\rceil\}$ and thus requires only $O(\log\log n)$ bits to represent.
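
For concreteness, here is a minimal Python sketch of the scheme just described (the names `encode` and `decode` are mine, chosen for illustration):

```python
import math

def encode(x: int) -> int:
    """Store z = Round(log2 x): an integer in {0, ..., ceil(log2 n)},
    so it fits in O(log log n) bits."""
    return round(math.log2(x))

def decode(z: int) -> float:
    """Reconstruct the approximation x~ = 2^z."""
    return 2.0 ** z

# Every x in {1, ..., n} is recovered within a factor of sqrt(2) <= 2.
for x in [1, 3, 7, 100, 10**6]:
    approx = decode(encode(x))
    assert 2 ** -0.5 <= approx / x <= 2 ** 0.5
    print(x, approx)
```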

In general, given a parameter $\epsilon>0$, what is the minimal number of bits required to store an approximate representation of an integer from the above range, so that its value can be reconstructed up to a multiplicative error of $(1+\epsilon)$?

The above example shows that for $\epsilon=1$, approximately $\log\log n$ bits suffice, and this presumably holds asymptotically for every constant $\epsilon$. But how does $\epsilon$ affect the memory? For instance, does it require $\Theta(\frac{1}{\epsilon^2}\log\log n)$ bits?
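
To make the question concrete, here is a sketch of the natural generalization of the scheme above, simply replacing base $2$ with base $1+\epsilon$ (the names are illustrative, and this only describes the obvious construction, not necessarily an optimal one):

```python
import math

def encode_eps(x: int, eps: float) -> int:
    """Store z = Round(log_{1+eps} x): an integer in
    {0, ..., ceil(log_{1+eps} n)}, so roughly
    log2(log n) + log2(1/eps) bits suffice."""
    return round(math.log(x, 1 + eps))

def decode_eps(z: int, eps: float) -> float:
    """Reconstruct x~ = (1+eps)^z, within a factor (1+eps)^(1/2) of x."""
    return (1 + eps) ** z

eps = 0.1
for x in [1, 3, 7, 100, 10**6]:
    approx = decode_eps(encode_eps(x, eps), eps)
    assert (1 + eps) ** -0.5 <= approx / x <= (1 + eps) ** 0.5
    print(x, round(approx, 2))
```

If this counting is right, the construction gives an upper bound of $O(\log\log n + \log\frac{1}{\epsilon})$ bits, so the question is really whether a matching lower bound holds.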
