Question

In trying to understand the relationships between Huffman Coding, Arithmetic Coding, and Range Coding, I began to think of the shortcomings of Huffman coding to be related to the problem of fractional bit-packing.

That is, suppose a symbol can take 240 possible values. Encoding it into bits, you would be stuck with 8 bits per symbol, even though you do not need a "full" 8, since 8 bits can express 256 values: the symbol really only needs log2(240) ≈ 7.91 bits. A solution to this is something I've seen referred to as "fractional bit packing", where you "bitshift" by a non-power-of-two amount using multiplication. Just as multiplying by a power of two is a shift — x * 2 == x << 1, x * 4 == x << 2, and so on — you can "shift" by a non-power-of-two by multiplying, and thereby pack fractional-bit-sized symbols.
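To make the idea concrete, here is a minimal sketch in Python (the names `pack` and `unpack` are mine, purely illustrative): the symbol stream is treated as the digits of one big base-240 integer, so each "shift" is a multiply by 240 rather than by 256.

```python
import math

def pack(symbols, base=240):
    # Treat the symbols as digits of one large base-240 number.
    # Multiplying by 240 is the non-power-of-two analogue of
    # x << 8 (which is x * 256).
    n = 0
    for s in symbols:
        assert 0 <= s < base
        n = n * base + s
    return n

def unpack(n, count, base=240):
    # Recover `count` symbols by repeated divmod. The count must be
    # stored separately, since leading zero symbols vanish in the integer.
    out = []
    for _ in range(count):
        n, s = divmod(n, base)
        out.append(s)
    return list(reversed(out))

symbols = [17, 239, 0, 100, 5]
assert unpack(pack(symbols), len(symbols)) == symbols

# 100 symbols naively cost 100 * 8 = 800 bits, but the packed integer
# never exceeds ceil(100 * log2(240)) = 791 bits.
worst_case = pack([239] * 100)
print(worst_case.bit_length(), math.ceil(100 * math.log2(240)))
```

The savings per symbol is small here (about 0.09 bits), but it compounds over a long stream, which is exactly the inefficiency the question is about.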

The problem with Huffman coding is similar: its codes must each be a whole number of bits long, so it suffers the same packing inefficiency. However, you can't just apply fractional bit packing, because that solution assumes fixed-size symbols, while Huffman codes are variable-length.
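A small sketch of that inefficiency (illustrative code, not from any particular library): for a skewed distribution, the ideal code length of the most likely symbol is a fraction of a bit, but Huffman must still spend at least one whole bit on it.

```python
import heapq, math
from itertools import count

def huffman_lengths(probs):
    """Return the whole-bit code length Huffman assigns to each symbol."""
    tick = count()  # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {i: 0}) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one level deeper.
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

probs = [0.9, 0.05, 0.05]
lengths = huffman_lengths(probs)
avg = sum(p * lengths[i] for i, p in enumerate(probs))
entropy = -sum(p * math.log2(p) for p in probs)
print(avg, entropy)  # ≈ 1.1 bits/symbol vs ≈ 0.569 bits of entropy
```

The 0.9-probability symbol ideally needs only about 0.15 bits, but Huffman gives it a 1-bit code, so the average code length (1.1 bits) is nearly double the entropy (about 0.57 bits). Arithmetic coding closes this gap by, in effect, spending fractional bits.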

My question is: are there any papers or techniques that improve on Huffman coding with an idea similar to fractional bit packing, to achieve something close to arithmetic coding? (Or any results to the contrary?)

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange