Question

There are ten different compression levels for DEFLATE (0 = no compression and fastest, 9 = best compression and slowest). What is the best way to determine the level that was used to produce a given piece of raw DEFLATE data?

One obvious (yet slow) method would be to try each level in turn and compare the recompressed output against the original. As a side question, is it guaranteed that the size of the compressed data for a given file is non-increasing going from compression level 0 to 9? If so, a binary search could speed this procedure up by a factor of two or three.
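
A minimal sketch of that brute-force check using Python's zlib module, assuming the stream is raw DEFLATE (hence wbits = -15) and was produced by the same zlib build with default settings; a byte-for-byte match only identifies the level under those assumptions, otherwise comparing sizes is the best you can do:

    import zlib

    def candidate_levels(raw_deflate: bytes) -> list:
        """Recompress at every level and return the levels whose output
        matches the input byte-for-byte. Assumes the same zlib build and
        default memLevel/strategy; wbits=-15 selects a raw DEFLATE stream."""
        original = zlib.decompress(raw_deflate, -15)
        matches = []
        for level in range(10):  # levels 0..9
            co = zlib.compressobj(level, zlib.DEFLATED, -15)
            if co.compress(original) + co.flush() == raw_deflate:
                matches.append(level)
        return matches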

Solution 2

Other than the slow method, no.

No, there is no guarantee that the compressed size is monotonically non-increasing in the level, so a binary search is not reliable. In practice, though, non-monotonic cases are fairly rare.
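
A quick way to see this empirically is to compress a sample at every level and test whether the size ever goes up; this sketch assumes raw DEFLATE output via Python's zlib:

    import zlib

    def sizes_by_level(data: bytes) -> list:
        """Raw-DEFLATE compressed size of `data` at levels 0..9."""
        sizes = []
        for level in range(10):
            co = zlib.compressobj(level, zlib.DEFLATED, -15)
            sizes.append(len(co.compress(data) + co.flush()))
        return sizes

    def non_increasing(sizes: list) -> bool:
        """True if the size never grows as the level increases."""
        return all(a >= b for a, b in zip(sizes, sizes[1:]))

Most inputs pass this test, but a single counterexample is enough to make binary search over the levels unsound.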

Other tips

If you only have the compressed data, it does not contain that information. The compression level is a parameter of the compressor only, so it is not encoded in the raw compressed stream.

However, a wrapper such as zlib does add a header that records an approximate compression level (see the sketch after the quoted excerpt). From https://www.rfc-editor.org/rfc/rfc1950 :

  FLEVEL (Compression level)
     These flags are available for use by specific compression
     methods.  The "deflate" method (CM = 8) sets these flags as
     follows:

        0 - compressor used fastest algorithm
        1 - compressor used fast algorithm
        2 - compressor used default algorithm
        3 - compressor used maximum compression, slowest algorithm

     The information in FLEVEL is not needed for decompression; it
     is there to indicate if recompression might be worthwhile.
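
FLEVEL occupies the top two bits of the second header byte (FLG), so reading it is straightforward. A minimal sketch following the RFC 1950 layout; note that this only works on zlib-wrapped streams, not raw DEFLATE, and it only yields the coarse four-value hint, not the exact 0-9 level:

    def zlib_flevel(stream: bytes) -> int:
        """Extract FLEVEL (0..3) from a zlib header per RFC 1950:
        byte 0 is CMF, byte 1 is FLG; FLEVEL is bits 6-7 of FLG."""
        if len(stream) < 2:
            raise ValueError("too short for a zlib header")
        cmf, flg = stream[0], stream[1]
        if (cmf * 256 + flg) % 31 != 0:  # FCHECK consistency test
            raise ValueError("not a valid zlib header")
        if cmf & 0x0F != 8:  # CM = 8 is the "deflate" method
            raise ValueError("compression method is not deflate")
        return (flg >> 6) & 0x03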

If you don't use a library that adds an informational header, you could implement one yourself (if your application really needs it). It's just a matter of prepending an extra byte or two at the beginning.
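
A hedged sketch of that idea, using a made-up one-byte prefix (not any standard format) to record the level alongside a raw DEFLATE payload:

    import zlib

    def pack(data: bytes, level: int) -> bytes:
        """Prepend a single level byte (a format invented here) to raw DEFLATE."""
        co = zlib.compressobj(level, zlib.DEFLATED, -15)
        return bytes([level]) + co.compress(data) + co.flush()

    def unpack(blob: bytes):
        """Return (level, data) by reading the prefix byte back."""
        return blob[0], zlib.decompress(blob[1:], -15)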
