Question

I am looking at the problem of reducing storage space when storing multiple images together as a single bigger image. The basic intuition is that images tend to have some similarities (for example, images taken at the same location or around the same time), and the question is whether we can exploit this similarity to save space.

For instance, for JPG-encoded images, the overall flow is: input JPG images -> convert each image into RGB tiles -> reorganize similar RGB tiles together -> transform back to JPG format. Naturally, when retrieving images, we will need to reverse the process.
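Roughly, I imagine the forward path looking like this (a minimal sketch with Pillow and NumPy; the tile size, the mean-colour similarity key, the square packing layout, and the file names `a.jpg`, `b.jpg`, `packed.jpg` are all placeholders, and a real system would also have to record the original tile order so the process can be reversed):

```python
# Sketch of the forward path: decode JPEGs, cut into tiles, group
# look-alike tiles next to each other, and re-encode the packed result
# as one JPEG. Tile size, similarity key and layout are arbitrary choices.
import math
import numpy as np
from PIL import Image

TILE = 256  # tile edge in pixels (placeholder)

def tiles_of(path):
    """Decode one JPEG and yield its RGB tiles as NumPy arrays."""
    img = np.asarray(Image.open(path).convert("RGB"))
    h, w, _ = img.shape
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            yield img[y:y + TILE, x:x + TILE]

def pack(paths, out_path, quality=90):
    tiles = [t for p in paths for t in tiles_of(p)]
    # Crude "similarity": sort tiles by mean colour so similar-looking
    # tiles end up adjacent in the packed canvas.
    tiles.sort(key=lambda t: tuple(t.mean(axis=(0, 1))))
    cols = math.ceil(math.sqrt(len(tiles)))
    rows = math.ceil(len(tiles) / cols)
    canvas = np.zeros((rows * TILE, cols * TILE, 3), dtype=np.uint8)
    for i, t in enumerate(tiles):
        r, c = divmod(i, cols)
        canvas[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE] = t
    Image.fromarray(canvas).save(out_path, quality=quality)

pack(["a.jpg", "b.jpg"], "packed.jpg")
```

The sort-by-mean step is only there to get visually similar tiles adjacent; whether that actually helps the subsequent JPG pass is exactly my question.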

I just realized that JPG images are not well suited for this, as the encoder primarily works on small 8x8 blocks, and hence similarities at a bigger scale (at the tile level, with each tile spanning something like 256x256 pixels) are hardly exploited by JPG encoding.
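A quick way to see this (a sketch with Pillow and NumPy; the quality setting and the use of random tiles are arbitrary choices): a JPG built by repeating one 256x256 tile many times should come out at roughly the same size as a JPG of the same dimensions filled with unrelated tiles, because each 8x8 block is coded independently.

```python
# Check that JPEG does not reward repetition at the tile level:
# one 256x256 tile repeated 4x4 times compresses to roughly the same
# size as an image made of 16 unrelated tiles.
import io
import numpy as np
from PIL import Image

rng = np.random.default_rng(0)

def jpeg_size(arr, quality=90):
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="JPEG", quality=quality)
    return buf.tell()

tile = rng.integers(0, 256, (256, 256, 3), dtype=np.uint8)
repeated  = np.tile(tile, (4, 4, 1))                       # same tile 16 times
unrelated = rng.integers(0, 256, (1024, 1024, 3), dtype=np.uint8)

print("repeated tiles :", jpeg_size(repeated))
print("unrelated tiles:", jpeg_size(unrelated))   # similar size in practice
```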


Is there some other image encoding format besides JPG that can exploit this kind of similarity better when aggregating multiple images? For instance, would this work better with PNG encoding?

Other tips

I am not aware of an existing library or format that does what you want.

However, you may be interested in image reshuffling, a paradigm that has drawn some attention in computer graphics and vision research in the last five years or so.

The idea is to make up image content from tiles of an existing image, primarily for image editing (e.g. moving parts of the image around or making the image larger, much like Photoshop's "content-aware fill"). Most applications generate content for an image from the image itself, but there is no reason why you could not build one image from another one for compression. The compression would be lossy, of course, but you could try to compress the residuals afterwards.
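To make the storage side concrete, here is a toy, deliberately naive block-matching sketch in Python (NumPy/Pillow). The block size, the exhaustive block-aligned search, and the file names `reference.jpg` / `new.jpg` are placeholder assumptions; the actual reshuffling papers use much smarter patch search. What you would end up storing per image is essentially a list of source offsets plus a residual.

```python
# Toy illustration: approximate image B using fixed-size blocks copied
# from a reference image A, then keep the residual for later coding.
# Image dimensions are assumed to be multiples of the block size.
import numpy as np
from PIL import Image

B = 32  # block size (placeholder)

def to_array(path):
    return np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)

def reconstruct(ref, target):
    """Return (offsets, residual): for each target block, the top-left
    coordinate of the best-matching block in ref plus the difference."""
    h, w, _ = target.shape
    candidates = [(y, x) for y in range(0, ref.shape[0] - B + 1, B)
                         for x in range(0, ref.shape[1] - B + 1, B)]
    offsets, recon = [], np.zeros_like(target)
    for y in range(0, h - B + 1, B):
        for x in range(0, w - B + 1, B):
            blk = target[y:y + B, x:x + B]
            # Exhaustive search: pick the reference block with the
            # smallest sum of absolute differences (slow but simple).
            best = min(candidates,
                       key=lambda c: np.abs(ref[c[0]:c[0]+B, c[1]:c[1]+B] - blk).sum())
            offsets.append(best)
            recon[y:y + B, x:x + B] = ref[best[0]:best[0]+B, best[1]:best[1]+B]
    residual = target - recon          # small if the images are truly similar
    return offsets, residual

offsets, residual = reconstruct(to_array("reference.jpg"), to_array("new.jpg"))
```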

This is a nice overview of one of the algorithms.

Here and here are original research papers. The first one contains an example of creating one image from patches of a similar but different one.
