Question

Is there an efficient way to get a fingerprint of an image for duplicate detection?

That is, given an image file, say a JPEG or PNG, I'd like to be able to quickly calculate a value that identifies the image content and is fairly resilient to other aspects of the image (e.g. the metadata) changing. If it handles resizing, that's even better.

[Update] Regarding the metadata in JPEG files, does anyone know if it's stored in a specific part of the file? I'm looking for an easy way to ignore it - e.g. can I skip the first x bytes of the file, or take x bytes from the end of the file, to make sure I'm not including metadata?

Solution

A stab in the dark, if you are looking to get around metadata and size-related differences:

  1. Edge detection and scale-independent comparison
  2. Sampling and statistical analysis of grayscale/RGB values (average luminance, an averaged color map) - a sketch follows this list
  3. FFT and other transforms (a good article: Classification of Fingerprints using FFT)

And numerous others.
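For option 2, a minimal sketch with Pillow; the function name luminance_stats and the particular statistics it gathers are illustrative choices, not a standard:

```python
from PIL import Image, ImageStat

def luminance_stats(path):
    # Average luminance plus per-channel RGB means as a crude,
    # size-independent fingerprint (option 2 above). Which statistics
    # to keep is an assumption to tune for your own corpus.
    im = Image.open(path).convert("RGB")
    rgb_means = ImageStat.Stat(im).mean                  # [avg R, avg G, avg B]
    lum_mean = ImageStat.Stat(im.convert("L")).mean[0]   # average luminance
    return lum_mean, rgb_means
```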

Basically:

  1. Convert the JPG/PNG/GIF, whatever, into an RGB byte array that is independent of the encoding
  2. Use a fuzzy pattern classification method to generate a 'hash of the pattern' in the image ... not a hash of the RGB array as some suggest (an average-hash sketch follows this list)
  3. Then you want a distributed method of fast hash comparison based on a matching threshold on the encapsulated hash or encoding of the pattern. Erlang would be good for this :)
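One well-known fuzzy scheme along these lines is the average hash (aHash). A minimal sketch with Pillow, assuming an 8x8 grid; the Hamming-distance cutoff for calling two images duplicates is yours to tune:

```python
from PIL import Image

def average_hash(path, hash_size=8):
    # Step 1: decode to an encoding-independent grayscale array.
    im = Image.open(path).convert("L").resize((hash_size, hash_size), Image.LANCZOS)
    pixels = list(im.getdata())
    # Step 2: one bit per pixel -- above or below the mean. This is a
    # hash of the pattern, not of the raw bytes.
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(h1, h2):
    # Step 3: compare by bit distance; a small distance (the exact
    # threshold is an assumption to tune) suggests a duplicate.
    return bin(h1 ^ h2).count("1")
```

Hashes within a few bits of each other usually indicate the same picture at different sizes or encodings.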

Advantages are:

  1. Will, if you use any AI/training, spot duplicates regardless of encoding, size, aspect ratio, hue and luminance modification, dynamic range/subsampling differences, and in some cases perspective

Disadvantages:

  1. Can be hard to code ... something like OpenCV might help
  2. Probabilistic ... false positives are likely, but can be reduced with neural networks and other AI
  3. Slow, unless you can encapsulate pattern qualities and distribute the search (MapReduce style)

Check out image analysis books such as:

  1. Pattern Classification 2ed
  2. Image Processing Fundamentals
  3. Image Processing - Principles and Applications

And others.

If you are scaling the image, then things are simpler. If not, then you have to contend with the fact that scaling is lossy in more ways than sample reduction.

OTHER TIPS

Using the byte size of the image for comparison would be suitable for many applications. Another way would be to:

  1. Strip out the metadata.
  2. Calculate the MD5 (or other suitable hash) of the image data.
  3. Compare that to the MD5 (or whatever) of the potential duplicate image (provided you've stripped out the metadata for that one too). A sketch follows this list.
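A minimal sketch of steps 1 and 2 with Pillow; hashing the decoded pixels means the metadata never enters the digest, so there is nothing to strip by hand:

```python
import hashlib
from PIL import Image

def pixel_md5(path):
    # Hash only the decoded pixel data, ignoring all metadata.
    # Converting to a fixed mode (RGB) also normalizes palette and
    # format differences between files.
    im = Image.open(path).convert("RGB")
    return hashlib.md5(im.tobytes()).hexdigest()

# Two files decode to exactly the same pixels iff their digests match;
# resized or lossily re-encoded copies will still hash differently.
```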

Check out this paper on Robust Image Hashing.

You could use an algorithm like SIFT (Scale-Invariant Feature Transform) to determine key points in the pictures and match them.

See http://en.wikipedia.org/wiki/Scale-invariant_feature_transform

It is used, for example, when stitching images into a panorama to detect matching points in different images.
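A sketch with OpenCV, assuming a build where cv2.SIFT_create is available (OpenCV 4.4+); the 0.75 ratio and any "same image" cutoff on the returned count are assumptions to tune:

```python
import cv2

def sift_match_count(path_a, path_b, ratio=0.75):
    # Count SIFT keypoint matches that pass Lowe's ratio test.
    sift = cv2.SIFT_create()
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = 0
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good  # many good matches => likely the same scene/image
```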

You want to perform an image hash. Since you didn't specify a particular language I'm guessing you don't have a preference. At the very least there's a Matlab toolbox (beta) that can do it: http://users.ece.utexas.edu/~bevans/projects/hashing/toolbox/index.html. Most of the Google results on this are research results rather than actual libraries or tools.

The problem with MD5ing it is that MD5 is very sensitive to small changes in the input, and it sounds like you want to do something a bit "smarter."

Pretty interesting question. The fastest and easiest approach would be to calculate the CRC32 of the content byte array, but that would only work on 100% identical images. For a more intelligent comparison you would probably need some kind of fuzzy-logic analysis...
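For reference, the CRC32 approach takes only a few lines with Python's standard library:

```python
import zlib

def content_crc32(path):
    # CRC32 over the raw file bytes: very fast, but any difference --
    # including a metadata edit -- changes the checksum.
    with open(path, "rb") as f:
        return zlib.crc32(f.read()) & 0xFFFFFFFF
```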

I've implemented at least a trivial version of this: I transform and resize all images to a very small (fixed-size) black-and-white thumbnail, then compare those. It detects exact duplicates, resized duplicates, and duplicates converted to black and white. It gets a lot of duplicates without a lot of cost.
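A minimal sketch of that approach with Pillow; the 16x16 thumbnail size and the tolerance below are assumptions to tune against your own corpus:

```python
from PIL import Image

THUMB_SIZE = (16, 16)  # assumed; smaller is faster but fuzzier

def thumbnail_key(path):
    # Tiny fixed-size grayscale thumbnail used as the comparison key.
    im = Image.open(path).convert("L").resize(THUMB_SIZE, Image.LANCZOS)
    return list(im.getdata())

def looks_like_duplicate(path_a, path_b, tolerance=10):
    # Mean absolute pixel difference under the tolerance counts as a
    # duplicate; the tolerance value is a guess, not a standard.
    a, b = thumbnail_key(path_a), thumbnail_key(path_b)
    diff = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return diff < tolerance
```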

The easiest thing to do is to hash (e.g. with MD5) the image data, ignoring all other metadata. You can find many open-source libraries that can decode common image formats, so it's quite easy to strip the metadata.

But that doesn't work when the image itself is manipulated in any way, including scaling or rotation.

To do exactly what you want, you'd have to use image watermarking, but that's patented and can be expensive.

This is just an idea: possibly the low-frequency components present in the DCT of the JPEG could be used as a size-invariant identifier.
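That is essentially what pHash-style algorithms do. A sketch using SciPy's DCT; the 32-pixel working size and the 8x8 low-frequency block are common choices, not requirements:

```python
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def dct_hash(path, size=32, keep=8):
    # Downscale, take a 2-D DCT, keep the top-left (low-frequency)
    # keep x keep block, and set one bit per coefficient relative to
    # the block's median. Compare hashes by Hamming distance.
    im = Image.open(path).convert("L").resize((size, size), Image.LANCZOS)
    coeffs = dct(dct(np.asarray(im, dtype=float), axis=0, norm="ortho"),
                 axis=1, norm="ortho")
    low = coeffs[:keep, :keep].flatten()
    bits = low > np.median(low)
    return sum(1 << i for i, b in enumerate(bits) if b)
```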
