Question

I am looking for an algorithm that I can use to compare two images and determine if there is something significantly different between the two. By "significant", I mean, if you took two photos of a room and a large spider was clearly on the wall in one of them, you'd be able to detect it. I am not really interested in WHAT is detected or even where - just that there is something different. The algorithm would need to ignore brightness. If the room gets brighter or darker during the day, the algorithm should ignore it.

Even if you don't know of an algorithm, any hints in the right direction would help.

Thanks!

Solution

I'd try high-pass filtering your 2D data.

According to Fourier, every signal can be transformed into "frequency space" by analyzing which frequencies it contains. This also applies to 2D signals, such as images.

By means of a high-pass filter, you remove all low-frequency parts, such as constant offsets and slow gradients. Applied to an image, it can serve as a simple edge-detection algorithm. Looking at a sample might make it easier to understand:

[Image: high-pass filtering of images]

I took an image of a spider on a wall from somewhere on the web (top left), then decreased its brightness (lower left). To both versions I applied a high-pass filter using GIMP (this plugin). For both input images, the output looks very similar.

My recommendation: First apply a high-pass filter, then look at differences.
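
As a rough illustration of that recommendation (not part of the original answer), here is a minimal NumPy sketch that high-pass filters both photos via the FFT and then compares the results. It assumes both images are already loaded as same-sized grayscale float arrays; the cutoff radius and the decision threshold are tuning parameters you would have to pick yourself:

    import numpy as np

    def high_pass(gray, cutoff=10):
        # Transform to frequency space, suppress the lowest frequencies
        # (overall brightness, slow gradients), then transform back.
        f = np.fft.fftshift(np.fft.fft2(gray))
        rows, cols = gray.shape
        cy, cx = rows // 2, cols // 2
        y, x = np.ogrid[:rows, :cols]
        keep = (y - cy) ** 2 + (x - cx) ** 2 > cutoff ** 2
        return np.real(np.fft.ifft2(np.fft.ifftshift(f * keep)))

    def change_score(img_a, img_b, cutoff=10):
        # Compare the high-pass versions, not the raw photos, so that
        # global brightness changes largely cancel out.
        diff = np.abs(high_pass(img_a, cutoff) - high_pass(img_b, cutoff))
        return diff.mean()   # flag "something changed" when this exceeds a threshold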

Possible problems

As requested, here are some problems that I can imagine.

  • No sharp edges: if the object you want to detect doesn't have sharp edges, high-pass filtering will probably filter it out. But what objects could those be? They would have to be huge, flat (so as not to produce shadows) and unstructured.

  • Only color differs, not brightness: if the object differs from the background only in its color, but not in brightness, the grayscale conversion might be a problem. But if you run into this, just analyse the R, G, B data separately (see the sketch after this list); at least one channel should then help detect the object - otherwise, you can't see it anyway.
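
A possible sketch of that per-channel fallback, reusing the high_pass() function from the sketch above (again an illustration, not code from the answer):

    import numpy as np

    def change_score_per_channel(img_a, img_b, cutoff=10):
        # img_a, img_b: H x W x 3 arrays (one colour channel per plane).
        # high_pass() is the FFT-based filter sketched earlier.
        scores = []
        for c in range(3):
            diff = np.abs(high_pass(img_a[:, :, c], cutoff) -
                          high_pass(img_b[:, :, c], cutoff))
            scores.append(diff.mean())
        return max(scores)   # an object visible in any single channel shows up here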

Edit: As a reply to ???, if you also adjust the levels of the high-pass-filtered image (whose values are of course all around 0.5 * 256) by simply normalizing it back to the range 0..255, you get

[Image: with adjusted levels]

which probably isn't worse than your result. But high-pass filters are simple and, when implemented with an FFT, very fast.
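
The level adjustment can be approximated with a simple min-max stretch; this is only a sketch, not the exact operation GIMP performs:

    import numpy as np

    def normalize_levels(img):
        # The high-pass output clusters in a narrow band (around mid-grey in the
        # GIMP plugin's output); stretch it back to the full 0..255 range.
        img = img.astype(np.float32)
        lo, hi = img.min(), img.max()
        if hi == lo:
            return np.zeros(img.shape, dtype=np.uint8)   # flat image, nothing to stretch
        return ((img - lo) / (hi - lo) * 255).astype(np.uint8)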

Other Tips

If the camera is completely static and all differences are due to ambient lighting and/or camera exposure settings, then ignoring brightness (and contrast) can be done by normalizing the two images.

Subtract the respective image mean (average pixel value) from all pixels of each image and then take the difference. That takes care of brightness.

If you want to handle contrast too, then calculate the variance of each image (after bringing the mean to 0) and multiply the pixel values by the factor that brings both to the same variance. The difference will now be invariant to contrast as well (assuming no over- or under-exposed regions).
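
A minimal NumPy sketch of this normalization, assuming both images are same-sized grayscale arrays (the function names are just placeholders):

    import numpy as np

    def normalize(img):
        img = img.astype(np.float32)
        img -= img.mean()          # remove brightness: mean becomes 0
        std = img.std()
        if std > 0:
            img /= std             # remove contrast: variance becomes 1
        return img

    def difference_score(img_a, img_b):
        # After normalization, the difference is invariant to global
        # brightness and contrast changes (barring clipped regions).
        return np.abs(normalize(img_a) - normalize(img_b)).mean()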

A common approach with such a problem is to average the images taken by your camera over time, and detect any difference above a given threshold.

You need to keep an image in memory that will be the averaged image. Let's call it "avg".

Each time your camera takes a picture (call it "pic"), you:

  • Sum up the absolute pixel-value differences between "avg" and "pic".
    • If the sum is above a threshold, something is moving in front of the camera.
    • Otherwise, modify "avg" so that it converges slightly toward "pic". It's up to you to find the proper formula; avg = avg * 0.95 + pic * 0.05, for instance.

This way, your reference image changes over the day to adapt to sun and shadow changes.
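
A small sketch of that loop, assuming grayscale NumPy frames; the threshold and the 0.95 / 0.05 blend are tuning parameters, as noted above:

    import numpy as np

    THRESHOLD = 15.0      # mean absolute difference that counts as "something changed"
    avg = None            # the averaged reference image, updated over time

    def process_frame(pic):
        """pic: the latest camera frame as a grayscale float NumPy array."""
        global avg
        if avg is None:
            avg = pic.copy()              # first frame initializes the reference
            return False
        if np.abs(avg - pic).mean() > THRESHOLD:
            return True                   # something is moving in front of the camera
        # Otherwise let the reference drift slowly toward the new frame,
        # so gradual lighting changes over the day are absorbed.
        avg = avg * 0.95 + pic * 0.05
        return False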

How about removing the brightness component from the pixels:

Red_ratio = Red / (Red + Blue + Green)
Blue_ratio = Blue / (Red + Blue + Green)
Green_ratio = Green / (Red + Blue + Green)
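
For example, as a NumPy sketch (the small epsilon that avoids division by zero in dark pixels is an added detail, not part of the formulas above):

    import numpy as np

    def color_ratios(img_rgb):
        # img_rgb: H x W x 3 array; returns per-pixel channel ratios, which stay
        # the same when the overall brightness scales up or down.
        img = img_rgb.astype(np.float32)
        total = img.sum(axis=2, keepdims=True) + 1e-6   # epsilon avoids division by zero
        return img / total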