Question

I have to detect cars in a recorded video stream and provide some traffic data. I have read that background subtraction is the most important step, so that we can extract the foreground objects.

The question is, how do we do this for colour frames? The articles I have read mostly discuss grayscale (black and white) images.

I would like to use automatic background removal, which (in my understanding) uses frame differencing.

If I did the removal in grayscale, would I still be able to replay the video with the tracked object in colour? The point is being able to show the tracked car on the original video.


Solution

Grayscale is used because it reduces the amount of data to process without discarding the structural information that detection actually depends on.
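As a minimal sketch of that reduction: converting a colour frame to grayscale collapses three channels into one weighted-sum channel. The frame below is a made-up 4x4 stand-in, and the weights are the standard ITU-R BT.601 luminance coefficients (the same ones OpenCV's `cvtColor` uses).

```python
import numpy as np

# Hypothetical 4x4 RGB frame (H x W x 3, uint8) standing in for a video frame.
frame = np.array([[[200, 100, 50]] * 4] * 4, dtype=np.uint8)

def to_gray(rgb):
    # Standard luminance weights (ITU-R BT.601).
    return (0.299 * rgb[..., 0]
            + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)

gray = to_gray(frame)
print(gray.shape)  # (4, 4) -- one channel instead of three
```

One third of the data per pixel, but edges and intensity changes (what the subtraction step cares about) survive.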

Background extraction means recovering the "general appearance of the scene"; the foreground is everything added on top of it. Simply differencing frames will technically work, but averaging all of the frames gives you a much better estimate of the background. [This makes some assumptions, which I'll go into later.] Once you have the background, subtracting it from a frame and thresholding the difference gives you a mask; applying that mask to the frame separates out the foreground objects, which you can then use however you wish.
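The average-then-threshold pipeline can be sketched like this. The frame stack, object position, and threshold value are all made-up assumptions for illustration; note the mask is just per-pixel positions, so it applies equally well to the original colour frames, which answers the question above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of 50 grayscale frames (N x H x W): a static background
# around intensity 100 plus noise, with a bright "car" in frames 10-14.
frames = rng.normal(100, 2, size=(50, 60, 80))
frames[10:15, 20:30, 30:45] = 220

# 1. Average all frames to estimate the background.
background = frames.mean(axis=0)

# 2. Difference one frame against the background and threshold it.
frame = frames[12]
mask = np.abs(frame - background) > 30  # threshold is a tunable assumption

# 3. The mask marks foreground pixels; keep them, zero out the rest.
foreground = np.where(mask, frame, 0)
print(mask.sum())  # pixels flagged as foreground
```

Because the car is only present in 5 of 50 frames, it barely shifts the average, so the difference at its pixels stays large and the threshold cleanly isolates it.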

Colour is really only useful to you for tracking. [Mean-shift tracking on colour histograms will probably help you there.]

Assumptions: this assumes that you have enough video to compute an average scene, and that each foreground object eventually leaves the frame. [Otherwise it gets absorbed into the background, e.g. a broken-down vehicle left on the road.] How you average is another issue: significant seasonal changes can alter the background itself. [After winter, roads tend to crack.]
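One common way to handle those slow scene changes is a running (exponential) average instead of a one-shot mean: each new frame nudges the background estimate a little, so gradual changes get absorbed while brief foreground objects do not. This is a sketch; the learning rate `alpha` is an assumed tunable, and OpenCV's `accumulateWeighted` performs the same update.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    # Exponential moving average: small alpha = slow adaptation.
    return (1 - alpha) * background + alpha * frame

background = np.full((4, 4), 100.0)
# Simulate a long run of empty-road frames whose brightness has drifted to 120
# (e.g. a lighting or seasonal change): the estimate follows it.
for _ in range(200):
    background = update_background(background, np.full((4, 4), 120.0))
print(round(background[0, 0]))  # has converged to ~120
```

A car passing through for a handful of frames moves the estimate by only a few percent, so it never "burns in" the way it would with a naive cumulative average.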

Licensed under: CC-BY-SA with attribution