Question

I want to count the vehicles in a video. After frame differencing I get a grayscale, almost binary image. I have defined a Region of Interest to work on a specific area of each frame; the pixels of vehicles passing through the Region of Interest have values greater than 0, often greater than 40 or 50, because they appear white.
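For clarity, the ROI extraction looks roughly like this (the rectangle coordinates below are only placeholders, not my real values):

    // Take the Region of Interest as a header over the difference image (no data copied).
    // 'diffFrame' is the grayscale frame-difference image; coordinates are placeholders.
    cv::Rect roiRect(0, 200, 640, 120);        // x, y, width, height - tuned to the camera view
    cv::Mat matFrame = diffFrame(roiRect);     // matFrame is what the counting loop below scans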

My idea is that when a certain number of pixels are white within a specific interval of time (say 1-2 seconds), a vehicle must be passing, so I increment the counter.

What I want is to check whether white pixels are still arriving after 1-2 seconds. If no more white pixels arrive, it means the vehicle has passed and the next one is about to come, and at that point the counter should be incremented.

One method that came to my mind is to count the frames of the video and store the count in a variable called No_of_frames. Using that variable I think I can estimate the time that has passed: if No_of_frames is greater than, say, 20, then nearly one second has passed, since my video's frame rate is 25-30 fps.
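Roughly what I mean for the timing part (assuming the frame rate can be read from the capture; 'cap' is the CvCapture* opened further down, and 25 is only a fallback guess):

    // Estimate elapsed time from the number of processed frames.
    double fps = cvGetCaptureProperty(cap, CV_CAP_PROP_FPS);
    if (fps <= 0) fps = 25.0;                      // some files do not report a frame rate

    int No_of_frames = 0;
    // ... inside the per-frame loop:
    No_of_frames++;
    double seconds_elapsed = No_of_frames / fps;   // e.g. 50 frames at 25 fps = 2 seconds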

I am using Qt Creator on Windows 7 with OpenCV 2.3.1.

My code is something like:

    for (int i = 0; i < matFrame.rows; i++)
    {
        for (int j = 0; j < matFrame.cols; j++)
        {
            if (matFrame.at<uchar>(i, j) > 100) // pixel values greater than 100
            {                                   // are considered white
                whitePixels++;
            }
        }
    }

    // Here I want to use time. The 'if' statement should be something like:
    // if (total_no_of_white_pixels > 100 && no_white_pixel_came_for_2_secs)
    // which means that a vehicle has just passed, so increment the counter:
    if (/* this is the condition I am asking about */)
    {
        counter++;
    }
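Written out more completely, the logic I am trying to get at would be something like the following sketch (the pixel threshold and the 2-second gap are guesses on my part):

    // Sketch of the counting logic: a vehicle is "present" while enough white
    // pixels are seen in the ROI; once the ROI stays empty for ~2 seconds the
    // vehicle has passed and the counter is incremented.
    const int    WHITE_PIXEL_THRESHOLD = 100;   // pixels needed to say "something is there"
    const double GAP_SECONDS           = 2.0;   // empty time that ends one vehicle
    double fps = 25.0;                          // or read via CV_CAP_PROP_FPS as above

    bool vehicle_present = false;
    int  empty_frames    = 0;
    int  counter         = 0;

    // per frame:
    int whitePixels = cv::countNonZero(matFrame > 100);   // same test as the loop above

    if (whitePixels > WHITE_PIXEL_THRESHOLD)
    {
        vehicle_present = true;                 // a vehicle is inside the ROI
        empty_frames = 0;
    }
    else if (vehicle_present)
    {
        empty_frames++;
        if (empty_frames / fps >= GAP_SECONDS)  // no white pixels for ~2 seconds
        {
            counter++;                          // the vehicle has fully passed
            vehicle_present = false;
        }
    }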

Any other idea for counting the vehicles, better than mine, would be most welcome. Thanks in advance.

For background segmentation I am using the following algorithm, but it is very slow and I don't know why. The whole code is as follows:

// opencv2/video/background_segm.hpp OPENCV header file must be included.
IplImage*       tmp_frame = NULL;
CvCapture*      cap = NULL;
bool update_bg_model = true;

 Mat element = getStructuringElement( 0, Size( 2,2 ),Point() );
 Mat eroded_frame;
 Mat before_erode;
if( argc > 2 )
    cap = cvCaptureFromCAM(0);

else
//  cap = cvCreateFileCapture( "C:\\4.avi" );
   cap = cvCreateFileCapture( "C:\\traffic2.mp4" );

if( !cap )
{
    printf("can not open camera or video file\n");
    return -1;
}

tmp_frame = cvQueryFrame(cap);
if(!tmp_frame)
{
    printf("can not read data from the video source\n");
    return -1;
}

cvNamedWindow("BackGround", 1);
cvNamedWindow("ForeGround", 1);

CvBGStatModel* bg_model = 0;

for( int fr = 1; tmp_frame; tmp_frame = cvQueryFrame(cap), fr++ )
{
    if(!bg_model)
    {
        //create BG model
        bg_model = cvCreateGaussianBGModel( tmp_frame );
      //  bg_model = cvCreateFGDStatModel( temp );
        continue;
    }

    double t = (double)cvGetTickCount();
    cvUpdateBGStatModel( tmp_frame, bg_model, update_bg_model ? -1 : 0 );
    t = (double)cvGetTickCount() - t;
    printf( "%d. %.1f\n", fr, t/(cvGetTickFrequency()*1000.) );

    // erode the foreground mask (not the background) to remove small noise blobs
    before_erode = (Mat)bg_model->foreground;
    cv::erode( before_erode, eroded_frame, element );
    IplImage eroded_ipl = eroded_frame;   // Mat -> IplImage header for the C display API

    cvShowImage("BackGround", bg_model->background);
    cvShowImage("ForeGround", &eroded_ipl);
    char k = cvWaitKey(5);
    if( k == 27 ) break;
    if( k == ' ' )
    {
        update_bg_model = !update_bg_model;
        if(update_bg_model)
            printf("Background update is on\n");
        else
            printf("Background update is off\n");
    }
}
cvReleaseBGStatModel( &bg_model );
cvReleaseCapture(&cap);
return 0;

Solution

A great deal of research has been done on vehicle tracking and counting. The approach you describe appears to be quite fragile, and is unlikely to be robust or accurate. The main issue is using a count of pixels above a certain threshold, without regard for their spatial connectivity or temporal relation.
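For example, one step up from raw pixel counts is to group the foreground pixels into connected blobs and only accept blobs of a plausible size. A rough sketch using OpenCV's contour functions (the 'fgMask' input and the area limits are illustrative, not taken from your code; it needs <vector> and opencv2/imgproc/imgproc.hpp):

    // Group foreground pixels into blobs and keep only plausibly vehicle-sized ones.
    // 'fgMask' is a binary CV_8U foreground mask; the area limits are arbitrary.
    std::vector<std::vector<cv::Point> > contours;
    cv::Mat maskCopy = fgMask.clone();          // findContours modifies its input
    cv::findContours(maskCopy, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

    int blobs = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = cv::contourArea(contours[i]);
        if (area > 500 && area < 50000)         // reject noise specks and huge merged regions
            blobs++;
    }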

Frame differencing can be useful for separating a moving object from its background, provided the object of interest is the only (or largest) moving object.
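For reference, a minimal frame-differencing sketch in the C++ API ('prevFrame', 'currFrame' and the threshold of 40 are placeholders; it needs opencv2/imgproc/imgproc.hpp):

    // Simple frame differencing: pixels that changed between frames become white.
    cv::Mat grayPrev, grayCurr, diff, motionMask;
    cv::cvtColor(prevFrame, grayPrev, CV_BGR2GRAY);
    cv::cvtColor(currFrame, grayCurr, CV_BGR2GRAY);
    cv::absdiff(grayCurr, grayPrev, diff);                      // |current - previous|
    cv::threshold(diff, motionMask, 40, 255, CV_THRESH_BINARY); // 40 is an arbitrary cutoff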

What you really need is to first identify the object of interest, segment it from the background, and track it over time using an adaptive filter (such as a Kalman filter). Have a look at the OpenCV video reference. OpenCV provides background subtraction and object segmentation to do all the required steps.
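As a rough illustration only, and assuming an OpenCV 2.x build that ships cv::BackgroundSubtractorMOG2 (declared in opencv2/video/background_segm.hpp), the background-subtraction step can be written with the C++ API, which in practice is often faster than the legacy CvBGStatModel path:

    // Sketch: adaptive Gaussian-mixture background subtraction with the C++ API.
    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap("C:\\traffic2.mp4");   // same file as in the question
        if (!cap.isOpened()) return -1;

        cv::BackgroundSubtractorMOG2 bgSub;         // learned background model
        cv::Mat frame, fgMask;

        while (cap.read(frame))
        {
            bgSub(frame, fgMask);                   // update the model, get the foreground mask
            cv::erode(fgMask, fgMask, cv::Mat());   // remove small noise blobs
            cv::dilate(fgMask, fgMask, cv::Mat());  // restore the remaining blobs

            cv::imshow("ForeGround", fgMask);
            if (cv::waitKey(5) == 27) break;        // Esc quits
        }
        return 0;
    }

Note that MOG2 marks detected shadows with a mid-gray value by default, so thresholding the mask is usually needed before counting pixels or blobs.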

I suggest you read up on OpenCV (Learning OpenCV is a great read), and also on more general computer vision algorithms and theory; http://homepages.inf.ed.ac.uk/rbf/CVonline/books.htm has a good list.

OTHER TIPS

Normally they just put a small pneumatic pipe across the road (a soft pipe semi-filled with air), attached to a simple counter. Each vehicle passing over the pipe generates two pulses (first the front wheels, then the rear wheels). The counter records the number of pulses in specified time intervals and divides by 2 to get the approximate vehicle count.
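Just to spell out the arithmetic with illustrative numbers:

    // Pneumatic-tube counting: roughly two pulses (front axle + rear axle) per vehicle.
    int pulses_in_interval = 846;                      // hypothetical pulse count for one interval
    int approx_vehicles    = pulses_in_interval / 2;   // ~423 vehicles in that interval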

Licensed under: CC-BY-SA with attribution