I am trying to use OpenCV to automatically find and locate all of the parking spots in an empty parking lot.

Currently, I have code that thresholds the image, applies Canny edge detection, and then uses probabilistic Hough lines to find the lines that mark each parking spot.

The program then draws the detected lines and the endpoints of each line.

Here is the code:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"

#include <iostream>

using namespace cv;
using namespace std;

int threshold_value = 150;
int threshold_type = 0;;
int const max_value = 255;
int const max_type = 4;
int const max_BINARY_value = 255;

int houghthresh = 50;

char* trackbar_value = "Value";

char* window_name = "Find Lines";

int main(int argc, char** argv)
{
 const char* filename = argc >= 2 ? argv[1] : "pic1.jpg";
 VideoCapture cap(0);
 Mat src, dst, cdst, tdst, bgrdst;
 namedWindow( window_name, CV_WINDOW_AUTOSIZE );

 createTrackbar( trackbar_value,
          window_name, &threshold_value,
          max_value);

while(1)
{
 cap >> src;
 cvtColor(src, dst, CV_RGB2GRAY);
 threshold( dst, tdst, threshold_value, max_BINARY_value,threshold_type );
 Canny(tdst, cdst, 50, 200, 3);
 cvtColor(tdst, bgrdst, CV_GRAY2BGR);

  vector<Vec4i> lines;
  HoughLinesP(cdst, lines, 1, CV_PI/180, houghthresh, 50, 10 );
  for( size_t i = 0; i < lines.size(); i++ )
  {
    Vec4i l = lines[i];
    line( bgrdst, Point(l[0], l[1]), Point(l[2], l[3]), Scalar(0,255,0), 2, CV_AA);
    circle( bgrdst,
         Point(l[0], l[1]),
         5,
         Scalar( 0, 0, 255 ),
         -1,
         8 );
    circle( bgrdst,
         Point(l[2], l[3]),
         5,
         Scalar( 0, 0, 255 ),
         -1,
         8 );
  }

 imshow("source", src);
 imshow(window_name, bgrdst);

 waitKey(1);
}
 return 0;
}

Currently, my main problem is figuring out how to extrapolate the line data to find the locations of each parking space. My goal is to have OpenCV find the parking spaces and draw a rectangle around each space, with a number labeling each spot.

I think there are some major problems with the method I am currently using, because, as shown in the output images, OpenCV is detecting multiple points along each line rather than just the two endpoints. That might make it very hard to connect two adjacent endpoints.

I read something about using a convex hull, but I am not exactly sure what it does or how it works.

Any help will be appreciated. Here are the output images from my program: http://imageshack.us/photo/my-images/22/test1hl.png/

http://imageshack.us/photo/my-images/822/test2lw.png/

Solution

Consider thinning your binary image and then detecting the end points and the branch points. Here is one such result based on the images provided; end points are in red and branch points are in blue.

[image: thinned skeleton of the parking-lot markings, with end points marked in red and branch points in blue]
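A rough sketch of what this thinning and point-classification step could look like, assuming an OpenCV build that includes the opencv_contrib ximgproc module (OpenCV 3 or later) for cv::ximgproc::thinning; the neighbour-counting rule used here (one skeleton neighbour = end point, three or more = branch point) is the usual heuristic, and findSkeletonPoints is just an illustrative name:

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/ximgproc.hpp"   // thinning() lives in the opencv_contrib ximgproc module

#include <vector>

using namespace cv;

// Classify skeleton pixels by counting their 8-connected neighbours:
// exactly one neighbour -> end point (red), three or more -> branch point (blue).
void findSkeletonPoints(const Mat& binary,              // 0/255 thresholded image
                        std::vector<Point>& endPts,
                        std::vector<Point>& branchPts)
{
    Mat skel;
    ximgproc::thinning(binary, skel, ximgproc::THINNING_ZHANGSUEN);

    for (int y = 1; y < skel.rows - 1; ++y)
    {
        for (int x = 1; x < skel.cols - 1; ++x)
        {
            if (skel.at<uchar>(y, x) == 0)
                continue;

            int neighbours = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if ((dy != 0 || dx != 0) && skel.at<uchar>(y + dy, x + dx) > 0)
                        ++neighbours;

            if (neighbours == 1)
                endPts.push_back(Point(x, y));
            else if (neighbours >= 3)
                branchPts.push_back(Point(x, y));
        }
    }
}

In practice several adjacent skeleton pixels can satisfy the branch condition at a single junction, so nearby branch points usually need to be merged (for example by clustering points that lie within a few pixels of each other) before any matching step.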

Now you can find the locations of the parking spaces. A pair of blue dots is always connected by a single edge, and each blue dot is connected to either two or three red dots. There are several ways to find the parking space formed by two blue dots and two red dots; the simplest is along these lines: find the closest pair of red dots such that one red dot is connected to one of the blue dots and the other red dot is connected to the other blue dot. This step can also be complemented by checking how close to parallel the edges under consideration are.
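A minimal sketch of that matching step, assuming the connectivity has already been recovered from the skeleton (i.e. which red end points hang off each blue branch point); the Spot struct, matchSpot, and the adjacency lists are only illustrative names, not part of any OpenCV API:

#include "opencv2/core/core.hpp"

#include <cmath>
#include <limits>
#include <vector>

using namespace cv;

struct Spot { Point blueA, blueB, redA, redB; };   // four corners of one candidate space

static double dist(const Point& a, const Point& b)
{
    double dx = a.x - b.x, dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}

// For two branch (blue) points joined by a single edge, pick the closest pair of
// end (red) points such that one is attached to blueA and the other to blueB.
Spot matchSpot(const Point& blueA, const Point& blueB,
               const std::vector<Point>& redsOfA,   // red points connected to blueA
               const std::vector<Point>& redsOfB)   // red points connected to blueB
{
    Spot best;
    best.blueA = blueA;
    best.blueB = blueB;
    double bestDist = std::numeric_limits<double>::max();

    for (size_t i = 0; i < redsOfA.size(); i++)
        for (size_t j = 0; j < redsOfB.size(); j++)
        {
            double d = dist(redsOfA[i], redsOfB[j]);
            if (d < bestDist)
            {
                bestDist = d;
                best.redA = redsOfA[i];
                best.redB = redsOfB[j];
            }
        }

    return best;
}

The parallelism check mentioned above could be added inside the loop, for example by rejecting a red pair whenever the segments blueA–redA and blueB–redB differ too much in angle.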
