Question

I am extremely new to computer vision and the opencv library.

I've done some googling to find out how to make a new image from a vector of Point2fs and haven't found any examples that work. I've seen examples converting vector<Point> to Mat, but when I use them I always get errors.

I'm working from this example and any help would be appreciated.

Code: I pass in occludedSquare.

   resize(occludedSquare, occludedSquare, Size(0, 0), 0.5, 0.5);

   Mat occludedSquare8u;
   cvtColor(occludedSquare, occludedSquare8u, CV_BGR2GRAY);

   //Convert to a binary image: pixel values greater than 170 become white, otherwise black
   Mat thresh;
   threshold(occludedSquare8u, thresh, 170.0, 255.0, THRESH_BINARY);



   GaussianBlur(thresh, thresh, Size(7, 7), 2.0, 2.0);

   //Do edge detection
   Mat edges;
   Canny(thresh, edges, 45.0, 160.0, 3);

   //Do straight line detection
   vector<Vec2f> lines;
   HoughLines( edges, lines, 1.5, CV_PI/180, 50, 0, 0 );

   //imshow("thresholded", edges);


   cout << "Detected " << lines.size() << " lines." << endl;

   // compute the intersection from the lines detected...
   vector<Point2f> intersections;
   for( size_t i = 0; i < lines.size(); i++ )
   {
       for(size_t j = 0; j < lines.size(); j++)
       {
           Vec2f line1 = lines[i];
           Vec2f line2 = lines[j];
           if(acceptLinePair(line1, line2, CV_PI / 32))
           {
               Point2f intersection = computeIntersect(line1, line2);
               intersections.push_back(intersection);
           }
       }

   }

   if(intersections.size() > 0)
   {
       vector<Point2f>::iterator i;
       for(i = intersections.begin(); i != intersections.end(); ++i)
       {
           cout << "Intersection is " << i->x << ", " << i->y << endl;
           circle(occludedSquare8u, *i, 1, Scalar(0, 255, 0), 3);
       }
   }

//Make new matrix bounded by the intersections
...
imshow("localized", localized);

Solution

Should be as simple as

std::vector<cv::Point2f> points;
cv::Mat image(points);
//or
cv::Mat image = cv::Mat(points);

The probable confusion is that a cv::Mat can be an image (width * height * number of channels), but it is also a mathematical matrix (rows * columns * other dimensions).

If you make a Mat from a vector of 'n' 2D points, it will create a matrix with 'n' rows and 2 columns. You are then passing this to a function which expects an image.

If you just have a scattered set of 2D points and want to display them as an image, you need to make an empty cv::Mat of a large enough size (whatever your maximum x,y point is) and then draw the dots using the drawing functions: http://docs.opencv.org/doc/tutorials/core/basic_geometric_drawing/basic_geometric_drawing.html
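For example, a minimal sketch of that approach (the point values and canvas sizing below are illustrative, not taken from the question's code):

// Sketch: draw scattered 2D points onto a blank canvas sized to fit them.
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point2f> points = { {10.f, 20.f}, {100.f, 50.f}, {200.f, 150.f} };

    // Size the canvas so the largest x,y coordinate fits inside it.
    cv::Rect bounds = cv::boundingRect(points);
    cv::Mat canvas = cv::Mat::zeros(bounds.br().y + 1, bounds.br().x + 1, CV_8UC3);

    // Draw each point as a small filled circle (thickness -1 = filled).
    for (const cv::Point2f& p : points)
        cv::circle(canvas, p, 3, cv::Scalar(0, 255, 0), -1);

    cv::imshow("points", canvas);
    cv::waitKey();
    return 0;
}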

If you just want to set the pixel values at those point coordinates, search SO for "opencv setting pixel values"; there are lots of answers.
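For completeness, a minimal sketch of that direct pixel-setting approach (the 640x480 canvas size is just an assumption):

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    std::vector<cv::Point2f> points = { {10.f, 20.f}, {100.f, 50.f} };
    cv::Mat img = cv::Mat::zeros(480, 640, CV_8UC1);   // blank single-channel canvas

    for (const cv::Point2f& p : points)
    {
        int x = cvRound(p.x), y = cvRound(p.y);
        // Skip points that fall outside the canvas.
        if (x >= 0 && x < img.cols && y >= 0 && y < img.rows)
            img.at<uchar>(y, x) = 255;   // note: at<>(row, col), i.e. (y, x)
    }

    cv::imshow("pixels", img);
    cv::waitKey();
    return 0;
}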

OTHER TIPS

Martin's answer is right, but IMO it depends on how the resulting cv::Mat is used further down the line. I had some issues, and Haofeng's comment helped me fix them. Here is my attempt to explain it in detail:

Let's say the code looks like this:

  std::vector<cv::Point2f> points = {cv::Point2f(1.0, 2.0), cv::Point2f(3.0, 4.0), cv::Point2f(5.0, 6.0), cv::Point2f(7.0, 8.0), cv::Point2f(9.0, 10.0)};
  cv::Mat image(points);  // or cv::Mat image = cv::Mat(points) 
  std::cout << image << std::endl;

This will print:

[1, 2;
 3, 4;
 5, 6;
 7, 8;
 9, 10]

So, at first glance, this looks perfectly correct and as expected: for the five 2D points in the given vector, we got a cv::Mat with 5 rows and 2 columns, right? However, that's not the case here!

If further properties are inspected:

  std::cout << image.rows << std::endl;  // 5
  std::cout << image.cols << std::endl;  // 1
  std::cout << image.channels() << std::endl;  // 2

it can be seen that the above cv::Mat has 5 rows, 1 column, and 2 channels. Depending on the pipeline, we may not want that. Most of the time, we want a matrix with 5 rows, 2 columns, and just 1 channel.

To fix this problem, all we need to do is reshape the matrix:

  cv::Mat image = cv::Mat(points).reshape(1);

In the above code, 1 is for 1 channel. Check out OpenCV reshape() documentation for further information.

If this matrix is printed out, it will look the same as the previous one. However, that's not the whole picture (metaphorically speaking!): the new matrix has 5 rows, 2 columns, and 1 channel.
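To confirm, the same property checks can be repeated on the reshaped matrix (a short sketch mirroring the inspection above):

  std::cout << image.rows << std::endl;  // 5
  std::cout << image.cols << std::endl;  // 2
  std::cout << image.channels() << std::endl;  // 1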

I wish OpenCV had different ways of printing out these two similar yet different matrices (from the OpenCV data structure point of view)!

Licensed under: CC-BY-SA with attribution