Question

I'm new to OpenCV. I'm trying to draw matches of features between two images using FLANN/SURF in OpenCV on iOS. I'm following this example:

http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html#feature-matching-with-flann

Here's my code with a few small modifications (I wrapped the example code in a function that returns a UIImage as a result and reads the input images from the bundle):

UIImage* SURFRecognition::test()
{
    UIImage *img1 = [UIImage imageNamed:@"wallet"];
    UIImage *img2 = [UIImage imageNamed:@"wallet2"];

    Mat img_1;
    Mat img_2;

    UIImageToMat(img1, img_1);
    UIImageToMat(img2, img_2);

    if( !img_1.data || !img_2.data )
    {
        std::cout << " --(!) Error reading images " << std::endl;
        return nil;
    }

    //-- Step 1: Detect the keypoints using SURF Detector
    int minHessian = 400;

    SurfFeatureDetector detector( minHessian );

    std::vector<KeyPoint> keypoints_1, keypoints_2;

    detector.detect( img_1, keypoints_1 );
    detector.detect( img_2, keypoints_2 );

    //-- Step 2: Calculate descriptors (feature vectors)
    SurfDescriptorExtractor extractor;

    Mat descriptors_1, descriptors_2;

    extractor.compute( img_1, keypoints_1, descriptors_1 );
    extractor.compute( img_2, keypoints_2, descriptors_2 );

    //-- Step 3: Matching descriptor vectors using FLANN matcher
    FlannBasedMatcher matcher;
    std::vector< DMatch > matches;
    matcher.match( descriptors_1, descriptors_2, matches );

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    printf("-- Max dist : %f \n", max_dist );
    printf("-- Min dist : %f \n", min_dist );

    //-- Draw only "good" matches (i.e. whose distance is less than 2*min_dist )
    //-- PS.- radiusMatch can also be used here.
    std::vector< DMatch > good_matches;

    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        if( matches[i].distance <= 2*min_dist )
            good_matches.push_back( matches[i] );
    }

    //-- Draw only "good" matches
    Mat img_matches;
    drawMatches( img_1, keypoints_1, img_2, keypoints_2,
                good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
                vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );

    //-- Show detected matches
    //imshow( "Good Matches", img_matches );

    UIImage *imgTemp = MatToUIImage(img_matches);

    for( size_t i = 0; i < good_matches.size(); i++ )
    {
        printf( "-- Good Match [%zu] Keypoint 1: %d  -- Keypoint 2: %d  \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx );
    }

    return imgTemp;
}
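The "good matches" filtering step above can also be sketched in isolation. This is a minimal, self-contained sketch: the `Match` struct and `filterGoodMatches` function below are hypothetical stand-ins for illustration, not part of OpenCV (cv::DMatch is the real type):

```cpp
#include <algorithm>
#include <vector>

// Minimal stand-in for cv::DMatch (illustrative only).
struct Match {
    int queryIdx;
    int trainIdx;
    float distance;
};

// Keep only matches whose distance is at most 2 * min_dist,
// mirroring the filtering loop in the code above.
std::vector<Match> filterGoodMatches(const std::vector<Match>& matches)
{
    if (matches.empty())
        return {};

    // Find the smallest descriptor distance among all matches.
    float min_dist = matches.front().distance;
    for (const Match& m : matches)
        min_dist = std::min(min_dist, m.distance);

    // Keep matches within twice that minimum.
    std::vector<Match> good;
    for (const Match& m : matches)
        if (m.distance <= 2 * min_dist)
            good.push_back(m);
    return good;
}
```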

The result of the function above is:

[screenshot: only the lines connecting the matches are visible; the two source images are missing]

Only the lines that connect the matches are shown, but the two original images are not. If I understood correctly, the drawMatches function returns a cv::Mat which contains both images and the connections between similar features. Is this correct, or am I missing something? Can someone help me?


Solution

I found the solution myself. After a lot of searching, it turns out that drawMatches needs img1 and img2 to have 1 to 3 channels. I was opening PNGs with alpha, so these were 4-channel images. Here's my revised code:

Added

UIImageToMat(img1, img_1);
UIImageToMat(img2, img_2);

// Strip the alpha channel so drawMatches receives 3-channel images
cvtColor(img_1, img_1, CV_BGRA2BGR);
cvtColor(img_2, img_2, CV_BGRA2BGR);
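A slightly more defensive variant of this fix (my sketch, not part of the original answer) converts only when an alpha channel is actually present, so grayscale or already-3-channel inputs pass through untouched. The helper below is hypothetical; `channels()` and `CV_BGRA2BGR` are real OpenCV:

```cpp
// Hypothetical helper: drawMatches accepts 1- to 3-channel images,
// so only 4-channel (BGRA) input needs converting.
inline bool needsAlphaStrip(int channels)
{
    return channels == 4;
}

// Usage with OpenCV (sketch):
//   if (needsAlphaStrip(img_1.channels()))
//       cvtColor(img_1, img_1, CV_BGRA2BGR);
```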
Licensed under: CC-BY-SA with attribution