Question

I have been using the Kinect SDK 1.6 DepthBasicsD2D C++ example to grab depth frames from the Kinect, and I want to perform blob detection on that data in OpenCV.

I have configured OpenCV to build with the example and understand how the example works.

But there is little documentation on this anywhere, and it's difficult to figure out how to take the pixel data from the Kinect and pass it to OpenCV's IplImage/cv::Mat structures.

Any thought on this problem?


Solution

This should help you convert Kinect depth and color frames into OpenCV matrices:

// Depth range of interest (in millimetres) and the scale factor that
// maps it onto 0..255 -- adjust these to your application.
static const USHORT kLower = 800;
static const USHORT kUpper = 4000;
static const float  c = 255.0f / (kUpper - kLower);

// Get a CV_8U matrix from a Kinect depth frame.
cv::Mat * GetDepthImage(USHORT * depthData, int width, int height)
{
    const int imageSize = width * height;
    cv::Mat * out = new cv::Mat(height, width, CV_8U);
    // Map each raw depth value into the 8-bit range.
    for (int i = 0; i < imageSize; i++)
    {
        // Raw depth in millimetres. If your stream packs a player index
        // into the low 3 bits, extract the distance first with
        // NuiDepthPixelToDepth(depthData[i]).
        USHORT depth = depthData[i];
        if (depth >= kLower && depth <= kUpper)
        {
            out->at<uchar>(i) = (uchar)(c * (depth - kLower));
        }
        else
        {
            out->at<uchar>(i) = 0;   // out of range -> black
        }
    }
    return out;
}

// Get a CV_8UC4 (BGRA) matrix from a Kinect color frame.
cv::Mat * GetColorImage(unsigned char * bytes, int width, int height)
{
    // 4 bytes per pixel: blue, green, red, alpha.
    const unsigned int img_size = width * height * 4;
    cv::Mat * out = new cv::Mat(height, width, CV_8UC4);

    // The Kinect buffer already matches OpenCV's BGRA layout,
    // so a single memcpy is enough.
    memcpy(out->data, bytes, img_size);

    return out;
}
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow