I'm using the OpenCV C++ API and I'm trying to convert a camera buffer (YUV_NV12) to an RGB format. However, the dimensions of my image changed (the width shrank from 720 to 480) and the colors are wrong (kind of purple/green-ish).

unsigned char* myYUVBufferPointer = ...; // passed as argument
int myYUVBufferHeight = 1280;
int myYUVBufferWidth = 720;

cv::Mat yuvMat(myYUVBufferHeight, myYUVBufferWidth, CV_8U, myYUVBufferPointer);
cv::Mat rgbMat;
cv::cvtColor(yuvMat, rgbMat, CV_YUV2RGB_NV12);
cv::imwrite("path/to/my/image.jpg",rgbMat);

Any ideas? *(I'm more interested in the size change than the color, since I will eventually convert it to CV_YUV2GRAY_NV12 and that's working, but the size isn't.)*


Solution

Your code constructs a single-channel (grayscale) image called yuvMat out of a series of unsigned chars. The CV_YUV2RGB_NV12 conversion expects that single-channel Mat to contain a complete 4:2:0 frame: height rows of Y followed by height/2 rows of interleaved UV, i.e. height * 3 / 2 rows in total. Because your Mat has only height rows, OpenCV treats the top 2/3 of it as the Y plane and the bottom 1/3 as the UV plane, so the decoded image comes out at 2/3 of the size in that dimension -- the same 2/3 ratio behind the 720 to 480 shrink you are seeing. The colors are wrong for the same reason: the bytes being interpreted as chroma are really luma samples from the lower part of your image.
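The arithmetic behind the shrink can be spelled out in plain C++ (no OpenCV needed; the helper names here are mine, for illustration only):

```cpp
// NV12 layout for a width x height frame, as cvtColor expects it:
//   Y plane : height      rows of width bytes
//   UV plane: height / 2  rows of width bytes (U and V interleaved)
// A valid NV12 input Mat therefore needs height * 3 / 2 rows.
int nv12InputRows(int imageHeight) { return imageHeight * 3 / 2; }

// When cvtColor is handed a Mat with too few rows, it still assumes the
// bottom third is chroma, so the decoded frame has only rows * 2 / 3 lines.
int decodedRows(int matRows) { return matRows * 2 / 3; }
```

For a 1280-row Mat, `nv12InputRows(1280)` is 1920 rows of input, and `decodedRows(720)` is 480 -- the same ratio as the reported shrink.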

If your uchar buffer really is NV12 -- width x height uchars of Y, followed by (1/2) width x (1/2) height pairs of interleaved U/V uchars -- the simplest fix is to construct yuvMat as a single-channel (CV_8UC1) Mat with height * 3 / 2 rows and let cv::cvtColor() do the rest with the _NV12 conversion code. Alternatively, if you want full-resolution chroma planes you can process yourself, write your own reader that extracts Y, U, and V into three single-channel width x height Mats -- filling in the horizontal and vertical gaps in the subsampled U and V channels -- then use cv::merge() to combine them into a 3-channel YUV image and convert it with cv::cvtColor() using the plain CV_YUV2BGR option. Notice the use of BGR instead of RGB: OpenCV stores color images in BGR channel order, and cv::imwrite() expects it.

Additional tip

It could be that "something passed as argument" does not have enough data to fill 720 lines. With some video cameras, not all three channels are stored with the same number of bits per pixel. For example, when capturing video on an iPhone, chroma is subsampled so the three channels effectively use something like 8-4-4 bits instead of 8-8-8. I haven't used this type of camera with OpenCV, but most likely the problem is here.
