Your code constructs a single-channel (grayscale) image called `yuvMat` out of a series of `unsigned char`s. When you then force a conversion of this single-channel image from YUV 4:2:0 to multi-channel BGR, OpenCV assumes the input holds only half of the full 4:4:4 information: 1 × height × width bytes for `Y` plus 1/4 × height × width bytes for each of `U` and `V` (subsampled by 2 both horizontally and vertically), instead of the 3 × height × width bytes of a full `YUV` image, all packed into one plane of 3/2 × height rows. Interpreting your width × height Mat under that layout, the height of the destination image shrinks to 2/3 of the original height, and the bottom third of your rows -- which actually contain `Y` data -- gets consumed as interleaved chroma.
If your `uchar` buffer already held fully interleaved YUV at the conventional width × height bytes per channel -- i.e., with no subsampling left in it -- all you would need to do is construct your original `yuvMat` as a `CV_8UC3` and you would be good to go. That assumes all the interleaving and channel positioning is already implemented in the buffer, which is most probably not the case. YUV_NV12 data comes as width × height `uchar`s of `Y`, followed by (1/2 × width) × (1/2 × height) pairs of `uchar`s representing the interleaved `U` and `V` samples. You probably need to write your own reader that extracts the `Y`, `U`, and `V` data separately, constructs three single-channel `Mat`s of size width × height -- filling in the horizontal and vertical gaps in both the `U` and `V` channels -- and then uses `cv::merge()` to combine those single-channel images into a 3-channel `YUV` image, before converting it with `cv::cvtColor()` and the `CV_YUV2BGR` (in newer OpenCV, `cv::COLOR_YUV2BGR`) flag. Notice the use of `BGR` instead of `RGB`: OpenCV stores color images in BGR channel order.