Question

I am currently trying to record a Surface through the command-line screenrecord utility. As per the implementation, the framework sets the color format of the video encoder component to OMX_COLOR_FormatAndroidOpaque. Hence, the video encoder is forced to retrieve the actual color format from the gralloc handle.

When this use case is executed, the video encoder encounters a BGRA Surface. The question I am trying to answer is this:

In the case of Miracast, the input to the encoder is received through SurfaceMediaSource. In the case of Surface recording, the input surface for MediaCodec is provided through the GraphicBufferSource interface.

Should the color conversion from RGB to YUV space be handled inside the video encoder, or is it better to introduce it in SurfaceMediaSource, which is the encoder's source abstraction for gralloc sources?


Solution

Further to my question, I investigated the codecs exposed as part of the AOSP distribution and found that the top three vendors, viz. Qualcomm, Samsung, and TI, have adopted an internal color conversion to handle the Surface-recording scenario. Some useful links are below:

Qualcomm's V4L2-based codec implementation:

In Qualcomm's video encoders, a color conversion is performed for Surface recording, and hence the buffer pointers contain actual YUV data. The handling differentiates between this scenario (Surface recording) and the Camera scenario with a gralloc source, as can be observed from this note.
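Whether the conversion lives in the encoder (as Qualcomm does) or in SurfaceMediaSource, the work itself is the same: walk the BGRA pixels from the gralloc buffer and produce a planar/semi-planar YUV frame. The sketch below shows a plain software BGRA8888-to-NV12 pass using fixed-point full-range BT.601 coefficients; it is purely illustrative (vendors typically use SIMD, a GPU blit, or a dedicated color-convert hardware block, and real code must honor the gralloc stride rather than assume tightly packed rows).

```c
#include <stdint.h>
#include <stdlib.h>

/* Clamp an intermediate value to the valid 8-bit sample range. */
static uint8_t clamp8(int v) { return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v; }

/*
 * Convert a BGRA8888 image to NV12 (Y plane followed by interleaved UV),
 * using fixed-point full-range BT.601 coefficients.
 *
 * Illustrative sketch only: width/height are assumed even and rows are
 * assumed tightly packed (stride == width), which is NOT generally true
 * for gralloc-allocated buffers.
 */
void bgra_to_nv12(const uint8_t *bgra, int w, int h,
                  uint8_t *y_plane, uint8_t *uv_plane)
{
    for (int j = 0; j < h; j++) {
        for (int i = 0; i < w; i++) {
            const uint8_t *p = bgra + (j * w + i) * 4;
            int b = p[0], g = p[1], r = p[2];   /* BGRA byte order */

            /* Luma for every pixel; the sum is always non-negative,
             * so a plain right shift is safe here. */
            y_plane[j * w + i] = clamp8((77 * r + 150 * g + 29 * b) >> 8);

            /* Chroma subsampled 2x2: one UV pair per 2x2 block. Use
             * division instead of >> because the sums can be negative. */
            if ((i & 1) == 0 && (j & 1) == 0) {
                int u = ((-43 * r - 85 * g + 128 * b) / 256) + 128;
                int v = ((128 * r - 107 * g - 21 * b) / 256) + 128;
                uint8_t *uv = uv_plane + (j / 2) * w + i;
                uv[0] = clamp8(u);
                uv[1] = clamp8(v);
            }
        }
    }
}
```

A production path would also pick limited-range BT.601/BT.709 coefficients to match what the encoder advertises, since a mismatch shows up as washed-out or over-saturated recordings.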

TI's Video Encoders:

Samsung Video Encoders:

In Samsung's case, there is no differentiation between gralloc-source scenarios, as can be observed from Exynos_OSAL_GetInfoFromMetaData.
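The differentiation (or lack of it) hinges on the metadata buffer that the framework hands to the encoder in metadata mode: its leading field identifies the source type. Below is a sketch of how an encoder could branch on that field to decide whether an internal RGB-to-YUV pass is needed; the enum values mirror AOSP's MetadataBufferType.h (frameworks/native), but `needs_color_conversion` and the raw-buffer layout here are simplified illustrations, not actual vendor code.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/*
 * Simplified mirror of AOSP's MetadataBufferType enum
 * (frameworks/native/include/media/hardware/MetadataBufferType.h).
 */
typedef enum {
    kMetadataBufferTypeCameraSource  = 0, /* camera: already YUV */
    kMetadataBufferTypeGrallocSource = 1, /* Surface: opaque, often RGBA/BGRA */
} MetadataBufferType;

/*
 * Hypothetical decision helper: peek at the type field at the start of a
 * metadata buffer. A gralloc (Surface) source may carry RGB pixels and so
 * may need an internal color-conversion pass, whereas a camera source is
 * assumed to supply YUV directly. This is a sketch of the branching
 * described for the Qualcomm encoder above, not the vendor implementation.
 */
bool needs_color_conversion(const uint8_t *metadata_buffer)
{
    MetadataBufferType type;
    memcpy(&type, metadata_buffer, sizeof(type)); /* leading type field */
    return type == kMetadataBufferTypeGrallocSource;
}
```

Samsung's Exynos_OSAL_GetInfoFromMetaData, by contrast, extracts the handle the same way for all gralloc sources, which is why no such branch appears there.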

Licensed under: CC-BY-SA with attribution