Short answer: no.
Longer answer: the Surface encapsulates a queue of buffers. (Edit: the system is now explained in some detail here.) When you call updateTexImage(), if a new frame of data is available, the buffer at the head of the queue is dropped and the next one becomes current. Calling updateTexImage() is necessary to see successive frames; there is no mechanism for examining buffers other than the one at the head.
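The consume-from-the-head behavior can be illustrated with a plain-Java analogy (the class and method names below are illustrative, not the real Android internals):

```java
import java.util.ArrayDeque;

// Hypothetical sketch of the consumer side of a buffer queue.
class FrameQueueAnalogy {
    private final ArrayDeque<String> queue = new ArrayDeque<>();
    private String current = null;

    // Producer side: the decoder queues a finished frame.
    void onFrameProduced(String frame) {
        queue.addLast(frame);
    }

    // Analogous to updateTexImage(): if a new frame is available, the
    // previous buffer is dropped and the frame at the head becomes current.
    // There is no way to look at frames deeper in the queue.
    boolean updateTexImage() {
        if (queue.isEmpty()) {
            return false; // no new frame; current stays as-is
        }
        current = queue.pollFirst();
        return true;
    }

    String current() {
        return current;
    }
}
```

Each call advances by exactly one frame, which is why you must keep calling updateTexImage() to see successive frames.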
A SurfaceTexture wraps an instance of GLConsumer. This consumer requires the producer (the video decoder) to generate data in a format that can be used as a "hardware texture", i.e. something the device's GL implementation can understand. It may or may not be YUV. More to the point, the consumer doesn't require that the buffer be available to "software", which means you can't assume you can access the data directly -- you need to go through GLES. (See the gralloc header for the full list of flags.)
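Going through GLES concretely means sampling the buffer as an external texture. A typical fragment shader for this (held here as a Java string constant; the varying/uniform names are just an assumed setup) looks like:

```java
// Fragment shader for sampling a SurfaceTexture-backed buffer from GLES.
// The texture must be bound to GL_TEXTURE_EXTERNAL_OES, not GL_TEXTURE_2D,
// and the shader must declare the external-image extension.
final class ExternalTextureShader {
    static final String FRAGMENT =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "varying vec2 vTexCoord;\n"
            + "uniform samplerExternalOES sTexture;\n"
            + "void main() {\n"
            + "    gl_FragColor = texture2D(sTexture, vTexCoord);\n"
            + "}\n";
}
```

The samplerExternalOES type is what lets the GL driver handle whatever opaque format the producer chose; a plain sampler2D generally won't work here.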
What would be nice here is the ability to copy the buffer from the head of the BufferQueue to a separate data structure (BufferArrayList?) without doing a format conversion, but no such mechanism exists at present (Android 4.3). I don't know of a better way to go about it than what you describe (shared EGL contexts, etc.).
Update: my office-mate had a suggestion: use a shader to render the buffer into two textures, one for Y and one for CbCr (in GLES 3 you can use an RG texture for the latter). That keeps all the manipulation in GLES without expanding to full RGB. Internally it will convert the MediaCodec output to RGB and grind through it twice, but that's likely cheaper than copying the data out to userspace and doing the conversion yourself on the CPU.
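The per-pixel math those two shader passes would implement is just an RGB-to-YCbCr transform. A sketch of it in plain Java, assuming full-range BT.601 coefficients (the actual MediaCodec output may use limited-range or BT.709):

```java
// RGB -> Y'CbCr conversion, full-range BT.601 coefficients (an assumption;
// check what color space your decoder actually produces).
final class YCbCr {
    // Inputs and outputs are in the 0..255 range.
    static double[] fromRgb(double r, double g, double b) {
        double y  =  0.299    * r + 0.587    * g + 0.114    * b;
        double cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0;
        double cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0;
        return new double[] { y, cb, cr };
    }
}
```

One shader pass would write the Y expression into the first texture; the other would write (Cb, Cr) into the RG texture. Gray pixels come out with Cb = Cr = 128, which is a handy sanity check.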