Question

I'd like to display decoded video frames from MediaCodec out of order, or omit frames, or show frames multiple times.

I considered configuring MediaCodec to use a Surface, calling MediaCodec.dequeueOutputBuffer() repeatedly, saving the resulting buffer indices, and then later calling MediaCodec.releaseOutputBuffer(desired_index, true). But there doesn't seem to be a way to increase the number of output buffers, so I might run out of output buffers if I'm dealing with a lot of frames to be rearranged.
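
Roughly what I had in mind is the sketch below (the class name is made up, and it ignores the negative INFO_* return codes and end-of-stream handling):

    import android.media.MediaCodec;
    import java.util.ArrayDeque;

    // Sketch only: hold decoded frames in the codec's own output buffers and
    // release them to the Surface later, in whatever order I want.
    class DeferredFrameQueue {
        private final MediaCodec decoder;  // already configured with an output Surface
        private final ArrayDeque<Integer> heldIndices = new ArrayDeque<>();

        DeferredFrameQueue(MediaCodec decoder) {
            this.decoder = decoder;
        }

        // Pull any decoded frames off the codec and hold on to their buffer indices.
        void drainAndHold() {
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int index;
            while ((index = decoder.dequeueOutputBuffer(info, 0 /* don't block */)) >= 0) {
                heldIndices.add(index);  // do NOT release yet; the codec keeps the frame
            }
        }

        // Render one previously held frame to the Surface.
        void showNextHeldFrame() {
            Integer index = heldIndices.poll();
            if (index != null) {
                decoder.releaseOutputBuffer(index, true);  // true = render to the Surface
            }
        }
    }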

One idea I'm considering is to use glReadPixels() to read the pixel data into a frame buffer, convert the color format appropriately, then copy it to a SurfaceView when I need the frame displayed. But this seems like a lot of copying (and color format conversion) overhead, especially when I don't inherently need to modify the pixel data.

So I'm wondering if there is a better, more performant way. Perhaps there is a way to configure a different Surface/Texture/Buffer for each decoded frame, and then a way to tell the SurfaceView to display a specific Surface/Texture/Buffer (without having to do a memory copy). It seems like there must be a way to accomplish this with OpenGL, but I'm very new to OpenGL and could use recommendations on areas to investigate. I'll even go NDK if I have to.

So far I've been reviewing the Android docs, and fadden's bigflake and Grafika. Thanks.

Solution

Saving copies of lots of frames could pose a problem when working with higher-resolution videos and higher frame counts. A 1280x720 frame, saved in RGBA, will be 1280x720x4 = 3.5MB. If you're trying to save 100 frames, that's 1/3rd of the memory on a 1GB device.

If you do want to go with this approach, I think what you want to do is attach a series of textures to an FBO and render to them to store the pixels. Then you can just render from the texture when it's time to draw. Sample code for FBO rendering exists in Grafika (it's one of the approaches used in the screen recording activity).
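
The setup looks roughly like the sketch below. It assumes you already have an EGL context current and a shader program that samples the decoder's SurfaceTexture as an external texture (both omitted; Grafika has complete versions), and the class and method names are just placeholders:

    import android.opengl.GLES20;

    // Sketch: one GL_TEXTURE_2D per saved frame, plus a single FBO that gets
    // re-pointed at whichever texture we're capturing into.
    class FrameStore {
        private int[] textureIds;
        private int framebufferId;

        void prepare(int frameCount, int width, int height) {
            textureIds = new int[frameCount];
            GLES20.glGenTextures(frameCount, textureIds, 0);
            for (int tex : textureIds) {
                GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex);
                GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
                GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
                GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                        GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
            }
            int[] fbo = new int[1];
            GLES20.glGenFramebuffers(1, fbo, 0);
            framebufferId = fbo[0];
        }

        // Redirect rendering into texture slot i, draw the current decoded frame
        // with the external-texture shader, then go back to the window surface.
        void captureFrame(int i, Runnable drawDecodedFrame) {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebufferId);
            GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                    GLES20.GL_TEXTURE_2D, textureIds[i], 0);
            drawDecodedFrame.run();
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        }
    }

When it's time to display frame i, bind textureIds[i] and draw a full-screen quad with an ordinary 2D texture shader onto your SurfaceView's EGL surface.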

Another approach is to seek around in the encoded stream and re-decode. You need to seek to the nearest sync frame before the frame of interest (either by asking MediaExtractor to do it, or by saving off the encoded data along with the BufferInfo flags) and decode until you reach the target frame. How fast this is depends on how many frames you need to traverse, the resolution of the frames, and the speed of the decoder on your device. (As you might expect, stepping forward is easier than stepping backward. You may have noticed a similar phenomenon in other video players you've used.)
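
A bare-bones sketch of that flow, assuming the extractor is already positioned on the video track, the decoder is configured with a Surface, and you're on API 21+ for getInputBuffer() (EOS, INFO_* codes, and error handling omitted):

    import android.media.MediaCodec;
    import android.media.MediaExtractor;
    import java.nio.ByteBuffer;

    class FrameSeeker {
        // Decode forward from the nearest preceding sync frame and render only
        // the first frame at or past targetUs.
        static void showFrameAt(long targetUs, MediaExtractor extractor, MediaCodec decoder) {
            extractor.seekTo(targetUs, MediaExtractor.SEEK_TO_PREVIOUS_SYNC);
            decoder.flush();  // drop anything in flight from the previous position

            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            boolean shown = false;
            while (!shown) {
                // Feed encoded samples into the decoder.
                int inIndex = decoder.dequeueInputBuffer(10000);
                if (inIndex >= 0) {
                    ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
                    int size = extractor.readSampleData(inBuf, 0);
                    if (size >= 0) {
                        decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                        extractor.advance();
                    }
                }
                // Drain decoded frames, discarding everything before the target.
                int outIndex = decoder.dequeueOutputBuffer(info, 10000);
                if (outIndex >= 0) {
                    boolean render = info.presentationTimeUs >= targetUs;
                    decoder.releaseOutputBuffer(outIndex, render);  // render only the target
                    shown = render;
                }
            }
        }
    }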

Don't bother with glReadPixels(). Generally speaking, if decoded data is directly accessible from your app, you're going to take a speed hit (more so on some devices than others). Also, the number of buffers used by the MediaCodec decoder is somewhat device-dependent, so I wouldn't count on having more than 4 or 5.

Licensed under: CC-BY-SA with attribution