Question

I am working on a video conferencing project. We have been using a software codec to encode and decode video frames, which works fine at lower resolutions (up to 320p). We now plan to support higher resolutions in our application, up to 720p, and I have learned that hardware acceleration handles this job fairly well.

Since the hardware codec API MediaCodec is available only from Jelly Bean onward, I have used it for encoding and decoding, and it works fine. However, my application has to be supported from 2.3, so I need hardware-accelerated video decoding of 720p H.264 frames at 30 fps.

While researching, I came across the idea of using OMXCodec through the stagefright framework. I have read that a hardware decoder for H.264 is available from 2.1 and a hardware encoder from 3.0. I have gone through many articles and questions on this site and confirmed that I can go ahead.

I have read about the stagefright architecture here: architecture, and here: stagefright how it works.

And I read about OMXCodec here: use-android-hardware-decoder-with-omxcodec-in-ndk.

I am having trouble getting started and have some confusion about the implementation. I would like some information on the following:

  1. To use OMXCodec in my code, should I build my project within the whole Android source tree, or can I do it by adding some files from the AOSP source (if so, which ones)?
  2. What steps should I follow, from scratch, to achieve this?

Can someone give me a guideline on this?

Thanks...


Solution

The best example of integrating OMXCodec in the native layer is the command-line utility stagefright, as can be observed here in Gingerbread itself. This example shows how an OMXCodec is created.
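A minimal sketch of that creation path, modeled on the stagefright command-line utility, might look as follows. The exact `OMXCodec::Create()` signature varies slightly between Android releases (the NativeWindow parameter in particular), so check `OMXCodec.h` in your target tree; error handling is trimmed for brevity.

```cpp
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>
#include <media/stagefright/OMXClient.h>
#include <media/stagefright/OMXCodec.h>

using namespace android;

sp<MediaSource> createHwDecoder(const sp<MediaSource> &source,
                                const sp<ANativeWindow> &window) {
    OMXClient client;
    if (client.connect() != OK) {
        return NULL;
    }

    // The returned decoder is itself a MediaSource: you pull decoded
    // frames out of it with read(), just as it pulls encoded frames
    // from your input source.
    return OMXCodec::Create(
            client.interface(),
            source->getFormat(),   // selects the component, e.g. H.264
            false,                 // createEncoder = false => decoder
            source,                // your MediaSource feeding frames
            NULL,                  // matchComponentName: let the system pick
            0,                     // flags
            window);               // NativeWindow for output buffers
}
```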

Some points to note:

  1. The input to OMXCodec should be modeled as a MediaSource, so you should ensure that your application meets this requirement. An example of creating a MediaSource-based source can be found in the record utility file as DummySource (a sketch in that spirit follows this list).

  2. The input to the decoder, i.e. the MediaSource, should provide the data through its read method; hence, your application should supply an individual frame for every read call, as shown in the sketch below.

  3. The decoder can be created with a NativeWindow for output buffer allocation. In that case, if you wish to access the buffer from the CPU, you should probably refer to this query for more details (see also the pull-loop sketch after this list).
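For points 1 and 2, a hypothetical frame feeder, loosely following the DummySource pattern, could look like this. `H264FrameSource` and the fill step are placeholders for your application's own frame supply, not real framework classes:

```cpp
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaBufferGroup.h>
#include <media/stagefright/MediaDefs.h>
#include <media/stagefright/MediaSource.h>
#include <media/stagefright/MetaData.h>

using namespace android;

struct H264FrameSource : public MediaSource {
    H264FrameSource(int width, int height) {
        mFormat = new MetaData;
        mFormat->setCString(kKeyMIMEType, MEDIA_MIMETYPE_VIDEO_AVC);
        mFormat->setInt32(kKeyWidth, width);
        mFormat->setInt32(kKeyHeight, height);
        // One reusable buffer, sized generously for an encoded frame.
        // A real source also needs to convey the codec config (SPS/PPS),
        // either in-band or through the format's codec-specific data.
        mGroup.add_buffer(new MediaBuffer(width * height * 3 / 2));
    }

    virtual status_t start(MetaData *params) { return OK; }
    virtual status_t stop() { return OK; }
    virtual sp<MetaData> getFormat() { return mFormat; }

    // The decoder pulls data: each read() must hand back exactly one
    // access unit (one encoded H.264 frame) with its timestamp.
    virtual status_t read(MediaBuffer **out, const ReadOptions *options) {
        MediaBuffer *buffer;
        status_t err = mGroup.acquire_buffer(&buffer);
        if (err != OK) return err;

        // Placeholder: copy one encoded frame from your network or
        // capture pipeline into buffer->data(), then call
        // buffer->set_range(0, frameSize) and set kKeyTime (in us)
        // on buffer->meta_data().
        *out = buffer;
        return OK;
    }

private:
    sp<MetaData> mFormat;
    MediaBufferGroup mGroup;
};
```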
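For point 3, the consuming side is symmetric: the decoder is itself a MediaSource, so decoded frames are pulled with read() as well. A sketch, assuming `decoder` was created as in the earlier example; if you need CPU access to the pixels rather than NativeWindow rendering, OMXCodec also has a `kClientNeedsFramebuffer` flag (used by the metadata retriever) that can be passed to Create() instead of a window:

```cpp
#include <media/stagefright/MediaBuffer.h>
#include <media/stagefright/MediaSource.h>

using namespace android;

void decodeLoop(const sp<MediaSource> &decoder) {
    if (decoder->start() != OK) return;

    MediaBuffer *frame;
    // A production loop must also handle INFO_FORMAT_CHANGED returned
    // by read() when the output format (e.g. dimensions) changes.
    while (decoder->read(&frame) == OK) {
        if (frame->range_length() > 0) {
            // With a NativeWindow-backed codec this buffer wraps a
            // graphic buffer handed to the window for display; with
            // kClientNeedsFramebuffer the pixel data is readable at
            // frame->data() + frame->range_offset().
        }
        frame->release();
        frame = NULL;
    }

    decoder->stop();
}
```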

Licensed under: CC-BY-SA with attribution