As of Android 4.3 (API 18), the bigflake CameraToMpegTest approach is the correct way to do this.
The EGL/SurfaceTexture overhead is currently unavoidable, especially for what you want to do in goal #2. The idea is:
- Configure the Camera to send its output to a `SurfaceTexture`. This makes the Camera output available to GLES as an "external texture".
- Render the `SurfaceTexture` to the `Surface` returned by `MediaCodec#createInputSurface()`. That feeds the video encoder (a setup sketch for these two steps follows the list).
- Render the `SurfaceTexture` a second time, to a `GLSurfaceView`. That puts it on the display for real-time preview.
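For concreteness, here is a minimal sketch of the first two steps. The resolution, bitrate, and frame-rate values are illustrative placeholders, and the EGL context and shader setup for the external texture are assumed to exist elsewhere:

```java
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

import java.io.IOException;

public class CaptureSetup {
    // Create an "external" texture and point the Camera preview at it.
    // Assumes an EGL context is already current on this thread.
    public static SurfaceTexture setUpCamera(Camera camera) throws IOException {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        // Camera frames arrive as GL_TEXTURE_EXTERNAL_OES, not GL_TEXTURE_2D.
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

        SurfaceTexture st = new SurfaceTexture(tex[0]);
        st.setOnFrameAvailableListener(new SurfaceTexture.OnFrameAvailableListener() {
            @Override
            public void onFrameAvailable(SurfaceTexture surfaceTexture) {
                // Signal the render thread here; updateTexImage() must be
                // called on the thread that owns the EGL context.
            }
        });
        camera.setPreviewTexture(st);
        camera.startPreview();
        return st;
    }

    // Configure an AVC encoder that takes its input from a Surface.
    public static Surface setUpEncoder(MediaCodec encoder) {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // createInputSurface() must be called after configure() and
        // before start().
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        return inputSurface;
    }
}
```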
The only data copying that happens is performed by the GLES driver, so you're doing hardware-accelerated blits, which will be fast.
The only tricky bit is that you want the external texture to be available to two different EGL contexts (one for the `MediaCodec`, one for the `GLSurfaceView`). You can see an example of creating a shared context in the "Android Breakout game recorder patch" sample on bigflake -- it renders the game twice, once to the screen and once to a `MediaCodec` encoder. A minimal sketch of the shared-context creation follows.
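The sharing itself is just the third argument to `eglCreateContext()`. A sketch using EGL14, assuming `eglDisplay`, `eglConfig`, and the display-side `displayContext` have already been set up:

```java
import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

public class SharedContext {
    // Create a second context that shares objects (including the external
    // texture) with an existing one.
    public static EGLContext createSharedContext(
            EGLDisplay eglDisplay, EGLConfig eglConfig, EGLContext displayContext) {
        int[] attribList = {
                EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
                EGL14.EGL_NONE
        };
        // Passing displayContext as share_context puts both contexts in the
        // same share group, so one texture name is valid in both.
        EGLContext encoderContext = EGL14.eglCreateContext(
                eglDisplay, eglConfig, displayContext, attribList, 0);
        if (encoderContext == EGL14.EGL_NO_CONTEXT) {
            throw new RuntimeException("eglCreateContext failed");
        }
        return encoderContext;
    }
}
```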
Update: This is implemented in Grafika ("Show + capture camera").
Update: The multi-context approach in "Show + capture camera" is somewhat flawed. The "Continuous capture" activity uses a plain `SurfaceView`, and is able to do both screen rendering and video recording with a single EGL context. This is the recommended approach; the per-frame flow is sketched below.
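Here is a rough sketch of that per-frame flow with a single context, assuming Grafika-style `WindowSurface` helpers (one wrapping the `SurfaceView` surface, one wrapping the encoder's input surface) and a `drawFrame()` that samples the external texture; treat the names as illustrative:

```java
import android.graphics.SurfaceTexture;

import com.android.grafika.gles.WindowSurface;

public class SingleContextRenderer {
    private final WindowSurface displaySurface;   // wraps the SurfaceView surface
    private final WindowSurface encoderSurface;   // wraps the encoder input surface
    private final SurfaceTexture cameraTexture;   // receives camera frames

    public SingleContextRenderer(WindowSurface displaySurface,
            WindowSurface encoderSurface, SurfaceTexture cameraTexture) {
        this.displaySurface = displaySurface;
        this.encoderSurface = encoderSurface;
        this.cameraTexture = cameraTexture;
    }

    // Called on the render thread whenever the camera produces a frame.
    public void renderFrame() {
        // Latch the newest camera frame into the external texture; requires
        // the EGL context to be current.
        displaySurface.makeCurrent();
        cameraTexture.updateTexImage();

        // Pass 1: draw to the screen.
        drawFrame();
        displaySurface.swapBuffers();

        // Pass 2: same context, different EGLSurface -- draw to the encoder.
        encoderSurface.makeCurrent();
        drawFrame();
        encoderSurface.setPresentationTime(cameraTexture.getTimestamp());
        encoderSurface.swapBuffers();
    }

    private void drawFrame() {
        // Draw a full-screen quad sampling the external texture
        // (e.g. with Grafika's FullFrameRect); omitted here.
    }
}
```

One context, two `EGLSurface`s, two draw calls per frame -- no shared-context bookkeeping required.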