Question

I've been struggling with this for some time now. I found some topics about using TextureSurface, FFmpegFrameGrabber, etc., but I still don't have a clue. I would like to load a video file, take a frame, make some modifications (process the frame using either JavaCV or OpenCV4Android), and show it back to the user. This process should be repeated for every frame in the video file. Does anyone have a clue how to do that?

Solution

I have not used FFmpegFrameGrabber yet; instead I use an ffmpeg JNI wrapper I wrote myself. My solution is:

  1. Get frame data (NV21 or BGR24) by ffmpeg.
  2. Process the frame data by my image / video processing algorithm.
  3. Upload frame data to GPU as OpenGL texture.
  4. Display data by OpenGL fragment shader in a fullscreen rectangle.
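For steps 1 and 2, the frame data you get from ffmpeg (or the Android camera) is typically NV21, which most Java-side processing code can't use directly. Below is a minimal, self-contained sketch of the NV21 → ARGB conversion that sits between grabbing the frame and processing it; the class and method names are illustrative, not from any library:

```java
public class Nv21Converter {

    /** Converts an NV21 byte buffer (Y plane followed by interleaved VU plane) to packed ARGB ints. */
    public static int[] toArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = nv21[row * width + col] & 0xff;
                // One V,U pair is shared by each 2x2 block of luma samples.
                int uvIndex = frameSize + (row >> 1) * width + (col & ~1);
                int v = nv21[uvIndex] & 0xff;     // NV21 stores V first...
                int u = nv21[uvIndex + 1] & 0xff; // ...then U.
                int r = clamp((int) (y + 1.370705 * (v - 128)));
                int g = clamp((int) (y - 0.698001 * (v - 128) - 0.337633 * (u - 128)));
                int b = clamp((int) (y + 1.732446 * (u - 128)));
                argb[row * width + col] = 0xff000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    private static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }
}
```

On Android, the resulting int array can be handed to `Bitmap.setPixels`, or you can skip this conversion entirely and process the NV21/BGR24 bytes directly in native code, which is faster.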

I have another idea, if step 2 can be written as a GPU shader (many real-time camera effects on mobile are done this way): you can use MediaPlayer instead of ffmpeg and write a GPU shader for SurfaceTexture.
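As a hedged sketch of that shader-based alternative, the fragment shader below applies a simple per-frame effect (grayscale) to frames MediaPlayer renders into a SurfaceTexture. The class and uniform/varying names are illustrative; on Android this source string would be passed to `GLES20.glShaderSource` and `glCompileShader`:

```java
public class VideoEffectShader {

    // SurfaceTexture frames must be sampled as an external OES texture,
    // hence the extension directive and samplerExternalOES type.
    public static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "varying vec2 vTexCoord;\n"
            + "uniform samplerExternalOES uVideoFrame;\n"
            + "void main() {\n"
            + "    vec4 color = texture2D(uVideoFrame, vTexCoord);\n"
            // BT.601 luma weights: a simple grayscale "processing" step.
            + "    float luma = dot(color.rgb, vec3(0.299, 0.587, 0.114));\n"
            + "    gl_FragColor = vec4(vec3(luma), 1.0);\n"
            + "}\n";
}
```

The effect runs entirely on the GPU, so there is no per-frame copy back to Java; the trade-off is that your processing algorithm must be expressible in GLSL.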

Licensed under: CC-BY-SA with attribution