Question

I have an app that needs to render frames from a video/movie into a CGBitmapContext with an arbitrary CGAffineTransform. I'd like it to have a decent frame rate, like 20fps at least.

I've tried using AVURLAsset and [AVAssetImageGenerator copyCGImageAtTime:], and as the documentation for this method clearly states, it's quite slow, taking me down to 5fps sometimes.

What is a better way to do this? I'm thinking that I could set up an AVPlayer with an AVPlayerLayer, then use [CALayer renderInContext:] with my transform. Would this work? Or does an AVPlayerLayer perhaps stop running when it notices that it's not being shown on screen?

Any other ways to suggest?

Solution

I ended up getting lovely, quick UIImages from the frames of a video by:
1) Creating an AVURLAsset with the video's URL.
2) Creating an AVAssetReader with the asset.
3) Setting the reader's timeRange property.
4) Creating an AVAssetReaderTrackOutput with the asset's first video track.
5) Adding the output to the reader and calling [reader startReading]. (This setup is sketched in code just below.)
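Roughly, that setup looks like this. This is a minimal sketch under ARC; the helper name and the BGRA output settings are my own assumptions, not part of the original steps:

    #import <AVFoundation/AVFoundation.h>

    // Hypothetical helper: builds a reader and track output for one time range.
    static BOOL makeReader(NSURL *movieURL, CMTimeRange timeRange,
                           AVAssetReader **outReader, AVAssetReaderTrackOutput **outOutput)
    {
        // 1) Asset from the movie's URL
        AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];

        // 2) Reader for the asset
        NSError *error = nil;
        AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
        if (!reader) return NO;

        // 3) Restrict reading to the time range we care about
        reader.timeRange = timeRange;

        // 4) Track output for the first video track, decoded to BGRA so the
        //    pixel buffers can be handed straight to CGBitmapContextCreate later
        AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
        NSDictionary *settings = @{ (id)kCVPixelBufferPixelFormatTypeKey :
                                        @(kCVPixelFormatType_32BGRA) };
        AVAssetReaderTrackOutput *output =
            [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                       outputSettings:settings];

        // 5) Attach the output and start reading
        if (![reader canAddOutput:output]) return NO;
        [reader addOutput:output];
        if (![reader startReading]) return NO;

        *outReader = reader;
        *outOutput = output;
        return YES;
    }

Asking for kCVPixelFormatType_32BGRA here is my choice, so the decoded pixel buffers are already in a layout that CGBitmapContextCreate can wrap directly.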

Then for each frame:
6) Calling [output copyNextSampleBuffer].
7) Passing the sample buffer into CMSampleBufferGetImageBuffer.
8) Passing the image buffer into CVPixelBufferLockBaseAddress with the read-only flag.
9) Getting the base address of the image buffer with CVPixelBufferGetBaseAddress.
10) Calling CGBitmapContextCreate with the dimensions and bytes-per-row from the image buffer, passing the base address in as the location of the CGBitmap's pixels.
11) Calling CGBitmapContextCreateImage to get the CGImageRef. (This per-frame loop is sketched in code just below.)
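The per-frame step, again as a hedged sketch: the helper name is mine, and it also does the unlock/release cleanup that the list above leaves implicit (CGBitmapContextCreateImage copies the context's pixels, so the buffers can be released right after):

    #import <AVFoundation/AVFoundation.h>
    #import <CoreGraphics/CoreGraphics.h>

    // Hypothetical helper: pulls the next frame from the track output and wraps it
    // in a CGImage. Returns NULL when the reader runs out of samples.
    // The caller is responsible for CGImageRelease()ing the result.
    static CGImageRef CopyNextFrameImage(AVAssetReaderTrackOutput *output)
    {
        // 6) Grab the next sample buffer
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (!sampleBuffer) return NULL;

        // 7) Its image (pixel) buffer
        CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

        // 8) Lock the pixels for read-only access
        CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

        // 9) Base address plus the geometry CGBitmapContextCreate needs
        void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
        size_t width = CVPixelBufferGetWidth(imageBuffer);
        size_t height = CVPixelBufferGetHeight(imageBuffer);
        size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);

        // 10) Wrap the pixels in a bitmap context (BGRA layout from the output settings)
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                     bytesPerRow, colorSpace,
                                                     kCGBitmapByteOrder32Little |
                                                     (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);

        // 11) Snapshot the context into a CGImage (this copies the pixel data)
        CGImageRef frameImage = CGBitmapContextCreateImage(context);

        // Clean up: the CGImage owns its own copy of the pixels now
        CGContextRelease(context);
        CGColorSpaceRelease(colorSpace);
        CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
        CFRelease(sampleBuffer);

        return frameImage;
    }

From there it's one [UIImage imageWithCGImage:] call away from a UIImage, or you can draw the CGImage into your own CGBitmapContext with whatever CGAffineTransform you need.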

I was very pleased to find that this works surprisingly well for scrubbing. If the user wants to go back to an earlier part of the video, simply create a new AVAssetReader with the new time range and go. It's quite fast!
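So a scrub backwards is just tearing down the current reader and building a new one at the new position, something like this (illustrative only, reusing the hypothetical helper from the first sketch; reader, output, movieURL, and scrubTime are assumed to be existing state):

    // Cancel the old reader, then read again from the requested time to the end.
    [reader cancelReading];
    CMTimeRange newRange = CMTimeRangeMake(CMTimeMakeWithSeconds(scrubTime, 600),
                                           kCMTimePositiveInfinity);
    makeReader(movieURL, newRange, &reader, &output);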

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow