Question

I have seen this question asked many times in different forms both here and in other forums. Some of the questions get answered, some do not. There are a few where the answerer or author claims to have had success. I have implemented the examples from those that claim success, but have yet to see the same results.

I am able to successfully use AVAssetWriter (and AVAssetWriterInputPixelBufferAdaptor) to write image data and audio data simultaneously when the sample buffers are obtained from an AVCaptureSession. However, if I have CGImageRefs that were generated some other way and build a CVPixelBufferRef "from scratch", the appendPixelBuffer:withPresentationTime: method of AVAssetWriterInputPixelBufferAdaptor succeeds for a few frames and then fails for all subsequent frames. The resulting movie file is, of course, not valid.
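For reference, here is a minimal sketch (in Swift rather than the question's Objective-C, using Apple's CoreVideo/CoreGraphics APIs) of building a CVPixelBuffer from a CGImage. The two compatibility attribute keys are the detail most often reported as necessary when the buffer is filled with a CGBitmapContext instead of coming from a capture session; the function name `makePixelBuffer` is my own:

```swift
import CoreVideo
import CoreGraphics

// Sketch: render a CGImage into a freshly allocated CVPixelBuffer.
// The compatibility keys matter when the buffer is drawn into via CGContext.
func makePixelBuffer(from image: CGImage) -> CVPixelBuffer? {
    let attrs: [CFString: Any] = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     image.width, image.height,
                                     kCVPixelFormatType_32ARGB,
                                     attrs as CFDictionary,
                                     &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    // Draw the image into the pixel buffer's backing memory.
    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: image.width,
                                  height: image.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    else { return nil }
    context.draw(image,
                 in: CGRect(x: 0, y: 0, width: image.width, height: image.height))
    return pixelBuffer
}
```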

You can see my example code at: http://pastebin.com/FCJZJmMi

The images are valid, and are verified by displaying them in a debug window (see lines 50-53). The app has been tested with Instruments; memory utilization stays low for the entire run, and the app never receives a memory warning.

As far as I can tell I have followed the documentation that is available. Why does the example code fail? What needs to be done to fix it?

If anybody has successfully gotten AVAssetWriterInputPixelBufferAdaptor to work with their own images, please chime in.


Solution 2

Two things were required to make this work.

If you are debugging your own AVAssetWriterInputPixelBufferAdaptor, be careful to make sure you don't skip any CMTimes, and also make sure you never repeat a CMTime: there must be exactly one frame per time slot, always.
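One simple way to guarantee unique, strictly increasing presentation times is to derive each CMTime directly from the frame index with a fixed timescale. A small sketch (in Swift; the helper name `frameTime` is my own, and the CoreMedia usage is shown in comments):

```swift
// Derive the (value, timescale) pair for frame `index` at `fps` frames/second.
// Using the frame index itself as the value gives each frame a unique,
// strictly increasing time slot: no skips, no repeats.
func frameTime(index: Int, fps: Int32) -> (value: Int64, timescale: Int32) {
    return (Int64(index), fps)
}

// Usage with AVAssetWriterInputPixelBufferAdaptor (CoreMedia):
// let t = frameTime(index: i, fps: 30)
// adaptor.append(pixelBuffer,
//                withPresentationTime: CMTime(value: t.value, timescale: t.timescale))
```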

OTHER TIPS

Make a movie with a series of images using AVAssetWriter from the example - https://github.com/caferrara/img-to-video
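The overall write loop in examples like that one follows the same pattern: wait until the input is ready, then append each buffer with a unique, increasing presentation time. A hedged sketch (in Swift; the parameter names and the surrounding setup are assumed, not taken from the linked repo):

```swift
import AVFoundation

// Sketch of the write loop: respect isReadyForMoreMediaData and give every
// frame its own strictly increasing presentation time.
func writeFrames(_ frames: [CVPixelBuffer],
                 writer: AVAssetWriter,
                 writerInput: AVAssetWriterInput,
                 adaptor: AVAssetWriterInputPixelBufferAdaptor,
                 fps: Int32) {
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    for (i, buffer) in frames.enumerated() {
        // Busy-waiting keeps the sketch short; production code should use
        // requestMediaDataWhenReady(on:using:) instead.
        while !writerInput.isReadyForMoreMediaData { usleep(10_000) }
        let time = CMTime(value: Int64(i), timescale: fps)
        adaptor.append(buffer, withPresentationTime: time)
    }
    writerInput.markAsFinished()
    writer.finishWriting { /* inspect writer.status and writer.error here */ }
}
```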

Licensed under: CC-BY-SA with attribution