Question

I am trying to use the AVFoundation framework to capture a 'series' of still images from AVCaptureStillImageOutput QUICKLY, like the burst mode in some cameras. I want to use the completion handler,

    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection 
                                              completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error) { 

and pass the imageSampleBuffer to an NSOperation object for later processing. However, I can't find a way to retain the buffer in the NSOperation class.

    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                              completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {

        //Add to queue
        SaveImageDataOperation *saveOperation = [[SaveImageDataOperation alloc] initWithImageBuffer:imageSampleBuffer];
        [_saveDataQueue addOperation:saveOperation];
        [saveOperation release];

        //Continue
        [self captureCompleted];
    }];

Does anyone know what I may be doing wrong here? Is there a better approach to do this?


Solution

"IMPORTANT: Clients of CMSampleBuffer must explicitly manage the retain count by calling CFRetain and CFRelease, even in processes using garbage collection."

Source: CoreMedia.framework, CMSampleBuffer.h
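
In other words, the operation itself has to take ownership of the buffer. Below is a minimal sketch of how the question's SaveImageDataOperation could do that, assuming manual reference counting as in the question's code (only the class name and initializer come from the question; everything else is illustrative):

    #import <Foundation/Foundation.h>
    #import <CoreMedia/CoreMedia.h>

    @interface SaveImageDataOperation : NSOperation {
        CMSampleBufferRef _imageBuffer;
    }
    - (id)initWithImageBuffer:(CMSampleBufferRef)buffer;
    @end

    @implementation SaveImageDataOperation

    - (id)initWithImageBuffer:(CMSampleBufferRef)buffer {
        self = [super init];
        if (self) {
            // Keep the buffer alive past the end of the completion handler.
            _imageBuffer = (CMSampleBufferRef)CFRetain(buffer);
        }
        return self;
    }

    - (void)main {
        @autoreleasepool {
            // ... copy whatever you need out of _imageBuffer here ...

            // Hand the buffer back to the capture subsystem's pool as soon as possible.
            CFRelease(_imageBuffer);
            _imageBuffer = NULL;
        }
    }

    - (void)dealloc {
        if (_imageBuffer) CFRelease(_imageBuffer);
        [super dealloc]; // MRC, as in the question; omit under ARC
    }

    @end

With that in place, the completion handler in the question can stay as it is; the CFRetain in the initializer is what keeps the buffer valid after the handler returns.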

Other tips

I've been doing a lot of work with CMSampleBuffer objects recently, and I've learned that most of the media buffers sourced by the OS during real-time operations are allocated from pools. If AVFoundation (or Core Video/Core Media) runs out of buffers in a pool (i.e., you CFRetain a buffer for a 'long' time), the real-time aspect of the process is going to suffer or block until you CFRelease the buffer back into the pool.

So, in addition to managing the CFRetain/CFRelease count on the CMSampleBuffer, you should only keep the buffer retained long enough to unpack (deep copy) the bits in its CMBlockBuffer/CMFormatDescription and create a new CMSampleBuffer to pass to your NSOperationQueue or dispatch_queue_t for later processing.
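
As a rough sketch of that idea, for a compressed buffer containing a single sample in a contiguous block buffer (error handling trimmed, and the helper name is made up here):

    #import <CoreMedia/CoreMedia.h>

    static CMSampleBufferRef CreateSampleBufferDeepCopy(CMSampleBufferRef source) {
        // Copy the compressed bytes into memory the application owns.
        CMBlockBufferRef srcBlock = CMSampleBufferGetDataBuffer(source);
        size_t length = CMBlockBufferGetDataLength(srcBlock);
        void *bytes = malloc(length);
        CMBlockBufferCopyDataBytes(srcBlock, 0, length, bytes);

        // Wrap the copy in a new CMBlockBuffer; kCFAllocatorMalloc frees `bytes` for us.
        CMBlockBufferRef copyBlock = NULL;
        CMBlockBufferCreateWithMemoryBlock(kCFAllocatorDefault, bytes, length,
                                           kCFAllocatorMalloc, NULL, 0, length, 0,
                                           &copyBlock);

        // Format descriptions are ordinary CF objects, not pooled buffers,
        // so reusing the original one is fine.
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(source);
        CMSampleTimingInfo timing;
        CMSampleBufferGetSampleTimingInfo(source, 0, &timing);

        size_t sampleSize = length;
        CMSampleBufferRef copy = NULL;
        CMSampleBufferCreate(kCFAllocatorDefault, copyBlock, true, NULL, NULL,
                             format, 1, 1, &timing, 1, &sampleSize, &copy);
        CFRelease(copyBlock);
        return copy; // caller owns the copy; the original can go back to its pool
    }

The copied buffer behaves like any other CMSampleBuffer, but its backing memory belongs to your process rather than to a capture or codec pool.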

In my situation I wanted to pass compressed CMSampleBuffers from the VideoToolbox over a network. I essentially created a deep copy of the CMSampleBuffer, with my application having full control over the memory allocation/lifetime. From there, I put the copied CMSampleBuffer on a queue for the network I/O to consume.

If the sample data is compressed, deep copying should be relatively fast. In my application, I used NSKeyedArchiver to create an NSData object from the relevant parts of the source CMSampleBuffer. For H.264 video data, that meant the CMBlockBuffer contents, the SPS/PPS header bytes, and also the SampleTimingInfo. By serializing those elements I could reconstruct a CMSampleBuffer on the other end of the network that behaved identically to the one that VideoToolbox had given me. In particular, AVSampleBufferDisplayLayer was able to display them as if they were natively sourced on the machine.
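
This isn't the answerer's exact archiving code, but the pieces being described can be pulled out of a compressed buffer roughly like this (the function name and dictionary keys are just for illustration):

    #import <Foundation/Foundation.h>
    #import <CoreMedia/CoreMedia.h>

    static NSData *ArchiveCompressedSampleBuffer(CMSampleBufferRef sampleBuffer) {
        // SPS (index 0) and PPS (index 1) live in the format description.
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
        const uint8_t *sps = NULL, *pps = NULL;
        size_t spsSize = 0, ppsSize = 0;
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sps, &spsSize, NULL, NULL);
        CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pps, &ppsSize, NULL, NULL);

        // The compressed NAL unit payload.
        CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length = CMBlockBufferGetDataLength(block);
        NSMutableData *payload = [NSMutableData dataWithLength:length];
        CMBlockBufferCopyDataBytes(block, 0, length, [payload mutableBytes]);

        // Timing, stored as plain numbers so the dictionary archives cleanly.
        CMSampleTimingInfo timing;
        CMSampleBufferGetSampleTimingInfo(sampleBuffer, 0, &timing);

        NSDictionary *packet = @{ @"sps"      : [NSData dataWithBytes:sps length:spsSize],
                                  @"pps"      : [NSData dataWithBytes:pps length:ppsSize],
                                  @"payload"  : payload,
                                  @"ptsValue" : @(timing.presentationTimeStamp.value),
                                  @"ptsScale" : @(timing.presentationTimeStamp.timescale) };
        return [NSKeyedArchiver archivedDataWithRootObject:packet];
    }

On the receiving side you would do the inverse: rebuild a format description from the SPS/PPS with CMVideoFormatDescriptionCreateFromH264ParameterSets, wrap the payload in a CMBlockBuffer, and call CMSampleBufferCreate as in the previous sketch.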

For your application I would recommend the following:

  1. Take your source CMSampleBuffer and compress the pixel data. If you can, use the hardware encoder in VideoToolbox to create I-frame-only H.264 images, which will be very high quality (see the sketch after this list). The VT encoder is apparently very good for battery life as well, probably much better than JPEG unless there is a hardware JPEG codec on the system too.
  2. Deep copy the compressed CMSampleBuffer output by VideoToolbox; VT will then CFRelease the original CMSampleBuffer back to the pool used by the capture subsystem.
  3. Retain the VT-compressed CMSampleBuffer only long enough to enqueue a deep copy for later processing.
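
For step #1, a VTCompressionSession configured for I-frame-only output might be set up roughly like this (the callback and helper names are mine; error handling is trimmed):

    #import <VideoToolbox/VideoToolbox.h>

    static void CompressedFrameCallback(void *refCon, void *sourceFrameRefCon,
                                        OSStatus status, VTEncodeInfoFlags infoFlags,
                                        CMSampleBufferRef sampleBuffer) {
        if (status != noErr || sampleBuffer == NULL) return;
        // Deep copy / enqueue here (steps #2 and #3); the buffer handed to this
        // callback goes back to VideoToolbox as soon as you return.
    }

    static VTCompressionSessionRef CreateIFrameOnlySession(int32_t width, int32_t height) {
        VTCompressionSessionRef session = NULL;
        if (VTCompressionSessionCreate(kCFAllocatorDefault, width, height,
                                       kCMVideoCodecType_H264, NULL, NULL, NULL,
                                       CompressedFrameCallback, NULL, &session) != noErr) {
            return NULL;
        }

        // Keyframe interval of 1 = every output sample is a self-contained I-frame.
        int32_t one = 1;
        CFNumberRef interval = CFNumberCreate(kCFAllocatorDefault, kCFNumberSInt32Type, &one);
        VTSessionSetProperty(session, kVTCompressionPropertyKey_MaxKeyFrameInterval, interval);
        CFRelease(interval);

        VTSessionSetProperty(session, kVTCompressionPropertyKey_AllowFrameReordering, kCFBooleanFalse);
        VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
        VTCompressionSessionPrepareToEncodeFrames(session);
        return session;
    }

Each captured pixel buffer then goes through VTCompressionSessionEncodeFrame, and the compressed CMSampleBuffers arrive in the callback above.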

Since the AVFoundation movie recorder can do steps #1 and #2 in real time without running out of buffers, you should be able to deep copy and enqueue your data on a dispatch_queue without exhausting the buffer pools used by the video capture and VideoToolbox components.

Licensed under: CC-BY-SA with attribution
Not affiliated with Stack Overflow