Question

Many people before me have shared their knowledge on Stack Overflow about this topic, and thanks to their contributions I was able to adopt many of the tips and code snippets. It all worked quite well, except that it was often hard on working memory. The time-lapse application I am working on was able to generate a movie out of 2000 HD images and more, but since iOS 7.1 it has trouble generating a video out of more than 240 HD images. 240 images seems to be the limit on an iPhone 5s. I was wondering whether anybody else has run into this problem and whether anybody has found a solution. Now to the source code.

This part iterates through the UIImages saved in the app's Documents directory.

if ([adaptor.assetWriterInput isReadyForMoreMediaData])
{
    // Presentation time of frame i at the chosen frame rate
    CMTime frameTime = CMTimeMake(1, fps);
    CMTime lastTime = CMTimeMake(i, fps);
    CMTime presentTime = CMTimeAdd(lastTime, frameTime);

    NSString *imageFilePath = [NSString stringWithFormat:@"%@/%@", folderPathName, imageFileName];
    image = [UIImage imageWithContentsOfFile:imageFilePath];
    cgimage = [image CGImage];

    buffer = [self pixelBufferFromCGImage:cgimage];
    BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];

    if (result == NO)
    {
        NSLog(@"failed to append buffer %i", i);
        _videoStatus = 0;
        success = NO;

        return success;
    }

    //buffer has to be released here or memory pressure will occur
    if (buffer != NULL)
    {
        CVBufferRelease(buffer);
        buffer = NULL;
    }
}
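For context, the loop above assumes an AVAssetWriter and a pixel buffer adaptor that were set up beforehand. Here is a minimal sketch of such a setup; it is not my original code, and names like outputURL and the 1920x1080 dimensions are placeholders.

NSError *error = nil;
// outputURL is an assumed NSURL pointing at the destination movie file
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:outputURL
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];

NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @1920,
                                 AVVideoHeightKey : @1080 };

AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:videoSettings];

AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                     sourcePixelBufferAttributes:nil];

[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];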

This is the local method that appears to cause most of the trouble. It creates a pixel buffer from a CGImage.

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
                                          CGImageGetHeight(image), kCVPixelFormatType_32ARGB,
                                          (CFDictionaryRef)CFBridgingRetain(options),
                                          &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
                                                 CGImageGetHeight(image), 8, 4*CGImageGetWidth(image),
                                                 rgbColorSpace, (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);

    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(M_PI));

    float width = CGImageGetWidth(image);
    float height = CGImageGetHeight(image);

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

I have spent a lot of time on this without moving forward. Any help is much appreciated. If more details are needed, I am glad to provide them.


Solution 2

Finally I found the solution to my problem. There were two things I had to change in my code.

  1. I changed the parameter type of the method from CGImageRef to UIImage*, so it is now (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image. The main reason for this is to simplify the code, so that the following correction is easier to implement.
  2. An @autoreleasepool block is introduced into this method, and this is the actual key to the solution: CGImageRef cgimage = [image CGImage]; and all other parts of the method must be enclosed in the autorelease pool.

The code looks like this.

- (CVPixelBufferRef)pixelBufferFromImage:(UIImage *)image withOrientation:(ImageOrientation)orientation
{
    @autoreleasepool
    {
        CGImageRef cgimage = [image CGImage];

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                                 [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                                 nil];

        CVPixelBufferRef pxbuffer = NULL;

        float width = CGImageGetWidth(cgimage);
        float height = CGImageGetHeight(cgimage);

        CVPixelBufferCreate(kCFAllocatorDefault, width,
                            height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)(options),
                            &pxbuffer);

        CVPixelBufferLockBaseAddress(pxbuffer, 0);

        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, width,
                                                     height, 8, 4*width, rgbColorSpace,
                                                     (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);

        // Rotate and draw the image into the pixel buffer
        CGContextConcatCTM(context, CGAffineTransformMakeRotation(-M_PI/2));
        CGContextDrawImage(context, CGRectMake(-height, 0, height, width), cgimage);

        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        return pxbuffer;
    }
}

With this solution, an HD movie of more than 2000 images is generated at a rather slow speed, but it seems to be very reliable, which is what matters most.
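Aside: if the slow generation speed ever becomes an issue, a common variation (my suggestion, not part of the fix above) is to recycle buffers from the adaptor's pixelBufferPool instead of creating a new one per frame. Note that the pool is only available once writing has started. A minimal sketch:

// Sketch only, assuming `adaptor` is the AVAssetWriterInputPixelBufferAdaptor
// from the writing loop and [videoWriter startWriting] has already been called.
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     adaptor.pixelBufferPool,
                                                     &pxbuffer);
if (status != kCVReturnSuccess || pxbuffer == NULL) {
    return NULL; // or fall back to CVPixelBufferCreate as shown above
}
// ...then lock, draw with Core Graphics, and unlock exactly as in the method above.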

Other tips

I use very similar code, although for slightly different reasons (I'm using AVAssetReader, grabbing frames as images and manipulating them). The net result, however, should be similar: I'm iterating through thousands of images without issue.

The two things I notice I'm doing differently:

  1. When you release the buffer, you're using CVBufferRelease, while I'm using CVPixelBufferRelease.
  2. You are not releasing the CGImage with CGImageRelease.

Try rewriting this:

//buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVBufferRelease(buffer);
    buffer = NULL;
}

as:

//buffer has to be released here or memory pressure will occur
if (buffer != NULL)
{
    CVPixelBufferRelease(buffer);
    buffer = NULL;
}
CGImageRelease(cgImage);

Let me know how that goes.

EDIT: Here is a sample of my code getting and releasing a CGImageRef. The image is created from a CIImage extracted from the reader buffer and filtered.

// 1. Create the CGImage from the filtered CIImage
CGImageRef finalImage = [context createCGImage:outputImage fromRect:[outputImage extent]];

// 2. Grab the size
CGSize size = CGSizeMake(CGImageGetWidth(finalImage), CGImageGetHeight(finalImage));

// 3. Convert the CGImage to a PixelBuffer
CVPixelBufferRef pxBuffer = NULL;
pxBuffer = [self pixelBufferFromCGImage:finalImage andSize:size];

// 4. Write things back out.
// Calculate the frame time
CMTime frameTime = CMTimeMake(1, 30);
CMTime presentTime = CMTimeAdd(currentTime, frameTime);

[_ugcAdaptor appendPixelBuffer:pxBuffer withPresentationTime:presentTime];

CGImageRelease(finalImage);
CVPixelBufferRelease(pxBuffer);
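For reference, the snippet above presumes a CIContext and a filtered CIImage were created earlier, roughly along these lines (an assumed reconstruction; the filter choice and the inputImage variable are placeholders, not taken from the answer):

CIContext *context = [CIContext contextWithOptions:nil];

// inputImage is assumed to be a CIImage built from the reader's sample buffer,
// e.g. [CIImage imageWithCVPixelBuffer:readerPixelBuffer]
CIFilter *filter = [CIFilter filterWithName:@"CISepiaTone"];
[filter setValue:inputImage forKey:kCIInputImageKey];
[filter setValue:@0.8 forKey:kCIInputIntensityKey];
CIImage *outputImage = [filter outputImage];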