I'm using Core Image filters in my app. Everything works fine on my iPhone 5 running iOS 7, but when I test it on an iPhone 4S, which has only 512 MB of total memory, the app crashes.

Here's the situation: I have two images taken from the camera, each with a resolution of 2448x3264. On my iPhone 5, the whole process peaks at about 150 MB according to Instruments.

[Screenshot: Instruments memory usage on iPhone 5]

However, when I run the same code on the iPhone 4S, Instruments gives me low-memory warnings all the time, even though the overall memory use is quite low (around 8 MB). Here's the screenshot:

[Screenshot: Instruments memory usage on iPhone 4S]

And here's the code. Basically, I load two images from my app's Documents folder and apply two filters in a row:

    CIImage *foreground = [[CIImage alloc] initWithContentsOfURL:foregroundURL];
    CIImage *background = [[CIImage alloc] initWithContentsOfURL:backgroundURL];
    CIFilter *softLightBlendFilter = [CIFilter filterWithName:@"CISoftLightBlendMode"];
    [softLightBlendFilter setDefaults];
    [softLightBlendFilter setValue:foreground forKey:kCIInputImageKey];
    [softLightBlendFilter setValue:background forKey:kCIInputBackgroundImageKey];

    foreground = [softLightBlendFilter outputImage];
    background = nil;
    softLightBlendFilter = nil;

    CIFilter *gammaAdjustFilter = [CIFilter filterWithName:@"CIGammaAdjust"];
    [gammaAdjustFilter setDefaults];
    [gammaAdjustFilter setValue:foreground forKey:kCIInputImageKey];
    [gammaAdjustFilter setValue:[NSNumber numberWithFloat:value] forKey:@"inputPower"];
    foreground = [gammaAdjustFilter valueForKey:kCIOutputImageKey];

    gammaAdjustFilter = nil;

    CIContext *context = [CIContext contextWithOptions:nil];
    CGRect extent = [foreground extent];
    CGImageRef cgImage = [context createCGImage:foreground fromRect:extent];

    UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:imgOrientation];
    CFRelease(cgImage);
    foreground = nil;

    return image;

The app crashes at this line: `CGImageRef cgImage = [context createCGImage:foreground fromRect:extent];`

Is there a more memory-efficient way of handling this situation, or am I doing something wrong here?

Big thanks!


Solution

Short version:

While it seems trivial in concept, this is actually a pretty memory-intensive task for the device in question.

Long version:

Consider this: 2 images * 4 bytes per pixel (8 bits per RGBA channel) * 2448 * 3264 ≈ 64 MB. Then Core Image will require another ~32 MB for the output of the filter operation. Getting that from a CIContext into a CGImage is likely going to consume another 32 MB. I would expect the UIImage copy to share the CGImage's memory representation, at least by mapping the image via VM with copy-on-write, although you may get dinged for the double usage anyway: despite not consuming "real" memory, it still counts against mapped pages.
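
Spelled out, the arithmetic looks like this (assuming the usual 4-byte RGBA pixel format):

    2448 * 3264 pixels * 4 bytes/pixel ≈ 32 MB per image
    2 inputs (64 MB) + 1 filter output (32 MB) + 1 CGImage copy (32 MB) ≈ 128 MB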

So at a bare minimum, you're using 128 MB, plus whatever else your app happens to use. That's a considerable amount of RAM for a device like the 4S, which only has 512 MB to begin with. In my experience, I'd say this is on the outer edge of what's possible. I'd expect it to work at least some of the time, but it doesn't surprise me to hear that it's getting memory warnings and memory-pressure kills. You'll want to make sure that the CIContext and all the input images are deallocated as soon as possible after making the CGImage, and before making the UIImage from the CGImage.

In general, this could all be made easier by scaling the images down before filtering.
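
As a rough illustration (untested; `scaledImage` is a hypothetical helper, using the standard `CILanczosScaleTransform` filter), downscaling each source before blending might look like this:

    // Hypothetical helper: scale a CIImage down before filtering to shrink the
    // working set. The caller picks the factor, e.g. 0.5 to halve each dimension.
    static CIImage *scaledImage(CIImage *input, CGFloat scale)
    {
        CIFilter *lanczos = [CIFilter filterWithName:@"CILanczosScaleTransform"];
        [lanczos setValue:input forKey:kCIInputImageKey];
        [lanczos setValue:@(scale) forKey:@"inputScale"];
        [lanczos setValue:@(1.0) forKey:@"inputAspectRatio"];
        return [lanczos outputImage];
    }

Halving each dimension cuts per-image memory by a factor of four; a 2448x3264 RGBA image drops from ~32 MB to ~8 MB.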

Without testing, and assuming ARC, I present the following as a potential improvement:

    - (UIImage *)imageWithForeground:(NSURL *)foregroundURL
                          background:(NSURL *)backgroundURL
                         orientation:(UIImageOrientation)orientation
                               value:(float)value
    {
        CIImage *holder = nil;
        @autoreleasepool
        {
            CIImage *foreground = [[CIImage alloc] initWithContentsOfURL:foregroundURL];
            CIImage *background = [[CIImage alloc] initWithContentsOfURL:backgroundURL];
            CIFilter *softLightBlendFilter = [CIFilter filterWithName:@"CISoftLightBlendMode"];
            [softLightBlendFilter setDefaults];
            [softLightBlendFilter setValue:foreground forKey:kCIInputImageKey];
            [softLightBlendFilter setValue:background forKey:kCIInputBackgroundImageKey];

            holder = [softLightBlendFilter outputImage];
            // This is probably the peak usage moment -- I expect both source images
            // as well as the output to be in memory.
        }
        // At this point, I expect the two source images to be flushed, leaving the one output image.
        @autoreleasepool
        {
            CIFilter *gammaAdjustFilter = [CIFilter filterWithName:@"CIGammaAdjust"];
            [gammaAdjustFilter setDefaults];
            [gammaAdjustFilter setValue:holder forKey:kCIInputImageKey];
            [gammaAdjustFilter setValue:[NSNumber numberWithFloat:value] forKey:@"inputPower"];
            holder = [gammaAdjustFilter outputImage];
            // At this point, I expect us to have two images in memory, input and output.
        }
        // Here we should be back down to just one image in memory.
        CGImageRef cgImage = NULL;

        @autoreleasepool
        {
            CIContext *context = [CIContext contextWithOptions:nil];
            CGRect extent = [holder extent];
            cgImage = [context createCGImage:holder fromRect:extent];
            // One would hope that CG and CI would be sharing memory via VM, but they
            // probably aren't, so we likely have two images in memory at this point too.
        }
        // Now I expect all the CIImages to have gone away, leaving one image in memory (just the CGImage).
        UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:orientation];
        // I expect UIImage to almost certainly share the image data with the CGImageRef
        // via VM, but even if it doesn't, we only have two images in memory.
        CFRelease(cgImage);
        // Now we should have only one image in memory: the one we're returning.
        return image;
    }

As indicated in the comments, the high-water mark is going to be the operation that takes two input images and creates one output image; that will always require three images to be in memory, no matter what. To get the high-water mark down any further, you'd have to process the images in sections/tiles, or scale them down to a smaller size. A rough sketch of the tiled approach follows.
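
Here is an untested sketch of that idea: render the final CIImage (`holder`, from the code above) in horizontal strips into one pre-allocated bitmap, so Core Image only needs one strip's worth of working memory at a time. You still pay for the full output bitmap, but you avoid holding the full Core Image working set alongside it. The 512-pixel strip height is an arbitrary choice:

    CGRect extent = [holder extent];
    size_t width = (size_t)CGRectGetWidth(extent);
    size_t height = (size_t)CGRectGetHeight(extent);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                                colorSpace,
                                                (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CIContext *context = [CIContext contextWithOptions:nil];
    CGFloat stripHeight = 512; // arbitrary; tune against your memory budget

    for (CGFloat y = 0; y < height; y += stripHeight)
    {
        @autoreleasepool
        {
            CGRect strip = CGRectMake(extent.origin.x, extent.origin.y + y,
                                      width, MIN(stripHeight, height - y));
            CGImageRef stripImage = [context createCGImage:holder fromRect:strip];
            // Both CIImage extents and CG bitmap contexts use a bottom-left origin,
            // so the same y offset lines the strips up.
            CGContextDrawImage(bitmap,
                               CGRectMake(0, y, width, CGRectGetHeight(strip)),
                               stripImage);
            CGImageRelease(stripImage);
        }
    }

    CGImageRef cgImage = CGBitmapContextCreateImage(bitmap);
    CGContextRelease(bitmap);
    CGColorSpaceRelease(colorSpace);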
