Question

I am trying to cut a UIImage into an irregular shape using UIBezierPath. This is how I mask the UIImageView which contains the UIImage I'm interested in:

- (void)maskImageView:(UIImageView *)image
{
    UIBezierPath *path = [self getPath];
    CGRect rect = CGRectZero;
    rect.size = image.image.size;
    CAShapeLayer *shapeLayer = [CAShapeLayer layer];
    shapeLayer.frame = CGRectMake(0, 0, image.frame.size.width, image.frame.size.height);
    shapeLayer.path = path.CGPath;
    shapeLayer.fillColor = [[UIColor whiteColor] CGColor];
    shapeLayer.backgroundColor = [[UIColor clearColor] CGColor];
    [image.layer setMask:shapeLayer];
}

This works as intended and I get a correctly masked UIImageView. However, I want to get the underlying UIImage so I can draw it on top of another image. This is how I get the UIImage:

-(UIImage *)getCroppedImageForView:(UIImageView *)view{
    CGSize size = view.layer.bounds.size;
    UIGraphicsBeginImageContext(size);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

I then draw the image and save it to the camera roll:

-(UIImage *)drawImage:(UIImage*)image
{
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(finalImage, nil, nil, nil);
    return finalImage;
}

When I look at the saved image I can see that it is smaller than the original one even though I draw the UIBezierPath around the image edges (not really cropping it). I use the same method to draw and save other images and they come out correctly so I'm pretty sure the problem is with the getCroppedImageForView method. Can anyone point me in the right direction?

EDIT

The image is captured using the device's camera, so on an iPhone 5 its size is 3264x2448. The UIImageView containing the image is 320x568, and its contentMode is UIViewContentModeScaleAspectFit. What I want to achieve is to get the full-size (3264x2448) cropped image and then draw it on top of another image of the same size.


Solution

A couple of thoughts:

  1. You note that the image view is 320x568 points, but the image itself is 3264x2448 pixels. The key observation is that the UIGraphicsBeginImageContext... functions create a UIImage by rendering the UIImageView, so the size of the resulting image is tied to the size of that UIKit control and may bear little relation to the size of the original UIImage.

  2. In getCroppedImageForView, you are not allowing for the scale of the device, so even on a Retina screen you generate an image that is only 320x568 pixels. Generally you'd do something like the following:

    - (UIImage *)imageForView:(UIView *)view
    {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return image;
    }
    

    By using UIGraphicsBeginImageContextWithOptions with 0 for the last parameter, the context picks up the scale of the device, so on a Retina screen it will generate an image that is 640x1136 pixels.

  3. A minor simplification to maskImageView: by default, the mask layer will use the same frame as the image view's layer, so you can eliminate the setting of the frame altogether, yielding:

    - (void)maskImageView:(UIImageView *)imageView
    {
        UIBezierPath *path = [self getPath];
        CAShapeLayer *shapeLayer = [CAShapeLayer layer];
        shapeLayer.path = path.CGPath;
        shapeLayer.fillColor = [[UIColor whiteColor] CGColor];
        shapeLayer.backgroundColor = [[UIColor clearColor] CGColor];
        [imageView.layer setMask:shapeLayer];
    }
    
  4. In your drawImage, it is unnecessary to re-render the image. You already have the image. So just use that UIImage directly:

    - (void)writeToSavedPhotosAlbum:(UIImage*)image
    {
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
    }
    

    But if you really wanted to re-render the image, you'd use the same UIGraphicsBeginImageContextWithOptions syntax I illustrated in point 2, above, enjoying the extra resolution that a retina device provides.

While the above changes will render an image that is a little bigger, it's still considerably smaller than the original image's 3264x2448 pixels. If you really want to create a final image that is 3264x2448 pixels, there are two options that leap out at me:

  1. One option is to make the image view the same size as the image (or more accurately, same size divided by the screen scale). You may want to put this image view on a scroll view and set the zoom scale appropriately.

  2. The other option is to abandon UIKit for the rendering of the images, and shift to Core Image (a major refactoring of your code).
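A variation on the first option is to skip rendering the image view at all and instead crop the full-resolution UIImage directly: scale the bezier path from view coordinates up to image-pixel coordinates, clip a full-size context with it, and draw the image. This is only a sketch under stated assumptions (the path was drawn in the image view's coordinate space, the view uses UIViewContentModeScaleAspectFit, and the letterboxing offset is ignored for brevity); the method name cropImage:withPath:inView: is illustrative, not part of the original code:

    // Illustrative sketch: crop the full-resolution UIImage directly,
    // rather than rendering the (smaller) masked UIImageView.
    - (UIImage *)cropImage:(UIImage *)image
                  withPath:(UIBezierPath *)path
                    inView:(UIImageView *)imageView
    {
        // For aspect-fit, the image is uniformly scaled to fit the view, so
        // the view-to-image scale factor is the larger of the two ratios.
        // (This ignores the letterbox offset; subtract it from the path's
        // origin first if the image does not fill the view exactly.)
        CGFloat scale = MAX(image.size.width  / imageView.bounds.size.width,
                            image.size.height / imageView.bounds.size.height);

        // Scale the path from view coordinates up to image-pixel coordinates.
        UIBezierPath *scaledPath = [path copy];
        [scaledPath applyTransform:CGAffineTransformMakeScale(scale, scale)];

        // Render the full-size image clipped to the scaled path. A scale of 1
        // makes the context's points equal to image pixels, not screen points.
        UIGraphicsBeginImageContextWithOptions(image.size, NO, 1);
        [scaledPath addClip];
        [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
        UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return cropped;
    }

The result is a 3264x2448 image with everything outside the path left transparent, which you can then draw on top of the other full-size image.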

By the way, be aware that images this large will be much slower to work with and consume far more memory: uncompressed, a 3264x2448 RGBA image is roughly 30 MB.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow