I am using CIFilter to get a blurred image, but why is the output image always larger than the input image?

StackOverflow https://stackoverflow.com/questions/18779251

The code is as follows:

CIImage *imageToBlur = [CIImage imageWithCGImage: self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName: @"CIGaussianBlur" keysAndValues: kCIInputImageKey, imageToBlur, @"inputRadius", [NSNumber numberWithFloat: 10.0], nil];
CIImage *outputImage = [blurFilter outputImage];
UIImage *resultImage = [UIImage imageWithCIImage: outputImage];

For example, the input image has a size of (640.000000, 1136.000000), but the output image has a size of (700.000000, 1196.000000).

Any advice is appreciated.


Solution

This is a super late answer to your question, but the main problem is that you're thinking of a CIImage as an image. It is not; it is a "recipe" for an image. So, when you apply the blur filter to it, Core Image calculates that to show every last pixel of your blur you would need a larger canvas. That estimated size needed to draw the entire image is called the "extent". In essence, every pixel is getting "fatter", which means the final extent will be bigger than the original canvas. It is up to you to determine which part of the extent is useful to your drawing routine.
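
In your case the radius-10 blur pads the extent by roughly 30 points on each side (700 − 640 = 60 and 1196 − 1136 = 60), which is exactly why (640, 1136) becomes (700, 1196). Below is a minimal sketch, reusing the property names from your question, of one way to crop the blurred output back to the input's extent so the resulting UIImage keeps the original size:

// Build the same blur, then crop its output back to the input image's extent
// before rendering, so the result stays (640, 1136).
CIImage *imageToBlur = [CIImage imageWithCGImage: self.pBackgroundImageView.image.CGImage];
CIFilter *blurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
[blurFilter setValue: imageToBlur forKey: kCIInputImageKey];
[blurFilter setValue: @10.0f forKey: @"inputRadius"];

// The blurred output has a larger extent; crop it to the original rectangle.
CIImage *croppedOutput = [blurFilter.outputImage imageByCroppingToRect: imageToBlur.extent];

// Render through a CIContext so the crop is baked into a concrete bitmap
// instead of wrapping the lazy CIImage directly in a UIImage.
CIContext *context = [CIContext contextWithOptions: nil];
CGImageRef cgImage = [context createCGImage: croppedOutput fromRect: imageToBlur.extent];
UIImage *resultImage = [UIImage imageWithCGImage: cgImage];
CGImageRelease(cgImage);

Creating the CGImage explicitly also gives the UIImage concrete pixel data rather than a lazy CIImage backing, which tends to avoid follow-on surprises when drawing it.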
