I'm currently running code on the CPU that sums the columns and rows of a greyscale NSImage (i.e. only 1 sample per pixel). I thought I would try to move the code to the GPU (if possible). I found the Core Image filters CIRowAverage and CIColumnAverage, which seem similar.

In the Apple Docs on writing custom Core Image filters they state,

Keep in mind that your code can’t accumulate knowledge from pixel to pixel. A good strategy when writing your code is to move as much invariant calculation as possible from the actual kernel and place it in the Objective-C portion of the filter.

This hints that one may not be able to accumulate a sum of pixels in a filter kernel. If so, how do the above filters manage to compute an average over a region?

So my question is: what is the best way to implement summing the rows or columns of an image to get the total pixel values? Should I stick to the CPU?


Solution

The Core Image filters perform this averaging through a series of reductions. A former engineer on the team describes how this was done for the CIAreaAverage filter within this GPU Gems chapter (under section 26.2.2 "Finding the Centroid").

I talk about a similar averaging by reduction in my answer here. I needed this capability on iOS, so I wrote a fragment shader that reduced the image by a factor of four in both horizontal and vertical dimensions, sampling between pixels in order to average sixteen pixels into one at each step. Once the image was reduced to a small enough size, the remaining pixels were read out and averaged to produce a single final value.
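For illustration, here is a minimal sketch of one reduction pass written as a custom Core Image kernel rather than the OpenGL ES fragment shader I used; to keep the kernel short it only reduces by 2×2 (four pixels into one) per pass instead of 4×4. The kernel string, the ReduceOnce helper name, and the assumption that the input extent has its origin at (0, 0) are all illustrative, not from the original code:

    // One pass of a 2x2 reduction: each output pixel is the average of a
    // 2x2 block of input pixels. Applying this repeatedly shrinks the image
    // until it is small enough to read back and finish on the CPU.
    static NSString *const kReduceKernelSource = @""
        "kernel vec4 reduce2x2(sampler src)\n"
        "{\n"
        "    vec2 d = destCoord();           // centre of the output pixel (working space)\n"
        "    vec2 s = d * 2.0 - vec2(0.5);   // centre of the top-left source pixel\n"
        "    vec4 sum = sample(src, samplerTransform(src, s))\n"
        "             + sample(src, samplerTransform(src, s + vec2(1.0, 0.0)))\n"
        "             + sample(src, samplerTransform(src, s + vec2(0.0, 1.0)))\n"
        "             + sample(src, samplerTransform(src, s + vec2(1.0, 1.0)));\n"
        "    return sum * 0.25;\n"
        "}\n";

    // Assumes the input extent starts at (0, 0).
    CIImage *ReduceOnce(CIImage *input)
    {
        CIKernel *kernel = [CIKernel kernelWithString:kReduceKernelSource];
        CGRect inExtent  = [input extent];
        CGRect outExtent = CGRectMake(0.0, 0.0,
                                      inExtent.size.width / 2.0,
                                      inExtent.size.height / 2.0);

        return [kernel applyWithExtent:outExtent
                           roiCallback:^CGRect(int index, CGRect destRect) {
                               // Each output rect reads from the 2x-larger source rect
                               // (adding a pixel of padding would be safer in production).
                               return CGRectMake(destRect.origin.x * 2.0, destRect.origin.y * 2.0,
                                                 destRect.size.width * 2.0, destRect.size.height * 2.0);
                           }
                             arguments:@[input]];
    }

Each call to ReduceOnce halves the dimensions; once the extent is down to a handful of pixels, you read them out and finish the sum or average on the CPU.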

This kind of reduction is still very fast to perform on the GPU, and I was able to extract an average color from a 640x480 video frame in ~6 ms on an iPhone 4. You'll of course have a lot more horsepower to play with on a Mac.

You could take a similar approach here by reducing in only one direction (rows or columns) at each step. If you are interested in obtaining a sum of the pixel values, you'll need to watch out for precision limits in the pixel formats used on the GPU. By default, RGBA color values are stored as 8-bit values, but OpenGL (ES) extensions on certain GPUs can give you the ability to render into 16-bit or even 32-bit floating point textures, which extends your dynamic range. I'm not sure, but I believe that Core Image lets you use 32-bit float components on the Mac.
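To make that concrete for the row case, here is a hedged sketch (my illustration, not tested code) that applies CIRowAverage, reads the result back as 32-bit floats, and multiplies each average by the row width to recover a per-row sum. The buffer handling and the assumption that the grey value can be read from the red channel are mine:

    // Sketch: per-row sums via CIRowAverage plus a float readback.
    // Assumes `image` is the greyscale CIImage and `context` is a CIContext
    // created with kCIContextWorkingColorSpace set to [NSNull null].
    CGRect extent = [image extent];
    CIImage *rowAverages =
        [[CIFilter filterWithName:@"CIRowAverage"
                    keysAndValues:kCIInputImageKey, image,
                                  @"inputExtent", [CIVector vectorWithCGRect:extent],
                                  nil] outputImage];

    // The filter returns a 1D strip with one averaged pixel per row of the extent.
    CGRect outExtent   = [rowAverages extent];
    size_t pixelCount  = (size_t)(outExtent.size.width * outExtent.size.height);
    size_t rowBytes    = (size_t)outExtent.size.width * 4 * sizeof(float); // RGBAf
    float *averages    = calloc(pixelCount * 4, sizeof(float));

    [context render:rowAverages
           toBitmap:averages
           rowBytes:rowBytes
             bounds:outExtent
             format:kCIFormatRGBAf   // float output avoids clipping to 8 bits
         colorSpace:nil];

    // Sum of a row = average of the row * number of pixels in the row.
    // Values are normalized to 0-1, so scale by 255 if you want byte-value sums.
    for (size_t i = 0; i < pixelCount; i++) {
        float rowSum = averages[i * 4] * extent.size.width; // grey value in the red channel
        NSLog(@"row %zu sum: %f", i, rowSum);
    }
    free(averages);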

Other tips

FYI on the CIAreaAverage filter: you can use it like this:

    // Average the full extent of the input image down to a single pixel.
    CGRect inputExtent = [self.inputImage extent];
    CIVector *extent = [CIVector vectorWithX:inputExtent.origin.x
                                           Y:inputExtent.origin.y
                                           Z:inputExtent.size.width
                                           W:inputExtent.size.height];
    CIImage *inputAverage = [CIFilter filterWithName:@"CIAreaAverage"
                                       keysAndValues:@"inputImage", self.inputImage,
                                                     @"inputExtent", extent, nil].outputImage;

    //CIImage* inputAverage = [self.inputImage imageByApplyingFilter:@"CIAreaMinimum" withInputParameters:@{@"inputImage" : inputImage, @"inputExtent" : extent}];

    // Render with a null working color space so the values are not color-matched.
    EAGLContext *myEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    NSDictionary *options = @{ kCIContextWorkingColorSpace : [NSNull null] };
    CIContext *myContext = [CIContext contextWithEAGLContext:myEAGLContext options:options];

    size_t rowBytes = 32;            // more than enough for the single RGBA8 output pixel (4 bytes)
    uint8_t byteBuffer[rowBytes];    // buffer to render the 1x1 result into

    [myContext render:inputAverage
             toBitmap:byteBuffer
             rowBytes:rowBytes
               bounds:[inputAverage extent]
               format:kCIFormatRGBA8
           colorSpace:nil];

    // The average color arrives as a single RGBA8 pixel.
    const uint8_t *pixel = &byteBuffer[0];
    float red   = pixel[0] / 255.0;
    float green = pixel[1] / 255.0;
    float blue  = pixel[2] / 255.0;
    NSLog(@"%f, %f, %f\n", red, green, blue);

Your output should look something like this:

    2015-05-23 15:58:20.935 CIFunHouse[2400:489913] 0.752941, 0.858824, 0.890196
    2015-05-23 15:58:20.981 CIFunHouse[2400:489913] 0.752941, 0.858824, 0.890196
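And since the original question was about sums rather than averages: once you have the area average, one option (a sketch, not part of the original answer) is to scale it back up by the number of pixels covered by inputExtent:

    // Sketch: recover totals from the area average. Precision is limited by
    // the 8-bit readback above; use a float format for large extents.
    double pixelCount = inputExtent.size.width * inputExtent.size.height;
    double redSum   = red   * pixelCount;
    double greenSum = green * pixelCount;
    double blueSum  = blue  * pixelCount;
    NSLog(@"sums: %f, %f, %f", redSum, greenSum, blueSum);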