Question

I am doing some image processing work which requires floating point grayscale image data. The method -imagePlanarFData: below is presently how I'm extracting this data from an input CIImage:

- (NSData *)byteswapPlanarFData:(NSData *)data
                    swapInPlace:(BOOL)swapInPlace
{
    NSData * outputData = swapInPlace ? data : [data mutableCopy];
    // Cast away const: when swapInPlace is YES we deliberately write
    // through the caller's buffer.
    int32_t * image = (int32_t *)[outputData bytes];
    size_t length = [outputData length] / sizeof(*image);
    for (size_t i = 0; i < length; i++) {
        image[i] = OSSwapBigToHostInt32(image[i]);
    }
    return outputData;
}

- (NSData *)imagePlanarFData:(CIImage *)processedImage
{
    NSSize size = [processedImage extent].size;
    if (size.width == 0) {
        return nil;
    }
    dispatch_once(&_onceToken, ^ {
        _colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericGray);
        _bytesPerRow = size.width * sizeof(float);
        _cgx = CGBitmapContextCreate(NULL, size.width, size.height, 32, _bytesPerRow, _colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents);
        // Work-around for CIImage drawing EXC_BAD_ACCESS when running with Guard Malloc;
        // see <http://stackoverflow.com/questions/11689233/ciimage-drawing-exc-bad-access>
        NSDictionary * options = nil;
        if (getenv("MallocStackLogging") || getenv("MallocStackLoggingNoCompact")) {
            NSLog(@"Forcing CIImageContext to use software rendering; see %@",
                  @"<http://stackoverflow.com/questions/11689233/ciimage-drawing-exc-bad-access>");
            options = @{ kCIContextUseSoftwareRenderer: @YES };
        }
        _cix = [CIContext contextWithCGContext:_cgx
                                       options:options];
        _rect = CGRectMake(0, 0, size.width, size.height);
    });
    float * data = CGBitmapContextGetData(_cgx);
    CGContextClearRect(_cgx, _rect);
    [_cix drawImage:processedImage
             inRect:_rect
           fromRect:_rect];
    NSData * pixelData = [NSData dataWithBytesNoCopy:data
                                              length:_bytesPerRow * size.height
                                        freeWhenDone:NO];
    // For whatever bizarre reason, CoreGraphics uses big-endian floats (!)
    return [self byteswapPlanarFData:pixelData swapInPlace:NO];
}

As mentioned in the comment, I was quite surprised to see that the float pixel data came out of the CGBitmapContext in big-endian order. (I determined this only by trial and error.) Thus the additional method -byteswapPlanarFData:swapInPlace: was introduced, and all seemed well with the world... for a time.

But now I would like to render this processed data back into a CGImage.

Previously my code took the float * data buffer extracted above and used it directly to render a new CGImage and then wrap it up in an NSImage, like this:

CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * size.height, NULL);
CGImageRef renderedImage = CGImageCreate(size.width, size.height, 32, 32, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents, provider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
NSImage * image = [[NSImage alloc] initWithCGImage:renderedImage
                                              size:size];
CGImageRelease(renderedImage);

So now I am doing this:

pixelData = [mumble imagePlanarFData:processedImage];
// Swap data back to how it was before...
pixelData = [mumble byteswapPlanarFData:pixelData
                            swapInPlace:YES];
float * data = (float *)[pixelData bytes];
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, data, bytesPerRow * size.height, NULL);
CGImageRef renderedImage = CGImageCreate(size.width, size.height, 32, 32, bytesPerRow, colorSpace, kCGImageAlphaNone | kCGBitmapFloatComponents, provider, NULL, false, kCGRenderingIntentDefault);
CGDataProviderRelease(provider);
NSImage * image = [[NSImage alloc] initWithCGImage:renderedImage
                                              size:size];
CGImageRelease(renderedImage);

The above does not work. Instead I get a corrupted image and a stream of complaints from CoreGraphics:

Mar 21 05:56:46 aoide.local LabCam[34235] <Error>: CMMConvLut::ConvertFloat 1 input (inf)
Mar 21 05:56:46 aoide.local LabCam[34235] <Error>: ApplySequenceToBitmap failed (-171)
Mar 21 05:56:46 aoide.local LabCam[34235] <Error>: ColorSyncTransformConvert - failed width = 160 height = 199 dstDepth = 7 dstLayout = 0 dstBytesPerRow = 1920 srcDepth = 7 srcLayout = 0 srcBytesPerRow = 640

Where have I gone wrong?


Solution

Ahh, found the problem... It turns out there was a left-over routine, from before I added the byte-swapping call to -imagePlanarFData:, that was also swapping the bytes in place...

I would still like to hear some explanation for why CoreGraphics expects these values in big-endian byte order on our little-endian platform.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow