Question

It can be done by mallocing a temporary bitmap with 32 bits per pixel, clearing the alpha component with a for loop, and finally turning it back into an NSImage.
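Spelled out, that brute-force route would look something like this (just a sketch: it assumes the rep comes back as an 8-bit-per-sample, interleaved RGBA bitmap, and img is the source image from the snippet further down):

   // Brute-force sketch: pull the pixels into a bitmap rep, walk every pixel
   // and stomp on its alpha sample, then wrap the rep in a new NSImage.
   // Assumes an 8-bit-per-sample, interleaved RGBA rep (samplesPerPixel == 4).
   NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[img TIFFRepresentation]];
   unsigned char *data = [rep bitmapData];
   NSInteger bytesPerRow = [rep bytesPerRow];
   NSInteger samples = [rep samplesPerPixel];
   for (NSInteger y = 0; y < [rep pixelsHigh]; y++) {
       unsigned char *pixel = data + y * bytesPerRow;
       for (NSInteger x = 0; x < [rep pixelsWide]; x++, pixel += samples) {
           pixel[3] = 0;   // alpha sample; a premultiplied rep would need the
                           // colour samples zeroed as well
       }
   }
   NSImage *result = [[NSImage alloc] initWithSize:[rep size]];
   [result addRepresentation:rep];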

I suspect it can be done in a simpler way using a clever combination of NSColor and NSCompositingOperation. Or perhaps the image needs to be composited with itself using drawAtPoint.

My code looks like this:

   NSImage *img = ...;   // some image with RGB and alpha
   NSRect rect = ...;    // some rect inside the image
   [img lockFocus];
   [[NSColor clearColor] set];
   NSRectFillUsingOperation(rect, NSCompositeXOR);
   [img unlockFocus];

NOTE: Setting the alpha channel to 1 can be done by filling with blackColor using NSCompositePlusLighter.
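For reference, that trick amounts to something like the following (same hypothetical img and rect as above):

   [img lockFocus];
   [[NSColor blackColor] set];
   // Adding opaque black leaves the colour samples alone but pushes
   // the alpha channel up to 1.
   NSRectFillUsingOperation(rect, NSCompositePlusLighter);
   [img unlockFocus];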

What is the secret to clearing the alpha channel?


Solution

It won't be fast, but this will work as well:

   NSImage *newImage = [[NSImage alloc] initWithSize:[srcImage size]];
   [newImage lockFocus];
   // Paint an opaque white backdrop over the whole new image...
   [[NSColor whiteColor] set];
   NSRectFill(NSMakeRect(0, 0, [newImage size].width, [newImage size].height));
   // ...then draw srcImage into it.
   [srcImage compositeToPoint:NSZeroPoint operation:NSCompositeCopy];
   [newImage unlockFocus];
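compositeToPoint:operation: has since been deprecated; on current SDKs the same flattening idea can be written along these lines (an adaptation of the code above, not the original answer):

   NSImage *flattened = [[NSImage alloc] initWithSize:[srcImage size]];
   [flattened lockFocus];
   // Opaque white backdrop first...
   [[NSColor whiteColor] set];
   NSRectFill(NSMakeRect(0, 0, [flattened size].width, [flattened size].height));
   // ...then draw the source over it; source-over against an opaque backdrop
   // leaves every pixel with alpha 1.
   [srcImage drawAtPoint:NSZeroPoint
                fromRect:NSZeroRect            // NSZeroRect means the whole image
               operation:NSCompositeSourceOver
                fraction:1.0];
   [flattened unlockFocus];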

OTHER TIPS

(1) Please read the AppKit release notes on the subject of image mutability. NSImage should basically be treated as immutable.

(2) All of the pixel formats supported in graphics contexts have premultiplied alpha. If the alpha channel is zero, the other channels have to be zero too.
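As a toy illustration of that constraint (a hypothetical helper, not part of AppKit):

   // Premultiplied storage keeps (r*a, g*a, b*a, a); with a == 0 every
   // stored sample collapses to zero.
   static void premultiply(CGFloat r, CGFloat g, CGFloat b, CGFloat a,
                           CGFloat stored[4]) {
       stored[0] = r * a;
       stored[1] = g * a;
       stored[2] = b * a;
       stored[3] = a;
   }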

Licensed under: CC-BY-SA with attribution