A few days ago you asked (nearly) the same question and got no answers. I didn't have the time to answer it then, but now I can write some remarks.
First of all, your question(s) and (most of) the answers and comments show a big misunderstanding of NSImage, NSImageRep and the image stored in the filesystem.
The image stored in the filesystem is a complicated data structure which contains not only all the pixels of an image (if it is a raster image) but also a lot of metadata: comments, dates, information about the camera, thumbnail images, and all this sometimes in different formats: EXIF, Photoshop, XML etc. So you cannot assume that the size of the file tells you anything about the image the computer will depict on the screen or about its properties. To get this data for further usage you can do:
NSData *imgData = [NSData dataWithContentsOfURL:url];
or
NSData *imgData = [NSData dataWithContentsOfFile:[url path]];
or you directly load the image as an object of NSImage:
NSImage *image = [[NSImage alloc] initWithContentsOfURL:url]; // similar methods:see the docs
And if you now think this is the file's image data transformed into a Cocoa structure, you are wrong. An object of the class NSImage is not an image; it is simply a container for zero, one or more image representations. GIF, JPG and PNG images always have only one representation, a TIFF may have one or more, and an icns file has about 5 or 6 image representations.
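To see this container behavior for yourself, here is a minimal sketch that loads a multi-representation icns file and lists what is inside (the icon path is just an example that exists on typical macOS installations; substitute any icns or TIFF of your own):

```objc
#import <Cocoa/Cocoa.h>

int main(void) {
    @autoreleasepool {
        // example path; use your own icns or tiff file here
        NSURL *url = [NSURL fileURLWithPath:
            @"/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/GenericApplicationIcon.icns"];
        NSImage *image = [[NSImage alloc] initWithContentsOfURL:url];
        NSLog(@"number of representations: %lu",
              (unsigned long)[[image representations] count]);
        // one NSImage, several reps with different pixel sizes
        for (NSImageRep *rep in [image representations]) {
            NSLog(@"%@: %ld x %ld pixels",
                  [rep className], (long)[rep pixelsWide], (long)[rep pixelsHigh]);
        }
    }
    return 0;
}
```

For an icns file this typically prints several NSBitmapImageRep lines (16×16, 32×32, 128×128, …), while a PNG would print exactly one.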
Now we want some information about the image representations:
for( NSUInteger i=0; i<[[image representations] count]; i++ ){
    // let us assume we have an NSBitmapImageRep
    NSBitmapImageRep *rep = (NSBitmapImageRep *)[[image representations] objectAtIndex:i];
    // get information about this rep
    NSUInteger pixelsX = [rep pixelsWide];
    NSUInteger pixelsY = [rep pixelsHigh];
    CGFloat sizeX = [rep size].width;
    CGFloat sizeY = [rep size].height;
    CGFloat resolutionX = 72.0*pixelsX/sizeX;
    CGFloat resolutionY = 72.0*pixelsY/sizeY;
    // test if there are padding bits per pixel
    if( [rep bitsPerSample]>=8 ){
        NSInteger paddingBits = [rep bitsPerPixel] - [rep bitsPerSample]*[rep samplesPerPixel];
    }
    // test if there are padding bytes per row
    NSInteger paddingBytes = [rep bytesPerRow] - ([rep bitsPerPixel]*[rep pixelsWide]+7)/8;
    NSUInteger bitmapSize = [rep bytesPerRow] * [rep pixelsHigh];
}
Another remark: you said:
I scanned an image (.tiff) with Macintosh millions of colors which
means 24 bits per pixel.
No, that need not be so. If a pixel has only three components it may use not 24 but sometimes 32 bits per pixel because of alignment and optimization rules. Ask the rep. It will tell you the truth. And ask for the bitmapFormat! (Details in the docs.)
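As a sketch, assuming `rep` is the NSBitmapImageRep of your scanned TIFF, this is how you ask the rep whether the pixels are really 24 bits or padded to 32, and what the bitmapFormat says about the layout:

```objc
// bits actually carrying sample data, e.g. 8 bits * 3 samples = 24
NSInteger usedBits = [rep bitsPerSample] * [rep samplesPerPixel];
if ([rep bitsPerPixel] > usedBits) {
    NSLog(@"pixel is padded: %ld of %ld bits are unused",
          (long)([rep bitsPerPixel] - usedBits), (long)[rep bitsPerPixel]);
}
// bitmapFormat tells you, among other things, where an alpha channel
// sits and whether the samples are floating point
if ([rep bitmapFormat] & NSAlphaFirstBitmapFormat) {
    NSLog(@"alpha component comes first");
}
if ([rep bitmapFormat] & NSFloatingPointSamplesBitmapFormat) {
    NSLog(@"samples are floating point");
}
```

A scanner driver that stores RGB in a 32-bit pixel will report bitsPerPixel == 32 and samplesPerPixel == 3 here, which is exactly the padded case described above.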
Finally: you need not use the CG-functions. NSImage and NSImageRep do it all.