Question

I scanned an image (.tiff) with Macintosh millions of colors, which means 24 bits per pixel. The scanned image I am getting has the following attributes: size = 330 KB and dimensions = 348 * 580 pixels. Since there are 24 bits (3 bytes) per pixel, the size should actually be 348 * 580 * 3 = 605,520 bytes, i.e. about 605 KB.

Is something incorrect here? I also used this code to extract the raw image data from the URL of the scanned image:

NSString *urlName = [url path];
NSImage *image = [[NSImage alloc] initWithContentsOfFile:urlName];
// TIFFRepresentation re-encodes the image; it is not the raw file data
NSData *imageData = [image TIFFRepresentation];
// use a __bridge cast so ARC keeps ownership of imageData (CFBridgingRetain would leak here)
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)imageData, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
NSUInteger numberOfBitsPerPixel = CGImageGetBitsPerPixel(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
NSUInteger width = CGImageGetWidth(imageRef);
CGImageRelease(imageRef);
CFRelease(source);

From this code too, I get the same information about the width, height, and number of bits per pixel of the image.

Basically, I have to use this image's information to reproduce it somewhere else in another form, so if I can't get the correct information, the final product won't be reproducible. What could be wrong here?

P.S.: If any other information is needed to answer the question, I'll be happy to provide it.

Was it helpful?

Solution

The most common image formats (JPEG, PNG, and TIFF) compress the image data, which is why the file size is smaller than width * height * bytes per pixel.

JPEG uses lossy compression, PNG uses lossless compression, and TIFF can be uncompressed or use either lossy or lossless compression.
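Since a TIFF can be stored with any of these compressions, you can ask AppKit which method your scanned file actually uses. A minimal sketch, assuming a hypothetical tiffURL pointing at the scanned file:

// Minimal sketch: ask the NSBitmapImageRep which TIFF compression the file uses.
// tiffURL is a hypothetical NSURL pointing at the scanned .tiff file.
NSData *tiffData = [NSData dataWithContentsOfURL:tiffURL];
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:tiffData];
NSNumber *method = [rep valueForProperty:NSImageCompressionMethod];
// NSTIFFCompressionNone = 1, NSTIFFCompressionLZW = 5, NSTIFFCompressionPackBits = 32773, ...
NSLog(@"TIFF compression method: %@", method);

If the reported method is anything other than NSTIFFCompressionNone, the file on disk is compressed, which already explains why 330 KB is smaller than the uncompressed 605 KB.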

Lossy compression means that some color information is lost during compression, so the image won't look exactly like it did before compression, but it allows the file size to be reduced even further.

An example of lossless compression is run-length encoding, which basically means that if you have several consecutive pixels with the same color, you just say that you have N pixels with value (R,G,B) instead of saying (R,G,B),(R,G,B),...,(R,G,B).
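To make the idea concrete, here is a small illustrative sketch of such a run-length encoder for a flat buffer of 24-bit RGB pixels. The buffer layout and the function name are assumptions for illustration only; real TIFF codecs are more elaborate.

#import <Foundation/Foundation.h>
#include <string.h>

// Encode a buffer of 24-bit RGB pixels as runs of (count, R, G, B).
// Identical consecutive pixels collapse into 4 bytes instead of 3 * count bytes.
static NSData *runLengthEncodeRGB(const uint8_t *pixels, NSUInteger pixelCount) {
    NSMutableData *encoded = [NSMutableData data];
    NSUInteger i = 0;
    while (i < pixelCount) {
        const uint8_t *current = pixels + 3 * i;
        uint8_t runLength = 1;
        // extend the run while the next pixel has the same (R,G,B) value
        while (i + runLength < pixelCount && runLength < 255 &&
               memcmp(current, pixels + 3 * (i + runLength), 3) == 0) {
            runLength++;
        }
        [encoded appendBytes:&runLength length:1];   // N
        [encoded appendBytes:current length:3];      // (R,G,B)
        i += runLength;
    }
    return encoded;
}

Real TIFF codecs such as PackBits follow the same idea, just with a more compact encoding of runs and literal sequences.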

Other tips

A few days ago you asked (nearly) the same question and got no answers. I did not have the time to answer then, but now it is time to write some remarks.

First of all, your question(s) and (most of) the answers and comments show a big misunderstanding of NSImage, NSImageRep, and the image stored in the filesystem.

The image stored in the filesystem is a complicated data structure which contains not only all the pixels of the image (if it is a raster image) but also a lot of metadata: comments, dates, information about the camera, thumbnail images, and sometimes all of this in different formats: EXIF, Photoshop, XML, etc. So you cannot assume that the size of the file has anything to do with the image the computer depicts on the screen or queries for special properties. To get this data for further use you can do:

NSData *imgData = [NSData dataWithContentsOfURL:url];

or

NSData *imgData = [NSData dataWithContentsOfFile:[url path]];

or you can directly load the image as an NSImage object:

NSImage *image = [[NSImage alloc] initWithContentsOfURL:url];  // similar methods: see the docs

And if you now think this is the file's image data transformed into a Cocoa structure, you are wrong. An object of the class NSImage is not an image; it is simply a container for zero, one, or more image representations. GIF, JPG, and PNG images always have only one representation, a TIFF may have one or more, and an ICNS has about 5 or 6 image representations.

Now we want some information about the image representations:

for( NSUInteger i=0; i<[[image representations] count]; i++ ){
   // let us assume this representation is an NSBitmapImageRep
   NSBitmapImageRep *rep = (NSBitmapImageRep *)[[image representations] objectAtIndex:i];
   // get information about this rep
   NSUInteger pixelX = [rep pixelsWide];
   NSUInteger pixelY = [rep pixelsHigh];
   CGFloat sizeX = [rep size].width;
   CGFloat sizeY = [rep size].height;
   CGFloat resolutionX = 72.0*pixelX/sizeX;
   CGFloat resolutionY = 72.0*pixelY/sizeY;

   // test if there are padding bits per pixel
   NSInteger paddingBits = 0;
   if( [rep bitsPerSample]>=8 ){
       paddingBits = [rep bitsPerPixel] - [rep bitsPerSample]*[rep samplesPerPixel];
   }

   // test if there are padding bytes per row
   NSInteger paddingBytes = [rep bytesPerRow] - ([rep bitsPerPixel]*[rep pixelsWide]+7)/8;

   NSUInteger bitmapSize =  [rep bytesPerRow] * [rep pixelsHigh];
}
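These numbers already explain the size discrepancy from the question: bytesPerRow * pixelsHigh is the uncompressed in-memory bitmap size, which you can compare with the size of the file on disk (which is compressed and carries metadata). A minimal sketch, assuming bitmapSize from the loop above and a hypothetical fileURL for the scanned TIFF:

NSUInteger fileSize = [[NSData dataWithContentsOfURL:fileURL] length];  // size of the (compressed) file on disk
NSLog(@"uncompressed bitmap: %lu bytes, file on disk: %lu bytes",
      (unsigned long)bitmapSize, (unsigned long)fileSize);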

Another remark: you said:

I scanned an image (.tiff) with Macintosh millions of colors which means 24 bits per pixel.

No, that need not be so. Even if a pixel has only three components, it may use not 24 but sometimes 32 bits because of alignment and optimization rules. Ask the rep; it will tell you the truth. And ask for the bitmapFormat! (Details are in the docs.)
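A minimal sketch, reusing the rep variable from the loop above, of asking the rep directly:

// bitsPerPixel may be 32 even for an RGB image (8 padding bits per pixel)
NSBitmapFormat format = [rep bitmapFormat];
NSLog(@"bitsPerPixel=%ld samplesPerPixel=%ld hasAlpha=%d alphaFirst=%d",
      (long)[rep bitsPerPixel], (long)[rep samplesPerPixel],
      [rep hasAlpha], (int)((format & NSAlphaFirstBitmapFormat) != 0));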

Finally: you need not use the CG functions. NSImage and NSImageRep do it all.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow