Question

I would like to understand how "bytesPerRow" is calculated when building up an NSBitmapImageRep (in my case from mapping an array of floats to a grayscale bitmap).

Clarifying this detail will help me to understand how memory is being mapped from an array of floats to a byte array (0-255, unsigned char; neither of these arrays is shown in the code below).

The Apple documentation says that this number is calculated "from the width of the image, the number of bits per sample, and, if the data is in a meshed configuration, the number of samples per pixel."

I had trouble following this "calculation" so I set up a simple loop to find the results empirically. The following code runs just fine:

int Ny = 1; // Ny is arbitrary; note that bytesPerPlane comes out as expected: Ny * bytesPerRow
for (int Nx = 0; Nx<320; Nx+=64) {
    // greyscale image representation:
    NSBitmapImageRep *dataBitMapRep = [[NSBitmapImageRep alloc]
       initWithBitmapDataPlanes: nil // allocate the pixel buffer for us
       pixelsWide: Nx 
       pixelsHigh: Ny
       bitsPerSample: 8
       samplesPerPixel: 1  
       hasAlpha: NO
       isPlanar: NO 
       colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white
       bytesPerRow: 0  // 0 means "you figure it out"
       bitsPerPixel: 8]; // must be 0 or at least bitsPerSample * samplesPerPixel
    long rowBytes = [dataBitMapRep bytesPerRow];
    printf("Nx = %d; bytes per row = %lu \n",Nx, rowBytes);
}

and produces the result:

Nx = 0; bytes per row = 0 
Nx = 64; bytes per row = 64 
Nx = 128; bytes per row = 128 
Nx = 192; bytes per row = 192 
Nx = 256; bytes per row = 256 

So we see that the bytes per row jumps in 64-byte increments, even when Nx increases by 1 at a time all the way to 320 (I didn't show all of those Nx values). Note also that the Nx = 320 maximum is arbitrary for this discussion.

So, from the perspective of allocating and mapping memory for a byte array, how are the bytes per row calculated from first principles? Is the result above padding so that the data for a single scan line can be aligned on a word-length boundary (64-bit on my MacBook Pro)?

Thanks for any insights; I'm having trouble picturing how this works.

Solution

Passing 0 for bytesPerRow: means more than you said in your comment. From the documentation:

If you pass in a rowBytes value of 0, the bitmap data allocated may be padded to fall on long word or larger boundaries for performance. … Passing in a non-zero value allows you to specify exact row advances.

So you're seeing it increase by 64 bytes at a time because that's how AppKit decided to round it up.
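For what it's worth, your numbers are consistent with AppKit rounding the exact row length up to the next multiple of 64 bytes. Here's a minimal sketch of that model, assuming the 64-byte stride holds (it's observed behavior on your machine, not a documented guarantee, so don't rely on it):

NSInteger exactBytesPerRow  = Nx * 1;  // 1 byte per pixel for 8-bit grayscale
NSInteger paddedBytesPerRow = ((exactBytesPerRow + 63) / 64) * 64; // round up to a multiple of 64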

The minimum requirement for bytes per row is much simpler. It's bytes per pixel times pixels per row. That's all.
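In code, for the 8-bit grayscale configuration in your loop (arithmetic only, not an API call; pixelsWide stands in for your Nx):

NSInteger bitsPerSample   = 8;
NSInteger samplesPerPixel = 1;                                     // grayscale, no alpha
NSInteger bytesPerPixel   = (bitsPerSample * samplesPerPixel) / 8; // = 1
NSInteger minBytesPerRow  = bytesPerPixel * pixelsWide;            // e.g. 320 when pixelsWide = 320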

For a bitmap image rep backed by floats, you'd pass sizeof(float) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(float) * samplesPerPixel. Bytes-per-row follows from that; you multiply bytes-per-pixel by the width in pixels.
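For example, a float-backed grayscale rep could be set up with the longer initializer that takes a bitmapFormat: argument, so AppKit knows the samples are floating point. This is a sketch assuming one sample per pixel; on SDKs before macOS 10.12 the constant is spelled NSFloatingPointSamplesBitmapFormat:

NSInteger Nx = 320, Ny = 240; // example dimensions
NSBitmapImageRep *floatRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: NULL                   // let AppKit allocate the buffer
    pixelsWide: Nx
    pixelsHigh: Ny
    bitsPerSample: sizeof(float) * 8                 // 32 bits per float sample
    samplesPerPixel: 1                               // grayscale
    hasAlpha: NO
    isPlanar: NO
    colorSpaceName: NSCalibratedWhiteColorSpace
    bitmapFormat: NSBitmapFormatFloatingPointSamples // mark the samples as floats
    bytesPerRow: sizeof(float) * 1 * Nx              // exact rows: bytes/pixel * width
    bitsPerPixel: sizeof(float) * 8 * 1];            // bitsPerSample * samplesPerPixel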

Likewise, if it's backed by unsigned bytes, you'd pass sizeof(unsigned char) * 8 for bitsPerSample, and bytes-per-pixel would be sizeof(unsigned char) * samplesPerPixel.
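And to tie it back to the original question, here's a sketch of copying a float array into an 8-bit grayscale rep with an exact bytesPerRow. floatData, Nx, and Ny are placeholders for your own data, and the floats are assumed to already be normalized to 0.0-1.0:

NSBitmapImageRep *byteRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: NULL
    pixelsWide: Nx
    pixelsHigh: Ny
    bitsPerSample: 8
    samplesPerPixel: 1
    hasAlpha: NO
    isPlanar: NO
    colorSpaceName: NSCalibratedWhiteColorSpace
    bytesPerRow: Nx * 1   // exact: 1 byte per pixel, no padding
    bitsPerPixel: 8];

unsigned char *pixels   = [byteRep bitmapData];
NSInteger      rowBytes = [byteRep bytesPerRow]; // equals Nx here, since we specified it
for (NSInteger y = 0; y < Ny; y++) {
    for (NSInteger x = 0; x < Nx; x++) {
        float v = floatData[y * Nx + x];                               // assumed in [0, 1]
        pixels[y * rowBytes + x] = (unsigned char)(v * 255.0f + 0.5f); // scale and round
    }
}

Indexing rows by rowBytes rather than Nx is the habit worth keeping: it stays correct even when the rep is padded, such as when you pass 0 for bytesPerRow:.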
