Question

Imagine that I have an image in memory represented by an array or an std::vector; for the sake of this example, assume the image is 400x300 pixels and I want to subdivide this structure into squares (or tiles) that are at most 64x64 pixels.

The array I'm considering is declared like this:

int a[400*300];

and not like:

int a[400][300];

It's one nice contiguous chunk of memory.

My point is that you always try to keep the data structure, and the access to it, as linear as possible. Subdividing the image into squares involves jumping from one row to another or from one column to another, depending on how the image is laid out in memory. I have no problem computing the boundaries of the squares given a tile size and the image dimensions, but expressing the iteration over these squares gets a little too complicated, and I don't see any real benefit to this approach.

So why is this kind of tiled subdivision so popular? Why not just process one row at a time, or one column at a time?


Solution

Memory locality / cache coherency. Most image-processing operations work in 2D, and for efficient memory access you want pixels that are close to each other in 2D to also be close to each other in memory. Arranging the data in blocks like this means that two pixels with the same x coordinate and adjacent y coordinates will, on average, have closer memory addresses than with a simple linear (row-major) layout.

There are more complex ways of laying out the image, such as Morton (Z-order) swizzling, that are often used for textures rendered by GPUs and give even better memory locality on average.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow