How to deal with texture distortion caused by “squaring” textures, and the interactions with mipmapping?

StackOverflow https://stackoverflow.com/questions/9878997

Question

Suppose I have a texture which is naturally not square (for example, a photographic texture of something with a 4:1 aspect ratio). And suppose that I want to use PVRTC compression to display this texture on an iOS device, where PVRTC requires that the texture be square. If I scale up the texture so that it is square during compression, the result is a very blurry image when the texture is viewed from a distance.

I believe that this is caused by mipmapping. Since the mipmap filter sees the new, larger stretched dimension, it uses that to choose a low mip level, which is actually not correct, since those pixels were merely stretched to that size. If it looked at the other dimension, it would choose a higher-resolution mip level.
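To make the mechanism concrete: hardware picks the mip level from (roughly) the log2 of the larger of the two screen-space texel footprints, so the stretched axis always wins. Here is a minimal sketch of that selection rule with made-up footprint numbers; the real formula in the GL spec uses screen-space derivatives of the texel coordinates, so this is only an approximation:

```python
import math

def mip_level(texels_per_pixel_u, texels_per_pixel_v):
    # Roughly what the hardware does: lambda = log2 of the larger footprint.
    rho = max(texels_per_pixel_u, texels_per_pixel_v)
    return max(0.0, math.log2(rho))

# Original 4N x N texture at some distance: 2 texels per pixel each way.
print(mip_level(2.0, 2.0))  # -> 1.0

# Same view after stretching V by 4x to square the texture: the V
# footprint quadruples, and max() drags the lookup two levels blurrier.
print(mip_level(2.0, 8.0))  # -> 3.0
```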

This theory is confirmed (somewhat) by the observation that if I leave the texture in a format that doesn't have to be square, the mipmap versions look just dandy.

There is a LOD bias parameter, but the docs say it is applied to both dimensions. It seems like what is called for is a way to bias the LOD in only one dimension (that is, to bias it toward more resolution in the dimension of the texture which was scaled up).

Other than chopping up the geometry to allow the use of square subsets of the original texture (which is infeasible, given our production pipeline), does anyone have any clever hacks they've used to deal with this issue?

Solution

It seems to me that you have a few options, depending on what you can do with, say, the vertex UVs.

[Hmm, just realised that in the following I'm assuming that the V coordinates run from the top to the bottom... you'll need to allow for me being old school :-) ]

The first thing that comes to mind is to take your 4N*N (X*Y) source texture and repeat it 4x vertically to give a 4N*4N texture, and then adjust the V coordinates on the model to be 1/4 of their current values. This won't save you much in terms of memory (since it effectively makes a 4bpp PVRTC texture 4x larger), but it will still save bandwidth and cache space, since the other copies of the texture won't be accessed. MIP mapping will also work all the way down to 1x1 textures.
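In case it helps, a minimal sketch of that tiling step, assuming Pillow is available; the function and path names here are just placeholders, not part of any pipeline:

```python
from PIL import Image  # assumption: Pillow is installed

def repeat_vertically(src_path, dst_path, copies=4):
    """Tile a 4N x N strip 'copies' times vertically to make it square,
    ready to be fed to the PVRTC compressor."""
    src = Image.open(src_path)              # e.g. 4N x N
    w, h = src.size
    out = Image.new(src.mode, (w, h * copies))
    for i in range(copies):
        out.paste(src, (0, i * h))          # stack identical copies
    out.save(dst_path)

def remap_v(v, copies=4):
    """Model-side fixup: shrink V so the UVs address a single copy."""
    return v / copies
```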

Alternatively, if you want to save a bit of space and you have a pair of 4N*N textures, you could try packing them together into a "sort of" 4N*4N atlas. Put the first texture in the top N rows, followed by a repeat of its own top N/2 rows. Then pack the bottom N/2 rows of the second texture, followed by the second texture itself, and then its top N/2 rows. Finally, fill the last N/2 rows with the bottom N/2 rows of the first texture. For the UVs that access the first texture, do the same divide-by-4 on the V parameter; for the second texture, you'll need to divide by 4 and add 0.5. This should work fine until the MIP map level is so small that the two textures are being blended together... but I doubt that will really be an issue.
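Since the row bookkeeping here is fiddly, a sketch of that exact layout, again assuming Pillow; pack_pair and the variable names are mine, nothing standard:

```python
from PIL import Image  # assumption: Pillow is installed

def pack_pair(tex_a, tex_b):
    """Pack two 4N x N textures into a 4N x 4N atlas with N/2-row
    wrap-around borders, matching the row layout described above."""
    w, h = tex_a.size                        # expects w == 4 * h
    half = h // 2
    atlas = Image.new(tex_a.mode, (w, 4 * h))
    y = 0
    atlas.paste(tex_a, (0, y)); y += h                               # texture A
    atlas.paste(tex_a.crop((0, 0, w, half)), (0, y)); y += half      # A's top rows
    atlas.paste(tex_b.crop((0, h - half, w, h)), (0, y)); y += half  # B's bottom rows
    atlas.paste(tex_b, (0, y)); y += h                               # texture B
    atlas.paste(tex_b.crop((0, 0, w, half)), (0, y)); y += half      # B's top rows
    atlas.paste(tex_a.crop((0, h - half, w, h)), (0, y))             # A's bottom rows
    return atlas

# UV fixup: v/4 addresses texture A; v/4 + 0.5 addresses texture B.
```

The border rows replicate what vertical wrapping would fetch, so bilinear and MIP filtering near each texture's edges pull in that texture's own texels rather than its neighbour's.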

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow