Question

I would like to manually load NPOT PVR textures on iOS (I don't mean loading the texture into OpenGL memory; I mean loading it into a custom data structure that allows accessing the image data in order to enlarge the canvas and save it again as a new PVR).

I am asking this because we are implementing an OpenGL iOS application with lots of background textures (480 x 320 on non-retina displays). We intend to store these textures as NPOT PVR files, so we get memory savings on iPhone 3GS and later.

iPhones older than the 3GS cannot load NPOT PVR textures. My intention is to implement a preprocessing step for these older devices that converts all NPOT PVR textures to POT PVRs and stores them in the app's Caches folder.

Is it possible to load and process a PVR in this way (for example, a TexturePacker-generated PVR)?


Solution

Sure it is.

  1. Decompress the PVR file's texel data to raw pixels.
  2. Scale (or pad) the raw image data.
  3. Compress the new data and save it as a PVR.

For step one, see https://github.com/Volcore/quickpvr or PowerVR's SDK.
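Before decompressing anything, you first have to locate the texel data inside the file. As a minimal sketch, here is one way to read the legacy PVR v2 header (the 52-byte layout used by Apple's PVRTextureLoader sample and by TexturePacker's .pvr output); the function and field names are illustrative, not part of any official API:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Legacy PVR v2 header (52 bytes). All fields are little-endian. */
typedef struct {
    uint32_t headerLength;   /* size of this header, always 52 */
    uint32_t height;
    uint32_t width;
    uint32_t numMipmaps;
    uint32_t flags;          /* low byte encodes the pixel format */
    uint32_t dataLength;     /* size of the texel data that follows */
    uint32_t bpp;
    uint32_t bitmaskRed;
    uint32_t bitmaskGreen;
    uint32_t bitmaskBlue;
    uint32_t bitmaskAlpha;
    uint32_t pvrTag;         /* the characters "PVR!" */
    uint32_t numSurfs;
} PVRTexHeader;

#define PVR_TAG 0x21525650u  /* "PVR!" read as a little-endian uint32 */

/* Returns a pointer to the texel data if the buffer looks like a v2 PVR,
   or NULL otherwise. Width and height are written to *w and *h. */
static const uint8_t *pvr_v2_payload(const uint8_t *buf, size_t len,
                                     uint32_t *w, uint32_t *h)
{
    PVRTexHeader hdr;
    if (len < sizeof hdr) return NULL;
    memcpy(&hdr, buf, sizeof hdr);   /* avoids unaligned access */
    if (hdr.pvrTag != PVR_TAG || hdr.headerLength != sizeof hdr) return NULL;
    if (len < sizeof hdr + hdr.dataLength) return NULL;
    *w = hdr.width;
    *h = hdr.height;
    return buf + sizeof hdr;
}
```

Once you have the payload pointer and dimensions, the actual PVRTC decoding is what quickpvr and the PowerVR SDK provide.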

For step three, there is no publicly available documentation or implementation of PVRTC compression. So you have a few options: use a combination of class-dump and otx to decompile Apple's texturetool (not advisable), or (probably easier) write your own encoder based on whatever understanding of the format you can piece together in step one.

However, given your situation I would just do this all as a preprocessed step and include two versions of your textures along with your app.

OTHER TIPS

If you don't need the texture to automatically repeat/wrap and are just going to use your 480 x 320 texels as a subset of, say, a 512x512 texture, never accessing the parts where x > 480 or y > 320, then you could rearrange the data.

One suggestion would be to pad your source images up to 512x512 (with some constant colour), compress them, and then store only the relevant parts of the compressed texture. When you come to load the texture in the application, just re-insert the missing compressed data. (Just realised that's not the clearest of descriptions :-| ).
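The pad-to-POT half of that suggestion can be sketched on raw RGBA8888 data before it goes to the compressor. This is a sketch under the assumption that you have already decoded the image to raw pixels; `next_pot` and `pad_to_pot` are hypothetical helper names:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Round up to the next power of two (illustrative helper). */
static uint32_t next_pot(uint32_t v)
{
    uint32_t p = 1;
    while (p < v) p <<= 1;
    return p;
}

/* Copy a w x h RGBA8888 image into the top-left corner of a
   next_pot(w) x next_pot(h) canvas filled with the constant
   colour `fill`. The caller frees the returned buffer. */
static uint8_t *pad_to_pot(const uint8_t *src, uint32_t w, uint32_t h,
                           uint32_t fill, uint32_t *pw, uint32_t *ph)
{
    *pw = next_pot(w);
    *ph = next_pot(h);
    uint8_t *dst = malloc((size_t)*pw * *ph * 4);
    if (!dst) return NULL;
    /* Flood the whole canvas with the constant colour... */
    for (size_t i = 0; i < (size_t)*pw * *ph; ++i)
        memcpy(dst + i * 4, &fill, 4);
    /* ...then blit the source rows into the top-left corner. */
    for (uint32_t y = 0; y < h; ++y)
        memcpy(dst + (size_t)y * *pw * 4,
               src + (size_t)y * w * 4,
               (size_t)w * 4);
    return dst;
}
```

A 480x320 image padded this way comes out as 512x512, which any PVRTC compressor will then accept.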

Note that you'd need to take into account the storage order of the compressed blocks (which is probably described in the link that bronxbomber92 gave), but it just occurred to me that there might be a simpler approach. The compressed data is arranged in 64-bit blocks. If you have padded your source image with a constant colour, then after compression (at least with Imagination's PVRTexTool compressor) a reasonable percentage (~38%) of the 64-bit blocks should be identical. You could use, say, a set of flags to mark which 64-bit words hold this constant value, and avoid storing that data.
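The flagging idea can be sketched as a single pass over the compressed data. This assumes you have already identified the 64-bit block value that the solid-colour padding region compresses to; `flag_constant_blocks` is a hypothetical name:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scan PVRTC data as 64-bit blocks and set one flag bit per block that
   equals `constant` (the block a solid-colour pad region compresses to).
   `flags` must hold at least (nblocks + 7) / 8 zero-initialised bytes.
   Returns the number of matching blocks, i.e. blocks you need not store. */
static size_t flag_constant_blocks(const uint8_t *data, size_t nblocks,
                                   uint64_t constant, uint8_t *flags)
{
    size_t saved = 0;
    for (size_t i = 0; i < nblocks; ++i) {
        uint64_t block;
        memcpy(&block, data + i * 8, 8);  /* avoids unaligned reads */
        if (block == constant) {
            flags[i / 8] |= (uint8_t)(1u << (i % 8));
            ++saved;
        }
    }
    return saved;
}
```

On load you would walk the flags in the same order, emitting the constant block wherever a bit is set and consuming a stored block otherwise.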

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow