I've been suffering from severe performance problems rendering a regularly updated image on the iPhone. After trying it on the iPad 3 today I found out that I was only getting 2fps. This was WAY too slow. So I profiled and found that nearly all the time was spent converting the image into a 32-bit ARGB image (after the UIImage drawInRect). I'm seriously shocked at how poor the performance is, given that everyone says UIKit renders using OpenGL ES.
So I converted the rendering code to GLES1 (I can't be bothered to set up a GLES2 renderer at the mo ;)). The performance has shot up. I'm now getting 20+fps. In fact the performance is so good I'm beginning to wonder whether I can perform full retina rendering!
Anyway I am creating the texture as follows:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1, &colours.front() );
Unfortunately this reverses the component order relative to my existing pixel data. I have "fixed" it by changing the packing from:
val = ((rgba.a >> 7) << 15) | ((rgba.b >> 3) << 10) | ((rgba.g >> 3) << 5) | ((rgba.r >> 3) << 0);
to:
val = ((rgba.r >> 3) << 11) | ((rgba.g >> 3) << 6) | ((rgba.b >> 3) << 1) | ((rgba.a >> 7) << 0);
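To make the difference between the two layouts concrete, here's a minimal standalone sketch of both packings as functions. The RGBA8 struct and the function names are my own, standing in for the rgba type used above:

```cpp
#include <cstdint>

// Hypothetical 8-bit-per-channel source pixel, standing in for "rgba" above.
struct RGBA8 { uint8_t r, g, b, a; };

// Original packing: alpha in the top bit, then B, G, R in descending bits.
inline uint16_t PackABGR1555( const RGBA8& p )
{
    return static_cast< uint16_t >(
        ((p.a >> 7) << 15) | ((p.b >> 3) << 10) | ((p.g >> 3) << 5) | ((p.r >> 3) << 0) );
}

// "Fixed" packing matching GL_UNSIGNED_SHORT_5_5_5_1: R in the top 5 bits,
// then G, B, with alpha in the bottom bit.
inline uint16_t PackRGBA5551( const RGBA8& p )
{
    return static_cast< uint16_t >(
        ((p.r >> 3) << 11) | ((p.g >> 3) << 6) | ((p.b >> 3) << 1) | ((p.a >> 7) << 0) );
}
```

For opaque pure red (255, 0, 0, 255), the first packing yields 0x801F and the second 0xF801, which shows the components sitting at opposite ends of the 16-bit value.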
So this confirms that the component order is reversed compared with CGImage. However, this code is used in multiple products, so I need to come up with a way of loading the "component reversed" image directly.
As such I found the type constant GL_UNSIGNED_SHORT_1_5_5_5_REV, but I can't get it to work.
I've tried the following code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_SHORT_1_5_5_5_REV, &colours.front() );
and:
glTexImage2D( GL_TEXTURE_2D, 0, GL_BGRA, width, height, 0, GL_BGRA, GL_UNSIGNED_SHORT_1_5_5_5_REV, &colours.front() );
but both give me GL_INVALID_ENUM. So what am I doing wrong? Is it even possible to load component-reversed images?
Thanks in advance!
Edit: in the interim I've introduced YET another pixel class and added the following code:
// Works around the missing *_REV upload path by converting into a component
// order GL will accept, at the cost of an extra full-image copy per update.
inline uint32_t UpdateGLTexture( uint32_t texture, std::vector< R5G5B5A1 >& colours, unsigned int width, unsigned int height )
{
    std::vector< A1R5G5B5 > convertedColours( colours.size() );
    ColourSpaceConversion::ConvertFormat( convertedColours, colours );
    return UpdateGLTexture( texture, convertedColours, width, height );
}
So I now have correct rendering, but I'm trying to avoid this kind of colour conversion. I may just create the source image in that format when using GL rendering, but it's annoying that I can't load the format directly :(
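One mitigating observation: if A1R5G5B5 here means A in bit 15 followed by R, G, B in descending 5-bit fields (an assumption on my part; it doesn't necessarily match ColourSpaceConversion::ConvertFormat), then it's exactly the R5G5B5A1 layout rotated right by one bit, so the per-pixel conversion can be a single 16-bit rotate rather than unpacking every channel. A sketch, assuming packed uint16_t pixels:

```cpp
#include <cstdint>

// R5G5B5A1: R in bits 11-15, G in 6-10, B in 1-5, A in bit 0.
// A1R5G5B5: A in bit 15, R in 10-14, G in 5-9, B in 0-4.
// Rotating right by one moves each 5-bit channel down a bit and
// wraps the alpha bit from the bottom to the top.
inline uint16_t R5G5B5A1ToA1R5G5B5( uint16_t v )
{
    return static_cast< uint16_t >( ( v >> 1 ) | ( v << 15 ) );
}
```

For example, opaque pure red packed as R5G5B5A1 is 0xF801, and the rotate yields 0xFC00 (alpha bit set, R field full, G and B zero), which is the expected A1R5G5B5 value.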