GL_ALPHA really just boils down to a single-component texture whose single component is taken to mean the alpha channel. If you store the image data in a non-deprecated internal format (e.g. GL_R8), you can write your own Get method that converts to RGBA (assuming you actually need that).
The thing is, nothing in your proposed PixelFormat class right now actually exposes the internal representation of your data. The closest this code ever comes to caring about the absence of GL_ALPHA is the ability to query the storage size per component.
To answer your original question, GL_ALPHA represents this:

vec4(0.0, 0.0, 0.0, alpha)
If you want to use a GL_R8 image format to replace GL_ALPHA, you can do this in a shader:

vec4(0.0, 0.0, 0.0, red)
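To make that concrete, here is a minimal fragment-shader sketch of the swizzle (the sampler name alphaTex, the uv varying, and the GLSL version are my assumptions, not anything from your code):

```glsl
#version 330 core

uniform sampler2D alphaTex;  // bound to a GL_R8 texture
in vec2 uv;
out vec4 fragColor;

void main()
{
    // Read the single (red) channel and move it into alpha,
    // reproducing the old GL_ALPHA interpretation.
    float a = texture(alphaTex, uv).r;
    fragColor = vec4(0.0, 0.0, 0.0, a);
}
```

Alternatively, in GL 3.3+ you can set the GL_TEXTURE_SWIZZLE_RGBA texture parameter to (GL_ZERO, GL_ZERO, GL_ZERO, GL_RED) and the remap happens at sampling time with no shader changes at all.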
You can just as easily do the same thing on the CPU side if you ever need to return an RGBA image from your Alpha pixel format. Otherwise, simply return an array of pixels where RedBits = 0, GreenBits = 0, BlueBits = 0 and AlphaBits = .... That amounts to an array of alpha values, and while GL would consider each of those pixels a red color component, that hardly matters here.
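If you do end up needing the RGBA conversion, the CPU-side expansion is trivial. A sketch in C (expand_alpha_to_rgba is a hypothetical helper name of my own; it assumes tightly packed 8-bit data, as you would read back from a GL_R8 texture):

```c
#include <stddef.h>
#include <stdint.h>

/* Expand a packed array of 8-bit alpha values into RGBA8,
 * matching the classic GL_ALPHA view of vec4(0.0, 0.0, 0.0, alpha).
 * rgba_out must have room for count * 4 bytes. */
static void expand_alpha_to_rgba(const uint8_t *alpha, size_t count,
                                 uint8_t *rgba_out)
{
    for (size_t i = 0; i < count; ++i) {
        rgba_out[i * 4 + 0] = 0;         /* R */
        rgba_out[i * 4 + 1] = 0;         /* G */
        rgba_out[i * 4 + 2] = 0;         /* B */
        rgba_out[i * 4 + 3] = alpha[i];  /* A */
    }
}
```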
TL;DR: There are only a handful of places where the fact that modern GL interprets single-component image formats as red even matters. With so much of the pipeline programmable these days, it makes no sense for GL to handle alpha image formats specially. The same goes for luminance; all of this can be accomplished with swizzles in a shader.