Question

According to the wiki, the difference is that R16 is normalized and R16UI is not. What does this mean in practice? They are both stored the same way internally, right, as unsigned integers? The fact that one is supposed to represent a normalized integer and the other is supposed to be just an integer does not really tell me anything. When you sample these two types in a shader, you still get a normalized value in the 0 to 1 range, right? So what is the difference?


Solution

You sample them differently. For an R16 texture, you will have something like this in your shader:

uniform sampler2D tex;   // normalized formats use a regular float sampler
in vec2 texCoord;
...
float val = texture(tex, texCoord).r; // a float in the 0 to 1 range

For an R16UI texture, it will look like this:

uniform usampler2D tex;  // integer formats require an unsigned integer sampler
in vec2 texCoord;
...
uint val = texture(tex, texCoord).r; // the raw stored integer, not normalized

So for R16UI, you read out the integer value exactly as stored; no normalization happens. R16, on the other hand, behaves very much like R8 and similar normalized formats (RGBA8, etc.), except that it has twice the internal precision: a stored 16-bit value v is returned as v / 65535.0, so sampling yields a float in the 0 to 1 range. If you want such a normalized value from an R16UI texture, you have to do the division yourself in the shader.
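
For reference, here is roughly what creating the two textures looks like on the C side. This is a minimal sketch, assuming a current OpenGL 3.0+ context and a bound 2D texture; width, height, and pixels are illustrative names:

// R16: sized internal format GL_R16, uploaded as a plain red channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, width, height, 0,
             GL_RED, GL_UNSIGNED_SHORT, pixels);

// R16UI: the client format must be GL_RED_INTEGER for an integer
// internal format, otherwise the call raises GL_INVALID_OPERATION.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_SHORT, pixels);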

Note that all integer textures, including R16UI, support only NEAREST filtering, or NEAREST_MIPMAP_NEAREST for the minification filter.
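
In practice you must set the filters explicitly, because the default minification filter (GL_NEAREST_MIPMAP_LINEAR) leaves an integer texture incomplete, which typically samples as black. A sketch, assuming the R16UI texture is currently bound:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // or GL_NEAREST_MIPMAP_NEAREST
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);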

There are additional constraints when integer textures are used as render targets. For example, blending is not supported. Consult the specs before using integer render targets to avoid surprises.
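
As a sketch of the render-target case, assuming the R16UI texture from above (texR16ui is an illustrative name):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texR16ui, 0);
glDisable(GL_BLEND); // blending is undefined for integer color buffers

The fragment shader then has to declare a matching unsigned integer output, e.g. out uint fragValue;.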

Licensed under: CC-BY-SA with attribution