Question

In my project, I use a 'discard' call to perform a customized stencil test, which draws only on an area defined by a stencil texture. Here is the code from the fragment shader:

// get the stencil value from a texture
float value = texture2D(stencilTexture, gl_FragCoord.xy / 1024.0).x;
// check if the value equals the desired value; if not, draw nothing
if (abs(value - desiredValue) > 0.1)
{
    discard;
}

This code works, but it suffers from a performance problem because of the 'discard' call. Is there an alternative way to do this with GPU shaders? If so, how?


Solution

If you access a texture, you must suffer the performance penalties associated with accessing a texture. In the same way, if you want to stop a fragment from being rendered, you must suffer the performance penalties associated with stopping fragments from being rendered.

This is true regardless of how you stop that fragment. Whether it's the actual stencil test, your shader-based discard, or alpha testing, all of these run into the same general performance issues (on hardware where discard causes any significant performance problem, which is mainly mobile hardware). The only exception is the depth test, and the reason is tied to why that hardware has trouble with discard in the first place.

On platforms where discard has a substantial impact on performance (typically tile-based mobile GPUs), the rendering algorithm works best when the hardware can assume that depth is the final arbiter of whether a fragment is visible (and thus the fragment with the highest/lowest depth always wins, so occluded fragments never need to be shaded at all). Any method of culling a fragment other than the depth test interferes with that optimization.
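If you do want to lean on the depth test as that final arbiter, one possible restructuring is a cheap fullscreen prepass that bakes the mask into the depth buffer, so the expensive main pass needs no discard at all. This is only a sketch, not something from the question: it assumes a fullscreen quad drawn first with color writes disabled, and it reuses the `stencilTexture` and `desiredValue` names from the original shader.

```glsl
#version 300 es
precision highp float;

// Hypothetical depth-mask prepass, run over a fullscreen quad BEFORE the
// scene is rendered (with color writes disabled for this pass).
uniform sampler2D stencilTexture;
uniform float desiredValue;

out vec4 fragColor;

void main() {
    float value = texture(stencilTexture, gl_FragCoord.xy / 1024.0).x;

    // Where the mask does NOT match, write the nearest possible depth so
    // that every later scene fragment fails an ordinary GL_LESS depth test
    // there. Where it matches, write the far plane so the scene renders
    // normally. Writing gl_FragDepth disables early depth optimizations,
    // but only for this one cheap pass, not for the whole scene.
    gl_FragDepth = (abs(value - desiredValue) > 0.1) ? 0.0 : 1.0;

    fragColor = vec4(0.0); // ignored; color writes are masked off
}
```

The main pass then renders with the normal depth test enabled and no discard, so the hardware's depth-based culling stays intact where it matters most.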

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow