Question

  1. I just want to understand how this works. I am in the vertex shader. I store a UV coordinate from va1 into v1. In the fragment shader, an interpolated texture coordinate is now available in v1 to sample with. Are there connections under the hood, like semantics, that associate v1 with a particular kind of data (texture coordinates, normals, etc.)?

  2. Are fragments pushed to the fragment shader per triangle, or only after an entire mesh's vertices have been run through the vertex shader?

  3. In the vertex shader, let's say I multiply my vertex's position by my modelview matrix. I then want to share the transformed z-coordinate with the fragment shader for something else, so I store it in v3. When I access v3 in the fragment shader, is this the same z-coordinate I passed in, or will it have been altered by the perspective divide in between? I ask only because I want to clarify the "interpolating" behavior described when storing values in these registers.

Solution

  1. There are no semantics attached to the varying registers. Anything that you pass to a varying register will be interpolated, and it works the same for normals as it does for UVs or anything else.

  2. I believe the entire mesh is transformed by the vertex shader first, and then the fragment shader runs on the rasterized triangles. In practice, though, it shouldn't really matter how the hardware schedules this.

  3. The coordinate will be interpolated; the varying registers don't know what kind of data they hold - from their perspective it's just 4 floating-point numbers. Since a varying register holds 4 components, you could just stick the entire vertex position in there. One thing to note: if you want to do something depth-related, you'd likely want the w component (with a standard projection matrix, clip-space w carries the view-space depth).
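To illustrate the interpolation described above, here is a minimal CPU-side sketch (in Python, with a made-up helper name; real hardware does this per fragment during rasterization). It shows that a value stored in a varying arrives unchanged at the vertices themselves, but interior fragments see a perspective-correct blend - the GPU interpolates attribute/w and 1/w linearly in screen space, then divides:

```python
def perspective_correct_interp(bary, attrs, clip_ws):
    """Interpolate a per-vertex attribute the way varyings are:
    attribute/w and 1/w are interpolated linearly in screen space,
    then divided to recover the perspective-correct value."""
    num = sum(b * (a / w) for b, a, w in zip(bary, attrs, clip_ws))
    den = sum(b * (1.0 / w) for b, w in zip(bary, clip_ws))
    return num / den

# Three vertices with a view-space z stored in a varying, and their
# clip-space w (for a standard projection, w equals view-space z).
zs = [2.0, 4.0, 8.0]
ws = [2.0, 4.0, 8.0]

# Exactly at a vertex (barycentric (1, 0, 0)) the value is unchanged:
at_vertex = perspective_correct_interp((1, 0, 0), zs, ws)

# At the triangle's centroid the fragment sees a blend - and note it
# differs from a plain linear average because of the divide by w:
centroid = perspective_correct_interp((1/3, 1/3, 1/3), zs, ws)
linear_avg = sum(zs) / 3
```

So the short answer to the question: the fragment shader never sees "the z you passed in" except exactly at a vertex; everywhere else it sees an interpolated value.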

Licensed under: CC-BY-SA with attribution