Question

My current rendering implementation is as follows:

  • Store all vertex information as quads rather than triangles
  • For triangles, simply repeat the last vertex (i.e. v0 v1 v2 v2)
  • Pass vertex information as lines_adjacency to geometry shader
  • Check if quad or triangle, output as triangle_strip
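A minimal sketch of what such a geometry shader might look like (this is an assumption about the setup described above, not the actual code; it detects a triangle by checking whether the fourth input vertex repeats the third):

```glsl
#version 150

// Input: 4 vertices per primitive (quad, or triangle with last vertex repeated).
layout(lines_adjacency) in;
// Output: a strip of at most 4 vertices (one quad or one triangle).
layout(triangle_strip, max_vertices = 4) out;

void main()
{
    // Strip order for a quad v0 v1 v2 v3 is v0, v1, v3, v2.
    // For a triangle encoded as v0 v1 v2 v2, the third strip vertex
    // (index 3) equals v2, so the first three emits already cover it.
    gl_Position = gl_in[0].gl_Position; EmitVertex();
    gl_Position = gl_in[1].gl_Position; EmitVertex();
    gl_Position = gl_in[3].gl_Position; EmitVertex();

    // If the last two input vertices differ, this is a real quad:
    // emit the remaining corner. (GLSL's != on vectors yields a bool.)
    if (gl_in[2].gl_Position != gl_in[3].gl_Position)
    {
        gl_Position = gl_in[2].gl_Position; EmitVertex();
    }
    EndPrimitive();
}
```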

The reason I went this route was because I was implementing a wireframe shader, and I wanted to draw the quads without a diagonal line through them. But, I've since discarded the feature.

I'm now wondering if I should go back to simply drawing GL_TRIANGLES, and leave the geometry shader out of the equation. But that got me thinking... what's actually more efficient from a performance point of view?

  • On average, my scenes are composed of quads and triangles in roughly equal amounts.
  • Drawing with all triangles would mean: 6 vertices per quad, 3 per triangle.
  • Drawing with lines_adjacency would mean: 4 vertices per quad, 4 per triangle.
  • (This is with indexed drawing, so the vertex buffer is the same size for both of them)

So the vertex ratio is 9:8 (triangles : lines_adjacency).
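Spelling the arithmetic out for a scene with $N$ quads and $N$ triangles:

```latex
\underbrace{6N + 3N}_{\text{GL\_TRIANGLES}} = 9N \text{ indices},
\qquad
\underbrace{4N + 4N}_{\text{lines\_adjacency}} = 8N \text{ indices},
\qquad
\text{hence } 9N : 8N = 9 : 8.
```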

Would I be correct in assuming that with indexed drawing, each vertex is processed by the vertex shader only once (as opposed to once per index)? In that case drawing triangles would be more efficient (since there is no extra geometry-shader step to perform), with the only downside being the slight amount of extra memory the indices take up.

Then again, if the vertices do get processed once per index, I could see the advantage going to the lines_adjacency method, since the geometry conversion is very simple, whereas the vertex shader might be running more expensive lighting calculations.

So that pretty much sums up my question: how do vertices get treated with indexed drawing, and what sort of performance impact could be expected if including a simple geometry shader?


Solution

Geometry shaders never improve efficiency in this sort of situation; they only complicate the primitive assembly process. When you use a geometry shader, the post-T&L cache no longer works the way it was originally designed to.

While it is true that the geometry shader will reuse any shared (indexed) vertices transformed in the vertex shader stage when it fetches its input, it still computes and emits a unique set of vertices for every output primitive.

Furthermore, geometry shaders are unlike the other shader stages because they are allowed to emit a variable number of vertices, which makes them much harder to parallelize than vertex or fragment shaders. There are just too many downsides to geometry shaders for me to suggest using them unless you actually need them.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow