Question

I have a large C++ OpenGL application that ran with very high performance under Mountain Lion. After updating to Mavericks and recompiling, performance has dropped significantly. Switching the primitive type from triangle strips to plain triangles drops performance by a further factor of 2 or 3, which leads me to believe the vertex shader is the bottleneck, and given how simple it is, I suspect it is running in software on the CPU rather than on the GPU. How can I recover the performance I had under Mountain Lion? Are there changes I need to make?
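One way to check the software-fallback suspicion on macOS is to ask CGL whether the vertex stage is actually running on the GPU. The sketch below is only illustrative and assumes a current CGL context; the helper name reportVertexProcessingPath is made up:

#include <OpenGL/OpenGL.h>
#include <cstdio>

// Report whether the current context performs vertex processing on the GPU.
void reportVertexProcessingPath()
{
    CGLContextObj ctx = CGLGetCurrentContext();
    if (!ctx) {
        std::printf("No current CGL context\n");
        return;
    }
    GLint onGPU = 0;
    // kCGLCPGPUVertexProcessing reads back 1 when the vertex stage runs on the GPU.
    if (CGLGetParameter(ctx, kCGLCPGPUVertexProcessing, &onGPU) == kCGLNoError) {
        std::printf("Vertex processing on GPU: %s\n", onGPU ? "yes" : "no");
    }
}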

The source of my vertex shader is given below. It feeds into a geometry shader that follows it.

#version 410

uniform mat3 normalMatrix;
uniform mat4 modelMatrix;
uniform mat4 modelProjMatrix;
uniform vec3 color = vec3(0.4, 0.4, 0.4);

in vec3 vertex;
in vec3 normal;

out NodeData {
    vec3 normal, position;
    vec4 refColor;
} v;

void main()
{
    vec4 position = modelMatrix * vec4(vertex, 1.0);
    vec3 vertNormal = normal;
    v.normal = normalize(normalMatrix * vertNormal);
    v.position = position.xyz;

    v.refColor = vec4(color, 1.0);
    gl_Position = modelProjMatrix * vec4(vertex, 1.0);

}

For 180,000 triangles I can only get 3 FPS when the mesh is fed as triangles and about 8 FPS when it is fed as strips. The triangles are ordered according to Forsyth's algorithm for post-transform cache optimization.
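Forsyth's ordering only pays off when the mesh is drawn indexed, so the post-transform cache can reuse vertices. For reference, a minimal indexed-draw sketch; vao and indexCount stand in for the application's own objects, and the element buffer is assumed to be recorded in the VAO:

#include <OpenGL/gl3.h>

// Draw the mesh through its index buffer so the post-transform cache is exercised.
void drawMeshIndexed(GLuint vao, GLsizei indexCount)
{
    glBindVertexArray(vao);   // the VAO records the GL_ELEMENT_ARRAY_BUFFER binding
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
    glBindVertexArray(0);
}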


Solution

Make sure every vertex buffer attached to the VAO is actually used by the vertex shader.
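In practice this means enabling a vertex attribute array only for attributes the linked shader actually reads; per the fix above, an attribute array attached to the VAO but never consumed by the shader seems to be what triggered the slowdown here. A minimal setup sketch against the shader above, with program, vertexBuf and normalBuf as placeholder names:

#include <OpenGL/gl3.h>

// Attach only the buffers the vertex shader reads ("vertex" and "normal").
void setupVAO(GLuint program, GLuint vertexBuf, GLuint normalBuf, GLuint &vao)
{
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // Query locations from the linked program so nothing stale gets enabled.
    GLint posLoc  = glGetAttribLocation(program, "vertex");
    GLint normLoc = glGetAttribLocation(program, "normal");

    glBindBuffer(GL_ARRAY_BUFFER, vertexBuf);
    glEnableVertexAttribArray((GLuint)posLoc);
    glVertexAttribPointer((GLuint)posLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    glBindBuffer(GL_ARRAY_BUFFER, normalBuf);
    glEnableVertexAttribArray((GLuint)normLoc);
    glVertexAttribPointer((GLuint)normLoc, 3, GL_FLOAT, GL_FALSE, 0, nullptr);

    // Do not enable arrays for attributes the shader never reads.
    glBindVertexArray(0);
}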

Licensed under: CC-BY-SA with attribution