Question

I am having some trouble with the framerate of my application and am hoping someone can spot what might be the cause.

My average framerate is around 120, which already seems a little low. But when I move the camera inside a cube I am drawing, the framerate plummets to lows of 18 and my GPU load jumps to 99% and stays there for as long as I am inside the cube. I have a fairly decent card (Radeon 7870 2GB), so I expected excellent FPS with just a single cube.

If I remove only the cube, leaving the rest of the main loop intact, I get ~2000 FPS, which leads me to believe that my Entity::_doDraw() function is the culprit, as it is never called without the cube.
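For reference, frame times can be verified with a simple sf::Clock-based timer. Below is a minimal illustrative sketch, not my exact code: sf::Clock and restart() are standard SFML, but the function name and the once-per-second reporting are arbitrary choices.

#include <SFML/System/Clock.hpp>
#include <iostream>

// Call once per frame from the main loop; prints the average FPS and
// milliseconds per frame roughly once per second.
void tickFrameTimer() {
    static sf::Clock clock;
    static int frames = 0;
    static float seconds = 0.f;

    seconds += clock.restart().asSeconds();
    ++frames;

    if(seconds >= 1.f) {
        std::cout << frames / seconds << " FPS, "
                  << 1000.f * seconds / frames << " ms/frame" << std::endl;
        frames = 0;
        seconds = 0.f;
    }
}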

This is where the action starts. I have one Camera and one Entity, the cube.

void Renderer::render(sf::Window* renderWindow) {
    renderWindow->setActive();

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    for(CameraIterator itr = mCameras.begin(); itr != mCameras.end(); ++itr) {
        Camera* c = *itr;      
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();

        sf::Vector2u size = renderWindow->getSize();
        GLdouble aspect = double(size.x) / double(size.y);
        gluPerspective(90, aspect, 0.01, 10000);

        // Setup a viewport -> This should come from the Camera!
        glViewport(0, 0, size.x, size.y);
        glTranslatef(c->getPosition().x, c->getPosition().y, c->getPosition().z);

        // Draw the scene
        for(EntityIterator itr = mEntities.begin(); itr != mEntities.end(); ++itr) {
            (*itr)->_doDraw();
        }
    }

    renderWindow->display();    // Swap buffers
}

The Entity drawing function:

void Entity::_doDraw() {

    if(mPipelineID == -1) return;

    glBindProgramPipeline(mPipelineID);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, mModel->getBufferID(MODEL_BUFFER_VERTEX));
    glVertexAttribPointer(
        0,                  // attribute 0. must match the layout in the shader.
        3,                  // size
        GL_FLOAT,           // type
        GL_FALSE,           // normalized?
        0,                  // stride
        0                    // array buffer offset
    );

    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, mModel->getBufferID(MODEL_BUFFER_COLOUR));
    glVertexAttribPointer(
        1,                  // attribute 1
        3,                  // size
        GL_FLOAT,           // type
        GL_FALSE,           // normalized?
        0,                  // stride
        0                    // array buffer offset
    );

    glDrawArrays(GL_TRIANGLES, 0, mModel->getVertexData()->size());   // third argument is the number of vertices to draw
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

}

My two shaders, just in case they are pertinent.

// VERT
#version 400

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColour;

out vec4 outColour;   // gl_Position is built in and must not be redeclared here

void main() {

    vec4 position = vec4(inPosition.xyz, 1);
    // gl_ModelViewProjectionMatrix is a compatibility-profile built-in
    gl_Position = gl_ModelViewProjectionMatrix * position;

    outColour = vec4(inColour, 1.0);   // set alpha explicitly
}

// FRAG
#version 400

in vec4 outColour;   // name must match the vertex shader output

out vec4 fragColour;

void main() {
    fragColour = outColour;
}

Solution

I do not see anything obviously wrong at first look.

  1. It may be an ATI driver-related issue.

    ATI drivers have had many bugs since forever. Things got much better after AMD bought them (though the .NET requirement is silly), but their OpenGL support is still no match for nVidia's. I have seen something similar here on SO in the past, where a shader on ATI had problems when the camera got too close to a cube; I think it was a Minecraft-like cube-world renderer.

    • Try changing the colour attribute from RGB to RGBA (4 values instead of 3); see the sketch after this list.
    • Try GL_QUADS instead of GL_TRIANGLES (ATI has had problems with odd-sized parameters).
    • It also sometimes helps to use GLuint instead of GLfloat.
    • Sometimes using indices helps (and sometimes indices cause problems instead); see the sketch after this list.
    • Try a different driver version.
    • Try a non-ATI GPU; for core profile 4.00 nVidia is the only option. Intel stopped at 3.30 I think, and all other vendors even earlier.
    • Try rendering with the fixed pipeline (no shaders) to see if the problem is there too, but that is no longer an option for core profiles and non-nVidia drivers (only glVertex is preserved).

    But your shaders are very basic, so if the problem is in them, it is most likely a driver issue.

  2. Make sure you do not have any memory leaks in your code, even in unrelated parts.

    ATI drivers are extremely sensitive to memory leaks (even minor ones); I have no clue why. Fixing memory leaks usually gets rid of a great many issues.
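For the RGBA and index-buffer bullets above, something like this untested sketch could be tried. The buffer IDs, index count, and function name are illustrative parameters, not names from your code; glDrawElements and GL_ELEMENT_ARRAY_BUFFER are standard OpenGL.

#include <GL/glew.h>   // or whatever extension loader you use

// Hypothetical helper: draws one mesh with a 4-component (RGBA) colour
// attribute and an index buffer instead of glDrawArrays.
void drawIndexedRGBA(GLuint vertexBufferID, GLuint colourBufferID,
                     GLuint indexBufferID, GLsizei indexCount) {
    glEnableVertexAttribArray(0);
    glBindBuffer(GL_ARRAY_BUFFER, vertexBufferID);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);   // positions: xyz

    glEnableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, colourBufferID);
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (void*)0);   // colours: RGBA, not RGB

    // Indexed drawing: the element buffer holds GL_UNSIGNED_INT indices.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBufferID);
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, (void*)0);

    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}

The vertex shader's in vec3 inColour would then become in vec4 inColour, and the colour buffer would hold four floats per vertex.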

Licensed under: CC-BY-SA with attribution