Question

I've got a very basic scene rendering with a vertex and color array (some code below), and I see how to bind the vertices and colors to the vertex shader's attributes. Currently this vertex and color information lives in local array variables in my render function, as you can see below, and then glDrawArrays(GL_TRIANGLES, 0, n) is called to draw them each frame.

I'm trying to picture the architecture of a larger, moving scene where there are lots of models with lots of vertices that need to be loaded and unloaded.

The naïve way I can imagine extending this would be to place all the vertex/color data in one big array in main memory and then call glDrawArrays once per frame. This seems inefficient to me: on every frame the vertex and color information changes only in parts, so arranging and re-uploading an entire monolithic vertex array each frame seems wrong.

What do 3D games and so forth do about this? Do they place all the vertices in one big array in main memory each frame and then call glDrawArrays once? If not, what architecture and OpenGL calls do they generally use to communicate all the scene's vertices to the GPU? Is it possible to load vertices into GPU memory and then reuse them for several frames? Is it possible to draw multiple vertex arrays from multiple places in main memory?

static const char *vertexShaderSource =
R"(

    attribute highp vec4 posAttr;
    attribute lowp vec4 colAttr;
    varying lowp vec4 col;
    uniform highp mat4 matrix;

    void main()
    {
       col = colAttr;
       gl_Position = matrix * posAttr;
    }

)";

static const char *fragmentShaderSource =
R"(

    varying lowp vec4 col;

    void main()
    {
       gl_FragColor = col;
    }

)";

void Window::render()
{
    glViewport(0, 0, width(), height());

    glClear(GL_COLOR_BUFFER_BIT);

    m_program->bind();

    constexpr float delta = 0.001;
    if (forward)
        eyepos += QVector3D{0,0,+delta};
    if (backward)
        eyepos += QVector3D{0,0,-delta};
    if (left)
        eyepos += QVector3D{-delta,0,0};
    if (right)
        eyepos += QVector3D{delta,0,0};

    QMatrix4x4 matrix;
    matrix.perspective(60, 4.0/3.0, 0.1, 10000.0);
    matrix.lookAt(eyepos, eyepos+direction, {0, 1, 0});
    matrix.rotate(timer.elapsed() / 100.0f, 0, 1, 0);

    m_program->setUniformValue("matrix", matrix);

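    // Client-side arrays: this data is re-specified and pulled from main memory on every draw.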
    QVector3D vertices[] =
    {
        {0.0f, 0.0f, 0.0f},
        {1.0f, 0.0f, 0.0f},
        {1.0f, 1.0f, 0.0f},
    };

    QVector3D colors[] =
    {
        {1.0f, 0.0f, 0.0f},
        {1.0f, 1.0f, 0.0f},
        {1.0f, 0.0f, 1.0f},
    };

    m_program->setAttributeArray("posAttr", vertices);
    m_program->setAttributeArray("colAttr", colors);

    m_program->enableAttributeArray("posAttr");
    m_program->enableAttributeArray("colAttr");

    glDrawArrays(GL_TRIANGLES, 0, 3);

    m_program->disableAttributeArray("posAttr");
    m_program->disableAttributeArray("colAttr");

    m_program->release();

    ++m_frame;
}


OTHER TIPS

Depends on how you want to structure things.

If you have a detailed model that needs to be moved, rotated, and transformed, but without changing its shape, then a pretty clean way to do it is to load that model into e.g. a VBO (vertex buffer object; I'm not sure what your setAttributeArray does under the hood). This upload has to happen only once, before the first frame. Subsequent frames can then render that model with any transformation you want by simply setting the model-view matrix uniform, which is a much smaller chunk of data going over the bus.
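As a minimal sketch of that idea using Qt's QOpenGLBuffer wrapper (the m_vbo member and vertexCount are assumptions, not names from your code), the vertex data is copied to GPU memory once, and each frame only binds the buffer and draws:

#include <QOpenGLBuffer>

    // One-time setup (e.g. in initializeGL), not per frame:
    m_vbo = QOpenGLBuffer(QOpenGLBuffer::VertexBuffer);
    m_vbo.create();
    m_vbo.setUsagePattern(QOpenGLBuffer::StaticDraw); // written once, drawn many times
    m_vbo.bind();
    m_vbo.allocate(vertices, sizeof(vertices));       // copies the data into GPU memory

    // Per frame: no vertex data crosses the bus, only the uniform does.
    m_program->bind();
    m_program->setUniformValue("matrix", matrix);
    m_vbo.bind();
    m_program->setAttributeBuffer("posAttr", GL_FLOAT, 0, 3); // read posAttr from the bound VBO
    m_program->enableAttributeArray("posAttr");
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);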

Vertex shaders can, and should, be used to offload this kind of computation to the GPU: the CPU supplies one transformation matrix per model, and the shader applies it to every vertex.
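That also means a single VBO can be drawn many times per frame in different places, with only the uniform changing between draw calls. A hypothetical sketch (the models container and its modelMatrix/first/count fields are made up for illustration):

    for (const Model &m : models)
    {
        // Compose the full transform on the CPU; the vertex shader applies it per vertex.
        QMatrix4x4 mvp = projection * view * m.modelMatrix;
        m_program->setUniformValue("matrix", mvp);

        // Draw one sub-range of the shared vertex buffer.
        glDrawArrays(GL_TRIANGLES, m.first, m.count);
    }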
