Question

I made a video player with Qt, OpenGL and FFmpeg which works quite well.

Now I want to use multi-threading and a buffer to get better performance. I've set up a thread which decodes the frames and stores them in a QMap:

QMap<int, uint8_t *> myBuffer;

The int key is the timecode, and the pointer refers to the OpenGL texture data.

Every time I decode a frame, I add it to the buffer using new, and as soon as the frame has been read, I delete it.

I assume this is not the best way to do it: not in terms of memory management (there is no memory leak), but in terms of performance.

Is there a better approach to do this?


Solution

You could decode directly into OpenGL pixel buffer objects (PBOs):

1. Create a small number of PBOs (3 or 4 should suffice) and have the renderer thread map at least 2 of them into the process address space using glMapBuffer, queuing each pointer and PBO ID into a pool of available buffers.

2. To decode a frame, the decoder thread dequeues a pointer/ID pair from the available pool and decodes directly into the provided memory. Once a frame has been decoded, it queues the ID of the used PBO into a "decoded" pool.

3. The renderer thread dequeues from the "decoded" pool, unmaps the buffer with glUnmapBuffer, and immediately follows with a load into a texture using glTexSubImage2D; because the source is a PBO, the texture upload happens asynchronously. It then enqueues the texture ID into a "ready to display" FIFO.

4. Finally, the renderer dequeues the next texture to display from the "ready to display" FIFO. By keeping at least 3 elements in that FIFO, frames get uploaded asynchronously while the previously decoded frames are drawn, without OpenGL blocking because it has to wait for previous steps to complete.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow