Question

I'm writing a simple tool in PyOpenGL for displaying visual stimuli used in a physiology experiment. Currently I render my stimuli in a single viewport inside a wxGLCanvas, which I display fullscreen on a projector. I'd like to add a second viewport so that the user can 'preview' what's being shown on the projector, but I don't want to duplicate all of the draw calls I'm already making for the first viewport. I need to display the stimuli at millisecond-precise times (measured against the wall clock), so I want to keep my overhead to a minimum in order to maintain a sufficiently high framerate.

My question is: what is the simplest way to efficiently render the exact same frame in two viewports? I've read a little about vertex buffer objects, but my understanding of OpenGL is very rudimentary and I'm not sure whether VBOs would be the most suitable method for my case.

Solution

  1. Create a texture, a renderbuffer, and a framebuffer object.
  2. Bind the framebuffer object to your OpenGL context.
  3. Attach the texture to the framebuffer's color attachment.
  4. Attach the renderbuffer to the framebuffer's depth attachment.
  5. Render the stimulus to this framebuffer.
  6. Switch back to the default framebuffer.
  7. Render the texture to each of your viewports.

Details about FBOs: http://www.opengl.org/wiki/Framebuffer_Objects

Make sure you share the OpenGL context between the two canvases, otherwise the texture rendered in one canvas will not be visible from the other (see: http://wiki.wxwidgets.org/WxGLCanvas#Sharing_wxGLCanvas_context).

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow