Question

I am working on a little game (using OS X Lion, Xcode 4, Objective-C / C++, Cocoa, OpenGL). It's a rather simple concept: I've got some objects that move around inside a two-dimensional array, and now I want to write an OpenGL GUI for my game.

What I did was go through my array and draw a cube for each object inside it, at its specific position, with a texture that depends on the kind of object. Of course my first naive implementation was a bit CPU-intensive, so the next step was to implement a texture atlas. Since I've got a lot of vertices, there is still a lot of room for improvement.
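Schematically, the atlas step just maps an object kind to a sub-rectangle of one big texture instead of binding a separate texture per kind (the 4×4 atlas size and the names here are only illustrative):

```cpp
// Illustrative only: derive the UV rectangle for one object kind from its
// index into a hypothetical 4x4 texture atlas.
const int atlasCells = 4;                 // atlas is a 4x4 grid of sub-images
const float cell = 1.0f / atlasCells;
float u0 = (kind % atlasCells) * cell;    // left edge of this kind's cell
float v0 = (kind / atlasCells) * cell;    // bottom edge of this kind's cell
float u1 = u0 + cell;
float v1 = v0 + cell;
// (u0,v0)..(u1,v1) are then used as texture coordinates for the cube faces.
```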

I read that VBOs and display lists are a lot faster. I worked through some tutorials, but I've still got a lot of questions about the implementation in a real, dynamic game environment.

Say I compile some display lists for my game objects. If I want to place one at a specific position, I'd have to glTranslatef(). But a lot of glTranslatef() calls can be very CPU-intensive. How do I deal with that? Of course I could create a display list for each kind of game object at every possible position, but that seems impractical, and it would only work because my game is tile-based...
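Roughly, the per-tile loop I'm describing looks like this (the grid, tile size, and display-list names are just placeholders):

```cpp
// Sketch of the display-list approach: one pre-compiled cube list per object
// kind, translated into place for every occupied tile.
for (int x = 0; x < gridWidth; ++x) {
    for (int y = 0; y < gridHeight; ++y) {
        int kind = tiles[x][y];
        if (kind < 0) continue;                  // empty tile
        glPushMatrix();
        glTranslatef(x * tileSize, y * tileSize, 0.0f);
        glCallList(cubeList[kind]);              // cube with this kind's texture
        glPopMatrix();
    }
}
```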

VBOs seem to suffer from the same problem. Of course (at least from my understanding) I could create a whole mesh of vertices, then go through my array and, for every object, render a cube of vertices at that object's position. But I don't get how I'd apply a texture to a specific cube from a whole mesh of vertices. Do I have to create texture coordinates for every cube face inside the mesh, or would it be enough to create texture coordinates for the six faces of a single cube?
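To make that concrete, I imagine the batched VBO would use an interleaved layout where every vertex carries its own atlas coordinates, something like this sketch (the struct layout and names are only illustrative, and `vertices`/`vertexCount` come from the tile pass):

```cpp
#include <OpenGL/gl.h>
#include <cstddef>   // offsetof

// Interleaved layout for one big VBO: every vertex has its own texture
// coordinate, so each cube face in the batch can point at a different
// region of the atlas.
struct Vertex {
    GLfloat x, y, z;   // position in tile space
    GLfloat u, v;      // atlas coordinates for this face
};

GLuint vbo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(Vertex),
             vertices, GL_DYNAMIC_DRAW);        // rebuilt when tiles change

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const GLvoid*)0);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), (const GLvoid*)offsetof(Vertex, u));
glDrawArrays(GL_QUADS, 0, vertexCount);
```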


Solution

Display lists and VBOs usually don't help much with trivial geometry. Sometimes they can even slow things down, if I recall correctly from the blue OpenGL book.

Display lists were "as good as it gets" in OpenGL 1.x but were considered legacy functionality with the introduction of VBOs.

It would help to know which OpenGL version you are targeting. In OpenGL 1.x with immediate mode, what you do is set up your "camera matrix" (OpenGL doesn't have one; you just leave the inverse of the camera transform on the matrix stack) and then push and pop matrices for each model in your render loop.
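As a minimal fixed-function sketch (the camera values, the model list, and drawModel() are placeholders):

```cpp
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

// Camera first: its inverse transform becomes the base of GL_MODELVIEW.
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(camX, camY, camZ,        // eye position
          lookX, lookY, lookZ,     // point the camera looks at
          0.0f, 1.0f, 0.0f);       // up vector

// Then one push/translate/draw/pop per model on top of the camera matrix.
for (size_t i = 0; i < models.size(); ++i) {
    glPushMatrix();
    glTranslatef(models[i].x, models[i].y, models[i].z);
    drawModel(models[i]);          // immediate-mode or display-list draw
    glPopMatrix();
}
```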

Using OpenGL 2.x or the 1.x ARB extensions, you can put your transforms in uniform matrices and then update the uniform for each model. This doesn't really buy you much, since updating a uniform once per model is also slow.
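Schematically, assuming a shader with a modelMatrix uniform (the uniform name and the per-model mesh ranges are made up):

```cpp
// Per-model uniform approach: look the uniform up once, then update it
// before every draw call.
glUseProgram(program);
GLint loc = glGetUniformLocation(program, "modelMatrix");

for (size_t i = 0; i < models.size(); ++i) {
    // Column-major translation matrix for this model.
    GLfloat mat[16] = { 1, 0, 0, 0,
                        0, 1, 0, 0,
                        0, 0, 1, 0,
                        models[i].x, models[i].y, models[i].z, 1 };
    glUniformMatrix4fv(loc, 1, GL_FALSE, mat);
    glDrawArrays(GL_TRIANGLES, models[i].firstVertex, models[i].vertexCount);
}
```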

I think the best thing you could do would be to give OpenGL the transforms as a set of matrices that the shader picks from, indexed by a vertex attribute. This would reduce your CPU overhead by transferring all the data in one go. I can't vouch for the exact implementation details, but since attributes are per-vertex, you would pass an index telling each vertex where to look in the buffer that holds the transforms. I have never done this, since for most models the geometry and texture data dwarf the time spent setting up transforms.
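A rough sketch of what I mean, with the matrices in a uniform array and a per-vertex index attribute (the 64-object limit, the attribute setup, and all names are illustrative; a UBO or a texture would scale better):

```cpp
// Vertex shader: every vertex carries the index of its object's matrix.
const char* vertexShaderSrc =
    "uniform mat4 mvp[64];                       \n"  // one matrix per object
    "attribute vec3 position;                    \n"
    "attribute float transformIndex;             \n"  // filled via glVertexAttribPointer
    "void main() {                               \n"
    "    gl_Position = mvp[int(transformIndex)]  \n"
    "                * vec4(position, 1.0);      \n"
    "}                                           \n";

// Host side: upload all matrices in one call, then draw the whole batch.
GLint loc = glGetUniformLocation(program, "mvp");
glUniformMatrix4fv(loc, objectCount, GL_FALSE, &matrices[0][0]);  // matrices[objectCount][16]
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```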

That being said, redesigning this with VBOs and supplying the transforms through attributes is probably a great deal of work for something like an 8% gain in CPU usage, and that 8% is on one core of (probably) a 2.0 GHz Core 2 Duo.

EDIT

A uniform buffer object is what I really wanted, probably together with a uniform block layout. The Blue Book (and maybe the Orange Book) and Google have more details.
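Something along these lines (requires GL 3.1 or ARB_uniform_buffer_object; the block name, binding point, and object count are arbitrary):

```cpp
// All per-object matrices live in one buffer bound to a named uniform block.
GLuint ubo = 0;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, objectCount * 16 * sizeof(GLfloat),
             matrices, GL_DYNAMIC_DRAW);

// Wire the buffer to the "Transforms" block in the shader via binding point 0.
GLuint blockIndex = glGetUniformBlockIndex(program, "Transforms");
glUniformBlockBinding(program, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

// Matching GLSL declaration (std140 layout):
//   layout(std140) uniform Transforms { mat4 model[64]; };
```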
