Question

I'm curious about how NURBS are rendered on GPUs / in the OpenGL graphics pipeline. I understand there are various calls within OpenGL and GLUT for easily rendering NURBS objects from a coding perspective, using glMap and glMapGrid, but what I don't get is the process OpenGL goes through to do this. The idea behind NURBS is to define surfaces with curves (they are built on rational Bézier/B-spline curves, which are genuinely curved), whereas the graphics pipeline appears to be built around triangle rasterization and triangle meshes.

So how are NURBS actually rendered, from a (high-level) pipeline perspective?


Solution

The simple answer is that they are not dealt with in the OpenGL pipeline; they must be converted into something the GL pipeline can process. The general approach is to first convert the NURBS surface into a more real-time-friendly primitive, such as Bézier patches, and then tessellate those at runtime into triangles.
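For concreteness, here is a minimal CPU-side sketch of what "evaluating" NURBS geometry actually means: the Cox–de Boor recursion for the B-spline basis functions plus the rational (weighted) combination of control points, shown for a curve. A surface works the same way with a second parameter and a second knot vector. This is illustrative only, not the driver's or GLU's implementation, and the helper names are assumptions.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Cox-de Boor recursion: value of the i-th B-spline basis function of
// degree p at parameter u over the given knot vector. (The very end of
// the parameter range needs the usual special case, omitted here.)
float basis(int i, int p, float u, const std::vector<float>& knots) {
    if (p == 0)
        return (u >= knots[i] && u < knots[i + 1]) ? 1.0f : 0.0f;
    float left = 0.0f, right = 0.0f;
    float d1 = knots[i + p] - knots[i];
    float d2 = knots[i + p + 1] - knots[i + 1];
    if (d1 > 0.0f) left  = (u - knots[i]) / d1 * basis(i, p - 1, u, knots);
    if (d2 > 0.0f) right = (knots[i + p + 1] - u) / d2 * basis(i + 1, p - 1, u, knots);
    return left + right;
}

// One point on a NURBS curve: a rational (weighted) combination of the
// control points. The "R" in NURBS is this division by the weight sum.
Vec3 evalNurbsCurve(float u,
                    const std::vector<Vec3>& ctrl,
                    const std::vector<float>& weights,
                    const std::vector<float>& knots,
                    int degree) {
    Vec3 num{0, 0, 0};
    float den = 0.0f;
    for (size_t i = 0; i < ctrl.size(); ++i) {
        float b = basis((int)i, degree, u, knots) * weights[i];
        num.x += b * ctrl[i].x;
        num.y += b * ctrl[i].y;
        num.z += b * ctrl[i].z;
        den   += b;
    }
    return {num.x / den, num.y / den, num.z / den};
}
```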

Tessellation could be regular, mapping a uniform grid onto the patch, or it could be adaptive, based on curvature, subdividing the patch more where the surface varies more. Either way, the surface is only truly evaluated at some set of vertices and rendered as flat polygons (though shaders can be used to create appropriately smooth, varying normals, etc.).
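As an illustration of the regular approach, here is a small, self-contained sketch (helper names are made up, not any library's API) that samples a bicubic Bézier patch on a uniform grid and emits an ordinary indexed triangle mesh, which could then be uploaded to vertex/index buffers and drawn as plain triangles:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// Cubic Bernstein basis evaluated at t.
static void bernstein3(float t, float b[4]) {
    float s = 1.0f - t;
    b[0] = s * s * s;
    b[1] = 3.0f * s * s * t;
    b[2] = 3.0f * s * t * t;
    b[3] = t * t * t;
}

// Evaluate a bicubic Bezier patch (4x4 control points, row-major) at (u, v).
static Vec3 evalPatch(const Vec3 cp[16], float u, float v) {
    float bu[4], bv[4];
    bernstein3(u, bu);
    bernstein3(v, bv);
    Vec3 p{0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float w = bv[i] * bu[j];
            p.x += w * cp[i * 4 + j].x;
            p.y += w * cp[i * 4 + j].y;
            p.z += w * cp[i * 4 + j].z;
        }
    return p;
}

// Sample the patch on a regular n x n grid and emit two triangles per cell;
// the result is an ordinary indexed triangle mesh the GPU can rasterize.
void tessellatePatch(const Vec3 cp[16], int n,
                     std::vector<Vec3>& verts, std::vector<unsigned>& indices) {
    for (int row = 0; row <= n; ++row)
        for (int col = 0; col <= n; ++col)
            verts.push_back(evalPatch(cp, col / (float)n, row / (float)n));

    for (int row = 0; row < n; ++row)
        for (int col = 0; col < n; ++col) {
            unsigned i0 = row * (n + 1) + col;   // this cell's corner indices
            unsigned i1 = i0 + 1;
            unsigned i2 = i0 + (n + 1);
            unsigned i3 = i2 + 1;
            indices.insert(indices.end(), {i0, i2, i1, i1, i2, i3});
        }
}
```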

glMap() et al. (which were previously used to help render Bézier patches and the like) are deprecated and no longer present in the modern core OpenGL API. Nowadays you would use tessellation shaders (the tessellation control and evaluation stages introduced in OpenGL 4.x) to do this subdivision on the GPU.
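On the host side, that modern path looks roughly like the sketch below: upload the 16 control points of each bicubic patch, tell OpenGL how many vertices make up one patch, and draw with GL_PATCHES; the tessellation control shader then chooses subdivision levels and the tessellation evaluation shader evaluates the Bernstein basis at each generated vertex. Function names, the GLEW loader, and the surrounding setup are assumptions for illustration, not a complete program.

```cpp
#include <GL/glew.h>   // any OpenGL function loader works; GLEW assumed here

// Draw `patchCount` bicubic patches whose control points are already in the
// buffers bound to `vao`, using a program that contains vertex, tessellation
// control, tessellation evaluation, and fragment stages.
void drawBezierPatches(GLuint program, GLuint vao, int patchCount) {
    glUseProgram(program);
    glBindVertexArray(vao);

    // Each patch primitive consists of 16 control-point vertices.
    glPatchParameteri(GL_PATCH_VERTICES, 16);

    // The GPU generates and evaluates the triangles; no CPU tessellation.
    glDrawArrays(GL_PATCHES, 0, 16 * patchCount);
}
```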

Licensed under: CC-BY-SA with attribution