Question

I am currently writing a basic rendering demo using Haskell's OpenGL bindings. The problem is that it can barely handle 2000+ vertices. My pseudo-code amounts to this:

terrain = The set of points generated from [-1...1] x [-1...1] x [-1...1].
camera = Camera at position (xc, yc, zc) with angles (ax, ay, az).
while running:
    input = anything that moves the camera's position or angles
    projected = []
    for point in terrain:
        projected.append(camera.perspectiveProjection(point))
    renderPoints(projected)

The problem (I believe) is that I am manually converting each of my three-dimensional points into two dimensions on the CPU every frame, and then using OpenGL only to plot the resulting 2D points.

My question is: should I instead be feeding OpenGL the three-dimensional points and using whatever projection OpenGL has baked in?

(I feel like I understand how perspective projections work - I'm just unsure if I should be doing this manually.)

EDIT:

The following is, for the most part, my code. I've left out sections that I feel are self-explanatory given only the function definitions.

import Graphics.UI.GLUT
import Data.IORef

main :: IO ()
main = do
    (_progName, _args) <- getArgsAndInitialize
    initialDisplayMode $= [DoubleBuffered]
    _window <- createWindow "Hello, World"
    -- The camera position followed by pitch, yaw and roll.
    camera <- newIORef (Camera [0, 0, 0] 0 0 0)
    displayCallback $= display camera
    mainLoop

display :: IORef Camera -> DisplayCallback
display camIO = do
    camera <- get camIO
    clear [ColorBuffer, DepthBuffer]
    -- Project every point to 2D on the CPU, then hand the 2D points to OpenGL.
    renderPrimitive Points $ mapM_ vertex
        $ map (perspectiveProjection camera) points
    swapBuffers
    postRedisplay Nothing
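
Roughly, the Camera type and perspectiveProjection I've left out amount to something like this (a simplified sketch, not my exact code; rotation handling is omitted):

data Camera = Camera
    { camPosition      :: [GLfloat]  -- [x, y, z]
    , pitch, yaw, roll :: GLfloat
    }

-- Project a single 3D point down to 2D on the CPU; this runs for every
-- point of the terrain on every frame.
perspectiveProjection :: Camera -> Vertex3 GLfloat -> Vertex2 GLfloat
perspectiveProjection cam (Vertex3 x y z) =
    let [cx, cy, cz] = camPosition cam
        depth        = z - cz
    in Vertex2 ((x - cx) / depth) ((y - cy) / depth)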

Solution

As you correctly guessed, rolling your own projection on the CPU can be very slow. Also, unless you're doing something extremely complicated, OpenGL (or, more specifically, GLU) has a set of functions that solve most of your problems.

The simplest way to do a traditional perspective projection is to have a camera with a position, a look-at point and an up vector. Personally, I find this simpler than defining the camera axes with rotation angles. Once you have this, your display function could look like this:

import Graphics.Rendering.OpenGL.GLU.Matrix

display :: IORef Camera -> DisplayCallback
display camIO = do
    camera <- get camIO
    loadIdentity                       -- reset the current matrix each frame
    perspective fov aspect zNear zFar  -- field of view (degrees), aspect ratio, near/far clip planes
    -- eye position, point to look at, up vector; the look-at accessor is
    -- called target here so it does not clash with GLU's lookAt
    lookAt (position camera) (target camera) (upVector camera)
    -- call the clear functions
    -- call renderPrimitive with the untransformed 3D points

The lookAt function sets the camera position and orientation, given the camera attributes. perspective is a function that takes information about the camera and window (field of view, aspect ratio and clip distances) and builds a proper perspective projection matrix. If you find it does not give enough control over the projection, you can use frustum from Graphics.Rendering.OpenGL.GL.CoordTrans instead.
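
For example, a frustum that matches perspective 90 1 1 100 could be set up roughly like this (the bounds are placeholder values, not something tuned for your scene):

import Graphics.Rendering.OpenGL

-- Roughly equivalent to perspective 90 1 1 100: the six arguments are the
-- left/right/bottom/top extents of the near plane plus the near and far
-- clip distances.
setFrustumProjection :: IO ()
setFrustumProjection = do
    matrixMode $= Projection
    loadIdentity
    frustum (-1) 1 (-1) 1 1 100   -- left right bottom top zNear zFar
    matrixMode $= Modelview 0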

PS.: the correct way to do this would be to have a setup function that sets up the projection matrix, and have the display function change only the modelview matrix, if necessary. The above code, however, should work.
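
A rough sketch of that split (assuming a GLUT reshape callback and a Camera record with position, target and upVector accessors; the field-of-view and clip distances are placeholder values):

import Graphics.UI.GLUT
import Data.IORef

-- Set up the projection matrix once per window resize instead of every frame.
reshape :: Size -> IO ()
reshape size@(Size w h) = do
    viewport $= (Position 0 0, size)
    matrixMode $= Projection
    loadIdentity
    perspective 60 (fromIntegral w / fromIntegral h) 0.1 100  -- fovy aspect zNear zFar
    matrixMode $= Modelview 0

-- The display callback then only positions the camera on the modelview matrix.
display :: IORef Camera -> DisplayCallback
display camIO = do
    camera <- get camIO
    clear [ColorBuffer, DepthBuffer]
    loadIdentity
    lookAt (position camera) (target camera) (upVector camera)
    -- renderPrimitive Points with the untransformed 3D points goes here
    swapBuffers
    postRedisplay Nothing

In main you would register the callback with reshapeCallback $= Just reshape.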

PS2.: as pointed out in a comment, the way to implement this depends heavily on the OpenGL version, and I don't know which versions of OpenGL the Haskell bindings support. This implementation is based on OpenGL 2.1 and below (the fixed-function pipeline).
