Question

A bit of background

I am writing a simple ray tracer in C++. I have most of the core complete, but I don't understand how to compute the world coordinates of a pixel on the image plane. I need this location so that I can cast the ray into the world.

Currently I have a Camera with a position (a.k.a. my perspective reference point) and a direction vector which is not normalized. The direction's length determines where the center of the image plane lies (at position + direction), and the vector itself indicates which way the camera is facing.

There are other values associated with the camera, but they should not be relevant.

My image coordinates will range from -1 to 1, and the perspective (focal length) will change based on the length of the camera's direction vector.

What I need help with

I need to go from pixel coordinates (say [0, 255] in an image 256 pixels on each side) to my world coordinates.

I also want to program this so that no matter where the camera is placed and where it is directed, I can find the pixel's position in world coordinates. (Currently the camera will almost always be centered at the origin, looking down the negative z-axis, but I would like to program this with future changes in mind.) It is also important to know whether this code should be pushed down into my threaded code as well; otherwise it will be calculated by the main thread, and the resulting ray will then be used in the threaded code.
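For the camera model described above, one direct approach is to build an orthonormal basis for the camera and step along the image plane. Below is a minimal sketch under those assumptions; `pos` is the camera position, `dir` is the non-normalized direction whose length is the focal distance, and the `up` vector (like every name here) is my own addition, since the question's camera does not include one:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

    Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }

    Vec3 normalize(Vec3 a) {
        float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
        return a * (1.0f / len);
    }

    // World-space point on the image plane for pixel (px, py) of a
    // width x height image, with image coordinates spanning [-1, 1].
    Vec3 pixelToWorld(int px, int py, int width, int height,
                      Vec3 pos, Vec3 dir, Vec3 up)
    {
        // Orthonormal camera basis; right and trueUp span the image plane.
        Vec3 forward = normalize(dir);
        Vec3 right   = normalize(cross(forward, up));
        Vec3 trueUp  = cross(right, forward);

        // Map pixel centers into [-1, 1]; flip v so row 0 is the top row.
        float u = (px + 0.5f) / width  * 2.0f - 1.0f;
        float v = 1.0f - (py + 0.5f) / height * 2.0f;

        // The image-plane center sits at pos + dir (dir's length = focal distance).
        Vec3 center = pos + dir;
        return center + right * u + trueUp * v;
    }

The primary ray for pixel (px, py) is then origin = pos, direction = normalize(pixelToWorld(px, py, ...) - pos). Since this is only a handful of vector operations, it is usually cheap enough to compute per pixel inside the threaded render loop rather than precomputing rays on the main thread.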

Example of the general idea:
[image omitted] (source: in.tum.de)

I did not make this image and it is only there to give an idea of what I need.

Please leave comments if you need any additional info. Otherwise, I would like a simple theory/code example of what to do.

Solution

Basically, you have to apply the inverse of the V * MVP (viewport times model-view-projection) transform, which is what maps a world-space point into the unit cube and on to window coordinates; in other words, unproject the pixel, which is exactly what gluUnProject does. Look at the following URLs for programming help:

http://nehe.gamedev.net/article/using_gluunproject/16013/
https://sites.google.com/site/vamsikrishnav/gluunproject
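To make that concrete, here is a minimal sketch of the unproject route, assuming GLM (https://github.com/g-truc/glm) as the matrix library; the Ray struct and all names are my own illustration, not part of the original answer. glm::unProject applies the inverse of the viewport * projection * view mapping, the same operation as the gluUnProject calls in the links:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    struct Ray { glm::vec3 origin, dir; };

    // World-space ray through pixel (px, py) of a width x height image.
    Ray rayThroughPixel(float px, float py, int width, int height,
                        const glm::mat4& view, const glm::mat4& proj)
    {
        glm::vec4 viewport(0.0f, 0.0f, float(width), float(height));

        // glm::unProject expects window coordinates with y pointing up
        // (the OpenGL convention), so flip the row index first.
        float wy = float(height) - py;

        // Unproject the pixel at the near and far planes (window z = 0 and 1);
        // each call applies the inverse of viewport * proj * view.
        glm::vec3 nearPt = glm::unProject(glm::vec3(px, wy, 0.0f), view, proj, viewport);
        glm::vec3 farPt  = glm::unProject(glm::vec3(px, wy, 1.0f), view, proj, viewport);

        return { nearPt, glm::normalize(farPt - nearPt) };
    }

If the ray tracer never builds view/projection matrices in the first place, the direct camera-basis construction sketched in the question above is simpler; the matrix route mainly pays off when those matrices already exist for other reasons.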

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow