Question

I would like to render a 3D scene onto a 2D plane with ray tracing. Eventually I would like to use it for volume rendering, but I'm struggling with the basics here. I have a three.js scene with the viewing plane attached to the camera (in front of it, of course).

[Image: The Setup: The Scene]

Then, in the shader, I shoot a ray from the camera through each point (250x250) in the plane. Behind the plane is a 41x41x41 volume (essentially a cube). If a ray goes through the cube, the point in the viewing plane that the ray crossed is rendered red; otherwise the point is black. Unfortunately, this only works if you look at the cube from the front. Here's the example: http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html

If you look at the cube from a different angle (you can move the camera with the mouse), we don't get a cube rendered onto the viewing plane as we would like, but a square with some stray pixels along the sides.

This is the ray-tracing code:

Vertex Shader:

// Declarations assumed from the full source (see the pastebin link
// below): "camera" is the camera position passed in as a uniform,
// "PointIntensity" is handed on to the fragment shader, and STEPS
// (used further down) bounds the ray march.
uniform vec3 camera;
varying float PointIntensity;

// Returns true if posVec lies inside the 41x41x41 volume
// (in coordinates already shifted into the 0..41 range)
bool inside(vec3 posVec){
    bool value = false;

    if(posVec.x < 0.0 || posVec.x > 41.0){
        value = false;
    }
    else if(posVec.y < 0.0 || posVec.y > 41.0){
        value = false;
    }
    else if(posVec.z < 0.0 || posVec.z > 41.0){
        value = false;
    }
    else{
        value = true;
    }
    return value;
}

float getDensity(vec3 PointPos){

    float stepsize = 1.0;    // step length while inside the volume
    float emptyStep = 15.0;  // larger leap used to skip empty space

    vec3 leap;
    bool hit = false;
    float density = 0.000;

    // Ray direction from the camera through the current point in the plane
    vec3 dir = PointPos - camera;
    vec3 RayDirection = normalize(dir);
    vec3 start = PointPos;

    for(int i = 0; i < STEPS; i++){

        // The volume is centred on the origin, so shift the sample
        // point into the 0..41 range that inside() expects
        vec3 alteredPosition = start + vec3(20.5);

        bool insideTest = inside(alteredPosition);

        if(insideTest){
            // advance from the start position
            start = start + RayDirection * stepsize;
            hit = true;
        }else{
            // try a larger leap; fall back to a single step if the
            // leap would land inside the volume (note the same 20.5
            // offset has to be applied before testing)
            leap = start + RayDirection * emptyStep;
            bool tooFar = inside(leap + vec3(20.5));
            if(tooFar){
                start = start + RayDirection * stepsize;
            }else{
                start = leap;
            }
        }
    }

    if(hit){
        density = 1.000;
    }

    return density;
}


void main() {
    PointIntensity = getDensity(position);
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}

Fragment Shader:

varying float PointIntensity;

void main() {
    // Rays that traversed the volume (the cube) leave a red point on the
    // view plane; rays that only crossed empty space leave a black point
    gl_FragColor = vec4(PointIntensity, 0.0, 0.0, 1.0);
}

Full Code: http://pastebin.com/4YmWL0u1

The same code, running: http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html

I would be very glad if somebody had any tips on what I did wrong here.

EDIT:

I updated the example with the changes that Mark Lundin proposed, but unfortunately I still only get a red square when moving the camera (though no stray pixels on the sides):

mat4 uInvMVProjMatrix = modelViewMatrix * inverseProjectionMatrix;

Here inverseProjectionMatrix is the three.js camera property projectionMatrixInverse, passed to the shader as a uniform. The unproject function is then called for every point in the view plane with its uv coordinates.
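As a side note on constructing this matrix: the inverse of a matrix product reverses the order of its factors, so the inverse of projectionMatrix * modelViewMatrix is the product of the two individual inverses in swapped order. A minimal sketch of that identity in GLSL, where inverseModelViewMatrix is a hypothetical extra uniform that the code above does not pass in:

// inv(P * MV) == inv(MV) * inv(P), so both individual inverses are
// needed; "inverseModelViewMatrix" is a hypothetical uniform here
uniform mat4 inverseModelViewMatrix;
uniform mat4 inverseProjectionMatrix;

mat4 getInvMVProjMatrix() {
    return inverseModelViewMatrix * inverseProjectionMatrix;
}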

The new code is here:

http://pastebin.com/Dxh5C9XX

and running here:

http://ec2-54-244-155-66.us-west-2.compute.amazonaws.com/example.html

To see that the camera is actually being moved, you can press x, y, or z to get the camera's current x, y, or z coordinate.


Solution

The reason you're seeing a square rather than a 3D volume is that your ray-tracing method doesn't take the camera's orientation or projection into account. As you move the camera with the trackball, its orientation changes, so this should be included in your calculation. Secondly, the camera's projection matrix should also be used to project the coordinates of the plane into 3D space. You can achieve this with something like the following:

vec3 unproject( vec2 coord ){
    // map the 0..1 plane coordinate to -1..1 NDC on the near plane
    // (z becomes -1 after the scaling below), apply the inverse
    // model-view-projection matrix, then dehomogenise
    vec4 screen = vec4( coord, 0.0, 1.0 );
    vec4 homogeneous = uInvMVProjMatrix * 2.0 * ( screen - vec4( 0.5 ) );
    return homogeneous.xyz / homogeneous.w;
}

where coord is the 2D coordinate of your plane and uInvMVProjMatrix is the inverse of the model-view-projection matrix. This will return a vec3 that you can use to test against intersection.
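A minimal sketch of how this could be wired into the vertex shader above, assuming the plane's built-in uv attribute serves as the 2D coordinate and that getDensity is reworked into a hypothetical variant taking an explicit ray origin and direction (both assumptions, not part of the original answer):

uniform mat4 uInvMVProjMatrix; // inverse model-view-projection, uploaded from three.js

void main() {
    // unproject the plane's uv coordinate to a world-space point on
    // the near plane, then march from the camera through that point
    vec3 rayStart = unproject( uv );
    vec3 rayDir = normalize( rayStart - camera );

    // hypothetical variant of getDensity taking origin and direction
    PointIntensity = getDensityAlongRay( rayStart, rayDir );

    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}

This way the ray direction follows the camera as it orbits, instead of always pointing straight ahead as in the original code.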

Licensed under: CC-BY-SA with attribution