Question

I have a camera class for controlling the camera, with the main function:

void PNDCAMERA::renderMatrix()
{
    float dttime=getElapsedSeconds();
    GetCursorPos(&cmc.p_cursorPos);
    ScreenToClient(hWnd, &cmc.p_cursorPos);

    double d_horangle=((double)cmc.p_cursorPos.x-(double)cmc.p_origin.x)/(double)screenWidth*PI;
    double d_verangle=((double)cmc.p_cursorPos.y-(double)cmc.p_origin.y)/(double)screenHeight*PI;

    cmc.horizontalAngle=d_horangle+cmc.d_horangle_prev;
    cmc.verticalAngle=d_verangle+cmc.d_verangle_prev;

    if(cmc.verticalAngle>PI/2) cmc.verticalAngle=PI/2;
    if(cmc.verticalAngle<-PI/2) cmc.verticalAngle=-PI/2;

    changevAngle(cmc.verticalAngle);
    changehAngle(cmc.horizontalAngle);

    rightVector=glm::vec3(sin(horizontalAngle - PI/2.0f),0,cos(horizontalAngle - PI/2.0f));
    directionVector=glm::vec3(cos(verticalAngle) * sin(horizontalAngle), sin(verticalAngle), cos(verticalAngle) * cos(horizontalAngle));

    upVector=glm::vec3(glm::cross(rightVector,directionVector));

    // note: glm::normalize returns a new vector, so assign the result back
    upVector=glm::normalize(upVector);
    directionVector=glm::normalize(directionVector);
    rightVector=glm::normalize(rightVector);


    if(moveForw)
    {
        cameraPosition=cameraPosition+directionVector*(float)C_SPEED*dttime;
    }
    if(moveBack)
    {
        cameraPosition=cameraPosition-directionVector*(float)C_SPEED*dttime;
    }
    if(moveRight)
    {
        cameraPosition=cameraPosition+rightVector*(float)C_SPEED*dttime;
    }
    if(moveLeft)
    {
        cameraPosition=cameraPosition-rightVector*(float)C_SPEED*dttime;
    }

    glViewport(0,0,screenWidth,screenHeight);
    glScissor(0,0,screenWidth,screenHeight);
    projection_matrix=glm::perspective(60.0f, float(screenWidth) / float(screenHeight), 1.0f, 40000.0f);

    view_matrix = glm::lookAt(
        cameraPosition,
        cameraPosition+directionVector,
        upVector);

    gShader->bindShader();
    gShader->sendUniform4x4("model_matrix",glm::value_ptr(model_matrix));
    gShader->sendUniform4x4("view_matrix",glm::value_ptr(view_matrix));
    gShader->sendUniform4x4("projection_matrix",glm::value_ptr(projection_matrix));
    gShader->sendUniform("camera_position",cameraPosition.x,cameraPosition.y,cameraPosition.z);
    gShader->sendUniform("screen_size",(GLfloat)screenWidth,(GLfloat)screenHeight);
};

It runs smoothly; I can control the angle with the mouse in the X and Y directions, but not around the Z axis (Y is "up" in world space).

In my rendering method I render the terrain grid with one VAO call. The grid itself is a quad at the center (highest LOD), surrounded by L-shaped grids scaled by powers of 2. It is always repositioned in front of the camera, scaled into world space, and displaced by a heightmap.

rcampos.x = round((camera_position.x)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);
rcampos.y = 0;
rcampos.z = round((camera_position.z)/(pow(2,6)*gridscale))*(pow(2,6)*gridscale);

vPos =  vec3(uv.x,0,uv.y)*pow(2,LOD)*gridscale + rcampos;
vPos.y = texture(hmap,vPos.xz/horizontal_scale).r*vertical_scale;

The problem:

The camera starts at the origin, (0,0,0). When I move it far away from that point, the rotation around the X axis becomes discontinuous. It feels as if the mouse cursor were snapped to a grid in screen space, and only positions at the grid points were registered as cursor movement.

I've also recorded the camera position at the point where it gets quite noticeable: about 1,000,000 from the origin in the X or Z direction. I've noticed that this 'lag' increases linearly with distance from the origin.

There is also a little Z-fighting at this point (or a similar effect), even if I use a single plane with no displacement, and no planes can overlap. (I use tessellation shaders and render patches.) Black spots appear on the patches. It may be caused by the fog:

float fc = (view_matrix*vec4(Pos,1)).z/(view_matrix*vec4(Pos,1)).w;
float fResult = exp(-pow(0.00005f*fc, 2.0)); 
fResult = clamp(fResult, 0.0, 1.0);

gl_FragColor = vec4(mix(vec4(0.0,0.0,0.0,0),vec4(n,1),fResult));

Another strange behavior is a slight rotation around the Z axis; this also increases with distance, even though I never apply that kind of rotation.

Variable formats:

The vertices are in unsigned short format, the indices in unsigned int format. The cmc struct is the camera/cursor struct, with double-precision members.

PI and C_SPEED are #define constants.

Additional information:

The grid is created from the above-mentioned ushort array with a spacing of 1. In the shader I scale it by a constant, then use tessellation to achieve the best performance and the largest view distance. The final position of a vertex is calculated in the tessellation evaluation shader.

mat4 MVP = projection_matrix*view_matrix*model_matrix;

As you can see, I send my matrices to the shader with the GLM library.

+Q:

How could the width of a float (or any other format) cause this kind of 'precision loss', or whatever it is that causes the problem? The view_matrix could be the cause, but I still cannot print it to the screen at runtime.

PS: I don't know if this helps, but the view matrix at roughly the 'lag start location' is

-0.49662       -0.49662      0.863129   0
 0.00514956     0.994097     0.108373   0
-0.867953       0.0582648   -0.493217   0
 1.62681e+006   16383.3     -290126     1

EDIT

Comparing the camera position and view matrix:

view matrix = 0.967928      0.967928    0.248814   0
             -0.00387854    0.988207    0.153079   0
             -0.251198      -0.149134   0.956378   0
             -2.88212e+006  89517.1     -694945    1

position =    2.9657e+006,   6741.52,   -46002
Was it helpful?

Solution

It's a long post, so I might not answer everything. I think it is most likely a precision issue. Let's start with the camera rotation problem. I think the main problem is here:

view_matrix = glm::lookAt(
        cameraPosition,
        cameraPosition+directionVector,
        upVector);

As you said, the position is quite a big number, like 2.9657e+006, and look at what glm does in glm::lookAt:

GLM_FUNC_QUALIFIER detail::tmat4x4<T> lookAt
    (
        detail::tvec3<T> const & eye,
        detail::tvec3<T> const & center,
        detail::tvec3<T> const & up
    )
    {
        detail::tvec3<T> f = normalize(center - eye);
        detail::tvec3<T> u = normalize(up);
        detail::tvec3<T> s = normalize(cross(f, u));
        u = cross(s, f);

In your case, eye and center are these big (and very similar) numbers, and glm subtracts one from the other to compute f. This is bad: when you subtract two nearly equal floats, the most significant digits cancel, leaving you with only the least significant (most erroneous) digits. This is known as catastrophic cancellation. You then use this result for further computations, which only amplifies the error.

The Z-fighting is a similar issue. The Z-buffer is not linear; it has the best resolution near the camera because of the perspective divide. The Z-buffer range is set according to your near and far clipping plane values, and you always want the smallest possible ratio between the far and near values (generally far/near should not be greater than 30000). There is a very good explanation of this on the OpenGL wiki, I suggest you read it :)

Back to the camera issue. First, I would consider whether you really need such a huge scene. I don't think so, but if you do, you could try computing your view matrix differently: build the rotation and the translation separately, which could help in your case. This is the way I usually handle the camera:

glm::vec3 cameraPos;
glm::vec3 cameraRot;
glm::vec3 cameraPosLag;
glm::vec3 cameraRotLag;

int ox, oy;
const float inertia = 0.08f; //mouse inertia
const float rotateSpeed = 0.2f; //mouse rotate speed (sensitivity)
const float walkSpeed = 0.25f; //walking speed (wasd)

void updateCameraViewMatrix() {
    //camera inertia
    cameraPosLag += (cameraPos - cameraPosLag) * inertia;
    cameraRotLag += (cameraRot - cameraRotLag) * inertia;
    // view transform
    g_CameraViewMatrix = glm::rotate(glm::mat4(1.0f), cameraRotLag[0], glm::vec3(1.0, 0.0, 0.0));
    g_CameraViewMatrix = glm::rotate(g_CameraViewMatrix, cameraRotLag[1], glm::vec3(0.0, 1.0, 0.0));
    g_CameraViewMatrix = glm::translate(g_CameraViewMatrix, cameraPosLag);
}
void mousePositionChanged(int x, int y) {
    float dx, dy;
    dx = (float) (x - ox);
    dy = (float) (y - oy);

    ox = x;
    oy = y;

    if (mouseRotationEnabled) {
        cameraRot[0] += dy * rotateSpeed;
        cameraRot[1] += dx * rotateSpeed;
    }
}
void keyboardAction(int key, int action) {
    switch (key) {
        case 'S':// backwards
            cameraPos[0] -= g_CameraViewMatrix[0][2] * walkSpeed;
            cameraPos[1] -= g_CameraViewMatrix[1][2] * walkSpeed;
            cameraPos[2] -= g_CameraViewMatrix[2][2] * walkSpeed;
            break;
        ...
    }
}

This way, the position does not affect your rotation. I should add that I adapted this code from the NVIDIA CUDA samples v5.0 (Smoke Particles); I really like it :)

Hope at least some of this helps.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow