Question

I have to read a 3D object from an ASE file. The object turns out to be too big for the world I have to create, so I must scale it down.

At its original size, it is lit correctly.

[Image: teapot properly lit at original size]

However, once I scale it down, it becomes oversaturated.

[Image: teapot oversaturated after scaling down]

The world is centered around (0, 0, 0) and it is 100 meters long (y axis) and 50 meters wide (x axis), my upVector is (0, 0, 1). There are two lights, light0 in (20, 35, 750) and light1 in (-20, -35, 750).

Relevant parts of the code:

void init(void){
     glClearColor(0.827, 0.925, 0.949, 0.0);
     glEnable(GL_DEPTH_TEST);
     glEnable(GL_COLOR_MATERIAL);
     glColorMaterial(GL_FRONT, GL_DIFFUSE);

     glEnable(GL_LIGHT0); 
     glEnable(GL_LIGHT1); 
     glEnable(GL_LIGHTING); 
     glShadeModel(GL_SMOOTH); 

     GLfloat difusa[] = { 1.0f, 1.0f, 1.0f, 1.0f}; // white light 
     glLightfv(GL_LIGHT0, GL_DIFFUSE, difusa);
     glLightfv(GL_LIGHT1, GL_DIFFUSE, difusa);

     loadObjectFromFile("objeto.ASE");
}

void display ( void ) {
     glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
     glMatrixMode(GL_MODELVIEW);
     glLoadIdentity();

     gluLookAt(eyeX, eyeY, eyeZ, atX, atY, atZ, 0.0, 0.0, 1.0);

     GLfloat posicion0[] = { 20.0f, 35.0f, 750.0f, 1.0f};
     glLightfv(GL_LIGHT0, GL_POSITION, posicion0);

     GLfloat posicion1[] = { -20.0f, -35.0f, 750.0f, 1.0f}; 
     glLightfv(GL_LIGHT1, GL_POSITION, posicion1);

     glColor3f(0.749, 0.918, 0.278); 

     glPushMatrix();
          glTranslatef(0.0, 0.0, 1.5);
          // Here comes the problem
          glScalef(0.08, 0.08, 0.08);
          glBegin(GL_TRIANGLES);
          for(int i = 0; i < numFaces; i++){
               glNormal3d(faces3D[i].n.nx, faces3D[i].n.ny, faces3D[i].n.nz);
               glVertex3d(vertex[faces3D[i].s.A].x, vertex[faces3D[i].s.A].y, vertex[faces3D[i].s.A].z);
               glVertex3d(vertex[faces3D[i].s.B].x, vertex[faces3D[i].s.B].y, vertex[faces3D[i].s.B].z);
               glVertex3d(vertex[faces3D[i].s.C].x, vertex[faces3D[i].s.C].y, vertex[faces3D[i].s.C].z);
          }
          glEnd();
     glPopMatrix();

     glutSwapBuffers();
}

Why does lighting fail when the object is scaled down?


Solution

The problem you're running into is that scaling the modelview matrix also changes the matrix normals are transformed with, the "normal matrix". The normal matrix is the transpose of the inverse of (the upper-left 3×3 of) the modelview matrix, so by scaling the modelview matrix down you scale the normal matrix up, because of the inversion step used to obtain it. Your transformed normals end up much longer than unit length, which inflates the lighting terms and oversaturates the object.

Because of that, the transformed normals must be rescaled or renormalized whenever the scale of the modelview matrix is not unitary. Fixed-function OpenGL offers two methods for this: normal normalization (sounds funny, I know) and normal rescaling. You can enable either with

  • glEnable(GL_NORMALIZE);
  • glEnable(GL_RESCALE_NORMAL); (cheaper, but exact only for uniform scaling)

In a shader you'd simply normalize the transformed normal:

#version ...

uniform mat3 mat_normal;

in vec3 vertex_normal;

void main()
{
     ...
     vec3 view_normal = normalize( mat_normal * vertex_normal );
     ...
}

Other suggestions

Depending on whether GL_NORMALIZE or GL_RESCALE_NORMAL is enabled, the OpenGL pipeline renormalizes (or rescales) your normals after transforming them.

Start with glEnable(GL_NORMALIZE) and see if that solves your problem.
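Applied to the init() from the question, the fix is a single added line (a sketch; GL_RESCALE_NORMAL would also work here, since the glScalef in display() is uniform):

```c
void init(void){
     glClearColor(0.827, 0.925, 0.949, 0.0);
     glEnable(GL_DEPTH_TEST);
     glEnable(GL_COLOR_MATERIAL);
     glColorMaterial(GL_FRONT, GL_DIFFUSE);

     /* Renormalize normals after the modelview transform, so the
        glScalef(0.08, 0.08, 0.08) in display() no longer distorts lighting. */
     glEnable(GL_NORMALIZE);

     glEnable(GL_LIGHT0);
     glEnable(GL_LIGHT1);
     glEnable(GL_LIGHTING);
     glShadeModel(GL_SMOOTH);

     GLfloat difusa[] = { 1.0f, 1.0f, 1.0f, 1.0f}; // white light
     glLightfv(GL_LIGHT0, GL_DIFFUSE, difusa);
     glLightfv(GL_LIGHT1, GL_DIFFUSE, difusa);

     loadObjectFromFile("objeto.ASE");
}
```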

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow