Question

I have a Blender model, and below is an image of how it renders when I load it into Python. It looks like the normals are all messed up. I am using the correct normal for each vertex and exporting them in the correct order; I verified in the Blender console that the exported file has the right data.

I know that I had to rotate the model in Python because the z axis is different, so I'm not sure if the normals' z is pointing in the wrong direction.

I'm using pyglet. Has anyone ever had this problem before? Any ideas of what I can do to try to fix it?

I'm not sure if this is an OpenGL or Python problem.

OpenGL setup code:

    glMatrixMode(GL_PROJECTION)
    zNear = 0.01
    zFar = 1000.0
    fieldOfView = 45.0
    size = zNear * math.tan(math.radians(fieldOfView) / 2.0)
    glFrustum(-size, size, -size / (w / h), size / (w / h), zNear, zFar)
    glViewport(0, 0, w, h)
    # Set-up projection matrix
    # TODO


    glMatrixMode(GL_MODELVIEW)
    glShadeModel(GL_SMOOTH)
    glEnable(GL_LIGHTING)
    glEnable(GL_LIGHT0)


    light0Ambient = (GLfloat * 4)(*[])
    light0Ambient[0] = 0.2
    light0Ambient[1] = 0.2
    light0Ambient[2] = 0.2
    light0Ambient[3] = 1.0
    glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient)


    lightpos = (GLfloat * 3)(*[])
    lightpos[0] = 5.0
    lightpos[1] = 5.0
    lightpos[2] = 5.0
    glLightfv(GL_LIGHT0, GL_POSITION, lightpos)


    tempLV = self.kjgCreateVectorWithStartandEndPoints((5.0,5.0,5.0), (0.0,0.0,-3.0))
    lightVector = (GLfloat * 3)(*[])
    lightVector[0] = tempLV[0]
    lightVector[1] = tempLV[1]
    lightVector[2] = tempLV[2]
    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, lightVector)

    glLoadIdentity( )
    glTranslatef(0.0, 2.0, -18.0)
    #glScalef(0.4, 0.4, 0.4)
    glRotatef(-90, 1.0, 0.0, 0.0)

Draw code:

    for face in self.faces:
        #print group
        if len(face) == 3:
            glBegin(GL_TRIANGLES)
        elif len(face) == 4:
            glBegin(GL_QUADS)
        else:
            glBegin(GL_POLYGON)
        for i in face:
            if i in (104, 16, 18, 102):
                glVertex3f(*self.vertices[i])
                color = self.calculateVertexIntensity((.5, .5, .5), self.normals[i], self.vertices[i])
                glColor3f(*color)
                glNormal3f(*self.normals[i])
        glEnd()

(screenshot: the model rendered with the broken-looking shading)


Solution

Right now you are specifying the normal after the vertex instead of before, so effectively you are assigning the normal of vertex X to vertex X+1. OpenGL latches the current normal at the moment glVertex is called, so glNormal has to come first. This is the most important flaw in the current code.

Also, what's that calculateVertexIntensity? First of all, lighting is enabled in your code, so glColor is going to be ignored anyway; but it also looks like you're doing something unnecessary here: the OpenGL fixed-function pipeline already calculates the vertex intensity based on the glLight* and glMaterial* settings.

Also, you might want to normalize your lightVector; I'm not sure whether OpenGL will normalize it for you.
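A hypothetical `normalized` helper (plain Python, not part of the question's code) that you could run `tempLV` through before copying it into `lightVector`:

```python
import math

def normalized(v):
    """Scale a 3-component vector to unit length (hypothetical helper)."""
    length = math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2])
    return (v[0] / length, v[1] / length, v[2] / length)

# e.g. the spot direction components would then come from normalized(tempLV)
```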

(Also, make sure to check out the more recent OpenGL features; you're using deprecated functions right now, and because of that you'll soon hit a performance barrier. The first things to look for are vertex arrays or VBOs; after that, shaders.)

OTHER TIPS

This looks like a problem with your normals.

The data in self.normals is probably wrong: you should recalculate the normals, making sure that you always take the vertices in anticlockwise order around each face (as seen from outside the model) when computing its normal.
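For a flat face, that calculation can be sketched with a hypothetical `face_normal` helper (ideally the exported normals would still come straight from Blender, of course):

```python
import math

def face_normal(v0, v1, v2):
    """Unit normal of a face whose vertices are given in anticlockwise order."""
    # Two edge vectors out of v0
    ax, ay, az = v1[0] - v0[0], v1[1] - v0[1], v1[2] - v0[2]
    bx, by, bz = v2[0] - v0[0], v2[1] - v0[1], v2[2] - v0[2]
    # Cross product a x b points towards the viewer for anticlockwise winding
    nx, ny, nz = ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)
```

Feeding the same three vertices in clockwise order flips the sign of the result, which is the kind of inside-out lighting the question describes.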

(also, you should be calling glNormal before drawing each vertex / face)

(also also, I don't know what's going on when you calculate the colour for each vertex: but check that that isn't causing problems)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow