Question

I'm trying to implement the flood-fill algorithm, but glReadPixels() returns float RGB values that are slightly different from the values I set, which causes the algorithm to fail. Why is this happening?

I'm outputting the returned RGB values to check them:

#include <iostream>
#include <GL/glut.h>
using namespace std;

float boundaryColor[3]={0,0,0}, interiorColor[3]={0,0,0.5}, fillColor[3]={1,0,0};
float readPixel[3];

void init(void) {
    glClearColor(0,0,0.5,0);
    glMatrixMode(GL_PROJECTION);
    gluOrtho2D(0,500,0,500);
}

void setPixel(int x, int y) {
    glColor3fv(fillColor);
    glBegin(GL_POINTS);
        glVertex2f(x, y);
    glEnd();
}

void getPixel(int x, int y, float *color) {
    glReadPixels(x,y,1,1,GL_RGB,GL_FLOAT,color);
}

void floodFill(int x,int y) {
    getPixel(x,y,readPixel);

    //outputting values here to check
    cout<<readPixel[0]<<endl;
    cout<<readPixel[1]<<endl;
    cout<<readPixel[2]<<endl;

    if( readPixel[0]==interiorColor[0] && readPixel[1]==interiorColor[1] && readPixel[2]==interiorColor[2] ) {
        setPixel(x,y);
        floodFill(x+1,y);
        floodFill(x,y+1);
        floodFill(x-1,y);
        floodFill(x,y-1);
    }
}

void display() {
    glClear(GL_COLOR_BUFFER_BIT);

    glColor3fv(boundaryColor);
    glLineWidth(3); 

    glBegin(GL_LINE_STRIP);
        glVertex2i(150,150);        
        glVertex2i(150,350);        
        glVertex2i(350,350);        
        glVertex2i(350,150);
        glVertex2i(150,150);
    glEnd();

    floodFill(200,200);

    glFlush();
}

int main(int argc,char** argv) {
    glutInit(&argc,argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition(100,100);
    glutInitWindowSize(500,500);
    glutCreateWindow("Flood fill");

    init();
    glutDisplayFunc(display);
    glutMainLoop();
}

Solution

"I expected the color to be read as 0.0, 0.0, 0.5."

Why? You don't appear to be rendering to a floating-point framebuffer, which means you're most likely rendering to a GL_RGBA8 buffer that stores normalized integers (i.e., it maps the integer range [0, 255] onto the floating-point range [0, 1]). An 8-bit normalized integer cannot store 0.5 exactly; the closest it can get is 127/255, which is about 0.498.
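
You can see that round trip in a minimal standalone sketch in plain C++ (no OpenGL required; it just mimics how a float is quantized to an 8-bit normalized value and converted back):

#include <iostream>

int main() {
    float requested = 0.5f;
    // Quantize to an 8-bit normalized integer, as a GL_RGBA8 buffer does.
    // Truncation stores 127; an implementation that rounds would store 128.
    int stored = static_cast<int>(requested * 255.0f);
    float readBack = stored / 255.0f;
    std::cout << readBack << '\n';  // prints 0.498039 (128/255 would be 0.501961)
}

Either way, what comes back from glReadPixels() is not exactly 0.5, so the exact == comparison in floodFill() never succeeds.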

If you want "accurate values", then you should render to a render target that can actually store them accurately, i.e., not the screen.
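
If switching render targets isn't practical for an exercise like this, a common workaround (my addition, not something the answer above proposes) is to accept one quantization step of error and compare colors with a tolerance instead of ==. A minimal sketch, reusing the question's array convention; colorsMatch is a hypothetical helper name:

#include <cmath>

// Hypothetical helper: true when two RGB colors agree to within eps per channel.
// One 8-bit quantization step is 1/255, so that is a natural tolerance.
bool colorsMatch(const float *a, const float *b, float eps = 1.0f / 255.0f) {
    return std::fabs(a[0] - b[0]) <= eps &&
           std::fabs(a[1] - b[1]) <= eps &&
           std::fabs(a[2] - b[2]) <= eps;
}

// In floodFill(), the exact comparison would then become:
//     if (colorsMatch(readPixel, interiorColor)) { ... }

If you do need exact float round trips, the answer's route is to render into an off-screen framebuffer object with a floating-point color attachment (e.g., a GL_RGBA32F texture) and call glReadPixels() on that instead of the default framebuffer.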

Licensed under: CC-BY-SA with attribution