Question

I'm trying to write a C++ program that applies a flatfield to an image. It's supposed to load an image and divide it by another loaded image, then export it as a PNG. My program compiles correctly, but only produces an all-black image.

#include <climits>
#include <stdio.h>
#include <string>
#include <opencv/cv.h>
#include <opencv/highgui.h>

using namespace cv;
using namespace std;

int main( int argc, char *argv[] )
{
    const char *flat_file = argv[1];
    const char *img_file = argv[2];
    const char *out_file = argv[3];

    Mat flat_img;
    flat_img = imread( flat_file, CV_LOAD_IMAGE_ANYCOLOR | CV_LOAD_IMAGE_ANYDEPTH );
    Mat image;
    image = imread( img_file, CV_LOAD_IMAGE_ANYCOLOR | CV_LOAD_IMAGE_ANYDEPTH );

    GaussianBlur( flat_img, flat_img, cv::Size(3,3), 0, 0, BORDER_REPLICATE );
    divide( image, flat_img, image, 1.0, -1 );
    imwrite( out_file, image );

    return 0;
}

All input files are 16-bit TIFFs. Currently, the resulting output file is a 16-bit PNG that has the correct pixel dimensions, but is completely black.

I'm very new to both C++ and OpenCV, so I'm not entirely sure what I'm missing. This thread seems to suggest that floating-point values might be the cause. I'm fine with converting out of floating point, but I need to maintain the 16-bit nature of my source. I've tried adding the line image.convertTo( image, CV_16U ); after the divide command, with both CV_16U and CV_8U, and neither worked.

Any and all help is greatly appreciated!

EDIT

Per the suggestions below, I added some imshow commands to check that OpenCV was actually processing things correctly. It seems that everything works fine until my divide command; after that, my image array just ends up blank.

Was it helpful?

Solution

Turns out it was indeed my divide command. I had been basing my code on a similar application written by a coworker, so I went and looked at how they had done the division. I'm not 100% sure what this is accomplishing, but it works when I change my divide to this:

divide( image, flat_img, image, image.depth() == 16 ? UCHAR_MAX : USHRT_MAX );

My guess is that the conditional is choosing the scale factor for the division based on the depth of the image. One lingering question I still have is what the difference would be between what I have and divide( image, flat_img, image, image.depth() == 8 ? UCHAR_MAX : USHRT_MAX );. Both produce what appears to be the same image. Mostly, I'm trying to understand what's happening a bit better.

Other tips

Chances are your output image is already correct: if you open a 16-bit image in a normal image viewer, it usually looks completely dark, because most viewers only display 8-bit images correctly.

To check this, first try displaying the 16-bit image using OpenCV's highgui: call imshow("test", image); waitKey(0); and see whether the image is displayed correctly. If so, you can convert your image to 8-bit with Mat img8bit; image.convertTo(img8bit, CV_8U, 255./65535.); and imwrite this new Mat.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow