Question

I have the same object that is shot in two different lighting conditions by the same camera.

Suppose I take the saturation of a red colored component A, which becomes A' in the second picture.

If I know that the saturation of the white letters is B, how can I get a good estimate for B'? The values seem to be dependent, and intuitively the dependence may even be a simple mathematical one, but I may be wrong.

Simply put: find the expected saturation of the grey letters in the second picture, given the original image's saturation of the red and grey letters and the second image's red saturation. A, A', and B range from 0 to 1.

Can I separate this equation for the three HSV channels? Or what type of transformation should I do?

My current code normalizes around a pivot point (1 by default), and I've found that it fails when B approaches zero:

float delta1 = A - pivotpoint;          // red offset from the pivot in image 1
float delta1new = Aprime - pivotpoint;  // red offset from the pivot in image 2
float ratio = delta1new / delta1;       // how the offset scaled between images
float delta2 = B - pivotpoint;          // letters' offset from the pivot in image 1
float delta2new = abs(ratio * delta2);  // scaled offset for the letters
float Bprime = pivotpoint - delta2new;  // estimated letters' saturation in image 2



Solution

I am not sure that I understood what you want to do. But if I'm not mistaken, I think you should try splitting your channels not in HSV but in HLS, and work on the luminance.

    #include "opencv2/opencv.hpp"


    int main(int ac, char **av){

      cv::Mat src = cv::imread("./files/lena.jpg", -1);
      cv::Mat hls;
      // The split below gives three single-channel images: hue, luminance and saturation
      std::vector<cv::Mat> hlsChannels;

      // Convert from Blue-Green-Red (imread's default channel order) to Hue-Luminance-Saturation
      cv::cvtColor( src, hls, CV_BGR2HLS );
      cv::split(hls, hlsChannels);

      cv::Mat hue = hlsChannels.at(0);
      cv::Mat lum = hlsChannels.at(1);
      cv::Mat sat = hlsChannels.at(2);
      // Brighten the image by adding a constant offset to the luminance channel only
      for (int y = 0; y < lum.rows; ++y) {
        for (int x = 0; x < lum.cols; ++x) {
          lum.at<uchar>(y, x) = cv::saturate_cast<uchar>(lum.at<uchar>(y, x) + 20);
        }
      }
      hlsChannels.clear();
      hlsChannels.push_back(hue);
      hlsChannels.push_back(lum);
      hlsChannels.push_back(sat);
      cv::Mat HLSColors;
      cv::Mat BGRColors;
      cv::merge(hlsChannels, HLSColors);
      cv::cvtColor(HLSColors, BGRColors, CV_HLS2BGR);
      cv::imwrite("lumLena.png", BGRColors);
      return 0;
    }

Moreover, take a look at histogram equalization; it can be a good first step in your work.

http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_equalization/histogram_equalization.html#histogram-equalization

Hope it helped!

OTHER TIPS

You should first think about how you want to model the relation you are seeking. To do this, I would bring the images into correspondence manually as well as possible, and look at scatter plots of the information you are interested in. I.e. plot 2D points using the saturation (or other values, see below) of the corresponding pixels as coordinates. This should give you some idea of an appropriate model.

From my experience with exposure matching, I think that a linear model A' = m*A + x will work better than a simple additive or multiplicative one (A' = A + x or A' = m*A). To solve for a linear model you will, however, need at least two corresponding values. Even better, use more and solve in a least-squares sense. You could also think about using a polynomial - you will see what fits best in the scatter plots.

I would also consider applying the correction to the R, G and B channels separately, instead of using HSV. RGB is much easier to handle mathematically and will often give good results as well. In HSV you are essentially operating in a cylindrical coordinate system, while RGB is a simple vector space.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow