Question

My objective is to handle illumination and expression variations in an image. I tried to implement MATLAB code that works with only the important information in the image, i.e. only the "useful" information. To do that, all unimportant information must be removed from the image.

Reference: this paper

Here are my steps:

1) Apply histogram equalization to obtain histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.

2) Apply an SVD approximation to histo_equalized_image. First compute the SVD decomposition ([L D R] = svd(histo_equalized_image)), then use the singular values to build the derived image J = L * power(D, i) * R', where i varies between 1 and 2.

3) Finally, the derived image is combined with the original image: C = (MyGrayImage + a*J) / (1 + a), where a varies from 0 to 1.

4) The steps above alone do not perform well under varying conditions, so finally a wavelet transform is applied to handle those variations (we use only the LL image block). The low-frequency component contains the useful information, while unimportant information is lost in this component. The LL component is insensitive to illumination changes and expression variations.
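For concreteness, steps 1-3 can be sketched in MATLAB as follows (the exponent and weight values are illustrative; note the explicit parentheses in the combination, so that the division applies to the whole sum):

```matlab
% Assuming MyGrayImage is a grayscale image of class double
histo_equalized_image = histeq(MyGrayImage);
[L, D, R] = svd(histo_equalized_image);   % equalized image = L*D*R'
i = 5/4;                                  % exponent between 1 and 2
J = L * power(D, i) * R';                 % element-wise power works: D is diagonal
a = 0.25;                                 % weight between 0 and 1
C = (MyGrayImage + a*J) / (1 + a);        % parentheses: divide the whole sum
```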

I wrote MATLAB code for this, and I would like to know whether my code is correct (and if not, how to correct it). Furthermore, I am interested in whether these steps can be optimized. Can this method be improved, and if so, how?

Here is my MATLAB code:

%Read the RGB image (avoid calling the variable 'image': it shadows a built-in function)
rgb_image = imread('img.jpg');

%Convert it to grayscale
image_gray = rgb2gray(rgb_image);

%Convert it to double
image_double = im2double(image_gray);

%Apply histogram equalization
histo_equalized_image = histeq(image_double);

%Apply the SVD decomposition
[U, S, V] = svd(histo_equalized_image);

%Calculate the derived image (element-wise power is valid here:
%S is diagonal with non-negative entries)
P = U * power(S, 5/4) * V';

%Linearly combine the original image and the derived image
%(step 3 of the method uses the original image, not the equalized one)
a = 0.25;
J = (image_double + a*P) / (1 + a);

%Apply DWT; keep only the LL (approximation) block
[c, s] = wavedec2(J, 2, 'haar');
a1 = appcoef2(c, s, 'haar', 1); % level-1 LL block
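As a side note (a sketch, not part of the original code): if only the level-1 LL block is needed, a single dwt2 call is a lighter alternative to the wavedec2/appcoef2 pair:

```matlab
[LL, LH, HL, HH] = dwt2(J, 'haar');  % LL is the low-frequency approximation block
```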

Solution

  1. You need to define what you mean by "USEFUL" or "important" information, and only then take further steps.

  2. Histogram equalization is a global transformation, which gives different results on different images. You can run an experiment: apply histeq to an image that benefits from it. Then make two copies of the original image, draw a black square (30% of the image area) in one and a white square in the other, apply histeq to each, and compare the results.
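The experiment can be sketched like this (the file name and the square's placement are illustrative assumptions):

```matlab
I = im2double(rgb2gray(imread('img.jpg')));  % hypothetical input image
side = round(sqrt(0.30 * numel(I)));         % square covering ~30% of the area
Iblack = I; Iblack(1:side, 1:side) = 0;      % copy with a black square
Iwhite = I; Iwhite(1:side, 1:side) = 1;      % copy with a white square
figure; montage({histeq(I), histeq(Iblack), histeq(Iwhite)});
% The three results differ everywhere, not just inside the square:
% histeq is driven by the global histogram, so the inserted square
% changes the intensity mapping for the whole image.
```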

"Low frequency component contains the useful information, also, unimportant information gets lost in this component."

Really? Edges and shapes, which are (at least to me) quite important, live in the high frequencies. Again, we need a definition of "useful" information.
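One quick way to see this (a sketch): reconstruct an image from its LL block alone and compare it with the original; edges and fine shapes come out visibly smoothed, because they live in the discarded high-frequency sub-bands.

```matlab
[LL, LH, HL, HH] = dwt2(I, 'haar');          % I: a grayscale double image
zeroed = zeros(size(LH));                    % drop all detail (high-frequency) blocks
I_low = idwt2(LL, zeroed, zeroed, zeroed, 'haar');
figure; imshowpair(I, I_low, 'montage');     % edges are blurred in the LL-only version
```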

I cannot see the theoretical background for why and how your approach would work. Could you explain a bit why you chose this method?

P.S. I'm not sure whether these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow