Question

I'm a new OpenCV user and I'm working on a project for university. The program takes an input image, blurs it synthetically and later deblurs it. When the synthetically blurred image is deconvolved, boundary artifacts appear because, so far, I haven't implemented boundary conditions yet. Here are a few examples: you can see the unblurred input image, the synthetically blurred one, and the final output I get:

http://answers.opencv.org/upfiles/13953138566866107.png

According to the paper I'm writing the code from, boundary conditions have to be implemented via padding the input image by the point spread function width and creating a mask that indicates which pixels are from the captured region versus from the boundary region.

I apologize if my questions may be silly but:

1. How do I calculate the point spread function width? So far I use a simple 3x3 box blur kernel with 1/9s on the inside. Is 3 the width?

2. If the point spread function width is 3, do I have to pad the input image by adding three pixels on each of the four sides, or do I have to pad it by "covering" the "dark frame" around it that results from the blurring process? From what I understand, those "dark frame" areas contain mean values of the original unblurred image, so it's impossible to reconstruct the starting image by deconvolution in those areas; doing so would just generate and propagate artifacts.

What I'm trying to say is: do I have to add extra pixels to all four sides of the input image, or do I have to "cover" the "dark frame", whose width, from what I understand, is the same as that of the point spread function?

http://answers.opencv.org/upfiles/13953135698274495.png

3. Do I have to pad the unblurred input image or the synthetically blurred one?
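For reference, the synthetic blur I apply is the plain 3x3 box filter mentioned in question 1. A minimal stand-alone sketch of it (array-based illustration, not my actual OpenCV code):

```cpp
#include <vector>

// 3x3 box blur: every kernel weight is 1/9, so the PSF "width" is just
// the kernel side length, 3. Equivalent in spirit to
// cv::blur(src, dst, cv::Size(3, 3)).
std::vector<std::vector<double>> boxBlur3x3(
        const std::vector<std::vector<double>>& img) {
    int h = (int)img.size(), w = (int)img[0].size();
    std::vector<std::vector<double>> out(h, std::vector<double>(w, 0.0));
    // Only the interior is computed: a 3x3 kernel needs a 1-pixel margin
    // (half the kernel size) on each side, which is exactly where the
    // boundary-condition question arises.
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            double s = 0.0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    s += img[y + dy][x + dx];
            out[y][x] = s / 9.0;
        }
    return out;
}
```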

Thank you in advance for your help!

Solution

I've tested the source (adapted to OpenCV) and it works perfectly.

Answers to your questions:

1. Yes, the kernel size in this case is 3.

2. In the source at your link, the convolution is applied to an image region reduced by half the kernel size on each side.

(diagram: source image with the green work area inside the blue half-kernel border)

The image size is equal to your source image (all green and blue areas), but your work area, marked green, is smaller than the whole image: it is reduced relative to the source image by half the kernel size on each side (the blue border).
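To make that reduction concrete, the green work area can be computed like this (a sketch; the Rect struct mirrors cv::Rect, and the function name is mine, not taken from the adapted source):

```cpp
// The "green" work area: the source rectangle shrunk by half the kernel
// size on every side (the "blue" border), so the kernel never reads
// outside the image.
struct Rect { int x, y, width, height; };

Rect workArea(int imgW, int imgH, int kernelSize) {
    int half = kernelSize / 2;   // 3 -> 1-pixel border, 9 -> 4 pixels
    return { half, half, imgW - 2 * half, imgH - 2 * half };
}
```

For a 640x480 image and a 3x3 kernel this gives a 638x478 region starting at (1, 1).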

3. No, you don't have to.

It seems that you have applied a box filter with a kernel size larger than 3.

Here are my results:

Blurred image (box filter 3x3):

(image: blurred result)

Deblurred image:

(image: deblurred result)

You can download my source here: https://www.dropbox.com/s/u11qo8o3q1a8j5f/stochastic_deconvolution_opencv.zip

You'll get ringing at high frequencies (hard edges) when using large kernels.

It can be reduced by increasing the regularization coefficient (this adds some "flatness" to the image).
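A total-variation-style penalty is one common form for this kind of regularizer; a minimal sketch of how reg_weight trades off against flatness (the function name and layout are mine, not taken from the zip above):

```cpp
#include <cmath>
#include <vector>

// TV-style penalty: sums the gradient magnitudes over the image, so a
// larger reg_weight favors flatter images and damps ringing at hard
// edges. Illustrative sketch, not the actual regularizer in the zip.
double tvPenalty(const std::vector<std::vector<double>>& img,
                 double reg_weight) {
    double tv = 0.0;
    for (std::size_t y = 0; y + 1 < img.size(); ++y)
        for (std::size_t x = 0; x + 1 < img[y].size(); ++x) {
            double gx = img[y][x + 1] - img[y][x];  // horizontal gradient
            double gy = img[y + 1][x] - img[y][x];  // vertical gradient
            tv += std::sqrt(gx * gx + gy * gy);
        }
    return reg_weight * tv;  // added to the data-fit term of the energy
}
```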

Here is my result for the kernel from the initial source:

Blurred image:

(image: blurred result)

Deblurred image:

(image: deblurred result)

Try these parameters for your image:

const double reg_weight     = 0.0002;   // regularizer weight
const double sigma          = 9.0;      // mutation standard deviation
const double reset_prob     = 0.005;    // russian roulette chain reset probability
const int    num_iterations = 400;      // number of 'iterations', mostly for output
double       ed             = 0.025;    // starting deposition energy

For the PSF:

// nine equal-weight samples: tap i has weight psf_v[i] at offset (psf_x[i], psf_y[i])
const int    psf_cnt = 9;
const double psf_v[] = { 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0 };
const int    psf_x[] = { -4, -3, -2, -1, 0, 1, 2, 3, 4 };
const int    psf_y[] = { -4, -3, -2, -1, 0, 1, 2, 3, 4 };
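These three parallel arrays describe the PSF as point samples: pairing psf_x[i] with psf_y[i] puts the nine equal taps along the main diagonal. Applying such a sampled PSF to one pixel is just a weighted sum, as this sketch shows (everything except the psf_* constants is my naming):

```cpp
#include <vector>

const int    psf_cnt = 9;
const double psf_v[] = { 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0,
                         1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0 };
const int    psf_x[] = { -4, -3, -2, -1, 0, 1, 2, 3, 4 };
const int    psf_y[] = { -4, -3, -2, -1, 0, 1, 2, 3, 4 };

// One blurred pixel: the weighted sum of the source at each PSF sample
// offset. (x, y) must be at least 4 pixels from every border here, which
// is again why the work area shrinks by half the PSF extent.
double applyPsf(const std::vector<std::vector<double>>& img, int x, int y) {
    double s = 0.0;
    for (int i = 0; i < psf_cnt; ++i)
        s += psf_v[i] * img[y + psf_y[i]][x + psf_x[i]];
    return s;
}
```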

I got the result posted below:

(image: final deblurred result)

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow