Question

I have been trying to write my own convolution operator instead of using the built-in one that comes with Java. I applied the built-in convolution operator to this image: link

Using the built-in convolution operator with a Gaussian filter, I got this image: link

Now I run the same image through my code:

public static int convolve(BufferedImage a, int x, int y) {
    int red = 0, green = 0, blue = 0;
    float[] matrix = {
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
        0.2196956447338621f, 0.28209479177387814f, 0.2196956447338621f,
        0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
    };
    for (int i = x; i < x + 3; i++) {
        for (int j = y; j < y + 3; j++) {
            int color = a.getRGB(i, j);
            red += Math.round(((color >> 16) & 0xff) * matrix[(i - x) * 3 + j - y]);
            green += Math.round(((color >> 8) & 0xff) * matrix[(i - x) * 3 + j - y]);
            blue += Math.round(((color >> 0) & 0xff) * matrix[(i - x) * 3 + j - y]);
        }
    }
    return (a.getRGB(x, y) & 0xFF000000) | (red << 16) | (green << 8) | (blue);
}

This is the result I got: link

Also, how do I optimize my code? The built-in convolution operator takes 1 to 2 seconds, while my code, even though it is not yet doing what it is supposed to, takes 5 to 7 seconds!

I accidentally rotated my source image while uploading, so please ignore that.


Solution

First of all, you are needlessly (and wrongly) converting your result from float to int on every iteration of the loop. Your red, green, and blue accumulators should be of type float and should be cast back to int only after the convolution, when the result is packed back into an RGB value:

  float red = 0.0f, green = 0.0f, blue = 0.0f;
  for (int i = x; i < x + 3; i++) {
      for (int j = y; j < y + 3; j++) {
          int color = a.getRGB(i, j);
          red += ((color >> 16) & 0xff) * matrix[(i - x) * 3 + j - y];
          green += ((color >> 8) & 0xff) * matrix[(i - x) * 3 + j - y];
          blue += ((color >> 0) & 0xff) * matrix[(i - x) * 3 + j - y];
      }
  }

  return (a.getRGB(x, y) & 0xFF000000) | (((int) red) << 16) | (((int) green) << 8) | ((int) blue);

The bleeding of colors in your result is caused by the coefficients in matrix being wrong:

0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f + 
0.2196956447338621f + 0.28209479177387814f + 0.2196956447338621f +
0.1710991401561097f + 0.2196956447338621f + 0.1710991401561097f =

1.8452741

The sum of the coefficients in a blurring convolution matrix should be 1.0. With this matrix, a channel value may end up over 255, and when that happens the channel "bleeds" into the next one (blue into green, green into red, etc.). A completely green image with this matrix would result in:

 green = 255 * 1.8452741 ~= 471 = 0x01D7;
 rgb = 0xFF01D700;

Which is a less intense green with a hint of red.
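The wrap-around can be verified in a few lines; this is a minimal sketch using the hypothetical all-green pixel from above:

```java
public class BleedDemo {
    public static void main(String[] args) {
        // A pure-green pixel convolved with a kernel whose coefficients sum to ~1.8452741
        int green = Math.round(255 * 1.8452741f); // 471 = 0x1D7: 9 bits, one too many
        int rgb = 0xFF000000 | (green << 8);      // the ninth bit lands in the red byte
        System.out.printf("0x%08X%n", rgb);       // prints 0xFF01D700
    }
}
```

The red byte ends up as 0x01 purely because the green channel overflowed its 8-bit slot.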

You can fix that by dividing each coefficient by 1.8452741, but you then want to make sure that:

 (int)(255.0f * (sum of coefficients)) = 255

If not, you need to add a check that clamps each channel to 255 so it cannot wrap around into the next one. E.g.:

if (red > 255.0f)
   red = 255.0f;
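Putting both fixes together, here is a sketch that normalizes the kernel from the question by its own sum and clamps a channel before the cast (the value 255.3f is a hypothetical accumulated channel, chosen only to show the clamp):

```java
public class KernelNormalize {
    public static void main(String[] args) {
        float[] matrix = {
            0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
            0.2196956447338621f, 0.28209479177387814f, 0.2196956447338621f,
            0.1710991401561097f, 0.2196956447338621f, 0.1710991401561097f,
        };
        float sum = 0f;
        for (float c : matrix) sum += c;               // ~1.8452741
        for (int k = 0; k < matrix.length; k++) {
            matrix[k] /= sum;                          // coefficients now sum to ~1.0
        }
        // Even with a normalized kernel, clamp defensively before packing:
        float red = 255.3f;                            // hypothetical accumulated value
        int r = (int) Math.min(red, 255.0f);           // clamp instead of wrapping
        System.out.println(r);                         // prints 255
    }
}
```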

Regarding efficiency/optimization:
The needless casting and the calls to Math.round may account for part of the speed difference, but a more likely candidate is the way you are accessing the image. I'm not familiar enough with BufferedImage and Raster to advise you on the most efficient way to access the underlying image buffer.
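One common speed-up, sketched below under the assumption that per-pixel getRGB calls dominate the cost, is to copy all pixels into an int[] once with the bulk BufferedImage.getRGB(startX, startY, w, h, rgbArray, offset, scansize) overload and then read plain array elements inside the convolution loop (FastConvolve and the identity-kernel check are illustrative, not the questioner's code):

```java
import java.awt.image.BufferedImage;

public class FastConvolve {
    // Convolves the pixel at (x, y) against a 3x3 kernel using a pre-fetched pixel array.
    static int convolve(int[] pixels, int width, int x, int y, float[] matrix) {
        float red = 0f, green = 0f, blue = 0f;
        for (int i = x; i < x + 3; i++) {
            for (int j = y; j < y + 3; j++) {
                int color = pixels[j * width + i];       // plain array read, no method call
                float w = matrix[(i - x) * 3 + (j - y)]; // same indexing as the question
                red   += ((color >> 16) & 0xff) * w;
                green += ((color >> 8)  & 0xff) * w;
                blue  += ( color        & 0xff) * w;
            }
        }
        return (pixels[y * width + x] & 0xFF000000)
                | (((int) red) << 16) | (((int) green) << 8) | ((int) blue);
    }

    public static void main(String[] args) {
        BufferedImage a = new BufferedImage(8, 8, BufferedImage.TYPE_INT_ARGB);
        // Fetch the whole image once instead of calling getRGB(x, y) per kernel tap.
        int[] pixels = a.getRGB(0, 0, a.getWidth(), a.getHeight(), null, 0, a.getWidth());
        float[] identity = { 0f, 0f, 0f, 0f, 1f, 0f, 0f, 0f, 0f }; // quick sanity-check kernel
        System.out.printf("0x%08X%n", convolve(pixels, a.getWidth(), 0, 0, identity));
    }
}
```

With the identity kernel, the output pixel should simply be the centre pixel of the 3x3 window, which makes this easy to sanity-check before plugging in a real blur kernel.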

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow