Question

I'm working on image warping (cylindrical anamorphosis). The real coordinates of a pixel in the image are x and y, and the transformation gives me the polar coordinates r and theta of that pixel in the transformed image.

I have the transformation functions, but I'm confused about a few things. The transformation gives me polar coordinates, which can easily be converted to Cartesian. But how do I draw the transformed image, given that its size will be different from the original image's size?

EDIT: I have the image shown on the cylinder, and I have the transformation function to convert it into the illusion image, also shown. As this image's size is different from the original's, how do I ensure that all the points of the original image are transformed? Moreover, the coordinates of those points in the transformed image are polar. Can I use OpenCV to build the new image from the transformed polar coordinates?

REF: http://www.physics.uoguelph.ca/phyjlh/morph/Anamorph.pdf

Solution

You have two problems here. In my understanding, the bigger one arises because you are converting discrete integer coordinates into floating-point coordinates. The other problem is that the resulting image's size is larger or smaller than the original image's size. Additionally, the resulting image does not have to be rectangular, so it will either have to be cropped or have its corners filled with black pixels.

According to http://opencv.willowgarage.com/documentation/geometric_image_transformations.html, there is no radial transformation routine.

I'd suggest you do the following:
  1. Upscale the original image to width*2, height*2, and create a new destination image of the same size, set to black. (cvResize, cvZero)
  2. Run over each pixel in the original image and compute its new coordinates. Add 1/9 of the pixel's value to each of the 8 neighbours of the new coordinates and to the new coordinates themselves (CV_IMAGE_ELEM(...) += 1.0/9 * ...); see the sketch after this list.
  3. Downscale the new image back to the original width and height.
  4. Depending on the result, you may want to apply a sharpening routine.
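
One way to read steps 1-3 is the forward-mapping loop sketched below. This is only a sketch under assumptions: a single-channel 8-bit source, the polar origin at (0,0), and two hypothetical functions transformR() and transformTheta() standing in for your transformation; an 8-bit accumulator also loses a little precision compared to a 32-bit float image.

#include <opencv/cv.h>
#include <math.h>

/* Hypothetical stand-ins for your transformation functions. */
double transformR(int x, int y);
double transformTheta(int x, int y);

IplImage *warp_sketch(IplImage *src)   /* src: single-channel, 8-bit */
{
    /* Step 1: a doubled, black destination image. */
    IplImage *big = cvCreateImage(cvSize(src->width * 2, src->height * 2),
                                  IPL_DEPTH_8U, 1);
    cvZero(big);

    /* Step 2: forward-map every source pixel and splat 1/9 of its value
       onto the 3x3 neighbourhood of its new position. */
    for (int y = 0; y < src->height; y++) {
        for (int x = 0; x < src->width; x++) {
            double r = transformR(x, y);
            double theta = transformTheta(x, y);
            /* Polar -> Cartesian; the factor 2 matches the doubled image. */
            int nx = (int)(2.0 * r * cos(theta));
            int ny = (int)(2.0 * r * sin(theta));
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int px = nx + dx, py = ny + dy;
                    if (px >= 0 && px < big->width && py >= 0 && py < big->height)
                        CV_IMAGE_ELEM(big, uchar, py, px) +=
                            (uchar)(CV_IMAGE_ELEM(src, uchar, y, x) / 9.0);
                }
            }
        }
    }

    /* Step 3: downscale back to the original size. */
    IplImage *out = cvCreateImage(cvGetSize(src), IPL_DEPTH_8U, 1);
    cvResize(big, out, CV_INTER_LINEAR);
    cvReleaseImage(&big);
    return out;
}

Splatting onto the 3x3 neighbourhood at double resolution is what hides the holes that a plain forward mapping would otherwise leave between pixels.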

If you want to KEEP some pixels that go out of bounds, that's a different question. Basically, you want to find the Min and Max of the coordinates you receive. For example, if your original image has Min,Max = [0,1024] and your new coordinates have MinNew,MaxNew = [-200,1200], you write a function such as

// Map a transformed coordinate from the [MinNew, MaxNew] range into the
// destination range [Min, Max], so that out-of-bounds pixels are kept.
void normalize(int &convertedx, int &convertedy)
{
    convertedx = MinX + (convertedx - MinNewX) * (MaxX - MinX) / (MaxNewX - MinNewX);
    convertedy = MinY + (convertedy - MinNewY) * (MaxY - MinY) / (MaxNewY - MinNewY);
}
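
To fill in MinNewX/MaxNewX and MinNewY/MaxNewY, a first pass over the image can collect the bounds before any pixel is written. This is again only a sketch that reuses the hypothetical transformR()/transformTheta() functions from above, with MinX/MinY taken as 0 and MaxX/MaxY as the destination width and height; a second pass then transforms each pixel again, runs the result through normalize(), and writes it into the destination as in the earlier loop.

#include <limits.h>

/* Bounds of the transformed coordinates (filled by the first pass) and of
   the destination image. */
int MinNewX, MaxNewX, MinNewY, MaxNewY;
int MinX = 0, MaxX, MinY = 0, MaxY;

void find_bounds(IplImage *src)
{
    MinNewX = MinNewY = INT_MAX;
    MaxNewX = MaxNewY = INT_MIN;
    for (int y = 0; y < src->height; y++) {
        for (int x = 0; x < src->width; x++) {
            double r = transformR(x, y);      /* hypothetical, as above */
            double theta = transformTheta(x, y);
            int nx = (int)(r * cos(theta));
            int ny = (int)(r * sin(theta));
            if (nx < MinNewX) MinNewX = nx;
            if (nx > MaxNewX) MaxNewX = nx;
            if (ny < MinNewY) MinNewY = ny;
            if (ny > MaxNewY) MaxNewY = ny;
        }
    }
}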