This is a well-known problem for stereo vision systems. I had the same problem a while back; the original question I posted can be found here. What I was trying to do was similar to this. However, after a lot of research I came to the conclusion that an already-captured dataset cannot be easily aligned.
On the other hand, while recording the dataset you can easily use a function call to align the RGB and depth data. This method is available in both OpenNI and the Kinect SDK (the functionality is the same, though the function names differ).
It looks like you are using the Kinect SDK to capture the dataset; to align data with the Kinect SDK you can use MapDepthFrameToColorFrame.
Since you have also mentioned using OpenNI, have a look at the AlternativeViewPointCapability.
I have no experience with the Kinect SDK, but with OpenNI v1.5 this whole problem was solved by making the following function call before registering the recorder node:
depth.GetAlternativeViewPointCap().SetViewPoint(image);
where image is the image generator node and depth is the depth generator node. This was with the older SDK, which has since been replaced by OpenNI 2.0. If you are using the latest SDK the function call may be different, but the overall procedure should be similar.
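To give an idea of what such a registration call does under the hood: each depth pixel is back-projected to a 3D point using the depth camera's intrinsics, transformed by the depth-to-color extrinsics, and re-projected into the color image. Here is a minimal, self-contained sketch of that mapping; the intrinsics and baseline values in it are made-up placeholders, not real Kinect calibration data, and for brevity the rotation between the two cameras is assumed to be identity:

```cpp
#include <cmath>
#include <iostream>

// Pinhole camera intrinsics (focal lengths and principal point, in pixels).
struct Intrinsics { double fx, fy, cx, cy; };

// Map a depth pixel (u, v) with depth z (metres) into color-image
// coordinates (uc, vc). Assumes identity rotation between the cameras
// and a translation tx along the baseline (both are illustrative).
void depthPixelToColorPixel(const Intrinsics& d, const Intrinsics& c,
                            double tx,               // baseline in metres
                            double u, double v, double z,
                            double& uc, double& vc) {
    // 1. Back-project the depth pixel to a 3D point in the depth frame.
    double X = (u - d.cx) * z / d.fx;
    double Y = (v - d.cy) * z / d.fy;
    // 2. Apply the depth-to-color extrinsics (identity rotation assumed).
    X += tx;
    // 3. Project the 3D point into the color image.
    uc = c.fx * X / z + c.cx;
    vc = c.fy * Y / z + c.cy;
}
```

Note that the horizontal shift depends on z, which is why a simple constant pixel offset can never align the two images: the SDK's registration functions perform this per-pixel reprojection using the factory calibration.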
I am also adding some example images:
Without the above alignment call, the depth edges were not aligned with the RGB image.
With the call, the depth edges align perfectly (some infrared shadow regions still show edges, but those are just invalid depth regions).