Question

I am trying to build a point-cloud map of a user with multiple Kinects in Processing. I capture the user's front and back with two Kinects on opposite sides and generate both point clouds.

The trouble is that the two point clouds' X/Y/Z coordinates are not synchronized; both are simply drawn on screen and the result looks messy. Is there a way to calculate, or derive from a comparison between them, a transformation that moves the second point cloud so it "joins" the first? I could translate the position manually, but if I move the sensors it will go out of alignment again.


Solution

Supposing all the Kinects are stationary, I guess you would have to go in this order:

  1. decide on which Kinect to use as a global reference,
  2. get parameters for a 3D transformation for each of the other Kinects - I'd try to use PMatrix3D and applyMatrix(), although it may be slow,
  3. apply the transformations to each of the other Kinects' point clouds and draw the clouds (a minimal sketch of this step follows the list).
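A minimal sketch of steps 2 and 3, assuming the two clouds are already available as PVector arrays each frame (cloudA, cloudB and toReference are placeholder names, not part of any Kinect library) and that the calibration matrix has already been worked out:

```java
// Placeholder arrays; real code would refill these every frame from the
// two Kinect libraries (e.g. SimpleOpenNI) before drawing.
PVector[] cloudA = new PVector[0];         // reference Kinect (global coordinates)
PVector[] cloudB = new PVector[0];         // second Kinect (its own coordinates)
PMatrix3D toReference = new PMatrix3D();   // maps Kinect B space into Kinect A space

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(0);

  // Reference cloud is drawn as-is; it defines the global coordinate frame.
  stroke(255);
  for (PVector p : cloudA) point(p.x, p.y, p.z);

  // Second cloud is drawn through the calibration matrix.
  pushMatrix();
  applyMatrix(toReference);   // PMatrix3D overload of applyMatrix()
  stroke(0, 255, 0);
  for (PVector p : cloudB) point(p.x, p.y, p.z);
  popMatrix();
}
```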

I don't (yet) know how to derive the parameters of a Procrustes transformation, but assuming they won't change, you'd probably have to set up multiple reference points, for example by displaying the point clouds from each pair of Kinects and registering points you know are the same in both clouds. After collecting enough of them, construct a PMatrix3D and apply it inside pushMatrix()/popMatrix(). This is the approach used in this video: http://www.youtube.com/watch?v=ujUNj1RDL4I
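As a rough illustration of turning registered point pairs into a PMatrix3D, here is a sketch that estimates only the translation between the clouds from their paired reference points (the function and parameter names are assumptions, not from any library). A full Procrustes/Kabsch fit would also recover the rotation, which needs an SVD from a linear-algebra library and is not shown here.

```java
// Hedged sketch: estimate only the translation that moves Kinect B's picked
// reference points onto the matching points seen by Kinect A.
PMatrix3D estimateTranslation(PVector[] pairedB, PVector[] pairedA) {
  PVector centroidB = new PVector();
  PVector centroidA = new PVector();
  for (PVector p : pairedB) centroidB.add(p);
  for (PVector p : pairedA) centroidA.add(p);
  centroidB.div(pairedB.length);
  centroidA.div(pairedA.length);

  // Offset that moves B's centroid onto A's centroid.
  PVector t = PVector.sub(centroidA, centroidB);

  PMatrix3D m = new PMatrix3D();     // starts as identity
  m.translate(t.x, t.y, t.z);
  return m;                          // usable with applyMatrix() as above
}
```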

An alternative approach would be to use an Iterative Closest Point (ICP) algorithm and construct a 3D transform from its output. I'd really like to find an ICP or PCL (Point Cloud Library) binding for Processing, if anyone knows a good one.
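If an ICP implementation does become available, its typical output of a 3×3 rotation and a translation vector (the row-major array layout here is an assumption about that output) can be packed into a PMatrix3D and used exactly like the calibration matrix above:

```java
// Pack a 3x3 rotation R (row-major) and translation t, e.g. from an ICP result,
// into a PMatrix3D suitable for applyMatrix().
PMatrix3D rigidToMatrix(float[][] R, PVector t) {
  return new PMatrix3D(
    R[0][0], R[0][1], R[0][2], t.x,
    R[1][0], R[1][1], R[1][2], t.y,
    R[2][0], R[2][1], R[2][2], t.z,
    0, 0, 0, 1);
}
```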

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow