Question

I have two 3D point clouds in the Point Cloud Library: a reference point cloud (let's call it A) and one with a deformity (let's call it B). Both point clouds were captured from objects that have no or only very minute surface features, except at the edges. Point clouds A and B are also aligned.

  • I want to know if there is an algorithm that can detect the portion missing from B.
  • How can I construct a high-resolution 3D image of the missing portion of B?

Any help is appreciated.


Solution 2

I'm no expert on these things, so these are mostly ideas rather than solutions, and I might be wrong.

But my naive approach would be boolean operations / constructive solid geometry on the two meshes (see also this question at gamedev). If you compute A-B, you get the mesh(es) that contain everything that is in A but not in B, which is exactly the missing portion of B.

There are two issues with this approach, though:

  1. Boolean operations are tricky due to floating point inaccuracy and special cases.
  2. Your meshes are noisy, so their surfaces will mostly not be coincident, even outside of the missing region.

As a result, the difference mesh will contain lots of small "volumes" outside of the actual missing region. You might remedy this by adding some sort of tolerance radius to A during the boolean operation or by applying some smoothing or other post-processing to the result.
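For point clouds rather than meshes, the same difference-with-tolerance idea can be approximated with a simple k-d tree lookup: every point of A that has no neighbor in B within the tolerance radius is treated as part of the missing region. A minimal PCL sketch (the function name and tolerance value are placeholders you would tune to your data):

```cpp
#include <cstdint>
#include <vector>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>

// Collect all points of A that have no counterpart in B within `tolerance`.
// A larger tolerance suppresses the spurious "volumes" caused by noise.
pcl::PointCloud<pcl::PointXYZ>::Ptr
differenceWithTolerance(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& a,
                        const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& b,
                        float tolerance)
{
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(b);

  pcl::PointCloud<pcl::PointXYZ>::Ptr missing(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<int> indices(1);
  std::vector<float> sqr_dists(1);

  for (const auto& p : a->points)
  {
    // Keep the point if B has no neighbor within the tolerance radius.
    if (tree.nearestKSearch(p, 1, indices, sqr_dists) == 0 ||
        sqr_dists[0] > tolerance * tolerance)
      missing->points.push_back(p);
  }
  missing->width = static_cast<std::uint32_t>(missing->points.size());
  missing->height = 1;
  missing->is_dense = true;
  return missing;
}
```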

Another approach might be to do the boolean operation not on the meshes but on implicit functions created from the point clouds (e.g. with moving least squares), and then to create a mesh from the resulting implicit function (e.g. with marching cubes). This might be a more robust solution.
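A minimal sketch of that pipeline in PCL, assuming pcl::MovingLeastSquares for the smooth implicit fit and pcl::MarchingCubesHoppe for the polygonization; the search radius and grid resolution are placeholder values that depend on your point density:

```cpp
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>
#include <pcl/surface/marching_cubes_hoppe.h>
#include <pcl/PolygonMesh.h>

// Smooth a noisy cloud with MLS, then polygonize the implicit surface
// with marching cubes (Hoppe's signed-distance variant).
pcl::PolygonMesh reconstructSurface(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

  // 1. Moving least squares: smooths the cloud and estimates normals.
  pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
  mls.setInputCloud(cloud);
  mls.setSearchMethod(tree);
  mls.setSearchRadius(0.03);        // placeholder; tune to point density
  mls.setComputeNormals(true);

  pcl::PointCloud<pcl::PointNormal>::Ptr smoothed(new pcl::PointCloud<pcl::PointNormal>);
  mls.process(*smoothed);

  // 2. Marching cubes over the implicit distance function of the smoothed cloud.
  pcl::search::KdTree<pcl::PointNormal>::Ptr tree_n(new pcl::search::KdTree<pcl::PointNormal>);
  pcl::MarchingCubesHoppe<pcl::PointNormal> mc;
  mc.setInputCloud(smoothed);
  mc.setSearchMethod(tree_n);
  mc.setGridResolution(64, 64, 64); // placeholder grid density
  mc.setIsoLevel(0.0f);

  pcl::PolygonMesh mesh;
  mc.reconstruct(mesh);
  return mesh;
}
```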

To create an image of the mesh, just render it using OpenGL or DirectX.
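If you are already working in PCL, its built-in visualizer (a VTK wrapper) can display a reconstructed PolygonMesh directly, which saves you from writing raw OpenGL. A minimal sketch:

```cpp
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/PolygonMesh.h>

// Open a window and display the reconstructed mesh of the missing region.
void showMesh(const pcl::PolygonMesh& mesh)
{
  pcl::visualization::PCLVisualizer viewer("missing region");
  viewer.addPolygonMesh(mesh, "mesh");
  viewer.spin();  // blocks until the window is closed
}
```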

Other tips

There are some "spatial change detection" solutions offered by PCL.

Take a look at this link: change detection

It builds octree structures from the point clouds and compares the two octrees for differences.
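Following PCL's change-detection tutorial, a minimal sketch might look like the code below. Note that the detector reports points of the second cloud that occupy voxels empty in the first, so to find the portion missing from B you feed B first and A second; the voxel resolution is a placeholder:

```cpp
#include <vector>
#include <pcl/point_types.h>
#include <pcl/octree/octree_pointcloud_changedetector.h>

// Octree change detection: returns indices of points in A that fall into
// voxels not occupied by B, i.e. an approximation of the missing region.
std::vector<int>
detectMissing(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& b,
              const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& a,
              double voxel_resolution)  // e.g. 0.01; tune to your data
{
  pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(voxel_resolution);

  octree.setInputCloud(b);            // first buffer: the deformed cloud B
  octree.addPointsFromInputCloud();
  octree.switchBuffers();             // swap to the second octree buffer

  octree.setInputCloud(a);            // second buffer: the reference cloud A
  octree.addPointsFromInputCloud();

  std::vector<int> new_point_indices; // indices into A
  octree.getPointIndicesFromNewVoxels(new_point_indices);
  return new_point_indices;           // points of A absent from B
}
```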

As long as both your clouds are organized (you got them from a Kinect, which AFAIK produces clouds organized as regular point grids), you can turn them into depth images. As long as you believe the clouds are properly aligned (your Kinect was stationary, looking at the same scene), you can then use the usual image-processing techniques on the depth images: taking the difference between the two images, smoothing it, and creating a mask image from the difference image using some threshold. Once you have the mask image, you apply it to your B cloud, setting all points outside the mask to NaNs (like here https://stackoverflow.com/a/17282917/1027013), and voila: the 3D image of the part of B which differs from A.
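A rough sketch of just the thresholding-and-masking step, assuming both clouds are organized with identical dimensions and skipping the smoothing/morphology mentioned above; the threshold is a placeholder:

```cpp
#include <cmath>
#include <limits>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

// Keep only the points of B whose depth differs from A by more than
// `threshold`; everything else is set to NaN (same trick as the linked answer).
void keepDifference(const pcl::PointCloud<pcl::PointXYZ>& a,
                    pcl::PointCloud<pcl::PointXYZ>& b,
                    float threshold)
{
  const float nan = std::numeric_limits<float>::quiet_NaN();
  for (int row = 0; row < static_cast<int>(b.height); ++row)
    for (int col = 0; col < static_cast<int>(b.width); ++col)
    {
      const pcl::PointXYZ& pa = a.at(col, row);
      pcl::PointXYZ& pb = b.at(col, row);
      // Treat invalid depth in either cloud as "no change".
      if (!std::isfinite(pa.z) || !std::isfinite(pb.z) ||
          std::fabs(pa.z - pb.z) < threshold)
        pb.x = pb.y = pb.z = nan;
    }
  b.is_dense = false;  // the cloud now contains NaNs
}
```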

Though I know this approach is in use, I have never used it myself and never played with a Kinect. I guess that due to noise and small ground vibrations the produced mask may be too noisy as well, especially at the edges and "silhouette" points of the scene, and that is where image-processing tools applied to the depth masks come to the rescue.

Licensed under: CC-BY-SA with attribution