Question

How can we access the point cloud in the Leap Motion API? One feature that led me to purchase it was the point cloud demo from their promo video, but I can't seem to locate any documentation about it, and user replies on the forums seem mixed. Am I just missing something?

I'm looking to use the Leap Motion as a sort of cheap 3D scanner.

Solution

That demo was clearly a mockup that simulated a 3-D model of the human hand, not actual point cloud data. You can tell because it displayed points that the sensor could not possibly have read, due to occlusion.

orion78fr points to one forum post on this, but the transcript of an interview with the founders provides more information straight from the source:

  1. Can you please allow access to cloud points in SDK?

David: So I think sometimes people have a misperception as to really how things work in our hardware. It’s very different from other things like the Kinect, and in normal device operation we have very different priorities than most other technologies. Our priority is precision, small movements, very low latency, very low CPU usage - so in order to do that we will often be making sacrifices that make what the device is doing completely not applicable to what I think you’re getting at, which is 3D scanning.

What we’re working on are sort of alternative device modes that will let you use it for those sorts of purposes, but that’s not what it was originally built for. You know, it’s our goal to let it be able to do those things, and the hardware can do many things. But our priority right now is of course human computer interaction, which we think is really the missing component in technology, and that’s our core passion.

Michael: We really believe in trying to squeeze every ounce of optimization and performance out of the devices for the purpose they were built for. So in this case the Leap today is intended to be a great human computer interface. And we have made thousands of little optimizations along the way to make it better, which might sacrifice things in the process that could be useful for things like 3D scanning objects. Those are intentional decisions, but they don’t mean that we think 3D scanning isn’t exciting and isn’t a good use case. There will be other things we build as a company in the future, and other devices that might be able to do both, or maybe there will be two different devices: one that is fully optimized for 3D scanning, and one that continues to be optimized and as great as it can be at tracking fingers and hands.

If we haven’t done a good job communicating that the device isn’t about 3D scanning or isn’t going to be able to 3D scan, that’s unfortunate and it’s a mistake on our part - but that’s something that we’ve had to sacrifice. The good news is that those sacrifices have made the main device really exceptional at tracking hands and fingers.

I have developed with the Leap Motion Controller as well as several other 3-D scanning systems, and from what I've seen I seriously doubt we're ever going to get point cloud data out of the currently shipping hardware. If we do, the fidelity will be far below what we see for gross finger and hand tracking from that device.

There are some low-cost alternatives for 3-D scanning that have started to emerge. SoftKinetic has its DepthSense 325 camera for $250 (effectively the same hardware as the Creative Gesture Camera, which is only $150 right now). The DS 325 is a time-of-flight IR camera that gives you a 320x240 point cloud of the 3-D space in front of it. In my tests, it worked well with opaque materials, but anything with a little gloss or shininess gave it trouble.
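
For context, a time-of-flight sensor like the DS 325 gives you a depth reading per pixel, and turning those readings into 3-D points is just the pinhole camera model applied per pixel. Here is a minimal Python sketch; the intrinsics (fx, fy, cx, cy) are illustrative placeholders, not the DS 325's actual calibration:

    import numpy as np

    def depth_to_point_cloud(depth, fx, fy, cx, cy):
        # depth: HxW array of depths in meters -> Nx3 point cloud.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx   # back-project through the pinhole model
        y = (v - cy) * depth / fy
        points = np.dstack((x, y, depth)).reshape(-1, 3)
        return points[points[:, 2] > 0]   # drop pixels with no depth reading

    # A fake 320x240 depth frame standing in for a real capture:
    depth = np.full((240, 320), 0.5)
    cloud = depth_to_point_cloud(depth, fx=290.0, fy=290.0, cx=160.0, cy=120.0)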

The PrimeSense Carmine 1.09 ($200) uses structured light to get point cloud data in front of it, as an advancement of the technology they supplied for the original Kinect. It has a lower effective spatial resolution than the SoftKinetic cameras, but it seems to produce less depth noise and to work on a wider variety of materials.

The DUO was also a promising project, but unfortunately its Kickstarter campaign failed. It used stereoscopic imaging with an IR source to return a point cloud from a pair of PS3 Eye cameras. They may restart the project at some point in the future.
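
As background on how such stereo rigs recover depth: with a focal length f (in pixels) and a baseline B between the two cameras, a feature matched d pixels apart in the two images sits at depth z = f * B / d. A toy calculation with made-up numbers:

    # Depth from stereo disparity: z = f * B / d
    f = 600.0      # focal length in pixels (made-up)
    B = 0.06       # 6 cm baseline between the cameras (made-up)
    d = 12.0       # disparity of a matched feature, in pixels
    z = f * B / d  # 3.0 meters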

While the Leap may not do what you want, it looks like more and more devices are coming out in the consumer price range to enable 3-D scanning.

OTHER TIPS

See this link

It says that yes, the Leap Motion can theoretically produce a point cloud (it was temporarily part of the visualiser during the beta), and no, you can't access it through the Leap Motion APIs right now.

It may appear in the future, but it's not a priority for the Leap Motion team.

As of the Leap Motion SDK 2.x, you can at least access the raw stereo camera images. In my own experience this is a convenient substitute for many of the tasks that point cloud data was asked for, which is why I mention it here, even though it does not expose the point cloud data the driver generates internally to extract the pointer metadata. But you can now generate your own point cloud from the images, which is why I think it is strongly related to the question.
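
Here is a minimal sketch using the SDK 2.x Python bindings. The class, policy, and attribute names are as I recall them from that SDK; treat them as assumptions and verify them against the official documentation for your version:

    import numpy as np
    import Leap

    controller = Leap.Controller()
    # Raw image access is opt-in in SDK 2.x:
    controller.set_policy(Leap.Controller.POLICY_IMAGES)

    frame = controller.frame()
    left, right = frame.images[0], frame.images[1]
    if left.is_valid and right.is_valid:
        # Each image is an 8-bit grayscale IR frame; exactly how you wrap
        # the raw buffer (np.array, np.frombuffer, ctypes, ...) can vary
        # with the binding version.
        left_px = np.array(left.data, dtype=np.uint8).reshape(left.height, left.width)
        right_px = np.array(right.data, dtype=np.uint8).reshape(right.height, right.width)
        # From here you can undistort using the images' calibration data
        # and run any stereo matcher to triangulate your own point cloud.

In a real program you would grab frames from a Listener callback (or wait until the controller reports it is connected) rather than immediately after construction, since frame() returns invalid data until the device is ready.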

Currently there is no access to the point cloud in the public API. But I don't think this video is a mock-up, so it should be possible: http://www.youtube.com/watch?v=MYgsAMKLu7s#t=40s

Road to VR recently reviewed the Nimble Sense Kickstarter, which uses a point cloud.

It’s the same technology that the Kinect 2 uses, and it’s supposed to have some advantages over the Leap Motion.

Because it’s a depth-sensing camera, you can point it top-down like the Touch+, although their product will not ship until next year.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow