Question

I am just about to get started with Kinect development and am hoping someone might have insight into the following.

I am hoping to mount a kinect on the ceiling, just below a projector. The projector will show a pond with some fish (similar to the Microsoft Windows 7 Touch pack).

I am then hoping to detect movement of people in and out of the projection and add ripples or move the fish etc.

I do not need to track peoples shapes, just know if they are in the frame and where they are. In fact if someone rolls a football across I am just as happy to track that.

I will be getting myself a Kinect in the next few days but if someone knows that this is not possible then please let me know. If it is possible then any pointers to get me started would be great.

Thanks, Patrick.


Solution

Before getting started, you need to decide which software to use to access the Kinect. The two most popular choices are:

  1. the Kinect for Windows SDK
  2. OpenNI

There is also libfreenect, but it provides only raw depth data and is, in my opinion, harder to use than the two above.

The Kinect for Windows SDK and OpenNI both provide skeleton tracking, which is a very convenient way of detecting the location of your users and parts of their bodies in detail. In the case of your project, however, skeleton tracking will most likely not work properly, since the Kinect is mounted on the ceiling and points downwards. The provided tracking algorithms work best when the sensor faces the user directly and most of the body is visible.

For your project, you probably won't need skeleton tracking at all (it can be deactivated in Kinect for Windows SDK/OpenNI). An approach I can think of off the top of my head would be:

  1. At the start of the application, calibrate your software by measuring the distance from the Kinect to the surface where you will project your imagery.
  2. For each new depth frame you receive from the Kinect, check for differences between the current frame and the calibration frame. If there is a chunk of pixels that are closer to the sensor in the current frame than in the calibration frame, you can assume it's an object.
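The two steps above can be sketched roughly as follows. This is a hedged illustration, not the SDK's actual API: it assumes depth frames arrive as flat lists of per-pixel millimetre values, and the function names and threshold are my own inventions.

```python
# Assumption: each depth frame is a flat list of per-pixel distances in mm,
# as both the Kinect SDK and OpenNI can ultimately provide.

DEPTH_THRESHOLD_MM = 100  # differences smaller than this are treated as noise


def calibrate(frame):
    """Step 1: store the empty-scene depth frame measured at startup."""
    return list(frame)


def detect_objects(calibration, frame, threshold=DEPTH_THRESHOLD_MM):
    """Step 2: return indices of pixels that are significantly closer to
    the sensor than they were in the calibration frame."""
    hits = []
    for i, (base, current) in enumerate(zip(calibration, frame)):
        # A value of 0 typically means "no reading" -- skip those pixels.
        if base == 0 or current == 0:
            continue
        if base - current > threshold:  # something between sensor and surface
            hits.append(i)
    return hits
```

For example, with a calibration frame of `[2000, 2000, 2000, 2000]` and a current frame of `[2000, 2000, 1500, 0]`, `detect_objects` reports pixel 2 as an object and ignores pixel 3, which has no reading.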

Of course, the Kinect's depth measurements are not perfect. You will have to provide some sort of error correction to filter out false positives.
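Two simple error-correction ideas, sketched below with my own function names: average several calibration frames to smooth out sensor noise, and discard detections that are not part of a reasonably large cluster of neighbouring pixels (a crude stand-in for proper connected-component filtering).

```python
def average_frames(frames):
    """Average N depth frames pixel-by-pixel to build a less noisy
    calibration frame."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]


def filter_small_blobs(hit_indices, width, min_size=5):
    """Keep only hits that belong to a horizontal run of at least
    min_size adjacent pixels; isolated noisy pixels are dropped.
    width is the frame width in pixels."""
    hits = sorted(set(hit_indices))
    kept, run = [], []
    for i in hits:
        # continue the run only if i is the next pixel on the same row
        if run and i == run[-1] + 1 and i % width != 0:
            run.append(i)
        else:
            if len(run) >= min_size:
                kept.extend(run)
            run = [i]
    if len(run) >= min_size:
        kept.extend(run)
    return kept
```

A real implementation would likely use a 2-D connected-component or median filter instead, but even this one-dimensional version removes most single-pixel flicker.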

Using this approach, you will be able to detect most objects sitting on or moving over the surface.
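Once you have the detected pixels, a rough object position for spawning ripples or steering the fish can be taken as their centroid. A minimal sketch, again with a hypothetical helper name, assuming the flat-list frame layout from above:

```python
def center_of_blob(hit_indices, width):
    """Return the average (x, y) of the detected pixels -- a rough object
    position you can map onto projector coordinates.
    width is the frame width in pixels."""
    xs = [i % width for i in hit_indices]   # column of each hit pixel
    ys = [i // width for i in hit_indices]  # row of each hit pixel
    n = len(hit_indices)
    return (sum(xs) / n, sum(ys) / n)
```

For a 2x2 block of hits in the top-left corner of a 10-pixel-wide frame (indices 0, 1, 10, 11), this yields the centre (0.5, 0.5).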

OTHER TIPS

I would rather put the sensor on one of the walls, close to the ceiling, instead of on the ceiling pointing straight down. This way you get a better field of view, and you can still use the skeleton/user tracking algorithms.

Note that the sensor has a minimum sensing distance of roughly 50 cm, so if you have a low ceiling you might encounter problems with people standing directly under the sensor.

If you use OpenNI, you can use NiTE's SceneAnalysis, which tracks people and lets you get their center of mass easily.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow