Question

I'm playing around with the Kinect for skeleton mapping and can see that the SDK supports up to 4 sensors connected simultaneously.

Unfortunately, I only have one sensor at my disposal at the moment, so I am unsure how the SDK behaves when more than one sensor is connected.

Specifically, is the data merged in the exposed API? Say you are handling the

private void Kinect_AllFramesReady(object sender, AllFramesReadyEventArgs e)
{
}

event. Does SkeletonFrame.SkeletonArrayLength increase to 12, 18, or 24?

How do I access the different ColorImageFrame or DepthImageFrame for each sensor? Normally you might do something like this:

using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
    //Write pixels
}

to access the camera, but I don't see any obvious method for accessing data specific to a device.

An explanation of the above, and guidance on any other differences that are important to understand when building applications that use multiple Kinect sensors concurrently, would be much appreciated.


Solution

Each KinectSensor can have separate bindings and events.

foreach (var sensor in KinectSensor.KinectSensors)
{
    if (sensor.Status == KinectStatus.Connected)
    {
        // Add bindings and event handlers for this sensor.
    }
}

So it's up to you: you can bind each sensor's events to the same handler and decide what to do based on the sender argument, but you will need to write your own logic to merge the data from the different sensors.

SkeletonFrame.SkeletonArrayLength will not increase, because it is unique per SkeletonStream; each sensor's stream reports its own skeleton array.
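Here is a minimal sketch of that approach, assuming the Kinect for Windows SDK v1 (Microsoft.Kinect); the class and dictionary names are illustrative, and the code needs a connected sensor and the SDK installed to actually run:

```csharp
using System.Collections.Generic;
using Microsoft.Kinect;

class MultiSensorSketch
{
    // One skeleton buffer per sensor, keyed by the sensor's unique id.
    private readonly Dictionary<string, Skeleton[]> skeletonsBySensor =
        new Dictionary<string, Skeleton[]>();

    public void Start()
    {
        foreach (var sensor in KinectSensor.KinectSensors)
        {
            if (sensor.Status != KinectStatus.Connected)
                continue;

            sensor.ColorStream.Enable();
            sensor.DepthStream.Enable();
            sensor.SkeletonStream.Enable();

            // All sensors share one handler; the sender tells them apart.
            sensor.AllFramesReady += Kinect_AllFramesReady;
            sensor.Start();
        }
    }

    private void Kinect_AllFramesReady(object sender, AllFramesReadyEventArgs e)
    {
        var sensor = (KinectSensor)sender;   // which device raised the event

        using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
        {
            if (skeletonFrame != null)
            {
                // Still one stream's worth of skeletons per frame:
                // each sensor's SkeletonStream has its own array.
                var skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
                skeletonFrame.CopySkeletonDataTo(skeletons);
                skeletonsBySensor[sensor.UniqueKinectId] = skeletons;
            }
        }

        using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
        {
            if (colorFrame != null)
            {
                // This frame belongs to the raising sensor only;
                // merging buffers across sensors is up to you.
            }
        }
    }
}
```

The frames opened from the event args always belong to the sensor that raised the event, which is why there is no per-device accessor on the args themselves.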

Licensed under: CC-BY-SA with attribution