Question

I'm going to start a project to reconstruct my room with a Kinect.

In what format will the reconstructed 3D view be saved?

Could it be saved as, or converted to, 3DM, 3DS, MAX, OBJ, etc.?

Thanks!


Solution

You can easily save 3D coordinates in the PLY format. Here's a basic example using ofxKinect:

void exportPlyCloud(string filename, ofMesh& cloud) {
    ofFile ply;
    if (ply.open(filename, ofFile::WriteOnly)) {
        vector<ofVec3f>& surface = cloud.getVertices();

        // count the valid (non-zero depth) vertices first, so the
        // vertex count declared in the header matches what we write
        int validCount = 0;
        for (int i = 0; i < surface.size(); i++) {
            if (surface[i].z != 0) validCount++;
        }

        // write the ASCII header (note: the raw-byte writes below
        // assume this code runs on a little-endian machine)
        ply << "ply" << endl;
        ply << "format binary_little_endian 1.0" << endl;
        ply << "element vertex " << validCount << endl;
        ply << "property float x" << endl;
        ply << "property float y" << endl;
        ply << "property float z" << endl;
        ply << "end_header" << endl;

        // write each valid vertex as three raw floats
        for (int i = 0; i < surface.size(); i++) {
            if (surface[i].z != 0) {
                ply.write((char*) &surface[i], sizeof(ofVec3f));
            }
        }
    }
}
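
To feed that function, here's a minimal usage sketch, assuming a connected and updated ofxKinect instance named kinect (the instance name and the place you trigger the export are illustrative):

// e.g. inside ofApp::keyPressed, with ofxKinect kinect already open
ofMesh cloud;
cloud.setMode(OF_PRIMITIVE_POINTS);
for (int y = 0; y < kinect.getHeight(); y++) {
    for (int x = 0; x < kinect.getWidth(); x++) {
        // getWorldCoordinateAt() returns (0,0,0) where there is no
        // depth reading; exportPlyCloud skips those via its z != 0 check
        cloud.addVertex(kinect.getWorldCoordinateAt(x, y));
    }
}
exportPlyCloud("room.ply", cloud);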

You can then use MeshLab to process/stitch PLY files and export them to another format like OBJ. On the openFrameworks side, you can find a few handy examples, including the above PLY export, in this workshop.
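
Since OBJ stores vertices as plain "v x y z" text lines, a point-only export can also be written directly, skipping MeshLab for the format conversion. Here's a minimal sketch in the same style as the PLY exporter above (exportObjCloud is a hypothetical helper, not from the workshop):

void exportObjCloud(string filename, ofMesh& cloud) {
    ofFile obj;
    if (obj.open(filename, ofFile::WriteOnly)) {
        // OBJ is plain text: one "v x y z" line per vertex
        vector<ofVec3f>& surface = cloud.getVertices();
        for (int i = 0; i < surface.size(); i++) {
            if (surface[i].z != 0) {
                obj << "v " << surface[i].x << " "
                    << surface[i].y << " "
                    << surface[i].z << endl;
            }
        }
    }
}

Note that this only exports points; a mesh stitched and reconstructed in MeshLab will also carry face data.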

Saving to PLY solves only part of the problem: you'd still need to stitch the scans manually, which can be time consuming. You would need something like SLAM (Simultaneous Localization And Mapping) or another reconstruction algorithm to help stitch things together. You can find a nice collection of algorithms on OpenSLAM.

Now, depending on your level of comfort with coding, there are a few options to help with that. I also recommend having a look at the prebuilt RGBDemo software, which has a reconstruction feature. It requires no coding, unless you want to dig into it (it's open source).

With a bit of coding you can also do reconstruction using the Point Cloud Library (PCL). It also includes an implementation of KinectFusion:

[Images: PCL KinFu preview 1, PCL KinFu preview 2]
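
To give a taste of the coding involved, here's a minimal, hypothetical sketch of stitching two overlapping scans with PCL's ICP (Iterative Closest Point) registration; the filenames are placeholders, and plain ICP needs scans that already roughly overlap (incremental tracking, as in KinectFusion, is what gives you that for free):

#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <pcl/registration/icp.h>

int main() {
    pcl::PointCloud<pcl::PointXYZ>::Ptr scanA(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr scanB(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::io::loadPLYFile("scanA.ply", *scanA);  // e.g. exported with exportPlyCloud
    pcl::io::loadPLYFile("scanB.ply", *scanB);

    // find the rigid transform that best aligns scanB (source)
    // onto scanA (target)
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(scanB);
    icp.setInputTarget(scanA);

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);  // aligned = scanB moved into scanA's frame

    if (icp.hasConverged()) {
        // concatenate the two registered clouds into one stitched scan
        pcl::io::savePLYFile("stitched.ply", *scanA + aligned);
    }
    return 0;
}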

If you're using the Microsoft Kinect SDK, Kinect Fusion was integrated into Kinect SDK 1.7:

[Image: KinectFusion in Kinect SDK 1.7]

You might also find this post interesting: Kinect Fusion inside AutoCAD.

OTHER TIPS

The Kinect gives you a depth image: an image whose varying shades of gray indicate how far something is from the sensor.
Each pixel uses either 16 or 13 bits for the depth value (when only 13 bits carry depth, the last 3 bits hold the player ID).
You can just save that image as a .bmp, for example, or convert it to any other format you like better.
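
As an illustration of that 13+3-bit layout, here's a minimal sketch of unpacking one packed depth pixel (the sample value is made up):

#include <cstdio>

int main() {
    // a made-up packed pixel: 1200 mm depth, player index 2
    unsigned short pixel = (1200 << 3) | 2;

    unsigned short depthMm  = pixel >> 3;  // upper 13 bits: millimeters
    unsigned short playerId = pixel & 0x7; // lower 3 bits: player ID (0 = none)

    printf("depth = %u mm, player = %u\n", depthMm, playerId);
    return 0;
}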

I am not entirely sure how you want to reconstruct your room.
The Kinect cannot capture the whole room at once, so you will at least have to rotate it to capture everything.

What kind of reconstruction do you want to achieve? Is it just a grayscale image, or do you want distances to walls and objects, along with their dimensions?
If the latter, you should process the images.
If the former, you can just glue the individual pictures together to form either a panorama or a cube, whatever you like.

I hope this gives you some useful information. Feel free to ask if anything is unclear.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow