Question

I have an RGB-D image and am trying to get a 3D visualization in matlab. Currently I am doing:

    depth = imread('img_031_depth.png');
    depth = double(depth);
    img = imread('img_031.png');
    surf(depth, img, 'FaceColor', 'texturemap', 'EdgeColor', 'none' )
    view(158, 38)

Which gives me an image like this: [screenshot of the textured surf plot]

I have two questions:

1) How can I save the image without the blurring shown above?

2) As you can see, some edges show lines going to zero (e.g. at the top of the coffee cup); I would like to remove these.

What I'm trying to produce is a 3D-looking point cloud; since the data are only 2.5D, I must show them from the right angle.

Any help is appreciated.

EDIT: added images (note the depth image needs to be normalized for visualization):

[color image]

[depth image]


Solution

If you are only interested in a point cloud, you might want to consider scatter3. You can select which points to plot (discarding those with depth == 0).

You need to have explicit x-y coordinates though.

    [y x] = ndgrid( 1:size(img,1), 1:size(img,2) );  % pixel coordinates
    sel = depth > 0;  % which points to plot
    % "flatten" the matrices for the scatter plot
    x = x(:);
    y = y(:);
    img = reshape( im2double(img), [], 3 );  % one RGB triplet per point, scaled to [0,1]
    depth = depth(:);
    scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );
    view(158, 38)

Edit: a subsampled version (every second pixel), useful if the full cloud is too heavy to render:

    [y x] = ndgrid( 1:2:size(img,1), 1:2:size(img,2) );
    sel = depth( 1:2:end, 1:2:end ) > 0;
    x = x(:);
    y = y(:);
    img = reshape( im2double( img( 1:2:end, 1:2:end, : ) ), [], 3 );  % colors in [0,1]
    depth = depth( 1:2:end, 1:2:end );
    scatter3( x(sel), y(sel), depth(sel), 20, img( sel, : ), 'filled' );  % scatter3, not scatter
    view( 158, 38 );

Alternatively, you can directly manipulate the sel mask.
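For example, to address the edges that drop to zero (question 2), you could extend sel to also exclude points at steep depth discontinuities. A minimal sketch, where the threshold of 50 is an assumption you would tune for your data:

    % exclude zero-depth points and points at steep depth discontinuities
    [gx, gy] = gradient( depth );    % per-pixel depth gradients
    jump = sqrt( gx.^2 + gy.^2 );    % gradient magnitude
    sel  = depth > 0 & jump < 50;    % 50 is an arbitrary threshold -- tune it
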

OTHER TIPS

I suggest you first restore x = z*u/f and y = z*v/f to obtain (x, y, z), where f is your camera's focal length; then apply whatever rotation and translation you want before displaying: [x', y', z'] = R[x, y, z] + t; then project the points back using col = x*f/z + w/2, row = h/2 - y*f/z to get a simple image that you can display fast.

You can add a depth buffer to the last operation to guarantee proper occlusions: write the depth at each pixel, and overwrite only if the new z is smaller (i.e. the new point is closer to the viewer). The resulting image will still have holes due to the nature of point clouds. You can interpolate into those holes, but that means tracing a ray from every pixel of the image into your point cloud and finding the closest neighbor to the ray, which would probably take forever in MATLAB.
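The steps above can be sketched in MATLAB roughly as follows. The focal length f, rotation R, and translation t are assumptions (use your camera's actual focal length and whatever pose you choose), and the z-buffer loop is deliberately naive:

    f = 525;                       % assumed focal length in pixels
    [h, w] = size( depth );
    [v, u] = ndgrid( (1:h) - h/2, (1:w) - w/2 );  % pixel coords relative to center
    z = double( depth(:) ); u = u(:); v = v(:);
    keep = z > 0;
    z = z(keep); u = u(keep); v = v(keep);
    x = z .* u / f;                % back-project: x = z*u/f
    y = z .* v / f;                % back-project: y = z*v/f

    R = eye(3); t = [0; 0; 0];     % choose any rotation/translation
    p = R * [x y z]' + t;          % transformed points, 3-by-N

    % re-project with a simple z-buffer to handle occlusions
    col = round( p(1,:) * f ./ p(3,:) + w/2 );
    row = round( h/2 - p(2,:) * f ./ p(3,:) );
    zbuf = inf( h, w );
    for k = 1:numel( col )
        r = row(k); c = col(k);
        if r >= 1 && r <= h && c >= 1 && c <= w && p(3,k) < zbuf(r,c)
            zbuf(r,c) = p(3,k);    % keep only the closest point per pixel
        end
    end
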

I am also doing some 3D image restoration and reconstruction. The first question is easy: your photo is taken by a camera, so you need to transform the positions into the camera coordinate system. In other words, you need to know some intrinsic values of your camera, or you can never recover the geometry from a single image. Google 'kinect intrinsic value' to get the focal length, etc. Also, change your view. Try this, and if it's not working, ask again.
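As a sketch of that transformation, using intrinsic values commonly quoted for the Kinect v1 (treat these numbers as assumptions; calibrate your own camera for accurate results):

    % commonly quoted Kinect v1 intrinsics -- verify for your device
    fx = 525;  fy = 525;           % focal lengths (pixels)
    cx = 319.5; cy = 239.5;        % principal point

    [v, u] = ndgrid( 1:size(depth,1), 1:size(depth,2) );
    Z = double( depth );           % raw depth units; scale to meters if needed
    X = (u - cx) .* Z / fx;        % back-project to camera coordinates
    Y = (v - cy) .* Z / fy;

    sel = Z(:) > 0;                % discard invalid depth readings
    pts = [X(:) Y(:) Z(:)];
    plot3( pts(sel,1), pts(sel,2), pts(sel,3), '.' );
    axis equal; view(158, 38);
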

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow