Question

Everything I ask here relates to an idea I've had for a personal project.

I want to experiment with creating a 3D, interactive dot density map in the browser using WebGL. This is all new to me given my background, but I'd like to think it is something I could manage.

My question concerns the components of the project, specifically your opinion on what I've perceived to be needed. The maximum number of points that would need to be rendered is 4 million, as this would be the furthest zoomed-out view of all the points.

Starting off:

  1. I'm assuming the most efficient way of rendering this would be to use Vertex Buffer Objects? If so, do they lend themselves to being manipulated through interaction*?

  2. For so many points, I believe a hierarchical space-partitioning structure would be needed, and I'm leaning towards an Octree (as height data would also be used). Is this a good choice? How would memory be managed when all points need to be shown?

  3. If id, x, y, z data for 4 million points were stored, I'm assuming this would come to approximately 400-600 MB as a CSV file. Is there any way to bring this down to a reasonable size to send over the internet? Are compression algorithms really that good?

I do not require every desktop and mobile device to be able to run this, but I also do not want to limit its functionality to that one genius in his lab running a Titan in SLI. It can be a power hog, but a reasonable one.

*interactive: animate data from one point to another, let users brush (filter) points and find out more information about those points, change point colour, etc.


Solution

hierarchical space partitioning structure would be needed, leaning towards an Octree (as height data would also be used). Is this a good choice?

Space partitioning is definitely a good idea for querying the dots.

If id, x, y, z data for 4 million points were to be stored, I'm assuming this would be approximately 400-600MB as a CSV file

If you have 4 million points with 4 components of 4 bytes each, the data comes to around 64 MB (4,000,000 × 4 × 4 bytes). This is manageable for a modern CPU/GPU.
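
A minimal sketch of what that looks like on the GPU side, assuming WebGL2 and an already compiled shader program with an a_position attribute (the variable names are made up for illustration): pack the points into one Float32Array, upload it once as a vertex buffer, then draw with gl.POINTS.

    // Sketch only: pack N points (x, y, z, id) into a single typed array and
    // upload it as one vertex buffer object. Assumes these already exist:
    declare const gl: WebGL2RenderingContext;
    declare const program: WebGLProgram;

    const N = 4_000_000;
    const FLOATS_PER_POINT = 4;                          // x, y, z, id
    const data = new Float32Array(N * FLOATS_PER_POINT); // 4M * 4 * 4 bytes ≈ 64 MB

    // ... fill `data` from your source here ...

    const vbo = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);

    const loc = gl.getAttribLocation(program, "a_position");
    gl.enableVertexAttribArray(loc);
    // Read x, y, z per vertex; the 16-byte stride skips the id component.
    gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, FLOATS_PER_POINT * 4, 0);

    // Draw everything as points; the vertex shader decides point size and colour.
    gl.drawArrays(gl.POINTS, 0, N);

This also touches the VBO question: the buffer is uploaded once, and interaction such as brushing or recolouring can usually be handled in the shaders or with a small extra attribute buffer rather than by re-uploading all 4 million points.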

is there any way to make this a reasonable size to send over the internet? Are compression algorithms really that good?

I think bandwidth is the main problem, not size.

You can send and display only the visible dots. If many dots are visible at once, you can filter them by merging nearby points together in a hierarchy. Dots are high-frequency content, so filtering them, plus possibly adjusting their brightness, should give perceptually smooth results.

This concept is similar to texture clipmaps or geometry clipmaps.
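
As a rough illustration of the merging idea (a sketch only; the grid-cell approach and names below are my own assumptions, not a specific library), you can bucket points into progressively coarser cells and keep one averaged representative per cell, which gives the coarser levels to send and draw when zoomed out:

    interface Point { x: number; y: number; z: number; }

    // Merge points into one representative per grid cell of size `cellSize`.
    // Calling this repeatedly with larger cell sizes yields coarser levels,
    // similar in spirit to a clipmap-style pyramid.
    function buildLevel(points: Point[], cellSize: number): Point[] {
      const cells = new Map<string, { sum: Point; count: number }>();
      for (const p of points) {
        const key = [
          Math.floor(p.x / cellSize),
          Math.floor(p.y / cellSize),
          Math.floor(p.z / cellSize),
        ].join(",");
        const cell = cells.get(key);
        if (cell) {
          cell.sum.x += p.x; cell.sum.y += p.y; cell.sum.z += p.z;
          cell.count++;
        } else {
          cells.set(key, { sum: { ...p }, count: 1 });
        }
      }
      // One averaged point per occupied cell.
      return [...cells.values()].map(({ sum, count }) => ({
        x: sum.x / count, y: sum.y / count, z: sum.z / count,
      }));
    }

    // Example: level 0 is the raw data, each further level merges larger cells.
    // const levels = [points, buildLevel(points, 1), buildLevel(points, 2), buildLevel(points, 4)];

The per-cell count is also what you would use to adjust brightness, so a merged view stays perceptually consistent with the full-resolution one.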

OTHER TIPS

Data handling: It sounds like you could use the OctoMap framework. It is known to handle very large 3D point clouds very efficiently. It uses an octree internally and by default stores not the points themselves but the occupancy of each box (node). However, it is quite easy to extend and to store custom information in the nodes, such as the raw points, or even to model the point distribution in local nodes (example).
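
To illustrate the "custom information in the nodes" idea (a conceptual sketch in TypeScript only, not OctoMap's actual C++ API), a node might carry its occupancy plus an optional payload of raw points:

    // Conceptual sketch, not OctoMap's API: an octree node that stores
    // occupancy (the default) plus custom per-node data such as raw points.
    interface OctreeNode {
      occupied: boolean;                               // what OctoMap stores by default
      points?: { x: number; y: number; z: number }[];  // custom payload: raw points in this box
      children?: (OctreeNode | null)[];                // 8 children, null where a sub-box is empty
    }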

Streaming: If you implement your own "<<" operator for reading and writing a tree to a stream, then you can easily stream it to a web browser via a socket.
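
On the receiving (browser) side, a minimal sketch of that idea, assuming the serialized data is served as flat little-endian 32-bit floats from a hypothetical /points.bin endpoint (neither the endpoint nor the layout is specified by OctoMap):

    // Sketch: fetch a binary blob of points and hand it straight to WebGL,
    // never going through CSV or JSON on the wire.
    async function loadPoints(url: string): Promise<Float32Array> {
      const response = await fetch(url);
      if (!response.ok) throw new Error(`Failed to load ${url}: ${response.status}`);
      const buffer = await response.arrayBuffer();
      // Float32Array assumes platform endianness; use a DataView for strict control.
      return new Float32Array(buffer); // e.g. 3 floats (x, y, z) per point
    }

    // Usage:
    // const positions = await loadPoints("/points.bin");
    // gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);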

Visualization: You can actually use any WebGL JavaScript framework. Currently I can suggest CubeVR.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow