Question

I'm looking for ideas on how to convert a 30+ GB series of 2000+ color TIFF images into a dataset that can be visualized at interactive frame rates using GPU-based volume rendering (OpenCL / OpenGL / GLSL). I want to use a direct volume visualization approach instead of surface fitting (i.e. raycasting instead of marching cubes).

The problem is two-fold. First, I need to convert my images into a 3D dataset. The first thing that came to mind is to treat all images as 2D textures and simply stack them to create a 3D texture.
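
Roughly, I imagine the stacking step looking like this (an untested sketch; loadSliceRGBA8 is a placeholder for whatever TIFF decoding ends up being used, e.g. via libtiff):

    // Sketch: stream decoded 2D slices into one OpenGL 3D texture.
    // Assumes an OpenGL context is current; loadSliceRGBA8 is a
    // hypothetical function that decodes TIFF slice z into 8-bit RGBA.
    #include <GL/glew.h>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    void loadSliceRGBA8(int z, std::uint8_t* dst); // hypothetical decoder

    GLuint createVolumeTexture(int width, int height, int depth) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);

        // Allocate storage once, then upload slice by slice so the whole
        // 30+ GB volume never has to sit in a single host buffer.
        glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        std::vector<std::uint8_t> slice(std::size_t(width) * height * 4);
        for (int z = 0; z < depth; ++z) {
            loadSliceRGBA8(z, slice.data());
            glTexSubImage3D(GL_TEXTURE_3D, 0, 0, 0, z,  // upload layer z
                            width, height, 1,
                            GL_RGBA, GL_UNSIGNED_BYTE, slice.data());
        }
        return tex;
    }

Of course the full-resolution volume will not fit in GPU memory, which is where the second problem comes in.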

The second problem is achieving interactive frame rates. For this I will probably need some sort of downsampling, combined with "details-on-demand": loading the high-resolution data only when zooming in on a region.
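
For the downsampling, a simple 2x2x2 box filter per level is what I have in mind for building the coarser versions (again just a sketch; single-channel 8-bit voxels, even dimensions, and an x-fastest memory layout are assumptions to keep it short):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // One level of 2x2x2 box-filter downsampling; apply repeatedly to
    // build a multi-resolution pyramid of the volume.
    std::vector<std::uint8_t> downsample2x(const std::vector<std::uint8_t>& src,
                                           int w, int h, int d) {
        const int w2 = w / 2, h2 = h / 2, d2 = d / 2;
        std::vector<std::uint8_t> dst(std::size_t(w2) * h2 * d2);
        auto at = [&](int x, int y, int z) {
            return int(src[(std::size_t(z) * h + y) * w + x]);
        };
        for (int z = 0; z < d2; ++z)
            for (int y = 0; y < h2; ++y)
                for (int x = 0; x < w2; ++x) {
                    int sum = 0;
                    // Average the 8 fine voxels under this coarse voxel.
                    for (int dz = 0; dz < 2; ++dz)
                        for (int dy = 0; dy < 2; ++dy)
                            for (int dx = 0; dx < 2; ++dx)
                                sum += at(2*x + dx, 2*y + dy, 2*z + dz);
                    dst[(std::size_t(z) * h2 + y) * w2 + x] =
                        std::uint8_t(sum / 8);
                }
        return dst;
    }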

A first step-by-step approach I found is:

  1. polygonize the complete volume data through layer-by-layer processing and generate the corresponding image textures;
  2. carry out all essential transformations in the vertex processor;
  3. divide the polygonal slices into smaller fragments, recording the corresponding depth and texture coordinates for each;
  4. in fragment processing, use fragment shader programs to shade and composite the fragments (sketched below).
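
To make those steps concrete for myself, this is roughly how I picture the slice-stacking render loop (an untested sketch; legacy OpenGL immediate mode is used purely for brevity, and it assumes the slices are drawn back to front relative to the viewer):

    #include <GL/glew.h>

    // Draw numSlices proxy quads through the volume, compositing them
    // with alpha blending; each quad samples the 3D texture at depth r.
    void drawSliceStack(GLuint volumeTex, int numSlices) {
        glEnable(GL_TEXTURE_3D);
        glBindTexture(GL_TEXTURE_3D, volumeTex);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // back-to-front

        for (int i = 0; i < numSlices; ++i) {
            float r = (i + 0.5f) / numSlices; // texture-space depth in [0,1]
            float z = r * 2.0f - 1.0f;        // object-space slice position
            glBegin(GL_QUADS);
            glTexCoord3f(0, 0, r); glVertex3f(-1, -1, z);
            glTexCoord3f(1, 0, r); glVertex3f( 1, -1, z);
            glTexCoord3f(1, 1, r); glVertex3f( 1,  1, z);
            glTexCoord3f(0, 1, r); glVertex3f(-1,  1, z);
            glEnd();
        }
        glDisable(GL_BLEND);
        glDisable(GL_TEXTURE_3D);
    }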

But beyond these rough sketches, I have no concrete idea of how to start implementing this approach properly.

I would love to see some fresh ideas, or pointers on how to start implementing the approach shown above.

Solution

If anyone has any fresh ideas in this area, they're probably going to be trying to develop and publish them. It's an active area of research.

In your step-by-step approach you have outlined the basic method of slice-based volume rendering. This can give good results, but many people are switching to hardware raycasting instead. There is an example of this in the CUDA SDK if you are interested.
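
To give a flavor of what raycasting looks like on the GPU, here is a minimal single-pass fragment shader sketch, stored as a C++ string (the uniform names, the entry-point varying, and the fixed 512 steps are illustrative choices, not code from the SDK sample):

    // vEntry is assumed to be the rasterized front-face position of the
    // volume's bounding cube, in texture space [0,1]^3.
    const char* kRaycastFrag = R"glsl(
    #version 330 core
    uniform sampler3D uVolume;
    uniform vec3 uCamPos;   // camera position in texture space
    in  vec3 vEntry;
    out vec4 fragColor;

    void main() {
        vec3 dir = normalize(vEntry - uCamPos);
        vec3 pos = vEntry;
        vec4 acc = vec4(0.0);
        const int   kSteps = 512;
        const float kStep  = 1.732 / float(kSteps); // cube diagonal / steps

        for (int i = 0; i < kSteps; ++i) {
            vec4 s = texture(uVolume, pos);  // classified RGBA sample
            // Front-to-back "over" compositing.
            acc.rgb += (1.0 - acc.a) * s.a * s.rgb;
            acc.a   += (1.0 - acc.a) * s.a;
            pos += dir * kStep;
            // Stop when the ray saturates or leaves the volume.
            if (acc.a > 0.99 || any(lessThan(pos, vec3(0.0))) ||
                any(greaterThan(pos, vec3(1.0)))) break;
        }
        fragColor = acc;
    }
    )glsl";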

A good method for hierarchical volume rendering is detailed by Crassin et al. in their GigaVoxels paper. It uses an octree-based approach, loading bricks into GPU memory only when they are needed.
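
To give you an idea of the data structure involved, here is a very rough sketch of the octree-of-bricks layout (all names and the 32^3 brick size are illustrative, not GigaVoxels' actual code):

    #include <array>
    #include <cstdint>
    #include <memory>
    #include <vector>

    constexpr int kBrickEdge = 32; // voxels per brick edge

    struct OctreeNode {
        std::array<std::unique_ptr<OctreeNode>, 8> children; // null = leaf
        std::vector<std::uint8_t> brick; // kBrickEdge^3 voxels; empty = not loaded
        int gpuSlot = -1;                // index into a GPU brick pool; -1 = absent
    };

    // Called when the renderer's LOD heuristic needs this node's data:
    // load or downsample the brick on demand and upload it to the pool.
    int requestBrick(OctreeNode& n) {
        if (n.gpuSlot < 0) {
            // loadBrickVoxels(n);               // hypothetical disk/decode step
            // n.gpuSlot = pool.upload(n.brick); // hypothetical GPU brick pool
            n.gpuSlot = 0;                       // placeholder for the sketch
        }
        return n.gpuSlot;
    }

The key point is that only the bricks a given view actually touches are ever resident on the GPU, which is what makes datasets far larger than video memory tractable.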

A very good introductory book in this area is Real-Time Volume Graphics.

OTHER TIPS

I've done a bit of volume rendering, though my code generated an isosurface using marching cubes and displayed that. However, in my modest self-education on volume rendering I did come across an interesting short paper: Volume Rendering on Common Computer Hardware. It comes with example source, too. I never got around to checking it out, but it seemed promising. It is in DirectX though, not OpenGL. Maybe it can give you some ideas and a place to start.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow