Question

I'm developing an application that builds a 3D Voronoi diagram from a 3D point cloud, using a dynamically allocated boost multi_array to store the whole diagram.

One of the test cases I'm using requires a large amount of memory (a volume of around [600][600][600] elements), which exceeds what the process is allowed to allocate and results in a bad_alloc.

I already tried splitting the diagram into smaller pieces, but that doesn't work either; it seems the total memory required still exceeds the limit.

My question is, how can I work with such large 3D volume under the PC constraints?


EDIT

The Element type is a struct as follows:

struct Elem{
  int R[3];
  int d;
  int label;
};  // five 4-byte ints, no padding: 20 bytes per element

The elements are indexed in the multiarray based on their position in the 3D space.

The multiarray is constructed by setting specific points in the space from a file and then filling the intermediate space by passing a forward and a backward mask over the whole volume.
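For reference, a minimal sketch of the failing allocation (assuming the Elem struct above and a plain boost::multi_array):

#include <boost/multi_array.hpp>

struct Elem {
  int R[3];
  int d;
  int label;
};

int main() {
  // 600*600*600 elements of 20 bytes each is about 4.3 GB, which is
  // what triggers the bad_alloc described above.
  boost::multi_array<Elem, 3> diagram(boost::extents[600][600][600]);
}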


OTHER TIPS

You didn't say how you get your points. If you read them from a file, then don't read them all at once. If you compute them, then you can probably recompute them as needed. In either case you can implement a cache that keeps the most frequently used ones in memory. If you know how your algorithm will traverse the data, you can predict which values will be needed next and even prefetch them in a different thread.
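As an illustration, a minimal sketch of such an on-demand cache (the ChunkCache name, the 50^3 block size, and the placeholder loading step are all assumptions, not part of the question):

#include <map>
#include <tuple>
#include <vector>

struct Elem {
  int R[3];
  int d;
  int label;
};

// Splits the 600^3 volume into 50^3 blocks that are computed (or read
// from file) only when first touched.
class ChunkCache {
  static const int B = 50;  // chunk edge length
  std::map<std::tuple<int, int, int>, std::vector<Elem>> chunks_;

  std::vector<Elem>& chunk(int cx, int cy, int cz) {
    auto key = std::make_tuple(cx, cy, cz);
    auto it = chunks_.find(key);
    if (it == chunks_.end()) {
      // Placeholder: recompute or load this block from disk here.
      it = chunks_.emplace(key, std::vector<Elem>(B * B * B)).first;
    }
    return it->second;
  }

 public:
  Elem& at(int x, int y, int z) {
    std::vector<Elem>& c = chunk(x / B, y / B, z / B);
    return c[(x % B) * B * B + (y % B) * B + (z % B)];
  }
};

int main() {
  ChunkCache cache;
  cache.at(123, 456, 7).label = 1;  // only the touched 2.5 MB block is resident
}

A real implementation would also evict the least recently used blocks once the cache exceeds a memory budget.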

The second solution is to work on your data so it fits in your RAM. You have 216 million points, but we don't know the size of each point. They are 3D, but do they use floats or doubles? Are they classes or plain structs? Do they have vtables? Are you using a Debug build (objects may be bigger in Debug)? Do you allocate the entire array at the beginning or incrementally? There should be no problem storing 216M compact 3D points on a current PC, but it depends on the answers to all of those questions.
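To put numbers on that (using the Elem struct from the edit: five 4-byte ints, so 20 bytes per element and about 4.3 GB for the whole grid, more than a 32-bit process can address):

#include <cstdio>

struct Elem {
  int R[3];
  int d;
  int label;
};

int main() {
  // Five 4-byte ints with no padding: 20 bytes per element.
  std::printf("sizeof(Elem) = %zu bytes\n", sizeof(Elem));
  // 600^3 elements * 20 bytes = 4,320,000,000 bytes, about 4.3 GB.
  unsigned long long total = 600ULL * 600 * 600 * sizeof(Elem);
  std::printf("total        = %llu bytes\n", total);
}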

The third way that comes to my mind is to use memory-mapped files, but I have never used them personally.


Here are a few things to try:

Try allocating in different batches, like 1 * 216M, 1k * 216k, or 1M * 216, to see how much memory you can get (see the probe sketch after this list).

Try changing the boost::multi_array to a std::vector, or even a raw void* allocation, and compare the maximum RAM you can get.
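A sketch of such a probe (new (std::nothrow) returns a null pointer instead of throwing, and the batch sizes mirror the suggestion above):

#include <cstddef>
#include <cstdio>
#include <new>
#include <vector>

// Allocates `count` blocks of `bytes` each and reports how many succeeded.
static std::size_t probe(std::size_t count, std::size_t bytes) {
  std::vector<char*> blocks;
  std::size_t ok = 0;
  for (std::size_t i = 0; i < count; ++i) {
    char* p = new (std::nothrow) char[bytes];
    if (!p) break;
    blocks.push_back(p);
    ++ok;
  }
  for (char* p : blocks) delete[] p;
  return ok;
}

int main() {
  std::printf("1  x 216 MB: %zu ok\n", probe(1, 216u * 1024 * 1024));
  std::printf("1k x 216 kB: %zu ok\n", probe(1024, 216u * 1024));
  std::printf("1M x 216 B : %zu ok\n", probe(1024 * 1024, 216));
}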

You didn't mention the element type. Given a four-byte float element, a 600*600*600 matrix takes only about 860 MB, which is actually not very big. I'd suggest checking your operating system's limit on memory usage per process; on Linux, check it with ulimit -a.

If you really cannot allocate the matrix in memory, create a file of the desired size on disk, map it into memory using mmap, and then pass the memory address returned by mmap to boost::multi_array_ref.
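A minimal POSIX sketch of that approach (the file name is arbitrary; note that a 4.3 GB mapping still does not fit in a 32-bit address space, so a 32-bit process would have to map smaller windows of the file at a time):

#include <boost/multi_array.hpp>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

struct Elem {
  int R[3];
  int d;
  int label;
};

int main() {
  const std::size_t n = 600;
  const std::size_t bytes = n * n * n * sizeof(Elem);

  // Create a backing file of the required size on disk.
  int fd = open("voronoi.bin", O_RDWR | O_CREAT, 0644);
  if (fd < 0) return 1;
  if (ftruncate(fd, static_cast<off_t>(bytes)) != 0) return 1;

  // Map the file into the address space; the OS pages it in and out on demand.
  void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  if (p == MAP_FAILED) return 1;

  // multi_array_ref wraps the mapped memory with the same indexing
  // interface as boost::multi_array, so the rest of the code is unchanged.
  boost::multi_array_ref<Elem, 3> diagram(static_cast<Elem*>(p),
                                          boost::extents[n][n][n]);
  diagram[0][0][0].label = 42;

  munmap(p, bytes);
  close(fd);
}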

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow