Question

The terms 'pool' and 'buffer' may be used interchangeably here.
Suppose I have a pool I want to allocate at the beginning of the program, so that I don't have to call new all the time.
Now, I don't want to artificially limit the size of the pool, but if I reallocate a bigger pool, all pointers into the old one will be invalidated, which certainly isn't very cool.


One way I thought of was "paging", i.e.

const int NUM_PAGES = 5;
char* pool[NUM_PAGES];

Then, instead of reallocating a single pool, I'd allocate a new page when I run out of space. This lets all pointers stay valid, but makes managing the paged pool a bit more difficult. I'm also limiting myself to a fixed number of pages, so in the end I'm again limiting the total size of the pool.


Another way would be to keep a mapping from the pointers my allocation function returns to pointers into the real memory. The old pointers would all stay valid, but this would take more memory, and I'd need to write a smart pointer to return from my allocation function that performs the mapping.
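
For illustration, here's a rough sketch of what I mean (all names are made up, and alignment is ignored for brevity); it hands out indices into a table of offsets instead of smart pointers, so the underlying buffer is free to move:

#include <cstddef>
#include <vector>

class HandlePool {
public:
    using Handle = std::size_t;

    // Returns a stable handle instead of a raw pointer.
    Handle allocate(std::size_t bytes) {
        std::size_t offset = used_;
        used_ += bytes;
        if (used_ > buffer_.size())
            buffer_.resize(used_ * 2);   // may move the buffer; offsets stay valid
        table_.push_back(offset);
        return table_.size() - 1;        // the handle is an index into table_
    }

    // Resolve a handle to the current address; don't cache the result
    // across allocations, since the buffer may have moved.
    void* get(Handle h) { return buffer_.data() + table_[h]; }

private:
    std::vector<char> buffer_;           // the actual pool; free to relocate
    std::vector<std::size_t> table_;     // handle -> offset into the pool
    std::size_t used_ = 0;
};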


What other possible ways are there to achieve what I want? What (dis)advantages have I missed in my example implementations above?

Solution

You're talking about something that reminds me of a std::deque. I'm not really sure whether you can use a std::deque as-is, or whether you'll need to borrow its basic design to implement some kind of allocator.
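
The relevant property, if I'm not mistaken, is that inserting at either end of a std::deque never invalidates references or pointers to existing elements (iterators, however, are invalidated):

#include <cassert>
#include <deque>

int main() {
    std::deque<int> pool;
    pool.push_back(42);
    int* p = &pool.front();        // pointer into the "pool"

    for (int i = 0; i < 100000; ++i)
        pool.push_back(i);         // deque grows by adding blocks internally

    assert(*p == 42);              // p is still valid, unlike with std::vector
}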

OTHER TIPS

Extending your paged "pool" concept, what if you stored the pages as a linked list?

To allocate new data from the pool you only need access to the top "page", which is at the head of the list, so that's O(1). If you need to grow the total size of your pool, allocate a new page and push it onto the head of the list, which is also O(1).
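
A minimal sketch of that idea, with illustrative names (it assumes each request fits in one page and ignores alignment):

#include <cstddef>

class PagedPool {
    struct Page {
        Page* next;
        std::size_t used;
        char data[4096 - sizeof(Page*) - sizeof(std::size_t)];
    };
    Page* head_ = nullptr;

public:
    void* allocate(std::size_t bytes) {
        // If the top page is full (or there is none), push a new one: O(1).
        if (!head_ || head_->used + bytes > sizeof(head_->data)) {
            Page* p = new Page;
            p->next = head_;
            p->used = 0;
            head_ = p;
        }
        void* result = head_->data + head_->used;   // bump allocation: O(1)
        head_->used += bytes;
        return result;   // older pages never move, so old pointers stay valid
    }

    ~PagedPool() {
        while (head_) {
            Page* next = head_->next;
            delete head_;
            head_ = next;
        }
    }
};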

I use essentially this same idea for my pooled allocators, but combined with a "free list" of recently deallocated items.

EDIT: As per your comment, if you also want to reuse deallocated data, you can keep a free list, possibly also as a linked list. When you deallocate data, you push a pointer and a size marker onto the free list. When you allocate data from the pool, you first check whether any item on the free list can be used; if not, you allocate from the pool.

The standard memory managers often already do something like this, so this approach will not always be better. Specifically, I usually only use this type of custom allocator when the allocated items are all the same size (so that traversal of the free list is O(1)!). A custom allocator for std::list would be one example.
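
For the fixed-size case, a minimal sketch of the intrusive free list (freed slots are linked through their own storage; the operator new call stands in for bump-allocating from the paged pool above):

#include <cstddef>
#include <new>

template <std::size_t SlotSize>
class FixedPool {
    static_assert(SlotSize >= sizeof(void*), "a slot must be able to hold a pointer");
    struct FreeNode { FreeNode* next; };
    FreeNode* free_ = nullptr;

public:
    void* allocate() {
        if (free_) {                      // O(1): reuse the most recently freed slot
            void* p = free_;
            free_ = free_->next;
            return p;
        }
        return ::operator new(SlotSize);  // stand-in for bump-allocating from the pool
    }

    void deallocate(void* p) {            // O(1): link the slot through its own storage
        FreeNode* n = static_cast<FreeNode*>(p);
        n->next = free_;
        free_ = n;
    }
};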

Hope this helps.

Consider using boost pools
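
For example, with Boost.Pool (assuming the Boost headers are available), a minimal use of boost::object_pool looks like this:

#include <boost/pool/object_pool.hpp>

struct Particle { float x, y, z; };

int main() {
    boost::object_pool<Particle> pool;
    Particle* p = pool.construct();   // allocates from the pool and runs the constructor
    pool.destroy(p);                  // runs the destructor and returns the slot to the pool
    // Anything not destroyed explicitly is released when 'pool' goes out of scope.
}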

One question, of course, is why trouble yourself like this at all?

You say you wish to avoid the overhead of new, but why not choose a better implementation of new instead? tcmalloc and jemalloc, for example, are generally very good contenders for multi-threaded applications.

What you are trying to create is, in fact, very similar to writing a customized malloc / new implementation. So if you really don't want to use a proven implementation, you would benefit from the insights of those who wrote one.

My personal interest leans toward the BiBOP strategy (Big Bag Of Pages) to fight off fragmentation. The idea is to have a dedicated pool per allocation size, so that a simple free list (interleaved with the allocations) is enough and no merging is required. Usually this is done as long as the requested size is smaller than the page size (I've seen 4 KB used), and anything bigger is allocated on its own (across several pages). Discarded pages are recycled.
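
A rough sketch of the per-size-class dispatch (all names, thresholds, and size classes here are illustrative assumptions; a real implementation would carve slots out of dedicated pages and recycle them):

#include <cstddef>
#include <new>

struct ClassPool {                        // free-list pool for one size class
    struct Node { Node* next; };
    std::size_t slot;                     // the size this class serves
    Node* free_list = nullptr;

    void* allocate() {
        if (free_list) {                  // O(1): reuse a freed slot, no merging
            void* p = free_list;
            free_list = free_list->next;
            return p;
        }
        return ::operator new(slot);      // stand-in for carving from a dedicated page
    }

    void deallocate(void* p) {
        Node* n = static_cast<Node*>(p);
        n->next = free_list;
        free_list = n;
    }
};

ClassPool pools[] = { {8}, {16}, {32}, {64}, {128}, {256}, {512}, {1024}, {2048} };

void* bibop_allocate(std::size_t bytes) {
    if (bytes > 2048)                     // illustrative threshold: anything bigger
        return ::operator new(bytes);     // is allocated on its own
    std::size_t i = 0, size = 8;
    while (size < bytes) { size <<= 1; ++i; }   // smallest class that fits
    return pools[i].allocate();
}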

The main interest I have is that with a BiBOP it's easy to maintain an interval tree mapping address ranges to page headers, and thus to determine an object's full size from the address of a (possibly) internal element (like an attribute), which is useful for garbage collection (the reference-following step).
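
As a minimal stand-in for the interval tree, a sorted std::map keyed by page start address gives the same lookup (names are illustrative):

#include <cstddef>
#include <cstdint>
#include <map>

struct PageHeader { std::size_t object_size; /* ... */ };

std::map<std::uintptr_t, PageHeader*> pages;  // keyed by page start address

// Find the page containing 'p', which may point at an internal element.
PageHeader* page_of(const void* p) {
    auto it = pages.upper_bound(reinterpret_cast<std::uintptr_t>(p));
    if (it == pages.begin())
        return nullptr;                   // p lies before the first page
    --it;                                 // greatest start address <= p
    return it->second;                    // caller still checks the page bounds
}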

For multi-threaded allocation, tcmalloc and jemalloc use two different approaches:

  • jemalloc uses per-thread pools: good with a fixed number of threads / thread pools, but it makes creating a thread more costly
  • tcmalloc uses a global multi-threaded pool with lock-free techniques, and tries to load-balance threads across the available pools to limit contention: a thread looks for a new pool if the one it uses is "locked" (rather than waiting), and each thread caches the last pool it used in a thread-local variable. Threads are therefore lightweight, but there may be some contention if the number of pools is too low relative to the number of active threads.
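
A tiny sketch of the per-thread idea, reusing the illustrative FixedPool from the earlier answer:

// The fast path needs no locking, since each thread owns its pool;
// the trade-off is that creating a thread becomes a little more costly.
thread_local FixedPool<64> tls_pool;      // FixedPool as sketched earlier

void* tls_allocate() {
    return tls_pool.allocate();           // thread-private: no contention
}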

A few thoughts:

  • When you have a std::vector<T*>, adding elements and triggering a resize invalidates references/pointers/iterators into that container, but not references/pointers that directly address the pointed-to objects. So a layer of indirection might solve your problem, depending on what you're really trying to do with these references/pointers/iterators (see the sketch after this list).

  • In systems with virtual memory and a large address space, you can make huge allocations without the pages actually being allocated from physical memory resources until they're written to. Consequently, on such systems you can set a larger-than-ever-needed capacity for a vector initially without feeling like you're wasting anything valuable.

  • Other containers, such as std::map<> and std::list<>, don't move their existing elements as new ones are added, so iterators/pointers/references remain valid.
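
A small illustration of the first point (the outer vector may reallocate its array of pointers, but the pointed-to objects never move):

#include <cassert>
#include <memory>
#include <vector>

int main() {
    std::vector<std::unique_ptr<int>> v;
    v.push_back(std::make_unique<int>(7));
    int* stable = v[0].get();             // address of the pointed-to object

    for (int i = 0; i < 100000; ++i)
        v.push_back(std::make_unique<int>(i));   // many reallocations of v

    assert(*stable == 7);                 // still valid: only the pointers moved
}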

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow