Question

Let's consider a memory segment (whose size can grow or shrink when needed, like a file) on which you can perform two basic memory allocation operations involving fixed-size blocks:

  • allocation of one block
  • freeing a previously allocated block that is no longer used.

Also, as a requirement, the memory management system is not allowed to move around currently allocated blocks: their index/address must remain unchanged.

The most naive memory management algorithm would increment a global counter (with initial value 0) and use its new value as the address for the next allocation. However, this never allows the segment to shrink, even when only a few allocated blocks remain.

Better approach: Keep the counter, but maintain a list of deallocated blocks (which can be done in constant time) and use it as a source for new allocations as long as it's not empty.

What next? Is there something clever that can be done, still under the constraints of constant-time allocation and deallocation, that would keep the memory segment as short as possible?

(A goal could be to track the currently non-allocated block with the smallest address, but it doesn't seem to be feasible in constant time…)

Solution

With fixed-size blocks, what you have described is a free list. This is a very common technique, with the following twist: the list of free blocks is stored in the free blocks themselves. In C code, it would look like this:

static void *alloc_ptr = START_OF_BIG_SEGMENT;  /* first never-used block */
static void *free_list_head = NULL;             /* head of the free list  */

static void *
allocate(void)
{
    void *x;

    if (free_list_head == NULL) {
        /* Free list is empty: take a fresh block at the end of the segment. */
        x = alloc_ptr;
        alloc_ptr = (char *)alloc_ptr + SIZE_OF_BLOCK;
    } else {
        /* Pop the head of the free list; the first word of a free block
         * holds the address of the next free block. */
        x = free_list_head;
        free_list_head = *(void **)free_list_head;
    }
    return x;
}

static void
release(void *x)
{
    /* Push the released block onto the free list. */
    *(void **)x = free_list_head;
    free_list_head = x;
}

This works well as long as all allocated blocks have the same size, and that size is a multiple of the size of a pointer, so that alignment is preserved. Allocation and deallocation are constant-time (that is, as constant-time as memory accesses and elementary additions -- in a modern computer, a memory access can involve cache misses and even virtual memory, hence disk accesses, so the "constant time" can be quite big). There is no memory overhead (no extra per-block pointers or things like that; the allocated blocks are contiguous). Also, the allocation pointer reaches a given point only if, at one time, that many blocks had to be allocated: since allocation prefers the free list, the allocation pointer increases only when the space below it is chock-full. In that sense, this is an optimal technique.
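For concreteness, here is one way the placeholders could be filled in. This is only a sketch under assumptions not stated in the answer: the "big segment" is a plain static array (declared as an array of void * so it is pointer-aligned), a hypothetical PAYLOAD_SIZE is rounded up to a multiple of sizeof(void *), and these definitions would go before the allocator code above.

#include <stdio.h>

#define PAYLOAD_SIZE   24    /* hypothetical application data per block */
#define SIZE_OF_BLOCK  (((PAYLOAD_SIZE + sizeof(void *) - 1) / sizeof(void *)) * sizeof(void *))

/* Array of void * rather than char, so the segment is pointer-aligned. */
static void *big_segment[(1024 * SIZE_OF_BLOCK) / sizeof(void *)];
#define START_OF_BIG_SEGMENT  ((void *)big_segment)

/* ... allocate() and release() exactly as above ... */

int
main(void)
{
    void *a, *b, *c;

    a = allocate();
    b = allocate();
    release(a);
    c = allocate();     /* served from the free list, so c == a */
    printf("a=%p b=%p c=%p, reused: %d\n", a, b, c, a == c);
    return 0;
}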

Decreasing the allocation pointer after a release can be more complex, since free blocks can be reliably identified only by following the free list, which goes through them in unpredictable order. If being able to shrink the big segment is important to you, you may want to use an alternative technique, with more overhead: between any two allocated blocks, you put a "hole". The holes are linked together with a doubly-linked list, in memory order. You need a data format for a hole such that you can locate the hole's start address knowing where it ends, and its size knowing where it begins in memory. Then, when you release a block, you create a hole which you merge with the next and the previous holes, rebuilding (still in constant time) the ordered list of all holes. The overhead is then about two pointer-sized words per allocated block; but, at that price, you can reliably detect the occurrence of a "final hole", i.e. an opportunity to decrease the big segment size.
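To make that data format concrete, here is one possible layout, a minimal sketch under my own naming (none of the structs below are prescribed by the answer): every region, hole or allocated block, carries a small tag at both ends, so release() can classify and locate its neighbours in constant time.

#include <stddef.h>

/* Tag placed at the start and at the end of every region. */
struct tag {
    size_t size;        /* size of the whole region, tags included */
    int    is_hole;     /* nonzero for a hole, zero for an allocated block */
};

/* A hole additionally links into a doubly-linked list of all holes,
 * kept in memory order. */
struct hole {
    struct tag   head;  /* tag at the start of the hole */
    struct hole *prev;  /* previous hole in memory order, or NULL */
    struct hole *next;  /* next hole in memory order, or NULL */
    /* ... unused space ..., then a copy of `head` as a footer tag at
     * the very end, so the hole's start can be recovered from its end:
     * start = (char *)footer + sizeof(struct tag) - footer->size. */
};

/* Releasing a block: turn it into a hole, read the footer tag just
 * below it and the header tag just above it, and merge with the
 * previous and/or next hole if there is one -- a constant number of
 * pointer updates.  A hole that ends at alloc_ptr is the "final hole":
 * the big segment can then be shrunk. */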

There are many possible variations. A good introductory paper is Dynamic Storage Allocation: A Survey and Critical Review by Wilson et al.

OTHER TIPS

This answer is about generic memory management techniques. I missed that the question asks about the case where all blocks have the same size (and are aligned).


The basic strategies you should know are first-fit, next-fit, best-fit, and the buddy system. I once wrote a short summary for a course I taught; I hope it's readable. In it, I point to a fairly exhaustive survey.
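As an illustration of the simplest of these strategies, here is a minimal first-fit sketch (the struct free_block and first_fit names are hypothetical, not taken from the summary or survey): walk the list of free regions and take the first one large enough. Note that the walk is linear, not constant time, which is exactly the point made below.

#include <stddef.h>

struct free_block {
    size_t             size;   /* usable size of this free region */
    struct free_block *next;   /* next free region in the list    */
};

/* First-fit: return the first free region that can hold `wanted`
 * bytes, or NULL if none can (splitting the region and coalescing on
 * free are omitted from this sketch). */
static struct free_block *
first_fit(struct free_block *free_list, size_t wanted)
{
    struct free_block *b;

    for (b = free_list; b != NULL; b = b->next)
        if (b->size >= wanted)
            return b;
    return NULL;
}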

In practice you'll see various modifications of these basic strategies. But none of these is really constant time! I don't think that's possible in the worst case, while using a bounded amount of memory.

You may want to have a look at amortized analysis, and in particular at dynamic arrays. Even if not every single operation is done in constant time, in the long run the average cost per operation is constant.
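Here is a minimal sketch of the standard doubling scheme (the struct and function names are hypothetical): a single push may trigger an O(n) reallocation, but because the capacity doubles, n pushes cost O(n) in total, i.e. constant amortized time per push.

#include <stdlib.h>

struct dynarray {
    int    *data;   /* start from {NULL, 0, 0} */
    size_t  len;
    size_t  cap;
};

/* Append one value, doubling the capacity when the array is full.
 * Returns 0 on success, -1 if the reallocation fails. */
static int
dynarray_push(struct dynarray *a, int value)
{
    if (a->len == a->cap) {
        size_t  new_cap = a->cap ? 2 * a->cap : 8;
        int    *p = realloc(a->data, new_cap * sizeof *a->data);

        if (p == NULL)
            return -1;
        a->data = p;
        a->cap  = new_cap;
    }
    a->data[a->len++] = value;
    return 0;
}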

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange