Question

I am working on a dynamic memory allocation simulation using a fixed-size array in C, and I would like to know the best way to deal with fragmentation. My plan is to split the array into two parts: the left part reserved for small blocks and the right part reserved for big blocks. I would then use the best-fit approach, finding the smallest free block that still satisfies each request. Is there a better approach to avoid fragmentation (where plenty of blocks are free throughout the array, but no single one is large enough for the request)?
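For context, a minimal sketch of the scheme described above, assuming a byte pool split in half with one free list per region; all names (pool, Block, best_fit, my_alloc) are hypothetical, and freeing/coalescing is omitted:

```c
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE   4096
#define SMALL_LIMIT 64              /* requests <= this go to the left region */

static uint8_t pool[POOL_SIZE];

typedef struct Block {
    size_t offset;                  /* start of the free block inside pool */
    size_t size;                    /* bytes available */
    struct Block *next;
} Block;

/* Initially each region is one big free block. */
static Block small_region = { 0, POOL_SIZE / 2, NULL };
static Block large_region = { POOL_SIZE / 2, POOL_SIZE / 2, NULL };
static Block *small_free = &small_region;
static Block *large_free = &large_region;

/* Best fit: return the smallest block in the list that still fits. */
static Block *best_fit(Block *list, size_t want)
{
    Block *best = NULL;
    for (Block *b = list; b != NULL; b = b->next)
        if (b->size >= want && (best == NULL || b->size < best->size))
            best = b;
    return best;
}

void *my_alloc(size_t want)
{
    Block *b = best_fit(want <= SMALL_LIMIT ? small_free : large_free, want);
    if (b == NULL)
        return NULL;                /* no single block is big enough */
    b->size -= want;                /* carve the request off the block's end */
    return pool + b->offset + b->size;
}
```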


Solution

The best approach depends on the usage pattern of your program (the user of your memory manager). If the pattern is to allocate many small fragments and free them frequently, you don't need to be overly aggressive with defragmentation; the occasional large-block allocation can pay the defragmentation cost when it happens. Conversely, if large-block allocations are frequent, it may make sense to defragment more often. But the best strategy (assuming you still want to roll your own) is to implement it in a general, tunable way and then measure the performance impact (counting defragmentation operations, or another metric) on real program runs.
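As one hedged sketch of what "general and tunable" could look like in C: a single threshold knob plus a counter, so the trade-off can be measured on a real run. All names (Allocator, fragmentation, maybe_defragment) are hypothetical, and the compaction itself is left out:

```c
#include <stddef.h>

typedef struct {
    double defrag_threshold;   /* tunable: 0.0 = always compact, 1.0 = never */
    size_t free_bytes;         /* total free space across all blocks */
    size_t largest_free;       /* size of the largest single free block */
    size_t defrag_ops;         /* how often we compacted: the metric to report */
} Allocator;

/* Fragmentation ratio: how much free space is unusable as one contiguous
   block. 0 = fully contiguous, approaching 1 = badly fragmented. */
static double fragmentation(const Allocator *a)
{
    if (a->free_bytes == 0)
        return 0.0;
    return 1.0 - (double)a->largest_free / (double)a->free_bytes;
}

/* Called on allocation failure: compact only when fragmentation exceeds
   the tunable threshold, and count each compaction for later measurement. */
static int maybe_defragment(Allocator *a, size_t want)
{
    if (a->free_bytes < want)
        return 0;                        /* compaction cannot help */
    if (fragmentation(a) < a->defrag_threshold)
        return 0;                        /* below threshold: skip the cost */
    /* compact(a);  -- move live blocks together (not shown) */
    a->defrag_ops++;
    a->largest_free = a->free_bytes;     /* all free space is contiguous now */
    return 1;
}
```

With this shape, tuning amounts to sweeping `defrag_threshold` over a real workload and comparing `defrag_ops` (or wall-clock time) against the allocation failure rate.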

Licensed under: CC-BY-SA with attribution