Question

I'm writing a DirectShow Video splitter/demuxer filter for an embedded system.

For audio, all samples have the same small size, ~5-8 KB (each audio sample fits in DATA_PACKET_MAX_SIZE), so I allocate 200 buffers of DATA_PACKET_MAX_SIZE each, calling RequestAllocator(...).

For video, the samples behave differently, ranging between 100 and 160,000 bytes!

Edit: I know the size of the MAX sample (video and audio) before the splitting starts, i.e. before setting my allocator properties, but it sounds almost criminal to take that much memory away from an embedded system...

Should I create a specific IMemAllocator, on-demand, for each 'large' video sample?

Should I create a few IMemAllocators, holding pools of different size buffers?

Thanks, Rami

Solution

The standard memory allocator works like this: you set the allocator properties, that is a fixed buffer size and buffer count, and then the Commit call allocates the real memory, which you cannot change afterwards. There are two issues with this approach: the first is that you have to choose the buffer size before you have possibly started reading real data, and the other is that if a small number of samples need much larger buffers than the rest - which is typical for temporally compressed formats such as H.264 - then all of your buffers have to be that large and you end up allocating much more memory than you actually need.
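For reference, here is a minimal sketch of that fixed negotiation, assuming pAlloc is an IMemAllocator already obtained for the output pin:

    // Sketch: the standard fixed-size negotiation. Once Commit() succeeds,
    // cbBuffer and cBuffers are locked in until the allocator is decommitted.
    ALLOCATOR_PROPERTIES req = { 0 }, actual = { 0 };
    req.cBuffers = 20;        // buffer count, chosen up front
    req.cbBuffer = 160000;    // must cover the largest sample you expect
    req.cbAlign  = 1;

    HRESULT hr = pAlloc->SetProperties(&req, &actual);
    if (SUCCEEDED(hr))
        hr = pAlloc->Commit(); // the real memory is allocated here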

There are several ways to address the problem.

The cleanest way is to implement a custom memory allocator, where you can allocate and reallocate buffers at runtime to satisfy unexpectedly large requests. It is the output pin that chooses the memory allocator, so you are in a position to install your own, and you are good to go.
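A minimal sketch of that idea, built on the DirectShow base classes (streams.h); the CGrowableSample and CGrowableAllocator names are made up, and error handling is trimmed for brevity:

    #include <streams.h>
    #include <new>

    // A media sample that owns its buffer and can grow it at runtime.
    class CGrowableSample : public CMediaSample
    {
    public:
        CGrowableSample(CBaseAllocator *pAlloc, HRESULT *phr)
            : CMediaSample(NAME("GrowableSample"), pAlloc, phr, NULL, 0) {}
        ~CGrowableSample() { delete [] m_pBuffer; }

        // Enlarge the buffer when a payload does not fit.
        HRESULT EnsureCapacity(long cbRequired)
        {
            if (cbRequired <= m_cbBuffer)
                return S_OK;
            BYTE *pNew = new (std::nothrow) BYTE[cbRequired];
            if (pNew == NULL)
                return E_OUTOFMEMORY;
            delete [] m_pBuffer;
            m_pBuffer = pNew;
            m_cbBuffer = cbRequired;
            return S_OK;
        }
    };

    // An allocator whose samples manage their own, resizable buffers.
    class CGrowableAllocator : public CBaseAllocator
    {
    public:
        CGrowableAllocator(HRESULT *phr)
            : CBaseAllocator(NAME("GrowableAllocator"), NULL, phr) {}

    protected:
        HRESULT Alloc()  // called from Commit()
        {
            HRESULT hr = CBaseAllocator::Alloc();
            if (FAILED(hr))
                return hr;
            for (; m_lAllocated < m_lCount; m_lAllocated++) {
                CGrowableSample *pSample = new (std::nothrow) CGrowableSample(this, &hr);
                if (pSample == NULL || FAILED(hr)) {
                    delete pSample;
                    return E_OUTOFMEMORY;
                }
                pSample->EnsureCapacity(m_lSize);  // start at the negotiated size
                m_lFree.Add(pSample);
            }
            return S_OK;
        }

        void Free()  // called when the last buffer returns after Decommit()
        {
            CMediaSample *pSample;
            while ((pSample = m_lFree.RemoveHead()) != NULL)
                delete pSample;
            m_lAllocated = 0;
        }
    };

Your output pin would then offer this allocator during pin connection (for a CBaseOutputPin-derived pin, in DecideAllocator), and before filling each sample it can cast the sample back to CGrowableSample and call EnsureCapacity with the payload size.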

Another way, which is more or less safe and which is used by some popular filters, is to decommit the allocator and commit it again at runtime when you need a larger buffer. That is, once you see at runtime that your payload does not fit into the buffer, you decommit the allocator, update its properties, commit it back and continue as if nothing happened at all. This normally works, but it is not as clean as the first method.
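A sketch of that re-commit sequence, assuming pAlloc is the IMemAllocator already negotiated on the pin connection (EnlargeAllocator is a made-up helper name):

    // Sketch: grow a committed allocator in place. Note that all outstanding
    // media samples must be released back to the allocator first, otherwise
    // SetProperties will fail with VFW_E_BUFFERS_OUTSTANDING.
    HRESULT EnlargeAllocator(IMemAllocator *pAlloc, long cbRequired)
    {
        ALLOCATOR_PROPERTIES req, actual;
        HRESULT hr = pAlloc->GetProperties(&req);
        if (FAILED(hr))
            return hr;
        if (cbRequired <= req.cbBuffer)
            return S_OK;               // current buffers are large enough

        hr = pAlloc->Decommit();
        if (FAILED(hr))
            return hr;

        req.cbBuffer = cbRequired;     // new, larger buffer size
        hr = pAlloc->SetProperties(&req, &actual);
        if (FAILED(hr))
            return hr;

        return pAlloc->Commit();       // reallocate and continue streaming
    }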

Then, with many filters, you can deliberately deliver downstream a media sample that does not come from the negotiated allocator at all. This is less safe, and it is not how things were supposed to work in the first place, but it still works out well in many scenarios.
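One way to sketch that, again on top of the base classes: a self-deleting CMediaSample subclass that never returns to an allocator (COneOffSample is a made-up name, and keeping the buffer caller-owned is this sketch's own convention):

    // A one-off sample wrapping a caller-owned buffer. The stock
    // CMediaSample::Release() hands the sample back to its allocator,
    // so with no allocator we self-delete instead.
    class COneOffSample : public CMediaSample
    {
    public:
        COneOffSample(HRESULT *phr, BYTE *pBuffer, LONG cb)
            : CMediaSample(NAME("OneOffSample"), NULL, phr, pBuffer, cb) {}

        STDMETHODIMP_(ULONG) Release()
        {
            LONG lRef = InterlockedDecrement(&m_cRef);
            if (lRef == 0) {
                delete this;           // no allocator to return to
                return 0;
            }
            return (ULONG)lRef;
        }
    };

    // Delivery: CMediaSample starts with a reference count of zero,
    // so take a reference before handing the sample downstream.
    COneOffSample *pSample = new COneOffSample(&hr, pHugePayload, cbPayload);
    pSample->AddRef();
    pSample->SetActualDataLength(cbPayload);
    hr = pMemInput->Receive(pSample);  // downstream pin's IMemInputPin
    pSample->Release();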

OTHER TIPS

Perhaps this suggestion will not work in your environment for some reason, but here is one idea:

Create a two-element array of IMediaSample pointers and set both elements to NULL. The first time you need an element, check it against NULL and create an IMediaSample instance; otherwise simply reuse that media sample and pass it to the next filter. Do the same for the second element, and iterate over the two circularly (see the sketch below). Note that this will work ONLY if no downstream filter holds the IMediaSample for a long time (async processing, etc...). You can detect that by checking the reference count (and you will probably see heavy artifacts in the video if that's the case). The penalty is one extra frame's worth of memory held at all times, but if you can afford it, you will not have to allocate memory for each new IMediaSample.
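A sketch of that two-slot recycling, where CreateLargeSample stands in for whatever IMediaSample factory you use (for instance the self-deleting sample from the answer above), so treat it as hypothetical:

    IMediaSample *CreateLargeSample(long cbRequired);  // hypothetical factory

    // Two lazily created samples, reused in round-robin fashion.
    static IMediaSample *g_apSample[2] = { NULL, NULL };
    static int g_iNext = 0;

    IMediaSample *GetRecycledSample(long cbRequired)
    {
        IMediaSample *&pSlot = g_apSample[g_iNext];
        g_iNext = (g_iNext + 1) % 2;   // iterate circularly over the two slots

        if (pSlot == NULL) {
            pSlot = CreateLargeSample(cbRequired);
            return pSlot;
        }

        // Probe the reference count: if anyone besides our array slot
        // still holds the sample, a downstream filter has not released
        // it yet and reusing it now would corrupt a frame in flight.
        pSlot->AddRef();
        ULONG cRefs = pSlot->Release();
        if (cRefs > 1)
            return NULL;               // still in use downstream

        return pSlot;
    }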

As I said, this might not be usable in your environment.....

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow