Question

I am currently reading audio floats from a file using Dirac's (OSStatus)readFloatsConsecutive:(SInt64)numFrames intoArray:(float**)audio method. I create a float ** array:

arrayToFill = malloc(channelCount * sizeof(float*));

for(int i = 0; i < channelCount; ++i)
{
    arrayToFill[i] = malloc(frameCount * sizeof(float));
}

and pass it to the Dirac function. When all the floats are malloced, I get a massive memory spike.

In Instruments I see spikes that grow by about 90 MB, and for some reason the app still runs on the device.

Would e.g. 15839544 * 2 floats cause these massive spikes?

How can it use so much memory? Is it virtual memory? I don't see any VM allocations.

I don't see how loading a single audio file of e.g. 5 MB can result in such massive spikes in memory.


Solution

Would e.g. 15839544 * 2 floats cause these massive spikes?

Yes, absolutely. A float is 4 bytes, so two arrays of 15.8 million floats apiece come to around 120 MB total.

As far as how you're ending up with this from a 5 MB input file: Audio compression is an amazing thing. :)

OTHER TIPS

It's probably virtual memory - although not in the way it is commonly (mis)understood.

Virtual memory is address space mapped into a process. It may or may not be backed by physical pages of memory.

Access to a page that is not so backed results in a page fault, which the kernel then services in one of a number of ways:

  • Allocating a new zeroed page
  • Allocating a page and filling its contents with a page of a memory mapped file
  • Allocating a page and filling its contents from the page-file
  • Not doing any of the above and terminating the application

Thus, a malloc() for a large amount of memory (larger than the physical pages available) tends to succeed as long as the operating system has enough RAM to allocate page descriptors to map the virtual space into the process (although it might decline if resource limits are exceeded at this point). Attempts to actually write into the allocated space gradually pull physical pages into the process.

The size you indicate is actually ~120 MB of memory. It's pretty unlikely you have this much physical RAM to play with on an iDevice, so I think we can assume it's not all being used. You can probably get stats for the number of page faults - this will give you a good idea of the amount used (at 4 kB per page, presumably).

I would expect the VM stats for your process to include this allocation.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow