Question

I am building a CoreAudio application where I need to process raw input samples from the microphone. I use two threads: one thread produces samples from the microphone input and the other thread consumes them. These threads share a circular buffer (which works well), whose implementation I took from GitHub (a good example can be found on the author's blog).

As a minimal example for my problem, I have put together a small Xcode project on GitHub, so you can see the full code. The app puts the microphone samples into a circular buffer and then reads the buffer from another thread. The consuming thread computes the average rectified value (ARV) of 500 samples and outputs the ARV on the console.

When I run the app in the iOS Simulator (5.1), everything works fine and I get the desired output:

2012-08-21 20:58:31.882 BufferedSamples[23505:6003] 88
2012-08-21 20:58:31.890 BufferedSamples[23505:6003] 108
2012-08-21 20:58:31.890 BufferedSamples[23505:6003] 137
2012-08-21 20:58:31.891 BufferedSamples[23505:6003] 137
2012-08-21 20:58:31.892 BufferedSamples[23505:6003] 106
2012-08-21 20:58:31.901 BufferedSamples[23505:6003] 140
...

When I try to run the app on a device (I tried iPhone 3/3GS/4) instead, I get an EXC_BAD_ACCESS error at runtime due to a NULL pointer. Therefore I added a check for a NULL pointer in the CoreAudio callback function (in the file DummyRecorder.m):

// render samples into buffer
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = inNumberFrames * kTwoBytesPerSInt16;
bufferList.mBuffers[0].mData = NULL;
AudioUnitRender(dummyRecorder->audioUnit, ioActionFlags, inTimeStamp, kInputBus, inNumberFrames, &bufferList);

// move samples to ring buffer
if (bufferList.mBuffers[0].mData != NULL)
    TPCircularBufferProduceBytes(&dummyRecorder->buffer, bufferList.mBuffers[0].mData, bufferList.mBuffers[0].mDataByteSize);
else
    NSLog(@"null pointer");

When I run the app in the simulator, I still get the desired output, but on the device I get the following:

2012-08-21 21:15:38.903 BufferedSamples[544:3b03] null pointer
2012-08-21 21:15:38.926 BufferedSamples[544:3b03] null pointer
2012-08-21 21:15:38.949 BufferedSamples[544:3b03] null pointer
2012-08-21 21:15:38.972 BufferedSamples[544:3b03] null pointer
2012-08-21 21:15:38.996 BufferedSamples[544:3b03] null pointer
2012-08-21 21:15:39.019 BufferedSamples[544:3b03] null pointer
...

How is it possible that CoreAudio allocates and fills a buffer correctly in the simulator, but on the device leaves the buffer untouched? What have I missed?

The complete code of my example can be found on GitHub.

Edit

I added a method to check for errors after CoreAudio calls, as @MichaelTyson suggested. Now I am doing this in my callback:

OSStatus err = AudioUnitRender(dummyRecorder->audioUnit, ioActionFlags, inTimeStamp, kInputBus, inNumberFrames, &bufferList);

// move samples to ring buffer
if (checkResult(err, "AudioUnitRender"))
    TPCircularBufferProduceBytes(&dummyRecorder->buffer, bufferList.mBuffers[0].mData, bufferList.mBuffers[0].mDataByteSize);

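Such an error-check helper can be sketched in plain C roughly like this (the exact name, signature, and log format of the project's checkResult are assumptions on my part):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef int32_t OSStatus; /* matches CoreAudio's OSStatus on Apple platforms */

/* Returns true if the call succeeded; on failure, logs the operation name
   plus the status as decimal and as hex (as in the log output below). */
static bool checkResult(OSStatus result, const char *operation)
{
    if (result == 0) /* noErr */
        return true;
    fprintf(stderr, "%s result %d %08X\n",
            operation, (int)result, (unsigned int)result);
    return false;
}
```

A result of -50 is kAudio_ParamError (hex FFFFFFCE), meaning AudioUnitRender rejected one of its parameters — which points at a buffer or stream-format setup problem rather than a threading issue.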
I have also updated the code on GitHub. Running the app in the simulator produces no errors, whereas running it on a device gives me the following errors:

2012-08-22 11:19:49.248 BufferedSamples[637:3b03] /DummyRecorder.m:50: AudioUnitRender result -50 FFFFFFCE Œˇˇˇ
2012-08-22 11:19:49.271 BufferedSamples[637:3b03] /DummyRecorder.m:50: AudioUnitRender result -50 FFFFFFCE Œˇˇˇ
2012-08-22 11:19:49.294 BufferedSamples[637:3b03] /DummyRecorder.m:50: AudioUnitRender result -50 FFFFFFCE Œˇˇˇ
2012-08-22 11:19:49.317 BufferedSamples[637:3b03] /DummyRecorder.m:50: AudioUnitRender result -50 FFFFFFCE Œˇˇˇ
2012-08-22 11:19:49.341 BufferedSamples[637:3b03] /DummyRecorder.m:50: AudioUnitRender result -50 FFFFFFCE Œˇˇˇ
2012-08-22 11:19:49.364 BufferedSamples[637:3b03] /DummyRecorder.m:50: AudioUnitRender result -50 FFFFFFCE Œˇˇˇ
...

Solution

There's usually a bit of guesswork involved in figuring this stuff out, as it's hard to keep all the parameters in mind, but the first thing I would try is setting the stream format on the output scope of your audio unit's input bus (bus 1).

Currently you're only setting it on the input scope of the output bus (bus 0), and I'm not 100% certain that this is sufficient.
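As a sketch, that extra call would look something like the following, assuming `audioUnit` and `audioFormat` are the remote I/O unit and the AudioStreamBasicDescription already configured in the project's setup code (variable names are assumptions):

```c
// Set the stream format on the *output* scope of the input bus (bus 1),
// i.e. the format in which the input hardware delivers samples to the
// render callback. kInputBus = 1 matches the bus constant in the question.
OSStatus status = AudioUnitSetProperty(audioUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Output,
                                       kInputBus,
                                       &audioFormat,
                                       sizeof(audioFormat));
// check status with checkResult() as elsewhere in the project
```

This is a platform-specific configuration fragment; it only compiles against AudioToolbox on an Apple platform.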

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow