Question

I'm building an app that will generate sound (for now it's mostly experimental) and play it on an Android phone.

For now I'm trying to play a simple sine wave (440 Hz). I first tried with an AudioTrack but experienced some buffer underruns, so I decided to take a look at OpenSL.

Now I've read lots of tutorials and blog posts on this, and finally made my own implementation, using an OpenSL engine with an Android simple buffer queue.

Now, in the buffer callback, I generate the next buffer of data and add it to the queue, but the latency is much worse than with the AudioTrack (I can hear gaps between buffers).

My question is: what is the best practice / architecture for generated sound in OpenSL? Should I fill the buffers in a separate thread (which would then need some synchronization with the buffer callback)?

I haven't yet found any tutorials on OpenSL ES for generated sound (most cover playing audio files or redirecting audio input to audio output).


Solution

Regarding the latency: it is important to choose the right sample rate and buffer size for your device. You can query the device for the recommended values using the Android SDK's AudioManager (PROPERTY_OUTPUT_SAMPLE_RATE and PROPERTY_OUTPUT_FRAMES_PER_BUFFER are only available from API level 17) and pass the values on to the NDK side:

// query the sample rate and buffer size recommended for this device (API 17+)
int sampleRate = 44100; // sensible fallbacks in case the query fails
int bufferSize = 512;
if ( android.os.Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR1 )
{
    AudioManager am = ( AudioManager ) aContext.getSystemService( Context.AUDIO_SERVICE );
    // getProperty() can return null on some devices, so guard the parse
    String sr  = am.getProperty( AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE );
    String fpb = am.getProperty( AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER );
    if ( sr != null )  sampleRate = Integer.parseInt( sr );
    if ( fpb != null ) bufferSize = Integer.parseInt( fpb );
}

The importance of getting the sample rate right is that if it differs from the device's preferred rate (some devices use 48 kHz, others 44.1 kHz), the audio is routed through a system resampler before it is output by the hardware, adding to the overall latency. Getting the buffer size right matters because otherwise samples/frames are dropped after several buffer callbacks, which can lead to exactly the problem you describe, where gaps / glitches occur between callbacks. You can use power-of-two multiples of the reported buffer size for experimenting: a larger buffer gives a more stable engine, a smaller buffer gives a faster response.
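To illustrate, here is a minimal sketch (class and method names are my own, not from any Android API) of scaling the reported buffer size by a power-of-two multiple and rendering one buffer of a 440 Hz sine wave on the Java side; the same per-buffer loop translates directly into the C code of a buffer-queue callback:

```java
// Sketch: scale the device-reported frames-per-buffer and render a sine tone.
// All names here are illustrative, not part of the Android SDK or NDK.
public class SineBuffer
{
    // e.g. multiplier 1 for lowest latency, 2 or 4 for more stability
    public static int scaledBufferSize( int reportedFrames, int multiplier )
    {
        return reportedFrames * multiplier;
    }

    // Fill one buffer with a sine tone; the phase must be carried over
    // from callback to callback, otherwise you get clicks at buffer edges.
    public static double fillSine( float[] out, double phase,
                                   double frequency, int sampleRate )
    {
        final double phaseIncrement = 2.0 * Math.PI * frequency / sampleRate;
        for ( int i = 0; i < out.length; i++ )
        {
            out[ i ] = ( float ) Math.sin( phase );
            phase += phaseIncrement;
            if ( phase > 2.0 * Math.PI )
                phase -= 2.0 * Math.PI; // wrap to keep precision over time
        }
        return phase; // feed this back in on the next callback
    }
}
```

The returned phase is the key detail: each callback continues exactly where the previous one left off, so the waveform stays continuous across buffer boundaries.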

Having created some simple Android apps doing exactly this, I've written a small write-up explaining the above recommendations in slightly more detail, along with how a basic sequenced engine for music-related applications could be constructed. Note that the page is just a basic architecture outline and might be completely useless depending on your needs > Android audio engine in OpenSL
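As for filling buffers from a separate thread: one common pattern is to pre-render buffers on a producer thread and let the callback only swap pointers, so the callback itself never does heavy work. A conceptual Java sketch of such a buffer exchange (in a real OpenSL app the callback side would live in C; every name here is hypothetical):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: a render thread pre-fills buffers while the audio callback drains
// them. Two bounded queues pass buffers back and forth without allocation.
public class DoubleBufferQueue
{
    private final BlockingQueue<float[]> filled; // ready for playback
    private final BlockingQueue<float[]> free;   // ready to be rendered into

    public DoubleBufferQueue( int numBuffers, int bufferSize )
    {
        filled = new ArrayBlockingQueue<>( numBuffers );
        free   = new ArrayBlockingQueue<>( numBuffers );
        for ( int i = 0; i < numBuffers; i++ )
            free.offer( new float[ bufferSize ] );
    }

    // render thread side: grab an empty buffer (null if none available)
    public float[] acquireFree()            { return free.poll(); }
    public void submitFilled( float[] buf ) { filled.offer( buf ); }

    // callback side: must never block; null signals an underrun
    public float[] pollFilled()        { return filled.poll(); }
    public void recycle( float[] buf ) { free.offer( buf ); }
}
```

With two or three buffers in flight, the render thread can absorb scheduling jitter while the callback stays cheap; the trade-off is that every extra buffer in the queue adds one buffer's worth of latency.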

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow