Question

I am rendering audio in the browser (mobile/desktop) which arrives over the network (via web sockets) as a potentially endless stream of successive audio buffers (Float32Array typed arrays) and to minimize chance of playback starvation I'd like to queue up multiple buffers prior to start of audio rendering. Does Web Audio API support the notion of queuing up multiple buffers (like OpenAL) to be rendered sequentially in a streaming fashion ? I am not speaking of simultaneous rendering of multiple buffers. Before I roll my own ...

Solution

This is something you need to handle yourself. As far as I know, there is no way to queue up multiple buffers for the API to play back on its own.

You can use a ScriptProcessorNode to implement yourself.
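An alternative to a ScriptProcessorNode is to schedule each incoming chunk as its own AudioBufferSourceNode, back to back on the context clock. A minimal sketch of that idea (the names `createChunkScheduler` and `enqueue` are illustrative, not part of any API; the 50 ms headroom is an arbitrary safety margin):

```javascript
// Schedule each incoming Float32Array chunk as its own
// AudioBufferSourceNode, placed exactly where the previous chunk ends.
function createChunkScheduler(context, sampleRate) {
  let nextTime = 0; // context-clock time at which the next chunk should start
  return function enqueue(samples /* Float32Array, mono */) {
    const buffer = context.createBuffer(1, samples.length, sampleRate);
    buffer.getChannelData(0).set(samples);
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    // Never schedule in the past; leave a little headroom on the first chunk.
    nextTime = Math.max(nextTime, context.currentTime + 0.05);
    source.start(nextTime);
    nextTime += buffer.duration; // next chunk begins when this one ends
    return nextTime;
  };
}
```

As long as chunks keep arriving before `nextTime` catches up with `context.currentTime`, playback is gapless; each source node is garbage-collected after it finishes.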

Other tips

The Web Audio API does not support a queue of buffers. Instead you can concatenate the buffers yourself (let's say you have buffer1 and buffer2):

var tempBuffer = context.createBuffer(buffer1.numberOfChannels, buffer1.length + buffer2.length, buffer1.sampleRate);
// now we need to concatenate the buffers for each channel
for (var i = 0; i < buffer1.numberOfChannels; i++) {
    var channel = tempBuffer.getChannelData(i);
    channel.set(buffer1.getChannelData(i), 0); // copy buffer1's data into the channel, starting at offset 0
    channel.set(buffer2.getChannelData(i), buffer1.length); // start at offset buffer1.length, so buffer2 lands exactly after buffer1
}
// tempBuffer now contains your 2 buffers concatenated
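The same pattern extends to any number of chunks. A hedged sketch of a generic helper (the name `concatBuffers` is illustrative; it assumes every chunk shares the same channel count and sample rate):

```javascript
// Concatenate an array of AudioBuffers into one new AudioBuffer.
// Assumes every chunk has the same numberOfChannels and sampleRate.
function concatBuffers(context, chunks) {
  const total = chunks.reduce((n, b) => n + b.length, 0);
  const out = context.createBuffer(chunks[0].numberOfChannels, total, chunks[0].sampleRate);
  for (let ch = 0; ch < out.numberOfChannels; ch++) {
    const data = out.getChannelData(ch);
    let offset = 0;
    for (const chunk of chunks) {
      data.set(chunk.getChannelData(ch), offset); // append this chunk's samples
      offset += chunk.length;
    }
  }
  return out;
}
```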

Since you are talking about a stream, this gets harder if you only have one incoming stream. If you use a ScriptProcessorNode to cut out the empty stretches, you have to either fill the gap with something or just remove the silence; but then the returned block is too short and you still get gaps. A solution is a buffer outside the ScriptProcessorNode. Every time the processor fires, check for silence, cut it out, append the remainder to that external buffer, and output nothing. Then, once you have x seconds of audio in that buffer, start playing it. The only downside is that the x-second safety margin shrinks every time you remove something, and at some point the buffer will be so small that there is no audio left to play and playback may starve. Besides that, I am not sure about appending audio to a buffer while it is playing; you should try that out first.
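The "buffer outside the ScriptProcessor" idea above can be sketched roughly as follows. This is a simplified illustration, not a production implementation: `createSilenceTrimmingQueue`, `push`, and `pull` are invented names, the per-sample threshold test is a crude stand-in for real silence detection, and the plain array is not optimized:

```javascript
// Silence-trimmed samples accumulate in `pending`; the output stays silent
// until at least `minSeconds` of audio has been collected, then drains FIFO.
function createSilenceTrimmingQueue(sampleRate, minSeconds, threshold) {
  let pending = []; // retained samples waiting to be played
  let started = false;
  return {
    // Call from onaudioprocess with the input block; drops near-silent samples.
    push(input /* Float32Array */) {
      for (const s of input) {
        if (Math.abs(s) > threshold) pending.push(s);
      }
    },
    // Call from onaudioprocess to fill the output block.
    pull(output /* Float32Array */) {
      if (!started && pending.length < minSeconds * sampleRate) {
        output.fill(0); // still buffering: stay silent
        return false;
      }
      started = true;
      const n = Math.min(output.length, pending.length);
      for (let i = 0; i < n; i++) output[i] = pending[i];
      pending = pending.slice(n);
      for (let i = n; i < output.length; i++) output[i] = 0; // pad on starvation
      return true;
    },
  };
}
```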

This is a problem that screams for Media Source Extensions: https://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html.

It's available in Chrome, IE11, and soon in Firefox.

If you want to do it using web audio nonetheless, check out this project, and this other code.

I rolled my own implementation ... the browser initiates a websocket connection to a nodejs server, which responds by sending a stream of typed array buffers back to the browser ... a web worker in the browser manages all the websocket traffic and populates a Transferable Object shared buffer that is accessible from the browser-side Web Audio API event loop ... as the event loop consumes this circular queue of buffers, it triggers the web worker to request more from the server side ... it worked when I finished it using nodejs 0.10.x, however modern nodejs breaks it; see source here
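The consumer side of an architecture like that can be sketched as a small queue with a low-water mark: the worker pushes Float32Arrays in (ideally transferred zero-copy via `postMessage(buf, [buf])`), and the audio side shifts them out, asking for more before it starves. All names here (`createChunkQueue`, `requestMore`, the low-water value) are illustrative, not from the linked source:

```javascript
// Browser-side chunk queue fed by a web worker. The audio callback shifts
// chunks out; whenever the queue drops below `lowWater`, `requestMore` is
// invoked (e.g. worker.postMessage('more')) to refill before starvation.
function createChunkQueue(lowWater, requestMore) {
  const chunks = [];
  return {
    push(samples /* Float32Array from the worker */) { chunks.push(samples); },
    shift() {
      const next = chunks.shift() || null; // null signals starvation
      if (chunks.length < lowWater) requestMore();
      return next;
    },
    get depth() { return chunks.length; },
  };
}
```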

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow