Question

Is it possible to load an audio file via an `<audio>` element and createMediaElementSource, and then load that audio data into an AudioBufferSourceNode?

Using the audio element as a source (MediaElementSource) does not seem to be an option, as I want to use buffer methods like noteOn and noteGrainOn.
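For reference, those buffer methods let you schedule slices of a decoded AudioBuffer at precise times. A rough sketch of granular scheduling — the `grainSchedule` helper and the 0.25 s grain length are made up for illustration, and the wiring assumes a browser AudioContext:

```javascript
// Build a list of (when, offset, duration) triples covering a buffer,
// one entry per grain. Pure helper; the Web Audio wiring is below.
function grainSchedule(totalDuration, grainLength) {
  var grains = [];
  for (var offset = 0; offset < totalDuration; offset += grainLength) {
    grains.push({
      when: offset,                                            // schedule time relative to "now"
      offset: offset,                                          // read position inside the buffer
      duration: Math.min(grainLength, totalDuration - offset)  // clamp the final grain
    });
  }
  return grains;
}

// Browser wiring (illustrative): one AudioBufferSourceNode per grain,
// since a buffer source can only be started once.
// grainSchedule(decodedBuffer.duration, 0.25).forEach(function (g) {
//   var s = context.createBufferSource();
//   s.buffer = decodedBuffer;
//   s.connect(context.destination);
//   s.noteGrainOn(context.currentTime + g.when, g.offset, g.duration);
// });
```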

Loading the audio file directly into a buffer via XHR unfortunately isn't an option either (see Open stream_url of a Soundcloud Track via Client-Side XHR?).

Loading the buffer contents from the audio element does seem to be possible, though:

http://www.w3.org/2011/audio/wiki/Spec_Differences#Reading_Data_from_a_Media_Element

Or is it even possible to use the buffer of an `<audio>` element directly as a source node?


Solution

It seems it's not possible to extract the audio buffer from a MediaElementSourceNode.

see https://groups.google.com/a/chromium.org/forum/?fromgroups#!topic/chromium-html5/HkX1sP8ONKs

Any reply proving me wrong is very welcome!

OTHER TIPS

This is possible. See my post at http://updates.html5rocks.com/2012/02/HTML5-audio-and-the-Web-Audio-API-are-BFFs, which includes a code snippet and an example. There are a few outstanding bugs, but feeding an `<audio>` element into the Web Audio API should work as you want.

// Create an <audio> element dynamically.
var audio = new Audio();
audio.src = 'myfile.mp3';
audio.controls = true;
audio.autoplay = true;
document.body.appendChild(audio);

var context = new (window.AudioContext || window.webkitAudioContext)();
var analyser = context.createAnalyser();

// Wait for window.onload to fire. See crbug.com/112368
window.addEventListener('load', function(e) {
  // Our <audio> element will be the audio source.
  var source = context.createMediaElementSource(audio);
  source.connect(analyser);
  analyser.connect(context.destination);

  // ...call requestAnimationFrame() and render the analyser's output to canvas.
}, false);

I'm not sure whether you've found a better solution yet; I also checked the W3C link you posted: http://www.w3.org/2011/audio/wiki/Spec_Differences#Reading_Data_from_a_Media_Element

But for it to really work you have to use AudioContext.createScriptProcessor(). I haven't tried this yet, but basically you connect the source node (the MediaElementSourceNode wrapping the audio element) to a script processor, and you don't even have to output the audio if you don't need it. In the onaudioprocess callback you get direct access to the audio buffer data (in chunks of a specified size, of course). There are examples in the link above.
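A minimal sketch of that approach — the `makeChunkCollector` helper and the 4096 buffer size are illustrative, and the Web Audio wiring assumes a browser context:

```javascript
// Accumulate Float32Array chunks handed out by onaudioprocess and
// merge them into one contiguous array on demand.
function makeChunkCollector() {
  var chunks = [];
  return {
    push: function (channelData) {
      // Copy: the underlying buffer is reused between callbacks.
      chunks.push(new Float32Array(channelData));
    },
    merge: function () {
      var total = chunks.reduce(function (n, c) { return n + c.length; }, 0);
      var out = new Float32Array(total);
      var offset = 0;
      chunks.forEach(function (c) { out.set(c, offset); offset += c.length; });
      return out;
    }
  };
}

// Browser wiring (illustrative):
// var context = new (window.AudioContext || window.webkitAudioContext)();
// var source = context.createMediaElementSource(audio);
// var processor = context.createScriptProcessor(4096, 1, 1);
// var collector = makeChunkCollector();
// processor.onaudioprocess = function (e) {
//   collector.push(e.inputBuffer.getChannelData(0));
// };
// source.connect(processor);
// processor.connect(context.destination); // required in some browsers for the callback to fire
```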

Also, I think you can tweak the playback speed (e.g. via the element's playbackRate) so that you fill the buffer arrays faster.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow