Question

I have been experimenting with Firefox's Audio API to detect silence in audio. (The goal is to enable semi-automated transcription.)

Surprisingly, this simple code more or less suffices to detect silence and pause:

var audio = document.getElementsByTagName("audio")[0];

audio.addEventListener("MozAudioAvailable", pauseOnSilence, false);

function pauseOnSilence(event) {
  // Look at only the first sample of each frame buffer.
  var val = event.frameBuffer[0];
  if (Math.abs(val) < 0.0001) {
    audio.pause();
  }
}

It's imperfect but as a proof of concept, I'm convinced.
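One reason it's imperfect: sampling only the first value of each frame can misfire on a zero crossing in loud audio. A hedged refinement is to average energy over the whole frame buffer instead. The `rmsLevel` helper below is my own sketch, not part of the original code:

```javascript
// Root-mean-square level of one frame buffer (samples in [-1, 1]).
function rmsLevel(frameBuffer) {
  var sum = 0;
  for (var i = 0; i < frameBuffer.length; i++) {
    sum += frameBuffer[i] * frameBuffer[i];
  }
  return Math.sqrt(sum / frameBuffer.length);
}

// Drop-in replacement for pauseOnSilence that considers the whole frame.
function pauseOnSilence(event) {
  if (rmsLevel(event.frameBuffer) < 0.0001) {
    audio.pause();
  }
}
```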

My question now is, is there a way to do the same thing in Webkit's Audio API? From what I've seen of it, it's more oriented toward synthesis than sound processing (but perhaps I'm wrong?).

(I wish the Webkit team would just implement the same interface that Mozilla has created, and then move on to their fancier stuff...)


Solution

You should be able to do something like this using an AnalyserNode, or perhaps by thresholding in a JavaScriptAudioNode (since renamed ScriptProcessorNode).

For example:

meter.onaudioprocess = function(e) {
  var buffer = e.inputBuffer.getChannelData(0); // Left channel only.
  // TODO: Do the same for the right channel.
  var isClipping = false;
  // Scan the buffer for any sample whose magnitude reaches 1 (clipping).
  for (var i = 0; i < buffer.length; i++) {
    var absValue = Math.abs(buffer[i]);
    if (absValue >= 1) {
      isClipping = true;
      break;
    }
  }
  this.isClipping = isClipping;
  if (isClipping) {
    this.lastClipTime = new Date();
  }
};

Rather than checking for clipping, you can simply check for sufficiently low levels.
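Wired up for silence detection rather than clip detection, the same approach might look like the sketch below. The node setup, the 0.001 RMS threshold, and the helper names are my assumptions, not from the tutorial:

```javascript
// Pure helper: true when a frame's RMS energy falls below the threshold.
function isSilent(buffer, threshold) {
  var sum = 0;
  for (var i = 0; i < buffer.length; i++) {
    sum += buffer[i] * buffer[i];
  }
  return Math.sqrt(sum / buffer.length) < threshold;
}

// Browser-side wiring (hypothetical helper; call once the page has an <audio>).
function attachSilencePause(audioEl, threshold) {
  var ctx = new AudioContext();
  var source = ctx.createMediaElementSource(audioEl);
  var meter = ctx.createScriptProcessor(4096, 1, 1);
  meter.onaudioprocess = function(e) {
    var buffer = e.inputBuffer.getChannelData(0); // Left channel only.
    if (isSilent(buffer, threshold)) {
      audioEl.pause();
    }
  };
  source.connect(meter);
  meter.connect(ctx.destination); // Keeps playback audible.
}
```

Usage would be something like `attachSilencePause(document.querySelector("audio"), 0.001);`.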

Roughly adapted from this tutorial. Specific sample is here.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow