Question

Below is the 'theoretical' pipeline that should cancel out a particular user's audio contribution in an audio conference mixer. The theory goes like this: we invert the user's audio samples and add them to the amixer output, so the user's own contribution should cancel out. However, I can't figure out why it doesn't work in the pipeline below. The idea of the mixer is that it sums all the users' audio contributions, and when streaming back to an individual user, their own contribution is cancelled out with an 'invert' + 'adder' pair of elements.

I suspect clocking. Or is it because these pipelines are separate, i.e. not in a single pipeline?

gst-launch \
  audiotestsrc name="sinewave" wave=sine ! tee name="audio_in_user1" \
  audio_in_user1. ! queue ! audioconvert ! amixer.sink0 \
  audiotestsrc wave=ticks ! queue ! audioconvert !  amixer.sink2 \
  adder name="amixer" ! tee name="mixerout" \
  mixerout. ! queue ! audio_out_user1.sink0 \
  audio_in_user1. ! queue ! audioinvert degree=1 ! audioconvert ! audio_out_user1.sink1 \
  adder name="audio_out_user1" ! alsasink

Here is a sample pipeline where the above theory does work; the pipeline has only one audio source, and its contribution is cancelled out in the adder.

With audioinvert degree=1 (full inversion), the source cancels out completely:

gst-launch \
  audiotestsrc name="sinewave" wave=sine ! tee name="audiosource" \
  audiosource. ! queue ! audioconvert ! adder.sink0 \
  audiosource. ! queue ! audioinvert degree=1 ! audioconvert ! adder.sink1 \
  adder name="adder" ! alsasink

With audioinvert degree=0.55, the inversion is only partial, so the source is not fully cancelled:

gst-launch \
  audiotestsrc name="sinewave" wave=sine ! tee name="audiosource" \
  audiosource. ! queue ! audioconvert ! adder.sink0 \
  audiosource. ! queue ! audioinvert degree=0.55 ! audioconvert ! adder.sink1 \
  adder name="adder" ! alsasink

Solution

I assume you want to implement that algorithm on the server (on the client it's a whole lot more difficult).

Nevertheless, using your example GStreamer pipeline, you'll most likely end up with timing issues (every queue element may delay the audio stream, and once the two branches no longer line up it becomes impossible to cancel out any audio).
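
You can make that problem audible with your own working test pipeline: if anything disturbs one branch so the two no longer line up sample for sample (simulated below by randomly dropping buffers with identity; the drop-probability value is arbitrary), the adder combines non-matching samples and the sine leaks back in:

gst-launch \
  audiotestsrc name="sinewave" wave=sine ! tee name="audiosource" \
  audiosource. ! queue ! audioconvert ! adder.sink0 \
  audiosource. ! queue ! identity drop-probability=0.05 ! audioinvert degree=1 ! audioconvert ! adder.sink1 \
  adder name="adder" ! alsasink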

To illustrate your requirements, I've drawn a (simplified) pipeline (incoming streams are decoded before src[A-D] and the outgoing ones are encoded after stream[A-D]):

[Figure: simplified pipeline diagram, created with the graphviz toolkit]

The yellow boxes in the diagram are time-critical, so I think the easiest way to do this would be to write your own GStreamer element that does the adding and subtracting work. Then place queue elements right before your element's inputs (and after its outputs) to work around network latency issues.
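
As a rough sketch of where those queues would sit for one user's network legs (GStreamer 1.x syntax; the ports, the RTP/Opus caps and the use of audiomixer in place of your own element are only placeholders), the queues go right at the network-facing edges, before and after the time-critical mixing stage:

gst-launch-1.0 \
  udpsrc port=5004 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96" ! \
    rtpjitterbuffer ! rtpopusdepay ! opusdec ! audioconvert ! queue ! mix. \
  audiotestsrc wave=ticks ! audioconvert ! queue ! mix. \
  audiomixer name="mix" ! queue ! audioconvert ! audioresample ! opusenc ! rtpopuspay ! udpsink host=127.0.0.1 port=5006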

Another note: GStreamer may use many threads for one pipeline (especially if it's not linear, like in your case), so if you get a lot of clients you might hit the thread limit on your server (see this SO question). I hit that limit once with GStreamer; I could have put everything into one or two pipelines, but I didn't want read errors to stop all of my audio streams, and a rather simple custom plugin helped me deal with the problem (each pipeline used two threads afterwards instead of eight).
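
If you want to check how close you are to that limit on Linux, something like this works (the process name my-mixer-app is just a placeholder for your own application):

# per-user limit on processes/threads (threads count against it)
ulimit -u
# number of threads a running GStreamer process currently uses
ps -o nlwp= -p "$(pidof my-mixer-app)"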
