Question

I'm trying to get started with reactive-banana and want to create a simple synthesizer. There are lots of GUI examples, but I have trouble applying them to audio. Since audio APIs have callbacks that say "give me n samples of audio", I figure I should fire an event on each callback (using the snd part of what newAddHandler returns) that contains the number of samples to generate, a pointer to where they should be written, and timing info to coordinate MIDI events. The IO action passed to reactimate would then write the samples to the pointer. MIDI events would be fired similarly from another callback and would also carry timing info.
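For concreteness, here is roughly the wiring I have in mind (a sketch against the current reactive-banana API; AudioRequest and renderAudio are just placeholder names of mine):

    import Foreign.Ptr (Ptr, nullPtr)
    import Reactive.Banana
    import Reactive.Banana.Frameworks

    -- Hypothetical payload for one "give me n samples" callback.
    data AudioRequest = AudioRequest
        { reqSamples :: Int        -- how many samples to generate
        , reqBuffer  :: Ptr Double -- where the callback wants them written
        , reqTime    :: Double     -- stream time, to line up MIDI events
        }

    -- Placeholder: a real synth would write reqSamples values at reqBuffer.
    renderAudio :: AudioRequest -> IO ()
    renderAudio _ = return ()

    main :: IO ()
    main = do
        (addAudioRequest, fireAudioRequest) <- newAddHandler
        network <- compile $ do
            eRequest <- fromAddHandler addAudioRequest
            reactimate (renderAudio <$> eRequest)
        actuate network
        -- The audio API's callback would now call fireAudioRequest
        -- with a description of the block it wants.
        fireAudioRequest (AudioRequest 64 nullPtr 0)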

This is where I get stuck, however. I guess the audio signal is supposed to be a behavior, but how do I "run" a behavior for the right amount of time to obtain the samples? The right amount, of course, depends on MIDI events that might occur between two audio callbacks.


Solution 2

To approach problems like this, I find it useful to take a semantic viewpoint: What is an audio signal? What type can I use to represent it?

Essentially, an audio signal is a time-varying amplitude

Audio = Time -> Double

which suggests representing it as a behavior

type Audio = Behavior Double

Then, we can use the <@> combinator to query the amplitude at a particular moment in time, namely whenever an event happens.
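For instance, a minimal network description along these lines (eTick stands in for whatever event drives the sampling; the value-tagging variant of the combinator is spelled <@ in reactive-banana 1.x):

    import Reactive.Banana
    import Reactive.Banana.Frameworks

    -- Sample the amplitude behavior at every occurrence of a tick event.
    networkDescription :: AddHandler () -> MomentIO ()
    networkDescription tick = do
        eTick <- fromAddHandler tick
        -- A toy amplitude that ramps up by 0.1 per tick.
        bAmplitude <- accumB (0 :: Double) ((+ 0.1) <$ eTick)
        -- <@ tags each tick with the behavior's value at that moment.
        let eSample = bAmplitude <@ eTick
        reactimate (print <$> eSample)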

However, for reasons of efficiency, audio data is generally stored in blocks of 64 samples (or 128, 256). After all, processing needs to be fast, and it's important to use tight inner loops. This suggests modeling audio data as a behavior

type Audio = Behavior (Vector Double)

whose values are 64-sample blocks of audio data and which changes whenever the time period corresponding to one block has elapsed.
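A sketch of what such a block-wise signal could look like, here a sine oscillator stepped forward by a hypothetical block-clock event (the event and parameter names are my own):

    import qualified Data.Vector.Unboxed as V
    import Reactive.Banana

    blockSize :: Int
    blockSize = 64

    type Audio = Behavior (V.Vector Double)

    -- A sine oscillator, block by block: each tick of the block clock
    -- advances the behavior to the next 64 samples.
    sineBlocks :: MonadMoment m => Double -> Double -> Event () -> m Audio
    sineBlocks freq sampleRate eBlockClock = do
        bIndex <- accumB (0 :: Int) ((+ 1) <$ eBlockClock)
        pure (blockAt <$> bIndex)
      where
        blockAt n = V.generate blockSize $ \i ->
            let t = fromIntegral (n * blockSize + i) / sampleRate
            in  sin (2 * pi * freq * t)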

Connecting to other APIs is done only after the semantic model has been clarified. In this case, it seems a good idea to write the audio data from the behavior into a buffer, whose contents are then handed over whenever the external API calls your callback.
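A minimal sketch of that handoff, assuming a block-clock event drives the network and an IORef serves as the shared buffer (both names are my own):

    import Data.IORef
    import qualified Data.Vector.Unboxed as V
    import Reactive.Banana
    import Reactive.Banana.Frameworks

    -- On every block-clock tick, copy the behavior's current block into
    -- a shared buffer; the external API's callback only reads the IORef.
    connectToBuffer :: IORef (V.Vector Double)
                    -> Behavior (V.Vector Double)
                    -> Event ()
                    -> MomentIO ()
    connectToBuffer buffer bAudio eBlockClock =
        reactimate (writeIORef buffer <$> (bAudio <@ eBlockClock))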


By the way, I don't know whether reactive-banana-0.8 is fast enough yet to be useful for sample-level audio processing. It shouldn't be too bad, but you may have to choose a rather large block size.

Other tips

Presuming the intention is to do something live, I think firing an event for each callback is going to be extremely limiting. Most audio APIs expect these callbacks to return very quickly (e.g., you would typically never call malloc or do blocking IO in one). Firing an FRP event may work for very simple processing, but if you try to do anything more complex, I think you'll get dropouts in the audio stream.

I would expect a more viable approach is to fire events yourself (from a clock, in response to GUI events, etc.), generate a buffer of audio, and have the callback API read from that buffer. I know that some audio APIs (e.g. portaudio) have a buffered mode which handles some of this automatically. And even if all you have is a callback API, it's not too hard to add a buffer on top of it.
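For illustration, a sketch of that decoupling, with a bounded STM queue standing in for the buffer (queue size and block contents are arbitrary; in a real program the audio callback, not the main thread, would pull from the queue):

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Concurrent.STM
    import qualified Data.Vector.Unboxed as V

    main :: IO ()
    main = do
        queue <- newTBQueueIO 8   -- at most 8 pre-rendered blocks in flight
        -- Producer thread: this is where you would fire your FRP events
        -- and push each rendered block; writeTBQueue blocks when full.
        _ <- forkIO $ mapM_ (atomically . writeTBQueue queue . mkBlock) [0 ..]
        threadDelay 10000  -- give the producer a head start
        -- Stand-in for the audio callback: never waits, just grabs the
        -- next block if one is ready, otherwise outputs silence.
        mblock <- atomically (tryReadTBQueue queue)
        case mblock of
            Just block -> print (V.take 4 block)
            Nothing    -> putStrLn "underrun: would output silence"
      where
        mkBlock :: Int -> V.Vector Double
        mkBlock n = V.replicate 64 (fromIntegral n)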

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow