To approach problems like this, I find it useful to take a semantic viewpoint: what is an audio signal? What type can I use to represent it?
Essentially, an audio signal is a time-varying amplitude:
Audio = Time -> Double
which suggests representing it as a behavior:
type Audio = Behavior Double
Then, we can use the <@> combinator to query the amplitude at a particular moment in time, namely whenever an event happens.
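As a minimal sketch of this idea (the names `volumeAtClick` and `clicks` are made up for illustration; the sampling combinators are from reactive-banana's Reactive.Banana.Combinators):

```haskell
import Reactive.Banana

type Audio = Behavior Double

-- Query the amplitude of the signal at each occurrence of an event,
-- e.g. a mouse click. (<@) samples a behavior at event occurrences.
volumeAtClick :: Audio -> Event () -> Event Double
volumeAtClick audio clicks = audio <@ clicks
```

(In reactive-banana-0.8 the types carried an extra phantom parameter, `Behavior t Double`; the sketch above uses the later, simpler API.)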
However, for reasons of efficiency, audio data is generally stored in blocks of 64 samples (or 128, 256). After all, processing needs to be fast, and it's important to use tight inner loops. This suggests modeling audio data as a behavior
type Audio = Behavior (Vector Double)
whose values are blocks of 64 samples and which changes whenever the time period covered by one block has elapsed.
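To make the blocked representation concrete, here is a hedged sketch of producing successive 64-sample blocks of a sine wave. The names `sineBlock`, `blockSize`, and `sampleRate` are my own; only plain Haskell plus the `vector` package is used, and a combinator like `stepper` or `accumB` could then turn such a block stream into the `Behavior (Vector Double)` above:

```haskell
import qualified Data.Vector.Unboxed as V

blockSize :: Int
blockSize = 64

sampleRate :: Double
sampleRate = 44100

-- The n-th block of a sine wave at the given frequency (in Hz).
-- Sample index within the whole signal is n * blockSize + i.
sineBlock :: Double -> Int -> V.Vector Double
sineBlock freq n = V.generate blockSize $ \i ->
    let t = fromIntegral (n * blockSize + i) / sampleRate
    in  sin (2 * pi * freq * t)
```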
Connecting to other APIs is done only after the semantic model has been clarified. In this case, it seems a good idea to write the audio data from the behavior into a buffer, whose contents are then handed over whenever the external API invokes your callback.
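One simple way to realize that handoff, sketched under the assumption of a single writer and reader (all names here are hypothetical, and a real audio callback would likely want a queue rather than a single slot):

```haskell
import Data.IORef
import qualified Data.Vector.Unboxed as V

type Block = V.Vector Double

-- A one-block buffer shared between the event network and the audio API.
newAudioBuffer :: IO (IORef Block)
newAudioBuffer = newIORef (V.replicate 64 0)

-- Called from the event network (e.g. via reactimate) whenever
-- the behavior produces a new block.
pushBlock :: IORef Block -> Block -> IO ()
pushBlock = writeIORef

-- Called by the external audio API when it needs data.
audioCallback :: IORef Block -> IO Block
audioCallback = readIORef
```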
By the way, I don't know whether reactive-banana-0.8 is fast enough yet to be useful for sample-level audio processing. It shouldn't be too bad, but you may have to choose a rather large block size.