Question

I am very new to audio programming and am having trouble figuring out the right kind of algorithm for converting control events (e.g. MIDI) into real-time sound generation with a buffer.

At the moment I am trying to use data coming from a modulation wheel to alter the pitch of a sine wave. On one thread I am storing the incoming events in a ring buffer, with the value and timeStamp of when they occurred. On the audio thread, I need to fill a buffer with the sine wave, using an arbitrary value->frequency mapping.
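
Roughly, the structure I have looks like this (a simplified sketch; the names and the fixed buffer size are placeholders for my actual code):

```cpp
// Simplified sketch of the control-event ring buffer
// (single producer: MIDI thread, single consumer: audio thread).
#include <atomic>
#include <cstddef>
#include <cstdint>

struct ControlEvent {
    double   value;      // raw mod-wheel value
    uint64_t timeStamp;  // sample (or host) time when it occurred
};

struct EventRingBuffer {
    static constexpr size_t kSize = 256;          // power of two
    ControlEvent events[kSize];
    std::atomic<size_t> writeIndex{0};
    std::atomic<size_t> readIndex{0};

    // called from the MIDI/control thread
    bool push(const ControlEvent& e) {
        size_t w = writeIndex.load(std::memory_order_relaxed);
        size_t next = (w + 1) & (kSize - 1);
        if (next == readIndex.load(std::memory_order_acquire))
            return false;                         // buffer full
        events[w] = e;
        writeIndex.store(next, std::memory_order_release);
        return true;
    }

    // called from the audio thread
    bool pop(ControlEvent& e) {
        size_t r = readIndex.load(std::memory_order_relaxed);
        if (r == writeIndex.load(std::memory_order_acquire))
            return false;                         // buffer empty
        e = events[r];
        readIndex.store((r + 1) & (kSize - 1), std::memory_order_release);
        return true;
    }
};
```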

How is this normally done for good performance? For example, if the buffer is very small, do you just take the most recent value for the control, do you interpolate, or what algorithm do you use to get a value for the control at each sample?


Solution

In the MIDI control thread, you update the control value as soon as the complete MIDI control change message (status byte 0xB0) is received. That control value remains constant until it is updated by another control change for the very same control.
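
A minimal sketch of that idea, assuming the mod wheel arrives as a 14-bit pair of CC 1 (MSB) and CC 33 (LSB) and that the shared value is just an atomic integer (the names here are made up):

```cpp
// Sketch: the MIDI thread just publishes the latest control value.
#include <atomic>
#include <cstdint>

std::atomic<uint16_t> modWheelRaw{0};   // 0x0000 .. 0x3FFF, read by the audio thread

void onControlChange(uint8_t cc, uint8_t data)   // called from the MIDI thread
{
    static uint8_t msb = 0, lsb = 0;
    if (cc == 1)        msb = data & 0x7F;       // coarse (MSB)
    else if (cc == 33)  lsb = data & 0x7F;       // fine (LSB)
    else return;
    modWheelRaw.store((uint16_t(msb) << 7) | lsb, std::memory_order_relaxed);
}
```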

In the audio processing thread, you always refer to the current value of whatever control you're using. You'll probably be scaling and offsetting that control value so that 0x0000 is mapped to some MIN and 0x3FFF is mapped to some MAX value.
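
For example (continuing the sketch above; the MIN and MAX frequencies here are arbitrary picks, and for pitch you might prefer an exponential mapping rather than this linear one):

```cpp
// Sketch: map the raw 14-bit control value to a frequency by scaling and offsetting.
#include <cstdint>

const double MIN_HZ = 220.0;   // arbitrary choice
const double MAX_HZ = 880.0;   // arbitrary choice

double controlToFrequency(uint16_t raw)          // raw in 0x0000 .. 0x3FFF
{
    double t = double(raw) / double(0x3FFF);     // normalize to 0.0 .. 1.0
    return MIN_HZ + t * (MAX_HZ - MIN_HZ);       // scale and offset
}
```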

Perhaps there will be slewing or portamento, but that is processing done in the audio thread on the scaled control value. Your slewing is always done toward the target value, which is the current scaled value of that MIDI control. It can be linear or, more simply, a decaying exponential:

value = (1.0 - slew_rate)*value + slew_rate*target_value;

It's not particularly sophisticated.
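
Putting the pieces together in the audio callback might look something like this. It's only a sketch: it reuses the hypothetical modWheelRaw and controlToFrequency from above, and the sample rate, slew rate, and function names are placeholders.

```cpp
// Sketch of the audio thread: read the current control, scale it,
// slew toward it per sample, and advance the sine phase accordingly.
#include <atomic>
#include <cmath>

const double kTwoPi = 6.283185307179586;

double value = 440.0;            // slewed frequency, persists across callbacks
double phase = 0.0;              // sine phase in radians, persists too
const double sampleRate = 48000.0;
const double slew_rate  = 0.001; // smaller = slower glide (per-sample one-pole)

void renderAudio(float* out, int numFrames)
{
    // one read per buffer is usually fine; per-sample reads also work
    double target_value =
        controlToFrequency(modWheelRaw.load(std::memory_order_relaxed));

    for (int n = 0; n < numFrames; ++n) {
        // decaying-exponential slew toward the target frequency
        value = (1.0 - slew_rate) * value + slew_rate * target_value;

        out[n] = float(std::sin(phase));
        phase += kTwoPi * value / sampleRate;
        if (phase > kTwoPi) phase -= kTwoPi;
    }
}
```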

Licensed under: CC-BY-SA with attribution