Question

I am trying to construct a low-latency metronome using Core Audio.

What I am trying to achieve is this: using Remote IO, I should get a timestamp for every packet of audio I produce. I want to remember the timestamp at which playback started, subtract it from the current timestamp to get the current playback position, and then use that position to generate the audio for the metronome as needed.
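
For illustration, the timestamp arithmetic described above might look something like this inside a render callback (just a sketch; gStartSampleTime and PositionInSeconds are placeholder names, while AudioTimeStamp.mSampleTime is the actual Core Audio field in question):

    #include <AudioToolbox/AudioToolbox.h>

    // Sketch of the idea above: derive the playback position from the
    // per-buffer timestamps that Remote IO hands to the render callback.
    static Float64 gStartSampleTime = -1.0;

    static Float64 PositionInSeconds(const AudioTimeStamp *inTimeStamp,
                                     Float64 sampleRate)
    {
        if (gStartSampleTime < 0.0) {
            // Remember the timestamp of the first buffer as "playback start".
            gStartSampleTime = inTimeStamp->mSampleTime;
        }
        // Elapsed samples since start, converted to seconds.
        return (inTimeStamp->mSampleTime - gStartSampleTime) / sampleRate;
    }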

From my research, this seems to be the best way to create a low-latency metronome. However, attempting to implement it and diving into this framework have been rather daunting. If anyone knows how I could put this together, or could point me to sources where I could find the information I need to make it work, I would be most grateful!

Thank you.

Solution

Ignore the packet timestamps and count samples instead. If you position the start of each metronome sound an exact number of samples apart at a known sample rate, the tempo will be sub-millisecond accurate. Per-packet timestamp resolution is much less precise than that.
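
To make that concrete, here is a minimal sketch of a Remote IO render callback that counts samples rather than reading timestamps. The MetronomeState struct, its field names, and the 1 kHz decaying-sine click are assumptions made for illustration; the code also assumes the unit's output stream format has been set to mono 32-bit float.

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    // Illustrative state shared with the render callback (names are placeholders).
    typedef struct {
        Float64 sampleRate;      // e.g. 44100.0, taken from the output stream format
        UInt64  sampleCount;     // total samples rendered since playback started
        UInt32  samplesPerBeat;  // sampleRate * 60 / BPM, computed once up front
        UInt32  clickFrames;     // click duration in samples, e.g. 0.005 * sampleRate
    } MetronomeState;

    static OSStatus RenderMetronome(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData)
    {
        MetronomeState *state = (MetronomeState *)inRefCon;
        Float32 *out = (Float32 *)ioData->mBuffers[0].mData;

        for (UInt32 i = 0; i < inNumberFrames; i++) {
            // Position within the current beat, in samples; exact by construction.
            UInt64 posInBeat = state->sampleCount % state->samplesPerBeat;

            if (posInBeat < state->clickFrames) {
                // A short decaying 1 kHz sine burst stands in for the click sound.
                Float32 env = 1.0f - (Float32)posInBeat / (Float32)state->clickFrames;
                out[i] = env * sinf(2.0f * (Float32)M_PI * 1000.0f *
                                    (Float32)posInBeat / (Float32)state->sampleRate);
            } else {
                out[i] = 0.0f;
            }
            state->sampleCount++;
        }
        return noErr;
    }

The callback would be attached to the Remote IO unit with AudioUnitSetProperty and kAudioUnitProperty_SetRenderCallback. Because sampleCount advances by exactly inNumberFrames per callback, the beat grid stays sample-accurate regardless of buffer size or any jitter in the per-buffer timestamps.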

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow