Question

I know this is a very broad topic, but I've been floundering around with demos and my own tests and am not sure if I'm attacking the problem correctly. So any leads on where I should start would be appreciated.

The goal is to have the app generate some synthesized sounds, per the user's settings. (This isn't the only app function, I'm not recreating Korg here, but synth is part of it.) The user would set the typical synth settings like wave, reverb, etc. then would pick when the note would play, probably with a pitch and velocity modifier.

I've played around a bit with audio unit and RemoteIO, but only barely understand what I'm doing. Before I go TOO far down that rabbit hole, I'd like to know if I'm even in the right ballpark. I know audio synth is going to be low level, but I'm hoping that maybe there are some higher level libraries out there that I can use.

If you have any pointers on where to start, and which iOS technology I should be reading about more, please let me know.

Thanks!

EDIT: let me better summarize the questions.

Are there any synth libraries already built for iOS? (commercial or Open Source - I haven't found any with numerous searches, but maybe I'm missing it.)

Are there any higher level APIs that can help generate buffers easier?

Assuming that I can already generate buffers, is there a better / easier way to submit those buffers to the iOS audio device than the RemoteIO Audio Unit?


Solution

This is a really good question. I sometimes ask myself the same things, and I always end up using the MoMu Toolkit from the guys at Stanford. This library provides a nice callback function that connects to AudioUnits/AudioToolbox (not sure which), so that all you need to worry about is setting the sampling rate, the buffer size, and the bit depth of the audio samples, and you can easily synthesize/process anything you like inside the callback function.

I also recommend the Synthesis ToolKit (STK) for iOS, which was also released by Ge Wang at Stanford. Really cool stuff for synthesizing / processing audio.

Every time Apple releases a new iOS version I check the new documentation in order to find a better (or simpler) way to synthesize audio, but always with no luck.

EDIT: I want to add a link to the AudioGraph source code: https://github.com/tkzic/audiograph This is a really interesting app, made by Tom Zicarelli, that shows the potential of AudioUnits. The code is really easy to follow, and a great way to learn about this, some would say, convoluted process of dealing with low-level audio in iOS.

Other Answers

Swift & Objective C

There's a great open source project that is well documented with videos and tutorials for both Objective-C & Swift.

AudioKit.io

The lowest-level way to get the buffers to the sound card is through the Audio Unit API, and particularly the RemoteIO audio unit. This is a bunch of gibberish, but there are a few examples scattered around the web. http://atastypixel.com/blog/using-remoteio-audio-unit/ is one.
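To give a flavour of what that looks like in practice, here is a rough Swift sketch of setting up the RemoteIO unit with a render callback that fills the output with a sine wave. The 440 Hz tone, the 44.1 kHz mono float format, and the function names are just placeholders, and error checking is omitted:

```swift
import AudioToolbox
import Foundation

// Rough sketch only: a RemoteIO unit with a render callback that fills the
// output with a 440 Hz sine. Error checking is omitted; 44.1 kHz is assumed.
var sinePhase: Double = 0

let renderSine: AURenderCallback = { _, _, _, _, frameCount, ioData in
    guard let buffers = UnsafeMutableAudioBufferListPointer(ioData) else { return noErr }
    let increment = 2.0 * Double.pi * 440.0 / 44_100.0
    for buffer in buffers {
        let samples = buffer.mData!.assumingMemoryBound(to: Float32.self)
        var phase = sinePhase
        for frame in 0..<Int(frameCount) {
            samples[frame] = Float32(sin(phase)) * 0.25
            phase += increment
        }
    }
    sinePhase = (sinePhase + increment * Double(frameCount))
        .truncatingRemainder(dividingBy: 2.0 * Double.pi)
    return noErr
}

func startRemoteIO() {
    var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                         componentSubType: kAudioUnitSubType_RemoteIO,
                                         componentManufacturer: kAudioUnitManufacturer_Apple,
                                         componentFlags: 0, componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &desc) else { return }
    var unit: AudioUnit?
    AudioComponentInstanceNew(component, &unit)
    guard let io = unit else { return }

    // Mono 32-bit float PCM on the input scope of bus 0,
    // i.e. the format of the data we hand to the speaker.
    var format = AudioStreamBasicDescription(mSampleRate: 44_100,
                                             mFormatID: kAudioFormatLinearPCM,
                                             mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
                                             mBytesPerPacket: 4, mFramesPerPacket: 1,
                                             mBytesPerFrame: 4, mChannelsPerFrame: 1,
                                             mBitsPerChannel: 32, mReserved: 0)
    AudioUnitSetProperty(io, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0,
                         &format, UInt32(MemoryLayout.size(ofValue: format)))

    var callback = AURenderCallbackStruct(inputProc: renderSine, inputProcRefCon: nil)
    AudioUnitSetProperty(io, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0,
                         &callback, UInt32(MemoryLayout.size(ofValue: callback)))

    AudioUnitInitialize(io)
    AudioOutputUnitStart(io)
}
```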

I imagine there are other ways to fill buffers, for example using the AVFoundation framework, but I have never tried them.
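For what it's worth, AVFoundation does now offer a fairly direct route: on iOS 13 and later you can give AVAudioEngine an AVAudioSourceNode whose render block fills the buffers on demand, without touching the C-level Audio Unit API. A hedged sketch (the sine tone and mono format are placeholders):

```swift
import AVFoundation

// Sketch of the AVFoundation route (iOS 13+): AVAudioSourceNode calls this
// render block whenever the engine needs more samples.
final class SineSource {
    private let engine = AVAudioEngine()
    private var phase: Float = 0

    func start(frequency: Float = 440, amplitude: Float = 0.25) throws {
        let sampleRate = engine.outputNode.inputFormat(forBus: 0).sampleRate
        let increment = 2 * Float.pi * frequency / Float(sampleRate)

        let source = AVAudioSourceNode { [self] _, _, frameCount, audioBufferList -> OSStatus in
            let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
            for frame in 0..<Int(frameCount) {
                let sample = sin(phase) * amplitude
                phase += increment
                if phase > 2 * Float.pi { phase -= 2 * Float.pi }
                for buffer in buffers {   // write the same sample to every channel
                    buffer.mData!.assumingMemoryBound(to: Float.self)[frame] = sample
                }
            }
            return noErr
        }

        engine.attach(source)
        let mono = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
        engine.connect(source, to: engine.mainMixerNode, format: mono)
        try engine.start()
    }
}
```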

The other way to do it is to use openFrameworks for all of your audio stuff, but that also assumes you want to do your drawing in OpenGL. Tearing out the audio unit implementation shouldn't be too much of an issue, though, if you want to do your drawing another way. This particular implementation is nice because it casts everything to -1..1 floats for you to fill up.

Finally, if you want a jump start on a bunch of oscillators / filters / delay lines that you can hook into the openframeworks audio system (or any system that uses arrays of -1..1 floats) you might want to check out http://www.maximilian.strangeloop.co.uk.

There are two parts to this: firstly you need to generate buffers of synthesised audio - this is pretty much platform-agnostic and you'll need a good understanding of audio synthesis to write this part. The second part is passing these buffers to an appropriate OS-specific API so that the sound actually gets played. Most APIs for audio playback support double buffering or even multiple buffers so that you can synthesise future buffers while playing the current buffer. As to which iOS API to use, that will probably depend on what kind of overall architecture you have for your app, but this is really the easy part. The synthesis part is where you'll need to do most of the work.
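To make that split concrete, the platform-agnostic half can be as small as an oscillator that fills whatever buffer the playback API hands you, keeping its phase between calls so consecutive buffers join without clicks. A minimal sketch (all names here are made up):

```swift
import Foundation

// A sketch of the platform-agnostic half: an oscillator that fills whatever
// buffer the playback API hands you, keeping its phase between calls so that
// consecutive buffers join up without clicks.
struct SineOscillator {
    var frequency: Double
    var sampleRate: Double
    var phase: Double = 0

    mutating func render(into buffer: inout [Float], amplitude: Float = 0.5) {
        let increment = 2 * Double.pi * frequency / sampleRate
        for i in buffer.indices {
            buffer[i] = Float(sin(phase)) * amplitude
            phase = (phase + increment).truncatingRemainder(dividingBy: 2 * Double.pi)
        }
    }
}

// Usage: synthesize the next block while the previous one is playing.
var osc = SineOscillator(frequency: 440, sampleRate: 44_100)
var block = [Float](repeating: 0, count: 512)
osc.render(into: &block)   // hand `block` to whichever playback API you chose
```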

I know this is a little old, but this seems like the wrong approach to me - what you should probably be doing is finding an audio unit synthesizer that models the kind of changes you want to make. There are many of them, some open source, others possibly licensable - and you can host the audio units from your code. The mechanisms described above seem like they would work just fine, but they're not really going to be optimized for the iOS platform.
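As a hedged illustration of that approach, hosting an installed instrument Audio Unit from AVAudioEngine might look roughly like this; picking the first available music-device component is purely for demonstration:

```swift
import AVFoundation

// Illustrative only: list installed instrument Audio Units (music devices),
// instantiate the first one, wire it into AVAudioEngine, and drive it via MIDI.
let engine = AVAudioEngine()

func hostFirstInstrument() {
    let anyInstrument = AudioComponentDescription(componentType: kAudioUnitType_MusicDevice,
                                                  componentSubType: 0, componentManufacturer: 0,
                                                  componentFlags: 0, componentFlagsMask: 0)
    let installed = AVAudioUnitComponentManager.shared().components(matching: anyInstrument)
    guard let chosen = installed.first else { return }

    AVAudioUnit.instantiate(with: chosen.audioComponentDescription, options: []) { unit, _ in
        guard let instrument = unit as? AVAudioUnitMIDIInstrument else { return }
        engine.attach(instrument)
        engine.connect(instrument, to: engine.mainMixerNode, format: nil)
        try? engine.start()
        instrument.startNote(60, withVelocity: 100, onChannel: 0)   // middle C
    }
}
```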

I know this topic is old, and I'm amazed that the situation on iOS still hasn't improved when it comes to audio.

However, there's a silver lining on the horizon: iOS 6 supports the WebAudio API. I successfully managed to build a nice polyphonic synth in barely a couple of lines of JavaScript. At least basic building blocks like oscillators are available out of the box:

https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html

and (just to pick one example out of many)

I know this is an old post, but check out The Amazing Audio Engine.

The Amazing Audio Engine is a sophisticated framework for iOS audio applications, built so you don't have to. It is designed to be very easy to work with, and handles all of the intricacies of iOS audio on your behalf.

This came from the developer of AudioBus for iOS.

Basically it is going to be a toss-up between Audio Queues and Audio Units. If you need to get close to real-time, for example if you need to process microphone input, Audio Units are your way to achieve minimum latency.

However, there is a limit to how much processing you can do inside the render callback: each chunk of data arrives on an ultra-high-priority system thread, and if you try to do too much on that thread, it will chug the whole OS.

So you need to code smart inside this callback. There are a few pitfalls, like using NSLog or accessing properties of another object that were declared without nonatomic (i.e. they will implicitly create locks).

This is the main reason Apple built a higher-level framework, Audio Queues (AQ), to take this tricky low-level business out of your hands. AQ lets you receive, process, and spit out audio buffers on a thread where it doesn't matter if you cause latency.
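For completeness, a rough sketch of that Audio Queue pattern: you allocate a few buffers up front, and the queue calls you back off the real-time thread whenever one finishes playing so you can refill and re-enqueue it (the sine and format constants are placeholders):

```swift
import AudioToolbox
import Foundation

// Rough Audio Queue sketch: three buffers are kept in flight; the callback
// refills each one with a 440 Hz sine and hands it back to the queue.
var aqPhase = 0.0

let refill: AudioQueueOutputCallback = { _, queue, buffer in
    let frameCount = Int(buffer.pointee.mAudioDataBytesCapacity) / MemoryLayout<Float32>.size
    let samples = buffer.pointee.mAudioData.assumingMemoryBound(to: Float32.self)
    let increment = 2.0 * Double.pi * 440.0 / 44_100.0
    for frame in 0..<frameCount {
        samples[frame] = Float32(sin(aqPhase)) * 0.25
        aqPhase += increment
    }
    buffer.pointee.mAudioDataByteSize = buffer.pointee.mAudioDataBytesCapacity
    AudioQueueEnqueueBuffer(queue, buffer, 0, nil)
}

func startAudioQueue() {
    var format = AudioStreamBasicDescription(mSampleRate: 44_100,
                                             mFormatID: kAudioFormatLinearPCM,
                                             mFormatFlags: kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked,
                                             mBytesPerPacket: 4, mFramesPerPacket: 1,
                                             mBytesPerFrame: 4, mChannelsPerFrame: 1,
                                             mBitsPerChannel: 32, mReserved: 0)
    var queueRef: AudioQueueRef?
    AudioQueueNewOutput(&format, refill, nil, nil, nil, 0, &queueRef)
    guard let queue = queueRef else { return }

    for _ in 0..<3 {                                        // triple-buffering
        var bufferRef: AudioQueueBufferRef?
        AudioQueueAllocateBuffer(queue, 2048, &bufferRef)   // 512 Float32 frames
        if let buffer = bufferRef { refill(nil, queue, buffer) }   // prime before starting
    }
    AudioQueueStart(queue, nil)
}
```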

However, you can get away with a lot of processing, especially if you're using the Accelerate framework to speed up your mathematical manipulations.
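For example, a block of sine samples can be generated with two vectorized calls instead of a per-sample loop (a sketch; the function name is made up):

```swift
import Accelerate

// Sketch of using Accelerate to vectorize the per-sample math: build a phase
// ramp with vDSP_vramp, then take the sine of the whole block at once with
// vvsinf. The caller would carry the phase across blocks.
func sineBlock(frequency: Float, sampleRate: Float, startPhase: Float, frameCount: Int) -> [Float] {
    var phases = [Float](repeating: 0, count: frameCount)
    var start = startPhase
    var step = 2 * Float.pi * frequency / sampleRate
    vDSP_vramp(&start, &step, &phases, 1, vDSP_Length(frameCount))

    var output = [Float](repeating: 0, count: frameCount)
    var count = Int32(frameCount)
    vvsinf(&output, phases, &count)
    return output
}
```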

In fact, just go with Audio Units: start with that link jonbro gave you. Even though AQ is a higher-level framework, it is more of a headache to use, and the RemoteIO audio unit is the right tool for this job.

I have been using the audio output example from openFrameworks and the Stanford STK synthesis library to work on my iOS synth application.

I have been experimenting with the Tonic Audio synth library. Clean and easy-to-understand code with ready-to-compile macOS and iOS examples.

At some point I started generating my own buffers with simple C code from scratch to do basic stuff like sine generators, ADSRs and delays, which was very satisfying to experiment with.
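As an illustration of that kind of from-scratch building block, here is a small sketch (not taken from any particular library) of a feedback delay line operating on -1..1 floats:

```swift
// Sketch of a feedback delay line: a circular buffer holds the delayed signal,
// and each call mixes the dry input with the delayed (wet) output.
struct DelayLine {
    private var buffer: [Float]
    private var writeIndex = 0
    var feedback: Float = 0.5
    var mix: Float = 0.5

    init(delaySamples: Int) {
        buffer = [Float](repeating: 0, count: max(1, delaySamples))
    }

    mutating func process(_ input: Float) -> Float {
        let delayed = buffer[writeIndex]                    // sample written one delay ago
        buffer[writeIndex] = input + delayed * feedback     // write input plus feedback
        writeIndex = (writeIndex + 1) % buffer.count
        return input * (1 - mix) + delayed * mix            // dry/wet blend
    }
}

// Usage: 0.25 s of delay at 44.1 kHz, applied sample by sample.
var delay = DelayLine(delaySamples: 11_025)
let wet = (0..<512).map { _ in delay.process(Float.random(in: -1...1)) }
```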

I pushed my float arrays to the speakers using Tonic's counterpart, Novocaine.

For example, 256k uses these for all the music it generates.

Just recently I found AVAudioUnitSampler, a super-easy way to play back sample-based audio at different pitches with low latency.
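A minimal sketch of how that can look ("pluck.wav" is a hypothetical bundled sample): attach the sampler to an AVAudioEngine, load an audio file, and trigger it at different pitches via MIDI note numbers:

```swift
import AVFoundation

// Minimal AVAudioUnitSampler sketch; "pluck.wav" is a hypothetical bundled sample.
// Keep the engine alive for as long as you want sound (e.g. as a property).
let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()

func playSampler() throws {
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)
    try engine.start()

    if let url = Bundle.main.url(forResource: "pluck", withExtension: "wav") {
        try sampler.loadAudioFiles(at: [url])
    }
    sampler.startNote(60, withVelocity: 100, onChannel: 0)   // middle C
    sampler.startNote(67, withVelocity: 100, onChannel: 0)   // same sample, repitched a fifth up
}
```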
