Question

I'm building an app that involves playing songs from the user's music library while applying an equalization (EQ) effect. I've only used AudioUnits to generate sound before, so I'm having a bit of trouble.

My current plan is to use AVAssetReader to get the samples, and though I'm a bit fuzzy on that part, my question here concerns the correct AudioUnit design pattern to use from Apple's documentation: https://developer.apple.com/library/ios/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1.
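
For reference, here is roughly what I have in mind for the reading side. This is only a sketch: it assumes `assetURL` came from an MPMediaItem's MPMediaItemPropertyAssetURL value, and it asks AVAssetReader for deinterleaved 32-bit float LPCM so the decoded buffers could be handed to an AudioUnit callback without conversion:

```swift
import AVFoundation

// Sketch: open a music-library track for decoding to LPCM.
// `assetURL` is assumed to come from MPMediaItemPropertyAssetURL.
func makePCMReader(for assetURL: URL) throws -> (AVAssetReader, AVAssetReaderTrackOutput) {
    let asset = AVURLAsset(url: assetURL)
    guard let track = asset.tracks(withMediaType: .audio).first else {
        throw NSError(domain: "EQDemo", code: -1, userInfo: nil)
    }
    // Request deinterleaved 32-bit float LPCM, the layout an
    // AudioUnit render callback typically expects.
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatLinearPCM,
        AVLinearPCMBitDepthKey: 32,
        AVLinearPCMIsFloatKey: true,
        AVLinearPCMIsNonInterleaved: true,
        AVLinearPCMIsBigEndianKey: false
    ]
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: settings)
    let reader = try AVAssetReader(asset: asset)
    reader.add(output)
    guard reader.startReading() else {
        throw reader.error ?? NSError(domain: "EQDemo", code: -2, userInfo: nil)
    }
    return (reader, output)
}
```

The playback side would then pull sample buffers via `copyNextSampleBuffer()` on the track output, presumably staged through a ring buffer so the render thread never blocks.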

My guess is that a render callback is needed to perform my EQ effect (I was thinking kAudioUnitSubType_ParametricEQ), so that leaves either the "I/O with a Render Callback Function" pattern or the "Output-Only with a Render Callback Function" pattern. If I'm reading data from the music library (potentially via AVAssetReader), which of these two would be the better fit?


Solution

I think you would need to use the "Output-Only with a Render Callback Function" pattern. The callback function should be responsible for reading/decoding the audio data and applying the EQ effect.
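
To make that concrete, here is a minimal Swift sketch of the output-only setup: a RemoteIO unit whose output element is driven by a render callback attached to its input scope. The callback body is a stub; in a real app it would copy decoded samples (e.g. from a ring buffer filled from AVAssetReader) into `ioData` and run the EQ over them. Error handling and stream-format setup are omitted:

```swift
import AudioToolbox

// Stub render callback: this is where you would copy decoded samples
// into ioData and apply the EQ before returning.
let renderCallback: AURenderCallback = { _, _, _, _, inNumberFrames, ioData in
    // Fill ioData with inNumberFrames frames of audio (or silence).
    return noErr
}

// Create a RemoteIO unit and attach the callback to the input scope
// of its output element (bus 0): the "output-only" pattern.
func makeOutputUnit() -> AudioUnit? {
    var desc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    guard let component = AudioComponentFindNext(nil, &desc) else { return nil }

    var unit: AudioUnit?
    AudioComponentInstanceNew(component, &unit)
    guard let io = unit else { return nil }

    var callback = AURenderCallbackStruct(inputProc: renderCallback,
                                          inputProcRefCon: nil)
    AudioUnitSetProperty(io,
                         kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input,
                         0,
                         &callback,
                         UInt32(MemoryLayout<AURenderCallbackStruct>.size))
    // You would also set kAudioUnitProperty_StreamFormat here to match
    // the LPCM format produced by AVAssetReader.
    AudioUnitInitialize(io)
    return io // call AudioOutputUnitStart(io) to begin playback
}
```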

By the way, I don't know if this is useful, but the hosting guide you linked also lists an already existing EQ audio unit that you could use instead of writing the filter yourself.
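
If you go that route, one possible wiring (again only a sketch, assuming the Apple-supplied kAudioUnitSubType_ParametricEQ effect mentioned in the question, available since iOS 5) is an AUGraph in which your render callback feeds the EQ node and the EQ node feeds RemoteIO, so the stock unit does the filtering instead of hand-written DSP in the callback:

```swift
import AudioToolbox

// Stub callback that would supply the decoded samples from AVAssetReader.
let supplySamples: AURenderCallback = { _, _, _, _, _, _ in noErr }

// Sketch: an AUGraph where the callback feeds a ParametricEQ node and
// the EQ node feeds RemoteIO.
func makeEQGraph() -> AUGraph? {
    var graph: AUGraph?
    NewAUGraph(&graph)
    guard let g = graph else { return nil }

    var eqDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Effect,
        componentSubType: kAudioUnitSubType_ParametricEQ,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)
    var ioDesc = AudioComponentDescription(
        componentType: kAudioUnitType_Output,
        componentSubType: kAudioUnitSubType_RemoteIO,
        componentManufacturer: kAudioUnitManufacturer_Apple,
        componentFlags: 0,
        componentFlagsMask: 0)

    var eqNode: AUNode = 0
    var ioNode: AUNode = 0
    AUGraphAddNode(g, &eqDesc, &eqNode)
    AUGraphAddNode(g, &ioDesc, &ioNode)
    AUGraphOpen(g)

    // The callback supplies raw samples to the EQ node's input bus...
    var cb = AURenderCallbackStruct(inputProc: supplySamples, inputProcRefCon: nil)
    AUGraphSetNodeInputCallback(g, eqNode, 0, &cb)
    // ...and the EQ node's output is connected to the hardware output.
    AUGraphConnectNodeInput(g, eqNode, 0, ioNode, 0)

    AUGraphInitialize(g)
    return g // call AUGraphStart(g) to begin playback
}
```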
