What's the difference between the Apple audio frameworks?
In the documentation I see several Apple frameworks for audio. All of them seem to be targeted at playing and recording audio, so I wonder what the big differences are between them.
- Audio Toolbox
- Audio Unit
- AV Foundation
- Core Audio
Did I miss a guide that gives a good overview of all these?
Here is a brief overview of Core Audio and the frameworks layered on top of it:
The framework closest to the hardware is Audio Unit. Built on top of that are OpenAL and Audio Toolbox with Audio Queue Services. At the top you can find the Media Player and AVFoundation (audio and video) frameworks.
Now it depends on what you want to do. For a simple recording, use AVFoundation, which is the easiest to use. (Media Player has no recording options; it is, as the name says, just a media player.)
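For example, a bare-bones recorder with AVAudioRecorder might look like this. This is a minimal sketch in Swift, assuming an iOS app that already has microphone permission; the file name `recording.m4a` and the settings are just illustrative:

```swift
import AVFoundation

// Hypothetical destination in the app's documents directory.
let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("recording.m4a")

// AAC, mono, 44.1 kHz; tweak to taste.
let settings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 44_100,
    AVNumberOfChannelsKey: 1
]

do {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let recorder = try AVAudioRecorder(url: url, settings: settings)
    recorder.record()   // keep a strong reference to `recorder` in real code
    // ... later: recorder.stop()
} catch {
    print("Recording failed: \(error)")
}
```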
Do you want to do serious real-time signal processing? Use Audio Units. But believe me, this is the hardest way. :-)
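To give an impression of what that involves, here is a minimal Swift sketch of the iOS RemoteIO unit with a render callback that just outputs silence (the bus number and subtype follow the usual RemoteIO conventions; real DSP would replace the `memset`):

```swift
import AudioToolbox
import Darwin

// Describe the built-in input/output unit (RemoteIO on iOS).
var description = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0,
    componentFlagsMask: 0)

guard let component = AudioComponentFindNext(nil, &description) else {
    fatalError("RemoteIO unit not found")
}
var maybeUnit: AudioUnit?
AudioComponentInstanceNew(component, &maybeUnit)
guard let unit = maybeUnit else { fatalError("could not instantiate unit") }

// The render callback runs on the real-time audio thread:
// no locks, no allocation, no Objective-C/Swift runtime calls.
let render: AURenderCallback = { _, _, _, _, _, ioData in
    guard let ioData = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(ioData) {
        memset(buffer.mData, 0, Int(buffer.mDataByteSize)) // silence; DSP goes here
    }
    return noErr
}

var callbackStruct = AURenderCallbackStruct(inputProc: render, inputProcRefCon: nil)
AudioUnitSetProperty(unit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0, // output element
                     &callbackStruct,
                     UInt32(MemoryLayout<AURenderCallbackStruct>.size))

AudioUnitInitialize(unit)
AudioOutputUnitStart(unit)
```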
With iOS 8.0 Apple introduced AVAudioEngine, an Objective-C/Swift-based audio graph system in AV Foundation. It encapsulates the messy C internals of Audio Units. Given the complexity of Audio Units, it is worth a look.
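For comparison, the same kind of graph in AVAudioEngine takes only a few lines. A minimal Swift sketch; the file name `loop.caf` and the reverb preset are just illustrative assumptions:

```swift
import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
let reverb = AVAudioUnitReverb()
reverb.loadFactoryPreset(.largeHall)
reverb.wetDryMix = 40

engine.attach(player)
engine.attach(reverb)

do {
    // Hypothetical audio file bundled with the app.
    let url = Bundle.main.url(forResource: "loop", withExtension: "caf")!
    let file = try AVAudioFile(forReading: url)

    // Build the chain: player -> reverb -> main mixer (hardware output).
    engine.connect(player, to: reverb, format: file.processingFormat)
    engine.connect(reverb, to: engine.mainMixerNode, format: file.processingFormat)

    player.scheduleFile(file, at: nil, completionHandler: nil)
    try engine.start()
    player.play()
} catch {
    print("Engine setup failed: \(error)")
}
```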
Further reading in the Apple documentation:
- Core Audio Overview: Introduction
- Multimedia Programming Guide
- Audio & Video Starting Point
Core Audio is the lowest-level of all the frameworks and also the oldest.
Audio Toolbox sits just above Core Audio and provides many different APIs that make it easier to deal with sound while still giving you a lot of control. There's ExtAudioFile, AudioConverter, and several other useful APIs.
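For instance, ExtAudioFile opens an audio file and can convert it to a client format as you read. A minimal Swift sketch, assuming a hypothetical file at `input.caf`:

```swift
import AudioToolbox

let url = URL(fileURLWithPath: "input.caf") as CFURL

var maybeFile: ExtAudioFileRef?
var status = ExtAudioFileOpenURL(url, &maybeFile)
guard status == noErr, let file = maybeFile else {
    fatalError("open failed: \(status)")
}

// Ask the file for its native data format.
var format = AudioStreamBasicDescription()
var size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
status = ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat, &size, &format)

print("Sample rate: \(format.mSampleRate), channels: \(format.mChannelsPerFrame)")
ExtAudioFileDispose(file)
```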
Audio Unit is a framework for working with audio processing chains for both sampled audio data and MIDI. It's where the mixer and the various filters and effects such as reverb live.
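For a flavor of the graph-building style, here is a minimal Swift sketch using AUGraph (since deprecated in favor of AVAudioEngine), assuming iOS and a multichannel mixer feeding the hardware output:

```swift
import AudioToolbox

var maybeGraph: AUGraph?
NewAUGraph(&maybeGraph)
guard let graph = maybeGraph else { fatalError("could not create graph") }

var mixerDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Mixer,
    componentSubType: kAudioUnitSubType_MultiChannelMixer,
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)
var outputDesc = AudioComponentDescription(
    componentType: kAudioUnitType_Output,
    componentSubType: kAudioUnitSubType_RemoteIO, // iOS; DefaultOutput on OS X
    componentManufacturer: kAudioUnitManufacturer_Apple,
    componentFlags: 0, componentFlagsMask: 0)

var mixerNode: AUNode = 0
var outputNode: AUNode = 0
AUGraphAddNode(graph, &mixerDesc, &mixerNode)
AUGraphAddNode(graph, &outputDesc, &outputNode)

// Mixer output bus 0 feeds input bus 0 of the hardware output.
AUGraphConnectNodeInput(graph, mixerNode, 0, outputNode, 0)

AUGraphOpen(graph)
AUGraphInitialize(graph)
AUGraphStart(graph)
```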
AV Foundation is a new and fairly high-level API for recording and playing audio on iOS. All four frameworks are available on both OS X and iOS, though AV Foundation requires OS X 10.8 or later.