Core Audio is the digital audio infrastructure of iOS and OS X. It includes a set of software frameworks designed to handle the audio needs in your applications. Read this chapter to learn what you can do with Core Audio.

Core Audio in iOS and OS X

Core Audio is tightly integrated into iOS and OS X for high performance and low latency.

In OS X, the majority of Core Audio services are layered on top of the Hardware Abstraction Layer (HAL), as shown in Figure 1-1. Audio signals pass to and from hardware through the HAL. You can access the HAL using Audio Hardware Services in the Core Audio framework when you require real-time audio. The Core MIDI (Musical Instrument Digital Interface) framework provides similar interfaces for working with MIDI data and devices.

You find Core Audio application-level services in the Audio Toolbox and Audio Unit frameworks:

- Use Audio Queue Services to record, play back, pause, loop, and synchronize audio.
- Use Audio File, Converter, and Codec Services to read and write from disk and to perform audio data format transformations. In OS X you can also create custom codecs.
- Use Audio Unit Services and Audio Processing Graph Services (represented in the figure as "Audio units") to host audio units (audio plug-ins) in your application. In OS X you can also create custom audio units to use in your application or to provide for use in other applications.
- Use Music Sequencing Services to play MIDI-based control and music data.
- Use Core Audio Clock Services for audio and MIDI synchronization and time format management.
- Use System Sound Services (represented in the figure as "System sounds") to play system sounds and user-interface sound effects.

Core Audio in iOS is optimized for the computing resources available in a battery-powered mobile platform. There is no API for services that must be managed very tightly by the operating system: specifically, the HAL and the I/O Kit. However, there are additional services in iOS not present in OS X. For example, Audio Session Services lets you manage the audio behavior of your application in the context of a device that functions as a mobile telephone and an iPod. Figure 1-2 provides a high-level view of the audio architecture in iOS.

Figure 1-2  iOS Core Audio architecture

A Little About Digital Audio and Linear PCM

Most Core Audio services use and manipulate audio in linear pulse-code-modulated (linear PCM) format, the most common uncompressed digital audio data format. Digital audio recording creates PCM data by measuring an analog (real world) audio signal's magnitude at regular intervals (the sampling rate) and converting each sample to a numerical value. Standard compact disc (CD) audio uses a sampling rate of 44.1 kHz, with a 16-bit integer describing each sample, which constitutes the resolution or bit depth.

- A sample is a single numerical value for a single channel.
- A frame is a collection of time-coincident samples. For instance, a stereo sound file has two samples per frame, one for the left channel and one for the right channel.
- A packet is a collection of one or more contiguous frames. A packet defines the smallest meaningful set of frames for a given audio data format. In linear PCM audio, a packet is always a single frame. In compressed formats, it is typically more.

In linear PCM audio, a sample value varies linearly with the amplitude of the original signal that it represents. For example, the 16-bit integer samples in standard CD audio allow 65,536 possible values between silence and maximum level. The difference in amplitude from one digital value to the next is always the same.

Core Audio data structures, declared in the CoreAudioTypes.h header file, can describe linear PCM at any sample rate and bit depth. Audio Data Formats goes into more detail on this topic.
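To make these terms concrete, the following sketch fills out Core Audio's AudioStreamBasicDescription structure, declared in CoreAudioTypes.h, for standard CD audio. The structure and constants are part of Core Audio; the particular values are illustrative, derived from the CD-audio figures above.

```c
#include <CoreAudio/CoreAudioTypes.h>

// A minimal sketch: describe standard CD audio (44.1 kHz, 16-bit,
// stereo, interleaved linear PCM) with an AudioStreamBasicDescription.
static AudioStreamBasicDescription MakeCDAudioDescription(void) {
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;                // samples per second, per channel
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger |
                             kAudioFormatFlagIsPacked;
    asbd.mBitsPerChannel   = 16;                     // the bit depth
    asbd.mChannelsPerFrame = 2;                      // one sample per channel per frame
    asbd.mBytesPerFrame    = asbd.mChannelsPerFrame * (asbd.mBitsPerChannel / 8);
    asbd.mFramesPerPacket  = 1;                      // linear PCM: one frame per packet
    asbd.mBytesPerPacket   = asbd.mFramesPerPacket * asbd.mBytesPerFrame;
    return asbd;
}
```

Note how the frame and packet fields encode the definitions above: two 16-bit samples make one 4-byte frame, and because this is linear PCM, each packet is exactly one frame.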
In OS X, Core Audio expects audio data to be in native-endian, 32-bit floating-point, linear PCM format. You can use Audio Converter Services to translate audio data between different linear PCM variants. You also use these converters to translate between linear PCM and compressed audio formats such as MP3 and Apple Lossless. Core Audio in OS X supplies codecs to translate most common digital audio formats (though it does not supply an encoder for converting to MP3).

iOS uses integer and fixed-point audio data. The result is faster calculations and less battery drain when processing audio. iOS provides a Converter audio unit and includes the interfaces from Audio Converter Services.

In iOS and OS X, Core Audio supports most common file formats for storing and playing audio data, as described in iPhone Audio File Formats and Supported Audio File and Data Formats in OS X. For details on the so-called canonical audio data formats for iOS and OS X, see Canonical Audio Data Formats.

Audio Units

Audio units are software plug-ins that process audio data.
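As a taste of what hosting an audio unit involves, here is a minimal sketch that finds, opens, and starts Apple's default output unit on OS X through the Audio Component interface. Error handling is omitted, and a real host would attach a render callback or an audio processing graph before starting the unit; without one, the unit simply renders silence.

```c
#include <AudioUnit/AudioUnit.h>

// A minimal sketch: locate and start the default output audio unit
// on OS X. Error handling is omitted for brevity.
int main(void) {
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_DefaultOutput,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);  // find a matching unit
    AudioUnit unit;
    AudioComponentInstanceNew(comp, &unit);                     // open an instance

    AudioUnitInitialize(unit);      // allocate resources and validate formats
    AudioOutputUnitStart(unit);     // begin pulling audio through the unit

    // ... a render callback or audio processing graph would be attached here ...

    AudioOutputUnitStop(unit);
    AudioUnitUninitialize(unit);
    AudioComponentInstanceDispose(unit);
    return 0;
}
```

On iOS, the analogous output unit is the Remote I/O unit (kAudioUnitSubType_RemoteIO), since there is no HAL for applications to reach directly.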