Question

I have a buffer that contains packets read by ffmpeg from a video file encoded with H264/AAC. According to the Apple documentation, an audio stream encoded in AAC can be decoded with hardware support.

How do I decode the audio stream with hardware support?

UPDATE: I use Audio Queue Services to output the audio. Right now I decode the AAC packets with ffmpeg and send LPCM audio to the audio queue. According to the Apple documentation, I can send AAC audio directly to the queue and it will take care of the decoding. Does it decode in hardware? Do I need to set any Audio Queue parameters to enable hardware audio decoding, and if so, how?
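
Roughly what I have in mind (just a sketch, assuming a stereo 44.1 kHz stream and an output callback MyAQOutputCallback that I would write myself) is to create the queue with a compressed AAC stream description instead of LPCM:

#include <AudioToolbox/AudioToolbox.h>

AudioStreamBasicDescription aacFormat = {0};
aacFormat.mSampleRate       = 44100;                // taken from the ffmpeg codec context
aacFormat.mFormatID         = kAudioFormatMPEG4AAC; // compressed format
aacFormat.mChannelsPerFrame = 2;
aacFormat.mFramesPerPacket  = 1024;                 // frames per AAC packet
// mBytesPerPacket / mBytesPerFrame / mBitsPerChannel stay 0 for a compressed VBR format

AudioQueueRef audioQueue = NULL;
OSStatus status = AudioQueueNewOutput(&aacFormat, MyAQOutputCallback, NULL,
                                      NULL, NULL, 0, &audioQueue);
// then enqueue the raw AAC packets together with their AudioStreamPacketDescriptions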


Solution

You can tell the system not to use hardware decoding, but probably not the other way around.
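
For example, here is a minimal sketch of the "don't use hardware" direction, assuming audioQueue is an AudioQueueRef you have already created for the AAC stream. kAudioQueueProperty_HardwareCodecPolicy is declared in AudioQueue.h; prefer/require-hardware values exist there as well, but whether they are honored depends on the device and the audio session category.

#include <AudioToolbox/AudioToolbox.h>

// Ask the queue to decode in software only (sketch; 'audioQueue' was created
// earlier with AudioQueueNewOutput using the compressed AAC format).
UInt32 policy = kAudioQueueHardwareCodecPolicy_UseSoftwareOnly;
OSStatus err = AudioQueueSetProperty(audioQueue,
                                     kAudioQueueProperty_HardwareCodecPolicy,
                                     &policy,
                                     sizeof(policy));
if (err != noErr) {
    // the policy could not be applied
}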

There is a constant to determine which hardware codecs can be used:

enum {
    kAudioFormatProperty_HardwareCodecCapabilities = 'hwcc',
};

kAudioFormatProperty_HardwareCodecCapabilities
A UInt32 value indicating the number of codecs from the specified list that can be used, if the application were to begin using them in the specified order. Set the inSpecifier parameter to an array of AudioClassDescription structures that describes a set of one or more audio codecs. If the property value is the same as the size of the array in the inSpecifier parameter, all of the specified codecs can be used. Available in iOS 3.0 and later. Declared in AudioFormat.h.

Discussion: Use this property to determine whether a desired set of codecs can be simultaneously instantiated.

Hardware-based codecs can be used only when playing or recording using Audio Queue Services or using interfaces, such as AV Foundation, which use Audio Queue Services. In particular, you cannot use hardware-based audio codecs with OpenAL or when using the I/O audio unit.

When describing the presence of a hardware codec, the system does not consider the current audio session category. Some categories disallow the use of hardware codecs. A set of hardware codecs is considered available, by this constant, based only on whether the hardware supports the specified combination of codecs.

Some codecs may be available in both hardware and software implementations. Use the kAudioFormatProperty_Encoders and kAudioFormatProperty_Decoders constants to determine whether a given codec is present, and whether it is hardware or software-based.

Software-based codecs can always be instantiated, so there is no need to use this constant when using software encoding or decoding.

The following code example illustrates how to check whether or not a hardware AAC encoder and a hardware AAC decoder are available, in that order of priority:

AudioClassDescription requestedCodecs[2] = {
    {
        kAudioEncoderComponentType,
        kAudioFormatAAC,
        kAppleHardwareAudioCodecManufacturer
    },
    {
        kAudioDecoderComponentType,
        kAudioFormatAAC,
        kAppleHardwareAudioCodecManufacturer
    }
};

UInt32 successfulCodecs = 0;
UInt32 size = sizeof (successfulCodecs);
OSStatus result = AudioFormatGetProperty (
                        kAudioFormatProperty_HardwareCodecCapabilities,
                        requestedCodecs,
                        sizeof (requestedCodecs),
                        &size,
                        &successfulCodecs
                    );
if (result != noErr) {
    // handle the error
}
switch (successfulCodecs) {
    case 0:
        // aac hardware encoder is unavailable. aac hardware decoder availability
        // is unknown; could ask again for only aac hardware decoding
        break;
    case 1:
        // aac hardware encoder is available but, while using it, no hardware
        // decoder is available.
        break;
    case 2:
        // hardware encoder and decoder are available simultaneously
        break;
}

https://github.com/mooncatventures-group/sampleDecoder

You are probably better off using Audio Units rather than Audio Queue Services, however.

OTHER TIPS

You can, though as usual with Core Audio there are various caveats and edge cases to watch for.

Set the property kExtAudioFileProperty_CodecManufacturer to kAppleHardwareAudioCodecManufacturer. Do this before you set the client data format.

Some docs in ExtendedAudioFile.h
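
For instance, a minimal sketch, assuming extFile is an ExtAudioFileRef you have already opened with ExtAudioFileOpenURL:

#include <AudioToolbox/AudioToolbox.h>

// Request the hardware codec; this must happen before the client data format is set.
UInt32 codecManufacturer = kAppleHardwareAudioCodecManufacturer;
OSStatus err = ExtAudioFileSetProperty(extFile,
                                       kExtAudioFileProperty_CodecManufacturer,
                                       sizeof(codecManufacturer),
                                       &codecManufacturer);
if (err == noErr) {
    // now set kExtAudioFileProperty_ClientDataFormat to the LPCM format you want back
}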

Rather than doing this calculation, just force a very large buffer size here:

status = AudioQueueAllocateBufferWithPacketDescriptions(
             audioQueue_,
             _audioCodecContext->bit_rate * kAudioBufferSeconds / 8,
             _audioCodecContext->sample_rate * kAudioBufferSeconds / _audioCodecContext->frame_size + 1,
             &audioQueueBuffer_[i]);

Found this gem:

https://developer.apple.com/library/ios/qa/qa1663/_index.html

Since AudioFormatGetProperty doesn't always work as expected, the Q&A above describes how to use AudioFormatGetPropertyInfo for the encoder or decoder and detect whether it is available in hardware or software.
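
A minimal sketch along those lines, assuming AAC and simply printing what is found (the helper name ListAACDecoders is only for illustration):

#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>
#include <stdlib.h>

// List the installed AAC decoders and report whether each is hardware or software.
static void ListAACDecoders(void)
{
    UInt32 formatID = kAudioFormatMPEG4AAC;
    UInt32 size = 0;

    // Ask how much space the decoder list needs.
    if (AudioFormatGetPropertyInfo(kAudioFormatProperty_Decoders,
                                   sizeof(formatID), &formatID, &size) != noErr)
        return;

    UInt32 count = size / sizeof(AudioClassDescription);
    AudioClassDescription *decoders = malloc(size);
    if (decoders == NULL) return;

    if (AudioFormatGetProperty(kAudioFormatProperty_Decoders,
                               sizeof(formatID), &formatID, &size, decoders) == noErr) {
        for (UInt32 i = 0; i < count; i++) {
            if (decoders[i].mManufacturer == kAppleHardwareAudioCodecManufacturer)
                printf("AAC decoder %u: hardware\n", (unsigned)i);
            else if (decoders[i].mManufacturer == kAppleSoftwareAudioCodecManufacturer)
                printf("AAC decoder %u: software\n", (unsigned)i);
        }
    }
    free(decoders);
}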

Licensed under: CC-BY-SA with attribution