int16_t* audioBuffer=(int16_t*)av_malloc(AVCODEC_MAX_AUDIO_FRAME_SIZE+FF_INPUT_BUFFER_PADDING_SIZE);
This is wrong. AVCODEC_MAX_AUDIO_FRAME_SIZE is deprecated (and removed in newer versions). With the new decoding API, decoded frames can be larger than this constant.
static AVFrame frame;
No. Do not allocate an AVFrame on the stack; it will break. Use avcodec_alloc_frame() (or av_frame_alloc() with newer versions).
data_size = av_samples_get_buffer_size(frame.linesize,channel_count,sample_fr,_audio_ccontext->sample_fmt,1);
You realize you're overwriting the original frame linesize (AKA the plane size), right? I wonder if that is intended.
Also, what is sample_fr? Is that some constant from somewhere? Just use AVFrame.nb_samples.
Finally, your code apparently assumes interleaved audio (i.e. the audio samples for all channels interleaved in one buffer). Many decoders now output planar audio (i.e. the samples for each channel in a separate buffer). This might be the reason for the crash: av_samples_get_buffer_size calculates the total size of the samples for all channels, while data[0] contains only the samples for the first channel.
If planar audio is indeed the reason, you should either modify your code to support it, or use libavresample to convert the planar audio to interleaved.
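If you take the first route, the per-channel buffers just need to be interleaved by hand. A minimal sketch for 16-bit samples (plain C, no FFmpeg calls; planes and nb_samples stand in for frame->extended_data and frame->nb_samples):

```c
#include <stdint.h>

/* Interleave planar S16 audio: planes[ch][i] holds sample i of
 * channel ch; for 2 channels the output layout is L0 R0 L1 R1 ... */
void interleave_s16(int16_t *out, int16_t *const *planes,
                    int channels, int nb_samples)
{
    for (int i = 0; i < nb_samples; i++)
        for (int ch = 0; ch < channels; ch++)
            out[i * channels + ch] = planes[ch][i];
}
```

Note that with a planar format each frame->data[ch] plane holds nb_samples samples, not the total, which is exactly the mismatch described above.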