Question

I am trying to build an open-source video system on Android, since we have no access to the data in a closed system. In this system, we can modify the raw data captured by the camera.

I used MediaCodec and MediaMuxer to do the video encoding and muxing, and that works. But I have no idea about the audio part. I used onPreviewFrame to get each frame and did the encoding/muxing work frame by frame. But how do I record audio at the same time? I mean capturing the audio frame by frame, encoding it, and sending the data to the MediaMuxer.

I've done some research. It seems that we use AudioRecord to get the raw audio data. But AudioRecord records continuously, so I don't see how it can fit a frame-by-frame pipeline.

Can anyone give me a hint? Thank you!


Solution

Create the AudioRecord like this:

private AudioRecord getRecorderInstance() {
    AudioRecord ar = null;
    try {
        // Minimum internal buffer size for 8 kHz, mono, 16-bit PCM
        int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        ar = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
    }
    catch (Exception e) {
        // Mic unavailable or invalid parameters; fall through and return null
    }
    return ar; // Returns null if the mic is unavailable
}
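Since the samples-vs-bytes-vs-milliseconds arithmetic for PCM buffers is easy to get wrong, here is a small plain-Java helper sketch (the class and method names are mine, not part of the Android API) that converts a duration into a byte count:

```java
public class PcmBufferMath {
    // Bytes needed to hold 'ms' milliseconds of PCM audio at the given
    // sample rate, channel count, and bytes per sample (2 for 16-bit PCM).
    static int bytesForMillis(int sampleRate, int channels, int bytesPerSample, int ms) {
        return sampleRate * channels * bytesPerSample * ms / 1000;
    }

    public static void main(String[] args) {
        // 8 kHz, mono, 16-bit: 50 ms -> 800 bytes, i.e. the 8000 / 10 used below
        System.out.println(bytesForMillis(8000, 1, 2, 50));
    }
}
```

With these numbers, a buffer of `8000 / 10 = 800` bytes holds 400 16-bit samples, which is 50 ms of audio at 8 kHz mono.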

Prepare and send the data for encoding and muxing later, like this, in a separate thread:

public class MicrophoneInput implements Runnable {
    @Override
    public void run() {
        // 8000 / 10 = 800 bytes = 400 16-bit samples = 50 ms of audio at 8 kHz mono
        byte[] buffer = new byte[8000 / 10];

        try {
            audioRecorder.startRecording();
            while (recording) {
                int read = audioRecorder.read(buffer, 0, buffer.length);

                // Process the 'read' bytes, i.e. send them to the encoder.
                // Don't forget to set correct timestamps synchronized with the video.
            }
            audioRecorder.stop();
        }
        catch (Throwable x) {
            // Log the error and stop recording
        }
    }
}
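Regarding the "correct timestamps" comment above: a common approach is to derive each audio frame's presentation time from the number of PCM samples consumed so far, rather than from the wall clock, so audio and video stay in sync in the muxed file. A minimal sketch, with names of my choosing:

```java
public class AudioTimestamp {
    // Presentation time in microseconds for a frame that starts at
    // 'totalSamplesRead' samples since recording began.
    static long presentationTimeUs(long totalSamplesRead, int sampleRate) {
        return totalSamplesRead * 1_000_000L / sampleRate;
    }

    public static void main(String[] args) {
        // 400 samples at 8 kHz -> 50,000 us (50 ms), the length of one buffer above
        System.out.println(presentationTimeUs(400, 8000));
    }
}
```

You would pass this value as the `presentationTimeUs` argument of `MediaCodec.queueInputBuffer()` when feeding the PCM data to the AAC encoder, so the MediaMuxer can interleave the audio track correctly with the video track.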
Licensed under: CC-BY-SA with attribution