I wrote this class to acquire audio data. I want to use the audio input to sample real-time RF signals. I sample at 44 kHz, and I expect to be able to measure the elapsed time from the total number of acquired samples, since the sample rate is known.
What I don't understand is why there is a difference between the elapsed time measured with System.nanoTime() and the acquired sample count divided by the sample rate. Why does this delta of about 170 ms appear, and why does it change each time I start/stop the acquisition? Am I losing samples from the acquired signal?
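As a sanity check on what that delta would mean in samples, this is my own back-of-the-envelope arithmetic, assuming the nominal 44 kHz rate:

double fs = 44000.0;             // nominal sample rate in Hz
double delta = 0.170;            // observed discrepancy in seconds
double missing = delta * fs;     // = 7480 samples, if they were really lost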
Basically, what I do is call this class with the started boolean set to true; after a few seconds I set the boolean to false, the class exits the while loop, and then I measure the elapsed time and extract the delta (a usage sketch follows the code below).
This is my testing code:
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.AsyncTask;
import android.util.Log;

public class RecordAudio extends AsyncTask<Void, Long, Void> {

    // Acquisition parameters referenced below (declarations added for completeness;
    // the blockSize value is illustrative, and mono input is assumed)
    static final int frequency = 44000;                  // sample rate in Hz
    static final int channelConfiguration = AudioFormat.CHANNEL_IN_MONO;
    static final int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
    static final int blockSize = 4096;                   // samples per read
    volatile boolean started = true;                     // cleared from outside to stop the loop

    @Override
    protected Void doInBackground(Void... arg0) {
        try {
            int bufferSize = AudioRecord.getMinBufferSize(frequency,
                    channelConfiguration, audioEncoding);
            AudioRecord audioRecord = new AudioRecord(
                    MediaRecorder.AudioSource.MIC, frequency,
                    channelConfiguration, audioEncoding, bufferSize);

            short[] buffer = new short[blockSize];
            double[] toTransform = new double[blockSize]; // unused in this timing test

            audioRecord.startRecording();
            // started is expected to already be true before entering the loop below

            double acquiredSignalLen = 0;                 // seconds implied by the sample count
            long elapsedTime = System.nanoTime();
            while (started) {
                int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
                if (bufferReadResult < 0) {
                    break;                                // read error: stop acquiring
                }
                // accumulate the time represented by the samples actually read
                acquiredSignalLen += (double) bufferReadResult / frequency;
            }

            // When I stop the acquisition, I calculate the elapsed wall-clock time
            // and compare it with the time implied by the total number of samples.
            elapsedTime = System.nanoTime() - elapsedTime;
            double elapsedTimeDouble = (double) elapsedTime / 1e9;
            double delta = elapsedTimeDouble - acquiredSignalLen;
            Log.d("AudioRecord", "delta = " + delta + " s");

            audioRecord.stop();
            audioRecord.release();
        } catch (Throwable t) {
            t.printStackTrace();
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }
}
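This is roughly how I drive the task (a minimal sketch; in the real app the stop is triggered from the UI after a few seconds):

RecordAudio task = new RecordAudio();
task.execute();                      // runs doInBackground() on a worker thread

try {
    Thread.sleep(5000);              // let the acquisition run for a few seconds
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}

task.started = false;                // the while loop exits after the current read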
I asked this question to solve the following problem:
I need to calculate the precise elapsed time between two particular signal waveforms received on the microphone input.
I would like at least 1 ms precision, better if higher precision is achievable.
This code was just a starting test. Maybe by counting the samples I can achieve high precision? My fear is that I may lose some samples due to processing time.
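In case it clarifies the idea, here is a minimal sketch of what I mean by counting samples. The detectEvent() helper is a hypothetical placeholder for the waveform detector; the point is that time is derived from sample indices, so the resolution would be 1/44000 s ≈ 23 µs, well below 1 ms, provided no samples are lost:

long totalFramesRead = 0;      // running count of samples read so far
long firstEventIndex = -1;     // absolute sample index of the first waveform
long secondEventIndex = -1;    // absolute sample index of the second waveform

while (started) {
    int n = audioRecord.read(buffer, 0, blockSize);
    if (n < 0) break;
    for (int i = 0; i < n; i++) {
        if (detectEvent(buffer[i])) {            // hypothetical detector
            if (firstEventIndex < 0) {
                firstEventIndex = totalFramesRead + i;
            } else if (secondEventIndex < 0) {
                secondEventIndex = totalFramesRead + i;
            }
        }
    }
    totalFramesRead += n;
}

double elapsedSeconds = (secondEventIndex - firstEventIndex) / (double) frequency;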