Question

Hi, I am using a Canvas to draw captured audio, but somehow this code seems to give me the FFT of the audio signal. I don't understand which part of the code does that, and what should I do to make it draw the waveform itself rather than the FFT?

public void doDraw(Canvas paramCanvas)
{
  if (mCanvasHeight == 1)
    mCanvasHeight = paramCanvas.getHeight();
  paramCanvas.drawPaint(mBackPaint);

  /**
   * Set some base values as a starting point.
   * This could be considered part of the calculation process.
   */
  int height = paramCanvas.getHeight();
  int BuffIndex = (mBuffer.length / 2 - paramCanvas.getWidth()) / 2;
  int width = paramCanvas.getWidth();
  int mBuffIndex = BuffIndex;
  int scale = height / m_iScaler;
  int StratX = 0;
  if (StratX >= width)
  {
    paramCanvas.save();
    return;
  }
  int cu1 = 0;

  /**
   * Here is where the real calculation takes place.
   * In this while loop we work out the start and stop points
   * for both X and Y; the line is then drawn to the canvas
   * with the drawLine method.
   */
  while (StratX < width - 1)
  {
    int StartBaseY = mBuffer[mBuffIndex - 1] / scale;
    int StopBaseY = mBuffer[mBuffIndex] / scale;

    // Clamp positive samples that would be drawn below the canvas
    if (StartBaseY > height / 2)
    {
      StartBaseY = 1 + height / 2;
      int checkSize = height / 2;
      if (StopBaseY <= checkSize)
        return;
      StopBaseY = 2 + height / 2;
    }

    // Shift everything so the zero line sits in the vertical middle of the canvas
    int StartY = StartBaseY + height / 2;
    int StopY = StopBaseY + height / 2;
    paramCanvas.drawLine(StratX, StartY, StratX + 1, StopY, mLinePaint);

    cu1++;
    mBuffIndex++;
    StratX++;

    // Clamp negative samples that would be drawn above the canvas
    int checkSize_again = -1 * (height / 2);
    if (StopBaseY >= checkSize_again)
      continue;
    StopBaseY = -2 + -1 * (height / 2);
  }
}

So basically the main activity calls three functions in the CSampler class:

init()           // prepares the AudioRecord and sets its configuration (a sketch of a typical setup follows below)
StartRecording() // starts the audio recorder
StartSampling()  // reads data into CSampler.buffer
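The body of init() is not shown in the question; the sketch below shows what such a setup usually looks like, assuming the standard android.media.AudioRecord API. The names ar, buffer, buffersizebytes and SAMPPERSEC are taken from the rest of this post; everything else is an assumption, not the asker's actual code.

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Sketch only: a typical AudioRecord setup matching how the fields are used later on.
private static final int SAMPPERSEC = 44100; // sample rate mentioned in the answer below
private AudioRecord ar;                      // reader used in Sample()
private int buffersizebytes;                 // size passed to ar.read() in Sample()
public static short[] buffer;                // buffer filled by Sample()

public void init()
{
  // Ask the platform for the smallest workable buffer for this configuration
  buffersizebytes = AudioRecord.getMinBufferSize(
      SAMPPERSEC,
      AudioFormat.CHANNEL_IN_MONO,
      AudioFormat.ENCODING_PCM_16BIT);

  buffer = new short[buffersizebytes];

  ar = new AudioRecord(
      MediaRecorder.AudioSource.MIC,
      SAMPPERSEC,
      AudioFormat.CHANNEL_IN_MONO,
      AudioFormat.ENCODING_PCM_16BIT,
      buffersizebytes);
}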

The StartSampling function:

public void StartSampling()
{
  recordingThread = new Thread()
  {
    public void run()
    {
      while (true)
      {
        if (!m_bRun.booleanValue())
        {
          m_bDead = Boolean.valueOf(true);
          m_bDead2 = Boolean.valueOf(true);
          return;
        }
        Sample();
        m_ma.setBuffer(CSampler.buffer); // m_ma is an object of the main activity
      }
    }
  };
  recordingThread.start();
}


The setBuffer function in the main activity:

/**
 * Receives the buffer from the sampler
 * @param paramArrayOfShort the sampled audio data
 */
public void setBuffer(short[] paramArrayOfShort)
{
  mDrawThread = mdrawer.getThread();
  mDrawThread.setBuffer(paramArrayOfShort);
}

This in turn calls setBuffer in the CDrawer class, which sets mBuffer to the same data that was just read:

public void setBuffer(short[] paramArrayOfShort)
{
  synchronized (mBuffer)
  {
    mBuffer = paramArrayOfShort;
  }
}
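One side note, not part of the original question: synchronizing on mBuffer and then reassigning mBuffer inside the block means later callers may end up locking on a different object, so this synchronization does not really guard anything. A common fix is a dedicated lock object that is never reassigned; a minimal sketch:

private final Object mBufferLock = new Object(); // lock object, never reassigned

public void setBuffer(short[] paramArrayOfShort)
{
  synchronized (mBufferLock)
  {
    mBuffer = paramArrayOfShort;
  }
}

// The drawing code should read the reference under the same lock, e.g.:
// short[] local;
// synchronized (mBufferLock) { local = mBuffer; }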

public void Sample()
{
  mSamplesRead = ar.read(buffer, 0, buffersizebytes);
}
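A small robustness note (again not in the original code): AudioRecord.read can return an error code instead of a sample count, so it can be worth checking the result before the buffer is handed on to the drawer. A sketch:

public void Sample()
{
  mSamplesRead = ar.read(buffer, 0, buffersizebytes);
  if (mSamplesRead <= 0)
  {
    // AudioRecord.ERROR_INVALID_OPERATION or ERROR_BAD_VALUE land here; skip this pass
    return;
  }
}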
[Output picture]


Solution

Actually, the display shows a very short piece of the audio signal waveform. To be more precise, each vertical line of screen pixels shows one sample value of the signal. As the signal is sampled at 44.1 kHz, there are 44100 sample values per second. If your screen has a physical width of, say, 768 pixels, then you see 768/44100 = 0.017 s = 17 ms of the signal.

This is rather short: with a typical male voice you will see about two cycles of the fundamental frequency.
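If you want one screen width to cover more of the recording without changing the sample rate, you can map several samples to each pixel column instead of one. The following is a minimal sketch of that idea, assuming the same mBuffer, scale and mLinePaint fields used in doDraw above; the min/max grouping is an addition for illustration, not part of the original code.

// Sketch: draw several samples per pixel column, taking the min/max of each group,
// so one screen width covers samplesPerPixel * width samples instead of width samples.
int width  = paramCanvas.getWidth();
int height = paramCanvas.getHeight();
int samplesPerPixel = mBuffer.length / width; // assumes the buffer holds at least 'width' samples

for (int x = 0; x < width; x++)
{
  short min = Short.MAX_VALUE;
  short max = Short.MIN_VALUE;
  for (int i = 0; i < samplesPerPixel; i++)
  {
    short s = mBuffer[x * samplesPerPixel + i];
    if (s < min) min = s;
    if (s > max) max = s;
  }
  // One vertical line per pixel column, spanning the min..max range of its samples
  paramCanvas.drawLine(x, height / 2 + min / scale,
                       x, height / 2 + max / scale, mLinePaint);
}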

With a quick change in the source you can see a little bit more of the signal waveform: Just change SAMPPERSEC = 44100 to e.g. SAMPPERSEC = 16000;
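For reference, that is a one-line change wherever the constant is defined, assuming it also feeds AudioRecord.getMinBufferSize and the AudioRecord constructor as in the init() sketch earlier:

// Lower sample rate: a 768-pixel-wide screen now shows 768/16000 ≈ 48 ms instead of ~17 ms
private static final int SAMPPERSEC = 16000; // was 44100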

The code does not contain any call to a Fourier transformation or any other spectral transformation, so it is simply not able to show a spectrum.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow