Question

I'm looking at the output of an electroencephalogram sensor. The data is displayed on screen in raw form at about 200 Hz. I read that in the old days it was possible to hook such an output up to a speaker and hear the waveform instead of seeing it, so I'm interested in replicating that experiment with a modern iPhone. How can I take a waveform that is displayed as a graph and package it so that it can be played through an iPhone's speaker live? In other words, I'm looking to stream EEG data through some sort of audio player and need to know how to create audio packets from this data on the fly.

Here's the raw waveform; it is displayed at 200 data points per second (200 Hz):

[Figure: raw EEG waveform, 200 samples per second]

After I clean up and process the waveform, I'm interested in how far it deviates from the waveform's average. I think this could be played as the increasing/decreasing amplitude of a sine wave, which may be easier.
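To illustrate what I mean by the deviation value, here is a rough sketch of how it could be computed on the fly and normalised to a 0...1 amplitude. The window size and full-scale constant are placeholders I made up; only the 200 Hz rate comes from the data above.

```swift
/// Rough sketch: running mean over a short window, returning how far the
/// latest sample deviates from it, normalised to 0...1.
struct DeviationTracker {
    private var window: [Float] = []
    private let windowSize = 200          // one second of history at 200 Hz
    private let fullScale: Float = 100    // assumed maximum expected deviation

    mutating func add(_ sample: Float) -> Float {
        window.append(sample)
        if window.count > windowSize { window.removeFirst() }
        let mean = window.reduce(0, +) / Float(window.count)
        return min(abs(sample - mean) / fullScale, 1)
    }
}
```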

[Figure: cleaned-up waveform showing deviation from the average]

Thank you for your input.

Solution

Here's a good tutorial on generating a sine tone for output through CoreAudio:

http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html

The RenderProc is the bit of code you'll be working with. In the example an NSSlider is used to change the frequency; you just need to feed it your signal data instead.
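If you would rather stay in Swift, a minimal sketch of the same idea using the newer AVAudioSourceNode API (iOS 13+) might look like the following. The EEGTonePlayer name, the fixed 440 Hz carrier tone, and the update(deviation:) hand-off method are assumptions of mine, not part of the linked tutorial; the render block is where you feed in your signal-derived amplitude.

```swift
import AVFoundation

// Minimal sketch: a fixed-frequency sine tone whose amplitude is driven by
// incoming processed EEG values. Names and constants are placeholders.
final class EEGTonePlayer {
    private let engine = AVAudioEngine()
    private let sampleRate: Double = 44_100
    private let toneFrequency: Double = 440     // assumed carrier tone
    private var phase: Double = 0
    private var amplitude: Float = 0            // updated from the EEG side

    private lazy var sourceNode = AVAudioSourceNode { [unowned self] _, _, frameCount, audioBufferList -> OSStatus in
        let buffers = UnsafeMutableAudioBufferListPointer(audioBufferList)
        let phaseStep = 2.0 * Double.pi * self.toneFrequency / self.sampleRate
        for frame in 0..<Int(frameCount) {
            let sample = Float(sin(self.phase)) * self.amplitude
            self.phase += phaseStep
            if self.phase >= 2.0 * Double.pi { self.phase -= 2.0 * Double.pi }
            for buffer in buffers {
                let channel = UnsafeMutableBufferPointer<Float>(buffer)
                channel[frame] = sample
            }
        }
        return noErr
    }

    func start() throws {
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)
        engine.attach(sourceNode)
        engine.connect(sourceNode, to: engine.mainMixerNode, format: format)
        try engine.start()
    }

    /// Feed each new processed EEG value here (expected range 0...1).
    func update(deviation: Float) {
        amplitude = min(max(deviation, 0), 1)
    }
}
```

You would call update(deviation:) from wherever your 200 Hz samples arrive; the render callback runs at the audio sample rate and simply reads the latest value.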

Other tips

One idea I had for playing sound in response to changes in the signal amplitude is to divide the amplitude into a set of discrete bands (for example 0-10, 10-20, 20-30, etc.) and assign a sound to each band. Then, using Audio Services or System Sound, it might be possible to loop a unique sound fragment for each band.
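A rough sketch of that banding approach is below. The BandSoundPlayer name, the 10-unit band width, and the choice of AVAudioPlayer for looping the fragments are placeholder assumptions; the fragments are assumed to be short audio files bundled with the app.

```swift
import AVFoundation

/// Hypothetical mapper from an amplitude value to one of several looping
/// sound fragments, one per band of 10 units (0-10, 10-20, 20-30, ...).
final class BandSoundPlayer {
    private var players: [AVAudioPlayer] = []
    private var currentBand = -1

    /// `fragmentURLs[i]` is the sound file looped while the amplitude is in band i.
    init(fragmentURLs: [URL]) throws {
        players = try fragmentURLs.map { url in
            let player = try AVAudioPlayer(contentsOf: url)
            player.numberOfLoops = -1   // loop indefinitely until stopped
            player.prepareToPlay()
            return player
        }
    }

    /// Call with each new amplitude value; switches fragments only when the band changes.
    func update(amplitude: Double) {
        let band = min(Int(amplitude / 10.0), players.count - 1)
        guard band >= 0, band != currentBand else { return }
        if currentBand >= 0 { players[currentBand].stop() }
        players[band].currentTime = 0
        players[band].play()
        currentBand = band
    }
}
```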
