Question

This is my first time using Octave/MATLAB for this sort of project, and I am completely new to signal processing, so please excuse my lack of knowledge. The end goal of this project is an m-file that takes a wav file recorded from a microphone, adds a user-specified level of noise distortion in increments, and adds a variable onset delay to either the right or the left channel of the newly generated wav file.

Edit (12:29 AM, 5/13/14): After discussing goals and equipment with my partner, I have a clearer idea of what needs to happen and now need to work out how to fill in these blanks. The delay will most likely have to be between 10 and 300 ns at most, and the noise intensity should range from 0 to 5 on a scale from silent to heavy static.

clear;
delay = input('Delay in ns: ');
noise = input('Level of distortion: ');
[y, Fs, nbits] = wavread(filename);

% generate some noise with the same length and sampling rate as the file

[newy, Fs] = [y + y2, Fs];  % (pseudocode: combine the signal and the noise)

% shift the wave over by x many nanoseconds

wavwrite(newy, Fs, 'newwave');

Any help with the current goal of combining signals, or with generating noise to overlay onto a .wav recording of any length, would be extremely appreciated.


Solution

Here's an example of how it might work. I've simplified the problem by limiting the delay to multiples of the sample period; for a 48 kHz sample rate, the delay resolution is about 20 µs. The method first converts the delay to a whole number of samples and prepends that many zeros to the samples from the wave file. A noise signal of the same length is then generated and added element-wise to the delayed signal.

noiseLevel = input('Level of distortion: '); % between 0 and 1:
                                             % 0 means all signal, 1 means all noise
delaySeconds = input('Delay in seconds: ');  % in seconds
[y,fs,nbits] = wavread(filename);

% figure out how many samples to delay. Resolution is 1/fs.
delaySamples = round(delaySeconds * fs);

% signal length
signalLength = length(y) + delaySamples;

% generate noise samples
noiseSignal = gennoise(signalLength); % call your noise generation function.

% prepend zeros to the signal to delay.
delayedSignal = vertcat(zeros(delaySamples,1), y);

combinedSignal = noiseLevel*noiseSignal + (1-noiseLevel)*delayedSignal;
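For reference, the same pipeline can be sketched in Python/NumPy (the sine tone stands in for the wav samples, and randn stands in for a real gennoise implementation — both are assumptions for illustration, not Octave code):

```python
import numpy as np

fs = 48000                          # assumed sample rate
t = np.arange(fs) / fs              # one second of sample times
y = np.sin(2 * np.pi * 440 * t)     # stand-in for the wav data

noise_level = 0.2                   # 0 = all signal, 1 = all noise
delay_seconds = 0.01

# convert the delay to a whole number of samples (resolution 1/fs)
delay_samples = round(delay_seconds * fs)

# prepend zeros to delay the signal
delayed = np.concatenate([np.zeros(delay_samples), y])

# stand-in noise generator, same length as the delayed signal
noise = np.random.randn(len(delayed))

combined = noise_level * noise + (1 - noise_level) * delayed
print(delay_samples, len(combined))
```

The mixing weights sum to 1, so the overall amplitude stays roughly in range as the distortion level varies.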

Other tips

A couple of points:

Unless I'm doing my math wrong (entirely possible), a delay of 10 to 300 ns is not going to be detectable with typical 44 kHz audio sampling rates. You'd need to be in the MHz sampling rate range.
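To put numbers on that, here is a quick check (Python, not Octave): even the longest proposed delay rounds to zero samples at a 44.1 kHz sample rate.

```python
fs = 44100  # typical audio sample rate

for delay_ns in (10, 300):
    delay_samples = round(delay_ns * 1e-9 * fs)
    print(delay_ns, "ns ->", delay_samples, "samples")
```

One sample period at 44.1 kHz is about 22.7 µs, roughly 75 times longer than the maximum 300 ns delay.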

This solution assumes that your signal is one channel (mono). It shouldn't be too difficult to extend the same approach to more channels.
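For a two-channel signal, the per-channel onset delay from the original question could be handled by zero-padding one channel at the front and the other at the back so the lengths still match. A NumPy sketch with made-up stereo data (an assumption for illustration, not Octave code):

```python
import numpy as np

fs = 48000
stereo = np.random.randn(fs, 2)   # stand-in stereo wav data, columns = L/R

delay_samples = 480               # 10 ms at 48 kHz
# delay the right channel: zeros in front of it, zeros after the left
left = np.concatenate([stereo[:, 0], np.zeros(delay_samples)])
right = np.concatenate([np.zeros(delay_samples), stereo[:, 1]])
delayed = np.column_stack([left, right])
print(delayed.shape)
```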

For adding noise, you can use randn and scale it based on your level of distortion; I'd suggest tinkering with the value you multiply it by. Alternatively, you can use awgn to add white Gaussian noise. There are certainly ways to add other kinds of noise, in either the Fourier or the time domain, but you can look into those.

If you want noise during the delay as well, swap the order of the two steps in the code below (add the zero padding first, then the noise).

A reminder that you can use sound(newy,Fs) to listen to the result and check whether you like it.

clear;
delay=input('Delay in ns: '); 
noise=input('Level of distortion: ');
[y,Fs,nbits]=wavread(filename); 

% Add random noise to signal
newy = y + noise*0.1*randn(length(y),1);

% shift over wave x many nanoseconds 
num = round(delay*1e-9*Fs); % Convert to number of zeros to add to front of data
newy = [zeros(num,1); newy]; % Pad with zeros

wavwrite(newy,Fs,'newwave');
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow