Question

I am applying some Gaussian noise to an image. I think this type of noise is the closest match to the sensor noise one could expect from a rubbish camera (?).

My question is: for a 3-channel image, is the same noise value applied to every channel of each pixel, i.e.

noise = gaussian_value()
pixel = (r + noise, g + noise, b + noise)

This effectively changes the overall brightness of the pixel.

Or is a separate noise value applied to each channel of the pixel, i.e.

r_noise = gaussian_value()
g_noise = gaussian_value()
b_noise = gaussian_value()
pixel = (r + r_noise, g + g_noise, b + b_noise)

Or is a random channel chosen for each pixel and noise applied to that channel only, i.e.

noise = gaussian_value()
pixel[randint(0, 2)] += noise  # assumes pixel is mutable here (e.g. a list) and randint comes from the random module

Which of these methods most accurately models the kind of noise I am after (i.e. sensor noise)? I also believe that most cameras do not have separate per-channel sensors for each pixel and instead interpolate colour values from the surrounding pixels; if that is the case, does it affect the answer?
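For concreteness, here is one way the three options could be written with NumPy; the array shape, the sigma value, and the placeholder image below are my own illustrative assumptions, not part of the question itself:

import numpy as np

rng = np.random.default_rng()
img = rng.uniform(0, 255, size=(4, 4, 3))        # placeholder float image, shape (H, W, 3)
sigma = 10.0                                     # assumed noise standard deviation

# Option 1: one noise value per pixel, shared by all three channels
shared = img + rng.normal(0, sigma, size=img.shape[:2])[..., None]

# Option 2: an independent noise value for every channel of every pixel
per_channel = img + rng.normal(0, sigma, size=img.shape)

# Option 3: noise added to one randomly chosen channel per pixel
chosen = rng.integers(0, 3, size=img.shape[:2])  # which channel gets the noise
mask = np.eye(3)[chosen]                         # one-hot (H, W, 3) channel selector
one_channel = img + mask * rng.normal(0, sigma, size=img.shape[:2])[..., None]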


Solution

If your goal is to simulate the noise from a real sensor, you should start with an image from a real camera. Take a defocused picture of a gray card and, for each pixel, subtract the average value of a large block around it from the pixel value itself; what remains should be pure noise that you can analyze. Depending on your requirements, you might even be able to use this saved noise directly, either by overlaying it on your image or by choosing a random starting point and stepping through it.
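A minimal sketch of that idea, assuming the defocused gray-card shot is saved as "gray_card.png" and that NumPy, SciPy and imageio are available (the filename and the block size are illustrative assumptions):

import numpy as np
from scipy.ndimage import uniform_filter
from imageio.v3 import imread

frame = imread("gray_card.png").astype(np.float64)          # defocused gray-card shot, (H, W, 3)

block = 31                                                  # assumed size of the averaging block
local_mean = uniform_filter(frame, size=(block, block, 1))  # per-channel local average

noise = frame - local_mean                                  # what remains is (mostly) sensor noise

print("sigma per channel:", noise.std(axis=(0, 1)))
print("channel correlation:\n", np.corrcoef(noise.reshape(-1, 3), rowvar=False))

The per-channel standard deviation and the channel correlation should hint at which of the question's models fits better: strongly correlated channels look like a shared noise value per pixel, while uncorrelated channels look like independent per-channel noise. The noise array itself can also be reused directly, as suggested above.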
