Question

I am learning about "Image Denoising using Autoencoders", so I now want to build and train a model. When I read how Nvidia generated their dataset, I came across this: "We used about 1000 different scenes and created a series of 16 progressive images for each scene. To train the denoiser, images were rendered from the scene data at 1 sample per pixel, then 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, and 131072 samples per pixel."

I was trying to understand:

1) What is meant by rendering images at n samples per pixel?

2) How can I do this in Python to generate the dataset?

I have read some articles regarding this but could not form a confident opinion from a Data Science perspective.

https://area.autodesk.com/tutorials/what-is-sampling/

Any leads would be much appreciated! Thanks


Solution

Your link is to a paid course :) In ray tracing, rendering with too few samples per pixel produces a noisy image. In fact, the link with the picture answers your question: https://chunky.llbit.se/path_tracing.html
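To make "n samples per pixel" concrete: a path tracer estimates each pixel's colour as the average of n random light-path samples, so the noise standard deviation shrinks roughly like 1/sqrt(n). Here is a minimal toy sketch of that idea (not a real renderer; `true_radiance` and `noise_std` are made-up stand-ins for the scene's light integral):

```python
import numpy as np

def render_pixel(true_radiance, n_samples, noise_std=0.5, rng=None):
    """Toy Monte Carlo pixel estimate: average n_samples noisy samples
    of the true radiance. More samples -> lower variance -> less noise."""
    rng = np.random.default_rng() if rng is None else rng
    samples = true_radiance + rng.normal(0.0, noise_std, size=n_samples)
    return samples.mean()

rng = np.random.default_rng(0)
for n in (1, 8, 64, 1024):
    estimates = [render_pixel(0.7, n, rng=rng) for _ in range(5)]
    print(f"{n:5d} samples/pixel -> estimates {np.round(estimates, 3)}")
```

Running it, the estimates at 1 sample per pixel scatter widely around the true value 0.7, while at 1024 samples they are nearly exact, which is exactly the progression Nvidia's 1 to 131072 spp image series captures.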

2) Ray tracing is hard... but not impossible; google for "python ray tracing module". Something that merely looks close is easy to produce by adding synthetic noise to a clean image: https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv. Keep in mind, though, that on real ray-traced images the noise varies with surface slope and environment, so synthetic noise is only an approximation.
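If you go the synthetic-noise route from that Stack Overflow answer, a rough sketch might look like the following (the file names, the base sigma of 50, and the 1/sqrt(n) scaling are illustrative assumptions on my part, not Nvidia's procedure):

```python
import numpy as np
import cv2  # opencv-python

def add_gaussian_noise(img, sigma, rng=None):
    """Add zero-mean Gaussian noise (sigma in 0-255 units) to an 8-bit image."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.astype(np.float32) + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

clean = cv2.imread("scene.png")           # a clean, fully converged render
for spp in (1, 8, 64, 512, 4096):
    sigma = 50.0 / np.sqrt(spp)           # crude 1/sqrt(n) noise falloff
    noisy = add_gaussian_noise(clean, sigma)
    cv2.imwrite(f"scene_{spp}spp.png", noisy)
```

Pairing each noisy output with the clean source image gives you (noisy, clean) training pairs analogous in structure to the low-spp/high-spp pairs in the Nvidia setup, just with much simpler noise statistics.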

If you still want genuinely ray-traced noisy images, it is better to look for tutorials for 3D modelling programs, e.g. "ray tracing in 3D Studio MAX tutorial".

Licensed under: CC-BY-SA with attribution