In a recent answer on Stack Exchange, I read about a possible way to understand more clearly what happens in each hidden layer of a neural network.

Here's the excerpt:

You should watch which neurons are activated in each layer depending on the input. As you know, each neuron will be activated (once the DNN is trained) for specific input combinations. By visualizing that, you can get an idea of what exactly each layer has learned in terms of high- and low-level features.

Source - High-level features of a neural network
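To make the idea concrete, here's a minimal sketch of what I understand the suggestion to be, assuming a small PyTorch MLP (the architecture, layer names, and input below are placeholders I made up; any trained network would do). It records the post-activation outputs of each hidden layer with forward hooks and plots which neurons fired for one input:

import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Hypothetical untrained MLP; in practice this would be a trained model.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

activations = {}

def make_hook(name):
    # Capture the layer's output tensor whenever a forward pass runs.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a hook on each ReLU so we capture post-activation values.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(make_hook(f"relu_{idx}"))

x = torch.randn(1, 784)  # stand-in for a real input, e.g. a flattened MNIST digit
model(x)

# One bar chart per hidden layer: which neurons fired, and how strongly.
fig, axes = plt.subplots(len(activations), 1, figsize=(8, 5))
for ax, (name, act) in zip(axes, activations.items()):
    ax.bar(range(act.shape[1]), act[0].numpy())
    ax.set_title(name)
plt.tight_layout()
plt.show()

Repeating this over many inputs and comparing the activation patterns is, as far as I understand, what the answer means by seeing what each layer has learned.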

I wanted to know whether there are any papers that have tried doing this (links would be really helpful). In the meantime, are there any other ways to understand what happens in each hidden layer?
