Question

In a recent answer on Stack Exchange, I read about a possible way to understand more clearly what happens in each hidden layer of a neural network.

Here's the excerpt:

You should watch what makes each neuron activate in each layer depending on the input. As you know, each neuron will be activated (once the DNN is trained) for specific input combinations. By visualizing that, you can get an idea of what exactly each layer has learned in terms of high- and low-level features.

Source - High-level features of a neural network
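
To make sure I understand the suggestion, here's a minimal sketch of what I think it means in PyTorch, using forward hooks to capture each hidden layer's output for a given input (the toy model, layer names, and sizes are just for illustration):

```python
import torch
import torch.nn as nn

# A small toy MLP; the architecture here is purely illustrative.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Store the output of each hidden layer, keyed by a name.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register a forward hook on every ReLU so we capture post-activation values.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(make_hook(f"relu_{idx}"))

# Run an input through the network; afterwards `activations` holds the
# hidden-layer responses, which could be plotted or compared across
# inputs to see which units fire for which kinds of input.
x = torch.randn(1, 784)
_ = model(x)

for name, act in activations.items():
    print(name, tuple(act.shape), f"fraction active: {(act > 0).float().mean():.2f}")
```

Comparing these activation patterns across many inputs (e.g. per class) seems to be what the excerpt is describing, but I'm not certain this is the standard way to do it.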

I wanted to know if there are any papers that have tried doing this (links would be really helpful). Meanwhile, are there any other ways to understand what happens in each hidden layer?
