Question

In a recent answer on Stack Exchange, I read about a possible way to understand more clearly what happens in each hidden layer of a neural network.

Here's the excerpt:

You should watch what makes each neuron activate in each layer depending on the input. As you know, each neuron will be activated (once the DNN is trained) for specific input combinations. By visualizing that, you can get an idea of what exactly each layer has learned in terms of high- and low-level features.

Source: High-level features of a neural network
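
For concreteness, here is a minimal sketch of the idea in the excerpt: record which units in each hidden layer activate for a given input, then inspect them. The excerpt doesn't name a framework, so I'm assuming PyTorch here, and the small MLP and random input are placeholders, not anything from the original post.

```python
# Sketch: capture per-layer activations with PyTorch forward hooks,
# then check which units fired for a given input.
import torch
import torch.nn as nn

# Placeholder model (assumption): a small MLP with ReLU hidden layers.
model = nn.Sequential(
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        # Store a detached copy of the layer's output for later inspection.
        activations[name] = output.detach()
    return hook

# Attach a hook to every ReLU so we record post-activation values.
for idx, layer in enumerate(model):
    if isinstance(layer, nn.ReLU):
        layer.register_forward_hook(save_activation(f"relu_{idx}"))

x = torch.randn(1, 784)  # stand-in for a real input to a trained model
model(x)

for name, act in activations.items():
    # Fraction of units that activated (non-zero after ReLU) for this input.
    fired = (act > 0).float().mean().item()
    print(f"{name}: {fired:.0%} of units active")
```

Repeating this over many inputs and grouping by which units fire is one simple way to start seeing what each layer responds to, along the lines the excerpt suggests.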

I wanted to know if there are any papers that have tried doing this (links would be really helpful). In the meantime, are there any other ways to understand what happens in each hidden layer?

No correct solution
