Question

I am reading this article on the Layer-wise Relevance Propagation (LRP) method and I can't understand this particular paragraph:

LRP is a conservative technique, meaning the magnitude of any output y is conserved through the backpropagation process and is equal to the sum of the relevance map R of the input layer. This property holds for any consecutive layers j and k, and by transitivity for the input and output layer.

What does this even mean? I can't understand what the author is trying to say. I understand what backpropagation is, but not what is being said here in relation to it!


Solution

In this layered graph structure, the relevance conservation property can be formulated as follows. Let $j$ and $k$ be indices for neurons in two successive layers, and let $R_k$ be the relevance of neuron $k$ for the prediction $f(x)$. Define $R_{j \leftarrow k}$ as the share of $R_k$ that is redistributed to neuron $j$ in the lower layer. The conservation property for this neuron imposes

$$\sum_j R_{j \leftarrow k} = R_k.$$

Likewise, each neuron in the lower layer aggregates all the relevance coming from the neurons of the higher layer:

$$R_j = \sum_k R_{j \leftarrow k}.$$

Combined, these two equations also ensure a relevance conservation property between the layers themselves, $\sum_j R_j = \sum_k R_k$. Refer to the paper "Methods for Interpreting and Understanding Deep Neural Networks" by Grégoire Montavon et al. for more details.
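These two equations can be checked numerically with any redistribution rule that satisfies them. Below is a minimal sketch (my own illustration, not code from the paper) using the basic proportional LRP-0 rule, $R_{j \leftarrow k} = \frac{a_j w_{jk}}{\sum_j a_j w_{jk}} R_k$, on a single random linear layer; the variable names and the choice of rule are assumptions made purely for illustration.

```python
import numpy as np

# Minimal LRP conservation check on one linear layer with ReLU.
# Redistribution rule (basic LRP-0): R_{j<-k} = a_j * w_jk / (sum_j a_j * w_jk) * R_k.

rng = np.random.default_rng(0)

a = rng.uniform(0.1, 1.0, size=4)   # activations a_j of the lower layer j
W = rng.normal(size=(4, 3))         # weights w_jk connecting layer j to layer k

z = a @ W                           # pre-activations z_k = sum_j a_j * w_jk
R_k = np.maximum(z, 0.0)            # relevance of the upper layer, here taken as the ReLU output

# Share of R_k redistributed to each lower neuron j, proportional to its contribution a_j * w_jk.
contrib = a[:, None] * W                                        # shape (j, k)
R_jk = contrib / (contrib.sum(axis=0, keepdims=True) + 1e-12) * R_k
R_j = R_jk.sum(axis=1)                                          # R_j = sum_k R_{j<-k}

print("sum over k of R_k:", R_k.sum())
print("sum over j of R_j:", R_j.sum())   # equal up to floating-point error: relevance is conserved
```

Running this prints two (nearly) identical numbers, which is exactly the between-layer conservation stated above; applying the same redistribution layer by layer is what carries the output score all the way back to the input relevance map mentioned in the question.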

OTHER TIPS

The answer is within your question itself. Conservation means that a quantity is preserved. Here, the output is conserved through the backward propagation process: at every layer the relevance values sum to that same output, and at the input layer the sum over all pixels/neurons still equals it. That is what lets the relevance map tell you which features of a particular input vector contribute the most to the neural network's output.

Hope it helps. Thank you.

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange