Question

For neural networks, can we tell which parameters are responsible for which features?

For example, in an image classification task, each pixel of an image is a feature. Can I somehow find out which parameters encode the learned information, say from the top-left pixel of my training instances?


Solution

Yes — at least you can identify which pixels contribute most to a prediction.

A tool like Layer-wise Relevance Propagation (LRP), used in explainable AI, serves this purpose: starting from the network's output, it propagates a relevance score backward through the layers, using the learned weights and activations to estimate how much each input pixel contributed to the prediction.

Many open-source implementations are available. Along the same lines, instead of identifying only the most relevant pixels, you can compute a relevance score for every pixel and visualize the result as a heatmap over the input image.
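To make the idea concrete, here is a minimal sketch of the LRP-epsilon rule on a tiny two-layer ReLU network in plain NumPy. The weights, input, and network size are made up for illustration; in practice you would take them from your trained model and apply the rule layer by layer.

```python
import numpy as np

# Toy network: 4 input "pixels" -> 3 hidden ReLU units -> 2 output classes.
# All values below are illustrative, not from a real trained model.
W1 = np.array([[ 0.5, -0.2,  0.1],
               [ 0.3,  0.8, -0.5],
               [-0.7,  0.4,  0.6],
               [ 0.2, -0.1,  0.9]])
W2 = np.array([[ 0.6, -0.3],
               [-0.2,  0.7],
               [ 0.5,  0.4]])
x = np.array([1.0, 0.5, 0.2, 0.8])  # one flattened input image

# Forward pass, keeping activations for the backward relevance pass.
a1 = np.maximum(x @ W1, 0.0)
out = a1 @ W2

# Start relevance at the predicted class's score.
c = np.argmax(out)
R_out = np.zeros_like(out)
R_out[c] = out[c]

def lrp_dense(a_prev, W, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs to its inputs in
    proportion to each connection's contribution (LRP-epsilon rule)."""
    z = a_prev @ W                    # each unit's pre-activation
    s = R / (z + eps * np.sign(z))    # stabilized relevance-per-unit ratio
    return a_prev * (s @ W.T)         # relevance assigned to each input

R_hidden = lrp_dense(a1, W2, R_out)
R_input = lrp_dense(x, W1, R_hidden)

print(R_input)                 # one relevance score per input pixel
print(R_input.sum(), out[c])   # relevance is (approximately) conserved
```

The key property is conservation: the relevance scores at the input sum (up to the epsilon stabilizer) to the output score being explained, so each pixel's share can be read as its contribution to that prediction.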

I hope this answers your question.

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange