Question

For neural networks, can we tell which parameters are responsible for which features?

For example, in an image classification task, each pixel of an image is a feature. Can I somehow find out which parameters encode the learned information, say from the top-left pixel of my training instances?


Answer

Yes, at least partially: you can identify which pixels contribute most to a given prediction.

Techniques such as Layer-wise Relevance Propagation (LRP), developed for explainable AI, serve exactly this purpose. Starting from the output, LRP propagates a relevance score backward through the network, using the learned weights and the layer activations to quantify how much each input pixel contributed to the prediction.

Many open-source implementations are available. Along the same lines, instead of identifying only the most relevant pixels, you can compute a relevance score for every pixel of the input.
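As a minimal sketch of the idea, here is a hand-rolled epsilon-rule LRP on a tiny two-layer ReLU network in NumPy. The weights are random placeholders (a real use case would take the parameters of your trained classifier), and the shapes (16 "pixels", 8 hidden units, 3 classes) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network: in practice these would be
# the learned parameters of your image classifier.
W1 = rng.normal(size=(16, 8))   # 16 input "pixels" -> 8 hidden units
b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 3))    # 8 hidden units -> 3 class scores
b2 = rng.normal(size=3)

def forward(x):
    a1 = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    z2 = a1 @ W2 + b2                   # class scores
    return a1, z2

def lrp_epsilon(a_in, W, b, R_out, eps=1e-6):
    """Epsilon-rule relevance propagation through one linear layer:
    redistributes the output relevance R_out onto the layer inputs
    in proportion to each input's contribution to the pre-activation."""
    z = a_in @ W + b
    z = z + eps * np.sign(z + (z == 0))  # stabilize the division
    s = R_out / z                        # per-output relevance scale
    return a_in * (W @ s)                # relevance per input unit

x = rng.random(16)                       # a fake 16-pixel "image"
a1, scores = forward(x)
target = int(np.argmax(scores))

# Put all relevance on the predicted class, then propagate it back.
R2 = np.zeros(3)
R2[target] = scores[target]
R1 = lrp_epsilon(a1, W2, b2, R2)         # relevance of hidden units
R0 = lrp_epsilon(x, W1, b1, R1)          # relevance of each input pixel

print(R0.shape)                          # one score per pixel
```

For a real model you would use a maintained implementation instead (e.g. Captum for PyTorch or the iNNvestigate library for Keras), which handle convolutions, pooling, and the various LRP rule variants correctly.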

I hope this answers your question.

License: CC-BY-SA with attribution
Source: datascience.stackexchange (not affiliated)