Question

For neural networks, can we tell which parameters are responsible for which features?

For example, in an image classification task, each pixel of an image is a feature. Can I somehow find out which parameters encode the learned information, say from the top-left pixel of my training instances?


Solution

Yes, at least you can identify which pixels contribute most to a prediction.

Techniques such as Layer-wise Relevance Propagation (LRP), developed for explainable AI, serve exactly this purpose: they propagate the network's output backwards through the learned weights and score how much each input pixel contributed to the prediction.

Many open-source implementations are available. Along the same lines, instead of identifying only the most relevant pixels, you can compute a relevance score for every pixel, producing a heatmap over the whole input.
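To make the idea concrete, here is a minimal NumPy sketch of the LRP epsilon-rule on a tiny two-layer ReLU network. The weights and the 4-"pixel" input are random placeholders, not from any trained model; real uses would apply a library such as Captum or iNNvestigate to an actual network.

```python
import numpy as np

# Illustrative toy setup: random weights, a 4-feature "image".
rng = np.random.default_rng(0)
x = rng.random(4)                 # input "pixels"
W1 = rng.standard_normal((4, 3))  # layer 1 weights (no biases, for simplicity)
W2 = rng.standard_normal((3, 1))  # layer 2 weights

# Forward pass, keeping activations for the backward relevance pass.
a1 = np.maximum(0, x @ W1)        # hidden ReLU activations
out = a1 @ W2                     # network output

def lrp_eps(a, W, R, eps=1e-6):
    """Redistribute relevance R from a layer's output back to its input a
    (LRP epsilon-rule)."""
    z = a @ W                     # pre-activations
    z = z + eps * np.sign(z)      # epsilon stabiliser, avoids division by ~0
    s = R / z                     # relevance per unit of pre-activation
    return a * (W @ s)            # each input's share of the relevance

R_hidden = lrp_eps(a1, W2, out.copy())  # relevance at the hidden layer
R_input = lrp_eps(x, W1, R_hidden)      # per-pixel relevance at the input

print(R_input)                    # per-pixel relevance scores
# Conservation: total input relevance ~ equals the network output.
print(R_input.sum(), out.sum())
```

The key property to notice is conservation: the relevance at each layer sums (up to the epsilon term) to the network's output, so the per-pixel scores are a decomposition of the prediction rather than just a gradient.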

I hope this answers your question.

License: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange