Question

In an RBM, if we visualize the weights learned by the hidden units, we see that the network is learning basic shapes. For example, on the MNIST dataset, the hidden units learn features of the digits they are trying to classify.

In a regular feed-forward net with one hidden layer, I can train the network to recognize digits, and it works, but when I try to visualize the hidden-layer weights, I only see noise with no distinguishable features. Why is that? Hasn't the network learned to recognize the digits?


Solution

It has learned to recognize the digits, but it may have put too much weight on individual pixels. Try adding different amounts of L2 regularization or dropout and compare the resulting weight visualizations. Adding some form of regularization should make the net rely less on individual pixels and more on the inherent structure of the digits, giving you smoother weights and visualizations.
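To illustrate the effect, here is a minimal numpy sketch (not the asker's actual network): a one-hidden-layer net trained by gradient descent on a toy binary task, once without and once with L2 weight decay. All names and hyperparameters (`train_mlp`, `l2`, the toy data) are illustrative assumptions; the point is just that the decay term shrinks the first-layer weights, which is what produces smoother-looking weight images.

```python
import numpy as np

def train_mlp(X, y, n_hidden=8, l2=0.0, lr=0.1, epochs=200, seed=0):
    """Train a one-hidden-layer sigmoid net with optional L2 weight
    decay; returns the input-to-hidden weight matrix W1."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, (n_hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)            # hidden activations
        p = sig(H @ W2 + b2)            # output probability
        # gradients of cross-entropy loss, plus L2 penalty on weights
        d2 = (p - y[:, None]) / len(X)
        gW2 = H.T @ d2 + l2 * W2
        gb2 = d2.sum(0)
        d1 = (d2 @ W2.T) * H * (1 - H)
        gW1 = X.T @ d1 + l2 * W1
        gb1 = d1.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return W1

# Toy "images": 16-pixel inputs where only the first 4 pixels matter.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))
y = (X[:, :4].sum(axis=1) > 0).astype(float)

W_plain = train_mlp(X, y, l2=0.0)
W_reg = train_mlp(X, y, l2=0.01)
# Weight decay shrinks the learned weights overall, which is what
# makes the reshaped weight images look smoother and less noisy.
print(np.linalg.norm(W_plain), np.linalg.norm(W_reg))
```

To visualize the weights for MNIST, you would reshape each column of `W1` into a 28x28 image (e.g. with `plt.imshow(W1[:, i].reshape(28, 28))`) and compare the regularized and unregularized runs side by side.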

Licensed under: CC-BY-SA with attribution