Question

How do we select the number of neurons for the hidden layer of a backpropagation network? Is there any hard-and-fast rule for choosing the number of hidden neurons? Some papers suggest it should be roughly the square root of (no_input_neurons * no_output_neurons), but that didn't work for me. Is selecting the number of hidden neurons just trial and error?

I am trying to design a neural network for tic-tac-toe, starting from the basics (I have already done XOR). How should I proceed? Different forums give different numbers of hidden neurons. What's the idea behind that?
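For what it's worth, the rule of thumb mentioned above is easy to compute. This is a minimal sketch; the 9-in/9-out encoding for tic-tac-toe (one input per cell, one output per possible move) is an assumption, not something the rule prescribes:

```python
import math

def hidden_size_heuristic(n_inputs, n_outputs):
    """Rule of thumb from the question: sqrt(n_inputs * n_outputs), rounded."""
    return round(math.sqrt(n_inputs * n_outputs))

# Tic-tac-toe under a 9-in/9-out encoding (assumed here):
print(hidden_size_heuristic(9, 9))  # → 9
```

Treat the result only as a starting point for experimentation, not a final answer.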

Solution

Choosing hidden layers for backpropagation networks is a bit of a black art, but you can reason about it to some extent. During learning, the network fits the parameters of decision boundaries (hyperplanes) in a high-dimensional space that can correctly classify your inputs. So you need a sufficiently large number of hidden neurons to allow the network to discriminate between your different inputs. It was still an active research topic when I studied neural networks more than 5 years ago. Perhaps have a look at this paper: An algebraic projection analysis for optimal hidden units size and learning rates in back-propagation learning
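To see the capacity argument concretely, here is a rough sketch (plain NumPy, not any particular paper's method) that trains a one-hidden-layer sigmoid network on XOR for several hidden-layer sizes; the hyperparameters are arbitrary choices for illustration:

```python
import numpy as np

def train_xor(n_hidden, epochs=10000, lr=0.5, seed=0):
    """Train a 2 -> n_hidden -> 1 sigmoid net on XOR; return final mean squared error."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0, 1, (2, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 1, (n_hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                 # hidden activations
        out = sig(h @ W2 + b2)               # network output
        d_out = (out - y) * out * (1 - out)  # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer delta (backpropagated)
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);  b1 -= lr * d_h.sum(axis=0)
    return float(np.mean((out - y) ** 2))

# Too few hidden units cannot carve out the XOR regions; a few more usually can.
for n in (1, 2, 4, 8):
    print(n, train_xor(n))
```

With one hidden unit the error typically stays high, because a single hidden neuron cannot separate XOR's classes; with a handful of units the error usually drops close to zero (subject to the random initialization).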

With other kinds of networks, such as recurrent networks, there are some techniques that can help with finding the right architecture, for example visualizing the learned weights, which sometimes clearly resemble features of the inputs.

OTHER TIPS

Determining the number of hidden layers/neurons is a trial-and-error process, and is highly dependent on the type of training data you're using. I always try matching the number of hidden neurons to the number of input neurons first, and then increment/decrement from there. Try changing the number of training epochs and the learning rate too.
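That trial-and-error loop can be written down as a simple grid search. This is only a skeleton under stated assumptions: `train_and_score` is a hypothetical callback you supply (it should train a network with the given settings and return a validation score, higher being better), and the candidate values are arbitrary examples:

```python
import itertools

def grid_search(train_and_score, n_inputs):
    """Trial-and-error sketch: start near n_inputs hidden units, vary epochs and lr.

    `train_and_score(n_hidden, epochs, lr)` is a placeholder the caller provides;
    it must return a validation score where higher is better.
    """
    hidden_sizes = [max(1, n_inputs // 2), n_inputs, n_inputs * 2]  # around n_inputs
    epoch_counts = [500, 2000, 5000]
    learning_rates = [0.01, 0.1, 0.5]
    best = None
    for h, e, lr in itertools.product(hidden_sizes, epoch_counts, learning_rates):
        score = train_and_score(h, e, lr)
        if best is None or score > best[0]:
            best = (score, {"n_hidden": h, "epochs": e, "lr": lr})
    return best

# Demo with a dummy scorer that simply prefers more hidden units:
print(grid_search(lambda h, e, lr: h, n_inputs=9))
```

In a real experiment you would hold out a validation set inside `train_and_score`, so the chosen architecture is judged on data it was not trained on.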

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow