Is a Neural Network with 20 times the number of input neurons (on hidden layers) guaranteed to overfit? When is this not so?

datascience.stackexchange https://datascience.stackexchange.com/questions/37577

Question

I'm aware of the problem of overfitting. One way to describe it is that your neural network learns the training data to high accuracy but performs poorly on (fails to generalize to) new data.
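As a hedged illustration of that train-vs-test gap (not the asker's actual pipeline): fitting a high-capacity model to noisy data and comparing training error against held-out error makes the symptom concrete. The data and model degree here are invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# Chronological split: first 24 points for training, last 6 held out
x_tr, y_tr = x[:24], y[:24]
x_te, y_te = x[24:], y[24:]

# High-capacity fit (degree-15 polynomial) memorizes training noise
coef = np.polyfit(x_tr, y_tr, deg=15)

def mse(pred, true):
    return float(np.mean((pred - true) ** 2))

train_mse = mse(np.polyval(coef, x_tr), y_tr)
test_mse = mse(np.polyval(coef, x_te), y_te)

# Overfitting shows up as a large gap: low train error, high test error
print(f"train MSE = {train_mse:.4f}, test MSE = {test_mse:.4f}")
```

The capacity of the model matters less than the gap it produces: if train error stays low while held-out error climbs, the model is overfitting regardless of layer width.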

I was wondering whether there are situations where having 20 times as many neurons on the first hidden layer as there are input neurons wouldn't necessarily cause overfitting.

Here is a screenshot of my workflow. As you can see, I'm using test data. The way I've done it is to split the data myself using a "Past/Future" column containing a P or an F; the future is the last 5% of the sequential time data.

I don't use randomization, since I don't think it makes sense to shuffle sequential time datapoints.
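The Past/Future split described above can be sketched as a plain chronological cut, with the last 5% of time-ordered rows held out as the "Future" set. The array shapes and names below are illustrative, not the asker's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))      # 4 input features, rows already in time order
y = rng.integers(0, 2, size=n)   # binary target

# No shuffling: the cut point preserves temporal order,
# so the model never trains on data from after the test period.
split = int(n * 0.95)
X_past, X_future = X[:split], X[split:]
y_past, y_future = y[:split], y[split:]

print(len(X_past), len(X_future))  # 950 50
```

This mirrors the P/F column approach: everything before the cut is "Past" (training), everything after is "Future" (test), and no leakage of future information into training occurs.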

[Screenshot of the Orange workflow]

Thanks.

No correct solution was posted.

License: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange