Problem

I have some specific questions whose answers I could not find in books, so I am asking for help here and would be extremely grateful for an intuitive explanation if possible.

In general, neural networks face a bias/variance tradeoff, and thus we need a regularizer. Higher bias --> underfitting; higher variance --> overfitting.
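To make the tradeoff concrete, here is a minimal sketch of what I mean (Python/scikit-learn; the sine-wave dataset and the polynomial degrees are made up for illustration): a degree-1 fit has high bias and underfits, while a degree-15 fit has high variance and overfits.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Noisy samples from a sine wave (toy data for illustration only).
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 1, 60))[:, None]
y = np.sin(2 * np.pi * X.ravel()) + rng.normal(scale=0.2, size=60)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for degree in (1, 4, 15):  # underfit / reasonable / overfit
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_tr, y_tr)
    print(degree,
          mean_squared_error(y_tr, model.predict(X_tr)),   # training error
          mean_squared_error(y_te, model.predict(X_te)))   # test error
```

Degree 1 gives high error on both sets (high bias, underfitting); degree 15 gives near-zero training error but much higher test error (high variance, overfitting).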

To combat overfitting, we use regularization to constrain the weights. The regularization constant is a hyperparameter and, as I understand it, should be tuned via cross-validation. Thus, the dataset is split into training, validation, and test sets. The test set is independent and unseen by the model during learning, but its labels are available to us. We usually report statistics such as false positives, the confusion matrix, and the misclassification rate on this test set.
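Here is a minimal sketch of how I understand this pipeline (Python/scikit-learn, with an L2-regularized logistic regression as a stand-in; the synthetic dataset and the grid of constants are made up): the regularization constant is chosen on the validation set, and the confusion matrix is reported on the held-out test set only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
# Carve out the test set first, then split the rest into train/validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

best_C, best_acc = None, -1.0
for C in (0.001, 0.01, 0.1, 1.0, 10.0):  # C = inverse regularization strength
    clf = LogisticRegression(C=C, max_iter=1000).fit(X_train, y_train)
    acc = accuracy_score(y_val, clf.predict(X_val))  # score on validation set
    if acc > best_acc:
        best_C, best_acc = C, acc

# Refit with the chosen constant; report statistics on the unseen test set.
final = LogisticRegression(C=best_C, max_iter=1000).fit(X_train, y_train)
print("chosen C:", best_C)
print(confusion_matrix(y_test, final.predict(X_test)))
```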

Q1) Is this bias/variance problem encountered in other algorithms, such as SVMs and LSTMs, as well?

In the convolutional neural network (MATLAB toolbox), I have not seen any option for specifying the regularization constant. Does this mean that CNNs don't need a regularizer?
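For comparison, in other libraries the constant is an explicit knob; here is a minimal scikit-learn sketch of the kind of option I expected to find (the toy data is made up):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 1.0]) + rng.normal(scale=0.1, size=50)

for alpha in (0.01, 1.0, 100.0):  # alpha is the L2 regularization constant
    coefs = Ridge(alpha=alpha).fit(X, y).coef_
    print(alpha, np.round(coefs, 2))  # larger alpha shrinks the weights toward zero
```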

Q2) What does it mean if the training error and the test error are both zero? Is this the ideal situation?

Q3) What does it mean if the training error is greater than the test error?

Q4) What does it mean if the training error is greater than the validation error?

Please correct me where I am wrong. Thank you very much.

No correct solution
