Question

In a deep model, I used the early stopping technique in Keras as below:

from keras.callbacks import EarlyStopping

early_stopping = [EarlyStopping(monitor='val_loss',
                          min_delta=0,
                          patience=2,
                          verbose=2, mode='auto')]

model.fit(train_x, train_y, batch_size=batch_size,
          epochs=epochs, verbose=2,
          callbacks=early_stopping,
          validation_data=(val_x, val_y))

Now, when I run this code, it prints the training and validation loss for each epoch.

I set patience=2 in the early stopping callback, so training continues for two more epochs after the epoch in which the validation loss stops decreasing.

Something like this:

Epoch 1/10
- 198s - loss: 99.7160 - val_loss: 123.0397 
Epoch 2/10
- 204s - loss: 78.7000 - val_loss: 109.0344 
Epoch 3/10
- 208s - loss: 65.4412 - val_loss: 78.0097 
Epoch 4/10
- 268s - loss: 61.9812 - val_loss: 79.0312
Epoch 5/10
- 298s - loss: 59.1124 - val_loss: 79.3397 
Epoch 6/10
- 308s - loss: 57.2200 - val_loss: 218.0397 
Epoch 00007: early stopping
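
To make the timing concrete, here is a minimal plain-Python sketch of the patience bookkeeping (my own illustration, not the actual Keras source; the function name is hypothetical). The stopping rule here is the one consistent with the log above, where the callback fires once the validation loss has failed to improve for more than `patience` epochs:

```python
def early_stopping_epoch(val_losses, patience=2, min_delta=0.0):
    """Return the 1-based epoch at which training halts.

    Sketch of EarlyStopping-style bookkeeping: track the best
    validation loss seen so far and a `wait` counter of epochs
    without improvement; stop once `wait` exceeds `patience`.
    """
    best = float("inf")
    wait = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best - min_delta:
            # Improvement: record new best, reset the wait counter.
            best = loss
            wait = 0
        else:
            # No improvement: stop if patience is already exhausted,
            # otherwise burn one more epoch of patience.
            if wait >= patience:
                return epoch
            wait += 1
    return len(val_losses)  # patience never ran out


# Validation losses from the log above (best is 78.0097 at epoch 3):
losses = [123.0397, 109.0344, 78.0097, 79.0312, 79.3397, 218.0397]
print(early_stopping_epoch(losses, patience=2))  # → 6
```

With these numbers the sketch stops after epoch 6, matching the log, even though the best validation loss was reached at epoch 3.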

In the end, what will the final weights and loss values of the model be? Those from the final epoch of training, or those from two epochs before it?

If it keeps the weights from the final epoch, wouldn't it be better to set the patience as small as possible to reduce overfitting?

Thank you

No correct solution

Licensed under: CC-BY-SA with attribution