Question

As a follow-up to "Validate via predict() or via fit()?", I wonder about the difference between validation and prediction. To keep it simple, I will refer to train, validation, and test sets:

CV

  • Training data: train the model, in particular find hyperparameters through GridSearchCV or similar
  • Validation data: validate these hyperparameters on "new" data?
  • Test data: make predictions on unseen data

My approach so far:

  • Split data: 60 % Training - 20 % Validation - 20 % Test
  • Find hyperparameters on training data
  • Fit again with the best parameters on the training data using .fit(X_train, y_train, validation_data=(X_val, y_val)).
  • Check the model on unseen data through .predict() or .evaluate() (see the sketch after this list).
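
For reference, here is a minimal runnable sketch of these four steps. The dataset (make_classification), the parameter grid, and the small Keras network are placeholder assumptions of mine, not part of the question; the grid search uses a scikit-learn LogisticRegression as a stand-in, since plugging a Keras model into GridSearchCV would require a wrapper such as scikeras.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, train_test_split
    from tensorflow import keras

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

    # 1) 60 % train / 20 % validation / 20 % test via two successive splits
    X_train, X_rest, y_train, y_rest = train_test_split(
        X, y, test_size=0.4, random_state=0)
    X_val, X_test, y_val, y_test = train_test_split(
        X_rest, y_rest, test_size=0.5, random_state=0)

    # 2) Find hyperparameters on the training data only; GridSearchCV runs
    #    its own internal cross-validation on X_train and never sees
    #    X_val or X_test.
    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          param_grid={"C": [0.1, 1.0, 10.0]}, cv=5)
    search.fit(X_train, y_train)
    print("best parameters:", search.best_params_)

    # 3) Fit again on the training data, monitoring the validation set.
    #    Note the keyword argument: validation_data=(X_val, y_val).
    model = keras.Sequential([
        keras.layers.Dense(32, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X_train, y_train, epochs=10,
              validation_data=(X_val, y_val), verbose=0)

    # 4) Check the model on unseen data
    test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)
    print("test accuracy:", test_acc)
    preds = model.predict(X_test, verbose=0)

Note that GridSearchCV with refit=True (the default) already refits the best estimator on all of X_train, so search.best_estimator_ is ready to use without a manual refit.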

Is this correct? And when using GridSearchCV, do I still have to split the training data manually into training and validation sets?
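
For context on the last question: GridSearchCV creates its own internal train/validation folds from whatever data it is given (cv=5 above), so a separate manual split is not required for the search itself. If you want it to score candidates on an explicit validation set instead of internal folds, scikit-learn's PredefinedSplit can express that; a sketch reusing the arrays from the snippet above:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV, PredefinedSplit

    # -1 marks samples that always stay in training; 0 marks the single
    # validation fold, so every candidate is scored on X_val.
    test_fold = np.concatenate([np.full(len(X_train), -1),
                                np.zeros(len(X_val), dtype=int)])
    X_trval = np.concatenate([X_train, X_val])
    y_trval = np.concatenate([y_train, y_val])

    search = GridSearchCV(LogisticRegression(max_iter=1000),
                          param_grid={"C": [0.1, 1.0, 10.0]},
                          cv=PredefinedSplit(test_fold))
    search.fit(X_trval, y_trval)

With refit=True (the default), the best configuration is then refit on the combined train-plus-validation data.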
