Problem

Given a dataset of 1000 samples, suppose I preprocess the data to obtain 10,000 rows, so that each original row yields 10 new samples. When training my model I would also like to be able to perform cross-validation. My scoring function computes the score on the original data, so I would like cross-validation scoring to operate on the original data as well, rather than on the generated rows. Since I am feeding the generated data to the trainer (a RandomForestClassifier), I cannot rely on cross-validation to split the data correctly according to the original samples.

What I thought about doing:

  • Create a custom feature extractor that extracts features to feed to the classifier.
  • Add the feature extractor to a pipeline and feed it to, say, GridSearchCV.
  • Implement a custom scorer that operates on the original data to score the model for a given set of parameters.

Is there a better method for what I am trying to accomplish?
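The pipeline part of the plan above can be sketched as follows. This is a minimal, hypothetical setup: the `IdentityFeatures` transformer, the synthetic data, and the parameter grid are all placeholders standing in for the real feature extractor and dataset.

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline


class IdentityFeatures(BaseEstimator, TransformerMixin):
    """Placeholder custom feature extractor; replace with real feature logic."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.asarray(X)


# Feature extractor and classifier chained into one estimator,
# so GridSearchCV tunes them together.
pipe = Pipeline([
    ("features", IdentityFeatures()),
    ("clf", RandomForestClassifier(random_state=0)),
])

# Hypothetical parameter grid; pipeline steps are addressed as "<step>__<param>".
param_grid = {"clf__n_estimators": [50, 100]}

# Synthetic data purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)

search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)
```

A custom scorer made with `sklearn.metrics.make_scorer` could then be passed via the `scoring` argument, though on its own it only sees the rows in each CV fold, which is exactly the limitation described above.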

I am asking this in connection with a competition currently running on Kaggle.


Solution

Maybe you can use stratified cross-validation (e.g. StratifiedKFold or StratifiedShuffleSplit) on the expanded samples, using the original sample index as the stratification labels, in combination with a custom scoring function that ignores the non-original samples during model evaluation.
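A minimal sketch of that idea, on a synthetic augmented dataset (the sizes, the `orig_idx` array, and the `is_original` flag are all illustrative assumptions, not part of the original question):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)

# Hypothetical expanded dataset: 100 original samples, each augmented into 10 rows.
n_orig, n_aug = 100, 10
orig_idx = np.repeat(np.arange(n_orig), n_aug)            # original-sample index per row
is_original = np.tile([True] + [False] * (n_aug - 1), n_orig)  # flags the untouched row
X = rng.normal(size=(n_orig * n_aug, 5))
y = np.repeat(rng.integers(0, 2, size=n_orig), n_aug)     # label shared by all copies

# Stratify on the original-sample index so every fold sees rows from each
# original sample (each index must occur at least n_splits times).
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

scores = []
for train, test in skf.split(X, orig_idx):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[train], y[train])
    mask = is_original[test]          # evaluate only on the original rows
    if mask.any():                    # guard against a fold with no original rows
        scores.append(clf.score(X[test][mask], y[test][mask]))
```

Note that because stratification spreads the copies of each original sample across folds, generated variants of a test sample can appear in the training split; if that leakage matters, a group-aware splitter such as GroupKFold (keyed on the same `orig_idx`) keeps all copies of an original sample in the same fold instead.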

License: CC-BY-SA with attribution
Not affiliated with StackOverflow