Question

I have a question regarding hyperparameter optimization for a machine learning algorithm.

I am trying to fit a Support Vector Classifier with hyperparameter tuning (but it could also be another classifier).

My classes are highly imbalanced (20% of one class, let's call it “red”, and 80% of the other, let's call it “black”).

Now, the objective of my hyperparameter optimization is the cross-validation loss.

If, say, 20% of the observations are “red” and 80% are “black”, then a really bad classifier could simply label every case as “black” and still achieve an average cross-validation loss of about 20%.
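For concreteness, here is a minimal sketch (assuming scikit-learn and a synthetic 20/80 dataset; the dataset and numbers are illustrative, not from my actual problem) of how a majority-class baseline already reaches roughly 20% cross-validation loss:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score

# Synthetic data: ~80% "black" (class 0), ~20% "red" (class 1)
X, y = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=0)

# Baseline that always predicts the majority class ("black")
baseline = DummyClassifier(strategy="most_frequent")

# Accuracy across 5 folds; misclassification loss is 1 - accuracy
acc = cross_val_score(baseline, X, y, cv=5, scoring="accuracy")
print(f"baseline CV loss: {1 - acc.mean():.3f}")  # roughly 0.20
```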

Now my question: if I see that the objective of my hyperparameter optimization does not fall significantly below 20%, can I infer that the model is useless without further analysis? Or is there something I am not understanding correctly?

This is happening in all the optimizations I am currently running, so I would conclude that my features are not informative.

I guess I have to change the objective of the hyperparameter optimization routine (if my package allows for that)?
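One common way to do this, sketched below under the assumption that the tuning is done with scikit-learn's GridSearchCV (the parameter grid is just a placeholder), is to pass an imbalance-aware scoring metric such as balanced accuracy (or F1, or ROC AUC) instead of plain accuracy:

```python
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Illustrative grid; the actual search space depends on the problem
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]}

search = GridSearchCV(
    SVC(class_weight="balanced"),    # also reweight classes in the SVM loss
    param_grid,
    scoring="balanced_accuracy",     # or "f1", "roc_auc", ...
    cv=StratifiedKFold(n_splits=5),  # keep the 20/80 ratio in each fold
)
search.fit(X, y)                     # X, y as in the sketch above
print(search.best_params_, search.best_score_)
```

With balanced accuracy as the objective, the trivial all-“black” classifier scores only 0.5 (perfect recall on the majority class, zero on the minority class), so improvements over the baseline become visible in the optimization objective.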

No correct solution

Licensed under: CC-BY-SA with attribution