Question

I'm writing a machine-learning solution for a problem that may have more than one suitable classifier, depending on the data. So I've collected several classifiers, each of which performs better than the others under some conditions. I'm looking into meta-classification strategies, and I see there are several algorithms. Can anyone point out the fundamental differences between them?


Solution

Voting algorithms are simple strategies where you aggregate the classifiers' decisions, for example by taking the class predicted by the majority. Stacking/grading strategies are generalizations of this concept. Instead of simply saying "OK, I have a fixed scheme v, which I will use to select the best answer among my k classifiers", you create another abstraction layer, where you actually learn to predict the correct label given the k votes.
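
For example, simple majority voting over k trained classifiers can look like the following. This is a minimal sketch assuming scikit-learn; the particular base classifiers and the toy dataset are illustrative choices, not part of the original answer:

```python
# Majority voting sketch (assumes scikit-learn; base classifiers and
# dataset are illustrative choices).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

classifiers = [
    LogisticRegression(max_iter=1000),
    GaussianNB(),
    DecisionTreeClassifier(random_state=0),
]
for clf in classifiers:
    clf.fit(X, y)

# Each column holds one classifier's predictions; the vote picks the
# most frequent label in each row.
votes = np.column_stack([clf.predict(X) for clf in classifiers])
majority = np.apply_along_axis(lambda row: np.bincount(row).argmax(), 1, votes)
```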

In short, the basic voting/stacking/grading methods can be outlined as follows:

  • voting - you have some fixed rule v that, given the answers a_1,...,a_k, returns a = v(a_1,...,a_k)
  • stacking - you use the answers as a new representation of the problem, so for each (x_i,y_i) you get (a_i_1,...,a_i_k), create the training sample ((a_i_1,...,a_i_k),y_i), and train a meta-classifier on it (see the sketch after this list)
  • grading - you train a separate meta-classifier for each of your k classifiers to predict its "classification grade" for the current point, and use that to make the decision
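
Here is what the stacking idea can look like in code. Again a minimal sketch assuming scikit-learn; out-of-fold predictions are used so the meta-classifier does not see leaked training labels, and all classifier choices and the dataset are illustrative:

```python
# Stacking sketch: the base classifiers' answers become the features
# of a new training set, and a meta-classifier is trained on it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=500, random_state=0)

base = [
    LogisticRegression(max_iter=1000),
    GaussianNB(),
    DecisionTreeClassifier(random_state=0),
]

# (a_i_1, ..., a_i_k): out-of-fold predictions of the k base classifiers.
meta_features = np.column_stack(
    [cross_val_predict(clf, X, y, cv=5) for clf in base]
)

# Train the meta-classifier on ((a_i_1, ..., a_i_k), y_i).
meta_clf = LogisticRegression()
meta_clf.fit(meta_features, y)

# To classify new points: fit the base classifiers on all data,
# collect their answers, and feed them to the meta-classifier.
for clf in base:
    clf.fit(X, y)
new_votes = np.column_stack([clf.predict(X[:5]) for clf in base])
print(meta_clf.predict(new_votes))
```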