Question

I have the feeling that these terms are often used as synonyms for one another, since they share the same goal: increasing prediction accuracy by combining different algorithms.

My question thus is: is there a difference between them? And if so, is there a book or paper that explains the difference?


Solution

Here is a paragraph I found by searching Google for "What are hybrid methods in Machine Learning":

"In general, it is based on combining two different machine learning techniques. For example, a hybrid classification model can be composed of one unsupervised learner (or cluster) to pre-process the training data and one supervised learner (or classifier) to learn the clustering result or vice versa." Alongside that example, let's consider Random Forest as an example of ensemble learning.*

In classical ensemble learning, you have different (or similar) algorithms working on different (or the same) data sets. For example, Random Forest draws bootstrap samples of the data set and builds a separate decision tree on each sample; alternatively, you can train different models on the same data set and combine them into an ensemble. In essence, you have several machine learning models working independently of each other, each producing its own prediction, and a voting scheme (hard or soft voting) determines the final prediction.
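The voting setup described above can be sketched with scikit-learn's `VotingClassifier` (the dataset here is synthetic and the choice of base models is just an illustration, not something prescribed by the answer):

```python
# Minimal sketch of classical ensemble learning with hard voting:
# several independent models each predict, and the majority class wins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each estimator is trained independently on the same training data;
# voting="hard" means the final label is the majority vote.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_train, y_train)
print(ensemble.score(X_test, y_test))
```

Note that Random Forest itself appears here as one voter, but it is also an ensemble internally: bagging over bootstrap samples of the training set.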

According to the example of a hybrid machine learning model that we saw, the component models essentially feed their output to one another (one-way) in an effort to create an efficient and accurate model. So the difference is that the models in an ensemble method work independently and vote on an outcome, while the models in a hybrid method work together to predict one single outcome, with no voting element involved.
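The hybrid pattern from the quoted paragraph can be sketched the same way: an unsupervised learner pre-processes the data and feeds its output to a supervised learner. The specific pairing below (k-means cluster IDs appended as a feature for logistic regression) is one possible instantiation, not the only one:

```python
# Minimal sketch of a hybrid model: KMeans (unsupervised) pre-processes
# the data, and its output flows one-way into a supervised classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stage 1: cluster the training data; the cluster ID becomes an extra feature.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_train)
X_train_h = np.column_stack([X_train, kmeans.predict(X_train)])
X_test_h = np.column_stack([X_test, kmeans.predict(X_test)])

# Stage 2: the supervised learner consumes the clustering output directly --
# there is a single prediction at the end, and no voting anywhere.
clf = LogisticRegression(max_iter=1000).fit(X_train_h, y_train)
print(clf.score(X_test_h, y_test))
```

The key structural difference shows up in the code: the ensemble combines several finished predictions, whereas here the first model's output is an input to the second.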

*https://www.sciencedirect.com/science/article/pii/S1568494609001215

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange