Question

I have a data set with three independent variables and one dependent variable. The dependent variable is play_golf; the independent variables are Humidity (High, Medium, Low), Pending_Chores (Taxes, None, Laundry, Car Maintenance) and Wind (High, Low). I want to aggregate the probability of playing golf based on the decision rules of multiple trees. A rule would look like: IF Humidity = "High" AND Pending_Chores = "None" AND Wind = "High" THEN play_golf = 77%.
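
For concreteness, here is a rough sketch of the kind of thing I have in mind. The rows are a made-up toy sample, and I am assuming scikit-learn's RandomForestClassifier, whose predict_proba averages the class probabilities over its individual trees:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Made-up toy rows with the columns described above.
df = pd.DataFrame({
    "Humidity":       ["High", "Low", "Medium", "High", "Low", "High"],
    "Pending_Chores": ["None", "Taxes", "None", "Laundry", "None", "Car Maintenance"],
    "Wind":           ["High", "Low", "Low", "High", "High", "Low"],
    "play_golf":      ["yes", "yes", "yes", "no", "yes", "no"],
})

# One-hot encode the categorical predictors so scikit-learn can use them.
X = pd.get_dummies(df[["Humidity", "Pending_Chores", "Wind"]])
y = df["play_golf"]

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# predict_proba averages the per-tree class probabilities, i.e. the
# "aggregate over multiple trees" behaviour I am describing.
query = pd.get_dummies(
    pd.DataFrame({"Humidity": ["High"], "Pending_Chores": ["None"], "Wind": ["High"]})
).reindex(columns=X.columns, fill_value=0)
print(dict(zip(forest.classes_, forest.predict_proba(query)[0])))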

I was thinking that a random forest would somehow weight the rules across its collection of trees and give a probability. If that doesn't make sense, can you just tell me how to get the decision rules from one tree and I will work from that (see the sketch below for roughly what I am after). I believe I am talking about a decision tree vector, but I'm not sure.
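
For a single tree, something like this is what I mean, continuing from the toy X and y above and assuming scikit-learn's DecisionTreeClassifier with export_text to print the IF/THEN structure; the per-leaf class proportions would be the percentage attached to each rule:

from sklearn.tree import DecisionTreeClassifier, export_text

# X and y as in the toy example above.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Human-readable IF/THEN structure of the fitted tree.
print(export_text(tree, feature_names=list(X.columns)))

# Each leaf's value array holds the class counts (or fractions, depending on
# the scikit-learn version); normalising gives the probability for that rule.
for leaf in sorted(set(tree.apply(X))):
    counts = tree.tree_.value[leaf][0]
    print(f"leaf {leaf}:", dict(zip(tree.classes_, counts / counts.sum())))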

Solution

Have a look at HMMs (Hidden Markov Models) as well; a concrete example of an HMM is available on Wikipedia. A decision tree is better at generalising and applying what it has learned in a new context, whereas a Markov model is better at recalling the exact learned machine state.
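
As a rough illustration of what an HMM computes: given start, transition and emission probabilities (all numbers below are invented toy values), the forward algorithm scores how likely an observed sequence is under the model's hidden-state structure.

import numpy as np

# Two hypothetical hidden states (say, "good golf day" / "bad golf day") and
# two observations (0 = played, 1 = did not play). All numbers are invented.
start = np.array([0.6, 0.4])              # initial state distribution
trans = np.array([[0.7, 0.3],             # P(next state | current state)
                  [0.4, 0.6]])
emit  = np.array([[0.8, 0.2],             # P(observation | state)
                  [0.3, 0.7]])

def forward(obs):
    """Probability of an observation sequence under the toy HMM above."""
    alpha = start * emit[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
    return alpha.sum()

print(forward([0, 0, 1]))   # P(played, played, did not play)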
