Question

How do I prune a decision tree built with ID3 when there are too few examples in the training set?

The set is too small to divide into training, validation, and test sets, so that is out of the question.

Are there any statistical methods, or something similar, that could be used instead?


Solution

Yes — when you have little data, cross-validation can be used to both train and prune your tree. The idea is fairly simple: divide your data into N sets and train your tree on N-1 of them, keeping the held-out set as your pruning test set. Then pick another of the N sets to leave out and do the same thing, repeating until every set has been left out once. That means you'll have built N trees. You use these N trees to estimate an optimal size for the tree, then train on the full dataset and prune to that size. It's more complex than I can effectively describe here, but here is an article about how to adapt cross-validation to ID3:
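As a rough illustration of the procedure above, here is a sketch using scikit-learn's cost-complexity pruning as a stand-in for ID3-style pruning (the `entropy` criterion matches ID3's information-gain splits; the iris dataset and the fold count `N` are assumptions for illustration only). Each held-out fold picks its favourite pruning strength, and the final tree is trained on all the data with the averaged value:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset
N = 10                             # number of cross-validation folds

# Candidate pruning strengths, taken from the full-data pruning path.
path = DecisionTreeClassifier(criterion="entropy").cost_complexity_pruning_path(X, y)
alphas = path.ccp_alphas

# For each left-out fold, find the pruning strength that scores best on it.
best_alphas = []
for train_idx, test_idx in KFold(n_splits=N, shuffle=True, random_state=0).split(X):
    scores = [
        DecisionTreeClassifier(criterion="entropy", ccp_alpha=a)
        .fit(X[train_idx], y[train_idx])
        .score(X[test_idx], y[test_idx])
        for a in alphas
    ]
    best_alphas.append(alphas[int(np.argmax(scores))])

# Train the final tree on the FULL dataset, pruned with the averaged strength.
final_tree = DecisionTreeClassifier(
    criterion="entropy", ccp_alpha=float(np.mean(best_alphas))
).fit(X, y)
print(final_tree.get_n_leaves())
```

The original approach averages an optimal tree *size* rather than a pruning parameter; averaging `ccp_alpha` is an approximation of the same idea that maps cleanly onto scikit-learn's API.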

Decision Tree Cross Validation

A lot of research has been done on the proper number of folds for cross-validation, and N=10 has been found to give the best results for the extra processing time. Cross-validation does increase your computation time considerably (by a factor of N), but with a small sample it compensates for the low number of examples — and since you don't have much data anyway, the extra computation isn't that expensive.
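To make the N=10 point concrete, here is a minimal sketch (again assuming scikit-learn and a placeholder dataset): the tree is fitted 10 times, once per fold, so runtime grows roughly tenfold, but every sample gets used for both training and evaluation.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # placeholder dataset

# cv=10 fits and evaluates the tree 10 times, one score per held-out fold.
scores = cross_val_score(DecisionTreeClassifier(criterion="entropy"), X, y, cv=10)
print(len(scores), scores.mean())
```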

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow