Question

I am trying to do the following with weka's MultilayerPerceptron:

  1. Train on a small subset of the training Instances for a portion of the total epochs,
  2. Train on the whole set of Instances for the remaining epochs.

However, when I do the following in my code, the network seems to reset itself and start from a clean slate the second time buildClassifier is called.

    mlp.setTrainingTime(smallTrainingSetEpochs);
    mlp.buildClassifier(smallTrainingSet);

    mlp.setTrainingTime(wholeTrainingSetEpochs);
    mlp.buildClassifier(wholeTrainingSet);

Am I doing something wrong, or is this the way that the algorithm is supposed to work in weka?

If you need more information to answer this question, please let me know. I am fairly new to programming with weka and am unsure what information would be helpful.


Solution

This thread on the weka mailing list is a question very similar to yours.

It seems that this is how weka's MultilayerPerceptron is supposed to work. It is designed as a 'batch' learner: every call to buildClassifier trains a new model from scratch, discarding whatever was learned before, whereas you are trying to use it incrementally. Only classifiers that implement the weka.classifiers.UpdateableClassifier interface can be trained incrementally, and MultilayerPerceptron is not one of them.
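As a sketch of the incremental alternative: an updateable classifier such as NaiveBayesUpdateable (which implements UpdateableClassifier) can be built once on an initial batch and then fed further instances one at a time. The file names below are placeholders for your own datasets.

```java
import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class IncrementalTraining {
    public static void main(String[] args) throws Exception {
        // Load the initial subset and the remaining data
        // ("small.arff" / "rest.arff" are placeholder paths).
        Instances small = DataSource.read("small.arff");
        small.setClassIndex(small.numAttributes() - 1);

        Instances rest = DataSource.read("rest.arff");
        rest.setClassIndex(rest.numAttributes() - 1);

        // NaiveBayesUpdateable implements UpdateableClassifier,
        // so it supports true incremental training.
        NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
        nb.buildClassifier(small); // initial batch training

        // Each call to updateClassifier refines the existing model
        // instead of rebuilding it from scratch.
        for (int i = 0; i < rest.numInstances(); i++) {
            nb.updateClassifier(rest.instance(i));
        }
    }
}
```

With MultilayerPerceptron there is no equivalent: a second buildClassifier call always restarts training, so the two-phase schedule you describe cannot be expressed this way.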

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow