Question

I'm using Mahout 0.7 to implement a recommender system.

To evaluate the quality of the recommendations I'm using the AverageAbsoluteDifferenceRecommenderEvaluator, which computes the MAE (Mean Absolute Error). With a GenericItemBasedRecommender the MAE values seem to fall between 0.0 and 1.0, but if I choose the GenericBooleanPrefItemBasedRecommender, the values are no longer within 0.0 and 1.0.

If I increase the percentage of the training dataset, the evaluation value gets bigger with GenericBooleanPrefItemBasedRecommender, which would suggest poorer recommendations.

This is how I evaluate the recommender:

RecommenderEvaluator evaluator = new AverageAbsoluteDifferenceRecommenderEvaluator();
RecommenderBuilder recommenderBuilder = new RecommenderBuilder() {
    @Override
    public Recommender buildRecommender(DataModel model) throws TasteException {
        ItemSimilarity similarity = new EuclideanDistanceSimilarity(model);
        return new GenericItemBasedRecommender(model, similarity); // or GenericBooleanPrefItemBasedRecommender
    }
};
// train on 70% of the data, evaluate on the full remainder
double evaluation = evaluator.evaluate(recommenderBuilder, null, model, 0.7, 1.0);

Why does the AverageAbsoluteDifferenceRecommenderEvaluator produce non-normalized values with the GenericBooleanPrefItemBasedRecommender, and how can I interpret them correctly?


Solution

The evaluator has nothing to do with it. It is not meaningful to assess mean absolute error with a boolean-data recommender: mean absolute error measures the difference between actual and predicted ratings, but with boolean data there are no ratings.

Instead, every input preference is assumed to have the value 1. The predicted 'rating' from GenericBooleanPrefItemBasedRecommender is essentially a sum of similarities to the items the user is already associated with, not a weighted average, so it is not bounded by the rating scale and tends to grow as more data is added. It is not a quantity that carries meaning on its own, although higher means a stronger association.

You have to use precision/recall or similar ranking metrics instead.
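
For example, a precision/recall evaluation with Mahout's GenericRecommenderIRStatsEvaluator could look roughly like the sketch below. The file path, the choice of LogLikelihoodSimilarity (which ignores preference values and so suits boolean data) and the cut-off of 10 are assumptions for illustration, not something from your question:

import java.io.File;
import java.io.IOException;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.IRStatistics;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.eval.RecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.eval.GenericRecommenderIRStatsEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericBooleanPrefItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

public class BooleanPrefEvaluation {
    public static void main(String[] args) throws IOException, TasteException {
        DataModel model = new FileDataModel(new File("data.csv")); // placeholder path

        RecommenderBuilder builder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                // log-likelihood similarity does not use preference values
                ItemSimilarity similarity = new LogLikelihoodSimilarity(model);
                return new GenericBooleanPrefItemBasedRecommender(model, similarity);
            }
        };

        RecommenderIRStatsEvaluator evaluator = new GenericRecommenderIRStatsEvaluator();
        // precision and recall at 10, letting the evaluator pick the relevance threshold
        IRStatistics stats = evaluator.evaluate(
            builder, null, model, null, 10,
            GenericRecommenderIRStatsEvaluator.CHOOSE_THRESHOLD, 1.0);

        System.out.println("Precision@10: " + stats.getPrecision());
        System.out.println("Recall@10:    " + stats.getRecall());
    }
}

Unlike the MAE score, precision and recall are always between 0.0 and 1.0, and higher values mean better recommendations.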
