Question

I'm currently using scikit-learn's GaussianNB package.

I've noticed that I can choose to return classification results in several different ways. One way to return a classification is using the predict_log_proba method.

Why would I choose to use predict_log_proba versus predict_proba versus predict?


Solution

  • predict just gives you the predicted class for each example
  • predict_proba gives you the probability of every class, and predict simply takes the class with the maximal probability
  • predict_log_proba gives you the logarithm of those probabilities, which is often handier because probabilities can become very, very small (see the sketch after this list)
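
A minimal sketch of the three methods on a toy dataset (the iris data and the variable names here are illustrative assumptions, not part of the original question):

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.naive_bayes import GaussianNB

    # Toy data, purely for illustration
    X, y = load_iris(return_X_y=True)
    clf = GaussianNB().fit(X, y)

    print(clf.predict(X[:3]))            # hard class labels, e.g. [0 0 0]
    print(clf.predict_proba(X[:3]))      # one probability per class, rows sum to 1
    print(clf.predict_log_proba(X[:3]))  # logarithm of those probabilities

    # predict_log_proba is (up to floating-point error) the log of predict_proba,
    # and predict picks the class with the highest probability.
    proba = clf.predict_proba(X[:3])
    log_proba = clf.predict_log_proba(X[:3])
    assert np.allclose(np.exp(log_proba), proba)
    assert np.array_equal(clf.predict(X[:3]), clf.classes_[np.argmax(proba, axis=1)])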

Other tips

When computing with probabilities, it's quite common to do so in log-space instead of in linear space because probabilities often need to be multiplied, causing them to become very small and subject to rounding errors. Also, some quantities like KL divergence are either defined or easily computed in terms of log-probabilities (note that log(P/Q) = log(P) - log(Q)).
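
As a small illustration of that log-space identity (the distributions P and Q below are made-up numbers, not from the answer):

    import numpy as np

    # Two example discrete distributions (arbitrary values)
    P = np.array([0.7, 0.2, 0.1])
    Q = np.array([0.5, 0.3, 0.2])

    # KL(P || Q) = sum_i P_i * log(P_i / Q_i)
    # In log-space the ratio becomes a difference: log(P/Q) = log(P) - log(Q)
    kl = np.sum(P * (np.log(P) - np.log(Q)))
    print(kl)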

Finally, Naive Bayes classifiers typically work in logspace themselves for reasons of stability and speed, so first computing exp(logP) only to get logP back later is wasteful.

Let's look at the problem first. The posterior for a feature vector {w1, w2, w3, ..., w_d} is (up to the normalizing constant):

P(y=1 | w1, w2, w3, ..., w_d) = P(y=1) * P(w1|y=1) * P(w2|y=1) * ... * P(w_d|y=1)

Let's assume some arbitrary values for the prior and each likelihood:

P(y=1 | w1, w2, w3, ..., w_d) = 0.6 * 0.2 * 0.23 * 0.04 * 0.001 * 0.45 * 0.012 * ... and so on

Here is the problematic situation when multiplying the likelihoods:

Note: in Python, a float carries only a limited number of significant digits, so the product of many small likelihoods is rounded and eventually underflows to zero; you cannot get correct results when there are many likelihoods.
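
A quick sketch of that underflow in plain Python (the likelihood values and their count below are made up purely for demonstration):

    import math

    # Multiplying many small likelihoods in linear space: the product
    # underflows to 0.0 long before we run out of features.
    likelihoods = [0.001] * 200   # made-up values, purely for demonstration
    product = 1.0
    for p in likelihoods:
        product *= p
    print(product)                 # 0.0 -- the true value (1e-600) is not representable
    print(math.prod(likelihoods))  # same result: 0.0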

To solve this problem we use log-probabilities. The nice property of the log is that it is a monotonic function (so the class with the highest probability also has the highest log-probability) and it converts multiplication into addition, which is faster and more accurate than multiplying the raw probabilities.

log(P(y=1 | w1, w2, w3, ..., w_d)) = log(P(y=1)) + log(P(w1|y=1)) + log(P(w2|y=1)) + ... + log(P(w_d|y=1))

Now the computation stays numerically stable.
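
And a sketch of the same made-up likelihoods handled in log-space:

    import math

    likelihoods = [0.001] * 200   # same made-up values as above

    # Sum the log-probabilities instead of multiplying the probabilities.
    log_posterior = sum(math.log(p) for p in likelihoods)
    print(log_posterior)  # about -1381.55, i.e. log(1e-600), with no underflow

    # Because log is monotonic, comparing log-posteriors across classes
    # picks the same class as comparing the true posteriors would.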

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow