Is it correct to define the F-measure as the harmonic mean of specificity and sensitivity in this way?

datascience.stackexchange https://datascience.stackexchange.com/questions/68973


Question

It is common to define the F-measure as a function of precision and recall, as mentioned in [1]:

$F_{\beta}=\frac{(1+\beta^2)PR}{\beta^2 P+R}$

However, I came across other cases where a different definition is used [2] (without weights):

$F = H(\text{sensitivity}, 1 - \text{specificity})$

where $H$ denotes the harmonic mean.

References:

  1. F-measure derivation (harmonic mean of precision and recall)

  2. https://link.springer.com/chapter/10.1007/978-3-540-68947-8_133

  3. https://stackoverflow.com/a/52892413/2243842

Solution

The first one is the general formula; the second is the special case you get for $\beta = 1$.

A $\beta$ value greater than 1 means we want the model to pay more attention to recall than to precision. On the other hand, a value less than 1 puts more emphasis on precision. So the weighted form is simply a generalisation that lets you penalise certain kinds of mistakes more heavily.

So to conclude: in a mathematical sense it is always correct to generalise and then derive special cases, and in that sense the first definition is preferable, since setting $\beta = 1$ gives you back the 'standard' F1 harmonic-mean formula.

http://scikit-learn.org/stable/modules/generated/sklearn.metrics.fbeta_score.html
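
A minimal sketch (the labels below are invented purely for illustration) showing that scikit-learn's fbeta_score matches the general formula above and that beta = 1 recovers f1_score:

from sklearn.metrics import fbeta_score, f1_score, precision_score, recall_score

# Toy binary labels and predictions, made up for illustration only.
y_true = [0, 1, 1, 1, 0, 1, 0, 0]
y_pred = [0, 1, 0, 1, 1, 1, 1, 0]

p = precision_score(y_true, y_pred)  # 0.6
r = recall_score(y_true, y_pred)     # 0.75

for beta in (0.5, 1.0, 2.0):
    # General weighted formula: (1 + beta^2) * P * R / (beta^2 * P + R)
    manual = (1 + beta**2) * p * r / (beta**2 * p + r)
    print(beta, fbeta_score(y_true, y_pred, beta=beta), manual)

# beta = 1 is the plain harmonic mean of precision and recall, i.e. F1.
print(f1_score(y_true, y_pred), fbeta_score(y_true, y_pred, beta=1.0))

Note how beta = 2 pulls the score towards recall (0.75 here) while beta = 0.5 pulls it towards precision (0.6), matching the explanation above.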

OTHER TIPS

Yes, as they are effectively synonyms of one another (sensitivity is simply another name for recall). See for instance this link.

If you look closely, the first formula is just the (weighted) harmonic mean of precision and recall.
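
To make that explicit (a short derivation that is only implicit in the answer): the harmonic mean of $P$ and $R$ with weights $1$ and $\beta^2$ respectively is

$\frac{1+\beta^2}{\frac{1}{P}+\frac{\beta^2}{R}}=\frac{(1+\beta^2)PR}{\beta^2 P+R}=F_{\beta}$

and for $\beta = 1$ this reduces to the unweighted harmonic mean $\frac{2PR}{P+R}$, i.e. the usual F1 score.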

Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange