F1_score(average='micro') is equal to calculating accuracy for multiclass classification
20-10-2020
Question
Is f1_score(average='micro') always the same as calculating accuracy, or is that just the case here?
I have tried with different values and they gave the same answer, but I don't have an analytical demonstration.
from sklearn.metrics import accuracy_score
from sklearn.metrics import f1_score
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
print(f1_score(y_true, y_pred, average='micro'))
print(accuracy_score(y_true,y_pred))
0.3333333333333333
0.3333333333333333
Solution
In classification tasks for which every test case is guaranteed to be assigned to exactly one class, micro-F is equivalent to accuracy.
The above answer is from: https://stackoverflow.com/questions/37358496/is-f1-micro-the-same-as-accuracy
More detailed explanation: https://simonhessner.de/why-are-precision-recall-and-f1-score-equal-when-using-micro-averaging-in-a-multi-class-problem/
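The linked explanation can be sketched directly in code. A minimal demonstration (using only the example data above, no sklearn): in a single-label multiclass problem, every misclassified sample contributes exactly one false positive (for the predicted class) and one false negative (for the true class), so the micro-summed FP and FN counts are equal, and the micro-summed TP count is just the number of correct predictions. Micro-precision, micro-recall, and therefore micro-F1 all collapse to accuracy.

```python
# Sketch: why micro-averaged F1 equals accuracy for single-label multiclass.
# Micro-averaging sums TP, FP, FN over all classes before computing the metric.

y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]

classes = set(y_true) | set(y_pred)

# TP summed over classes = number of correctly classified samples.
tp = sum(t == p for t, p in zip(y_true, y_pred))
# Each wrong prediction is one FP for the predicted class...
fp = sum(p == c and t != c for c in classes for t, p in zip(y_true, y_pred))
# ...and one FN for the true class, so fp == fn always holds here.
fn = sum(t == c and p != c for c in classes for t, p in zip(y_true, y_pred))

micro_precision = tp / (tp + fp)
micro_recall = tp / (tp + fn)
micro_f1 = 2 * micro_precision * micro_recall / (micro_precision + micro_recall)
accuracy = tp / len(y_true)

print(micro_precision, micro_recall, micro_f1, accuracy)  # all equal
```

Since precision and recall are equal, their harmonic mean (F1) equals that same value, which is exactly the accuracy. This equivalence breaks down in multilabel settings, where a sample can be assigned to more than one class and FP need not equal FN.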
Licensed under: CC-BY-SA with attribution
Not affiliated with datascience.stackexchange