F-score
In statistical analysis of binary classification and information retrieval systems, the F-score or F-measure is a measure of predictive performance. It is calculated from the precision and recall of the test, where the precision is the number of true positive results divided by the number of all samples predicted to be positive, including those not identified correctly, and the recall is the number of true positive results divided by the number of all samples that should have been identified as positive. Precision is also known as positive predictive value, and recall is also known as sensitivity in diagnostic binary classification.
The F1 score is the harmonic mean of the precision and recall. It thus symmetrically represents both precision and recall in one metric. The more generic <math>F_\beta</math> score applies additional weights, valuing one of precision or recall more than the other.
The highest possible value of an F-score is 1.0, indicating perfect precision and recall, and the lowest possible value is 0, if either precision or recall is zero.
Etymology
The F-measure is believed to be named after a different F function in Van Rijsbergen's book, from which the name was taken when the measure was introduced at the Fourth Message Understanding Conference (MUC-4, 1992).[1]
Definition
The traditional F-measure or balanced F-score (F1 score) is the harmonic mean of precision and recall:[2]
- <math>F_1 = \frac{2}{\mathrm{recall}^{-1} + \mathrm{precision}^{-1}} = 2 \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \frac{2\mathrm{tp}}{2\mathrm{tp} + \mathrm{fp} + \mathrm{fn}}</math>.
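As a concrete illustration, the following sketch computes precision, recall, and F1 from hypothetical confusion-matrix counts (the values are made up for illustration) and checks that the harmonic-mean and count-based forms above agree:

```python
# Hypothetical confusion-matrix counts, chosen only for illustration.
tp, fp, fn = 60, 20, 40

precision = tp / (tp + fp)  # 0.75
recall = tp / (tp + fn)     # 0.60

f1_harmonic = 2 / (1 / recall + 1 / precision)
f1_counts = 2 * tp / (2 * tp + fp + fn)

assert abs(f1_harmonic - f1_counts) < 1e-12
print(f1_harmonic)  # 0.666...
```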
Fβ score
A more general F score, <math>F_\beta</math>, uses a positive real factor <math>\beta</math>, chosen such that recall is considered <math>\beta</math> times as important as precision:
- <math>F_\beta = (1 + \beta^2) \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{(\beta^2 \cdot \mathrm{precision}) + \mathrm{recall}}</math>.
In terms of type I and type II errors, this becomes:
- <math>F_\beta = \frac {(1 + \beta^2) \cdot \mathrm{true\ positive} }{(1 + \beta^2) \cdot \mathrm{true\ positive} + \beta^2 \cdot \mathrm{false\ negative} + \mathrm{false\ positive}}\,</math>.
Two commonly used values for <math>\beta</math> are 2, which weighs recall higher than precision, and 0.5, which weighs recall lower than precision.
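A minimal sketch of <math>F_\beta</math> for these two values, using an assumed precision and recall pair, shows how the score shifts toward the favoured quantity:

```python
def f_beta(precision: float, recall: float, beta: float) -> float:
    """General F-score; beta > 1 favours recall, beta < 1 favours precision."""
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.9, 0.5  # assumed precision and recall, for illustration only
print(f_beta(p, r, 1.0))  # ~0.643 (balanced F1)
print(f_beta(p, r, 2.0))  # ~0.549, pulled toward the lower recall
print(f_beta(p, r, 0.5))  # ~0.776, pulled toward the higher precision
```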
The F-measure was derived so that <math>F_\beta</math> "measures the effectiveness of retrieval with respect to a user who attaches <math>\beta</math> times as much importance to recall as precision".[3] It is based on Van Rijsbergen's effectiveness measure
- <math>E = 1 - \left(\frac{\alpha}{p} + \frac{1-\alpha}{r}\right)^{-1}</math>.
Their relationship is <math>F_\beta = 1 - E</math> where <math>\alpha=\frac{1}{1 + \beta^2}</math>.
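Substituting <math>\alpha=\frac{1}{1+\beta^2}</math>, and hence <math>1-\alpha=\frac{\beta^2}{1+\beta^2}</math>, into the effectiveness measure makes the relationship explicit (writing <math>p</math> for precision and <math>r</math> for recall):
- <math>1 - E = \left(\frac{1}{1+\beta^2}\cdot\frac{1}{p} + \frac{\beta^2}{1+\beta^2}\cdot\frac{1}{r}\right)^{-1} = \frac{(1+\beta^2)\, p \, r}{\beta^2 p + r} = F_\beta</math>.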
Diagnostic testing
This is related to the field of binary classification where recall is often termed "sensitivity".
Dependence of the F-score on class imbalance
The precision-recall curve, and thus the <math>F_\beta</math> score, explicitly depends on the ratio <math>r</math> of positive to negative test cases.[4] This means that comparison of F-scores across problems with differing class ratios is problematic. One way to address this issue (see e.g., Siblini et al., 2020[5]) is to use a standard class ratio <math>r_0</math> when making such comparisons.
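A small sketch, with assumed true-positive and false-positive rates held fixed while the class ratio varies, shows the effect:

```python
# Assumed per-class rates of a fixed classifier; only the class ratio varies.
tpr, fpr = 0.8, 0.1  # true-positive rate and false-positive rate

for n_pos, n_neg in [(100, 100), (100, 1_000), (100, 10_000)]:
    tp = tpr * n_pos
    fp = fpr * n_neg
    fn = (1 - tpr) * n_pos
    precision = tp / (tp + fp)
    recall = tpr  # recall equals the true-positive rate
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{n_pos}:{n_neg} -> precision {precision:.3f}, F1 {f1:.3f}")
# F1 falls as negatives dominate, because precision degrades with the ratio.
```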
Applications
The F-score is often used in the field of information retrieval for measuring search, document classification, and query classification performance.[6] It is particularly relevant in applications which are primarily concerned with the positive class and where the positive class is rare relative to the negative class.
Earlier works focused primarily on the F1 score, but with the proliferation of large-scale search engines, performance goals changed to place more emphasis on either precision or recall,[7] and so <math>F_\beta</math> is seen in wide application.
The F-score is also used in machine learning.[8] However, the F-measures do not take true negatives into account, hence measures such as the Matthews correlation coefficient, Informedness or Cohen's kappa may be preferred to assess the performance of a binary classifier.[9]
The F-score has been widely used in the natural language processing literature,[10] such as in the evaluation of named entity recognition and word segmentation.
Properties
The F1 score is the Dice coefficient of the set of retrieved items and the set of relevant items.[11]
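A tiny check of this identity, with made-up retrieved and relevant sets:

```python
retrieved = {1, 2, 3, 4}  # made-up items returned by a search
relevant = {3, 4, 5}      # made-up items that should have been returned

overlap = len(retrieved & relevant)
precision = overlap / len(retrieved)
recall = overlap / len(relevant)

f1 = 2 * precision * recall / (precision + recall)
dice = 2 * overlap / (len(retrieved) + len(relevant))
assert abs(f1 - dice) < 1e-12  # both equal 2|A ∩ B| / (|A| + |B|)
```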
- The F1-score of a classifier which always predicts the positive class converges to 1 as the probability of the positive class increases.
- The F1-score of a classifier which always predicts the positive class is equal to 2 · proportion_of_positive_class / (1 + proportion_of_positive_class), since the recall is 1 and the precision is equal to the proportion of the positive class (see the sketch after this list).[12]
- If the scoring model is uninformative (cannot distinguish between the positive and negative class) then the optimal threshold is 0 so that the positive class is always predicted.
- The F1 score is concave in the true positive rate.[13]
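A short sketch of the always-positive-classifier property above, for an assumed positive-class proportion:

```python
pi = 0.3        # assumed proportion of positive samples
recall = 1.0    # everything is predicted positive, so there are no false negatives
precision = pi  # among the all-positive predictions, a fraction pi is correct

f1 = 2 * precision * recall / (precision + recall)
assert abs(f1 - 2 * pi / (1 + pi)) < 1e-12  # matches the closed form above
print(f1)  # ~0.462; tends to 1 as pi -> 1, illustrating the first property
```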
Criticism
David Hand and others criticize the widespread use of the F1 score since it gives equal importance to precision and recall. In practice, different types of mis-classifications incur different costs. In other words, the relative importance of precision and recall is an aspect of the problem.[14]
According to Davide Chicco and Giuseppe Jurman, the F1 score is less truthful and informative than the Matthews correlation coefficient (MCC) in binary evaluation classification.[15]
David Powers has pointed out that F1 ignores the true negatives and thus is misleading for unbalanced classes, while kappa and correlation measures are symmetric and assess both directions of predictability: the classifier predicting the true class, and the true class predicting the classifier prediction. He proposes separate multiclass measures, Informedness and Markedness, for the two directions, noting that their geometric mean is correlation.[16]
Another source of critique of F1 is its lack of symmetry: its value may change when the dataset labeling is changed, i.e. when the "positive" samples are relabeled "negative" and vice versa. This criticism is addressed by the P4 metric, which is sometimes described as a symmetrical extension of F1.[17]
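A small demonstration of this asymmetry, with made-up confusion-matrix counts:

```python
tp, fp, fn, tn = 80, 10, 20, 890  # made-up confusion-matrix entries

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

print(f1(tp, fp, fn))  # ~0.842 with the original "positive" class as target
# After swapping labels, old true negatives become true positives and the
# roles of false positives and false negatives are exchanged.
print(f1(tn, fn, fp))  # ~0.983: a different value for the same classifier
```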
Difference from Fowlkes–Mallows index
While the F-measure is the harmonic mean of recall and precision, the Fowlkes–Mallows index is their geometric mean.[18]
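A quick comparison on an assumed precision and recall pair; by the AM-GM-HM inequality, the geometric mean never falls below the harmonic mean, with equality exactly when precision equals recall:

```python
import math

p, r = 0.9, 0.5           # assumed precision and recall
f1 = 2 * p * r / (p + r)  # harmonic mean (F1), ~0.643
fm = math.sqrt(p * r)     # geometric mean (Fowlkes–Mallows), ~0.671

assert fm >= f1  # equality holds exactly when precision equals recall
```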
Extension to multi-class classification
The F-score is also used for evaluating classification problems with more than two classes (multiclass classification). In this setup, the final score is obtained by micro-averaging (biased by class frequency) or macro-averaging (taking all classes as equally important). For macro-averaging, two different formulas have been used in practice: the F-score of the (arithmetic) class-wise means of precision and recall, or the arithmetic mean of class-wise F-scores; the latter exhibits more desirable properties.[19]
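A short sketch, with hypothetical per-class precision and recall values, showing that the two macro-averaging formulas generally disagree:

```python
# Hypothetical per-class (precision, recall) pairs for a 3-class problem.
per_class = [(0.9, 0.9), (0.5, 0.1), (0.2, 0.8)]

def f1(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

# (a) F-score of the arithmetic class-wise means of precision and recall:
mean_p = sum(p for p, _ in per_class) / len(per_class)
mean_r = sum(r for _, r in per_class) / len(per_class)
print(f1(mean_p, mean_r))  # ~0.565

# (b) Arithmetic mean of the class-wise F-scores (the better-behaved variant):
print(sum(f1(p, r) for p, r in per_class) / len(per_class))  # ~0.462
```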
See also
- BLEU
- Confusion matrix
- Hypothesis tests for accuracy
- METEOR
- NIST (metric)
- Receiver operating characteristic
- ROUGE (metric)
- Uncertainty coefficient, aka Proficiency
- Word error rate
- LEPOR
References
- ↑ Template:Cite news
- ↑ Template:Cite journal
- ↑ Template:Cite book
- ↑ Template:Cite conference
- ↑ Template:Cite conference
- ↑ Template:Cite thesis
- ↑ Template:Cite conference
- ↑ See, e.g., the evaluation of the [1].
- ↑ Template:Cite arXiv
- ↑ Template:Cite conference
- ↑ Template:Cite book
- ↑ https://stats.stackexchange.com/q/390541
- ↑ Lipton, Z. C., Elkan, C. P., & Narayanaswamy, B. (2014). F1-optimal thresholding in the multi-label setting. arXiv:1402.1892.
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ Template:Cite arXiv
- ↑ Template:Cite journal
- ↑ Template:Cite arXiv
- Statistical natural language processing
- Evaluation of machine translation
- Statistical ratios
- Summary statistics for contingency tables
- Clustering criteria