
Precision, Recall and PR-AUC

 https://towardsdatascience.com/gaining-an-intuitive-understanding-of-precision-and-recall-3b9df37804a7



Precision = True Positives / All Predicted Positives

Recall = True Positives / All Actual Positives


Precision = TP / (TP + FP)

Precision doesn't care about False Negatives (it only looks at the samples predicted positive, so True Negatives and False Negatives are ignored). This means that even if many actual positives are classified as negative, precision can still be high. By using a very high threshold, precision can be pushed towards 1, so precision alone is not a good measure.
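
A minimal sketch of this (toy labels and scores assumed): raising the threshold keeps only the most confident positive predictions, which drives precision up even while many actual positives are missed.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Toy data (assumed): 4 actual positives, 4 actual negatives
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.95, 0.60, 0.40, 0.20, 0.85, 0.30, 0.10, 0.05])

for t in (0.5, 0.9):
    y_pred = (scores >= t).astype(int)          # threshold the scores
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
# threshold=0.5: precision=0.67, recall=0.50
# threshold=0.9: precision=1.00, recall=0.25  (high precision, many positives missed)
```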


Recall = TP / (TP + FN)

Recall is like the accuracy of the positive class only. It ignores False Positives, so by predicting every sample as positive, recall can be made 1. For example, if the threshold is 0 and everything is predicted positive (1), there are no False Negatives and recall is 1.

So recall alone is also not a good measure of classifier performance.
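
Continuing the same toy example (assumed data): at threshold 0 every sample is predicted positive, so FN = 0 and recall is 1, while precision collapses to the positive-class base rate.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.95, 0.60, 0.40, 0.20, 0.85, 0.30, 0.10, 0.05])

y_pred = (scores >= 0.0).astype(int)            # threshold 0: everything is positive
print(recall_score(y_true, y_pred))             # 1.0 (no False Negatives)
print(precision_score(y_true, y_pred))          # 0.5 (just the fraction of actual positives)
```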


So if we move the threshold towards 0, recall typically increases and precision decreases.

If we move the threshold towards 1, precision increases and recall decreases.


The PR curve is a plot of exactly that trade-off: we measure Precision and Recall at different thresholds (using the predicted probability, e.g. the softmax output) and plot Precision on the Y axis against Recall on the X axis. For a perfect classifier, the area under the curve (PR-AUC) is 1.
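
A minimal sketch of plotting the curve, assuming binary labels and predicted positive-class probabilities, using scikit-learn's threshold sweep (average precision is the common single-number summary of the curve):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve, auc, average_precision_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
scores = np.array([0.95, 0.60, 0.40, 0.20, 0.85, 0.30, 0.10, 0.05])

# Precision and recall at every distinct threshold in the scores
precision, recall, thresholds = precision_recall_curve(y_true, scores)
pr_auc = auc(recall, precision)                  # area under the PR curve
ap = average_precision_score(y_true, scores)     # average precision summary

plt.plot(recall, precision)
plt.xlabel("Recall")
plt.ylabel("Precision")
plt.title(f"PR curve (AUC={pr_auc:.2f}, AP={ap:.2f})")
plt.show()
```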

For multiclass problems, you compute this for each class separately (one-vs-rest), as in the sketch below.
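
A sketch of the per-class (one-vs-rest) case, assuming a 3-class problem with softmax-style probabilities (toy values):

```python
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import average_precision_score

y_true = np.array([0, 1, 2, 1, 0, 2, 2, 1])
# Rows sum to 1, as from a softmax layer (toy values assumed)
probs = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.5, 0.2],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
    [0.1, 0.3, 0.6],
    [0.4, 0.4, 0.2],
])

# One-vs-rest: treat each class as the positive class in turn
y_bin = label_binarize(y_true, classes=[0, 1, 2])
for k in range(3):
    ap = average_precision_score(y_bin[:, k], probs[:, k])
    print(f"class {k}: average precision = {ap:.2f}")
```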