
Precision, Recall, and PR-AUC

Reference: https://towardsdatascience.com/gaining-an-intuitive-understanding-of-precision-and-recall-3b9df37804a7

Precision = True Positives / All Predicted Positives = TP / (TP + FP)
Recall = True Positives / All Actual Positives = TP / (TP + FN)

Precision doesn't care about False Negatives: it only looks at the predicted-positive class, so everything predicted negative (correctly or not) is ignored. This means that even if many actual positives are classified as negative, precision can still be high. By using a high enough threshold, precision can be pushed close to 1, so precision alone is not a good measure.

Recall is like the accuracy of the positive class only. By predicting every example as positive (1), recall can be made 1. For example, if the threshold is 0 and everything is predicted positive, every actual positive is caught, so recall is perfect while precision suffers. So recall alone is also not a good measure of classifier performance.

Typically, as the threshold moves towards 0, recall increases and precision decreases; as it moves towards 1, precision increases and recall decreases.
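A minimal sketch of this trade-off (using scikit-learn, with made-up labels and scores purely for illustration): precision and recall are computed at a high and a low threshold, and PR-AUC is taken as the area under the precision-recall curve swept across all thresholds.

import numpy as np
from sklearn.metrics import (auc, precision_recall_curve,
                             precision_score, recall_score)

# Made-up ground-truth labels and predicted probabilities.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.7, 0.3, 0.6])

# A high threshold favors precision; a threshold of 0 predicts
# everything positive, so recall becomes 1.
for threshold in (0.8, 0.0):
    y_pred = (y_score >= threshold).astype(int)
    p = precision_score(y_true, y_pred, zero_division=0)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")

# PR-AUC: compute precision/recall at every threshold, then
# integrate precision over recall.
precision, recall, _ = precision_recall_curve(y_true, y_score)
pr_auc = auc(recall, precision)
print(f"PR-AUC = {pr_auc:.2f}")

Note that scikit-learn's average_precision_score is often preferred over trapezoidal auc(recall, precision) for summarizing a PR curve, since linear interpolation between PR points can be overly optimistic.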