Precision/Recall Trade-off
To understand this trade-off, let's look at how the SGDClassifier makes its classification decisions. For each instance, it computes a score based on a decision function. If that score is greater than a threshold, it assigns the instance to the positive class; otherwise it assigns it to the negative class.

Figure 3-3. In this precision/recall trade-off, images are ranked by their classifier score, and those above the chosen decision threshold are considered positive: the higher the threshold, the lower the recall, but (in general) the higher the precision.

Figure 3-3 shows a few digits positioned from the lowest score on the left to the highest score on the right. Suppose the decision threshold is positioned at the central arrow (between the two 5s): to the right of that threshold you will find 4 true positives (actual 5s) and 1 false positive.
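To make the threshold mechanics concrete, here is a minimal sketch of the same idea on a small synthetic dataset (the post's example uses the MNIST 5-detector; the data and the names X, y, and sgd_clf below are illustrative stand-ins): predict() amounts to comparing decision_function() scores against a threshold of 0, and raising that threshold trades recall for precision.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import precision_score, recall_score

# Illustrative imbalanced data standing in for the 5-detector setup
X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=42)
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X, y)

scores = sgd_clf.decision_function(X)   # one raw score per instance

# predict() is equivalent to thresholding the scores at 0
y_pred_default = (scores > 0).astype(int)
print(precision_score(y, y_pred_default), recall_score(y, y_pred_default))

# Raising the threshold keeps only the highest-scoring instances as positives:
# recall drops, while precision (in general) goes up
high_threshold = np.percentile(scores, 95)   # keep only the top 5% of scores
y_pred_strict = (scores > high_threshold).astype(int)
print(precision_score(y, y_pred_strict), recall_score(y, y_pred_strict))
```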
Precision and Recall
Scikit-Learn provides several functions to compute classifier metrics, including precision and recall:

>>> from sklearn.metrics import precision_score, recall_score
>>> precision_score(y_train_5, y_train_pred)  # == 4096 / (4096 + 1522)
0.7290850836596654
>>> recall_score(y_train_5, y_train_pred)  # == 4096 / (4096 + 1325)
0.7555801512636044

Now your 5-detector does not look as shiny as it did when you looked at its accuracy. When it claims an image represents a 5, it is correct only 72.9% of the time. Moreover, it only detects 75.6% of the 5s.

It is often convenient to combine precision and recall into a single metric called the F1 score, in particular if you need a simple way to compare two classifiers. The F1 score is the harmonic mean of precision and recall:

F1 = 2 / (1/precision + 1/recall) = 2 × (precision × recall) / (precision + recall)

Whereas the regular mean treats all values equally, the harmonic mean gives much more weight to low values.
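As a self-contained sketch (the arrays below are made up for illustration, not the post's y_train_5 and y_train_pred), Scikit-Learn's f1_score computes this harmonic mean directly, and the value matches the formula applied by hand:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])   # hypothetical labels
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])   # hypothetical predictions

p = precision_score(y_true, y_pred)   # 3 TP / (3 TP + 1 FP) = 0.75
r = recall_score(y_true, y_pred)      # 3 TP / (3 TP + 1 FN) = 0.75

# f1_score is the harmonic mean of precision and recall
print(f1_score(y_true, y_pred))
print(2 * p * r / (p + r))            # same value, computed by hand
```

Because the harmonic mean is dominated by the smaller of the two values, a classifier only gets a high F1 score if both its precision and its recall are high.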