Confusion Matrix
A much better way to evaluate the performance of a classifier is to look at the confusion matrix.
The general idea is to count the number of times instances of class A are classified as class B.
For example, to know the number of times the classifier confused images of 5s with 3s, you would look in the fifth row and third column of the confusion matrix.
To compute the confusion matrix, you first need a set of predictions so that they can be compared to the actual targets. You could make predictions on the test set, but remember that you want to use the test set only at the very end of your project, once you have a classifier that you are ready to launch. Instead, you can use the cross_val_predict() function:
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
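If you are running this section on its own, the snippet above assumes the sgd_clf, X_train, and y_train_5 objects built in the earlier sections. A minimal sketch of that assumed setup (MNIST loaded via fetch_openml, a binary "is it a 5?" target, and an untrained SGDClassifier) looks like this:

from sklearn.datasets import fetch_openml
from sklearn.linear_model import SGDClassifier

# Assumed setup, not shown in this section: load MNIST and build the
# binary "is this digit a 5?" target used throughout the example.
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
X_train, y_train = mnist["data"][:60000], mnist["target"][:60000]
y_train_5 = (y_train == '5')              # True for 5s, False for all other digits
sgd_clf = SGDClassifier(random_state=42)  # cross_val_predict() fits clones of it per fold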
Just like the cross_val_score() function, cross_val_predict() performs K-fold cross-validation, but instead of returning the evaluation scores, it returns the predictions made on each test fold. This means that you get a clean prediction for each instance in the training set ("clean" meaning that the prediction is made by a model that never saw the data during training).
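As a quick sanity check (assuming the 60,000-image training split sketched above), the result is just an array with one out-of-fold prediction per training instance:

y_train_pred.shape   # (60000,) -- one prediction per training instance
y_train_pred.dtype   # dtype('bool'), since y_train_5 is a boolean target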
Now you are ready to get the confusion matrix using the confusion_matrix() function.
Just pass it the target classes (y_train_5) and the predicted classes (y_train_pred):
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
array([[53057,  1522],
       [ 1325,  4096]])
Each row in a confusion matrix represents an actual class, while each column represents a predicted class. The first row of this matrix considers non-5 images (the negative class): 53,057 of them were correctly classified as non-5s (they are called true negatives), while the remaining 1,522 were wrongly classified as 5s (false positives). The second row considers the images of 5s (the positive class): 1,325 were wrongly classified as non-5s (false negatives), while the remaining 4,096 were correctly classified as 5s (true positives).
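If you want these four counts as variables rather than reading them off the printed matrix, a common pattern (a small sketch, reusing the y_train_pred computed above) is to flatten the 2×2 matrix with ravel():

from sklearn.metrics import confusion_matrix

# ravel() flattens the 2x2 matrix row by row into (TN, FP, FN, TP)
tn, fp, fn, tp = confusion_matrix(y_train_5, y_train_pred).ravel()
print(tn, fp, fn, tp)   # 53057 1522 1325 4096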
A perfect classifier would have only true positives and true negatives, so its confusion matrix would have nonzero values only on its main diagonal (top left to bottom right):
y_train_perfect_predictions = y_train_5 # pretend we reached perfection
confusion_matrix(y_train_5, y_train_perfect_predictions)
array([[54579,     0],
       [    0,  5421]])
The confusion matrix gives you a lot of information, but sometimes you may prefer a more concise metric. The accuracy of the positive predictions is called the precision of the classifier:

precision = TP / (TP + FP)

TP is the number of true positives and FP is the number of false positives.
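Scikit-Learn's precision_score() computes exactly this ratio; plugging in the counts from the confusion matrix above (same y_train_pred as before):

from sklearn.metrics import precision_score

precision_score(y_train_5, y_train_pred)   # 4096 / (4096 + 1522) ≈ 0.729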
A trivial way to have perfect precision is to make one single positive prediction and ensure it is correct (precision = 1/1 = 100%). But this would not be very useful, since the classifier would ignore all but one positive instance. So precision is typically used along with another metric named recall, also called sensitivity or the true positive rate (TPR): this is the ratio of positive instances that are correctly detected by the classifier.
recall = TP / (TP + FN)

FN is the number of false negatives.
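Similarly, recall_score() computes this ratio directly; with the counts from the matrix above:

from sklearn.metrics import recall_score

recall_score(y_train_5, y_train_pred)   # 4096 / (4096 + 1325) ≈ 0.756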
The confusion matrix is illustrated in Figure 2.

Figure 2. An illustrated confusion matrix showing examples of true negatives (top left), false positives (top right), false negatives (lower left), and true positives (lower right).