Measuring Accuracy Using Cross-Validation
• A good way to evaluate a model is to use cross-validation.
• Let's use the cross_val_score() function to evaluate our SGDClassifier model, using K-fold cross-validation with three folds.
• Remember that K-fold cross-validation means splitting the training set into K folds (in this case, three), then making predictions and evaluating them on each fold using a model trained on the remaining folds (a manual sketch of this loop is given at the end of this section).

    from sklearn.model_selection import cross_val_score

    cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")

    array([0.96355, 0.93795, 0.95615])

• Above 93% accuracy (ratio of correct predictions) on all cross-validation folds? This looks amazing, doesn't it?
• Before getting too excited, let's look at a very dumb classifier that just classifies every single image in the "not-5" class:
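• The code for this baseline is cut off in the excerpt (it breaks off after "from sklearn."). The sketch below is one plausible version, assuming a custom Never5Classifier built on sklearn.base.BaseEstimator; the class name and the final evaluation call are illustrative, not taken from the original. scikit-learn's DummyClassifier(strategy="most_frequent") would serve the same purpose.

    from sklearn.base import BaseEstimator
    from sklearn.model_selection import cross_val_score
    import numpy as np

    class Never5Classifier(BaseEstimator):
        """Baseline that predicts "not-5" for every image, ignoring the data."""
        def fit(self, X, y=None):
            return self                               # nothing to learn
        def predict(self, X):
            return np.zeros(len(X), dtype=bool)       # always False, i.e. "not-5"

    # Evaluate the baseline the same way as the SGDClassifier above:
    cross_val_score(Never5Classifier(), X_train, y_train_5, cv=3, scoring="accuracy")

• This baseline should score above 90% accuracy on every fold, simply because only about 10% of the images are 5s. That is the point of the comparison: accuracy is generally not a good performance measure for skewed datasets like this one.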
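• As promised above, here is roughly what cross_val_score() does under the hood with cv=3. This is a sketch under the assumption that X_train and y_train_5 are NumPy arrays and that stratified splitting is used; it is not code from the excerpt.

    from sklearn.model_selection import StratifiedKFold
    from sklearn.base import clone
    import numpy as np

    # Split the training set into 3 folds, preserving the 5 / not-5 ratio in each fold.
    skfolds = StratifiedKFold(n_splits=3)

    for train_index, test_index in skfolds.split(X_train, y_train_5):
        clone_clf = clone(sgd_clf)                    # fresh, untrained copy of the model
        X_train_folds = X_train[train_index]          # the two "remaining" folds
        y_train_folds = y_train_5[train_index]
        X_test_fold = X_train[test_index]             # the held-out fold
        y_test_fold = y_train_5[test_index]

        clone_clf.fit(X_train_folds, y_train_folds)   # train on the remaining folds
        y_pred = clone_clf.predict(X_test_fold)       # predict on the held-out fold
        print(np.mean(y_pred == y_test_fold))         # accuracy on that fold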