
Showing posts with the label Classification

About Machine Learning 1

Machine Learning: The Machine Learning Landscape, Classification, Support Vector Machines, Decision Trees, Ensemble Learning and Random Forests, Dimensionality Reduction, Clustering

👉 Machine Learning 1 Syllabus

Unit I: The Machine Learning Landscape: What Is Machine Learning? Why Use Machine Learning? Types of Machine Learning Systems, Supervised/Unsupervised Learning, Batch and Online Learning, Instance-Based Versus Model-Based Learning, Main Challenges of Machine Learning, Insufficient Quantity of Training Data, Nonrepresentative Training Data, Poor-Quality Data, Irrelevant Features, Overfitting the Training Data, Underfitting the Training Data, Stepping Back, Testing and Validating.

👉 UNIT 1(A) Notes: The Machine Learning Landscape Notes
👉 UNIT 1(A) PPTs: The Machine Learning Landscape
👉 UNIT 1(B) Notes: The Machine Learning Landscape Notes
👉 Machine Learning 1: UNIT 1(B) PPTs: The Machine Learning Landscape PPTs
👉 Machine Learning 1: UNIT 1 The Machine Learning Landscape

Machine Learning 1 Syllabus

Machine Learning Syllabus

Unit I: The Machine Learning Landscape: What Is Machine Learning? Why Use Machine Learning? Types of Machine Learning Systems, Supervised/Unsupervised Learning, Batch and Online Learning, Instance-Based Versus Model-Based Learning, Main Challenges of Machine Learning, Insufficient Quantity of Training Data, Nonrepresentative Training Data, Poor-Quality Data, Irrelevant Features, Overfitting the Training Data, Underfitting the Training Data, Stepping Back, Testing and Validating.

Unit II: Classification: Training a Binary Classifier, Performance Measures, Measuring Accuracy Using Cross-Validation, Confusion Matrix, Precision and Recall, Precision/Recall Tradeoff, The ROC Curve, Multiclass Classification, Error Analysis, Multilabel Classification, Multioutput Classification, k-NN Classifier.

Unit III: Support Vector Machines: Linear SVM Classification, Soft Margin Classification, Nonlinear SVM Classification, Polynomial Kernel, Adding Similarity Features …

About Machine Learning

Machine Learning

👉 About Machine Learning 1: The Machine Learning Landscape, Classification, Support Vector Machines, Decision Trees, Ensemble Learning and Random Forests, Dimensionality Reduction, Clustering
👉 About Machine Learning 2: Introduction, Concept Learning and the General to Specific Ordering, Decision Tree Learning, Artificial Neural Networks, Bayesian Learning, Instance-Based Learning, Genetic Algorithms, Learning Sets of Rules, Analytical Learning, Reinforcement Learning
👉 About Machine Learning 3: Introduction, Data Pre-processing, Performance Measurement of Models, Supervised Learning, Decision Tree Learning, Unsupervised Learning, Ensemble Models
👉 Machine Learning MCQs
👉 Machine Learning Programs

Measuring Accuracy Using Cross-Validation

• A good way to evaluate a model is to use cross-validation.
• Let's use the cross_val_score() function to evaluate our SGDClassifier model, using K-fold cross-validation with three folds.
• Remember that K-fold cross-validation means splitting the training set into K folds (in this case, three), then making predictions and evaluating them on each fold, using a model trained on the remaining folds.

    from sklearn.model_selection import cross_val_score
    cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
    # array([0.96355, 0.93795, 0.95615])

• Above 93% accuracy (ratio of correct predictions) on all cross-validation folds? This looks amazing, doesn't it?
• Let's look at a very dumb classifier that just classifies every single image in the "not-5" class (a sketch follows below).
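The excerpt cuts off just as this baseline is introduced. Here is a minimal sketch of such a classifier, assuming X_train and y_train_5 are the training features and the boolean "is it a 5?" labels used above; the class name Never5Classifier and its body are illustrative, not quoted from the post:

    import numpy as np
    from sklearn.base import BaseEstimator
    from sklearn.model_selection import cross_val_score

    class Never5Classifier(BaseEstimator):
        """A 'dumb' baseline that predicts "not-5" for every image."""
        def fit(self, X, y=None):
            return self  # nothing to learn

        def predict(self, X):
            return np.zeros(len(X), dtype=bool)  # always predict "not-5"

    never_5_clf = Never5Classifier()
    cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")

Because only about 10% of the images are 5s, this baseline already scores roughly 90% accuracy, which is why accuracy alone is a poor performance measure for skewed datasets.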

WEEK 9 - Write a program to implement the SVM algorithm to classify the Iris dataset. Print both correct and wrong predictions.

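A minimal sketch of one way to do this, assuming scikit-learn's built-in Iris dataset and an SVC with default settings; the variable names and the 70/30 train/test split are illustrative choices, not the post's own code:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Load the Iris dataset and hold out part of it for testing.
    iris = load_iris()
    X_train, X_test, y_train, y_test = train_test_split(
        iris.data, iris.target, test_size=0.3, random_state=42)

    # Train a support vector classifier (RBF kernel by default).
    svm_clf = SVC()
    svm_clf.fit(X_train, y_train)

    # Print each test sample as a correct or wrong prediction.
    y_pred = svm_clf.predict(X_test)
    for features, predicted, actual in zip(X_test, y_pred, y_test):
        status = "CORRECT" if predicted == actual else "WRONG"
        print(f"{status}: predicted={iris.target_names[predicted]}, "
              f"actual={iris.target_names[actual]}, features={features}")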

Classification of Iris flowers using Random Forest

Steps:
1. Importing the library files
2. Reading the Iris Dataset
3. Preprocessing
4. Split the dataset into training and testing
5. Build the model (Random Forest Model)
6. Evaluate the performance of the Model

The model in step 5 is scikit-learn's sklearn.ensemble.RandomForestClassifier (the full workflow is sketched below):

    class sklearn.ensemble.RandomForestClassifier(n_estimators=100, *, criterion='gini',
        max_depth=None, min_samples_split=2, min_samples_leaf=1,
        min_weight_fraction_leaf=0.0, max_features='sqrt', max_leaf_nodes=None,
        min_impurity_decrease=0.0, bootstrap=True, oob_score=False, n_jobs=None,
        random_state=None, verbose=0, warm_start=False, class_weight=None,
        ccp_alpha=0.0, max_samples=None)

A random forest classifier. A random forest is a …
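A minimal sketch of the six steps, assuming scikit-learn's built-in Iris dataset; the variable names, the 70/30 split, and the choice of accuracy plus a classification report for step 6 are illustrative, not the post's exact code:

    # 1. Importing the library files
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, classification_report
    from sklearn.model_selection import train_test_split

    # 2. Reading the Iris dataset
    iris = load_iris()
    X, y = iris.data, iris.target

    # 3. Preprocessing: the Iris data is already numeric with no missing
    #    values, so no scaling or imputation is strictly needed here.

    # 4. Split the dataset into training and testing
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    # 5. Build the model (Random Forest model)
    rf_clf = RandomForestClassifier(n_estimators=100, random_state=42)
    rf_clf.fit(X_train, y_train)

    # 6. Evaluate the performance of the model
    y_pred = rf_clf.predict(X_test)
    print("Accuracy:", accuracy_score(y_test, y_pred))
    print(classification_report(y_test, y_pred, target_names=iris.target_names))

Fixing random_state in both the split and the forest keeps the run reproducible; varying it gives a feel for how stable the accuracy is on this small dataset.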