
Naïve Bayes Classifier - ML Program

Naïve Bayes Classifier Steps:

1. Understand the business problem
2. Import the library files
3. Load the dataset
4. Data preprocessing
5. Split the data into train and test
6. Build the model (Naïve Bayes classifier)
7. Test the model
8. Performance measures
9. Predict the class label for new data

1. Understand the business problem

Let's build a classifier that predicts whether I should play tennis given the forecast. The forecast is described by four attributes: the outlook, the temperature, the humidity, and the presence or absence of wind. The values of all four attributes are qualitative (also known as categorical).

For a class C_k and attribute values x_1, …, x_n, the naïve Bayes classifier scores each class by

p(C_k | x_1, x_2, …, x_n) ∝ p(C_k) ∏_{i=1}^{n} p(x_i | C_k)

and predicts the class with the highest score.

2. Import the library files

3. Load the dataset

4. Data preprocessing

5. Split the data into train and test

6. Build the model (Naïve Bayes classifier)

7. Test the model

8. Performance measures

9. Predict the class label for new data
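The steps above can be sketched end to end in Python with scikit-learn. This is a minimal illustration, not the post's original program: the play-tennis table is hard-coded here as a stand-in for whatever dataset file the post loads, and `CategoricalNB` is used because all four attributes are categorical.

```python
# Sketch of steps 2-9 for the play-tennis problem.
# The DataFrame below is an assumed stand-in for the post's dataset file.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import CategoricalNB
from sklearn.metrics import accuracy_score, confusion_matrix

# 3. Load the dataset (hard-coded instead of reading a CSV)
data = pd.DataFrame({
    "Outlook":     ["Sunny", "Sunny", "Overcast", "Rain", "Rain", "Rain", "Overcast",
                    "Sunny", "Sunny", "Rain", "Sunny", "Overcast", "Overcast", "Rain"],
    "Temperature": ["Hot", "Hot", "Hot", "Mild", "Cool", "Cool", "Cool",
                    "Mild", "Cool", "Mild", "Mild", "Mild", "Hot", "Mild"],
    "Humidity":    ["High", "High", "High", "High", "Normal", "Normal", "Normal",
                    "High", "Normal", "Normal", "Normal", "High", "Normal", "High"],
    "Wind":        ["Weak", "Strong", "Weak", "Weak", "Weak", "Strong", "Strong",
                    "Weak", "Weak", "Weak", "Strong", "Strong", "Weak", "Strong"],
    "PlayTennis":  ["No", "No", "Yes", "Yes", "Yes", "No", "Yes",
                    "No", "Yes", "Yes", "Yes", "Yes", "Yes", "No"],
})

# 4. Data preprocessing: encode each categorical column as integer codes
encoders = {col: LabelEncoder().fit(data[col]) for col in data.columns}
X = pd.DataFrame({col: encoders[col].transform(data[col])
                  for col in data.columns[:-1]})
y = encoders["PlayTennis"].transform(data["PlayTennis"])

# 5. Split the data into train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# 6. Build the model; min_categories=3 guards against category codes
#    that appear only in the test split
model = CategoricalNB(min_categories=3)
model.fit(X_train, y_train)

# 7-8. Test the model and report performance measures
y_pred = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))

# 9. Predict the class label for a new forecast
new = pd.DataFrame({"Outlook": ["Sunny"], "Temperature": ["Cool"],
                    "Humidity": ["High"], "Wind": ["Strong"]})
new_enc = pd.DataFrame({col: encoders[col].transform(new[col]) for col in new})
print("PlayTennis:",
      encoders["PlayTennis"].inverse_transform(model.predict(new_enc))[0])
```

With only 14 rows the test accuracy is noisy; the point is the workflow, not the score.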

Machine Learning MCQs-2 (Performance Metrics, Linear Regression, Naïve Bayes Classifier)

1. The greater the value for ROC AUC, the better the model:
   1. True
   2. False
   Ans: 1

2. A set of data are all close to each other, and they are close to the actual value. This set of data can be described as...
   1. Accurate
   2. Precise
   3. Both precise and accurate
   4. None of the above
   Ans: 3

3. The maximum value of the ROC AUC is:
   1. 0.8
   2. 0.9
   3. 1
   4. 0
   Ans: 3

4. Recall can be increased by increasing the decision threshold. True or False?
   1. True
   2. False
   Ans: 2

5. Which of these is a good measure to decide which threshold to use?
   1. Confusion matrix
   2. F1 score
   3. ROC curve
   4. Precision & Recall versus Threshold curve
   Ans: 4

6. Which of these may have to be performed before analyzing and training the dataset?
   1. Shuffling
   2. Cross-validation
   3. F1 score
   4. None
   Ans: 1

7. For the below confusion matrix, what is the total number of training datasets?

            Not 5     5
   Not 5    53272
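Questions 4 and 5 above can be checked empirically. This small sketch uses synthetic, made-up scores (not data from the post) to show that raising the decision threshold lowers recall, and that sweeping thresholds, as a precision/recall versus threshold curve does, is how a threshold is chosen. ROC AUC is computed once from the scores, threshold-free, and is bounded by 1 (question 3).

```python
# Synthetic scores (assumed values, for illustration only): how the decision
# threshold trades precision against recall.
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true   = np.array([0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1])
y_scores = np.array([0.10, 0.20, 0.30, 0.35, 0.40, 0.45,
                     0.50, 0.60, 0.65, 0.70, 0.80, 0.90])

# Raising the threshold makes predictions stricter: precision tends to rise,
# recall falls (so MCQ 4's answer is False)
for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_scores >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_true, y_pred):.2f}, "
          f"recall={recall_score(y_true, y_pred):.2f}")

# ROC AUC summarizes ranking quality over all thresholds; its maximum is 1
print("ROC AUC:", roc_auc_score(y_true, y_scores))
```

Plotting precision and recall for every threshold (e.g. with `sklearn.metrics.precision_recall_curve`) gives the "Precision & Recall versus Threshold" curve named in question 5.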