
Machine Learning Programs

 Machine Learning Programs πŸ‘‰ Data Preprocessing in Machine Learning πŸ‘‰ Data Preprocessing in Machine learning (Handling Missing values ) πŸ‘‰ Linear Regression - ML Program - Weight Prediction πŸ‘‰ NaΓ―ve Bayes Classifier - ML Program πŸ‘‰ LOGISTIC REGRESSION - PROGRAM πŸ‘‰ KNN Machine Learning Program πŸ‘‰ Support Vector Machine (SVM) - ML Program πŸ‘‰ Decision Tree Classifier on Iris Dataset πŸ‘‰ Classification of Iris flowers using Random Forest πŸ‘‰ DBSCAN πŸ‘‰ Implement and demonstrate the FIND-S algorithm for finding the most specific hypothesis based on a given set of training data samples. Read the training data from a .CSV file πŸ‘‰ For a given set of training data examples stored in a .CSV file, implement and demonstrate the Candidate-Elimination algorithm to output a description of the set of all hypotheses consistent with the training examples. πŸ‘‰ Write a program to demonstrate the working of the decision tree based ID3 algorithm. Use an appropriate data set for building the decision tree and

TensorFlow and Keras -1

Q1. Binary classification
In order to perform binary classification on a dataset (classes 0 and 1) using a neural network, which of the options is correct regarding the outcomes of code snippets a and b? Here the labels of the observations are of the form [0, 0, 1, ...].

Common model:

import tensorflow
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
opt = SGD(learning_rate=0.01)

Code snippet a:

model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

Code snippet b:

model.add(Dense(1, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

The term "Required results" in the options means that the accur…
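As a complement to the question, a self-contained sketch of the working configuration (snippet a: one sigmoid output unit trained with binary cross-entropy) is given below. The synthetic two-feature dataset from make_blobs and the epoch/batch settings are assumptions added purely for illustration.

from sklearn.datasets import make_blobs
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# assumed synthetic data: two features, integer labels in {0, 1} as in the question
X, y = make_blobs(n_samples=1000, centers=2, n_features=2, random_state=1)

model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(1, activation='sigmoid'))  # snippet a: single sigmoid output unit
opt = SGD(learning_rate=0.01)
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

model.fit(X, y, epochs=20, batch_size=32, verbose=0)
_, acc = model.evaluate(X, y, verbose=0)
print('training accuracy: %.3f' % acc)

# Note: a single softmax output unit (snippet b) always outputs 1.0, so that
# network cannot separate the classes and its accuracy stays at the class prior.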

Machine Learning MCQs-3 (Logistic Regression, KNN, SVM, Decision Tree)

1. A Support Vector Machine can be used for
   1) Performing linear or nonlinear classification
   2) Performing regression
   3) Outlier detection
   4) All of the above
   Ans: 4

2. The decision boundary in a Support Vector Machine is fully determined (or "supported") by the instances located on the edge of the street.
   1) True
   2) False
   Ans: 1

3. Support Vector Machines are not sensitive to feature scaling.
   1) True
   2) False
   Ans: 2

4. If we strictly impose that all instances be off the street and on the right side, this is called
   1) Soft margin classification
   2) Hard margin classification
   3) Strict margin classification
   4) Loose margin classification
   Ans: 2

5. The main issues with hard margin classification are
   1) It only works if the data is linearly separable
   2) It is quite sensitive to outliers
   3) It is impossible to find a margin if the data is not linearly separable
   4) All of the above
   A…
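The points behind questions 3-5 (feature-scaling sensitivity, soft versus hard margins) can be illustrated with a short scikit-learn sketch. The make_moons toy data, the StandardScaler pipeline, and the particular C values are assumptions chosen for illustration, not part of the quiz.

from sklearn.datasets import make_moons
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# assumed toy data: a nonlinearly separable two-class problem
X, y = make_moons(n_samples=300, noise=0.2, random_state=42)

# SVMs are sensitive to feature scaling (question 3), so scale before fitting.
# C controls the margin: a large C approaches hard margin classification
# (few violations tolerated), a small C gives a softer margin (questions 4-5).
for C in (0.1, 1000):
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', C=C))
    clf.fit(X, y)
    n_sv = clf.named_steps['svc'].support_vectors_.shape[0]
    print('C=%-6s support vectors=%d  train accuracy=%.3f'
          % (C, n_sv, clf.score(X, y)))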