Machine Learning - Support Vector Machines (SVM) - MCQs
1. A Support Vector Machine can be used for
A. Performing linear or nonlinear classification
B. Performing regression
C. Performing outlier detection
D. All of the above
Ans: D
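A minimal sketch of all three use cases from Q1, using scikit-learn's SVC, SVR, and OneClassSVM classes (the toy data here is an illustrative assumption):

```python
import numpy as np
from sklearn.svm import SVC, SVR, OneClassSVM

# Toy data: 100 points, 2 features (made up for illustration)
rng = np.random.RandomState(42)
X = rng.randn(100, 2)
y_cls = (X[:, 0] + X[:, 1] > 0).astype(int)   # binary labels
y_reg = 2.0 * X[:, 0] + X[:, 1]               # continuous target

clf = SVC(kernel="rbf").fit(X, y_cls)   # (non)linear classification
reg = SVR(kernel="rbf").fit(X, y_reg)   # regression
out = OneClassSVM(nu=0.1).fit(X)        # outlier / novelty detection

print(clf.predict(X[:3]))
print(reg.predict(X[:3]))
print(out.predict(X[:3]))               # +1 = inlier, -1 = outlier
```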
2. The decision boundary in a Support Vector Machine is fully determined (or “supported”) by the instances located on the edge of the street.
A. True
B. False
Ans: A
3. Support Vector Machines are not sensitive to feature scaling.
A. True
B. False
Ans: B
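A quick sketch of why Q3's answer is False: when one feature is on a much larger scale, an RBF-kernel SVM is typically dominated by it, and standardizing the features restores the fit. The toy data and the specific scale factor are assumptions for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_bad = X.copy()
X_bad[:, 1] *= 1000.0                 # blow up one feature's scale

raw = SVC(kernel="rbf").fit(X_bad, y)
scaled = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_bad, y)

# Scaling typically recovers accuracy the unscaled model loses
print(raw.score(X_bad, y), scaled.score(X_bad, y))
```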
4. If we strictly impose that all instances be off the street and on the right side, this is called
A. Soft margin classification
B. Hard margin classification
C. Strict margin classification
D. Loose margin classification
Ans: B
5. The main issues with hard margin classification are
A. It only works if the data is linearly separable
B. It is quite sensitive to outliers
C. It is impossible to find a margin if the data is not linearly separable
D. All of the above
Ans: D
6. The objectives of Soft Margin Classification are to find a good balance between
A. Keeping the street as large as possible
B. Limiting the margin violations
C. Both of the above
D. None of the above
Ans: C
7. The balance between keeping the street as large as possible and limiting margin violations is controlled by this hyperparameter:
A. tol
B. loss
C. penalty
D. C
Ans: D
8. A smaller C value leads to a wider street but more margin violations.
A. True
B. False
Ans: A
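A sketch tying Q7 and Q8 together: for a linear SVM the street width is 2/‖w‖, so a smaller C (stronger regularization) should yield a smaller ‖w‖ and hence a wider street. The toy data and the two C values are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

w_small_C = LinearSVC(C=0.01, max_iter=10000).fit(X, y).coef_
w_large_C = LinearSVC(C=100.0, max_iter=10000).fit(X, y).coef_

# Smaller C -> smaller ||w|| -> wider street (2 / ||w||)
print(np.linalg.norm(w_small_C), np.linalg.norm(w_large_C))
```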
9. If your SVM model is overfitting, you can try regularizing it by reducing the value of
A. tol
B. The C hyperparameter
C. intercept_scaling
D. None of the above
Ans: B
10. Problems with adding polynomial features are
A. At a low polynomial degree, it cannot deal with very complex datasets
B. With a high polynomial degree, it creates a huge number of features
C. A high polynomial degree makes the model too slow
D. All of the above
Ans: D
11. The hyperparameter coef0 of SVC controls how much the model is influenced by high-degree polynomials versus low-degree polynomials.
A. True
B. False
Ans: A
12. A similarity function like the Gaussian Radial Basis Function is used to
A. Measure how many features are related to each other
B. Find the most important features
C. Find the relationship between different features
D. Measure how much each instance resembles a particular landmark
Ans: D
13. When adding features with a similarity function, and creating a landmark at the location of each and every instance in the training set, a training set with m instances and n features gets transformed to (assuming you drop the original features)
A. A training set with n instances and n features
B. A training set with m/2 instances and n/2 features
C. A training set with m instances and m features
D. A training set with m instances and n features
Ans: C
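Q13's transformation can be sketched in plain NumPy: using every training instance as a landmark with a Gaussian RBF similarity turns an (m, n) training set into an (m, m) one. The gamma value and toy data are arbitrary choices for illustration:

```python
import numpy as np

def rbf_features(X, landmarks, gamma=0.1):
    # phi[i, j] = exp(-gamma * ||x_i - l_j||^2)
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

m, n = 50, 3
X = np.random.RandomState(0).randn(m, n)
Phi = rbf_features(X, X)            # landmarks = all training instances
print(X.shape, "->", Phi.shape)     # (50, 3) -> (50, 50)
```

Each instance has similarity 1 to itself, so the diagonal of the new feature matrix is all ones.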
14. When using SVMs, we can apply an almost miraculous mathematical technique for adding polynomial features and similarity features, called the
A. Kernel trick
B. Shell trick
C. Mapping and Reducing
D. None of the above
Ans: A
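A small numeric sketch of Q14's kernel trick: the degree-2 polynomial kernel K(a, b) = (a·b)² equals the dot product of the explicitly expanded feature vectors, without ever building them. For 2-D input (x1, x2) the explicit map is (x1², √2·x1·x2, x2²); the example vectors are made up:

```python
import numpy as np

def phi(x):
    # Explicit degree-2 polynomial feature map for 2-D input
    x1, x2 = x
    return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

a, b = np.array([1.0, 2.0]), np.array([3.0, 0.5])
explicit = phi(a) @ phi(b)   # dot product in the expanded space
kernel = (a @ b) ** 2        # same value, never builds phi(a) or phi(b)
print(explicit, kernel)      # both 16.0
```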
15. Which is right for the gamma parameter of SVC, which acts as a regularization hyperparameter?
A. If the model is overfitting, increase it; if it is underfitting, reduce it
B. If the model is overfitting, reduce it; if it is underfitting, increase it
C. If the model is overfitting, keep it the same
D. If it is underfitting, keep it the same
Ans: B
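A sketch of why Q15's gamma acts like a regularization knob: a large gamma makes each instance's Gaussian bump very narrow, so the boundary can wiggle around and memorize label noise (overfitting), while a small gamma keeps it smooth. The noisy toy data and the two gamma values are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(int)
y[::10] ^= 1                        # flip some labels to add noise

hi = SVC(kernel="rbf", gamma=1000.0).fit(X, y)  # tends to memorize noise
lo = SVC(kernel="rbf", gamma=0.5).fit(X, y)     # smoother fit

# Training accuracy: the high-gamma model fits the noise too
print(hi.score(X, y), lo.score(X, y))
```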
16. LinearSVC is much faster than SVC(kernel="linear").
A. True
B. False
Ans: A
17. In SVM regression, the model tries to
A. Fit the largest possible street between two classes while limiting margin violations
B. Fit as many instances as possible on the street while limiting margin violations
C. Both
D. None of the above
Ans: B
18. The SVR class is the regression equivalent of the SVC class, and the LinearSVR class is the regression equivalent of the LinearSVC class.
A. True
B. False
Ans: A
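A sketch of Q17 and Q18 together: LinearSVR fits a street of width controlled by epsilon around its prediction and tries to put as many instances as possible inside it. The epsilon value and the noisy linear toy data are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.RandomState(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X.ravel() + 0.5 + 0.05 * rng.randn(100)   # noisy line

svr = LinearSVR(epsilon=0.2, max_iter=10000).fit(X, y)

# Fraction of instances lying inside the epsilon-wide street
inside = np.abs(svr.predict(X) - y) <= 0.2
print(svr.coef_, svr.intercept_, inside.mean())
```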