UNIT II
CNN
2. striding and padding
3. pooling layers
4. structure
5. operations and prediction of CNN with layers
6. CNN - Case study with MNIST
Q1. Sparse Connection
What does sparsity of connections mean as a benefit of using
convolutional layers?
Choose the correct answer from below:
A. Each filter is connected to every channel in the previous layer
B. Each layer in a convolutional network is connected only to two other layers
C. Each activation in the next layer depends on only a small number of activations from the previous layer
D. Regularization causes gradient descent to set many of the parameters to zero
Ans: C
Correct answer: Each activation in the next layer depends on only a small number of activations from the previous layer.
Reason: A convolutional kernel covers only a small local receptive field, so each output activation is computed from just the inputs under the kernel rather than from the entire previous layer. This sparse connectivity greatly reduces the number of parameters and the computation required.
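The saving from sparse connectivity can be made concrete with a quick count (the layer sizes below are illustrative, not taken from the question):

```python
# Fully connected layer: every output unit connects to every input unit.
inputs = 32 * 32           # e.g. a flattened 32x32 input
outputs = 28 * 28          # e.g. a 28x28 output map
dense_connections = inputs * outputs   # 802,816 connections

# A 5x5 convolution producing the same 28x28 map: each output
# activation depends on only the 25 inputs under the kernel.
kernel = 5 * 5
connections_per_activation = kernel    # 25, vs 1024 in the dense case
conv_parameters = kernel + 1           # 25 shared weights + 1 bias

print(dense_connections, connections_per_activation, conv_parameters)
```

Because the kernel weights are shared across all output positions, the convolutional layer also needs only 26 parameters in total, versus over 800,000 for the dense layer.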
Q2. Data size
As you train your model, you realize that you do not have
enough data. Which of the following data augmentation techniques can be used to
overcome the shortage of data?
Choose the correct answer from below, please note that
this question may have multiple correct answers
A. Adding Noise
B. Rotation
C. Translation
D. Color Augmentation
Ans: A, B, C, D
The correct answers are: Adding Noise, Rotation, Translation, and Color Augmentation.
Reason: All four are standard data augmentation techniques. Each produces new, label-preserving variants of existing images, effectively enlarging the training set without collecting new data.
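All four techniques can be expressed with stock TensorFlow ops. A minimal sketch, assuming TensorFlow 2.x is installed (the image here is a random stand-in for a real training image, and `tf.roll` is used as a simple circular-shift stand-in for translation):

```python
import tensorflow as tf

image = tf.random.uniform((64, 64, 3))  # stand-in for a real training image

# A. Adding Noise: perturb pixel values with Gaussian noise
noisy = image + tf.random.normal(tf.shape(image), stddev=0.05)

# B. Rotation: rotate 90 degrees counter-clockwise
rotated = tf.image.rot90(image, k=1)

# C. Translation: shift the image a few pixels (circular shift)
translated = tf.roll(image, shift=[5, 5], axis=[0, 1])

# D. Color Augmentation: change brightness
recolored = tf.image.adjust_brightness(image, delta=0.2)
```

Each augmented tensor keeps the original shape, so all four variants can be fed to the same model alongside the original image.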
Q3. Accuracy After DA
Is it possible for the training accuracy to be lower than the testing accuracy after the use of data augmentation?
Choose the correct answer from below:
A. True
B. False
Ans: A
Correct answer: True
Reason: Augmentation is applied only to the training set, so the training images are deliberately harder (rotated, noisy, shifted) than the clean test images; the model can therefore score lower on the augmented training data than on the untouched test data.
Q4. fruit augment
We are making a CNN model that classifies 5 different fruits. The distribution of the number of images is as follows:
Banana—20 images
Apple—30 images
Mango—200 images
Watermelon—400 images
Peaches—400 images
Which of the given fruits should undergo augmentation in
order to avoid class imbalance in the dataset?
Choose the correct answer from below:
A. Banana, Apple
B. Banana, Apple, Mango
C. Watermelon, Peaches
D. All the Fruits
Ans: B
Correct answer: Banana, Apple, Mango
Reason: Watermelon and Peaches already have 400 images each; augmenting the under-represented classes (Banana: 20, Apple: 30, Mango: 200) brings them closer to that count and balances the dataset.
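One common balancing strategy is to augment each minority class up to the size of the largest class. A quick count for the numbers above:

```python
counts = {"Banana": 20, "Apple": 30, "Mango": 200,
          "Watermelon": 400, "Peaches": 400}

target = max(counts.values())  # 400, the size of the largest class

# Extra (augmented) images needed per under-represented class
extra_needed = {fruit: target - n for fruit, n in counts.items() if n < target}
print(extra_needed)  # {'Banana': 380, 'Apple': 370, 'Mango': 200}
```

Only Banana, Apple, and Mango fall short of the target, which is exactly why option B is correct.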
Q5. CNN Select Again
Which among the following is False:
Choose the correct answer from below:
A. Dilated convolution increases the receptive field size when compared to the standard convolution operator
B. Dropout is a regularization technique
C. Batch normalization ensures that the weights of each of the hidden layers of a deep network are normalized
D. Convolutional neural networks are translation invariant
Ans: C
Correct answer: Batch normalization ensures that the weights of each of the hidden layers of a deep network are normalized
Reason: This statement is false. Batch normalization standardizes the activations of a layer across a mini-batch (and then applies a learned scale and shift); it does not normalize the layer's weights.
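The distinction can be seen in a few lines of NumPy: batch normalization standardizes activations per feature across the batch, with a learned scale (gamma) and shift (beta); the weights are never touched. A minimal sketch:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Standardize each feature (column) across the batch (rows)."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# A batch of 3 samples with 2 activation features each
acts = np.array([[1.0, 10.0],
                 [3.0, 30.0],
                 [5.0, 50.0]])

normed = batch_norm(acts)
# After BN, each feature has mean ~0 and variance ~1 across the batch
```

Note that the inputs to `batch_norm` are activations flowing through the network; the layer's weight matrix is unaffected by this operation.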
Q6. Reducing Parameters two methods
Which of the following are the methods for tackling
overfitting?
Choose the correct answer from below, please note that
this question may have multiple correct answers
A. Improving Network Configuration to increase parameters
B. Augmenting Dataset to decrease the number of samples
C. Augmenting Dataset to increase the number of samples
D. Improving Network Configuration to optimise parameters
Ans: C, D
Correct Answers: Augmenting Dataset to increase the number of samples; Improving Network Configuration to optimise parameters
Explanation: Overfitting is tackled by giving the model more (or more varied) training data and by reducing or regularizing its capacity. Adding parameters or shrinking the dataset would make overfitting worse, not better.
Q7. Underfitting vs Overfitting
The chart below shows the training accuracy vs. validation accuracy for a CNN model on a 5-class classification task.
What is the problem with the model and how to solve the
problem, if any?
Choose the correct answer from below:
A. Overfitting, adding More Conv2d layers
B. Underfitting, More epochs
C. Overfitting, Regularization
D. No problem
Ans: B
Correct Answer: Underfitting, More epochs
Explanation: An underfit model shows low accuracy on both the training and validation data; training for more epochs lets the model keep learning, whereas regularization and added layers are remedies aimed at overfitting.
Q8. Data augmentation effectiveness
Suppose you wish to train
a neural network to locate lions anywhere in the images, and you use a training
dataset that has images similar to the ones shown above. In this case, if we
apply the data augmentation techniques, it will be ______ as there is _______
in the training data.
Choose the correct answer from below:
A. effective, position bias
B. ineffective, angle bias
C. ineffective, position bias
D. effective, size bias
Ans: A
The correct answer is: effective, position bias.
Reason: If the training images show lions only in particular positions, the dataset has a position bias; augmentations such as translation and cropping place the lions at varied locations, so data augmentation is effective at removing that bias.
Q9. EarlyStopping
Which of the following statements is the best description of early stopping?
Choose the correct answer from below:
A. Train the network until a local minimum in the error function is reached
B. Simulate the network on a validation dataset after every epoch of training. Stop the training when the generalization error starts to increase.
C. Add a momentum term to the weight update in the Generalized Delta Rule
D. A faster version of backpropagation
Ans: B
Correct Answer: Simulate the network on a validation dataset after every epoch of training. Stop the training when the generalization error starts to increase.
Explanation: Early stopping tracks validation performance during training and halts once it stops improving (i.e., the generalization error starts to rise), preventing the model from overfitting the training set.
Q10. EarlyStopping code
Fill in the code for setting up early stopping in TensorFlow so that it monitors the validation accuracy (val_accuracy) and stops the training when there is no improvement for 2 epochs.
custom_early_stopping = EarlyStopping(
___________,
____________
)
Choose the correct answer from below:
A. monitoring='val_accuracy', min_delta=2
B. mode='val_accuracy', min_delta=2
C. monitor='val_accuracy', patience=2
D. monitoring='val_accuracy', patience=2
Ans: C
Correct Answer: monitor='val_accuracy', patience=2
Explanation: EarlyStopping takes a monitor argument naming the metric to watch and a patience argument giving how many epochs without improvement are tolerated before training stops. There is no monitoring argument, mode takes values like 'min'/'max'/'auto', and min_delta sets the minimum change that counts as an improvement, not a number of epochs.
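Putting the completed call in context, here is a minimal sketch of how the callback plugs into Keras training, assuming TensorFlow 2.x (the model/data lines are illustrative placeholders):

```python
from tensorflow.keras.callbacks import EarlyStopping

custom_early_stopping = EarlyStopping(
    monitor='val_accuracy',     # metric checked after every epoch
    patience=2,                 # stop after 2 epochs with no improvement
    restore_best_weights=True,  # optional: roll back to the best epoch
)

# Typical usage (model and data are placeholders):
# model.fit(x_train, y_train, validation_split=0.2, epochs=50,
#           callbacks=[custom_early_stopping])
```

Because val_accuracy is monitored, a validation set (e.g. via validation_split or validation_data) must be supplied to fit, or the callback has nothing to watch.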
Q11. tf dot image
How will you apply data augmentation to rotate the image 270° counter-clockwise using tf.image?
Choose the correct answer from below:
A. tf.image.rot(image)
B. tf.image.rot270(image)
C. tf.image.rot90(image, k=3)
D. tf.image.rot(image, k=3)
Ans: C
Correct Answer: tf.image.rot90(image, k=3)
Explanation: tf.image.rot90 rotates an image counter-clockwise in 90° steps, with k giving the number of steps; k=3 gives 3 × 90° = 270°. There are no tf.image.rot or tf.image.rot270 functions.
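A quick check of the k argument, assuming TensorFlow 2.x: k counts 90° counter-clockwise rotations, so k=3 gives 270° counter-clockwise and k=4 is a full turn back to the original image.

```python
import tensorflow as tf

# A tiny 2x2 single-channel image so the rotation is easy to follow
image = tf.constant([[[1.], [2.]],
                     [[3.], [4.]]])

rot270 = tf.image.rot90(image, k=3)      # 270° counter-clockwise
full_turn = tf.image.rot90(image, k=4)   # 4 x 90° = 360°: original image
```

The rotated tensor keeps the (height, width, channels) layout, which is why rot90 slots directly into image-augmentation pipelines.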