Convolutional Neural Network 1

 Q1. CNN features

Why are convolutional neural networks taking off quickly in recent times?

Choose the correct answer from below:

A.     Access to large amount of digitized data

B.     Integration of feature extraction within the training process

C.      Availability of more computational power

D.     All the above

Ans:

  • "All the above" is the correct answer.
  • With CNNs, we can access and train our models on large amounts of digitized data.
  • Unlike classical image recognition, where you define the image features yourself, a CNN takes the image's raw pixel data, trains the model, and then extracts the features automatically for better classification.
  • With CNNs, the number of training parameters is reduced significantly, and thanks to the greater computational power available in recent times, the model takes less time to train.

Q2. Recognizing a cat

For an image recognition problem (recognizing a cat in a photo), which of the following architecture of neural network would be best suited to solve the problem?

Choose the correct answer from below:

A.     Multi Layer Perceptron

B.     Convolutional Neural Network

C.      Perceptron

D.     Support Vector Machine

Ans: B

The correct answer is Convolutional Neural Network.

The Convolutional Neural Network (CNN or ConvNet) is a subtype of neural network that is mainly used for applications in image and speech recognition. Its built-in convolutional layers reduce the high dimensionality of images without losing important information. That is why CNNs are especially well suited to this use case.

Q3. CNN Layers

Which of the following statements is False?

Choose the correct answer from below:

A.     CNNs are prone to overfitting because they have fewer parameters

B.     There are no learnable parameters in Pooling layers

C.      In a max-pooling layer, the unit that contributes(maximum entry) in the forward propagation gets all the gradient in the backpropagation

D.     None of the above

Ans: A

Correct option (the false statement): CNNs are prone to overfitting because they have fewer parameters

Explanation :

  • The statement "CNNs are prone to overfitting because they have fewer parameters" is false. CNNs are prone to overfitting when they have a lot of parameters. A neural network with a lot of parameters tries to learn too many details of the training data, along with its noise, which results in poor performance on unseen or test data; this is termed overfitting.
  • There are no trainable parameters in a max-pooling layer. In the forward pass, it passes the maximum value within each window to the next layer. In the backward pass, it propagates the error from the next layer back to the position from which the max value was taken, because that is where the error comes from (see the sketch below).
  • In a max-pooling layer, the unit that contributes (the maximum entry) in the forward propagation gets all the gradient in the backpropagation. (This is true.)
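To make the pooling behaviour concrete, here is a minimal NumPy sketch (illustrative only, assuming 2×2 windows with stride 2) of a max-pooling forward pass and the corresponding backward pass, which routes each gradient entirely to the unit that produced the maximum:

```python
import numpy as np

def maxpool_forward(x, size=2, stride=2):
    """Max pooling over size x size windows; also records which unit produced each max."""
    h, w = x.shape
    out = np.zeros((h // stride, w // stride))
    mask = np.zeros_like(x)
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            window = x[i:i + size, j:j + size]
            out[i // stride, j // stride] = window.max()
            r, c = np.unravel_index(window.argmax(), window.shape)
            mask[i + r, j + c] = 1  # remember the winning unit
    return out, mask

def maxpool_backward(grad_out, mask, size=2, stride=2):
    """Route each output gradient to the unit that was the max in its window."""
    grad_in = np.zeros_like(mask, dtype=float)
    for i in range(grad_out.shape[0]):
        for j in range(grad_out.shape[1]):
            grad_in[i * stride:i * stride + size, j * stride:j * stride + size] += (
                mask[i * stride:i * stride + size, j * stride:j * stride + size]
                * grad_out[i, j]
            )
    return grad_in
```

Note that no learnable parameters appear anywhere in the layer; only the argmax mask is cached between the forward and backward passes.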

Q4. Max-Pooling necessary

Why do we use Max-pooling in Convolutional Neural Networks ?

Choose the correct answer from below, please note that this question may have multiple correct answers

A.     Reduce Resolution

B.     Extract the High intensity features

C.      Extract the low intensity features

D.     Increase Resolution

Ans: A, B

The correct answers are:

  • Reduce Resolution
  • Extract the High intensity features

Reason:

  • Max-pooling helps in extracting high-intensity features, while average pooling captures smoother features.
  • Max-pooling also helps in reducing the resolution of the input.
  • If time is not a constraint, one can skip the pooling layer and use a (strided) convolutional layer to achieve the same down-sampling.

Q5. Pixel

A Pixel means a Picture Element. It is the smallest Element of an image on a computer display. Given two different images (pixel grids, where cells have the value of pixels) of size 5×5, find out the type of image1 and image2 respectively.



Choose the correct answer from below:

A.     image1= Black and White, image2= color

B.     image1= color, image2= Black and White

C.      image1= Grayscale, image2= color

D.     image1= Black and White, image2= Grayscale

Ans: D

  • Correct answer is image1= Black and White, image2= Grayscale
  • For a binary image (Black and White), a pixel can only take a value of 0 or 255
  • In a grayscale image, a pixel can take any value between 0 and 255.

Q6. Translation invariance

Determine whether the given statement is true or false.

When a pooling layer is added to a convolutional neural network, translation invariance is preserved.

Note: Translation invariance means that the system produces the same response, regardless of how its input is shifted.

Choose the correct answer from below:

A.     True

B.     False

 

Ans: A

The correct answer is True

Reason:

  • Invariance means that we can recognize an object as an object, even when its appearance varies in some way. This is generally a good thing, because it preserves the object's identity, category, (etc.) across changes in the specifics of the visual input, like relative positions of the viewer/camera and the object.
  • Pooling helps make the representation approximately invariant to small translations of the input.
    • If we translate the input by a small amount, the values of most of the outputs do not change.
    • Pooling can be viewed as adding a strong prior that the function the layer learns must be invariant to small translations.
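As a small illustration (a hypothetical 1-D signal, not taken from the question), max pooling produces the same output when the input is shifted by one position within a pooling window:

```python
import numpy as np

def max_pool_1d(x, size=2, stride=2):
    return np.array([x[i:i + size].max() for i in range(0, len(x) - size + 1, stride)])

signal  = np.array([0, 0, 9, 1, 0, 0, 0, 0])
shifted = np.array([0, 0, 1, 9, 0, 0, 0, 0])  # the peak shifted by one position

print(max_pool_1d(signal))   # [0 9 0 0]
print(max_pool_1d(shifted))  # [0 9 0 0]  -> identical pooled output despite the shift
```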

Q7. True About Type of Padding

Which of the following are True about Padding in CNN?

Choose the correct answer from below, please note that this question may have multiple correct answers

A.     We should use valid padding if we know that information at edges is not that much useful.

B.     There is no reduction in dimension when we use zero padding.

C.      In valid padding, we drop the part of the image where the filter does not fit.

Ans: A,B,C

The correct answers are:

  1. We should use valid padding if we know that information at edges is not that much useful.
  2. There is no reduction in dimension when we use zero padding.
  3. In valid padding, we drop the part of the image where the filter does not fit.


Reason:

  • The output size of the convolutional layer shrinks depending on the input size & kernel size.
  • In zero padding, we pad zeros around the image's border to save most of the information, whereas, in valid padding, we lose out on the information that doesn't fit in filters.
  • There is no reduction in dimension when we use zero padding.
  • To sum up, Valid padding means no padding. The output size of the convolutional layer shrinks depending on the input size & kernel size. On the contrary, 'zero' padding means using padding.
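As a quick illustration of this (using a hypothetical 32×32 input and 3×3 kernel, not values from the question):

```python
def conv_output_size(w, f, s=1, p=0):
    """Output width for a square input: (W - F + 2P) // S + 1."""
    return (w - f + 2 * p) // s + 1

# Valid padding (p = 0): the output shrinks and edge pixels are dropped.
print(conv_output_size(w=32, f=3, p=0))  # 30

# Zero ("same") padding with p = (F - 1) // 2: the dimension is preserved.
print(conv_output_size(w=32, f=3, p=1))  # 32
```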

Q8. CNN with benefits

What are the benefits of using Convolutional Neural Network(CNN) instead of Artificial Neural Network(ANN)?

Choose the correct answer from below, please note that this question may have multiple correct answers

A.     Reduce the number of units in the network, which means fewer parameters to learn and decreased computational power is required

B.     Increase the number of units in the network, which means more parameters to learn and increase chance of overfitting.

C.      They consider the context information in the small neighborhoods.

D.     CNN uses weight sharing technique

Ans: A, C,D

Correct options:

  • Reduce the number of units in the network, which means fewer parameters to learn and decreased computational power is required
  • They consider the context information in the small neighborhoods
  • CNN uses weight sharing technique.

Explanation :

  • CNNs usually have far fewer parameters than fully connected ANNs, which means fewer parameters to learn and less computational power required.
  • CNNs consider the context information and pixel dependencies in a small neighborhood, and thanks to this they achieve better predictions on data like images.
  • Weight sharing decreases the number of parameters and also makes the feature search insensitive to feature location in the image. This results in a more generalized model and thus also works as a regularization technique.

Q9. Applying Max pooling

If we pass a 2×2 max-pooling filter over the given input with a stride of 2, find the value of W, X, Y, Z?


Choose the correct answer from below:

A.     W = 8, X = 6, Y= 9, Z=6​

B.     W = 9, X = 8, Y= 8, Z=6​

C.      W = 6, X = 9, Y= 8, Z=8​

D.     W = 9, X = 8, Y= 8, Z=9​

 

Ans: B

The correct answer is W = 9, X = 8, Y= 8, Z=6

  • Our first 2 × 2 region is highlighted in yellow, and we can see the max value of this region is 6.
  • Next 2 × 2 region is highlighted in blue, and we can see the max value of this region is 9.
  • Similarly, we will do this for all the 2×2 sub-matrices highlighted in different colors.

Q10. Difference in output size

What is the difference between the output sizes of the two given models for an input image of size 100×100? The number of filters, filter size, and stride for each layer are given in the figure. (Take padding = 0)





Note: The answer is the difference between the final output sizes of Model1 and Model2.

Example: Say the final convolution of Model1 is 10 x 10 x 30 = 3000 and Model2 is 20 x 20 x 14 = 5600
Answer = 5600 - 3000 = 2600

Choose the correct answer from below:

A.     1392

B.     1024

C.      6876

D.     500

Ans: B

The correct answer is 1024

The result size of a convolution after 1 layer will be (W – F + 2P) /S + 1.

For model 1,

Step1 - Input = 100 x 100, filter = 15, filter size = 3 x 3, strides = 1

Answer = (100 - 3 + (2x0))/1 + 1 = 98

Step1_output =  98 x 98 x 15


Step2 - Input = 98 x 98, filter = 42, filter size = 6 x 6, strides = 4

Answer = (98 - 6 + (2x0))/4 + 1 = 24

Step2_output =  24 x 24 x 42


Step3 - Input = 24 x 24, filter = 30, filter size = 3 x 3, strides = 3

Answer = (24 - 3 + (2x0))/3 + 1 = 8

Step3_output =  8 x 8 x 30

final_model1_output = 8 x 8 x 30 = 1920


——————————————————————————

For model 2,

Step1 - Input = 100 x 100, filter = 5, filter size = 6 x 6, strides = 1

Answer = (100 - 6 + (2x0))/1 + 1 = 95

Step1_output =  95 x 95 x 5


Step2 - Input = 95 x 95, filter = 11, filter size = 3 x 3, strides = 4

Answer = (95 - 3 + (2x0))/4 + 1 = 24

Step2_output =  24 x 24 x 11


Step3 - Input = 24 x 24, filter = 14, filter size = 3 x 3, strides = 3

Answer = (24 - 3 + (2x0))/3 + 1 = 8

Step3_output =  8 x 8 x 14

final_model2_output = 8 x 8 x 14 = 896

Therefore, difference in output size will be 1920 – 896 = 1024.
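The same calculation can be scripted; a small sketch with the layer tuples transcribed from the worked solution above:

```python
def conv_output_size(w, f, s, p=0):
    """(W - F + 2P) // S + 1 for a square input."""
    return (w - f + 2 * p) // s + 1

def final_output_elements(input_size, layers):
    """layers: list of (num_filters, filter_size, stride) tuples, padding = 0."""
    size, depth = input_size, None
    for num_filters, filter_size, stride in layers:
        size = conv_output_size(size, filter_size, stride)
        depth = num_filters  # the last layer's filter count sets the output depth
    return size * size * depth

model1 = [(15, 3, 1), (42, 6, 4), (30, 3, 3)]
model2 = [(5, 6, 1), (11, 3, 4), (14, 3, 3)]

out1 = final_output_elements(100, model1)  # 8 x 8 x 30 = 1920
out2 = final_output_elements(100, model2)  # 8 x 8 x 14 = 896
print(out1 - out2)                         # 1024
```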

Q11. Horizontal Edges

Perform a default Horizontal edge detection on the given image and choose the correct option?

Note : Here Stride = 1, Padding = Valid







Choose the correct answer from below:

A.     A

B.     B

C.      C

D.     D

Ans: A




Therefore, correct option is A
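The grids above are shown only as images, so the exact values cannot be reproduced here, but the operation itself can be sketched with SciPy. The snippet below assumes a hypothetical 5×5 image and the common Sobel horizontal-edge kernel:

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 5x5 image: bright top half, dark bottom half.
image = np.array([
    [10, 10, 10, 10, 10],
    [10, 10, 10, 10, 10],
    [ 0,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0],
])

# A common horizontal-edge (Sobel) kernel.
kernel = np.array([
    [ 1,  2,  1],
    [ 0,  0,  0],
    [-1, -2, -1],
])

# Stride = 1, padding = "valid" (no padding), as stated in the question.
# Note: convolve2d performs a true convolution (kernel flipped); most
# deep-learning libraries implement cross-correlation instead.
edges = convolve2d(image, kernel, mode="valid")
print(edges.shape)  # (3, 3): a 5x5 input with a 3x3 kernel and valid padding
print(edges)
```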

Q12. Dimensionality Reduction

Jay is working on an image resizing algorithm. He wants to reduce the dimensions of an image, and he takes inspiration from the course he took on Scaler related to Data Science, where he was taught about CNNs. Which of these options might be useful in the dimensionality reduction of an image?

Choose the correct answer from below, please note that this question may have multiple correct answers

A.     Convolution Layer

B.     ReLU Layer

C.      Sigmoid

D.     Pooling Layer

Ans: A,D

Correct options:

  • Convolution Layer
  • Pooling Layer

Explanation :

  • The convolution layer helps in dimensionality reduction because it can decrease the size of the input depending on the kernel size, stride, etc.
  • The pooling layer also decreases the size; for example, max pooling keeps only the maximum value within each kernel-sized window.
  • ReLU and sigmoid are just activation functions; they don't affect the shape of the feature map.

 

 

 

Neural Network 3

 Q1. Complete the code



For the above code implementation of forward and backward propagation for the sigmoid function, complete the backward pass [????] to compute analytical gradients.

Note: grad in backward is actually the output error gradients.

Choose the correct answer from below:

A.     grad_input = self.sig * (1-self.sig) * grad

B.     grad_input = self.sig / (1-self.sig) * grad

C.      grad_input = self.sig / (1-self.sig) + grad

D.     grad_input = self.sig + (1-self.sig) - grad

Ans: A

Correct Answer : grad_input = self.sig * (1-self.sig) * grad

Explanation : The grad_input will be given by :



dZ = dA · σ(x) · (1 − σ(x))

  • dZ = the error introduced by input Z.
  • dA = the error introduced by output A.
  • σ(x) · (1 − σ(x)) = the derivative of the sigmoid activation function, where σ(x) represents the sigmoid function.
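The code referred to in the question is shown only as an image; a minimal sketch of what such a sigmoid layer might look like, with the answer filled into the backward pass, is given below:

```python
import numpy as np

class Sigmoid:
    """Minimal sketch of a sigmoid layer with forward and backward passes."""

    def forward(self, x):
        # Cache the activation; it is reused in the backward pass.
        self.sig = 1.0 / (1.0 + np.exp(-x))
        return self.sig

    def backward(self, grad):
        # grad is the output error gradient (dA) flowing back from the next layer.
        # Chain rule: dZ = dA * sigma(x) * (1 - sigma(x))
        grad_input = self.sig * (1 - self.sig) * grad
        return grad_input
```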

Q2. Trained Perceptron

A perceptron was trained to distinguish between two classes, "+1" and "-1". The result is shown in the plot given below. Which of the following might be the reason for poor performance of the trained perceptron?




Choose the correct answer from below:

A.     The perceptron can not separate linearly separated data

B.     The perceptron works only if the two classes are linearly separable which is not the case here.

C.      The smaller learning rate with less number of epochs of perceptron could have restricted it from producing good results.

D.     The "-1" class dominates the dataset, thereby pulling the decision boundary closer to itself.

Ans:C

Correct option: The smaller learning rate with less number of epochs of perceptron could have restricted it from producing good results.

Explanation:

  • The number of samples in both classes is sufficient, and the difference between their counts is not significant enough to cause misclassification.
  • Since the dot product between the weights "w" and the input "x" is linear in x, the perceptron is a linear classifier. It is not capable of separating classes that are not linearly separable.
  • Observing the result, the classes appear to be linearly separable with a few exceptions. For linearly separable classes, the algorithm is guaranteed to converge to the correct decision boundary.
  • Also, the decision boundary is not pulled towards class "-1" because of a majority; both classes seem to have a fairly equal number of training samples.
  • Since the model underfits the data, the number of training epochs is likely too low or the learning rate too small, making the model perform poorly.

Q3. Identify the Function

Mark the correct option for the below-mentioned statements:

(a) It is possible for a perceptron to add up all the weighted inputs it receives and, if the sum exceeds a specific value, output a 1; otherwise, it outputs a 0.

(b) Both artificial and biological neural networks learn from past experiences.

 

Choose the correct answer from below:

A.     Both the mentioned statements are true.

B.     Both the mentioned statements are false.

C.      Only statement (a) is true.

D.     Only statement (b) is true.

Ans: A

Correct option: Both the statements are true.

Explanation :

The behaviour described in statement (a) is the step (threshold) activation function, and yes, it is possible.

Both artificial and biological neural networks learn from past experiences.
Artificial networks are trained on data to make predictions. The weights assigned to each neuron continuously
change during the training process to reduce the error.

Q4. Find the Value of 'a'

Given below is a neural network with one neuron that takes two float numbers as inputs.


If the model uses the sigmoid activation function, What will be the value of 'a' for the given x1 and x2 _____(rounded off to 2 decimal places)?

Choose the correct answer from below:

A.     0.57

B.     0.22

C.      0.94

D.     0.75

Ans:  A

Correct option :

  • 0.57

Explanation :

The value of z will be :

  • z = w1·x1 + w2·x2 + b
  • z = (0.5×0.55) + (−0.35×0.45) + 0.15 = 0.2675

 

The value of a will be :

  • a = f(z) = σ(0.2675) = 1 / (1 + e^(−0.2675)) = 1 / 1.76529 = 0.5665 ≈ 0.57
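As a quick numerical check (weights, inputs, and bias taken from the worked solution above):

```python
import numpy as np

w1, x1 = 0.5, 0.55
w2, x2 = -0.35, 0.45
b = 0.15

z = w1 * x1 + w2 * x2 + b        # 0.2675
a = 1.0 / (1.0 + np.exp(-z))     # sigmoid(0.2675) ~= 0.5665
print(round(z, 4), round(a, 2))  # 0.2675 0.57
```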

 

Neural network 4

 Q1. Tanh and Leaky ReLu

Which of the following statements with respect to Leaky ReLu and Tanh are true?

a. When the derivative becomes zero in the case of negative values in ReLu, no learning happens which is rectified in Leaky ReLu.

b. Tanh is a zero-centered activation function.

c. Tanh produces normalized inputs for the next layer which makes training easier.

d. Tanh also has the vanishing gradient problem.

Choose the correct answer from below:

A.     All the mentioned statements are true.

B.     All the mentioned statements are true except c.

C.      All the mentioned statements are true except b.

D.     All the mentioned statements are true except d.

Ans: A

Correct options: All the mentioned statements are true.

Explanation :

1) The problem of no learning in the case of ReLu is called dying ReLu, which Leaky ReLu takes care of.

2) Yes, tanh is a zero-centered activation function.

3) As Tanh is symmetric and its mean is around zero, it produces normalized inputs (between -1 and 1) for the next layer, which makes training easier.

4) As Tanh is also a sigmoidal (saturating) function, it also faces the vanishing gradient problem.

 

Q2. Dog and cat classifier

You are building a binary classifier for recognizing dogs (y=1) vs. cats (y=0). Which one of these is the best activation function for the output layer?

Choose the correct answer from below:

A.     ReLU

B.     Leaky ReLU

C.      sigmoid

D.     Tanh

Ans: C

Correct option : sigmoid
Explanation : Sigmoid function outputs a value between 0 and 1 which makes it a very good choice for binary classification. You can classify as 0 if the output is less than 0.5 and classify as 1 if the output is more than 0.5. We can also change this threshold value.
It can be done with tanh as well but it is less convenient as the output is between -1 and 1.

Q3. Maximum value of derivates

The figure shows two columns: one listing activation functions and the other listing maximum values of first-order derivatives. Map each function on the left to the correct maximum derivative value on the right.

Choose the correct answer from below:

A.     1-d, 2-c, 3-b, 4-a

B.     1-b, 2-c, 3-d, 4-a

C.      1-c, 2-b, 3-d, 4-a

D.     1-b, 2-d, 3-d, 4-d

Ans: D

Correct option : 1-b, 2-d, 3-d, 4-d.

Explanation :

The derivative of the sigmoid function is sigmoid(x)·(1 − sigmoid(x)); its maximum value is 0.25, attained at sigmoid(x) = 0.5, i.e. x = 0.

The derivative of tanh is 1 − tanh²(x); its maximum is 1, attained at tanh(x) = 0, i.e. x = 0.

The derivative of ReLU is 1 for all positive values of x and 0 for all negative values.

The derivative of Leaky ReLU is 1 for all positive values. For negative values, if Leaky ReLU outputs 0.5 × (input), the slope, and hence the derivative, is 0.5.


For both ReLU and leaky ReLU, the maximum derivative value is 1.
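A quick numerical check of these maxima (using a negative slope of 0.01 for Leaky ReLU; the explanation above used 0.5 purely as an example):

```python
import numpy as np

x = np.linspace(-10, 10, 200001)

sig = 1 / (1 + np.exp(-x))
d_sigmoid = sig * (1 - sig)           # peaks at 0.25 when x = 0
d_tanh = 1 - np.tanh(x) ** 2          # peaks at 1.0 when x = 0
d_relu = (x > 0).astype(float)        # 1 for x > 0, else 0
d_leaky = np.where(x > 0, 1.0, 0.01)  # 1 for x > 0, else the negative slope

print(d_sigmoid.max(), d_tanh.max(), d_relu.max(), d_leaky.max())
# ~0.25, ~1.0, 1.0, 1.0
```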

Q4. Leaky relu advantages

What are the advantages of using Leaky Rectified Linear Units (Leaky ReLU) over normal ReLU in deep learning?

Choose the correct answer from below, please note that this question may have multiple correct answers

A.     It fixes the “dying ReLU” problem, as it doesn’t have zero-slope parts.

B.     Leaky ReLU always slows down training.

C.      It increases the “dying ReLU” problem, as it doesn’t have zero-slope parts.

D.     Leaky ReLU help the gradients flow easier through the architecture.

Ans: A, D

Correct options:

  • It fixes the “dying ReLU” problem, as it doesn’t have zero-slope parts
  • Leaky ReLU helps the gradients flow easier through the architecture.

Explanation:

  • Leaky ReLU is a variant of the ReLU activation function, which is commonly used in deep learning. Its key advantage over normal ReLU is that it avoids the "dying ReLU" problem, which occurs when a large number of neurons in a network become inactive and stop responding to inputs.
  • This can happen when the input to a neuron is negative and ReLU is used, since ReLU sets negative inputs to zero. In contrast, Leaky ReLU allows a small, non-zero gradient for negative input values, which helps prevent neurons from becoming inactive, improves the overall performance of the network, and lets gradients flow more easily through the architecture.
  • As for the impact on training, it depends on the context and the specific problem. In some cases, Leaky ReLU speeds up training by preventing the dying ReLU problem, which can be useful with sparse or highly imbalanced data. In other cases, it may slow training down by introducing more complex non-linearity, making optimization harder.
  • Additionally, Leaky ReLU has been shown to outperform other variants of ReLU on some benchmarks, so it may be a better choice in some cases.

 

Q5. No Activation Function

What if we do not use any activation function(s) between the hidden layers in a neural network?

Choose the correct answer from below:

A.     It will still capture non-linear relationships.

B.     It will just be a simple linear equation.

C.      It will not affect.

D.     Can't be determined.

Ans: B

Correct option : It will just be a simple linear equation.

Explanation :

The main aim of this question is to understand why we need activation functions in a neural network.

Following are the steps performed in a neural network:

Step 1: Calculate the sum of all the inputs (X) according to their weights and include the bias term:
Z = (weights · X) + bias

Step 2: Apply an activation function to calculate the expected output:
Y=Activation(Z)

Steps 1 and 2 are performed at each layer. This is a forward propagation.

Now, what if there is no activation function?
Our equation for Y becomes:
Y = Z = (weights · X) + bias

This is just a simple linear equation. A linear equation will not be able to capture the complex patterns in the data.
To capture non-linear relationships, we use activation functions.
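This collapse can be verified directly: stacking two layers with no activation in between is mathematically identical to a single linear layer. A small NumPy sketch with arbitrary layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))

# Two "hidden layers" with no activation function in between.
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)
W2, b2 = rng.normal(size=(6, 3)), rng.normal(size=3)
two_layers = (x @ W1 + b1) @ W2 + b2

# The same mapping collapses into a single linear layer.
W = W1 @ W2
b = b1 @ W2 + b2
one_layer = x @ W + b

print(np.allclose(two_layers, one_layer))  # True
```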

Q6. Trainable parameters

What is the number of trainable parameters in the neural network given below:


Note: The network is not fully connected and the trainable parameters include biases as well.

Choose the correct answer from below:

A.     17

B.     15

C.      10

D.     20

Ans:B

Correct option : 15

Explanation :

The network is not fully connected, so the weights correspond to the connections drawn between neurons, which are 10 in total.
For biases, we have 4 for the neurons in the hidden layer and 1 for the neuron in the output layer, which in total gives us 10 + 4 + 1 = 15.

Note: The network shown in the image is purely for teaching purposes. We won't encounter neural networks like these in real life.

 Q7. Number of connections

The number of nodes in the input layer of a fully connected neural network is 10 and the hidden layer is 7. The maximum number of connections from the input layer to the hidden layer are :

Choose the correct answer from below:

A.     70

B.     less than 70

C.      more than 70

D.     It is an arbitrary value

Ans: A

Correct option : 70.

Explanation :

  • Since an MLP is a fully connected directed graph, the maximum number of connections is the product of the number of nodes in the input layer and the hidden layer.
  • The total number of connections = 10 × 7 = 70.

 Q8. How many parameters?

For a neural network consisting of an input layer, 2 hidden layers, and one output layer, what will be the number of parameters if each layer is dense and has a bias associated with it?


Choose the correct answer from below:

A.     24

B.     44

C.      51

D.     32

 Ans: B

Correct option : 44

Explanation :

The number of parameters contributed by each dense layer is given by: (i × o) + o
where i = number of inputs to the layer
o = number of neurons in the layer (one bias per neuron)

For the first hidden layer, each of the 5 inputs is connected to each of the 3 units, so there are 5 x 3 + 3 (one bias per hidden unit) = 18 parameters.

For the second hidden layer, 3 x 4 + 4 = 16
For the output layer, 4 x 2 + 2 = 10
Therefore total = 18 + 16 + 10 = 44.
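The count can also be verified with a framework; a sketch assuming TensorFlow/Keras is available and that the layer sizes are 5 → 3 → 4 → 2 as in the figure:

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(5,)),            # 5 input features
    layers.Dense(3, activation="relu"),  # 5*3 weights + 3 biases = 18
    layers.Dense(4, activation="relu"),  # 3*4 weights + 4 biases = 16
    layers.Dense(2),                     # 4*2 weights + 2 biases = 10
])

print(model.count_params())  # 44
```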
