Deep Learning: UNIT 2: CNN: Short Answer Questions

  UNIT II
CNN
Short Answer Questions

---------------------------------------------------------------------------------------------------------------------------

1.     List the applications of CNN.

2.     Define Convolution.

3.     Define Stride.

4.     Explain padding.

5.     What is the purpose of padding?

6.     Define Kernel.

7.     Define Pooling.

8.     List the different pooling techniques.

9.     Define Flattening.

10.  Define Fully Connected Layer.

11.  List the differences between a CNN and a Fully Connected Layer.

12.  What is a filter (or kernel) in the context of a CNN?

13.  Discuss the role of fully connected layers in CNNs.

14.  Explain the concept of pooling in CNNs and how it sometimes impacts output size and can cause underfitting.


15.  What is the purpose of the pooling layer in a CNN?


16.  What are the two special cases of padding? Explain them with a neat diagram.


17.  Discuss the concept of convolution in CNNs.


18.  How do filters (kernels) help in feature extraction, and how are these filters learned during the training process?


19.  What is the role of pooling in CNNs?

20.  How does pooling in CNNs reduce the spatial dimensions of feature maps?

21.  Discuss the trade-offs between using smaller and larger pooling windows in CNNs.


22.  How does the choice of pooling size affect the information retained in the feature maps?


23.  Compare max pooling and average pooling, and explain how pooling layers help in reducing the dimensionality of feature maps.


24.  How does pooling help to control the size of the output feature map in a Convolutional Neural Network (CNN)?

 

Deep Learning: UNIT-2 : CNN: Long Answer Questions

UNIT II

CNN

Long Answer Questions

------------------------------------------------------------------------------------------------------

1.     Explain CNN with an example.

2.     List the different applications of CNN.

3.     Write an example function for Convolution and Pooling operations and explain in detail.

4.     Draw and explain the architecture of convolutional neural networks.

5.     Explain about the convolutional layers in CNN.

6.     Explain striding and padding in CNN with example.

7.     Draw the structure of CNN.

8.     Apply CNN architecture to Classify MNIST Hand Written Dataset.

9.     List the differences between CNN and Fully Connected Layers.

Deep Learning: UNIT-1: Deep Learning Fundamentals - Short Answer Questions

 UNIT-1

Deep Learning: Fundamentals

Short Answer Questions

-----------------------------------------------------------------------------------------------------------------------------

1.     Define Artificial Neural Network.


2.     Define Neuron.


3.     List the operations performed by ANN layers.


4.     List the different applications of Deep Learning.


5.     Define Deep Learning.


6.     List the different applications of Artificial Neural Network.


7.     List the Building Block of Neural Networks.


8.     Define Dense layer.


9.     What is a loss function?


10.  Identify the different layers in ANN.


11.  Explain Forward Pass.


12.  Explain Backward Pass.

13.  List the different optimizers.

14.  How to overcome the vanishing and exploding gradient problems?

15.  List the difficulties in convergence.

16.  Define Preprocessing.

17.  Define Momentum.

18.  What is Learning Rate Decay?

19.  What is the purpose of weight initialization?

20.  What is Regularization?

21.  List different Regularization techniques.

22.  Define Dropout.

23.  Define SoftMax activation function.

24.  When do we use the cross-entropy loss function?

25.  List the different activation functions.

26.  Define sigmoid activation function.

27.  Define tanh activation function.

28.  Define ReLU activation function.

29.  How to train the neural network?


30.  Compare the ReLU activation function with the sigmoid activation function.

Deep Learning UNIT-1 Deep Learning: Fundamentals: Long Answer Questions

 UNIT-1

Deep Learning: Fundamentals

Long Answer Questions

-----------------------------------------------------------------------------------------------------------------------------

1.     Explain Artificial Neural Network with example.

2.     List the different applications of Artificial Neural Network.

3.     Explain Building Block of Neural Networks with an example.

4.     Discuss Multi-Layer Perceptron (MLP) with an example.

5.     Identify the different layers in ANN. Explain them.

6.     Explain Forward Pass.

7.     Explain Backward Pass.

8.     Explain back propagation algorithm with an example.

9.     List the different optimizers. Explain them.

10.  What are the vanishing and exploding gradient problems? How to overcome them? Explain.

11.  List the difficulties in convergence. How to achieve convergence? Explain.

12.  Explain Preprocessing.

13.  Explain Momentum.

14.  What is Learning Rate Decay? Explain.

15.  What is the purpose of weight initialization? Explain.

16.  What is the purpose of Regularization? Explain different techniques of Regularization.

17.  Explain Dropout with example.

18.  Explain softmax activation function with example.

19.  When do we use the cross-entropy loss function? Explain.

20.  List the different activation functions. Explain them.

21.  How to train the neural network? Explain.

 

Deep Learning: UNIT-II: CNN: 7. CNN vs Fully Connected

 7. CNN vs Fully Connected

  • The basic difference between the two types of layers is the density of the connections. FC layers are densely connected, meaning that every output neuron is connected to every input neuron. In a Conv layer, by contrast, the neurons are not densely connected: each is connected only to the neighbouring inputs that fall within the width of the convolutional kernel.
  • A second main difference between them is weight sharing. In an FC layer, every input-output connection has its own weight. In a Conv layer, the same kernel weights are reused (shared) across positions, which keeps the parameter count small even when the number of neurons is large.
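Weight sharing can be sketched in a few lines of plain Python (the function name `conv1d` and the example values are illustrative, not from any library): the same 3 kernel weights are reused at every position of the input, so the parameter count stays at 3 regardless of the input length, whereas a dense layer mapping 6 inputs to 4 outputs would need 6 x 4 = 24 separate weights.

```python
# Minimal sketch of local connectivity + weight sharing in 1-D.
def conv1d(signal, kernel):
    """Slide the SAME shared kernel across the signal (valid convolution)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

signal = [0, 1, 3, 3, 1, 0]   # a toy 1-D "image"
kernel = [1, 0, -1]           # only 3 shared weights, a simple edge detector

print(conv1d(signal, kernel))  # -> [-3, -2, 2, 3]
print(len(kernel))             # 3 parameters, independent of len(signal)
```

Each output neuron looks only at 3 neighbouring inputs (local connectivity) and all of them reuse the same 3 weights (weight sharing).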

The problems with MLP for Image Data?

a.      MLP will react differently to an image and its shifted version

b.     MLP does not consider Spatial relations

c.      Includes too many Parameters

 

a.     MLP will react differently to an image and its shifted version

  • Since an MLP flattens the image, it is not position invariant
  • The image is converted into a single feature vector before being fed to the ANN,
  • hence the neighbouring pixels are not considered, and
  • most importantly, neither are the image channels (R-G-B)

Let's take an example:

  • Suppose we have two images of the same dog, but at two different positions:
  • one at the upper left and one at the middle right




  • Now, since the MLP flattens the matrix, the neurons
    that are most active for the first image will be dormant for the second one,
  • making the MLP think these two images contain completely different objects
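This effect can be shown with a toy 1-D "image" (all names, weights, and values below are made up for illustration): a dense neuron with fixed weights responds strongly to the object at one position and barely at all after a shift, while a shared conv kernel produces the same peak response, merely at a shifted location.

```python
def conv1d(signal, kernel):
    """1-D valid convolution with a shared kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

original = [0, 0, 9, 0, 0, 0]   # the "dog" at position 2
shifted  = [0, 0, 0, 0, 9, 0]   # the same "dog", shifted right

# A dense neuron with fixed weights reacts very differently to the shift:
dense_w = [0.1, 0.2, 0.9, 0.2, 0.1, 0.0]
act_orig  = sum(x * w for x, w in zip(original, dense_w))  # 8.1
act_shift = sum(x * w for x, w in zip(shifted,  dense_w))  # 0.9

# A shared conv kernel finds the same peak response, just shifted:
kernel = [1, 1, 1]
resp_orig  = conv1d(original, kernel)
resp_shift = conv1d(shifted,  kernel)
print(max(resp_orig) == max(resp_shift))  # True: same feature detected
```

The dense activation collapses from 8.1 to 0.9 after the shift, while the convolution's maximum response is identical in both cases.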

b. MLP does not consider Spatial relations

  • Spatial information (such as whether a person is standing to the right of a car, or the red car is to the left of the blue bike) is lost when the image is flattened
  • Flattening also loses the internal representation of the 2D image.

c. Includes too many Parameters

  • Since an MLP is a fully connected model, it requires an input neuron for every pixel of the image
  • Let's take an example with an image of size (1280 x 720).
    • For an image with such dimensions, the input-layer vector becomes (921600 x 1). If a Dense layer of 128 units is used, the number of weights equals 921600 * 128 (about 118 million).
    • This makes the MLP infeasible for large images and may cause overfitting.
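The arithmetic above can be checked directly in plain Python (no framework needed). The comparison conv layer, 128 filters of size 3x3 over 3 RGB channels, is an assumed point of reference added here for illustration, not taken from the text:

```python
# Parameter count: Dense(128) on a flattened 1280 x 720 image
# versus one small convolutional layer with the same number of filters.
height, width, channels = 720, 1280, 3
units = 128

flat_inputs = height * width            # 921,600 inputs, as in the text
dense_params = flat_inputs * units
print(dense_params)                     # 117,964,800 weights

# Assumed comparison: 128 filters of size 3x3 over 3 channels, plus biases.
conv_params = 3 * 3 * channels * units + units
print(conv_params)                      # 3,584 parameters
```

Roughly 118 million weights for the dense layer versus a few thousand for the conv layer, about a 30,000-fold reduction, which is exactly why weight sharing makes CNNs feasible on images.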

 

Do we even require global connectivity?

  • The global connectivity caused by densely connected neurons leads to many redundant parameters, which makes the MLP overfit

With all of the above discussion, we need:

  • to make the system translation (position) invariant
  • to leverage the spatial correlation between pixels
  • to focus only on local connectivity

 

What should be the SPECIAL Features of CNN?

From the above discussion, and taking inspiration from our visual cortex system, there are 3 essential properties of image data:

1. LOCALITY: Correlation between neighbouring pixels in an image

2. STATIONARITY: Similar patterns appearing multiple times in an image

3. COMPOSITIONALITY: Extracting higher level features by pooling lower level features
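A minimal sketch of the pooling step behind COMPOSITIONALITY (the helper name `max_pool_2x2` and the example feature map are illustrative): 2x2 max pooling with stride 2 halves each spatial dimension, keeping only the strongest response in every window.

```python
def max_pool_2x2(fmap):
    """2x2 max pooling, stride 2, on a 2-D feature map (list of lists)."""
    return [[max(fmap[i][j], fmap[i][j + 1],
                 fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]), 2)]
            for i in range(0, len(fmap), 2)]

fmap = [[1, 3, 2, 0],
        [4, 2, 1, 1],
        [0, 0, 5, 6],
        [1, 2, 7, 8]]

print(max_pool_2x2(fmap))  # -> [[4, 2], [2, 8]]
```

A 4x4 map shrinks to 2x2: each output cell summarizes a local neighbourhood, so later layers can combine these pooled lower-level features into higher-level ones.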

 

 

 



