
Neural Network 3

[Image: code implementing the forward and backward passes of a sigmoid layer]
Q1. Complete the code

For the above code implementation of forward and backward propagation for the sigmoid function, complete the backward pass [ ???? ] to compute the analytical gradients. Note: grad in backward is the error gradient with respect to the layer's output.

Choose the correct answer from below:

A. grad_input = self.sig * (1-self.sig) * grad
B. grad_input = self.sig / (1-self.sig) * grad
C. grad_input = self.sig / (1-self.sig) + grad
D. grad_input = self.sig + (1-self.sig) - grad

Ans: A

Correct answer: grad_input = self.sig * (1-self.sig) * grad

Explanation: By the chain rule, the backward pass computes

dZ = dA · σ(x) · (1 − σ(x))

where dZ is the error gradient with respect to the input Z, dA is the error gradient with respect to the output A, and σ(x) · (1 − σ(x)) is the derivative of the sigmoid activation function σ(x). A runnable sketch of such a layer appears at the end of this post.

Q2. Trained Perceptron

A perceptron was trained to distinguish between two classes, "+1" and "-1". The result is …
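The following is a minimal NumPy sketch of the layer Q1 above describes; the method and attribute names (forward, backward, self.sig) come from the question, while the surrounding class structure is an assumption:

import numpy as np

class Sigmoid:
    def forward(self, z):
        # Cache the activation: backward() reuses it to form the derivative.
        self.sig = 1.0 / (1.0 + np.exp(-z))
        return self.sig

    def backward(self, grad):
        # grad is dA, the error gradient w.r.t. this layer's output.
        # Chain rule: dZ = dA * sigma(z) * (1 - sigma(z))  -> option A.
        return self.sig * (1 - self.sig) * grad

# Quick check: at z = 0, sigma = 0.5, so the local derivative is 0.25.
layer = Sigmoid()
layer.forward(np.array([0.0]))
print(layer.backward(np.array([1.0])))  # [0.25]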

Neural Network 4

Q1. Tanh and Leaky ReLU

Which of the following statements with respect to Leaky ReLU and Tanh are true?

a. When the derivative becomes zero in the case of negative values in ReLU, no learning happens, which is rectified in Leaky ReLU.
b. Tanh is a zero-centered activation function.
c. Tanh produces normalized inputs for the next layer, which makes training easier.
d. Tanh also has the vanishing gradient problem.

Choose the correct answer from below:

A. All the mentioned statements are true.
B. All the mentioned statements are true except c.
C. All the mentioned statements are true except b.
D. All the mentioned statements are true except d.

Ans: A

Correct option: All the mentioned statements are true.

Explanation:
1) The problem of no learning in the case of ReLU is called dying ReLU, which Leaky ReLU takes care of.
2) Yes, tanh is a zero-centered activation function.
3) As tanh is symmetric and its mean is around zero, it produces normalized outputs for the next layer, which makes training easier. …
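Points 1 and 4 can be seen numerically: ReLU's gradient is exactly zero for negative inputs, Leaky ReLU keeps a small slope there, and tanh's gradient shrinks toward zero as |x| grows. A minimal NumPy sketch (the leak slope 0.01 is a common default, assumed here):

import numpy as np

def relu_grad(x):
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return (x > 0).astype(float)

def leaky_relu_grad(x, alpha=0.01):
    # Leaky ReLU keeps a small slope alpha for negative inputs.
    return np.where(x > 0, 1.0, alpha)

def tanh_grad(x):
    # d/dx tanh(x) = 1 - tanh(x)^2.
    return 1.0 - np.tanh(x) ** 2

x = np.array([-5.0, -1.0, 1.0, 5.0])
print(relu_grad(x))        # [0.   0.   1.   1.  ] -> no learning signal for negatives (dying ReLU)
print(leaky_relu_grad(x))  # [0.01 0.01 1.   1.  ] -> small but nonzero signal
print(tanh_grad(x))        # ~0.0002 at |x| = 5    -> vanishing gradient in the saturated tails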

TensorFlow and Keras -1

Q1. Binary classification

In order to perform binary classification on a dataset (classes 0 and 1) using a neural network, which of the options is correct regarding the outcomes of code snippets a and b? Here the labels of the observations are of the form: [0, 0, 1, ...].

Common model:

import tensorflow
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
opt = SGD(learning_rate=0.01)

Code snippet a:

model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

Code snippet b:

model.add(Dense(1, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

The term "Required results" in the options means that the accuracy …
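The key difference between the two snippets: softmax normalizes across the units of a layer, so a Dense(1) softmax head always outputs 1.0 and can never separate the classes, while a sigmoid head yields a usable per-example probability. A minimal TensorFlow sketch of that behaviour (the logit values are hypothetical):

import tensorflow as tf

z = tf.constant([[-2.0], [0.5], [3.0]])   # logits from a Dense(1) head
print(tf.sigmoid(z).numpy())              # ~[[0.12], [0.62], [0.95]] -> distinct probabilities
print(tf.nn.softmax(z, axis=-1).numpy())  # [[1.], [1.], [1.]] -> a single unit always normalizes to 1

So snippet a can learn to separate the classes, whereas snippet b predicts class 1 for every input regardless of training.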