
Neural network 4

Q1. Tanh and Leaky ReLU

Which of the following statements with respect to Leaky ReLU and Tanh are true?

a. When the derivative becomes zero for negative values in ReLU, no learning happens; this is rectified in Leaky ReLU.
b. Tanh is a zero-centered activation function.
c. Tanh produces normalized inputs for the next layer, which makes training easier.
d. Tanh also has the vanishing gradient problem.

Choose the correct answer from below:

A. All the mentioned statements are true.
B. All the mentioned statements are true except c.
C. All the mentioned statements are true except b.
D. All the mentioned statements are true except d.

Ans: A

Correct option: All the mentioned statements are true.

Explanation:
1) The problem of no learning in the case of ReLU is called dying ReLU, which Leaky ReLU takes care of.
2) Yes, tanh is a zero-centered activation function.
3) As tanh is symmetric and its mean is around zero, it produces normalized inputs for the next layer, which makes training easier.
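The gradient behaviour described in the explanation can be checked numerically. The following NumPy sketch (the leak factor alpha = 0.01 is an assumed value, matching Keras's default for LeakyReLU) shows the zero gradient of ReLU on negative inputs, the small non-zero gradient of Leaky ReLU, and the vanishing gradient of tanh for large inputs:

```python
import numpy as np

def relu_grad(x):
    # derivative of ReLU: 0 for negative inputs (the "dying ReLU" region)
    return np.where(x > 0, 1.0, 0.0)

def leaky_relu_grad(x, alpha=0.01):
    # Leaky ReLU keeps a small slope alpha for negative inputs,
    # so learning does not stop completely
    return np.where(x > 0, 1.0, alpha)

def tanh_grad(x):
    # d/dx tanh(x) = 1 - tanh(x)^2, which shrinks toward 0 for large |x|
    return 1.0 - np.tanh(x) ** 2

x = np.array([-5.0, -1.0, 0.5, 5.0])
print(relu_grad(x))        # negative inputs get exactly 0 gradient
print(leaky_relu_grad(x))  # negative inputs still get slope alpha = 0.01
print(tanh_grad(x))        # tiny at |x| = 5: the vanishing gradient problem
print(np.tanh(x))          # outputs lie in (-1, 1), centered at zero
```

This also illustrates statement b: tanh maps inputs into (-1, 1) symmetrically around zero, unlike the sigmoid, whose outputs are always positive.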

TensorFlow & Keras 3

Q1. Functional model

Complete the code snippet in order to get the following model summary.

from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

def create_model_functional():
    inp = Input(shape=(28, ))
    h1 = Dense(64, activation="relu", name="hidden_1")(inp)
    h2 = Dense( _a_ , activation="relu", name="hidden_2")(h1)
    out = Dense(4, activation="softmax", name="output")( _b_ )
    model = Model(inputs=inp, outputs=out, name="simple_nn")
    return model

model_functional = create_model_functional()
model_functional.summary()

Choose the correct answer from below:

A. a - 512, b - h2
B. a - 64, b - h2
C. a - 10, b - h1
D. a - 512, b - inp

Ans: A

Correct option: a - 512, b - h2

Explanation: To get the model summary shown in the question, the value of a should be 512 and the value of b should be h2. This will create a neural network with the layer sizes shown in the summary.
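With a = 512 and b = h2 the layer sizes are 28 → 64 → 512 → 4. The parameter counts that the summary would report can be reproduced without running TensorFlow, since a Dense layer holds one weight per (input, output) pair plus one bias per output. A quick sketch (this computes the counts by hand; it is not the actual `model.summary()` output):

```python
def dense_params(n_in, n_out):
    # a Dense layer has n_in * n_out weights plus n_out biases
    return n_in * n_out + n_out

# layer sizes for the completed model: Input(28) -> 64 -> 512 -> 4
sizes = [28, 64, 512, 4]
per_layer = [dense_params(a, b) for a, b in zip(sizes, sizes[1:])]
print(per_layer)       # [1856, 33280, 2052]
print(sum(per_layer))  # 37188 total trainable parameters
```

Checking these numbers against the summary in the question is a quick way to confirm that a = 512 is the right hidden size, since any other option changes the parameter counts of hidden_2 and output.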

TensorFlow and Keras - 1

Q1. Binary classification

In order to perform binary classification on a dataset (classes 0 and 1) using a neural network, which of the options is correct regarding the outcomes of code snippets a and b? Here the labels of the observations are of the form: [0, 0, 1, ...].

Common model:

import tensorflow
from keras.models import Sequential
from keras.layers import Dense
from tensorflow.keras.optimizers import SGD

model = Sequential()
model.add(Dense(50, input_dim=2, activation='relu', kernel_initializer='he_uniform'))
opt = SGD(learning_rate=0.01)

Code snippet a:

model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer=opt, metrics=['accuracy'])

Code snippet b:

model.add(Dense(1, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])

The term "Required results" in the options means that the accuracy…
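The key difference between the two snippets can be seen directly: softmax over a single output unit always returns exactly 1, regardless of the logit, so snippet b would predict class 1 for every observation, while a single sigmoid unit produces a value in (0, 1) that can represent P(class = 1). A small NumPy sketch of the two activations (not the Keras training run itself):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability, then normalize to sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# softmax over a single unit is always exactly 1.0, whatever the logit
for logit in (-3.0, 0.0, 7.5):
    print(softmax(np.array([logit])))  # -> [1.]

# a single sigmoid unit varies with the logit, so thresholding it at 0.5
# gives a usable binary decision
print(sigmoid(np.array([-3.0, 0.0, 7.5])))
```

This is why snippet a (sigmoid + binary_crossentropy) is the standard setup for binary classification with 0/1 labels; softmax with categorical_crossentropy would instead need two output units and one-hot labels.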