TensorFlow & Keras 3
Q1. Functional model
Complete the code snippet so that it produces the following model summary.
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

def create_model_functional():
    inp = Input(shape=(28,))
    h1 = Dense(64, activation="relu", name="hidden_1")(inp)
    h2 = Dense(_a_, activation="relu", name="hidden_2")(h1)
    out = Dense(4, activation="softmax", name="output")(_b_)
    model = Model(inputs=inp, outputs=out, name="simple_nn")
    return model

model_functional = create_model_functional()
model_functional.summary()
Choose the correct answer from below:
A. a - 512, b - h2
B. a - 64, b - h2
C. a - 10, b - h1
D. a - 512, b - inp
Ans: A
Correct option: a - 512, b - h2
Explanation:
- To get the model summary shown in the question, a should be 512 and b should be h2. This creates a neural network with two hidden layers: the first with 64 neurons and the second with 512 neurons.
- The create_model_functional function builds a model using the Keras functional API from TensorFlow.
- The model has an input layer of shape (28,), meaning it expects input data with 28 features. The first hidden layer has 64 neurons and uses the ReLU activation function.
- The second hidden layer has a neurons and uses the ReLU activation function. We want a to be 512 so that this layer has 512 neurons.
- The output layer has 4 neurons and uses the softmax activation function, which is suitable for multiclass classification problems.
- The b placeholder connects the output of the second hidden layer to the output layer, so it must be h2, the output tensor of the second hidden layer.
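With a = 512 and b = h2 filled in, the parameter counts in the resulting summary can be checked by hand: a Dense layer has (input_dim x units) weights plus units biases. A quick sketch of that arithmetic (pure Python, no TensorFlow needed):

```python
def dense_params(in_dim, units):
    # Dense layer parameters: weight matrix (in_dim * units) plus bias vector (units).
    return in_dim * units + units

# Layer sizes from the completed snippet: input (28,), hidden_1=64,
# hidden_2=512 (the value of a), output=4, wired inp -> h1 -> h2 -> out.
hidden_1 = dense_params(28, 64)    # 28*64  + 64  = 1856
hidden_2 = dense_params(64, 512)   # 64*512 + 512 = 33280
output   = dense_params(512, 4)    # 512*4  + 4   = 2052

total = hidden_1 + hidden_2 + output
print(hidden_1, hidden_2, output, total)  # 1856 33280 2052 37188
```

These per-layer counts are what model_functional.summary() would report for each named layer.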
Q2. Customized loss function
For a certain sequential regression model predicting two outputs, we implemented a loss function that penalizes the prediction error for the second output (y2) more than the first one (y1), because y2 is more important and we want it to be very close to the target value.
import numpy as np

def custom_mse(y_true, y_pred):
    loss = np.square(y_pred - y_true)
    loss = loss * [0.5, 0.5]     # y
    loss = np.sum(loss, axis=0)  # x
    return loss

model.compile(loss=custom_mse, optimizer='adam')
Which of the following options is correct with respect to the above implementation of a custom-made loss function?
Note: The shape of y_pred is (batch_size, 2) in the implementation.
Choose the correct answer from below; please note that this question may have multiple correct answers.
A. custom_mse function's output should have a shape (batch_size, 2)
B. custom_mse function's output should have a shape (batch_size,)
C. The axis for the sum of loss in line x should be 1
D. The multiplication of [0.5, 0.5] in line y won't be helpful for our requirement
Ans: B, C, D
Correct options:
- custom_mse function's output should have a shape (batch_size,)
- The axis for the sum of loss in line x should be 1
- The multiplication of [0.5, 0.5] in line y won't be helpful for our requirement
Explanation:
- custom_mse function's output should have a shape (batch_size,): the first dimension of y_true and y_pred is always the batch size, and a loss function should return a vector of length batch_size, one loss value per sample.
- The axis for the sum of loss in line x should be 1: we need the loss values for each observation's two outputs to be summed, so axis=1 should be used; axis=0 would sum across the batch instead.
- The multiplication of [0.5, 0.5] in line y won't be helpful for our requirement: equal weights penalize both outputs the same. Because we want to penalize the error for y2 more, the weight for y2 should be larger, e.g. [0.3, 0.7].
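Putting the three fixes together, a corrected version of the loss can be sketched in NumPy (the batch values below are made-up sample data for illustration):

```python
import numpy as np

def weighted_mse(y_true, y_pred, weights=(0.3, 0.7)):
    # Squared error per output, shape (batch_size, 2).
    loss = np.square(y_pred - y_true)
    # Unequal weights penalize y2's error more than y1's.
    loss = loss * np.array(weights)
    # Sum across the two outputs (axis=1) -> shape (batch_size,).
    return np.sum(loss, axis=1)

# Hypothetical batch of 3 samples, each with two targets/predictions.
y_true = np.array([[1.0, 2.0], [0.0, 1.0], [2.0, 0.0]])
y_pred = np.array([[1.5, 1.0], [0.0, 2.0], [1.0, 0.0]])

per_sample = weighted_mse(y_true, y_pred)
print(per_sample.shape)  # (3,) -- one loss value per sample, as required
```

Note that in an actual model.compile call the loss would operate on tensors rather than NumPy arrays, so TensorFlow ops (e.g. tf.square, tf.reduce_sum) would be used instead; the shape logic is identical.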