Neural Network 2

Q1. Neuron

Which of the following is true about a single artificial neuron?

Choose the correct answer from below. Please note that this question may have multiple correct answers.

A.     It is loosely inspired from biological neurons

B.     It computes weighted sum

C.      It applies an activation function

D.     It is capable of performing multi-class classification

Ans: A,B,C

Correct Options:-

  • It is loosely inspired from biological neurons
  • It computes weighted sum
  • It applies an activation function

Explanation:-

  • The basic inspiration for artificial neurons did come from biological neurons.
    Biological neurons form a network among themselves.
    Each connection, like the synapses in a biological brain, can transmit a signal to other neurons.
    An artificial neuron receives signals, processes them, and can signal the neurons connected to it.
  • A neuron does its computation in 2 steps (see the sketch below):
    1. First it computes the weighted sum: z = w1x1 + w2x2 + … + wdxd + b
    2. Then it applies an activation function on top of this sum: a = f(z)
  • A single neuron can perform binary classification if its activation is the sigmoid function.
    However, it cannot perform multi-class classification on its own. We would need a network of multiple neurons to do that.
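A minimal NumPy sketch of these two steps for a single neuron; the input, weights, bias, and the choice of sigmoid activation are made up for illustration:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# hypothetical input, weights, and bias for one neuron
x = np.array([0.5, -1.2, 3.0])
w = np.array([0.8, 0.1, -0.4])
b = 0.2

z = np.dot(w, x) + b   # step 1: weighted sum plus bias
a = sigmoid(z)         # step 2: activation, a value in (0, 1)
print(z, a)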

Q2. Sigmoid and softmax functions

Which of the following statements is true for a neural network having more than one output neuron?

Choose the correct answer from below:

A.     In a neural network where the output neurons have the sigmoid activation, the sum of all the outputs from the neurons is always 1.

B.     In a neural network where the output neurons have the sigmoid activation, the sum of all the outputs from the neurons is 1 if and only if we have just two output neurons.

C.      In a neural network where the output neurons have the softmax activation, the sum of all the outputs from the neurons is always 1.

D.     The softmax function is a special case of the sigmoid function

Ans: C

Correct option : In a neural network where the output neurons have the softmax activation, the sum of all the outputs from the neurons is always 1.

Explanation :

  • For the sigmoid activation, when there is more than one output neuron, the sum of the outputs can take any value, since each neuron's output lies in (0, 1) independently of the others (see the check below).
  • The softmax classifier outputs a probability distribution over the classes, and the sum of these probabilities is always 1.
  • The sigmoid function is a special case of the softmax function where the number of classes is 2.
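A quick NumPy check with made-up pre-activation values, showing that sigmoid outputs across several neurons need not sum to 1 while softmax outputs always do:

import numpy as np

z = np.array([2.0, -1.0, 0.5])   # hypothetical pre-activation outputs of three output neurons

sigmoid_out = 1 / (1 + np.exp(-z))
softmax_out = np.exp(z) / np.sum(np.exp(z))

print(sigmoid_out.sum())   # generally not 1 (here about 1.77)
print(softmax_out.sum())   # always 1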

 

Q3. Forward propagation

Given the independent and dependent variables in X and y, complete the code to calculate the results of the forward propagation for a single neuron on each observation of the dataset.

The code should print the calculated labels for each observation of the given dataset i.e. X.

Input Format:

Two lists are taken as the inputs. First list should be the independent variable(X) and the second list should be the dependent variable(y)

Output Format:

A numpy array consisting of labels for each observation.

Sample Input:

X = [[100, 129, 157, 133], [168, 150, 30, 19], [4, 148, 106, 74], [123, 195, 60, 93], [169, 40, 188, 179], [40, 59, 29, 94], [165, 126, 16, 99], [167, 157, 65, 23], [128, 87, 37, 111], [191, 154, 89, 134], [101, 41, 145, 112], [43, 110, 197, 118], [147, 22, 109, 139], [11, 161, 135, 119], [26, 48, 199, 182], [96, 100, 82, 87], [149, 2, 8, 10], [5, 38, 166, 100], [193, 117, 59, 164], [133, 5, 38, 163], [88, 177, 84, 114], [9, 132, 177, 24], [94, 130, 83, 131], [77, 11, 141, 81], [154, 198, 175, 98], [21, 148, 170, 122], [185, 145, 101, 183], [100, 196, 111, 11], [97, 147, 112, 11], [25, 97, 95, 45], [6, 89, 88, 38], [51, 16, 151, 3], [90, 174, 122, 157], [2, 133, 121, 199], [15, 78, 163, 180], [103, 118, 7, 179], [102, 179, 157, 183], [113, 139, 195, 122], [55, 88, 68, 117], [115, 185, 93, 102], [139, 82, 3, 165], [135, 29, 78, 11], [11, 16, 60, 123], [103, 191, 187, 129], [146, 181, 28, 192], [85, 73, 136, 139], [117, 179, 81, 183], [15, 131, 106, 28], [58, 78, 111, 65], [76, 11, 25, 103], [11, 90, 162, 129], [144, 1, 16, 33], [33, 172, 40, 72], [106, 83, 160, 151], [68, 159, 150, 64], [31, 79, 83, 15], [51, 140, 173, 10], [105, 80, 70, 21], [195, 80, 64, 129], [50, 96, 107, 82], [185, 150, 15, 143], [28, 71, 27, 57], [58, 13, 146, 78], [20, 71, 183, 44], [91, 44, 15, 87], [77, 157, 95, 110], [132, 28, 193, 49], [177, 87, 57, 41], [194, 175, 17, 20], [166, 64, 134, 150], [79, 74, 162, 168], [166, 149, 34, 117], [160, 170, 127, 44], [99, 41, 103, 155], [48, 127, 138, 68], [17, 3, 101, 94], [29, 102, 123, 158], [194, 60, 135, 179], [73, 192, 145, 168], [21, 94, 154, 143], [17, 10, 145, 131], [73, 29, 195, 199], [132, 189, 90, 100], [134, 32, 81, 119], [118, 37, 119, 27], [51, 78, 187, 86], [95, 8, 56, 29], [156, 162, 186, 127], [126, 111, 144, 59], [7, 140, 32, 75], [40, 0, 109, 92], [165, 175, 61, 103], [178, 68, 185, 119], [132, 105, 36, 80], [165, 117, 35, 176], [128, 49, 185, 9], [50, 176, 12, 198], [124, 164, 99, 102], [36, 30, 114, 147], [166, 172, 35, 14]]
y = [1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0]

Sample Output:

[1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 0 1 1 1 1 0 0 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 0 1]


import numpy as np

np.random.seed(2)

 

#independent variables

X = np.array(eval(input()))

#dependent variable

y = np.array(eval(input()))

 

m = X.shape[__]  #no. of samples

n = X.shape[__]  #no. of features

c =    #no. of classes in the data and therefore no. of neurons in the layer

 

#weight vector of dimension (number of features, number of neurons in the layer)

w = np.random.randn(___, ___)

 

#bias vector of dimension (1, number of neurons in the layer)

b = np.zeros((___, ___))

 

#(weighted sum + bias) of dimension (number of samples, number of classes)

z = ____

 

#exponential transformation of z

a = np.exp(z)

 

#Perform the softmax on a

a = ____

 

#calculate the label for each observation

y_hat = ____

 

print(y_hat)

Ans:

import numpy as np

np.random.seed(2)

 

#independent variables

X = np.array(eval(input()))

#dependent variable

y = np.array(eval(input()))

‘m’ and ‘n’ refer to the number of rows and columns in the dataset respectively. ‘c’ refers to the number of classes in y.

m = X.shape[0]  #no. of samples

n = X.shape[1]  #no. of features

c = len(np.unique(y))   #no. of classes in the data and therefore no. of neurons in the layer

Initializing weights randomly

#weight vector of dimension (number of features, number of neurons in the layer)

w = np.random.randn(n, c)

Initializing biases as zero

#bias vector of dimension (1, number of neurons in the layer)

b = np.zeros((1, c))

Finding the output ‘z’

#(weighted sum + bias) of dimension (number of samples, number of classes)

z = np.dot(X, w) + b

Applying the softmax activation function on the output

#exponential transformation of z

a = np.exp(z)

a = a/np.sum(a, axis = 1, keepdims = True)

Calculating the label for each observation

y_hat = np.argmax(a, axis = 1)

print(y_hat)
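As an aside, the softmax step above is often computed with the row-wise maximum of z subtracted before exponentiation; this is a standard numerical-stability trick (not required by the question) and does not change the result:

# numerically stable variant of the softmax computed above
a = np.exp(z - np.max(z, axis=1, keepdims=True))
a = a / np.sum(a, axis=1, keepdims=True)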

 

Q4. Same layer still different output

Why do two neurons in the same layer produce different outputs even after using the same kind of function (i.e. wᵀx + b)?

Choose the correct answer from below:

A.     Because the weights are not the same for the neurons.

B.     Because the input for each neuron is different.

C.      Because weights of all neurons are updated using different learning rates.

D.     Because only biases (b) of all neurons are different, not the weights.

Ans: A

 

Correct option: Because the weights are not the same for the neurons.

Explanation :

  • The weights for each neuron in a layer are different. Thus the output of each neuron (wᵀx + b) will be different (see the sketch below).
  • The input for each neuron in a layer is the same. In a fully connected network, each neuron in a particular layer gets inputs from every neuron in the previous layer.
  • There may be different learning rates for each model weight depending on the type of optimizer used, but that is not the reason the neurons give different outputs.
  • In a fully connected network, each neuron has its own trainable parameters: a weight vector and a bias. The values of the weights and bias for any two neurons in a layer need not be the same, since they keep changing during model training.
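A small sketch, with invented numbers, of two neurons in the same layer: both receive the same input x, but their different weights and biases give different outputs:

import numpy as np

x = np.array([1.0, 2.0, 3.0])                # the same input goes to both neurons

w1, b1 = np.array([0.5, -0.2, 0.1]), 0.0     # parameters of neuron 1 (made up)
w2, b2 = np.array([-0.3, 0.8, 0.4]), 0.5     # parameters of neuron 2 (made up)

print(np.dot(w1, x) + b1)   # ≈ 0.4
print(np.dot(w2, x) + b2)   # ≈ 3.0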

 

Q5. Will he watch the movie?

We want to predict whether a user would watch a movie or not. Each movie has a certain number of features, each of which is explained in the image.

[Image: description of each movie feature]
Now take the case of the movie Avatar having the feature vector [9, 1, 0, 5]. According to an algorithm, these features are assigned the weights [0.8, 0.2, 0.5, 0.4] and bias = -10. For a user X, predict whether he will watch the movie or not if the threshold value (θ) is 10.

Note: If the output of the neuron is greater than θ then the user will watch the movie otherwise not.

 

Choose the correct answer from below:

A.     Yes, the user will watch the movie with neuron output = -0.6

B.     No, the user will not watch the movie with neuron output = -0.6

C.      No, the user will watch the movie with neuron output = 2.5

D.     Yes, the user will watch the movie with neuron output = 2.5

Ans: B

Correct option: No, the user will not watch the movie, with neuron output = -0.6

Explanation :

The output of a neuron is obtained by taking the weighted sum of inputs and adding the bias term to it.

The output of the neuron is:
(0.8)(9) + (0.2)(1) + (0.5)(0) + (0.4)(5) − 10 = −0.6 < 10,
therefore he won't watch the movie (see the check below).
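The same calculation as a short NumPy check:

import numpy as np

features = np.array([9, 1, 0, 5])
weights = np.array([0.8, 0.2, 0.5, 0.4])
bias = -10

output = np.dot(features, weights) + bias
print(output)        # ≈ -0.6
print(output > 10)   # False -> the user will not watch the movie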

Q6. And perceptron

 

We want to design a perceptron that performs the AND operation. Refer to the table below:

x1   x2   AND(x1, x2)
0    0    0
0    1    0
1    0    0
1    1    1
For this, first, a weighted sum is calculated: z = w1x1 + w2x2 + b

Here, w1 and w2 are weights, and b is the bias for the neuron.

The activation function applied on this is as follows:-

f(z)=0, if z<0

f(z)=1, otherwise

Which of the following values of weights and bias will give the desired results?

Choose the correct answer from below:

A.     w1 = 1, w2 = 1, b = -2

B.     w1 = 1, w2 = 1, b = 2

C.      w1 = 1, w2 = 2, b = -2

D.     w1 = 2, w2 = 1, b = 4

Ans: A

Correct answer: w1 = 1, w2 = 1, b = −2

Explanation:

  • For the input (0,0) the perceptron performs the calculation:
    z = w1x1 + w2x2 + b = (1)(0) + (1)(0) + (−2) = −2
    Therefore f(z) = 0
  • Similarly, for the inputs (0,1) and (1,0):
    For both cases, z = −1,
    Therefore, f(z) = 0
  • But for the input (1,1), the value of z = w1x1 + w2x2 + b = 0
    Therefore, f(z) = 1
  • Therefore, among the options, w1 = 1, w2 = 1, and b = −2 give the required results (verified in the snippet below).
  • We can use any values of w1, w2, and b that satisfy the required outputs.
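A short check of these weights and bias over all four inputs:

import numpy as np

w = np.array([1, 1])
b = -2

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    z = np.dot(w, x) + b
    print(x, int(z >= 0))   # f(z): prints 0, 0, 0, 1 -- the AND truth table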

 

Q7. Vectorize

Consider the following code snippet:

[Code snippet image: a loop-based computation of z from x and y]
How do you vectorize this?

Note: All x, y and z are NumPy arrays.

Choose the correct answer from below:

A.     z = x + y

B.     z= x * y.T

C.      z = x + y.T

D.     z = x.T + y.T

 

Ans: C

Correct option: z = x + y.T

Explanation :

The shape of x is (10, 6).
The shape of y is (6, 1).

We can observe from the snippet that y[j] is added to every element of the j-th column of x to produce z.
For this to happen, we transpose y to shape (1, 6) and then add it to x; broadcasting expands y.T across the rows to shape (10, 6).

Thus the answer is z = x + y.T (see the sketch below).
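A sketch of what the loop-based snippet most likely looked like, next to the vectorized version; the exact variable names and loop bounds from the original image are assumptions based on the shapes given above:

import numpy as np

x = np.random.randn(10, 6)
y = np.random.randn(6, 1)

# loop version (assumed): y[j] is added to every element of column j of x
z_loop = np.zeros((10, 6))
for i in range(10):
    for j in range(6):
        z_loop[i, j] = x[i, j] + y[j, 0]

# vectorized version: y.T has shape (1, 6) and broadcasts over the 10 rows of x
z_vec = x + y.T

print(np.allclose(z_loop, z_vec))   # True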
