10.1 Perceptron Flashcards

1
Q

What is the ReLU function?

A

The rectified linear unit (ReLU) is a piecewise linear activation function that outputs its input directly if the input is positive and outputs zero otherwise. It has become the default activation function for many types of neural networks because models that use it are easier to train and often achieve better performance.
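A minimal sketch of ReLU in NumPy (the vectorized form is just one convenient way to write it; any elementwise max with zero is equivalent):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
# [0.  0.  0.  1.5 3. ]
```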

2
Q

A neural network with a single neuron and a sigmoid activation is equivalent to:

A
  • Logistic regression

A neural network with a single neuron and a sigmoid activation computes sigmoid(w · x + b), which is exactly the logistic regression model, so the two are equivalent.
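A minimal sketch of that equivalence (the weight, bias, and input values here are arbitrary, chosen only for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid neuron: p = sigmoid(w . x + b).
# This is exactly the logistic regression hypothesis.
w = np.array([0.8, -1.2])   # weights (illustrative values)
b = 0.5                     # bias
x = np.array([1.0, 2.0])    # one input example

p = sigmoid(np.dot(w, x) + b)  # probability of the positive class
print(p)
```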

3
Q

What is a sigmoid activation?

A

It is a mathematical function with a characteristic "S" shape that takes any real value and maps it to a value between 0 and 1: sigmoid(x) = 1 / (1 + e^(-x)). The sigmoid function is also called the logistic function.
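A quick numeric look at the "S" shape (the sample points are arbitrary): large negative inputs approach 0, large positive inputs approach 1, and the curve passes through 0.5 at x = 0.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Outputs climb from near 0 to near 1 as x sweeps negative to positive.
print(sigmoid(np.array([-10.0, -1.0, 0.0, 1.0, 10.0])))
# [~0.00005  0.2689  0.5  0.7311  ~0.99995]
```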

4
Q

Which of the following could be used to give non-linearity to a neural network?

A
  • Rectified linear unit (ReLU)

A rectified linear unit is a function that adds non-linearity to a neural network. Convolution and addition are linear operations; they may be steps in a neural network, but they do not add non-linearity.
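A small sketch of why the non-linearity matters (the random matrices here are purely illustrative): two stacked linear layers collapse into a single linear layer, and only an intervening non-linear function such as ReLU breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 3))
W2 = rng.normal(size=(3, 3))
x = rng.normal(size=3)

# Two linear layers collapse to one: W2 @ (W1 @ x) == (W2 @ W1) @ x.
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))   # True

# Inserting ReLU between the layers breaks this collapse,
# so the network can represent non-linear functions.
relu = lambda v: np.maximum(0.0, v)
print(np.allclose(W2 @ relu(W1 @ x), (W2 @ W1) @ x))  # False (in general)
```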

5
Q

The perceptron uses a loss function in which there is no penalty for correctly classified examples, while there is a penalty (loss) for misclassified examples.

A
  • True

True. The perceptron uses a loss function based on the difference between the true and predicted labels (which are always 0 or 1). When the predicted label is correct, the difference is 0 and the perceptron weights are not changed. When the predicted label is incorrect, the difference is -1 or +1 and the weights are updated.
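A minimal sketch of that update rule (the AND-gate data, learning rate, and epoch count are illustrative assumptions): the weight change is proportional to the label difference, so correctly classified examples contribute nothing.

```python
import numpy as np

def perceptron_train(X, y, epochs=10, lr=1.0):
    """Classic perceptron update: w += lr * (y - y_hat) * x.
    Correct predictions (y - y_hat == 0) leave the weights untouched."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            y_hat = 1 if np.dot(w, x) + b > 0 else 0
            error = target - y_hat        # 0, -1, or +1
            w += lr * error * x
            b += lr * error
    return w, b

# Toy linearly separable data (AND gate), used only for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = perceptron_train(X, y)
print(w, b)
```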

6
Q

Consider the following MLP: it has 4 neurons in the input layer, 5 neurons in its single hidden layer, and 1 neuron in the output layer. The network is fully connected, so every input neuron is connected to every hidden-layer neuron, and every hidden-layer neuron is connected to the output neuron. Assume each neuron in the hidden layer has a bias associated with it, as does the neuron in the output layer. Each connection has some weight. What is the number of parameters to be learnt here?

A
  • 31

Number of weights between input and hidden layer: 4 × 5 = 20

Number of weights between hidden and output layer: 5 × 1 = 5

Number of biases in the hidden layer (one per hidden neuron): 5

Number of biases in the output layer (one per output neuron): 1

Hence, total number of parameters to be learnt = 20 + 5 + 5 + 1 = 31
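The same count, generalized in a short helper (the function name and layer-size list are my own, for illustration):

```python
def mlp_param_count(layer_sizes):
    """Parameters in a fully-connected MLP: for each consecutive pair
    of layers, weights = fan_in * fan_out, plus one bias per neuron
    in every non-input layer."""
    return sum(fan_in * fan_out + fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))

print(mlp_param_count([4, 5, 1]))  # 31, matching the count above
```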

7
Q

What method is used to train a neural network with hidden layers?

A
  • Backpropagation

Backpropagation is used to train neural networks with hidden layers. The perceptron algorithm does not work for hidden-layer neurons because there is no ground-truth expected response to which a hidden neuron's output can be compared.
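A minimal sketch of backpropagation on a tiny network (the XOR data, hidden width, learning rate, and iteration count are illustrative assumptions; whether the outputs reach exactly [0, 1, 1, 0] depends on the random initialization):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Tiny 2-4-1 network trained on XOR with squared error.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 1.0

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network output

    # Backward pass: the chain rule propagates the output error back,
    # giving each hidden neuron an error signal it otherwise lacks.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should move toward [0, 1, 1, 0]
```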

8
Q

What are the requirements for an activation function in the hidden layers of a multilayer neural network? (Select all that are correct)

A
  • It must be differentiable
  • It must be non-linear

The activation functions of neurons in hidden layers of a multilayer network should be non-linear, because this allows the network to learn more complex features across layers, and differentiable, because the backpropagation algorithm will use the derivative of the network’s loss (error) to train the weights. Some activation functions like sigmoid output a value between 0 and 1, but this is not a requirement.
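A short check of the differentiability requirement (the grid of test points is arbitrary): sigmoid's derivative has a closed form that backpropagation can evaluate cheaply, verified here against finite differences.

```python
import numpy as np

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Backprop needs activation derivatives; sigmoid's has the closed form
# sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)).
x = np.linspace(-3, 3, 7)
analytic = sigmoid(x) * (1 - sigmoid(x))

# Finite-difference check that the closed form is correct.
eps = 1e-6
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
print(np.allclose(analytic, numeric))  # True
```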

9
Q

What is the hidden layer?

A