Unit 1 Fundamentals of Deep Learning Flashcards

1
Q

What is deep learning?

A
  • Subset of machine learning that mimics the way the human brain functions
  • Learns from examples using layered networks of artificial neurons, similar to the way networks of neurons in the brain learn from experience
2
Q

Example of deep learning:

A

How a toddler learns what a dog is

3
Q

Types of neural networks used in deep learning:

A
  1. Convolutional neural network (CNN)
  2. Recurrent neural network (RNN)
  3. Long short-term memory (LSTM)
4
Q

What is a Perceptron?

A
  • Basic unit used to build an ANN
  • Takes real-valued inputs (not only boolean)
  • Calculates a linear combination of these inputs and generates an output
  • If the result obtained from the perceptron is greater than a threshold, the output is 1; otherwise -1
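
The rule above can be sketched as a minimal perceptron in Python (the weights, bias, and threshold below are illustrative assumptions, not values from the cards):

```python
# Minimal perceptron: output 1 if the weighted sum of the real-valued
# inputs exceeds the threshold, otherwise -1.

def perceptron(inputs, weights, bias, threshold=0.0):
    # Linear combination of the real-valued inputs
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > threshold else -1

# Example with hand-picked illustrative weights:
print(perceptron([0.5, -1.0], weights=[2.0, 1.0], bias=0.5))  # 1
```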
5
Q

What is a multilayer perceptron?

A

Like neurons in a human brain, perceptrons form a dense network with each other; this dense network is known as a multilayer perceptron

6
Q

What is a feed forward neural network?

A
  • Type of neural network consisting of three types of layers:
    1. Input layer
    2. Hidden layers
    3. Output layer
  • Inputs are passed on in the forward direction only, with no loops back
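
A minimal sketch of a forward pass through these layers (the layer sizes, weights, and choice of sigmoid activation are illustrative assumptions):

```python
import math

def forward(x, layers):
    # Pass the inputs forward through each (weights, biases) layer in turn
    for weights, biases in layers:
        x = [
            # Sigmoid activation of the weighted sum (illustrative choice)
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)
        ]
    return x

# Input layer (2 values) -> hidden layer (2 neurons) -> output layer (1 neuron)
hidden = ([[0.5, -0.5], [0.3, 0.8]], [0.1, -0.1])
output = ([[1.0, -1.0]], [0.0])
print(forward([1.0, 0.5], [hidden, output]))
```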
7
Q

Types of Feed forward network:

A
  1. Single-layer feed forward network (2 layers: input and output)
  2. Multilayer feed forward network (3 layers: input, hidden and output)
8
Q

What is Back-propagation?

A
  • Used in multilayered feed forward networks
  • Algorithm used to adjust the weights and biases by minimizing the error
  • The error is back-propagated towards the input layer, and each neuron adjusts its weights and biases accordingly
  • Uses gradient descent to find the optimal weights and biases
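
One back-propagation step can be sketched for a single sigmoid neuron (the input, target, starting weight, bias, and learning rate are illustrative assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One training step for a single sigmoid neuron with squared error
x, target = 0.5, 1.0
w, b, lr = 0.8, 0.1, 0.5          # illustrative weight, bias, learning rate

y = sigmoid(w * x + b)            # forward pass: prediction
error = y - target                # error between predicted and actual value
grad = error * y * (1 - y)        # chain rule: dE/dz for E = (y - t)^2 / 2

w -= lr * grad * x                # adjust weight opposite the gradient
b -= lr * grad                    # adjust bias opposite the gradient

print(abs(sigmoid(w * x + b) - target) < abs(y - target))  # True: error shrank
```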
9
Q

(BPN)
What are weights?

A

Define the strength of the connections between neurons

10
Q

(BPN)
What are biases?

A

An additional parameter that shifts the activation function to the left or the right

11
Q

What is gradient descent?

A
  • An iterative optimization algorithm
  • Aim is to minimise the cost function, i.e. the error between predicted and actual values
  • Finds a local or global minimum (point of convergence) of a differentiable function
12
Q

Process of Gradient descent:

A
  1. An arbitrary starting point is selected
  2. The slope (gradient) at that point is calculated
  3. The weights and bias are updated in the direction given by the slope
  4. With each update the slope becomes less steep, until it flattens out near the minimum value
  5. The learning rate sets the step size: too large overshoots the minimum, too small makes convergence slow
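
The steps above can be sketched on a simple one-parameter cost function (the cost function, starting point, and learning rate are illustrative assumptions):

```python
# Gradient descent on cost(w) = (w - 3)^2, whose minimum is at w = 3.

def grad(w):
    return 2 * (w - 3)        # slope of the cost at the current point

w = 10.0                      # step 1: arbitrary starting point
lr = 0.1                      # learning rate: too large overshoots, too small is slow
for _ in range(100):
    w -= lr * grad(w)         # steps 2-4: compute slope, update, repeat until flat

print(round(w, 3))            # converges close to 3.0
```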
13
Q

Solutions to the vanishing gradient problem:

A
  1. ReLU
  2. LSTM (constant gradient size)
  3. Gradient clipping
  4. Residual neural network
  5. Multi level hierarchy
14
Q

What is vanishing gradient problem?

A

The gradients become smaller and smaller as the error is back-propagated through the layers, shrinking to almost 0, so the weights of the earlier layers barely update and learning stalls

15
Q

What is the activation function?

A

Decides whether a neuron will fire or not for a given set of inputs, depending on a rule or threshold
It can be thought of as a mathematical gate between the input feeding the current neuron and the output going to the next layer

16
Q

What is ReLU?

A

Rectified linear unit
Most commonly used activation function; it is computationally cheap, which allows neurons to respond quickly

17
Q

Characteristics of ReLU:

A
  • It is nonlinear
  • It is differentiable everywhere except at zero
  • It supports back propagation
  • It does not have a fixed output range
  • It is not zero-centred
  • It does not suffer from the vanishing gradient problem
18
Q

Drawback of ReLU:

A

Dying ReLU problem: neurons that only receive negative inputs always output 0 and get zero gradient, so they stop learning

19
Q

LeakyReLU

A

Modification of ReLU that replaces the zero output for negative inputs with a small negative slope

20
Q

What is exponential ReLU?

A

It is very similar to ReLU but has an extra alpha constant; for negative inputs the ELU curve smoothly saturates until its output approaches -alpha
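
The three variants above can be sketched side by side (the alpha and slope values are common illustrative choices, not values from the cards):

```python
import math

def relu(x):
    # ReLU: zero for negative inputs, identity otherwise
    return max(0.0, x)

def leaky_relu(x, slope=0.01):
    # Leaky ReLU: replaces the zero output with a small negative slope
    return x if x > 0 else slope * x

def elu(x, alpha=1.0):
    # ELU: smoothly saturates toward -alpha for negative inputs
    return x if x > 0 else alpha * (math.exp(x) - 1)

print(relu(-2.0), leaky_relu(-2.0), round(elu(-2.0), 3))  # 0.0 -0.02 -0.865
```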

21
Q

What are Hyperparameters?

A

Parameters that contribute to the architecture of the network and are set before the training process. They are top-level parameters that control the learning process. Hyperparameters are not updated during training; their values remain constant until the end

22
Q

Types of Hyperparameters:

A
  1. Layer size
  2. Learning rate
  3. Momentum
23
Q

What is regularisation?

A

A technique used to reduce the overfitting of a model by penalizing complexity (large weight matrices). The cost function is updated by adding a regularisation term to it

24
Q

What is regularisation parameter?

A

The lambda hyperparameter controls the strength of the regularisation term and is tuned for better results

25
Q

What are the types of regularisation?

A

L1 - penalizes the absolute value of the weights; can shrink some weights to exactly zero
L2 - weight decay, forces weights to decay toward zero (but not exactly zero)
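
The two penalty terms added to the cost function can be sketched as follows (the lambda value and weights are illustrative assumptions):

```python
# L1 and L2 regularisation terms added to the cost function.

def l1_penalty(weights, lam=0.01):
    # L1: lambda times the sum of absolute weights
    return lam * sum(abs(w) for w in weights)

def l2_penalty(weights, lam=0.01):
    # L2 (weight decay): lambda times the sum of squared weights
    return lam * sum(w * w for w in weights)

weights = [0.5, -2.0, 1.5]
print(l1_penalty(weights), l2_penalty(weights))
```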

26
Q

What is drop out regularisation?

A

Regularisation method in which randomly chosen neurons are intentionally removed (dropped) from the neural network during training

27
Q

What is drop connect regularisation?

A

Generalisation of dropout where a few random individual weights are disabled instead of disabling the whole node, so the node remains partially active
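
Both dropout and drop connect can be sketched as random masks (the drop probability and the weights are illustrative assumptions):

```python
import random

random.seed(0)  # reproducible illustration

def dropout(activations, p=0.5):
    # Dropout: remove whole neurons by zeroing their outputs at random
    return [0.0 if random.random() < p else a for a in activations]

def drop_connect(weights, p=0.5):
    # DropConnect: disable individual weights instead of whole neurons,
    # so each neuron remains partially active
    return [[0.0 if random.random() < p else w for w in row] for row in weights]

print(dropout([1.0, 2.0, 3.0, 4.0]))
print(drop_connect([[0.5, -0.5], [0.3, 0.8]]))
```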