Module 01 - Fundamentals of Neural Networks Flashcards

1
Q

Neural Networks (Part 1): Model Representation

What is this an example of?

A

A nonlinear classification problem

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
2
Q

Neural Networks (Part 1): Model Representation

Why did ANNs become popular again? (2)

A
  • Better computer architecture (GPUs, parallelism)
  • More data
3
Q

Neural Networks (Part 1): Model Representation

What’s the difference between the old and new views of ANNs?

A

Previously: viewed as generic function approximators.
Now: models that learn interesting intermediate representations.

4
Q

Neural Networks (Part 1): Model Representation

What do neurons consist of? (ADS)

A

A neuron consists of:
- Axon (Single long fiber, output)
- Dendrites (Fibers, inputs)
- Soma (Cell body)

5
Q

Neural Networks (Part 1): Model Representation

Where is information processed/stored in the brain?

A

Distributed throughout the whole network rather than in specific locations.
(Though parts of the brain specialize.)

6
Q

Neural Networks (Part 1): Model Representation

Where is the nucleus located? (See image)

A
7
Q

Neural Networks (Part 1): Model Representation

Where are the dendrites located? (See image)

A
8
Q

Neural Networks (Part 1): Model Representation

Where is the cell body (soma) located? (See image)

A
9
Q

Neural Networks (Part 1): Model Representation

Where is the node of Ranvier located? (See image)

A
10
Q

Neural Networks (Part 1): Model Representation

Where is the axon located? (See image)

A
11
Q

Neural Networks (Part 1): Model Representation

Where is the myelin sheath located? (See image)

A
12
Q

Neural Networks (Part 1): Model Representation

Where is the Schwann cell located? (See image)

A
13
Q

Neural Networks (Part 1): Model Representation

Where is the axon terminal located? (See image)

A
14
Q

Neural Networks (Part 1): Model Representation

What is the purpose of the dendrites?

A

They are the input channels to the neuron.

15
Q

Neural Networks (Part 1): Model Representation

What components are the input channels to the neuron?

A

The dendrites.

16
Q

Neural Networks (Part 1): Model Representation

What is the purpose of the axon?

A

It’s the output of the neuron.

17
Q

Neural Networks (Part 1): Model Representation

What is the output channel of the neuron called?

A

The axon.

18
Q

Neural Networks (Part 1): Model Representation

What is an activation function?

A

A function applied to a neuron's weighted input sum to produce its output, e.g. the sigmoid or ReLU.

19
Q

Neural Networks (Part 1): Model Representation

What is the “step function” activation function?

A

f(x) = 0 for x < θ and 1 for x ≥ θ — the output jumps from 0 to 1 at the threshold θ.
20
Q

Neural Networks (Part 1): Model Representation

What is the “sign function” activation function?

A

sgn(x) = +1 for x ≥ 0, −1 for x < 0.
21
Q

Neural Networks (Part 1): Model Representation

What is the “sigmoid function” activation function?

A

σ(x) = 1 / (1 + e^(−x)), a smooth S-shaped curve mapping any real input into (0, 1).
22
Q

Neural Networks (Part 1): Model Representation

What is the “linear function” activation function?

A

f(x) = c·x (in the simplest case the identity, f(x) = x).
23
Q

Neural Networks (Part 1): Model Representation

What is a requirement for activation functions?

A

They have to be differentiable (so gradient-based training such as backpropagation can be used).

24
Q

Neural Networks (Part 1): Model Representation

What is a logistic unit?

A

A neuron with a sigmoid function (e.g. logistic function) applied to the outputs.
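As an illustrative sketch (the function name and numbers are made up, not from the course), a logistic unit can be written in Python as:

```python
import math

def logistic_unit(inputs, weights, bias):
    """A single neuron: weighted sum of inputs plus bias,
    passed through the logistic (sigmoid) activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Example with two inputs
out = logistic_unit([1.0, 2.0], [0.5, -0.25], 0.1)
print(round(out, 3))  # → 0.525
```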

25
# Neural Networks (Part 1): Model Representation What is multiclass classification?
A neural network predicting one of several classes at once, e.g. cat vs. dog vs. rabbit.
26
# Neural Networks (Part 1): Model Representation What's another name for one-vs-all classification?
Multiclass classification
27
# Neural Networks (Part 1): Model Representation What is Hebbian learning?
If two units are both active (firing), the weight between them should increase.
28
# Neural Networks (Part 1): Model Representation What's the name for the "Neurons that fire together wire together" rule.
Hebbian learning.
29
# Neural Networks (Part 1): Model Representation What is a feedforward neural network?
An NN with no cycles (no connections feeding back).
30
# Neural Networks (Part 1): Model Representation What do we call an NN with cycles?
A recurrent neural network.
31
# Neural Networks (Part 1): Model Representation What is a recurrent neural network?
An NN with cycles.
32
# Neural Networks (Part 2): Learning In Hebbian learning, are learning rules local or global?
Local
33
# Neural Networks (Part 2): Learning What's the idea behind Hebbian learning in NNs?
Change weights based on correlation of connected neurons
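A minimal sketch of the simple Hebb rule (the learning rate and activity values are illustrative assumptions):

```python
def hebbian_update(w, x, y, lr=0.1):
    """Simple Hebb rule: strengthen the weight in proportion to the
    correlation of pre-synaptic (x) and post-synaptic (y) activity."""
    return w + lr * x * y

w = 0.0
# Both units active together -> the weight keeps growing
# (which is exactly the unbounded-growth problem noted below)
for _ in range(3):
    w = hebbian_update(w, x=1.0, y=1.0)
print(w)
```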
34
# Neural Networks (Part 2): Learning When does Hebbian learning work best?
It works best when each input's relevance to the output is independent of the other inputs.
35
# Neural Networks (Part 2): Learning What's the problem with Hebbian learning and weights?
Simple Hebb rule grows weights unbounded
36
# Neural Networks (Part 2): Learning What's the perceptron learning rule?
New weights are old weights plus a portion of the error: w ← w + η (t − y) x, where t is the target, y the prediction, and η the learning rate.
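The perceptron rule can be sketched for a single threshold unit (function name, learning rate, and example values are illustrative):

```python
def perceptron_update(weights, bias, x, target, lr=0.1):
    """Perceptron learning rule: new weights = old weights
    plus a portion (lr) of the error times the input."""
    y = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0
    error = target - y  # -1, 0, or +1 for a binary threshold unit
    new_weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    new_bias = bias + lr * error
    return new_weights, new_bias

# One update on a misclassified example: prediction 1, target 0
w, b = perceptron_update([0.0, 0.0], 0.0, x=[1.0, 1.0], target=0)
```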
37
# Neural Networks (Part 2): Learning What's the Widrow-Hoff Rule?
The Widrow-Hoff (delta) rule minimizes the mean squared difference between the predicted output and the target output.
38
# Neural Networks (Part 2): Learning How do you calculate the gradient of a multi-variable function? (See image)
The gradient is the vector of partial derivatives with respect to each variable: ∇f = (∂f/∂x₁, …, ∂f/∂xₙ).
39
# Neural Networks (Part 2): Learning What is gradient descent?
Iteratively stepping in the direction of the negative gradient of a function in an attempt to reach a minimum.
40
# Neural Networks (Part 2): Learning Under what situations will gradient descent reach the global minimum of a function?
When the function is convex.
41
# Neural Networks (Part 2): Learning What is the gradient descent update rule? (Formula)
θ ← θ − α · ∇J(θ), where α is the learning rate and J the error (cost) function.
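The update rule can be illustrated on a simple convex function (the example function, learning rate, and step count are assumptions for the sketch):

```python
def gradient_descent(grad, theta, lr=0.1, steps=100):
    """Repeatedly apply the update rule: theta <- theta - lr * grad(theta)."""
    for _ in range(steps):
        theta = theta - lr * grad(theta)
    return theta

# Minimize the convex function f(theta) = (theta - 3)^2,
# whose gradient is 2 * (theta - 3)
theta = gradient_descent(lambda t: 2 * (t - 3), theta=0.0)
print(theta)  # converges toward the global minimum at 3
```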
42
# Neural Networks (Part 2): Learning What is the squared error formula?
E = ½ · Σᵢ (tᵢ − yᵢ)², summing over examples, where t is the target and y the prediction (the ½ is included for convenience when differentiating).
43
# Neural Networks (Part 2): Learning What are the parameters we want to train with gradient descent?
Weights (+ bias) and thresholds.
44
# Neural Networks (Part 2): Learning What are the requirements for using the backpropagation algorithm?
All activation functions have to be differentiable, as well as the error function.
45
# Neural Networks (Part 2): Learning How do you calculate the derivative of z with respect to x? (See image)
With the chain rule: dz/dx = (dz/dy) · (dy/dx), multiplying the derivatives along the path from x to z.
46
# Neural Networks (Part 2): Learning How do you calculate the derivative of z with respect to x, when the path to z branches? (See image)
Apply the chain rule along each path and sum the contributions, e.g. dz/dx = (dz/dy₁)(dy₁/dx) + (dz/dy₂)(dy₂/dx).
47
# Neural Networks (Part 2): Learning What is gradient checking?
Numerically approximating the gradients.
48
# Neural Networks (Part 2): Learning How do you use gradient checking?
Compare the numerically approximated gradients against those computed by backprop; if they (approximately) agree, backprop is implemented correctly.
49
# Neural Networks (Part 2): Learning What technique do we use to check if backprop is properly implemented?
Gradient checking (numerical technique).
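A sketch of gradient checking via central differences (the example function and the analytic gradient are made up for illustration):

```python
def numerical_gradient(f, theta, eps=1e-5):
    """Central-difference approximation of df/dtheta at each coordinate."""
    grads = []
    for i in range(len(theta)):
        plus = theta[:]
        plus[i] += eps
        minus = theta[:]
        minus[i] -= eps
        grads.append((f(plus) - f(minus)) / (2 * eps))
    return grads

# Check an analytic gradient of f(t) = t0^2 + 3*t1 at theta = [2.0, 5.0]
f = lambda t: t[0] ** 2 + 3 * t[1]
analytic = [2 * 2.0, 3.0]  # d/dt0 = 2*t0, d/dt1 = 3
numeric = numerical_gradient(f, [2.0, 5.0])
```

If the two gradients differ noticeably, the analytic (backprop) implementation is suspect.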
50
# Neural Networks (Part 2): Learning How are the parameters of a NN initialized?
They are initialized with small random values.
51
# Neural Networks (Part 2): Learning Why is "Symmetry breaking" important?
If the parameters are initialized with identical values, the corresponding neurons compute the same output and receive the same gradient update, so they can never differentiate. Random initialization breaks this symmetry.
52
# Neural Networks (Part 2): Learning What is deep learning?
The study of neural networks with 3+ layers.
53
# Neural Networks (Part 2): Learning What is SGD with mini-batches?
Stochastic gradient descent where each update is computed on a small batch of training examples rather than on a single example.
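A sketch of splitting a dataset into mini-batches, one SGD update per batch (function name and batch size are illustrative):

```python
import random

def minibatches(data, batch_size):
    """Shuffle the data and yield it in small batches."""
    data = data[:]  # copy so the caller's list is untouched
    random.shuffle(data)
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

# 10 examples in batches of 4 -> batch sizes 4, 4, 2
batches = list(minibatches(list(range(10)), batch_size=4))
print([len(b) for b in batches])  # → [4, 4, 2]
```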
54
# Neural Networks (Part 2): Learning What is SGD short for?
Stochastic gradient descent.
55
# Neural Networks (Part 2): Learning What is weight decay?
Shrinking the weights by multiplying them by a constant c (slightly less than 1) after every epoch.
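The decay step itself is a one-liner; a sketch (the constant c = 0.99 and the weight values are illustrative):

```python
def apply_weight_decay(weights, c=0.99):
    """Weight decay: shrink every weight by a constant factor c < 1."""
    return [c * w for w in weights]

decayed = apply_weight_decay([1.0, -2.0, 0.5])
print(decayed)
```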
56
# Neural Networks (Part 2): Learning What is weight decay helpful for?
Reducing overfitting.
57
# Neural Networks (Part 2): Learning What is weight decay similar to?
Adding a weight-regularization (L2) term to the error function.