Module 01 - Fundamentals of Neural Networks Flashcards

1
Q

Neural Networks (Part 1): Model Representation

What is this an example of? (See image)

A

A nonlinear classification problem

2
Q

Neural Networks (Part 1): Model Representation

Why did ANNs become popular again? (2)

A
  • Better computer architecture (GPUs, parallelism)
  • More data
3
Q

Neural Networks (Part 1): Model Representation

What’s the difference between the old and new views of ANNs?

A

Previously: viewed simply as function approximators.
Now: valued for the interesting intermediate representations they learn.

4
Q

Neural Networks (Part 1): Model Representation

What do neurons consist of? (ADS)

A

A neuron consists of:
- Axon (single long fiber; the output)
- Dendrites (fibers; the inputs)
- Soma (the cell body)

5
Q

Neural Networks (Part 1): Model Representation

Where is information processed/stored in the brain?

A

Processed and stored simultaneously throughout the whole network, rather than in specific locations.
(Though parts of the brain specialize.)

6
Q

Neural Networks (Part 1): Model Representation

Where is the nucleus located? (See image)

A
7
Q

Neural Networks (Part 1): Model Representation

Where are the dendrites located? (See image)

A
8
Q

Neural Networks (Part 1): Model Representation

Where is the cell body (soma) located? (See image)

A
9
Q

Neural Networks (Part 1): Model Representation

Where is the node of Ranvier located? (See image)

A
10
Q

Neural Networks (Part 1): Model Representation

Where is the axon located? (See image)

A
11
Q

Neural Networks (Part 1): Model Representation

Where is the myelin sheath located? (See image)

A
12
Q

Neural Networks (Part 1): Model Representation

Where is the Schwann cell located? (See image)

A
13
Q

Neural Networks (Part 1): Model Representation

Where is the axon terminal located? (See image)

A
14
Q

Neural Networks (Part 1): Model Representation

What is the purpose of the dendrites?

A

They are the input channels to the neuron.

15
Q

Neural Networks (Part 1): Model Representation

What components are the input channels to the neuron?

A

The dendrites.

16
Q

Neural Networks (Part 1): Model Representation

What is the purpose of the axon?

A

It’s the output of the neuron.

17
Q

Neural Networks (Part 1): Model Representation

What is the output channel of the neuron called?

A

The axon.

18
Q

Neural Networks (Part 1): Model Representation

What is an activation function?

A

A function applied to a neuron's weighted input sum to produce its output, e.g. sigmoid or ReLU.
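For example, a minimal Python sketch of the two functions named above (NumPy is an implementation assumption, not from the card):

import numpy as np

def sigmoid(z):
    # Squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # Passes positive inputs through unchanged, zeroes out negatives.
    return np.maximum(0.0, z)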

19
Q

Neural Networks (Part 1): Model Representation

What is the “step function” activation function?

A

f(x) = 1 if x ≥ 0, else 0 (a hard threshold that jumps from 0 to 1).

20
Q

Neural Networks (Part 1): Model Representation

What is the “sign function” activation function?

A

f(x) = +1 if x ≥ 0, else -1 (like the step function, but bipolar).

21
Q

Neural Networks (Part 1): Model Representation

What is the “sigmoid function” activation function?

A

σ(x) = 1 / (1 + e^(-x)) (a smooth S-shaped curve mapping any real input into (0, 1)).

22
Q

Neural Networks (Part 1): Model Representation

What is the “linear function” activation function?

A

f(x) = c · x, the identity f(x) = x in the simplest case (the output is proportional to the input; nothing is squashed).

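A minimal Python sketch of the other three activations in this group of cards (sigmoid appears in the sketch under card 18); the vectorized np.where form is an implementation choice, not from the course:

import numpy as np

def step(z):
    return np.where(z >= 0, 1.0, 0.0)    # hard threshold: 0 or 1

def sign(z):
    return np.where(z >= 0, 1.0, -1.0)   # bipolar threshold: -1 or +1

def linear(z):
    return z                             # identity: no squashing at all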
23
Q

Neural Networks (Part 1): Model Representation

What is a requirement for activation functions?

A

They have to be differentiable (so that gradient-based training can compute derivatives through them).

24
Q

Neural Networks (Part 1): Model Representation

What is a logistic unit?

A

A neuron with a sigmoid function (e.g. the logistic function) applied to its output.

25
Q

Neural Networks (Part 1): Model Representation

What is multiclass classification?

A

A neural network choosing among more than two classes at once, e.g. cat vs. dog vs. rabbit.

26
Q

Neural Networks (Part 1): Model Representation

What’s another name for one-vs-all classification?

A

Multiclass classification

27
Q

Neural Networks (Part 1): Model Representation

What is Hebbian learning?

A

If two units are both active (firing), the weight between them should increase.
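In symbols (one standard formulation; the learning rate η and unit activities xᵢ, xⱼ are not from the card): Δwᵢⱼ = η · xᵢ · xⱼ, so the weight grows whenever both activities are high at the same time.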

28
Q

Neural Networks (Part 1): Model Representation

What’s the name for the “neurons that fire together, wire together” rule?

A

Hebbian learning.

29
Q

Neural Networks (Part 1): Model Representation

What is a feedforward neural network?

A

An NN with no cycles (no connections leading back to earlier layers).

30
Q

Neural Networks (Part 1): Model Representation

What do we call an NN with cycles?

A

A recurrent neural network.

31
Q

Neural Networks (Part 1): Model Representation

What is a recurrent neural network?

A

An NN with cycles.

32
Q

Neural Networks (Part 2): Learning

In Hebbian learning, are learning rules local or global?

A

Local

33
Q

Neural Networks (Part 2): Learning

What’s the idea behind Hebbian learning in NNs?

A

Change a weight based on the correlation between the activities of the two neurons it connects.

34
Q

Neural Networks (Part 2): Learning

When does Hebbian learning work best?

A

It works best when the relevance of each input to the output is independent of the other inputs.

35
Q

Neural Networks (Part 2): Learning

What’s the problem with Hebbian learning and weights?

A

The simple Hebb rule grows the weights without bound.

36
Q

Neural Networks (Part 2): Learning

What’s the perceptron learning rule?

A

New weights are old weights plus a portion of the error.
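Written out (a standard formulation; the symbols η for the learning rate, t for the target, y for the prediction, and xᵢ for the i-th input are not from the card): wᵢ ← wᵢ + η · (t − y) · xᵢ, i.e. each weight moves by a portion (η) of the error, weighted by its input.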

37
Q

Neural Networks (Part 2): Learning

What’s the Widrow-Hoff Rule?

A

The Widrow-Hoff rule aims to minimize the mean squared difference between the predicted and the observed (actual) response.
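In update form this is the delta (LMS) rule, applied to the unit's linear output (a standard formulation; same symbols as in card 36): Δwᵢ = η · (t − y) · xᵢ, a gradient-descent step on the squared error E = ½ · (t − y)².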

38
Q

Neural Networks (Part 2): Learning

How do you calculate the gradient of a multi-variable function? (See image)

A

∇f = (∂f/∂x₁, …, ∂f/∂xₙ): collect the partial derivative of f with respect to each variable into one vector.

39
Q

Neural Networks (Part 2): Learning

What is gradient descent?

A

Iteratively stepping against the gradient of a function in an attempt to reach its minimum (ideally the global minimum).
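A minimal Python sketch of the idea on the convex function f(w) = (w - 3)²; the function, starting point, and learning rate are illustrative assumptions:

def grad(w):
    # Derivative of f(w) = (w - 3)**2.
    return 2.0 * (w - 3.0)

w = 0.0    # arbitrary starting point
eta = 0.1  # learning rate
for _ in range(100):
    w -= eta * grad(w)  # step against (down) the gradient

print(w)  # approaches the minimum at w = 3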

40
Q

Neural Networks (Part 2): Learning

Under what situations will gradient descent reach the global minimum of a function?

A

When the function is convex.

41
Q

Neural Networks (Part 2): Learning

What is the gradient descent update rule? (Formula)

A

w ← w − η · ∇E(w): move each parameter a small step against its gradient, scaled by the learning rate η.

42
Q

Neural Networks (Part 2): Learning

What is the squared error formula?

A

E = ½ · Σᵢ (tᵢ − yᵢ)², where tᵢ is the target and yᵢ the prediction (the ½ is a convention that cancels when differentiating).

43
Q

Neural Networks (Part 2): Learning

What are the parameters we want to train with gradient descent?

A

Weights (+ bias) and thresholds.

44
Q

Neural Networks (Part 2): Learning

What are the requirements for using the backpropagation algorithm?

A

All activation functions have to be differentiable, as well as the error function.

45
Q

Neural Networks (Part 2): Learning

How do you calculate the derivative of z with respect to x? (See image)

A

Apply the chain rule along the path x → y → z: dz/dx = (dz/dy) · (dy/dx).

46
Q

Neural Networks (Part 2): Learning

How do you calculate the derivative of z with respect to x, when the path to z branches? (See image)

A

Sum the chain rule over the branches: dz/dx = Σᵢ (dz/dyᵢ) · (dyᵢ/dx), with one term per path from x to z.

47
Q

Neural Networks (Part 2): Learning

What is gradient checking?

A

Numerically approximating the gradients.
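A minimal Python sketch of a central-difference check; the toy error function and its analytic gradient are illustrative assumptions:

import numpy as np

def f(w):
    return np.sum(w ** 2)   # toy error function

def analytic_grad(w):
    return 2 * w            # the gradient we want to verify

def numerical_grad(f, w, eps=1e-5):
    # Central difference per coordinate: (f(w + eps*e_i) - f(w - eps*e_i)) / (2*eps).
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

w = np.random.randn(5)
print(np.allclose(analytic_grad(w), numerical_grad(f, w)))  # True if the analytic gradient is right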

48
Q

Neural Networks (Part 2): Learning

How do you use gradient checking?

A

Compare the analytic gradients computed by backprop against the numerical approximations; if they agree to within a small tolerance, backprop is properly implemented.

49
Q

Neural Networks (Part 2): Learning

What technique do we use to check if backprop is properly implemented?

A

Gradient checking (numerical technique).

50
Q

Neural Networks (Part 2): Learning

How are the parameters of a NN initialized?

A

They are initialized with random values.

51
Q

Neural Networks (Part 2): Learning

Why is “Symmetry breaking” important?

A

If a NN’s parameters are all initialized to the same values, the units in a layer compute identical outputs and receive identical gradient updates, so gradient descent can never make them differ. Initializing with random parameters breaks this symmetry.
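A minimal Python sketch of symmetry breaking via random initialization; the layer sizes and the 0.01 scale are illustrative assumptions:

import numpy as np

n_in, n_out = 4, 3
# Small random weights make every unit compute something different from the start;
# identical initial weights would receive identical gradient updates forever.
W = 0.01 * np.random.randn(n_out, n_in)
b = np.zeros(n_out)  # biases can start at zero once the weights differ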

52
Q

Neural Networks (Part 2): Learning

What is deep learning?

A

The study of neural networks with 3+ layers.

53
Q

Neural Networks (Part 2): Learning

What is SGD with mini-batches?

A

Stochastic gradient descent that computes each update from a small batch of training examples rather than from a single one.
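A minimal Python sketch of the mini-batch loop; the gradient helper grad_fn and all hyperparameters are illustrative assumptions:

import numpy as np

def sgd_minibatch(X, y, w, grad_fn, eta=0.01, batch_size=32, epochs=10):
    n = X.shape[0]
    for _ in range(epochs):
        idx = np.random.permutation(n)  # reshuffle the data every epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # One update per mini-batch (not per example, not per full dataset).
            w = w - eta * grad_fn(w, X[batch], y[batch])
    return w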

54
Q

Neural Networks (Part 2): Learning

What is SGD short for?

A

Stochastic gradient descent.

55
Q

Neural Networks (Part 2): Learning

What is weight decay?

A

Decaying the weights by multiplying them by a constant c (slightly below 1) after every epoch.
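A minimal Python sketch; the decay constant, epoch count, and weight vector are illustrative assumptions:

import numpy as np

w = np.random.randn(10)  # illustrative weight vector
c = 0.999                # decay constant, slightly below 1
for epoch in range(100):
    # ... gradient updates for this epoch would go here ...
    w *= c               # shrink every weight after the epoch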

56
Q

Neural Networks (Part 2): Learning

What is weight decay helpful for?

A

Reducing overfitting.

57
Q

Neural Networks (Part 2): Learning

What is weight decay similar to?

A

Adding a weight regularization term (e.g. an L2 penalty) to the error.