Lecture 15 - Neural networks: backpropagation Flashcards

1
Q

What is forward propagation?

A

The process of computing outputs by passing inputs through the network layer by layer, applying each layer's weights and activation function in turn.
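
A minimal sketch of one forward pass through a two-layer network, assuming NumPy; the weights W1, W2 and biases b1, b2 are hypothetical parameters supplied by the caller:

```python
import numpy as np

def sigmoid(z):
    # Squash values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Hidden layer: linear transformation followed by a nonlinearity
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    # Output layer: same pattern, producing the prediction
    z2 = W2 @ a1 + b2
    return sigmoid(z2)
```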

2
Q

What are the components of a simple neural network?

A

Input layer: Takes raw data.
Hidden layers: Perform computations and transformations.
Output layer: Produces predictions.
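
A minimal sketch wiring the three components together, with assumed illustrative sizes (3 inputs, 4 hidden units, 2 outputs):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 3, 4, 2       # illustrative layer sizes

# Hidden layer parameters (computations and transformations happen here)
W1 = rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)

# Output layer parameters
W2 = rng.standard_normal((n_out, n_hidden))
b2 = np.zeros(n_out)

x = rng.standard_normal(n_in)         # input layer: raw data
h = np.maximum(0.0, W1 @ x + b1)      # hidden layer: linear map + ReLU
y = W2 @ h + b2                       # output layer: predictions (logits)
print(y)
```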

3
Q

What is backpropagation?

A

An algorithm that efficiently computes the gradients of the loss function with respect to all the weights, propagating error backwards layer by layer using the chain rule.
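
A single-neuron sketch of the idea, assuming squared-error loss L = ½(a − y)² with a = sigmoid(w·x + b); the function name is illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_single_neuron(x, y, w, b):
    # Forward pass
    z = np.dot(w, x) + b
    a = sigmoid(z)
    # Backward pass via the chain rule: dL/dw = dL/da * da/dz * dz/dw
    dL_da = a - y                  # derivative of 1/2 (a - y)^2
    da_dz = a * (1.0 - a)          # sigmoid derivative
    delta = dL_da * da_dz
    grad_w = delta * x             # dz/dw = x
    grad_b = delta                 # dz/db = 1
    return grad_w, grad_b
```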

4
Q

Why is the chain rule important in backpropagation?

A

It allows the gradient of a composite function to be computed as a product of smaller, manageable derivatives: d/dx f(g(x)) = f'(g(x)) · g'(x).
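
A quick numeric sanity check of the chain rule on f(g(x)) with f(u) = u² and g(x) = sin(x): the analytic product f'(g(x)) · g'(x) should match a finite-difference estimate:

```python
import math

def f(u):
    return u ** 2

def g(x):
    return math.sin(x)

x = 0.7
# Chain rule: d/dx f(g(x)) = f'(g(x)) * g'(x) = 2 sin(x) cos(x)
analytic = 2.0 * g(x) * math.cos(x)

# Central finite-difference approximation for comparison
eps = 1e-6
numeric = (f(g(x + eps)) - f(g(x - eps))) / (2.0 * eps)

print(analytic, numeric)  # the two values agree to high precision
```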

5
Q

Name and describe common activation functions.

A

Sigmoid: σ(z) = 1 / (1 + e^(-z)); maps inputs to (0, 1); used for binary outputs.
ReLU: g(z) = max(0, z); cheap to compute and mitigates vanishing gradients.
Softmax: Converts a vector of logits into a probability distribution for multi-class classification.
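
Plain NumPy versions of the three; the softmax subtracts the max logit first, a standard trick for numerical stability:

```python
import numpy as np

def sigmoid(z):
    # Maps any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    # g(z) = max(0, z), applied elementwise
    return np.maximum(0.0, z)

def softmax(logits):
    # Subtract the max for numerical stability; the results sum to 1
    shifted = logits - np.max(logits)
    exps = np.exp(shifted)
    return exps / np.sum(exps)
```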

6
Q

What is a computational graph?

A

A structured representation of a computation where:

Nodes: Variables and operations.
Edges: The flow of values between them (data dependencies).
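
A toy sketch, assuming a hypothetical Node class, that represents e = (a + b) * c as a graph of variable and operation nodes:

```python
# Toy computational graph for e = (a + b) * c
class Node:
    def __init__(self, op=None, inputs=(), value=None):
        self.op = op          # operation name, or None for a leaf variable
        self.inputs = inputs  # edges: where this node's input values flow from
        self.value = value

    def evaluate(self):
        if self.op is None:
            return self.value
        vals = [n.evaluate() for n in self.inputs]
        return vals[0] + vals[1] if self.op == "add" else vals[0] * vals[1]

a, b, c = Node(value=2.0), Node(value=3.0), Node(value=4.0)
d = Node(op="add", inputs=(a, b))   # d = a + b
e = Node(op="mul", inputs=(d, c))   # e = d * c
print(e.evaluate())                 # 20.0
```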

7
Q

What is the goal of gradient-based learning?

A

To minimize the cost function J by iteratively updating the weights in the direction of the negative gradient, as in gradient descent.
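
A minimal gradient-descent sketch on the quadratic cost J(w) = (w − 3)², whose gradient is 2(w − 3); the learning rate is an illustrative choice:

```python
def grad_J(w):
    # Gradient of J(w) = (w - 3)^2
    return 2.0 * (w - 3.0)

w = 0.0           # initial weight
alpha = 0.1       # learning rate
for _ in range(100):
    w -= alpha * grad_J(w)  # step against the gradient

print(w)  # converges towards the minimizer w = 3
```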

8
Q

What are the steps of backpropagation?

A

Forward Pass: Compute predictions, caching intermediate activations.
Backward Pass: Propagate the error backwards and calculate gradients via the chain rule.
Update Weights: Adjust the weights using the gradients (a gradient-descent step); all three steps are combined in the sketch below.
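
One training iteration for a tiny two-layer sigmoid network with squared-error loss; a sketch under assumed vector shapes, not a full framework:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, y, W1, b1, W2, b2, lr=0.1):
    # 1. Forward pass: compute predictions, caching intermediates
    z1 = W1 @ x + b1
    a1 = sigmoid(z1)
    z2 = W2 @ a1 + b2
    a2 = sigmoid(z2)

    # 2. Backward pass: calculate gradients via the chain rule
    delta2 = (a2 - y) * a2 * (1 - a2)         # output-layer error
    grad_W2 = np.outer(delta2, a1)
    grad_b2 = delta2
    delta1 = (W2.T @ delta2) * a1 * (1 - a1)  # error propagated to hidden layer
    grad_W1 = np.outer(delta1, x)
    grad_b1 = delta1

    # 3. Update weights: gradient-descent step
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    return W1, b1, W2, b2
```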

9
Q

How does dynamic programming optimize backpropagation?

A

By caching intermediate values (forward-pass activations and each layer's downstream gradient) so that each quantity is computed once and reused, rather than re-expanding the chain rule separately for every weight.
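
A sketch of the caching idea on a chain of n identical tanh layers (chosen purely for illustration): the forward pass stores every intermediate value, and the backward pass then costs one multiply per layer:

```python
import math

def f(z):
    return math.tanh(z)

def f_prime(z):
    return 1.0 - math.tanh(z) ** 2

def chain_gradient(x, n):
    # Forward pass: cache every intermediate value (the DP table)
    zs = [x]
    for _ in range(n):
        zs.append(f(zs[-1]))

    # Backward pass: one multiply per layer, reusing the cached values,
    # for O(n) total work instead of re-deriving each factor from scratch
    grad = 1.0
    for z in reversed(zs[:-1]):
        grad *= f_prime(z)
    return grad

print(chain_gradient(0.5, 10))  # d/dx of tanh applied 10 times
```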

10
Q

How can overfitting in neural networks be prevented?

A

Dropout: Randomly deactivate units during training (sketched below).
L2 Regularization: Penalize large weights by adding a term proportional to the squared weight norm to the cost.
Early Stopping: Halt training when validation loss stops improving.
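
A minimal sketch of inverted dropout on an activation vector, assuming NumPy; units are zeroed with probability p during training, and the survivors are rescaled so the expected activation matches test time:

```python
import numpy as np

def dropout(a, p=0.5, training=True, seed=None):
    # At test time, dropout is a no-op
    if not training:
        return a
    rng = np.random.default_rng(seed)
    # Randomly deactivate units with probability p
    mask = rng.random(a.shape) >= p
    # Inverted dropout: rescale so expectations match test time
    return a * mask / (1.0 - p)
```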
