Unit 3: Neural Networks Flashcards

1
Q

What are neural networks, and how do they function?

A

Neural Networks: A set of algorithms, loosely modeled on the human brain, designed to recognize patterns and learn from data.
Functionality:

Input Layer: Receives input features.
Hidden Layers: Intermediate layers that transform their inputs through weighted sums and activation functions.
Output Layer: Produces final predictions or classifications.
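
A minimal NumPy sketch of this layered structure; the layer sizes, random weights, and choice of sigmoid activation are illustrative assumptions, not part of the card:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Illustrative sizes: 3 input features, 4 hidden units, 1 output
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

x = np.array([0.5, -1.2, 3.0])    # input layer: the raw features
h = sigmoid(x @ W1 + b1)          # hidden layer: weighted sum + activation
y_hat = sigmoid(h @ W2 + b2)      # output layer: final prediction
print(y_hat)
```
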
2
Q

Explain the process of training a neural network.

A

Training Process:

Forward Propagation: Input data passes through the network, producing output predictions.
Loss Calculation: Compares predicted output with actual labels using a loss function (e.g., Mean Squared Error for regression).
    Equation: \( \mathrm{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \), where \( y_i \) is the actual value and \( \hat{y}_i \) is the predicted value.
Backpropagation: Computes the gradient of the loss with respect to each weight by applying the chain rule backward through the network.
    Gradient Descent: Updates the weights in the direction opposite the gradient to reduce the loss (see the sketch below).
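
A minimal sketch of this loop for a single linear neuron rather than a full network, to keep the gradients explicit; the toy data, learning rate, and epoch count are illustrative assumptions:

```python
import numpy as np

# Toy regression data: y ≈ 2x + 1 (assumed for illustration)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 2 * X + 1 + 0.1 * rng.normal(size=(100, 1))

w, b = 0.0, 0.0   # parameters of a single linear neuron
lr = 0.1          # learning rate (assumed)

for epoch in range(200):
    y_hat = w * X + b                        # forward propagation
    loss = np.mean((y - y_hat) ** 2)         # loss calculation (MSE)
    grad_w = np.mean(-2 * X * (y - y_hat))   # backpropagation: dLoss/dw
    grad_b = np.mean(-2 * (y - y_hat))       # backpropagation: dLoss/db
    w -= lr * grad_w                         # gradient descent step
    b -= lr * grad_b

print(w, b)   # should be close to 2 and 1
```
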
3
Q

Describe common activation functions used in neural networks and their significance.

A

Common Activation Functions:

Sigmoid Function: Maps input to a value between 0 and 1.
    Equation: \( \sigma(x) = \frac{1}{1 + e^{-x}} \)
    Use Case: Binary classification problems.
ReLU (Rectified Linear Unit): Outputs the input directly if positive; otherwise, it outputs zero.
    Equation: \( f(x) = \max(0, x) \)
    Use Case: Hidden layers in deep learning models.
Softmax Function: Converts raw scores into probabilities that sum to one.
    Equation: \( P(y_i) = \frac{e^{z_i}}{\sum_j e^{z_j}} \), where \( z_i \) are the raw scores.
    Use Case: Multi-class classification problems.
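
A short NumPy sketch of all three functions; the sample inputs are arbitrary, and the max-shift inside softmax is a standard numerical-stability trick not mentioned on the card:

```python
import numpy as np

def sigmoid(x):
    # Maps any real value into (0, 1); used for binary classification outputs
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Passes positive values through unchanged, zeroes out negatives
    return np.maximum(0, x)

def softmax(z):
    # Subtracting the max avoids overflow; the result sums to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([2.0, -1.0, 0.5])   # arbitrary raw scores
print(sigmoid(z))                # elementwise values in (0, 1)
print(relu(z))                   # [2.  0.  0.5]
print(softmax(z))                # probabilities summing to 1
```
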
4
Q

What is overfitting in neural networks, and how can it be prevented?

A

Overfitting: When a model learns noise in the training data, resulting in poor generalization to new data.
Prevention Techniques:

Regularization: Adding a penalty term to the loss function (e.g., L1 or L2 regularization).
    L2 Regularization Equation: \( \mathrm{Loss}_{\text{new}} = \mathrm{Loss}_{\text{original}} + \lambda \sum_i w_i^2 \), where \( \lambda \) is the regularization parameter.
Dropout: Randomly setting a fraction of neurons to zero during training to prevent co-adaptation.
Early Stopping: Monitoring validation loss and stopping training when it begins to increase.
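
A brief NumPy sketch of all three techniques; the λ value, dropout rate, patience threshold, and the validation-loss history are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# L2 regularization: penalize large weights on top of the original loss
def l2_loss(original_loss, weights, lam=0.01):   # lam value is assumed
    return original_loss + lam * np.sum(weights ** 2)

# Dropout (inverted): zero a random fraction of activations during training
# and rescale the survivors so the expected activation is unchanged
def dropout(activations, rate=0.5):              # rate value is assumed
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

# Early stopping: halt when validation loss stops improving for `patience` epochs
best_val, patience, wait = np.inf, 3, 0
val_losses = [0.9, 0.7, 0.6, 0.61, 0.63, 0.65]   # fake history for illustration
for epoch, val_loss in enumerate(val_losses):
    if val_loss < best_val:
        best_val, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            print(f"stopping early at epoch {epoch}")
            break
```
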