Block 1: Fundamentals of Machine Learning Flashcards

1
Q

Name one application of neural networks mentioned in the text.

A

Handwriting recognition.

2
Q

What is the primary function of the sigmoid output function in neural networks?

A

The sigmoid function is used to map the output of a neuron to a value between 0 and 1.
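
A minimal sketch of this mapping in pure Python (the function name and test inputs are chosen for the demo):

```python
import math

def sigmoid(x):
    """Map any real input to a value in the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Large positive inputs approach 1; large negative inputs approach 0.
print(sigmoid(0))   # 0.5 exactly
print(sigmoid(6))   # close to 1
print(sigmoid(-6))  # close to 0
```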

3
Q

What are neural networks typically used for in machine learning?

A

They are used for learning, modeling, and making predictions based on data.

4
Q

What is the significance of weights in a neural network?

A

Weights determine the strength of the signal that neurons send to each other, impacting the network’s overall output.
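
A sketch of how a weight scales a signal, using a single artificial neuron (values invented for illustration):

```python
def neuron_output(inputs, weights, bias):
    """Weighted sum of inputs plus bias: larger weights amplify a signal."""
    return sum(x * w for x, w in zip(inputs, weights)) + bias

inputs = [1.0, 0.5]
# Doubling the first weight doubles that input's contribution.
low = neuron_output(inputs, [0.2, 0.4], bias=0.0)   # ~0.4
high = neuron_output(inputs, [0.4, 0.4], bias=0.0)  # ~0.6
```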

5
Q

Define the term ‘overfitting’ in the context of neural networks.

A

Overfitting occurs when a model learns the training data too well, including its noise and fluctuations, and performs poorly on new, unseen data.

6
Q

What is the role of the loss function in neural networks?

A

The loss function measures how well the neural network is performing, guiding adjustments to improve accuracy.
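
One common loss function is mean squared error; a minimal sketch (data invented for the demo):

```python
def mse(predicted, actual):
    """Mean squared error: average of the squared differences."""
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

targets = [1.1, 2.1, 2.9]
close = mse([1.0, 2.0, 3.0], targets)  # small loss: predictions are near
far = mse([0.0, 0.0, 0.0], targets)    # large loss: predictions are off
# Lower loss means better performance; training adjusts weights to reduce it.
```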

7
Q

Discuss the ethical considerations of using neural networks.

A

Ethical considerations include ensuring fairness, transparency, and accountability in decision-making processes, and addressing privacy and data security concerns.

8
Q

How does a three-layer network learn through errors?

A

It learns by adjusting its weights based on the errors in the output compared to the expected result, a process known as backpropagation.

9
Q

What are the challenges in initializing weights for datasets in neural networks?

A

Challenges include avoiding weights that are too large, which can saturate activations or make gradients explode, and weights that are too small, which shrink the signal layer by layer and cause the vanishing gradient problem; both slow or stall learning.
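
One common heuristic (an assumption here, not from the source text) is to scale the initial spread by 1/sqrt(fan_in) so the signal neither shrinks nor explodes in early layers:

```python
import random

def init_weights(fan_in, fan_out, seed=0):
    """Draw weights uniformly in [-s, s] with s = 1/sqrt(fan_in),
    a common scaling heuristic for stable early-layer signals."""
    rng = random.Random(seed)
    scale = fan_in ** -0.5
    return [[rng.uniform(-scale, scale) for _ in range(fan_out)]
            for _ in range(fan_in)]

w = init_weights(fan_in=100, fan_out=10)
# For fan_in = 100, every weight lies inside [-0.1, 0.1].
```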

10
Q

Explain how the error function is used in training neural networks.

A

The error function measures the difference between the network’s predicted output and the actual output, guiding the adjustment of weights to minimize this error.

11
Q

Discuss the trade-offs in improving neural network outputs, considering data size and applications of weights.

A

Trade-offs include balancing model complexity with the risk of overfitting, managing computational resources, and ensuring the model’s generalizability to new data.

12
Q

Analyze the potential limitations of neural networks in terms of overfitting and interpretability.

A

Overfitting limits a network’s ability to generalize from the training data to new data. Interpretability issues arise from the ‘black box’ nature of neural networks, making it hard to understand how decisions are made.

13
Q

Detail the process of backpropagation in neural networks using a worked example.

A

Backpropagation involves computing the gradient of the loss function and propagating this information back through the network to update weights. This helps the network learn from its errors.
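
A tiny worked example of the kind the card describes, using a 1-1-1 network with sigmoid units (weights, input, and learning rate are invented for the demo):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 1-1-1 network: input -> hidden (w1) -> output (w2), sigmoid activations.
w1, w2 = 0.5, 0.5
x, target = 1.0, 0.0
lr = 0.5

def forward(w1, w2):
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    return h, y

h, y = forward(w1, w2)
loss_before = (y - target) ** 2

# Backward pass: apply the chain rule from the loss back to each weight.
dL_dy = 2 * (y - target)
dy_dz2 = y * (1 - y)            # sigmoid derivative at the output
dL_dw2 = dL_dy * dy_dz2 * h
dL_dh = dL_dy * dy_dz2 * w2
dh_dz1 = h * (1 - h)            # sigmoid derivative at the hidden unit
dL_dw1 = dL_dh * dh_dz1 * x

# Update weights against the gradient.
w1 -= lr * dL_dw1
w2 -= lr * dL_dw2

_, y_new = forward(w1, w2)
loss_after = (y_new - target) ** 2
# One gradient step reduces the loss on this example.
```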

14
Q

Explore the implications of the vanishing gradient problem in neural network training.

A

The vanishing gradient problem occurs when gradients become too small, drastically slowing down training or stopping it altogether, particularly in deep networks.
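
A back-of-the-envelope sketch: the sigmoid's derivative never exceeds 0.25, so backpropagating through many sigmoid layers multiplies the gradient by at most 0.25 per layer:

```python
# Worst-case shrinkage through 20 sigmoid layers.
max_sigmoid_grad = 0.25

gradient = 1.0
for layer in range(20):
    gradient *= max_sigmoid_grad

print(gradient)  # ~9e-13: effectively zero, so early layers barely learn
```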

15
Q

Analyze the role of training parameters in determining the success of a neural network model.

A

Training parameters like learning rate, batch size, and the number of epochs significantly influence the model’s learning efficiency and accuracy.
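
A sketch of the learning rate's effect on a toy problem, f(x) = x², whose gradient is 2x (all constants invented for the demo):

```python
def run_descent(lr, steps=50):
    """Gradient descent on f(x) = x**2 from x = 1.0; the minimum is at 0."""
    x = 1.0
    for _ in range(steps):
        x -= lr * 2 * x
    return x

good = run_descent(lr=0.1)     # shrinks steadily toward the minimum
too_big = run_descent(lr=1.1)  # overshoots each step and diverges
```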

16
Q

Assess the ethical implications of applying neural networks in real-world scenarios.

A

Ethical implications include potential biases in decision-making, privacy concerns, and the need for transparent and accountable AI systems.

17
Q

Evaluate the effectiveness of different loss functions in improving neural network performance.

A

Different loss functions, like mean squared error or cross-entropy, can be more effective depending on the type of problem (e.g., regression vs. classification) and the specific characteristics of the data.
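
A sketch comparing the two on a single binary prediction (values invented for the demo): both prefer the better prediction, but cross-entropy punishes a confident wrong answer far more harshly.

```python
import math

def mse(p, y):
    return (p - y) ** 2

def cross_entropy(p, y):
    """Binary cross-entropy for predicted probability p and label y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

y = 1.0  # true label
print(mse(0.9, y), cross_entropy(0.9, y))    # both small: good prediction
print(mse(0.01, y), cross_entropy(0.01, y))  # MSE < 1, cross-entropy > 4
```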

18
Q

What is the purpose of an activation function in a neural network?

A

The activation function introduces non-linearity into the network, allowing it to learn and perform more complex tasks.
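
A sketch of why the non-linearity matters: without it, stacked linear layers collapse into a single linear map (weights and input invented for the demo):

```python
def relu(x):
    """ReLU activation: zero for negative inputs, identity otherwise."""
    return max(0.0, x)

w1, w2 = 2.0, 3.0
x = -1.5
# Two linear layers are equivalent to one: w2 * (w1 * x) == (w2 * w1) * x.
linear_stack = w2 * (w1 * x)
# Inserting ReLU between the layers breaks that equivalence.
nonlinear_stack = w2 * relu(w1 * x)

print(linear_stack, nonlinear_stack)  # -9.0 vs 0.0
```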

19
Q

What is the difference between supervised and unsupervised learning?

A

In supervised learning, the model is trained on labeled data. In unsupervised learning, the model learns from unlabeled data, identifying patterns on its own.

20
Q

What is a deep neural network?

A

A deep neural network is a neural network with multiple hidden layers, allowing it to model more complex relationships in data.

21
Q

What is a loss function, and why is it important?

A

A loss function measures the error between the predicted output and the actual output. It is important because training consists of adjusting the weights to minimize this error.

22
Q

How does batch size impact neural network training?

A

Batch size, the number of training samples used in one iteration, affects training speed and stability. Larger batches give a smoother, more stable gradient estimate; smaller batches give noisier updates that can converge faster per pass through the data and sometimes generalize better.
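
A minimal sketch of splitting a dataset into mini-batches (dataset and batch size invented for the demo):

```python
def batches(data, batch_size):
    """Yield successive mini-batches; the last one may be smaller."""
    for start in range(0, len(data), batch_size):
        yield data[start:start + batch_size]

data = list(range(10))
mini = list(batches(data, batch_size=4))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```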

23
Q

Explain the challenge of local minima in neural network training.

A

Local minima are points in the loss landscape where the loss is lower than at every nearby point but higher than the global minimum. Gradient descent can get ‘stuck’ there, because any small step increases the loss, so the model never reaches the optimal solution.
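
A toy illustration (the quartic and its constants are hand-picked for the demo, not from the source): the surface has a shallow valley near x ≈ 0.93 and a deeper, global one near x ≈ -1.06; descent started on the shallow side never finds the deeper minimum.

```python
def f(x):
    """A loss surface with a shallow local minimum and a deeper global one."""
    return x**4 - 2 * x**2 + 0.5 * x

def grad(x):
    return 4 * x**3 - 4 * x + 0.5

x = 1.0   # start in the shallow valley's basin of attraction
lr = 0.01
for _ in range(500):
    x -= lr * grad(x)

# x has settled in the nearby local minimum: the gradient is ~0 there,
# yet the loss is far above the global minimum on the other side.
```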