Block 1: Fundamentals of Machine Learning Flashcards
Name one application of neural networks mentioned in the text.
Handwriting recognition.
What is the primary function of the sigmoid output function in neural networks?
The sigmoid function is used to map the output of a neuron to a value between 0 and 1.
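A minimal sketch of the sigmoid as described above, mapping any real input into (0, 1):

```python
import math

def sigmoid(x):
    # Squash any real-valued input into the open interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5: a zero input maps to the midpoint
print(sigmoid(5.0))   # close to 1 for large positive inputs
print(sigmoid(-5.0))  # close to 0 for large negative inputs
```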
What are neural networks typically used for in machine learning?
They are used for learning, modeling, and making predictions based on data.
What is the significance of weights in a neural network?
Weights determine the strength of the signal that neurons send to each other, impacting the network’s overall output.
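A small illustration of the idea: each weight scales how strongly its input contributes to a neuron's output (the function name and values here are illustrative):

```python
def neuron_sum(inputs, weights, bias):
    # Weighted sum: larger weights let their inputs dominate the result.
    return sum(i * w for i, w in zip(inputs, weights)) + bias

# Two equal inputs; the second weight is larger, so the second input
# contributes nine times as much to the output signal.
print(neuron_sum([1.0, 1.0], [0.1, 0.9], 0.0))  # 1.0
```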
Define the term ‘overfitting’ in the context of neural networks.
Overfitting occurs when a model learns the training data too well, including its noise and fluctuations, and performs poorly on new, unseen data.
What is the role of the loss function in neural networks?
The loss function measures how well the neural network is performing, guiding adjustments to improve accuracy.
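One common loss function, sketched here as an example, is mean squared error: the average squared difference between predictions and targets:

```python
def mse_loss(predicted, actual):
    # Mean squared error: average of the squared prediction errors.
    return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)

# Errors of 0.1 and 0.2 give (0.01 + 0.04) / 2 = 0.025.
print(mse_loss([0.9, 0.2], [1.0, 0.0]))
```

A lower loss means the network's predictions sit closer to the targets, which is what training tries to achieve.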
Discuss the ethical considerations of using neural networks.
Ethical considerations include ensuring fairness, transparency, and accountability in decision-making processes, and addressing privacy and data security concerns.
How does a three-layer network learn through errors?
It learns by adjusting its weights based on the errors in the output compared to the expected result, a process known as backpropagation.
What are the challenges in initializing weights for datasets in neural networks?
Challenges include avoiding weights that are too large or too small: overly large weights can saturate activations and destabilize training, while very small weights produce tiny gradients, leading to slow learning and the vanishing gradient problem.
Explain how the error function is used in training neural networks.
The error function measures the difference between the network’s predicted output and the actual output, guiding the adjustment of weights to minimize this error.
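The "adjustment of weights" described above is typically a gradient descent step; a minimal sketch (the function name and values are illustrative):

```python
def update_weight(weight, gradient, learning_rate):
    # Move the weight a small step against the gradient of the error,
    # which reduces the error for a sufficiently small learning rate.
    return weight - learning_rate * gradient

# Positive gradient means the error grows as the weight grows,
# so the update decreases the weight: 0.5 - 0.1 * 0.2 = 0.48.
w = update_weight(0.5, 0.2, 0.1)
print(w)
```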
Discuss the trade-offs in improving neural network outputs, considering data size and applications of weights.
Trade-offs include balancing model complexity with the risk of overfitting, managing computational resources, and ensuring the model’s generalizability to new data.
Analyze the potential limitations of neural networks in terms of overfitting and interpretability.
Overfitting limits a network’s ability to generalize from the training data to new data. Interpretability issues arise from the ‘black box’ nature of neural networks, making it hard to understand how decisions are made.
Detail the process of backpropagation in neural networks using a worked example.
Backpropagation involves computing the gradient of the loss function and propagating this information back through the network to update weights. This helps the network learn from its errors.
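Since the card asks for a worked example, here is a minimal one for a single sigmoid neuron with squared-error loss (all values are illustrative, not from the source text):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One input x, one weight w, target output t.
x, w, t = 1.0, 0.5, 1.0
lr = 0.1  # learning rate

# Forward pass: weighted input, then sigmoid activation.
z = w * x
y = sigmoid(z)

# Loss E = 0.5 * (y - t)^2. The chain rule gives the gradient:
#   dE/dw = (y - t) * y * (1 - y) * x
grad = (y - t) * y * (1.0 - y) * x

# Backward pass: update the weight against the gradient.
w = w - lr * grad
print(round(w, 4))  # the weight moves up, toward producing output 1
```

Because the output (about 0.62) is below the target of 1, the gradient is negative and the update increases the weight, exactly the error-driven adjustment the answer describes.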
Explore the implications of the vanishing gradient problem in neural network training.
The vanishing gradient problem occurs when gradients become too small, drastically slowing down training or stopping it altogether, particularly in deep networks.
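A quick numerical sketch of why this happens with sigmoid activations: the sigmoid derivative never exceeds 0.25, and the chain rule multiplies one such factor per layer, so the gradient shrinks geometrically with depth:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)  # peaks at 0.25 when x = 0

# Even in the best case (derivative 0.25 at every layer), ten layers
# shrink the gradient by a factor of 0.25**10, roughly one millionth.
grad = 1.0
for layer in range(10):
    grad *= sigmoid_derivative(0.0)
print(grad)
```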
Analyze the role of training parameters in determining the success of a neural network model.
Training parameters like learning rate, batch size, and the number of epochs significantly influence the model’s learning efficiency and accuracy.
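A hypothetical training-loop skeleton showing where those three parameters appear (the function and its defaults are illustrative; the body of a real step would compute gradients and apply the learning rate):

```python
import math

def train(data, epochs=5, batch_size=2, learning_rate=0.01):
    for epoch in range(epochs):  # epochs: full passes over the data
        for start in range(0, len(data), batch_size):  # mini-batches
            batch = data[start:start + batch_size]
            # A real step would compute the loss gradient on `batch`
            # and scale the weight update by `learning_rate`.
            ...
    # Return the total number of update steps performed.
    return epochs * math.ceil(len(data) / batch_size)

# 10 examples, batches of 4 -> 3 batches per epoch; 3 epochs -> 9 steps.
print(train(list(range(10)), epochs=3, batch_size=4))
```

Smaller batches and higher learning rates mean noisier but faster updates; more epochs give the model more chances to fit the data, at the cost of compute and overfitting risk.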