Week 3 - Supervised Learning: Classification Flashcards
Artificial neural networks (ANNs)
- 3 characteristics
- Network of interconnected units.
- Each connection has a weight.
- In a feedforward network, activation propagates from input units, through hidden units, to output units.
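To make these characteristics concrete, here is a minimal sketch of a feedforward forward pass in Python with NumPy; the layer sizes, random weights, and sigmoid activation are illustrative assumptions, not part of the course material.

import numpy as np

def sigmoid(x):
    # Squashing activation used here purely for illustration.
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 3 input units, 4 hidden units, 2 output units.
rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 4))   # weights on connections: input -> hidden
W_output = rng.normal(size=(4, 2))   # weights on connections: hidden -> output

def forward(x):
    # Activation propagates from input units, through hidden units, to output units.
    hidden = sigmoid(x @ W_hidden)
    output = sigmoid(hidden @ W_output)
    return output

print(forward(np.array([1.0, 0.5, -0.2])))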
Classification
Assigning a discrete label to an input.
How do you determine the weights in an ANN?
Supervised learning!
-> Backpropagation of error
Backpropagation of error
- 10 steps
- Initialize the Network:
Start with a network that has random weights on its connections between nodes.
- Supervised Learning:
You have two groups of siblings, and you want the network to learn which person belongs to which group.
- For Each Person:
Take one person at a time.
Input information about that person into the network.
- Compare Output:
Compare the network’s output (the group it predicts for that person) to the required output (the group the person actually belongs to).
- Calculate Error:
Determine the difference or error between the network’s output and the correct output.
- Adjust Weights Backwards:
Starting from the output layer and moving backward through the layers:
Modify the connection weights to reduce the error.
Focus more on connections that contributed more to the error.
- Repeat for Each Person:
Go through this process for each person in your dataset.
- Iterations:
Repeat this process for multiple rounds or iterations, going through the entire dataset each time.
- Network Improvement:
With each iteration, the network adjusts its weights to get better at assigning the correct group to each person.
- Training Completion:
Continue this process until the network consistently assigns the correct groups based on the input (a minimal training-loop sketch follows after these steps).
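As a concrete illustration of these steps, below is a minimal sketch in Python/NumPy of backpropagation of error for a tiny two-group classification problem; the dataset, layer sizes, learning rate, and sigmoid activations are illustrative assumptions, not the exact setup used in the course.

import numpy as np

rng = np.random.default_rng(1)

# Step 1: initialize the network with random weights
# (illustrative sizes: 2 input features, 3 hidden units, 1 output unit).
W1 = rng.normal(scale=0.5, size=(2, 3))
W2 = rng.normal(scale=0.5, size=(3, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical data: one row per person, label 0 or 1 for the two groups.
X = np.array([[0.2, 0.9], [0.1, 0.8], [0.9, 0.1], [0.8, 0.2]])
y = np.array([[0.0], [0.0], [1.0], [1.0]])

lr = 0.5
for epoch in range(2000):            # Step 8: repeat for multiple iterations over the dataset.
    for x_i, y_i in zip(X, y):       # Steps 3-7: one person at a time.
        x_i = x_i.reshape(1, -1)
        # Forward pass: input -> hidden -> output.
        h = sigmoid(x_i @ W1)
        out = sigmoid(h @ W2)
        # Steps 4-5: compare the output to the required output and compute the error.
        error = out - y_i
        # Step 6: propagate the error backwards and adjust the weights;
        # connections that contributed more to the error get larger updates.
        grad_out = error * out * (1.0 - out)          # error signal at the output layer
        grad_h = (grad_out @ W2.T) * h * (1.0 - h)    # error pushed back to the hidden layer
        W2 -= lr * h.T @ grad_out
        W1 -= lr * x_i.T @ grad_h

# Steps 9-10: after enough iterations the network assigns the correct group to each person.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2)).ravel())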
Does adding layers in an Artificial Neural Network improve performance?
- Answer
- Analogy with neurons
Answer:
Not necessarily; it depends on the task.
* Deep networks allow for hierarchical representations.
* Together with convolution, depth is one of the reasons for the success of deep networks.
Analogy with neurons:
* There is a minimum number of neurons needed to approximate any specific function, but simply adding more neurons does not work well.
* Instead, specific changes in topology are needed, e.g. deep networks.
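As a rough illustration of the topology point, the sketch below (Python, assuming PyTorch is available; all layer sizes are arbitrary illustrative choices) contrasts a wide shallow network with a deeper network built from smaller layers. The point is that rearranging units into more layers, rather than just adding units, is what enables hierarchical representations.

import torch.nn as nn

# Wide but shallow: one large hidden layer.
shallow_net = nn.Sequential(
    nn.Linear(32, 256),
    nn.ReLU(),
    nn.Linear(256, 2),
)

# Deeper topology: several smaller hidden layers with a comparable number of units,
# allowing later layers to build on features computed by earlier ones.
deep_net = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)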