LESSON 5 - Neural computation Flashcards
How is the total input (Net Input) calculated in a neural network?
The total input is obtained by summing up all incoming signals, each multiplied by its connection weight. This value is then converted into an output that is sent to other neurons.
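The weighted sum and its conversion to an output can be sketched in a few lines of Python (the function names `net_input` and `sigmoid` are illustrative, not from the lesson):

```python
import math

def net_input(inputs, weights):
    # Total input: each incoming signal multiplied by its connection weight, summed
    return sum(x * w for x, w in zip(inputs, weights))

def sigmoid(net):
    # Convert the net input into an output between 0 and 1
    return 1.0 / (1.0 + math.exp(-net))

signals = [0.5, 1.0, 0.2]    # activations of the sending neurons
weights = [0.8, -0.4, 0.3]   # excitatory (+) and inhibitory (-) weights

net = net_input(signals, weights)  # 0.5*0.8 + 1.0*(-0.4) + 0.2*0.3 = 0.06
output = sigmoid(net)              # value sent on to other neurons
```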
What is the role of connection weight in a neural network?
Each connection has a weight that determines the effect of an incoming signal on the receiving neuron. A weight can be positive (excitatory) or negative (inhibitory), and its magnitude scales the strength of the signal's effect.
Why is the activation of neurons typically kept between 0 and 1?
Neuron activation is kept in this range so that it represents a firing rate, the strength of the response, rather than a binary fire/no-fire decision. This is typically achieved with a squashing function such as the sigmoid.
What is the purpose of the sigmoid function in neural networks?
The sigmoid function is applied to the net input, converting it into an output between 0 and 1. In its middle range the output is approximately linear in the total input, but it saturates at very high or very low values, mimicking the behavior of biological neurons.
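The near-linear middle range and the saturation at the extremes are easy to verify numerically (a minimal sketch, not code from the lesson):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

# Near zero the output changes roughly linearly with the input...
print(sigmoid(-1), sigmoid(0), sigmoid(1))   # ~0.27, 0.5, ~0.73

# ...but it saturates for large positive or negative net inputs
print(sigmoid(10), sigmoid(-10))             # ~1.0, ~0.0
```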
Describe the MNIST dataset and its application in neural networks.
The MNIST dataset contains 70,000 images of handwritten digits. Each image is a 28x28 matrix of pixels (784 values in total). Neural networks take these pixel values as inputs, with each pixel corresponding to one input neuron.
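The mapping from image to input neurons can be sketched as follows; a randomly filled 28x28 grid stands in for a real MNIST digit, since the point here is only the flattening:

```python
import random

# Stand-in for one MNIST digit: a 28x28 grid of grey values in [0, 1]
# (a real image would be loaded from the dataset; here it is filled randomly)
image = [[random.random() for _ in range(28)] for _ in range(28)]

# Flatten the matrix into 784 values, one per input neuron
inputs = [pixel for row in image for pixel in row]
print(len(inputs))  # 784
```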
What are the three possible connectivity schemes in neural networks, and which one is widely used for object recognition?
The three connectivity schemes are feed-forward, recurrent, and fully recurrent. For object recognition tasks, the feed-forward connectivity scheme is most commonly used.
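A feed-forward pass can be sketched as repeated layer computations in which activity flows strictly from inputs to outputs with no feedback connections (the layer sizes and hand-picked weights below are illustrative):

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer(inputs, weights):
    # Each receiving neuron sums its weighted inputs, then applies the sigmoid
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws))) for ws in weights]

# Illustrative sizes: 3 inputs -> 2 hidden neurons -> 1 output neuron
w_hidden = [[0.2, -0.5, 0.8], [0.7, 0.1, -0.3]]
w_output = [[1.0, -1.0]]

x = [0.9, 0.1, 0.4]
hidden = layer(x, w_hidden)   # activity flows strictly forward:
y = layer(hidden, w_output)   # input -> hidden -> output, no feedback
```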
What are the main differences between biological neural networks and artificial neural networks?
Artificial neural networks make simplifying assumptions compared to biological networks: they do not model complex aspects such as spiking, continuous-time dynamics, or the laminar structure of the cortex.
What is the Hebb rule in learning, and how does it affect synaptic efficacy?
The Hebb rule states that if two linked neurons are simultaneously active, the synaptic efficacy (connection weight) is strengthened. Learning involves changing connection weights based on this rule to achieve the correct response.
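The rule can be written as a weight change proportional to the product of the two neurons' activations, delta_w = eta * pre * post (a minimal sketch; `hebb_update` and the values are illustrative):

```python
eta = 0.1  # learning rate

def hebb_update(w, pre, post):
    # Hebb rule: the weight grows when both linked neurons are active together
    return w + eta * pre * post

w = 0.5
w = hebb_update(w, pre=1.0, post=1.0)   # both active -> connection strengthened
print(w)  # 0.6
w = hebb_update(w, pre=1.0, post=0.0)   # one neuron silent -> weight unchanged
print(w)  # 0.6
```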
Explain the three classes of learning in neural networks.
Supervised learning provides labeled examples that specify the correct output; unsupervised learning builds representations of the input without any associated output; reinforcement learning uses rewards and punishments, with the network learning to maximize the reward signal.
Why are initial values of connection weights randomly assigned in neural networks?
Initial weights are assigned randomly rather than set to zero so that signals can propagate through the network. The network's initial output is essentially garbage, and learning then adjusts the weights to improve it.
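The problem with all-zero weights is easy to demonstrate: the net input is always zero, so the output is constant regardless of the input (a toy sketch; the small random range is an illustrative choice):

```python
import math, random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def output(inputs, weights):
    return sigmoid(sum(x * w for x, w in zip(inputs, weights)))

x = [0.3, 0.9, 0.5]

zero_w = [0.0, 0.0, 0.0]
print(output(x, zero_w))   # always 0.5: no signal propagates through the weights

rand_w = [random.uniform(-0.1, 0.1) for _ in x]
print(output(x, rand_w))   # near 0.5 but input-dependent; learning can improve it
```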
What is the significance of the learning rate in neural network learning?
The learning rate scales the weight changes during learning. It must be small so that each example makes only a small adjustment to the weights; too large a learning rate produces unstable learning, in which previously acquired knowledge is overwritten.
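The effect of the learning rate can be shown with a toy setup not in the card itself: a single weight driven toward a target by gradient descent on a squared error. A small eta converges smoothly; a too-large eta overshoots on every step and diverges.

```python
def train(eta, steps=20):
    # One weight, target 1.0, squared error E = (w - 1)^2
    w = 0.0
    for _ in range(steps):
        w -= eta * 2 * (w - 1.0)   # weight change scaled by the learning rate
    return w

print(train(0.1))   # small eta: w creeps stably toward 1.0
print(train(1.1))   # too-large eta: each step overshoots and w blows up
```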
How is online learning different from batch learning in neural networks?
In online learning, the weight change is computed and applied immediately after each example; in batch learning, the changes are accumulated over a batch of examples and applied together afterwards.
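The two schedules can be contrasted with Hebbian updates over a small batch (the example values are illustrative):

```python
eta = 0.1
examples = [(1.0, 0.5), (0.2, 1.0), (0.8, 0.8)]  # (pre, post) activation pairs

# Online: apply the Hebbian change right after each example
w_online = 0.0
for pre, post in examples:
    w_online += eta * pre * post

# Batch: accumulate the changes, apply them once after the whole batch
w_batch = 0.0
delta = sum(eta * pre * post for pre, post in examples)
w_batch += delta

print(w_online, w_batch)  # identical here, because the Hebbian change does not
                          # depend on the current weight; the schedules diverge
                          # for rules whose update does (e.g. error-driven rules)
```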
What is the role of the learning rate (n) in the Hebb rule?
The learning rate (η) scales the change in connection weights in the Hebb rule, ensuring small adjustments. Too large a learning rate can lead to unstable learning.
How does the unsupervised learning approach differ from supervised learning?
Unsupervised learning focuses on building representations without associated output, often involving clustering or finding common factors of variation across examples. Unlike supervised learning, it doesn’t require labeled examples during training.
What is the goal of reinforcement learning, and how does it differ from associative learning?
Reinforcement learning aims to maximize rewards or minimize punishments delivered by an external signal. Unlike associative learning, the reward is an evaluative signal rather than a target association: the agent must discover which actions yield the greatest reward.