Week 3: Introduction to Neural Networks Flashcards
Neuron
An information processing unit.
Action potential
The signal emitted by a biological neuron.
Firing rate
The number of action potentials emitted during a defined time period.
Synapse
The connection between two neurons.
Artificial Neural Network
A parallel architecture composed of many simple processing elements, interconnected to achieve certain collective computational capabilities.
De-noising Autoencoder
The network is trained so that the output, r, reconstructs the input, x. However, before encoding is performed, the input is corrupted with noise, while the reconstruction target remains the clean input. This mitigates overfitting.
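A minimal NumPy sketch of the idea (the single hidden layer, noise level, and learning rate are illustrative assumptions, not a prescribed architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, h = 200, 8, 4
X = rng.random((n, d))                  # toy clean inputs

We = rng.normal(0, 0.1, (d, h))         # encoder weights
Wd = rng.normal(0, 0.1, (h, d))         # decoder weights
eta = 0.05

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for _ in range(100):
    for x in X:
        x_tilde = x + rng.normal(0, 0.1, d)   # corrupt the input with noise
        y = sigmoid(x_tilde @ We)             # encode the corrupted input
        r = y @ Wd                            # reconstruction
        err = x - r                           # error against the CLEAN input
        back = (err @ Wd.T) * y * (1 - y)     # backprop through the decoder
        Wd += eta * np.outer(y, err)          # gradient steps on ||x - r||^2
        We += eta * np.outer(x_tilde, back)
```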
Processing Units
In a neural network, processing units are organised into layers; each unit has an activation function and a weight for each of its inputs.
Perceptron
A linear threshold unit: the output is H(wx), where H is the Heaviside step function and the bias can be folded into w using an augmented input.
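As a concrete sketch (the AND weights below are hand-picked for illustration):

```python
import numpy as np

def perceptron(w, x):
    """Linear threshold unit H(w . x), with the bias folded into w
    via an augmented input (a leading 1)."""
    x_aug = np.concatenate(([1.0], x))
    return int(w @ x_aug >= 0)          # Heaviside step: 1 if w.x >= 0, else 0

w_and = np.array([-1.5, 1.0, 1.0])      # fires only when x1 + x2 >= 1.5, i.e. AND
print([perceptron(w_and, np.array(p)) for p in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 0, 0, 1]
```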
Logical Functions
Examples include AND, OR, XOR, and other arbitrary logical functions. Not all logical functions are linearly separable: AND and OR are, but XOR is not, so a single perceptron cannot implement it (see the sketch below).
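A quick demonstration of the separability claim (the weight grid is an illustrative assumption): a coarse search finds threshold-unit weights for AND but none for XOR.

```python
import numpy as np
from itertools import product

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
t_and = [0, 0, 0, 1]
t_xor = [0, 1, 1, 0]

def find_weights(targets, grid):
    """Search a coarse grid for (w0, w1, w2) such that
    H(w0 + w1*x1 + w2*x2) matches the targets on all four patterns."""
    for w in product(grid, repeat=3):
        if all(int(w[0] + w[1] * x1 + w[2] * x2 >= 0) == tk
               for (x1, x2), tk in zip(X, targets)):
            return w
    return None

grid = np.linspace(-2, 2, 17)
print("AND:", find_weights(t_and, grid))   # a solution is found
print("XOR:", find_weights(t_xor, grid))   # None: no linear threshold unit fits
```

The search is a demonstration rather than a proof; the impossibility follows from the fact that no straight line separates {(0,1), (1,0)} from {(0,0), (1,1)}.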
Sequential (Online) Delta Learning Algorithm
For each sample, x_k, in the dataset in turn, update w <- w + eta(t_k - H(wx_k))x_k^t. Repeat until the algorithm converges. One update of the parameters is based on one sample. The order in which the samples are used may affect the speed of convergence, and sequential learning doesn't necessarily outperform batch learning. Since t_k - H(wx_k) = 0 for correctly classified samples, the update depends only on misclassified samples.
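A minimal NumPy sketch of the rule, assuming augmented notation (bias folded into w) and the Heaviside step H from the perceptron card; the learning rate and epoch cap are illustrative:

```python
import numpy as np

def sequential_delta(X, t, eta=1.0, max_epochs=100):
    """Sequential (online) delta rule: one weight update per sample."""
    X_aug = np.hstack([np.ones((len(X), 1)), X])    # prepend 1 for the bias
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_epochs):
        changed = False
        for x_k, t_k in zip(X_aug, t):
            update = eta * (t_k - int(w @ x_k >= 0)) * x_k
            if np.any(update != 0):                 # only misclassified samples
                w += update                         # contribute an update
                changed = True
        if not changed:                             # a full pass with no updates
            return w                                # means convergence
    return w

# Learn AND (linearly separable, so the rule converges):
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
print(sequential_delta(X, t))
```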
Batch Delta Learning Algorithm
For each iteration, go through all the samples, calculate eta(t - y)x^t for each, and sum the results; that sum is the update for the weights. Keep iterating until the weights are unchanged. One update of the parameters is based on all n samples. Sample order isn't relevant in batch learning, and, as in the sequential case, only misclassified samples contribute to the update. Batch learning can be faster than sequential learning because it processes the entire batch at once using vectorisation, which allows each sample in the batch to be computed in parallel.
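The corresponding batch sketch under the same assumptions; note the single vectorised update per pass:

```python
import numpy as np

def batch_delta(X, t, eta=0.1, max_epochs=1000):
    """Batch delta rule: one weight update per pass over all n samples."""
    X_aug = np.hstack([np.ones((len(X), 1)), X])    # augmented inputs
    w = np.zeros(X_aug.shape[1])
    for _ in range(max_epochs):
        y = (X_aug @ w >= 0).astype(float)          # H(w x) for all samples at once
        update = eta * (t - y) @ X_aug              # sum of eta*(t_k - y_k)*x_k^t
        if np.all(update == 0):                     # no misclassified samples left
            return w
        w += update
    return w

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
print(batch_delta(X, t))
```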
Hebbian Learning Rule
An unsupervised method that strengthens the connections between an active neuron and any active inputs. The sequential (online) learning rule is w <- w + eta y x^t.
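A sketch of the rule on a single linear unit (the toy data and learning rate are assumptions); note that without any normalisation the weights grow without bound:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy inputs with correlated components.
X = rng.normal(size=(500, 2)) @ np.array([[1.0, 0.8], [0.0, 0.6]])

w = rng.normal(0, 0.1, 2)
eta = 0.01
for x in X:
    y = w @ x              # linear output; no target, so learning is unsupervised
    w += eta * y * x       # w <- w + eta y x^t: co-active input/output pairs
                           # strengthen the connection
print(w)                   # grows along the input's dominant direction
```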
Competitive Learning Algorithms
When output units compete for the right to respond to input, meaning that some neurons have their activity suppressed by other neurons.
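A winner-take-all sketch, one common competitive learning variant (the unit count and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

X = rng.random((300, 2))                 # toy 2-D inputs
W = rng.random((3, 2))                   # one weight vector per output unit
eta = 0.05

for x in X:
    k = np.argmin(np.linalg.norm(W - x, axis=1))   # competition: closest unit wins
    W[k] += eta * (x - W[k])                       # only the winner updates,
                                                   # moving toward the input
print(W)   # each row drifts toward a cluster of inputs; the losers stay put
```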
Negative Feedback Networks
When output units compete to receive inputs, rather than competing to produce outputs.
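One common formulation (a sketch assuming linear units and illustrative sizes and rates; the exact update equations may differ from the course's): output activity inhibits the input via feedback, so units effectively compete for the residual input.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.random((3, 5)) * 0.5    # 3 output units, 5 inputs
x = rng.random(5)
y = np.zeros(3)
eta = 0.25

for _ in range(50):             # responses are computed iteratively
    e = x - W.T @ y             # residual input after feedback inhibition
    y = y + eta * (W @ e)       # outputs grow on whatever input remains

print(y, e)
```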
Autoencoder Networks
A network in which the output, r, is a reconstruction of the input computed by a separate neural population, with the inhibitory feedback connections removed so that neural responses no longer have to be calculated iteratively. The weights can still be learnt so as to minimise the error between the input and its reconstruction.
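A linear sketch of such a network (layer sizes and learning rate are assumptions): encoding is a single feedforward pass, reconstruction uses a separate decoding population, and both weight sets are learnt by gradient descent on the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, h = 200, 6, 3
X = rng.random((n, d))

We = rng.normal(0, 0.1, (h, d))   # encoder weights
Wd = rng.normal(0, 0.1, (d, h))   # decoder ("separate population") weights
eta = 0.05

for _ in range(200):
    for x in X:
        y = We @ x                # single feedforward pass: no iteration needed
        r = Wd @ y                # reconstruction of the input
        err = x - r
        g = Wd.T @ err            # backprop the error through the decoder
        Wd += eta * np.outer(err, y)    # gradient steps on ||x - r||^2
        We += eta * np.outer(g, x)

print(np.mean((X - (X @ We.T) @ Wd.T) ** 2))   # mean reconstruction error
```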