Neural module Flashcards
How can we compare the spiking activity responses to different stimuli?
1) Grand-average method:
– one grand average per neuron
– perform standard (non)parametric test
2) Single-trial method:
– Each trial will yield a response
– Perform a repeated measurement (random effects) test
(Both methods are valid, and often equivalent, although the latter can be more powerful;
Rule of thumb: use the simplest available method)
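The two approaches can be sketched on synthetic data (all rates, counts, and the specific tests below are illustrative assumptions, not from the flashcards):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic example: firing rates (Hz) for 20 neurons, 30 trials each,
# under two stimulus conditions A and B (all numbers are made up).
n_neurons, n_trials = 20, 30
rates_a = rng.normal(10.0, 2.0, size=(n_neurons, n_trials))
rates_b = rng.normal(12.0, 2.0, size=(n_neurons, n_trials))

# 1) Grand-average method: one value per neuron per condition,
#    then a standard paired test across neurons.
grand_a = rates_a.mean(axis=1)
grand_b = rates_b.mean(axis=1)
t_grand, p_grand = stats.ttest_rel(grand_a, grand_b)

# 2) Single-trial method: every trial yields a response.  A pooled test
#    is shown here for brevity; a proper random-effects analysis would
#    account for the grouping of trials by neuron (e.g. a mixed model).
t_single, p_single = stats.ttest_ind(rates_a.ravel(), rates_b.ravel())

print(f"grand-average: t={t_grand:.2f}, p={p_grand:.3g}")
print(f"single-trial:  t={t_single:.2f}, p={p_single:.3g}")
```

The pooled single-trial test above deliberately ignores the neuron grouping to keep the sketch short; in practice that is exactly where the repeated-measurement (random-effects) machinery comes in.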
What is the Coefficient of variation (in the context of PSTH) and how do you compute it?
A measure of response reliability (i.e. how stereotypical the response is)
1. Compute response parameter for each trial (single-trial PSTH)
2. Build probability distribution of responses
3. CV = sigma/mu
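A minimal sketch of the three steps, using made-up Poisson spike counts as the single-trial responses:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1) Response parameter per trial: here, the spike count in the response
#    window for 50 repetitions of the same stimulus (synthetic data).
counts = rng.poisson(lam=8.0, size=50)

# 2)-3) The empirical distribution of these responses gives
#        CV = sigma / mu.
cv = counts.std() / counts.mean()
print(f"mean={counts.mean():.2f}, sd={counts.std():.2f}, CV={cv:.2f}")
```

A small CV means the response is highly stereotypical across trials; a CV near 1 is what an unreliable Poisson-like response would give.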
What is the response period in PSTH and what variables can we extract from it?
Response period: interval after the stimulus where activity is
significantly greater than in the baseline (> mean + 2 s.d.)
– Average/total activity in the response period (option: subtract mean(baseline))
– Max activity in the response period (option: subtract mean(baseline))
– Peak latency from stimulus onset
– Onset latency from stimulus onset (delay of first significant response bin)
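These variables can be extracted from a PSTH along the following lines (the bin width, rates, and onset bin are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic PSTH: firing rate per 10 ms bin; baseline before stimulus
# onset at bin 50, with an evoked response added in bins 55-70.
psth = rng.normal(5.0, 1.0, size=150)
psth[55:70] += 12.0     # evoked response (illustrative)
onset_bin = 50          # stimulus onset
bin_ms = 10

baseline = psth[:onset_bin]
threshold = baseline.mean() + 2 * baseline.std()

# Response period: post-stimulus bins exceeding baseline mean + 2 s.d.
sig = np.where(psth[onset_bin:] > threshold)[0] + onset_bin

onset_latency = (sig[0] - onset_bin) * bin_ms         # first significant bin
peak_latency = np.argmax(psth[onset_bin:]) * bin_ms   # peak from onset
mean_resp = psth[sig].mean() - baseline.mean()        # baseline-subtracted
print(onset_latency, peak_latency, round(mean_resp, 2))
```

With noisy data, isolated baseline-level bins can cross the threshold by chance, so real analyses often additionally require several consecutive significant bins.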
What are spike trains and what are the two major ways of computing firing rates based on them?
A spike train is a series of timestamps representing the times at which a neuron fires an action potential (spike).
Discrete-time firing rates: time is divided into equal-length bins, and the firing rate is calculated for each bin as the number of spikes that occur within that bin divided by the bin width.
Sliding window: a window of fixed length is “slid” across the spike train, and the firing rate is calculated for each position of the window based on the spikes within it (equivalently, the spike train is convolved with a window function). Unlike discrete bins, successive windows overlap as the window slides, giving a smoother estimate. Different window shapes can be used, e.g. rectangular, Gaussian, or alpha-function kernels.
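Both estimators can be sketched as follows (spike times, bin width, and kernel width are arbitrary choices for illustration):

```python
import numpy as np

# Spike train as a list of spike times in seconds (synthetic).
spikes = np.array([0.012, 0.030, 0.055, 0.061, 0.120, 0.400, 0.410, 0.800])
duration = 1.0

# 1) Discrete-time firing rate: fixed, non-overlapping bins.
bin_width = 0.1  # s
counts, edges = np.histogram(spikes, bins=int(duration / bin_width),
                             range=(0.0, duration))
rate_binned = counts / bin_width  # Hz in each bin

# 2) Sliding-window estimate: convolve a binary spike vector with a
#    Gaussian kernel (one common choice; rectangular and alpha-function
#    kernels work the same way).
dt = 0.001  # 1 ms resolution
t = np.arange(0.0, duration, dt)
spike_vec = np.zeros_like(t)
spike_vec[np.round(spikes / dt).astype(int)] = 1.0

sigma = 0.02  # 20 ms kernel width
kt = np.arange(-3 * sigma, 3 * sigma + dt, dt)
kernel = np.exp(-kt**2 / (2 * sigma**2))
kernel /= kernel.sum()  # unit mass, so spike count is preserved
rate_smooth = np.convolve(spike_vec, kernel, mode="same") / dt  # Hz

print(rate_binned[0], round(rate_smooth.max(), 1))
```

Because the kernel has unit mass, the smoothed rate integrates back to (approximately) the total spike count, apart from edge effects.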
Decoding vs. encoding (in the context of neural spikes)
Encoding: how neurons codify information about sensory stimuli.
– given a sensory stimulus, what is its neural representation in terms of
spiking activity?
– what is the probability of observing a certain spiking pattern, given a
sensory stimulus?
Decoding: what was the stimulus, given a certain pattern of neuronal activity?
– probability of a stimulus, given a pattern of activity
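As an illustration of decoding as “probability of a stimulus, given a pattern of activity”, here is a toy Bayesian decoder; the Poisson encoding model and the mean counts are assumptions made purely for the example:

```python
from scipy import stats

# Hypothetical encoding model: each of three stimuli drives one neuron
# with a known mean spike count (a Poisson "tuning curve" -- assumed).
mean_counts = {"A": 2.0, "B": 8.0, "C": 20.0}

def decode(observed_count, prior=None):
    """Return P(stimulus | spike count) via Bayes' rule."""
    stims = list(mean_counts)
    prior = prior or {s: 1 / len(stims) for s in stims}
    # Encoding direction: likelihood P(count | stimulus) under Poisson.
    like = {s: stats.poisson.pmf(observed_count, mean_counts[s]) for s in stims}
    evid = sum(like[s] * prior[s] for s in stims)
    # Decoding direction: posterior P(stimulus | count).
    return {s: like[s] * prior[s] / evid for s in stims}

posterior = decode(observed_count=7)
best = max(posterior, key=posterior.get)
print(best, {s: round(p, 3) for s, p in posterior.items()})
```

The `decode` function is the decoding question, while the likelihood inside it is exactly the encoding question: the probability of a spiking pattern given a stimulus.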
What is the simplest type of classifier?
Linear classifier: identify the line (more generally, the hyperplane) that best separates the two classes
(1) How do we identify which line best separates two classes, and (2) how can we do this automatically and optimally?
- By minimizing the classifier's error (a cost or loss function)
- With neural networks
How is learning achieved in a perceptron?
- Fix a training set of M labelled samples
- Initialize the weights (e.g. to zero or to small random values)
- For each sample, compute the output and, if it is wrong, update the weights in proportion to the error (perceptron learning rule)
- Repeat over the training set until all samples are classified correctly, or a maximum number of passes is reached
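The steps above can be sketched as a complete perceptron training loop; the OR problem and the learning rate are illustrative choices:

```python
import numpy as np

# Toy linearly separable training set: logical OR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 1])  # targets

w = np.zeros(2)  # initialize weights and bias
b = 0.0
eta = 0.5        # learning rate

for epoch in range(20):                  # passes over the training set
    errors = 0
    for x, target in zip(X, t):
        y = 1 if w @ x + b > 0 else 0    # threshold activation
        if y != target:
            w += eta * (target - y) * x  # perceptron learning rule
            b += eta * (target - y)
            errors += 1
    if errors == 0:                      # converged: all samples correct
        break

pred = [1 if w @ x + b > 0 else 0 for x in X]
print(pred)
```

Because OR is linearly separable, the perceptron convergence theorem guarantees this loop terminates with all four samples classified correctly.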
What is the architecture of an MLP?
- All the neurons of a layer are connected to all the neurons of the next layer.
- There are no connections between neurons in the same layer or between non-adjacent layers.
Multi-layer perceptrons with one hidden layer can separate any convex region (with two hidden layers, regions of arbitrary shape).
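A minimal forward pass through such a fully connected architecture, assuming sigmoid activations and arbitrary layer sizes:

```python
import numpy as np

rng = np.random.default_rng(3)

def mlp_forward(x, weights, biases):
    """Forward pass through a fully connected MLP (sigmoid units).

    Activation flows through each weight matrix in turn: every neuron
    connects to all neurons of the next layer, with no intra-layer or
    skip connections, matching the architecture described above.
    """
    a = x
    for W, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))  # sigmoid layer
    return a

# Illustrative 2-4-3-1 network (sizes chosen arbitrarily).
sizes = [2, 4, 3, 1]
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]

out = mlp_forward(np.array([0.5, -1.0]), weights, biases)
print(out.shape, float(out[0]))
```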
How do we compute the error for hidden layers?
With the backpropagation algorithm.
According to which rule are the weights updated?
A learning rule/optimization algorithm, e.g. the gradient descent rule
What is an epoch in the context of training an MLP and what steps does it entail?
One full pass over the entire training set. Each epoch entails:
1. Forward propagation: input data is passed through the network layer by layer, applying weights, biases, and activation functions; the network computes its predictions (outputs).
2. Loss computation: the predicted outputs are compared with the actual labels using a loss function appropriate for the task:
   * Mean Squared Error (MSE) for regression
   * Cross-entropy loss for classification
3. Backpropagation: the gradient of the loss with respect to the weights and biases is computed with the chain rule (backpropagation algorithm); gradients are propagated backward from the output layer to the input layer.
4. Weight update: the weights are adjusted according to the learning rule/optimization algorithm (e.g. gradient descent); how often this happens depends on the training strategy (e.g. after every batch or every epoch).
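The four steps can be sketched as a full-batch training loop for a tiny one-hidden-layer network with MSE loss (all sizes, data, and the learning rate are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy regression set; one epoch = one pass over all samples.
X = rng.uniform(-1, 1, size=(64, 2))
y = X[:, :1] * X[:, 1:]                   # target: product of inputs

# Tiny 2-8-1 network with tanh hidden layer (sizes are arbitrary).
W1, b1 = rng.normal(0, 0.5, (8, 2)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (1, 8)), np.zeros(1)
eta = 0.05  # learning rate

def epoch_loss():
    h = np.tanh(X @ W1.T + b1)
    return float(np.mean((h @ W2.T + b2 - y) ** 2))

loss_before = epoch_loss()
for _ in range(100):                      # 100 epochs of full-batch descent
    # 1) forward propagation
    h = np.tanh(X @ W1.T + b1)
    pred = h @ W2.T + b2
    # 2) loss computation (MSE); err drives the gradients below
    err = pred - y
    # 3) backpropagation (chain rule, output layer -> input layer)
    dW2 = (2 / len(X)) * err.T @ h
    db2 = (2 / len(X)) * err.sum(axis=0)
    dh = err @ W2 * (1 - h ** 2)          # tanh derivative
    dW1 = (2 / len(X)) * dh.T @ X
    db1 = (2 / len(X)) * dh.sum(axis=0)
    # 4) weight update (plain gradient descent)
    W1 -= eta * dW1; b1 -= eta * db1
    W2 -= eta * dW2; b2 -= eta * db2
loss_after = epoch_loss()
print(f"MSE: {loss_before:.4f} -> {loss_after:.4f}")
```

Here the update happens once per epoch (full-batch gradient descent); with mini-batches or stochastic gradient descent, step 4 would run several times per epoch.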
How do we evaluate the capability of an MLP to generalize the examples of the training set?
By testing it on the validation set: examples held out from training. (The validation set is used during development, e.g. for model selection and hyperparameter tuning, whereas the test set is kept aside for a final, unbiased performance estimate.)
Three things that are important to ensure good training, generalization and no overfitting
- Network architecture (number of layers/neurons)
- Learning rate
- A good choice of training set (TS)
What is cross validation?
A procedure to prevent a classifier from achieving good performance only on one specific validation/test split, and to test the classifier's performance against the whole available dataset: the data are divided into k folds, each fold serves once as the validation set while the remaining folds are used for training, and the k performance scores are averaged.
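A sketch of k-fold cross validation, using a nearest-centroid classifier only to keep the example self-contained (the data and k are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-class data (synthetic, well separated).
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Train on (Xtr, ytr), return accuracy on (Xte, yte)."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

# k-fold CV: each fold is the validation set exactly once.
k = 5
idx = rng.permutation(len(X))
folds = np.array_split(idx, k)
scores = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    scores.append(nearest_centroid_acc(X[train_idx], y[train_idx],
                                       X[test_idx], y[test_idx]))
print([round(s, 2) for s in scores], round(float(np.mean(scores)), 2))
```

Averaging the k fold scores gives a performance estimate that uses every sample for validation exactly once, instead of depending on one lucky (or unlucky) split.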