Brain Constrained Neural Language Modelling Flashcards

1
Q

Brain constraint 1: Integration of modelling at different levels

A

Until now, most models have focused on just one of these levels:
neuronal function at the level of single neurons
neuronal interaction in local cortical circuits
global interplay between cortical areas

These levels of modelling have to be integrated in a single model. In doing so, experimental data (e.g. from fMRI recordings) about the connections between neurons, cortical circuits and brain areas should be utilised.

2
Q

Brain constraint 2: Neuron models

A

Neural networks are composed of artificial correlates of neurons, but the way neurons are modelled varies greatly. E.g. “mean field models” use neurons with continuous inputs and outputs, which ignores the spiking activity of most real neurons.

Spiking “integrate-and-fire” neurons model the summation of post-synaptic potentials and the resultant neuronal firing, providing a model that is closer to real neurons.

Biophysical neuron models can include even more detail, such as post-synaptic ion-channel dynamics and dendritic action potentials. This is biologically precise, but for most applications not strictly necessary.
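A minimal sketch of an integrate-and-fire neuron, assuming illustrative parameter values rather than any specific published model (the function name simulate_lif is hypothetical):

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative values, not from
# any specific published model): the membrane potential integrates the
# summed input, leaks back towards rest, and emits a spike followed by a
# reset whenever it crosses the firing threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_threshold=1.0):
    v, spikes = v_rest, []
    for i_t in input_current:
        v += (dt / tau) * (v_rest - v + i_t)   # leaky integration step
        if v >= v_threshold:                   # threshold crossed: fire
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant supra-threshold input produces regular spiking
print(simulate_lif(np.full(200, 1.5)).sum(), "spikes in 200 steps")
```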

3
Q

Brain constraint 3: Synaptic plasticity and learning

A

In most neural networks, learning has been “supervised”: learning is based on correct or wrong answers. If a neural network has to classify a picture of a deer and it guesses “duck”, the error signal is used to update the network. However, we know that most learning in human brains is Hebbian (“fire together, wire together”). This can be called unsupervised learning: specific networks of neurons fire together in response to a stimulus and, over time, this creates a distinct representation of e.g. a deer.

Pulvermüller finds it crucial that this learning mechanism is included in neural networks, alongside a biologically realistic amount of supervised learning.
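A minimal sketch of the Hebbian update, assuming a simple rate-based rule with illustrative values (real models add weight bounds or decay):

```python
import numpy as np

# Hebbian "fire together, wire together" in its simplest rate-based form:
# whenever pre- and post-synaptic neurons are active together, the link
# between them strengthens; no teacher or error signal is involved.
rng = np.random.default_rng(0)
n = 8
w = np.zeros((n, n))
eta = 0.1                                      # learning rate (illustrative)

pattern = (rng.random(n) > 0.5).astype(float)  # a recurring stimulus
for _ in range(20):                            # repeated exposures
    w += eta * np.outer(pattern, pattern)      # co-activation strengthens links
np.fill_diagonal(w, 0.0)                       # no self-connections

# Strong weights now link exactly the co-active neurons, forming a
# distinct internal representation of the stimulus.
print(w.round(1))
```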

4
Q

Brain constraint 4: Inhibition and regulation

A

We know that the brain is made up of both excitatory and inhibitory neurons, and that both types contribute to the production and regulation of brain activity. Regulation by inhibition occurs both in local circuits and between brain areas (e.g. the basal ganglia/striatum action-selection network). However, most neural networks use only excitatory neurons.

Inclusion of inhibition and regulation mechanisms at both local circuit and more global (e.g., area) levels is an important feature of making neurocognitive networks biologically plausible.
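A toy sketch of regulation by feedback inhibition, assuming illustrative parameters rather than any specific published circuit:

```python
import numpy as np

# An inhibitory feedback signal proportional to the area's summed
# excitatory activity is subtracted on every step, so total activity
# settles instead of growing without bound.
def regulated_step(x, drive, decay=0.5, k_inh=0.05):
    inhibition = k_inh * x.sum()               # global feedback inhibition
    return np.maximum(0.0, decay * x + drive - inhibition)

x = np.zeros(10)
drive = np.linspace(0.1, 1.0, 10)              # unequal external drive
for _ in range(100):
    x = regulated_step(x, drive)

# Weakly driven cells end up silenced while strongly driven ones stay
# active, a simple selection effect produced by inhibition.
print(x.round(2))
```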

5
Q

Brain constraint 5: Area Structure

A

In neural models, brain areas are often modelled separately.
However, progressing towards biological realism involves extending both the range of brain parts and regions covered by the model and the granularity of the modelled areas, moving from coarse to more fine-grained area subdivisions.

6
Q

Brain constraint 6: Within-area local connectivity

A

The probability of a connection between two pyramidal cells decreases with distance. Local excitatory connections within a cortical area are therefore sparse and show a neighbourhood bias towards links between adjacent neurons.

Neural networks that combine within-area auto-associative connectivity with between-area hetero-associative connections have provided a possible solution to this problem. However, more biologically plausible models can be achieved by accurately modelling both within- and between-area connections.
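A sketch of such a neighbourhood-biased connection rule, assuming a Gaussian fall-off with illustrative parameters:

```python
import numpy as np

# Sparse, distance-biased local connectivity: the probability that two
# cells in the same area are linked falls off with the distance between
# them (Gaussian fall-off and parameter values are illustrative).
rng = np.random.default_rng(1)
n = 50
positions = np.arange(n)                        # cells on a 1-D sheet
dist = np.abs(positions[:, None] - positions[None, :])
p_connect = 0.6 * np.exp(-(dist / 5.0) ** 2)    # neighbourhood bias
adjacency = rng.random((n, n)) < p_connect
np.fill_diagonal(adjacency, False)              # no self-connections

print("connection density:", adjacency.mean().round(3))
```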

7
Q

Brain constraint 7: Between-area global connectivity

A

In the brain, areas that are close to each other have a higher probability of being connected. While between-area connections are modulated by proximity, connections to far-away areas do not follow simple distance rules and are hard to model. Therefore, essential brain constraints on artificial neural networks should come from the connectivity structure of between-area links documented by neuroanatomical research.
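A sketch of how such constraints can enter a model, assuming a hypothetical four-area connectivity matrix (real matrices would come from neuroanatomical tracing or DTI studies):

```python
import numpy as np

# Between-area links are masked by an anatomical connectivity matrix,
# so connections undocumented by neuroanatomy simply do not exist.
anatomical = np.array([   # 1 = documented link between two areas
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
])
rng = np.random.default_rng(2)
w_between = rng.standard_normal((4, 4)) * anatomical
print(w_between.round(2))  # zeros wherever no anatomical link exists
```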

8
Q

Explain this

A

Constraints address the (1) integration of modelling across the levels of cortical neurons, local cortical circuits and macroscopic brain structures (left to right panels) and specifically highlight (2) the nature of the neuron model, (3) the implementation of synaptic plasticity and learning, (4) regulation and control by way of interplay between excitatory and inhibitory neurons, (5) gross anatomical structure and area subdivision and (6) local within-area and (7) global between-area connectivity. Most current network models used for modelling cognition focus on only one or a few of these aspects, whereas brain-constrained modelling works towards networks integrating all of them.

9
Q

Hetero-associative network

A

One set of neurons projects to another set of neurons, with no connections between neurons in the same layer.
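A minimal sketch of this wiring (toy sizes, illustrative only):

```python
import numpy as np

# Hetero-associative wiring: layer A projects to layer B through a
# between-layer weight matrix; there are no links within either layer.
rng = np.random.default_rng(0)
n_a, n_b = 6, 4
w_ab = rng.standard_normal((n_b, n_a)) * 0.1  # A -> B projections only

x_a = rng.random(n_a)      # activity in layer A
x_b = w_ab @ x_a           # layer B is driven purely feed-forward
print(x_b.round(2))
```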

10
Q

Auto-associative network

A

A network in which neurons within the same layer are interconnected. This is more biologically realistic for a local brain circuit, where neurons that are close together are very likely to be connected.
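A toy sketch of why this wiring is useful, assuming a Hopfield-style network whose within-layer weights are stored by a Hebbian outer product: the recurrent connections let a partial cue recall the full stored pattern.

```python
import numpy as np

# Auto-associative pattern completion: within-layer connections store a
# pattern via a Hebbian outer product, so a degraded cue recalls it.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(w, 0.0)                   # no self-connections

cue = pattern.copy()
cue[4:] = 0                                # degraded input: half missing
recalled = np.sign(w @ cue)                # one recurrent update step
print("fully recovered:", np.array_equal(recalled, pattern))
```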

11
Q

Backpropagation

A

“Backpropagation, short for “backward propagation of errors,” is an algorithm for supervised learning of artificial neural networks. Given an artificial neural network and an error function, the method calculates the gradient of the error function with respect to the neural network’s weights.”

When a neural network has classified e.g. a picture of a bird as a dog, the model calculates in each layer how the connection weights have to change in order to classify that specific input as a bird instead of a dog. In this way, the error is applied backwards through the network to update the weights one layer at a time.

Backpropagation is thus a form of supervised learning, which is not the main learning method in the brain.
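A minimal sketch of the mechanism for a toy two-layer network (illustrative sizes and values; the squared-error loss and update rule are the textbook versions, not any specific library's API):

```python
import numpy as np

# The output error is pushed backwards through the chain rule, layer by
# layer, to compute each weight matrix's gradient and update it.
rng = np.random.default_rng(0)
x = rng.random(4)             # input features
t = np.array([0.0, 1.0])      # target class ("bird", not "dog")
w1 = rng.standard_normal((3, 4)) * 0.5   # input -> hidden weights
w2 = rng.standard_normal((2, 3)) * 0.5   # hidden -> output weights
lr = 0.5

for _ in range(200):
    h = np.tanh(w1 @ x)                   # forward pass: hidden layer
    y = 1 / (1 + np.exp(-(w2 @ h)))       # forward pass: output layer
    delta_out = (y - t) * y * (1 - y)     # backward: output-layer error
    delta_hid = (w2.T @ delta_out) * (1 - h**2)  # backward: hidden error
    w2 -= lr * np.outer(delta_out, h)     # gradient-descent updates
    w1 -= lr * np.outer(delta_hid, x)

print(y.round(2))  # approaches the target [0, 1]
```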

12
Q

How did Schomers et al. (2017) improve the Garagnani et al. (2008) model of word-form learning in left perisylvian cortex?

A

The Garagnani et al. (2008) model had connections only between neighbouring areas. Schomers et al. utilised DTI studies in humans and monkeys to build a much improved model of between-area connectivity.

This allowed them to study the features that differ between human and monkey perisylvian cortex.

13
Q

How did a neural model of word mapping change our perception of Hebbian learning?

A

Due to the nature of correlational learning, it was assumed that many exposures were necessary for word mapping. However, a brain model showed that Hebbian learning doesn't require that many exposures to the same word before it is encoded: some words were mapped after the first exposure, and almost all were mapped after 20 exposures.
