Week 6 - Intro to Cognitive Modelling Flashcards
Does computational neuroscience (CN) have a top-down or bottom-up approach to computational modelling? Why?
bottom-up approach, as CN looks at finer details first, such as neuronal activity patterns, to create biologically plausible representations (the models)
How does the approach to computational modelling differ from CN to cognitive science?
CN is bottom-up whereas cognitive science is top-down
cognitive science looks at behavioural patterns to make models, whereas CN looks at activity on a neuronal level to form models
What is the issue with deriving a computational model directly from data (a data model)?
data models have no intrinsic psychological content (no explanation for the patterns in the data -> so you can't really build a model/theory upon the data alone)
Give an example of a cognitive model
What did this model propose about how our brains work?
Baddeley and Hitch's working memory model, with the visuospatial sketchpad and the phonological loop
-working memory isn’t just a single short-term storage space, but a system with multiple components
Give an example of a data model
What is the issue with this model?
-Study by Heathcote investigating data patterns of the ‘practice effect’. Is the learning rate better described by a power function or an exponential function?
-It just describes patterns without explaining them psychologically, and there is no biological representation in the brain
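The practice-effect question above contrasts two candidate learning curves. A minimal sketch of both functional forms (parameter names and values are assumptions for illustration, not Heathcote's actual fits):

```python
import numpy as np

def power_law(n, a, b):
    # RT decreases as a power function of the number of practice trials n
    return a * n ** (-b)

def exponential_law(n, a, b):
    # RT decreases exponentially with the number of practice trials n
    return a * np.exp(-b * n)

trials = np.arange(1, 6)
rt_power = power_law(trials, 1.0, 0.5)
rt_exp = exponential_law(trials, 1.0, 0.5)
```

Both curves decrease with practice, which is why distinguishing them requires comparing their fits to data rather than eyeballing the shape.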
What is the problem with the cognitive science approach to computational modelling?
top-down approach means there is no biological representation
What type of model is the Spreading-Activation Model by Collins and Loftus?
How does it work?
-verbal model
-semantic memory model: when one word is activated, other words associated in meaning are activated too. Different people have different associations.
What are the benefits of MODELLING cognition (in general)?
-can compare different plausible models systematically
-make implicit assumptions (inferred) -> explicit (apparent)
-communicate theoretical ideas (box and arrow)
-test THEORETICAL hypotheses and predictions
What is the benefit of using computational modeling to describe cognitive theories?
-removes ambiguity from verbal description in cognitive theory
-constrains the dimensions over which the cognitive theory can span
How does computational modelling help with establishing a framework (for a conceptual system which defines terms and provides concepts) in a cognitive theory?
What are the benefits of adding computational modelling to building cognitive theories?
-after doing a real-world experiment and deriving a hypothesis from the data, computational modelling then adds to this by implementing the model (instantiation): specifying a mathematical model and building a scientific theory from it that describes/predicts AND explains the phenomena in the cognitive theory
-computational modelling helps clarify the theory, making it more explicit and repeatable for other researchers
Why must you make your model precise but also falsifiable?
How is this represented graphically by Farrell and Lewandowsky in their paper?
precise: the theory's hypothesis must be concise and have precise selection criteria, otherwise any observed data would count as supporting the hypothesis
falsifiable: there must be criteria in the hypothesis by which observed data from the experiment could reject it
-precision is the dots (predicted data from hypothesis)
-cross is the observed data
What do the length of the cross arms represent in the Farrell & Lewandowsky paper about precision and falsifying data?
error bars: the longer the cross arms, the less falsifiable the hypothesis is (more room for error)
What do Farrell and Lewandowsky theorise about data and predictions in cognitive modelling?
cognitive modelling brings data and predictions together
What is the difference between free and fixed parameters?
free are flexibly adjusted but fixed are set
what is the benefit of using free parameters?
you can adjust the parameters when fitting the model until the difference between the model's predicted values and the real data is minimised
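The fitting idea in this card can be sketched with a simple grid search: one free parameter is adjusted until the RMSD between predictions and data is smallest (the data, model form, and parameter name are made up for illustration):

```python
import numpy as np

# made-up observed data over 5 practice trials
observed = np.array([1.0, 0.71, 0.58, 0.50, 0.45])
trials = np.arange(1, 6)

def model(b):
    # one free parameter b, assumed power-function form
    return trials ** (-b)

# try candidate values of b and keep the one with the smallest RMSD
candidates = np.linspace(0.1, 1.5, 141)
rmsds = [np.sqrt(np.mean((model(b) - observed) ** 2)) for b in candidates]
best_b = candidates[int(np.argmin(rmsds))]
```

In practice an optimiser (e.g. gradient-based search) replaces the grid, but the logic is the same: vary the free parameter, score the discrepancy, keep the minimum.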
How are computational models (created from theory) connected to experiments (also created from theory)?
model makes predictions which can be compared and contrasted to the data produced from the experiments
What is model identifiability?
the extent to which you can uniquely determine each parameter value in a model from a data set
Are non-identifiable models informative?
yes, they can be if the model is also falsifiable and additional constraints are added to the model
What does it mean when a model is non-identifiable?
you cannot determine its parameters uniquely, meaning different combinations of parameter values could lead to the same predictions or outcomes.
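Non-identifiability can be shown with a toy model (entirely made up for illustration) whose predictions depend only on the product of two parameters, so distinct parameter pairs are indistinguishable from the data:

```python
import numpy as np

# Toy non-identifiable model: predictions depend only on a*b,
# so (a=2, b=3) and (a=6, b=1) give identical predictions.
def predict(a, b, x):
    return a * b * x

x = np.array([1.0, 2.0, 3.0])
same = np.allclose(predict(2, 3, x), predict(6, 1, x))  # True
```

No data set generated by this model could tell you a and b separately; only their product is recoverable.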
What is the goal when fitting a model?
to minimise the discrepancy between predicted and observed data
What does the discrepancy function describe when fitting a model?
expresses the deviation between predictions and observations in a single value
(distance between dots (real data) and the curve of best fit (predicted data))
What is the discrepancy function aka?
loss function
objective function
What mathematical measure does the discrepancy function use for continuous vs categorical/discrete data?
continuous: root mean squared deviation (RMSD)
discrete: χ² (chi-squared) or G²
What does the RMSD do mathematically?
it takes the square root of the mean of the squared deviations between predicted and observed data
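A minimal sketch of the RMSD computation (square the deviations, average them, take the square root; the numbers are made up):

```python
import numpy as np

predicted = np.array([1.0, 2.0, 3.0])
observed = np.array([1.1, 1.9, 3.2])

# root mean squared deviation between predictions and observations
rmsd = np.sqrt(np.mean((predicted - observed) ** 2))  # sqrt(0.02) ~ 0.141
```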