PSY2002 SEMESTER 1 - WEEK 4 Flashcards
define model
simplified (idealised) representation of a thing
define statistical model
a mathematical relationship between variables that holds under specific assumptions (no real explanation of what is going on or why the relationships exist)
define theoretical model
a description of the relationships between different mental processes that makes assumptions about the nature of those processes (tries to explain the relationships between variables)
define regression
closely related to correlation; a single line through the graph representing the trend in the data (as seen in statistical models)
outline what cognitive box-and-arrow models are (informal theoretical model)
describe relationships between different mental processes, under the assumption that the mind operates like a multi-stage information-processing machine
manipulate the input and observe the output to probe the mind; this allows models to be tested
what are formal cognitive models (theoretical)
mathematical description of relationship between mental processes
boxes and arrows are replaced with formulae that describe and explain cognitive processes (often implemented as computer code)
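A minimal sketch of what a formal model can look like as code, using the Rescorla–Wagner learning rule as an illustrative example (the rule and the parameter values here are not from these notes):

```python
# Sketch of a formal cognitive model: the Rescorla-Wagner learning rule,
# which states how an associative strength V is updated after each
# learning trial. Parameter values are illustrative, not fitted.

def rescorla_wagner(n_trials, alpha=0.3, lam=1.0):
    """Return the associative strength V after each of n_trials trials.

    alpha: learning rate (0-1); lam: maximum associative strength.
    """
    v = 0.0
    history = []
    for _ in range(n_trials):
        v = v + alpha * (lam - v)  # update proportional to prediction error
        history.append(v)
    return history

strengths = rescorla_wagner(10)
# Unlike a box-and-arrow diagram, this makes a specific numerical
# prediction for every trial: V rises toward lam in a negatively
# accelerated curve.
```

The point is not this particular rule but that every assumption (starting value, update rule, parameters) is written down explicitly and generates exact numbers.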
outline simplification and abstraction (common model characteristic)
only describe the parts of the information we think are critical for what we’re trying to represent
simplification: making something simpler
abstraction: generating general rules and concepts from specific info
what does the right level of abstraction depend on
question being asked, what we’re trying to convey
who was Popper, influence in theories
influential in differentiating between scientific and non-scientific theories
suggested that a non-scientific theory only explains “after the fact” and cannot provide falsifiable predictions
name the 5 stages of models being used for prediction and explanation
framework, theory, model, hypothesis, data
define framework
conceptual system that defines terms and provides context (eg, cognitive psychology)
define theory
scientific proposition that provides relations between phenomena (eg, early selection theory)
define model
schematic representation of a theory, more limited in its scope (eg, Broadbent’s filter model)
what should we do if a framework keeps producing bad theories that are always rejected
should change framework - falsification
give an example of explanation without exact prediction, and then prediction without explanation
Ew/oP: models of schizophrenia can indicate causes but can’t predict individual cases
Pw/oE: some models can predict whether an individual will develop Alzheimer’s disease (AD) even though we are nowhere near understanding the factors that explain AD
what are informal cognitive models
a verbal description of the relationships between different cognitive processes, where some assumptions are often implicit and only directional predictions are provided
what are formal/computational model
a mathematical description of the relationships between different cognitive processes, often instantiated via a computer program/simulation,
with explicit assumptions, often providing specific numerical predictions
formal models allow more specific predictions. what does having numerical simulation of this model mean
we can see whether the model makes unreasonable predictions, so bad models are easier to reject
helps us select which experiments to perform (if two models predict exactly the same result, there is no point running the experiment)
enables more subtle hypothesis testing, e.g. checking how close the model comes to predicting an actual result
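The benefits above can be sketched in code: simulate the model's numerical predictions and measure how close each candidate version comes to the data. The exponential learning model, the observed values, and the candidate parameters below are all hypothetical:

```python
# Sketch: using numerical simulation to test how close a formal model's
# predictions come to actual results. Model and data are hypothetical.

def model_prediction(trial, learning_rate):
    """Predicted accuracy on a given trial under a simple exponential
    learning model (illustrative): starts at 0.5, approaches 1.0."""
    return 1.0 - 0.5 * (1.0 - learning_rate) ** trial

observed = [0.55, 0.62, 0.70, 0.74, 0.80]  # made-up accuracy data

def fit_error(learning_rate):
    """Sum of squared differences between predictions and data."""
    return sum((model_prediction(t, learning_rate) - obs) ** 2
               for t, obs in enumerate(observed))

# Two candidate parameter values: simulation shows which one makes
# predictions closer to the actual result, and whether either is
# unreasonably far off (grounds for rejecting that version).
errors = {lr: fit_error(lr) for lr in (0.05, 0.3)}
best = min(errors, key=errors.get)
```

This is the "subtle hypothesis testing" idea in miniature: instead of a yes/no directional test, we get a graded measure of how well each model variant predicts the data.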
formal models allow counter-intuitive predictions. why is this an advantage
the model more clearly specifies which predictions follow from it
with informal models it is hard to notice when a counter-intuitive prediction is being made, but with formal models it is clear
a benefit of formal model is explicit assumption, why
reveals unanswered questions, flaws in reasoning, and contradictory/unreasonable assumptions; making assumptions transparent allows existing theories to be developed and improved
what are disadvantages of formal model
- needs expertise
- best compared against other computational models
- numerical predictions can be premature
- changing the model takes time, which can limit progress
outline David Marr’s level of analysis
the idea that we can understand and model a system at a number of levels (3 levels of understanding)
we can only ever sample a tiny fraction of the brain’s activity, so this provides a solution
state Marr 3 levels
- computational level
- algorithm level
- implementation level
what is the computational level (1 of Marr’s 3 levels)
the problem being solved
what is the algorithm level (2 of Marr’s 3 levels)
the steps/rules used to solve it
what is the implementation level (3 of Marr’s 3 levels)
the actual machinery
why are top-down approaches not pure, where is it most common
they use knowledge of the available data, which constrains the point of view
common in cognitive science and AI
where is bottom-up approach more common
cognitive neuroscience
did Marr believe top-down or bottom-up was better
top-down, so that we are not overloaded by the infinite amount of data we could collect
but the best way is to consider all 3 levels at the same time
apply bottom-up approach to David Marr levels
start at implementation (build a model of neural networks), then ask what representations/algorithms these can generate (level 2), then ask what kinds of problems can be solved with those algorithms (level 1)
apply top-down approach to David Marr levels
- problem first: what is the problem we need to solve?
- then rules: what representations/algorithms can solve it?
- then implementation: how can those representations/algorithms be implemented in neural circuits?
give an issue of using top-down approach for David Marr levels
if we identify a potential algorithm at level 2, there is no guarantee we could model neural circuits capable of implementing it
whats an implicit model
assumptions hidden, internal consistency untested, logical consequences and relation to data unknown
whats an explicit model
assumptions shown, can be studied, sensitivity analysis to identify most salient uncertainties and important thresholds
how does using modelling change focus from testing hypothesis to testing formal model and why is this good
forces scientists to document what the theory assumes; allows comparison with other models, and comparison of the effects of different parameter values within a model
what is pizza problem
if we don’t make our thinking explicit via formal modelling, then there is massive inconsistency in our understanding of our own models
what is path function
a function whose output depends on the path of transformations that the inputs undergo
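A minimal sketch of the path-function idea: the same input and the same set of transformations give different outputs depending on the order (path) in which they are applied. The operations here are made up purely for illustration:

```python
# Sketch of a path function: the output depends on the *path* of
# transformations applied to the input, not just the input itself.
# The example operations are illustrative.

def apply_path(x, path):
    """Apply a named sequence of transformations to x, in order."""
    ops = {"double": lambda v: v * 2, "add3": lambda v: v + 3}
    for step in path:
        x = ops[step](x)
    return x

# Same input, same two transformations, different paths:
a = apply_path(5, ["double", "add3"])  # (5*2)+3 = 13
b = apply_path(5, ["add3", "double"])  # (5+3)*2 = 16
```

Because `a != b`, knowing the input and the set of operations is not enough; a path function forces us to specify the order of processing as well.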
whats specification
a formal description of the system to be implemented, based on the theory; it provides a means of discriminating between theory-relevant assumptions (closer to the theory’s core claims) and theory-irrelevant, auxiliary assumptions
what is implementation
the instantiation of a model, which can be created using anything; in computer modelling it is usually a codebase written in one or more programming languages
what are open theories
developed explicitly, defined formally, and explored computationally; more robust to failures of replication (if a failure is detected it can be explained and can drive theory creation, rather than simply rejecting the theory)