lecture 1 - why modeling (and how)
3 different disciplines
- neuroscience
  - the physical universe
  - data-rich but theory-poor
  - lots to measure, hard to interpret
- cognitive science/psychology
  - the human mind
  - long-standing debates on theories
  - theories lack mechanistic explanations
- artificial intelligence
  - aims to create intelligent systems that are possibly, but not necessarily, inspired by brains
how to bridge gaps between disciplines
by using physical science for detailed, mechanistic theories of the mind and behavior
(models of the mind grounded in measurable physical processes)
David Marr’s levels of analysis
- computational ("why")
  - focuses on what abstract problem the system is trying to solve
  - asks "why does this system exist?" and "what is the goal of the computation?"
  - goals, functions, principles
- algorithmic ("how")
  - examines what rules/processes/algorithms are used to solve the problem
  - could technically be written down in pseudocode
  - still very abstract
  - rules, operations, transformations
- implementation ("what")
  - how does the brain implement it in ‘wetware’?
  - asks "how is this computation physically realized?"
  - hardware, physiology, mechanisms
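the algorithmic level "could technically be written down in pseudocode" — a toy illustration (not from the lecture, using sorting as the example problem) of how the three levels pull apart:

```python
# Toy illustration of Marr's levels (assumed example, not from the lecture):
# - computational level: the GOAL -- put a list in ascending order
# - algorithmic level: ONE concrete procedure achieving that goal (insertion sort;
#   a different algorithm, e.g. merge sort, solves the same computational problem)
# - implementation level: the hardware it runs on (a CPU here, 'wetware' in brains)

def insertion_sort(items):
    """Algorithmic-level description: explicit rules and operations."""
    result = list(items)
    for i in range(1, len(result)):
        j = i
        # shift the new item leftward past any larger neighbors
        while j > 0 and result[j - 1] > result[j]:
            result[j - 1], result[j] = result[j], result[j - 1]
            j -= 1
    return result

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```

the same computational-level goal is served by many algorithms, and the same algorithm can run on many implementations — which is why the levels are analyzed separately.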
what is a model
- a simplified version of reality
- easier to create than a parallel reality
- keeps/highlights the fundamental mechanisms that make an aspect of reality work, while leaving out unnecessary details
- by forcing us to decide what to include or exclude, models push us to think deeply about the workings of a system
Richard Feynman: “What I cannot create, I do not understand.”
- truly understanding something means being able to break it down, explain it simply, and even recreate it
- so, if we understand cognitive processes well, we should be able to replicate them with a computer using explicit rules
- this forces us to write explicit rules to capture and simulate aspects of cognition
classical experimentation approach in psychology
- hypothesis testing
- H0 is proposed and then tested, with the goal of finding evidence to reject it in favor of H1
problems with hypothesis testing
- p-hacking
- replicability crisis
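a minimal simulation (an assumed illustration, not from the lecture) of why p-hacking inflates false positives: if a researcher runs many tests on pure noise and reports "whichever came out significant", the effective error rate far exceeds the nominal 5%:

```python
import random

random.seed(0)

def one_null_test(n=50):
    """One two-group 'experiment' where H0 is true: both groups are pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5                  # known sigma = 1, so a z-test is exact
    return abs(mean_diff / se) > 1.96    # "significant" at alpha = .05

# honest approach: one pre-registered test per study
honest = sum(one_null_test() for _ in range(2000)) / 2000

# p-hacked approach: run up to 20 tests per study, report if ANY is significant
hacked = sum(any(one_null_test() for _ in range(20)) for _ in range(2000)) / 2000

print(f"false-positive rate, one test:  {honest:.2f}")   # ~0.05
print(f"false-positive rate, 20 tests:  {hacked:.2f}")   # ~0.64, since 1 - 0.95**20 ~ 0.64
```

every test is run on noise, yet "try 20 things and report the hit" produces a significant result in roughly two thirds of studies — one mechanism behind the replicability crisis.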
how to fix problems with hypothesis testing
- focus on mechanisms that underlie behavior and cognition and care less about the words in our theories
- precision in hypotheses: clearer, more specific hypotheses can help avoid ambiguity in testing, making results more interpretable and reliable
- move from hypothesis testing to prediction-based models: compare models on how well they explain (left-out) data. this way researchers can more objectively assess which model better captures the phenomenon
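the prediction-based comparison above can be sketched in a few lines (an assumed toy example, not from the lecture): fit two candidate models on training data, then score each on left-out data and prefer the one that predicts better:

```python
import random

random.seed(1)

# simulated "behavior": a linear process plus noise
xs = [i / 10 for i in range(100)]
ys = [2.0 * x + random.gauss(0, 0.5) for x in xs]

train_x, train_y = xs[::2], ys[::2]   # even indices: used for fitting
test_x, test_y = xs[1::2], ys[1::2]   # odd indices: held out

def fit_mean(y):
    """Model A: behavior is just a constant (the training mean)."""
    m = sum(y) / len(y)
    return lambda x: m

def fit_linear(x, y):
    """Model B: behavior is proportional to x (least-squares slope, no intercept)."""
    slope = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)
    return lambda v: slope * v

def mse(model, x, y):
    """Mean squared prediction error on a dataset."""
    return sum((model(a) - b) ** 2 for a, b in zip(x, y)) / len(x)

model_a = fit_mean(train_y)
model_b = fit_linear(train_x, train_y)

print("held-out MSE, constant model:", round(mse(model_a, test_x, test_y), 2))
print("held-out MSE, linear model:  ", round(mse(model_b, test_x, test_y), 2))
```

because both models are scored on data they never saw, the comparison is an objective prediction contest rather than a verbal argument about hypotheses.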
benefits of using models in scientific research
- integrative: models are conceptually broad – they converge findings from a wide range of research.
- falsifiable: precisely defined predictions – in contrast to theories that allow for flexible interpretations, models make precise predictions that can be tested and potentially proven wrong
- detailed/specific: models are functionally complete – a phenomenon counts as understood if it can be expressed as a program that produces the to-be-explained behavior/patterns/data
- generative: playground for exploring new ideas and concepts – new concepts, ideas, and understanding can emerge from simulations
weaknesses of models
- high threshold: developing and understanding models often requires significant prerequisite knowledge
- it’s not a real field: modeling is more of an application (applied tool) than a standalone scientific field
- mathematical: typically involve detailed mathematical formulations, which can be complex and challenging
- few standard methods: lack of standardization in modeling practices – every researcher has their own model, and few engage with each other’s
terminology & mechanistic understanding
we need to be critical of terminology: it must be translatable into mechanistic understanding, so that it can improve our understanding of mind/brain function
types of models
- data-driven models: more statistical – focus on analyzing and interpreting data patterns
- process/generative models: more conceptual – focus on describing the underlying mechanisms or processes that produce observable phenomena
- this is not a fundamental divide: these modeling efforts can work together, and are often combined in practice
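the contrast can be made concrete with a small assumed example (not from the lecture): a process/generative model — here a simple random-walk decision mechanism in the spirit of evidence-accumulation models — generates choices and response times from an explicit mechanism, while a data-driven model merely summarizes the resulting patterns:

```python
import random

random.seed(2)

# process/generative model: accumulate noisy evidence until a bound is hit;
# the mechanism itself produces both choices AND response times
def generate_trial(drift=0.1, bound=5.0, noise=1.0):
    evidence, t = 0.0, 0
    while abs(evidence) < bound:
        evidence += drift + random.gauss(0, noise)
        t += 1
    return (evidence > 0), t   # (correct choice?, response time in steps)

trials = [generate_trial() for _ in range(500)]

# data-driven description: statistical summaries of the observed patterns
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(f"simulated accuracy: {accuracy:.2f}, mean RT: {mean_rt:.1f} steps")
```

the summaries describe *what* the data look like; the generative model additionally says *where* those numbers come from — and in practice the two are combined, e.g. by fitting the mechanism's parameters to the observed statistics.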
old AI vs. after the silicon revolution
old AI
- used to make explicit generative models based on abstract ideas
- these models tried to capture high-level cognitive processes by explicitly defining the steps or rules involved
- these models had some successes, but in many cases struggled to reach human-level performance even in simple tasks
- this highlights the challenges of explicitly coding intelligence
since the silicon revolution
- more and more use ‘deep’ learning: ANNs inspired by the brain
- these models are loosely inspired by the brain’s structure and focus on distributed computation rather than explicit, step-by-step modeling.
- deep learning doesn’t rely on predefined rules or abstract ideas.
this is a pendulum
- there are things that deep learning cannot do
Allen Newell
asked ‘how can the human mind occur in the physical universe?’