Reasoning Theories Flashcards
3 theoretical approaches to reasoning
mental logics (Braine, Rips)
mental models (Johnson-Laird)
Bayesian reasoning (Chater, Oaksford)
1) mental logics
People often respond in ways inconsistent with formal logic, but mental-logic theorists argue that an underlying mental logic is nonetheless present.
Braine & O’Brien (1991) and Rips (1994):
– Assume propositional representations.
– Both suggest that deductive reasoning consists of the
application of mental rules of inference, but not all the
rules of formal logic need be part of this
e.g., modus ponens may be part of mental logic while modus tollens is not
using mental logics
Use a limited set of logical rules, rather than all the ones in formal logic.
Chains of inferences can be forward (from premises
to conclusion) or backward (from conclusion to premises), but eventually all the suppositions must be
either rejected or accepted.
Which rules are part of mental logic is seen as an empirical question.
– May vary between individuals
processes for mental logics
- Encode the premises into mental representation held in working memory
– Example: Alice gets wet in the rain
– Encoding 1 “If it is raining, then Alice gets wet”
– Encoding 2 “If Alice gets wet, then it is raining”
– Different encodings can lead to different conclusions
- Apply the mental logic rules we have to derive conclusions.
- Intermediate conclusions are made and applied
- Look for contradictions
e.g., make a supposition that has to be rejected, using proof by contradiction
modus ponens applied directly, modus tollens needs steps
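The contrast above can be sketched as toy code (my own illustration, not a model from the readings): the only built-in inference rule is modus ponens, so a modus ponens conclusion falls out of one direct rule application, while modus tollens needs the extra supposition-and-contradiction steps.

```python
def modus_ponens_closure(facts, rules):
    """Forward-chain using modus ponens only.
    rules are (antecedent, consequent) pairs: from (p -> q) and p, infer q."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in facts and q not in facts:
                facts.add(q)
                changed = True
    return facts

rules = [("raining", "alice_wet")]

# Modus ponens: one direct rule application.
print("alice_wet" in modus_ponens_closure({"raining"}, rules))  # True

def modus_tollens(negated_consequent, rules):
    """Derive 'not p' from 'not q' indirectly, via proof by contradiction."""
    for p, q in rules:
        # Step 1: suppose the antecedent p.
        derived = modus_ponens_closure({p}, rules)
        # Step 2: the supposition yields q, contradicting the given 'not q'.
        if q in derived and negated_consequent == "not_" + q:
            # Step 3: reject the supposition, concluding 'not p'.
            return "not_" + p
    return None

print(modus_tollens("not_alice_wet", rules))  # not_raining
```

The multi-step detour in `modus_tollens` mirrors the claim that chains of suppositions load working memory and so invite more errors.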
errors of mental logics
Encoding (conversion) errors
If a chain of inferences is required, there is a greater chance of error.
– Working memory limitations
– Executive control limitations
Wrong rule
– Lack of the appropriate inference rule.
– Failure to apply the appropriate rule.
– Application of an inappropriate rule.
Output of rule may be garbled (process error).
evidence of error
Most evidence comes from people judging the logical
correctness of a series of arguments.
– Tests are relatively difficult
Braine et al (1984) asked people to judge 85 propositional reasoning problems.
– Number of deductive steps needed correlated 0.79 with rated difficulty.
– Need to apply more complex rules correlated with difficulty.
Rips (1994) found that people found problems more difficult if backward reasoning was required.
Forward reasoning seems more automatic
– Lea et al (1990) found more false recognitions for forward reasoning conclusions.
Rips (1989) found evidence of mental logic type reasoning
in verbal protocols.
evaluation of mental logics
Accounts for many findings.
However it is often underspecified (e.g., when should encoding errors occur?) so hard to know exactly what predictions it should make.
Has only been applied to a limited range of problems, so it comes down to exactly what counts as “reasoning”.
Does not always handle context effects well
– Byrne (1989) showed that modus ponens is not always
easy to apply in all contexts.
Direct evidence that people use mental logics has been hard to find
2) mental models
“each mental model represents a possibility, capturing what is common to the different ways in which the possibility could occur”
Claims that reasoning involves building a representation (model) of the situation and
making conclusions by inspection.
Johnson-Laird’s (1983) approach
“Continuation of comprehension by other means”
– 1. A mental model describing the situation is
constructed and a conclusion from it generated. The model is iconic (its structure corresponds to what it represents)
– 2. Attempt to generate alternative conclusions and see if they falsify earlier conclusion by finding counterexamples to the conclusion. If no contradiction then assume conclusion is valid.
– 3. Models are limited by working memory, therefore harder when multiple models are required.
Apply “Principle of truth”, models represent what is true, not what is false (hard to show absence)
principle of truth: we represent assertions by forming mental models concerning what is true but ignoring what is false
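The counterexample-search step can be sketched as toy code (my own illustration, assuming a small world of individuals with properties A, B, C): enumerate models consistent with the premises of a syllogism, then check whether a tempting conclusion survives in all of them.

```python
from itertools import product

def small_worlds(n_people=3):
    """Enumerate tiny 'mental models': each person either has or lacks
    each of the properties A, B, C."""
    person_types = list(product([0, 1], repeat=3))  # (is_A, is_B, is_C)
    return product(person_types, repeat=n_people)

def satisfies_premises(world):
    # Premise 1: All A are B.  Premise 2: Some B are C.
    all_a_are_b = all(b for a, b, c in world if a)
    some_b_are_c = any(b and c for a, b, c in world)
    return all_a_are_b and some_b_are_c

def conclusion_holds(world):
    # Tempting (invalid) conclusion: Some A are C.
    return any(a and c for a, b, c in world)

# Counterexample search: find a premise-consistent model where the
# conclusion fails; its existence shows the conclusion is invalid.
counterexample = next(w for w in small_worlds()
                      if satisfies_premises(w) and not conclusion_holds(w))
print(not conclusion_holds(counterexample))  # True
```

People who accept the conclusion would, on this account, have stopped after an initial model in which it happened to hold, without running the search for alternatives.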
evidence for mental models
Main predictions (Johnson-Laird & Byrne, 1991):
Inferences requiring only one model will be easier than those based on multiple models.
Systematic errors are likely to correspond to initial models of premises.
– So knowledge can influence the process of
inference.
Copeland & Radvansky (2004) study
Presented 64 syllogisms to participants, who chose which conclusion they thought was valid for each.
Varied how many mental models needed to be constructed to answer correctly
Working memory capacity correlated .42 with accuracy
– Higher correlations for multiple-model than single-model syllogisms
Systematic errors relate to initial models
Consider the following:
- All the athletes are bakers
- Some of the bakers are chemists
- Therefore some of the athletes are chemists
Woodworth & Sells (1935) found that people tend to endorse this invalid conclusion
Proposed the atmosphere hypothesis: terms in premises more likely to be repeated in accepted conclusions
– the “some“ in the premises biases us towards “some” in the conclusion.
Suggests people aren’t really trying to reason, instead taking a stab at what they think looks right.
The atmosphere hypothesis predicts few “no valid conclusion” responses
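The atmosphere hypothesis amounts to a simple rule over the “moods” of the premises, sketched here as toy code (my own illustration): a particular premise (“some”) biases towards a particular conclusion, a negative premise towards a negative one, and nothing ever yields “no valid conclusion”.

```python
def atmosphere_conclusion_mood(premise_moods):
    """Predict the conclusion mood from premise moods.
    Moods: 'all', 'some', 'no', 'some-not'."""
    particular = any(m in ("some", "some-not") for m in premise_moods)
    negative = any(m in ("no", "some-not") for m in premise_moods)
    if particular and negative:
        return "some-not"
    if particular:
        return "some"
    if negative:
        return "no"
    return "all"

# "All athletes are bakers" + "Some bakers are chemists":
# the heuristic endorses a "some" conclusion, matching the invalid
# "Some athletes are chemists" that people tend to accept.
print(atmosphere_conclusion_mood(["all", "some"]))  # some
```

Note that the function always returns some mood, never “no valid conclusion”, which is exactly the prediction the notes mention.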
knowledge influences process
All the Frenchmen are gourmets
Some of the gourmets are wine drinkers
Some of the Frenchmen are wine drinkers (72% agree)
vs
All the Frenchmen are gourmets
Some of the gourmets are Italian
Some of the Frenchmen are Italians (8% agree)
knowledge affects what models we will first produce
evaluation of mental models
The mental models approach accounts for a lot of data
Principle of truth and effects of model numbers seem
strong.
Making reasoning like conversation has intuitive appeal
However processes are underspecified and thus predictions not always clear.
Which information do we use and when?
People don’t seem to like making more than one model.
mental logics vs mental models
Testing theories requires critical tests that can distinguish them.
Has proven to be hard to do for mental logics and
mental models.
– Tend to be able to explain each other’s results post hoc.
This may be because each is under-constrained
– What is a model and which ones do we build?
– What mental logics do we have and not have?
Thus the argument has continued, and each provides a
possible way to understand reasoning phenomena
3) bayesian reasoning
Oaksford & Chater (2007) see errors and biases as the result of people applying probabilistic strategies to an uncertain world.
Suggest that probability theory is better than logic as a computational level theory of reasoning.
But judgements of probability are hard, so this makes reasoning fundamentally subjective.
– See Bayes theorem for conditional probabilities as a model of human reasoning
Oaksford & Chater (1994) tripe analogy
asked “Is turning over the ‘4’ card an error?” -> it might be optimal
Analogous rule: “If you eat tripe, then you will feel sick”
Asking a person who feels sick is useful: if they ate tripe, that adds credence to the claim.
– Lots of people don’t feel sick, but few of them could be expected to have eaten tripe, so checking these people is not optimal.
– Optimality measured by information gain.
– Optimality depends on distribution of information in the environment.
– Most of the time it is the “4” card that is rare.
So reasoning could involve people updating their current beliefs based on their prior beliefs and new data.
– E.g., P(rule)t+1 = P(rule | found another sick tripe eater), computed from P(rule)t via Bayes’ theorem
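One such update step can be sketched as toy code (my own illustration; the likelihood numbers are made-up assumptions, not values from the readings): Bayes’ theorem turns a prior belief in the tripe rule into a posterior after observing a sick person who turns out to have eaten tripe.

```python
def bayes_update(prior, likelihood, likelihood_alt):
    """Posterior for a hypothesis after one observation, via Bayes' theorem.
    likelihood     = P(data | hypothesis)
    likelihood_alt = P(data | not hypothesis)"""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis: "if you eat tripe, you feel sick" holds.
prior = 0.5  # assumed starting belief
# Observation: a sick person is found to have eaten tripe. Assume this is
# likelier if the rule holds (0.3) than if it does not (0.1).
posterior = bayes_update(prior, likelihood=0.3, likelihood_alt=0.1)
print(round(posterior, 3))  # 0.75
```

Each new observation feeds the posterior back in as the next prior, which is the P(rule)t -> P(rule)t+1 cycle in the note above.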
optimality
In general the positive case (q) is more informative than the negative case (not-q) when P(p: tripe) and P(q: sick) are both low
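This claim can be checked numerically with a simplified toy version of optimal data selection (my own illustration, not Oaksford & Chater’s full model): compare two hypotheses, dependence (the rule holds, so P(q|p)=1) and independence, and measure a card’s value as the expected reduction in Shannon entropy over the two hypotheses.

```python
import math

def entropy(ps):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_info_gain(card, P_p, P_q, prior_dep=0.5):
    """Expected entropy reduction over {dependence, independence} from
    turning over a visible 'q' or 'not_q' card to see if p is on the back.
    Simplified: under dependence every p-case is a q-case; under
    independence p and q are unrelated."""
    if card == "q":
        like_dep = P_p / P_q   # P(p | q, dependence)
        like_ind = P_p         # P(p | q, independence)
    else:                      # card shows not-q
        like_dep = 0.0         # dependence forbids p with not-q
        like_ind = P_p
    prior = [prior_dep, 1 - prior_dep]
    h_prior = entropy(prior)
    gain = 0.0
    for outcome_is_p in (True, False):
        likes = [like_dep, like_ind] if outcome_is_p else [1 - like_dep, 1 - like_ind]
        p_outcome = sum(l * pr for l, pr in zip(likes, prior))
        if p_outcome == 0:
            continue
        post = [l * pr / p_outcome for l, pr in zip(likes, prior)]
        gain += p_outcome * (h_prior - entropy(post))
    return gain

# With rare p and q (the rarity assumption), the q card wins:
print(expected_info_gain("q", P_p=0.1, P_q=0.2) >
      expected_info_gain("not_q", P_p=0.1, P_q=0.2))  # True
```

With these assumed rarity values the sick (q) person is expected to be far more informative than a not-sick person, so selecting q is optimal rather than an error.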
Oaksford & Chater (1994) argued that reasoning involves heuristics that maximize information gain.
– Not necessarily equivalent to aiming for
certainty
– This can lead to different conclusions about how people should reason.
– “Deduction” is more like induction
Emphasize that goal of reasoning is to
operate effectively in environment.
Explanation more important than validity.
Oaksford et al (1997) Reduced Array Selection Task
Reduces the Wason selection task (test “if p then q”) to just the two critical selections: q and not-q
Test “All the triangles (p) are blue (q)”
Focus on q: Look at cards (face down) from “Red shapes” or “Blue shapes” stack?
Varied the size of the stacks of cards to vary P(q); participants turn over cards until certain the rule is right or wrong
The bigger the stack, the less likely its cards were to be selected. Fits the rarity assumption
rarity assumption
McKenzie et al (2001): people prefer rare evidence
“If it is a raven, then it is black” Which is better evidence for this?
black raven or pink flamingo?
Logically both support this, but people see black ravens as more supportive.
Tend to phrase hypotheses in terms of the rare case
errors of bayesian reasoning
They aren’t all errors.
Mini-heuristics explain many syllogism errors
Like other theories: memory constraints,
misperceptions.
Emphasis on environmental constraints
– Inaccurate information about environment.
– Environment may be “noisy.”
– Priming of the wrong elements of the environment.
Experts make fewer errors due to learning the right
probabilities
evaluation of bayesian reasoning
Provides a different perspective on reasoning
– Reasoning may be misnamed
Has successfully accounted for a lot of data
– Not all errors are errors but can account for others
However it proposes no mechanism for reasoning.
– Uses complex mathematics to predict the conclusions
people draw, but with no proposal that people themselves do such
calculations.
– Also leads it to be more successful at interpreting existing
data than making unique predictions.
Interestingly, by emphasising knowledge it connects
reasoning to skill.
evans (2006) principles of human reasoning
1 Singularity principle: Only a single mental model is considered at any given time.
2 Relevance principle: The most relevant (i.e., plausible or probable) mental model based on prior knowledge and the current context is considered.
3 Satisficing principle: The current mental model is evaluated by the analytic system and accepted if adequate. Use of this principle often leads people to accept conclusions that are not necessarily true.