Thinking Flashcards

1
Q

THINKING

A

INDUCTIVE REASONING
- predicting future via past data/making judgements
- stat generalisations/probabilistic judgements/predictions
- hypothesis testing/rule induction
DEDUCTIVE REASONING
- solving logical/mathematical problems w/right answers
- based on grounds/given facts; generating valid conclusions/evaluating a given conclusion's validity
PROBLEM SOLVING
- how to get from A-B
- numerous solutions/varying constraint degrees
JUDGEMENT/DECISION MAKING
- choosing among options
CREATIVE THINKING
- daydreaming/imagining

2
Q

THINKING RESEARCH

A
  • focused primarily where there’s:
  • right answer/way of evaluating answer rationality/efficiency of getting to answer
  • asks HOW people think (processes/representations)
  • human imperfection emphasis (why irrationality/efficiency limits relative to ideal)
  • practical focus motivation:
  • practical importance in medical/legal/military fields
  • improvements via training/IT support
  • attempts changing behaviour
3
Q

GENERAL DUAL-PROCESS THEORY: SYSTEM 1

A
  • intuitive/automatic/unconscious/quick & dirty/approximate BUT domain-specific
  • procedures/schemas/heuristics are:
  • adaptive/effective when applied in appropriate domain (otherwise error)
  • only approximate w/some built in biases
4
Q

GENERAL DUAL-PROCESS THEORY: SYSTEM 2

A
  • slow/sequential/effortful BUT…
  • logical/rational/conscious reasoning system
  • allocates attention to demanding effortful mental activities
  • constrained via limited WM capacity/basic cognitive machinery limits
5
Q

SYSTEMS 1 + 2 COMBINED

A
  • effortful S2 use = depleting; self-control/cognitive effort = mental work
  • if cognitively busy, S1 = ^ beh influence/^ temptation following (ie. selfish choices/superficial social judgements)
6
Q

INDUCTIVE REASONING

A
  • illustrate basic cognitive machinery properties
  • limit optimal reasoning strategies
  • introduce biases:
  • difficulty attending relevant info w/salient/irrelevant info available
  • limited WM capacity
  • LTM retrieval properties
  • shifting mental set/perspective difficulties
7
Q

DEDUCTIVE REASONING

A
  • logical reasoning w/quantifiers (some/not/all) = reasoning via imagining concrete examples (mental models) rather than abstract/general logical operations
  • illustrated WM limit impacts
  • Wason’s 4-card deductive reasoning test w/if-then propositions = shows how performance depends on problem content/how characteristic errors arise via domain-specific heuristics
8
Q

PROBABILITY/FREQUENCY JUDGEMENTS

A
  • some frequency facts = told to us/searchable (ie. lifetime morbid schizophrenia risk = 0.7%)
  • BUT many based on experience (ie. will it rain today?)
9
Q

MEMORY AVAILABILITY

A

TVERSKY & KAHNEMAN (1973)

  • availability heuristic = judge as ^ probable/frequent events/objects ^ readily available in memory/environment
  • works as generally easier to retrieve event/object memory that’s frequent
  • retrievability also affected via: recency/salience/current case similarity
  • over-estimate event probability where examples = easily retrievable AKA…
  • availability bias = recent/personally salient/presently similar examples
10
Q

AVAILABILITY BIAS EXAMPLES

A
  • Cape Cod “Jaws” screening = drop in California coast swimming
  • seeing accident/police = drivers slow (for a while)
    SLOVIC et al (1980)
  • people overestimate dying of rare BUT dramatic/reported causes (ie. floods/tornadoes/measles)
  • underestimate dying of common causes (ie. strokes/cancers/diabetes)
  • ie. fear of British kids being run over > actual frequency of their being run over
11
Q

REPRESENTATIVENESS BIAS/BASE RATE NEGLECT

A
  • when evaluating particular cases:
  • tend to ignore important info source aka. base rate knowledge = overall frequencies of particular event classes (ie. if someone has X’s features, we think they have X’s standard properties)
  • possible best guess basis for category member w/o other info BUT biases prototypical property attribution even w/other info
12
Q

REPRESENTATIVENESS BIAS EXAMPLE

A

TVERSKY & KAHNEMAN (1973)

  • 100 descriptions (70 = lawyers; 30 = engineers; ie. kids/hobbies/age/motivations); pp asked engineer prob
  • > 90% judgements followed the stereotype (older/conservative/maths hobbies etc.), neglecting the 30% base rate (worked check below)
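A hedged worked check of the base-rate logic (the likelihood ratio of 4 is an invented illustrative value, not a figure from the study): even a description four times more likely to fit an engineer than a lawyer shouldn't push the judgement far past 60% given the 30/70 base rates:

$$P(E \mid D) = \frac{P(D \mid E)\,P(E)}{P(D \mid E)\,P(E) + P(D \mid L)\,P(L)} = \frac{4 \times 0.3}{4 \times 0.3 + 1 \times 0.7} \approx 0.63$$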
13
Q

REPRESENTATIVENESS BIAS X SEQUENTIAL EVENTS

A
  • head prob after x11 heads in row = same as any toss, as coin = no memory; expecting the unrepresentative run to be redressed = gambler’s fallacy
  • difficulty ignoring sequence representativeness/unusualness or focusing on known individual event probs
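The arithmetic behind the fallacy, as a sketch: the striking number is the prior probability of the whole run, not the conditional probability of the next toss:

$$P(H_{12} \mid H_1,\dots,H_{11}) = \tfrac{1}{2}, \qquad P(H_1,\dots,H_{12}) = \left(\tfrac{1}{2}\right)^{12} = \tfrac{1}{4096}$$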
14
Q

FUNCTIONAL FIXEDNESS

A

DUNCKER (1945)

  • classic Gestalt psychologists exps
  • pps asked to support lighted candle on vertical wooden wall w/nails/matches etc
  • < successful than pps w/same prob BUT pins tipped out of box
15
Q

CONSERVATISM X CONFIRMATION BIAS IN INDUCTIVE REASONING

A
  • IRL/research we want rules/principles describing experienced instances + to test hypothesised rules against further observations
    WASON (1960)
  • experimenter’s rule generated the 2-4-6 seq; pp tries to guess it by offering new 3-number sequences w/positive/negative feedback (ie. fits rule/doesn’t), then declares rule hypothesis
  • tend to offer over-specific hypotheses (ie. n+2)/conservatively reluctant to abandon them/seek confirmatory VS disconfirmatory evidence (sketch below)
  • scientists equally prone
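A minimal Python sketch of why confirmatory testing fails in the 2-4-6 task (the test triples are invented for illustration; Wason's true rule was any ascending sequence):

```python
# Over-specific "n+2" hypothesis vs Wason's broader ascending rule.
def true_rule(seq):      # experimenter's rule: strictly ascending
    return seq[0] < seq[1] < seq[2]

def hypothesis(seq):     # typical over-specific guess: ascending by 2
    return seq[1] - seq[0] == 2 and seq[2] - seq[1] == 2

confirmatory = [(1, 3, 5), (10, 12, 14)]    # chosen to FIT the hypothesis
disconfirmatory = [(1, 2, 3), (5, 10, 20)]  # chosen to VIOLATE it

for seq in confirmatory + disconfirmatory:
    print(seq, "hypothesis:", hypothesis(seq), "| true rule:", true_rule(seq))
# Confirmatory triples get "fits rule" feedback every time, so they can
# never falsify n+2; only the disconfirmatory triples (which also fit the
# true rule) reveal that the hypothesis is too narrow.
```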
16
Q

PROBLEM SOLVING

A
  • studies = situations w/start/goal states; fast goal achievement via available operators subject to certain constraints (ie. missionaries & cannibals/Tower of Hanoi)
    LUCHINS’ WATER JUG
  • start w/full 8 pint jug/empty 5 pint jug/empty 3 pint jug
  • end w/4 pints of water in largest jug
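A minimal breadth-first search sketch (Python; all names invented) of this exact 8/5/3 problem, treating it as the kind of state-space search the next card describes:

```python
from collections import deque

CAPS = (8, 5, 3)       # jug capacities in pints
START = (8, 0, 0)      # largest jug full, others empty

def moves(state):
    """All states reachable by pouring one jug into another."""
    for i in range(3):
        for j in range(3):
            if i != j and state[i] > 0 and state[j] < CAPS[j]:
                pour = min(state[i], CAPS[j] - state[j])
                new = list(state)
                new[i] -= pour
                new[j] += pour
                yield tuple(new)

def solve():
    """Shortest pour sequence leaving 4 pints in the 8-pint jug."""
    queue, seen = deque([(START, [START])]), {START}
    while queue:
        state, path = queue.popleft()
        if state[0] == 4:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

print(solve())   # shortest solution: 7 pours to leave 4 pints in the big jug
```

BFS exhaustively enumerates the state space, which is exactly what limited WM rules out for human solvers (see card 18).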
17
Q

THE “PROBLEM SPACE”

A

NEWELL & SIMON (1972)

  • soluble problem = at least 1 path through state space between start/goal
  • problem solver must search, w/o advance knowledge of the optimal path/traversable intermediate states, for operators that:
  • move them through intermediate states towards goal
  • avoid dead-end backing up/going in circles
  • minimise path
18
Q

WM CAPACITY/HEURISTICS X PROBLEM SOLVING

A
  • huge workspace + time = could exhaustively enumerate all possible legal moves/pick shortest path (ie. Deep Blue playing chess) BUT limited WM capacity SO…
  • recognise familiar patterns/retrieve previous effective moves via LTM (ie. Kasparov’s chess)
  • hunt between initial/goal states in small steps via heuristics (ie. means-end analysis/don’t repeat moves)
    MEANS-END ANALYSIS
  • pick general means of reaching goal; if means unavailable = create sub-goal of making it available, recursing till a sub-goal = satisfiable via available operator (toy sketch below)
  • requires WM goal stack maintenance
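A toy Python sketch of the goal-stack idea (domain, operators, and names all invented for illustration): each unmet precondition spawns a sub-goal, and the recursion depth is the goal stack WM must hold:

```python
# Each operator names the precondition it needs and the condition it achieves.
OPERATORS = {
    "drive to shop": {"needs": "car works", "achieves": "at shop"},
    "repair car":    {"needs": "have tools", "achieves": "car works"},
    "borrow tools":  {"needs": None, "achieves": "have tools"},
}

def achieve(goal, state, depth=0):
    """Recursively satisfy `goal`, pushing unmet preconditions as sub-goals."""
    if goal is None or goal in state:
        return True
    for name, op in OPERATORS.items():
        if op["achieves"] == goal:
            # Sub-goal: first make the operator's precondition true.
            if achieve(op["needs"], state, depth + 1):
                print("  " * depth + "apply:", name)
                state.add(goal)
                return True
    return False

achieve("at shop", state=set())
# Applies: borrow tools, then repair car, then drive to shop; the
# nesting depth is the goal stack that must be maintained in WM.
```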
19
Q

DESIGN LIMITS INTRINSIC TO COGNITIVE MACHINERY

A
  • design limits in cog caps (memory retrieval properties/limited WM/relevant info attendance difficulty/cognitive set shift difficulty/sequential reasoning effortfulness) =
  • heuristic reliance (approx rules of thumb)
  • intrinsic biases when heuristics applied
20
Q

MENTAL MODELS X SYLLOGISTIC REASONING

A

JOHNSON-LAIRD et al

  • we DON’T reason w/formal logic mental version
  • given premises we imagine 1+ possible concrete worlds where premises = true (mental models)
  • then generate conclusion/determine if offered conclusion = valid via mental model examination
  • errors arise via: failure of generating all possible premise mental models/WM capacity lack for multiple model maintenance
  • if only the 1st model constructed matches the conclusion = think inference valid BUT possibly untrue…
  • …as a 2nd model may also describe affairs consistent w/premises where conclusion = false (counter-model sketch below)
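A brute-force counter-model sketch (Python; the artists/beekeepers/chemists content is an invented example): searching tiny 3-person worlds finds a model where both premises hold but the tempting conclusion fails, which is the 2nd model reasoners often miss:

```python
from itertools import product

# Premises: "all A are B", "some B are C"; tempting conclusion: "some A are C".
valid = True
# A world assigns each of 3 people truth values for (A, B, C) membership.
for world in product(product([0, 1], repeat=3), repeat=3):
    all_a_are_b = all(b for a, b, c in world if a)
    some_b_are_c = any(b and c for a, b, c in world)
    some_a_are_c = any(a and c for a, b, c in world)
    if all_a_are_b and some_b_are_c and not some_a_are_c:
        valid = False                      # premises true, conclusion false
        print("counter-model:", world)
        break
print("inference valid?", valid)           # False
```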
21
Q

ILLUSORY CORRELATION

A

IMAGINE…

  • doc investigates disease D presenting w/symptom S, BUT S also present w/other diseases
  • finds S in 80% of D patients; concludes S = good D predictor
  • WRONG! must compare D frequency w/S VS w/o S
  • 40/50 w/S = D = confirmation?
  • WRONG! must consider all cases w/S+D VS w/o (aka S+D/S-D/D-S/neither)
  • 4/5 patients develop D regardless of S
  • wrong conclusion = availability bias; patients w/o S/D = < striking than those w/both (table below)
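A worked 2×2 table (counts invented to match the 80% figures above) showing why all four cells matter:

            D     no D
  w/S      40      10     P(D given S)    = 40/50 = 0.8
  w/o S    40      10     P(D given no S) = 40/50 = 0.8

Identical rates = zero correlation; only the vivid S+D cell makes S feel diagnostic.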
22
Q

ABSTRACT VS CONCRETE REASONING

A
  • imagining concrete scenarios assists reasoning/mental models BUT…
  • ability limited via WM representational capacity
  • may fail all possible scenario consideration
  • more available instances = easier to access
23
Q

TROUBLE W/CONDITIONAL PROPOSITIONS

A

WASON’S 4-CARD PROBLEM

  • all cards w/letter on 1 side; number on other
  • RULE = if card had vowel, then odd number on other side; which cards flipped to check if true?
  • pps choose A (as they should) BUT also 1 rather than 2
  • A+2 correct why? logically, for “if P then Q” the only combo inconsistent w/rule = P + not-Q, so flip those (A+2) cards; BUT rule says nothing about “if Q…” so no point checking 1 (truth table below)
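A minimal Python truth table for the material conditional, showing why only the A and 2 cards can falsify the rule:

```python
# Rule: "if vowel (P) on one side, then odd (Q) on the other".
for p in (True, False):          # vowel on one side?
    for q in (True, False):      # odd number on the other?
        rule_holds = (not p) or q            # material conditional
        print(f"vowel={p!s:5} odd={q!s:5} -> rule holds: {rule_holds}")
# Only vowel=True, odd=False violates the rule: flip A (P true, back
# unknown) and 2 (Q false, back unknown); 1 (Q true) can never falsify.
```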
24
Q

ARE PEOPLE JUST ILLOGICAL?

A

JOHNSON-LAIRD (1972)
- NO; changing problem content/context w/o changing formal structure = dramatic performance improvements
CHENG & HOLYOAK (1985)
- formally identical prob w/immigration form: “transit”/“entering” on 1 side, disease list on other; check rule observance: “if the form has ‘entering’ on 1 side, then the other side includes cholera amongst the diseases”
- half pps given rationale (mere transit passengers don’t need cholera inoculation; visitors do); other told to check for tropical diseases
- w/o rationale = poor perf (60%); w/ = good (90%)
- SO if given/familiar w/social rules/permissions rule context, performance = good

25
Q

DOMAIN-SPECIFIC “IF-THEN” DEONTIC REASONING

A
  • questions why performance ^ w/concrete contexts
    CHENG & HOLYOAK
  • successful conditions engage familiar permission schema for social rules (what ought to happen aka. IF YOU WANT P, THEN YOU MUST Q)
  • deontic “if-then” = same truth conditions as logical
26
Q

DOMAIN-SPECIFIC “IF-THEN” CAUSAL REASONING

A

OAKSFORD & CHATER (1994)

  • why characteristic error made (ie. choosing the Q card in abstract problem versions)?
  • pps choices = rational under causal/probabilistic “if-then”; NOT same truth conditions as logical if-then (ie. clouds cause rain = IF rain, THEN clouds)
  • if proposition interpreted as causal/correlational claim = clouds imply rain prob
  • SO IF clouds, reasonable to check for rain to collect info about relationship strength
27
Q

SUMMARY: WHY IS THINKING ERROR-PRONE?

A
  • “design limits” in cognitive caps (memory retrieval properties/limited WM/difficulty attending relevant info/difficulty shifting cognitive sets) =
  • heuristic reliance (approx rules of thumb)
  • intrinsic biases applying heuristics
  • habit of reasoning w/concrete mental models coupled w/failure/inability to generate/represent all possible mental models
  • “capture” of reasoning via relatively automatic domain-specific heuristics =
  • adaptive in right contexts BUT…
  • inappropriate for cases at hand