Midterm 2 Flashcards
What are the goals of scientific research?
- EXPLANATION: explain properties or relationships
- PREDICTION: predict events
- CONTROL: apply findings to influence outcomes & solve problems
Operationalization
Stating precisely what you plan on measuring & how
Hypothesis
A testable set of beliefs about the nature of the world; a prediction that can be checked against evidence
Independent vs. dependent variable
IV = manipulated DV = measured
How can samples be biased? How can this affect the interpretation of research?
- Not selecting a truly random sample that is representative of the larger population
- Unable to generalize findings
Type I error
False positive (detecting an effect that is not actually there)
Type II error
False negative (failure of a test to detect an actual outcome)
What is the purpose of a control group?
Provides a baseline against which to compare the experimental group's results
Reductive approach
- Attempting to understand a complex system by looking at its parts and their interactions
- Using lower levels of analysis to explain phenomena at higher levels
Level of analysis
The differing complementary views, from biological to psychological to sociocultural, for analyzing any given problem
(ex. neurons, molecular biology, biochemistry or when testing a drug, examining histology first then animals)
Confound
When experimental groups differ in more than one way
Prospective
Uses our understanding of a system to make predictions about the future
Retrospective
Uses our understanding of a system to explain what has already happened (detective work)
Expected value
Amount of money you would expect to win in the long run in a betting situation
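As a sketch, expected value is each net payoff weighted by its probability. The bet below is hypothetical:

```python
def expected_value(outcomes):
    """outcomes: list of (probability, net_payoff) pairs."""
    return sum(p * payoff for p, payoff in outcomes)

# Hypothetical fair-coin bet: stake $1, win $3 on heads.
# Net payoff: +$2 on heads, -$1 on tails.
ev = expected_value([(0.5, 2), (0.5, -1)])
print(ev)  # 0.5 -> on average you gain 50 cents per play
```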
Base-rate neglect
Tendency for people to mistakenly judge the likelihood of an event by ignoring base rates (how common it is overall) in favor of specific information
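The standard illustration is a medical test: with a rare condition, even a fairly accurate test produces mostly false positives. The numbers below are made up for illustration:

```python
# Hypothetical numbers: a disease with a 1% base rate, a test that is
# 90% sensitive (true positive rate) with a 9% false positive rate.
base_rate = 0.01
sensitivity = 0.90
false_positive_rate = 0.09

# P(positive) via the law of total probability
p_positive = sensitivity * base_rate + false_positive_rate * (1 - base_rate)

# Bayes' rule: P(disease | positive test)
p_disease_given_positive = sensitivity * base_rate / p_positive
print(round(p_disease_given_positive, 3))  # ~0.092: under 10%, despite the "90% accurate" test
```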
Gambler’s fallacy
- Mistaken belief that chance events are self-correcting
- Belief that if a random event hasn’t occurred recently, it is “due” and more likely to occur
What biases can affect our judgment of likelihood and probability?
- Motivated reasoning
- Limited perspectives and cognition
- Bad data and problems evaluating evidence
Overconfidence
Mismatch between estimation of risks and the actual risks
Confirmation bias
Tendency to seek out and interpret evidence in ways that confirm what we already believe
Pollyanna principle
- a.k.a. “Wishful thinking”
- the idea that if we want something to happen, it will
- tendency to believe that pleasant events are more likely to happen than unpleasant ones
Psychological reactance
- WE DON’T LIKE BEING TOLD WHAT TO DO
- resistance arising from restrictions of freedom
- some people will select a less preferred alternative if they are told they must select the preferred alternative
Stages of problem solving
- PREPARATION: understanding the nature of the problem
- PRODUCTION: producing solution pathways
- EVALUATION: evaluating solution paths in order to pick one
Ill-defined problem
- Many possible answers
- Most problems in life
Well-defined problem
Single correct answer
Anatomy of a problem
- Initial = where you are
- Goal = where you want to be
- Problem space
Problem space
All possible routes that take you from initial state to goal state in a problem
Means-ends analysis
Break problems down into subgoals –> each subgoal brings you closer to the end goal
Incubation
- Period in problem solving when the problem solver is not actively working on the problem
- “time out period”
Steps in Halpern’s framework of thinking
- Verbal reasoning
- Argument analysis
- Thinking as hypothesis testing
- Likelihood and uncertainty
- Decision making and problem solving
Falsification
Attempting to disprove a hypothesis; a scientific claim must be falsifiable (testable in a way that could show it is wrong)
Principle of parsimony or Occam’s razor
SIMPLE IS BETTER
What approaches help researchers deal with high levels of complexity?
- Reductive approach
- Controlled experiments
- Converging evidence
- Levels of analysis
- Principle of parsimony
- Factorial designs (show how things interact in a complex system)
Advantages of using a factorial design instead of single factor experiment
- Factorial design = looking at how things interact in a complex system
- Understand the effects of TWO OR MORE IV upon a single DV
- Traditional single-factor methods study the effect of only one IV at a time
Why can we not conclude causation when we observe a correlation between two variables?
Just because two things co-occur with one another, does not necessarily mean that one causes the other
What are some means of double-checking or evaluating the quality of research?
- Is it PEER-REVIEWED?
- Is it part of a body of converging evidence?
- Do you trust the source & reputation?
- Has it been REPLICATED?
Mechanism
Refers to a system of interacting factors whose details are described well enough that predictions can be made from them
3 rules of causation
- Covariation: are the variables related?
- Temporal precedence: the cause comes before the effect
- Internal validity: rule out alternative (third-variable) explanations
Problems in pseudoscience
- Not testable, can’t be replicated
- Sounds too good to be true
- example: Airborne claimed it prevented/treated colds
Kahneman & Tversky: subjective utility of gains vs. losses
- Subjective utility: the value of a choice to an individual who is making the decision
- Losses loom larger than gains: losing feels worse than an equivalent gain feels good
Sunk-cost fallacy
- Decisions made on past investments rather than future reward
(ex. junker car, spend thousands already, keep repairing instead of getting a new one –> spend as much in the end)
How does framing a decision in terms of gains versus framing in terms of risks/losses affect decision-making?
- MINIMIZE LOSS & MAXIMIZE GAINS
- people make riskier decisions if the frame emphasizes loss & people become risk-avoiders if the decision is framed in terms of gains
Backdrop of possible
Easier to remember what happened rather than what could have happened
How do people’s expectations of randomness (naive people) differ from what true randomness looks like?
- Most people expect randomness to look evenly spread out, but true randomness actually produces clumps and streaks
- We expect more uniformity from randomness, so streaks and clumps draw our attention
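A quick simulation (hypothetical, seeded for reproducibility) shows that fair coin flips naturally contain long streaks of the kind naive observers read as non-random:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.choice("HT") for _ in range(200)]

# Find the longest run of identical outcomes.
longest = run = 1
for prev, cur in zip(flips, flips[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)

# In 200 fair flips the longest streak is typically around 7 --
# longer than most people expect "true" randomness to produce.
print(longest)
```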
Conjunction error
Mistaken belief that the co-occurrence of two or more events is more likely than the occurrence of one event alone
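The underlying rule: for any events A and B, P(A and B) = P(A) x P(B | A) <= P(A), so a conjunction can never be more likely than either event alone. A minimal numeric sketch with made-up probabilities:

```python
# For any events A and B: P(A and B) = P(A) * P(B | A) <= P(A).
p_a = 0.05          # hypothetical P(A)
p_b_given_a = 0.30  # hypothetical P(B | A)

p_both = p_a * p_b_given_a
assert p_both <= p_a  # the conjunction is never more likely than A alone
print(round(p_both, 3))  # 0.015
```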
Central tendency
Mean, median, & mode
Variability
Range, variance, & standard deviation
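Both sets of statistics can be computed with Python's standard `statistics` module; the scores below are made-up data:

```python
from statistics import mean, median, mode, pvariance, pstdev

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical exam scores

# Central tendency
print(mean(scores))    # mean = 5
print(median(scores))  # median = 4.5 (average of the two middle values)
print(mode(scores))    # mode = 4 (most frequent value)

# Variability (population versions)
print(max(scores) - min(scores))  # range = 7
print(pvariance(scores))          # variance = 4
print(pstdev(scores))             # standard deviation = 2.0 (sqrt of variance)
```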
3 main ways to get it “wrong”
- Attitudes
- Bad data
- Bad habits & skills
Expert
A person who has comprehensive and authoritative knowledge or skill in a particular area
What do Ericsson & Ward have to say about becoming an expert?
- “expert performance approach”
- GOAL: understand the mechanism that mediate consistently superior performance
- Takes time
- Expertise can change you physically and mentally
- Deliberate practice is key
Deliberate practice
Focused, effortful practice designed to improve specific aspects of performance (more than simply gaining experience)
Dunning-Kruger effect
Tendency for people with low competence to overestimate their ability (rate themselves higher than average)
How is expert memory different from novice memory? How is it similar?
- Experts encode pieces by their meaningful relationships to other pieces, i.e., they chunk more effectively
- The advantage disappears with random positions: when the structure is irrelevant, expert memory looks like novice memory
What evidence is there that experience shapes the brain?
- “The knowledge” –> London cab drivers
- Parts of the brain can be changed by different things you learn
Big C and little c creativity
Big C = “breakthrough” creativity
Little c = everyday creativity
Lateral thinking
Thinking around the problem, increasing the number of possible alternatives
2 strategies for creative thinking
- Brainstorming
- Creative idea checklists
How might creativity relate to problem solving?
- Using the information stored in memory to go beyond what is learned from experience –> different domains of knowledge
How have laboratory studies of creativity operationalized performance?
- Divergent thinking test: looks at the ability to generate as many solutions to a problem as possible (ex. figuring out other uses for a paperclip)
- Remote Associates Test: given 3 random words, state a word that links all 3 (ex. stool, powder, ball –> foot)
Belief bias
Tendency to judge the strength of an argument based on the believability of the conclusion
Shifting goalposts
- Whenever you are challenged to present evidence against a position and you succeed, your opponent simply moves on to the next demand and pretends that it is what is truly important
- Usually done by the losing side –> try to save face
- “raising the bar”
Dunning-Kruger effect
Most people rate themselves as moderately high in competence, regardless of their actual competence
Appeal to consequences
Arguing that if an idea has bad consequences that it must be untrue
Fallacy fallacy
Arguing that because an opponent’s argument contains a fallacy, their conclusion must be untrue
Just world hypothesis
Describes a tendency for people to assume that actions lead to fitting consequences
Halo effect
Likable/attractive people are sometimes perceived as smart or more capable
Validity
Whether a measure actually measures what it is intended to measure
Functional-fixedness
Considering only the standard use of an object
False consensus
Tendency to think that one’s opinions are more typical than they are, b/c we usually surround ourselves with others who share our views