Chapter 2 Flashcards
System 1 thinking
- intuitive; fast; relies on gut reactions
- relies on heuristics
heuristics
mental shortcuts or rules of thumb
System 2 thinking
- analytical; slow; relies on evaluation of evidence
The scientific method can be compared to a
toolbox
Random selection
- a technique whereby every person in a population has an equal chance of being chosen to participate in a study
- increases the generalizability of results
- studying people broadly is better than studying more people narrowly
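As a rough sketch of the idea in Python (the population size, IDs, and sample size below are made up for illustration):

```python
import random

# Hypothetical population of 10,000 people, identified by ID number
population = list(range(10_000))

# Random selection: every member has an equal chance of being chosen
# (drawn without replacement)
sample = random.sample(population, k=100)
```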
Reliability
the consistency of a measurement
Test-retest reliability
- if the same people are tested again, they will achieve similar results
Interrater reliability
- the extent to which different raters or observers agree when scoring the same observations
validity
- the extent to which a measure assesses what it claims to measure
Reliability and validity interaction
- reliability is necessary for validity
- validity is not necessary for reliability
Openness in science
- essential so that findings are replicable and reproducible
- created in response to the replicability crisis
5 responses to the replication crisis
- post and share data and materials publicly
- conduct replications of their own and others' work
- preregister research
- encourage journals to publish all sound science, not just flashy findings
- place less emphasis on findings for single studies
Naturalistic observation
- observe behaviour without trying to manipulate / change it in any way
Advantages of Naturalistic Observation
- high external validity
- captures natural behaviours
Disadvantages of naturalistic observation
- low internal validity
- possible reactivity
- possible observer bias
- no control over other variables
case study definition
- an in-depth analysis of an individual, group, or event
Major advantages of case studies
- allows investigation of rare phenomena
- may provide existence proofs
- may be good for hypothesis generation
Major disadvantages of case studies
- cannot determine cause and effect
- generalization may be an issue
- possible observer bias
Self-reported measure
- researchers use interviews, questionnaires, or surveys to gather specific information about a person's behaviour, attitudes, and feelings
Self-reported measure advantages
- easy to administer and gathers large amounts of data
- cost effective
- allows assessment of internal processes/thoughts/feelings that outside observers are not typically aware of
Major disadvantages of self-reported measures
- how the question is worded can lead to many different results
- assume respondents have enough insight/knowledge to report accurately
- assume participants are honest, even though they often engage in response sets, and sometimes display malingering
Rating Data
- a type of self-reported measure where someone else is asked to comment on a person's behaviour (it is assumed that they know the person well)
Rating Data advantages
- gets around malingering and response-set bias in self reporting
Disadvantages of rating data
- halo effect
- horns effect
- particularly susceptible to stereotypes
Halo effect
- the tendency for a high rating on one positive characteristic to spill over and enhance the ratings of other characteristics
Horns effect
- the tendency for a high rating on one negative characteristic to spill over and lower the ratings of other characteristics
Correlation designs
- those in which a researcher measures different variables to see if there is a relationship between them
Correlation designs advantage
- more flexible and easier to conduct than experiments
correlation designs disadvantages
- cannot determine causation
How is the strength of a correlation measured?
- using a correlation coefficient
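A minimal sketch of computing a correlation coefficient with Python's standard statistics module (the study-time and exam-score numbers are invented for illustration):

```python
from statistics import correlation  # available in Python 3.10+

# Invented example data: hours studied vs. exam score
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 60, 63, 70, 74, 81]

r = correlation(hours, scores)  # Pearson's r, always between -1.0 and +1.0
print(round(r, 2))  # close to +1 here, indicating a strong positive correlation
```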
Experimental designs
- a research design characterized by random assignment of participants to conditions and manipulation of at least one independent variable
independent variable
- a variable that the experimenter manipulates
dependent variable
- a variable that the experimenter measures
Random assignment
- ensures each participant has an equal chance of being assigned to the experimental group or the control group
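A minimal sketch of random assignment, assuming a made-up list of participant IDs: shuffle the list, then split it in half.

```python
import random

# Hypothetical participant IDs
participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]

random.shuffle(participants)              # every ordering is equally likely
half = len(participants) // 2
experimental_group = participants[:half]  # e.g. receives the treatment
control_group = participants[half:]       # e.g. receives a placebo
```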
Between-subject designs
- a research design where the experimenter assigns different groups to the control or experimental conditions (group A gets the drug, group B gets nothing)
Within-subject designs
- where the experimenter has each participant serve as their own control (measure behaviour before a variable is manipulated, and then after)
Extraneous (confounding) variables
- any variable that differs between the experimental and control groups and may be responsible for the observed difference between the two groups after manipulation
Placebo effect
- improvement from the mere expectation of improvement
Nocebo Effect
- harm from the mere expectation of harm
Nocebo effect example
Morse, 1999
Experimenter Expectancy effect (Rosenthal Effect)
- researchers' hypotheses lead them to unintentionally bias the outcome of the study (usually in line with their hypothesis)
- driven by confirmation bias
How can you protect against the Rosenthal effect?
- double-blind procedure
Demand characteristics
- participants guess the purpose of the study and change how they act based on their assumptions
Hawthorne effect
- people's knowledge that they are being studied changes their behaviour
what is the Tuskegee study an example of
shameful science
Tuskegee Study
- 1932-1972
- men diagnosed with syphilis
- never given treatment in order to study the “natural progression” of disease
1979 Belmont report
response to the Tuskegee study
stated that research should:
- allow people to make decisions about themselves
- be beneficial
- distribute benefits and risks equally to all participants
Research Ethics boards
- all North American research colleges and universities should have at least one
- they review planned research with the mandate to protect participants against harm
- adhere to national guidelines found in the Tri-Council Policy Statement
Research with people must have:
- informed consent
- protection from harm
- freedom from coercion
- risk benefit analysis
- justification of deception
- debriefing participants afterwards
- confidentiality
Animal research ethics boards
- all universities and colleges must have one
- review planned research to ensure that animals are treated humanely
- follow the guidelines of the Canadian Council on Animal Care
Statistics
- is the application of mathematics to describe and analyze data
Descriptive Statistics
- numerical characterizations that describe the nature of the data
types of descriptive statistics
- central tendency
- variability
Central tendency
- statements about the value of measurements that tend to lie near the center or midpoint of a distribution
Three measures of central tendency
- mean
- median
- mode
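A quick sketch of all three measures using Python's statistics module (the scores are invented):

```python
from statistics import mean, median, mode

scores = [2, 4, 4, 5, 7, 9, 11]  # invented scores

print(mean(scores))    # average -> 6
print(median(scores))  # middle score -> 5
print(mode(scores))    # most frequent score -> 4
```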
variability
- measures of how loosely or tightly bunched scores are in a dataset
two main methods of measuring variability
- range
- standard deviation
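A matching sketch for the two variability measures, reusing the same invented scores:

```python
from statistics import stdev

scores = [2, 4, 4, 5, 7, 9, 11]  # same invented scores as above

data_range = max(scores) - min(scores)  # highest minus lowest: 11 - 2 = 9
spread = stdev(scores)                  # sample standard deviation: about 3.16
print(data_range, round(spread, 2))
```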
inferential statistics
- mathematical methods that allow researchers to determine whether they can generalize findings from a sample to the full population
- allow researchers to determine if their results are likely to have occurred simply due to chance
statistical significance
- the probability that the findings are due to chance; if the results are statistically significant, they are very unlikely to have occurred due to chance factors alone
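As an illustration only, a common way to estimate this probability is a t-test; the sketch below uses SciPy and invented group scores, with the conventional .05 cutoff assumed:

```python
from scipy import stats

# Invented scores for an experimental and a control group
experimental = [14, 15, 17, 18, 20, 21, 22]
control = [10, 11, 12, 13, 13, 14, 16]

t_stat, p_value = stats.ttest_ind(experimental, control)

# By convention, p < .05 is called statistically significant,
# meaning the difference is unlikely to be due to chance alone
print(round(p_value, 4), p_value < 0.05)
```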
Practical significance
- a determination of whether this finding has any real-world importance
Peer review
- a process of quality control for research before it is published in an academic journal
- reviewers' job is to identify flaws that could undermine the findings of a study and ensure the claims made reflect the data
3 things to look out for when evaluating data from the media
- sharpening
- levelling
- pseudosymmetry
confirmation bias
- tendency to seek out evidence that supports our hypothesis, and deny evidence that contradicts it
double-blind
- when neither the researchers nor the participants know who is in the experimental group and who is in the control group
operational definition
- a working definition of what we are measuring
Illusory correlation
- perception of a statistical association between two variables where none exists
positive correlation
- as one variable goes up, so does the other
zero correlation
- no correlation
negative correlation
- as one variable goes up, the other goes down
response sets
- tendencies to respond in ways that paint oneself in a positive light
malingering
- tendency to make ourselves appear psychologically disturbed with the aim of achieving a clear-cut personal goal
replicability
- new data
- refers to the ability to duplicate the original findings consistently
reproducibility
- same study
- ability to review and reanalyze the data from a study and find exactly the same results
external validity
- the extent to which we can generalize findings to real-world settings
Internal validity
- the extent to which we can draw cause and effect inferences from a study
mean
- average
- a measure of central tendency
Inferential statistics
- mathematical methods that allow us to determine whether we can generalize findings from our sample to the full population
median
- middle score in a dataset
- a measure of central tendency
mode
- most frequent score in a dataset
- a measure of central tendency
meta-analysis
- statistical methods that help researchers interpret large bodies of psychological literature
range
- difference between the highest and lowest scores
- a measure of variability
standard deviation
- a measure of variability that takes into account how far each data point is from the mean