Chapter 2 Flashcards
1
Q
System 1 thinking
A
- intuitive; fast; relies on gut reactions
- relies on heuristics
2
Q
heuristics
A
mental shortcuts or rules of thumb
3
Q
System 2 thinking
A
- analytical; slow; relies on evaluation of evidence
4
Q
The scientific method can be compared to a
A
toolbox
5
Q
Random selection
A
- a technique whereby every person in a population has an equal chance of being chosen to participate in a study
- increases the generalizability of results
- studying a broadly representative sample is better than studying a larger but narrow one
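A minimal Python sketch of this idea (the population list and sample size are made up for illustration, not from the chapter): random.sample gives every member of the population an equal chance of being chosen.

```python
import random

# Hypothetical population of 1,000 people (names are placeholders)
population = [f"person_{i}" for i in range(1000)]

# Simple random selection: every person has an equal chance of being drawn
sample = random.sample(population, k=50)

print(len(sample))  # 50 randomly selected participants
```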
6
Q
Reliability
A
the consistency of a measurement
7
Q
test retest reliability
A
- the same results are obtained when the test is administered again at a later time
8
Q
Interrater reliability
A
- the extent to which different raters or observers agree on the same set of observations
9
Q
validity
A
- the extent to which a measure assesses what it claims to measure
10
Q
Reliability and validity interaction
A
- reliability is necessary for validity
- validity is not necessary for reliability
11
Q
Openness in science
A
- it is essential that findings are replicable and reproducible
- a movement created in response to the replicability crisis
12
Q
5 responses to the replication crisis
A
- post and share data and materials publicly
- conduct replications of their own and others' work
- preregister research
- encourage journals to publish all sound science, not just flashy findings
- place less emphasis on findings from single studies
13
Q
Naturalistic observation
A
- observe behaviour without trying to manipulate / change it in any way
14
Q
Advantages of Naturalistic Observation
A
- high external validity
- captures natural behaviours
15
Q
Disadvantages of naturalistic observation
A
- low internal validity
- possible reactivity
- possible observer bias
- no control over other variables
16
Q
case study definition
A
- an in-depth analysis of an individual, group, or event
17
Q
Major advantages of case studies
A
- allow investigation of rare phenomena
- may provide existence proofs
- may be good for hypothesis generation
18
Q
Major disadvantages of case studies
A
- cannot determine cause and effect
- generalization may be an issue
- possible observer bias
19
Q
Self-reported measures
A
- researchers use interviews, questionnaires, or surveys to gather specific information about a person's behaviour, attitudes, and feelings
20
Q
Self-reported measure advantages
A
- easy to administer and gather large amounts of data
- cost effective
- allows assessment of internal processes/thoughts/feelings that outside observers are typically not aware of
21
Q
Major disadvantages of self-reported measures
A
- how a question is worded can lead to very different results
- assume respondents have enough insight/knowledge to report accurately
- assume participants are honest, even though they often engage in response sets, and sometimes display malingering
22
Q
Rating Data
A
- a type of self-reported measure in which someone else is asked to comment on a person's behaviour (it is assumed that they know the person well)
23
Q
Rating Data advantages
A
- gets around malingering and response-set biases in self-reporting
24
Q
Disadvantages of rating data
A
- halo effect
- horns effect
- particularly susceptible to stereotypes
25
Halo effect
- the tendency for a high rating in one positive characteristic to spill over and enhance the ratings of other characteristics
26
Horns effect
- the tendency for a high rating in one negative characteristic to spill over and lower ratings for other characteristics
27
Correlational designs
- designs in which a researcher measures different variables to see if there is a relationship between them
28
Correlational designs advantage
- more flexible and easier to conduct than experiments
29
Correlational designs disadvantage
- cannot establish causation
30
How is the strength of a correlation measured?
- using a correlation coefficient
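As a rough illustration (the two variables and their values below are invented, not from the chapter), Python 3.10+ can compute a Pearson correlation coefficient directly; r ranges from -1.0 (perfect negative) through 0 (no correlation) to +1.0 (perfect positive).

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Hypothetical data: hours studied vs. exam scores
hours_studied = [1, 2, 3, 4, 5]
exam_scores = [55, 60, 68, 74, 80]

r = correlation(hours_studied, exam_scores)
print(round(r, 2))  # close to +1.0, i.e., a strong positive correlation
```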
31
Experimental designs
- a research design characterized by random assignment of participants to conditions and manipulation of at least one independent variable
32
Independent variable
- a variable that the experimenter manipulates
33
Dependent variable
- a variable that the experimenter measures
34
Random assignment
- ensures each participant has an equal chance of being assigned to the experimental group or the control group
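A minimal sketch of random assignment (the participant names and group sizes are assumed for illustration): shuffling the pool before splitting it gives each person an equal chance of landing in either group.

```python
import random

# Hypothetical pool of 20 participants
participants = [f"participant_{i}" for i in range(20)]

random.shuffle(participants)  # randomize the order

half = len(participants) // 2
experimental_group = participants[:half]  # e.g., receives the drug
control_group = participants[half:]       # e.g., receives no drug / a placebo
```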
35
Between-subjects designs
- a research design where the experimenter assigns different groups to the control or experimental conditions (group A gets the drug, group B gets nothing)
36
Within-subjects designs
- a research design where the experimenter has each participant serve as their own control (behaviour is measured before a variable is manipulated, and then after)
37
Extraneous (confounding) variables
- any variable that differs between the experimental and control groups and may be responsible for the observed difference between the two groups after manipulation
38
Placebo effect
- improvement from the mere expectation of improvement
39
Nocebo Effect
- harm from the mere expectation of harm
40
Nocebo effect example
Morse, 1999
41
Experimenter Expectancy effect (Rosenthal Effect)
- researchers' hypotheses lead them to unintentionally bias the outcome of a study (usually in line with their hypothesis)
- driven by confirmation bias
42
How can you protect against the Rosenthal effect?
- double blind procedure
43
Demand characteristics
- participants guess the purpose of the study and change how they act based on their assumptions
44
Hawthorne effect
- people's knowledge that they are being studied changes their behaviour
45
What is the Tuskegee study an example of?
shameful science
46
Tuskegee Study
- 1932-1972
- men diagnosed with syphilis
- never given treatment, in order to study the "natural progression" of the disease
47
1979 Belmont report
a response to the Tuskegee study
stated that research should:
- allow people to make decisions about themselves
- be beneficial
- distribute benefits and risks equally to all participants
48
Research Ethics boards
- all North American research colleges and universities should have at least one
- they review planned research with the mandate to protect participants against harm
- they adhere to national guidelines found in the Tri-Council Policy Statement
49
Research with people must have:
- informed consent
- protection from harm
- freedom from coercion
- risk benefit analysis
- justification of deception
- debriefing participants afterwards
- confidentiality
50
Animal research ethics boards
- all universities and colleges must have one
- review planned research to ensure that animals are treated humanely
- follow the guidelines of the Canadian Council on Animal Care
51
Statistics
- is the application of mathematics to describe and analyze data
52
Descriptive Statistics
- numerical characterizations that describe the nature of data
53
Types of descriptive statistics
- central tendency
- variability
54
Central tendency
- statements about the value of measurements that tend to lie near the center or midpoint of a distribution
55
Three measures of central tendency
- mean
- median
- mode
56
variability
- measures of how loosely or tightly bunched scores are in a dataset
57
two main methods of measuring variability
- range
- standard deviation
58
inferential statistics
- mathematical methods that allow researchers to determine whether findings from a sample can be generalized to the broader population
- allow researchers to determine whether their results are likely to have occurred simply due to chance
59
statistical significance
- the probability that the findings are due to chance; if the results are statistically significant, it means they are very unlikely to have occurred due to chance factors alone
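A hedged sketch of how this is often checked in practice (the scores below are invented, and the independent-samples t-test is just one common significance test, not something specified in the chapter): a small p-value, conventionally p < .05, is what gets labelled "statistically significant".

```python
from scipy import stats

# Hypothetical scores for two groups
control = [4, 5, 6, 5, 4, 6, 5, 5]
experimental = [7, 8, 6, 9, 7, 8, 7, 8]

# Independent-samples t-test: p estimates the probability of a difference
# at least this large arising from chance alone
t_stat, p_value = stats.ttest_ind(experimental, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```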
60
Practical significance
- a determination of whether this finding has any real-world importance
61
Peer review
- a process of quality control for research before it is published in an academic journal
- reviewers' job is to identify flaws that could undermine the findings of a study and ensure the claims made reflect the data
62
3 things to look out for when evaluating data from the media
- sharpening
- levelling
- pseudosymmetry
63
Confirmation bias
- tendency to seek out evidence that supports our hypothesis and deny or dismiss evidence that contradicts it
64
Double-blind
- when neither the researchers nor the participants are aware of who is in the experimental or control group
65
operational definition
- a working definition of what we are measuring
66
Illusory correlation
- the perception of a statistical association between two variables where none exists
67
positive correlation
- as one variable goes up, so does the other
68
zero correlation
- no correlation
69
negative correlation
- one variable goes down while the other goes up
70
response sets
- tendencies of participants to distort their responses, often to paint themselves in a positive light
71
malingering
- tendency to make ourselves appear psychologically disturbed with the aim of achieving a clear-cut personal goal
72
replicability
- new data
- refers to the ability to duplicate the original findings consistently
73
reproducibility
- same data
- ability to review and reanalyze the data from a study and find exactly the same results
74
external validity
- the extent to which we can generalize findings to real-world settings
75
Internal validity
- the extent to which we can draw cause and effect inferences from a study
76
mean
- average
- a measure of central tendency
77
median
- middle score in a dataset
- a measure of central tendency
78
mode
- most frequent score in a dataset
- a measure of central tendency
79
meta-analysis
- statistical methods that help researchers interpret large bodies of psychological literature
80
range
- difference between the highest and lowest scores
- a measure of variability
81
standard deviation
- a measure of variability that takes into account how far each data point is from the mean
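A minimal sketch tying the descriptive-statistics cards together (the dataset is made up for illustration), using Python's standard statistics module:

```python
from statistics import mean, median, mode, stdev

scores = [2, 4, 4, 5, 7, 9]  # hypothetical dataset

print(mean(scores))               # central tendency: the average
print(median(scores))             # central tendency: the middle score
print(mode(scores))               # central tendency: the most frequent score
print(max(scores) - min(scores))  # variability: the range
print(stdev(scores))              # variability: sample standard deviation
```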