Exam 1 Flashcards

1
Q

experimental research

A
  • Carefully controls and manipulates variables
  • Quantitative research → focuses on numerical data (statistics)
  • Tries to reveal cause-and-effect relationships (causality)
2
Q

non-experimental research

A
  • No manipulation of variables
  • Can be qualitative as well as quantitative
  • Reveals relationships, not causation
3
Q

theory

A

a set of statements that describes general principles about how variables relate to one another

4
Q

parsimonious theory

A

simple, concise, and elegant, with few hypotheses and constructs

5
Q

data cycle of theories

A

theory → research questions → research design → hypothesis → data

6
Q

components that make a good theory

A

testable, coherent, economical, generalizable, explain known findings, principle of determinism, principle of parsimony, principle of testability, principle of empiricism, all principles repeated

7
Q

principle of determinism

A

seeks to establish explanations for events

8
Q

principle of parsimony

A

seeks the simplest explanation possible

9
Q

principle of testability

A

relies on testable, falsifiable statements

10
Q

principle of empiricism

A

requires objective observations

11
Q

all principles repeated

A

seeks replicable results

12
Q

non-falsifiable theory

A

a theory or assertion that is impossible to prove wrong because there is no way to test it
- ex. beaches are better travel destinations than mountains (subjective); aliens exist (cannot disprove this)

13
Q

basic research

A

research conducted to enhance the general body of knowledge rather than to address a specific, practical problem

14
Q

applied research

A

conducted in a local, real-world context

15
Q

translational research

A

the use of lessons from basic research to develop and test applications to health care, psychotherapy, or other forms of treatment and intervention

16
Q

Merton’s scientific norms

A

how scientists should act
- universalism
- communality
- disinterestedness
- organized skepticism

17
Q

universalism

A

everyone can do science; scientific claims are evaluated by the same pre-established criteria

18
Q

communality

A

scientific knowledge is created by a community and its findings belong to the public

19
Q

disinterestedness

A

scientists should not be invested in whether their hypotheses are supported by the data

20
Q

organized skepticism

A

what can be tested should be tested, including one’s own theories, widely accepted ideas, and “ancient wisdom”

21
Q

four sources of evidence

A

experience, intuition, authority, empirical research

22
Q

experience as a source of evidence

A
  • has no comparison group
  • confounded (several possible explanations for an outcome; difficult to isolate variables in our personal experiences)
  • research is better than experience
  • research is probabilistic (findings are not expected to explain all the cases all the time; multiple causes exist for a single outcome)
23
Q

intuition as a source of evidence

A
  • our hunches about what seems “natural”
  • accepting a conclusion just because it makes sense or feels natural
  • can be biased
24
Q

ways intuition can be biased

A
  • Being swayed by a good story
  • Being persuaded by what comes easily to mind
    (Availability heuristic)
  • Failing to think about what we cannot see
    (Present bias → failing to look for absences)
  • Focusing on the evidence we like best
    (Confirmation bias → looking only at information that agrees with what we want to believe)
  • Biased about being biased
    (People think that biases do not apply to them)
25
authority as a source of evidence
- authorities can also base their advice on their own experience or intuition
- might present only the studies that support their own side
26
research as a source of evidence
empirical research through journal articles, edited book chapters, full-length books
27
constant
something that does not change
28
variables
something that changes
29
measured variable
observed, recorded - some variables can only be measured
30
manipulated variable
controlled - some variables can be either manipulated or measured
31
constructs/conceptual variables
abstract concepts that need precise and clear definitions so that others can understand exactly what each one means and what it does not mean - ex. determining "school achievement" by looking at grades
32
operational definitions/operational variables/operationalization
define constructs in terms of how they will be empirically measured - ex. determining "school achievement" through self-report questionnaire, checking records, teachers' observations
33
frequency claims
describes a particular rate or degree of a single variable; involves only one measured variable - Examples: 39% of teens admit to texting while driving; screen time for kids under 2 more than doubles; 44% of Americans struggle to stay happy
34
association claims
an argument that one level of a variable is likely to be associated with a level of another variable; involves at least two measured variables
- if an association exists, the variables correlate
- the association can be positive, negative, or zero
35
positive association
increase of one variable correlated with increase of a second variable; as x increases, y also increases
36
negative association
increase of one variable correlated with decrease of a second variable; as x increases, y decreases
37
zero association
no pattern of increase or decrease between two variables
38
making predictions based on association
- Given x, you can predict y, or vice versa
- Can only make predictions based on positive and negative associations, not zero associations
- Stronger associations = more accurate predictions
- A value between 0.7 and 1 (or -0.7 and -1) is generally considered a strong correlation
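To make the prediction idea concrete, here is a minimal Python sketch with made-up, hypothetical numbers (hours studied and exam scores are purely illustrative): it computes the correlation between x and y and then uses the best-fitting line to predict y for a new x.

```python
import numpy as np

# Hypothetical data: hours studied (x) and exam score (y)
x = np.array([2, 4, 5, 7, 8, 10])
y = np.array([60, 65, 70, 78, 82, 90])

r = np.corrcoef(x, y)[0, 1]             # strength of the linear association
slope, intercept = np.polyfit(x, y, 1)  # best-fitting straight line

print(f"r = {r:.2f}")  # close to +1, so a strong positive association
print(f"predicted score for 6 hours of studying: {slope * 6 + intercept:.1f}")
```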
39
causal claims
an argument; one variable causes changes in the level of another variable - supported by experiments (studies that have a manipulated variable and a measured variable)
40
3 criteria to make a causal claim
- two variables (the causal variable and the outcome variable) are correlated; the relationship cannot be zero
- the causal variable came first, and the outcome variable came later
- no other explanations exist for the relationship
41
validity
the appropriateness of a conclusion or decision; a valid claim is reasonable, accurate, and justifiable
42
four big validities
construct validity, external validity, internal validity, statistical validity
43
construct validity
how well a conceptual variable is operationalized; how well variables are measured/manipulated
44
external validity
extent to which results generalize to larger population, across time, or situation
45
statistical validity
how well do numbers support claim; how strong is effect; how precise is the estimate
46
internal validity
if A caused B, to what extent is A the cause and not another variable (C)
47
three criteria for causation
covariance, temporal precedence, internal validity
48
covariance
as A changes, so does B; the variables are related
49
temporal precedence
method used ensures A (causal variable) comes first in time before B (effect variable)
50
internal validity as a criterion of causation
method ensures no plausible alternative explanations for change in B; A is the only cause of the change in B
51
unethical choices of the Tuskegee Syphilis study
- the men were not treated respectfully
- the men were harmed
- the researchers targeted a disadvantaged social group
52
ethical issues raised by the Little Albert study
- No informed consent obtained
- Is conditioning fear into a young child ethical?
53
Declaration of Helsinki addressed several issues
- Health and welfare of human research participants
- All medical research must conform to accepted scientific principles
- Must be based on knowledge of relevant scientific literature
54
three basic principles of the Belmont Report
- respect for persons
- beneficence
- justice
55
respect for persons as a principle of the Belmont Report
- Participants should be treated as autonomous agents
- People with less autonomy are entitled to special protection
- Must be volunteers who are fully informed
- Researchers are not allowed to mislead, coerce, or unduly influence a person into participating
56
beneficence as a principle of the Belmont Report
- Ensure well-being of research participants
- Do no harm
- Maximize benefits, minimize harm
- Protect people’s personal information
57
justice as a principle of the Belmont Report
- Equal share in costs and benefits
- Participants should be representative of the people who would benefit from the study’s results
58
ethical rules derived from the respect for persons principle of the Belmont Report
- Obtain and document informed consent
- Respect privacy
- Employ special protections for participants who have limited autonomy (e.g., prisoners)
59
ethical rules derived from the beneficence principle of the Belmont Report
- Use the least risky research methods possible
- Potential risks must be balanced against the potential benefits
- Fulfill the promise to maintain confidentiality
- Carefully monitor research
60
ethical rules derived from the justice principle of the Belmont Report
- Treat participants equitably
- Avoid exploiting vulnerable populations
61
APA's five general principles
- Belmont Report principles (respect for persons, beneficence, justice)
- fidelity and responsibility
- integrity
62
fidelity and responsibility
- faithfulness and honesty
- Accountability and protection
- Psychologists uphold professional standards of conduct, clarify their professional roles and obligations, accept appropriate responsibility for their behavior, and seek to manage conflicts of interest that could lead to exploitation or harm
63
integrity
- Psychologists seek to promote accuracy, honesty, and truthfulness in the science, teaching, and practice of psychology
- In these activities, psychologists do not steal, cheat, or engage in fraud, subterfuge, or intentionally misrepresent facts
64
APA ethical standards for research
1) the institutional review board (IRB)
2) informed consent
3) deception
4) debriefing
5) research misconduct
6) plagiarism
7) animal research
65
the institutional review board (IRB)
- Research must be screened by the IRB before it is started, by individuals with no vested interest in the research
- The IRB ensures that ethical rules are followed and assesses the risk-benefit ratio
- Two important functions: ensuring research meets ethical standards and protecting researchers from liability
- IRBs work well when they adhere to two principles: acting to protect human participants, and helping to educate and train staff about ethical issues (which improves communication between the IRB and researchers)
66
informed consent
- A written document that outlines the procedures, risks, and benefits of the research
- Obtained before taking part in the study
- Voluntary participation
- Made aware of purposes and logistics
- Made aware of risks, if any
- Not always required
67
cases in which informed consent might not be required in a study
- Is not likely to cause harm
- Involves a completely anonymous questionnaire
- Takes place in an educational setting (exemption under federal regulations)
- Involves naturalistic observation of participants in low-risk public settings, e.g., museum, classroom, mall
68
deception
- to mislead, to hide the truth
- allowed only when the benefit outweighs the cost
69
debriefing
- After the study ended
- Reveal purpose
- Reveal any deception
- Answer questions
70
research misconduct
types of research fraud:
- Outright fabrication (making up) of data (most harmful, but rare)
- Altering data to make them “look better” (data falsification)
- Selecting only the best data for publication
- Using the “least publishable unit” rule (deriving several publications out of a single study)
- Sabotage of others’ work
- Claiming credit for work done by others
- Attaching your name to a study you had little to do with
71
plagiarism
the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit
72
animal research
- Overseen by the IACUC (Institutional Animal Care and Use Committee), which approves animal research projects
- Laboratory animals have legal protection
- Psychologists who use animals in research must: care for them humanely, use as few animals as possible, and be sure the research is valuable enough to justify using animal subjects
73
animal care guidelines
- Replacement → find alternatives to animals in research when possible (computer simulations, statistical modeling)
- Refinement → modify experimental procedures and other aspects of animal care to minimize or eliminate animal distress
- Reduction → use designs that require the fewest animal subjects possible
74
conceptual definition of a variable
researcher’s definition of the variable in question at a theoretical level
75
operational definition of a variable
a researcher’s specific decision about how to measure or manipulate the conceptual variable
76
three common types of measures
- self-report
- observational
- physiological
77
self-report
people’s answers to questions about themselves in a questionnaire or interview
- rating scales commonly used
- popular and easy to use
- questionable reliability and validity
78
observational
recording observable behaviors or physical traces of behaviors, e.g., how many times a person smiles, taking an IQ test
79
physiological
recording biological data, e.g., brain activity, hormone levels, heart rate
80
types of observational/behavioral measure
frequency, latency, number of errors
81
frequency type of observational/behavioral measure
count of the number of behaviors
82
latency type of observational/behavioral measure
amount of time it takes for a behavior to occur
83
number of errors type of observational/behavioral measure
number of incorrect responses made
84
categorical variables
nominal variables
85
quantitative variables
ordinal scale, interval scale, ratio scale
86
nominal scale
lowest scale of measurement; variables whose values differ by category
- the values of the variable are different names, with no ordering of values implied
87
ordinal scale
different values of a variable can be ranked according to quantity
88
interval scale
spacing between values is known
- ex. temperature, IQ score
- no true zero point
- can apply mathematical operations
89
ratio scale
similar to interval scale, but with a true zero point
90
factors affecting choice of a scale of measurement
- information yield (nominal scale yields least information, ordinal scale adds some crude information, interval and ratio scales yield the most information)
- statistical tests available
91
reliability
consistency or repeatability of a measure or observation
92
test-retest reliability
method for evaluating the consistency of a test's results over time by administering the same test to a group of people twice; the degree to which a test continues to rank order scores in a stable manner over time
- measure something at least twice
- need a high correlation coefficient (r)
93
interrater reliability
two or more independent observers will come up with consistent (or very similar) findings
- look for a high correlation coefficient (r)
- most relevant for observational measures
94
internal reliability
applies to measures that combine multiple items; a measure of how consistently the items on a test measure the same concept or construct
95
type of internal reliability
split-half reliability: randomly split the data collected in half and compare the results to see if they are similar
- one administration; correlation between the two halves of items
- an odd-even split is the preferred method
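A minimal Python sketch of the odd-even split described above, using a small hypothetical array of responses (rows = respondents, columns = items); a high correlation between the two half-scores suggests good internal reliability.

```python
import numpy as np

# Hypothetical item responses: rows = respondents, columns = test items (1-5 ratings)
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 4],
    [5, 5, 5, 4, 5, 5],
    [1, 2, 1, 2, 2, 1],
])

odd_half = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5
even_half = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6

r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"split-half correlation: r = {r:.2f}")
```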
96
correlation coefficient
a statistical measure of the strength of a linear relationship between two variables
- ranges from -1 to +1
- a correlation coefficient of 1 describes a perfect positive correlation
- a correlation coefficient of -1 describes a perfect negative, or inverse, correlation
- test-retest reliability: r = 0.5 or higher
- interrater reliability: r = 0.7 or higher
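As an illustration of the thresholds above, this Python sketch (hypothetical scores; SciPy assumed available) correlates two administrations of the same test and checks the result against the r = 0.5 test-retest rule of thumb.

```python
from scipy.stats import pearsonr

# Hypothetical scores from the same six people tested twice, a month apart
time1 = [12, 18, 15, 22, 9, 17]
time2 = [13, 17, 14, 21, 10, 18]

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")
print("acceptable (r >= 0.5)" if r >= 0.5 else "too low for test-retest reliability")
```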
97
types of validity
construct validity, face validity, content validity, criterion-related validity
98
construct validity
evaluates whether a measurement tool accurately represents the concept it's intended to measure
- applies to abstract constructs
- important when a construct is not directly observable
- evaluates the weight of the evidence (not simply whether a measure is valid or not valid)
99
face validity
- A test appears to measure what it’s supposed to measure
- Whether a measure seems relevant and appropriate for what it’s assessing on the surface
- To have face validity, your measure should be: clearly relevant for what it’s measuring, appropriate for the participants, and adequate for its purpose
100
content validity
- Evaluates how well an instrument (like a test) covers all relevant parts of the construct it aims to measure
- Whether a study fully examines the construct it is designed to measure
101
criterion-related validity
- Does it correlate with key behaviors?
- Measures how well one measure predicts an outcome for another measure
- A test has criterion validity if it is useful for predicting performance or behavior in another situation (past, present, or future)
102
convergent validity
how closely a test is related to other tests that measure the same (or similar) constructs
103
discriminant validity
the degree to which a test or measure diverges from (i.e., does not correlate with) another measure whose underlying construct is conceptually unrelated to it
104
relationship between reliability and validity
reliability ≠ validity - A test can be reliable without being valid; however, a test cannot be valid unless it is reliable
105
types of survey questions
- open-ended questions
- forced-choice questions
- Likert scale
- semantic differential format
106
drawbacks of open-ended questions
- Time-consuming to answer
- Lower response rates
- Difficult to compare
- A lot of noise/irrelevant information
- Time-consuming and difficult to analyze
107
forced-choice questions
e.g., yes/no questions
108
Likert scale
from "strongly disagree" to "strongly agree"
109
semantic differential format
- other adjectives for a scale
- e.g., rateMyProfessor (1 = awful, 5 = awesome)
110
how-to's for writing well-worded questions
- Avoid leading questions
- Try to be as neutral in the wording and in the lead-in to the question as you possibly can
- Ask questions that are clear and specific and that each respondent will be able to answer
- Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive), and the response categories should not overlap (i.e., response options should be mutually exclusive)
- Use simple and concrete language that is more easily understood by respondents
- Avoid using negatively worded questions
111
double-barreled questions
Questions that ask respondents to evaluate more than one concept → poor construct validity
112
acquiescence
tendency to select a positive response option or indicate a positive connotation disproportionately more frequently
- e.g., Do you think the government should increase funding for education?
113
fence sitting
selecting the middle choice
114
faking good
the tendency of survey respondents to give answers to questions that they believe will make them look good to others - social desirability bias
115
faking bad
when someone intentionally tries to appear worse than they actually are
116
how to reduce acquiescence bias
- Use reverse-worded items
- Avoid leading or loaded questions
117
how to avoid fence sitting
only provide two options
118
how to avoid social desirability bias (faking good)
- Include items that can identify respondents who show the social desirability bias (ex. “My table manners at home are as good as when I eat out in a restaurant”; the honest answer is pretty clear)
- Ask people’s friends or relatives
- Use computerized measures to evaluate people’s implicit opinions about sensitive topics
119
self-reporting "more than they can know"
People may not be able to accurately explain why they acted as they did
120
self-reporting memories of events
Vividness and confidence are unrelated to how accurate people’s memories are
121
population (N)
the entire group that you want to draw conclusions about
122
sample (n)
a smaller set, taken from that population
123
if the sample can generalize to the population...
good external validity
124
externally valid samples
- Unbiased sample
- Probability sample
- Random sample
- Representative sample
125
unknown external validity samples
- Biased sample
- Nonprobability sample
- Nonrandom sample
- Unrepresentative sample
126
convenience sampling
- Only those who can be contacted easily are sampled
- Can lead to nonresponse bias when done online
127
types of samples
- simple random sampling
- systematic sampling
- cluster sampling
- multistage sampling
- stratified random sampling
- oversampling
128
simple random sampling
- Most basic
- Each member of the population has an equal chance of being selected
- Uses methods like lotteries
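A minimal Python sketch of simple random sampling, assuming a hypothetical sampling frame of 1,000 member IDs; `random.sample` draws without replacement, giving every member an equal chance of selection.

```python
import random

population = list(range(1, 1001))          # hypothetical sampling frame, N = 1000
sample = random.sample(population, k=100)  # simple random sample, n = 100
```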
129
systematic sampling
- Selecting members of the population at a regular interval
- Randomly pick a starting point, then select a member at each subsequent interval
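A minimal Python sketch of systematic sampling under the same hypothetical frame: compute the interval k = N / n, pick a random start within the first interval, then take every k-th member after that.

```python
import random

population = list(range(1, 1001))   # hypothetical sampling frame, N = 1000
n = 100
k = len(population) // n            # sampling interval: every 10th member

start = random.randrange(k)         # random starting point within the first interval
sample = population[start::k]       # then every k-th member from there (n = 100 total)
```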
130
cluster sampling
- Randomly select some of the clusters
- Select all the members of those clusters
131
multistage sampling
- Randomly select some of the clusters
- Randomly select some members of those clusters
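A minimal Python sketch contrasting the cluster and multistage cards above, with hypothetical classrooms standing in for clusters: cluster sampling keeps everyone in the randomly chosen rooms, while multistage sampling randomly samples students within those rooms.

```python
import random

# Hypothetical clusters: 20 classrooms of 30 student IDs each
classrooms = {f"room_{i}": list(range(i * 30, i * 30 + 30)) for i in range(20)}

chosen_rooms = random.sample(list(classrooms), k=4)  # stage 1: randomly select clusters

# Cluster sampling: take every member of the chosen clusters
cluster_sample = [s for room in chosen_rooms for s in classrooms[room]]

# Multistage sampling: randomly select members within each chosen cluster
multistage_sample = [s for room in chosen_rooms
                     for s in random.sample(classrooms[room], k=10)]
```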
132
stratified random sampling
divide the population into demographic categories (strata), then randomly sample from within each category
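A minimal Python sketch of stratified random sampling with a hypothetical population of (student_id, class_year) records: group members into strata by the demographic category, then draw a random sample from each stratum.

```python
import random
from collections import defaultdict

# Hypothetical population: (student_id, class_year) records
years = ["freshman", "sophomore", "junior", "senior"]
population = [(i, random.choice(years)) for i in range(1000)]

# Group members into strata by the chosen demographic category
strata = defaultdict(list)
for student_id, year in population:
    strata[year].append(student_id)

# Randomly sample within every stratum so each category is represented
sample = []
for members in strata.values():
    sample.extend(random.sample(members, k=25))
```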
133
oversampling
- Intentionally oversample a particular demographic group
- Make statistical adjustments to the final result so that the correct weight is given to the group
134
random sampling
enhances external validity
135
random assignment
enhances internal validity
136
purposive sampling
Intentionally selecting participants based on their characteristics, knowledge, experiences, or some other criteria
137
snowball sampling
- A variation of purposive sampling
- Currently enrolled research participants help recruit future subjects for a study
- Often used to find rare individuals
138
quota sampling
Same as stratified sampling but without random sampling
- Example: out of a sample of 1000, purposely selecting 200 Whites, 100 Asians, 100 African Americans, 100 Latinx, 100 international students, etc.
139
larger samples are more representative than smaller samples only when...
they are selected probabilistically