Research Methods Exam Flashcards

1
Q

How do we generally investigate causal claims?

A

Experiments

2
Q

What are the 3 criteria for establishing causation?

A
  1. Covariance
  2. Temporal Precedence
  3. Internal Validity
3
Q

Explain Covariance

A

An association between A and B: the two variables change together (on its own, this does not tell us whether A causes B or B causes A)

4
Q

Explain Temporal Precedence

A

directionality; establishing which came first: did A cause B, or did B cause A?

5
Q

Explain Internal Validity

A

is there a third variable, independently associated with both A and B, that could explain the relationship instead?

6
Q

What is an experiment?

A

the manipulation of one variable and the measurement of another

7
Q

independent variables can have multiple…

A

conditions

8
Q

what are the two types of variables?

A
  1. independent
  2. dependent
9
Q

explain independent variables

A

manipulated (ex. note taking methods)

10
Q

explain dependent variables

A

measured (ex. academic performance)

11
Q

what is a control group?

A

receives no treatment; placebo group

12
Q

what is a treatment group?

A

receives the treatment

13
Q

what things should you keep in mind when choosing variables and methodology?

A
  1. replicability
  2. generalizability
  3. ability to make causal claims
  4. outside interfering variables
14
Q

why do we need experiments?

A

allows us to draw conclusions about causation

15
Q

what is the easiest criterion to establish?

A

covariance

16
Q

what happens when results are explained by systematic differences (confounds)?

A

we cannot infer causation

17
Q

what are confounds?

A

alternative explanation for the change in the dependent variable

18
Q

what are the two types of confounds?

A
  1. design confounds
  2. selection effects
19
Q

what are design confounds?

A

mistakes made when designing the experiment, so that another variable ends up varying systematically along with the independent variable

20
Q

example of a confound within this claim: alcohol use increases your risk of lung cancer

A

individuals who use alcohol may be more likely to also smoke
confound: smoking

21
Q

what is a selection effect?

A

errors in how participants are selected for or assigned to groups

22
Q

unsystematic variability is not the same as…

A

a confound

23
Q

what's a secondary way that selection bias occurs?

A

when people volunteer for a study

24
Q

how do you prevent confounds?

A

make sure researchers treat all participants the same, and check that no parts of the experiment vary systematically with the independent variable

25
Q

how do you prevent selection effects?

A

random assignment

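A minimal sketch of random assignment in Python, using made-up participant IDs; the helper name assign_randomly is invented purely for illustration.

    # Sketch: randomly assign participants so pre-existing differences
    # spread evenly across conditions (illustrative only).
    import random

    def assign_randomly(participants, conditions=("control", "treatment")):
        shuffled = participants[:]
        random.shuffle(shuffled)
        groups = {condition: [] for condition in conditions}
        for i, person in enumerate(shuffled):
            groups[conditions[i % len(conditions)]].append(person)
        return groups

    # hypothetical participant IDs
    print(assign_randomly([f"P{n}" for n in range(1, 11)]))
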
26
Q

what is a way to make sure groups are equal?

A

matched groups

27
Q

why can dealing with confounds be difficult?

A
  1. time
  2. measuring more variables
  3. resources
  4. more complicated study designs
  5. lots of variables
  6. missed variables
28
Q

what are the three types of validity?

A
  1. statistical
  2. construct
  3. external
29
Q

explain statistical validity

A

statistical significance resulting from a statistical test (t-test, ANOVA, effect size)

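As a rough sketch of the kind of test this card refers to, here is an independent-samples t-test on invented scores in Python; scipy's ttest_ind supplies the t statistic and p-value, and Cohen's d is computed by hand as one common effect-size measure.

    # Sketch of a statistical-validity check: t-test plus an effect size,
    # using invented scores for a control and a treatment group.
    import numpy as np
    from scipy import stats

    control = np.array([72, 75, 68, 70, 74, 71, 69, 73])
    treatment = np.array([78, 81, 76, 80, 79, 77, 82, 75])

    t_stat, p_value = stats.ttest_ind(treatment, control)

    # Cohen's d: mean difference divided by the pooled standard deviation
    pooled_sd = np.sqrt((control.var(ddof=1) + treatment.var(ddof=1)) / 2)
    cohens_d = (treatment.mean() - control.mean()) / pooled_sd

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
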
30
Q

explain construct validity

A

operationalization of variables

31
Q

explain external validity

A

whether the results generalize beyond the study; depends on information about the participants and their characteristics

32
Q

what does construct validity ask within the note taking experiment?

A
  1. did the study do a good job of measuring academic performance?
  2. how well did they manipulate the note taking condition?
33
Q

explain potential design confounds in the pasta experiment

A
  1. individuals in the large bowl group had tastier pasta
  2. what if the medium and large bowl groups ate at different times?
34
Q

explain potential selection effects in the pasta experiment

A

individuals who received the large bowl love pasta, while those who received the medium bowl don't

35
Q

what is the difference between qualitative research and quantitative research?

A

quantitative: tests hypotheses or theories
qualitative: explores ideas to formulate hypotheses and theories

36
Q

how is qualitative research analyzed?

A

summarizing, categorizing, interpreting

37
Q

how is quantitative research analyzed?

A

math and statistical analysis

38
Q

how is qualitative research expressed?

A

words

39
Q

how is quantitative research expressed?

A

numbers, graphs, tables, fewer words

40
Q

how is qualitative research sampled?

A

few responses

41
Q

how is quantitative research sampled?

A

many responses

42
Q

how are qualitative questions formatted?

A

open-ended

43
Q

how are quantitative questions formatted?

A

close-ended or multiple choice

44
Q

when do you use qualitative research?

A

to understand something

45
Q

when do you use quantitative research?

A

to confirm or test something

46
Q

how do you analyze qualitative data?

A
  1. prepare and organize data
  2. review and explore your data
  3. identify recurring themes
  4. develop a data coding system
  5. assign codes to the data (labelling)
47
Q

what is a mixed method of research?

A

combines quantitative and qualitative research to answer research questions

48
Q

what do results look like in humanities research?

A

products of research are predominantly intellectual and intangible; results contribute to an academic discipline

49
Q

what are the 6 reasons research ethics matter?

A
  1. protecting participant safety
  2. maintaining scientific integrity
  3. upholding human rights and dignity
  4. ensuring social responsibility
  5. building trust in research and institutions
  6. complying with legal and regulatory requirements
50
Q

how do we know a manipulation works?

A

manipulation check; an extra measure that checks whether the manipulation of the independent variable actually worked

51
Q

what is a null effect?

A

no significant difference or relationship between the things you're studying; the independent variable did not affect the dependent variable

52
Q

what is the file drawer problem?

A

studies that don't find a difference stay in the file drawer (unpublished); because of this, no one knows that someone has already conducted a particular study

53
Q

what are the 6 benefits of pre-registering a study?

A
  1. demonstrated credibility
  2. lasting reproducibility
  3. constructive review
  4. increased likelihood of acceptance
  5. a more complete scientific record
  6. keeps your options open
54
Q

peer review before results are known to...

A

align scientific values and practices

55
Q

when does stage 1 of peer review occur?

A

between designing the study and collecting and analyzing the data

56
Q

when does stage 2 of peer review occur?

A

between writing the report and the publishing of the report

57
Q

how could we miss an effect?

A

obscuring factors

58
Q

what are the 3 forms of obscuring factors?

A
  1. weak manipulation of the IV
  2. insensitive measure of the DV
  3. floor and ceiling effects of the IV or DV
59
Q

what is weak manipulation?

A

the manipulation does not create strong or clear changes or differences between groups

60
Q

what is insensitive measurement?

A

the research tools are not accurate or precise enough to detect differences

61
Q

what are floor and ceiling effects?

A

results cluster at the very low (floor) or very high (ceiling) end of the scale, which can hide differences between groups

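As a rough numeric illustration (with invented exam scores), a ceiling effect can be simulated in Python by capping a measure: both groups pile up at the maximum, and the observed difference shrinks even though a real difference exists.

    # Sketch of a ceiling effect: a too-easy test compresses scores at the top.
    import numpy as np

    rng = np.random.default_rng(7)

    # true underlying ability differs between the groups...
    control_ability = rng.normal(95, 10, 200)
    treatment_ability = rng.normal(105, 10, 200)

    # ...but the exam caps scores at 100, so both groups bunch at the ceiling
    control_scores = np.clip(control_ability, 0, 100)
    treatment_scores = np.clip(treatment_ability, 0, 100)

    print("true difference:", treatment_ability.mean() - control_ability.mean())
    print("observed difference:", treatment_scores.mean() - control_scores.mean())
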
62
Q

what are control variables?

A

factors that a researcher keeps constant or under control during an experiment; helps to rule out other possible explanations

63
Q

what is the difference between systematic and unsystematic variables?

A

systematic variables are related to the study's purpose and are controlled or accounted for; unsystematic variables are not directly linked to the study and are random or uncontrolled

64
Q

what are obscuring factors?

A

things that make it difficult to see or understand the true relationship between variables

65
Q

what is a pilot study?

A

a small-scale preliminary investigation conducted before the main study; helps researchers identify potential problems and refine their approach

66
Q

what is P-hacking?

A

intentionally fishing for a significant p-value (below 0.05)

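A small simulation sketch (invented numbers, no real effect built in) showing why fishing across many outcome measures tends to produce a p < .05 result by chance alone.

    # Sketch: two groups drawn from the SAME population (no true effect),
    # but the researcher tests 20 different outcome measures and keeps
    # any that reach p < .05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    studies_with_a_false_positive = 0
    n_simulated_studies = 1000

    for _ in range(n_simulated_studies):
        found_significant = False
        for _ in range(20):  # 20 unrelated outcome measures
            a = rng.normal(0, 1, 30)
            b = rng.normal(0, 1, 30)  # same distribution, so the null is true
            _, p = stats.ttest_ind(a, b)
            if p < 0.05:
                found_significant = True
        studies_with_a_false_positive += found_significant

    # with 20 tests per study, well over half of these null studies
    # "find" at least one significant result by chance
    print(studies_with_a_false_positive / n_simulated_studies)
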
67
Q

what is HARKing?

A

Hypothesizing After the Results are Known; presenting a hypothesis formed after seeing the results as if it had been made in advance
