Experimental Exam #2 Flashcards

1
Q

Quasi Experiments

A
  • Have an IV and a DV
  • But cannot randomly assign participants to levels of the IV
  • Loss of control over the experiment
2
Q

Independent groups: non-equivalent control-groups design

A

Have a comparison group but no random assignment to a condition
Ex: Olympic medal level (gold, silver, bronze) and mental health

3
Q

Repeated measures: interrupted time-series design

A

Participants/groups are measured multiple times before, during, and after an “interruption” (the event of interest)
Ex: COVID mask mandates

4
Q

Combined: non-equivalent control-groups interrupted time-series design

A

Combination of the two designs above (non-equivalent control groups + interrupted time series)

5
Q

Validity and Quasi-experiments

A

1) Internal validity: you cannot control for confounds → suffers a lot
2) Statistical validity: pretty good, same as experiments
3) Construct validity: can be really good; same as or even better than experiments
4) External validity: can be really good; same as or even better than experiments

6
Q

Quasi experiments and causal claims

A

It is very hard to make a causal claim with quasi-experiments; if several quasi-experiments all show the same result, you can be more confident in making a causal claim

7
Q

Survey research

A

Self-report data:
- Asking the person directly: the world’s best expert on the many aspects of your life is probably you
- The most common type of data collected → may be overused
- May not hold for young children or some older adults

8
Q

Question Formats for self-report

A

1) Likert
2) Open-ended/free response
3) Forced choice items
4) Semantic differential items

9
Q

Likert

A

Ex: rate your agreement with the following statement “I am creative”
- 1 = strongly disagree, 7 = strongly agree

10
Q

Open-ended/free response

A

Ex: “Do you think that you are creative?”
Hard to do math on these kinds of questions → often have to code words into numbers (e.g., how often positive vs. negative words were used)
Qualitative research → takes effort to make it quantitative

11
Q

Forced choice items

A

Ex: have to pick one option (force into yes/no)

Pros: avoids fence sitting → have to make a decision
Cons: loses nuance (range of responses)

12
Q

Semantic differential item

A

Use opposite adjectives as the scale endpoints
Ex: rate your creativity:
1 = uncreative, 7 = creative

13
Q

problems with questions

A

1) Leading questions
2) Limited range of response options
3) Double-barreled questions
4) Negatively-worded/confusingly worded questions

14
Q

Leading questions

A

Wording leads participants to give the answers you want

15
Q

Limited range of response options

A

Forcing people to say yes, no, or I don’t know → there is a range of support and feeling within each option (no room for nuance)

16
Q

Double-barreled questions

A

Touch on more than one issue, but respondents can only give one answer

17
Q

Response sets

A

1) Fence sitting (middle for every construct)
2) Acquiescence bias (yea-saying)
3) Social desirability

18
Q

Statistical analysis and response types

A

ANOVA: requires categorical IVs and continuous DVs
- Can NOT use a forced-choice question as a DV and do an ANOVA
- Can use a Likert item as a DV and do an ANOVA (sketched below)
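A minimal sketch of that distinction, assuming hypothetical data and Python's scipy (neither is mentioned in the card): a categorical IV with a Likert item treated as the continuous DV.

```python
# Hypothetical one-way ANOVA: categorical IV (condition) and a 7-point
# Likert item ("I am creative", 1-7) as the continuous DV.
from scipy import stats

control   = [4, 5, 3, 4, 5, 4]
praise    = [6, 5, 7, 6, 6, 5]
criticism = [3, 2, 4, 3, 2, 3]

f_stat, p_value = stats.f_oneway(control, praise, criticism)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A forced-choice (yes/no) response is categorical, so it cannot serve as
# the DV of an ANOVA; a chi-square test would be the usual alternative.
```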

19
Q

Experiment and prioritized validity

A

Internal validity

20
Q

frequency claim and prioritized validity

A

external validity

21
Q

Association claim and prioritized validity

A

construct validity

22
Q

bivariate correlation

A

How two variables (usually scale/continuous, but not necessarily) are linearly related
- Has a standardized scale (always between -1 and +1)
- Coefficient “r” (computed in the sketch below)
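A minimal sketch with made-up numbers, assuming Python's scipy: `pearsonr` returns the standardized coefficient r along with a p-value.

```python
# Hypothetical data: hours of sleep and a mood rating for eight people.
from scipy import stats

sleep_hours = [5.0, 6.5, 7.0, 8.0, 6.0, 7.5, 4.5, 8.5]
mood_rating = [3, 4, 5, 6, 4, 6, 2, 7]

r, p = stats.pearsonr(sleep_hours, mood_rating)
print(f"r = {r:.2f}")  # standardized scale: -1 <= r <= +1
```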

23
Q

Descriptive correlations

A

Descriptive: here’s how two things trend together
Effect size: quantifies the strength of the correlation

24
Q

Inferential correlations

A

Inferential: can make a statement about a population
- To do this, ask: is this correlation coefficient statistically different from a correlation of 0?
- You can have weak but significant correlations (significance mostly tells you whether your N is large enough)
- If you have a huge N, nearly everything is significant (simulated below)
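A minimal simulation of the N point, assuming Python's numpy/scipy: the same weak underlying correlation (about r = .10) is non-significant with a small sample but highly significant with a huge one.

```python
# Simulated data only: a weak true correlation tested at two sample sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def weak_corr_sample(n, true_r=0.10):
    x = rng.normal(size=n)
    y = true_r * x + np.sqrt(1 - true_r**2) * rng.normal(size=n)
    return x, y

for n in (30, 100_000):
    x, y = weak_corr_sample(n)
    r, p = stats.pearsonr(x, y)
    print(f"N = {n:>6}: r = {r:.3f}, p = {p:.2g}")
# Small N: weak r, not significant.  Huge N: similarly weak r, tiny p.
```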

25
Q

Correlations and manipulations

A

Nothing is manipulated → no causal inference drawn
Just seeing how things are related

26
Q

What can influence correlations

A

1) Outliers
2) Restriction of range
3) Nonlinearity
(the first two are sketched below)
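A minimal simulation of the first two influences, assuming Python's numpy/scipy and invented data: a single extreme point can inflate r, and keeping only part of one variable's range shrinks it.

```python
# Simulated data: how an outlier and restriction of range change r.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.9, size=200)   # moderate true relationship

r_full, _ = stats.pearsonr(x, y)

# 1) Outlier: add one extreme point
r_outlier, _ = stats.pearsonr(np.append(x, 10.0), np.append(y, 10.0))

# 2) Restriction of range: keep only cases above the median of x
keep = x > np.median(x)
r_restricted, _ = stats.pearsonr(x[keep], y[keep])

print(f"full sample r  = {r_full:.2f}")
print(f"with outlier r = {r_outlier:.2f}")    # typically larger
print(f"restricted r   = {r_restricted:.2f}") # typically smaller
```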

27
Q

subjective validities

A

1) Face validity: does it look like a good measure? Ask experts to give their perception → think smell test
2) Content validity: does it include all the important components of the construct?

28
Q

Empirical validities

A

1) Criterion validity: measure predicts some real-world outcome
2) Convergent validity: measure is more strongly associated with similar measures
3) Divergent (discriminant) validity: measure is NOT associated with dissimilar measures

29
Q

Reliability and validity

A

Reliability is necessary but not sufficient for validity

30
Q

correlations and causal claims

A

For correlations we have covariance but NOT temporal precedence → so no causal claims
- Directionality problem: we cannot tell which variable came first
- Internal validity problem: we do not rule out other (third) variables

31
Q

Longitudinal designs

A

Measure the same people at multiple time points

32
Q

Cross-lag longitudinal study:

A

1) Auto correlations
2) Cross-lagged correlations
3) Cross-sectional correlations

33
Q

Auto correlations

A

The correlation of each variable with itself over time
- Tells you about stability (interindividual stability)

34
Q

Cross-lagged correlations

A

Earlier measure of one variable correlated with a later measure of a different variable
- How people change over time → can help establish temporal precedence

35
Q

cross-sectional correlations

A

Two variables measured at the same time are correlated with each other (all three correlation types are computed in the sketch below)
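A minimal sketch tying cards 32-35 together, assuming hypothetical two-wave data and Python's pandas (the variable names are invented for illustration).

```python
# Hypothetical two-wave data: TV exposure and aggression at time 1 and time 2.
import pandas as pd

df = pd.DataFrame({
    "tv_t1":  [2, 5, 3, 6, 1, 4, 5, 2],
    "agg_t1": [1, 4, 3, 5, 2, 3, 4, 2],
    "tv_t2":  [3, 5, 2, 6, 2, 4, 6, 1],
    "agg_t2": [2, 5, 2, 6, 1, 4, 5, 2],
})

# Autocorrelations: each variable with itself over time (stability)
print("auto:", df["tv_t1"].corr(df["tv_t2"]), df["agg_t1"].corr(df["agg_t2"]))

# Cross-lagged: earlier measure of one variable with the later measure of the other
print("cross-lagged:", df["tv_t1"].corr(df["agg_t2"]), df["agg_t1"].corr(df["tv_t2"]))

# Cross-sectional: the two variables measured at the same time
print("cross-sectional:", df["tv_t1"].corr(df["agg_t1"]), df["tv_t2"].corr(df["agg_t2"]))
```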

36
Q

Pattern and parsimony

A

A pattern of evidence that is plausible, coherent, consistent, and strong
The explanation that is most likely and simplest is probably true

37
Q

Multiple regression

A
  • An expansion of the correlation
  • Correlation between several predictor variables (IVs) and a single criterion variable (DV)
  • Not r but beta → interpreted like a correlation coefficient, with the other predictors held constant (sketched below)
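A minimal sketch of where betas come from, assuming hypothetical data and plain numpy (the predictors and criterion are invented): z-score everything, fit least squares, and each coefficient is a standardized beta.

```python
# Hypothetical data: predicting exam score from study hours and sleep hours.
import numpy as np

hours_study = np.array([2, 4, 6, 8, 3, 5, 7, 9, 1, 6], dtype=float)
hours_sleep = np.array([6, 7, 5, 8, 6, 7, 8, 7, 5, 6], dtype=float)
exam_score  = np.array([60, 70, 75, 90, 62, 74, 85, 92, 55, 78], dtype=float)

def zscore(v):
    return (v - v.mean()) / v.std()

# Standardize predictors and criterion; least-squares coefficients are betas.
X = np.column_stack([zscore(hours_study), zscore(hours_sleep)])
y = zscore(exam_score)
betas, *_ = np.linalg.lstsq(X, y, rcond=None)

print(dict(zip(["beta_study", "beta_sleep"], betas.round(2))))
```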
38
Q

Moderation

A

When, for whom, or under what conditions two variables are related
A is related to B for one type of C but not for the other type

Ex: work frequency and reading time are related only for younger adults and not for older adults (see the interaction sketch below)
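One common way to test a moderator is an interaction term in a regression; here is a minimal simulated sketch in numpy (the variables echo the card's example, but the data are invented).

```python
# Simulated moderation: the frequency -> reading-time relation exists only
# for the younger group, so the interaction term picks up the difference.
import numpy as np

rng = np.random.default_rng(2)
n = 200
frequency = rng.normal(size=n)
older = rng.integers(0, 2, size=n)                 # 0 = younger, 1 = older
reading_time = -0.6 * frequency * (1 - older) + rng.normal(scale=0.8, size=n)

# Predictors: frequency, group, and their product (the interaction term)
X = np.column_stack([np.ones(n), frequency, older, frequency * older])
coefs, *_ = np.linalg.lstsq(X, reading_time, rcond=None)
labels = ["intercept", "frequency", "older", "frequency x older"]
print(dict(zip(labels, coefs.round(2))))
# A sizable "frequency x older" coefficient means the frequency slope differs
# between age groups, i.e., age moderates the relation.
```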

39
Q

Mediation

A

Why are two variables related?

A is related to B because A leads to C and C leads to B (regression sketch below)
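A minimal simulated sketch of the usual regression logic for mediation (the data and the regression approach are assumptions, not stated on the card): if A affects B only through C, A's coefficient shrinks once C is added as a predictor.

```python
# Simulated mediation chain: A -> C -> B.
import numpy as np

rng = np.random.default_rng(3)
n = 500
a = rng.normal(size=n)
c = 0.7 * a + rng.normal(scale=0.7, size=n)   # A leads to C
b = 0.7 * c + rng.normal(scale=0.7, size=n)   # C leads to B

def slopes(predictors, y):
    X = np.column_stack([np.ones(len(y)), *predictors])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1:]

print("B on A alone:", slopes([a], b).round(2))     # sizable coefficient for A
print("B on A and C:", slopes([a, c], b).round(2))  # A's coefficient drops toward 0
```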

40
Q

Third variable problem

A

two variables are correlated but only because they are both linked to a third variable

41
Q

Mediators and third variables similarities

A

1) Both involve multivariate research designs
2) Both can be detected using multiple regression

42
Q

Mediators and third variables differences

A

1) Third variables are external to the correlation (problematic)
2) Mediators are internal to the causal variable (not problematic; explains relationship)

43
Q

IRB

A

Institutional Review Board
- Required at every institution that receives federal funds

44
Q

Three ethical principles of the belmont report

A

1) Beneficence
2) Autonomy
3) Justice

45
Q

Beneficence

A

maximize benefits and minimize risk

46
Q

Autonomy

A

Respect for persons
- Informed consent
- Need to provide compensation
- Describe any foreseeable discomforts or risks and how they will be addressed
- Need to describe what happens if you need to drop out of the study
- Written in simple language

47
Q

Justice

A

Ensure that equity is not violated when selecting participants
- Decisions to include or exclude must be made on scientific grounds

48
Q

APA ethics code

A

Principle A: Beneficence and Nonmaleficence
- Maximize benefits and minimize risks
Principle B: Fidelity and Responsibility
- Be responsible and professional in interactions with people
Principle C: Integrity
- Don’t lie, cheat, steal, commit fraud, etc.
Principle D: Justice
- Fairness and equity
Principle E: Respect for People’s Rights and Dignity
- Respect for persons (informed consent)

49
Q

Replication

A

Attempt to repeat the result of an experiment by repeating an original study
- Same methods, new data
- Ensures that the initial findings are not a case of “discovering” an effect that is not real (type I error) → false positive

50
Q

Generalizability

A

Fundamental results from a study are reproduced across a variety of situations

Why: generalize across contexts, test the truth of the underlying hypothesis, discover boundary conditions

51
Q

P-hacking

A

Collecting data or analyzing your data in different ways until non-significant results become significant
- Increases probability of type I error (false positive; simulated below)
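A minimal simulation of that point, assuming Python's numpy/scipy: with no true effect, a single planned test keeps false positives near 5%, but taking the best of several analysis variants pushes the rate higher.

```python
# Simulation: the null is true, yet trying several analyses and keeping the
# smallest p-value inflates the false-positive (type I error) rate.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_sims, n = 2000, 40
honest_hits = hacked_hits = 0

for _ in range(n_sims):
    group_a = rng.normal(size=n)
    group_b = rng.normal(size=n)          # same distribution: no real effect
    p_planned = stats.ttest_ind(group_a, group_b).pvalue
    # "Hacked" variants: drop the last 10 cases, or trim each group's extremes
    p_drop = stats.ttest_ind(group_a[:-10], group_b[:-10]).pvalue
    p_trim = stats.ttest_ind(np.sort(group_a)[2:-2], np.sort(group_b)[2:-2]).pvalue
    honest_hits += p_planned < .05
    hacked_hits += min(p_planned, p_drop, p_trim) < .05

print(f"planned-analysis false-positive rate: {honest_hits / n_sims:.3f}")  # about .05
print(f"p-hacked false-positive rate:         {hacked_hits / n_sims:.3f}")  # above .05
```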

52
Q

HARKing

A

Hypothesizing after results are known
- You analyze data and find a significant result (might be unexpected) and post-hoc come up with a hypothesis
- Increases probability of type I error (false positive)

53
Q

Cherry picking

A

Select/report only the data/findings that support your hypothesis and hide the rest
- Only reporting significant effects
- Increases the false-positive rate

54
Q

Fishing/data dredging

A

Look at a ton of different combinations of variables to find something significant
- Increases the type I error rate (false positives)

55
Q

Peer review cons

A

1) Not a paid position → takes so much time
2) You don’t know who your reviewer is
3) If your reviewer doesn’t have good training, you are left with weird/wrong comments

56
Q

Retractions

A

When a paper is pulled from a journal (removed from the scientific literature)
- Usually not because it is wrong but because it is unethical

57
Q

Replications

A
  • Within and across labs
  • Large-scale replication efforts did not start until 2011
  • Expensive
58
Q

Fraud detection

A

People examine published data sets to see if anything looks fishy

59
Q

Open science framework

A
  • Publish all of your data and statistics code
  • Encourages transparency in all aspects of scientific conduct
  • Provides a platform for this to be possible
60
Q

Adversarial collaborations

A

Work with people who have opposing views to conduct a study that will help figure out which view is correct

61
Q

Solutions to bad data practices

A

1) Open science framework
2) Preregistration: forces scientists to publicly outline their plans prior to starting work
- Introduction and methods written before collecting any data
- You can pivot, but you have to justify your reasons for doing so
3) Change the p-value standard
4) Publish your analyses so everyone knows what you did