Validity in designs Flashcards
What are the different types of validity in experimental research?
- Statistical conclusion validity
- Construct validity
- External validity
- Internal validity
What is statistical conclusion validity?
- The degree to which conclusions about the relationship between variables, drawn from the data, are correct.
- You need to run the correct analysis without violating its assumptions.
- Determine your hypotheses and primary outcome variables before the data are collected.
- Report all analyses, even the ones you don’t like.
- Don’t go fishing through the data for significant results (this inflates the risk of a type I error).
- Make sure your measures are reliable, so that if there is an effect you can find it (otherwise you run the risk of a type II error)
What is a type I error in terms of statistical conclusion validity?
- A false positive: concluding there is an effect when there is not. The more analyses you run, the higher the chance of a false positive.
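- A rough illustration (assuming independent tests, each at α = .05): running 10 separate analyses gives a chance of at least one false positive of 1 − (1 − 0.05)^10 ≈ 0.40, i.e. about 40%.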
What is a type II error in terms of statistical conclusion validity?
A false negative: failing to detect an effect that is really there, for example because measures are unreliable or the sample is too small.
What is construct validity?
- The extent to which a measure of a construct is empirically related to other measures with which it is theoretically associated.
- Does a test truly measure what it purports to measure?
- Are your operational definitions adequate?
What is external validity?
- The degree to which the findings/conclusions can be generalized beyond the confines of the design and study setting.
- Generalisability to: other populations, other environments, other times.
What is internal validity?
- The degree to which a study is methodologically sound and confound-free.
- Ensure all third (confounding) variables are controlled.
- The degree to which we can be sure that the dependent variable changed as a result of the independent variable.
- Concerns the research design and validity of the conclusions.
What are the 7 threats to internal validity?
- History
- Maturation
- Testing
- Instrumentation
- Selection bias
- Attrition (mortality)
- Regression to the mean.
What is discriminant evidence?
Evidence that measures which are not theoretically related are also not empirically related. Convergent evidence is the opposite: measures that are theoretically related are empirically related.
What are pre-post studies?
- The dependent variable is measured; there is an intervention; the dependent variable is measured again.
What is the main drawback of pre-post studies?
They are low in internal validity, because changes between pre-test and post-test may be caused by factors other than the intervention. This is why we need control groups and placebos.
What is History and how do we counteract it as a threat to internal validity?
- History is any event between the pre-test and post-test that is outside the researcher’s control and affects the entire sample.
- Counteract it by having a control group that provides a point of comparison.
What is Maturation and how do we control it?
- Natural changes in participants (for better or worse) over the course of a study.
- Particularly a problem with younger people/children.
- Counteract it by having a control group that provides a point of comparison, using random assignment, and having a large sample.
What is testing and how do we counteract it?
- When the pre-test (baseline measurement) itself changes participants’ behaviour or responses at post-test.
- Counteract by using a control group as a point of comparison.
What is instrumentation and how do we control for it?
- This happens when measures are not operationalised accurately or reliably, or when the measurement instrument changes between pre-test and post-test.
- Counteract by ensuring operationalisation is accurate and reliable, and by using random assignment and a large sample (both groups then experience the same instrumentation problems, even if the measure itself remains unreliable).