Finding and fixing flaws in research Flashcards
Types of validity
What is validity?
The degree to which research conclusions match the real world, i.e. whether they are actually correct
What factors can lead us to misinterpret results i.e. what factors weaken the validity of a study?
Issues of experimental design that affect the DV and offer alternative explanations for an observed effect (sources of variance)
Every time a DV is measured there will be some degree of variance around a mean - this is known as ERROR VARIANCE and it is unsystematic, i.e. random and uncontrolled
We generally assume that all uncontrolled variance, i.e. any variance not caused by the IV, is error variance
What may be the case, however, regarding variance in a sample?
A particular manipulation may have caused the effect - if we can show this to be true, we can remove some of the error variance and explain overall variance as being partly caused by SYSTEMATIC VARIANCE relating to that manipulation
I.e. we can show that some of the variance among our DV measurements is NOT RANDOM
What would happen in a perfect experiment?
There would be no error variance and all DV variance would be due to the IV
But in real experiments we have systematic effects not only from the IV but also from confounds, plus unsystematic variance from error
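As an illustration of how total variance splits into a systematic part and error variance, here is a minimal Python sketch (not part of the original cards; the group means, spread and sample sizes are made up) that simulates a control and a treatment group and partitions the sums of squares the way an ANOVA would:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated DV scores: a control group and a treatment group.
# The treatment adds a systematic shift of 2 units; everything else is error variance.
control = rng.normal(loc=10, scale=3, size=50)
treatment = rng.normal(loc=12, scale=3, size=50)

scores = np.concatenate([control, treatment])
grand_mean = scores.mean()

# Systematic (between-group) variation: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (control, treatment))

# Error (within-group) variation: spread of scores around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in (control, treatment))

ss_total = ((scores - grand_mean) ** 2).sum()

print(f"Total SS      : {ss_total:.1f}")
print(f"Systematic SS : {ss_between:.1f}  (variance explained by the IV)")
print(f"Error SS      : {ss_within:.1f}  (unsystematic 'noise')")
```

The treatment shift shows up as between-group (systematic) variance, the rest is within-group error variance, and the two parts add up to the total.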
What is the issue with error variance?
Error variance is assumed to occur equally in each condition, so it doesn't produce a systematic bias in scores; but if there is a lot of it, subtle treatment effects can get buried underneath the distorting "noise", making it look as though the treatment had no effect
What is meant by threats to validity?
Limitations on the interpretation of our results - any influence on variables that might provide alternative explanations for an observed effect
What is meant by statistical validity?
Extent to which conclusions drawn from a statistical test are accurate and reliable
Are the results due to chance (e.g. a fluke large difference between samples)? Has some statistical error been made, e.g. use of the wrong test, incorrect data entry, limited power, inaccurate effect size estimation? Or is there genuinely a cause-effect relationship between the IV and the DV?
What two types of erroneous conclusion can be drawn in the context of statistical validity?
A type 1 error - concluding there is an effect when there isn't one (the observed difference was genuinely just a fluke)
A type 2 error - concluding there is no effect when in reality there is one
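To make the two error types concrete, here is a hedged Python sketch (the sample size, effect size and alpha level are hypothetical) that simulates many experiments and counts how often each mistake occurs with an independent-samples t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_sims, n = 2000, 30

type1 = 0  # effect "found" when none exists
type2 = 0  # real effect missed

for _ in range(n_sims):
    # No true effect: both groups drawn from the same population.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        type1 += 1

    # True effect of 0.4 SD, partly hidden by error variance.
    c, d = rng.normal(0, 1, n), rng.normal(0.4, 1, n)
    if stats.ttest_ind(c, d).pvalue >= alpha:
        type2 += 1

print(f"Type 1 error rate: {type1 / n_sims:.2f}  (should sit close to alpha = {alpha})")
print(f"Type 2 error rate: {type2 / n_sims:.2f}  (misses due to limited power)")
```

When there is no true effect, about 5% of simulations still come out "significant" (type 1 errors); when a modest true effect is buried in error variance, many simulations fail to detect it (type 2 errors).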
What is meant by internal validity?
Asking whether the manipulation of the IV actually caused the observed change in the DV - if yes, the study has internal validity
Without internal validity the effects of the IV are confounded, i.e. the effects of multiple variables cannot be separated and accurately interpreted - a particularly acute problem where the IVs are subject variables
What are 7 threats to internal validity?
History - events outside the lab that can bias conclusions
Selection (biased sampling) - picking subjects in a way that isn't random; this can cause a difference between groups even in the absence of the IV, or can turn a small treatment effect into an apparently big one
Attrition - subjects dropping out, is there a reason for this? Do they share a common quality?
Maturation - changes in participants over the time between measurements, especially when studying children, who mature at different rates, so some effects can become obscured or accentuated
Effects of repeated testing - participants learn a test and get better at it; this is a change in the participants themselves
Regression effect - extreme scores tend to move towards the middle on a second test (see the sketch after this list)
Instrumentation problems - e.g. where an "equivalent" version of a test is not actually equivalent, or inter- and intra-rater changes over time
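The regression effect in particular is easy to see in simulation. Here is a minimal Python sketch (all numbers hypothetical) that selects extreme scorers on a first test and retests them with no intervention at all:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Each test score = a stable "true" ability plus random error on that occasion.
true_ability = rng.normal(100, 10, n)
test1 = true_ability + rng.normal(0, 10, n)
test2 = true_ability + rng.normal(0, 10, n)

# Select people who scored extremely high on test 1 (top 5%)...
extreme = test1 > np.percentile(test1, 95)

# ...and see where they land on test 2.
print(f"Mean test 1 score of extreme group: {test1[extreme].mean():.1f}")
print(f"Mean test 2 score of same group   : {test2[extreme].mean():.1f}")
# The second mean is lower purely because extreme scores contain extreme error,
# which is unlikely to recur - not because anything changed between tests.
```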
How can we address threats to internal validity in between subjects designs?
Ensure the groups are treated as similarly as possible, gather data rapidly/simultaneously, and randomly assign participants to groups
How can we address threats to internal validity in within subjects designs?
Counterbalance condition order, gather data rapidly, and randomly assign participants to a counterbalancing order (a minimal sketch of random assignment and counterbalancing follows below)
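A minimal Python sketch of both remedies; the participant IDs, group sizes and condition labels are made up for the example:

```python
import random

random.seed(3)

participants = [f"P{i:02d}" for i in range(1, 13)]  # hypothetical participant IDs

# Between-subjects: random assignment to groups, so selection bias cannot
# systematically favour one condition.
shuffled = participants[:]
random.shuffle(shuffled)
groups = {"treatment": shuffled[:6], "control": shuffled[6:]}

# Within-subjects: counterbalance condition order and randomly assign each
# participant to one of the two orders, with equal numbers in each.
random.shuffle(shuffled)
orders = {p: ("A", "B") for p in shuffled[:6]}
orders.update({p: ("B", "A") for p in shuffled[6:]})

print(groups)
print(orders)
```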
What is meant by construct validity?
How closely our operationalisations of a construct relate to the real thing - in psychology many of our variables are not directly observable, so we have to ensure there is construct validity in the way we make inferences about a particular psychological construct
What is the most common type of threat to construct validity and how can we address this?
Confounding, i.e. drawing inferences about the wrong psychological construct
Placebo groups - if the placebo group does no better than the control group, the competing placebo explanation is weakened; but if it does as well as the treatment group, the placebo might explain the DV changes and we would need to revise our psychological construct
e.g. in an experiment looking at the effects of a new therapeutic treatment, the placebo group could simply receive attention
How could we improve construct validity?
Use a MANIPULATION CHECK designed to make sure the IV is changing in the way it should, e.g. in a stress experiment, take some measurements prior to testing to show the stressor is actually working (a minimal sketch follows below)
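A minimal sketch of such a check in Python, assuming a hypothetical self-reported stress rating collected after the stressor but before the main task, analysed with an independent-samples t-test (the ratings and sample sizes are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical manipulation-check data: self-reported stress ratings (1-10)
# collected after the stressor but before the main task.
stress_condition = rng.normal(7.0, 1.5, 30)
no_stress_condition = rng.normal(4.0, 1.5, 30)

result = stats.ttest_ind(stress_condition, no_stress_condition)
print(f"Manipulation check: t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# If the stress ratings do not differ between conditions, the IV was not
# manipulated as intended, and any null result on the main DV is uninterpretable.
```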