Validity and Reliability Flashcards
What is validity?
The accuracy of a study
The extent to which something measures what it is supposed to measure
What is internal validity?
Whether we measure what is intended to be measured
Whether we can be certain the IV caused the DV
If internal validity is low, the design and measures need to be revisited
What is external validity?
The extent to which the findings of a study can be generalised beyond the research setting
If the sampling method gives an unrepresentative sample, external validity is lower, so improving the sample can improve it
What are the 5 different types of validity?
Face
Content
Concurrent
Construct
Predictive
What is face validity?
Whether the measure looks, on the surface, like it measures what we are trying to measure
e.g. whether the questions on a stress questionnaire appear to relate to stress
What is content validity?
Looking at the method of measurement and deciding whether it measures the intended content
Could ask an independent expert on assessment to evaluate the measure used and suggest improvements, thereby dealing with validity
What is concurrent validity?
Comparing the measure with a previously validated one on the same topic; participants complete both measures at the same time and their scores are compared
If we have concurrent validity, we should expect similar scores on each test
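A minimal sketch of this comparison in Python, assuming we already have each participant's total score on the new measure and on the established one (the scores and variable names below are made up for illustration):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical totals for the same participants on the new stress measure
# and on an already-validated one, completed at the same time.
new_measure = [12, 18, 25, 30, 22, 15, 28, 20]
validated_measure = [14, 17, 27, 29, 21, 13, 30, 19]

# A Pearson's r close to +1 means participants score similarly on both
# tests, which is what concurrent validity predicts.
r = correlation(new_measure, validated_measure)
print(f"Correlation between measures: r = {r:.2f}")
```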
What is construct validity?
The extent to which a test measures the target construct
e.g. in measuring stress, we look at the definition of stress and consider whether the test items are relevant to this construct
What is predictive validity?
Whether scores on a test predict what we expect them to
e.g. we would expect people with high stress scores to also have high blood pressure
What is reliability?
The extent to which a test produces consistent results
What is internal reliability?
Whether the test is consistent within itself
e.g. whether the test questions are all measuring what they are intended to measure
What is external reliability?
Consistency over time
e.g. if we repeated an IQ test on different days, we would expect the same results
How do we test reliability in observations?
Through inter-rater reliability: two or more observers watch the same test/session, and their findings should agree at least 80% of the time
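A minimal sketch of checking percentage agreement between two observers, assuming each recorded one behaviour code per observation interval (the codes and data below are illustrative; the 80% cut-off is the rule of thumb from the card above):

```python
# Hypothetical behaviour codes recorded by two observers watching the same
# session, one code per observation interval.
observer_a = ["aggressive", "play", "play", "aggressive", "rest",
              "play", "aggressive", "rest", "play", "play"]
observer_b = ["aggressive", "play", "rest", "aggressive", "rest",
              "play", "aggressive", "rest", "play", "play"]

# Percentage agreement: proportion of intervals where both observers
# recorded the same behaviour.
matches = sum(a == b for a, b in zip(observer_a, observer_b))
agreement = 100 * matches / len(observer_a)

print(f"Agreement: {agreement:.0f}%")
print("Acceptable inter-rater reliability" if agreement >= 80
      else "Retrain observers / revise the checklist")
```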
How do we deal with issues of reliability in observation?
Observers are trained in using a coding system / behavioural checklist
If it is used consistently, the reliability of the observations can be checked
What is the split half method?
One group completes a test; the items are then split into two halves, scored separately, and compared to see whether the two halves give the same scores
The two sets of scores are compared by calculating a correlation coefficient
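A minimal sketch of the split-half calculation, assuming we have each participant's item-by-item answers (the odd/even split, item scores, and participant data below are illustrative):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical answers for five participants on a 10-item test,
# each item scored 0-5.
responses = [
    [3, 4, 3, 5, 4, 3, 4, 5, 4, 3],
    [1, 2, 1, 2, 2, 1, 2, 1, 2, 2],
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    [2, 3, 2, 3, 2, 3, 2, 3, 3, 2],
    [4, 3, 4, 4, 3, 4, 4, 3, 4, 4],
]

# Split each participant's items into two halves (odd- vs even-numbered
# items) and total each half.
odd_half = [sum(items[0::2]) for items in responses]
even_half = [sum(items[1::2]) for items in responses]

# A high positive coefficient suggests the two halves give similar scores,
# i.e. the test is internally reliable.
r = correlation(odd_half, even_half)
print(f"Split-half coefficient: r = {r:.2f}")
```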