RM2 Flashcards
- Experiments
manipulating the independent variable (IV) to test whether it produces differences in the dependent variable (DV)
- Observations
recording what people do in a situation of interest
- Self-reports
asking participants about their behaviour.
- Correlation
relationships between two variables.
Disadvantage of correlation
- Correlations cannot tell you whether two things are causally related.
When should you use a correlation, rather than any other type of psychological investigation (e.g., an experiment)?
- To test a hypothesis about a relationship between two variables. You might then follow up with an experiment based on those findings.
- When using an experimental design to explore the variables would be unethical or impractical.
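To illustrate what a correlation does (and doesn't) tell you, here is a minimal Python sketch of computing Pearson's r. The sleep/mood variables and all data values are invented for the example:

```python
# Hypothetical data: hours of sleep and mood rating for six participants.
sleep = [5, 6, 7, 8, 8, 9]
mood = [3, 4, 5, 6, 7, 8]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson_r(sleep, mood)
print(round(r, 2))  # → 0.98, a strong positive relationship
# Note: even an r this high says nothing about causation.
```

A value near +1 or -1 only shows that the two variables move together; deciding *why* requires an experiment.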
Reasons to conduct a case study:
- Opportunity to study an individual with a rare quality.
- Studying the antecedents of interesting events.
- Anywhere where depth of experience is more important than generalisability.
- Create a theory, which will then be tested experimentally.
- The emphasis is on qualitative data, though quantitative measures can be taken. The studies also tend to be longitudinal, so that they can capture insight over a longer period of time.
Strengths of case studies
- Rich in qualitative data, and thus insight, which the other research methods tend to lack.
- Studying rare things in depth can give us understanding of ‘normal’ functioning.
- Ability to generate hypotheses, from which experimental designs can be created.
Disadvantages of case studies
- Lack of generalisability.
- As much of the data is qualitative, interpreting findings can be highly subjective (e.g., Freud).
- Investigator effects likely as you get to know the subject better.
Content analysis
As we've just seen, content analysis is a type of observation in which people are studied indirectly via their communications.
Strengths of content analysis
- High ecological validity.
- If in public, it can be more ethical.
- Allows a lot of data to be analysed at once.
- When quantitative, it can show differences between groups.
Disadvantages of content analysis
- Investigator effects: especially in thematic analysis, the prejudices of the investigator may influence the conclusions of the study (known as reflexivity).
- Presupposes that language corresponds to what people actually think and do.
- Time consuming.
reliability
- Reliability: extent to which a measuring device or assessment (e.g., experiment) is consistent.
validity
- Validity: extent to which results are legitimate. Whether a study measures what it claims to measure.
Internal reliability
extent to which a measure is consistent within itself (e.g., every question on an IQ test must measure IQ, not celebrity knowledge).
Assessing internal reliability:
- Split-half analysis – the test is randomly divided in two. Is there a positive correlation between scores on one half and the other?
- Item analysis – performance on each item is compared with the overall score. Again, a positive correlation is desirable.
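The split-half step can be sketched in Python. The test items, participant scores, and the seeded random split below are all made up for illustration:

```python
import random

# Hypothetical item scores (1 = correct, 0 = incorrect) on one 10-item test,
# one row per participant.
scores = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 0, 1, 0, 0, 0],
]

def split_half_totals(rows, seed=0):
    """Randomly divide the items into two halves; return each person's total per half."""
    n_items = len(rows[0])
    items = list(range(n_items))
    random.Random(seed).shuffle(items)
    half_a, half_b = items[: n_items // 2], items[n_items // 2 :]
    totals_a = [sum(row[i] for i in half_a) for row in rows]
    totals_b = [sum(row[i] for i in half_b) for row in rows]
    return totals_a, totals_b

a, b = split_half_totals(scores)
# A strong positive correlation between a and b (e.g., Pearson's r)
# would indicate that the test is internally reliable.
```

The two half-scores for each person always add back up to their full-test total; internal reliability is then judged by how strongly the two halves correlate across people.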
External reliability
extent to which a measure is consistent from one occasion to another.
Assessing external reliability
- Test-retest – the same person is tested twice, separated by a period of time.
- Replication – any research should produce similar findings if repeated.
Inter-observer reliability
- Measure of the consistency of ratings.
- Basically, the ability to say that different people observing the same event will observe it in the same way.
Examples:
- Judges in gymnastics giving a performer the same score.
- Talent-show judges rating a performance.
How would you assess inter-rater reliability?
- Make sure that observers are seeing everything from the same perspective/viewpoint.
- Observers should discuss their ratings as they go, to make sure they are interpreting behaviour in the same way.
Procedure for assessing inter-observer reliability
- Create a scale or list of behaviours on which the observers agree.
- Make the behaviour less subjective by using a behavioural coding system.
- Make sure the observers watch the exact same situation.
- Make sure they observe it from the same place.
- Then check that they recorded the same results (they should agree at least 80% of the time).
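The final check above amounts to computing percent agreement between the observers' codings. A minimal Python sketch, using invented behaviour codes and observers:

```python
# Hypothetical behavioural codings from two observers watching the same session.
observer_1 = ["aggression", "play", "play", "withdrawal", "play", "aggression"]
observer_2 = ["aggression", "play", "withdrawal", "withdrawal", "play", "aggression"]

def percent_agreement(a, b):
    """Proportion of observation intervals where both observers gave the same code."""
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / len(a)

agreement = percent_agreement(observer_1, observer_2)
print(f"{agreement:.0%}")  # → 83%: five of the six codes match
# Against the 80% rule of thumb above, this pair of observers would pass.
```

A well-designed behavioural coding system raises this figure by removing room for subjective interpretation of each code.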