A2 Lessons 01 - 06 Flashcards
Content analysis
A systematic research technique for analysing qualitative data (e.g. interviews, texts, media). The researcher may create a coding system of predetermined categories.
Produces quantitative and qualitative data.
Themes
An idea that recurs throughout the data (e.g. across interviews). Themes tend to be more descriptive than coding categories.
Evaluation of Content Analysis
(+) Generates both qualitative and quantitative data, increasing validity
(+) Easy to analyse trends/patterns
(-) Not very scientific or objective
(-) May be hard to understand without deeper information
Case studies
Involve the detailed investigation/insight of a single individual/group
Usually focus on a small number of people
May take place over a long period of time (longitudinal study)
Evaluation of case studies
(+) Provides rich, detailed insight (in-depth data)
(+) Can investigate behaviour that might be rare or unusual. The research may be unethical to carry out otherwise
(-) Cannot generalise from such a small sample; close researcher involvement could lead to bias
(-) Data could be low in reliability because replication may not lead to the same results
Ways to assess reliability
Test-retest: repeat the test/observation with the same participants on a second occasion. Compare the two sets of results; if the correlation coefficient is +0.8 or above, the measure is reliable
Pilot study: small trial run of the observation
Ways to improve reliability
Inter-observer reliability: more than one psychologist observes and their results are compared (a correlation of +0.8 or above is needed)
Ensuring the categories have been operationalised and clearly stated, and questions are not ambiguous
Providing more training/practice to the observers
Standardisation of instructions
Factors that can reduce validity
Investigator effects
Demand characteristics
Confounding variables
Social desirability bias
Lack of operationalisation
Concurrent Validity
Improving it
A way of establishing internal validity. The scores obtained are compared against those from an older, established test whose validity is already known; a correlation coefficient of +0.8 or above indicates concurrent validity
Irrelevant or ambiguous questions can be removed
Face validity
Improving it
Whether a test appears, at face value, to measure what it set out to measure. Researchers/experts judge it "on the face of it".
Some questions might be improved/rewritten/re-worded
Assessing external validity
Meta-analysis
Consider the environment (e.g. lab)
Assess how the DV is measured
Assess whether the participants are behaving as normally as possible
Improving external validity
Double blind procedure
Naturalistic settings to improve ecological validity
The key features of a science
Includes experiments, observations, case studies etc. that produce valid and reliable data.
Samples need to be large and representative
Key words need to be operationalised
Confounding variables need to be managed
Pilot studies should be carried out
There needs to be a high level of control
Empirical methods defined
A method of gaining knowledge which relies on direct observation or testing.
Looking at scientific evidence
Paradigm
A shared set of assumptions found within scientific disciplines
Psychology lacks a universally accepted paradigm