A2 RM Flashcards
Content Analysis (CA)
- Is a type of observational technique which involves studying people indirectly through the qualitative data they produce
- Data can be placed into categories + counted (quantitative) or can be analysed in themes (qualitative)
Qualitative and quantitative
- Qualitative data collected in a range of formats can be used e.g. video or audio recordings (or the interview transcripts), written responses (such as those provided to an open question in a questionnaire) or children’s drawings
- CA helps to classify responses systematically, which then allows clear conclusions to be drawn
Coding
- Is an important step in conducting CA + involves the researcher developing categories into which the data can be classified
- Qualitative data can be extensive so coding is helpful in reaching conclusions about the data
- These categories provide a framework to convert the qualitative data to quantitative data which can then be used for further statistical analysis
- It is important for researchers to have their research questions formulated so they know exactly what their CA will focus on
- Researchers must familiarise themselves with the data before conducting any analysis so that they are confident that their coding system is appropriate for the task
When is content analysis useful?
- CA is helpful when conducting research that would otherwise be considered unethical
- Any data already released into the public domain is available for analysis e.g. newspaper articles meaning that explicit consent is not required
- For material that is of a sensitive nature, such as experiences of domestic violence, participants can write a report of their experience which can be used in analysis
- This allows high quality data to be collected, even in difficult circumstances
Content analysis involves design decisions:
- Sampling method = how material should be sampled e.g. time or event sampling
- Recording data = should data be transcribed or recorded (video camera) + should data be collected by an individual researcher or by a team
- Analysing and representing data = how should the material be categorised or coded to summarise it? + should the no of times something is mentioned be calculated or described using themes?
Example of content analysis:
A researcher is interested in investigating prejudice + discrimination in the media towards refugees. To do this, they will use the following procedure:
- The researcher will select a newspaper article relating to refugees
- They will read through the text, highlighting important points of reference + annotating the margins with comments
- Using the comments made, the researcher will categorise each excerpt according to what it contains e.g. evidence of prejudice, discriminatory language + positive regards towards refugees
- This process will be repeated for each newspaper article of interest identified by the researcher at the outset
- Once all steps above are completed for each newspaper article, the categories which emerged through the process of analysing the content are reviewed to decide if any need refining, merging or subdividing
- With the well‐defined (operationalised) behavioural categories, the researcher returns to the original articles + tallies the occurrence of each ‘behaviour’ accordingly.
- The qualitative data has now undergone analysis to produce quantitative data which can undergo further analysis such as statistical testing, descriptive statistics + producing graphs or tables
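The tallying step above is just counting category codes per article. A minimal sketch in Python (not part of the CA method itself; the category labels + coded excerpts are hypothetical, purely to show how qualitative codes become counts):

```python
# Minimal sketch: tallying the researcher's category codes for one article.
# The category labels and the list of coded excerpts are hypothetical examples.
from collections import Counter

coded_excerpts = [                      # codes assigned to each highlighted excerpt
    "prejudice", "discriminatory_language", "prejudice",
    "positive_regard", "prejudice", "discriminatory_language",
]

tallies = Counter(coded_excerpts)       # occurrences of each operationalised category
for category, count in tallies.items():
    print(f"{category}: {count}")       # e.g. prejudice: 3
```

The counts per category (and per article) are the quantitative data that can then be used for descriptive statistics, tables/graphs or statistical testing.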
Advantage of Content Analysis (1)
CA tends to have high ecological validity because it is based on observations of what people actually do e.g. real communications, such as recent newspapers or the books that people read
Advantage of Content Analysis (2)
When sources can be accessed by others e.g. videos of people giving speeches, the CA can be replicated + therefore the observations can be tested for reliability
Disadvantage of Content Analysis (1)
Researchers can still be biased when putting the data into categories which reduces the reliability + validity of the data because diff researchers may interpret the meaning of the categories differently
Disadvantage of Content Analysis (2)
- Cultural differences may contribute to inconsistent interpretation of behaviour coding since language may be translated + therefore interpreted differently by someone of a different nationality
- As a result, the validity of findings from a CA can be questioned since it may not have been measuring what it intended to with accuracy
Thematic Analysis (TA)
- Is a technique that helps identify themes throughout qualitative data
- A theme is an idea or a notion + can be explicit (such as stating that you feel depressed) or implicit (using the metaphor of a black cloud for feeling depressed)
- TA will produce further qualitative data but this will be much more refined
Example of Thematic Analysis:
For the researcher reviewing the articles for evidence of prejudice or discrimination against refugees, the following procedure would be followed:
1. Carry out steps 1–3 as if conducting a CA (select the material, read + annotate it, categorise the excerpts)
2. Afterwards, the researcher must decide if any of the categories identified can be linked in any way, such as ‘stereotypical views’, ‘economic prejudice’ or perhaps ‘positive experiences for refugees’.
3. Once the themes are successfully identified, they can then be used as shorthand to identify all aspects of the data that fit with each theme
e.g. every time the researcher identifies an example within the data of a positive experience for the refugee, they might write ‘PER’ (positive experience for refugees) alongside it so that they are able to quickly re‐identify this theme in subsequent analysis of the data (see the sketch after this list)
4. Once all the steps above are completed, the themes which emerged will be critically reviewed to decide their relevance
5. This process will be repeated for each newspaper article of interest identified by the researcher at the outset.
6. Qualitative comparisons are drawn between major and minor themes of the analysis
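Deciding on themes + tagging excerpts is the researcher's own judgement, not something software can do, but the shorthand tags lend themselves to simple bookkeeping. A minimal sketch in Python, assuming hypothetical tags + excerpts, of how tagged excerpts could be stored so a theme (e.g. ‘PER’) can be quickly re‐identified later:

```python
# Minimal sketch: grouping the researcher's shorthand theme tags so that every
# excerpt belonging to a theme can be re-identified quickly. Data are hypothetical.
from collections import defaultdict

tagged_excerpts = [   # (tag, excerpt) pairs produced by the researcher's own reading
    ("PER", "local volunteers welcomed the refugee families"),
    ("EP", "claims that refugees strain the local economy"),
    ("PER", "a refugee-run cafe praised by the community"),
]

by_theme = defaultdict(list)
for tag, excerpt in tagged_excerpts:
    by_theme[tag].append(excerpt)

print(by_theme["PER"])   # all excerpts tagged as positive experiences for refugees
```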
Advantage of Thematic Analysis
- High ecological validity
- Much of the analysis that takes place within these research methods bases its conclusions on observations of real‐life behaviour + written and visual communications
- E.g. analysis can take place on books people have read or programmes that people have watched on TV
- Since records of these qualitative sources remain, replication of the content/thematic analysis can be conducted
- If results were found to be consistent on re‐analysis then they would be said to be reliable
Weakness of Thematic Analysis
- There is the possibility that TA can produce findings that are very subjective
- E.g. the researcher may interpret some things said in an interview in a completely diff manner from how they were intended due to their own preconceptions, judgements or biases
Case Studies
- The purpose of a case study is to provide a detailed analysis of an individual, establishment or real‐life event
- A case study does not refer to the way in which the research was conducted as case studies can use experimental or non‐experimental methods to collect data
- E.g. a researcher may want to interview the participants, provide a questionnaire to their family or friends + even conduct a memory test under controlled conditions to provide a rich and detailed overview of human behaviour
When are case studies used?
- Case studies are often used when a rare behaviour is being investigated which does not arise often enough to conduct a larger study
- A case study allows data to be collected + analysed on something that psychologists have very little understanding of
- And can therefore be the starting point for further, more in‐depth research
Example of famous case studies:
- HM
- Phineas Gage
- Little Albert
- Little Hans
- 9/11
- London Riots
Advantage of Case Studies (1)
- It offers the opportunity to unveil rich, detailed info about a situation
- These unique insights can often be overlooked in situations where there is only the manipulation of 1 variable to measure its effect on another
Advantage of Case Studies (2)
- Case studies can be used in circumstances which would not be ethical to examine experimentally
- E.g. the case study of Genie (Rymer, 1993) allowed researchers to understand the long‐term effects of failure to form an attachment which they could not do with a human participant unless it naturally occurred
Disadvantage of Case Studies (1)
- There are methodological issues associated with the use of case studies
- By only studying 1 individual, an isolated event or a small group of people it is difficult to generalise any findings to the wider population since results are likely to be so unique
- Therefore this creates issues with external validity as psychologists are unable to conclude with confidence that anyone beyond the ‘case’ will behave in the same way under similar circumstances
- Thus lowering population validity
Disadvantage of Case Studies (2)
- An issue when qualitative methods are used is that the researcher’s own subjectivity may pose a problem
- In the case study of Little Hans, Freud developed an entire theory based around what he observed
- There was no scientific or experimental evidence to support his suggestions from his case study
- This means that a major problem with his research is that we can’t be sure that he objectively reported his findings
- Consequently, a major limitation with case studies is that research bias + subjectivity can interfere with the validity of the findings/conclusion
Reliability
- Is a measure of consistency
- If the results are not consistent then the measure is not reliable
- If researchers are using a questionnaire to measure levels of depression they want to ensure that the measure is consistent between participants + over time
Test-Retest Reliability
- The same person or group of people are asked to undertake the research measure e.g. questionnaire on different occasions
- The same group of participants are being studied twice so researchers need to be aware of any potential demand characteristics
- If the same measure is given twice in 1 day, there is a strong chance participants will be able to recall the responses they gave in the first test, + so psychologists could be testing their memory rather than the reliability of their measure
- Also ensure there is not too much time between each test
- If psychologists are testing a measure of depression + question the participants a year apart, it is possible they may have recovered + so give completely different responses because their condition has changed, not because the questionnaire is unreliable
- After the measure has been completed on two separate occasions, the 2 scores are then correlated
- If the correlation is shown to be significant, then the measure is said to have good reliability
- Perfect correlation is 1 + so the closer the score is to this, the stronger the reliability of the measure
- But a correlation of over +0.8 is also perfectly acceptable + seen as a good indication of reliability
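A minimal sketch in Python of the test‐retest check described above, using hypothetical questionnaire scores from the same participants on two occasions:

```python
# Minimal sketch: correlating two administrations of the same questionnaire.
# The scores below are hypothetical.
import numpy as np

test_1 = np.array([12, 18, 7, 22, 15, 9, 20, 14])   # first administration
test_2 = np.array([13, 17, 8, 21, 14, 10, 19, 15])  # same participants, weeks later

r = np.corrcoef(test_1, test_2)[0, 1]               # Pearson correlation coefficient
print(f"test-retest r = {r:+.2f}")
print("good reliability" if r > 0.8 else "poor reliability")   # +0.8 rule of thumb
```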
Inter-Observer Reliability
- Also known as inter-rater reliability
- Refers to the extent to which 2 or more observers are observing + recording behaviour in a consistent way
- A useful way of ensuring reliability in situations where there is a risk of subjectivity
- If a psychologist was making a diagnosis for a mental health condition it would be a good idea for someone else to also make a diagnosis to check that they are both in agreement
- In psychology studies where behavioural categories are being applied, inter‐observer reliability is also important to ensure the categories are being used in the correct manner
- Psychologists would observe the same situation or event separately + then their observations (or scores) would be correlated to see whether they are suitably similar
- If the correlation coefficient of the 2 observers is more than +0.8 then this means the reliability is strong
Example of Inter-Observer Reliability
- Ainsworth’s Strange Situation
- During the controlled observation, her research team were looking for instances of separation anxiety, proximity seeking, exploration + stranger anxiety across the 8 episodes of the methodology (operationalised behavioural categories)
- Ainsworth found 94% agreement between observers + when inter‐observer reliability is established to a high degree, the findings are considered more meaningful (see the sketch after this list)
- If reliability is found to be poor, there are different ways in which it can be rectified depending on the type of measure being used
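A minimal sketch in Python of a percentage‐agreement check like the one Ainsworth reported; the behavioural categories are those listed above but the codings themselves are hypothetical (the two observers’ totals could equally be correlated + compared against +0.8 instead):

```python
# Minimal sketch: percentage agreement between two observers applying the same
# operationalised behavioural categories to the same episodes. Codings are hypothetical.
obs_1 = ["proximity_seeking", "exploration", "separation_anxiety", "stranger_anxiety",
         "exploration", "proximity_seeking", "separation_anxiety", "exploration"]
obs_2 = ["proximity_seeking", "exploration", "separation_anxiety", "exploration",
         "exploration", "proximity_seeking", "separation_anxiety", "exploration"]

agreements = sum(a == b for a, b in zip(obs_1, obs_2))
percent_agreement = 100 * agreements / len(obs_1)
print(f"{percent_agreement:.0f}% agreement")   # 88% here; Ainsworth reported 94%
```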
Improving Reliability: Interviews
- Ensuring the same interviewer is conducting all interviews will help reduce researcher bias as there is the potential for variation in the way questions are asked which can lead to different responses
- Some researchers may ask questions that are leading or are open to interpretation
- If the same interviewer can’t be used throughout the interviewing process then training should be provided to limit the potential bias
- Changing the interview from unstructured to structured will limit researcher bias
Improving Reliability: Questionnaires
- Identify which questions are having the biggest impact on the reliability + adjust them as necessary
- If they are important items that must remain in the questionnaire then rewriting them in a way that reduces the chance of them being incorrectly interpreted may be enough
- E.g. if the item in question is an open question, it may be possible to change it into a closed question reducing possible responses + thereby limiting potential ambiguity
Improving Reliability: Experiments
- Lab experiments are often referred to as having high reliability due to the high level of control over variables, which makes them easier to replicate by following the standardised procedures
- To improve reliability within experiments researchers might try to take more control over extraneous variables, helping to reduce the potential for them to become confounding
Improving Reliability: Observations
- Observations can lack objectivity as they are relying on the researcher’s interpretations of a situation
- If behavioural categories are being used, it is important that the researcher is applying them accurately + not being subjective in their interpretations
- One way would be to operationalise the behavioural categories
- This means that the categories need to be clear + specific on what constitutes the behaviour in question
- There should be no overlap between categories leaving no need for personal interpretation of the meaning
Validity
Refers to whether a measuring instrument or study measures what it claims to measure (whether something is true or legitimate)
Internal validity
Is a measure of whether results obtained are solely affected by changes in the variable being manipulated (IV) in a cause + effect relationship
External validity
Is a measure of whether data can be generalised to other situations outside of the research environment
Ecological validity
- Type of external validity
- Refers to the extent to which psychologists can apply their findings to other settings (everyday life)
- Lab experiments often lack ecological validity
- Due to the artificial setting of a lab, it is difficult to generalise the findings to a more natural situation since behaviour may be very different as a result
- Exam Hint: If you are suggesting that results are low in ecological validity as evaluation, make sure you justify this point with specific examples relating to that individual study. Avoid writing sentences which could be ‘copy and pasted’ into another essay as this means you have not tied the commentary closely to the question at hand
Temporal Validity
- Form of external validity
- Refers to the extent to which research findings can be applied across time
- E.g. Asch’s research into conformity is said to lack temporal validity because the study was conducted in a conformist era + thus the findings might not be as applicable in today’s society
Population validity
- Form of external validity
- Refers to the extent to which the research can be applied to different groups of people apart from the group that were used in the study
- E.g. Asch’s study was carried out on males, but could the findings also be applied to females?
The validity of a psychological test or experiment can be assessed in 2 ways:
- Face validity
- Concurrent validity
Face validity
- Does the test appear to measure what it says it measures?
- If there is a questionnaire that is designed to measure depression, do the items look like they are going to represent what it is like to have depression?
- If not it is not likely to have face validity
- A test of face validity is most likely to be conducted by a specialist in the given area (a clinical psychologist, doctor or other mental health specialist) familiar with the assessment of depression
- If the specialist believes that the instrument or measure is valid, this is often seen as a good indication of validity
Concurrent validity
- This is where the performance of the test in question is compared to a test that is already recognised + trusted within the same field
- If psychologists want to introduce a new measure of depression they might compare their results to the data obtained from a similar measure such as Beck’s Depression Inventory
- As both measures are looking to do the same thing it would be expected for participants to score relatively similarly on each questionnaire
- Statistically, a correlation of +0.80 or higher would indicate that there is high concurrent validity
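The calculation is the same kind of correlation as for test‐retest reliability. A minimal sketch in Python with hypothetical scores on the new measure + an established one for the same participants:

```python
# Minimal sketch: correlating a new depression measure with an established one
# (e.g. Beck's Depression Inventory). Scores are hypothetical.
import numpy as np

new_measure = np.array([14, 22, 9, 30, 18, 11, 25, 16])
established = np.array([15, 24, 8, 29, 17, 13, 26, 18])

r = np.corrcoef(new_measure, established)[0, 1]
print(f"concurrent validity r = {r:+.2f}")   # +0.80 or higher = high concurrent validity
```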
Improving Validity: Experiments
1 - A control group is used in a lab experiment which allows psychologists to see whether the IV influences the DV
- If researchers are testing the efficacy of a new anti‐depressant drug they will often have an experimental group (receive the true medication) + a control group (receive a placebo)
- In this case, using a control group would allow a comparison to see whether the medication was truly effective thus giving greater confidence in the validity of the research
2 - Research also includes single‐blind or double‐blind procedures to improve validity
- This ensures that the knowledge of the conditions does not result in demand characteristics by participants or investigator effects from the direct or indirect behaviour of the experimenter
3 - Use standardised instructions (giving all participants the same instructions in exactly the same formats)
- Participants receive identical info + psychologists can minimise investigator effects
- In this way participants are less likely to have a different interpretation of what they are required to do whilst the researcher is at less risk of giving a higher level of info to some participants compared to others
Improving Validity: Questionnaires
1 - Researchers will include a lie scale to check the consistency of participants’ responses
- One way this can be done is by having 2 items that are asking the same thing but in opposite ways
- E.g. on a scale measuring depression imagine that each item asks participants to rate from 1-5 with 1 = completely disagree + 5 = completely agree
- There might be 1 item in the scale that says, ‘I generally sleep well at night’ + another that says, ‘my sleeping has become worse’
- A participant can’t respond to both items honestly with a rating of 5 because they contradict each other
- Such items are then used to check the validity of an individual participant’s scores (see the sketch after this list)
2 - Ensure that participants know that their responses are going to be kept anonymous because by remaining unidentifiable, participants are less likely to give answers that are socially desirable
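A minimal sketch in Python of the consistency check on a pair of opposite items; the item wordings come from the example above, while the 1–5 ratings + the flagging threshold are hypothetical:

```python
# Minimal sketch: checking a pair of opposite items on a 1-5 agreement scale.
# The ratings and the threshold of 3 are hypothetical illustrations.
sleep_well = 5       # "I generally sleep well at night"
sleep_worse = 5      # "My sleeping has become worse"

reversed_worse = 6 - sleep_worse                  # reverse-score: 5 -> 1, ..., 1 -> 5
inconsistent = abs(sleep_well - reversed_worse) >= 3
print("flag as inconsistent" if inconsistent else "responses consistent")
```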
Improving Validity: Observations
1 - Making sure that the researchers have minimal impact on the behaviour that they are observing
- One way to do this is to conduct a covert observation (researcher is not seen)
- Increases the likelihood that the behaviour observed is natural as participants will not be acting in a way that they see as correct or desirable for the sake of the study
2 - Use of behavioural categories
- Researchers will tick off behaviours when they are seen which helps to improve validity by reducing the chance of researcher subjectivity
- Ensuring that the categories are clearly defined + do not overlap would also further improve validity in observations