A2 Research Methods L1-6 Flashcards
Define and describe content analysis
Content analysis is a method used to analyse qualitative data: it allows a researcher to take qualitative data and transform it into quantitative data.
The technique can be used for data in many different formats, for example interview transcripts and film and/or audio recordings.
The researcher conducting a content analysis will use coding units in their work. An example of coding units would be the number of positive or negative words used by a mother to describe her child’s behaviour, or the number of swear words in a film.
Describe the procedure for content analysis
- Data is collected.
- Researcher reads through or examines the data making themselves familiar with it.
- The researcher identifies coding units.
- The data is analysed by applying the coding units.
- A tally is made of the number of times that a coding unit appears.
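The tallying step above can be sketched in code. This is a minimal illustration in Python, where the coding units and the transcript extract are invented examples, not taken from any real study:

```python
# Tallying coding units in a transcript: count how often each word from a
# predefined coding-unit list appears. The word lists and transcript below
# are hypothetical examples for illustration only.
from collections import Counter

positive_units = {"kind", "helpful", "calm"}
negative_units = {"angry", "disruptive", "rude"}

transcript = ("He can be kind and helpful at home, but teachers say he is "
              "disruptive in class and sometimes rude or angry with peers.")

# Normalise words (strip punctuation, lowercase) before matching coding units
words = [w.strip(".,").lower() for w in transcript.split()]
tally = Counter(w for w in words if w in positive_units | negative_units)

positives = sum(tally[w] for w in positive_units)
negatives = sum(tally[w] for w in negative_units)
print(tally, positives, negatives)
```

The output of the tally is the quantitative data: a count per coding unit, which can then be compared across transcripts.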
What is thematic analysis?
This is a method for analysing qualitative data that involves identifying and reporting patterns within the material. The material might be a diary, TV advertisements, or interview transcripts.
What is the procedure for thematic analysis?
The researcher will need to:
- Make a transcription of the interview.
- Use coding units to initially analyse the transcript.
- Review the coding units to look for themes.
For example, imagine a psychologist was interviewing violent offenders about their family and early childhood. The themes that emerge could be family violence, parental arguments, alcohol misuse, etc.
What are the advantages of content analysis?
1) Content analysis produces reliable data: if the analysis were repeated in the future, similar/consistent results would be obtained (quantitative data).
2) Content analysis produces quantitative data, which allows trends and patterns in the data to be identified (some qualitative data can also be produced).
3) Content analysis is less time consuming than other research methods, e.g. interviews, in terms of collecting data.
What are the disadvantages of content analysis?
1) Content analysis is not very scientific or objective. It can be quite subjective based on the themes/categories that the psychologist uses
2) Content analysis can be invalid: are the themes/categories really measuring the effect of the IV on the DV?
3) Content analysis data collection needs to be contextualised e.g. sleep behaviour measured in a sleep lab is a different context to sleeping at home
Define and describe ‘case studies’
Case studies involve the detailed investigation of a single individual, group, or institution. This may be because the psychologist has found only one or two individuals who display a rare and/or fascinating behaviour.
Case studies provide rich, detailed qualitative data compared to other research methods. Case studies usually involve several methods (observations, interviews, etc.) which allows researchers to check for consistency, reliability and validity.
What kind of data can be collected from case studies?
Psychologists can collect qualitative data (for example from interviews and observations); and/or quantitative data such as questionnaires or experiments.
Case studies can last weeks, months or years and so can be longitudinal. This means they are able to observe changes over time.
What are the advantages of case studies?
1) Case studies provide rich, detailed insights into behaviour. Psychologists value the qualitative data that case studies collect, as it tends to be more valid than quantitative data (collected from experiments/questionnaires).
2) Case studies allow psychologists to investigate human behaviour that might be rare or unusual e.g. London riots, or Genie. Such research might otherwise be unethical to carry out using other research methods such as an experiment or an observation.
What are the disadvantages of case studies?
1) Case studies often use small samples, and therefore the research findings cannot be generalised to the wider population. Using small samples could also lead to the researcher being biased in terms of subjective selection and interpretation of results.
2) The data collected from case studies can often be low in reliability.
If the research was to be conducted again, we might find that the same results would not be obtained. Sometimes the data collected from a case study might be low in validity, especially if it relies on participants recalling information from a long time ago (memory decay).
How do we assess reliability in an observation?
1) Test retest:
Repeat the observation a second time using the same participants and compare results gained
from the second observation, with the results gained from the first observation (this can be easier
if the observation is recorded). The results from the first and second observation should be very
similar and should produce a correlation coefficient of +0.8 or more in order to be reliable
2) Pilot study:
Conduct a small trial run of the observation before the main research study is carried out. Pilot studies can ensure that the procedures and resources used in the research improve precision when measuring behaviour. This might include standardised instructions, debriefing, and planning procedures properly, which will minimise human error and variation. All key concepts and terms need to be operationalised so that all researchers know what they are defining, measuring and observing. Operationalise the behavioural categories and make sure that all observers have been properly trained to look for the appropriate behaviour that fits into each category.
How do we assess reliability in a self-report?
1) Test retest:
Give a self-report to a group of participants and collect the results. Give the same participants the same self-report to complete a second time (a short interval of time should be left between the first and second test, e.g. a few weeks). Compare the results from the first self-report to the second self-report. The results should be very similar/consistent, with a correlation coefficient of +0.8 or more if the self-report is reliable.
How do we assess reliability in an experiment?
Experiments are reliable if psychologists can conduct the experiment again and gain the
same/similar results.
1) Test retest:
Conduct the experiment once and collect the results. Repeat the experiment again a few weeks
later with the same participants who will be tested in exactly the same way. Compare the results
gained from both occasions. The results should be similar from both occasions in order for the
experiment to be reliable. A correlation coefficient of +0.8 or more should be gained.
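All three test-retest checks come down to correlating two sets of scores from the same participants. This is a minimal sketch in Python using made-up illustrative scores; the +0.8 threshold is the one quoted above:

```python
# Test-retest reliability: correlate each participant's score on the first
# occasion with their score on the second occasion. The score lists below
# are invented for illustration, not real data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

first_test  = [12, 15, 9, 20, 18, 11]   # scores on occasion 1
second_test = [13, 14, 10, 19, 18, 12]  # same participants, a few weeks later

r = pearson_r(first_test, second_test)
print(f"r = {r:.2f}, reliable: {r >= 0.8}")
```

The same comparison applies whichever method produced the scores (observation tallies, self-report totals, or experimental measures).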
How do we improve reliability in an observation?
1) Inter observer reliability:
Make sure the observation is not biased. Use more than one observer/psychologist to observe and record the behaviours separately. The results from both psychologists could be compared and should gain a positive correlation coefficient of +0.8 or more if the observation is reliable.
2) If inter observer reliability is low, reliability could be improved by ensuring that the behavioural categories have been operationalised properly and clearly, so that each observer understands the categories when recording data. The observers might need further training about which behaviours to observe and how they should be measured.
3) If the results from the pilot study are not very clear, reliability could be improved by giving more training and practice to the observers. They can become more familiar with the behavioural categories and can respond more quickly when observing participants.
How do we improve reliability in a self-report?
1) Questions used in the interview:
Make sure the interview questions are not ambiguous. They should be very clear so that participants understand them and can give the same answers if the questions were asked again in the future. Ambiguous questions might need to be rewritten or removed.
2) Inter researcher reliability:
Make sure the self-report is not biased. If conducting an interview, it is possible to use more than one psychologist to interview participants, separately or together, and record answers separately. The researchers need to act in similar and consistent ways, and each researcher needs to carry out the procedure and design in exactly the same way so as to make the research consistent. The results from both psychologists could be compared and should gain a positive correlation coefficient of +0.8 or more if the interview is reliable.
How do we improve reliability in an experiment?
Standardisation of instructions: if an experiment is conducted twice, then the procedures are repeated twice. Procedures should be exactly the same for each participant that takes part in the experiment; this will help ensure that reliable results are gained. Standardised instructions should be used, and key concepts and variables should be operationalised.
What are the types of validity?
Face validity, concurrent validity, ecological validity, temporal validity.
Define internal/experimental validity
Measures whether the results are due to the manipulation of the independent variable and not confounding variables.
Define investigator effects.
1) Investigator effects - the characteristics of the researcher can affect the participants and the outcome of the research (rather than the IV affecting the DV).
Define demand characteristics
2) Demand characteristics - participants guess the aim of the study and change their behaviour; therefore the outcome of the study might not be due to the IV affecting the DV.
Define confounding variables
3) Confounding variables - external variables have not been well controlled and might have an effect on the DV, rather than the IV alone causing the change.
Define social desirability bias
4) Social desirability bias - participants might try to portray themselves in a positive light and behave in an unnatural way, which will affect the DV and the results gained.
Define lack of operationalisation
5) Lack of operationalisation - variables have not been defined and measured properly. This could affect the results overall, resulting in low internal validity.