A2 Research Methods L1-6 Flashcards

1
Q

Define and describe content analysis

A

Content analysis is a method used to analyse qualitative data. It allows a researcher to take qualitative data and transform it into quantitative data.

The technique can be used for data in many different formats, for example interview transcripts and film and/or audio recordings.

The researcher conducting a content analysis will use coding units in their work. An example of coding units would be the number of positive or negative words used by a mother to describe her child’s behaviour, or the number of swear words in a film.

2
Q

Describe the procedure for content analysis

A
  1. Data is collected.
  2. The researcher reads through or examines the data to become familiar with it.
  3. The researcher identifies coding units.
  4. The data is analysed by applying the coding units.
  5. A tally is made of the number of times each coding unit appears.
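The tallying step above can be sketched in code. This is a minimal illustration only; the word lists, the `tally_coding_units` helper, and the transcript are invented examples, not part of the source material.

```python
from collections import Counter

# Hypothetical coding units: positive/negative words a mother might use
# to describe her child's behaviour (illustrative lists only).
POSITIVE = {"kind", "helpful", "calm"}
NEGATIVE = {"naughty", "aggressive", "rude"}

def tally_coding_units(transcript: str) -> Counter:
    """Tally how many times each coding unit appears in a transcript."""
    tally = Counter()
    for word in transcript.lower().split():
        word = word.strip(".,!?")  # drop trailing punctuation
        if word in POSITIVE:
            tally["positive"] += 1
        elif word in NEGATIVE:
            tally["negative"] += 1
    return tally

transcript = "He is kind and helpful, but sometimes naughty."
print(tally_coding_units(transcript))  # Counter({'positive': 2, 'negative': 1})
```

The tallies produced this way are the quantitative data that content analysis extracts from qualitative material.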
3
Q

What is thematic analysis?

A

This is a method for analysing qualitative data that involves identifying and reporting patterns within the material. The material might be a diary, TV advertisements, or interview transcripts.

4
Q

What is the procedure for thematic analysis?

A

The researcher will need to:
- Make a transcription of the interview.
- Use coding units to initially analyse the transcript.
- Review the coding units to look for themes.

For example, imagine a psychologist was interviewing violent offenders about their family and early childhood. The themes that emerge could be family violence, parental arguments, alcohol misuse, etc.

5
Q

What are the advantages of content analysis?

A

1) Content analysis produces reliable data. If the content analysis were repeated in the future, similar/consistent results would be obtained (quantitative data).

2) Content analysis produces quantitative data, which allows trends and patterns in the data to be identified (some qualitative data can also be produced).

3) Content analysis is less time-consuming than other research methods, e.g. interviews, in terms of collecting data.

6
Q

What are the disadvantages of content analysis?

A

1) Content analysis is not very scientific or objective. It can be quite subjective, depending on the themes/categories that the psychologist uses.

2) Content analysis can lack validity: are the themes/categories really measuring the effect of the IV on the DV?

3) Data collected for a content analysis needs to be contextualised, e.g. sleep behaviour measured in a sleep lab is a different context from sleeping at home.

7
Q

Define and describe ‘case studies’

A

Case studies involve the detailed investigation of a single individual, group or institution. This may be because the psychologist has only found one or two individuals who display a rare and/or fascinating behaviour.

Case studies provide rich, detailed qualitative data compared to other research methods. Case studies usually involve several methods (observations, interviews, etc.) which allows researchers to check for consistency, reliability and validity.

8
Q

What kind of data can be collected from case studies?

A

Psychologists can collect qualitative data (for example from interviews and observations) and/or quantitative data (for example from questionnaires or experiments).

Case studies can last weeks, months or years, and so can be longitudinal. This means researchers are able to observe changes over time.

9
Q

What are the advantages of case studies?

A

1) Case studies provide rich, detailed insights into behaviour. Psychologists value the qualitative data that case studies collect, as it tends to be more valid than quantitative data (collected from experiments/questionnaires).

2) Case studies allow psychologists to investigate human behaviour that might be rare or unusual e.g. London riots, or Genie. Such research might otherwise be unethical to carry out using other research methods such as an experiment or an observation.

10
Q

What are the disadvantages of case studies?

A

1) Case studies often use small samples, and therefore the research findings cannot be generalised to the wider population. Using small samples could also lead to researcher bias in the subjective selection and interpretation of results.

2) The data collected from case studies can often be low in reliability. If the research were conducted again, we might find that the same results would not be obtained. Sometimes the data collected from a case study might also be low in validity, especially if it relies on participants recalling information from a long time ago (memory decay).

11
Q

How do we assess reliability in an observation?

A

1) Test-retest:
Repeat the observation a second time using the same participants, and compare the results gained from the second observation with the results gained from the first (this can be easier if the observation is recorded). The results from the first and second observation should be very similar and should produce a correlation coefficient of +0.8 or more in order to be reliable.

2) Pilot study:
Conduct a small trial run of the observation before the main research study is carried out. Pilot studies can ensure that the procedures and resources used in the research improve precision when measuring behaviour. This might include standardised instructions, debriefing, and planning procedures properly, which will minimise human error and variation. All key concepts and terms need to be operationalised so that all researchers know what they are defining, measuring and observing. Operationalise the behavioural categories and make sure that all observers have been properly trained to look for the appropriate behaviour that fits into each category.
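The test-retest comparison above can be checked numerically. This is a minimal sketch: the `pearson` helper and the two sets of observation tallies are invented for illustration, while the +0.8 cutoff comes from the card.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical behaviour tallies for the same five participants,
# observed on two separate occasions (test-retest).
first = [12, 8, 15, 10, 7]
second = [11, 9, 14, 10, 8]

r = pearson(first, second)
# The observation counts as reliable if r is +0.8 or more.
print(f"r = {r:.2f}, reliable: {r >= 0.8}")
```

The same comparison applies to the self-report and experiment test-retest checks on the following cards.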

12
Q

How do we assess reliability in a self-report?

A

1) Test-retest:
Give a self-report to a group of participants and collect the results. Give the same participants the same self-report to complete a second time (a short interval, e.g. a few weeks, should be left between the first and second test). Compare the results from the first self-report to the second. The results should be very similar/consistent, giving a correlation coefficient of +0.8 or more if the self-report is reliable.

13
Q

How do we assess reliability in an experiment?

A

Experiments are reliable if psychologists can conduct the experiment again and gain the same/similar results.

1) Test-retest:
Conduct the experiment once and collect the results. Repeat the experiment a few weeks later with the same participants, who will be tested in exactly the same way. Compare the results gained from both occasions; they should be similar in order for the experiment to be reliable. A correlation coefficient of +0.8 or more should be gained.

14
Q

How do we improve reliability in an observation?

A

1) Inter-observer reliability:
Make sure the observation is not biased. Use more than one observer/psychologist to observe and record the behaviours separately. The results from both psychologists can then be compared and should give a positive correlation coefficient of +0.8 or more if the observation is reliable.

2) If inter-observer reliability is low, then reliability could be improved by ensuring that the behavioural categories have been operationalised properly and clearly, so that each observer understands the categories when recording data. The observers might need further training on which behaviours to observe and how they should be measured.

3) If the results from the pilot study are not very clear, then reliability could be improved by giving more training and practice to the observers. They can become more familiar with the behavioural categories and respond more quickly when observing participants.

15
Q

How do we improve reliability in a self-report?

A

1) Questions used in the interview:
We must make sure that the interview questions are not ambiguous. They should be very clear, so that participants understand them and can give the same answers if the questions were asked again in the future. Ambiguous questions might need to be rewritten or removed.

2) Inter-researcher reliability:
Make sure the self-report is not biased. If conducting an interview, it is possible to use more than one psychologist to interview participants, separately or together, and record answers separately. The researchers need to act in similar and consistent ways, and each researcher needs to carry out the procedure and design in exactly the same way so as to make the research consistent. The results from both psychologists can then be compared and should give a positive correlation coefficient of +0.8 or more if the interview is reliable.

16
Q

How do we improve reliability in an experiment?

A

Standardisation of instructions: if an experiment is conducted twice, then the procedures are repeated twice. Procedures should be exactly the same for each participant that takes part in the experiment; this will help ensure that reliable results are gained. Standardised instructions should be used, and key concepts and variables should be operationalised.

17
Q

What are the types of validity?

A

face validity, concurrent validity, ecological validity, temporal validity

18
Q

Define internal/experimental validity

A

Measures whether the results are due to the manipulation of the independent variable and not to confounding variables.

19
Q

Define investigator effects.

A

Investigator effects - the characteristics of the researcher can affect the participants and the outcome of the research (rather than the IV affecting the DV).

20
Q

Define demand characteristics

A

Demand characteristics - participants guess the aim of the study and change their behaviour; therefore the outcome of the study might not be due to the IV affecting the DV.

21
Q

Define confounding variables

A

Confounding variables - external variables have not been well controlled and might have an effect on the DV, rather than the IV causing the change alone.

22
Q

Define social desirability bias

A

Social desirability bias - participants might try to portray themselves in a positive light and behave in an unnatural way, which will affect the DV and the results gained.

23
Q

Define lack of operationalisation

A

Lack of operationalisation - variables have not been defined and measured properly. This could affect the results, resulting in low internal validity.

24
Q

What can internal validity be impacted by?

A

Investigator effects, demand characteristics, confounding variables, social desirability bias, lack of operationalisation

25
Q

How can internal validity be assessed?

A

Concurrent validity

Face validity

26
Q

Define concurrent validity

A

This is a way of establishing the internal validity of a new test (e.g. a new IQ test), whereby the scores gained from the new test are compared against an older, established test whose validity is already known (e.g. the Stanford-Binet IQ test). If the scores from both tests are similar and a positive correlation coefficient of +0.8 or greater is found, then the new test is judged as having high internal validity.

27
Q

Define face validity

A

This is a way to assess whether a test or measuring instrument (e.g. a questionnaire) is measuring what it should. One or more researchers/experts in the field examine the test items/questions to see, "on the face of it", whether they appear to be measuring what the test set out to measure. For instance, to measure IQ we could get a specialist psychologist in the field to examine each question in the IQ test and judge whether each question is really measuring IQ or not. This involves a quick look over the questions.

28
Q

How can we improve internal validity (using concurrent and face validity)?

A

Concurrent validity:
Concurrent validity can be improved (depending on the research method used). For instance, if low concurrent validity is found on a questionnaire, the researcher could remove questions that seem irrelevant or ambiguous, and then test the concurrent validity again.

Face validity:
Face validity can be improved (for a questionnaire) by an expert in the field examining all of the questions on the questionnaire. They might decide that some of the questions are not a good measure of the topic being investigated (e.g. IQ) and might then improve/rewrite/re-word certain questions. This will help improve face validity.

Also, reduce investigator effects, demand characteristics, and confounding variables.

29
Q

Define external validity

A

This refers to factors outside of the research setting: how well can the results gained from the research be generalised to other settings, people and time eras?

30
Q

What are the types of external validity?

A

Ecological validity and temporal validity

31
Q

Define ecological validity

A

Ecological validity:
The ability to generalise research findings to other settings and contexts, in particular to everyday situations and settings (high in mundane realism). If a study is high in mundane realism then this might increase ecological validity; however, we must carefully assess the setting that the research was carried out in.
32
Q

Define temporal validity

A

Temporal validity:
The findings from a study are true over a period of
time and can be generalised to other historical
time eras.

33
Q

How can we assess external validity?

A

Meta-analysis can be conducted, whereby findings from a range of different research studies that have investigated the same hypothesis are compared. Consistent findings from different research studies across populations, locations and periods of time indicate high ecological validity, e.g. Van Ijzendoorn, who conducted cross-cultural studies using the Strange Situation.
• Consider the environment that the study was conducted in. The environment should be quite naturalistic if the psychologist wishes to have high ecological validity. A laboratory study might be low in ecological validity because the setting is not very natural and quite artificial.
• Assess how the dependent variable was measured. For instance, the task that participants are given to do in the study, and the way it is measured, can affect the external validity. The task should have high mundane realism and should reflect a task that a person would be expected to do in everyday life, e.g. memorise a list of words.
• Assess whether the participants were behaving as naturally as possible, and ensure that demand characteristics have been kept to a minimum. Participants should not be aware of the true aims of the study, because they will change their behaviour, and this would have a dramatic effect on the DV, which could result in low validity.

34
Q

How can we improve external validity?

A

In order to improve external validity, demand characteristics could be reduced. This could be done via a double blind procedure, whereby neither the psychologist nor the participants know the true aim of the study, and therefore no one is really aware of what the research is investigating. A single blind procedure means that only the participants do not know the true aim of the study; they are deceived into believing the study is about something else.
• In order to improve ecological validity, some pieces of research should be carried out in naturalistic settings. For instance, a laboratory experiment could instead be carried out in a more natural setting, such as a field experiment, or an observation could be carried out in a covert manner. A field experiment and a covert observation could ensure that participants behave more naturally, and therefore this could lead to an improvement in ecological validity.

35
Q

Define science and state its key elements

A

Science is the systematic and controlled approach to creating knowledge that we can rely on to predict and control the world (e.g. find cures for schizophrenia)

key elements: Objectivity, Paradigms, replicability, falsifiability

36
Q

When is psychology a science?

A

• The sample is large and representative
• Key words are defined and measured (Operationalised)
• Confounding variables have been identified and controlled for (to see if they have an effect on the DV).
• Pilot studies are conducted
• There is a high element of control

37
Q

Define empirical methods

A

• “A method of gaining knowledge which relies on direct observation or testing. This can help separate unfounded beliefs from real truth. We need to look for facts and scientific evidence that can be directly tested using empirical evidence”

38
Q

What is a paradigm?

A

A shared set of assumptions and agreed methods found within scientific disciplines.
Kuhn (1962) suggested that what distinguishes scientific from non-scientific disciplines is the presence of paradigms. Social sciences like psychology lack a universally accepted paradigm, which is why psychology might be viewed as a "pre-science" rather than a science. Natural sciences like biology and physics have a number of principles at their core, e.g. the theory of evolution. Psychology, however, has too many internal disagreements and conflicting approaches to qualify as a science.

39
Q

What is a paradigm shift?

A

Kuhn stated that a paradigm shift is the result of a scientific revolution: a significant change in the dominant unifying theory of a scientific discipline.

40
Q

Describe the stages in which this shift occurs

A

1) One theory remains dominant within a scientific discipline. Some researchers might question the accepted
paradigm and might have contradictory research that disagrees with the main paradigm. Counter evidence
might start to accumulate against the main paradigm, critics might begin to gain popularity and eventually the
counter evidence becomes hard to ignore. The present paradigm might then be overthrown due to the
emergence of a new one. This is an example of a paradigm shift.
2) An established science makes rapid progress and a scientific revolution occurs due to the paradigm shift

41
Q

Give an example of a paradigm shift

A

The work of Copernicus in the sixteenth century. The paradigm used to be that the Earth was at the centre of the universe, but Copernicus was responsible for a paradigm shift: he proposed the heliocentric model, with the Sun rather than the Earth at the centre.

42
Q

Define objectivity

A

Dealing with facts in a way that is unaffected by beliefs, opinions, feelings
or expectations. Objectivity is the basis of the empirical method, and is more likely to be achieved when using laboratory experiments
or observations.

43
Q

Explain the importance of objectivity in science

A

A good researcher is always objective and keeps a, “critical distance” from the research they are conducting.
Researchers should not let their personal opinions or biases interfere or affect the outcome of the research. This
means that the findings of a piece of research should not be influenced by the psychologist that conducted the
research in the first place. A high level of objectivity increases other researcher’s confidence that the results are
accurate, and can be replicated.

45
Q

What is replicability in scientific research?

A

Replicability is the extent to which research findings can be repeated in different contexts and circumstances. It shows whether similar results can be found when a study is carried out again.

46
Q

Why is replicability important in psychology?

A

It ensures that findings are consistent over time, validates research, and prevents unreliable results from informing policy or theories, especially since sample sizes in psychology are often small.

47
Q

What are some purposes of replicability?

A

a) Guards against scientific fraud
b) Checks if results were a one-off due to extraneous/confounding variables
c) Indicates the reliability of findings
d) Can indicate the validity of findings

48
Q

In which research method is replicability greatest in psychology?

A

Replicability tends to be greatest in laboratory experiments.

49
Q

What is falsifiability according to Popper (1934)?

A

Falsifiability is the idea that scientific theories can potentially be disproved by evidence. It is the hallmark of science, referring to the ability to prove a hypothesis wrong.

50
Q

What did Popper (1969) say about falsifiability in scientific theories?

A

Genuine scientific theories should be tested and can be proven false or incorrect. A theory is considered scientific if it can be potentially disproven.

51
Q

Can a scientific principle ever be proven true according to Popper?

A

No, even if a scientific principle is successfully tested and repeated, it only means it hasn’t been proven false yet.

52
Q

What is the difference between “good science” and “pseudoscience” according to Popper?

A

Good sciences like Biology and Physics have strong theories that are constantly challenged but not easily disproven, while pseudosciences, such as Freud’s theories, are unfalsifiable and not easily tested.

53
Q

How are scientific theories constructed?

A

Theories are constructed through hypothesis testing and re-testing, based on multiple research studies, and they must be testable and falsifiable.

54
Q

Why is Freud’s theory of the id, ego, and superego considered unfalsifiable?

A

Freud’s theory cannot be properly tested, so it cannot be proven false, making it non-scientific.

55
Q

What is deductive reasoning in scientific research?

A

Deductive reasoning involves starting with a theory, forming a hypothesis, testing it using empirical methods, and then drawing conclusions from the data.

56
Q

What are the steps in deductive reasoning?

A

Theory → hypothesis → empirical testing → conclusions drawn from the data (which may support or refute the theory).
57
Q

What is the Hypothetico-deductive model proposed by Popper (1935)?

A

It suggests that theories/laws about the world should be formed first, and hypotheses should then be generated and tested to see if the theory/law is correct.

58
Q

What is inductive reasoning in scientific research?

A

Inductive reasoning involves observing a natural phenomenon or behavior, forming a hypothesis, testing it, and then generating a theory based on the conclusions drawn.

59
Q

What are the steps in inductive reasoning?

A

Observation of a natural phenomenon or behaviour → hypothesis → empirical testing → theory generated from the conclusions drawn.