Reliability, validity and threats to validity Flashcards

1
Q

Reliability

A

A measure of consistency.
How replicable is it?
If a psychological test has a high degree of reliability, it will produce similar results each time it is replicated.

2
Q

Internal reliability

A

The extent to which something is consistent within itself.
For example:
All of the items within an IQ test should be measuring the same thing (intelligence).

3
Q

External reliability

A

The extent to which a measure varies from one occasion to another; the less it varies, the higher its external reliability.
For example:
A questionnaire to assess the personality of an individual should return similar results on any number of occasions.

4
Q

Split-half method

A

This involves splitting a participant's test answers in half (e.g., comparing answers to the odd-numbered questions with answers to the even-numbered questions) and seeing whether the individual obtained the same or similar scores on the two halves.
If so, internal reliability is high; if not, it is low and individual questions would need to be redesigned to ensure all questions consistently test the aims of the study.
The two half-scores are compared using a correlation coefficient.
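A minimal Python sketch of that comparison, assuming item scores for several participants are held in a simple array (all data below are invented for illustration):

```python
import numpy as np

# Invented item scores on a 10-item test
# (rows = participants, columns = items).
scores = np.array([
    [4, 5, 3, 4, 5, 4, 3, 5, 4, 4],
    [2, 1, 2, 3, 1, 2, 2, 1, 3, 2],
    [5, 4, 5, 5, 4, 5, 5, 4, 4, 5],
    [3, 3, 2, 3, 4, 3, 3, 2, 3, 3],
])

# Split each participant's answers into odd- and even-numbered items
# and total each half.
odd_half = scores[:, 0::2].sum(axis=1)
even_half = scores[:, 1::2].sum(axis=1)

# Correlate the two half-scores; a high positive coefficient
# (conventionally above +.80) suggests good internal reliability.
r = np.corrcoef(odd_half, even_half)[0, 1]
print(f"Split-half correlation: r = {r:.2f}")
```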

5
Q

Test-retest reliability

A

A method of assessing the reliability of a questionnaire or psychological test by assessing the same person on two separate occasions.
There must be sufficient time between the test and the retest so that participants are not simply recalling their previous responses from memory, but not so long that their attitudes may have changed.
The scores from the two occasions are then correlated; if the correlation is significant and positive, the reliability of the measure is considered to be good.
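A brief sketch of the correlation step, assuming the scores from the two occasions are held in two lists and using scipy.stats.pearsonr for the significance check (the scores are invented):

```python
from scipy.stats import pearsonr

# Invented questionnaire scores for the same participants on two occasions.
test_scores   = [32, 45, 28, 39, 41, 35, 30, 44]
retest_scores = [30, 47, 29, 38, 43, 33, 31, 45]

# Pearson's r measures the strength of the linear relationship; the p-value
# indicates whether the correlation is statistically significant.
r, p = pearsonr(test_scores, retest_scores)

if r > 0 and p < 0.05:
    print(f"r = {r:.2f}, p = {p:.3f}: test-retest reliability looks good")
else:
    print(f"r = {r:.2f}, p = {p:.3f}: reliability is questionable")
```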

6
Q

Inter-observer / inter-rater reliability

A

An issue relevant to observational research.
Observers may have differing perspectives, which may make results unreliable and introduce subjectivity bias.
A small-scale trial run (pilot study) of the observation should be carried out to check that observers are applying the behavioural categories in the same way.
All observers watch the same event but record the data independently.
Reliability is then checked as (total number of agreements) / (total number of observations), which should exceed +.80.
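A small sketch of that calculation, assuming two observers' category records for the same observation intervals are available as lists (categories and counts are invented):

```python
# Invented records from two observers coding the same 20 observation
# intervals against the same behavioural categories.
observer_a = ["play", "aggression", "play", "rest", "play", "aggression",
              "rest", "play", "play", "rest", "aggression", "play",
              "rest", "play", "aggression", "play", "rest", "play",
              "play", "rest"]
observer_b = ["play", "aggression", "play", "rest", "rest", "aggression",
              "rest", "play", "play", "rest", "aggression", "play",
              "rest", "play", "play", "play", "rest", "play",
              "play", "rest"]

# Inter-observer reliability = total agreements / total observations.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
reliability = agreements / len(observer_a)

print(f"Agreement: {agreements}/{len(observer_a)} = {reliability:.2f}")
print("Acceptable" if reliability > 0.80 else "Categories need revising")
```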

7
Q

Improving reliability - questionnaires

A

The correlation of the data must exceed +.80; otherwise some items will need to be deselected or rewritten.
Complex or ambiguous questions may be misinterpreted, so they may need to be simplified or changed from open to closed questions.

8
Q

Improving reliability - interviews

A

Either use the same interviewer each time or provide comprehensive training so that interviewers behave consistently and avoid ambiguous or leading questions.
Such problems can largely be avoided if the interview is structured.

9
Q

Improving reliability - lab experiments

A

The researcher must have strict control over the conditions; what matters is the precision with which the method is replicated, rather than the reliability of the findings themselves.
Findings would not be reliable if participants were tested under slightly different conditions each time.

10
Q

Improving reliability - observations

A

Behavioural categories must be operationalised: measurable and self-evident.
Categories should not overlap or be ambiguous, and all possible behaviours should be covered by the checklist.

11
Q

Validity

A

The extent to which the observed effect is genuine.
Does it measure what it sets out to measure and can it be generalised to other settings?

12
Q

Internal validity

A

Refers to whether the effects observed are due to the manipulation of the IV and not some other factor.
A major threat to internal validity is demand characteristics.

13
Q

Validity - example

A

Some researchers question the internal validity of Milgram's obedience study, claiming that the participants were merely playing along: they knew there was a good probability that the shocks administered were not real, so they were simply responding to the demands of the situation, which were reinforced by the authority figure's repeated prompts to continue.

14
Q

External validity

A

The extent to which findings can be generalised to settings and factors outside of the investigation.

15
Q

External validity - example

A

Generalising to other situations, populations of people and other eras.

16
Q

Ecological validity

A

Generalising findings from one setting to others.
In particular, from a laboratory setting into “everyday life”.
If the task used to measure the dependent variable is not representative of everyday life, it is said to have low "mundane realism", and it is this that lowers ecological validity.

17
Q

Face validity

A

A very basic form of validity in which a scale or measure is judged by whether it appears, on the face of it, to measure what it is supposed to measure, or by passing it to an expert to check.

18
Q

Concurrent validity

A

The extent to which scores on a particular test closely match, or correlate with, the results of another recognised and well-established test.

19
Q

Concurrent validity - example

A

Scores on a new IQ test may be measured against those from a well-established test to check concurrent validity. Close agreement between the two sets of data would indicate that the new test has a high level of validity. The correlation between the two sets of scores should be at least +.80.
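A compact sketch of that check, again using a correlation coefficient on invented scores for the same participants on the new and the established test:

```python
import numpy as np

# Invented scores for the same participants on a new IQ test and on a
# well-established one.
new_test    = [102, 115, 98, 124, 88, 109, 131, 95]
established = [100, 118, 97, 120, 90, 112, 128, 99]

# Concurrent validity: the two sets of scores should correlate at +.80 or above.
r = np.corrcoef(new_test, established)[0, 1]
print(f"r = {r:.2f} -> {'acceptable' if r >= 0.80 else 'low'} concurrent validity")
```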

20
Q

Content validity

A

Aims to demonstrate that the content of a test represents the area of interest.

21
Q

Construct validity

A

The extent to which performance on the test measures an identified underlying construct.

22
Q

Improving validity - experimental research

A

Use of a control group.
Standardisation of procedures to minimise impact of participant reactivity and investigator effects.
The use of single-blind (participants not made aware of the aims of the study) and double-blind procedures (a third party conducts the study without knowing the main purpose).

23
Q

Improving validity - questionnaires

A

Incorporation of a lie scale within the questions to control for the effects of social desirability bias.
Validity is further enhanced by assuring respondents that all data will remain anonymous.

24
Q

Improving validity - observations

A

Ecological validity is higher when there is minimal intervention from the observer.
Covert observation is likely to have higher validity because the observed behaviour is more likely to be natural and authentic.
Behavioural categories that are too broad, ambiguous or overlapping are likely to have a negative impact on the validity of the data collected.

25
Q

Improving validity - qualitative methods

A

Qualitative methods are considered to have higher ecological validity than quantitative methods; the depth of detail in qualitative data can better reflect the participant's reality.
The researcher may, however, need to demonstrate the interpretative validity of their conclusions: the extent to which the researcher's interpretation of events matches that of the participants.
This can be demonstrated by coherence of reporting and the inclusion of direct quotes.
Validity is further enhanced by triangulation: using a number of different sources of evidence, such as interviews with the participant and with friends and family, diary entries, observations, etc.

26
Q

Demand characteristics

A

A cue that may make participants unconsciously aware of the aims of the study, or that helps them work out what the study is about.

27
Q

Researcher bias

A

Information or cues from the researcher that encourage certain behaviours in participants, which might lead results to reflect the fulfilment of the researcher's expectations rather than the effect of the independent variable.

28
Q

Leading question

A

A question whose content or format suggests to the participant what the desired response is, or leads them towards that response.

29
Q

Indirect investigator effects

A

Effects of the investigator that influence the outcome indirectly, through the design of the study rather than through direct interaction with participants.
For example:
Non-standardised instructions, or outcome measures that are operationalised in a way that makes the desired result more likely.

30
Q

Standardised instructions / procedures

A

Instructions or procedures that follow a set order for all participants.
This allows for replication and avoids researcher bias.

31
Q

Operationalise

A

Defining variables in a form that is easy to test.
Operationalised variables are precise and measurable, and aim to be measured without bias.

32
Q

Single blind

A

The participant isn’t aware of the research aims and / or the condition that they’re in.
However, the researcher is.

33
Q

Double blind

A

Both the participant and the researcher are unaware of the research aims and / or hypothesis.