Reliability And Validity Flashcards

1
Q

What is reliability?

A

The consistency of a measuring instrument/results

2
Q

How is reliability measured?

A

By assessing:
- inter-rater reliability
- split-half reliability
- test re-test reliability

3
Q

What is inter-rater reliability?

A
  • the extent to which there is agreement between 2 or more raters
  • often used for observational research and content analysis
4
Q

How do you assess inter-rater reliability?

A
  • 2 (or more) observers/raters independently observe/assess/rate the same thing and record their data
  • the data from the 2 or more raters are compared and correlated
  • a positive correlation means there is inter-rater reliability; a correlation coefficient of 0.80 or more = good inter-rater reliability (see the sketch below)
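
A minimal sketch of this check in Python, correlating hypothetical scores from two raters with statistics.correlation (Python 3.10+); the data and the 0.80 benchmark in the comment are illustrative assumptions, not taken from any particular study:

```python
# Inter-rater reliability sketch: correlate two raters' independent
# ratings of the same targets (hypothetical data for illustration).
from statistics import correlation  # Pearson's r, Python 3.10+

rater_a = [4, 7, 6, 8, 5, 9, 3, 7]  # rater A's scores for 8 observations
rater_b = [5, 7, 5, 8, 6, 9, 4, 6]  # rater B's scores for the same 8 observations

r = correlation(rater_a, rater_b)
print(f"Inter-rater correlation: r = {r:+.2f}")
# A strong positive correlation (r >= 0.80) would count as good
# inter-rater reliability.
```
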
5
Q

What is split-half reliability measuring?

A

Measures internal reliability/internal consistency of a measuring instrument

6
Q

How do you assess split-half reliability?

A
  • Split the test in half (first half and second half)
  • correlate participants' scores on the two halves
  • a positive correlation indicates internal consistency (see the sketch below)
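
A minimal sketch of the split-half procedure in Python, assuming hypothetical item scores and a simple first-half/second-half split; the data are illustrative only:

```python
# Split-half reliability sketch: total each participant's score on the
# first and second halves of the test, then correlate the half-totals
# (hypothetical data for illustration).
from statistics import correlation  # Pearson's r, Python 3.10+

# Item scores for 5 participants on a 6-item test
item_scores = [
    [3, 4, 2, 3, 4, 3],
    [5, 5, 4, 5, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [4, 4, 3, 4, 3, 4],
    [1, 2, 1, 1, 2, 1],
]

first_half = [sum(items[: len(items) // 2]) for items in item_scores]
second_half = [sum(items[len(items) // 2 :]) for items in item_scores]

r = correlation(first_half, second_half)
print(f"Split-half correlation: r = {r:+.2f}")
# A strong positive correlation indicates internal consistency.
```
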
7
Q

What is test re-test reliability?

A
  • External reliability
  • measures consistency over time / on different occasions
8
Q

How do you assess test re-test reliability?

A
  • Same participants complete the same test on 2 separate occasions
  • a positive correlation between the first and second testing indicates consistency (see the sketch below)
    Same test + same people = reliability; same test + different people = validity
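
A minimal sketch of the test-retest check in Python, correlating the same participants' hypothetical scores from two occasions; the data are illustrative only:

```python
# Test-retest reliability sketch: correlate the same participants'
# scores from two testing occasions (hypothetical data for illustration).
from statistics import correlation  # Pearson's r, Python 3.10+

time_1 = [12, 18, 15, 20, 9, 14]   # scores on the first occasion
time_2 = [13, 17, 15, 19, 10, 15]  # same participants, second occasion

r = correlation(time_1, time_2)
print(f"Test-retest correlation: r = {r:+.2f}")
# A strong positive correlation indicates consistency over time.
```
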
9
Q

What is validity?

A

Whether the research measures what it set out to measure
- a measuring tool cannot be valid if it isn't reliable

10
Q

What is internal validity?

A
  • Whether the researcher tested what they intended to test
  • is concerned with what goes on within the study
11
Q

What factors reduce internal validity?

A

Observer/researcher bias
Demand characteristics and participant reactivity
Investigator effects
Social desirability bias
Confounding variables
Poorly operationalised behavioural categories

12
Q

What is external validity?

A

The extent to which the findings of the study can be generalised beyond the study to:
- other situations (ecological)
- other people (population)
- other cultures (cross-cultural)
- other times (temporal)

13
Q

What is ecological validity?

A

The ability to generalise a research effect beyond the particular setting in which it is demonstrated to other settings

14
Q

What is mundane realism?

A
  • how closely a study mirrors the real world
  • the research environment is realistic to experiences in the real world
  • how real it feels for participants
15
Q

What is temporal validity?

A

Whether research from one time period can be generalised to another time period

16
Q

How can you assess/improve validity?

A
  • Face (or content) validity
  • Concurrent validity
  • Predictive validity
17
Q

What is face (or content) validity?

A

Extent to which a test looks like it measures what it claims to measure

18
Q

How do you assess face validity?

A

Ask independent judges whether they think the measuring tool measures what it set out to measure

19
Q

What is concurrent validity?

A

Establishing validity by comparing your newly designed test with an existing, previously validated one assessing the same factor/variable

20
Q

How do you assess concurrent validity?

A
  • Compare performance on the new test with performance on a previously established test assessing the same thing
  • participants complete both tests
  • a positive correlation between the 2 tests = concurrent validity (see the sketch below)
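
A minimal sketch of the concurrent validity check in Python, correlating hypothetical scores on a new test with scores on an established test of the same variable; the data are illustrative only:

```python
# Concurrent validity sketch: the same participants complete a new test
# and an established test of the same variable; correlate the two sets
# of scores (hypothetical data for illustration).
from statistics import correlation  # Pearson's r, Python 3.10+

new_test = [22, 30, 18, 27, 25, 16]     # scores on the newly designed test
established = [24, 31, 17, 26, 27, 15]  # same participants, validated test

r = correlation(new_test, established)
print(f"Concurrent validity correlation: r = {r:+.2f}")
# A strong positive correlation suggests the new test measures the same
# construct as the established one.
```
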