RELIABILITY AND VALIDITY Flashcards

1
Q

Define reliability

A

the consistency or repeatability of your measurement

2
Q

what are the three types of reliability?

A

stability of the measure (test-retest)

internal consistency of the measure (split-half, Cronbach's alpha)

agreement or consistency across raters (inter-rater reliability)

3
Q

what does test-retest reliability look at?

A

whether your test measures the same thing every time you use it

the same questionnaire is given on two occasions and the two sets of scores are correlated

4
Q

test-retest - how do you address the stability of the measure?

A
  • you administer the measure at one point in time (test)
  • you then give the same measure to the same participants at a later point in time (retest)
  • correlate the scores on the two measures (see the sketch below)
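
A minimal sketch of that correlation step in Python; the scores are invented for illustration, and scipy is assumed to be available:

```python
# Minimal sketch: correlate test and retest scores (hypothetical data).
from scipy.stats import pearsonr

test = [12, 15, 9, 20, 17, 11, 14]     # scores at time 1 (test)
retest = [13, 14, 10, 19, 18, 12, 15]  # same participants at time 2 (retest)

r, p = pearsonr(test, retest)
print(f"test-retest reliability: r = {r:.2f}")  # r near 1 = stable measure
```
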
5
Q

what are the two main problems with test-retest?

A

Memory effect -
participants may remember the test items or their earlier answers > improves their second score
- too short a time between tests = greater risk of memory effects

Practice effect -
performance improves because of practice in test taking
- too long a time between tests = risk of other variables intervening (e.g. additional learning)

6
Q

what does split half reliability look at?

A

whether your measure is internally consistent

split the questionnaire in half and correlate the data from the two halves

7
Q

split half reliability - how do you test whether your measure is internally consistent?

A
  • administer a single measure at one time to a group of participants
  • split the measure into two halves and correlate the scores
  • higher correlation means greater reliability

e.g. a 20-item measure: score the first half (10 items) and the second half (10 items), then test the correlation between the two halves (see the sketch below)
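
A minimal sketch of that 20-item example in Python, on simulated data; the Spearman-Brown correction at the end is a standard extra step for estimating full-test reliability from a half-test correlation, not something stated on this card:

```python
# Sketch: split-half reliability for a 20-item measure (simulated data).
import numpy as np

rng = np.random.default_rng(42)
trait = rng.normal(size=(30, 1))           # latent construct, 30 participants
items = trait + rng.normal(size=(30, 20))  # each item = trait + noise

first_half = items[:, :10].sum(axis=1)     # total score on items 1-10
second_half = items[:, 10:].sum(axis=1)    # total score on items 11-20

r = np.corrcoef(first_half, second_half)[0, 1]
full_test = 2 * r / (1 + r)  # Spearman-Brown: estimate full-test reliability
print(f"split-half r = {r:.2f}, Spearman-Brown corrected = {full_test:.2f}")
```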

8
Q

pros and cons of split-half reliability

A

PRO
eliminates memory/practice effects

CON
are the two halves really equivalent?

9
Q

two methods of assessing internal consistency

A

split-half method

Cronbach's alpha

10
Q

what does Cronbach's alpha assess?

A

the internal consistency of your measure

tells you how well the items or questions in your measure appear to reflect the same underlying construct

good internal consistency = individuals respond in a similar way across all the items

11
Q

how is Cronbach's alpha calculated?

A

mathematically equivalent to the average of all possible split-half reliabilities

coefficient alpha can range from 0 to 1 > the closer to 1, the better the reliability (see the sketch below)
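
A short sketch using the usual computational formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), on simulated data:

```python
# Sketch: Cronbach's alpha via the standard variance formula (simulated data).
import numpy as np

def cronbach_alpha(items):
    """items: participants x items array of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of participants' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
trait = rng.normal(size=(50, 1))
items = trait + rng.normal(size=(50, 10))      # 10 items tapping one construct
print(f"alpha = {cronbach_alpha(items):.2f}")  # closer to 1 = better reliability
```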

12
Q

what does inter-rater reliability look at?

A

whether different raters measure the same thing

checking the match between two or more raters or judges

e.g. coding videos of infants’ “looking time” - need to check agreement amongst the coders

13
Q

how is inter-rater reliability calculated?

A

nominal/ordinal scale
- the percentage of times different raters agree

interval or ratio scale
- correlation coefficient
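
A minimal sketch of both calculations in Python, with invented ratings:

```python
# Sketch: inter-rater reliability two ways (invented ratings).
import numpy as np

# nominal codes (e.g. "look" vs "away"): percentage agreement
rater_a = ["look", "away", "look", "look", "away", "look"]
rater_b = ["look", "away", "away", "look", "away", "look"]
agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"percent agreement = {agreement:.0%}")  # 5 of 6 codes match

# interval/ratio ratings (e.g. looking time in seconds): correlation coefficient
times_a = [4.2, 7.5, 3.1, 9.0, 5.6]
times_b = [4.0, 7.9, 3.3, 8.7, 5.2]
r = np.corrcoef(times_a, times_b)[0, 1]
print(f"inter-rater r = {r:.2f}")
```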

14
Q

define validity

A

the credibility of the measure

are we measuring what we think we are?

15
Q

why is validity an issue?

A

many variables in social research cannot be directly observed

  • e.g. motivation, satisfaction, helplessness
16
Q

types of validity

A

face validity

content validity

criterion validity (concurrent, predictive)

construct validity (convergent, discriminant/divergent)

17
Q

what is face validity?

A

items appear to relate to construct

a weak, subjective method for assessing validity

a good first step to validity assessment

18
Q

what is content validity?

A

the extent to which the measure is representative of a sampling of relevant dimensions

does it cover all aspects of the construct that it's meant to measure?

how much does the measure cover the content of the definition?

19
Q

what is criterion-related validity?

A

checking the performance of your measure against an external criterion

agrees with external sources

20
Q

what are the two types of criterion-related validity?

A

concurrent

predictive

21
Q

define concurrent criterion validity

A

a means of establishing the validity of your measure by comparing it to a gold standard
> i.e. an existing validated measure of the same construct

agrees with pre-existing “gold standard” measure

22
Q

what is predictive criterion validity?

A

assessing the validity of your measure against what you theoretically predict to happen

agrees with future behaviour

23
Q

define construct validity

A

how well your measure relates to measures of other constructs in ways consistent with theory

24
Q

what are the two types of construct validity?

A

convergent

divergent

25
Q

define divergent construct validity

A

assessing validity by comparing measures of constructs that theoretically should not be related to each other and are observed not to relate to each other

> theoretically should not be related and in fact are not

i.e. you should be able to discriminate/diverge between dissimilar constructs

26
Q

define convergent construct validity

A

assessing validity by comparing measures of constructs that theoretically should be related and are observed to relate to each other

> theoretically should be related and in fact are

i.e. there is correspondence or convergence between similar constructs
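
As a closing illustration, a hypothetical convergent/divergent check in Python: an invented new anxiety measure should correlate highly with an established anxiety measure (convergent) but not with a theoretically unrelated variable such as height (divergent). All names and data here are made up for the sketch:

```python
# Hypothetical convergent vs. divergent validity check (all data invented).
import numpy as np

rng = np.random.default_rng(7)
anxiety = rng.normal(size=40)  # the underlying construct

new_scale = anxiety + rng.normal(scale=0.5, size=40)  # our new anxiety measure
old_scale = anxiety + rng.normal(scale=0.5, size=40)  # established anxiety measure
height = rng.normal(size=40)                          # theoretically unrelated

convergent = np.corrcoef(new_scale, old_scale)[0, 1]  # should be high
divergent = np.corrcoef(new_scale, height)[0, 1]      # should be near zero
print(f"convergent r = {convergent:.2f}, divergent r = {divergent:.2f}")
```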