MEASUREMENT Flashcards

1
Q

Systematic error vs random error

A

Systematic errors are repeated in the same way throughout an investigation (e.g. using a balance incorrectly in the same way for each measurement); because they are consistent, they can be identified and corrected for.

Random error: cannot easily be corrected because it affects each measurement differently; precision describes how repeatable the measurements are.
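A minimal sketch (all numbers hypothetical, not from the card) of the difference: a constant bias shifts every reading the same way and can be subtracted out once it is known, while random noise scatters readings unpredictably and can only be reduced by averaging more measurements.

    import random

    true_mass = 50.0   # hypothetical "true" value being measured
    bias = 2.0         # systematic error: same offset every reading (e.g. a mis-zeroed balance)
    noise_sd = 0.5     # random error: differs from reading to reading

    random.seed(1)
    readings = [true_mass + bias + random.gauss(0, noise_sd) for _ in range(10)]

    mean_reading = sum(readings) / len(readings)
    print(f"mean reading: {mean_reading:.2f}")             # sits about 2.0 above the true value
    print(f"spread: {max(readings) - min(readings):.2f}")  # scatter caused by the random error

    # Subtracting the known bias corrects the systematic error;
    # only taking more readings and averaging reduces the random error.
    corrected = sum(r - bias for r in readings) / len(readings)
    print(f"bias-corrected mean: {corrected:.2f}")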

2
Q

Example of nominal scale of measurement

A

Gender

Days of the week

3
Q

Example of ordinal level of measurement

A

Rankings

Rating scales

4
Q

Example of interval level of measurement

A

Rating scales

Temperature

5
Q

Example of ratio level of measurement

A

Timing

Quantities: height, weight, age, length
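A small illustrative sketch (the variables and values are made up) of what kind of summary is meaningful at each of the four levels covered in the cards above.

    from statistics import mean, median, mode

    # Nominal: unordered categories -> only counts / the mode make sense.
    weekday = ["Mon", "Tue", "Mon", "Fri", "Mon"]
    print(mode(weekday))                    # most common category

    # Ordinal: ordered ranks, but the gaps between ranks are not equal -> median is meaningful.
    race_rank = [1, 2, 3, 5, 8]
    print(median(race_rank))

    # Interval: equal intervals but no true zero (e.g. temperature in Celsius) ->
    # means are meaningful, ratios are not (20 C is not "twice as hot" as 10 C).
    temp_c = [10.0, 15.5, 20.0, 25.0]
    print(mean(temp_c))

    # Ratio: equal intervals plus a true zero (e.g. height in cm) -> ratios are meaningful.
    height_cm = [150.0, 160.0, 175.0, 180.0]
    print(max(height_cm) / min(height_cm))  # "1.2 times taller" is a valid statement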

6
Q

How are constraints considered measurement concerns

A

Amount of time, money, available participants, and equipment; because there are multiple ways to measure any construct, these practical constraints influence which measure is chosen.

7
Q
How do we know we are measuring the concept? 
How valid is it? (VALIDITY)
A

Is there a degree of fit between the construct and the indicator?

8
Q

RELIABILITY

Are our measurements consistent and dependable?

A

Will respondents answer in the same way if asked again?

9
Q

What are the branches of validity? (hint: 4)

give a brief example of each

A

Face validity: extent to which a tool APPEARS to measure what it is supposed to
Content validity: extent to which the items are relevant to the content being measured
Criterion validity (e.g. predictive validity): extent to which responses on a measure relate to an external criterion, such as predicting future behaviour
Construct validity: extent to which a tool measures an underlying construct

10
Q

What is face validity

A
  • when a tool subjectively appears to measure a construct
  • not a good way to establish validity
  • it relies on people’s opinions, and opinions can be wrong

“On the face of it”
Subjective assessment (by researchers or experts)
A weak, subjective method, but a useful first step

Example: measuring interviewer skill
Maintain eye contact
Use neutral probes

11
Q

What is content validity and an example ?

A
  • extent to which the individual items on a test are relevant to the content area it is testing
  • Does the measure cover the entire range of meanings included in the concept?
  • Based on judgement

Example: you wouldn’t put a spelling question on a math test

12
Q

What is criterion validity?

what are the three types of criterion validity?

A

Checking against an external criterion believed to be another indicator of the same construct

Predictive validity
Concurrent validity
Known groups validity

13
Q

What is predictive validity

give an example

A
  • a type of criterion validity
  • when a tool can predict certain behaviors
  • does the measure predict some future criterion that it’s expected to predict?

Example: does attendance at BIOL1900 lectures accurately predict student performance on exams?

14
Q

Concurrent validity

A
  • a type of criterion validity
  • does the measure relate to some known criterion concurrently?

example: Do scores on a measure of health-related quality of life correspond to the ratings based on clinician interviews?

15
Q

Known groups validity (hint: type of … validity)
(hint: differentiate)
give an example

A
  • type of criterion validity
  • does the measure differentiate people in the way you would expect?

example: Does grip strength differentiate between those of low and high risk of cardiovascular mortality?

16
Q
Construct validity (hint: how does it relate to other constructs?)
what are two types of construct validity?
A
  • extent to which a tool measures a construct
  • hard to prove
  • Relates to other constructs in a way that is expected based on theoretical relationships

Convergent validity
Divergent validity

17
Q

Convergent validity (hint: type of … validity)
What is convergent validity associated with?
Give an example

A
  • type of construct validity
  • Associated with other measures that it should be related to

example: Do scores on a vertical jump test correspond to wall sit test times (leg strength)

18
Q
Divergent validity (hint: type of ... validity) 
give an example
A
  • type of construct validity
  • Does not associate with measures of other constructs as closely as it does with other measures of the same construct

example: Do BSS scores relate more to measures of sit and reach than they do to standing long jump?
19
Q

What are the three main points (degrees) of reliability

A

Consistency
Repeatability
Agreement

20
Q

What is consistency (hint: one of the three main degrees of …)

A
  • a degree of reliability
  • Degree of consistency in a measurement
  • Do all items on the measure reflect the same underlying construct?
  • Internal consistency reliability (Cronbach’s alpha)
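A minimal sketch of Cronbach's alpha for a handful of made-up questionnaire items, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

    from statistics import pvariance

    # Hypothetical responses: rows = respondents, columns = items meant to tap one construct.
    scores = [
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
    ]

    def cronbach_alpha(rows):
        """Cronbach's alpha: internal consistency of a set of items."""
        k = len(rows[0])                                       # number of items
        item_vars = [pvariance(col) for col in zip(*rows)]     # variance of each item
        total_var = pvariance([sum(r) for r in rows])          # variance of the total scores
        return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

    print(f"alpha = {cronbach_alpha(scores):.2f}")             # values near 1 = high internal consistency
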
21
Q

What is the degree of repeatability of measurement (hint: one of the three main degrees of …)
give an example

A
  • a degree of reliability
  • Does the same measurement technique give the same result each time you use it?
  • Test-retest reliability (Pearson’s correlation, r)
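A minimal sketch of test-retest reliability with made-up scores: the same participants are measured on two occasions, and Pearson's r between the two sets of scores indexes how repeatable the measurement is.

    # Hypothetical scores for six participants measured twice, one week apart.
    test_1 = [12, 15, 9, 20, 17, 11]
    test_2 = [13, 14, 10, 19, 18, 11]

    def pearson_r(x, y):
        """Pearson's correlation coefficient between two equal-length lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # r close to 1 -> the technique gives nearly the same result each time (high test-retest reliability).
    print(f"test-retest r = {pearson_r(test_1, test_2):.2f}")
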
22
Q

Problems with reliability (hint: 3 problems)

A
  • Internal consistency: difficult items, unrelated items
  • Test-retest: memory effects, practice effects, time interval
  • Inter-rater: non-standardised or no instructions, different experiences, need calibration
23
Q

Type 1 error

A
  • Finding a significant relationship when one does not exist in the real world
  • “false positive”: the error of rejecting a null hypothesis when it is actually true
24
Q

Type 2 error

A
  • Finding no significant relationship when one does exist in the real world
  • “false negative”: the error of not rejecting a null hypothesis when the alternative hypothesis is the true state of nature
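A small simulation sketch of both error types (assuming NumPy and SciPy are available; the sample size, effect size and number of trials are all made up), using repeated two-sample t-tests at alpha = 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n, trials = 0.05, 30, 2000

    # Type 1 error: both groups come from the SAME population (the null is true),
    # yet the test sometimes comes out "significant" -> false positive.
    false_pos = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
        for _ in range(trials)
    )

    # Type 2 error: the groups genuinely differ (the alternative is true),
    # yet the test sometimes fails to reach significance -> false negative.
    false_neg = sum(
        stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.5, 1, n)).pvalue >= alpha
        for _ in range(trials)
    )

    print(f"Type 1 error rate: {false_pos / trials:.2f}")  # close to alpha (~0.05) by design
    print(f"Type 2 error rate: {false_neg / trials:.2f}")  # depends on effect size and sample size
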
25
Q

What is a construct?

A

Abstract ideas that are not observable

26
Q

What kind of validity measures how representative a research project is at 'face value'?

A

Face validity

27
Q

Putting a spelling question on a maths test is an example of low .... validity? And why?

A

- Low content validity
- Because a spelling question is not relevant to the content a maths test is testing

28
Q

Predictive validity, concurrent validity, and known groups validity are all types of what validity?

A

Criterion validity

29
Q

Does attendance at BIOL1900 lectures accurately predict student performance on exams? What kind of validity would this represent, and why?

A

- Predictive validity
- The measure of lecture attendance is predicting certain future behaviours (exam results)

30
Q

What kind of validity encompasses the following question (and why): Do scores on a measure of health-related quality of life correspond to the ratings based on clinician interviews?

A

- Concurrent validity
- Seeing if the measure (scores) relates to some known criterion (ratings) concurrently

31
Q

Which validity encompasses the question (and why): Does grip strength differentiate between those of low and high risk of cardiovascular mortality?

A

- Known groups validity
- Seeing if the measure differentiates people in the way you would expect

32
Q

Convergent validity and divergent validity are examples of what validity?

A

Construct validity

33
Q

What validity is the extent to which a tool measures a construct?

A

Construct validity

34
Q

What kind of validity encompasses (and why): Do scores on a vertical jump test correspond to wall sit test times (leg strength)?

A

- Convergent validity
- Association with other measures that it should be related to (both relate to the same construct of leg strength)

35
Q

Which validity encompasses (and why): Do BSS scores relate more to measures of sit and reach than they do to standing long jump?

A

- Divergent validity
- Does not associate with measures of other constructs as closely as it does with measures of the same construct (is BSS more related to flexibility or explosive leg power?)

36
Q

Why is inter-rater reliability an issue?

A

- Non-standardised or no instructions
- Different experiences
- Need calibration

37
Q

What does (Pearson's correlation, r) relate to?

A

- The degree of repeatability in reliability

38
Q

What does (Cronbach's alpha) relate to?

A

- Internal consistency in reliability