Week 3 Flashcards

1
Q

What are the four major levels of measurement?

A

Nominal; ordinal; interval; ratio

2
Q

What are the two main indicators of the quality of measurement?

A

Reliability and validity

3
Q

Define ‘level of measurement’

A

Level of measurement describes the relationship between numerical values on a measure.

4
Q

Describe nominal level of measurement:

A

Measuring a variable by assigning a number arbitrarily, in order to name it numerically so that it can be distinguished from other objects

5
Q

Explain ordinal level of measurement

A

Measuring a variable using ranking

6
Q

Explain interval level of measurement

A

Measuring a variable on a scale where the distance between numbers is interpretable

7
Q

Explain ratio level of measurement

A

Measuring a variable on a scale where the distance between numbers is interpretable and there is an absolute zero value

8
Q

Why is level of measurement important?

A
  1. It helps you decide how to interpret the data from the variable
  2. It helps you decide what statistical analysis is appropriate on the values that were assigned.
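As a sketch of point 2, each level of measurement permits different summary statistics (the variables and values below are hypothetical examples):

```python
import statistics

jersey_numbers = [7, 10, 23, 7]     # nominal: numbers are only labels
satisfaction = [1, 2, 2, 3, 5]      # ordinal: ranks; distances not interpretable
temps_celsius = [18.0, 21.5, 25.0]  # interval: interpretable distances, no true zero
weights_kg = [55.0, 70.0, 85.0]     # ratio: interpretable distances and absolute zero

print(statistics.mode(jersey_numbers))    # nominal supports the mode only
print(statistics.median(satisfaction))    # ordinal adds the median
print(statistics.mean(temps_celsius))     # interval adds the mean
print(max(weights_kg) / min(weights_kg))  # ratio also makes ratios meaningful
```

Taking the mean of jersey numbers would run without error, but the result would be meaningless; the level of measurement tells you which operations to interpret.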
9
Q

There are two criteria for evaluating the quality of measurement. Name both and explain them.

A

Reliability: the consistency of measurement
Validity: the accuracy with which a theoretical construct is translated into an actual measure.

10
Q

How can you infer the degree of reliability?

A

By checking whether the observation provides the same results each time it is repeated.

11
Q

Explain true score theory:

A

True score theory maintains that every observable score is the sum of two components: the true ability of the respondent on that measure, and random error.

12
Q

What’s a ‘true score’?

A

Essentially, the score that a person would have received if the measure were perfectly accurate (i.e., contained no measurement error)

13
Q

Why is true score theory important?

A
  1. It is a simple yet powerful model for measurement
  2. It is the foundation of reliability theory
  3. It can be used in computer simulations as the basis for generating observed scores with certain known properties.
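A minimal sketch of point 3, simulating observed scores as true score plus random error (the true score of 100 and error standard deviation of 5 are arbitrary choices):

```python
import random

random.seed(0)  # make the illustration reproducible

def observed_scores(true_score, error_sd, n):
    """Simulate n observed scores: each is the true score plus random error."""
    return [true_score + random.gauss(0, error_sd) for _ in range(n)]

scores = observed_scores(true_score=100, error_sd=5, n=10_000)
mean_observed = sum(scores) / len(scores)
# Random error varies by chance around zero, so over many observations
# it averages out and the mean observed score approaches the true score.
```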
14
Q

What if some errors are not random, but systematic?

A

One way to deal with this is to revise the simple true score model by dividing the error component into two subcomponents: random error and systematic error.

15
Q

What is ‘random error’?

A

Random error is a component or part of the value of a measure that varies entirely by chance.

16
Q

What is ‘systematic error’?

A

Systematic error is a component of an observed score that consistently affects responses in the same direction across the distribution.

17
Q

What’s the difference between random error and systematic error?

A

Unlike random errors, systematic errors tend to be consistently either positive or negative; because of this, systematic error is sometimes considered to be bias in measurement

18
Q

How can you reduce measurement errors?

A
  1. Pilot test your instruments and get feedback from respondents
  2. Train the interviewers or observers thoroughly
  3. Double-check the data for your study thoroughly
  4. Use statistical procedures to adjust for measurement error
  5. Use multiple measures of the same construct
19
Q

Name and explain the four types of reliability

A
  1. Inter-rater or inter-observer reliability is used to assess the degree to which different raters/observers give consistent estimates of the same phenomenon
  2. Test-retest reliability is used to assess the consistency of an observation from one time to another
  3. Parallel-forms reliability is used to assess the consistency of the results of two tests constructed in the same way from the same content domain
  4. Internal consistency reliability is used to assess the consistency of results across items within a test.
20
Q

Explain Cohen’s Kappa

A

Cohen’s Kappa is a statistical estimate of inter-rater reliability that is more robust than simple percent agreement because it adjusts for the probability that some agreement occurs purely by chance.
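A minimal sketch of the computation for two raters with categorical ratings (the ratings below are made up): Kappa = (observed agreement − chance agreement) / (1 − chance agreement).

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement: probability that both raters independently
    # pick the same category, given their marginal frequencies.
    chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - chance) / (1 - chance)

rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_2 = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(rater_1, rater_2))  # 0.5: agreement halfway above chance
```

Here the raters agree on 6 of 8 items (75%), but with balanced yes/no marginals, chance alone would produce 50% agreement, so Kappa is only 0.5.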

21
Q

Explain Cronbach’s Alpha

A

Cronbach’s Alpha takes all possible split halves into account: it is mathematically equivalent to the average of all possible split-half estimates.
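Rather than enumerating every split, Alpha is usually computed from the item variances and the variance of the total score, α = k/(k−1) · (1 − Σ var_item / var_total). A sketch with made-up data:

```python
import statistics

def cronbachs_alpha(items):
    """items: one list of respondent scores per test item (equal lengths)."""
    k = len(items)
    item_vars = [statistics.pvariance(item) for item in items]
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return k / (k - 1) * (1 - sum(item_vars) / statistics.pvariance(totals))

# Hypothetical data: 3 items answered by 5 respondents.
item_1 = [3, 4, 3, 5, 2]
item_2 = [2, 4, 4, 5, 1]
item_3 = [3, 5, 4, 4, 2]
print(round(cronbachs_alpha([item_1, item_2, item_3]), 2))  # 0.92
```

The items above were chosen to covary strongly, so Alpha comes out high; uncorrelated items would drive it toward zero.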

22
Q

There are 4 different internal consistency measures. Name them and briefly explain them

A
  • Average inter-item correlation uses all of the items on your instrument that are designed to measure the same construct.
  • The average item-total correlation involves computing a total score across the set of items on a measure and treating that total score as though it were another item, thereby obtaining all of the item-to-total score correlations.
  • In split half reliability, you randomly divide into two sets all items that measure the same construct. It’s an estimate of internal consistency reliability that uses the correlation between the total score of two randomly selected halves of the same multi-item test or measure.
  • Cronbach’s Alpha
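A sketch of the split-half estimate, using a hand-rolled Pearson correlation on hypothetical data (four items measuring the same construct, five respondents):

```python
import random
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def split_half_reliability(items, seed=0):
    """Randomly split the items into two halves and correlate the half totals."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    totals_a = [sum(s) for s in zip(*shuffled[:half])]
    totals_b = [sum(s) for s in zip(*shuffled[half:])]
    return pearson(totals_a, totals_b)

# Hypothetical data: 4 items measuring the same construct, 5 respondents.
items = [
    [3, 4, 3, 5, 2],
    [2, 4, 4, 5, 1],
    [3, 5, 4, 4, 2],
    [2, 4, 3, 5, 2],
]
r = split_half_reliability(items)  # a value between -1 and 1
```

Different random splits give different estimates; Cronbach’s Alpha sidesteps this by being equivalent to the average over all possible splits.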
23
Q

Define Construct validity:

A

Overarching category of validity that contributes to the quality of measurement, with all of the other measurement validity labels falling beneath it.

Construct validity is an assessment of how well your actual programs or measures reflect your ideas or theories.

Construct validity is all about representation, and it can be viewed as a ‘truth in labeling’ issue.

24
Q

There are various validity types, name them.

A
  • Construct validity
  • Translation validity
  • Face validity
  • Content validity
  • Criterion-related validity
  • Predictive validity
  • Concurrent validity
  • Convergent validity and discriminant validity
25
Q

Briefly explain construct validity

A

The approximate truth of the conclusion or inference that your operationalization accurately reflects its construct

26
Q

Briefly explain translation validity

A

Focuses on whether the operationalization is a good translation of the construct

27
Q

Briefly explain face validity

A

Face validity checks whether, on its face, the operationalization seems like a good translation of the construct.

28
Q

Explain content validity

A

In content validity, you check the operationalization against relevant content domain for the construct.

29
Q

Explain criterion-related validity

A

Examines whether the operationalization or the implementation of the construct performs the way it should according to some criterion

30
Q

Explain predictive validity

A

Based on the idea that your measure is able to predict what it theoretically should be able to predict

31
Q

Explain concurrent validity

A

About an operationalization’s ability to distinguish between groups that it should theoretically be able to distinguish between