Taak 2: Reliability Flashcards

1
Q

Name three types of measurement error and give an example of each.

A
  • External environmental factors (e.g. background noise)
  • Personal factors (e.g. illness, time of day)
  • Test factors (e.g. non-representative items, difficulty level)
2
Q

What is classical test theory? (Give the formula.)

A

Observed score (X) = true score (T) + measurement error (E).

The measurement error (E) is assumed to be random.
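
A minimal simulation sketch of the formula with made-up numbers; expressing reliability as var(T)/var(X) is a standard CTT result added here for illustration, not something stated on the card:

```python
# Classical test theory sketch: X = T + E, with illustrative (made-up) numbers.
import numpy as np

rng = np.random.default_rng(0)

true_scores = rng.normal(loc=100, scale=15, size=1000)  # T: stable true scores
error = rng.normal(loc=0, scale=5, size=1000)            # E: random measurement error
observed = true_scores + error                            # X = T + E

# With purely random error, reliability can be written as var(T) / var(X).
reliability = true_scores.var(ddof=1) / observed.var(ddof=1)
print(f"Estimated reliability: {reliability:.2f}")        # roughly 15**2 / (15**2 + 5**2) = 0.90
```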

3
Q

Which two types of norms does COTAN define? Explain both.

A
  • Norm-referenced interpretation: raw scores are compared to those of others, ideally from the same age group (for children), the same gender, and the same country/culture (see the sketch after this list).
  • Absolute norms: raw scores are compared to an absolute norm, which is often determined by experts (for instance, which skills one has to master). Also known as content-referenced or criterion-referenced interpretation. Examples: experts determine how long someone has to be able to sit without fidgeting, or which words a 10-year-old child needs to be able to understand.
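
A minimal sketch of the norm-referenced case: locating one raw score within a hypothetical norm group as a z-score and percentile (the scores and the norm group below are made up for illustration):

```python
# Norm-referenced interpretation sketch: compare a raw score to a (made-up) norm group.
import numpy as np

rng = np.random.default_rng(1)
norm_group = rng.normal(loc=30, scale=6, size=500)  # raw scores of the reference group
raw_score = 38                                       # the score to be interpreted

z = (raw_score - norm_group.mean()) / norm_group.std(ddof=1)   # position in SD units
percentile = (norm_group < raw_score).mean() * 100              # share of the group scoring lower
print(f"z = {z:.2f}, percentile = {percentile:.0f}")
```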
4
Q

What is the definition of reliability?

A

A reliable test is one that is relatively free of unsystematic measurement error.

5
Q

Test reliability is usually estimated in one of what three ways?

A
  • Test-retest method
  • Parallel forms
  • Internal consistency
    (Inter-rater reliability is also named on the Scribbr website, but not in the lecture.)
6
Q

Match each form of reliability with its methodology.
1. Test-retest
2. Interrater
3. Parallel forms
4. Internal consistency

A. Using two different tests to measure the same thing
B. Measuring a property that you expect to stay the same over time
C. Using a multi-item test where all the items are intended to measure the same variable
D. Multiple researchers making observations or ratings about the same topic.

A

1 = B, 2 = D, 3 = A, 4 = C

7
Q

What is the difference between factor analysis and item analysis?

A

Factor analysis picks out subscales and checks whether the items load on those factors. Item analysis examines every single item to see whether it contributes to the test; items that make no difference should be removed, since it is best to keep the number of items as small as possible.
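
As an illustrative sketch only (the card prescribes no library, data, or number of factors), a factor analysis on a small made-up item matrix might look like this; the loadings show which items load on which factor:

```python
# Factor analysis sketch on made-up data: items 0-2 load on one factor, items 3-5 on another.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n = 200
f1 = rng.normal(size=n)                               # latent factor 1
f2 = rng.normal(size=n)                               # latent factor 2
noise = rng.normal(scale=0.5, size=(n, 6))
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + noise

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print(np.round(fa.components_, 2))                    # loadings: rows = factors, columns = items
```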

8
Q

What methods are used to measure internal consistency?

A

Cronbach’s alpha and split-half reliability (the corrected correlation between two halves of the test). The Kuder-Richardson formula KR-20 is the equivalent of Cronbach’s alpha for dichotomous (right/wrong) items.
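
A minimal sketch of Cronbach’s alpha on made-up item scores, using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the data are illustrative only:

```python
# Cronbach's alpha sketch for a respondents-by-items score matrix (made-up data).
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(3)
trait = rng.normal(size=150)
scores = trait[:, None] + rng.normal(scale=0.8, size=(150, 5))  # 5 items measuring one trait
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
```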

9
Q

The practice effect is one important aspect of the carryover effect. What are the carryover and practice effects?

A

The carryover effect occurs when the first testing session influences scores in the second session (e.g. remembering answers). The practice effect means that the skills being tested improve with practice on the test.

10
Q

Tests are more reliable if they are unidimensional. What does that mean?

A

This means that one factor should account for considerably more of the variance than any other factor.
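
One common way to check this, not spelled out on the card, is to look at the eigenvalues of the inter-item correlation matrix and see whether the first one is clearly dominant; a sketch with made-up data:

```python
# Unidimensionality sketch: a single dominant eigenvalue of the inter-item correlation
# matrix suggests one factor accounts for much more variance than any other.
import numpy as np

rng = np.random.default_rng(4)
trait = rng.normal(size=300)
items = trait[:, None] + rng.normal(scale=0.7, size=(300, 6))  # 6 items, one underlying trait

corr = np.corrcoef(items, rowvar=False)            # inter-item correlation matrix
eigenvalues = np.linalg.eigvalsh(corr)[::-1]       # sorted from largest to smallest
print(np.round(eigenvalues, 2))                    # expect the first value to stand out
```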

11
Q

Discriminability analysis is an approach that examines the correlation between each item and the total score for the test. How can you tell that an item drags down the reliability estimate of a test?

A

When the correlation between performance on a single item and the total test score is low, the item is probably measuring something different from the other items on the test. It might also mean that the item is so easy or so hard that people do not differ in their responses to it.
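
A sketch of such a check on made-up data; it uses the item-rest correlation (each item correlated with the total of the remaining items) and an illustrative cut-off of .30, both common conventions rather than values given on the card:

```python
# Item-total (discriminability) sketch: flag items with a low item-rest correlation.
import numpy as np

rng = np.random.default_rng(5)
trait = rng.normal(size=200)
items = trait[:, None] + rng.normal(scale=0.9, size=(200, 5))
items[:, 4] = rng.normal(size=200)                 # item 4 is unrelated to the trait

for i in range(items.shape[1]):
    rest_total = np.delete(items, i, axis=1).sum(axis=1)   # total score without item i
    r = np.corrcoef(items[:, i], rest_total)[0, 1]
    flag = "  <- likely drags reliability down" if r < 0.30 else ""
    print(f"item {i}: item-rest correlation = {r:.2f}{flag}")
```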

12
Q

Two methods are used to measure internal consistency: average inter-item correlation and split-half reliability. Explain these two measures.

A
  • Average inter-item correlation: For a set of measures designed to assess the same construct, you calculate the correlation between the results of all possible pairs of items and then calculate the average.
  • Split-half reliability: You randomly split a set of measures into two sets. After testing the entire set on the respondents, you calculate the correlation between the two sets of responses.
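
A combined sketch of both measures on a made-up item-score matrix; the Spearman-Brown step applied to the split-half correlation implements the "corrected correlation" mentioned on card 8:

```python
# Average inter-item correlation and split-half reliability on made-up data.
import numpy as np

rng = np.random.default_rng(6)
trait = rng.normal(size=120)
items = trait[:, None] + rng.normal(scale=0.8, size=(120, 8))  # 8 items, one construct

# Average inter-item correlation: mean of all pairwise item correlations.
corr = np.corrcoef(items, rowvar=False)
pairs = corr[np.triu_indices_from(corr, k=1)]                  # each item pair counted once
print(f"Average inter-item correlation: {pairs.mean():.2f}")

# Split-half reliability: correlate the totals of two random halves, then correct.
order = rng.permutation(items.shape[1])
half_a = items[:, order[:4]].sum(axis=1)
half_b = items[:, order[4:]].sum(axis=1)
r_half = np.corrcoef(half_a, half_b)[0, 1]
split_half = 2 * r_half / (1 + r_half)                         # Spearman-Brown correction
print(f"Split-half reliability: {split_half:.2f}")
```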