Week 7 Learning Outcomes Flashcards

1
Q

Define systematic error

A

A consistent, repeatable error associated with faulty equipment or a flawed experimental design

Systematic errors can lead to results that are consistently skewed in one direction.

2
Q

Define random error

A

An unpredictable error that leads to variability in measurements due to chance factors

Random errors can often be reduced by increasing the sample size.
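A quick sketch of why larger samples tame random error: with hypothetical noisy measurements of a known true value, the spread of sample means shrinks as the sample size grows (all values below are made up for illustration).

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0  # hypothetical true quantity being measured

def sample_mean(n):
    """Mean of n measurements, each perturbed by random error (sd = 5)."""
    return statistics.fmean(random.gauss(TRUE_VALUE, 5) for _ in range(n))

def spread_of_means(n, repeats=200):
    """Variability of the sample mean across repeated experiments of size n."""
    return statistics.stdev(sample_mean(n) for _ in range(repeats))

small = spread_of_means(5)
large = spread_of_means(100)
# Means from larger samples cluster more tightly around the true value,
# so random error averages out as n increases.
```

Systematic error, by contrast, would shift every measurement in the same direction and would not shrink with sample size.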

3
Q

What is intra-rater reliability?

A

The degree to which the same rater provides consistent scores over multiple trials

Intra-rater reliability is crucial for ensuring that a single evaluator’s measurements are stable.

4
Q

What is inter-rater reliability?

A

The level of agreement among different raters evaluating the same phenomenon

High inter-rater reliability indicates that different evaluators give similar scores.
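One common way to quantify this agreement is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical ratings from two raters:

```python
from collections import Counter

# Hypothetical data: two raters classify the same 10 cases as pass/fail.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"]
rater_b = ["pass", "pass", "fail", "pass", "pass", "pass", "pass", "fail", "fail", "fail"]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[c] * counts_b[c] for c in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(rater_a, rater_b)
# Kappa near 1 indicates strong agreement beyond chance; near 0, no better
# than chance.
```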

5
Q

What is test-retest reliability?

A

The consistency of a measure when it is administered to the same group on two different occasions

Test-retest reliability is important for determining the stability of a measure over time.
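Test-retest reliability is often estimated by correlating the two sets of scores, e.g. with a Pearson correlation. A minimal sketch using hypothetical scores from six participants tested twice:

```python
# Hypothetical scores for the same 6 participants on two occasions.
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 12, 17, 15, 16]

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(time1, time2)
# r close to 1 suggests scores are stable across the two administrations.
```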

6
Q

Define face validity

A

The extent to which a test appears to measure what it is supposed to measure, based on subjective judgment

Face validity is not a rigorous form of validity but is important for user acceptance.

7
Q

Define content validity

A

The degree to which a test covers a representative sample of the domain being measured

Content validity ensures that all aspects of a construct are adequately assessed.

8
Q

Define construct validity

A

The extent to which a test measures the theoretical construct it is intended to measure

Construct validity is crucial for ensuring that the test truly reflects the concept being studied.

9
Q

Define criterion validity

A

The extent to which a measure correlates with an outcome or criterion that it should theoretically be related to

Criterion validity can be assessed through predictive or concurrent validity.

10
Q

Identify commonly used statistics related to measurement reliability

A
  • Cronbach’s alpha
  • Kappa statistic
  • Intraclass correlation coefficient
  • Pearson correlation coefficient

These statistics help quantify the reliability of measurements in research.
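As one concrete example, Cronbach's alpha (internal consistency) can be computed by hand from item variances and the variance of total scores: alpha = k/(k-1) × (1 − Σ item variances / variance of totals). A minimal sketch with a hypothetical 3-item scale answered by 5 respondents:

```python
import statistics

# Hypothetical data: 5 respondents answer a 3-item scale.
items = [
    [3, 4, 5, 2, 4],  # item 1 scores across respondents
    [2, 4, 5, 3, 4],  # item 2
    [3, 5, 4, 2, 5],  # item 3
]

def cronbach_alpha(item_scores):
    """Internal-consistency reliability across scale items."""
    k = len(item_scores)
    item_vars = sum(statistics.variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - item_vars / statistics.variance(totals))

alpha = cronbach_alpha(items)
# Alpha near 1 suggests the items consistently measure the same construct.
```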
