Lecture 8 Flashcards

1
Q

Common sources of measurement error

A

– Systematic error
– Random error
– Errors in alternate forms of measurement

2
Q

– The act or process of assigning numbers or values to phenomena according to a rule
– The dimension, quantity, or capacity determined by measuring

A

What is measurement?

3
Q

All measurements can be reduced to just two components:

A

a number and a unit

4
Q

There are two kinds of measurement errors:

A

systematic errors and random errors

5
Q

Measurement errors occur when:

A

the collected data do not accurately portray the concept that we intend to measure

6
Q

When we deal with data from different sources and countries, we need to convert them into standardized, comparable units. What are some examples of units of measure that need to be standardized?

A

Examples include currencies used in different countries, weight (kg vs. lb), and length (meter vs. foot).
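
A minimal sketch (Python, not from the lecture) of standardizing mixed units before comparison; the weight and length factors are exact by definition, while USD_PER_EUR is a made-up placeholder rate.

KG_PER_LB = 0.45359237      # 1 lb = 0.45359237 kg (exact)
M_PER_FT = 0.3048           # 1 ft = 0.3048 m (exact)
USD_PER_EUR = 1.10          # hypothetical exchange rate, not a real quote

def to_kg(value, unit):
    # standardize a weight to kilograms
    return value if unit == "kg" else value * KG_PER_LB

def to_m(value, unit):
    # standardize a length to meters
    return value if unit == "m" else value * M_PER_FT

def to_usd(value, currency, rate=USD_PER_EUR):
    # standardize an amount to US dollars (handles only EUR here, under the assumed rate)
    return value if currency == "USD" else value * rate

print(to_kg(150, "lb"))    # ~68.04 kg
print(to_m(6, "ft"))       # ~1.83 m
print(to_usd(100, "EUR"))  # 110.0 under the assumed rate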

7
Q

If loud construction is going on just outside a classroom where pupils are writing a test, the noise is liable to affect all of the children's scores. What is this an example of?

A

Systematic error, often called bias in measurement. A systematic error occurs when the collected information consistently reflects an inaccurate picture of the concept that we are attempting to measure.

8
Q

We may ask questions in a way that predisposes individuals to answer in the way we want them to. For example, “It is better to give than to receive?” This is called the:

A

acquiescent response set.

9
Q

Individuals may be inclined to answer questions in ways that distort their true views or behaviors. This bias can be minimized by anonymity:

A

What is social desirability bias?

10
Q

During a particular test, some children may be in a good mood and others may be depressed. What is this an example of?

A

Random error. The key point about random errors is that they do not have any consistent effect across the entire sample; they do not affect the average, only the variability around the average.
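
A minimal simulation sketch (Python, not from the lecture; all numbers are made up) contrasting the two error types: a systematic error shifts the average, while random error leaves the average roughly unchanged but inflates the spread.

import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(70, 5, size=1000)               # hypothetical "true" test scores

with_systematic = true_scores - 4                        # e.g., construction noise lowers every score
with_random = true_scores + rng.normal(0, 4, size=1000)  # e.g., mood varies child to child

print(true_scores.mean(), true_scores.std())             # ~70, ~5
print(with_systematic.mean(), with_systematic.std())     # average shifted to ~66, spread unchanged
print(with_random.mean(), with_random.std())             # average still ~70, spread noticeably larger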

11
Q

The steps you can take to minimize errors depend mainly on the:

A

data collection methods. For example: unbiased wording, pretesting, consistency among testers to reduce inter-rater bias, unobtrusive observation, and using different methods to collect the same information (triangulation).

12
Q

This concerns the amount of random error in a measure, the consistency of measurement, and the chance that a given measurement procedure will yield the same result for a given phenomenon at another time. In a word: repeatability.

A

What is reliability?

13
Q

Inter-observer or inter-rater reliability

A

The degree to which different observers/raters generate consistent estimates of the same phenomenon
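
A minimal sketch (Python, not from the lecture) of estimating inter-rater reliability for two raters assigning categorical labels: simple percent agreement plus Cohen's kappa, which corrects for chance agreement. The ratings are invented.

import numpy as np

rater_a = np.array(["pass", "pass", "fail", "pass", "fail", "pass", "fail", "pass"])
rater_b = np.array(["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"])

p_obs = np.mean(rater_a == rater_b)                      # observed agreement
categories = np.unique(np.concatenate([rater_a, rater_b]))
p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
kappa = (p_obs - p_chance) / (1 - p_chance)              # Cohen's kappa
print(f"agreement = {p_obs:.2f}, kappa = {kappa:.2f}")   # 0.75 and ~0.43 for these data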

14
Q

Test-retest reliability

A

The consistency of a measure over time
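
A minimal sketch (Python, not from the lecture; data simulated) of test-retest reliability as the correlation between the same measure taken at two time points.

import numpy as np

rng = np.random.default_rng(1)
trait = rng.normal(100, 15, size=200)        # stable underlying trait
time1 = trait + rng.normal(0, 5, size=200)   # measurement at time 1
time2 = trait + rng.normal(0, 5, size=200)   # same measure, taken again later

r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest reliability ~ {r:.2f}")  # high r means scores are consistent over time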

15
Q

Internal consistency reliability

A

The consistency of results across items within a test, e.g. the split-halves method.
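
A minimal sketch (Python, not from the lecture; item scores simulated) of the split-halves method: split the items into two halves, correlate each respondent's half-scores, and apply the Spearman-Brown correction to project the correlation to the full test length.

import numpy as np

rng = np.random.default_rng(2)
n_people, n_items = 300, 10
ability = rng.normal(0, 1, size=(n_people, 1))                # latent trait per person
items = ability + rng.normal(0, 1, size=(n_people, n_items))  # simulated item scores

odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)

r_half = np.corrcoef(odd_half, even_half)[0, 1]
r_full = 2 * r_half / (1 + r_half)                            # Spearman-Brown correction
print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")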

16
Q

Parallel-forms reliability

A

The consistency of the results of two different tests constructed from the same content.

17
Q

What is a bad thermometer an example of?

A

A measure that is reliable but not accurate. Reliability does not ensure accuracy: if a measurement is accurate, it must be reliable, but if it is reliable, it is not necessarily accurate.
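
A minimal sketch (Python, not from the lecture; numbers invented) of the bad-thermometer idea: readings that barely vary (reliable) but sit consistently above the true temperature (not accurate).

import numpy as np

rng = np.random.default_rng(3)
true_temp = 37.0
readings = true_temp + 3.0 + rng.normal(0, 0.1, size=20)  # consistent but biased readings

print(readings.std())                # small spread -> reliable
print(readings.mean() - true_temp)   # about +3 degrees -> not accurate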

18
Q

This refers to the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration.

A

What is validity?

19
Q

____ validity concerns whether a measurement appears to measure a certain criterion. (It is necessary, but far from sufficient.)

A

Face

20
Q

______ validity refers to the degree to which a measurement covers the range of meanings/contents included within the concept.

A

Content

21
Q

____-_____ validity checks the performance of our operationalization against certain criteria. These criteria may be external standards or another indicator of the concept our instrument is intended to measure. Example: the written test and the road test for new drivers.

A

Criterion-related

22
Q

This is based on the way that a measure relates to other variables within a theoretical framework

A

Construct validity. Construct validity can be evaluated by checking whether a common factor can represent several measurements made with different observable indicators.

23
Q

_____ validity is an estimate of how much our measurement is based on clean experimental techniques, so that you can make clear-cut inferences about cause-and-effect relationships (in the sample being studied).

A

Internal validity

24
Q

_____ validity refers to the extent to which the results of an investigation can be generalized to the population as a whole, and to other populations, settings, and measurement devices. Example: whether the sample is representative of the population.

A

External

25
Q

If the measurement is accurate it must be reliable. True or false?

A

True.

26
Q

Reliability is not sufficient for accuracy

True or false?

A

True.

27
Q

Accuracy is a sufficient condition for reliability.

True or false?

A

True.

28
Q

Reliability versus validity: what is the difference?

A

Reliability is about the consistency of a measure, and validity is about the accuracy of a measure.