Deck 5 Flashcards

1
Q

What is an indicator?

A

An indicator is an aspect of the construct that is measured.

2
Q

What is a measurand?

A

A measurand is the signal the instrument actually detects.

3
Q

What is a measure?

A

A measure is the actual procedure used to obtain a measurement.

4
Q

What is a measurement?

A

A measurement is the reading you take.

5
Q

What is a measurement range?

A

A measurement range is the range of conditions within which a measure works.

6
Q

What is an observation unit?

A

An observation unit is the unit that is actually measured.

7
Q

What is a research unit?

A

A research unit is the unit that you draw conclusions about.

8
Q

What are the types of reliability?

A
  • Internal consistency: if I have parallel items, do they return the same values?
  • External consistency: if I repeat the same measurement under the same circumstances, do I get the same results? (See the sketch after this list.)
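
A minimal sketch of both checks in Python, assuming hypothetical rating data; the scores, Cronbach's alpha as the internal-consistency statistic, and a test-retest correlation as the external-consistency statistic are illustrative choices, not prescribed by the card:

```python
import numpy as np

# Internal consistency: Cronbach's alpha over k parallel items
# (rows = respondents, columns = items; data are made up).
items = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 1],
])
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")     # closer to 1 = more consistent

# External consistency: test-retest correlation of the same measure
# taken twice under (assumed) equal circumstances.
test = np.array([4.1, 2.3, 5.0, 3.4, 1.2])
retest = np.array([4.0, 2.5, 4.8, 3.6, 1.1])
r = np.corrcoef(test, retest)[0, 1]
print(f"Test-retest correlation: {r:.2f}")
```
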
9
Q

What is measurement validity?

A
  • Does the measurement really measure what it is supposed to measure? Does it adequately cover the construct?
  • Does it measure all relevant component indicators without systematic bias?
10
Q

What is face validity?

A

Face validity is when, at first sight, there appears to be a logical link between the measurement instrument (question, operationalisation) and the objective.

11
Q

What is content validity?

A

Content validity is when the different items of the research instrument, and the research instruments themselves, each cover all aspects of your constructs. Content validity is judged by experts.

12
Q

What is predictive validity?

A

Does it predict a future outcome that theory says it should predict?

13
Q

What is concurrent validity?

A

How well an instrument's results agree with those of a second assessment carried out at the same time.

14
Q

What is construct validity?

A

Is it statistically related to other constructs that it should be strongly related to?

15
Q

Can concurrent validity, predictive validity and construct validity be empirically tested?

A

Yes. All three compare the measurement against other data (a future outcome, a second assessment, or measures of related constructs), so they can be tested statistically (see the sketch below).
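
A minimal sketch of what such tests might look like, using plain Pearson correlations; all scores and variable names are made up for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical scores from the instrument under evaluation.
instrument = np.array([12, 18, 9, 22, 15, 7, 20, 11])

# Concurrent validity: agreement with an established second
# assessment administered at (roughly) the same time.
benchmark = np.array([14, 19, 8, 23, 13, 9, 21, 10])

# Predictive validity: correlation with a future outcome that
# theory says the instrument should predict.
future_outcome = np.array([30, 45, 22, 55, 38, 18, 50, 28])

for label, other in [("concurrent", benchmark),
                     ("predictive", future_outcome)]:
    r, p = stats.pearsonr(instrument, other)
    print(f"{label} validity: r = {r:.2f}, p = {p:.3f}")

# Construct validity would be checked the same way: correlate the
# instrument with measures of constructs it should relate to.
```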

16
Q

Can you explain regression towards the mean?

A

It occurs whenever what is actually measured is imperfectly correlated with the underlying trait we're trying to measure, which results in an imperfect correlation between expressions of this trait over time, even if the trait itself is stable over that period. For example, suppose I want to measure baseball skill. My skill may be the same this year and next year, but this year all the stars were aligned and I hit 30 home runs. Next year, I return to a more typical performance for my skill level, and you may wrongly conclude that either my skill has dropped or your measurement of my baseball skill isn't reliable. This phenomenon explains the typically disappointing results of stars in any discipline after a top year.
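
A minimal simulation sketch of this effect, assuming a stable, normally distributed skill plus independent per-season luck (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_players = 10_000
skill = rng.normal(0, 1, n_players)       # stable underlying trait

# Each season's performance = skill + luck (measurement noise),
# so performance correlates only imperfectly with skill.
year1 = skill + rng.normal(0, 1, n_players)
year2 = skill + rng.normal(0, 1, n_players)

# Players with a top year 1 (the "stars") regress toward the mean
# in year 2, even though their skill is unchanged.
stars = year1 > np.quantile(year1, 0.95)
print(f"stars, year 1 mean: {year1[stars].mean():.2f}")
print(f"stars, year 2 mean: {year2[stars].mean():.2f}")  # noticeably lower
```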

17
Q

What are the common problems with regard to determining reliability of your instrument?

A
  • Regression towards the mean
  • Testing/instrument effects: testing a unit influences how it will react to subsequent measurements
  • Assumption of equal circumstances may not hold (we may confuse inconsistency of measurement with true change/variation)
  • Assumption of equivalence of measures may not hold (parallel items aren’t truly parallel)
18
Q

What is a hypothesis?

A

A hypothesis is an idea about how the world works that can be empirically tested.

19
Q

Why do we use hypotheses?

A

A hypothesis gives focus to our research, provides a link to theory, and serves as a target for falsification (experimental and exploratory).

20
Q

Where do hypotheses come from?

A

Research often starts from or generates hypotheses.

21
Q

What are the steps in hypothesis testing according to the CIA?

A
  • Identify the possible hypotheses to be considered
  • List significant evidence and arguments for and against each
  • Identify which evidence/arguments are most helpful in judging each
  • Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value (a toy matrix illustrating this follows the list)
  • Draw tentative hypotheses; proceed empirically by trying to disprove them
  • Test the sensitivity of your conclusion to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation
  • Report the relative likelihood of all the hypotheses, not just the most likely one (context +)
  • Identify milestones for future observation that may indicate events are taking a different course than expected (decision support vs. description)
  • Be very worried if your initial bias is the same as your final conclusion
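
A toy sketch of the evidence-versus-hypotheses matrix these steps describe (in the style of the CIA's Analysis of Competing Hypotheses); the hypotheses, evidence items, and consistency scores are all made up:

```python
# Scores: +1 = evidence consistent with the hypothesis,
#          0 = neutral, -1 = inconsistent.
hypotheses = ["H1", "H2", "H3"]
evidence = {
    "E1": [+1, +1, +1],   # consistent with everything: no diagnostic value
    "E2": [+1, -1, 0],
    "E3": [-1, +1, -1],
}

# Delete evidence that cannot discriminate between hypotheses.
diagnostic = {e: s for e, s in evidence.items() if len(set(s)) > 1}

# Rank hypotheses by how much diagnostic evidence argues against
# them (the method weighs inconsistencies, not confirmations).
for i, h in enumerate(hypotheses):
    against = sum(1 for s in diagnostic.values() if s[i] < 0)
    print(f"{h}: {against} item(s) of diagnostic evidence against")
```
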
22
Q

What are the different types of hypotheses?

A
  • Non-relational: states the existence/level/condition. E.g. Soil salinity in Wageningen is 1500 ppm
  • Correlational: states a relation between variables. E.g. Soil salinity is related to plant growth
  • Developmental: states a development of one/more variables in time. E.g. Soil salinity in Wageningen is increasing
  • Causal: states a causal relation between variables. E.g. Soil salinity affects plant growth
23
Q

What are the components of causal hypotheses?

A
  • Independent variable (the cause) = x
  • Dependent variable (the effect) = y

24
Q

What are the types of extraneous variables?

A
We distinguish between:
  • (Other) independent variables
  • Intervening variables
  • Moderating variables
  • Confounding variables (big headache for causality)