2: Measurement Issues Flashcards

1
Q

What is an operational definition?

A

Translating theoretical constructs into measurable variables.

2
Q

List five measurement methods commonly used in developmental psychology research. Provide examples for each.

A

Rank order (e.g., “How helpful is Sam compared with his/her peers?”).

Self-ratings (e.g., “When you have candy, do you share it with your friends?” Sometimes, Never, Often).

Questionnaire (e.g., “In the past 12 months, has your child …”).

Peer evaluations or sociometric rankings (e.g., ranking kids as popular, controversial, etc.).

Experimenter observation (e.g., observing behaviour in a schoolyard).

3
Q

List the three kinds of observational measures used in developmental psychology research.

A

Narrative recording: a running narration (minute-by-minute or second-by-second account) of the target child’s behaviour.

Time sampling: recording whether specific checklist behaviours occur during fixed observation periods.

Event sampling: recording only when the target behaviour or event actually occurs (contrasted with time sampling in the sketch below).
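
To make the contrast concrete, here is a minimal Python sketch (all names hypothetical, not a prescribed coding scheme): time sampling fills in a checklist for every fixed interval, while event sampling logs an entry only when the target behaviour occurs.

```python
from dataclasses import dataclass, field

@dataclass
class TimeSample:
    """One fixed observation window in a time-sampling record."""
    interval_start_s: int                           # e.g., 30-second windows
    checklist: dict = field(default_factory=dict)   # behaviour -> observed?

@dataclass
class Event:
    """One entry in an event-sampling record: logged only when it happens."""
    time_s: int
    behaviour: str
    notes: str = ""

# Time sampling: every interval gets a row, whether or not anything happened.
time_record = [
    TimeSample(0,  {"sharing": True,  "hitting": False}),
    TimeSample(30, {"sharing": False, "hitting": False}),
]
# Event sampling: only occurrences of the target behaviour are recorded.
event_record = [Event(12, "sharing", "offered a toy to a peer")]
```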

4
Q

What are two problems with observational techniques?

A

Observer influences (reactivity): participants behave differently because they know they are being watched.

Observer bias: the observer’s expectations can affect both the results and the procedure (e.g., looking-time research with infants).

5
Q

What are ways you counteract the two problems with observational techniques?

A

Observer influences (reactivity): have the observer spend time in the environment so participants habituate to being watched; use participant observation; use a hidden observer.

Observer bias: define and score behaviours as specifically as possible; use blinding; check inter-observer reliability (see the sketch below).
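
One standard way to quantify inter-observer reliability is Cohen’s kappa, which corrects raw agreement between two observers for chance. A minimal sketch; the ratings below are invented for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two observers, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: product of the two raters' marginal proportions.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two observers independently code the same ten playground intervals.
obs1 = ["share", "share", "alone", "share", "alone",
        "share", "alone", "alone", "share", "share"]
obs2 = ["share", "alone", "alone", "share", "alone",
        "share", "share", "alone", "share", "share"]
print(f"kappa = {cohens_kappa(obs1, obs2):.2f}")  # 1.0 = perfect, 0 = chance
```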

6
Q

List three measurement issues.

A

Converging operations: using multiple different measures that converge on the same construct.

Floor effects (most subjects score towards the bottom of the scale) vs. ceiling effects (most subjects score towards the top); a quick check appears in the sketch below.

Measurement equivalence (e.g., what is the teen equivalent of a child’s sharing behaviour?).
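
Floor and ceiling effects can be screened for directly in the data. A minimal sketch, assuming a common rule of thumb that flags a problem when more than about 15% of participants sit at the scale minimum or maximum:

```python
def floor_ceiling_check(scores, minimum, maximum, threshold=0.15):
    """Report the proportion of scores piled up at each end of the scale."""
    n = len(scores)
    at_floor = sum(s == minimum for s in scores) / n
    at_ceiling = sum(s == maximum for s in scores) / n
    return {
        "prop_at_floor": at_floor,
        "prop_at_ceiling": at_ceiling,
        "floor_effect": at_floor > threshold,
        "ceiling_effect": at_ceiling > threshold,
    }

# A task that is too easy for the sample: scores pile up at the top.
scores = [10, 10, 9, 10, 10, 8, 10, 9, 10, 10]
print(floor_ceiling_check(scores, minimum=0, maximum=10))
```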

7
Q

What are the four measurement scales?

A

Nominal scale: unordered categories (qualitative scores).

Ordinal scale: ordered scores.

Interval scale: ordered scores with equal intervals.

Ratio scale: ordered, equal intervals, plus a true zero point.
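
The scale type determines which statistics are meaningful. A small illustrative summary in code (a conventional rule-of-thumb mapping, not an exhaustive one):

```python
# Each scale type permits the statistics of the levels above it, plus more.
SCALE_STATISTICS = {
    "nominal":  {"mode"},                              # categories only
    "ordinal":  {"mode", "median"},                    # order, but no distances
    "interval": {"mode", "median", "mean"},            # equal intervals
    "ratio":    {"mode", "median", "mean", "ratios"},  # true zero point
}

def is_meaningful(scale, statistic):
    return statistic in SCALE_STATISTICS[scale]

print(is_meaningful("ordinal", "mean"))   # False: ranks have no equal intervals
print(is_meaningful("ratio", "ratios"))   # True: "twice as much" makes sense
```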

8
Q

Provide a general definition for reliability. What are two specific types?

A

Consistency in measures.

Internal consistency reliability: consistency between different items that test the same concept on the same test.

Test-retest reliability: similarity of scores from one testing session to another.
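
Both kinds of reliability are commonly expressed as coefficients: Cronbach’s alpha for internal consistency and a simple correlation between sessions for test-retest. A minimal sketch with made-up scores:

```python
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one list of scores per item, same participants in each list.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k, n = len(items), len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(variance(it) for it in items) / variance(totals))

def pearson_r(x, y):
    """Test-retest reliability: correlate time-1 with time-2 scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return cov / (variance(x) ** 0.5 * variance(y) ** 0.5)

# Internal consistency: three items answered by five participants.
items = [[3, 4, 2, 5, 4], [2, 4, 3, 5, 3], [3, 5, 2, 4, 4]]
print(f"alpha = {cronbach_alpha(items):.2f}")

# Test-retest: the same test given twice, two weeks apart.
time1 = [8, 12, 10, 15, 9]
time2 = [9, 11, 10, 14, 10]
print(f"r = {pearson_r(time1, time2):.2f}")
```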

9
Q

Reliability is a necessary but not sufficient condition for _____.

A

Validity.

10
Q

Define the following types of validity:

  • Content Validity
  • Criterion Validity
  • Construct Validity
  • Internal Validity
  • External Validity
A

Content: the test adequately covers the concept it is meant to test.

Criterion: performance on the test relates to an independent measure of the same abilities.

Construct: the theoretical explanation for performance on the test is correct.

Internal: the study tests what it is supposed to be testing.

External: findings apply outside the experimental context (e.g., does rule use in the lab reflect rule use in the real world?).

11
Q

Regarding internal and external validity, what happens when you decrease one kind of threat?

A

Decreasing one kind of threat tends to increase the other: internal and external validity trade off against each other.

12
Q

Define concurrent and predictive criterion validity.

A

Concurrent: the test correlates with a criterion measured in the same time period.

Predictive: the test predicts future performance on the criterion.

13
Q

Define convergent and divergent/discriminant construct validity.

A

Convergent: correlation with other theoretically related tasks.

Divergent/discriminant: lack of correlation with theoretically unrelated tasks (see the sketch below).
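
In practice, both are often checked with a pair of correlations: the new measure should correlate highly with a related task and near zero with an unrelated one. A minimal sketch with hypothetical scores (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Pearson r; Python 3.10+

new_measure = [4, 7, 5, 8, 3, 6]   # hypothetical new empathy measure
related     = [5, 8, 5, 7, 2, 6]   # an established, theoretically related task
unrelated   = [7, 7, 4, 5, 6, 7]   # a theoretically unrelated task

print(f"convergent   r = {correlation(new_measure, related):.2f}")    # high
print(f"discriminant r = {correlation(new_measure, unrelated):.2f}")  # near zero
```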

14
Q

The presence of a confounding variable is a threat to what kind of validity? How can it be counteracted?

A

Internal validity.

Counteract it with random assignment and blinding (see the sketch below).
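
A minimal sketch of balanced random assignment (participant names hypothetical): shuffling the participant list before dealing participants into groups spreads any confounding characteristic across conditions by chance rather than by selection.

```python
import random

def randomly_assign(participants, groups=("treatment", "control"), seed=None):
    """Shuffle, then deal participants round-robin into equal-sized groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {p: groups[i % len(groups)] for i, p in enumerate(shuffled)}

print(randomly_assign(["Ana", "Ben", "Cleo", "Dev", "Eli", "Fay"], seed=1))
```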

15
Q

List eight threats to reliability and validity.

A

Selection bias: nonequivalent participants assigned to different groups.

Data loss: selective and differential dropout; selective data loss (e.g., more experimenter errors with the most “difficult” children).

History: effects of external events on study (e.g., major disaster, Christmas).

Experimenter biases: observer biases; observer and experimenter drift (systematic changes over time in how coding/scoring criteria are applied).

Instrumentation changes over time: changes in experimenters, observers, and measurements (e.g., longitudinal studies).

Retesting issues: maturation between tests (a measurement-equivalence problem); order and carry-over effects.

Reactivity: participant’s reaction to experimental situation.

Response set: “yes”-saying biases; favouring the last-named option; positional responding; giving alternating answers when the same question is asked twice.

16
Q

What is the most effective way to minimize threats? Is this a perfect solution? Can it ever be a problem?

A

Standardization.

It is not a perfect solution: nonstandardization is only a problem if it is systematic, and sometimes it cannot be avoided (e.g., different instructions for different age groups).

Overstandardization can itself be a problem (e.g., limiting the pool of participants to make the data “cleaner”, such as studying only one sex).

17
Q

Define “good-participant” and “bright-participant” syndromes.

A

Good-participant syndrome: participants respond to demand characteristics, behaving as they think the experimenter wants.

Bright-participant syndrome: evaluation apprehension (affects both kids and parents).