Selecting the Best Measure Flashcards

1
Q

What is one of the difficult things about measurement in psychology?

A

We need operational definitions

2
Q

What are some observable measures we can choose to look at?

A
  • Verbal response
  • Nonverbal response
  • Physiological response
  • Overt actions
3
Q

What are the 2 elements that are contained in every measurement?

A

“True” score (a hypothetical concept)
Error
Observed score = “true” score + error
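
The true-score equation can be illustrated with a quick simulation (hypothetical numbers; in real research the true score is never directly observable):

```python
import random

random.seed(42)

TRUE_SCORE = 100          # hypothetical "true" score (unobservable in practice)
N_MEASUREMENTS = 10_000   # number of repeated measurements

# Each observed score = true score + random error.
observed = [TRUE_SCORE + random.gauss(0, 15) for _ in range(N_MEASUREMENTS)]

# With purely random error, the mean observed score converges on the true score.
mean_observed = sum(observed) / len(observed)
print(f"mean observed score ≈ {mean_observed:.1f}")
```

This is why random error is less damaging than bias: it averages out over many measurements, whereas bias would shift every observation in the same direction.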

4
Q

What are 3 sources of measurement error?

A

Experimenter, participant, observer/scorer

5
Q

What are 2 sources of experimenter error?

A

Random Error: Time of day, temperature, noise.

Bias: Experimenter characteristics, experimenter expectancies.

6
Q

What do we mean by experimenter characteristics?

A

When a particular aspect of the experimenter affects how participants respond. These can be physical characteristics (age, gender, ethnicity) or personality characteristics (friendliness, hostility, anxiety).

7
Q

How do we control for experimenter characteristics?

A

Use standardized methods: train experimenters to follow set standards when administering procedures, and standardize aspects of the experimenter as much as possible (appearance, attitude, etc.).
Replication!

8
Q

What do we mean by experimenter expectancies?

A

When the expectations of the experimenter affect how the participant behaves. Not limited to humans!

9
Q

What are 2 examples of experimenter expectancies?

A
  • The Rosenthal effect (educational “bloomers”)
  • Maze-bright versus maze-dull rats

10
Q

How do we control for experimenter expectancies?

A
  • Standardization (instructions scripted, recorded in advance, or presented via computer).
  • Objectivity (make coding schemas as objective as possible; use automated recording equipment)
  • Single-blind research
11
Q

What are some participant errors?

A

Random: Carelessness, distraction
Bias: Demand Characteristics, good participant effect, response bias

12
Q

What are demand characteristics?

A

Features of an experiment that inadvertently cue participants to act in a certain way.

13
Q

What is the good participant effect?

A

Tendency for participants to behave as they perceive the researcher wants them to behave.

14
Q

What is it called when demand characteristics and the good participant effect work together?

A

Pact of Ignorance (Orne 1968).

15
Q

How do we control for demand characteristics?

A

Conduct double-blind research

Deception

16
Q

What is response bias?

A

When the context affects the way a participant responds (e.g., yea-sayers vs. nay-sayers). Social desirability is also an issue.

17
Q

How do we control for response bias?

A

Include both “agree” and “disagree” items.
Randomize question presentation
Careful review of questions/setting
Pilot testing

18
Q

What are some types of observer error?

A

Random: carelessness, distraction

Observer/scorer bias: Confirmation bias (we see what we expect or want to see).

19
Q

How do we control for observer error?

A

Eliminate the human observer: use a mechanical measure instead
Limit observer subjectivity: focus on observable behaviour, use a standardized coding schema
Make the observer “blind”: unaware of the experimental condition

20
Q

What is construct validity?

A

The extent to which your manipulation or measure actually represents the claimed construct (e.g., does your measure of extraversion actually capture extraversion, or something else like obnoxiousness?).

21
Q

What are some criteria needed for construct validity?

A

1) Reliability
2) Content validity
3) Convergent validity
4) Discriminant or divergent validity

22
Q

What is reliability?

A

The repeatability or consistency of the research

23
Q

What is test-retest reliability?

A

Comparable scores on retest: the correlation between scores at time 1 and time 2. One minus the observed correlation estimates the proportion of random error in the scores.

24
Q

What is inter-rater reliability?

A

Comparable scores between observers. Calculated the same way as test-retest reliability (a correlation between raters), or as percentage agreement.
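
Percentage agreement is simple to compute by hand; here is a sketch with hypothetical codes from two observers watching the same ten trials:

```python
# Hypothetical codes assigned by two observers to the same 10 trials.
rater_a = ["hit", "miss", "hit", "hit", "miss", "hit", "miss", "hit", "hit", "miss"]
rater_b = ["hit", "miss", "hit", "miss", "miss", "hit", "miss", "hit", "hit", "hit"]

# Count trials where both observers assigned the same code.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = 100 * agreements / len(rater_a)
print(f"{percent_agreement:.0f}% agreement")  # 8 of 10 trials match: 80%
```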

25
Q

What is internal consistency?

A

The extent to which responses to items that purport to measure the same construct are similar. Variability across items may be due to random error, or to more than one construct being assessed.

26
Q

How do we test internal consistency?

A

Average inter-item correlation
Split-half correlation
Cronbach’s Alpha
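
Cronbach's alpha can be computed from the standard formula, α = k/(k−1) · (1 − Σ item variances / variance of totals). A sketch with a hypothetical 5-participant, 4-item data set:

```python
import statistics

# Hypothetical responses: rows = participants, columns = items on the same scale.
responses = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
]

k = len(responses[0])                 # number of items
items = list(zip(*responses))         # column-wise item scores
item_variances = [statistics.variance(item) for item in items]
total_scores = [sum(row) for row in responses]
total_variance = statistics.variance(total_scores)

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```

When items all track the same construct, total-score variance is much larger than the sum of the item variances, and alpha approaches 1.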

27
Q

How do we improve internal consistency?

A

Add items/questions (random error balances out). Create better items/questions (reduces potential variability in interpretation).

28
Q

What is content validity?

A

The extent to which a measure covers all aspects of a construct (e.g., a measure of love should cover commitment, sexual attraction, and liking).

29
Q

How do we make sure something measures all aspects of a construct?

A

Use theory, definitions, and experts. Make sure to measure all component dimensions, and have a large enough set of measures for each dimension.

30
Q

What is convergent validity?

A

The extent to which a measure correlates with other indicators of the same construct (e.g., people scoring high on your measure should also score high on other measures of the same construct).

31
Q

How do we assess convergent validity?

A

Similar measures, known comparison groups, other indicators of construct.

32
Q

What is discriminant validity?

A

The extent to which your measure is distinguishable from related constructs. People scoring high on your measure should not be scoring as high on measures of similar constructs.

33
Q

What is divergent validity?

A

The extent to which your measure is distinguishable from other constructs (unrelated constructs). People scoring high on your measure should not also score high on measures of the “wrong” construct.

34
Q

What else do you need in a research study?

A

The best measure is the best fit for the research context. Additional issues: is the scale appropriate to the context? Is the measure sensitive enough?

35
Q

What is sensitivity?

A

Ability of measure to detect effects. Is the measure strong enough for what you want to study? Does your measure minimize the influence of error?

36
Q

How do we achieve sensitivity in measurement?

A

Use a measure with score variance (avoid restriction of range)
Avoid all-or-nothing measures (ask “how much” instead)
Add scale points to the rating scale
Pilot test the measure