Chapter 5: Identifying Good Measurement Flashcards

1
Q

What are three ways psychologists measure variables?

A
  1. self-report
  2. observational
  3. physiological
2
Q

What is operationalization?

A

the process of turning a concept of interest into a measured or manipulated variable

3
Q

Two ways a variable can be expressed?

A
  1. conceptual: the definition of a variable at an abstract level
  2. operational: represents a researcher's specific decision about how to measure or manipulate the conceptual variable

4
Q

Operationalization of a conceptual variable? Steps?

A
  • researchers start by developing careful definitions of their constructs (conceptual variables) and then create operational definitions.
5
Q

It is important to remember that any conceptual variable can be ____ in a wide variety of ways. This is where what comes into the research process?

A
  • operationalized
  • creativity

6
Q

What is a self report measure?

A
  • operationalizes a variable by recording people's answers to verbal questions about themselves in a questionnaire or interview.
7
Q

What is an observational measure?

A
  • operationalizes a variable by recording observable behaviours or physical traces of behaviour
8
Q

What is a physiological measure?

A
  • operationalizes a variable by recording biological data such as brain activity, hormone levels, or heart rate. Usually this requires equipment to amplify, record, and analyze biological activity.
    ex: measuring moment-to-moment happiness via facial EMG.
9
Q

Many people erroneously believe which of the three measures is the most accurate?

A
  • physiological measures

L> no matter the measure, it must HAVE good construct validity

10
Q

All variables must have at least two levels... but the levels of operational variables may be what?

A
  • coded using different scales of measurement
11
Q

What do we first classify operational variables as? (2)

A
  • categorical variables
  • quantitative variables

12
Q

What is a categorical variable?

A
  • its levels are categories (also called nominal variables)
  • if numbers are used as labels, they have no numerical meaning!!
    ex: sex (levels are male and female)
13
Q

What is a quantitative variable?

A
  • coded with meaningful numbers. Height and weight are examples
14
Q

What are the three kinds of quantitative variables?

A
  • Ordinal Scale
  • Interval Scale
  • Ratio Scale
15
Q

What is an ordinal scale?

A
  • applies when the numerals of a quantitative variable represent rank order.
    L> we know they are different but not HOW different they are.
16
Q

What is an interval scale?

A
  • applies to the numerals of a quantitative variable that meet two conditions: first, the numerals represent equal intervals between levels; second, there is no true zero!
  • we cannot say things like "twice as hot as something else," since there is no true zero!
17
Q

What is a ratio scale?

A
  • applies when the numerals of a quantitative variable have equal intervals and when a value of zero truly means "none" of the variable being measured (a true zero).
    ex: weight or income!
18
Q

Construct Validity?

A
  • whether the operationalized variable is measuring what it is supposed to measure
19
Q

Reliability??

A
  • how consistent is the measure??
20
Q

The construct validity of a measure has what two aspects?

A
  • reliability
  • validity

21
Q

What are the three types of reliability?

A
  1. Test-retest
  2. Interrater
  3. Internal
22
Q

Test retest reliability?

- most relevant to what measures?

A
  • the researcher gets consistent results every time they use the measure
  • can be relevant for all three types of measurement
  • mostly relevant, though, when measuring constructs we suspect should be stable over time, i.e. not something like subjective well-being.
23
Q

Interrater Reliability??

- most relevant to what measures?

A
  • two or more independent observers will come up with the same (or similar) findings.
  • most relevant for observational measures
24
Q

Internal Reliability???

  • often researchers test this how?
  • interpretation?
  • what kind of claim is this?
A
  • a study participant gives a consistent pattern of answers no matter how the researcher phrases the question.
  • researchers collect data from samples and evaluate the results.
  • they use statistical devices such as scatterplots and the correlation coefficient r.
  • a version of an association claim*
25
Q

Scatterplot significance?

A
  • you can see whether two ratings agree (points near the line of best fit) or disagree (points scattered away from the line of best fit)
26
Q

Correlation coefficient r??

A
  • indicates how close the dots are to the line on a scatterplot
  • range = -1 to +1

27
Q

Describe the relationships seen on a scatterplot.

A
  • strong = points close to the line
  • weak = points spread out
  • a scatterplot shows both the direction and the strength of the relationship
28
Q

Relationship between scatterplot and r?

A
  • when the plot's slope is positive, r is positive
  • when the slope is negative, r is negative
29
Q

Within r’s range what is a strong relationship?Weak?

A
  • strong = close to either +1 or -1
  • weak = close to zero
  • with no relationship, the r value will be zero or very close to it.
30
Q

Test-Retest Reliability and r?

  • r is + and strong
  • r is + and weak?
A
  • measure the same participants at least twice.
  • r is + and strong = 0.5 or above (good test-retest reliability)
  • r is + but weak = we know participants' scores have changed... poor measurement reliability.
31
Q

Interrater Reliability and r?

  • r is + and strong?
  • r is + and weak
  • neg r?
A

  • two observers rate the same participants at the same time.
  • r is + and strong (r = 0.7 or higher) = good interrater reliability
  • if r is + and weak = we cannot trust the observers' ratings, so retrain coders or refine the operational definition.
  • a negative r would indicate a big problem.
    L> when assessing reliability, a negative r is unwanted and rare.

32
Q

Interrater reliability and r?

-Kappa??

A
  • kappa measures the extent to which two raters place participants in the same categories.
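A minimal sketch of kappa's logic, assuming hypothetical category codes from two observers: it is the observed agreement corrected for the agreement expected by chance, (observed − expected) / (1 − expected).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical codes."""
    n = len(rater_a)
    # Proportion of cases both raters placed in the same category
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical: two observers coding six participants' facial expressions
a = ["smile", "smile", "frown", "smile", "frown", "frown"]
b = ["smile", "smile", "frown", "frown", "frown", "frown"]
kappa = cohens_kappa(a, b)  # raw agreement is 5/6, but kappa ≈ 0.67
```

Note how chance correction matters: the raters agree on 5 of 6 cases, yet kappa is only about 0.67 because some of that agreement would happen by luck alone.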
33
Q

Internal Reliability?
- mainly for?
-good reliability =? (r)
L> if it is good we can?

A
  • mainly for self-report measures that contain more than one question to measure the same construct
    L> are responses consistent even when questions are phrased differently?
  • good reliability = the items correlate strongly with one another... if so, we can take the average of all items and create a single score for each person.
34
Q

Internal Reliability:
- Cronbach’s alpha?
L> what happens before doing this?
L> what does CA tell us

A
  • first collect data, then compute all possible correlations among the items.
  • Cronbach's alpha gives us one value from averaging the inter-item correlations and the number of items in the scale... the closer to 1, the better the scale's reliability (0.7 or higher for self-report questionnaires).
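A sketch of the computation with made-up questionnaire data, using the common variance form of alpha, α = (k/(k−1))·(1 − Σ item variances / variance of total scores), which is equivalent to the inter-item-correlation view described above:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (same participants in each)."""
    k = len(items)
    item_variance_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each person's total score
    return (k / (k - 1)) * (1 - item_variance_sum / pvariance(totals))

# Hypothetical: three self-report items answered by five participants;
# responses move together across items, so alpha should be high
item_1 = [4, 5, 3, 5, 2]
item_2 = [4, 4, 3, 5, 2]
item_3 = [5, 5, 3, 4, 2]
alpha = cronbach_alpha([item_1, item_2, item_3])  # well above the 0.7 rule of thumb
```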
35
Q

Internal Reliability
- Cronbach’s alpha
L> bad reliability?
L> good reliability?

A
  • bad = do not combine all items into one scale... revise the items or average only the items that correlate strongly together.
  • good = average all items together
36
Q

For internal reliability why do we average items?

A
  • averaging cancels out the random errors in individual items...
37
Q

Something can be reliable and not ___.

A
  • valid
    L> but it cannot be valid without being reliable

38
Q

Construct validity is easier/harder for measures of abstract constructs.

A

harder than it is for measures of concrete constructs.

39
Q

What are the first kinds of measurement validity used to start?

A
  1. Face validity
  2. Content validity
    * both depend on experts' judgements
40
Q

Face validity?

A
  • the extent to which it is a plausible measure of the variable in question... aka if it looks as if it should be a good measure, it has face validity.
  • checked by consulting experts
41
Q

Content validity?

A
  • a measure must capture all parts of a defined construct
    * experts are consulted

42
Q

After face validity and content validity are examined what are the next validities tested?(2)

A
  1. Predictive
  2. Concurrent
    - both evaluate whether the measure under consideration is related to a concrete outcome that it should be related to according to the theory being tested.
43
Q

Predictive validity?

A
  • testing the correlation with the outcome in the future.
44
Q

Concurrent validity?

A
  • when testing the correlation with an outcome at the same time
45
Q

We use what two things to assess the validity of the measurement in question?

A

scatterplots and r

46
Q

What two types of validity provide powerful evidence for construct validity?

A
  • predictive and concurrent
47
Q

No matter the operationalization if it is a good measure of its construct it should ___ with a behaviour or outcome that is related to the construct.

A

correlate

48
Q

Instead of going by correlation coefficients what else can we use to represent evidence for predictive and concurrent validity?

A
  • known-groups paradigm
    L> researchers see whether scores on the measure can discriminate among a set of groups whose behaviour is already well understood.
    ex: testing cortisol levels as a measure of stress in a group about to give a public speech versus those in the audience. We know public speaking causes stress, but what about being in an audience?
49
Q

Known-groups paradigm can be used for what types of measurements?

A
  • self-report and physiological
    ex: self-report
    L> Beck tested the BDI on people who are depressed and those who are not
50
Q

Besides Face, content, predictive and concurrent validity what other two criterions for validity are there?

A
  • convergent and discriminant.
51
Q

Convergent Validity?

A

the measure should correlate more strongly with other measures of the same construct

52
Q

Discriminant Validity?(divergent)

A

the measure should correlate less strongly with measures of other distinct constructs.

53
Q

When do researchers worry about discriminant validity?

A
  • when they are worried their measure might accidentally be capturing a similar but distinct construct.
54
Q

A measure may be less valid than it is reliable, but it cannot be what?

A
  • cannot be more valid than it is reliable
    L> reliability has to do with how well a measure correlates with itself
    L> validity has to do with how well a measure is associated with some other similar but not identical measure.
55
Q

Reliability is ____(but not sufficient) for validity.

A

necessary