Exam 2 - Chapter 5 (Measurement Issues) Flashcards

1
Q

True Score vs. Measurement Error

A

True Score: Someone’s real value on a given variable
* (e.g., true intelligence, true reaction time, true happiness)
* A true score cannot be directly measured; measurement error will always affect the observed score.

Measurement Error: Anything that causes the measured score to deviate from the real value.
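In classical test theory terms, observed score = true score + measurement error. A minimal sketch of this idea (made-up numbers; NumPy assumed), showing why any single measurement misses the true score but the average of many drifts toward it:

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 100                      # the real (unobservable) value
errors = rng.normal(0, 15, size=50)   # random measurement error
observed = true_score + errors        # observed = true score + error

print(observed[:3])       # any single measurement misses the true score
print(observed.mean())    # the average of many drifts back toward 100
```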

2
Q

Reliability

A

Reliability = Consistency

Reliability assessment helps us figure out if our measure is consistent/stable

A reliable measure produces scores that stay as close to the real score as possible, consistently across measurements.

3
Q

Validity

A

Validity = Accuracy

Validity helps us figure out if we’re truly studying what we intended to study.

4
Q

Differentiating: Reliability & Validity

Reliability vs. Validity

A

A good study should be BOTH Reliable & Valid.

Consistency and Accuracy are important and work together.

5
Q

Types of Reliability

A
  1. Internal Consistency Reliability – Reliability of items across people
  • Item-Total Correlations
  • Split-Half Reliability
  • Cronbach’s Alpha
  2. Reliability across Time – Reliability of scales over time (or versions)
  • Test-Retest and Alternate Forms
  3. Reliability across People – Reliability of ratings across raters
  • Inter-Rater Agreement
6
Q

Internal Consistency Reliability: Item Total Correlation

A

Item Total Correlation: How well a specific item tracks responses to the rest of the scale.

  • Useful for creating & refining questionnaires
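A sketch of one way the item-total correlation might be computed, the “corrected” version that correlates each item with the sum of the other items; the (n_people, n_items) NumPy layout and function name are assumptions for illustration:

```python
import numpy as np

def corrected_item_total(data):
    """Correlation of each item with the total of the remaining items."""
    n_items = data.shape[1]
    corrs = []
    for i in range(n_items):
        rest_total = np.delete(data, i, axis=1).sum(axis=1)  # drop item i
        corrs.append(np.corrcoef(data[:, i], rest_total)[0, 1])
    return corrs  # low values flag items that don't track the scale
```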
7
Q

Internal Consistency Reliability: Split-Half Correlation

A

Split-Half Correlation: Split the items in half, and find the correlation between the two halves.

Issue: Room for malpractice, because “half” can be defined in many different ways.

  • Someone could keep picking different “halves” until they get a correlation coefficient they like
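A sketch assuming the same (n_people, n_items) layout, using a fixed odd/even split to sidestep the cherry-picking issue above; the Spearman-Brown step at the end (not part of the card) is the standard correction from half-test to full-test reliability:

```python
import numpy as np

def split_half_reliability(data):
    """Correlate odd-item totals with even-item totals."""
    odd = data[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    even = data[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    r = np.corrcoef(odd, even)[0, 1]
    # Spearman-Brown: step the half-length r up to full-test reliability
    return 2 * r / (1 + r)
```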
8
Q

Internal Consistency Reliability

A

How much the individual items in a scale/survey relate to each other, i.e., how consistently they measure the same concept or trait.

The more they overlap/are related, the greater the Internal Consistency

9
Q

Internal Consistency Reliability: Cronbach’s Alpha

A

Cronbach’s Alpha: A statistical solution for estimating reliability across every possible split (i.e., the average of every possible combination of “halves”)

  • A common benchmark: alpha > 0.8
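A sketch of the standard computing formula for alpha, again assuming an (n_people, n_items) response matrix:

```python
import numpy as np

def cronbach_alpha(data):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1).sum()   # variance of each item
    total_var = data.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```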
10
Q

Reliability Across Time: Test-Retest Reliability

A

Test-Retest: How consistently a person’s scores agree across multiple time points.

  • Give the same test at two points in time.
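A minimal sketch with hypothetical scores for five people tested twice; test-retest reliability is simply the correlation between the two sessions:

```python
import numpy as np

time1 = np.array([10, 14, 9, 17, 12])   # scores at time 1 (hypothetical)
time2 = np.array([11, 13, 10, 16, 12])  # same people at time 2

r = np.corrcoef(time1, time2)[0, 1]     # test-retest reliability
print(round(r, 2))                       # high r => stable measure
```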
11
Q

Reliability Across Time: Alternate Forms Reliability

A

Alternate Forms Reliability: The correlation between two different forms (versions) of the same test.

* Give two different forms of the same test at two points in time. (Useful when the sessions are close together, since different forms reduce memory/practice effects.)

12
Q

Reliability Across People:
Inter-Rater Reliability

A

Inter-Rater Reliability: Degree of agreement or consistency between multiple people (raters) assessing or scoring the same thing.

  • It shows how much raters produce similar results when evaluating the same subject.
  • Cohen’s Kappa: the statistic used to calculate this consistency between raters
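A sketch of Cohen’s kappa for two raters’ category labels (the function name and inputs are illustrative); observed agreement is corrected for the agreement two raters would reach by chance:

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """kappa = (observed agreement - chance agreement) / (1 - chance)."""
    rater1, rater2 = np.asarray(rater1), np.asarray(rater2)
    p_o = np.mean(rater1 == rater2)                 # observed agreement
    p_e = sum(np.mean(rater1 == c) * np.mean(rater2 == c)
              for c in np.union1d(rater1, rater2))  # expected by chance
    return (p_o - p_e) / (1 - p_e)
```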
13
Q

Types of Validity

A
  • Construct validity

What are we measuring?

  • Face validity
  • Content validity
  • Concurrent validity

How do our measures relate to other measures?

  • Convergent validity
  • Divergent (i.e., discriminant) validity
  • Predictive validity
14
Q

Construct Validity

A

How well an operational definition of a variable accurately reflects the variable being measured or manipulated.

  • Other types of Validity fall under Construct Validity
15
Q

Face Validity

A

How well a measurement device appears to accurately measure a variable. (not that useful)

  • Face Validity is not sufficient to conclude that a measure is valid.
16
Q

Content Validity

A

How well a test or survey relates to/covers all parts of the concept it aims to measure.

(pretty useful)

17
Q

Convergent Validity

A

Checks if a measure is strongly related to other measures of the same construct.

  • Our measure should be related to other measures that assess a similar construct
  • Asks: Does my new scale agree with the established scale that measures the same thing?
18
Q

Concurrent Validity

A

Tells us whether the measurement successfully differentiates people who are theoretically supposed to be different

  • Test this by giving the measure to a group that should differ from the target group and checking that their scores differ as expected (e.g., score much lower)
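A minimal known-groups sketch with hypothetical scores; a measure with concurrent validity should separate groups that theory says are different:

```python
import numpy as np

# Hypothetical depression scores from two groups that should differ
clinical = np.array([28, 31, 25, 30, 27])    # target group
community = np.array([12, 9, 14, 11, 10])    # comparison group

# Concurrent validity check: groups that should differ, do differ
print(clinical.mean(), community.mean())
```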
19
Q

Divergent/Discriminant Validity

A

Tests whether a measure is not strongly related to different or unrelated constructs

  • This confirms that the measure accurately distinguishes between distinct concepts.
  • Our measure should not be related to other measures that assess different constructs.
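Convergent and divergent validity are two sides of one correlation pattern, sketched below with hypothetical variable names:

```python
import numpy as np

def validity_pattern(new_measure, same_construct, other_construct):
    """Convergent: high r with a measure of the SAME construct.
    Divergent: low r with a measure of a DIFFERENT construct."""
    convergent_r = np.corrcoef(new_measure, same_construct)[0, 1]
    divergent_r = np.corrcoef(new_measure, other_construct)[0, 1]
    return convergent_r, divergent_r
```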
20
Q

Predictive Validity

A

How well our measure predicts future scores on another test related to the concept it is assessing.

  • e.g.: a measure of depression should predict loneliness at a future time