Quantitative Approaches Flashcards

1
Q

Essential steps in measurement

A

Define the construct
Operationalize the construct
Determine the measurement procedure

2
Q

abstract idea, theme, or subject matter that a researcher wants to measure. Because it is initially abstract, it must be defined.

A

Construct

3
Q

specific rules that govern how numbers can be used to represent some quality of the construct that is being measured.

A

Scales of measurement

4
Q

4 scales of measurement

A

Nominal
Ordinal
Interval
Ratio

5
Q

used to categorize characteristics of subjects

A

Nominal scale

6
Q

Used to classify ranked categories

A

Ordinal scales

7
Q

Have equal distance between units of measurement

A

Interval scales

8
Q

Indicates the absolute amount of the variable being measured (has a true zero point)

A

Ratio scale
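
A quick sketch tying the four scales to illustrative variables; the examples below are my own additions, not from the deck:

    # Illustrative examples of variables at each scale of measurement (assumed, not from the cards)
    scales = {
        "nominal":  ["blood type", "diagnosis category"],        # labels only, no order
        "ordinal":  ["pain rank (mild < moderate < severe)"],    # ordered, spacing not equal
        "interval": ["temperature in Celsius"],                  # equal units, no true zero
        "ratio":    ["height in cm", "grip strength in kg"],     # equal units plus a true zero
    }
    for scale, examples in scales.items():
        print(f"{scale}: {', '.join(examples)}")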

9
Q

General degree of error present in measurement

A

Measurement error

10
Q

Two types of error

A

Systematic and random

11
Q

Predictable errors; occur when the instrument used over- or underestimates the true score

A

Systematic errors

12
Q

occurs by chance and can affect a subject’s score in an unpredictable manner

A

Random error
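
In classical test theory terms (my framing, not stated on the cards), an observed score is the true score plus error, with the error term split into the two components defined above:

    X = T + E_{\text{systematic}} + E_{\text{random}}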

13
Q

Factors that can contribute to random errors:

A

Fatigue of the subject
Environmental influences
Inattention of the subject or rater

14
Q

Ways to reduce measurement error

A

Standardized instrument
Train raters
Take repeated measurements
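
The cards do not spell out why repeated measurements help, but the usual reasoning is that averaging n independent measurements shrinks only the random component of error; the standard error of the mean is

    SE_{\bar{X}} = \frac{\sigma}{\sqrt{n}}

so, for example, quadrupling the number of measurements roughly halves the random error, while systematic error is untouched.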

15
Q

In order to reduce measurement error we should ensure that our measures are _____ and _____

A

Reliable and valid

16
Q

the degree of consistency with which an instrument or rater measures a variable

A

Reliability

17
Q

The ratio of the true score variance to the total variance observed on an assessment

A

Reliability coefficient
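
Written out, the card's definition is (with \sigma^2_T the true score variance, \sigma^2_E the error variance, and \sigma^2_X the total observed variance):

    r_{XX} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_T}{\sigma^2_T + \sigma^2_E}

A coefficient near 1 means the observed variance is mostly true score variance; a coefficient near 0 means it is mostly measurement error.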

18
Q

used to determine if an assessment is reliable

A

Empirical evaluation of an assessment

19
Q

An assessment's reliability is empirically evaluated through which 4 methods:

A

  1. Test-retest reliability
  2. Split-half reliability
  3. Alternate forms (equivalency) reliability
  4. Internal consistency

20
Q

A metric indicating whether an assessment provides consistent results when it is administered on two different occasions

A

Test-retest reliability

21
Q

In test-retest reliability, the Time 1 score serves as the:

A

First variable

22
Q

In test-retest reliability, the Time 2 score serves as the:

A

Second variable
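
A minimal sketch of how test-retest reliability is typically computed, assuming the two administrations are stored as paired arrays (the scores below are invented for illustration):

    import numpy as np

    # Hypothetical scores from the same subjects on two occasions
    time1 = np.array([12, 15, 11, 18, 14, 16])  # first variable: Time 1 scores
    time2 = np.array([13, 14, 12, 17, 15, 16])  # second variable: Time 2 scores

    # Test-retest reliability is commonly reported as the Pearson correlation
    # between the two administrations (an intraclass correlation is another option).
    r = np.corrcoef(time1, time2)[0, 1]
    print(f"test-retest r = {r:.2f}")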

23
Q

assesses the reliability of questionnaires by dividing the assessment into sections and correlating scores from the two halves

A

Split-half reliability
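
A sketch of split-half reliability using an odd/even item split; the Spearman-Brown step is the standard correction for the halved test length, though the card itself does not mention it (all scores are made up):

    import numpy as np

    # Hypothetical item scores: rows = subjects, columns = items
    items = np.array([
        [3, 4, 3, 5, 2, 4],
        [2, 2, 3, 3, 2, 2],
        [5, 4, 5, 5, 4, 5],
        [1, 2, 1, 2, 1, 1],
        [4, 3, 4, 4, 3, 4],
    ])

    half1 = items[:, 0::2].sum(axis=1)   # odd-numbered items
    half2 = items[:, 1::2].sum(axis=1)   # even-numbered items

    r_half = np.corrcoef(half1, half2)[0, 1]   # correlation between the two halves
    r_full = 2 * r_half / (1 + r_half)         # Spearman-Brown correction to full length
    print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")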

24
Q

an assessment's alternate forms are administered to subjects at the same time, and then scores from the two forms of the assessment are correlated

A

Parallel (alternate) forms reliability

25
Q

extent to which the items that make up an assessment covary or correlate with each other.

A

Internal consistency
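
Internal consistency is usually summarized with Cronbach's alpha (not named on the card), which is built directly from how the items covary; a sketch with made-up item scores:

    import numpy as np

    # Hypothetical item scores: rows = subjects, columns = items
    items = np.array([
        [3, 4, 3, 5],
        [2, 2, 3, 3],
        [5, 4, 5, 5],
        [1, 2, 1, 2],
        [4, 3, 4, 4],
    ], dtype=float)

    k = items.shape[1]                                    # number of items
    item_vars = items.var(axis=0, ddof=1).sum()           # sum of the item variances
    total_var = items.sum(axis=1).var(ddof=1)             # variance of the total score
    alpha = (k / (k - 1)) * (1 - item_vars / total_var)   # Cronbach's alpha
    print(f"alpha = {alpha:.2f}")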

26
Q

The presence of the rater may impact the behavior of the subjects (The Hawthorne effect)

A

Observer presence and characteristics

27
Q

Bias may be introduced when one rater takes two or more measurements of the same item. The rater may be biased by remembering the score on the subject’s previous attempt/performance.

A

Rater bias

28
Q

two sources of observer/rater error that are typically examined

A

Observer presence and characteristics
Rater bias

29
Q

When two or more raters assign scores based on observing the same subjects, there may be variation in the scores. The degree of consistency between their scores is termed:

A

Inter-rater reliability
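
For categorical ratings, inter-rater reliability is often reported as percent agreement or Cohen's kappa; a sketch with two hypothetical raters scoring the same subjects:

    import numpy as np

    # Hypothetical categorical scores assigned by two raters to the same 8 subjects
    rater_a = np.array([1, 2, 2, 3, 1, 2, 3, 3])
    rater_b = np.array([1, 2, 1, 3, 1, 2, 3, 2])

    p_observed = np.mean(rater_a == rater_b)  # simple percent agreement

    # Cohen's kappa corrects the observed agreement for chance agreement
    categories = np.union1d(rater_a, rater_b)
    p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c) for c in categories)
    kappa = (p_observed - p_chance) / (1 - p_chance)
    print(f"agreement = {p_observed:.2f}, kappa = {kappa:.2f}")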

30
Q

How do we make sure a measurement is valid?

A

By making sure that the instrument being used measures what it is supposed to measure

31
Q

4 types of validity

A

  1. Face validity
  2. Content validity
  3. Criterion validity
  4. Construct validity

32
Q

The assumption of validity of a measuring instrument based on its appearance as a reasonable measure of a given variable

A

Face validity

33
Q

the adequacy with which an assessment is able to capture the construct it aims to measure

A

Content validity

34
Q

The ability of an assessment to produce results that are in agreement with or predict a known criterion assessment or known variable.

A

Criterion validity

35
Q

Criterion validity includes two types of evidence

A

Concurrent validity
Predictive validity

36
Q

the degree to which the outcomes of one test correlate with outcomes on a criterion test, when both are given at the same time

A

Concurrent validity

37
Q

an instrument is used to predict some future performance

A

Predictive validity
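
A sketch of the predictive-validity idea: scores on the instrument now are used to predict an outcome measured later, for example with a simple linear regression (all numbers below are invented):

    import numpy as np

    # Hypothetical: assessment scores at baseline and an outcome measured months later
    baseline_score = np.array([10, 14, 9, 18, 13, 16, 11, 15])
    future_outcome = np.array([22, 30, 20, 38, 27, 33, 24, 31])

    # Fit future_outcome = a * baseline_score + b and check the strength of prediction
    a, b = np.polyfit(baseline_score, future_outcome, 1)
    r = np.corrcoef(baseline_score, future_outcome)[0, 1]
    print(f"slope = {a:.2f}, intercept = {b:.2f}, validity coefficient r = {r:.2f}")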

38
Q

A type of measurement validity reflecting the degree to which an assessment actually measures the theoretical construct it is intended to measure

A

Construct validity