Quantitative Approaches Flashcards
Essential steps in measurement
Define construct
Operationalization
Determine measurement procedure
abstract idea, theme, or subject matter that a researcher wants to measure. Because it is initially abstract, it must be defined.
Construct
specific rules that govern how numbers can be used to represent some quality of the construct that is being measured.
Scales of measurement
4 scales of measurement
Nominal
Ordinal
Interval
Ratio
used to categorize characteristics of subjects
Nominal scale
Used to classify ranked categories
Ordinal scales
Have equal distances between units of measurement but no true zero point (e.g., temperature in °C)
Interval scales
Indicate the absolute amount of the variable measured; have a true zero point (e.g., weight in kg)
Ratio scale
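As an illustrative sketch in Python (hypothetical variables and values), each successive scale supports additional operations: counting for nominal, ordering for ordinal, meaningful differences for interval, and meaningful ratios for ratio data:

```python
from statistics import mean, median, mode

# Hypothetical variables, one per scale of measurement
blood_type = ["A", "O", "O", "B", "A", "O"]    # nominal: unordered categories
pain_rank  = [1, 3, 2, 4, 2, 3]                # ordinal: ranked categories
temp_c     = [36.5, 37.0, 36.8, 38.1, 36.9]    # interval: equal units, no true zero
weight_kg  = [70.2, 81.5, 65.0, 90.3, 75.8]    # ratio: true zero point

print(mode(blood_type))             # nominal supports counting and the mode
print(median(pain_rank))            # ordinal adds ordering, so the median
print(mean(temp_c))                 # interval adds equal distances, so the mean
print(weight_kg[1] / weight_kg[0])  # ratio adds meaningful ratios (~1.16x heavier)
```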
The degree of error present in any measurement; the difference between the observed score and the subject's true score
Measurement error
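In classical test theory, this is commonly written as

$X = T + E$

where $X$ is the observed score, $T$ the true score, and $E$ the measurement error.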
Two types of error
Systematic and random
Predictable errors that occur when the instrument used consistently overestimates or underestimates the true score
Systematic errors
occurs by chance and can affect a subject’s score in an unpredictable manner
Random error
Factors that can contribute to random errors:
Fatigue of the subject
Environmental influences
Inattention of the subject or rater
Ways to reduce measurement error
Standardized instrument
Train raters
Take repeated measurements
In order to reduce measurement error, we should ensure that our measures are _____ and _____
Reliable and valid
the degree of consistency with which an instrument or rater measures a variable
Reliability
The ratio of the true score variance to the total variance observed on an assessment
Reliability coefficient
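Expressed as a formula, in standard classical test theory notation:

$r_{XX} = \frac{\sigma_T^2}{\sigma_X^2} = \frac{\sigma_T^2}{\sigma_T^2 + \sigma_E^2}$

where $\sigma_T^2$ is the true score variance, $\sigma_E^2$ the error variance, and $\sigma_X^2$ the total observed variance.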
used to determine if an assessment is reliable
Empirical evaluation of an assessment
The assessment is empirically evaluated through what 4 methods:
- Test-retest reliability
- Split-half reliability
- Parallel forms (equivalence) reliability
- Internal consistency
A metric indicating whether an assessment provides consistent results when it is administered on two different occasions
Test-retest reliability
Time 1 score
First variable
Time 2 score
Second variable
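Treating the Time 1 and Time 2 scores as the two variables, test-retest reliability can be estimated as their Pearson correlation. A minimal sketch with hypothetical scores (statistics.correlation requires Python 3.10+):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores for the same six subjects on two occasions
time1 = [12, 15, 9, 20, 17, 14]   # first variable:  Time 1 scores
time2 = [13, 14, 10, 19, 18, 15]  # second variable: Time 2 scores

# Test-retest reliability as the Pearson correlation between occasions;
# values near 1.0 indicate consistent results across administrations
r_test_retest = correlation(time1, time2)
print(f"Test-retest reliability (Pearson r): {r_test_retest:.2f}")
```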
Assesses the reliability of a questionnaire by dividing it into two halves and correlating the scores from each half of the assessment
Split-half reliability
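A minimal sketch with hypothetical item scores, using the common odd/even split; the Spearman-Brown correction (a standard step not mentioned on the card) adjusts the half-test correlation up to a full-length estimate:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical item scores: rows are subjects, columns are 6 items
scores = [
    [3, 4, 3, 5, 4, 4],
    [2, 2, 3, 2, 3, 2],
    [5, 4, 5, 5, 4, 5],
    [1, 2, 1, 2, 2, 1],
]

# Split the questionnaire into two halves (odd- vs. even-numbered items)
half1 = [sum(row[0::2]) for row in scores]
half2 = [sum(row[1::2]) for row in scores]

r_half = correlation(half1, half2)

# Spearman-Brown correction: estimates reliability of the full-length test
r_full = (2 * r_half) / (1 + r_half)
print(f"Split-half r: {r_half:.2f}, Spearman-Brown corrected: {r_full:.2f}")
```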
An assessment’s alternative forms are administered to subjects at the same time, and the scores from the two forms of the assessment are then correlated
Parallel forms reliability
extent to which the items that make up an assessment covary or correlate with each other.
Internal consistency
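The most common internal-consistency statistic (not named on the card) is Cronbach's alpha, which rises as items covary more strongly. A minimal sketch with hypothetical item scores, using only the standard library:

```python
from statistics import variance

# Hypothetical item scores: rows are subjects, columns are 4 items
scores = [
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [1, 2, 1, 2],
    [4, 4, 4, 3],
]

k = len(scores[0])                 # number of items
items = list(zip(*scores))         # one tuple of scores per item
item_vars = [variance(item) for item in items]
total_var = variance([sum(row) for row in scores])

# Cronbach's alpha: approaches 1.0 as the items covary with each other
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```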
The presence of the rater may impact the behavior of the subjects (The Hawthorne effect)
Observer presence and characteristics
Bias may be introduced when one rater takes two or more measurements of the same item. The rater may be biased by remembering the score on the subject’s previous attempt/performance.
Rater bias
two sources of observer/rater error that are typically examined
Observer presence and characteristics
Rater bias
The degree of agreement or consistency among the scores that two or more raters assign when observing the same subjects
Inter-rater reliability
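One common agreement statistic for two raters assigning categorical scores (not named on the card) is Cohen's kappa, which corrects observed agreement for chance. A minimal sketch with hypothetical ratings:

```python
from collections import Counter

# Hypothetical categorical ratings of ten subjects by two raters
rater_a = ["pass", "pass", "fail", "pass", "fail",
           "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail",
           "pass", "pass", "pass", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal proportions
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
categories = set(rater_a) | set(rater_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)

# Cohen's kappa: chance-corrected agreement between the two raters
kappa = (observed - expected) / (1 - expected)
print(f"Observed agreement: {observed:.2f}, Cohen's kappa: {kappa:.2f}")
```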
How do we make sure a measurement is valid?
By making sure that the instrument being used measures what it is supposed to measure
4 types of validity
- Face validity
- Content validity
- Criterion validity
- Construct validity
The assumption of validity of a measuring instrument based on its appearance as a reasonable measure of a given variable
Face validity
the adequacy with which an assessment's items represent the full content domain of the construct it aims to measure
Content validity
The ability of an assessment to produce results that are in agreement with or predict a known criterion assessment or known variable.
Criterion validity
Criterion validity includes two types of evidence
Concurrent validity
Predictive validity
the degree to which the outcomes of one test correlate with outcomes on a criterion test, when both are given at the same time
Concurrent validity
the degree to which an instrument predicts some future performance or outcome
Predictive validity
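Both kinds of criterion-validity evidence reduce to correlating the instrument's scores with criterion scores; a minimal sketch with hypothetical data, where the criterion is measured at the same time (concurrent) or later (predictive):

```python
from statistics import correlation  # Python 3.10+

# Hypothetical data: a new screening test and an established criterion measure
new_test  = [55, 62, 47, 71, 66, 58]
criterion = [52, 65, 45, 70, 69, 55]  # same session -> concurrent; later -> predictive

r_validity = correlation(new_test, criterion)
print(f"Criterion validity (Pearson r): {r_validity:.2f}")
```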
The degree to which an instrument actually measures the theoretical construct it is intended to measure
Construct validity