Exam 1: Measurement Concepts Flashcards
Behavior
Anything an organism does that can be measured.
Measurement
The assignment of scores to individuals so that the scores represent some characteristic of the individuals.
Variable
Any characteristic or condition that can vary and that can be manipulated, controlled, or measured in research.
Must have at least two levels (values).
Conceptual Definition
Explains the meaning of a variable or concept in theoretical terms, describing what it represents abstractly or generally.
Operational Definition
A definition of a variable in terms of how it is precisely measured, such as using self-reports, behavioral observations, or physiological measurements.
Self-Report Measures
Measures in which participants provide information about themselves, typically through questionnaires or interviews.
Behavioral Measures
Measures that involve observing and recording behavior, either in structured tasks or in natural settings.
Physiological Measures
Measures that involve recording physiological processes, such as heart rate, hormone levels, or brain activity.
Converging Operations
The use of multiple operational definitions to measure the same construct, providing evidence that the construct is being measured effectively.
Levels of Measurement
Four categories (nominal, ordinal, interval, ratio) that specify the kind of information scores convey and the statistical procedures that are appropriate for them.
Nominal Level
A level of measurement used for categorical variables, involving scores that are labels for different categories without any quantitative meaning.
Ordinal Level
A level of measurement that involves scores representing the rank order of individuals but not equal intervals between ranks.
Interval Level
A level of measurement using numerical scales with equal intervals between scores but no true zero point, such as temperature scales.
Ratio Level
A level of measurement with equal intervals and a true zero point, allowing for statements about ratios, like weight or reaction time.
Reliability
The consistency of a measure, including test-retest reliability, internal consistency, and interrater reliability.
Test-Retest Reliability
The consistency of a measure over time, assessed by testing the same participants at two different times.
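In practice, test-retest reliability is usually quantified as the Pearson correlation between the two administrations. A minimal Python sketch (the scores shown are made up for illustration; function names are not from the card set):

```python
def pearson_r(x, y):
    # Pearson correlation coefficient, computed from first principles
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five participants tested at two times
time1 = [10, 12, 15, 18, 20]
time2 = [11, 12, 16, 17, 21]
retest_r = pearson_r(time1, time2)  # values near 1 indicate high stability
```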
Internal Consistency
The consistency of people’s responses across the items on a multiple-item measure, often assessed with split-half correlations or Cronbach’s alpha.
Split-Half Correlation
A method of assessing internal consistency by dividing the items on a test into two halves and correlating participants' scores on the two halves.
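A common way to carry this out is an odd/even item split, followed by the Spearman-Brown correction (which adjusts the half-test correlation up to an estimate for the full-length test). A sketch under those assumptions:

```python
def pearson_r(x, y):
    # Pearson correlation coefficient, computed from first principles
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(items):
    """items: one list of scores per item (one score per respondent).
    Splits items into odd- and even-numbered halves, correlates the
    half-test totals, then applies the Spearman-Brown correction."""
    odd_totals = [sum(scores) for scores in zip(*items[0::2])]
    even_totals = [sum(scores) for scores in zip(*items[1::2])]
    r_half = pearson_r(odd_totals, even_totals)
    return (2 * r_half) / (1 + r_half)
```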
Cronbach’s α
A measure of internal consistency that reflects how well the items in a scale measure the same construct. Conceptually, α is the mean of all possible split-half correlations for a set of items.
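The standard computing formula is α = (k / (k − 1)) × (1 − Σ item variances / variance of total scores), where k is the number of items. A minimal sketch (variance type — population vs. sample — cancels out as long as it is used consistently):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per item (one score per respondent)."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    # Total score for each respondent across all items
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
```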
Interrater Reliability
The extent to which different observers or raters agree in their measurements or judgments.
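One common index of interrater agreement for categorical judgments is Cohen's kappa, which corrects raw percent agreement for the agreement expected by chance. A minimal sketch for two raters:

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical judgments:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category proportions
    expected = sum(c1[cat] * c2[cat] for cat in c1) / (n * n)
    return (observed - expected) / (1 - expected)
```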
Validity
The extent to which a measure accurately represents the variable it is intended to measure.
Face Validity
The extent to which a measure appears to assess what it is intended to assess, based on a subjective judgment.
Content Validity
The extent to which a measure covers the full range of the construct it aims to measure, ensuring it represents all aspects.
Criterion Validity
The extent to which a measure is related to an outcome or criterion, indicating how well it predicts or reflects the criterion.
Concurrent Validity
A type of criterion validity in which the measure and the criterion are assessed at the same time.
Predictive Validity
A type of criterion validity assessed by examining how well a measure predicts future outcomes or behaviors.
Convergent Validity
The extent to which scores on a measure are correlated with scores on other measures of the same construct.
Discriminant Validity
The extent to which a measure does not correlate with measures of different constructs, demonstrating it is distinct.