Chapter 8 Flashcards
Operational definition
A definition that describes something concretely, usually in terms of measurement (i.e., an instrument)
It tells us the way a study is measuring a variable.
Every variable in a quantitative study must have an operational definition because every variable must be measured; therefore…
The measures in a quantitative study should reflect the specific variables under study.
Physiological Measurement
Vital signs, Lab results, Glucose levels, etc.
Advantages:
Objective
Psychometric Instrumentation
Collect subjective information directly from subjects (e.g., tools that measure coping, stress, or self-esteem)
Key issues: reliability and validity
Use existing instrument if possible
Surveys/Questionnaires
Surveys Advantages/Disadvantages
Most common data collection method
Systematic tool used to collect data
Quantitative, qualitative, or both
Variety of options for delivery
Open-ended or closed-ended questions
Questionnaires Advantages/Disadvantages
Closed-ended questions include:
Forced choice: Respondent must choose the best answer from a set of mutually exclusive options
Dichotomous: Respondent selects one of only two choices; yields limited information and can be difficult to analyze
Use only when other types of questions are not appropriate
Scale = a set of written questions or statements that in combination are intended to measure a specified variable
Types of Scales
Likert scale – Ranking on a 5- or 7-point scale
Guttman scale – Items on a continuum, with statements ranging from one extreme to the other
Visual analog scale – Measures a perceptual experience along a continuous line; commonly used in health care
Random vs Systematic Error
Random error: Human factors, bias, confusion, environmental variations
Systematic error
A measure is consistently biased
Strategies to Minimize Error
Calibration
Ensure instrument reliability (consistency)
These terms all refer to the same thing:
* Cronbach's Alpha
* Coefficient Alpha
* Internal consistency reliability
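The interchangeable terms above name a single statistic. A minimal Python sketch of the computation, assuming item scores stored as one list per item; all numbers below are hypothetical Likert responses made up for illustration:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns.

    items: one list of scores per item, all from the same
    respondents in the same order.
    """
    k = len(items)  # number of items on the scale
    item_vars = sum(pvariance(col) for col in items)      # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]      # each respondent's total score
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Three hypothetical 5-point Likert items answered by five respondents
item1 = [4, 5, 3, 4, 2]
item2 = [4, 4, 3, 5, 2]
item3 = [5, 4, 2, 4, 3]
print(round(cronbach_alpha([item1, item2, item3]), 2))  # ≈ 0.86
```

An alpha near 1 means the items rise and fall together, i.e., the instrument is internally consistent.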
Tests for reliability
Interrater reliability: Stability between raters; two raters independently obtain the same result
Test–retest reliability: Stability over time
Internal consistency reliability: Stability within an instrument
Item-total correlation: How well each individual item correlates with the overall scale score
Validity
The ability of an instrument to measure what it is supposed to measure
Examines how accurate the measure is or how true results are using the measure
A measure can be reliable but not valid.
That is, it can be consistent, but consistently measure the wrong thing.
Types of Validity
Content validity: The content of the instrument reflects the attribute
Construct validity: The instrument measures the theoretical construct it is intended to represent
Criterion-related validity:
Concurrent
Predictive
Discriminant
Responsiveness
Sensitivity vs Specificity
Sensitivity
Indicates an instrument's capacity to detect a disease when it is present
Specificity
Indicates an instrument's capacity to identify when a disease is not present
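Both definitions reduce to simple proportions from a 2×2 screening table. A minimal sketch; the counts below are hypothetical:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of diseased cases the instrument correctly detects."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of disease-free cases the instrument correctly rules out."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening of 100 people with the disease and 100 without:
# the test flags 90 of the sick (10 missed) and clears 80 of the healthy (20 false alarms)
print(sensitivity(true_pos=90, false_neg=10))  # 0.9
print(specificity(true_neg=80, false_pos=20))  # 0.8
```

A highly sensitive test rarely misses disease (few false negatives); a highly specific test rarely raises false alarms (few false positives).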