Variables and Their Measurement Flashcards
Variable (definition)
Characteristic of an individual, object, or environmental condition
Types Of Variables (3)
1) Independent Variable
2) Dependent Variable
3) Extraneous Variable
Independent Variable (def)
The variable that is intentionally manipulated by researchers to produce a change in an outcome; each IV has two or more levels (the conditions being compared)
IV’s are used in ___ Types of studies (2)
- Interventional
- Prognostic
IV’s are NOT used in ___ Studies (1)
Descriptive; bc researchers don’t manipulate anything and there is no outcome of interest
Purpose of IV’s in Interventional Studies (2)
- Can draw CAUSE AND EFFECT relationships because the IV is being manipulated
- They are commonly used as treatments/interventions
Levels of IV’s (3)
- > 1 IV = factorial design
- 2x2 factorial design
- 2x3 factorial design
2x2 factorial design?
Means there are 2 IV’s, each with 2 levels (2 x 2 = 4 conditions)
2x3 factorial design?
Means there are 2 IV’s, one with 2 levels and one with 3 levels (2 x 3 = 6 conditions); see the sketch below
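Worked example (a minimal Python sketch; the IVs "dose" and "frequency" and their levels are made up for illustration): enumerating the cells of a 2x3 factorial design.

    from itertools import product

    dose = ["low", "high"]                   # IV 1: 2 levels
    frequency = ["1x/wk", "2x/wk", "3x/wk"]  # IV 2: 3 levels

    # Each combination of levels is one condition (cell): 2 x 3 = 6 total
    for cell in product(dose, frequency):
        print(cell)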
IV Prognostic Studies (3)
- CANNOT draw Cause and Effect relationships bc not manipulating anything!
- IV’s are used to predict the outcome of interest
- Not intentionally manipulated by researchers
Dependent Variable (def)
The outcome of interest
Purpose of IV’s and DV’s in Interventional Studies
-Manipulate IV to cause a change in the DV= CAUSAL relationship!
IV’s and DV’s in Prognostic Studies
-Assess whether the IV is associated with the DV or predicts the DV; looks at IV and tries to predict the DV
Extraneous Variable(def)
- An outside variable that influences the DV, but it’s not an IV; want to limit these!
- Should be controlled through study design and statistical adjustment so you can confidently attribute change in the DV to your treatment (tx)
Discrete Variable (def)
Values are distinct categories
Dichotomous Variable (def)
- A type of discrete variable
- Variables that have only 2 possible values; ex. male and female
Continuous Variable (def)
Variables that have a theoretically infinite number of possible values
Levels of Measurement (4)
1) Nominal
2) Ordinal
3) Interval
4) Ratio
Nominal Level of Measurement
- Includes values that are named CATEGORIES
- NO rank; no one category is better than another
- Statistical analysis uses FREQUENCIES (not means)
Ordinal Level of Measurement
- Categories have a RANK order relative to one another; some form of hierarchy (ex. Likert scales)
- Statistical Analysis uses FREQUENCIES (not means)
Interval Level of Measurement
- Assigns quantitative values to variables
- Do NOT have a TRUE ZERO POINT; a value of zero does NOT mean there is an “absence” of the characteristic (ex. 0°F is not “no temperature”)
- Possible values extend to infinity and can be either positive or negative
- Values may be added and subtracted, but not multiplied or divided
Ratio Level of Measurement
- Have a RANK order, and a KNOWN ZERO POINT
- No negative values
- has a TRUE zero
- Values can be added, subtracted, multiplied, and divided
True zero (def)
If there is a zero present, there is an ABSENCE of the characteristic
What type of variable are MMT grades? Why?
Discrete and ordinal variable; not an infinite number of possible values, and there is a hierarchy
What type of variable is gender? Why?
Discrete (dichotomous) and nominal variable; bc not an infinite number of possible values and the categories have no rank (no hierarchy)
What type of variable is Blood Pressure? Why?
Continuous and ratio variable; bc there is a true zero (no negative values possible)
Reliability (def)
The degree to which an instrument is consistent and free from error; reflects how much of a measurement is error and how much is the true score
Measurement error (def)
Difference between observed and true score
Observed score =
True score + error
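Worked example (a minimal Python sketch; numbers are made up): classical test theory says each observed score is the true score plus random error.

    import random

    true_score = 50.0
    error = random.gauss(0, 2.0)    # random measurement error, mean 0
    observed = true_score + error   # observed score = true score + error
    print(observed)                 # hovers around 50; the gap IS the error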
How do researchers minimize measurement error?
Choose instruments that have established reliability measures
2 Forms of Measurement Reliability
- related to the instrument
- Related to the person TAKING the measurements
Types of Reliability related to the INSTRUMENT (2)
1) Reproducibility aka Test-Retest Reliability
2) Internal Consistency
Reproducibility (def)
- Aka Test-Retest Reliability
- Established when an instrument is used on two separate occasions with the same subjects
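A minimal Python sketch (made-up data): one simple way to quantify test-retest agreement is a Pearson correlation between the two occasions (an ICC is often preferred in practice, but the idea is the same).

    import numpy as np

    occasion_1 = np.array([10, 12, 15, 11, 14, 13])  # same subjects,
    occasion_2 = np.array([11, 12, 14, 11, 15, 13])  # measured twice

    r = np.corrcoef(occasion_1, occasion_2)[0, 1]
    print(round(r, 2))  # closer to 1.0 = more reproducible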
Internal Consistency (def)
Relates to self-report outcome measures; the extent to which the items within the measure are consistent with one another
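Note: the card doesn’t name a statistic, but internal consistency is commonly reported as Cronbach’s alpha. A minimal Python sketch with made-up data (rows = respondents, columns = items on one self-report scale):

    import numpy as np

    items = np.array([
        [3, 4, 3, 4],
        [2, 2, 3, 2],
        [4, 5, 4, 4],
        [1, 2, 1, 2],
    ])
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total score

    alpha = (k / (k - 1)) * (1 - sum_item_var / total_var)
    print(round(alpha, 2))  # closer to 1.0 = items hang together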
Types of Reliability related to the RATER (2)
1) INTRA-rater reliability
2) INTER-rater Reliability
Intra-rater Reliability (def)
Consistency of repeated measures performed by ONE INDIVIDUAL
Inter-rater Reliability (def)
Consistency of measures performed by >1 individual
Regression Toward the Mean
Observed scores tend to move closer to the mean with repeated tests; most pronounced for OUTLIERS (extreme scores)
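A minimal Python simulation (made-up parameters) of why this happens: subjects selected for extreme observed scores were partly "lucky" with error, so their retest scores drift back toward the mean.

    import numpy as np

    rng = np.random.default_rng(0)
    true = rng.normal(100, 10, 10_000)        # true scores
    test1 = true + rng.normal(0, 5, 10_000)   # observed = true + error
    test2 = true + rng.normal(0, 5, 10_000)   # fresh error on retest

    outliers = test1 > 120                    # extreme scorers on test 1
    print(test1[outliers].mean())             # well above 120
    print(test2[outliers].mean())             # noticeably closer to 100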
Reliability Coefficient(def)
Estimates reliability based on the statistical concept of variance; a measure of variability or differences among scores within a sample
Reliability Coefficient (math)
- True score variance / (true score variance + error variance)
- Ranges from 0.00-1.00
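Worked example (a direct translation of the card’s formula into Python; the variances are made up):

    true_score_variance = 8.0
    error_variance = 2.0

    reliability = true_score_variance / (true_score_variance + error_variance)
    print(reliability)  # 0.8 -> "good" by the cut-offs below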
Reliability Coefficient that indicates POOR reliability
< .50
Reliability Coefficient that indicates MODERATE reliability
.50-.75
Reliability Coefficient that indicates GOOD Reliability
> .75
Validity
Does the test measure what it’s supposed to measure? Goes hand-in-hand with reliability; you don’t want to be reliably invalid!
4 Types of Validity
1) Face Validity
2) Content Validity
3) Construct Validity
4) Criterion Validity
Face Validity (def)
- Simplest and most subjective form of validity
- The instrument appears to be the appropriate choice to measure the variable (or not)
Content Validity (def)
Does the measure represent all of the relevant facets of the variable of interest?; should not contain elements that capture irrelevant information
Construct Validity (def)
Does the measure reflect the operational definition of the concept or construct it says it represents?
Things that Evaluate Construct Validity (2)
1) Convergent Validity
2) Discriminant Validity
Convergent Validity (def)
- Method used to evaluate construct validity of an instrument
- Assesses the relationship between scores on the instrument of interest and ON ANOTHER INSTRUMENT
- If scores from both instruments yield similar results, then the measure has convergent validity
Discriminant Validity (def)
- Method used to validate construct validity of an instrument
- Reflects the degree to which an instrument can distinguish between different constructs; ex. can a goniometer distinguish between strength and ROM?
Things that Evaluate Criterion Validity (2)
1) Concurrent Validity
2) Predictive Validity
Criterion Validity (def)
-Degree to which the scores on an instrument are related to the scores on a reference standard instrument
Concurrent Validity (def)
- Method of evaluating criterion validity; administer the test of interest and the reference standard test (the GOLD STANDARD) at the same time (to eliminate bias)
Predictive Validity (def)
- Method of evaluating criterion validity
- Degree to which the results from the test of interest can predict a future outcome
Sensitivity and Specificity are modes of ____ Validity
Criterion
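Worked example (a minimal Python sketch; the 2x2 counts are made up): sensitivity and specificity from a confusion table.

    true_positives = 45    # test positive, condition present
    false_negatives = 5    # test negative, condition present
    true_negatives = 40    # test negative, condition absent
    false_positives = 10   # test positive, condition absent

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)
    print(sensitivity, specificity)  # 0.9 0.8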
A test CANNOT be ___ but not ___
valid; reliable
Instruments should be ____, ____, and ____ _ ____
reliable, valid, responsive to change
To be Responsive (to change) an instrument should Have: (2)
- Construct validity
- Several values on the scale to limit floor or ceiling effects
Standard error of Measurement (def)
Extent to which observed scores are dispersed around the “true” score
Large Standard Error of Measurement = Less ___; why?
Responsive; bc the “true” values are lost in the inaccuracies each time the measure is repeated
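Note: the card gives no formula, but the SEM is commonly estimated as SD * sqrt(1 - reliability). A minimal Python sketch with made-up numbers:

    import math

    sd = 10.0          # standard deviation of scores in the sample
    reliability = 0.9  # reliability coefficient of the instrument

    sem = sd * math.sqrt(1 - reliability)
    print(round(sem, 2))  # ~3.16; a larger SEM means observed scores
                          # scatter farther from the true score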
Ceiling effect
Happens when the scale’s maximum is too low; subjects cluster at the top score, so improvement can’t be detected
Floor Effect
Happens when the scale’s minimum is too high; subjects cluster at the bottom score, so decline can’t be detected
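A minimal Python sketch (made-up scale and scores) of a ceiling effect: subjects who truly differ all record the scale maximum, so change at the top can’t be detected (a floor effect is the mirror image at the minimum).

    import numpy as np

    true_ability = np.array([80, 95, 110, 125])     # truly different subjects
    scale_max = 100
    observed = np.clip(true_ability, 0, scale_max)  # scale tops out at 100
    print(observed)  # [ 80  95 100 100] -> top two look identical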