Quiz 1 Flashcards
What is a variable?
A property that can take different values
Operational definition is
converting a construct into a measurable variable
A construct variable is
an abstract (non-observable) variable
Continuous variable is
any value along a continuum within a defined range
A discrete variable is
described in whole units. Cannot be halved
What are the levels of measurement?
Ratio, interval, ordinal, and nominal
Ratio is
numbers that represent units with equal intervals, measured from a true zero, with no negative values. Highest level of measurement; carries the most info. Ex: distance, age, time
Interval is
numbers have equal intervals but no true zero. Ex: calendar years, temperature
Ordinal
numbers indicate a rank order
Nominal
numbers are category labels. Can be dichotomous (yes or no answers)
What is an independent variable?
What you can manipulate/specify.
What are dependent variables?
What you measure
Types of independent variable
Active and Attribute
Attribute IV
cannot be manipulated. Ex: gender
Active IV
can be manipulated. Ex: treatment given to a group
Repeated factors
same group/people are measured in all levels of an IV. They are their own controls. (within subject)
Independent factors
different groups for each level of the IV (between subjects)
Single factor design
Just one independent variable
Multifactorial design
two or more IVs
Univariate design
only 1 dependent variable
Multivariate design
multiple dependent variables
what is reliability
the extent to which a measurement is consistent & free from error
what is measurement error
the difference between a measured value and the true value; every measurement carries some margin of error.
Observed score = true score +/- error
Types of measurement error are
Systematic, and random
What is systematic error
error that is constant and predictable in direction: it always overestimates or always underestimates the true value
What is random error
error that is due to chance in the measurement process; it is unpredictable in size and direction
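The two error types above can be illustrated with a quick simulation (all numbers hypothetical): random error averages out across repeated measurements, while systematic error does not.

```python
import random

random.seed(0)  # reproducible draws

TRUE_SCORE = 50.0      # hypothetical true value
SYSTEMATIC_BIAS = 2.0  # constant offset: instrument always overestimates

def observe(n=1000):
    """Observed score = true score + systematic error + random error."""
    return [TRUE_SCORE + SYSTEMATIC_BIAS + random.gauss(0, 3) for _ in range(n)]

scores = observe()
mean_obs = sum(scores) / len(scores)
# The random component averages toward zero, so the mean of many
# observations still reflects the systematic bias of about +2.
print(round(mean_obs - TRUE_SCORE, 1))
```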
Sources of measurement error
Rater, instrument, or variability of characteristic being observed
Ways to improve reliability
standardize measurement methods, train & test observers, refine & calibrate instruments, blind raters to reduce bias.
what are the two reliability coefficients and what are they used for?
ICC: for continuous scale scores
Cohen’s kappa: categorical scale scores
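Cohen's kappa compares observed agreement with the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal sketch with made-up ratings from two raters:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings."""
    n = len(rater_a)
    # observed proportion of agreement
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # proportion of agreement expected by chance
    p_e = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# hypothetical yes/no ratings of 8 subjects
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(round(cohens_kappa(a, b), 3))  # prints 0.5
```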
what is MDC?
The ability of an instrument to detect change above measurement error. (Minimal Detectable Change)
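A commonly used formula (not stated on the card) derives MDC from the standard error of measurement: SEM = SD·√(1 − ICC), and MDC95 = 1.96·√2·SEM. A sketch with hypothetical numbers:

```python
import math

def mdc95(sd_baseline, icc):
    """Minimal detectable change at 95% confidence.

    SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM.
    """
    sem = sd_baseline * math.sqrt(1 - icc)
    return 1.96 * math.sqrt(2) * sem

# e.g., baseline SD of 10 points, test-retest ICC of 0.90
print(round(mdc95(10, 0.90), 1))  # prints 8.8
```

A change smaller than this value cannot be distinguished from measurement error.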
Types of reliability
test-retest, inter-rater, intra-rater, alternate/parallel, internal consistency, split-half
What is the test-retest?
Used to establish that an instrument can measure a variable consistently. Ignores the rater.
Assumes the condition being measured has not changed between tests.
What is Inter-rater reliability?
Making sure that two or more raters agree on a measurement for the same group (between raters). Best assessed in a single trial.
What is intra-rater reliability?
Same rater taking measurements for the same group on multiple occasions.
The main issue is rater bias, which can be reduced by blinding.
What is alternate/parallel reliability?
Agreement between two different instruments/forms intended to measure the same thing.
Measured with correlation coefficients
What is internal consistency reliability?
checking whether all the items on an instrument are internally consistent; that is, how well the items reflect the same underlying concept.
Mostly used on questionnaires
Make sure there is no redundancy
Usually measured with a Cronbach’s alpha
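Cronbach's alpha can be computed as α = k/(k−1) · (1 − Σ item variances / variance of total scores). A minimal sketch with a hypothetical 3-item questionnaire:

```python
def cronbach_alpha(items):
    """Cronbach's alpha. `items` holds one list per item,
    each with one score per respondent (all equal length)."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = sum(variance(item) for item in items)
    # each respondent's total score across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# hypothetical 3-item questionnaire, 5 respondents
items = [
    [3, 4, 3, 3, 2],
    [3, 5, 4, 2, 2],
    [2, 5, 3, 3, 1],
]
print(round(cronbach_alpha(items), 2))  # prints 0.9
```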
What is split-half reliability?
Splitting the items in half and correlating scores from one half with scores from the other half
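Because each half contains only half the items, the half-to-half correlation underestimates full-test reliability; the Spearman-Brown correction (a standard companion to split-half, though not named on the card) projects it back up:

```python
def spearman_brown(r_half):
    """Project full-test reliability from the correlation between two halves."""
    return 2 * r_half / (1 + r_half)

# a half-to-half correlation of 0.6 implies full-test reliability of 0.75
print(spearman_brown(0.6))
```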
Which types of reliability are most relevant for clinicians?
Test-retest, inter and intra rater
What types of reliability are mostly for questionnaires, surveys and comparing different types of tests?
Alternate/parallel, internal consistency, Split half reliability
What is measurement validity?
the extent to which an instrument measures what it is intended to measure
A test cannot be _____ if it is ____, but it can be ____ without being ____
valid, unreliable; reliable, valid
Types of measurement validity
Face validity, content validity, criterion validity, and construct validity
What is face validity?
Subjective or objective?
when an instrument appears to test what it's supposed to.
Least rigorous, subjective, scientifically weak
What is content validity?
What is it used for?
Whether the measurements adequately represent the target concept and exclude unrelated concepts. Used in questionnaire development
What is criterion validity?
Subjective or objective?
How is it measured?
Can the outcomes of the instrument be substituted for an established gold standard?
Highest and most objective form
Measured by correlation coefficients between measure & source value
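The correlation against the gold standard can be sketched with a plain Pearson r (all scores hypothetical):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between a new measure and a gold standard."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical scores: new instrument vs. gold-standard instrument
new = [10, 12, 15, 11, 14]
gold = [52, 60, 74, 58, 66]
print(round(pearson_r(new, gold), 2))  # prints 0.98
```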
Types of criterion validity
Concurrent validity and predictive validity
What is concurrent validity?
the target test and the gold-standard test are taken at (about) the same time and their results compared
What is predictive validity?
establishes that the outcome of the target test can be used to predict a future score/outcome
What is construct validity?
how well a tool measures an abstract concept/construct.
No ideal way to test it
Types of construct validity
Known group, and convergent validity
What is known group validity?
do test results differ between two groups known to be different?
What is convergent validity?
is there a correlation with a similar test?
______ is often the primary focus of research outcomes; we must be able to trust that the change is “real”
Measuring change
What are the issues affecting validity of change
- Levels of measurement: ordinal vs. ratio; e.g., is a change from 5 to 4 the same as from 2 to 1?
- Reliability: could the change just be measurement error?
- Stability: are there meaningless natural fluctuations?
- Baseline score: floor effect (minimum) or ceiling effect (maximum)
What is responsiveness?
the ability of an instrument to detect minimal change over time.
What is MCID
the ability of an instrument to detect minimally important change; the smallest difference that signifies an important rather than a trivial difference. Should be larger than the MDC. (Minimal Clinically Important Difference)