Analysis/Interpretation Flashcards
Rasch methodology
A hierarchical design used to develop a linear measurement scale within a standardized assessment
Test-taker variables
Factors that may impact the performance results of a client during the evaluation process (motivation, energy level, stress)
Environment bias
A type of testing bias that involves the degree to which the testing context is similar to the natural setting in which the task is typically performed
Item bias
A type of testing bias that involves clients of similar performance abilities scoring differently when the same evaluation instrument or subtest is administered
Test-taker bias
A type of testing bias that must be controlled during standardized and non-standardized testing and involves actions of the client that influence the outcome of an evaluation or the test results (a client who influences test results by providing false information)
Likert scale
A psychometric method, typically used in a questionnaire or survey, that includes response options that progress in a linear direction
Evaluator bias
A type of testing bias that must be controlled during standardized and non-standardized testing and involves actions of the evaluator that influence the outcome of an evaluation or the test results
Floor effect
A situation in which an assessment instrument is not able to measure any additional performance differences at the bottom of the rating scale
Ceiling effect
A situation in which an assessment instrument is not able to measure any additional performance differences at the top of the rating scale
Assessment responsiveness: specificity
A test’s ability to accurately identify intact functional and performance abilities (true negatives)
Assessment responsiveness: sensitivity
A test’s ability to accurately detect impairments or decreased performance abilities (true positive)
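The two responsiveness terms above can be sketched numerically. This is a minimal illustration with hypothetical counts (all values are made up), comparing a screening tool's classifications against a reference diagnosis:

```python
# Hypothetical confusion-matrix counts for a screening tool
# (illustrative numbers only, not from any real assessment).
true_positives = 45   # impaired clients the test correctly flagged
false_negatives = 5   # impaired clients the test missed
true_negatives = 40   # unimpaired clients the test correctly cleared
false_positives = 10  # unimpaired clients the test wrongly flagged

# Sensitivity: proportion of truly impaired clients the test detects
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of truly unimpaired clients the test clears
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.2f}")  # 0.90
print(f"Specificity: {specificity:.2f}")  # 0.80
```

A highly sensitive test rarely misses an impairment; a highly specific test rarely flags an intact ability as impaired.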
Correlation
A statistical term that refers to the measurement of the strength of the relationship between two distinct variables
Criterion validity
The degree to which the results of an assessment predict performance ability on other assessments that measure similar constructs
Two types:
- predictive validity
- concurrent validity
Content validity
The degree to which items in an assessment are an accurate representation of all aspects of the domain being tested
Construct validity
The degree to which an assessment tool measures specific constructs (e.g., fine motor skills) consistent with what it claims to measure