Research and Evaluation Flashcards
Objective evaluation of the influence of interventions on the client’s performance
When possible and applicable, measurement should be taken at the beginning, during, and at the end of intervention
Outcome measure
Advantages include:
• scores can be understood by interprofessional team
• tools are often widely available
• uniform administration, scoring, and interpretation of results
• can help monitor progress over time
• can contribute to quality improvement and evidence-based practice
Advantages of standardized testing
Disadvantages include:
• must be combined with qualitative and other assessment methods to complete a comprehensive evaluation
• multiple internal and external variables can impact performance on test and affect results
• rigidity of administration may negatively influence the results
Disadvantages of standardized testing
A large sample of people who represent the intended population for a test
Also referred to as the norm group
Standardization sample
A score used in standardized testing, also referred to as a z-value or z-score, that is used to make comparisons across variables and across populations or individuals
Standard score
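A minimal Python sketch of the standard-score calculation (the raw score, mean, and standard deviation below are illustrative assumptions, not values from a real assessment):

```python
# Hypothetical sketch: converting a raw score to a standard score (z-score).
def z_score(raw, mean, sd):
    """Return how many standard deviations a raw score lies from the mean."""
    return (raw - mean) / sd

# A raw score of 85 on a test with mean 100 and SD 15:
# (85 - 100) / 15 = -1.0, i.e., one standard deviation below the mean
z = z_score(85, 100, 15)
```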
Classification system used for quantifiable data
Types include:
• nominal scale
• ordinal scale
• interval scale
• ratio scale
Scales of measurement
Provides insight into the general characteristics of data collected during a study
Types include:
• measure of central tendency
• measure of variability
Descriptive statistics
A value that describes the center point of a data set
Types include:
• mean
• median
• mode
Measures of central tendency
A measure of central tendency, also known as the average score, that is calculated by summing all scores in a data set and then dividing the sum by the total number of scores
Mean
A measure of central tendency, also referred to as middle value, that is determined by placing all scores in numerical order and locating the number in the middle
Median
A measure of central tendency that refers to the value that occurs most frequently within a data set
Mode
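The three measures of central tendency above can be sketched with Python's standard library (the data set is an illustrative assumption):

```python
from statistics import mean, median, mode

# Hypothetical score set for illustration
scores = [4, 7, 7, 9, 13]

m1 = mean(scores)    # sum / count: (4+7+7+9+13) / 5 = 8
m2 = median(scores)  # middle value of the sorted scores = 7
m3 = mode(scores)    # most frequently occurring value = 7
```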
The statistical value that represents how much the group varies from the mean, and the degree to which the data spreads across the distribution
Types include:
• variance
• standard deviation
Measures of variability
Measures the distribution and variation of data points around the mean
Standard deviation
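A short sketch of the two measures of variability, again with an assumed illustrative data set; the standard deviation is the square root of the variance:

```python
from statistics import pvariance, pstdev

# Hypothetical scores; mean = 40 / 8 = 5
scores = [2, 4, 4, 4, 5, 5, 7, 9]

# Population variance: mean of squared deviations from the mean
# (9+1+1+1+0+0+4+16) / 8 = 4
var = pvariance(scores)

# Population standard deviation: square root of the variance = 2.0
sd = pstdev(scores)
```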
Factors or variables that cause a difference in standardized test scores (e.g., environmental conditions, motivation, fatigue)
Error variance
Score used in developmental testing with the following features:
• mean score is 100
• standard deviation of 15 or 16
• intervention often beneficial if scores are 2 standard deviations below the mean
Developmental index score
Score used for measurement of intelligence with the following features:
• mean score is 100
• intellectual disability is considered 2 standard deviations below the mean
Deviation IQ score
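The "2 standard deviations below the mean" cutoff shared by the two scores above is simple arithmetic; a hedged sketch (the defaults mirror the card values, mean 100 and SD 15 or 16):

```python
# Hypothetical sketch: cutoff score n standard deviations below the mean.
def cutoff(mean=100, sd=15, n_sd=2):
    return mean - n_sd * sd

low_iq = cutoff()       # 100 - 2*15 = 70 (deviation IQ, SD 15)
low_dev = cutoff(sd=16) # 100 - 2*16 = 68 (developmental index, SD 16)
```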
A score that compares a child’s performance to others in the same age range
Age-equivalent scores
Age of an individual since birth that is calculated by subtracting birth date from current date
Chronological age
Age calculation, also referred to as adjusted age, that is applied to premature infants to consider achievement of developmental milestones
Age is calculated by subtracting the weeks of prematurity from chronological age
Corrected age
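The corrected-age formula above can be sketched directly (the infant's ages in weeks are illustrative assumptions):

```python
# Hypothetical sketch: corrected (adjusted) age for a premature infant.
# Corrected age = chronological age - weeks of prematurity.
def corrected_age_weeks(chronological_weeks, weeks_premature):
    return chronological_weeks - weeks_premature

# An infant born 8 weeks early, now 24 weeks old chronologically,
# has a corrected age of 24 - 8 = 16 weeks
corrected = corrected_age_weeks(24, 8)
```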
A score that compares a student’s performance to a normative group of students at the same academic level
Grade equivalent
The degree to which an assessment tool measures what it claims to be measuring
Validity
The degree to which an assessment tool produces consistent results when the same client is retested on separate occasions while external factors remain constant
Reliability
The degree to which an assessment tool measures specific constructs (e.g., fine motor skills) consistent with what it claims to measure
Construct validity
The degree to which items in an assessment are an accurate representation of all aspects of the domain being tested
Content validity
The degree to which the results of an assessment predict performance ability on other assessments that measure similar constructs
Two types include:
• predictive validity
• concurrent validity
Criterion validity
A statistical term that refers to the strength and direction of the relationship between two distinct variables
Correlation
A test’s ability to accurately detect impairments or decreased performance abilities (i.e., true positives)
Assessment responsiveness: Sensitivity
A test’s ability to accurately detect functional and performance abilities (i.e., true negatives)
Assessment responsiveness: Specificity
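Sensitivity and specificity can be computed from the four outcome counts of a test; a sketch with assumed illustrative counts (not real data):

```python
# Hypothetical sketch: sensitivity and specificity from a 2x2 table.
def sensitivity(tp, fn):
    """True-positive rate: proportion of impaired clients the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: proportion of unimpaired clients the test clears."""
    return tn / (tn + fp)

# Illustrative counts: 45 of 50 impaired clients detected,
# 40 of 50 unimpaired clients correctly cleared
sens = sensitivity(tp=45, fn=5)   # 45 / 50 = 0.9
spec = specificity(tn=40, fp=10)  # 40 / 50 = 0.8
```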
A situation in which an assessment instrument is not able to measure any additional performance differences at the top of the rating scale
Ceiling effect
A situation in which an assessment instrument is not able to measure any additional performance differences at the bottom of the rating scale
Floor effect
A bias that may occur when administering a standardized or nonstandardized assessment
Types include:
• person-related bias
• item bias
• environment bias
Testing bias
An aspect of testing bias in which the actions of the evaluator or the client influence the outcome of an evaluation or a test; it must be controlled to achieve optimal results in standardized and non-standardized testing
Types include:
• evaluator bias
• test-taker bias
Person-related testing bias
A type of testing bias that must be controlled during standardized and non-standardized testing and involves actions of the evaluator that influence the outcome of an evaluation or the test results (e.g., an evaluator who influences test results by imposing personal expectations)
Evaluator bias
A type of testing bias that must be controlled during standardized and non-standardized testing and involves actions of the client that influence the outcome of an evaluation or the test results (e.g., a client who influences test results by providing false or misleading information)
Test-taker bias
A type of testing bias that involves clients of similar performance abilities scoring differently when the same evaluation instrument or subtest is administered
Item bias
A type of testing bias that involves the degree to which the testing context is similar to the natural setting in which the task is typically performed
Environment bias
Factors that may impact the performance results of a client during the evaluation process (e.g., motivation, energy level, stress)
Test-taker variables
A hierarchical design used to develop a linear measurement scale within a standardized assessment
Rasch methodology
A psychometric method, typically used in a questionnaire or survey, that includes response options that progress in a linear direction (e.g., never to always)
Likert scales