PRINCIPLES OF ASSESSMENT Flashcards
Purposes of measurement (4)
Discriminative - differentiate between two or more groups of people
Predictive - classify people into a set of predefined measurement categories for the purpose of estimating prognosis
Evaluative - measure change in an individual or group over time
Descriptive - obtain a ‘clinical picture’ or baseline of a person’s skills
Define measurement
Use of a standard to quantify an observation
Define assessment
Process of determining the meaning of a measurement (in contrast to measurement itself, which assigns the numerical value)
Criterion-referenced assessment
Where the client is graded in terms of some behavioural standard
Norm-referenced assessment
Where the client is compared to a group of other people who have taken the same measure
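The contrast between the two reference frames can be sketched numerically. A minimal Python illustration (all scores, norms, and cut-offs below are invented for the example):

```python
# Norm-referenced interpretation: locate a client's raw score relative to a
# normative sample, e.g. via a z-score.
def z_score(raw, norm_mean, norm_sd):
    """How many standard deviations the client sits from the normative mean."""
    return (raw - norm_mean) / norm_sd

# Criterion-referenced interpretation: compare the score to a fixed
# behavioural standard, regardless of how anyone else performed.
def meets_criterion(raw, cutoff):
    """True if the client reaches the predefined standard."""
    return raw >= cutoff

# Hypothetical client: raw score 42, norms with mean 50 and SD 10,
# and a criterion cut-off of 40.
print(z_score(42, 50, 10))      # → -0.8 (below the normative mean)
print(meets_criterion(42, 40))  # → True (meets the behavioural standard)
```

The same raw score can thus look "below average" in norm-referenced terms while still passing a criterion-referenced standard.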
Define evaluation
Process of determining the worth of something in relation to established benchmarks using assessment information
Define re-evaluation
Process of critical analysis of client response to intervention
Define screening
A quick review of the client’s situation to determine if an occupational therapy evaluation is warranted; typically a “hands off” process
Define testing
A systematic procedure for observing a person’s behaviour & describing it with the aid of a numerical scale or a category-system
Types of testing
- Observation
- Interview / history
- Review of records / survey
- Paper & pencil tests: checklists, answering questions on paper
- Oral tests (interviews)
- Apparatus tests requiring equipment
Evidence-Based Practice (EBP)
The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients
Non Standardised assessments
Do not follow a standard approach or protocol
Standardised assessments
Are developed using prescribed procedures
Types of assessments
- Descriptive
- Evaluative
- Predictive
- Criterion-referenced
- Norm-referenced
Reliability
The consistency and repeatability of the results obtained when a measure is administered on more than one occasion
Sources of error (reliability)
Random
Systematic
Types of reliability (6)
- Intra-rater
- Inter-rater
- Alternate form
- Split-half
- Test-retest
- Internal consistency
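As an illustration of one of these, internal consistency is commonly quantified with Cronbach's alpha. A minimal pure-Python sketch (the item scores below are invented for the example):

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
# where k is the number of items on the scale.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(scores):
    """scores: one list per respondent, each holding that person's k item scores."""
    k = len(scores[0])
    items = list(zip(*scores))  # transpose to per-item columns
    item_vars = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: 4 clients x 3 items rated on a 1-5 scale
data = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [1, 2, 1]]
print(round(cronbach_alpha(data), 2))  # → 0.99
```

Higher alpha means the items move together, i.e. the scale is internally consistent; values above about 0.7-0.8 are conventionally treated as acceptable.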
Validity
The extent to which a test measures what it purports to measure
Types of validity (6)
- Construct
- Content
- Criterion
- Face
- Ecological
- Experimental
Sensitivity
A test’s ability to obtain a positive result when the condition is really present (true positive); distinct from responsiveness, which concerns change over time
Specificity
A test’s ability to obtain a negative result when the condition is really absent (true negative)
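The two definitions above reduce to simple ratios from a 2x2 confusion table. A minimal sketch (the counts below are invented for the example):

```python
# sensitivity = TP / (TP + FN)  -> true-positive rate
# specificity = TN / (TN + FP)  -> true-negative rate

def sensitivity(tp, fn):
    """Proportion of people who truly have the condition that the test flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of people truly without the condition that the test clears."""
    return tn / (tn + fp)

# Hypothetical screening results: 80 true positives, 20 false negatives,
# 90 true negatives, 10 false positives.
print(sensitivity(tp=80, fn=20))  # → 0.8
print(specificity(tn=90, fp=10))  # → 0.9
```

Note the trade-off in practice: lowering a test's cut-off usually raises sensitivity at the cost of specificity, and vice versa.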
Responsiveness
Ability of a measure to assess and quantify clinically important change
Issues to consider when using a standardised test (4)
Simplicity
Clinical utility
Communicability
Discriminability
Why use standardised tests?
- Problem identification / Basis for intervention
- Provide the basis for goal-setting with clients & families
- Outcome measurement
- Prediction or prognosis
- Research
- Communication & reporting
- Accountability & quality assurance
- Funding & reimbursement
- Comparison & tracking
Levels of measurement (5)
- Nominal
- Ordinal
- Interval
- Ratio
- Hierarchical