Scoring & Interpreting Flashcards
Rasch Analysis
Each item has a degree of complexity/difficulty
Reduces bias of observation
Stabilizes reliability and validity
Rasch score derived from:
various studies
consistency ratings
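A minimal sketch of the dichotomous Rasch model behind these ideas: the probability of passing an item depends only on the gap between person ability (theta) and item difficulty (b), both on the logit scale. Names and values below are illustrative, not taken from any specific assessment.

    import math

    def rasch_probability(theta: float, b: float) -> float:
        """Dichotomous Rasch model: P(pass) given person ability theta
        and item difficulty b, both on the logit scale."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # Average person (theta = 0) vs. an easy item (b = -1) and a hard item (b = +2)
    print(rasch_probability(0.0, -1.0))  # ~0.73: likely to pass
    print(rasch_probability(0.0, 2.0))   # ~0.12: unlikely to pass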
Berg Balance Assessment
14 items, each scored 0-4; 56 max points (see the scoring sketch after the item list)
- Sitting to Standing
- Standing Unsupported
- Sitting Unsupported
- Standing to Sitting
- Transfers
- Standing with eyes closed
- Standing with feet together
- Reaching forward with outstretched arm
- Retrieve object from floor
- Turning to look behind
- Turning 360 degrees
- Placing alternate foot on stool
- Standing with one foot in front
- Standing on one foot (hold >10 sec)
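Since each of the 14 items is scored 0-4, the total is simply the sum, with 56 as the ceiling. A minimal scoring sketch (the checks mirror the protocol described above; this is not official scoring software):

    def berg_total(item_scores: list[int]) -> int:
        """Sum the 14 Berg Balance item scores (each 0-4; max total 56)."""
        if len(item_scores) != 14:
            raise ValueError("the Berg Balance Assessment has exactly 14 items")
        if any(not 0 <= s <= 4 for s in item_scores):
            raise ValueError("each item is scored on a 0-4 ordinal scale")
        return sum(item_scores)

    print(berg_total([4] * 14))  # 56, the maximum
    print(berg_total([4, 3, 4, 4, 3, 2, 3, 2, 3, 3, 2, 1, 2, 1]))  # 37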
Likert Scale
Ordinal scale with graded response options (ex: Short Sensory Profile)
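A generic sketch of how Likert responses become numbers before any further scoring; the 5-point frequency labels here are illustrative, not the Short Sensory Profile's actual scoring key:

    # Illustrative 5-point frequency scale; a real instrument's key
    # (including any reverse-scored items) comes from its manual.
    LIKERT = {"always": 1, "frequently": 2, "occasionally": 3, "seldom": 4, "never": 5}

    responses = ["always", "seldom", "occasionally", "never"]
    raw_score = sum(LIKERT[r] for r in responses)
    print(raw_score)  # 1 + 4 + 3 + 5 = 13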
Item Discrimination
How well an item differentiates test-takers on the trait it is intended to measure
Mastered versus not mastered
Appropriate distractors
Discrimination power (see the index sketch after this list)
Extreme group validation
High scorers versus low scorers
Cross-validation
Outside group compared to original group
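One common way to compute discrimination power from extreme groups is the index D: the proportion of high scorers passing the item minus the proportion of low scorers passing it. A sketch assuming dichotomous (0/1) item data:

    def discrimination_index(high_group: list[int], low_group: list[int]) -> float:
        """D = p(high) - p(low) for one dichotomously scored item.
        D near +1: the item separates high and low scorers well;
        D near 0 or negative: the item is worth reviewing."""
        return sum(high_group) / len(high_group) - sum(low_group) / len(low_group)

    # Item passed by 9 of 10 high scorers but only 3 of 10 low scorers
    print(discrimination_index([1] * 9 + [0], [1] * 3 + [0] * 7))  # 0.6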
Item Analysis
Difficulty of items:
Appropriate difficulty
Avoidance of clues or defects
Intended function
Effective distractors
Information gained from wrong answers
Spread of difficulty
50% difficulty is the target for norm-referenced items (maximizes discrimination)
80-90% mastery is expected on criterion-referenced (mastery) items
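Item difficulty is usually reported as the proportion answering correctly (the item p-value), which makes the two targets above easy to check. A sketch with made-up response data:

    def item_difficulty(item_responses: list[int]) -> float:
        """p-value of an item: proportion of examinees scoring it correct (0/1)."""
        return sum(item_responses) / len(item_responses)

    norm_item = [1] * 25 + [0] * 25    # p = 0.50: target for norm-referenced items
    mastery_item = [1] * 43 + [0] * 7  # p = 0.86: inside the 80-90% mastery band
    print(item_difficulty(norm_item), item_difficulty(mastery_item))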
3 Types of Scores
Raw score: unconverted total of points earned
Derived score: raw score converted by inputting it into a formula (ex: IQ)
Standard score: raw score expressed in SD units relative to the norm group (z-score)
Why can’t raw data simply be used?
Scores need to be compared to the norm
Too many variables may be involved: each task included in an assessment
Standard score formula: z = (raw score - mean) / SD
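A sketch of the standard-score conversion using a norm group's mean and SD (the norms below are made up for illustration):

    def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
        """Standard score: how many SDs the raw score sits from the norm mean."""
        return (raw - norm_mean) / norm_sd

    # Hypothetical norms: mean 40, SD 8
    print(z_score(52, 40, 8))  # +1.5: one and a half SDs above the norm
    print(z_score(34, 40, 8))  # -0.75: below the norm -> negative, hence T-scores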
T-scores
Avoids negative numbers
Can be calculated from z-scores
T-score = 10(z-score) + 50
Linear transform of z-scores, so the distribution shape is unchanged; mean becomes 50, SD becomes 10
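Continuing the sketch above, the linear transform T = 10z + 50 rescales z-scores to mean 50 and SD 10, so typical scores stay positive:

    def t_score(z: float) -> float:
        """T-score: linear rescale of a z-score to mean 50, SD 10."""
        return 10 * z + 50

    print(t_score(1.5))    # 65.0
    print(t_score(-0.75))  # 42.5: the negative z becomes a positive T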
Errors in Testing
Intra-individual error: physiological factors (ex: no sleep, distraction, etc.)
Standardization errors: tool must be used with the reference group it was normed on
Administration errors: ex: wrong setting (too loud)
Scoring errors: calculation mistakes
Scoring Errors
Generosity or severity errors: scoring too generously or too severely
Central tendency errors: clustering scores at the midpoint (average)
Halo effect: examiner’s overall impression of the client colors individual item scores
Logical errors: evaluator’s prior knowledge may introduce bias
Proximity errors: unexpected events during testing; typically already accounted for during standardization
Ambiguity errors: evaluator’s interpretation of an item differs from typical evaluators’ interpretations
Contrast errors: evaluator’s subjective response (rating relative to their own standards)