Unit 3: Psychometrics and Measurement Principles Flashcards
Biases
- Evaluator Biases
- Hawthorne effect
- Observer expectation
- Test Biases
- Scoring Errors
Evaluator Biases (Biases)
- Background
- Severity or leniency: Rates at the extremes of the scale (e.g., gives a 1 or a 5)
- Central tendency: Can't commit to a decision, so rates toward the middle of the scale (e.g., gives a 3 on a 0-5 scale)
- Halo effect: Prior experiences impact evaluations
Hawthorne Effect (Biases)
- Happens to the individual being tested.
- The person changes their performance because they know they are being watched (can be positive or negative)
Observer Expectation (Biases)
- Our interest impacts their effort
- Because we want them to do better, we may give them a few extra tries to show we want them to progress
Test Biases (Biases)
- Gender
- Education Level
- SES (socioeconomic status)
- Ethnicity/Culture
- Geographic
- Medical Status
Scoring Errors (Biases)
- Generosity: we are giving them more credit than deserved for their performance
- Ambiguity: Interpretation error (unsure what the score means about the client)
- Halo: Scoring different based on previous experiences
- Central Tendency
- Leniency/Severity
- Proximity: How preceding events affect scoring
- Logical Error: Insufficient info to decide on an answer
- Contrast Error: Too much divergence; the same client is scored very differently across a variety of tests
Practice Q: An error of the halo effect means what?
Your past experiences color your current opinion.
Practice Q: Which assessment bias is occurring in the following scenario?
A child is taking a handwriting assessment and doing very well. They are taking their time, sitting appropriately at the table, and holding the pencil correctly. However, the samples of handwriting provided by the teacher demonstrate significantly illegible handwriting, and there are reports that the child rushes through work or refuses to participate in classroom writing activities; has a difficult time sitting at their seat; and uses an immature pencil grasp.
-What might be causing the child’s performance to be so different on the assessment than in the classroom?
Hawthorne effect
Errors of Measurement
- Item Bias: Some items may be harder/easier or better/worse than others
- Rater Error
- Individual Error: Inability to perform or understand the task
- Standard Error of Measurement
Standard Error of Measurement (SEM)
Best prediction of how much error still exists
- No matter how closely you follow the test directions, some error remains
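A minimal sketch of how SEM is commonly computed, assuming the standard formula SEM = SD x sqrt(1 - reliability coefficient); the test statistics below are hypothetical, not from these notes:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM = SD * sqrt(1 - reliability coefficient)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical test: standard deviation of 15, reliability coefficient of .90
sem = standard_error_of_measurement(sd=15, reliability=0.90)
print(round(sem, 2))  # ~4.74 points of error remain despite standardization

# An observed score of 100 would fall in roughly this 95% band of likely true scores
low, high = 100 - 1.96 * sem, 100 + 1.96 * sem
print(round(low, 1), round(high, 1))  # about 90.7 to 109.3
```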
Reliability
- Accuracy and stability or consistency of a measure
Types
- Intrarater: The same therapist gets the same measurement for the same client
- Interrater: Two different therapists can get the same measurement
- Test-retest: The same test gives the same results when given again
- Reliability coefficient: +.80 or higher = good
- Internal consistency or homogeneity: Items within the same test can be pulled apart and remain reliable when compared with each other
- Split half (see the sketch below)
- Covariance
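To make split-half reliability concrete, here is a small sketch assuming the usual approach of correlating scores on the two halves of a test (e.g., odd vs. even items) and then applying the Spearman-Brown correction (the correction step is an assumption, not something stated in these notes). All client scores are made up:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between scores on the two halves of the test."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def split_half_reliability(half_a, half_b):
    """Correlate the two halves, then apply the Spearman-Brown correction
    to estimate reliability for the full-length test."""
    r_half = pearson_r(half_a, half_b)
    return (2 * r_half) / (1 + r_half)

# Hypothetical scores for 5 clients on the odd items vs. the even items
odd_items = [10, 12, 9, 14, 11]
even_items = [11, 13, 9, 15, 10]
print(round(split_half_reliability(odd_items, even_items), 2))  # ~0.97; +.80 or higher = good
```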
Validity
Does a test measure what it says it does?
-Face: Weakest; appears to measure what it says it does
-Content: Has enough items to sufficiently represent the construct
-Criterion-related: (Most objective) Concurrent (performance on one assessment as it relates to another), Predictive (performance on one assessment predicts performance on another), Sensitivity (chance of getting a true positive), and Specificity (chance of getting a true negative); see the sketch after this section
-Construct: The test measures a theoretical concept that is a true representation of what you are trying to assess
(to be valid, a test also has to be reliable)
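A quick worked sketch of sensitivity and specificity using their standard formulas, TP / (TP + FN) and TN / (TN + FP); the screening counts below are hypothetical:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Chance of getting a true positive: TP / (TP + FN)."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Chance of getting a true negative: TN / (TN + FP)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screen vs. gold-standard diagnosis:
# 45 of 50 children who have the condition screen positive (5 are missed)
# 90 of 100 children who do not have it screen negative (10 false alarms)
print(round(sensitivity(true_pos=45, false_neg=5), 2))   # 0.90
print(round(specificity(true_neg=90, false_pos=10), 2))  # 0.90
```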
Practice Q: What is the standard error of measurement?
A prediction of how much error may still exist despite the standardization of the test.
Practice Q: Which of the following types of reliability refers to the ability of an assessment to consistently measure the construct when a person takes the assessment twice with two different therapists who get the same results?
Inter-rater reliability
Practice Q: Specificity means…
The chance of finding a true negative