11. Assessment and Treatment: Principles of Evidence-Based Practice Flashcards
Standard/Traditional Assessment Procedures (8)
- Speech and language screening
- Case Hx (incl: description of comm. disorder, prior assessment and tx, family constellation and comm., prenatal, birth, and developmental hx, medical hx, educational hx, occupational hx)
- Hearing Screening (screen at 20 dB HL for 1k, 2k, and 4k Hz and at 25 dB HL for 500 Hz; screen younger kids at 15 dB HL for 500, 1k, 2k, 4k, and 8k Hz)
- Orofacial Exam (to rule out structural abnormalities)
- Interview (to obtain info, to inform, to provide support)
- Speech and language sample (50-100 utterances)
- Administration of standardized tests
- Review of assessment reports from other professionals
Assessment Teams (3)
Multidisciplinary: team members represent multiple disciplines, but each member conducts own eval and writes separate report w/ little interaction w/ others
Transdisciplinary: multiple specialists work together in initial assessment, but 1-2 members provide services
Interdisciplinary: team members of multiple disciplines interact and use each other’s suggestions and info in interpreting data; team collaboratively writes eval report and intervention plan
Standardized Assessment: Advantages
- Ease of test administration
- Ease of test scoring (quantitative measure)
- Some assurance of reliability (minimal bias)
- Scores may help determine eligibility for treatment
- Scores allow client skill to be compared to peers
Standardized Assessment: Limitations
- Cts often not represented in normative sample (big problem for culturally and linguistically diverse kids)
- Kids’ interactive styles may not match the formal fixed stimulus-question-response format of many tests
- Normative sample size may be small
- Sample skills in highly structured context so may not represent ct’s behavior in natural environment
- Rarely give opportunities to initiate conversation
- Do not effectively sample nonverbal communication
- Do not offer much opportunity for family/caregivers to participate in assessment
- Rarely give info re: how cts arrived at certain answers
- Many tests inadequately sample behaviors (i.e., too few opportunities)
- Comparing performance to norms ignores individual differences and variations in (speech/language) learning
- *Poor basis on which to develop tx goals
Prudent Use of Standardized Tests
- Choose tests whose normative sample matches ct’s ethnocultural b/g and is large and diverse
- Don’t modify test items to suit the ct; modification interferes w/ scoring
- Detailed manuals/clear instructions, current norms, report satisfactory reliability and validity
- Select test that you are well trained to administer
- Supplemental to naturalistic and in-depth assessments
- Use informal probes to sample behaviors in greater depth (using incorrect answers on standardized test)
- Create tx goals and assess progress based on these probe measures
Standardized Tests: Validity Types (4)
Degree to which a measuring instrument measures what it purports to measure
- Concurrent: degree to which a new test correlates with an established test of known validity
- Construct: degree to which test scores are consistent w/ theoretical constructs or concepts
- Content: thorough examination of all test items to determine if items are relevant to measuring what test purports to measure, and whether items adequately sample full range of the skill being measured
- Predictive: accuracy with which a test predicts future performance on a related task; aka criterion-related validity because future performance is the criterion used to evaluate validity
Standardized Tests: Reliability
Reliability refers to replicability; Scores are reliable if they are consistent across repeated testing or measurement of same skill or event
*Reliability of test is influenced by: a) fluctuations in examinee’s behavior, b) examiner error and c) instrumental/equipment errors
- Most reliability measures are expressed in terms of a correlation coefficient (r), which is a number or index that indicates relationship bet 2+ independent measures
- Highest value of r = 1.00 and lowest value of r = -1.00
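As a concrete illustration of r, a minimal Python sketch computing the Pearson correlation between two judges' independent ratings of the same responses (the score values below are made-up illustration data, not from the text):

```python
# Pearson correlation coefficient (r) between two independent measures,
# e.g., two judges independently scoring the same six responses.

def pearson_r(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

judge_a = [4, 3, 5, 2, 4, 3]  # hypothetical rating data
judge_b = [4, 3, 4, 2, 5, 3]
r = pearson_r(judge_a, judge_b)
print(round(r, 2))  # -> 0.82; an r close to 1.00 = high interjudge agreement
```

Identical ratings would yield r = 1.00; perfectly opposite ratings would yield r = -1.00, matching the bounds above.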
Reliability Types (5)
- Interjudge (Interobserver): the more similarly the observers independently rate the same skill or event, the higher the interjudge reliability coefficient
- Intrajudge (Intraobserver): the consistency with which the same observer measures the same phenomenon on repeated occasions
- Alternate/Parallel Form: consistency of measures when 2 forms of same test are administered to same person
- Test-Retest: consistency of measures when same test is administered to same person twice
- Split-Half: measure of internal consistency of a test; responses on the 1st half of the test should correspond to responses on the 2nd half (both halves should measure the same skill)
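For split-half reliability, the half-test correlation is conventionally adjusted upward with the Spearman-Brown formula (a standard psychometric correction, not specific to this text, since a half-length test underestimates full-length reliability). A minimal sketch:

```python
# Spearman-Brown correction: estimate full-test reliability from the
# correlation (r_half) between the two halves of a test.

def spearman_brown(r_half):
    return (2 * r_half) / (1 + r_half)

# A hypothetical half-test correlation of .70 implies full-test
# reliability of about .82.
print(round(spearman_brown(0.70), 2))  # -> 0.82
```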
Other Assessments: Rating Scales, Questionnaires, and Developmental Inventories
- Rating Scales: nominal (not numerical) or ordinal
- Questionnaires
- Developmental Inventories: help track kids’ physical and behavioral changes over time
Alternative (Nonstandardized) Assessment Approaches (7)
- Functional assessment (eval day-to-day comm. skills in naturalistic, socially meaningful contexts)
- Client-specific assessment (sp./lang. samples over time; establishing reliable baselines)
- Criterion-referenced assessment (skills eval. against a standard of performance (“criterion”) selected by clinician; e.g., “90% accuracy”)
- Authentic assessment (naturalistic observation of sp. and lang., e.g., class, homes, etc; “minimal competency core” and “contrastive analysis”)
- Dynamic assessment (evals ability to learn when provided instruction; test-teach-retest format; intervention incorporated into assessment process)
- Portfolio assessment (collecting samples of child’s work/performance over period of time and observing growth that occurs when instruction is provided)
- Comprehensive and integrated assessment (incl. elements of functional, client-specific, criterion-referenced, authentic, dynamic, and portfolio assessments + essential elements of traditional approach)
Authentic Assessment: “Minimal Competency Core” and “Contrastive Analysis”
A variation of authentic assessment is based on the concept of a “minimal competency core”: taking age and specific context into account, the least amount of linguistic skill/knowledge that a speaker is expected to display. The assessment question is whether the ct exhibits this minimal competency
Another variant is “contrastive analysis.” Appropriate for establishing whether a speech pattern is part of a speaker’s cultural b/g or a disorder, contrastive analysis requires knowledge of the speaker’s dialect and a naturalistic language sample to determine whether the differences found in the sample indicate a disorder or culturally appropriate comm. patterns
Comprehensive and Integrated Approach
- Retain necessary elements of traditional approach (case hx, interview, lang sample, orofacial exam, and hearing screening)
- Standardized test may not be used, but if necessary, clinician will prudently select ethnoculturally appropriate tests and interpret all test results cautiously
- Will use client-specific stimulus materials, sample communication in natural settings, and evaluate each skill in depth
- Targets of assessment will always be functional, meaningful comm. in social contexts
- Will consider standardized test results, if obtained, as supplemental to other, more naturalistic and client-specific assessment results
- May expand traditional clinical file to include additional materials as drawings and writing samples (and other “portfolio assessment” type items)
Basic Treatment Terms: Constituent vs. Operational Definitions, Discrete Trials, Evoked Trials, Exemplar, Probes, Shaping (Successive Approximation)
Constituent definitions: dictionary-like definitions
Operational definitions: define a behavior in terms of how it is measured; helpful in quantitatively measuring changes in target behaviors
Discrete trials: Tx method in which each opportunity to produce a response is counted separately; efficient in establishing target behaviors but less efficient than naturalistic methods in promoting generalization
Evoked trials: Clinical procedure in which no modeling is given; pics, questions, and other stimuli are used to evoke a response; evoked trials follow modeled trials
Exemplar: a specific target response that illustrates a broader target behavior; e.g., the word “soup” to teach /s/
Probes: procedures to assess generalized production of responses w/o reinforcing them; involve a criterion to be met before training advances to a more complex lvl; Pure probes: only untrained stimuli are presented
Shaping: target response is broken down into initial, intermediate, and terminal components and those are then taught in ascending sequence
Basic Reinforcement Definitions: Continuous, Intermittent, Differential, Negative, Reinforcement Withdrawal
- Continuous: reinforcing all correct responses
- Intermittent: reinforcing only some responses
- Differential: teaching ct to give diff. responses to diff. stimuli; reinforcing correct response while ignoring incorrect response to same stimuli
- Negative: strengthening behaviors by terminating an aversive event (e.g., a PWS’s avoidance of speaking situations is strengthened because avoidance removes aversive listener reactions)
- Reinforcement Withdrawal: prompt removal of reinforcers to decrease response; incl. extinction, time-out, and response cost
Schedules of Reinforcement (4)
- Fixed-interval: reinforcer delivered for the first correct response after a fixed time interval
- Fixed-ratio: reinforcer delivered after a fixed number of correct responses
- Variable-interval: time bet. reinforcers is varied around an average
- Variable ratio: number of responses required is varied around an average
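The four schedules can be sketched as simple “deliver a reinforcer now?” rules; a minimal Python illustration (the parameter values — ratio of 5, interval of 30 s — are hypothetical, not from the text):

```python
import random

# Sketch of reinforcement-delivery rules for the four schedules.

def fixed_ratio(response_count, ratio=5):
    # Reinforce every 5th correct response.
    return response_count % ratio == 0

def fixed_interval(seconds_since_last_reinforcer, interval=30):
    # Reinforce the first correct response after 30 seconds have elapsed.
    return seconds_since_last_reinforcer >= interval

def variable_ratio(average=5):
    # Number of responses required varies around an average of 5.
    return random.randint(2, 2 * average - 2)

def variable_interval(average=30):
    # Wait time (seconds) varies around an average of 30.
    return random.uniform(average / 2, 3 * average / 2)

print([n for n in range(1, 16) if fixed_ratio(n)])  # -> [5, 10, 15]
```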