Module 3 Flashcards
Primary goals of assessment:
- Determine strengths and weaknesses in terms of language and literacy ability as compared to age-matched peers or certain criteria (are they where they should be?)
- Measure the extent to which the intervention process employed can be deemed successful
- Asking "wh" questions consistently is assessment, NOT intervention
Purpose of assessment:
- Establishing a purpose for assessment helps to guide the selection of individual testing products
- While some assessments inform a variety of decisions, others serve a more narrow purpose
Four primary purposes of assessment:
1) Screening: a quick, norm-referenced determination of which students may need additional help; only tells you whether a student needs more testing or not
2) Baseline data: examine all areas of current functioning
3) Establishing intervention targets
4) Progress monitoring: administered periodically throughout the year to determine if students are successfully making progress
Formative vs. summative assessment:
Formative:
- Examples: Kahoot, ticket out the door, knowledge-check quiz
- Lets us know if things are going great or not
- Focus on learning, teaching, and outcomes
- Provides information to improve learning and teaching
- An interactive process between students and faculty to inform learning and teaching and to figure out how intervention is going
- Learner centered, course based, often anonymous, not graded
- Ongoing, occurring during the learning process
Summative:
- Focus on grades
- Examples: final grade, quiz
- Course content: mastery
- Can include discussion, cooperation, attendance, verbal ability
Formative:
- Occurs during the learning process
- Provides data that will support continuing or modifying instruction to meet the needs of learners
- Includes ongoing monitoring of client progress, which provides the clinician with a clear picture of student learning
Summative assessment:
- Serves an evaluative role following a period of sustained instruction
- Gives summary information of client achievement
- Affords an opportunity to discern learning outcomes at both individual client and program levels
Formal assessments:
- Standardized
- Norm-referenced
- Allows you to compare a student, based on age at the time of testing, to other students
- Tries to make sure you administer the test the same way every time
- Used to figure out whether you have a language disorder or to determine eligibility
Informal assessments:
- Everything else other than norm-referenced tests
- Most common for SLPs
- Can be standardized, but with no normative data
- Criterion referenced: compare client performance to a set of established standards and expectations
- Include dynamic assessments, functional assessments, and curriculum-based assessments
Assessing the components of language:
- It's usually important to see how your client compares to a set of established standards and expectations
- You shouldn't be giving assessments just to give them
- Usually, information from formal assessments does not provide enough detail to know where to start in therapy
- Think about the assessment task and how true it is to typical language use
Norm-referenced Tests: Overview
- Compare an individual's performance with the performance of others
- Designed to produce a normal curve, with 50% falling above and 50% below the mean
- The norming population is of the same age, sometimes grade, and sometimes gender
- Scores are reported in terms of standard scores, percentile ranks, grade/age equivalents, scaled scores, z-scores, and stanine scores
- Norm-referenced tests can be standardized, but not all standardized tests are norm-referenced
Normal (Bell) Curve
- What is considered average: within one standard deviation of the mean
- There is no single quantification of a standard deviation; it depends on the test
- For language tests, the mean is 100 and the standard deviation is 15
- The average range is 85-115
- 68.26% of people fall within the average range
Normal Curve continued
- You are looking for people below the average range, but NOT "below average"
- Gifted / intellectually disabled cutoffs: 130 / 70 (2 SD above / below the mean)
- Outside the average range: 31.74%; below the average range: about 15%
- Percentiles are NOT percentages
- A percentile tells you where you fall on the normal curve
- Percentiles are not equally distributed across the normal curve
- The highest you can score is the 99.9th percentile, because you can't be better or worse than yourself
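A quick way to double-check these percentages is Python's standard-library NormalDist; this is only a sketch, assuming the 100/15 convention used for language tests above.

```python
from statistics import NormalDist

# Assumes the 100/15 convention used by most language tests
curve = NormalDist(mu=100, sigma=15)

within_average_range = curve.cdf(115) - curve.cdf(85)   # inside 85-115
outside_average_range = 1 - within_average_range
below_average_range = curve.cdf(85)

print(f"{within_average_range:.2%}")   # ~68.27% (the 68.26% figure above)
print(f"{outside_average_range:.2%}")  # ~31.73% (the 31.74% figure above)
print(f"{below_average_range:.2%}")    # ~15.87% (the "about 15%" above)
```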
Raw scores:
- The number someone got right on a test
- All it does is help you find/calculate the standard score
- Never show these scores to parents
- When you administer any test, the first step in scoring will almost always be to calculate the number of items the student got correct
- A raw score is a test score that has not been weighted, transformed, or statistically manipulated
- By itself, it has no real meaning
Standard Scores:
- A standard score is a score that has been transformed to fit a normal curve, with a mean and SD that remain the same across ages
- Normally, standard scores have a mean of 100 and an SD of 15
- Perhaps the most well-known version is the Wechsler Intelligence Scales
- Using this scoring system, a child with a standard score of 115 would be 1 SD above the mean, whereas a child with a standard score of 85 would be 1 SD below
- Also, the percentage of scores between 85 and 115 is 68.26%
- Often, when doing assessment, you will have to tell parents and administrators the standard scores and the appropriate classification they represent
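A minimal sketch of the underlying arithmetic, assuming the 100/15 scale above: a standard score is just the mean plus the z-score times the SD.

```python
def standard_score(z, mean=100.0, sd=15.0):
    """Convert a z-score to a standard score on a mean-100, SD-15 scale."""
    return mean + z * sd

print(standard_score(1))    # 115.0 -> one SD above the mean
print(standard_score(-1))   #  85.0 -> one SD below the mean
```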
Percentile ranks:
- A percentile rank is a score indicating the percentage of people or scores that occur below a given score
- Not a percentage
- A percentile rank of 16 means you scored as well as or better than only 16% of the population
- Percentile ranks range from the lowest, the 1st percentile, to the highest, the 99th percentile
- The 50th percentile normally signifies average ranking or average performance
Importance of percentile ranks:
- In assessment, percentile ranks are very important because they indicate how well a child did when compared to the norms on a test
- Eligibility and insurance decisions prefer standard scores
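Assuming a normal 100/15 distribution, a standard score maps onto an approximate percentile rank through the cumulative distribution function; real tests publish their own conversion tables, so this is only a sketch of the relationship.

```python
from statistics import NormalDist

curve = NormalDist(mu=100, sigma=15)   # assumed 100/15 scale

def percentile_rank(standard_score):
    """Approximate percentile rank: percent of the norming sample scoring below."""
    return round(curve.cdf(standard_score) * 100)

print(percentile_rank(85))    # ~16 -> the 16th-percentile example above
print(percentile_rank(100))   # ~50 -> average ranking
print(percentile_rank(115))   # ~84
```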
Stanines:
- A stanine, an abbreviation for "standard nine," is a type of standard score that has a mean of 5 and a standard deviation of 2
- Stanine scores can range from 1 to 9
- A stanine of 7 is 1 standard deviation above the mean (5 + 2)
- A stanine of 9 is 2 SD above the mean (5 + 2 + 2)
- Conversely, a stanine of 3 is 1 SD below the mean (5 - 2)
- A stanine of 1 is 2 SD below the mean (5 - 2 - 2)
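Because stanines use a mean of 5 and an SD of 2, the same mean-plus-z-times-SD arithmetic applies; this sketch simply rounds and keeps the result within 1-9 rather than using a test's own conversion table.

```python
def stanine(z):
    """Convert a z-score to a stanine (mean 5, SD 2), rounded and kept within 1-9."""
    return max(1, min(9, round(5 + 2 * z)))

print(stanine(1))    # 7 -> one SD above the mean (5 + 2)
print(stanine(2))    # 9 -> two SD above the mean (5 + 2 + 2)
print(stanine(-1))   # 3 -> one SD below the mean (5 - 2)
print(stanine(-2))   # 1 -> two SD below the mean (5 - 2 - 2)
```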
Scaled Scores:
- Used for subtests of a norm-referenced test (e.g., the CELF-5)
- The mean is 10 and the SD is 3, so the average range is 7-13
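In other words, the average range is just the mean plus or minus one SD: 10 - 3 = 7 up to 10 + 3 = 13, the same arithmetic that gives 85-115 on the 100/15 scale.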
Z-Scores and T-Scores:
Z-scores:
- Also known as a standard score
- Describes where a score falls within a distribution of scores
- 0 is average; positive scores are above average; negative scores are below average
- Each unit increment represents one SD from the mean; therefore, a z-score of -1 is one SD below the mean
- Parents seem to understand these better
T-scores:
- Describes how far an individual is away from the mean
- A standard score calculated by multiplying the z-score by 10 and adding 50
- Therefore, a T-score of 50 is the mean score, and a score of 60 is one SD above the mean
- More common in psychology
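A minimal sketch of the T-score formula above (multiply the z-score by 10 and add 50):

```python
def t_score(z):
    """T-score: multiply the z-score by 10 and add 50 (mean 50, SD 10)."""
    return 10 * z + 50

print(t_score(0))    # 50 -> the mean
print(t_score(1))    # 60 -> one SD above the mean
print(t_score(-1))   # 40 -> one SD below the mean
```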
Age-equivalent scores:
- An age equivalent is a very general score that is used to compare the performance of children of the same age with one another
- Very misleading; use with caution
- It is the estimated age level that corresponds to a given score
- Age-equivalent scores are almost always given in years and months
Grade-equivalent scores:
- A grade equivalent is a very general score that is used to compare the performance of children in the same grade with one another
- Use very cautiously
- It is the estimated grade level that corresponds to a given score
- Almost always given in years and months in school
Confidence intervals:
- Used to describe the amount of uncertainty associated with a sample of a population
- A 90% CI means that 90% of such interval estimates would include the population parameter
- Calculated as the sample statistic +/- the margin of error
- This also accounts for test performance on a given day for a given child
- There is never an exact score
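A sketch of "sample statistic +/- margin of error"; the obtained score of 85 and the SEM of 3 are hypothetical values, not taken from any real test manual.

```python
from statistics import NormalDist

obtained_score = 85    # hypothetical standard score
sem = 3.0              # hypothetical standard error of measurement for this test

z_critical = NormalDist().inv_cdf(0.95)      # ~1.645 leaves 5% in each tail (90% CI)
margin_of_error = z_critical * sem

lower = obtained_score - margin_of_error
upper = obtained_score + margin_of_error
print(f"90% CI: {lower:.0f}-{upper:.0f}")    # roughly 80-90; never an exact score
```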
PPVT-4:
- Ages: 2-90
- A single-word receptive vocabulary test
Basal:
- A basal is the starting point
- The lowest level we assume they know; we assume they know everything before the basal
- Used in usually all assessments
- It represents the level of mastery of a task below which the student would correctly answer all items on a test
- All of the items prior to the basal are not given to the student
- These items are considered already correct
Ceiling:
- Once the basal is determined, the examiner will administer items until the student reaches a ceiling (they are going to get items incorrect)
- Everything above the ceiling is not administered; it is assumed the student will continue to get answers wrong
- The ceiling criterion is predetermined by the test
- Once you hit the ceiling, you stop testing
- The ceiling is the point where the student has made a predetermined number of errors and therefore stops administering all other items on the test, because it is assumed that the student will continue to get answers wrong
- The ceiling is the ending point
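A sketch of how basal and ceiling rules might be applied when computing a raw score; the specific rules here (stop after 6 consecutive errors, credit every item below the basal) are hypothetical, since each test manual defines its own.

```python
def find_ceiling(responses, consecutive_errors=6):
    """Return the index where the (hypothetical) ceiling rule is met:
    a run of `consecutive_errors` incorrect answers. None if never met."""
    run = 0
    for i, correct in enumerate(responses):
        run = 0 if correct else run + 1
        if run == consecutive_errors:
            return i
    return None

def raw_score(items_below_basal, responses):
    """Items below the basal are credited as correct; scoring stops at the ceiling."""
    ceiling = find_ceiling(responses)
    scored = responses if ceiling is None else responses[:ceiling + 1]
    return items_below_basal + sum(scored)

# Basal established after item 20; testing continues until 6 errors in a row
responses = [True, True, False, True, False, False, False, False, False, False]
print(raw_score(20, responses))   # 23
```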
Sensory and cognitive demands of assessments:
- Hearing
- Vision
- Positioning/vestibular
- Range of motion
- Motor
- Alertness
- Fatigue
- Attention span
- Memory
- Processing speed
- Executive functioning
Measures of Diagnostic Accuracy
- Validity refers to the degree to which a study accurately reflects or assesses the specific concept that the researcher is attempting to measure
- A measurement device is valid if it really measures what it is supposed to measure
- Reliability refers to the ability of a measure to be consistent
Validity unpacked:
- Most experiments are designed to measure hypothetical constructs, so the experimenter must create an operational definition of the dependent variable
- A valid measure is one that measures this hypothetical construct accurately, without being influenced by other factors
Validity of evidence:
- Internal validity: the extent to which empirical evidence provides a true or accurate reflection of the patients, procedures, and settings that were observed
- Several types: construct, face, content, criterion, predictive, and concurrent
External validity:
- The extent to which empirical evidence provides a true or accurate reflection of patients, procedures, and settings other than those that were observed
- Can I compare kids across different populations?
Two external validity questions:
- Are the participants representative of the population?
- Is the study replicable? If so, would the study produce similar results?
Threats to validity:
- Subjective bias includes an individual's personal beliefs, opinions, and expectations, a.k.a. self-fulfilling prophecies
- Types of subjective bias: experimenter bias, observer bias, participant bias
- One solution to subjective bias: blinding
- A single-blinded condition refers to when the participants are intentionally kept unaware of which treatment they are receiving
- Double-blind: both participants and experimenter are kept unaware
Three types of validity:
Construct:
- Concerned with theoretical relationships between variables
- Does it measure the construct?
Content:
- Do the items reflect the domain?
- Three factors impacting content validity:
  - Appropriateness of the types of items included
  - Completeness of the sample
  - The way the items assess the content
Criterion-related (also called predictive validity), 2 types:
- Predictive: tells if you can predict an individual's future performance based on their test results, with less concern for the whys; with no intervention, in 6 months you should get the same score
- Concurrent: looks at how closely a child's score is related to his or her score on a second measure collected at the same time
Face validity:
- The consensus that a measure represents a particular concept
- Least stringent
- Because most behavioral variables require indirect measures, the validity of a measured definition may not be self-evident
Reliability:
Test reliability:
- We want the test to be the same over and over
Test-retest:
- Repeated administration
Parallel forms reliability:
- Form A vs. Form B
- It shouldn't matter whether you give Form A or Form B
Internal consistency:
- Do the items relate to each other as expected? Even if a question is asked differently (e.g., repeated questions on surveys), you should get the same answer
Split-half reliability:
- Is the first half consistent with the second half?
- Important for showing the test is consistent with itself
Inter-rater reliability:
- Also known as Inter-Observer Agreement (IOA)
- Two people come to the same conclusion with the same assessment
- We want it to be at least 80%
- When collecting behavioral measures, clinicians must use their own judgment to interpret the events they are observing
- Risk of subjectivity
- To measure inter-rater reliability, different observers take measurements of the same responses
- Percent agreement is calculated
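A minimal sketch of point-by-point percent agreement between two observers; the example ratings are made up.

```python
def percent_agreement(rater_a, rater_b):
    """Point-by-point agreement: matching observations / total observations * 100."""
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    return agreements / len(rater_a) * 100

rater_a = ["correct", "correct", "error", "correct", "error"]
rater_b = ["correct", "error",   "error", "correct", "error"]
print(percent_agreement(rater_a, rater_b))   # 80.0 -> meets the 80% minimum above
```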
What to look for in a test:
- Predictive validity: predicts later performance in the same domain
- Test-retest reliability: .90 or higher
- Inter-examiner reliability: .90 or higher
- Administration procedures: described in sufficient detail to duplicate across examiners
- Special qualifications: who is qualified to take the test and who can assess
Diagnostic Accuracy:
Sensitivity:
- Also known as the probability of detection
- The true positive rate
- How many times will this test accurately tell us that a child has a language impairment?
Specificity:
- The true negative rate
- How many times will this test accurately tell us that a child does not have a language impairment?
- How many negatives were actual true negatives?
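Assuming hypothetical counts from a 2x2 comparison of a test against a gold-standard diagnosis, sensitivity and specificity work out like this:

```python
# Hypothetical counts comparing a screening test to a gold-standard diagnosis
true_positives = 45    # test flags a child who truly has a language impairment
false_negatives = 5    # test misses a child who has an impairment
true_negatives = 90    # test clears a child who truly does not have an impairment
false_positives = 10   # test flags a child who does not have an impairment

sensitivity = true_positives / (true_positives + false_negatives)   # true positive rate
specificity = true_negatives / (true_negatives + false_positives)   # true negative rate

print(f"Sensitivity: {sensitivity:.0%}")   # 90%
print(f"Specificity: {specificity:.0%}")   # 90%
```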