Praxis 2019 Flashcards
Data- Based Decision Making (RTI/MTSS)
o Involves the collection of formal and informal information to guide decisions about the student
o 1. Background data collection, techniques, and problem identification level: You must know various methods of data collection to help identify and define the problem.
o 2. Screening level: Data can be used to help identify at-risk students and make decisions about students who struggle with academic work.
o 3. Progress monitoring and RTI level: Data are used to determine the effectiveness of the interventions (RTI) once a student is identified.
o 4. Formal assessment level (special education evaluation): Cognitive, social, and emotional data are derived from various sources, but especially from formal standardized measures.
Data are used for the following needs:
o To identify the problem and plan interventions
o To increase or decrease levels of intervention
o To help determine whether interventions are implemented with fidelity
o To decide whether interventions are related to positive student outcomes (effectiveness)
o To plan individualized instruction and strategic long-term educational programming
When a struggling student has already been identified through various means, the initial data should…
Define the problem
Informal Data
o Student files and records
o Staff interviews and comments about the student
o Medical records and reports
o Review of previous interventions
o Developmental history
Structured, unstructured and semi-structured interviews
o Structured – highest validity; rigid format that is administered the same way every time.
o Unstructured – helps put the student at ease; the less structure you put on the child, the more he or she will open up. Responses can be difficult to interpret.
o Semi-structured – combines the best of both: allows flexibility as well as follow-up questions.
Observation Techniques:
Whole- Interval Recording
Whole-interval recording: Behavior is only recorded when it occurs during the entire time interval. (This is good for continuous behaviors or behaviors occurring in short duration.)
Frequency or event recording
Record the number of behaviors that occurred during a specific period.
Duration Recording
Refers to the length of time the specific behavior lasts.
Latency recording
The time between the onset of a stimulus (or signal) and the initiation of the specific behavior.
Time-sampling interval recording
Select a time period for observation, divide the period into a number of equal intervals, and record whether or not behavior occurs. Time sampling is effective when the beginning and end of behavior are difficult to determine or when only a brief period is available for observation.
Partial-interval recording
Behavior is scored if it occurs during any part of the time interval. Multiple episodes of behavior in a single time interval are counted as one score or mark. Partial-interval recording is effective when behaviors occur at a relatively low rate or for inconsistent durations.
Momentary time sampling
Behavior is scored as present or absent only during the moment that a timed interval begins. This is the least biased estimate of behavior as it actually occurs.
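The three interval methods above can be contrasted on the same observation session. A minimal Python sketch (not from the source; the behavior stream and interval length are made-up values) scoring whole-interval, partial-interval, and momentary time sampling side by side:

```python
# Illustrative sketch: comparing three interval-recording methods on one
# simulated observation session. All values are invented for the example.

def interval_estimates(stream, interval_len):
    """stream: list of booleans, one per second (True = behavior occurring).
    Returns percent of intervals scored under each recording method."""
    intervals = [stream[i:i + interval_len]
                 for i in range(0, len(stream), interval_len)]
    n = len(intervals)
    whole = sum(all(iv) for iv in intervals) / n * 100     # entire interval
    partial = sum(any(iv) for iv in intervals) / n * 100   # any part of interval
    momentary = sum(iv[0] for iv in intervals) / n * 100   # moment interval begins
    return whole, partial, momentary

# 60-second session; behavior present during two bursts (secs 5-19 and 40-44)
stream = [False] * 60
for sec in list(range(5, 20)) + list(range(40, 45)):
    stream[sec] = True

whole, partial, momentary = interval_estimates(stream, interval_len=10)
```

With 10-second intervals, whole-interval underestimates the behavior, partial-interval overestimates it, and momentary time sampling lands closest to the true 33% occurrence, matching the note that it is the least biased estimate.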
Universal Screening Measures
o CBM (Curriculum-Based Measures) – must be reliable and used only if they align with local norms, benchmarks, and standards. Ex. DIBELS
o CogAT (Cognitive Abilities Test) – cognitive measure that is group administered
o Fluency screeners – letter-naming fluency, phoneme segmentation, and reading fluency.
o State educational agencies – formal group-administered tests given to every student every year.
STEEP (System to Enhance Educational Performance) – conduct CBMs several times a year in reading, math, and writing.
Subskill mastery measurement (SMM) & General outcome measurement (GOM)
o SMM – info to measure whether the intervention is effective. Collected frequently, even daily.
o GOM – collected to see if the student is making progress toward long-range goals. Recorded once a week.
Three levels of Analysis
o Variability in data – centers on the effectiveness of the intervention; whether an intervention is effective is defined by its ability to change behavior. Confounding variables (uncontrolled subject and environmental variables) and measurement error also contribute to variability.
o Level – refers to the average performance within a condition.
o Trend – when a student’s performance systematically increases or decreases across time, analyzing the trend in the data is important. The pattern of change in a student’s behavior across time can be described as trend.
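Level and trend are both directly computable from progress-monitoring data. A minimal Python sketch (not from the source; the weekly scores are made-up values): level as the mean of the condition, trend as the least-squares slope across observations.

```python
# Illustrative sketch: "level" = mean within a condition; "trend" =
# least-squares slope of performance across time. Scores are invented.

from statistics import mean

def level_and_trend(scores):
    n = len(scores)
    xs = range(n)                       # observation index: 0, 1, 2, ...
    x_bar, y_bar = mean(xs), mean(scores)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, scores))
             / sum((x - x_bar) ** 2 for x in xs))
    return y_bar, slope

scores = [12, 14, 13, 16, 18, 19]       # e.g., weekly words read correctly
level, trend = level_and_trend(scores)  # trend > 0: performance increasing
```

A positive slope indicates an increasing trend (improvement), a slope near zero indicates flat performance at the given level.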
Baseline RTI data
o One rule for baseline data is that there should be no new highs (spikes) or lows for three consecutive data points.
o Another rule is that 80% of the data points should fall within 15% of the mean (average) line or, in the case of increasing or decreasing data points, within 15% of the trend line.
o Some researchers recommend collecting a minimum number of baseline data points, approximately three to five points.
o In schools, practical considerations often affect the amount of data that can be collected.
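The 80%-within-15%-of-the-mean rule above is mechanical enough to check in code. A minimal Python sketch (not from the source; the baseline values are made up):

```python
# Illustrative sketch: checking one baseline-stability rule from above --
# at least 80% of data points within 15% of the mean line.

from statistics import mean

def baseline_stable(points, pct_within=0.15, required=0.80):
    m = mean(points)
    lo, hi = m * (1 - pct_within), m * (1 + pct_within)
    in_band = sum(lo <= p <= hi for p in points)
    return in_band / len(points) >= required

stable = baseline_stable([20, 22, 21, 19, 23])    # tight around the mean
unstable = baseline_stable([20, 35, 12, 28, 9])   # wide swings
```

The same shape of check could be applied against a trend line instead of the mean line for increasing or decreasing baselines.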
A comprehensive evaluation includes formal and informal data in the following
domains
o Cognitive
o Achievement
o Communication (speech–language)
o Motor skills
o Adaptive skills
o Social, emotional, and behavioral functioning
o Sensory processing
FBA & Steps
• FBA – Identify the purpose or function of the behavior
o Describe problem behavior (operationally define problem).
o Perform the assessment. (Review records; complete systematic observations; and interview student, teacher, parents, and other needed individuals.)
o Evaluate assessment results. (Examine patterns of behavior and determine the purpose or function of the target behaviors.)
o Develop a hypothesis.
o Formulate an intervention plan.
o Start or implement the intervention.
o Evaluate effectiveness of intervention plan.
CBM
o CBM - CBM refers to the specific forms of criterion-referenced assessments in which curriculum goals and objectives serve as the “criteria” for assessment items. The key to CBM is the examination of student performance across time to evaluate intervention effectiveness.
Ecological Assessments
o Ecological assessments are just as important as formal or standardized assessments. Ecological assessments help to determine the “goodness of fit” between the student and the learning environment.
o An important acronym to remember is ICEL. ICEL stands for instruction, curriculum, environment, and learner. During an ecological assessment, the evaluator must review key elements of the four aspects of ICEL. For example, a school psychologist analyzes work samples, prior grades, and assessments. Information from parents, teachers, and the student is collected. Finally, authentic assessments include observational data of the target student during instruction and in other environments.
• Assessment of Non-English Speaking (ELL)
o You must assess the child’s speaking, reading, and writing abilities while considering the following:
o a.Developmental history and all languages that are spoken and heard
o b.Language dominance (the language the student has heard the most in his or her environment)
o c.Language preference
o The disorder must be present in the child’s native language (L1) and English (L2).
o Testing must be conducted in the native or strongest language.
o Tests must be normed on the appropriate cultural group.
o The child should be compared with members of the same cultural group who speak the dialect.
o Use of an interpreter is not the best practice and is psychometrically weak if the test is not normed on the cultural group being assessed. Score validity remains low even when the interpreter is highly trained and experienced.
Premack Principle
The theory posits that a high-probability (preferred) behavior can be used to reinforce a low-probability behavior. For example, a student is not allowed to play outside unless he does his homework first.
Immediacy
This is a key behaviorism concept. Consequences (e.g., rewards) should occur immediately after the behavior in order to be an effective reinforcement.
Negative reinforcement
This is often confused with punishment. Unlike punishment, a behaviorincreasesunder negative reinforcement. A stimulus is removed, which causes a behavior to increase.
Positive reinforcement
A behavior occurs, a rewarding stimulus is provided, and the behaviorincreases.
Fixed ratio reinforcement
A specific number of behaviors must occur before a reinforcer is given.
Variable ratio
The number of behaviors needed in order to receive the reinforcer varies. Variable schedules of reinforcement, once a behavior is established by this method, areresistant to change.
Frequency, duration, and intensity
These vital aspects of behavior are measurable and are key parts in all behavior modification plans for students.
Shaping
o Shaping is a technique that creates a behavior by reinforcing approximations of the desired target behavior.
Extinction
o Eliminating the reinforcers or rewards for the behavior terminates the problem behavior.
Punishment
The introduction of an undesirable stimulus thatdecreasesa behavior.
Laid the foundation for the study of INTELLIGENCE (the general factor, “g”)
Charles Spearman
Who created the first intelligence test
Alfred Binet (the Binet–Simon scale, later adapted by Lewis Terman as the Stanford–Binet)
Thurstone’s primary mental ability
• He claimed there were at least 11 primary mental abilities. Thurstone believed these abilities and dimensions were causal properties of behavior and, unlike Spearman, did not view intelligence as a unitary construct such as “g.”
CHC
Gf (fluid reasoning), Gc (crystallized knowledge), Gv (visual processing), Gs (processing speed), Gsm (short-term memory), Glr (long-term storage and retrieval)
Phonology
System of sounds that a language uses. Note that people commonly confuse phonemic awareness with phonological processing. Phonemic awareness is a component of the broader construct phonological processing.
Phoneme
The basic unit of a language’s sound or phonetic system. It is the smallest sound unit that affects meaning. Example: /s/.
Morpheme
Language’s smallest units of meaning, such as prefix, suffix, or root word. Example: “pre” in the word “preheat.”
Semantics
The study of word meanings and combinations, such as in phrases, clauses, and sentences.
Syntax
Prescribes how words may combine into phrases, clauses, and sentences.
Pragmatics
A set of rules that specify appropriate language for particular social contexts.
Key person to study Language development
Noam Chomsky is a key person to study as he is widely known as an expert on language development. He proposed that children are born with an innate mental structure that guides their acquisition of language and grammar.
Brain areas involved in language
The left hemisphere of the cerebral cortex plays a primary role in language.
- Broca’s area: Located in the frontal portion of the left hemisphere, this brain area supports grammatical processing and expressive language production.
- Wernicke’s area: Located in the posterior portion of the left temporal lobe, this section of the brain supports word-meaning comprehension and receptive language.
Cognitive abilities tests
Cognitive tests are norm-referenced scientific instruments that psychologists use to measure human abilities that are strongly correlated to a host of outcomes. Examples of common cognitive abilities tests include the WISC-V, SB-V, and DAS-II.
Formative evaluations
These are specific assessments used to determine a student’s strengths and weaknesses. Formative evaluations typically evaluate the academic areas in which students are doing well and areas in which they are doing poorly.
Summative evaluations
These provide a review and summary of a person’s accomplishments to date
Domain-referenced and criterion-referenced tests
These are tests concerned with the level of mastery of a defined skill set. Their purpose is solely to assess a student’s standing on a defined standard (e.g., criterion) or performance of a specific skill.
Percentile ranks
A percentile rank of a score is the percentage of scores (students) in its frequency distribution that are equal to or lower than it. An example is a student with a score at the 33rd percentile who has scored better than or equal to 33% of those who took the same test.
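A minimal Python sketch (not from the source; the score list is made up) implementing the "equal to or lower" definition of percentile rank given above:

```python
# Illustrative sketch: percentile rank = percent of scores in the
# distribution that are equal to or lower than the given score.

def percentile_rank(score, all_scores):
    at_or_below = sum(s <= score for s in all_scores)
    return at_or_below / len(all_scores) * 100

scores = [55, 60, 62, 70, 75, 78, 80, 85, 90, 95]
pr = percentile_rank(70, scores)    # 4 of 10 scores are <= 70
```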
Standard scores
SSs are psychometrically sound measures and are used to describe a person’s position within the normal curve (bell curve) of human traits. These scores express the position of a score in relation to the average (mean) of other scores. SSs use standard deviations (SDs) in their formulas and place a student’s score as below average, average, or above average. Mainstream cognitive test batteries typically use an SS with a mean of 100 and an SD of 15 (e.g., SS = 85–115 is average)
Z-scores
Z-scores have a mean of 0 and an SD of 1. They are not used much in education or in educational reports.
T-scores
T-scores are common scores and they have a mean of 50 and an SD of 10 (T = 40–60 is average)
Scaled scores
Ss are commonly reported and they typically have a mean of 10 with an SD of 3 (Ss = 7–13 is average).
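Every score metric above is a linear rescaling of the z-score, so converting among them only takes the two formulas below. A minimal Python sketch (not from the source; the example score is made up):

```python
# Illustrative sketch: converting among standard scores (mean 100, SD 15),
# T-scores (mean 50, SD 10), and scaled scores (mean 10, SD 3) via z.

def to_z(score, mean, sd):
    return (score - mean) / sd

def from_z(z, mean, sd):
    return mean + z * sd

z = to_z(115, mean=100, sd=15)       # SS 115 is one SD above the mean
t = from_z(z, mean=50, sd=10)        # same position as a T-score
scaled = from_z(z, mean=10, sd=3)    # same position as a scaled score
```

SS 115, T 60, and scaled score 13 all describe the same position: one SD above average.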
Variance
A measure of how far a set of numbers is spread out.
SD
A measure of the spread of a set of values from the mean value. The SD is the square root of the variance. It is a measure of dispersion. SD is used as a measure of the spread or scatter of a group of scores as a way to express the relative position of a single score in a distribution. As mentioned previously, most common cognitive test batteries express their full-scale SS with a mean of 100 and an SD of 15 to indicate the “average range” (85–115 = average).
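A minimal Python sketch (not from the source; the score list is made up) showing the variance/SD relationship above, using the standard-library `statistics` functions:

```python
# Illustrative sketch: SD is the square root of the variance.
# Population (not sample) versions are used here.

from math import sqrt
from statistics import pvariance, pstdev

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # mean = 5
var = pvariance(scores)             # mean squared deviation from the mean
sd = pstdev(scores)                 # spread in the original score units
sd_check = sqrt(var)                # same value as pstdev
```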
Reliability:
Reliability refers to standardized test results and scores that are consistent and stable across time.
Reliability coefficient:
This statistic illustrates the consistency of a score or the stability of a score. An appropriate reliability coefficient for standardized tests should generally be around or above r = 0.80. The higher the reliability coefficient, the better.
Alternate and parallel forms
Alternate forms of a test should be thought of as two tests built according to the same specifications, but composed of separate samples from the defined behavior domain. This method takes into account variation resulting from tasks and correlation between two test forms to provide the reliability coefficient.
Split half
Take a full test and create two tests from it, being careful to distribute difficult and easy items across both halves. Both halves are administered, even on the same day, and the scores on the two halves are correlated. The resulting correlation is typically adjusted with the Spearman–Brown formula to estimate the reliability of the full-length test.
Internal consistency reliability
An estimate of the reliability of the total test is developed from an analysis of the statistics of the individual test items. Each test item is compared with the total set of items. This statistic is expressed in terms of Cronbach’s alpha.
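Cronbach’s alpha can be computed directly from an item-by-student score matrix. A minimal Python sketch (not from the source; the item scores are made up) using the standard formula alpha = k/(k-1) × (1 − Σ item variances / total-score variance):

```python
# Illustrative sketch: Cronbach's alpha from invented item scores
# (rows = students, columns = items on the test).

from statistics import pvariance

def cronbach_alpha(rows):
    k = len(rows[0])                               # number of items
    item_vars = [pvariance(col) for col in zip(*rows)]
    total_var = pvariance([sum(r) for r in rows])  # variance of total scores
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

rows = [[3, 4, 3], [2, 2, 3], [4, 5, 5], [1, 2, 1], [3, 3, 4]]
alpha = cronbach_alpha(rows)
```

Items that rise and fall together across students (as in this toy data) push alpha toward 1, indicating high internal consistency.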
Interrater reliability
The reliability of the people administering and scoring the test is increased by increasing the number of raters or judges. Raters’ results on an assessment should be highly congruent for the test to be considered reliable.
Validity
Like reliability, validity is vital to a test’s effectiveness and usefulness. Validity regards the degree to which the test actually measures what it claims it measures. To put it another way, validity is the degree to which evidence and theory support the interpretation of test scores. As stated with reliability coefficients, validity coefficients are acceptable if they are generally above 0.80.
Criterion-related validity
Criterion validity concerns the correlation between two measures (tests) that are designed to measure human traits. If two tests measure the same trait, the correlation between them should be high. If one of the two tests is not designed to measure the same trait, the correlation between the two tests should be lower.
Convergent validity
Convergent validity is determined when a test is correlated with another test that has a similar purpose and measures the same trait. For example, if a test that measures attention deficit hyperactivity disorder (ADHD) correlates highly or “converges” with another well-known test of ADHD, then the test is said to have good validity.
Divergent validity
Divergent validity is established by correlating two tests that measure two different traits. For example, a test that measures ADHD should have a low correlation to a test that measures depression.
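Both convergent and divergent validity rest on the same statistic: the Pearson correlation between two sets of test scores. A minimal Python sketch (not from the source; the score pairs are invented) computing r for two hypothetical measures of the same trait:

```python
# Illustrative sketch: Pearson correlation, the statistic behind
# convergent (high r, same trait) and divergent (low r, different
# traits) validity evidence. Scores are invented.

from statistics import mean, pstdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
    return cov / (pstdev(xs) * pstdev(ys))

test_a = [50, 55, 60, 65, 70]      # e.g., scores on one ADHD rating scale
test_b = [48, 56, 59, 66, 71]      # scores on a second ADHD measure
r = pearson_r(test_a, test_b)      # high r suggests convergent validity
```

Correlating one of these with a measure of an unrelated trait (e.g., depression) should instead yield an r near zero, which is the divergent-validity pattern.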
NASP & Interpreters
NASP does not encourage the use of standardized tests with interpreters if the test is not appropriately normed.
False Positive & False Negative
False positives: A student performs well on a test, but in actuality, the student is failing in the authentic environment. For example, a student scores high on a reading comprehension test, but has difficulty reading in class.
False negatives: A student performs poorly on a test, but in actuality, the student is making acceptable progress in the authentic environment with little or no problem.
Interagency and School Community Collaboration
- Child centered: Direct service to the student such as tutoring or mentoring
- Family centered: Service to parents or entire families such as parenting workshops, family counseling, and family assistance
- School centered: Donation of money or equipment, staff development, or classroom assistance
- Community centered: Outreach programs, artwork and science exhibits, and after-school programs
Consultation With Interpreters
The use of interpreters is encouraged and necessary to build rapport with families and students who do not speak English. When using interpreters, be mindful of speech rate and use brief, simple statements so that the interpreter can relay the information efficiently.
Refer back to pg. 38 to look at RTI Pyramid!
…
Basic Principles of Effective Instruction
- Activate a student’s prior knowledge before teaching.
- Make connections between new learning and a student’s current knowledge. Make learning relevant to the student’s life.
- Do not overload students’ abilities when teaching new concepts, especially their working memory. Working memory capacity is typically limited to four to seven bits of information.
- Provide the optimum level of instruction, not too hard and not too easy. Have the student experience some success and some challenge. This concept is related to the Zone of Proximal Development (ZPD).
- Model desired responses, have explicit expectations, and provide exemplars of completed work.
- Allow time for practice. Provide corrective feedback and frequent practice of skills. Have cognitive rest periods (days) between teaching new concepts.
- Feedback needs to be provided in an immediate and positive manner.
- Multimodal teaching is good practice. Incorporate “learning by doing” when possible. Use visual, auditory, and kinesthetic modalities.
- Student learning develops as target skills progress through phases: Acquisition → proficiency → generalization → adaptation.
Specific Instructional Strategies
Explicit and systematic approach
Students are told specifically what they are learning before the lesson starts each class period. Next, students are told why they need to learn the new concept(s). Third, the teacher models the new skill or concept. After new information is presented, students practice with teacher feedback. Finally, students practice the skill over multiple trials. Explicit instruction also includes breaking down tasks or new concepts into small, manageable steps. The steps to effective instruction involve the “I do, we do, you do” approach.