Assessments Flashcards
Alternate Form Reliability
Deals with the evidence as to whether two or more allegedly equivalent forms of the same test are actually equivalent (Popham, 2007, p. 33). Multiple test forms are used more often in high-stakes testing than in run-of-the-mill classroom testing. To test for alternate-form reliability, both (or all) forms would have to be administered to the same students with little delay in between. Once you have the scores, you can compute the correlation coefficient reflecting the relationship between students’ performance on the two forms.
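A minimal sketch of that final step, assuming invented scores for six students on two forms (not from the source); Python's statistics.correlation (Python 3.10+) returns the Pearson coefficient used as the alternate-form reliability estimate:

```python
# Minimal sketch: alternate-form reliability as the correlation between two forms.
# The score lists are invented for illustration.
from statistics import correlation  # Python 3.10+

form_a = [78, 85, 62, 90, 71, 88]  # same students, Form A
form_b = [75, 88, 65, 93, 70, 84]  # same students, Form B, administered shortly after

r = correlation(form_a, form_b)    # Pearson correlation coefficient
print(f"Alternate-form reliability estimate: r = {r:.2f}")
```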
Alternative assessments
Often contrasted with “traditional” assessment. According to Brown & Hudson (1998), the defining characteristics are that alternative assessment–
- requires students to perform or create;
- uses real world contexts or simulations;
- extends day-to-day classroom activity and is non-intrusive;
- allows students to be assessed on what they already do every day;
- uses tasks that represent meaningful activity;
- focuses on processes as well as product;
- taps into higher-order thinking and problem-solving;
- provides information about students’ strengths and weaknesses;
- is multiculturally sensitive;
- ensures that humans do the scoring;
- encourages open disclosure of standards and rating criteria;
- encourages teachers to perform new instructional and assessment roles;
- is continuous and untimed.
According to Brown (2004, Chapter 10), some forms of alternative assessment include–
- Portfolios
- Journals
- Conferences & Interviews
- Observations
- Self- and peer-assessment
Washback
The effect of testing on teaching and learning (Hughes, 2003, in Brown, 2010, p. 37). The extent to which assessment affects a student’s future language development. It can also refer to the “washing back” of diagnostic knowledge of strengths and weaknesses to the student. Teachers should strive to make classroom tests that enhance positive and effective washback (Brown, 2004, p. 29).
Annual Measurable Achievement Objectives
The state-set targets that indicate whether a state or district has met Adequate Yearly Progress.
Assessment
Appraising or estimating the level or magnitude of some attribute of a person (Mousavi, 2009).
“Assessment” is not synonymous with “test”. Tests are prepared administrative procedures that occur at a regular time in a curriculum when learners must perform at peak ability. Assessment is much broader and encompasses a wide domain of activities and evaluation. Tests are a subset of assessment.
According to Gottlieb (2006) the assessment of ELLs must be inclusive, fair, relevant, comprehensive, valid, and yield meaningful information.
Authentic Assessment
A form of assessment in which students are asked to perform real-world tasks that demonstrate meaningful application of essential knowledge and skills (Mueller, 2012)
Circumstantial Bilingualism
A situation in which an individual must become bilingual because of an outside force (war, school mandates, relocation, etc.)
Construct Related Validity Evidence
The extent to which empirical evidence confirms that an inferred construct exists and that a given assessment procedure is measuring the inferred construct accurately (Popham, 2007).
Content Related Validity Evidence
Refers to the extent to which an assessment procedure adequately represents the content of the curricular aim being measured (Popham, 2007).
Criterion Referenced Tests
Are designed to give test-takers feedback, usually in the form of grades, on specific course objectives. It is possible for everyone to get a good grade if all have mastered the objective(s). They can be formative (unit tests, a midterm) or summative (like end-of-course tests) (Brown, 2004).
Criterion Related Validity Evidence
The degree to which performance on an assessment procedure accurately predicts a student’s performance on an external criterion (Popham, 2007)
Cut score
The point on an assessment scale at which scores at or above that point are interpreted or acted upon differently from scores below it (e.g., 70 = passing, 69 = failing).
Domain
Refers to the four skills - listening, speaking, reading, and writing. An assessment is often seen as evaluating or focusing on one domain.
Elective Bilingualism
When an individual chooses to learn a second language.
Face Validity
The degree to which a test looks and appears to measure the knowledge or ability it claims to measure. It is subjective and based on the perceptions of the examinees, the administrators who use it, and others. It is purely in the “eye of the beholder” and cannot be empirically measured. The appearance of content validity increases the probability of face validity (Brown, 2004).
Formative Assessment
Evaluating students in the process of forming their competencies or skills. The key to formative assessment is the delivery and internalization of appropriate feedback on performance, and that performance should inform instruction. Formative assessment takes into account forms of informal assessment (Brown, 2010).
Grade-equivalent scores
In the K-12 context, a grade-equivalent score represents the grade level of most students who earn that score. Grade-equivalent scores should be as skill-specific as possible. http://www.hishelpinschool.com/testing/test4.html
Internal Consistency Reliability
Deals with the extent to which the items in an assessment tool are functioning in a consistent fashion or homogeneously (Popham, 2007). Students should perform consistently on items that assess the same skill.
To test for internal consistency reliability with dichotomous items (items scored right/wrong), the Kuder-Richardson method is used. To determine the internal consistency of polytomous items (items with more than two possible score values), Cronbach’s coefficient alpha is used.
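As a minimal sketch of the calculation (not from the source; the 0/1 response matrix is invented for illustration), coefficient alpha can be computed from item variances and total-score variance; with dichotomous items like these it is equivalent to KR-20:

```python
# Minimal sketch: Cronbach's coefficient alpha (equivalent to KR-20 for 0/1 items).
# Rows are students, columns are items; the data are invented for illustration.
from statistics import pvariance

responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

k = len(responses[0])                                     # number of items
item_vars = [pvariance(col) for col in zip(*responses)]   # variance of each item
total_var = pvariance([sum(row) for row in responses])    # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"Coefficient alpha: {alpha:.2f}")
```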
Mean
The mean is the sum of the values divided by the number of values. http://en.wikipedia.org/wiki/Mean
Median
A median is described as the numerical value separating the higher half of a sample from the lower half. The median of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one.
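A minimal sketch of both statistics, using an invented list of scores (not from the source) and Python's statistics module:

```python
# Minimal sketch: mean and median of a set of scores (scores invented for illustration).
from statistics import mean, median

scores = [70, 85, 90, 65, 88, 92, 75]

print(mean(scores))    # sum of the values divided by the number of values -> 80.71...
print(median(scores))  # middle value of the sorted scores -> 85
```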
Multiple Measures
Valenzuela (2002) states that one needs to take into account multiple measures of ELLs’ proficiency and development. Multiple measures help one triangulate proficiency better and enhance construct validity.
Norm-Referenced Tests
Each test-taker’s score is interpreted in relation to a mean, median, standard deviation, and percentile rank. The purpose of such tests is to place test-takers along a mathematical continuum in rank order. Norm-referenced tests include the SAT and TOEFL (Brown, 2004)
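A minimal sketch of placing one score on such a continuum (not from the source; the norming mean, standard deviation, and raw score are invented, and an approximately normal score distribution is assumed):

```python
# Minimal sketch: interpreting a raw score against a norming mean and standard deviation.
# All numbers are invented for illustration; a normal score distribution is assumed.
from statistics import NormalDist

norm_mean, norm_sd = 500, 100   # invented norming-sample statistics
raw_score = 620                 # one test-taker's score

z = (raw_score - norm_mean) / norm_sd                              # standard (z) score
percentile = NormalDist(norm_mean, norm_sd).cdf(raw_score) * 100   # percentile rank

print(f"z = {z:.2f}, percentile rank ≈ {percentile:.0f}")
```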
Normal Curve Equivalent
A normalized score scale based on the normal distribution, also known as the bell curve.
Performance-Based Assessment
In terms of language assessment, this refers to the type of assessment that involves oral production, written production, open-ended responses, integrated performance, group performance, and other interactive tasks. This type of assessment is time-consuming and expensive, but it has more content validity because learners are measured in the process of performing linguistic acts (Brown, 2004). PBA is not always considered a form of alternative assessment, but it does share some characteristics with it.
Some characteristics of PBA are-
- Students make a constructed response
- Students engage in higher-order thinking
- Tasks are meaningful, engaging, and authentic
- Tasks call for the integration of language skills
- Both the process and the product are assessed
- Depth of mastery is emphasized over breadth
PBA should be treated with the same rigor as traditional tests (Brown, 2004). Teachers need to be careful when assessing PBA that they assess language features, not just surface features; otherwise, problems with inter-rater reliability may arise.
Portfolio Assessment
An alternative form of assessment that is a purposeful collection of students’ work that demonstrates their efforts, progress, and achievement in given areas (Genesee & Upshur, 1996).
Examples of items that can be collected in portfolio assessment include: essays with drafts, project outlines, poetry, creative prose, artwork, photos, clippings, audio/video, journals, diaries, reflections, tests, homework, notes on lectures, and self-assessments.
Gottlieb (1995) uses the CRADLE acronym for six possible attributes of a portfolio: Collecting, Reflecting, Assessing, Documenting, Linking, and Evaluating.
Several reports show the advantages and benefits of portfolio assessment (Genesee & Upshur, 1996, and others, in Brown, 2004, p. 257), including that portfolios -
- create a sense of intrinsic motivation, responsibility, and ownership
- promote student-teacher interaction
- individualize learning
- provide tangible evidence
- foster critical thinking
- create the opportunity for collaborative work
- permit assessment in multiple dimensions of learning
However, a portfolio must not become a “pile of junk,” and to prevent this, Brown (2004) suggests that teachers take the following steps -
- State objectives clearly
- Give guidelines on what material to include
- Communicate assessment criteria
- Designate time for portfolio development
- Establish a schedule for review
- Keep portfolios in an accessible place
- Provide positive washback before final assessment