Terminology Flashcards
Anaphoric reference
means that a word in a text refers back to other ideas in the text for its meaning.
It can be compared with cataphoric reference, which means a word refers to ideas later in the text.
Example
‘I went out with Jo on Sunday. She looked awful.’ ‘She’ clearly refers to Jo, so there is no need to repeat her name.
In the classroom
Asking learners to identify what or who the pronouns in a text refer to is one way to raise awareness. They can then practise this by using pronouns to replace words themselves. Comparing texts with well-managed referencing to ones with poorly managed referencing can help students develop an idea of effective referencing, even at low levels.
Backwash (also called washback)
Backwash (also called washback) is the effect that knowledge of the contents of a test may have on the course which precedes it. It may be positive or negative.
Example: If students are working towards an exam where all of the test items focus on grammatical accuracy, the teacher (possibly at the students’ instigation) may spend much of the preceding course focusing on this area - and possibly neglecting other areas that they will need outside the course, such as spoken fluency or listening. Thus the test would have negative backwash - it would push the teacher into “teaching for the test” rather than providing a balanced course which dealt with all the students’ needs and developed areas of competence other than just grammatical knowledge.
Fresh starts
Fresh starts are another feature that can make test results more reliable, and the Delta itself is a good example. A long time ago, when Delta was still DTEFLA, the written exam consisted of three one-hour essay questions. This meant that after a course focusing on a wide range of topics (ask yourself how many different topics you’ve covered on your Module One course - ours must have between eighty and a hundred), your final grade was determined by your ability to write about just three of them. If none of the topics which came up happened to be your “speciality”, or if you had a general knowledge of everything but in-depth knowledge of nothing, you might do less well than someone who in fact knew relatively little, but just happened to know a lot about the three topics she had to or chose to answer questions on.

With the new format, this can’t happen. With eight questions, many of which have a number of different sections, and all of which are marked point by point, there are now numerous “fresh starts”. If you don’t know the first term defined in Paper 1.1, you may still know the second; if you can’t analyse the phonology of the phrases specified in Paper 1.5, you may still be able to analyse the form and meaning, and so on. So fresh starts lead to a much more reliable result - there is far less chance of the result being swayed by a single strength or weakness.

Applying this to our placement test, this test has no “fresh starts”. Presuming that the learner starts narrating the story using past verb forms, her final mark is liable to be dominated by her accuracy and fluency in using those. If this is high, it may hide the fact that when talking about the future she relies exclusively on will + infinitive. Or if it is low, it may not reveal that in general social conversation using the present simple, she is both accurate and fluent.
Construct validity
- Definition: A quality of an effective test. If a language test has construct validity, it reflects an accepted theory of language use and tests only the subskills/competencies that would naturally be involved.
- Example: Dictation as a test of listening comprehension lacks construct validity as a) it involves a number of skills not generally needed when listening - i.e. the ability to retain the exact words used, and the ability to write and to spell; b) it is generally carried out under conditions which are quite different to those generally required when listening - the words being dictated are repeated more than once; they are pronounced clearly and slowly; they are delivered in short chunks, etc. This is quite different from the normal listening situation, where the listener has to decode the stream of speech in real time, hears it once only, etc.
Face validity
Face validity concerns the extent to which a test appears valid - i.e. appears to be “a good test” to the people using it - the learners taking the test, their parents, and the teachers and institutions putting them in for the test, etc.
As these people may not be testing experts, they may of course not fully understand what is involved. As an example: if you look at the example given under construct validity, you will see that there are various reasons why dictation is not a valid test of listening comprehension. Yet many learners (and teachers) who are used to it as an activity type may accept quite happily that listening ability should be tested in this way, and make no objection to a test which includes it.
This means that a test which has face validity is not necessarily a good test - and similarly, a test which does not have face validity is not necessarily a bad test. Nevertheless, it is an important quality for a test to have. If a test which possesses every other required quality - other forms of validity, reliability, practicality and so on - does not have face validity, then learners are not going to want to take the test or be happy that its results are a fair assessment of their language competence.
Related terms: validity, content validity, predictive validity
Catenation
Catenation is one of the ways speakers join words together. In catenation, a consonant sound at the end of one word joins with a vowel sound at the beginning of the next word.
Example
The two words an + apple become ‘anapple’ in speech, with catenation of the consonant sound /n/ and the vowel sound /æ/ at the start of ‘apple’.
In the classroom
Learners often have difficulty hearing individual words due to catenation. Specific listening tasks such as counting the number of words heard, dictation, and reading along with a recording can help practise this.
Face validity
If a test has face validity then it looks like a valid test to those who use it. Face validity can be compared with content validity, which describes how far the test actually measures what it aims to measure.
Example
Many public English exams have high face validity as they are seen as being very good tests by those who take them.
In the classroom
Face validity is not an objective measure of how good a test may be. However, it is as important as content validity, because learners and teachers need to think a test is credible if it is to work.
http://www.teachingenglish.org.uk/article/face-validity