Lang & Comm 3 Flashcards
2 countries of highest and lowest literacy rates
Latvia 99.9%
Chad
How many people are illiterate globally
769,000,000 (World Literacy Foundation, 2012)
2 types of costs of illiteracy
economic - £2BN per year
social - higher chance of depression, substance abuse, suicidal ideation and poor physical health
visual word recognition
first stage of reading where we transform letters into meaning
through which store do we achieve visual word recognition
through our mental lexicon
how many words in English speaker’s mental lexicon
60,000-70,000
grapheme
letter group that corresponds to a single phoneme (e.g. ‘oa’ in ‘broad’)
what do graphemes form the bridge between
phonology and orthography
how do we test if graphemes are used for visual word recognition?
letter detection (Rey, Ziegler and Jacobs, 2000)
what is this testing and what did it find
if graphemes are used for visual word recognition - they ARE: detecting ‘a’ in ‘broad’ (where it sits inside the grapheme ‘oa’) took longer than detecting ‘a’ in ‘brash’ (where it is a grapheme on its own)
morpheme
smallest meaningful unit of language (“un-real” = prefix and root)
complications of morphemes and examples
pseudo-affixes (the affix-like part is not a real morpheme):
de-ter vs de-press (pseudo- vs real prefix)
corn-er vs farm-er (pseudo- vs real suffix)
se-ed vs look-ed (pseudo- vs real suffix)
how do we test if morphemes are used for visual word recognition?
primed lexical decision (Lima and Pollatsek, 1983)
what is this testing and what did it find
if no morphemes were used, priming should simply track letter overlap (3>2>1)
in fact, priming was strongest for the morpheme prime regardless of its letter overlap (2>1=3)
morphemes are access units in visual word recognition
what is this testing and what did it find
had to say if words were real or non-words (lexical decision):
CORNER (pseudo-suffix: corn + -er) = greater priming of CORN
BROTHEL (no pseudo-suffix) = priming comparable to control
= suffixes + pseudo-suffixes are used in early word recognition
how do we test if letters are processed in parallel or serially to one another?
word naming (DV = RT + accuracy) (Weekes, 1997)
what is this testing and what did it find
word length effects in reading:
HF words – naming times comparable across lengths
LF words – weak length effect
non-words – strong length effect (need serial grapheme-phoneme conversion)
why is grapheme-phoneme conversion necessary for non-words
they aren’t stored in our mental lexicon, so they have to be converted serially, letter by letter
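The serial route for non-words can be sketched as a toy Python simulation; the rule set and the longest-match strategy here are illustrative assumptions, not the actual procedure from the lecture:

```python
# Toy serial grapheme-phoneme conversion: non-words have no lexical entry,
# so they are sounded out grapheme by grapheme, left to right.
# The rule set below is an illustrative fragment, not a real GPC system.
GPC = {"sh": "ʃ", "ee": "iː", "a": "æ", "b": "b",
       "r": "r", "s": "s", "d": "d"}

def convert(nonword):
    phonemes, i = [], 0
    while i < len(nonword):
        # try the longest grapheme first (e.g. 'sh' before 's')
        for size in (2, 1):
            chunk = nonword[i:i + size]
            if chunk in GPC:
                phonemes.append(GPC[chunk])
                i += size
                break
        else:
            i += 1   # no rule for this letter: skip (toy simplification)
    return phonemes

print(convert("brash"))   # ['b', 'r', 'æ', 'ʃ']
```

Because each grapheme is handled in turn, longer non-words take more steps, which matches the word length effect found for non-words.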
are letters processed in parallel or serially? And what is paid attention to
MOSTLY parallel
exterior letters prioritised: first > last > middle letters
consonants prioritised over vowels
activating a whole set of words coded in a similar way
Coltheart’s N: orthographic neighbourhood (number of words formed by changing one letter of the target)
how do we test if words with shared letters are all activated when searching for a target?
form-based priming/orthographic priming (Evett and Humphreys, 1981)
what is this testing
if we activate all similar words when searching for a target
orthographic neighbourhoods: explain results
real-word and non-word primes that share letters with the target = faster identification
WORDS that share letters are negatively connected in the lexicon (inhibit similar words)
NON-WORDS that share letters don’t inhibit (no lexical entry to compete)
how do we test if there are feedback connections between letters and words?
letter detection (Reicher 1969)
what does this letter detection task (Reicher 1969) test?
if there are feedback systems between words and letters
explain results of Reicher’s study
letters detected better in words than non-words
letters in words are detected better than letters presented on their own
= there is a feedback system from words to letters (word superiority effect)
give 3 extra phenomena with word recognition (FAS)
Frequency effect – HF words recognised faster
Age of Acquisition effect – words learned younger = recognised faster
Semantic priming effect – e.g. CAT is recognised faster when primed by DOG than by SCHOOL
2 ways different models of word recognition differ
serial/parallel access
one way/interactive relationship between letters and words
list these top to bottom
Forster’s search model
Morton’s logogen model
R&M Interactive Activation model
what model is this
Forster’s search model:
list 4 steps in Forster’s search model (RASA)
Recognise units of a word
Access a unit in correct bin (parallel)
Search frequency ranked bin for target word (serial)
Access master file for meaning
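The four RASA steps can be sketched as a toy Python simulation; the bins, words, frequencies and the "first two letters" access code below are illustrative assumptions, not details from the lecture:

```python
# Toy sketch of Forster's search model (vocabulary/frequencies illustrative).
# Bins are indexed by an orthographic access code (here: first two letters);
# entries within a bin are ranked by frequency, highest first.
bins = {
    "ca": [("cat", 900), ("car", 800), ("cap", 300)],
    "do": [("dog", 850), ("dot", 200)],
}
master_file = {"cat": "feline pet", "car": "vehicle", "cap": "hat",
               "dog": "canine pet", "dot": "small mark"}

def recognise(word):
    code = word[:2]                           # 1. Recognise units -> access code
    for entry, _freq in bins.get(code, []):   # 2. Access the correct bin (parallel)
        if entry == word:                     # 3. Serial, frequency-ranked search
            return master_file[entry]         # 4. Access master file for meaning
    return None                               # search exhausted: non-word

print(recognise("cat"))   # HF entry is found early in its bin
print(recognise("cax"))   # None (non-word)
```

The serial, frequency-ranked search in step 3 is what lets this model explain the frequency effect: HF words sit near the top of their bin, so the search reaches them sooner.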
which model is this
Morton’s logogen model
why is Morton’s logogen model parallel
information about letters is being sent to all logogens at once
why is Morton’s logogen model not interactive
information only feeds forward from letters to words
what type of word is a logogen threshold high for
low frequency words
what can logogen thresholds depend on
frequency and prior exposure
what happens when activation passes logogen threshold
logogen fires and word is recognised
logogen
word detector
is Morton’s logogen model top-down or bottom-up
bottom-up, from letters to words only
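The logogen cards above can be summarised in a toy Python sketch; the vocabulary, thresholds and evidence values are illustrative assumptions:

```python
# Toy sketch of Morton's logogen model (words and numbers illustrative).

class Logogen:
    """Word detector: accumulates evidence, fires once past its threshold."""
    def __init__(self, word, threshold):
        self.word = word
        self.threshold = threshold   # lower for HF / recently-seen words
        self.activation = 0.0

    def receive(self, evidence):
        self.activation += evidence
        return self.activation >= self.threshold   # fired = word recognised

# HF word 'the' gets a lower threshold than LF word 'thy'
logogens = [Logogen("the", 2.0), Logogen("thy", 5.0)]

recognised = None
# Letter evidence is broadcast to ALL logogens at once (parallel),
# and flows one way only: letters -> words (bottom-up, no feedback).
for letter_evidence in [1.0, 1.0, 1.0]:
    for lg in logogens:
        if lg.receive(letter_evidence) and recognised is None:
            recognised = lg.word

print(recognised)   # the HF word fires first
```

Note that nothing flows back from logogens to letters, which is exactly why this model cannot explain the word superiority effect.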
which model is this
R&M Interactive Activation Model
what makes R&M IAC Model the same as logogen model
Parallel activation of all words that contain letters recognised
what makes R&M IAC Model different to logogen
between level connections - feedback means activation of word goes back to letters (word superiority effect)
what effect does the feedback (bottom-up and top-down processing) in the R&M IAC Model explain
word superiority effect
what is the difference between thresholds in logogen and R&M IAC Models?
logogen thresholds vary
HF words have higher resting levels of activation - not thresholds - in R&M IAC Model
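The two differences from the logogen model (word-to-letter feedback; resting activation rather than thresholds) can be sketched in a toy Python simulation; the words, weights and update rule are illustrative assumptions:

```python
# Toy sketch of the R&M Interactive Activation model (numbers illustrative).
# Like the logogen model, letters activate all matching words in parallel;
# unlike it, words feed activation BACK to their letters, and HF words have
# higher RESTING activation (not lower thresholds).

word_act = {"cat": 0.4, "car": 0.2}       # HF 'cat' rests higher than 'car'
letter_act = {l: 0.0 for l in "catr"}

def step(seen, word_act, letter_act):
    # bottom-up: each presented letter excites every word containing it
    for w in word_act:
        word_act[w] += 0.1 * sum(1 for l in seen if l in w)
    # top-down FEEDBACK: active words excite their own letters
    for w, a in word_act.items():
        for l in w:
            letter_act[l] += 0.05 * a

for _ in range(3):
    step("ca", word_act, letter_act)

# 't' is excited by 'cat' despite never being presented: this word-to-letter
# feedback is the mechanism behind the word superiority effect
print(letter_act["t"] > 0.0)   # True
```

Because ‘cat’ starts from a higher resting level, it stays ahead of ‘car’ throughout, which is how the model captures the frequency effect without varying thresholds.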
how well does each model of word recognition explain frequency effect (HF = faster)
:) SM - bins ranked by frequency
:) LM - Lower activation threshold
:) R&M IAC - higher resting levels of activation
how well does each model of word recognition explain word length effect (no clear effect)
:) SM - words ranked by freq. (length = insig.)
:) LM & R&M IAC - letters processed in parallel (length = insig)
how well does each model of word recognition explain form priming (loup/LOUD)
:) SM - possible if prime and target in same bin and prime hasn’t been reached yet when target appears
:) LM & R&M IAC - prime activates the letters it shares with the target
how well does each model of word recognition explain morpheme units
:) SM - only if the access unit = morphemes
:( LM & R&M IAC - predict more priming for more shared letter overlap
how well does each model of word recognition explain word superiority effect
:( SM & LM - no interaction between letters-words
:) R&M IAC - interactivity predicts effects