Visual word recognition Flashcards

1
Q

Why is reading an important cultural invention?

A

Because it allows the transmission and storage of information and therefore the endurance of ideas.

2
Q

What was the first written language, and what was the ‘giant leap for mankind’?

A

Sumerian cuneiform, developed in Ancient Mesopotamia. Writing was at first purely pictographic; symbols were later used to represent sounds, which was the 'giant leap for mankind' because it allowed the symbolic representation of abstract thoughts.

3
Q

How does reading relate to visual word recognition?

A

Visual word recognition is one of the cornerstones for reading.

4
Q

What are the main features of visual word recognition?

A
  • Fast and automatic
  • Flexible
  • Precise
5
Q

How many words can we process per minute?

A

250-300 words per minute.

6
Q

How quickly can the brain distinguish between words and non-words?

A

Within 200ms.

7
Q

In what way is visual word recognition flexible?

A

We can read different fonts, scripts, case, handwriting, etc.

8
Q

In what way is visual word recognition precise?

A

We can distinguish between words that are very similar, e.g. trail and trial.

9
Q

What did Hauk et al. (2006) do and find?

A

Investigated the speed of visual word recognition through EEG. Found that the typicality effect occurs 100ms after onset and the lexicality effect 200ms after onset.

10
Q

What did Stroop (1935) do?

A

Demonstrated the fast and automatic nature of visual word recognition: in a colour-naming Stroop task it is impossible to ignore the printed word, showing that reading is highly automatic.

11
Q

What did Bisson et al. (2012) do?

A

Demonstrated the automatic nature of visual word recognition using eye movements and subtitles: viewers read subtitles whenever they are present, regardless of whether they know the language or need the subtitles.

12
Q

Describe the masked priming paradigm.

A

Mask – prime – target:

  • The prime is presented briefly (e.g. 60ms)
  • The prime-target SOA typically ranges from 30-250ms
  • Participants decide whether the target is a correct English word (lexical decision task, LDT)
13
Q

How can the masked priming paradigm be used to support the automatic nature of visual word recognition?

A

A masked non-word prime differing from the target by one letter facilitates processing of the target compared to a non-word prime sharing no letters, e.g. bontrast facilitates CONTRAST (Forster & Davis, 1984). This shows that, to some extent, we read words even when they are presented too briefly for us to be aware of them.

14
Q

What did Ferrand and Grainger (1993) do and find?

A

Manipulated orthography and phonology in different priming conditions and found that:

  • Phonology: facilitation increases as prime duration increases (although it drops after an optimum of 67ms)
  • Orthography: facilitation decreases as prime duration increases.
15
Q

What did Perfetti and Tan (1998) investigate and find?

A

Graphic, phonological and semantic priming in Chinese. The effect of graphic priming decreases over prime duration, phonological increases then stabilises, and semantic is non-existent until 85ms, after which it increases.

16
Q

How can letter order demonstrate the flexibility of visual word recognition?

A

Because to a certain extent, letter order within a word doesn't matter as long as the first and last letters remain correct, since the brain reads the word as a whole. However, this often fails for longer, less common or unexpected words; for example, magltheuansr isn't easily recognisable as manslaughter.

17
Q

How do writing systems demonstrate the flexibility of visual word recognition?

A

There are 7,105 languages spoken in the world. As shown by Cook & Bassetti (2005), writing systems can be categorised as either meaning- or sound-based, then subdivided by whether they use morphemes, syllables or phonemes, what script they use, etc. This wide variety of written languages demonstrates the flexibility of our brain, as we are capable of learning any of them!

18
Q

What experimental tasks can be used to investigate word recognition?

A
  • Lexical decision task (LDT) [very powerful when combined with masked priming]
  • Word naming
  • Perceptual identification
  • Priming
19
Q

What can be measured in experimental investigation of word recognition?

A
  • Response times (RTs) and accuracy
  • Eye movements
  • Brain imaging:
    • Event-related potentials (ERP)
    • functional Magnetic Resonance Imaging (fMRI)
20
Q

Define orthographic input coding.

A

How we recognise letters and words.

21
Q

How do we distinguish between anagrams such as leap, pale, peal and plea?

A

Through position specific coding - each letter is assigned a position number in the word.
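The idea of position specific coding can be sketched in a few lines of Python (an illustrative toy, not a model from the literature):

```python
def position_code(word):
    """Position specific code: each letter is paired with its slot number."""
    return {(letter, pos) for pos, letter in enumerate(word, start=1)}

# Anagrams share letters but no letter-position pairs, so the codes differ.
print(position_code("leap") == position_code("pale"))  # False
```

Because every letter carries its absolute position, leap, pale, peal and plea all get completely distinct codes.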

22
Q

What are the three main computational models of visual word recognition?

A
  • Interactive Activation (IA) model (McClelland & Rumelhart, 1981; Rumelhart & McClelland, 1982)
  • DRC model (Coltheart et al., 2001)
  • MROM (Grainger & Jacobs, 1996)
23
Q

Describe the IA model.

A

Localist connectionist neural network model.

24
Q

What representations are involved in the IA model?

A

(Visual input) –> letter features –> letters –> words

25
Q

What do representations do in the IA model?

A

Each representation has the ability to inhibit or excite the next, and words can excite similar words (orthographic neighbours).
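A simplified sketch of one IA-style activation update, loosely following the McClelland & Rumelhart update rule (the parameter values here are arbitrary, chosen only for illustration):

```python
def update_activation(a, net_input, decay=0.1, rest=0.0, a_min=-0.2, a_max=1.0):
    """One update step: excitatory input pushes activation toward the maximum,
    inhibitory input pushes it toward the minimum, and decay pulls it back
    toward the resting level."""
    if net_input > 0:
        effect = net_input * (a_max - a)
    else:
        effect = net_input * (a - a_min)
    return a + effect - decay * (a - rest)

a = -0.05  # resting level; less negative for high-frequency words
for _ in range(5):
    a = update_activation(a, net_input=0.2)  # constant excitatory input
print(round(a, 3))  # 0.546
```

With sustained excitatory input the node's activation climbs toward the maximum; inhibitory input (negative net input) drives it down instead, which is how lateral inhibition between word nodes plays out.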

26
Q

What does the resting level activation of word nodes in the IA model reflect?

A

Word frequency.

27
Q

What is the IA model very good at?

A

Predicting speed of word recognition.

28
Q

How did Coltheart et al. (1997) define orthographic neighbours?

A

The number of words that can be created by changing one letter of a target word, for example the 29 orthographic neighbours of ‘mine’ include pine, line, mind, mint etc.
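Coltheart's N can be computed mechanically; a minimal sketch using a toy lexicon (a real count would need a full word list):

```python
import string

def orthographic_neighbours(word, lexicon):
    """Return lexicon entries differing from `word` by exactly one letter."""
    neighbours = set()
    for i in range(len(word)):
        for letter in string.ascii_lowercase:
            candidate = word[:i] + letter + word[i + 1:]
            if candidate != word and candidate in lexicon:
                neighbours.add(candidate)
    return neighbours

lexicon = {"mine", "pine", "line", "mind", "mint", "mane", "nine", "dine"}
print(sorted(orthographic_neighbours("mine", lexicon)))
# ['dine', 'line', 'mane', 'mind', 'mint', 'nine', 'pine']
```

Neighbourhood size (N) is then simply the length of this set for a given word and lexicon.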

29
Q

What effect does neighbourhood size/density (number of neighbours) have on word recognition?

A

The results are mixed: some studies suggest facilitation (Andrews, 1989, 1992), others a null effect (Coltheart et al., 1997), and others inhibition (Carreiras et al., 1997).

30
Q

How does neighbourhood frequency (whether a word's neighbours are of higher or lower frequency than the word itself) affect word recognition?

A

Results suggest that higher-frequency neighbours produce inhibition (Carreiras et al., 1997; Grainger, 1989; Grainger et al., 1990).

31
Q

How does large neighbourhood size affect performance in naming tasks?

A

Results suggest facilitation (Andrews, 1989, 1992).

32
Q

What does the IA model predict will be the effect of orthographic neighbours?

A

An inhibitory effect, due to lateral inhibition (competition) at the word level. Findings, however, have been mixed: Andrews (1992) suggests facilitation and Carreiras et al. (1997) suggests inhibition.

33
Q

How can the IA model account for mixed findings regarding the effect of orthographic neighbours?

A

By being extended: Grainger and Jacobs (1996) devised the Multiple Read-Out Model (MROM), a more sophisticated IA model with additional lexical decision criteria.

34
Q

What three noisy response criteria does the MROM have?

A
  • Single word node activity
  • Summed activity of all active words
  • Time threshold for lexical decision
35
Q

What do the three noisy response criteria of the MROM enable it to do?

A

Account for task and context dependent factors in visual word recognition - this explains mixed findings regarding the effect of orthographic neighbours.

36
Q

What are the problems for position specific encoding?

A
  1. Transposition neighbours (SLAT-SALT)
    • Inhibitory effects in LDT and naming (Andrews, 1996)
  2. Transposition priming (reversal of two letters)
    • sevrice primes SERVICE (Schoonbaert & Grainger, 2003)
    • Identity primes (what – WHAT) and transposition primes (waht – WHAT) produce the same amount of priming (Forster et al., 1987). This suggests that exact letter position doesn't matter.
  3. Relative position priming
    • silr primes SAILOR (Peressotti & Grainger, 1999), which it shouldn't according to position specific encoding, as the letters are in the wrong absolute positions (though in the correct relative order).
37
Q

What input coding systems are there, other than position specific?

A

Local context coding (Wickelgraphs and open bigram encoding) and spatial coding. Both are more flexible than position specific coding.

38
Q

Describe the idea of wickelgraph coding.

A

Seidenberg & McClelland (1989): each word is represented as letter triplets in the correct order, e.g. LEAP contains #LE, LEA, EAP, AP# (# marks a word boundary). A word is recognised by matching these triplets.
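Generating a word's Wickelgraphs is a simple windowing operation; a sketch:

```python
def wickelgraphs(word):
    """Letter triplets over the word, with '#' marking word boundaries,
    as in Seidenberg & McClelland's (1989) input coding."""
    padded = "#" + word + "#"
    return [padded[i:i + 3] for i in range(len(padded) - 2)]

print(wickelgraphs("LEAP"))  # ['#LE', 'LEA', 'EAP', 'AP#']
```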

39
Q

Describe the idea of open bigram coding.

A

E.g. Grainger & van Heuven (2003): similar to Wickelgraphs, but uses ordered pairs of letters, and the letters don't have to be immediately adjacent. For example, LEAP contains LE, LA, LP, EA, EP and AP. If all of these are active, this suggests the word is LEAP.
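Open bigram generation can be sketched like this (the optional `max_gap` limits how many letters may intervene within a pair, as some versions of the scheme do):

```python
from itertools import combinations

def open_bigrams(word, max_gap=None):
    """Ordered letter pairs preserving relative order; optionally limit
    the number of intervening letters to max_gap."""
    pairs = []
    for i, j in combinations(range(len(word)), 2):
        if max_gap is None or j - i - 1 <= max_gap:
            pairs.append(word[i] + word[j])
    return pairs

print(open_bigrams("LEAP"))  # ['LE', 'LA', 'LP', 'EA', 'EP', 'AP']
```

Because only relative order matters, a transposed prime like waht still shares most of its bigrams with WHAT.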

40
Q

Describe spatial coding (Davis, 2011).

A

Each letter carries an activation value that depends on its position in the word, e.g. L4E3A2P1 / P4A3L2E1 / P4L3E2A1.
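A toy rendering of the idea (the actual model uses graded, noisy position signals, and words with repeated letters would need letter tokens rather than a plain dict):

```python
def spatial_code(word):
    """Toy spatial code: each letter carries a weight reflecting its
    position, decreasing from the start of the word (L4 E3 A2 P1 for LEAP)."""
    n = len(word)
    return {letter: n - i for i, letter in enumerate(word)}

print(spatial_code("LEAP"))  # {'L': 4, 'E': 3, 'A': 2, 'P': 1}
```

Anagrams now share all their letters and differ only in the weights, so similarity between them is graded rather than all-or-nothing.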

41
Q

What can the open bigram model (Grainger & van Heuven, 2003) account for?

A

TL and relative-position priming effects due to high overlap in open bigrams between the prime and target.

42
Q

Describe the pathway of the open bigram model (Grainger & van Heuven, 2003).

A

Visual stimulus –> alphabetic array –> relative position map –> words
The relative position map contains the open bigrams generated from the word.

43
Q

Why is there a limit for the number of intervening letters for open bigrams?

A

Because otherwise very long words would generate an unmanageably large number of bigrams.

44
Q

How can the relative flexibility of different types of input coding be demonstrated?

A

Through comparing the amount of overlap between similar words, e.g. TRIAL and TRAIL, according to the different models:

  • position specific = 0.6 (less flexible)
  • open bigram = 0.89
  • spatial coding = 0.97
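These overlap values can be approximated in code; here is a Dice-style overlap over open-bigram sets (the published figures come from each model's own matching scheme, so the exact numbers differ slightly):

```python
from itertools import combinations

def open_bigrams(word):
    """Set of ordered letter pairs preserving relative order."""
    return {word[i] + word[j] for i, j in combinations(range(len(word)), 2)}

def overlap(w1, w2):
    """Dice overlap between the open-bigram sets of two words."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return 2 * len(b1 & b2) / (len(b1) + len(b2))

print(round(overlap("TRIAL", "TRAIL"), 2))  # 0.9
```

TRIAL and TRAIL share 9 of their 10 open bigrams each, so this simple measure lands close to the open-bigram figure above.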
45
Q

What is the most important factor in determining the speed at which words are recognised?

A

Word frequency - common words are recognised faster than less common words (Forster & Chambers, 1973).

46
Q

What are corpora and how are they created?

A

Estimates of word frequencies, created by counting how often each word occurs in large collections of text (e.g. books or film subtitles).
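Counting word-form frequencies is mechanically straightforward; a toy sketch of how a corpus count works:

```python
from collections import Counter
import re

def word_frequencies(text):
    """Count word-form occurrences in a text sample (a toy corpus)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

freqs = word_frequencies("The cat sat on the mat and the dog sat too.")
print(freqs["the"], freqs["sat"])  # 3 2
```

Real corpora differ mainly in scale and in the source material (e.g. books vs. subtitles), not in the basic counting procedure.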

47
Q

Give examples of some of the main corpora used.

A
  • American English (Kucera & Francis, 1967)
  • CELEX English, Dutch, German (Baayen et al., 1995)
  • British National Corpus
  • SUBTLEX-US: subtitle frequencies (Brysbaert & New, 2009)
  • SUBTLEX-UK: BBC subtitle frequencies (van Heuven et al., 2013)
48
Q

What is the best corpus for lexical decision tasks?

A

According to Brysbaert & New (2009), subtitles are better than any other frequency norm for LDT.

49
Q

Which is better in corpora: word form or lemma frequencies?

A

Word form frequencies (e.g. the frequency of 'play') are better predictors than lemma frequencies (e.g. the summed frequency of plays, played, etc.).

50
Q

What is a possible explanation for why word frequency is so important for word recognition?

A

Age of acquisition, as we acquire more common words earlier.

51
Q

What are the two main questions regarding age of acquisition?

A
  1. Does it influence word recognition?
  2. How do we measure AoA?

52
Q

What did Morrison and Ellis (1995) state about age of acquisition?

A

That frequency effects are AoA effects - after matching for AoA, they found no difference between high and low frequency words.

53
Q

What is a problem with Morrison and Ellis (1995)’s research?

A

As stated by Zevin and Seidenberg (2002), Morrison and Ellis's (1995) results depend on the quality of the frequency norms: they used the Kucera and Francis (1967) corpus, which contains only 1 million words and has the lowest percentage of variance in lexical decision times explained by frequency.

54
Q

What did Brysbaert & Cortese (2011) state about age of acquisition?

A

AoA effects only account for about 7% of extra variance in lexical decision times, implying that they are not very important and do not account for frequency effects.

55
Q

What did New et al. (2006) state about the word length effect?

A

That mixed results (null and inhibitory effects) have been found in lexical decision, perceptual identification and naming tasks, in both Dutch and English, across various word lengths, along with an inhibitory effect on eye movements. The results may have been mixed because of differences in the range of word lengths used.

56
Q

What did Balota et al. (2007) find?

A

Used the 40,481 words of the English Lexicon Project and found a non-linear effect of length on average lexical decision times: shortest for 5-9 letters, longest for 13 letters, and mid-range for 3 letters.

57
Q

What are graphemes?

A

Groups of letters that represent phonemes in spoken words - they form the bridge between orthography and phonology. E.g. BREAD has four graphemes, B, R, EA, D, corresponding to the four phonemes /b/ /r/ /e/ /d/.

58
Q

What did Rey et al. (2000) find about graphemes?

A

Letter detection is slower when the target letter is part of a multi-letter grapheme (e.g. A in HOARD, where A belongs to the grapheme OA) than when it forms a grapheme on its own (e.g. A in BRASH). This holds for both high and low frequency words (RTs are slightly lower for HF words in the multi-letter case, and lower for LF words in the single-letter case).

59
Q

What did Frost (1989) state about phonology?

A

Phonology is automatically activated in visual word recognition.

60
Q

How does phonology play a role in lexical ambiguity?

A
  • Pseudohomophone effect (Rubenstein et al., 1971) - pseudohomophones (non-words that sound like real words, e.g. BRANE) take longer to reject in LDTs, and participants make more errors than with non-word controls.
  • Homophone effect (Rubenstein et al., 1971) - "yes" responses to heterographic homophones (e.g. WEIGHT - WAIT) are slower than to non-homophonic control words.
61
Q

What have priming studies shown about phonology?

A

That it is activated rapidly, e.g. the pseudohomophone prime koat facilitates COAT more than the control prime poat does.

62
Q

What did Seidenberg et al. (1984) state about spelling-to-sound consistency/regularity effects?

A

Graphemes can be pronounced differently in different words, e.g. INT is pronounced differently in HINT and PINT, and YACHT is an irregular word. To account for such spelling-to-sound consistency effects in visual word recognition, models need to include phonological representations.

63
Q

What is the DRC model?

A

Dual Route Cascaded Model of visual word recognition and reading aloud (Coltheart et al., 2001).

64
Q

Describe the DRC model.

A

Two routes from print to speech:

  • Lexical (orthographic input lexicon to phonological output lexicon, linked to the semantic system)
  • Sublexical (applies grapheme-to-phoneme conversion rules to the output of orthographic analysis)
65
Q

What are the limitations of both the IA and the DRC models?

A
  • Position specific letter coding
  • Unable to account for phonological effects in masked priming - the models predict that phonological activation should be slow, but it is in fact fast.
  • It is questionable whether readers really acquire explicit grapheme-to-phoneme conversion rules when learning to read; more plausibly, orthography simply becomes connected to phonology at the level of letters and letter groups.
66
Q

What is the triangle model?

A

Seidenberg & McClelland (1989)'s Parallel Distributed Connectionist model of reading.

67
Q

Describe the triangle model.

A

Single route from orthography to phonology.
Three layers: input (orthography), output (phonology) and hidden units connecting the two.
The neural network learns to associate phonological output with orthographic input through back-propagation: the error signal is propagated back through the network and the connection weights are adjusted.

68
Q

What problems are there with the triangle model?

A
  • Letter position coding (uses Wickelgraphs, for which there is little evidence)
  • Poor non-word reading
  • The learning mechanism (back-propagation) is questionable.