lecture 2 - phonological development Flashcards

1
Q

why is development exciting?

A
  • Theory, technology, computational modelling,
    machine learning…
  • Not just about studying children
2
Q

Language (development)

A

Why should you care?
– Fascinating
* Complexity of human development
– Important
* Practitioners (often difficulties in disorders)
* Impact beyond psychology (e.g., machine learning)
– Many of us are likely to be parents…

3
Q

Language - linguistics

A

semantics - the study of meaning

syntax - the study of word order

phonology - the study of how sounds are used within a language

phonetics - the study of raw sounds

pragmatics - the study of language use

4
Q

linguistics - morphology

A

the study of words and word formation

inflectional morphology - concerned with changes to a word that do not alter its underlying meaning (e.g., walk → walked)

derivational morphology - concerned with changes to a word that alter its underlying meaning (e.g., happy → unhappy)

5
Q

Why should language acquisition be hard?

A
  • Infants learn language from listening to
    people speak
    Problems:
  • Speech is continuous, not segmented into
    words → SEGMENTATION problem
  • Different auditory signals should be perceived
    as the same sound → LACK OF
    INVARIANCE
  • Each speaker sounds a bit different →
    SPEAKER VARIABILITY
6
Q

Speech is sound

A

Sound = waves of increases and decreases in air pressure

ripples in the air

diagrams in the notes

7
Q

Speech is a special kind of sound

A
  • Not just one wave - speech composed
    of waves of many different frequencies
  • Each wave has its own energy level - corresponds to sound loudness
  • Ear decomposes the sound wave into
    frequencies and energies
    – Spectrogram

speech spectrogram is in notes
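The decomposition the ear performs - splitting a sound wave into its component frequencies and their energies over time - can be illustrated with a minimal Python sketch (not from the lecture; the two-tone toy signal stands in for real speech, and the sampling rate and window size are arbitrary illustrative choices):

# A minimal sketch of how a spectrogram decomposes a sound wave into
# frequencies and energies over time. The two-tone signal is a stand-in
# for real speech; all parameter values are illustrative assumptions.
import numpy as np
from scipy.signal import spectrogram

fs = 16000                       # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)    # one second of "sound"

# Toy signal: a 300 Hz wave for the first half second, then 1200 Hz,
# mimicking frequency content that changes over time.
wave = np.where(t < 0.5,
                np.sin(2 * np.pi * 300 * t),
                np.sin(2 * np.pi * 1200 * t))

# Decompose into frequencies (rows) x time windows (columns);
# Sxx holds the energy at each frequency and moment.
freqs, times, Sxx = spectrogram(wave, fs=fs, nperseg=2048)

# Report the dominant frequency in each window, as a spectrogram displays.
for ti, col in zip(times, Sxx.T):
    print(f"t = {ti:.2f} s  peak frequency = {freqs[col.argmax()]:.0f} Hz")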

8
Q

How do adult listeners
make sense of speech?

A
  • Despite the fact that speech is continuous and
    variable, listeners perceive it as composed of
    discrete sounds.
  • Each language has a fixed inventory of these
    discrete sounds, called phonemes.
  • We can demonstrate that listeners perceive
    phonemes (despite the variability) using
    phoneme identification tasks.
9
Q

Phoneme identification

A

Sounds differ in Voice Onset Time (VOT)
Long VOT → /k/
Short VOT → /g/

Clear – 95 ms VOT (long VOT → /k/)

Glear – 11 ms VOT (short VOT → /g/)

diagram in notes
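A minimal sketch of the identification step, assuming a single category boundary on the VOT continuum (the 40 ms boundary value is illustrative, not from the lecture):

# Toy phoneme identification from Voice Onset Time (VOT).
# The 40 ms category boundary is an assumed, illustrative value.
def identify(vot_ms: float, boundary_ms: float = 40.0) -> str:
    """Long VOT -> voiceless /k/; short VOT -> voiced /g/."""
    return "/k/" if vot_ms >= boundary_ms else "/g/"

print(identify(95))  # "Clear" stimulus, 95 ms VOT -> /k/
print(identify(11))  # "Glear" stimulus, 11 ms VOT -> /g/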

10
Q

word bias - Ganong (1980)

A

phoneme identification helps adults make sense of speech

GISS-KISS
GIVE-KIV

An ambiguous sound midway between /g/ and /k/ is heard as /k/ before "iss" (making the real word KISS) but as /g/ before "ive" (making the real word GIVE).

diagram in notes

adults are good at taking context into account
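A toy sketch of this lexical bias, assuming an ambiguous band around the /g/-/k/ VOT boundary and a mini lexicon (the 35-45 ms band and the word list are illustrative assumptions, not values from the study):

# Ganong-style word bias: an ambiguous initial sound is resolved toward
# whichever phoneme makes a real word.
LEXICON = {"kiss", "give"}

def identify_in_context(vot_ms: float, rest_of_word: str) -> str:
    if vot_ms < 35:          # clearly short VOT -> voiced
        return "g"
    if vot_ms > 45:          # clearly long VOT -> voiceless
        return "k"
    # Ambiguous region: let lexical knowledge decide.
    for candidate in ("g", "k"):
        if candidate + rest_of_word in LEXICON:
            return candidate
    return "g"               # arbitrary default if neither forms a word

print(identify_in_context(40, "iss"))  # -> "k" (KISS is a word, GISS is not)
print(identify_in_context(40, "ive"))  # -> "g" (GIVE is a word, KIVE is not)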

11
Q

phoneme restoration

A

(Warren, 1970)
“It was found that the *eel was on the…
axle / table / fishing rod / orange.”
Listeners perceive wheel, meal, reel, or peel, depending on the sentence-final word.
They cannot say which sound was missing, nor exactly when the cough occurred.

12
Q

how do infants learn phonemes

A
  • Before we can answer this question, we
    need to answer another one…
  • How can we know what infants know
    about phonemes?
  • After all, we cannot ask them to tell us
    what they are hearing…

we use high-amplitude sucking - diagram in notes
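The logic of the method can be sketched in a few lines of Python: infants habituate (sucking rate falls) to a repeated stimulus, and a rebound in rate after the stimulus changes is taken as evidence of discrimination. The numbers and the 20% rebound criterion below are illustrative assumptions, not real data:

import statistics

def discriminated(pre_switch, post_switch, criterion=1.2):
    """True if mean sucking rate rises by the criterion after the switch."""
    return statistics.mean(post_switch) >= criterion * statistics.mean(pre_switch)

habituated = [22, 20, 19, 18, 18]  # sucks/minute before the switch
after_new = [30, 28, 27]           # sucks/minute after a NEW stimulus
after_same = [18, 17, 18]          # sucks/minute if nothing changes

print(discriminated(habituated, after_new))   # True  -> change detected
print(discriminated(habituated, after_same))  # False -> no discrimination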
13
Q

newborn language discrimination

A

Mehler et al. (1988)
high-amplitude sucking

French newborns changed their rate of sucking when the speech switched from French to Russian, but not when it switched from English to Italian, since they were familiar with neither of those languages.

American newborns changed their rate of sucking when the speech switched from English to Italian, but not when it switched from French to Russian, since they were familiar with neither of those languages.

14
Q

Prenatal language perception?

A
  • 4 months before birth
  • Low frequencies
  • Rhythms and prosody
15
Q

phoneme perception

A

Korean-speaking adults vs. Korean newborns

adults can tell the difference between /da/ and /ta/, but not between /ra/ and /la/

newborns can tell the difference between both /ra/–/la/ and /da/–/ta/

16
Q

conditioned head turn

A

Werker and Lalonde (1988)
discrimination of alveolar /da/ vs. retroflex /Da/

English-learning infants can distinguish the contrast at 7 months, but not at 12 months or as adults

Hindi-learning infants can distinguish it at 7 months, at 12 months, and as adults, because it is a meaningful contrast in their language

17
Q

perception of phonemes

A

newborns/young infants - able to discriminate non-native phonemes

with age, non-native phonetic discrimination declines, while discrimination of native phonetic contrasts improves

adults - non-native phonetic discrimination = very difficult

study graph in notes - Kuhl et al. (2006)

18
Q

tuning in to language

A

tuning in to the world around us - perceptual narrowing

perception narrows between 6 and 9 months, continuing towards adult-like tuning

Pascalis et al. (2002)

diagram in notes

19
Q

neural plasticity

A

Humans are born particularly early in development
Allows for tuning and shaping of neural circuitry in interaction
with the environment

20
Q

neural commitment

A

(Kuhl, 2004; Kuhl et al., 2008)
* Initial native-language learning
– Neural commitment
* Across development the brain becomes specialized and committed to its native language

graph in notes

21
Q

summary

A
  • Speech perception is hard because of
    – Lack of segmentation
    – Lack of invariance
    – Speaker variability
  • Infants gradually tune in to the speech
    sounds of the language(s) being spoken
    around them.
  • Methods for studying language development
    need to be age-appropriate
22
Q

how to describe speech sounds

A
  • Acoustics is the study of the physical properties of sounds. Acoustic information about sounds can be depicted in a number of ways, most commonly as a sound spectrogram
23
Q

sound spectrogram

A

shows the amount of energy present in a sound when frequency is plotted against time. The peaks of energy at particular frequencies are called formants, an important characteristic of speech sounds. All vowels and some consonants have formants, but the pattern of formants is particularly important in distinguishing vowels.

24
Q

we can describe the sounds of speech at two levels

A

phonetics = the acoustic detail of speech sounds (their physical properties) and how they are articulated; the study of phones

phonology = the sound categories each language uses to divide up the space of possible sounds; the study of phonemes

25
Q

aspirated and unaspirated sounds

A

In English, whether a sound is aspirated or unaspirated makes no difference to the meaning of the word, only to the way it sounds; but in other languages, such as Thai, aspiration does change the meaning.

26
Q

phoneme

A

a basic unit of sound in a particular language

identified using / /

27
Q

Allophones

A

different phones that are understood as the same phoneme in a language

identified using [ ] square brackets

28
Q

There are three types of phonetics depending on what is emphasised:

A

articulatory (which emphasises how sounds are made), auditory or perceptual (which emphasises how sounds are perceived), and acoustic (which emphasises the sound waveform and its physical properties).

29
Q

minimal pairs

A

Two words in a language that differ by just one sound

there are also minimal sets: groups of words all of which differ by only one phoneme in the same position. Substituting one phoneme for another by definition leads to a change in meaning, whereas changing one phone for another (e.g., aspirated for unaspirated [p]) need not lead to a change in meaning.
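Finding minimal pairs is mechanical enough to sketch in code: two words form a minimal pair when their phoneme strings differ in exactly one position. The toy transcriptions below are illustrative, not a real lexicon:

from itertools import combinations

# Toy phonemic transcriptions (illustrative assumptions).
words = {
    "pin": ["p", "ɪ", "n"],
    "bin": ["b", "ɪ", "n"],
    "pit": ["p", "ɪ", "t"],
    "dog": ["d", "ɒ", "g"],
}

def is_minimal_pair(a, b):
    """Same length, differing in exactly one phoneme position."""
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

for (w1, p1), (w2, p2) in combinations(words.items(), 2):
    if is_minimal_pair(p1, p2):
        print(w1, "/", w2)  # pin / bin, pin / pit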

30
Q

The International Phonetic Alphabet (or IPA for short)

A

is the standard method of representing speech sounds.

One advantage of the IPA is that it makes it possible to represent these different ways of pronouncing the same thing.

31
Q

dialect

A
  • Different systems of pronunciation within a language are known as dialects; they mostly differ in their vowel sounds.
32
Q

producing speech

A
  • We produce speech by moving the parts of the vocal tract, including the lips, teeth, tongue, mouth, and voice box or larynx. The basic source of sound is the larynx, which modifies the flow of air from the lungs and produces a range of higher frequencies called harmonics. Different sounds are then made by changing the shape of the vocal tract. There are two major types of sounds:
    • Vowels are made by modifying the shape of the vocal tract, which remains more or less open while the sound is being produced. The position of the tongue modifies the range of harmonics produced by the larynx.
    • Consonants are made by closing or restricting some part of the vocal tract at the beginning or end of a vowel. Most consonants cannot be produced without some sort of vowel.
      This description suggests that one way to examine the relation between sounds is to look at their place of articulation, that is, the place where the vocal tract is closed or restricted. The contrasting features needed to describe sounds are known as distinctive features.
33
Q

consonants

A

Consonants are produced by restricting or closing part of the vocal tract as air flows through. They are classified based on place of articulation, voicing, and manner of articulation:

  1. Place of Articulation (Where the sound is made)
    Bilabial (/p/, /b/): Lips closed together.
    Alveolar (/t/, /d/): Tongue on alveolar ridge behind upper teeth.
    Dental (/θ/, /ð/): Tongue tip behind upper front teeth.
    Labiodental (/f/, /v/): Lower lip against upper teeth.
    Postalveolar (/ʃ/, /ʒ/): Tongue just behind the alveolar ridge.
    Palatal (/j/, the "y" sound): Tongue raised toward the middle of the palate.
    Velar (/k/, /g/): Tongue at the soft palate (velum).
    Glottal (/h/): Produced at the vocal cords.
  2. Voicing (Vibration of vocal cords)
    Voiced: Vocal cords vibrate (e.g., /b/, /d/, /v/).
    Voiceless: No vibration (e.g., /p/, /t/, /f/).
    Voice Onset Time (VOT): The time between releasing a sound and the start of vocal cord vibration.
  3. Manner of Articulation (How the sound is made)
    Stops (Plosives): Complete airflow blockage, then release (e.g., /p/, /b/, /t/, /d/).
    Fricatives: Airflow constriction, producing a hissing sound (e.g., /f/, /s/, /ʃ/).
    Affricates: Combination of stop + fricative (e.g., /tʃ/, /dʒ/).
    Liquids: Air flows around the tongue (e.g., /l/, /r/).
    Nasals: Air exits through the nose (e.g., /m/, /n/).
    Glides (Semi-vowels): Transition sounds between vowels (e.g., /w/, /j/).
    Some languages produce unique consonants (e.g., clicks) not found in European languages
34
Q

vowels

A
  • They are made with a relatively free flow of air. The nature of the vowel is determined by the way in which the shape of the tongue modifies the airflow. Vowels can be classified by the position (raised, medium, or lowered) of the front, central, or rear portion of the tongue. E.g., the /i/ sound in "meat" is a high front vowel because the air flows through the mouth with the front part of the tongue in a raised (high) position.
    • Two vowel sounds can be combined to form a diphthong, e.g., the sounds in "my", "cow", "go" and "boy".
      Whereas the pronunciation of consonants is relatively constant across dialects, that of vowels can differ greatly.
35
Q

understanding speech

A

Speech perception refers to how we identify or perceive the sounds of language.
Spoken word recognition is a higher-level process of recognizing words from these sounds.
The distinction between the two may be artificial, as recognizing words might help identify individual sounds.
We may not need to hear all the sounds of a word to recognize it.
The role of word-level knowledge in sound perception is a significant and debated topic

36
Q

recognising speech

A

Lexical access involves understanding the representations and units used in our mental dictionary.
Prelexical code represents sounds before identifying a word, while postlexical code contains information available only after word recognition.
A key task in speech recognition is determining the nature of the prelexical code.
Important questions include whether phonemes are explicitly represented and the role of syllables in speech perception

37
Q

why is speech perception difficult

A

Key difference: Spoken words are fleeting, while written words can be analyzed at length.
One-time hearing: You only get one chance to hear a spoken word, whereas you can recheck a written word.
Segmentation challenge: Words in speech blend together, making it harder to separate sounds than letters in written words.
Automatic process: Speech recognition happens effortlessly and is difficult to ignore once heard.
Speech perception speed:
People struggle to distinguish non-speech sounds presented faster than 1.5 sounds per second.
Yet, we can process 20+ phonemes per second in speech.
Spoken words can be identified in context within 200 ms of their onset.
Speech vs. background noise:
Speech is easier to recognize against noise than non-speech sounds.
The more possible word choices, the louder the speech must be for equal recognition.
Context matters: Words in meaningful contexts are recognized faster than isolated words.
Advantage of speech in context: Context enhances recognition, but the exact mechanism is still a question.

38
Q

Acoustic signals and phonetic segments - how do we segment speech?

A

Acoustic Variability & Speech Perception Challenges
Phonemes lack fixed acoustic properties: Their sounds vary based on context and speaking rate.
No “perfect template”: Phoneme identification is complex due to variability.
Analogy to letter recognition: Just as letters have multiple acceptable forms, phonemes vary but are mapped onto a single category.
The Segmentation & Invariance Problems
Invariance problem: The same phoneme sounds different depending on its context.
Segmentation problem: Speech sounds blend together, making it difficult to separate individual phonemes and words.
Assimilation & co-articulation:
Phonemes adopt features of neighboring sounds (e.g., nasal quality in “pin” vs. “sing”).
Co-articulation benefits:
Speeds up speech production for speakers.
Helps listeners gather information about multiple phonemes at once (parallel transmission).
Speech Segmentation Strategies
Speech sounds do not align neatly with phonetic segments (spectrogram analysis shows non-linear mapping).
Possible-word constraint: Listeners prefer segmenting speech in a way that forms actual words.
Language exposure shapes segmentation:
Metrical Segmentation Strategy (MSS): In English, strong syllables tend to be word-initial, and weak syllables often belong to function words (toy sketch after this list).
Stress-based segmentation: Listeners use stress patterns to identify word boundaries (e.g., mishearings occur when stress patterns are ambiguous).
Syllable-based segmentation: Used in languages like French, where syllable boundaries are clearer.
Bilingual segmentation:
Bilingual speakers do not mimic monolingual segmentation patterns.
Segmentation strategy depends on their dominant language:
English-dominant speakers use stress-based segmentation.
French-dominant speakers use syllabic segmentation but only in French.
Bilinguals adapt by discarding inefficient segmentation processes and using broader analytical strategies
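The Metrical Segmentation Strategy reduces to a simple rule that can be sketched as a toy: posit a word boundary before every strong syllable. The pre-marked stress labels below are an assumption (real listeners must infer stress from the signal):

# Toy Metrical Segmentation Strategy: boundary before each strong syllable.
# Syllables arrive pre-marked as (syllable, 'S'/'W') tuples; that marking
# is an illustrative assumption.
syllables = [("pret", "S"), ("ty", "W"), ("ba", "S"), ("by", "W")]

words, current = [], []
for syl, stress in syllables:
    if stress == "S" and current:  # strong syllable -> start a new word
        words.append("".join(current))
        current = []
    current.append(syl)
words.append("".join(current))

print(words)  # ['pretty', 'baby']

Note that the same rule mis-segments weak-initial words like "guitar", which is why MSS is a heuristic rather than a complete solution.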

39
Q

Categorical perception

A

Speech perception often involves categorical perception, where we classify phonemes into distinct categories, even though sounds vary. This was first demonstrated by Liberman et al. (1957) using a speech synthesizer to create a continuum of syllables that varied by place of articulation. Despite the continuum, participants grouped these syllables into distinct categories, like /b/, /d/, and /g/.

Another example is voice onset time (VOT), which distinguishes voiced and voiceless consonants (e.g., /b/ vs. /p/). Although VOT is on a continuum, we categorize sounds as either voiced or unvoiced. However, factors like repeated exposure can shift perception, a process called selective adaptation, where exposure to a sound like “ba” can make listeners less sensitive to the voicing feature, shifting their perception toward /p/.

Contextual factors, such as speech rate, can also affect the boundaries between categories. For example, faster speech rates can cause a short VOT sound typically perceived as /b/ to be heard as /p/. This adjustment happens even in infants, who can interpret speech based on its rate.

While early research suggested listeners couldn’t distinguish small variations within phoneme categories, more recent studies show that participants are sensitive to these differences. However, there is debate about whether speech perception is truly categorical. Some argue that speech perception might be better described as continuous, with the perception of distinct categories arising from bias rather than early sensory processing. Despite this, categorical perception remains a widely accepted concept in psycholinguistics
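An idealized identification function shows what "categorical" means here: the percentage of, say, /p/ responses stays near 0% on one side of the boundary and near 100% on the other, jumping sharply in between. A sketch with an assumed 25 ms boundary and slope (both illustrative values, not from the studies above):

import math

def percent_p(vot_ms, boundary=25.0, slope=1.5):
    """Logistic identification curve: ~0% -> /b/, ~100% -> /p/."""
    return 100 / (1 + math.exp(-slope * (vot_ms - boundary)))

for vot in range(0, 60, 10):
    print(f"VOT {vot:2d} ms: {percent_p(vot):5.1f}% /p/ responses")
# Responses jump from ~0% to ~100% between 20 and 30 ms: a sharp
# category boundary rather than a gradual change.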

40
Q

What role does context play in identifying sounds?

A

This concerns how context affects speech perception, specifically whether language processing is a top-down or bottom-up process.

Categorical perception studies show that word context influences the boundary of phoneme categories. For example, a phoneme may be categorized differently depending on whether it appears in a word like “kiss” rather than a non-word like “giss” (Ganong, 1980).
Further research suggests lexical knowledge (knowledge of words) can alter how ambiguous sounds are perceived, while sentence context (meaning of the whole sentence) has postperceptual effects, influencing the interpretation after the sound is perceived.
The phoneme restoration effect, where participants hear missing phonemes restored by context, suggests that higher-level semantic and syntactic information aids in speech processing. This effect occurs even when participants are aware that the sound is missing (Warren & Warren, 1970).
However, it’s debated whether the restoration occurs at the phonological processing level or as a postperceptual process. Studies indicate that lexical context can lead to true perceptual restoration, while sentential context only affects later stages of processing.
Samuel’s studies (1981, 1996) suggest that lexical context affects phoneme restoration at a perceptual level, while sentence context influences postlexical processing.
Later research by Samuel (1997) combined phoneme restoration with selective adaptation, showing that restored phonemes can cause adaptation, suggesting they may be treated as real sounds.
Some argue that the lexical context merely biases responses rather than improving perceptibility, meaning that top-down context has limited effects on sound identification.
The general consensus is that top-down context has a smaller role in speech processing, with lexical context being more influential than sentential context

41
Q

On phonological development

A

Language development follows a clear progression, though it’s debated whether it involves discrete stages. Infants aren’t born silent; they make vegetative sounds from birth and begin cooing at 6 weeks, laughing at 16 weeks, and engaging in vocal play (making speech-like sounds) between 16 weeks and 6 months. Vowels emerge before consonants. Around 6-9 months, babbling starts, with repeated syllables. By 9 months, infants recognize patterns in sounds and situations. At 10-11 months, they produce their first words, often in single-word utterances. Around 18 months, there is a vocabulary explosion, and two-word sentences emerge. Children may learn 40 new words a week during this period. Before producing grammatically correct speech, children use telegraphic speech, which omits some grammatical elements. As they grow, their sentences become more complex, with continued grammatical development throughout childhood. Even teenagers still learn new words, estimated at 10 a day.

Studying language development can be challenging, with common techniques like the sucking habituation paradigm, where infants’ sucking rate is measured in response to novel stimuli. The preferential-looking technique observes where children look while hearing sentences, and the conditioned head turn technique involves teaching infants to turn their heads when stimuli change.

Cross-sectional studies, examining groups of children at certain ages, have limitations due to individual variation in linguistic development. Longitudinal studies of individual children have been influential, though they often focus on a small number of children, potentially underestimating developmental variation.

42
Q

How children develop language

A

Language development is influenced by genetics, the environment, and social interaction. The main issue is how much of language acquisition depends on innate, language-specific information versus general learning mechanisms. These learning mechanisms evolve as the child matures. Connectionist modeling highlights how learning systems adapt with experience. The balance of factors influencing phonological, syntactic, semantic, and pragmatic development is complex and hard to pinpoint.

43
Q

Do children learn any language in the womb?

A

Children do not start speaking at birth because they need exposure to language and the development of other cognitive and sensory processes. However, language learning begins before birth, as sounds from the outside world, including speech, penetrate the womb. Though the amniotic fluid muffles higher frequencies, fetuses can still detect and learn from the speech they hear.

Studies show that newborns recognize and prefer stories they heard in the womb, even when read by someone other than their mother, indicating they learn general language characteristics rather than just specific voices. Experiments monitoring fetal heart rates confirm that babies respond differently to familiar versus unfamiliar stories.

Newborns can also distinguish between languages based on prosody—features like rhythm, stress, and intonation—rather than individual speech sounds. Research using filtered speech confirms that babies recognize language changes even without access to higher-frequency sounds. Sensitivity to prosody is crucial, as it later helps infants segment and identify language sounds, aiding their overall language acquisition

44
Q

Phonological development
Early speech perception

A

Infants, even before they start speaking, have advanced speech recognition abilities. Research shows that from birth, they can distinguish speech sounds from non-speech sounds and make fine phonetic distinctions. Studies suggest that these perceptual abilities might be innate, as infants categorize sounds similarly to adults.

Cross-linguistic studies reveal that infants initially recognize phonetic contrasts from various languages, but this ability declines if the contrasts are not present in their native language. At six months, English-learning babies can distinguish Hindi sounds, but this ability fades by 10 to 12 months. Similarly, African Kikuyu infants can distinguish between [p] and [b], but lose this ability if the contrast is not needed in their language.

Beyond phonetic perception, infants show sensitivity to speech rhythm, preferring their mother’s voice and distinguishing between languages based on rhythm. By eight months, they can recognize syntactic boundaries and segment speech into words using cues like pauses, prosody, and distributional information. Studies show that infants rely on statistical learning to detect sound patterns and segment speech.
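The statistical-learning idea can be made concrete: compute the transitional probability (TP) between adjacent syllables and posit word boundaries where TP is low, since syllables within a word predict each other better than syllables spanning a word boundary. A minimal sketch follows; the nonsense words echo Saffran-style stimuli, and the stream and the 0.75 threshold are illustrative assumptions:

from collections import Counter

# Toy syllable stream built from four nonsense "words" (Saffran-style).
text = ("bidaku padoti golabu tupiro padoti tupiro "
        "bidaku golabu golabu bidaku tupiro padoti")
stream = text.replace(" ", "")
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

def tp(a, b):
    """Transitional probability P(b | a)."""
    return pair_counts[(a, b)] / first_counts[a]

# Segment: within-word TPs are high (1.0 here), between-word TPs are low.
words, current = [], [syllables[0]]
for a, b in zip(syllables, syllables[1:]):
    if tp(a, b) < 0.75:      # low TP -> posit a word boundary
        words.append("".join(current))
        current = []
    current.append(b)
words.append("".join(current))

print(sorted(set(words)))  # ['bidaku', 'golabu', 'padoti', 'tupiro']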

Bootstrapping plays a key role in language acquisition, allowing infants to build on existing knowledge to acquire new words. PRIMIR, a model of word learning, emphasizes this process. By 17 months, children focus on phonological distinctions that help them learn new words efficiently.

Though some speech perception skills decline temporarily, young children rapidly develop word recognition abilities. By 18 months, they can identify words from just the first 300 milliseconds of speech. Ultimately, early perceptual abilities, experience, and cognitive development interact to shape language learning