Hearing Sounds, Speech And Music Flashcards
Sound waves can vary in terms of
Frequency and amplitude
Sound is…
Vibration of air. Travels into ears and you detect it. Changes in air pressure.
Frequency is
How fast the wave oscillates/repeats in a certain amount of time, i.e. the number of wave cycles per second (not the same as wavelength, which is the spatial length of one cycle)
Amplitude is:
The height of the wave; how much the air pressure is changed by the vibrations
Faster waves have
Higher frequency and higher pitch
Slower waves have
Lower pitch and lower frequency
Frequency is measured using
Hertz (Hz)
1 Hz indicates
1 cycle per second
Humans can hear from
20 - 20,000 Hz
Another way sound waves differ is in…
Amplitude (intensity)
Amplitude is:
The magnitude of displacement of a sound pressure wave (how large the peak or trough of the wave is)
Amplitude is perceived as
Loudness
Bigger waves have
Higher amplitudes and are loud
Smaller waves have
Lower amplitude and are quiet
Amplitude is measured in
Decibels (dB)
Decibels is on a
Logarithmic scale
A logarithmic scale is
A scale where a 10x increase in air pressure corresponds to a 20 dB increase
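The decibel arithmetic on this card can be sketched with the standard sound pressure level formula, SPL = 20 · log10(p / p_ref); the reference pressure of 20 µPa (the usual threshold of hearing) is an assumption not stated on the card:

```python
import math

def spl_db(pressure_pa, p_ref=20e-6):
    """Sound pressure level in dB relative to a reference pressure
    (default 20 micropascals, the approximate threshold of hearing)."""
    return 20 * math.log10(pressure_pa / p_ref)

# A 10x increase in pressure adds exactly 20 dB:
quiet = spl_db(0.002)
loud = spl_db(0.02)   # 10x the pressure of `quiet`
print(round(loud - quiet))  # 20
```

This is why the scale is called logarithmic: equal multiplications of pressure map to equal additions of decibels.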
The range of human hearing is
0-140 dB (140 dB is roughly the threshold of pain; prolonged exposure above about 85 dB can damage hearing)
Frequency and amplitude are more
Simplified characteristics of sound waves
Most sounds that we hear are
Complex sounds
Complex sounds are made up of
A spectrum of vibrations at different amplitudes and different frequencies
Complex sound example is
Human voice.
Timbre is
The quality of sounds conveyed by harmonics of different frequencies vibrating at the same time
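A rough sketch of the idea of complex sounds and timbre, assuming a simple model in which a complex tone is a sum of sine-wave harmonics at integer multiples of a fundamental frequency (the amplitude values below are made up for illustration):

```python
import math

def complex_tone(t, fundamental=220.0, harmonic_amps=(1.0, 0.5, 0.25, 0.125)):
    """Sample a complex tone at time t (seconds): a sum of harmonics at
    integer multiples of the fundamental frequency. The pattern of
    harmonic amplitudes is what gives a sound its timbre."""
    return sum(
        amp * math.sin(2 * math.pi * fundamental * (n + 1) * t)
        for n, amp in enumerate(harmonic_amps)
    )

# Two instruments playing the same note share a fundamental (same pitch)
# but differ in harmonic amplitudes, so they sound different.
sample = complex_tone(0.001)
```

Changing only `harmonic_amps` while keeping the fundamental fixed illustrates two sounds with the same pitch but different timbre.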
When sound enters the ear canal what does it hit first?
The eardrum (tympanic membrane)
What happens when sound enters the inner ear?
Oval window pushes onto the fluid in the vestibular canal of the cochlea
The cochlea looks like a
A giant snail shell that spirals inwards
In the cochlea, the movement of the fluid caused by the oval window is then…
Sent as a wave through the cochlea. This is the start of how we detect sound
Basilar membrane contains the
Organ of Corti
How do we detect different amplitudes in our auditory system?
The greater the air pressure the stronger the vibration in the basilar membrane
The further the stereocilia are bent then the
Stronger the signal sent to the brain (perceived as a louder noise)
How do we detect frequency in the cochlea?
Place coding is used (different areas of the cochlea are responsible for detecting different frequencies of sound)
The basilar membrane is thick and narrow at the
Base of the cochlear
At the apex (the middle/top of the spiral) the basilar membrane is
Thin and wide
Higher frequency vibrations are picked up better at the ……….. end of the basilar membrane
Base (oval window end)
Towards the apex, ………… frequencies are picked up better
Lower
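Place coding along the basilar membrane is often approximated by the Greenwood function, which maps position on the membrane to its best frequency; the human constants below (A = 165.4, a = 2.1, k = 0.88) are Greenwood's published fit, but this is an illustrative sketch, not part of the flashcards:

```python
def greenwood_frequency(x):
    """Best frequency (Hz) at fractional position x along the basilar
    membrane, where x = 0 is the apex and x = 1 is the base (the oval
    window end). Constants are Greenwood's fit for the human cochlea."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Low frequencies map near the apex, high frequencies near the base:
print(round(greenwood_frequency(0.0)))  # roughly 20 Hz at the apex
print(round(greenwood_frequency(1.0)))  # roughly 20,700 Hz at the base
```

Note how the endpoints of this mapping line up with the 20–20,000 Hz range of human hearing from the earlier card.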
As we age the first cilia that tend to get damaged are the ones closest to the
Base of the cochlea (the oval window end), as they are closest to the incoming high-frequency vibrations
As we age we become less able to hear what type of frequency sounds?
High
Once the cochlea detects the frequencies, it starts to send signals to the brain via the:
Auditory nerve fibres
Different auditory nerve fibres respond to different
Frequency and amplitude ranges, to different extents, depending on their position relative to the oval window (base) or apex of the cochlea
Brain picks up the quality of sound through the
Pattern of firing across the different auditory nerve fibres
How many auditory nerve fibres are there?
Approximately 1,400 in each ear
Top down processes in auditory processing can help us to become more
Sensitive to certain frequencies.
How do top down processes work in auditory processing?
By sending signals via the auditory nerve fibres to the outer hair cells, which then change how the inner hair cells respond to the frequencies
Top down processing in auditory also helps us to
Focus in on certain sounds in the environment and ‘tune out’ of others
First place auditory nerve fibres take info is the
Synapse in the brain stem
In the brain stem the areas responsible for auditory processing are the
Cochlear nuclei and the superior olive
Superior olive does this…
Binaural integration
What is binaural integration?
Integrates info that has been detected by both ears
Binaural integration is important for our ability to
Detect sound location
Which part of the brain is primarily responsible for higher processing of audio?
Auditory cortex in the temporal lobe
One characteristic of primary auditory cortex…
It has tonotopic organisation
What is tonotopic organisation?
Different areas of primary auditory cortex are mapped onto different frequencies
Both sides of the primary auditory cortex process
Info from both ears
The first level of cortical auditory processing is done in the
Primary auditory cortex
Depending on where the sound is coming from your head and pinna will…
Modify the sound slightly before it gets sent into your ear canal
What different cues do we use to work out how far away a sound is?
Relative intensity
Spectral composition
Relative amounts of direct vs. reverberant energy
Spectral composition (auditory distance perception) means that
Sounds further away contain relatively less high frequency than low frequency energy, because air (especially moist air) absorbs high frequencies more (e.g. distant thunder sounds like a low rumble)
Direct energy =
Coming from the source
Reverberant energy =
Bounced off surfaces in the environment
Sounds from far away will have more what type of energy?
Reverberant
Auditory scene analysis is
How we distinguish between different sounds occurring in the environment
What are the cues we use in auditory scene analysis?
Spatial segregation
Grouping by onset
Grouping by frequency
Grouping by timbre
What is spatial segregation?
Sounds coming from the same location are probably from the same source
Grouping by onset means that
Sounds that start together tend to come from the same source
Grouping by frequency
Sounds of similar frequency usually come from same source
Restoration happens when
There is a gap in the sound and the auditory system fills in the gap so the sound sounds continuous
Occluded sounds are sounds that are
Hidden
It’s easier for our auditory system to restore a …………rather than ………….
A noise rather than silence
It’s easier to fill in interruptions of noise rather than silence because…
Because of the kinds of auditory interactions we are used to: we are much more used to sounds being interrupted by other noises, whereas blocks of silence do not occur naturally in ongoing sounds
Babies as young as 2 months can differentiate between
Familiar vs unfamiliar
Ways we perceive music
Melody
Rhythm
Why did we develop musical perception?
Sexual attraction?
Basis for group bonding early in history?
Side effect of language evolution?
Formants in speech perception are
Bands of acoustic energy on a spectrogram caused by a particular resonance in the vocal tract
Spectrograms show us
Which frequencies in the voice are used the most
Each human voice has around ……. distinct formants (bands)
4
How many formants are crucial in distinguishing between speech sounds?
3
Coarticulation is when
The production of one speech sound overlaps with the next (depends on the vowel that follows)
Lack of invariance is
The articulation of sounds is not consistent and changes according to the surrounding sounds, yet somehow we can still identify a particular sound as belonging to a particular category
Categorical perception of speech sounds
We perceive sounds as belonging to discrete categories rather than distinguishing between minute changes in the sound
Categorical perception starts
From early infancy
We have generally learnt acoustic categories by the time we are
In puberty
Why would a Spanish native speaker have trouble pronouncing different English vowels?
Because their native sound categories are different, and once puberty is reached it is much harder to perceive and pronounce another language's sound categories
Phonemic restoration is called a…….approach
Top down
McGurk effect
An auditory illusion: the audio says 'ba' but the lips appear to say 'ga', and we perceive 'da'. The brain integrates visual and auditory information, so three different sounds are involved.