Course 10 - Chapters 14 and 15 Flashcards
What causes sound in the air?
Sound comes from pressure fluctuations in the air caused by vibrating objects.
What do vibrating objects produce in the air?
They produce cycles of air compression and rarefaction.
What are compression and rarefaction in sound waves?
Compression increases air molecule density; rarefaction decreases it.
What wave shape characterizes sound pressure fluctuations?
A sinusoidal wave.
What determines the loudness of a sound wave?
The amplitude of the sinusoidal wave.
Why is audition useful from a survival perspective?
It helps detect moving objects, like footsteps or cracking branches.
What special function do humans and songbirds use sound for?
Vocalizations for communication.
What does the chapter examine about sound?
How complex patterns of air waves are transduced by the ear and transmitted to the central nervous system.
What is loudness in terms of sound perception?
Loudness is the psychological aspect of sound related to its perceived intensity.
In what unit is loudness typically measured?
Decibels (dB).
What do decibels reflect more: actual sound pressure or subjective perception?
Subjective perception.
What is the reference point for decibel measurements?
The smallest pressure perceivable by most people.
Does 0 dB mean that there is no sound?
No, it means the sound is at the threshold of hearing for most people.
Is it possible to hear sounds below 0 dB?
Yes, for people with exceptionally good hearing.
What unit is used to measure sound pressure level?
Pascals.
What does the measurement of Pascals represent in terms of sound?
The pressure (force per unit area) exerted by air molecules.
What is the value of the minimal perceivable sound pressure in Pascals?
2 × 10⁻⁵ pascals.
How do you calculate the sound pressure level ratio?
Divide the pressure of the sound by the minimal perceivable sound pressure.
Why must we square the pressure level to get sound intensity?
Because sound waves propagate in 3D and exert force on a 2D tympanic membrane, making intensity proportional to the square of pressure.
How do sound waves propagate in space?
In a spherical fashion, like the expansion of an inflating balloon.
What is the formula for sound intensity ratio?
Sound intensity ratio = (sound pressure level ratio)²
Why is the tympanic membrane important in hearing?
It is the 2D structure that the sound wave must exert force on to be perceived.
What is the sound intensity ratio of the minimal audible sound?
1 (because it is used as the reference point).
What is the sound intensity ratio of a helicopter?
10¹⁰ or 10,000,000,000.
What is the purpose of the Bel scale?
To work with more manageable numbers when measuring sound intensity.
How many decibels are in one Bel?
10 decibels.
What happens to sound intensity each time you add 10 dB?
It multiplies by 10.
What type of relationship exists between decibels and sound intensity?
A logarithmic relationship.
How much more intense is a 100 dB sound compared to a 90 dB sound?
10 times more intense (intensity ratio 10¹⁰ vs 10⁹); in absolute terms, the intensity ratio increases by 9,000,000,000.
How much more intense is a 20 dB sound compared to a 10 dB sound?
10 times more intense (intensity ratio 10² vs 10¹); in absolute terms, the intensity ratio increases by only 90.
What does the frequency of a sound wave represent?
The number of cycles the sound wave completes in one second.
What is the unit used to measure frequency?
Hertz (Hz).
What determines the pitch of a sound?
The frequency of the sound wave.
Which type of frequency is associated with a low pitch?
Low frequencies.
Which type of frequency is associated with a high pitch?
High frequencies.
What influences the perceived loudness of a sound?
The combination of sound frequency and amplitude.
Why are very high and very low frequencies harder to hear?
They require higher intensity to be perceived.
Which frequencies are easiest for humans to process?
Frequencies in the middle range, such as those in speech and music.
What do equal-loudness curves represent?
Combinations of intensity and frequency that are perceived as equally loud.
What unit is used to measure subjectively perceived loudness?
Phons.
At what frequency is 40 dB equal to 40 phons?
At 1000 Hz.
At what frequency is 40 dB equal to 10 phons?
At 100 Hz.
Why can we perceive sounds below 0 dB between 2000 and 5000 Hz?
Because 1000 Hz was chosen as the dB reference, but it is not the most audible frequency.
What are sounds with only one frequency called?
Pure tones
Do pure tones exist in nature?
No, pure tones don’t exist in nature; all natural sounds are complex.
What term describes sounds with more than one frequency?
Complex sounds
What is a power spectrum?
It refers to the energy or power associated with the different frequencies composing a sound.
What do the lengths of the bars in a power spectrum represent?
They represent how much a certain frequency is present in the sound.
In the example provided, what are the frequencies present in the third waveform?
100 Hz, 300 Hz, and 500 Hz
In the example provided, what are the frequencies present in the fourth waveform?
300 Hz and 400 Hz
What is the difference between the first and second pure tones in the example?
The first has an intensity of 50 dB, and the second has an intensity of 40 dB.
What is a harmonic spectrum?
A spectrum with energy at frequencies that are integer multiples of the fundamental frequency.
What are harmonics?
Frequencies that are integer multiples of the fundamental frequency in a harmonic spectrum.
If a sound has a fundamental frequency of 100 Hz, name three harmonics.
200 Hz (2nd harmonic), 300 Hz (3rd harmonic), 400 Hz (4th harmonic).
What does the fundamental frequency represent in a harmonic spectrum?
It is the lowest frequency of the sound, and all other harmonics are multiples of it.
Why do many natural sounds have a harmonic spectrum?
Because they are typically caused by a single vibrating source, such as a guitar string or a saxophone reed.
What determines the modes of vibration for a vibrating object?
The shape of the vibrating object, which only allows certain stable modes of vibration.
What analogy is used to explain harmonic vibrations?
A jumping rope, which can move along its whole length or at integer fractions of its length.
Why are waves of unequal sizes physically impossible in a vibrating rope?
Because they are unstable and always tend to converge towards waves of equal sizes.
What frequencies can a rope oscillate at?
At a fundamental frequency corresponding to its total length, or at integer multiples of that frequency.
What do the different frequencies observed in a jumping rope correspond to in sound production?
They correspond to the harmonic frequencies of the vibrating object producing the sound.
Why do vocal folds also produce harmonic spectra?
Because they are subject to the same physical constraints as other vibrating objects like guitar strings.
What does the fundamental frequency determine in a sound?
The fundamental frequency determines the pitch that we hear.
What determines the timbre of a sound?
The profile of harmonics determines the timbre of the sound.
What is timbre?
Timbre refers to the qualitative aspect of the sound that characterizes, for instance, the sound produced by different instruments.
Why does the same note sound different when played by a flute and a violin?
Because the timbre is different; the fundamental frequency is the same, but the profile of the harmonics is different.
What is the ‘missing fundamental’ phenomenon?
It is the perception of the fundamental frequency even when it is physically absent from a sound’s spectrum.
How does the auditory system estimate the missing fundamental?
By identifying the greatest common factor among the present frequencies.
What is the greatest common factor of 200 Hz, 300 Hz, and 400 Hz?
100 Hz
Does the perceived pitch change if the fundamental frequency is removed from a harmonic sound?
No, the perceived pitch remains the same, but the timbre is affected.
Why doesn’t the brain assume a new fundamental (e.g., 50 Hz) when the 100 Hz fundamental is removed?
Because if 50 Hz were the fundamental, its harmonic series would predict frequencies that are not present in the sound, such as 150 Hz or 250 Hz.
What aspect of the sound is altered when the fundamental frequency is removed?
The timbre of the sound.
What are the three parts of the ear?
The outer ear, the middle ear, and the inner ear.
What structures make up the outer ear?
The pinna (auricle), the auditory canal, and the tympanic membrane.
What is the function of the pinna?
The pinna helps estimate the elevation of a sound source (whether it is coming from above or below).
What does the auditory canal do?
It conveys sound waves to the tympanic membrane.
What is the role of the tympanic membrane?
It vibrates in synchrony with the sound source.
What are the three small bones in the middle ear called?
Ossicles
What is the role of the ossicles?
To conduct sound to the cochlea and amplify the vibrations.
Which bone is directly connected to the tympanic membrane?
The malleus
How are the ossicles arranged?
In a cog-and-wheel manner to amplify movement.
What is the name of the second ossicle?
Incus
What is the name of the third ossicle?
Stapes
To what structure is the stapes attached?
The oval window of the cochlea
What happens when sound waves reach the tympanic membrane?
They cause it to vibrate, which moves the ossicles.
What is the function of the oval window?
To transfer the movement of the ossicles to the cochlea.
What are the two main structures of the inner ear?
The cochlea and the vestibular organs.
What is the function of the vestibular organ?
It is responsible for our sense of equilibrium.
What is the function of the cochlea?
It is the structure in which sound waves are transduced.
What does the cochlea resemble?
A snail shell.
What is the cochlea filled with?
Fluid distributed across three main canals.
Name the three main canals of the cochlea.
The vestibular canal (scala vestibuli), the middle canal (scala media, or cochlear duct), and the tympanic canal (scala tympani).
What produces movement in the ossicles and oval window?
Sound reaching the tympanic membrane.
What happens when the oval window moves?
It produces waves in the fluid (perilymph) inside the cochlea.
What is one of the main processes the auditory system uses to identify the frequency of a sound?
The cochlear place code.
How does the structure of the basilar membrane change from the base to the tip of the cochlea?
It becomes wider and thinner.
Which part of the basilar membrane is more sensitive to low frequencies?
The tip of the basilar membrane.
Which part of the basilar membrane is more sensitive to high frequencies?
The base of the basilar membrane.
Why do different parts of the basilar membrane respond to different frequencies?
Because of variations in its width and thickness from base to tip.
After hair cells transduce sound into neural activity, where do they transfer this information?
To cochlear (or auditory) nerve fibers.
What do auditory fibers have that allows them to differentiate different frequencies in a sound?
Characteristic frequencies (CF).
How are auditory fibers arranged on the basilar membrane in terms of frequency sensitivity?
Cells more responsive to high frequencies are located closer to the base of the cochlea, and cells responsive to low frequencies are located closer to the tip of the cochlea.
What is phase locking in relation to cochlear encoding of sound frequency?
It refers to the firing of hair cells in synchrony with the phase of the sinusoidal wave in the cochlea.
How do hair cells in the cochlea encode the temporal characteristics of sound?
Hair cells fire an action potential each time the tectorial membrane pushes their hairs in the direction of their tallest hair, with their firing synchronized to the phase of the sinusoidal wave.
Why can’t individual neurons encode frequencies higher than 4–5 kHz?
Because it is physiologically impossible for individual neurons to fire at such high rates.
How are high frequencies encoded in the auditory system if individual neurons cannot follow every cycle?
Through the combined firing of the whole population of neurons, each firing on a subset of cycles in a phase-locked manner.
What does it mean when neuron firing is phase-locked?
It means neurons fire in synchrony with the phase of the sound wave cycles they respond to.
How does phase-locking help explain the missing fundamental effect?
Neurons can phase-lock their firing to peaks that occur at regular intervals (e.g., every 4 ms), tricking the brain into perceiving a fundamental frequency (e.g., 250 Hz) that isn’t physically present.
In the case of a sound with components at 500, 750, and 1000 Hz, what is the perceived fundamental frequency due to phase-locking?
250 Hz, because the combined waveform peaks every 4 ms, corresponding to that frequency.
What is the first brain structure to receive auditory information?
The ipsilateral cochlear nucleus.
After the cochlear nucleus, where is auditory information sent?
To the contralateral superior olive.
What are the next brain structures auditory information travels through after the superior olive?
The inferior colliculus, the medial geniculate nucleus of the thalamus, and then the auditory cortex.
Is there only a contralateral auditory pathway?
No, there is also a secondary pathway on the ipsilateral side that connects the same structures.
Where is the auditory cortex located?
Inside the Sylvian (lateral) fissure.
What is the function of the primary auditory cortex?
It is the first cortical region to receive auditory information.
What kind of organization does the primary auditory cortex have?
Tonotopic organization, with adjacent frequencies processed in adjacent areas.
How is the tonotopic organization in the auditory cortex related to the cochlea?
It follows the same low-to-high frequency tuning gradient as in the cochlea.
After processing in the primary auditory cortex, where does sound information go?
To the belt and parabelt regions for processing more complex features.
What do the ventral and dorsal pathways process in the auditory system?
The ventral pathway processes sound identity (‘what’), and the dorsal pathway processes sound location (‘where’).
What are conductive hearing impairments?
Hearing impairments caused by a loss of sound conduction to the cochlea, such as from earwax buildup, tympanic membrane tearing, or pus in the middle ear.
What are sensorineural hearing impairments?
Hearing impairments caused by damage to the cochlea or auditory nerve, such as to the basilar membrane or hair cells.
What tool is typically used to assess hearing loss?
Audiograms.
What does an audiogram measure?
Perceptual thresholds (smallest intensity of sound perceivable) across different frequencies.
What threshold generally represents hearing loss on an audiogram?
Perceptual thresholds above 20 dB.
What does severe hearing loss between 4000-8000 Hz look like on an audiogram?
It shows perceptual thresholds significantly above 20 dB, representing severe hearing loss in that range.
What is tinnitus?
A condition where a sound is perceived in the absence of sound waves, often associated with hearing loss.
What is a leading hypothesis for tinnitus?
The brainstem increases auditory pathway gain to compensate for hearing loss, amplifying spontaneous neural firing perceived as sound.
What other condition is tinnitus compared to?
Phantom limb pain, where sensations are perceived in an absent body part.
Why is sound localization important in dangerous situations, like being chased by a bear?
It helps us determine the direction the sound is coming from, which is essential for reacting appropriately.
What is the Interaural Time Difference (ITD)?
It’s the difference in time it takes for a sound to reach each ear, depending on the location of the sound source.
How does the ITD help us locate sound sources?
By detecting which ear the sound reaches first, our brain can infer the direction the sound came from.
What happens when a sound comes from the left side of the head?
It reaches the left ear before the right ear.
What is the term for the angles of sound sources relative to the head?
Azimuths.
Which azimuths produce the largest Interaural Time Differences (ITDs)?
Azimuths of 90 degrees to the left or right.
What ITD values are associated with azimuths of 0 or 180 degrees?
ITDs of 0.
How does sound localization rely on interaural time difference?
It uses the tiny time differences between when sound waves reach each ear to estimate the location of the sound source.
What is the smallest interaural time difference (ITD) that the human auditory system can detect?
As small as 10 microseconds (0.01 milliseconds).
How much longer was the time difference between Usain Bolt and Justin Gatlin compared to the smallest ITD humans can detect?
8000 times longer (80 milliseconds vs 0.01 milliseconds).
Which structure receives auditory input from both the left and right cochlear nuclei?
The superior olive.
What allows the brain to compare the timing of sounds from both ears?
The projection of cochlear nuclei to both ipsilateral and contralateral superior olives.
What are coincidence detector neurons?
Neurons that are pre-tuned to respond to specific interaural time differences (ITDs).
In the example provided, where do the action potentials from the left and right cochlear nuclei meet?
In the left superior olive.
Why do the action potentials meet in the left superior olive in the example?
Because although the right signal starts earlier, it travels a longer path, allowing both signals to coincide in the left superior olive.
What is the function of coincidence detector neurons in sound localization?
They detect specific timing differences between inputs from both ears to infer the direction of the sound source.
Besides ITDs, what is another cue the brain uses to localize sound?
Interaural Level Differences (ILDs).
What does ILD stand for?
Interaural Level Difference.
What causes ILDs?
The head blocking or attenuating sound as it travels through space.
How does the sound intensity differ between ears if the sound comes from the left?
The sound is more intense in the left ear than in the right ear.
What role does the head play in creating ILDs?
It acts as a barrier that blocks some of the sound energy, creating a level difference between the two ears.
Why are ILDs considered a redundant mechanism in auditory localization?
Because they complement ITDs and help fine-tune the spatial location of sound sources.
At which azimuth angles are ILDs the largest?
At 90 degrees (to the left or right of the head).
At which azimuth angles do ILDs not exist?
At 0 and 180 degrees (directly in front or behind the head).
For which frequencies are ILDs more effective?
Higher frequencies.
Why are ILDs not as informative for low frequencies?
Because long wavelengths of low-frequency sounds are less affected by obstacles like the head.
How does the superior olive process ILDs?
It receives excitatory input from the ipsilateral ear and inhibitory input from the contralateral ear, then subtracts the decibels.
How is ILD circuitry different from ITD circuitry?
ILD circuitry is simpler, relying on excitatory/inhibitory inputs and subtraction of sound intensity.
What is a key limitation of ITDs and ILDs in localizing sound?
They can determine direction (azimuth) but not distance of the sound source.
Why can’t ITDs tell us how far away a sound source is on a given azimuth?
Because ITD remains the same for a given angle, regardless of the absolute distance from the source.
What cue can the auditory system use to infer the distance of a sound?
The absolute amplitude (intensity) of the sound.
Why is using amplitude alone not a perfect solution for determining distance?
Because a loud, far sound and a quiet, close sound could have the same amplitude.
What spectral cue helps estimate sound distance?
The spectral composition of the sound, specifically the ratio of low to high frequencies.
Why do distant sounds tend to have proportionally more low frequencies?
Because long wavelengths (low frequencies) are more resistant to obstacles and less absorbed by air.
What happens to high frequencies as sound travels through air?
They are more easily absorbed, decreasing in intensity with distance.
What is a “cone of confusion”?
A 3D region where all points produce the same ITDs, making it hard to localize the exact source of sound.
Why do cones of confusion occur?
Because multiple locations in space can create identical ITDs due to being equidistant from the two ears.
How do azimuths of 60° and 120° illustrate the problem of ITD ambiguity?
They produce the same ITD, making it hard to distinguish the true direction.
How can head movements help resolve cones of confusion?
By altering the ITDs associated with each possible sound source location, allowing the auditory system to identify the one location that remains constant across head positions.
In the cone of confusion example, which frog position represents the true sound location?
The blue frog, because it’s the only location compatible with both head positions.
What auditory cue helps resolve elevation in sound localization?
Sound reflection and distortion by the pinna, which varies with elevation.
What is the Directional Transfer Function (DTF)?
A function that describes how the pinna modifies the intensity of different sound frequencies depending on the elevation of the sound source.
How does the DTF help identify sound elevation?
By analyzing the specific pattern of frequency distortions caused by the pinna, such as reduced intensity between 8 and 10 kHz for sounds coming from 40° above.
What physical features contribute to elevation detection in audition besides the pinna?
The ear canal, head, and torso.
Why does using elevation cues help with cones of confusion?
Because elevation cues reduce the 3D ambiguity of cones of confusion to a 2D localization problem.
What is auditory stream segregation?
The process by which the auditory system assigns different streams of sounds to different sound objects.
Which visual concept does auditory grouping resemble?
Gestalt grouping principles from visual perception.
How does sound location influence auditory grouping?
Sounds coming from the same location are grouped together as coming from the same source.
How does frequency affect auditory grouping?
Tones with similar frequencies are grouped together as part of the same sound stream.
What happens when tones are far apart in frequency?
They are perceived as two distinct auditory streams.
How does timing affect auditory grouping?
Tones close together in time are grouped together; tones separated by longer delays may be perceived as separate.
How does timbre influence stream segregation?
Tones with the same timbre are grouped together; tones with different timbres are perceived as coming from different sources.
What is the effect of onset timing on grouping?
Sounds that begin at different times are perceived as coming from different sources.
What makes it easier to distinguish tones in a cluster: gradual or abrupt rise time?
Abrupt rise time.
What is the continuity effect in auditory scene analysis?
When a sound is interrupted, the brain can still perceive it as continuous, provided the gap is filled with noise.
What happens if the gap in a continuous sound is not filled with noise?
The sound is perceived as broken into separate chunks.
What are restoration effects in auditory perception?
They are higher-order effects where the brain fills in missing auditory information using semantic or syntactic knowledge, especially when gaps are filled with noise.
What condition is necessary for restoration effects to occur?
The gaps in the sound must be filled with noise.
What happens to restoration effects if the gaps are not filled with noise?
The effect disappears; the brain does not fill in the missing sound.
What type of knowledge supports restoration effects?
Higher-order semantic and syntactic knowledge.
What are phonemes?
Phonemes are the building blocks of speech—units of sound that distinguish one word from another in a specific language.
Give an example of two words distinguished by a single phoneme.
“Kill” and “Kiss.”
Can the same phoneme be spelled differently?
Yes, the same phoneme can have different spellings across words.
What tool helps us represent phonemes more consistently across languages?
The International Phonetic Alphabet (IPA).
Approximately how many languages are spoken around the world today?
About 5000 languages.
How many different speech sounds are used across world languages?
Over 850 different speech sounds.
What is the first step in speech production?
Respiration — the diaphragm pushes air out of the lungs, through the trachea, and up to the larynx.
What happens during the phonation step of speech production?
Vocal folds vibrate as air passes through the larynx, producing sound.
What causes a higher-pitched voice during phonation?
More tension in the vocal folds.
Why do children have higher-pitched voices than adults?
Because smaller vocal folds create more tension, resulting in higher pitch (vocal fold size: children < women < men, so pitch: children > women > men).
What type of spectrum is associated with sounds that pass through the vocal folds?
A harmonic spectrum.
What determines the pitch of a voice?
The fundamental frequency.
What determines the unique characteristics of a person’s voice?
The profile of harmonics in the harmonic spectrum.
What does phonation provide in speech production?
Phonation gives us the pitch of the sound.
What must be added to phonation in order to produce phonemes?
Articulation using the vocal tract.
What structures are included in the vocal tract?
The oral and nasal tracts, including the jaws, lips, tongue body, tongue tip, and velum (soft palate).
How does changing the shape of the vocal tract affect speech?
It alters the resonance characteristics, affecting the harmonic spectrum and producing different phonemes.
In many languages, what distinguishes different phonemes?
Their timbre, or profile of harmonics.
What are formants?
Peaks in the speech spectrum corresponding to harmonics with the highest intensities.
How are formants labeled?
By number from lowest to highest: F1, F2, F3.
How many formants are typically enough to identify a phoneme?
The first three (F1, F2, F3).
What role does articulation play in the sound produced by phonation?
It filters the frequencies, amplifying some and reducing others, to create distinct phonemes.
What does a spectrogram show?
The sound amplitude of different frequencies over time, with warmer colors indicating higher amplitude.
In the spectrogram of “We were away a year ago,” what do the red zones for the “e” in “We” represent?
The formants of the phoneme “e” at low, medium, and medium-high frequencies.
Why can’t we always use formants directly to identify phonemes?
Because speech is produced very quickly and formants can vary due to coarticulation.
What is coarticulation?
The overlap in articulatory or speech patterns caused by anticipating the next consonant or vowel during speech.
How fast do humans typically produce phonemes in speech?
About 10–15 consonants and vowels per second, which can double with fast speech.
Give an example of coarticulation using the phoneme “d”.
The “d” has a higher second formant when followed by “i” than when followed by “u”.
What does coarticulation imply about using formants for phoneme identification?
That formants vary depending on surrounding phonemes, making identification more complex.
What is categorical perception?
A phenomenon where stimuli are perceived in discrete categories rather than as gradual changes, despite continuous variation in input.
What visual example is used to illustrate categorical perception?
A set of images transitioning from a monkey to a cow, where people categorize them sharply despite gradual changes.
How does categorical perception help with speech perception?
It allows us to perceive consistent phoneme categories despite variability in acoustic features due to coarticulation.
How do people perform when distinguishing between pictures within a category vs. across categories?
They’re better at detecting differences when the pictures cross a categorical boundary.
What kind of perception does the brain favor, according to categorical perception?
All-or-none perception based on pre-existing categories.
What is the motor theory of speech perception?
It proposes that the motor processes used to produce speech can be used in reverse to understand speech sounds.
What phenomenon supports the motor theory of speech perception?
The McGurk Effect.
What is the McGurk Effect?
A perceptual phenomenon where visual input (e.g., seeing lips move) influences what we hear.
Are phoneme distinctions universal across languages?
No, phoneme distinctions are language-specific (e.g., Japanese does not distinguish between ‘r’ and ‘l’).
What happens to infants’ ability to distinguish phonemes as they age?
They lose the ability to perceive phonemes that are not used in their native language, usually by 10 months.
How many different phonemes are estimated to exist across the world’s languages?
More than 850.
How many phonemes does the English language use?
44 phonemes.
Where does phonetic discrimination occur in the brain?
In the belt region surrounding the primary auditory cortex.
Where are phonemes assembled into words and meaning extracted?
In Wernicke’s area.
Where is Wernicke’s area located?
In the posterior part of the superior temporal gyrus of the left temporal lobe.
What is Wernicke’s aphasia?
A type of fluent aphasia where speech is grammatically correct but lacks meaning, and comprehension is impaired.
Do patients with Wernicke’s aphasia know they are not making sense?
No, they are often unaware of their deficits.
Where is Broca’s area located?
In the left frontal operculum.
What is Broca’s aphasia?
A nonfluent aphasia where patients understand language but struggle to produce speech.
Are people with Broca’s aphasia aware of their condition?
Yes, they are typically very conscious of their language production difficulties.
What is tone chroma?
A sound quality shared by tones that have an interval of one octave; notes with the same chroma share the same pitch class across octaves.
What is tone height?
The perceived pitch of a sound based on its frequency; higher frequencies correspond to higher tone heights.
What is the frequency ratio of an octave?
2:1 (the higher note has double the frequency of the lower note).
What is the fundamental frequency of middle C (C4)?
261.6 Hz
How many notes are in an octave in Western music?
13 notes separated by 12 semi-tone intervals.
What is consonance in music?
A combination of notes that sound pleasant due to simple frequency ratios, like 3:2 or 4:3.
What is dissonance in music?
A combination of notes that sound unpleasant due to complex frequency ratios, like 42:33.
When does the auditory system perceive notes as coming from the same source?
When many harmonics coincide, as in consonance.
What evidence suggests that preference for consonance may be innate?
Infants as young as two months prefer consonant chords and intervals.
What is a musical scale?
A subset of notes within an octave that sound good together.
What is the interval pattern for a major scale?
2–2–1–2–2–2–1
What is the interval pattern for a minor scale?
2–1–2–2–1–2–2
How do major and minor scales differ in emotional quality?
Major scales sound ‘happy’, minor scales sound ‘sad’.
What is a musical key?
The scale that serves as the basis for a musical composition.
What is the tonic in music?
The root note of a key; it acts as a gravitational center for a musical piece.
Why do notes in-key sound more pleasant?
They match the notes present in the original chord or scale, fulfilling musical expectations.
What defines a melody?
A melody is defined as a sequence of notes or chords perceived as a single coherent structure.
What aspect of a melody allows it to be recognized even when played in different octaves or keys?
The contour, or the pattern of rises and declines in pitch.
Where are musical contours primarily processed in the brain?
In the right auditory cortex, specifically in the belt and parabelt regions.
What is congenital amusia?
A lifelong musical disability characterized by difficulty detecting pitch deviations and recognizing out-of-key notes, not due to intellectual disability or brain damage.
What is the ERAN response and what does it indicate?
ERAN (early right anterior negativity) is a negative ERP occurring about 200 ms after a tonal violation in a melody. It indicates basic perceptual detection of tonal anomalies.
What is the P600 response and how does it relate to congenital amusia?
P600 is a positive ERP that occurs 600 ms after a tonal violation and reflects conscious awareness. Congenital amusics typically lack this response despite detecting the violation.
What does the lack of a P600 response in congenital amusics suggest?
It suggests that their brains detect tonal violations, but they are not consciously aware of them—’in-tune’ but ‘unaware’.