W4: Audition Flashcards

1
Q

Audition

A

Auditory information surrounding the body can be sensed (unlike vision, whose field of view covers only what is in front)
Sound stimuli (e.g. language and music) are important to communication - loss of audition limits communication abilities and increases the risk of injury from hazards out of view.

2
Q

sound

A

Audition: Physics/Biology: Stimulus
Sound: pressure waves carried by air molecules, caused by a vibrating surface *in air, sound waves travel at roughly 335 m/s (the exact speed depends on the medium and temperature)
Compressions: positive component of the cycle, where pressure compresses air molecules
Rarefactions: negative component of the cycle, where pressure decreases, spreading the molecules apart

3
Q

Longitudinal waves

A

Audition: Physics/Biology: Stimulus
air pressure wave in which particles “vibrate back and forth in the same direction as the wave” (e.g. wave created by gongs)

4
Q

Sine wave/pure tone waveform

A

Audition: Physics/Biology: Stimulus: Simple Waves
simplest wave, with an even variation of pressure across compressions and rarefactions (e.g. the pure tone of a tuning fork); rarefactions and compressions are the same size/angle

5
Q

Waveform

A

Audition: Physics/Biology: Stimulus: Simple Waves
“Graphs pressure changes over time”; for a pure tone this graph is a sine wave.

6
Q

Wave Cycle

A

Audition: Physics/Biology: Stimulus: Simple Waves
“A single alternation between compression and rarefaction”

7
Q

3 important features of sinusoidal variation in sound pressure levels

A

Audition: Physics/Biology: Stimulus: Simple Waves
frequency, amplitude and phase

8
Q

frequency

A

Audition: Physics/Biology: Stimulus: Simple Waves
no. of wave cycles/second, described in hertz (Hz).
Freq. perceived as pitch (low freq = deep bass/low pitch, high freq = high treble pitch)

9
Q

amplitude

A

Audition: Physics/Biology: Stimulus: Simple Waves
“max height of wave/amnt of change in pressure” perceived as loudness
Decibel (dB): describes amplitude.

10
Q

Phase

A

Audition: Physics/Biology: Stimulus: Simple Waves
indicates a specific point in the waveform, described in degrees. Contributes to the perceived quality of sound/timbre (the harmonic frequencies which make sounds distinct from one another - what makes an instrument sound like that instrument every time it is played)
360 degrees of phase = 1 cycle; 0 degrees = resting point of the wave; 90 degrees = top of the compression; 270 degrees = bottom of the rarefaction
Phase determines how two waves interact: “two superimposed waves [with] similar phase values” heighten each other; waves differing by 180 degrees cancel each other.
Phase also describes the relative timing of two sine waves in a complex wave: ‘when one is ahead’, the wave with the smaller phase value leads.
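These relations can be made concrete with a minimal sketch (Python, and the sample rate, are my own assumptions, not part of the card):

```python
import numpy as np

# A pure tone p(t) = A * sin(2*pi*f*t + phase): frequency in Hz, amplitude, phase in degrees.
fs = 44100                      # assumed sample rate (samples/second)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of time

def pure_tone(freq_hz, amplitude, phase_deg):
    """One sine-wave component; 360 degrees of phase = 1 cycle."""
    return amplitude * np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_deg))

a = pure_tone(1000, 1.0, 0)     # 1 kHz tone starting at 0 degrees (resting point)
b = pure_tone(1000, 1.0, 180)   # the same tone shifted by 180 degrees

print(np.max(np.abs(a + a)))    # ~2: waves with similar phase heighten each other
print(np.max(np.abs(a + b)))    # ~0: waves 180 degrees apart cancel each other
```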

11
Q

Complex Waves

A

Audition: Physics/Biology: Stimulus: Complex Waves
is the result of many sine waves added together. Form is determined by the sinusoidal components (amp, freq, phase) of the individual pure tones

12
Q

Fundamental Frequency

A

Audition: Physics/Biology: Stimulus: Complex Waves
pure tone with the lowest frequency, which dictates the pitch/note or chord of a sound (FF and pitch are positively related)

13
Q

Harmonic Frequency

A

Audition: Physics/Biology: Stimulus: Complex Waves
refers to the rest of the higher frequency sine waves (which are multiples of FF) within a complex wave. Along with FF’s pitch, HF dictates the timbre/quality of sound (timbre is characteristic sound - what makes a piano sound the same every time and different from other instruments).
numbered by distance from fundamental frequency (e.g. “fifth harmonic is 5x higher than fundamental frequency”).
*“diff instruments playing a note at the same pitch/FF sound different due to differences in HF/timbre”
UNDERSTAND THIS: “Many natural sounds are not periodic, and do not contain a harmonic series of frequency components. Instead they contain a continuous “spectrum” of components in which all frequencies are represented. The particular amplitudes and phases of these components determine the overall form of the complex wave representing the sound.”

14
Q

Fourier Theory

A

Audition: Physics/Biology: Stimulus: Complex Waves
the theory that any complex wave can be described as a set of simple sine-wave components; it underpins the procedure of Fourier analysis

15
Q

Fourier Analysis

A

Audition: Physics/Biology: Stimulus: Complex Waves
procedure which separates a complex sound wave into its frequency (pure tone) components

16
Q

Fourier Synthesis

A

Audition: Physics/Biology: Stimulus: Complex Waves
is the process of combining the components of the Fourier (magnitude) and phase spectra in order to reproduce the original signal.

17
Q

Fourier/Magnitude Spectrum

A

Audition: Physics/Biology: Stimulus: Complex Waves
“A representation of the magnitude of individual frequency components present in a signal such as a sound wave” - info about amplitude

18
Q

Phase Spectrum

A

Audition: Physics/Biology: Stimulus: Complex Waves
“A representation of the phases of (a sine waves) individual frequency components present in a complex signal.”
*the phase and Fourier spectra together represent the original signal.
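A hedged sketch of this analysis/synthesis round trip using numpy's FFT (the 200 Hz fundamental and its harmonics are illustrative values, not from the cards):

```python
import numpy as np

fs = 8000
t = np.arange(0, 1, 1 / fs)
complex_wave = (1.0 * np.sin(2 * np.pi * 200 * t) +    # fundamental frequency
                0.5 * np.sin(2 * np.pi * 400 * t) +    # 2nd harmonic
                0.25 * np.sin(2 * np.pi * 600 * t))    # 3rd harmonic

# Fourier analysis: decompose the complex wave into its frequency components.
spectrum = np.fft.rfft(complex_wave)
magnitude_spectrum = np.abs(spectrum)    # Fourier/magnitude spectrum (amplitude info)
phase_spectrum = np.angle(spectrum)      # phase spectrum (phase of each component)

# Fourier synthesis: magnitude + phase together reproduce the original signal.
rebuilt = np.fft.irfft(magnitude_spectrum * np.exp(1j * phase_spectrum), n=len(t))
print(np.allclose(rebuilt, complex_wave))  # True
```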

19
Q

Fourier/Frequency components

A

Audition: Physics/Biology: Stimulus: Complex Waves
individual sine waves that together make a complex waveform

20
Q

Spectrogram

A

Audition: Physics/Biology: Stimulus: Complex Waves
‘represents changes in the signal’s freq content over time.’ Short analysis windows make it hard to distinguish nearby frequency components (e.g. only about 100 Hz frequency resolution is possible with a 10 ms window) - “Spectrograms therefore must trade off their ability to resolve variations over time (using wideband spectrograms) with their ability to resolve variations over frequency”
Amplitude/loudness: represented by the darkness of the plot
Frequency/pitch: is on vertical axis
Time: is on horizontal axis
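A small sketch of that time/frequency trade-off using scipy (the sample rate, test tone and window lengths are assumed values, not from the card):

```python
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 1000 * t)               # 1 kHz tone as a toy signal

for nperseg in (80, 800):                      # 10 ms (wideband) vs 100 ms (narrowband) windows
    f, times, Sxx = signal.spectrogram(x, fs=fs, nperseg=nperseg)
    # Frequency resolution is roughly fs / nperseg: ~100 Hz for the 10 ms window,
    # ~10 Hz for the 100 ms window; time resolution trades off in the opposite way.
    print(nperseg / fs, f[1] - f[0], times[1] - times[0])
```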

21
Q

Auditory/Fourier Filters

A

Audition: Physics/Biology: Stimulus: Complex Waves
are media or transmitting devices which filter acoustic signals by altering the amplitude of certain frequencies (e.g. the head filters out the higher freq components, only allowing the low freq components of the signal to pass through).
Low-Pass Filters: only allow lower freq in (before cutoff) - attenuates higher freq (e.g. head, bass knob on amp)
High-Pass Filter: only allow high freq in (after cutoff) - attenuates lower freq. (e.g. treble knob on amp)
Band-Pass Filter/Frequency Bands: only allow frequencies within a certain range/band (e.g. many parts of ear) Bandwidth = frequency range for band-pass filters/freq bands
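A hedged sketch of the three filter types using Butterworth designs in scipy (the cutoff frequencies and sample rate are illustrative assumptions, not values from the card):

```python
import numpy as np
from scipy import signal

fs = 16000  # assumed sample rate in Hz

low_pass  = signal.butter(4, 1000, btype="lowpass",  fs=fs, output="sos")  # passes < 1 kHz
high_pass = signal.butter(4, 4000, btype="highpass", fs=fs, output="sos")  # passes > 4 kHz
band_pass = signal.butter(4, [1800, 2200], btype="bandpass", fs=fs, output="sos")
# band_pass has a centre frequency of 2000 Hz and a bandwidth of 400 Hz.

t = np.arange(0, 0.5, 1 / fs)
mix = (np.sin(2 * np.pi * 500 * t) +
       np.sin(2 * np.pi * 2000 * t) +
       np.sin(2 * np.pi * 6000 * t))

# Each filter attenuates the components outside its pass band.
for name, sos in [("low", low_pass), ("high", high_pass), ("band", band_pass)]:
    out = signal.sosfilt(sos, mix)
    print(name, round(float(np.sqrt(np.mean(out[fs // 10:] ** 2))), 3))  # RMS after settling
```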

22
Q

Linear systems theory/3 rules of linear filters

A

Audition: Physics/Biology: Stimulus: Complex Waves
Output must only contain frequencies present at input, nothing more. Amplitude and phase may alter.
“If the amplitude of the input to the filter is changed by a certain factor, then the output should change by the same factor.”
‘Output of two inputs applied simultaneously should match the output of inputs applied separately and summed’
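A minimal numerical check of the second and third rules on a linear (Butterworth band-pass) filter; the filter and test tones are assumptions chosen only to illustrate the test:

```python
import numpy as np
from scipy import signal

fs = 8000
sos = signal.butter(4, [1800, 2200], btype="bandpass", fs=fs, output="sos")
t = np.arange(0, 0.5, 1 / fs)
x1 = np.sin(2 * np.pi * 2000 * t)
x2 = np.sin(2 * np.pi * 2100 * t)

# Rule 2: scaling the input by a factor scales the output by the same factor.
print(np.allclose(signal.sosfilt(sos, 3.0 * x1), 3.0 * signal.sosfilt(sos, x1)))

# Rule 3: the output of two inputs applied together equals the summed separate outputs.
print(np.allclose(signal.sosfilt(sos, x1 + x2),
                  signal.sosfilt(sos, x1) + signal.sosfilt(sos, x2)))
```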

23
Q

non-linear filter

A

Audition: Physics/Biology: Stimulus: Complex Waves
filter that doesn’t obey all three rules. “it distorts signals by adding new frequencies or by failing to respond at very low or high amplitudes.” makes it hard to predict response. Tools based on the linear systems theory can be used to locate the nonlinear parts of the auditory system and the nature of them.

24
Q

Ear as a Fourier Analyzer

A

Audition: Physics/Biology: Stimulus: Complex Waves
appears to act as a Fourier analyzer, since the auditory nerve fibres encode input sounds by phase, intensity (encoded in the fibres' activity) and frequency (via frequency-to-place conversion on the BM). BUT it does not follow linear systems theory:
No new freq components? Violated: if two tones w/ different frequencies stimulate the BM, a third distortion frequency can occur
Input and output proportionate? Violated: BM displacement is nonlinear - the cochlea amplifies
‘Output of two inputs applied simultaneously should match the output of inputs applied separately and summed’? Violated by two-tone suppression:
Two-tone suppression: “Suppression of an auditory nerve fiber’s response to a tone during presentation of a second tone.” Occurs when the second tone lies outside the fiber’s dynamic range.
*Fourier analysis can still be applied to the auditory neural impulses that are sent to the brain

25
Q

two-tone suppression

A

Audition: Physics/Biology: Stimulus: Complex Waves
“Suppression of an auditory nerve fiber’s response to a tone during presentation of a second tone.” Occurs when the second tone lies outside the fiber’s dynamic range.

26
Q

Peripheral Auditory system

A

Audition: Physiology
(incl. outer, middle and inner ear) detects/interprets sound pressure waves. Outer ear is only visible part. Inner ear creates and transfers neural responses to the central auditory system (“population of neurons in brainstem and cerebral cortex” ). Peripheral system is well understood, less understood is how the central auditory system converts the neural response into auditory perception.

27
Q

Outer ear:

A

Audition: Physiology
(incl pinna, meatus and ear canal) detects and transfers energy down ear canal

28
Q

Pinna

A

Audition: Physiology: outer ear
immobile “flexible flap” consisting of cartilage, ligaments and muscles which connect the ear to the skull. Its shapes and ridges create reflections which are used as comparison stimuli to create monaural cues for locating a sound along the vertical plane.

29
Q

meatus

A

Audition: Physiology: outer ear
opening of the ear which sends soundwaves down the ear canal

30
Q

ear canal

A

Audition: Physiology: outer ear
carries sound pressure waves through the ear; terminates at the tympanic membrane/eardrum

31
Q

Concha

A

Audition: Physiology: outer ear

Concha: inner funnel/”bowl-shape” structure of outer ear

32
Q

Influence of the pinna, concha and ear canal's size and shape

A

Audition: Physiology: outer ear

“Amplifying sound pressure for medium frequencies between 1500 and 7000 Hz”
“Folds of pinna (acoustic filter) attenuates high frequency sound components”
33
Q

middle ear

A

Audition: Physiology: Mid ear
“The air-filled cavity” housing the ossicles and “associated supporting structures”, which sends sound waves from the eardrum/tympanic membrane to the oval window of the cochlea (via impedance matching - which makes the middle ear a linear transmitter)

34
Q

Ossicles

A

Audition: Physiology: Mid ear:
Ossicles: three bones in middle ear; connecting TM/eardrum to cochlea, and “maximise transmission of sound from air pressure waves of outer ear to fluids in inner ear” - impedance match
Incus: the middle ossicle, connecting the malleus to the stapes
Malleus: “head” connects to TM/eardrum, “handle” connects to incus
Stapes: (smallest bone in body) connects to the incus and oval window of the cochlea - it pushes in and out to send vibrations through/displace to the fluids in the labyrinth (perilymph) and the inner ear (endolymph)
Eustachian Tube: connects the mid-ear chamber to the nasal cavity to control air pressure within the chamber and drain middle-ear fluid
Oval Window: “membrane-covered opening of the cochlea” connects stapes to cochlea. This window gives flexibility to the ossicles in order to send vibrations through the perilymph, displacing the endolymph/fluid within the inner ear/cochlea

35
Q

Acoustic Impedance

A

Audition: Physiology: Mid ear:
“The degree of resistance offered by a medium (air or fluid) to an oscillating signal… air and fluid differ by acoustic impedance” why the middle ears job is to “maximise transmission of sound pressure waves of outer to inner ear” by applying impedance matching

36
Q

Impedance matching

A

Audition: Physiology: Mid ear:
“The boost in pressure at the oval window provided by the mechanical properties of the middle ear, it matches up the differing acoustic impedances of air and inner-ear fluid.” (air at the TM has lower acoustic impedance; the oval window has higher acoustic impedance due to the fluid in the cochlea)

37
Q

Ways the Middle Ear does Impedance Matching

A

Audition: Physiology: Mid ear
Ways the Middle Ear does Impedance Matching:
The force per unit area at the stapes is significantly higher than at the T.M. (as the stapes footplate's area is ~17 times smaller), concentrating a large pressure onto the oval window
The ossicles act as a lever, boosting the mechanical force received at the T.M.
*”together they increase pressure by 33 dB SPL.”
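A minimal sketch of the decibel relation behind that boost (the 20·log10 formula is standard; only the 17:1 area ratio comes from this card, and the 33 dB figure is the card's quoted combined value):

```python
import math

def pressure_gain_db(pressure_ratio):
    """Gain in dB = 20 * log10(output pressure / input pressure)."""
    return 20 * math.log10(pressure_ratio)

# The ~17:1 eardrum-to-stapes area ratio alone concentrates pressure by that factor;
# the ossicular lever adds a further boost (the card quotes ~33 dB SPL combined).
print(round(pressure_gain_db(17), 1))   # ~24.6 dB from the area ratio alone
```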

38
Q

Inner Ear

A

Audition: Physiology: Inner ear
“The fluid-filled organ lying in the temporal bone, containing mechanoreceptors for hearing and balance.”
considered to be an “inertial guidance system, acoustic amplifier, and frequency analyzer”. Vestibular organ senses body movement and position.
Cochlea is the auditory sense organ which converts sound pressure waves into neural impulses.

39
Q

Cochlea

A

Audition: Physiology: Inner ear
auditory sense organ in which mechanoreceptors convert sound pressure waves into neural impulses. In the shape of a coiled tube (10 mm diameter, 34 mm long), which minimizes space, “maximises the supply of blood and nerves, and boosts its response to low frequency sounds”. The tube is divided into three chambers - the scala vestibuli and scala tympani, which are separated by the scala media located within the cochlear partition.

40
Q

Scala Vestibuli

A

Audition: Physiology: Inner ear: Cochlea
first chamber of the cochlea, which includes the oval window.

41
Q

Scala Tympani

A

Audition: Physiology: Inner ear: Cochlea
chamber at the end of the cochlea, which includes the round window

42
Q

Scala Media

A

Audition: Physiology: Inner ear: Cochlea
separates the scala vestibuli and scala tympani; located within the cochlear partition

43
Q

Cochlear Partition

A

Audition: Physiology: Inner ear: Cochlea
includes SM and BM; and separates SV and ST

44
Q

Basilar Membrane

A

Audition: Physiology: Inner ear: Cochlea
located in cochlear partition, which has hair cells and mechanoreceptors. Part of organ of corti.

45
Q

Mechanical Properties of Cochlea

A

Audition: Physiology: Inner ear: Cochlea
T.M. transfers sound vibrations to the stapes - “stapes push back and forth on oval window at same freq. as sound wave” - the stapes hitting the oval window displaces the scala vestibuli's fluid = traveling wave down the BM - transfers pressure to the scala media - displaces the cochlear partition - transfers vibrations to the scala tympani - deforms the BM
*pressure is sent through chambers when stapes pulls back from oval window

46
Q

Traveling waves

A

Audition: Physiology: Inner ear: Cochlea: Mechanical Properties
caused by displacement of the BM when sound waves hit the oval window. The wave travels along the BM, peaking at the place of maximum displacement. It starts at the basal end of the membrane and travels to the apical end.

47
Q

Frequency-to-place conversion

A

Audition: Physiology: Inner ear: Cochlea: Mechanical Properties
cochlear fluid displacement = vibrations along the BM. The vibration's freq determines the location of max displacement (low freq near the apical end (wider/flexible), high freq near the basal end near the stapes (narrow/stiff)) - this is how the cochlea codes freq: location codes frequency.
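One commonly used approximation of this mapping is Greenwood's frequency-position function; it is not mentioned in the card, and the human constants below are assumptions taken from the literature:

```python
def greenwood_cf_hz(x):
    """Characteristic frequency at relative position x along the basilar membrane,
    where x = 0 is the apical end and x = 1 is the basal end (near the stapes).
    Constants are Greenwood's published human values (an assumption beyond this card)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

for x in (0.0, 0.5, 1.0):
    print(x, round(greenwood_cf_hz(x)))   # ~20 Hz at the apex up to ~20 kHz at the base
```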

48
Q

Sharp frequency tuning of auditory filters in BM

A

Audition: Physiology: Inner ear: Cochlea: Mechanical Properties
a small band of frequencies cause max displacement - similar frequencies require much more sound to create the same amount of displacement.

49
Q

Linearity of basilar membrane displacement

A

Audition: Physiology: Inner ear: Cochlea: Mechanical Properties
At extremely high or low amplitudes, the BM's displacement doubles when the amplitude of the input wave doubles
When two pure sinusoidal tones with vastly different frequencies enter the membrane together, their separate locations of displacement should match their frequencies (if close in freq, they create one larger non-sinusoidal displacement)

50
Q

Organ of Corti

A

Audition: Physiology: Inner ear: Cochlea
part of the cochlear partition. Includes stereocilia between the BM and the tectorial membrane

51
Q

Tectorial membrane

A

Audition: Physiology: Inner ear: Cochlea
flexible structure above the BM. holds other stereocilia

52
Q

Stereocilia

A

Audition: Physiology: Inner ear: Cochlea: mechanoreceptors
mechanoreceptors/sensory hair cells that are organised into four neat rows along the BM. one row is inside the cochlear spiral, the other rows are closer to the outside of spiral

53
Q

Inner Hair Cells

A

Audition: Physiology: Inner ear: Cochlea: mechanoreceptors
“The mechanoreceptors on the basilar membrane that provide afferent signals to the auditory system when the membrane is displaced.”
a row of approx. 3500 hair cells that are located inside the cochlear spiral, along BM
Conveys most sensory info on sound. “The base of each inner hair cell makes synaptic contact with (10) afferent fibers of the auditory nerve. = 90-95% of afferent fibres”
Fluid displacement makes BM vibrate = shearing motion > deflects inner hair cells stereocilia = inner hair cell base sends an electric impulse to afferent fibers which carry impulse to brain
BM moves towards the tectorial membrane = stereocilia deflect towards the tallest stereocilium = the hair cell the stereocilia are embedded in is depolarized = releases neurotransmitter = voltage increase = electrical impulse
BM moves away from the tectorial membrane = deflection towards the smallest stereocilium = hyperpolarized cell = voltage decrease
small displacements cause enough potential to reach the threshold of hearing. Transduction can happen within 10 microseconds

54
Q

Outer Hair Cells

A

Audition: Physiology: Inner ear: Cochlea: mechanoreceptors
“Motile hair cells spanning the gap between the basilar membrane and tectorial membrane; they control the mechanical coupling between the two membranes. (MC: amplifies the inner hair cells' and BM's response)” Three rows totalling around 12,000 hair cells, closer to the outside of the spiral.
Motile response/changing length: either by contracting via proteins in the cell when the stereocilia are displaced, or by “receiving efferent stimulation from the cochlea nerve” - efferent fibres send signals to the cochlea from the central auditory system. *“this increases the BM's mechanical sensitivity and narrows its freq response”
*fluid vibrations that displace the BM, displace the tectorial membrane - displacing inner hair cells in fluid, and displacing tips of outer hair cells through the motion of the membranes

55
Q

Sound frequency coding in the auditory nerve

A

Audition: Physiology: Inner ear: Cochlea: Sound Freq. Coding (Aud nerve)
“intracellular resting potential of hair cells is –70 mV.”
Hair cells are depolarized (voltage increase) when the BM moves towards the tectorial membrane; cells are hyperpolarised as the BM moves away from it.
^ depolarized cells release neurotransmitters

56
Q

Phase Locking:

A

Audition: Physiology: Inner ear: Cochlea: Sound Freq. Coding (Aud nerve)
“The firing of hair cells in synchrony with the variation of pressure in a sound wave (during positive phase)” - for low freq. Waves
Sound freq. higher than 1 kHz = auditory nerve fibres cannot fire at every cycle but are still phase-locked. Phase locking stops at 4-5 kHz
A way of encoding the freq. of sound waves as they enter the ear

57
Q

Place Code & characteristic frequency

A

Audition: Physiology: Inner ear: Cochlea: Sound freq Coding (Aud nerve)
is based on frequency-to-place conversion. Area of max displacement/max hair cell activity = characteristic frequency of sound
Characteristic Frequency: the max activity of cells reflects the frequency the cells in that area are tuned to; similar frequencies receive a reduced response

58
Q

Frequency Rate Code: phase locking and volley principle

A

Audition: Physiology: Inner ear: Cochlea: Sound Freq. Coding (Aud nerve)
due to phase-locking, the auditory nerve fiber/inner hair cell impulse rate reflects frequency. The impulse rate doesn't reflect magnitude, as the fibers' response decreases over time due to adaptation
Phase locking: auditory nerve fiber impulses fire at same phase - rate of impulse therefore reflects frequency of wave
Volley Principle: ‘combined response of ensemble of nerve fibres should reconstruct the frequency of the wave.’

59
Q

Spontaneous Firing Rate (SFR)

A

Audition: Physiology: Inner ear: Cochlea: Intensity Coding (Aud nerve)
steady firing rate of nerve fibres without stimulation; the rate varies across different nerve fibres, with most at high rates of 18-250 spikes/second.

60
Q

Dynamic Range

A

Audition: Physiology: Inner ear: Cochlea: Intensity Coding (Aud nerve)
“In auditory nerve fibers, it is the difference between the minimum SPL to which a fiber responds, and the SPL at the fiber’s maximum firing rate.” Beyond this range, further increases in SPL cause no change in the fiber's firing rate.
Auditory nerve fibres: 20 - 60 dB. Human range is 100dB - covered by two fiber sets:
^”fibers w/ high spontaneous firing rate = low threshold (30-40dB); DR = 60 dB SPL
Low spontaneous firing rate = high threshold (50dB); DR = 100 dB SPL
^‘different afferent fibres (which connect to inner hair cells) carry different sound intensity levels’
Characteristic Frequency: fibers respond best to certain frequencies (requiring low amp) and respond less to similar frequencies (which need higher amp) - reflects location on the BM - this frequency tuning is shown in tuning curves, demonstrating that auditory nerve fibers act as band-pass filters

61
Q

Spiral ganglion cells

A

Audition: Physiology: Pathways: Ascending
in the cochlea are associated with the auditory nerve. These cells create auditory nerve fibers when they contact 1+ hair cells.

62
Q

Auditory nerve fibers: ipsilateral and contralateral

A

Audition: Physiology: Pathways: Ascending
“form synapses with large groups of neurons in the cochlear nuclei in the brain stem”. Fibers start in the cochlea on one side and project to the cochlear nucleus on the same side; most signals are then sent to the cortex on the opposite (contralateral) side of the brain, while the remaining fibers stay on the same (ipsilateral) side. “One group of fibers from each cochlear nucleus ascends directly to the contralateral inferior colliculus in the midbrain.” Another group goes first to the superior olive in the pons (most to the contralateral one), then to the lateral lemniscus, then the inferior colliculus, then the medial geniculate nucleus, then to the associated side of the primary auditory cortex.

63
Q

Superior olive

A

Audition: Physiology: Pathways: Ascending
“The complex of cells in the brainstem receiving projections from the cochlear nucleus; it contains binaural neurons that provide information about the location of sound sources.”

64
Q

Medial geniculate nucleus

A

Audition: Physiology: Pathways: Ascending
“The obligatory relay station for all ascending projections in the auditory system, en route to the cortex.” in thalamus

65
Q

Auditory pathways

A

Audition: Physiology: Pathways: Ascending
“involves a complex combination of serial and parallel processing stages.”
what and where processing

66
Q

Auditory ascending pathway process

A

Audition: Physiology: Pathways: Ascending

67
Q

What Processing

A

Audition: Physiology: Pathways: Ascending
“What” Processing:
“What” Sound Attributes: ‘frequency composition, temporal features (incl. Phase properties, onset and duration)’
Frequency: responded to by neurons in cochlear nuclei (wider DR than fibers)
“What” information: conveyed by cells in the medial geniculate nucleus and inferior colliculus “responding only to sounds coming from one ear (monaural) that vary in frequency, or sounds of a specific duration.” sent to cortex from MGN
^”some of these cells respond to multiple sensory modalities, incl. Somatosensory vestibular and visual systems”

68
Q

Where Processing

A

Audition: Physiology: Pathways: Ascending
“Where” Processing:…
Sound localisation: helps locate sound source or distinguish sounds from sources. ‘Pinna filtering effect locates sound sources in vertical plane’
Location of the ears on either side of the head helps locate sound sources in the azimuth plane
^ the side of the head a sound source is on is indicated by the ear the sound reaches first. The signal that reaches the contralateral ear can also appear different, as it is attenuated while travelling around the head.
“Specialized neural circuits in the auditory pathway detect the very small differences in phase and intensity that can be used for sound localization.”:
Phase: Circuits in the medial superior olive compute the interaural time differences (ITD) between the auditory signals arriving at the two ears.
Intensity: Circuits in the lateral superior olive compute the difference in intensity between the signals arriving at the two ears (interaural level differences , or ILD).
Azimuth: refers to horizontal plane

69
Q

Interaural time difference (ITD)

A

Audition: Physiology: Pathways: Ascending
‘a sound source directly in front of one ear arrives at the opposite ear about 700 µs later. ITD decreases as the sound source moves to the front/behind of the head.’
ITD detection threshold: about 10 µs
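A hedged sketch using the classic spherical-head (Woodworth) approximation of ITD; the head radius and speed of sound are assumed values, not figures from this card:

```python
import math

HEAD_RADIUS_M = 0.09      # approximate head radius (assumption)
SPEED_OF_SOUND = 343.0    # m/s in air (assumption)

def itd_seconds(azimuth_deg):
    """ITD ~= (r / c) * (theta + sin(theta)) for a source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 90):
    print(az, round(itd_seconds(az) * 1e6), "microseconds")
# 0 deg -> 0 us (straight ahead); 90 deg -> roughly 650-700 us, consistent with the card.
```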

70
Q

Interaural level difference (ILD)

A

Audition: Physiology: Pathways: Ascending
the head attenuates frequencies over 2 kHz, so the high-frequency signal reaching the ipsilateral (nearer) ear is more intense; neurons in the lateral superior olive respond to this - these neurons are EI (they receive an excitatory signal from the ipsilateral anteroventral cochlear nucleus and an inhibitory signal driven by the contralateral anteroventral cochlear nucleus)
^ the net response of these neurons depends on the levels of the two signals.
Strongest response = when the signal is on the same side of the head
Weakest response = when the signal is in the median plane of the head (front/behind), as the excitatory and inhibitory signals cancel each other out
^ this response is sent through the inferior colliculus and medial geniculate nucleus to the cortex
Therefore “Some cells in the inferior colliculi respond to either ITD or ILD”
Head/Acoustic Shadow: the region around the ear further from the sound source; sounds reaching this area are decreased in intensity because the head attenuates them

71
Q

Primary Auditory Cortex

A

Audition: Physiology: Cortex
located on the superior temporal lobe; includes ‘three core areas with different maps of frequency space’, and cells within these areas respond well to pure tones and have narrow frequency tuning (a small range of preferred frequencies)

72
Q

Auditory Association Cortex

A

Audition: Physiology: Cortex
“Two concentric bands of cortex encircling primary audi­tory cortex, containing cells responsive to auditory stimuli.”
“Cells in this region respond better to narrowband noise and frequency-modulated sweeps”

73
Q

Belt vs parabelt regions

A

Audition: Physiology: auditory association Cortex
Belt region: surrounds the primary auditory cortex
Parabelt region: surrounds the belt region. “Anterior area projects the “what” pathway to orbitofrontal cortex, posterior part projects the “where” pathway to posterior parietal cortex and dorsolateral prefrontal cortex”

74
Q

auditory Tonotopic Organization

A

Audition: Physiology: Cortex
Tonotopic Organization: “An organized arrangement of neurons, so that cells with similar frequency preferences lie close together in a neural structure.”
Isofrequency bands: “The cortical surface can be divided into a series of bands containing cells with the same characteristic frequency… Cells within a column extending down from the surface of the cortex have similar characteristic frequencies.”

75
Q

Deficits in where and what: lesion studies

A

Audition: Physiology: Cortex
Lesion Studies: Deficit in “where”: parietal, frontal and superior temporal cortex
Deficit in “what”: fusiform and temporal cortex

76
Q

isofrequency bands

A

Audition: Physiology: Cortex
Isofrequency bands: “The cortical surface can be divided into a series of bands containing cells with the same characteristic frequency… Cells within a column extending down from the surface of the cortex have similar characteristic frequencies.”

77
Q

Auditory Descending Pathway route

A

Audition: Physiology: Descending Pathway
Route: cortex - medial geniculate nuclei - inferior colliculi - superior olives - cochlear nuclei - cochlea's outer hair cells

78
Q

Auditory Descending Pathway main roles

A

Audition: Physiology: Descending Pathway
alters sensory input, “ability to attend selectively to certain auditory stimuli.”

79
Q

Auditory Descending Pathway contributes to

A

Audition: Physiology: Descending Pathway
amplifying the inner hair cells' response; the middle ear's acoustic reflex

80
Q

Auditory Descending Pathway and outer hair cells

A

Audition: Physiology: Descending Pathway
Efferent pathway: Outer Hair Cells receive top-down information/descending projections from the cortex to “regulate hearing aspects such as amplification, auditory attention and acoustic reflex of inner ear”.
There are more outer hair cells than inner, but they connect to only 5-10% of the nerve fibers
Motile response occurs when they receive efferent signals - source of cochlear amplifier
“Descending projections may be involved in auditory attention & outer hair cell amp functions”

81
Q

three types of audition perception

A

Audition: Perception
pitch, loudness and spatial hearing

82
Q

Pitch

A

Audition: Perception: Pitch
perceptual attribute corresponding to the frequency of a pure tone and the fundamental frequency of a complex tone. Orders musical notes from low bass to high treble.

83
Q

Frequency Selectivity/Ohm’s Law

A

Audition: Perception: Pitch
in particular circumstances the ear can decompose a complex soundwave from an instrument into “separate representation for each frequency component” : “the prime partial tone” (fundamental frequency) “and the various partial upper tones” (harmonic series)
Psychophysical Studies of Freq Selectivity: investigating its limits.

84
Q

Masking

A

Audition: Perception: Pitch
Psychophysical Studies of Freq Selectivity: investigating its limits.
Masking: main technique to examine limits ^. Experimental procedure where a noise mask (complex soundwave including tones with same amp but different frequencies and phases = hissing noise) impacts participants detection of a sine wave

85
Q

band-pass noise & critical bandwidth

A

Audition: Perception: Pitch
Band-Pass Noise: a complex tone whose individual frequencies fall within a bandwidth around the centre frequency (e.g. “band-pass noise with a center frequency of 2000 Hz and a bandwidth of 400 Hz contains energy at all frequencies between 1800 and 2200 Hz”) *centre freq = the individual sine wave to be detected against the mask
The sine wave gets harder to detect as the bandwidth increases, until a certain point beyond which detection ability stays constant - due to band-pass filters in the auditory system. The filter's centre frequency would be similar to the sine signal's freq, and noise components that produce a masking effect must fall within the filter's critical bandwidth
Critical Bandwidth: the bandwidth of this filter - the mask bandwidth beyond which further increases do not alter the detection threshold.

86
Q

pure tone masks

A

Audition: Perception: Pitch
Pure Tone Masks: have also been used, where the signal level is set low (e.g. 10 dB SPL) and the pure-tone mask is adjusted in equal increments (systematically) until it just masks the signal - the results create a psychophysical tuning curve (masks do best when their freq is close to the signal's; the more their frequencies differ, the higher the SPL required to mask the signal). This suggests that we possess many band-pass filters whose different critical bandwidths overlap in order to cover a large range of frequencies.

87
Q

critical band masking

A

Audition: Perception: Pitch
Critical-Band Masking: masking of a sine signal occurs if noise masks frequency is within the critical bandwidth.
Assumptions:
‘A signal increases activity in auditory fibers tuned to its freq
Signal is detected if this activity reaches a certain threshold
The mask causes activity in the filter regardless of the signal
*the mask only has an effect when it activates the same fibers as the signal and its activity reaches the signal's detection threshold - the mask's activity then swamps the signal's activity, preventing detection of the signal.

88
Q

Pitch perception theories

A

Audition: Perception: Pitch
are required to explain our small discrimination thresholds. They link patterns of activity across freq-tuned neurons and auditory fibers (which reflect the type of sound stimulus) to the perception of pitch.
Place and rate coding theories for pure tones
Temporal and pattern recognition theories for complex waves

89
Q

pure tone perception theories

A

Audition: Perception: Pitch
Place and rate coding theories for pure tones
Evidence for both theories: rate and place code is used for lower- mid freq tones (up to around 4-5kHz)

90
Q

complex tone perception theories

A

Audition: Perception: Pitch
Temporal and pattern recognition theories for complex waves

91
Q

Place coding theory: pitch

A

Audition: Perception: Pitch
auditory system as fourier analyser - freq-tuned filters respond to each component wave of complex tone - area of activated filters reflects sine wave frequency
high freq/pitch tones = base of BM near oval window; low freq/pitch tones = apex (at end of cochlear spiral) -
^ perceived pitch = fundamental frequency = sine wave w/ freq closest to apex.
*pitch discrimination thresholds depend on the filter's characteristic frequency and bandwidth (measuring bandwidths helps us understand a filter's discrimination threshold - bandwidths start narrow and increase with frequency, which is why it's harder to discriminate between higher frequencies)
^this goes against place theory which assumes that bandwidth and freq discrimination remain constant

92
Q

Rate coding theory: pitch

A

Audition: Perception: Pitch
Auditory nerve responses are phase locked (they fire at same point of phase in each cycle) to frequencies of waves below 5kHz - therefore response rate reflects frequency of sound. Theory proposes that pitches with differing tones are discriminated by “differences in the response rate or time intervals between neural firings.”

93
Q

missing fundamental wave

A

Audition: Perception: Pitch
Missing Fundamental Wave: when one can determine a tone's pitch in the absence of the fundamental frequency (e.g. the fundamental frequencies of male voices cannot be reproduced by a phone) because the brain fills it in (the sound appears hollow/higher pitched due to the absence of the bass/FF).
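A tiny sketch of the idea (the specific harmonic frequencies are illustrative assumptions, not from the card):

```python
import math
from functools import reduce

# A complex tone containing only the harmonics 400, 600 and 800 Hz is still heard at the
# physically absent 200 Hz fundamental, because the harmonics are all multiples of it.
harmonics_hz = [400, 600, 800]
implied_fundamental = reduce(math.gcd, harmonics_hz)
print(implied_fundamental)   # 200 Hz: the pitch the brain "fills in"
```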

94
Q

Temporal Theory

A

Audition: Perception: Pitch
Temporal Theory: explains how beats of unresolved harmonics create a residue pitch which reflects the sound of the missing fundamental frequency.
Up to 1kHz = Phase locking determines pitch
Up to 4kHz = Volley Principle determines pitch: fibers combined response = freq
Residue pitches determine pitch for higher frequencies: unresolved harmonics (multiple harmonics that are indiscriminable because they fall within the same filter's bandwidth) overlap, creating a residue pitch with beats whose timing reflects the missing fundamental frequency
Residue Pitch: is the sensation of pitch created and encoded by the beats of higher, unresolved harmonics.
Beats: unresolved harmonics (two indistinguishable harmonics sound as one because they fall within the same filter bandwidth) overlap and beat at a frequency that reflects their fundamental frequency. The auditory nerve fibers encode the pitch by phase-locking to these beats (rate coding)
*criticism: people tend to depend on the different pitch created by resolved harmonics. Need theories that apply to resolved and unresolved harmonics such as the following…

95
Q

Pattern Recognition Theory

A

Audition: Perception: Pitch
Pattern Recognition Theory: pitch = fundamental frequency of resolved harmonics by using place coding
Explains missing fundamental effect: ff determined by the distance between harmonics
Criticism: doesn't explain unresolved harmonics

96
Q

Is there a “Pitch Centre” in the Brain

A

Audition: Perception: Pitch
‘unsure how auditory nerves temporal code influences cortical neurons frequency tuning’
Unsure how tonotopic mapping of the primate cortex (like frequency-to-place conversion in the BM) allows pitch to be encoded
Pitch centre may be in the secondary auditory cortex, but this is uncertain

97
Q

Loudness Perception

A

Audition: Perception: Loudness
relates to SPL intensity and amplitude. The following 2 are comparison techniques:

98
Q

Equal Loudness Contour

A

Audition: Perception: Loudness
is a plot of various comparison (freq) stimuli, being described in dB SPL (i.e. a certain amount of dB above or below the standard pressure level) - SPL being the standard stimulus. Shows how different amplitudes (dB SPL) can have the equal loudness due to variation in frequency (kHz).
Lowest curve = absolute threshold for tones at that freq./hearing in general (at around 3dB)
‘Low freq = poor sensitivity - requires higher amp for equal loudness
Mid freq = high sensitivity - low amp
High freq = medium sensitivity - medium amp < curve flattening out

99
Q

Loudness Scaling

A

Audition: Perception: Loudness
participant applies magnitude estimation (chptr 1) to sounds of various intensities to understand “how loudness scales with intensity”. Loudness increases non-linearly with intensity - in accordance with Stevens' power law, the loudness exponent = 0.3 (a 10 dB increase in intensity roughly doubles loudness, for levels of 40 dB SPL and above)
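A minimal sketch of that relation (the scale factor in Stevens' law is arbitrary, so only ratios are shown):

```python
def loudness_ratio(delta_db):
    """How much louder a sound seems after a level increase of delta_db, per L = k * I^0.3."""
    intensity_ratio = 10 ** (delta_db / 10)   # dB change -> intensity ratio
    return intensity_ratio ** 0.3             # Stevens exponent for loudness

print(round(loudness_ratio(10), 2))  # ~2.0: +10 dB roughly doubles loudness
print(round(loudness_ratio(20), 2))  # ~4: +20 dB roughly quadruples loudness
```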

100
Q

Auditory Localization: Azimuth vs elevation angle

A

Audition: Perception: Auditory Localization
Two types of planes can be used to locate the source’s direction:
horizontal plane is described by an “azimuth angle relative to straight ahead” and uses binaural cues
vertical plane is described “by an angle of elevation relative to horizontal.” uses monaural cues
(e.g. source w/ azimuth angle of 0° and elevation angle of 90° = right above head)

101
Q

Horizontal plane localization

A

Audition: Perception: Auditory Localization
Azimuth angle and binaural cues

102
Q

Min Audible Angle

A

Audition: Perception: Auditory Localization: horizontal plane
“The smallest change in the azimuth angle of a sound source that can be detected reliably.” *JND of sound based on differences along azimuth angle
At all angles, listeners struggle to detect differences around 1500-1800 Hz - this is the transition frequency region where ITD and ILD both struggle to encode location.

103
Q

The Duplex Theory

A

Audition: Perception: Auditory Localization: horizontal plane
ITD for low freq, ILD for high freq, because ILDs struggle to capture differences between low frequencies, which pass easily around the head without being attenuated
ITD: the sound arrives at the closer ear first - ‘the diff in arrival depends on azimuth angle’ (max ITD = about 650 µs, 180 degree travel; ITD = 0 in front of/behind the head - shows that two azimuths can share the same timing). Applied to low frequency sounds. ‘Processed in brainstem, medial superior olive’
ILD: higher frequencies don't pass around the head, so the far ear lies in an acoustic shadow and the signal reaching it is attenuated = a detectable level difference
ILD = 0 in front/behind head - same intensity/travelling
‘Processed in brainstem, Lateral superior olive’

104
Q

Cones of Confusion

A

Audition: Perception: Auditory Localization: horizontal plane
“A cone of space extended from a listener’s head, defining the directions that produce the same interaural time difference and are therefore confusable.” e.g. the front and back of the head
^RESOLUTION: moving head around decreases size of cone and use of vertical cues helps define elevation.

105
Q

Vertical plane localization

A

Audition: Perception: Auditory Localization: Vertical plane
much better judgment of location based on monaural cues - created by the pinna working as an acoustic filter for amp and phase. Two main effects of the pinna:
Pinna filtering only affects short wavelengths, i.e. frequencies above about 6 kHz
It adds peaks and valleys to the spectrum (shifting to higher frequencies as the sound source becomes more elevated)

106
Q

Precedence Effect

A

Audition: Perception: Auditory Localization: Vertical plane
deciding the location of fused sounds (multiple sounds that occur 5-50ms apart sound like one sound) based on the direction of the first/direct component of fused sound (as they are less likely to be reflecting off other objects in different directions) or by comparing the “difference between the direct and reflected sound” (from pinna filtering/reflecting)

107
Q

Distance Judgements

A

Audition: Perception: Auditory Localization: Vertical plane
four cues are used to determine distance:
Distance-to-intensity ratio: closer sounds = louder - ‘when distance doubles, intensity decreases by a factor of 4’ (worked through in the sketch after this list)
Direct-to-reverberant energy ratio: in places where sound reflects off surfaces, the amount of sound not directly hitting the ear (i.e. bouncing off other surfaces) increases with distance
More distant sounds are more muffled: “Air molecules absorb energy at higher sound frequencies more”
Within a metre, ILD decreases as distance increases - isn't this the first point?
*studies show these cues lead to underestimates of distance, and their effectiveness depends on the sound's nature and angular position.
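A minimal sketch of the distance-to-intensity cue under the inverse-square law (the distances are illustrative values):

```python
import math

def level_drop_db(d_near, d_far):
    """Drop in sound level (dB) when a source moves from d_near to d_far metres away,
    assuming intensity falls with the square of distance."""
    intensity_ratio = (d_near / d_far) ** 2
    return -10 * math.log10(intensity_ratio)

print(round(level_drop_db(1.0, 2.0), 1))   # ~6.0 dB quieter: doubling distance quarters intensity
print(round(level_drop_db(1.0, 4.0), 1))   # ~12.0 dB
```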

108
Q

The Production of Speech Sounds

A

Audition: Perception: Speech
the diaphragm pushes air through the vocal cords to create vibration (the vibrating sound source) - the rate of vibration determines the fundamental frequency. The vocal tract (a resonating chamber - the filter) modifies the harmonic frequencies - amplifying the frequencies at which it resonates most

109
Q

Formant Frequency & Transitions

A

Audition: Perception: Speech
Formant Frequencies: the resonant frequencies of the vocal tract (it vibrates more to certain frequencies). Appear as peaks on spectrograms. “Numbered in ascending order - starting w/ the lowest freq.”
Formant transitions: “first 50ms in speech sound” where formant freq smoothly transitions (ba, da and ga all have different transitions)

110
Q

Speech Mode

A

Audition: Perception: Speech
is a different mode of auditory processing required for speech. People who believe in this mode “argue” that this processing is different to non-speech sounds and that speech mode has its own “specialized neural structures”

111
Q

Qualitative Diff Btwn Speech & Non-Speech Processing

A

Audition: Perception: Speech Mode
speech processing is qualitatively different because it's the only processing that uses categorical perception and is a totally different computation.

112
Q

Categorical Perception

A

Audition: Perception: Speech Mode
speech sounds vary by formant frequencies and transitions (as non-speech sounds vary by freq/pitch), which alters the way the sound is perceived. When the formant transition is altered gradually, the perceived phoneme does not change gradually but switches suddenly - these sharp boundaries were taken as evidence for strict phoneme categories

Evaluation: a speech mode cannot be concluded, since categorical perception was the one thing claimed to be unique to it and that claim has been disproven. Differences in speech and non-speech perception may just be due to us being better “experts” at speech processing, since we encounter it so much.

113
Q

Phoneme

A

Audition: Perception: Speech Mode
smallest unit of sound (e.g. dog = d/o/g)

114
Q

Phoneme boundary

A

Audition: Perception: Speech Mode
when a formant transition crosses a phoneme boundary, the perceived phoneme changes suddenly rather than gradually
*some evidence of categorically processing non-speech sounds (e.g. chords) which weakens this argument of qualitative differences

115
Q

Physiological Specialisation in Processing of Speech Sounds

A

Audition: Perception: Speech Mode
Physiological Specialisation in Processing of Speech Sounds: evidence of specialized neural structures
Neuropsychology: showed that damage to Wernicke’s Area (in brain’s left hem - posterior temporal lobe overlapping primary and secondary auditory cortex) impaired speech perception (Wernicke’s Aphasia) - “may reflect a disturbance in the neural system that links the auditory representation of speech sounds with their meanings.”
Neuroimaging: isolates cortical regions whose activity reflects only speech processing.
“used PET to study the brain areas activated solely by intelligible speech” - a range of sounds varied in intelligibility while acoustic complexity was kept constant - *the left hemisphere was preferentially activated in response to speech; intelligible speech = left anterior superior temporal sulcus, non-speech = right-hemisphere superior temporal gyrus
Two cortical streams of speech processing:
Dorsal: left-hem dominant - transforms speech sounds into representations
Ventral: “bilaterally organized, and maps acoustic speech input onto conceptual and semantic representations.”

116
Q

Spoken Word Recognition

A

Audition: Perception: Speech
Essential component of speech perception - recognition involves categorising “acoustic energies” into “discrete set of meaningful symbols/known words”
Becomes more complex as a word's meaning depends on the context of consecutive words
Characteristics of the speaker alter the pronunciation of the word, which further complicates recognition
Recent studies show a hierarchical processing system using the ventral route: the primary auditory cortex does initial analysis of speech - the mid-superior temporal gyrus forms representations of phonemes and speech formants from “spectral features” - the anterior superior temporal gyrus turns the “temporal sequence of phonemes” into word representations

117
Q

Mental Lexicon

A

Audition: Perception: Speech: Spoken Word Recognition
“brain's dictionary” - has a representation for each word based on its sound (physical properties) and its meaning (symbolic properties).

118
Q

Word-recognition Process

A

Audition: Perception: Speech: Spoken Word Recognition
prelexical analysis: dividing the acoustic signal into phonemes -> potential words are activated to different extents depending on how many of their features match the input -> the most activated word's representation is selected and matched to the word

119
Q

How do word recognition theories differ by their opinion of bottom up and top-down processing

A

Audition: Perception: Speech: Spoken Word Recognition
some argue that recognition only involves bottom-up processing (words and phonemes are derived only from stimulus info). Others argue that prior knowledge influences early processing - supported by context effects when interpreting incomplete words spoken in a full sentence

120
Q

Auditory Scene Analysis

A

Audition: Perception: Auditory Scene Analysis
processes that allow people to decompose a complex wave into its individual sounds in order to focus on the relevant ones (e.g. focusing on a conversation while ignoring background noise).

121
Q

Auditory Streams

A

Audition: Perception: Auditory Scene Analysis
Auditory Streams refer to the “Grouping of parts of a complex acoustic signal into discrete auditory objects.” discrete auditory objects = sound sources. This grouping is determined by the following physical characteristics: spatial location and spectral content

122
Q

Spatial Location of Auditory Scene Analysis

A

Audition: Perception: Auditory Scene Analysis
ILDs are the only relevant cue here, as ITDs only work if the sound has already been grouped - so ITDs cannot be a cue for grouping.

123
Q

Spectral Content of Auditory Scene Analysis

A

Audition: Perception: Auditory Scene Analysis
grouping based on differences in pitch (men have a lower pitch/freq than women). Speakers with different fundamental frequencies can be distinguished because the FF dictates the spacing between harmonics (harmonics are multiples of the FF), so their harmonic series differ.
One stream/melody with alternating high- and low-pitched notes splits into two streams (high and low notes) based on two spectral grouping cues: the shortening of intervals between notes (temporal proximity) or increased differences in frequency (frequency similarity). Other grouping cues include harmonic similarity (differences based on timbre) and amplitude

124
Q

Time or Onset of Auditory Scene Analysis

A

Audition: Perception: Auditory Scene Analysis
a sound's spectral components share the same onset and offset - components are grouped on this basis - onset and offset differences are perceived as differences in timbre

125
Q

Conductive Hearing Loss

A

Audition: Perception: Hearing Dysfunction
is caused by issues within the “mechanical structures of the outer and middle ear” - due to damaged T.M., impeded transmission in the ossicles or by blocked ears (with wax or foreign objects)

126
Q

Sensorineural Hearing Loss

A

Audition: Perception: Hearing Dysfunction
caused by “damage to neural structures in the cochlea, auditory nerve, or central auditory system.”

127
Q

Damage to Tympanic Membrane

A

Audition: Perception: Hearing Dysfunction: Conductive
hole/perforation in eardrum from infection or “violent stimulation”. Results in ‘poorer transmission of sound pressure waves to middle ear’. Further damage to this process comes from scar tissue firming up the membrane.
Consequences and Treatment: conduction of sound energy to cochlea is impaired - thresholds increase by 40 - 50 dB for all frequencies: loud sounds heard but muffled.
Drugs, mechanical intervention or a hearing aid treat these issues: grafts help the T.M. if it doesn't heal, prosthetic ossicles, a grommet drains middle ear fluid (but may damage the eardrum). These can fully or mostly resolve the issues

128
Q

Damage to the ossicles

A

Audition: Perception: Hearing Dysfunction: Conductive
infection or otosclerosis (the stapes becomes fixed/fused to the oval window) plus further scar tissue - restricts movement = restricts transmission of energy. Also caused by blockage of the Eustachian tube, so fluid from the middle ear cannot drain into the nasal cavity and air pressure cannot be equalised, restricting movement of the ossicles (the most common cause of conductive hearing loss)
Consequences and Treatment: conduction of sound energy to cochlea is impaired - thresholds increase by 40 - 50 dB for all frequencies: loud sounds heard but muffled.
Drugs, mechanical intervention or a hearing aid treat these issues: grafts help the T.M. if it doesn't heal, prosthetic ossicles, a grommet drains middle ear fluid (but may damage the eardrum). These can fully or mostly resolve the issues

129
Q

Cochlear Damage

A

Audition: Perception: Hearing Dysfunction: Sensorineural
Cochlear Damage:
Causes: intense sound exposure, ototoxic drug exposure (e.g. antibiotics), infection, metabolic disturbance, allergy, genetic disorders, age (presbycusis).
Intense sound exposure e.g.: ruin organ of corti’s elasticity - damaging stereocilia, tectorial membrane, basilar membrane and therefore outer hair cells + transduction.
Perceptual consequences of cochlear damage:
Raised detection thresholds - sound is de-amplified and distorted (so a hearing aid won't resolve the distortion): when the outer hair cells lose function, the auditory fibers' frequency tuning broadens = the system cannot decompose complex signals
Loudness Recruitment: “Abnormally rapid growth in loudness with SPL, resulting from damage to outer hair cells in the cochlea.” Loud sounds are still heard at near-normal loudness (why people with recruitment say there is no need to shout) (cochlear damage and outer hair cell damage - fibers broadening their tuning)
Presbycusis: can start in the 20s and be noticed in the 50s; it first affects high frequencies and slowly works down the freq spectrum = due to hair cells deteriorating and losing efficiency at the base of the cochlea first (since the base responds to all frequencies)

130
Q

Retrocochlear Dysfunction

A

Audition: Perception: Hearing Dysfunction: Sensorineural
(auditory nerve damage): “tumours on vestibular nerve can damage auditory nerve” - treatment = surgical removal of tumour = hearing loss

131
Q

Central Auditory Processing Disorder

A

Audition: Perception: Hearing Dysfunction: Sensorineural
“hearing issues in noisy environments with no sensory deficits… Treatment is confined to measures that improve the perceptibility of the signal against noisy backgrounds.”

132
Q

Tinnitus

A

Audition: Perception: Hearing Dysfunction: Sensorineural
perception of ringing in the head. May persist - debilitating if so. The cause is thought to lie in central structures but is unclear - one cause is otosclerosis (see ossicle damage). Isn't classified into either of the groups, as its origin is unknown. Treatment = in-ear sound generators for the auditory system, plus relaxation techniques and CBT to calm down the limbic system (emotion) and sympathetic nervous system (arousal).

133
Q

Cochlear Implants

A

Audition: Perception: Hearing Dysfunction: Sensorineural
treats sensorineural deafness by transducing sound energy and sending electrical signals to the spiral ganglion neurons to carry along the auditory nerve. It does “information processing of auditory input, reducing dynamic range and bandwidth.” It helps, but is not a full replacement for the natural transduction and encoding processes.

134
Q

Function

A

Fourier Theory
a waveform/collection of sinusoidal waves

135
Q

Ear Drum/Tympanic Membrane

A

Physiology: Outer ear
cone-shaped membrane located at the end of the ear canal. “Air pressure waves cause TM to vibrate” - these vibrations are transferred to the middle ear, through the ossicles and on to the cochlea.

136
Q

Cochlea

A

the fluid inside the cochlea vibrates; the resulting neural response is then sent to the brainstem and then the cortex

137
Q

Physiological Basis for Masking & Freq Selectivity:

A

the freq. tuning of auditory nerve fibers is determined by the place on the BM that is stimulated/displaced - this links freq-tuned responses to auditory filters.

138
Q

Frequency discrimination

A

techniques include playing two tones consecutively with slightly different frequencies; the participant selects the higher-frequency tone. The discrimination threshold is the frequency difference at which the participant answers correctly 75% of the time. Discrimination is best at lower frequencies (below 1000 Hz, tones differing by less than 1 Hz can be discriminated); the threshold increases with frequency, reaching around 100 Hz (about a 1% change in frequency) for frequencies of 10 kHz and above.

139
Q

Loudness Matching (Method of Adjustment)

A

Loudness perception: measurement
based on the idea that loudness depends on freq and amp - allows us to examine “the frequency dependence of loudness”, as two waves with different amplitudes can have the same loudness because of their differing frequencies. The participant adjusts the level of the comparison stimulus (at one frequency) until it matches the loudness of the standard stimulus (fixed freq. and level).

140
Q

Excitation Pattern Model

A

Loudness perception: measurement
frequency is coded by the area of the cochlea that experiences max displacement (freq-to-place conversion: (low freq = apical end (wider/flexible), high freq = basal end (narrow/stiff)). Intensity is coded by auditory fibers firing rate. ‘This model explains the link of encoded intensity to loudness perception.’
Equal ratios between auditory nerve’s activity/intensity and perceived loudness. (rate coding)
Auditory nerve fibers have different dynamic ranges and characteristic frequencies in order to cover the whole range of human hearing - CF: similar freq require higher intensity to receive same response from fiber
A wave at its detection threshold only elicits a response from fibers with that characteristic frequency; as intensity increases, neighbouring neurons with similar CFs also fire (the signal falls within the bandwidth of more auditory filters) - perceived loudness increases with the number of neighbouring filters responding to the same frequency