SPEECH: Impact of hearing loss on speech perception Flashcards
what are some other health problems that are associated with untreated hearing loss?
-heart disease
-depression
-falls
-chronic renal failure
-cognitive decline & dementia
-diabetes
-hospitalisation
-premature death
what issues can hearing loss cause in children?
- Delayed speech and language skills
- Poor school performance
- Poor social skills, e.g. low self-esteem
what is the impact of hearing loss on speech perception?
Hearing loss reduces hearing sensitivity (amplification issue)
Hearing loss reduces perceptual clarity (distortion issue)
what does the speech banana show?
- The speech banana loosely shows the frequency regions where each phoneme appears. It estimates the sound level of each phoneme in speech recorded 1 meter from the talker.
- Based on the audiogram (hearing sensitivity), the perceivable speech can be estimated.
- Ideally, a person with hearing loss can perceive the speech well in quiet if the aided hearing thresholds have fallen into or above the shaded area.
what can we assume when we overlay the speech banana on an audiogram and we see the person's thresholds fall outside the range?
-reduced sensitivity
-We may infer that the person can't hear these speech sounds standing 1 meter from the talker.
What does the term “weighted fraction” refer to in the context of speech intelligibility?
The “weighted fraction” is a numerical value that shows how much of the normal speech signal is effectively available to a listener for understanding speech in a particular environment. It ranges from 0.0, meaning none of the speech signal is available, to 1.0, indicating that the entire speech signal is accessible. This value is calculated based on factors such as the speech channel (e.g., phone call, public announcement) and the level of background noise. It helps predict how well a listener can understand speech in different situations.
how are we able to see how much speech information can be understood?
To understand how well speech can be understood, we examine how much information is present at different frequencies. This helps predict how intelligible the speech will be.
what is the Articulation index (AI) calculation?
AI = Σ (Ii × Ai)
It calculates AI (how well speech can be heard overall) by multiplying Ii (how important each frequency band is for understanding speech) by Ai (how audible speech is in that band), then summing across all bands to get AI.
You can get Ii from the y axis and Ai from the x axis.
The Ii values should all add up to 1, as each is a fraction of 1.0.
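As a minimal sketch, the AI sum can be computed directly. The band importances and audibility values below are hypothetical illustration numbers, not real band-importance functions:

```python
# Articulation Index sketch: AI = sum over bands of Ii * Ai.
# Ii = importance of each frequency band (hypothetical values; must sum to 1.0)
# Ai = audibility of speech in that band (0.0 = inaudible, 1.0 = fully audible)
importance = {250: 0.10, 500: 0.20, 1000: 0.25, 2000: 0.30, 4000: 0.15}
audibility = {250: 1.0, 500: 1.0, 1000: 0.8, 2000: 0.4, 4000: 0.1}

# Importances are fractions of 1.0, so they should sum to 1
assert abs(sum(importance.values()) - 1.0) < 1e-9

ai = sum(importance[f] * audibility[f] for f in importance)
print(round(ai, 3))  # → 0.635
```

With full audibility in every band (all Ai = 1.0), the same sum returns 1.0, i.e. all speech information is available.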
What type of audiogram is referred to as “Count-the-dot-audiogram”?
This audiogram incorporates audibility with AI weighting, allowing AI to be calculated by hand.
How are frequency-importance weightings represented in the Count-the-dot-audiogram?
The number of dots occurring at a specific frequency corresponds to frequency-importance weightings.
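A rough sketch of the counting idea, assuming hypothetical dot positions and thresholds (a real count-the-dot chart uses many more dots, with density encoding importance):

```python
# Count-the-dot sketch: every dot carries equal weight, so denser dots at a
# frequency encode higher importance. AI is estimated as audible dots / total dots.
# Dot positions (frequency_Hz, level_dB_HL) below are hypothetical.
dots = [(500, 35), (1000, 30), (1000, 45), (2000, 40), (2000, 55), (4000, 50)]
thresholds = {500: 40, 1000: 50, 2000: 45, 4000: 45}  # listener's hearing thresholds

# A dot is counted as audible if its level reaches the threshold at that frequency
audible = [d for d in dots if d[1] >= thresholds[d[0]]]
ai_estimate = len(audible) / len(dots)
print(round(ai_estimate, 3))  # → 0.333 (2 of 6 dots audible)
```

Counting dots by hand on the printed chart gives the same result without any calculation beyond tallying.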
what are some other speech intelligibility prediction metrics (both reference-free and those needing a clean reference signal)?
Reference-free:
- Speech intelligibility index (SII) (very similar to AI)
- Speech transmission index (STI): The STI measures some physical characteristics of a transmission channel (a room, electro-acoustic equipment, telephone line, etc.), and expresses the ability of the channel to carry across the characteristics of a speech signal.
Reference-needed (a clean signal is needed as reference):
- Spectrotemporal modulation index (STMI)
- Envelope power spectrum model (EPSM)
How are speech intelligibility prediction metrics utilized in research?
Researchers often use these metrics along with simulated hearing loss to test different hearing device technologies, such as new algorithms of hearing aids or cochlear implants.
What is an example of a speech intelligibility predictor based on automatic speech recognition?
An example is mel-frequency cepstral coefficients (MFCC), although it's not typically discussed in the context of audiology.
what can cause reduced speech intelligibility (poor speech perception performance)?
due to poor frequency selectivity
what is frequency selectivity or spectral resolution?
- Frequency selectivity, also known as spectral resolution, refers to our auditory system’s ability to distinguish and separate different frequencies within a complex sound.
- It’s like the filtering process in our ears that allows us to pick out individual components in a mixture of sounds.
- For instance, if two tuning forks are struck at the same time, each producing a distinct frequency, like one tuned to C (262 Hz) and another to A (440 Hz), we can perceive these as two separate tones rather than a single blended sound.