Neural coding of overlapping speech and the effects of hearing loss

How listeners identify speech that is masked by an overlapping speaker remains a key problem in auditory neuroscience. A mild hearing impairment can profoundly alter a listener’s capacity to identify speech amongst interfering talkers. Existing theories propose that the auditory system uses cues (e.g. pitch, onset/offset timing) to segregate neural representations of overlapping speech. Historically, much attention has been paid to the benefit of fundamental frequency cues for the identification of concurrently presented vowels, yet segregation-based models fail to adequately explain human behaviour. Proposed here is a model devoid of segregation that instead embodies an optimal strategy of predicting the combined representations of vowels. The model quantitatively replicated listeners’ identification of concurrent-vowel pairs, an improvement on segregation-based models. This held whether the neural representations were simulated with a filter-based model of the auditory periphery or recorded from multi-channel electrodes in the guinea pig midbrain. The compressive nature of auditory encoding appeared to facilitate a strategy of predicting the combined representation of speech. In addition, neural coding did not support previously proposed segregation cues such as temporally encoded pitch and harmonic beats. Neural recordings were also gathered in response to overlapping vowel-consonant stimuli, presented at variable onset delays, from animals with normal hearing or a noise-induced hearing loss. Poorer speech recognition was predicted from the hearing-loss data, compounded by an inability to optimally utilise knowledge of interfering speech sounds. Overall, investigating the auditory system’s goals appears a promising approach to understanding the neural processes underlying speech identification when masked by a competing talker.

Bibliographic Details
Main Author: Smith, Samuel S.
Format: Thesis (University of Nottingham only)
Language: English
Published: 2020
Subjects: speech; hearing loss; auditory midbrain; neural coding; computational modelling
Online Access: https://eprints.nottingham.ac.uk/59942/
Citation: Smith, Samuel S. (2020) Neural coding of overlapping speech and the effects of hearing loss. PhD thesis, University of Nottingham.
Full text (PDF): https://eprints.nottingham.ac.uk/59942/1/SSmith_2019_PhDThesis_14289124_CORRECTED.pdf