Investigating the neural code for dynamic speech and the effect of signal degradation

Bibliographic Details
Main Author: Steadman, Mark
Format: Thesis (University of Nottingham only)
Language: English
Published: 2015
Subjects:
Online Access: https://eprints.nottingham.ac.uk/28839/
author Steadman, Mark
building Nottingham Research Data Repository
collection Online Access
description It is common practice in psychophysical studies to investigate speech processing by manipulating or reducing spectral and temporal information in the input signal. Such investigations, along with the often surprising performance of modern cochlear implants, have highlighted the robustness of the auditory system to severe degradation and suggest that the ability to discriminate speech sounds is fundamentally limited by the complexity of the input signal. It is not clear, however, how and to what extent this is underpinned by neural processing mechanisms. This thesis examines the effect of reducing spectral and temporal information in the signal on its neural representation. A stimulus set from an existing psychophysical study was emulated, comprising 16 vowel-consonant-vowel phoneme sequences (VCVs), each produced by multiple talkers, which were parametrically degraded using a noise vocoder. Neuronal representations were simulated using a published computational model of the auditory nerve. Representations were also recorded in the inferior colliculus (IC) and auditory cortex (AC) of anaesthetised guinea pigs. Their discriminability was quantified using a novel neural classifier. Consistent with investigations using simple stimuli, high-rate envelope modulations in complex signals are represented in the auditory nerve and midbrain. It is demonstrated here that representations of these features are effective in a closed-set speech recognition task where appropriate decoding mechanisms are available, yet they do not appear to be accessible perceptually. Optimal encoding windows for speech discrimination increase from the order of 1 millisecond in the auditory nerve to tens of milliseconds in the IC and AC. Recent publications suggest that millisecond-precise neuronal activity is important for speech recognition. It is demonstrated here that the relevance of millisecond-precise responses in this context is highly dependent on the brain region, the nature of the speech recognition task and the complexity of the stimulus set.
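The description states that the VCV stimuli were "parametrically degraded using a noise vocoder", but the thesis's own implementation and parameters are not given in this record. The sketch below is therefore only a minimal illustration of the general technique, assuming a standard channel vocoder (log-spaced band-pass filters, Hilbert envelopes, band-limited noise carriers); the function name noise_vocode and all parameter names and defaults are illustrative placeholders, not taken from the thesis.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=8000.0, env_cutoff=None):
        """Degrade a signal with a generic channel noise vocoder (sketch).

        The signal is split into n_channels log-spaced bands, each band's
        amplitude envelope is extracted and optionally low-pass filtered at
        env_cutoff Hz, and each envelope modulates band-limited noise in the
        same frequency region. fs must exceed 2 * f_hi.
        """
        edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
        noise = np.random.randn(len(signal))
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
            band = sosfiltfilt(sos, signal)
            env = np.abs(hilbert(band))              # Hilbert amplitude envelope
            if env_cutoff is not None:               # limit envelope modulation rate
                sos_env = butter(4, env_cutoff, btype='lowpass', fs=fs, output='sos')
                env = np.clip(sosfiltfilt(sos_env, env), 0.0, None)
            out += env * sosfiltfilt(sos, noise)     # envelope-modulated noise band
        # Match the overall RMS level of the original signal.
        out *= np.sqrt(np.mean(signal ** 2) / (np.mean(out ** 2) + 1e-12))
        return out

    # Example: progressively coarser spectral resolution on a dummy signal.
    fs = 22050
    t = np.arange(int(0.5 * fs)) / fs
    dummy = np.sin(2 * np.pi * 440 * t) * np.hanning(len(t))
    for n in (16, 8, 4, 1):
        degraded = noise_vocode(dummy, fs, n_channels=n)

In a scheme of this kind, reducing n_channels removes spectral detail and lowering env_cutoff removes fast envelope modulations, corresponding to the two axes of degradation described in the abstract.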
format Thesis (University of Nottingham only)
id nottingham-28839
institution University of Nottingham Malaysia Campus
institution_category Local University
language English
publishDate 2015
recordtype eprints
repository_type Digital Repository
citation Steadman, Mark (2015) Investigating the neural code for dynamic speech and the effect of signal degradation. PhD thesis, University of Nottingham.
date 2015-07-15
fulltext (application/pdf, non-peer-reviewed) https://eprints.nottingham.ac.uk/28839/1/Steadman2015_PhD_thesis.pdf
title Investigating the neural code for dynamic speech and the effect of signal degradation
topic Auditory
hearing
neuroscience
neural coding
auditory nerve
inferior colliculus
auditory cortex
speech
consonants
electrophysiology
machine learning
sound
url https://eprints.nottingham.ac.uk/28839/