End-to-end audiovisual speech recognition

Several end-to-end deep learning approaches have recently been presented which extract either audio or visual features from the input images or audio signals and perform speech recognition. However, research on end-to-end audiovisual models is very limited. In this work, we present an end-to-end audiovisual model based on residual networks and Bidirectional Gated Recurrent Units (BGRUs). To the best of our knowledge, this is the first audiovisual fusion model which simultaneously learns to extract features directly from the image pixels and audio waveforms and performs within-context word recognition on a large publicly available dataset (LRW). The model consists of two streams, one for each modality, which extract features directly from mouth regions and raw waveforms. The temporal dynamics in each stream/modality are modeled by a 2-layer BGRU, and the fusion of multiple streams/modalities takes place via another 2-layer BGRU. A slight improvement in classification rate over an end-to-end audio-only model and an MFCC-based model is reported in clean audio conditions and at low levels of noise. In the presence of high levels of noise, the end-to-end audiovisual model significantly outperforms both audio-only models.
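
The architecture described in the abstract (a per-modality front-end feeding a 2-layer BGRU for each stream, with a further 2-layer BGRU fusing the two streams before word classification) can be summarised in a short sketch. The following PyTorch code is only an illustration of that fusion pattern, not the authors' implementation; the simplified front-ends, feature sizes, sequence lengths and the 500-class output (the LRW word vocabulary) are assumptions.

# Minimal sketch (not the authors' code) of the two-stream BGRU fusion idea:
# each modality has its own front-end and a 2-layer bidirectional GRU, and a
# second 2-layer bidirectional GRU fuses the concatenated streams before a
# word classifier. Front-ends and feature sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AudiovisualFusion(nn.Module):
    def __init__(self, feat_dim=256, hidden=256, num_words=500):
        super().__init__()
        # Visual front-end: a small 2D CNN applied per frame to the mouth ROI
        # (a stand-in for the ResNet front-end described in the paper).
        self.visual_frontend = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        # Audio front-end: strided 1D convolutions over the raw waveform
        # (a stand-in for the 1D residual-network front-end).
        self.audio_frontend = nn.Sequential(
            nn.Conv1d(1, 64, 80, stride=40, padding=20), nn.ReLU(),
            nn.Conv1d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU())
        # Per-modality temporal models: 2-layer bidirectional GRUs.
        self.visual_bgru = nn.GRU(feat_dim, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)
        self.audio_bgru = nn.GRU(feat_dim, hidden, num_layers=2,
                                 batch_first=True, bidirectional=True)
        # Fusion: another 2-layer bidirectional GRU over the concatenated streams.
        self.fusion_bgru = nn.GRU(4 * hidden, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_words)

    def forward(self, frames, waveform):
        # frames: (B, T, 1, H, W) mouth crops; waveform: (B, samples) raw audio.
        b, t = frames.shape[:2]
        v = self.visual_frontend(frames.flatten(0, 1)).view(b, t, -1)
        a = self.audio_frontend(waveform.unsqueeze(1))
        # Resample the audio feature sequence to the video frame rate so the
        # two streams can be concatenated per time step.
        a = nn.functional.interpolate(a, size=t).transpose(1, 2)
        v, _ = self.visual_bgru(v)
        a, _ = self.audio_bgru(a)
        fused, _ = self.fusion_bgru(torch.cat([v, a], dim=-1))
        return self.classifier(fused[:, -1])  # word logits from the last step

model = AudiovisualFusion()
logits = model(torch.randn(2, 29, 1, 96, 96), torch.randn(2, 19456))
print(logits.shape)  # torch.Size([2, 500])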

Bibliographic Details
Main Authors: Petridis, Stavros, Stafylakis, Themos, Ma, Pingchuan, Cai, Feipeng, Tzimiropoulos, Georgios, Pantic, Maja
Format: Conference or Workshop Item
Language: English
Published: 2018
Online Access: https://eprints.nottingham.ac.uk/51132/
Citation: Petridis, Stavros, Stafylakis, Themos, Ma, Pingchuan, Cai, Feipeng, Tzimiropoulos, Georgios and Pantic, Maja (2018) End-to-end audiovisual speech recognition. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, 15-20 April 2018, Calgary, Alberta, Canada.
Full Text (PDF): https://eprints.nottingham.ac.uk/51132/1/av_speech1.pdf
Peer Reviewed: Yes