Combining residual networks with LSTMs for lipreading


Bibliographic Details
Main Authors: Stafylakis, Themos, Tzimiropoulos, Georgios
Format: Conference or Workshop Item
Published: 2017
Online Access: https://eprints.nottingham.ac.uk/44756/
Description
Summary: We propose an end-to-end deep learning architecture for word-level visual speech recognition. The system combines spatiotemporal convolutional, residual, and bidirectional Long Short-Term Memory networks. We trained and evaluated it on the Lipreading In-The-Wild benchmark, a challenging database with a 500-word vocabulary, consisting of video excerpts from BBC TV broadcasts. The proposed network attains a word accuracy of 83.0%, a 6.8% absolute improvement over the current state-of-the-art.
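
The pipeline described in the summary (spatiotemporal convolutional front-end, residual blocks, bidirectional LSTM back-end, 500-way word classifier) can be sketched roughly as below. This is an illustrative PyTorch sketch, not the authors' implementation: all layer widths, kernel sizes, and the single toy residual block are assumptions standing in for the full ResNet described in the paper.

```python
# Hedged sketch of the architecture outlined in the abstract:
# 3D (spatiotemporal) convolution -> residual block -> BiLSTM -> word classifier.
# Every hyperparameter here is illustrative, not the published configuration.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Toy 2D residual block: y = ReLU(x + F(x))."""

    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(ch)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(ch)

    def forward(self, x):
        y = torch.relu(self.bn1(self.conv1(x)))
        y = self.bn2(self.conv2(y))
        return torch.relu(x + y)  # identity shortcut


class LipreadingNet(nn.Module):
    def __init__(self, num_words=500):
        super().__init__()
        # Spatiotemporal front-end: convolves jointly over time and space.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=(5, 7, 7),
                      stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(32),
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),
        )
        self.resblock = ResidualBlock(32)        # stand-in for a full ResNet
        self.pool = nn.AdaptiveAvgPool2d(1)      # one feature vector per frame
        self.blstm = nn.LSTM(32, 64, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, num_words)

    def forward(self, x):
        # x: (batch, 1, time, height, width) grayscale mouth-region clip
        f = self.frontend(x)                     # (batch, C, time, h, w)
        b, c, t, h, w = f.shape
        f = f.transpose(1, 2).reshape(b * t, c, h, w)
        f = self.resblock(f)                     # per-frame residual features
        f = self.pool(f).reshape(b, t, c)        # (batch, time, C)
        out, _ = self.blstm(f)                   # (batch, time, 2*hidden)
        return self.classifier(out.mean(dim=1))  # average over time -> word logits
```

For example, a batch of two 8-frame 64x64 clips, `net(torch.randn(2, 1, 8, 64, 64))`, yields a `(2, 500)` tensor of word logits, one score per vocabulary entry.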