A dynamic appearance descriptor approach to facial actions temporal modeling
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of the temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier, which detects the temporal segments on a frame-by-frame basis, with Markov models that enforce temporal consistency over the whole episode. The system is evaluated in detail in database-dependent experiments on the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches on the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
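The method pairs per-frame segment classification with Markov models that enforce the legal neutral → onset → apex → offset progression of an AU episode. The record does not reproduce the authors' exact classifier or Markov formulation, so the sketch below is a minimal, hypothetical illustration: it takes generic per-frame log-probabilities (for instance from any frame-level classifier over LPQ-TOP features) and Viterbi-decodes the most likely temporally consistent segment sequence. The transition matrix and all names are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Temporal segments of a FACS Action Unit episode, in their legal order.
STATES = ["neutral", "onset", "apex", "offset"]

# Hypothetical transition matrix (NOT from the paper): each segment may
# persist or advance to the next one; zeros forbid illegal jumps.
TRANS = np.array([
    [0.90, 0.10, 0.00, 0.00],   # neutral -> neutral | onset
    [0.00, 0.80, 0.20, 0.00],   # onset   -> onset   | apex
    [0.00, 0.00, 0.90, 0.10],   # apex    -> apex    | offset
    [0.10, 0.00, 0.00, 0.90],   # offset  -> offset  | neutral
])

def viterbi(frame_log_probs, trans=TRANS):
    """Decode the most likely segment sequence from per-frame scores.

    frame_log_probs: (T, 4) array of log P(segment | frame t), e.g. the
    calibrated outputs of a frame-level classifier on LPQ-TOP features.
    """
    with np.errstate(divide="ignore"):      # log(0) -> -inf blocks illegal jumps
        log_trans = np.log(trans)
    T, S = frame_log_probs.shape
    delta = np.empty((T, S))                # best score ending in state s at t
    back = np.zeros((T, S), dtype=int)      # argmax backpointers
    delta[0] = np.log(1.0 / S) + frame_log_probs[0]   # uniform state prior
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans    # (prev, cur) pairs
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + frame_log_probs[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):          # trace the best path backwards
        path[t] = back[t + 1, path[t + 1]]
    return [STATES[s] for s in path]

# Toy usage: noisy frame scores for a neutral-onset-apex-offset episode.
rng = np.random.default_rng(0)
true = [0] * 4 + [1] * 3 + [2] * 5 + [3] * 3
probs = np.full((len(true), 4), 0.1)
probs[np.arange(len(true)), true] = 0.7
noisy = np.log(probs) + 0.5 * rng.standard_normal((len(true), 4))
print(viterbi(noisy))
```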
| Main Authors: | Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja |
|---|---|
| Format: | Article |
| Published: | IEEE, 2014 |
| Subjects: | Facial dynamics; action unit detection; dynamic appearance descriptors; LPQ-TOP; temporal segment detection |
| Online Access: | https://eprints.nottingham.ac.uk/37396/ |
| author | Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja |
|---|---|
| building | Nottingham Research Data Repository |
| collection | Online Access |
| description | Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments of Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of the temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier, which detects the temporal segments on a frame-by-frame basis, with Markov models that enforce temporal consistency over the whole episode. The system is evaluated in detail in database-dependent experiments on the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches on the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information. |
| format | Article |
| id | nottingham-37396 |
| institution | University of Nottingham Malaysia Campus |
| institution_category | Local University |
| publishDate | 2014 |
| publisher | IEEE |
| recordtype | eprints |
| repository_type | Digital Repository |
| citation | Jiang, Bihan, Valstar, Michel, Martinez, Brais and Pantic, Maja (2014) A dynamic appearance descriptor approach to facial actions temporal modeling. IEEE Transactions on Cybernetics, 44 (2). pp. 161-174. ISSN 2168-2275 |
| doi | 10.1109/TCYB.2013.2249063 |
| official_url | http://ieeexplore.ieee.org/document/6504484/ |
| refereed | PeerReviewed |
| title | A dynamic appearance descriptor approach to facial actions temporal modeling |
| topic | Facial dynamics; action unit detection; dynamic appearance descriptors; LPQ-TOP; temporal segment detection |
| url | https://eprints.nottingham.ac.uk/37396/ |
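For intuition about the LPQ-TOP descriptor named in the record's keywords, the following is a minimal NumPy sketch of the standard published LPQ recipe (a short-term Fourier transform over a local window, four low-frequency coefficients, sign quantization into 8-bit codes, a 256-bin histogram) applied to the three orthogonal planes (XY, XT, YT) of a grayscale video volume. The window size, normalization, and brute-force filtering loops are simplifying assumptions for readability, not the paper's implementation.

```python
import numpy as np

def lpq_hist(img, m=5):
    """256-bin LPQ histogram of a 2-D array (standard LPQ recipe).

    At each pixel an m x m short-term Fourier transform is evaluated at the
    four lowest non-zero frequencies; the signs of the 8 real/imaginary
    parts form an 8-bit code, and codes are histogrammed over the image.
    """
    r = m // 2
    a = 1.0 / m
    x = np.arange(-r, r + 1)
    w0 = np.ones(m)                       # DC along one axis
    w1 = np.exp(-2j * np.pi * a * x)      # frequency a along one axis
    # Separable filters for frequencies [a,0], [0,a], [a,a], [a,-a].
    filters = [np.outer(w0, w1), np.outer(w1, w0),
               np.outer(w1, w1), np.outer(w1, np.conj(w1))]
    H, W = img.shape
    codes = np.zeros((H - 2 * r, W - 2 * r), dtype=int)
    for k, f in enumerate(filters):
        resp = np.empty(codes.shape, dtype=complex)
        for i in range(codes.shape[0]):        # brute-force 'valid' filtering;
            for j in range(codes.shape[1]):    # clear but slow, for illustration
                resp[i, j] = np.sum(img[i:i + m, j:j + m] * f)
        codes += (resp.real >= 0).astype(int) << (2 * k)
        codes += (resp.imag >= 0).astype(int) << (2 * k + 1)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

def lpq_top(volume, m=5):
    """Concatenate mean LPQ histograms over the XY, XT and YT planes."""
    T, H, W = volume.shape                # (frames, rows, cols), all >= m
    h_xy = np.mean([lpq_hist(volume[t], m) for t in range(T)], axis=0)
    h_xt = np.mean([lpq_hist(volume[:, y, :], m) for y in range(H)], axis=0)
    h_yt = np.mean([lpq_hist(volume[:, :, x], m) for x in range(W)], axis=0)
    return np.concatenate([h_xy, h_xt, h_yt])   # 768-dim descriptor

# Toy usage: a random 12-frame 24x24 clip yields a 768-dim feature vector.
clip = np.random.default_rng(0).random((12, 24, 24))
print(lpq_top(clip).shape)                      # (768,)
```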