Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability


Bibliographic Details
Main Authors: Vice, Jordan, Khan, Masood, Tan, Tele, Yanushkevich, Svetlana
Other Authors: Papadopoulos, George
Format: Conference Paper
Language: English
Published: ieee.org 2022
Online Access:http://hdl.handle.net/20.500.11937/88713
Description
Summary: Independent, discrete models like Paul Ekman's six basic emotions model are widely used in affective state assessment (ASA) and facial expression classification. However, the continuous and dynamic nature of human expressions often needs to be considered to accurately assess facial expressions of affective states. This paper investigates how mutual information-carrying continuous models can be extracted and used in continuous, dynamic facial expression classification systems to improve the efficacy and reliability of ASA systems. A novel, hybrid learning model that projects continuous data onto a multidimensional hyperplane is proposed. Through cosine similarity-based clustering (unsupervised) and classification (supervised) processes, our hybrid approach transforms seven discrete facial expression models into twenty-one facial expression models that include micro-expressions. The proposed continuous, dynamic classifier achieved greater than 73% accuracy when evaluated with Random Forest, Support Vector Machine (SVM), and Neural Network classification architectures. The presented system was validated using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the extended Cohn-Kanade (CK+) dataset.
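As a rough illustration of the unsupervised step in the abstract, the sketch below clusters feature vectors for one discrete expression class by cosine similarity (a spherical k-means variant). Splitting each of the seven discrete models into three sub-clusters would yield the twenty-one models mentioned above, though that split, the feature dimensionality, and all names here are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch: cosine similarity-based clustering of facial
    # expression feature vectors. Feature size (64) and sub-cluster count
    # (k=3) are illustrative assumptions, not taken from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def cosine_sim(a, b):
        # Cosine similarity between every row of a and every row of b.
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return a @ b.T

    def cosine_kmeans(X, k, n_iter=50):
        # Unsupervised step: assign each sample to the centroid with the
        # highest cosine similarity, then recompute centroids.
        centroids = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(n_iter):
            labels = cosine_sim(X, centroids).argmax(axis=1)
            for j in range(k):
                members = X[labels == j]
                if len(members):
                    centroids[j] = members.mean(axis=0)
        return labels, centroids

    # Stand-in features for one discrete class (e.g. "happy"), split into
    # k=3 finer-grained sub-models (e.g. to capture micro-expressions).
    X = rng.normal(size=(300, 64))
    labels, centroids = cosine_kmeans(X, k=3)
    print(np.bincount(labels))  # sizes of the three sub-clusters

The resulting sub-cluster labels could then feed the supervised stage the abstract describes, with a Random Forest, SVM, or Neural Network trained on the expanded label set.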