Summary: Independent, discrete models like Paul Ekman’s six
basic emotions model are widely used in affective state assessment
(ASA) and facial expression classification. However, the continuous and dynamic nature of human expressions must often be considered to accurately assess facial expressions of affective states. This paper investigates how mutual-information-carrying continuous models can be extracted and used in continuous, dynamic facial expression classification systems to improve the efficacy and reliability of ASA systems. A novel, hybrid learning
model that projects continuous data onto a multidimensional
hyperplane is proposed. Through cosine-similarity-based clustering (unsupervised) and classification (supervised) processes, our hybrid approach transforms seven discrete facial expression models into twenty-one facial expression models that include micro-expressions. The proposed continuous, dynamic classifier achieved greater than 73% accuracy when evaluated with Random Forest, Support Vector Machine (SVM), and Neural Network classification architectures. The
presented system was validated using the Ryerson Audio-Visual
Database of Emotional Speech and Song (RAVDESS) and the
extended Cohn-Kanade (CK+) dataset.
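To illustrate the unsupervised stage mentioned above, the sketch below clusters facial-expression feature vectors by cosine similarity in a spherical k-means style. It is a minimal sketch, not the paper's implementation: the feature extraction, the hyperplane projection, and the mapping from seven discrete classes to twenty-one expression models are assumed and not shown, and the function name cosine_cluster, the choice n_clusters=21, and the synthetic input data are placeholders.

```python
# Minimal sketch of cosine-similarity-based clustering (spherical k-means
# style). Assumes facial-expression feature vectors as rows of X; the
# upstream feature extraction is not shown.
import numpy as np

def cosine_cluster(X, n_clusters=21, n_iter=50, seed=0):
    """Cluster feature vectors by cosine similarity."""
    rng = np.random.default_rng(seed)
    # L2-normalize rows so dot products equal cosine similarities.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Initialize centroids from randomly chosen samples.
    centroids = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # Assign each sample to the centroid with the highest cosine similarity.
        labels = np.argmax(X @ centroids.T, axis=1)
        # Recompute each centroid as the normalized mean of its members.
        for k in range(n_clusters):
            members = X[labels == k]
            if len(members):
                c = members.mean(axis=0)
                centroids[k] = c / np.linalg.norm(c)
    return labels, centroids

# Example with synthetic 64-dimensional feature vectors (placeholder data).
X = np.random.default_rng(1).normal(size=(200, 64))
labels, _ = cosine_cluster(X)
print(labels[:10])
```

Normalizing the vectors first means the argmax over dot products is exactly an argmax over cosine similarities, which keeps the assignment step a single matrix product.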