Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability
Independent, discrete models like Paul Ekman’s six basic emotions model are widely used in affective state assessment (ASA) and facial expression classification. However, the continuous and dynamic nature of human expressions must often be considered to accurately assess facial expressions of affective states. This paper investigates how mutual information-carrying continuous models can be extracted and used in continuous, dynamic facial expression classification systems to improve the efficacy and reliability of ASA systems. A novel hybrid learning model that projects continuous data onto a multidimensional hyperplane is proposed. Through cosine similarity-based clustering (unsupervised) and classification (supervised) processes, our hybrid approach transforms seven discrete facial expression models into twenty-one facial expression models that include micro-expressions. The proposed continuous, dynamic classifier achieved greater than 73% accuracy when evaluated with Random Forest, Support Vector Machine (SVM) and Neural Network classification architectures. The presented system was validated using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the extended Cohn-Kanade (CK+) dataset.
| Main Authors: | Vice, Jordan; Khan, Masood; Tan, Tele; Yanushkevich, Svetlana |
|---|---|
| Other Authors: | Papadopoulos, George; Angelov, Plamen |
| Format: | Conference Paper |
| Language: | English |
| Published: | ieee.org, 2022 |
| Subjects: | 4601 - Applied computing; 4602 - Artificial intelligence; 4603 - Computer vision and multimedia computation; 4611 - Machine learning; 0915 - Interdisciplinary Engineering |
| Online Access: | http://hdl.handle.net/20.500.11937/88713 |
| author | Vice, Jordan; Khan, Masood; Tan, Tele; Yanushkevich, Svetlana |
|---|---|
| author2 | Papadopoulos, George; Angelov, Plamen |
| building | Curtin Institutional Repository |
| collection | Online Access |
| description | Independent, discrete models like Paul Ekman’s six basic emotions model are widely used in affective state assessment (ASA) and facial expression classification. However, the continuous and dynamic nature of human expressions must often be considered to accurately assess facial expressions of affective states. This paper investigates how mutual information-carrying continuous models can be extracted and used in continuous, dynamic facial expression classification systems to improve the efficacy and reliability of ASA systems. A novel hybrid learning model that projects continuous data onto a multidimensional hyperplane is proposed. Through cosine similarity-based clustering (unsupervised) and classification (supervised) processes, our hybrid approach transforms seven discrete facial expression models into twenty-one facial expression models that include micro-expressions. The proposed continuous, dynamic classifier achieved greater than 73% accuracy when evaluated with Random Forest, Support Vector Machine (SVM) and Neural Network classification architectures. The presented system was validated using the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) and the extended Cohn-Kanade (CK+) dataset. |
| format | Conference Paper |
| id | curtin-20.500.11937-88713 |
| institution | Curtin University Malaysia |
| institution_category | Local University |
| language | English |
| publishDate | 2022 |
| publisher | ieee.org |
| recordtype | eprints |
| repository_type | Digital Repository |
| doi | 10.1109/EAIS51927.2022.9787730 |
| access | restricted |
| title | Dynamic Hybrid Learning for Improving Facial Expression Classifier Reliability |
| topic | 4601 - Applied computing; 4602 - Artificial intelligence; 4603 - Computer vision and multimedia computation; 4611 - Machine learning; 0915 - Interdisciplinary Engineering |
| url | http://hdl.handle.net/20.500.11937/88713 |
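
The hybrid pipeline the abstract describes (unsupervised cosine similarity-based clustering followed by supervised classification over an expanded label set) can be illustrated with a short sketch. The sketch below is not the paper's implementation: it uses synthetic stand-in features, assumes the twenty-one classes arise from splitting each of the seven discrete classes into three cosine-similarity sub-clusters (7 × 3 = 21), and uses scikit-learn's RandomForestClassifier as one of the three architectures the abstract names.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in features (e.g. facial landmark or action-unit
# vectors), each carrying one of seven discrete expression labels.
X = rng.normal(size=(700, 64))
y_discrete = rng.integers(0, 7, size=700)

# Unsupervised step (assumption, not the paper's stated procedure):
# within each discrete class, rank samples by cosine similarity to the
# class centroid and split them into terciles, expanding 7 discrete
# labels into 21 fine-grained ones (7 x 3 = 21).
y_fine = np.empty_like(y_discrete)
for c in range(7):
    idx = np.where(y_discrete == c)[0]
    centroid = X[idx].mean(axis=0, keepdims=True)
    sim = cosine_similarity(X[idx], centroid).ravel()
    cuts = np.quantile(sim, [1 / 3, 2 / 3])
    level = np.digitize(sim, cuts)          # 0, 1 or 2 per sample
    y_fine[idx] = c * 3 + level

# Supervised step: train a conventional classifier on the 21 fine labels.
X_tr, X_te, y_tr, y_te = train_test_split(X, y_fine, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"21-class hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```

Swapping `RandomForestClassifier` for `sklearn.svm.SVC` or `sklearn.neural_network.MLPClassifier` gives the other two classification architectures the abstract reports evaluating.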