Multimodal models for contextual affect assessment in real-time

Full description

Most affect classification schemes rely on single-cue models that, although nearly accurate, fall short of the required accuracy under certain conditions. We investigate how the holism of a multimodal solution can be exploited for affect classification. This paper presents the design and implementation of a prototype, stand-alone, real-time multimodal affective state classification system. The presented system utilizes speech and facial muscle movements to create a holistic classifier. The system combines a facial expression classifier with a speech classifier that analyses speech through its paralanguage and propositional content. The proposed classification scheme includes a Support Vector Machine (SVM) for paralanguage, a K-Nearest Neighbor (KNN) classifier for propositional content, and an InceptionV3 neural network for facial expressions of affective states. The SVM and Inception models achieved validation accuracies of 99.2% and 92.78%, respectively.
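The abstract describes a late-fusion arrangement: one classifier per cue (SVM on paralanguage, KNN on propositional content, InceptionV3 on facial expressions), combined into a holistic affect classifier. Below is a minimal sketch of that kind of multimodal fusion, not the authors' implementation: the feature matrices are synthetic, a logistic-regression classifier over precomputed features stands in for the InceptionV3 facial model, the equal-weight probability averaging in classify_affect is an assumption, and all names (X_para, X_text, X_face, classify_affect) are illustrative.

```python
# Minimal late-fusion sketch of a multimodal affect classifier (illustrative only).
# Assumptions not in the source: synthetic features, 4 affective states,
# equal fusion weights, and a logistic-regression stand-in for InceptionV3.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes = 300, 4                                    # hypothetical: 4 affective states
y = rng.integers(0, n_classes, n)                        # one shared label vector

# Synthetic per-modality feature matrices (shifted by label so they are learnable).
X_para = rng.normal(size=(n, 13)) + y[:, None] * 0.5     # paralanguage, e.g. prosodic statistics
X_text = rng.normal(size=(n, 50)) + y[:, None] * 0.3     # propositional content, e.g. text embedding
X_face = rng.normal(size=(n, 2048)) + y[:, None] * 0.2   # facial features, stand-in for CNN activations

# One classifier per cue, mirroring the SVM / KNN / CNN split in the abstract.
svm = SVC(probability=True).fit(X_para, y)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_text, y)
face = LogisticRegression(max_iter=1000).fit(X_face, y)  # placeholder for the InceptionV3 model

def classify_affect(x_para, x_text, x_face):
    """Fuse per-modality class probabilities by simple averaging (an assumed fusion rule)."""
    probs = (svm.predict_proba(x_para)
             + knn.predict_proba(x_text)
             + face.predict_proba(x_face)) / 3.0
    return probs.argmax(axis=1)

print(classify_affect(X_para[:5], X_text[:5], X_face[:5]))
```

Soft fusion of per-modality probabilities keeps each cue's classifier independent, so an unreliable modality can be down-weighted or dropped without retraining the others.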

Bibliographic Details
Main Authors: Vice, J., Khan, Masood, Yanushkevich, S.
Format: Conference Paper
Published: 2019
DOI: 10.1109/CogMI48466.2019.00020
Online Access: http://hdl.handle.net/20.500.11937/80264