Unsupervised learning of facial expression components

The face is one of the most important means of non-verbal communication. Much can be learned about a person's emotional state simply by observing their facial expression. This is relatively easy in face-to-face communication, but far harder in human-computer interaction. Researchers have widely used supervised learning to train machines to recognise facial expressions as humans do. However, supervised learning has significant limitations, one of which is its reliance on labelled face images to train models that identify facial actions. Labelling face images is expensive and time-consuming: annotating a five-minute video takes about an hour. In addition, more than 7,000 distinct facial expressions can be produced by combining different facial muscle actions [72]. The labelled face images available for facial expression analysis are limited in number and do not cover all possible expressions. By contrast, large amounts of raw, unlabelled recordings of people communicating can be collected at little cost. This research therefore used unsupervised learning to identify facial action units. Similar appearance changes were identified in a large dataset of face images without prior knowledge of which facial expressions they belong to, and the resulting patterns (learned features) were then assigned to distinct facial expressions. For comparison with the supervised approach, the learned features were also used to train models to identify facial actions. Results showed that in 70% of cases the accuracy of the unsupervised learning model surpassed that of the supervised learning model.
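
The abstract does not state which features or clustering algorithm were used to group similar appearance changes; the record's keywords mention local binary patterns (LBP). Purely as an illustrative sketch under those assumptions (not necessarily the method used in the dissertation), unlabelled face images could be described by LBP histograms and grouped with k-means:

    # Illustrative sketch only: the dissertation abstract does not name the exact
    # algorithm. This assumes LBP texture features (per the record's keywords) and
    # k-means clustering as one plausible unsupervised pipeline.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.cluster import KMeans

    def lbp_histogram(gray_face, points=8, radius=1):
        # Describe one grayscale face image by a histogram of uniform LBP codes.
        codes = local_binary_pattern(gray_face, points, radius, method="uniform")
        n_bins = points + 2  # "uniform" LBP produces P + 2 distinct code values
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    def cluster_appearance_changes(gray_faces, n_clusters=10):
        # Group unlabelled faces by appearance; no expression labels are used.
        features = np.array([lbp_histogram(face) for face in gray_faces])
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

In such a pipeline, each cluster would correspond to a recurring appearance pattern that could afterwards be matched to a facial action unit; k-means is only one of several possible clustering choices.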

Bibliographic Details
Main Author: Egede, Joy Onyekachukwu
Format: Dissertation (University of Nottingham only)
Language: English
Published: 2013
Subjects: Unsupervised learning; Facial expressions; Action Units; LBP; Local binary patterns
Online Access: https://eprints.nottingham.ac.uk/30902/
Full Text (PDF): https://eprints.nottingham.ac.uk/30902/1/Egede%2C%20Joy%20Onyekachukwu.pdf