Learning to transfer: transferring latent task structures and its application to person-specific facial action unit detection

In this article we explore the problem of constructing person-specific models for the detection of facial Action Units (AUs), addressing the problem from the point of view of Transfer Learning and Multi-Task Learning. Our starting point is the fact that some expressions, such as smiles, are very easily elicited, annotated, and automatically detected, while others are much harder to elicit and to annotate. We thus consider a novel problem: all AU models for the target subject are to be learnt using person-specific annotated data for a reference AU (AU12 in our case), and no data or little data regarding the target AU. In order to design such a model, we propose a novel Multi-Task Learning and the associated Transfer Learning framework, in which we consider both relations across subjects and AUs. That is to say, we consider a tensor structure among the tasks. Our approach hinges on learning the latent relations among tasks using one single reference AU, and then transferring these latent relations to other AUs. We show that we are able to effectively make use of the annotated data for AU12 when learning other person-specific AU models, even in the absence of data for the target task. Finally, we show the excellent performance of our method when small amounts of annotated data for the target tasks are made available.

Bibliographic Details
Main Authors: Almaev, Timur, Martinez, Brais, Valstar, Michel F.
Format: Conference or Workshop Item
Published: 2015
Online Access: https://eprints.nottingham.ac.uk/31306/
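The abstract's central idea — learn latent relations among subjects from one well-annotated reference AU, then reuse those relations to build person-specific models for a target AU — can be sketched with off-the-shelf tools. The following is a minimal illustration on synthetic data, not the authors' implementation: it stands in ridge regression and an SVD factorisation for the paper's Multi-Task Learning formulation, and every variable name and dimension is invented.

```python
import numpy as np

# Illustrative sketch (not the authors' method): learn per-subject linear
# detectors for a reference task, factor them to recover latent subject
# coefficients, then reuse those coefficients for a target task on a
# subject with no target-task labels at all. All data are synthetic.
rng = np.random.default_rng(0)
n_subjects, n_feats, n_latent, n_samples = 6, 20, 3, 300

# Ground truth: each subject mixes a few latent directions; the mixing
# coefficients are shared across tasks, only the latent basis changes.
S_true = rng.normal(size=(n_subjects, n_latent))   # subject coefficients
L_ref = rng.normal(size=(n_latent, n_feats))       # reference-task basis
L_tgt = rng.normal(size=(n_latent, n_feats))       # target-task basis

def make_task(L):
    X = rng.normal(size=(n_subjects, n_samples, n_feats))
    y = np.sign(np.einsum('snf,sf->sn', X, S_true @ L))
    return X, y

X_ref, y_ref = make_task(L_ref)
X_tgt, y_tgt = make_task(L_tgt)

def ridge(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# 1) Per-subject models for the reference task (analogue of AU12).
W_ref = np.stack([ridge(X_ref[s], y_ref[s]) for s in range(n_subjects)])

# 2) Factor W_ref to expose latent subject structure: W_ref ~ S @ basis.
U, sv, _ = np.linalg.svd(W_ref, full_matrices=False)
S = U[:, :n_latent] * sv[:n_latent]                # latent subject coords

# 3) Target task: labels exist for every subject except the last one.
#    Subject s's model is x -> x @ (S[s] @ L), which is linear in the
#    features kron(S[s], x), so one joint ridge fit recovers L.
train = range(n_subjects - 1)
Phi = np.vstack([np.kron(S[s], X_tgt[s][i])
                 for s in train for i in range(n_samples)])
ys = np.concatenate([y_tgt[s] for s in train])
L_hat = ridge(Phi, ys).reshape(n_latent, n_feats)

# 4) Person-specific target model for the held-out subject, built with
#    zero target-task labels for that subject.
s_new = n_subjects - 1
pred = np.sign(X_tgt[s_new] @ (S[s_new] @ L_hat))
acc = (pred == y_tgt[s_new]).mean()
print(f"target-task accuracy for unseen subject: {acc:.2f}")
```

Because the subject coefficients S are held fixed while only the task basis is refit, the held-out subject gets a target-task model without contributing a single target-task label — the analogue of building other AU models from that subject's AU12 data alone.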
Citation: Almaev, Timur, Martinez, Brais and Valstar, Michel F. (2015) Learning to transfer: transferring latent task structures and its application to person-specific facial action unit detection. In: ICCV15, International Conference on Computer Vision, 11-18 Dec 2015, Santiago, Chile.
PDF: http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Almaev_Learning_to_Transfer_ICCV_2015_paper.pdf