Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD

Neurodevelopmental conditions like Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) impact a significant number of children and adults worldwide. Currently, such conditions are diagnosed by experts, who employ standard questionnaires and...

Full description

Bibliographic Details
Main Author: Jaiswal, Shashank
Format: Thesis (University of Nottingham only)
Language:English
Published: 2018
Online Access:https://eprints.nottingham.ac.uk/49074/
_version_ 1848797915878785024
author Jaiswal, Shashank
author_facet Jaiswal, Shashank
author_sort Jaiswal, Shashank
building Nottingham Research Data Repository
collection Online Access
description Neurodevelopmental conditions like Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) impact a significant number of children and adults worldwide. Currently, such conditions are diagnosed by experts, who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods are not only subjective, difficult to repeat, and costly, but also extremely time consuming. However, with the recent surge of research into automatic facial behaviour analysis and its varied applications, this field offers a potential way of tackling these diagnostic difficulties. Automatic facial expression recognition is one of its core components, but performing it accurately in an unconstrained environment has always been challenging. This thesis presents a dynamic deep learning framework for robust automatic facial expression recognition. It also proposes an approach for applying this method to facial behaviour analysis, which can help in the diagnosis of conditions like ADHD and ASD. The proposed facial expression algorithm uses a deep Convolutional Neural Network (CNN) to learn models of facial Action Units (AUs). It attempts to model three main distinguishing features of AUs jointly in a CNN: shape, appearance, and short-term dynamics. Appearance is modelled through local image regions relevant to each AU; shape is encoded using binary masks computed from automatically detected facial landmarks; and short-term dynamics are encoded by using a short sequence of images as input to the CNN. In addition, the method employs Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for modelling long-term dynamics. The proposed approach is evaluated on a number of databases, showing state-of-the-art performance for both AU detection and intensity estimation tasks.
The AU intensities estimated using this approach, along with other 3D face tracking data, are used for encoding facial behaviour. The encoded facial behaviour is used to learn models that can help in the detection of ADHD and ASD. This approach was evaluated on the KOMAA database, which was specially collected for this purpose. Experimental results show that facial behaviour encoded in this way provides high discriminative power for classifying people with these conditions. It is shown that the proposed system is a potentially useful, objective and time-saving contribution to the clinical diagnosis of ADHD and ASD.
first_indexed 2025-11-14T20:11:28Z
format Thesis (University of Nottingham only)
id nottingham-49074
institution University of Nottingham Malaysia Campus
institution_category Local University
language English
last_indexed 2025-11-14T20:11:28Z
publishDate 2018
recordtype eprints
repository_type Digital Repository
spelling nottingham-490742025-02-28T12:01:30Z https://eprints.nottingham.ac.uk/49074/ Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD Jaiswal, Shashank Neurodevelopmental conditions like Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorder (ASD) impact a significant number of children and adults worldwide. Currently, such conditions are diagnosed by experts, who employ standard questionnaires and look for certain behavioural markers through manual observation. Such methods are not only subjective, difficult to repeat, and costly, but also extremely time consuming. However, with the recent surge of research into automatic facial behaviour analysis and its varied applications, this field offers a potential way of tackling these diagnostic difficulties. Automatic facial expression recognition is one of its core components, but performing it accurately in an unconstrained environment has always been challenging. This thesis presents a dynamic deep learning framework for robust automatic facial expression recognition. It also proposes an approach for applying this method to facial behaviour analysis, which can help in the diagnosis of conditions like ADHD and ASD. The proposed facial expression algorithm uses a deep Convolutional Neural Network (CNN) to learn models of facial Action Units (AUs). It attempts to model three main distinguishing features of AUs jointly in a CNN: shape, appearance, and short-term dynamics. Appearance is modelled through local image regions relevant to each AU; shape is encoded using binary masks computed from automatically detected facial landmarks; and short-term dynamics are encoded by using a short sequence of images as input to the CNN. In addition, the method employs Bi-directional Long Short-Term Memory (BLSTM) recurrent neural networks for modelling long-term dynamics.
The proposed approach is evaluated on a number of databases, showing state-of-the-art performance for both AU detection and intensity estimation tasks. The AU intensities estimated using this approach, along with other 3D face tracking data, are used for encoding facial behaviour. The encoded facial behaviour is used to learn models that can help in the detection of ADHD and ASD. This approach was evaluated on the KOMAA database, which was specially collected for this purpose. Experimental results show that facial behaviour encoded in this way provides high discriminative power for classifying people with these conditions. It is shown that the proposed system is a potentially useful, objective and time-saving contribution to the clinical diagnosis of ADHD and ASD. 2018 Thesis (University of Nottingham only) NonPeerReviewed application/pdf en arr https://eprints.nottingham.ac.uk/49074/1/thesis_Shashank_Jaiswal2.pdf Jaiswal, Shashank (2018) Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD. PhD thesis, University of Nottingham.
spellingShingle Jaiswal, Shashank
Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD
title Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD
title_full Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD
title_fullStr Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD
title_full_unstemmed Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD
title_short Dynamic deep learning for automatic facial expression recognition and its application in diagnosis of ADHD & ASD
title_sort dynamic deep learning for automatic facial expression recognition and its application in diagnosis of adhd & asd
url https://eprints.nottingham.ac.uk/49074/