Personality recognition using composite audio-video features on custom CNN architecture

Automatic personality recognition is becoming increasingly prominent in the domain of intelligent job matching. Traditionally, individual personality traits are measured through questionnaires carefully designed around personality models such as the Big Five or MBTI. Although the attributes in these models are...

Full description

Bibliographic Details
Main Author: Eng, Zi Jye
Format: Final Year Project / Dissertation / Thesis
Published: 2020
Subjects:
Online Access:http://eprints.utar.edu.my/3864/
http://eprints.utar.edu.my/3864/1/16ACB05282_FYP.pdf
_version_ 1848886011048755200
author Eng, Zi Jye
author_facet Eng, Zi Jye
author_sort Eng, Zi Jye
building UTAR Institutional Repository
collection Online Access
description Automatic personality recognition is becoming increasingly prominent in the domain of intelligent job matching. Traditionally, individual personality traits are measured through questionnaires carefully designed around personality models such as the Big Five or MBTI. Although the attributes in these models are proven effective, data collection through surveys can yield biased scores due to illusory superiority. Machine-learning-based personality models alleviate these constraints by modelling behavioural cues from videos annotated by personality experts; for example, the ECCV ChaLearn LAP 2016 challenge seeks to recognise and quantify human personality traits. Using variants of CNNs, existing methods attempt to improve model accuracy by adding custom layers and tuning hyperparameters, training on the full ChaLearn LAP 2016 dataset, which is compute-intensive. This project proposes a rapid behavioural-modelling technique for short videos that improves model accuracy and prevents overfitting while minimising the amount of training data needed. The contribution of this work is twofold: (1) a selective sampling technique that uses only the first seven seconds of each video for training, and (2) a personality-trait recognition model that achieves near-optimal performance on a limited dataset. By applying the selective sampling technique and including multiple modalities, the model achieves a test score of 90.30 with nearly six times less training data.
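The selective sampling step described above, training on only the first seven seconds of each clip, can be sketched as follows. This is a minimal illustration, not code from the thesis: the function name, the seven-second window, and the choice of 16 uniformly spaced frames are assumptions for demonstration.

```python
def first_seconds_frame_indices(fps, total_frames, window_s=7.0, n_samples=16):
    """Return n_samples frame indices drawn uniformly from the first
    window_s seconds of a clip (selective sampling)."""
    # Number of frames that fall inside the sampling window,
    # capped by the actual clip length.
    window_frames = min(int(round(fps * window_s)), total_frames)
    if window_frames <= 0:
        return []
    n = min(n_samples, window_frames)
    # Evenly spaced indices across the window.
    step = window_frames / n
    return [int(i * step) for i in range(n)]

# Example: a 30 fps clip of 450 frames (15 s) is sampled only within
# its first 210 frames (7 s); the remaining 8 s are never seen in training.
idx = first_seconds_frame_indices(fps=30, total_frames=450)
```

In a full pipeline these indices would be used to decode frames (e.g. via a video reader) before feeding them, together with the audio modality, into the CNN; restricting decoding to the window is what shrinks the training set.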
first_indexed 2025-11-15T19:31:42Z
format Final Year Project / Dissertation / Thesis
id utar-3864
institution Universiti Tunku Abdul Rahman
institution_category Local University
last_indexed 2025-11-15T19:31:42Z
publishDate 2020
recordtype eprints
repository_type Digital Repository
spelling utar-3864 2021-01-06T13:14:53Z Personality recognition using composite audio-video features on custom CNN architecture Eng, Zi Jye Q Science (General) Automatic personality recognition is becoming increasingly prominent in the domain of intelligent job matching. Traditionally, individual personality traits are measured through questionnaires carefully designed around personality models such as the Big Five or MBTI. Although the attributes in these models are proven effective, data collection through surveys can yield biased scores due to illusory superiority. Machine-learning-based personality models alleviate these constraints by modelling behavioural cues from videos annotated by personality experts; for example, the ECCV ChaLearn LAP 2016 challenge seeks to recognise and quantify human personality traits. Using variants of CNNs, existing methods attempt to improve model accuracy by adding custom layers and tuning hyperparameters, training on the full ChaLearn LAP 2016 dataset, which is compute-intensive. This project proposes a rapid behavioural-modelling technique for short videos that improves model accuracy and prevents overfitting while minimising the amount of training data needed. The contribution of this work is twofold: (1) a selective sampling technique that uses only the first seven seconds of each video for training, and (2) a personality-trait recognition model that achieves near-optimal performance on a limited dataset. By applying the selective sampling technique and including multiple modalities, the model achieves a test score of 90.30 with nearly six times less training data. 2020-05-14 Final Year Project / Dissertation / Thesis NonPeerReviewed application/pdf http://eprints.utar.edu.my/3864/1/16ACB05282_FYP.pdf Eng, Zi Jye (2020) Personality recognition using composite audio-video features on custom CNN architecture. Final Year Project, UTAR. http://eprints.utar.edu.my/3864/
spellingShingle Q Science (General)
Eng, Zi Jye
Personality recognition using composite audio-video features on custom CNN architecture
title Personality recognition using composite audio-video features on custom CNN architecture
title_full Personality recognition using composite audio-video features on custom CNN architecture
title_fullStr Personality recognition using composite audio-video features on custom CNN architecture
title_full_unstemmed Personality recognition using composite audio-video features on custom CNN architecture
title_short Personality recognition using composite audio-video features on custom CNN architecture
title_sort personality recognition using composite audio-video features on custom cnn architecture
topic Q Science (General)
url http://eprints.utar.edu.my/3864/
http://eprints.utar.edu.my/3864/1/16ACB05282_FYP.pdf