Predicting faking in interviews with automated text analysis and personality

INTRODUCTION/PURPOSE: Some assessment companies are already applying automated text analysis to job interviews. We aimed to investigate whether text-mining software can predict faking in job interviews. To our knowledge, we are the first to examine the predictive validity of text-mining software for detecting faking.

DESIGN/METHOD: 140 students from the University of Western Australia were instructed to behave as applicants. First, participants completed a personality questionnaire. Second, they were given 12 personality-based interview questions to read and prepare. Third, participants were interviewed for approximately 15-20 minutes. Finally, participants were asked to indicate honestly to what extent they had faked verbally (α=.93) and non-verbally (α=.77) during the interview. Subsequently, the interview transcripts (M = 1,755 words) were automatically analysed with text-mining software in terms of personality-related words (using a program called Sentimentics) and 10 other hypothesised linguistic markers (using LIWC2015).

RESULTS: Overall, the results showed very modest relations between verbal faking and the text-mining programs' output. More specifically, verbal faking was related to the linguistic categories 'affect' (r=.21) and 'positive emotions' (r=.21). Altogether, the personality-related words and linguistic markers predicted a small amount of variance in verbal faking (R²=.17). Non-verbal faking was not related to any of the text-mining programs' output. Finally, self-reported personality was not related to any of the faking behaviours.

LIMITATIONS/PRACTICAL IMPLICATIONS: The present study shows that linguistic analyses with text-mining software are unlikely to detect fakers accurately. Interestingly, verbal faking was related only to positive affect markers.

ORIGINALITY/VALUE: These findings call the use of text-analysis software on job interviews into question.

Bibliographic Details
Main Authors: Holtrop, Djurre; Van Breda, Ward; Oostrom, Janneke; De Vries, Reinout
Format: Conference Paper
Published: 2019
Online Access: http://hdl.handle.net/20.500.11937/75891
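
As an illustration of the kind of analysis the abstract describes, the sketch below scores transcripts on dictionary-based word categories, correlates those scores with self-reported verbal faking, and estimates the variance explained by an ordinary least-squares regression (R²). It is a minimal sketch under stated assumptions: the category word lists are tiny stand-ins for the proprietary LIWC2015 lexicons, and all data and function names are invented for illustration, not the authors' materials.

# Minimal, self-contained Python sketch of a dictionary-based text
# analysis pipeline like the one described in the abstract. The
# category word lists and data below are toy stand-ins, not LIWC2015
# content or study data.

import re
import numpy as np

# Toy stand-ins for LIWC-style categories (assumption: illustrative only).
CATEGORIES = {
    "affect": {"happy", "hate", "great", "terrible", "love"},
    "posemo": {"happy", "great", "love", "good", "nice"},
}

def category_proportions(transcript):
    """Share of a transcript's word tokens that fall in each category."""
    tokens = re.findall(r"[a-z']+", transcript.lower())
    n = max(len(tokens), 1)
    return {cat: sum(t in words for t in tokens) / n
            for cat, words in CATEGORIES.items()}

def pearson_r(x, y):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

def regression_r2(X, y):
    """R² of an OLS regression of y on the columns of X plus an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = float((y - y.mean()) @ (y - y.mean()))
    return 1.0 - float(resid @ resid) / ss_tot

if __name__ == "__main__":
    # Fabricated toy data: five short "transcripts" and 1-5 self-report
    # ratings of verbal faking.
    transcripts = [
        "I love teamwork and I am great under pressure",
        "I hate conflict but I always deliver",
        "My previous role was in logistics",
        "I am a happy and nice person, really great at this",
        "I plan carefully and check my work twice",
    ]
    verbal_faking = np.array([4.0, 2.0, 1.0, 5.0, 1.5])

    scores = np.array([[category_proportions(t)[c] for c in CATEGORIES]
                       for t in transcripts])
    for j, cat in enumerate(CATEGORIES):
        print(f"r(verbal faking, {cat}) = {pearson_r(scores[:, j], verbal_faking):+.2f}")
    print(f"regression R² = {regression_r2(scores, verbal_faking):.2f}")

With real data, the actual LIWC2015 category output and all hypothesised markers would be entered as predictors; this sketch only fixes the mechanics of the correlation and R² computations reported in the abstract.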