An audio CAPTCHA to distinguish humans from computers

CAPTCHAs are employed as a security measure to differentiate human users from bots. This paper proposes a new sound-based CAPTCHA that exploits the differences between human and synthetic voices rather than relying on human auditory perception. The user is required to read out a given sentence, selected at random from a specified book. The recorded audio file is then analyzed automatically to judge whether the user is human. The paper describes in detail the design of the new CAPTCHA, the analysis of the audio files, and the choice of the audio frame window function. Experiments are also conducted to fix the critical threshold and the coefficients of three indicators so as to ensure security. The proposed audio CAPTCHA is shown to be accessible to users: a user study found a human success rate of approximately 97%, while the pass rate of attack software using Microsoft SDK 5.1 was only 4%. The experiments also indicated that most human users can solve it in less than 14 seconds, with an average time of only 7.8 seconds.
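The abstract mentions a critical threshold and the coefficients of three indicators, which suggests a linear weighted-score decision rule. A minimal sketch of such a rule, assuming that reading: the indicator names, coefficients, and threshold below are illustrative placeholders, not the values fixed in the paper's experiments.

```python
# Hypothetical sketch of a weighted-indicator decision rule: three acoustic
# indicators (the record does not name them) are combined linearly and the
# score is compared against a critical threshold to classify the speaker
# as human or synthetic. All numeric values are illustrative.

def is_human(indicators, coefficients=(0.5, 0.3, 0.2), threshold=0.6):
    """Return True if the weighted score of the three indicators
    exceeds the critical threshold."""
    if len(indicators) != 3 or len(coefficients) != 3:
        raise ValueError("exactly three indicators and coefficients expected")
    score = sum(c * x for c, x in zip(coefficients, indicators))
    return score > threshold

# A recording scoring high on all three indicators would pass;
# a low-scoring one would be rejected as likely synthetic.
print(is_human((0.9, 0.8, 0.7)))  # True with these illustrative values
print(is_human((0.1, 0.1, 0.1)))  # False with these illustrative values
```

In the paper itself, the coefficients and threshold were tuned experimentally to balance the human pass rate against the synthetic-voice pass rate.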

Bibliographic Details
Main Authors: Haichang, Gao, Liu, Honggang, Yao, Dan, Liu, Xiyang, Aickelin, Uwe
Other Authors: Yu, Fei
Format: Book Section
Published: IEEE Computer Society 2010
Online Access: https://eprints.nottingham.ac.uk/1343/
Repository: Nottingham Research Data Repository
Institution: University of Nottingham Malaysia Campus
Citation: Haichang, Gao, Liu, Honggang, Yao, Dan, Liu, Xiyang and Aickelin, Uwe (2010) An audio CAPTCHA to distinguish humans from computers. In: Third International Symposium on Electronic Commerce and Security, ISECS 2010, Guangzhou, China, 29-31 July 2010. IEEE Computer Society, Los Alamitos, Calif., pp. 265-269. Peer reviewed.
Other contributors: Yu, Fei; Russell, Martha; Rubens, Neil; Zhang, Jun
Conference site: http://isecs2010.gdcc.edu.cn/index.htm