Vision and sensor-based signer-independent framework for Arabic sign language recognition / Ahmad Sami Abd Alkareem Al-Shamayleh


Bibliographic Details
Main Author: Al-Shamayleh, Ahmad Sami Abd Alkareem
Format: Thesis (PhD, Universiti Malaya)
Published: 2020
Subjects: QA76 Computer software; T Technology (General)
Online Access: http://studentsrepo.um.edu.my/14341/
http://studentsrepo.um.edu.my/14341/2/Ahmad_Sami.pdf
http://studentsrepo.um.edu.my/14341/1/Ahmad_Sami.pdf
Description: Hearing and speech impairment is widespread throughout the world. At present, 15 million people in the Arab world have this disability, and about 86% of them live in low- and middle-income countries. Sign language (SL) in the region can be classified into standard Arabic sign language (ArSL) and local Arabic sign languages (LArSL). ArSL is the formal standard and the most widely accepted SL in the Arab world; it also serves as the medium of instruction in schools and universities as well as in television news, shows and programmes. In the absence of usable ArSL recognition (ArSLR) platforms, hearing- and speech-impaired people tend to be isolated and face serious difficulties in communication and interaction. The focus of this thesis is to design and create a new ArSL database and an ArSLR framework covering all ArSLR approaches and ArSL sign forms, based on standard criteria and the consensus of SL experts. Essentially, this thesis proposes two frameworks for modelling and developing ArSLR. The first is a usable, signer-independent approach for recognising static backhand letter and number signs, based on Vision-Based Recognition (VBR) using a smartphone camera. The second is a usable hybrid of VBR and Sensor-Based Recognition (SBR) for developing and evaluating signer-independent continuous ArSLR using a Microsoft Kinect and smart data gloves. To accomplish the research objective, an extensive systematic literature review was conducted to create research taxonomies and to identify research gaps in ArSLR approaches and ArSL sign forms. In the first framework, the input signs were first split into open- and closed-hand signs, and the framework then applied a suitable technique to each class: for closed-hand signs, the discrete wavelet transform was integrated with 1D signature signals.
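The closed-hand pipeline above applies a discrete wavelet transform to a 1D signature signal. As a minimal, hypothetical sketch (not the thesis implementation), the following shows a single-level Haar DWT applied to a toy signature signal, here taken to be the sequence of distances from a hand contour's centroid to its boundary points; the signal values and function names are illustrative assumptions.

```python
def haar_dwt(signal):
    """One level of the Haar DWT: returns (approximation, detail) coefficients."""
    if len(signal) % 2:
        signal = list(signal) + [signal[-1]]  # pad odd-length input by repeating the last sample
    pairs = list(zip(signal[0::2], signal[1::2]))
    approx = [(a + b) / 2 ** 0.5 for a, b in pairs]  # low-pass: overall shape
    detail = [(a - b) / 2 ** 0.5 for a, b in pairs]  # high-pass: fine boundary variation
    return approx, detail

# Toy 1D signature: centroid-to-boundary distances sampled around a contour.
signature = [5.0, 5.2, 6.1, 6.0, 4.9, 4.8, 5.5, 5.6]
approx, detail = haar_dwt(signature)
```

The approximation coefficients summarise the coarse hand shape while the detail coefficients capture finger-scale boundary variation; either set (or statistics over it) can serve as a compact feature vector for the classifier.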
Open-hand signs were differentiated through the distribution of quantised area levels generated from the Run-Length Matrix, and statistical analysis was used to compute the feature descriptors. A multi-classification approach was implemented in the recognition phase. The resulting framework recognises ArSL static backhand finger-number and letter signs with an accuracy of 95.63%. The second framework shows how to jointly exploit VBR and SBR for accurate signer-independent ArSLR. First, a dedicated solution for the combined calibration of the two devices was proposed. Then, a set of novel feature descriptors was computed for both the smart gloves and the Microsoft Kinect. Finally, the proposed feature sets were fed to multi-class support vector machines. Experimental results show accuracies of 99.38%, 99.42%, 99.15% and 99.55% on the Arabic alphabet, Arabic numbers, isolated words, and continuous sentences datasets of the ArSL database, respectively. In conclusion, the created database and the proposed frameworks improve ArSLR performance.
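The open-hand features rest on counting runs of equal quantised levels and summarising their distribution statistically. As a hypothetical sketch in that spirit (not the thesis code; the image and statistic choice are illustrative assumptions), the following builds a horizontal run-length matrix over a small quantised image and derives one common run-length statistic, short-run emphasis:

```python
from collections import defaultdict

def run_length_matrix(image):
    """Count horizontal runs: rlm[(level, run_length)] -> number of such runs."""
    rlm = defaultdict(int)
    for row in image:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1                      # extend the run of equal values
            rlm[(row[i], j - i)] += 1       # record one run of this level/length
            i = j
    return rlm

def short_run_emphasis(rlm):
    """SRE weights short runs heavily: sum(count / length^2) / total run count."""
    total = sum(rlm.values())
    return sum(c / (length ** 2) for (_, length), c in rlm.items()) / total

# Toy image quantised to three area levels (0, 1, 2).
img = [
    [0, 0, 1, 1, 1],
    [2, 2, 2, 2, 0],
    [1, 0, 0, 1, 1],
]
rlm = run_length_matrix(img)
sre = short_run_emphasis(rlm)
```

Statistics such as SRE (and its long-run, level-distribution counterparts) collapse the matrix into a handful of texture descriptors suitable as classifier input.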
Citation: Al-Shamayleh, Ahmad Sami Abd Alkareem (2020). Vision and sensor-based signer-independent framework for Arabic sign language recognition. PhD thesis, Universiti Malaya. http://studentsrepo.um.edu.my/14341/