Sensor fusion of motion-based sign language interpretation with deep learning
Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires collaboration of a team of experts and utilization of high-cost hardware utilities; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that "fuses" six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
| Main Authors: | Lee, Boon Giin; Chong, Teak-Wei; Chung, Wan-Young |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020 |
| Subjects: | deep learning; human-computer interaction; motion sensor; sensor fusion; sign language recognition; wearable computing |
| Online Access: | https://eprints.nottingham.ac.uk/63821/ |
| Full Text (PDF): | https://eprints.nottingham.ac.uk/63821/1/Lee-2020-Sensor-fusion-of-motion-based-sign-.pdf |
| DOI: | http://dx.doi.org/10.3390/s20216256 |
| Citation: | Lee, Boon Giin, Chong, Teak-Wei and Chung, Wan-Young (2020) Sensor fusion of motion-based sign language interpretation with deep learning. Sensors, 20 (21), p. 6256. ISSN 1424-8220 |
| License: | CC BY |
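The abstract describes fusing six fingertip- and hand-mounted IMUs and classifying dynamic ASL gestures with a deep learning model, but this record does not give the paper's architecture or preprocessing. The sketch below is a minimal, hypothetical illustration of the general idea: concatenate the IMU channels per time step, window them, and train a small convolutional-recurrent classifier. The window length, channel count per IMU, layer sizes, and number of gesture classes are all assumptions, not values from the paper.

```python
# Hypothetical sketch of IMU sensor fusion for gesture classification.
# Six IMUs (fingertips + back of hand) are fused by concatenating their
# channels per time step; an LSTM models the temporal dynamics of a gesture.
# All dimensions below are illustrative assumptions, not the paper's values.
import numpy as np
from tensorflow.keras import layers, models

NUM_IMUS = 6          # one per fingertip plus the back of the hand
CHANNELS_PER_IMU = 6  # 3-axis accelerometer + 3-axis gyroscope (assumed)
WINDOW_LEN = 100      # samples per gesture window (assumed)
NUM_CLASSES = 27      # size of the gesture vocabulary (assumed)

def build_model():
    """Classifier over windows of fused (concatenated) IMU channels."""
    inputs = layers.Input(shape=(WINDOW_LEN, NUM_IMUS * CHANNELS_PER_IMU))
    x = layers.Conv1D(64, kernel_size=5, activation="relu")(inputs)  # local motion features
    x = layers.LSTM(128)(x)                                          # temporal dynamics
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Dummy batch of gesture windows: (batch, time, fused IMU channels).
    x = np.random.randn(8, WINDOW_LEN, NUM_IMUS * CHANNELS_PER_IMU).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=(8,))
    model = build_model()
    model.fit(x, y, epochs=1, verbose=0)
    print(model.predict(x).shape)  # (8, NUM_CLASSES)
```

In practice, such a pipeline would also need per-channel normalization and a segmentation step to cut continuous IMU streams into gesture windows; those details are not described in this record.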