Combined Hand Gesture — Speech Model for Human Action Recognition
This study proposes a dynamic hand gesture detection technology to effectively detect dynamic hand gesture areas, and a hand gesture recognition technology to improve the dynamic hand gesture recognition rate. Meanwhile, the corresponding relationship between state sequences in the hand gesture and speech models is considered by integrating speech recognition technology with a multimodal model, thus improving the accuracy of human behavior recognition. The experimental results show that the proposed method effectively improves human behavior recognition accuracy and demonstrates the feasibility of system applications. Experiments also verified that the multimodal gesture-speech model provides superior accuracy compared to the single-modal versions.
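The abstract describes fusing a gesture recognizer and a speech recognizer through corresponding state sequences. As a minimal sketch of the multimodal idea only (not the authors' exact model, which couples state sequences rather than final scores; all action names and score values below are hypothetical), a weighted late fusion of per-modality scores could look like:

```python
# Minimal late-fusion sketch: combine per-action scores from a gesture
# recognizer and a speech recognizer. All names and numbers here are
# illustrative assumptions, not taken from the paper.
ACTIONS = ["wave_hello", "point_there", "stop_now"]

def fuse_scores(gesture_loglik, speech_loglik, w_gesture=0.5):
    """Return (best_action, fused_scores) from a convex combination
    of the two modalities' per-action log-likelihoods."""
    fused = [w_gesture * g + (1.0 - w_gesture) * s
             for g, s in zip(gesture_loglik, speech_loglik)]
    best = max(range(len(fused)), key=fused.__getitem__)
    return ACTIONS[best], fused

# Gesture alone prefers "point_there", speech alone prefers
# "wave_hello"; with more weight on speech, fusion resolves the
# disagreement in favor of the stronger speech evidence.
action, fused = fuse_scores(
    gesture_loglik=[-2.0, -1.0, -3.0],
    speech_loglik=[-0.5, -2.5, -3.0],
    w_gesture=0.4,
)
print(action)  # wave_hello
```

The point of the multimodal combination, as in the paper's experiments, is that either modality alone can misclassify while the fused decision is more reliable.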
Main Authors: | Cheng, Sheng-Tzong; Hsu, Chih-Wei; Li, Jian-Pan |
---|---|
Format: | Online |
Language: | English |
Published: | Molecular Diversity Preservation International (MDPI), 2013 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3892835/ |
id | pubmed-3892835 |
---|---|
recordtype | oai_dc |
spelling | pubmed-3892835 2014-01-16 Combined Hand Gesture — Speech Model for Human Action Recognition. Cheng, Sheng-Tzong; Hsu, Chih-Wei; Li, Jian-Pan. Article. Molecular Diversity Preservation International (MDPI) 2013-12-12. /pmc/articles/PMC3892835/ /pubmed/24351628 http://dx.doi.org/10.3390/s131217098 Text en © 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). |
repository_type | Open Access Journal |
institution_category | Foreign Institution |
institution | US National Center for Biotechnology Information |
building | NCBI PubMed |
collection | Online Access |
language | English |
format | Online |
author | Cheng, Sheng-Tzong; Hsu, Chih-Wei; Li, Jian-Pan |
author_sort | Cheng, Sheng-Tzong |
title | Combined Hand Gesture — Speech Model for Human Action Recognition |
description | This study proposes a dynamic hand gesture detection technology to effectively detect dynamic hand gesture areas, and a hand gesture recognition technology to improve the dynamic hand gesture recognition rate. Meanwhile, the corresponding relationship between state sequences in hand gesture and speech models is considered by integrating speech recognition technology with a multimodal model, thus improving the accuracy of human behavior recognition. The experimental results proved that the proposed method can effectively improve human behavior recognition accuracy and the feasibility of system applications. Experimental results verified that the multimodal gesture-speech model provided superior accuracy when compared to the single modal versions. |
publisher | Molecular Diversity Preservation International (MDPI) |
publishDate | 2013 |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3892835/ |
_version_ | 1612047858471534592 |