GesRec3D: a real-time coded gesture-to-speech system with automatic segmentation and recognition thresholding using dissimilarity measures

A complete microcomputer system, GesRec3D, is described which facilitates the data acquisition, segmentation, learning, and recognition of three-dimensional arm gestures, with application as an Augmentative and Alternative Communication (AAC) aid for people with motor and speech disabilities. The gesture data are acquired from a Polhemus electromagnetic tracker system, with sensors attached to the finger, wrist and elbow of one arm. Coded gestures are linked to user-defined text, which is spoken by a text-to-speech engine integrated into the system. A segmentation method and a classification algorithm are presented that include acceptance/rejection thresholds based on intra-class and inter-class dissimilarity measures. Results for recognition hits, confusion misses and rejection misses are given for two experiments, involving predefined and arbitrary 3D gestures.
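The abstract names the ingredients of the recogniser (per-class acceptance/rejection thresholds derived from intra-class and inter-class dissimilarity measures) but not the algorithm itself. Below is a minimal illustrative sketch of that general idea, not the paper's method: it assumes gestures have already been segmented and reduced to fixed-length feature vectors, uses Euclidean distance as the dissimilarity measure, and the class name `ThresholdedNearestClass`, the 1.5x intra-class margin and the half-way-to-the-nearest-class rule are all invented for illustration.

```python
# Illustrative sketch only -- not the GesRec3D algorithm. Assumes each gesture
# has already been segmented and reduced to a fixed-length feature vector,
# and uses Euclidean distance as the dissimilarity measure.
import numpy as np


class ThresholdedNearestClass:
    """Nearest-prototype classifier with per-class acceptance/rejection
    thresholds built from intra-class and inter-class dissimilarities."""

    def fit(self, examples):
        # examples: dict mapping gesture class label -> list of feature vectors
        self.prototypes = {
            label: np.asarray(vecs, dtype=float).mean(axis=0)
            for label, vecs in examples.items()
        }
        self.thresholds = {}
        for label, proto in self.prototypes.items():
            vecs = np.asarray(examples[label], dtype=float)
            # intra-class dissimilarity: spread of the class's own examples
            intra_max = np.linalg.norm(vecs - proto, axis=1).max()
            # inter-class dissimilarity: distance to the nearest other prototype
            others = [p for l, p in self.prototypes.items() if l != label]
            inter_min = (min(np.linalg.norm(proto - o) for o in others)
                         if others else float("inf"))
            # accept inputs within 1.5x the class's own spread, but never past
            # halfway to the nearest competing class (both factors are arbitrary)
            self.thresholds[label] = min(1.5 * intra_max, 0.5 * inter_min)
        return self

    def classify(self, vec):
        vec = np.asarray(vec, dtype=float)
        label, dist = min(
            ((l, np.linalg.norm(vec - p)) for l, p in self.prototypes.items()),
            key=lambda t: t[1],
        )
        if dist > self.thresholds[label]:
            return None  # rejection: treated as "no gesture" rather than a guess
        return label     # recognition hit, or a confusion miss if the label is wrong
```

In a gesture-to-speech setting like the one described, an accepted class label would then be looked up in the user-defined gesture-to-text table and passed to the text-to-speech engine, while a rejected input would simply produce no speech output; the hits, confusion misses and rejection misses reported in the paper would correspond to the three possible outcomes of `classify`.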


Bibliographic Details
Main Authors: Craven, Michael P., Curtis, K. Mervyn
Other Authors: Camurri, Antonio, Volpe, Gualtiero
Format: Book Section
Published: Springer 2004
Subjects: gesture recognition; dissimilarity; similarity; segmentation; text-to-speech; gesture-to-speech; sign language; 3D tracking; Augmentative and Alternative Communication (AAC); human-computer interaction (HCI)
Online Access: https://eprints.nottingham.ac.uk/1899/
Repository: Nottingham Research Data Repository (eprints)
Institution: University of Nottingham Malaysia Campus
Record ID: nottingham-1899
Status: Peer reviewed
Citation: Craven, Michael P. and Curtis, K. Mervyn (2004) GesRec3D: a real-time coded gesture-to-speech system with automatic segmentation and recognition thresholding using dissimilarity measures. In: Gesture-based communication in human-computer interaction: 5th International Gesture Workshop, GW 2003, Genova, Italy, April 2003, selected revised papers. Lecture Notes in Computer Science (2915). Springer, Berlin, pp. 231-238. ISBN 9783540210726.
DOI: 10.1007/978-3-540-24598-8_21
Publisher link: http://link.springer.com/chapter/10.1007%2F978-3-540-24598-8_21