A multimodal approach to assessing user experiences with agent helpers
The study of agent helpers that use linguistic strategies such as vague language and politeness has often encountered obstacles. One of these is the quality of the agent's voice and its poor fit for delivering these strategies. The first approach in this article compares human and synthesised voices in agents using vague language, analysing a 60,000-word text corpus of participant interviews to investigate differences in user attitudes towards the agents, their voices and their use of vague language. It finds that while vague language from agent instructors is still met with resistance, a human voice yields more positive responses than the synthesised alternatives. The second approach describes the development of a novel multimodal corpus of video and text data that supports multiple analyses of human-agent interaction in agent-instructed assembly tasks, analysing users' spontaneous facial actions and gestures during those tasks. It finds that agents are able to elicit these facial actions and gestures, and posits that further analysis of this nonverbal feedback may help to create more adaptive agents. Finally, the article suggests that these approaches can contribute to a fuller understanding of what it means to interact with software agents.
| Main Authors: | Adolphs, Svenja; Clark, Leigh; Ofemile, Abdulmalik; Rodden, Tom |
|---|---|
| Format: | Article (peer reviewed) |
| Published: | ACM, 2016 |
| Published in: | ACM Transactions on Interactive Intelligent Systems, 6 (4), 29/1-29/31. ISSN 2160-6463 |
| DOI: | 10.1145/2983926 |
| Subjects: | Human-agent interaction; vague language; instruction giving; gestures; facial actions; emotions |
| Online Access: | https://eprints.nottingham.ac.uk/45263/ |