Towards the multimodal unit of meaning: a multimodal corpus-based approach
While there has been a wealth of research that uses textually rendered spoken corpora (i.e. written transcripts of spoken language) and corpus methods to investigate utterance meaning in various contexts, multimodal corpus-based research beyond the text remains rare. Monomodal corpus-based research often limits our description and understanding of the meaning of words and phrases, mainly because meaning is constructed by multiple modes (e.g. speech, gesture and prosody).

Focusing on speech and gesture, the thesis therefore explores multimodal corpus-based approaches to investigating multimodal units of meaning, using recurrent phrases, e.g. “(do) you know/see what I mean”, and gesture as two different yet complementary points of entry. The primary goal is to identify the patterned uses of gesture and speech that can assist in the description of multimodal units of meaning. The Nottingham Multimodal Corpus (250,000 running words) is used as the database for the research.

The main original contributions of the thesis include a new coding scheme for segmenting gestures, two multimodal profiles for a target recurrent speech and gesture pattern, and a new framework for classifying and describing the role of gestures in discourse. Moreover, the thesis has important implications for our understanding of the temporal, cognitive and functional relationship between speech and gesture; it also discusses potential applications, particularly in English language teaching and Human-Computer Interaction. These findings are of value to the methodological and theoretical development of multimodal corpus-based research on units of meaning.
| Main Author: | Chen, Yaoyao |
|---|---|
| Format: | Thesis (University of Nottingham only) |
| Language: | English |
| Published: | 2019 |
| Subjects: | Speech and gesture; English language teaching; Human-computer interaction |
| Online Access: | https://eprints.nottingham.ac.uk/56017/ |
Citation: Chen, Yaoyao (2019) Towards the multimodal unit of meaning: a multimodal corpus-based approach. PhD thesis, University of Nottingham.

Full text (PDF, University of Nottingham only): https://eprints.nottingham.ac.uk/56017/1/Thesis_Yaoyao%20Chen.pdf