Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience
The research area of self-regulated learning (SRL) has shown the importance of the learner’s role in applying cognitive and meta-cognitive strategies to self-regulate their learning. One fundamental step is to self-assess the knowledge acquired, to identify key concepts, and to review the understanding of them...
| Main Authors: | Wesiak, G.; Rizzardini, R.; Amado-Salvatierra, H.; Guetl, Christian; Smadi, M. |
|---|---|
| Other Authors: | Owen Foley |
| Format: | Conference Paper |
| Published: | SCITEPRESS, 2013 |
| Subjects: | e-Assessment; Evaluation Study; Automatic Test Item Generation; Self-Regulated Learning |
| Online Access: | http://hdl.handle.net/20.500.11937/26572 |
| _version_ | 1848752024989990912 |
|---|---|
| author | Wesiak, G. Rizzardini, R. Amado-Salvatierra, H. Guetl, Christian Smadi, M. |
| author2 | Owen Foley |
| author_facet | Owen Foley Wesiak, G. Rizzardini, R. Amado-Salvatierra, H. Guetl, Christian Smadi, M. |
| author_sort | Wesiak, G. |
| building | Curtin Institutional Repository |
| collection | Online Access |
| description | The research area of self-regulated learning (SRL) has shown the importance of the learner’s role in applying cognitive and meta-cognitive strategies to self-regulate their learning. One fundamental step is to self-assess the knowledge acquired, to identify key concepts, and to review the understanding of them. In this paper, we present an experimental setting in Guatemala with students from several countries. The study provides evaluation results from the use of an enhanced automatic question creation tool (EAQC) in a self-regulated online learning environment. In addition to assessment quality, motivational and emotional aspects, usability, and task value are addressed. The EAQC extracts concepts from a given text and automatically creates different types of questions based either on the self-generated concepts or on concepts supplied by the user. The findings show comparable quality of automatically and human-generated concepts, while questions created by a teacher were in part rated higher than computer-generated questions. Whereas difficulty and terminology of questions were rated equally, teacher questions were considered more relevant and more meaningful. Therefore, future improvements should focus especially on these aspects of question quality. |
| first_indexed | 2025-11-14T08:02:03Z |
| format | Conference Paper |
| id | curtin-20.500.11937-26572 |
| institution | Curtin University Malaysia |
| institution_category | Local University |
| last_indexed | 2025-11-14T08:02:03Z |
| publishDate | 2013 |
| publisher | SCITEPRESS |
| recordtype | eprints |
| repository_type | Digital Repository |
| spelling | curtin-20.500.11937-265722023-02-08T05:21:22Z Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience Wesiak, G. Rizzardini, R. Amado-Salvatierra, H. Guetl, Christian Smadi, M. Owen Foley Maria Teresa Restivo James Uhomoibhi Markus Helfert e-Assessment Evaluation Study Automatic Test Item Generation Self-Regulated Learning The research area of self-regulated learning (SRL) has shown the importance of the learner’s role in applying cognitive and meta-cognitive strategies to self-regulate their learning. One fundamental step is to self-assess the knowledge acquired, to identify key concepts, and to review the understanding of them. In this paper, we present an experimental setting in Guatemala with students from several countries. The study provides evaluation results from the use of an enhanced automatic question creation tool (EAQC) in a self-regulated online learning environment. In addition to assessment quality, motivational and emotional aspects, usability, and task value are addressed. The EAQC extracts concepts from a given text and automatically creates different types of questions based either on the self-generated concepts or on concepts supplied by the user. The findings show comparable quality of automatically and human-generated concepts, while questions created by a teacher were in part rated higher than computer-generated questions. Whereas difficulty and terminology of questions were rated equally, teacher questions were considered more relevant and more meaningful. Therefore, future improvements should focus especially on these aspects of question quality. 2013 Conference Paper http://hdl.handle.net/20.500.11937/26572 10.5220/0004387803510360 SCITEPRESS restricted |
| spellingShingle | e-Assessment Evaluation Study Automatic Test Item Generation Self-Regulated Learning Wesiak, G. Rizzardini, R. Amado-Salvatierra, H. Guetl, Christian Smadi, M. Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience |
| title | Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience |
| title_full | Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience |
| title_fullStr | Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience |
| title_full_unstemmed | Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience |
| title_short | Automatic Test Item Creation in Self-Regulated Learning: Evaluating Quality of Questions in a Latin American Experience |
| title_sort | automatic test item creation in self-regulated learning: evaluating quality of questions in a latin american experience |
| topic | e-Assessment Evaluation Study Automatic Test Item Generation Self-Regulated Learning |
| url | http://hdl.handle.net/20.500.11937/26572 |