Assessing vocabulary size through multiple-choice formats: issues with guessing and sampling rates
In most tests of vocabulary size, knowledge is assessed through multiple-choice formats. Despite advantages such as ease of scoring, multiple-choice tests (MCT) are accompanied by problems. One of the more central issues has to do with guessing and the presence of other construct-irrelevant strategies that can lead to overestimation of scores. A further challenge when designing vocabulary size tests is that of sampling rate. How many words constitute a representative sample of the underlying population of words that the test is intended to measure? This paper addresses these two issues through a case study based on data from a recent and increasingly used MCT of vocabulary size: the Vocabulary Size Test. Using a criterion-related validity approach, our results show that for multiple-choice items sampled from this test, there is a discrepancy between the test scores and the scores obtained from the criterion measure, and that a higher sampling rate would be needed in order to better represent knowledge of the underlying population of words. We offer two main interpretations of these results, and discuss their implications for the construction and use of vocabulary size tests.
| Main Authors: | Gyllstad, Henrik; Vilkaitė, Laura; Schmitt, Norbert |
|---|---|
| Format: | Article (peer reviewed) |
| Published in: | ITL - International Journal of Applied Linguistics, 166 (2), pp. 278-306. ISSN 1783-1490 |
| Published: | John Benjamins Publishing, 2015 |
| Subjects: | Vocabulary Size; Multiple-Choice Test; Guessing; Sampling Rate; Assessment; Validation; Testing; Criterion-Related Validity |
| DOI: | doi:10.1075/itl.166.2.04gyl |
| Online Access: | https://eprints.nottingham.ac.uk/32284/ |
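To make the two issues in the abstract concrete, the sketch below (not taken from the paper) pairs the textbook correction-for-guessing formula for k-option multiple-choice items with a simple sampling-rate extrapolation of the kind commonly used with the Vocabulary Size Test. The specific numbers (a 140-item test sampling 14,000 word families, four options per item, a raw score of 92) are illustrative assumptions only.

```python
# Illustrative sketch only: not the authors' method, just the standard
# correction-for-guessing formula and a simple sampling-rate extrapolation.

def corrected_score(right: int, wrong: int, options: int = 4) -> float:
    """Classic correction for guessing: R - W / (k - 1) for k-option items.
    Assumes all wrong answers come from blind guessing, an oversimplification
    the guessing literature itself acknowledges."""
    return right - wrong / (options - 1)

def estimated_vocab_size(score: float, items: int, population: int) -> float:
    """Extrapolate from a sampled test score to the underlying word population.
    With 140 items sampled from 14,000 word families, each item 'stands for'
    100 families, so the estimate is score * 100."""
    sampling_rate = population / items
    return score * sampling_rate

if __name__ == "__main__":
    raw = 92            # hypothetical raw score on a 140-item test
    wrong = 140 - raw   # assuming every item was attempted
    adjusted = corrected_score(raw, wrong)
    print(estimated_vocab_size(raw, 140, 14_000))       # naive estimate: 9200.0
    print(estimated_vocab_size(adjusted, 140, 14_000))  # guess-adjusted: 7600.0
```

The gap between the two printed estimates shows why guessing can inflate size estimates, and the factor of 100 per item shows why the choice of sampling rate matters for how well the test represents the underlying population of words.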