Rating crowdsourced annotations: evaluating contributions of variable quality and completeness

Crowdsourcing has become a popular means to acquire data about the Earth and its environment inexpensively, but the data-sets obtained are typically imperfect and of unknown quality. Two common imperfections with crowdsourced data are contributions from cheats or spammers and missing cases. The effect of these two imperfections on a method to evaluate the accuracy of crowdsourced data via a latent class model was explored. Using simulated and real data-sets, it was shown that the method is able to derive useful information on the accuracy of crowdsourced data even when the degree of imperfection was very high. The practical potential of this ability to obtain accuracy information within the geospatial sciences and the realm of Digital Earth applications was indicated with reference to an evaluation of building damage maps produced by multiple bodies after the 2010 earthquake in Haiti. Critically, the method allowed data-sets to be ranked in approximately the correct order of accuracy, which could help ensure that the most appropriate data-sets are used.

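The record does not spell out the latent class approach the abstract refers to, so the sketch below is a hypothetical illustration rather than the paper's implementation: a two-class latent class model in the spirit of Dawid and Skene (1979), fitted by EM, in which missing cases are simply skipped and each annotator's estimated sensitivity and specificity expose coin-flipping spammers and let contributors be ranked. The function name, parameters, and demo data are all illustrative assumptions.

import numpy as np

def latent_class_em(labels, n_iter=100, tol=1e-6):
    # labels: (n_items, n_annotators) array of 0/1 annotations; -1 marks a missing case.
    n_items, n_annot = labels.shape
    observed = labels >= 0

    # Initialise the per-item posterior P(true class = 1) from a majority vote.
    vote = np.where(observed, labels, 0).sum(1) / np.maximum(observed.sum(1), 1)
    q = np.clip(vote, 0.01, 0.99)
    prior = q.mean()
    sens = np.full(n_annot, 0.8)   # P(annotator says 1 | true class is 1)
    spec = np.full(n_annot, 0.8)   # P(annotator says 0 | true class is 0)

    for _ in range(n_iter):
        # E-step: posterior over the latent true class of each item,
        # accumulating log-likelihood contributions from observed labels only.
        log_p1 = np.full(n_items, np.log(prior))
        log_p0 = np.full(n_items, np.log(1 - prior))
        for j in range(n_annot):
            obs = observed[:, j]
            y = labels[obs, j]
            log_p1[obs] += np.where(y == 1, np.log(sens[j]), np.log(1 - sens[j]))
            log_p0[obs] += np.where(y == 0, np.log(spec[j]), np.log(1 - spec[j]))
        q_new = 1.0 / (1.0 + np.exp(log_p0 - log_p1))

        # M-step: re-estimate the class prior and each annotator's accuracy
        # as posterior-weighted agreement with the inferred true class.
        prior = q_new.mean()
        for j in range(n_annot):
            obs = observed[:, j]
            y = labels[obs, j]
            w1, w0 = q_new[obs], 1.0 - q_new[obs]
            sens[j] = np.clip((w1 * (y == 1)).sum() / max(w1.sum(), 1e-9), 1e-3, 1 - 1e-3)
            spec[j] = np.clip((w0 * (y == 0)).sum() / max(w0.sum(), 1e-9), 1e-3, 1 - 1e-3)

        if np.abs(q_new - q).max() < tol:
            q = q_new
            break
        q = q_new
    return q, sens, spec

# Demo: three honest annotators of varying skill plus one coin-flipping
# "spammer", with 30% of cases missing at random.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, 500)
acc = np.array([0.95, 0.85, 0.75, 0.50])
labels = np.where(rng.random((500, 4)) < acc, truth[:, None], 1 - truth[:, None])
labels[rng.random((500, 4)) < 0.3] = -1
q, sens, spec = latent_class_em(labels)
print("estimated accuracy per annotator:", (sens + spec) / 2)
print("ranked best to worst:", np.argsort(-(sens + spec) / 2))

On simulated data of this kind the estimated accuracies typically recover the annotators' true ordering despite the high rate of missing cases, which illustrates the ranking ability the abstract highlights; this sketch should not be read as reproducing the paper's results.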

Bibliographic Details
Main Author: Foody, Giles M.
Format: Article (peer reviewed)
Published in: International Journal of Digital Earth. Taylor & Francis, 15 November 2013. ISSN 1753-8947
DOI: 10.1080/17538947.2013.839008
Online Access: https://eprints.nottingham.ac.uk/2448/