Potential evaluation of different types of images and their combination for the classification of GIS objects cropland and grassland
| Main Authors: | , , |
|---|---|
| Format: | Conference Paper |
| Published: | International Society for Photogrammetry and Remote Sensing, 2012 |
| Online Access: | http://blackbridge.com/rapideye/upload/papers/2011_Recio_et_al_Hannover_Workshop.pdf http://hdl.handle.net/20.500.11937/9654 |
| Summary: | Many publications evaluate the performance of different classification algorithms with respect to agricultural classes. In contrast, this paper focuses on the potential of different imagery for the classification of the two most frequent classes: cropland and grassland. For our experiments, three categories of imagery are examined: high-resolution aerial images, high-resolution RapidEye satellite images, and medium-resolution Disaster Monitoring Constellation (DMC) satellite images. Object-based image classification, one of the most reliable methods for the automatic updating and evaluation of land-use geospatial databases, is chosen. The object boundaries are taken from a GIS database, and each object is described by a set of image-based features. Spectral, textural, and structural (semivariogram-derived) features are extracted from images of different dates and sensors. During classification, a supervised decision-tree induction algorithm is applied. To evaluate the potential of the different images, all possible combinations of the available image data are tested during classification. The results show that the best land-use classification performance is obtained with RapidEye data (overall accuracy of 90%), with slight accuracy increases when this imagery is combined with additional image data (overall accuracy of 92%). |
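The evaluation protocol described in the summary, training a classifier on every non-empty combination of the three image sources and comparing overall accuracies, can be sketched as follows. This is an illustrative outline only: the accuracy values inside `evaluate_combination` are placeholders loosely echoing the reported figures, and the paper's actual features, training data, and decision-tree implementation are not available here. Only the enumeration of image-source combinations reflects the described method.

```python
from itertools import combinations

# The three image categories examined in the paper.
SOURCES = ["aerial", "RapidEye", "DMC"]

def evaluate_combination(sources):
    """Placeholder for training and testing a supervised decision tree
    on features extracted from the given image sources. The real study
    would fit the tree on object-based features and score it against
    reference land-use labels; here we return a dummy accuracy."""
    # Hypothetical per-source accuracies (not from the paper's tables).
    base = {"aerial": 0.80, "RapidEye": 0.90, "DMC": 0.78}
    best = max(base[s] for s in sources)
    # Combining sources adds a small illustrative bonus, capped at 0.92.
    return min(best + 0.01 * (len(sources) - 1), 0.92)

def all_combinations(sources):
    """Enumerate every non-empty subset of the image sources."""
    for r in range(1, len(sources) + 1):
        for combo in combinations(sources, r):
            yield combo

results = {combo: evaluate_combination(combo)
           for combo in all_combinations(SOURCES)}
for combo, acc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{'+'.join(combo):25s} accuracy={acc:.2f}")
```

With three sources this tests 2³ − 1 = 7 combinations, which matches the exhaustive design described in the abstract.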
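The summary also mentions structural features derived from semivariograms. The paper does not specify its exact formulation, but a common starting point is the standard empirical (Matheron) semivariogram estimator, shown here for a 1-D transect of pixel values as a minimal sketch:

```python
def empirical_semivariogram(values, max_lag):
    """Empirical semivariogram of a 1-D transect of pixel values:
    gamma(h) = sum_i (z[i] - z[i+h])^2 / (2 * N(h)),
    where N(h) is the number of pixel pairs at lag h."""
    gammas = {}
    for h in range(1, max_lag + 1):
        diffs = [(values[i] - values[i + h]) ** 2
                 for i in range(len(values) - h)]
        gammas[h] = sum(diffs) / (2 * len(diffs))
    return gammas

# Example: a linear ramp of pixel values yields gamma(h) = h**2 / 2.
print(empirical_semivariogram([0, 1, 2, 3, 4], max_lag=2))
```

Features such as the sill, range, or slope of the resulting curve are what object-based approaches typically summarize per GIS object; homogeneous grassland parcels tend toward flatter semivariograms than structured cropland.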