Summary: SVMs are attractive for the classification of remotely sensed data, with some claims that the method is insensitive to the dimensionality of the data and so does not require a dimensionality reduction analysis in pre-processing. Here, a series of classification analyses with two hyperspectral sensor data sets reveals that the accuracy of a classification by an SVM does vary as a function of the number of features used. Critically, it is shown that the accuracy of a classification may decline significantly (at the 0.05 level of statistical significance) with the addition of features, especially if a small training sample is used. This highlights a dependency of the accuracy of classification by an SVM on the dimensionality of the data, and so the potential value of undertaking a feature selection analysis prior to classification. Additionally, it is demonstrated that feature selection may still be useful even when a large training sample is available. For example, the accuracy derived from the use of a small number of features may be non-inferior (at the 0.05 level of significance) to that derived from the use of a larger feature set, providing potential advantages in relation to issues such as data storage and computational processing costs. Feature selection may, therefore, be a valuable analysis to include in pre-processing operations for classification by an SVM.
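To illustrate the kind of workflow the summary describes (feature selection in pre-processing, followed by SVM classification with varying numbers of features), the sketch below uses scikit-learn with synthetic data standing in for hyperspectral bands and a univariate F-score ranking as the feature selection step. It is only a minimal illustration under those assumptions, not the experimental procedure or feature selection method used in the study.

```python
# Minimal sketch: feature selection prior to SVM classification,
# training on progressively larger feature subsets and reporting accuracy.
# Assumes scikit-learn; the synthetic data below is a stand-in for
# hyperspectral band values, not the study's sensor data sets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 65 "bands", only some of which are informative.
X, y = make_classification(n_samples=600, n_features=65, n_informative=10,
                           n_redundant=5, n_classes=4, random_state=0)

# Deliberately small training sample, as highlighted in the summary.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=100, stratify=y, random_state=0)

# Train an SVM on increasingly large feature subsets and record accuracy.
for k in (5, 10, 20, 40, 65):
    selector = SelectKBest(f_classif, k=k).fit(X_train, y_train)
    clf = SVC(kernel="rbf", C=10, gamma="scale")
    clf.fit(selector.transform(X_train), y_train)
    acc = accuracy_score(y_test, clf.predict(selector.transform(X_test)))
    print(f"{k:2d} features: accuracy = {acc:.3f}")
```

In practice, the accuracies produced at each subset size would then be compared with an appropriate significance or non-inferiority test before deciding how many features to retain.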
|